.. _important_notes:

Important notes
===============

This section provides information about security and corruption issues.

.. _broken_validator:

Pre-1.1.4 potential data corruption issue
-----------------------------------------

A data corruption bug was discovered in borg check --repair, see issue #3444.

This is a 1.1.x regression, releases < 1.1 (e.g. 1.0.x) are not affected.

To avoid data loss, you must not run borg check --repair using an unfixed
version of borg 1.1.x. The first official release that has the fix is 1.1.4.
Package maintainers may have applied the fix to updated packages of 1.1.x
(x<4) though, see the package maintainer's package changelog to make sure.

If you never had missing item metadata chunks, the bug has not affected you
even if you did run borg check --repair with an unfixed version.

When borg check --repair tried to repair corrupt archives that miss item
metadata chunks, the resync to valid metadata in still present item metadata
chunks malfunctioned. This was due to a broken validator that considered all
(even valid) item metadata as invalid. As they were considered invalid, borg
discarded them. Practically, that means the affected files, directories or
other fs objects were discarded from the archive.

Due to the malfunction, the process was extremely slow, but if you let it
complete, borg would have created a "repaired" archive that has lost a lot of
items.

If you interrupted borg check --repair because it was so strangely slow
(killing borg somehow, e.g. Ctrl-C), the transaction was rolled back and no
corruption occurred.

The log message indicating the precondition for the bug triggering looks
like:

  item metadata chunk missing [chunk: 001056_bdee87d...a3e50d]

If you never had that in your borg check --repair runs, you're not affected.

But if you're unsure or you actually have seen that, better check your
archives. By just using "borg list repo::archive" you can see if all expected
filesystem items are listed.

.. _tam_vuln:

Pre-1.0.9 manifest spoofing vulnerability (CVE-2016-10099)
----------------------------------------------------------

A flaw in the cryptographic authentication scheme in Borg allowed an attacker
to spoof the manifest. The attack requires an attacker to be able to

1. insert files (with no additional headers) into backups
2. gain write access to the repository

This vulnerability does not disclose plaintext to the attacker, nor does it
affect the authenticity of existing archives.

The vulnerability allows an attacker to create a spoofed manifest (the list
of archives). Creating plausible fake archives may be feasible for small
archives, but is unlikely for large archives.

The fix adds a separate authentication tag to the manifest. For compatibility
with prior versions this authentication tag is *not* required by default for
existing repositories. Repositories created with 1.0.9 and later require it.

Steps you should take:

1. Upgrade all clients to 1.0.9 or later.
2. Run ``borg upgrade --tam <repository>`` *on every client* for *each*
   repository.
3. This will list all archives, including archive IDs, for easy comparison
   with your logs.
4. Done.
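Step 2, as a minimal shell sketch (the repository path is a placeholder; run
this once per repository, on each client)::

    borg upgrade --tam /path/to/repo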
Prior versions can access and modify repositories with this measure enabled,
however, to 1.0.9 or later their modifications are indiscernible from an
attack and will raise an error until the below procedure is followed. We are
aware that this can be annoying in some circumstances, but don't see a way to
fix the vulnerability otherwise.

In case a version prior to 1.0.9 is used to modify a repository where the
above procedure was completed, and now you get an error message from other
clients:

1. ``borg upgrade --tam --force <repository>`` once with *any* client
   suffices.

This attack is mitigated by:

- Noting/logging ``borg list``, ``borg info``, or ``borg create --stats``,
  which contain the archive IDs.

We are not aware of others having discovered, disclosed or exploited this
vulnerability.

Vulnerability timeline:

* 2016-11-14: Vulnerability and fix discovered during review of cryptography
  by Marian Beermann (@enkore)
* 2016-11-20: First patch
* 2016-12-20: Released fixed version 1.0.9
* 2017-01-02: CVE was assigned
* 2017-01-15: Released fixed version 1.1.0b3 (fix was previously only
  available from source)

.. _attic013_check_corruption:

Pre-1.0.9 potential data loss
-----------------------------

If you have archives in your repository that were made with attic <= 0.13
(and later migrated to borg), running borg check would report errors in these
archives. See issue #1837.

The reason for this is an invalid (and useless) metadata key that was always
added due to a bug in these old attic versions.

If you run borg check --repair, things escalate quickly: all archive items
with invalid metadata will be killed. Due to that attic bug, that means all
items in all archives made with these old attic versions.

Pre-1.0.4 potential repo corruption
-----------------------------------

Some external errors (like network or disk I/O errors) could lead to
corruption of the backup repository due to issue #1138.

A sign that this happened is if "E" status was reported for a file that can
not be explained by problems with the source file. If you still have logs
from "borg create -v --list", you can check for "E" status.

Here is what could cause corruption and what you can do now:

1) I/O errors (e.g. repo disk errors) while writing data to repo.

   This could lead to corrupted segment files.

   Fix::

     # check for corrupt chunks / segments:
     borg check -v --repository-only REPO

     # repair the repo:
     borg check -v --repository-only --repair REPO

     # make sure everything is fixed:
     borg check -v --repository-only REPO

2) Unreliable network / unreliable connection to the repo.

   This could lead to archive metadata corruption.

   Fix::

     # check for corrupt archives:
     borg check -v --archives-only REPO

     # delete the corrupt archives:
     borg delete --force REPO::CORRUPT_ARCHIVE

     # make sure everything is fixed:
     borg check -v --archives-only REPO

3) In case you want to do more intensive checking: the best check that
   everything is ok is to run a dry-run extraction::

     borg extract -v --dry-run REPO::ARCHIVE

.. _changelog:

Changelog
=========

Version 1.1.5 (2018-04-01)
--------------------------

Compatibility notes:

- When upgrading from borg 1.0.x to 1.1.x, please note:

  - read all the compatibility notes for 1.1.0*, starting from 1.1.0b1.
  - borg upgrade: you do not need to and you also should not run it.
  - borg might ask some security-related questions once after upgrading.
    You can answer them either manually or via environment variable.
    One known case is if you use unencrypted repositories, then it will ask
    about an unknown unencrypted repository one time.
  - your first backup with 1.1.x might be significantly slower (it might
    completely read, chunk, hash a lot of files) - this is due to the
    --files-cache mode change (and happens every time you change mode).
    You can avoid the one-time slowdown by using the pre-1.1.0rc4-compatible
    mode (but that is less safe for detecting changed files than the
    default). See the --files-cache docs for details (see also the sketch
    after these notes).

- 1.1.5 changes:

  - require msgpack-python >= 0.4.6 and < 0.5.0.
    0.5.0+ dropped python 3.4 testing and also caused some other issues
    because the python package was renamed to msgpack and emitted some
    FutureWarning.
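The two --files-cache modes mentioned above, as a hedged sketch (``REPO`` and
the paths are placeholders; ``mtime,size,inode`` is assumed here to be the
pre-1.1.0rc4-compatible mode - check the --files-cache docs before relying on
it)::

    # 1.1.x default mode (safer change detection, ctime-based):
    borg create --files-cache=ctime,size,inode REPO::archive-1 ~/data

    # pre-1.1.0rc4-compatible mode (avoids the one-time slowdown, less safe):
    borg create --files-cache=mtime,size,inode REPO::archive-2 ~/data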
Fixes:

- create --list: fix that it was never showing M status, #3492
- create: fix timing for first checkpoint (read files cache early, init
  checkpoint timer after that), see #3394
- extract: set rc=1 when extracting damaged files with all-zero replacement
  chunks or with size inconsistencies, #3448
- diff: consider an empty file as different to a non-existing file, #3688
- files cache: improve exception handling, #3553
- ignore exceptions in scandir_inorder() caused by an implicit stat(),
  also remove unneeded sort, #3545
- fixed tab completion problem where a space is always added after path even
  when it shouldn't be
- build: do .h file content checks in binary mode, fixes build issue for
  non-ascii header files on pure-ascii locale platforms, #3544 #3639
- borgfs: fix patterns/paths processing, #3551
- config: add some validation, #3566
- repository config: add validation for max_segment_size, #3592
- set cache previous_location on load instead of save
- remove platform.uname() call which caused library mismatch issues, #3732
- add exception handler around deprecated platform.linux_distribution() call
- use same datetime object for {now} and {utcnow}, #3548

New features:

- create: implement --stdin-name, #3533
- add chunker_params to borg archive info (--json)
- BORG_SHOW_SYSINFO=no to hide system information from exceptions

Other changes:

- updated zsh completions for borg 1.1.4
- files cache related code cleanups
- be more helpful when parsing invalid --pattern values, #3575
- be more clear in secure-erase warning message, #3591
- improve getpass user experience, #3689
- docs build: unicode problem fixed when using a py27-based sphinx
- docs:

  - security: explicitly note what happens OUTSIDE the attack model
  - security: add note about combining compression and encryption
  - security: describe chunk size / proximity issue, #3687
  - quickstart: add note about permissions, borg@localhost, #3452
  - quickstart: add introduction to repositories & archives, #3620
  - recreate --recompress: add missing metavar, clarify description, #3617
  - improve logging docs, #3549
  - add an example for --pattern usage, #3661
  - clarify path semantics when matching, #3598
  - link to offline documentation from README, #3502
  - add docs on how to verify a signed release with GPG, #3634
  - chunk seed is generated per repository (not: archive)
  - better formatting of CPU usage documentation, #3554
  - extend append-only repo rollback docs, #3579

- tests:

  - fix erroneously skipped zstd compressor tests, #3606
  - skip a test if argparse is broken, #3705

- vagrant:

  - xenial64 box now uses username 'vagrant', #3707
  - move cleanup steps to fs_init, #3706
  - the boxcutter wheezy boxes are 404, use local ones
  - update to Python 3.5.5 (for binary builds)

Version 1.1.4 (2017-12-31)
--------------------------

Compatibility notes:

- When upgrading from borg 1.0.x to 1.1.x, please note:

  - read all the compatibility notes for 1.1.0*, starting from 1.1.0b1.
  - borg upgrade: you do not need to and you also should not run it.
  - borg might ask some security-related questions once after upgrading.
    You can answer them either manually or via environment variable (see the
    sketch after these notes). One known case is if you use unencrypted
    repositories, then it will ask about an unknown unencrypted repository
    one time.
  - your first backup with 1.1.x might be significantly slower (it might
    completely read, chunk, hash a lot of files) - this is due to the
    --files-cache mode change (and happens every time you change mode).
    You can avoid the one-time slowdown by using the pre-1.1.0rc4-compatible
    mode (but that is less safe for detecting changed files than the
    default). See the --files-cache docs for details.

- borg 1.1.4 changes:

  - zstd compression is new in borg 1.1.4, older borg can't handle it.
  - new minimum requirements for the compression libraries - if the required
    versions (header and lib) can't be found at build time, bundled code will
    be used:

    - added requirement: libzstd >= 1.3.0 (bundled: 1.3.2)
    - updated requirement: liblz4 >= 1.7.0 / r129 (bundled: 1.8.0)
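A sketch of answering the unencrypted-repository question via environment
variable, as mentioned in the notes above (``REPO`` is a placeholder; only
set this after verifying the repository really is the one you expect)::

    BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes borg create REPO::archive ~/data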
Fixes:

- check: data corruption fix: fix for borg check --repair malfunction, #3444.
  See the more detailed notes close to the top of this document.
- delete: also delete security dir when deleting a repo, #3427
- prune: fix building the "borg prune" man page, #3398
- init: use given --storage-quota for local repo, #3470
- init: properly quote repo path in output
- fix startup delay with dns-only own fqdn resolving, #3471

New features:

- added zstd compression. try it! (see the sketch below)
- added placeholder {reverse-fqdn} for fqdn in reverse notation
- added BORG_BASE_DIR environment variable, #3338
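A quick sketch of trying zstd (``REPO`` and the path are placeholders; the
compression level is given as ``zstd,N``, and plain ``zstd`` is assumed to
pick the default level)::

    # default zstd level:
    borg create --compression zstd REPO::archive-1 ~/data

    # higher compression, more CPU time:
    borg create --compression zstd,10 REPO::archive-2 ~/data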
Other changes:

- list help topics when invalid topic is requested
- fix lz4 deprecation warning, requires lz4 >= 1.7.0 (r129)
- add parens for C preprocessor macro argument usages (did not cause
  malfunction)
- exclude broken pytest 3.3.0 release
- updated fish/bash completions
- init: more clear exception messages for borg create, #3465
- docs:

  - add auto-generated docs for borg config
  - don't generate HTML docs page for borgfs, #3404
  - docs update for lz4 b2 zstd changes
  - add zstd to compression help, readme, docs
  - update requirements and install docs about bundled lz4 and zstd

- refactored build of the compress and crypto.low_level extensions, #3415:

  - move some lib/build related code to setup_{zstd,lz4,b2}.py
  - bundle lz4 1.8.0 (requirement: >= 1.7.0 / r129)
  - bundle zstd 1.3.2 (requirement: >= 1.3.0)
  - blake2 was already bundled
  - rename BORG_LZ4_PREFIX env var to BORG_LIBLZ4_PREFIX for better
    consistency: we also have BORG_LIBB2_PREFIX and BORG_LIBZSTD_PREFIX now.
  - add prefer_system_lib* = True settings to setup.py - by default the build
    will prefer a shared library over the bundled code, if library and
    headers can be found and meet the minimum requirements.

Version 1.1.3 (2017-11-27)
--------------------------

Fixes:

- Security Fix for CVE-2017-15914: Incorrect implementation of access
  controls allows remote users to override repository restrictions in Borg
  servers. A user able to access a remote Borg SSH server is able to
  circumvent access controls post-authentication.
  Affected releases: 1.1.0, 1.1.1, 1.1.2. Releases 1.0.x are NOT affected.
- crc32: deal with unaligned buffer, add tests - this broke borg on older ARM
  CPUs that can not deal with unaligned 32bit memory accesses and raise a bus
  error in such cases. the fix might also improve performance on some CPUs as
  all 32bit memory accesses by the crc32 code are properly aligned now. #3317
- mount: fixed support of --consider-part-files and do not show .borg_part_N
  files by default in the mounted FUSE filesystem. #3347
- fixed cache/repo timestamp inconsistency message, highlight that
  information is obtained from security dir (deleting the cache will not
  bypass this error in case the user knows this is a legitimate repo).
- borgfs: don't show sub-command in borgfs help, #3287
- create: show an error when --dry-run and --stats are used together, #3298

New features:

- mount: added exclusion group options and paths, #2138

  Reused some code to support similar options/paths as borg extract offers -
  making good use of these to only mount a smaller subset of dirs/files can
  speed up mounting a lot and also will consume way less memory.

  borg mount [options] repo_or_archive mountpoint path [paths...]

  paths: you can just give some "root paths" (like for borg extract) to only
  partially populate the FUSE filesystem.

  new options: --exclude[-from], --pattern[s-from], --strip-components

- create/extract: support st_birthtime on platforms supporting it, #3272
- add "borg config" command for querying/setting/deleting config values,
  #3304

Other changes:

- clean up and simplify packaging (only package committed files, do not
  install .c/.h/.pyx files)
- docs:

  - point out tuning options for borg create, #3239
  - add instructions for using ntfsclone, zerofree, #81
  - move image backup-related FAQ entries to a new page
  - clarify key aliases for borg list --format, #3111
  - mention break-lock in checkpointing FAQ entry, #3328
  - document sshfs rename workaround, #3315
  - add FAQ about removing files from existing archives
  - add FAQ about different prune policies
  - usage and man page for borgfs, #3216
  - clarify create --stats duration vs. wall time, #3301
  - clarify encrypted key format for borg key export, #3296
  - update release checklist about security fixes
  - document good and problematic option placements, fix examples, #3356
  - add note about using --nobsdflags to avoid speed penalty related to
    bsdflags, #3239
  - move most of support section to www.borgbackup.org

Version 1.1.2 (2017-11-05)
--------------------------

Fixes:

- fix KeyError crash when talking to borg server < 1.0.7, #3244
- extract: set bsdflags last (include immutable flag), #3263
- create: don't do stat() call on excluded-norecurse directory, fix exception
  handling for stat() call, #3209
- create --stats: do not count data volume twice when checkpointing, #3224
- recreate: move chunks_healthy when excluding hardlink master, #3228
- recreate: get rid of chunks_healthy when rechunking (does not match), #3218
- check: get rid of already existing not matching chunks_healthy metadata,
  #3218
- list: fix stdout broken pipe handling, #3245
- list/diff: remove tag-file options (not used), #3226

New features:

- bash, zsh and fish shell auto-completions, see scripts/shell_completions/
- added BORG_CONFIG_DIR env var, #3083

Other changes:

- docs:

  - clarify using a blank passphrase in keyfile mode
  - mention "!" (exclude-norecurse) type in "patterns" help
  - document to first heal before running borg recreate to re-chunk stuff,
    because that will have to get rid of chunks_healthy metadata.
  - more than 23 is not supported for CHUNK_MAX_EXP, #3115
  - borg does not respect nodump flag by default any more
  - clarify same-filesystem requirement for borg upgrade, #2083
  - update / rephrase cygwin / WSL status, #3174
  - improve docs about --stats, #3260

- vagrant: openindiana new clang package

Already contained in 1.1.1 (last minute fix):

- arg parsing: fix fallback function, refactor, #3205. This is a fixup for
  #3155, which was broken on at least python <= 3.4.2.
Version 1.1.1 (2017-10-22) -------------------------- Compatibility notes: - The deprecated --no-files-cache is not a global/common option any more, but only available for borg create (it is not needed for anything else). Use --files-cache=disabled instead of --no-files-cache. - The nodump flag ("do not backup this file") is not honoured any more by default because this functionality (esp. if it happened by error or unexpected) was rather confusing and unexplainable at first to users. If you want that "do not backup NODUMP-flagged files" behaviour, use: borg create --exclude-nodump ... Fixes: - borg recreate: correctly compute part file sizes. fixes cosmetic, but annoying issue as borg check complains about size inconsistencies of part files in affected archives. you can solve that by running borg recreate on these archives, see also #3157. - bsdflags support: do not open BLK/CHR/LNK files, avoid crashes and slowness, #3130 - recreate: don't crash on attic archives w/o time_end, #3109 - don't crash on repository filesystems w/o hardlink support, #3107 - don't crash in first part of truncate_and_unlink, #3117 - fix server-side IndexError crash with clients < 1.0.7, #3192 - don't show traceback if only a global option is given, show help, #3142 - cache: use SaveFile for more safety, #3158 - init: fix wrong encryption choices in command line parser, fix missing "authenticated-blake2", #3103 - move --no-files-cache from common to borg create options, #3146 - fix detection of non-local path (failed on ..filename), #3108 - logging with fileConfig: set json attr on "borg" logger, #3114 - fix crash with relative BORG_KEY_FILE, #3197 - show excluded dir with "x" for tagged dirs / caches, #3189 New features: - create: --nobsdflags and --exclude-nodump options, #3160 - extract: --nobsdflags option, #3160 Other changes: - remove annoying hardlinked symlinks warning, #3175 - vagrant: use self-made FreeBSD 10.3 box, #3022 - travis: don't brew update, hopefully fixes #2532 - docs: - readme: -e option is required in borg 1.1 - add example showing --show-version --show-rc - use --format rather than --list-format (deprecated) in example - update docs about hardlinked symlinks limitation Version 1.1.0 (2017-10-07) -------------------------- Compatibility notes: - borg command line: do not put options in between positional arguments This sometimes works (e.g. it worked in borg 1.0.x), but can easily stop working if we make positional arguments optional (like it happened for borg create's "paths" argument in 1.1). There are also places in borg 1.0 where we do that, so it doesn't work there in general either. #3356 Good: borg create -v --stats repo::archive path Good: borg create repo::archive path -v --stats Bad: borg create repo::archive -v --stats path Fixes: - fix LD_LIBRARY_PATH restoration for subprocesses, #3077 - "auto" compression: make sure expensive compression is actually better, otherwise store lz4 compressed data we already computed. Other changes: - docs: - FAQ: we do not implement futile attempts of ETA / progress displays - manpage: fix typos, update homepage - implement simple "issue" role for manpage generation, #3075 Version 1.1.0rc4 (2017-10-01) ----------------------------- Compatibility notes: - A borg server >= 1.1.0rc4 does not support borg clients 1.1.0b3-b5. #3033 - The files cache is now controlled differently and has a new default mode: - the files cache now uses ctime by default for improved file change detection safety. You can still use mtime for more speed and less safety. 
- --ignore-inode is deprecated (use --files-cache=... without "inode") - --no-files-cache is deprecated (use --files-cache=disabled) New features: - --files-cache - implement files cache mode control, #911 You can now control the files cache mode using this option: --files-cache={ctime,mtime,size,inode,rechunk,disabled} (only some combinations are supported). See the docs for details. Fixes: - remote progress/logging: deal with partial lines, #2637 - remote progress: flush json mode output - fix subprocess environments, #3050 (and more) Other changes: - remove client_supports_log_v3 flag, #3033 - exclude broken Cython 0.27(.0) in requirements, #3066 - vagrant: - upgrade to FUSE for macOS 3.7.1 - use Python 3.5.4 to build the binaries - docs: - security: change-passphrase only changes the passphrase, #2990 - fixed/improved borg create --compression examples, #3034 - add note about metadata dedup and --no[ac]time, #2518 - twitter account @borgbackup now, better visible, #2948 - simplified rate limiting wrapper in FAQ Version 1.1.0rc3 (2017-09-10) ----------------------------- New features: - delete: support naming multiple archives, #2958 Fixes: - repo cleanup/write: invalidate cached FDs, #2982 - fix datetime.isoformat() microseconds issues, #2994 - recover_segment: use mmap(), lower memory needs, #2987 Other changes: - with-lock: close segment file before invoking subprocess - keymanager: don't depend on optional readline module, #2976 - docs: - fix macOS keychain integration command - show/link new screencasts in README, #2936 - document utf-8 locale requirement for json mode, #2273 - vagrant: clean up shell profile init, user name, #2977 - test_detect_attic_repo: don't test mount, #2975 - add debug logging for repository cleanup Version 1.1.0rc2 (2017-08-28) ----------------------------- Compatibility notes: - list: corrected mix-up of "isomtime" and "mtime" formats. Previously, "isomtime" was the default but produced a verbose human format, while "mtime" produced a ISO-8601-like format. The behaviours have been swapped (so "mtime" is human, "isomtime" is ISO-like), and the default is now "mtime". "isomtime" is now a real ISO-8601 format ("T" between date and time, not a space). New features: - None. 
Fixes: - list: fix weird mixup of mtime/isomtime - create --timestamp: set start time, #2957 - ignore corrupt files cache, #2939 - migrate locks to child PID when daemonize is used - fix exitcode of borg serve, #2910 - only compare contents when chunker params match, #2899 - umount: try fusermount, then try umount, #2863 Other changes: - JSON: use a more standard ISO 8601 datetime format, #2376 - cache: write_archive_index: truncate_and_unlink on error, #2628 - detect non-upgraded Attic repositories, #1933 - delete various nogil and threading related lines - coala / pylint related improvements - docs: - renew asciinema/screencasts, #669 - create: document exclusion through nodump, #2949 - minor formatting fixes - tar: tarpipe example - improve "with-lock" and "info" docs, #2869 - detail how to use macOS/GNOME/KDE keyrings for repo passwords, #392 - travis: only short-circuit docs-only changes for pull requests - vagrant: - netbsd: bash is already installed - fix netbsd version in PKG_PATH - add exe location to PATH when we build an exe Version 1.1.0rc1 (2017-07-24) ----------------------------- Compatibility notes: - delete: removed short option for --cache-only New features: - support borg list repo --format {comment} {bcomment} {end}, #2081 - key import: allow reading from stdin, #2760 Fixes: - with-lock: avoid creating segment files that might be overwritten later, #1867 - prune: fix checkpoints processing with --glob-archives - FUSE: versions view: keep original file extension at end, #2769 - fix --last, --first: do not accept values <= 0, fix reversed archive ordering with --last - include testsuite data (attic.tar.gz) when installing the package - use limited unpacker for outer key, for manifest (both security precautions), #2174 #2175 - fix bashism in shell scripts, #2820, #2816 - cleanup endianness detection, create _endian.h, fixes build on alpine linux, #2809 - fix crash with --no-cache-sync (give known chunk size to chunk_incref), #2853 Other changes: - FUSE: versions view: linear numbering by archive time - split up interval parsing from filtering for --keep-within, #2610 - add a basic .editorconfig, #2734 - use archive creation time as mtime for FUSE mount, #2834 - upgrade FUSE for macOS (osxfuse) from 3.5.8 to 3.6.3, #2706 - hashindex: speed up by replacing modulo with "if" to check for wraparound - coala checker / pylint: fixed requirements and .coafile, more ignores - borg upgrade: name backup directories as 'before-upgrade', #2811 - add .mailmap - some minor changes suggested by lgtm.com - docs: - better explanation of the --ignore-inode option relevance, #2800 - fix openSUSE command and add openSUSE section - simplify ssh authorized_keys file using "restrict", add legacy note, #2121 - mount: show usage of archive filters - mount: add repository example, #2462 - info: update and add examples, #2765 - prune: include example - improved style / formatting - improved/fixed segments_per_dir docs - recreate: fix wrong "remove unwanted files" example - reference list of status chars in borg recreate --filter description - update source-install docs about doc build dependencies, #2795 - cleanup installation docs - file system requirements, update segs per dir - fix checkpoints/parts reference in FAQ, #2859 - code: - hashindex: don't pass side effect into macro - crypto low_level: don't mutate local bytes() - use dash_open function to open file or "-" for stdin/stdout - archiver: argparse cleanup / refactoring - shellpattern: add match_end arg - tests: added some additional unit 
tests, some fixes, #2700 #2710 - vagrant: fix setup of cygwin, add Debian 9 "stretch" - travis: don't perform full travis build on docs-only changes, #2531 Version 1.1.0b6 (2017-06-18) ---------------------------- Compatibility notes: - Running "borg init" via a "borg serve --append-only" server will *not* create an append-only repository anymore. Use "borg init --append-only" to initialize an append-only repository. - Repositories in the "repokey" and "repokey-blake2" modes with an empty passphrase are now treated as unencrypted repositories for security checks (e.g. BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK). Previously there would be no prompts nor messages if an unknown repository in one of these modes with an empty passphrase was encountered. This would allow an attacker to swap a repository, if one assumed that the lack of password prompts was due to a set BORG_PASSPHRASE. Since the "trick" does not work if BORG_PASSPHRASE is set, this does generally not affect scripts. - Repositories in the "authenticated" mode are now treated as the unencrypted repositories they are. - The client-side temporary repository cache now holds unencrypted data for better speed. - borg init: removed the short form of --append-only (-a). - borg upgrade: removed the short form of --inplace (-i). New features: - reimplemented the RepositoryCache, size-limited caching of decrypted repo contents, integrity checked via xxh64. #2515 - reduced space usage of chunks.archive.d. Existing caches are migrated during a cache sync. #235 #2638 - integrity checking using xxh64 for important files used by borg, #1101: - repository: index and hints files - cache: chunks and files caches, chunks.archive.d - improve cache sync speed, #1729 - create: new --no-cache-sync option - add repository mandatory feature flags infrastructure, #1806 - Verify most operations against SecurityManager. Location, manifest timestamp and key types are now checked for almost all non-debug commands. #2487 - implement storage quotas, #2517 - serve: add --restrict-to-repository, #2589 - BORG_PASSCOMMAND: use external tool providing the key passphrase, #2573 - borg export-tar, #2519 - list: --json-lines instead of --json for archive contents, #2439 - add --debug-profile option (and also "borg debug convert-profile"), #2473 - implement --glob-archives/-a, #2448 - normalize authenticated key modes for better naming consistency: - rename "authenticated" to "authenticated-blake2" (uses blake2b) - implement "authenticated" mode (uses hmac-sha256) Fixes: - hashindex: read/write indices >2 GiB on 32bit systems, better error reporting, #2496 - repository URLs: implement IPv6 address support and also more informative error message when parsing fails. 
- mount: check whether llfuse is installed before asking for passphrase, #2540 - mount: do pre-mount checks before opening repository, #2541 - FUSE: - fix crash if empty (None) xattr is read, #2534 - fix read(2) caching data in metadata cache - fix negative uid/gid crash (fix crash when mounting archives of external drives made on cygwin), #2674 - redo ItemCache, on top of object cache - use decrypted cache - remove unnecessary normpaths - serve: ignore --append-only when initializing a repository (borg init), #2501 - serve: fix incorrect type of exception_short for Errors, #2513 - fix --exclude and --exclude-from recursing into directories, #2469 - init: don't allow creating nested repositories, #2563 - --json: fix encryption[mode] not being the cmdline name - remote: propagate Error.traceback correctly - fix remote logging and progress, #2241 - implement --debug-topic for remote servers - remote: restore "Remote:" prefix (as used in 1.0.x) - rpc negotiate: enable v3 log protocol only for supported clients - fix --progress and logging in general for remote - fix parse_version, add tests, #2556 - repository: truncate segments (and also some other files) before unlinking, #2557 - recreate: keep timestamps as in original archive, #2384 - recreate: if single archive is not processed, exit 2 - patterns: don't recurse with ! / --exclude for pf:, #2509 - cache sync: fix n^2 behaviour in lookup_name - extract: don't write to disk with --stdout (affected non-regular-file items), #2645 - hashindex: implement KeyError, more tests Other changes: - remote: show path in PathNotAllowed - consider repokey w/o passphrase == unencrypted, #2169 - consider authenticated mode == unencrypted, #2503 - restrict key file names, #2560 - document follow_symlinks requirements, check libc, use stat and chown with follow_symlinks=False, #2507 - support common options on the main command, #2508 - support common options on mid-level commands (e.g. borg *key* export) - make --progress a common option - increase DEFAULT_SEGMENTS_PER_DIR to 1000 - chunker: fix invalid use of types (function only used by tests) - chunker: don't do uint32_t >> 32 - FUSE: - add instrumentation (--debug and SIGUSR1/SIGINFO) - reduced memory usage for repository mounts by lazily instantiating archives - improved archive load times - info: use CacheSynchronizer & HashIndex.stats_against (better performance) - docs: - init: document --encryption as required - security: OpenSSL usage - security: used implementations; note python libraries - security: security track record of OpenSSL and msgpack - patterns: document denial of service (regex, wildcards) - init: note possible denial of service with "none" mode - init: document SHA extension is supported in OpenSSL and thus SHA is faster on AMD Ryzen than blake2b. - book: use A4 format, new builder option format. - book: create appendices - data structures: explain repository compaction - data structures: add chunk layout diagram - data structures: integrity checking - data structures: demingle cache and repo index - Attic FAQ: separate section for attic stuff - FAQ: I get an IntegrityError or similar - what now? 
- FAQ: Can I use Borg on SMR hard drives?, #2252 - FAQ: specify "using inline shell scripts" - add systemd warning regarding placeholders, #2543 - xattr: document API - add docs/misc/borg-data-flow data flow chart - debugging facilities - README: how to help the project, #2550 - README: add bountysource badge, #2558 - fresh new theme + tweaking - logo: vectorized (PDF and SVG) versions - frontends: use headlines - you can link to them - mark --pattern, --patterns-from as experimental - highlight experimental features in online docs - remove regex based pattern examples, #2458 - nanorst for "borg help TOPIC" and --help - split deployment - deployment: hosting repositories - deployment: automated backups to a local hard drive - development: vagrant, windows10 requirements - development: update docs remarks - split usage docs, #2627 - usage: avoid bash highlight, [options] instead of - usage: add benchmark page - helpers: truncate_and_unlink doc - don't suggest to leak BORG_PASSPHRASE - internals: columnize rather long ToC [webkit fixup] internals: manifest & feature flags - internals: more HashIndex details - internals: fix ASCII art equations - internals: edited obj graph related sections a bit - internals: layers image + description - fix way too small figures in pdf - index: disable syntax highlight (bash) - improve options formatting, fix accidental block quotes - testing / checking: - add support for using coala, #1366 - testsuite: add ArchiverCorruptionTestCase - do not test logger name, #2504 - call setup_logging after destroying logging config - testsuite.archiver: normalise pytest.raises vs. assert_raises - add test for preserved intermediate folder permissions, #2477 - key: add round-trip test - remove attic dependency of the tests, #2505 - enable remote tests on cygwin - tests: suppress tar's future timestamp warning - cache sync: add more refcount tests - repository: add tests, including corruption tests - vagrant: - control VM cpus and pytest workers via env vars VMCPUS and XDISTN - update cleaning workdir - fix openbsd shell - add OpenIndiana - packaging: - binaries: don't bundle libssl - setup.py clean to remove compiled files - fail in borg package if version metadata is very broken (setuptools_scm) - repo / code structure: - create borg.algorithms and borg.crypto packages - algorithms: rename crc32 to checksums - move patterns to module, #2469 - gitignore: complete paths for src/ excludes - cache: extract CacheConfig class - implement IntegrityCheckedFile + Detached variant, #2502 #1688 - introduce popen_with_error_handling to handle common user errors Version 1.1.0b5 (2017-04-30) ---------------------------- Compatibility notes: - BORG_HOSTNAME_IS_UNIQUE is now on by default. 
- removed --compression-from feature - recreate: add --recompress flag, unify --always-recompress and --recompress Fixes: - catch exception for os.link when hardlinks are not supported, #2405 - borg rename / recreate: expand placeholders, #2386 - generic support for hardlinks (files, devices, FIFOs), #2324 - extract: also create parent dir for device files, if needed, #2358 - extract: if a hardlink master is not in the to-be-extracted subset, the "x" status was not displayed for it, #2351 - embrace y2038 issue to support 32bit platforms: clamp timestamps to int32, #2347 - verify_data: fix IntegrityError handling for defect chunks, #2442 - allow excluding parent and including child, #2314 Other changes: - refactor compression decision stuff - change global compression default to lz4 as well, to be consistent with --compression defaults. - placeholders: deny access to internals and other unspecified stuff - clearer error message for unrecognized placeholder - more clear exception if borg check does not help, #2427 - vagrant: upgrade FUSE for macOS to 3.5.8, #2346 - linux binary builds: get rid of glibc 2.13 dependency, #2430 - docs: - placeholders: document escaping - serve: env vars in original commands are ignored - tell what kind of hardlinks we support - more docs about compression - LICENSE: use canonical formulation ("copyright holders and contributors" instead of "author") - document borg init behaviour via append-only borg serve, #2440 - be clear about what buzhash is used for, #2390 - add hint about chunker params, #2421 - clarify borg upgrade docs, #2436 - FAQ to explain warning when running borg check --repair, #2341 - repository file system requirements, #2080 - pre-install considerations - misc. formatting / crossref fixes - tests: - enhance travis setuptools_scm situation - add extra test for the hashindex - fix invalid param issue in benchmarks These belong to 1.1.0b4 release, but did not make it into changelog by then: - vagrant: increase memory for parallel testing - lz4 compress: lower max. buffer size, exception handling - add docstring to do_benchmark_crud - patterns help: mention path full-match in intro Version 1.1.0b4 (2017-03-27) ---------------------------- Compatibility notes: - init: the --encryption argument is mandatory now (there are several choices) - moved "borg migrate-to-repokey" to "borg key migrate-to-repokey". - "borg change-passphrase" is deprecated, use "borg key change-passphrase" instead. - the --exclude-if-present option now supports tagging a folder with any filesystem object type (file, folder, etc), instead of expecting only files as tags, #1999 - the --keep-tag-files option has been deprecated in favor of the new --keep-exclude-tags, to account for the change mentioned above. - use lz4 compression by default, #2179 New features: - JSON API to make developing frontends and automation easier (see :ref:`json_output`) - add JSON output to commands: `borg create/list/info --json ...`. - add --log-json option for structured logging output. - add JSON progress information, JSON support for confirmations (yes()). 
- add two new options --pattern and --patterns-from as discussed in #1406 - new path full match pattern style (pf:) for very fast matching, #2334 - add 'debug dump-manifest' and 'debug dump-archive' commands - add 'borg benchmark crud' command, #1788 - new 'borg delete --force --force' to delete severely corrupted archives, #1975 - info: show utilization of maximum archive size, #1452 - list: add dsize and dcsize keys, #2164 - paperkey.html: Add interactive html template for printing key backups. - key export: add qr html export mode - securely erase config file (which might have old encryption key), #2257 - archived file items: add size to metadata, 'borg extract' and 'borg check' do check the file size for consistency, FUSE uses precomputed size from Item. Fixes: - fix remote speed regression introduced in 1.1.0b3, #2185 - fix regression handling timestamps beyond 2262 (revert bigint removal), introduced in 1.1.0b3, #2321 - clamp (nano)second values to unproblematic range, #2304 - hashindex: rebuild hashtable if we have too little empty buckets (performance fix), #2246 - Location regex: fix bad parsing of wrong syntax - ignore posix_fadvise errors in repository.py, #2095 - borg rpc: use limited msgpack.Unpacker (security precaution), #2139 - Manifest: Make sure manifest timestamp is strictly monotonically increasing. - create: handle BackupOSError on a per-path level in one spot - create: clarify -x option / meaning of "same filesystem" - create: don't create hard link refs to failed files - archive check: detect and fix missing all-zero replacement chunks, #2180 - files cache: update inode number when --ignore-inode is used, #2226 - fix decompression exceptions crashing ``check --verify-data`` and others instead of reporting integrity error, #2224 #2221 - extract: warning for unextracted big extended attributes, #2258, #2161 - mount: umount on SIGINT/^C when in foreground - mount: handle invalid hard link refs - mount: fix huge RAM consumption when mounting a repository (saves number of archives * 8 MiB), #2308 - hashindex: detect mingw byte order #2073 - hashindex: fix wrong skip_hint on hashindex_set when encountering tombstones, the regression was introduced in #1748 - fix ChunkIndex.__contains__ assertion for big-endian archs - fix borg key/debug/benchmark crashing without subcommand, #2240 - Location: accept //servername/share/path - correct/refactor calculation of unique/non-unique chunks - extract: fix missing call to ProgressIndicator.finish - prune: fix error msg, it is --keep-within, not --within - fix "auto" compression mode bug (not compressing), #2331 - fix symlink item fs size computation, #2344 Other changes: - remote repository: improved async exception processing, #2255 #2225 - with --compression auto,C, only use C if lz4 achieves at least 3% compression - PatternMatcher: only normalize path once, #2338 - hashindex: separate endian-dependent defs from endian detection - migrate-to-repokey: ask using canonical_path() as we do everywhere else. 
- SyncFile: fix use of fd object after close
- make LoggedIO.close_segment reentrant
- creating a new segment: use "xb" mode, #2099
- redo key_creator, key_factory, centralise key knowledge, #2272
- add return code functions, #2199
- list: only load cache if needed
- list: files->items, clarifications
- list: add "name" key for consistency with info cmd
- ArchiveFormatter: add "start" key for compatibility with "info"
- RemoteRepository: account rx/tx bytes
- setup.py build_usage/build_man/build_api fixes
- Manifest.in: simplify, exclude .so, .dll and .orig, #2066
- FUSE: get rid of chunk accounting, st_blocks = ceil(size / blocksize).
- tests:

  - help python development by testing 3.6-dev
  - test for borg delete --force

- vagrant:

  - freebsd: some fixes, #2067
  - darwin64: use osxfuse 3.5.4 for tests / to build binaries
  - darwin64: improve VM settings
  - use python 3.5.3 to build binaries, #2078
  - upgrade pyinstaller from 3.1.1+ to 3.2.1
  - pyinstaller: use fixed AND freshly compiled bootloader, #2002
  - pyinstaller: automatically builds bootloader if missing

- docs:

  - create really nice man pages
  - faq: mention --remote-ratelimit in bandwidth limit question
  - fix caskroom link, #2299
  - docs/security: reiterate that RPC in Borg does no networking
  - docs/security: counter tracking, #2266
  - docs/development: update merge remarks
  - address SSH batch mode in docs, #2202 #2270
  - add warning about running build_usage on Python >3.4, #2123
  - one link per distro in the installation page
  - improve --exclude-if-present and --keep-exclude-tags, #2268
  - improve automated backup script in doc, #2214
  - improve remote-path description
  - update docs for create -C default change (lz4)
  - document relative path usage, #1868
  - document snapshot usage, #2178
  - corrected some stuff in internals+security
  - internals: move toctree to after the introduction text
  - clarify metadata kind, manifest ops
  - key enc: correct / clarify some stuff, link to internals/security
  - datas: enc: 1.1.x has different MACs
  - datas: enc: correct factual error -- no nonce involved there.
  - make internals.rst an index page and edit it a bit
  - add "Cryptography in Borg" and "Remote RPC protocol security" sections
  - document BORG_HOSTNAME_IS_UNIQUE, #2087
  - FAQ by categories as proposed by @anarcat in #1802
  - FAQ: update Which file types, attributes, etc. are *not* preserved?
  - development: new branching model for git repository
  - development: define "ours" merge strategy for auto-generated files
  - create: move --exclude note to main doc
  - create: move item flags to main doc
  - fix examples using borg init without -e/--encryption
  - list: don't print key listings in fat (html + man)
  - remove Python API docs (were very incomplete, build problems on RTFD)
  - added FAQ section about backing up root partition

Version 1.0.10 (2017-02-13)
---------------------------

Bug fixes:

- Manifest timestamps are now monotonically increasing, this fixes issues
  when the system clock jumps backwards or is set inconsistently across
  computers accessing the same repository, #2115
- Fixed testing regression in 1.0.10rc1 that led to a hard dependency on
  py.test >= 3.0, #2112

New features:

- "key export" can now generate a printable HTML page with both a QR code and
  a human-readable "paperkey" representation (and custom text) through the
  ``--qr-html`` option. The same functionality is also available through
  paperkey.html, which is the same HTML page generated by ``--qr-html``.
  It works with existing "key export" files and key files.
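A sketch of the HTML export (``REPO`` and the output file name are
placeholders)::

    # writes a printable HTML page with QR code and "paperkey" text:
    borg key export --qr-html REPO borg-key-backup.html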
Other changes:

- docs:

  - language clarification - "borg create --one-file-system" option does not
    respect mount points, but considers different file systems instead, #2141

- setup.py: build_api: sort file list for determinism

Version 1.1.0b3 (2017-01-15)
----------------------------

Compatibility notes:

- borg init: removed the default of "--encryption/-e", #1979.
  This was done so users make an informed decision about -e mode.

Bug fixes:

- borg recreate: don't rechunkify unless explicitly told so
- borg info: fixed bug when called without arguments, #1914
- borg init: fix free space check crashing if disk is full, #1821
- borg debug delete/get obj: fix wrong reference to exception
- fix processing of remote ~/ and ~user/ paths (regressed since 1.1.0b1),
  #1759
- posix platform module: only build / import on non-win32 platforms, #2041

New features:

- new CRC32 implementations that are much faster than the zlib one used
  previously, #1970
- add blake2b key modes (use blake2b as MAC). This links against system
  libb2, if possible, otherwise uses bundled code
- automatically remove stale locks - set BORG_HOSTNAME_IS_UNIQUE env var to
  enable stale lock killing. If set, stale locks in both cache and repository
  are deleted. #562 #1253
- borg info <repo>: print general repo information, #1680
- borg check --first / --last / --sort / --prefix, #1663
- borg mount --first / --last / --sort / --prefix, #1542
- implement "health" item formatter key, #1749
- BORG_SECURITY_DIR to remember security-related infos outside the cache.
  Key type, location and manifest timestamp checks now survive cache
  deletion. This also means that you can now delete your cache and avoid
  previous warnings, since Borg can still tell it's safe.
- implement BORG_NEW_PASSPHRASE, #1768 (see the sketch below)
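A sketch of a scripted passphrase change using this variable (values are
placeholders; this assumes the borg 1.1 command name
``borg key change-passphrase`` and that BORG_PASSPHRASE supplies the current
passphrase)::

    BORG_PASSPHRASE=old-secret BORG_NEW_PASSPHRASE=new-secret \
        borg key change-passphrase REPO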
Other changes:

- borg recreate:

  - remove special-cased --dry-run
  - update --help
  - remove bloat: interruption blah, autocommit blah, resuming blah
  - re-use existing checkpoint functionality

- archiver tests: add check_cache tool - lints refcounts
- fixed cache sync performance regression from 1.1.0b1 onwards, #1940
- syncing the cache without chunks.archive.d (see
  :ref:`disable_archive_chunks`) now avoids any merges and is thus faster,
  #1940
- borg check --verify-data: faster due to linear on-disk-order scan
- borg debug-xxx commands removed, we use "debug xxx" subcommands now, #1627
- improve metadata handling speed
- shortcut hashindex_set by having hashindex_lookup hint about address
- improve / add progress displays, #1721
- check for index vs. segment files object count mismatch
- make RPC protocol more extensible: use named parameters.
- RemoteRepository: misc. code cleanups / refactors
- clarify cache/repository README file
- docs:

  - quickstart: add a comment about other (remote) filesystems
  - quickstart: only give one possible ssh url syntax, all others are
    documented in usage chapter.
  - mention file://
  - document repo URLs / archive location
  - clarify borg diff help, #980
  - deployment: synthesize alternative --restrict-to-path example
  - improve cache / index docs, esp. files cache docs, #1825
  - document using "git merge 1.0-maint -s recursive -X rename-threshold=20%"
    for avoiding troubles when merging the 1.0-maint branch into master.

- tests:

  - FUSE tests: catch ENOTSUP on freebsd
  - FUSE tests: test troublesome xattrs last
  - fix byte range error in test, #1740
  - use monkeypatch to set env vars, but only on pytest based tests.
  - point XDG_*_HOME to temp dirs for tests, #1714
  - remove all BORG_* env vars from the outer environment

Version 1.0.10rc1 (2017-01-29)
------------------------------

Bug fixes:

- borg serve: fix transmission data loss of pipe writes, #1268.
  This affects only the cygwin platform (not Linux, BSD, OS X).
- Avoid triggering an ObjectiveFS bug in xattr retrieval, #1992
- When running out of buffer memory when reading xattrs, only skip the
  current file, #1993
- Fixed "borg upgrade --tam" crashing with unencrypted repositories. Since
  :ref:`the issue <tam_vuln>` is not relevant for unencrypted repositories,
  it now does nothing and prints an error, #1981.
- Fixed change-passphrase crashing with unencrypted repositories, #1978
- Fixed "borg check repo::archive" indicating success if "archive" does not
  exist, #1997
- borg check: print non-exit-code warning if --last or --prefix aren't
  fulfilled
- fix bad parsing of wrong repo location syntax
- create: don't create hard link refs to failed files, mount: handle invalid
  hard link refs, #2092
- detect mingw byte order, #2073
- creating a new segment: use "xb" mode, #2099
- mount: umount on SIGINT/^C when in foreground, #2082

Other changes:

- binary: use fixed AND freshly compiled pyinstaller bootloader, #2002
- xattr: ignore empty names returned by llistxattr(2) et al
- Enable the fault handler: install handlers for the SIGSEGV, SIGFPE,
  SIGABRT, SIGBUS and SIGILL signals to dump the Python traceback.
- Also print a traceback on SIGUSR2.
- borg change-passphrase: print key location (simplify making a backup of it)
- officially support Python 3.6 (setup.py: add Python 3.6 qualifier)
- tests:

  - vagrant / travis / tox: add Python 3.6 based testing
  - vagrant: fix openbsd repo, #2042
  - vagrant: fix the freebsd64 machine, #2037 #2067
  - vagrant: use python 3.5.3 to build binaries, #2078
  - vagrant: use osxfuse 3.5.4 for tests / to build binaries
  - vagrant: improve darwin64 VM settings
  - travis: fix osxfuse install (fixes OS X testing on Travis CI)
  - travis: require succeeding OS X tests, #2028
  - travis: use latest pythons for OS X based testing
  - use pytest-xdist to parallelize testing
  - fix xattr test race condition, #2047
  - setup.cfg: fix pytest deprecation warning, #2050

- docs:

  - language clarification
  - VM backup FAQ
  - borg create: document how to backup stdin, #2013
  - borg upgrade: fix incorrect title levels
  - add CVE numbers for issues fixed in 1.0.9, #2106
  - fix typos (taken from Debian package patch)

- remote: include data hexdump in "unexpected RPC data" error message
- remote: log SSH command line at debug level
- API_VERSION: use numberspaces, #2023
- remove .github from pypi package, #2051
- add pip and setuptools to requirements file, #2030
- SyncFile: fix use of fd object after close (cosmetic)
- Manifest.in: simplify, exclude \*.{so,dll,orig}, #2066
- ignore posix_fadvise errors in repository.py, #2095 (works around issues
  with docker on ARM)
- make LoggedIO.close_segment reentrant, avoid reentrance

Version 1.0.9 (2016-12-20)
--------------------------

Security fixes:

- A flaw in the cryptographic authentication scheme in Borg allowed an
  attacker to spoof the manifest. See :ref:`tam_vuln` above for the steps you
  should take. CVE-2016-10099 was assigned to this vulnerability.
- borg check: When rebuilding the manifest (which should only be needed very
  rarely) duplicate archive names would be handled on a "first come first
  serve" basis, allowing an attacker to apparently replace archives.
  CVE-2016-10100 was assigned to this vulnerability.
Bug fixes: - borg check: - rebuild manifest if it's corrupted - skip corrupted chunks during manifest rebuild - fix TypeError in integrity error handler, #1903, #1894 - fix location parser for archives with @ char (regression introduced in 1.0.8), #1930 - fix wrong duration/timestamps if system clock jumped during a create - fix progress display not updating if system clock jumps backwards - fix checkpoint interval being incorrect if system clock jumps Other changes: - docs: - add python3-devel as a dependency for cygwin-based installation - clarify extract is relative to current directory - FAQ: fix link to changelog - markup fixes - tests: - test_get\_(cache|keys)_dir: clean env state, #1897 - get back pytest's pretty assertion failures, #1938 - setup.py build_usage: - fixed build_usage not processing all commands - fixed build_usage not generating includes for debug commands Version 1.0.9rc1 (2016-11-27) ----------------------------- Bug fixes: - files cache: fix determination of newest mtime in backup set (which is used in cache cleanup and led to wrong "A" [added] status for unchanged files in next backup), #1860. - borg check: - fix incorrectly reporting attic 0.13 and earlier archives as corrupt - handle repo w/o objects gracefully and also bail out early if repo is *completely* empty, #1815. - fix tox/pybuild in 1.0-maint - at xattr module import time, loggers are not initialized yet New features: - borg umount exposed already existing umount code via the CLI api, so users can use it, which is more consistent than using borg to mount and fusermount -u (or umount) to un-mount, #1855. - implement borg create --noatime --noctime, fixes #1853 Other changes: - docs: - display README correctly on PyPI - improve cache / index docs, esp. files cache docs, fixes #1825 - different pattern matching for --exclude, #1779 - datetime formatting examples for {now} placeholder, #1822 - clarify passphrase mode attic repo upgrade, #1854 - clarify --umask usage, #1859 - clarify how to choose PR target branch - clarify prune behavior for different archive contents, #1824 - fix PDF issues, add logo, fix authors, headings, TOC - move security verification to support section - fix links in standalone README (:ref: tags) - add link to security contact in README - add FAQ about security - move fork differences to FAQ - add more details about resource usage - tests: skip remote tests on cygwin, #1268 - travis: - allow OS X failures until the brew cask osxfuse issue is fixed - caskroom osxfuse-beta gone, it's osxfuse now (3.5.3) - vagrant: - upgrade OSXfuse / FUSE for macOS to 3.5.3 - remove llfuse from tox.ini at a central place - do not try to install llfuse on centos6 - fix FUSE test for darwin, #1546 - add windows virtual machine with cygwin - Vagrantfile cleanup / code deduplication Version 1.1.0b2 (2016-10-01) ---------------------------- Bug fixes: - fix incorrect preservation of delete tags, leading to "object count mismatch" on borg check, #1598. This only occurred with 1.1.0b1 (not with 1.0.x) and is normally fixed by running another borg create/delete/prune. - fix broken --progress for double-cell paths (e.g. 
CJK), #1624 - borg recreate: also catch SIGHUP - FUSE: - fix hardlinks in versions view, #1599 - add parameter check to ItemCache.get to make potential failures more clear New features: - Archiver, RemoteRepository: add --remote-ratelimit (send data) - borg help compression, #1582 - borg check: delete chunks with integrity errors, #1575, so they can be "repaired" immediately and maybe healed later. - archives filters concept (refactoring/unifying older code) - covers --first/--last/--prefix/--sort-by options - currently used for borg list/info/delete Other changes: - borg check --verify-data slightly tuned (use get_many()) - change {utcnow} and {now} to ISO-8601 format ("T" date/time separator) - repo check: log transaction IDs, improve object count mismatch diagnostic - Vagrantfile: use TW's fresh-bootloader pyinstaller branch - fix module names in api.rst - hashindex: bump api_version Version 1.1.0b1 (2016-08-28) ---------------------------- New features: - new commands: - borg recreate: re-create existing archives, #787 #686 #630 #70, also see #757, #770. - selectively remove files/dirs from old archives - re-compress data - re-chunkify data, e.g. to have upgraded Attic / Borg 0.xx archives deduplicate with Borg 1.x archives or to experiment with chunker-params. - borg diff: show differences between archives - borg with-lock: execute a command with the repository locked, #990 - borg create: - Flexible compression with pattern matching on path/filename, and LZ4 heuristic for deciding compressibility, #810, #1007 - visit files in inode order (better speed, esp. for large directories and rotating disks) - in-file checkpoints, #1217 - increased default checkpoint interval to 30 minutes (was 5 minutes), #896 - added uuid archive format tag, #1151 - save mountpoint directories with --one-file-system, makes system restore easier, #1033 - Linux: added support for some BSD flags, #1050 - add 'x' status for excluded paths, #814 - also means files excluded via UF_NODUMP, #1080 - borg check: - will not produce the "Checking segments" output unless new --progress option is passed, #824. - --verify-data to verify data cryptographically on the client, #975 - borg list, #751, #1179 - removed {formatkeys}, see "borg list --help" - --list-format is deprecated, use --format instead - --format now also applies to listing archives, not only archive contents, #1179 - now supports the usual [PATH [PATHS…]] syntax and excludes - new keys: csize, num_chunks, unique_chunks, NUL - supports guaranteed_available hashlib hashes (to avoid varying functionality depending on environment), which includes the SHA1 and SHA2 family as well as MD5 - borg prune: - to better visualize the "thinning out", we now list all archives in reverse time order. rephrase and reorder help text. - implement --keep-last N via --keep-secondly N, also --keep-minutely. 

Other changes:

- borg check --verify-data slightly tuned (use get_many())
- change {utcnow} and {now} to ISO-8601 format ("T" date/time separator)
- repo check: log transaction IDs, improve object count mismatch diagnostic
- Vagrantfile: use TW's fresh-bootloader pyinstaller branch
- fix module names in api.rst
- hashindex: bump api_version

Version 1.1.0b1 (2016-08-28)
----------------------------

New features:

- new commands:

  - borg recreate: re-create existing archives, #787 #686 #630 #70, also see #757, #770.

    - selectively remove files/dirs from old archives
    - re-compress data
    - re-chunkify data, e.g. to have upgraded Attic / Borg 0.xx archives deduplicate with Borg 1.x archives or to experiment with chunker-params.

  - borg diff: show differences between archives
  - borg with-lock: execute a command with the repository locked, #990

- borg create:

  - Flexible compression with pattern matching on path/filename, and LZ4 heuristic for deciding compressibility, #810, #1007
  - visit files in inode order (better speed, esp. for large directories and rotating disks)
  - in-file checkpoints, #1217
  - increased default checkpoint interval to 30 minutes (was 5 minutes), #896
  - added uuid archive format tag, #1151
  - save mountpoint directories with --one-file-system, makes system restore easier, #1033
  - Linux: added support for some BSD flags, #1050
  - add 'x' status for excluded paths, #814

    - also means files excluded via UF_NODUMP, #1080

- borg check:

  - will not produce the "Checking segments" output unless new --progress option is passed, #824.
  - --verify-data to verify data cryptographically on the client, #975

- borg list, #751, #1179

  - removed {formatkeys}, see "borg list --help"
  - --list-format is deprecated, use --format instead
  - --format now also applies to listing archives, not only archive contents, #1179
  - now supports the usual [PATH [PATHS…]] syntax and excludes
  - new keys: csize, num_chunks, unique_chunks, NUL
  - supports guaranteed_available hashlib hashes (to avoid varying functionality depending on environment), which includes the SHA1 and SHA2 family as well as MD5

- borg prune:

  - to better visualize the "thinning out", we now list all archives in reverse time order. rephrase and reorder help text.
  - implement --keep-last N via --keep-secondly N, also --keep-minutely. assuming that there is not more than 1 backup archive made in 1s, --keep-last N and --keep-secondly N are equivalent, #537
  - cleanup checkpoints except the latest, #1008

- borg extract:

  - added --progress, #1449
  - Linux: limited support for BSD flags, #1050

- borg info:

  - output is now more similar to borg create --stats, #977

- borg mount:

  - provide "borgfs" wrapper for borg mount, enables usage via fstab, #743
  - "versions" mount option - when used with a repository mount, this gives a merged, versioned view of the files in all archives, #729

- repository:

  - added progress information to commit/compaction phase (often takes some time when deleting/pruning), #1519
  - automatic recovery for some forms of repository inconsistency, #858
  - check free space before going forward with a commit, #1336
  - improved write performance (esp. for rotating media), #985

    - new IO code for Linux
    - raised default segment size to approx 512 MiB

  - improved compaction performance, #1041
  - reduced client CPU load and improved performance for remote repositories, #940

- options that imply output (--show-rc, --show-version, --list, --stats, --progress) don't need -v/--info to have that output displayed, #865
- add archive comments (via borg (re)create --comment), #842
- borg list/prune/delete: also output archive id, #731
- --show-version: shows/logs the borg version, #725
- added --debug-topic for granular debug logging, #1447
- use atomic file writing/updating for configuration and key files, #1377
- BORG_KEY_FILE environment variable, #1001
- self-testing module, #970
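
To make the new commands concrete, a hedged example session (archive names and paths are made up)::

  # show what changed between two archives
  borg diff /path/to/repo::monday tuesday

  # run a command while holding the repository lock
  borg with-lock /path/to/repo rsync -a /path/to/repo/ /mnt/backup/repo-copy/

  # re-chunk an old archive, e.g. to make it deduplicate with new archives
  borg recreate --chunker-params 19,23,21,4095 /path/to/repo::old-archive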

Bug fixes:

- list: fixed default output being produced if --format is given with empty parameter, #1489
- create: fixed overflowing progress line with CJK and similar characters, #1051
- prune: fixed crash if --prefix resulted in no matches, #1029
- init: clean up partial repo if passphrase input is aborted, #850
- info: quote cmdline arguments that have spaces in them
- fix hardlinks failing in some cases for extracting subtrees, #761

Other changes:

- replace stdlib hmac with OpenSSL, zero-copy decrypt (10-15% increase in performance of hash-lists and extract).
- improved chunker performance, #1021
- open repository segment files in exclusive mode (fail-safe), #1134
- improved error logging, #1440
- Source:

  - pass meta-data around, #765
  - move some constants to new constants module
  - better readability and fewer errors with namedtuples, #823
  - moved source tree into src/ subdirectory, #1016
  - made borg.platform a package, #1113
  - removed dead crypto code, #1032
  - improved and ported parts of the test suite to py.test, #912
  - created data classes instead of passing dictionaries around, #981, #1158, #1161
  - cleaned up imports, #1112

- Docs:

  - better help texts and sphinx reproduction of usage help:

    - Group options
    - Nicer list of options in Sphinx
    - Deduplicate 'Common options' (including --help)

  - chunker: added some insights by "Voltara", #903
  - clarify what "deduplicated size" means
  - fix / update / add package list entries
  - added a SaltStack usage example, #956
  - expanded FAQ
  - new contributors in AUTHORS!

- Tests:

  - vagrant: add ubuntu/xenial 64bit - this box has still some issues
  - ChunkBuffer: add test for leaving partial chunk in buffer, fixes #945

Version 1.0.8 (2016-10-29)
--------------------------

Bug fixes:

- RemoteRepository: Fix busy wait in call_many, #940

New features:

- implement borgmajor/borgminor/borgpatch placeholders, #1694. {borgversion} was already there (full version string). With the new placeholders you can now also get e.g. 1 or 1.0 or 1.0.8.

Other changes:

- avoid previous_location mismatch, #1741. Due to the changed canonicalization for relative paths in PR #1711 / #1655 (implement /./ relpath hack), there would be a changed repo location warning and the user would be asked if this is ok. This would break automation and require manual intervention, which is unwanted. Thus, we automatically fix the previous_location config entry, if it only changed in the expected way, but still means the same location.
- docs:

  - deployment.rst: do not use bare variables in ansible snippet
  - add clarification about append-only mode, #1689
  - setup.py: add comment about requiring llfuse, #1726
  - update usage.rst / api.rst
  - repo url / archive location docs + typo fix
  - quickstart: add a comment about other (remote) filesystems

- vagrant / tests:

  - no chown when rsyncing (fixes boxes w/o vagrant group)
  - fix FUSE permission issues on linux/freebsd, #1544
  - skip FUSE test for borg binary + fakeroot
  - ignore security.selinux xattrs, fixes tests on centos, #1735

Version 1.0.8rc1 (2016-10-17)
-----------------------------

Bug fixes:

- fix signal handling (SIGINT, SIGTERM, SIGHUP), #1620 #1593. Fixes e.g. leftover lock files for quickly repeated signals (e.g. Ctrl-C Ctrl-C) or lost connections or systemd sending SIGHUP.
- progress display: adapt formatting to narrow screens, do not crash, #1628
- borg create --read-special - fix crash on broken symlink, #1584. Also correctly processes broken symlinks. Before this regressed to a crash (5b45385), a broken symlink would've been skipped.
- process_symlink: fix missing backup_io(). Fixes a chmod/chown/chgrp/unlink/rename/... crash race between getting dirents and dispatching to process_symlink.
- yes(): abort on wrong answers, saying so, #1622
- fixed exception borg serve raised when connection was closed before repository was opened. Add an error message for this.
- fix read-from-closed-FD issue, #1551 (this seems not to get triggered in 1.0.x, but was discovered in master)
- hashindex: fix iterators (always raise StopIteration when exhausted) (this seems not to get triggered in 1.0.x, but was discovered in master)
- enable relative paths in ssh:// repo URLs, via /./relpath hack, #1655
- allow repo paths with colons, #1705
- update changed repo location immediately after acceptance, #1524
- fix debug get-obj / delete-obj crash if object not found and remote repo, #1684
- pyinstaller: use a spec file to build borg.exe binary, exclude osxfuse dylib on Mac OS X (avoids mismatch lib <-> driver), #1619

New features:

- add "borg key export" / "borg key import" commands, #1555, so users are able to backup / restore their encryption keys more easily. Supported formats are the keyfile format used by borg internally and a special "paper" format with per-line checksums for printed backups. For the paper format, the import is an interactive process which checks each line as soon as it is input.
- add "borg debug-refcount-obj" to determine a repo object's referrer counts, #1352
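
A sketch of backing up and restoring a key with these commands (file name and repo path are placeholders)::

  # export the key in printable "paper" format, e.g. to print and store offline
  borg key export --paper /path/to/repo /root/repo-key.txt

  # later, restore it; the paper-format import is interactive and checks each line
  borg key import --paper /path/to/repo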

Other changes:

- add "borg debug ..." subcommands (borg debug-* still works, but will be removed in borg 1.1)
- setup.py: Add subcommand support to build_usage.
- remote: change exception message for unexpected RPC data format to indicate dataflow direction.
- improved messages / error reporting:

  - IntegrityError: add placeholder for message, so that the message we give appears not only in the traceback, but also in the (short) error message, #1572
  - borg.key: include chunk id in exception msgs, #1571
  - better messages for cache newer than repo, #1700

- vagrant (testing/build VMs):

  - upgrade OSXfuse / FUSE for macOS to 3.5.2
  - update Debian Wheezy boxes, #1686
  - openbsd / netbsd: use own boxes, fixes misc rsync installation and FUSE/llfuse related testing issues, #1695 #1696 #1670 #1671 #1728

- docs:

  - add docs for "key export" and "key import" commands, #1641
  - fix inconsistency in FAQ (pv-wrapper).
  - fix second block in "Easy to use" section not showing on GitHub, #1576
  - add bestpractices badge
  - link reference docs and faq about BORG_FILES_CACHE_TTL, #1561
  - improve borg info --help, explain size infos, #1532
  - add release signing key / security contact to README, #1560
  - add contribution guidelines for developers
  - development.rst: add sphinx_rtd_theme to the sphinx install command
  - adjust border color in borg.css
  - add debug-info usage help file
  - internals.rst: fix typos
  - setup.py: fix build_usage to always process all commands
  - added docs explaining multiple --restrict-to-path flags, #1602
  - add more specific warning about write-access debug commands, #1587
  - clarify FAQ regarding backup of virtual machines, #1672

- tests:

  - work around FUSE xattr test issue with recent fakeroot
  - simplify repo/hashindex tests
  - travis: test FUSE-enabled borg, use trusty to have a recent FUSE
  - re-enable FUSE tests for RemoteArchiver (no deadlocks any more)
  - clean env for pytest based tests, #1714
  - fuse_mount contextmanager: accept any options

Version 1.0.7 (2016-08-19)
--------------------------

Security fixes:

- borg serve: fix security issue with remote repository access, #1428. If you used e.g. --restrict-to-path /path/client1/ (with or without trailing slash does not make a difference), it acted like a path prefix match using /path/client1 (note the missing trailing slash) - the code then also allowed working in e.g. /path/client13 or /path/client1000. As this could accidentally lead to major security/privacy issues depending on the paths you use, the behaviour was changed to be a strict directory match. That means --restrict-to-path /path/client1 (with or without trailing slash does not make a difference) now uses /path/client1/ internally (note the trailing slash here!) for matching and allows precisely that path AND any path below it. So, /path/client1 is allowed, /path/client1/repo1 is allowed, but not /path/client13 or /path/client1000. If you willingly used the undocumented (dangerous) previous behaviour, you may need to rearrange your --restrict-to-path paths now. We are sorry if that causes work for you, but we did not want a potentially dangerous behaviour in the software (not even using a for-backwards-compat option).
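
For illustration, a typical forced-command entry in the repo server's ``authorized_keys`` file (key data and paths are placeholders) now behaves like a strict directory match::

  command="borg serve --restrict-to-path /path/client1" ssh-rsa AAAA...

  # allowed:      /path/client1, /path/client1/repo1
  # not allowed:  /path/client13, /path/client1000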

Bug fixes:

- fixed repeated LockTimeout exceptions when borg serve tried to write into an already write-locked repo (e.g. by a borg mount), #502 part b). This was solved by the fix for #1220 in 1.0.7rc1 already.
- fix cosmetics + file leftover for "not a valid borg repository", #1490
- Cache: release lock if cache is invalid, #1501
- borg extract --strip-components: fix leak of preloaded chunk contents
- Repository, when an InvalidRepository exception happens:

  - fix spurious, empty lock.roster
  - fix repo not closed cleanly

New features:

- implement borg debug-info, fixes #1122 (just calls already existing code via cli, same output as below tracebacks)

Other changes:

- skip the O_NOATIME test on GNU Hurd, fixes #1315 (this is a very minor issue and the GNU Hurd project knows the bug)
- document using a clean repo to test / build the release

Version 1.0.7rc2 (2016-08-13)
-----------------------------

Bug fixes:

- do not write objects to repository that are bigger than the allowed size, borg will reject reading them, #1451. Important: if you created archives with many millions of files or directories, please verify if you can open them successfully, e.g. try a "borg list REPO::ARCHIVE".
- lz4 compression: dynamically enlarge the (de)compression buffer, the static buffer was not big enough for archives with extremely many items, #1453
- larger item metadata stream chunks, raise archive item limit by 8x, #1452
- fix untracked segments made by moved DELETEs, #1442. Impact: previously (metadata) segments could become untracked when deleting data, these would never be cleaned up.
- extended attributes (xattrs) related fixes:

  - fixed a race condition in xattrs querying that led to the entire file not being backed up (while logging the error, exit code = 1), #1469
  - fixed a race condition in xattrs querying that led to a crash, #1462
  - raise OSError including the error message derived from errno, deal with path being an integer FD

Other changes:

- print active env var override by default, #1467
- xattr module: refactor code, deduplicate, clean up
- repository: split object size check into too small and too big
- add a transaction_id assertion, so borg init on a broken (inconsistent) filesystem does not look like a coding error in borg, but points to the real problem.
- explain confusing TypeError caused by compat support for old servers, #1456
- add forgotten usage help file from build_usage
- refactor/unify buffer code into helpers.Buffer class, add tests
- docs:

  - document archive limitation, #1452
  - improve prune examples

Version 1.0.7rc1 (2016-08-05)
-----------------------------

Bug fixes:

- fix repo lock deadlocks (related to lock upgrade), #1220
- catch unpacker exceptions, resync, #1351
- fix borg break-lock ignoring BORG_REPO env var, #1324
- files cache performance fixes (fixes unnecessary re-reading/chunking/hashing of unmodified files for some use cases):

  - fix unintended file cache eviction, #1430
  - implement BORG_FILES_CACHE_TTL, update FAQ, raise default TTL from 10 to 20, #1338

- FUSE:

  - cache partially read data chunks (performance), #965, #966
  - always create a root dir, #1125

- use an OrderedDict for helptext, making the build reproducible, #1346
- RemoteRepository init: always call close on exceptions, #1370 (cosmetic)
- ignore stdout/stderr broken pipe errors (cosmetic), #1116
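
As a usage hint for the new TTL setting (the value 30 is just an example): raising it helps when many different backup sets share one files cache::

  export BORG_FILES_CACHE_TTL=30
  borg create /path/to/repo::my-archive ~/data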

New features:

- better borg versions management support (useful esp. for borg servers wanting to offer multiple borg versions and for clients wanting to choose a specific server borg version), #1392:

  - add BORG_VERSION environment variable before executing "borg serve" via ssh
  - add new placeholder {borgversion}
  - substitute placeholders in --remote-path

- borg init --append-only option (makes using the more secure append-only mode more convenient. when used remotely, this requires 1.0.7+ also on the borg server), #1291.
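
A hedged sketch of these two features (server-side executable names and repo URLs invented for illustration)::

  # select a matching borg executable on the server via placeholder substitution
  borg list --remote-path 'borg-{borgversion}' ssh://user@host/./repo

  # create a new repository that starts out in append-only mode
  borg init --append-only ssh://user@host/./repo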

Other changes:

- Vagrantfile:

  - darwin64: upgrade to FUSE for macOS 3.4.1 (aka osxfuse), #1378
  - xenial64: use user "ubuntu", not "vagrant" (as usual), #1331

- tests:

  - fix FUSE tests on OS X, #1433

- docs:

  - FAQ: add backup using stable filesystem names recommendation
  - FAQ about glibc compatibility added, #491, glibc-check improved
  - FAQ: 'A' unchanged file; remove ambiguous entry age sentence.
  - OS X: install pkg-config to build with FUSE support, fixes #1400
  - add notes about shell/sudo pitfalls with env. vars, #1380
  - added platform feature matrix

- implement borg debug-dump-repo-objs

Version 1.0.6 (2016-07-12)
--------------------------

Bug fixes:

- Linux: handle multiple LD_PRELOAD entries correctly, #1314, #1111
- Fix crash with unclear message if the libc is not found, #1314, #1111

Other changes:

- tests:

  - Fixed O_NOATIME tests for Solaris and GNU Hurd, #1315
  - Fixed sparse file tests for (file) systems not supporting it, #1310

- docs:

  - Fixed syntax highlighting, #1313
  - misc docs: added data processing overview picture

Version 1.0.6rc1 (2016-07-10)
-----------------------------

New features:

- borg check --repair: heal damaged files if missing chunks re-appear (e.g. if the previously missing chunk was added again in a later backup archive), #148. (*) Also improved logging.

Bug fixes:

- sync_dir: silence fsync() failing with EINVAL, #1287. Some network filesystems (like smbfs) don't support this and we use this in repository code.
- borg mount (FUSE):

  - fix directories being shadowed when contained paths were also specified, #1295
  - raise I/O Error (EIO) on damaged files (unless -o allow_damaged_files is used), #1302. (*)

- borg extract: warn if a damaged file is extracted, #1299. (*)
- Added some missing return code checks (ChunkIndex._add, hashindex_resize).
- borg check: fix/optimize initial hash table size, avoids resize of the table.

Other changes:

- tests:

  - add more FUSE tests, #1284
  - deduplicate FUSE (u)mount code
  - fix borg binary test issues, #862

- docs:

  - changelog: added release dates to older borg releases
  - fix some sphinx (docs generator) warnings, #881

Notes:

(*) Some features depend on information (chunks_healthy list) added to item metadata when a file with missing chunks was "repaired" using all-zero replacement chunks. The chunks_healthy list is generated since borg 1.0.4, thus borg can't recognize such "repaired" (but content-damaged) files if the repair was done with an older borg version.

Version 1.0.5 (2016-07-07)
--------------------------

Bug fixes:

- borg mount: fix FUSE crash in xattr code on Linux introduced in 1.0.4, #1282

Other changes:

- backport some FAQ entries from master branch
- add release helper scripts
- Vagrantfile:

  - centos6: no FUSE, don't build binary
  - add xz for redhat-like dists

Version 1.0.4 (2016-07-07)
--------------------------

New features:

- borg serve --append-only, #1168. This was included because it was a simple change (append-only functionality was already present via repository config file) and makes better security now practically usable.
- BORG_REMOTE_PATH environment variable, #1258. This was included because it was a simple change (--remote-path cli option was already present) and makes borg much easier to use if you need it.
- Repository: cleanup incomplete transaction on "no space left" condition. In many cases, this can avoid a 100% full repo filesystem (which is very problematic as borg always needs free space - even to delete archives).
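
For example (path and repo URL are illustrative), the new environment variable replaces repeating --remote-path on every invocation::

  export BORG_REMOTE_PATH=/usr/local/bin/borg-1.0
  borg list ssh://user@host/./repo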

Bug fixes:

- Fix wrong handling and reporting of OSErrors in borg create, #1138. This was a serious issue: in the context of "borg create", errors like repository I/O errors (e.g. disk I/O errors, ssh repo connection errors) were handled badly and did not lead to a crash (which would be good for this case, because the repo transaction would be incomplete and trigger a transaction rollback to clean up). Now, error handling for source files is cleanly separated from every other error handling, so only problematic input files are logged and skipped.
- Implement fail-safe error handling for borg extract. Note that this isn't nearly as critical as the borg create error handling bug, since nothing is written to the repo. So this was "merely" misleading error reporting.
- Add missing error handler in directory attr restore loop.
- repo: make sure write data hits disk before the commit tag (#1236) and also sync the containing directory.
- FUSE: getxattr fail must use errno.ENOATTR, #1126 (fixes Mac OS X Finder malfunction: "zero bytes" file length, access denied)
- borg check --repair: do not lose information about the good/original chunks. If we do not lose the original chunk IDs list when "repairing" a file (replacing missing chunks with all-zero chunks), we have a chance to "heal" the file back into its original state later, in case the chunks re-appear (e.g. in a fresh backup). Healing is not implemented yet, see #148.
- fixes for --read-special mode:

  - ignore known files cache, #1241
  - fake regular file mode, #1214
  - improve symlinks handling, #1215

- remove passphrase from subprocess environment, #1105
- Ignore empty index file (will trigger index rebuild), #1195
- add missing placeholder support for --prefix, #1027
- improve exception handling for placeholder replacement
- catch and format exceptions in arg parsing
- helpers: fix "undefined name 'e'" in exception handler
- better error handling for missing repo manifest, #1043
- borg delete:

  - make it possible to delete a repo without manifest
  - borg delete --forced allows to delete corrupted archives, #1139

- borg check:

  - make borg check work for empty repo
  - fix resync and msgpacked item qualifier, #1135
  - rebuild_manifest: fix crash if 'name' or 'time' key were missing.
  - better validation of item metadata dicts, #1130
  - better validation of archive metadata dicts

- close the repo on exit - even if rollback did not work, #1197. This is rather cosmetic, it avoids repo closing in the destructor.
- tests:

  - fix sparse file test, #1170
  - flake8: ignore new F405, #1185
  - catch "invalid argument" on cygwin, #257
  - fix sparseness assertion in test prep, #1264

Other changes:

- make borg build/work on OpenSSL 1.0 and 1.1, #1187
- docs / help:

  - fix / clarify prune help, #1143
  - fix "patterns" help formatting
  - add missing docs / help about placeholders
  - resources: rename atticmatic to borgmatic
  - document sshd settings, #545
  - more details about checkpoints, add split trick, #1171
  - support docs: add freenode web chat link, #1175
  - add prune visualization / example, #723
  - add note that Fnmatch is default, #1247
  - make clear that lzma levels > 6 are a waste of cpu cycles
  - add a "do not edit" note to auto-generated files, #1250
  - update cygwin installation docs

- repository interoperability with borg master (1.1dev) branch:

  - borg check: read item metadata keys from manifest, #1147
  - read v2 hints files, #1235
  - fix hints file "unknown version" error handling bug

- tests: add tests for format_line
- llfuse: update version requirement for freebsd
- Vagrantfile:

  - use openbsd 5.9, #716
  - do not install llfuse on netbsd (broken)
  - update OSXfuse to version 3.3.3
  - use Python 3.5.2 to build the binaries

- glibc compatibility checker: scripts/glibc_check.py
- add .eggs to .gitignore

Version 1.0.3 (2016-05-20)
--------------------------

Bug fixes:

- prune: avoid that checkpoints are kept and completed archives are deleted in a prune run, #997
- prune: fix commandline argument validation - some valid command lines were considered invalid (annoying, but harmless), #942
- fix capabilities extraction on Linux (set xattrs last, after chown()), #1069
- repository: fix commit tags being seen in data
- when probing key files, do binary reads. avoids crash when non-borg binary files are located in borg's key files directory.
- handle SIGTERM and make a clean exit - avoids orphan lock files.
- repository cache: don't cache large objects (avoid using lots of temp. disk space), #1063

Other changes:

- Vagrantfile: OS X: update osxfuse / install lzma package, #933
- setup.py: add check for platform_darwin.c
- setup.py: on freebsd, use a llfuse release that builds ok
- docs / help:

  - update readthedocs URLs, #991
  - add missing docs for "borg break-lock", #992
  - borg create help: add some words about the archive name
  - borg create help: document format tags, #894

Version 1.0.2 (2016-04-16)
--------------------------

Bug fixes:

- fix malfunction and potential corruption on (nowadays rather rare) big-endian architectures or bi-endian archs in (rare) BE mode. #886, #889. cache resync / index merge was malfunctioning due to this, potentially leading to data loss. borg info had cosmetic issues (displayed wrong values). note: all (widespread) little-endian archs (like x86/x64) or bi-endian archs in (widespread) LE mode (like ARMEL, MIPSEL, ...) were NOT affected.
- add overflow and range checks for 1st (special) uint32 of the hashindex values, switch from int32 to uint32.
- fix so that refcount will never overflow, but just stick to max. value after an overflow would have occurred.
- borg delete: fix --cache-only for broken caches, #874. Makes --cache-only idempotent: it won't fail if the cache is already deleted.
- fixed borg create --one-file-system erroneously traversing into other filesystems (if starting fs device number was 0), #873
- workaround a bug in Linux fadvise FADV_DONTNEED, #907

Other changes:

- better test coverage for hashindex, incl. overflow testing, checking correct computations so endianness issues would be discovered.
- reproducible doc for ProgressIndicator*, make the build reproducible.
- use latest llfuse for vagrant machines
- docs:

  - use /path/to/repo in examples, fixes #901
  - fix confusing usage of "repo" as archive name (use "arch")

Version 1.0.1 (2016-04-08)
--------------------------

New features:

Usually there are no new features in a bugfix release, but these were added due to their high impact on security/safety/speed or because they are fixes also:

- append-only mode for repositories, #809, #36 (see docs)
- borg create: add --ignore-inode option to make borg detect unmodified files even if your filesystem does not have stable inode numbers (like sshfs and possibly CIFS).
- add options --warning, --error, --critical for missing log levels, #826. it's not recommended to suppress warnings or errors, but the user may decide this on his own. note: --warning is not given to borg serve so a <= 1.0.0 borg will still work as server (it is not needed as it is the default). do not use --error or --critical when using a <= 1.0.0 borg server.

Bug fixes:

- fix silently skipping EIO, #748
- add context manager for Repository (avoid orphan repository locks), #285
- do not sleep for >60s while waiting for lock, #773
- unpack file stats before passing to FUSE
- fix build on illumos
- don't try to backup doors or event ports (Solaris and derivatives)
- remove useless/misleading libc version display, #738
- test suite: reset exit code of persistent archiver, #844
- RemoteRepository: clean up pipe if remote open() fails
- Remote: don't print tracebacks for Error exceptions handled downstream, #792
- if BORG_PASSPHRASE is present but wrong, don't prompt for password, but fail instead, #791
- ArchiveChecker: move "orphaned objects check skipped" to INFO log level, #826
- fix capitalization, add ellipses, change log level to debug for 2 messages, #798

Other changes:

- update llfuse requirement, llfuse 1.0 works
- update OS / dist packages on build machines, #717
- prefer showing --info over -v in usage help, #859
- docs:

  - fix cygwin requirements (gcc-g++)
  - document how to debug / file filesystem issues, #664
  - fix reproducible build of api docs
  - RTD theme: CSS !important overwrite, #727
  - Document logo font. Recreate logo png. Remove GIMP logo file.

Version 1.0.0 (2016-03-05)
--------------------------

The major release number change (0.x -> 1.x) indicates bigger incompatible changes, please read the compatibility notes, adapt / test your scripts and check your backup logs.

Compatibility notes:

- drop support for python 3.2 and 3.3, require 3.4 or 3.5, #221 #65 #490. note: we provide binaries that include python 3.5.1 and everything else needed. they are an option in case you are stuck with < 3.4 otherwise.
- change encryption to be on by default (using "repokey" mode)
- moved keyfile keys from ~/.borg/keys to ~/.config/borg/keys, you can either move them manually or run "borg upgrade REPO"
- remove support for --encryption=passphrase, use borg migrate-to-repokey to switch to repokey mode, #97
- remove deprecated --compression N, use --compression zlib,N instead. in case of N=0, you could also use --compression none
- remove deprecated --hourly/daily/weekly/monthly/yearly, use --keep-hourly/daily/weekly/monthly/yearly instead
- remove deprecated --do-not-cross-mountpoints, use --one-file-system instead
- disambiguate -p option, #563:

  - -p now is same as --progress
  - -P now is same as --prefix

- remove deprecated "borg verify", use "borg extract --dry-run" instead
- cleanup environment variable semantics, #355. the environment variables used to be "yes sayers" when set, this was conceptually generalized to "automatic answerers" and they just give their value as answer (as if you typed in that value when being asked). See the "usage" / "Environment Variables" section of the docs for details.
- change the builtin default for --chunker-params, create 2MiB chunks, #343. --chunker-params new default: 19,23,21,4095 - old default: 10,23,16,4095. one of the biggest issues with borg < 1.0 (and also attic) was that it had a default target chunk size of 64kiB, thus it created a lot of chunks and thus also a huge chunk management overhead (high RAM and disk usage). please note that the new default won't change the chunks that you already have in your repository. the new big chunks do not deduplicate with the old small chunks, so expect your repo to grow at least by the size of every changed file and in the worst case (e.g. if your files cache was lost / is not used) by the size of every file (minus any compression you might use). in case you want to immediately see a much lower resource usage (RAM / disk) for chunks management, it might be better to start with a new repo than continuing in the existing repo (with an existing repo, you'd have to wait until all archives with small chunks got pruned to see a lower resource usage). if you used the old --chunker-params default value (or if you did not use the --chunker-params option at all) and you'd like to continue using small chunks (and you accept the huge resource usage that comes with that), just explicitly use borg create --chunker-params=10,23,16,4095.
- archive timestamps: the 'time' timestamp now refers to archive creation start time (was: end time), the new 'time_end' timestamp refers to archive creation end time. This might affect prune if your backups take rather long. if you give a timestamp via cli this is stored into 'time', therefore it now needs to mean archive creation start time.
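
As an illustration of the new "automatic answerer" semantics (the variable shown is one of the prompt-answering variables; see the environment variables docs for the full list)::

  # pre-answer the "repository has been relocated" prompt with "yes"
  export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
  borg list /new/path/to/repo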

New features:

- implement password roundtrip, #695

Bug fixes:

- remote end does not need cache nor keys directories, do not create them, #701
- added retry counter for passwords, #703

Other changes:

- fix compiler warnings, #697
- docs:

  - update README.rst to new changelog location in docs/changes.rst
  - add Teemu to AUTHORS
  - changes.rst: fix old chunker params, #698
  - FAQ: how to limit bandwidth

Version 1.0.0rc2 (2016-02-28)
-----------------------------

New features:

- format options for location: user, pid, fqdn, hostname, now, utcnow
- borg list --list-format
- borg prune -v --list enables the keep/prune list output, #658
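
A quick sketch of the location format options (archive name pattern and paths are invented)::

  # name archives after the client host and the current date/time
  borg create /path/to/repo::'{hostname}-{now}' ~/data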

Bug fixes:

- fix _open_rb noatime handling, #657
- add a simple archivename validator, #680
- borg create --stats: show timestamps in localtime, use same labels/formatting as borg info, #651
- llfuse compatibility fixes (now compatible with: 0.40, 0.41, 0.42)

Other changes:

- it is now possible to use "pip install borgbackup[fuse]" to automatically install the llfuse dependency using the correct version requirement for it. you still need to care about having installed the FUSE / build related OS package first, though, so that building llfuse can succeed.
- Vagrant: drop Ubuntu Precise (12.04) - does not have Python >= 3.4
- Vagrant: use pyinstaller v3.1.1 to build binaries
- docs:

  - borg upgrade: add to docs that only LOCAL repos are supported
  - borg upgrade also handles borg 0.xx -> 1.0
  - use pip extras or requirements file to install llfuse
  - fix order in release process
  - updated usage docs and other minor / cosmetic fixes
  - verified borg examples in docs, #644
  - freebsd dependency installation and FUSE configuration, #649
  - add example how to restore a raw device, #671
  - add a hint about the dev headers needed when installing from source
  - add examples for delete (and handle delete after list, before prune), #656
  - update example for borg create -v --stats (use iso datetime format), #663
  - added example to BORG_RSH docs
  - "connection closed by remote": add FAQ entry and point to issue #636

Version 1.0.0rc1 (2016-02-07)
-----------------------------

New features:

- borg migrate-to-repokey ("passphrase" -> "repokey" encryption key mode)
- implement --short for borg list REPO, #611
- implement --list for borg extract (consistency with borg create)
- borg serve: overwrite client's --restrict-to-path with ssh forced command's option value (but keep everything else from the client commandline), #544
- use $XDG_CONFIG_HOME/keys for keyfile keys (~/.config/borg/keys), #515
- "borg upgrade" moves the keyfile keys to the new location
- display both archive creation start and end time in "borg info", #627

Bug fixes:

- normalize trailing slashes for the repository path, #606
- Cache: fix exception handling in __init__, release lock, #610

Other changes:

- suppress unneeded exception context (PEP 409), simpler tracebacks
- removed special code needed to deal with imperfections / incompatibilities / missing stuff in py 3.2/3.3, simplify code that can be done simpler in 3.4
- removed some version requirements that were kept on old versions because newer did not support py 3.2 any more
- use some py 3.4+ stdlib code instead of own/openssl/pypi code:

  - use os.urandom instead of own cython openssl RAND_bytes wrapper, #493
  - use hashlib.pbkdf2_hmac from py stdlib instead of own openssl wrapper
  - use hmac.compare_digest instead of == operator (constant time comparison)
  - use stat.filemode instead of homegrown code
  - use "mock" library from stdlib, #145
  - remove borg.support (with non-broken argparse copy), it is ok in 3.4+, #358

- Vagrant: copy CHANGES.rst as symlink, #592
- cosmetic code cleanups, add flake8 to tox/travis, #4
- docs / help:

  - make "borg -h" output prettier, #591
  - slightly rephrase prune help
  - add missing example for --list option of borg create
  - quote exclude line that includes an asterisk to prevent shell expansion
  - fix dead link to license
  - delete Ubuntu Vivid, it is not supported anymore (EOL)
  - OS X binary does not work for older OS X releases, #629
  - borg serve's special support for forced/original ssh commands, #544
  - misc. updates and fixes

Version 0.30.0 (2016-01-23)
---------------------------

Compatibility notes:

- you may need to use -v (or --info) more often to actually see output emitted at INFO log level (because it is suppressed at the default WARNING log level). See the "general" section in the usage docs.
- for borg create, you need --list (additionally to -v) to see the long file list (was needed so you can have e.g. --stats alone without the long list)
- see below about BORG_DELETE_I_KNOW_WHAT_I_AM_DOING (was: BORG_CHECK_I_KNOW_WHAT_I_AM_DOING)

Bug fixes:

- fix crash when using borg create --dry-run --keep-tag-files, #570
- make sure teardown with cleanup happens for Cache and RepositoryCache, avoiding leftover locks and TEMP dir contents, #285 (partially), #548
- fix locking KeyError, partial fix for #502
- log stats consistently, #526
- add abbreviated weekday to timestamp format, fixes #496
- strip whitespace when loading exclusions from file
- unset LD_LIBRARY_PATH before invoking ssh, fixes strange OpenSSL library version warning when using the borg binary, #514
- add some error handling/fallback for C library loading, #494
- added BORG_DELETE_I_KNOW_WHAT_I_AM_DOING for check in "borg delete", #503
- remove unused "repair" rpc method name

New features:

- borg create: implement exclusions using regular expression patterns.
- borg create: implement inclusions using patterns.
- borg extract: support patterns, #361
- support different styles for patterns (see the sketch below):

  - fnmatch (`fm:` prefix, default when omitted), like borg <= 0.29.
  - shell (`sh:` prefix) with `*` not matching directory separators and `**/` matching 0..n directories
  - path prefix (`pp:` prefix, for unifying borg create pp1 pp2 into the patterns system), semantics like in borg <= 0.29
  - regular expression (`re:`), new!

- --progress option for borg upgrade (#291) and borg delete
- update progress indication more often (e.g. for borg create within big files or for borg check repo), #500
- finer chunker granularity for items metadata stream, #547, #487
- borg create --list now used (additionally to -v) to enable the verbose file list output
- display borg version below tracebacks, #532
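
A hedged illustration of the pattern styles (paths invented; quoting prevents shell expansion)::

  # shell-style: * does not match directory separators
  borg create --exclude 'sh:home/*/junk' /path/to/repo::arch /home

  # regular expression style
  borg create --exclude 're:\.tmp$' /path/to/repo::arch /home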

Other changes:

- hashtable size (and thus: RAM and disk consumption) follows a growth policy: grows fast while small, grows slower when getting bigger, #527
- Vagrantfile: use pyinstaller 3.1 to build binaries, freebsd sqlite3 fix, fixes #569
- no separate binaries for centos6 any more because the generic linux binaries also work on centos6 (or in general: on systems with a slightly older glibc than debian7)
- dev environment: require virtualenv<14.0 so we get a py32 compatible pip
- docs:

  - add space-saving chunks.archive.d trick to FAQ
  - important: clarify -v and log levels in usage -> general, please read!
  - sphinx configuration: create a simple man page from usage docs
  - add a repo server setup example
  - disable unneeded SSH features in authorized_keys examples for security.
  - borg prune only knows "--keep-within" and not "--within"
  - add gource video to resources docs, #507
  - add netbsd install instructions
  - authors: make it more clear what refers to borg and what to attic
  - document standalone binary requirements, #499
  - rephrase the mailing list section
  - development docs: run build_api and build_usage before tagging release
  - internals docs: hash table max. load factor is 0.75 now
  - markup, typo, grammar, phrasing, clarifications and other fixes.
  - add gcc gcc-c++ to redhat/fedora/corora install docs, fixes #583

Version 0.29.0 (2015-12-13)
---------------------------

Compatibility notes:

- when upgrading to 0.29.0 you need to upgrade client as well as server installations due to the locking and commandline interface changes, otherwise you'll get an error msg about a RPC protocol mismatch or a wrong commandline option. if you run a server that needs to support both old and new clients, it is suggested that you have a "borg-0.28.2" and a "borg-0.29.0" command. clients then can choose via e.g. "borg --remote-path=borg-0.29.0 ...".
- the default waiting time for a lock changed from infinity to 1 second for a better interactive user experience. if the repo you want to access is currently locked, borg will now terminate after 1s with an error message. if you have scripts that shall wait for the lock for a longer time, use --lock-wait N (with N being the maximum wait time in seconds).

Bug fixes:

- hash table tuning (better chosen hashtable load factor 0.75 and prime initial size of 1031 gave ~1000x speedup in some scenarios)
- avoid creation of an orphan lock for one case, #285
- --keep-tag-files: fix file mode and multiple tag files in one directory, #432
- fixes for "borg upgrade" (attic repo converter), #466
- remove --progress isatty magic (and also --no-progress option) again, #476
- borg init: display proper repo URL
- fix format of umask in help pages, #463

New features:

- implement --lock-wait, support timeout for UpgradableLock, #210
- implement borg break-lock command, #157
- include system info below traceback, #324
- sane remote logging, remote stderr, #461:

  - remote log output: intercept it and log it via local logging system, with "Remote: " prefixed to message. log remote tracebacks.
  - remote stderr: output it to local stderr with "Remote: " prefixed.

- add --debug and --info (same as --verbose) to set the log level of the builtin logging configuration (which otherwise defaults to warning), #426. note: there are few messages emitted at DEBUG level currently.
- optionally configure logging via env var BORG_LOGGING_CONF
- add --filter option for status characters: e.g. to show only the added or modified files (and also errors), use "borg create -v --filter=AME ...".
- more progress indicators, #394
- use ISO-8601 date and time format, #375
- "borg check --prefix" to restrict archive checking to that name prefix, #206

Other changes:

- hashindex_add C implementation (speed up cache re-sync for new archives)
- increase FUSE read_size to 1024 (speed up metadata operations)
- check/delete/prune --save-space: free unused segments quickly, #239
- increase rpc protocol version to 2 (see also Compatibility notes), #458
- silence borg by default (via default log level WARNING)
- get rid of C compiler warnings, #391
- upgrade OS X FUSE to 3.0.9 on the OS X binary build system
- use python 3.5.1 to build binaries
- docs:

  - new mailing list borgbackup@python.org, #468
  - readthedocs: color and logo improvements
  - load coverage icons over SSL (avoids mixed content)
  - more precise binary installation steps
  - update release procedure docs about OS X FUSE
  - FAQ entry about unexpected 'A' status for unchanged file(s), #403
  - add docs about 'E' file status
  - add "borg upgrade" docs, #464
  - add developer docs about output and logging
  - clarify encryption, add note about client-side encryption
  - add resources section, with videos, talks, presentations, #149
  - Borg moved to Arch Linux [community]
  - fix wrong installation instructions for archlinux

Version 0.28.2 (2015-11-15)
---------------------------

New features:

- borg create --exclude-if-present TAGFILE - exclude directories that have the given file from the backup. You can additionally give --keep-tag-files to preserve just the directory roots and the tag-files (but not backup other directory contents), #395, attic #128, attic #142

Other changes:

- do not create docs sources at build time (just have them in the repo), completely remove have_cython() hack, do not use the "mock" library at build time, #384
- avoid hidden import, make it easier for PyInstaller, easier fix for #218
- docs:

  - add description of item flags / status output, fixes #402
  - explain how to regenerate usage and API files (build_api or build_usage) and when to commit usage files directly into git, #384
  - minor install docs improvements

Version 0.28.1 (2015-11-08)
---------------------------

Bug fixes:

- do not try to build api / usage docs for production install, fixes unexpected "mock" build dependency, #384

Other changes:

- avoid using msgpack.packb at import time
- fix formatting issue in changes.rst
- fix build on readthedocs

Version 0.28.0 (2015-11-08)
---------------------------

Compatibility notes:

- changed return codes (exit codes), see docs. in short: old: 0 = ok, 1 = error. now: 0 = ok, 1 = warning, 2 = error

New features:

- refactor return codes (exit codes), fixes #61
- add --show-rc option to enable "terminating with X status, rc N" output, fixes #58, #351
- borg create backups atime and ctime additionally to mtime, fixes #317
- extract: support atime additionally to mtime
- FUSE: support ctime and atime additionally to mtime
- support borg --version
- emit a warning if we have a slow msgpack installed
- borg list --prefix=thishostname- REPO, fixes #205
- Debug commands (do not use unless you know what you are doing: debug-get-obj, debug-put-obj, debug-delete-obj, debug-dump-archive-items).
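
A sketch of acting on the new return codes in a wrapper script (paths are examples)::

  borg create /path/to/repo::my-archive ~/data
  rc=$?
  if [ "$rc" -eq 1 ]; then echo "backup finished with warnings"; fi
  if [ "$rc" -ge 2 ]; then echo "backup failed"; exit 1; fi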

Bug fixes:

- setup.py: fix bug related to BORG_LZ4_PREFIX processing
- fix "check" for repos that have incomplete chunks, fixes #364
- borg mount: fix unlocking of repository at umount time, fixes #331
- fix reading files without touching their atime, #334
- non-ascii ACL fixes for Linux, FreeBSD and OS X, #277
- fix acl_use_local_uid_gid() and add a test for it, attic #359
- borg upgrade: do not upgrade repositories in place by default, #299
- fix cascading failure with the index conversion code, #269
- borg check: implement 'cmdline' archive metadata value decoding, #311
- fix RobustUnpacker, it missed some metadata keys (new atime and ctime keys were missing, but also bsdflags). add check for unknown metadata keys.
- create from stdin: also save atime, ctime (cosmetic)
- use default_notty=False for confirmations, fixes #345
- vagrant: fix msgpack installation on centos, fixes #342
- deal with unicode errors for symlinks in same way as for regular files and have a helpful warning message about how to fix wrong locale setup, fixes #382
- add ACL keys the RobustUnpacker must know about

Other changes:

- improve file size displays, more flexible size formatters
- explicitly commit to the units standard, #289
- archiver: add E status (means that an error occurred when processing this (single) item)
- do binary releases via "github releases", closes #214
- create: use -x and --one-file-system (was: --do-not-cross-mountpoints), #296
- a lot of changes related to using "logging" module and screen output, #233
- show progress display if on a tty, output more progress information, #303
- factor out status output so it is consistent, fix surrogates removal, maybe fixes #309
- move away from RawConfigParser to ConfigParser
- archive checker: better error logging, give chunk_id and sequence numbers (can be used together with borg debug-dump-archive-items).
- do not mention the deprecated passphrase mode
- emit a deprecation warning for --compression N (giving just a number)
- misc .coveragerc fixes (and coverage measurement improvements), fixes #319
- refactor confirmation code, reduce code duplication, add tests
- prettier error messages, fixes #307, #57
- tests:

  - add a test to find disk-full issues, #327
  - travis: also run tests on Python 3.5
  - travis: use tox -r so it rebuilds the tox environments
  - test the generated pyinstaller-based binary by archiver unit tests, #215
  - vagrant: tests: announce whether fakeroot is used or not
  - vagrant: add vagrant user to fuse group for debianoid systems also
  - vagrant: llfuse install on darwin needs pkgconfig installed
  - vagrant: use pyinstaller from develop branch, fixes #336
  - benchmarks: test create, extract, list, delete, info, check, help, fixes #146
  - benchmarks: test with both the binary and the python code
  - archiver tests: test with both the binary and the python code, fixes #215
  - make basic test more robust

- docs:

  - moved docs to borgbackup.readthedocs.org, #155
  - a lot of fixes and improvements, use mobile-friendly RTD standard theme
  - use zlib,6 compression in some examples, fixes #275
  - add missing rename usage to docs, closes #279
  - include the help offered by borg help in the usage docs, fixes #293
  - include a list of major changes compared to attic into README, fixes #224
  - add OS X install instructions, #197
  - more details about the release process, #260
  - fix linux glibc requirement (binaries built on debian7 now)
  - build: move usage and API generation to setup.py
  - update docs about return codes, #61
  - remove api docs (too much breakage on rtd)
  - borgbackup install + basics presentation (asciinema)
  - describe the current style guide in documentation
  - add section about debug commands
  - warn about not running out of space
  - add example for rename
  - improve chunker params docs, fixes #362
  - minor development docs update

Version 0.27.0 (2015-10-07)
---------------------------

New features:

- "borg upgrade" command - attic -> borg one time converter / migration, #21 (a usage sketch follows below)
- temporary hack to avoid using lots of disk space for chunks.archive.d, #235. To use it: ``rm -rf chunks.archive.d ; touch chunks.archive.d``
- respect XDG_CACHE_HOME, attic #181
- add support for arbitrary SSH commands, attic #99
- borg delete --cache-only REPO (only delete cache, not REPO), attic #123

Bug fixes:

- use Debian 7 (wheezy) to build pyinstaller borgbackup binaries, fixes slow down observed when running the Centos6-built binary on Ubuntu, #222
- do not crash on empty lock.roster, fixes #232
- fix multiple issues with the cache config version check, #234
- fix segment entry header size check, attic #352, plus other error handling improvements / code deduplication there.
- always give segment and offset in repo IntegrityErrors

Other changes:

- stop producing binary wheels, remove docs about it, #147
- docs:

  - add warning about prune
  - generate usage include files only as needed
  - development docs: add Vagrant section
  - update / improve / reformat FAQ
  - hint to single-file pyinstaller binaries from README

Version 0.26.1 (2015-09-28)
---------------------------

This is a minor update, just docs and new pyinstaller binaries.

- docs update about python and binary requirements
- better docs for --read-special, fix #220
- re-built the binaries, fix #218 and #213 (glibc version issue)
- update web site about single-file pyinstaller binaries

Note: if you did a python-based installation, there is no need to upgrade.
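
A minimal sketch of the attic migration added in 0.27.0 above (the repo path is an example; see the "borg upgrade" docs for caveats before running it)::

  borg upgrade /path/to/attic/repo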

Version 0.26.0 (2015-09-19)
---------------------------

New features:

- Faster cache sync (do all in one pass, remove tar/compression stuff), #163
- BORG_REPO env var to specify the default repo, #168
- read special files as if they were regular files, #79
- implement borg create --dry-run, attic issue #267
- Normalize paths before pattern matching on OS X, #143
- support OpenBSD and NetBSD (except xattrs/ACLs)
- support / run tests on Python 3.5

Bug fixes:

- borg mount repo: use absolute path, attic #200, attic #137
- chunker: use off_t to get 64bit on 32bit platform, #178
- initialize chunker fd to -1, so it's not equal to STDIN_FILENO (0)
- fix reaction to "no" answer at delete repo prompt, #182
- setup.py: detect lz4.h header file location
- to support python < 3.2.4, add less buggy argparse lib from 3.2.6 (#194)
- fix for obtaining ``char *`` from temporary Python value (old code causes a compile error on Mint 17.2)
- llfuse 0.41 install troubles on some platforms, require < 0.41 (UnicodeDecodeError exception due to non-ascii llfuse setup.py)
- cython code: add some int types to get rid of unspecific python add / subtract operations (avoid ``undefined symbol FPE_``... error on some platforms)
- fix verbose mode display of stdin backup
- extract: warn if an include pattern never matched, fixes #209, implement counters for Include/ExcludePatterns
- archive names with slashes are invalid, attic issue #180
- chunker: add a check whether the POSIX_FADV_DONTNEED constant is defined - fixes building on OpenBSD.

Other changes:

- detect inconsistency / corruption / hash collision, #170
- replace versioneer with setuptools_scm, #106
- docs:

  - pkg-config is needed for llfuse installation
  - be more clear about pruning, attic issue #132

- unit tests:

  - xattr: ignore security.selinux attribute showing up
  - ext3 seems to need a bit more space for a sparse file
  - do not test lzma level 9 compression (avoid MemoryError)
  - work around strange mtime granularity issue on netbsd, fixes #204
  - ignore st_rdev if file is not a block/char device, fixes #203
  - stay away from the setgid and sticky mode bits

- use Vagrant to do easy cross-platform testing (#196), currently:

  - Debian 7 "wheezy" 32bit, Debian 8 "jessie" 64bit
  - Ubuntu 12.04 32bit, Ubuntu 14.04 64bit
  - Centos 7 64bit
  - FreeBSD 10.2 64bit
  - OpenBSD 5.7 64bit
  - NetBSD 6.1.5 64bit
  - Darwin (OS X Yosemite)

Version 0.25.0 (2015-08-29)
---------------------------

Compatibility notes:

- lz4 compression library (liblz4) is a new requirement (#156)
- the new compression code is very compatible: as long as you stay with zlib compression, older borg releases will still be able to read data from a repo/archive made with the new code (note: this is not the case for the default "none" compression, use "zlib,0" if you want a "no compression" mode that can be read by older borg). Also the new code is able to read repos and archives made with older borg versions (for all zlib levels 0..9).

Deprecations:

- --compression N (with N being a number, as in 0.24) is deprecated. We keep the --compression 0..9 for now to not break scripts, but it is deprecated and will be removed later, so better fix your scripts now: --compression 0 (as in 0.24) is the same as --compression zlib,0 (now). BUT: if you do not want compression, you rather want --compression none (which is the default).
  --compression 1 (in 0.24) is the same as --compression zlib,1 (now). --compression 9 (in 0.24) is the same as --compression zlib,9 (now).

New features:

- create --compression none (default, means: do not compress, just pass through data "as is". this is more efficient than zlib level 0 as used in borg 0.24)
- create --compression lz4 (super-fast, but not very high compression)
- create --compression zlib,N (slower, higher compression, default for N is 6)
- create --compression lzma,N (slowest, highest compression, default N is 6)
- honor the nodump flag (UF_NODUMP) and do not backup such items
- list --short just outputs a simple list of the files/directories in an archive
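
A quick sketch of the new compression selector (repo path and data are examples)::

  borg create --compression lz4 /path/to/repo::arch ~/data      # very fast
  borg create --compression zlib,9 /path/to/repo::arch ~/data   # slow, small
  borg create --compression none /path/to/repo::arch ~/data     # the default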

Bug fixes:

- fixed --chunker-params parameter order confusion / malfunction, fixes #154
- close fds of segments we delete (during compaction)
- close files which fell out of the lrucache
- fadvise DONTNEED now is only called for the byte range actually read, not for the whole file, fixes #158.
- fix issue with negative "all archives" size, fixes #165
- restore_xattrs: ignore if setxattr fails with EACCES, fixes #162

Other changes:

- remove fakeroot requirement for tests, tests run faster without fakeroot (test setup does not fail any more without fakeroot, so you can run with or without fakeroot), fixes #151 and #91.
- more tests for archiver
- recover_segment(): don't assume we have an fd for segment
- lrucache refactoring / cleanup, add dispose function, py.test tests
- generalize hashindex code for any key length (less hardcoding)
- lock roster: catch file not found in remove() method and ignore it
- travis CI: use requirements file
- improved docs:

  - replace hack for llfuse with proper solution (install libfuse-dev)
  - update docs about compression
  - update development docs about fakeroot
  - internals: add some words about lock files / locking system
  - support: mention BountySource and for what it can be used
  - theme: use a lighter green
  - add pypi, wheel, dist package based install docs
  - split install docs into system-specific preparations and generic instructions

Version 0.24.0 (2015-08-09)
---------------------------

Incompatible changes (compared to 0.23):

- borg now always issues --umask NNN option when invoking another borg via ssh on the repository server. By that, it's making sure it uses the same umask for remote repos as for local ones. Because of this, you must upgrade both server and client(s) to 0.24.
- the default umask is 077 now (if you do not specify via --umask) which might be a different one as you used previously. The default umask avoids that you accidentally give access permissions for group and/or others to files created by borg (e.g. the repository).

Deprecations:

- "--encryption passphrase" mode is deprecated, see #85 and #97. See the new "--encryption repokey" mode for a replacement.

New features:

- borg create --chunker-params ... to configure the chunker, fixes #16 (attic #302, attic #300, and somehow also #41). This can be used to reduce memory usage caused by chunk management overhead, so borg does not create a huge chunks index/repo index and eat all your RAM if you back up lots of data in huge files (like VM disk images). See docs/misc/create_chunker-params.txt for more information.
- borg info now reports chunk counts in the chunk index.
- borg create --compression 0..9 to select zlib compression level, fixes #66 (attic #295).
- borg init --encryption repokey (to store the encryption key into the repo), fixes #85
- improve at-end error logging, always log exceptions and set exit_code=1
- LoggedIO: better error checks / exceptions / exception handling
- implement --remote-path to allow non-default-path borg locations, #125
- implement --umask M and use 077 as default umask for better security, #117
- borg check: give a named single archive to it, fixes #139
- cache sync: show progress indication
- cache sync: reimplement the chunk index merging in C
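
A short sketch of creating a repo with the new repokey mode (path invented)::

  # store the (passphrase-protected) encryption key inside the repository
  borg init --encryption repokey /path/to/repo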

Bug fixes:

- fix segfault that happened for unreadable files (chunker: n needs to be a signed size_t), #116
- fix the repair mode, #144
- repo delete: add destroy to allowed rpc methods, fixes issue #114
- more compatible repository locking code (based on mkdir), maybe fixes #92 (attic #317, attic #201).
- better Exception msg if no Borg is installed on the remote repo server, #56
- create a RepositoryCache implementation that can cope with >2GiB, fixes attic #326.
- fix Traceback when running check --repair, attic #232
- clarify help text, fixes #73.
- add help string for --no-files-cache, fixes #140

Other changes:

- improved docs:

  - added docs/misc directory for misc. writeups that won't be included "as is" into the html docs.
  - document environment variables and return codes (attic #324, attic #52)
  - web site: add related projects, fix web site url, IRC #borgbackup
  - Fedora/Fedora-based install instructions added to docs
  - Cygwin-based install instructions added to docs
  - updated AUTHORS
  - add FAQ entries about redundancy / integrity
  - clarify that borg extract uses the cwd as extraction target
  - update internals doc about chunker params, memory usage and compression
  - added docs about development
  - add some words about resource usage in general
  - document how to backup a raw disk
  - add note about how to run borg from virtual env
  - add solutions for (ll)fuse installation problems
  - document what borg check does, fixes #138
  - reorganize borgbackup.github.io sidebar, prev/next at top
  - deduplicate and refactor the docs / README.rst

- use borg-tmp as prefix for temporary files / directories
- short prune options without "keep-" are deprecated, do not suggest them
- improved tox configuration
- remove usage of unittest.mock, always use mock from pypi
- use entrypoints instead of scripts, for better use of the wheel format and modern installs
- add requirements.d/development.txt and modify tox.ini
- use travis-ci for testing based on Linux and (new) OS X
- use coverage.py, pytest-cov and codecov.io for test coverage support

I forgot to list some stuff already implemented in 0.23.0, here they are:

New features:

- efficient archive list from manifest, meaning a big speedup for slow repo connections and "list <repo>", "delete <repo>", "prune" (attic #242, attic #167)
- big speedup for chunks cache sync (esp. for slow repo connections), fixes #18
- hashindex: improve error messages

Other changes:

- explicitly specify binary mode to open binary files
- some easy micro optimizations

Version 0.23.0 (2015-06-11)
---------------------------

Incompatible changes (compared to attic, fork related):

- changed sw name and cli command to "borg", updated docs
- package name (and name in urls) uses "borgbackup" to have fewer collisions
- changed repo / cache internal magic strings from ATTIC* to BORG*, changed cache location to .cache/borg/ - this means that it currently won't accept attic repos (see issue #21 about improving that)

Bug fixes:

- avoid defect python-msgpack releases, fixes attic #171, fixes attic #185
- fix traceback when trying to do unsupported passphrase change, fixes attic #189
- datetime does not like the year 10.000, fixes attic #139
- fix "info" all archives stats, fixes attic #183
- fix parsing with missing microseconds, fixes attic #282
- fix misleading hint the fuse ImportError handler gave, fixes attic #237
- check unpacked data from RPC for tuple type and correct length, fixes attic #127
- fix Repository._active_txn state when lock upgrade fails
- give specific path to xattr.is_enabled(), disable symlink setattr call that always fails
- fix test setup for 32bit platforms, partial fix for attic #196
- upgraded versioneer, PEP440 compliance, fixes attic #257

New features:

- less memory usage: add global option --no-cache-files
- check --last N (only check the last N archives)
- check: sort archives in reverse time order
- rename repo::oldname newname (rename repository)
- create -v output more informative
- create --progress (backup progress indicator)
- create --timestamp (utc string or reference file/dir)
- create: if "-" is given as path, read binary from stdin
- extract: if --stdout is given, write all extracted binary data to stdout
- extract --sparse (simple sparse file support)
- extra debug information for 'fread failed'
- delete <repo> (deletes whole repo + local cache)
- FUSE: reflect deduplication in allocated blocks
- only allow whitelisted RPC calls in server mode
- normalize source/exclude paths before matching
- use posix_fadvise to not spoil the OS cache, fixes attic #252
- toplevel error handler: show tracebacks for better error analysis
- sigusr1 / sigint handler to print current file infos - attic PR #286
- RPCError: include the exception args we get from remote
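
A hedged sketch of the stdin/stdout piping features (device and repo names invented; be careful with raw devices)::

  # back up a raw device image read from stdin ("-" as the path)
  dd if=/dev/sdX bs=4M | borg create /path/to/repo::sdX-image -

  # later, stream it back out of the archive
  borg extract --stdout /path/to/repo::sdX-image | dd of=/dev/sdX bs=4M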
Attic Changelog
---------------

Here you can see the full list of changes between each Attic release until
Borg forked from Attic:

Version 0.17
~~~~~~~~~~~~

(bugfix release, released on X)

- Fix hashindex ARM memory alignment issue (#309)
- Improve hashindex error messages (#298)

Version 0.16
~~~~~~~~~~~~

(bugfix release, released on May 16, 2015)

- Fix typo preventing the security confirmation prompt from working (#303)
- Improve handling of systems with improperly configured file system encoding (#289)
- Fix "All archives" output for attic info. (#183)
- More user friendly error message when repository key file is not found (#236)
- Fix parsing of iso 8601 timestamps with zero microseconds (#282)

Version 0.15
~~~~~~~~~~~~

(bugfix release, released on Apr 15, 2015)

- xattr: Be less strict about unknown/unsupported platforms (#239)
- Reduce repository listing memory usage (#163).
- Fix BrokenPipeError for remote repositories (#233)
- Fix incorrect behavior with two character directory names (#265, #268)
- Require approval before accessing relocated/moved repository (#271)
- Require approval before accessing previously unknown unencrypted
  repositories (#271)
- Fix issue with hash index files larger than 2GB.
- Fix Python 3.2 compatibility issue with noatime open() (#164)
- Include missing pyx files in dist files (#168)

Version 0.14
~~~~~~~~~~~~

(feature release, released on Dec 17, 2014)

- Added support for stripping leading path segments (#95)
  "attic extract --strip-segments X"
- Add workaround for old Linux systems without acl_extended_file_no_follow (#96)
- Add MacPorts' path to the default openssl search path (#101)
- HashIndex improvements, eliminates unnecessary IO on low memory systems.
- Fix "Number of files" output for attic info. (#124)
- limit create file permissions so files aren't read while restoring
- Fix issue with empty xattr values (#106)

Version 0.13
~~~~~~~~~~~~

(feature release, released on Jun 29, 2014)

- Fix sporadic "Resource temporarily unavailable" when using remote repositories
- Reduce file cache memory usage (#90)
- Faster AES encryption (utilizing AES-NI when available)
- Experimental Linux, OS X and FreeBSD ACL support (#66)
- Added support for backup and restore of BSDFlags (OSX, FreeBSD) (#56)
- Fix bug where xattrs on symlinks were not correctly restored
- Added cachedir support. CACHEDIR.TAG compatible cache directories can now be
  excluded using ``--exclude-caches`` (#74)
- Fix crash on extreme mtime timestamps (year 2400+) (#81)
- Fix Python 3.2 specific lockf issue (EDEADLK)

Version 0.12
~~~~~~~~~~~~

(feature release, released on April 7, 2014)

- Python 3.4 support (#62)
- Various documentation improvements, a new style
- ``attic mount`` now supports mounting an entire repository, not only
  individual archives (#59)
- Added option to restrict remote repository access to specific path(s):
  ``attic serve --restrict-to-path X`` (#51)
- Include "all archives" size information in "--stats" output. (#54)
- Added ``--stats`` option to ``attic delete`` and ``attic prune``
- Fixed bug where ``attic prune`` used UTC instead of the local time zone
  when determining which archives to keep.
- Switch to SI units (powers of 1000 instead of 1024) when printing file sizes
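The unit switch above is purely cosmetic but frequently confusing: with SI
prefixes the same byte count shows a larger number, since 1 MB is 1000*1000
bytes while 1 MiB is 1024*1024 bytes. A minimal sketch of both behaviours
(``format_file_size`` is a hypothetical helper, not attic's actual
formatter)::

    def format_file_size(n, power=1000, units=('B', 'kB', 'MB', 'GB', 'TB', 'PB')):
        # walk up the prefixes until the value fits below one `power` step
        for unit in units[:-1]:
            if abs(n) < power:
                return '%.2f %s' % (n, unit)
            n /= power
        return '%.2f %s' % (n, units[-1])

    print(format_file_size(2500000))        # '2.50 MB' - SI, new behaviour
    print(format_file_size(2500000, 1024))  # '2.38 MB' - old 1024-based value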
Version 0.11
~~~~~~~~~~~~

(feature release, released on March 7, 2014)

- New "check" command for repository consistency checking (#24)
- Documentation improvements
- Fix exception during "attic create" with repeated files (#39)
- New "--exclude-from" option for attic create/extract/verify.
- Improved archive metadata deduplication.
- "attic verify" has been deprecated. Use "attic extract --dry-run" instead.
- "attic prune --hourly|daily|..." has been deprecated.
  Use "attic prune --keep-hourly|daily|..." instead.
- Ignore xattr errors during "extract" if not supported by the filesystem. (#46)

Version 0.10
~~~~~~~~~~~~

(bugfix release, released on Jan 30, 2014)

- Fix deadlock when extracting 0 sized files from remote repositories
- "--exclude" wildcard patterns are now properly applied to the full path,
  not just the file name part (#5); see the sketch below
- Make source code endianness agnostic (#1)

Version 0.9
~~~~~~~~~~~

(feature release, released on Jan 23, 2014)

- Remote repository speed and reliability improvements.
- Fix sorting of segment names to ignore NFS left-over files. (#17)
- Fix incorrect display of time (#13)
- Improved error handling / reporting. (#12)
- Use fcntl() instead of flock() when locking repository/cache. (#15)
- Let ssh figure out port/user if not specified so we don't override
  .ssh/config (#9)
- Improved libcrypto path detection (#23).

Version 0.8.1
~~~~~~~~~~~~~

(bugfix release, released on Oct 4, 2013)

- Fix segmentation fault issue.

Version 0.8
~~~~~~~~~~~

(feature release, released on Oct 3, 2013)

- Fix xattr issue when backing up sshfs filesystems (#4)
- Fix issue with excessive index file size (#6)
- Support access of read only repositories.
- New syntax to enable repository encryption:
  attic init --encryption="none|passphrase|keyfile".
- Detect and abort if repository is older than the cache.

Version 0.7
~~~~~~~~~~~

(feature release, released on Aug 5, 2013)

- Ported to FreeBSD
- Improved documentation
- Experimental: Archives mountable as FUSE filesystems.
- The "user." prefix is no longer stripped from xattrs on Linux

Version 0.6.1
~~~~~~~~~~~~~

(bugfix release, released on July 19, 2013)

- Fixed an issue where mtime was not always correctly restored.

Version 0.6
~~~~~~~~~~~

First public release on July 9, 2013
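The exclude fix referenced above (attic #5) is easy to demonstrate: matching
a pattern like ``home/user/cache/*`` against only the file name part can
never succeed, so the exclusion was silently ignored. A minimal sketch of the
old versus the fixed matching, using plain stdlib ``fnmatch`` rather than
attic's actual matcher::

    import fnmatch
    import os.path

    path = 'home/user/cache/index.dat'
    pattern = 'home/user/cache/*'

    # old, buggy behaviour: only the file name part was matched
    print(fnmatch.fnmatch(os.path.basename(path), pattern))  # False
    # fixed behaviour: the full path is matched, the file is excluded
    print(fnmatch.fnmatch(path, pattern))                    # True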
borgbackup-1.1.5/PKG-INFO0000644000175000017500000002170013260002401014026 0ustar twtw00000000000000
Metadata-Version: 1.1
Name: borgbackup
Version: 1.1.5
Summary: Deduplicated, encrypted, authenticated and compressed backups
Home-page: https://borgbackup.readthedocs.io/
Author: The Borg Collective (see AUTHORS file)
Author-email: borgbackup@python.org
License: BSD
Description: |screencast_basic|

More screencasts: `installation`_, `advanced usage`_

What is BorgBackup?
-------------------

BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.

The main goal of Borg is to provide an efficient and secure way to backup
data. The data deduplication technique used makes Borg suitable for daily
backups since only changes are stored. The authenticated encryption technique
makes it suitable for backups to not fully trusted targets.

See the `installation manual`_ or, if you have already downloaded Borg,
``docs/installation.rst`` to get started with Borg.
There is also `offline documentation`_ available, in multiple formats.

.. _installation manual: https://borgbackup.readthedocs.org/en/stable/installation.html
.. _offline documentation: https://readthedocs.org/projects/borgbackup/downloads

Main features
~~~~~~~~~~~~~

**Space efficient storage**
  Deduplication based on content-defined chunking is used to reduce the number
  of bytes stored: each file is split into a number of variable length chunks
  and only chunks that have never been seen before are added to the repository.

  A chunk is considered duplicate if its id_hash value is identical.
  A cryptographically strong hash or MAC function is used as id_hash,
  e.g. (hmac-)sha256.

  To deduplicate, all the chunks in the same repository are considered, no
  matter whether they come from different machines, from previous backups,
  from the same backup or even from the same single file.

  Compared to other deduplication approaches, this method does NOT depend on:

  * file/directory names staying the same: So you can move your stuff around
    without killing the deduplication, even between machines sharing a repo.

  * complete files or time stamps staying the same: If a big file changes a
    little, only a few new chunks need to be stored - this is great for VMs
    or raw disks.

  * The absolute position of a data chunk inside a file: Stuff may get
    shifted and will still be found by the deduplication algorithm.
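The id_hash-based deduplication just described fits in a few lines: compute a
(possibly keyed) strong hash over each chunk and store the chunk only if that
id has never been seen. A minimal sketch, assuming HMAC-SHA256 as the id_hash
(``DedupStore`` is illustrative; borg's real repository adds refcounts,
compression and encryption on top)::

    import hashlib
    import hmac

    def id_hash(data, key):
        # keyed MAC, so watching chunk ids does not let an attacker
        # confirm guesses about chunk contents
        return hmac.new(key, data, hashlib.sha256).digest()

    class DedupStore:
        def __init__(self, key):
            self.key = key
            self.chunks = {}                    # id -> stored chunk data

        def add(self, data):
            chunk_id = id_hash(data, self.key)
            if chunk_id not in self.chunks:     # never-seen chunks only
                self.chunks[chunk_id] = data
            return chunk_id                     # archives reference this id

Because the id depends only on the chunk's content, identical chunks coming
from different machines, backups or files all collapse onto one stored copy.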
**Speed**
  * performance critical code (chunking, compression, encryption) is
    implemented in C/Cython
  * local caching of files/chunks index data
  * quick detection of unmodified files

**Data encryption**
  All data can be protected using 256-bit AES encryption, data integrity and
  authenticity is verified using HMAC-SHA256. Data is encrypted clientside.

**Compression**
  All data can be optionally compressed (a small comparison sketch follows
  after this file):

  * lz4 (super fast, low compression)
  * zstd (wide range from high speed and low compression to high compression
    and lower speed)
  * zlib (medium speed and compression)
  * lzma (low speed, high compression)

**Off-site backups**
  Borg can store data on any remote host accessible over SSH. If Borg is
  installed on the remote host, big performance gains can be achieved
  compared to using a network filesystem (sshfs, nfs, ...).

**Backups mountable as filesystems**
  Backup archives are mountable as userspace filesystems for easy interactive
  backup examination and restores (e.g. by using a regular file manager).

**Easy installation on multiple platforms**
  We offer single-file binaries that do not require installing anything -
  you can just run them on these platforms:

  * Linux
  * Mac OS X
  * FreeBSD
  * OpenBSD and NetBSD (no xattrs/ACLs support or binaries yet)
  * Cygwin (experimental, no binaries yet)
  * Linux Subsystem of Windows 10 (experimental)

**Free and Open Source Software**
  * security and functionality can be audited independently
  * licensed under the BSD (3-clause) license, see `License`_ for the
    complete license

Easy to use
~~~~~~~~~~~

Initialize a new backup repository (see ``borg init --help`` for encryption options)::

    $ borg init -e repokey /path/to/repo

Create a backup archive::

    $ borg create /path/to/repo::Saturday1 ~/Documents

Now doing another backup, just to show off the great deduplication::

    $ borg create -v --stats /path/to/repo::Saturday2 ~/Documents
    -----------------------------------------------------------------------------
    Archive name: Saturday2
    Archive fingerprint: 622b7c53c...
    Time (start): Sat, 2016-02-27 14:48:13
    Time (end):   Sat, 2016-02-27 14:48:14
    Duration: 0.88 seconds
    Number of files: 163
    -----------------------------------------------------------------------------
                   Original size      Compressed size    Deduplicated size
    This archive:        6.85 MB              6.85 MB             30.79 kB  <-- !
    All archives:       13.69 MB             13.71 MB              6.88 MB
                   Unique chunks         Total chunks
    Chunk index:             167                  330
    -----------------------------------------------------------------------------

For a graphical frontend refer to our complementary project `BorgWeb `_.

Helping, Donations and Bounties
-------------------------------

Your help is always welcome! Spread the word, give feedback, help with
documentation, testing or development.

You can also give monetary support to the project; see the following page
for details: https://www.borgbackup.org/support/free.html#bounties-and-fundraisers

Links
-----

* `Main Web Site `_
* `Releases `_, `PyPI packages `_ and `ChangeLog `_
* `Offline Documentation `_
* `GitHub `_ and `Issue Tracker `_.
* `Web-Chat (IRC) `_ and `Mailing List `_
* `License `_
* `Security contact `_

Compatibility notes
-------------------

EXPECT THAT WE WILL BREAK COMPATIBILITY REPEATEDLY WHEN MAJOR RELEASE NUMBER
CHANGES (like when going from 0.x.y to 1.0.0 or from 1.x.y to 2.0.0).

NOT RELEASED DEVELOPMENT VERSIONS HAVE UNKNOWN COMPATIBILITY PROPERTIES.

THIS IS SOFTWARE IN DEVELOPMENT, DECIDE YOURSELF WHETHER IT FITS YOUR NEEDS.

Security issues should be reported to the `Security contact`_ (or see
``docs/support.rst`` in the source distribution).

Platform: Linux
Platform: MacOS X
Platform: FreeBSD
Platform: OpenBSD
Platform: NetBSD
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: BSD :: OpenBSD
Classifier: Operating System :: POSIX :: BSD :: NetBSD
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Topic :: Security :: Cryptography
Classifier: Topic :: System :: Archiving :: Backup
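A quick feel for the compression trade-off named in the **Compression** entry
above can be had with the two codecs that ship in the Python standard
library; lz4 and zstd need third-party bindings and are therefore left out of
this sketch::

    import lzma
    import zlib

    data = b'All data can be optionally compressed. ' * 10000

    for name, compress in [('zlib', lambda d: zlib.compress(d, 6)),
                           ('lzma', lambda d: lzma.compress(d, preset=6))]:
        out = compress(data)
        print('%-4s %d -> %d bytes' % (name, len(data), len(out)))

On highly repetitive input like this both shrink the data dramatically; on
real file data the usual pattern is that zlib is faster while lzma compresses
harder, which is exactly the trade-off the list describes.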
borgbackup-1.1.5/conftest.py0000644000175000017500000000525413257361140015152 0ustar twtw00000000000000
import os

import pytest

# IMPORTANT keep this above all other borg imports to avoid inconsistent values
# for `from borg.constants import PBKDF2_ITERATIONS` (or star import) usages before
# this is executed
from borg import constants
# no fixture-based monkey-patching since star-imports are used for the constants module
constants.PBKDF2_ITERATIONS = 1

# needed to get pretty assertion failures in unit tests:
if hasattr(pytest, 'register_assert_rewrite'):
    pytest.register_assert_rewrite('borg.testsuite')

import borg.cache
from borg.logger import setup_logging

# Ensure that the loggers exist for all tests
setup_logging()

from borg.testsuite import has_lchflags, has_llfuse
from borg.testsuite import are_symlinks_supported, are_hardlinks_supported, is_utime_fully_supported
from borg.testsuite.platform import fakeroot_detected, are_acls_working
from borg import xattr


@pytest.fixture(autouse=True)
def clean_env(tmpdir_factory, monkeypatch):
    # avoid that we access / modify the user's normal .config / .cache directory:
    monkeypatch.setenv('XDG_CONFIG_HOME', tmpdir_factory.mktemp('xdg-config-home'))
    monkeypatch.setenv('XDG_CACHE_HOME', tmpdir_factory.mktemp('xdg-cache-home'))
    # also avoid to use anything from the outside environment:
    keys = [key for key in os.environ if key.startswith('BORG_')]
    for key in keys:
        monkeypatch.delenv(key, raising=False)


def pytest_report_header(config, startdir):
    tests = {
        "BSD flags": has_lchflags,
        "fuse": has_llfuse,
        "root": not fakeroot_detected(),
        "symlinks": are_symlinks_supported(),
        "hardlinks": are_hardlinks_supported(),
        "atime/mtime": is_utime_fully_supported(),
        "modes": "BORG_TESTS_IGNORE_MODES" not in os.environ
    }
    enabled = []
    disabled = []
    for test in tests:
        if tests[test]:
            enabled.append(test)
        else:
            disabled.append(test)
    output = "Tests enabled: " + ", ".join(enabled) + "\n"
    output += "Tests disabled: " + ", ".join(disabled)
    return output


class DefaultPatches:
    def __init__(self, request):
        self.org_cache_wipe_cache = borg.cache.LocalCache.wipe_cache

        def wipe_should_not_be_called(*a, **kw):
            raise AssertionError("Cache wipe was triggered, if this is part of the test add "
                                 "@pytest.mark.allow_cache_wipe")
        if 'allow_cache_wipe' not in request.keywords:
            borg.cache.LocalCache.wipe_cache = wipe_should_not_be_called
        request.addfinalizer(self.undo)

    def undo(self):
        borg.cache.LocalCache.wipe_cache = self.org_cache_wipe_cache


@pytest.fixture(autouse=True)
def default_patches(request):
    return DefaultPatches(request)
borgbackup-1.1.5/README.rst0000644000175000017500000001742413257361140014444 0ustar twtw00000000000000
|screencast_basic|

More screencasts: `installation`_, `advanced usage`_

What is BorgBackup?
-------------------

BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.

The main goal of Borg is to provide an efficient and secure way to backup
data. The data deduplication technique used makes Borg suitable for daily
backups since only changes are stored. The authenticated encryption technique
makes it suitable for backups to not fully trusted targets.

See the `installation manual`_ or, if you have already downloaded Borg,
``docs/installation.rst`` to get started with Borg.
There is also `offline documentation`_ available, in multiple formats.

.. _installation manual: https://borgbackup.readthedocs.org/en/stable/installation.html
.. _offline documentation: https://readthedocs.org/projects/borgbackup/downloads

Main features
~~~~~~~~~~~~~

**Space efficient storage**
  Deduplication based on content-defined chunking is used to reduce the number
  of bytes stored: each file is split into a number of variable length chunks
  and only chunks that have never been seen before are added to the repository.

  A chunk is considered duplicate if its id_hash value is identical.
  A cryptographically strong hash or MAC function is used as id_hash,
  e.g. (hmac-)sha256.

  To deduplicate, all the chunks in the same repository are considered, no
  matter whether they come from different machines, from previous backups,
  from the same backup or even from the same single file.

  Compared to other deduplication approaches, this method does NOT depend on:

  * file/directory names staying the same: So you can move your stuff around
    without killing the deduplication, even between machines sharing a repo.

  * complete files or time stamps staying the same: If a big file changes a
    little, only a few new chunks need to be stored - this is great for VMs
    or raw disks.

  * The absolute position of a data chunk inside a file: Stuff may get
    shifted and will still be found by the deduplication algorithm.
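The last bullet - boundaries that survive shifted content - is the key
property of content-defined chunking. Borg's real chunker uses a buzhash
rolling hash with configurable parameters; the toy version below (which
hashes a small window at every position and is far too slow for real use)
demonstrates only the shift-resistance::

    import hashlib
    import os

    WINDOW = 16    # the boundary decision only looks at the last 16 bytes
    MASK = 0x3f    # cut when the low 6 bits are zero -> ~64 byte avg chunks

    def chunks(data):
        out, start = [], 0
        for i in range(WINDOW, len(data) + 1):
            h = int.from_bytes(hashlib.sha256(data[i - WINDOW:i]).digest()[:4], 'big')
            if h & MASK == 0:      # boundary depends only on local content
                out.append(data[start:i])
                start = i
        if start < len(data):
            out.append(data[start:])
        return out

    data = os.urandom(4096)
    a = set(chunks(data))
    b = set(chunks(b'XYZ' + data))   # shift the whole file by three bytes
    print('%d of %d chunks unchanged' % (len(a & b), len(a)))

Apart from the first chunk (which now starts with ``XYZ``), every chunk is
re-found unchanged, because each cut point is decided by nearby bytes rather
than by absolute file offsets.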
**Speed**
  * performance critical code (chunking, compression, encryption) is
    implemented in C/Cython
  * local caching of files/chunks index data
  * quick detection of unmodified files

**Data encryption**
  All data can be protected using 256-bit AES encryption, data integrity and
  authenticity is verified using HMAC-SHA256. Data is encrypted clientside.

**Compression**
  All data can be optionally compressed:

  * lz4 (super fast, low compression)
  * zstd (wide range from high speed and low compression to high compression
    and lower speed)
  * zlib (medium speed and compression)
  * lzma (low speed, high compression)

**Off-site backups**
  Borg can store data on any remote host accessible over SSH. If Borg is
  installed on the remote host, big performance gains can be achieved
  compared to using a network filesystem (sshfs, nfs, ...).

**Backups mountable as filesystems**
  Backup archives are mountable as userspace filesystems for easy interactive
  backup examination and restores (e.g. by using a regular file manager).

**Easy installation on multiple platforms**
  We offer single-file binaries that do not require installing anything -
  you can just run them on these platforms:

  * Linux
  * Mac OS X
  * FreeBSD
  * OpenBSD and NetBSD (no xattrs/ACLs support or binaries yet)
  * Cygwin (experimental, no binaries yet)
  * Linux Subsystem of Windows 10 (experimental)

**Free and Open Source Software**
  * security and functionality can be audited independently
  * licensed under the BSD (3-clause) license, see `License`_ for the
    complete license

Easy to use
~~~~~~~~~~~

Initialize a new backup repository (see ``borg init --help`` for encryption options)::

    $ borg init -e repokey /path/to/repo

Create a backup archive::

    $ borg create /path/to/repo::Saturday1 ~/Documents

Now doing another backup, just to show off the great deduplication::

    $ borg create -v --stats /path/to/repo::Saturday2 ~/Documents
    -----------------------------------------------------------------------------
    Archive name: Saturday2
    Archive fingerprint: 622b7c53c...
    Time (start): Sat, 2016-02-27 14:48:13
    Time (end):   Sat, 2016-02-27 14:48:14
    Duration: 0.88 seconds
    Number of files: 163
    -----------------------------------------------------------------------------
                   Original size      Compressed size    Deduplicated size
    This archive:        6.85 MB              6.85 MB             30.79 kB  <-- !
    All archives:       13.69 MB             13.71 MB              6.88 MB
                   Unique chunks         Total chunks
    Chunk index:             167                  330
    -----------------------------------------------------------------------------

For a graphical frontend refer to our complementary project `BorgWeb `_.

Helping, Donations and Bounties
-------------------------------

Your help is always welcome! Spread the word, give feedback, help with
documentation, testing or development.

You can also give monetary support to the project; see the following page
for details: https://www.borgbackup.org/support/free.html#bounties-and-fundraisers

Links
-----

* `Main Web Site `_
* `Releases `_, `PyPI packages `_ and `ChangeLog `_
* `Offline Documentation `_
* `GitHub `_ and `Issue Tracker `_.
* `Web-Chat (IRC) `_ and `Mailing List `_
* `License `_
* `Security contact `_

Compatibility notes
-------------------

EXPECT THAT WE WILL BREAK COMPATIBILITY REPEATEDLY WHEN MAJOR RELEASE NUMBER
CHANGES (like when going from 0.x.y to 1.0.0 or from 1.x.y to 2.0.0).

NOT RELEASED DEVELOPMENT VERSIONS HAVE UNKNOWN COMPATIBILITY PROPERTIES.

THIS IS SOFTWARE IN DEVELOPMENT, DECIDE YOURSELF WHETHER IT FITS YOUR NEEDS.

Security issues should be reported to the `Security contact`_ (or see
``docs/support.rst`` in the source distribution).

..
start-badges |doc| |build| |coverage| |bestpractices| |bounties| .. |bounties| image:: https://api.bountysource.com/badge/team?team_id=78284&style=bounties_posted :alt: Bounty Source :target: https://www.bountysource.com/teams/borgbackup .. |doc| image:: https://readthedocs.org/projects/borgbackup/badge/?version=stable :alt: Documentation :target: https://borgbackup.readthedocs.org/en/stable/ .. |build| image:: https://api.travis-ci.org/borgbackup/borg.svg :alt: Build Status :target: https://travis-ci.org/borgbackup/borg .. |coverage| image:: https://codecov.io/github/borgbackup/borg/coverage.svg?branch=master :alt: Test Coverage :target: https://codecov.io/github/borgbackup/borg?branch=master .. |screencast_basic| image:: https://asciinema.org/a/133292.png :alt: BorgBackup Basic Usage :target: https://asciinema.org/a/133292?autoplay=1&speed=1 .. _installation: https://asciinema.org/a/133291?autoplay=1&speed=1 .. _advanced usage: https://asciinema.org/a/133293?autoplay=1&speed=1 .. |bestpractices| image:: https://bestpractices.coreinfrastructure.org/projects/271/badge :alt: Best Practices Score :target: https://bestpractices.coreinfrastructure.org/projects/271 .. end-badges borgbackup-1.1.5/LICENSE0000644000175000017500000000301713257361140013753 0ustar twtw00000000000000Copyright (C) 2015-2018 The Borg Collective (see AUTHORS file) Copyright (C) 2010-2014 Jonas Borgström All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. borgbackup-1.1.5/setup.cfg0000644000175000017500000000042013260002401014544 0ustar twtw00000000000000[tool:pytest] python_files = testsuite/*.py [flake8] ignore = E122,E123,E125,E126,E127,E128,E226,E402,E722,E731,E741,F401,F405,F811 max-line-length = 255 exclude = build,dist,.git,.idea,.cache,.tox,docs/conf.py [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 borgbackup-1.1.5/AUTHORS0000644000175000017500000000306113257361140014015 0ustar twtw00000000000000Borg authors ("The Borg Collective") ------------------------------------ - Thomas Waldmann - Antoine Beaupré - Radek Podgorny - Yuri D'Elia - Michael Hanselmann - Teemu Toivanen - Marian Beermann - Martin Hostettler - Daniel Reichelt - Lauri Niskanen - Abdel-Rahman A. 
(Abogical) Borg is a fork of Attic. Attic authors ------------- Attic is written and maintained by Jonas Borgström and various contributors: Attic Development Lead `````````````````````` - Jonas Borgström Attic Patches and Suggestions ````````````````````````````` - Brian Johnson - Cyril Roussillon - Dan Christensen - Jeremy Maitin-Shepard - Johann Klähn - Petros Moisiadis - Thomas Waldmann BLAKE2 ------ Borg includes BLAKE2: Copyright 2012, Samuel Neves , licensed under the terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0. Slicing CRC32 ------------- Borg includes a fast slice-by-8 implementation of CRC32, Copyright 2011-2015 Stephan Brumme, licensed under the terms of a zlib license. See http://create.stephan-brumme.com/crc32/ Folding CRC32 ------------- Borg includes an extremely fast folding implementation of CRC32, Copyright 2013 Intel Corporation, licensed under the terms of the zlib license. xxHash ------ XXH64, a fast non-cryptographic hash algorithm. Copyright 2012-2016 Yann Collet, licensed under a BSD 2-clause license. borgbackup-1.1.5/src/0000755000175000017500000000000013260002401013516 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/0000755000175000017500000000000013260002401014447 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/hashindex.c0000644000175000017500000176724413260002400016612 0ustar twtw00000000000000/* Generated by Cython 0.27.3 */ #define PY_SSIZE_T_CLEAN #include "Python.h" #ifndef Py_PYTHON_H #error Python headers needed to compile C extensions, please install development version of Python. #elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) #error Cython requires Python 2.6+ or Python 3.3+. #else #define CYTHON_ABI "0_27_3" #define CYTHON_FUTURE_DIVISION 0 #include #ifndef offsetof #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) #endif #if !defined(WIN32) && !defined(MS_WINDOWS) #ifndef __stdcall #define __stdcall #endif #ifndef __cdecl #define __cdecl #endif #ifndef __fastcall #define __fastcall #endif #endif #ifndef DL_IMPORT #define DL_IMPORT(t) t #endif #ifndef DL_EXPORT #define DL_EXPORT(t) t #endif #define __PYX_COMMA , #ifndef HAVE_LONG_LONG #if PY_VERSION_HEX >= 0x02070000 #define HAVE_LONG_LONG #endif #endif #ifndef PY_LONG_LONG #define PY_LONG_LONG LONG_LONG #endif #ifndef Py_HUGE_VAL #define Py_HUGE_VAL HUGE_VAL #endif #ifdef PYPY_VERSION #define CYTHON_COMPILING_IN_PYPY 1 #define CYTHON_COMPILING_IN_PYSTON 0 #define CYTHON_COMPILING_IN_CPYTHON 0 #undef CYTHON_USE_TYPE_SLOTS #define CYTHON_USE_TYPE_SLOTS 0 #undef CYTHON_USE_PYTYPE_LOOKUP #define CYTHON_USE_PYTYPE_LOOKUP 0 #if PY_VERSION_HEX < 0x03050000 #undef CYTHON_USE_ASYNC_SLOTS #define CYTHON_USE_ASYNC_SLOTS 0 #elif !defined(CYTHON_USE_ASYNC_SLOTS) #define CYTHON_USE_ASYNC_SLOTS 1 #endif #undef CYTHON_USE_PYLIST_INTERNALS #define CYTHON_USE_PYLIST_INTERNALS 0 #undef CYTHON_USE_UNICODE_INTERNALS #define CYTHON_USE_UNICODE_INTERNALS 0 #undef CYTHON_USE_UNICODE_WRITER #define CYTHON_USE_UNICODE_WRITER 0 #undef CYTHON_USE_PYLONG_INTERNALS #define CYTHON_USE_PYLONG_INTERNALS 0 #undef CYTHON_AVOID_BORROWED_REFS #define CYTHON_AVOID_BORROWED_REFS 1 #undef CYTHON_ASSUME_SAFE_MACROS #define CYTHON_ASSUME_SAFE_MACROS 0 #undef CYTHON_UNPACK_METHODS #define CYTHON_UNPACK_METHODS 0 #undef CYTHON_FAST_THREAD_STATE #define CYTHON_FAST_THREAD_STATE 0 #undef CYTHON_FAST_PYCALL #define CYTHON_FAST_PYCALL 0 #undef CYTHON_PEP489_MULTI_PHASE_INIT #define CYTHON_PEP489_MULTI_PHASE_INIT 0 #undef CYTHON_USE_TP_FINALIZE 
#define CYTHON_USE_TP_FINALIZE 0 #elif defined(PYSTON_VERSION) #define CYTHON_COMPILING_IN_PYPY 0 #define CYTHON_COMPILING_IN_PYSTON 1 #define CYTHON_COMPILING_IN_CPYTHON 0 #ifndef CYTHON_USE_TYPE_SLOTS #define CYTHON_USE_TYPE_SLOTS 1 #endif #undef CYTHON_USE_PYTYPE_LOOKUP #define CYTHON_USE_PYTYPE_LOOKUP 0 #undef CYTHON_USE_ASYNC_SLOTS #define CYTHON_USE_ASYNC_SLOTS 0 #undef CYTHON_USE_PYLIST_INTERNALS #define CYTHON_USE_PYLIST_INTERNALS 0 #ifndef CYTHON_USE_UNICODE_INTERNALS #define CYTHON_USE_UNICODE_INTERNALS 1 #endif #undef CYTHON_USE_UNICODE_WRITER #define CYTHON_USE_UNICODE_WRITER 0 #undef CYTHON_USE_PYLONG_INTERNALS #define CYTHON_USE_PYLONG_INTERNALS 0 #ifndef CYTHON_AVOID_BORROWED_REFS #define CYTHON_AVOID_BORROWED_REFS 0 #endif #ifndef CYTHON_ASSUME_SAFE_MACROS #define CYTHON_ASSUME_SAFE_MACROS 1 #endif #ifndef CYTHON_UNPACK_METHODS #define CYTHON_UNPACK_METHODS 1 #endif #undef CYTHON_FAST_THREAD_STATE #define CYTHON_FAST_THREAD_STATE 0 #undef CYTHON_FAST_PYCALL #define CYTHON_FAST_PYCALL 0 #undef CYTHON_PEP489_MULTI_PHASE_INIT #define CYTHON_PEP489_MULTI_PHASE_INIT 0 #undef CYTHON_USE_TP_FINALIZE #define CYTHON_USE_TP_FINALIZE 0 #else #define CYTHON_COMPILING_IN_PYPY 0 #define CYTHON_COMPILING_IN_PYSTON 0 #define CYTHON_COMPILING_IN_CPYTHON 1 #ifndef CYTHON_USE_TYPE_SLOTS #define CYTHON_USE_TYPE_SLOTS 1 #endif #if PY_VERSION_HEX < 0x02070000 #undef CYTHON_USE_PYTYPE_LOOKUP #define CYTHON_USE_PYTYPE_LOOKUP 0 #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) #define CYTHON_USE_PYTYPE_LOOKUP 1 #endif #if PY_MAJOR_VERSION < 3 #undef CYTHON_USE_ASYNC_SLOTS #define CYTHON_USE_ASYNC_SLOTS 0 #elif !defined(CYTHON_USE_ASYNC_SLOTS) #define CYTHON_USE_ASYNC_SLOTS 1 #endif #if PY_VERSION_HEX < 0x02070000 #undef CYTHON_USE_PYLONG_INTERNALS #define CYTHON_USE_PYLONG_INTERNALS 0 #elif !defined(CYTHON_USE_PYLONG_INTERNALS) #define CYTHON_USE_PYLONG_INTERNALS 1 #endif #ifndef CYTHON_USE_PYLIST_INTERNALS #define CYTHON_USE_PYLIST_INTERNALS 1 #endif #ifndef CYTHON_USE_UNICODE_INTERNALS #define CYTHON_USE_UNICODE_INTERNALS 1 #endif #if PY_VERSION_HEX < 0x030300F0 #undef CYTHON_USE_UNICODE_WRITER #define CYTHON_USE_UNICODE_WRITER 0 #elif !defined(CYTHON_USE_UNICODE_WRITER) #define CYTHON_USE_UNICODE_WRITER 1 #endif #ifndef CYTHON_AVOID_BORROWED_REFS #define CYTHON_AVOID_BORROWED_REFS 0 #endif #ifndef CYTHON_ASSUME_SAFE_MACROS #define CYTHON_ASSUME_SAFE_MACROS 1 #endif #ifndef CYTHON_UNPACK_METHODS #define CYTHON_UNPACK_METHODS 1 #endif #ifndef CYTHON_FAST_THREAD_STATE #define CYTHON_FAST_THREAD_STATE 1 #endif #ifndef CYTHON_FAST_PYCALL #define CYTHON_FAST_PYCALL 1 #endif #ifndef CYTHON_PEP489_MULTI_PHASE_INIT #define CYTHON_PEP489_MULTI_PHASE_INIT (0 && PY_VERSION_HEX >= 0x03050000) #endif #ifndef CYTHON_USE_TP_FINALIZE #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) #endif #endif #if !defined(CYTHON_FAST_PYCCALL) #define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) #endif #if CYTHON_USE_PYLONG_INTERNALS #include "longintrepr.h" #undef SHIFT #undef BASE #undef MASK #endif #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) #define Py_OptimizeFlag 0 #endif #define __PYX_BUILD_PY_SSIZE_T "n" #define CYTHON_FORMAT_SSIZE_T "z" #if PY_MAJOR_VERSION < 3 #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) #define __Pyx_DefaultClassType PyClass_Type #else #define 
__Pyx_BUILTIN_MODULE_NAME "builtins" #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) #define __Pyx_DefaultClassType PyType_Type #endif #ifndef Py_TPFLAGS_CHECKTYPES #define Py_TPFLAGS_CHECKTYPES 0 #endif #ifndef Py_TPFLAGS_HAVE_INDEX #define Py_TPFLAGS_HAVE_INDEX 0 #endif #ifndef Py_TPFLAGS_HAVE_NEWBUFFER #define Py_TPFLAGS_HAVE_NEWBUFFER 0 #endif #ifndef Py_TPFLAGS_HAVE_FINALIZE #define Py_TPFLAGS_HAVE_FINALIZE 0 #endif #if PY_VERSION_HEX < 0x030700A0 || !defined(METH_FASTCALL) #ifndef METH_FASTCALL #define METH_FASTCALL 0x80 #endif typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args, Py_ssize_t nargs); typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject **args, Py_ssize_t nargs, PyObject *kwnames); #else #define __Pyx_PyCFunctionFast _PyCFunctionFast #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords #endif #if CYTHON_FAST_PYCCALL #define __Pyx_PyFastCFunction_Check(func)\ ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS))))) #else #define __Pyx_PyFastCFunction_Check(func) 0 #endif #if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 #define __Pyx_PyThreadState_Current PyThreadState_GET() #elif PY_VERSION_HEX >= 0x03060000 #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() #elif PY_VERSION_HEX >= 0x03000000 #define __Pyx_PyThreadState_Current PyThreadState_GET() #else #define __Pyx_PyThreadState_Current _PyThreadState_Current #endif #if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) #define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) #else #define __Pyx_PyDict_NewPresized(n) PyDict_New() #endif #if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) #else #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) #endif #if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) #define CYTHON_PEP393_ENABLED 1 #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ 0 : _PyUnicode_Ready((PyObject *)(op))) #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) #else #define CYTHON_PEP393_ENABLED 0 #define PyUnicode_1BYTE_KIND 1 #define PyUnicode_2BYTE_KIND 2 #define PyUnicode_4BYTE_KIND 4 #define __Pyx_PyUnicode_READY(op) (0) #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) #endif #if CYTHON_COMPILING_IN_PYPY #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) #else #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) #define PyObject_Malloc(s) PyMem_Malloc(s) #define PyObject_Free(p) PyMem_Free(p) #define PyObject_Realloc(p) PyMem_Realloc(p) #endif #if CYTHON_COMPILING_IN_PYSTON #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) #else #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) #endif #define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) #define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) #else #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) #endif #if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) #define PyObject_ASCII(o) PyObject_Repr(o) #endif #if PY_MAJOR_VERSION >= 3 #define PyBaseString_Type PyUnicode_Type #define PyStringObject PyUnicodeObject #define PyString_Type PyUnicode_Type #define PyString_Check PyUnicode_Check #define PyString_CheckExact PyUnicode_CheckExact #endif #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) #else #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) #endif #ifndef PySet_CheckExact #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) #endif #define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) #if PY_MAJOR_VERSION >= 3 #define PyIntObject PyLongObject #define PyInt_Type PyLong_Type #define PyInt_Check(op) PyLong_Check(op) #define PyInt_CheckExact(op) PyLong_CheckExact(op) #define PyInt_FromString PyLong_FromString #define PyInt_FromUnicode PyLong_FromUnicode #define PyInt_FromLong PyLong_FromLong #define PyInt_FromSize_t PyLong_FromSize_t #define PyInt_FromSsize_t PyLong_FromSsize_t #define PyInt_AsLong PyLong_AsLong #define PyInt_AS_LONG PyLong_AS_LONG #define PyInt_AsSsize_t PyLong_AsSsize_t #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask #define PyNumber_Int PyNumber_Long #endif #if PY_MAJOR_VERSION >= 3 #define PyBoolObject PyLongObject #endif #if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY #ifndef PyUnicode_InternFromString #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) #endif #endif #if PY_VERSION_HEX < 0x030200A4 typedef long Py_hash_t; #define __Pyx_PyInt_FromHash_t PyInt_FromLong #define __Pyx_PyInt_AsHash_t PyInt_AsLong #else #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t #endif #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
PyMethod_New(func, self) : PyInstanceMethod_New(func)) #else #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) #endif #ifndef __has_attribute #define __has_attribute(x) 0 #endif #ifndef __has_cpp_attribute #define __has_cpp_attribute(x) 0 #endif #if CYTHON_USE_ASYNC_SLOTS #if PY_VERSION_HEX >= 0x030500B1 #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) #else #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) #endif #else #define __Pyx_PyType_AsAsync(obj) NULL #endif #ifndef __Pyx_PyAsyncMethodsStruct typedef struct { unaryfunc am_await; unaryfunc am_aiter; unaryfunc am_anext; } __Pyx_PyAsyncMethodsStruct; #endif #ifndef CYTHON_RESTRICT #if defined(__GNUC__) #define CYTHON_RESTRICT __restrict__ #elif defined(_MSC_VER) && _MSC_VER >= 1400 #define CYTHON_RESTRICT __restrict #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define CYTHON_RESTRICT restrict #else #define CYTHON_RESTRICT #endif #endif #ifndef CYTHON_UNUSED # if defined(__GNUC__) # if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) # define CYTHON_UNUSED __attribute__ ((__unused__)) # else # define CYTHON_UNUSED # endif # elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) # define CYTHON_UNUSED __attribute__ ((__unused__)) # else # define CYTHON_UNUSED # endif #endif #ifndef CYTHON_MAYBE_UNUSED_VAR # if defined(__cplusplus) template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } # else # define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) # endif #endif #ifndef CYTHON_NCP_UNUSED # if CYTHON_COMPILING_IN_CPYTHON # define CYTHON_NCP_UNUSED # else # define CYTHON_NCP_UNUSED CYTHON_UNUSED # endif #endif #define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) #ifdef _MSC_VER #ifndef _MSC_STDINT_H_ #if _MSC_VER < 1300 typedef unsigned char uint8_t; typedef unsigned int uint32_t; #else typedef unsigned __int8 uint8_t; typedef unsigned __int32 uint32_t; #endif #endif #else #include #endif #ifndef CYTHON_FALLTHROUGH #if defined(__cplusplus) && __cplusplus >= 201103L #if __has_cpp_attribute(fallthrough) #define CYTHON_FALLTHROUGH [[fallthrough]] #elif __has_cpp_attribute(clang::fallthrough) #define CYTHON_FALLTHROUGH [[clang::fallthrough]] #elif __has_cpp_attribute(gnu::fallthrough) #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] #endif #endif #ifndef CYTHON_FALLTHROUGH #if __has_attribute(fallthrough) #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) #else #define CYTHON_FALLTHROUGH #endif #endif #if defined(__clang__ ) && defined(__apple_build_version__) #if __apple_build_version__ < 7000000 #undef CYTHON_FALLTHROUGH #define CYTHON_FALLTHROUGH #endif #endif #endif #ifndef CYTHON_INLINE #if defined(__clang__) #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) #elif defined(__GNUC__) #define CYTHON_INLINE __inline__ #elif defined(_MSC_VER) #define CYTHON_INLINE __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define CYTHON_INLINE inline #else #define CYTHON_INLINE #endif #endif #if defined(WIN32) || defined(MS_WINDOWS) #define _USE_MATH_DEFINES #endif #include #ifdef NAN #define __PYX_NAN() ((float) NAN) #else static CYTHON_INLINE float __PYX_NAN() { float value; memset(&value, 0xFF, sizeof(value)); return value; } #endif #if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) #define __Pyx_truncl trunc #else #define __Pyx_truncl truncl #endif #define __PYX_ERR(f_index, lineno, 
Ln_error) \ { \ __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ } #ifndef __PYX_EXTERN_C #ifdef __cplusplus #define __PYX_EXTERN_C extern "C" #else #define __PYX_EXTERN_C extern #endif #endif #define __PYX_HAVE__borg__hashindex #define __PYX_HAVE_API__borg__hashindex #include #include #include #include #include "_hashindex.c" #include "cache_sync/cache_sync.c" #ifdef _OPENMP #include #endif /* _OPENMP */ #if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) #define CYTHON_WITHOUT_ASSERTIONS #endif typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; #define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 #define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0 #define __PYX_DEFAULT_STRING_ENCODING "" #define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString #define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize #define __Pyx_uchar_cast(c) ((unsigned char)c) #define __Pyx_long_cast(x) ((long)x) #define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ (sizeof(type) < sizeof(Py_ssize_t)) ||\ (sizeof(type) > sizeof(Py_ssize_t) &&\ likely(v < (type)PY_SSIZE_T_MAX ||\ v == (type)PY_SSIZE_T_MAX) &&\ (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ v == (type)PY_SSIZE_T_MIN))) ||\ (sizeof(type) == sizeof(Py_ssize_t) &&\ (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ v == (type)PY_SSIZE_T_MAX))) ) #if defined (__cplusplus) && __cplusplus >= 201103L #include #define __Pyx_sst_abs(value) std::abs(value) #elif SIZEOF_INT >= SIZEOF_SIZE_T #define __Pyx_sst_abs(value) abs(value) #elif SIZEOF_LONG >= SIZEOF_SIZE_T #define __Pyx_sst_abs(value) labs(value) #elif defined (_MSC_VER) #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define __Pyx_sst_abs(value) llabs(value) #elif defined (__GNUC__) #define __Pyx_sst_abs(value) __builtin_llabs(value) #else #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value) #endif static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); #define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) #define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) #define __Pyx_PyBytes_FromString PyBytes_FromString #define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); #if PY_MAJOR_VERSION < 3 #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize #else #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize #endif #define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) #define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) #define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) #define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) #define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) #define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { const Py_UNICODE *u_end = u; while (*u_end++) ; return (size_t)(u_end - u - 1); } #define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) #define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode #define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode #define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) #define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) #define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False)) static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); #define __Pyx_PySequence_Tuple(obj)\ (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); #if CYTHON_ASSUME_SAFE_MACROS #define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) #else #define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) #endif #define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Long(x)) #else #define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) #endif #define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) #if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII static int __Pyx_sys_getdefaultencoding_not_ascii; static int __Pyx_init_sys_getdefaultencoding_params(void) { PyObject* sys; PyObject* default_encoding = NULL; PyObject* ascii_chars_u = NULL; PyObject* ascii_chars_b = NULL; const char* default_encoding_c; sys = PyImport_ImportModule("sys"); if (!sys) goto bad; default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); Py_DECREF(sys); if (!default_encoding) goto bad; default_encoding_c = PyBytes_AsString(default_encoding); if (!default_encoding_c) goto bad; if (strcmp(default_encoding_c, "ascii") == 0) { __Pyx_sys_getdefaultencoding_not_ascii = 0; } else { char ascii_chars[128]; int c; for (c = 0; c < 128; c++) { ascii_chars[c] = c; } __Pyx_sys_getdefaultencoding_not_ascii = 1; ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); if (!ascii_chars_u) goto bad; ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { PyErr_Format( PyExc_ValueError, "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", default_encoding_c); goto bad; } Py_DECREF(ascii_chars_u); Py_DECREF(ascii_chars_b); } Py_DECREF(default_encoding); return 0; bad: Py_XDECREF(default_encoding); Py_XDECREF(ascii_chars_u); Py_XDECREF(ascii_chars_b); return -1; } #endif #if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) #else #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) #if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT static char* __PYX_DEFAULT_STRING_ENCODING; static int __Pyx_init_sys_getdefaultencoding_params(void) { PyObject* sys; PyObject* default_encoding = NULL; char* default_encoding_c; sys = PyImport_ImportModule("sys"); if (!sys) goto bad; default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); Py_DECREF(sys); if (!default_encoding) goto bad; default_encoding_c = PyBytes_AsString(default_encoding); if (!default_encoding_c) goto bad; __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c)); if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); Py_DECREF(default_encoding); return 0; bad: Py_XDECREF(default_encoding); return -1; } #endif #endif /* Test for GCC > 2.95 */ #if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #else /* !__GNUC__ or GCC < 2.95 */ #define likely(x) (x) #define unlikely(x) (x) #endif /* __GNUC__ */ static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } static PyObject *__pyx_m = NULL; static PyObject *__pyx_d; static PyObject *__pyx_b; static PyObject *__pyx_cython_runtime; static PyObject *__pyx_empty_tuple; static PyObject *__pyx_empty_bytes; static PyObject *__pyx_empty_unicode; static int __pyx_lineno; static int __pyx_clineno = 0; static const char * __pyx_cfilenm= __FILE__; static 
const char *__pyx_filename; static const char *__pyx_f[] = { "src/borg/hashindex.pyx", "stringsource", "type.pxd", }; /*--- Type declarations ---*/ struct __pyx_obj_4borg_9hashindex_IndexBase; struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex; struct __pyx_obj_4borg_9hashindex_NSIndex; struct __pyx_obj_4borg_9hashindex_NSKeyIterator; struct __pyx_obj_4borg_9hashindex_ChunkIndex; struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator; struct __pyx_obj_4borg_9hashindex_CacheSynchronizer; /* "borg/hashindex.pyx":78 * * @cython.internal * cdef class IndexBase: # <<<<<<<<<<<<<< * cdef HashIndex *index * cdef int key_size */ struct __pyx_obj_4borg_9hashindex_IndexBase { PyObject_HEAD HashIndex *index; int key_size; }; /* "borg/hashindex.pyx":163 * * * cdef class FuseVersionsIndex(IndexBase): # <<<<<<<<<<<<<< * # 4 byte version + 16 byte file contents hash * value_size = 20 */ struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex { struct __pyx_obj_4borg_9hashindex_IndexBase __pyx_base; }; /* "borg/hashindex.pyx":193 * * * cdef class NSIndex(IndexBase): # <<<<<<<<<<<<<< * * value_size = 8 */ struct __pyx_obj_4borg_9hashindex_NSIndex { struct __pyx_obj_4borg_9hashindex_IndexBase __pyx_base; }; /* "borg/hashindex.pyx":238 * * * cdef class NSKeyIterator: # <<<<<<<<<<<<<< * cdef NSIndex idx * cdef HashIndex *index */ struct __pyx_obj_4borg_9hashindex_NSKeyIterator { PyObject_HEAD struct __pyx_obj_4borg_9hashindex_NSIndex *idx; HashIndex *index; void const *key; int key_size; int exhausted; }; /* "borg/hashindex.pyx":269 * * * cdef class ChunkIndex(IndexBase): # <<<<<<<<<<<<<< * """ * Mapping of 32 byte keys to (refcount, size, csize), which are all 32-bit unsigned. */ struct __pyx_obj_4borg_9hashindex_ChunkIndex { struct __pyx_obj_4borg_9hashindex_IndexBase __pyx_base; struct __pyx_vtabstruct_4borg_9hashindex_ChunkIndex *__pyx_vtab; }; /* "borg/hashindex.pyx":468 * * * cdef class ChunkKeyIterator: # <<<<<<<<<<<<<< * cdef ChunkIndex idx * cdef HashIndex *index */ struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator { PyObject_HEAD struct __pyx_obj_4borg_9hashindex_ChunkIndex *idx; HashIndex *index; void const *key; int key_size; int exhausted; }; /* "borg/hashindex.pyx":502 * * * cdef class CacheSynchronizer: # <<<<<<<<<<<<<< * cdef ChunkIndex chunks * cdef CacheSyncCtx *sync */ struct __pyx_obj_4borg_9hashindex_CacheSynchronizer { PyObject_HEAD struct __pyx_obj_4borg_9hashindex_ChunkIndex *chunks; CacheSyncCtx *sync; }; /* "borg/hashindex.pyx":269 * * * cdef class ChunkIndex(IndexBase): # <<<<<<<<<<<<<< * """ * Mapping of 32 byte keys to (refcount, size, csize), which are all 32-bit unsigned. 
*/ struct __pyx_vtabstruct_4borg_9hashindex_ChunkIndex { PyObject *(*_add)(struct __pyx_obj_4borg_9hashindex_ChunkIndex *, void *, uint32_t *); }; static struct __pyx_vtabstruct_4borg_9hashindex_ChunkIndex *__pyx_vtabptr_4borg_9hashindex_ChunkIndex; /* --- Runtime support code (head) --- */ /* Refnanny.proto */ #ifndef CYTHON_REFNANNY #define CYTHON_REFNANNY 0 #endif #if CYTHON_REFNANNY typedef struct { void (*INCREF)(void*, PyObject*, int); void (*DECREF)(void*, PyObject*, int); void (*GOTREF)(void*, PyObject*, int); void (*GIVEREF)(void*, PyObject*, int); void* (*SetupContext)(const char*, int, const char*); void (*FinishContext)(void**); } __Pyx_RefNannyAPIStruct; static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; #ifdef WITH_THREAD #define __Pyx_RefNannySetupContext(name, acquire_gil)\ if (acquire_gil) {\ PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ PyGILState_Release(__pyx_gilstate_save);\ } else {\ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ } #else #define __Pyx_RefNannySetupContext(name, acquire_gil)\ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) #endif #define __Pyx_RefNannyFinishContext()\ __Pyx_RefNanny->FinishContext(&__pyx_refnanny) #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) #else #define __Pyx_RefNannyDeclarations #define __Pyx_RefNannySetupContext(name, acquire_gil) #define __Pyx_RefNannyFinishContext() #define __Pyx_INCREF(r) Py_INCREF(r) #define __Pyx_DECREF(r) Py_DECREF(r) #define __Pyx_GOTREF(r) #define __Pyx_GIVEREF(r) #define __Pyx_XINCREF(r) Py_XINCREF(r) #define __Pyx_XDECREF(r) Py_XDECREF(r) #define __Pyx_XGOTREF(r) #define __Pyx_XGIVEREF(r) #endif #define __Pyx_XDECREF_SET(r, v) do {\ PyObject *tmp = (PyObject *) r;\ r = v; __Pyx_XDECREF(tmp);\ } while (0) #define __Pyx_DECREF_SET(r, v) do {\ PyObject *tmp = (PyObject *) r;\ r = v; __Pyx_DECREF(tmp);\ } while (0) #define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) #define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) /* PyObjectGetAttrStr.proto */ #if CYTHON_USE_TYPE_SLOTS static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { PyTypeObject* tp = Py_TYPE(obj); if (likely(tp->tp_getattro)) return tp->tp_getattro(obj, attr_name); #if PY_MAJOR_VERSION < 3 if (likely(tp->tp_getattr)) return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); #endif return PyObject_GetAttr(obj, attr_name); } #else #define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) #endif /* GetBuiltinName.proto */ static PyObject *__Pyx_GetBuiltinName(PyObject *name); /* 
RaiseDoubleKeywords.proto */ static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); /* ParseKeywords.proto */ static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ const char* function_name); /* RaiseArgTupleInvalid.proto */ static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /* PyObjectCall.proto */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); #else #define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) #endif /* PyObjectLookupSpecial.proto */ #if CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS static CYTHON_INLINE PyObject* __Pyx_PyObject_LookupSpecial(PyObject* obj, PyObject* attr_name) { PyObject *res; PyTypeObject *tp = Py_TYPE(obj); #if PY_MAJOR_VERSION < 3 if (unlikely(PyInstance_Check(obj))) return __Pyx_PyObject_GetAttrStr(obj, attr_name); #endif res = _PyType_Lookup(tp, attr_name); if (likely(res)) { descrgetfunc f = Py_TYPE(res)->tp_descr_get; if (!f) { Py_INCREF(res); } else { res = f(res, obj, (PyObject *)tp); } } else { PyErr_SetObject(PyExc_AttributeError, attr_name); } return res; } #else #define __Pyx_PyObject_LookupSpecial(o,n) __Pyx_PyObject_GetAttrStr(o,n) #endif /* PyCFunctionFastCall.proto */ #if CYTHON_FAST_PYCCALL static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); #else #define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) #endif /* PyFunctionFastCall.proto */ #if CYTHON_FAST_PYCALL #define __Pyx_PyFunction_FastCall(func, args, nargs)\ __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) #if 1 || PY_VERSION_HEX < 0x030600B1 static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs); #else #define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) #endif #endif /* PyObjectCallMethO.proto */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); #endif /* PyObjectCallOneArg.proto */ static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); /* PyObjectCallNoArg.proto */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); #else #define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) #endif /* PyThreadStateGet.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; #define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; #define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type #else #define __Pyx_PyThreadState_declare #define __Pyx_PyThreadState_assign #define __Pyx_PyErr_Occurred() PyErr_Occurred() #endif /* SaveResetException.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); #else #define __Pyx_ExceptionSave(type, 
value, tb) PyErr_GetExcInfo(type, value, tb) #define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) #endif /* GetException.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #else static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); #endif /* PyErrFetchRestore.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) #define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) #define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) #define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) #define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #if CYTHON_COMPILING_IN_CPYTHON #define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) #else #define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) #endif #else #define __Pyx_PyErr_Clear() PyErr_Clear() #define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) #define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) #define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) #define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) #define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) #define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) #define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) #endif /* RaiseException.proto */ static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); /* PySequenceContains.proto */ static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { int result = PySequence_Contains(seq, item); return unlikely(result < 0) ? result : (result == (eq == Py_EQ)); } /* PyErrExceptionMatches.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); #else #define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) #endif /* GetItemInt.proto */ #define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ __Pyx_GetItemInt_Generic(o, to_py_func(i)))) #define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, int wraparound, int boundscheck); #define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, int wraparound, int boundscheck); static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, int wraparound, int boundscheck); /* GetModuleGlobalName.proto */ static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name); /* ArgTypeTest.proto */ #define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\ __Pyx__ArgTypeTest(obj, type, name, exact)) static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); /* ListAppend.proto */ #if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { PyListObject* L = (PyListObject*) list; Py_ssize_t len = Py_SIZE(list); if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { Py_INCREF(x); PyList_SET_ITEM(list, len, x); Py_SIZE(list) = len+1; return 0; } return PyList_Append(list, x); } #else #define __Pyx_PyList_Append(L,x) PyList_Append(L,x) #endif /* ExtTypeTest.proto */ static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); /* IncludeStringH.proto */ #include <string.h> /* decode_c_string_utf16.proto */ static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { int byteorder = 0; return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); } static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { int byteorder = -1; return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); } static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { int byteorder = 1; return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); } /* decode_c_string.proto */ static CYTHON_INLINE PyObject* __Pyx_decode_c_string( const char* cstring, Py_ssize_t start, Py_ssize_t stop, const char* encoding, const char* errors, PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); /* SetupReduce.proto */ static int __Pyx_setup_reduce(PyObject* type_obj); /* SetVTable.proto */ static int __Pyx_SetVtable(PyObject *dict, void *vtable); /* Import.proto */ static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); /* ImportFrom.proto */ static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); /* None.proto */ static CYTHON_INLINE long __Pyx_mod_long(long, long); /* GetNameInClass.proto */ static PyObject
*__Pyx_GetNameInClass(PyObject *nmspace, PyObject *name); /* ClassMethod.proto */ #include "descrobject.h" static PyObject* __Pyx_Method_ClassMethod(PyObject *method); /* CLineInTraceback.proto */ #ifdef CYTHON_CLINE_IN_TRACEBACK #define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) #else static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); #endif /* CodeObjectCache.proto */ typedef struct { PyCodeObject* code_object; int code_line; } __Pyx_CodeObjectCacheEntry; struct __Pyx_CodeObjectCache { int count; int max_count; __Pyx_CodeObjectCacheEntry* entries; }; static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); static PyCodeObject *__pyx_find_code_object(int code_line); static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); /* AddTraceback.proto */ static void __Pyx_AddTraceback(const char *funcname, int c_line, int py_line, const char *filename); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint32_t(uint32_t value); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint64_t(uint64_t value); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); /* CIntFromPy.proto */ static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); /* CIntFromPy.proto */ static CYTHON_INLINE uint32_t __Pyx_PyInt_As_uint32_t(PyObject *); /* CIntFromPy.proto */ static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); /* FastTypeChecks.proto */ #if CYTHON_COMPILING_IN_CPYTHON #define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); #else #define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) #define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) #define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) #endif /* CheckBinaryVersion.proto */ static int __Pyx_check_binary_version(void); /* PyIdentifierFromString.proto */ #if !defined(__Pyx_PyIdentifier_FromString) #if PY_MAJOR_VERSION < 3 #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s) #else #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s) #endif #endif /* ModuleImport.proto */ static PyObject *__Pyx_ImportModule(const char *name); /* TypeImport.proto */ static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict); /* InitStrings.proto */ static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); static PyObject *__pyx_f_4borg_9hashindex_10ChunkIndex__add(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, void *__pyx_v_key, uint32_t *__pyx_v_data); /* proto*/ /* Module declarations from 'cython' */ /* Module declarations from 'libc.stdint' */ /* Module declarations from 'libc.errno' */ /* Module declarations from 'libc.string' */ /* Module declarations from 'libc.stdio' */ /* Module declarations from '__builtin__' */ /* Module declarations from 'cpython.type' */ static PyTypeObject 
*__pyx_ptype_7cpython_4type_type = 0; /* Module declarations from 'cpython' */ /* Module declarations from 'cpython.object' */ /* Module declarations from 'cpython.exc' */ /* Module declarations from 'cpython.buffer' */ /* Module declarations from 'cpython.bytes' */ /* Module declarations from 'borg.hashindex' */ static PyTypeObject *__pyx_ptype_4borg_9hashindex_IndexBase = 0; static PyTypeObject *__pyx_ptype_4borg_9hashindex_FuseVersionsIndex = 0; static PyTypeObject *__pyx_ptype_4borg_9hashindex_NSIndex = 0; static PyTypeObject *__pyx_ptype_4borg_9hashindex_NSKeyIterator = 0; static PyTypeObject *__pyx_ptype_4borg_9hashindex_ChunkIndex = 0; static PyTypeObject *__pyx_ptype_4borg_9hashindex_ChunkKeyIterator = 0; static PyTypeObject *__pyx_ptype_4borg_9hashindex_CacheSynchronizer = 0; static PyObject *__pyx_v_4borg_9hashindex__NoDefault = 0; static Py_buffer __pyx_f_4borg_9hashindex_ro_buffer(PyObject *); /*proto*/ #define __Pyx_MODULE_NAME "borg.hashindex" extern int __pyx_module_is_main_borg__hashindex; int __pyx_module_is_main_borg__hashindex = 0; /* Implementation of 'borg.hashindex' */ static PyObject *__pyx_builtin_object; static PyObject *__pyx_builtin_open; static PyObject *__pyx_builtin_KeyError; static PyObject *__pyx_builtin_TypeError; static PyObject *__pyx_builtin_IndexError; static PyObject *__pyx_builtin_StopIteration; static PyObject *__pyx_builtin_ValueError; static const char __pyx_k_os[] = "os"; static const char __pyx_k_rb[] = "rb"; static const char __pyx_k_wb[] = "wb"; static const char __pyx_k_key[] = "key"; static const char __pyx_k_exit[] = "__exit__"; static const char __pyx_k_main[] = "__main__"; static const char __pyx_k_name[] = "__name__"; static const char __pyx_k_open[] = "open"; static const char __pyx_k_path[] = "path"; static const char __pyx_k_read[] = "read"; static const char __pyx_k_refs[] = "refs"; static const char __pyx_k_size[] = "size"; static const char __pyx_k_test[] = "__test__"; static const char __pyx_k_csize[] = "csize"; static const char __pyx_k_enter[] = "__enter__"; static const char __pyx_k_value[] = "value"; static const char __pyx_k_1_1_07[] = "1.1_07"; static const char __pyx_k_chunks[] = "chunks"; static const char __pyx_k_import[] = "__import__"; static const char __pyx_k_locale[] = "locale"; static const char __pyx_k_marker[] = "marker"; static const char __pyx_k_object[] = "object"; static const char __pyx_k_reduce[] = "__reduce__"; static const char __pyx_k_default[] = "default"; static const char __pyx_k_KeyError[] = "KeyError"; static const char __pyx_k_capacity[] = "capacity"; static const char __pyx_k_getstate[] = "__getstate__"; static const char __pyx_k_key_size[] = "_key_size"; static const char __pyx_k_setstate[] = "__setstate__"; static const char __pyx_k_MAX_VALUE[] = "MAX_VALUE"; static const char __pyx_k_TypeError[] = "TypeError"; static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; static const char __pyx_k_IndexError[] = "IndexError"; static const char __pyx_k_ValueError[] = "ValueError"; static const char __pyx_k_key_size_2[] = "key_size"; static const char __pyx_k_namedtuple[] = "namedtuple"; static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; static const char __pyx_k_value_size[] = "value_size"; static const char __pyx_k_API_VERSION[] = "API_VERSION"; static const char __pyx_k_collections[] = "collections"; static const char __pyx_k_StopIteration[] = "StopIteration"; static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; static const char __pyx_k_permit_compact[] = "permit_compact"; static 
const char __pyx_k_ChunkIndexEntry[] = "ChunkIndexEntry"; static const char __pyx_k_MAX_LOAD_FACTOR[] = "MAX_LOAD_FACTOR"; static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; static const char __pyx_k_refcount_size_csize[] = "refcount size csize"; static const char __pyx_k_hashindex_set_failed[] = "hashindex_set failed"; static const char __pyx_k_hashindex_init_failed[] = "hashindex_init failed"; static const char __pyx_k_cache_sync_feed_failed[] = "cache_sync_feed failed: "; static const char __pyx_k_cache_sync_init_failed[] = "cache_sync_init failed"; static const char __pyx_k_hashindex_delete_failed[] = "hashindex_delete failed"; static const char __pyx_k_invalid_reference_count[] = "invalid reference count"; static const char __pyx_k_Expected_bytes_of_length_16_for[] = "Expected bytes of length 16 for second value"; static const char __pyx_k_hashindex_read_returned_NULL_wit[] = "hashindex_read() returned NULL with no exception set"; static const char __pyx_k_maximum_number_of_segments_reach[] = "maximum number of segments reached"; static const char __pyx_k_maximum_number_of_versions_reach[] = "maximum number of versions reached"; static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; static const char __pyx_k_stats_against_key_contained_in_s[] = "stats_against: key contained in self but not in master_index."; static PyObject *__pyx_kp_s_1_1_07; static PyObject *__pyx_n_s_API_VERSION; static PyObject *__pyx_n_s_ChunkIndexEntry; static PyObject *__pyx_kp_s_Expected_bytes_of_length_16_for; static PyObject *__pyx_n_s_IndexError; static PyObject *__pyx_n_s_KeyError; static PyObject *__pyx_n_s_MAX_LOAD_FACTOR; static PyObject *__pyx_n_s_MAX_VALUE; static PyObject *__pyx_n_s_StopIteration; static PyObject *__pyx_n_s_TypeError; static PyObject *__pyx_n_s_ValueError; static PyObject *__pyx_kp_s_cache_sync_feed_failed; static PyObject *__pyx_kp_s_cache_sync_init_failed; static PyObject *__pyx_n_s_capacity; static PyObject *__pyx_n_s_chunks; static PyObject *__pyx_n_s_cline_in_traceback; static PyObject *__pyx_n_s_collections; static PyObject *__pyx_n_s_csize; static PyObject *__pyx_n_s_default; static PyObject *__pyx_n_s_enter; static PyObject *__pyx_n_s_exit; static PyObject *__pyx_n_s_getstate; static PyObject *__pyx_kp_s_hashindex_delete_failed; static PyObject *__pyx_kp_s_hashindex_init_failed; static PyObject *__pyx_kp_s_hashindex_read_returned_NULL_wit; static PyObject *__pyx_kp_s_hashindex_set_failed; static PyObject *__pyx_n_s_import; static PyObject *__pyx_kp_s_invalid_reference_count; static PyObject *__pyx_n_s_key; static PyObject *__pyx_n_s_key_size; static PyObject *__pyx_n_s_key_size_2; static PyObject *__pyx_n_s_locale; static PyObject *__pyx_n_s_main; static PyObject *__pyx_n_s_marker; static PyObject *__pyx_kp_s_maximum_number_of_segments_reach; static PyObject *__pyx_kp_s_maximum_number_of_versions_reach; static PyObject *__pyx_n_s_name; static PyObject *__pyx_n_s_namedtuple; static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; static PyObject *__pyx_n_s_object; static PyObject *__pyx_n_s_open; static PyObject *__pyx_n_s_os; static PyObject *__pyx_n_s_path; static PyObject *__pyx_n_s_permit_compact; static PyObject *__pyx_n_s_pyx_vtable; static PyObject *__pyx_n_s_rb; static PyObject *__pyx_n_s_read; static PyObject *__pyx_n_s_reduce; static PyObject *__pyx_n_s_reduce_cython; static PyObject *__pyx_n_s_reduce_ex; static 
PyObject *__pyx_kp_s_refcount_size_csize; static PyObject *__pyx_n_s_refs; static PyObject *__pyx_n_s_setstate; static PyObject *__pyx_n_s_setstate_cython; static PyObject *__pyx_n_s_size; static PyObject *__pyx_kp_s_stats_against_key_contained_in_s; static PyObject *__pyx_n_s_test; static PyObject *__pyx_n_s_value; static PyObject *__pyx_n_s_value_size; static PyObject *__pyx_n_s_wb; static int __pyx_pf_4borg_9hashindex_9IndexBase___cinit__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_capacity, PyObject *__pyx_v_path, PyObject *__pyx_v_permit_compact); /* proto */ static void __pyx_pf_4borg_9hashindex_9IndexBase_2__dealloc__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_4read(PyTypeObject *__pyx_v_cls, PyObject *__pyx_v_path, PyObject *__pyx_v_permit_compact); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_6write(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_path); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_8clear(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_10setdefault(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /* proto */ static int __pyx_pf_4borg_9hashindex_9IndexBase_12__delitem__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_14get(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_default); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_16pop(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_default); /* proto */ static Py_ssize_t __pyx_pf_4borg_9hashindex_9IndexBase_18__len__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_20size(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_22compact(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_24__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_26__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17FuseVersionsIndex___getitem__(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static int __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_2__setitem__(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /* proto */ static int __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_4__contains__(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17FuseVersionsIndex_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17FuseVersionsIndex_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex 
*__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex___getitem__(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static int __pyx_pf_4borg_9hashindex_7NSIndex_2__setitem__(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /* proto */ static int __pyx_pf_4borg_9hashindex_7NSIndex_4__contains__(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex_6iteritems(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_marker); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex_8__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex_10__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static int __pyx_pf_4borg_9hashindex_13NSKeyIterator___cinit__(struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self, PyObject *__pyx_v_key_size); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_2__iter__(struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_4__next__(struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex___getitem__(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static int __pyx_pf_4borg_9hashindex_10ChunkIndex_2__setitem__(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /* proto */ static int __pyx_pf_4borg_9hashindex_10ChunkIndex_4__contains__(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_6incref(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_8decref(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_10iteritems(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_marker); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_12summarize(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_14stats_against(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_master_index); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_16add(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_refs, PyObject *__pyx_v_size, PyObject *__pyx_v_csize); /* proto */ static PyObject 
*__pyx_pf_4borg_9hashindex_10ChunkIndex_18merge(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_other); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_20zero_csize_ids(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_22__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_24__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static int __pyx_pf_4borg_9hashindex_16ChunkKeyIterator___cinit__(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self, PyObject *__pyx_v_key_size); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_2__iter__(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_4__next__(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static int __pyx_pf_4borg_9hashindex_17CacheSynchronizer___cinit__(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self, PyObject *__pyx_v_chunks); /* proto */ static void __pyx_pf_4borg_9hashindex_17CacheSynchronizer_2__dealloc__(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_4feed(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self, PyObject *__pyx_v_chunk); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_9num_files___get__(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static PyObject *__pyx_tp_new_4borg_9hashindex_IndexBase(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_tp_new_4borg_9hashindex_FuseVersionsIndex(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_tp_new_4borg_9hashindex_NSIndex(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_tp_new_4borg_9hashindex_NSKeyIterator(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_tp_new_4borg_9hashindex_ChunkIndex(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_tp_new_4borg_9hashindex_ChunkKeyIterator(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_tp_new_4borg_9hashindex_CacheSynchronizer(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_int_0; static PyObject *__pyx_int_8; static PyObject *__pyx_int_12; static PyObject *__pyx_int_16; 
static PyObject *__pyx_int_20; static PyObject *__pyx_int_32; static PyObject *__pyx_int_4294967295; static PyObject *__pyx_k__6; static PyObject *__pyx_tuple_; static PyObject *__pyx_tuple__2; static PyObject *__pyx_tuple__3; static PyObject *__pyx_tuple__4; static PyObject *__pyx_tuple__5; static PyObject *__pyx_tuple__7; static PyObject *__pyx_tuple__8; static PyObject *__pyx_tuple__9; static PyObject *__pyx_tuple__10; static PyObject *__pyx_tuple__11; static PyObject *__pyx_tuple__12; static PyObject *__pyx_tuple__13; static PyObject *__pyx_tuple__14; static PyObject *__pyx_tuple__15; static PyObject *__pyx_tuple__16; static PyObject *__pyx_tuple__17; static PyObject *__pyx_tuple__18; static PyObject *__pyx_tuple__19; static PyObject *__pyx_tuple__20; static PyObject *__pyx_tuple__21; static PyObject *__pyx_tuple__22; static PyObject *__pyx_tuple__23; static PyObject *__pyx_tuple__24; static PyObject *__pyx_tuple__25; static PyObject *__pyx_tuple__26; static PyObject *__pyx_tuple__27; static PyObject *__pyx_tuple__28; /* "borg/hashindex.pyx":87 * MAX_VALUE = _MAX_VALUE * * def __cinit__(self, capacity=0, path=None, permit_compact=False): # <<<<<<<<<<<<<< * self.key_size = self._key_size * if path: */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_9IndexBase_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static int __pyx_pw_4borg_9hashindex_9IndexBase_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_capacity = 0; PyObject *__pyx_v_path = 0; PyObject *__pyx_v_permit_compact = 0; int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_capacity,&__pyx_n_s_path,&__pyx_n_s_permit_compact,0}; PyObject* values[3] = {0,0,0}; values[0] = ((PyObject *)__pyx_int_0); values[1] = ((PyObject *)Py_None); values[2] = ((PyObject *)Py_False); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); CYTHON_FALLTHROUGH; case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_capacity); if (value) { values[0] = value; kw_args--; } } CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_path); if (value) { values[1] = value; kw_args--; } } CYTHON_FALLTHROUGH; case 2: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_permit_compact); if (value) { values[2] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(0, 87, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); CYTHON_FALLTHROUGH; case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_capacity = values[0]; __pyx_v_path = values[1]; __pyx_v_permit_compact = values[2]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; 
__Pyx_RaiseArgtupleInvalid("__cinit__", 0, 0, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 87, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.IndexBase.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return -1; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase___cinit__(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), __pyx_v_capacity, __pyx_v_path, __pyx_v_permit_compact); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_9IndexBase___cinit__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_capacity, PyObject *__pyx_v_path, PyObject *__pyx_v_permit_compact) { PyObject *__pyx_v_fd = NULL; int __pyx_r; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; int __pyx_t_2; int __pyx_t_3; int __pyx_t_4; int __pyx_t_5; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; PyObject *__pyx_t_10 = NULL; PyObject *__pyx_t_11 = NULL; PyObject *__pyx_t_12 = NULL; HashIndex *__pyx_t_13; PyObject *__pyx_t_14 = NULL; int __pyx_t_15; __Pyx_RefNannySetupContext("__cinit__", 0); /* "borg/hashindex.pyx":88 * * def __cinit__(self, capacity=0, path=None, permit_compact=False): * self.key_size = self._key_size # <<<<<<<<<<<<<< * if path: * if isinstance(path, (str, bytes)): */ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_key_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 88, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_2 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_2 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 88, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_v_self->key_size = __pyx_t_2; /* "borg/hashindex.pyx":89 * def __cinit__(self, capacity=0, path=None, permit_compact=False): * self.key_size = self._key_size * if path: # <<<<<<<<<<<<<< * if isinstance(path, (str, bytes)): * with open(path, 'rb') as fd: */ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_v_path); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 89, __pyx_L1_error) if (__pyx_t_3) { /* "borg/hashindex.pyx":90 * self.key_size = self._key_size * if path: * if isinstance(path, (str, bytes)): # <<<<<<<<<<<<<< * with open(path, 'rb') as fd: * self.index = hashindex_read(fd, permit_compact) */ __pyx_t_4 = PyString_Check(__pyx_v_path); __pyx_t_5 = (__pyx_t_4 != 0); if (!__pyx_t_5) { } else { __pyx_t_3 = __pyx_t_5; goto __pyx_L5_bool_binop_done; } __pyx_t_5 = PyBytes_Check(__pyx_v_path); __pyx_t_4 = (__pyx_t_5 != 0); __pyx_t_3 = __pyx_t_4; __pyx_L5_bool_binop_done:; __pyx_t_4 = (__pyx_t_3 != 0); if (__pyx_t_4) { /* "borg/hashindex.pyx":91 * if path: * if isinstance(path, (str, bytes)): * with open(path, 'rb') as fd: # <<<<<<<<<<<<<< * self.index = hashindex_read(fd, permit_compact) * else: */ /*with:*/ { __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 91, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_INCREF(__pyx_v_path); __Pyx_GIVEREF(__pyx_v_path); PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_path); __Pyx_INCREF(__pyx_n_s_rb); __Pyx_GIVEREF(__pyx_n_s_rb); PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_rb); __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_open, __pyx_t_1, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 91, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_t_7 = __Pyx_PyObject_LookupSpecial(__pyx_t_6, __pyx_n_s_exit); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 91, __pyx_L1_error) 
__Pyx_GOTREF(__pyx_t_7); __pyx_t_8 = __Pyx_PyObject_LookupSpecial(__pyx_t_6, __pyx_n_s_enter); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 91, __pyx_L7_error) __Pyx_GOTREF(__pyx_t_8); __pyx_t_9 = NULL; if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) { __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8); if (likely(__pyx_t_9)) { PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8); __Pyx_INCREF(__pyx_t_9); __Pyx_INCREF(function); __Pyx_DECREF_SET(__pyx_t_8, function); } } if (__pyx_t_9) { __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_9); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 91, __pyx_L7_error) __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; } else { __pyx_t_1 = __Pyx_PyObject_CallNoArg(__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 91, __pyx_L7_error) } __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; __pyx_t_8 = __pyx_t_1; __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; /*try:*/ { { __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __Pyx_ExceptionSave(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); __Pyx_XGOTREF(__pyx_t_10); __Pyx_XGOTREF(__pyx_t_11); __Pyx_XGOTREF(__pyx_t_12); /*try:*/ { __pyx_v_fd = __pyx_t_8; __pyx_t_8 = 0; /* "borg/hashindex.pyx":92 * if isinstance(path, (str, bytes)): * with open(path, 'rb') as fd: * self.index = hashindex_read(fd, permit_compact) # <<<<<<<<<<<<<< * else: * self.index = hashindex_read(path, permit_compact) */ __pyx_t_2 = __Pyx_PyInt_As_int(__pyx_v_permit_compact); if (unlikely((__pyx_t_2 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L11_error) __pyx_t_13 = hashindex_read(__pyx_v_fd, __pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 92, __pyx_L11_error) __pyx_v_self->index = __pyx_t_13; /* "borg/hashindex.pyx":91 * if path: * if isinstance(path, (str, bytes)): * with open(path, 'rb') as fd: # <<<<<<<<<<<<<< * self.index = hashindex_read(fd, permit_compact) * else: */ } __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; __Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0; goto __pyx_L16_try_end; __pyx_L11_error:; __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; /*except:*/ { __Pyx_AddTraceback("borg.hashindex.IndexBase.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); if (__Pyx_GetException(&__pyx_t_8, &__pyx_t_6, &__pyx_t_1) < 0) __PYX_ERR(0, 91, __pyx_L13_except_error) __Pyx_GOTREF(__pyx_t_8); __Pyx_GOTREF(__pyx_t_6); __Pyx_GOTREF(__pyx_t_1); __pyx_t_9 = PyTuple_Pack(3, __pyx_t_8, __pyx_t_6, __pyx_t_1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 91, __pyx_L13_except_error) __Pyx_GOTREF(__pyx_t_9); __pyx_t_14 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_9, NULL); __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 91, __pyx_L13_except_error) __Pyx_GOTREF(__pyx_t_14); __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_14); __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; if (__pyx_t_4 < 0) __PYX_ERR(0, 91, __pyx_L13_except_error) __pyx_t_3 = ((!(__pyx_t_4 != 0)) != 0); if (__pyx_t_3) { __Pyx_GIVEREF(__pyx_t_8); __Pyx_GIVEREF(__pyx_t_6); __Pyx_XGIVEREF(__pyx_t_1); __Pyx_ErrRestoreWithState(__pyx_t_8, __pyx_t_6, __pyx_t_1); __pyx_t_8 = 0; __pyx_t_6 = 0; __pyx_t_1 = 0; __PYX_ERR(0, 91, __pyx_L13_except_error) } __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; goto __pyx_L12_exception_handled; } __pyx_L13_except_error:; __Pyx_XGIVEREF(__pyx_t_10); 
__Pyx_XGIVEREF(__pyx_t_11); __Pyx_XGIVEREF(__pyx_t_12); __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); goto __pyx_L1_error; __pyx_L12_exception_handled:; __Pyx_XGIVEREF(__pyx_t_10); __Pyx_XGIVEREF(__pyx_t_11); __Pyx_XGIVEREF(__pyx_t_12); __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); __pyx_L16_try_end:; } } /*finally:*/ { /*normal exit:*/{ if (__pyx_t_7) { __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_tuple_, NULL); __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 91, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_12); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; } goto __pyx_L10; } __pyx_L10:; } goto __pyx_L20; __pyx_L7_error:; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; goto __pyx_L1_error; __pyx_L20:; } /* "borg/hashindex.pyx":90 * self.key_size = self._key_size * if path: * if isinstance(path, (str, bytes)): # <<<<<<<<<<<<<< * with open(path, 'rb') as fd: * self.index = hashindex_read(fd, permit_compact) */ goto __pyx_L4; } /* "borg/hashindex.pyx":94 * self.index = hashindex_read(fd, permit_compact) * else: * self.index = hashindex_read(path, permit_compact) # <<<<<<<<<<<<<< * assert self.index, 'hashindex_read() returned NULL with no exception set' * else: */ /*else*/ { __pyx_t_2 = __Pyx_PyInt_As_int(__pyx_v_permit_compact); if (unlikely((__pyx_t_2 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 94, __pyx_L1_error) __pyx_t_13 = hashindex_read(__pyx_v_path, __pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 94, __pyx_L1_error) __pyx_v_self->index = __pyx_t_13; } __pyx_L4:; /* "borg/hashindex.pyx":95 * else: * self.index = hashindex_read(path, permit_compact) * assert self.index, 'hashindex_read() returned NULL with no exception set' # <<<<<<<<<<<<<< * else: * self.index = hashindex_init(capacity, self.key_size, self.value_size) */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!(__pyx_v_self->index != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_hashindex_read_returned_NULL_wit); __PYX_ERR(0, 95, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":89 * def __cinit__(self, capacity=0, path=None, permit_compact=False): * self.key_size = self._key_size * if path: # <<<<<<<<<<<<<< * if isinstance(path, (str, bytes)): * with open(path, 'rb') as fd: */ goto __pyx_L3; } /* "borg/hashindex.pyx":97 * assert self.index, 'hashindex_read() returned NULL with no exception set' * else: * self.index = hashindex_init(capacity, self.key_size, self.value_size) # <<<<<<<<<<<<<< * if not self.index: * raise Exception('hashindex_init failed') */ /*else*/ { __pyx_t_2 = __Pyx_PyInt_As_int(__pyx_v_capacity); if (unlikely((__pyx_t_2 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 97, __pyx_L1_error) __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_value_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 97, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_15 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_15 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 97, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_v_self->index = hashindex_init(__pyx_t_2, __pyx_v_self->key_size, __pyx_t_15); /* "borg/hashindex.pyx":98 * else: * self.index = hashindex_init(capacity, self.key_size, self.value_size) * if not self.index: # <<<<<<<<<<<<<< * raise Exception('hashindex_init failed') * */ __pyx_t_3 = ((!(__pyx_v_self->index != 0)) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":99 * self.index = hashindex_init(capacity, self.key_size, self.value_size) * if not self.index: * raise 
Exception('hashindex_init failed') # <<<<<<<<<<<<<< * * def __dealloc__(self): */ __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 99, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(0, 99, __pyx_L1_error) /* "borg/hashindex.pyx":98 * else: * self.index = hashindex_init(capacity, self.key_size, self.value_size) * if not self.index: # <<<<<<<<<<<<<< * raise Exception('hashindex_init failed') * */ } } __pyx_L3:; /* "borg/hashindex.pyx":87 * MAX_VALUE = _MAX_VALUE * * def __cinit__(self, capacity=0, path=None, permit_compact=False): # <<<<<<<<<<<<<< * self.key_size = self._key_size * if path: */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_8); __Pyx_XDECREF(__pyx_t_9); __Pyx_AddTraceback("borg.hashindex.IndexBase.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_XDECREF(__pyx_v_fd); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":101 * raise Exception('hashindex_init failed') * * def __dealloc__(self): # <<<<<<<<<<<<<< * if self.index: * hashindex_free(self.index) */ /* Python wrapper */ static void __pyx_pw_4borg_9hashindex_9IndexBase_3__dealloc__(PyObject *__pyx_v_self); /*proto*/ static void __pyx_pw_4borg_9hashindex_9IndexBase_3__dealloc__(PyObject *__pyx_v_self) { __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); __pyx_pf_4borg_9hashindex_9IndexBase_2__dealloc__(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); } static void __pyx_pf_4borg_9hashindex_9IndexBase_2__dealloc__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self) { __Pyx_RefNannyDeclarations int __pyx_t_1; __Pyx_RefNannySetupContext("__dealloc__", 0); /* "borg/hashindex.pyx":102 * * def __dealloc__(self): * if self.index: # <<<<<<<<<<<<<< * hashindex_free(self.index) * */ __pyx_t_1 = (__pyx_v_self->index != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":103 * def __dealloc__(self): * if self.index: * hashindex_free(self.index) # <<<<<<<<<<<<<< * * @classmethod */ hashindex_free(__pyx_v_self->index); /* "borg/hashindex.pyx":102 * * def __dealloc__(self): * if self.index: # <<<<<<<<<<<<<< * hashindex_free(self.index) * */ } /* "borg/hashindex.pyx":101 * raise Exception('hashindex_init failed') * * def __dealloc__(self): # <<<<<<<<<<<<<< * if self.index: * hashindex_free(self.index) */ /* function exit code */ __Pyx_RefNannyFinishContext(); } /* "borg/hashindex.pyx":106 * * @classmethod * def read(cls, path, permit_compact=False): # <<<<<<<<<<<<<< * return cls(path=path, permit_compact=permit_compact) * */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_5read(PyObject *__pyx_v_cls, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_5read(PyObject *__pyx_v_cls, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_path = 0; PyObject *__pyx_v_permit_compact = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("read (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_path,&__pyx_n_s_permit_compact,0}; PyObject* values[2] = {0,0}; values[1] = ((PyObject *)Py_False); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = 
PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_path)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_permit_compact); if (value) { values[1] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "read") < 0)) __PYX_ERR(0, 106, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_path = values[0]; __pyx_v_permit_compact = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("read", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 106, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.IndexBase.read", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_4read(((PyTypeObject*)__pyx_v_cls), __pyx_v_path, __pyx_v_permit_compact); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_4read(PyTypeObject *__pyx_v_cls, PyObject *__pyx_v_path, PyObject *__pyx_v_permit_compact) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; __Pyx_RefNannySetupContext("read", 0); /* "borg/hashindex.pyx":107 * @classmethod * def read(cls, path, permit_compact=False): * return cls(path=path, permit_compact=permit_compact) # <<<<<<<<<<<<<< * * def write(self, path): */ __Pyx_XDECREF(__pyx_r); __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_path, __pyx_v_path) < 0) __PYX_ERR(0, 107, __pyx_L1_error) if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_permit_compact, __pyx_v_permit_compact) < 0) __PYX_ERR(0, 107, __pyx_L1_error) __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_v_cls), __pyx_empty_tuple, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 107, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_r = __pyx_t_2; __pyx_t_2 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":106 * * @classmethod * def read(cls, path, permit_compact=False): # <<<<<<<<<<<<<< * return cls(path=path, permit_compact=permit_compact) * */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.IndexBase.read", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":109 * return cls(path=path, permit_compact=permit_compact) * * def write(self, path): # <<<<<<<<<<<<<< * if isinstance(path, (str, bytes)): * with open(path, 'wb') as fd: */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_7write(PyObject *__pyx_v_self, PyObject *__pyx_v_path); /*proto*/ static 
PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_7write(PyObject *__pyx_v_self, PyObject *__pyx_v_path) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("write (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_6write(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), ((PyObject *)__pyx_v_path)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_6write(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_path) { PyObject *__pyx_v_fd = NULL; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; int __pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; PyObject *__pyx_t_10 = NULL; PyObject *__pyx_t_11 = NULL; PyObject *__pyx_t_12 = NULL; __Pyx_RefNannySetupContext("write", 0); /* "borg/hashindex.pyx":110 * * def write(self, path): * if isinstance(path, (str, bytes)): # <<<<<<<<<<<<<< * with open(path, 'wb') as fd: * hashindex_write(self.index, fd) */ __pyx_t_2 = PyString_Check(__pyx_v_path); __pyx_t_3 = (__pyx_t_2 != 0); if (!__pyx_t_3) { } else { __pyx_t_1 = __pyx_t_3; goto __pyx_L4_bool_binop_done; } __pyx_t_3 = PyBytes_Check(__pyx_v_path); __pyx_t_2 = (__pyx_t_3 != 0); __pyx_t_1 = __pyx_t_2; __pyx_L4_bool_binop_done:; __pyx_t_2 = (__pyx_t_1 != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":111 * def write(self, path): * if isinstance(path, (str, bytes)): * with open(path, 'wb') as fd: # <<<<<<<<<<<<<< * hashindex_write(self.index, fd) * else: */ /*with:*/ { __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 111, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_path); __Pyx_GIVEREF(__pyx_v_path); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_path); __Pyx_INCREF(__pyx_n_s_wb); __Pyx_GIVEREF(__pyx_n_s_wb); PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_n_s_wb); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_open, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 111, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __pyx_t_6 = __Pyx_PyObject_LookupSpecial(__pyx_t_5, __pyx_n_s_exit); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 111, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = __Pyx_PyObject_LookupSpecial(__pyx_t_5, __pyx_n_s_enter); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 111, __pyx_L6_error) __Pyx_GOTREF(__pyx_t_7); __pyx_t_8 = NULL; if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); if (likely(__pyx_t_8)) { PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); __Pyx_INCREF(__pyx_t_8); __Pyx_INCREF(function); __Pyx_DECREF_SET(__pyx_t_7, function); } } if (__pyx_t_8) { __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_8); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 111, __pyx_L6_error) __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; } else { __pyx_t_4 = __Pyx_PyObject_CallNoArg(__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 111, __pyx_L6_error) } __Pyx_GOTREF(__pyx_t_4); __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; __pyx_t_7 = __pyx_t_4; __pyx_t_4 = 0; __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; /*try:*/ { { __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __Pyx_ExceptionSave(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11); __Pyx_XGOTREF(__pyx_t_9); __Pyx_XGOTREF(__pyx_t_10); __Pyx_XGOTREF(__pyx_t_11); /*try:*/ { __pyx_v_fd = __pyx_t_7; __pyx_t_7 = 0; /* "borg/hashindex.pyx":112 * if 
isinstance(path, (str, bytes)): * with open(path, 'wb') as fd: * hashindex_write(self.index, fd) # <<<<<<<<<<<<<< * else: * hashindex_write(self.index, path) */ hashindex_write(__pyx_v_self->index, __pyx_v_fd); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 112, __pyx_L10_error) /* "borg/hashindex.pyx":111 * def write(self, path): * if isinstance(path, (str, bytes)): * with open(path, 'wb') as fd: # <<<<<<<<<<<<<< * hashindex_write(self.index, fd) * else: */ } __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0; __Pyx_XDECREF(__pyx_t_11); __pyx_t_11 = 0; goto __pyx_L15_try_end; __pyx_L10_error:; __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; /*except:*/ { __Pyx_AddTraceback("borg.hashindex.IndexBase.write", __pyx_clineno, __pyx_lineno, __pyx_filename); if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_5, &__pyx_t_4) < 0) __PYX_ERR(0, 111, __pyx_L12_except_error) __Pyx_GOTREF(__pyx_t_7); __Pyx_GOTREF(__pyx_t_5); __Pyx_GOTREF(__pyx_t_4); __pyx_t_8 = PyTuple_Pack(3, __pyx_t_7, __pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 111, __pyx_L12_except_error) __Pyx_GOTREF(__pyx_t_8); __pyx_t_12 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 111, __pyx_L12_except_error) __Pyx_GOTREF(__pyx_t_12); __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_12); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; if (__pyx_t_2 < 0) __PYX_ERR(0, 111, __pyx_L12_except_error) __pyx_t_1 = ((!(__pyx_t_2 != 0)) != 0); if (__pyx_t_1) { __Pyx_GIVEREF(__pyx_t_7); __Pyx_GIVEREF(__pyx_t_5); __Pyx_XGIVEREF(__pyx_t_4); __Pyx_ErrRestoreWithState(__pyx_t_7, __pyx_t_5, __pyx_t_4); __pyx_t_7 = 0; __pyx_t_5 = 0; __pyx_t_4 = 0; __PYX_ERR(0, 111, __pyx_L12_except_error) } __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; goto __pyx_L11_exception_handled; } __pyx_L12_except_error:; __Pyx_XGIVEREF(__pyx_t_9); __Pyx_XGIVEREF(__pyx_t_10); __Pyx_XGIVEREF(__pyx_t_11); __Pyx_ExceptionReset(__pyx_t_9, __pyx_t_10, __pyx_t_11); goto __pyx_L1_error; __pyx_L11_exception_handled:; __Pyx_XGIVEREF(__pyx_t_9); __Pyx_XGIVEREF(__pyx_t_10); __Pyx_XGIVEREF(__pyx_t_11); __Pyx_ExceptionReset(__pyx_t_9, __pyx_t_10, __pyx_t_11); __pyx_L15_try_end:; } } /*finally:*/ { /*normal exit:*/{ if (__pyx_t_6) { __pyx_t_11 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_tuple__3, NULL); __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 111, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_11); __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; } goto __pyx_L9; } __pyx_L9:; } goto __pyx_L19; __pyx_L6_error:; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; goto __pyx_L1_error; __pyx_L19:; } /* "borg/hashindex.pyx":110 * * def write(self, path): * if isinstance(path, (str, bytes)): # <<<<<<<<<<<<<< * with open(path, 'wb') as fd: * hashindex_write(self.index, fd) */ goto __pyx_L3; } /* "borg/hashindex.pyx":114 * hashindex_write(self.index, fd) * else: * hashindex_write(self.index, path) # <<<<<<<<<<<<<< * * def clear(self): */ /*else*/ { hashindex_write(__pyx_v_self->index, __pyx_v_path); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 114, __pyx_L1_error) } __pyx_L3:; /* "borg/hashindex.pyx":109 * return cls(path=path, permit_compact=permit_compact) * * def write(self, path): # <<<<<<<<<<<<<< * if isinstance(path, (str, bytes)): * with open(path, 'wb') as 
fd: */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_7); __Pyx_XDECREF(__pyx_t_8); __Pyx_AddTraceback("borg.hashindex.IndexBase.write", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XDECREF(__pyx_v_fd); __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":116 * hashindex_write(self.index, path) * * def clear(self): # <<<<<<<<<<<<<< * hashindex_free(self.index) * self.index = hashindex_init(0, self.key_size, self.value_size) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_9clear(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_9clear(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("clear (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_8clear(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_8clear(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; int __pyx_t_2; int __pyx_t_3; __Pyx_RefNannySetupContext("clear", 0); /* "borg/hashindex.pyx":117 * * def clear(self): * hashindex_free(self.index) # <<<<<<<<<<<<<< * self.index = hashindex_init(0, self.key_size, self.value_size) * if not self.index: */ hashindex_free(__pyx_v_self->index); /* "borg/hashindex.pyx":118 * def clear(self): * hashindex_free(self.index) * self.index = hashindex_init(0, self.key_size, self.value_size) # <<<<<<<<<<<<<< * if not self.index: * raise Exception('hashindex_init failed') */ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_value_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 118, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_2 = __Pyx_PyInt_As_int(__pyx_t_1); if (unlikely((__pyx_t_2 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 118, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_v_self->index = hashindex_init(0, __pyx_v_self->key_size, __pyx_t_2); /* "borg/hashindex.pyx":119 * hashindex_free(self.index) * self.index = hashindex_init(0, self.key_size, self.value_size) * if not self.index: # <<<<<<<<<<<<<< * raise Exception('hashindex_init failed') * */ __pyx_t_3 = ((!(__pyx_v_self->index != 0)) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":120 * self.index = hashindex_init(0, self.key_size, self.value_size) * if not self.index: * raise Exception('hashindex_init failed') # <<<<<<<<<<<<<< * * def setdefault(self, key, value): */ __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 120, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(0, 120, __pyx_L1_error) /* "borg/hashindex.pyx":119 * hashindex_free(self.index) * self.index = hashindex_init(0, self.key_size, self.value_size) * if not self.index: # <<<<<<<<<<<<<< * raise Exception('hashindex_init failed') * */ } /* "borg/hashindex.pyx":116 * hashindex_write(self.index, path) * * def clear(self): # <<<<<<<<<<<<<< * hashindex_free(self.index) * self.index = hashindex_init(0, self.key_size, self.value_size) */ /* function 
exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.IndexBase.clear", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":122 * raise Exception('hashindex_init failed') * * def setdefault(self, key, value): # <<<<<<<<<<<<<< * if not key in self: * self[key] = value */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_11setdefault(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_11setdefault(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_key = 0; PyObject *__pyx_v_value = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("setdefault (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_key,&__pyx_n_s_value,0}; PyObject* values[2] = {0,0}; if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_key)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_value)) != 0)) kw_args--; else { __Pyx_RaiseArgtupleInvalid("setdefault", 1, 2, 2, 1); __PYX_ERR(0, 122, __pyx_L3_error) } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "setdefault") < 0)) __PYX_ERR(0, 122, __pyx_L3_error) } } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { goto __pyx_L5_argtuple_error; } else { values[0] = PyTuple_GET_ITEM(__pyx_args, 0); values[1] = PyTuple_GET_ITEM(__pyx_args, 1); } __pyx_v_key = values[0]; __pyx_v_value = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("setdefault", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 122, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.IndexBase.setdefault", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_10setdefault(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), __pyx_v_key, __pyx_v_value); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_10setdefault(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; int __pyx_t_2; __Pyx_RefNannySetupContext("setdefault", 0); /* "borg/hashindex.pyx":123 * * def setdefault(self, key, value): * if not key in self: # <<<<<<<<<<<<<< * self[key] = value * */ __pyx_t_1 = (__Pyx_PySequence_ContainsTF(__pyx_v_key, ((PyObject *)__pyx_v_self), Py_NE)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(0, 123, __pyx_L1_error) __pyx_t_2 = (__pyx_t_1 != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":124 * def setdefault(self, key, value): * if not key in self: * self[key] = value # <<<<<<<<<<<<<< 
* * def __delitem__(self, key): */ if (unlikely(PyObject_SetItem(((PyObject *)__pyx_v_self), __pyx_v_key, __pyx_v_value) < 0)) __PYX_ERR(0, 124, __pyx_L1_error) /* "borg/hashindex.pyx":123 * * def setdefault(self, key, value): * if not key in self: # <<<<<<<<<<<<<< * self[key] = value * */ } /* "borg/hashindex.pyx":122 * raise Exception('hashindex_init failed') * * def setdefault(self, key, value): # <<<<<<<<<<<<<< * if not key in self: * self[key] = value */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.IndexBase.setdefault", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":126 * self[key] = value * * def __delitem__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * rc = hashindex_delete(self.index, key) */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_9IndexBase_13__delitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static int __pyx_pw_4borg_9hashindex_9IndexBase_13__delitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__delitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_12__delitem__(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_9IndexBase_12__delitem__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key) { int __pyx_v_rc; int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; __Pyx_RefNannySetupContext("__delitem__", 0); /* "borg/hashindex.pyx":127 * * def __delitem__(self, key): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * rc = hashindex_delete(self.index, key) * if rc == 1: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 127, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 127, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":128 * def __delitem__(self, key): * assert len(key) == self.key_size * rc = hashindex_delete(self.index, key) # <<<<<<<<<<<<<< * if rc == 1: * return # success */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 128, __pyx_L1_error) __pyx_v_rc = hashindex_delete(__pyx_v_self->index, ((char *)__pyx_t_2)); /* "borg/hashindex.pyx":129 * assert len(key) == self.key_size * rc = hashindex_delete(self.index, key) * if rc == 1: # <<<<<<<<<<<<<< * return # success * if rc == -1: */ __pyx_t_3 = ((__pyx_v_rc == 1) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":130 * rc = hashindex_delete(self.index, key) * if rc == 1: * return # success # <<<<<<<<<<<<<< * if rc == -1: * raise KeyError(key) */ __pyx_r = 0; goto __pyx_L0; /* "borg/hashindex.pyx":129 * assert len(key) == self.key_size * rc = hashindex_delete(self.index, key) * if rc == 1: # <<<<<<<<<<<<<< * return # success * if rc == -1: */ } /* "borg/hashindex.pyx":131 * if rc == 1: * return # success * if rc == -1: # <<<<<<<<<<<<<< * raise KeyError(key) * if rc == 0: */ __pyx_t_3 = ((__pyx_v_rc == -1L) != 0); if (__pyx_t_3) 
{ /* "borg/hashindex.pyx":132 * return # success * if rc == -1: * raise KeyError(key) # <<<<<<<<<<<<<< * if rc == 0: * raise Exception('hashindex_delete failed') */ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 132, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_key); __Pyx_GIVEREF(__pyx_v_key); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_key); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_KeyError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 132, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 132, __pyx_L1_error) /* "borg/hashindex.pyx":131 * if rc == 1: * return # success * if rc == -1: # <<<<<<<<<<<<<< * raise KeyError(key) * if rc == 0: */ } /* "borg/hashindex.pyx":133 * if rc == -1: * raise KeyError(key) * if rc == 0: # <<<<<<<<<<<<<< * raise Exception('hashindex_delete failed') * */ __pyx_t_3 = ((__pyx_v_rc == 0) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":134 * raise KeyError(key) * if rc == 0: * raise Exception('hashindex_delete failed') # <<<<<<<<<<<<<< * * def get(self, key, default=None): */ __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 134, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 134, __pyx_L1_error) /* "borg/hashindex.pyx":133 * if rc == -1: * raise KeyError(key) * if rc == 0: # <<<<<<<<<<<<<< * raise Exception('hashindex_delete failed') * */ } /* "borg/hashindex.pyx":126 * self[key] = value * * def __delitem__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * rc = hashindex_delete(self.index, key) */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_AddTraceback("borg.hashindex.IndexBase.__delitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":136 * raise Exception('hashindex_delete failed') * * def get(self, key, default=None): # <<<<<<<<<<<<<< * try: * return self[key] */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_15get(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_15get(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_key = 0; PyObject *__pyx_v_default = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("get (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_key,&__pyx_n_s_default,0}; PyObject* values[2] = {0,0}; values[1] = ((PyObject *)Py_None); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_key)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_default); if (value) { values[1] = value; kw_args--; } } } if (unlikely(kw_args > 
0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "get") < 0)) __PYX_ERR(0, 136, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_key = values[0]; __pyx_v_default = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("get", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 136, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.IndexBase.get", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_14get(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), __pyx_v_key, __pyx_v_default); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_14get(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_default) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; int __pyx_t_5; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; __Pyx_RefNannySetupContext("get", 0); /* "borg/hashindex.pyx":137 * * def get(self, key, default=None): * try: # <<<<<<<<<<<<<< * return self[key] * except KeyError: */ { __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); __Pyx_XGOTREF(__pyx_t_1); __Pyx_XGOTREF(__pyx_t_2); __Pyx_XGOTREF(__pyx_t_3); /*try:*/ { /* "borg/hashindex.pyx":138 * def get(self, key, default=None): * try: * return self[key] # <<<<<<<<<<<<<< * except KeyError: * return default */ __Pyx_XDECREF(__pyx_r); __pyx_t_4 = PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_key); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L3_error) __Pyx_GOTREF(__pyx_t_4); __pyx_r = __pyx_t_4; __pyx_t_4 = 0; goto __pyx_L7_try_return; /* "borg/hashindex.pyx":137 * * def get(self, key, default=None): * try: # <<<<<<<<<<<<<< * return self[key] * except KeyError: */ } __pyx_L3_error:; __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; /* "borg/hashindex.pyx":139 * try: * return self[key] * except KeyError: # <<<<<<<<<<<<<< * return default * */ __pyx_t_5 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_KeyError); if (__pyx_t_5) { __Pyx_AddTraceback("borg.hashindex.IndexBase.get", __pyx_clineno, __pyx_lineno, __pyx_filename); if (__Pyx_GetException(&__pyx_t_4, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(0, 139, __pyx_L5_except_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_GOTREF(__pyx_t_6); __Pyx_GOTREF(__pyx_t_7); /* "borg/hashindex.pyx":140 * return self[key] * except KeyError: * return default # <<<<<<<<<<<<<< * * def pop(self, key, default=_NoDefault): */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(__pyx_v_default); __pyx_r = __pyx_v_default; __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; goto __pyx_L6_except_return; } goto __pyx_L5_except_error; __pyx_L5_except_error:; /* "borg/hashindex.pyx":137 * * def get(self, key, default=None): * try: # <<<<<<<<<<<<<< * return self[key] * except KeyError: */ __Pyx_XGIVEREF(__pyx_t_1); __Pyx_XGIVEREF(__pyx_t_2); __Pyx_XGIVEREF(__pyx_t_3); __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); goto 
__pyx_L1_error; __pyx_L7_try_return:; __Pyx_XGIVEREF(__pyx_t_1); __Pyx_XGIVEREF(__pyx_t_2); __Pyx_XGIVEREF(__pyx_t_3); __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); goto __pyx_L0; __pyx_L6_except_return:; __Pyx_XGIVEREF(__pyx_t_1); __Pyx_XGIVEREF(__pyx_t_2); __Pyx_XGIVEREF(__pyx_t_3); __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); goto __pyx_L0; } /* "borg/hashindex.pyx":136 * raise Exception('hashindex_delete failed') * * def get(self, key, default=None): # <<<<<<<<<<<<<< * try: * return self[key] */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_AddTraceback("borg.hashindex.IndexBase.get", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":142 * return default * * def pop(self, key, default=_NoDefault): # <<<<<<<<<<<<<< * try: * value = self[key] */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_17pop(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_17pop(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_key = 0; PyObject *__pyx_v_default = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("pop (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_key,&__pyx_n_s_default,0}; PyObject* values[2] = {0,0}; values[1] = __pyx_k__6; if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_key)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_default); if (value) { values[1] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "pop") < 0)) __PYX_ERR(0, 142, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_key = values[0]; __pyx_v_default = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("pop", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 142, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.IndexBase.pop", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_16pop(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), __pyx_v_key, __pyx_v_default); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_16pop(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_default) { PyObject *__pyx_v_value = NULL; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; 
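/* Sketch of the Python-level semantics compiled below, restated from the
 * hashindex.pyx source quoted in the surrounding comments (lines 142-150):
 *
 *     def pop(self, key, default=_NoDefault):
 *         try:
 *             value = self[key]
 *             del self[key]
 *             return value
 *         except KeyError:
 *             if default != _NoDefault:
 *                 return default
 *             raise
 *
 * __pyx_t_1..__pyx_t_3 hold the exception state that Cython saves and
 * restores around every try/except; _NoDefault is a module-level sentinel
 * that lets pop() distinguish "no default given" from default=None. */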
PyObject *__pyx_t_2 = NULL; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; int __pyx_t_5; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; int __pyx_t_9; __Pyx_RefNannySetupContext("pop", 0); /* "borg/hashindex.pyx":143 * * def pop(self, key, default=_NoDefault): * try: # <<<<<<<<<<<<<< * value = self[key] * del self[key] */ { __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); __Pyx_XGOTREF(__pyx_t_1); __Pyx_XGOTREF(__pyx_t_2); __Pyx_XGOTREF(__pyx_t_3); /*try:*/ { /* "borg/hashindex.pyx":144 * def pop(self, key, default=_NoDefault): * try: * value = self[key] # <<<<<<<<<<<<<< * del self[key] * return value */ __pyx_t_4 = PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_key); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 144, __pyx_L3_error) __Pyx_GOTREF(__pyx_t_4); __pyx_v_value = __pyx_t_4; __pyx_t_4 = 0; /* "borg/hashindex.pyx":145 * try: * value = self[key] * del self[key] # <<<<<<<<<<<<<< * return value * except KeyError: */ if (unlikely(PyObject_DelItem(((PyObject *)__pyx_v_self), __pyx_v_key) < 0)) __PYX_ERR(0, 145, __pyx_L3_error) /* "borg/hashindex.pyx":146 * value = self[key] * del self[key] * return value # <<<<<<<<<<<<<< * except KeyError: * if default != _NoDefault: */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(__pyx_v_value); __pyx_r = __pyx_v_value; goto __pyx_L7_try_return; /* "borg/hashindex.pyx":143 * * def pop(self, key, default=_NoDefault): * try: # <<<<<<<<<<<<<< * value = self[key] * del self[key] */ } __pyx_L3_error:; __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; /* "borg/hashindex.pyx":147 * del self[key] * return value * except KeyError: # <<<<<<<<<<<<<< * if default != _NoDefault: * return default */ __pyx_t_5 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_KeyError); if (__pyx_t_5) { __Pyx_AddTraceback("borg.hashindex.IndexBase.pop", __pyx_clineno, __pyx_lineno, __pyx_filename); if (__Pyx_GetException(&__pyx_t_4, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(0, 147, __pyx_L5_except_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_GOTREF(__pyx_t_6); __Pyx_GOTREF(__pyx_t_7); /* "borg/hashindex.pyx":148 * return value * except KeyError: * if default != _NoDefault: # <<<<<<<<<<<<<< * return default * raise */ __pyx_t_8 = PyObject_RichCompare(__pyx_v_default, __pyx_v_4borg_9hashindex__NoDefault, Py_NE); __Pyx_XGOTREF(__pyx_t_8); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 148, __pyx_L5_except_error) __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 148, __pyx_L5_except_error) __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; if (__pyx_t_9) { /* "borg/hashindex.pyx":149 * except KeyError: * if default != _NoDefault: * return default # <<<<<<<<<<<<<< * raise * */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(__pyx_v_default); __pyx_r = __pyx_v_default; __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; goto __pyx_L6_except_return; /* "borg/hashindex.pyx":148 * return value * except KeyError: * if default != _NoDefault: # <<<<<<<<<<<<<< * return default * raise */ } /* "borg/hashindex.pyx":150 * if default != _NoDefault: * return default * raise # <<<<<<<<<<<<<< * * def __len__(self): */ __Pyx_GIVEREF(__pyx_t_4); __Pyx_GIVEREF(__pyx_t_6); __Pyx_XGIVEREF(__pyx_t_7); __Pyx_ErrRestoreWithState(__pyx_t_4, __pyx_t_6, __pyx_t_7); __pyx_t_4 = 0; __pyx_t_6 = 0; __pyx_t_7 = 0; __PYX_ERR(0, 150, __pyx_L5_except_error) } goto __pyx_L5_except_error; __pyx_L5_except_error:; /* "borg/hashindex.pyx":143 * * def pop(self, key, 
default=_NoDefault): * try: # <<<<<<<<<<<<<< * value = self[key] * del self[key] */ __Pyx_XGIVEREF(__pyx_t_1); __Pyx_XGIVEREF(__pyx_t_2); __Pyx_XGIVEREF(__pyx_t_3); __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); goto __pyx_L1_error; __pyx_L7_try_return:; __Pyx_XGIVEREF(__pyx_t_1); __Pyx_XGIVEREF(__pyx_t_2); __Pyx_XGIVEREF(__pyx_t_3); __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); goto __pyx_L0; __pyx_L6_except_return:; __Pyx_XGIVEREF(__pyx_t_1); __Pyx_XGIVEREF(__pyx_t_2); __Pyx_XGIVEREF(__pyx_t_3); __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); goto __pyx_L0; } /* "borg/hashindex.pyx":142 * return default * * def pop(self, key, default=_NoDefault): # <<<<<<<<<<<<<< * try: * value = self[key] */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_XDECREF(__pyx_t_8); __Pyx_AddTraceback("borg.hashindex.IndexBase.pop", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XDECREF(__pyx_v_value); __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":152 * raise * * def __len__(self): # <<<<<<<<<<<<<< * return hashindex_len(self.index) * */ /* Python wrapper */ static Py_ssize_t __pyx_pw_4borg_9hashindex_9IndexBase_19__len__(PyObject *__pyx_v_self); /*proto*/ static Py_ssize_t __pyx_pw_4borg_9hashindex_9IndexBase_19__len__(PyObject *__pyx_v_self) { Py_ssize_t __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_18__len__(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static Py_ssize_t __pyx_pf_4borg_9hashindex_9IndexBase_18__len__(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self) { Py_ssize_t __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__len__", 0); /* "borg/hashindex.pyx":153 * * def __len__(self): * return hashindex_len(self.index) # <<<<<<<<<<<<<< * * def size(self): */ __pyx_r = hashindex_len(__pyx_v_self->index); goto __pyx_L0; /* "borg/hashindex.pyx":152 * raise * * def __len__(self): # <<<<<<<<<<<<<< * return hashindex_len(self.index) * */ /* function exit code */ __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":155 * return hashindex_len(self.index) * * def size(self): # <<<<<<<<<<<<<< * """Return size (bytes) of hash table.""" * return hashindex_size(self.index) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_21size(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static char __pyx_doc_4borg_9hashindex_9IndexBase_20size[] = "Return size (bytes) of hash table."; static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_21size(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("size (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_20size(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_20size(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("size", 0); /* "borg/hashindex.pyx":157 * def size(self): * """Return size (bytes) of hash table.""" * return hashindex_size(self.index) # <<<<<<<<<<<<<< * * 
def compact(self): */ __Pyx_XDECREF(__pyx_r); __pyx_t_1 = __Pyx_PyInt_From_int(hashindex_size(__pyx_v_self->index)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_r = __pyx_t_1; __pyx_t_1 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":155 * return hashindex_len(self.index) * * def size(self): # <<<<<<<<<<<<<< * """Return size (bytes) of hash table.""" * return hashindex_size(self.index) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.IndexBase.size", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":159 * return hashindex_size(self.index) * * def compact(self): # <<<<<<<<<<<<<< * return hashindex_compact(self.index) * */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_23compact(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_23compact(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("compact (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_22compact(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_22compact(struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("compact", 0); /* "borg/hashindex.pyx":160 * * def compact(self): * return hashindex_compact(self.index) # <<<<<<<<<<<<<< * * */ __Pyx_XDECREF(__pyx_r); __pyx_t_1 = __Pyx_PyInt_From_uint64_t(hashindex_compact(__pyx_v_self->index)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_r = __pyx_t_1; __pyx_t_1 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":159 * return hashindex_size(self.index) * * def compact(self): # <<<<<<<<<<<<<< * return hashindex_compact(self.index) * */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.IndexBase.compact", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_25__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_25__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_24__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_24__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def 
__reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.IndexBase.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_27__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_9IndexBase_27__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_9IndexBase_26__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_IndexBase *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_9IndexBase_26__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_IndexBase *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.IndexBase.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":168 * _key_size = 16 * * def __getitem__(self, key): # <<<<<<<<<<<<<< * cdef FuseVersionsElement *data * assert len(key) == self.key_size */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_1__getitem__(PyObject *__pyx_v_self, PyObject 
*__pyx_v_key); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_1__getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17FuseVersionsIndex___getitem__(((struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17FuseVersionsIndex___getitem__(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, PyObject *__pyx_v_key) { FuseVersionsElement *__pyx_v_data; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; __Pyx_RefNannySetupContext("__getitem__", 0); /* "borg/hashindex.pyx":170 * def __getitem__(self, key): * cdef FuseVersionsElement *data * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if data == NULL: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 170, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 170, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":171 * cdef FuseVersionsElement *data * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if data == NULL: * raise KeyError(key) */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 171, __pyx_L1_error) __pyx_v_data = ((FuseVersionsElement *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":172 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if data == NULL: # <<<<<<<<<<<<<< * raise KeyError(key) * return _le32toh(data.version), PyBytes_FromStringAndSize(data.hash, 16) */ __pyx_t_3 = ((__pyx_v_data == NULL) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":173 * data = hashindex_get(self.index, key) * if data == NULL: * raise KeyError(key) # <<<<<<<<<<<<<< * return _le32toh(data.version), PyBytes_FromStringAndSize(data.hash, 16) * */ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 173, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_key); __Pyx_GIVEREF(__pyx_v_key); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_key); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_KeyError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 173, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 173, __pyx_L1_error) /* "borg/hashindex.pyx":172 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if data == NULL: # <<<<<<<<<<<<<< * raise KeyError(key) * return _le32toh(data.version), PyBytes_FromStringAndSize(data.hash, 16) */ } /* "borg/hashindex.pyx":174 * if data == NULL: * raise KeyError(key) * return _le32toh(data.version), PyBytes_FromStringAndSize(data.hash, 16) # <<<<<<<<<<<<<< * * def __setitem__(self, key, value): */ __Pyx_XDECREF(__pyx_r); __pyx_t_5 = __Pyx_PyInt_From_uint32_t(_le32toh(__pyx_v_data->version)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 174, __pyx_L1_error) 
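/* At this point __pyx_t_5 boxes _le32toh(data->version): the
 * FuseVersionsElement payload is kept little-endian inside the hash table,
 * so the version field is byte-swapped to host order before being returned
 * to Python as the first element of the (version, 16-byte hash) tuple
 * assembled just below. */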
__Pyx_GOTREF(__pyx_t_5); __pyx_t_4 = PyBytes_FromStringAndSize(__pyx_v_data->hash, 16); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 174, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 174, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_4); __pyx_t_5 = 0; __pyx_t_4 = 0; __pyx_r = __pyx_t_6; __pyx_t_6 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":168 * _key_size = 16 * * def __getitem__(self, key): # <<<<<<<<<<<<<< * cdef FuseVersionsElement *data * assert len(key) == self.key_size */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_AddTraceback("borg.hashindex.FuseVersionsIndex.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":176 * return _le32toh(data.version), PyBytes_FromStringAndSize(data.hash, 16) * * def __setitem__(self, key, value): # <<<<<<<<<<<<<< * cdef FuseVersionsElement data * assert len(key) == self.key_size */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_3__setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /*proto*/ static int __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_3__setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_2__setitem__(((struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key), ((PyObject *)__pyx_v_value)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_2__setitem__(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { FuseVersionsElement __pyx_v_data; int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; PyObject *__pyx_t_2 = NULL; uint32_t __pyx_t_3; int __pyx_t_4; int __pyx_t_5; char *__pyx_t_6; __Pyx_RefNannySetupContext("__setitem__", 0); /* "borg/hashindex.pyx":178 * def __setitem__(self, key, value): * cdef FuseVersionsElement data * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data.version = value[0] * assert data.version <= _MAX_VALUE, "maximum number of versions reached" */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 178, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 178, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":179 * cdef FuseVersionsElement data * assert len(key) == self.key_size * data.version = value[0] # <<<<<<<<<<<<<< * assert data.version <= _MAX_VALUE, "maximum number of versions reached" * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 179, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_As_uint32_t(__pyx_t_2); if (unlikely((__pyx_t_3 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 179, __pyx_L1_error) 
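/* __setitem__ expects value to be a (version, hash) pair: value[0] has just
 * been unboxed into a uint32_t, and the code compiled below asserts
 * version <= _MAX_VALUE, requires value[1] to be a bytes object of exactly
 * 16 bytes, then stores the struct via hashindex_set() with the version
 * converted back to little-endian by _htole32(). */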
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_v_data.version = __pyx_t_3; /* "borg/hashindex.pyx":180 * assert len(key) == self.key_size * data.version = value[0] * assert data.version <= _MAX_VALUE, "maximum number of versions reached" # <<<<<<<<<<<<<< * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: * raise TypeError("Expected bytes of length 16 for second value") */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_data.version <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_maximum_number_of_versions_reach); __PYX_ERR(0, 180, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":181 * data.version = value[0] * assert data.version <= _MAX_VALUE, "maximum number of versions reached" * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: # <<<<<<<<<<<<<< * raise TypeError("Expected bytes of length 16 for second value") * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_5 = ((!(PyBytes_CheckExact(__pyx_t_2) != 0)) != 0); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; if (!__pyx_t_5) { } else { __pyx_t_4 = __pyx_t_5; goto __pyx_L4_bool_binop_done; } __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 181, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_5 = ((PyBytes_GET_SIZE(__pyx_t_2) != 16) != 0); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_t_4 = __pyx_t_5; __pyx_L4_bool_binop_done:; if (__pyx_t_4) { /* "borg/hashindex.pyx":182 * assert data.version <= _MAX_VALUE, "maximum number of versions reached" * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: * raise TypeError("Expected bytes of length 16 for second value") # <<<<<<<<<<<<<< * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) * data.version = _htole32(data.version) */ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 182, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_Raise(__pyx_t_2, 0, 0, 0); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __PYX_ERR(0, 182, __pyx_L1_error) /* "borg/hashindex.pyx":181 * data.version = value[0] * assert data.version <= _MAX_VALUE, "maximum number of versions reached" * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: # <<<<<<<<<<<<<< * raise TypeError("Expected bytes of length 16 for second value") * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) */ } /* "borg/hashindex.pyx":183 * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: * raise TypeError("Expected bytes of length 16 for second value") * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) # <<<<<<<<<<<<<< * data.version = _htole32(data.version) * if not hashindex_set(self.index, key, &data): */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 183, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); memcpy(__pyx_v_data.hash, PyBytes_AS_STRING(__pyx_t_2), 16); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/hashindex.pyx":184 * raise TypeError("Expected bytes of length 16 for second value") * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) * data.version = _htole32(data.version) # <<<<<<<<<<<<<< * if not hashindex_set(self.index, key, &data): * raise Exception('hashindex_set 
failed') */ __pyx_v_data.version = _htole32(__pyx_v_data.version); /* "borg/hashindex.pyx":185 * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) * data.version = _htole32(data.version) * if not hashindex_set(self.index, key, &data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ __pyx_t_6 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_6) && PyErr_Occurred())) __PYX_ERR(0, 185, __pyx_L1_error) __pyx_t_4 = ((!(hashindex_set(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_6), ((void *)(&__pyx_v_data))) != 0)) != 0); if (__pyx_t_4) { /* "borg/hashindex.pyx":186 * data.version = _htole32(data.version) * if not hashindex_set(self.index, key, &data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def __contains__(self, key): */ __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 186, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_Raise(__pyx_t_2, 0, 0, 0); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __PYX_ERR(0, 186, __pyx_L1_error) /* "borg/hashindex.pyx":185 * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) * data.version = _htole32(data.version) * if not hashindex_set(self.index, key, &data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ } /* "borg/hashindex.pyx":176 * return _le32toh(data.version), PyBytes_FromStringAndSize(data.hash, 16) * * def __setitem__(self, key, value): # <<<<<<<<<<<<<< * cdef FuseVersionsElement data * assert len(key) == self.key_size */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.FuseVersionsIndex.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":188 * raise Exception('hashindex_set failed') * * def __contains__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * return hashindex_get(self.index, key) != NULL */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_5__contains__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static int __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_5__contains__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__contains__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_4__contains__(((struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_4__contains__(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, PyObject *__pyx_v_key) { int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; __Pyx_RefNannySetupContext("__contains__", 0); /* "borg/hashindex.pyx":189 * * def __contains__(self, key): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * return hashindex_get(self.index, key) != NULL * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 189, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 189, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":190 * def __contains__(self, key): * assert len(key) == self.key_size * 
return hashindex_get(self.index, key) != NULL # <<<<<<<<<<<<<< * * */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 190, __pyx_L1_error) __pyx_r = (hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2)) != NULL); goto __pyx_L0; /* "borg/hashindex.pyx":188 * raise Exception('hashindex_set failed') * * def __contains__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * return hashindex_get(self.index, key) != NULL */ /* function exit code */ __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.FuseVersionsIndex.__contains__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_6__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17FuseVersionsIndex_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.FuseVersionsIndex.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations 
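/* Like __reduce_cython__ above, this __setstate_cython__ stub only raises
 * TypeError("no default __reduce__ due to non-trivial __cinit__"): the
 * index objects wrap a C-allocated hash table, so Cython emits these
 * methods solely to make pickling fail loudly instead of producing an
 * unusable copy. */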
__Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17FuseVersionsIndex_8__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17FuseVersionsIndex_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.FuseVersionsIndex.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":197 * value_size = 8 * * def __getitem__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * data = hashindex_get(self.index, key) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_1__getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_1__getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_7NSIndex___getitem__(((struct __pyx_obj_4borg_9hashindex_NSIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex___getitem__(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_key) { uint32_t *__pyx_v_data; uint32_t __pyx_v_segment; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; __Pyx_RefNannySetupContext("__getitem__", 0); /* "borg/hashindex.pyx":198 * * def __getitem__(self, key): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if not data: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 198, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 198, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":199 * def __getitem__(self, key): * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # 
<<<<<<<<<<<<<< * if not data: * raise KeyError(key) */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 199, __pyx_L1_error) __pyx_v_data = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":200 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t segment = _le32toh(data[0]) */ __pyx_t_3 = ((!(__pyx_v_data != 0)) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":201 * data = hashindex_get(self.index, key) * if not data: * raise KeyError(key) # <<<<<<<<<<<<<< * cdef uint32_t segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" */ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 201, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_key); __Pyx_GIVEREF(__pyx_v_key); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_key); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_KeyError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 201, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 201, __pyx_L1_error) /* "borg/hashindex.pyx":200 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t segment = _le32toh(data[0]) */ } /* "borg/hashindex.pyx":202 * if not data: * raise KeyError(key) * cdef uint32_t segment = _le32toh(data[0]) # <<<<<<<<<<<<<< * assert segment <= _MAX_VALUE, "maximum number of segments reached" * return segment, _le32toh(data[1]) */ __pyx_v_segment = _le32toh((__pyx_v_data[0])); /* "borg/hashindex.pyx":203 * raise KeyError(key) * cdef uint32_t segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" # <<<<<<<<<<<<<< * return segment, _le32toh(data[1]) * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_segment <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_maximum_number_of_segments_reach); __PYX_ERR(0, 203, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":204 * cdef uint32_t segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" * return segment, _le32toh(data[1]) # <<<<<<<<<<<<<< * * def __setitem__(self, key, value): */ __Pyx_XDECREF(__pyx_r); __pyx_t_5 = __Pyx_PyInt_From_uint32_t(__pyx_v_segment); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __pyx_t_4 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[1]))); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 204, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 204, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_4); __pyx_t_5 = 0; __pyx_t_4 = 0; __pyx_r = __pyx_t_6; __pyx_t_6 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":197 * value_size = 8 * * def __getitem__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * data = hashindex_get(self.index, key) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_AddTraceback("borg.hashindex.NSIndex.__getitem__", __pyx_clineno, __pyx_lineno, 
__pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":206 * return segment, _le32toh(data[1]) * * def __setitem__(self, key, value): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * cdef uint32_t[2] data */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_7NSIndex_3__setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /*proto*/ static int __pyx_pw_4borg_9hashindex_7NSIndex_3__setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_7NSIndex_2__setitem__(((struct __pyx_obj_4borg_9hashindex_NSIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key), ((PyObject *)__pyx_v_value)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_7NSIndex_2__setitem__(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { uint32_t __pyx_v_data[2]; uint32_t __pyx_v_segment; int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; PyObject *__pyx_t_2 = NULL; uint32_t __pyx_t_3; char *__pyx_t_4; int __pyx_t_5; __Pyx_RefNannySetupContext("__setitem__", 0); /* "borg/hashindex.pyx":207 * * def __setitem__(self, key, value): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * cdef uint32_t[2] data * cdef uint32_t segment = value[0] */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 207, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 207, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":209 * assert len(key) == self.key_size * cdef uint32_t[2] data * cdef uint32_t segment = value[0] # <<<<<<<<<<<<<< * assert segment <= _MAX_VALUE, "maximum number of segments reached" * data[0] = _htole32(segment) */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 209, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_As_uint32_t(__pyx_t_2); if (unlikely((__pyx_t_3 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 209, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_v_segment = __pyx_t_3; /* "borg/hashindex.pyx":210 * cdef uint32_t[2] data * cdef uint32_t segment = value[0] * assert segment <= _MAX_VALUE, "maximum number of segments reached" # <<<<<<<<<<<<<< * data[0] = _htole32(segment) * data[1] = _htole32(value[1]) */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_segment <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_maximum_number_of_segments_reach); __PYX_ERR(0, 210, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":211 * cdef uint32_t segment = value[0] * assert segment <= _MAX_VALUE, "maximum number of segments reached" * data[0] = _htole32(segment) # <<<<<<<<<<<<<< * data[1] = _htole32(value[1]) * if not hashindex_set(self.index, key, data): */ (__pyx_v_data[0]) = _htole32(__pyx_v_segment); /* "borg/hashindex.pyx":212 * assert segment <= _MAX_VALUE, "maximum number of segments reached" * data[0] = _htole32(segment) * data[1] = _htole32(value[1]) # <<<<<<<<<<<<<< * if not hashindex_set(self.index, key, data): * raise 
Exception('hashindex_set failed') */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 212, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_As_uint32_t(__pyx_t_2); if (unlikely((__pyx_t_3 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 212, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; (__pyx_v_data[1]) = _htole32(__pyx_t_3); /* "borg/hashindex.pyx":213 * data[0] = _htole32(segment) * data[1] = _htole32(value[1]) * if not hashindex_set(self.index, key, data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ __pyx_t_4 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_4) && PyErr_Occurred())) __PYX_ERR(0, 213, __pyx_L1_error) __pyx_t_5 = ((!(hashindex_set(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_4), __pyx_v_data) != 0)) != 0); if (__pyx_t_5) { /* "borg/hashindex.pyx":214 * data[1] = _htole32(value[1]) * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def __contains__(self, key): */ __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_Raise(__pyx_t_2, 0, 0, 0); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __PYX_ERR(0, 214, __pyx_L1_error) /* "borg/hashindex.pyx":213 * data[0] = _htole32(segment) * data[1] = _htole32(value[1]) * if not hashindex_set(self.index, key, data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ } /* "borg/hashindex.pyx":206 * return segment, _le32toh(data[1]) * * def __setitem__(self, key, value): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * cdef uint32_t[2] data */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.NSIndex.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":216 * raise Exception('hashindex_set failed') * * def __contains__(self, key): # <<<<<<<<<<<<<< * cdef uint32_t segment * assert len(key) == self.key_size */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_7NSIndex_5__contains__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static int __pyx_pw_4borg_9hashindex_7NSIndex_5__contains__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__contains__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_7NSIndex_4__contains__(((struct __pyx_obj_4borg_9hashindex_NSIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_7NSIndex_4__contains__(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_key) { uint32_t __pyx_v_segment; uint32_t *__pyx_v_data; int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; __Pyx_RefNannySetupContext("__contains__", 0); /* "borg/hashindex.pyx":218 * def __contains__(self, key): * cdef uint32_t segment * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if data != NULL: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 218, __pyx_L1_error) if 
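/* Note: NSIndex.__setitem__ (pyx:206-214, generated above) converts both
 * tuple members through _htole32() into a uint32_t[2] before calling
 * hashindex_set(), so the stored format is little-endian regardless of host
 * byte order; a failed insert raises a plain Exception('hashindex_set
 * failed'). Hedged sketch with illustrative names:
 *
 *     try:
 *         idx[key] = (segment, offset)
 *     except Exception:          # raised when hashindex_set() reports failure
 *         handle_index_error()   # hypothetical caller-side handler
 */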
(unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 218, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":219 * cdef uint32_t segment * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if data != NULL: * segment = _le32toh(data[0]) */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 219, __pyx_L1_error) __pyx_v_data = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":220 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if data != NULL: # <<<<<<<<<<<<<< * segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" */ __pyx_t_3 = ((__pyx_v_data != NULL) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":221 * data = hashindex_get(self.index, key) * if data != NULL: * segment = _le32toh(data[0]) # <<<<<<<<<<<<<< * assert segment <= _MAX_VALUE, "maximum number of segments reached" * return data != NULL */ __pyx_v_segment = _le32toh((__pyx_v_data[0])); /* "borg/hashindex.pyx":222 * if data != NULL: * segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" # <<<<<<<<<<<<<< * return data != NULL * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_segment <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_maximum_number_of_segments_reach); __PYX_ERR(0, 222, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":220 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if data != NULL: # <<<<<<<<<<<<<< * segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" */ } /* "borg/hashindex.pyx":223 * segment = _le32toh(data[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" * return data != NULL # <<<<<<<<<<<<<< * * def iteritems(self, marker=None): */ __pyx_r = (__pyx_v_data != NULL); goto __pyx_L0; /* "borg/hashindex.pyx":216 * raise Exception('hashindex_set failed') * * def __contains__(self, key): # <<<<<<<<<<<<<< * cdef uint32_t segment * assert len(key) == self.key_size */ /* function exit code */ __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.NSIndex.__contains__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":225 * return data != NULL * * def iteritems(self, marker=None): # <<<<<<<<<<<<<< * cdef const void *key * iter = NSKeyIterator(self.key_size) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_7iteritems(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_7iteritems(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_marker = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("iteritems (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_marker,0}; PyObject* values[1] = {0}; values[0] = ((PyObject *)Py_None); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if 
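/* Note: NSIndex.__contains__ (pyx:216-223, generated above) performs a lookup
 * and, when the entry exists, additionally asserts that the stored segment is
 * <= _MAX_VALUE, so even a membership test can surface a corrupt value
 * (unless assertions are disabled via CYTHON_WITHOUT_ASSERTIONS or Python's
 * optimize flag). Sketch:
 *
 *     if key in idx:                  # may raise AssertionError on bad data
 *         segment, offset = idx[key]
 */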
(kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_marker); if (value) { values[0] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "iteritems") < 0)) __PYX_ERR(0, 225, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_marker = values[0]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("iteritems", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 225, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.NSIndex.iteritems", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_7NSIndex_6iteritems(((struct __pyx_obj_4borg_9hashindex_NSIndex *)__pyx_v_self), __pyx_v_marker); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex_6iteritems(struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, PyObject *__pyx_v_marker) { void const *__pyx_v_key; struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_iter = NULL; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; HashIndex *__pyx_t_3; int __pyx_t_4; char *__pyx_t_5; int __pyx_t_6; __Pyx_RefNannySetupContext("iteritems", 0); /* "borg/hashindex.pyx":227 * def iteritems(self, marker=None): * cdef const void *key * iter = NSKeyIterator(self.key_size) # <<<<<<<<<<<<<< * iter.idx = self * iter.index = self.index */ __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->__pyx_base.key_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 227, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 227, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_GIVEREF(__pyx_t_1); PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); __pyx_t_1 = 0; __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_4borg_9hashindex_NSKeyIterator), __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 227, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_v_iter = ((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)__pyx_t_1); __pyx_t_1 = 0; /* "borg/hashindex.pyx":228 * cdef const void *key * iter = NSKeyIterator(self.key_size) * iter.idx = self # <<<<<<<<<<<<<< * iter.index = self.index * if marker: */ __Pyx_INCREF(((PyObject *)__pyx_v_self)); __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); __Pyx_GOTREF(__pyx_v_iter->idx); __Pyx_DECREF(((PyObject *)__pyx_v_iter->idx)); __pyx_v_iter->idx = __pyx_v_self; /* "borg/hashindex.pyx":229 * iter = NSKeyIterator(self.key_size) * iter.idx = self * iter.index = self.index # <<<<<<<<<<<<<< * if marker: * key = hashindex_get(self.index, marker) */ __pyx_t_3 = __pyx_v_self->__pyx_base.index; __pyx_v_iter->index = __pyx_t_3; /* "borg/hashindex.pyx":230 * iter.idx = self * iter.index = self.index * if marker: # <<<<<<<<<<<<<< * key = hashindex_get(self.index, marker) * if marker is None: */ __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_marker); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 230, __pyx_L1_error) if (__pyx_t_4) { /* "borg/hashindex.pyx":231 * iter.index = self.index * if marker: * key = hashindex_get(self.index, marker) # <<<<<<<<<<<<<< * if marker is None: * raise IndexError */ __pyx_t_5 = 
__Pyx_PyObject_AsWritableString(__pyx_v_marker); if (unlikely((!__pyx_t_5) && PyErr_Occurred())) __PYX_ERR(0, 231, __pyx_L1_error) __pyx_v_key = hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_5)); /* "borg/hashindex.pyx":232 * if marker: * key = hashindex_get(self.index, marker) * if marker is None: # <<<<<<<<<<<<<< * raise IndexError * iter.key = key - self.key_size */ __pyx_t_4 = (__pyx_v_marker == Py_None); __pyx_t_6 = (__pyx_t_4 != 0); if (__pyx_t_6) { /* "borg/hashindex.pyx":233 * key = hashindex_get(self.index, marker) * if marker is None: * raise IndexError # <<<<<<<<<<<<<< * iter.key = key - self.key_size * return iter */ __Pyx_Raise(__pyx_builtin_IndexError, 0, 0, 0); __PYX_ERR(0, 233, __pyx_L1_error) /* "borg/hashindex.pyx":232 * if marker: * key = hashindex_get(self.index, marker) * if marker is None: # <<<<<<<<<<<<<< * raise IndexError * iter.key = key - self.key_size */ } /* "borg/hashindex.pyx":234 * if marker is None: * raise IndexError * iter.key = key - self.key_size # <<<<<<<<<<<<<< * return iter * */ __pyx_v_iter->key = (__pyx_v_key - __pyx_v_self->__pyx_base.key_size); /* "borg/hashindex.pyx":230 * iter.idx = self * iter.index = self.index * if marker: # <<<<<<<<<<<<<< * key = hashindex_get(self.index, marker) * if marker is None: */ } /* "borg/hashindex.pyx":235 * raise IndexError * iter.key = key - self.key_size * return iter # <<<<<<<<<<<<<< * * */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(((PyObject *)__pyx_v_iter)); __pyx_r = ((PyObject *)__pyx_v_iter); goto __pyx_L0; /* "borg/hashindex.pyx":225 * return data != NULL * * def iteritems(self, marker=None): # <<<<<<<<<<<<<< * cdef const void *key * iter = NSKeyIterator(self.key_size) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.NSIndex.iteritems", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XDECREF((PyObject *)__pyx_v_iter); __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_9__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_9__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_7NSIndex_8__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_NSIndex *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex_8__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 
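/* Note: NSIndex.iteritems (pyx:225-235, generated above) builds an
 * NSKeyIterator and, for a given marker, sets iter.key one key_size before
 * the marker's bucket, so iteration appears to resume at the marker's own
 * entry. The inner `if marker is None: raise IndexError` guard sits inside
 * `if marker:` and is therefore unreachable; a NULL check on the `key`
 * returned by hashindex_get() was presumably intended. Resumption sketch
 * (illustrative names):
 *
 *     for key, (segment, offset) in idx.iteritems(marker=last_seen_key):
 *         process(key, segment, offset)    # hypothetical consumer
 */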
0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.NSIndex.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_11__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_7NSIndex_11__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_7NSIndex_10__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_NSIndex *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_7NSIndex_10__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSIndex *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.NSIndex.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":245 * cdef int exhausted * * def __cinit__(self, key_size): # <<<<<<<<<<<<<< * self.key = NULL * self.key_size = key_size */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_13NSKeyIterator_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static int __pyx_pw_4borg_9hashindex_13NSKeyIterator_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_key_size = 0; int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_key_size_2,0}; PyObject* values[1] = {0}; if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = 
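/* Note: NSKeyIterator.__cinit__ (pyx:245-248, generated below) starts with
 * key = NULL and exhausted = 0, so the first __next__ call asks
 * hashindex_next_key() for the first occupied slot. The iterator also keeps
 * a reference to its owning NSIndex in `idx` (assigned by iteritems), which
 * prevents the underlying HashIndex from being freed mid-iteration.
 */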
PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_key_size_2)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(0, 245, __pyx_L3_error) } } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { goto __pyx_L5_argtuple_error; } else { values[0] = PyTuple_GET_ITEM(__pyx_args, 0); } __pyx_v_key_size = values[0]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 245, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.NSKeyIterator.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return -1; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_13NSKeyIterator___cinit__(((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)__pyx_v_self), __pyx_v_key_size); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_13NSKeyIterator___cinit__(struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self, PyObject *__pyx_v_key_size) { int __pyx_r; __Pyx_RefNannyDeclarations int __pyx_t_1; __Pyx_RefNannySetupContext("__cinit__", 0); /* "borg/hashindex.pyx":246 * * def __cinit__(self, key_size): * self.key = NULL # <<<<<<<<<<<<<< * self.key_size = key_size * self.exhausted = 0 */ __pyx_v_self->key = NULL; /* "borg/hashindex.pyx":247 * def __cinit__(self, key_size): * self.key = NULL * self.key_size = key_size # <<<<<<<<<<<<<< * self.exhausted = 0 * */ __pyx_t_1 = __Pyx_PyInt_As_int(__pyx_v_key_size); if (unlikely((__pyx_t_1 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 247, __pyx_L1_error) __pyx_v_self->key_size = __pyx_t_1; /* "borg/hashindex.pyx":248 * self.key = NULL * self.key_size = key_size * self.exhausted = 0 # <<<<<<<<<<<<<< * * def __iter__(self): */ __pyx_v_self->exhausted = 0; /* "borg/hashindex.pyx":245 * cdef int exhausted * * def __cinit__(self, key_size): # <<<<<<<<<<<<<< * self.key = NULL * self.key_size = key_size */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.NSKeyIterator.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":250 * self.exhausted = 0 * * def __iter__(self): # <<<<<<<<<<<<<< * return self * */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_3__iter__(PyObject *__pyx_v_self); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_3__iter__(PyObject *__pyx_v_self) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__iter__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_13NSKeyIterator_2__iter__(((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_2__iter__(struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__iter__", 0); /* "borg/hashindex.pyx":251 * * def 
__iter__(self): * return self # <<<<<<<<<<<<<< * * def __next__(self): */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(((PyObject *)__pyx_v_self)); __pyx_r = ((PyObject *)__pyx_v_self); goto __pyx_L0; /* "borg/hashindex.pyx":250 * self.exhausted = 0 * * def __iter__(self): # <<<<<<<<<<<<<< * return self * */ /* function exit code */ __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":253 * return self * * def __next__(self): # <<<<<<<<<<<<<< * if self.exhausted: * raise StopIteration */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_5__next__(PyObject *__pyx_v_self); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_5__next__(PyObject *__pyx_v_self) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__next__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_13NSKeyIterator_4__next__(((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_4__next__(struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self) { uint32_t *__pyx_v_value; uint32_t __pyx_v_segment; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; PyObject *__pyx_t_2 = NULL; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; __Pyx_RefNannySetupContext("__next__", 0); /* "borg/hashindex.pyx":254 * * def __next__(self): * if self.exhausted: # <<<<<<<<<<<<<< * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) */ __pyx_t_1 = (__pyx_v_self->exhausted != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":255 * def __next__(self): * if self.exhausted: * raise StopIteration # <<<<<<<<<<<<<< * self.key = hashindex_next_key(self.index, self.key) * if not self.key: */ __Pyx_Raise(__pyx_builtin_StopIteration, 0, 0, 0); __PYX_ERR(0, 255, __pyx_L1_error) /* "borg/hashindex.pyx":254 * * def __next__(self): * if self.exhausted: # <<<<<<<<<<<<<< * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) */ } /* "borg/hashindex.pyx":256 * if self.exhausted: * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) # <<<<<<<<<<<<<< * if not self.key: * self.exhausted = 1 */ __pyx_v_self->key = hashindex_next_key(__pyx_v_self->index, ((char *)__pyx_v_self->key)); /* "borg/hashindex.pyx":257 * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) * if not self.key: # <<<<<<<<<<<<<< * self.exhausted = 1 * raise StopIteration */ __pyx_t_1 = ((!(__pyx_v_self->key != 0)) != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":258 * self.key = hashindex_next_key(self.index, self.key) * if not self.key: * self.exhausted = 1 # <<<<<<<<<<<<<< * raise StopIteration * cdef uint32_t *value = (self.key + self.key_size) */ __pyx_v_self->exhausted = 1; /* "borg/hashindex.pyx":259 * if not self.key: * self.exhausted = 1 * raise StopIteration # <<<<<<<<<<<<<< * cdef uint32_t *value = (self.key + self.key_size) * cdef uint32_t segment = _le32toh(value[0]) */ __Pyx_Raise(__pyx_builtin_StopIteration, 0, 0, 0); __PYX_ERR(0, 259, __pyx_L1_error) /* "borg/hashindex.pyx":257 * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) * if not self.key: # <<<<<<<<<<<<<< * self.exhausted = 1 * raise StopIteration */ } /* "borg/hashindex.pyx":260 * self.exhausted = 1 * raise StopIteration * cdef uint32_t *value = (self.key + self.key_size) # <<<<<<<<<<<<<< * cdef uint32_t 
segment = _le32toh(value[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" */ __pyx_v_value = ((uint32_t *)(__pyx_v_self->key + __pyx_v_self->key_size)); /* "borg/hashindex.pyx":261 * raise StopIteration * cdef uint32_t *value = (self.key + self.key_size) * cdef uint32_t segment = _le32toh(value[0]) # <<<<<<<<<<<<<< * assert segment <= _MAX_VALUE, "maximum number of segments reached" * return (self.key)[:self.key_size], (segment, _le32toh(value[1])) */ __pyx_v_segment = _le32toh((__pyx_v_value[0])); /* "borg/hashindex.pyx":262 * cdef uint32_t *value = (self.key + self.key_size) * cdef uint32_t segment = _le32toh(value[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" # <<<<<<<<<<<<<< * return (self.key)[:self.key_size], (segment, _le32toh(value[1])) * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_segment <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_maximum_number_of_segments_reach); __PYX_ERR(0, 262, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":263 * cdef uint32_t segment = _le32toh(value[0]) * assert segment <= _MAX_VALUE, "maximum number of segments reached" * return (self.key)[:self.key_size], (segment, _le32toh(value[1])) # <<<<<<<<<<<<<< * * */ __Pyx_XDECREF(__pyx_r); __pyx_t_2 = __Pyx_PyBytes_FromStringAndSize(((char *)__pyx_v_self->key) + 0, __pyx_v_self->key_size - 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 263, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_From_uint32_t(__pyx_v_segment); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 263, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __pyx_t_4 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_value[1]))); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 263, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 263, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_4); __pyx_t_3 = 0; __pyx_t_4 = 0; __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 263, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_5); __pyx_t_2 = 0; __pyx_t_5 = 0; __pyx_r = __pyx_t_4; __pyx_t_4 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":253 * return self * * def __next__(self): # <<<<<<<<<<<<<< * if self.exhausted: * raise StopIteration */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_XDECREF(__pyx_t_3); __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_AddTraceback("borg.hashindex.NSKeyIterator.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = 
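/* Note: NSKeyIterator.__next__ (pyx:253-263, generated above) advances with
 * hashindex_next_key(), latches `exhausted` once NULL comes back, and yields
 * (key_bytes, (segment, offset)) with both values decoded via _le32toh().
 * Minimal sketch of the resulting Python iteration protocol:
 *
 *     it = idx.iteritems()
 *     while True:
 *         try:
 *             key, (segment, offset) = next(it)
 *         except StopIteration:   # raised again on every subsequent call
 *             break
 */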
__pyx_pf_4borg_9hashindex_13NSKeyIterator_6__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__16, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.NSKeyIterator.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_13NSKeyIterator_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_13NSKeyIterator_8__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_13NSKeyIterator_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_NSKeyIterator *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ 
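/* Note: the ChunkIndex methods that follow use three uint32 slots per entry
 * (value_size = 12): refcount, size and csize, all stored little-endian.
 * Refcounts are capped at _MAX_VALUE; a stored refcount above that cap trips
 * the "invalid reference count" assertions below.
 */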
__pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.NSKeyIterator.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":288 * value_size = 12 * * def __getitem__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * data = hashindex_get(self.index, key) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_1__getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_1__getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex___getitem__(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex___getitem__(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key) { uint32_t *__pyx_v_data; uint32_t __pyx_v_refcount; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; int __pyx_t_10; PyObject *__pyx_t_11 = NULL; __Pyx_RefNannySetupContext("__getitem__", 0); /* "borg/hashindex.pyx":289 * * def __getitem__(self, key): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if not data: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 289, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 289, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":290 * def __getitem__(self, key): * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if not data: * raise KeyError(key) */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 290, __pyx_L1_error) __pyx_v_data = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":291 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) */ __pyx_t_3 = ((!(__pyx_v_data != 0)) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":292 * data = hashindex_get(self.index, key) * if not data: * raise KeyError(key) # <<<<<<<<<<<<<< * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" */ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 292, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_key); __Pyx_GIVEREF(__pyx_v_key); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_key); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_KeyError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 292, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 292, __pyx_L1_error) /* 
"borg/hashindex.pyx":291 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) */ } /* "borg/hashindex.pyx":293 * if not data: * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) # <<<<<<<<<<<<<< * assert refcount <= _MAX_VALUE, "invalid reference count" * return ChunkIndexEntry(refcount, _le32toh(data[1]), _le32toh(data[2])) */ __pyx_v_refcount = _le32toh((__pyx_v_data[0])); /* "borg/hashindex.pyx":294 * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * return ChunkIndexEntry(refcount, _le32toh(data[1]), _le32toh(data[2])) * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 294, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":295 * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * return ChunkIndexEntry(refcount, _le32toh(data[1]), _le32toh(data[2])) # <<<<<<<<<<<<<< * * def __setitem__(self, key, value): */ __Pyx_XDECREF(__pyx_r); __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_ChunkIndexEntry); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_6 = __Pyx_PyInt_From_uint32_t(__pyx_v_refcount); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[1]))); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_7); __pyx_t_8 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[2]))); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_8); __pyx_t_9 = NULL; __pyx_t_10 = 0; if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_4); if (likely(__pyx_t_9)) { PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); __Pyx_INCREF(__pyx_t_9); __Pyx_INCREF(function); __Pyx_DECREF_SET(__pyx_t_4, function); __pyx_t_10 = 1; } } #if CYTHON_FAST_PYCALL if (PyFunction_Check(__pyx_t_4)) { PyObject *__pyx_temp[4] = {__pyx_t_9, __pyx_t_6, __pyx_t_7, __pyx_t_8}; __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_10, 3+__pyx_t_10); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; } else #endif #if CYTHON_FAST_PYCCALL if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) { PyObject *__pyx_temp[4] = {__pyx_t_9, __pyx_t_6, __pyx_t_7, __pyx_t_8}; __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_10, 3+__pyx_t_10); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; } else #endif { __pyx_t_11 = PyTuple_New(3+__pyx_t_10); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_11); if (__pyx_t_9) { __Pyx_GIVEREF(__pyx_t_9); PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_9); __pyx_t_9 = NULL; } __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_11, 0+__pyx_t_10, __pyx_t_6); __Pyx_GIVEREF(__pyx_t_7); 
PyTuple_SET_ITEM(__pyx_t_11, 1+__pyx_t_10, __pyx_t_7); __Pyx_GIVEREF(__pyx_t_8); PyTuple_SET_ITEM(__pyx_t_11, 2+__pyx_t_10, __pyx_t_8); __pyx_t_6 = 0; __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_11, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 295, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; } __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __pyx_r = __pyx_t_5; __pyx_t_5 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":288 * value_size = 12 * * def __getitem__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * data = hashindex_get(self.index, key) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_XDECREF(__pyx_t_8); __Pyx_XDECREF(__pyx_t_9); __Pyx_XDECREF(__pyx_t_11); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":297 * return ChunkIndexEntry(refcount, _le32toh(data[1]), _le32toh(data[2])) * * def __setitem__(self, key, value): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * cdef uint32_t[3] data */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_10ChunkIndex_3__setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value); /*proto*/ static int __pyx_pw_4borg_9hashindex_10ChunkIndex_3__setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_2__setitem__(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key), ((PyObject *)__pyx_v_value)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_10ChunkIndex_2__setitem__(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_value) { uint32_t __pyx_v_data[3]; uint32_t __pyx_v_refcount; int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; PyObject *__pyx_t_2 = NULL; uint32_t __pyx_t_3; char *__pyx_t_4; int __pyx_t_5; __Pyx_RefNannySetupContext("__setitem__", 0); /* "borg/hashindex.pyx":298 * * def __setitem__(self, key, value): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * cdef uint32_t[3] data * cdef uint32_t refcount = value[0] */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 298, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 298, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":300 * assert len(key) == self.key_size * cdef uint32_t[3] data * cdef uint32_t refcount = value[0] # <<<<<<<<<<<<<< * assert refcount <= _MAX_VALUE, "invalid reference count" * data[0] = _htole32(refcount) */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 300, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_As_uint32_t(__pyx_t_2); if (unlikely((__pyx_t_3 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 300, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_v_refcount = __pyx_t_3; /* "borg/hashindex.pyx":301 * cdef 
uint32_t[3] data * cdef uint32_t refcount = value[0] * assert refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * data[0] = _htole32(refcount) * data[1] = _htole32(value[1]) */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 301, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":302 * cdef uint32_t refcount = value[0] * assert refcount <= _MAX_VALUE, "invalid reference count" * data[0] = _htole32(refcount) # <<<<<<<<<<<<<< * data[1] = _htole32(value[1]) * data[2] = _htole32(value[2]) */ (__pyx_v_data[0]) = _htole32(__pyx_v_refcount); /* "borg/hashindex.pyx":303 * assert refcount <= _MAX_VALUE, "invalid reference count" * data[0] = _htole32(refcount) * data[1] = _htole32(value[1]) # <<<<<<<<<<<<<< * data[2] = _htole32(value[2]) * if not hashindex_set(self.index, key, data): */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 303, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_As_uint32_t(__pyx_t_2); if (unlikely((__pyx_t_3 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 303, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; (__pyx_v_data[1]) = _htole32(__pyx_t_3); /* "borg/hashindex.pyx":304 * data[0] = _htole32(refcount) * data[1] = _htole32(value[1]) * data[2] = _htole32(value[2]) # <<<<<<<<<<<<<< * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') */ __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_value, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 304, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_As_uint32_t(__pyx_t_2); if (unlikely((__pyx_t_3 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 304, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; (__pyx_v_data[2]) = _htole32(__pyx_t_3); /* "borg/hashindex.pyx":305 * data[1] = _htole32(value[1]) * data[2] = _htole32(value[2]) * if not hashindex_set(self.index, key, data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ __pyx_t_4 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_4) && PyErr_Occurred())) __PYX_ERR(0, 305, __pyx_L1_error) __pyx_t_5 = ((!(hashindex_set(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_4), __pyx_v_data) != 0)) != 0); if (__pyx_t_5) { /* "borg/hashindex.pyx":306 * data[2] = _htole32(value[2]) * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def __contains__(self, key): */ __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 306, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_Raise(__pyx_t_2, 0, 0, 0); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __PYX_ERR(0, 306, __pyx_L1_error) /* "borg/hashindex.pyx":305 * data[1] = _htole32(value[1]) * data[2] = _htole32(value[2]) * if not hashindex_set(self.index, key, data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ } /* "borg/hashindex.pyx":297 * return ChunkIndexEntry(refcount, _le32toh(data[1]), _le32toh(data[2])) * * def __setitem__(self, key, value): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * cdef uint32_t[3] data */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.__setitem__", 
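/* Note: ChunkIndex.__setitem__ (pyx:297-306, generated above) mirrors
 * __getitem__: it converts value[0..2] through _htole32() into a uint32_t[3]
 * and asserts refcount <= _MAX_VALUE before storing. Sketch:
 *
 *     chunk_idx[chunk_id] = (1, size, csize)   # fresh chunk with refcount 1
 */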
__pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":308 * raise Exception('hashindex_set failed') * * def __contains__(self, key): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * data = hashindex_get(self.index, key) */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_10ChunkIndex_5__contains__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static int __pyx_pw_4borg_9hashindex_10ChunkIndex_5__contains__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__contains__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_4__contains__(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_10ChunkIndex_4__contains__(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key) { uint32_t *__pyx_v_data; int __pyx_r; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; __Pyx_RefNannySetupContext("__contains__", 0); /* "borg/hashindex.pyx":309 * * def __contains__(self, key): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if data != NULL: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 309, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 309, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":310 * def __contains__(self, key): * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if data != NULL: * assert _le32toh(data[0]) <= _MAX_VALUE, "invalid reference count" */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 310, __pyx_L1_error) __pyx_v_data = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":311 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if data != NULL: # <<<<<<<<<<<<<< * assert _le32toh(data[0]) <= _MAX_VALUE, "invalid reference count" * return data != NULL */ __pyx_t_3 = ((__pyx_v_data != NULL) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":312 * data = hashindex_get(self.index, key) * if data != NULL: * assert _le32toh(data[0]) <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * return data != NULL * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((_le32toh((__pyx_v_data[0])) <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 312, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":311 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if data != NULL: # <<<<<<<<<<<<<< * assert _le32toh(data[0]) <= _MAX_VALUE, "invalid reference count" * return data != NULL */ } /* "borg/hashindex.pyx":313 * if data != NULL: * assert _le32toh(data[0]) <= _MAX_VALUE, "invalid reference count" * return data != NULL # <<<<<<<<<<<<<< * * def incref(self, key): */ __pyx_r = (__pyx_v_data != NULL); goto __pyx_L0; /* "borg/hashindex.pyx":308 * raise Exception('hashindex_set failed') * * def __contains__(self, key): # <<<<<<<<<<<<<< * 
assert len(key) == self.key_size * data = hashindex_get(self.index, key) */ /* function exit code */ __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.ChunkIndex.__contains__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":315 * return data != NULL * * def incref(self, key): # <<<<<<<<<<<<<< * """Increase refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_7incref(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static char __pyx_doc_4borg_9hashindex_10ChunkIndex_6incref[] = "Increase refcount for 'key', return (refcount, size, csize)"; static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_7incref(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("incref (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_6incref(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_6incref(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key) { uint32_t *__pyx_v_data; uint32_t __pyx_v_refcount; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; __Pyx_RefNannySetupContext("incref", 0); /* "borg/hashindex.pyx":317 * def incref(self, key): * """Increase refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if not data: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 317, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 317, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":318 * """Increase refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if not data: * raise KeyError(key) */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 318, __pyx_L1_error) __pyx_v_data = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":319 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) */ __pyx_t_3 = ((!(__pyx_v_data != 0)) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":320 * data = hashindex_get(self.index, key) * if not data: * raise KeyError(key) # <<<<<<<<<<<<<< * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" */ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 320, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_key); __Pyx_GIVEREF(__pyx_v_key); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_key); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_KeyError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 320, __pyx_L1_error) 
__Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 320, __pyx_L1_error) /* "borg/hashindex.pyx":319 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) */ } /* "borg/hashindex.pyx":321 * if not data: * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) # <<<<<<<<<<<<<< * assert refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: */ __pyx_v_refcount = _le32toh((__pyx_v_data[0])); /* "borg/hashindex.pyx":322 * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * if refcount != _MAX_VALUE: * refcount += 1 */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 322, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":323 * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: # <<<<<<<<<<<<<< * refcount += 1 * data[0] = _htole32(refcount) */ __pyx_t_3 = ((__pyx_v_refcount != _MAX_VALUE) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":324 * assert refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: * refcount += 1 # <<<<<<<<<<<<<< * data[0] = _htole32(refcount) * return refcount, _le32toh(data[1]), _le32toh(data[2]) */ __pyx_v_refcount = (__pyx_v_refcount + 1); /* "borg/hashindex.pyx":323 * cdef uint32_t refcount = _le32toh(data[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: # <<<<<<<<<<<<<< * refcount += 1 * data[0] = _htole32(refcount) */ } /* "borg/hashindex.pyx":325 * if refcount != _MAX_VALUE: * refcount += 1 * data[0] = _htole32(refcount) # <<<<<<<<<<<<<< * return refcount, _le32toh(data[1]), _le32toh(data[2]) * */ (__pyx_v_data[0]) = _htole32(__pyx_v_refcount); /* "borg/hashindex.pyx":326 * refcount += 1 * data[0] = _htole32(refcount) * return refcount, _le32toh(data[1]), _le32toh(data[2]) # <<<<<<<<<<<<<< * * def decref(self, key): */ __Pyx_XDECREF(__pyx_r); __pyx_t_5 = __Pyx_PyInt_From_uint32_t(__pyx_v_refcount); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 326, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __pyx_t_4 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[1]))); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 326, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_6 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[2]))); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 326, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = PyTuple_New(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 326, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_7); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_4); __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_6); __pyx_t_5 = 0; __pyx_t_4 = 0; __pyx_t_6 = 0; __pyx_r = __pyx_t_7; __pyx_t_7 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":315 * return data != NULL * * def incref(self, key): # <<<<<<<<<<<<<< * """Increase refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); 
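/* Note: ChunkIndex.incref (pyx:315-326, generated above) saturates instead of
 * overflowing: once a refcount reaches _MAX_VALUE it stays there and further
 * increfs return it unchanged. Sketch:
 *
 *     refcount, size, csize = chunk_idx.incref(chunk_id)  # KeyError if absent
 */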
__Pyx_XDECREF(__pyx_t_7); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.incref", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":328 * return refcount, _le32toh(data[1]), _le32toh(data[2]) * * def decref(self, key): # <<<<<<<<<<<<<< * """Decrease refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_9decref(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/ static char __pyx_doc_4borg_9hashindex_10ChunkIndex_8decref[] = "Decrease refcount for 'key', return (refcount, size, csize)"; static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_9decref(PyObject *__pyx_v_self, PyObject *__pyx_v_key) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("decref (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_8decref(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((PyObject *)__pyx_v_key)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_8decref(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key) { uint32_t *__pyx_v_data; uint32_t __pyx_v_refcount; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; char *__pyx_t_2; int __pyx_t_3; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; __Pyx_RefNannySetupContext("decref", 0); /* "borg/hashindex.pyx":330 * def decref(self, key): * """Decrease refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size # <<<<<<<<<<<<<< * data = hashindex_get(self.index, key) * if not data: */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 330, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 330, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":331 * """Decrease refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size * data = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if not data: * raise KeyError(key) */ __pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(0, 331, __pyx_L1_error) __pyx_v_data = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_2))); /* "borg/hashindex.pyx":332 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) */ __pyx_t_3 = ((!(__pyx_v_data != 0)) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":333 * data = hashindex_get(self.index, key) * if not data: * raise KeyError(key) # <<<<<<<<<<<<<< * cdef uint32_t refcount = _le32toh(data[0]) * # Never decrease a reference count of zero */ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 333, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_INCREF(__pyx_v_key); __Pyx_GIVEREF(__pyx_v_key); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_key); __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_KeyError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 333, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_4); 
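/*
 * decref() mirrors incref(): the assertion demands 0 < refcount, because an
 * entry that is still present must be referenced at least once, and a count
 * pinned at _MAX_VALUE is never decremented. Sketch of the saturating
 * decrement (illustrative only):
 *
 *     uint32_t refcount = _le32toh(data[0]);
 *     if (refcount != _MAX_VALUE)
 *         refcount -= 1;
 *     data[0] = _htole32(refcount);
 */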
__pyx_t_4 = 0; __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 333, __pyx_L1_error) /* "borg/hashindex.pyx":332 * assert len(key) == self.key_size * data = hashindex_get(self.index, key) * if not data: # <<<<<<<<<<<<<< * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) */ } /* "borg/hashindex.pyx":334 * if not data: * raise KeyError(key) * cdef uint32_t refcount = _le32toh(data[0]) # <<<<<<<<<<<<<< * # Never decrease a reference count of zero * assert 0 < refcount <= _MAX_VALUE, "invalid reference count" */ __pyx_v_refcount = _le32toh((__pyx_v_data[0])); /* "borg/hashindex.pyx":336 * cdef uint32_t refcount = _le32toh(data[0]) * # Never decrease a reference count of zero * assert 0 < refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * if refcount != _MAX_VALUE: * refcount -= 1 */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_3 = (0 < __pyx_v_refcount); if (__pyx_t_3) { __pyx_t_3 = (__pyx_v_refcount <= _MAX_VALUE); } if (unlikely(!(__pyx_t_3 != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 336, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":337 * # Never decrease a reference count of zero * assert 0 < refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: # <<<<<<<<<<<<<< * refcount -= 1 * data[0] = _htole32(refcount) */ __pyx_t_3 = ((__pyx_v_refcount != _MAX_VALUE) != 0); if (__pyx_t_3) { /* "borg/hashindex.pyx":338 * assert 0 < refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: * refcount -= 1 # <<<<<<<<<<<<<< * data[0] = _htole32(refcount) * return refcount, _le32toh(data[1]), _le32toh(data[2]) */ __pyx_v_refcount = (__pyx_v_refcount - 1); /* "borg/hashindex.pyx":337 * # Never decrease a reference count of zero * assert 0 < refcount <= _MAX_VALUE, "invalid reference count" * if refcount != _MAX_VALUE: # <<<<<<<<<<<<<< * refcount -= 1 * data[0] = _htole32(refcount) */ } /* "borg/hashindex.pyx":339 * if refcount != _MAX_VALUE: * refcount -= 1 * data[0] = _htole32(refcount) # <<<<<<<<<<<<<< * return refcount, _le32toh(data[1]), _le32toh(data[2]) * */ (__pyx_v_data[0]) = _htole32(__pyx_v_refcount); /* "borg/hashindex.pyx":340 * refcount -= 1 * data[0] = _htole32(refcount) * return refcount, _le32toh(data[1]), _le32toh(data[2]) # <<<<<<<<<<<<<< * * def iteritems(self, marker=None): */ __Pyx_XDECREF(__pyx_r); __pyx_t_5 = __Pyx_PyInt_From_uint32_t(__pyx_v_refcount); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 340, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __pyx_t_4 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[1]))); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 340, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_6 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_data[2]))); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 340, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = PyTuple_New(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 340, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_7); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_4); __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_6); __pyx_t_5 = 0; __pyx_t_4 = 0; __pyx_t_6 = 0; __pyx_r = __pyx_t_7; __pyx_t_7 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":328 * return refcount, _le32toh(data[1]), _le32toh(data[2]) * * def decref(self, key): # <<<<<<<<<<<<<< * """Decrease refcount for 'key', return (refcount, size, csize)""" * assert len(key) == self.key_size */ /* 
function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.decref", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":342 * return refcount, _le32toh(data[1]), _le32toh(data[2]) * * def iteritems(self, marker=None): # <<<<<<<<<<<<<< * cdef const void *key * iter = ChunkKeyIterator(self.key_size) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_11iteritems(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_11iteritems(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_marker = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("iteritems (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_marker,0}; PyObject* values[1] = {0}; values[0] = ((PyObject *)Py_None); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_marker); if (value) { values[0] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "iteritems") < 0)) __PYX_ERR(0, 342, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_marker = values[0]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("iteritems", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 342, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.ChunkIndex.iteritems", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_10iteritems(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), __pyx_v_marker); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_10iteritems(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_marker) { void const *__pyx_v_key; struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_iter = NULL; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; HashIndex *__pyx_t_3; int __pyx_t_4; char *__pyx_t_5; int __pyx_t_6; __Pyx_RefNannySetupContext("iteritems", 0); /* "borg/hashindex.pyx":344 * def iteritems(self, marker=None): * cdef const void *key * iter = ChunkKeyIterator(self.key_size) # <<<<<<<<<<<<<< * iter.idx = self * iter.index = self.index */ __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->__pyx_base.key_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 344, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 344, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_GIVEREF(__pyx_t_1); PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); 
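/*
 * iteritems(marker=None) wraps this index in a ChunkKeyIterator yielding
 * (key, (refcount, size, csize)) pairs. Without a marker, iteration starts
 * at the first occupied bucket; with a marker, the iterator's cursor is
 * placed relative to the marker's entry so that iteration resumes after the
 * given key, which allows resuming a partial walk over a large index.
 */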
__pyx_t_1 = 0; __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_4borg_9hashindex_ChunkKeyIterator), __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 344, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_v_iter = ((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)__pyx_t_1); __pyx_t_1 = 0; /* "borg/hashindex.pyx":345 * cdef const void *key * iter = ChunkKeyIterator(self.key_size) * iter.idx = self # <<<<<<<<<<<<<< * iter.index = self.index * if marker: */ __Pyx_INCREF(((PyObject *)__pyx_v_self)); __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); __Pyx_GOTREF(__pyx_v_iter->idx); __Pyx_DECREF(((PyObject *)__pyx_v_iter->idx)); __pyx_v_iter->idx = __pyx_v_self; /* "borg/hashindex.pyx":346 * iter = ChunkKeyIterator(self.key_size) * iter.idx = self * iter.index = self.index # <<<<<<<<<<<<<< * if marker: * key = hashindex_get(self.index, marker) */ __pyx_t_3 = __pyx_v_self->__pyx_base.index; __pyx_v_iter->index = __pyx_t_3; /* "borg/hashindex.pyx":347 * iter.idx = self * iter.index = self.index * if marker: # <<<<<<<<<<<<<< * key = hashindex_get(self.index, marker) * if marker is None: */ __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_marker); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 347, __pyx_L1_error) if (__pyx_t_4) { /* "borg/hashindex.pyx":348 * iter.index = self.index * if marker: * key = hashindex_get(self.index, marker) # <<<<<<<<<<<<<< * if marker is None: * raise IndexError */ __pyx_t_5 = __Pyx_PyObject_AsWritableString(__pyx_v_marker); if (unlikely((!__pyx_t_5) && PyErr_Occurred())) __PYX_ERR(0, 348, __pyx_L1_error) __pyx_v_key = hashindex_get(__pyx_v_self->__pyx_base.index, ((char *)__pyx_t_5)); /* "borg/hashindex.pyx":349 * if marker: * key = hashindex_get(self.index, marker) * if marker is None: # <<<<<<<<<<<<<< * raise IndexError * iter.key = key - self.key_size */ __pyx_t_4 = (__pyx_v_marker == Py_None); __pyx_t_6 = (__pyx_t_4 != 0); if (__pyx_t_6) { /* "borg/hashindex.pyx":350 * key = hashindex_get(self.index, marker) * if marker is None: * raise IndexError # <<<<<<<<<<<<<< * iter.key = key - self.key_size * return iter */ __Pyx_Raise(__pyx_builtin_IndexError, 0, 0, 0); __PYX_ERR(0, 350, __pyx_L1_error) /* "borg/hashindex.pyx":349 * if marker: * key = hashindex_get(self.index, marker) * if marker is None: # <<<<<<<<<<<<<< * raise IndexError * iter.key = key - self.key_size */ } /* "borg/hashindex.pyx":351 * if marker is None: * raise IndexError * iter.key = key - self.key_size # <<<<<<<<<<<<<< * return iter * */ __pyx_v_iter->key = (__pyx_v_key - __pyx_v_self->__pyx_base.key_size); /* "borg/hashindex.pyx":347 * iter.idx = self * iter.index = self.index * if marker: # <<<<<<<<<<<<<< * key = hashindex_get(self.index, marker) * if marker is None: */ } /* "borg/hashindex.pyx":352 * raise IndexError * iter.key = key - self.key_size * return iter # <<<<<<<<<<<<<< * * def summarize(self): */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(((PyObject *)__pyx_v_iter)); __pyx_r = ((PyObject *)__pyx_v_iter); goto __pyx_L0; /* "borg/hashindex.pyx":342 * return refcount, _le32toh(data[1]), _le32toh(data[2]) * * def iteritems(self, marker=None): # <<<<<<<<<<<<<< * cdef const void *key * iter = ChunkKeyIterator(self.key_size) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.iteritems", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XDECREF((PyObject *)__pyx_v_iter); __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; 
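/*
 * Note on the marker handling above: the `if marker is None` test is
 * evaluated inside the `if marker:` branch and therefore can never be true,
 * so the IndexError is unreachable; a marker key that is absent from the
 * index (hashindex_get() returning NULL) is not detected at this point.
 */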
} /* "borg/hashindex.pyx":354 * return iter * * def summarize(self): # <<<<<<<<<<<<<< * cdef uint64_t size = 0, csize = 0, unique_size = 0, unique_csize = 0, chunks = 0, unique_chunks = 0 * cdef uint32_t *values */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_13summarize(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_13summarize(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("summarize (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_12summarize(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_12summarize(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self) { uint64_t __pyx_v_size; uint64_t __pyx_v_csize; uint64_t __pyx_v_unique_size; uint64_t __pyx_v_unique_csize; uint64_t __pyx_v_chunks; uint64_t __pyx_v_unique_chunks; uint32_t *__pyx_v_values; uint32_t __pyx_v_refcount; void *__pyx_v_key; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; PyObject *__pyx_t_2 = NULL; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; __Pyx_RefNannySetupContext("summarize", 0); /* "borg/hashindex.pyx":355 * * def summarize(self): * cdef uint64_t size = 0, csize = 0, unique_size = 0, unique_csize = 0, chunks = 0, unique_chunks = 0 # <<<<<<<<<<<<<< * cdef uint32_t *values * cdef uint32_t refcount */ __pyx_v_size = 0; __pyx_v_csize = 0; __pyx_v_unique_size = 0; __pyx_v_unique_csize = 0; __pyx_v_chunks = 0; __pyx_v_unique_chunks = 0; /* "borg/hashindex.pyx":358 * cdef uint32_t *values * cdef uint32_t refcount * cdef void *key = NULL # <<<<<<<<<<<<<< * * while True: */ __pyx_v_key = NULL; /* "borg/hashindex.pyx":360 * cdef void *key = NULL * * while True: # <<<<<<<<<<<<<< * key = hashindex_next_key(self.index, key) * if not key: */ while (1) { /* "borg/hashindex.pyx":361 * * while True: * key = hashindex_next_key(self.index, key) # <<<<<<<<<<<<<< * if not key: * break */ __pyx_v_key = hashindex_next_key(__pyx_v_self->__pyx_base.index, __pyx_v_key); /* "borg/hashindex.pyx":362 * while True: * key = hashindex_next_key(self.index, key) * if not key: # <<<<<<<<<<<<<< * break * unique_chunks += 1 */ __pyx_t_1 = ((!(__pyx_v_key != 0)) != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":363 * key = hashindex_next_key(self.index, key) * if not key: * break # <<<<<<<<<<<<<< * unique_chunks += 1 * values = (key + self.key_size) */ goto __pyx_L4_break; /* "borg/hashindex.pyx":362 * while True: * key = hashindex_next_key(self.index, key) * if not key: # <<<<<<<<<<<<<< * break * unique_chunks += 1 */ } /* "borg/hashindex.pyx":364 * if not key: * break * unique_chunks += 1 # <<<<<<<<<<<<<< * values = (key + self.key_size) * refcount = _le32toh(values[0]) */ __pyx_v_unique_chunks = (__pyx_v_unique_chunks + 1); /* "borg/hashindex.pyx":365 * break * unique_chunks += 1 * values = (key + self.key_size) # <<<<<<<<<<<<<< * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" */ __pyx_v_values = ((uint32_t *)(__pyx_v_key + __pyx_v_self->__pyx_base.key_size)); /* "borg/hashindex.pyx":366 * unique_chunks += 1 * values = (key + self.key_size) * refcount = _le32toh(values[0]) # <<<<<<<<<<<<<< * assert refcount <= 
_MAX_VALUE, "invalid reference count" * chunks += refcount */ __pyx_v_refcount = _le32toh((__pyx_v_values[0])); /* "borg/hashindex.pyx":367 * values = (key + self.key_size) * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * chunks += refcount * unique_size += _le32toh(values[1]) */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 367, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":368 * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * chunks += refcount # <<<<<<<<<<<<<< * unique_size += _le32toh(values[1]) * unique_csize += _le32toh(values[2]) */ __pyx_v_chunks = (__pyx_v_chunks + __pyx_v_refcount); /* "borg/hashindex.pyx":369 * assert refcount <= _MAX_VALUE, "invalid reference count" * chunks += refcount * unique_size += _le32toh(values[1]) # <<<<<<<<<<<<<< * unique_csize += _le32toh(values[2]) * size += _le32toh(values[1]) * _le32toh(values[0]) */ __pyx_v_unique_size = (__pyx_v_unique_size + _le32toh((__pyx_v_values[1]))); /* "borg/hashindex.pyx":370 * chunks += refcount * unique_size += _le32toh(values[1]) * unique_csize += _le32toh(values[2]) # <<<<<<<<<<<<<< * size += _le32toh(values[1]) * _le32toh(values[0]) * csize += _le32toh(values[2]) * _le32toh(values[0]) */ __pyx_v_unique_csize = (__pyx_v_unique_csize + _le32toh((__pyx_v_values[2]))); /* "borg/hashindex.pyx":371 * unique_size += _le32toh(values[1]) * unique_csize += _le32toh(values[2]) * size += _le32toh(values[1]) * _le32toh(values[0]) # <<<<<<<<<<<<<< * csize += _le32toh(values[2]) * _le32toh(values[0]) * */ __pyx_v_size = (__pyx_v_size + (((uint64_t)_le32toh((__pyx_v_values[1]))) * _le32toh((__pyx_v_values[0])))); /* "borg/hashindex.pyx":372 * unique_csize += _le32toh(values[2]) * size += _le32toh(values[1]) * _le32toh(values[0]) * csize += _le32toh(values[2]) * _le32toh(values[0]) # <<<<<<<<<<<<<< * * return size, csize, unique_size, unique_csize, unique_chunks, chunks */ __pyx_v_csize = (__pyx_v_csize + (((uint64_t)_le32toh((__pyx_v_values[2]))) * _le32toh((__pyx_v_values[0])))); } __pyx_L4_break:; /* "borg/hashindex.pyx":374 * csize += _le32toh(values[2]) * _le32toh(values[0]) * * return size, csize, unique_size, unique_csize, unique_chunks, chunks # <<<<<<<<<<<<<< * * def stats_against(self, ChunkIndex master_index): */ __Pyx_XDECREF(__pyx_r); __pyx_t_2 = __Pyx_PyInt_From_uint64_t(__pyx_v_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyInt_From_uint64_t(__pyx_v_csize); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __pyx_t_4 = __Pyx_PyInt_From_uint64_t(__pyx_v_unique_size); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_5 = __Pyx_PyInt_From_uint64_t(__pyx_v_unique_csize); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __pyx_t_6 = __Pyx_PyInt_From_uint64_t(__pyx_v_unique_chunks); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = __Pyx_PyInt_From_uint64_t(__pyx_v_chunks); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_7); __pyx_t_8 = PyTuple_New(6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 374, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_8); __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_8, 0, 
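/*
 * summarize() visits every occupied bucket exactly once. The per-entry
 * 32-bit fields are accumulated into uint64_t totals, and each product is
 * widened first (note the (uint64_t) casts above), since size * refcount
 * can easily exceed 2**32. Sketch of one entry's contribution:
 *
 *     uint64_t size_total = 0;
 *     size_total += (uint64_t)_le32toh(values[1]) * _le32toh(values[0]);
 *
 * The result is returned as the 6-tuple
 * (size, csize, unique_size, unique_csize, unique_chunks, chunks).
 */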
__pyx_t_2); __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_3); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_4); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 3, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_8, 4, __pyx_t_6); __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_8, 5, __pyx_t_7); __pyx_t_2 = 0; __pyx_t_3 = 0; __pyx_t_4 = 0; __pyx_t_5 = 0; __pyx_t_6 = 0; __pyx_t_7 = 0; __pyx_r = __pyx_t_8; __pyx_t_8 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":354 * return iter * * def summarize(self): # <<<<<<<<<<<<<< * cdef uint64_t size = 0, csize = 0, unique_size = 0, unique_csize = 0, chunks = 0, unique_chunks = 0 * cdef uint32_t *values */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_XDECREF(__pyx_t_3); __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_XDECREF(__pyx_t_8); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.summarize", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":376 * return size, csize, unique_size, unique_csize, unique_chunks, chunks * * def stats_against(self, ChunkIndex master_index): # <<<<<<<<<<<<<< * """ * Calculate chunk statistics of this index against *master_index*. */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_15stats_against(PyObject *__pyx_v_self, PyObject *__pyx_v_master_index); /*proto*/ static char __pyx_doc_4borg_9hashindex_10ChunkIndex_14stats_against[] = "\n Calculate chunk statistics of this index against *master_index*.\n\n A chunk is counted as unique if the number of references\n in this index matches the number of references in *master_index*.\n\n This index must be a subset of *master_index*.\n\n Return the same statistics tuple as summarize:\n size, csize, unique_size, unique_csize, unique_chunks, chunks.\n "; static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_15stats_against(PyObject *__pyx_v_self, PyObject *__pyx_v_master_index) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("stats_against (wrapper)", 0); if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_master_index), __pyx_ptype_4borg_9hashindex_ChunkIndex, 1, "master_index", 0))) __PYX_ERR(0, 376, __pyx_L1_error) __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_14stats_against(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_master_index)); /* function exit code */ goto __pyx_L0; __pyx_L1_error:; __pyx_r = NULL; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_14stats_against(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_master_index) { uint64_t __pyx_v_size; uint64_t __pyx_v_csize; uint64_t __pyx_v_unique_size; uint64_t __pyx_v_unique_csize; uint64_t __pyx_v_chunks; uint64_t __pyx_v_unique_chunks; uint32_t __pyx_v_our_refcount; uint32_t __pyx_v_chunk_size; uint32_t __pyx_v_chunk_csize; uint32_t const *__pyx_v_our_values; uint32_t const *__pyx_v_master_values; void const *__pyx_v_key; HashIndex *__pyx_v_master; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations HashIndex *__pyx_t_1; int __pyx_t_2; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject 
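/*
 * stats_against() computes the same 6-tuple as summarize(), but chunk sizes
 * are taken from *master_index* and a chunk is counted as unique only when
 * this index holds exactly as many references to it as the master does.
 * This index must be a subset of the master; a key that is missing from the
 * master raises the ValueError below.
 */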
*__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; __Pyx_RefNannySetupContext("stats_against", 0); /* "borg/hashindex.pyx":388 * size, csize, unique_size, unique_csize, unique_chunks, chunks. * """ * cdef uint64_t size = 0, csize = 0, unique_size = 0, unique_csize = 0, chunks = 0, unique_chunks = 0 # <<<<<<<<<<<<<< * cdef uint32_t our_refcount, chunk_size, chunk_csize * cdef const uint32_t *our_values */ __pyx_v_size = 0; __pyx_v_csize = 0; __pyx_v_unique_size = 0; __pyx_v_unique_csize = 0; __pyx_v_chunks = 0; __pyx_v_unique_chunks = 0; /* "borg/hashindex.pyx":392 * cdef const uint32_t *our_values * cdef const uint32_t *master_values * cdef const void *key = NULL # <<<<<<<<<<<<<< * cdef HashIndex *master = master_index.index * */ __pyx_v_key = NULL; /* "borg/hashindex.pyx":393 * cdef const uint32_t *master_values * cdef const void *key = NULL * cdef HashIndex *master = master_index.index # <<<<<<<<<<<<<< * * while True: */ __pyx_t_1 = __pyx_v_master_index->__pyx_base.index; __pyx_v_master = __pyx_t_1; /* "borg/hashindex.pyx":395 * cdef HashIndex *master = master_index.index * * while True: # <<<<<<<<<<<<<< * key = hashindex_next_key(self.index, key) * if not key: */ while (1) { /* "borg/hashindex.pyx":396 * * while True: * key = hashindex_next_key(self.index, key) # <<<<<<<<<<<<<< * if not key: * break */ __pyx_v_key = hashindex_next_key(__pyx_v_self->__pyx_base.index, __pyx_v_key); /* "borg/hashindex.pyx":397 * while True: * key = hashindex_next_key(self.index, key) * if not key: # <<<<<<<<<<<<<< * break * our_values = (key + self.key_size) */ __pyx_t_2 = ((!(__pyx_v_key != 0)) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":398 * key = hashindex_next_key(self.index, key) * if not key: * break # <<<<<<<<<<<<<< * our_values = (key + self.key_size) * master_values = hashindex_get(master, key) */ goto __pyx_L4_break; /* "borg/hashindex.pyx":397 * while True: * key = hashindex_next_key(self.index, key) * if not key: # <<<<<<<<<<<<<< * break * our_values = (key + self.key_size) */ } /* "borg/hashindex.pyx":399 * if not key: * break * our_values = (key + self.key_size) # <<<<<<<<<<<<<< * master_values = hashindex_get(master, key) * if not master_values: */ __pyx_v_our_values = ((uint32_t const *)(__pyx_v_key + __pyx_v_self->__pyx_base.key_size)); /* "borg/hashindex.pyx":400 * break * our_values = (key + self.key_size) * master_values = hashindex_get(master, key) # <<<<<<<<<<<<<< * if not master_values: * raise ValueError('stats_against: key contained in self but not in master_index.') */ __pyx_v_master_values = ((uint32_t const *)hashindex_get(__pyx_v_master, __pyx_v_key)); /* "borg/hashindex.pyx":401 * our_values = (key + self.key_size) * master_values = hashindex_get(master, key) * if not master_values: # <<<<<<<<<<<<<< * raise ValueError('stats_against: key contained in self but not in master_index.') * our_refcount = _le32toh(our_values[0]) */ __pyx_t_2 = ((!(__pyx_v_master_values != 0)) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":402 * master_values = hashindex_get(master, key) * if not master_values: * raise ValueError('stats_against: key contained in self but not in master_index.') # <<<<<<<<<<<<<< * our_refcount = _le32toh(our_values[0]) * chunk_size = _le32toh(master_values[1]) */ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 402, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __Pyx_Raise(__pyx_t_3, 0, 0, 0); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __PYX_ERR(0, 402, __pyx_L1_error) /* "borg/hashindex.pyx":401 * 
our_values = (key + self.key_size) * master_values = hashindex_get(master, key) * if not master_values: # <<<<<<<<<<<<<< * raise ValueError('stats_against: key contained in self but not in master_index.') * our_refcount = _le32toh(our_values[0]) */ } /* "borg/hashindex.pyx":403 * if not master_values: * raise ValueError('stats_against: key contained in self but not in master_index.') * our_refcount = _le32toh(our_values[0]) # <<<<<<<<<<<<<< * chunk_size = _le32toh(master_values[1]) * chunk_csize = _le32toh(master_values[2]) */ __pyx_v_our_refcount = _le32toh((__pyx_v_our_values[0])); /* "borg/hashindex.pyx":404 * raise ValueError('stats_against: key contained in self but not in master_index.') * our_refcount = _le32toh(our_values[0]) * chunk_size = _le32toh(master_values[1]) # <<<<<<<<<<<<<< * chunk_csize = _le32toh(master_values[2]) * */ __pyx_v_chunk_size = _le32toh((__pyx_v_master_values[1])); /* "borg/hashindex.pyx":405 * our_refcount = _le32toh(our_values[0]) * chunk_size = _le32toh(master_values[1]) * chunk_csize = _le32toh(master_values[2]) # <<<<<<<<<<<<<< * * chunks += our_refcount */ __pyx_v_chunk_csize = _le32toh((__pyx_v_master_values[2])); /* "borg/hashindex.pyx":407 * chunk_csize = _le32toh(master_values[2]) * * chunks += our_refcount # <<<<<<<<<<<<<< * size += chunk_size * our_refcount * csize += chunk_csize * our_refcount */ __pyx_v_chunks = (__pyx_v_chunks + __pyx_v_our_refcount); /* "borg/hashindex.pyx":408 * * chunks += our_refcount * size += chunk_size * our_refcount # <<<<<<<<<<<<<< * csize += chunk_csize * our_refcount * if our_values[0] == master_values[0]: */ __pyx_v_size = (__pyx_v_size + (((uint64_t)__pyx_v_chunk_size) * __pyx_v_our_refcount)); /* "borg/hashindex.pyx":409 * chunks += our_refcount * size += chunk_size * our_refcount * csize += chunk_csize * our_refcount # <<<<<<<<<<<<<< * if our_values[0] == master_values[0]: * # our refcount equals the master's refcount, so this chunk is unique to us */ __pyx_v_csize = (__pyx_v_csize + (((uint64_t)__pyx_v_chunk_csize) * __pyx_v_our_refcount)); /* "borg/hashindex.pyx":410 * size += chunk_size * our_refcount * csize += chunk_csize * our_refcount * if our_values[0] == master_values[0]: # <<<<<<<<<<<<<< * # our refcount equals the master's refcount, so this chunk is unique to us * unique_chunks += 1 */ __pyx_t_2 = (((__pyx_v_our_values[0]) == (__pyx_v_master_values[0])) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":412 * if our_values[0] == master_values[0]: * # our refcount equals the master's refcount, so this chunk is unique to us * unique_chunks += 1 # <<<<<<<<<<<<<< * unique_size += chunk_size * unique_csize += chunk_csize */ __pyx_v_unique_chunks = (__pyx_v_unique_chunks + 1); /* "borg/hashindex.pyx":413 * # our refcount equals the master's refcount, so this chunk is unique to us * unique_chunks += 1 * unique_size += chunk_size # <<<<<<<<<<<<<< * unique_csize += chunk_csize * */ __pyx_v_unique_size = (__pyx_v_unique_size + __pyx_v_chunk_size); /* "borg/hashindex.pyx":414 * unique_chunks += 1 * unique_size += chunk_size * unique_csize += chunk_csize # <<<<<<<<<<<<<< * * return size, csize, unique_size, unique_csize, unique_chunks, chunks */ __pyx_v_unique_csize = (__pyx_v_unique_csize + __pyx_v_chunk_csize); /* "borg/hashindex.pyx":410 * size += chunk_size * our_refcount * csize += chunk_csize * our_refcount * if our_values[0] == master_values[0]: # <<<<<<<<<<<<<< * # our refcount equals the master's refcount, so this chunk is unique to us * unique_chunks += 1 */ } } __pyx_L4_break:; /* "borg/hashindex.pyx":416 
* unique_csize += chunk_csize * * return size, csize, unique_size, unique_csize, unique_chunks, chunks # <<<<<<<<<<<<<< * * def add(self, key, refs, size, csize): */ __Pyx_XDECREF(__pyx_r); __pyx_t_3 = __Pyx_PyInt_From_uint64_t(__pyx_v_size); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __pyx_t_4 = __Pyx_PyInt_From_uint64_t(__pyx_v_csize); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_5 = __Pyx_PyInt_From_uint64_t(__pyx_v_unique_size); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __pyx_t_6 = __Pyx_PyInt_From_uint64_t(__pyx_v_unique_csize); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = __Pyx_PyInt_From_uint64_t(__pyx_v_unique_chunks); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_7); __pyx_t_8 = __Pyx_PyInt_From_uint64_t(__pyx_v_chunks); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_8); __pyx_t_9 = PyTuple_New(6); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 416, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_9); __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_3); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_4); __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_9, 2, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_9, 3, __pyx_t_6); __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 4, __pyx_t_7); __Pyx_GIVEREF(__pyx_t_8); PyTuple_SET_ITEM(__pyx_t_9, 5, __pyx_t_8); __pyx_t_3 = 0; __pyx_t_4 = 0; __pyx_t_5 = 0; __pyx_t_6 = 0; __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_r = __pyx_t_9; __pyx_t_9 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":376 * return size, csize, unique_size, unique_csize, unique_chunks, chunks * * def stats_against(self, ChunkIndex master_index): # <<<<<<<<<<<<<< * """ * Calculate chunk statistics of this index against *master_index*. 
*/ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_XDECREF(__pyx_t_8); __Pyx_XDECREF(__pyx_t_9); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.stats_against", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":418 * return size, csize, unique_size, unique_csize, unique_chunks, chunks * * def add(self, key, refs, size, csize): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * cdef uint32_t[3] data */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_17add(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_17add(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_key = 0; PyObject *__pyx_v_refs = 0; PyObject *__pyx_v_size = 0; PyObject *__pyx_v_csize = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("add (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_key,&__pyx_n_s_refs,&__pyx_n_s_size,&__pyx_n_s_csize,0}; PyObject* values[4] = {0,0,0,0}; if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); CYTHON_FALLTHROUGH; case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); CYTHON_FALLTHROUGH; case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_key)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_refs)) != 0)) kw_args--; else { __Pyx_RaiseArgtupleInvalid("add", 1, 4, 4, 1); __PYX_ERR(0, 418, __pyx_L3_error) } CYTHON_FALLTHROUGH; case 2: if (likely((values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_size)) != 0)) kw_args--; else { __Pyx_RaiseArgtupleInvalid("add", 1, 4, 4, 2); __PYX_ERR(0, 418, __pyx_L3_error) } CYTHON_FALLTHROUGH; case 3: if (likely((values[3] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_csize)) != 0)) kw_args--; else { __Pyx_RaiseArgtupleInvalid("add", 1, 4, 4, 3); __PYX_ERR(0, 418, __pyx_L3_error) } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "add") < 0)) __PYX_ERR(0, 418, __pyx_L3_error) } } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { goto __pyx_L5_argtuple_error; } else { values[0] = PyTuple_GET_ITEM(__pyx_args, 0); values[1] = PyTuple_GET_ITEM(__pyx_args, 1); values[2] = PyTuple_GET_ITEM(__pyx_args, 2); values[3] = PyTuple_GET_ITEM(__pyx_args, 3); } __pyx_v_key = values[0]; __pyx_v_refs = values[1]; __pyx_v_size = values[2]; __pyx_v_csize = values[3]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("add", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 418, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.ChunkIndex.add", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_16add(((struct 
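/*
 * add(key, refs, size, csize) packs the three host integers into a
 * little-endian uint32[3] and hands it to _add(). For an existing entry,
 * _add() sums both refcounts in a uint64_t and clamps the result to
 * _MAX_VALUE before storing it back, so merging cannot overflow, and it
 * overwrites size/csize with the incoming values; for a new key it falls
 * through to hashindex_set(). merge(other) simply feeds every entry of
 * another ChunkIndex through _add(). Sketch of the packing (assuming the
 * arguments fit into uint32_t):
 *
 *     uint32_t data[3];
 *     data[0] = _htole32(refs);
 *     data[1] = _htole32(size);
 *     data[2] = _htole32(csize);
 */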
__pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), __pyx_v_key, __pyx_v_refs, __pyx_v_size, __pyx_v_csize); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_16add(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, PyObject *__pyx_v_key, PyObject *__pyx_v_refs, PyObject *__pyx_v_size, PyObject *__pyx_v_csize) { uint32_t __pyx_v_data[3]; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_ssize_t __pyx_t_1; uint32_t __pyx_t_2; char *__pyx_t_3; PyObject *__pyx_t_4 = NULL; __Pyx_RefNannySetupContext("add", 0); /* "borg/hashindex.pyx":419 * * def add(self, key, refs, size, csize): * assert len(key) == self.key_size # <<<<<<<<<<<<<< * cdef uint32_t[3] data * data[0] = _htole32(refs) */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_1 = PyObject_Length(__pyx_v_key); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 419, __pyx_L1_error) if (unlikely(!((__pyx_t_1 == __pyx_v_self->__pyx_base.key_size) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 419, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":421 * assert len(key) == self.key_size * cdef uint32_t[3] data * data[0] = _htole32(refs) # <<<<<<<<<<<<<< * data[1] = _htole32(size) * data[2] = _htole32(csize) */ __pyx_t_2 = __Pyx_PyInt_As_uint32_t(__pyx_v_refs); if (unlikely((__pyx_t_2 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 421, __pyx_L1_error) (__pyx_v_data[0]) = _htole32(__pyx_t_2); /* "borg/hashindex.pyx":422 * cdef uint32_t[3] data * data[0] = _htole32(refs) * data[1] = _htole32(size) # <<<<<<<<<<<<<< * data[2] = _htole32(csize) * self._add( key, data) */ __pyx_t_2 = __Pyx_PyInt_As_uint32_t(__pyx_v_size); if (unlikely((__pyx_t_2 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 422, __pyx_L1_error) (__pyx_v_data[1]) = _htole32(__pyx_t_2); /* "borg/hashindex.pyx":423 * data[0] = _htole32(refs) * data[1] = _htole32(size) * data[2] = _htole32(csize) # <<<<<<<<<<<<<< * self._add( key, data) * */ __pyx_t_2 = __Pyx_PyInt_As_uint32_t(__pyx_v_csize); if (unlikely((__pyx_t_2 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 423, __pyx_L1_error) (__pyx_v_data[2]) = _htole32(__pyx_t_2); /* "borg/hashindex.pyx":424 * data[1] = _htole32(size) * data[2] = _htole32(csize) * self._add( key, data) # <<<<<<<<<<<<<< * * cdef _add(self, void *key, uint32_t *data): */ __pyx_t_3 = __Pyx_PyObject_AsWritableString(__pyx_v_key); if (unlikely((!__pyx_t_3) && PyErr_Occurred())) __PYX_ERR(0, 424, __pyx_L1_error) __pyx_t_4 = ((struct __pyx_vtabstruct_4borg_9hashindex_ChunkIndex *)__pyx_v_self->__pyx_vtab)->_add(__pyx_v_self, ((char *)__pyx_t_3), __pyx_v_data); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 424, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; /* "borg/hashindex.pyx":418 * return size, csize, unique_size, unique_csize, unique_chunks, chunks * * def add(self, key, refs, size, csize): # <<<<<<<<<<<<<< * assert len(key) == self.key_size * cdef uint32_t[3] data */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_4); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.add", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":426 * self._add( key, data) * * cdef _add(self, void *key, uint32_t *data): # <<<<<<<<<<<<<< * cdef uint64_t refcount1, refcount2, result64 * values = 
hashindex_get(self.index, key) */ static PyObject *__pyx_f_4borg_9hashindex_10ChunkIndex__add(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, void *__pyx_v_key, uint32_t *__pyx_v_data) { uint64_t __pyx_v_refcount1; uint64_t __pyx_v_refcount2; uint64_t __pyx_v_result64; uint32_t *__pyx_v_values; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; uint32_t __pyx_t_2; uint64_t __pyx_t_3; uint64_t __pyx_t_4; PyObject *__pyx_t_5 = NULL; __Pyx_RefNannySetupContext("_add", 0); /* "borg/hashindex.pyx":428 * cdef _add(self, void *key, uint32_t *data): * cdef uint64_t refcount1, refcount2, result64 * values = hashindex_get(self.index, key) # <<<<<<<<<<<<<< * if values: * refcount1 = _le32toh(values[0]) */ __pyx_v_values = ((uint32_t *)hashindex_get(__pyx_v_self->__pyx_base.index, __pyx_v_key)); /* "borg/hashindex.pyx":429 * cdef uint64_t refcount1, refcount2, result64 * values = hashindex_get(self.index, key) * if values: # <<<<<<<<<<<<<< * refcount1 = _le32toh(values[0]) * refcount2 = _le32toh(data[0]) */ __pyx_t_1 = (__pyx_v_values != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":430 * values = hashindex_get(self.index, key) * if values: * refcount1 = _le32toh(values[0]) # <<<<<<<<<<<<<< * refcount2 = _le32toh(data[0]) * assert refcount1 <= _MAX_VALUE, "invalid reference count" */ __pyx_v_refcount1 = _le32toh((__pyx_v_values[0])); /* "borg/hashindex.pyx":431 * if values: * refcount1 = _le32toh(values[0]) * refcount2 = _le32toh(data[0]) # <<<<<<<<<<<<<< * assert refcount1 <= _MAX_VALUE, "invalid reference count" * assert refcount2 <= _MAX_VALUE, "invalid reference count" */ __pyx_v_refcount2 = _le32toh((__pyx_v_data[0])); /* "borg/hashindex.pyx":432 * refcount1 = _le32toh(values[0]) * refcount2 = _le32toh(data[0]) * assert refcount1 <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * assert refcount2 <= _MAX_VALUE, "invalid reference count" * result64 = refcount1 + refcount2 */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount1 <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 432, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":433 * refcount2 = _le32toh(data[0]) * assert refcount1 <= _MAX_VALUE, "invalid reference count" * assert refcount2 <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * result64 = refcount1 + refcount2 * values[0] = _htole32(min(result64, _MAX_VALUE)) */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount2 <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 433, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":434 * assert refcount1 <= _MAX_VALUE, "invalid reference count" * assert refcount2 <= _MAX_VALUE, "invalid reference count" * result64 = refcount1 + refcount2 # <<<<<<<<<<<<<< * values[0] = _htole32(min(result64, _MAX_VALUE)) * values[1] = data[1] */ __pyx_v_result64 = (__pyx_v_refcount1 + __pyx_v_refcount2); /* "borg/hashindex.pyx":435 * assert refcount2 <= _MAX_VALUE, "invalid reference count" * result64 = refcount1 + refcount2 * values[0] = _htole32(min(result64, _MAX_VALUE)) # <<<<<<<<<<<<<< * values[1] = data[1] * values[2] = data[2] */ __pyx_t_2 = _MAX_VALUE; __pyx_t_3 = __pyx_v_result64; if (((__pyx_t_2 < __pyx_t_3) != 0)) { __pyx_t_4 = __pyx_t_2; } else { __pyx_t_4 = __pyx_t_3; } (__pyx_v_values[0]) = _htole32(__pyx_t_4); /* "borg/hashindex.pyx":436 * result64 = refcount1 + refcount2 * values[0] 
= _htole32(min(result64, _MAX_VALUE)) * values[1] = data[1] # <<<<<<<<<<<<<< * values[2] = data[2] * else: */ (__pyx_v_values[1]) = (__pyx_v_data[1]); /* "borg/hashindex.pyx":437 * values[0] = _htole32(min(result64, _MAX_VALUE)) * values[1] = data[1] * values[2] = data[2] # <<<<<<<<<<<<<< * else: * if not hashindex_set(self.index, key, data): */ (__pyx_v_values[2]) = (__pyx_v_data[2]); /* "borg/hashindex.pyx":429 * cdef uint64_t refcount1, refcount2, result64 * values = hashindex_get(self.index, key) * if values: # <<<<<<<<<<<<<< * refcount1 = _le32toh(values[0]) * refcount2 = _le32toh(data[0]) */ goto __pyx_L3; } /* "borg/hashindex.pyx":439 * values[2] = data[2] * else: * if not hashindex_set(self.index, key, data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ /*else*/ { __pyx_t_1 = ((!(hashindex_set(__pyx_v_self->__pyx_base.index, __pyx_v_key, __pyx_v_data) != 0)) != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":440 * else: * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def merge(self, ChunkIndex other): */ __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 440, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_Raise(__pyx_t_5, 0, 0, 0); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __PYX_ERR(0, 440, __pyx_L1_error) /* "borg/hashindex.pyx":439 * values[2] = data[2] * else: * if not hashindex_set(self.index, key, data): # <<<<<<<<<<<<<< * raise Exception('hashindex_set failed') * */ } } __pyx_L3:; /* "borg/hashindex.pyx":426 * self._add( key, data) * * cdef _add(self, void *key, uint32_t *data): # <<<<<<<<<<<<<< * cdef uint64_t refcount1, refcount2, result64 * values = hashindex_get(self.index, key) */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_5); __Pyx_AddTraceback("borg.hashindex.ChunkIndex._add", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = 0; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":442 * raise Exception('hashindex_set failed') * * def merge(self, ChunkIndex other): # <<<<<<<<<<<<<< * cdef void *key = NULL * */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_19merge(PyObject *__pyx_v_self, PyObject *__pyx_v_other); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_19merge(PyObject *__pyx_v_self, PyObject *__pyx_v_other) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("merge (wrapper)", 0); if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_other), __pyx_ptype_4borg_9hashindex_ChunkIndex, 1, "other", 0))) __PYX_ERR(0, 442, __pyx_L1_error) __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_18merge(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_other)); /* function exit code */ goto __pyx_L0; __pyx_L1_error:; __pyx_r = NULL; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_18merge(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_other) { void *__pyx_v_key; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; PyObject *__pyx_t_2 = NULL; __Pyx_RefNannySetupContext("merge", 0); /* "borg/hashindex.pyx":443 * * def merge(self, ChunkIndex other): * cdef void *key = NULL # 
<<<<<<<<<<<<<< * * while True: */ __pyx_v_key = NULL; /* "borg/hashindex.pyx":445 * cdef void *key = NULL * * while True: # <<<<<<<<<<<<<< * key = hashindex_next_key(other.index, key) * if not key: */ while (1) { /* "borg/hashindex.pyx":446 * * while True: * key = hashindex_next_key(other.index, key) # <<<<<<<<<<<<<< * if not key: * break */ __pyx_v_key = hashindex_next_key(__pyx_v_other->__pyx_base.index, __pyx_v_key); /* "borg/hashindex.pyx":447 * while True: * key = hashindex_next_key(other.index, key) * if not key: # <<<<<<<<<<<<<< * break * self._add(key, (key + self.key_size)) */ __pyx_t_1 = ((!(__pyx_v_key != 0)) != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":448 * key = hashindex_next_key(other.index, key) * if not key: * break # <<<<<<<<<<<<<< * self._add(key, (key + self.key_size)) * */ goto __pyx_L4_break; /* "borg/hashindex.pyx":447 * while True: * key = hashindex_next_key(other.index, key) * if not key: # <<<<<<<<<<<<<< * break * self._add(key, (key + self.key_size)) */ } /* "borg/hashindex.pyx":449 * if not key: * break * self._add(key, (key + self.key_size)) # <<<<<<<<<<<<<< * * def zero_csize_ids(self): */ __pyx_t_2 = ((struct __pyx_vtabstruct_4borg_9hashindex_ChunkIndex *)__pyx_v_self->__pyx_vtab)->_add(__pyx_v_self, __pyx_v_key, ((uint32_t *)(__pyx_v_key + __pyx_v_self->__pyx_base.key_size))); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 449, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; } __pyx_L4_break:; /* "borg/hashindex.pyx":442 * raise Exception('hashindex_set failed') * * def merge(self, ChunkIndex other): # <<<<<<<<<<<<<< * cdef void *key = NULL * */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.merge", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":451 * self._add(key, (key + self.key_size)) * * def zero_csize_ids(self): # <<<<<<<<<<<<<< * cdef void *key = NULL * cdef uint32_t *values */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_21zero_csize_ids(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_21zero_csize_ids(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("zero_csize_ids (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_20zero_csize_ids(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_20zero_csize_ids(struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self) { void *__pyx_v_key; uint32_t *__pyx_v_values; PyObject *__pyx_v_entries = NULL; uint32_t __pyx_v_refcount; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; int __pyx_t_2; int __pyx_t_3; __Pyx_RefNannySetupContext("zero_csize_ids", 0); /* "borg/hashindex.pyx":452 * * def zero_csize_ids(self): * cdef void *key = NULL # <<<<<<<<<<<<<< * cdef uint32_t *values * entries = [] */ __pyx_v_key = NULL; /* "borg/hashindex.pyx":454 * cdef void *key = NULL * cdef uint32_t *values * entries = [] # <<<<<<<<<<<<<< * while True: * key = hashindex_next_key(self.index, key) */ __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 454, __pyx_L1_error) 
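/*
 * zero_csize_ids() scans the whole index and returns, as a list of Python
 * bytes objects, the keys of all entries whose stored csize value is zero,
 * i.e. entries for which no compressed size has been recorded.
 */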
__Pyx_GOTREF(__pyx_t_1); __pyx_v_entries = ((PyObject*)__pyx_t_1); __pyx_t_1 = 0; /* "borg/hashindex.pyx":455 * cdef uint32_t *values * entries = [] * while True: # <<<<<<<<<<<<<< * key = hashindex_next_key(self.index, key) * if not key: */ while (1) { /* "borg/hashindex.pyx":456 * entries = [] * while True: * key = hashindex_next_key(self.index, key) # <<<<<<<<<<<<<< * if not key: * break */ __pyx_v_key = hashindex_next_key(__pyx_v_self->__pyx_base.index, __pyx_v_key); /* "borg/hashindex.pyx":457 * while True: * key = hashindex_next_key(self.index, key) * if not key: # <<<<<<<<<<<<<< * break * values = (key + self.key_size) */ __pyx_t_2 = ((!(__pyx_v_key != 0)) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":458 * key = hashindex_next_key(self.index, key) * if not key: * break # <<<<<<<<<<<<<< * values = (key + self.key_size) * refcount = _le32toh(values[0]) */ goto __pyx_L4_break; /* "borg/hashindex.pyx":457 * while True: * key = hashindex_next_key(self.index, key) * if not key: # <<<<<<<<<<<<<< * break * values = (key + self.key_size) */ } /* "borg/hashindex.pyx":459 * if not key: * break * values = (key + self.key_size) # <<<<<<<<<<<<<< * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" */ __pyx_v_values = ((uint32_t *)(__pyx_v_key + __pyx_v_self->__pyx_base.key_size)); /* "borg/hashindex.pyx":460 * break * values = (key + self.key_size) * refcount = _le32toh(values[0]) # <<<<<<<<<<<<<< * assert refcount <= _MAX_VALUE, "invalid reference count" * if _le32toh(values[2]) == 0: */ __pyx_v_refcount = _le32toh((__pyx_v_values[0])); /* "borg/hashindex.pyx":461 * values = (key + self.key_size) * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * if _le32toh(values[2]) == 0: * # csize == 0 */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 461, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":462 * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * if _le32toh(values[2]) == 0: # <<<<<<<<<<<<<< * # csize == 0 * entries.append(PyBytes_FromStringAndSize( key, self.key_size)) */ __pyx_t_2 = ((_le32toh((__pyx_v_values[2])) == 0) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":464 * if _le32toh(values[2]) == 0: * # csize == 0 * entries.append(PyBytes_FromStringAndSize( key, self.key_size)) # <<<<<<<<<<<<<< * return entries * */ __pyx_t_1 = PyBytes_FromStringAndSize(((char *)__pyx_v_key), __pyx_v_self->__pyx_base.key_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 464, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_3 = __Pyx_PyList_Append(__pyx_v_entries, __pyx_t_1); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(0, 464, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; /* "borg/hashindex.pyx":462 * refcount = _le32toh(values[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * if _le32toh(values[2]) == 0: # <<<<<<<<<<<<<< * # csize == 0 * entries.append(PyBytes_FromStringAndSize( key, self.key_size)) */ } } __pyx_L4_break:; /* "borg/hashindex.pyx":465 * # csize == 0 * entries.append(PyBytes_FromStringAndSize( key, self.key_size)) * return entries # <<<<<<<<<<<<<< * * */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(__pyx_v_entries); __pyx_r = __pyx_v_entries; goto __pyx_L0; /* "borg/hashindex.pyx":451 * self._add(key, (key + self.key_size)) * * def zero_csize_ids(self): # <<<<<<<<<<<<<< 
* cdef void *key = NULL * cdef uint32_t *values */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.zero_csize_ids", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XDECREF(__pyx_v_entries); __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_23__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_23__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_22__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_22__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_25__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_10ChunkIndex_25__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_10ChunkIndex_24__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_10ChunkIndex_24__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkIndex *__pyx_v_self, CYTHON_UNUSED PyObject 
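/*
 * __reduce_cython__() and __setstate_cython__() are auto-generated stubs
 * that unconditionally raise TypeError: ChunkIndex has a non-trivial
 * __cinit__, so Cython disables default pickling support for the type.
 */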
*__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.ChunkIndex.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":475 * cdef int exhausted * * def __cinit__(self, key_size): # <<<<<<<<<<<<<< * self.key = NULL * self.key_size = key_size */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_16ChunkKeyIterator_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static int __pyx_pw_4borg_9hashindex_16ChunkKeyIterator_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_key_size = 0; int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_key_size_2,0}; PyObject* values[1] = {0}; if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_key_size_2)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(0, 475, __pyx_L3_error) } } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { goto __pyx_L5_argtuple_error; } else { values[0] = PyTuple_GET_ITEM(__pyx_args, 0); } __pyx_v_key_size = values[0]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 475, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.ChunkKeyIterator.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return -1; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_16ChunkKeyIterator___cinit__(((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)__pyx_v_self), __pyx_v_key_size); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_16ChunkKeyIterator___cinit__(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self, PyObject *__pyx_v_key_size) { int __pyx_r; __Pyx_RefNannyDeclarations int __pyx_t_1; __Pyx_RefNannySetupContext("__cinit__", 0); /* 
"borg/hashindex.pyx":476 * * def __cinit__(self, key_size): * self.key = NULL # <<<<<<<<<<<<<< * self.key_size = key_size * self.exhausted = 0 */ __pyx_v_self->key = NULL; /* "borg/hashindex.pyx":477 * def __cinit__(self, key_size): * self.key = NULL * self.key_size = key_size # <<<<<<<<<<<<<< * self.exhausted = 0 * */ __pyx_t_1 = __Pyx_PyInt_As_int(__pyx_v_key_size); if (unlikely((__pyx_t_1 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 477, __pyx_L1_error) __pyx_v_self->key_size = __pyx_t_1; /* "borg/hashindex.pyx":478 * self.key = NULL * self.key_size = key_size * self.exhausted = 0 # <<<<<<<<<<<<<< * * def __iter__(self): */ __pyx_v_self->exhausted = 0; /* "borg/hashindex.pyx":475 * cdef int exhausted * * def __cinit__(self, key_size): # <<<<<<<<<<<<<< * self.key = NULL * self.key_size = key_size */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.ChunkKeyIterator.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":480 * self.exhausted = 0 * * def __iter__(self): # <<<<<<<<<<<<<< * return self * */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_3__iter__(PyObject *__pyx_v_self); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_3__iter__(PyObject *__pyx_v_self) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__iter__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_16ChunkKeyIterator_2__iter__(((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_2__iter__(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__iter__", 0); /* "borg/hashindex.pyx":481 * * def __iter__(self): * return self # <<<<<<<<<<<<<< * * def __next__(self): */ __Pyx_XDECREF(__pyx_r); __Pyx_INCREF(((PyObject *)__pyx_v_self)); __pyx_r = ((PyObject *)__pyx_v_self); goto __pyx_L0; /* "borg/hashindex.pyx":480 * self.exhausted = 0 * * def __iter__(self): # <<<<<<<<<<<<<< * return self * */ /* function exit code */ __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":483 * return self * * def __next__(self): # <<<<<<<<<<<<<< * if self.exhausted: * raise StopIteration */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_5__next__(PyObject *__pyx_v_self); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_5__next__(PyObject *__pyx_v_self) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__next__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_16ChunkKeyIterator_4__next__(((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_4__next__(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self) { uint32_t *__pyx_v_value; uint32_t __pyx_v_refcount; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations int __pyx_t_1; PyObject *__pyx_t_2 = NULL; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; PyObject *__pyx_t_6 = NULL; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; int __pyx_t_9; PyObject 
*__pyx_t_10 = NULL; __Pyx_RefNannySetupContext("__next__", 0); /* "borg/hashindex.pyx":484 * * def __next__(self): * if self.exhausted: # <<<<<<<<<<<<<< * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) */ __pyx_t_1 = (__pyx_v_self->exhausted != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":485 * def __next__(self): * if self.exhausted: * raise StopIteration # <<<<<<<<<<<<<< * self.key = hashindex_next_key(self.index, self.key) * if not self.key: */ __Pyx_Raise(__pyx_builtin_StopIteration, 0, 0, 0); __PYX_ERR(0, 485, __pyx_L1_error) /* "borg/hashindex.pyx":484 * * def __next__(self): * if self.exhausted: # <<<<<<<<<<<<<< * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) */ } /* "borg/hashindex.pyx":486 * if self.exhausted: * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) # <<<<<<<<<<<<<< * if not self.key: * self.exhausted = 1 */ __pyx_v_self->key = hashindex_next_key(__pyx_v_self->index, ((char *)__pyx_v_self->key)); /* "borg/hashindex.pyx":487 * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) * if not self.key: # <<<<<<<<<<<<<< * self.exhausted = 1 * raise StopIteration */ __pyx_t_1 = ((!(__pyx_v_self->key != 0)) != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":488 * self.key = hashindex_next_key(self.index, self.key) * if not self.key: * self.exhausted = 1 # <<<<<<<<<<<<<< * raise StopIteration * cdef uint32_t *value = (self.key + self.key_size) */ __pyx_v_self->exhausted = 1; /* "borg/hashindex.pyx":489 * if not self.key: * self.exhausted = 1 * raise StopIteration # <<<<<<<<<<<<<< * cdef uint32_t *value = (self.key + self.key_size) * cdef uint32_t refcount = _le32toh(value[0]) */ __Pyx_Raise(__pyx_builtin_StopIteration, 0, 0, 0); __PYX_ERR(0, 489, __pyx_L1_error) /* "borg/hashindex.pyx":487 * raise StopIteration * self.key = hashindex_next_key(self.index, self.key) * if not self.key: # <<<<<<<<<<<<<< * self.exhausted = 1 * raise StopIteration */ } /* "borg/hashindex.pyx":490 * self.exhausted = 1 * raise StopIteration * cdef uint32_t *value = (self.key + self.key_size) # <<<<<<<<<<<<<< * cdef uint32_t refcount = _le32toh(value[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" */ __pyx_v_value = ((uint32_t *)(__pyx_v_self->key + __pyx_v_self->key_size)); /* "borg/hashindex.pyx":491 * raise StopIteration * cdef uint32_t *value = (self.key + self.key_size) * cdef uint32_t refcount = _le32toh(value[0]) # <<<<<<<<<<<<<< * assert refcount <= _MAX_VALUE, "invalid reference count" * return (self.key)[:self.key_size], ChunkIndexEntry(refcount, _le32toh(value[1]), _le32toh(value[2])) */ __pyx_v_refcount = _le32toh((__pyx_v_value[0])); /* "borg/hashindex.pyx":492 * cdef uint32_t *value = (self.key + self.key_size) * cdef uint32_t refcount = _le32toh(value[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" # <<<<<<<<<<<<<< * return (self.key)[:self.key_size], ChunkIndexEntry(refcount, _le32toh(value[1]), _le32toh(value[2])) * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__pyx_v_refcount <= _MAX_VALUE) != 0))) { PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_invalid_reference_count); __PYX_ERR(0, 492, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":493 * cdef uint32_t refcount = _le32toh(value[0]) * assert refcount <= _MAX_VALUE, "invalid reference count" * return (self.key)[:self.key_size], ChunkIndexEntry(refcount, _le32toh(value[1]), _le32toh(value[2])) # <<<<<<<<<<<<<< * * */ __Pyx_XDECREF(__pyx_r); __pyx_t_2 = 
__Pyx_PyBytes_FromStringAndSize(((char *)__pyx_v_self->key) + 0, __pyx_v_self->key_size - 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_ChunkIndexEntry); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_5 = __Pyx_PyInt_From_uint32_t(__pyx_v_refcount); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __pyx_t_6 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_value[1]))); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_6); __pyx_t_7 = __Pyx_PyInt_From_uint32_t(_le32toh((__pyx_v_value[2]))); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_7); __pyx_t_8 = NULL; __pyx_t_9 = 0; if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_4); if (likely(__pyx_t_8)) { PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); __Pyx_INCREF(__pyx_t_8); __Pyx_INCREF(function); __Pyx_DECREF_SET(__pyx_t_4, function); __pyx_t_9 = 1; } } #if CYTHON_FAST_PYCALL if (PyFunction_Check(__pyx_t_4)) { PyObject *__pyx_temp[4] = {__pyx_t_8, __pyx_t_5, __pyx_t_6, __pyx_t_7}; __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_9, 3+__pyx_t_9); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; } else #endif #if CYTHON_FAST_PYCCALL if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) { PyObject *__pyx_temp[4] = {__pyx_t_8, __pyx_t_5, __pyx_t_6, __pyx_t_7}; __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_9, 3+__pyx_t_9); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; } else #endif { __pyx_t_10 = PyTuple_New(3+__pyx_t_9); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_10); if (__pyx_t_8) { __Pyx_GIVEREF(__pyx_t_8); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_8); __pyx_t_8 = NULL; } __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_9, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_9, __pyx_t_6); __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_10, 2+__pyx_t_9, __pyx_t_7); __pyx_t_5 = 0; __pyx_t_6 = 0; __pyx_t_7 = 0; __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_10, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; } __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 493, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); __pyx_t_2 = 0; __pyx_t_3 = 0; __pyx_r = __pyx_t_4; __pyx_t_4 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":483 * return self * * def __next__(self): # <<<<<<<<<<<<<< * if self.exhausted: * raise StopIteration */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_2); __Pyx_XDECREF(__pyx_t_3); __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_XDECREF(__pyx_t_6); __Pyx_XDECREF(__pyx_t_7); __Pyx_XDECREF(__pyx_t_8); __Pyx_XDECREF(__pyx_t_10); 
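/* error path (__pyx_L1_error): the temporaries above have all been
 * released; the traceback is recorded next and NULL is returned, which
 * propagates the pending StopIteration or AssertionError to the caller.
 * Illustrative sketch (not part of this generated file; `chunk_index`
 * is an assumed ChunkIndex instance) of how the iterator defined at
 * hashindex.pyx:483-493 is consumed from Python:
 *
 *     for key, entry in chunk_index.iteritems():
 *         refcount, size, csize = entry   # ChunkIndexEntry fields
 */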
__Pyx_AddTraceback("borg.hashindex.ChunkKeyIterator.__next__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_16ChunkKeyIterator_6__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.ChunkKeyIterator.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_16ChunkKeyIterator_8__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_16ChunkKeyIterator_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject 
*__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.ChunkKeyIterator.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":496 * * * cdef Py_buffer ro_buffer(object data) except *: # <<<<<<<<<<<<<< * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) */ static Py_buffer __pyx_f_4borg_9hashindex_ro_buffer(PyObject *__pyx_v_data) { Py_buffer __pyx_v_view; Py_buffer __pyx_r; __Pyx_RefNannyDeclarations int __pyx_t_1; __Pyx_RefNannySetupContext("ro_buffer", 0); /* "borg/hashindex.pyx":498 * cdef Py_buffer ro_buffer(object data) except *: * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) # <<<<<<<<<<<<<< * return view * */ __pyx_t_1 = PyObject_GetBuffer(__pyx_v_data, (&__pyx_v_view), PyBUF_SIMPLE); if (unlikely(__pyx_t_1 == ((int)-1))) __PYX_ERR(0, 498, __pyx_L1_error) /* "borg/hashindex.pyx":499 * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) * return view # <<<<<<<<<<<<<< * * */ __pyx_r = __pyx_v_view; goto __pyx_L0; /* "borg/hashindex.pyx":496 * * * cdef Py_buffer ro_buffer(object data) except *: # <<<<<<<<<<<<<< * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) */ /* function exit code */ __pyx_L1_error:; __Pyx_AddTraceback("borg.hashindex.ro_buffer", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_pretend_to_initialize(&__pyx_r); __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":506 * cdef CacheSyncCtx *sync * * def __cinit__(self, chunks): # <<<<<<<<<<<<<< * self.chunks = chunks * self.sync = cache_sync_init(self.chunks.index) */ /* Python wrapper */ static int __pyx_pw_4borg_9hashindex_17CacheSynchronizer_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static int __pyx_pw_4borg_9hashindex_17CacheSynchronizer_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_chunks = 0; int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_chunks,0}; PyObject* values[1] = {0}; if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_chunks)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; } 
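/* argument unpacking for CacheSynchronizer.__cinit__(self, chunks): the
 * single 'chunks' argument has been collected above; any keyword left
 * over below makes __Pyx_ParseOptionalKeywords raise TypeError. */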
if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(0, 506, __pyx_L3_error) } } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { goto __pyx_L5_argtuple_error; } else { values[0] = PyTuple_GET_ITEM(__pyx_args, 0); } __pyx_v_chunks = values[0]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 506, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.hashindex.CacheSynchronizer.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return -1; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_9hashindex_17CacheSynchronizer___cinit__(((struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)__pyx_v_self), __pyx_v_chunks); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static int __pyx_pf_4borg_9hashindex_17CacheSynchronizer___cinit__(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self, PyObject *__pyx_v_chunks) { int __pyx_r; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; int __pyx_t_2; __Pyx_RefNannySetupContext("__cinit__", 0); /* "borg/hashindex.pyx":507 * * def __cinit__(self, chunks): * self.chunks = chunks # <<<<<<<<<<<<<< * self.sync = cache_sync_init(self.chunks.index) * if not self.sync: */ if (!(likely(((__pyx_v_chunks) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_chunks, __pyx_ptype_4borg_9hashindex_ChunkIndex))))) __PYX_ERR(0, 507, __pyx_L1_error) __pyx_t_1 = __pyx_v_chunks; __Pyx_INCREF(__pyx_t_1); __Pyx_GIVEREF(__pyx_t_1); __Pyx_GOTREF(__pyx_v_self->chunks); __Pyx_DECREF(((PyObject *)__pyx_v_self->chunks)); __pyx_v_self->chunks = ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)__pyx_t_1); __pyx_t_1 = 0; /* "borg/hashindex.pyx":508 * def __cinit__(self, chunks): * self.chunks = chunks * self.sync = cache_sync_init(self.chunks.index) # <<<<<<<<<<<<<< * if not self.sync: * raise Exception('cache_sync_init failed') */ __pyx_v_self->sync = cache_sync_init(__pyx_v_self->chunks->__pyx_base.index); /* "borg/hashindex.pyx":509 * self.chunks = chunks * self.sync = cache_sync_init(self.chunks.index) * if not self.sync: # <<<<<<<<<<<<<< * raise Exception('cache_sync_init failed') * */ __pyx_t_2 = ((!(__pyx_v_self->sync != 0)) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":510 * self.sync = cache_sync_init(self.chunks.index) * if not self.sync: * raise Exception('cache_sync_init failed') # <<<<<<<<<<<<<< * * def __dealloc__(self): */ __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__25, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 510, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(0, 510, __pyx_L1_error) /* "borg/hashindex.pyx":509 * self.chunks = chunks * self.sync = cache_sync_init(self.chunks.index) * if not self.sync: # <<<<<<<<<<<<<< * raise Exception('cache_sync_init failed') * */ } /* "borg/hashindex.pyx":506 * cdef CacheSyncCtx *sync * * def __cinit__(self, chunks): # <<<<<<<<<<<<<< * self.chunks = chunks * self.sync = cache_sync_init(self.chunks.index) */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.CacheSynchronizer.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":512 * 
raise Exception('cache_sync_init failed') * * def __dealloc__(self): # <<<<<<<<<<<<<< * if self.sync: * cache_sync_free(self.sync) */ /* Python wrapper */ static void __pyx_pw_4borg_9hashindex_17CacheSynchronizer_3__dealloc__(PyObject *__pyx_v_self); /*proto*/ static void __pyx_pw_4borg_9hashindex_17CacheSynchronizer_3__dealloc__(PyObject *__pyx_v_self) { __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); __pyx_pf_4borg_9hashindex_17CacheSynchronizer_2__dealloc__(((struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); } static void __pyx_pf_4borg_9hashindex_17CacheSynchronizer_2__dealloc__(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self) { __Pyx_RefNannyDeclarations int __pyx_t_1; __Pyx_RefNannySetupContext("__dealloc__", 0); /* "borg/hashindex.pyx":513 * * def __dealloc__(self): * if self.sync: # <<<<<<<<<<<<<< * cache_sync_free(self.sync) * */ __pyx_t_1 = (__pyx_v_self->sync != 0); if (__pyx_t_1) { /* "borg/hashindex.pyx":514 * def __dealloc__(self): * if self.sync: * cache_sync_free(self.sync) # <<<<<<<<<<<<<< * * def feed(self, chunk): */ cache_sync_free(__pyx_v_self->sync); /* "borg/hashindex.pyx":513 * * def __dealloc__(self): * if self.sync: # <<<<<<<<<<<<<< * cache_sync_free(self.sync) * */ } /* "borg/hashindex.pyx":512 * raise Exception('cache_sync_init failed') * * def __dealloc__(self): # <<<<<<<<<<<<<< * if self.sync: * cache_sync_free(self.sync) */ /* function exit code */ __Pyx_RefNannyFinishContext(); } /* "borg/hashindex.pyx":516 * cache_sync_free(self.sync) * * def feed(self, chunk): # <<<<<<<<<<<<<< * cdef Py_buffer chunk_buf = ro_buffer(chunk) * cdef int rc */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_5feed(PyObject *__pyx_v_self, PyObject *__pyx_v_chunk); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_5feed(PyObject *__pyx_v_self, PyObject *__pyx_v_chunk) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("feed (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17CacheSynchronizer_4feed(((struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)__pyx_v_self), ((PyObject *)__pyx_v_chunk)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_4feed(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self, PyObject *__pyx_v_chunk) { Py_buffer __pyx_v_chunk_buf; int __pyx_v_rc; char const *__pyx_v_error; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_buffer __pyx_t_1; int __pyx_t_2; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; __Pyx_RefNannySetupContext("feed", 0); /* "borg/hashindex.pyx":517 * * def feed(self, chunk): * cdef Py_buffer chunk_buf = ro_buffer(chunk) # <<<<<<<<<<<<<< * cdef int rc * rc = cache_sync_feed(self.sync, chunk_buf.buf, chunk_buf.len) */ __pyx_t_1 = __pyx_f_4borg_9hashindex_ro_buffer(__pyx_v_chunk); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 517, __pyx_L1_error) __pyx_v_chunk_buf = __pyx_t_1; /* "borg/hashindex.pyx":519 * cdef Py_buffer chunk_buf = ro_buffer(chunk) * cdef int rc * rc = cache_sync_feed(self.sync, chunk_buf.buf, chunk_buf.len) # <<<<<<<<<<<<<< * PyBuffer_Release(&chunk_buf) * if not rc: */ __pyx_v_rc = cache_sync_feed(__pyx_v_self->sync, __pyx_v_chunk_buf.buf, __pyx_v_chunk_buf.len); /* "borg/hashindex.pyx":520 * cdef int rc * rc = cache_sync_feed(self.sync, chunk_buf.buf, chunk_buf.len) * 
PyBuffer_Release(&chunk_buf) # <<<<<<<<<<<<<< * if not rc: * error = cache_sync_error(self.sync) */ PyBuffer_Release((&__pyx_v_chunk_buf)); /* "borg/hashindex.pyx":521 * rc = cache_sync_feed(self.sync, chunk_buf.buf, chunk_buf.len) * PyBuffer_Release(&chunk_buf) * if not rc: # <<<<<<<<<<<<<< * error = cache_sync_error(self.sync) * if error != NULL: */ __pyx_t_2 = ((!(__pyx_v_rc != 0)) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":522 * PyBuffer_Release(&chunk_buf) * if not rc: * error = cache_sync_error(self.sync) # <<<<<<<<<<<<<< * if error != NULL: * raise ValueError('cache_sync_feed failed: ' + error.decode('ascii')) */ __pyx_v_error = cache_sync_error(__pyx_v_self->sync); /* "borg/hashindex.pyx":523 * if not rc: * error = cache_sync_error(self.sync) * if error != NULL: # <<<<<<<<<<<<<< * raise ValueError('cache_sync_feed failed: ' + error.decode('ascii')) * */ __pyx_t_2 = ((__pyx_v_error != NULL) != 0); if (__pyx_t_2) { /* "borg/hashindex.pyx":524 * error = cache_sync_error(self.sync) * if error != NULL: * raise ValueError('cache_sync_feed failed: ' + error.decode('ascii')) # <<<<<<<<<<<<<< * * @property */ __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_error, 0, strlen(__pyx_v_error), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 524, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __pyx_t_4 = __Pyx_PyUnicode_Concat(__pyx_kp_s_cache_sync_feed_failed, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 524, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 524, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); __pyx_t_4 = 0; __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 524, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __Pyx_Raise(__pyx_t_4, 0, 0, 0); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __PYX_ERR(0, 524, __pyx_L1_error) /* "borg/hashindex.pyx":523 * if not rc: * error = cache_sync_error(self.sync) * if error != NULL: # <<<<<<<<<<<<<< * raise ValueError('cache_sync_feed failed: ' + error.decode('ascii')) * */ } /* "borg/hashindex.pyx":521 * rc = cache_sync_feed(self.sync, chunk_buf.buf, chunk_buf.len) * PyBuffer_Release(&chunk_buf) * if not rc: # <<<<<<<<<<<<<< * error = cache_sync_error(self.sync) * if error != NULL: */ } /* "borg/hashindex.pyx":516 * cache_sync_free(self.sync) * * def feed(self, chunk): # <<<<<<<<<<<<<< * cdef Py_buffer chunk_buf = ro_buffer(chunk) * cdef int rc */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_XDECREF(__pyx_t_4); __Pyx_AddTraceback("borg.hashindex.CacheSynchronizer.feed", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/hashindex.pyx":527 * * @property * def num_files(self): # <<<<<<<<<<<<<< * return cache_sync_num_files(self.sync) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_9num_files_1__get__(PyObject *__pyx_v_self); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_9num_files_1__get__(PyObject *__pyx_v_self) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17CacheSynchronizer_9num_files___get__(((struct 
__pyx_obj_4borg_9hashindex_CacheSynchronizer *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_9num_files___get__(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__get__", 0); /* "borg/hashindex.pyx":528 * @property * def num_files(self): * return cache_sync_num_files(self.sync) # <<<<<<<<<<<<<< */ __Pyx_XDECREF(__pyx_r); __pyx_t_1 = __Pyx_PyInt_From_uint64_t(cache_sync_num_files(__pyx_v_self->sync)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 528, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_r = __pyx_t_1; __pyx_t_1 = 0; goto __pyx_L0; /* "borg/hashindex.pyx":527 * * @property * def num_files(self): # <<<<<<<<<<<<<< * return cache_sync_num_files(self.sync) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.CacheSynchronizer.num_files.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17CacheSynchronizer_6__reduce_cython__(((struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__26, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.CacheSynchronizer.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ 
due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_9hashindex_17CacheSynchronizer_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_9hashindex_17CacheSynchronizer_8__setstate_cython__(((struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_9hashindex_17CacheSynchronizer_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__27, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(1, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.hashindex.CacheSynchronizer.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_tp_new_4borg_9hashindex_IndexBase(PyTypeObject *t, PyObject *a, PyObject *k) { PyObject *o; if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { o = (*t->tp_alloc)(t, 0); } else { o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); } if (unlikely(!o)) return 0; if (unlikely(__pyx_pw_4borg_9hashindex_9IndexBase_1__cinit__(o, a, k) < 0)) goto bad; return o; bad: Py_DECREF(o); o = 0; return NULL; } static void __pyx_tp_dealloc_4borg_9hashindex_IndexBase(PyObject *o) { #if CYTHON_USE_TP_FINALIZE if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { if (PyObject_CallFinalizerFromDealloc(o)) return; } #endif { PyObject *etype, *eval, *etb; PyErr_Fetch(&etype, &eval, &etb); ++Py_REFCNT(o); __pyx_pw_4borg_9hashindex_9IndexBase_3__dealloc__(o); --Py_REFCNT(o); PyErr_Restore(etype, eval, etb); } (*Py_TYPE(o)->tp_free)(o); } static int __pyx_mp_ass_subscript_4borg_9hashindex_IndexBase(PyObject *o, PyObject *i, PyObject *v) { if (v) { PyErr_Format(PyExc_NotImplementedError, "Subscript assignment not supported by %.200s", Py_TYPE(o)->tp_name); return -1; } else { return __pyx_pw_4borg_9hashindex_9IndexBase_13__delitem__(o, i); } } static PyMethodDef __pyx_methods_4borg_9hashindex_IndexBase[] = { {"read", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_5read, METH_VARARGS|METH_KEYWORDS, 0}, 
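/* the remaining IndexBase method-table entries expose write, clear,
 * setdefault, get, pop, size and compact, plus __reduce_cython__ /
 * __setstate_cython__, which (as seen above) only raise TypeError,
 * blocking pickling of index objects with a non-trivial __cinit__. */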
{"write", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_7write, METH_O, 0}, {"clear", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_9clear, METH_NOARGS, 0}, {"setdefault", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_11setdefault, METH_VARARGS|METH_KEYWORDS, 0}, {"get", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_15get, METH_VARARGS|METH_KEYWORDS, 0}, {"pop", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_17pop, METH_VARARGS|METH_KEYWORDS, 0}, {"size", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_21size, METH_NOARGS, __pyx_doc_4borg_9hashindex_9IndexBase_20size}, {"compact", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_23compact, METH_NOARGS, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_25__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_9IndexBase_27__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PySequenceMethods __pyx_tp_as_sequence_IndexBase = { __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*sq_length*/ 0, /*sq_concat*/ 0, /*sq_repeat*/ 0, /*sq_item*/ 0, /*sq_slice*/ 0, /*sq_ass_item*/ 0, /*sq_ass_slice*/ 0, /*sq_contains*/ 0, /*sq_inplace_concat*/ 0, /*sq_inplace_repeat*/ }; static PyMappingMethods __pyx_tp_as_mapping_IndexBase = { __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*mp_length*/ 0, /*mp_subscript*/ __pyx_mp_ass_subscript_4borg_9hashindex_IndexBase, /*mp_ass_subscript*/ }; static PyTypeObject __pyx_type_4borg_9hashindex_IndexBase = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.IndexBase", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_IndexBase), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_IndexBase, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ &__pyx_tp_as_sequence_IndexBase, /*tp_as_sequence*/ &__pyx_tp_as_mapping_IndexBase, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ 0, /*tp_doc*/ 0, /*tp_traverse*/ 0, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ 0, /*tp_iter*/ 0, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_IndexBase, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_IndexBase, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyObject *__pyx_tp_new_4borg_9hashindex_FuseVersionsIndex(PyTypeObject *t, PyObject *a, PyObject *k) { PyObject *o = __pyx_tp_new_4borg_9hashindex_IndexBase(t, a, k); if (unlikely(!o)) return 0; return o; } static PyObject *__pyx_sq_item_4borg_9hashindex_FuseVersionsIndex(PyObject *o, Py_ssize_t i) { PyObject *r; PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); Py_DECREF(x); return r; } static int __pyx_mp_ass_subscript_4borg_9hashindex_FuseVersionsIndex(PyObject *o, PyObject *i, PyObject *v) { if (v) { return __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_3__setitem__(o, i, v); } else { if 
(__pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping && __pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping->mp_ass_subscript) return __pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping->mp_ass_subscript(o, i, v); PyErr_Format(PyExc_NotImplementedError, "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); return -1; } } static PyMethodDef __pyx_methods_4borg_9hashindex_FuseVersionsIndex[] = { {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_7__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_17FuseVersionsIndex_9__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PySequenceMethods __pyx_tp_as_sequence_FuseVersionsIndex = { #if CYTHON_COMPILING_IN_PYPY __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*sq_length*/ #else 0, /*sq_length*/ #endif 0, /*sq_concat*/ 0, /*sq_repeat*/ __pyx_sq_item_4borg_9hashindex_FuseVersionsIndex, /*sq_item*/ 0, /*sq_slice*/ 0, /*sq_ass_item*/ 0, /*sq_ass_slice*/ __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_5__contains__, /*sq_contains*/ 0, /*sq_inplace_concat*/ 0, /*sq_inplace_repeat*/ }; static PyMappingMethods __pyx_tp_as_mapping_FuseVersionsIndex = { #if CYTHON_COMPILING_IN_PYPY __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*mp_length*/ #else 0, /*mp_length*/ #endif __pyx_pw_4borg_9hashindex_17FuseVersionsIndex_1__getitem__, /*mp_subscript*/ __pyx_mp_ass_subscript_4borg_9hashindex_FuseVersionsIndex, /*mp_ass_subscript*/ }; static PyTypeObject __pyx_type_4borg_9hashindex_FuseVersionsIndex = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.FuseVersionsIndex", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_FuseVersionsIndex), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_IndexBase, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ &__pyx_tp_as_sequence_FuseVersionsIndex, /*tp_as_sequence*/ &__pyx_tp_as_mapping_FuseVersionsIndex, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ 0, /*tp_doc*/ 0, /*tp_traverse*/ 0, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ 0, /*tp_iter*/ 0, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_FuseVersionsIndex, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_FuseVersionsIndex, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyObject *__pyx_tp_new_4borg_9hashindex_NSIndex(PyTypeObject *t, PyObject *a, PyObject *k) { PyObject *o = __pyx_tp_new_4borg_9hashindex_IndexBase(t, a, k); if (unlikely(!o)) return 0; return o; } static PyObject *__pyx_sq_item_4borg_9hashindex_NSIndex(PyObject *o, Py_ssize_t i) { PyObject *r; PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); Py_DECREF(x); return r; } static int __pyx_mp_ass_subscript_4borg_9hashindex_NSIndex(PyObject *o, PyObject *i, PyObject *v) { if (v) { return __pyx_pw_4borg_9hashindex_7NSIndex_3__setitem__(o, i, 
v); } else { if (__pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping && __pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping->mp_ass_subscript) return __pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping->mp_ass_subscript(o, i, v); PyErr_Format(PyExc_NotImplementedError, "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); return -1; } } static PyMethodDef __pyx_methods_4borg_9hashindex_NSIndex[] = { {"iteritems", (PyCFunction)__pyx_pw_4borg_9hashindex_7NSIndex_7iteritems, METH_VARARGS|METH_KEYWORDS, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_7NSIndex_9__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_7NSIndex_11__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PySequenceMethods __pyx_tp_as_sequence_NSIndex = { #if CYTHON_COMPILING_IN_PYPY __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*sq_length*/ #else 0, /*sq_length*/ #endif 0, /*sq_concat*/ 0, /*sq_repeat*/ __pyx_sq_item_4borg_9hashindex_NSIndex, /*sq_item*/ 0, /*sq_slice*/ 0, /*sq_ass_item*/ 0, /*sq_ass_slice*/ __pyx_pw_4borg_9hashindex_7NSIndex_5__contains__, /*sq_contains*/ 0, /*sq_inplace_concat*/ 0, /*sq_inplace_repeat*/ }; static PyMappingMethods __pyx_tp_as_mapping_NSIndex = { #if CYTHON_COMPILING_IN_PYPY __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*mp_length*/ #else 0, /*mp_length*/ #endif __pyx_pw_4borg_9hashindex_7NSIndex_1__getitem__, /*mp_subscript*/ __pyx_mp_ass_subscript_4borg_9hashindex_NSIndex, /*mp_ass_subscript*/ }; static PyTypeObject __pyx_type_4borg_9hashindex_NSIndex = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.NSIndex", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_NSIndex), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_IndexBase, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ &__pyx_tp_as_sequence_NSIndex, /*tp_as_sequence*/ &__pyx_tp_as_mapping_NSIndex, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ 0, /*tp_doc*/ 0, /*tp_traverse*/ 0, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ 0, /*tp_iter*/ 0, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_NSIndex, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_NSIndex, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyObject *__pyx_tp_new_4borg_9hashindex_NSKeyIterator(PyTypeObject *t, PyObject *a, PyObject *k) { struct __pyx_obj_4borg_9hashindex_NSKeyIterator *p; PyObject *o; if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { o = (*t->tp_alloc)(t, 0); } else { o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); } if (unlikely(!o)) return 0; p = ((struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)o); p->idx = ((struct __pyx_obj_4borg_9hashindex_NSIndex *)Py_None); Py_INCREF(Py_None); if (unlikely(__pyx_pw_4borg_9hashindex_13NSKeyIterator_1__cinit__(o, a, k) < 0)) goto bad; return o; bad: Py_DECREF(o); o = 0; return 
NULL; } static void __pyx_tp_dealloc_4borg_9hashindex_NSKeyIterator(PyObject *o) { struct __pyx_obj_4borg_9hashindex_NSKeyIterator *p = (struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)o; #if CYTHON_USE_TP_FINALIZE if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { if (PyObject_CallFinalizerFromDealloc(o)) return; } #endif PyObject_GC_UnTrack(o); Py_CLEAR(p->idx); (*Py_TYPE(o)->tp_free)(o); } static int __pyx_tp_traverse_4borg_9hashindex_NSKeyIterator(PyObject *o, visitproc v, void *a) { int e; struct __pyx_obj_4borg_9hashindex_NSKeyIterator *p = (struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)o; if (p->idx) { e = (*v)(((PyObject *)p->idx), a); if (e) return e; } return 0; } static int __pyx_tp_clear_4borg_9hashindex_NSKeyIterator(PyObject *o) { PyObject* tmp; struct __pyx_obj_4borg_9hashindex_NSKeyIterator *p = (struct __pyx_obj_4borg_9hashindex_NSKeyIterator *)o; tmp = ((PyObject*)p->idx); p->idx = ((struct __pyx_obj_4borg_9hashindex_NSIndex *)Py_None); Py_INCREF(Py_None); Py_XDECREF(tmp); return 0; } static PyMethodDef __pyx_methods_4borg_9hashindex_NSKeyIterator[] = { {"__next__", (PyCFunction)__pyx_pw_4borg_9hashindex_13NSKeyIterator_5__next__, METH_NOARGS|METH_COEXIST, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_13NSKeyIterator_7__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_13NSKeyIterator_9__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PyTypeObject __pyx_type_4borg_9hashindex_NSKeyIterator = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.NSKeyIterator", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_NSKeyIterator), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_NSKeyIterator, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ 0, /*tp_doc*/ __pyx_tp_traverse_4borg_9hashindex_NSKeyIterator, /*tp_traverse*/ __pyx_tp_clear_4borg_9hashindex_NSKeyIterator, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ __pyx_pw_4borg_9hashindex_13NSKeyIterator_3__iter__, /*tp_iter*/ __pyx_pw_4borg_9hashindex_13NSKeyIterator_5__next__, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_NSKeyIterator, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_NSKeyIterator, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static struct __pyx_vtabstruct_4borg_9hashindex_ChunkIndex __pyx_vtable_4borg_9hashindex_ChunkIndex; static PyObject *__pyx_tp_new_4borg_9hashindex_ChunkIndex(PyTypeObject *t, PyObject *a, PyObject *k) { struct __pyx_obj_4borg_9hashindex_ChunkIndex *p; PyObject *o = __pyx_tp_new_4borg_9hashindex_IndexBase(t, a, k); if (unlikely(!o)) return 0; p = ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)o); p->__pyx_vtab = 
__pyx_vtabptr_4borg_9hashindex_ChunkIndex; return o; } static PyObject *__pyx_sq_item_4borg_9hashindex_ChunkIndex(PyObject *o, Py_ssize_t i) { PyObject *r; PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); Py_DECREF(x); return r; } static int __pyx_mp_ass_subscript_4borg_9hashindex_ChunkIndex(PyObject *o, PyObject *i, PyObject *v) { if (v) { return __pyx_pw_4borg_9hashindex_10ChunkIndex_3__setitem__(o, i, v); } else { if (__pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping && __pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping->mp_ass_subscript) return __pyx_ptype_4borg_9hashindex_IndexBase->tp_as_mapping->mp_ass_subscript(o, i, v); PyErr_Format(PyExc_NotImplementedError, "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); return -1; } } static PyMethodDef __pyx_methods_4borg_9hashindex_ChunkIndex[] = { {"incref", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_7incref, METH_O, __pyx_doc_4borg_9hashindex_10ChunkIndex_6incref}, {"decref", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_9decref, METH_O, __pyx_doc_4borg_9hashindex_10ChunkIndex_8decref}, {"iteritems", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_11iteritems, METH_VARARGS|METH_KEYWORDS, 0}, {"summarize", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_13summarize, METH_NOARGS, 0}, {"stats_against", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_15stats_against, METH_O, __pyx_doc_4borg_9hashindex_10ChunkIndex_14stats_against}, {"add", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_17add, METH_VARARGS|METH_KEYWORDS, 0}, {"merge", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_19merge, METH_O, 0}, {"zero_csize_ids", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_21zero_csize_ids, METH_NOARGS, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_23__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_10ChunkIndex_25__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PySequenceMethods __pyx_tp_as_sequence_ChunkIndex = { #if CYTHON_COMPILING_IN_PYPY __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*sq_length*/ #else 0, /*sq_length*/ #endif 0, /*sq_concat*/ 0, /*sq_repeat*/ __pyx_sq_item_4borg_9hashindex_ChunkIndex, /*sq_item*/ 0, /*sq_slice*/ 0, /*sq_ass_item*/ 0, /*sq_ass_slice*/ __pyx_pw_4borg_9hashindex_10ChunkIndex_5__contains__, /*sq_contains*/ 0, /*sq_inplace_concat*/ 0, /*sq_inplace_repeat*/ }; static PyMappingMethods __pyx_tp_as_mapping_ChunkIndex = { #if CYTHON_COMPILING_IN_PYPY __pyx_pw_4borg_9hashindex_9IndexBase_19__len__, /*mp_length*/ #else 0, /*mp_length*/ #endif __pyx_pw_4borg_9hashindex_10ChunkIndex_1__getitem__, /*mp_subscript*/ __pyx_mp_ass_subscript_4borg_9hashindex_ChunkIndex, /*mp_ass_subscript*/ }; static PyTypeObject __pyx_type_4borg_9hashindex_ChunkIndex = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.ChunkIndex", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_ChunkIndex), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_IndexBase, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ &__pyx_tp_as_sequence_ChunkIndex, /*tp_as_sequence*/ &__pyx_tp_as_mapping_ChunkIndex, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ 
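/* tp_flags and tp_doc of borg.hashindex.ChunkIndex follow; the docstring
 * below describes the 32-byte-key -> (refcount, size, csize) value layout
 * and the refcount saturation at MAX_VALUE that the asserts at
 * hashindex.pyx:461 and :492 enforce. */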
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ "\n Mapping of 32 byte keys to (refcount, size, csize), which are all 32-bit unsigned.\n\n The reference count cannot overflow. If an overflow would occur, the refcount\n is fixed to MAX_VALUE and will neither increase nor decrease by incref(), decref()\n or add().\n\n Prior signed 32-bit overflow is handled correctly for most cases: All values\n from UINT32_MAX (2**32-1, inclusive) to MAX_VALUE (exclusive) are reserved and either\n cause silent data loss (-1, -2) or will raise an AssertionError when accessed.\n Other values are handled correctly. Note that previously the refcount could also reach\n 0 by *increasing* it.\n\n Assigning refcounts in this reserved range is an invalid operation and raises AssertionError.\n ", /*tp_doc*/ 0, /*tp_traverse*/ 0, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ 0, /*tp_iter*/ 0, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_ChunkIndex, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_ChunkIndex, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyObject *__pyx_tp_new_4borg_9hashindex_ChunkKeyIterator(PyTypeObject *t, PyObject *a, PyObject *k) { struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *p; PyObject *o; if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { o = (*t->tp_alloc)(t, 0); } else { o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); } if (unlikely(!o)) return 0; p = ((struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)o); p->idx = ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)Py_None); Py_INCREF(Py_None); if (unlikely(__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_1__cinit__(o, a, k) < 0)) goto bad; return o; bad: Py_DECREF(o); o = 0; return NULL; } static void __pyx_tp_dealloc_4borg_9hashindex_ChunkKeyIterator(PyObject *o) { struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *p = (struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)o; #if CYTHON_USE_TP_FINALIZE if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { if (PyObject_CallFinalizerFromDealloc(o)) return; } #endif PyObject_GC_UnTrack(o); Py_CLEAR(p->idx); (*Py_TYPE(o)->tp_free)(o); } static int __pyx_tp_traverse_4borg_9hashindex_ChunkKeyIterator(PyObject *o, visitproc v, void *a) { int e; struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *p = (struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)o; if (p->idx) { e = (*v)(((PyObject *)p->idx), a); if (e) return e; } return 0; } static int __pyx_tp_clear_4borg_9hashindex_ChunkKeyIterator(PyObject *o) { PyObject* tmp; struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *p = (struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator *)o; tmp = ((PyObject*)p->idx); p->idx = ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)Py_None); Py_INCREF(Py_None); Py_XDECREF(tmp); return 0; } static PyMethodDef __pyx_methods_4borg_9hashindex_ChunkKeyIterator[] = { {"__next__", (PyCFunction)__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_5__next__, METH_NOARGS|METH_COEXIST, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_7__reduce_cython__, METH_NOARGS, 
0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_16ChunkKeyIterator_9__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PyTypeObject __pyx_type_4borg_9hashindex_ChunkKeyIterator = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.ChunkKeyIterator", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_ChunkKeyIterator), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_ChunkKeyIterator, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ 0, /*tp_doc*/ __pyx_tp_traverse_4borg_9hashindex_ChunkKeyIterator, /*tp_traverse*/ __pyx_tp_clear_4borg_9hashindex_ChunkKeyIterator, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ __pyx_pw_4borg_9hashindex_16ChunkKeyIterator_3__iter__, /*tp_iter*/ __pyx_pw_4borg_9hashindex_16ChunkKeyIterator_5__next__, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_ChunkKeyIterator, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_ChunkKeyIterator, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyObject *__pyx_tp_new_4borg_9hashindex_CacheSynchronizer(PyTypeObject *t, PyObject *a, PyObject *k) { struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *p; PyObject *o; if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { o = (*t->tp_alloc)(t, 0); } else { o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); } if (unlikely(!o)) return 0; p = ((struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)o); p->chunks = ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)Py_None); Py_INCREF(Py_None); if (unlikely(__pyx_pw_4borg_9hashindex_17CacheSynchronizer_1__cinit__(o, a, k) < 0)) goto bad; return o; bad: Py_DECREF(o); o = 0; return NULL; } static void __pyx_tp_dealloc_4borg_9hashindex_CacheSynchronizer(PyObject *o) { struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *p = (struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)o; #if CYTHON_USE_TP_FINALIZE if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { if (PyObject_CallFinalizerFromDealloc(o)) return; } #endif PyObject_GC_UnTrack(o); { PyObject *etype, *eval, *etb; PyErr_Fetch(&etype, &eval, &etb); ++Py_REFCNT(o); __pyx_pw_4borg_9hashindex_17CacheSynchronizer_3__dealloc__(o); --Py_REFCNT(o); PyErr_Restore(etype, eval, etb); } Py_CLEAR(p->chunks); (*Py_TYPE(o)->tp_free)(o); } static int __pyx_tp_traverse_4borg_9hashindex_CacheSynchronizer(PyObject *o, visitproc v, void *a) { int e; struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *p = (struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)o; if (p->chunks) { e = (*v)(((PyObject *)p->chunks), a); if (e) return e; } return 0; } static int __pyx_tp_clear_4borg_9hashindex_CacheSynchronizer(PyObject *o) { PyObject* tmp; struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *p = 
(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer *)o; tmp = ((PyObject*)p->chunks); p->chunks = ((struct __pyx_obj_4borg_9hashindex_ChunkIndex *)Py_None); Py_INCREF(Py_None); Py_XDECREF(tmp); return 0; } static PyObject *__pyx_getprop_4borg_9hashindex_17CacheSynchronizer_num_files(PyObject *o, CYTHON_UNUSED void *x) { return __pyx_pw_4borg_9hashindex_17CacheSynchronizer_9num_files_1__get__(o); } static PyMethodDef __pyx_methods_4borg_9hashindex_CacheSynchronizer[] = { {"feed", (PyCFunction)__pyx_pw_4borg_9hashindex_17CacheSynchronizer_5feed, METH_O, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_17CacheSynchronizer_7__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_9hashindex_17CacheSynchronizer_9__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static struct PyGetSetDef __pyx_getsets_4borg_9hashindex_CacheSynchronizer[] = { {(char *)"num_files", __pyx_getprop_4borg_9hashindex_17CacheSynchronizer_num_files, 0, (char *)0, 0}, {0, 0, 0, 0, 0} }; static PyTypeObject __pyx_type_4borg_9hashindex_CacheSynchronizer = { PyVarObject_HEAD_INIT(0, 0) "borg.hashindex.CacheSynchronizer", /*tp_name*/ sizeof(struct __pyx_obj_4borg_9hashindex_CacheSynchronizer), /*tp_basicsize*/ 0, /*tp_itemsize*/ __pyx_tp_dealloc_4borg_9hashindex_CacheSynchronizer, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ 0, /*tp_doc*/ __pyx_tp_traverse_4borg_9hashindex_CacheSynchronizer, /*tp_traverse*/ __pyx_tp_clear_4borg_9hashindex_CacheSynchronizer, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ 0, /*tp_iter*/ 0, /*tp_iternext*/ __pyx_methods_4borg_9hashindex_CacheSynchronizer, /*tp_methods*/ 0, /*tp_members*/ __pyx_getsets_4borg_9hashindex_CacheSynchronizer, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_9hashindex_CacheSynchronizer, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyMethodDef __pyx_methods[] = { {0, 0, 0, 0} }; #if PY_MAJOR_VERSION >= 3 #if CYTHON_PEP489_MULTI_PHASE_INIT static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ static int __pyx_pymod_exec_hashindex(PyObject* module); /*proto*/ static PyModuleDef_Slot __pyx_moduledef_slots[] = { {Py_mod_create, (void*)__pyx_pymod_create}, {Py_mod_exec, (void*)__pyx_pymod_exec_hashindex}, {0, NULL} }; #endif static struct PyModuleDef __pyx_moduledef = { PyModuleDef_HEAD_INIT, "hashindex", 0, /* m_doc */ #if CYTHON_PEP489_MULTI_PHASE_INIT 0, /* m_size */ #else -1, /* m_size */ #endif __pyx_methods /* m_methods */, #if CYTHON_PEP489_MULTI_PHASE_INIT __pyx_moduledef_slots, /* m_slots */ #else NULL, /* m_reload */ #endif NULL, /* m_traverse */ NULL, /* m_clear */ NULL /* m_free */ }; #endif static __Pyx_StringTabEntry __pyx_string_tab[] = { {&__pyx_kp_s_1_1_07, __pyx_k_1_1_07, sizeof(__pyx_k_1_1_07), 0, 0, 1, 0}, {&__pyx_n_s_API_VERSION, 
__pyx_k_API_VERSION, sizeof(__pyx_k_API_VERSION), 0, 0, 1, 1}, {&__pyx_n_s_ChunkIndexEntry, __pyx_k_ChunkIndexEntry, sizeof(__pyx_k_ChunkIndexEntry), 0, 0, 1, 1}, {&__pyx_kp_s_Expected_bytes_of_length_16_for, __pyx_k_Expected_bytes_of_length_16_for, sizeof(__pyx_k_Expected_bytes_of_length_16_for), 0, 0, 1, 0}, {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, {&__pyx_n_s_KeyError, __pyx_k_KeyError, sizeof(__pyx_k_KeyError), 0, 0, 1, 1}, {&__pyx_n_s_MAX_LOAD_FACTOR, __pyx_k_MAX_LOAD_FACTOR, sizeof(__pyx_k_MAX_LOAD_FACTOR), 0, 0, 1, 1}, {&__pyx_n_s_MAX_VALUE, __pyx_k_MAX_VALUE, sizeof(__pyx_k_MAX_VALUE), 0, 0, 1, 1}, {&__pyx_n_s_StopIteration, __pyx_k_StopIteration, sizeof(__pyx_k_StopIteration), 0, 0, 1, 1}, {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, {&__pyx_kp_s_cache_sync_feed_failed, __pyx_k_cache_sync_feed_failed, sizeof(__pyx_k_cache_sync_feed_failed), 0, 0, 1, 0}, {&__pyx_kp_s_cache_sync_init_failed, __pyx_k_cache_sync_init_failed, sizeof(__pyx_k_cache_sync_init_failed), 0, 0, 1, 0}, {&__pyx_n_s_capacity, __pyx_k_capacity, sizeof(__pyx_k_capacity), 0, 0, 1, 1}, {&__pyx_n_s_chunks, __pyx_k_chunks, sizeof(__pyx_k_chunks), 0, 0, 1, 1}, {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, {&__pyx_n_s_collections, __pyx_k_collections, sizeof(__pyx_k_collections), 0, 0, 1, 1}, {&__pyx_n_s_csize, __pyx_k_csize, sizeof(__pyx_k_csize), 0, 0, 1, 1}, {&__pyx_n_s_default, __pyx_k_default, sizeof(__pyx_k_default), 0, 0, 1, 1}, {&__pyx_n_s_enter, __pyx_k_enter, sizeof(__pyx_k_enter), 0, 0, 1, 1}, {&__pyx_n_s_exit, __pyx_k_exit, sizeof(__pyx_k_exit), 0, 0, 1, 1}, {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, {&__pyx_kp_s_hashindex_delete_failed, __pyx_k_hashindex_delete_failed, sizeof(__pyx_k_hashindex_delete_failed), 0, 0, 1, 0}, {&__pyx_kp_s_hashindex_init_failed, __pyx_k_hashindex_init_failed, sizeof(__pyx_k_hashindex_init_failed), 0, 0, 1, 0}, {&__pyx_kp_s_hashindex_read_returned_NULL_wit, __pyx_k_hashindex_read_returned_NULL_wit, sizeof(__pyx_k_hashindex_read_returned_NULL_wit), 0, 0, 1, 0}, {&__pyx_kp_s_hashindex_set_failed, __pyx_k_hashindex_set_failed, sizeof(__pyx_k_hashindex_set_failed), 0, 0, 1, 0}, {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, {&__pyx_kp_s_invalid_reference_count, __pyx_k_invalid_reference_count, sizeof(__pyx_k_invalid_reference_count), 0, 0, 1, 0}, {&__pyx_n_s_key, __pyx_k_key, sizeof(__pyx_k_key), 0, 0, 1, 1}, {&__pyx_n_s_key_size, __pyx_k_key_size, sizeof(__pyx_k_key_size), 0, 0, 1, 1}, {&__pyx_n_s_key_size_2, __pyx_k_key_size_2, sizeof(__pyx_k_key_size_2), 0, 0, 1, 1}, {&__pyx_n_s_locale, __pyx_k_locale, sizeof(__pyx_k_locale), 0, 0, 1, 1}, {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, {&__pyx_n_s_marker, __pyx_k_marker, sizeof(__pyx_k_marker), 0, 0, 1, 1}, {&__pyx_kp_s_maximum_number_of_segments_reach, __pyx_k_maximum_number_of_segments_reach, sizeof(__pyx_k_maximum_number_of_segments_reach), 0, 0, 1, 0}, {&__pyx_kp_s_maximum_number_of_versions_reach, __pyx_k_maximum_number_of_versions_reach, sizeof(__pyx_k_maximum_number_of_versions_reach), 0, 0, 1, 0}, {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, {&__pyx_n_s_namedtuple, __pyx_k_namedtuple, sizeof(__pyx_k_namedtuple), 0, 0, 1, 1}, {&__pyx_kp_s_no_default___reduce___due_to_non, 
__pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, {&__pyx_n_s_object, __pyx_k_object, sizeof(__pyx_k_object), 0, 0, 1, 1}, {&__pyx_n_s_open, __pyx_k_open, sizeof(__pyx_k_open), 0, 0, 1, 1}, {&__pyx_n_s_os, __pyx_k_os, sizeof(__pyx_k_os), 0, 0, 1, 1}, {&__pyx_n_s_path, __pyx_k_path, sizeof(__pyx_k_path), 0, 0, 1, 1}, {&__pyx_n_s_permit_compact, __pyx_k_permit_compact, sizeof(__pyx_k_permit_compact), 0, 0, 1, 1}, {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, {&__pyx_n_s_rb, __pyx_k_rb, sizeof(__pyx_k_rb), 0, 0, 1, 1}, {&__pyx_n_s_read, __pyx_k_read, sizeof(__pyx_k_read), 0, 0, 1, 1}, {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, {&__pyx_kp_s_refcount_size_csize, __pyx_k_refcount_size_csize, sizeof(__pyx_k_refcount_size_csize), 0, 0, 1, 0}, {&__pyx_n_s_refs, __pyx_k_refs, sizeof(__pyx_k_refs), 0, 0, 1, 1}, {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, {&__pyx_kp_s_stats_against_key_contained_in_s, __pyx_k_stats_against_key_contained_in_s, sizeof(__pyx_k_stats_against_key_contained_in_s), 0, 0, 1, 0}, {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, {&__pyx_n_s_value, __pyx_k_value, sizeof(__pyx_k_value), 0, 0, 1, 1}, {&__pyx_n_s_value_size, __pyx_k_value_size, sizeof(__pyx_k_value_size), 0, 0, 1, 1}, {&__pyx_n_s_wb, __pyx_k_wb, sizeof(__pyx_k_wb), 0, 0, 1, 1}, {0, 0, 0, 0, 0, 0, 0} }; static int __Pyx_InitCachedBuiltins(void) { __pyx_builtin_object = __Pyx_GetBuiltinName(__pyx_n_s_object); if (!__pyx_builtin_object) __PYX_ERR(0, 55, __pyx_L1_error) __pyx_builtin_open = __Pyx_GetBuiltinName(__pyx_n_s_open); if (!__pyx_builtin_open) __PYX_ERR(0, 91, __pyx_L1_error) __pyx_builtin_KeyError = __Pyx_GetBuiltinName(__pyx_n_s_KeyError); if (!__pyx_builtin_KeyError) __PYX_ERR(0, 132, __pyx_L1_error) __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(0, 233, __pyx_L1_error) __pyx_builtin_StopIteration = __Pyx_GetBuiltinName(__pyx_n_s_StopIteration); if (!__pyx_builtin_StopIteration) __PYX_ERR(0, 255, __pyx_L1_error) __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(0, 402, __pyx_L1_error) return 0; __pyx_L1_error:; return -1; } static int __Pyx_InitCachedConstants(void) { __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); /* "borg/hashindex.pyx":91 * if path: * if isinstance(path, (str, bytes)): * with open(path, 'rb') as fd: # <<<<<<<<<<<<<< * self.index = hashindex_read(fd, permit_compact) * else: */ __pyx_tuple_ = PyTuple_Pack(3, Py_None, Py_None, Py_None); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 91, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple_); __Pyx_GIVEREF(__pyx_tuple_); /* "borg/hashindex.pyx":99 * self.index = hashindex_init(capacity, self.key_size, self.value_size) * if not self.index: * raise Exception('hashindex_init failed') # <<<<<<<<<<<<<< * * def __dealloc__(self): */ __pyx_tuple__2 = 
PyTuple_Pack(1, __pyx_kp_s_hashindex_init_failed); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(0, 99, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__2); __Pyx_GIVEREF(__pyx_tuple__2); /* "borg/hashindex.pyx":111 * def write(self, path): * if isinstance(path, (str, bytes)): * with open(path, 'wb') as fd: # <<<<<<<<<<<<<< * hashindex_write(self.index, fd) * else: */ __pyx_tuple__3 = PyTuple_Pack(3, Py_None, Py_None, Py_None); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 111, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__3); __Pyx_GIVEREF(__pyx_tuple__3); /* "borg/hashindex.pyx":120 * self.index = hashindex_init(0, self.key_size, self.value_size) * if not self.index: * raise Exception('hashindex_init failed') # <<<<<<<<<<<<<< * * def setdefault(self, key, value): */ __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_hashindex_init_failed); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 120, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__4); __Pyx_GIVEREF(__pyx_tuple__4); /* "borg/hashindex.pyx":134 * raise KeyError(key) * if rc == 0: * raise Exception('hashindex_delete failed') # <<<<<<<<<<<<<< * * def get(self, key, default=None): */ __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_hashindex_delete_failed); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 134, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__5); __Pyx_GIVEREF(__pyx_tuple__5); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__7); __Pyx_GIVEREF(__pyx_tuple__7); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__8); __Pyx_GIVEREF(__pyx_tuple__8); /* "borg/hashindex.pyx":182 * assert data.version <= _MAX_VALUE, "maximum number of versions reached" * if not PyBytes_CheckExact(value[1]) or PyBytes_GET_SIZE(value[1]) != 16: * raise TypeError("Expected bytes of length 16 for second value") # <<<<<<<<<<<<<< * memcpy(data.hash, PyBytes_AS_STRING(value[1]), 16) * data.version = _htole32(data.version) */ __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Expected_bytes_of_length_16_for); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(0, 182, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__9); __Pyx_GIVEREF(__pyx_tuple__9); /* "borg/hashindex.pyx":186 * data.version = _htole32(data.version) * if not hashindex_set(self.index, key, &data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def __contains__(self, key): */ __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_hashindex_set_failed); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 186, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__10); __Pyx_GIVEREF(__pyx_tuple__10); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if 
(unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__11); __Pyx_GIVEREF(__pyx_tuple__11); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__12); __Pyx_GIVEREF(__pyx_tuple__12); /* "borg/hashindex.pyx":214 * data[1] = _htole32(value[1]) * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def __contains__(self, key): */ __pyx_tuple__13 = PyTuple_Pack(1, __pyx_kp_s_hashindex_set_failed); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(0, 214, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__13); __Pyx_GIVEREF(__pyx_tuple__13); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__14); __Pyx_GIVEREF(__pyx_tuple__14); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__15); __Pyx_GIVEREF(__pyx_tuple__15); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__16 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__16); __Pyx_GIVEREF(__pyx_tuple__16); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__17); __Pyx_GIVEREF(__pyx_tuple__17); /* "borg/hashindex.pyx":306 * data[2] = _htole32(value[2]) * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def __contains__(self, key): */ __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_hashindex_set_failed); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(0, 306, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__18); __Pyx_GIVEREF(__pyx_tuple__18); /* "borg/hashindex.pyx":402 * master_values = hashindex_get(master, key) * if not master_values: * raise ValueError('stats_against: key contained in self but not in master_index.') # <<<<<<<<<<<<<< * our_refcount = _le32toh(our_values[0]) * chunk_size = _le32toh(master_values[1]) */ __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_stats_against_key_contained_in_s); if 
(unlikely(!__pyx_tuple__19)) __PYX_ERR(0, 402, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__19); __Pyx_GIVEREF(__pyx_tuple__19); /* "borg/hashindex.pyx":440 * else: * if not hashindex_set(self.index, key, data): * raise Exception('hashindex_set failed') # <<<<<<<<<<<<<< * * def merge(self, ChunkIndex other): */ __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_s_hashindex_set_failed); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(0, 440, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__20); __Pyx_GIVEREF(__pyx_tuple__20); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__21); __Pyx_GIVEREF(__pyx_tuple__21); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__22); __Pyx_GIVEREF(__pyx_tuple__22); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__23); __Pyx_GIVEREF(__pyx_tuple__23); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__24); __Pyx_GIVEREF(__pyx_tuple__24); /* "borg/hashindex.pyx":510 * self.sync = cache_sync_init(self.chunks.index) * if not self.sync: * raise Exception('cache_sync_init failed') # <<<<<<<<<<<<<< * * def __dealloc__(self): */ __pyx_tuple__25 = PyTuple_Pack(1, __pyx_kp_s_cache_sync_init_failed); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(0, 510, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__25); __Pyx_GIVEREF(__pyx_tuple__25); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__26 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(1, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__26); __Pyx_GIVEREF(__pyx_tuple__26); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__27 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__27)) __PYX_ERR(1, 4, 
__pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__27); __Pyx_GIVEREF(__pyx_tuple__27); /* "borg/hashindex.pyx":266 * * * ChunkIndexEntry = namedtuple('ChunkIndexEntry', 'refcount size csize') # <<<<<<<<<<<<<< * * */ __pyx_tuple__28 = PyTuple_Pack(2, __pyx_n_s_ChunkIndexEntry, __pyx_kp_s_refcount_size_csize); if (unlikely(!__pyx_tuple__28)) __PYX_ERR(0, 266, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__28); __Pyx_GIVEREF(__pyx_tuple__28); __Pyx_RefNannyFinishContext(); return 0; __pyx_L1_error:; __Pyx_RefNannyFinishContext(); return -1; } static int __Pyx_InitGlobals(void) { if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_int_8 = PyInt_FromLong(8); if (unlikely(!__pyx_int_8)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_int_12 = PyInt_FromLong(12); if (unlikely(!__pyx_int_12)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_int_16 = PyInt_FromLong(16); if (unlikely(!__pyx_int_16)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_int_20 = PyInt_FromLong(20); if (unlikely(!__pyx_int_20)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_int_32 = PyInt_FromLong(32); if (unlikely(!__pyx_int_32)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_int_4294967295 = PyInt_FromString((char *)"4294967295", 0, 0); if (unlikely(!__pyx_int_4294967295)) __PYX_ERR(0, 1, __pyx_L1_error) return 0; __pyx_L1_error:; return -1; } #if PY_MAJOR_VERSION < 3 PyMODINIT_FUNC inithashindex(void); /*proto*/ PyMODINIT_FUNC inithashindex(void) #else PyMODINIT_FUNC PyInit_hashindex(void); /*proto*/ PyMODINIT_FUNC PyInit_hashindex(void) #if CYTHON_PEP489_MULTI_PHASE_INIT { return PyModuleDef_Init(&__pyx_moduledef); } static int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name) { PyObject *value = PyObject_GetAttrString(spec, from_name); int result = 0; if (likely(value)) { result = PyDict_SetItemString(moddict, to_name, value); Py_DECREF(value); } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { PyErr_Clear(); } else { result = -1; } return result; } static PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { PyObject *module = NULL, *moddict, *modname; if (__pyx_m) return __Pyx_NewRef(__pyx_m); modname = PyObject_GetAttrString(spec, "name"); if (unlikely(!modname)) goto bad; module = PyModule_NewObject(modname); Py_DECREF(modname); if (unlikely(!module)) goto bad; moddict = PyModule_GetDict(module); if (unlikely(!moddict)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__") < 0)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__") < 0)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__") < 0)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__") < 0)) goto bad; return module; bad: Py_XDECREF(module); return NULL; } static int __pyx_pymod_exec_hashindex(PyObject *__pyx_pyinit_module) #endif #endif { PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; int __pyx_t_3; __Pyx_RefNannyDeclarations #if CYTHON_PEP489_MULTI_PHASE_INIT if (__pyx_m && __pyx_m == __pyx_pyinit_module) return 0; #endif #if CYTHON_REFNANNY __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); if (!__Pyx_RefNanny) { PyErr_Clear(); __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); if (!__Pyx_RefNanny) Py_FatalError("failed to import 'refnanny' module"); } #endif __Pyx_RefNannySetupContext("PyMODINIT_FUNC 
PyInit_hashindex(void)", 0); if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) #ifdef __Pyx_CyFunction_USED if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif #ifdef __Pyx_FusedFunction_USED if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif #ifdef __Pyx_Coroutine_USED if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif #ifdef __Pyx_Generator_USED if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif #ifdef __Pyx_AsyncGen_USED if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif #ifdef __Pyx_StopAsyncIteration_USED if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif /*--- Library function declarations ---*/ /*--- Threads initialization code ---*/ #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS #ifdef WITH_THREAD /* Python build with threading support? */ PyEval_InitThreads(); #endif #endif /*--- Module creation code ---*/ #if CYTHON_PEP489_MULTI_PHASE_INIT __pyx_m = __pyx_pyinit_module; Py_INCREF(__pyx_m); #else #if PY_MAJOR_VERSION < 3 __pyx_m = Py_InitModule4("hashindex", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); #else __pyx_m = PyModule_Create(&__pyx_moduledef); #endif if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) #endif __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) Py_INCREF(__pyx_d); __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) #if CYTHON_COMPILING_IN_PYPY Py_INCREF(__pyx_b); #endif if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); /*--- Initialize various global constants etc. 
---*/ if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif if (__pyx_module_is_main_borg__hashindex) { if (PyObject_SetAttrString(__pyx_m, "__name__", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) } #if PY_MAJOR_VERSION >= 3 { PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) if (!PyDict_GetItemString(modules, "borg.hashindex")) { if (unlikely(PyDict_SetItemString(modules, "borg.hashindex", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) } } #endif /*--- Builtin init code ---*/ if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) /*--- Constants init code ---*/ if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) /*--- Global init code ---*/ __pyx_v_4borg_9hashindex__NoDefault = Py_None; Py_INCREF(Py_None); /*--- Variable export code ---*/ /*--- Function export code ---*/ /*--- Type init code ---*/ if (PyType_Ready(&__pyx_type_4borg_9hashindex_IndexBase) < 0) __PYX_ERR(0, 78, __pyx_L1_error) __pyx_type_4borg_9hashindex_IndexBase.tp_print = 0; if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_IndexBase) < 0) __PYX_ERR(0, 78, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_IndexBase = &__pyx_type_4borg_9hashindex_IndexBase; __pyx_type_4borg_9hashindex_FuseVersionsIndex.tp_base = __pyx_ptype_4borg_9hashindex_IndexBase; if (PyType_Ready(&__pyx_type_4borg_9hashindex_FuseVersionsIndex) < 0) __PYX_ERR(0, 163, __pyx_L1_error) __pyx_type_4borg_9hashindex_FuseVersionsIndex.tp_print = 0; if (PyObject_SetAttrString(__pyx_m, "FuseVersionsIndex", (PyObject *)&__pyx_type_4borg_9hashindex_FuseVersionsIndex) < 0) __PYX_ERR(0, 163, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_FuseVersionsIndex) < 0) __PYX_ERR(0, 163, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_FuseVersionsIndex = &__pyx_type_4borg_9hashindex_FuseVersionsIndex; __pyx_type_4borg_9hashindex_NSIndex.tp_base = __pyx_ptype_4borg_9hashindex_IndexBase; if (PyType_Ready(&__pyx_type_4borg_9hashindex_NSIndex) < 0) __PYX_ERR(0, 193, __pyx_L1_error) __pyx_type_4borg_9hashindex_NSIndex.tp_print = 0; if (PyObject_SetAttrString(__pyx_m, "NSIndex", (PyObject *)&__pyx_type_4borg_9hashindex_NSIndex) < 0) __PYX_ERR(0, 193, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_NSIndex) < 0) __PYX_ERR(0, 193, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_NSIndex = &__pyx_type_4borg_9hashindex_NSIndex; if (PyType_Ready(&__pyx_type_4borg_9hashindex_NSKeyIterator) < 0) __PYX_ERR(0, 238, __pyx_L1_error) __pyx_type_4borg_9hashindex_NSKeyIterator.tp_print = 0; if (PyObject_SetAttrString(__pyx_m, "NSKeyIterator", (PyObject *)&__pyx_type_4borg_9hashindex_NSKeyIterator) < 0) __PYX_ERR(0, 238, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_NSKeyIterator) < 0) __PYX_ERR(0, 238, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_NSKeyIterator = &__pyx_type_4borg_9hashindex_NSKeyIterator; __pyx_vtabptr_4borg_9hashindex_ChunkIndex = &__pyx_vtable_4borg_9hashindex_ChunkIndex; __pyx_vtable_4borg_9hashindex_ChunkIndex._add = (PyObject *(*)(struct __pyx_obj_4borg_9hashindex_ChunkIndex *, void *, uint32_t *))__pyx_f_4borg_9hashindex_10ChunkIndex__add; __pyx_type_4borg_9hashindex_ChunkIndex.tp_base = __pyx_ptype_4borg_9hashindex_IndexBase; if 
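/* Type-init pattern used for every extension type above and below: PyType_Ready() the static type object, clear tp_print, publish the type as a module attribute, and set up pickling via __Pyx_setup_reduce(); ChunkIndex additionally gets its C-level vtable (carrying the _add fast path assigned just above) installed into tp_dict via __Pyx_SetVtable(). */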
(PyType_Ready(&__pyx_type_4borg_9hashindex_ChunkIndex) < 0) __PYX_ERR(0, 269, __pyx_L1_error) __pyx_type_4borg_9hashindex_ChunkIndex.tp_print = 0; if (__Pyx_SetVtable(__pyx_type_4borg_9hashindex_ChunkIndex.tp_dict, __pyx_vtabptr_4borg_9hashindex_ChunkIndex) < 0) __PYX_ERR(0, 269, __pyx_L1_error) if (PyObject_SetAttrString(__pyx_m, "ChunkIndex", (PyObject *)&__pyx_type_4borg_9hashindex_ChunkIndex) < 0) __PYX_ERR(0, 269, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_ChunkIndex) < 0) __PYX_ERR(0, 269, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_ChunkIndex = &__pyx_type_4borg_9hashindex_ChunkIndex; if (PyType_Ready(&__pyx_type_4borg_9hashindex_ChunkKeyIterator) < 0) __PYX_ERR(0, 468, __pyx_L1_error) __pyx_type_4borg_9hashindex_ChunkKeyIterator.tp_print = 0; if (PyObject_SetAttrString(__pyx_m, "ChunkKeyIterator", (PyObject *)&__pyx_type_4borg_9hashindex_ChunkKeyIterator) < 0) __PYX_ERR(0, 468, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_ChunkKeyIterator) < 0) __PYX_ERR(0, 468, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_ChunkKeyIterator = &__pyx_type_4borg_9hashindex_ChunkKeyIterator; if (PyType_Ready(&__pyx_type_4borg_9hashindex_CacheSynchronizer) < 0) __PYX_ERR(0, 502, __pyx_L1_error) __pyx_type_4borg_9hashindex_CacheSynchronizer.tp_print = 0; if (PyObject_SetAttrString(__pyx_m, "CacheSynchronizer", (PyObject *)&__pyx_type_4borg_9hashindex_CacheSynchronizer) < 0) __PYX_ERR(0, 502, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_9hashindex_CacheSynchronizer) < 0) __PYX_ERR(0, 502, __pyx_L1_error) __pyx_ptype_4borg_9hashindex_CacheSynchronizer = &__pyx_type_4borg_9hashindex_CacheSynchronizer; /*--- Type import code ---*/ __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, "type", #if CYTHON_COMPILING_IN_PYPY sizeof(PyTypeObject), #else sizeof(PyHeapTypeObject), #endif 0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error) /*--- Variable import code ---*/ /*--- Function import code ---*/ /*--- Execution code ---*/ #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) #endif /* "borg/hashindex.pyx":2 * # -*- coding: utf-8 -*- * from collections import namedtuple # <<<<<<<<<<<<<< * import locale * import os */ __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_INCREF(__pyx_n_s_namedtuple); __Pyx_GIVEREF(__pyx_n_s_namedtuple); PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_namedtuple); __pyx_t_2 = __Pyx_Import(__pyx_n_s_collections, __pyx_t_1, -1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_namedtuple); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); if (PyDict_SetItem(__pyx_d, __pyx_n_s_namedtuple, __pyx_t_1) < 0) __PYX_ERR(0, 2, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/hashindex.pyx":3 * # -*- coding: utf-8 -*- * from collections import namedtuple * import locale # <<<<<<<<<<<<<< * import os * */ __pyx_t_2 = __Pyx_Import(__pyx_n_s_locale, 0, -1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 3, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_locale, __pyx_t_2) < 0) __PYX_ERR(0, 3, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/hashindex.pyx":4 * from 
collections import namedtuple * import locale * import os # <<<<<<<<<<<<<< * * cimport cython */ __pyx_t_2 = __Pyx_Import(__pyx_n_s_os, 0, -1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_os, __pyx_t_2) < 0) __PYX_ERR(0, 4, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/hashindex.pyx":14 * from cpython.bytes cimport PyBytes_FromStringAndSize, PyBytes_CheckExact, PyBytes_GET_SIZE, PyBytes_AS_STRING * * API_VERSION = '1.1_07' # <<<<<<<<<<<<<< * * */ if (PyDict_SetItem(__pyx_d, __pyx_n_s_API_VERSION, __pyx_kp_s_1_1_07) < 0) __PYX_ERR(0, 14, __pyx_L1_error) /* "borg/hashindex.pyx":55 * * * cdef _NoDefault = object() # <<<<<<<<<<<<<< * * """ */ __pyx_t_2 = __Pyx_PyObject_CallNoArg(__pyx_builtin_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 55, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_XGOTREF(__pyx_v_4borg_9hashindex__NoDefault); __Pyx_DECREF_SET(__pyx_v_4borg_9hashindex__NoDefault, __pyx_t_2); __Pyx_GIVEREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/hashindex.pyx":72 * """ * * assert UINT32_MAX == 2**32-1 # <<<<<<<<<<<<<< * * assert _MAX_VALUE % 2 == 1 */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { __pyx_t_2 = __Pyx_PyInt_From_uint32_t(UINT32_MAX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 72, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_1 = PyObject_RichCompare(__pyx_t_2, __pyx_int_4294967295, Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 72, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 72, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; if (unlikely(!__pyx_t_3)) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 72, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":74 * assert UINT32_MAX == 2**32-1 * * assert _MAX_VALUE % 2 == 1 # <<<<<<<<<<<<<< * * */ #ifndef CYTHON_WITHOUT_ASSERTIONS if (unlikely(!Py_OptimizeFlag)) { if (unlikely(!((__Pyx_mod_long(_MAX_VALUE, 2) == 1) != 0))) { PyErr_SetNone(PyExc_AssertionError); __PYX_ERR(0, 74, __pyx_L1_error) } } #endif /* "borg/hashindex.pyx":82 * cdef int key_size * * _key_size = 32 # <<<<<<<<<<<<<< * * MAX_LOAD_FACTOR = HASH_MAX_LOAD */ if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_IndexBase->tp_dict, __pyx_n_s_key_size, __pyx_int_32) < 0) __PYX_ERR(0, 82, __pyx_L1_error) PyType_Modified(__pyx_ptype_4borg_9hashindex_IndexBase); /* "borg/hashindex.pyx":84 * _key_size = 32 * * MAX_LOAD_FACTOR = HASH_MAX_LOAD # <<<<<<<<<<<<<< * MAX_VALUE = _MAX_VALUE * */ __pyx_t_1 = PyFloat_FromDouble(HASH_MAX_LOAD); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 84, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_IndexBase->tp_dict, __pyx_n_s_MAX_LOAD_FACTOR, __pyx_t_1) < 0) __PYX_ERR(0, 84, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; PyType_Modified(__pyx_ptype_4borg_9hashindex_IndexBase); /* "borg/hashindex.pyx":85 * * MAX_LOAD_FACTOR = HASH_MAX_LOAD * MAX_VALUE = _MAX_VALUE # <<<<<<<<<<<<<< * * def __cinit__(self, capacity=0, path=None, permit_compact=False): */ __pyx_t_1 = __Pyx_PyInt_From_uint32_t(_MAX_VALUE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 85, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_IndexBase->tp_dict, __pyx_n_s_MAX_VALUE, __pyx_t_1) < 0) __PYX_ERR(0, 85, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; PyType_Modified(__pyx_ptype_4borg_9hashindex_IndexBase); /* 
"borg/hashindex.pyx":106 * * @classmethod * def read(cls, path, permit_compact=False): # <<<<<<<<<<<<<< * return cls(path=path, permit_compact=permit_compact) * */ __pyx_t_1 = __Pyx_GetNameInClass((PyObject *)__pyx_ptype_4borg_9hashindex_IndexBase, __pyx_n_s_read); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 106, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); /* "borg/hashindex.pyx":105 * hashindex_free(self.index) * * @classmethod # <<<<<<<<<<<<<< * def read(cls, path, permit_compact=False): * return cls(path=path, permit_compact=permit_compact) */ __pyx_t_2 = __Pyx_Method_ClassMethod(__pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 105, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_IndexBase->tp_dict, __pyx_n_s_read, __pyx_t_2) < 0) __PYX_ERR(0, 106, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; PyType_Modified(__pyx_ptype_4borg_9hashindex_IndexBase); /* "borg/hashindex.pyx":142 * return default * * def pop(self, key, default=_NoDefault): # <<<<<<<<<<<<<< * try: * value = self[key] */ __Pyx_INCREF(__pyx_v_4borg_9hashindex__NoDefault); __pyx_k__6 = __pyx_v_4borg_9hashindex__NoDefault; __Pyx_GIVEREF(__pyx_v_4borg_9hashindex__NoDefault); /* "borg/hashindex.pyx":165 * cdef class FuseVersionsIndex(IndexBase): * # 4 byte version + 16 byte file contents hash * value_size = 20 # <<<<<<<<<<<<<< * _key_size = 16 * */ if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_FuseVersionsIndex->tp_dict, __pyx_n_s_value_size, __pyx_int_20) < 0) __PYX_ERR(0, 165, __pyx_L1_error) PyType_Modified(__pyx_ptype_4borg_9hashindex_FuseVersionsIndex); /* "borg/hashindex.pyx":166 * # 4 byte version + 16 byte file contents hash * value_size = 20 * _key_size = 16 # <<<<<<<<<<<<<< * * def __getitem__(self, key): */ if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_FuseVersionsIndex->tp_dict, __pyx_n_s_key_size, __pyx_int_16) < 0) __PYX_ERR(0, 166, __pyx_L1_error) PyType_Modified(__pyx_ptype_4borg_9hashindex_FuseVersionsIndex); /* "borg/hashindex.pyx":195 * cdef class NSIndex(IndexBase): * * value_size = 8 # <<<<<<<<<<<<<< * * def __getitem__(self, key): */ if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_NSIndex->tp_dict, __pyx_n_s_value_size, __pyx_int_8) < 0) __PYX_ERR(0, 195, __pyx_L1_error) PyType_Modified(__pyx_ptype_4borg_9hashindex_NSIndex); /* "borg/hashindex.pyx":266 * * * ChunkIndexEntry = namedtuple('ChunkIndexEntry', 'refcount size csize') # <<<<<<<<<<<<<< * * */ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_namedtuple); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 266, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_tuple__28, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 266, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; if (PyDict_SetItem(__pyx_d, __pyx_n_s_ChunkIndexEntry, __pyx_t_1) < 0) __PYX_ERR(0, 266, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; /* "borg/hashindex.pyx":286 * """ * * value_size = 12 # <<<<<<<<<<<<<< * * def __getitem__(self, key): */ if (PyDict_SetItem((PyObject *)__pyx_ptype_4borg_9hashindex_ChunkIndex->tp_dict, __pyx_n_s_value_size, __pyx_int_12) < 0) __PYX_ERR(0, 286, __pyx_L1_error) PyType_Modified(__pyx_ptype_4borg_9hashindex_ChunkIndex); /* "borg/hashindex.pyx":1 * # -*- coding: utf-8 -*- # <<<<<<<<<<<<<< * from collections import namedtuple * import locale */ __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) 
__Pyx_GOTREF(__pyx_t_1); if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; /*--- Wrapped vars code ---*/ goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_2); if (__pyx_m) { if (__pyx_d) { __Pyx_AddTraceback("init borg.hashindex", 0, __pyx_lineno, __pyx_filename); } Py_DECREF(__pyx_m); __pyx_m = 0; } else if (!PyErr_Occurred()) { PyErr_SetString(PyExc_ImportError, "init borg.hashindex"); } __pyx_L0:; __Pyx_RefNannyFinishContext(); #if CYTHON_PEP489_MULTI_PHASE_INIT return (__pyx_m != NULL) ? 0 : -1; #elif PY_MAJOR_VERSION >= 3 return __pyx_m; #else return; #endif } /* --- Runtime support code --- */ /* Refnanny */ #if CYTHON_REFNANNY static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { PyObject *m = NULL, *p = NULL; void *r = NULL; m = PyImport_ImportModule((char *)modname); if (!m) goto end; p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); if (!p) goto end; r = PyLong_AsVoidPtr(p); end: Py_XDECREF(p); Py_XDECREF(m); return (__Pyx_RefNannyAPIStruct *)r; } #endif /* GetBuiltinName */ static PyObject *__Pyx_GetBuiltinName(PyObject *name) { PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); if (unlikely(!result)) { PyErr_Format(PyExc_NameError, #if PY_MAJOR_VERSION >= 3 "name '%U' is not defined", name); #else "name '%.200s' is not defined", PyString_AS_STRING(name)); #endif } return result; } /* RaiseDoubleKeywords */ static void __Pyx_RaiseDoubleKeywordsError( const char* func_name, PyObject* kw_name) { PyErr_Format(PyExc_TypeError, #if PY_MAJOR_VERSION >= 3 "%s() got multiple values for keyword argument '%U'", func_name, kw_name); #else "%s() got multiple values for keyword argument '%s'", func_name, PyString_AsString(kw_name)); #endif } /* ParseKeywords */ static int __Pyx_ParseOptionalKeywords( PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name) { PyObject *key = 0, *value = 0; Py_ssize_t pos = 0; PyObject*** name; PyObject*** first_kw_arg = argnames + num_pos_args; while (PyDict_Next(kwds, &pos, &key, &value)) { name = first_kw_arg; while (*name && (**name != key)) name++; if (*name) { values[name-argnames] = value; continue; } name = first_kw_arg; #if PY_MAJOR_VERSION < 3 if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) { while (*name) { if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) && _PyString_Eq(**name, key)) { values[name-argnames] = value; break; } name++; } if (*name) continue; else { PyObject*** argname = argnames; while (argname != first_kw_arg) { if ((**argname == key) || ( (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) && _PyString_Eq(**argname, key))) { goto arg_passed_twice; } argname++; } } } else #endif if (likely(PyUnicode_Check(key))) { while (*name) { int cmp = (**name == key) ? 0 : #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : #endif PyUnicode_Compare(**name, key); if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; if (cmp == 0) { values[name-argnames] = value; break; } name++; } if (*name) continue; else { PyObject*** argname = argnames; while (argname != first_kw_arg) { int cmp = (**argname == key) ? 0 : #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : #endif PyUnicode_Compare(**argname, key); if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; if (cmp == 0) goto arg_passed_twice; argname++; } } } else goto invalid_keyword_type; if (kwds2) { if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; } else { goto invalid_keyword; } } return 0; arg_passed_twice: __Pyx_RaiseDoubleKeywordsError(function_name, key); goto bad; invalid_keyword_type: PyErr_Format(PyExc_TypeError, "%.200s() keywords must be strings", function_name); goto bad; invalid_keyword: PyErr_Format(PyExc_TypeError, #if PY_MAJOR_VERSION < 3 "%.200s() got an unexpected keyword argument '%.200s'", function_name, PyString_AsString(key)); #else "%s() got an unexpected keyword argument '%U'", function_name, key); #endif bad: return -1; } /* RaiseArgTupleInvalid */ static void __Pyx_RaiseArgtupleInvalid( const char* func_name, int exact, Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found) { Py_ssize_t num_expected; const char *more_or_less; if (num_found < num_min) { num_expected = num_min; more_or_less = "at least"; } else { num_expected = num_max; more_or_less = "at most"; } if (exact) { more_or_less = "exactly"; } PyErr_Format(PyExc_TypeError, "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", func_name, more_or_less, num_expected, (num_expected == 1) ? "" : "s", num_found); } /* PyObjectCall */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { PyObject *result; ternaryfunc call = func->ob_type->tp_call; if (unlikely(!call)) return PyObject_Call(func, arg, kw); if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) return NULL; result = (*call)(func, arg, kw); Py_LeaveRecursiveCall(); if (unlikely(!result) && unlikely(!PyErr_Occurred())) { PyErr_SetString( PyExc_SystemError, "NULL result without error in PyObject_Call"); } return result; } #endif /* PyCFunctionFastCall */ #if CYTHON_FAST_PYCCALL static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { PyCFunctionObject *func = (PyCFunctionObject*)func_obj; PyCFunction meth = PyCFunction_GET_FUNCTION(func); PyObject *self = PyCFunction_GET_SELF(func); int flags = PyCFunction_GET_FLAGS(func); assert(PyCFunction_Check(func)); assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS))); assert(nargs >= 0); assert(nargs == 0 || args != NULL); /* _PyCFunction_FastCallDict() must not be called with an exception set, because it may clear it (directly or indirectly) and so the caller loses its exception */ assert(!PyErr_Occurred()); if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { return (*((__Pyx_PyCFunctionFastWithKeywords)meth)) (self, args, nargs, NULL); } else { return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs); } } #endif /* PyFunctionFastCall */ #if CYTHON_FAST_PYCALL #include "frameobject.h" static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, PyObject *globals) { PyFrameObject *f; PyThreadState *tstate = __Pyx_PyThreadState_Current; PyObject **fastlocals; Py_ssize_t i; PyObject *result; assert(globals != NULL); /* XXX Perhaps we should create a specialized PyFrame_New() that doesn't take locals, but does take builtins without sanity checking them. 
*/ assert(tstate != NULL); f = PyFrame_New(tstate, co, globals, NULL); if (f == NULL) { return NULL; } fastlocals = f->f_localsplus; for (i = 0; i < na; i++) { Py_INCREF(*args); fastlocals[i] = *args++; } result = PyEval_EvalFrameEx(f,0); ++tstate->recursion_depth; Py_DECREF(f); --tstate->recursion_depth; return result; } #if 1 || PY_VERSION_HEX < 0x030600B1 static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs) { PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); PyObject *globals = PyFunction_GET_GLOBALS(func); PyObject *argdefs = PyFunction_GET_DEFAULTS(func); PyObject *closure; #if PY_MAJOR_VERSION >= 3 PyObject *kwdefs; #endif PyObject *kwtuple, **k; PyObject **d; Py_ssize_t nd; Py_ssize_t nk; PyObject *result; assert(kwargs == NULL || PyDict_Check(kwargs)); nk = kwargs ? PyDict_Size(kwargs) : 0; if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { return NULL; } if ( #if PY_MAJOR_VERSION >= 3 co->co_kwonlyargcount == 0 && #endif likely(kwargs == NULL || nk == 0) && co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { if (argdefs == NULL && co->co_argcount == nargs) { result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); goto done; } else if (nargs == 0 && argdefs != NULL && co->co_argcount == Py_SIZE(argdefs)) { /* function called with no arguments, but all parameters have a default value: use default values as arguments .*/ args = &PyTuple_GET_ITEM(argdefs, 0); result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); goto done; } } if (kwargs != NULL) { Py_ssize_t pos, i; kwtuple = PyTuple_New(2 * nk); if (kwtuple == NULL) { result = NULL; goto done; } k = &PyTuple_GET_ITEM(kwtuple, 0); pos = i = 0; while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { Py_INCREF(k[i]); Py_INCREF(k[i+1]); i += 2; } nk = i / 2; } else { kwtuple = NULL; k = NULL; } closure = PyFunction_GET_CLOSURE(func); #if PY_MAJOR_VERSION >= 3 kwdefs = PyFunction_GET_KW_DEFAULTS(func); #endif if (argdefs != NULL) { d = &PyTuple_GET_ITEM(argdefs, 0); nd = Py_SIZE(argdefs); } else { d = NULL; nd = 0; } #if PY_MAJOR_VERSION >= 3 result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, args, nargs, k, (int)nk, d, (int)nd, kwdefs, closure); #else result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, args, nargs, k, (int)nk, d, (int)nd, closure); #endif Py_XDECREF(kwtuple); done: Py_LeaveRecursiveCall(); return result; } #endif #endif /* PyObjectCallMethO */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { PyObject *self, *result; PyCFunction cfunc; cfunc = PyCFunction_GET_FUNCTION(func); self = PyCFunction_GET_SELF(func); if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) return NULL; result = cfunc(self, arg); Py_LeaveRecursiveCall(); if (unlikely(!result) && unlikely(!PyErr_Occurred())) { PyErr_SetString( PyExc_SystemError, "NULL result without error in PyObject_Call"); } return result; } #endif /* PyObjectCallOneArg */ #if CYTHON_COMPILING_IN_CPYTHON static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { PyObject *result; PyObject *args = PyTuple_New(1); if (unlikely(!args)) return NULL; Py_INCREF(arg); PyTuple_SET_ITEM(args, 0, arg); result = __Pyx_PyObject_Call(func, args, NULL); Py_DECREF(args); return result; } static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { #if CYTHON_FAST_PYCALL if (PyFunction_Check(func)) { return 
__Pyx_PyFunction_FastCall(func, &arg, 1); } #endif if (likely(PyCFunction_Check(func))) { if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { return __Pyx_PyObject_CallMethO(func, arg); #if CYTHON_FAST_PYCCALL } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) { return __Pyx_PyCFunction_FastCall(func, &arg, 1); #endif } } return __Pyx__PyObject_CallOneArg(func, arg); } #else static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { PyObject *result; PyObject *args = PyTuple_Pack(1, arg); if (unlikely(!args)) return NULL; result = __Pyx_PyObject_Call(func, args, NULL); Py_DECREF(args); return result; } #endif /* PyObjectCallNoArg */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { #if CYTHON_FAST_PYCALL if (PyFunction_Check(func)) { return __Pyx_PyFunction_FastCall(func, NULL, 0); } #endif #ifdef __Pyx_CyFunction_USED if (likely(PyCFunction_Check(func) || __Pyx_TypeCheck(func, __pyx_CyFunctionType))) { #else if (likely(PyCFunction_Check(func))) { #endif if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { return __Pyx_PyObject_CallMethO(func, NULL); } } return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); } #endif /* SaveResetException */ #if CYTHON_FAST_THREAD_STATE static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { #if PY_VERSION_HEX >= 0x030700A2 *type = tstate->exc_state.exc_type; *value = tstate->exc_state.exc_value; *tb = tstate->exc_state.exc_traceback; #else *type = tstate->exc_type; *value = tstate->exc_value; *tb = tstate->exc_traceback; #endif Py_XINCREF(*type); Py_XINCREF(*value); Py_XINCREF(*tb); } static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { PyObject *tmp_type, *tmp_value, *tmp_tb; #if PY_VERSION_HEX >= 0x030700A2 tmp_type = tstate->exc_state.exc_type; tmp_value = tstate->exc_state.exc_value; tmp_tb = tstate->exc_state.exc_traceback; tstate->exc_state.exc_type = type; tstate->exc_state.exc_value = value; tstate->exc_state.exc_traceback = tb; #else tmp_type = tstate->exc_type; tmp_value = tstate->exc_value; tmp_tb = tstate->exc_traceback; tstate->exc_type = type; tstate->exc_value = value; tstate->exc_traceback = tb; #endif Py_XDECREF(tmp_type); Py_XDECREF(tmp_value); Py_XDECREF(tmp_tb); } #endif /* GetException */ #if CYTHON_FAST_THREAD_STATE static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { #else static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) { #endif PyObject *local_type, *local_value, *local_tb; #if CYTHON_FAST_THREAD_STATE PyObject *tmp_type, *tmp_value, *tmp_tb; local_type = tstate->curexc_type; local_value = tstate->curexc_value; local_tb = tstate->curexc_traceback; tstate->curexc_type = 0; tstate->curexc_value = 0; tstate->curexc_traceback = 0; #else PyErr_Fetch(&local_type, &local_value, &local_tb); #endif PyErr_NormalizeException(&local_type, &local_value, &local_tb); #if CYTHON_FAST_THREAD_STATE if (unlikely(tstate->curexc_type)) #else if (unlikely(PyErr_Occurred())) #endif goto bad; #if PY_MAJOR_VERSION >= 3 if (local_tb) { if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) goto bad; } #endif Py_XINCREF(local_tb); Py_XINCREF(local_type); Py_XINCREF(local_value); *type = local_type; *value = local_value; *tb = local_tb; #if CYTHON_FAST_THREAD_STATE #if PY_VERSION_HEX >= 0x030700A2 tmp_type = tstate->exc_state.exc_type; 
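/* Stash the previously "handled" exception triple from tstate->exc_state (or the pre-3.7 tstate fields) so it can be safely decref'd once the freshly caught exception has been installed in its place below. */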
tmp_value = tstate->exc_state.exc_value; tmp_tb = tstate->exc_state.exc_traceback; tstate->exc_state.exc_type = local_type; tstate->exc_state.exc_value = local_value; tstate->exc_state.exc_traceback = local_tb; #else tmp_type = tstate->exc_type; tmp_value = tstate->exc_value; tmp_tb = tstate->exc_traceback; tstate->exc_type = local_type; tstate->exc_value = local_value; tstate->exc_traceback = local_tb; #endif Py_XDECREF(tmp_type); Py_XDECREF(tmp_value); Py_XDECREF(tmp_tb); #else PyErr_SetExcInfo(local_type, local_value, local_tb); #endif return 0; bad: *type = 0; *value = 0; *tb = 0; Py_XDECREF(local_type); Py_XDECREF(local_value); Py_XDECREF(local_tb); return -1; } /* PyErrFetchRestore */ #if CYTHON_FAST_THREAD_STATE static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { PyObject *tmp_type, *tmp_value, *tmp_tb; tmp_type = tstate->curexc_type; tmp_value = tstate->curexc_value; tmp_tb = tstate->curexc_traceback; tstate->curexc_type = type; tstate->curexc_value = value; tstate->curexc_traceback = tb; Py_XDECREF(tmp_type); Py_XDECREF(tmp_value); Py_XDECREF(tmp_tb); } static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { *type = tstate->curexc_type; *value = tstate->curexc_value; *tb = tstate->curexc_traceback; tstate->curexc_type = 0; tstate->curexc_value = 0; tstate->curexc_traceback = 0; } #endif /* RaiseException */ #if PY_MAJOR_VERSION < 3 static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, CYTHON_UNUSED PyObject *cause) { __Pyx_PyThreadState_declare Py_XINCREF(type); if (!value || value == Py_None) value = NULL; else Py_INCREF(value); if (!tb || tb == Py_None) tb = NULL; else { Py_INCREF(tb); if (!PyTraceBack_Check(tb)) { PyErr_SetString(PyExc_TypeError, "raise: arg 3 must be a traceback or None"); goto raise_error; } } if (PyType_Check(type)) { #if CYTHON_COMPILING_IN_PYPY if (!value) { Py_INCREF(Py_None); value = Py_None; } #endif PyErr_NormalizeException(&type, &value, &tb); } else { if (value) { PyErr_SetString(PyExc_TypeError, "instance exception may not have a separate value"); goto raise_error; } value = type; type = (PyObject*) Py_TYPE(type); Py_INCREF(type); if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { PyErr_SetString(PyExc_TypeError, "raise: exception class must be a subclass of BaseException"); goto raise_error; } } __Pyx_PyThreadState_assign __Pyx_ErrRestore(type, value, tb); return; raise_error: Py_XDECREF(value); Py_XDECREF(type); Py_XDECREF(tb); return; } #else static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { PyObject* owned_instance = NULL; if (tb == Py_None) { tb = 0; } else if (tb && !PyTraceBack_Check(tb)) { PyErr_SetString(PyExc_TypeError, "raise: arg 3 must be a traceback or None"); goto bad; } if (value == Py_None) value = 0; if (PyExceptionInstance_Check(type)) { if (value) { PyErr_SetString(PyExc_TypeError, "instance exception may not have a separate value"); goto bad; } value = type; type = (PyObject*) Py_TYPE(value); } else if (PyExceptionClass_Check(type)) { PyObject *instance_class = NULL; if (value && PyExceptionInstance_Check(value)) { instance_class = (PyObject*) Py_TYPE(value); if (instance_class != type) { int is_subclass = PyObject_IsSubclass(instance_class, type); if (!is_subclass) { instance_class = NULL; } else if (unlikely(is_subclass == -1)) { goto bad; } else { type = instance_class; } } } if (!instance_class) { 
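/* descriptive note: no usable exception instance was supplied (bare "raise C", or "raise C, v" where v is not an instance of C), so the exception class is called below to create one; the result is then verified to be a BaseException instance. */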
PyObject *args; if (!value) args = PyTuple_New(0); else if (PyTuple_Check(value)) { Py_INCREF(value); args = value; } else args = PyTuple_Pack(1, value); if (!args) goto bad; owned_instance = PyObject_Call(type, args, NULL); Py_DECREF(args); if (!owned_instance) goto bad; value = owned_instance; if (!PyExceptionInstance_Check(value)) { PyErr_Format(PyExc_TypeError, "calling %R should have returned an instance of " "BaseException, not %R", type, Py_TYPE(value)); goto bad; } } } else { PyErr_SetString(PyExc_TypeError, "raise: exception class must be a subclass of BaseException"); goto bad; } if (cause) { PyObject *fixed_cause; if (cause == Py_None) { fixed_cause = NULL; } else if (PyExceptionClass_Check(cause)) { fixed_cause = PyObject_CallObject(cause, NULL); if (fixed_cause == NULL) goto bad; } else if (PyExceptionInstance_Check(cause)) { fixed_cause = cause; Py_INCREF(fixed_cause); } else { PyErr_SetString(PyExc_TypeError, "exception causes must derive from " "BaseException"); goto bad; } PyException_SetCause(value, fixed_cause); } PyErr_SetObject(type, value); if (tb) { #if CYTHON_COMPILING_IN_PYPY PyObject *tmp_type, *tmp_value, *tmp_tb; PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); Py_INCREF(tb); PyErr_Restore(tmp_type, tmp_value, tb); Py_XDECREF(tmp_tb); #else PyThreadState *tstate = __Pyx_PyThreadState_Current; PyObject* tmp_tb = tstate->curexc_traceback; if (tb != tmp_tb) { Py_INCREF(tb); tstate->curexc_traceback = tb; Py_XDECREF(tmp_tb); } #endif } bad: Py_XDECREF(owned_instance); return; } #endif /* PyErrExceptionMatches */ #if CYTHON_FAST_THREAD_STATE static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { Py_ssize_t i, n; n = PyTuple_GET_SIZE(tuple); #if PY_MAJOR_VERSION >= 3 for (i=0; i<n; i++) { if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; } #endif for (i=0; i<n; i++) { if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; } return 0; } static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { PyObject *exc_type = tstate->curexc_type; if (exc_type == err) return 1; if (unlikely(!exc_type)) return 0; if (unlikely(PyTuple_Check(err))) return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); } #endif /* GetItemInt */ static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { PyObject *r; if (!j) return NULL; r = PyObject_GetItem(o, j); Py_DECREF(j); return r; } static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) { #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS Py_ssize_t wrapped_i = i; if (wraparound & unlikely(i < 0)) { wrapped_i += PyList_GET_SIZE(o); } if ((!boundscheck) || likely((0 <= wrapped_i) & (wrapped_i < PyList_GET_SIZE(o)))) { PyObject *r = PyList_GET_ITEM(o, wrapped_i); Py_INCREF(r); return r; } return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); #else return PySequence_GetItem(o, i); #endif } static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) { #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS Py_ssize_t wrapped_i = i; if (wraparound & unlikely(i < 0)) { wrapped_i += PyTuple_GET_SIZE(o); } if ((!boundscheck) || likely((0 <= wrapped_i) & (wrapped_i < PyTuple_GET_SIZE(o)))) { PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); Py_INCREF(r); return r; } return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); #else return PySequence_GetItem(o, i); #endif } static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) { #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS &&
CYTHON_USE_TYPE_SLOTS if (is_list || PyList_CheckExact(o)) { Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); if ((!boundscheck) || (likely((n >= 0) & (n < PyList_GET_SIZE(o))))) { PyObject *r = PyList_GET_ITEM(o, n); Py_INCREF(r); return r; } } else if (PyTuple_CheckExact(o)) { Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o); if ((!boundscheck) || likely((n >= 0) & (n < PyTuple_GET_SIZE(o)))) { PyObject *r = PyTuple_GET_ITEM(o, n); Py_INCREF(r); return r; } } else { PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; if (likely(m && m->sq_item)) { if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { Py_ssize_t l = m->sq_length(o); if (likely(l >= 0)) { i += l; } else { if (!PyErr_ExceptionMatches(PyExc_OverflowError)) return NULL; PyErr_Clear(); } } return m->sq_item(o, i); } } #else if (is_list || PySequence_Check(o)) { return PySequence_GetItem(o, i); } #endif return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); } /* GetModuleGlobalName */ static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) { PyObject *result; #if !CYTHON_AVOID_BORROWED_REFS result = PyDict_GetItem(__pyx_d, name); if (likely(result)) { Py_INCREF(result); } else { #else result = PyObject_GetItem(__pyx_d, name); if (!result) { PyErr_Clear(); #endif result = __Pyx_GetBuiltinName(name); } return result; } /* ArgTypeTest */ static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) { if (unlikely(!type)) { PyErr_SetString(PyExc_SystemError, "Missing type object"); return 0; } else if (exact) { #if PY_MAJOR_VERSION == 2 if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; #endif } else { if (likely(__Pyx_TypeCheck(obj, type))) return 1; } PyErr_Format(PyExc_TypeError, "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", name, type->tp_name, Py_TYPE(obj)->tp_name); return 0; } /* ExtTypeTest */ static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { if (unlikely(!type)) { PyErr_SetString(PyExc_SystemError, "Missing type object"); return 0; } if (likely(__Pyx_TypeCheck(obj, type))) return 1; PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", Py_TYPE(obj)->tp_name, type->tp_name); return 0; } /* decode_c_string */ static CYTHON_INLINE PyObject* __Pyx_decode_c_string( const char* cstring, Py_ssize_t start, Py_ssize_t stop, const char* encoding, const char* errors, PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { Py_ssize_t length; if (unlikely((start < 0) | (stop < 0))) { size_t slen = strlen(cstring); if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { PyErr_SetString(PyExc_OverflowError, "c-string too long to convert to Python"); return NULL; } length = (Py_ssize_t) slen; if (start < 0) { start += length; if (start < 0) start = 0; } if (stop < 0) stop += length; } length = stop - start; if (unlikely(length <= 0)) return PyUnicode_FromUnicode(NULL, 0); cstring += start; if (decode_func) { return decode_func(cstring, length, errors); } else { return PyUnicode_Decode(cstring, length, encoding, errors); } } /* SetupReduce */ static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { int ret; PyObject *name_attr; name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name); if (likely(name_attr)) { ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); } else { ret = -1; } if (unlikely(ret < 0)) { PyErr_Clear(); ret = 0; } Py_XDECREF(name_attr); return ret; } static int 
__Pyx_setup_reduce(PyObject* type_obj) { int ret = 0; PyObject *object_reduce = NULL; PyObject *object_reduce_ex = NULL; PyObject *reduce = NULL; PyObject *reduce_ex = NULL; PyObject *reduce_cython = NULL; PyObject *setstate = NULL; PyObject *setstate_cython = NULL; #if CYTHON_USE_PYTYPE_LOOKUP if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto GOOD; #else if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto GOOD; #endif #if CYTHON_USE_PYTYPE_LOOKUP object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD; #else object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD; #endif reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto BAD; if (reduce_ex == object_reduce_ex) { #if CYTHON_USE_PYTYPE_LOOKUP object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD; #else object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD; #endif reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto BAD; if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { reduce_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_cython); if (unlikely(!reduce_cython)) goto BAD; ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto BAD; ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto BAD; setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); if (!setstate) PyErr_Clear(); if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { setstate_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate_cython); if (unlikely(!setstate_cython)) goto BAD; ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto BAD; ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto BAD; } PyType_Modified((PyTypeObject*)type_obj); } } goto GOOD; BAD: if (!PyErr_Occurred()) PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); ret = -1; GOOD: #if !CYTHON_USE_PYTYPE_LOOKUP Py_XDECREF(object_reduce); Py_XDECREF(object_reduce_ex); #endif Py_XDECREF(reduce); Py_XDECREF(reduce_ex); Py_XDECREF(reduce_cython); Py_XDECREF(setstate); Py_XDECREF(setstate_cython); return ret; } /* SetVTable */ static int __Pyx_SetVtable(PyObject *dict, void *vtable) { #if PY_VERSION_HEX >= 0x02070000 PyObject *ob = PyCapsule_New(vtable, 0, 0); #else PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); #endif if (!ob) goto bad; if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) goto bad; Py_DECREF(ob); return 0; bad: Py_XDECREF(ob); return -1; } /* Import */ static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { PyObject *empty_list = 0; PyObject *module = 0; PyObject *global_dict = 0; PyObject *empty_dict = 0; PyObject *list; #if PY_MAJOR_VERSION < 3 PyObject *py_import; py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); if (!py_import) goto bad; #endif if (from_list) list = from_list; else { empty_list = PyList_New(0); if (!empty_list) goto bad; list = empty_list; } global_dict = PyModule_GetDict(__pyx_m); if 
(!global_dict) goto bad; empty_dict = PyDict_New(); if (!empty_dict) goto bad; { #if PY_MAJOR_VERSION >= 3 if (level == -1) { if (strchr(__Pyx_MODULE_NAME, '.')) { module = PyImport_ImportModuleLevelObject( name, global_dict, empty_dict, list, 1); if (!module) { if (!PyErr_ExceptionMatches(PyExc_ImportError)) goto bad; PyErr_Clear(); } } level = 0; } #endif if (!module) { #if PY_MAJOR_VERSION < 3 PyObject *py_level = PyInt_FromLong(level); if (!py_level) goto bad; module = PyObject_CallFunctionObjArgs(py_import, name, global_dict, empty_dict, list, py_level, NULL); Py_DECREF(py_level); #else module = PyImport_ImportModuleLevelObject( name, global_dict, empty_dict, list, level); #endif } } bad: #if PY_MAJOR_VERSION < 3 Py_XDECREF(py_import); #endif Py_XDECREF(empty_list); Py_XDECREF(empty_dict); return module; } /* ImportFrom */ static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { PyErr_Format(PyExc_ImportError, #if PY_MAJOR_VERSION < 3 "cannot import name %.230s", PyString_AS_STRING(name)); #else "cannot import name %S", name); #endif } return value; } /* None */ static CYTHON_INLINE long __Pyx_mod_long(long a, long b) { long r = a % b; r += ((r != 0) & ((r ^ b) < 0)) * b; return r; } /* GetNameInClass */ static PyObject *__Pyx_GetGlobalNameAfterAttributeLookup(PyObject *name) { __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) return NULL; __Pyx_PyErr_Clear(); return __Pyx_GetModuleGlobalName(name); } static PyObject *__Pyx_GetNameInClass(PyObject *nmspace, PyObject *name) { PyObject *result; result = __Pyx_PyObject_GetAttrStr(nmspace, name); if (!result) { result = __Pyx_GetGlobalNameAfterAttributeLookup(name); } return result; } /* ClassMethod */ static PyObject* __Pyx_Method_ClassMethod(PyObject *method) { #if CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM <= 0x05080000 if (PyObject_TypeCheck(method, &PyWrapperDescr_Type)) { return PyClassMethod_New(method); } #else #if CYTHON_COMPILING_IN_PYSTON || CYTHON_COMPILING_IN_PYPY if (PyMethodDescr_Check(method)) { #else static PyTypeObject *methoddescr_type = NULL; if (methoddescr_type == NULL) { PyObject *meth = PyObject_GetAttrString((PyObject*)&PyList_Type, "append"); if (!meth) return NULL; methoddescr_type = Py_TYPE(meth); Py_DECREF(meth); } if (__Pyx_TypeCheck(method, methoddescr_type)) { #endif PyMethodDescrObject *descr = (PyMethodDescrObject *)method; #if PY_VERSION_HEX < 0x03020000 PyTypeObject *d_type = descr->d_type; #else PyTypeObject *d_type = descr->d_common.d_type; #endif return PyDescr_NewClassMethod(d_type, descr->d_method); } #endif else if (PyMethod_Check(method)) { return PyClassMethod_New(PyMethod_GET_FUNCTION(method)); } else if (PyCFunction_Check(method)) { return PyClassMethod_New(method); } #ifdef __Pyx_CyFunction_USED else if (__Pyx_TypeCheck(method, __pyx_CyFunctionType)) { return PyClassMethod_New(method); } #endif PyErr_SetString(PyExc_TypeError, "Class-level classmethod() can only be called on " "a method_descriptor or instance method."); return NULL; } /* CLineInTraceback */ #ifndef CYTHON_CLINE_IN_TRACEBACK static int __Pyx_CLineForTraceback(CYTHON_UNUSED PyThreadState *tstate, int c_line) { PyObject *use_cline; PyObject *ptype, *pvalue, *ptraceback; #if CYTHON_COMPILING_IN_CPYTHON PyObject **cython_runtime_dict; #endif __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); #if 
CYTHON_COMPILING_IN_CPYTHON cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); if (likely(cython_runtime_dict)) { use_cline = PyDict_GetItem(*cython_runtime_dict, __pyx_n_s_cline_in_traceback); } else #endif { PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); if (use_cline_obj) { use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; Py_DECREF(use_cline_obj); } else { PyErr_Clear(); use_cline = NULL; } } if (!use_cline) { c_line = 0; PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); } else if (PyObject_Not(use_cline) != 0) { c_line = 0; } __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); return c_line; } #endif /* CodeObjectCache */ static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { int start = 0, mid = 0, end = count - 1; if (end >= 0 && code_line > entries[end].code_line) { return count; } while (start < end) { mid = start + (end - start) / 2; if (code_line < entries[mid].code_line) { end = mid; } else if (code_line > entries[mid].code_line) { start = mid + 1; } else { return mid; } } if (code_line <= entries[mid].code_line) { return mid; } else { return mid + 1; } } static PyCodeObject *__pyx_find_code_object(int code_line) { PyCodeObject* code_object; int pos; if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { return NULL; } pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { return NULL; } code_object = __pyx_code_cache.entries[pos].code_object; Py_INCREF(code_object); return code_object; } static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { int pos, i; __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; if (unlikely(!code_line)) { return; } if (unlikely(!entries)) { entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); if (likely(entries)) { __pyx_code_cache.entries = entries; __pyx_code_cache.max_count = 64; __pyx_code_cache.count = 1; entries[0].code_line = code_line; entries[0].code_object = code_object; Py_INCREF(code_object); } return; } pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { PyCodeObject* tmp = entries[pos].code_object; entries[pos].code_object = code_object; Py_DECREF(tmp); return; } if (__pyx_code_cache.count == __pyx_code_cache.max_count) { int new_max = __pyx_code_cache.max_count + 64; entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry)); if (unlikely(!entries)) { return; } __pyx_code_cache.entries = entries; __pyx_code_cache.max_count = new_max; } for (i=__pyx_code_cache.count; i>pos; i--) { entries[i] = entries[i-1]; } entries[pos].code_line = code_line; entries[pos].code_object = code_object; __pyx_code_cache.count++; Py_INCREF(code_object); } /* AddTraceback */ #include "compile.h" #include "frameobject.h" #include "traceback.h" static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( const char *funcname, int c_line, int py_line, const char *filename) { PyCodeObject *py_code = 0; PyObject *py_srcfile = 0; PyObject *py_funcname = 0; #if PY_MAJOR_VERSION < 3 py_srcfile = PyString_FromString(filename); #else py_srcfile = 
PyUnicode_FromString(filename); #endif if (!py_srcfile) goto bad; if (c_line) { #if PY_MAJOR_VERSION < 3 py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); #else py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); #endif } else { #if PY_MAJOR_VERSION < 3 py_funcname = PyString_FromString(funcname); #else py_funcname = PyUnicode_FromString(funcname); #endif } if (!py_funcname) goto bad; py_code = __Pyx_PyCode_New( 0, 0, 0, 0, 0, __pyx_empty_bytes, /*PyObject *code,*/ __pyx_empty_tuple, /*PyObject *consts,*/ __pyx_empty_tuple, /*PyObject *names,*/ __pyx_empty_tuple, /*PyObject *varnames,*/ __pyx_empty_tuple, /*PyObject *freevars,*/ __pyx_empty_tuple, /*PyObject *cellvars,*/ py_srcfile, /*PyObject *filename,*/ py_funcname, /*PyObject *name,*/ py_line, __pyx_empty_bytes /*PyObject *lnotab*/ ); Py_DECREF(py_srcfile); Py_DECREF(py_funcname); return py_code; bad: Py_XDECREF(py_srcfile); Py_XDECREF(py_funcname); return NULL; } static void __Pyx_AddTraceback(const char *funcname, int c_line, int py_line, const char *filename) { PyCodeObject *py_code = 0; PyFrameObject *py_frame = 0; PyThreadState *tstate = __Pyx_PyThreadState_Current; if (c_line) { c_line = __Pyx_CLineForTraceback(tstate, c_line); } py_code = __pyx_find_code_object(c_line ? -c_line : py_line); if (!py_code) { py_code = __Pyx_CreateCodeObjectForTraceback( funcname, c_line, py_line, filename); if (!py_code) goto bad; __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); } py_frame = PyFrame_New( tstate, /*PyThreadState *tstate,*/ py_code, /*PyCodeObject *code,*/ __pyx_d, /*PyObject *globals,*/ 0 /*PyObject *locals*/ ); if (!py_frame) goto bad; __Pyx_PyFrame_SetLineNumber(py_frame, py_line); PyTraceBack_Here(py_frame); bad: Py_XDECREF(py_code); Py_XDECREF(py_frame); } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint32_t(uint32_t value) { const uint32_t neg_one = (uint32_t) -1, const_zero = (uint32_t) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(uint32_t) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(uint32_t) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(uint32_t) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(uint32_t), little, !is_unsigned); } } /* CIntFromPyVerify */ #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) #define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) #define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ {\ func_type value = func_value;\ if (sizeof(target_type) < sizeof(func_type)) {\ if (unlikely(value != (func_type) (target_type) value)) {\ func_type zero = 0;\ if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ return (target_type) -1;\ if (is_unsigned && unlikely(value < zero))\ goto raise_neg_overflow;\ else\ goto raise_overflow;\ }\ }\ return (target_type) 
value;\ } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { const int neg_one = (int) -1, const_zero = (int) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(int) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(int) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(int) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(int), little, !is_unsigned); } } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint64_t(uint64_t value) { const uint64_t neg_one = (uint64_t) -1, const_zero = (uint64_t) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(uint64_t) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(uint64_t) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(uint64_t) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(uint64_t) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(uint64_t) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(uint64_t), little, !is_unsigned); } } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { const long neg_one = (long) -1, const_zero = (long) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(long) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(long) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(long) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(long), little, !is_unsigned); } } /* CIntFromPy */ static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { const int neg_one = (int) -1, const_zero = (int) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(int) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if (is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (int) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (int) 0; case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) case 2: if 
(8 * sizeof(int) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); } } break; case 3: if (8 * sizeof(int) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); } } break; case 4: if (8 * sizeof(int) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (int) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(int) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (int) 0; case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) case -2: if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case 2: if (8 * sizeof(int) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case -3: if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case 3: if (8 * sizeof(int) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << 
PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case -4: if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case 4: if (8 * sizeof(int) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; } #endif if (sizeof(int) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else int val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (int) -1; } } else { int val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (int) -1; val = __Pyx_PyInt_As_int(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to int"); return (int) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to int"); return (int) -1; } /* CIntFromPy */ static CYTHON_INLINE uint32_t __Pyx_PyInt_As_uint32_t(PyObject *x) { const uint32_t neg_one = (uint32_t) -1, const_zero = (uint32_t) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(uint32_t) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(uint32_t, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if (is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (uint32_t) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (uint32_t) 0; case 1: __PYX_VERIFY_RETURN_INT(uint32_t, digit, digits[0]) case 2: if (8 * sizeof(uint32_t) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * 
sizeof(uint32_t) >= 2 * PyLong_SHIFT) { return (uint32_t) (((((uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0])); } } break; case 3: if (8 * sizeof(uint32_t) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) >= 3 * PyLong_SHIFT) { return (uint32_t) (((((((uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0])); } } break; case 4: if (8 * sizeof(uint32_t) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) >= 4 * PyLong_SHIFT) { return (uint32_t) (((((((((uint32_t)digits[3]) << PyLong_SHIFT) | (uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (uint32_t) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(uint32_t) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (uint32_t) 0; case -1: __PYX_VERIFY_RETURN_INT(uint32_t, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: __PYX_VERIFY_RETURN_INT(uint32_t, digit, +digits[0]) case -2: if (8 * sizeof(uint32_t) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 2 * PyLong_SHIFT) { return (uint32_t) (((uint32_t)-1)*(((((uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case 2: if (8 * sizeof(uint32_t) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 2 * PyLong_SHIFT) { return (uint32_t) ((((((uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case -3: if (8 * sizeof(uint32_t) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 3 * PyLong_SHIFT) { return (uint32_t) (((uint32_t)-1)*(((((((uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case 3: if (8 * sizeof(uint32_t) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | 
(unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 3 * PyLong_SHIFT) { return (uint32_t) ((((((((uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case -4: if (8 * sizeof(uint32_t) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 4 * PyLong_SHIFT) { return (uint32_t) (((uint32_t)-1)*(((((((((uint32_t)digits[3]) << PyLong_SHIFT) | (uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case 4: if (8 * sizeof(uint32_t) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 4 * PyLong_SHIFT) { return (uint32_t) ((((((((((uint32_t)digits[3]) << PyLong_SHIFT) | (uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; } #endif if (sizeof(uint32_t) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else uint32_t val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (uint32_t) -1; } } else { uint32_t val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (uint32_t) -1; val = __Pyx_PyInt_As_uint32_t(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to uint32_t"); return (uint32_t) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to uint32_t"); return (uint32_t) -1; } /* CIntFromPy */ static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { const long neg_one = (long) -1, const_zero = (long) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(long) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if (is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (long) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (long) 0; case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) case 2: if (8 * sizeof(long) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 
2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); } } break; case 3: if (8 * sizeof(long) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); } } break; case 4: if (8 * sizeof(long) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (long) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(long) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (long) 0; case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) case -2: if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case 2: if (8 * sizeof(long) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case -3: if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case 3: if (8 * sizeof(long) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | 
(unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case -4: if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case 4: if (8 * sizeof(long) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; } #endif if (sizeof(long) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else long val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (long) -1; } } else { long val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (long) -1; val = __Pyx_PyInt_As_long(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to long"); return (long) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to long"); return (long) -1; } /* FastTypeChecks */ #if CYTHON_COMPILING_IN_CPYTHON static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { while (a) { a = a->tp_base; if (a == b) return 1; } return b == &PyBaseObject_Type; } static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { PyObject *mro; if (a == b) return 1; mro = a->tp_mro; if (likely(mro)) { Py_ssize_t i, n; n = PyTuple_GET_SIZE(mro); for (i = 0; i < n; i++) { if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) return 1; } return 0; } return __Pyx_InBases(a, b); } #if PY_MAJOR_VERSION == 2 static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { PyObject *exception, *value, *tb; int res; __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __Pyx_ErrFetch(&exception, &value, &tb); res = exc_type1 ? 
PyObject_IsSubclass(err, exc_type1) : 0; if (unlikely(res == -1)) { PyErr_WriteUnraisable(err); res = 0; } if (!res) { res = PyObject_IsSubclass(err, exc_type2); if (unlikely(res == -1)) { PyErr_WriteUnraisable(err); res = 0; } } __Pyx_ErrRestore(exception, value, tb); return res; } #else static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; if (!res) { res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); } return res; } #endif static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { if (likely(err == exc_type)) return 1; if (likely(PyExceptionClass_Check(err))) { return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); } return PyErr_GivenExceptionMatches(err, exc_type); } static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { if (likely(err == exc_type1 || err == exc_type2)) return 1; if (likely(PyExceptionClass_Check(err))) { return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); } return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); } #endif /* CheckBinaryVersion */ static int __Pyx_check_binary_version(void) { char ctversion[4], rtversion[4]; PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { char message[200]; PyOS_snprintf(message, sizeof(message), "compiletime version %s of module '%.100s' " "does not match runtime version %s", ctversion, __Pyx_MODULE_NAME, rtversion); return PyErr_WarnEx(NULL, message, 1); } return 0; } /* ModuleImport */ #ifndef __PYX_HAVE_RT_ImportModule #define __PYX_HAVE_RT_ImportModule static PyObject *__Pyx_ImportModule(const char *name) { PyObject *py_name = 0; PyObject *py_module = 0; py_name = __Pyx_PyIdentifier_FromString(name); if (!py_name) goto bad; py_module = PyImport_Import(py_name); Py_DECREF(py_name); return py_module; bad: Py_XDECREF(py_name); return 0; } #endif /* TypeImport */ #ifndef __PYX_HAVE_RT_ImportType #define __PYX_HAVE_RT_ImportType static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict) { PyObject *py_module = 0; PyObject *result = 0; PyObject *py_name = 0; char warning[200]; Py_ssize_t basicsize; #ifdef Py_LIMITED_API PyObject *py_basicsize; #endif py_module = __Pyx_ImportModule(module_name); if (!py_module) goto bad; py_name = __Pyx_PyIdentifier_FromString(class_name); if (!py_name) goto bad; result = PyObject_GetAttr(py_module, py_name); Py_DECREF(py_name); py_name = 0; Py_DECREF(py_module); py_module = 0; if (!result) goto bad; if (!PyType_Check(result)) { PyErr_Format(PyExc_TypeError, "%.200s.%.200s is not a type object", module_name, class_name); goto bad; } #ifndef Py_LIMITED_API basicsize = ((PyTypeObject *)result)->tp_basicsize; #else py_basicsize = PyObject_GetAttrString(result, "__basicsize__"); if (!py_basicsize) goto bad; basicsize = PyLong_AsSsize_t(py_basicsize); Py_DECREF(py_basicsize); py_basicsize = 0; if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred()) goto bad; #endif if (!strict && (size_t)basicsize > size) { PyOS_snprintf(warning, sizeof(warning), "%s.%s size changed, may indicate binary incompatibility. 
Expected %zd, got %zd", module_name, class_name, basicsize, size); if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad; } else if ((size_t)basicsize != size) { PyErr_Format(PyExc_ValueError, "%.200s.%.200s has the wrong size, try recompiling. Expected %zd, got %zd", module_name, class_name, basicsize, size); goto bad; } return (PyTypeObject *)result; bad: Py_XDECREF(py_module); Py_XDECREF(result); return NULL; } #endif /* InitStrings */ static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { while (t->p) { #if PY_MAJOR_VERSION < 3 if (t->is_unicode) { *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); } else if (t->intern) { *t->p = PyString_InternFromString(t->s); } else { *t->p = PyString_FromStringAndSize(t->s, t->n - 1); } #else if (t->is_unicode | t->is_str) { if (t->intern) { *t->p = PyUnicode_InternFromString(t->s); } else if (t->encoding) { *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); } else { *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); } } else { *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); } #endif if (!*t->p) return -1; if (PyObject_Hash(*t->p) == -1) PyErr_Clear(); ++t; } return 0; } static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); } static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { Py_ssize_t ignore; return __Pyx_PyObject_AsStringAndSize(o, &ignore); } #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT #if !CYTHON_PEP393_ENABLED static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { char* defenc_c; PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); if (!defenc) return NULL; defenc_c = PyBytes_AS_STRING(defenc); #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII { char* end = defenc_c + PyBytes_GET_SIZE(defenc); char* c; for (c = defenc_c; c < end; c++) { if ((unsigned char) (*c) >= 128) { PyUnicode_AsASCIIString(o); return NULL; } } } #endif *length = PyBytes_GET_SIZE(defenc); return defenc_c; } #else static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII if (likely(PyUnicode_IS_ASCII(o))) { *length = PyUnicode_GET_LENGTH(o); return PyUnicode_AsUTF8(o); } else { PyUnicode_AsASCIIString(o); return NULL; } #else return PyUnicode_AsUTF8AndSize(o, length); #endif } #endif #endif static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT if ( #if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII __Pyx_sys_getdefaultencoding_not_ascii && #endif PyUnicode_Check(o)) { return __Pyx_PyUnicode_AsStringAndSize(o, length); } else #endif #if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) if (PyByteArray_Check(o)) { *length = PyByteArray_GET_SIZE(o); return PyByteArray_AS_STRING(o); } else #endif { char* result; int r = PyBytes_AsStringAndSize(o, &result, length); if (unlikely(r < 0)) { return NULL; } else { return result; } } } static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { int is_true = x == Py_True; if (is_true | (x == Py_False) | (x == Py_None)) return is_true; else return PyObject_IsTrue(x); } static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { #if PY_MAJOR_VERSION >= 3 
if (PyLong_Check(result)) { if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, "__int__ returned non-int (type %.200s). " "The ability to return an instance of a strict subclass of int " "is deprecated, and may be removed in a future version of Python.", Py_TYPE(result)->tp_name)) { Py_DECREF(result); return NULL; } return result; } #endif PyErr_Format(PyExc_TypeError, "__%.4s__ returned non-%.4s (type %.200s)", type_name, type_name, Py_TYPE(result)->tp_name); Py_DECREF(result); return NULL; } static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { #if CYTHON_USE_TYPE_SLOTS PyNumberMethods *m; #endif const char *name = NULL; PyObject *res = NULL; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x) || PyLong_Check(x))) #else if (likely(PyLong_Check(x))) #endif return __Pyx_NewRef(x); #if CYTHON_USE_TYPE_SLOTS m = Py_TYPE(x)->tp_as_number; #if PY_MAJOR_VERSION < 3 if (m && m->nb_int) { name = "int"; res = m->nb_int(x); } else if (m && m->nb_long) { name = "long"; res = m->nb_long(x); } #else if (likely(m && m->nb_int)) { name = "int"; res = m->nb_int(x); } #endif #else if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { res = PyNumber_Int(x); } #endif if (likely(res)) { #if PY_MAJOR_VERSION < 3 if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { #else if (unlikely(!PyLong_CheckExact(res))) { #endif return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); } } else if (!PyErr_Occurred()) { PyErr_SetString(PyExc_TypeError, "an integer is required"); } return res; } static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { Py_ssize_t ival; PyObject *x; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_CheckExact(b))) { if (sizeof(Py_ssize_t) >= sizeof(long)) return PyInt_AS_LONG(b); else return PyInt_AsSsize_t(b); } #endif if (likely(PyLong_CheckExact(b))) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)b)->ob_digit; const Py_ssize_t size = Py_SIZE(b); if (likely(__Pyx_sst_abs(size) <= 1)) { ival = likely(size) ?
digits[0] : 0; if (size == -1) ival = -ival; return ival; } else { switch (size) { case 2: if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case -2: if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case 3: if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case -3: if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case 4: if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case -4: if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; } } #endif return PyLong_AsSsize_t(b); } x = PyNumber_Index(b); if (!x) return -1; ival = PyInt_AsSsize_t(x); Py_DECREF(x); return ival; } static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { return PyInt_FromSize_t(ival); } #endif /* Py_PYTHON_H */ borgbackup-1.1.5/src/borg/algorithms/0000755000175000017500000000000013260002401016620 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/checksums.c0000644000175000017500000064573713260002400020776 0ustar twtw00000000000000/* Generated by Cython 0.27.3 */ #define PY_SSIZE_T_CLEAN #include "Python.h" #ifndef Py_PYTHON_H #error Python headers needed to compile C extensions, please install development version of Python. #elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) #error Cython requires Python 2.6+ or Python 3.3+. 
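/* This extension module wraps the CRC32 slice-by-8 / CLMUL implementations
 * and xxHash64 for borg. A minimal usage sketch, assuming a built borg
 * source tree (illustrative only, values hypothetical):
 *
 *     from borg.algorithms.checksums import (
 *         crc32_slice_by_8, crc32_clmul, xxh64, StreamingXXH64)
 *     crc = crc32_slice_by_8(b"data", 0)   # 32-bit CRC as a Python int
 *     h = xxh64(b"data", seed=0)           # 8-byte xxHash64 digest
 *     s = StreamingXXH64()
 *     s.update(b"da"); s.update(b"ta")
 *     assert s.digest() == h               # streaming matches one-shot
 */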
#else #define CYTHON_ABI "0_27_3" #define CYTHON_FUTURE_DIVISION 0 #include <stddef.h> #ifndef offsetof #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) #endif #if !defined(WIN32) && !defined(MS_WINDOWS) #ifndef __stdcall #define __stdcall #endif #ifndef __cdecl #define __cdecl #endif #ifndef __fastcall #define __fastcall #endif #endif #ifndef DL_IMPORT #define DL_IMPORT(t) t #endif #ifndef DL_EXPORT #define DL_EXPORT(t) t #endif #define __PYX_COMMA , #ifndef HAVE_LONG_LONG #if PY_VERSION_HEX >= 0x02070000 #define HAVE_LONG_LONG #endif #endif #ifndef PY_LONG_LONG #define PY_LONG_LONG LONG_LONG #endif #ifndef Py_HUGE_VAL #define Py_HUGE_VAL HUGE_VAL #endif #ifdef PYPY_VERSION #define CYTHON_COMPILING_IN_PYPY 1 #define CYTHON_COMPILING_IN_PYSTON 0 #define CYTHON_COMPILING_IN_CPYTHON 0 #undef CYTHON_USE_TYPE_SLOTS #define CYTHON_USE_TYPE_SLOTS 0 #undef CYTHON_USE_PYTYPE_LOOKUP #define CYTHON_USE_PYTYPE_LOOKUP 0 #if PY_VERSION_HEX < 0x03050000 #undef CYTHON_USE_ASYNC_SLOTS #define CYTHON_USE_ASYNC_SLOTS 0 #elif !defined(CYTHON_USE_ASYNC_SLOTS) #define CYTHON_USE_ASYNC_SLOTS 1 #endif #undef CYTHON_USE_PYLIST_INTERNALS #define CYTHON_USE_PYLIST_INTERNALS 0 #undef CYTHON_USE_UNICODE_INTERNALS #define CYTHON_USE_UNICODE_INTERNALS 0 #undef CYTHON_USE_UNICODE_WRITER #define CYTHON_USE_UNICODE_WRITER 0 #undef CYTHON_USE_PYLONG_INTERNALS #define CYTHON_USE_PYLONG_INTERNALS 0 #undef CYTHON_AVOID_BORROWED_REFS #define CYTHON_AVOID_BORROWED_REFS 1 #undef CYTHON_ASSUME_SAFE_MACROS #define CYTHON_ASSUME_SAFE_MACROS 0 #undef CYTHON_UNPACK_METHODS #define CYTHON_UNPACK_METHODS 0 #undef CYTHON_FAST_THREAD_STATE #define CYTHON_FAST_THREAD_STATE 0 #undef CYTHON_FAST_PYCALL #define CYTHON_FAST_PYCALL 0 #undef CYTHON_PEP489_MULTI_PHASE_INIT #define CYTHON_PEP489_MULTI_PHASE_INIT 0 #undef CYTHON_USE_TP_FINALIZE #define CYTHON_USE_TP_FINALIZE 0 #elif defined(PYSTON_VERSION) #define CYTHON_COMPILING_IN_PYPY 0 #define CYTHON_COMPILING_IN_PYSTON 1 #define CYTHON_COMPILING_IN_CPYTHON 0 #ifndef CYTHON_USE_TYPE_SLOTS #define CYTHON_USE_TYPE_SLOTS 1 #endif #undef CYTHON_USE_PYTYPE_LOOKUP #define CYTHON_USE_PYTYPE_LOOKUP 0 #undef CYTHON_USE_ASYNC_SLOTS #define CYTHON_USE_ASYNC_SLOTS 0 #undef CYTHON_USE_PYLIST_INTERNALS #define CYTHON_USE_PYLIST_INTERNALS 0 #ifndef CYTHON_USE_UNICODE_INTERNALS #define CYTHON_USE_UNICODE_INTERNALS 1 #endif #undef CYTHON_USE_UNICODE_WRITER #define CYTHON_USE_UNICODE_WRITER 0 #undef CYTHON_USE_PYLONG_INTERNALS #define CYTHON_USE_PYLONG_INTERNALS 0 #ifndef CYTHON_AVOID_BORROWED_REFS #define CYTHON_AVOID_BORROWED_REFS 0 #endif #ifndef CYTHON_ASSUME_SAFE_MACROS #define CYTHON_ASSUME_SAFE_MACROS 1 #endif #ifndef CYTHON_UNPACK_METHODS #define CYTHON_UNPACK_METHODS 1 #endif #undef CYTHON_FAST_THREAD_STATE #define CYTHON_FAST_THREAD_STATE 0 #undef CYTHON_FAST_PYCALL #define CYTHON_FAST_PYCALL 0 #undef CYTHON_PEP489_MULTI_PHASE_INIT #define CYTHON_PEP489_MULTI_PHASE_INIT 0 #undef CYTHON_USE_TP_FINALIZE #define CYTHON_USE_TP_FINALIZE 0 #else #define CYTHON_COMPILING_IN_PYPY 0 #define CYTHON_COMPILING_IN_PYSTON 0 #define CYTHON_COMPILING_IN_CPYTHON 1 #ifndef CYTHON_USE_TYPE_SLOTS #define CYTHON_USE_TYPE_SLOTS 1 #endif #if PY_VERSION_HEX < 0x02070000 #undef CYTHON_USE_PYTYPE_LOOKUP #define CYTHON_USE_PYTYPE_LOOKUP 0 #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) #define CYTHON_USE_PYTYPE_LOOKUP 1 #endif #if PY_MAJOR_VERSION < 3 #undef CYTHON_USE_ASYNC_SLOTS #define CYTHON_USE_ASYNC_SLOTS 0 #elif !defined(CYTHON_USE_ASYNC_SLOTS) #define CYTHON_USE_ASYNC_SLOTS 1 #endif #if PY_VERSION_HEX < 0x02070000
#undef CYTHON_USE_PYLONG_INTERNALS #define CYTHON_USE_PYLONG_INTERNALS 0 #elif !defined(CYTHON_USE_PYLONG_INTERNALS) #define CYTHON_USE_PYLONG_INTERNALS 1 #endif #ifndef CYTHON_USE_PYLIST_INTERNALS #define CYTHON_USE_PYLIST_INTERNALS 1 #endif #ifndef CYTHON_USE_UNICODE_INTERNALS #define CYTHON_USE_UNICODE_INTERNALS 1 #endif #if PY_VERSION_HEX < 0x030300F0 #undef CYTHON_USE_UNICODE_WRITER #define CYTHON_USE_UNICODE_WRITER 0 #elif !defined(CYTHON_USE_UNICODE_WRITER) #define CYTHON_USE_UNICODE_WRITER 1 #endif #ifndef CYTHON_AVOID_BORROWED_REFS #define CYTHON_AVOID_BORROWED_REFS 0 #endif #ifndef CYTHON_ASSUME_SAFE_MACROS #define CYTHON_ASSUME_SAFE_MACROS 1 #endif #ifndef CYTHON_UNPACK_METHODS #define CYTHON_UNPACK_METHODS 1 #endif #ifndef CYTHON_FAST_THREAD_STATE #define CYTHON_FAST_THREAD_STATE 1 #endif #ifndef CYTHON_FAST_PYCALL #define CYTHON_FAST_PYCALL 1 #endif #ifndef CYTHON_PEP489_MULTI_PHASE_INIT #define CYTHON_PEP489_MULTI_PHASE_INIT (0 && PY_VERSION_HEX >= 0x03050000) #endif #ifndef CYTHON_USE_TP_FINALIZE #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) #endif #endif #if !defined(CYTHON_FAST_PYCCALL) #define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) #endif #if CYTHON_USE_PYLONG_INTERNALS #include "longintrepr.h" #undef SHIFT #undef BASE #undef MASK #endif #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) #define Py_OptimizeFlag 0 #endif #define __PYX_BUILD_PY_SSIZE_T "n" #define CYTHON_FORMAT_SSIZE_T "z" #if PY_MAJOR_VERSION < 3 #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) #define __Pyx_DefaultClassType PyClass_Type #else #define __Pyx_BUILTIN_MODULE_NAME "builtins" #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) #define __Pyx_DefaultClassType PyType_Type #endif #ifndef Py_TPFLAGS_CHECKTYPES #define Py_TPFLAGS_CHECKTYPES 0 #endif #ifndef Py_TPFLAGS_HAVE_INDEX #define Py_TPFLAGS_HAVE_INDEX 0 #endif #ifndef Py_TPFLAGS_HAVE_NEWBUFFER #define Py_TPFLAGS_HAVE_NEWBUFFER 0 #endif #ifndef Py_TPFLAGS_HAVE_FINALIZE #define Py_TPFLAGS_HAVE_FINALIZE 0 #endif #if PY_VERSION_HEX < 0x030700A0 || !defined(METH_FASTCALL) #ifndef METH_FASTCALL #define METH_FASTCALL 0x80 #endif typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args, Py_ssize_t nargs); typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject **args, Py_ssize_t nargs, PyObject *kwnames); #else #define __Pyx_PyCFunctionFast _PyCFunctionFast #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords #endif #if CYTHON_FAST_PYCCALL #define __Pyx_PyFastCFunction_Check(func)\ ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS))))) #else #define __Pyx_PyFastCFunction_Check(func) 0 #endif #if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 #define __Pyx_PyThreadState_Current PyThreadState_GET() #elif PY_VERSION_HEX >= 0x03060000 #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() #elif PY_VERSION_HEX >= 0x03000000 #define __Pyx_PyThreadState_Current PyThreadState_GET() #else #define __Pyx_PyThreadState_Current _PyThreadState_Current #endif #if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) #define 
__Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) #else #define __Pyx_PyDict_NewPresized(n) PyDict_New() #endif #if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) #else #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) #endif #if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) #define CYTHON_PEP393_ENABLED 1 #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ 0 : _PyUnicode_Ready((PyObject *)(op))) #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) #else #define CYTHON_PEP393_ENABLED 0 #define PyUnicode_1BYTE_KIND 1 #define PyUnicode_2BYTE_KIND 2 #define PyUnicode_4BYTE_KIND 4 #define __Pyx_PyUnicode_READY(op) (0) #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) #endif #if CYTHON_COMPILING_IN_PYPY #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) #else #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) #endif #if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) #define PyObject_Malloc(s) PyMem_Malloc(s) #define PyObject_Free(p) PyMem_Free(p) #define PyObject_Realloc(p) PyMem_Realloc(p) #endif #if CYTHON_COMPILING_IN_PYSTON #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) #else #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) #endif #define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) #define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) #else #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) #endif #if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) #define PyObject_ASCII(o) PyObject_Repr(o) #endif #if PY_MAJOR_VERSION >= 3 #define PyBaseString_Type PyUnicode_Type #define PyStringObject PyUnicodeObject #define PyString_Type PyUnicode_Type #define PyString_Check PyUnicode_Check #define PyString_CheckExact PyUnicode_CheckExact #endif #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) #else #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) #endif #ifndef PySet_CheckExact #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) #endif #define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) #if PY_MAJOR_VERSION >= 3 #define PyIntObject PyLongObject #define PyInt_Type PyLong_Type #define PyInt_Check(op) PyLong_Check(op) #define PyInt_CheckExact(op) PyLong_CheckExact(op) #define PyInt_FromString PyLong_FromString #define PyInt_FromUnicode PyLong_FromUnicode #define PyInt_FromLong PyLong_FromLong #define PyInt_FromSize_t PyLong_FromSize_t #define PyInt_FromSsize_t PyLong_FromSsize_t #define PyInt_AsLong PyLong_AsLong #define PyInt_AS_LONG PyLong_AS_LONG #define PyInt_AsSsize_t PyLong_AsSsize_t #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask #define PyNumber_Int PyNumber_Long #endif #if PY_MAJOR_VERSION >= 3 #define PyBoolObject PyLongObject #endif #if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY #ifndef PyUnicode_InternFromString #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) #endif #endif #if PY_VERSION_HEX < 0x030200A4 typedef long Py_hash_t; #define __Pyx_PyInt_FromHash_t PyInt_FromLong #define __Pyx_PyInt_AsHash_t PyInt_AsLong #else #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t #endif #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
PyMethod_New(func, self) : PyInstanceMethod_New(func)) #else #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) #endif #ifndef __has_attribute #define __has_attribute(x) 0 #endif #ifndef __has_cpp_attribute #define __has_cpp_attribute(x) 0 #endif #if CYTHON_USE_ASYNC_SLOTS #if PY_VERSION_HEX >= 0x030500B1 #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) #else #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) #endif #else #define __Pyx_PyType_AsAsync(obj) NULL #endif #ifndef __Pyx_PyAsyncMethodsStruct typedef struct { unaryfunc am_await; unaryfunc am_aiter; unaryfunc am_anext; } __Pyx_PyAsyncMethodsStruct; #endif #ifndef CYTHON_RESTRICT #if defined(__GNUC__) #define CYTHON_RESTRICT __restrict__ #elif defined(_MSC_VER) && _MSC_VER >= 1400 #define CYTHON_RESTRICT __restrict #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define CYTHON_RESTRICT restrict #else #define CYTHON_RESTRICT #endif #endif #ifndef CYTHON_UNUSED # if defined(__GNUC__) # if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) # define CYTHON_UNUSED __attribute__ ((__unused__)) # else # define CYTHON_UNUSED # endif # elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) # define CYTHON_UNUSED __attribute__ ((__unused__)) # else # define CYTHON_UNUSED # endif #endif #ifndef CYTHON_MAYBE_UNUSED_VAR # if defined(__cplusplus) template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } # else # define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) # endif #endif #ifndef CYTHON_NCP_UNUSED # if CYTHON_COMPILING_IN_CPYTHON # define CYTHON_NCP_UNUSED # else # define CYTHON_NCP_UNUSED CYTHON_UNUSED # endif #endif #define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) #ifdef _MSC_VER #ifndef _MSC_STDINT_H_ #if _MSC_VER < 1300 typedef unsigned char uint8_t; typedef unsigned int uint32_t; #else typedef unsigned __int8 uint8_t; typedef unsigned __int32 uint32_t; #endif #endif #else #include <stdint.h> #endif #ifndef CYTHON_FALLTHROUGH #if defined(__cplusplus) && __cplusplus >= 201103L #if __has_cpp_attribute(fallthrough) #define CYTHON_FALLTHROUGH [[fallthrough]] #elif __has_cpp_attribute(clang::fallthrough) #define CYTHON_FALLTHROUGH [[clang::fallthrough]] #elif __has_cpp_attribute(gnu::fallthrough) #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] #endif #endif #ifndef CYTHON_FALLTHROUGH #if __has_attribute(fallthrough) #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) #else #define CYTHON_FALLTHROUGH #endif #endif #if defined(__clang__ ) && defined(__apple_build_version__) #if __apple_build_version__ < 7000000 #undef CYTHON_FALLTHROUGH #define CYTHON_FALLTHROUGH #endif #endif #endif #ifndef CYTHON_INLINE #if defined(__clang__) #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) #elif defined(__GNUC__) #define CYTHON_INLINE __inline__ #elif defined(_MSC_VER) #define CYTHON_INLINE __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define CYTHON_INLINE inline #else #define CYTHON_INLINE #endif #endif #if defined(WIN32) || defined(MS_WINDOWS) #define _USE_MATH_DEFINES #endif #include <math.h> #ifdef NAN #define __PYX_NAN() ((float) NAN) #else static CYTHON_INLINE float __PYX_NAN() { float value; memset(&value, 0xFF, sizeof(value)); return value; } #endif #if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) #define __Pyx_truncl trunc #else #define __Pyx_truncl truncl #endif #define __PYX_ERR(f_index, lineno,
Ln_error) \ { \ __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \ } #ifndef __PYX_EXTERN_C #ifdef __cplusplus #define __PYX_EXTERN_C extern "C" #else #define __PYX_EXTERN_C extern #endif #endif #define __PYX_HAVE__borg__algorithms__checksums #define __PYX_HAVE_API__borg__algorithms__checksums #include <string.h> #include <stdio.h> #include <stdint.h> #include "crc32_dispatch.c" #include "xxh64/xxhash.c" #ifdef _OPENMP #include <omp.h> #endif /* _OPENMP */ #if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) #define CYTHON_WITHOUT_ASSERTIONS #endif typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; #define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 #define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0 #define __PYX_DEFAULT_STRING_ENCODING "" #define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString #define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize #define __Pyx_uchar_cast(c) ((unsigned char)c) #define __Pyx_long_cast(x) ((long)x) #define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ (sizeof(type) < sizeof(Py_ssize_t)) ||\ (sizeof(type) > sizeof(Py_ssize_t) &&\ likely(v < (type)PY_SSIZE_T_MAX ||\ v == (type)PY_SSIZE_T_MAX) &&\ (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ v == (type)PY_SSIZE_T_MIN))) ||\ (sizeof(type) == sizeof(Py_ssize_t) &&\ (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ v == (type)PY_SSIZE_T_MAX))) ) #if defined (__cplusplus) && __cplusplus >= 201103L #include <cstdlib> #define __Pyx_sst_abs(value) std::abs(value) #elif SIZEOF_INT >= SIZEOF_SIZE_T #define __Pyx_sst_abs(value) abs(value) #elif SIZEOF_LONG >= SIZEOF_SIZE_T #define __Pyx_sst_abs(value) labs(value) #elif defined (_MSC_VER) #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define __Pyx_sst_abs(value) llabs(value) #elif defined (__GNUC__) #define __Pyx_sst_abs(value) __builtin_llabs(value) #else #define __Pyx_sst_abs(value) ((value<0) ?
-value : value) #endif static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); #define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) #define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) #define __Pyx_PyBytes_FromString PyBytes_FromString #define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); #if PY_MAJOR_VERSION < 3 #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize #else #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize #endif #define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) #define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) #define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) #define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) #define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) #define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) #define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) #define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { const Py_UNICODE *u_end = u; while (*u_end++) ; return (size_t)(u_end - u - 1); } #define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) #define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode #define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode #define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) #define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) #define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False)) static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); #define __Pyx_PySequence_Tuple(obj)\ (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); #if CYTHON_ASSUME_SAFE_MACROS #define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) #else #define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) #endif #define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) #if PY_MAJOR_VERSION >= 3 #define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Long(x)) #else #define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) #endif #define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) #if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII static int __Pyx_sys_getdefaultencoding_not_ascii; static int __Pyx_init_sys_getdefaultencoding_params(void) { PyObject* sys; PyObject* default_encoding = NULL; PyObject* ascii_chars_u = NULL; PyObject* ascii_chars_b = NULL; const char* default_encoding_c; sys = PyImport_ImportModule("sys"); if (!sys) goto bad; default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); Py_DECREF(sys); if (!default_encoding) goto bad; default_encoding_c = PyBytes_AsString(default_encoding); if (!default_encoding_c) goto bad; if (strcmp(default_encoding_c, "ascii") == 0) { __Pyx_sys_getdefaultencoding_not_ascii = 0; } else { char ascii_chars[128]; int c; for (c = 0; c < 128; c++) { ascii_chars[c] = c; } __Pyx_sys_getdefaultencoding_not_ascii = 1; ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); if (!ascii_chars_u) goto bad; ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { PyErr_Format( PyExc_ValueError, "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", default_encoding_c); goto bad; } Py_DECREF(ascii_chars_u); Py_DECREF(ascii_chars_b); } Py_DECREF(default_encoding); return 0; bad: Py_XDECREF(default_encoding); Py_XDECREF(ascii_chars_u); Py_XDECREF(ascii_chars_b); return -1; } #endif #if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) #else #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) #if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT static char* __PYX_DEFAULT_STRING_ENCODING; static int __Pyx_init_sys_getdefaultencoding_params(void) { PyObject* sys; PyObject* default_encoding = NULL; char* default_encoding_c; sys = PyImport_ImportModule("sys"); if (!sys) goto bad; default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); Py_DECREF(sys); if (!default_encoding) goto bad; default_encoding_c = PyBytes_AsString(default_encoding); if (!default_encoding_c) goto bad; __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); Py_DECREF(default_encoding); return 0; bad: Py_XDECREF(default_encoding); return -1; } #endif #endif /* Test for GCC > 2.95 */ #if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #else /* !__GNUC__ or GCC < 2.95 */ #define likely(x) (x) #define unlikely(x) (x) #endif /* __GNUC__ */ static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } static PyObject *__pyx_m = NULL; static PyObject *__pyx_d; static PyObject *__pyx_b; static PyObject *__pyx_cython_runtime; static PyObject *__pyx_empty_tuple; static PyObject *__pyx_empty_bytes; static PyObject *__pyx_empty_unicode; static int __pyx_lineno; static int __pyx_clineno = 0; static const char * __pyx_cfilenm= __FILE__; static
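/* likely()/unlikely() above compile down to __builtin_expect() on
 * GCC-style compilers and to the bare expression elsewhere. A minimal
 * sketch of the pattern (standalone illustration, not generated code):
 *
 *     if (unlikely(buf == NULL))
 *         return NULL;            // error path hinted as cold
 */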
const char *__pyx_filename; static const char *__pyx_f[] = { "stringsource", "src/borg/algorithms/checksums.pyx", "type.pxd", }; /*--- Type declarations ---*/ struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64; /* "borg/algorithms/checksums.pyx":82 * * * cdef class StreamingXXH64: # <<<<<<<<<<<<<< * cdef XXH64_state_t state * */ struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 { PyObject_HEAD XXH64_state_t state; }; /* --- Runtime support code (head) --- */ /* Refnanny.proto */ #ifndef CYTHON_REFNANNY #define CYTHON_REFNANNY 0 #endif #if CYTHON_REFNANNY typedef struct { void (*INCREF)(void*, PyObject*, int); void (*DECREF)(void*, PyObject*, int); void (*GOTREF)(void*, PyObject*, int); void (*GIVEREF)(void*, PyObject*, int); void* (*SetupContext)(const char*, int, const char*); void (*FinishContext)(void**); } __Pyx_RefNannyAPIStruct; static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; #ifdef WITH_THREAD #define __Pyx_RefNannySetupContext(name, acquire_gil)\ if (acquire_gil) {\ PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ PyGILState_Release(__pyx_gilstate_save);\ } else {\ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ } #else #define __Pyx_RefNannySetupContext(name, acquire_gil)\ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) #endif #define __Pyx_RefNannyFinishContext()\ __Pyx_RefNanny->FinishContext(&__pyx_refnanny) #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) #else #define __Pyx_RefNannyDeclarations #define __Pyx_RefNannySetupContext(name, acquire_gil) #define __Pyx_RefNannyFinishContext() #define __Pyx_INCREF(r) Py_INCREF(r) #define __Pyx_DECREF(r) Py_DECREF(r) #define __Pyx_GOTREF(r) #define __Pyx_GIVEREF(r) #define __Pyx_XINCREF(r) Py_XINCREF(r) #define __Pyx_XDECREF(r) Py_XDECREF(r) #define __Pyx_XGOTREF(r) #define __Pyx_XGIVEREF(r) #endif #define __Pyx_XDECREF_SET(r, v) do {\ PyObject *tmp = (PyObject *) r;\ r = v; __Pyx_XDECREF(tmp);\ } while (0) #define __Pyx_DECREF_SET(r, v) do {\ PyObject *tmp = (PyObject *) r;\ r = v; __Pyx_DECREF(tmp);\ } while (0) #define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) #define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) /* PyObjectGetAttrStr.proto */ #if CYTHON_USE_TYPE_SLOTS static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { PyTypeObject* tp = Py_TYPE(obj); if (likely(tp->tp_getattro)) return tp->tp_getattro(obj, attr_name); #if PY_MAJOR_VERSION < 3 if (likely(tp->tp_getattr)) return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); #endif return 
PyObject_GetAttr(obj, attr_name); } #else #define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) #endif /* GetBuiltinName.proto */ static PyObject *__Pyx_GetBuiltinName(PyObject *name); /* RaiseDoubleKeywords.proto */ static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); /* ParseKeywords.proto */ static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ const char* function_name); /* RaiseArgTupleInvalid.proto */ static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /* PyThreadStateGet.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; #define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; #define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type #else #define __Pyx_PyThreadState_declare #define __Pyx_PyThreadState_assign #define __Pyx_PyErr_Occurred() PyErr_Occurred() #endif /* PyErrFetchRestore.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) #define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) #define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) #define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) #define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #if CYTHON_COMPILING_IN_CPYTHON #define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) #else #define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) #endif #else #define __Pyx_PyErr_Clear() PyErr_Clear() #define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) #define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) #define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) #define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) #define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) #define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) #define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) #endif /* GetException.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #else static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); #endif /* SwapException.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #else static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); #endif /* SaveResetException.proto */ #if CYTHON_FAST_THREAD_STATE #define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) static CYTHON_INLINE void 
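/* The __Pyx_ErrFetch/__Pyx_ErrRestore pair above transfers ownership of
 * the current exception triple: fetch clears the error indicator, restore
 * steals the references and sets it back. A minimal sketch of the pattern
 * (standalone illustration):
 *
 *     PyObject *t, *v, *tb;
 *     __Pyx_ErrFetch(&t, &v, &tb);    // take ownership, indicator cleared
 *     cleanup();                      // hypothetical; runs exception-free
 *     __Pyx_ErrRestore(t, v, tb);     // re-arms the pending exception
 */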
__Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); #define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); #else #define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) #define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) #endif /* PyObjectCall.proto */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); #else #define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) #endif /* RaiseException.proto */ static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); /* GetModuleGlobalName.proto */ static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name); /* PyCFunctionFastCall.proto */ #if CYTHON_FAST_PYCCALL static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); #else #define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) #endif /* PyFunctionFastCall.proto */ #if CYTHON_FAST_PYCALL #define __Pyx_PyFunction_FastCall(func, args, nargs)\ __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) #if 1 || PY_VERSION_HEX < 0x030600B1 static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs); #else #define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) #endif #endif /* PyObjectCallMethO.proto */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); #endif /* PyObjectCallOneArg.proto */ static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); /* PyObjectCallNoArg.proto */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); #else #define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) #endif /* SetupReduce.proto */ static int __Pyx_setup_reduce(PyObject* type_obj); /* Import.proto */ static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); /* ImportFrom.proto */ static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); /* CLineInTraceback.proto */ #ifdef CYTHON_CLINE_IN_TRACEBACK #define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0) #else static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); #endif /* CodeObjectCache.proto */ typedef struct { PyCodeObject* code_object; int code_line; } __Pyx_CodeObjectCacheEntry; struct __Pyx_CodeObjectCache { int count; int max_count; __Pyx_CodeObjectCacheEntry* entries; }; static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); static PyCodeObject *__pyx_find_code_object(int code_line); static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); /* AddTraceback.proto */ static void __Pyx_AddTraceback(const char *funcname, int c_line, int py_line, const char *filename); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint32_t(uint32_t value); /* CIntFromPy.proto */ static CYTHON_INLINE uint32_t __Pyx_PyInt_As_uint32_t(PyObject *); /* CIntFromPy.proto */ static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_As_unsigned_PY_LONG_LONG(PyObject *); /* CIntToPy.proto */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); /* CIntFromPy.proto */ static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); /* CIntFromPy.proto */ static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); /* FastTypeChecks.proto */ #if CYTHON_COMPILING_IN_CPYTHON #define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); #else #define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) #define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) #define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) #endif /* CheckBinaryVersion.proto */ static int __Pyx_check_binary_version(void); /* PyIdentifierFromString.proto */ #if !defined(__Pyx_PyIdentifier_FromString) #if PY_MAJOR_VERSION < 3 #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s) #else #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s) #endif #endif /* ModuleImport.proto */ static PyObject *__Pyx_ImportModule(const char *name); /* TypeImport.proto */ static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict); /* InitStrings.proto */ static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /* Module declarations from 'libc.stdint' */ /* Module declarations from 'cpython.buffer' */ /* Module declarations from 'libc.string' */ /* Module declarations from 'libc.stdio' */ /* Module declarations from '__builtin__' */ /* Module declarations from 'cpython.type' */ static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0; /* Module declarations from 'cpython' */ /* Module declarations from 'cpython.object' */ /* Module declarations from 'cpython.bytes' */ /* Module declarations from 'borg.algorithms.checksums' */ static PyTypeObject *__pyx_ptype_4borg_10algorithms_9checksums_StreamingXXH64 = 0; static Py_buffer __pyx_f_4borg_10algorithms_9checksums_ro_buffer(PyObject *); /*proto*/ #define __Pyx_MODULE_NAME "borg.algorithms.checksums" extern int 
__pyx_module_is_main_borg__algorithms__checksums; int __pyx_module_is_main_borg__algorithms__checksums = 0; /* Implementation of 'borg.algorithms.checksums' */ static PyObject *__pyx_builtin_TypeError; static const char __pyx_k_val[] = "val"; static const char __pyx_k_data[] = "data"; static const char __pyx_k_hash[] = "hash"; static const char __pyx_k_main[] = "__main__"; static const char __pyx_k_name[] = "__name__"; static const char __pyx_k_seed[] = "seed"; static const char __pyx_k_test[] = "__test__"; static const char __pyx_k_crc32[] = "crc32"; static const char __pyx_k_value[] = "value"; static const char __pyx_k_xxh64[] = "xxh64"; static const char __pyx_k_digest[] = "digest"; static const char __pyx_k_import[] = "__import__"; static const char __pyx_k_reduce[] = "__reduce__"; static const char __pyx_k_seed_2[] = "_seed"; static const char __pyx_k_helpers[] = "helpers"; static const char __pyx_k_data_buf[] = "data_buf"; static const char __pyx_k_getstate[] = "__getstate__"; static const char __pyx_k_setstate[] = "__setstate__"; static const char __pyx_k_TypeError[] = "TypeError"; static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; static const char __pyx_k_bin_to_hex[] = "bin_to_hex"; static const char __pyx_k_have_clmul[] = "have_clmul"; static const char __pyx_k_crc32_clmul[] = "crc32_clmul"; static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; static const char __pyx_k_crc32_slice_by_8[] = "crc32_slice_by_8"; static const char __pyx_k_XXH64_reset_failed[] = "XXH64_reset failed"; static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; static const char __pyx_k_XXH64_update_failed[] = "XXH64_update failed"; static const char __pyx_k_borg_algorithms_checksums[] = "borg.algorithms.checksums"; static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; static const char __pyx_k_src_borg_algorithms_checksums_py[] = "src/borg/algorithms/checksums.pyx"; static PyObject *__pyx_n_s_TypeError; static PyObject *__pyx_kp_s_XXH64_reset_failed; static PyObject *__pyx_kp_s_XXH64_update_failed; static PyObject *__pyx_n_s_bin_to_hex; static PyObject *__pyx_n_s_borg_algorithms_checksums; static PyObject *__pyx_n_s_cline_in_traceback; static PyObject *__pyx_n_s_crc32; static PyObject *__pyx_n_s_crc32_clmul; static PyObject *__pyx_n_s_crc32_slice_by_8; static PyObject *__pyx_n_s_data; static PyObject *__pyx_n_s_data_buf; static PyObject *__pyx_n_s_digest; static PyObject *__pyx_n_s_getstate; static PyObject *__pyx_n_s_hash; static PyObject *__pyx_n_s_have_clmul; static PyObject *__pyx_n_s_helpers; static PyObject *__pyx_n_s_import; static PyObject *__pyx_n_s_main; static PyObject *__pyx_n_s_name; static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; static PyObject *__pyx_n_s_reduce; static PyObject *__pyx_n_s_reduce_cython; static PyObject *__pyx_n_s_reduce_ex; static PyObject *__pyx_n_s_seed; static PyObject *__pyx_n_s_seed_2; static PyObject *__pyx_n_s_setstate; static PyObject *__pyx_n_s_setstate_cython; static PyObject *__pyx_kp_s_src_borg_algorithms_checksums_py; static PyObject *__pyx_n_s_test; static PyObject *__pyx_n_s_val; static PyObject *__pyx_n_s_value; static PyObject *__pyx_n_s_xxh64; static PyObject *__pyx_pf_4borg_10algorithms_9checksums_crc32_slice_by_8(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_data, PyObject *__pyx_v_value); /* proto */ static PyObject 
*__pyx_pf_4borg_10algorithms_9checksums_2crc32_clmul(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_data, PyObject *__pyx_v_value); /* proto */ static PyObject *__pyx_pf_4borg_10algorithms_9checksums_4xxh64(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_data, PyObject *__pyx_v_seed); /* proto */ static int __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64___cinit__(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self, PyObject *__pyx_v_seed); /* proto */ static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_2update(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self, PyObject *__pyx_v_data); /* proto */ static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_4digest(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_6hexdigest(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_8__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self); /* proto */ static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_10__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ static PyObject *__pyx_tp_new_4borg_10algorithms_9checksums_StreamingXXH64(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ static PyObject *__pyx_int_0; static PyObject *__pyx_tuple_; static PyObject *__pyx_tuple__2; static PyObject *__pyx_tuple__3; static PyObject *__pyx_tuple__4; static PyObject *__pyx_tuple__5; static PyObject *__pyx_tuple__7; static PyObject *__pyx_tuple__9; static PyObject *__pyx_codeobj__6; static PyObject *__pyx_codeobj__8; static PyObject *__pyx_codeobj__10; /* "borg/algorithms/checksums.pyx":38 * * * cdef Py_buffer ro_buffer(object data) except *: # <<<<<<<<<<<<<< * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) */ static Py_buffer __pyx_f_4borg_10algorithms_9checksums_ro_buffer(PyObject *__pyx_v_data) { Py_buffer __pyx_v_view; Py_buffer __pyx_r; __Pyx_RefNannyDeclarations int __pyx_t_1; __Pyx_RefNannySetupContext("ro_buffer", 0); /* "borg/algorithms/checksums.pyx":40 * cdef Py_buffer ro_buffer(object data) except *: * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) # <<<<<<<<<<<<<< * return view * */ __pyx_t_1 = PyObject_GetBuffer(__pyx_v_data, (&__pyx_v_view), PyBUF_SIMPLE); if (unlikely(__pyx_t_1 == ((int)-1))) __PYX_ERR(1, 40, __pyx_L1_error) /* "borg/algorithms/checksums.pyx":41 * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) * return view # <<<<<<<<<<<<<< * * */ __pyx_r = __pyx_v_view; goto __pyx_L0; /* "borg/algorithms/checksums.pyx":38 * * * cdef Py_buffer ro_buffer(object data) except *: # <<<<<<<<<<<<<< * cdef Py_buffer view * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) */ /* function exit code */ __pyx_L1_error:; __Pyx_AddTraceback("borg.algorithms.checksums.ro_buffer", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_pretend_to_initialize(&__pyx_r); __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":44 * * * def crc32_slice_by_8(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ /* Python wrapper */ static PyObject 
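/* ro_buffer() above acquires a read-only view via
 * PyObject_GetBuffer(data, &view, PyBUF_SIMPLE); every caller below pairs
 * it with PyBuffer_Release() in a finally block. A minimal sketch of that
 * CPython buffer-protocol pattern (standalone illustration):
 *
 *     Py_buffer view;
 *     if (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0)
 *         return NULL;                  // propagate the Python exception
 *     checksum(view.buf, view.len);     // hypothetical consumer
 *     PyBuffer_Release(&view);
 */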
*__pyx_pw_4borg_10algorithms_9checksums_1crc32_slice_by_8(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyMethodDef __pyx_mdef_4borg_10algorithms_9checksums_1crc32_slice_by_8 = {"crc32_slice_by_8", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_1crc32_slice_by_8, METH_VARARGS|METH_KEYWORDS, 0}; static PyObject *__pyx_pw_4borg_10algorithms_9checksums_1crc32_slice_by_8(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_data = 0; PyObject *__pyx_v_value = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("crc32_slice_by_8 (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_data,&__pyx_n_s_value,0}; PyObject* values[2] = {0,0}; values[1] = ((PyObject *)__pyx_int_0); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_data)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_value); if (value) { values[1] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "crc32_slice_by_8") < 0)) __PYX_ERR(1, 44, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_data = values[0]; __pyx_v_value = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("crc32_slice_by_8", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 44, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.algorithms.checksums.crc32_slice_by_8", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_crc32_slice_by_8(__pyx_self, __pyx_v_data, __pyx_v_value); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_crc32_slice_by_8(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_data, PyObject *__pyx_v_value) { Py_buffer __pyx_v_data_buf; uint32_t __pyx_v_val; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_buffer __pyx_t_1; uint32_t __pyx_t_2; PyObject *__pyx_t_3 = NULL; int __pyx_t_4; int __pyx_t_5; char const *__pyx_t_6; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; PyObject *__pyx_t_10 = NULL; PyObject *__pyx_t_11 = NULL; PyObject *__pyx_t_12 = NULL; __Pyx_RefNannySetupContext("crc32_slice_by_8", 0); /* "borg/algorithms/checksums.pyx":45 * * def crc32_slice_by_8(data, value=0): * cdef Py_buffer data_buf = ro_buffer(data) # <<<<<<<<<<<<<< * cdef uint32_t val = value * try: */ __pyx_t_1 = __pyx_f_4borg_10algorithms_9checksums_ro_buffer(__pyx_v_data); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 45, __pyx_L1_error) __pyx_v_data_buf = __pyx_t_1; /* "borg/algorithms/checksums.pyx":46 * def crc32_slice_by_8(data, value=0): * cdef Py_buffer 
data_buf = ro_buffer(data) * cdef uint32_t val = value # <<<<<<<<<<<<<< * try: * return _crc32_slice_by_8(data_buf.buf, data_buf.len, val) */ __pyx_t_2 = __Pyx_PyInt_As_uint32_t(__pyx_v_value); if (unlikely((__pyx_t_2 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(1, 46, __pyx_L1_error) __pyx_v_val = __pyx_t_2; /* "borg/algorithms/checksums.pyx":47 * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value * try: # <<<<<<<<<<<<<< * return _crc32_slice_by_8(data_buf.buf, data_buf.len, val) * finally: */ /*try:*/ { /* "borg/algorithms/checksums.pyx":48 * cdef uint32_t val = value * try: * return _crc32_slice_by_8(data_buf.buf, data_buf.len, val) # <<<<<<<<<<<<<< * finally: * PyBuffer_Release(&data_buf) */ __Pyx_XDECREF(__pyx_r); __pyx_t_3 = __Pyx_PyInt_From_uint32_t(crc32_slice_by_8(__pyx_v_data_buf.buf, __pyx_v_data_buf.len, __pyx_v_val)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 48, __pyx_L4_error) __Pyx_GOTREF(__pyx_t_3); __pyx_r = __pyx_t_3; __pyx_t_3 = 0; goto __pyx_L3_return; } /* "borg/algorithms/checksums.pyx":50 * return _crc32_slice_by_8(data_buf.buf, data_buf.len, val) * finally: * PyBuffer_Release(&data_buf) # <<<<<<<<<<<<<< * * */ /*finally:*/ { __pyx_L4_error:; /*exception exit:*/{ __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); __Pyx_XGOTREF(__pyx_t_7); __Pyx_XGOTREF(__pyx_t_8); __Pyx_XGOTREF(__pyx_t_9); __Pyx_XGOTREF(__pyx_t_10); __Pyx_XGOTREF(__pyx_t_11); __Pyx_XGOTREF(__pyx_t_12); __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; { PyBuffer_Release((&__pyx_v_data_buf)); } if (PY_MAJOR_VERSION >= 3) { __Pyx_XGIVEREF(__pyx_t_10); __Pyx_XGIVEREF(__pyx_t_11); __Pyx_XGIVEREF(__pyx_t_12); __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); } __Pyx_XGIVEREF(__pyx_t_7); __Pyx_XGIVEREF(__pyx_t_8); __Pyx_XGIVEREF(__pyx_t_9); __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; goto __pyx_L1_error; } __pyx_L3_return: { __pyx_t_12 = __pyx_r; __pyx_r = 0; PyBuffer_Release((&__pyx_v_data_buf)); __pyx_r = __pyx_t_12; __pyx_t_12 = 0; goto __pyx_L0; } } /* "borg/algorithms/checksums.pyx":44 * * * def crc32_slice_by_8(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_AddTraceback("borg.algorithms.checksums.crc32_slice_by_8", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":53 * * * def crc32_clmul(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_3crc32_clmul(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyMethodDef __pyx_mdef_4borg_10algorithms_9checksums_3crc32_clmul = {"crc32_clmul", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_3crc32_clmul, 
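/* crc32_slice_by_8 above and crc32_clmul below expose the same
 * (data, value=0) Python signature; borg selects a variant at import time
 * based on have_clmul. A hypothetical Python-level dispatch sketch,
 * mirroring that selection:
 *
 *     crc32 = crc32_clmul if have_clmul else crc32_slice_by_8
 *     assert crc32(b"x", 0) == crc32_slice_by_8(b"x", 0)
 */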
METH_VARARGS|METH_KEYWORDS, 0}; static PyObject *__pyx_pw_4borg_10algorithms_9checksums_3crc32_clmul(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_data = 0; PyObject *__pyx_v_value = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("crc32_clmul (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_data,&__pyx_n_s_value,0}; PyObject* values[2] = {0,0}; values[1] = ((PyObject *)__pyx_int_0); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_data)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_value); if (value) { values[1] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "crc32_clmul") < 0)) __PYX_ERR(1, 53, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_data = values[0]; __pyx_v_value = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("crc32_clmul", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 53, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.algorithms.checksums.crc32_clmul", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_2crc32_clmul(__pyx_self, __pyx_v_data, __pyx_v_value); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_2crc32_clmul(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_data, PyObject *__pyx_v_value) { Py_buffer __pyx_v_data_buf; uint32_t __pyx_v_val; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_buffer __pyx_t_1; uint32_t __pyx_t_2; PyObject *__pyx_t_3 = NULL; int __pyx_t_4; int __pyx_t_5; char const *__pyx_t_6; PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; PyObject *__pyx_t_10 = NULL; PyObject *__pyx_t_11 = NULL; PyObject *__pyx_t_12 = NULL; __Pyx_RefNannySetupContext("crc32_clmul", 0); /* "borg/algorithms/checksums.pyx":54 * * def crc32_clmul(data, value=0): * cdef Py_buffer data_buf = ro_buffer(data) # <<<<<<<<<<<<<< * cdef uint32_t val = value * try: */ __pyx_t_1 = __pyx_f_4borg_10algorithms_9checksums_ro_buffer(__pyx_v_data); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 54, __pyx_L1_error) __pyx_v_data_buf = __pyx_t_1; /* "borg/algorithms/checksums.pyx":55 * def crc32_clmul(data, value=0): * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value # <<<<<<<<<<<<<< * try: * return _crc32_clmul(data_buf.buf, data_buf.len, val) */ __pyx_t_2 = __Pyx_PyInt_As_uint32_t(__pyx_v_value); if (unlikely((__pyx_t_2 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(1, 55, __pyx_L1_error) __pyx_v_val = __pyx_t_2; /* "borg/algorithms/checksums.pyx":56 * cdef 
Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value * try: # <<<<<<<<<<<<<< * return _crc32_clmul(data_buf.buf, data_buf.len, val) * finally: */ /*try:*/ { /* "borg/algorithms/checksums.pyx":57 * cdef uint32_t val = value * try: * return _crc32_clmul(data_buf.buf, data_buf.len, val) # <<<<<<<<<<<<<< * finally: * PyBuffer_Release(&data_buf) */ __Pyx_XDECREF(__pyx_r); __pyx_t_3 = __Pyx_PyInt_From_uint32_t(crc32_clmul(__pyx_v_data_buf.buf, __pyx_v_data_buf.len, __pyx_v_val)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 57, __pyx_L4_error) __Pyx_GOTREF(__pyx_t_3); __pyx_r = __pyx_t_3; __pyx_t_3 = 0; goto __pyx_L3_return; } /* "borg/algorithms/checksums.pyx":59 * return _crc32_clmul(data_buf.buf, data_buf.len, val) * finally: * PyBuffer_Release(&data_buf) # <<<<<<<<<<<<<< * * */ /*finally:*/ { __pyx_L4_error:; /*exception exit:*/{ __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); __Pyx_XGOTREF(__pyx_t_7); __Pyx_XGOTREF(__pyx_t_8); __Pyx_XGOTREF(__pyx_t_9); __Pyx_XGOTREF(__pyx_t_10); __Pyx_XGOTREF(__pyx_t_11); __Pyx_XGOTREF(__pyx_t_12); __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; { PyBuffer_Release((&__pyx_v_data_buf)); } if (PY_MAJOR_VERSION >= 3) { __Pyx_XGIVEREF(__pyx_t_10); __Pyx_XGIVEREF(__pyx_t_11); __Pyx_XGIVEREF(__pyx_t_12); __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); } __Pyx_XGIVEREF(__pyx_t_7); __Pyx_XGIVEREF(__pyx_t_8); __Pyx_XGIVEREF(__pyx_t_9); __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; goto __pyx_L1_error; } __pyx_L3_return: { __pyx_t_12 = __pyx_r; __pyx_r = 0; PyBuffer_Release((&__pyx_v_data_buf)); __pyx_r = __pyx_t_12; __pyx_t_12 = 0; goto __pyx_L0; } } /* "borg/algorithms/checksums.pyx":53 * * * def crc32_clmul(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_AddTraceback("borg.algorithms.checksums.crc32_clmul", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":69 * * * def xxh64(data, seed=0): # <<<<<<<<<<<<<< * cdef unsigned long long _seed = seed * cdef XXH64_hash_t hash */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_5xxh64(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static PyMethodDef __pyx_mdef_4borg_10algorithms_9checksums_5xxh64 = {"xxh64", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_5xxh64, METH_VARARGS|METH_KEYWORDS, 0}; static PyObject *__pyx_pw_4borg_10algorithms_9checksums_5xxh64(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_data = 0; PyObject *__pyx_v_seed = 0; PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("xxh64 (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_data,&__pyx_n_s_seed,0}; PyObject* values[2] = {0,0}; values[1] = ((PyObject 
*)__pyx_int_0); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_data)) != 0)) kw_args--; else goto __pyx_L5_argtuple_error; CYTHON_FALLTHROUGH; case 1: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_seed); if (value) { values[1] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "xxh64") < 0)) __PYX_ERR(1, 69, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); CYTHON_FALLTHROUGH; case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_data = values[0]; __pyx_v_seed = values[1]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("xxh64", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 69, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.algorithms.checksums.xxh64", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return NULL; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_4xxh64(__pyx_self, __pyx_v_data, __pyx_v_seed); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_4xxh64(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_data, PyObject *__pyx_v_seed) { unsigned PY_LONG_LONG __pyx_v__seed; XXH64_hash_t __pyx_v_hash; XXH64_canonical_t __pyx_v_digest; Py_buffer __pyx_v_data_buf; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations unsigned PY_LONG_LONG __pyx_t_1; Py_buffer __pyx_t_2; PyObject *__pyx_t_3 = NULL; __Pyx_RefNannySetupContext("xxh64", 0); /* "borg/algorithms/checksums.pyx":70 * * def xxh64(data, seed=0): * cdef unsigned long long _seed = seed # <<<<<<<<<<<<<< * cdef XXH64_hash_t hash * cdef XXH64_canonical_t digest */ __pyx_t_1 = __Pyx_PyInt_As_unsigned_PY_LONG_LONG(__pyx_v_seed); if (unlikely((__pyx_t_1 == (unsigned PY_LONG_LONG)-1) && PyErr_Occurred())) __PYX_ERR(1, 70, __pyx_L1_error) __pyx_v__seed = __pyx_t_1; /* "borg/algorithms/checksums.pyx":73 * cdef XXH64_hash_t hash * cdef XXH64_canonical_t digest * cdef Py_buffer data_buf = ro_buffer(data) # <<<<<<<<<<<<<< * try: * hash = XXH64(data_buf.buf, data_buf.len, _seed) */ __pyx_t_2 = __pyx_f_4borg_10algorithms_9checksums_ro_buffer(__pyx_v_data); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 73, __pyx_L1_error) __pyx_v_data_buf = __pyx_t_2; /* "borg/algorithms/checksums.pyx":74 * cdef XXH64_canonical_t digest * cdef Py_buffer data_buf = ro_buffer(data) * try: # <<<<<<<<<<<<<< * hash = XXH64(data_buf.buf, data_buf.len, _seed) * finally: */ /*try:*/ { /* "borg/algorithms/checksums.pyx":75 * cdef Py_buffer data_buf = ro_buffer(data) * try: * hash = XXH64(data_buf.buf, data_buf.len, _seed) # <<<<<<<<<<<<<< * finally: * PyBuffer_Release(&data_buf) */ __pyx_v_hash = XXH64(__pyx_v_data_buf.buf, __pyx_v_data_buf.len, __pyx_v__seed); } /* "borg/algorithms/checksums.pyx":77 * hash = XXH64(data_buf.buf, data_buf.len, _seed) * finally: * PyBuffer_Release(&data_buf) # <<<<<<<<<<<<<< * 
XXH64_canonicalFromHash(&digest, hash) * return PyBytes_FromStringAndSize( digest.digest, 8) */ /*finally:*/ { /*normal exit:*/{ PyBuffer_Release((&__pyx_v_data_buf)); goto __pyx_L5; } __pyx_L5:; } /* "borg/algorithms/checksums.pyx":78 * finally: * PyBuffer_Release(&data_buf) * XXH64_canonicalFromHash(&digest, hash) # <<<<<<<<<<<<<< * return PyBytes_FromStringAndSize( digest.digest, 8) * */ XXH64_canonicalFromHash((&__pyx_v_digest), __pyx_v_hash); /* "borg/algorithms/checksums.pyx":79 * PyBuffer_Release(&data_buf) * XXH64_canonicalFromHash(&digest, hash) * return PyBytes_FromStringAndSize( digest.digest, 8) # <<<<<<<<<<<<<< * * */ __Pyx_XDECREF(__pyx_r); __pyx_t_3 = PyBytes_FromStringAndSize(((char const *)__pyx_v_digest.digest), 8); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 79, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __pyx_r = __pyx_t_3; __pyx_t_3 = 0; goto __pyx_L0; /* "borg/algorithms/checksums.pyx":69 * * * def xxh64(data, seed=0): # <<<<<<<<<<<<<< * cdef unsigned long long _seed = seed * cdef XXH64_hash_t hash */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_AddTraceback("borg.algorithms.checksums.xxh64", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":85 * cdef XXH64_state_t state * * def __cinit__(self, seed=0): # <<<<<<<<<<<<<< * cdef unsigned long long _seed = seed * if XXH64_reset(&self.state, _seed) != XXH_OK: */ /* Python wrapper */ static int __pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ static int __pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { PyObject *__pyx_v_seed = 0; int __pyx_r; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); { static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_seed,0}; PyObject* values[1] = {0}; values[0] = ((PyObject *)__pyx_int_0); if (unlikely(__pyx_kwds)) { Py_ssize_t kw_args; const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); switch (pos_args) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } kw_args = PyDict_Size(__pyx_kwds); switch (pos_args) { case 0: if (kw_args > 0) { PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_seed); if (value) { values[0] = value; kw_args--; } } } if (unlikely(kw_args > 0)) { if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 85, __pyx_L3_error) } } else { switch (PyTuple_GET_SIZE(__pyx_args)) { case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); CYTHON_FALLTHROUGH; case 0: break; default: goto __pyx_L5_argtuple_error; } } __pyx_v_seed = values[0]; } goto __pyx_L4_argument_unpacking_done; __pyx_L5_argtuple_error:; __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 85, __pyx_L3_error) __pyx_L3_error:; __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __Pyx_RefNannyFinishContext(); return -1; __pyx_L4_argument_unpacking_done:; __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64___cinit__(((struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *)__pyx_v_self), __pyx_v_seed); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } 
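/*
 * Editor's note: the pattern above repeats for every Python-level def --
 * a generated "(wrapper)" function unpacks the Python arguments, then
 * delegates to a "__pyx_pf_" implementation function.  The sketch below
 * (illustrative only, excluded from the build with #if 0) shows one way
 * the finished extension can be driven through the generic CPython call
 * API; the helper name streaming_xxh64_demo and the sample chunks are
 * hypothetical, while the update()/hexdigest() flow mirrors the
 * StreamingXXH64 methods defined in checksums.pyx.
 */
#if 0
#include <Python.h>

/* Hash two byte chunks with StreamingXXH64 and return the hex digest. */
static PyObject *streaming_xxh64_demo(void)
{
    PyObject *mod, *cls, *hasher, *r, *hexdigest = NULL;

    mod = PyImport_ImportModule("borg.algorithms.checksums");
    if (!mod) return NULL;
    cls = PyObject_GetAttrString(mod, "StreamingXXH64");
    Py_DECREF(mod);
    if (!cls) return NULL;
    hasher = PyObject_CallObject(cls, NULL);      /* seed defaults to 0 */
    Py_DECREF(cls);
    if (!hasher) return NULL;

    r = PyObject_CallMethod(hasher, "update", "(y)", "chunk one");
    if (r) {
        Py_DECREF(r);
        r = PyObject_CallMethod(hasher, "update", "(y)", "chunk two");
    }
    if (r) {
        Py_DECREF(r);
        hexdigest = PyObject_CallMethod(hasher, "hexdigest", NULL);
    }
    Py_DECREF(hasher);
    return hexdigest;   /* 16-character hex string, or NULL on error */
}
#endif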
static int __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64___cinit__(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self, PyObject *__pyx_v_seed) { unsigned PY_LONG_LONG __pyx_v__seed; int __pyx_r; __Pyx_RefNannyDeclarations unsigned PY_LONG_LONG __pyx_t_1; int __pyx_t_2; PyObject *__pyx_t_3 = NULL; __Pyx_RefNannySetupContext("__cinit__", 0); /* "borg/algorithms/checksums.pyx":86 * * def __cinit__(self, seed=0): * cdef unsigned long long _seed = seed # <<<<<<<<<<<<<< * if XXH64_reset(&self.state, _seed) != XXH_OK: * raise Exception('XXH64_reset failed') */ __pyx_t_1 = __Pyx_PyInt_As_unsigned_PY_LONG_LONG(__pyx_v_seed); if (unlikely((__pyx_t_1 == (unsigned PY_LONG_LONG)-1) && PyErr_Occurred())) __PYX_ERR(1, 86, __pyx_L1_error) __pyx_v__seed = __pyx_t_1; /* "borg/algorithms/checksums.pyx":87 * def __cinit__(self, seed=0): * cdef unsigned long long _seed = seed * if XXH64_reset(&self.state, _seed) != XXH_OK: # <<<<<<<<<<<<<< * raise Exception('XXH64_reset failed') * */ __pyx_t_2 = ((XXH64_reset((&__pyx_v_self->state), __pyx_v__seed) != XXH_OK) != 0); if (__pyx_t_2) { /* "borg/algorithms/checksums.pyx":88 * cdef unsigned long long _seed = seed * if XXH64_reset(&self.state, _seed) != XXH_OK: * raise Exception('XXH64_reset failed') # <<<<<<<<<<<<<< * * def update(self, data): */ __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple_, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 88, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); __Pyx_Raise(__pyx_t_3, 0, 0, 0); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __PYX_ERR(1, 88, __pyx_L1_error) /* "borg/algorithms/checksums.pyx":87 * def __cinit__(self, seed=0): * cdef unsigned long long _seed = seed * if XXH64_reset(&self.state, _seed) != XXH_OK: # <<<<<<<<<<<<<< * raise Exception('XXH64_reset failed') * */ } /* "borg/algorithms/checksums.pyx":85 * cdef XXH64_state_t state * * def __cinit__(self, seed=0): # <<<<<<<<<<<<<< * cdef unsigned long long _seed = seed * if XXH64_reset(&self.state, _seed) != XXH_OK: */ /* function exit code */ __pyx_r = 0; goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = -1; __pyx_L0:; __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":90 * raise Exception('XXH64_reset failed') * * def update(self, data): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * try: */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_3update(PyObject *__pyx_v_self, PyObject *__pyx_v_data); /*proto*/ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_3update(PyObject *__pyx_v_self, PyObject *__pyx_v_data) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("update (wrapper)", 0); __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_2update(((struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *)__pyx_v_self), ((PyObject *)__pyx_v_data)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_2update(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self, PyObject *__pyx_v_data) { Py_buffer __pyx_v_data_buf; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations Py_buffer __pyx_t_1; int __pyx_t_2; PyObject *__pyx_t_3 = NULL; int __pyx_t_4; int __pyx_t_5; char const *__pyx_t_6; 
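/* The temporaries __pyx_t_7 through __pyx_t_12 declared below hold saved
 * exception state (two type/value/traceback triples) so that the finally
 * clause can release the Py_buffer and then re-raise the original
 * exception intact. */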
PyObject *__pyx_t_7 = NULL; PyObject *__pyx_t_8 = NULL; PyObject *__pyx_t_9 = NULL; PyObject *__pyx_t_10 = NULL; PyObject *__pyx_t_11 = NULL; PyObject *__pyx_t_12 = NULL; __Pyx_RefNannySetupContext("update", 0); /* "borg/algorithms/checksums.pyx":91 * * def update(self, data): * cdef Py_buffer data_buf = ro_buffer(data) # <<<<<<<<<<<<<< * try: * if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: */ __pyx_t_1 = __pyx_f_4borg_10algorithms_9checksums_ro_buffer(__pyx_v_data); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 91, __pyx_L1_error) __pyx_v_data_buf = __pyx_t_1; /* "borg/algorithms/checksums.pyx":92 * def update(self, data): * cdef Py_buffer data_buf = ro_buffer(data) * try: # <<<<<<<<<<<<<< * if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: * raise Exception('XXH64_update failed') */ /*try:*/ { /* "borg/algorithms/checksums.pyx":93 * cdef Py_buffer data_buf = ro_buffer(data) * try: * if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: # <<<<<<<<<<<<<< * raise Exception('XXH64_update failed') * finally: */ __pyx_t_2 = ((XXH64_update((&__pyx_v_self->state), __pyx_v_data_buf.buf, __pyx_v_data_buf.len) != XXH_OK) != 0); if (__pyx_t_2) { /* "borg/algorithms/checksums.pyx":94 * try: * if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: * raise Exception('XXH64_update failed') # <<<<<<<<<<<<<< * finally: * PyBuffer_Release(&data_buf) */ __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 94, __pyx_L4_error) __Pyx_GOTREF(__pyx_t_3); __Pyx_Raise(__pyx_t_3, 0, 0, 0); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __PYX_ERR(1, 94, __pyx_L4_error) /* "borg/algorithms/checksums.pyx":93 * cdef Py_buffer data_buf = ro_buffer(data) * try: * if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: # <<<<<<<<<<<<<< * raise Exception('XXH64_update failed') * finally: */ } } /* "borg/algorithms/checksums.pyx":96 * raise Exception('XXH64_update failed') * finally: * PyBuffer_Release(&data_buf) # <<<<<<<<<<<<<< * * def digest(self): */ /*finally:*/ { /*normal exit:*/{ PyBuffer_Release((&__pyx_v_data_buf)); goto __pyx_L5; } __pyx_L4_error:; /*exception exit:*/{ __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); __Pyx_XGOTREF(__pyx_t_7); __Pyx_XGOTREF(__pyx_t_8); __Pyx_XGOTREF(__pyx_t_9); __Pyx_XGOTREF(__pyx_t_10); __Pyx_XGOTREF(__pyx_t_11); __Pyx_XGOTREF(__pyx_t_12); __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; { PyBuffer_Release((&__pyx_v_data_buf)); } if (PY_MAJOR_VERSION >= 3) { __Pyx_XGIVEREF(__pyx_t_10); __Pyx_XGIVEREF(__pyx_t_11); __Pyx_XGIVEREF(__pyx_t_12); __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); } __Pyx_XGIVEREF(__pyx_t_7); __Pyx_XGIVEREF(__pyx_t_8); __Pyx_XGIVEREF(__pyx_t_9); __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; goto __pyx_L1_error; } __pyx_L5:; } /* "borg/algorithms/checksums.pyx":90 * raise Exception('XXH64_reset failed') * * 
def update(self, data): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * try: */ /* function exit code */ __pyx_r = Py_None; __Pyx_INCREF(Py_None); goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_3); __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.update", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":98 * PyBuffer_Release(&data_buf) * * def digest(self): # <<<<<<<<<<<<<< * cdef XXH64_hash_t hash * cdef XXH64_canonical_t digest */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_5digest(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_5digest(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("digest (wrapper)", 0); __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_4digest(((struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_4digest(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self) { XXH64_hash_t __pyx_v_hash; XXH64_canonical_t __pyx_v_digest; PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("digest", 0); /* "borg/algorithms/checksums.pyx":101 * cdef XXH64_hash_t hash * cdef XXH64_canonical_t digest * hash = XXH64_digest(&self.state) # <<<<<<<<<<<<<< * XXH64_canonicalFromHash(&digest, hash) * return PyBytes_FromStringAndSize( digest.digest, 8) */ __pyx_v_hash = XXH64_digest((&__pyx_v_self->state)); /* "borg/algorithms/checksums.pyx":102 * cdef XXH64_canonical_t digest * hash = XXH64_digest(&self.state) * XXH64_canonicalFromHash(&digest, hash) # <<<<<<<<<<<<<< * return PyBytes_FromStringAndSize( digest.digest, 8) * */ XXH64_canonicalFromHash((&__pyx_v_digest), __pyx_v_hash); /* "borg/algorithms/checksums.pyx":103 * hash = XXH64_digest(&self.state) * XXH64_canonicalFromHash(&digest, hash) * return PyBytes_FromStringAndSize( digest.digest, 8) # <<<<<<<<<<<<<< * * def hexdigest(self): */ __Pyx_XDECREF(__pyx_r); __pyx_t_1 = PyBytes_FromStringAndSize(((char const *)__pyx_v_digest.digest), 8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 103, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_r = __pyx_t_1; __pyx_t_1 = 0; goto __pyx_L0; /* "borg/algorithms/checksums.pyx":98 * PyBuffer_Release(&data_buf) * * def digest(self): # <<<<<<<<<<<<<< * cdef XXH64_hash_t hash * cdef XXH64_canonical_t digest */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.digest", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "borg/algorithms/checksums.pyx":105 * return PyBytes_FromStringAndSize( digest.digest, 8) * * def hexdigest(self): # <<<<<<<<<<<<<< * return bin_to_hex(self.digest()) */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_7hexdigest(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_7hexdigest(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject 
*unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("hexdigest (wrapper)", 0); __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_6hexdigest(((struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_6hexdigest(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; PyObject *__pyx_t_3 = NULL; PyObject *__pyx_t_4 = NULL; PyObject *__pyx_t_5 = NULL; __Pyx_RefNannySetupContext("hexdigest", 0); /* "borg/algorithms/checksums.pyx":106 * * def hexdigest(self): * return bin_to_hex(self.digest()) # <<<<<<<<<<<<<< */ __Pyx_XDECREF(__pyx_r); __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_bin_to_hex); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_4 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_digest); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_4); __pyx_t_5 = NULL; if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) { __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); if (likely(__pyx_t_5)) { PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); __Pyx_INCREF(__pyx_t_5); __Pyx_INCREF(function); __Pyx_DECREF_SET(__pyx_t_4, function); } } if (__pyx_t_5) { __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; } else { __pyx_t_3 = __Pyx_PyObject_CallNoArg(__pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 106, __pyx_L1_error) } __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; __pyx_t_4 = NULL; if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); if (likely(__pyx_t_4)) { PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); __Pyx_INCREF(__pyx_t_4); __Pyx_INCREF(function); __Pyx_DECREF_SET(__pyx_t_2, function); } } if (!__pyx_t_4) { __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __Pyx_GOTREF(__pyx_t_1); } else { #if CYTHON_FAST_PYCALL if (PyFunction_Check(__pyx_t_2)) { PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_3}; __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; } else #endif #if CYTHON_FAST_PYCCALL if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) { PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_3}; __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; } else #endif { __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_5); __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); __pyx_t_4 = NULL; __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_5, 0+1, __pyx_t_3); __pyx_t_3 = 0; __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 106, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); 
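/* Slow call path: the argument was packed into a throwaway tuple
 * (__pyx_t_5) for the generic __Pyx_PyObject_Call above; the tuple is
 * released immediately below. */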
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; } } __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_r = __pyx_t_1; __pyx_t_1 = 0; goto __pyx_L0; /* "borg/algorithms/checksums.pyx":105 * return PyBytes_FromStringAndSize( digest.digest, 8) * * def hexdigest(self): # <<<<<<<<<<<<<< * return bin_to_hex(self.digest()) */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_2); __Pyx_XDECREF(__pyx_t_3); __Pyx_XDECREF(__pyx_t_4); __Pyx_XDECREF(__pyx_t_5); __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.hexdigest", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __pyx_L0:; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_9__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_9__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_8__reduce_cython__(((struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *)__pyx_v_self)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_8__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__reduce_cython__", 0); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(0, 2, __pyx_L1_error) /* "(tree fragment)":1 * def __reduce_cython__(self): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* Python wrapper */ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_11__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ static PyObject *__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_11__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = 0; __Pyx_RefNannyDeclarations 
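/* Pickling support is deliberately disabled: __reduce_cython__ and
 * __setstate_cython__ both raise TypeError because the type has a
 * non-trivial __cinit__ (it owns a raw XXH64_state_t), so StreamingXXH64
 * objects are not picklable. */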
__Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); __pyx_r = __pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_10__setstate_cython__(((struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); /* function exit code */ __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_pf_4borg_10algorithms_9checksums_14StreamingXXH64_10__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64 *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { PyObject *__pyx_r = NULL; __Pyx_RefNannyDeclarations PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("__setstate_cython__", 0); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_Raise(__pyx_t_1, 0, 0, 0); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __PYX_ERR(0, 4, __pyx_L1_error) /* "(tree fragment)":3 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ /* function exit code */ __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_AddTraceback("borg.algorithms.checksums.StreamingXXH64.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); __pyx_r = NULL; __Pyx_XGIVEREF(__pyx_r); __Pyx_RefNannyFinishContext(); return __pyx_r; } static PyObject *__pyx_tp_new_4borg_10algorithms_9checksums_StreamingXXH64(PyTypeObject *t, PyObject *a, PyObject *k) { PyObject *o; if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { o = (*t->tp_alloc)(t, 0); } else { o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); } if (unlikely(!o)) return 0; if (unlikely(__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_1__cinit__(o, a, k) < 0)) goto bad; return o; bad: Py_DECREF(o); o = 0; return NULL; } static void __pyx_tp_dealloc_4borg_10algorithms_9checksums_StreamingXXH64(PyObject *o) { #if CYTHON_USE_TP_FINALIZE if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { if (PyObject_CallFinalizerFromDealloc(o)) return; } #endif (*Py_TYPE(o)->tp_free)(o); } static PyMethodDef __pyx_methods_4borg_10algorithms_9checksums_StreamingXXH64[] = { {"update", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_3update, METH_O, 0}, {"digest", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_5digest, METH_NOARGS, 0}, {"hexdigest", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_7hexdigest, METH_NOARGS, 0}, {"__reduce_cython__", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_9__reduce_cython__, METH_NOARGS, 0}, {"__setstate_cython__", (PyCFunction)__pyx_pw_4borg_10algorithms_9checksums_14StreamingXXH64_11__setstate_cython__, METH_O, 0}, {0, 0, 0, 0} }; static PyTypeObject __pyx_type_4borg_10algorithms_9checksums_StreamingXXH64 = { PyVarObject_HEAD_INIT(0, 0) "borg.algorithms.checksums.StreamingXXH64", /*tp_name*/ sizeof(struct __pyx_obj_4borg_10algorithms_9checksums_StreamingXXH64), /*tp_basicsize*/ 0, /*tp_itemsize*/ 
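/* Remaining PyTypeObject slots: only tp_dealloc, tp_flags, tp_methods and
 * tp_new are populated for StreamingXXH64; every other slot is left at 0
 * so CPython falls back to its defaults. */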
__pyx_tp_dealloc_4borg_10algorithms_9checksums_StreamingXXH64, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ #if PY_MAJOR_VERSION < 3 0, /*tp_compare*/ #endif #if PY_MAJOR_VERSION >= 3 0, /*tp_as_async*/ #endif 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ 0, /*tp_doc*/ 0, /*tp_traverse*/ 0, /*tp_clear*/ 0, /*tp_richcompare*/ 0, /*tp_weaklistoffset*/ 0, /*tp_iter*/ 0, /*tp_iternext*/ __pyx_methods_4borg_10algorithms_9checksums_StreamingXXH64, /*tp_methods*/ 0, /*tp_members*/ 0, /*tp_getset*/ 0, /*tp_base*/ 0, /*tp_dict*/ 0, /*tp_descr_get*/ 0, /*tp_descr_set*/ 0, /*tp_dictoffset*/ 0, /*tp_init*/ 0, /*tp_alloc*/ __pyx_tp_new_4borg_10algorithms_9checksums_StreamingXXH64, /*tp_new*/ 0, /*tp_free*/ 0, /*tp_is_gc*/ 0, /*tp_bases*/ 0, /*tp_mro*/ 0, /*tp_cache*/ 0, /*tp_subclasses*/ 0, /*tp_weaklist*/ 0, /*tp_del*/ 0, /*tp_version_tag*/ #if PY_VERSION_HEX >= 0x030400a1 0, /*tp_finalize*/ #endif }; static PyMethodDef __pyx_methods[] = { {0, 0, 0, 0} }; #if PY_MAJOR_VERSION >= 3 #if CYTHON_PEP489_MULTI_PHASE_INIT static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ static int __pyx_pymod_exec_checksums(PyObject* module); /*proto*/ static PyModuleDef_Slot __pyx_moduledef_slots[] = { {Py_mod_create, (void*)__pyx_pymod_create}, {Py_mod_exec, (void*)__pyx_pymod_exec_checksums}, {0, NULL} }; #endif static struct PyModuleDef __pyx_moduledef = { PyModuleDef_HEAD_INIT, "checksums", 0, /* m_doc */ #if CYTHON_PEP489_MULTI_PHASE_INIT 0, /* m_size */ #else -1, /* m_size */ #endif __pyx_methods /* m_methods */, #if CYTHON_PEP489_MULTI_PHASE_INIT __pyx_moduledef_slots, /* m_slots */ #else NULL, /* m_reload */ #endif NULL, /* m_traverse */ NULL, /* m_clear */ NULL /* m_free */ }; #endif static __Pyx_StringTabEntry __pyx_string_tab[] = { {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, {&__pyx_kp_s_XXH64_reset_failed, __pyx_k_XXH64_reset_failed, sizeof(__pyx_k_XXH64_reset_failed), 0, 0, 1, 0}, {&__pyx_kp_s_XXH64_update_failed, __pyx_k_XXH64_update_failed, sizeof(__pyx_k_XXH64_update_failed), 0, 0, 1, 0}, {&__pyx_n_s_bin_to_hex, __pyx_k_bin_to_hex, sizeof(__pyx_k_bin_to_hex), 0, 0, 1, 1}, {&__pyx_n_s_borg_algorithms_checksums, __pyx_k_borg_algorithms_checksums, sizeof(__pyx_k_borg_algorithms_checksums), 0, 0, 1, 1}, {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, {&__pyx_n_s_crc32, __pyx_k_crc32, sizeof(__pyx_k_crc32), 0, 0, 1, 1}, {&__pyx_n_s_crc32_clmul, __pyx_k_crc32_clmul, sizeof(__pyx_k_crc32_clmul), 0, 0, 1, 1}, {&__pyx_n_s_crc32_slice_by_8, __pyx_k_crc32_slice_by_8, sizeof(__pyx_k_crc32_slice_by_8), 0, 0, 1, 1}, {&__pyx_n_s_data, __pyx_k_data, sizeof(__pyx_k_data), 0, 0, 1, 1}, {&__pyx_n_s_data_buf, __pyx_k_data_buf, sizeof(__pyx_k_data_buf), 0, 0, 1, 1}, {&__pyx_n_s_digest, __pyx_k_digest, sizeof(__pyx_k_digest), 0, 0, 1, 1}, {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, {&__pyx_n_s_hash, __pyx_k_hash, sizeof(__pyx_k_hash), 0, 0, 1, 1}, {&__pyx_n_s_have_clmul, __pyx_k_have_clmul, sizeof(__pyx_k_have_clmul), 0, 0, 1, 1}, {&__pyx_n_s_helpers, __pyx_k_helpers, sizeof(__pyx_k_helpers), 0, 0, 1, 1}, {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, 
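/* __Pyx_StringTabEntry layout: {slot, C literal, size, encoding,
 * is_unicode, is_str, intern} -- __pyx_n_s_* names are interned
 * identifiers, __pyx_kp_s_* entries are plain (non-interned) string
 * constants. */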
{&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, {&__pyx_n_s_seed, __pyx_k_seed, sizeof(__pyx_k_seed), 0, 0, 1, 1}, {&__pyx_n_s_seed_2, __pyx_k_seed_2, sizeof(__pyx_k_seed_2), 0, 0, 1, 1}, {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, {&__pyx_kp_s_src_borg_algorithms_checksums_py, __pyx_k_src_borg_algorithms_checksums_py, sizeof(__pyx_k_src_borg_algorithms_checksums_py), 0, 0, 1, 0}, {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, {&__pyx_n_s_val, __pyx_k_val, sizeof(__pyx_k_val), 0, 0, 1, 1}, {&__pyx_n_s_value, __pyx_k_value, sizeof(__pyx_k_value), 0, 0, 1, 1}, {&__pyx_n_s_xxh64, __pyx_k_xxh64, sizeof(__pyx_k_xxh64), 0, 0, 1, 1}, {0, 0, 0, 0, 0, 0, 0} }; static int __Pyx_InitCachedBuiltins(void) { __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(0, 2, __pyx_L1_error) return 0; __pyx_L1_error:; return -1; } static int __Pyx_InitCachedConstants(void) { __Pyx_RefNannyDeclarations __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); /* "borg/algorithms/checksums.pyx":88 * cdef unsigned long long _seed = seed * if XXH64_reset(&self.state, _seed) != XXH_OK: * raise Exception('XXH64_reset failed') # <<<<<<<<<<<<<< * * def update(self, data): */ __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_s_XXH64_reset_failed); if (unlikely(!__pyx_tuple_)) __PYX_ERR(1, 88, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple_); __Pyx_GIVEREF(__pyx_tuple_); /* "borg/algorithms/checksums.pyx":94 * try: * if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: * raise Exception('XXH64_update failed') # <<<<<<<<<<<<<< * finally: * PyBuffer_Release(&data_buf) */ __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_XXH64_update_failed); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 94, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__2); __Pyx_GIVEREF(__pyx_tuple__2); /* "(tree fragment)":2 * def __reduce_cython__(self): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") */ __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 2, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__3); __Pyx_GIVEREF(__pyx_tuple__3); /* "(tree fragment)":4 * raise TypeError("no default __reduce__ due to non-trivial __cinit__") * def __setstate_cython__(self, __pyx_state): * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< */ __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 4, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__4); __Pyx_GIVEREF(__pyx_tuple__4); /* "borg/algorithms/checksums.pyx":44 * * * def crc32_slice_by_8(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ __pyx_tuple__5 = 
PyTuple_Pack(4, __pyx_n_s_data, __pyx_n_s_value, __pyx_n_s_data_buf, __pyx_n_s_val); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 44, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__5); __Pyx_GIVEREF(__pyx_tuple__5); __pyx_codeobj__6 = (PyObject*)__Pyx_PyCode_New(2, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__5, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_src_borg_algorithms_checksums_py, __pyx_n_s_crc32_slice_by_8, 44, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__6)) __PYX_ERR(1, 44, __pyx_L1_error) /* "borg/algorithms/checksums.pyx":53 * * * def crc32_clmul(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ __pyx_tuple__7 = PyTuple_Pack(4, __pyx_n_s_data, __pyx_n_s_value, __pyx_n_s_data_buf, __pyx_n_s_val); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 53, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__7); __Pyx_GIVEREF(__pyx_tuple__7); __pyx_codeobj__8 = (PyObject*)__Pyx_PyCode_New(2, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__7, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_src_borg_algorithms_checksums_py, __pyx_n_s_crc32_clmul, 53, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__8)) __PYX_ERR(1, 53, __pyx_L1_error) /* "borg/algorithms/checksums.pyx":69 * * * def xxh64(data, seed=0): # <<<<<<<<<<<<<< * cdef unsigned long long _seed = seed * cdef XXH64_hash_t hash */ __pyx_tuple__9 = PyTuple_Pack(6, __pyx_n_s_data, __pyx_n_s_seed, __pyx_n_s_seed_2, __pyx_n_s_hash, __pyx_n_s_digest, __pyx_n_s_data_buf); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 69, __pyx_L1_error) __Pyx_GOTREF(__pyx_tuple__9); __Pyx_GIVEREF(__pyx_tuple__9); __pyx_codeobj__10 = (PyObject*)__Pyx_PyCode_New(2, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__9, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_src_borg_algorithms_checksums_py, __pyx_n_s_xxh64, 69, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__10)) __PYX_ERR(1, 69, __pyx_L1_error) __Pyx_RefNannyFinishContext(); return 0; __pyx_L1_error:; __Pyx_RefNannyFinishContext(); return -1; } static int __Pyx_InitGlobals(void) { if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(1, 1, __pyx_L1_error); __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(1, 1, __pyx_L1_error) return 0; __pyx_L1_error:; return -1; } #if PY_MAJOR_VERSION < 3 PyMODINIT_FUNC initchecksums(void); /*proto*/ PyMODINIT_FUNC initchecksums(void) #else PyMODINIT_FUNC PyInit_checksums(void); /*proto*/ PyMODINIT_FUNC PyInit_checksums(void) #if CYTHON_PEP489_MULTI_PHASE_INIT { return PyModuleDef_Init(&__pyx_moduledef); } static int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name) { PyObject *value = PyObject_GetAttrString(spec, from_name); int result = 0; if (likely(value)) { result = PyDict_SetItemString(moddict, to_name, value); Py_DECREF(value); } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { PyErr_Clear(); } else { result = -1; } return result; } static PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { PyObject *module = NULL, *moddict, *modname; if (__pyx_m) return __Pyx_NewRef(__pyx_m); modname = PyObject_GetAttrString(spec, "name"); if (unlikely(!modname)) goto bad; module = PyModule_NewObject(modname); Py_DECREF(modname); if (unlikely(!module)) goto bad; moddict = PyModule_GetDict(module); if (unlikely(!moddict)) goto bad; if 
(unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__") < 0)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__") < 0)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__") < 0)) goto bad; if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__") < 0)) goto bad; return module; bad: Py_XDECREF(module); return NULL; } static int __pyx_pymod_exec_checksums(PyObject *__pyx_pyinit_module) #endif #endif { PyObject *__pyx_t_1 = NULL; PyObject *__pyx_t_2 = NULL; int __pyx_t_3; __Pyx_RefNannyDeclarations #if CYTHON_PEP489_MULTI_PHASE_INIT if (__pyx_m && __pyx_m == __pyx_pyinit_module) return 0; #endif #if CYTHON_REFNANNY __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); if (!__Pyx_RefNanny) { PyErr_Clear(); __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); if (!__Pyx_RefNanny) Py_FatalError("failed to import 'refnanny' module"); } #endif __Pyx_RefNannySetupContext("PyMODINIT_FUNC PyInit_checksums(void)", 0); if (__Pyx_check_binary_version() < 0) __PYX_ERR(1, 1, __pyx_L1_error) __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(1, 1, __pyx_L1_error) __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(1, 1, __pyx_L1_error) __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(1, 1, __pyx_L1_error) #ifdef __Pyx_CyFunction_USED if (__pyx_CyFunction_init() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif #ifdef __Pyx_FusedFunction_USED if (__pyx_FusedFunction_init() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif #ifdef __Pyx_Coroutine_USED if (__pyx_Coroutine_init() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif #ifdef __Pyx_Generator_USED if (__pyx_Generator_init() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif #ifdef __Pyx_AsyncGen_USED if (__pyx_AsyncGen_init() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif #ifdef __Pyx_StopAsyncIteration_USED if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif /*--- Library function declarations ---*/ /*--- Threads initialization code ---*/ #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS #ifdef WITH_THREAD /* Python build with threading support? */ PyEval_InitThreads(); #endif #endif /*--- Module creation code ---*/ #if CYTHON_PEP489_MULTI_PHASE_INIT __pyx_m = __pyx_pyinit_module; Py_INCREF(__pyx_m); #else #if PY_MAJOR_VERSION < 3 __pyx_m = Py_InitModule4("checksums", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); #else __pyx_m = PyModule_Create(&__pyx_moduledef); #endif if (unlikely(!__pyx_m)) __PYX_ERR(1, 1, __pyx_L1_error) #endif __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(1, 1, __pyx_L1_error) Py_INCREF(__pyx_d); __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(1, 1, __pyx_L1_error) __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(1, 1, __pyx_L1_error) #if CYTHON_COMPILING_IN_PYPY Py_INCREF(__pyx_b); #endif if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(1, 1, __pyx_L1_error); /*--- Initialize various global constants etc. 
---*/ if (__Pyx_InitGlobals() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif if (__pyx_module_is_main_borg__algorithms__checksums) { if (PyObject_SetAttrString(__pyx_m, "__name__", __pyx_n_s_main) < 0) __PYX_ERR(1, 1, __pyx_L1_error) } #if PY_MAJOR_VERSION >= 3 { PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(1, 1, __pyx_L1_error) if (!PyDict_GetItemString(modules, "borg.algorithms.checksums")) { if (unlikely(PyDict_SetItemString(modules, "borg.algorithms.checksums", __pyx_m) < 0)) __PYX_ERR(1, 1, __pyx_L1_error) } } #endif /*--- Builtin init code ---*/ if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(1, 1, __pyx_L1_error) /*--- Constants init code ---*/ if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(1, 1, __pyx_L1_error) /*--- Global init code ---*/ /*--- Variable export code ---*/ /*--- Function export code ---*/ /*--- Type init code ---*/ if (PyType_Ready(&__pyx_type_4borg_10algorithms_9checksums_StreamingXXH64) < 0) __PYX_ERR(1, 82, __pyx_L1_error) __pyx_type_4borg_10algorithms_9checksums_StreamingXXH64.tp_print = 0; if (PyObject_SetAttrString(__pyx_m, "StreamingXXH64", (PyObject *)&__pyx_type_4borg_10algorithms_9checksums_StreamingXXH64) < 0) __PYX_ERR(1, 82, __pyx_L1_error) if (__Pyx_setup_reduce((PyObject*)&__pyx_type_4borg_10algorithms_9checksums_StreamingXXH64) < 0) __PYX_ERR(1, 82, __pyx_L1_error) __pyx_ptype_4borg_10algorithms_9checksums_StreamingXXH64 = &__pyx_type_4borg_10algorithms_9checksums_StreamingXXH64; /*--- Type import code ---*/ __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, "type", #if CYTHON_COMPILING_IN_PYPY sizeof(PyTypeObject), #else sizeof(PyHeapTypeObject), #endif 0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error) /*--- Variable import code ---*/ /*--- Function import code ---*/ /*--- Execution code ---*/ #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) if (__Pyx_patch_abc() < 0) __PYX_ERR(1, 1, __pyx_L1_error) #endif /* "borg/algorithms/checksums.pyx":1 * from ..helpers import bin_to_hex # <<<<<<<<<<<<<< * * from libc.stdint cimport uint32_t */ __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __Pyx_INCREF(__pyx_n_s_bin_to_hex); __Pyx_GIVEREF(__pyx_n_s_bin_to_hex); PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_bin_to_hex); __pyx_t_2 = __Pyx_Import(__pyx_n_s_helpers, __pyx_t_1, 2); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_bin_to_hex); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); if (PyDict_SetItem(__pyx_d, __pyx_n_s_bin_to_hex, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/algorithms/checksums.pyx":44 * * * def crc32_slice_by_8(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ __pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_4borg_10algorithms_9checksums_1crc32_slice_by_8, NULL, __pyx_n_s_borg_algorithms_checksums); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 44, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_crc32_slice_by_8, __pyx_t_2) < 0) __PYX_ERR(1, 44, __pyx_L1_error) 
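/* Module execution continues below: crc32_clmul is registered in the
 * module dict the same way, then have_clmul is evaluated once at import
 * time and the module-level name crc32 is bound to whichever CRC32
 * implementation the CPU supports. */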
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/algorithms/checksums.pyx":53 * * * def crc32_clmul(data, value=0): # <<<<<<<<<<<<<< * cdef Py_buffer data_buf = ro_buffer(data) * cdef uint32_t val = value */ __pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_4borg_10algorithms_9checksums_3crc32_clmul, NULL, __pyx_n_s_borg_algorithms_checksums); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 53, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_crc32_clmul, __pyx_t_2) < 0) __PYX_ERR(1, 53, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/algorithms/checksums.pyx":62 * * * have_clmul = _have_clmul() # <<<<<<<<<<<<<< * if have_clmul: * crc32 = crc32_clmul */ __pyx_t_2 = __Pyx_PyInt_From_int(have_clmul()); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 62, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_have_clmul, __pyx_t_2) < 0) __PYX_ERR(1, 62, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/algorithms/checksums.pyx":63 * * have_clmul = _have_clmul() * if have_clmul: # <<<<<<<<<<<<<< * crc32 = crc32_clmul * else: */ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_have_clmul); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 63, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(1, 63, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; if (__pyx_t_3) { /* "borg/algorithms/checksums.pyx":64 * have_clmul = _have_clmul() * if have_clmul: * crc32 = crc32_clmul # <<<<<<<<<<<<<< * else: * crc32 = crc32_slice_by_8 */ __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_crc32_clmul); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 64, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_crc32, __pyx_t_2) < 0) __PYX_ERR(1, 64, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/algorithms/checksums.pyx":63 * * have_clmul = _have_clmul() * if have_clmul: # <<<<<<<<<<<<<< * crc32 = crc32_clmul * else: */ goto __pyx_L2; } /* "borg/algorithms/checksums.pyx":66 * crc32 = crc32_clmul * else: * crc32 = crc32_slice_by_8 # <<<<<<<<<<<<<< * * */ /*else*/ { __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_crc32_slice_by_8); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 66, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_crc32, __pyx_t_2) < 0) __PYX_ERR(1, 66, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; } __pyx_L2:; /* "borg/algorithms/checksums.pyx":69 * * * def xxh64(data, seed=0): # <<<<<<<<<<<<<< * cdef unsigned long long _seed = seed * cdef XXH64_hash_t hash */ __pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_4borg_10algorithms_9checksums_5xxh64, NULL, __pyx_n_s_borg_algorithms_checksums); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 69, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_xxh64, __pyx_t_2) < 0) __PYX_ERR(1, 69, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /* "borg/algorithms/checksums.pyx":1 * from ..helpers import bin_to_hex # <<<<<<<<<<<<<< * * from libc.stdint cimport uint32_t */ __pyx_t_2 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_2); if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_2) < 0) __PYX_ERR(1, 1, __pyx_L1_error) __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; /*--- Wrapped vars code ---*/ goto __pyx_L0; __pyx_L1_error:; __Pyx_XDECREF(__pyx_t_1); __Pyx_XDECREF(__pyx_t_2); if (__pyx_m) { if (__pyx_d) { __Pyx_AddTraceback("init borg.algorithms.checksums", 0, __pyx_lineno, __pyx_filename); } 
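/* Failed import: drop the half-initialized module object so a later
 * import attempt starts from a clean slate. */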
Py_DECREF(__pyx_m); __pyx_m = 0; } else if (!PyErr_Occurred()) { PyErr_SetString(PyExc_ImportError, "init borg.algorithms.checksums"); } __pyx_L0:; __Pyx_RefNannyFinishContext(); #if CYTHON_PEP489_MULTI_PHASE_INIT return (__pyx_m != NULL) ? 0 : -1; #elif PY_MAJOR_VERSION >= 3 return __pyx_m; #else return; #endif } /* --- Runtime support code --- */ /* Refnanny */ #if CYTHON_REFNANNY static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { PyObject *m = NULL, *p = NULL; void *r = NULL; m = PyImport_ImportModule((char *)modname); if (!m) goto end; p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); if (!p) goto end; r = PyLong_AsVoidPtr(p); end: Py_XDECREF(p); Py_XDECREF(m); return (__Pyx_RefNannyAPIStruct *)r; } #endif /* GetBuiltinName */ static PyObject *__Pyx_GetBuiltinName(PyObject *name) { PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); if (unlikely(!result)) { PyErr_Format(PyExc_NameError, #if PY_MAJOR_VERSION >= 3 "name '%U' is not defined", name); #else "name '%.200s' is not defined", PyString_AS_STRING(name)); #endif } return result; } /* RaiseDoubleKeywords */ static void __Pyx_RaiseDoubleKeywordsError( const char* func_name, PyObject* kw_name) { PyErr_Format(PyExc_TypeError, #if PY_MAJOR_VERSION >= 3 "%s() got multiple values for keyword argument '%U'", func_name, kw_name); #else "%s() got multiple values for keyword argument '%s'", func_name, PyString_AsString(kw_name)); #endif } /* ParseKeywords */ static int __Pyx_ParseOptionalKeywords( PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name) { PyObject *key = 0, *value = 0; Py_ssize_t pos = 0; PyObject*** name; PyObject*** first_kw_arg = argnames + num_pos_args; while (PyDict_Next(kwds, &pos, &key, &value)) { name = first_kw_arg; while (*name && (**name != key)) name++; if (*name) { values[name-argnames] = value; continue; } name = first_kw_arg; #if PY_MAJOR_VERSION < 3 if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) { while (*name) { if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) && _PyString_Eq(**name, key)) { values[name-argnames] = value; break; } name++; } if (*name) continue; else { PyObject*** argname = argnames; while (argname != first_kw_arg) { if ((**argname == key) || ( (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) && _PyString_Eq(**argname, key))) { goto arg_passed_twice; } argname++; } } } else #endif if (likely(PyUnicode_Check(key))) { while (*name) { int cmp = (**name == key) ? 0 : #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 : #endif PyUnicode_Compare(**name, key); if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; if (cmp == 0) { values[name-argnames] = value; break; } name++; } if (*name) continue; else { PyObject*** argname = argnames; while (argname != first_kw_arg) { int cmp = (**argname == key) ? 0 : #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 : #endif PyUnicode_Compare(**argname, key); if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; if (cmp == 0) goto arg_passed_twice; argname++; } } } else goto invalid_keyword_type; if (kwds2) { if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; } else { goto invalid_keyword; } } return 0; arg_passed_twice: __Pyx_RaiseDoubleKeywordsError(function_name, key); goto bad; invalid_keyword_type: PyErr_Format(PyExc_TypeError, "%.200s() keywords must be strings", function_name); goto bad; invalid_keyword: PyErr_Format(PyExc_TypeError, #if PY_MAJOR_VERSION < 3 "%.200s() got an unexpected keyword argument '%.200s'", function_name, PyString_AsString(key)); #else "%s() got an unexpected keyword argument '%U'", function_name, key); #endif bad: return -1; } /* RaiseArgTupleInvalid */ static void __Pyx_RaiseArgtupleInvalid( const char* func_name, int exact, Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found) { Py_ssize_t num_expected; const char *more_or_less; if (num_found < num_min) { num_expected = num_min; more_or_less = "at least"; } else { num_expected = num_max; more_or_less = "at most"; } if (exact) { more_or_less = "exactly"; } PyErr_Format(PyExc_TypeError, "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", func_name, more_or_less, num_expected, (num_expected == 1) ? "" : "s", num_found); } /* PyErrFetchRestore */ #if CYTHON_FAST_THREAD_STATE static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { PyObject *tmp_type, *tmp_value, *tmp_tb; tmp_type = tstate->curexc_type; tmp_value = tstate->curexc_value; tmp_tb = tstate->curexc_traceback; tstate->curexc_type = type; tstate->curexc_value = value; tstate->curexc_traceback = tb; Py_XDECREF(tmp_type); Py_XDECREF(tmp_value); Py_XDECREF(tmp_tb); } static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { *type = tstate->curexc_type; *value = tstate->curexc_value; *tb = tstate->curexc_traceback; tstate->curexc_type = 0; tstate->curexc_value = 0; tstate->curexc_traceback = 0; } #endif /* GetException */ #if CYTHON_FAST_THREAD_STATE static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { #else static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) { #endif PyObject *local_type, *local_value, *local_tb; #if CYTHON_FAST_THREAD_STATE PyObject *tmp_type, *tmp_value, *tmp_tb; local_type = tstate->curexc_type; local_value = tstate->curexc_value; local_tb = tstate->curexc_traceback; tstate->curexc_type = 0; tstate->curexc_value = 0; tstate->curexc_traceback = 0; #else PyErr_Fetch(&local_type, &local_value, &local_tb); #endif PyErr_NormalizeException(&local_type, &local_value, &local_tb); #if CYTHON_FAST_THREAD_STATE if (unlikely(tstate->curexc_type)) #else if (unlikely(PyErr_Occurred())) #endif goto bad; #if PY_MAJOR_VERSION >= 3 if (local_tb) { if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) goto bad; } #endif Py_XINCREF(local_tb); Py_XINCREF(local_type); Py_XINCREF(local_value); *type = local_type; *value = local_value; *tb = local_tb; #if CYTHON_FAST_THREAD_STATE #if PY_VERSION_HEX >= 0x030700A2 tmp_type = tstate->exc_state.exc_type; tmp_value = tstate->exc_state.exc_value; tmp_tb = tstate->exc_state.exc_traceback; tstate->exc_state.exc_type = local_type; tstate->exc_state.exc_value = local_value; tstate->exc_state.exc_traceback = local_tb; #else tmp_type = 
tstate->exc_type; tmp_value = tstate->exc_value; tmp_tb = tstate->exc_traceback; tstate->exc_type = local_type; tstate->exc_value = local_value; tstate->exc_traceback = local_tb; #endif Py_XDECREF(tmp_type); Py_XDECREF(tmp_value); Py_XDECREF(tmp_tb); #else PyErr_SetExcInfo(local_type, local_value, local_tb); #endif return 0; bad: *type = 0; *value = 0; *tb = 0; Py_XDECREF(local_type); Py_XDECREF(local_value); Py_XDECREF(local_tb); return -1; } /* SwapException */ #if CYTHON_FAST_THREAD_STATE static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { PyObject *tmp_type, *tmp_value, *tmp_tb; #if PY_VERSION_HEX >= 0x030700A2 tmp_type = tstate->exc_state.exc_type; tmp_value = tstate->exc_state.exc_value; tmp_tb = tstate->exc_state.exc_traceback; tstate->exc_state.exc_type = *type; tstate->exc_state.exc_value = *value; tstate->exc_state.exc_traceback = *tb; #else tmp_type = tstate->exc_type; tmp_value = tstate->exc_value; tmp_tb = tstate->exc_traceback; tstate->exc_type = *type; tstate->exc_value = *value; tstate->exc_traceback = *tb; #endif *type = tmp_type; *value = tmp_value; *tb = tmp_tb; } #else static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { PyObject *tmp_type, *tmp_value, *tmp_tb; PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); PyErr_SetExcInfo(*type, *value, *tb); *type = tmp_type; *value = tmp_value; *tb = tmp_tb; } #endif /* SaveResetException */ #if CYTHON_FAST_THREAD_STATE static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { #if PY_VERSION_HEX >= 0x030700A2 *type = tstate->exc_state.exc_type; *value = tstate->exc_state.exc_value; *tb = tstate->exc_state.exc_traceback; #else *type = tstate->exc_type; *value = tstate->exc_value; *tb = tstate->exc_traceback; #endif Py_XINCREF(*type); Py_XINCREF(*value); Py_XINCREF(*tb); } static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { PyObject *tmp_type, *tmp_value, *tmp_tb; #if PY_VERSION_HEX >= 0x030700A2 tmp_type = tstate->exc_state.exc_type; tmp_value = tstate->exc_state.exc_value; tmp_tb = tstate->exc_state.exc_traceback; tstate->exc_state.exc_type = type; tstate->exc_state.exc_value = value; tstate->exc_state.exc_traceback = tb; #else tmp_type = tstate->exc_type; tmp_value = tstate->exc_value; tmp_tb = tstate->exc_traceback; tstate->exc_type = type; tstate->exc_value = value; tstate->exc_traceback = tb; #endif Py_XDECREF(tmp_type); Py_XDECREF(tmp_value); Py_XDECREF(tmp_tb); } #endif /* PyObjectCall */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { PyObject *result; ternaryfunc call = func->ob_type->tp_call; if (unlikely(!call)) return PyObject_Call(func, arg, kw); if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) return NULL; result = (*call)(func, arg, kw); Py_LeaveRecursiveCall(); if (unlikely(!result) && unlikely(!PyErr_Occurred())) { PyErr_SetString( PyExc_SystemError, "NULL result without error in PyObject_Call"); } return result; } #endif /* RaiseException */ #if PY_MAJOR_VERSION < 3 static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, CYTHON_UNUSED PyObject *cause) { __Pyx_PyThreadState_declare Py_XINCREF(type); if (!value || value == Py_None) value = NULL; else Py_INCREF(value); if (!tb || tb == Py_None) tb = NULL; else { Py_INCREF(tb); if 
(!PyTraceBack_Check(tb)) { PyErr_SetString(PyExc_TypeError, "raise: arg 3 must be a traceback or None"); goto raise_error; } } if (PyType_Check(type)) { #if CYTHON_COMPILING_IN_PYPY if (!value) { Py_INCREF(Py_None); value = Py_None; } #endif PyErr_NormalizeException(&type, &value, &tb); } else { if (value) { PyErr_SetString(PyExc_TypeError, "instance exception may not have a separate value"); goto raise_error; } value = type; type = (PyObject*) Py_TYPE(type); Py_INCREF(type); if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { PyErr_SetString(PyExc_TypeError, "raise: exception class must be a subclass of BaseException"); goto raise_error; } } __Pyx_PyThreadState_assign __Pyx_ErrRestore(type, value, tb); return; raise_error: Py_XDECREF(value); Py_XDECREF(type); Py_XDECREF(tb); return; } #else static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { PyObject* owned_instance = NULL; if (tb == Py_None) { tb = 0; } else if (tb && !PyTraceBack_Check(tb)) { PyErr_SetString(PyExc_TypeError, "raise: arg 3 must be a traceback or None"); goto bad; } if (value == Py_None) value = 0; if (PyExceptionInstance_Check(type)) { if (value) { PyErr_SetString(PyExc_TypeError, "instance exception may not have a separate value"); goto bad; } value = type; type = (PyObject*) Py_TYPE(value); } else if (PyExceptionClass_Check(type)) { PyObject *instance_class = NULL; if (value && PyExceptionInstance_Check(value)) { instance_class = (PyObject*) Py_TYPE(value); if (instance_class != type) { int is_subclass = PyObject_IsSubclass(instance_class, type); if (!is_subclass) { instance_class = NULL; } else if (unlikely(is_subclass == -1)) { goto bad; } else { type = instance_class; } } } if (!instance_class) { PyObject *args; if (!value) args = PyTuple_New(0); else if (PyTuple_Check(value)) { Py_INCREF(value); args = value; } else args = PyTuple_Pack(1, value); if (!args) goto bad; owned_instance = PyObject_Call(type, args, NULL); Py_DECREF(args); if (!owned_instance) goto bad; value = owned_instance; if (!PyExceptionInstance_Check(value)) { PyErr_Format(PyExc_TypeError, "calling %R should have returned an instance of " "BaseException, not %R", type, Py_TYPE(value)); goto bad; } } } else { PyErr_SetString(PyExc_TypeError, "raise: exception class must be a subclass of BaseException"); goto bad; } if (cause) { PyObject *fixed_cause; if (cause == Py_None) { fixed_cause = NULL; } else if (PyExceptionClass_Check(cause)) { fixed_cause = PyObject_CallObject(cause, NULL); if (fixed_cause == NULL) goto bad; } else if (PyExceptionInstance_Check(cause)) { fixed_cause = cause; Py_INCREF(fixed_cause); } else { PyErr_SetString(PyExc_TypeError, "exception causes must derive from " "BaseException"); goto bad; } PyException_SetCause(value, fixed_cause); } PyErr_SetObject(type, value); if (tb) { #if CYTHON_COMPILING_IN_PYPY PyObject *tmp_type, *tmp_value, *tmp_tb; PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); Py_INCREF(tb); PyErr_Restore(tmp_type, tmp_value, tb); Py_XDECREF(tmp_tb); #else PyThreadState *tstate = __Pyx_PyThreadState_Current; PyObject* tmp_tb = tstate->curexc_traceback; if (tb != tmp_tb) { Py_INCREF(tb); tstate->curexc_traceback = tb; Py_XDECREF(tmp_tb); } #endif } bad: Py_XDECREF(owned_instance); return; } #endif /* GetModuleGlobalName */ static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) { PyObject *result; #if !CYTHON_AVOID_BORROWED_REFS result = PyDict_GetItem(__pyx_d, name); if (likely(result)) { Py_INCREF(result); } else { #else result = 
PyObject_GetItem(__pyx_d, name); if (!result) { PyErr_Clear(); #endif result = __Pyx_GetBuiltinName(name); } return result; } /* PyCFunctionFastCall */ #if CYTHON_FAST_PYCCALL static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { PyCFunctionObject *func = (PyCFunctionObject*)func_obj; PyCFunction meth = PyCFunction_GET_FUNCTION(func); PyObject *self = PyCFunction_GET_SELF(func); int flags = PyCFunction_GET_FLAGS(func); assert(PyCFunction_Check(func)); assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS))); assert(nargs >= 0); assert(nargs == 0 || args != NULL); /* _PyCFunction_FastCallDict() must not be called with an exception set, because it may clear it (directly or indirectly) and so the caller loses its exception */ assert(!PyErr_Occurred()); if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { return (*((__Pyx_PyCFunctionFastWithKeywords)meth)) (self, args, nargs, NULL); } else { return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs); } } #endif /* PyFunctionFastCall */ #if CYTHON_FAST_PYCALL #include "frameobject.h" static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, PyObject *globals) { PyFrameObject *f; PyThreadState *tstate = __Pyx_PyThreadState_Current; PyObject **fastlocals; Py_ssize_t i; PyObject *result; assert(globals != NULL); /* XXX Perhaps we should create a specialized PyFrame_New() that doesn't take locals, but does take builtins without sanity checking them. */ assert(tstate != NULL); f = PyFrame_New(tstate, co, globals, NULL); if (f == NULL) { return NULL; } fastlocals = f->f_localsplus; for (i = 0; i < na; i++) { Py_INCREF(*args); fastlocals[i] = *args++; } result = PyEval_EvalFrameEx(f,0); ++tstate->recursion_depth; Py_DECREF(f); --tstate->recursion_depth; return result; } #if 1 || PY_VERSION_HEX < 0x030600B1 static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs) { PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); PyObject *globals = PyFunction_GET_GLOBALS(func); PyObject *argdefs = PyFunction_GET_DEFAULTS(func); PyObject *closure; #if PY_MAJOR_VERSION >= 3 PyObject *kwdefs; #endif PyObject *kwtuple, **k; PyObject **d; Py_ssize_t nd; Py_ssize_t nk; PyObject *result; assert(kwargs == NULL || PyDict_Check(kwargs)); nk = kwargs ? 
PyDict_Size(kwargs) : 0; if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { return NULL; } if ( #if PY_MAJOR_VERSION >= 3 co->co_kwonlyargcount == 0 && #endif likely(kwargs == NULL || nk == 0) && co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { if (argdefs == NULL && co->co_argcount == nargs) { result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); goto done; } else if (nargs == 0 && argdefs != NULL && co->co_argcount == Py_SIZE(argdefs)) { /* function called with no arguments, but all parameters have a default value: use default values as arguments .*/ args = &PyTuple_GET_ITEM(argdefs, 0); result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); goto done; } } if (kwargs != NULL) { Py_ssize_t pos, i; kwtuple = PyTuple_New(2 * nk); if (kwtuple == NULL) { result = NULL; goto done; } k = &PyTuple_GET_ITEM(kwtuple, 0); pos = i = 0; while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { Py_INCREF(k[i]); Py_INCREF(k[i+1]); i += 2; } nk = i / 2; } else { kwtuple = NULL; k = NULL; } closure = PyFunction_GET_CLOSURE(func); #if PY_MAJOR_VERSION >= 3 kwdefs = PyFunction_GET_KW_DEFAULTS(func); #endif if (argdefs != NULL) { d = &PyTuple_GET_ITEM(argdefs, 0); nd = Py_SIZE(argdefs); } else { d = NULL; nd = 0; } #if PY_MAJOR_VERSION >= 3 result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, args, nargs, k, (int)nk, d, (int)nd, kwdefs, closure); #else result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, args, nargs, k, (int)nk, d, (int)nd, closure); #endif Py_XDECREF(kwtuple); done: Py_LeaveRecursiveCall(); return result; } #endif #endif /* PyObjectCallMethO */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { PyObject *self, *result; PyCFunction cfunc; cfunc = PyCFunction_GET_FUNCTION(func); self = PyCFunction_GET_SELF(func); if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) return NULL; result = cfunc(self, arg); Py_LeaveRecursiveCall(); if (unlikely(!result) && unlikely(!PyErr_Occurred())) { PyErr_SetString( PyExc_SystemError, "NULL result without error in PyObject_Call"); } return result; } #endif /* PyObjectCallOneArg */ #if CYTHON_COMPILING_IN_CPYTHON static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { PyObject *result; PyObject *args = PyTuple_New(1); if (unlikely(!args)) return NULL; Py_INCREF(arg); PyTuple_SET_ITEM(args, 0, arg); result = __Pyx_PyObject_Call(func, args, NULL); Py_DECREF(args); return result; } static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { #if CYTHON_FAST_PYCALL if (PyFunction_Check(func)) { return __Pyx_PyFunction_FastCall(func, &arg, 1); } #endif if (likely(PyCFunction_Check(func))) { if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { return __Pyx_PyObject_CallMethO(func, arg); #if CYTHON_FAST_PYCCALL } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) { return __Pyx_PyCFunction_FastCall(func, &arg, 1); #endif } } return __Pyx__PyObject_CallOneArg(func, arg); } #else static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { PyObject *result; PyObject *args = PyTuple_Pack(1, arg); if (unlikely(!args)) return NULL; result = __Pyx_PyObject_Call(func, args, NULL); Py_DECREF(args); return result; } #endif /* PyObjectCallNoArg */ #if CYTHON_COMPILING_IN_CPYTHON static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { #if CYTHON_FAST_PYCALL if (PyFunction_Check(func)) { return 
__Pyx_PyFunction_FastCall(func, NULL, 0); } #endif #ifdef __Pyx_CyFunction_USED if (likely(PyCFunction_Check(func) || __Pyx_TypeCheck(func, __pyx_CyFunctionType))) { #else if (likely(PyCFunction_Check(func))) { #endif if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { return __Pyx_PyObject_CallMethO(func, NULL); } } return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); } #endif /* SetupReduce */ static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { int ret; PyObject *name_attr; name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name); if (likely(name_attr)) { ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); } else { ret = -1; } if (unlikely(ret < 0)) { PyErr_Clear(); ret = 0; } Py_XDECREF(name_attr); return ret; } static int __Pyx_setup_reduce(PyObject* type_obj) { int ret = 0; PyObject *object_reduce = NULL; PyObject *object_reduce_ex = NULL; PyObject *reduce = NULL; PyObject *reduce_ex = NULL; PyObject *reduce_cython = NULL; PyObject *setstate = NULL; PyObject *setstate_cython = NULL; #if CYTHON_USE_PYTYPE_LOOKUP if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto GOOD; #else if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto GOOD; #endif #if CYTHON_USE_PYTYPE_LOOKUP object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD; #else object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD; #endif reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto BAD; if (reduce_ex == object_reduce_ex) { #if CYTHON_USE_PYTYPE_LOOKUP object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD; #else object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD; #endif reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto BAD; if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { reduce_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_cython); if (unlikely(!reduce_cython)) goto BAD; ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto BAD; ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto BAD; setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); if (!setstate) PyErr_Clear(); if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { setstate_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate_cython); if (unlikely(!setstate_cython)) goto BAD; ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto BAD; ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto BAD; } PyType_Modified((PyTypeObject*)type_obj); } } goto GOOD; BAD: if (!PyErr_Occurred()) PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); ret = -1; GOOD: #if !CYTHON_USE_PYTYPE_LOOKUP Py_XDECREF(object_reduce); Py_XDECREF(object_reduce_ex); #endif Py_XDECREF(reduce); Py_XDECREF(reduce_ex); Py_XDECREF(reduce_cython); Py_XDECREF(setstate); Py_XDECREF(setstate_cython); return ret; } /* Import */ static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { 
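    /* Emulates the builtin __import__: on Python 3, level -1 first tries a
       package-relative import (only when this module lives inside a package)
       and falls back to an absolute import; on Python 2, the builtin
       __import__ is looked up on builtins and called directly. */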
PyObject *empty_list = 0; PyObject *module = 0; PyObject *global_dict = 0; PyObject *empty_dict = 0; PyObject *list; #if PY_MAJOR_VERSION < 3 PyObject *py_import; py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); if (!py_import) goto bad; #endif if (from_list) list = from_list; else { empty_list = PyList_New(0); if (!empty_list) goto bad; list = empty_list; } global_dict = PyModule_GetDict(__pyx_m); if (!global_dict) goto bad; empty_dict = PyDict_New(); if (!empty_dict) goto bad; { #if PY_MAJOR_VERSION >= 3 if (level == -1) { if (strchr(__Pyx_MODULE_NAME, '.')) { module = PyImport_ImportModuleLevelObject( name, global_dict, empty_dict, list, 1); if (!module) { if (!PyErr_ExceptionMatches(PyExc_ImportError)) goto bad; PyErr_Clear(); } } level = 0; } #endif if (!module) { #if PY_MAJOR_VERSION < 3 PyObject *py_level = PyInt_FromLong(level); if (!py_level) goto bad; module = PyObject_CallFunctionObjArgs(py_import, name, global_dict, empty_dict, list, py_level, NULL); Py_DECREF(py_level); #else module = PyImport_ImportModuleLevelObject( name, global_dict, empty_dict, list, level); #endif } } bad: #if PY_MAJOR_VERSION < 3 Py_XDECREF(py_import); #endif Py_XDECREF(empty_list); Py_XDECREF(empty_dict); return module; } /* ImportFrom */ static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { PyErr_Format(PyExc_ImportError, #if PY_MAJOR_VERSION < 3 "cannot import name %.230s", PyString_AS_STRING(name)); #else "cannot import name %S", name); #endif } return value; } /* CLineInTraceback */ #ifndef CYTHON_CLINE_IN_TRACEBACK static int __Pyx_CLineForTraceback(CYTHON_UNUSED PyThreadState *tstate, int c_line) { PyObject *use_cline; PyObject *ptype, *pvalue, *ptraceback; #if CYTHON_COMPILING_IN_CPYTHON PyObject **cython_runtime_dict; #endif __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); #if CYTHON_COMPILING_IN_CPYTHON cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); if (likely(cython_runtime_dict)) { use_cline = PyDict_GetItem(*cython_runtime_dict, __pyx_n_s_cline_in_traceback); } else #endif { PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); if (use_cline_obj) { use_cline = PyObject_Not(use_cline_obj) ? 
Py_False : Py_True; Py_DECREF(use_cline_obj); } else { PyErr_Clear(); use_cline = NULL; } } if (!use_cline) { c_line = 0; PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); } else if (PyObject_Not(use_cline) != 0) { c_line = 0; } __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); return c_line; } #endif /* CodeObjectCache */ static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { int start = 0, mid = 0, end = count - 1; if (end >= 0 && code_line > entries[end].code_line) { return count; } while (start < end) { mid = start + (end - start) / 2; if (code_line < entries[mid].code_line) { end = mid; } else if (code_line > entries[mid].code_line) { start = mid + 1; } else { return mid; } } if (code_line <= entries[mid].code_line) { return mid; } else { return mid + 1; } } static PyCodeObject *__pyx_find_code_object(int code_line) { PyCodeObject* code_object; int pos; if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { return NULL; } pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { return NULL; } code_object = __pyx_code_cache.entries[pos].code_object; Py_INCREF(code_object); return code_object; } static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { int pos, i; __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; if (unlikely(!code_line)) { return; } if (unlikely(!entries)) { entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); if (likely(entries)) { __pyx_code_cache.entries = entries; __pyx_code_cache.max_count = 64; __pyx_code_cache.count = 1; entries[0].code_line = code_line; entries[0].code_object = code_object; Py_INCREF(code_object); } return; } pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { PyCodeObject* tmp = entries[pos].code_object; entries[pos].code_object = code_object; Py_DECREF(tmp); return; } if (__pyx_code_cache.count == __pyx_code_cache.max_count) { int new_max = __pyx_code_cache.max_count + 64; entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry)); if (unlikely(!entries)) { return; } __pyx_code_cache.entries = entries; __pyx_code_cache.max_count = new_max; } for (i=__pyx_code_cache.count; i>pos; i--) { entries[i] = entries[i-1]; } entries[pos].code_line = code_line; entries[pos].code_object = code_object; __pyx_code_cache.count++; Py_INCREF(code_object); } /* AddTraceback */ #include "compile.h" #include "frameobject.h" #include "traceback.h" static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( const char *funcname, int c_line, int py_line, const char *filename) { PyCodeObject *py_code = 0; PyObject *py_srcfile = 0; PyObject *py_funcname = 0; #if PY_MAJOR_VERSION < 3 py_srcfile = PyString_FromString(filename); #else py_srcfile = PyUnicode_FromString(filename); #endif if (!py_srcfile) goto bad; if (c_line) { #if PY_MAJOR_VERSION < 3 py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); #else py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); #endif } else { #if PY_MAJOR_VERSION < 3 py_funcname = PyString_FromString(funcname); #else py_funcname = 
PyUnicode_FromString(funcname); #endif } if (!py_funcname) goto bad; py_code = __Pyx_PyCode_New( 0, 0, 0, 0, 0, __pyx_empty_bytes, /*PyObject *code,*/ __pyx_empty_tuple, /*PyObject *consts,*/ __pyx_empty_tuple, /*PyObject *names,*/ __pyx_empty_tuple, /*PyObject *varnames,*/ __pyx_empty_tuple, /*PyObject *freevars,*/ __pyx_empty_tuple, /*PyObject *cellvars,*/ py_srcfile, /*PyObject *filename,*/ py_funcname, /*PyObject *name,*/ py_line, __pyx_empty_bytes /*PyObject *lnotab*/ ); Py_DECREF(py_srcfile); Py_DECREF(py_funcname); return py_code; bad: Py_XDECREF(py_srcfile); Py_XDECREF(py_funcname); return NULL; } static void __Pyx_AddTraceback(const char *funcname, int c_line, int py_line, const char *filename) { PyCodeObject *py_code = 0; PyFrameObject *py_frame = 0; PyThreadState *tstate = __Pyx_PyThreadState_Current; if (c_line) { c_line = __Pyx_CLineForTraceback(tstate, c_line); } py_code = __pyx_find_code_object(c_line ? -c_line : py_line); if (!py_code) { py_code = __Pyx_CreateCodeObjectForTraceback( funcname, c_line, py_line, filename); if (!py_code) goto bad; __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); } py_frame = PyFrame_New( tstate, /*PyThreadState *tstate,*/ py_code, /*PyCodeObject *code,*/ __pyx_d, /*PyObject *globals,*/ 0 /*PyObject *locals*/ ); if (!py_frame) goto bad; __Pyx_PyFrame_SetLineNumber(py_frame, py_line); PyTraceBack_Here(py_frame); bad: Py_XDECREF(py_code); Py_XDECREF(py_frame); } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { const int neg_one = (int) -1, const_zero = (int) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(int) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(int) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(int) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(int), little, !is_unsigned); } } /* CIntFromPyVerify */ #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) #define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) #define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ {\ func_type value = func_value;\ if (sizeof(target_type) < sizeof(func_type)) {\ if (unlikely(value != (func_type) (target_type) value)) {\ func_type zero = 0;\ if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ return (target_type) -1;\ if (is_unsigned && unlikely(value < zero))\ goto raise_neg_overflow;\ else\ goto raise_overflow;\ }\ }\ return (target_type) value;\ } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint32_t(uint32_t value) { const uint32_t neg_one = (uint32_t) -1, const_zero = (uint32_t) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(uint32_t) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(uint32_t) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef 
HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(uint32_t) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(uint32_t), little, !is_unsigned); } } /* CIntFromPy */ static CYTHON_INLINE uint32_t __Pyx_PyInt_As_uint32_t(PyObject *x) { const uint32_t neg_one = (uint32_t) -1, const_zero = (uint32_t) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(uint32_t) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(uint32_t, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if (is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (uint32_t) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (uint32_t) 0; case 1: __PYX_VERIFY_RETURN_INT(uint32_t, digit, digits[0]) case 2: if (8 * sizeof(uint32_t) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) >= 2 * PyLong_SHIFT) { return (uint32_t) (((((uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0])); } } break; case 3: if (8 * sizeof(uint32_t) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) >= 3 * PyLong_SHIFT) { return (uint32_t) (((((((uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0])); } } break; case 4: if (8 * sizeof(uint32_t) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) >= 4 * PyLong_SHIFT) { return (uint32_t) (((((((((uint32_t)digits[3]) << PyLong_SHIFT) | (uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (uint32_t) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(uint32_t) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (uint32_t) 0; case -1: __PYX_VERIFY_RETURN_INT(uint32_t, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: 
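 /* a single positive PyLong digit (15 or 30 bits); range-checked by the macro */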
__PYX_VERIFY_RETURN_INT(uint32_t, digit, +digits[0]) case -2: if (8 * sizeof(uint32_t) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 2 * PyLong_SHIFT) { return (uint32_t) (((uint32_t)-1)*(((((uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case 2: if (8 * sizeof(uint32_t) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 2 * PyLong_SHIFT) { return (uint32_t) ((((((uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case -3: if (8 * sizeof(uint32_t) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 3 * PyLong_SHIFT) { return (uint32_t) (((uint32_t)-1)*(((((((uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case 3: if (8 * sizeof(uint32_t) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 3 * PyLong_SHIFT) { return (uint32_t) ((((((((uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case -4: if (8 * sizeof(uint32_t) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 4 * PyLong_SHIFT) { return (uint32_t) (((uint32_t)-1)*(((((((((uint32_t)digits[3]) << PyLong_SHIFT) | (uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; case 4: if (8 * sizeof(uint32_t) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(uint32_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(uint32_t) - 1 > 4 * PyLong_SHIFT) { return (uint32_t) ((((((((((uint32_t)digits[3]) << PyLong_SHIFT) | (uint32_t)digits[2]) << PyLong_SHIFT) | (uint32_t)digits[1]) << PyLong_SHIFT) | (uint32_t)digits[0]))); } } break; } #endif if (sizeof(uint32_t) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(uint32_t) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(uint32_t, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else uint32_t val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { 
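 /* Python 2: __Pyx_PyNumber_IntOrLong may return a plain int here;
    coerce it to a long so _PyLong_AsByteArray() below can consume it */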
PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (uint32_t) -1; } } else { uint32_t val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (uint32_t) -1; val = __Pyx_PyInt_As_uint32_t(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to uint32_t"); return (uint32_t) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to uint32_t"); return (uint32_t) -1; } /* CIntFromPy */ static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_As_unsigned_PY_LONG_LONG(PyObject *x) { const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG) -1, const_zero = (unsigned PY_LONG_LONG) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(unsigned PY_LONG_LONG) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if (is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (unsigned PY_LONG_LONG) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (unsigned PY_LONG_LONG) 0; case 1: __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, digit, digits[0]) case 2: if (8 * sizeof(unsigned PY_LONG_LONG) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) >= 2 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); } } break; case 3: if (8 * sizeof(unsigned PY_LONG_LONG) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) >= 3 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); } } break; case 4: if (8 * sizeof(unsigned PY_LONG_LONG) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) >= 4 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (unsigned 
PY_LONG_LONG) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(unsigned PY_LONG_LONG) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(unsigned PY_LONG_LONG, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(unsigned PY_LONG_LONG) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(unsigned PY_LONG_LONG, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (unsigned PY_LONG_LONG) 0; case -1: __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, digit, +digits[0]) case -2: if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) (((unsigned PY_LONG_LONG)-1)*(((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]))); } } break; case 2: if (8 * sizeof(unsigned PY_LONG_LONG) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) ((((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]))); } } break; case -3: if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) (((unsigned PY_LONG_LONG)-1)*(((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]))); } } break; case 3: if (8 * sizeof(unsigned PY_LONG_LONG) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) ((((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]))); } } break; case -4: if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) (((unsigned PY_LONG_LONG)-1)*(((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << 
PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]))); } } break; case 4: if (8 * sizeof(unsigned PY_LONG_LONG) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(unsigned PY_LONG_LONG, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(unsigned PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { return (unsigned PY_LONG_LONG) ((((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]))); } } break; } #endif if (sizeof(unsigned PY_LONG_LONG) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(unsigned PY_LONG_LONG, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(unsigned PY_LONG_LONG) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(unsigned PY_LONG_LONG, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else unsigned PY_LONG_LONG val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (unsigned PY_LONG_LONG) -1; } } else { unsigned PY_LONG_LONG val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (unsigned PY_LONG_LONG) -1; val = __Pyx_PyInt_As_unsigned_PY_LONG_LONG(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to unsigned PY_LONG_LONG"); return (unsigned PY_LONG_LONG) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to unsigned PY_LONG_LONG"); return (unsigned PY_LONG_LONG) -1; } /* CIntToPy */ static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { const long neg_one = (long) -1, const_zero = (long) 0; const int is_unsigned = neg_one > const_zero; if (is_unsigned) { if (sizeof(long) < sizeof(long)) { return PyInt_FromLong((long) value); } else if (sizeof(long) <= sizeof(unsigned long)) { return PyLong_FromUnsignedLong((unsigned long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); #endif } } else { if (sizeof(long) <= sizeof(long)) { return PyInt_FromLong((long) value); #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { return PyLong_FromLongLong((PY_LONG_LONG) value); #endif } } { int one = 1; int little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&value; return _PyLong_FromByteArray(bytes, sizeof(long), little, !is_unsigned); } } /* CIntFromPy */ static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { const long neg_one = (long) -1, const_zero = (long) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(long) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if 
(is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (long) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (long) 0; case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) case 2: if (8 * sizeof(long) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); } } break; case 3: if (8 * sizeof(long) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); } } break; case 4: if (8 * sizeof(long) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (long) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(long) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (long) 0; case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) case -2: if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case 2: if (8 * sizeof(long) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case -3: if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * 
sizeof(long) - 1 > 3 * PyLong_SHIFT) { return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case 3: if (8 * sizeof(long) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case -4: if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; case 4: if (8 * sizeof(long) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); } } break; } #endif if (sizeof(long) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else long val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (long) -1; } } else { long val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (long) -1; val = __Pyx_PyInt_As_long(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to long"); return (long) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to long"); return (long) -1; } /* CIntFromPy */ static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { const int neg_one = (int) -1, const_zero = (int) 0; const int is_unsigned = neg_one > const_zero; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x))) { if (sizeof(int) < sizeof(long)) { __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) } else { long val = PyInt_AS_LONG(x); if (is_unsigned && unlikely(val < 0)) { goto raise_neg_overflow; } return (int) val; } } else #endif if (likely(PyLong_Check(x))) { if (is_unsigned) { #if CYTHON_USE_PYLONG_INTERNALS const digit* 
digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (int) 0; case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) case 2: if (8 * sizeof(int) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); } } break; case 3: if (8 * sizeof(int) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); } } break; case 4: if (8 * sizeof(int) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); } } break; } #endif #if CYTHON_COMPILING_IN_CPYTHON if (unlikely(Py_SIZE(x) < 0)) { goto raise_neg_overflow; } #else { int result = PyObject_RichCompareBool(x, Py_False, Py_LT); if (unlikely(result < 0)) return (int) -1; if (unlikely(result == 1)) goto raise_neg_overflow; } #endif if (sizeof(int) <= sizeof(unsigned long)) { __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) #endif } } else { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)x)->ob_digit; switch (Py_SIZE(x)) { case 0: return (int) 0; case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) case -2: if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case 2: if (8 * sizeof(int) > 1 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case -3: if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case 3: if (8 * sizeof(int) > 2 * PyLong_SHIFT) { if (8 * sizeof(unsigned 
long) > 3 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case -4: if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; case 4: if (8 * sizeof(int) > 3 * PyLong_SHIFT) { if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); } } break; } #endif if (sizeof(int) <= sizeof(long)) { __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) #ifdef HAVE_LONG_LONG } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) #endif } } { #if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) PyErr_SetString(PyExc_RuntimeError, "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); #else int val; PyObject *v = __Pyx_PyNumber_IntOrLong(x); #if PY_MAJOR_VERSION < 3 if (likely(v) && !PyLong_Check(v)) { PyObject *tmp = v; v = PyNumber_Long(tmp); Py_DECREF(tmp); } #endif if (likely(v)) { int one = 1; int is_little = (int)*(unsigned char *)&one; unsigned char *bytes = (unsigned char *)&val; int ret = _PyLong_AsByteArray((PyLongObject *)v, bytes, sizeof(val), is_little, !is_unsigned); Py_DECREF(v); if (likely(!ret)) return val; } #endif return (int) -1; } } else { int val; PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); if (!tmp) return (int) -1; val = __Pyx_PyInt_As_int(tmp); Py_DECREF(tmp); return val; } raise_overflow: PyErr_SetString(PyExc_OverflowError, "value too large to convert to int"); return (int) -1; raise_neg_overflow: PyErr_SetString(PyExc_OverflowError, "can't convert negative value to int"); return (int) -1; } /* FastTypeChecks */ #if CYTHON_COMPILING_IN_CPYTHON static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { while (a) { a = a->tp_base; if (a == b) return 1; } return b == &PyBaseObject_Type; } static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { PyObject *mro; if (a == b) return 1; mro = a->tp_mro; if (likely(mro)) { Py_ssize_t i, n; n = PyTuple_GET_SIZE(mro); for (i = 0; i < n; i++) { if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) return 1; } return 0; } return __Pyx_InBases(a, b); } #if PY_MAJOR_VERSION == 2 static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { PyObject *exception, *value, *tb; int res; __Pyx_PyThreadState_declare __Pyx_PyThreadState_assign __Pyx_ErrFetch(&exception, &value, &tb); res = exc_type1 ? 
PyObject_IsSubclass(err, exc_type1) : 0; if (unlikely(res == -1)) { PyErr_WriteUnraisable(err); res = 0; } if (!res) { res = PyObject_IsSubclass(err, exc_type2); if (unlikely(res == -1)) { PyErr_WriteUnraisable(err); res = 0; } } __Pyx_ErrRestore(exception, value, tb); return res; } #else static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; if (!res) { res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); } return res; } #endif static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { if (likely(err == exc_type)) return 1; if (likely(PyExceptionClass_Check(err))) { return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); } return PyErr_GivenExceptionMatches(err, exc_type); } static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { if (likely(err == exc_type1 || err == exc_type2)) return 1; if (likely(PyExceptionClass_Check(err))) { return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); } return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); } #endif /* CheckBinaryVersion */ static int __Pyx_check_binary_version(void) { char ctversion[4], rtversion[4]; PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { char message[200]; PyOS_snprintf(message, sizeof(message), "compiletime version %s of module '%.100s' " "does not match runtime version %s", ctversion, __Pyx_MODULE_NAME, rtversion); return PyErr_WarnEx(NULL, message, 1); } return 0; } /* ModuleImport */ #ifndef __PYX_HAVE_RT_ImportModule #define __PYX_HAVE_RT_ImportModule static PyObject *__Pyx_ImportModule(const char *name) { PyObject *py_name = 0; PyObject *py_module = 0; py_name = __Pyx_PyIdentifier_FromString(name); if (!py_name) goto bad; py_module = PyImport_Import(py_name); Py_DECREF(py_name); return py_module; bad: Py_XDECREF(py_name); return 0; } #endif /* TypeImport */ #ifndef __PYX_HAVE_RT_ImportType #define __PYX_HAVE_RT_ImportType static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict) { PyObject *py_module = 0; PyObject *result = 0; PyObject *py_name = 0; char warning[200]; Py_ssize_t basicsize; #ifdef Py_LIMITED_API PyObject *py_basicsize; #endif py_module = __Pyx_ImportModule(module_name); if (!py_module) goto bad; py_name = __Pyx_PyIdentifier_FromString(class_name); if (!py_name) goto bad; result = PyObject_GetAttr(py_module, py_name); Py_DECREF(py_name); py_name = 0; Py_DECREF(py_module); py_module = 0; if (!result) goto bad; if (!PyType_Check(result)) { PyErr_Format(PyExc_TypeError, "%.200s.%.200s is not a type object", module_name, class_name); goto bad; } #ifndef Py_LIMITED_API basicsize = ((PyTypeObject *)result)->tp_basicsize; #else py_basicsize = PyObject_GetAttrString(result, "__basicsize__"); if (!py_basicsize) goto bad; basicsize = PyLong_AsSsize_t(py_basicsize); Py_DECREF(py_basicsize); py_basicsize = 0; if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred()) goto bad; #endif if (!strict && (size_t)basicsize > size) { PyOS_snprintf(warning, sizeof(warning), "%s.%s size changed, may indicate binary incompatibility. 
Expected %zd, got %zd", module_name, class_name, basicsize, size); if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad; } else if ((size_t)basicsize != size) { PyErr_Format(PyExc_ValueError, "%.200s.%.200s has the wrong size, try recompiling. Expected %zd, got %zd", module_name, class_name, basicsize, size); goto bad; } return (PyTypeObject *)result; bad: Py_XDECREF(py_module); Py_XDECREF(result); return NULL; } #endif /* InitStrings */ static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { while (t->p) { #if PY_MAJOR_VERSION < 3 if (t->is_unicode) { *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); } else if (t->intern) { *t->p = PyString_InternFromString(t->s); } else { *t->p = PyString_FromStringAndSize(t->s, t->n - 1); } #else if (t->is_unicode | t->is_str) { if (t->intern) { *t->p = PyUnicode_InternFromString(t->s); } else if (t->encoding) { *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); } else { *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); } } else { *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); } #endif if (!*t->p) return -1; if (PyObject_Hash(*t->p) == -1) PyErr_Clear(); ++t; } return 0; } static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); } static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { Py_ssize_t ignore; return __Pyx_PyObject_AsStringAndSize(o, &ignore); } #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT #if !CYTHON_PEP393_ENABLED static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { char* defenc_c; PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); if (!defenc) return NULL; defenc_c = PyBytes_AS_STRING(defenc); #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII { char* end = defenc_c + PyBytes_GET_SIZE(defenc); char* c; for (c = defenc_c; c < end; c++) { if ((unsigned char) (*c) >= 128) { PyUnicode_AsASCIIString(o); return NULL; } } } #endif *length = PyBytes_GET_SIZE(defenc); return defenc_c; } #else static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII if (likely(PyUnicode_IS_ASCII(o))) { *length = PyUnicode_GET_LENGTH(o); return PyUnicode_AsUTF8(o); } else { PyUnicode_AsASCIIString(o); return NULL; } #else return PyUnicode_AsUTF8AndSize(o, length); #endif } #endif #endif static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT if ( #if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII __Pyx_sys_getdefaultencoding_not_ascii && #endif PyUnicode_Check(o)) { return __Pyx_PyUnicode_AsStringAndSize(o, length); } else #endif #if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) if (PyByteArray_Check(o)) { *length = PyByteArray_GET_SIZE(o); return PyByteArray_AS_STRING(o); } else #endif { char* result; int r = PyBytes_AsStringAndSize(o, &result, length); if (unlikely(r < 0)) { return NULL; } else { return result; } } } static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { int is_true = x == Py_True; if (is_true | (x == Py_False) | (x == Py_None)) return is_true; else return PyObject_IsTrue(x); } static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { #if PY_MAJOR_VERSION >= 3 
if (PyLong_Check(result)) { if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, "__int__ returned non-int (type %.200s). " "The ability to return an instance of a strict subclass of int " "is deprecated, and may be removed in a future version of Python.", Py_TYPE(result)->tp_name)) { Py_DECREF(result); return NULL; } return result; } #endif PyErr_Format(PyExc_TypeError, "__%.4s__ returned non-%.4s (type %.200s)", type_name, type_name, Py_TYPE(result)->tp_name); Py_DECREF(result); return NULL; } static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { #if CYTHON_USE_TYPE_SLOTS PyNumberMethods *m; #endif const char *name = NULL; PyObject *res = NULL; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_Check(x) || PyLong_Check(x))) #else if (likely(PyLong_Check(x))) #endif return __Pyx_NewRef(x); #if CYTHON_USE_TYPE_SLOTS m = Py_TYPE(x)->tp_as_number; #if PY_MAJOR_VERSION < 3 if (m && m->nb_int) { name = "int"; res = m->nb_int(x); } else if (m && m->nb_long) { name = "long"; res = m->nb_long(x); } #else if (likely(m && m->nb_int)) { name = "int"; res = m->nb_int(x); } #endif #else if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { res = PyNumber_Int(x); } #endif if (likely(res)) { #if PY_MAJOR_VERSION < 3 if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { #else if (unlikely(!PyLong_CheckExact(res))) { #endif return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); } } else if (!PyErr_Occurred()) { PyErr_SetString(PyExc_TypeError, "an integer is required"); } return res; } static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { Py_ssize_t ival; PyObject *x; #if PY_MAJOR_VERSION < 3 if (likely(PyInt_CheckExact(b))) { if (sizeof(Py_ssize_t) >= sizeof(long)) return PyInt_AS_LONG(b); else return PyInt_AsSsize_t(b); } #endif if (likely(PyLong_CheckExact(b))) { #if CYTHON_USE_PYLONG_INTERNALS const digit* digits = ((PyLongObject*)b)->ob_digit; const Py_ssize_t size = Py_SIZE(b); if (likely(__Pyx_sst_abs(size) <= 1)) { ival = likely(size) ?
digits[0] : 0; if (size == -1) ival = -ival; return ival; } else { switch (size) { case 2: if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case -2: if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case 3: if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case -3: if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case 4: if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; case -4: if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); } break; } } #endif return PyLong_AsSsize_t(b); } x = PyNumber_Index(b); if (!x) return -1; ival = PyInt_AsSsize_t(x); Py_DECREF(x); return ival; } static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { return PyInt_FromSize_t(ival); } #endif /* Py_PYTHON_H */ borgbackup-1.1.5/src/borg/algorithms/checksums.pyx0000644000175000017500000000614513257361140021373 0ustar twtw00000000000000from ..helpers import bin_to_hex from libc.stdint cimport uint32_t from cpython.buffer cimport PyBUF_SIMPLE, PyObject_GetBuffer, PyBuffer_Release from cpython.bytes cimport PyBytes_FromStringAndSize cdef extern from "crc32_dispatch.c": uint32_t _crc32_slice_by_8 "crc32_slice_by_8"(const void* data, size_t length, uint32_t initial_crc) uint32_t _crc32_clmul "crc32_clmul"(const void* data, size_t length, uint32_t initial_crc) int _have_clmul "have_clmul"() cdef extern from "xxh64/xxhash.c": ctypedef struct XXH64_canonical_t: char digest[8] ctypedef struct XXH64_state_t: pass # opaque ctypedef unsigned long long XXH64_hash_t ctypedef enum XXH_errorcode: XXH_OK, XXH_ERROR XXH64_hash_t XXH64(const void* input, size_t length, unsigned long long seed); XXH_errorcode XXH64_reset(XXH64_state_t* statePtr, unsigned long long seed); XXH_errorcode XXH64_update(XXH64_state_t* statePtr, const void* input, size_t length); XXH64_hash_t XXH64_digest(const XXH64_state_t* statePtr); void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash); XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src); cdef Py_buffer ro_buffer(object data) except *: cdef Py_buffer view PyObject_GetBuffer(data, &view, PyBUF_SIMPLE) return view def crc32_slice_by_8(data, value=0): cdef Py_buffer data_buf = ro_buffer(data) cdef uint32_t val = value try: return _crc32_slice_by_8(data_buf.buf, data_buf.len, val) finally: PyBuffer_Release(&data_buf) def crc32_clmul(data, value=0): cdef Py_buffer data_buf = ro_buffer(data) cdef uint32_t val = value try: return _crc32_clmul(data_buf.buf, data_buf.len, val) finally: PyBuffer_Release(&data_buf) have_clmul = _have_clmul() if have_clmul: crc32 = crc32_clmul else: crc32 = crc32_slice_by_8 def xxh64(data, seed=0): cdef unsigned long long _seed = seed cdef XXH64_hash_t hash cdef XXH64_canonical_t digest cdef Py_buffer data_buf = ro_buffer(data) try: hash = XXH64(data_buf.buf, 
data_buf.len, _seed) finally: PyBuffer_Release(&data_buf) XXH64_canonicalFromHash(&digest, hash) return PyBytes_FromStringAndSize( digest.digest, 8) cdef class StreamingXXH64: cdef XXH64_state_t state def __cinit__(self, seed=0): cdef unsigned long long _seed = seed if XXH64_reset(&self.state, _seed) != XXH_OK: raise Exception('XXH64_reset failed') def update(self, data): cdef Py_buffer data_buf = ro_buffer(data) try: if XXH64_update(&self.state, data_buf.buf, data_buf.len) != XXH_OK: raise Exception('XXH64_update failed') finally: PyBuffer_Release(&data_buf) def digest(self): cdef XXH64_hash_t hash cdef XXH64_canonical_t digest hash = XXH64_digest(&self.state) XXH64_canonicalFromHash(&digest, hash) return PyBytes_FromStringAndSize( digest.digest, 8) def hexdigest(self): return bin_to_hex(self.digest()) borgbackup-1.1.5/src/borg/algorithms/xxh64/0000755000175000017500000000000013260002401017601 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/xxh64/xxhash.h0000644000175000017500000002346313257361140021303 0ustar twtw00000000000000/* xxHash - Extremely Fast Hash algorithm Header File Copyright (C) 2012-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - xxHash source repository : https://github.com/Cyan4973/xxHash */ /* Notice extracted from xxHash homepage : xxHash is an extremely fast Hash algorithm, running at RAM speed limits. It also successfully passes all tests from the SMHasher suite. Comparison (single thread, Windows Seven 32 bits, using SMHasher on a Core 2 Duo @3GHz) Name Speed Q.Score Author xxHash 5.4 GB/s 10 CrapWow 3.2 GB/s 2 Andrew MumurHash 3a 2.7 GB/s 10 Austin Appleby SpookyHash 2.0 GB/s 10 Bob Jenkins SBox 1.4 GB/s 9 Bret Mulvey Lookup3 1.2 GB/s 9 Bob Jenkins SuperFastHash 1.2 GB/s 1 Paul Hsieh CityHash64 1.05 GB/s 10 Pike & Alakuijala FNV 0.55 GB/s 5 Fowler, Noll, Vo CRC32 0.43 GB/s 9 MD5-32 0.33 GB/s 10 Ronald L. Rivest SHA1-32 0.28 GB/s 10 Q.Score is a measure of quality of the hash function. It depends on successfully passing SMHasher test set. 10 is a perfect score. A 64-bits version, named XXH64, is available since r35. It offers much better speed, but for 64-bits applications only. 
Name Speed on 64 bits Speed on 32 bits XXH64 13.8 GB/s 1.9 GB/s XXH32 6.8 GB/s 6.0 GB/s */ #ifndef XXHASH_H_5627135585666179 #define XXHASH_H_5627135585666179 1 #define XXH_STATIC_LINKING_ONLY #if defined (__cplusplus) extern "C" { #endif /* **************************** * Compiler specifics ******************************/ #if !(defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)) /* ! C99 */ # define restrict /* disable restrict */ #endif /* **************************** * Definitions ******************************/ #include <stddef.h> /* size_t */ typedef enum { XXH_OK=0, XXH_ERROR } XXH_errorcode; /* **************************** * API modifier ******************************/ /** XXH_PRIVATE_API * This is useful to include xxhash functions in `static` mode * in order to inline them, and remove their symbol from the public list. * Methodology : * #define XXH_PRIVATE_API * #include "xxhash.h" * `xxhash.c` is automatically included. * It's not useful to compile and link it as a separate module. */ #ifdef XXH_PRIVATE_API # ifndef XXH_STATIC_LINKING_ONLY # define XXH_STATIC_LINKING_ONLY # endif # if defined(__GNUC__) # define XXH_PUBLIC_API static __inline __attribute__((unused)) # elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) # define XXH_PUBLIC_API static inline # elif defined(_MSC_VER) # define XXH_PUBLIC_API static __inline # else # define XXH_PUBLIC_API static /* this version may generate warnings for unused static functions; disable the relevant warning */ # endif #else # define XXH_PUBLIC_API /* do nothing */ #endif /* XXH_PRIVATE_API */ /*!XXH_NAMESPACE, aka Namespace Emulation : If you want to include _and expose_ xxHash functions from within your own library, but also want to avoid symbol collisions with other libraries which may also include xxHash, you can use XXH_NAMESPACE, to automatically prefix any public symbol from xxhash library with the value of XXH_NAMESPACE (therefore, avoid NULL and numeric values). Note that no change is required within the calling program as long as it includes `xxhash.h` : regular symbol name will be automatically translated by this header.
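An illustrative example (not from the upstream text): building with -DXXH_NAMESPACE=BORG_ makes the library export the linker symbol BORG_XXH64 instead of XXH64, while calling code keeps writing XXH64(...), because the #define lines below rewrite the names during preprocessing.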
*/ #ifdef XXH_NAMESPACE # define XXH_CAT(A,B) A##B # define XXH_NAME2(A,B) XXH_CAT(A,B) # define XXH_versionNumber XXH_NAME2(XXH_NAMESPACE, XXH_versionNumber) # define XXH32 XXH_NAME2(XXH_NAMESPACE, XXH32) # define XXH32_createState XXH_NAME2(XXH_NAMESPACE, XXH32_createState) # define XXH32_freeState XXH_NAME2(XXH_NAMESPACE, XXH32_freeState) # define XXH32_reset XXH_NAME2(XXH_NAMESPACE, XXH32_reset) # define XXH32_update XXH_NAME2(XXH_NAMESPACE, XXH32_update) # define XXH32_digest XXH_NAME2(XXH_NAMESPACE, XXH32_digest) # define XXH32_copyState XXH_NAME2(XXH_NAMESPACE, XXH32_copyState) # define XXH32_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH32_canonicalFromHash) # define XXH32_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH32_hashFromCanonical) # define XXH64 XXH_NAME2(XXH_NAMESPACE, XXH64) # define XXH64_createState XXH_NAME2(XXH_NAMESPACE, XXH64_createState) # define XXH64_freeState XXH_NAME2(XXH_NAMESPACE, XXH64_freeState) # define XXH64_reset XXH_NAME2(XXH_NAMESPACE, XXH64_reset) # define XXH64_update XXH_NAME2(XXH_NAMESPACE, XXH64_update) # define XXH64_digest XXH_NAME2(XXH_NAMESPACE, XXH64_digest) # define XXH64_copyState XXH_NAME2(XXH_NAMESPACE, XXH64_copyState) # define XXH64_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH64_canonicalFromHash) # define XXH64_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH64_hashFromCanonical) #endif /* ************************************* * Version ***************************************/ #define XXH_VERSION_MAJOR 0 #define XXH_VERSION_MINOR 6 #define XXH_VERSION_RELEASE 2 #define XXH_VERSION_NUMBER (XXH_VERSION_MAJOR *100*100 + XXH_VERSION_MINOR *100 + XXH_VERSION_RELEASE) XXH_PUBLIC_API unsigned XXH_versionNumber (void); #ifndef XXH_NO_LONG_LONG /*-********************************************************************** * 64-bits hash ************************************************************************/ typedef unsigned long long XXH64_hash_t; /*! XXH64() : Calculate the 64-bits hash of sequence of length "len" stored at memory address "input". "seed" can be used to alter the result predictably. This function runs faster on 64-bits systems, but slower on 32-bits systems (see benchmark). */ XXH_PUBLIC_API XXH64_hash_t XXH64 (const void* input, size_t length, unsigned long long seed); /*====== Streaming ======*/ typedef struct XXH64_state_s XXH64_state_t; /* incomplete type */ XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void); XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr); XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* restrict dst_state, const XXH64_state_t* restrict src_state); XXH_PUBLIC_API XXH_errorcode XXH64_reset (XXH64_state_t* statePtr, unsigned long long seed); XXH_PUBLIC_API XXH_errorcode XXH64_update (XXH64_state_t* statePtr, const void* input, size_t length); XXH_PUBLIC_API XXH64_hash_t XXH64_digest (const XXH64_state_t* statePtr); /*====== Canonical representation ======*/ typedef struct { unsigned char digest[8]; } XXH64_canonical_t; XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash); XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src); #endif /* XXH_NO_LONG_LONG */ #ifdef XXH_STATIC_LINKING_ONLY /* ================================================================================================ This section contains definitions which are not guaranteed to remain stable. They may change in future versions, becoming incompatible with a different version of the library. They shall only be used with static linking. 
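(The reason: the struct layouts below are not a stable ABI; a dynamically linked binary compiled against one version's layout could read or write the wrong bytes when it loads a library built with another.)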
Never use these definitions in association with dynamic linking ! =================================================================================================== */ /* These definitions are only meant to allow allocation of XXH state statically, on stack, or in a struct for example. Do not use members directly. */ struct XXH32_state_s { unsigned total_len_32; unsigned large_len; unsigned v1; unsigned v2; unsigned v3; unsigned v4; unsigned mem32[4]; /* buffer defined as U32 for alignment */ unsigned memsize; unsigned reserved; /* never read nor write, will be removed in a future version */ }; /* typedef'd to XXH32_state_t */ #ifndef XXH_NO_LONG_LONG struct XXH64_state_s { unsigned long long total_len; unsigned long long v1; unsigned long long v2; unsigned long long v3; unsigned long long v4; unsigned long long mem64[4]; /* buffer defined as U64 for alignment */ unsigned memsize; unsigned reserved[2]; /* never read nor write, will be removed in a future version */ }; /* typedef'd to XXH64_state_t */ #endif # ifdef XXH_PRIVATE_API # include "xxhash.c" /* include xxhash function bodies as `static`, for inlining */ # endif #endif /* XXH_STATIC_LINKING_ONLY */ #if defined (__cplusplus) } #endif #endif /* XXHASH_H_5627135585666179 */ borgbackup-1.1.5/src/borg/algorithms/xxh64/xxhash.c0000644000175000017500000005132113257361140021270 0ustar twtw00000000000000/* * xxHash - Fast Hash algorithm * Copyright (C) 2012-2016, Yann Collet * * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * You can contact the author at : * - xxHash homepage: http://www.xxhash.com * - xxHash source repository : https://github.com/Cyan4973/xxHash */ /* ************************************* * Tuning parameters ***************************************/ /*!XXH_FORCE_MEMORY_ACCESS : * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable. * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal. * The below switch allow to select different access method for improved performance. * Method 0 (default) : use `memcpy()`. Safe and portable. * Method 1 : `__packed` statement. It depends on compiler extension (ie, not portable). 
* This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`. * Method 2 : direct access. This method doesn't depend on compiler but violate C standard. * It can generate buggy code on targets which do not support unaligned memory accesses. * But in some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6) * See http://stackoverflow.com/a/32095106/646947 for details. * Prefer these methods in priority order (0 > 1 > 2) */ #ifndef XXH_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */ # if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) ) # define XXH_FORCE_MEMORY_ACCESS 2 # elif defined(__INTEL_COMPILER) || \ (defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) )) # define XXH_FORCE_MEMORY_ACCESS 1 # endif #endif /*!XXH_ACCEPT_NULL_INPUT_POINTER : * If the input pointer is a null pointer, xxHash default behavior is to trigger a memory access error, since it is a bad pointer. * When this option is enabled, xxHash output for null input pointers will be the same as a null-length input. * By default, this option is disabled. To enable it, uncomment below define : */ /* #define XXH_ACCEPT_NULL_INPUT_POINTER 1 */ /*!XXH_FORCE_NATIVE_FORMAT : * By default, xxHash library provides endian-independant Hash values, based on little-endian convention. * Results are therefore identical for little-endian and big-endian CPU. * This comes at a performance cost for big-endian CPU, since some swapping is required to emulate little-endian format. * Should endian-independance be of no importance for your application, you may set the #define below to 1, * to improve speed for Big-endian CPU. * This option has no impact on Little_Endian CPU. */ #ifndef XXH_FORCE_NATIVE_FORMAT /* can be defined externally */ # define XXH_FORCE_NATIVE_FORMAT 0 #endif /*!XXH_FORCE_ALIGN_CHECK : * This is a minor performance trick, only useful with lots of very small keys. * It means : check for aligned/unaligned input. * The check costs one initial branch per hash; set to 0 when the input data * is guaranteed to be aligned. 
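* As an illustrative example: a build that only ever hashes heap buffers from malloc(), which are suitably aligned, could pass -DXXH_FORCE_ALIGN_CHECK=0 on the compiler command line to drop that initial branch.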
*/ #ifndef XXH_FORCE_ALIGN_CHECK /* can be defined externally */ # if defined(__i386) || defined(_M_IX86) || defined(__x86_64__) || defined(_M_X64) # define XXH_FORCE_ALIGN_CHECK 0 # else # define XXH_FORCE_ALIGN_CHECK 1 # endif #endif /* ************************************* * Includes & Memory related functions ***************************************/ /* Modify the local functions below should you wish to use some other memory routines */ /* for malloc(), free() */ #include <stdlib.h> static void* XXH_malloc(size_t s) { return malloc(s); } static void XXH_free (void* p) { free(p); } /* for memcpy() */ #include <string.h> static void* XXH_memcpy(void* dest, const void* src, size_t size) { return memcpy(dest,src,size); } #define XXH_STATIC_LINKING_ONLY #include "xxhash.h" /* ************************************* * Compiler Specific Options ***************************************/ #ifdef _MSC_VER /* Visual Studio */ # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ # define FORCE_INLINE static __forceinline #else # if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */ # ifdef __GNUC__ # define FORCE_INLINE static inline __attribute__((always_inline)) # else # define FORCE_INLINE static inline # endif # else # define FORCE_INLINE static # endif /* __STDC_VERSION__ */ #endif /* ************************************* * Basic Types ***************************************/ #ifndef MEM_MODULE # if !defined (__VMS) && (defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) ) # include <stdint.h> typedef uint8_t BYTE; typedef uint16_t U16; typedef uint32_t U32; typedef int32_t S32; # else typedef unsigned char BYTE; typedef unsigned short U16; typedef unsigned int U32; typedef signed int S32; # endif #endif #if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2)) /* Force direct memory access. Only works on CPU which support unaligned memory access in hardware */ static U32 XXH_read32(const void* memPtr) { return *(const U32*) memPtr; } #elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1)) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U32 u32; } __attribute__((packed)) unalign; static U32 XXH_read32(const void* ptr) { return ((const unalign*)ptr)->u32; } #else /* portable and safe solution. Generally efficient.
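* (Note: optimizing compilers typically recognize the fixed-size memcpy() in XXH_read32() below and lower it to a single load instruction, so the portable variant usually has no runtime cost.)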
* see : http://stackoverflow.com/a/32095106/646947 */ static U32 XXH_read32(const void* memPtr) { U32 val; memcpy(&val, memPtr, sizeof(val)); return val; } #endif /* XXH_FORCE_DIRECT_MEMORY_ACCESS */ /* **************************************** * Compiler-specific Functions and Macros ******************************************/ #define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__) /* Note : although _rotl exists for minGW (GCC under windows), performance seems poor */ #if defined(_MSC_VER) # define XXH_rotl32(x,r) _rotl(x,r) # define XXH_rotl64(x,r) _rotl64(x,r) #else # define XXH_rotl32(x,r) ((x << r) | (x >> (32 - r))) # define XXH_rotl64(x,r) ((x << r) | (x >> (64 - r))) #endif #if defined(_MSC_VER) /* Visual Studio */ # define XXH_swap32 _byteswap_ulong #elif GCC_VERSION >= 403 # define XXH_swap32 __builtin_bswap32 #else static U32 XXH_swap32 (U32 x) { return ((x << 24) & 0xff000000 ) | ((x << 8) & 0x00ff0000 ) | ((x >> 8) & 0x0000ff00 ) | ((x >> 24) & 0x000000ff ); } #endif /* ************************************* * Architecture Macros ***************************************/ typedef enum { XXH_bigEndian=0, XXH_littleEndian=1 } XXH_endianess; /* XXH_CPU_LITTLE_ENDIAN can be defined externally, for example on the compiler command line */ #ifndef XXH_CPU_LITTLE_ENDIAN static const int g_one = 1; # define XXH_CPU_LITTLE_ENDIAN (*(const char*)(&g_one)) #endif /* *************************** * Memory reads *****************************/ typedef enum { XXH_aligned, XXH_unaligned } XXH_alignment; FORCE_INLINE U32 XXH_readLE32_align(const void* ptr, XXH_endianess endian, XXH_alignment align) { if (align==XXH_unaligned) return endian==XXH_littleEndian ? XXH_read32(ptr) : XXH_swap32(XXH_read32(ptr)); else return endian==XXH_littleEndian ? *(const U32*)ptr : XXH_swap32(*(const U32*)ptr); } FORCE_INLINE U32 XXH_readLE32(const void* ptr, XXH_endianess endian) { return XXH_readLE32_align(ptr, endian, XXH_unaligned); } /* ************************************* * Macros ***************************************/ #define XXH_STATIC_ASSERT(c) { enum { XXH_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */ XXH_PUBLIC_API unsigned XXH_versionNumber (void) { return XXH_VERSION_NUMBER; } #ifndef XXH_NO_LONG_LONG /* ******************************************************************* * 64-bits hash functions *********************************************************************/ #define XXH_get32bits(p) XXH_readLE32_align(p, endian, align) /*====== Memory access ======*/ #ifndef MEM_MODULE # define MEM_MODULE # if !defined (__VMS) && (defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) ) # include <stdint.h> typedef uint64_t U64; # else typedef unsigned long long U64; /* if your compiler doesn't support unsigned long long, replace by another 64-bit type here. Note that xxhash.h will also need to be updated. */ # endif #endif #if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2)) /* Force direct memory access.
Only works on CPU which support unaligned memory access in hardware */ static U64 XXH_read64(const void* memPtr) { return *(const U64*) memPtr; } #elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1)) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U32 u32; U64 u64; } __attribute__((packed)) unalign64; static U64 XXH_read64(const void* ptr) { return ((const unalign64*)ptr)->u64; } #else /* portable and safe solution. Generally efficient. * see : http://stackoverflow.com/a/32095106/646947 */ static U64 XXH_read64(const void* memPtr) { U64 val; memcpy(&val, memPtr, sizeof(val)); return val; } #endif /* XXH_FORCE_DIRECT_MEMORY_ACCESS */ #if defined(_MSC_VER) /* Visual Studio */ # define XXH_swap64 _byteswap_uint64 #elif GCC_VERSION >= 403 # define XXH_swap64 __builtin_bswap64 #else static U64 XXH_swap64 (U64 x) { return ((x << 56) & 0xff00000000000000ULL) | ((x << 40) & 0x00ff000000000000ULL) | ((x << 24) & 0x0000ff0000000000ULL) | ((x << 8) & 0x000000ff00000000ULL) | ((x >> 8) & 0x00000000ff000000ULL) | ((x >> 24) & 0x0000000000ff0000ULL) | ((x >> 40) & 0x000000000000ff00ULL) | ((x >> 56) & 0x00000000000000ffULL); } #endif FORCE_INLINE U64 XXH_readLE64_align(const void* ptr, XXH_endianess endian, XXH_alignment align) { if (align==XXH_unaligned) return endian==XXH_littleEndian ? XXH_read64(ptr) : XXH_swap64(XXH_read64(ptr)); else return endian==XXH_littleEndian ? *(const U64*)ptr : XXH_swap64(*(const U64*)ptr); } FORCE_INLINE U64 XXH_readLE64(const void* ptr, XXH_endianess endian) { return XXH_readLE64_align(ptr, endian, XXH_unaligned); } static U64 XXH_readBE64(const void* ptr) { return XXH_CPU_LITTLE_ENDIAN ? XXH_swap64(XXH_read64(ptr)) : XXH_read64(ptr); } /*====== xxh64 ======*/ static const U64 PRIME64_1 = 11400714785074694791ULL; static const U64 PRIME64_2 = 14029467366897019727ULL; static const U64 PRIME64_3 = 1609587929392839161ULL; static const U64 PRIME64_4 = 9650029242287828579ULL; static const U64 PRIME64_5 = 2870177450012600261ULL; static U64 XXH64_round(U64 acc, U64 input) { acc += input * PRIME64_2; acc = XXH_rotl64(acc, 31); acc *= PRIME64_1; return acc; } static U64 XXH64_mergeRound(U64 acc, U64 val) { val = XXH64_round(0, val); acc ^= val; acc = acc * PRIME64_1 + PRIME64_4; return acc; } FORCE_INLINE U64 XXH64_endian_align(const void* input, size_t len, U64 seed, XXH_endianess endian, XXH_alignment align) { const BYTE* p = (const BYTE*)input; const BYTE* const bEnd = p + len; U64 h64; #define XXH_get64bits(p) XXH_readLE64_align(p, endian, align) #ifdef XXH_ACCEPT_NULL_INPUT_POINTER if (p==NULL) { len=0; bEnd=p=(const BYTE*)(size_t)32; } #endif if (len>=32) { const BYTE* const limit = bEnd - 32; U64 v1 = seed + PRIME64_1 + PRIME64_2; U64 v2 = seed + PRIME64_2; U64 v3 = seed + 0; U64 v4 = seed - PRIME64_1; do { v1 = XXH64_round(v1, XXH_get64bits(p)); p+=8; v2 = XXH64_round(v2, XXH_get64bits(p)); p+=8; v3 = XXH64_round(v3, XXH_get64bits(p)); p+=8; v4 = XXH64_round(v4, XXH_get64bits(p)); p+=8; } while (p<=limit); h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18); h64 = XXH64_mergeRound(h64, v1); h64 = XXH64_mergeRound(h64, v2); h64 = XXH64_mergeRound(h64, v3); h64 = XXH64_mergeRound(h64, v4); } else { h64 = seed + PRIME64_5; } h64 += (U64) len; while (p+8<=bEnd) { U64 const k1 = XXH64_round(0, XXH_get64bits(p)); h64 ^= k1; h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4; p+=8; } if (p+4<=bEnd) { h64 ^= 
(U64)(XXH_get32bits(p)) * PRIME64_1; h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3; p+=4; } while (p<bEnd) { h64 ^= (*p) * PRIME64_5; h64 = XXH_rotl64(h64, 11) * PRIME64_1; p++; } h64 ^= h64 >> 33; h64 *= PRIME64_2; h64 ^= h64 >> 29; h64 *= PRIME64_3; h64 ^= h64 >> 32; return h64; } XXH_PUBLIC_API unsigned long long XXH64 (const void* input, size_t len, unsigned long long seed) { #if 0 /* Simple version, good for code maintenance, but unfortunately slow for small inputs */ XXH64_CREATESTATE_STATIC(state); XXH64_reset(state, seed); XXH64_update(state, input, len); return XXH64_digest(state); #else XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN; if (XXH_FORCE_ALIGN_CHECK) { if ((((size_t)input) & 7)==0) { /* Input is aligned, let's leverage the speed advantage */ if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT) return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_aligned); else return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_aligned); } } if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT) return XXH64_endian_align(input, len, seed, XXH_littleEndian, XXH_unaligned); else return XXH64_endian_align(input, len, seed, XXH_bigEndian, XXH_unaligned); #endif } /*====== Hash Streaming ======*/ XXH_PUBLIC_API XXH64_state_t* XXH64_createState(void) { return (XXH64_state_t*)XXH_malloc(sizeof(XXH64_state_t)); } XXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr) { XXH_free(statePtr); return XXH_OK; } XXH_PUBLIC_API void XXH64_copyState(XXH64_state_t* restrict dstState, const XXH64_state_t* restrict srcState) { memcpy(dstState, srcState, sizeof(*dstState)); } XXH_PUBLIC_API XXH_errorcode XXH64_reset(XXH64_state_t* statePtr, unsigned long long seed) { XXH64_state_t state; /* using a local state to memcpy() in order to avoid strict-aliasing warnings */ memset(&state, 0, sizeof(state)-8); /* do not write into reserved, for future removal */ state.v1 = seed + PRIME64_1 + PRIME64_2; state.v2 = seed + PRIME64_2; state.v3 = seed + 0; state.v4 = seed - PRIME64_1; memcpy(statePtr, &state, sizeof(state)); return XXH_OK; } FORCE_INLINE XXH_errorcode XXH64_update_endian (XXH64_state_t* state, const void* input, size_t len, XXH_endianess endian) { const BYTE* p = (const BYTE*)input; const BYTE* const bEnd = p + len; #ifdef XXH_ACCEPT_NULL_INPUT_POINTER if (input==NULL) return XXH_ERROR; #endif state->total_len += len; if (state->memsize + len < 32) { /* fill in tmp buffer */ XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, len); state->memsize += (U32)len; return XXH_OK; } if (state->memsize) { /* tmp buffer is full */ XXH_memcpy(((BYTE*)state->mem64) + state->memsize, input, 32-state->memsize); state->v1 = XXH64_round(state->v1, XXH_readLE64(state->mem64+0, endian)); state->v2 = XXH64_round(state->v2, XXH_readLE64(state->mem64+1, endian)); state->v3 = XXH64_round(state->v3, XXH_readLE64(state->mem64+2, endian)); state->v4 = XXH64_round(state->v4, XXH_readLE64(state->mem64+3, endian)); p += 32-state->memsize; state->memsize = 0; } if (p+32 <= bEnd) { const BYTE* const limit = bEnd - 32; U64 v1 = state->v1; U64 v2 = state->v2; U64 v3 = state->v3; U64 v4 = state->v4; do { v1 = XXH64_round(v1, XXH_readLE64(p, endian)); p+=8; v2 = XXH64_round(v2, XXH_readLE64(p, endian)); p+=8; v3 = XXH64_round(v3, XXH_readLE64(p, endian)); p+=8; v4 = XXH64_round(v4, XXH_readLE64(p, endian)); p+=8; } while (p<=limit); state->v1 = v1; state->v2 = v2; state->v3 = v3; state->v4 = v4; } if (p < bEnd) { XXH_memcpy(state->mem64, p, (size_t)(bEnd-p)); state->memsize = (unsigned)(bEnd-p); } return XXH_OK; } XXH_PUBLIC_API
XXH_errorcode XXH64_update (XXH64_state_t* state_in, const void* input, size_t len) { XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN; if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT) return XXH64_update_endian(state_in, input, len, XXH_littleEndian); else return XXH64_update_endian(state_in, input, len, XXH_bigEndian); } FORCE_INLINE U64 XXH64_digest_endian (const XXH64_state_t* state, XXH_endianess endian) { const BYTE * p = (const BYTE*)state->mem64; const BYTE* const bEnd = (const BYTE*)state->mem64 + state->memsize; U64 h64; if (state->total_len >= 32) { U64 const v1 = state->v1; U64 const v2 = state->v2; U64 const v3 = state->v3; U64 const v4 = state->v4; h64 = XXH_rotl64(v1, 1) + XXH_rotl64(v2, 7) + XXH_rotl64(v3, 12) + XXH_rotl64(v4, 18); h64 = XXH64_mergeRound(h64, v1); h64 = XXH64_mergeRound(h64, v2); h64 = XXH64_mergeRound(h64, v3); h64 = XXH64_mergeRound(h64, v4); } else { h64 = state->v3 + PRIME64_5; } h64 += (U64) state->total_len; while (p+8<=bEnd) { U64 const k1 = XXH64_round(0, XXH_readLE64(p, endian)); h64 ^= k1; h64 = XXH_rotl64(h64,27) * PRIME64_1 + PRIME64_4; p+=8; } if (p+4<=bEnd) { h64 ^= (U64)(XXH_readLE32(p, endian)) * PRIME64_1; h64 = XXH_rotl64(h64, 23) * PRIME64_2 + PRIME64_3; p+=4; } while (p<bEnd) { h64 ^= (*p) * PRIME64_5; h64 = XXH_rotl64(h64, 11) * PRIME64_1; p++; } h64 ^= h64 >> 33; h64 *= PRIME64_2; h64 ^= h64 >> 29; h64 *= PRIME64_3; h64 ^= h64 >> 32; return h64; } XXH_PUBLIC_API unsigned long long XXH64_digest (const XXH64_state_t* state_in) { XXH_endianess endian_detected = (XXH_endianess)XXH_CPU_LITTLE_ENDIAN; if ((endian_detected==XXH_littleEndian) || XXH_FORCE_NATIVE_FORMAT) return XXH64_digest_endian(state_in, XXH_littleEndian); else return XXH64_digest_endian(state_in, XXH_bigEndian); } /*====== Canonical representation ======*/ XXH_PUBLIC_API void XXH64_canonicalFromHash(XXH64_canonical_t* dst, XXH64_hash_t hash) { XXH_STATIC_ASSERT(sizeof(XXH64_canonical_t) == sizeof(XXH64_hash_t)); if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap64(hash); memcpy(dst, &hash, sizeof(*dst)); } XXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(const XXH64_canonical_t* src) { return XXH_readBE64(src); } #endif /* XXH_NO_LONG_LONG */ borgbackup-1.1.5/src/borg/algorithms/blake2/0000755000175000017500000000000013260002401017760 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/blake2/ref/0000755000175000017500000000000013260002401020534 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/blake2/ref/blake2-impl.h0000644000175000017500000001011713257361140023022 0ustar twtw00000000000000/* BLAKE2 reference source code package - reference C implementations Copyright 2012, Samuel Neves <sneves@dei.uc.pt>. You may use this under the terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at your option. The terms of these licenses can be found at: - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0 - OpenSSL license : https://www.openssl.org/source/license.html - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0 More information about the BLAKE2 hash function can be found at https://blake2.net.
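(Implementation note: the load*/store* helpers below access memory bytewise, or via memcpy() when NATIVE_LITTLE_ENDIAN is defined, so BLAKE2 computes identical, endianness-independent output on little- and big-endian hosts.)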
*/ #ifndef BLAKE2_IMPL_H #define BLAKE2_IMPL_H #include <stdint.h> #include <string.h> #if !defined(__cplusplus) && (!defined(__STDC_VERSION__) || __STDC_VERSION__ < 199901L) #if defined(_MSC_VER) #define BLAKE2_INLINE __inline #elif defined(__GNUC__) #define BLAKE2_INLINE __inline__ #else #define BLAKE2_INLINE #endif #else #define BLAKE2_INLINE inline #endif static BLAKE2_INLINE uint32_t load32( const void *src ) { #if defined(NATIVE_LITTLE_ENDIAN) uint32_t w; memcpy(&w, src, sizeof w); return w; #else const uint8_t *p = ( const uint8_t * )src; return (( uint32_t )( p[0] ) << 0) | (( uint32_t )( p[1] ) << 8) | (( uint32_t )( p[2] ) << 16) | (( uint32_t )( p[3] ) << 24) ; #endif } static BLAKE2_INLINE uint64_t load64( const void *src ) { #if defined(NATIVE_LITTLE_ENDIAN) uint64_t w; memcpy(&w, src, sizeof w); return w; #else const uint8_t *p = ( const uint8_t * )src; return (( uint64_t )( p[0] ) << 0) | (( uint64_t )( p[1] ) << 8) | (( uint64_t )( p[2] ) << 16) | (( uint64_t )( p[3] ) << 24) | (( uint64_t )( p[4] ) << 32) | (( uint64_t )( p[5] ) << 40) | (( uint64_t )( p[6] ) << 48) | (( uint64_t )( p[7] ) << 56) ; #endif } static BLAKE2_INLINE uint16_t load16( const void *src ) { #if defined(NATIVE_LITTLE_ENDIAN) uint16_t w; memcpy(&w, src, sizeof w); return w; #else const uint8_t *p = ( const uint8_t * )src; return (( uint16_t )( p[0] ) << 0) | (( uint16_t )( p[1] ) << 8) ; #endif } static BLAKE2_INLINE void store16( void *dst, uint16_t w ) { #if defined(NATIVE_LITTLE_ENDIAN) memcpy(dst, &w, sizeof w); #else uint8_t *p = ( uint8_t * )dst; *p++ = ( uint8_t )w; w >>= 8; *p++ = ( uint8_t )w; #endif } static BLAKE2_INLINE void store32( void *dst, uint32_t w ) { #if defined(NATIVE_LITTLE_ENDIAN) memcpy(dst, &w, sizeof w); #else uint8_t *p = ( uint8_t * )dst; p[0] = (uint8_t)(w >> 0); p[1] = (uint8_t)(w >> 8); p[2] = (uint8_t)(w >> 16); p[3] = (uint8_t)(w >> 24); #endif } static BLAKE2_INLINE void store64( void *dst, uint64_t w ) { #if defined(NATIVE_LITTLE_ENDIAN) memcpy(dst, &w, sizeof w); #else uint8_t *p = ( uint8_t * )dst; p[0] = (uint8_t)(w >> 0); p[1] = (uint8_t)(w >> 8); p[2] = (uint8_t)(w >> 16); p[3] = (uint8_t)(w >> 24); p[4] = (uint8_t)(w >> 32); p[5] = (uint8_t)(w >> 40); p[6] = (uint8_t)(w >> 48); p[7] = (uint8_t)(w >> 56); #endif } static BLAKE2_INLINE uint64_t load48( const void *src ) { const uint8_t *p = ( const uint8_t * )src; return (( uint64_t )( p[0] ) << 0) | (( uint64_t )( p[1] ) << 8) | (( uint64_t )( p[2] ) << 16) | (( uint64_t )( p[3] ) << 24) | (( uint64_t )( p[4] ) << 32) | (( uint64_t )( p[5] ) << 40) ; } static BLAKE2_INLINE void store48( void *dst, uint64_t w ) { uint8_t *p = ( uint8_t * )dst; p[0] = (uint8_t)(w >> 0); p[1] = (uint8_t)(w >> 8); p[2] = (uint8_t)(w >> 16); p[3] = (uint8_t)(w >> 24); p[4] = (uint8_t)(w >> 32); p[5] = (uint8_t)(w >> 40); } static BLAKE2_INLINE uint32_t rotr32( const uint32_t w, const unsigned c ) { return ( w >> c ) | ( w << ( 32 - c ) ); } static BLAKE2_INLINE uint64_t rotr64( const uint64_t w, const unsigned c ) { return ( w >> c ) | ( w << ( 64 - c ) ); } /* prevents compiler optimizing out memset() */ static BLAKE2_INLINE void secure_zero_memory(void *v, size_t n) { static void *(*const volatile memset_v)(void *, int, size_t) = &memset; memset_v(v, 0, n); } #endif borgbackup-1.1.5/src/borg/algorithms/blake2/ref/blake2b-ref.c0000644000175000017500000002351113257361140022774 0ustar twtw00000000000000/* BLAKE2 reference source code package - reference C implementations Copyright 2012, Samuel Neves <sneves@dei.uc.pt>.
You may use this under the terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at your option. The terms of these licenses can be found at: - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0 - OpenSSL license : https://www.openssl.org/source/license.html - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0 More information about the BLAKE2 hash function can be found at https://blake2.net. */ #include <stdint.h> #include <string.h> #include <stdio.h> #include "blake2.h" #include "blake2-impl.h" static const uint64_t blake2b_IV[8] = { 0x6a09e667f3bcc908ULL, 0xbb67ae8584caa73bULL, 0x3c6ef372fe94f82bULL, 0xa54ff53a5f1d36f1ULL, 0x510e527fade682d1ULL, 0x9b05688c2b3e6c1fULL, 0x1f83d9abfb41bd6bULL, 0x5be0cd19137e2179ULL }; static const uint8_t blake2b_sigma[12][16] = { { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 } , { 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 } , { 11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4 } , { 7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8 } , { 9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13 } , { 2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9 } , { 12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11 } , { 13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10 } , { 6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5 } , { 10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13 , 0 } , { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 } , { 14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3 } }; static void blake2b_set_lastnode( blake2b_state *S ) { S->f[1] = (uint64_t)-1; } /* Some helper functions, not necessarily useful */ static int blake2b_is_lastblock( const blake2b_state *S ) { return S->f[0] != 0; } static void blake2b_set_lastblock( blake2b_state *S ) { if( S->last_node ) blake2b_set_lastnode( S ); S->f[0] = (uint64_t)-1; } static void blake2b_increment_counter( blake2b_state *S, const uint64_t inc ) { S->t[0] += inc; S->t[1] += ( S->t[0] < inc ); } static void blake2b_init0( blake2b_state *S ) { size_t i; memset( S, 0, sizeof( blake2b_state ) ); for( i = 0; i < 8; ++i ) S->h[i] = blake2b_IV[i]; } /* init xors IV with input parameter block */ int blake2b_init_param( blake2b_state *S, const blake2b_param *P ) { const uint8_t *p = ( const uint8_t * )( P ); size_t i; blake2b_init0( S ); /* IV XOR ParamBlock */ for( i = 0; i < 8; ++i ) S->h[i] ^= load64( p + sizeof( S->h[i] ) * i ); S->outlen = P->digest_length; return 0; } int blake2b_init( blake2b_state *S, size_t outlen ) { blake2b_param P[1]; if ( ( !outlen ) || ( outlen > BLAKE2B_OUTBYTES ) ) return -1; P->digest_length = (uint8_t)outlen; P->key_length = 0; P->fanout = 1; P->depth = 1; store32( &P->leaf_length, 0 ); store32( &P->node_offset, 0 ); store32( &P->xof_length, 0 ); P->node_depth = 0; P->inner_length = 0; memset( P->reserved, 0, sizeof( P->reserved ) ); memset( P->salt, 0, sizeof( P->salt ) ); memset( P->personal, 0, sizeof( P->personal ) ); return blake2b_init_param( S, P ); } int blake2b_init_key( blake2b_state *S, size_t outlen, const void *key, size_t keylen ) { blake2b_param P[1]; if ( ( !outlen ) || ( outlen > BLAKE2B_OUTBYTES ) ) return -1; if ( !key || !keylen || keylen > BLAKE2B_KEYBYTES ) return -1; P->digest_length = (uint8_t)outlen; P->key_length = (uint8_t)keylen; P->fanout = 1; P->depth = 1; store32( &P->leaf_length, 0 ); store32( &P->node_offset, 0 ); store32( &P->xof_length, 0 ); P->node_depth = 0; P->inner_length = 0; memset( P->reserved, 0, sizeof( P->reserved ) ); memset( P->salt, 0, sizeof( P->salt ) ); memset( P->personal,
0, sizeof( P->personal ) ); if( blake2b_init_param( S, P ) < 0 ) return -1; { uint8_t block[BLAKE2B_BLOCKBYTES]; memset( block, 0, BLAKE2B_BLOCKBYTES ); memcpy( block, key, keylen ); blake2b_update( S, block, BLAKE2B_BLOCKBYTES ); secure_zero_memory( block, BLAKE2B_BLOCKBYTES ); /* Burn the key from stack */ } return 0; } #define G(r,i,a,b,c,d) \ do { \ a = a + b + m[blake2b_sigma[r][2*i+0]]; \ d = rotr64(d ^ a, 32); \ c = c + d; \ b = rotr64(b ^ c, 24); \ a = a + b + m[blake2b_sigma[r][2*i+1]]; \ d = rotr64(d ^ a, 16); \ c = c + d; \ b = rotr64(b ^ c, 63); \ } while(0) #define ROUND(r) \ do { \ G(r,0,v[ 0],v[ 4],v[ 8],v[12]); \ G(r,1,v[ 1],v[ 5],v[ 9],v[13]); \ G(r,2,v[ 2],v[ 6],v[10],v[14]); \ G(r,3,v[ 3],v[ 7],v[11],v[15]); \ G(r,4,v[ 0],v[ 5],v[10],v[15]); \ G(r,5,v[ 1],v[ 6],v[11],v[12]); \ G(r,6,v[ 2],v[ 7],v[ 8],v[13]); \ G(r,7,v[ 3],v[ 4],v[ 9],v[14]); \ } while(0) static void blake2b_compress( blake2b_state *S, const uint8_t block[BLAKE2B_BLOCKBYTES] ) { uint64_t m[16]; uint64_t v[16]; size_t i; for( i = 0; i < 16; ++i ) { m[i] = load64( block + i * sizeof( m[i] ) ); } for( i = 0; i < 8; ++i ) { v[i] = S->h[i]; } v[ 8] = blake2b_IV[0]; v[ 9] = blake2b_IV[1]; v[10] = blake2b_IV[2]; v[11] = blake2b_IV[3]; v[12] = blake2b_IV[4] ^ S->t[0]; v[13] = blake2b_IV[5] ^ S->t[1]; v[14] = blake2b_IV[6] ^ S->f[0]; v[15] = blake2b_IV[7] ^ S->f[1]; ROUND( 0 ); ROUND( 1 ); ROUND( 2 ); ROUND( 3 ); ROUND( 4 ); ROUND( 5 ); ROUND( 6 ); ROUND( 7 ); ROUND( 8 ); ROUND( 9 ); ROUND( 10 ); ROUND( 11 ); for( i = 0; i < 8; ++i ) { S->h[i] = S->h[i] ^ v[i] ^ v[i + 8]; } } #undef G #undef ROUND int blake2b_update( blake2b_state *S, const void *pin, size_t inlen ) { const unsigned char * in = (const unsigned char *)pin; if( inlen > 0 ) { size_t left = S->buflen; size_t fill = BLAKE2B_BLOCKBYTES - left; if( inlen > fill ) { S->buflen = 0; memcpy( S->buf + left, in, fill ); /* Fill buffer */ blake2b_increment_counter( S, BLAKE2B_BLOCKBYTES ); blake2b_compress( S, S->buf ); /* Compress */ in += fill; inlen -= fill; while(inlen > BLAKE2B_BLOCKBYTES) { blake2b_increment_counter(S, BLAKE2B_BLOCKBYTES); blake2b_compress( S, in ); in += BLAKE2B_BLOCKBYTES; inlen -= BLAKE2B_BLOCKBYTES; } } memcpy( S->buf + S->buflen, in, inlen ); S->buflen += inlen; } return 0; } int blake2b_final( blake2b_state *S, void *out, size_t outlen ) { uint8_t buffer[BLAKE2B_OUTBYTES] = {0}; size_t i; if( out == NULL || outlen < S->outlen ) return -1; if( blake2b_is_lastblock( S ) ) return -1; blake2b_increment_counter( S, S->buflen ); blake2b_set_lastblock( S ); memset( S->buf + S->buflen, 0, BLAKE2B_BLOCKBYTES - S->buflen ); /* Padding */ blake2b_compress( S, S->buf ); for( i = 0; i < 8; ++i ) /* Output full hash to temp buffer */ store64( buffer + sizeof( S->h[i] ) * i, S->h[i] ); memcpy( out, buffer, S->outlen ); secure_zero_memory(buffer, sizeof(buffer)); return 0; } /* inlen, at least, should be uint64_t. Others can be size_t. 
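As an illustrative sketch of the one-shot API declared in blake2.h: uint8_t md[BLAKE2B_OUTBYTES]; blake2b( md, sizeof md, msg, msglen, NULL, 0 ); computes an unkeyed 64-byte digest, and passing key/keylen instead of NULL/0 selects the keyed (MAC) mode.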
*/ int blake2b( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ) { blake2b_state S[1]; /* Verify parameters */ if ( NULL == in && inlen > 0 ) return -1; if ( NULL == out ) return -1; if( NULL == key && keylen > 0 ) return -1; if( !outlen || outlen > BLAKE2B_OUTBYTES ) return -1; if( keylen > BLAKE2B_KEYBYTES ) return -1; if( keylen > 0 ) { if( blake2b_init_key( S, outlen, key, keylen ) < 0 ) return -1; } else { if( blake2b_init( S, outlen ) < 0 ) return -1; } blake2b_update( S, ( const uint8_t * )in, inlen ); blake2b_final( S, out, outlen ); return 0; } int blake2( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ) { return blake2b(out, outlen, in, inlen, key, keylen); } #if defined(SUPERCOP) int crypto_hash( unsigned char *out, unsigned char *in, unsigned long long inlen ) { return blake2b( out, BLAKE2B_OUTBYTES, in, inlen, NULL, 0 ); } #endif #if defined(BLAKE2B_SELFTEST) #include <stdio.h> #include "blake2-kat.h" int main( void ) { uint8_t key[BLAKE2B_KEYBYTES]; uint8_t buf[BLAKE2_KAT_LENGTH]; size_t i, step; for( i = 0; i < BLAKE2B_KEYBYTES; ++i ) key[i] = ( uint8_t )i; for( i = 0; i < BLAKE2_KAT_LENGTH; ++i ) buf[i] = ( uint8_t )i; /* Test simple API */ for( i = 0; i < BLAKE2_KAT_LENGTH; ++i ) { uint8_t hash[BLAKE2B_OUTBYTES]; blake2b( hash, BLAKE2B_OUTBYTES, buf, i, key, BLAKE2B_KEYBYTES ); if( 0 != memcmp( hash, blake2b_keyed_kat[i], BLAKE2B_OUTBYTES ) ) { goto fail; } } /* Test streaming API */ for(step = 1; step < BLAKE2B_BLOCKBYTES; ++step) { for (i = 0; i < BLAKE2_KAT_LENGTH; ++i) { uint8_t hash[BLAKE2B_OUTBYTES]; blake2b_state S; uint8_t * p = buf; size_t mlen = i; int err = 0; if( (err = blake2b_init_key(&S, BLAKE2B_OUTBYTES, key, BLAKE2B_KEYBYTES)) < 0 ) { goto fail; } while (mlen >= step) { if ( (err = blake2b_update(&S, p, step)) < 0 ) { goto fail; } mlen -= step; p += step; } if ( (err = blake2b_update(&S, p, mlen)) < 0) { goto fail; } if ( (err = blake2b_final(&S, hash, BLAKE2B_OUTBYTES)) < 0) { goto fail; } if (0 != memcmp(hash, blake2b_keyed_kat[i], BLAKE2B_OUTBYTES)) { goto fail; } } } puts( "ok" ); return 0; fail: puts("error"); return -1; } #endif borgbackup-1.1.5/src/borg/algorithms/blake2/ref/blake2.h0000644000175000017500000001446713257361140022077 0ustar twtw00000000000000/* BLAKE2 reference source code package - reference C implementations Copyright 2012, Samuel Neves <sneves@dei.uc.pt>. You may use this under the terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at your option. The terms of these licenses can be found at: - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0 - OpenSSL license : https://www.openssl.org/source/license.html - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0 More information about the BLAKE2 hash function can be found at https://blake2.net.
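Streaming use of the API declared below might look like this (an illustrative sketch): blake2b_state S; blake2b_init( &S, BLAKE2B_OUTBYTES ); blake2b_update( &S, part1, len1 ); blake2b_update( &S, part2, len2 ); blake2b_final( &S, out, BLAKE2B_OUTBYTES ); the init/update/final calls return 0 on success and -1 on error.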
*/ #ifndef BLAKE2_H #define BLAKE2_H #include <stddef.h> #include <stdint.h> #if defined(_MSC_VER) #define BLAKE2_PACKED(x) __pragma(pack(push, 1)) x __pragma(pack(pop)) #else #define BLAKE2_PACKED(x) x __attribute__((packed)) #endif #if defined(__cplusplus) extern "C" { #endif enum blake2s_constant { BLAKE2S_BLOCKBYTES = 64, BLAKE2S_OUTBYTES = 32, BLAKE2S_KEYBYTES = 32, BLAKE2S_SALTBYTES = 8, BLAKE2S_PERSONALBYTES = 8 }; enum blake2b_constant { BLAKE2B_BLOCKBYTES = 128, BLAKE2B_OUTBYTES = 64, BLAKE2B_KEYBYTES = 64, BLAKE2B_SALTBYTES = 16, BLAKE2B_PERSONALBYTES = 16 }; typedef struct blake2s_state__ { uint32_t h[8]; uint32_t t[2]; uint32_t f[2]; uint8_t buf[BLAKE2S_BLOCKBYTES]; size_t buflen; size_t outlen; uint8_t last_node; } blake2s_state; typedef struct blake2b_state__ { uint64_t h[8]; uint64_t t[2]; uint64_t f[2]; uint8_t buf[BLAKE2B_BLOCKBYTES]; size_t buflen; size_t outlen; uint8_t last_node; } blake2b_state; typedef struct blake2sp_state__ { blake2s_state S[8][1]; blake2s_state R[1]; uint8_t buf[8 * BLAKE2S_BLOCKBYTES]; size_t buflen; size_t outlen; } blake2sp_state; typedef struct blake2bp_state__ { blake2b_state S[4][1]; blake2b_state R[1]; uint8_t buf[4 * BLAKE2B_BLOCKBYTES]; size_t buflen; size_t outlen; } blake2bp_state; BLAKE2_PACKED(struct blake2s_param__ { uint8_t digest_length; /* 1 */ uint8_t key_length; /* 2 */ uint8_t fanout; /* 3 */ uint8_t depth; /* 4 */ uint32_t leaf_length; /* 8 */ uint32_t node_offset; /* 12 */ uint16_t xof_length; /* 14 */ uint8_t node_depth; /* 15 */ uint8_t inner_length; /* 16 */ /* uint8_t reserved[0]; */ uint8_t salt[BLAKE2S_SALTBYTES]; /* 24 */ uint8_t personal[BLAKE2S_PERSONALBYTES]; /* 32 */ }); typedef struct blake2s_param__ blake2s_param; BLAKE2_PACKED(struct blake2b_param__ { uint8_t digest_length; /* 1 */ uint8_t key_length; /* 2 */ uint8_t fanout; /* 3 */ uint8_t depth; /* 4 */ uint32_t leaf_length; /* 8 */ uint32_t node_offset; /* 12 */ uint32_t xof_length; /* 16 */ uint8_t node_depth; /* 17 */ uint8_t inner_length; /* 18 */ uint8_t reserved[14]; /* 32 */ uint8_t salt[BLAKE2B_SALTBYTES]; /* 48 */ uint8_t personal[BLAKE2B_PERSONALBYTES]; /* 64 */ }); typedef struct blake2b_param__ blake2b_param; typedef struct blake2xs_state__ { blake2s_state S[1]; blake2s_param P[1]; } blake2xs_state; typedef struct blake2xb_state__ { blake2b_state S[1]; blake2b_param P[1]; } blake2xb_state; /* Padded structs result in a compile-time error */ enum { BLAKE2_DUMMY_1 = 1/(sizeof(blake2s_param) == BLAKE2S_OUTBYTES), BLAKE2_DUMMY_2 = 1/(sizeof(blake2b_param) == BLAKE2B_OUTBYTES) }; /* Streaming API */ int blake2s_init( blake2s_state *S, size_t outlen ); int blake2s_init_key( blake2s_state *S, size_t outlen, const void *key, size_t keylen ); int blake2s_init_param( blake2s_state *S, const blake2s_param *P ); int blake2s_update( blake2s_state *S, const void *in, size_t inlen ); int blake2s_final( blake2s_state *S, void *out, size_t outlen ); int blake2b_init( blake2b_state *S, size_t outlen ); int blake2b_init_key( blake2b_state *S, size_t outlen, const void *key, size_t keylen ); int blake2b_init_param( blake2b_state *S, const blake2b_param *P ); int blake2b_update( blake2b_state *S, const void *in, size_t inlen ); int blake2b_final( blake2b_state *S, void *out, size_t outlen ); int blake2sp_init( blake2sp_state *S, size_t outlen ); int blake2sp_init_key( blake2sp_state *S, size_t outlen, const void *key, size_t keylen ); int blake2sp_update( blake2sp_state *S, const void *in, size_t inlen ); int blake2sp_final( blake2sp_state *S, void *out, size_t outlen ); int blake2bp_init(
blake2bp_state *S, size_t outlen ); int blake2bp_init_key( blake2bp_state *S, size_t outlen, const void *key, size_t keylen ); int blake2bp_update( blake2bp_state *S, const void *in, size_t inlen ); int blake2bp_final( blake2bp_state *S, void *out, size_t outlen ); /* Variable output length API */ int blake2xs_init( blake2xs_state *S, const size_t outlen ); int blake2xs_init_key( blake2xs_state *S, const size_t outlen, const void *key, size_t keylen ); int blake2xs_update( blake2xs_state *S, const void *in, size_t inlen ); int blake2xs_final(blake2xs_state *S, void *out, size_t outlen); int blake2xb_init( blake2xb_state *S, const size_t outlen ); int blake2xb_init_key( blake2xb_state *S, const size_t outlen, const void *key, size_t keylen ); int blake2xb_update( blake2xb_state *S, const void *in, size_t inlen ); int blake2xb_final(blake2xb_state *S, void *out, size_t outlen); /* Simple API */ int blake2s( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); int blake2b( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); int blake2sp( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); int blake2bp( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); int blake2xs( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); int blake2xb( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); /* This is simply an alias for blake2b */ int blake2( void *out, size_t outlen, const void *in, size_t inlen, const void *key, size_t keylen ); #if defined(__cplusplus) } #endif #endif borgbackup-1.1.5/src/borg/algorithms/__init__.py0000644000175000017500000000044413257361140020751 0ustar twtw00000000000000""" borg.algorithms =============== This package is intended for hash and checksum functions. Ideally these would be sourced from existing libraries, but are frequently not available yet (blake2), are available but in poor form (crc32) or don't really make sense as a library (xxHash). """ borgbackup-1.1.5/src/borg/algorithms/crc32_dispatch.c0000644000175000017500000000542213257361140021600 0ustar twtw00000000000000 /* always compile slice by 8 as a runtime fallback */ #include "crc32_slice_by_8.c" #ifdef __GNUC__ /* * GCC 4.4(.7) has a bug that causes it to recurse infinitely if an unknown option * is pushed onto the options stack. GCC 4.5 was not tested, so is excluded as well. * GCC 4.6 is known good. */ #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6) /* * clang also has or had GCC bug #56298 explained below, but doesn't support * target attributes or the options stack. So we disable this faster code path for clang. */ #ifndef __clang__ /* * While OpenBSD uses GCC, they don't have Intel intrinsics, so we can't compile this code * on OpenBSD. */ #ifndef __OpenBSD__ #if __x86_64__ /* * Because we don't want a configure script we need compiler-dependent pre-defined macros for detecting this, * also some compiler-dependent stuff to invoke SSE modes and align things. 
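 *
 * (Illustrative note, not upstream text: the net effect of this file is a
 * runtime dispatch. Using only the functions defined in this package, a
 * caller could select the fast path like
 *     uint32_t crc = have_clmul() ? crc32_clmul(buf, len, 0)
 *                                 : crc32_slice_by_8(buf, len, 0);
 * so the PCLMULQDQ code only ever runs on CPUs that report support for it.)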
 */
#define FOLDING_CRC

/*
 * SSE2 misses _mm_shuffle_epi32, and _mm_extract_epi32
 * SSSE3 added _mm_shuffle_epi32
 * SSE4.1 added _mm_extract_epi32
 * Also requires CLMUL of course (all AES-NI CPUs have it)
 * Note that there are no CPUs with AES-NI/CLMUL but without SSE4.1
 */
#define CLMUL __attribute__ ((target ("pclmul,sse4.1")))
#define ALIGNED_(n) __attribute__ ((aligned(n)))

/*
 * Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56298
 * These are taken from GCC 6.x, so apparently the above bug has been resolved in that version,
 * but it still affects widely used GCC 4.x.
 * Part 2 of 2 follows below.
 */
#ifndef __PCLMUL__
#pragma GCC push_options
#pragma GCC target("pclmul")
#define __BORG_DISABLE_PCLMUL__
#endif

#ifndef __SSE3__
#pragma GCC push_options
#pragma GCC target("sse3")
#define __BORG_DISABLE_SSE3__
#endif

#ifndef __SSSE3__
#pragma GCC push_options
#pragma GCC target("ssse3")
#define __BORG_DISABLE_SSSE3__
#endif

#ifndef __SSE4_1__
#pragma GCC push_options
#pragma GCC target("sse4.1")
#define __BORG_DISABLE_SSE4_1__
#endif

#endif /* if __x86_64__ */
#endif /* ifndef __OpenBSD__ */
#endif /* ifndef __clang__ */
#endif /* __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6) */
#endif /* ifdef __GNUC__ */

#ifdef FOLDING_CRC
#include "crc32_clmul.c"
#else
static uint32_t
crc32_clmul(const uint8_t *src, long len, uint32_t initial_crc)
{
    (void)src; (void)len; (void)initial_crc;
    assert(0);
    return 0;
}

static int
have_clmul(void)
{
    return 0;
}
#endif

/*
 * Part 2 of 2 of the GCC workaround.
 */
#ifdef __BORG_DISABLE_PCLMUL__
#undef __BORG_DISABLE_PCLMUL__
#pragma GCC pop_options
#endif

#ifdef __BORG_DISABLE_SSE3__
#undef __BORG_DISABLE_SSE3__
#pragma GCC pop_options
#endif

#ifdef __BORG_DISABLE_SSSE3__
#undef __BORG_DISABLE_SSSE3__
#pragma GCC pop_options
#endif

#ifdef __BORG_DISABLE_SSE4_1__
#undef __BORG_DISABLE_SSE4_1__
#pragma GCC pop_options
#endif
borgbackup-1.1.5/src/borg/algorithms/lz4-libselect.h0000644000175000017500000000011613257361140021462 0ustar twtw00000000000000#ifdef BORG_USE_LIBLZ4
#include <lz4.h>
#else
#include "lz4/lib/lz4.h"
#endif
borgbackup-1.1.5/src/borg/algorithms/crc32_clmul.c0000644000175000017500000004221213257361140021113 0ustar twtw00000000000000/*
 * Compute the CRC32 using a parallelized folding approach with the PCLMULQDQ
 * instruction.
 *
 * A white paper describing this algorithm can be found at:
 * http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
 *
 * Copyright (C) 2013 Intel Corporation. All rights reserved.
 * Authors:
 *     Wajdi Feghali
 *     Jim Guilford
 *     Vinodh Gopal
 *     Erdinc Ozturk
 *     Jim Kukunas
 *
 * Copyright (c) 2016 Marian Beermann (add support for initial value, restructuring)
 *
 * This software is provided 'as-is', without any express or implied
 * warranty.  In no event will the authors be held liable for any damages
 * arising from the use of this software.
 *
 * Permission is granted to anyone to use this software for any purpose,
 * including commercial applications, and to alter it and redistribute it
 * freely, subject to the following restrictions:
 *
 * 1. The origin of this software must not be misrepresented; you must not
 *    claim that you wrote the original software. If you use this software
 *    in a product, an acknowledgment in the product documentation would be
 *    appreciated but is not required.
 * 2. Altered source versions must be plainly marked as such, and must not be
 *    misrepresented as being the original software.
 * 3.
    This notice may not be removed or altered from any source distribution.
 */

#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>

#ifdef _MSC_VER
#include <intrin.h>
#else
/*
 * Newer versions of GCC and clang come with cpuid.h
 * (ftr GCC 4.7 in Debian Wheezy has this)
 */
#include <cpuid.h>
#endif

static void
cpuid(int info, unsigned* eax, unsigned* ebx, unsigned* ecx, unsigned* edx)
{
#ifdef _MSC_VER
    unsigned int registers[4];
    __cpuid(registers, info);
    *eax = registers[0];
    *ebx = registers[1];
    *ecx = registers[2];
    *edx = registers[3];
#else
    /* GCC, clang */
    unsigned int _eax;
    unsigned int _ebx;
    unsigned int _ecx;
    unsigned int _edx;
    __cpuid(info, _eax, _ebx, _ecx, _edx);
    *eax = _eax;
    *ebx = _ebx;
    *ecx = _ecx;
    *edx = _edx;
#endif
}

static int
have_clmul(void)
{
    unsigned eax, ebx, ecx, edx;
    int has_pclmulqdq;
    int has_sse41;
    cpuid(1 /* feature bits */, &eax, &ebx, &ecx, &edx);

    has_pclmulqdq = ecx & 0x2;     /* bit 1 */
    has_sse41 = ecx & 0x80000;     /* bit 19 */

    return has_pclmulqdq && has_sse41;
}

CLMUL
static void
fold_1(__m128i *xmm_crc0, __m128i *xmm_crc1, __m128i *xmm_crc2, __m128i *xmm_crc3)
{
    const __m128i xmm_fold4 = _mm_set_epi32(
            0x00000001, 0x54442bd4,
            0x00000001, 0xc6e41596);

    __m128i x_tmp3;
    __m128 ps_crc0, ps_crc3, ps_res;

    x_tmp3 = *xmm_crc3;

    *xmm_crc3 = *xmm_crc0;
    *xmm_crc0 = _mm_clmulepi64_si128(*xmm_crc0, xmm_fold4, 0x01);
    *xmm_crc3 = _mm_clmulepi64_si128(*xmm_crc3, xmm_fold4, 0x10);
    ps_crc0 = _mm_castsi128_ps(*xmm_crc0);
    ps_crc3 = _mm_castsi128_ps(*xmm_crc3);
    ps_res = _mm_xor_ps(ps_crc0, ps_crc3);

    *xmm_crc0 = *xmm_crc1;
    *xmm_crc1 = *xmm_crc2;
    *xmm_crc2 = x_tmp3;
    *xmm_crc3 = _mm_castps_si128(ps_res);
}

CLMUL
static void
fold_2(__m128i *xmm_crc0, __m128i *xmm_crc1, __m128i *xmm_crc2, __m128i *xmm_crc3)
{
    const __m128i xmm_fold4 = _mm_set_epi32(
            0x00000001, 0x54442bd4,
            0x00000001, 0xc6e41596);

    __m128i x_tmp3, x_tmp2;
    __m128 ps_crc0, ps_crc1, ps_crc2, ps_crc3, ps_res31, ps_res20;

    x_tmp3 = *xmm_crc3;
    x_tmp2 = *xmm_crc2;

    *xmm_crc3 = *xmm_crc1;
    *xmm_crc1 = _mm_clmulepi64_si128(*xmm_crc1, xmm_fold4, 0x01);
    *xmm_crc3 = _mm_clmulepi64_si128(*xmm_crc3, xmm_fold4, 0x10);
    ps_crc3 = _mm_castsi128_ps(*xmm_crc3);
    ps_crc1 = _mm_castsi128_ps(*xmm_crc1);
    ps_res31 = _mm_xor_ps(ps_crc3, ps_crc1);

    *xmm_crc2 = *xmm_crc0;
    *xmm_crc0 = _mm_clmulepi64_si128(*xmm_crc0, xmm_fold4, 0x01);
    *xmm_crc2 = _mm_clmulepi64_si128(*xmm_crc2, xmm_fold4, 0x10);
    ps_crc0 = _mm_castsi128_ps(*xmm_crc0);
    ps_crc2 = _mm_castsi128_ps(*xmm_crc2);
    ps_res20 = _mm_xor_ps(ps_crc0, ps_crc2);

    *xmm_crc0 = x_tmp2;
    *xmm_crc1 = x_tmp3;
    *xmm_crc2 = _mm_castps_si128(ps_res20);
    *xmm_crc3 = _mm_castps_si128(ps_res31);
}

CLMUL
static void
fold_3(__m128i *xmm_crc0, __m128i *xmm_crc1, __m128i *xmm_crc2, __m128i *xmm_crc3)
{
    const __m128i xmm_fold4 = _mm_set_epi32(
            0x00000001, 0x54442bd4,
            0x00000001, 0xc6e41596);

    __m128i x_tmp3;
    __m128 ps_crc0, ps_crc1, ps_crc2, ps_crc3, ps_res32, ps_res21, ps_res10;

    x_tmp3 = *xmm_crc3;

    *xmm_crc3 = *xmm_crc2;
    *xmm_crc2 = _mm_clmulepi64_si128(*xmm_crc2, xmm_fold4, 0x01);
    *xmm_crc3 = _mm_clmulepi64_si128(*xmm_crc3, xmm_fold4, 0x10);
    ps_crc2 = _mm_castsi128_ps(*xmm_crc2);
    ps_crc3 = _mm_castsi128_ps(*xmm_crc3);
    ps_res32 = _mm_xor_ps(ps_crc2, ps_crc3);

    *xmm_crc2 = *xmm_crc1;
    *xmm_crc1 = _mm_clmulepi64_si128(*xmm_crc1, xmm_fold4, 0x01);
    *xmm_crc2 = _mm_clmulepi64_si128(*xmm_crc2, xmm_fold4, 0x10);
    ps_crc1 = _mm_castsi128_ps(*xmm_crc1);
    ps_crc2 = _mm_castsi128_ps(*xmm_crc2);
    ps_res21 = _mm_xor_ps(ps_crc1, ps_crc2);

    *xmm_crc1 = *xmm_crc0;
    *xmm_crc0 = _mm_clmulepi64_si128(*xmm_crc0, xmm_fold4, 0x01);
    *xmm_crc1 = _mm_clmulepi64_si128(*xmm_crc1, xmm_fold4, 0x10);
    ps_crc0 =
_mm_castsi128_ps(*xmm_crc0); ps_crc1 = _mm_castsi128_ps(*xmm_crc1); ps_res10 = _mm_xor_ps(ps_crc0, ps_crc1); *xmm_crc0 = x_tmp3; *xmm_crc1 = _mm_castps_si128(ps_res10); *xmm_crc2 = _mm_castps_si128(ps_res21); *xmm_crc3 = _mm_castps_si128(ps_res32); } CLMUL static void fold_4(__m128i *xmm_crc0, __m128i *xmm_crc1, __m128i *xmm_crc2, __m128i *xmm_crc3) { const __m128i xmm_fold4 = _mm_set_epi32( 0x00000001, 0x54442bd4, 0x00000001, 0xc6e41596); __m128i x_tmp0, x_tmp1, x_tmp2, x_tmp3; __m128 ps_crc0, ps_crc1, ps_crc2, ps_crc3; __m128 ps_t0, ps_t1, ps_t2, ps_t3; __m128 ps_res0, ps_res1, ps_res2, ps_res3; x_tmp0 = *xmm_crc0; x_tmp1 = *xmm_crc1; x_tmp2 = *xmm_crc2; x_tmp3 = *xmm_crc3; *xmm_crc0 = _mm_clmulepi64_si128(*xmm_crc0, xmm_fold4, 0x01); x_tmp0 = _mm_clmulepi64_si128(x_tmp0, xmm_fold4, 0x10); ps_crc0 = _mm_castsi128_ps(*xmm_crc0); ps_t0 = _mm_castsi128_ps(x_tmp0); ps_res0 = _mm_xor_ps(ps_crc0, ps_t0); *xmm_crc1 = _mm_clmulepi64_si128(*xmm_crc1, xmm_fold4, 0x01); x_tmp1 = _mm_clmulepi64_si128(x_tmp1, xmm_fold4, 0x10); ps_crc1 = _mm_castsi128_ps(*xmm_crc1); ps_t1 = _mm_castsi128_ps(x_tmp1); ps_res1 = _mm_xor_ps(ps_crc1, ps_t1); *xmm_crc2 = _mm_clmulepi64_si128(*xmm_crc2, xmm_fold4, 0x01); x_tmp2 = _mm_clmulepi64_si128(x_tmp2, xmm_fold4, 0x10); ps_crc2 = _mm_castsi128_ps(*xmm_crc2); ps_t2 = _mm_castsi128_ps(x_tmp2); ps_res2 = _mm_xor_ps(ps_crc2, ps_t2); *xmm_crc3 = _mm_clmulepi64_si128(*xmm_crc3, xmm_fold4, 0x01); x_tmp3 = _mm_clmulepi64_si128(x_tmp3, xmm_fold4, 0x10); ps_crc3 = _mm_castsi128_ps(*xmm_crc3); ps_t3 = _mm_castsi128_ps(x_tmp3); ps_res3 = _mm_xor_ps(ps_crc3, ps_t3); *xmm_crc0 = _mm_castps_si128(ps_res0); *xmm_crc1 = _mm_castps_si128(ps_res1); *xmm_crc2 = _mm_castps_si128(ps_res2); *xmm_crc3 = _mm_castps_si128(ps_res3); } static const unsigned ALIGNED_(32) pshufb_shf_table[60] = { 0x84838281, 0x88878685, 0x8c8b8a89, 0x008f8e8d, /* shl 15 (16 - 1)/shr1 */ 0x85848382, 0x89888786, 0x8d8c8b8a, 0x01008f8e, /* shl 14 (16 - 3)/shr2 */ 0x86858483, 0x8a898887, 0x8e8d8c8b, 0x0201008f, /* shl 13 (16 - 4)/shr3 */ 0x87868584, 0x8b8a8988, 0x8f8e8d8c, 0x03020100, /* shl 12 (16 - 4)/shr4 */ 0x88878685, 0x8c8b8a89, 0x008f8e8d, 0x04030201, /* shl 11 (16 - 5)/shr5 */ 0x89888786, 0x8d8c8b8a, 0x01008f8e, 0x05040302, /* shl 10 (16 - 6)/shr6 */ 0x8a898887, 0x8e8d8c8b, 0x0201008f, 0x06050403, /* shl 9 (16 - 7)/shr7 */ 0x8b8a8988, 0x8f8e8d8c, 0x03020100, 0x07060504, /* shl 8 (16 - 8)/shr8 */ 0x8c8b8a89, 0x008f8e8d, 0x04030201, 0x08070605, /* shl 7 (16 - 9)/shr9 */ 0x8d8c8b8a, 0x01008f8e, 0x05040302, 0x09080706, /* shl 6 (16 -10)/shr10*/ 0x8e8d8c8b, 0x0201008f, 0x06050403, 0x0a090807, /* shl 5 (16 -11)/shr11*/ 0x8f8e8d8c, 0x03020100, 0x07060504, 0x0b0a0908, /* shl 4 (16 -12)/shr12*/ 0x008f8e8d, 0x04030201, 0x08070605, 0x0c0b0a09, /* shl 3 (16 -13)/shr13*/ 0x01008f8e, 0x05040302, 0x09080706, 0x0d0c0b0a, /* shl 2 (16 -14)/shr14*/ 0x0201008f, 0x06050403, 0x0a090807, 0x0e0d0c0b /* shl 1 (16 -15)/shr15*/ }; CLMUL static void partial_fold(const size_t len, __m128i *xmm_crc0, __m128i *xmm_crc1, __m128i *xmm_crc2, __m128i *xmm_crc3, __m128i *xmm_crc_part) { const __m128i xmm_fold4 = _mm_set_epi32( 0x00000001, 0x54442bd4, 0x00000001, 0xc6e41596); const __m128i xmm_mask3 = _mm_set1_epi32(0x80808080); __m128i xmm_shl, xmm_shr, xmm_tmp1, xmm_tmp2, xmm_tmp3; __m128i xmm_a0_0, xmm_a0_1; __m128 ps_crc3, psa0_0, psa0_1, ps_res; xmm_shl = _mm_load_si128((__m128i *)pshufb_shf_table + (len - 1)); xmm_shr = xmm_shl; xmm_shr = _mm_xor_si128(xmm_shr, xmm_mask3); xmm_a0_0 = _mm_shuffle_epi8(*xmm_crc0, xmm_shl); *xmm_crc0 = 
_mm_shuffle_epi8(*xmm_crc0, xmm_shr); xmm_tmp1 = _mm_shuffle_epi8(*xmm_crc1, xmm_shl); *xmm_crc0 = _mm_or_si128(*xmm_crc0, xmm_tmp1); *xmm_crc1 = _mm_shuffle_epi8(*xmm_crc1, xmm_shr); xmm_tmp2 = _mm_shuffle_epi8(*xmm_crc2, xmm_shl); *xmm_crc1 = _mm_or_si128(*xmm_crc1, xmm_tmp2); *xmm_crc2 = _mm_shuffle_epi8(*xmm_crc2, xmm_shr); xmm_tmp3 = _mm_shuffle_epi8(*xmm_crc3, xmm_shl); *xmm_crc2 = _mm_or_si128(*xmm_crc2, xmm_tmp3); *xmm_crc3 = _mm_shuffle_epi8(*xmm_crc3, xmm_shr); *xmm_crc_part = _mm_shuffle_epi8(*xmm_crc_part, xmm_shl); *xmm_crc3 = _mm_or_si128(*xmm_crc3, *xmm_crc_part); xmm_a0_1 = _mm_clmulepi64_si128(xmm_a0_0, xmm_fold4, 0x10); xmm_a0_0 = _mm_clmulepi64_si128(xmm_a0_0, xmm_fold4, 0x01); ps_crc3 = _mm_castsi128_ps(*xmm_crc3); psa0_0 = _mm_castsi128_ps(xmm_a0_0); psa0_1 = _mm_castsi128_ps(xmm_a0_1); ps_res = _mm_xor_ps(ps_crc3, psa0_0); ps_res = _mm_xor_ps(ps_res, psa0_1); *xmm_crc3 = _mm_castps_si128(ps_res); } static const unsigned ALIGNED_(16) crc_k[] = { 0xccaa009e, 0x00000000, /* rk1 */ 0x751997d0, 0x00000001, /* rk2 */ 0xccaa009e, 0x00000000, /* rk5 */ 0x63cd6124, 0x00000001, /* rk6 */ 0xf7011640, 0x00000001, /* rk7 */ 0xdb710640, 0x00000001 /* rk8 */ }; static const unsigned ALIGNED_(16) crc_mask[4] = { 0xFFFFFFFF, 0xFFFFFFFF, 0x00000000, 0x00000000 }; static const unsigned ALIGNED_(16) crc_mask2[4] = { 0x00000000, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF }; #define ONCE(op) if(first) { \ first = 0; \ (op); \ } /* * somewhat surprisingly the "naive" way of doing this, ie. with a flag and a cond. branch, * is consistently ~5 % faster on average than the implied-recommended branchless way (always xor, * always zero xmm_initial). Guess speculative execution and branch prediction got the better of * yet another "optimization tip". */ #define XOR_INITIAL(where) ONCE(where = _mm_xor_si128(where, xmm_initial)) CLMUL static uint32_t crc32_clmul(const uint8_t *src, long len, uint32_t initial_crc) { unsigned long algn_diff; __m128i xmm_t0, xmm_t1, xmm_t2, xmm_t3; __m128i xmm_initial = _mm_cvtsi32_si128(initial_crc); __m128i xmm_crc0 = _mm_cvtsi32_si128(0x9db42487); __m128i xmm_crc1 = _mm_setzero_si128(); __m128i xmm_crc2 = _mm_setzero_si128(); __m128i xmm_crc3 = _mm_setzero_si128(); __m128i xmm_crc_part; int first = 1; /* fold 512 to 32 step variable declarations for ISO-C90 compat. */ const __m128i xmm_mask = _mm_load_si128((__m128i *)crc_mask); const __m128i xmm_mask2 = _mm_load_si128((__m128i *)crc_mask2); uint32_t crc; __m128i x_tmp0, x_tmp1, x_tmp2, crc_fold; if (len < 16) { if (len == 0) return initial_crc; if (len < 4) { /* * no idea how to do this for <4 bytes, delegate to classic impl. 
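         * (Descriptive note: the switch below relies on intentional
         * fall-through -- len == 3 performs the byte-at-a-time table step
         * three times, len == 2 twice, len == 1 once.)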
*/ uint32_t crc = ~initial_crc; switch (len) { case 3: crc = (crc >> 8) ^ Crc32Lookup[0][(crc & 0xFF) ^ *src++]; case 2: crc = (crc >> 8) ^ Crc32Lookup[0][(crc & 0xFF) ^ *src++]; case 1: crc = (crc >> 8) ^ Crc32Lookup[0][(crc & 0xFF) ^ *src++]; } return ~crc; } xmm_crc_part = _mm_loadu_si128((__m128i *)src); XOR_INITIAL(xmm_crc_part); goto partial; } /* this alignment computation would be wrong for len<16 handled above */ algn_diff = (0 - (uintptr_t)src) & 0xF; if (algn_diff) { xmm_crc_part = _mm_loadu_si128((__m128i *)src); XOR_INITIAL(xmm_crc_part); src += algn_diff; len -= algn_diff; partial_fold(algn_diff, &xmm_crc0, &xmm_crc1, &xmm_crc2, &xmm_crc3, &xmm_crc_part); } while ((len -= 64) >= 0) { xmm_t0 = _mm_load_si128((__m128i *)src); xmm_t1 = _mm_load_si128((__m128i *)src + 1); xmm_t2 = _mm_load_si128((__m128i *)src + 2); xmm_t3 = _mm_load_si128((__m128i *)src + 3); XOR_INITIAL(xmm_t0); fold_4(&xmm_crc0, &xmm_crc1, &xmm_crc2, &xmm_crc3); xmm_crc0 = _mm_xor_si128(xmm_crc0, xmm_t0); xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_t1); xmm_crc2 = _mm_xor_si128(xmm_crc2, xmm_t2); xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_t3); src += 64; } /* * len = num bytes left - 64 */ if (len + 16 >= 0) { len += 16; xmm_t0 = _mm_load_si128((__m128i *)src); xmm_t1 = _mm_load_si128((__m128i *)src + 1); xmm_t2 = _mm_load_si128((__m128i *)src + 2); XOR_INITIAL(xmm_t0); fold_3(&xmm_crc0, &xmm_crc1, &xmm_crc2, &xmm_crc3); xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_t0); xmm_crc2 = _mm_xor_si128(xmm_crc2, xmm_t1); xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_t2); if (len == 0) goto done; xmm_crc_part = _mm_load_si128((__m128i *)src + 3); } else if (len + 32 >= 0) { len += 32; xmm_t0 = _mm_load_si128((__m128i *)src); xmm_t1 = _mm_load_si128((__m128i *)src + 1); XOR_INITIAL(xmm_t0); fold_2(&xmm_crc0, &xmm_crc1, &xmm_crc2, &xmm_crc3); xmm_crc2 = _mm_xor_si128(xmm_crc2, xmm_t0); xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_t1); if (len == 0) goto done; xmm_crc_part = _mm_load_si128((__m128i *)src + 2); } else if (len + 48 >= 0) { len += 48; xmm_t0 = _mm_load_si128((__m128i *)src); XOR_INITIAL(xmm_t0); fold_1(&xmm_crc0, &xmm_crc1, &xmm_crc2, &xmm_crc3); xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_t0); if (len == 0) goto done; xmm_crc_part = _mm_load_si128((__m128i *)src + 1); } else { len += 64; if (len == 0) goto done; xmm_crc_part = _mm_load_si128((__m128i *)src); XOR_INITIAL(xmm_crc_part); } partial: partial_fold(len, &xmm_crc0, &xmm_crc1, &xmm_crc2, &xmm_crc3, &xmm_crc_part); done: (void)0; /* fold 512 to 32 */ /* * k1 */ crc_fold = _mm_load_si128((__m128i *)crc_k); x_tmp0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x10); xmm_crc0 = _mm_clmulepi64_si128(xmm_crc0, crc_fold, 0x01); xmm_crc1 = _mm_xor_si128(xmm_crc1, x_tmp0); xmm_crc1 = _mm_xor_si128(xmm_crc1, xmm_crc0); x_tmp1 = _mm_clmulepi64_si128(xmm_crc1, crc_fold, 0x10); xmm_crc1 = _mm_clmulepi64_si128(xmm_crc1, crc_fold, 0x01); xmm_crc2 = _mm_xor_si128(xmm_crc2, x_tmp1); xmm_crc2 = _mm_xor_si128(xmm_crc2, xmm_crc1); x_tmp2 = _mm_clmulepi64_si128(xmm_crc2, crc_fold, 0x10); xmm_crc2 = _mm_clmulepi64_si128(xmm_crc2, crc_fold, 0x01); xmm_crc3 = _mm_xor_si128(xmm_crc3, x_tmp2); xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_crc2); /* * k5 */ crc_fold = _mm_load_si128((__m128i *)crc_k + 1); xmm_crc0 = xmm_crc3; xmm_crc3 = _mm_clmulepi64_si128(xmm_crc3, crc_fold, 0); xmm_crc0 = _mm_srli_si128(xmm_crc0, 8); xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_crc0); xmm_crc0 = xmm_crc3; xmm_crc3 = _mm_slli_si128(xmm_crc3, 4); xmm_crc3 = _mm_clmulepi64_si128(xmm_crc3, crc_fold, 0x10); xmm_crc3 = _mm_xor_si128(xmm_crc3, 
xmm_crc0);
    xmm_crc3 = _mm_and_si128(xmm_crc3, xmm_mask2);

    /*
     * k7
     */
    xmm_crc1 = xmm_crc3;
    xmm_crc2 = xmm_crc3;
    crc_fold = _mm_load_si128((__m128i *)crc_k + 2);

    xmm_crc3 = _mm_clmulepi64_si128(xmm_crc3, crc_fold, 0);
    xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_crc2);
    xmm_crc3 = _mm_and_si128(xmm_crc3, xmm_mask);

    xmm_crc2 = xmm_crc3;
    xmm_crc3 = _mm_clmulepi64_si128(xmm_crc3, crc_fold, 0x10);
    xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_crc2);
    xmm_crc3 = _mm_xor_si128(xmm_crc3, xmm_crc1);

    /*
     * could just as well write xmm_crc3[2], doing a movaps and truncating, but
     * no real advantage - it's a tiny bit slower per call, while no additional CPUs
     * would be supported by only requiring SSSE3 and CLMUL instead of SSE4.1 + CLMUL
     */
    crc = _mm_extract_epi32(xmm_crc3, 2);
    return ~crc;
}
borgbackup-1.1.5/src/borg/algorithms/zstd-libselect.h0000644000175000017500000000012213257361140021732 0ustar twtw00000000000000#ifdef BORG_USE_LIBZSTD
#include <zstd.h>
#else
#include "zstd/lib/zstd.h"
#endif
borgbackup-1.1.5/src/borg/algorithms/crc32_slice_by_8.c0000644000175000017500000006634413257361140022033 0ustar twtw00000000000000// //////////////////////////////////////////////////////////
// Crc32.h
// Copyright (c) 2011-2016 Stephan Brumme. All rights reserved.
// see http://create.stephan-brumme.com/disclaimer.html
//

// uint8_t, uint32_t, int32_t
#include <stdint.h>
// size_t
#include <stddef.h>

#include "../_endian.h"

/// compute CRC32 (Slicing-by-8 algorithm), unroll inner loop 4 times
uint32_t crc32_4x8bytes(const void* data, size_t length, uint32_t previousCrc32);

// //////////////////////////////////////////////////////////
// Crc32.cpp
// Copyright (c) 2011-2016 Stephan Brumme. All rights reserved.
// Slicing-by-16 contributed by Bulat Ziganshin
// Tableless bytewise CRC contributed by Hagai Gold
// see http://create.stephan-brumme.com/disclaimer.html
//

/// zlib's CRC32 polynomial
const uint32_t Polynomial = 0xEDB88320;

// //////////////////////////////////////////////////////////
// constants

/// look-up table, already declared above
const uint32_t Crc32Lookup[8][256] =
{
  //// same algorithm as crc32_bitwise
  //for (int i = 0; i <= 0xFF; i++)
  //{
  //  uint32_t crc = i;
  //  for (int j = 0; j < 8; j++)
  //    crc = (crc >> 1) ^ ((crc & 1) * Polynomial);
  //  Crc32Lookup[0][i] = crc;
  //}
  //// ... and the following slicing-by-8 algorithm (from Intel):
  //// http://www.intel.com/technology/comms/perfnet/download/CRC_generators.pdf
  //// http://sourceforge.net/projects/slicing-by-8/
  //for (int slice = 1; slice < MaxSlice; slice++)
  //  Crc32Lookup[slice][i] = (Crc32Lookup[slice - 1][i] >> 8) ^ Crc32Lookup[0][Crc32Lookup[slice - 1][i] & 0xFF];
  {
    // note: the first number of every second row corresponds to the half-byte look-up table !
0x00000000,0x77073096,0xEE0E612C,0x990951BA,0x076DC419,0x706AF48F,0xE963A535,0x9E6495A3, 0x0EDB8832,0x79DCB8A4,0xE0D5E91E,0x97D2D988,0x09B64C2B,0x7EB17CBD,0xE7B82D07,0x90BF1D91, 0x1DB71064,0x6AB020F2,0xF3B97148,0x84BE41DE,0x1ADAD47D,0x6DDDE4EB,0xF4D4B551,0x83D385C7, 0x136C9856,0x646BA8C0,0xFD62F97A,0x8A65C9EC,0x14015C4F,0x63066CD9,0xFA0F3D63,0x8D080DF5, 0x3B6E20C8,0x4C69105E,0xD56041E4,0xA2677172,0x3C03E4D1,0x4B04D447,0xD20D85FD,0xA50AB56B, 0x35B5A8FA,0x42B2986C,0xDBBBC9D6,0xACBCF940,0x32D86CE3,0x45DF5C75,0xDCD60DCF,0xABD13D59, 0x26D930AC,0x51DE003A,0xC8D75180,0xBFD06116,0x21B4F4B5,0x56B3C423,0xCFBA9599,0xB8BDA50F, 0x2802B89E,0x5F058808,0xC60CD9B2,0xB10BE924,0x2F6F7C87,0x58684C11,0xC1611DAB,0xB6662D3D, 0x76DC4190,0x01DB7106,0x98D220BC,0xEFD5102A,0x71B18589,0x06B6B51F,0x9FBFE4A5,0xE8B8D433, 0x7807C9A2,0x0F00F934,0x9609A88E,0xE10E9818,0x7F6A0DBB,0x086D3D2D,0x91646C97,0xE6635C01, 0x6B6B51F4,0x1C6C6162,0x856530D8,0xF262004E,0x6C0695ED,0x1B01A57B,0x8208F4C1,0xF50FC457, 0x65B0D9C6,0x12B7E950,0x8BBEB8EA,0xFCB9887C,0x62DD1DDF,0x15DA2D49,0x8CD37CF3,0xFBD44C65, 0x4DB26158,0x3AB551CE,0xA3BC0074,0xD4BB30E2,0x4ADFA541,0x3DD895D7,0xA4D1C46D,0xD3D6F4FB, 0x4369E96A,0x346ED9FC,0xAD678846,0xDA60B8D0,0x44042D73,0x33031DE5,0xAA0A4C5F,0xDD0D7CC9, 0x5005713C,0x270241AA,0xBE0B1010,0xC90C2086,0x5768B525,0x206F85B3,0xB966D409,0xCE61E49F, 0x5EDEF90E,0x29D9C998,0xB0D09822,0xC7D7A8B4,0x59B33D17,0x2EB40D81,0xB7BD5C3B,0xC0BA6CAD, 0xEDB88320,0x9ABFB3B6,0x03B6E20C,0x74B1D29A,0xEAD54739,0x9DD277AF,0x04DB2615,0x73DC1683, 0xE3630B12,0x94643B84,0x0D6D6A3E,0x7A6A5AA8,0xE40ECF0B,0x9309FF9D,0x0A00AE27,0x7D079EB1, 0xF00F9344,0x8708A3D2,0x1E01F268,0x6906C2FE,0xF762575D,0x806567CB,0x196C3671,0x6E6B06E7, 0xFED41B76,0x89D32BE0,0x10DA7A5A,0x67DD4ACC,0xF9B9DF6F,0x8EBEEFF9,0x17B7BE43,0x60B08ED5, 0xD6D6A3E8,0xA1D1937E,0x38D8C2C4,0x4FDFF252,0xD1BB67F1,0xA6BC5767,0x3FB506DD,0x48B2364B, 0xD80D2BDA,0xAF0A1B4C,0x36034AF6,0x41047A60,0xDF60EFC3,0xA867DF55,0x316E8EEF,0x4669BE79, 0xCB61B38C,0xBC66831A,0x256FD2A0,0x5268E236,0xCC0C7795,0xBB0B4703,0x220216B9,0x5505262F, 0xC5BA3BBE,0xB2BD0B28,0x2BB45A92,0x5CB36A04,0xC2D7FFA7,0xB5D0CF31,0x2CD99E8B,0x5BDEAE1D, 0x9B64C2B0,0xEC63F226,0x756AA39C,0x026D930A,0x9C0906A9,0xEB0E363F,0x72076785,0x05005713, 0x95BF4A82,0xE2B87A14,0x7BB12BAE,0x0CB61B38,0x92D28E9B,0xE5D5BE0D,0x7CDCEFB7,0x0BDBDF21, 0x86D3D2D4,0xF1D4E242,0x68DDB3F8,0x1FDA836E,0x81BE16CD,0xF6B9265B,0x6FB077E1,0x18B74777, 0x88085AE6,0xFF0F6A70,0x66063BCA,0x11010B5C,0x8F659EFF,0xF862AE69,0x616BFFD3,0x166CCF45, 0xA00AE278,0xD70DD2EE,0x4E048354,0x3903B3C2,0xA7672661,0xD06016F7,0x4969474D,0x3E6E77DB, 0xAED16A4A,0xD9D65ADC,0x40DF0B66,0x37D83BF0,0xA9BCAE53,0xDEBB9EC5,0x47B2CF7F,0x30B5FFE9, 0xBDBDF21C,0xCABAC28A,0x53B39330,0x24B4A3A6,0xBAD03605,0xCDD70693,0x54DE5729,0x23D967BF, 0xB3667A2E,0xC4614AB8,0x5D681B02,0x2A6F2B94,0xB40BBE37,0xC30C8EA1,0x5A05DF1B,0x2D02EF8D, } ,{ 0x00000000,0x191B3141,0x32366282,0x2B2D53C3,0x646CC504,0x7D77F445,0x565AA786,0x4F4196C7, 0xC8D98A08,0xD1C2BB49,0xFAEFE88A,0xE3F4D9CB,0xACB54F0C,0xB5AE7E4D,0x9E832D8E,0x87981CCF, 0x4AC21251,0x53D92310,0x78F470D3,0x61EF4192,0x2EAED755,0x37B5E614,0x1C98B5D7,0x05838496, 0x821B9859,0x9B00A918,0xB02DFADB,0xA936CB9A,0xE6775D5D,0xFF6C6C1C,0xD4413FDF,0xCD5A0E9E, 0x958424A2,0x8C9F15E3,0xA7B24620,0xBEA97761,0xF1E8E1A6,0xE8F3D0E7,0xC3DE8324,0xDAC5B265, 0x5D5DAEAA,0x44469FEB,0x6F6BCC28,0x7670FD69,0x39316BAE,0x202A5AEF,0x0B07092C,0x121C386D, 0xDF4636F3,0xC65D07B2,0xED705471,0xF46B6530,0xBB2AF3F7,0xA231C2B6,0x891C9175,0x9007A034, 
0x179FBCFB,0x0E848DBA,0x25A9DE79,0x3CB2EF38,0x73F379FF,0x6AE848BE,0x41C51B7D,0x58DE2A3C, 0xF0794F05,0xE9627E44,0xC24F2D87,0xDB541CC6,0x94158A01,0x8D0EBB40,0xA623E883,0xBF38D9C2, 0x38A0C50D,0x21BBF44C,0x0A96A78F,0x138D96CE,0x5CCC0009,0x45D73148,0x6EFA628B,0x77E153CA, 0xBABB5D54,0xA3A06C15,0x888D3FD6,0x91960E97,0xDED79850,0xC7CCA911,0xECE1FAD2,0xF5FACB93, 0x7262D75C,0x6B79E61D,0x4054B5DE,0x594F849F,0x160E1258,0x0F152319,0x243870DA,0x3D23419B, 0x65FD6BA7,0x7CE65AE6,0x57CB0925,0x4ED03864,0x0191AEA3,0x188A9FE2,0x33A7CC21,0x2ABCFD60, 0xAD24E1AF,0xB43FD0EE,0x9F12832D,0x8609B26C,0xC94824AB,0xD05315EA,0xFB7E4629,0xE2657768, 0x2F3F79F6,0x362448B7,0x1D091B74,0x04122A35,0x4B53BCF2,0x52488DB3,0x7965DE70,0x607EEF31, 0xE7E6F3FE,0xFEFDC2BF,0xD5D0917C,0xCCCBA03D,0x838A36FA,0x9A9107BB,0xB1BC5478,0xA8A76539, 0x3B83984B,0x2298A90A,0x09B5FAC9,0x10AECB88,0x5FEF5D4F,0x46F46C0E,0x6DD93FCD,0x74C20E8C, 0xF35A1243,0xEA412302,0xC16C70C1,0xD8774180,0x9736D747,0x8E2DE606,0xA500B5C5,0xBC1B8484, 0x71418A1A,0x685ABB5B,0x4377E898,0x5A6CD9D9,0x152D4F1E,0x0C367E5F,0x271B2D9C,0x3E001CDD, 0xB9980012,0xA0833153,0x8BAE6290,0x92B553D1,0xDDF4C516,0xC4EFF457,0xEFC2A794,0xF6D996D5, 0xAE07BCE9,0xB71C8DA8,0x9C31DE6B,0x852AEF2A,0xCA6B79ED,0xD37048AC,0xF85D1B6F,0xE1462A2E, 0x66DE36E1,0x7FC507A0,0x54E85463,0x4DF36522,0x02B2F3E5,0x1BA9C2A4,0x30849167,0x299FA026, 0xE4C5AEB8,0xFDDE9FF9,0xD6F3CC3A,0xCFE8FD7B,0x80A96BBC,0x99B25AFD,0xB29F093E,0xAB84387F, 0x2C1C24B0,0x350715F1,0x1E2A4632,0x07317773,0x4870E1B4,0x516BD0F5,0x7A468336,0x635DB277, 0xCBFAD74E,0xD2E1E60F,0xF9CCB5CC,0xE0D7848D,0xAF96124A,0xB68D230B,0x9DA070C8,0x84BB4189, 0x03235D46,0x1A386C07,0x31153FC4,0x280E0E85,0x674F9842,0x7E54A903,0x5579FAC0,0x4C62CB81, 0x8138C51F,0x9823F45E,0xB30EA79D,0xAA1596DC,0xE554001B,0xFC4F315A,0xD7626299,0xCE7953D8, 0x49E14F17,0x50FA7E56,0x7BD72D95,0x62CC1CD4,0x2D8D8A13,0x3496BB52,0x1FBBE891,0x06A0D9D0, 0x5E7EF3EC,0x4765C2AD,0x6C48916E,0x7553A02F,0x3A1236E8,0x230907A9,0x0824546A,0x113F652B, 0x96A779E4,0x8FBC48A5,0xA4911B66,0xBD8A2A27,0xF2CBBCE0,0xEBD08DA1,0xC0FDDE62,0xD9E6EF23, 0x14BCE1BD,0x0DA7D0FC,0x268A833F,0x3F91B27E,0x70D024B9,0x69CB15F8,0x42E6463B,0x5BFD777A, 0xDC656BB5,0xC57E5AF4,0xEE530937,0xF7483876,0xB809AEB1,0xA1129FF0,0x8A3FCC33,0x9324FD72, }, { 0x00000000,0x01C26A37,0x0384D46E,0x0246BE59,0x0709A8DC,0x06CBC2EB,0x048D7CB2,0x054F1685, 0x0E1351B8,0x0FD13B8F,0x0D9785D6,0x0C55EFE1,0x091AF964,0x08D89353,0x0A9E2D0A,0x0B5C473D, 0x1C26A370,0x1DE4C947,0x1FA2771E,0x1E601D29,0x1B2F0BAC,0x1AED619B,0x18ABDFC2,0x1969B5F5, 0x1235F2C8,0x13F798FF,0x11B126A6,0x10734C91,0x153C5A14,0x14FE3023,0x16B88E7A,0x177AE44D, 0x384D46E0,0x398F2CD7,0x3BC9928E,0x3A0BF8B9,0x3F44EE3C,0x3E86840B,0x3CC03A52,0x3D025065, 0x365E1758,0x379C7D6F,0x35DAC336,0x3418A901,0x3157BF84,0x3095D5B3,0x32D36BEA,0x331101DD, 0x246BE590,0x25A98FA7,0x27EF31FE,0x262D5BC9,0x23624D4C,0x22A0277B,0x20E69922,0x2124F315, 0x2A78B428,0x2BBADE1F,0x29FC6046,0x283E0A71,0x2D711CF4,0x2CB376C3,0x2EF5C89A,0x2F37A2AD, 0x709A8DC0,0x7158E7F7,0x731E59AE,0x72DC3399,0x7793251C,0x76514F2B,0x7417F172,0x75D59B45, 0x7E89DC78,0x7F4BB64F,0x7D0D0816,0x7CCF6221,0x798074A4,0x78421E93,0x7A04A0CA,0x7BC6CAFD, 0x6CBC2EB0,0x6D7E4487,0x6F38FADE,0x6EFA90E9,0x6BB5866C,0x6A77EC5B,0x68315202,0x69F33835, 0x62AF7F08,0x636D153F,0x612BAB66,0x60E9C151,0x65A6D7D4,0x6464BDE3,0x662203BA,0x67E0698D, 0x48D7CB20,0x4915A117,0x4B531F4E,0x4A917579,0x4FDE63FC,0x4E1C09CB,0x4C5AB792,0x4D98DDA5, 0x46C49A98,0x4706F0AF,0x45404EF6,0x448224C1,0x41CD3244,0x400F5873,0x4249E62A,0x438B8C1D, 
0x54F16850,0x55330267,0x5775BC3E,0x56B7D609,0x53F8C08C,0x523AAABB,0x507C14E2,0x51BE7ED5, 0x5AE239E8,0x5B2053DF,0x5966ED86,0x58A487B1,0x5DEB9134,0x5C29FB03,0x5E6F455A,0x5FAD2F6D, 0xE1351B80,0xE0F771B7,0xE2B1CFEE,0xE373A5D9,0xE63CB35C,0xE7FED96B,0xE5B86732,0xE47A0D05, 0xEF264A38,0xEEE4200F,0xECA29E56,0xED60F461,0xE82FE2E4,0xE9ED88D3,0xEBAB368A,0xEA695CBD, 0xFD13B8F0,0xFCD1D2C7,0xFE976C9E,0xFF5506A9,0xFA1A102C,0xFBD87A1B,0xF99EC442,0xF85CAE75, 0xF300E948,0xF2C2837F,0xF0843D26,0xF1465711,0xF4094194,0xF5CB2BA3,0xF78D95FA,0xF64FFFCD, 0xD9785D60,0xD8BA3757,0xDAFC890E,0xDB3EE339,0xDE71F5BC,0xDFB39F8B,0xDDF521D2,0xDC374BE5, 0xD76B0CD8,0xD6A966EF,0xD4EFD8B6,0xD52DB281,0xD062A404,0xD1A0CE33,0xD3E6706A,0xD2241A5D, 0xC55EFE10,0xC49C9427,0xC6DA2A7E,0xC7184049,0xC25756CC,0xC3953CFB,0xC1D382A2,0xC011E895, 0xCB4DAFA8,0xCA8FC59F,0xC8C97BC6,0xC90B11F1,0xCC440774,0xCD866D43,0xCFC0D31A,0xCE02B92D, 0x91AF9640,0x906DFC77,0x922B422E,0x93E92819,0x96A63E9C,0x976454AB,0x9522EAF2,0x94E080C5, 0x9FBCC7F8,0x9E7EADCF,0x9C381396,0x9DFA79A1,0x98B56F24,0x99770513,0x9B31BB4A,0x9AF3D17D, 0x8D893530,0x8C4B5F07,0x8E0DE15E,0x8FCF8B69,0x8A809DEC,0x8B42F7DB,0x89044982,0x88C623B5, 0x839A6488,0x82580EBF,0x801EB0E6,0x81DCDAD1,0x8493CC54,0x8551A663,0x8717183A,0x86D5720D, 0xA9E2D0A0,0xA820BA97,0xAA6604CE,0xABA46EF9,0xAEEB787C,0xAF29124B,0xAD6FAC12,0xACADC625, 0xA7F18118,0xA633EB2F,0xA4755576,0xA5B73F41,0xA0F829C4,0xA13A43F3,0xA37CFDAA,0xA2BE979D, 0xB5C473D0,0xB40619E7,0xB640A7BE,0xB782CD89,0xB2CDDB0C,0xB30FB13B,0xB1490F62,0xB08B6555, 0xBBD72268,0xBA15485F,0xB853F606,0xB9919C31,0xBCDE8AB4,0xBD1CE083,0xBF5A5EDA,0xBE9834ED, }, { 0x00000000,0xB8BC6765,0xAA09C88B,0x12B5AFEE,0x8F629757,0x37DEF032,0x256B5FDC,0x9DD738B9, 0xC5B428EF,0x7D084F8A,0x6FBDE064,0xD7018701,0x4AD6BFB8,0xF26AD8DD,0xE0DF7733,0x58631056, 0x5019579F,0xE8A530FA,0xFA109F14,0x42ACF871,0xDF7BC0C8,0x67C7A7AD,0x75720843,0xCDCE6F26, 0x95AD7F70,0x2D111815,0x3FA4B7FB,0x8718D09E,0x1ACFE827,0xA2738F42,0xB0C620AC,0x087A47C9, 0xA032AF3E,0x188EC85B,0x0A3B67B5,0xB28700D0,0x2F503869,0x97EC5F0C,0x8559F0E2,0x3DE59787, 0x658687D1,0xDD3AE0B4,0xCF8F4F5A,0x7733283F,0xEAE41086,0x525877E3,0x40EDD80D,0xF851BF68, 0xF02BF8A1,0x48979FC4,0x5A22302A,0xE29E574F,0x7F496FF6,0xC7F50893,0xD540A77D,0x6DFCC018, 0x359FD04E,0x8D23B72B,0x9F9618C5,0x272A7FA0,0xBAFD4719,0x0241207C,0x10F48F92,0xA848E8F7, 0x9B14583D,0x23A83F58,0x311D90B6,0x89A1F7D3,0x1476CF6A,0xACCAA80F,0xBE7F07E1,0x06C36084, 0x5EA070D2,0xE61C17B7,0xF4A9B859,0x4C15DF3C,0xD1C2E785,0x697E80E0,0x7BCB2F0E,0xC377486B, 0xCB0D0FA2,0x73B168C7,0x6104C729,0xD9B8A04C,0x446F98F5,0xFCD3FF90,0xEE66507E,0x56DA371B, 0x0EB9274D,0xB6054028,0xA4B0EFC6,0x1C0C88A3,0x81DBB01A,0x3967D77F,0x2BD27891,0x936E1FF4, 0x3B26F703,0x839A9066,0x912F3F88,0x299358ED,0xB4446054,0x0CF80731,0x1E4DA8DF,0xA6F1CFBA, 0xFE92DFEC,0x462EB889,0x549B1767,0xEC277002,0x71F048BB,0xC94C2FDE,0xDBF98030,0x6345E755, 0x6B3FA09C,0xD383C7F9,0xC1366817,0x798A0F72,0xE45D37CB,0x5CE150AE,0x4E54FF40,0xF6E89825, 0xAE8B8873,0x1637EF16,0x048240F8,0xBC3E279D,0x21E91F24,0x99557841,0x8BE0D7AF,0x335CB0CA, 0xED59B63B,0x55E5D15E,0x47507EB0,0xFFEC19D5,0x623B216C,0xDA874609,0xC832E9E7,0x708E8E82, 0x28ED9ED4,0x9051F9B1,0x82E4565F,0x3A58313A,0xA78F0983,0x1F336EE6,0x0D86C108,0xB53AA66D, 0xBD40E1A4,0x05FC86C1,0x1749292F,0xAFF54E4A,0x322276F3,0x8A9E1196,0x982BBE78,0x2097D91D, 0x78F4C94B,0xC048AE2E,0xD2FD01C0,0x6A4166A5,0xF7965E1C,0x4F2A3979,0x5D9F9697,0xE523F1F2, 0x4D6B1905,0xF5D77E60,0xE762D18E,0x5FDEB6EB,0xC2098E52,0x7AB5E937,0x680046D9,0xD0BC21BC, 
0x88DF31EA,0x3063568F,0x22D6F961,0x9A6A9E04,0x07BDA6BD,0xBF01C1D8,0xADB46E36,0x15080953, 0x1D724E9A,0xA5CE29FF,0xB77B8611,0x0FC7E174,0x9210D9CD,0x2AACBEA8,0x38191146,0x80A57623, 0xD8C66675,0x607A0110,0x72CFAEFE,0xCA73C99B,0x57A4F122,0xEF189647,0xFDAD39A9,0x45115ECC, 0x764DEE06,0xCEF18963,0xDC44268D,0x64F841E8,0xF92F7951,0x41931E34,0x5326B1DA,0xEB9AD6BF, 0xB3F9C6E9,0x0B45A18C,0x19F00E62,0xA14C6907,0x3C9B51BE,0x842736DB,0x96929935,0x2E2EFE50, 0x2654B999,0x9EE8DEFC,0x8C5D7112,0x34E11677,0xA9362ECE,0x118A49AB,0x033FE645,0xBB838120, 0xE3E09176,0x5B5CF613,0x49E959FD,0xF1553E98,0x6C820621,0xD43E6144,0xC68BCEAA,0x7E37A9CF, 0xD67F4138,0x6EC3265D,0x7C7689B3,0xC4CAEED6,0x591DD66F,0xE1A1B10A,0xF3141EE4,0x4BA87981, 0x13CB69D7,0xAB770EB2,0xB9C2A15C,0x017EC639,0x9CA9FE80,0x241599E5,0x36A0360B,0x8E1C516E, 0x866616A7,0x3EDA71C2,0x2C6FDE2C,0x94D3B949,0x090481F0,0xB1B8E695,0xA30D497B,0x1BB12E1E, 0x43D23E48,0xFB6E592D,0xE9DBF6C3,0x516791A6,0xCCB0A91F,0x740CCE7A,0x66B96194,0xDE0506F1, } ,{ 0x00000000,0x3D6029B0,0x7AC05360,0x47A07AD0,0xF580A6C0,0xC8E08F70,0x8F40F5A0,0xB220DC10, 0x30704BC1,0x0D106271,0x4AB018A1,0x77D03111,0xC5F0ED01,0xF890C4B1,0xBF30BE61,0x825097D1, 0x60E09782,0x5D80BE32,0x1A20C4E2,0x2740ED52,0x95603142,0xA80018F2,0xEFA06222,0xD2C04B92, 0x5090DC43,0x6DF0F5F3,0x2A508F23,0x1730A693,0xA5107A83,0x98705333,0xDFD029E3,0xE2B00053, 0xC1C12F04,0xFCA106B4,0xBB017C64,0x866155D4,0x344189C4,0x0921A074,0x4E81DAA4,0x73E1F314, 0xF1B164C5,0xCCD14D75,0x8B7137A5,0xB6111E15,0x0431C205,0x3951EBB5,0x7EF19165,0x4391B8D5, 0xA121B886,0x9C419136,0xDBE1EBE6,0xE681C256,0x54A11E46,0x69C137F6,0x2E614D26,0x13016496, 0x9151F347,0xAC31DAF7,0xEB91A027,0xD6F18997,0x64D15587,0x59B17C37,0x1E1106E7,0x23712F57, 0x58F35849,0x659371F9,0x22330B29,0x1F532299,0xAD73FE89,0x9013D739,0xD7B3ADE9,0xEAD38459, 0x68831388,0x55E33A38,0x124340E8,0x2F236958,0x9D03B548,0xA0639CF8,0xE7C3E628,0xDAA3CF98, 0x3813CFCB,0x0573E67B,0x42D39CAB,0x7FB3B51B,0xCD93690B,0xF0F340BB,0xB7533A6B,0x8A3313DB, 0x0863840A,0x3503ADBA,0x72A3D76A,0x4FC3FEDA,0xFDE322CA,0xC0830B7A,0x872371AA,0xBA43581A, 0x9932774D,0xA4525EFD,0xE3F2242D,0xDE920D9D,0x6CB2D18D,0x51D2F83D,0x167282ED,0x2B12AB5D, 0xA9423C8C,0x9422153C,0xD3826FEC,0xEEE2465C,0x5CC29A4C,0x61A2B3FC,0x2602C92C,0x1B62E09C, 0xF9D2E0CF,0xC4B2C97F,0x8312B3AF,0xBE729A1F,0x0C52460F,0x31326FBF,0x7692156F,0x4BF23CDF, 0xC9A2AB0E,0xF4C282BE,0xB362F86E,0x8E02D1DE,0x3C220DCE,0x0142247E,0x46E25EAE,0x7B82771E, 0xB1E6B092,0x8C869922,0xCB26E3F2,0xF646CA42,0x44661652,0x79063FE2,0x3EA64532,0x03C66C82, 0x8196FB53,0xBCF6D2E3,0xFB56A833,0xC6368183,0x74165D93,0x49767423,0x0ED60EF3,0x33B62743, 0xD1062710,0xEC660EA0,0xABC67470,0x96A65DC0,0x248681D0,0x19E6A860,0x5E46D2B0,0x6326FB00, 0xE1766CD1,0xDC164561,0x9BB63FB1,0xA6D61601,0x14F6CA11,0x2996E3A1,0x6E369971,0x5356B0C1, 0x70279F96,0x4D47B626,0x0AE7CCF6,0x3787E546,0x85A73956,0xB8C710E6,0xFF676A36,0xC2074386, 0x4057D457,0x7D37FDE7,0x3A978737,0x07F7AE87,0xB5D77297,0x88B75B27,0xCF1721F7,0xF2770847, 0x10C70814,0x2DA721A4,0x6A075B74,0x576772C4,0xE547AED4,0xD8278764,0x9F87FDB4,0xA2E7D404, 0x20B743D5,0x1DD76A65,0x5A7710B5,0x67173905,0xD537E515,0xE857CCA5,0xAFF7B675,0x92979FC5, 0xE915E8DB,0xD475C16B,0x93D5BBBB,0xAEB5920B,0x1C954E1B,0x21F567AB,0x66551D7B,0x5B3534CB, 0xD965A31A,0xE4058AAA,0xA3A5F07A,0x9EC5D9CA,0x2CE505DA,0x11852C6A,0x562556BA,0x6B457F0A, 0x89F57F59,0xB49556E9,0xF3352C39,0xCE550589,0x7C75D999,0x4115F029,0x06B58AF9,0x3BD5A349, 0xB9853498,0x84E51D28,0xC34567F8,0xFE254E48,0x4C059258,0x7165BBE8,0x36C5C138,0x0BA5E888, 
0x28D4C7DF,0x15B4EE6F,0x521494BF,0x6F74BD0F,0xDD54611F,0xE03448AF,0xA794327F,0x9AF41BCF, 0x18A48C1E,0x25C4A5AE,0x6264DF7E,0x5F04F6CE,0xED242ADE,0xD044036E,0x97E479BE,0xAA84500E, 0x4834505D,0x755479ED,0x32F4033D,0x0F942A8D,0xBDB4F69D,0x80D4DF2D,0xC774A5FD,0xFA148C4D, 0x78441B9C,0x4524322C,0x028448FC,0x3FE4614C,0x8DC4BD5C,0xB0A494EC,0xF704EE3C,0xCA64C78C, }, { 0x00000000,0xCB5CD3A5,0x4DC8A10B,0x869472AE,0x9B914216,0x50CD91B3,0xD659E31D,0x1D0530B8, 0xEC53826D,0x270F51C8,0xA19B2366,0x6AC7F0C3,0x77C2C07B,0xBC9E13DE,0x3A0A6170,0xF156B2D5, 0x03D6029B,0xC88AD13E,0x4E1EA390,0x85427035,0x9847408D,0x531B9328,0xD58FE186,0x1ED33223, 0xEF8580F6,0x24D95353,0xA24D21FD,0x6911F258,0x7414C2E0,0xBF481145,0x39DC63EB,0xF280B04E, 0x07AC0536,0xCCF0D693,0x4A64A43D,0x81387798,0x9C3D4720,0x57619485,0xD1F5E62B,0x1AA9358E, 0xEBFF875B,0x20A354FE,0xA6372650,0x6D6BF5F5,0x706EC54D,0xBB3216E8,0x3DA66446,0xF6FAB7E3, 0x047A07AD,0xCF26D408,0x49B2A6A6,0x82EE7503,0x9FEB45BB,0x54B7961E,0xD223E4B0,0x197F3715, 0xE82985C0,0x23755665,0xA5E124CB,0x6EBDF76E,0x73B8C7D6,0xB8E41473,0x3E7066DD,0xF52CB578, 0x0F580A6C,0xC404D9C9,0x4290AB67,0x89CC78C2,0x94C9487A,0x5F959BDF,0xD901E971,0x125D3AD4, 0xE30B8801,0x28575BA4,0xAEC3290A,0x659FFAAF,0x789ACA17,0xB3C619B2,0x35526B1C,0xFE0EB8B9, 0x0C8E08F7,0xC7D2DB52,0x4146A9FC,0x8A1A7A59,0x971F4AE1,0x5C439944,0xDAD7EBEA,0x118B384F, 0xE0DD8A9A,0x2B81593F,0xAD152B91,0x6649F834,0x7B4CC88C,0xB0101B29,0x36846987,0xFDD8BA22, 0x08F40F5A,0xC3A8DCFF,0x453CAE51,0x8E607DF4,0x93654D4C,0x58399EE9,0xDEADEC47,0x15F13FE2, 0xE4A78D37,0x2FFB5E92,0xA96F2C3C,0x6233FF99,0x7F36CF21,0xB46A1C84,0x32FE6E2A,0xF9A2BD8F, 0x0B220DC1,0xC07EDE64,0x46EAACCA,0x8DB67F6F,0x90B34FD7,0x5BEF9C72,0xDD7BEEDC,0x16273D79, 0xE7718FAC,0x2C2D5C09,0xAAB92EA7,0x61E5FD02,0x7CE0CDBA,0xB7BC1E1F,0x31286CB1,0xFA74BF14, 0x1EB014D8,0xD5ECC77D,0x5378B5D3,0x98246676,0x852156CE,0x4E7D856B,0xC8E9F7C5,0x03B52460, 0xF2E396B5,0x39BF4510,0xBF2B37BE,0x7477E41B,0x6972D4A3,0xA22E0706,0x24BA75A8,0xEFE6A60D, 0x1D661643,0xD63AC5E6,0x50AEB748,0x9BF264ED,0x86F75455,0x4DAB87F0,0xCB3FF55E,0x006326FB, 0xF135942E,0x3A69478B,0xBCFD3525,0x77A1E680,0x6AA4D638,0xA1F8059D,0x276C7733,0xEC30A496, 0x191C11EE,0xD240C24B,0x54D4B0E5,0x9F886340,0x828D53F8,0x49D1805D,0xCF45F2F3,0x04192156, 0xF54F9383,0x3E134026,0xB8873288,0x73DBE12D,0x6EDED195,0xA5820230,0x2316709E,0xE84AA33B, 0x1ACA1375,0xD196C0D0,0x5702B27E,0x9C5E61DB,0x815B5163,0x4A0782C6,0xCC93F068,0x07CF23CD, 0xF6999118,0x3DC542BD,0xBB513013,0x700DE3B6,0x6D08D30E,0xA65400AB,0x20C07205,0xEB9CA1A0, 0x11E81EB4,0xDAB4CD11,0x5C20BFBF,0x977C6C1A,0x8A795CA2,0x41258F07,0xC7B1FDA9,0x0CED2E0C, 0xFDBB9CD9,0x36E74F7C,0xB0733DD2,0x7B2FEE77,0x662ADECF,0xAD760D6A,0x2BE27FC4,0xE0BEAC61, 0x123E1C2F,0xD962CF8A,0x5FF6BD24,0x94AA6E81,0x89AF5E39,0x42F38D9C,0xC467FF32,0x0F3B2C97, 0xFE6D9E42,0x35314DE7,0xB3A53F49,0x78F9ECEC,0x65FCDC54,0xAEA00FF1,0x28347D5F,0xE368AEFA, 0x16441B82,0xDD18C827,0x5B8CBA89,0x90D0692C,0x8DD55994,0x46898A31,0xC01DF89F,0x0B412B3A, 0xFA1799EF,0x314B4A4A,0xB7DF38E4,0x7C83EB41,0x6186DBF9,0xAADA085C,0x2C4E7AF2,0xE712A957, 0x15921919,0xDECECABC,0x585AB812,0x93066BB7,0x8E035B0F,0x455F88AA,0xC3CBFA04,0x089729A1, 0xF9C19B74,0x329D48D1,0xB4093A7F,0x7F55E9DA,0x6250D962,0xA90C0AC7,0x2F987869,0xE4C4ABCC, }, { 0x00000000,0xA6770BB4,0x979F1129,0x31E81A9D,0xF44F2413,0x52382FA7,0x63D0353A,0xC5A73E8E, 0x33EF4E67,0x959845D3,0xA4705F4E,0x020754FA,0xC7A06A74,0x61D761C0,0x503F7B5D,0xF64870E9, 0x67DE9CCE,0xC1A9977A,0xF0418DE7,0x56368653,0x9391B8DD,0x35E6B369,0x040EA9F4,0xA279A240, 
0x5431D2A9,0xF246D91D,0xC3AEC380,0x65D9C834,0xA07EF6BA,0x0609FD0E,0x37E1E793,0x9196EC27, 0xCFBD399C,0x69CA3228,0x582228B5,0xFE552301,0x3BF21D8F,0x9D85163B,0xAC6D0CA6,0x0A1A0712, 0xFC5277FB,0x5A257C4F,0x6BCD66D2,0xCDBA6D66,0x081D53E8,0xAE6A585C,0x9F8242C1,0x39F54975, 0xA863A552,0x0E14AEE6,0x3FFCB47B,0x998BBFCF,0x5C2C8141,0xFA5B8AF5,0xCBB39068,0x6DC49BDC, 0x9B8CEB35,0x3DFBE081,0x0C13FA1C,0xAA64F1A8,0x6FC3CF26,0xC9B4C492,0xF85CDE0F,0x5E2BD5BB, 0x440B7579,0xE27C7ECD,0xD3946450,0x75E36FE4,0xB044516A,0x16335ADE,0x27DB4043,0x81AC4BF7, 0x77E43B1E,0xD19330AA,0xE07B2A37,0x460C2183,0x83AB1F0D,0x25DC14B9,0x14340E24,0xB2430590, 0x23D5E9B7,0x85A2E203,0xB44AF89E,0x123DF32A,0xD79ACDA4,0x71EDC610,0x4005DC8D,0xE672D739, 0x103AA7D0,0xB64DAC64,0x87A5B6F9,0x21D2BD4D,0xE47583C3,0x42028877,0x73EA92EA,0xD59D995E, 0x8BB64CE5,0x2DC14751,0x1C295DCC,0xBA5E5678,0x7FF968F6,0xD98E6342,0xE86679DF,0x4E11726B, 0xB8590282,0x1E2E0936,0x2FC613AB,0x89B1181F,0x4C162691,0xEA612D25,0xDB8937B8,0x7DFE3C0C, 0xEC68D02B,0x4A1FDB9F,0x7BF7C102,0xDD80CAB6,0x1827F438,0xBE50FF8C,0x8FB8E511,0x29CFEEA5, 0xDF879E4C,0x79F095F8,0x48188F65,0xEE6F84D1,0x2BC8BA5F,0x8DBFB1EB,0xBC57AB76,0x1A20A0C2, 0x8816EAF2,0x2E61E146,0x1F89FBDB,0xB9FEF06F,0x7C59CEE1,0xDA2EC555,0xEBC6DFC8,0x4DB1D47C, 0xBBF9A495,0x1D8EAF21,0x2C66B5BC,0x8A11BE08,0x4FB68086,0xE9C18B32,0xD82991AF,0x7E5E9A1B, 0xEFC8763C,0x49BF7D88,0x78576715,0xDE206CA1,0x1B87522F,0xBDF0599B,0x8C184306,0x2A6F48B2, 0xDC27385B,0x7A5033EF,0x4BB82972,0xEDCF22C6,0x28681C48,0x8E1F17FC,0xBFF70D61,0x198006D5, 0x47ABD36E,0xE1DCD8DA,0xD034C247,0x7643C9F3,0xB3E4F77D,0x1593FCC9,0x247BE654,0x820CEDE0, 0x74449D09,0xD23396BD,0xE3DB8C20,0x45AC8794,0x800BB91A,0x267CB2AE,0x1794A833,0xB1E3A387, 0x20754FA0,0x86024414,0xB7EA5E89,0x119D553D,0xD43A6BB3,0x724D6007,0x43A57A9A,0xE5D2712E, 0x139A01C7,0xB5ED0A73,0x840510EE,0x22721B5A,0xE7D525D4,0x41A22E60,0x704A34FD,0xD63D3F49, 0xCC1D9F8B,0x6A6A943F,0x5B828EA2,0xFDF58516,0x3852BB98,0x9E25B02C,0xAFCDAAB1,0x09BAA105, 0xFFF2D1EC,0x5985DA58,0x686DC0C5,0xCE1ACB71,0x0BBDF5FF,0xADCAFE4B,0x9C22E4D6,0x3A55EF62, 0xABC30345,0x0DB408F1,0x3C5C126C,0x9A2B19D8,0x5F8C2756,0xF9FB2CE2,0xC813367F,0x6E643DCB, 0x982C4D22,0x3E5B4696,0x0FB35C0B,0xA9C457BF,0x6C636931,0xCA146285,0xFBFC7818,0x5D8B73AC, 0x03A0A617,0xA5D7ADA3,0x943FB73E,0x3248BC8A,0xF7EF8204,0x519889B0,0x6070932D,0xC6079899, 0x304FE870,0x9638E3C4,0xA7D0F959,0x01A7F2ED,0xC400CC63,0x6277C7D7,0x539FDD4A,0xF5E8D6FE, 0x647E3AD9,0xC209316D,0xF3E12BF0,0x55962044,0x90311ECA,0x3646157E,0x07AE0FE3,0xA1D90457, 0x579174BE,0xF1E67F0A,0xC00E6597,0x66796E23,0xA3DE50AD,0x05A95B19,0x34414184,0x92364A30, }, { 0x00000000,0xCCAA009E,0x4225077D,0x8E8F07E3,0x844A0EFA,0x48E00E64,0xC66F0987,0x0AC50919, 0xD3E51BB5,0x1F4F1B2B,0x91C01CC8,0x5D6A1C56,0x57AF154F,0x9B0515D1,0x158A1232,0xD92012AC, 0x7CBB312B,0xB01131B5,0x3E9E3656,0xF23436C8,0xF8F13FD1,0x345B3F4F,0xBAD438AC,0x767E3832, 0xAF5E2A9E,0x63F42A00,0xED7B2DE3,0x21D12D7D,0x2B142464,0xE7BE24FA,0x69312319,0xA59B2387, 0xF9766256,0x35DC62C8,0xBB53652B,0x77F965B5,0x7D3C6CAC,0xB1966C32,0x3F196BD1,0xF3B36B4F, 0x2A9379E3,0xE639797D,0x68B67E9E,0xA41C7E00,0xAED97719,0x62737787,0xECFC7064,0x205670FA, 0x85CD537D,0x496753E3,0xC7E85400,0x0B42549E,0x01875D87,0xCD2D5D19,0x43A25AFA,0x8F085A64, 0x562848C8,0x9A824856,0x140D4FB5,0xD8A74F2B,0xD2624632,0x1EC846AC,0x9047414F,0x5CED41D1, 0x299DC2ED,0xE537C273,0x6BB8C590,0xA712C50E,0xADD7CC17,0x617DCC89,0xEFF2CB6A,0x2358CBF4, 0xFA78D958,0x36D2D9C6,0xB85DDE25,0x74F7DEBB,0x7E32D7A2,0xB298D73C,0x3C17D0DF,0xF0BDD041, 
0x5526F3C6,0x998CF358,0x1703F4BB,0xDBA9F425,0xD16CFD3C,0x1DC6FDA2,0x9349FA41,0x5FE3FADF, 0x86C3E873,0x4A69E8ED,0xC4E6EF0E,0x084CEF90,0x0289E689,0xCE23E617,0x40ACE1F4,0x8C06E16A, 0xD0EBA0BB,0x1C41A025,0x92CEA7C6,0x5E64A758,0x54A1AE41,0x980BAEDF,0x1684A93C,0xDA2EA9A2, 0x030EBB0E,0xCFA4BB90,0x412BBC73,0x8D81BCED,0x8744B5F4,0x4BEEB56A,0xC561B289,0x09CBB217, 0xAC509190,0x60FA910E,0xEE7596ED,0x22DF9673,0x281A9F6A,0xE4B09FF4,0x6A3F9817,0xA6959889, 0x7FB58A25,0xB31F8ABB,0x3D908D58,0xF13A8DC6,0xFBFF84DF,0x37558441,0xB9DA83A2,0x7570833C, 0x533B85DA,0x9F918544,0x111E82A7,0xDDB48239,0xD7718B20,0x1BDB8BBE,0x95548C5D,0x59FE8CC3, 0x80DE9E6F,0x4C749EF1,0xC2FB9912,0x0E51998C,0x04949095,0xC83E900B,0x46B197E8,0x8A1B9776, 0x2F80B4F1,0xE32AB46F,0x6DA5B38C,0xA10FB312,0xABCABA0B,0x6760BA95,0xE9EFBD76,0x2545BDE8, 0xFC65AF44,0x30CFAFDA,0xBE40A839,0x72EAA8A7,0x782FA1BE,0xB485A120,0x3A0AA6C3,0xF6A0A65D, 0xAA4DE78C,0x66E7E712,0xE868E0F1,0x24C2E06F,0x2E07E976,0xE2ADE9E8,0x6C22EE0B,0xA088EE95, 0x79A8FC39,0xB502FCA7,0x3B8DFB44,0xF727FBDA,0xFDE2F2C3,0x3148F25D,0xBFC7F5BE,0x736DF520, 0xD6F6D6A7,0x1A5CD639,0x94D3D1DA,0x5879D144,0x52BCD85D,0x9E16D8C3,0x1099DF20,0xDC33DFBE, 0x0513CD12,0xC9B9CD8C,0x4736CA6F,0x8B9CCAF1,0x8159C3E8,0x4DF3C376,0xC37CC495,0x0FD6C40B, 0x7AA64737,0xB60C47A9,0x3883404A,0xF42940D4,0xFEEC49CD,0x32464953,0xBCC94EB0,0x70634E2E, 0xA9435C82,0x65E95C1C,0xEB665BFF,0x27CC5B61,0x2D095278,0xE1A352E6,0x6F2C5505,0xA386559B, 0x061D761C,0xCAB77682,0x44387161,0x889271FF,0x825778E6,0x4EFD7878,0xC0727F9B,0x0CD87F05, 0xD5F86DA9,0x19526D37,0x97DD6AD4,0x5B776A4A,0x51B26353,0x9D1863CD,0x1397642E,0xDF3D64B0, 0x83D02561,0x4F7A25FF,0xC1F5221C,0x0D5F2282,0x079A2B9B,0xCB302B05,0x45BF2CE6,0x89152C78, 0x50353ED4,0x9C9F3E4A,0x121039A9,0xDEBA3937,0xD47F302E,0x18D530B0,0x965A3753,0x5AF037CD, 0xFF6B144A,0x33C114D4,0xBD4E1337,0x71E413A9,0x7B211AB0,0xB78B1A2E,0x39041DCD,0xF5AE1D53, 0x2C8E0FFF,0xE0240F61,0x6EAB0882,0xA201081C,0xA8C40105,0x646E019B,0xEAE10678,0x264B06E6, } }; /// compute CRC32 (Slicing-by-8 algorithm), unroll inner loop 4 times uint32_t crc32_slice_by_8(const void* data, size_t length, uint32_t previousCrc32) { uint32_t crc = ~previousCrc32; // same as previousCrc32 ^ 0xFFFFFFFF const uint32_t* current; const uint8_t* currentChar = (const uint8_t*) data; // enabling optimization (at least -O2) automatically unrolls the inner for-loop const size_t Unroll = 4; const size_t BytesAtOnce = 8 * Unroll; // wanted: 32 bit / 4 Byte alignment, compute leading, unaligned bytes length uintptr_t unaligned_length = (4 - (((uintptr_t) currentChar) & 3)) & 3; // process unaligned bytes, if any (standard algorithm) while ((length != 0) && (unaligned_length != 0)) { crc = (crc >> 8) ^ Crc32Lookup[0][(crc & 0xFF) ^ *currentChar++]; length--; unaligned_length--; } // pointer points to 32bit aligned address now current = (const uint32_t*) currentChar; // process 4x eight bytes at once (Slicing-by-8) while (length >= BytesAtOnce) { size_t unrolling; for (unrolling = 0; unrolling < Unroll; unrolling++) { #if BORG_BIG_ENDIAN uint32_t one = *current++ ^ _le32toh(crc); uint32_t two = *current++; crc = Crc32Lookup[0][ two & 0xFF] ^ Crc32Lookup[1][(two>> 8) & 0xFF] ^ Crc32Lookup[2][(two>>16) & 0xFF] ^ Crc32Lookup[3][(two>>24) & 0xFF] ^ Crc32Lookup[4][ one & 0xFF] ^ Crc32Lookup[5][(one>> 8) & 0xFF] ^ Crc32Lookup[6][(one>>16) & 0xFF] ^ Crc32Lookup[7][(one>>24) & 0xFF]; #else uint32_t one = *current++ ^ crc; uint32_t two = *current++; crc = Crc32Lookup[0][(two>>24) & 0xFF] ^ Crc32Lookup[1][(two>>16) & 0xFF] ^ Crc32Lookup[2][(two>> 8) & 0xFF] ^ 
Crc32Lookup[3][ two & 0xFF] ^ Crc32Lookup[4][(one>>24) & 0xFF] ^ Crc32Lookup[5][(one>>16) & 0xFF] ^ Crc32Lookup[6][(one>> 8) & 0xFF] ^ Crc32Lookup[7][ one & 0xFF]; #endif } length -= BytesAtOnce; } currentChar = (const uint8_t*) current; // remaining 1 to 31 bytes (standard algorithm) while (length-- != 0) crc = (crc >> 8) ^ Crc32Lookup[0][(crc & 0xFF) ^ *currentChar++]; return ~crc; // same as crc ^ 0xFFFFFFFF } borgbackup-1.1.5/src/borg/algorithms/zstd/0000755000175000017500000000000013260002401017604 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/zstd/lib/0000755000175000017500000000000013260002401020352 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/zstd/lib/deprecated/0000755000175000017500000000000013260002401022452 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/zstd/lib/deprecated/zbuff_decompress.c0000644000175000017500000000357713257361140026210 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /* ************************************* * Dependencies ***************************************/ #define ZBUFF_STATIC_LINKING_ONLY #include "zbuff.h" ZBUFF_DCtx* ZBUFF_createDCtx(void) { return ZSTD_createDStream(); } ZBUFF_DCtx* ZBUFF_createDCtx_advanced(ZSTD_customMem customMem) { return ZSTD_createDStream_advanced(customMem); } size_t ZBUFF_freeDCtx(ZBUFF_DCtx* zbd) { return ZSTD_freeDStream(zbd); } /* *** Initialization *** */ size_t ZBUFF_decompressInitDictionary(ZBUFF_DCtx* zbd, const void* dict, size_t dictSize) { return ZSTD_initDStream_usingDict(zbd, dict, dictSize); } size_t ZBUFF_decompressInit(ZBUFF_DCtx* zbd) { return ZSTD_initDStream(zbd); } /* *** Decompression *** */ size_t ZBUFF_decompressContinue(ZBUFF_DCtx* zbd, void* dst, size_t* dstCapacityPtr, const void* src, size_t* srcSizePtr) { ZSTD_outBuffer outBuff; ZSTD_inBuffer inBuff; size_t result; outBuff.dst = dst; outBuff.pos = 0; outBuff.size = *dstCapacityPtr; inBuff.src = src; inBuff.pos = 0; inBuff.size = *srcSizePtr; result = ZSTD_decompressStream(zbd, &outBuff, &inBuff); *dstCapacityPtr = outBuff.pos; *srcSizePtr = inBuff.pos; return result; } /* ************************************* * Tool functions ***************************************/ size_t ZBUFF_recommendedDInSize(void) { return ZSTD_DStreamInSize(); } size_t ZBUFF_recommendedDOutSize(void) { return ZSTD_DStreamOutSize(); } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/deprecated/zbuff_common.c0000644000175000017500000000175313257361140025326 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /*-************************************* * Dependencies ***************************************/ #include "error_private.h" #include "zbuff.h" /*-**************************************** * ZBUFF Error Management (deprecated) ******************************************/ /*! 
ZBUFF_isError() : * tells if a return value is an error code */ unsigned ZBUFF_isError(size_t errorCode) { return ERR_isError(errorCode); } /*! ZBUFF_getErrorName() : * provides error code string from function result (useful for debugging) */ const char* ZBUFF_getErrorName(size_t errorCode) { return ERR_getErrorName(errorCode); } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/deprecated/zbuff_compress.c0000644000175000017500000001215013257361140025662 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /* ************************************* * Dependencies ***************************************/ #define ZBUFF_STATIC_LINKING_ONLY #include "zbuff.h" /*-*********************************************************** * Streaming compression * * A ZBUFF_CCtx object is required to track streaming operation. * Use ZBUFF_createCCtx() and ZBUFF_freeCCtx() to create/release resources. * Use ZBUFF_compressInit() to start a new compression operation. * ZBUFF_CCtx objects can be reused multiple times. * * Use ZBUFF_compressContinue() repetitively to consume your input. * *srcSizePtr and *dstCapacityPtr can be any size. * The function will report how many bytes were read or written by modifying *srcSizePtr and *dstCapacityPtr. * Note that it may not consume the entire input, in which case it's up to the caller to call again the function with remaining input. * The content of dst will be overwritten (up to *dstCapacityPtr) at each function call, so save its content if it matters or change dst . * @return : a hint to preferred nb of bytes to use as input for next function call (it's only a hint, to improve latency) * or an error code, which can be tested using ZBUFF_isError(). * * ZBUFF_compressFlush() can be used to instruct ZBUFF to compress and output whatever remains within its buffer. * Note that it will not output more than *dstCapacityPtr. * Therefore, some content might still be left into its internal buffer if dst buffer is too small. * @return : nb of bytes still present into internal buffer (0 if it's empty) * or an error code, which can be tested using ZBUFF_isError(). * * ZBUFF_compressEnd() instructs to finish a frame. * It will perform a flush and write frame epilogue. * Similar to ZBUFF_compressFlush(), it may not be able to output the entire internal buffer content if *dstCapacityPtr is too small. * @return : nb of bytes still present into internal buffer (0 if it's empty) * or an error code, which can be tested using ZBUFF_isError(). * * Hint : recommended buffer sizes (not compulsory) * input : ZSTD_BLOCKSIZE_MAX (128 KB), internal unit size, it improves latency to use this value. * output : ZSTD_compressBound(ZSTD_BLOCKSIZE_MAX) + ZSTD_blockHeaderSize + ZBUFF_endFrameSize : ensures it's always possible to write/flush/end a full block at best speed. 
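 *
 *  Illustrative call sequence (a sketch, not part of the original sources;
 *  `srcBuf`/`dstBuf` are hypothetical buffers and the ZBUFF_isError()
 *  checks on each return value are elided for brevity):
 *
 *      ZBUFF_CCtx* zbc = ZBUFF_createCCtx();
 *      ZBUFF_compressInit(zbc, 3);                  -- compression level 3
 *      size_t dstCap = sizeof(dstBuf), srcSize = sizeof(srcBuf);
 *      ZBUFF_compressContinue(zbc, dstBuf, &dstCap, srcBuf, &srcSize);
 *      -- dstCap now holds bytes written, srcSize bytes consumed
 *      dstCap = sizeof(dstBuf);
 *      ZBUFF_compressEnd(zbc, dstBuf, &dstCap);     -- flush + frame epilogue
 *      ZBUFF_freeCCtx(zbc);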
* ***********************************************************/ ZBUFF_CCtx* ZBUFF_createCCtx(void) { return ZSTD_createCStream(); } ZBUFF_CCtx* ZBUFF_createCCtx_advanced(ZSTD_customMem customMem) { return ZSTD_createCStream_advanced(customMem); } size_t ZBUFF_freeCCtx(ZBUFF_CCtx* zbc) { return ZSTD_freeCStream(zbc); } /* ====== Initialization ====== */ size_t ZBUFF_compressInit_advanced(ZBUFF_CCtx* zbc, const void* dict, size_t dictSize, ZSTD_parameters params, unsigned long long pledgedSrcSize) { return ZSTD_initCStream_advanced(zbc, dict, dictSize, params, pledgedSrcSize); } size_t ZBUFF_compressInitDictionary(ZBUFF_CCtx* zbc, const void* dict, size_t dictSize, int compressionLevel) { return ZSTD_initCStream_usingDict(zbc, dict, dictSize, compressionLevel); } size_t ZBUFF_compressInit(ZBUFF_CCtx* zbc, int compressionLevel) { return ZSTD_initCStream(zbc, compressionLevel); } /* ====== Compression ====== */ size_t ZBUFF_compressContinue(ZBUFF_CCtx* zbc, void* dst, size_t* dstCapacityPtr, const void* src, size_t* srcSizePtr) { size_t result; ZSTD_outBuffer outBuff; ZSTD_inBuffer inBuff; outBuff.dst = dst; outBuff.pos = 0; outBuff.size = *dstCapacityPtr; inBuff.src = src; inBuff.pos = 0; inBuff.size = *srcSizePtr; result = ZSTD_compressStream(zbc, &outBuff, &inBuff); *dstCapacityPtr = outBuff.pos; *srcSizePtr = inBuff.pos; return result; } /* ====== Finalize ====== */ size_t ZBUFF_compressFlush(ZBUFF_CCtx* zbc, void* dst, size_t* dstCapacityPtr) { size_t result; ZSTD_outBuffer outBuff; outBuff.dst = dst; outBuff.pos = 0; outBuff.size = *dstCapacityPtr; result = ZSTD_flushStream(zbc, &outBuff); *dstCapacityPtr = outBuff.pos; return result; } size_t ZBUFF_compressEnd(ZBUFF_CCtx* zbc, void* dst, size_t* dstCapacityPtr) { size_t result; ZSTD_outBuffer outBuff; outBuff.dst = dst; outBuff.pos = 0; outBuff.size = *dstCapacityPtr; result = ZSTD_endStream(zbc, &outBuff); *dstCapacityPtr = outBuff.pos; return result; } /* ************************************* * Tool functions ***************************************/ size_t ZBUFF_recommendedCInSize(void) { return ZSTD_CStreamInSize(); } size_t ZBUFF_recommendedCOutSize(void) { return ZSTD_CStreamOutSize(); } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/deprecated/zbuff.h0000644000175000017500000002632213257361140023762 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /* *************************************************************** * NOTES/WARNINGS ******************************************************************/ /* The streaming API defined here is deprecated. * Consider migrating towards ZSTD_compressStream() API in `zstd.h` * See 'lib/README.md'. 
*****************************************************************/

#if defined (__cplusplus)
extern "C" {
#endif

#ifndef ZSTD_BUFFERED_H_23987
#define ZSTD_BUFFERED_H_23987

/* *************************************
*  Dependencies
***************************************/
#include <stddef.h>      /* size_t */
#include "zstd.h"        /* ZSTD_CStream, ZSTD_DStream, ZSTDLIB_API */

/* ***************************************************************
*  Compiler specifics
*****************************************************************/
/* Deprecation warnings */
/* Should these warnings be a problem,
   it is generally possible to disable them,
   typically with -Wno-deprecated-declarations for gcc
   or _CRT_SECURE_NO_WARNINGS in Visual.
   Otherwise, it's also possible to define ZBUFF_DISABLE_DEPRECATE_WARNINGS */
#ifdef ZBUFF_DISABLE_DEPRECATE_WARNINGS
#  define ZBUFF_DEPRECATED(message) ZSTDLIB_API  /* disable deprecation warnings */
#else
#  if defined (__cplusplus) && (__cplusplus >= 201402) /* C++14 or greater */
#    define ZBUFF_DEPRECATED(message) [[deprecated(message)]] ZSTDLIB_API
#  elif (defined(__GNUC__) && (__GNUC__ >= 5)) || defined(__clang__)
#    define ZBUFF_DEPRECATED(message) ZSTDLIB_API __attribute__((deprecated(message)))
#  elif defined(__GNUC__) && (__GNUC__ >= 3)
#    define ZBUFF_DEPRECATED(message) ZSTDLIB_API __attribute__((deprecated))
#  elif defined(_MSC_VER)
#    define ZBUFF_DEPRECATED(message) ZSTDLIB_API __declspec(deprecated(message))
#  else
#    pragma message("WARNING: You need to implement ZBUFF_DEPRECATED for this compiler")
#    define ZBUFF_DEPRECATED(message) ZSTDLIB_API
#  endif
#endif /* ZBUFF_DISABLE_DEPRECATE_WARNINGS */

/* *************************************
*  Streaming functions
***************************************/
/* This is the easier "buffered" streaming API,
*  using an internal buffer to lift all restrictions on user-provided buffers
*  which can be any size, any place, for both input and output.
*  ZBUFF and ZSTD are 100% interoperable,
*  frames created by one can be decoded by the other one */

typedef ZSTD_CStream ZBUFF_CCtx;
ZBUFF_DEPRECATED("use ZSTD_createCStream") ZBUFF_CCtx* ZBUFF_createCCtx(void);
ZBUFF_DEPRECATED("use ZSTD_freeCStream")   size_t      ZBUFF_freeCCtx(ZBUFF_CCtx* cctx);

ZBUFF_DEPRECATED("use ZSTD_initCStream")           size_t ZBUFF_compressInit(ZBUFF_CCtx* cctx, int compressionLevel);
ZBUFF_DEPRECATED("use ZSTD_initCStream_usingDict") size_t ZBUFF_compressInitDictionary(ZBUFF_CCtx* cctx, const void* dict, size_t dictSize, int compressionLevel);

ZBUFF_DEPRECATED("use ZSTD_compressStream") size_t ZBUFF_compressContinue(ZBUFF_CCtx* cctx, void* dst, size_t* dstCapacityPtr, const void* src, size_t* srcSizePtr);
ZBUFF_DEPRECATED("use ZSTD_flushStream")    size_t ZBUFF_compressFlush(ZBUFF_CCtx* cctx, void* dst, size_t* dstCapacityPtr);
ZBUFF_DEPRECATED("use ZSTD_endStream")      size_t ZBUFF_compressEnd(ZBUFF_CCtx* cctx, void* dst, size_t* dstCapacityPtr);

/*-*************************************************
*  Streaming compression - howto
*
*  A ZBUFF_CCtx object is required to track streaming operation.
*  Use ZBUFF_createCCtx() and ZBUFF_freeCCtx() to create/release resources.
*  ZBUFF_CCtx objects can be reused multiple times.
*
*  Start by initializing ZBUFF_CCtx.
*  Use ZBUFF_compressInit() to start a new compression operation.
*  Use ZBUFF_compressInitDictionary() for a compression which requires a dictionary.
*
*  Use ZBUFF_compressContinue() repetitively to consume input stream.
*  *srcSizePtr and *dstCapacityPtr can be any size.
* The function will report how many bytes were read or written within *srcSizePtr and *dstCapacityPtr. * Note that it may not consume the entire input, in which case it's up to the caller to present again remaining data. * The content of `dst` will be overwritten (up to *dstCapacityPtr) at each call, so save its content if it matters or change @dst . * @return : a hint to preferred nb of bytes to use as input for next function call (it's just a hint, to improve latency) * or an error code, which can be tested using ZBUFF_isError(). * * At any moment, it's possible to flush whatever data remains within buffer, using ZBUFF_compressFlush(). * The nb of bytes written into `dst` will be reported into *dstCapacityPtr. * Note that the function cannot output more than *dstCapacityPtr, * therefore, some content might still be left into internal buffer if *dstCapacityPtr is too small. * @return : nb of bytes still present into internal buffer (0 if it's empty) * or an error code, which can be tested using ZBUFF_isError(). * * ZBUFF_compressEnd() instructs to finish a frame. * It will perform a flush and write frame epilogue. * The epilogue is required for decoders to consider a frame completed. * Similar to ZBUFF_compressFlush(), it may not be able to output the entire internal buffer content if *dstCapacityPtr is too small. * In which case, call again ZBUFF_compressFlush() to complete the flush. * @return : nb of bytes still present into internal buffer (0 if it's empty) * or an error code, which can be tested using ZBUFF_isError(). * * Hint : _recommended buffer_ sizes (not compulsory) : ZBUFF_recommendedCInSize() / ZBUFF_recommendedCOutSize() * input : ZBUFF_recommendedCInSize==128 KB block size is the internal unit, use this value to reduce intermediate stages (better latency) * output : ZBUFF_recommendedCOutSize==ZSTD_compressBound(128 KB) + 3 + 3 : ensures it's always possible to write/flush/end a full block. Skip some buffering. * By using both, it ensures that input will be entirely consumed, and output will always contain the result, reducing intermediate buffering. * **************************************************/ typedef ZSTD_DStream ZBUFF_DCtx; ZBUFF_DEPRECATED("use ZSTD_createDStream") ZBUFF_DCtx* ZBUFF_createDCtx(void); ZBUFF_DEPRECATED("use ZSTD_freeDStream") size_t ZBUFF_freeDCtx(ZBUFF_DCtx* dctx); ZBUFF_DEPRECATED("use ZSTD_initDStream") size_t ZBUFF_decompressInit(ZBUFF_DCtx* dctx); ZBUFF_DEPRECATED("use ZSTD_initDStream_usingDict") size_t ZBUFF_decompressInitDictionary(ZBUFF_DCtx* dctx, const void* dict, size_t dictSize); ZBUFF_DEPRECATED("use ZSTD_decompressStream") size_t ZBUFF_decompressContinue(ZBUFF_DCtx* dctx, void* dst, size_t* dstCapacityPtr, const void* src, size_t* srcSizePtr); /*-*************************************************************************** * Streaming decompression howto * * A ZBUFF_DCtx object is required to track streaming operations. * Use ZBUFF_createDCtx() and ZBUFF_freeDCtx() to create/release resources. * Use ZBUFF_decompressInit() to start a new decompression operation, * or ZBUFF_decompressInitDictionary() if decompression requires a dictionary. * Note that ZBUFF_DCtx objects can be re-init multiple times. * * Use ZBUFF_decompressContinue() repetitively to consume your input. * *srcSizePtr and *dstCapacityPtr can be any size. * The function will report how many bytes were read or written by modifying *srcSizePtr and *dstCapacityPtr. 
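*  (illustrative : with *srcSizePtr==1000 and *dstCapacityPtr==512 on entry,
*  a call might leave *srcSizePtr==700 and *dstCapacityPtr==512,
*  meaning 700 input bytes were consumed and 512 output bytes produced)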
* Note that it may not consume the entire input, in which case it's up to the caller to present remaining input again. * The content of `dst` will be overwritten (up to *dstCapacityPtr) at each function call, so save its content if it matters, or change `dst`. * @return : 0 when a frame is completely decoded and fully flushed, * 1 when there is still some data left within internal buffer to flush, * >1 when more data is expected, with value being a suggested next input size (it's just a hint, which helps latency), * or an error code, which can be tested using ZBUFF_isError(). * * Hint : recommended buffer sizes (not compulsory) : ZBUFF_recommendedDInSize() and ZBUFF_recommendedDOutSize() * output : ZBUFF_recommendedDOutSize== 128 KB block size is the internal unit, it ensures it's always possible to write a full block when decoded. * input : ZBUFF_recommendedDInSize == 128KB + 3; * just follow indications from ZBUFF_decompressContinue() to minimize latency. It should always be <= 128 KB + 3 . * *******************************************************************************/ /* ************************************* * Tool functions ***************************************/ ZBUFF_DEPRECATED("use ZSTD_isError") unsigned ZBUFF_isError(size_t errorCode); ZBUFF_DEPRECATED("use ZSTD_getErrorName") const char* ZBUFF_getErrorName(size_t errorCode); /** Functions below provide recommended buffer sizes for Compression or Decompression operations. * These sizes are just hints, they tend to offer better latency */ ZBUFF_DEPRECATED("use ZSTD_CStreamInSize") size_t ZBUFF_recommendedCInSize(void); ZBUFF_DEPRECATED("use ZSTD_CStreamOutSize") size_t ZBUFF_recommendedCOutSize(void); ZBUFF_DEPRECATED("use ZSTD_DStreamInSize") size_t ZBUFF_recommendedDInSize(void); ZBUFF_DEPRECATED("use ZSTD_DStreamOutSize") size_t ZBUFF_recommendedDOutSize(void); #endif /* ZSTD_BUFFERED_H_23987 */ #ifdef ZBUFF_STATIC_LINKING_ONLY #ifndef ZBUFF_STATIC_H_30298098432 #define ZBUFF_STATIC_H_30298098432 /* ==================================================================================== * The definitions in this section are considered experimental. * They should never be used in association with a dynamic library, as they may change in the future. * They are provided for advanced usages. * Use them only in association with static linking. * ==================================================================================== */ /*--- Dependency ---*/ #define ZSTD_STATIC_LINKING_ONLY /* ZSTD_parameters, ZSTD_customMem */ #include "zstd.h" /*--- Custom memory allocator ---*/ /*! ZBUFF_createCCtx_advanced() : * Create a ZBUFF compression context using external alloc and free functions */ ZBUFF_DEPRECATED("use ZSTD_createCStream_advanced") ZBUFF_CCtx* ZBUFF_createCCtx_advanced(ZSTD_customMem customMem); /*! 
ZBUFF_createDCtx_advanced() :
 *  Create a ZBUFF decompression context using external alloc and free functions */
ZBUFF_DEPRECATED("use ZSTD_createDStream_advanced") ZBUFF_DCtx* ZBUFF_createDCtx_advanced(ZSTD_customMem customMem);

/*--- Advanced Streaming Initialization ---*/
ZBUFF_DEPRECATED("use ZSTD_initCStream_advanced") size_t ZBUFF_compressInit_advanced(ZBUFF_CCtx* zbc,
                                               const void* dict, size_t dictSize,
                                               ZSTD_parameters params, unsigned long long pledgedSrcSize);

#endif    /* ZBUFF_STATIC_H_30298098432 */
#endif    /* ZBUFF_STATIC_LINKING_ONLY */

#if defined (__cplusplus)
}
#endif
borgbackup-1.1.5/src/borg/algorithms/zstd/lib/decompress/0000755000175000017500000000000013260002401022516 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/zstd/lib/decompress/huf_decompress.c0000644000175000017500000011701113257361140025707 0ustar twtw00000000000000/* ******************************************************************
   Huffman decoder, part of New Generation Entropy library
   Copyright (C) 2013-2016, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  You can contact the author at :
  - FSE+HUF source repository : https://github.com/Cyan4973/FiniteStateEntropy
  - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */

/* **************************************************************
*  Dependencies
****************************************************************/
#include <string.h>     /* memcpy, memset */
#include "bitstream.h"  /* BIT_* */
#include "compiler.h"
#include "fse.h"        /* header compression */
#define HUF_STATIC_LINKING_ONLY
#include "huf.h"
#include "error_private.h"

/* **************************************************************
*  Error Management
****************************************************************/
#define HUF_isError ERR_isError
#define HUF_STATIC_ASSERT(c) { enum { HUF_static_assert = 1/(int)(!!(c)) }; }   /* use only *after* variable declarations */

/* **************************************************************
*  Byte alignment for workSpace management
****************************************************************/
#define HUF_ALIGN(x, a)         HUF_ALIGN_MASK((x), (a) - 1)
#define HUF_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))

/*-***************************/
/*  generic DTableDesc       */
/*-***************************/

typedef struct { BYTE maxTableLog; BYTE tableType; BYTE tableLog; BYTE reserved; } DTableDesc;

static DTableDesc HUF_getDTableDesc(const HUF_DTable* table)
{
    DTableDesc dtd;
    memcpy(&dtd, table, sizeof(dtd));
    return dtd;
}

/*-***************************/
/*  single-symbol decoding   */
/*-***************************/

typedef struct { BYTE byte; BYTE nbBits; } HUF_DEltX2;   /* single-symbol decoding */

size_t HUF_readDTableX2_wksp(HUF_DTable* DTable, const void* src, size_t srcSize,
                             void* workSpace, size_t wkspSize)
{
    U32 tableLog = 0;
    U32 nbSymbols = 0;
    size_t iSize;
    void* const dtPtr = DTable + 1;
    HUF_DEltX2* const dt = (HUF_DEltX2*)dtPtr;

    U32* rankVal;
    BYTE* huffWeight;
    size_t spaceUsed32 = 0;

    rankVal = (U32 *)workSpace + spaceUsed32;
    spaceUsed32 += HUF_TABLELOG_ABSOLUTEMAX + 1;
    huffWeight = (BYTE *)((U32 *)workSpace + spaceUsed32);
    spaceUsed32 += HUF_ALIGN(HUF_SYMBOLVALUE_MAX + 1, sizeof(U32)) >> 2;

    if ((spaceUsed32 << 2) > wkspSize) return ERROR(tableLog_tooLarge);
    workSpace = (U32 *)workSpace + spaceUsed32;
    wkspSize -= (spaceUsed32 << 2);

    HUF_STATIC_ASSERT(sizeof(DTableDesc) == sizeof(HUF_DTable));
    /* memset(huffWeight, 0, sizeof(huffWeight)); */   /* is not necessary, even though some analyzer complain ...
*/
    iSize = HUF_readStats(huffWeight, HUF_SYMBOLVALUE_MAX + 1, rankVal, &nbSymbols, &tableLog, src, srcSize);
    if (HUF_isError(iSize)) return iSize;

    /* Table header */
    {   DTableDesc dtd = HUF_getDTableDesc(DTable);
        if (tableLog > (U32)(dtd.maxTableLog+1)) return ERROR(tableLog_tooLarge);   /* DTable too small, Huffman tree cannot fit in */
        dtd.tableType = 0;
        dtd.tableLog = (BYTE)tableLog;
        memcpy(DTable, &dtd, sizeof(dtd));
    }

    /* Calculate starting value for each rank */
    {   U32 n, nextRankStart = 0;
        for (n=1; n<tableLog+1; n++) {
            U32 const current = nextRankStart;
            nextRankStart += (rankVal[n] << (n-1));
            rankVal[n] = current;
    }   }

    /* fill DTable */
    {   U32 n;
        for (n=0; n<nbSymbols; n++) {
            U32 const w = huffWeight[n];
            U32 const length = (1 << w) >> 1;
            U32 u;
            HUF_DEltX2 D;
            D.byte = (BYTE)n; D.nbBits = (BYTE)(tableLog + 1 - w);
            for (u = rankVal[w]; u < rankVal[w] + length; u++)
                dt[u] = D;
            rankVal[w] += length;
    }   }

    return iSize;
}

size_t HUF_readDTableX2(HUF_DTable* DTable, const void* src, size_t srcSize)
{
    U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32];
    return HUF_readDTableX2_wksp(DTable, src, srcSize,
                                 workSpace, sizeof(workSpace));
}

static BYTE HUF_decodeSymbolX2(BIT_DStream_t* Dstream, const HUF_DEltX2* dt, const U32 dtLog)
{
    size_t const val = BIT_lookBitsFast(Dstream, dtLog);   /* note : dtLog >= 1 */
    BYTE const c = dt[val].byte;
    BIT_skipBits(Dstream, dt[val].nbBits);
    return c;
}

#define HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \
    *ptr++ = HUF_decodeSymbolX2(DStreamPtr, dt, dtLog)

#define HUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \
    if (MEM_64bits() || (HUF_TABLELOG_MAX<=12)) \
        HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)

#define HUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \
    if (MEM_64bits()) \
        HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)

HINT_INLINE size_t HUF_decodeStreamX2(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const HUF_DEltX2* const dt, const U32 dtLog)
{
    BYTE* const pStart = p;

    /* up to 4 symbols at a time */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-4)) {
        HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX2_1(p, bitDPtr);
        HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
    }

    /* closer to the end */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd))
        HUF_DECODE_SYMBOLX2_0(p, bitDPtr);

    /* no more data to retrieve from bitstream, hence no need to reload */
    while (p < pEnd)
        HUF_DECODE_SYMBOLX2_0(p, bitDPtr);

    return pEnd-pStart;
}

static size_t HUF_decompress1X2_usingDTable_internal(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const HUF_DTable* DTable)
{
    BYTE* op = (BYTE*)dst;
    BYTE* const oend = op + dstSize;
    const void* dtPtr = DTable + 1;
    const HUF_DEltX2* const dt = (const HUF_DEltX2*)dtPtr;
    BIT_DStream_t bitD;
    DTableDesc const dtd = HUF_getDTableDesc(DTable);
    U32 const dtLog = dtd.tableLog;

    { size_t const errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize);
      if (HUF_isError(errorCode)) return errorCode; }

    HUF_decodeStreamX2(op, &bitD, oend, dt, dtLog);

    /* check */
    if (!BIT_endOfDStream(&bitD)) return ERROR(corruption_detected);

    return dstSize;
}

size_t HUF_decompress1X2_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const HUF_DTable* DTable)
{
    DTableDesc dtd = HUF_getDTableDesc(DTable);
    if (dtd.tableType != 0) return ERROR(GENERIC);
    return HUF_decompress1X2_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
}

size_t HUF_decompress1X2_DCtx_wksp(HUF_DTable* DCtx, void* dst, size_t dstSize,
                                   const void* cSrc, size_t cSrcSize,
                                   void* workSpace, size_t wkspSize)
{
    const BYTE* ip = (const BYTE*) cSrc;

    size_t const hSize = HUF_readDTableX2_wksp(DCtx, cSrc, cSrcSize, workSpace, wkspSize);
    if (HUF_isError(hSize)) return hSize;
    if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
    ip += hSize; cSrcSize -= hSize;
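    /* table header consumed : decode the remaining payload with the DTable just built */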
return HUF_decompress1X2_usingDTable_internal (dst, dstSize, ip, cSrcSize, DCtx); } size_t HUF_decompress1X2_DCtx(HUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32]; return HUF_decompress1X2_DCtx_wksp(DCtx, dst, dstSize, cSrc, cSrcSize, workSpace, sizeof(workSpace)); } size_t HUF_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_TABLELOG_MAX); return HUF_decompress1X2_DCtx (DTable, dst, dstSize, cSrc, cSrcSize); } static size_t HUF_decompress4X2_usingDTable_internal( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable) { /* Check */ if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* const dtPtr = DTable + 1; const HUF_DEltX2* const dt = (const HUF_DEltX2*)dtPtr; /* Init */ BIT_DStream_t bitD1; BIT_DStream_t bitD2; BIT_DStream_t bitD3; BIT_DStream_t bitD4; size_t const length1 = MEM_readLE16(istart); size_t const length2 = MEM_readLE16(istart+2); size_t const length3 = MEM_readLE16(istart+4); size_t const length4 = cSrcSize - (length1 + length2 + length3 + 6); const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; DTableDesc const dtd = HUF_getDTableDesc(DTable); U32 const dtLog = dtd.tableLog; if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ { size_t const errorCode = BIT_initDStream(&bitD1, istart1, length1); if (HUF_isError(errorCode)) return errorCode; } { size_t const errorCode = BIT_initDStream(&bitD2, istart2, length2); if (HUF_isError(errorCode)) return errorCode; } { size_t const errorCode = BIT_initDStream(&bitD3, istart3, length3); if (HUF_isError(errorCode)) return errorCode; } { size_t const errorCode = BIT_initDStream(&bitD4, istart4, length4); if (HUF_isError(errorCode)) return errorCode; } /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; ) { HUF_DECODE_SYMBOLX2_2(op1, &bitD1); HUF_DECODE_SYMBOLX2_2(op2, &bitD2); HUF_DECODE_SYMBOLX2_2(op3, &bitD3); HUF_DECODE_SYMBOLX2_2(op4, &bitD4); HUF_DECODE_SYMBOLX2_1(op1, &bitD1); HUF_DECODE_SYMBOLX2_1(op2, &bitD2); HUF_DECODE_SYMBOLX2_1(op3, &bitD3); HUF_DECODE_SYMBOLX2_1(op4, &bitD4); HUF_DECODE_SYMBOLX2_2(op1, &bitD1); HUF_DECODE_SYMBOLX2_2(op2, &bitD2); HUF_DECODE_SYMBOLX2_2(op3, &bitD3); HUF_DECODE_SYMBOLX2_2(op4, &bitD4); HUF_DECODE_SYMBOLX2_0(op1, &bitD1); HUF_DECODE_SYMBOLX2_0(op2, &bitD2); HUF_DECODE_SYMBOLX2_0(op3, &bitD3); HUF_DECODE_SYMBOLX2_0(op4, &bitD4); endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) 
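        /* stream 3 overflowed into stream 4's segment */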
return ERROR(corruption_detected);   /* note : op4 supposed already verified within main loop */

        /* finish bitStreams one by one */
        HUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog);
        HUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog);
        HUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog);
        HUF_decodeStreamX2(op4, &bitD4, oend,     dt, dtLog);

        /* check */
        endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
        if (!endSignal) return ERROR(corruption_detected);

        /* decoded size */
        return dstSize;
    }
}

size_t HUF_decompress4X2_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const HUF_DTable* DTable)
{
    DTableDesc dtd = HUF_getDTableDesc(DTable);
    if (dtd.tableType != 0) return ERROR(GENERIC);
    return HUF_decompress4X2_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable);
}

size_t HUF_decompress4X2_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize,
                                   const void* cSrc, size_t cSrcSize,
                                   void* workSpace, size_t wkspSize)
{
    const BYTE* ip = (const BYTE*) cSrc;

    size_t const hSize = HUF_readDTableX2_wksp (dctx, cSrc, cSrcSize,
                                                workSpace, wkspSize);
    if (HUF_isError(hSize)) return hSize;
    if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
    ip += hSize; cSrcSize -= hSize;

    return HUF_decompress4X2_usingDTable_internal (dst, dstSize, ip, cSrcSize, dctx);
}

size_t HUF_decompress4X2_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32];
    return HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize,
                                       workSpace, sizeof(workSpace));
}

size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_TABLELOG_MAX);
    return HUF_decompress4X2_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
}

/* *************************/
/* double-symbols decoding */
/* *************************/
typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUF_DEltX4;   /* double-symbols decoding */

typedef struct { BYTE symbol; BYTE weight; } sortedSymbol_t;

/* HUF_fillDTableX4Level2() :
 * `rankValOrigin` must be a table of at least (HUF_TABLELOG_MAX + 1) U32 */
static void HUF_fillDTableX4Level2(HUF_DEltX4* DTable, U32 sizeLog, const U32 consumed,
                           const U32* rankValOrigin, const int minWeight,
                           const sortedSymbol_t* sortedSymbols, const U32 sortedListSize,
                           U32 nbBitsBaseline, U16 baseSeq)
{
    HUF_DEltX4 DElt;
    U32 rankVal[HUF_TABLELOG_MAX + 1];

    /* get pre-calculated rankVal */
    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill skipped values */
    if (minWeight>1) {
        U32 i, skipSize = rankVal[minWeight];
        MEM_writeLE16(&(DElt.sequence), baseSeq);
        DElt.nbBits = (BYTE)(consumed);
        DElt.length = 1;
        for (i = 0; i < skipSize; i++)
            DTable[i] = DElt;
    }

    /* fill DTable */
    {   U32 s;
        for (s=0; s<sortedListSize; s++) {
            const U32 symbol = sortedSymbols[s].symbol;
            const U32 weight = sortedSymbols[s].weight;
            const U32 nbBits = nbBitsBaseline - weight;
            const U32 length = 1 << (sizeLog-nbBits);
            const U32 start = rankVal[weight];
            U32 i = start;
            const U32 end = start + length;

            MEM_writeLE16(&(DElt.sequence), (U16)(baseSeq + (symbol << 8)));
            DElt.nbBits = (BYTE)(nbBits + consumed);
            DElt.length = 2;
            do { DTable[i++] = DElt; } while (i<end);   /* since length >= 1 */

            rankVal[weight] += length;
    }   }
}

typedef U32 rankValCol_t[HUF_TABLELOG_MAX + 1];
typedef rankValCol_t rankVal_t[HUF_TABLELOG_MAX];

static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog,
                           const sortedSymbol_t* sortedList, const U32 sortedListSize,
                           const U32* rankStart, rankVal_t rankValOrigin, const U32 maxWeight,
                           const U32 nbBitsBaseline)
{
    U32 rankVal[HUF_TABLELOG_MAX + 1];
    const int scaleLog = nbBitsBaseline - targetLog;   /* note : targetLog >= srcLog, hence scaleLog <= 1 */
    const U32 minBits  = nbBitsBaseline - maxWeight;
    U32 s;

    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill DTable */
    for (s=0; s<sortedListSize; s++) {
        const U16 symbol = sortedList[s].symbol;
        const U32 weight = sortedList[s].weight;
        const U32 nbBits = nbBitsBaseline - weight;
        const U32 start = rankVal[weight];
        const U32 length = 1 << (targetLog-nbBits);

        if (targetLog-nbBits >= minBits) {   /* enough room for a second symbol */
            U32 sortedRank;
            int minWeight = nbBits + scaleLog;
            if (minWeight < 1) minWeight = 1;
            sortedRank = rankStart[minWeight];
            HUF_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits,
                           rankValOrigin[nbBits], minWeight,
                           sortedList+sortedRank, sortedListSize-sortedRank,
                           nbBitsBaseline, symbol);
        } else {
            HUF_DEltX4 DElt;
            MEM_writeLE16(&(DElt.sequence), symbol);
            DElt.nbBits = (BYTE)(nbBits);
            DElt.length = 1;
            {   U32 const end = start + length;
                U32 u;
                for (u = start; u < end; u++) DTable[u] = DElt;
        }   }
        rankVal[weight] += length;
    }
}

size_t HUF_readDTableX4_wksp(HUF_DTable* DTable, const void* src,
                             size_t srcSize, void* workSpace,
                             size_t wkspSize)
{
    U32 tableLog, maxW, sizeOfSort, nbSymbols;
    DTableDesc dtd = HUF_getDTableDesc(DTable);
    U32 const maxTableLog = dtd.maxTableLog;
    size_t iSize;
    void* dtPtr = DTable+1;   /* force compiler to avoid strict-aliasing */
    HUF_DEltX4* const dt = (HUF_DEltX4*)dtPtr;
    U32 *rankStart;

    rankValCol_t* rankVal;
    U32* rankStats;
    U32* rankStart0;
    sortedSymbol_t* sortedSymbol;
    BYTE* weightList;
    size_t spaceUsed32 = 0;

    rankVal = (rankValCol_t *)((U32 *)workSpace + spaceUsed32);
    spaceUsed32 += (sizeof(rankValCol_t) * HUF_TABLELOG_MAX) >> 2;
    rankStats = (U32 *)workSpace + spaceUsed32;
    spaceUsed32 += HUF_TABLELOG_MAX + 1;
    rankStart0 = (U32 *)workSpace + spaceUsed32;
    spaceUsed32 += HUF_TABLELOG_MAX + 2;
    sortedSymbol = (sortedSymbol_t *)workSpace + (spaceUsed32 * sizeof(U32)) / sizeof(sortedSymbol_t);
    spaceUsed32 += HUF_ALIGN(sizeof(sortedSymbol_t) * (HUF_SYMBOLVALUE_MAX + 1), sizeof(U32)) >> 2;
    weightList = (BYTE *)((U32 *)workSpace + spaceUsed32);
    spaceUsed32 += HUF_ALIGN(HUF_SYMBOLVALUE_MAX + 1, sizeof(U32)) >> 2;

    if ((spaceUsed32 << 2) > wkspSize) return ERROR(tableLog_tooLarge);
    workSpace = (U32 *)workSpace + spaceUsed32;
    wkspSize -= (spaceUsed32 << 2);

    rankStart = rankStart0 + 1;
    memset(rankStats, 0, sizeof(U32) * (2 * HUF_TABLELOG_MAX + 2 + 1));

    HUF_STATIC_ASSERT(sizeof(HUF_DEltX4) == sizeof(HUF_DTable));   /* if compiler fails here, assertion is wrong */
    if (maxTableLog > HUF_TABLELOG_MAX) return ERROR(tableLog_tooLarge);
    /* memset(weightList, 0, sizeof(weightList)); */   /* is not necessary, even though some analyzer complain ...
*/
    iSize = HUF_readStats(weightList, HUF_SYMBOLVALUE_MAX + 1, rankStats, &nbSymbols, &tableLog, src, srcSize);
    if (HUF_isError(iSize)) return iSize;

    /* check result */
    if (tableLog > maxTableLog) return ERROR(tableLog_tooLarge);   /* DTable can't fit code depth */

    /* find maxWeight */
    for (maxW = tableLog; rankStats[maxW]==0; maxW--) {}  /* necessarily finds a solution before 0 */

    /* Get start index of each weight */
    {   U32 w, nextRankStart = 0;
        for (w=1; w<maxW+1; w++) {
            U32 current = nextRankStart;
            nextRankStart += rankStats[w];
            rankStart[w] = current;
        }
        rankStart[0] = nextRankStart;   /* put all rankSizeCode==0 symbols at the end of sorted list */
        sizeOfSort = nextRankStart;
    }

    /* sort symbols by weight */
    {   U32 s;
        for (s=0; s<nbSymbols; s++) {
            U32 const w = weightList[s];
            U32 const r = rankStart[w]++;
            sortedSymbol[r].symbol = (BYTE)s;
            sortedSymbol[r].weight = (BYTE)w;
        }
        rankStart[0] = 0;   /* forget 0w symbols; this is beginning of weight(1) */
    }

    /* Build rankVal */
    {   U32* const rankVal0 = rankVal[0];
        {   int const rescale = (maxTableLog-tableLog) - 1;   /* tableLog <= maxTableLog */
            U32 nextRankVal = 0;
            U32 w;
            for (w=1; w<maxW+1; w++) {
                U32 current = nextRankVal;
                nextRankVal += rankStats[w] << (w+rescale);
                rankVal0[w] = current;
        }   }
        {   U32 const minBits = tableLog+1 - maxW;
            U32 consumed;
            for (consumed = minBits; consumed < maxTableLog - minBits + 1; consumed++) {
                U32* const rankValPtr = rankVal[consumed];
                U32 w;
                for (w = 1; w < maxW+1; w++) {
                    rankValPtr[w] = rankVal0[w] >> consumed;
    }   }   }   }

    HUF_fillDTableX4(dt, maxTableLog,
                   sortedSymbol, sizeOfSort,
                   rankStart0, rankVal, maxW,
                   tableLog+1);

    dtd.tableLog = (BYTE)maxTableLog;
    dtd.tableType = 1;
    memcpy(DTable, &dtd, sizeof(dtd));
    return iSize;
}

size_t HUF_readDTableX4(HUF_DTable* DTable, const void* src, size_t srcSize)
{
    U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32];
    return HUF_readDTableX4_wksp(DTable, src, srcSize,
                                 workSpace, sizeof(workSpace));
}

static U32 HUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
{
    size_t const val = BIT_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 2);
    BIT_skipBits(DStream, dt[val].nbBits);
    return dt[val].length;
}

static U32 HUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
{
    size_t const val = BIT_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 1);
    if (dt[val].length==1) BIT_skipBits(DStream, dt[val].nbBits);
    else {
        if (DStream->bitsConsumed < (sizeof(DStream->bitContainer)*8)) {
            BIT_skipBits(DStream, dt[val].nbBits);
            if (DStream->bitsConsumed > (sizeof(DStream->bitContainer)*8))
                /* ugly hack; works only because it's the last symbol. Note : can't easily extract nbBits from just this symbol */
                DStream->bitsConsumed = (sizeof(DStream->bitContainer)*8);
    }   }
    return 1;
}

#define HUF_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \
    ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

#define HUF_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \
    if (MEM_64bits() || (HUF_TABLELOG_MAX<=12)) \
        ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

#define HUF_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \
    if (MEM_64bits()) \
        ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

HINT_INLINE size_t HUF_decodeStreamX4(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd, const HUF_DEltX4* const dt, const U32 dtLog)
{
    BYTE* const pStart = p;

    /* up to 8 symbols at a time */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p < pEnd-(sizeof(bitDPtr->bitContainer)-1))) {
        HUF_DECODE_SYMBOLX4_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX4_1(p, bitDPtr);
        HUF_DECODE_SYMBOLX4_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX4_0(p, bitDPtr);
    }

    /* closer to end : up to 2 symbols at a time */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) & (p <= pEnd-2))
        HUF_DECODE_SYMBOLX4_0(p, bitDPtr);

    while (p <= pEnd-2)
        HUF_DECODE_SYMBOLX4_0(p, bitDPtr);   /* no need to reload : reached the end of DStream */

    if (p < pEnd)
        p += HUF_decodeLastSymbolX4(p, bitDPtr, dt, dtLog);

    return p-pStart;
}

static size_t HUF_decompress1X4_usingDTable_internal(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const HUF_DTable* DTable)
{
    BIT_DStream_t bitD;

    /* Init */
    {   size_t const errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize);
        if (HUF_isError(errorCode)) return errorCode;
    }

    /* decode */
    {   BYTE* const ostart = (BYTE*) dst;
        BYTE* const oend = ostart + dstSize;
        const void* const dtPtr = DTable+1;   /* force compiler to not use strict-aliasing */
        const HUF_DEltX4* const dt = (const HUF_DEltX4*)dtPtr;
        DTableDesc const dtd = HUF_getDTableDesc(DTable);
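        /* single bitstream covering the whole regenerated output */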
HUF_decodeStreamX4(ostart, &bitD, oend, dt, dtd.tableLog); } /* check */ if (!BIT_endOfDStream(&bitD)) return ERROR(corruption_detected); /* decoded size */ return dstSize; } size_t HUF_decompress1X4_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable) { DTableDesc dtd = HUF_getDTableDesc(DTable); if (dtd.tableType != 1) return ERROR(GENERIC); return HUF_decompress1X4_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable); } size_t HUF_decompress1X4_DCtx_wksp(HUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize) { const BYTE* ip = (const BYTE*) cSrc; size_t const hSize = HUF_readDTableX4_wksp(DCtx, cSrc, cSrcSize, workSpace, wkspSize); if (HUF_isError(hSize)) return hSize; if (hSize >= cSrcSize) return ERROR(srcSize_wrong); ip += hSize; cSrcSize -= hSize; return HUF_decompress1X4_usingDTable_internal (dst, dstSize, ip, cSrcSize, DCtx); } size_t HUF_decompress1X4_DCtx(HUF_DTable* DCtx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32]; return HUF_decompress1X4_DCtx_wksp(DCtx, dst, dstSize, cSrc, cSrcSize, workSpace, sizeof(workSpace)); } size_t HUF_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_TABLELOG_MAX); return HUF_decompress1X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize); } static size_t HUF_decompress4X4_usingDTable_internal( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable) { if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* const dtPtr = DTable+1; const HUF_DEltX4* const dt = (const HUF_DEltX4*)dtPtr; /* Init */ BIT_DStream_t bitD1; BIT_DStream_t bitD2; BIT_DStream_t bitD3; BIT_DStream_t bitD4; size_t const length1 = MEM_readLE16(istart); size_t const length2 = MEM_readLE16(istart+2); size_t const length3 = MEM_readLE16(istart+4); size_t const length4 = cSrcSize - (length1 + length2 + length3 + 6); const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; size_t const segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; DTableDesc const dtd = HUF_getDTableDesc(DTable); U32 const dtLog = dtd.tableLog; if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ { size_t const errorCode = BIT_initDStream(&bitD1, istart1, length1); if (HUF_isError(errorCode)) return errorCode; } { size_t const errorCode = BIT_initDStream(&bitD2, istart2, length2); if (HUF_isError(errorCode)) return errorCode; } { size_t const errorCode = BIT_initDStream(&bitD3, istart3, length3); if (HUF_isError(errorCode)) return errorCode; } { size_t const errorCode = BIT_initDStream(&bitD4, istart4, length4); if (HUF_isError(errorCode)) return errorCode; } /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); for ( ; (endSignal==BIT_DStream_unfinished) & 
(op4<(oend-(sizeof(bitD4.bitContainer)-1))) ; ) { HUF_DECODE_SYMBOLX4_2(op1, &bitD1); HUF_DECODE_SYMBOLX4_2(op2, &bitD2); HUF_DECODE_SYMBOLX4_2(op3, &bitD3); HUF_DECODE_SYMBOLX4_2(op4, &bitD4); HUF_DECODE_SYMBOLX4_1(op1, &bitD1); HUF_DECODE_SYMBOLX4_1(op2, &bitD2); HUF_DECODE_SYMBOLX4_1(op3, &bitD3); HUF_DECODE_SYMBOLX4_1(op4, &bitD4); HUF_DECODE_SYMBOLX4_2(op1, &bitD1); HUF_DECODE_SYMBOLX4_2(op2, &bitD2); HUF_DECODE_SYMBOLX4_2(op3, &bitD3); HUF_DECODE_SYMBOLX4_2(op4, &bitD4); HUF_DECODE_SYMBOLX4_0(op1, &bitD1); HUF_DECODE_SYMBOLX4_0(op2, &bitD2); HUF_DECODE_SYMBOLX4_0(op3, &bitD3); HUF_DECODE_SYMBOLX4_0(op4, &bitD4); endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) return ERROR(corruption_detected); /* note : op4 already verified within main loop */ /* finish bitStreams one by one */ HUF_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog); HUF_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog); HUF_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog); HUF_decodeStreamX4(op4, &bitD4, oend, dt, dtLog); /* check */ { U32 const endCheck = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4); if (!endCheck) return ERROR(corruption_detected); } /* decoded size */ return dstSize; } } size_t HUF_decompress4X4_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable) { DTableDesc dtd = HUF_getDTableDesc(DTable); if (dtd.tableType != 1) return ERROR(GENERIC); return HUF_decompress4X4_usingDTable_internal(dst, dstSize, cSrc, cSrcSize, DTable); } size_t HUF_decompress4X4_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize) { const BYTE* ip = (const BYTE*) cSrc; size_t hSize = HUF_readDTableX4_wksp(dctx, cSrc, cSrcSize, workSpace, wkspSize); if (HUF_isError(hSize)) return hSize; if (hSize >= cSrcSize) return ERROR(srcSize_wrong); ip += hSize; cSrcSize -= hSize; return HUF_decompress4X4_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx); } size_t HUF_decompress4X4_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32]; return HUF_decompress4X4_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, sizeof(workSpace)); } size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_TABLELOG_MAX); return HUF_decompress4X4_DCtx(DTable, dst, dstSize, cSrc, cSrcSize); } /* ********************************/ /* Generic decompression selector */ /* ********************************/ size_t HUF_decompress1X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable) { DTableDesc const dtd = HUF_getDTableDesc(DTable); return dtd.tableType ? HUF_decompress1X4_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable) : HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable); } size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable) { DTableDesc const dtd = HUF_getDTableDesc(DTable); return dtd.tableType ? 
HUF_decompress4X4_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable) : HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable); } typedef struct { U32 tableTime; U32 decode256Time; } algo_time_t; static const algo_time_t algoTime[16 /* Quantization */][3 /* single, double, quad */] = { /* single, double, quad */ {{0,0}, {1,1}, {2,2}}, /* Q==0 : impossible */ {{0,0}, {1,1}, {2,2}}, /* Q==1 : impossible */ {{ 38,130}, {1313, 74}, {2151, 38}}, /* Q == 2 : 12-18% */ {{ 448,128}, {1353, 74}, {2238, 41}}, /* Q == 3 : 18-25% */ {{ 556,128}, {1353, 74}, {2238, 47}}, /* Q == 4 : 25-32% */ {{ 714,128}, {1418, 74}, {2436, 53}}, /* Q == 5 : 32-38% */ {{ 883,128}, {1437, 74}, {2464, 61}}, /* Q == 6 : 38-44% */ {{ 897,128}, {1515, 75}, {2622, 68}}, /* Q == 7 : 44-50% */ {{ 926,128}, {1613, 75}, {2730, 75}}, /* Q == 8 : 50-56% */ {{ 947,128}, {1729, 77}, {3359, 77}}, /* Q == 9 : 56-62% */ {{1107,128}, {2083, 81}, {4006, 84}}, /* Q ==10 : 62-69% */ {{1177,128}, {2379, 87}, {4785, 88}}, /* Q ==11 : 69-75% */ {{1242,128}, {2415, 93}, {5155, 84}}, /* Q ==12 : 75-81% */ {{1349,128}, {2644,106}, {5260,106}}, /* Q ==13 : 81-87% */ {{1455,128}, {2422,124}, {4174,124}}, /* Q ==14 : 87-93% */ {{ 722,128}, {1891,145}, {1936,146}}, /* Q ==15 : 93-99% */ }; /** HUF_selectDecoder() : * Tells which decoder is likely to decode faster, * based on a set of pre-determined metrics. * @return : 0==HUF_decompress4X2, 1==HUF_decompress4X4 . * Assumption : 0 < cSrcSize, dstSize <= 128 KB */ U32 HUF_selectDecoder (size_t dstSize, size_t cSrcSize) { /* decoder timing evaluation */ U32 const Q = cSrcSize >= dstSize ? 15 : (U32)(cSrcSize * 16 / dstSize); /* Q < 16 */ U32 const D256 = (U32)(dstSize >> 8); U32 const DTime0 = algoTime[Q][0].tableTime + (algoTime[Q][0].decode256Time * D256); U32 DTime1 = algoTime[Q][1].tableTime + (algoTime[Q][1].decode256Time * D256); DTime1 += DTime1 >> 3; /* advantage to algorithm using less memory, for cache eviction */ return DTime1 < DTime0; } typedef size_t (*decompressionAlgo)(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); size_t HUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { static const decompressionAlgo decompress[2] = { HUF_decompress4X2, HUF_decompress4X4 }; /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */ if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */ if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */ { U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize); return decompress[algoNb](dst, dstSize, cSrc, cSrcSize); } } size_t HUF_decompress4X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */ if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */ if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */ { U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize); return algoNb ? 
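           /* worked example of HUF_selectDecoder() (illustrative numbers) :
            * dstSize==64 KB, cSrcSize==32 KB => Q==8, D256==256,
            * DTime0 = 926 + 128*256 = 33694 ; DTime1 = (1613 + 75*256) + 12.5% = 23414,
            * so algoNb==1 and the double-symbol (X4) decoder is selected */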
HUF_decompress4X4_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) : HUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ; } } size_t HUF_decompress4X_hufOnly(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32]; return HUF_decompress4X_hufOnly_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, sizeof(workSpace)); } size_t HUF_decompress4X_hufOnly_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize) { /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize == 0) return ERROR(corruption_detected); { U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize); return algoNb ? HUF_decompress4X4_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize): HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize); } } size_t HUF_decompress1X_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize) { /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */ if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */ if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */ { U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize); return algoNb ? HUF_decompress1X4_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize): HUF_decompress1X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize); } } size_t HUF_decompress1X_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { U32 workSpace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32]; return HUF_decompress1X_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, sizeof(workSpace)); } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/decompress/zstd_decompress.c0000644000175000017500000033075613257361140026126 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /* *************************************************************** * Tuning parameters *****************************************************************/ /*! * HEAPMODE : * Select how default decompression function ZSTD_decompress() will allocate memory, * in memory stack (0), or in memory heap (1, requires malloc()) */ #ifndef ZSTD_HEAPMODE # define ZSTD_HEAPMODE 1 #endif /*! * LEGACY_SUPPORT : * if set to 1, ZSTD_decompress() can decode older formats (v0.1+) */ #ifndef ZSTD_LEGACY_SUPPORT # define ZSTD_LEGACY_SUPPORT 0 #endif /*! * MAXWINDOWSIZE_DEFAULT : * maximum window size accepted by DStream, by default. * Frames requiring more memory will be rejected. 
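 *            (this default guards a decoder against hostile frames that would
 *            otherwise request arbitrarily large amounts of memory)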
*/
#ifndef ZSTD_MAXWINDOWSIZE_DEFAULT
# define ZSTD_MAXWINDOWSIZE_DEFAULT (((U32)1 << ZSTD_WINDOWLOG_DEFAULTMAX) + 1)
#endif

/*-*******************************************************
*  Dependencies
*********************************************************/
#include <string.h>      /* memcpy, memmove, memset */
#include "mem.h"         /* low level memory routines */
#define FSE_STATIC_LINKING_ONLY
#include "fse.h"
#define HUF_STATIC_LINKING_ONLY
#include "huf.h"
#include "zstd_internal.h"

#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
# include "zstd_legacy.h"
#endif

/*-*************************************
*  Errors
***************************************/
#define ZSTD_isError ERR_isError   /* for inlining */
#define FSE_isError  ERR_isError
#define HUF_isError  ERR_isError

/*_*******************************************************
*  Memory operations
**********************************************************/
static void ZSTD_copy4(void* dst, const void* src) { memcpy(dst, src, 4); }

/*-*************************************************************
*   Context management
***************************************************************/
typedef enum { ZSTDds_getFrameHeaderSize, ZSTDds_decodeFrameHeader,
               ZSTDds_decodeBlockHeader, ZSTDds_decompressBlock,
               ZSTDds_decompressLastBlock, ZSTDds_checkChecksum,
               ZSTDds_decodeSkippableHeader, ZSTDds_skipFrame } ZSTD_dStage;

typedef enum { zdss_init=0, zdss_loadHeader,
               zdss_read, zdss_load, zdss_flush } ZSTD_dStreamStage;

typedef struct {
    FSE_DTable LLTable[FSE_DTABLE_SIZE_U32(LLFSELog)];
    FSE_DTable OFTable[FSE_DTABLE_SIZE_U32(OffFSELog)];
    FSE_DTable MLTable[FSE_DTABLE_SIZE_U32(MLFSELog)];
    HUF_DTable hufTable[HUF_DTABLE_SIZE(HufLog)];   /* can accommodate HUF_decompress4X */
    U32 workspace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32];
    U32 rep[ZSTD_REP_NUM];
} ZSTD_entropyDTables_t;

struct ZSTD_DCtx_s
{
    const FSE_DTable* LLTptr;
    const FSE_DTable* MLTptr;
    const FSE_DTable* OFTptr;
    const HUF_DTable* HUFptr;
    ZSTD_entropyDTables_t entropy;
    const void* previousDstEnd;   /* detect continuity */
    const void* base;             /* start of current segment */
    const void* vBase;            /* virtual start of previous segment if it was just before current one */
    const void* dictEnd;          /* end of previous segment */
    size_t expected;
    ZSTD_frameHeader fParams;
    U64 decodedSize;
    blockType_e bType;            /* used in ZSTD_decompressContinue(), store blockType between block header decoding and block decompression stages */
    ZSTD_dStage stage;
    U32 litEntropy;
    U32 fseEntropy;
    XXH64_state_t xxhState;
    size_t headerSize;
    U32 dictID;
    ZSTD_format_e format;
    const BYTE* litPtr;
    ZSTD_customMem customMem;
    size_t litSize;
    size_t rleSize;
    size_t staticSize;

    /* streaming */
    ZSTD_DDict* ddictLocal;
    const ZSTD_DDict* ddict;
    ZSTD_dStreamStage streamStage;
    char*  inBuff;
    size_t inBuffSize;
    size_t inPos;
    size_t maxWindowSize;
    char*  outBuff;
    size_t outBuffSize;
    size_t outStart;
    size_t outEnd;
    size_t lhSize;
    void* legacyContext;
    U32 previousLegacyVersion;
    U32 legacyVersion;
    U32 hostageByte;

    /* workspace */
    BYTE litBuffer[ZSTD_BLOCKSIZE_MAX + WILDCOPY_OVERLENGTH];
    BYTE headerBuffer[ZSTD_FRAMEHEADERSIZE_MAX];
};  /* typedef'd to ZSTD_DCtx within "zstd.h" */

size_t ZSTD_sizeof_DCtx (const ZSTD_DCtx* dctx)
{
    if (dctx==NULL) return 0;   /* support sizeof NULL */
    return sizeof(*dctx)
           + ZSTD_sizeof_DDict(dctx->ddictLocal)
           + dctx->inBuffSize + dctx->outBuffSize;
}

size_t ZSTD_estimateDCtxSize(void) { return sizeof(ZSTD_DCtx); }

static size_t ZSTD_startingInputLength(ZSTD_format_e format)
{
    size_t const startingInputLength = (format==ZSTD_f_zstd1_magicless) ?
ZSTD_frameHeaderSize_prefix - ZSTD_frameIdSize : ZSTD_frameHeaderSize_prefix; ZSTD_STATIC_ASSERT(ZSTD_FRAMEHEADERSIZE_PREFIX >= ZSTD_FRAMEIDSIZE); /* only supports formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless */ assert( (format == ZSTD_f_zstd1) || (format == ZSTD_f_zstd1_magicless) ); return startingInputLength; } static void ZSTD_initDCtx_internal(ZSTD_DCtx* dctx) { dctx->format = ZSTD_f_zstd1; /* ZSTD_decompressBegin() invokes ZSTD_startingInputLength() with argument dctx->format */ dctx->staticSize = 0; dctx->maxWindowSize = ZSTD_MAXWINDOWSIZE_DEFAULT; dctx->ddict = NULL; dctx->ddictLocal = NULL; dctx->inBuff = NULL; dctx->inBuffSize = 0; dctx->outBuffSize = 0; dctx->streamStage = zdss_init; } ZSTD_DCtx* ZSTD_initStaticDCtx(void *workspace, size_t workspaceSize) { ZSTD_DCtx* const dctx = (ZSTD_DCtx*) workspace; if ((size_t)workspace & 7) return NULL; /* 8-aligned */ if (workspaceSize < sizeof(ZSTD_DCtx)) return NULL; /* minimum size */ ZSTD_initDCtx_internal(dctx); dctx->staticSize = workspaceSize; dctx->inBuff = (char*)(dctx+1); return dctx; } ZSTD_DCtx* ZSTD_createDCtx_advanced(ZSTD_customMem customMem) { if (!customMem.customAlloc ^ !customMem.customFree) return NULL; { ZSTD_DCtx* const dctx = (ZSTD_DCtx*)ZSTD_malloc(sizeof(*dctx), customMem); if (!dctx) return NULL; dctx->customMem = customMem; dctx->legacyContext = NULL; dctx->previousLegacyVersion = 0; ZSTD_initDCtx_internal(dctx); return dctx; } } ZSTD_DCtx* ZSTD_createDCtx(void) { return ZSTD_createDCtx_advanced(ZSTD_defaultCMem); } size_t ZSTD_freeDCtx(ZSTD_DCtx* dctx) { if (dctx==NULL) return 0; /* support free on NULL */ if (dctx->staticSize) return ERROR(memory_allocation); /* not compatible with static DCtx */ { ZSTD_customMem const cMem = dctx->customMem; ZSTD_freeDDict(dctx->ddictLocal); dctx->ddictLocal = NULL; ZSTD_free(dctx->inBuff, cMem); dctx->inBuff = NULL; #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1) if (dctx->legacyContext) ZSTD_freeLegacyStreamContext(dctx->legacyContext, dctx->previousLegacyVersion); #endif ZSTD_free(dctx, cMem); return 0; } } /* no longer useful */ void ZSTD_copyDCtx(ZSTD_DCtx* dstDCtx, const ZSTD_DCtx* srcDCtx) { size_t const toCopy = (size_t)((char*)(&dstDCtx->inBuff) - (char*)dstDCtx); memcpy(dstDCtx, srcDCtx, toCopy); /* no need to copy workspace */ } /*-************************************************************* * Decompression section ***************************************************************/ /*! ZSTD_isFrame() : * Tells if the content of `buffer` starts with a valid Frame Identifier. * Note : Frame Identifier is 4 bytes. If `size < 4`, @return will always be 0. * Note 2 : Legacy Frame Identifiers are considered valid only if Legacy Support is enabled. * Note 3 : Skippable Frame Identifiers are considered valid. */ unsigned ZSTD_isFrame(const void* buffer, size_t size) { if (size < ZSTD_frameIdSize) return 0; { U32 const magic = MEM_readLE32(buffer); if (magic == ZSTD_MAGICNUMBER) return 1; if ((magic & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) return 1; } #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1) if (ZSTD_isLegacy(buffer, size)) return 1; #endif return 0; } /** ZSTD_frameHeaderSize_internal() : * srcSize must be large enough to reach header size fields. 
* note : only works for formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless * @return : size of the Frame Header * or an error code, which can be tested with ZSTD_isError() */ static size_t ZSTD_frameHeaderSize_internal(const void* src, size_t srcSize, ZSTD_format_e format) { size_t const minInputSize = ZSTD_startingInputLength(format); if (srcSize < minInputSize) return ERROR(srcSize_wrong); { BYTE const fhd = ((const BYTE*)src)[minInputSize-1]; U32 const dictID= fhd & 3; U32 const singleSegment = (fhd >> 5) & 1; U32 const fcsId = fhd >> 6; return minInputSize + !singleSegment + ZSTD_did_fieldSize[dictID] + ZSTD_fcs_fieldSize[fcsId] + (singleSegment && !fcsId); } } /** ZSTD_frameHeaderSize() : * srcSize must be >= ZSTD_frameHeaderSize_prefix. * @return : size of the Frame Header */ size_t ZSTD_frameHeaderSize(const void* src, size_t srcSize) { return ZSTD_frameHeaderSize_internal(src, srcSize, ZSTD_f_zstd1); } /** ZSTD_getFrameHeader_internal() : * decode Frame Header, or require larger `srcSize`. * note : only works for formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless * @return : 0, `zfhPtr` is correctly filled, * >0, `srcSize` is too small, value is wanted `srcSize` amount, * or an error code, which can be tested using ZSTD_isError() */ static size_t ZSTD_getFrameHeader_internal(ZSTD_frameHeader* zfhPtr, const void* src, size_t srcSize, ZSTD_format_e format) { const BYTE* ip = (const BYTE*)src; size_t const minInputSize = ZSTD_startingInputLength(format); if (srcSize < minInputSize) return minInputSize; if ( (format != ZSTD_f_zstd1_magicless) && (MEM_readLE32(src) != ZSTD_MAGICNUMBER) ) { if ((MEM_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) { /* skippable frame */ if (srcSize < ZSTD_skippableHeaderSize) return ZSTD_skippableHeaderSize; /* magic number + frame length */ memset(zfhPtr, 0, sizeof(*zfhPtr)); zfhPtr->frameContentSize = MEM_readLE32((const char *)src + ZSTD_frameIdSize); zfhPtr->frameType = ZSTD_skippableFrame; return 0; } return ERROR(prefix_unknown); } /* ensure there is enough `srcSize` to fully read/decode frame header */ { size_t const fhsize = ZSTD_frameHeaderSize_internal(src, srcSize, format); if (srcSize < fhsize) return fhsize; zfhPtr->headerSize = (U32)fhsize; } { BYTE const fhdByte = ip[minInputSize-1]; size_t pos = minInputSize; U32 const dictIDSizeCode = fhdByte&3; U32 const checksumFlag = (fhdByte>>2)&1; U32 const singleSegment = (fhdByte>>5)&1; U32 const fcsID = fhdByte>>6; U64 windowSize = 0; U32 dictID = 0; U64 frameContentSize = ZSTD_CONTENTSIZE_UNKNOWN; if ((fhdByte & 0x08) != 0) return ERROR(frameParameter_unsupported); /* reserved bits, must be zero */ if (!singleSegment) { BYTE const wlByte = ip[pos++]; U32 const windowLog = (wlByte >> 3) + ZSTD_WINDOWLOG_ABSOLUTEMIN; if (windowLog > ZSTD_WINDOWLOG_MAX) return ERROR(frameParameter_windowTooLarge); windowSize = (1ULL << windowLog); windowSize += (windowSize >> 3) * (wlByte&7); } switch(dictIDSizeCode) { default: assert(0); /* impossible */ case 0 : break; case 1 : dictID = ip[pos]; pos++; break; case 2 : dictID = MEM_readLE16(ip+pos); pos+=2; break; case 3 : dictID = MEM_readLE32(ip+pos); pos+=4; break; } switch(fcsID) { default: assert(0); /* impossible */ case 0 : if (singleSegment) frameContentSize = ip[pos]; break; case 1 : frameContentSize = MEM_readLE16(ip+pos)+256; break; case 2 : frameContentSize = MEM_readLE32(ip+pos); break; case 3 : frameContentSize = MEM_readLE64(ip+pos); break; } if (singleSegment) windowSize = frameContentSize; zfhPtr->frameType = ZSTD_frame; 
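        /* worked example : fhdByte==0x20 => dictIDSizeCode==0, checksumFlag==0, singleSegment==1, fcsID==0,
         * so frameContentSize is read from the single byte ip[pos], and windowSize = frameContentSize */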
zfhPtr->frameContentSize = frameContentSize; zfhPtr->windowSize = windowSize; zfhPtr->blockSizeMax = (unsigned) MIN(windowSize, ZSTD_BLOCKSIZE_MAX); zfhPtr->dictID = dictID; zfhPtr->checksumFlag = checksumFlag; } return 0; } /** ZSTD_getFrameHeader() : * decode Frame Header, or require larger `srcSize`. * note : this function does not consume input, it only reads it. * @return : 0, `zfhPtr` is correctly filled, * >0, `srcSize` is too small, value is wanted `srcSize` amount, * or an error code, which can be tested using ZSTD_isError() */ size_t ZSTD_getFrameHeader(ZSTD_frameHeader* zfhPtr, const void* src, size_t srcSize) { return ZSTD_getFrameHeader_internal(zfhPtr, src, srcSize, ZSTD_f_zstd1); } /** ZSTD_getFrameContentSize() : * compatible with legacy mode * @return : decompressed size of the single frame pointed to be `src` if known, otherwise * - ZSTD_CONTENTSIZE_UNKNOWN if the size cannot be determined * - ZSTD_CONTENTSIZE_ERROR if an error occurred (e.g. invalid magic number, srcSize too small) */ unsigned long long ZSTD_getFrameContentSize(const void *src, size_t srcSize) { #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1) if (ZSTD_isLegacy(src, srcSize)) { unsigned long long const ret = ZSTD_getDecompressedSize_legacy(src, srcSize); return ret == 0 ? ZSTD_CONTENTSIZE_UNKNOWN : ret; } #endif { ZSTD_frameHeader zfh; if (ZSTD_getFrameHeader(&zfh, src, srcSize) != 0) return ZSTD_CONTENTSIZE_ERROR; if (zfh.frameType == ZSTD_skippableFrame) { return 0; } else { return zfh.frameContentSize; } } } /** ZSTD_findDecompressedSize() : * compatible with legacy mode * `srcSize` must be the exact length of some number of ZSTD compressed and/or * skippable frames * @return : decompressed size of the frames contained */ unsigned long long ZSTD_findDecompressedSize(const void* src, size_t srcSize) { unsigned long long totalDstSize = 0; while (srcSize >= ZSTD_frameHeaderSize_prefix) { U32 const magicNumber = MEM_readLE32(src); if ((magicNumber & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) { size_t skippableSize; if (srcSize < ZSTD_skippableHeaderSize) return ERROR(srcSize_wrong); skippableSize = MEM_readLE32((const BYTE *)src + ZSTD_frameIdSize) + ZSTD_skippableHeaderSize; if (srcSize < skippableSize) { return ZSTD_CONTENTSIZE_ERROR; } src = (const BYTE *)src + skippableSize; srcSize -= skippableSize; continue; } { unsigned long long const ret = ZSTD_getFrameContentSize(src, srcSize); if (ret >= ZSTD_CONTENTSIZE_ERROR) return ret; /* check for overflow */ if (totalDstSize + ret < totalDstSize) return ZSTD_CONTENTSIZE_ERROR; totalDstSize += ret; } { size_t const frameSrcSize = ZSTD_findFrameCompressedSize(src, srcSize); if (ZSTD_isError(frameSrcSize)) { return ZSTD_CONTENTSIZE_ERROR; } src = (const BYTE *)src + frameSrcSize; srcSize -= frameSrcSize; } } /* while (srcSize >= ZSTD_frameHeaderSize_prefix) */ if (srcSize) return ZSTD_CONTENTSIZE_ERROR; return totalDstSize; } /** ZSTD_getDecompressedSize() : * compatible with legacy mode * @return : decompressed size if known, 0 otherwise note : 0 can mean any of the following : - frame content is empty - decompressed size field is not present in frame header - frame header unknown / not supported - frame header not complete (`srcSize` too small) */ unsigned long long ZSTD_getDecompressedSize(const void* src, size_t srcSize) { unsigned long long const ret = ZSTD_getFrameContentSize(src, srcSize); ZSTD_STATIC_ASSERT(ZSTD_CONTENTSIZE_ERROR < ZSTD_CONTENTSIZE_UNKNOWN); return (ret >= ZSTD_CONTENTSIZE_ERROR) ? 
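           /* both ZSTD_CONTENTSIZE_ERROR and ZSTD_CONTENTSIZE_UNKNOWN collapse to 0 in this legacy API */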
0 : ret; } /** ZSTD_decodeFrameHeader() : * `headerSize` must be the size provided by ZSTD_frameHeaderSize(). * @return : 0 if success, or an error code, which can be tested using ZSTD_isError() */ static size_t ZSTD_decodeFrameHeader(ZSTD_DCtx* dctx, const void* src, size_t headerSize) { size_t const result = ZSTD_getFrameHeader_internal(&(dctx->fParams), src, headerSize, dctx->format); if (ZSTD_isError(result)) return result; /* invalid header */ if (result>0) return ERROR(srcSize_wrong); /* headerSize too small */ if (dctx->fParams.dictID && (dctx->dictID != dctx->fParams.dictID)) return ERROR(dictionary_wrong); if (dctx->fParams.checksumFlag) XXH64_reset(&dctx->xxhState, 0); return 0; } /*! ZSTD_getcBlockSize() : * Provides the size of compressed block from block header `src` */ size_t ZSTD_getcBlockSize(const void* src, size_t srcSize, blockProperties_t* bpPtr) { if (srcSize < ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); { U32 const cBlockHeader = MEM_readLE24(src); U32 const cSize = cBlockHeader >> 3; bpPtr->lastBlock = cBlockHeader & 1; bpPtr->blockType = (blockType_e)((cBlockHeader >> 1) & 3); bpPtr->origSize = cSize; /* only useful for RLE */ if (bpPtr->blockType == bt_rle) return 1; if (bpPtr->blockType == bt_reserved) return ERROR(corruption_detected); return cSize; } } static size_t ZSTD_copyRawBlock(void* dst, size_t dstCapacity, const void* src, size_t srcSize) { if (srcSize > dstCapacity) return ERROR(dstSize_tooSmall); memcpy(dst, src, srcSize); return srcSize; } static size_t ZSTD_setRleBlock(void* dst, size_t dstCapacity, const void* src, size_t srcSize, size_t regenSize) { if (srcSize != 1) return ERROR(srcSize_wrong); if (regenSize > dstCapacity) return ERROR(dstSize_tooSmall); memset(dst, *(const BYTE*)src, regenSize); return regenSize; } /*! ZSTD_decodeLiteralsBlock() : * @return : nb of bytes read from src (< srcSize ) * note : symbol not declared but exposed for fullbench */ size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx, const void* src, size_t srcSize) /* note : srcSize < BLOCKSIZE */ { if (srcSize < MIN_CBLOCK_SIZE) return ERROR(corruption_detected); { const BYTE* const istart = (const BYTE*) src; symbolEncodingType_e const litEncType = (symbolEncodingType_e)(istart[0] & 3); switch(litEncType) { case set_repeat: if (dctx->litEntropy==0) return ERROR(dictionary_corrupted); /* fall-through */ case set_compressed: if (srcSize < 5) return ERROR(corruption_detected); /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need up to 5 for case 3 */ { size_t lhSize, litSize, litCSize; U32 singleStream=0; U32 const lhlCode = (istart[0] >> 2) & 3; U32 const lhc = MEM_readLE32(istart); switch(lhlCode) { case 0: case 1: default: /* note : default is impossible, since lhlCode into [0..3] */ /* 2 - 2 - 10 - 10 */ singleStream = !lhlCode; lhSize = 3; litSize = (lhc >> 4) & 0x3FF; litCSize = (lhc >> 14) & 0x3FF; break; case 2: /* 2 - 2 - 14 - 14 */ lhSize = 4; litSize = (lhc >> 4) & 0x3FFF; litCSize = lhc >> 18; break; case 3: /* 2 - 2 - 18 - 18 */ lhSize = 5; litSize = (lhc >> 4) & 0x3FFFF; litCSize = (lhc >> 22) + (istart[4] << 10); break; } if (litSize > ZSTD_BLOCKSIZE_MAX) return ERROR(corruption_detected); if (litCSize + lhSize > srcSize) return ERROR(corruption_detected); if (HUF_isError((litEncType==set_repeat) ? ( singleStream ? HUF_decompress1X_usingDTable(dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->HUFptr) : HUF_decompress4X_usingDTable(dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->HUFptr) ) : ( singleStream ? 
HUF_decompress1X2_DCtx_wksp(dctx->entropy.hufTable, dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->entropy.workspace, sizeof(dctx->entropy.workspace)) :
HUF_decompress4X_hufOnly_wksp(dctx->entropy.hufTable, dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->entropy.workspace, sizeof(dctx->entropy.workspace)))))
    return ERROR(corruption_detected);
dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; dctx->litEntropy = 1;
if (litEncType==set_compressed) dctx->HUFptr = dctx->entropy.hufTable;
memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
return litCSize + lhSize; }
case set_basic:
{ size_t litSize, lhSize;
  U32 const lhlCode = ((istart[0]) >> 2) & 3;
  switch(lhlCode)
  { case 0: case 2: default: /* note : default is impossible, since lhlCode into [0..3] */
        lhSize = 1; litSize = istart[0] >> 3; break;
    case 1: lhSize = 2; litSize = MEM_readLE16(istart) >> 4; break;
    case 3: lhSize = 3; litSize = MEM_readLE24(istart) >> 4; break; }
  if (lhSize+litSize+WILDCOPY_OVERLENGTH > srcSize) { /* risk reading beyond src buffer with wildcopy */
      if (litSize+lhSize > srcSize) return ERROR(corruption_detected);
      memcpy(dctx->litBuffer, istart+lhSize, litSize);
      dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize;
      memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
      return lhSize+litSize; }
  /* direct reference into compressed stream */
  dctx->litPtr = istart+lhSize; dctx->litSize = litSize;
  return lhSize+litSize; }
case set_rle:
{ U32 const lhlCode = ((istart[0]) >> 2) & 3;
  size_t litSize, lhSize;
  switch(lhlCode)
  { case 0: case 2: default: /* note : default is impossible, since lhlCode into [0..3] */
        lhSize = 1; litSize = istart[0] >> 3; break;
    case 1: lhSize = 2; litSize = MEM_readLE16(istart) >> 4; break;
    case 3: lhSize = 3; litSize = MEM_readLE24(istart) >> 4;
        if (srcSize<4) return ERROR(corruption_detected); /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need lhSize+1 = 4 */
        break; }
  if (litSize > ZSTD_BLOCKSIZE_MAX) return ERROR(corruption_detected);
  memset(dctx->litBuffer, istart[lhSize], litSize + WILDCOPY_OVERLENGTH);
  dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize;
  return lhSize+1; }
default:
    return ERROR(corruption_detected); /* impossible */ } } }
typedef union { FSE_decode_t realData; U32 alignedBy4; } FSE_decode_t4;
/* Default FSE distribution table for Literal Lengths */
static const FSE_decode_t4 LL_defaultDTable[(1<<LL_DEFAULTNORMLOG)+1] = { { { LL_DEFAULTNORMLOG, 1, 1 } }, /* header : tableLog, fastMode, fastMode */ /* ... */ };
/* Default FSE distribution table for Offset Codes */
static const FSE_decode_t4 OF_defaultDTable[(1<<OF_DEFAULTNORMLOG)+1] = { { { OF_DEFAULTNORMLOG, 1, 1 } }, /* header : tableLog, fastMode, fastMode */ /* ... */ };
/* Default FSE distribution table for Match Lengths */
static const FSE_decode_t4 ML_defaultDTable[(1<<ML_DEFAULTNORMLOG)+1] = { { { ML_DEFAULTNORMLOG, 1, 1 } }, /* header : tableLog, fastMode, fastMode */ /* ... */ };
/*! ZSTD_buildSeqTable() :
 * @return : nb bytes read from src,
 *           or an error code if it fails, testable with ZSTD_isError() */
static size_t ZSTD_buildSeqTable(FSE_DTable* DTableSpace, const FSE_DTable** DTablePtr, symbolEncodingType_e type, U32 max, U32 maxLog, const void* src, size_t srcSize, const FSE_decode_t4* defaultTable, U32 flagRepeatTable)
{ const void* const tmpPtr = defaultTable; /* bypass strict aliasing */
  switch(type)
  { case set_rle :
        if (!srcSize) return ERROR(srcSize_wrong);
        if ( (*(const BYTE*)src) > max) return ERROR(corruption_detected);
        FSE_buildDTable_rle(DTableSpace, *(const BYTE*)src);
        *DTablePtr = DTableSpace;
        return 1;
    case set_basic :
        *DTablePtr = (const FSE_DTable*)tmpPtr;
        return 0;
    case set_repeat:
        if (!flagRepeatTable) return ERROR(corruption_detected);
        return 0;
    default : /* impossible */
    case set_compressed :
    { U32 tableLog;
      S16 norm[MaxSeq+1];
      size_t const headerSize = FSE_readNCount(norm, &max, &tableLog, src, srcSize);
      if (FSE_isError(headerSize)) return ERROR(corruption_detected);
      if (tableLog > maxLog) return ERROR(corruption_detected);
      FSE_buildDTable(DTableSpace, norm, max, tableLog);
      *DTablePtr = DTableSpace;
      return headerSize; } } }
size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr, const void* src, size_t srcSize)
{ const BYTE* const istart = (const BYTE* const)src;
  const BYTE* const iend = istart + srcSize;
  const BYTE* ip = istart;
  DEBUGLOG(5, "ZSTD_decodeSeqHeaders");
  /* check */
  if (srcSize < MIN_SEQUENCES_SIZE) return ERROR(srcSize_wrong);
  /* SeqHead */
  { int nbSeq = *ip++;
    if (!nbSeq) { *nbSeqPtr=0; return 1; }
    if (nbSeq > 0x7F) { if (nbSeq == 0xFF) { if (ip+2 > iend) return
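/* Worked example (illustrative): the sequence-count byte parsed above uses
 * a variable-length encoding. A first byte of 0..0x7F is the count itself;
 * 0x80..0xFE encodes the count on 2 bytes as ((b0-0x80)<<8) + b1, e.g.
 * b0=0x81, b1=0x05 gives nbSeq = (1<<8)+5 = 261; and 0xFF means the count
 * follows as a 2-byte little-endian value plus LONGNBSEQ (0x7F00).
 */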
ERROR(srcSize_wrong); nbSeq = MEM_readLE16(ip) + LONGNBSEQ, ip+=2; } else { if (ip >= iend) return ERROR(srcSize_wrong); nbSeq = ((nbSeq-0x80)<<8) + *ip++; } } *nbSeqPtr = nbSeq; } /* FSE table descriptors */ if (ip+4 > iend) return ERROR(srcSize_wrong); /* minimum possible size */ { symbolEncodingType_e const LLtype = (symbolEncodingType_e)(*ip >> 6); symbolEncodingType_e const OFtype = (symbolEncodingType_e)((*ip >> 4) & 3); symbolEncodingType_e const MLtype = (symbolEncodingType_e)((*ip >> 2) & 3); ip++; /* Build DTables */ { size_t const llhSize = ZSTD_buildSeqTable(dctx->entropy.LLTable, &dctx->LLTptr, LLtype, MaxLL, LLFSELog, ip, iend-ip, LL_defaultDTable, dctx->fseEntropy); if (ZSTD_isError(llhSize)) return ERROR(corruption_detected); ip += llhSize; } { size_t const ofhSize = ZSTD_buildSeqTable(dctx->entropy.OFTable, &dctx->OFTptr, OFtype, MaxOff, OffFSELog, ip, iend-ip, OF_defaultDTable, dctx->fseEntropy); if (ZSTD_isError(ofhSize)) return ERROR(corruption_detected); ip += ofhSize; } { size_t const mlhSize = ZSTD_buildSeqTable(dctx->entropy.MLTable, &dctx->MLTptr, MLtype, MaxML, MLFSELog, ip, iend-ip, ML_defaultDTable, dctx->fseEntropy); if (ZSTD_isError(mlhSize)) return ERROR(corruption_detected); ip += mlhSize; } } return ip-istart; } typedef struct { size_t litLength; size_t matchLength; size_t offset; const BYTE* match; } seq_t; typedef struct { BIT_DStream_t DStream; FSE_DState_t stateLL; FSE_DState_t stateOffb; FSE_DState_t stateML; size_t prevOffset[ZSTD_REP_NUM]; const BYTE* base; size_t pos; uPtrDiff gotoDict; } seqState_t; FORCE_NOINLINE size_t ZSTD_execSequenceLast7(BYTE* op, BYTE* const oend, seq_t sequence, const BYTE** litPtr, const BYTE* const litLimit, const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd) { BYTE* const oLitEnd = op + sequence.litLength; size_t const sequenceLength = sequence.litLength + sequence.matchLength; BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */ BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH; const BYTE* const iLitEnd = *litPtr + sequence.litLength; const BYTE* match = oLitEnd - sequence.offset; /* check */ if (oMatchEnd>oend) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */ if (iLitEnd > litLimit) return ERROR(corruption_detected); /* over-read beyond lit buffer */ if (oLitEnd <= oend_w) return ERROR(GENERIC); /* Precondition */ /* copy literals */ if (op < oend_w) { ZSTD_wildcopy(op, *litPtr, oend_w - op); *litPtr += oend_w - op; op = oend_w; } while (op < oLitEnd) *op++ = *(*litPtr)++; /* copy Match */ if (sequence.offset > (size_t)(oLitEnd - base)) { /* offset beyond prefix */ if (sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected); match = dictEnd - (base-match); if (match + sequence.matchLength <= dictEnd) { memmove(oLitEnd, match, sequence.matchLength); return sequenceLength; } /* span extDict & currentPrefixSegment */ { size_t const length1 = dictEnd - match; memmove(oLitEnd, match, length1); op = oLitEnd + length1; sequence.matchLength -= length1; match = base; } } while (op < oMatchEnd) *op++ = *match++; return sequenceLength; } typedef enum { ZSTD_lo_isRegularOffset, ZSTD_lo_isLongOffset=1 } ZSTD_longOffset_e; /* We need to add at most (ZSTD_WINDOWLOG_MAX_32 - 1) bits to read the maximum * offset bits. But we can only read at most (STREAM_ACCUMULATOR_MIN_32 - 1) * bits before reloading. 
This value is the maximum number of bits we read
 * after reloading when we are decoding long offsets. */
#define LONG_OFFSETS_MAX_EXTRA_BITS_32 \
    (ZSTD_WINDOWLOG_MAX_32 > STREAM_ACCUMULATOR_MIN_32 \
        ? ZSTD_WINDOWLOG_MAX_32 - STREAM_ACCUMULATOR_MIN_32 \
        : 0)
static seq_t ZSTD_decodeSequence(seqState_t* seqState, const ZSTD_longOffset_e longOffsets)
{ seq_t seq;
  U32 const llCode = FSE_peekSymbol(&seqState->stateLL);
  U32 const mlCode = FSE_peekSymbol(&seqState->stateML);
  U32 const ofCode = FSE_peekSymbol(&seqState->stateOffb); /* <= MaxOff, by table construction */
  U32 const llBits = LL_bits[llCode];
  U32 const mlBits = ML_bits[mlCode];
  U32 const ofBits = ofCode;
  U32 const totalBits = llBits+mlBits+ofBits;
  static const U32 LL_base[MaxLL+1] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 20, 22, 24, 28, 32, 40, 48, 64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, 0x4000, 0x8000, 0x10000 };
  static const U32 ML_base[MaxML+1] = { 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 37, 39, 41, 43, 47, 51, 59, 67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803, 0x1003, 0x2003, 0x4003, 0x8003, 0x10003 };
  static const U32 OF_base[MaxOff+1] = { 0, 1, 1, 5, 0xD, 0x1D, 0x3D, 0x7D, 0xFD, 0x1FD, 0x3FD, 0x7FD, 0xFFD, 0x1FFD, 0x3FFD, 0x7FFD, 0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD, 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD, 0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD, 0x1FFFFFFD, 0x3FFFFFFD, 0x7FFFFFFD };
  /* sequence */
  { size_t offset;
    if (!ofCode) offset = 0;
    else {
        ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
        ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
        assert(ofBits <= MaxOff);
        if (MEM_32bits() && longOffsets) {
            U32 const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN_32-1);
            offset = OF_base[ofCode] + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
            if (MEM_32bits() || extraBits) BIT_reloadDStream(&seqState->DStream);
            if (extraBits) offset += BIT_readBitsFast(&seqState->DStream, extraBits);
        } else {
            offset = OF_base[ofCode] + BIT_readBitsFast(&seqState->DStream, ofBits); /* <= (ZSTD_WINDOWLOG_MAX-1) bits */
            if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
        } }
    if (ofCode <= 1) {
        offset += (llCode==0);
        if (offset) {
            size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
            temp += !temp; /* 0 is not valid; input is corrupted; force offset to 1 */
            if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
            seqState->prevOffset[1] = seqState->prevOffset[0];
            seqState->prevOffset[0] = offset = temp;
        } else { offset = seqState->prevOffset[0]; }
    } else {
        seqState->prevOffset[2] = seqState->prevOffset[1];
        seqState->prevOffset[1] = seqState->prevOffset[0];
        seqState->prevOffset[0] = offset; }
    seq.offset = offset; }
  seq.matchLength = ML_base[mlCode] + ((mlCode>31) ? BIT_readBitsFast(&seqState->DStream, mlBits) : 0); /* <= 16 bits */
  if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32)) BIT_reloadDStream(&seqState->DStream);
  if (MEM_64bits() && (totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog))) BIT_reloadDStream(&seqState->DStream);
  /* Verify that there are enough bits to read the rest of the data in 64-bit mode. */
  ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
  seq.litLength = LL_base[llCode] + ((llCode>15) ?
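/* Repcode note (illustrative summary of the ofCode <= 1 branch above, per
 * the zstd format): for these codes the decoded value selects a recent
 * offset instead of encoding one directly. After the (llCode==0)
 * adjustment, the final value maps as : 0 selects prevOffset[0],
 * 1 selects prevOffset[1], 2 selects prevOffset[2], and 3 means
 * prevOffset[0]-1; values 1..3 also promote the selected offset to the
 * front of the 3-entry history.
 */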
BIT_readBitsFast(&seqState->DStream, llBits) : 0); /* <= 16 bits */ if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream); DEBUGLOG(6, "seq: litL=%u, matchL=%u, offset=%u", (U32)seq.litLength, (U32)seq.matchLength, (U32)seq.offset); /* ANS state update */ FSE_updateState(&seqState->stateLL, &seqState->DStream); /* <= 9 bits */ FSE_updateState(&seqState->stateML, &seqState->DStream); /* <= 9 bits */ if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream); /* <= 18 bits */ FSE_updateState(&seqState->stateOffb, &seqState->DStream); /* <= 8 bits */ return seq; } HINT_INLINE size_t ZSTD_execSequence(BYTE* op, BYTE* const oend, seq_t sequence, const BYTE** litPtr, const BYTE* const litLimit, const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd) { BYTE* const oLitEnd = op + sequence.litLength; size_t const sequenceLength = sequence.litLength + sequence.matchLength; BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */ BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH; const BYTE* const iLitEnd = *litPtr + sequence.litLength; const BYTE* match = oLitEnd - sequence.offset; /* check */ if (oMatchEnd>oend) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */ if (iLitEnd > litLimit) return ERROR(corruption_detected); /* over-read beyond lit buffer */ if (oLitEnd>oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, base, vBase, dictEnd); /* copy Literals */ ZSTD_copy8(op, *litPtr); if (sequence.litLength > 8) ZSTD_wildcopy(op+8, (*litPtr)+8, sequence.litLength - 8); /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */ op = oLitEnd; *litPtr = iLitEnd; /* update for next sequence */ /* copy Match */ if (sequence.offset > (size_t)(oLitEnd - base)) { /* offset beyond prefix -> go into extDict */ if (sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected); match = dictEnd + (match - base); if (match + sequence.matchLength <= dictEnd) { memmove(oLitEnd, match, sequence.matchLength); return sequenceLength; } /* span extDict & currentPrefixSegment */ { size_t const length1 = dictEnd - match; memmove(oLitEnd, match, length1); op = oLitEnd + length1; sequence.matchLength -= length1; match = base; if (op > oend_w || sequence.matchLength < MINMATCH) { U32 i; for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i]; return sequenceLength; } } } /* Requirement: op <= oend_w && sequence.matchLength >= MINMATCH */ /* match within prefix */ if (sequence.offset < 8) { /* close range match, overlap */ static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 }; /* added */ static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 }; /* subtracted */ int const sub2 = dec64table[sequence.offset]; op[0] = match[0]; op[1] = match[1]; op[2] = match[2]; op[3] = match[3]; match += dec32table[sequence.offset]; ZSTD_copy4(op+4, match); match -= sub2; } else { ZSTD_copy8(op, match); } op += 8; match += 8; if (oMatchEnd > oend-(16-MINMATCH)) { if (op < oend_w) { ZSTD_wildcopy(op, match, oend_w - op); match += oend_w - op; op = oend_w; } while (op < oMatchEnd) *op++ = *match++; } else { ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8); /* works even if matchLength < 8 */ } return sequenceLength; } static size_t ZSTD_decompressSequences( ZSTD_DCtx* dctx, void* dst, size_t maxDstSize, const void* seqStart, size_t seqSize, const ZSTD_longOffset_e isLongOffset) { const BYTE* ip = (const BYTE*)seqStart; const BYTE* const iend 
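/* Worked example (illustrative): the dec32table/dec64table trick in
 * ZSTD_execSequence above handles overlapping matches. For offset == 2
 * with match pattern "ab" : the four byte-wise copies write "abab", then
 * match += dec32table[2] (= 2) lets ZSTD_copy4 append "abab" again; after
 * match -= dec64table[2] (= 8) and the common op += 8; match += 8; the
 * distance op - match becomes 8, so the remaining bytes can be bulk-copied
 * 8 at a time without overlap hazards.
 */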
= ip + seqSize;
BYTE* const ostart = (BYTE* const)dst;
BYTE* const oend = ostart + maxDstSize;
BYTE* op = ostart;
const BYTE* litPtr = dctx->litPtr;
const BYTE* const litEnd = litPtr + dctx->litSize;
const BYTE* const base = (const BYTE*) (dctx->base);
const BYTE* const vBase = (const BYTE*) (dctx->vBase);
const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
int nbSeq;
DEBUGLOG(5, "ZSTD_decompressSequences");
/* Build Decoding Tables */
{ size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, seqSize);
  DEBUGLOG(5, "ZSTD_decodeSeqHeaders: size=%u, nbSeq=%i", (U32)seqHSize, nbSeq);
  if (ZSTD_isError(seqHSize)) return seqHSize;
  ip += seqHSize; }
/* Regen sequences */
if (nbSeq) {
    seqState_t seqState;
    dctx->fseEntropy = 1;
    { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
    CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend-ip), corruption_detected);
    FSE_initDState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
    FSE_initDState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
    FSE_initDState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
    for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && nbSeq ; ) {
        nbSeq--;
        { seq_t const sequence = ZSTD_decodeSequence(&seqState, isLongOffset);
          size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litEnd, base, vBase, dictEnd);
          DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
          if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
          op += oneSeqSize; } }
    /* check if reached exact end */
    DEBUGLOG(5, "after decode loop, remaining nbSeq : %i", nbSeq);
    if (nbSeq) return ERROR(corruption_detected);
    /* save reps for next block */
    { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); } }
/* last literal segment */
{ size_t const lastLLSize = litEnd - litPtr;
  if (lastLLSize > (size_t)(oend-op)) return ERROR(dstSize_tooSmall);
  memcpy(op, litPtr, lastLLSize);
  op += lastLLSize; }
return op-ostart; }
HINT_INLINE seq_t ZSTD_decodeSequenceLong(seqState_t* seqState, ZSTD_longOffset_e const longOffsets)
{ seq_t seq;
  U32 const llCode = FSE_peekSymbol(&seqState->stateLL);
  U32 const mlCode = FSE_peekSymbol(&seqState->stateML);
  U32 const ofCode = FSE_peekSymbol(&seqState->stateOffb); /* <= MaxOff, by table construction */
  U32 const llBits = LL_bits[llCode];
  U32 const mlBits = ML_bits[mlCode];
  U32 const ofBits = ofCode;
  U32 const totalBits = llBits+mlBits+ofBits;
  static const U32 LL_base[MaxLL+1] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 20, 22, 24, 28, 32, 40, 48, 64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, 0x4000, 0x8000, 0x10000 };
  static const U32 ML_base[MaxML+1] = { 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 37, 39, 41, 43, 47, 51, 59, 67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803, 0x1003, 0x2003, 0x4003, 0x8003, 0x10003 };
  static const U32 OF_base[MaxOff+1] = { 0, 1, 1, 5, 0xD, 0x1D, 0x3D, 0x7D, 0xFD, 0x1FD, 0x3FD, 0x7FD, 0xFFD, 0x1FFD, 0x3FFD, 0x7FFD, 0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD, 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD, 0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD, 0x1FFFFFFD, 0x3FFFFFFD, 0x7FFFFFFD };
  /* sequence */
  { size_t offset;
    if (!ofCode) offset = 0;
    else {
        ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
        ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
        assert(ofBits <= MaxOff);
        if (MEM_32bits() && longOffsets) {
            U32 const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN_32-1);
            offset = OF_base[ofCode] + (BIT_readBitsFast(&seqState->DStream, ofBits
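/* Pipeline sketch (illustrative): the "long" variants exist so that the
 * match address computed in ZSTD_decodeSequenceLong can be prefetched well
 * before it is consumed. ZSTD_decompressSequencesLong further below keeps
 * a small ring of decoded sequences and runs roughly :
 *
 *     decode s[0..3], issuing PREFETCH for each match source;
 *     then, per step : decode s[n] and prefetch it, execute s[n-4];
 *     finally execute the last sequences left in the ring.
 */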
- extraBits) << extraBits);
            if (MEM_32bits() || extraBits) BIT_reloadDStream(&seqState->DStream);
            if (extraBits) offset += BIT_readBitsFast(&seqState->DStream, extraBits);
        } else {
            offset = OF_base[ofCode] + BIT_readBitsFast(&seqState->DStream, ofBits); /* <= (ZSTD_WINDOWLOG_MAX-1) bits */
            if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
        } }
    if (ofCode <= 1) {
        offset += (llCode==0);
        if (offset) {
            size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
            temp += !temp; /* 0 is not valid; input is corrupted; force offset to 1 */
            if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
            seqState->prevOffset[1] = seqState->prevOffset[0];
            seqState->prevOffset[0] = offset = temp;
        } else { offset = seqState->prevOffset[0]; }
    } else {
        seqState->prevOffset[2] = seqState->prevOffset[1];
        seqState->prevOffset[1] = seqState->prevOffset[0];
        seqState->prevOffset[0] = offset; }
    seq.offset = offset; }
  seq.matchLength = ML_base[mlCode] + ((mlCode>31) ? BIT_readBitsFast(&seqState->DStream, mlBits) : 0); /* <= 16 bits */
  if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32)) BIT_reloadDStream(&seqState->DStream);
  if (MEM_64bits() && (totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog))) BIT_reloadDStream(&seqState->DStream);
  /* Verify that there are enough bits to read the rest of the data in 64-bit mode. */
  ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
  seq.litLength = LL_base[llCode] + ((llCode>15) ? BIT_readBitsFast(&seqState->DStream, llBits) : 0); /* <= 16 bits */
  if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
  { size_t const pos = seqState->pos + seq.litLength;
    seq.match = seqState->base + pos - seq.offset; /* single memory segment */
    if (seq.offset > pos) seq.match += seqState->gotoDict; /* separate memory segment */
    seqState->pos = pos + seq.matchLength; }
  /* ANS state update */
  FSE_updateState(&seqState->stateLL, &seqState->DStream); /* <= 9 bits */
  FSE_updateState(&seqState->stateML, &seqState->DStream); /* <= 9 bits */
  if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream); /* <= 18 bits */
  FSE_updateState(&seqState->stateOffb, &seqState->DStream); /* <= 8 bits */
  return seq; }
HINT_INLINE size_t ZSTD_execSequenceLong(BYTE* op, BYTE* const oend, seq_t sequence, const BYTE** litPtr, const BYTE* const litLimit, const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd)
{ BYTE* const oLitEnd = op + sequence.litLength;
  size_t const sequenceLength = sequence.litLength + sequence.matchLength;
  BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */
  BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
  const BYTE* const iLitEnd = *litPtr + sequence.litLength;
  const BYTE* match = sequence.match;
  /* check */
  if (oMatchEnd>oend) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
  if (iLitEnd > litLimit) return ERROR(corruption_detected); /* over-read beyond lit buffer */
  if (oLitEnd>oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, base, vBase, dictEnd);
  /* copy Literals */
  ZSTD_copy8(op, *litPtr);
  if (sequence.litLength > 8) ZSTD_wildcopy(op+8, (*litPtr)+8, sequence.litLength - 8); /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
  op = oLitEnd;
  *litPtr = iLitEnd; /* update for next sequence */
  /* copy Match */
  if (sequence.offset > (size_t)(oLitEnd - base)) { /* offset beyond prefix */
      if
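/* Worked example (illustrative): the gotoDict adjustment above maps match
 * positions that fall before the current prefix into the dictionary
 * segment. With pos = 100 and offset = 130, base + pos - offset would be
 * base - 30; adding gotoDict (= dictEnd - base) relocates it to
 * dictEnd - 30, i.e. 30 bytes before the end of the dictionary.
 */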
(sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected);
      if (match + sequence.matchLength <= dictEnd) {
          memmove(oLitEnd, match, sequence.matchLength);
          return sequenceLength; }
      /* span extDict & currentPrefixSegment */
      { size_t const length1 = dictEnd - match;
        memmove(oLitEnd, match, length1);
        op = oLitEnd + length1;
        sequence.matchLength -= length1;
        match = base;
        if (op > oend_w || sequence.matchLength < MINMATCH) {
            U32 i;
            for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
            return sequenceLength; } } }
  assert(op <= oend_w);
  assert(sequence.matchLength >= MINMATCH);
  /* match within prefix */
  if (sequence.offset < 8) {
      /* close range match, overlap */
      static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 }; /* added */
      static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 }; /* subtracted */
      int const sub2 = dec64table[sequence.offset];
      op[0] = match[0]; op[1] = match[1]; op[2] = match[2]; op[3] = match[3];
      match += dec32table[sequence.offset];
      ZSTD_copy4(op+4, match);
      match -= sub2;
  } else {
      ZSTD_copy8(op, match); }
  op += 8; match += 8;
  if (oMatchEnd > oend-(16-MINMATCH)) {
      if (op < oend_w) {
          ZSTD_wildcopy(op, match, oend_w - op);
          match += oend_w - op;
          op = oend_w; }
      while (op < oMatchEnd) *op++ = *match++;
  } else {
      ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8); /* works even if matchLength < 8 */ }
  return sequenceLength; }
static size_t ZSTD_decompressSequencesLong(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize, const void* seqStart, size_t seqSize, const ZSTD_longOffset_e isLongOffset)
{ const BYTE* ip = (const BYTE*)seqStart;
  const BYTE* const iend = ip + seqSize;
  BYTE* const ostart = (BYTE* const)dst;
  BYTE* const oend = ostart + maxDstSize;
  BYTE* op = ostart;
  const BYTE* litPtr = dctx->litPtr;
  const BYTE* const litEnd = litPtr + dctx->litSize;
  const BYTE* const base = (const BYTE*) (dctx->base);
  const BYTE* const vBase = (const BYTE*) (dctx->vBase);
  const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
  int nbSeq;
  /* Build Decoding Tables */
  { size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, seqSize);
    if (ZSTD_isError(seqHSize)) return seqHSize;
    ip += seqHSize; }
  /* Regen sequences */
  if (nbSeq) {
#define STORED_SEQS 4
#define STOSEQ_MASK (STORED_SEQS-1)
#define ADVANCED_SEQS 4
      seq_t sequences[STORED_SEQS];
      int const seqAdvance = MIN(nbSeq, ADVANCED_SEQS);
      seqState_t seqState;
      int seqNb;
      dctx->fseEntropy = 1;
      { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
      seqState.base = base;
      seqState.pos = (size_t)(op-base);
      seqState.gotoDict = (uPtrDiff)dictEnd - (uPtrDiff)base; /* cast to avoid undefined behaviour */
      CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend-ip), corruption_detected);
      FSE_initDState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
      FSE_initDState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
      FSE_initDState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
      /* prepare in advance */
      for (seqNb=0; (BIT_reloadDStream(&seqState.DStream) <= BIT_DStream_completed) && seqNb<seqAdvance; seqNb++) {
          sequences[seqNb] = ZSTD_decodeSequenceLong(&seqState, isLongOffset); }
      if (seqNb<seqAdvance) return ERROR(corruption_detected);
      /* decode and decompress */
      for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && seqNb<nbSeq ; seqNb++) {
          seq_t const sequence = ZSTD_decodeSequenceLong(&seqState, isLongOffset);
          size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[(seqNb-ADVANCED_SEQS) & STOSEQ_MASK], &litPtr, litEnd, base, vBase, dictEnd);
          if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
          PREFETCH(sequence.match);
          sequences[seqNb&STOSEQ_MASK] = sequence;
          op += oneSeqSize; }
      if (seqNb<nbSeq) return ERROR(corruption_detected);
      /* finish queue */
      seqNb -= seqAdvance;
      for ( ; seqNb<nbSeq ; seqNb++) {
          size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[seqNb&STOSEQ_MASK], &litPtr, litEnd, base, vBase, dictEnd);
          if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
          op += oneSeqSize; }
      /* save reps for next block */
      { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); } }
  /* last literal segment */
  { size_t const lastLLSize = litEnd - litPtr;
    if (lastLLSize > (size_t)(oend-op)) return ERROR(dstSize_tooSmall);
    memcpy(op, litPtr, lastLLSize);
    op += lastLLSize; }
  return op-ostart; }
static size_t ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const int frame)
{ /* blockType == blockCompressed */
  const BYTE* ip = (const BYTE*)src;
  /* isLongOffset must be true if there are long offsets.
* Offsets are long if they are larger than 2^STREAM_ACCUMULATOR_MIN. * We don't expect that to be the case in 64-bit mode. * If we are in block mode we don't know the window size, so we have to be * conservative. */ ZSTD_longOffset_e const isLongOffset = (ZSTD_longOffset_e)(MEM_32bits() && (!frame || dctx->fParams.windowSize > (1ULL << STREAM_ACCUMULATOR_MIN))); /* windowSize could be any value at this point, since it is only validated * in the streaming API. */ DEBUGLOG(5, "ZSTD_decompressBlock_internal (size : %u)", (U32)srcSize); if (srcSize >= ZSTD_BLOCKSIZE_MAX) return ERROR(srcSize_wrong); /* Decode literals section */ { size_t const litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize); DEBUGLOG(5, "ZSTD_decodeLiteralsBlock : %u", (U32)litCSize); if (ZSTD_isError(litCSize)) return litCSize; ip += litCSize; srcSize -= litCSize; } if (frame && dctx->fParams.windowSize > (1<<23)) return ZSTD_decompressSequencesLong(dctx, dst, dstCapacity, ip, srcSize, isLongOffset); return ZSTD_decompressSequences(dctx, dst, dstCapacity, ip, srcSize, isLongOffset); } static void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst) { if (dst != dctx->previousDstEnd) { /* not contiguous */ dctx->dictEnd = dctx->previousDstEnd; dctx->vBase = (const char*)dst - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->base)); dctx->base = dst; dctx->previousDstEnd = dst; } } size_t ZSTD_decompressBlock(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { size_t dSize; ZSTD_checkContinuity(dctx, dst); dSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, /* frame */ 0); dctx->previousDstEnd = (char*)dst + dSize; return dSize; } /** ZSTD_insertBlock() : insert `src` block into `dctx` history. Useful to track uncompressed blocks. 
*/ ZSTDLIB_API size_t ZSTD_insertBlock(ZSTD_DCtx* dctx, const void* blockStart, size_t blockSize) { ZSTD_checkContinuity(dctx, blockStart); dctx->previousDstEnd = (const char*)blockStart + blockSize; return blockSize; } static size_t ZSTD_generateNxBytes(void* dst, size_t dstCapacity, BYTE byte, size_t length) { if (length > dstCapacity) return ERROR(dstSize_tooSmall); memset(dst, byte, length); return length; } /** ZSTD_findFrameCompressedSize() : * compatible with legacy mode * `src` must point to the start of a ZSTD frame, ZSTD legacy frame, or skippable frame * `srcSize` must be at least as large as the frame contained * @return : the compressed size of the frame starting at `src` */ size_t ZSTD_findFrameCompressedSize(const void *src, size_t srcSize) { #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1) if (ZSTD_isLegacy(src, srcSize)) return ZSTD_findFrameCompressedSizeLegacy(src, srcSize); #endif if ( (srcSize >= ZSTD_skippableHeaderSize) && (MEM_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START ) { return ZSTD_skippableHeaderSize + MEM_readLE32((const BYTE*)src + ZSTD_frameIdSize); } else { const BYTE* ip = (const BYTE*)src; const BYTE* const ipstart = ip; size_t remainingSize = srcSize; ZSTD_frameHeader zfh; /* Extract Frame Header */ { size_t const ret = ZSTD_getFrameHeader(&zfh, src, srcSize); if (ZSTD_isError(ret)) return ret; if (ret > 0) return ERROR(srcSize_wrong); } ip += zfh.headerSize; remainingSize -= zfh.headerSize; /* Loop on each block */ while (1) { blockProperties_t blockProperties; size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties); if (ZSTD_isError(cBlockSize)) return cBlockSize; if (ZSTD_blockHeaderSize + cBlockSize > remainingSize) return ERROR(srcSize_wrong); ip += ZSTD_blockHeaderSize + cBlockSize; remainingSize -= ZSTD_blockHeaderSize + cBlockSize; if (blockProperties.lastBlock) break; } if (zfh.checksumFlag) { /* Final frame content checksum */ if (remainingSize < 4) return ERROR(srcSize_wrong); ip += 4; remainingSize -= 4; } return ip - ipstart; } } /*! 
ZSTD_decompressFrame() : * @dctx must be properly initialized */ static size_t ZSTD_decompressFrame(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void** srcPtr, size_t *srcSizePtr) { const BYTE* ip = (const BYTE*)(*srcPtr); BYTE* const ostart = (BYTE* const)dst; BYTE* const oend = ostart + dstCapacity; BYTE* op = ostart; size_t remainingSize = *srcSizePtr; /* check */ if (remainingSize < ZSTD_frameHeaderSize_min+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); /* Frame Header */ { size_t const frameHeaderSize = ZSTD_frameHeaderSize(ip, ZSTD_frameHeaderSize_prefix); if (ZSTD_isError(frameHeaderSize)) return frameHeaderSize; if (remainingSize < frameHeaderSize+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); CHECK_F( ZSTD_decodeFrameHeader(dctx, ip, frameHeaderSize) ); ip += frameHeaderSize; remainingSize -= frameHeaderSize; } /* Loop on each block */ while (1) { size_t decodedSize; blockProperties_t blockProperties; size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties); if (ZSTD_isError(cBlockSize)) return cBlockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); switch(blockProperties.blockType) { case bt_compressed: decodedSize = ZSTD_decompressBlock_internal(dctx, op, oend-op, ip, cBlockSize, /* frame */ 1); break; case bt_raw : decodedSize = ZSTD_copyRawBlock(op, oend-op, ip, cBlockSize); break; case bt_rle : decodedSize = ZSTD_generateNxBytes(op, oend-op, *ip, blockProperties.origSize); break; case bt_reserved : default: return ERROR(corruption_detected); } if (ZSTD_isError(decodedSize)) return decodedSize; if (dctx->fParams.checksumFlag) XXH64_update(&dctx->xxhState, op, decodedSize); op += decodedSize; ip += cBlockSize; remainingSize -= cBlockSize; if (blockProperties.lastBlock) break; } if (dctx->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN) { if ((U64)(op-ostart) != dctx->fParams.frameContentSize) { return ERROR(corruption_detected); } } if (dctx->fParams.checksumFlag) { /* Frame content checksum verification */ U32 const checkCalc = (U32)XXH64_digest(&dctx->xxhState); U32 checkRead; if (remainingSize<4) return ERROR(checksum_wrong); checkRead = MEM_readLE32(ip); if (checkRead != checkCalc) return ERROR(checksum_wrong); ip += 4; remainingSize -= 4; } /* Allow caller to get size read */ *srcPtr = ip; *srcSizePtr = remainingSize; return op-ostart; } static const void* ZSTD_DDictDictContent(const ZSTD_DDict* ddict); static size_t ZSTD_DDictDictSize(const ZSTD_DDict* ddict); static size_t ZSTD_decompressMultiFrame(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const void* dict, size_t dictSize, const ZSTD_DDict* ddict) { void* const dststart = dst; assert(dict==NULL || ddict==NULL); /* either dict or ddict set, not both */ if (ddict) { dict = ZSTD_DDictDictContent(ddict); dictSize = ZSTD_DDictDictSize(ddict); } while (srcSize >= ZSTD_frameHeaderSize_prefix) { U32 magicNumber; #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1) if (ZSTD_isLegacy(src, srcSize)) { size_t decodedSize; size_t const frameSize = ZSTD_findFrameCompressedSizeLegacy(src, srcSize); if (ZSTD_isError(frameSize)) return frameSize; /* legacy support is not compatible with static dctx */ if (dctx->staticSize) return ERROR(memory_allocation); decodedSize = ZSTD_decompressLegacy(dst, dstCapacity, src, frameSize, dict, dictSize); dst = (BYTE*)dst + decodedSize; dstCapacity -= decodedSize; src = (const BYTE*)src + frameSize; srcSize -= frameSize; 
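/* Usage sketch (illustrative): ZSTD_decompressMultiFrame() is what lets
 * ZSTD_decompress() regenerate several frames laid back-to-back in one
 * buffer, e.g. the result of `cat a.zst b.zst > ab.zst` (`ab`/`abSize` are
 * hypothetical caller-side names):
 *
 *     size_t const rSize = ZSTD_decompress(dst, dstCapacity, ab, abSize);
 *     // if ZSTD_isError(rSize) is 0, rSize is the sum of both regenerated
 *     // contents, provided dstCapacity was large enough for the total
 */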
continue; } #endif magicNumber = MEM_readLE32(src); DEBUGLOG(4, "reading magic number %08X (expecting %08X)", (U32)magicNumber, (U32)ZSTD_MAGICNUMBER); if (magicNumber != ZSTD_MAGICNUMBER) { if ((magicNumber & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) { size_t skippableSize; if (srcSize < ZSTD_skippableHeaderSize) return ERROR(srcSize_wrong); skippableSize = MEM_readLE32((const BYTE*)src + ZSTD_frameIdSize) + ZSTD_skippableHeaderSize; if (srcSize < skippableSize) return ERROR(srcSize_wrong); src = (const BYTE *)src + skippableSize; srcSize -= skippableSize; continue; } return ERROR(prefix_unknown); } if (ddict) { /* we were called from ZSTD_decompress_usingDDict */ CHECK_F(ZSTD_decompressBegin_usingDDict(dctx, ddict)); } else { /* this will initialize correctly with no dict if dict == NULL, so * use this in all cases but ddict */ CHECK_F(ZSTD_decompressBegin_usingDict(dctx, dict, dictSize)); } ZSTD_checkContinuity(dctx, dst); { const size_t res = ZSTD_decompressFrame(dctx, dst, dstCapacity, &src, &srcSize); if (ZSTD_isError(res)) return res; /* no need to bound check, ZSTD_decompressFrame already has */ dst = (BYTE*)dst + res; dstCapacity -= res; } } /* while (srcSize >= ZSTD_frameHeaderSize_prefix) */ if (srcSize) return ERROR(srcSize_wrong); /* input not entirely consumed */ return (BYTE*)dst - (BYTE*)dststart; } size_t ZSTD_decompress_usingDict(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const void* dict, size_t dictSize) { return ZSTD_decompressMultiFrame(dctx, dst, dstCapacity, src, srcSize, dict, dictSize, NULL); } size_t ZSTD_decompressDCtx(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { return ZSTD_decompress_usingDict(dctx, dst, dstCapacity, src, srcSize, NULL, 0); } size_t ZSTD_decompress(void* dst, size_t dstCapacity, const void* src, size_t srcSize) { #if defined(ZSTD_HEAPMODE) && (ZSTD_HEAPMODE>=1) size_t regenSize; ZSTD_DCtx* const dctx = ZSTD_createDCtx(); if (dctx==NULL) return ERROR(memory_allocation); regenSize = ZSTD_decompressDCtx(dctx, dst, dstCapacity, src, srcSize); ZSTD_freeDCtx(dctx); return regenSize; #else /* stack mode */ ZSTD_DCtx dctx; return ZSTD_decompressDCtx(&dctx, dst, dstCapacity, src, srcSize); #endif } /*-************************************** * Advanced Streaming Decompression API * Bufferless and synchronous ****************************************/ size_t ZSTD_nextSrcSizeToDecompress(ZSTD_DCtx* dctx) { return dctx->expected; } ZSTD_nextInputType_e ZSTD_nextInputType(ZSTD_DCtx* dctx) { switch(dctx->stage) { default: /* should not happen */ assert(0); case ZSTDds_getFrameHeaderSize: case ZSTDds_decodeFrameHeader: return ZSTDnit_frameHeader; case ZSTDds_decodeBlockHeader: return ZSTDnit_blockHeader; case ZSTDds_decompressBlock: return ZSTDnit_block; case ZSTDds_decompressLastBlock: return ZSTDnit_lastBlock; case ZSTDds_checkChecksum: return ZSTDnit_checksum; case ZSTDds_decodeSkippableHeader: case ZSTDds_skipFrame: return ZSTDnit_skippableFrame; } } static int ZSTD_isSkipFrame(ZSTD_DCtx* dctx) { return dctx->stage == ZSTDds_skipFrame; } /** ZSTD_decompressContinue() : * srcSize : must be the exact nb of bytes expected (see ZSTD_nextSrcSizeToDecompress()) * @return : nb of bytes generated into `dst` (necessarily <= `dstCapacity) * or an error code, which can be tested using ZSTD_isError() */ size_t ZSTD_decompressContinue(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { DEBUGLOG(5, "ZSTD_decompressContinue"); /* Sanity check */ if (srcSize != 
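/* Usage sketch (illustrative) for the bufferless API around
 * ZSTD_decompressContinue() : `readExact` is a hypothetical caller-side
 * helper that fills a buffer with exactly n bytes.
 *
 *     ZSTD_decompressBegin(dctx);
 *     { size_t need;
 *       while ((need = ZSTD_nextSrcSizeToDecompress(dctx)) != 0) {
 *           readExact(in, inBuf, need);   // must supply exactly `need` bytes
 *           { size_t const got = ZSTD_decompressContinue(dctx, op, (size_t)(oend-op), inBuf, need);
 *             if (ZSTD_isError(got)) return got;
 *             op += got; } } }
 */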
dctx->expected) return ERROR(srcSize_wrong); /* not allowed */ if (dstCapacity) ZSTD_checkContinuity(dctx, dst); switch (dctx->stage) { case ZSTDds_getFrameHeaderSize : assert(src != NULL); if (dctx->format == ZSTD_f_zstd1) { /* allows header */ assert(srcSize >= ZSTD_frameIdSize); /* to read skippable magic number */ if ((MEM_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) { /* skippable frame */ memcpy(dctx->headerBuffer, src, srcSize); dctx->expected = ZSTD_skippableHeaderSize - srcSize; /* remaining to load to get full skippable frame header */ dctx->stage = ZSTDds_decodeSkippableHeader; return 0; } } dctx->headerSize = ZSTD_frameHeaderSize_internal(src, srcSize, dctx->format); if (ZSTD_isError(dctx->headerSize)) return dctx->headerSize; memcpy(dctx->headerBuffer, src, srcSize); dctx->expected = dctx->headerSize - srcSize; dctx->stage = ZSTDds_decodeFrameHeader; return 0; case ZSTDds_decodeFrameHeader: assert(src != NULL); memcpy(dctx->headerBuffer + (dctx->headerSize - srcSize), src, srcSize); CHECK_F(ZSTD_decodeFrameHeader(dctx, dctx->headerBuffer, dctx->headerSize)); dctx->expected = ZSTD_blockHeaderSize; dctx->stage = ZSTDds_decodeBlockHeader; return 0; case ZSTDds_decodeBlockHeader: { blockProperties_t bp; size_t const cBlockSize = ZSTD_getcBlockSize(src, ZSTD_blockHeaderSize, &bp); if (ZSTD_isError(cBlockSize)) return cBlockSize; dctx->expected = cBlockSize; dctx->bType = bp.blockType; dctx->rleSize = bp.origSize; if (cBlockSize) { dctx->stage = bp.lastBlock ? ZSTDds_decompressLastBlock : ZSTDds_decompressBlock; return 0; } /* empty block */ if (bp.lastBlock) { if (dctx->fParams.checksumFlag) { dctx->expected = 4; dctx->stage = ZSTDds_checkChecksum; } else { dctx->expected = 0; /* end of frame */ dctx->stage = ZSTDds_getFrameHeaderSize; } } else { dctx->expected = ZSTD_blockHeaderSize; /* jump to next header */ dctx->stage = ZSTDds_decodeBlockHeader; } return 0; } case ZSTDds_decompressLastBlock: case ZSTDds_decompressBlock: DEBUGLOG(5, "case ZSTDds_decompressBlock"); { size_t rSize; switch(dctx->bType) { case bt_compressed: DEBUGLOG(5, "case bt_compressed"); rSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, /* frame */ 1); break; case bt_raw : rSize = ZSTD_copyRawBlock(dst, dstCapacity, src, srcSize); break; case bt_rle : rSize = ZSTD_setRleBlock(dst, dstCapacity, src, srcSize, dctx->rleSize); break; case bt_reserved : /* should never happen */ default: return ERROR(corruption_detected); } if (ZSTD_isError(rSize)) return rSize; DEBUGLOG(5, "decoded size from block : %u", (U32)rSize); dctx->decodedSize += rSize; if (dctx->fParams.checksumFlag) XXH64_update(&dctx->xxhState, dst, rSize); if (dctx->stage == ZSTDds_decompressLastBlock) { /* end of frame */ DEBUGLOG(4, "decoded size from frame : %u", (U32)dctx->decodedSize); if (dctx->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN) { if (dctx->decodedSize != dctx->fParams.frameContentSize) { return ERROR(corruption_detected); } } if (dctx->fParams.checksumFlag) { /* another round for frame checksum */ dctx->expected = 4; dctx->stage = ZSTDds_checkChecksum; } else { dctx->expected = 0; /* ends here */ dctx->stage = ZSTDds_getFrameHeaderSize; } } else { dctx->stage = ZSTDds_decodeBlockHeader; dctx->expected = ZSTD_blockHeaderSize; dctx->previousDstEnd = (char*)dst + rSize; } return rSize; } case ZSTDds_checkChecksum: assert(srcSize == 4); /* guaranteed by dctx->expected */ { U32 const h32 = (U32)XXH64_digest(&dctx->xxhState); U32 const check32 = MEM_readLE32(src); DEBUGLOG(4, "checksum : 
calculated %08X :: %08X read", h32, check32); if (check32 != h32) return ERROR(checksum_wrong); dctx->expected = 0; dctx->stage = ZSTDds_getFrameHeaderSize; return 0; } case ZSTDds_decodeSkippableHeader: assert(src != NULL); assert(srcSize <= ZSTD_skippableHeaderSize); memcpy(dctx->headerBuffer + (ZSTD_skippableHeaderSize - srcSize), src, srcSize); /* complete skippable header */ dctx->expected = MEM_readLE32(dctx->headerBuffer + ZSTD_frameIdSize); /* note : dctx->expected can grow seriously large, beyond local buffer size */ dctx->stage = ZSTDds_skipFrame; return 0; case ZSTDds_skipFrame: dctx->expected = 0; dctx->stage = ZSTDds_getFrameHeaderSize; return 0; default: return ERROR(GENERIC); /* impossible */ } } static size_t ZSTD_refDictContent(ZSTD_DCtx* dctx, const void* dict, size_t dictSize) { dctx->dictEnd = dctx->previousDstEnd; dctx->vBase = (const char*)dict - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->base)); dctx->base = dict; dctx->previousDstEnd = (const char*)dict + dictSize; return 0; } /* ZSTD_loadEntropy() : * dict : must point at beginning of a valid zstd dictionary * @return : size of entropy tables read */ static size_t ZSTD_loadEntropy(ZSTD_entropyDTables_t* entropy, const void* const dict, size_t const dictSize) { const BYTE* dictPtr = (const BYTE*)dict; const BYTE* const dictEnd = dictPtr + dictSize; if (dictSize <= 8) return ERROR(dictionary_corrupted); dictPtr += 8; /* skip header = magic + dictID */ { size_t const hSize = HUF_readDTableX4_wksp( entropy->hufTable, dictPtr, dictEnd - dictPtr, entropy->workspace, sizeof(entropy->workspace)); if (HUF_isError(hSize)) return ERROR(dictionary_corrupted); dictPtr += hSize; } { short offcodeNCount[MaxOff+1]; U32 offcodeMaxValue = MaxOff, offcodeLog; size_t const offcodeHeaderSize = FSE_readNCount(offcodeNCount, &offcodeMaxValue, &offcodeLog, dictPtr, dictEnd-dictPtr); if (FSE_isError(offcodeHeaderSize)) return ERROR(dictionary_corrupted); if (offcodeLog > OffFSELog) return ERROR(dictionary_corrupted); CHECK_E(FSE_buildDTable(entropy->OFTable, offcodeNCount, offcodeMaxValue, offcodeLog), dictionary_corrupted); dictPtr += offcodeHeaderSize; } { short matchlengthNCount[MaxML+1]; unsigned matchlengthMaxValue = MaxML, matchlengthLog; size_t const matchlengthHeaderSize = FSE_readNCount(matchlengthNCount, &matchlengthMaxValue, &matchlengthLog, dictPtr, dictEnd-dictPtr); if (FSE_isError(matchlengthHeaderSize)) return ERROR(dictionary_corrupted); if (matchlengthLog > MLFSELog) return ERROR(dictionary_corrupted); CHECK_E(FSE_buildDTable(entropy->MLTable, matchlengthNCount, matchlengthMaxValue, matchlengthLog), dictionary_corrupted); dictPtr += matchlengthHeaderSize; } { short litlengthNCount[MaxLL+1]; unsigned litlengthMaxValue = MaxLL, litlengthLog; size_t const litlengthHeaderSize = FSE_readNCount(litlengthNCount, &litlengthMaxValue, &litlengthLog, dictPtr, dictEnd-dictPtr); if (FSE_isError(litlengthHeaderSize)) return ERROR(dictionary_corrupted); if (litlengthLog > LLFSELog) return ERROR(dictionary_corrupted); CHECK_E(FSE_buildDTable(entropy->LLTable, litlengthNCount, litlengthMaxValue, litlengthLog), dictionary_corrupted); dictPtr += litlengthHeaderSize; } if (dictPtr+12 > dictEnd) return ERROR(dictionary_corrupted); { int i; size_t const dictContentSize = (size_t)(dictEnd - (dictPtr+12)); for (i=0; i<3; i++) { U32 const rep = MEM_readLE32(dictPtr); dictPtr += 4; if (rep==0 || rep >= dictContentSize) return ERROR(dictionary_corrupted); entropy->rep[i] = rep; } } return dictPtr - (const BYTE*)dict; } static size_t 
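/* Layout note (illustrative): ZSTD_loadEntropy() above walks a formatted
 * zstd dictionary, which per the format is laid out as :
 *
 *     [ magic 0xEC30A437 (4) ][ dictID (4) ][ HUF literals table ]
 *     [ OF FSE table ][ ML FSE table ][ LL FSE table ]
 *     [ 3 x 4-byte repcodes ][ raw dictionary content ... ]
 *
 * which is why the function skips 8 header bytes, reads the tables, then
 * requires at least 12 further bytes for the repcodes.
 */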
ZSTD_decompress_insertDictionary(ZSTD_DCtx* dctx, const void* dict, size_t dictSize) { if (dictSize < 8) return ZSTD_refDictContent(dctx, dict, dictSize); { U32 const magic = MEM_readLE32(dict); if (magic != ZSTD_MAGIC_DICTIONARY) { return ZSTD_refDictContent(dctx, dict, dictSize); /* pure content mode */ } } dctx->dictID = MEM_readLE32((const char*)dict + ZSTD_frameIdSize); /* load entropy tables */ { size_t const eSize = ZSTD_loadEntropy(&dctx->entropy, dict, dictSize); if (ZSTD_isError(eSize)) return ERROR(dictionary_corrupted); dict = (const char*)dict + eSize; dictSize -= eSize; } dctx->litEntropy = dctx->fseEntropy = 1; /* reference dictionary content */ return ZSTD_refDictContent(dctx, dict, dictSize); } /* Note : this function cannot fail */ size_t ZSTD_decompressBegin(ZSTD_DCtx* dctx) { assert(dctx != NULL); dctx->expected = ZSTD_startingInputLength(dctx->format); /* dctx->format must be properly set */ dctx->stage = ZSTDds_getFrameHeaderSize; dctx->decodedSize = 0; dctx->previousDstEnd = NULL; dctx->base = NULL; dctx->vBase = NULL; dctx->dictEnd = NULL; dctx->entropy.hufTable[0] = (HUF_DTable)((HufLog)*0x1000001); /* cover both little and big endian */ dctx->litEntropy = dctx->fseEntropy = 0; dctx->dictID = 0; ZSTD_STATIC_ASSERT(sizeof(dctx->entropy.rep) == sizeof(repStartValue)); memcpy(dctx->entropy.rep, repStartValue, sizeof(repStartValue)); /* initial repcodes */ dctx->LLTptr = dctx->entropy.LLTable; dctx->MLTptr = dctx->entropy.MLTable; dctx->OFTptr = dctx->entropy.OFTable; dctx->HUFptr = dctx->entropy.hufTable; return 0; } size_t ZSTD_decompressBegin_usingDict(ZSTD_DCtx* dctx, const void* dict, size_t dictSize) { CHECK_F( ZSTD_decompressBegin(dctx) ); if (dict && dictSize) CHECK_E(ZSTD_decompress_insertDictionary(dctx, dict, dictSize), dictionary_corrupted); return 0; } /* ====== ZSTD_DDict ====== */ struct ZSTD_DDict_s { void* dictBuffer; const void* dictContent; size_t dictSize; ZSTD_entropyDTables_t entropy; U32 dictID; U32 entropyPresent; ZSTD_customMem cMem; }; /* typedef'd to ZSTD_DDict within "zstd.h" */ static const void* ZSTD_DDictDictContent(const ZSTD_DDict* ddict) { return ddict->dictContent; } static size_t ZSTD_DDictDictSize(const ZSTD_DDict* ddict) { return ddict->dictSize; } size_t ZSTD_decompressBegin_usingDDict(ZSTD_DCtx* dstDCtx, const ZSTD_DDict* ddict) { CHECK_F( ZSTD_decompressBegin(dstDCtx) ); if (ddict) { /* support begin on NULL */ dstDCtx->dictID = ddict->dictID; dstDCtx->base = ddict->dictContent; dstDCtx->vBase = ddict->dictContent; dstDCtx->dictEnd = (const BYTE*)ddict->dictContent + ddict->dictSize; dstDCtx->previousDstEnd = dstDCtx->dictEnd; if (ddict->entropyPresent) { dstDCtx->litEntropy = 1; dstDCtx->fseEntropy = 1; dstDCtx->LLTptr = ddict->entropy.LLTable; dstDCtx->MLTptr = ddict->entropy.MLTable; dstDCtx->OFTptr = ddict->entropy.OFTable; dstDCtx->HUFptr = ddict->entropy.hufTable; dstDCtx->entropy.rep[0] = ddict->entropy.rep[0]; dstDCtx->entropy.rep[1] = ddict->entropy.rep[1]; dstDCtx->entropy.rep[2] = ddict->entropy.rep[2]; } else { dstDCtx->litEntropy = 0; dstDCtx->fseEntropy = 0; } } return 0; } static size_t ZSTD_loadEntropy_inDDict(ZSTD_DDict* ddict) { ddict->dictID = 0; ddict->entropyPresent = 0; if (ddict->dictSize < 8) return 0; { U32 const magic = MEM_readLE32(ddict->dictContent); if (magic != ZSTD_MAGIC_DICTIONARY) return 0; /* pure content mode */ } ddict->dictID = MEM_readLE32((const char*)ddict->dictContent + ZSTD_frameIdSize); /* load entropy tables */ CHECK_E( ZSTD_loadEntropy(&ddict->entropy, ddict->dictContent, 
ddict->dictSize), dictionary_corrupted );
ddict->entropyPresent = 1;
return 0; }
static size_t ZSTD_initDDict_internal(ZSTD_DDict* ddict, const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod)
{ if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dict) || (!dictSize)) {
      ddict->dictBuffer = NULL;
      ddict->dictContent = dict;
  } else {
      void* const internalBuffer = ZSTD_malloc(dictSize, ddict->cMem);
      ddict->dictBuffer = internalBuffer;
      ddict->dictContent = internalBuffer;
      if (!internalBuffer) return ERROR(memory_allocation);
      memcpy(internalBuffer, dict, dictSize); }
  ddict->dictSize = dictSize;
  ddict->entropy.hufTable[0] = (HUF_DTable)((HufLog)*0x1000001); /* cover both little and big endian */
  /* parse dictionary content */
  CHECK_F( ZSTD_loadEntropy_inDDict(ddict) );
  return 0; }
ZSTD_DDict* ZSTD_createDDict_advanced(const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod, ZSTD_customMem customMem)
{ if (!customMem.customAlloc ^ !customMem.customFree) return NULL;
  { ZSTD_DDict* const ddict = (ZSTD_DDict*) ZSTD_malloc(sizeof(ZSTD_DDict), customMem);
    if (!ddict) return NULL;
    ddict->cMem = customMem;
    if (ZSTD_isError( ZSTD_initDDict_internal(ddict, dict, dictSize, dictLoadMethod) )) {
        ZSTD_freeDDict(ddict);
        return NULL; }
    return ddict; } }
/*! ZSTD_createDDict() :
 * Create a digested dictionary, to start decompression without startup delay.
 * `dict` content is copied inside DDict.
 * Consequently, `dict` can be released after `ZSTD_DDict` creation */
ZSTD_DDict* ZSTD_createDDict(const void* dict, size_t dictSize)
{ ZSTD_customMem const allocator = { NULL, NULL, NULL };
  return ZSTD_createDDict_advanced(dict, dictSize, ZSTD_dlm_byCopy, allocator); }
/*! ZSTD_createDDict_byReference() :
 * Create a digested dictionary, to start decompression without startup delay.
 * Dictionary content is simply referenced, it will be accessed during decompression.
 * Warning : dictBuffer must outlive DDict (DDict must be freed before dictBuffer) */
ZSTD_DDict* ZSTD_createDDict_byReference(const void* dictBuffer, size_t dictSize)
{ ZSTD_customMem const allocator = { NULL, NULL, NULL };
  return ZSTD_createDDict_advanced(dictBuffer, dictSize, ZSTD_dlm_byRef, allocator); }
ZSTD_DDict* ZSTD_initStaticDDict(void* workspace, size_t workspaceSize, const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod)
{ size_t const neededSpace = sizeof(ZSTD_DDict) + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
  ZSTD_DDict* const ddict = (ZSTD_DDict*)workspace;
  assert(workspace != NULL);
  assert(dict != NULL);
  if ((size_t)workspace & 7) return NULL; /* 8-aligned */
  if (workspaceSize < neededSpace) return NULL;
  if (dictLoadMethod == ZSTD_dlm_byCopy) {
      memcpy(ddict+1, dict, dictSize); /* local copy */
      dict = ddict+1; }
  if (ZSTD_isError( ZSTD_initDDict_internal(ddict, dict, dictSize, ZSTD_dlm_byRef) )) return NULL;
  return ddict; }
size_t ZSTD_freeDDict(ZSTD_DDict* ddict)
{ if (ddict==NULL) return 0; /* support free on NULL */
  { ZSTD_customMem const cMem = ddict->cMem;
    ZSTD_free(ddict->dictBuffer, cMem);
    ZSTD_free(ddict, cMem);
    return 0; } }
/*! ZSTD_estimateDDictSize() :
 * Estimate amount of memory that will be needed to create a dictionary for decompression.
 * Note : dictionaries created by reference using ZSTD_dlm_byRef are smaller */
size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod)
{ return sizeof(ZSTD_DDict) + (dictLoadMethod == ZSTD_dlm_byRef ?
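/* Usage sketch (illustrative): when one dictionary serves many frames,
 * digesting it once avoids re-parsing its entropy tables on every call :
 *
 *     ZSTD_DDict* const dd = ZSTD_createDDict(dictBuf, dictSize);
 *     if (dd != NULL) {
 *         // for each frame :
 *         //     ZSTD_decompress_usingDDict(dctx, dst, dstCap, src, srcSize, dd);
 *         ZSTD_freeDDict(dd);
 *     }
 */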
0 : dictSize); }
size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict)
{ if (ddict==NULL) return 0; /* support sizeof on NULL */
  return sizeof(*ddict) + (ddict->dictBuffer ? ddict->dictSize : 0) ; }
/*! ZSTD_getDictID_fromDict() :
 * Provides the dictID stored within dictionary.
 * if @return == 0, the dictionary is not conformant with Zstandard specification.
 * It can still be loaded, but as a content-only dictionary. */
unsigned ZSTD_getDictID_fromDict(const void* dict, size_t dictSize)
{ if (dictSize < 8) return 0;
  if (MEM_readLE32(dict) != ZSTD_MAGIC_DICTIONARY) return 0;
  return MEM_readLE32((const char*)dict + ZSTD_frameIdSize); }
/*! ZSTD_getDictID_fromDDict() :
 * Provides the dictID of the dictionary loaded into `ddict`.
 * If @return == 0, the dictionary is not conformant to Zstandard specification, or empty.
 * Non-conformant dictionaries can still be loaded, but as content-only dictionaries. */
unsigned ZSTD_getDictID_fromDDict(const ZSTD_DDict* ddict)
{ if (ddict==NULL) return 0;
  return ZSTD_getDictID_fromDict(ddict->dictContent, ddict->dictSize); }
/*! ZSTD_getDictID_fromFrame() :
 * Provides the dictID required to decompress the frame stored within `src`.
 * If @return == 0, the dictID could not be decoded. This could be for one of the following reasons :
 * - The frame does not require a dictionary (most common case).
 * - The frame was built with dictID intentionally removed. The needed dictionary is hidden information.
 *   Note : this use case also happens when using a non-conformant dictionary.
 * - `srcSize` is too small, and as a result, frame header could not be decoded.
 *   Note : possible if `srcSize < ZSTD_FRAMEHEADERSIZE_MAX`.
 * - This is not a Zstandard frame.
 * When identifying the exact failure cause, it's possible to use
 * ZSTD_getFrameHeader(), which will provide a more precise error code. */
unsigned ZSTD_getDictID_fromFrame(const void* src, size_t srcSize)
{ ZSTD_frameHeader zfp = { 0, 0, 0, ZSTD_frame, 0, 0, 0 };
  size_t const hError = ZSTD_getFrameHeader(&zfp, src, srcSize);
  if (ZSTD_isError(hError)) return 0;
  return zfp.dictID; }
/*! ZSTD_decompress_usingDDict() :
 * Decompression using a pre-digested Dictionary
 * Use dictionary without significant overhead.
*/ size_t ZSTD_decompress_usingDDict(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const ZSTD_DDict* ddict) { /* pass content and size in case legacy frames are encountered */ return ZSTD_decompressMultiFrame(dctx, dst, dstCapacity, src, srcSize, NULL, 0, ddict); } /*===================================== * Streaming decompression *====================================*/ ZSTD_DStream* ZSTD_createDStream(void) { return ZSTD_createDStream_advanced(ZSTD_defaultCMem); } ZSTD_DStream* ZSTD_initStaticDStream(void *workspace, size_t workspaceSize) { return ZSTD_initStaticDCtx(workspace, workspaceSize); } ZSTD_DStream* ZSTD_createDStream_advanced(ZSTD_customMem customMem) { return ZSTD_createDCtx_advanced(customMem); } size_t ZSTD_freeDStream(ZSTD_DStream* zds) { return ZSTD_freeDCtx(zds); } /* *** Initialization *** */ size_t ZSTD_DStreamInSize(void) { return ZSTD_BLOCKSIZE_MAX + ZSTD_blockHeaderSize; } size_t ZSTD_DStreamOutSize(void) { return ZSTD_BLOCKSIZE_MAX; } size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize) { zds->streamStage = zdss_loadHeader; zds->lhSize = zds->inPos = zds->outStart = zds->outEnd = 0; ZSTD_freeDDict(zds->ddictLocal); if (dict && dictSize >= 8) { zds->ddictLocal = ZSTD_createDDict(dict, dictSize); if (zds->ddictLocal == NULL) return ERROR(memory_allocation); } else zds->ddictLocal = NULL; zds->ddict = zds->ddictLocal; zds->legacyVersion = 0; zds->hostageByte = 0; return ZSTD_frameHeaderSize_prefix; } /* note : this variant can't fail */ size_t ZSTD_initDStream(ZSTD_DStream* zds) { return ZSTD_initDStream_usingDict(zds, NULL, 0); } /* ZSTD_initDStream_usingDDict() : * ddict will just be referenced, and must outlive decompression session * this function cannot fail */ size_t ZSTD_initDStream_usingDDict(ZSTD_DStream* zds, const ZSTD_DDict* ddict) { size_t const initResult = ZSTD_initDStream(zds); zds->ddict = ddict; return initResult; } size_t ZSTD_resetDStream(ZSTD_DStream* zds) { zds->streamStage = zdss_loadHeader; zds->lhSize = zds->inPos = zds->outStart = zds->outEnd = 0; zds->legacyVersion = 0; zds->hostageByte = 0; return ZSTD_frameHeaderSize_prefix; } size_t ZSTD_setDStreamParameter(ZSTD_DStream* zds, ZSTD_DStreamParameter_e paramType, unsigned paramValue) { ZSTD_STATIC_ASSERT((unsigned)zdss_loadHeader >= (unsigned)zdss_init); if ((unsigned)zds->streamStage > (unsigned)zdss_loadHeader) return ERROR(stage_wrong); switch(paramType) { default : return ERROR(parameter_unsupported); case DStream_p_maxWindowSize : DEBUGLOG(4, "setting maxWindowSize = %u KB", paramValue >> 10); zds->maxWindowSize = paramValue ? 
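/* Usage sketch (illustrative): a typical streaming-decompression loop, with
 * buffers sized by ZSTD_DStreamInSize() and ZSTD_DStreamOutSize();
 * `moreInput` and `readSize` are hypothetical caller-side names :
 *
 *     ZSTD_initDStream(zds);
 *     while (moreInput) {
 *         ZSTD_inBuffer in = { inBuf, readSize, 0 };
 *         while (in.pos < in.size) {
 *             ZSTD_outBuffer out = { outBuf, outCap, 0 };
 *             size_t const hint = ZSTD_decompressStream(zds, &out, &in);
 *             if (ZSTD_isError(hint)) return hint;
 *             // out.pos bytes are now ready in outBuf
 *         }
 *     }
 */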
paramValue : (U32)(-1); break; } return 0; } size_t ZSTD_DCtx_setMaxWindowSize(ZSTD_DCtx* dctx, size_t maxWindowSize) { ZSTD_STATIC_ASSERT((unsigned)zdss_loadHeader >= (unsigned)zdss_init); if ((unsigned)dctx->streamStage > (unsigned)zdss_loadHeader) return ERROR(stage_wrong); dctx->maxWindowSize = maxWindowSize; return 0; } size_t ZSTD_DCtx_setFormat(ZSTD_DCtx* dctx, ZSTD_format_e format) { DEBUGLOG(4, "ZSTD_DCtx_setFormat : %u", (unsigned)format); ZSTD_STATIC_ASSERT((unsigned)zdss_loadHeader >= (unsigned)zdss_init); if ((unsigned)dctx->streamStage > (unsigned)zdss_loadHeader) return ERROR(stage_wrong); dctx->format = format; return 0; } size_t ZSTD_sizeof_DStream(const ZSTD_DStream* zds) { return ZSTD_sizeof_DCtx(zds); } size_t ZSTD_decodingBufferSize_min(unsigned long long windowSize, unsigned long long frameContentSize) { size_t const blockSize = (size_t) MIN(windowSize, ZSTD_BLOCKSIZE_MAX); unsigned long long const neededRBSize = windowSize + blockSize + (WILDCOPY_OVERLENGTH * 2); unsigned long long const neededSize = MIN(frameContentSize, neededRBSize); size_t const minRBSize = (size_t) neededSize; if ((unsigned long long)minRBSize != neededSize) return ERROR(frameParameter_windowTooLarge); return minRBSize; } size_t ZSTD_estimateDStreamSize(size_t windowSize) { size_t const blockSize = MIN(windowSize, ZSTD_BLOCKSIZE_MAX); size_t const inBuffSize = blockSize; /* no block can be larger */ size_t const outBuffSize = ZSTD_decodingBufferSize_min(windowSize, ZSTD_CONTENTSIZE_UNKNOWN); return ZSTD_estimateDCtxSize() + inBuffSize + outBuffSize; } size_t ZSTD_estimateDStreamSize_fromFrame(const void* src, size_t srcSize) { U32 const windowSizeMax = 1U << ZSTD_WINDOWLOG_MAX; /* note : should be user-selectable */ ZSTD_frameHeader zfh; size_t const err = ZSTD_getFrameHeader(&zfh, src, srcSize); if (ZSTD_isError(err)) return err; if (err>0) return ERROR(srcSize_wrong); if (zfh.windowSize > windowSizeMax) return ERROR(frameParameter_windowTooLarge); return ZSTD_estimateDStreamSize((size_t)zfh.windowSize); } /* ***** Decompression ***** */ MEM_STATIC size_t ZSTD_limitCopy(void* dst, size_t dstCapacity, const void* src, size_t srcSize) { size_t const length = MIN(dstCapacity, srcSize); memcpy(dst, src, length); return length; } size_t ZSTD_decompressStream(ZSTD_DStream* zds, ZSTD_outBuffer* output, ZSTD_inBuffer* input) { const char* const istart = (const char*)(input->src) + input->pos; const char* const iend = (const char*)(input->src) + input->size; const char* ip = istart; char* const ostart = (char*)(output->dst) + output->pos; char* const oend = (char*)(output->dst) + output->size; char* op = ostart; U32 someMoreWork = 1; DEBUGLOG(5, "ZSTD_decompressStream"); if (input->pos > input->size) { /* forbidden */ DEBUGLOG(5, "in: pos: %u vs size: %u", (U32)input->pos, (U32)input->size); return ERROR(srcSize_wrong); } if (output->pos > output->size) { /* forbidden */ DEBUGLOG(5, "out: pos: %u vs size: %u", (U32)output->pos, (U32)output->size); return ERROR(dstSize_tooSmall); } DEBUGLOG(5, "input size : %u", (U32)(input->size - input->pos)); #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1) if (zds->legacyVersion) { /* legacy support is incompatible with static dctx */ if (zds->staticSize) return ERROR(memory_allocation); return ZSTD_decompressLegacyStream(zds->legacyContext, zds->legacyVersion, output, input); } #endif while (someMoreWork) { switch(zds->streamStage) { case zdss_init : ZSTD_resetDStream(zds); /* transparent reset on starting decoding a new frame */ /* fall-through */ case 
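/* Worked example (illustrative) for ZSTD_decodingBufferSize_min() above,
 * assuming WILDCOPY_OVERLENGTH is 8 as defined in this codebase : with
 * windowSize = 1 MiB, blockSize = MIN(1 MiB, ZSTD_BLOCKSIZE_MAX) = 128 KiB,
 * so the rolling output buffer needs 1048576 + 131072 + 2*8 = 1179664
 * bytes (less if the frame content is known to be smaller).
 */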
zdss_loadHeader : DEBUGLOG(5, "stage zdss_loadHeader (srcSize : %u)", (U32)(iend - ip)); { size_t const hSize = ZSTD_getFrameHeader_internal(&zds->fParams, zds->headerBuffer, zds->lhSize, zds->format); DEBUGLOG(5, "header size : %u", (U32)hSize); if (ZSTD_isError(hSize)) { #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1) U32 const legacyVersion = ZSTD_isLegacy(istart, iend-istart); if (legacyVersion) { const void* const dict = zds->ddict ? zds->ddict->dictContent : NULL; size_t const dictSize = zds->ddict ? zds->ddict->dictSize : 0; /* legacy support is incompatible with static dctx */ if (zds->staticSize) return ERROR(memory_allocation); CHECK_F(ZSTD_initLegacyStream(&zds->legacyContext, zds->previousLegacyVersion, legacyVersion, dict, dictSize)); zds->legacyVersion = zds->previousLegacyVersion = legacyVersion; return ZSTD_decompressLegacyStream(zds->legacyContext, legacyVersion, output, input); } #endif return hSize; /* error */ } if (hSize != 0) { /* need more input */ size_t const toLoad = hSize - zds->lhSize; /* if hSize!=0, hSize > zds->lhSize */ if (toLoad > (size_t)(iend-ip)) { /* not enough input to load full header */ if (iend-ip > 0) { memcpy(zds->headerBuffer + zds->lhSize, ip, iend-ip); zds->lhSize += iend-ip; } input->pos = input->size; return (MAX(ZSTD_frameHeaderSize_min, hSize) - zds->lhSize) + ZSTD_blockHeaderSize; /* remaining header bytes + next block header */ } assert(ip != NULL); memcpy(zds->headerBuffer + zds->lhSize, ip, toLoad); zds->lhSize = hSize; ip += toLoad; break; } } /* check for single-pass mode opportunity */ if (zds->fParams.frameContentSize && zds->fParams.windowSize /* skippable frame if == 0 */ && (U64)(size_t)(oend-op) >= zds->fParams.frameContentSize) { size_t const cSize = ZSTD_findFrameCompressedSize(istart, iend-istart); if (cSize <= (size_t)(iend-istart)) { size_t const decompressedSize = ZSTD_decompress_usingDDict(zds, op, oend-op, istart, cSize, zds->ddict); if (ZSTD_isError(decompressedSize)) return decompressedSize; ip = istart + cSize; op += decompressedSize; zds->expected = 0; zds->streamStage = zdss_init; someMoreWork = 0; break; } } /* Consume header (see ZSTDds_decodeFrameHeader) */ DEBUGLOG(4, "Consume header"); CHECK_F(ZSTD_decompressBegin_usingDDict(zds, zds->ddict)); if ((MEM_readLE32(zds->headerBuffer) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) { /* skippable frame */ zds->expected = MEM_readLE32(zds->headerBuffer + ZSTD_frameIdSize); zds->stage = ZSTDds_skipFrame; } else { CHECK_F(ZSTD_decodeFrameHeader(zds, zds->headerBuffer, zds->lhSize)); zds->expected = ZSTD_blockHeaderSize; zds->stage = ZSTDds_decodeBlockHeader; } /* control buffer memory usage */ DEBUGLOG(4, "Control max buffer memory usage (max %u KB)", (U32)(zds->maxWindowSize >> 10)); zds->fParams.windowSize = MAX(zds->fParams.windowSize, 1U << ZSTD_WINDOWLOG_ABSOLUTEMIN); if (zds->fParams.windowSize > zds->maxWindowSize) return ERROR(frameParameter_windowTooLarge); /* Adapt buffer sizes to frame header instructions */ { size_t const neededInBuffSize = MAX(zds->fParams.blockSizeMax, 4 /* frame checksum */); size_t const neededOutBuffSize = ZSTD_decodingBufferSize_min(zds->fParams.windowSize, zds->fParams.frameContentSize); if ((zds->inBuffSize < neededInBuffSize) || (zds->outBuffSize < neededOutBuffSize)) { size_t const bufferSize = neededInBuffSize + neededOutBuffSize; DEBUGLOG(4, "inBuff : from %u to %u", (U32)zds->inBuffSize, (U32)neededInBuffSize); DEBUGLOG(4, "outBuff : from %u to %u", (U32)zds->outBuffSize, (U32)neededOutBuffSize); if 
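/* Worked example of the sizing just computed (illustrative, from the
   ZSTD_decodingBufferSize_min() definition above): blockSize = MIN(windowSize,
   ZSTD_BLOCKSIZE_MAX), and the ring buffer needs windowSize + blockSize +
   2*WILDCOPY_OVERLENGTH bytes, capped by frameContentSize when it is known.
   Assuming WILDCOPY_OVERLENGTH == 8 (its value elsewhere in this tree), a 1 MiB
   window gives 1048576 + 131072 + 16 = 1179664 bytes; but a frame known to hold
   only 4096 bytes of content needs just a 4096-byte ring buffer. */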
(zds->staticSize) { /* static DCtx */ DEBUGLOG(4, "staticSize : %u", (U32)zds->staticSize); assert(zds->staticSize >= sizeof(ZSTD_DCtx)); /* controlled at init */ if (bufferSize > zds->staticSize - sizeof(ZSTD_DCtx)) return ERROR(memory_allocation); } else { ZSTD_free(zds->inBuff, zds->customMem); zds->inBuffSize = 0; zds->outBuffSize = 0; zds->inBuff = (char*)ZSTD_malloc(bufferSize, zds->customMem); if (zds->inBuff == NULL) return ERROR(memory_allocation); } zds->inBuffSize = neededInBuffSize; zds->outBuff = zds->inBuff + zds->inBuffSize; zds->outBuffSize = neededOutBuffSize; } } zds->streamStage = zdss_read; /* fall-through */ case zdss_read: DEBUGLOG(5, "stage zdss_read"); { size_t const neededInSize = ZSTD_nextSrcSizeToDecompress(zds); DEBUGLOG(5, "neededInSize = %u", (U32)neededInSize); if (neededInSize==0) { /* end of frame */ zds->streamStage = zdss_init; someMoreWork = 0; break; } if ((size_t)(iend-ip) >= neededInSize) { /* decode directly from src */ int const isSkipFrame = ZSTD_isSkipFrame(zds); size_t const decodedSize = ZSTD_decompressContinue(zds, zds->outBuff + zds->outStart, (isSkipFrame ? 0 : zds->outBuffSize - zds->outStart), ip, neededInSize); if (ZSTD_isError(decodedSize)) return decodedSize; ip += neededInSize; if (!decodedSize && !isSkipFrame) break; /* this was just a header */ zds->outEnd = zds->outStart + decodedSize; zds->streamStage = zdss_flush; break; } } if (ip==iend) { someMoreWork = 0; break; } /* no more input */ zds->streamStage = zdss_load; /* fall-through */ case zdss_load: { size_t const neededInSize = ZSTD_nextSrcSizeToDecompress(zds); size_t const toLoad = neededInSize - zds->inPos; /* should always be <= remaining space within inBuff */ size_t loadedSize; if (toLoad > zds->inBuffSize - zds->inPos) return ERROR(corruption_detected); /* should never happen */ loadedSize = ZSTD_limitCopy(zds->inBuff + zds->inPos, toLoad, ip, iend-ip); ip += loadedSize; zds->inPos += loadedSize; if (loadedSize < toLoad) { someMoreWork = 0; break; } /* not enough input, wait for more */ /* decode loaded input */ { const int isSkipFrame = ZSTD_isSkipFrame(zds); size_t const decodedSize = ZSTD_decompressContinue(zds, zds->outBuff + zds->outStart, zds->outBuffSize - zds->outStart, zds->inBuff, neededInSize); if (ZSTD_isError(decodedSize)) return decodedSize; zds->inPos = 0; /* input is consumed */ if (!decodedSize && !isSkipFrame) { zds->streamStage = zdss_read; break; } /* this was just a header */ zds->outEnd = zds->outStart + decodedSize; } } zds->streamStage = zdss_flush; /* fall-through */ case zdss_flush: { size_t const toFlushSize = zds->outEnd - zds->outStart; size_t const flushedSize = ZSTD_limitCopy(op, oend-op, zds->outBuff + zds->outStart, toFlushSize); op += flushedSize; zds->outStart += flushedSize; if (flushedSize == toFlushSize) { /* flush completed */ zds->streamStage = zdss_read; if ( (zds->outBuffSize < zds->fParams.frameContentSize) && (zds->outStart + zds->fParams.blockSizeMax > zds->outBuffSize) ) { DEBUGLOG(5, "restart filling outBuff from beginning (left:%i, needed:%u)", (int)(zds->outBuffSize - zds->outStart), (U32)zds->fParams.blockSizeMax); zds->outStart = zds->outEnd = 0; } break; } } /* cannot complete flush */ someMoreWork = 0; break; default: return ERROR(GENERIC); /* impossible */ } } /* result */ input->pos += (size_t)(ip-istart); output->pos += (size_t)(op-ostart); { size_t nextSrcSizeHint = ZSTD_nextSrcSizeToDecompress(zds); if (!nextSrcSizeHint) { /* frame fully decoded */ if (zds->outEnd == zds->outStart) { /* output fully flushed */ if 
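/* The "hostage byte" handled below keeps the API contract coherent: if a frame is
   fully decoded but its output is not yet flushed, the last input byte is withheld
   (input->pos--) so the return value stays nonzero and the caller knows to call
   again. An illustrative trace:
     call N   : frame decoded, output buffer full -> input->pos--, hostageByte=1, return 1
     call N+1 : caller drained output, flush done -> input->pos++ (release), return 0 */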
(zds->hostageByte) { if (input->pos >= input->size) { /* can't release hostage (not present) */ zds->streamStage = zdss_read; return 1; } input->pos++; /* release hostage */ } /* zds->hostageByte */ return 0; } /* zds->outEnd == zds->outStart */ if (!zds->hostageByte) { /* output not fully flushed; keep last byte as hostage; will be released when all output is flushed */ input->pos--; /* note : pos > 0, otherwise, impossible to finish reading last block */ zds->hostageByte=1; } return 1; } /* nextSrcSizeHint==0 */ nextSrcSizeHint += ZSTD_blockHeaderSize * (ZSTD_nextInputType(zds) == ZSTDnit_block); /* preload header of next block */ if (zds->inPos > nextSrcSizeHint) return ERROR(GENERIC); /* should never happen */ nextSrcSizeHint -= zds->inPos; /* already loaded*/ return nextSrcSizeHint; } } size_t ZSTD_decompress_generic(ZSTD_DCtx* dctx, ZSTD_outBuffer* output, ZSTD_inBuffer* input) { return ZSTD_decompressStream(dctx, output, input); } size_t ZSTD_decompress_generic_simpleArgs ( ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, size_t* dstPos, const void* src, size_t srcSize, size_t* srcPos) { ZSTD_outBuffer output = { dst, dstCapacity, *dstPos }; ZSTD_inBuffer input = { src, srcSize, *srcPos }; /* ZSTD_compress_generic() will check validity of dstPos and srcPos */ size_t const cErr = ZSTD_decompress_generic(dctx, &output, &input); *dstPos = output.pos; *srcPos = input.pos; return cErr; } void ZSTD_DCtx_reset(ZSTD_DCtx* dctx) { (void)ZSTD_initDStream(dctx); dctx->format = ZSTD_f_zstd1; dctx->maxWindowSize = ZSTD_MAXWINDOWSIZE_DEFAULT; } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/0000755000175000017500000000000013260002401021616 5ustar twtw00000000000000borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v02.h0000644000175000017500000000641613257361140023467 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ #ifndef ZSTD_V02_H_4174539423 #define ZSTD_V02_H_4174539423 #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Includes ***************************************/ #include /* size_t */ /* ************************************* * Simple one-step function ***************************************/ /** ZSTDv02_decompress() : decompress ZSTD frames compliant with v0.2.x format compressedSize : is the exact source size maxOriginalSize : is the size of the 'dst' buffer, which must be already allocated. It must be equal or larger than originalSize, otherwise decompression will fail. 
             return : the number of bytes decompressed into destination buffer (originalSize)
                      or an errorCode if it fails (which can be tested using ZSTDv02_isError())
*/
size_t ZSTDv02_decompress( void* dst, size_t maxOriginalSize,
                     const void* src, size_t compressedSize);

/**
ZSTDv02_findFrameCompressedSize() : get the source length of a ZSTD frame compliant with v0.2.x format
    compressedSize : The size of the 'src' buffer, at least as large as the frame pointed to by 'src'
    return : the number of bytes that would be read to decompress this frame
             or an errorCode if it fails (which can be tested using ZSTDv02_isError())
*/
size_t ZSTDv02_findFrameCompressedSize(const void* src, size_t compressedSize);

/**
ZSTDv02_isError() : tells if the result of ZSTDv02_decompress() is an error
*/
unsigned ZSTDv02_isError(size_t code);


/* *************************************
*  Advanced functions
***************************************/
typedef struct ZSTDv02_Dctx_s ZSTDv02_Dctx;
ZSTDv02_Dctx* ZSTDv02_createDCtx(void);
size_t ZSTDv02_freeDCtx(ZSTDv02_Dctx* dctx);

size_t ZSTDv02_decompressDCtx(void* ctx,
                              void* dst, size_t maxOriginalSize,
                        const void* src, size_t compressedSize);

/* *************************************
*  Streaming functions
***************************************/
size_t ZSTDv02_resetDCtx(ZSTDv02_Dctx* dctx);

size_t ZSTDv02_nextSrcSizeToDecompress(ZSTDv02_Dctx* dctx);
size_t ZSTDv02_decompressContinue(ZSTDv02_Dctx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize);
/**
  Call the above two functions alternately.
  ZSTDv02_nextSrcSizeToDecompress() tells how many bytes to provide as 'srcSize' to ZSTDv02_decompressContinue().
  ZSTDv02_decompressContinue() needs previous data blocks during decompression; they should be located contiguously prior to the current block.
  Result is the number of bytes regenerated within 'dst'.
  It can be zero, which is not an error; it just means ZSTDv02_decompressContinue() has decoded some header.
*/

/* *************************************
*  Prefix - version detection
***************************************/
#define ZSTDv02_magicNumber 0xFD2FB522   /* v0.2 */


#if defined (__cplusplus)
}
#endif

#endif /* ZSTD_V02_H_4174539423 */
borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_legacy.h0000644000175000017500000003024013257361140024314 0ustar twtw00000000000000/*
 * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under both the BSD-style license (found in the
 * LICENSE file in the root directory of this source tree) and the GPLv2 (found
 * in the COPYING file in the root directory of this source tree).
 * You may select, at your option, one of the above-listed licenses.
*/ #ifndef ZSTD_LEGACY_H #define ZSTD_LEGACY_H #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Includes ***************************************/ #include "mem.h" /* MEM_STATIC */ #include "error_private.h" /* ERROR */ #include "zstd.h" /* ZSTD_inBuffer, ZSTD_outBuffer */ #if !defined (ZSTD_LEGACY_SUPPORT) || (ZSTD_LEGACY_SUPPORT == 0) # undef ZSTD_LEGACY_SUPPORT # define ZSTD_LEGACY_SUPPORT 8 #endif #if (ZSTD_LEGACY_SUPPORT <= 1) # include "zstd_v01.h" #endif #if (ZSTD_LEGACY_SUPPORT <= 2) # include "zstd_v02.h" #endif #if (ZSTD_LEGACY_SUPPORT <= 3) # include "zstd_v03.h" #endif #if (ZSTD_LEGACY_SUPPORT <= 4) # include "zstd_v04.h" #endif #if (ZSTD_LEGACY_SUPPORT <= 5) # include "zstd_v05.h" #endif #if (ZSTD_LEGACY_SUPPORT <= 6) # include "zstd_v06.h" #endif #if (ZSTD_LEGACY_SUPPORT <= 7) # include "zstd_v07.h" #endif /** ZSTD_isLegacy() : @return : > 0 if supported by legacy decoder. 0 otherwise. return value is the version. */ MEM_STATIC unsigned ZSTD_isLegacy(const void* src, size_t srcSize) { U32 magicNumberLE; if (srcSize<4) return 0; magicNumberLE = MEM_readLE32(src); switch(magicNumberLE) { #if (ZSTD_LEGACY_SUPPORT <= 1) case ZSTDv01_magicNumberLE:return 1; #endif #if (ZSTD_LEGACY_SUPPORT <= 2) case ZSTDv02_magicNumber : return 2; #endif #if (ZSTD_LEGACY_SUPPORT <= 3) case ZSTDv03_magicNumber : return 3; #endif #if (ZSTD_LEGACY_SUPPORT <= 4) case ZSTDv04_magicNumber : return 4; #endif #if (ZSTD_LEGACY_SUPPORT <= 5) case ZSTDv05_MAGICNUMBER : return 5; #endif #if (ZSTD_LEGACY_SUPPORT <= 6) case ZSTDv06_MAGICNUMBER : return 6; #endif #if (ZSTD_LEGACY_SUPPORT <= 7) case ZSTDv07_MAGICNUMBER : return 7; #endif default : return 0; } } MEM_STATIC unsigned long long ZSTD_getDecompressedSize_legacy(const void* src, size_t srcSize) { U32 const version = ZSTD_isLegacy(src, srcSize); if (version < 5) return 0; /* no decompressed size in frame header, or not a legacy format */ #if (ZSTD_LEGACY_SUPPORT <= 5) if (version==5) { ZSTDv05_parameters fParams; size_t const frResult = ZSTDv05_getFrameParams(&fParams, src, srcSize); if (frResult != 0) return 0; return fParams.srcSize; } #endif #if (ZSTD_LEGACY_SUPPORT <= 6) if (version==6) { ZSTDv06_frameParams fParams; size_t const frResult = ZSTDv06_getFrameParams(&fParams, src, srcSize); if (frResult != 0) return 0; return fParams.frameContentSize; } #endif #if (ZSTD_LEGACY_SUPPORT <= 7) if (version==7) { ZSTDv07_frameParams fParams; size_t const frResult = ZSTDv07_getFrameParams(&fParams, src, srcSize); if (frResult != 0) return 0; return fParams.frameContentSize; } #endif return 0; /* should not be possible */ } MEM_STATIC size_t ZSTD_decompressLegacy( void* dst, size_t dstCapacity, const void* src, size_t compressedSize, const void* dict,size_t dictSize) { U32 const version = ZSTD_isLegacy(src, compressedSize); (void)dst; (void)dstCapacity; (void)dict; (void)dictSize; /* unused when ZSTD_LEGACY_SUPPORT >= 8 */ switch(version) { #if (ZSTD_LEGACY_SUPPORT <= 1) case 1 : return ZSTDv01_decompress(dst, dstCapacity, src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 2) case 2 : return ZSTDv02_decompress(dst, dstCapacity, src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 3) case 3 : return ZSTDv03_decompress(dst, dstCapacity, src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 4) case 4 : return ZSTDv04_decompress(dst, dstCapacity, src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 5) case 5 : { size_t result; ZSTDv05_DCtx* const zd = ZSTDv05_createDCtx(); if (zd==NULL) return 
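/* Illustrative usage sketch, not part of the original source: ZSTD_isLegacy()
   above dispatches on the little-endian magic number at the start of a frame
   (e.g. 0xFD2FB522 for v0.2, per zstd_v02.h above) and returns the format
   version, so a caller can route legacy frames to this function explicitly
   (error handling elided; `dst`/`dstCapacity` are assumed caller buffers):

       unsigned const v = ZSTD_isLegacy(src, srcSize);   // 0 : not a legacy frame
       if (v) {
           size_t const r = ZSTD_decompressLegacy(dst, dstCapacity, src, srcSize, NULL, 0);
           // r : decompressed size, or an error code (no dictionary used here)
       }
*/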
ERROR(memory_allocation); result = ZSTDv05_decompress_usingDict(zd, dst, dstCapacity, src, compressedSize, dict, dictSize); ZSTDv05_freeDCtx(zd); return result; } #endif #if (ZSTD_LEGACY_SUPPORT <= 6) case 6 : { size_t result; ZSTDv06_DCtx* const zd = ZSTDv06_createDCtx(); if (zd==NULL) return ERROR(memory_allocation); result = ZSTDv06_decompress_usingDict(zd, dst, dstCapacity, src, compressedSize, dict, dictSize); ZSTDv06_freeDCtx(zd); return result; } #endif #if (ZSTD_LEGACY_SUPPORT <= 7) case 7 : { size_t result; ZSTDv07_DCtx* const zd = ZSTDv07_createDCtx(); if (zd==NULL) return ERROR(memory_allocation); result = ZSTDv07_decompress_usingDict(zd, dst, dstCapacity, src, compressedSize, dict, dictSize); ZSTDv07_freeDCtx(zd); return result; } #endif default : return ERROR(prefix_unknown); } } MEM_STATIC size_t ZSTD_findFrameCompressedSizeLegacy(const void *src, size_t compressedSize) { U32 const version = ZSTD_isLegacy(src, compressedSize); switch(version) { #if (ZSTD_LEGACY_SUPPORT <= 1) case 1 : return ZSTDv01_findFrameCompressedSize(src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 2) case 2 : return ZSTDv02_findFrameCompressedSize(src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 3) case 3 : return ZSTDv03_findFrameCompressedSize(src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 4) case 4 : return ZSTDv04_findFrameCompressedSize(src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 5) case 5 : return ZSTDv05_findFrameCompressedSize(src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 6) case 6 : return ZSTDv06_findFrameCompressedSize(src, compressedSize); #endif #if (ZSTD_LEGACY_SUPPORT <= 7) case 7 : return ZSTDv07_findFrameCompressedSize(src, compressedSize); #endif default : return ERROR(prefix_unknown); } } MEM_STATIC size_t ZSTD_freeLegacyStreamContext(void* legacyContext, U32 version) { switch(version) { default : case 1 : case 2 : case 3 : (void)legacyContext; return ERROR(version_unsupported); #if (ZSTD_LEGACY_SUPPORT <= 4) case 4 : return ZBUFFv04_freeDCtx((ZBUFFv04_DCtx*)legacyContext); #endif #if (ZSTD_LEGACY_SUPPORT <= 5) case 5 : return ZBUFFv05_freeDCtx((ZBUFFv05_DCtx*)legacyContext); #endif #if (ZSTD_LEGACY_SUPPORT <= 6) case 6 : return ZBUFFv06_freeDCtx((ZBUFFv06_DCtx*)legacyContext); #endif #if (ZSTD_LEGACY_SUPPORT <= 7) case 7 : return ZBUFFv07_freeDCtx((ZBUFFv07_DCtx*)legacyContext); #endif } } MEM_STATIC size_t ZSTD_initLegacyStream(void** legacyContext, U32 prevVersion, U32 newVersion, const void* dict, size_t dictSize) { if (prevVersion != newVersion) ZSTD_freeLegacyStreamContext(*legacyContext, prevVersion); switch(newVersion) { default : case 1 : case 2 : case 3 : (void)dict; (void)dictSize; return 0; #if (ZSTD_LEGACY_SUPPORT <= 4) case 4 : { ZBUFFv04_DCtx* dctx = (prevVersion != newVersion) ? ZBUFFv04_createDCtx() : (ZBUFFv04_DCtx*)*legacyContext; if (dctx==NULL) return ERROR(memory_allocation); ZBUFFv04_decompressInit(dctx); ZBUFFv04_decompressWithDictionary(dctx, dict, dictSize); *legacyContext = dctx; return 0; } #endif #if (ZSTD_LEGACY_SUPPORT <= 5) case 5 : { ZBUFFv05_DCtx* dctx = (prevVersion != newVersion) ? ZBUFFv05_createDCtx() : (ZBUFFv05_DCtx*)*legacyContext; if (dctx==NULL) return ERROR(memory_allocation); ZBUFFv05_decompressInitDictionary(dctx, dict, dictSize); *legacyContext = dctx; return 0; } #endif #if (ZSTD_LEGACY_SUPPORT <= 6) case 6 : { ZBUFFv06_DCtx* dctx = (prevVersion != newVersion) ? 
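/* Same pattern as cases 4 and 5 above: when the previous stream used a different
   legacy version, the old ZBUFF context was freed at function entry and a fresh
   one is created here; when the version is unchanged, the existing context is
   reused as-is and only its dictionary is re-initialized. */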
ZBUFFv06_createDCtx() : (ZBUFFv06_DCtx*)*legacyContext; if (dctx==NULL) return ERROR(memory_allocation); ZBUFFv06_decompressInitDictionary(dctx, dict, dictSize); *legacyContext = dctx; return 0; } #endif #if (ZSTD_LEGACY_SUPPORT <= 7) case 7 : { ZBUFFv07_DCtx* dctx = (prevVersion != newVersion) ? ZBUFFv07_createDCtx() : (ZBUFFv07_DCtx*)*legacyContext; if (dctx==NULL) return ERROR(memory_allocation); ZBUFFv07_decompressInitDictionary(dctx, dict, dictSize); *legacyContext = dctx; return 0; } #endif } } MEM_STATIC size_t ZSTD_decompressLegacyStream(void* legacyContext, U32 version, ZSTD_outBuffer* output, ZSTD_inBuffer* input) { switch(version) { default : case 1 : case 2 : case 3 : (void)legacyContext; (void)output; (void)input; return ERROR(version_unsupported); #if (ZSTD_LEGACY_SUPPORT <= 4) case 4 : { ZBUFFv04_DCtx* dctx = (ZBUFFv04_DCtx*) legacyContext; const void* src = (const char*)input->src + input->pos; size_t readSize = input->size - input->pos; void* dst = (char*)output->dst + output->pos; size_t decodedSize = output->size - output->pos; size_t const hintSize = ZBUFFv04_decompressContinue(dctx, dst, &decodedSize, src, &readSize); output->pos += decodedSize; input->pos += readSize; return hintSize; } #endif #if (ZSTD_LEGACY_SUPPORT <= 5) case 5 : { ZBUFFv05_DCtx* dctx = (ZBUFFv05_DCtx*) legacyContext; const void* src = (const char*)input->src + input->pos; size_t readSize = input->size - input->pos; void* dst = (char*)output->dst + output->pos; size_t decodedSize = output->size - output->pos; size_t const hintSize = ZBUFFv05_decompressContinue(dctx, dst, &decodedSize, src, &readSize); output->pos += decodedSize; input->pos += readSize; return hintSize; } #endif #if (ZSTD_LEGACY_SUPPORT <= 6) case 6 : { ZBUFFv06_DCtx* dctx = (ZBUFFv06_DCtx*) legacyContext; const void* src = (const char*)input->src + input->pos; size_t readSize = input->size - input->pos; void* dst = (char*)output->dst + output->pos; size_t decodedSize = output->size - output->pos; size_t const hintSize = ZBUFFv06_decompressContinue(dctx, dst, &decodedSize, src, &readSize); output->pos += decodedSize; input->pos += readSize; return hintSize; } #endif #if (ZSTD_LEGACY_SUPPORT <= 7) case 7 : { ZBUFFv07_DCtx* dctx = (ZBUFFv07_DCtx*) legacyContext; const void* src = (const char*)input->src + input->pos; size_t readSize = input->size - input->pos; void* dst = (char*)output->dst + output->pos; size_t decodedSize = output->size - output->pos; size_t const hintSize = ZBUFFv07_decompressContinue(dctx, dst, &decodedSize, src, &readSize); output->pos += decodedSize; input->pos += readSize; return hintSize; } #endif } } #if defined (__cplusplus) } #endif #endif /* ZSTD_LEGACY_H */ borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v06.c0000644000175000017500000051233613257361140023471 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /*- Dependencies -*/ #include "zstd_v06.h" #include /* size_t, ptrdiff_t */ #include /* memcpy */ #include /* malloc, free, qsort */ #include "error_private.h" /* ****************************************************************** mem.h low-level memory access routines Copyright (C) 2013-2015, Yann Collet. 
   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    You can contact the author at :
    - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
    - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */
#ifndef MEM_H_MODULE
#define MEM_H_MODULE

#if defined (__cplusplus)
extern "C" {
#endif

/*-****************************************
*  Compiler specifics
******************************************/
#if defined(_MSC_VER)    /* Visual Studio */
#   include <stdlib.h>   /* _byteswap_ulong */
#   include <intrin.h>   /* _byteswap_* */
#endif
#if defined(__GNUC__)
#  define MEM_STATIC static __attribute__((unused))
#elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
#  define MEM_STATIC static inline
#elif defined(_MSC_VER)
#  define MEM_STATIC static __inline
#else
#  define MEM_STATIC static  /* this version may generate warnings for unused static functions; disable the relevant warning */
#endif

/*-**************************************************************
*  Basic Types
*****************************************************************/
#if !defined (__VMS) && (defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )
# include <stdint.h>
  typedef  uint8_t BYTE;
  typedef uint16_t U16;
  typedef  int16_t S16;
  typedef uint32_t U32;
  typedef  int32_t S32;
  typedef uint64_t U64;
  typedef  int64_t S64;
#else
  typedef unsigned char       BYTE;
  typedef unsigned short      U16;
  typedef   signed short      S16;
  typedef unsigned int        U32;
  typedef   signed int        S32;
  typedef unsigned long long  U64;
  typedef   signed long long  S64;
#endif


/*-**************************************************************
*  Memory I/O
*****************************************************************/
/* MEM_FORCE_MEMORY_ACCESS :
 * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.
 * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
 * The switch below allows selecting a different access method for improved performance.
 * Method 0 (default) : use `memcpy()`. Safe and portable.
 * Method 1 : `__packed` statement. It depends on compiler extension (ie, not portable).
* This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`. * Method 2 : direct access. This method is portable but violate C standard. * It can generate buggy code on targets depending on alignment. * In some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6) * See http://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details. * Prefer these methods in priority order (0 > 1 > 2) */ #ifndef MEM_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */ # if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) ) # define MEM_FORCE_MEMORY_ACCESS 2 # elif (defined(__INTEL_COMPILER) && !defined(WIN32)) || \ (defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) )) # define MEM_FORCE_MEMORY_ACCESS 1 # endif #endif MEM_STATIC unsigned MEM_32bits(void) { return sizeof(size_t)==4; } MEM_STATIC unsigned MEM_64bits(void) { return sizeof(size_t)==8; } MEM_STATIC unsigned MEM_isLittleEndian(void) { const union { U32 u; BYTE c[4]; } one = { 1 }; /* don't use static : performance detrimental */ return one.c[0]; } #if defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==2) /* violates C standard, by lying on structure alignment. Only use if no other choice to achieve best performance on target platform */ MEM_STATIC U16 MEM_read16(const void* memPtr) { return *(const U16*) memPtr; } MEM_STATIC U32 MEM_read32(const void* memPtr) { return *(const U32*) memPtr; } MEM_STATIC U64 MEM_read64(const void* memPtr) { return *(const U64*) memPtr; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; } #elif defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==1) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U16 u16; U32 u32; U64 u64; size_t st; } __attribute__((packed)) unalign; MEM_STATIC U16 MEM_read16(const void* ptr) { return ((const unalign*)ptr)->u16; } MEM_STATIC U32 MEM_read32(const void* ptr) { return ((const unalign*)ptr)->u32; } MEM_STATIC U64 MEM_read64(const void* ptr) { return ((const unalign*)ptr)->u64; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { ((unalign*)memPtr)->u16 = value; } #else /* default method, safe and standard. 
can sometimes prove slower */ MEM_STATIC U16 MEM_read16(const void* memPtr) { U16 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U32 MEM_read32(const void* memPtr) { U32 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U64 MEM_read64(const void* memPtr) { U64 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { memcpy(memPtr, &value, sizeof(value)); } #endif /* MEM_FORCE_MEMORY_ACCESS */ MEM_STATIC U32 MEM_swap32(U32 in) { #if defined(_MSC_VER) /* Visual Studio */ return _byteswap_ulong(in); #elif defined (__GNUC__) return __builtin_bswap32(in); #else return ((in << 24) & 0xff000000 ) | ((in << 8) & 0x00ff0000 ) | ((in >> 8) & 0x0000ff00 ) | ((in >> 24) & 0x000000ff ); #endif } MEM_STATIC U64 MEM_swap64(U64 in) { #if defined(_MSC_VER) /* Visual Studio */ return _byteswap_uint64(in); #elif defined (__GNUC__) return __builtin_bswap64(in); #else return ((in << 56) & 0xff00000000000000ULL) | ((in << 40) & 0x00ff000000000000ULL) | ((in << 24) & 0x0000ff0000000000ULL) | ((in << 8) & 0x000000ff00000000ULL) | ((in >> 8) & 0x00000000ff000000ULL) | ((in >> 24) & 0x0000000000ff0000ULL) | ((in >> 40) & 0x000000000000ff00ULL) | ((in >> 56) & 0x00000000000000ffULL); #endif } /*=== Little endian r/w ===*/ MEM_STATIC U16 MEM_readLE16(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read16(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U16)(p[0] + (p[1]<<8)); } } MEM_STATIC void MEM_writeLE16(void* memPtr, U16 val) { if (MEM_isLittleEndian()) { MEM_write16(memPtr, val); } else { BYTE* p = (BYTE*)memPtr; p[0] = (BYTE)val; p[1] = (BYTE)(val>>8); } } MEM_STATIC U32 MEM_readLE32(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read32(memPtr); else return MEM_swap32(MEM_read32(memPtr)); } MEM_STATIC U64 MEM_readLE64(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read64(memPtr); else return MEM_swap64(MEM_read64(memPtr)); } MEM_STATIC size_t MEM_readLEST(const void* memPtr) { if (MEM_32bits()) return (size_t)MEM_readLE32(memPtr); else return (size_t)MEM_readLE64(memPtr); } #if defined (__cplusplus) } #endif #endif /* MEM_H_MODULE */ /* zstd - standard compression library Header File for static linking only Copyright (C) 2014-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd homepage : http://www.zstd.net */ #ifndef ZSTDv06_STATIC_H #define ZSTDv06_STATIC_H /* The prototypes defined within this file are considered experimental. * They should not be used in the context DLL as they may change in the future. * Prefer static linking if you need them, to control breaking version changes issues. */ #if defined (__cplusplus) extern "C" { #endif /*- Advanced Decompression functions -*/ /*! ZSTDv06_decompress_usingPreparedDCtx() : * Same as ZSTDv06_decompress_usingDict, but using a reference context `preparedDCtx`, where dictionary has been loaded. * It avoids reloading the dictionary each time. * `preparedDCtx` must have been properly initialized using ZSTDv06_decompressBegin_usingDict(). * Requires 2 contexts : 1 for reference (preparedDCtx), which will not be modified, and 1 to run the decompression operation (dctx) */ ZSTDLIBv06_API size_t ZSTDv06_decompress_usingPreparedDCtx( ZSTDv06_DCtx* dctx, const ZSTDv06_DCtx* preparedDCtx, void* dst, size_t dstCapacity, const void* src, size_t srcSize); #define ZSTDv06_FRAMEHEADERSIZE_MAX 13 /* for static allocation */ static const size_t ZSTDv06_frameHeaderSize_min = 5; static const size_t ZSTDv06_frameHeaderSize_max = ZSTDv06_FRAMEHEADERSIZE_MAX; ZSTDLIBv06_API size_t ZSTDv06_decompressBegin(ZSTDv06_DCtx* dctx); /* Streaming decompression, direct mode (bufferless) A ZSTDv06_DCtx object is required to track streaming operations. Use ZSTDv06_createDCtx() / ZSTDv06_freeDCtx() to manage it. A ZSTDv06_DCtx object can be re-used multiple times. First optional operation is to retrieve frame parameters, using ZSTDv06_getFrameParams(), which doesn't consume the input. It can provide the minimum size of rolling buffer required to properly decompress data, and optionally the final size of uncompressed content. (Note : content size is an optional info that may not be present. 0 means : content size unknown) Frame parameters are extracted from the beginning of compressed frame. The amount of data to read is variable, from ZSTDv06_frameHeaderSize_min to ZSTDv06_frameHeaderSize_max (so if `srcSize` >= ZSTDv06_frameHeaderSize_max, it will always work) If `srcSize` is too small for operation to succeed, function will return the minimum size it requires to produce a result. Result : 0 when successful, it means the ZSTDv06_frameParams structure has been filled. >0 : means there is not enough data into `src`. Provides the expected size to successfully decode header. errorCode, which can be tested using ZSTDv06_isError() Start decompression, with ZSTDv06_decompressBegin() or ZSTDv06_decompressBegin_usingDict(). Alternatively, you can copy a prepared context, using ZSTDv06_copyDCtx(). Then use ZSTDv06_nextSrcSizeToDecompress() and ZSTDv06_decompressContinue() alternatively. ZSTDv06_nextSrcSizeToDecompress() tells how much bytes to provide as 'srcSize' to ZSTDv06_decompressContinue(). ZSTDv06_decompressContinue() requires this exact amount of bytes, or it will fail. 
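  A hedged sketch of that alternating loop, using only functions named in this
  header (input refilling and error checks elided; `inBuf`/`outPtr`/`outCap` are
  illustrative caller names, not part of the API):

      ZSTDv06_DCtx* const dctx = ZSTDv06_createDCtx();
      ZSTDv06_decompressBegin(dctx);
      for (;;) {
          size_t const need = ZSTDv06_nextSrcSizeToDecompress(dctx);
          if (need == 0) break;                       (frame fully decoded)
          (read exactly `need` bytes into inBuf)
          {   size_t const got = ZSTDv06_decompressContinue(dctx, outPtr, outCap, inBuf, need);
              (got may be 0 : only a header was decoded; otherwise advance outPtr by got)
          }
      }
      ZSTDv06_freeDCtx(dctx);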
ZSTDv06_decompressContinue() needs previous data blocks during decompression, up to (1 << windowlog). They should preferably be located contiguously, prior to current block. Alternatively, a round buffer is also possible. @result of ZSTDv06_decompressContinue() is the number of bytes regenerated within 'dst' (necessarily <= dstCapacity) It can be zero, which is not an error; it just means ZSTDv06_decompressContinue() has decoded some header. A frame is fully decoded when ZSTDv06_nextSrcSizeToDecompress() returns zero. Context can then be reset to start a new decompression. */ /* ************************************** * Block functions ****************************************/ /*! Block functions produce and decode raw zstd blocks, without frame metadata. User will have to take in charge required information to regenerate data, such as compressed and content sizes. A few rules to respect : - Uncompressed block size must be <= ZSTDv06_BLOCKSIZE_MAX (128 KB) - Compressing or decompressing requires a context structure + Use ZSTDv06_createCCtx() and ZSTDv06_createDCtx() - It is necessary to init context before starting + compression : ZSTDv06_compressBegin() + decompression : ZSTDv06_decompressBegin() + variants _usingDict() are also allowed + copyCCtx() and copyDCtx() work too - When a block is considered not compressible enough, ZSTDv06_compressBlock() result will be zero. In which case, nothing is produced into `dst`. + User must test for such outcome and deal directly with uncompressed data + ZSTDv06_decompressBlock() doesn't accept uncompressed data as input !! */ #define ZSTDv06_BLOCKSIZE_MAX (128 * 1024) /* define, for static allocation */ ZSTDLIBv06_API size_t ZSTDv06_decompressBlock(ZSTDv06_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize); #if defined (__cplusplus) } #endif #endif /* ZSTDv06_STATIC_H */ /* zstd_internal - common functions to include Header File for include Copyright (C) 2014-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd homepage : https://www.zstd.net */ #ifndef ZSTDv06_CCOMMON_H_MODULE #define ZSTDv06_CCOMMON_H_MODULE /*-************************************* * Common macros ***************************************/ #define MIN(a,b) ((a)<(b) ? 
(a) : (b)) #define MAX(a,b) ((a)>(b) ? (a) : (b)) /*-************************************* * Common constants ***************************************/ #define ZSTDv06_DICT_MAGIC 0xEC30A436 #define ZSTDv06_REP_NUM 3 #define ZSTDv06_REP_INIT ZSTDv06_REP_NUM #define ZSTDv06_REP_MOVE (ZSTDv06_REP_NUM-1) #define KB *(1 <<10) #define MB *(1 <<20) #define GB *(1U<<30) #define BIT7 128 #define BIT6 64 #define BIT5 32 #define BIT4 16 #define BIT1 2 #define BIT0 1 #define ZSTDv06_WINDOWLOG_ABSOLUTEMIN 12 static const size_t ZSTDv06_fcs_fieldSize[4] = { 0, 1, 2, 8 }; #define ZSTDv06_BLOCKHEADERSIZE 3 /* because C standard does not allow a static const value to be defined using another static const value .... :( */ static const size_t ZSTDv06_blockHeaderSize = ZSTDv06_BLOCKHEADERSIZE; typedef enum { bt_compressed, bt_raw, bt_rle, bt_end } blockType_t; #define MIN_SEQUENCES_SIZE 1 /* nbSeq==0 */ #define MIN_CBLOCK_SIZE (1 /*litCSize*/ + 1 /* RLE or RAW */ + MIN_SEQUENCES_SIZE /* nbSeq==0 */) /* for a non-null block */ #define HufLog 12 #define IS_HUF 0 #define IS_PCH 1 #define IS_RAW 2 #define IS_RLE 3 #define LONGNBSEQ 0x7F00 #define MINMATCH 3 #define EQUAL_READ32 4 #define REPCODE_STARTVALUE 1 #define Litbits 8 #define MaxLit ((1< /* support for bextr (experimental) */ #endif /*-******************************************** * bitStream decoding API (read backward) **********************************************/ typedef struct { size_t bitContainer; unsigned bitsConsumed; const char* ptr; const char* start; } BITv06_DStream_t; typedef enum { BITv06_DStream_unfinished = 0, BITv06_DStream_endOfBuffer = 1, BITv06_DStream_completed = 2, BITv06_DStream_overflow = 3 } BITv06_DStream_status; /* result of BITv06_reloadDStream() */ /* 1,2,4,8 would be better for bitmap combinations, but slows down performance a bit ... :( */ MEM_STATIC size_t BITv06_initDStream(BITv06_DStream_t* bitD, const void* srcBuffer, size_t srcSize); MEM_STATIC size_t BITv06_readBits(BITv06_DStream_t* bitD, unsigned nbBits); MEM_STATIC BITv06_DStream_status BITv06_reloadDStream(BITv06_DStream_t* bitD); MEM_STATIC unsigned BITv06_endOfDStream(const BITv06_DStream_t* bitD); /* Start by invoking BITv06_initDStream(). * A chunk of the bitStream is then stored into a local register. * Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (size_t). * You can then retrieve bitFields stored into the local register, **in reverse order**. * Local register is explicitly reloaded from memory by the BITv06_reloadDStream() method. * A reload guarantee a minimum of ((8*sizeof(bitD->bitContainer))-7) bits when its result is BITv06_DStream_unfinished. * Otherwise, it can be less than that, so proceed accordingly. * Checking if DStream has reached its end can be performed with BITv06_endOfDStream(). 
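   A minimal sketch of that read-backward pattern, under the guarantees stated
   above (the nbBits of each field must mirror how the stream was written;
   error checks elided):

       BITv06_DStream_t bitD;
       BITv06_initDStream(&bitD, srcBuffer, srcSize);
       do {
           size_t const field = BITv06_readBits(&bitD, nbBits);
           (consume field ... fields come back in reverse write order)
       } while (BITv06_reloadDStream(&bitD) == BITv06_DStream_unfinished);
       (finally, BITv06_endOfDStream(&bitD) reports whether all bits were consumed)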
*/ /*-**************************************** * unsafe API ******************************************/ MEM_STATIC size_t BITv06_readBitsFast(BITv06_DStream_t* bitD, unsigned nbBits); /* faster, but works only if nbBits >= 1 */ /*-************************************************************** * Internal functions ****************************************************************/ MEM_STATIC unsigned BITv06_highbit32 (register U32 val) { # if defined(_MSC_VER) /* Visual */ unsigned long r=0; _BitScanReverse ( &r, val ); return (unsigned) r; # elif defined(__GNUC__) && (__GNUC__ >= 3) /* Use GCC Intrinsic */ return 31 - __builtin_clz (val); # else /* Software version */ static const unsigned DeBruijnClz[32] = { 0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31 }; U32 v = val; unsigned r; v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v |= v >> 16; r = DeBruijnClz[ (U32) (v * 0x07C4ACDDU) >> 27]; return r; # endif } /*-******************************************************** * bitStream decoding **********************************************************/ /*! BITv06_initDStream() : * Initialize a BITv06_DStream_t. * `bitD` : a pointer to an already allocated BITv06_DStream_t structure. * `srcSize` must be the *exact* size of the bitStream, in bytes. * @return : size of stream (== srcSize) or an errorCode if a problem is detected */ MEM_STATIC size_t BITv06_initDStream(BITv06_DStream_t* bitD, const void* srcBuffer, size_t srcSize) { if (srcSize < 1) { memset(bitD, 0, sizeof(*bitD)); return ERROR(srcSize_wrong); } if (srcSize >= sizeof(bitD->bitContainer)) { /* normal case */ bitD->start = (const char*)srcBuffer; bitD->ptr = (const char*)srcBuffer + srcSize - sizeof(bitD->bitContainer); bitD->bitContainer = MEM_readLEST(bitD->ptr); { BYTE const lastByte = ((const BYTE*)srcBuffer)[srcSize-1]; if (lastByte == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BITv06_highbit32(lastByte); } } else { bitD->start = (const char*)srcBuffer; bitD->ptr = bitD->start; bitD->bitContainer = *(const BYTE*)(bitD->start); switch(srcSize) { case 7: bitD->bitContainer += (size_t)(((const BYTE*)(srcBuffer))[6]) << (sizeof(bitD->bitContainer)*8 - 16);/* fall-through */ case 6: bitD->bitContainer += (size_t)(((const BYTE*)(srcBuffer))[5]) << (sizeof(bitD->bitContainer)*8 - 24);/* fall-through */ case 5: bitD->bitContainer += (size_t)(((const BYTE*)(srcBuffer))[4]) << (sizeof(bitD->bitContainer)*8 - 32);/* fall-through */ case 4: bitD->bitContainer += (size_t)(((const BYTE*)(srcBuffer))[3]) << 24; /* fall-through */ case 3: bitD->bitContainer += (size_t)(((const BYTE*)(srcBuffer))[2]) << 16; /* fall-through */ case 2: bitD->bitContainer += (size_t)(((const BYTE*)(srcBuffer))[1]) << 8; /* fall-through */ default: break; } { BYTE const lastByte = ((const BYTE*)srcBuffer)[srcSize-1]; if (lastByte == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BITv06_highbit32(lastByte); } bitD->bitsConsumed += (U32)(sizeof(bitD->bitContainer) - srcSize)*8; } return srcSize; } /*! BITv06_lookBits() : * Provides next n bits from local register. * local register is not modified. * On 32-bits, maxNbBits==24. * On 64-bits, maxNbBits==56. * @return : value extracted */ MEM_STATIC size_t BITv06_lookBits(const BITv06_DStream_t* bitD, U32 nbBits) { U32 const bitMask = sizeof(bitD->bitContainer)*8 - 1; return ((bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> 1) >> ((bitMask-nbBits) & bitMask); } /*! 
BITv06_lookBitsFast() : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BITv06_lookBitsFast(const BITv06_DStream_t* bitD, U32 nbBits) { U32 const bitMask = sizeof(bitD->bitContainer)*8 - 1; return (bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> (((bitMask+1)-nbBits) & bitMask); } MEM_STATIC void BITv06_skipBits(BITv06_DStream_t* bitD, U32 nbBits) { bitD->bitsConsumed += nbBits; } /*! BITv06_readBits() : * Read (consume) next n bits from local register and update. * Pay attention to not read more than nbBits contained into local register. * @return : extracted value. */ MEM_STATIC size_t BITv06_readBits(BITv06_DStream_t* bitD, U32 nbBits) { size_t const value = BITv06_lookBits(bitD, nbBits); BITv06_skipBits(bitD, nbBits); return value; } /*! BITv06_readBitsFast() : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BITv06_readBitsFast(BITv06_DStream_t* bitD, U32 nbBits) { size_t const value = BITv06_lookBitsFast(bitD, nbBits); BITv06_skipBits(bitD, nbBits); return value; } /*! BITv06_reloadDStream() : * Refill `BITv06_DStream_t` from src buffer previously defined (see BITv06_initDStream() ). * This function is safe, it guarantees it will not read beyond src buffer. * @return : status of `BITv06_DStream_t` internal register. if status == unfinished, internal register is filled with >= (sizeof(bitD->bitContainer)*8 - 7) bits */ MEM_STATIC BITv06_DStream_status BITv06_reloadDStream(BITv06_DStream_t* bitD) { if (bitD->bitsConsumed > (sizeof(bitD->bitContainer)*8)) /* should never happen */ return BITv06_DStream_overflow; if (bitD->ptr >= bitD->start + sizeof(bitD->bitContainer)) { bitD->ptr -= bitD->bitsConsumed >> 3; bitD->bitsConsumed &= 7; bitD->bitContainer = MEM_readLEST(bitD->ptr); return BITv06_DStream_unfinished; } if (bitD->ptr == bitD->start) { if (bitD->bitsConsumed < sizeof(bitD->bitContainer)*8) return BITv06_DStream_endOfBuffer; return BITv06_DStream_completed; } { U32 nbBytes = bitD->bitsConsumed >> 3; BITv06_DStream_status result = BITv06_DStream_unfinished; if (bitD->ptr - nbBytes < bitD->start) { nbBytes = (U32)(bitD->ptr - bitD->start); /* ptr > start */ result = BITv06_DStream_endOfBuffer; } bitD->ptr -= nbBytes; bitD->bitsConsumed -= nbBytes*8; bitD->bitContainer = MEM_readLEST(bitD->ptr); /* reminder : srcSize > sizeof(bitD) */ return result; } } /*! BITv06_endOfDStream() : * @return Tells if DStream has exactly reached its end (all bits consumed). */ MEM_STATIC unsigned BITv06_endOfDStream(const BITv06_DStream_t* DStream) { return ((DStream->ptr == DStream->start) && (DStream->bitsConsumed == sizeof(DStream->bitContainer)*8)); } #if defined (__cplusplus) } #endif #endif /* BITSTREAM_H_MODULE */ /* ****************************************************************** FSE : Finite State Entropy coder header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef FSEv06_STATIC_H #define FSEv06_STATIC_H #if defined (__cplusplus) extern "C" { #endif /* ***************************************** * Static allocation *******************************************/ /* FSE buffer bounds */ #define FSEv06_NCOUNTBOUND 512 #define FSEv06_BLOCKBOUND(size) (size + (size>>7)) #define FSEv06_COMPRESSBOUND(size) (FSEv06_NCOUNTBOUND + FSEv06_BLOCKBOUND(size)) /* Macro version, useful for static allocation */ /* It is possible to statically allocate FSE CTable/DTable as a table of unsigned using below macros */ #define FSEv06_DTABLE_SIZE_U32(maxTableLog) (1 + (1<= BITv06_DStream_completed When it's done, verify decompression is fully completed, by checking both DStream and the relevant states. Checking if DStream has reached its end is performed by : BITv06_endOfDStream(&DStream); Check also the states. There might be some symbols left there, if some high probability ones (>50%) are possible. 
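   A hedged sketch of the decode loop this describes (symbol output elided;
   DStream/DState as initialized above; real decoders interleave several states):

       while (BITv06_reloadDStream(&DStream) == BITv06_DStream_unfinished) {
           BYTE const sym = FSEv06_decodeSymbol(&DState, &DStream);
           (emit sym)
       }

   and the per-state completion check is :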
FSEv06_endOfDState(&DState); */ /* ***************************************** * FSE unsafe API *******************************************/ static unsigned char FSEv06_decodeSymbolFast(FSEv06_DState_t* DStatePtr, BITv06_DStream_t* bitD); /* faster, but works only if nbBits is always >= 1 (otherwise, result will be corrupted) */ /* ***************************************** * Implementation of inlined functions *******************************************/ /* ====== Decompression ====== */ typedef struct { U16 tableLog; U16 fastMode; } FSEv06_DTableHeader; /* sizeof U32 */ typedef struct { unsigned short newState; unsigned char symbol; unsigned char nbBits; } FSEv06_decode_t; /* size == U32 */ MEM_STATIC void FSEv06_initDState(FSEv06_DState_t* DStatePtr, BITv06_DStream_t* bitD, const FSEv06_DTable* dt) { const void* ptr = dt; const FSEv06_DTableHeader* const DTableH = (const FSEv06_DTableHeader*)ptr; DStatePtr->state = BITv06_readBits(bitD, DTableH->tableLog); BITv06_reloadDStream(bitD); DStatePtr->table = dt + 1; } MEM_STATIC BYTE FSEv06_peekSymbol(const FSEv06_DState_t* DStatePtr) { FSEv06_decode_t const DInfo = ((const FSEv06_decode_t*)(DStatePtr->table))[DStatePtr->state]; return DInfo.symbol; } MEM_STATIC void FSEv06_updateState(FSEv06_DState_t* DStatePtr, BITv06_DStream_t* bitD) { FSEv06_decode_t const DInfo = ((const FSEv06_decode_t*)(DStatePtr->table))[DStatePtr->state]; U32 const nbBits = DInfo.nbBits; size_t const lowBits = BITv06_readBits(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; } MEM_STATIC BYTE FSEv06_decodeSymbol(FSEv06_DState_t* DStatePtr, BITv06_DStream_t* bitD) { FSEv06_decode_t const DInfo = ((const FSEv06_decode_t*)(DStatePtr->table))[DStatePtr->state]; U32 const nbBits = DInfo.nbBits; BYTE const symbol = DInfo.symbol; size_t const lowBits = BITv06_readBits(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; return symbol; } /*! FSEv06_decodeSymbolFast() : unsafe, only works if no symbol has a probability > 50% */ MEM_STATIC BYTE FSEv06_decodeSymbolFast(FSEv06_DState_t* DStatePtr, BITv06_DStream_t* bitD) { FSEv06_decode_t const DInfo = ((const FSEv06_decode_t*)(DStatePtr->table))[DStatePtr->state]; U32 const nbBits = DInfo.nbBits; BYTE const symbol = DInfo.symbol; size_t const lowBits = BITv06_readBitsFast(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; return symbol; } #ifndef FSEv06_COMMONDEFS_ONLY /* ************************************************************** * Tuning parameters ****************************************************************/ /*!MEMORY_USAGE : * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.) * Increasing memory usage improves compression ratio * Reduced memory usage can improve speed, due to cache effect * Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */ #define FSEv06_MAX_MEMORY_USAGE 14 #define FSEv06_DEFAULT_MEMORY_USAGE 13 /*!FSEv06_MAX_SYMBOL_VALUE : * Maximum symbol value authorized. 
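*   Worked numbers for the two knobs above: FSEv06_MAX_MEMORY_USAGE 14 means decode
*   tables of at most 2^(14-2) = 4096 cells (see FSEv06_MAX_TABLELOG below); at one
*   U32 per FSEv06_decode_t cell that is 16KB, matching the "16KB ... L1 cache"
*   recommendation above.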
*   Required for proper stack allocation */
#define FSEv06_MAX_SYMBOL_VALUE 255


/* **************************************************************
*  template functions type & suffix
****************************************************************/
#define FSEv06_FUNCTION_TYPE BYTE
#define FSEv06_FUNCTION_EXTENSION
#define FSEv06_DECODE_TYPE FSEv06_decode_t


#endif   /* !FSEv06_COMMONDEFS_ONLY */


/* ***************************************************************
*  Constants
*****************************************************************/
#define FSEv06_MAX_TABLELOG  (FSEv06_MAX_MEMORY_USAGE-2)
#define FSEv06_MAX_TABLESIZE (1U<<FSEv06_MAX_TABLELOG)
#define FSEv06_MAXTABLESIZE_MASK (FSEv06_MAX_TABLESIZE-1)
#define FSEv06_DEFAULT_TABLELOG (FSEv06_DEFAULT_MEMORY_USAGE-2)
#define FSEv06_MIN_TABLELOG 5

#define FSEv06_TABLELOG_ABSOLUTE_MAX 15
#if FSEv06_MAX_TABLELOG > FSEv06_TABLELOG_ABSOLUTE_MAX
#  error "FSEv06_MAX_TABLELOG > FSEv06_TABLELOG_ABSOLUTE_MAX is not supported"
#endif

#define FSEv06_TABLESTEP(tableSize) ((tableSize>>1) + (tableSize>>3) + 3)


#if defined (__cplusplus)
}
#endif

#endif  /* FSEv06_STATIC_H */

/*
   Common functions of New Generation Entropy library
   Copyright (C) 2016, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    You can contact the author at :
    - FSE+HUF source repository : https://github.com/Cyan4973/FiniteStateEntropy
    - Public forum : https://groups.google.com/forum/#!forum/lz4c
*************************************************************************** */

/*-****************************************
*  FSE Error Management
******************************************/
unsigned FSEv06_isError(size_t code) { return ERR_isError(code); }

const char* FSEv06_getErrorName(size_t code) { return ERR_getErrorName(code); }


/* **************************************************************
*  HUF Error Management
****************************************************************/
unsigned HUFv06_isError(size_t code) { return ERR_isError(code); }

const char* HUFv06_getErrorName(size_t code) { return ERR_getErrorName(code); }


/*-**************************************************************
*  FSE NCount encoding-decoding
****************************************************************/
static short FSEv06_abs(short a) { return a<0 ?
size_t FSEv06_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
                 const void* headerBuffer, size_t hbSize)
{
    const BYTE* const istart = (const BYTE*) headerBuffer;
    const BYTE* const iend = istart + hbSize;
    const BYTE* ip = istart;
    int nbBits;
    int remaining;
    int threshold;
    U32 bitStream;
    int bitCount;
    unsigned charnum = 0;
    int previous0 = 0;

    if (hbSize < 4) return ERROR(srcSize_wrong);
    bitStream = MEM_readLE32(ip);
    nbBits = (bitStream & 0xF) + FSEv06_MIN_TABLELOG;   /* extract tableLog */
    if (nbBits > FSEv06_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge);
    bitStream >>= 4;
    bitCount = 4;
    *tableLogPtr = nbBits;
    remaining = (1<<nbBits)+1;
    threshold = 1<<nbBits;
    nbBits++;

    while ((remaining>1) && (charnum<=*maxSVPtr)) {
        if (previous0) {
            unsigned n0 = charnum;
            while ((bitStream & 0xFFFF) == 0xFFFF) {
                n0+=24;
                if (ip < iend-5) {
                    ip+=2;
                    bitStream = MEM_readLE32(ip) >> bitCount;
                } else {
                    bitStream >>= 16;
                    bitCount+=16;
            }   }
            while ((bitStream & 3) == 3) {
                n0+=3;
                bitStream>>=2;
                bitCount+=2;
            }
            n0 += bitStream & 3;
            bitCount += 2;
            if (n0 > *maxSVPtr) return ERROR(maxSymbolValue_tooSmall);
            while (charnum < n0) normalizedCounter[charnum++] = 0;
            if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) {
                ip += bitCount>>3;
                bitCount &= 7;
                bitStream = MEM_readLE32(ip) >> bitCount;
            }
            else
                bitStream >>= 2;
        }
        {   short const max = (short)((2*threshold-1)-remaining);
            short count;

            if ((bitStream & (threshold-1)) < (U32)max) {
                count = (short)(bitStream & (threshold-1));
                bitCount   += nbBits-1;
            } else {
                count = (short)(bitStream & (2*threshold-1));
                if (count >= threshold) count -= max;
                bitCount   += nbBits;
            }

            count--;   /* extra accuracy */
            remaining -= FSEv06_abs(count);
            normalizedCounter[charnum++] = count;
            previous0 = !count;
            while (remaining < threshold) {
                nbBits--;
                threshold >>= 1;
            }

            if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) {
                ip += bitCount>>3;
                bitCount &= 7;
            } else {
                bitCount -= (int)(8 * (iend - 4 - ip));
                ip = iend - 4;
            }
            bitStream = MEM_readLE32(ip) >> (bitCount & 31);
    }   }   /* while ((remaining>1) && (charnum<=*maxSVPtr)) */
    if (remaining != 1) return ERROR(GENERIC);
    *maxSVPtr = charnum-1;

    ip += (bitCount+7)>>3;
    if ((size_t)(ip-istart) > hbSize) return ERROR(srcSize_wrong);
    return ip-istart;
}


/* ******************************************************************
   FSE : Finite State Entropy decoder
   Copyright (C) 2013-2015, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
   - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
   - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */
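/* Decoding model (illustrative note with a minimal sketch of the per-symbol step;
   the variable names are assumptions) : an FSE decoder walks a state machine in
   which each state maps to exactly one symbol, and the successor state is rebuilt
   from `newState` plus `nbBits` fresh bits, exactly as FSEv06_decodeSymbol() above
   implements it :

     symbol = table[state].symbol;
     state  = table[state].newState + BITv06_readBits(&bitD, table[state].nbBits);

   With tableLog==5 a state lies in [0..31] and each decoded symbol consumes between
   0 and 5 bits, so more probable symbols cost fewer bits on average. */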
/* **************************************************************
*  Compiler specifics
****************************************************************/
#ifdef _MSC_VER    /* Visual Studio */
#  define FORCE_INLINE static __forceinline
#  include <intrin.h>                    /* For Visual 2005 */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4214)        /* disable: C4214: non-int bitfields */
#else
#  if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
#    ifdef __GNUC__
#      define FORCE_INLINE static inline __attribute__((always_inline))
#    else
#      define FORCE_INLINE static inline
#    endif
#  else
#    define FORCE_INLINE static
#  endif /* __STDC_VERSION__ */
#endif


/* **************************************************************
*  Error Management
****************************************************************/
#define FSEv06_isError ERR_isError
#define FSEv06_STATIC_ASSERT(c) { enum { FSEv06_static_assert = 1/(int)(!!(c)) }; }   /* use only *after* variable declarations */


/* **************************************************************
*  Complex types
****************************************************************/
typedef U32 DTable_max_t[FSEv06_DTABLE_SIZE_U32(FSEv06_MAX_TABLELOG)];


/* **************************************************************
*  Templates
****************************************************************/
/*
  designed to be included
  for type-specific functions (template emulation in C)
  Objective is to write these functions only once, for improved maintenance
*/

/* safety checks */
#ifndef FSEv06_FUNCTION_EXTENSION
#  error "FSEv06_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSEv06_FUNCTION_TYPE
#  error "FSEv06_FUNCTION_TYPE must be defined"
#endif

/* Function names */
#define FSEv06_CAT(X,Y) X##Y
#define FSEv06_FUNCTION_NAME(X,Y) FSEv06_CAT(X,Y)
#define FSEv06_TYPE_NAME(X,Y) FSEv06_CAT(X,Y)


/* Function templates */
FSEv06_DTable* FSEv06_createDTable (unsigned tableLog)
{
    if (tableLog > FSEv06_TABLELOG_ABSOLUTE_MAX) tableLog = FSEv06_TABLELOG_ABSOLUTE_MAX;
    return (FSEv06_DTable*)malloc( FSEv06_DTABLE_SIZE_U32(tableLog) * sizeof (U32) );
}

void FSEv06_freeDTable (FSEv06_DTable* dt)
{
    free(dt);
}

size_t FSEv06_buildDTable(FSEv06_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
    void* const tdPtr = dt+1;   /* because *dt is unsigned, 32-bits aligned on 32-bits */
    FSEv06_DECODE_TYPE* const tableDecode = (FSEv06_DECODE_TYPE*) (tdPtr);
    U16 symbolNext[FSEv06_MAX_SYMBOL_VALUE+1];

    U32 const maxSV1 = maxSymbolValue + 1;
    U32 const tableSize = 1 << tableLog;
    U32 highThreshold = tableSize-1;

    /* Sanity Checks */
    if (maxSymbolValue > FSEv06_MAX_SYMBOL_VALUE) return ERROR(maxSymbolValue_tooLarge);
    if (tableLog > FSEv06_MAX_TABLELOG) return ERROR(tableLog_tooLarge);
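    /* Note on the spread step below (illustrative; the worked numbers are an
       example) : FSEv06_TABLESTEP(tableSize) is always odd, hence coprime with the
       power-of-2 table size, so `position = (position + step) & tableMask`
       enumerates every cell exactly once before wrapping back to 0.
       e.g. for tableLog==4 :
         tableSize = 16, step = (16>>1) + (16>>3) + 3 = 13,
       and gcd(13,16)==1, so 16 iterations visit all 16 cells. */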
    /* Init, lay down lowprob symbols */
    {   FSEv06_DTableHeader DTableH;
        DTableH.tableLog = (U16)tableLog;
        DTableH.fastMode = 1;
        {   S16 const largeLimit= (S16)(1 << (tableLog-1));
            U32 s;
            for (s=0; s<maxSV1; s++) {
                if (normalizedCounter[s]==-1) {
                    tableDecode[highThreshold--].symbol = (FSEv06_FUNCTION_TYPE)s;
                    symbolNext[s] = 1;
                } else {
                    if (normalizedCounter[s] >= largeLimit) DTableH.fastMode=0;
                    symbolNext[s] = normalizedCounter[s];
        }   }   }
        memcpy(dt, &DTableH, sizeof(DTableH));
    }

    /* Spread symbols */
    {   U32 const tableMask = tableSize-1;
        U32 const step = FSEv06_TABLESTEP(tableSize);
        U32 s, position = 0;
        for (s=0; s<maxSV1; s++) {
            int i;
            for (i=0; i<normalizedCounter[s]; i++) {
                tableDecode[position].symbol = (FSEv06_FUNCTION_TYPE)s;
                position = (position + step) & tableMask;
                while (position > highThreshold) position = (position + step) & tableMask;   /* lowprob area */
        }   }
        if (position!=0) return ERROR(GENERIC);   /* position must reach all cells once, otherwise normalizedCounter is incorrect */
    }

    /* Build Decoding table */
    {   U32 u;
        for (u=0; u<tableSize; u++) {
            FSEv06_FUNCTION_TYPE const symbol = (FSEv06_FUNCTION_TYPE)(tableDecode[u].symbol);
            U16 nextState = symbolNext[symbol]++;
            tableDecode[u].nbBits = (BYTE) (tableLog - BITv06_highbit32 ((U32)nextState) );
            tableDecode[u].newState = (U16) ( (nextState << tableDecode[u].nbBits) - tableSize);
    }   }

    return 0;
}


#ifndef FSEv06_COMMONDEFS_ONLY
/*-*******************************************************
*  Decompression (Byte symbols)
*********************************************************/
size_t FSEv06_buildDTable_rle (FSEv06_DTable* dt, BYTE symbolValue)
{
    void* ptr = dt;
    FSEv06_DTableHeader* const DTableH = (FSEv06_DTableHeader*)ptr;
    void* dPtr = dt + 1;
    FSEv06_decode_t* const cell = (FSEv06_decode_t*)dPtr;

    DTableH->tableLog = 0;
    DTableH->fastMode = 0;

    cell->newState = 0;
    cell->symbol = symbolValue;
    cell->nbBits = 0;

    return 0;
}


size_t FSEv06_buildDTable_raw (FSEv06_DTable* dt, unsigned nbBits)
{
    void* ptr = dt;
    FSEv06_DTableHeader* const DTableH = (FSEv06_DTableHeader*)ptr;
    void* dPtr = dt + 1;
    FSEv06_decode_t* const dinfo = (FSEv06_decode_t*)dPtr;
    const unsigned tableSize = 1 << nbBits;
    const unsigned tableMask = tableSize - 1;
    const unsigned maxSV1 = tableMask+1;
    unsigned s;

    /* Sanity checks */
    if (nbBits < 1) return ERROR(GENERIC);         /* min size */

    /* Build Decoding Table */
    DTableH->tableLog = (U16)nbBits;
    DTableH->fastMode = 1;
    for (s=0; s<maxSV1; s++) {
        dinfo[s].newState = 0;
        dinfo[s].symbol = (BYTE)s;
        dinfo[s].nbBits = (BYTE)nbBits;
    }

    return 0;
}

FORCE_INLINE size_t FSEv06_decompress_usingDTable_generic(
          void* dst, size_t maxDstSize,
    const void* cSrc, size_t cSrcSize,
    const FSEv06_DTable* dt, const unsigned fast)
{
    BYTE* const ostart = (BYTE*) dst;
    BYTE* op = ostart;
    BYTE* const omax = op + maxDstSize;
    BYTE* const olimit = omax-3;

    BITv06_DStream_t bitD;
    FSEv06_DState_t state1;
    FSEv06_DState_t state2;

    /* Init */
    { size_t const errorCode = BITv06_initDStream(&bitD, cSrc, cSrcSize);   /* replaced last arg by maxCompressed Size */
      if (FSEv06_isError(errorCode)) return errorCode; }

    FSEv06_initDState(&state1, &bitD, dt);
    FSEv06_initDState(&state2, &bitD, dt);

#define FSEv06_GETSYMBOL(statePtr) fast ? FSEv06_decodeSymbolFast(statePtr, &bitD) : FSEv06_decodeSymbol(statePtr, &bitD)

    /* 4 symbols per loop */
    for ( ; (BITv06_reloadDStream(&bitD)==BITv06_DStream_unfinished) && (op<olimit) ; op+=4) {
        op[0] = FSEv06_GETSYMBOL(&state1);

        if (FSEv06_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8)    /* This test must be static */
            BITv06_reloadDStream(&bitD);

        op[1] = FSEv06_GETSYMBOL(&state2);

        if (FSEv06_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8)    /* This test must be static */
            { if (BITv06_reloadDStream(&bitD) > BITv06_DStream_unfinished) { op+=2; break; } }

        op[2] = FSEv06_GETSYMBOL(&state1);

        if (FSEv06_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8)    /* This test must be static */
            BITv06_reloadDStream(&bitD);

        op[3] = FSEv06_GETSYMBOL(&state2);
    }

    /* tail */
    /* note : BITv06_reloadDStream(&bitD) >= FSEv06_DStream_partiallyFilled; Ends at exactly BITv06_DStream_completed */
    while (1) {
        if (op>(omax-2)) return ERROR(dstSize_tooSmall);

        *op++ = FSEv06_GETSYMBOL(&state1);

        if (BITv06_reloadDStream(&bitD)==BITv06_DStream_overflow) {
            *op++ = FSEv06_GETSYMBOL(&state2);
            break;
        }

        if (op>(omax-2)) return ERROR(dstSize_tooSmall);

        *op++ = FSEv06_GETSYMBOL(&state2);

        if (BITv06_reloadDStream(&bitD)==BITv06_DStream_overflow) {
            *op++ = FSEv06_GETSYMBOL(&state1);
            break;
    }   }

    return op-ostart;
}


size_t FSEv06_decompress_usingDTable(void* dst, size_t originalSize,
                            const void* cSrc, size_t cSrcSize,
                            const FSEv06_DTable* dt)
{
    const void* ptr = dt;
    const FSEv06_DTableHeader* DTableH = (const FSEv06_DTableHeader*)ptr;
    const U32 fastMode = DTableH->fastMode;

    /* select fast mode (static) */
    if (fastMode) return FSEv06_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1);
    return FSEv06_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0);
}


size_t FSEv06_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize)
{
    const BYTE* const istart = (const BYTE*)cSrc;
    const BYTE* ip = istart;
    short counting[FSEv06_MAX_SYMBOL_VALUE+1];
    DTable_max_t dt;   /* Static analyzer seems unable to understand this table will be properly initialized later */
    unsigned tableLog;
    unsigned maxSymbolValue = FSEv06_MAX_SYMBOL_VALUE;

    if (cSrcSize<2) return ERROR(srcSize_wrong);   /* too small input size */

    /* normal FSE decoding mode */
    {   size_t const NCountLength = FSEv06_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize);
        if (FSEv06_isError(NCountLength)) return NCountLength;
        if (NCountLength
>= cSrcSize) return ERROR(srcSize_wrong); /* too small input size */ ip += NCountLength; cSrcSize -= NCountLength; } { size_t const errorCode = FSEv06_buildDTable (dt, counting, maxSymbolValue, tableLog); if (FSEv06_isError(errorCode)) return errorCode; } return FSEv06_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt); /* always return, even if it is an error code */ } #endif /* FSEv06_COMMONDEFS_ONLY */ /* ****************************************************************** Huffman coder, part of New Generation Entropy library header file Copyright (C) 2013-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy ****************************************************************** */ #ifndef HUFv06_H #define HUFv06_H #if defined (__cplusplus) extern "C" { #endif /* **************************************** * HUF simple functions ******************************************/ size_t HUFv06_decompress(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /* HUFv06_decompress() : Decompress HUF data from buffer 'cSrc', of size 'cSrcSize', into already allocated destination buffer 'dst', of size 'dstSize'. `dstSize` : must be the **exact** size of original (uncompressed) data. Note : in contrast with FSE, HUFv06_decompress can regenerate RLE (cSrcSize==1) and uncompressed (cSrcSize==dstSize) data, because it knows size to regenerate. 
@return : size of regenerated data (== dstSize)
          or an error code, which can be tested using HUFv06_isError()
*/


/* ****************************************
*  Tool functions
******************************************/
size_t HUFv06_compressBound(size_t size);       /**< maximum compressed size */


#if defined (__cplusplus)
}
#endif

#endif   /* HUFv06_H */

/* ******************************************************************
   Huffman codec, part of New Generation Entropy library
   header file, for static linking only
   Copyright (C) 2013-2016, Yann Collet

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
   - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
****************************************************************** */
#ifndef HUFv06_STATIC_H
#define HUFv06_STATIC_H

#if defined (__cplusplus)
extern "C" {
#endif


/* ****************************************
*  Static allocation
******************************************/
/* HUF buffer bounds */
#define HUFv06_CTABLEBOUND 129
#define HUFv06_BLOCKBOUND(size) (size + (size>>8) + 8)   /* only true if incompressible pre-filtered with fast heuristic */
#define HUFv06_COMPRESSBOUND(size) (HUFv06_CTABLEBOUND + HUFv06_BLOCKBOUND(size))   /* Macro version, useful for static allocation */

/* static allocation of HUF's DTable */
#define HUFv06_DTABLE_SIZE(maxTableLog)   (1 + (1<<maxTableLog))
#define HUFv06_CREATE_STATIC_DTABLEX2(DTable, maxTableLog) \
        unsigned short DTable[HUFv06_DTABLE_SIZE(maxTableLog)] = { maxTableLog }
#define HUFv06_CREATE_STATIC_DTABLEX4(DTable, maxTableLog) \
        unsigned int DTable[HUFv06_DTABLE_SIZE(maxTableLog)] = { maxTableLog }
#define HUFv06_CREATE_STATIC_DTABLEX6(DTable, maxTableLog) \
        unsigned int DTable[HUFv06_DTABLE_SIZE(maxTableLog) * 3 / 2] = { maxTableLog }


/* ****************************************
*  Advanced decompression functions
******************************************/
size_t HUFv06_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* single-symbol decoder */
size_t HUFv06_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* double-symbols decoder */


/* ****************************************
*  HUF detailed API
******************************************/
/*!
HUFv06_decompress() does the following:
1. select the decompression algorithm (X2, X4, X6) based on pre-computed heuristics
2. build Huffman table from save, using HUFv06_readDTableXn()
3. decode 1 or 4 segments in parallel using HUFv06_decompressSXn_usingDTable
*/
size_t HUFv06_readDTableX2 (unsigned short* DTable, const void* src, size_t srcSize);
size_t HUFv06_readDTableX4 (unsigned* DTable, const void* src, size_t srcSize);

size_t HUFv06_decompress4X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const unsigned short* DTable);
size_t HUFv06_decompress4X4_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const unsigned* DTable);


/* single stream variants */
size_t HUFv06_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* single-symbol decoder */
size_t HUFv06_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* double-symbol decoder */

size_t HUFv06_decompress1X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const unsigned short* DTable);
size_t HUFv06_decompress1X4_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const unsigned* DTable);


/* **************************************************************
*  Constants
****************************************************************/
#define HUFv06_ABSOLUTEMAX_TABLELOG  16   /* absolute limit of HUFv06_MAX_TABLELOG. Beyond that value, code does not work */
#define HUFv06_MAX_TABLELOG  12           /* max configured tableLog (for static allocation); can be modified up to HUFv06_ABSOLUTEMAX_TABLELOG */
#define HUFv06_DEFAULT_TABLELOG  HUFv06_MAX_TABLELOG   /* tableLog by default, when not specified */
#define HUFv06_MAX_SYMBOL_VALUE 255
#if (HUFv06_MAX_TABLELOG > HUFv06_ABSOLUTEMAX_TABLELOG)
#  error "HUFv06_MAX_TABLELOG is too large !"
#endif


/*! HUFv06_readStats() :
    Read compact Huffman tree, saved by HUFv06_writeCTable().
    `huffWeight` is destination buffer.
    @return : size read from `src`
*/
MEM_STATIC size_t HUFv06_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
                            U32* nbSymbolsPtr, U32* tableLogPtr,
                            const void* src, size_t srcSize)
{
    U32 weightTotal;
    const BYTE* ip = (const BYTE*) src;
    size_t iSize;
    size_t oSize;

    if (!srcSize) return ERROR(srcSize_wrong);
    iSize = ip[0];
    //memset(huffWeight, 0, hwSize);   /* is not necessary, even though some analyzer complain ... */
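    /* Header byte guide (illustrative summary of the three branches handled below;
       the worked example is an assumption) :
         iSize >= 242 : RLE     - l[iSize-242] weights, all equal to 1, no payload
         iSize >= 128 : direct  - oSize = iSize-127 weights, 4 bits each, nibble-packed
         iSize <  128 : FSE     - iSize bytes of FSE-compressed weights
       e.g. a first byte of 131 announces oSize = 131-127 = 4 direct weights packed
       into iSize = (4+1)/2 = 2 payload bytes. */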
    if (iSize >= 128)  { /* special header */
        if (iSize >= (242)) {  /* RLE */
            static U32 l[14] = { 1, 2, 3, 4, 7, 8, 15, 16, 31, 32, 63, 64, 127, 128 };
            oSize = l[iSize-242];
            memset(huffWeight, 1, hwSize);
            iSize = 0;
        }
        else {   /* Incompressible */
            oSize = iSize - 127;
            iSize = ((oSize+1)/2);
            if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
            if (oSize >= hwSize) return ERROR(corruption_detected);
            ip += 1;
            {   U32 n;
                for (n=0; n<oSize; n+=2) {
                    huffWeight[n]   = ip[n/2] >> 4;
                    huffWeight[n+1] = ip[n/2] & 15;
    }   }   }   }
    else  {   /* header compressed with FSE (normal case) */
        if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
        oSize = FSEv06_decompress(huffWeight, hwSize-1, ip+1, iSize);   /* max (hwSize-1) values decoded, as last one is implied */
        if (FSEv06_isError(oSize)) return oSize;
    }

    /* collect weight stats */
    memset(rankStats, 0, (HUFv06_ABSOLUTEMAX_TABLELOG + 1) * sizeof(U32));
    weightTotal = 0;
    {   U32 n; for (n=0; n<oSize; n++) {
            if (huffWeight[n] >= HUFv06_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected);
            rankStats[huffWeight[n]]++;
            weightTotal += (1 << huffWeight[n]) >> 1;
    }   }
    if (weightTotal == 0) return ERROR(corruption_detected);

    /* get last non-null symbol weight (implied, total must be 2^n) */
    {   U32 const tableLog = BITv06_highbit32(weightTotal) + 1;
        if (tableLog > HUFv06_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected);
        *tableLogPtr = tableLog;
        /* determine last weight */
        {   U32 const total = 1 << tableLog;
            U32 const rest = total - weightTotal;
            U32 const verif = 1 << BITv06_highbit32(rest);
            U32 const lastWeight = BITv06_highbit32(rest) + 1;
            if (verif != rest) return ERROR(corruption_detected);    /* last value must be a clean power of 2 */
            huffWeight[oSize] = (BYTE)lastWeight;
            rankStats[lastWeight]++;
    }   }

    /* check tree construction validity */
    if ((rankStats[1] < 2) || (rankStats[1] & 1)) return ERROR(corruption_detected);   /* by construction : at least 2 elts of rank 1, must be even */

    /* results */
    *nbSymbolsPtr = (U32)(oSize+1);
    return iSize+1;
}


#if defined (__cplusplus)
}
#endif

#endif /* HUFv06_STATIC_H */

/* ******************************************************************
   Huffman decoder, part of New Generation Entropy library
   Copyright (C) 2013-2016, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at : - FSE+HUF source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ /* ************************************************************** * Compiler specifics ****************************************************************/ #if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) /* inline is defined */ #elif defined(_MSC_VER) # define inline __inline #else # define inline /* disable inline */ #endif #ifdef _MSC_VER /* Visual Studio */ # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ #endif /* ************************************************************** * Error Management ****************************************************************/ #define HUFv06_STATIC_ASSERT(c) { enum { HUFv06_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */ /* ******************************************************* * HUF : Huffman block decompression *********************************************************/ typedef struct { BYTE byte; BYTE nbBits; } HUFv06_DEltX2; /* single-symbol decoding */ typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUFv06_DEltX4; /* double-symbols decoding */ typedef struct { BYTE symbol; BYTE weight; } sortedSymbol_t; /*-***************************/ /* single-symbol decoding */ /*-***************************/ size_t HUFv06_readDTableX2 (U16* DTable, const void* src, size_t srcSize) { BYTE huffWeight[HUFv06_MAX_SYMBOL_VALUE + 1]; U32 rankVal[HUFv06_ABSOLUTEMAX_TABLELOG + 1]; /* large enough for values from 0 to 16 */ U32 tableLog = 0; size_t iSize; U32 nbSymbols = 0; U32 n; U32 nextRankStart; void* const dtPtr = DTable + 1; HUFv06_DEltX2* const dt = (HUFv06_DEltX2*)dtPtr; HUFv06_STATIC_ASSERT(sizeof(HUFv06_DEltX2) == sizeof(U16)); /* if compilation fails here, assertion is false */ //memset(huffWeight, 0, sizeof(huffWeight)); /* is not necessary, even though some analyzer complain ... 
*/

    iSize = HUFv06_readStats(huffWeight, HUFv06_MAX_SYMBOL_VALUE + 1, rankVal, &nbSymbols, &tableLog, src, srcSize);
    if (HUFv06_isError(iSize)) return iSize;

    /* check result */
    if (tableLog > DTable[0]) return ERROR(tableLog_tooLarge);   /* DTable is too small */
    DTable[0] = (U16)tableLog;   /* maybe should separate sizeof allocated DTable, from used size of DTable, in case of re-use */

    /* Prepare ranks */
    nextRankStart = 0;
    for (n=1; n<tableLog+1; n++) {
        U32 current = nextRankStart;
        nextRankStart += (rankVal[n] << (n-1));
        rankVal[n] = current;
    }

    /* fill DTable */
    for (n=0; n<nbSymbols; n++) {
        const U32 w = huffWeight[n];
        const U32 length = (1 << w) >> 1;
        U32 i;
        HUFv06_DEltX2 D;
        D.byte = (BYTE)n; D.nbBits = (BYTE)(tableLog + 1 - w);
        for (i = rankVal[w]; i < rankVal[w] + length; i++)
            dt[i] = D;
        rankVal[w] += length;
    }

    return iSize;
}

static BYTE HUFv06_decodeSymbolX2(BITv06_DStream_t* Dstream, const HUFv06_DEltX2* dt, const U32 dtLog)
{
    const size_t val = BITv06_lookBitsFast(Dstream, dtLog);   /* note : dtLog >= 1 */
    const BYTE c = dt[val].byte;
    BITv06_skipBits(Dstream, dt[val].nbBits);
    return c;
}

#define HUFv06_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \
    *ptr++ = HUFv06_decodeSymbolX2(DStreamPtr, dt, dtLog)

#define HUFv06_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \
    if (MEM_64bits() || (HUFv06_MAX_TABLELOG<=12)) \
        HUFv06_DECODE_SYMBOLX2_0(ptr, DStreamPtr)

#define HUFv06_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \
    if (MEM_64bits()) \
        HUFv06_DECODE_SYMBOLX2_0(ptr, DStreamPtr)

static inline size_t HUFv06_decodeStreamX2(BYTE* p, BITv06_DStream_t* const bitDPtr, BYTE* const pEnd, const HUFv06_DEltX2* const dt, const U32 dtLog)
{
    BYTE* const pStart = p;

    /* up to 4 symbols at a time */
    while ((BITv06_reloadDStream(bitDPtr) == BITv06_DStream_unfinished) && (p <= pEnd-4)) {
        HUFv06_DECODE_SYMBOLX2_2(p, bitDPtr);
        HUFv06_DECODE_SYMBOLX2_1(p, bitDPtr);
        HUFv06_DECODE_SYMBOLX2_2(p, bitDPtr);
        HUFv06_DECODE_SYMBOLX2_0(p, bitDPtr);
    }

    /* closer to the end */
    while ((BITv06_reloadDStream(bitDPtr) == BITv06_DStream_unfinished) && (p < pEnd))
        HUFv06_DECODE_SYMBOLX2_0(p, bitDPtr);

    /* no more data to retrieve from bitstream, hence no need to reload */
    while (p < pEnd)
        HUFv06_DECODE_SYMBOLX2_0(p, bitDPtr);

    return pEnd-pStart;
}

size_t HUFv06_decompress1X2_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const U16* DTable)
{
    BYTE* op = (BYTE*)dst;
    BYTE* const oend = op + dstSize;
    const U32 dtLog = DTable[0];
    const void* dtPtr = DTable;
    const HUFv06_DEltX2* const dt = ((const HUFv06_DEltX2*)dtPtr)+1;
    BITv06_DStream_t bitD;

    { size_t const errorCode = BITv06_initDStream(&bitD, cSrc, cSrcSize);
      if (HUFv06_isError(errorCode)) return errorCode; }

    HUFv06_decodeStreamX2(op, &bitD, oend, dt, dtLog);

    /* check */
    if (!BITv06_endOfDStream(&bitD)) return ERROR(corruption_detected);

    return dstSize;
}

size_t HUFv06_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    HUFv06_CREATE_STATIC_DTABLEX2(DTable, HUFv06_MAX_TABLELOG);
    const BYTE* ip = (const BYTE*) cSrc;
    size_t const errorCode = HUFv06_readDTableX2 (DTable, cSrc, cSrcSize);
    if (HUFv06_isError(errorCode)) return errorCode;
    if (errorCode >= cSrcSize) return ERROR(srcSize_wrong);
    ip += errorCode;
    cSrcSize -= errorCode;
    return HUFv06_decompress1X2_usingDTable (dst, dstSize, ip, cSrcSize, DTable);
}


size_t HUFv06_decompress4X2_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const U16* DTable)
{
    /* Check */
    if (cSrcSize < 10) return ERROR(corruption_detected);   /* strict minimum : jump table + 1 byte per stream */

    {   const BYTE* const istart = (const BYTE*) cSrc;
        BYTE* const ostart = (BYTE*) dst;
        BYTE* const oend = ostart + dstSize;
        const void* const dtPtr = DTable;
        const HUFv06_DEltX2* const dt = ((const HUFv06_DEltX2*)dtPtr) +1;
        const U32
dtLog = DTable[0]; size_t errorCode; /* Init */ BITv06_DStream_t bitD1; BITv06_DStream_t bitD2; BITv06_DStream_t bitD3; BITv06_DStream_t bitD4; const size_t length1 = MEM_readLE16(istart); const size_t length2 = MEM_readLE16(istart+2); const size_t length3 = MEM_readLE16(istart+4); size_t length4; const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; length4 = cSrcSize - (length1 + length2 + length3 + 6); if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ errorCode = BITv06_initDStream(&bitD1, istart1, length1); if (HUFv06_isError(errorCode)) return errorCode; errorCode = BITv06_initDStream(&bitD2, istart2, length2); if (HUFv06_isError(errorCode)) return errorCode; errorCode = BITv06_initDStream(&bitD3, istart3, length3); if (HUFv06_isError(errorCode)) return errorCode; errorCode = BITv06_initDStream(&bitD4, istart4, length4); if (HUFv06_isError(errorCode)) return errorCode; /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BITv06_reloadDStream(&bitD1) | BITv06_reloadDStream(&bitD2) | BITv06_reloadDStream(&bitD3) | BITv06_reloadDStream(&bitD4); for ( ; (endSignal==BITv06_DStream_unfinished) && (op4<(oend-7)) ; ) { HUFv06_DECODE_SYMBOLX2_2(op1, &bitD1); HUFv06_DECODE_SYMBOLX2_2(op2, &bitD2); HUFv06_DECODE_SYMBOLX2_2(op3, &bitD3); HUFv06_DECODE_SYMBOLX2_2(op4, &bitD4); HUFv06_DECODE_SYMBOLX2_1(op1, &bitD1); HUFv06_DECODE_SYMBOLX2_1(op2, &bitD2); HUFv06_DECODE_SYMBOLX2_1(op3, &bitD3); HUFv06_DECODE_SYMBOLX2_1(op4, &bitD4); HUFv06_DECODE_SYMBOLX2_2(op1, &bitD1); HUFv06_DECODE_SYMBOLX2_2(op2, &bitD2); HUFv06_DECODE_SYMBOLX2_2(op3, &bitD3); HUFv06_DECODE_SYMBOLX2_2(op4, &bitD4); HUFv06_DECODE_SYMBOLX2_0(op1, &bitD1); HUFv06_DECODE_SYMBOLX2_0(op2, &bitD2); HUFv06_DECODE_SYMBOLX2_0(op3, &bitD3); HUFv06_DECODE_SYMBOLX2_0(op4, &bitD4); endSignal = BITv06_reloadDStream(&bitD1) | BITv06_reloadDStream(&bitD2) | BITv06_reloadDStream(&bitD3) | BITv06_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) return ERROR(corruption_detected); /* note : op4 supposed already verified within main loop */ /* finish bitStreams one by one */ HUFv06_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog); HUFv06_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog); HUFv06_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog); HUFv06_decodeStreamX2(op4, &bitD4, oend, dt, dtLog); /* check */ endSignal = BITv06_endOfDStream(&bitD1) & BITv06_endOfDStream(&bitD2) & BITv06_endOfDStream(&bitD3) & BITv06_endOfDStream(&bitD4); if (!endSignal) return ERROR(corruption_detected); /* decoded size */ return dstSize; } } size_t HUFv06_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUFv06_CREATE_STATIC_DTABLEX2(DTable, HUFv06_MAX_TABLELOG); const BYTE* ip = (const BYTE*) cSrc; size_t const errorCode = HUFv06_readDTableX2 (DTable, cSrc, cSrcSize); if (HUFv06_isError(errorCode)) return errorCode; if (errorCode >= cSrcSize) return ERROR(srcSize_wrong); ip += errorCode; cSrcSize -= errorCode; return HUFv06_decompress4X2_usingDTable (dst, 
dstSize, ip, cSrcSize, DTable);
}


/* *************************/
/* double-symbols decoding */
/* *************************/

static void HUFv06_fillDTableX4Level2(HUFv06_DEltX4* DTable, U32 sizeLog, const U32 consumed,
                           const U32* rankValOrigin, const int minWeight,
                           const sortedSymbol_t* sortedSymbols, const U32 sortedListSize,
                           U32 nbBitsBaseline, U16 baseSeq)
{
    HUFv06_DEltX4 DElt;
    U32 rankVal[HUFv06_ABSOLUTEMAX_TABLELOG + 1];

    /* get pre-calculated rankVal */
    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill skipped values */
    if (minWeight>1) {
        U32 i, skipSize = rankVal[minWeight];
        MEM_writeLE16(&(DElt.sequence), baseSeq);
        DElt.nbBits   = (BYTE)(consumed);
        DElt.length   = 1;
        for (i = 0; i < skipSize; i++)
            DTable[i] = DElt;
    }

    /* fill DTable */
    {   U32 s;
        for (s=0; s<sortedListSize; s++) {
            const U32 symbol = sortedSymbols[s].symbol;
            const U32 weight = sortedSymbols[s].weight;
            const U32 nbBits = nbBitsBaseline - weight;
            const U32 length = 1 << (sizeLog-nbBits);
            const U32 start = rankVal[weight];
            U32 i = start;
            const U32 end = start + length;

            MEM_writeLE16(&(DElt.sequence), (U16)(baseSeq + (symbol << 8)));
            DElt.nbBits = (BYTE)(nbBits + consumed);
            DElt.length = 2;
            do { DTable[i++] = DElt; } while (i<end);   /* since length >= 1 */

            rankVal[weight] += length;
    }   }
}

typedef U32 rankVal_t[HUFv06_ABSOLUTEMAX_TABLELOG][HUFv06_ABSOLUTEMAX_TABLELOG + 1];

static void HUFv06_fillDTableX4(HUFv06_DEltX4* DTable, const U32 targetLog,
                           const sortedSymbol_t* sortedList, const U32 sortedListSize,
                           const U32* rankStart, rankVal_t rankValOrigin, const U32 maxWeight,
                           const U32 nbBitsBaseline)
{
    U32 rankVal[HUFv06_ABSOLUTEMAX_TABLELOG + 1];
    const int scaleLog = nbBitsBaseline - targetLog;   /* note : targetLog >= srcLog, hence scaleLog <= 1 */
    const U32 minBits  = nbBitsBaseline - maxWeight;
    U32 s;

    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill DTable */
    for (s=0; s<sortedListSize; s++) {
        const U16 symbol = sortedList[s].symbol;
        const U32 weight = sortedList[s].weight;
        const U32 nbBits = nbBitsBaseline - weight;
        const U32 start = rankVal[weight];
        const U32 length = 1 << (targetLog-nbBits);

        if (targetLog-nbBits >= minBits) {   /* enough room for a second symbol */
            U32 sortedRank;
            int minWeight = nbBits + scaleLog;
            if (minWeight < 1) minWeight = 1;
            sortedRank = rankStart[minWeight];
            HUFv06_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits,
                           rankValOrigin[nbBits], minWeight,
                           sortedList+sortedRank, sortedListSize-sortedRank,
                           nbBitsBaseline, symbol);
        } else {
            HUFv06_DEltX4 DElt;
            MEM_writeLE16(&(DElt.sequence), symbol);
            DElt.nbBits = (BYTE)(nbBits);
            DElt.length = 1;
            {   U32 u;
                const U32 end = start + length;
                for (u = start; u < end; u++) DTable[u] = DElt;
        }   }
        rankVal[weight] += length;
    }
}

size_t HUFv06_readDTableX4 (U32* DTable, const void* src, size_t srcSize)
{
    BYTE weightList[HUFv06_MAX_SYMBOL_VALUE + 1];
    sortedSymbol_t sortedSymbol[HUFv06_MAX_SYMBOL_VALUE + 1];
    U32 rankStats[HUFv06_ABSOLUTEMAX_TABLELOG + 1] = { 0 };
    U32 rankStart0[HUFv06_ABSOLUTEMAX_TABLELOG + 2] = { 0 };
    U32* const rankStart = rankStart0+1;
    rankVal_t rankVal;
    U32 tableLog, maxW, sizeOfSort, nbSymbols;
    const U32 memLog = DTable[0];
    size_t iSize;
    void* dtPtr = DTable;
    HUFv06_DEltX4* const dt = ((HUFv06_DEltX4*)dtPtr) + 1;

    HUFv06_STATIC_ASSERT(sizeof(HUFv06_DEltX4) == sizeof(U32));   /* if compilation fails here, assertion is false */
    if (memLog > HUFv06_ABSOLUTEMAX_TABLELOG) return ERROR(tableLog_tooLarge);
    //memset(weightList, 0, sizeof(weightList));   /* is not necessary, even though some analyzer complain ... */
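    /* Cell layout note (illustrative) : unlike the X2 table (one decoded byte per
       cell), each X4 cell can deliver up to two literals per lookup :
         sequence : 1 or 2 decoded bytes, stored little-endian
         length   : how many of those bytes are valid (1 or 2)
         nbBits   : input bits consumed by the whole cell
       rankVal[consumed] caches, per number of already-consumed bits, the start
       position of each weight class, so second-level tables can be filled fast. */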
    iSize = HUFv06_readStats(weightList, HUFv06_MAX_SYMBOL_VALUE + 1, rankStats, &nbSymbols, &tableLog, src, srcSize);
    if (HUFv06_isError(iSize)) return iSize;

    /* check result */
    if (tableLog > memLog) return ERROR(tableLog_tooLarge);   /* DTable can't fit code depth */

    /* find maxWeight */
    for (maxW = tableLog; rankStats[maxW]==0; maxW--) {}  /* necessarily finds a solution before 0 */

    /* Get start index of each weight */
    {   U32 w, nextRankStart = 0;
        for (w=1; w<maxW+1; w++) {
            U32 current = nextRankStart;
            nextRankStart += rankStats[w];
            rankStart[w] = current;
        }
        rankStart[0] = nextRankStart;   /* put all errors at beginning */
        sizeOfSort = nextRankStart;
    }

    /* sort symbols by weight */
    {   U32 s;
        for (s=0; s<nbSymbols; s++) {
            U32 const w = weightList[s];
            U32 const r = rankStart[w]++;
            sortedSymbol[r].symbol = (BYTE)s;
            sortedSymbol[r].weight = (BYTE)w;
        }
        rankStart[0] = 0;   /* forget 0w symbols; this is beginning of weight(1) */
    }

    /* Build rankVal */
    {   U32* const rankVal0 = rankVal[0];
        {   int const rescale = (memLog-tableLog) - 1;   /* tableLog <= memLog */
            U32 nextRankVal = 0;
            U32 w;
            for (w=1; w<maxW+1; w++) {
                U32 current = nextRankVal;
                nextRankVal += rankStats[w] << (w+rescale);
                rankVal0[w] = current;
        }   }
        {   U32 const minBits = tableLog+1 - maxW;
            U32 consumed;
            for (consumed = minBits; consumed < memLog - minBits + 1; consumed++) {
                U32* const rankValPtr = rankVal[consumed];
                U32 w;
                for (w = 1; w < maxW+1; w++) {
                    rankValPtr[w] = rankVal0[w] >> consumed;
    }   }   }   }

    HUFv06_fillDTableX4(dt, memLog,
                   sortedSymbol, sizeOfSort,
                   rankStart0, rankVal, maxW,
                   tableLog+1);

    return iSize;
}


static U32 HUFv06_decodeSymbolX4(void* op, BITv06_DStream_t* DStream, const HUFv06_DEltX4* dt, const U32 dtLog)
{
    const size_t val = BITv06_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 2);
    BITv06_skipBits(DStream, dt[val].nbBits);
    return dt[val].length;
}

static U32 HUFv06_decodeLastSymbolX4(void* op, BITv06_DStream_t* DStream, const HUFv06_DEltX4* dt, const U32 dtLog)
{
    const size_t val = BITv06_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 1);
    if (dt[val].length==1) BITv06_skipBits(DStream, dt[val].nbBits);
    else {
        if (DStream->bitsConsumed < (sizeof(DStream->bitContainer)*8)) {
            BITv06_skipBits(DStream, dt[val].nbBits);
            if (DStream->bitsConsumed > (sizeof(DStream->bitContainer)*8))
                DStream->bitsConsumed = (sizeof(DStream->bitContainer)*8);   /* ugly hack; works only because it's the last symbol. Note : can't easily extract nbBits from just this symbol */
    }   }
    return 1;
}


#define HUFv06_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \
    ptr += HUFv06_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

#define HUFv06_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \
    if (MEM_64bits() || (HUFv06_MAX_TABLELOG<=12)) \
        ptr += HUFv06_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

#define HUFv06_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \
    if (MEM_64bits()) \
        ptr += HUFv06_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

static inline size_t HUFv06_decodeStreamX4(BYTE* p, BITv06_DStream_t* bitDPtr, BYTE* const pEnd, const HUFv06_DEltX4* const dt, const U32 dtLog)
{
    BYTE* const pStart = p;

    /* up to 8 symbols at a time */
    while ((BITv06_reloadDStream(bitDPtr) == BITv06_DStream_unfinished) && (p < pEnd-7)) {
        HUFv06_DECODE_SYMBOLX4_2(p, bitDPtr);
        HUFv06_DECODE_SYMBOLX4_1(p, bitDPtr);
        HUFv06_DECODE_SYMBOLX4_2(p, bitDPtr);
        HUFv06_DECODE_SYMBOLX4_0(p, bitDPtr);
    }

    /* closer to the end */
    while ((BITv06_reloadDStream(bitDPtr) == BITv06_DStream_unfinished) && (p <= pEnd-2))
        HUFv06_DECODE_SYMBOLX4_0(p, bitDPtr);

    while (p <= pEnd-2)
        HUFv06_DECODE_SYMBOLX4_0(p, bitDPtr);   /* no need to reload : reached the end of DStream */

    if (p < pEnd)
        p += HUFv06_decodeLastSymbolX4(p, bitDPtr, dt, dtLog);

    return p-pStart;
}


size_t HUFv06_decompress1X4_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const U32* DTable)
{
    const BYTE* const istart = (const BYTE*) cSrc;
    BYTE* const ostart = (BYTE*) dst;
    BYTE* const oend = ostart + dstSize;

    const U32 dtLog = DTable[0];
    const void* const dtPtr = DTable;
    const HUFv06_DEltX4* const dt = ((const HUFv06_DEltX4*)dtPtr) +1;

    /* Init */
    BITv06_DStream_t bitD;
    { size_t const errorCode = BITv06_initDStream(&bitD, istart, cSrcSize);
      if (HUFv06_isError(errorCode)) return errorCode; }

    /* decode */
    HUFv06_decodeStreamX4(ostart, &bitD, oend, dt, dtLog);

    /* check */
    if (!BITv06_endOfDStream(&bitD)) return ERROR(corruption_detected);

    /* decoded size */
    return dstSize;
}

size_t HUFv06_decompress1X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
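    /* Flow note (illustrative) : same two-step shape as HUFv06_decompress1X2()
       above - read the table description from the head of `cSrc`, then decode the
       single remaining bitstream; as with every HUF decoder in this file, `dstSize`
       must be the exact regenerated size. */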
HUFv06_CREATE_STATIC_DTABLEX4(DTable, HUFv06_MAX_TABLELOG); const BYTE* ip = (const BYTE*) cSrc; size_t const hSize = HUFv06_readDTableX4 (DTable, cSrc, cSrcSize); if (HUFv06_isError(hSize)) return hSize; if (hSize >= cSrcSize) return ERROR(srcSize_wrong); ip += hSize; cSrcSize -= hSize; return HUFv06_decompress1X4_usingDTable (dst, dstSize, ip, cSrcSize, DTable); } size_t HUFv06_decompress4X4_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const U32* DTable) { if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* const dtPtr = DTable; const HUFv06_DEltX4* const dt = ((const HUFv06_DEltX4*)dtPtr) +1; const U32 dtLog = DTable[0]; size_t errorCode; /* Init */ BITv06_DStream_t bitD1; BITv06_DStream_t bitD2; BITv06_DStream_t bitD3; BITv06_DStream_t bitD4; const size_t length1 = MEM_readLE16(istart); const size_t length2 = MEM_readLE16(istart+2); const size_t length3 = MEM_readLE16(istart+4); size_t length4; const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; length4 = cSrcSize - (length1 + length2 + length3 + 6); if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ errorCode = BITv06_initDStream(&bitD1, istart1, length1); if (HUFv06_isError(errorCode)) return errorCode; errorCode = BITv06_initDStream(&bitD2, istart2, length2); if (HUFv06_isError(errorCode)) return errorCode; errorCode = BITv06_initDStream(&bitD3, istart3, length3); if (HUFv06_isError(errorCode)) return errorCode; errorCode = BITv06_initDStream(&bitD4, istart4, length4); if (HUFv06_isError(errorCode)) return errorCode; /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BITv06_reloadDStream(&bitD1) | BITv06_reloadDStream(&bitD2) | BITv06_reloadDStream(&bitD3) | BITv06_reloadDStream(&bitD4); for ( ; (endSignal==BITv06_DStream_unfinished) && (op4<(oend-7)) ; ) { HUFv06_DECODE_SYMBOLX4_2(op1, &bitD1); HUFv06_DECODE_SYMBOLX4_2(op2, &bitD2); HUFv06_DECODE_SYMBOLX4_2(op3, &bitD3); HUFv06_DECODE_SYMBOLX4_2(op4, &bitD4); HUFv06_DECODE_SYMBOLX4_1(op1, &bitD1); HUFv06_DECODE_SYMBOLX4_1(op2, &bitD2); HUFv06_DECODE_SYMBOLX4_1(op3, &bitD3); HUFv06_DECODE_SYMBOLX4_1(op4, &bitD4); HUFv06_DECODE_SYMBOLX4_2(op1, &bitD1); HUFv06_DECODE_SYMBOLX4_2(op2, &bitD2); HUFv06_DECODE_SYMBOLX4_2(op3, &bitD3); HUFv06_DECODE_SYMBOLX4_2(op4, &bitD4); HUFv06_DECODE_SYMBOLX4_0(op1, &bitD1); HUFv06_DECODE_SYMBOLX4_0(op2, &bitD2); HUFv06_DECODE_SYMBOLX4_0(op3, &bitD3); HUFv06_DECODE_SYMBOLX4_0(op4, &bitD4); endSignal = BITv06_reloadDStream(&bitD1) | BITv06_reloadDStream(&bitD2) | BITv06_reloadDStream(&bitD3) | BITv06_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) return ERROR(corruption_detected); /* note : op4 supposed already verified within main loop */ /* finish bitStreams one by one */ HUFv06_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog); 
HUFv06_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog); HUFv06_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog); HUFv06_decodeStreamX4(op4, &bitD4, oend, dt, dtLog); /* check */ endSignal = BITv06_endOfDStream(&bitD1) & BITv06_endOfDStream(&bitD2) & BITv06_endOfDStream(&bitD3) & BITv06_endOfDStream(&bitD4); if (!endSignal) return ERROR(corruption_detected); /* decoded size */ return dstSize; } } size_t HUFv06_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUFv06_CREATE_STATIC_DTABLEX4(DTable, HUFv06_MAX_TABLELOG); const BYTE* ip = (const BYTE*) cSrc; size_t hSize = HUFv06_readDTableX4 (DTable, cSrc, cSrcSize); if (HUFv06_isError(hSize)) return hSize; if (hSize >= cSrcSize) return ERROR(srcSize_wrong); ip += hSize; cSrcSize -= hSize; return HUFv06_decompress4X4_usingDTable (dst, dstSize, ip, cSrcSize, DTable); } /* ********************************/ /* Generic decompression selector */ /* ********************************/ typedef struct { U32 tableTime; U32 decode256Time; } algo_time_t; static const algo_time_t algoTime[16 /* Quantization */][3 /* single, double, quad */] = { /* single, double, quad */ {{0,0}, {1,1}, {2,2}}, /* Q==0 : impossible */ {{0,0}, {1,1}, {2,2}}, /* Q==1 : impossible */ {{ 38,130}, {1313, 74}, {2151, 38}}, /* Q == 2 : 12-18% */ {{ 448,128}, {1353, 74}, {2238, 41}}, /* Q == 3 : 18-25% */ {{ 556,128}, {1353, 74}, {2238, 47}}, /* Q == 4 : 25-32% */ {{ 714,128}, {1418, 74}, {2436, 53}}, /* Q == 5 : 32-38% */ {{ 883,128}, {1437, 74}, {2464, 61}}, /* Q == 6 : 38-44% */ {{ 897,128}, {1515, 75}, {2622, 68}}, /* Q == 7 : 44-50% */ {{ 926,128}, {1613, 75}, {2730, 75}}, /* Q == 8 : 50-56% */ {{ 947,128}, {1729, 77}, {3359, 77}}, /* Q == 9 : 56-62% */ {{1107,128}, {2083, 81}, {4006, 84}}, /* Q ==10 : 62-69% */ {{1177,128}, {2379, 87}, {4785, 88}}, /* Q ==11 : 69-75% */ {{1242,128}, {2415, 93}, {5155, 84}}, /* Q ==12 : 75-81% */ {{1349,128}, {2644,106}, {5260,106}}, /* Q ==13 : 81-87% */ {{1455,128}, {2422,124}, {4174,124}}, /* Q ==14 : 87-93% */ {{ 722,128}, {1891,145}, {1936,146}}, /* Q ==15 : 93-99% */ }; typedef size_t (*decompressionAlgo)(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); size_t HUFv06_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { static const decompressionAlgo decompress[3] = { HUFv06_decompress4X2, HUFv06_decompress4X4, NULL }; U32 Dtime[3]; /* decompression time estimation */ /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */ if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */ if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */ /* decoder timing evaluation */ { U32 const Q = (U32)(cSrcSize * 16 / dstSize); /* Q < 16 since dstSize > cSrcSize */ U32 const D256 = (U32)(dstSize >> 8); U32 n; for (n=0; n<3; n++) Dtime[n] = algoTime[Q][n].tableTime + (algoTime[Q][n].decode256Time * D256); } Dtime[1] += Dtime[1] >> 4; Dtime[2] += Dtime[2] >> 3; /* advantage to algorithms using less memory, for cache eviction */ { U32 algoNb = 0; if (Dtime[1] < Dtime[0]) algoNb = 1; // if (Dtime[2] < Dtime[algoNb]) algoNb = 2; /* current speed of HUFv06_decompress4X6 is not good */ return decompress[algoNb](dst, dstSize, cSrc, cSrcSize); } //return HUFv06_decompress4X2(dst, dstSize, cSrc, cSrcSize); /* multi-streams single-symbol decoding */ //return HUFv06_decompress4X4(dst, dstSize, cSrc, cSrcSize); /* multi-streams double-symbols 
decoding */ //return HUFv06_decompress4X6(dst, dstSize, cSrc, cSrcSize); /* multi-streams quad-symbols decoding */ } /* Common functions of Zstd compression library Copyright (C) 2015-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd homepage : http://www.zstd.net/ */ /*-**************************************** * Version ******************************************/ /*-**************************************** * ZSTD Error Management ******************************************/ /*! ZSTDv06_isError() : * tells if a return value is an error code */ unsigned ZSTDv06_isError(size_t code) { return ERR_isError(code); } /*! ZSTDv06_getErrorName() : * provides error code string from function result (useful for debugging) */ const char* ZSTDv06_getErrorName(size_t code) { return ERR_getErrorName(code); } /* ************************************************************** * ZBUFF Error Management ****************************************************************/ unsigned ZBUFFv06_isError(size_t errorCode) { return ERR_isError(errorCode); } const char* ZBUFFv06_getErrorName(size_t errorCode) { return ERR_getErrorName(errorCode); } /* zstd - standard compression library Copyright (C) 2014-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
   IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
   - zstd homepage : http://www.zstd.net
*/

/* ***************************************************************
*  Tuning parameters
*****************************************************************/
/*!
 * HEAPMODE :
 * Select how default decompression function ZSTDv06_decompress() will allocate memory,
 * in memory stack (0), or in memory heap (1, requires malloc())
 */
#ifndef ZSTDv06_HEAPMODE
#  define ZSTDv06_HEAPMODE 1
#endif


/*-*******************************************************
*  Compiler specifics
*********************************************************/
#ifdef _MSC_VER    /* Visual Studio */
#  include <intrin.h>                    /* For Visual 2005 */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4324)        /* disable: C4324: padded structure */
#endif


/*-*************************************
*  Macros
***************************************/
#define ZSTDv06_isError ERR_isError   /* for inlining */
#define FSEv06_isError  ERR_isError
#define HUFv06_isError  ERR_isError


/*_*******************************************************
*  Memory operations
**********************************************************/
static void ZSTDv06_copy4(void* dst, const void* src) { memcpy(dst, src, 4); }


/*-*************************************************************
*   Context management
***************************************************************/
typedef enum { ZSTDds_getFrameHeaderSize, ZSTDds_decodeFrameHeader,
               ZSTDds_decodeBlockHeader, ZSTDds_decompressBlock } ZSTDv06_dStage;

struct ZSTDv06_DCtx_s
{
    FSEv06_DTable LLTable[FSEv06_DTABLE_SIZE_U32(LLFSELog)];
    FSEv06_DTable OffTable[FSEv06_DTABLE_SIZE_U32(OffFSELog)];
    FSEv06_DTable MLTable[FSEv06_DTABLE_SIZE_U32(MLFSELog)];
    unsigned   hufTableX4[HUFv06_DTABLE_SIZE(HufLog)];
    const void* previousDstEnd;
    const void* base;
    const void* vBase;
    const void* dictEnd;
    size_t expected;
    size_t headerSize;
    ZSTDv06_frameParams fParams;
    blockType_t bType;   /* used in ZSTDv06_decompressContinue(), to transfer blockType between header decoding and block decoding stages */
    ZSTDv06_dStage stage;
    U32 flagRepeatTable;
    const BYTE* litPtr;
    size_t litSize;
    BYTE litBuffer[ZSTDv06_BLOCKSIZE_MAX + WILDCOPY_OVERLENGTH];
    BYTE headerBuffer[ZSTDv06_FRAMEHEADERSIZE_MAX];
};  /* typedef'd to ZSTDv06_DCtx within "zstd_static.h" */

size_t ZSTDv06_sizeofDCtx (void) { return sizeof(ZSTDv06_DCtx); }   /* non published interface */

size_t ZSTDv06_decompressBegin(ZSTDv06_DCtx* dctx)
{
    dctx->expected = ZSTDv06_frameHeaderSize_min;
    dctx->stage = ZSTDds_getFrameHeaderSize;
    dctx->previousDstEnd = NULL;
    dctx->base = NULL;
    dctx->vBase = NULL;
    dctx->dictEnd = NULL;
    dctx->hufTableX4[0] = HufLog;
    dctx->flagRepeatTable = 0;
    return 0;
}

ZSTDv06_DCtx* ZSTDv06_createDCtx(void)
{
    ZSTDv06_DCtx* dctx = (ZSTDv06_DCtx*)malloc(sizeof(ZSTDv06_DCtx));
    if (dctx==NULL) return NULL;
    ZSTDv06_decompressBegin(dctx);
    return dctx;
}

size_t ZSTDv06_freeDCtx(ZSTDv06_DCtx* dctx)
{
    free(dctx);
    return 0;   /* reserved as a potential error code in the future */
}

void
ZSTDv06_copyDCtx(ZSTDv06_DCtx* dstDCtx, const ZSTDv06_DCtx* srcDCtx) { memcpy(dstDCtx, srcDCtx, sizeof(ZSTDv06_DCtx) - (ZSTDv06_BLOCKSIZE_MAX+WILDCOPY_OVERLENGTH + ZSTDv06_frameHeaderSize_max)); /* no need to copy workspace */ } /*-************************************************************* * Decompression section ***************************************************************/ /* Frame format description Frame Header - [ Block Header - Block ] - Frame End 1) Frame Header - 4 bytes - Magic Number : ZSTDv06_MAGICNUMBER (defined within zstd_static.h) - 1 byte - Frame Descriptor 2) Block Header - 3 bytes, starting with a 2-bits descriptor Uncompressed, Compressed, Frame End, unused 3) Block See Block Format Description 4) Frame End - 3 bytes, compatible with Block Header */ /* Frame descriptor 1 byte, using : bit 0-3 : windowLog - ZSTDv06_WINDOWLOG_ABSOLUTEMIN (see zstd_internal.h) bit 4 : minmatch 4(0) or 3(1) bit 5 : reserved (must be zero) bit 6-7 : Frame content size : unknown, 1 byte, 2 bytes, 8 bytes Optional : content size (0, 1, 2 or 8 bytes) 0 : unknown 1 : 0-255 bytes 2 : 256 - 65535+256 8 : up to 16 exa */ /* Compressed Block, format description Block = Literal Section - Sequences Section Prerequisite : size of (compressed) block, maximum size of regenerated data 1) Literal Section 1.1) Header : 1-5 bytes flags: 2 bits 00 compressed by Huff0 01 unused 10 is Raw (uncompressed) 11 is Rle Note : using 01 => Huff0 with precomputed table ? Note : delta map ? => compressed ? 1.1.1) Huff0-compressed literal block : 3-5 bytes srcSize < 1 KB => 3 bytes (2-2-10-10) => single stream srcSize < 1 KB => 3 bytes (2-2-10-10) srcSize < 16KB => 4 bytes (2-2-14-14) else => 5 bytes (2-2-18-18) big endian convention 1.1.2) Raw (uncompressed) literal block header : 1-3 bytes size : 5 bits: (IS_RAW<<6) + (0<<4) + size 12 bits: (IS_RAW<<6) + (2<<4) + (size>>8) size&255 20 bits: (IS_RAW<<6) + (3<<4) + (size>>16) size>>8&255 size&255 1.1.3) Rle (repeated single byte) literal block header : 1-3 bytes size : 5 bits: (IS_RLE<<6) + (0<<4) + size 12 bits: (IS_RLE<<6) + (2<<4) + (size>>8) size&255 20 bits: (IS_RLE<<6) + (3<<4) + (size>>16) size>>8&255 size&255 1.1.4) Huff0-compressed literal block, using precomputed CTables : 3-5 bytes srcSize < 1 KB => 3 bytes (2-2-10-10) => single stream srcSize < 1 KB => 3 bytes (2-2-10-10) srcSize < 16KB => 4 bytes (2-2-14-14) else => 5 bytes (2-2-18-18) big endian convention 1- CTable available (stored into workspace ?) 2- Small input (fast heuristic ? Full comparison ? depend on clevel ?) 1.2) Literal block content 1.2.1) Huff0 block, using sizes from header See Huff0 format 1.2.2) Huff0 block, using prepared table 1.2.3) Raw content 1.2.4) single byte 2) Sequences section TO DO */ /** ZSTDv06_frameHeaderSize() : * srcSize must be >= ZSTDv06_frameHeaderSize_min. * @return : size of the Frame Header */ static size_t ZSTDv06_frameHeaderSize(const void* src, size_t srcSize) { if (srcSize < ZSTDv06_frameHeaderSize_min) return ERROR(srcSize_wrong); { U32 const fcsId = (((const BYTE*)src)[4]) >> 6; return ZSTDv06_frameHeaderSize_min + ZSTDv06_fcs_fieldSize[fcsId]; } } /** ZSTDv06_getFrameParams() : * decode Frame Header, or provide expected `srcSize`. 
* @return : 0, `fparamsPtr` is correctly filled, * >0, `srcSize` is too small, result is expected `srcSize`, * or an error code, which can be tested using ZSTDv06_isError() */ size_t ZSTDv06_getFrameParams(ZSTDv06_frameParams* fparamsPtr, const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; if (srcSize < ZSTDv06_frameHeaderSize_min) return ZSTDv06_frameHeaderSize_min; if (MEM_readLE32(src) != ZSTDv06_MAGICNUMBER) return ERROR(prefix_unknown); /* ensure there is enough `srcSize` to fully read/decode frame header */ { size_t const fhsize = ZSTDv06_frameHeaderSize(src, srcSize); if (srcSize < fhsize) return fhsize; } memset(fparamsPtr, 0, sizeof(*fparamsPtr)); { BYTE const frameDesc = ip[4]; fparamsPtr->windowLog = (frameDesc & 0xF) + ZSTDv06_WINDOWLOG_ABSOLUTEMIN; if ((frameDesc & 0x20) != 0) return ERROR(frameParameter_unsupported); /* reserved 1 bit */ switch(frameDesc >> 6) /* fcsId */ { default: /* impossible */ case 0 : fparamsPtr->frameContentSize = 0; break; case 1 : fparamsPtr->frameContentSize = ip[5]; break; case 2 : fparamsPtr->frameContentSize = MEM_readLE16(ip+5)+256; break; case 3 : fparamsPtr->frameContentSize = MEM_readLE64(ip+5); break; } } return 0; } /** ZSTDv06_decodeFrameHeader() : * `srcSize` must be the size provided by ZSTDv06_frameHeaderSize(). * @return : 0 if success, or an error code, which can be tested using ZSTDv06_isError() */ static size_t ZSTDv06_decodeFrameHeader(ZSTDv06_DCtx* zc, const void* src, size_t srcSize) { size_t const result = ZSTDv06_getFrameParams(&(zc->fParams), src, srcSize); if ((MEM_32bits()) && (zc->fParams.windowLog > 25)) return ERROR(frameParameter_unsupported); return result; } typedef struct { blockType_t blockType; U32 origSize; } blockProperties_t; /*! ZSTDv06_getcBlockSize() : * Provides the size of compressed block from block header `src` */ size_t ZSTDv06_getcBlockSize(const void* src, size_t srcSize, blockProperties_t* bpPtr) { const BYTE* const in = (const BYTE* const)src; U32 cSize; if (srcSize < ZSTDv06_blockHeaderSize) return ERROR(srcSize_wrong); bpPtr->blockType = (blockType_t)((*in) >> 6); cSize = in[2] + (in[1]<<8) + ((in[0] & 7)<<16); bpPtr->origSize = (bpPtr->blockType == bt_rle) ? cSize : 0; if (bpPtr->blockType == bt_end) return 0; if (bpPtr->blockType == bt_rle) return 1; return cSize; } static size_t ZSTDv06_copyRawBlock(void* dst, size_t dstCapacity, const void* src, size_t srcSize) { if (srcSize > dstCapacity) return ERROR(dstSize_tooSmall); memcpy(dst, src, srcSize); return srcSize; } /*! 
ZSTDv06_decodeLiteralsBlock() : @return : nb of bytes read from src (< srcSize ) */ size_t ZSTDv06_decodeLiteralsBlock(ZSTDv06_DCtx* dctx, const void* src, size_t srcSize) /* note : srcSize < BLOCKSIZE */ { const BYTE* const istart = (const BYTE*) src; /* any compressed block with literals segment must be at least this size */ if (srcSize < MIN_CBLOCK_SIZE) return ERROR(corruption_detected); switch(istart[0]>> 6) { case IS_HUF: { size_t litSize, litCSize, singleStream=0; U32 lhSize = ((istart[0]) >> 4) & 3; if (srcSize < 5) return ERROR(corruption_detected); /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need up to 5 for lhSize, + cSize (+nbSeq) */ switch(lhSize) { case 0: case 1: default: /* note : default is impossible, since lhSize into [0..3] */ /* 2 - 2 - 10 - 10 */ lhSize=3; singleStream = istart[0] & 16; litSize = ((istart[0] & 15) << 6) + (istart[1] >> 2); litCSize = ((istart[1] & 3) << 8) + istart[2]; break; case 2: /* 2 - 2 - 14 - 14 */ lhSize=4; litSize = ((istart[0] & 15) << 10) + (istart[1] << 2) + (istart[2] >> 6); litCSize = ((istart[2] & 63) << 8) + istart[3]; break; case 3: /* 2 - 2 - 18 - 18 */ lhSize=5; litSize = ((istart[0] & 15) << 14) + (istart[1] << 6) + (istart[2] >> 2); litCSize = ((istart[2] & 3) << 16) + (istart[3] << 8) + istart[4]; break; } if (litSize > ZSTDv06_BLOCKSIZE_MAX) return ERROR(corruption_detected); if (litCSize + lhSize > srcSize) return ERROR(corruption_detected); if (HUFv06_isError(singleStream ? HUFv06_decompress1X2(dctx->litBuffer, litSize, istart+lhSize, litCSize) : HUFv06_decompress (dctx->litBuffer, litSize, istart+lhSize, litCSize) )) return ERROR(corruption_detected); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH); return litCSize + lhSize; } case IS_PCH: { size_t litSize, litCSize; U32 lhSize = ((istart[0]) >> 4) & 3; if (lhSize != 1) /* only case supported for now : small litSize, single stream */ return ERROR(corruption_detected); if (!dctx->flagRepeatTable) return ERROR(dictionary_corrupted); /* 2 - 2 - 10 - 10 */ lhSize=3; litSize = ((istart[0] & 15) << 6) + (istart[1] >> 2); litCSize = ((istart[1] & 3) << 8) + istart[2]; if (litCSize + lhSize > srcSize) return ERROR(corruption_detected); { size_t const errorCode = HUFv06_decompress1X4_usingDTable(dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->hufTableX4); if (HUFv06_isError(errorCode)) return ERROR(corruption_detected); } dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH); return litCSize + lhSize; } case IS_RAW: { size_t litSize; U32 lhSize = ((istart[0]) >> 4) & 3; switch(lhSize) { case 0: case 1: default: /* note : default is impossible, since lhSize into [0..3] */ lhSize=1; litSize = istart[0] & 31; break; case 2: litSize = ((istart[0] & 15) << 8) + istart[1]; break; case 3: litSize = ((istart[0] & 15) << 16) + (istart[1] << 8) + istart[2]; break; } if (lhSize+litSize+WILDCOPY_OVERLENGTH > srcSize) { /* risk reading beyond src buffer with wildcopy */ if (litSize+lhSize > srcSize) return ERROR(corruption_detected); memcpy(dctx->litBuffer, istart+lhSize, litSize); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH); return lhSize+litSize; } /* direct reference into compressed stream */ dctx->litPtr = istart+lhSize; dctx->litSize = litSize; return lhSize+litSize; } case IS_RLE: { size_t litSize; U32 lhSize = ((istart[0]) >> 4) & 3; switch(lhSize) { case 0: case 1: 
default: /* note : default is impossible, since lhSize into [0..3] */ lhSize = 1; litSize = istart[0] & 31; break; case 2: litSize = ((istart[0] & 15) << 8) + istart[1]; break; case 3: litSize = ((istart[0] & 15) << 16) + (istart[1] << 8) + istart[2]; if (srcSize<4) return ERROR(corruption_detected); /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need lhSize+1 = 4 */ break; } if (litSize > ZSTDv06_BLOCKSIZE_MAX) return ERROR(corruption_detected); memset(dctx->litBuffer, istart[lhSize], litSize + WILDCOPY_OVERLENGTH); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; return lhSize+1; } default: return ERROR(corruption_detected); /* impossible */ } } /*! ZSTDv06_buildSeqTable() : @return : nb bytes read from src, or an error code if it fails, testable with ZSTDv06_isError() */ size_t ZSTDv06_buildSeqTable(FSEv06_DTable* DTable, U32 type, U32 max, U32 maxLog, const void* src, size_t srcSize, const S16* defaultNorm, U32 defaultLog, U32 flagRepeatTable) { switch(type) { case FSEv06_ENCODING_RLE : if (!srcSize) return ERROR(srcSize_wrong); if ( (*(const BYTE*)src) > max) return ERROR(corruption_detected); FSEv06_buildDTable_rle(DTable, *(const BYTE*)src); /* if *src > max, data is corrupted */ return 1; case FSEv06_ENCODING_RAW : FSEv06_buildDTable(DTable, defaultNorm, max, defaultLog); return 0; case FSEv06_ENCODING_STATIC: if (!flagRepeatTable) return ERROR(corruption_detected); return 0; default : /* impossible */ case FSEv06_ENCODING_DYNAMIC : { U32 tableLog; S16 norm[MaxSeq+1]; size_t const headerSize = FSEv06_readNCount(norm, &max, &tableLog, src, srcSize); if (FSEv06_isError(headerSize)) return ERROR(corruption_detected); if (tableLog > maxLog) return ERROR(corruption_detected); FSEv06_buildDTable(DTable, norm, max, tableLog); return headerSize; } } } size_t ZSTDv06_decodeSeqHeaders(int* nbSeqPtr, FSEv06_DTable* DTableLL, FSEv06_DTable* DTableML, FSEv06_DTable* DTableOffb, U32 flagRepeatTable, const void* src, size_t srcSize) { const BYTE* const istart = (const BYTE* const)src; const BYTE* const iend = istart + srcSize; const BYTE* ip = istart; /* check */ if (srcSize < MIN_SEQUENCES_SIZE) return ERROR(srcSize_wrong); /* SeqHead */ { int nbSeq = *ip++; if (!nbSeq) { *nbSeqPtr=0; return 1; } if (nbSeq > 0x7F) { if (nbSeq == 0xFF) { if (ip+2 > iend) return ERROR(srcSize_wrong); nbSeq = MEM_readLE16(ip) + LONGNBSEQ, ip+=2; } else { if (ip >= iend) return ERROR(srcSize_wrong); nbSeq = ((nbSeq-0x80)<<8) + *ip++; } } *nbSeqPtr = nbSeq; } /* FSE table descriptors */ { U32 const LLtype = *ip >> 6; U32 const Offtype = (*ip >> 4) & 3; U32 const MLtype = (*ip >> 2) & 3; ip++; /* check */ if (ip > iend-3) return ERROR(srcSize_wrong); /* min : all 3 are "raw", hence no header, but at least xxLog bits per type */ /* Build DTables */ { size_t const bhSize = ZSTDv06_buildSeqTable(DTableLL, LLtype, MaxLL, LLFSELog, ip, iend-ip, LL_defaultNorm, LL_defaultNormLog, flagRepeatTable); if (ZSTDv06_isError(bhSize)) return ERROR(corruption_detected); ip += bhSize; } { size_t const bhSize = ZSTDv06_buildSeqTable(DTableOffb, Offtype, MaxOff, OffFSELog, ip, iend-ip, OF_defaultNorm, OF_defaultNormLog, flagRepeatTable); if (ZSTDv06_isError(bhSize)) return ERROR(corruption_detected); ip += bhSize; } { size_t const bhSize = ZSTDv06_buildSeqTable(DTableML, MLtype, MaxML, MLFSELog, ip, iend-ip, ML_defaultNorm, ML_defaultNormLog, flagRepeatTable); if (ZSTDv06_isError(bhSize)) return ERROR(corruption_detected); ip += bhSize; } } return ip-istart; } typedef struct { size_t litLength; size_t matchLength; size_t offset; } 
seq_t; typedef struct { BITv06_DStream_t DStream; FSEv06_DState_t stateLL; FSEv06_DState_t stateOffb; FSEv06_DState_t stateML; size_t prevOffset[ZSTDv06_REP_INIT]; } seqState_t; static void ZSTDv06_decodeSequence(seq_t* seq, seqState_t* seqState) { /* Literal length */ U32 const llCode = FSEv06_peekSymbol(&(seqState->stateLL)); U32 const mlCode = FSEv06_peekSymbol(&(seqState->stateML)); U32 const ofCode = FSEv06_peekSymbol(&(seqState->stateOffb)); /* <= maxOff, by table construction */ U32 const llBits = LL_bits[llCode]; U32 const mlBits = ML_bits[mlCode]; U32 const ofBits = ofCode; U32 const totalBits = llBits+mlBits+ofBits; static const U32 LL_base[MaxLL+1] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 20, 22, 24, 28, 32, 40, 48, 64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, 0x4000, 0x8000, 0x10000 }; static const U32 ML_base[MaxML+1] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 36, 38, 40, 44, 48, 56, 64, 80, 96, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000, 0x2000, 0x4000, 0x8000, 0x10000 }; static const U32 OF_base[MaxOff+1] = { 0, 1, 3, 7, 0xF, 0x1F, 0x3F, 0x7F, 0xFF, 0x1FF, 0x3FF, 0x7FF, 0xFFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF, 0x1FFFF, 0x3FFFF, 0x7FFFF, 0xFFFFF, 0x1FFFFF, 0x3FFFFF, 0x7FFFFF, 0xFFFFFF, 0x1FFFFFF, 0x3FFFFFF, /*fake*/ 1, 1 }; /* sequence */ { size_t offset; if (!ofCode) offset = 0; else { offset = OF_base[ofCode] + BITv06_readBits(&(seqState->DStream), ofBits); /* <= 26 bits */ if (MEM_32bits()) BITv06_reloadDStream(&(seqState->DStream)); } if (offset < ZSTDv06_REP_NUM) { if (llCode == 0 && offset <= 1) offset = 1-offset; if (offset != 0) { size_t temp = seqState->prevOffset[offset]; if (offset != 1) { seqState->prevOffset[2] = seqState->prevOffset[1]; } seqState->prevOffset[1] = seqState->prevOffset[0]; seqState->prevOffset[0] = offset = temp; } else { offset = seqState->prevOffset[0]; } } else { offset -= ZSTDv06_REP_MOVE; seqState->prevOffset[2] = seqState->prevOffset[1]; seqState->prevOffset[1] = seqState->prevOffset[0]; seqState->prevOffset[0] = offset; } seq->offset = offset; } seq->matchLength = ML_base[mlCode] + MINMATCH + ((mlCode>31) ? BITv06_readBits(&(seqState->DStream), mlBits) : 0); /* <= 16 bits */ if (MEM_32bits() && (mlBits+llBits>24)) BITv06_reloadDStream(&(seqState->DStream)); seq->litLength = LL_base[llCode] + ((llCode>15) ? 
BITv06_readBits(&(seqState->DStream), llBits) : 0); /* <= 16 bits */ if (MEM_32bits() || (totalBits > 64 - 7 - (LLFSELog+MLFSELog+OffFSELog)) ) BITv06_reloadDStream(&(seqState->DStream)); /* ANS state update */ FSEv06_updateState(&(seqState->stateLL), &(seqState->DStream)); /* <= 9 bits */ FSEv06_updateState(&(seqState->stateML), &(seqState->DStream)); /* <= 9 bits */ if (MEM_32bits()) BITv06_reloadDStream(&(seqState->DStream)); /* <= 18 bits */ FSEv06_updateState(&(seqState->stateOffb), &(seqState->DStream)); /* <= 8 bits */ } size_t ZSTDv06_execSequence(BYTE* op, BYTE* const oend, seq_t sequence, const BYTE** litPtr, const BYTE* const litLimit, const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd) { BYTE* const oLitEnd = op + sequence.litLength; size_t const sequenceLength = sequence.litLength + sequence.matchLength; BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */ BYTE* const oend_8 = oend-8; const BYTE* const iLitEnd = *litPtr + sequence.litLength; const BYTE* match = oLitEnd - sequence.offset; /* check */ if (oLitEnd > oend_8) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of 8 from oend */ if (oMatchEnd > oend) return ERROR(dstSize_tooSmall); /* overwrite beyond dst buffer */ if (iLitEnd > litLimit) return ERROR(corruption_detected); /* over-read beyond lit buffer */ /* copy Literals */ ZSTDv06_wildcopy(op, *litPtr, sequence.litLength); /* note : oLitEnd <= oend-8 : no risk of overwrite beyond oend */ op = oLitEnd; *litPtr = iLitEnd; /* update for next sequence */ /* copy Match */ if (sequence.offset > (size_t)(oLitEnd - base)) { /* offset beyond prefix */ if (sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected); match = dictEnd - (base-match); if (match + sequence.matchLength <= dictEnd) { memmove(oLitEnd, match, sequence.matchLength); return sequenceLength; } /* span extDict & currentPrefixSegment */ { size_t const length1 = dictEnd - match; memmove(oLitEnd, match, length1); op = oLitEnd + length1; sequence.matchLength -= length1; match = base; if (op > oend_8 || sequence.matchLength < MINMATCH) { while (op < oMatchEnd) *op++ = *match++; return sequenceLength; } } } /* Requirement: op <= oend_8 */ /* match within prefix */ if (sequence.offset < 8) { /* close range match, overlap */ static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 }; /* added */ static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 }; /* substracted */ int const sub2 = dec64table[sequence.offset]; op[0] = match[0]; op[1] = match[1]; op[2] = match[2]; op[3] = match[3]; match += dec32table[sequence.offset]; ZSTDv06_copy4(op+4, match); match -= sub2; } else { ZSTDv06_copy8(op, match); } op += 8; match += 8; if (oMatchEnd > oend-(16-MINMATCH)) { if (op < oend_8) { ZSTDv06_wildcopy(op, match, oend_8 - op); match += oend_8 - op; op = oend_8; } while (op < oMatchEnd) *op++ = *match++; } else { ZSTDv06_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8); /* works even if matchLength < 8 */ } return sequenceLength; } static size_t ZSTDv06_decompressSequences( ZSTDv06_DCtx* dctx, void* dst, size_t maxDstSize, const void* seqStart, size_t seqSize) { const BYTE* ip = (const BYTE*)seqStart; const BYTE* const iend = ip + seqSize; BYTE* const ostart = (BYTE* const)dst; BYTE* const oend = ostart + maxDstSize; BYTE* op = ostart; const BYTE* litPtr = dctx->litPtr; const BYTE* const litEnd = litPtr + dctx->litSize; FSEv06_DTable* DTableLL = dctx->LLTable; FSEv06_DTable* DTableML = dctx->MLTable; 
FSEv06_DTable* DTableOffb = dctx->OffTable;
    const BYTE* const base = (const BYTE*) (dctx->base);
    const BYTE* const vBase = (const BYTE*) (dctx->vBase);
    const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
    int nbSeq;

    /* Build Decoding Tables */
    {   size_t const seqHSize = ZSTDv06_decodeSeqHeaders(&nbSeq, DTableLL, DTableML, DTableOffb, dctx->flagRepeatTable, ip, seqSize);
        if (ZSTDv06_isError(seqHSize)) return seqHSize;
        ip += seqHSize;
        dctx->flagRepeatTable = 0;
    }

    /* Regen sequences */
    if (nbSeq) {
        seq_t sequence;
        seqState_t seqState;

        memset(&sequence, 0, sizeof(sequence));
        sequence.offset = REPCODE_STARTVALUE;
        { U32 i; for (i=0; i<ZSTDv06_REP_INIT; i++) seqState.prevOffset[i] = REPCODE_STARTVALUE; }
        { size_t const errorCode = BITv06_initDStream(&(seqState.DStream), ip, iend-ip);
          if (ZSTDv06_isError(errorCode)) return ERROR(corruption_detected); }
        FSEv06_initDState(&(seqState.stateLL), &(seqState.DStream), DTableLL);
        FSEv06_initDState(&(seqState.stateOffb), &(seqState.DStream), DTableOffb);
        FSEv06_initDState(&(seqState.stateML), &(seqState.DStream), DTableML);

        for ( ; (BITv06_reloadDStream(&(seqState.DStream)) <= BITv06_DStream_completed) && nbSeq ; ) {
            nbSeq--;
            ZSTDv06_decodeSequence(&sequence, &seqState);

#if 0   /* debug */
            static BYTE* start = NULL;
            if (start==NULL) start = op;
            size_t pos = (size_t)(op-start);
            if ((pos >= 5810037) && (pos < 5810400))
                printf("Dpos %6u :%5u literals & match %3u bytes at distance %6u \n",
                       pos, (U32)sequence.litLength, (U32)sequence.matchLength, (U32)sequence.offset);
#endif

            {   size_t const oneSeqSize = ZSTDv06_execSequence(op, oend, sequence, &litPtr, litEnd, base, vBase, dictEnd);
                if (ZSTDv06_isError(oneSeqSize)) return oneSeqSize;
                op += oneSeqSize;
        }   }

        /* check if reached exact end */
        if (nbSeq) return ERROR(corruption_detected);
    }

    /* last literal segment */
    {   size_t const lastLLSize = litEnd - litPtr;
        if (litPtr > litEnd) return ERROR(corruption_detected);   /* too many literals already used */
        if (op+lastLLSize > oend) return ERROR(dstSize_tooSmall);
        memcpy(op, litPtr, lastLLSize);
        op += lastLLSize;
    }

    return op-ostart;
}


static void ZSTDv06_checkContinuity(ZSTDv06_DCtx* dctx, const void* dst)
{
    if (dst != dctx->previousDstEnd) {   /* not contiguous */
        dctx->dictEnd = dctx->previousDstEnd;
        dctx->vBase = (const char*)dst - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->base));
        dctx->base = dst;
        dctx->previousDstEnd = dst;
    }
}


static size_t ZSTDv06_decompressBlock_internal(ZSTDv06_DCtx* dctx,
                            void* dst, size_t dstCapacity,
                      const void* src, size_t srcSize)
{   /* blockType == blockCompressed */
    const BYTE* ip = (const BYTE*)src;

    if (srcSize >= ZSTDv06_BLOCKSIZE_MAX) return ERROR(srcSize_wrong);

    /* Decode literals sub-block */
    {   size_t const litCSize = ZSTDv06_decodeLiteralsBlock(dctx, src, srcSize);
        if (ZSTDv06_isError(litCSize)) return litCSize;
        ip += litCSize;
        srcSize -= litCSize;
    }

    return ZSTDv06_decompressSequences(dctx, dst, dstCapacity, ip, srcSize);
}


size_t ZSTDv06_decompressBlock(ZSTDv06_DCtx* dctx,
                            void* dst, size_t dstCapacity,
                      const void* src, size_t srcSize)
{
    ZSTDv06_checkContinuity(dctx, dst);
    return ZSTDv06_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize);
}


/*!
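 Illustrative sketch (not from the original zstd sources) : one-shot decoding of
    a single compressed block with the block-level API above. The helper name
    `exampleDecodeOneBlock` and its arguments are assumptions for illustration. */
#if 0
static size_t exampleDecodeOneBlock(void* dst, size_t dstCapacity,
                                    const void* cBlock, size_t cBlockSize)
{
    ZSTDv06_DCtx* const dctx = ZSTDv06_createDCtx();
    size_t rSize;
    if (dctx==NULL) return ERROR(memory_allocation);
    ZSTDv06_decompressBegin(dctx);   /* reset entropy tables and window tracking */
    rSize = ZSTDv06_decompressBlock(dctx, dst, dstCapacity, cBlock, cBlockSize);
    ZSTDv06_freeDCtx(dctx);
    return rSize;   /* nb of regenerated bytes, or an error code */
}
#endif
/*!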
ZSTDv06_decompressFrame() : * `dctx` must be properly initialized */ static size_t ZSTDv06_decompressFrame(ZSTDv06_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; const BYTE* const iend = ip + srcSize; BYTE* const ostart = (BYTE* const)dst; BYTE* op = ostart; BYTE* const oend = ostart + dstCapacity; size_t remainingSize = srcSize; blockProperties_t blockProperties = { bt_compressed, 0 }; /* check */ if (srcSize < ZSTDv06_frameHeaderSize_min+ZSTDv06_blockHeaderSize) return ERROR(srcSize_wrong); /* Frame Header */ { size_t const frameHeaderSize = ZSTDv06_frameHeaderSize(src, ZSTDv06_frameHeaderSize_min); if (ZSTDv06_isError(frameHeaderSize)) return frameHeaderSize; if (srcSize < frameHeaderSize+ZSTDv06_blockHeaderSize) return ERROR(srcSize_wrong); if (ZSTDv06_decodeFrameHeader(dctx, src, frameHeaderSize)) return ERROR(corruption_detected); ip += frameHeaderSize; remainingSize -= frameHeaderSize; } /* Loop on each block */ while (1) { size_t decodedSize=0; size_t const cBlockSize = ZSTDv06_getcBlockSize(ip, iend-ip, &blockProperties); if (ZSTDv06_isError(cBlockSize)) return cBlockSize; ip += ZSTDv06_blockHeaderSize; remainingSize -= ZSTDv06_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); switch(blockProperties.blockType) { case bt_compressed: decodedSize = ZSTDv06_decompressBlock_internal(dctx, op, oend-op, ip, cBlockSize); break; case bt_raw : decodedSize = ZSTDv06_copyRawBlock(op, oend-op, ip, cBlockSize); break; case bt_rle : return ERROR(GENERIC); /* not yet supported */ break; case bt_end : /* end of frame */ if (remainingSize) return ERROR(srcSize_wrong); break; default: return ERROR(GENERIC); /* impossible */ } if (cBlockSize == 0) break; /* bt_end */ if (ZSTDv06_isError(decodedSize)) return decodedSize; op += decodedSize; ip += cBlockSize; remainingSize -= cBlockSize; } return op-ostart; } size_t ZSTDv06_decompress_usingPreparedDCtx(ZSTDv06_DCtx* dctx, const ZSTDv06_DCtx* refDCtx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { ZSTDv06_copyDCtx(dctx, refDCtx); ZSTDv06_checkContinuity(dctx, dst); return ZSTDv06_decompressFrame(dctx, dst, dstCapacity, src, srcSize); } size_t ZSTDv06_decompress_usingDict(ZSTDv06_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const void* dict, size_t dictSize) { ZSTDv06_decompressBegin_usingDict(dctx, dict, dictSize); ZSTDv06_checkContinuity(dctx, dst); return ZSTDv06_decompressFrame(dctx, dst, dstCapacity, src, srcSize); } size_t ZSTDv06_decompressDCtx(ZSTDv06_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { return ZSTDv06_decompress_usingDict(dctx, dst, dstCapacity, src, srcSize, NULL, 0); } size_t ZSTDv06_decompress(void* dst, size_t dstCapacity, const void* src, size_t srcSize) { #if defined(ZSTDv06_HEAPMODE) && (ZSTDv06_HEAPMODE==1) size_t regenSize; ZSTDv06_DCtx* dctx = ZSTDv06_createDCtx(); if (dctx==NULL) return ERROR(memory_allocation); regenSize = ZSTDv06_decompressDCtx(dctx, dst, dstCapacity, src, srcSize); ZSTDv06_freeDCtx(dctx); return regenSize; #else /* stack mode */ ZSTDv06_DCtx dctx; return ZSTDv06_decompressDCtx(&dctx, dst, dstCapacity, src, srcSize); #endif } size_t ZSTDv06_findFrameCompressedSize(const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; size_t remainingSize = srcSize; blockProperties_t blockProperties = { bt_compressed, 0 }; /* Frame Header */ { size_t const frameHeaderSize = ZSTDv06_frameHeaderSize(src, ZSTDv06_frameHeaderSize_min); if 
(ZSTDv06_isError(frameHeaderSize)) return frameHeaderSize; if (MEM_readLE32(src) != ZSTDv06_MAGICNUMBER) return ERROR(prefix_unknown); if (srcSize < frameHeaderSize+ZSTDv06_blockHeaderSize) return ERROR(srcSize_wrong); ip += frameHeaderSize; remainingSize -= frameHeaderSize; } /* Loop on each block */ while (1) { size_t const cBlockSize = ZSTDv06_getcBlockSize(ip, remainingSize, &blockProperties); if (ZSTDv06_isError(cBlockSize)) return cBlockSize; ip += ZSTDv06_blockHeaderSize; remainingSize -= ZSTDv06_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); if (cBlockSize == 0) break; /* bt_end */ ip += cBlockSize; remainingSize -= cBlockSize; } return ip - (const BYTE*)src; } /*_****************************** * Streaming Decompression API ********************************/ size_t ZSTDv06_nextSrcSizeToDecompress(ZSTDv06_DCtx* dctx) { return dctx->expected; } size_t ZSTDv06_decompressContinue(ZSTDv06_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize) { /* Sanity check */ if (srcSize != dctx->expected) return ERROR(srcSize_wrong); if (dstCapacity) ZSTDv06_checkContinuity(dctx, dst); /* Decompress : frame header; part 1 */ switch (dctx->stage) { case ZSTDds_getFrameHeaderSize : if (srcSize != ZSTDv06_frameHeaderSize_min) return ERROR(srcSize_wrong); /* impossible */ dctx->headerSize = ZSTDv06_frameHeaderSize(src, ZSTDv06_frameHeaderSize_min); if (ZSTDv06_isError(dctx->headerSize)) return dctx->headerSize; memcpy(dctx->headerBuffer, src, ZSTDv06_frameHeaderSize_min); if (dctx->headerSize > ZSTDv06_frameHeaderSize_min) { dctx->expected = dctx->headerSize - ZSTDv06_frameHeaderSize_min; dctx->stage = ZSTDds_decodeFrameHeader; return 0; } dctx->expected = 0; /* not necessary to copy more */ /* fall-through */ case ZSTDds_decodeFrameHeader: { size_t result; memcpy(dctx->headerBuffer + ZSTDv06_frameHeaderSize_min, src, dctx->expected); result = ZSTDv06_decodeFrameHeader(dctx, dctx->headerBuffer, dctx->headerSize); if (ZSTDv06_isError(result)) return result; dctx->expected = ZSTDv06_blockHeaderSize; dctx->stage = ZSTDds_decodeBlockHeader; return 0; } case ZSTDds_decodeBlockHeader: { blockProperties_t bp; size_t const cBlockSize = ZSTDv06_getcBlockSize(src, ZSTDv06_blockHeaderSize, &bp); if (ZSTDv06_isError(cBlockSize)) return cBlockSize; if (bp.blockType == bt_end) { dctx->expected = 0; dctx->stage = ZSTDds_getFrameHeaderSize; } else { dctx->expected = cBlockSize; dctx->bType = bp.blockType; dctx->stage = ZSTDds_decompressBlock; } return 0; } case ZSTDds_decompressBlock: { size_t rSize; switch(dctx->bType) { case bt_compressed: rSize = ZSTDv06_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize); break; case bt_raw : rSize = ZSTDv06_copyRawBlock(dst, dstCapacity, src, srcSize); break; case bt_rle : return ERROR(GENERIC); /* not yet handled */ break; case bt_end : /* should never happen (filtered at phase 1) */ rSize = 0; break; default: return ERROR(GENERIC); /* impossible */ } dctx->stage = ZSTDds_decodeBlockHeader; dctx->expected = ZSTDv06_blockHeaderSize; dctx->previousDstEnd = (char*)dst + rSize; return rSize; } default: return ERROR(GENERIC); /* impossible */ } } static void ZSTDv06_refDictContent(ZSTDv06_DCtx* dctx, const void* dict, size_t dictSize) { dctx->dictEnd = dctx->previousDstEnd; dctx->vBase = (const char*)dict - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->base)); dctx->base = dict; dctx->previousDstEnd = (const char*)dict + dictSize; } static size_t ZSTDv06_loadEntropy(ZSTDv06_DCtx* dctx, const void* dict, size_t 
dictSize) { size_t hSize, offcodeHeaderSize, matchlengthHeaderSize, litlengthHeaderSize; hSize = HUFv06_readDTableX4(dctx->hufTableX4, dict, dictSize); if (HUFv06_isError(hSize)) return ERROR(dictionary_corrupted); dict = (const char*)dict + hSize; dictSize -= hSize; { short offcodeNCount[MaxOff+1]; U32 offcodeMaxValue=MaxOff, offcodeLog; offcodeHeaderSize = FSEv06_readNCount(offcodeNCount, &offcodeMaxValue, &offcodeLog, dict, dictSize); if (FSEv06_isError(offcodeHeaderSize)) return ERROR(dictionary_corrupted); if (offcodeLog > OffFSELog) return ERROR(dictionary_corrupted); { size_t const errorCode = FSEv06_buildDTable(dctx->OffTable, offcodeNCount, offcodeMaxValue, offcodeLog); if (FSEv06_isError(errorCode)) return ERROR(dictionary_corrupted); } dict = (const char*)dict + offcodeHeaderSize; dictSize -= offcodeHeaderSize; } { short matchlengthNCount[MaxML+1]; unsigned matchlengthMaxValue = MaxML, matchlengthLog; matchlengthHeaderSize = FSEv06_readNCount(matchlengthNCount, &matchlengthMaxValue, &matchlengthLog, dict, dictSize); if (FSEv06_isError(matchlengthHeaderSize)) return ERROR(dictionary_corrupted); if (matchlengthLog > MLFSELog) return ERROR(dictionary_corrupted); { size_t const errorCode = FSEv06_buildDTable(dctx->MLTable, matchlengthNCount, matchlengthMaxValue, matchlengthLog); if (FSEv06_isError(errorCode)) return ERROR(dictionary_corrupted); } dict = (const char*)dict + matchlengthHeaderSize; dictSize -= matchlengthHeaderSize; } { short litlengthNCount[MaxLL+1]; unsigned litlengthMaxValue = MaxLL, litlengthLog; litlengthHeaderSize = FSEv06_readNCount(litlengthNCount, &litlengthMaxValue, &litlengthLog, dict, dictSize); if (FSEv06_isError(litlengthHeaderSize)) return ERROR(dictionary_corrupted); if (litlengthLog > LLFSELog) return ERROR(dictionary_corrupted); { size_t const errorCode = FSEv06_buildDTable(dctx->LLTable, litlengthNCount, litlengthMaxValue, litlengthLog); if (FSEv06_isError(errorCode)) return ERROR(dictionary_corrupted); } } dctx->flagRepeatTable = 1; return hSize + offcodeHeaderSize + matchlengthHeaderSize + litlengthHeaderSize; } static size_t ZSTDv06_decompress_insertDictionary(ZSTDv06_DCtx* dctx, const void* dict, size_t dictSize) { size_t eSize; U32 const magic = MEM_readLE32(dict); if (magic != ZSTDv06_DICT_MAGIC) { /* pure content mode */ ZSTDv06_refDictContent(dctx, dict, dictSize); return 0; } /* load entropy tables */ dict = (const char*)dict + 4; dictSize -= 4; eSize = ZSTDv06_loadEntropy(dctx, dict, dictSize); if (ZSTDv06_isError(eSize)) return ERROR(dictionary_corrupted); /* reference dictionary content */ dict = (const char*)dict + eSize; dictSize -= eSize; ZSTDv06_refDictContent(dctx, dict, dictSize); return 0; } size_t ZSTDv06_decompressBegin_usingDict(ZSTDv06_DCtx* dctx, const void* dict, size_t dictSize) { { size_t const errorCode = ZSTDv06_decompressBegin(dctx); if (ZSTDv06_isError(errorCode)) return errorCode; } if (dict && dictSize) { size_t const errorCode = ZSTDv06_decompress_insertDictionary(dctx, dict, dictSize); if (ZSTDv06_isError(errorCode)) return ERROR(dictionary_corrupted); } return 0; } /* Buffered version of Zstd compression library Copyright (C) 2015-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd homepage : http://www.zstd.net/ */ /*-*************************************************************************** * Streaming decompression howto * * A ZBUFFv06_DCtx object is required to track streaming operations. * Use ZBUFFv06_createDCtx() and ZBUFFv06_freeDCtx() to create/release resources. * Use ZBUFFv06_decompressInit() to start a new decompression operation, * or ZBUFFv06_decompressInitDictionary() if decompression requires a dictionary. * Note that ZBUFFv06_DCtx objects can be re-init multiple times. * * Use ZBUFFv06_decompressContinue() repetitively to consume your input. * *srcSizePtr and *dstCapacityPtr can be any size. * The function will report how many bytes were read or written by modifying *srcSizePtr and *dstCapacityPtr. * Note that it may not consume the entire input, in which case it's up to the caller to present remaining input again. * The content of @dst will be overwritten (up to *dstCapacityPtr) at each function call, so save its content if it matters, or change @dst. * @return : a hint to preferred nb of bytes to use as input for next function call (it's only a hint, to help latency), * or 0 when a frame is completely decoded, * or an error code, which can be tested using ZBUFFv06_isError(). * * Hint : recommended buffer sizes (not compulsory) : ZBUFFv06_recommendedDInSize() and ZBUFFv06_recommendedDOutSize() * output : ZBUFFv06_recommendedDOutSize==128 KB block size is the internal unit, it ensures it's always possible to write a full block when decoded. * input : ZBUFFv06_recommendedDInSize == 128KB + 3; * just follow indications from ZBUFFv06_decompressContinue() to minimize latency. It should always be <= 128 KB + 3 . 
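*
* Illustrative read/decompress/write loop (a sketch only, not from the original
* sources; `fin`, `fout` and the buffer variables are assumptions) :
*     ZBUFFv06_DCtx* zbd = ZBUFFv06_createDCtx();
*     size_t hint = ZBUFFv06_decompressInit(zbd);
*     size_t readSize = fread(inBuff, 1, ZBUFFv06_recommendedDInSize(), fin);
*     const char* ip = inBuff;
*     while (readSize && !ZBUFFv06_isError(hint)) {
*         size_t srcSize = readSize, dstSize = outBuffSize;
*         hint = ZBUFFv06_decompressContinue(zbd, outBuff, &dstSize, ip, &srcSize);
*         fwrite(outBuff, 1, dstSize, fout);      // flush whatever was produced
*         ip += srcSize; readSize -= srcSize;     // input may be only partially consumed
*         if (hint == 0) break;                   // frame completely decoded
*         if (readSize == 0) { readSize = fread(inBuff, 1, ZBUFFv06_recommendedDInSize(), fin); ip = inBuff; }
*     }
*     ZBUFFv06_freeDCtx(zbd);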
* *******************************************************************************/ typedef enum { ZBUFFds_init, ZBUFFds_loadHeader, ZBUFFds_read, ZBUFFds_load, ZBUFFds_flush } ZBUFFv06_dStage; /* *** Resource management *** */ struct ZBUFFv06_DCtx_s { ZSTDv06_DCtx* zd; ZSTDv06_frameParams fParams; ZBUFFv06_dStage stage; char* inBuff; size_t inBuffSize; size_t inPos; char* outBuff; size_t outBuffSize; size_t outStart; size_t outEnd; size_t blockSize; BYTE headerBuffer[ZSTDv06_FRAMEHEADERSIZE_MAX]; size_t lhSize; }; /* typedef'd to ZBUFFv06_DCtx within "zstd_buffered.h" */ ZBUFFv06_DCtx* ZBUFFv06_createDCtx(void) { ZBUFFv06_DCtx* zbd = (ZBUFFv06_DCtx*)malloc(sizeof(ZBUFFv06_DCtx)); if (zbd==NULL) return NULL; memset(zbd, 0, sizeof(*zbd)); zbd->zd = ZSTDv06_createDCtx(); zbd->stage = ZBUFFds_init; return zbd; } size_t ZBUFFv06_freeDCtx(ZBUFFv06_DCtx* zbd) { if (zbd==NULL) return 0; /* support free on null */ ZSTDv06_freeDCtx(zbd->zd); free(zbd->inBuff); free(zbd->outBuff); free(zbd); return 0; } /* *** Initialization *** */ size_t ZBUFFv06_decompressInitDictionary(ZBUFFv06_DCtx* zbd, const void* dict, size_t dictSize) { zbd->stage = ZBUFFds_loadHeader; zbd->lhSize = zbd->inPos = zbd->outStart = zbd->outEnd = 0; return ZSTDv06_decompressBegin_usingDict(zbd->zd, dict, dictSize); } size_t ZBUFFv06_decompressInit(ZBUFFv06_DCtx* zbd) { return ZBUFFv06_decompressInitDictionary(zbd, NULL, 0); } MEM_STATIC size_t ZBUFFv06_limitCopy(void* dst, size_t dstCapacity, const void* src, size_t srcSize) { size_t length = MIN(dstCapacity, srcSize); memcpy(dst, src, length); return length; } /* *** Decompression *** */ size_t ZBUFFv06_decompressContinue(ZBUFFv06_DCtx* zbd, void* dst, size_t* dstCapacityPtr, const void* src, size_t* srcSizePtr) { const char* const istart = (const char*)src; const char* const iend = istart + *srcSizePtr; const char* ip = istart; char* const ostart = (char*)dst; char* const oend = ostart + *dstCapacityPtr; char* op = ostart; U32 notDone = 1; while (notDone) { switch(zbd->stage) { case ZBUFFds_init : return ERROR(init_missing); case ZBUFFds_loadHeader : { size_t const hSize = ZSTDv06_getFrameParams(&(zbd->fParams), zbd->headerBuffer, zbd->lhSize); if (hSize != 0) { size_t const toLoad = hSize - zbd->lhSize; /* if hSize!=0, hSize > zbd->lhSize */ if (ZSTDv06_isError(hSize)) return hSize; if (toLoad > (size_t)(iend-ip)) { /* not enough input to load full header */ memcpy(zbd->headerBuffer + zbd->lhSize, ip, iend-ip); zbd->lhSize += iend-ip; ip = iend; notDone = 0; *dstCapacityPtr = 0; return (hSize - zbd->lhSize) + ZSTDv06_blockHeaderSize; /* remaining header bytes + next block header */ } memcpy(zbd->headerBuffer + zbd->lhSize, ip, toLoad); zbd->lhSize = hSize; ip += toLoad; break; } } /* Consume header */ { size_t const h1Size = ZSTDv06_nextSrcSizeToDecompress(zbd->zd); /* == ZSTDv06_frameHeaderSize_min */ size_t const h1Result = ZSTDv06_decompressContinue(zbd->zd, NULL, 0, zbd->headerBuffer, h1Size); if (ZSTDv06_isError(h1Result)) return h1Result; if (h1Size < zbd->lhSize) { /* long header */ size_t const h2Size = ZSTDv06_nextSrcSizeToDecompress(zbd->zd); size_t const h2Result = ZSTDv06_decompressContinue(zbd->zd, NULL, 0, zbd->headerBuffer+h1Size, h2Size); if (ZSTDv06_isError(h2Result)) return h2Result; } } /* Frame header instruct buffer sizes */ { size_t const blockSize = MIN(1 << zbd->fParams.windowLog, ZSTDv06_BLOCKSIZE_MAX); zbd->blockSize = blockSize; if (zbd->inBuffSize < blockSize) { free(zbd->inBuff); zbd->inBuffSize = blockSize; zbd->inBuff = (char*)malloc(blockSize); 
if (zbd->inBuff == NULL) return ERROR(memory_allocation); } { size_t const neededOutSize = ((size_t)1 << zbd->fParams.windowLog) + blockSize + WILDCOPY_OVERLENGTH * 2; if (zbd->outBuffSize < neededOutSize) { free(zbd->outBuff); zbd->outBuffSize = neededOutSize; zbd->outBuff = (char*)malloc(neededOutSize); if (zbd->outBuff == NULL) return ERROR(memory_allocation); } } } zbd->stage = ZBUFFds_read; /* fall-through */ case ZBUFFds_read: { size_t const neededInSize = ZSTDv06_nextSrcSizeToDecompress(zbd->zd); if (neededInSize==0) { /* end of frame */ zbd->stage = ZBUFFds_init; notDone = 0; break; } if ((size_t)(iend-ip) >= neededInSize) { /* decode directly from src */ size_t const decodedSize = ZSTDv06_decompressContinue(zbd->zd, zbd->outBuff + zbd->outStart, zbd->outBuffSize - zbd->outStart, ip, neededInSize); if (ZSTDv06_isError(decodedSize)) return decodedSize; ip += neededInSize; if (!decodedSize) break; /* this was just a header */ zbd->outEnd = zbd->outStart + decodedSize; zbd->stage = ZBUFFds_flush; break; } if (ip==iend) { notDone = 0; break; } /* no more input */ zbd->stage = ZBUFFds_load; } /* fall-through */ case ZBUFFds_load: { size_t const neededInSize = ZSTDv06_nextSrcSizeToDecompress(zbd->zd); size_t const toLoad = neededInSize - zbd->inPos; /* should always be <= remaining space within inBuff */ size_t loadedSize; if (toLoad > zbd->inBuffSize - zbd->inPos) return ERROR(corruption_detected); /* should never happen */ loadedSize = ZBUFFv06_limitCopy(zbd->inBuff + zbd->inPos, toLoad, ip, iend-ip); ip += loadedSize; zbd->inPos += loadedSize; if (loadedSize < toLoad) { notDone = 0; break; } /* not enough input, wait for more */ /* decode loaded input */ { size_t const decodedSize = ZSTDv06_decompressContinue(zbd->zd, zbd->outBuff + zbd->outStart, zbd->outBuffSize - zbd->outStart, zbd->inBuff, neededInSize); if (ZSTDv06_isError(decodedSize)) return decodedSize; zbd->inPos = 0; /* input is consumed */ if (!decodedSize) { zbd->stage = ZBUFFds_read; break; } /* this was just a header */ zbd->outEnd = zbd->outStart + decodedSize; zbd->stage = ZBUFFds_flush; // break; /* ZBUFFds_flush follows */ } } /* fall-through */ case ZBUFFds_flush: { size_t const toFlushSize = zbd->outEnd - zbd->outStart; size_t const flushedSize = ZBUFFv06_limitCopy(op, oend-op, zbd->outBuff + zbd->outStart, toFlushSize); op += flushedSize; zbd->outStart += flushedSize; if (flushedSize == toFlushSize) { zbd->stage = ZBUFFds_read; if (zbd->outStart + zbd->blockSize > zbd->outBuffSize) zbd->outStart = zbd->outEnd = 0; break; } /* cannot flush everything */ notDone = 0; break; } default: return ERROR(GENERIC); /* impossible */ } } /* result */ *srcSizePtr = ip-istart; *dstCapacityPtr = op-ostart; { size_t nextSrcSizeHint = ZSTDv06_nextSrcSizeToDecompress(zbd->zd); if (nextSrcSizeHint > ZSTDv06_blockHeaderSize) nextSrcSizeHint+= ZSTDv06_blockHeaderSize; /* get following block header too */ nextSrcSizeHint -= zbd->inPos; /* already loaded*/ return nextSrcSizeHint; } } /* ************************************* * Tool functions ***************************************/ size_t ZBUFFv06_recommendedDInSize(void) { return ZSTDv06_BLOCKSIZE_MAX + ZSTDv06_blockHeaderSize /* block header size*/ ; } size_t ZBUFFv06_recommendedDOutSize(void) { return ZSTDv06_BLOCKSIZE_MAX; } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v03.c0000644000175000017500000034405313257361140023465 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. 
*
* This source code is licensed under both the BSD-style license (found in the
* LICENSE file in the root directory of this source tree) and the GPLv2 (found
* in the COPYING file in the root directory of this source tree).
* You may select, at your option, one of the above-listed licenses.
*/

#include <stddef.h>    /* size_t, ptrdiff_t */
#include "zstd_v03.h"
#include "error_private.h"


/******************************************
*  Compiler-specific
******************************************/
#if defined(_MSC_VER)   /* Visual Studio */
#   include <stdlib.h>  /* _byteswap_ulong */
#   include <intrin.h>  /* _byteswap_* */
#endif

/* ******************************************************************
   mem.h
   low-level memory access routines
   Copyright (C) 2013-2015, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are
   met:

       * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
       * Redistributions in binary form must reproduce the above
   copyright notice, this list of conditions and the following disclaimer
   in the documentation and/or other materials provided with the
   distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
   - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
   - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */
#ifndef MEM_H_MODULE
#define MEM_H_MODULE

#if defined (__cplusplus)
extern "C" {
#endif

/******************************************
*  Includes
******************************************/
#include <stddef.h>    /* size_t, ptrdiff_t */
#include <string.h>    /* memcpy */


/******************************************
*  Compiler-specific
******************************************/
#if defined(__GNUC__)
#  define MEM_STATIC static __attribute__((unused))
#elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
#  define MEM_STATIC static inline
#elif defined(_MSC_VER)
#  define MEM_STATIC static __inline
#else
#  define MEM_STATIC static  /* this version may generate warnings for unused static functions; disable the relevant warning */
#endif


/****************************************************************
*  Basic Types
*****************************************************************/
#if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# include <stdint.h>
typedef  uint8_t BYTE;
typedef uint16_t U16;
typedef  int16_t S16;
typedef uint32_t U32;
typedef  int32_t S32;
typedef uint64_t U64;
typedef  int64_t S64;
#else
typedef unsigned char       BYTE;
typedef unsigned short      U16;
typedef   signed short      S16;
typedef unsigned int        U32;
typedef   signed int        S32;
typedef unsigned long long  U64;
typedef   signed long long  S64;
#endif


/****************************************************************
*  Memory I/O
*****************************************************************/
/* MEM_FORCE_MEMORY_ACCESS
 * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.
 * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
 * The switch below allows selecting a different access method for improved performance.
 * Method 0 (default) : use `memcpy()`. Safe and portable.
 * Method 1 : `__packed` statement. It depends on a compiler extension (i.e., not portable).
 *            This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`.
 * Method 2 : direct access. This method is portable but violates the C standard.
 *            It can generate buggy code on targets where generated assembly depends on alignment.
 *            But in some circumstances, it's the only known way to get the most performance (e.g., GCC + ARMv6).
 * See http://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details.
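* For example (illustrative), method 1 can be forced at build time with :
*     cc -DMEM_FORCE_MEMORY_ACCESS=1 -c zstd_v03.c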
* Prefer these methods in priority order (0 > 1 > 2) */ #ifndef MEM_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */ # if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) ) # define MEM_FORCE_MEMORY_ACCESS 2 # elif (defined(__INTEL_COMPILER) && !defined(WIN32)) || \ (defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) )) # define MEM_FORCE_MEMORY_ACCESS 1 # endif #endif MEM_STATIC unsigned MEM_32bits(void) { return sizeof(void*)==4; } MEM_STATIC unsigned MEM_64bits(void) { return sizeof(void*)==8; } MEM_STATIC unsigned MEM_isLittleEndian(void) { const union { U32 u; BYTE c[4]; } one = { 1 }; /* don't use static : performance detrimental */ return one.c[0]; } #if defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==2) /* violates C standard on structure alignment. Only use if no other choice to achieve best performance on target platform */ MEM_STATIC U16 MEM_read16(const void* memPtr) { return *(const U16*) memPtr; } MEM_STATIC U32 MEM_read32(const void* memPtr) { return *(const U32*) memPtr; } MEM_STATIC U64 MEM_read64(const void* memPtr) { return *(const U64*) memPtr; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; } #elif defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==1) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U16 u16; U32 u32; U64 u64; } __attribute__((packed)) unalign; MEM_STATIC U16 MEM_read16(const void* ptr) { return ((const unalign*)ptr)->u16; } MEM_STATIC U32 MEM_read32(const void* ptr) { return ((const unalign*)ptr)->u32; } MEM_STATIC U64 MEM_read64(const void* ptr) { return ((const unalign*)ptr)->u64; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { ((unalign*)memPtr)->u16 = value; } #else /* default method, safe and standard. 
can sometimes prove slower */ MEM_STATIC U16 MEM_read16(const void* memPtr) { U16 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U32 MEM_read32(const void* memPtr) { U32 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U64 MEM_read64(const void* memPtr) { U64 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { memcpy(memPtr, &value, sizeof(value)); } #endif // MEM_FORCE_MEMORY_ACCESS MEM_STATIC U16 MEM_readLE16(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read16(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U16)(p[0] + (p[1]<<8)); } } MEM_STATIC void MEM_writeLE16(void* memPtr, U16 val) { if (MEM_isLittleEndian()) { MEM_write16(memPtr, val); } else { BYTE* p = (BYTE*)memPtr; p[0] = (BYTE)val; p[1] = (BYTE)(val>>8); } } MEM_STATIC U32 MEM_readLE32(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read32(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U32)((U32)p[0] + ((U32)p[1]<<8) + ((U32)p[2]<<16) + ((U32)p[3]<<24)); } } MEM_STATIC U64 MEM_readLE64(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read64(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U64)((U64)p[0] + ((U64)p[1]<<8) + ((U64)p[2]<<16) + ((U64)p[3]<<24) + ((U64)p[4]<<32) + ((U64)p[5]<<40) + ((U64)p[6]<<48) + ((U64)p[7]<<56)); } } MEM_STATIC size_t MEM_readLEST(const void* memPtr) { if (MEM_32bits()) return (size_t)MEM_readLE32(memPtr); else return (size_t)MEM_readLE64(memPtr); } #if defined (__cplusplus) } #endif #endif /* MEM_H_MODULE */ /* ****************************************************************** bitstream Part of NewGen Entropy library header file (to include) Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef BITSTREAM_H_MODULE #define BITSTREAM_H_MODULE #if defined (__cplusplus) extern "C" { #endif /* * This API consists of small unitary functions, which highly benefit from being inlined. 
* Since link-time-optimization is not available for all compilers, * these functions are defined into a .h to be included. */ /********************************************** * bitStream decompression API (read backward) **********************************************/ typedef struct { size_t bitContainer; unsigned bitsConsumed; const char* ptr; const char* start; } BIT_DStream_t; typedef enum { BIT_DStream_unfinished = 0, BIT_DStream_endOfBuffer = 1, BIT_DStream_completed = 2, BIT_DStream_overflow = 3 } BIT_DStream_status; /* result of BIT_reloadDStream() */ /* 1,2,4,8 would be better for bitmap combinations, but slows down performance a bit ... :( */ MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize); MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, unsigned nbBits); MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD); MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* bitD); /* * Start by invoking BIT_initDStream(). * A chunk of the bitStream is then stored into a local register. * Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (size_t). * You can then retrieve bitFields stored into the local register, **in reverse order**. * Local register is manually filled from memory by the BIT_reloadDStream() method. * A reload guarantee a minimum of ((8*sizeof(size_t))-7) bits when its result is BIT_DStream_unfinished. * Otherwise, it can be less than that, so proceed accordingly. * Checking if DStream has reached its end can be performed with BIT_endOfDStream() */ /****************************************** * unsafe API ******************************************/ MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, unsigned nbBits); /* faster, but works only if nbBits >= 1 */ /**************************************************************** * Helper functions ****************************************************************/ MEM_STATIC unsigned BIT_highbit32 (register U32 val) { # if defined(_MSC_VER) /* Visual */ unsigned long r=0; _BitScanReverse ( &r, val ); return (unsigned) r; # elif defined(__GNUC__) && (__GNUC__ >= 3) /* Use GCC Intrinsic */ return 31 - __builtin_clz (val); # else /* Software version */ static const unsigned DeBruijnClz[32] = { 0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31 }; U32 v = val; unsigned r; v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v |= v >> 16; r = DeBruijnClz[ (U32) (v * 0x07C4ACDDU) >> 27]; return r; # endif } /********************************************************** * bitStream decoding **********************************************************/ /*!BIT_initDStream * Initialize a BIT_DStream_t. 
* @bitD : a pointer to an already allocated BIT_DStream_t structure * @srcBuffer must point at the beginning of a bitStream * @srcSize must be the exact size of the bitStream * @result : size of stream (== srcSize) or an errorCode if a problem is detected */ MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize) { if (srcSize < 1) { memset(bitD, 0, sizeof(*bitD)); return ERROR(srcSize_wrong); } if (srcSize >= sizeof(size_t)) /* normal case */ { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = (const char*)srcBuffer + srcSize - sizeof(size_t); bitD->bitContainer = MEM_readLEST(bitD->ptr); contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BIT_highbit32(contain32); } else { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = bitD->start; bitD->bitContainer = *(const BYTE*)(bitD->start); switch(srcSize) { case 7: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[6]) << (sizeof(size_t)*8 - 16); case 6: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[5]) << (sizeof(size_t)*8 - 24); case 5: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[4]) << (sizeof(size_t)*8 - 32); case 4: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[3]) << 24; case 3: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[2]) << 16; case 2: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[1]) << 8; default:; } contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BIT_highbit32(contain32); bitD->bitsConsumed += (U32)(sizeof(size_t) - srcSize)*8; } return srcSize; } /*!BIT_lookBits * Provides next n bits from local register * local register is not modified (bits are still present for next read/look) * On 32-bits, maxNbBits==25 * On 64-bits, maxNbBits==57 * @return : value extracted */ MEM_STATIC size_t BIT_lookBits(BIT_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return ((bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> 1) >> ((bitMask-nbBits) & bitMask); } /*! BIT_lookBitsFast : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BIT_lookBitsFast(BIT_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return (bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> (((bitMask+1)-nbBits) & bitMask); } MEM_STATIC void BIT_skipBits(BIT_DStream_t* bitD, U32 nbBits) { bitD->bitsConsumed += nbBits; } /*!BIT_readBits * Read next n bits from local register. * pay attention to not read more than nbBits contained into local register. * @return : extracted value. 
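* Example (illustrative) : after BIT_initDStream(&bitD, src, srcSize),
*     size_t const low5 = BIT_readBits(&bitD, 5);
* extracts the next 5 bits from the local register and marks them consumed.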
*/ MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, U32 nbBits) { size_t value = BIT_lookBits(bitD, nbBits); BIT_skipBits(bitD, nbBits); return value; } /*!BIT_readBitsFast : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, U32 nbBits) { size_t value = BIT_lookBitsFast(bitD, nbBits); BIT_skipBits(bitD, nbBits); return value; } MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD) { if (bitD->bitsConsumed > (sizeof(bitD->bitContainer)*8)) /* should never happen */ return BIT_DStream_overflow; if (bitD->ptr >= bitD->start + sizeof(bitD->bitContainer)) { bitD->ptr -= bitD->bitsConsumed >> 3; bitD->bitsConsumed &= 7; bitD->bitContainer = MEM_readLEST(bitD->ptr); return BIT_DStream_unfinished; } if (bitD->ptr == bitD->start) { if (bitD->bitsConsumed < sizeof(bitD->bitContainer)*8) return BIT_DStream_endOfBuffer; return BIT_DStream_completed; } { U32 nbBytes = bitD->bitsConsumed >> 3; BIT_DStream_status result = BIT_DStream_unfinished; if (bitD->ptr - nbBytes < bitD->start) { nbBytes = (U32)(bitD->ptr - bitD->start); /* ptr > start */ result = BIT_DStream_endOfBuffer; } bitD->ptr -= nbBytes; bitD->bitsConsumed -= nbBytes*8; bitD->bitContainer = MEM_readLEST(bitD->ptr); /* reminder : srcSize > sizeof(bitD) */ return result; } } /*! BIT_endOfDStream * @return Tells if DStream has reached its exact end */ MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* DStream) { return ((DStream->ptr == DStream->start) && (DStream->bitsConsumed == sizeof(DStream->bitContainer)*8)); } #if defined (__cplusplus) } #endif #endif /* BITSTREAM_H_MODULE */ /* ****************************************************************** Error codes and messages Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef ERROR_H_MODULE #define ERROR_H_MODULE #if defined (__cplusplus) extern "C" { #endif /****************************************** * Compiler-specific ******************************************/ #if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) # define ERR_STATIC static inline #elif defined(_MSC_VER) # define ERR_STATIC static __inline #elif defined(__GNUC__) # define ERR_STATIC static __attribute__((unused)) #else # define ERR_STATIC static /* this version may generate warnings for unused static functions; disable the relevant warning */ #endif /****************************************** * Error Management ******************************************/ #define PREFIX(name) ZSTD_error_##name #define ERROR(name) (size_t)-PREFIX(name) #define ERROR_LIST(ITEM) \ ITEM(PREFIX(No_Error)) ITEM(PREFIX(GENERIC)) \ ITEM(PREFIX(dstSize_tooSmall)) ITEM(PREFIX(srcSize_wrong)) \ ITEM(PREFIX(prefix_unknown)) ITEM(PREFIX(corruption_detected)) \ ITEM(PREFIX(tableLog_tooLarge)) ITEM(PREFIX(maxSymbolValue_tooLarge)) ITEM(PREFIX(maxSymbolValue_tooSmall)) \ ITEM(PREFIX(maxCode)) #define ERROR_GENERATE_ENUM(ENUM) ENUM, typedef enum { ERROR_LIST(ERROR_GENERATE_ENUM) } ERR_codes; /* enum is exposed, to detect & handle specific errors; compare function result to -enum value */ #define ERROR_CONVERTTOSTRING(STRING) #STRING, #define ERROR_GENERATE_STRING(EXPR) ERROR_CONVERTTOSTRING(EXPR) static const char* ERR_strings[] = { ERROR_LIST(ERROR_GENERATE_STRING) }; ERR_STATIC unsigned ERR_isError(size_t code) { return (code > ERROR(maxCode)); } ERR_STATIC const char* ERR_getErrorName(size_t code) { static const char* codeError = "Unspecified error code"; if (ERR_isError(code)) return ERR_strings[-(int)(code)]; return codeError; } #if defined (__cplusplus) } #endif #endif /* ERROR_H_MODULE */ /* Constructor and Destructor of type FSE_CTable Note that its size depends on 'tableLog' and 'maxSymbolValue' */ typedef unsigned FSE_CTable; /* don't allocate that. It's just a way to be more restrictive than void* */ typedef unsigned FSE_DTable; /* don't allocate that. It's just a way to be more restrictive than void* */ /* ****************************************************************** FSE : Finite State Entropy coder header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY
   DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
   LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
   ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
   THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
   - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
   - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */

#if defined (__cplusplus)
extern "C" {
#endif


/******************************************
*  Static allocation
******************************************/
/* FSE buffer bounds */
#define FSE_NCOUNTBOUND 512
#define FSE_BLOCKBOUND(size) (size + (size>>7))
#define FSE_COMPRESSBOUND(size) (FSE_NCOUNTBOUND + FSE_BLOCKBOUND(size))   /* Macro version, useful for static allocation */

/* You can statically allocate FSE CTable/DTable as a table of unsigned using below macro */
#define FSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue)   (1 + (1<<(maxTableLog-1)) + ((maxSymbolValue+1)*2))
#define FSE_DTABLE_SIZE_U32(maxTableLog)                   (1 + (1<<maxTableLog))


/******************************************
*  FSE advanced API
******************************************/
static size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits);
/* create a fake FSE_DTable, designed to read an uncompressed bitstream where each symbol uses nbBits */

static size_t FSE_buildDTable_rle (FSE_DTable* dt, unsigned char symbolValue);
/* create a fake FSE_DTable, designed to always generate the same symbolValue */


/******************************************
*  FSE symbol decompression API
******************************************/
typedef struct
{
    size_t      state;
    const void* table;   /* precise table may vary, depending on U16 */
} FSE_DState_t;

static void     FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt);

static unsigned char FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD);

static unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr);

/*
Let's now decompose FSE_decompress_usingDTable() into its unitary components.
You will decode FSE-encoded symbols from the bitStream,
and also any other bitFields you put in, **in reverse order**.

You will need a few variables to track your bitStream. They are :

BIT_DStream_t DStream;    // Stream context
FSE_DState_t  DState;     // State context. Multiple ones are possible
FSE_DTable*   DTablePtr;  // Decoding table, provided by FSE_buildDTable()

The first thing to do is to init the bitStream.
    errorCode = BIT_initDStream(&DStream, srcBuffer, srcSize);

You should then retrieve your initial state(s)
(in reverse flushing order if you have several ones) :
    errorCode = FSE_initDState(&DState, &DStream, DTablePtr);

You can then decode your data, symbol after symbol.
For information, the maximum number of bits read by FSE_decodeSymbol() is 'tableLog'.
Keep in mind that symbols are decoded in reverse order, like a lifo stack (last in, first out).
    unsigned char symbol = FSE_decodeSymbol(&DState, &DStream);

All above operations only read from the local register (whose size depends on size_t).
Refilling the register from memory is manually performed by the reload method.
    endSignal = BIT_reloadDStream(&DStream);

BIT_reloadDStream() result tells if there is still some more data to read from DStream.
BIT_DStream_unfinished : there is still some data left in the DStream.
BIT_DStream_endOfBuffer : DStream reached the end of its buffer. Its container may no longer be completely filled.
BIT_DStream_completed : DStream reached its exact end, corresponding in general to decompression completed.
BIT_DStream_overflow : DStream went too far. Decompression result is corrupted.

When reaching end of buffer (BIT_DStream_endOfBuffer), progress slowly, notably if you decode multiple symbols per loop,
to properly detect the exact end of stream.
After each decoded symbol, check if DStream is fully consumed using this simple test :
    BIT_reloadDStream(&DStream) >= BIT_DStream_completed

When it's done, verify decompression is fully completed, by checking both DStream and the relevant states.
Checking if DStream has reached its end is performed by :
    BIT_endOfDStream(&DStream);
Check also the states. There might be some symbols left there, if some high probability ones (>50%) are possible.
    FSE_endOfDState(&DState);
*/


/******************************************
*  FSE unsafe API
******************************************/
static unsigned char FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD);
/* faster, but works only if nbBits is always >= 1 (otherwise, result will be corrupted) */


/******************************************
*  Implementation of inline functions
******************************************/

/* decompression */

typedef struct {
    U16 tableLog;
    U16 fastMode;
} FSE_DTableHeader;   /* sizeof U32 */

typedef struct
{
    unsigned short newState;
    unsigned char  symbol;
    unsigned char  nbBits;
} FSE_decode_t;   /* size == U32 */

MEM_STATIC void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt)
{
    FSE_DTableHeader DTableH;
    memcpy(&DTableH, dt, sizeof(DTableH));
    DStatePtr->state = BIT_readBits(bitD, DTableH.tableLog);
    BIT_reloadDStream(bitD);
    DStatePtr->table = dt + 1;
}

MEM_STATIC BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
    const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
    const U32 nbBits = DInfo.nbBits;
    BYTE symbol = DInfo.symbol;
    size_t lowBits = BIT_readBits(bitD, nbBits);

    DStatePtr->state = DInfo.newState + lowBits;
    return symbol;
}

MEM_STATIC BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD)
{
    const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
    const U32 nbBits = DInfo.nbBits;
    BYTE symbol = DInfo.symbol;
    size_t lowBits = BIT_readBitsFast(bitD, nbBits);

    DStatePtr->state = DInfo.newState + lowBits;
    return symbol;
}

MEM_STATIC unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr)
{
    return DStatePtr->state == 0;
}


#if defined (__cplusplus)
}
#endif
/* ******************************************************************
   Huff0 : Huffman
coder, part of New Generation Entropy library header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #if defined (__cplusplus) extern "C" { #endif /****************************************** * Static allocation macros ******************************************/ /* Huff0 buffer bounds */ #define HUF_CTABLEBOUND 129 #define HUF_BLOCKBOUND(size) (size + (size>>8) + 8) /* only true if incompressible pre-filtered with fast heuristic */ #define HUF_COMPRESSBOUND(size) (HUF_CTABLEBOUND + HUF_BLOCKBOUND(size)) /* Macro version, useful for static allocation */ /* static allocation of Huff0's DTable */ #define HUF_DTABLE_SIZE(maxTableLog) (1 + (1< /* size_t */ /* ************************************* * Version ***************************************/ #define ZSTD_VERSION_MAJOR 0 /* for breaking interface changes */ #define ZSTD_VERSION_MINOR 2 /* for new (non-breaking) interface capabilities */ #define ZSTD_VERSION_RELEASE 2 /* for tweaks, bug-fixes, or development */ #define ZSTD_VERSION_NUMBER (ZSTD_VERSION_MAJOR *100*100 + ZSTD_VERSION_MINOR *100 + ZSTD_VERSION_RELEASE) /* ************************************* * Advanced functions ***************************************/ typedef struct ZSTD_CCtx_s ZSTD_CCtx; /* incomplete type */ #if defined (__cplusplus) } #endif /* zstd - standard compression library Header File for static linking only Copyright (C) 2014-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - zstd public forum : https://groups.google.com/forum/#!forum/lz4c */ /* The objects defined in this file should be considered experimental. * They are not labelled stable, as their prototypes may change in the future. * You can use them for tests, provide feedback, or if you can endure the risk of future changes. */ #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Streaming functions ***************************************/ typedef struct ZSTD_DCtx_s ZSTD_DCtx; /* Use above functions alternatively. ZSTD_nextSrcSizeToDecompress() tells how many bytes to provide as 'srcSize' to ZSTD_decompressContinue(). ZSTD_decompressContinue() will use previous data blocks to improve compression if they are located prior to the current block. Result is the number of bytes regenerated within 'dst'. It can be zero, which is not an error; it just means ZSTD_decompressContinue() has decoded some header. */ /* ************************************* * Prefix - version detection ***************************************/ #define ZSTD_magicNumber 0xFD2FB523 /* v0.3 */ #if defined (__cplusplus) } #endif /* ****************************************************************** FSE : Finite State Entropy coder Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at : - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef FSE_COMMONDEFS_ONLY /**************************************************************** * Tuning parameters ****************************************************************/ /* MEMORY_USAGE : * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.) * Increasing memory usage improves compression ratio * Reduced memory usage can improve speed, due to cache effect * Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */ #define FSE_MAX_MEMORY_USAGE 14 #define FSE_DEFAULT_MEMORY_USAGE 13 /* FSE_MAX_SYMBOL_VALUE : * Maximum symbol value authorized. * Required for proper stack allocation */ #define FSE_MAX_SYMBOL_VALUE 255 /**************************************************************** * template functions type & suffix ****************************************************************/ #define FSE_FUNCTION_TYPE BYTE #define FSE_FUNCTION_EXTENSION /**************************************************************** * Byte symbol type ****************************************************************/ #endif /* !FSE_COMMONDEFS_ONLY */ /**************************************************************** * Compiler specifics ****************************************************************/ #ifdef _MSC_VER /* Visual Studio */ # define FORCE_INLINE static __forceinline # include /* For Visual 2005 */ # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ # pragma warning(disable : 4214) /* disable: C4214: non-int bitfields */ #else # if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */ # ifdef __GNUC__ # define FORCE_INLINE static inline __attribute__((always_inline)) # else # define FORCE_INLINE static inline # endif # else # define FORCE_INLINE static # endif /* __STDC_VERSION__ */ #endif /**************************************************************** * Includes ****************************************************************/ #include /* malloc, free, qsort */ #include /* memcpy, memset */ #include /* printf (debug) */ /**************************************************************** * Constants *****************************************************************/ #define FSE_MAX_TABLELOG (FSE_MAX_MEMORY_USAGE-2) #define FSE_MAX_TABLESIZE (1U< FSE_TABLELOG_ABSOLUTE_MAX #error "FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX is not supported" #endif /**************************************************************** * Error Management ****************************************************************/ #define FSE_STATIC_ASSERT(c) { enum { FSE_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */ /**************************************************************** * Complex types ****************************************************************/ typedef U32 DTable_max_t[FSE_DTABLE_SIZE_U32(FSE_MAX_TABLELOG)]; /**************************************************************** * Templates ****************************************************************/ /* designed to be included for type-specific functions (template emulation in C) Objective is to write these functions only once, for improved maintenance */ /* safety checks */ #ifndef FSE_FUNCTION_EXTENSION # error "FSE_FUNCTION_EXTENSION must be defined" 
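/* Worked example (illustrative, compiled out): the N -> 2^N formula above
 * means FSE_MAX_MEMORY_USAGE == 14 allows 16 KB of table space; since each
 * decode cell (FSE_decode_t, size == U32) is 4 == 2^2 bytes, that caps the
 * table log at 14 - 2 == 12, i.e. at most 2^12 == 4096 decode states. */
#if 0
static void example_fse_table_sizes(void)
{
    FSE_STATIC_ASSERT(FSE_MAX_TABLELOG == 12);    /* FSE_MAX_MEMORY_USAGE - 2 */
    FSE_STATIC_ASSERT(FSE_MAX_TABLESIZE == 4096); /* 1U << FSE_MAX_TABLELOG */
}
#endif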
#endif #ifndef FSE_FUNCTION_TYPE # error "FSE_FUNCTION_TYPE must be defined" #endif /* Function names */ #define FSE_CAT(X,Y) X##Y #define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y) #define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y) /* Function templates */ #define FSE_DECODE_TYPE FSE_decode_t static U32 FSE_tableStep(U32 tableSize) { return (tableSize>>1) + (tableSize>>3) + 3; } static size_t FSE_buildDTable (FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog) { void* ptr = dt+1; FSE_DTableHeader DTableH; FSE_DECODE_TYPE* const tableDecode = (FSE_DECODE_TYPE*)ptr; const U32 tableSize = 1 << tableLog; const U32 tableMask = tableSize-1; const U32 step = FSE_tableStep(tableSize); U16 symbolNext[FSE_MAX_SYMBOL_VALUE+1]; U32 position = 0; U32 highThreshold = tableSize-1; const S16 largeLimit= (S16)(1 << (tableLog-1)); U32 noLarge = 1; U32 s; /* Sanity Checks */ if (maxSymbolValue > FSE_MAX_SYMBOL_VALUE) return ERROR(maxSymbolValue_tooLarge); if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Init, lay down lowprob symbols */ DTableH.tableLog = (U16)tableLog; for (s=0; s<=maxSymbolValue; s++) { if (normalizedCounter[s]==-1) { tableDecode[highThreshold--].symbol = (FSE_FUNCTION_TYPE)s; symbolNext[s] = 1; } else { if (normalizedCounter[s] >= largeLimit) noLarge=0; symbolNext[s] = normalizedCounter[s]; } } /* Spread symbols */ for (s=0; s<=maxSymbolValue; s++) { int i; for (i=0; i highThreshold) position = (position + step) & tableMask; /* lowprob area */ } } if (position!=0) return ERROR(GENERIC); /* position must reach all cells once, otherwise normalizedCounter is incorrect */ /* Build Decoding table */ { U32 i; for (i=0; i FSE_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge); bitStream >>= 4; bitCount = 4; *tableLogPtr = nbBits; remaining = (1<1) && (charnum<=*maxSVPtr)) { if (previous0) { unsigned n0 = charnum; while ((bitStream & 0xFFFF) == 0xFFFF) { n0+=24; if (ip < iend-5) { ip+=2; bitStream = MEM_readLE32(ip) >> bitCount; } else { bitStream >>= 16; bitCount+=16; } } while ((bitStream & 3) == 3) { n0+=3; bitStream>>=2; bitCount+=2; } n0 += bitStream & 3; bitCount += 2; if (n0 > *maxSVPtr) return ERROR(maxSymbolValue_tooSmall); while (charnum < n0) normalizedCounter[charnum++] = 0; if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) { ip += bitCount>>3; bitCount &= 7; bitStream = MEM_readLE32(ip) >> bitCount; } else bitStream >>= 2; } { const short max = (short)((2*threshold-1)-remaining); short count; if ((bitStream & (threshold-1)) < (U32)max) { count = (short)(bitStream & (threshold-1)); bitCount += nbBits-1; } else { count = (short)(bitStream & (2*threshold-1)); if (count >= threshold) count -= max; bitCount += nbBits; } count--; /* extra accuracy */ remaining -= FSE_abs(count); normalizedCounter[charnum++] = count; previous0 = !count; while (remaining < threshold) { nbBits--; threshold >>= 1; } { if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) { ip += bitCount>>3; bitCount &= 7; } else { bitCount -= (int)(8 * (iend - 4 - ip)); ip = iend - 4; } bitStream = MEM_readLE32(ip) >> (bitCount & 31); } } } if (remaining != 1) return ERROR(GENERIC); *maxSVPtr = charnum-1; ip += (bitCount+7)>>3; if ((size_t)(ip-istart) > hbSize) return ERROR(srcSize_wrong); return ip-istart; } /********************************************************* * Decompression (Byte symbols) *********************************************************/ static size_t FSE_buildDTable_rle (FSE_DTable* dt, BYTE symbolValue) { void* ptr = dt; FSE_DTableHeader* const DTableH = 
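/* Worked example (illustrative, compiled out) of the spreading walk used by
 * FSE_buildDTable() above: for tableLog == 5, tableSize == 32 and
 * FSE_tableStep(32) == (32>>1)+(32>>3)+3 == 23. 23 is odd, hence coprime
 * with 32, so position = (position + 23) & 31 visits every cell exactly once
 * and only returns to 0 after all 32 steps -- which is why a final
 * position != 0 proves the normalizedCounter was corrupt. */
#if 0
static void example_spread_walk(void)
{
    U32 position = 0;
    U32 visited = 0;
    int i;
    for (i = 0; i < 32; i++) {
        visited |= (U32)1 << position;            /* mark this cell as filled */
        position = (position + FSE_tableStep(32)) & 31;
    }
    /* here: visited == 0xFFFFFFFFu and position == 0 */
}
#endif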
(FSE_DTableHeader*)ptr; FSE_decode_t* const cell = (FSE_decode_t*)(ptr) + 1; DTableH->tableLog = 0; DTableH->fastMode = 0; cell->newState = 0; cell->symbol = symbolValue; cell->nbBits = 0; return 0; } static size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits) { void* ptr = dt; FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr; FSE_decode_t* const dinfo = (FSE_decode_t*)(ptr) + 1; const unsigned tableSize = 1 << nbBits; const unsigned tableMask = tableSize - 1; const unsigned maxSymbolValue = tableMask; unsigned s; /* Sanity checks */ if (nbBits < 1) return ERROR(GENERIC); /* min size */ /* Build Decoding Table */ DTableH->tableLog = (U16)nbBits; DTableH->fastMode = 1; for (s=0; s<=maxSymbolValue; s++) { dinfo[s].newState = 0; dinfo[s].symbol = (BYTE)s; dinfo[s].nbBits = (BYTE)nbBits; } return 0; } FORCE_INLINE size_t FSE_decompress_usingDTable_generic( void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt, const unsigned fast) { BYTE* const ostart = (BYTE*) dst; BYTE* op = ostart; BYTE* const omax = op + maxDstSize; BYTE* const olimit = omax-3; BIT_DStream_t bitD; FSE_DState_t state1; FSE_DState_t state2; size_t errorCode; /* Init */ errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize); /* replaced last arg by maxCompressed Size */ if (FSE_isError(errorCode)) return errorCode; FSE_initDState(&state1, &bitD, dt); FSE_initDState(&state2, &bitD, dt); #define FSE_GETSYMBOL(statePtr) fast ? FSE_decodeSymbolFast(statePtr, &bitD) : FSE_decodeSymbol(statePtr, &bitD) /* 4 symbols per loop */ for ( ; (BIT_reloadDStream(&bitD)==BIT_DStream_unfinished) && (op sizeof(bitD.bitContainer)*8) /* This test must be static */ BIT_reloadDStream(&bitD); op[1] = FSE_GETSYMBOL(&state2); if (FSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ { if (BIT_reloadDStream(&bitD) > BIT_DStream_unfinished) { op+=2; break; } } op[2] = FSE_GETSYMBOL(&state1); if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ BIT_reloadDStream(&bitD); op[3] = FSE_GETSYMBOL(&state2); } /* tail */ /* note : BIT_reloadDStream(&bitD) >= FSE_DStream_partiallyFilled; Ends at exactly BIT_DStream_completed */ while (1) { if ( (BIT_reloadDStream(&bitD)>BIT_DStream_completed) || (op==omax) || (BIT_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state1))) ) break; *op++ = FSE_GETSYMBOL(&state1); if ( (BIT_reloadDStream(&bitD)>BIT_DStream_completed) || (op==omax) || (BIT_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state2))) ) break; *op++ = FSE_GETSYMBOL(&state2); } /* end ? 
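-- note: the two interleaved FSE_DState_t above exist so the table lookup of one stream can overlap the bit extraction of the other; decoding only counts as finished once the bitstream is empty and each state has returned to 0, which the check just below verifies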
*/ if (BIT_endOfDStream(&bitD) && FSE_endOfDState(&state1) && FSE_endOfDState(&state2)) return op-ostart; if (op==omax) return ERROR(dstSize_tooSmall); /* dst buffer is full, but cSrc unfinished */ return ERROR(corruption_detected); } static size_t FSE_decompress_usingDTable(void* dst, size_t originalSize, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt) { FSE_DTableHeader DTableH; memcpy(&DTableH, dt, sizeof(DTableH)); /* select fast mode (static) */ if (DTableH.fastMode) return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1); return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0); } static size_t FSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize) { const BYTE* const istart = (const BYTE*)cSrc; const BYTE* ip = istart; short counting[FSE_MAX_SYMBOL_VALUE+1]; DTable_max_t dt; /* Static analyzer seems unable to understand this table will be properly initialized later */ unsigned tableLog; unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE; size_t errorCode; if (cSrcSize<2) return ERROR(srcSize_wrong); /* too small input size */ /* normal FSE decoding mode */ errorCode = FSE_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize); if (FSE_isError(errorCode)) return errorCode; if (errorCode >= cSrcSize) return ERROR(srcSize_wrong); /* too small input size */ ip += errorCode; cSrcSize -= errorCode; errorCode = FSE_buildDTable (dt, counting, maxSymbolValue, tableLog); if (FSE_isError(errorCode)) return errorCode; /* always return, even if it is an error code */ return FSE_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt); } #endif /* FSE_COMMONDEFS_ONLY */ /* ****************************************************************** Huff0 : Huffman coder, part of New Generation Entropy library Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - FSE+Huff0 source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ /**************************************************************** * Compiler specifics ****************************************************************/ #if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) /* inline is defined */ #elif defined(_MSC_VER) # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ # define inline __inline #else # define inline /* disable inline */ #endif /**************************************************************** * Includes ****************************************************************/ #include /* malloc, free, qsort */ #include /* memcpy, memset */ #include /* printf (debug) */ /**************************************************************** * Error Management ****************************************************************/ #define HUF_STATIC_ASSERT(c) { enum { HUF_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */ /****************************************** * Helper functions ******************************************/ static unsigned HUF_isError(size_t code) { return ERR_isError(code); } #define HUF_ABSOLUTEMAX_TABLELOG 16 /* absolute limit of HUF_MAX_TABLELOG. Beyond that value, code does not work */ #define HUF_MAX_TABLELOG 12 /* max configured tableLog (for static allocation); can be modified up to HUF_ABSOLUTEMAX_TABLELOG */ #define HUF_DEFAULT_TABLELOG HUF_MAX_TABLELOG /* tableLog by default, when not specified */ #define HUF_MAX_SYMBOL_VALUE 255 #if (HUF_MAX_TABLELOG > HUF_ABSOLUTEMAX_TABLELOG) # error "HUF_MAX_TABLELOG is too large !" #endif /********************************************************* * Huff0 : Huffman block decompression *********************************************************/ typedef struct { BYTE byte; BYTE nbBits; } HUF_DEltX2; /* single-symbol decoding */ typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUF_DEltX4; /* double-symbols decoding */ typedef struct { BYTE symbol; BYTE weight; } sortedSymbol_t; /*! HUF_readStats Read compact Huffman tree, saved by HUF_writeCTable @huffWeight : destination buffer @return : size read from `src` */ static size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats, U32* nbSymbolsPtr, U32* tableLogPtr, const void* src, size_t srcSize) { U32 weightTotal; U32 tableLog; const BYTE* ip = (const BYTE*) src; size_t iSize; size_t oSize; U32 n; if (!srcSize) return ERROR(srcSize_wrong); iSize = ip[0]; //memset(huffWeight, 0, hwSize); /* is not necessary, even though some analyzer complain ... 
*/ if (iSize >= 128) /* special header */ { if (iSize >= (242)) /* RLE */ { static int l[14] = { 1, 2, 3, 4, 7, 8, 15, 16, 31, 32, 63, 64, 127, 128 }; oSize = l[iSize-242]; memset(huffWeight, 1, hwSize); iSize = 0; } else /* Incompressible */ { oSize = iSize - 127; iSize = ((oSize+1)/2); if (iSize+1 > srcSize) return ERROR(srcSize_wrong); if (oSize >= hwSize) return ERROR(corruption_detected); ip += 1; for (n=0; n> 4; huffWeight[n+1] = ip[n/2] & 15; } } } else /* header compressed with FSE (normal case) */ { if (iSize+1 > srcSize) return ERROR(srcSize_wrong); oSize = FSE_decompress(huffWeight, hwSize-1, ip+1, iSize); /* max (hwSize-1) values decoded, as last one is implied */ if (FSE_isError(oSize)) return oSize; } /* collect weight stats */ memset(rankStats, 0, (HUF_ABSOLUTEMAX_TABLELOG + 1) * sizeof(U32)); weightTotal = 0; for (n=0; n= HUF_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected); rankStats[huffWeight[n]]++; weightTotal += (1 << huffWeight[n]) >> 1; } if (weightTotal == 0) return ERROR(corruption_detected); /* get last non-null symbol weight (implied, total must be 2^n) */ tableLog = BIT_highbit32(weightTotal) + 1; if (tableLog > HUF_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected); { U32 total = 1 << tableLog; U32 rest = total - weightTotal; U32 verif = 1 << BIT_highbit32(rest); U32 lastWeight = BIT_highbit32(rest) + 1; if (verif != rest) return ERROR(corruption_detected); /* last value must be a clean power of 2 */ huffWeight[oSize] = (BYTE)lastWeight; rankStats[lastWeight]++; } /* check tree construction validity */ if ((rankStats[1] < 2) || (rankStats[1] & 1)) return ERROR(corruption_detected); /* by construction : at least 2 elts of rank 1, must be even */ /* results */ *nbSymbolsPtr = (U32)(oSize+1); *tableLogPtr = tableLog; return iSize+1; } /**************************/ /* single-symbol decoding */ /**************************/ static size_t HUF_readDTableX2 (U16* DTable, const void* src, size_t srcSize) { BYTE huffWeight[HUF_MAX_SYMBOL_VALUE + 1]; U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1]; /* large enough for values from 0 to 16 */ U32 tableLog = 0; const BYTE* ip = (const BYTE*) src; size_t iSize = ip[0]; U32 nbSymbols = 0; U32 n; U32 nextRankStart; void* ptr = DTable+1; HUF_DEltX2* const dt = (HUF_DEltX2*)(ptr); HUF_STATIC_ASSERT(sizeof(HUF_DEltX2) == sizeof(U16)); /* if compilation fails here, assertion is false */ //memset(huffWeight, 0, sizeof(huffWeight)); /* is not necessary, even though some analyzer complain ... 
*/ iSize = HUF_readStats(huffWeight, HUF_MAX_SYMBOL_VALUE + 1, rankVal, &nbSymbols, &tableLog, src, srcSize); if (HUF_isError(iSize)) return iSize; /* check result */ if (tableLog > DTable[0]) return ERROR(tableLog_tooLarge); /* DTable is too small */ DTable[0] = (U16)tableLog; /* maybe should separate sizeof DTable, as allocated, from used size of DTable, in case of DTable re-use */ /* Prepare ranks */ nextRankStart = 0; for (n=1; n<=tableLog; n++) { U32 current = nextRankStart; nextRankStart += (rankVal[n] << (n-1)); rankVal[n] = current; } /* fill DTable */ for (n=0; n> 1; U32 i; HUF_DEltX2 D; D.byte = (BYTE)n; D.nbBits = (BYTE)(tableLog + 1 - w); for (i = rankVal[w]; i < rankVal[w] + length; i++) dt[i] = D; rankVal[w] += length; } return iSize; } static BYTE HUF_decodeSymbolX2(BIT_DStream_t* Dstream, const HUF_DEltX2* dt, const U32 dtLog) { const size_t val = BIT_lookBitsFast(Dstream, dtLog); /* note : dtLog >= 1 */ const BYTE c = dt[val].byte; BIT_skipBits(Dstream, dt[val].nbBits); return c; } #define HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \ *ptr++ = HUF_decodeSymbolX2(DStreamPtr, dt, dtLog) #define HUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \ if (MEM_64bits() || (HUF_MAX_TABLELOG<=12)) \ HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) #define HUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \ if (MEM_64bits()) \ HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) static inline size_t HUF_decodeStreamX2(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const HUF_DEltX2* const dt, const U32 dtLog) { BYTE* const pStart = p; /* up to 4 symbols at a time */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-4)) { HUF_DECODE_SYMBOLX2_2(p, bitDPtr); HUF_DECODE_SYMBOLX2_1(p, bitDPtr); HUF_DECODE_SYMBOLX2_2(p, bitDPtr); HUF_DECODE_SYMBOLX2_0(p, bitDPtr); } /* closer to the end */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd)) HUF_DECODE_SYMBOLX2_0(p, bitDPtr); /* no more data to retrieve from bitstream, hence no need to reload */ while (p < pEnd) HUF_DECODE_SYMBOLX2_0(p, bitDPtr); return pEnd-pStart; } static size_t HUF_decompress4X2_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const U16* DTable) { if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* ptr = DTable; const HUF_DEltX2* const dt = ((const HUF_DEltX2*)ptr) +1; const U32 dtLog = DTable[0]; size_t errorCode; /* Init */ BIT_DStream_t bitD1; BIT_DStream_t bitD2; BIT_DStream_t bitD3; BIT_DStream_t bitD4; const size_t length1 = MEM_readLE16(istart); const size_t length2 = MEM_readLE16(istart+2); const size_t length3 = MEM_readLE16(istart+4); size_t length4; const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; length4 = cSrcSize - (length1 + length2 + length3 + 6); if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ errorCode = BIT_initDStream(&bitD1, istart1, length1); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD2, istart2, 
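/* note: the 4-stream framing decoded here starts with a 6-byte jump table of
   three little-endian U16 lengths; the fourth length is implied as
   length4 = cSrcSize - (length1 + length2 + length3 + 6). For instance,
   cSrcSize == 100 with lengths 20, 25 and 22 leaves 100 - 67 - 6 == 27 bytes
   for the fourth bitstream. Each stream regenerates one quarter of the
   output (segmentSize == (dstSize+3)/4). */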
length2); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD3, istart3, length3); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD4, istart4, length4); if (HUF_isError(errorCode)) return errorCode; /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; ) { HUF_DECODE_SYMBOLX2_2(op1, &bitD1); HUF_DECODE_SYMBOLX2_2(op2, &bitD2); HUF_DECODE_SYMBOLX2_2(op3, &bitD3); HUF_DECODE_SYMBOLX2_2(op4, &bitD4); HUF_DECODE_SYMBOLX2_1(op1, &bitD1); HUF_DECODE_SYMBOLX2_1(op2, &bitD2); HUF_DECODE_SYMBOLX2_1(op3, &bitD3); HUF_DECODE_SYMBOLX2_1(op4, &bitD4); HUF_DECODE_SYMBOLX2_2(op1, &bitD1); HUF_DECODE_SYMBOLX2_2(op2, &bitD2); HUF_DECODE_SYMBOLX2_2(op3, &bitD3); HUF_DECODE_SYMBOLX2_2(op4, &bitD4); HUF_DECODE_SYMBOLX2_0(op1, &bitD1); HUF_DECODE_SYMBOLX2_0(op2, &bitD2); HUF_DECODE_SYMBOLX2_0(op3, &bitD3); HUF_DECODE_SYMBOLX2_0(op4, &bitD4); endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) return ERROR(corruption_detected); /* note : op4 supposed already verified within main loop */ /* finish bitStreams one by one */ HUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog); HUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog); HUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog); HUF_decodeStreamX2(op4, &bitD4, oend, dt, dtLog); /* check */ endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4); if (!endSignal) return ERROR(corruption_detected); /* decoded size */ return dstSize; } } static size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_MAX_TABLELOG); const BYTE* ip = (const BYTE*) cSrc; size_t errorCode; errorCode = HUF_readDTableX2 (DTable, cSrc, cSrcSize); if (HUF_isError(errorCode)) return errorCode; if (errorCode >= cSrcSize) return ERROR(srcSize_wrong); ip += errorCode; cSrcSize -= errorCode; return HUF_decompress4X2_usingDTable (dst, dstSize, ip, cSrcSize, DTable); } /***************************/ /* double-symbols decoding */ /***************************/ static void HUF_fillDTableX4Level2(HUF_DEltX4* DTable, U32 sizeLog, const U32 consumed, const U32* rankValOrigin, const int minWeight, const sortedSymbol_t* sortedSymbols, const U32 sortedListSize, U32 nbBitsBaseline, U16 baseSeq) { HUF_DEltX4 DElt; U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1]; U32 s; /* get pre-calculated rankVal */ memcpy(rankVal, rankValOrigin, sizeof(rankVal)); /* fill skipped values */ if (minWeight>1) { U32 i, skipSize = rankVal[minWeight]; MEM_writeLE16(&(DElt.sequence), baseSeq); DElt.nbBits = (BYTE)(consumed); DElt.length = 1; for (i = 0; i < skipSize; i++) DTable[i] = DElt; } /* fill DTable */ for (s=0; s= 1 */ rankVal[weight] += length; } } typedef U32 rankVal_t[HUF_ABSOLUTEMAX_TABLELOG][HUF_ABSOLUTEMAX_TABLELOG + 1]; static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog, const sortedSymbol_t* sortedList, const U32 sortedListSize, const U32* rankStart, rankVal_t rankValOrigin, const U32 maxWeight, const U32 nbBitsBaseline) { U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1]; const int scaleLog = nbBitsBaseline - 
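/* note: in the single-symbol fill earlier, a symbol of weight w occupies
   (1 << w) >> 1 consecutive cells and costs tableLog + 1 - w bits per
   decode -- e.g. with tableLog == 11, weight 11 gives 1024 cells at 1 bit
   each, while weight 1 gives a single cell read with 11 bits */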
targetLog; /* note : targetLog >= srcLog, hence scaleLog <= 1 */ const U32 minBits = nbBitsBaseline - maxWeight; U32 s; memcpy(rankVal, rankValOrigin, sizeof(rankVal)); /* fill DTable */ for (s=0; s= minBits) /* enough room for a second symbol */ { U32 sortedRank; int minWeight = nbBits + scaleLog; if (minWeight < 1) minWeight = 1; sortedRank = rankStart[minWeight]; HUF_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits, rankValOrigin[nbBits], minWeight, sortedList+sortedRank, sortedListSize-sortedRank, nbBitsBaseline, symbol); } else { U32 i; const U32 end = start + length; HUF_DEltX4 DElt; MEM_writeLE16(&(DElt.sequence), symbol); DElt.nbBits = (BYTE)(nbBits); DElt.length = 1; for (i = start; i < end; i++) DTable[i] = DElt; } rankVal[weight] += length; } } static size_t HUF_readDTableX4 (U32* DTable, const void* src, size_t srcSize) { BYTE weightList[HUF_MAX_SYMBOL_VALUE + 1]; sortedSymbol_t sortedSymbol[HUF_MAX_SYMBOL_VALUE + 1]; U32 rankStats[HUF_ABSOLUTEMAX_TABLELOG + 1] = { 0 }; U32 rankStart0[HUF_ABSOLUTEMAX_TABLELOG + 2] = { 0 }; U32* const rankStart = rankStart0+1; rankVal_t rankVal; U32 tableLog, maxW, sizeOfSort, nbSymbols; const U32 memLog = DTable[0]; const BYTE* ip = (const BYTE*) src; size_t iSize = ip[0]; void* ptr = DTable; HUF_DEltX4* const dt = ((HUF_DEltX4*)ptr) + 1; HUF_STATIC_ASSERT(sizeof(HUF_DEltX4) == sizeof(U32)); /* if compilation fails here, assertion is false */ if (memLog > HUF_ABSOLUTEMAX_TABLELOG) return ERROR(tableLog_tooLarge); //memset(weightList, 0, sizeof(weightList)); /* is not necessary, even though some analyzer complain ... */ iSize = HUF_readStats(weightList, HUF_MAX_SYMBOL_VALUE + 1, rankStats, &nbSymbols, &tableLog, src, srcSize); if (HUF_isError(iSize)) return iSize; /* check result */ if (tableLog > memLog) return ERROR(tableLog_tooLarge); /* DTable can't fit code depth */ /* find maxWeight */ for (maxW = tableLog; rankStats[maxW]==0; maxW--) { if (!maxW) return ERROR(GENERIC); } /* necessarily finds a solution before maxW==0 */ /* Get start index of each weight */ { U32 w, nextRankStart = 0; for (w=1; w<=maxW; w++) { U32 current = nextRankStart; nextRankStart += rankStats[w]; rankStart[w] = current; } rankStart[0] = nextRankStart; /* put all 0w symbols at the end of sorted list*/ sizeOfSort = nextRankStart; } /* sort symbols by weight */ { U32 s; for (s=0; s> consumed; } } } HUF_fillDTableX4(dt, memLog, sortedSymbol, sizeOfSort, rankStart0, rankVal, maxW, tableLog+1); return iSize; } static U32 HUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog) { const size_t val = BIT_lookBitsFast(DStream, dtLog); /* note : dtLog >= 1 */ memcpy(op, dt+val, 2); BIT_skipBits(DStream, dt[val].nbBits); return dt[val].length; } static U32 HUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog) { const size_t val = BIT_lookBitsFast(DStream, dtLog); /* note : dtLog >= 1 */ memcpy(op, dt+val, 1); if (dt[val].length==1) BIT_skipBits(DStream, dt[val].nbBits); else { if (DStream->bitsConsumed < (sizeof(DStream->bitContainer)*8)) { BIT_skipBits(DStream, dt[val].nbBits); if (DStream->bitsConsumed > (sizeof(DStream->bitContainer)*8)) DStream->bitsConsumed = (sizeof(DStream->bitContainer)*8); /* ugly hack; works only because it's the last symbol. 
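-- the cap keeps bitsConsumed from running past the container: the final table entry may nominally consume more bits than remain in the stream, and clamping lets BIT_endOfDStream() still report a cleanly finished stream for valid input while truncated input keeps failing the check.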
Note : can't easily extract nbBits from just this symbol */ } } return 1; } #define HUF_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \ ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog) #define HUF_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \ if (MEM_64bits() || (HUF_MAX_TABLELOG<=12)) \ ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog) #define HUF_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \ if (MEM_64bits()) \ ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog) static inline size_t HUF_decodeStreamX4(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd, const HUF_DEltX4* const dt, const U32 dtLog) { BYTE* const pStart = p; /* up to 8 symbols at a time */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd-7)) { HUF_DECODE_SYMBOLX4_2(p, bitDPtr); HUF_DECODE_SYMBOLX4_1(p, bitDPtr); HUF_DECODE_SYMBOLX4_2(p, bitDPtr); HUF_DECODE_SYMBOLX4_0(p, bitDPtr); } /* closer to the end */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-2)) HUF_DECODE_SYMBOLX4_0(p, bitDPtr); while (p <= pEnd-2) HUF_DECODE_SYMBOLX4_0(p, bitDPtr); /* no need to reload : reached the end of DStream */ if (p < pEnd) p += HUF_decodeLastSymbolX4(p, bitDPtr, dt, dtLog); return p-pStart; } static size_t HUF_decompress4X4_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const U32* DTable) { if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* ptr = DTable; const HUF_DEltX4* const dt = ((const HUF_DEltX4*)ptr) +1; const U32 dtLog = DTable[0]; size_t errorCode; /* Init */ BIT_DStream_t bitD1; BIT_DStream_t bitD2; BIT_DStream_t bitD3; BIT_DStream_t bitD4; const size_t length1 = MEM_readLE16(istart); const size_t length2 = MEM_readLE16(istart+2); const size_t length3 = MEM_readLE16(istart+4); size_t length4; const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; length4 = cSrcSize - (length1 + length2 + length3 + 6); if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ errorCode = BIT_initDStream(&bitD1, istart1, length1); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD2, istart2, length2); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD3, istart3, length3); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD4, istart4, length4); if (HUF_isError(errorCode)) return errorCode; /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; ) { HUF_DECODE_SYMBOLX4_2(op1, &bitD1); HUF_DECODE_SYMBOLX4_2(op2, &bitD2); HUF_DECODE_SYMBOLX4_2(op3, &bitD3); HUF_DECODE_SYMBOLX4_2(op4, &bitD4); HUF_DECODE_SYMBOLX4_1(op1, &bitD1); HUF_DECODE_SYMBOLX4_1(op2, &bitD2); HUF_DECODE_SYMBOLX4_1(op3, &bitD3); HUF_DECODE_SYMBOLX4_1(op4, &bitD4); HUF_DECODE_SYMBOLX4_2(op1, &bitD1); HUF_DECODE_SYMBOLX4_2(op2, 
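/* note: the _2/_1/_0 macro variants interleaved here gate how many decodes
   happen between reloads -- on 64-bit targets the bit container holds enough
   look-ahead for four HUF_MAX_TABLELOG-bit reads per stream, while on 32-bit
   targets the MEM_64bits()-guarded decodes drop out so the smaller container
   is reloaded more often */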
&bitD2); HUF_DECODE_SYMBOLX4_2(op3, &bitD3); HUF_DECODE_SYMBOLX4_2(op4, &bitD4); HUF_DECODE_SYMBOLX4_0(op1, &bitD1); HUF_DECODE_SYMBOLX4_0(op2, &bitD2); HUF_DECODE_SYMBOLX4_0(op3, &bitD3); HUF_DECODE_SYMBOLX4_0(op4, &bitD4); endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) return ERROR(corruption_detected); /* note : op4 supposed already verified within main loop */ /* finish bitStreams one by one */ HUF_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog); HUF_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog); HUF_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog); HUF_decodeStreamX4(op4, &bitD4, oend, dt, dtLog); /* check */ endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4); if (!endSignal) return ERROR(corruption_detected); /* decoded size */ return dstSize; } } static size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_MAX_TABLELOG); const BYTE* ip = (const BYTE*) cSrc; size_t hSize = HUF_readDTableX4 (DTable, cSrc, cSrcSize); if (HUF_isError(hSize)) return hSize; if (hSize >= cSrcSize) return ERROR(srcSize_wrong); ip += hSize; cSrcSize -= hSize; return HUF_decompress4X4_usingDTable (dst, dstSize, ip, cSrcSize, DTable); } /**********************************/ /* Generic decompression selector */ /**********************************/ typedef struct { U32 tableTime; U32 decode256Time; } algo_time_t; static const algo_time_t algoTime[16 /* Quantization */][3 /* single, double, quad */] = { /* single, double, quad */ {{0,0}, {1,1}, {2,2}}, /* Q==0 : impossible */ {{0,0}, {1,1}, {2,2}}, /* Q==1 : impossible */ {{ 38,130}, {1313, 74}, {2151, 38}}, /* Q == 2 : 12-18% */ {{ 448,128}, {1353, 74}, {2238, 41}}, /* Q == 3 : 18-25% */ {{ 556,128}, {1353, 74}, {2238, 47}}, /* Q == 4 : 25-32% */ {{ 714,128}, {1418, 74}, {2436, 53}}, /* Q == 5 : 32-38% */ {{ 883,128}, {1437, 74}, {2464, 61}}, /* Q == 6 : 38-44% */ {{ 897,128}, {1515, 75}, {2622, 68}}, /* Q == 7 : 44-50% */ {{ 926,128}, {1613, 75}, {2730, 75}}, /* Q == 8 : 50-56% */ {{ 947,128}, {1729, 77}, {3359, 77}}, /* Q == 9 : 56-62% */ {{1107,128}, {2083, 81}, {4006, 84}}, /* Q ==10 : 62-69% */ {{1177,128}, {2379, 87}, {4785, 88}}, /* Q ==11 : 69-75% */ {{1242,128}, {2415, 93}, {5155, 84}}, /* Q ==12 : 75-81% */ {{1349,128}, {2644,106}, {5260,106}}, /* Q ==13 : 81-87% */ {{1455,128}, {2422,124}, {4174,124}}, /* Q ==14 : 87-93% */ {{ 722,128}, {1891,145}, {1936,146}}, /* Q ==15 : 93-99% */ }; typedef size_t (*decompressionAlgo)(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); static size_t HUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { static const decompressionAlgo decompress[3] = { HUF_decompress4X2, HUF_decompress4X4, NULL }; /* estimate decompression time */ U32 Q; const U32 D256 = (U32)(dstSize >> 8); U32 Dtime[3]; U32 algoNb = 0; int n; /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */ if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */ if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */ /* decoder timing evaluation */ Q = (U32)(cSrcSize * 16 / dstSize); /* Q < 
16 since dstSize > cSrcSize */ for (n=0; n<3; n++) Dtime[n] = algoTime[Q][n].tableTime + (algoTime[Q][n].decode256Time * D256); Dtime[1] += Dtime[1] >> 4; Dtime[2] += Dtime[2] >> 3; /* advantage to algorithms using less memory, for cache eviction */ if (Dtime[1] < Dtime[0]) algoNb = 1; return decompress[algoNb](dst, dstSize, cSrc, cSrcSize); //return HUF_decompress4X2(dst, dstSize, cSrc, cSrcSize); /* multi-streams single-symbol decoding */ //return HUF_decompress4X4(dst, dstSize, cSrc, cSrcSize); /* multi-streams double-symbols decoding */ //return HUF_decompress4X6(dst, dstSize, cSrc, cSrcSize); /* multi-streams quad-symbols decoding */ } /* zstd - standard compression library Copyright (C) 2014-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c */ /* *************************************************************** * Tuning parameters *****************************************************************/ /*! * MEMORY_USAGE : * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.) * Increasing memory usage improves compression ratio * Reduced memory usage can improve speed, due to cache effect */ #define ZSTD_MEMORY_USAGE 17 /*! * HEAPMODE : * Select how default compression functions will allocate memory for their hash table, * in memory stack (0, fastest), or in memory heap (1, requires malloc()) * Note that compression context is fairly large, as a consequence heap memory is recommended. */ #ifndef ZSTD_HEAPMODE # define ZSTD_HEAPMODE 1 #endif /* ZSTD_HEAPMODE */ /*! 
* LEGACY_SUPPORT : * decompressor can decode older formats (starting from Zstd 0.1+) */ #ifndef ZSTD_LEGACY_SUPPORT # define ZSTD_LEGACY_SUPPORT 1 #endif /* ******************************************************* * Includes *********************************************************/ #include /* calloc */ #include /* memcpy, memmove */ #include /* debug : printf */ /* ******************************************************* * Compiler specifics *********************************************************/ #ifdef __AVX2__ # include /* AVX2 intrinsics */ #endif #ifdef _MSC_VER /* Visual Studio */ # include /* For Visual 2005 */ # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ # pragma warning(disable : 4324) /* disable: C4324: padded structure */ #else # define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__) #endif /* ******************************************************* * Constants *********************************************************/ #define HASH_LOG (ZSTD_MEMORY_USAGE - 2) #define HASH_TABLESIZE (1 << HASH_LOG) #define HASH_MASK (HASH_TABLESIZE - 1) #define KNUTH 2654435761 #define BIT7 128 #define BIT6 64 #define BIT5 32 #define BIT4 16 #define BIT1 2 #define BIT0 1 #define KB *(1 <<10) #define MB *(1 <<20) #define GB *(1U<<30) #define BLOCKSIZE (128 KB) /* define, for static allocation */ #define MIN_SEQUENCES_SIZE (2 /*seqNb*/ + 2 /*dumps*/ + 3 /*seqTables*/ + 1 /*bitStream*/) #define MIN_CBLOCK_SIZE (3 /*litCSize*/ + MIN_SEQUENCES_SIZE) #define IS_RAW BIT0 #define IS_RLE BIT1 #define WORKPLACESIZE (BLOCKSIZE*3) #define MINMATCH 4 #define MLbits 7 #define LLbits 6 #define Offbits 5 #define MaxML ((1<blockType = (blockType_t)(headerFlags >> 6); bpPtr->origSize = (bpPtr->blockType == bt_rle) ? cSize : 0; if (bpPtr->blockType == bt_end) return 0; if (bpPtr->blockType == bt_rle) return 1; return cSize; } static size_t ZSTD_copyUncompressedBlock(void* dst, size_t maxDstSize, const void* src, size_t srcSize) { if (srcSize > maxDstSize) return ERROR(dstSize_tooSmall); memcpy(dst, src, srcSize); return srcSize; } /** ZSTD_decompressLiterals @return : nb of bytes read from src, or an error code*/ static size_t ZSTD_decompressLiterals(void* dst, size_t* maxDstSizePtr, const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; const size_t litSize = (MEM_readLE32(src) & 0x1FFFFF) >> 2; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ const size_t litCSize = (MEM_readLE32(ip+2) & 0xFFFFFF) >> 5; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ if (litSize > *maxDstSizePtr) return ERROR(corruption_detected); if (litCSize + 5 > srcSize) return ERROR(corruption_detected); if (HUF_isError(HUF_decompress(dst, litSize, ip+5, litCSize))) return ERROR(corruption_detected); *maxDstSizePtr = litSize; return litCSize + 5; } /** ZSTD_decodeLiteralsBlock @return : nb of bytes read from src (< srcSize )*/ static size_t ZSTD_decodeLiteralsBlock(void* ctx, const void* src, size_t srcSize) { ZSTD_DCtx* dctx = (ZSTD_DCtx*)ctx; const BYTE* const istart = (const BYTE* const)src; /* any compressed block with literals segment must be at least this size */ if (srcSize < MIN_CBLOCK_SIZE) return ERROR(corruption_detected); switch(*istart & 3) { default: case 0: { size_t litSize = BLOCKSIZE; const size_t readSize = ZSTD_decompressLiterals(dctx->litBuffer, &litSize, src, srcSize); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, 8); return readSize; /* works if it's an error too */ } case IS_RAW: { const size_t litSize 
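/* note: *istart & 3 selected this block kind: 0 == Huffman-compressed
   literals, IS_RAW (1) == stored bytes, IS_RLE (2) == one repeated byte;
   the little-endian field read here carries the regenerated size, shifted
   right by two to drop the flag bits */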
= (MEM_readLE32(istart) & 0xFFFFFF) >> 2; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ if (litSize > srcSize-11) /* risk of reading too far with wildcopy */ { if (litSize > srcSize-3) return ERROR(corruption_detected); memcpy(dctx->litBuffer, istart, litSize); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, 8); return litSize+3; } /* direct reference into compressed stream */ dctx->litPtr = istart+3; dctx->litSize = litSize; return litSize+3; } case IS_RLE: { const size_t litSize = (MEM_readLE32(istart) & 0xFFFFFF) >> 2; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ if (litSize > BLOCKSIZE) return ERROR(corruption_detected); memset(dctx->litBuffer, istart[3], litSize + 8); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; return 4; } } } static size_t ZSTD_decodeSeqHeaders(int* nbSeq, const BYTE** dumpsPtr, size_t* dumpsLengthPtr, FSE_DTable* DTableLL, FSE_DTable* DTableML, FSE_DTable* DTableOffb, const void* src, size_t srcSize) { const BYTE* const istart = (const BYTE* const)src; const BYTE* ip = istart; const BYTE* const iend = istart + srcSize; U32 LLtype, Offtype, MLtype; U32 LLlog, Offlog, MLlog; size_t dumpsLength; /* check */ if (srcSize < 5) return ERROR(srcSize_wrong); /* SeqHead */ *nbSeq = MEM_readLE16(ip); ip+=2; LLtype = *ip >> 6; Offtype = (*ip >> 4) & 3; MLtype = (*ip >> 2) & 3; if (*ip & 2) { dumpsLength = ip[2]; dumpsLength += ip[1] << 8; ip += 3; } else { dumpsLength = ip[1]; dumpsLength += (ip[0] & 1) << 8; ip += 2; } *dumpsPtr = ip; ip += dumpsLength; *dumpsLengthPtr = dumpsLength; /* check */ if (ip > iend-3) return ERROR(srcSize_wrong); /* min : all 3 are "raw", hence no header, but at least xxLog bits per type */ /* sequences */ { S16 norm[MaxML+1]; /* assumption : MaxML >= MaxLL and MaxOff */ size_t headerSize; /* Build DTables */ switch(LLtype) { case bt_rle : LLlog = 0; FSE_buildDTable_rle(DTableLL, *ip++); break; case bt_raw : LLlog = LLbits; FSE_buildDTable_raw(DTableLL, LLbits); break; default : { U32 max = MaxLL; headerSize = FSE_readNCount(norm, &max, &LLlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (LLlog > LLFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableLL, norm, max, LLlog); } } switch(Offtype) { case bt_rle : Offlog = 0; if (ip > iend-2) return ERROR(srcSize_wrong); /* min : "raw", hence no header, but at least xxLog bits */ FSE_buildDTable_rle(DTableOffb, *ip++ & MaxOff); /* if *ip > MaxOff, data is corrupted */ break; case bt_raw : Offlog = Offbits; FSE_buildDTable_raw(DTableOffb, Offbits); break; default : { U32 max = MaxOff; headerSize = FSE_readNCount(norm, &max, &Offlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (Offlog > OffFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableOffb, norm, max, Offlog); } } switch(MLtype) { case bt_rle : MLlog = 0; if (ip > iend-2) return ERROR(srcSize_wrong); /* min : "raw", hence no header, but at least xxLog bits */ FSE_buildDTable_rle(DTableML, *ip++); break; case bt_raw : MLlog = MLbits; FSE_buildDTable_raw(DTableML, MLbits); break; default : { U32 max = MaxML; headerSize = FSE_readNCount(norm, &max, &MLlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (MLlog > MLFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableML, norm, max, MLlog); } } } return ip-istart; } typedef struct { size_t litLength; size_t offset; size_t matchLength; } seq_t; typedef struct { 
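/* per-block sequence-decoding state: one shared bitstream, three FSE states
   (literal lengths, offsets, match lengths), the previous offset for
   repeat-offset handling, and the 'dumps' side-channel carrying lengths
   too large for the FSE symbol alphabets */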
BIT_DStream_t DStream; FSE_DState_t stateLL; FSE_DState_t stateOffb; FSE_DState_t stateML; size_t prevOffset; const BYTE* dumps; const BYTE* dumpsEnd; } seqState_t; static void ZSTD_decodeSequence(seq_t* seq, seqState_t* seqState) { size_t litLength; size_t prevOffset; size_t offset; size_t matchLength; const BYTE* dumps = seqState->dumps; const BYTE* const de = seqState->dumpsEnd; /* Literal length */ litLength = FSE_decodeSymbol(&(seqState->stateLL), &(seqState->DStream)); prevOffset = litLength ? seq->offset : seqState->prevOffset; seqState->prevOffset = seq->offset; if (litLength == MaxLL) { U32 add = *dumps++; if (add < 255) litLength += add; else { litLength = MEM_readLE32(dumps) & 0xFFFFFF; /* no pb : dumps is always followed by seq tables > 1 byte */ dumps += 3; } if (dumps >= de) dumps = de-1; /* late correction, to avoid read overflow (data is now corrupted anyway) */ } /* Offset */ { static const size_t offsetPrefix[MaxOff+1] = { /* note : size_t faster than U32 */ 1 /*fake*/, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608, 16777216, 33554432, /*fake*/ 1, 1, 1, 1, 1 }; U32 offsetCode, nbBits; offsetCode = FSE_decodeSymbol(&(seqState->stateOffb), &(seqState->DStream)); /* <= maxOff, by table construction */ if (MEM_32bits()) BIT_reloadDStream(&(seqState->DStream)); nbBits = offsetCode - 1; if (offsetCode==0) nbBits = 0; /* cmove */ offset = offsetPrefix[offsetCode] + BIT_readBits(&(seqState->DStream), nbBits); if (MEM_32bits()) BIT_reloadDStream(&(seqState->DStream)); if (offsetCode==0) offset = prevOffset; /* cmove */ } /* MatchLength */ matchLength = FSE_decodeSymbol(&(seqState->stateML), &(seqState->DStream)); if (matchLength == MaxML) { U32 add = *dumps++; if (add < 255) matchLength += add; else { matchLength = MEM_readLE32(dumps) & 0xFFFFFF; /* no pb : dumps is always followed by seq tables > 1 byte */ dumps += 3; } if (dumps >= de) dumps = de-1; /* late correction, to avoid read overflow (data is now corrupted anyway) */ } matchLength += MINMATCH; /* save result */ seq->litLength = litLength; seq->offset = offset; seq->matchLength = matchLength; seqState->dumps = dumps; } static size_t ZSTD_execSequence(BYTE* op, seq_t sequence, const BYTE** litPtr, const BYTE* const litLimit, BYTE* const base, BYTE* const oend) { static const int dec32table[] = {0, 1, 2, 1, 4, 4, 4, 4}; /* added */ static const int dec64table[] = {8, 8, 8, 7, 8, 9,10,11}; /* substracted */ const BYTE* const ostart = op; BYTE* const oLitEnd = op + sequence.litLength; BYTE* const oMatchEnd = op + sequence.litLength + sequence.matchLength; /* risk : address space overflow (32-bits) */ BYTE* const oend_8 = oend-8; const BYTE* const litEnd = *litPtr + sequence.litLength; /* checks */ if (oLitEnd > oend_8) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of 8 from oend */ if (oMatchEnd > oend) return ERROR(dstSize_tooSmall); /* overwrite beyond dst buffer */ if (litEnd > litLimit) return ERROR(corruption_detected); /* overRead beyond lit buffer */ /* copy Literals */ ZSTD_wildcopy(op, *litPtr, sequence.litLength); /* note : oLitEnd <= oend-8 : no risk of overwrite beyond oend */ op = oLitEnd; *litPtr = litEnd; /* update for next sequence */ /* copy Match */ { const BYTE* match = op - sequence.offset; /* check */ if (sequence.offset > (size_t)op) return ERROR(corruption_detected); /* address space overflow test (this test seems kept by clang optimizer) */ //if (match > op) return 
ERROR(corruption_detected); /* address space overflow test (is clang optimizer removing this test ?) */ if (match < base) return ERROR(corruption_detected); /* close range match, overlap */ if (sequence.offset < 8) { const int dec64 = dec64table[sequence.offset]; op[0] = match[0]; op[1] = match[1]; op[2] = match[2]; op[3] = match[3]; match += dec32table[sequence.offset]; ZSTD_copy4(op+4, match); match -= dec64; } else { ZSTD_copy8(op, match); } op += 8; match += 8; if (oMatchEnd > oend-(16-MINMATCH)) { if (op < oend_8) { ZSTD_wildcopy(op, match, oend_8 - op); match += oend_8 - op; op = oend_8; } while (op < oMatchEnd) *op++ = *match++; } else { ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8); /* works even if matchLength < 8 */ } } return oMatchEnd - ostart; } static size_t ZSTD_decompressSequences( void* ctx, void* dst, size_t maxDstSize, const void* seqStart, size_t seqSize) { ZSTD_DCtx* dctx = (ZSTD_DCtx*)ctx; const BYTE* ip = (const BYTE*)seqStart; const BYTE* const iend = ip + seqSize; BYTE* const ostart = (BYTE* const)dst; BYTE* op = ostart; BYTE* const oend = ostart + maxDstSize; size_t errorCode, dumpsLength; const BYTE* litPtr = dctx->litPtr; const BYTE* const litEnd = litPtr + dctx->litSize; int nbSeq; const BYTE* dumps; U32* DTableLL = dctx->LLTable; U32* DTableML = dctx->MLTable; U32* DTableOffb = dctx->OffTable; BYTE* const base = (BYTE*) (dctx->base); /* Build Decoding Tables */ errorCode = ZSTD_decodeSeqHeaders(&nbSeq, &dumps, &dumpsLength, DTableLL, DTableML, DTableOffb, ip, iend-ip); if (ZSTD_isError(errorCode)) return errorCode; ip += errorCode; /* Regen sequences */ { seq_t sequence; seqState_t seqState; memset(&sequence, 0, sizeof(sequence)); seqState.dumps = dumps; seqState.dumpsEnd = dumps + dumpsLength; seqState.prevOffset = sequence.offset = 4; errorCode = BIT_initDStream(&(seqState.DStream), ip, iend-ip); if (ERR_isError(errorCode)) return ERROR(corruption_detected); FSE_initDState(&(seqState.stateLL), &(seqState.DStream), DTableLL); FSE_initDState(&(seqState.stateOffb), &(seqState.DStream), DTableOffb); FSE_initDState(&(seqState.stateML), &(seqState.DStream), DTableML); for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && (nbSeq>0) ; ) { size_t oneSeqSize; nbSeq--; ZSTD_decodeSequence(&sequence, &seqState); oneSeqSize = ZSTD_execSequence(op, sequence, &litPtr, litEnd, base, oend); if (ZSTD_isError(oneSeqSize)) return oneSeqSize; op += oneSeqSize; } /* check if reached exact end */ if ( !BIT_endOfDStream(&(seqState.DStream)) ) return ERROR(corruption_detected); /* requested too much : data is corrupted */ if (nbSeq<0) return ERROR(corruption_detected); /* requested too many sequences : data is corrupted */ /* last literal segment */ { size_t lastLLSize = litEnd - litPtr; if (litPtr > litEnd) return ERROR(corruption_detected); if (op+lastLLSize > oend) return ERROR(dstSize_tooSmall); if (op != litPtr) memmove(op, litPtr, lastLLSize); op += lastLLSize; } } return op-ostart; } static size_t ZSTD_decompressBlock( void* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { /* blockType == blockCompressed */ const BYTE* ip = (const BYTE*)src; /* Decode literals sub-block */ size_t litCSize = ZSTD_decodeLiteralsBlock(ctx, src, srcSize); if (ZSTD_isError(litCSize)) return litCSize; ip += litCSize; srcSize -= litCSize; return ZSTD_decompressSequences(ctx, dst, maxDstSize, ip, srcSize); } static size_t ZSTD_decompressDCtx(void* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { const BYTE* ip = (const 
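/* note: in ZSTD_execSequence() above, offsets < 8 overlap the bytes being
   written; the dec32table/dec64table fix-up copies the first 4 bytes singly,
   nudges 'match' forward so the next 4-byte copy reads already-written data,
   then rewinds it so the effective offset becomes at least 8 -- after which
   the plain 8-byte wildcopy is safe */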
BYTE*)src; const BYTE* iend = ip + srcSize; BYTE* const ostart = (BYTE* const)dst; BYTE* op = ostart; BYTE* const oend = ostart + maxDstSize; size_t remainingSize = srcSize; U32 magicNumber; blockProperties_t blockProperties; /* Frame Header */ if (srcSize < ZSTD_frameHeaderSize+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); magicNumber = MEM_readLE32(src); if (magicNumber != ZSTD_magicNumber) return ERROR(prefix_unknown); ip += ZSTD_frameHeaderSize; remainingSize -= ZSTD_frameHeaderSize; /* Loop on each block */ while (1) { size_t decodedSize=0; size_t cBlockSize = ZSTD_getcBlockSize(ip, iend-ip, &blockProperties); if (ZSTD_isError(cBlockSize)) return cBlockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); switch(blockProperties.blockType) { case bt_compressed: decodedSize = ZSTD_decompressBlock(ctx, op, oend-op, ip, cBlockSize); break; case bt_raw : decodedSize = ZSTD_copyUncompressedBlock(op, oend-op, ip, cBlockSize); break; case bt_rle : return ERROR(GENERIC); /* not yet supported */ break; case bt_end : /* end of frame */ if (remainingSize) return ERROR(srcSize_wrong); break; default: return ERROR(GENERIC); /* impossible */ } if (cBlockSize == 0) break; /* bt_end */ if (ZSTD_isError(decodedSize)) return decodedSize; op += decodedSize; ip += cBlockSize; remainingSize -= cBlockSize; } return op-ostart; } static size_t ZSTD_decompress(void* dst, size_t maxDstSize, const void* src, size_t srcSize) { ZSTD_DCtx ctx; ctx.base = dst; return ZSTD_decompressDCtx(&ctx, dst, maxDstSize, src, srcSize); } static size_t ZSTD_findFrameCompressedSize(const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; size_t remainingSize = srcSize; U32 magicNumber; blockProperties_t blockProperties; /* Frame Header */ if (srcSize < ZSTD_frameHeaderSize+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); magicNumber = MEM_readLE32(src); if (magicNumber != ZSTD_magicNumber) return ERROR(prefix_unknown); ip += ZSTD_frameHeaderSize; remainingSize -= ZSTD_frameHeaderSize; /* Loop on each block */ while (1) { size_t cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties); if (ZSTD_isError(cBlockSize)) return cBlockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); if (cBlockSize == 0) break; /* bt_end */ ip += cBlockSize; remainingSize -= cBlockSize; } return ip - (const BYTE*)src; } /******************************* * Streaming Decompression API *******************************/ static size_t ZSTD_resetDCtx(ZSTD_DCtx* dctx) { dctx->expected = ZSTD_frameHeaderSize; dctx->phase = 0; dctx->previousDstEnd = NULL; dctx->base = NULL; return 0; } static ZSTD_DCtx* ZSTD_createDCtx(void) { ZSTD_DCtx* dctx = (ZSTD_DCtx*)malloc(sizeof(ZSTD_DCtx)); if (dctx==NULL) return NULL; ZSTD_resetDCtx(dctx); return dctx; } static size_t ZSTD_freeDCtx(ZSTD_DCtx* dctx) { free(dctx); return 0; } static size_t ZSTD_nextSrcSizeToDecompress(ZSTD_DCtx* dctx) { return dctx->expected; } static size_t ZSTD_decompressContinue(ZSTD_DCtx* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { /* Sanity check */ if (srcSize != ctx->expected) return ERROR(srcSize_wrong); if (dst != ctx->previousDstEnd) /* not contiguous */ ctx->base = dst; /* Decompress : frame header */ if (ctx->phase == 0) { /* Check frame magic header */ U32 magicNumber = MEM_readLE32(src); if (magicNumber != ZSTD_magicNumber) return ERROR(prefix_unknown); ctx->phase = 1; 
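        /* magic number verified : frame header consumed; the next call must supply exactly one block header */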
ctx->expected = ZSTD_blockHeaderSize; return 0; } /* Decompress : block header */ if (ctx->phase == 1) { blockProperties_t bp; size_t blockSize = ZSTD_getcBlockSize(src, ZSTD_blockHeaderSize, &bp); if (ZSTD_isError(blockSize)) return blockSize; if (bp.blockType == bt_end) { ctx->expected = 0; ctx->phase = 0; } else { ctx->expected = blockSize; ctx->bType = bp.blockType; ctx->phase = 2; } return 0; } /* Decompress : block content */ { size_t rSize; switch(ctx->bType) { case bt_compressed: rSize = ZSTD_decompressBlock(ctx, dst, maxDstSize, src, srcSize); break; case bt_raw : rSize = ZSTD_copyUncompressedBlock(dst, maxDstSize, src, srcSize); break; case bt_rle : return ERROR(GENERIC); /* not yet handled */ break; case bt_end : /* should never happen (filtered at phase 1) */ rSize = 0; break; default: return ERROR(GENERIC); } ctx->phase = 1; ctx->expected = ZSTD_blockHeaderSize; ctx->previousDstEnd = (void*)( ((char*)dst) + rSize); return rSize; } } /* wrapper layer */ unsigned ZSTDv03_isError(size_t code) { return ZSTD_isError(code); } size_t ZSTDv03_decompress( void* dst, size_t maxOriginalSize, const void* src, size_t compressedSize) { return ZSTD_decompress(dst, maxOriginalSize, src, compressedSize); } size_t ZSTDv03_findFrameCompressedSize(const void* src, size_t srcSize) { return ZSTD_findFrameCompressedSize(src, srcSize); } ZSTDv03_Dctx* ZSTDv03_createDCtx(void) { return (ZSTDv03_Dctx*)ZSTD_createDCtx(); } size_t ZSTDv03_freeDCtx(ZSTDv03_Dctx* dctx) { return ZSTD_freeDCtx((ZSTD_DCtx*)dctx); } size_t ZSTDv03_resetDCtx(ZSTDv03_Dctx* dctx) { return ZSTD_resetDCtx((ZSTD_DCtx*)dctx); } size_t ZSTDv03_nextSrcSizeToDecompress(ZSTDv03_Dctx* dctx) { return ZSTD_nextSrcSizeToDecompress((ZSTD_DCtx*)dctx); } size_t ZSTDv03_decompressContinue(ZSTDv03_Dctx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { return ZSTD_decompressContinue((ZSTD_DCtx*)dctx, dst, maxDstSize, src, srcSize); } borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v04.c0000644000175000017500000043302313257361140023462 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ /*- Dependencies -*/ #include "zstd_v04.h" #include "error_private.h" /* ****************************************************************** mem.h low-level memory access routines Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    You can contact the author at :
    - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy
    - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */
#ifndef MEM_H_MODULE
#define MEM_H_MODULE

#if defined (__cplusplus)
extern "C" {
#endif

/******************************************
*  Includes
******************************************/
#include <stddef.h>    /* size_t, ptrdiff_t */
#include <string.h>    /* memcpy */


/******************************************
*  Compiler-specific
******************************************/
#if defined(_MSC_VER)   /* Visual Studio */
#   include <stdlib.h>  /* _byteswap_ulong */
#   include <intrin.h>  /* _byteswap_* */
#endif
#if defined(__GNUC__)
#  define MEM_STATIC static __attribute__((unused))
#elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
#  define MEM_STATIC static inline
#elif defined(_MSC_VER)
#  define MEM_STATIC static __inline
#else
#  define MEM_STATIC static  /* this version may generate warnings for unused static functions; disable the relevant warning */
#endif

/****************************************************************
*  Basic Types
*****************************************************************/
#if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
# include <stdint.h>
  typedef uint8_t  BYTE;
  typedef uint16_t U16;
  typedef int16_t  S16;
  typedef uint32_t U32;
  typedef int32_t  S32;
  typedef uint64_t U64;
  typedef int64_t  S64;
#else
  typedef unsigned char      BYTE;
  typedef unsigned short     U16;
  typedef signed short       S16;
  typedef unsigned int       U32;
  typedef signed int         S32;
  typedef unsigned long long U64;
  typedef signed long long   S64;
#endif

/****************************************************************
*  Memory I/O
*****************************************************************/
/* MEM_FORCE_MEMORY_ACCESS
 * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.
 * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
 * The switch below allows selecting a different access method for improved performance.
 * Method 0 (default) : use `memcpy()`. Safe and portable.
 * Method 1 : `__packed` statement. It depends on a compiler extension (i.e., not portable).
 *            This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`.
 * Method 2 : direct access. This method is portable but violates the C standard.
 *            It can generate buggy code on targets generating assembly depending on alignment.
 *            But in some circumstances, it's the only known way to get the most performance (i.e. GCC + ARMv6)
 * See http://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details.
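 * As a sketch (illustrative only), method 0 vs method 2 for a 32-bit read :
 *     U32 v0; memcpy(&v0, ptr, sizeof(v0));   // method 0 : safe on any alignment
 *     U32 v2 = *(const U32*)ptr;              // method 2 : fast, but undefined behaviour if ptr is misaligned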
* Prefer these methods in priority order (0 > 1 > 2) */ #ifndef MEM_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */ # if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) ) # define MEM_FORCE_MEMORY_ACCESS 2 # elif (defined(__INTEL_COMPILER) && !defined(WIN32)) || \ (defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) )) # define MEM_FORCE_MEMORY_ACCESS 1 # endif #endif MEM_STATIC unsigned MEM_32bits(void) { return sizeof(void*)==4; } MEM_STATIC unsigned MEM_64bits(void) { return sizeof(void*)==8; } MEM_STATIC unsigned MEM_isLittleEndian(void) { const union { U32 u; BYTE c[4]; } one = { 1 }; /* don't use static : performance detrimental */ return one.c[0]; } #if defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==2) /* violates C standard on structure alignment. Only use if no other choice to achieve best performance on target platform */ MEM_STATIC U16 MEM_read16(const void* memPtr) { return *(const U16*) memPtr; } MEM_STATIC U32 MEM_read32(const void* memPtr) { return *(const U32*) memPtr; } MEM_STATIC U64 MEM_read64(const void* memPtr) { return *(const U64*) memPtr; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; } #elif defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==1) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U16 u16; U32 u32; U64 u64; } __attribute__((packed)) unalign; MEM_STATIC U16 MEM_read16(const void* ptr) { return ((const unalign*)ptr)->u16; } MEM_STATIC U32 MEM_read32(const void* ptr) { return ((const unalign*)ptr)->u32; } MEM_STATIC U64 MEM_read64(const void* ptr) { return ((const unalign*)ptr)->u64; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { ((unalign*)memPtr)->u16 = value; } #else /* default method, safe and standard. 
can sometimes prove slower */ MEM_STATIC U16 MEM_read16(const void* memPtr) { U16 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U32 MEM_read32(const void* memPtr) { U32 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U64 MEM_read64(const void* memPtr) { U64 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { memcpy(memPtr, &value, sizeof(value)); } #endif // MEM_FORCE_MEMORY_ACCESS MEM_STATIC U16 MEM_readLE16(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read16(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U16)(p[0] + (p[1]<<8)); } } MEM_STATIC void MEM_writeLE16(void* memPtr, U16 val) { if (MEM_isLittleEndian()) { MEM_write16(memPtr, val); } else { BYTE* p = (BYTE*)memPtr; p[0] = (BYTE)val; p[1] = (BYTE)(val>>8); } } MEM_STATIC U32 MEM_readLE32(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read32(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U32)((U32)p[0] + ((U32)p[1]<<8) + ((U32)p[2]<<16) + ((U32)p[3]<<24)); } } MEM_STATIC U64 MEM_readLE64(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read64(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U64)((U64)p[0] + ((U64)p[1]<<8) + ((U64)p[2]<<16) + ((U64)p[3]<<24) + ((U64)p[4]<<32) + ((U64)p[5]<<40) + ((U64)p[6]<<48) + ((U64)p[7]<<56)); } } MEM_STATIC size_t MEM_readLEST(const void* memPtr) { if (MEM_32bits()) return (size_t)MEM_readLE32(memPtr); else return (size_t)MEM_readLE64(memPtr); } #if defined (__cplusplus) } #endif #endif /* MEM_H_MODULE */ /* zstd - standard compression library Header File for static linking only Copyright (C) 2014-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c */ #ifndef ZSTD_STATIC_H #define ZSTD_STATIC_H /* The objects defined into this file shall be considered experimental. * They are not considered stable, as their prototype may change in the future. * You can use them for tests, provide feedback, or if you can endure risks of future changes. 
*/ #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Types ***************************************/ #define ZSTD_WINDOWLOG_MAX 26 #define ZSTD_WINDOWLOG_MIN 18 #define ZSTD_WINDOWLOG_ABSOLUTEMIN 11 #define ZSTD_CONTENTLOG_MAX (ZSTD_WINDOWLOG_MAX+1) #define ZSTD_CONTENTLOG_MIN 4 #define ZSTD_HASHLOG_MAX 28 #define ZSTD_HASHLOG_MIN 4 #define ZSTD_SEARCHLOG_MAX (ZSTD_CONTENTLOG_MAX-1) #define ZSTD_SEARCHLOG_MIN 1 #define ZSTD_SEARCHLENGTH_MAX 7 #define ZSTD_SEARCHLENGTH_MIN 4 /** from faster to stronger */ typedef enum { ZSTD_fast, ZSTD_greedy, ZSTD_lazy, ZSTD_lazy2, ZSTD_btlazy2 } ZSTD_strategy; typedef struct { U64 srcSize; /* optional : tells how much bytes are present in the frame. Use 0 if not known. */ U32 windowLog; /* largest match distance : larger == more compression, more memory needed during decompression */ U32 contentLog; /* full search segment : larger == more compression, slower, more memory (useless for fast) */ U32 hashLog; /* dispatch table : larger == more memory, faster */ U32 searchLog; /* nb of searches : larger == more compression, slower */ U32 searchLength; /* size of matches : larger == faster decompression, sometimes less compression */ ZSTD_strategy strategy; } ZSTD_parameters; typedef ZSTDv04_Dctx ZSTD_DCtx; /* ************************************* * Advanced functions ***************************************/ /** ZSTD_decompress_usingDict * Same as ZSTD_decompressDCtx, using a Dictionary content as prefix * Note : dict can be NULL, in which case, it's equivalent to ZSTD_decompressDCtx() */ static size_t ZSTD_decompress_usingDict(ZSTD_DCtx* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize, const void* dict,size_t dictSize); /* ************************************** * Streaming functions (direct mode) ****************************************/ static size_t ZSTD_resetDCtx(ZSTD_DCtx* dctx); static size_t ZSTD_getFrameParams(ZSTD_parameters* params, const void* src, size_t srcSize); static void ZSTD_decompress_insertDictionary(ZSTD_DCtx* ctx, const void* src, size_t srcSize); static size_t ZSTD_nextSrcSizeToDecompress(ZSTD_DCtx* dctx); static size_t ZSTD_decompressContinue(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize); /** Streaming decompression, bufferless mode A ZSTD_DCtx object is required to track streaming operations. Use ZSTD_createDCtx() / ZSTD_freeDCtx() to manage it. A ZSTD_DCtx object can be re-used multiple times. Use ZSTD_resetDCtx() to return to fresh status. First operation is to retrieve frame parameters, using ZSTD_getFrameParams(). This function doesn't consume its input. It needs enough input data to properly decode the frame header. Objective is to retrieve *params.windowlog, to know minimum amount of memory required during decoding. Result : 0 when successful, it means the ZSTD_parameters structure has been filled. >0 : means there is not enough data into src. Provides the expected size to successfully decode header. errorCode, which can be tested using ZSTD_isError() (For example, if it's not a ZSTD header) Then, you can optionally insert a dictionary. This operation must mimic the compressor behavior, otherwise decompression will fail or be corrupted. Then it's possible to start decompression. Use ZSTD_nextSrcSizeToDecompress() and ZSTD_decompressContinue() alternatively. ZSTD_nextSrcSizeToDecompress() tells how much bytes to provide as 'srcSize' to ZSTD_decompressContinue(). ZSTD_decompressContinue() requires this exact amount of bytes, or it will fail. 
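  For instance, the decode loop may be driven like this (an illustrative sketch only :
  error handling is simplified, and 'inBuff', 'op', 'oend' are assumed caller-managed buffers) :

      size_t toRead = ZSTD_nextSrcSizeToDecompress(dctx);
      while (toRead)
      {
          // feed exactly 'toRead' input bytes, e.g. read them from a file into inBuff
          size_t done = ZSTD_decompressContinue(dctx, op, oend-op, inBuff, toRead);
          if (ZSTD_isError(done)) return done;           // corrupted or mis-sized input
          op += done;                                    // 'done' can be 0 (a header was decoded)
          toRead = ZSTD_nextSrcSizeToDecompress(dctx);   // becomes 0 once the frame is fully decoded
      }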
ZSTD_decompressContinue() needs previous data blocks during decompression, up to (1 << windowlog). They should preferably be located contiguously, prior to current block. Alternatively, a round buffer is also possible. @result of ZSTD_decompressContinue() is the number of bytes regenerated within 'dst'. It can be zero, which is not an error; it just means ZSTD_decompressContinue() has decoded some header. A frame is fully decoded when ZSTD_nextSrcSizeToDecompress() returns zero. Context can then be reset to start a new decompression. */ #if defined (__cplusplus) } #endif #endif /* ZSTD_STATIC_H */ /* zstd_internal - common functions to include Header File for include Copyright (C) 2014-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c */ #ifndef ZSTD_CCOMMON_H_MODULE #define ZSTD_CCOMMON_H_MODULE #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Common macros ***************************************/ #define MIN(a,b) ((a)<(b) ? (a) : (b)) #define MAX(a,b) ((a)>(b) ? (a) : (b)) /* ************************************* * Common constants ***************************************/ #define ZSTD_MAGICNUMBER 0xFD2FB524 /* v0.4 */ #define KB *(1 <<10) #define MB *(1 <<20) #define GB *(1U<<30) #define BLOCKSIZE (128 KB) /* define, for static allocation */ static const size_t ZSTD_blockHeaderSize = 3; static const size_t ZSTD_frameHeaderSize_min = 5; #define ZSTD_frameHeaderSize_max 5 /* define, for static allocation */ #define BIT7 128 #define BIT6 64 #define BIT5 32 #define BIT4 16 #define BIT1 2 #define BIT0 1 #define IS_RAW BIT0 #define IS_RLE BIT1 #define MINMATCH 4 #define REPCODE_STARTVALUE 4 #define MLbits 7 #define LLbits 6 #define Offbits 5 #define MaxML ((1< /* size_t, ptrdiff_t */ /* ***************************************** * FSE simple functions ******************************************/ static size_t FSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize); /*! FSE_decompress(): Decompress FSE data from buffer 'cSrc', of size 'cSrcSize', into already allocated destination buffer 'dst', of size 'maxDstSize'. 
return : size of regenerated data (<= maxDstSize) or an error code, which can be tested using FSE_isError() ** Important ** : FSE_decompress() doesn't decompress non-compressible nor RLE data !!! Why ? : making this distinction requires a header. Header management is intentionally delegated to the user layer, which can better manage special cases. */ /* ***************************************** * Tool functions ******************************************/ /* Error Management */ static unsigned FSE_isError(size_t code); /* tells if a return value is an error code */ /* ***************************************** * FSE detailed API ******************************************/ /*! FSE_compress() does the following: 1. count symbol occurrence from source[] into table count[] 2. normalize counters so that sum(count[]) == Power_of_2 (2^tableLog) 3. save normalized counters to memory buffer using writeNCount() 4. build encoding table 'CTable' from normalized counters 5. encode the data stream using encoding table 'CTable' FSE_decompress() does the following: 1. read normalized counters with readNCount() 2. build decoding table 'DTable' from normalized counters 3. decode the data stream using decoding table 'DTable' The following API allows targeting specific sub-functions for advanced tasks. For example, it's possible to compress several blocks using the same 'CTable', or to save and provide normalized distribution using external method. */ /* *** DECOMPRESSION *** */ /*! FSE_readNCount(): Read compactly saved 'normalizedCounter' from 'rBuffer'. return : size read from 'rBuffer' or an errorCode, which can be tested using FSE_isError() maxSymbolValuePtr[0] and tableLogPtr[0] will also be updated with their respective values */ static size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSymbolValuePtr, unsigned* tableLogPtr, const void* rBuffer, size_t rBuffSize); /*! Constructor and Destructor of type FSE_DTable Note that its size depends on 'tableLog' */ typedef unsigned FSE_DTable; /* don't allocate that. It's just a way to be more restrictive than void* */ /*! FSE_buildDTable(): Builds 'dt', which must be already allocated, using FSE_createDTable() return : 0, or an errorCode, which can be tested using FSE_isError() */ static size_t FSE_buildDTable ( FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog); /*! FSE_decompress_usingDTable(): Decompress compressed source 'cSrc' of size 'cSrcSize' using 'dt' into 'dst' which must be already allocated. return : size of regenerated data (necessarily <= maxDstSize) or an errorCode, which can be tested using FSE_isError() */ static size_t FSE_decompress_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt); /*! Tutorial : ---------- (Note : these functions only decompress FSE-compressed blocks. If block is uncompressed, use memcpy() instead If block is a single repeated byte, use memset() instead ) The first step is to obtain the normalized frequencies of symbols. This can be performed by FSE_readNCount() if it was saved using FSE_writeNCount(). 'normalizedCounter' must be already allocated, and have at least 'maxSymbolValuePtr[0]+1' cells of signed short. In practice, that means it's necessary to know 'maxSymbolValue' beforehand, or size the table to handle worst case situations (typically 256). FSE_readNCount() will provide 'tableLog' and 'maxSymbolValue'. The result of FSE_readNCount() is the number of bytes read from 'rBuffer'. 
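    Put together, the three steps may look like this (an illustrative sketch :
    'cSrc', 'cSrcSize', 'dst', 'dstCapacity' are assumed caller-provided, and error checks are elided) :

        short    norm[256];                        // enough for maxSymbolValue <= 255
        unsigned maxSymbolValue = 255;
        unsigned tableLog;
        FSE_DTable dt[FSE_DTABLE_SIZE_U32(12)];    // static allocation for tableLog <= 12 (see FSE_DTABLE_SIZE_U32() below)
        size_t hSize = FSE_readNCount(norm, &maxSymbolValue, &tableLog, cSrc, cSrcSize);   // 1. read normalized counters
        FSE_buildDTable(dt, norm, maxSymbolValue, tableLog);                               // 2. build decoding table
        FSE_decompress_usingDTable(dst, dstCapacity,
                                   (const BYTE*)cSrc + hSize, cSrcSize - hSize, dt);       // 3. decode the bitstream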
Note that 'rBufferSize' must be at least 4 bytes, even if useful information is less than that. If there is an error, the function will return an error code, which can be tested using FSE_isError(). The next step is to build the decompression tables 'FSE_DTable' from 'normalizedCounter'. This is performed by the function FSE_buildDTable(). The space required by 'FSE_DTable' must be already allocated using FSE_createDTable(). If there is an error, the function will return an error code, which can be tested using FSE_isError(). 'FSE_DTable' can then be used to decompress 'cSrc', with FSE_decompress_usingDTable(). 'cSrcSize' must be strictly correct, otherwise decompression will fail. FSE_decompress_usingDTable() result will tell how many bytes were regenerated (<=maxDstSize). If there is an error, the function will return an error code, which can be tested using FSE_isError(). (ex: dst buffer too small) */ #if defined (__cplusplus) } #endif #endif /* FSE_H */ /* ****************************************************************** bitstream Part of NewGen Entropy library header file (to include) Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef BITSTREAM_H_MODULE #define BITSTREAM_H_MODULE #if defined (__cplusplus) extern "C" { #endif /* * This API consists of small unitary functions, which highly benefit from being inlined. * Since link-time-optimization is not available for all compilers, * these functions are defined into a .h to be included. */ /********************************************** * bitStream decompression API (read backward) **********************************************/ typedef struct { size_t bitContainer; unsigned bitsConsumed; const char* ptr; const char* start; } BIT_DStream_t; typedef enum { BIT_DStream_unfinished = 0, BIT_DStream_endOfBuffer = 1, BIT_DStream_completed = 2, BIT_DStream_overflow = 3 } BIT_DStream_status; /* result of BIT_reloadDStream() */ /* 1,2,4,8 would be better for bitmap combinations, but slows down performance a bit ... 
:( */ MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize); MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, unsigned nbBits); MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD); MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* bitD); /* * Start by invoking BIT_initDStream(). * A chunk of the bitStream is then stored into a local register. * Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (size_t). * You can then retrieve bitFields stored into the local register, **in reverse order**. * Local register is manually filled from memory by the BIT_reloadDStream() method. * A reload guarantee a minimum of ((8*sizeof(size_t))-7) bits when its result is BIT_DStream_unfinished. * Otherwise, it can be less than that, so proceed accordingly. * Checking if DStream has reached its end can be performed with BIT_endOfDStream() */ /****************************************** * unsafe API ******************************************/ MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, unsigned nbBits); /* faster, but works only if nbBits >= 1 */ /**************************************************************** * Helper functions ****************************************************************/ MEM_STATIC unsigned BIT_highbit32 (register U32 val) { # if defined(_MSC_VER) /* Visual */ unsigned long r=0; _BitScanReverse ( &r, val ); return (unsigned) r; # elif defined(__GNUC__) && (__GNUC__ >= 3) /* Use GCC Intrinsic */ return 31 - __builtin_clz (val); # else /* Software version */ static const unsigned DeBruijnClz[32] = { 0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31 }; U32 v = val; unsigned r; v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v |= v >> 16; r = DeBruijnClz[ (U32) (v * 0x07C4ACDDU) >> 27]; return r; # endif } /********************************************************** * bitStream decoding **********************************************************/ /*!BIT_initDStream * Initialize a BIT_DStream_t. 
* @bitD : a pointer to an already allocated BIT_DStream_t structure * @srcBuffer must point at the beginning of a bitStream * @srcSize must be the exact size of the bitStream * @result : size of stream (== srcSize) or an errorCode if a problem is detected */ MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize) { if (srcSize < 1) { memset(bitD, 0, sizeof(*bitD)); return ERROR(srcSize_wrong); } if (srcSize >= sizeof(size_t)) /* normal case */ { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = (const char*)srcBuffer + srcSize - sizeof(size_t); bitD->bitContainer = MEM_readLEST(bitD->ptr); contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BIT_highbit32(contain32); } else { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = bitD->start; bitD->bitContainer = *(const BYTE*)(bitD->start); switch(srcSize) { case 7: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[6]) << (sizeof(size_t)*8 - 16);/* fall-through */ case 6: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[5]) << (sizeof(size_t)*8 - 24);/* fall-through */ case 5: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[4]) << (sizeof(size_t)*8 - 32);/* fall-through */ case 4: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[3]) << 24; /* fall-through */ case 3: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[2]) << 16; /* fall-through */ case 2: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[1]) << 8; /* fall-through */ default: break; } contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BIT_highbit32(contain32); bitD->bitsConsumed += (U32)(sizeof(size_t) - srcSize)*8; } return srcSize; } /*!BIT_lookBits * Provides next n bits from local register * local register is not modified (bits are still present for next read/look) * On 32-bits, maxNbBits==25 * On 64-bits, maxNbBits==57 * @return : value extracted */ MEM_STATIC size_t BIT_lookBits(BIT_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return ((bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> 1) >> ((bitMask-nbBits) & bitMask); } /*! BIT_lookBitsFast : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BIT_lookBitsFast(BIT_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return (bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> (((bitMask+1)-nbBits) & bitMask); } MEM_STATIC void BIT_skipBits(BIT_DStream_t* bitD, U32 nbBits) { bitD->bitsConsumed += nbBits; } /*!BIT_readBits * Read next n bits from local register. * pay attention to not read more than nbBits contained into local register. * @return : extracted value. 
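*           (equivalent to BIT_lookBits() followed by BIT_skipBits(), as the definition below shows)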
*/ MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, U32 nbBits) { size_t value = BIT_lookBits(bitD, nbBits); BIT_skipBits(bitD, nbBits); return value; } /*!BIT_readBitsFast : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, U32 nbBits) { size_t value = BIT_lookBitsFast(bitD, nbBits); BIT_skipBits(bitD, nbBits); return value; } MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD) { if (bitD->bitsConsumed > (sizeof(bitD->bitContainer)*8)) /* should never happen */ return BIT_DStream_overflow; if (bitD->ptr >= bitD->start + sizeof(bitD->bitContainer)) { bitD->ptr -= bitD->bitsConsumed >> 3; bitD->bitsConsumed &= 7; bitD->bitContainer = MEM_readLEST(bitD->ptr); return BIT_DStream_unfinished; } if (bitD->ptr == bitD->start) { if (bitD->bitsConsumed < sizeof(bitD->bitContainer)*8) return BIT_DStream_endOfBuffer; return BIT_DStream_completed; } { U32 nbBytes = bitD->bitsConsumed >> 3; BIT_DStream_status result = BIT_DStream_unfinished; if (bitD->ptr - nbBytes < bitD->start) { nbBytes = (U32)(bitD->ptr - bitD->start); /* ptr > start */ result = BIT_DStream_endOfBuffer; } bitD->ptr -= nbBytes; bitD->bitsConsumed -= nbBytes*8; bitD->bitContainer = MEM_readLEST(bitD->ptr); /* reminder : srcSize > sizeof(bitD) */ return result; } } /*! BIT_endOfDStream * @return Tells if DStream has reached its exact end */ MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* DStream) { return ((DStream->ptr == DStream->start) && (DStream->bitsConsumed == sizeof(DStream->bitContainer)*8)); } #if defined (__cplusplus) } #endif #endif /* BITSTREAM_H_MODULE */ /* ****************************************************************** FSE : Finite State Entropy coder header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef FSE_STATIC_H #define FSE_STATIC_H #if defined (__cplusplus) extern "C" { #endif /* ***************************************** * Static allocation *******************************************/ /* FSE buffer bounds */ #define FSE_NCOUNTBOUND 512 #define FSE_BLOCKBOUND(size) (size + (size>>7)) #define FSE_COMPRESSBOUND(size) (FSE_NCOUNTBOUND + FSE_BLOCKBOUND(size)) /* Macro version, useful for static allocation */ /* It is possible to statically allocate FSE CTable/DTable as a table of unsigned using below macros */ #define FSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue) (1 + (1<<(maxTableLog-1)) + ((maxSymbolValue+1)*2)) #define FSE_DTABLE_SIZE_U32(maxTableLog) (1 + (1<= BIT_DStream_completed When it's done, verify decompression is fully completed, by checking both DStream and the relevant states. Checking if DStream has reached its end is performed by : BIT_endOfDStream(&DStream); Check also the states. There might be some symbols left there, if some high probability ones (>50%) are possible. FSE_endOfDState(&DState); */ /* ***************************************** * FSE unsafe API *******************************************/ static unsigned char FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD); /* faster, but works only if nbBits is always >= 1 (otherwise, result will be corrupted) */ /* ***************************************** * Implementation of inlined functions *******************************************/ /* decompression */ typedef struct { U16 tableLog; U16 fastMode; } FSE_DTableHeader; /* sizeof U32 */ typedef struct { unsigned short newState; unsigned char symbol; unsigned char nbBits; } FSE_decode_t; /* size == U32 */ MEM_STATIC void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt) { FSE_DTableHeader DTableH; memcpy(&DTableH, dt, sizeof(DTableH)); DStatePtr->state = BIT_readBits(bitD, DTableH.tableLog); BIT_reloadDStream(bitD); DStatePtr->table = dt + 1; } MEM_STATIC BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD) { const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state]; const U32 nbBits = DInfo.nbBits; BYTE symbol = DInfo.symbol; size_t lowBits = BIT_readBits(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; return symbol; } MEM_STATIC BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD) { const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state]; const U32 nbBits = DInfo.nbBits; BYTE symbol = DInfo.symbol; size_t lowBits = BIT_readBitsFast(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; return symbol; } MEM_STATIC unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr) { return DStatePtr->state == 0; } #if defined (__cplusplus) } #endif #endif /* FSE_STATIC_H */ /* ****************************************************************** FSE : Finite State Entropy coder Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef FSE_COMMONDEFS_ONLY /* ************************************************************** * Tuning parameters ****************************************************************/ /*!MEMORY_USAGE : * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.) * Increasing memory usage improves compression ratio * Reduced memory usage can improve speed, due to cache effect * Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */ #define FSE_MAX_MEMORY_USAGE 14 #define FSE_DEFAULT_MEMORY_USAGE 13 /*!FSE_MAX_SYMBOL_VALUE : * Maximum symbol value authorized. 
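*  (a larger value would enlarge the symbolNext[] stack array in FSE_buildDTable() below)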
*  Required for proper stack allocation */
#define FSE_MAX_SYMBOL_VALUE 255

/* **************************************************************
*  template functions type & suffix
****************************************************************/
#define FSE_FUNCTION_TYPE BYTE
#define FSE_FUNCTION_EXTENSION
#define FSE_DECODE_TYPE FSE_decode_t

#endif   /* !FSE_COMMONDEFS_ONLY */

/* **************************************************************
*  Compiler specifics
****************************************************************/
#ifdef _MSC_VER    /* Visual Studio */
#  define FORCE_INLINE static __forceinline
#  include <intrin.h>                    /* For Visual 2005 */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4214)        /* disable: C4214: non-int bitfields */
#else
#  if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
#    ifdef __GNUC__
#      define FORCE_INLINE static inline __attribute__((always_inline))
#    else
#      define FORCE_INLINE static inline
#    endif
#  else
#    define FORCE_INLINE static
#  endif /* __STDC_VERSION__ */
#endif

/* **************************************************************
*  Dependencies
****************************************************************/
#include <stdlib.h>     /* malloc, free, qsort */
#include <string.h>     /* memcpy, memset */
#include <stdio.h>      /* printf (debug) */

/* ***************************************************************
*  Constants
*****************************************************************/
#define FSE_MAX_TABLELOG  (FSE_MAX_MEMORY_USAGE-2)
#define FSE_MAX_TABLESIZE (1U<<FSE_MAX_TABLELOG)
#define FSE_MAXTABLESIZE_MASK (FSE_MAX_TABLESIZE-1)
#define FSE_DEFAULT_TABLELOG (FSE_DEFAULT_MEMORY_USAGE-2)
#define FSE_MIN_TABLELOG 5

#define FSE_TABLELOG_ABSOLUTE_MAX 15
#if FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX
#error "FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX is not supported"
#endif

/* **************************************************************
*  Error Management
****************************************************************/
#define FSE_STATIC_ASSERT(c) { enum { FSE_static_assert = 1/(int)(!!(c)) }; }   /* use only *after* variable declarations */

/* **************************************************************
*  Complex types
****************************************************************/
typedef U32 DTable_max_t[FSE_DTABLE_SIZE_U32(FSE_MAX_TABLELOG)];

/*-**************************************************************
*  Templates
****************************************************************/
/*
  designed to be included
  for type-specific functions (template emulation in C)
  Objective is to write these functions only once, for improved maintenance
*/

/* safety checks */
#ifndef FSE_FUNCTION_EXTENSION
#  error "FSE_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSE_FUNCTION_TYPE
#  error "FSE_FUNCTION_TYPE must be defined"
#endif

/* Function names */
#define FSE_CAT(X,Y) X##Y
#define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y)
#define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y)

static U32 FSE_tableStep(U32 tableSize) { return (tableSize>>1) + (tableSize>>3) + 3; }

static size_t FSE_buildDTable(FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
    FSE_DTableHeader DTableH;
    void* const tdPtr = dt+1;   /* because dt is unsigned, 32-bits aligned on 32-bits */
    FSE_DECODE_TYPE* const tableDecode = (FSE_DECODE_TYPE*) (tdPtr);
    const U32 tableSize = 1 << tableLog;
    const U32 tableMask = tableSize-1;
    const U32 step = FSE_tableStep(tableSize);
    U16 symbolNext[FSE_MAX_SYMBOL_VALUE+1];
    U32 position = 0;
    U32 highThreshold = tableSize-1;
    const S16 largeLimit= (S16)(1 << (tableLog-1));
    U32 noLarge = 1;
    U32 s;

    /* Sanity Checks */
    if (maxSymbolValue > FSE_MAX_SYMBOL_VALUE) return
ERROR(maxSymbolValue_tooLarge); if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Init, lay down lowprob symbols */ DTableH.tableLog = (U16)tableLog; for (s=0; s<=maxSymbolValue; s++) { if (normalizedCounter[s]==-1) { tableDecode[highThreshold--].symbol = (FSE_FUNCTION_TYPE)s; symbolNext[s] = 1; } else { if (normalizedCounter[s] >= largeLimit) noLarge=0; symbolNext[s] = normalizedCounter[s]; } } /* Spread symbols */ for (s=0; s<=maxSymbolValue; s++) { int i; for (i=0; i highThreshold) position = (position + step) & tableMask; /* lowprob area */ } } if (position!=0) return ERROR(GENERIC); /* position must reach all cells once, otherwise normalizedCounter is incorrect */ /* Build Decoding table */ { U32 i; for (i=0; i FSE_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge); bitStream >>= 4; bitCount = 4; *tableLogPtr = nbBits; remaining = (1<1) && (charnum<=*maxSVPtr)) { if (previous0) { unsigned n0 = charnum; while ((bitStream & 0xFFFF) == 0xFFFF) { n0+=24; if (ip < iend-5) { ip+=2; bitStream = MEM_readLE32(ip) >> bitCount; } else { bitStream >>= 16; bitCount+=16; } } while ((bitStream & 3) == 3) { n0+=3; bitStream>>=2; bitCount+=2; } n0 += bitStream & 3; bitCount += 2; if (n0 > *maxSVPtr) return ERROR(maxSymbolValue_tooSmall); while (charnum < n0) normalizedCounter[charnum++] = 0; if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) { ip += bitCount>>3; bitCount &= 7; bitStream = MEM_readLE32(ip) >> bitCount; } else bitStream >>= 2; } { const short max = (short)((2*threshold-1)-remaining); short count; if ((bitStream & (threshold-1)) < (U32)max) { count = (short)(bitStream & (threshold-1)); bitCount += nbBits-1; } else { count = (short)(bitStream & (2*threshold-1)); if (count >= threshold) count -= max; bitCount += nbBits; } count--; /* extra accuracy */ remaining -= FSE_abs(count); normalizedCounter[charnum++] = count; previous0 = !count; while (remaining < threshold) { nbBits--; threshold >>= 1; } { if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) { ip += bitCount>>3; bitCount &= 7; } else { bitCount -= (int)(8 * (iend - 4 - ip)); ip = iend - 4; } bitStream = MEM_readLE32(ip) >> (bitCount & 31); } } } if (remaining != 1) return ERROR(GENERIC); *maxSVPtr = charnum-1; ip += (bitCount+7)>>3; if ((size_t)(ip-istart) > hbSize) return ERROR(srcSize_wrong); return ip-istart; } /********************************************************* * Decompression (Byte symbols) *********************************************************/ static size_t FSE_buildDTable_rle (FSE_DTable* dt, BYTE symbolValue) { void* ptr = dt; FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr; void* dPtr = dt + 1; FSE_decode_t* const cell = (FSE_decode_t*)dPtr; DTableH->tableLog = 0; DTableH->fastMode = 0; cell->newState = 0; cell->symbol = symbolValue; cell->nbBits = 0; return 0; } static size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits) { void* ptr = dt; FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr; void* dPtr = dt + 1; FSE_decode_t* const dinfo = (FSE_decode_t*)dPtr; const unsigned tableSize = 1 << nbBits; const unsigned tableMask = tableSize - 1; const unsigned maxSymbolValue = tableMask; unsigned s; /* Sanity checks */ if (nbBits < 1) return ERROR(GENERIC); /* min size */ /* Build Decoding Table */ DTableH->tableLog = (U16)nbBits; DTableH->fastMode = 1; for (s=0; s<=maxSymbolValue; s++) { dinfo[s].newState = 0; dinfo[s].symbol = (BYTE)s; dinfo[s].nbBits = (BYTE)nbBits; } return 0; } FORCE_INLINE size_t FSE_decompress_usingDTable_generic( void* dst, size_t maxDstSize, 
const void* cSrc, size_t cSrcSize, const FSE_DTable* dt, const unsigned fast) { BYTE* const ostart = (BYTE*) dst; BYTE* op = ostart; BYTE* const omax = op + maxDstSize; BYTE* const olimit = omax-3; BIT_DStream_t bitD; FSE_DState_t state1; FSE_DState_t state2; size_t errorCode; /* Init */ errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize); /* replaced last arg by maxCompressed Size */ if (FSE_isError(errorCode)) return errorCode; FSE_initDState(&state1, &bitD, dt); FSE_initDState(&state2, &bitD, dt); #define FSE_GETSYMBOL(statePtr) fast ? FSE_decodeSymbolFast(statePtr, &bitD) : FSE_decodeSymbol(statePtr, &bitD) /* 4 symbols per loop */ for ( ; (BIT_reloadDStream(&bitD)==BIT_DStream_unfinished) && (op sizeof(bitD.bitContainer)*8) /* This test must be static */ BIT_reloadDStream(&bitD); op[1] = FSE_GETSYMBOL(&state2); if (FSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ { if (BIT_reloadDStream(&bitD) > BIT_DStream_unfinished) { op+=2; break; } } op[2] = FSE_GETSYMBOL(&state1); if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ BIT_reloadDStream(&bitD); op[3] = FSE_GETSYMBOL(&state2); } /* tail */ /* note : BIT_reloadDStream(&bitD) >= FSE_DStream_partiallyFilled; Ends at exactly BIT_DStream_completed */ while (1) { if ( (BIT_reloadDStream(&bitD)>BIT_DStream_completed) || (op==omax) || (BIT_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state1))) ) break; *op++ = FSE_GETSYMBOL(&state1); if ( (BIT_reloadDStream(&bitD)>BIT_DStream_completed) || (op==omax) || (BIT_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state2))) ) break; *op++ = FSE_GETSYMBOL(&state2); } /* end ? */ if (BIT_endOfDStream(&bitD) && FSE_endOfDState(&state1) && FSE_endOfDState(&state2)) return op-ostart; if (op==omax) return ERROR(dstSize_tooSmall); /* dst buffer is full, but cSrc unfinished */ return ERROR(corruption_detected); } static size_t FSE_decompress_usingDTable(void* dst, size_t originalSize, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt) { FSE_DTableHeader DTableH; U32 fastMode; memcpy(&DTableH, dt, sizeof(DTableH)); fastMode = DTableH.fastMode; /* select fast mode (static) */ if (fastMode) return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1); return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0); } static size_t FSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize) { const BYTE* const istart = (const BYTE*)cSrc; const BYTE* ip = istart; short counting[FSE_MAX_SYMBOL_VALUE+1]; DTable_max_t dt; /* Static analyzer seems unable to understand this table will be properly initialized later */ unsigned tableLog; unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE; size_t errorCode; if (cSrcSize<2) return ERROR(srcSize_wrong); /* too small input size */ /* normal FSE decoding mode */ errorCode = FSE_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize); if (FSE_isError(errorCode)) return errorCode; if (errorCode >= cSrcSize) return ERROR(srcSize_wrong); /* too small input size */ ip += errorCode; cSrcSize -= errorCode; errorCode = FSE_buildDTable (dt, counting, maxSymbolValue, tableLog); if (FSE_isError(errorCode)) return errorCode; /* always return, even if it is an error code */ return FSE_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt); } #endif /* FSE_COMMONDEFS_ONLY */ /* ****************************************************************** Huff0 : Huffman coder, part of New Generation Entropy library header file Copyright (C) 
2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef HUFF0_H #define HUFF0_H #if defined (__cplusplus) extern "C" { #endif /* **************************************** * Dependency ******************************************/ #include /* size_t */ /* **************************************** * Huff0 simple functions ******************************************/ static size_t HUF_decompress(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /*! HUF_decompress(): Decompress Huff0 data from buffer 'cSrc', of size 'cSrcSize', into already allocated destination buffer 'dst', of size 'dstSize'. 'dstSize' must be the exact size of original (uncompressed) data. Note : in contrast with FSE, HUF_decompress can regenerate RLE (cSrcSize==1) and uncompressed (cSrcSize==dstSize) data, because it knows size to regenerate. @return : size of regenerated data (== dstSize) or an error code, which can be tested using HUF_isError() */ /* **************************************** * Tool functions ******************************************/ /* Error Management */ static unsigned HUF_isError(size_t code); /* tells if a return value is an error code */ #if defined (__cplusplus) } #endif #endif /* HUFF0_H */ /* ****************************************************************** Huff0 : Huffman coder, part of New Generation Entropy library header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
/* ******************************************************************
   Huff0 : Huffman coder, part of New Generation Entropy library
   header file for static linking (only)
   Copyright (C) 2013-2015, Yann Collet

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are met:

   * Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
   AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
   LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
   CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
   SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
   CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
   POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
   - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
   - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */
#ifndef HUFF0_STATIC_H
#define HUFF0_STATIC_H

#if defined (__cplusplus)
extern "C" {
#endif


/* ****************************************
*  Static allocation macros
******************************************/
/* static allocation of Huff0's DTable */
#define HUF_DTABLE_SIZE(maxTableLog)   (1 + (1<<maxTableLog))
#define HUF_CREATE_STATIC_DTABLEX2(DTable, maxTableLog) \
        unsigned short DTable[HUF_DTABLE_SIZE(maxTableLog)] = { maxTableLog }
#define HUF_CREATE_STATIC_DTABLEX4(DTable, maxTableLog) \
        unsigned int DTable[HUF_DTABLE_SIZE(maxTableLog)] = { maxTableLog }


/* ****************************************
*  Advanced decompression functions
******************************************/
static size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* single-symbol decoder */
static size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* double-symbols decoder */


#if defined (__cplusplus)
}
#endif

#endif /* HUFF0_STATIC_H */

/* ******************************************************************
   Huff0 : Huffman coder, part of New Generation Entropy library
   Copyright (C) 2013-2015, Yann Collet.

   BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are met:

   * Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
   AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
   LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
   CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
   SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
   CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
   POSSIBILITY OF SUCH DAMAGE.

   You can contact the author at :
   - Source repository : https://github.com/Cyan4973/FiniteStateEntropy
   - Public forum : https://groups.google.com/forum/#!forum/lz4c
****************************************************************** */

/* **************************************************************
*  Compiler specifics
****************************************************************/
#if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)
/* inline is defined */
#elif defined(_MSC_VER)
#  define inline __inline
#else
#  define inline /* disable inline */
#endif

#ifdef _MSC_VER    /* Visual Studio */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#endif


/* **************************************************************
*  Includes
****************************************************************/
#include <stdlib.h>     /* malloc, free, qsort */
#include <string.h>     /* memcpy, memset */
#include <stdio.h>      /* printf (debug) */


/* **************************************************************
*  Constants
****************************************************************/
#define HUF_ABSOLUTEMAX_TABLELOG  16   /* absolute limit of HUF_MAX_TABLELOG. Beyond that value, code does not work */
#define HUF_MAX_TABLELOG  12           /* max configured tableLog (for static allocation); can be modified up to HUF_ABSOLUTEMAX_TABLELOG */
#define HUF_DEFAULT_TABLELOG  HUF_MAX_TABLELOG   /* tableLog by default, when not specified */
#define HUF_MAX_SYMBOL_VALUE 255
#if (HUF_MAX_TABLELOG > HUF_ABSOLUTEMAX_TABLELOG)
#  error "HUF_MAX_TABLELOG is too large !"
#endif


/* **************************************************************
*  Error Management
****************************************************************/
static unsigned HUF_isError(size_t code) { return ERR_isError(code); }
#define HUF_STATIC_ASSERT(c) { enum { HUF_static_assert = 1/(int)(!!(c)) }; }   /* use only *after* variable declarations */


/*-*******************************************************
*  Huff0 : Huffman block decompression
*********************************************************/
typedef struct { BYTE byte; BYTE nbBits; } HUF_DEltX2;   /* single-symbol decoding */

typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUF_DEltX4;   /* double-symbols decoding */

typedef struct { BYTE symbol; BYTE weight; } sortedSymbol_t;
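/* Summary of the compact tree header decoded by HUF_readStats() below (derived
 * from the code; the first byte, iSize, selects the representation) :
 *   iSize >= 242 : RLE special case — all weights are 1, the number of symbols
 *                  comes from a small lookup table (l[iSize-242]).
 *   128..241     : raw case — (iSize-127) weights stored as 4-bit nibbles.
 *   < 128        : normal case — the weights are themselves FSE-compressed and
 *                  occupy iSize bytes.
 * The last weight is never stored : it is implied, because the sum of
 * 2^(weight-1) over all symbols must complete to a clean power of 2 (the
 * table size). */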
/*! HUF_readStats
    Read compact Huffman tree, saved by HUF_writeCTable
    @huffWeight : destination buffer
    @return : size read from `src`
*/
static size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats,
                            U32* nbSymbolsPtr, U32* tableLogPtr,
                            const void* src, size_t srcSize)
{
    U32 weightTotal;
    U32 tableLog;
    const BYTE* ip = (const BYTE*) src;
    size_t iSize;
    size_t oSize;
    U32 n;

    if (!srcSize) return ERROR(srcSize_wrong);
    iSize = ip[0];
    //memset(huffWeight, 0, hwSize);   /* is not necessary, even though some analyzer complain ... */

    if (iSize >= 128)  /* special header */
    {
        if (iSize >= (242))   /* RLE */
        {
            static int l[14] = { 1, 2, 3, 4, 7, 8, 15, 16, 31, 32, 63, 64, 127, 128 };
            oSize = l[iSize-242];
            memset(huffWeight, 1, hwSize);
            iSize = 0;
        }
        else   /* Incompressible */
        {
            oSize = iSize - 127;
            iSize = ((oSize+1)/2);
            if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
            if (oSize >= hwSize) return ERROR(corruption_detected);
            ip += 1;
            for (n=0; n<oSize; n+=2)
            {
                huffWeight[n]   = ip[n/2] >> 4;
                huffWeight[n+1] = ip[n/2] & 15;
            }
        }
    }
    else  /* header compressed with FSE (normal case) */
    {
        if (iSize+1 > srcSize) return ERROR(srcSize_wrong);
        oSize = FSE_decompress(huffWeight, hwSize-1, ip+1, iSize);   /* max (hwSize-1) values decoded, as last one is implied */
        if (FSE_isError(oSize)) return oSize;
    }

    /* collect weight stats */
    memset(rankStats, 0, (HUF_ABSOLUTEMAX_TABLELOG + 1) * sizeof(U32));
    weightTotal = 0;
    for (n=0; n<oSize; n++)
    {
        if (huffWeight[n] >= HUF_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected);
        rankStats[huffWeight[n]]++;
        weightTotal += (1 << huffWeight[n]) >> 1;
    }
    if (weightTotal == 0) return ERROR(corruption_detected);

    /* get last non-null symbol weight (implied, total must be 2^n) */
    tableLog = BIT_highbit32(weightTotal) + 1;
    if (tableLog > HUF_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected);
    {
        U32 total = 1 << tableLog;
        U32 rest = total - weightTotal;
        U32 verif = 1 << BIT_highbit32(rest);
        U32 lastWeight = BIT_highbit32(rest) + 1;
        if (verif != rest) return ERROR(corruption_detected);    /* last value must be a clean power of 2 */
        huffWeight[oSize] = (BYTE)lastWeight;
        rankStats[lastWeight]++;
    }

    /* check tree construction validity */
    if ((rankStats[1] < 2) || (rankStats[1] & 1)) return ERROR(corruption_detected);   /* by construction : at least 2 elts of rank 1, must be even */

    /* results */
    *nbSymbolsPtr = (U32)(oSize+1);
    *tableLogPtr = tableLog;
    return iSize+1;
}


/**************************/
/* single-symbol decoding */
/**************************/

static size_t HUF_readDTableX2 (U16* DTable, const void* src, size_t srcSize)
{
    BYTE huffWeight[HUF_MAX_SYMBOL_VALUE + 1];
    U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1];   /* large enough for values from 0 to 16 */
    U32 tableLog = 0;
    size_t iSize;
    U32 nbSymbols = 0;
    U32 n;
    U32 nextRankStart;
    void* const dtPtr = DTable + 1;
    HUF_DEltX2* const dt = (HUF_DEltX2*)dtPtr;

    HUF_STATIC_ASSERT(sizeof(HUF_DEltX2) == sizeof(U16));   /* if compilation fails here, assertion is false */
    //memset(huffWeight, 0, sizeof(huffWeight));   /* is not necessary, even though some analyzer complain ... */
    iSize = HUF_readStats(huffWeight, HUF_MAX_SYMBOL_VALUE + 1, rankVal, &nbSymbols, &tableLog, src, srcSize);
    if (HUF_isError(iSize)) return iSize;

    /* check result */
    if (tableLog > DTable[0]) return ERROR(tableLog_tooLarge);   /* DTable is too small */
    DTable[0] = (U16)tableLog;   /* maybe should separate sizeof DTable, as allocated, from used size of DTable, in case of DTable re-use */

    /* Prepare ranks */
    nextRankStart = 0;
    for (n=1; n<=tableLog; n++)
    {
        U32 current = nextRankStart;
        nextRankStart += (rankVal[n] << (n-1));
        rankVal[n] = current;
    }

    /* fill DTable */
    for (n=0; n<nbSymbols; n++)
    {
        const U32 w = huffWeight[n];
        const U32 length = (1 << w) >> 1;
        U32 i;
        HUF_DEltX2 D;
        D.byte = (BYTE)n; D.nbBits = (BYTE)(tableLog + 1 - w);
        for (i = rankVal[w]; i < rankVal[w] + length; i++)
            dt[i] = D;
        rankVal[w] += length;
    }

    return iSize;
}

static BYTE HUF_decodeSymbolX2(BIT_DStream_t* Dstream, const HUF_DEltX2* dt, const U32 dtLog)
{
    const size_t val = BIT_lookBitsFast(Dstream, dtLog);   /* note : dtLog >= 1 */
    const BYTE c = dt[val].byte;
    BIT_skipBits(Dstream, dt[val].nbBits);
    return c;
}

#define HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \
    *ptr++ = HUF_decodeSymbolX2(DStreamPtr, dt, dtLog)

#define HUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \
    if (MEM_64bits() || (HUF_MAX_TABLELOG<=12)) \
        HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)

#define HUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \
    if (MEM_64bits()) \
        HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr)

static inline size_t HUF_decodeStreamX2(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const HUF_DEltX2* const dt, const U32 dtLog)
{
    BYTE* const pStart = p;

    /* up to 4 symbols at a time */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-4))
    {
        HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX2_1(p, bitDPtr);
        HUF_DECODE_SYMBOLX2_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX2_0(p, bitDPtr);
    }

    /* closer to the end */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd))
        HUF_DECODE_SYMBOLX2_0(p, bitDPtr);

    /* no more data to retrieve from bitstream, hence no need to reload */
    while (p < pEnd)
        HUF_DECODE_SYMBOLX2_0(p, bitDPtr);

    return pEnd-pStart;
}
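/* Layout of the compressed block consumed by HUF_decompress4X2_usingDTable()
 * below, as implied by the code : a 6-byte jump table (three little-endian U16
 * lengths for streams 1-3; stream 4 runs to the end of the block) followed by
 * four independent bitstreams. Each stream regenerates one quarter of the
 * output (segmentSize = (dstSize+3)/4), which is why a compressed block must
 * be at least 10 bytes (jump table + at least 1 byte per stream). */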
static size_t HUF_decompress4X2_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const U16* DTable)
{
    if (cSrcSize < 10) return ERROR(corruption_detected);   /* strict minimum : jump table + 1 byte per stream */

    {
        const BYTE* const istart = (const BYTE*) cSrc;
        BYTE* const ostart = (BYTE*) dst;
        BYTE* const oend = ostart + dstSize;
        const void* const dtPtr = DTable;
        const HUF_DEltX2* const dt = ((const HUF_DEltX2*)dtPtr) +1;
        const U32 dtLog = DTable[0];
        size_t errorCode;

        /* Init */
        BIT_DStream_t bitD1;
        BIT_DStream_t bitD2;
        BIT_DStream_t bitD3;
        BIT_DStream_t bitD4;
        const size_t length1 = MEM_readLE16(istart);
        const size_t length2 = MEM_readLE16(istart+2);
        const size_t length3 = MEM_readLE16(istart+4);
        size_t length4;
        const BYTE* const istart1 = istart + 6;   /* jumpTable */
        const BYTE* const istart2 = istart1 + length1;
        const BYTE* const istart3 = istart2 + length2;
        const BYTE* const istart4 = istart3 + length3;
        const size_t segmentSize = (dstSize+3) / 4;
        BYTE* const opStart2 = ostart + segmentSize;
        BYTE* const opStart3 = opStart2 + segmentSize;
        BYTE* const opStart4 = opStart3 + segmentSize;
        BYTE* op1 = ostart;
        BYTE* op2 = opStart2;
        BYTE* op3 = opStart3;
        BYTE* op4 = opStart4;
        U32 endSignal;

        length4 = cSrcSize - (length1 + length2 + length3 + 6);
        if (length4 > cSrcSize) return ERROR(corruption_detected);   /* overflow */
        errorCode = BIT_initDStream(&bitD1, istart1, length1);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD2, istart2, length2);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD3, istart3, length3);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD4, istart4, length4);
        if (HUF_isError(errorCode)) return errorCode;

        /* 16-32 symbols per loop (4-8 symbols per stream) */
        endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
        for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; )
        {
            HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
            HUF_DECODE_SYMBOLX2_1(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_1(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_1(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_1(op4, &bitD4);
            HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
            HUF_DECODE_SYMBOLX2_0(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_0(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_0(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_0(op4, &bitD4);
            endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
        }

        /* check corruption */
        if (op1 > opStart2) return ERROR(corruption_detected);
        if (op2 > opStart3) return ERROR(corruption_detected);
        if (op3 > opStart4) return ERROR(corruption_detected);   /* note : op4 supposed already verified within main loop */

        /* finish bitStreams one by one */
        HUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog);
        HUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog);
        HUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog);
        HUF_decodeStreamX2(op4, &bitD4, oend,     dt, dtLog);

        /* check */
        endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
        if (!endSignal) return ERROR(corruption_detected);

        /* decoded size */
        return dstSize;
    }
}

static size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_MAX_TABLELOG);
    const BYTE* ip = (const BYTE*) cSrc;
    size_t errorCode;

    errorCode = HUF_readDTableX2 (DTable, cSrc, cSrcSize);
    if (HUF_isError(errorCode)) return errorCode;
    if (errorCode >= cSrcSize) return ERROR(srcSize_wrong);
    ip += errorCode;
    cSrcSize -= errorCode;

    return HUF_decompress4X2_usingDTable (dst, dstSize, ip, cSrcSize, DTable);
}


/***************************/
/* double-symbols decoding */
/***************************/

static void HUF_fillDTableX4Level2(HUF_DEltX4* DTable, U32 sizeLog, const U32 consumed,
                           const U32* rankValOrigin, const int minWeight,
                           const sortedSymbol_t* sortedSymbols, const U32 sortedListSize,
                           U32 nbBitsBaseline, U16 baseSeq)
{
    HUF_DEltX4 DElt;
    U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1];
    U32 s;

    /* get pre-calculated rankVal */
    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill skipped values */
    if (minWeight>1)
    {
        U32 i, skipSize = rankVal[minWeight];
        MEM_writeLE16(&(DElt.sequence), baseSeq);
        DElt.nbBits   = (BYTE)(consumed);
        DElt.length   = 1;
        for (i = 0; i < skipSize; i++)
            DTable[i] = DElt;
    }

    /* fill DTable */
    for (s=0; s<sortedListSize; s++)
    {
        const U32 symbol = sortedSymbols[s].symbol;
        const U32 weight = sortedSymbols[s].weight;
        const U32 nbBits = nbBitsBaseline - weight;
        const U32 length = 1 << (sizeLog-nbBits);
        const U32 start = rankVal[weight];
        U32 i = start;
        const U32 end = start + length;

        MEM_writeLE16(&(DElt.sequence), (U16)(baseSeq + (symbol << 8)));
        DElt.nbBits = (BYTE)(nbBits + consumed);
        DElt.length = 2;
        do { DTable[i++] = DElt; } while (i<end);   /* since length >= 1 */

        rankVal[weight] += length;
    }
}

typedef U32 rankVal_t[HUF_ABSOLUTEMAX_TABLELOG][HUF_ABSOLUTEMAX_TABLELOG + 1];
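/* HUF_fillDTableX4() below builds the double-symbols table : each HUF_DEltX4
 * packs up to two decoded bytes into `sequence`, with `length` (1 or 2) giving
 * how many are valid and `nbBits` the number of bits the entry consumes.
 * HUF_fillDTableX4Level2() (above) handles the cells that still have enough
 * spare bits after the first symbol to embed a second one. */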
static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog,
                           const sortedSymbol_t* sortedList, const U32 sortedListSize,
                           const U32* rankStart, rankVal_t rankValOrigin, const U32 maxWeight,
                           const U32 nbBitsBaseline)
{
    U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1];
    const int scaleLog = nbBitsBaseline - targetLog;   /* note : targetLog >= srcLog, hence scaleLog <= 1 */
    const U32 minBits  = nbBitsBaseline - maxWeight;
    U32 s;

    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill DTable */
    for (s=0; s<sortedListSize; s++)
    {
        const U16 symbol = sortedList[s].symbol;
        const U32 weight = sortedList[s].weight;
        const U32 nbBits = nbBitsBaseline - weight;
        const U32 start = rankVal[weight];
        const U32 length = 1 << (targetLog-nbBits);

        if (targetLog-nbBits >= minBits)   /* enough room for a second symbol */
        {
            U32 sortedRank;
            int minWeight = nbBits + scaleLog;
            if (minWeight < 1) minWeight = 1;
            sortedRank = rankStart[minWeight];
            HUF_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits,
                           rankValOrigin[nbBits], minWeight,
                           sortedList+sortedRank, sortedListSize-sortedRank,
                           nbBitsBaseline, symbol);
        }
        else
        {
            U32 i;
            const U32 end = start + length;
            HUF_DEltX4 DElt;

            MEM_writeLE16(&(DElt.sequence), symbol);
            DElt.nbBits   = (BYTE)(nbBits);
            DElt.length   = 1;
            for (i = start; i < end; i++)
                DTable[i] = DElt;
        }
        rankVal[weight] += length;
    }
}

static size_t HUF_readDTableX4 (U32* DTable, const void* src, size_t srcSize)
{
    BYTE weightList[HUF_MAX_SYMBOL_VALUE + 1];
    sortedSymbol_t sortedSymbol[HUF_MAX_SYMBOL_VALUE + 1];
    U32 rankStats[HUF_ABSOLUTEMAX_TABLELOG + 1] = { 0 };
    U32 rankStart0[HUF_ABSOLUTEMAX_TABLELOG + 2] = { 0 };
    U32* const rankStart = rankStart0+1;
    rankVal_t rankVal;
    U32 tableLog, maxW, sizeOfSort, nbSymbols;
    const U32 memLog = DTable[0];
    size_t iSize;
    void* dtPtr = DTable;
    HUF_DEltX4* const dt = ((HUF_DEltX4*)dtPtr) + 1;

    HUF_STATIC_ASSERT(sizeof(HUF_DEltX4) == sizeof(U32));   /* if compilation fails here, assertion is false */
    if (memLog > HUF_ABSOLUTEMAX_TABLELOG) return ERROR(tableLog_tooLarge);
    //memset(weightList, 0, sizeof(weightList));   /* is not necessary, even though some analyzer complain ... */

    iSize = HUF_readStats(weightList, HUF_MAX_SYMBOL_VALUE + 1, rankStats, &nbSymbols, &tableLog, src, srcSize);
    if (HUF_isError(iSize)) return iSize;

    /* check result */
    if (tableLog > memLog) return ERROR(tableLog_tooLarge);   /* DTable can't fit code depth */

    /* find maxWeight */
    for (maxW = tableLog; rankStats[maxW]==0; maxW--)
        { if (!maxW) return ERROR(GENERIC); }   /* necessarily finds a solution before maxW==0 */

    /* Get start index of each weight */
    {
        U32 w, nextRankStart = 0;
        for (w=1; w<=maxW; w++)
        {
            U32 current = nextRankStart;
            nextRankStart += rankStats[w];
            rankStart[w] = current;
        }
        rankStart[0] = nextRankStart;   /* put all 0w symbols at the end of sorted list*/
        sizeOfSort = nextRankStart;
    }

    /* sort symbols by weight */
    {
        U32 s;
        for (s=0; s<nbSymbols; s++)
        {
            U32 w = weightList[s];
            U32 r = rankStart[w]++;
            sortedSymbol[r].symbol = (BYTE)s;
            sortedSymbol[r].weight = (BYTE)w;
        }
        rankStart[0] = 0;   /* forget 0w symbols; this is beginning of weight(1) */
    }

    /* Build rankVal */
    {
        const U32 minBits = tableLog+1 - maxW;
        U32 nextRankVal = 0;
        U32 w, consumed;
        const int rescale = (memLog-tableLog) - 1;   /* tableLog <= memLog */
        U32* rankVal0 = rankVal[0];
        for (w=1; w<=maxW; w++)
        {
            U32 current = nextRankVal;
            nextRankVal += rankStats[w] << (w+rescale);
            rankVal0[w] = current;
        }
        for (consumed = minBits; consumed <= memLog - minBits; consumed++)
        {
            U32* rankValPtr = rankVal[consumed];
            U32 w;
            for (w = 1; w <= maxW; w++)
            {
                rankValPtr[w] = rankVal0[w] >> consumed;
            }
        }
    }

    HUF_fillDTableX4(dt, memLog,
                   sortedSymbol, sizeOfSort,
                   rankStart0, rankVal, maxW,
                   tableLog+1);

    return iSize;
}

static U32 HUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
{
    const size_t val = BIT_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 2);
    BIT_skipBits(DStream, dt[val].nbBits);
    return dt[val].length;
}

static U32 HUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
{
    const size_t val = BIT_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 1);
    if (dt[val].length==1) BIT_skipBits(DStream, dt[val].nbBits);
    else
    {
        if (DStream->bitsConsumed < (sizeof(DStream->bitContainer)*8))
        {
            BIT_skipBits(DStream, dt[val].nbBits);
            if (DStream->bitsConsumed > (sizeof(DStream->bitContainer)*8))
                DStream->bitsConsumed = (sizeof(DStream->bitContainer)*8);   /* ugly hack; works only because it's the last symbol.
Note : can't easily extract nbBits from just this symbol */ } } return 1; } #define HUF_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \ ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog) #define HUF_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \ if (MEM_64bits() || (HUF_MAX_TABLELOG<=12)) \ ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog) #define HUF_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \ if (MEM_64bits()) \ ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog) static inline size_t HUF_decodeStreamX4(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd, const HUF_DEltX4* const dt, const U32 dtLog) { BYTE* const pStart = p; /* up to 8 symbols at a time */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd-7)) { HUF_DECODE_SYMBOLX4_2(p, bitDPtr); HUF_DECODE_SYMBOLX4_1(p, bitDPtr); HUF_DECODE_SYMBOLX4_2(p, bitDPtr); HUF_DECODE_SYMBOLX4_0(p, bitDPtr); } /* closer to the end */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-2)) HUF_DECODE_SYMBOLX4_0(p, bitDPtr); while (p <= pEnd-2) HUF_DECODE_SYMBOLX4_0(p, bitDPtr); /* no need to reload : reached the end of DStream */ if (p < pEnd) p += HUF_decodeLastSymbolX4(p, bitDPtr, dt, dtLog); return p-pStart; } static size_t HUF_decompress4X4_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const U32* DTable) { if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* const dtPtr = DTable; const HUF_DEltX4* const dt = ((const HUF_DEltX4*)dtPtr) +1; const U32 dtLog = DTable[0]; size_t errorCode; /* Init */ BIT_DStream_t bitD1; BIT_DStream_t bitD2; BIT_DStream_t bitD3; BIT_DStream_t bitD4; const size_t length1 = MEM_readLE16(istart); const size_t length2 = MEM_readLE16(istart+2); const size_t length3 = MEM_readLE16(istart+4); size_t length4; const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; length4 = cSrcSize - (length1 + length2 + length3 + 6); if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ errorCode = BIT_initDStream(&bitD1, istart1, length1); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD2, istart2, length2); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD3, istart3, length3); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD4, istart4, length4); if (HUF_isError(errorCode)) return errorCode; /* 16-32 symbols per loop (4-8 symbols per stream) */ endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; ) { HUF_DECODE_SYMBOLX4_2(op1, &bitD1); HUF_DECODE_SYMBOLX4_2(op2, &bitD2); HUF_DECODE_SYMBOLX4_2(op3, &bitD3); HUF_DECODE_SYMBOLX4_2(op4, &bitD4); HUF_DECODE_SYMBOLX4_1(op1, &bitD1); HUF_DECODE_SYMBOLX4_1(op2, &bitD2); HUF_DECODE_SYMBOLX4_1(op3, &bitD3); HUF_DECODE_SYMBOLX4_1(op4, &bitD4); HUF_DECODE_SYMBOLX4_2(op1, &bitD1); 
HUF_DECODE_SYMBOLX4_2(op2, &bitD2); HUF_DECODE_SYMBOLX4_2(op3, &bitD3); HUF_DECODE_SYMBOLX4_2(op4, &bitD4); HUF_DECODE_SYMBOLX4_0(op1, &bitD1); HUF_DECODE_SYMBOLX4_0(op2, &bitD2); HUF_DECODE_SYMBOLX4_0(op3, &bitD3); HUF_DECODE_SYMBOLX4_0(op4, &bitD4); endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4); } /* check corruption */ if (op1 > opStart2) return ERROR(corruption_detected); if (op2 > opStart3) return ERROR(corruption_detected); if (op3 > opStart4) return ERROR(corruption_detected); /* note : op4 supposed already verified within main loop */ /* finish bitStreams one by one */ HUF_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog); HUF_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog); HUF_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog); HUF_decodeStreamX4(op4, &bitD4, oend, dt, dtLog); /* check */ endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4); if (!endSignal) return ERROR(corruption_detected); /* decoded size */ return dstSize; } } static size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_MAX_TABLELOG); const BYTE* ip = (const BYTE*) cSrc; size_t hSize = HUF_readDTableX4 (DTable, cSrc, cSrcSize); if (HUF_isError(hSize)) return hSize; if (hSize >= cSrcSize) return ERROR(srcSize_wrong); ip += hSize; cSrcSize -= hSize; return HUF_decompress4X4_usingDTable (dst, dstSize, ip, cSrcSize, DTable); } /**********************************/ /* Generic decompression selector */ /**********************************/ typedef struct { U32 tableTime; U32 decode256Time; } algo_time_t; static const algo_time_t algoTime[16 /* Quantization */][3 /* single, double, quad */] = { /* single, double, quad */ {{0,0}, {1,1}, {2,2}}, /* Q==0 : impossible */ {{0,0}, {1,1}, {2,2}}, /* Q==1 : impossible */ {{ 38,130}, {1313, 74}, {2151, 38}}, /* Q == 2 : 12-18% */ {{ 448,128}, {1353, 74}, {2238, 41}}, /* Q == 3 : 18-25% */ {{ 556,128}, {1353, 74}, {2238, 47}}, /* Q == 4 : 25-32% */ {{ 714,128}, {1418, 74}, {2436, 53}}, /* Q == 5 : 32-38% */ {{ 883,128}, {1437, 74}, {2464, 61}}, /* Q == 6 : 38-44% */ {{ 897,128}, {1515, 75}, {2622, 68}}, /* Q == 7 : 44-50% */ {{ 926,128}, {1613, 75}, {2730, 75}}, /* Q == 8 : 50-56% */ {{ 947,128}, {1729, 77}, {3359, 77}}, /* Q == 9 : 56-62% */ {{1107,128}, {2083, 81}, {4006, 84}}, /* Q ==10 : 62-69% */ {{1177,128}, {2379, 87}, {4785, 88}}, /* Q ==11 : 69-75% */ {{1242,128}, {2415, 93}, {5155, 84}}, /* Q ==12 : 75-81% */ {{1349,128}, {2644,106}, {5260,106}}, /* Q ==13 : 81-87% */ {{1455,128}, {2422,124}, {4174,124}}, /* Q ==14 : 87-93% */ {{ 722,128}, {1891,145}, {1936,146}}, /* Q ==15 : 93-99% */ }; typedef size_t (*decompressionAlgo)(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); static size_t HUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize) { static const decompressionAlgo decompress[3] = { HUF_decompress4X2, HUF_decompress4X4, NULL }; /* estimate decompression time */ U32 Q; const U32 D256 = (U32)(dstSize >> 8); U32 Dtime[3]; U32 algoNb = 0; int n; /* validation checks */ if (dstSize == 0) return ERROR(dstSize_tooSmall); if (cSrcSize > dstSize) return ERROR(corruption_detected); /* invalid */ if (cSrcSize == dstSize) { memcpy(dst, cSrc, dstSize); return dstSize; } /* not compressed */ if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; } /* RLE */ /* decoder timing evaluation */ Q = 
(U32)(cSrcSize * 16 / dstSize); /* Q < 16 since dstSize > cSrcSize */ for (n=0; n<3; n++) Dtime[n] = algoTime[Q][n].tableTime + (algoTime[Q][n].decode256Time * D256); Dtime[1] += Dtime[1] >> 4; Dtime[2] += Dtime[2] >> 3; /* advantage to algorithms using less memory, for cache eviction */ if (Dtime[1] < Dtime[0]) algoNb = 1; return decompress[algoNb](dst, dstSize, cSrc, cSrcSize); //return HUF_decompress4X2(dst, dstSize, cSrc, cSrcSize); /* multi-streams single-symbol decoding */ //return HUF_decompress4X4(dst, dstSize, cSrc, cSrcSize); /* multi-streams double-symbols decoding */ //return HUF_decompress4X6(dst, dstSize, cSrc, cSrcSize); /* multi-streams quad-symbols decoding */ } #endif /* ZSTD_CCOMMON_H_MODULE */ /* zstd - decompression module fo v0.4 legacy format Copyright (C) 2015-2016, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c */ /* *************************************************************** * Tuning parameters *****************************************************************/ /*! 
*  HEAPMODE :
*  Select how default decompression function ZSTD_decompress() will allocate memory,
*  in memory stack (0), or in memory heap (1, requires malloc())
*/
#ifndef ZSTD_HEAPMODE
#  define ZSTD_HEAPMODE 1
#endif


/* *******************************************************
*  Includes
*********************************************************/
#include <stdlib.h>      /* calloc */
#include <string.h>      /* memcpy, memmove */
#include <stdio.h>       /* debug : printf */


/* *******************************************************
*  Compiler specifics
*********************************************************/
#ifdef _MSC_VER    /* Visual Studio */
#  include <intrin.h>                    /* For Visual 2005 */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4324)        /* disable: C4324: padded structure */
#endif


/* *************************************
*  Local types
***************************************/
typedef struct
{
    blockType_t blockType;
    U32 origSize;
} blockProperties_t;


/* *******************************************************
*  Memory operations
**********************************************************/
static void ZSTD_copy4(void* dst, const void* src) { memcpy(dst, src, 4); }


/* *************************************
*  Error Management
***************************************/
/*! ZSTD_isError
*   tells if a return value is an error code */
static unsigned ZSTD_isError(size_t code) { return ERR_isError(code); }


/* *************************************************************
*   Context management
***************************************************************/
typedef enum { ZSTDds_getFrameHeaderSize, ZSTDds_decodeFrameHeader,
               ZSTDds_decodeBlockHeader, ZSTDds_decompressBlock } ZSTD_dStage;

struct ZSTDv04_Dctx_s
{
    U32 LLTable[FSE_DTABLE_SIZE_U32(LLFSELog)];
    U32 OffTable[FSE_DTABLE_SIZE_U32(OffFSELog)];
    U32 MLTable[FSE_DTABLE_SIZE_U32(MLFSELog)];
    const void* previousDstEnd;
    const void* base;
    const void* vBase;
    const void* dictEnd;
    size_t expected;
    size_t headerSize;
    ZSTD_parameters params;
    blockType_t bType;
    ZSTD_dStage stage;
    const BYTE* litPtr;
    size_t litSize;
    BYTE litBuffer[BLOCKSIZE + 8 /* margin for wildcopy */];
    BYTE headerBuffer[ZSTD_frameHeaderSize_max];
};   /* typedef'd to ZSTD_DCtx within "zstd_static.h" */

static size_t ZSTD_resetDCtx(ZSTD_DCtx* dctx)
{
    dctx->expected = ZSTD_frameHeaderSize_min;
    dctx->stage = ZSTDds_getFrameHeaderSize;
    dctx->previousDstEnd = NULL;
    dctx->base = NULL;
    dctx->vBase = NULL;
    dctx->dictEnd = NULL;
    return 0;
}

static ZSTD_DCtx* ZSTD_createDCtx(void)
{
    ZSTD_DCtx* dctx = (ZSTD_DCtx*)malloc(sizeof(ZSTD_DCtx));
    if (dctx==NULL) return NULL;
    ZSTD_resetDCtx(dctx);
    return dctx;
}

static size_t ZSTD_freeDCtx(ZSTD_DCtx* dctx)
{
    free(dctx);
    return 0;
}


/* *************************************************************
*   Decompression section
***************************************************************/
/** ZSTD_decodeFrameHeader_Part1
*   decode the 1st part of the Frame Header, which tells Frame Header size.
* srcSize must be == ZSTD_frameHeaderSize_min * @return : the full size of the Frame Header */ static size_t ZSTD_decodeFrameHeader_Part1(ZSTD_DCtx* zc, const void* src, size_t srcSize) { U32 magicNumber; if (srcSize != ZSTD_frameHeaderSize_min) return ERROR(srcSize_wrong); magicNumber = MEM_readLE32(src); if (magicNumber != ZSTD_MAGICNUMBER) return ERROR(prefix_unknown); zc->headerSize = ZSTD_frameHeaderSize_min; return zc->headerSize; } static size_t ZSTD_getFrameParams(ZSTD_parameters* params, const void* src, size_t srcSize) { U32 magicNumber; if (srcSize < ZSTD_frameHeaderSize_min) return ZSTD_frameHeaderSize_max; magicNumber = MEM_readLE32(src); if (magicNumber != ZSTD_MAGICNUMBER) return ERROR(prefix_unknown); memset(params, 0, sizeof(*params)); params->windowLog = (((const BYTE*)src)[4] & 15) + ZSTD_WINDOWLOG_ABSOLUTEMIN; if ((((const BYTE*)src)[4] >> 4) != 0) return ERROR(frameParameter_unsupported); /* reserved bits */ return 0; } /** ZSTD_decodeFrameHeader_Part2 * decode the full Frame Header * srcSize must be the size provided by ZSTD_decodeFrameHeader_Part1 * @return : 0, or an error code, which can be tested using ZSTD_isError() */ static size_t ZSTD_decodeFrameHeader_Part2(ZSTD_DCtx* zc, const void* src, size_t srcSize) { size_t result; if (srcSize != zc->headerSize) return ERROR(srcSize_wrong); result = ZSTD_getFrameParams(&(zc->params), src, srcSize); if ((MEM_32bits()) && (zc->params.windowLog > 25)) return ERROR(frameParameter_unsupported); return result; } static size_t ZSTD_getcBlockSize(const void* src, size_t srcSize, blockProperties_t* bpPtr) { const BYTE* const in = (const BYTE* const)src; BYTE headerFlags; U32 cSize; if (srcSize < 3) return ERROR(srcSize_wrong); headerFlags = *in; cSize = in[2] + (in[1]<<8) + ((in[0] & 7)<<16); bpPtr->blockType = (blockType_t)(headerFlags >> 6); bpPtr->origSize = (bpPtr->blockType == bt_rle) ? 
cSize : 0; if (bpPtr->blockType == bt_end) return 0; if (bpPtr->blockType == bt_rle) return 1; return cSize; } static size_t ZSTD_copyRawBlock(void* dst, size_t maxDstSize, const void* src, size_t srcSize) { if (srcSize > maxDstSize) return ERROR(dstSize_tooSmall); memcpy(dst, src, srcSize); return srcSize; } /** ZSTD_decompressLiterals @return : nb of bytes read from src, or an error code*/ static size_t ZSTD_decompressLiterals(void* dst, size_t* maxDstSizePtr, const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; const size_t litSize = (MEM_readLE32(src) & 0x1FFFFF) >> 2; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ const size_t litCSize = (MEM_readLE32(ip+2) & 0xFFFFFF) >> 5; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ if (litSize > *maxDstSizePtr) return ERROR(corruption_detected); if (litCSize + 5 > srcSize) return ERROR(corruption_detected); if (HUF_isError(HUF_decompress(dst, litSize, ip+5, litCSize))) return ERROR(corruption_detected); *maxDstSizePtr = litSize; return litCSize + 5; } /** ZSTD_decodeLiteralsBlock @return : nb of bytes read from src (< srcSize ) */ static size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx, const void* src, size_t srcSize) /* note : srcSize < BLOCKSIZE */ { const BYTE* const istart = (const BYTE*) src; /* any compressed block with literals segment must be at least this size */ if (srcSize < MIN_CBLOCK_SIZE) return ERROR(corruption_detected); switch(*istart & 3) { /* compressed */ case 0: { size_t litSize = BLOCKSIZE; const size_t readSize = ZSTD_decompressLiterals(dctx->litBuffer, &litSize, src, srcSize); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, 8); return readSize; /* works if it's an error too */ } case IS_RAW: { const size_t litSize = (MEM_readLE32(istart) & 0xFFFFFF) >> 2; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ if (litSize > srcSize-11) /* risk of reading too far with wildcopy */ { if (litSize > srcSize-3) return ERROR(corruption_detected); memcpy(dctx->litBuffer, istart, litSize); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; memset(dctx->litBuffer + dctx->litSize, 0, 8); return litSize+3; } /* direct reference into compressed stream */ dctx->litPtr = istart+3; dctx->litSize = litSize; return litSize+3; } case IS_RLE: { const size_t litSize = (MEM_readLE32(istart) & 0xFFFFFF) >> 2; /* no buffer issue : srcSize >= MIN_CBLOCK_SIZE */ if (litSize > BLOCKSIZE) return ERROR(corruption_detected); memset(dctx->litBuffer, istart[3], litSize + 8); dctx->litPtr = dctx->litBuffer; dctx->litSize = litSize; return 4; } default: return ERROR(corruption_detected); /* forbidden nominal case */ } } static size_t ZSTD_decodeSeqHeaders(int* nbSeq, const BYTE** dumpsPtr, size_t* dumpsLengthPtr, FSE_DTable* DTableLL, FSE_DTable* DTableML, FSE_DTable* DTableOffb, const void* src, size_t srcSize) { const BYTE* const istart = (const BYTE* const)src; const BYTE* ip = istart; const BYTE* const iend = istart + srcSize; U32 LLtype, Offtype, MLtype; U32 LLlog, Offlog, MLlog; size_t dumpsLength; /* check */ if (srcSize < 5) return ERROR(srcSize_wrong); /* SeqHead */ *nbSeq = MEM_readLE16(ip); ip+=2; LLtype = *ip >> 6; Offtype = (*ip >> 4) & 3; MLtype = (*ip >> 2) & 3; if (*ip & 2) { dumpsLength = ip[2]; dumpsLength += ip[1] << 8; ip += 3; } else { dumpsLength = ip[1]; dumpsLength += (ip[0] & 1) << 8; ip += 2; } *dumpsPtr = ip; ip += dumpsLength; *dumpsLengthPtr = dumpsLength; /* check */ if (ip > iend-3) return ERROR(srcSize_wrong); /* min : all 3 are "raw", hence no 
header, but at least xxLog bits per type */ /* sequences */ { S16 norm[MaxML+1]; /* assumption : MaxML >= MaxLL >= MaxOff */ size_t headerSize; /* Build DTables */ switch(LLtype) { case bt_rle : LLlog = 0; FSE_buildDTable_rle(DTableLL, *ip++); break; case bt_raw : LLlog = LLbits; FSE_buildDTable_raw(DTableLL, LLbits); break; default : { U32 max = MaxLL; headerSize = FSE_readNCount(norm, &max, &LLlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (LLlog > LLFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableLL, norm, max, LLlog); } } switch(Offtype) { case bt_rle : Offlog = 0; if (ip > iend-2) return ERROR(srcSize_wrong); /* min : "raw", hence no header, but at least xxLog bits */ FSE_buildDTable_rle(DTableOffb, *ip++ & MaxOff); /* if *ip > MaxOff, data is corrupted */ break; case bt_raw : Offlog = Offbits; FSE_buildDTable_raw(DTableOffb, Offbits); break; default : { U32 max = MaxOff; headerSize = FSE_readNCount(norm, &max, &Offlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (Offlog > OffFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableOffb, norm, max, Offlog); } } switch(MLtype) { case bt_rle : MLlog = 0; if (ip > iend-2) return ERROR(srcSize_wrong); /* min : "raw", hence no header, but at least xxLog bits */ FSE_buildDTable_rle(DTableML, *ip++); break; case bt_raw : MLlog = MLbits; FSE_buildDTable_raw(DTableML, MLbits); break; default : { U32 max = MaxML; headerSize = FSE_readNCount(norm, &max, &MLlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (MLlog > MLFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableML, norm, max, MLlog); } } } return ip-istart; } typedef struct { size_t litLength; size_t offset; size_t matchLength; } seq_t; typedef struct { BIT_DStream_t DStream; FSE_DState_t stateLL; FSE_DState_t stateOffb; FSE_DState_t stateML; size_t prevOffset; const BYTE* dumps; const BYTE* dumpsEnd; } seqState_t; static void ZSTD_decodeSequence(seq_t* seq, seqState_t* seqState) { size_t litLength; size_t prevOffset; size_t offset; size_t matchLength; const BYTE* dumps = seqState->dumps; const BYTE* const de = seqState->dumpsEnd; /* Literal length */ litLength = FSE_decodeSymbol(&(seqState->stateLL), &(seqState->DStream)); prevOffset = litLength ? 
seq->offset : seqState->prevOffset; if (litLength == MaxLL) { U32 add = *dumps++; if (add < 255) litLength += add; else { litLength = dumps[0] + (dumps[1]<<8) + (dumps[2]<<16); dumps += 3; } if (dumps > de) { litLength = MaxLL+255; } /* late correction, to avoid using uninitialized memory */ if (dumps >= de) { dumps = de-1; } /* late correction, to avoid read overflow (data is now corrupted anyway) */ } /* Offset */ { static const U32 offsetPrefix[MaxOff+1] = { 1 /*fake*/, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072, 262144, 524288, 1048576, 2097152, 4194304, 8388608, 16777216, 33554432, /*fake*/ 1, 1, 1, 1, 1 }; U32 offsetCode, nbBits; offsetCode = FSE_decodeSymbol(&(seqState->stateOffb), &(seqState->DStream)); /* <= maxOff, by table construction */ if (MEM_32bits()) BIT_reloadDStream(&(seqState->DStream)); nbBits = offsetCode - 1; if (offsetCode==0) nbBits = 0; /* cmove */ offset = offsetPrefix[offsetCode] + BIT_readBits(&(seqState->DStream), nbBits); if (MEM_32bits()) BIT_reloadDStream(&(seqState->DStream)); if (offsetCode==0) offset = prevOffset; /* cmove */ if (offsetCode | !litLength) seqState->prevOffset = seq->offset; /* cmove */ } /* MatchLength */ matchLength = FSE_decodeSymbol(&(seqState->stateML), &(seqState->DStream)); if (matchLength == MaxML) { U32 add = *dumps++; if (add < 255) matchLength += add; else { matchLength = dumps[0] + (dumps[1]<<8) + (dumps[2]<<16); dumps += 3; } if (dumps > de) { matchLength = MaxML+255; } /* late correction, to avoid using uninitialized memory */ if (dumps >= de) { dumps = de-1; } /* late correction, to avoid read overflow (data is now corrupted anyway) */ } matchLength += MINMATCH; /* save result */ seq->litLength = litLength; seq->offset = offset; seq->matchLength = matchLength; seqState->dumps = dumps; } static size_t ZSTD_execSequence(BYTE* op, BYTE* const oend, seq_t sequence, const BYTE** litPtr, const BYTE* const litLimit, const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd) { static const int dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 }; /* added */ static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 }; /* substracted */ BYTE* const oLitEnd = op + sequence.litLength; const size_t sequenceLength = sequence.litLength + sequence.matchLength; BYTE* const oMatchEnd = op + sequenceLength; /* risk : address space overflow (32-bits) */ BYTE* const oend_8 = oend-8; const BYTE* const litEnd = *litPtr + sequence.litLength; const BYTE* match = oLitEnd - sequence.offset; /* check */ if (oLitEnd > oend_8) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of 8 from oend */ if (oMatchEnd > oend) return ERROR(dstSize_tooSmall); /* overwrite beyond dst buffer */ if (litEnd > litLimit) return ERROR(corruption_detected); /* risk read beyond lit buffer */ /* copy Literals */ ZSTD_wildcopy(op, *litPtr, sequence.litLength); /* note : oLitEnd <= oend-8 : no risk of overwrite beyond oend */ op = oLitEnd; *litPtr = litEnd; /* update for next sequence */ /* copy Match */ if (sequence.offset > (size_t)(oLitEnd - base)) { /* offset beyond prefix */ if (sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected); match = dictEnd - (base-match); if (match + sequence.matchLength <= dictEnd) { memmove(oLitEnd, match, sequence.matchLength); return sequenceLength; } /* span extDict & currentPrefixSegment */ { size_t length1 = dictEnd - match; memmove(oLitEnd, match, length1); op = oLitEnd + length1; sequence.matchLength -= length1; match = base; if (op 
> oend_8 || sequence.matchLength < MINMATCH) { while (op < oMatchEnd) *op++ = *match++; return sequenceLength; } } } /* Requirement: op <= oend_8 */ /* match within prefix */ if (sequence.offset < 8) { /* close range match, overlap */ const int sub2 = dec64table[sequence.offset]; op[0] = match[0]; op[1] = match[1]; op[2] = match[2]; op[3] = match[3]; match += dec32table[sequence.offset]; ZSTD_copy4(op+4, match); match -= sub2; } else { ZSTD_copy8(op, match); } op += 8; match += 8; if (oMatchEnd > oend-(16-MINMATCH)) { if (op < oend_8) { ZSTD_wildcopy(op, match, oend_8 - op); match += oend_8 - op; op = oend_8; } while (op < oMatchEnd) *op++ = *match++; } else { ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8); /* works even if matchLength < 8 */ } return sequenceLength; } static size_t ZSTD_decompressSequences( ZSTD_DCtx* dctx, void* dst, size_t maxDstSize, const void* seqStart, size_t seqSize) { const BYTE* ip = (const BYTE*)seqStart; const BYTE* const iend = ip + seqSize; BYTE* const ostart = (BYTE* const)dst; BYTE* op = ostart; BYTE* const oend = ostart + maxDstSize; size_t errorCode, dumpsLength; const BYTE* litPtr = dctx->litPtr; const BYTE* const litEnd = litPtr + dctx->litSize; int nbSeq; const BYTE* dumps; U32* DTableLL = dctx->LLTable; U32* DTableML = dctx->MLTable; U32* DTableOffb = dctx->OffTable; const BYTE* const base = (const BYTE*) (dctx->base); const BYTE* const vBase = (const BYTE*) (dctx->vBase); const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd); /* Build Decoding Tables */ errorCode = ZSTD_decodeSeqHeaders(&nbSeq, &dumps, &dumpsLength, DTableLL, DTableML, DTableOffb, ip, iend-ip); if (ZSTD_isError(errorCode)) return errorCode; ip += errorCode; /* Regen sequences */ { seq_t sequence; seqState_t seqState; memset(&sequence, 0, sizeof(sequence)); sequence.offset = 4; seqState.dumps = dumps; seqState.dumpsEnd = dumps + dumpsLength; seqState.prevOffset = 4; errorCode = BIT_initDStream(&(seqState.DStream), ip, iend-ip); if (ERR_isError(errorCode)) return ERROR(corruption_detected); FSE_initDState(&(seqState.stateLL), &(seqState.DStream), DTableLL); FSE_initDState(&(seqState.stateOffb), &(seqState.DStream), DTableOffb); FSE_initDState(&(seqState.stateML), &(seqState.DStream), DTableML); for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && nbSeq ; ) { size_t oneSeqSize; nbSeq--; ZSTD_decodeSequence(&sequence, &seqState); oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litEnd, base, vBase, dictEnd); if (ZSTD_isError(oneSeqSize)) return oneSeqSize; op += oneSeqSize; } /* check if reached exact end */ if ( !BIT_endOfDStream(&(seqState.DStream)) ) return ERROR(corruption_detected); /* DStream should be entirely and exactly consumed; otherwise data is corrupted */ /* last literal segment */ { size_t lastLLSize = litEnd - litPtr; if (litPtr > litEnd) return ERROR(corruption_detected); if (op+lastLLSize > oend) return ERROR(dstSize_tooSmall); if (op != litPtr) memcpy(op, litPtr, lastLLSize); op += lastLLSize; } } return op-ostart; } static void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst) { if (dst != dctx->previousDstEnd) /* not contiguous */ { dctx->dictEnd = dctx->previousDstEnd; dctx->vBase = (const char*)dst - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->base)); dctx->base = dst; dctx->previousDstEnd = dst; } } static size_t ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { /* blockType == blockCompressed */ const BYTE* ip = (const BYTE*)src; 
/* Decode literals sub-block */ size_t litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize); if (ZSTD_isError(litCSize)) return litCSize; ip += litCSize; srcSize -= litCSize; return ZSTD_decompressSequences(dctx, dst, maxDstSize, ip, srcSize); } static size_t ZSTD_decompress_usingDict(ZSTD_DCtx* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize, const void* dict, size_t dictSize) { const BYTE* ip = (const BYTE*)src; const BYTE* iend = ip + srcSize; BYTE* const ostart = (BYTE* const)dst; BYTE* op = ostart; BYTE* const oend = ostart + maxDstSize; size_t remainingSize = srcSize; blockProperties_t blockProperties; /* init */ ZSTD_resetDCtx(ctx); if (dict) { ZSTD_decompress_insertDictionary(ctx, dict, dictSize); ctx->dictEnd = ctx->previousDstEnd; ctx->vBase = (const char*)dst - ((const char*)(ctx->previousDstEnd) - (const char*)(ctx->base)); ctx->base = dst; } else { ctx->vBase = ctx->base = ctx->dictEnd = dst; } /* Frame Header */ { size_t frameHeaderSize; if (srcSize < ZSTD_frameHeaderSize_min+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); frameHeaderSize = ZSTD_decodeFrameHeader_Part1(ctx, src, ZSTD_frameHeaderSize_min); if (ZSTD_isError(frameHeaderSize)) return frameHeaderSize; if (srcSize < frameHeaderSize+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); ip += frameHeaderSize; remainingSize -= frameHeaderSize; frameHeaderSize = ZSTD_decodeFrameHeader_Part2(ctx, src, frameHeaderSize); if (ZSTD_isError(frameHeaderSize)) return frameHeaderSize; } /* Loop on each block */ while (1) { size_t decodedSize=0; size_t cBlockSize = ZSTD_getcBlockSize(ip, iend-ip, &blockProperties); if (ZSTD_isError(cBlockSize)) return cBlockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); switch(blockProperties.blockType) { case bt_compressed: decodedSize = ZSTD_decompressBlock_internal(ctx, op, oend-op, ip, cBlockSize); break; case bt_raw : decodedSize = ZSTD_copyRawBlock(op, oend-op, ip, cBlockSize); break; case bt_rle : return ERROR(GENERIC); /* not yet supported */ break; case bt_end : /* end of frame */ if (remainingSize) return ERROR(srcSize_wrong); break; default: return ERROR(GENERIC); /* impossible */ } if (cBlockSize == 0) break; /* bt_end */ if (ZSTD_isError(decodedSize)) return decodedSize; op += decodedSize; ip += cBlockSize; remainingSize -= cBlockSize; } return op-ostart; } static size_t ZSTD_findFrameCompressedSize(const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; size_t remainingSize = srcSize; blockProperties_t blockProperties; /* Frame Header */ if (srcSize < ZSTD_frameHeaderSize_min) return ERROR(srcSize_wrong); if (MEM_readLE32(src) != ZSTD_MAGICNUMBER) return ERROR(prefix_unknown); ip += ZSTD_frameHeaderSize_min; remainingSize -= ZSTD_frameHeaderSize_min; /* Loop on each block */ while (1) { size_t cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties); if (ZSTD_isError(cBlockSize)) return cBlockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (cBlockSize > remainingSize) return ERROR(srcSize_wrong); if (cBlockSize == 0) break; /* bt_end */ ip += cBlockSize; remainingSize -= cBlockSize; } return ip - (const BYTE*)src; } /* ****************************** * Streaming Decompression API ********************************/ static size_t ZSTD_nextSrcSizeToDecompress(ZSTD_DCtx* dctx) { return dctx->expected; } static size_t ZSTD_decompressContinue(ZSTD_DCtx* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { /* 
Sanity check */ if (srcSize != ctx->expected) return ERROR(srcSize_wrong); ZSTD_checkContinuity(ctx, dst); /* Decompress : frame header; part 1 */ switch (ctx->stage) { case ZSTDds_getFrameHeaderSize : /* get frame header size */ if (srcSize != ZSTD_frameHeaderSize_min) return ERROR(srcSize_wrong); /* impossible */ ctx->headerSize = ZSTD_decodeFrameHeader_Part1(ctx, src, ZSTD_frameHeaderSize_min); if (ZSTD_isError(ctx->headerSize)) return ctx->headerSize; memcpy(ctx->headerBuffer, src, ZSTD_frameHeaderSize_min); if (ctx->headerSize > ZSTD_frameHeaderSize_min) return ERROR(GENERIC); /* impossible */ ctx->expected = 0; /* not necessary to copy more */ /* fallthrough */ case ZSTDds_decodeFrameHeader: /* get frame header */ { size_t const result = ZSTD_decodeFrameHeader_Part2(ctx, ctx->headerBuffer, ctx->headerSize); if (ZSTD_isError(result)) return result; ctx->expected = ZSTD_blockHeaderSize; ctx->stage = ZSTDds_decodeBlockHeader; return 0; } case ZSTDds_decodeBlockHeader: /* Decode block header */ { blockProperties_t bp; size_t const blockSize = ZSTD_getcBlockSize(src, ZSTD_blockHeaderSize, &bp); if (ZSTD_isError(blockSize)) return blockSize; if (bp.blockType == bt_end) { ctx->expected = 0; ctx->stage = ZSTDds_getFrameHeaderSize; } else { ctx->expected = blockSize; ctx->bType = bp.blockType; ctx->stage = ZSTDds_decompressBlock; } return 0; } case ZSTDds_decompressBlock: { /* Decompress : block content */ size_t rSize; switch(ctx->bType) { case bt_compressed: rSize = ZSTD_decompressBlock_internal(ctx, dst, maxDstSize, src, srcSize); break; case bt_raw : rSize = ZSTD_copyRawBlock(dst, maxDstSize, src, srcSize); break; case bt_rle : return ERROR(GENERIC); /* not yet handled */ break; case bt_end : /* should never happen (filtered at phase 1) */ rSize = 0; break; default: return ERROR(GENERIC); } ctx->stage = ZSTDds_decodeBlockHeader; ctx->expected = ZSTD_blockHeaderSize; ctx->previousDstEnd = (char*)dst + rSize; return rSize; } default: return ERROR(GENERIC); /* impossible */ } } static void ZSTD_decompress_insertDictionary(ZSTD_DCtx* ctx, const void* dict, size_t dictSize) { ctx->dictEnd = ctx->previousDstEnd; ctx->vBase = (const char*)dict - ((const char*)(ctx->previousDstEnd) - (const char*)(ctx->base)); ctx->base = dict; ctx->previousDstEnd = (const char*)dict + dictSize; } /* Buffered version of Zstd compression library Copyright (C) 2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c */ /* The objects defined into this file should be considered experimental. * They are not labelled stable, as their prototype may change in the future. * You can use them for tests, provide feedback, or if you can endure risk of future changes. */ /* ************************************* * Includes ***************************************/ #include /** ************************************************ * Streaming decompression * * A ZBUFF_DCtx object is required to track streaming operation. * Use ZBUFF_createDCtx() and ZBUFF_freeDCtx() to create/release resources. * Use ZBUFF_decompressInit() to start a new decompression operation. * ZBUFF_DCtx objects can be reused multiple times. * * Use ZBUFF_decompressContinue() repetitively to consume your input. * *srcSizePtr and *maxDstSizePtr can be any size. * The function will report how many bytes were read or written by modifying *srcSizePtr and *maxDstSizePtr. * Note that it may not consume the entire input, in which case it's up to the caller to call again the function with remaining input. * The content of dst will be overwritten (up to *maxDstSizePtr) at each function call, so save its content if it matters or change dst . * return : a hint to preferred nb of bytes to use as input for next function call (it's only a hint, to improve latency) * or 0 when a frame is completely decoded * or an error code, which can be tested using ZBUFF_isError(). * * Hint : recommended buffer sizes (not compulsory) * output : 128 KB block size is the internal unit, it ensures it's always possible to write a full block when it's decoded. * input : just follow indications from ZBUFF_decompressContinue() to minimize latency. It should always be <= 128 KB + 3 . 
* **************************************************/ typedef enum { ZBUFFds_init, ZBUFFds_readHeader, ZBUFFds_loadHeader, ZBUFFds_decodeHeader, ZBUFFds_read, ZBUFFds_load, ZBUFFds_flush } ZBUFF_dStage; /* *** Resource management *** */ #define ZSTD_frameHeaderSize_max 5 /* too magical, should come from reference */ struct ZBUFFv04_DCtx_s { ZSTD_DCtx* zc; ZSTD_parameters params; char* inBuff; size_t inBuffSize; size_t inPos; char* outBuff; size_t outBuffSize; size_t outStart; size_t outEnd; size_t hPos; const char* dict; size_t dictSize; ZBUFF_dStage stage; unsigned char headerBuffer[ZSTD_frameHeaderSize_max]; }; /* typedef'd to ZBUFF_DCtx within "zstd_buffered.h" */ typedef ZBUFFv04_DCtx ZBUFF_DCtx; static ZBUFF_DCtx* ZBUFF_createDCtx(void) { ZBUFF_DCtx* zbc = (ZBUFF_DCtx*)malloc(sizeof(ZBUFF_DCtx)); if (zbc==NULL) return NULL; memset(zbc, 0, sizeof(*zbc)); zbc->zc = ZSTD_createDCtx(); zbc->stage = ZBUFFds_init; return zbc; } static size_t ZBUFF_freeDCtx(ZBUFF_DCtx* zbc) { if (zbc==NULL) return 0; /* support free on null */ ZSTD_freeDCtx(zbc->zc); free(zbc->inBuff); free(zbc->outBuff); free(zbc); return 0; } /* *** Initialization *** */ static size_t ZBUFF_decompressInit(ZBUFF_DCtx* zbc) { zbc->stage = ZBUFFds_readHeader; zbc->hPos = zbc->inPos = zbc->outStart = zbc->outEnd = zbc->dictSize = 0; return ZSTD_resetDCtx(zbc->zc); } static size_t ZBUFF_decompressWithDictionary(ZBUFF_DCtx* zbc, const void* src, size_t srcSize) { zbc->dict = (const char*)src; zbc->dictSize = srcSize; return 0; } static size_t ZBUFF_limitCopy(void* dst, size_t maxDstSize, const void* src, size_t srcSize) { size_t length = MIN(maxDstSize, srcSize); memcpy(dst, src, length); return length; } /* *** Decompression *** */ static size_t ZBUFF_decompressContinue(ZBUFF_DCtx* zbc, void* dst, size_t* maxDstSizePtr, const void* src, size_t* srcSizePtr) { const char* const istart = (const char*)src; const char* ip = istart; const char* const iend = istart + *srcSizePtr; char* const ostart = (char*)dst; char* op = ostart; char* const oend = ostart + *maxDstSizePtr; U32 notDone = 1; while (notDone) { switch(zbc->stage) { case ZBUFFds_init : return ERROR(init_missing); case ZBUFFds_readHeader : /* read header from src */ { size_t const headerSize = ZSTD_getFrameParams(&(zbc->params), src, *srcSizePtr); if (ZSTD_isError(headerSize)) return headerSize; if (headerSize) { /* not enough input to decode header : tell how many bytes would be necessary */ memcpy(zbc->headerBuffer+zbc->hPos, src, *srcSizePtr); zbc->hPos += *srcSizePtr; *maxDstSizePtr = 0; zbc->stage = ZBUFFds_loadHeader; return headerSize - zbc->hPos; } zbc->stage = ZBUFFds_decodeHeader; break; } case ZBUFFds_loadHeader: /* complete header from src */ { size_t headerSize = ZBUFF_limitCopy( zbc->headerBuffer + zbc->hPos, ZSTD_frameHeaderSize_max - zbc->hPos, src, *srcSizePtr); zbc->hPos += headerSize; ip += headerSize; headerSize = ZSTD_getFrameParams(&(zbc->params), zbc->headerBuffer, zbc->hPos); if (ZSTD_isError(headerSize)) return headerSize; if (headerSize) { /* not enough input to decode header : tell how many bytes would be necessary */ *maxDstSizePtr = 0; return headerSize - zbc->hPos; } } /* intentional fallthrough */ case ZBUFFds_decodeHeader: /* apply header to create / resize buffers */ { size_t const neededOutSize = (size_t)1 << zbc->params.windowLog; size_t const neededInSize = BLOCKSIZE; /* a block is never > BLOCKSIZE */ if (zbc->inBuffSize < neededInSize) { free(zbc->inBuff); zbc->inBuffSize = neededInSize; zbc->inBuff = (char*)malloc(neededInSize); if 
(zbc->inBuff == NULL) return ERROR(memory_allocation);
                }
                if (zbc->outBuffSize < neededOutSize)
                {
                    free(zbc->outBuff);
                    zbc->outBuffSize = neededOutSize;
                    zbc->outBuff = (char*)malloc(neededOutSize);
                    if (zbc->outBuff == NULL) return ERROR(memory_allocation);
                }
            }
            if (zbc->dictSize)
                ZSTD_decompress_insertDictionary(zbc->zc, zbc->dict, zbc->dictSize);
            if (zbc->hPos)
            {
                /* some data already loaded into headerBuffer : transfer into inBuff */
                memcpy(zbc->inBuff, zbc->headerBuffer, zbc->hPos);
                zbc->inPos = zbc->hPos;
                zbc->hPos = 0;
                zbc->stage = ZBUFFds_load;
                break;
            }
            zbc->stage = ZBUFFds_read;
            /* fall-through */
        case ZBUFFds_read:
            {
                size_t neededInSize = ZSTD_nextSrcSizeToDecompress(zbc->zc);
                if (neededInSize==0)   /* end of frame */
                {
                    zbc->stage = ZBUFFds_init;
                    notDone = 0;
                    break;
                }
                if ((size_t)(iend-ip) >= neededInSize)
                {
                    /* directly decode from src */
                    size_t decodedSize = ZSTD_decompressContinue(zbc->zc,
                        zbc->outBuff + zbc->outStart, zbc->outBuffSize - zbc->outStart,
                        ip, neededInSize);
                    if (ZSTD_isError(decodedSize)) return decodedSize;
                    ip += neededInSize;
                    if (!decodedSize) break;   /* this was just a header */
                    zbc->outEnd = zbc->outStart + decodedSize;
                    zbc->stage = ZBUFFds_flush;
                    break;
                }
                if (ip==iend) { notDone = 0; break; }   /* no more input */
                zbc->stage = ZBUFFds_load;
            }
            /* fall-through */
        case ZBUFFds_load:
            {
                size_t neededInSize = ZSTD_nextSrcSizeToDecompress(zbc->zc);
                size_t toLoad = neededInSize - zbc->inPos;   /* should always be <= remaining space within inBuff */
                size_t loadedSize;
                if (toLoad > zbc->inBuffSize - zbc->inPos) return ERROR(corruption_detected);   /* should never happen */
                loadedSize = ZBUFF_limitCopy(zbc->inBuff + zbc->inPos, toLoad, ip, iend-ip);
                ip += loadedSize;
                zbc->inPos += loadedSize;
                if (loadedSize < toLoad) { notDone = 0; break; }   /* not enough input, wait for more */
                {
                    size_t decodedSize = ZSTD_decompressContinue(zbc->zc,
                        zbc->outBuff + zbc->outStart, zbc->outBuffSize - zbc->outStart,
                        zbc->inBuff, neededInSize);
                    if (ZSTD_isError(decodedSize)) return decodedSize;
                    zbc->inPos = 0;   /* input is consumed */
                    if (!decodedSize) { zbc->stage = ZBUFFds_read; break; }   /* this was just a header */
                    zbc->outEnd = zbc->outStart + decodedSize;
                    zbc->stage = ZBUFFds_flush;
                    /* ZBUFFds_flush follows */
                }
            }
            /* fall-through */
        case ZBUFFds_flush:
            {
                size_t toFlushSize = zbc->outEnd - zbc->outStart;
                size_t flushedSize = ZBUFF_limitCopy(op, oend-op, zbc->outBuff + zbc->outStart, toFlushSize);
                op += flushedSize;
                zbc->outStart += flushedSize;
                if (flushedSize == toFlushSize)
                {
                    zbc->stage = ZBUFFds_read;
                    if (zbc->outStart + BLOCKSIZE > zbc->outBuffSize)
                        zbc->outStart = zbc->outEnd = 0;
                    break;
                }
                /* cannot flush everything */
                notDone = 0;
                break;
            }
        default: return ERROR(GENERIC);   /* impossible */
        }
    }

    *srcSizePtr = ip-istart;
    *maxDstSizePtr = op-ostart;

    {
        size_t nextSrcSizeHint = ZSTD_nextSrcSizeToDecompress(zbc->zc);
        if (nextSrcSizeHint > 3) nextSrcSizeHint+= 3;   /* get the next block header while at it */
        nextSrcSizeHint -= zbc->inPos;   /* already loaded*/
        return nextSrcSizeHint;
    }
}


/* *************************************
*  Tool functions
***************************************/
unsigned ZBUFFv04_isError(size_t errorCode) { return ERR_isError(errorCode); }
const char* ZBUFFv04_getErrorName(size_t errorCode) { return ERR_getErrorName(errorCode); }

size_t ZBUFFv04_recommendedDInSize()  { return BLOCKSIZE + 3; }
size_t ZBUFFv04_recommendedDOutSize() { return BLOCKSIZE; }
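/* Illustrative streaming loop (a sketch only, not from the upstream source :
 * `fillInput`/`consumeOutput` are hypothetical callbacks, error handling is
 * minimal, and a real caller must re-feed any input bytes the function did not
 * consume). It follows the contract documented above : feed input repeatedly,
 * flush the produced output after each call, and stop on error or when 0 is
 * returned (frame completely decoded) :
 *
 *     ZBUFF_DCtx* zbd = ZBUFF_createDCtx();
 *     char in[BLOCKSIZE + 3];                      // recommended input size (see above)
 *     char out[BLOCKSIZE];                         // recommended output size (see above)
 *     ZBUFF_decompressInit(zbd);
 *     for (;;) {
 *         size_t inSize  = fillInput(in, sizeof(in));
 *         size_t outSize = sizeof(out);
 *         size_t hint = ZBUFF_decompressContinue(zbd, out, &outSize, in, &inSize);
 *         // on return : inSize = bytes read from 'in', outSize = bytes written to 'out'
 *         consumeOutput(out, outSize);
 *         if (ZBUFFv04_isError(hint) || hint == 0) break;   // error, or frame done
 *     }
 *     ZBUFF_freeDCtx(zbd);
 */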
ZSTDv04_decompressDCtx(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize)
{
    return ZSTD_decompress_usingDict(dctx, dst, maxDstSize, src, srcSize, NULL, 0);
}

size_t ZSTDv04_decompress(void* dst, size_t maxDstSize, const void* src, size_t srcSize)
{
#if defined(ZSTD_HEAPMODE) && (ZSTD_HEAPMODE==1)
    size_t regenSize;
    ZSTD_DCtx* dctx = ZSTD_createDCtx();
    if (dctx==NULL) return ERROR(memory_allocation);
    regenSize = ZSTDv04_decompressDCtx(dctx, dst, maxDstSize, src, srcSize);
    ZSTD_freeDCtx(dctx);
    return regenSize;
#else
    ZSTD_DCtx dctx;
    return ZSTDv04_decompressDCtx(&dctx, dst, maxDstSize, src, srcSize);
#endif
}

size_t ZSTDv04_findFrameCompressedSize(const void* src, size_t srcSize)
{
    return ZSTD_findFrameCompressedSize(src, srcSize);
}

size_t ZSTDv04_resetDCtx(ZSTDv04_Dctx* dctx) { return ZSTD_resetDCtx(dctx); }

size_t ZSTDv04_nextSrcSizeToDecompress(ZSTDv04_Dctx* dctx)
{
    return ZSTD_nextSrcSizeToDecompress(dctx);
}

size_t ZSTDv04_decompressContinue(ZSTDv04_Dctx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize)
{
    return ZSTD_decompressContinue(dctx, dst, maxDstSize, src, srcSize);
}

ZBUFFv04_DCtx* ZBUFFv04_createDCtx(void) { return ZBUFF_createDCtx(); }
size_t ZBUFFv04_freeDCtx(ZBUFFv04_DCtx* dctx) { return ZBUFF_freeDCtx(dctx); }

size_t ZBUFFv04_decompressInit(ZBUFFv04_DCtx* dctx) { return ZBUFF_decompressInit(dctx); }
size_t ZBUFFv04_decompressWithDictionary(ZBUFFv04_DCtx* dctx, const void* src, size_t srcSize)
{
    return ZBUFF_decompressWithDictionary(dctx, src, srcSize);
}

size_t ZBUFFv04_decompressContinue(ZBUFFv04_DCtx* dctx, void* dst, size_t* maxDstSizePtr, const void* src, size_t* srcSizePtr)
{
    return ZBUFF_decompressContinue(dctx, dst, maxDstSizePtr, src, srcSizePtr);
}

ZSTD_DCtx* ZSTDv04_createDCtx(void) { return ZSTD_createDCtx(); }
size_t ZSTDv04_freeDCtx(ZSTD_DCtx* dctx) { return ZSTD_freeDCtx(dctx); }

size_t ZSTDv04_getFrameParams(ZSTD_parameters* params, const void* src, size_t srcSize)
{
    return ZSTD_getFrameParams(params, src, srcSize);
}

borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v01.c

/*
 * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under both the BSD-style license (found in the
 * LICENSE file in the root directory of this source tree) and the GPLv2 (found
 * in the COPYING file in the root directory of this source tree).
 * You may select, at your option, one of the above-listed licenses.
 */

/******************************************
*  Includes
******************************************/
#include <stddef.h>    /* size_t, ptrdiff_t */
#include "zstd_v01.h"
#include "error_private.h"


/******************************************
*  Static allocation
******************************************/
/* You can statically allocate FSE CTable/DTable as a table of unsigned using below macro */
#define FSE_DTABLE_SIZE_U32(maxTableLog)   (1 + (1<<maxTableLog))

/* You can statically allocate Huff0 DTable as a table of unsigned short using below macro */
#define HUF_DTABLE_SIZE_U16(maxTableLog)   (1 + (1<<maxTableLog))
#define HUF_CREATE_STATIC_DTABLE(DTable, maxTableLog) \
        unsigned short DTable[HUF_DTABLE_SIZE_U16(maxTableLog)] = { maxTableLog }


/******************************************
*  Error Management
******************************************/
#define FSE_LIST_ERRORS(ITEM) \
        ITEM(FSE_OK_NoError) ITEM(FSE_ERROR_GENERIC) \
        ITEM(FSE_ERROR_tableLog_tooLarge) ITEM(FSE_ERROR_maxSymbolValue_tooLarge) ITEM(FSE_ERROR_maxSymbolValue_tooSmall) \
        ITEM(FSE_ERROR_dstSize_tooSmall) ITEM(FSE_ERROR_srcSize_wrong) \
        ITEM(FSE_ERROR_corruptionDetected) \
        ITEM(FSE_ERROR_maxCode)

#define FSE_GENERATE_ENUM(ENUM) ENUM,
typedef enum { FSE_LIST_ERRORS(FSE_GENERATE_ENUM) } FSE_errorCodes;   /* enum is exposed, to detect & handle specific errors; compare function result to -enum value */


/******************************************
*  FSE types (decompression side)
******************************************/
typedef unsigned FSE_DTable;

typedef struct
{
    size_t bitContainer;
    unsigned bitsConsumed;
    const char* ptr;
    const char* start;
} FSE_DStream_t;

typedef struct
{
    size_t      state;
    const void* table;   /* precise table may vary, depending on U16 */
} FSE_DState_t;

typedef enum { FSE_DStream_unfinished = 0,
               FSE_DStream_endOfBuffer = 1,
               FSE_DStream_completed = 2,
               FSE_DStream_tooFar = 3 } FSE_DStream_status;


/****************************************************************
*  Tuning parameters
****************************************************************/
/* FSE_MAX_MEMORY_USAGE :
*  Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
*  Increasing memory usage improves compression ratio
*  Reduced memory usage can improve speed, due to cache effect
*  Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */
#define FSE_MAX_MEMORY_USAGE 14
#define FSE_DEFAULT_MEMORY_USAGE 13
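/*
 * Editor's illustrative sketch (not part of the upstream zstd sources):
 * the FSE_DTABLE_SIZE_U32 macro in the static-allocation section above lets
 * a caller place a decoding table on the stack instead of the heap. For
 * instance, for a 10-bit table:
 *
 *     U32 dTableSpace[FSE_DTABLE_SIZE_U32(10)];   // 1 header cell + 2^10 entries
 *
 * The dctx_t structure defined later in this file uses exactly this pattern
 * for its LLTable / OffTable / MLTable members.
 */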
/* FSE_MAX_SYMBOL_VALUE :
*  Maximum symbol value authorized.
*  Required for proper stack allocation */
#define FSE_MAX_SYMBOL_VALUE 255

/****************************************************************
*  template functions type & suffix
****************************************************************/
#define FSE_FUNCTION_TYPE BYTE
#define FSE_FUNCTION_EXTENSION

/****************************************************************
*  Byte symbol type
****************************************************************/
typedef struct
{
    unsigned short newState;
    unsigned char  symbol;
    unsigned char  nbBits;
} FSE_decode_t;   /* size == U32 */

/****************************************************************
*  Compiler specifics
****************************************************************/
#ifdef _MSC_VER    /* Visual Studio */
#  define FORCE_INLINE static __forceinline
#  include <intrin.h>                    /* For Visual 2005 */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4214)        /* disable: C4214: non-int bitfields */
#else
#  define GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__)
#  if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
#    ifdef __GNUC__
#      define FORCE_INLINE static inline __attribute__((always_inline))
#    else
#      define FORCE_INLINE static inline
#    endif
#  else
#    define FORCE_INLINE static
#  endif /* __STDC_VERSION__ */
#endif

/****************************************************************
*  Includes
****************************************************************/
#include <stdlib.h>    /* malloc, free, qsort */
#include <string.h>    /* memcpy, memset */
#include <stdio.h>     /* printf (debug) */

#ifndef MEM_ACCESS_MODULE
#define MEM_ACCESS_MODULE
/****************************************************************
*  Basic Types
*****************************************************************/
#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
# include <stdint.h>
typedef  uint8_t BYTE;
typedef uint16_t U16;
typedef  int16_t S16;
typedef uint32_t U32;
typedef  int32_t S32;
typedef uint64_t U64;
typedef  int64_t S64;
#else
typedef unsigned char       BYTE;
typedef unsigned short      U16;
typedef   signed short      S16;
typedef unsigned int        U32;
typedef   signed int        S32;
typedef unsigned long long  U64;
typedef   signed long long  S64;
#endif
#endif /* MEM_ACCESS_MODULE */

/****************************************************************
*  Memory I/O
*****************************************************************/
/* FSE_FORCE_MEMORY_ACCESS
 * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable.
 * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal.
 * The below switch allow to select different access method for improved performance.
 * Method 0 (default) : use `memcpy()`. Safe and portable.
 * Method 1 : `__packed` statement. It depends on compiler extension (ie, not portable).
 *            This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`.
 * Method 2 : direct access. This method is portable but violate C standard.
 *            It can generate buggy code on targets generating assembly depending on alignment.
 *            But in some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6)
 * See http://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details.
 * Prefer these methods in priority order (0 > 1 > 2)
 */
#ifndef FSE_FORCE_MEMORY_ACCESS   /* can be defined externally, on command line for example */
#  if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) )
#    define FSE_FORCE_MEMORY_ACCESS 2
#  elif (defined(__INTEL_COMPILER) && !defined(WIN32)) || \
  (defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) ))
#    define FSE_FORCE_MEMORY_ACCESS 1
#  endif
#endif

static unsigned FSE_32bits(void)
{
    return sizeof(void*)==4;
}

static unsigned FSE_isLittleEndian(void)
{
    const union { U32 i; BYTE c[4]; } one = { 1 };   /* don't use static : performance detrimental */
    return one.c[0];
}

#if defined(FSE_FORCE_MEMORY_ACCESS) && (FSE_FORCE_MEMORY_ACCESS==2)

static U16 FSE_read16(const void* memPtr) { return *(const U16*) memPtr; }
static U32 FSE_read32(const void* memPtr) { return *(const U32*) memPtr; }
static U64 FSE_read64(const void* memPtr) { return *(const U64*) memPtr; }

#elif defined(FSE_FORCE_MEMORY_ACCESS) && (FSE_FORCE_MEMORY_ACCESS==1)

/* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */
/* currently only defined for gcc and icc */
typedef union { U16 u16; U32 u32; U64 u64; } __attribute__((packed)) unalign;

static U16 FSE_read16(const void* ptr) { return ((const unalign*)ptr)->u16; }
static U32 FSE_read32(const void* ptr) { return ((const unalign*)ptr)->u32; }
static U64 FSE_read64(const void* ptr) { return ((const unalign*)ptr)->u64; }

#else

static U16 FSE_read16(const void* memPtr)
{
    U16 val; memcpy(&val, memPtr, sizeof(val)); return val;
}

static U32 FSE_read32(const void* memPtr)
{
    U32 val; memcpy(&val, memPtr, sizeof(val)); return val;
}

static U64 FSE_read64(const void* memPtr)
{
    U64 val; memcpy(&val, memPtr, sizeof(val)); return val;
}

#endif // FSE_FORCE_MEMORY_ACCESS

static U16 FSE_readLE16(const void* memPtr)
{
    if (FSE_isLittleEndian())
        return FSE_read16(memPtr);
    else
    {
        const BYTE* p = (const BYTE*)memPtr;
        return (U16)(p[0] + (p[1]<<8));
    }
}

static U32 FSE_readLE32(const void* memPtr)
{
    if (FSE_isLittleEndian())
        return FSE_read32(memPtr);
    else
    {
        const BYTE* p = (const BYTE*)memPtr;
        return (U32)((U32)p[0] + ((U32)p[1]<<8) + ((U32)p[2]<<16) + ((U32)p[3]<<24));
    }
}

static U64 FSE_readLE64(const void* memPtr)
{
    if (FSE_isLittleEndian())
        return FSE_read64(memPtr);
    else
    {
        const BYTE* p = (const BYTE*)memPtr;
        return (U64)((U64)p[0] + ((U64)p[1]<<8) + ((U64)p[2]<<16) + ((U64)p[3]<<24)
                     + ((U64)p[4]<<32) + ((U64)p[5]<<40) + ((U64)p[6]<<48) + ((U64)p[7]<<56));
    }
}

static size_t FSE_readLEST(const void* memPtr)
{
    if (FSE_32bits())
        return (size_t)FSE_readLE32(memPtr);
    else
        return (size_t)FSE_readLE64(memPtr);
}

/****************************************************************
*  Constants
*****************************************************************/
#define FSE_MAX_TABLELOG  (FSE_MAX_MEMORY_USAGE-2)
#define FSE_MAX_TABLESIZE (1U<<FSE_MAX_TABLELOG)
#define FSE_MAXTABLESIZE_MASK (FSE_MAX_TABLESIZE-1)
#define FSE_DEFAULT_TABLELOG (FSE_DEFAULT_MEMORY_USAGE-2)
#define FSE_MIN_TABLELOG 5

#define FSE_TABLELOG_ABSOLUTE_MAX 15
#if FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX
#  error "FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX is not supported"
#endif

/****************************************************************
*  Error Management
****************************************************************/
#define FSE_STATIC_ASSERT(c) { enum { FSE_static_assert = 1/(int)(!!(c)) }; }   /* use only *after* variable declarations */
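/*
 * Editor's illustrative sketch (not part of the upstream zstd sources):
 * the helpers above make multi-byte reads independent of host byte order
 * and alignment. For example:
 *
 *     const BYTE buf[4] = { 0x01, 0x02, 0x03, 0x04 };
 *     U32 v = FSE_readLE32(buf);   // v == 0x04030201 on any platform
 *
 * With the default method 0, the memcpy() inside FSE_read32() typically
 * compiles down to a single (possibly unaligned) load instruction.
 */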
/****************************************************************
*  Complex types
****************************************************************/
typedef struct
{
    int  deltaFindState;
    U32  deltaNbBits;
} FSE_symbolCompressionTransform;   /* total 8 bytes */

typedef U32 DTable_max_t[FSE_DTABLE_SIZE_U32(FSE_MAX_TABLELOG)];

/****************************************************************
*  Internal functions
****************************************************************/
FORCE_INLINE unsigned FSE_highbit32 (register U32 val)
{
#   if defined(_MSC_VER)   /* Visual */
    unsigned long r;
    _BitScanReverse ( &r, val );
    return (unsigned) r;
#   elif defined(__GNUC__) && (GCC_VERSION >= 304)   /* GCC Intrinsic */
    return 31 - __builtin_clz (val);
#   else   /* Software version */
    static const unsigned DeBruijnClz[32] = { 0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31 };
    U32 v = val;
    unsigned r;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    r = DeBruijnClz[ (U32) (v * 0x07C4ACDDU) >> 27];
    return r;
#   endif
}

/****************************************************************
*  Templates
****************************************************************/
/*
  designed to be included
  for type-specific functions (template emulation in C)
  Objective is to write these functions only once, for improved maintenance
*/

/* safety checks */
#ifndef FSE_FUNCTION_EXTENSION
#  error "FSE_FUNCTION_EXTENSION must be defined"
#endif
#ifndef FSE_FUNCTION_TYPE
#  error "FSE_FUNCTION_TYPE must be defined"
#endif

/* Function names */
#define FSE_CAT(X,Y) X##Y
#define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y)
#define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y)

static U32 FSE_tableStep(U32 tableSize) { return (tableSize>>1) + (tableSize>>3) + 3; }

#define FSE_DECODE_TYPE FSE_decode_t

typedef struct { U16 tableLog; U16 fastMode; } FSE_DTableHeader;   /* sizeof U32 */

static size_t FSE_buildDTable
(FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog)
{
    void* ptr = dt;
    FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr;
    FSE_DECODE_TYPE* const tableDecode = (FSE_DECODE_TYPE*)(ptr) + 1;   /* because dt is unsigned, 32-bits aligned on 32-bits */
    const U32 tableSize = 1 << tableLog;
    const U32 tableMask = tableSize-1;
    const U32 step = FSE_tableStep(tableSize);
    U16 symbolNext[FSE_MAX_SYMBOL_VALUE+1];
    U32 position = 0;
    U32 highThreshold = tableSize-1;
    const S16 largeLimit= (S16)(1 << (tableLog-1));
    U32 noLarge = 1;
    U32 s;

    /* Sanity Checks */
    if (maxSymbolValue > FSE_MAX_SYMBOL_VALUE) return (size_t)-FSE_ERROR_maxSymbolValue_tooLarge;
    if (tableLog > FSE_MAX_TABLELOG) return (size_t)-FSE_ERROR_tableLog_tooLarge;

    /* Init, lay down lowprob symbols */
    DTableH[0].tableLog = (U16)tableLog;
    for (s=0; s<=maxSymbolValue; s++)
    {
        if (normalizedCounter[s]==-1)
        {
            tableDecode[highThreshold--].symbol = (FSE_FUNCTION_TYPE)s;
            symbolNext[s] = 1;
        }
        else
        {
            if (normalizedCounter[s] >= largeLimit) noLarge=0;
            symbolNext[s] = normalizedCounter[s];
        }
    }

    /* Spread symbols */
    for (s=0; s<=maxSymbolValue; s++)
    {
        int i;
        for (i=0; i<normalizedCounter[s]; i++)
        {
            tableDecode[position].symbol = (FSE_FUNCTION_TYPE)s;
            position = (position + step) & tableMask;
            while (position > highThreshold) position = (position + step) & tableMask;   /* lowprob area */
        }
    }

    if (position!=0) return (size_t)-FSE_ERROR_GENERIC;   /* position must reach all cells once, otherwise normalizedCounter is incorrect */

    /* Build Decoding table */
    {
        U32 i;
        for (i=0; i<tableSize; i++)
        {
            FSE_FUNCTION_TYPE symbol = (FSE_FUNCTION_TYPE)(tableDecode[i].symbol);
            U16 nextState = symbolNext[symbol]++;
            tableDecode[i].nbBits = (BYTE) (tableLog - FSE_highbit32 ((U32)nextState) );
            tableDecode[i].newState = (U16) ( (nextState << tableDecode[i].nbBits) - tableSize);
        }
    }

    DTableH->fastMode = (U16)noLarge;
    return 0;
}

/******************************************
*  FSE byte symbol
******************************************/
#ifndef FSE_COMMONDEFS_ONLY
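/*
 * Editor's worked example (not part of the upstream zstd sources): for
 * normalizedCounter = {2, 1, 1} and tableLog = 2, FSE_buildDTable() above
 * spreads the symbols over a 4-cell table using step (4>>1)+(4>>3)+3 = 5,
 * so cells 0..3 receive symbols 0, 0, 1, 2. Symbol 0 (2 occurrences) then
 * decodes with nbBits = 2 - FSE_highbit32(2) = 1, while symbols 1 and 2
 * each consume the full tableLog = 2 bits per decoded symbol.
 */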
static unsigned FSE_isError(size_t code) { return (code > (size_t)(-FSE_ERROR_maxCode)); }

static short FSE_abs(short a) { return a<0? -a : a; }

/****************************************************************
*  Header bitstream management
****************************************************************/
static size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr,
                              const void* headerBuffer, size_t hbSize)
{
    const BYTE* const istart = (const BYTE*) headerBuffer;
    const BYTE* const iend = istart + hbSize;
    const BYTE* ip = istart;
    int nbBits;
    int remaining;
    int threshold;
    U32 bitStream;
    int bitCount;
    unsigned charnum = 0;
    int previous0 = 0;

    if (hbSize < 4) return (size_t)-FSE_ERROR_srcSize_wrong;
    bitStream = FSE_readLE32(ip);
    nbBits = (bitStream & 0xF) + FSE_MIN_TABLELOG;   /* extract tableLog */
    if (nbBits > FSE_TABLELOG_ABSOLUTE_MAX) return (size_t)-FSE_ERROR_tableLog_tooLarge;
    bitStream >>= 4;
    bitCount = 4;
    *tableLogPtr = nbBits;
    remaining = (1<<nbBits)+1;
    threshold = 1<<nbBits;
    nbBits++;

    while ((remaining>1) && (charnum<=*maxSVPtr))
    {
        if (previous0)
        {
            unsigned n0 = charnum;
            while ((bitStream & 0xFFFF) == 0xFFFF)
            {
                n0+=24;
                if (ip < iend-5)
                {
                    ip+=2;
                    bitStream = FSE_readLE32(ip) >> bitCount;
                }
                else
                {
                    bitStream >>= 16;
                    bitCount+=16;
                }
            }
            while ((bitStream & 3) == 3)
            {
                n0+=3;
                bitStream>>=2;
                bitCount+=2;
            }
            n0 += bitStream & 3;
            bitCount += 2;
            if (n0 > *maxSVPtr) return (size_t)-FSE_ERROR_maxSymbolValue_tooSmall;
            while (charnum < n0) normalizedCounter[charnum++] = 0;
            if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4))
            {
                ip += bitCount>>3;
                bitCount &= 7;
                bitStream = FSE_readLE32(ip) >> bitCount;
            }
            else
                bitStream >>= 2;
        }
        {
            const short max = (short)((2*threshold-1)-remaining);
            short count;

            if ((bitStream & (threshold-1)) < (U32)max)
            {
                count = (short)(bitStream & (threshold-1));
                bitCount += nbBits-1;
            }
            else
            {
                count = (short)(bitStream & (2*threshold-1));
                if (count >= threshold) count -= max;
                bitCount += nbBits;
            }

            count--;   /* extra accuracy */
            remaining -= FSE_abs(count);
            normalizedCounter[charnum++] = count;
            previous0 = !count;
            while (remaining < threshold)
            {
                nbBits--;
                threshold >>= 1;
            }

            {
                if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4))
                {
                    ip += bitCount>>3;
                    bitCount &= 7;
                }
                else
                {
                    bitCount -= (int)(8 * (iend - 4 - ip));
                    ip = iend - 4;
                }
                bitStream = FSE_readLE32(ip) >> (bitCount & 31);
            }
        }
    }
    if (remaining != 1) return (size_t)-FSE_ERROR_GENERIC;
    *maxSVPtr = charnum-1;

    ip += (bitCount+7)>>3;
    if ((size_t)(ip-istart) > hbSize) return (size_t)-FSE_ERROR_srcSize_wrong;
    return ip-istart;
}

/*********************************************************
*  Decompression (Byte symbols)
*********************************************************/
static size_t FSE_buildDTable_rle (FSE_DTable* dt, BYTE symbolValue)
{
    void* ptr = dt;
    FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr;
    FSE_decode_t* const cell = (FSE_decode_t*)(ptr) + 1;   /* because dt is unsigned */

    DTableH->tableLog = 0;
    DTableH->fastMode = 0;

    cell->newState = 0;
    cell->symbol = symbolValue;
    cell->nbBits = 0;

    return 0;
}

static size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits)
{
    void* ptr = dt;
    FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr;
    FSE_decode_t* const dinfo = (FSE_decode_t*)(ptr) + 1;   /* because dt is unsigned */
    const unsigned tableSize = 1 << nbBits;
    const unsigned tableMask = tableSize - 1;
    const unsigned maxSymbolValue = tableMask;
    unsigned s;

    /* Sanity checks */
    if (nbBits < 1) return (size_t)-FSE_ERROR_GENERIC;   /* min size */

    /* Build Decoding Table */
    DTableH->tableLog = (U16)nbBits;
    DTableH->fastMode = 1;
    for (s=0; s<=maxSymbolValue; s++)
    {
dinfo[s].newState = 0; dinfo[s].symbol = (BYTE)s; dinfo[s].nbBits = (BYTE)nbBits; } return 0; } /* FSE_initDStream * Initialize a FSE_DStream_t. * srcBuffer must point at the beginning of an FSE block. * The function result is the size of the FSE_block (== srcSize). * If srcSize is too small, the function will return an errorCode; */ static size_t FSE_initDStream(FSE_DStream_t* bitD, const void* srcBuffer, size_t srcSize) { if (srcSize < 1) return (size_t)-FSE_ERROR_srcSize_wrong; if (srcSize >= sizeof(size_t)) { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = (const char*)srcBuffer + srcSize - sizeof(size_t); bitD->bitContainer = FSE_readLEST(bitD->ptr); contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return (size_t)-FSE_ERROR_GENERIC; /* stop bit not present */ bitD->bitsConsumed = 8 - FSE_highbit32(contain32); } else { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = bitD->start; bitD->bitContainer = *(const BYTE*)(bitD->start); switch(srcSize) { case 7: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[6]) << (sizeof(size_t)*8 - 16); case 6: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[5]) << (sizeof(size_t)*8 - 24); case 5: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[4]) << (sizeof(size_t)*8 - 32); case 4: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[3]) << 24; case 3: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[2]) << 16; case 2: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[1]) << 8; default:; } contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return (size_t)-FSE_ERROR_GENERIC; /* stop bit not present */ bitD->bitsConsumed = 8 - FSE_highbit32(contain32); bitD->bitsConsumed += (U32)(sizeof(size_t) - srcSize)*8; } return srcSize; } /*!FSE_lookBits * Provides next n bits from the bitContainer. * bitContainer is not modified (bits are still present for next read/look) * On 32-bits, maxNbBits==25 * On 64-bits, maxNbBits==57 * return : value extracted. */ static size_t FSE_lookBits(FSE_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return ((bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> 1) >> ((bitMask-nbBits) & bitMask); } static size_t FSE_lookBitsFast(FSE_DStream_t* bitD, U32 nbBits) /* only if nbBits >= 1 !! */ { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return (bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> (((bitMask+1)-nbBits) & bitMask); } static void FSE_skipBits(FSE_DStream_t* bitD, U32 nbBits) { bitD->bitsConsumed += nbBits; } /*!FSE_readBits * Read next n bits from the bitContainer. * On 32-bits, don't read more than maxNbBits==25 * On 64-bits, don't read more than maxNbBits==57 * Use the fast variant *only* if n >= 1. * return : value extracted. */ static size_t FSE_readBits(FSE_DStream_t* bitD, U32 nbBits) { size_t value = FSE_lookBits(bitD, nbBits); FSE_skipBits(bitD, nbBits); return value; } static size_t FSE_readBitsFast(FSE_DStream_t* bitD, U32 nbBits) /* only if nbBits >= 1 !! 
*/
{
    size_t value = FSE_lookBitsFast(bitD, nbBits);
    FSE_skipBits(bitD, nbBits);
    return value;
}

static unsigned FSE_reloadDStream(FSE_DStream_t* bitD)
{
    if (bitD->bitsConsumed > (sizeof(bitD->bitContainer)*8))   /* should never happen */
        return FSE_DStream_tooFar;

    if (bitD->ptr >= bitD->start + sizeof(bitD->bitContainer))
    {
        bitD->ptr -= bitD->bitsConsumed >> 3;
        bitD->bitsConsumed &= 7;
        bitD->bitContainer = FSE_readLEST(bitD->ptr);
        return FSE_DStream_unfinished;
    }
    if (bitD->ptr == bitD->start)
    {
        if (bitD->bitsConsumed < sizeof(bitD->bitContainer)*8) return FSE_DStream_endOfBuffer;
        return FSE_DStream_completed;
    }
    {
        U32 nbBytes = bitD->bitsConsumed >> 3;
        U32 result = FSE_DStream_unfinished;
        if (bitD->ptr - nbBytes < bitD->start)
        {
            nbBytes = (U32)(bitD->ptr - bitD->start);   /* ptr > start */
            result = FSE_DStream_endOfBuffer;
        }
        bitD->ptr -= nbBytes;
        bitD->bitsConsumed -= nbBytes*8;
        bitD->bitContainer = FSE_readLEST(bitD->ptr);   /* reminder : srcSize > sizeof(bitD) */
        return result;
    }
}

static void FSE_initDState(FSE_DState_t* DStatePtr, FSE_DStream_t* bitD, const FSE_DTable* dt)
{
    const void* ptr = dt;
    const FSE_DTableHeader* const DTableH = (const FSE_DTableHeader*)ptr;
    DStatePtr->state = FSE_readBits(bitD, DTableH->tableLog);
    FSE_reloadDStream(bitD);
    DStatePtr->table = dt + 1;
}

static BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, FSE_DStream_t* bitD)
{
    const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
    const U32 nbBits = DInfo.nbBits;
    BYTE symbol = DInfo.symbol;
    size_t lowBits = FSE_readBits(bitD, nbBits);

    DStatePtr->state = DInfo.newState + lowBits;
    return symbol;
}

static BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, FSE_DStream_t* bitD)
{
    const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state];
    const U32 nbBits = DInfo.nbBits;
    BYTE symbol = DInfo.symbol;
    size_t lowBits = FSE_readBitsFast(bitD, nbBits);

    DStatePtr->state = DInfo.newState + lowBits;
    return symbol;
}

/* FSE_endOfDStream
   Tells if bitD has reached end of bitStream or not */
static unsigned FSE_endOfDStream(const FSE_DStream_t* bitD)
{
    return ((bitD->ptr == bitD->start) && (bitD->bitsConsumed == sizeof(bitD->bitContainer)*8));
}

static unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr)
{
    return DStatePtr->state == 0;
}

FORCE_INLINE size_t FSE_decompress_usingDTable_generic(
          void* dst, size_t maxDstSize,
    const void* cSrc, size_t cSrcSize,
    const FSE_DTable* dt, const unsigned fast)
{
    BYTE* const ostart = (BYTE*) dst;
    BYTE* op = ostart;
    BYTE* const omax = op + maxDstSize;
    BYTE* const olimit = omax-3;

    FSE_DStream_t bitD;
    FSE_DState_t state1;
    FSE_DState_t state2;
    size_t errorCode;

    /* Init */
    errorCode = FSE_initDStream(&bitD, cSrc, cSrcSize);   /* replaced last arg by maxCompressed Size */
    if (FSE_isError(errorCode)) return errorCode;

    FSE_initDState(&state1, &bitD, dt);
    FSE_initDState(&state2, &bitD, dt);

#define FSE_GETSYMBOL(statePtr) fast ? FSE_decodeSymbolFast(statePtr, &bitD) : FSE_decodeSymbol(statePtr, &bitD)

    /* 4 symbols per loop */
    for ( ; (FSE_reloadDStream(&bitD)==FSE_DStream_unfinished) && (op<olimit) ; op+=4)
    {
        op[0] = FSE_GETSYMBOL(&state1);

        if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8)    /* This test must be static */
            FSE_reloadDStream(&bitD);

        op[1] = FSE_GETSYMBOL(&state2);

        if (FSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8)    /* This test must be static */
        {
            if (FSE_reloadDStream(&bitD) > FSE_DStream_unfinished) { op+=2; break; }
        }

        op[2] = FSE_GETSYMBOL(&state1);

        if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8)    /* This test must be static */
            FSE_reloadDStream(&bitD);

        op[3] = FSE_GETSYMBOL(&state2);
    }

    /* tail */
    /* note : FSE_reloadDStream(&bitD) >= FSE_DStream_partiallyFilled; Ends at exactly FSE_DStream_completed */
    while (1)
    {
        if ( (FSE_reloadDStream(&bitD)>FSE_DStream_completed) || (op==omax) || (FSE_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state1))) )
            break;

        *op++ = FSE_GETSYMBOL(&state1);

        if ( (FSE_reloadDStream(&bitD)>FSE_DStream_completed) || (op==omax) || (FSE_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state2))) )
            break;

        *op++ = FSE_GETSYMBOL(&state2);
    }

    /* end ? */
    if (FSE_endOfDStream(&bitD) && FSE_endOfDState(&state1) && FSE_endOfDState(&state2))
        return op-ostart;

    if (op==omax) return (size_t)-FSE_ERROR_dstSize_tooSmall;   /* dst buffer is full, but cSrc unfinished */

    return (size_t)-FSE_ERROR_corruptionDetected;
}

static size_t FSE_decompress_usingDTable(void* dst, size_t originalSize,
                                         const void* cSrc, size_t cSrcSize,
                                         const FSE_DTable* dt)
{
    FSE_DTableHeader DTableH;
    memcpy(&DTableH, dt, sizeof(DTableH));   /* memcpy() into local variable, to avoid strict aliasing warning */

    /* select fast mode (static) */
    if (DTableH.fastMode) return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1);
    return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0);
}

static size_t FSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize)
{
    const BYTE* const istart = (const BYTE*)cSrc;
    const BYTE* ip = istart;
    short counting[FSE_MAX_SYMBOL_VALUE+1];
    DTable_max_t dt;   /* Static analyzer seems unable to understand this table will be properly initialized later */
    unsigned tableLog;
    unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE;
    size_t errorCode;

    if (cSrcSize<2) return (size_t)-FSE_ERROR_srcSize_wrong;   /* too small input size */

    /* normal FSE decoding mode */
    errorCode = FSE_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize);
    if (FSE_isError(errorCode)) return errorCode;
    if (errorCode >= cSrcSize) return (size_t)-FSE_ERROR_srcSize_wrong;   /* too small input size */
    ip += errorCode;
    cSrcSize -= errorCode;

    errorCode = FSE_buildDTable (dt, counting, maxSymbolValue, tableLog);
    if (FSE_isError(errorCode)) return errorCode;

    /* always return, even if it is an error code */
    return FSE_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt);
}
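/*
 * Editor's usage sketch (not part of the upstream zstd sources): a raw FSE
 * block is decoded in three steps, which is exactly what FSE_decompress()
 * above wires together:
 *
 *     short counts[FSE_MAX_SYMBOL_VALUE+1];
 *     unsigned maxSym = FSE_MAX_SYMBOL_VALUE, tableLog;
 *     DTable_max_t dt;
 *     size_t hSize = FSE_readNCount(counts, &maxSym, &tableLog, src, srcSize);
 *     // check FSE_isError(hSize) before continuing
 *     FSE_buildDTable(dt, counts, maxSym, tableLog);
 *     FSE_decompress_usingDTable(dst, dstCapacity, (const BYTE*)src + hSize, srcSize - hSize, dt);
 *
 * src/srcSize/dst/dstCapacity are placeholder names for caller buffers.
 */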
/* *******************************************************
*  Huff0 : Huffman block compression
*********************************************************/
#define HUF_MAX_SYMBOL_VALUE 255
#define HUF_DEFAULT_TABLELOG  12       /* used by default, when not specified */
#define HUF_MAX_TABLELOG  12           /* max possible tableLog; for allocation purpose; can be modified */
#define HUF_ABSOLUTEMAX_TABLELOG  16   /* absolute limit of HUF_MAX_TABLELOG. Beyond that value, code does not work */
#if (HUF_MAX_TABLELOG > HUF_ABSOLUTEMAX_TABLELOG)
#  error "HUF_MAX_TABLELOG is too large !"
#endif

typedef struct HUF_CElt_s {
    U16  val;
    BYTE nbBits;
} HUF_CElt;

typedef struct nodeElt_s {
    U32 count;
    U16 parent;
    BYTE byte;
    BYTE nbBits;
} nodeElt;

/* *******************************************************
*  Huff0 : Huffman block decompression
*********************************************************/
typedef struct {
    BYTE byte;
    BYTE nbBits;
} HUF_DElt;

static size_t HUF_readDTable (U16* DTable, const void* src, size_t srcSize)
{
    BYTE huffWeight[HUF_MAX_SYMBOL_VALUE + 1];
    U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1];   /* large enough for values from 0 to 16 */
    U32 weightTotal;
    U32 maxBits;
    const BYTE* ip = (const BYTE*) src;
    size_t iSize;
    size_t oSize;
    U32 n;
    U32 nextRankStart;
    void* ptr = DTable+1;
    HUF_DElt* const dt = (HUF_DElt*)ptr;

    if (!srcSize) return (size_t)-FSE_ERROR_srcSize_wrong;
    iSize = ip[0];

    FSE_STATIC_ASSERT(sizeof(HUF_DElt) == sizeof(U16));   /* if compilation fails here, assertion is false */
    //memset(huffWeight, 0, sizeof(huffWeight));   /* should not be necessary, but some analyzer complain ... */
    if (iSize >= 128)   /* special header */
    {
        if (iSize >= (242))   /* RLE */
        {
            static int l[14] = { 1, 2, 3, 4, 7, 8, 15, 16, 31, 32, 63, 64, 127, 128 };
            oSize = l[iSize-242];
            memset(huffWeight, 1, sizeof(huffWeight));
            iSize = 0;
        }
        else   /* Incompressible */
        {
            oSize = iSize - 127;
            iSize = ((oSize+1)/2);
            if (iSize+1 > srcSize) return (size_t)-FSE_ERROR_srcSize_wrong;
            ip += 1;
            for (n=0; n<oSize; n+=2)
            {
                huffWeight[n]   = ip[n/2] >> 4;
                huffWeight[n+1] = ip[n/2] & 15;
            }
        }
    }
    else   /* header compressed with FSE (normal case) */
    {
        if (iSize+1 > srcSize) return (size_t)-FSE_ERROR_srcSize_wrong;
        oSize = FSE_decompress(huffWeight, HUF_MAX_SYMBOL_VALUE, ip+1, iSize);   /* max 255 values decoded, last one is implied */
        if (FSE_isError(oSize)) return oSize;
    }

    /* collect weight stats */
    memset(rankVal, 0, sizeof(rankVal));
    weightTotal = 0;
    for (n=0; n<oSize; n++)
    {
        if (huffWeight[n] >= HUF_ABSOLUTEMAX_TABLELOG) return (size_t)-FSE_ERROR_corruptionDetected;
        rankVal[huffWeight[n]]++;
        weightTotal += (1 << huffWeight[n]) >> 1;
    }
    if (weightTotal == 0) return (size_t)-FSE_ERROR_corruptionDetected;

    /* get last non-null symbol weight (implied, total must be 2^n) */
    maxBits = FSE_highbit32(weightTotal) + 1;
    if (maxBits > DTable[0]) return (size_t)-FSE_ERROR_tableLog_tooLarge;   /* DTable is too small */
    DTable[0] = (U16)maxBits;
    {
        U32 total = 1 << maxBits;
        U32 rest = total - weightTotal;
        U32 verif = 1 << FSE_highbit32(rest);
        U32 lastWeight = FSE_highbit32(rest) + 1;
        if (verif != rest) return (size_t)-FSE_ERROR_corruptionDetected;   /* last value must be a clean power of 2 */
        huffWeight[oSize] = (BYTE)lastWeight;
        rankVal[lastWeight]++;
    }

    /* check tree construction validity */
    if ((rankVal[1] < 2) || (rankVal[1] & 1)) return (size_t)-FSE_ERROR_corruptionDetected;   /* by construction : at least 2 elts of rank 1, must be even */

    /* Prepare ranks */
    nextRankStart = 0;
    for (n=1; n<=maxBits; n++)
    {
        U32 current = nextRankStart;
        nextRankStart += (rankVal[n] << (n-1));
        rankVal[n] = current;
    }

    /* fill DTable */
    for (n=0; n<=oSize; n++)
    {
        const U32 w = huffWeight[n];
        const U32 length = (1 << w) >> 1;
        U32 i;
        HUF_DElt D;
        D.byte = (BYTE)n;
        D.nbBits = (BYTE)(maxBits + 1 - w);
        for (i = rankVal[w]; i < rankVal[w] + length; i++)
            dt[i] = D;
        rankVal[w] += length;
    }

    return iSize+1;
}

static BYTE HUF_decodeSymbol(FSE_DStream_t* Dstream, const HUF_DElt* dt, const U32 dtLog)
{
    const size_t val = FSE_lookBitsFast(Dstream, dtLog);   /* note : dtLog >= 1 */
    const BYTE c = dt[val].byte;
    FSE_skipBits(Dstream, dt[val].nbBits);
    return c;
}

static size_t HUF_decompress_usingDTable(   /* -3% slower when non static */
          void* dst,
          size_t maxDstSize,
    const void* cSrc, size_t cSrcSize,
    const U16* DTable)
{
    BYTE* const ostart = (BYTE*) dst;
    BYTE* op = ostart;
    BYTE* const omax = op + maxDstSize;
    BYTE* const olimit = omax-15;

    const void* ptr = DTable;
    const HUF_DElt* const dt = (const HUF_DElt*)(ptr)+1;
    const U32 dtLog = DTable[0];
    size_t errorCode;
    U32 reloadStatus;

    /* Init */
    const U16* jumpTable = (const U16*)cSrc;
    const size_t length1 = FSE_readLE16(jumpTable);
    const size_t length2 = FSE_readLE16(jumpTable+1);
    const size_t length3 = FSE_readLE16(jumpTable+2);
    const size_t length4 = cSrcSize - 6 - length1 - length2 - length3;   // check coherency !!
    const char* const start1 = (const char*)(cSrc) + 6;
    const char* const start2 = start1 + length1;
    const char* const start3 = start2 + length2;
    const char* const start4 = start3 + length3;
    FSE_DStream_t bitD1, bitD2, bitD3, bitD4;

    if (length1+length2+length3+6 >= cSrcSize) return (size_t)-FSE_ERROR_srcSize_wrong;

    errorCode = FSE_initDStream(&bitD1, start1, length1);
    if (FSE_isError(errorCode)) return errorCode;
    errorCode = FSE_initDStream(&bitD2, start2, length2);
    if (FSE_isError(errorCode)) return errorCode;
    errorCode = FSE_initDStream(&bitD3, start3, length3);
    if (FSE_isError(errorCode)) return errorCode;
    errorCode = FSE_initDStream(&bitD4, start4, length4);
    if (FSE_isError(errorCode)) return errorCode;

    reloadStatus=FSE_reloadDStream(&bitD2);

    /* 16 symbols per loop */
    for ( ; (reloadStatus<FSE_DStream_completed) && (op<olimit);   /* D2-3-4 are supposed to be synchronized and finish together */
        op+=16, reloadStatus = FSE_reloadDStream(&bitD2) | FSE_reloadDStream(&bitD3) | FSE_reloadDStream(&bitD4))
    {
#define HUF_DECODE_SYMBOL_0(n, Dstream) \
        op[n] = HUF_decodeSymbol(&Dstream, dt, dtLog);

#define HUF_DECODE_SYMBOL_1(n, Dstream) \
        op[n] = HUF_decodeSymbol(&Dstream, dt, dtLog); \
        if (FSE_32bits() && (HUF_MAX_TABLELOG>12)) FSE_reloadDStream(&Dstream)

#define HUF_DECODE_SYMBOL_2(n, Dstream) \
        op[n] = HUF_decodeSymbol(&Dstream, dt, dtLog); \
        if (FSE_32bits()) FSE_reloadDStream(&Dstream)

        HUF_DECODE_SYMBOL_1( 0, bitD1);
        HUF_DECODE_SYMBOL_1( 1, bitD2);
        HUF_DECODE_SYMBOL_1( 2, bitD3);
        HUF_DECODE_SYMBOL_1( 3, bitD4);
        HUF_DECODE_SYMBOL_2( 4, bitD1);
        HUF_DECODE_SYMBOL_2( 5, bitD2);
        HUF_DECODE_SYMBOL_2( 6, bitD3);
        HUF_DECODE_SYMBOL_2( 7, bitD4);
        HUF_DECODE_SYMBOL_1( 8, bitD1);
        HUF_DECODE_SYMBOL_1( 9, bitD2);
        HUF_DECODE_SYMBOL_1(10, bitD3);
        HUF_DECODE_SYMBOL_1(11, bitD4);
        HUF_DECODE_SYMBOL_0(12, bitD1);
        HUF_DECODE_SYMBOL_0(13, bitD2);
        HUF_DECODE_SYMBOL_0(14, bitD3);
        HUF_DECODE_SYMBOL_0(15, bitD4);
    }

    if (reloadStatus!=FSE_DStream_completed)   /* not complete : some bitStream might be FSE_DStream_unfinished */
        return (size_t)-FSE_ERROR_corruptionDetected;

    /* tail */
    {
        // bitTail = bitD1;   // *much* slower : -20% !??!
        FSE_DStream_t bitTail;
        bitTail.ptr = bitD1.ptr;
        bitTail.bitsConsumed = bitD1.bitsConsumed;
        bitTail.bitContainer = bitD1.bitContainer;   // required in case of FSE_DStream_endOfBuffer
        bitTail.start = start1;
        for ( ; (FSE_reloadDStream(&bitTail) < FSE_DStream_completed) && (op<omax) ; op++)
        {
            HUF_DECODE_SYMBOL_0(0, bitTail)
        }

        if (FSE_endOfDStream(&bitTail))
            return op-ostart;
    }

    if (op==omax) return (size_t)-FSE_ERROR_dstSize_tooSmall;   /* dst buffer is full, but cSrc unfinished */

    return (size_t)-FSE_ERROR_corruptionDetected;
}

static size_t HUF_decompress (void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize)
{
    HUF_CREATE_STATIC_DTABLE(DTable, HUF_MAX_TABLELOG);
    const BYTE* ip = (const BYTE*) cSrc;
    size_t errorCode;

    errorCode = HUF_readDTable (DTable, cSrc, cSrcSize);
    if (FSE_isError(errorCode)) return errorCode;
    if (errorCode >= cSrcSize) return (size_t)-FSE_ERROR_srcSize_wrong;
    ip += errorCode;
    cSrcSize -= errorCode;

    return HUF_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, DTable);
}

#endif   /* FSE_COMMONDEFS_ONLY */
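/*
 * Editor's note on the compressed layout handled by
 * HUF_decompress_usingDTable() above: the blob starts with a 6-byte jump
 * table holding three little-endian U16 stream sizes; the fourth stream
 * takes whatever remains:
 *
 *     | len1 | len2 | len3 | stream1 | stream2 | stream3 | stream4 |
 *       2B     2B     2B     len1      len2      len3      cSrcSize-6-len1-len2-len3
 *
 * The decoder interleaves the four bitstreams, 16 symbols per iteration,
 * so several table lookups are in flight at once.
 */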
/*
    zstd - standard compression library
    Copyright (C) 2014-2015, Yann Collet.

    BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:
    * Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above
    copyright notice, this list of conditions and the following disclaimer
    in the documentation and/or other materials provided with the
    distribution.
    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    You can contact the author at :
    - zstd source repository : https://github.com/Cyan4973/zstd
    - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c
*/

/****************************************************************
*  Tuning parameters
*****************************************************************/
/* MEMORY_USAGE :
*  Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.)
*  Increasing memory usage improves compression ratio
*  Reduced memory usage can improve speed, due to cache effect */
#define ZSTD_MEMORY_USAGE 17

/**************************************
   CPU Feature Detection
**************************************/
/*
 * Automated efficient unaligned memory access detection
 * Based on known hardware architectures
 * This list will be updated thanks to feedbacks
 */
#if defined(CPU_HAS_EFFICIENT_UNALIGNED_MEMORY_ACCESS) \
    || defined(__ARM_FEATURE_UNALIGNED) \
    || defined(__i386__) || defined(__x86_64__) \
    || defined(_M_IX86) || defined(_M_X64) \
    || defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_8__) \
    || (defined(_M_ARM) && (_M_ARM >= 7))
#  define ZSTD_UNALIGNED_ACCESS 1
#else
#  define ZSTD_UNALIGNED_ACCESS 0
#endif

/********************************************************
*  Includes
*********************************************************/
#include <stdlib.h>    /* calloc */
#include <string.h>    /* memcpy, memmove */
#include <stdio.h>     /* debug : printf */

/********************************************************
*  Compiler specifics
*********************************************************/
#ifdef __AVX2__
#  include <immintrin.h>   /* AVX2 intrinsics */
#endif

#ifdef _MSC_VER    /* Visual Studio */
#  include <intrin.h>                    /* For Visual 2005 */
#  pragma warning(disable : 4127)        /* disable: C4127: conditional expression is constant */
#  pragma warning(disable : 4324)        /* disable: C4324: padded structure */
#endif

#ifndef MEM_ACCESS_MODULE
#define MEM_ACCESS_MODULE
/********************************************************
*  Basic Types
*********************************************************/
#if defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
# include <stdint.h>
typedef  uint8_t BYTE;
typedef uint16_t U16;
typedef  int16_t S16;
typedef uint32_t U32;
typedef  int32_t S32;
typedef uint64_t U64;
#else
typedef unsigned char       BYTE;
typedef unsigned short      U16;
typedef   signed short      S16;
typedef unsigned int        U32;
typedef   signed int        S32;
typedef unsigned long long  U64;
#endif
#endif /* MEM_ACCESS_MODULE */

/********************************************************
*  Constants
*********************************************************/
static const U32 ZSTD_magicNumber = 0xFD2FB51E;   /* 3rd version : seqNb header */

#define HASH_LOG (ZSTD_MEMORY_USAGE - 2)
#define HASH_TABLESIZE (1 << HASH_LOG)
#define HASH_MASK (HASH_TABLESIZE - 1)

#define KNUTH 2654435761
#define BIT7 128
#define BIT6  64
#define BIT5  32
#define BIT4  16

#define KB *(1 <<10)
#define MB *(1 <<20)
#define GB *(1U<<30)

#define BLOCKSIZE (128 KB)   /* define, for static allocation */
#define WORKPLACESIZE (BLOCKSIZE*3)
#define MINMATCH 4
#define MLbits   7
#define LLbits   6
#define Offbits  5
#define MaxML  ((1<<MLbits) - 1)
#define MaxLL  ((1<<LLbits) - 1)
#define MaxOff ((1<<Offbits)- 1)
#define LitFSELog  11
#define MLFSELog   10
#define LLFSELog   10
#define OffFSELog   9

static const size_t ZSTD_blockHeaderSize = 3;
static const size_t ZSTD_frameHeaderSize = 4;

/* *******************************************************
*  Memory operations
**********************************************************/
static unsigned ZSTD_32bits(void) { return sizeof(void*)==4; }

static unsigned ZSTD_isLittleEndian(void)
{
    const union { U32 i; BYTE c[4]; } one = { 1 };   /* don't use static : performance detrimental */
    return one.c[0];
}

static U16 ZSTD_read16(const void* p) { U16 r; memcpy(&r, p, sizeof(r)); return r; }

static U32 ZSTD_read32(const void* p) { U32 r; memcpy(&r, p, sizeof(r)); return r; }

static void ZSTD_copy4(void* dst, const void* src) { memcpy(dst, src, 4); }

static void ZSTD_copy8(void* dst, const void* src) { memcpy(dst, src, 8); }

#define COPY8(d,s)    { ZSTD_copy8(d,s); d+=8; s+=8; }

/*! ZSTD_wildcopy : custom version of memcpy(), can copy up to 7-8 bytes too many */
static void ZSTD_wildcopy(void* dst, const void* src, size_t length)
{
    const BYTE* ip = (const BYTE*)src;
    BYTE* op = (BYTE*)dst;
    BYTE* const oend = op + length;
    do COPY8(op, ip) while (op < oend);
}

static U16 ZSTD_readLE16(const void* memPtr)
{
    if (ZSTD_isLittleEndian()) return ZSTD_read16(memPtr);
    else
    {
        const BYTE* p = (const BYTE*)memPtr;
        return (U16)((U16)p[0] + ((U16)p[1]<<8));
    }
}

static U32 ZSTD_readLE32(const void* memPtr)
{
    if (ZSTD_isLittleEndian()) return ZSTD_read32(memPtr);
    else
    {
        const BYTE* p = (const BYTE*)memPtr;
        return (U32)((U32)p[0] + ((U32)p[1]<<8) + ((U32)p[2]<<16) + ((U32)p[3]<<24));
    }
}

static U32 ZSTD_readBE32(const void* memPtr)
{
    const BYTE* p = (const BYTE*)memPtr;
    return (U32)(((U32)p[0]<<24) + ((U32)p[1]<<16) + ((U32)p[2]<<8) + ((U32)p[3]<<0));
}

/* **************************************
*  Local structures
****************************************/
typedef enum { bt_compressed, bt_raw, bt_rle, bt_end } blockType_t;

typedef struct
{
    blockType_t blockType;
    U32 origSize;
} blockProperties_t;

typedef struct {
    void* buffer;
    U32*  offsetStart;
    U32*  offset;
    BYTE* offCodeStart;
    BYTE* offCode;
    BYTE* litStart;
    BYTE* lit;
    BYTE* litLengthStart;
    BYTE* litLength;
    BYTE* matchLengthStart;
    BYTE* matchLength;
    BYTE* dumpsStart;
    BYTE* dumps;
} seqStore_t;

typedef struct
{
    const BYTE* base;
    U32 current;
    U32 nextUpdate;
    seqStore_t seqStore;
#ifdef __AVX2__
    __m256i hashTable[HASH_TABLESIZE>>3];
#else
    U32 hashTable[HASH_TABLESIZE];
#endif
    BYTE buffer[WORKPLACESIZE];
} cctxi_t;

/**************************************
*  Error Management
**************************************/
/* published entry point */
unsigned ZSTDv01_isError(size_t code) { return ERR_isError(code); }

/**************************************
*  Tool functions
**************************************/
#define ZSTD_VERSION_MAJOR    0    /* for breaking interface changes  */
#define ZSTD_VERSION_MINOR    1    /* for new (non-breaking) interface capabilities */
#define ZSTD_VERSION_RELEASE  3    /* for tweaks, bug-fixes, or development */
#define ZSTD_VERSION_NUMBER (ZSTD_VERSION_MAJOR *100*100 + ZSTD_VERSION_MINOR *100 + ZSTD_VERSION_RELEASE)

/**************************************************************
*  Decompression code
**************************************************************/

size_t ZSTDv01_getcBlockSize(const void* src, size_t srcSize, blockProperties_t* bpPtr)
{
    const BYTE* const in = (const BYTE* const)src;
    BYTE headerFlags;
    U32 cSize;

    if (srcSize < 3) return ERROR(srcSize_wrong);

    headerFlags = *in;
    cSize = in[2] + (in[1]<<8) + ((in[0] & 7)<<16);

    bpPtr->blockType = (blockType_t)(headerFlags >> 6);
    bpPtr->origSize = (bpPtr->blockType == bt_rle) ? cSize : 0;

    if (bpPtr->blockType == bt_end) return 0;
    if (bpPtr->blockType == bt_rle) return 1;
    return cSize;
}

static size_t ZSTD_copyUncompressedBlock(void* dst, size_t maxDstSize, const void* src, size_t srcSize)
{
    if (srcSize > maxDstSize) return ERROR(dstSize_tooSmall);
    memcpy(dst, src, srcSize);
    return srcSize;
}

static size_t ZSTD_decompressLiterals(void* ctx,
                                      void* dst, size_t maxDstSize,
                                      const void* src, size_t srcSize)
{
    BYTE* op = (BYTE*)dst;
    BYTE* const oend = op + maxDstSize;
    const BYTE* ip = (const BYTE*)src;
    size_t errorCode;
    size_t litSize;

    /* check : minimum 2, for litSize, +1, for content */
    if (srcSize <= 3) return ERROR(corruption_detected);

    litSize = ip[1] + (ip[0]<<8);
    litSize += ((ip[-3] >> 3) & 7) << 16;   // mmmmh....
op = oend - litSize; (void)ctx; if (litSize > maxDstSize) return ERROR(dstSize_tooSmall); errorCode = HUF_decompress(op, litSize, ip+2, srcSize-2); if (FSE_isError(errorCode)) return ERROR(GENERIC); return litSize; } size_t ZSTDv01_decodeLiteralsBlock(void* ctx, void* dst, size_t maxDstSize, const BYTE** litStart, size_t* litSize, const void* src, size_t srcSize) { const BYTE* const istart = (const BYTE* const)src; const BYTE* ip = istart; BYTE* const ostart = (BYTE* const)dst; BYTE* const oend = ostart + maxDstSize; blockProperties_t litbp; size_t litcSize = ZSTDv01_getcBlockSize(src, srcSize, &litbp); if (ZSTDv01_isError(litcSize)) return litcSize; if (litcSize > srcSize - ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); ip += ZSTD_blockHeaderSize; switch(litbp.blockType) { case bt_raw: *litStart = ip; ip += litcSize; *litSize = litcSize; break; case bt_rle: { size_t rleSize = litbp.origSize; if (rleSize>maxDstSize) return ERROR(dstSize_tooSmall); if (!srcSize) return ERROR(srcSize_wrong); memset(oend - rleSize, *ip, rleSize); *litStart = oend - rleSize; *litSize = rleSize; ip++; break; } case bt_compressed: { size_t decodedLitSize = ZSTD_decompressLiterals(ctx, dst, maxDstSize, ip, litcSize); if (ZSTDv01_isError(decodedLitSize)) return decodedLitSize; *litStart = oend - decodedLitSize; *litSize = decodedLitSize; ip += litcSize; break; } case bt_end: default: return ERROR(GENERIC); } return ip-istart; } size_t ZSTDv01_decodeSeqHeaders(int* nbSeq, const BYTE** dumpsPtr, size_t* dumpsLengthPtr, FSE_DTable* DTableLL, FSE_DTable* DTableML, FSE_DTable* DTableOffb, const void* src, size_t srcSize) { const BYTE* const istart = (const BYTE* const)src; const BYTE* ip = istart; const BYTE* const iend = istart + srcSize; U32 LLtype, Offtype, MLtype; U32 LLlog, Offlog, MLlog; size_t dumpsLength; /* check */ if (srcSize < 5) return ERROR(srcSize_wrong); /* SeqHead */ *nbSeq = ZSTD_readLE16(ip); ip+=2; LLtype = *ip >> 6; Offtype = (*ip >> 4) & 3; MLtype = (*ip >> 2) & 3; if (*ip & 2) { dumpsLength = ip[2]; dumpsLength += ip[1] << 8; ip += 3; } else { dumpsLength = ip[1]; dumpsLength += (ip[0] & 1) << 8; ip += 2; } *dumpsPtr = ip; ip += dumpsLength; *dumpsLengthPtr = dumpsLength; /* check */ if (ip > iend-3) return ERROR(srcSize_wrong); /* min : all 3 are "raw", hence no header, but at least xxLog bits per type */ /* sequences */ { S16 norm[MaxML+1]; /* assumption : MaxML >= MaxLL and MaxOff */ size_t headerSize; /* Build DTables */ switch(LLtype) { case bt_rle : LLlog = 0; FSE_buildDTable_rle(DTableLL, *ip++); break; case bt_raw : LLlog = LLbits; FSE_buildDTable_raw(DTableLL, LLbits); break; default : { U32 max = MaxLL; headerSize = FSE_readNCount(norm, &max, &LLlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (LLlog > LLFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableLL, norm, max, LLlog); } } switch(Offtype) { case bt_rle : Offlog = 0; if (ip > iend-2) return ERROR(srcSize_wrong); /* min : "raw", hence no header, but at least xxLog bits */ FSE_buildDTable_rle(DTableOffb, *ip++); break; case bt_raw : Offlog = Offbits; FSE_buildDTable_raw(DTableOffb, Offbits); break; default : { U32 max = MaxOff; headerSize = FSE_readNCount(norm, &max, &Offlog, ip, iend-ip); if (FSE_isError(headerSize)) return ERROR(GENERIC); if (Offlog > OffFSELog) return ERROR(corruption_detected); ip += headerSize; FSE_buildDTable(DTableOffb, norm, max, Offlog); } } switch(MLtype) { case bt_rle : MLlog = 0; if (ip > iend-2) return ERROR(srcSize_wrong); /* min : "raw", 
hence no header, but at least xxLog bits */
            FSE_buildDTable_rle(DTableML, *ip++);
            break;
        case bt_raw :
            MLlog = MLbits;
            FSE_buildDTable_raw(DTableML, MLbits);
            break;
        default :
            {
                U32 max = MaxML;
                headerSize = FSE_readNCount(norm, &max, &MLlog, ip, iend-ip);
                if (FSE_isError(headerSize)) return ERROR(GENERIC);
                if (MLlog > MLFSELog) return ERROR(corruption_detected);
                ip += headerSize;
                FSE_buildDTable(DTableML, norm, max, MLlog);
            }
        }
    }

    return ip-istart;
}

typedef struct {
    size_t litLength;
    size_t offset;
    size_t matchLength;
} seq_t;

typedef struct {
    FSE_DStream_t DStream;
    FSE_DState_t stateLL;
    FSE_DState_t stateOffb;
    FSE_DState_t stateML;
    size_t prevOffset;
    const BYTE* dumps;
    const BYTE* dumpsEnd;
} seqState_t;

static void ZSTD_decodeSequence(seq_t* seq, seqState_t* seqState)
{
    size_t litLength;
    size_t prevOffset;
    size_t offset;
    size_t matchLength;
    const BYTE* dumps = seqState->dumps;
    const BYTE* const de = seqState->dumpsEnd;

    /* Literal length */
    litLength = FSE_decodeSymbol(&(seqState->stateLL), &(seqState->DStream));
    prevOffset = litLength ? seq->offset : seqState->prevOffset;
    seqState->prevOffset = seq->offset;
    if (litLength == MaxLL)
    {
        U32 add = dumps<de ? *dumps++ : 0;
        if (add < 255) litLength += add;
        else
        {
            if (dumps + 3 <= de)
            {
                litLength = ZSTD_readLE32(dumps) & 0xFFFFFF;   /* no pb : dumps is always followed by seq tables > 1 byte */
                dumps += 3;
            }
        }
    }

    /* Offset */
    {
        U32 offsetCode, nbBits;
        offsetCode = FSE_decodeSymbol(&(seqState->stateOffb), &(seqState->DStream));
        if (ZSTD_32bits()) FSE_reloadDStream(&(seqState->DStream));
        nbBits = offsetCode - 1;
        if (offsetCode==0) nbBits = 0;   /* cmove */
        offset = ((size_t)1 << (nbBits & ((sizeof(offset)*8)-1))) + FSE_readBits(&(seqState->DStream), nbBits);
        if (ZSTD_32bits()) FSE_reloadDStream(&(seqState->DStream));
        if (offsetCode==0) offset = prevOffset;
    }

    /* MatchLength */
    matchLength = FSE_decodeSymbol(&(seqState->stateML), &(seqState->DStream));
    if (matchLength == MaxML)
    {
        U32 add = dumps<de ? *dumps++ : 0;
        if (add < 255) matchLength += add;
        else
        {
            if (dumps + 3 <= de)
            {
                matchLength = ZSTD_readLE32(dumps) & 0xFFFFFF;   /* no pb : dumps is always followed by seq tables > 1 byte */
                dumps += 3;
            }
        }
    }
    matchLength += MINMATCH;

    /* save result */
    seq->litLength = litLength;
    seq->offset = offset;
    seq->matchLength = matchLength;
    seqState->dumps = dumps;
}
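/*
 * Editor's worked example (not part of the upstream zstd sources): a
 * decoded seq_t such as { litLength = 5, offset = 3, matchLength = 8 }
 * instructs ZSTD_execSequence() below to copy 5 bytes from the literal
 * buffer, then copy 8 bytes starting 3 bytes back in the output. Since
 * offset < matchLength, the match copy overlaps its own destination,
 * which is what the dec32table/dec64table fix-ups for offset < 8 handle.
 */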
static size_t ZSTD_execSequence(BYTE* op,
                                seq_t sequence,
                                const BYTE** litPtr, const BYTE* const litLimit,
                                BYTE* const base, BYTE* const oend)
{
    static const int dec32table[] = {0, 1, 2, 1, 4, 4, 4, 4};   /* added */
    static const int dec64table[] = {8, 8, 8, 7, 8, 9,10,11};   /* substracted */
    const BYTE* const ostart = op;
    const size_t litLength = sequence.litLength;
    BYTE* const endMatch = op + litLength + sequence.matchLength;   /* risk : address space overflow (32-bits) */
    const BYTE* const litEnd = *litPtr + litLength;

    /* check */
    if (endMatch > oend) return ERROR(dstSize_tooSmall);   /* overwrite beyond dst buffer */
    if (litEnd > litLimit) return ERROR(corruption_detected);
    if (sequence.matchLength > (size_t)(*litPtr-op)) return ERROR(dstSize_tooSmall);   /* overwrite literal segment */

    /* copy Literals */
    if (((size_t)(*litPtr - op) < 8) || ((size_t)(oend-litEnd) < 8) || (op+litLength > oend-8))
        memmove(op, *litPtr, litLength);   /* overwrite risk */
    else
        ZSTD_wildcopy(op, *litPtr, litLength);
    op += litLength;
    *litPtr = litEnd;   /* update for next sequence */

    /* check : last match must be at a minimum distance of 8 from end of dest buffer */
    if (oend-op < 8) return ERROR(dstSize_tooSmall);

    /* copy Match */
    {
        const U32 overlapRisk = (((size_t)(litEnd - endMatch)) < 12);
        const BYTE* match = op - sequence.offset;   /* possible underflow at op - offset ? */
        size_t qutt = 12;
        U64 saved[2];

        /* check */
        if (match < base) return ERROR(corruption_detected);
        if (sequence.offset > (size_t)base) return ERROR(corruption_detected);

        /* save beginning of literal sequence, in case of write overlap */
        if (overlapRisk)
        {
            if ((endMatch + qutt) > oend) qutt = oend-endMatch;
            memcpy(saved, endMatch, qutt);
        }

        if (sequence.offset < 8)
        {
            const int dec64 = dec64table[sequence.offset];
            op[0] = match[0];
            op[1] = match[1];
            op[2] = match[2];
            op[3] = match[3];
            match += dec32table[sequence.offset];
            ZSTD_copy4(op+4, match);
            match -= dec64;
        }
        else
        {
            ZSTD_copy8(op, match);
        }
        op += 8; match += 8;

        if (endMatch > oend-(16-MINMATCH))
        {
            if (op < oend-8)
            {
                ZSTD_wildcopy(op, match, (oend-8) - op);
                match += (oend-8) - op;
                op = oend-8;
            }
            while (op<endMatch) *op++ = *match++;
        }
        else
            ZSTD_wildcopy(op, match, (size_t)(endMatch-op));

        /* restore, in case of overlap */
        if (overlapRisk) memcpy(endMatch, saved, qutt);
    }

    return endMatch-ostart;
}

typedef struct ZSTDv01_Dctx_s
{
    U32 LLTable[FSE_DTABLE_SIZE_U32(LLFSELog)];
    U32 OffTable[FSE_DTABLE_SIZE_U32(OffFSELog)];
    U32 MLTable[FSE_DTABLE_SIZE_U32(MLFSELog)];
    void* previousDstEnd;
    void* base;
    size_t expected;
    blockType_t bType;
    U32 phase;
} dctx_t;

static size_t ZSTD_decompressSequences(
                               void* ctx,
                               void* dst, size_t maxDstSize,
                               const void* seqStart, size_t seqSize,
                               const BYTE* litStart, size_t litSize)
{
    dctx_t* dctx = (dctx_t*)ctx;
    const BYTE* ip = (const BYTE*)seqStart;
    const BYTE* const iend = ip + seqSize;
    BYTE* const ostart = (BYTE* const)dst;
    BYTE* op = ostart;
    BYTE* const oend = ostart + maxDstSize;
    size_t errorCode, dumpsLength;
    const BYTE* litPtr = litStart;
    const BYTE* const litEnd = litStart + litSize;
    int nbSeq;
    const BYTE* dumps;
    U32* DTableLL = dctx->LLTable;
    U32* DTableML = dctx->MLTable;
    U32* DTableOffb = dctx->OffTable;
    BYTE* const base = (BYTE*) (dctx->base);

    /* Build Decoding Tables */
    errorCode = ZSTDv01_decodeSeqHeaders(&nbSeq, &dumps, &dumpsLength,
                                         DTableLL, DTableML, DTableOffb,
                                         ip, iend-ip);
    if (ZSTDv01_isError(errorCode)) return errorCode;
    ip += errorCode;

    /* Regen sequences */
    {
        seq_t sequence;
        seqState_t seqState;

        memset(&sequence, 0, sizeof(sequence));
        seqState.dumps = dumps;
        seqState.dumpsEnd = dumps + dumpsLength;
        seqState.prevOffset = 1;
        errorCode = FSE_initDStream(&(seqState.DStream), ip, iend-ip);
        if (FSE_isError(errorCode)) return ERROR(corruption_detected);
        FSE_initDState(&(seqState.stateLL), &(seqState.DStream), DTableLL);
        FSE_initDState(&(seqState.stateOffb), &(seqState.DStream), DTableOffb);
        FSE_initDState(&(seqState.stateML), &(seqState.DStream), DTableML);

        for ( ; (FSE_reloadDStream(&(seqState.DStream)) <= FSE_DStream_completed) && (nbSeq>0) ; )
        {
            size_t oneSeqSize;
            nbSeq--;
            ZSTD_decodeSequence(&sequence, &seqState);
            oneSeqSize = ZSTD_execSequence(op, sequence, &litPtr, litEnd, base, oend);
            if (ZSTDv01_isError(oneSeqSize)) return oneSeqSize;
            op += oneSeqSize;
        }

        /* check if reached exact end */
        if ( !FSE_endOfDStream(&(seqState.DStream)) ) return ERROR(corruption_detected);   /* requested too much : data is corrupted */
        if (nbSeq<0) return ERROR(corruption_detected);   /* requested too many sequences : data is corrupted */

        /* last literal segment */
        {
            size_t lastLLSize = litEnd - litPtr;
            if (op+lastLLSize > oend) return ERROR(dstSize_tooSmall);
            if (op != litPtr) memmove(op, litPtr, lastLLSize);
            op += lastLLSize;
        }
    }

    return op-ostart;
}

static size_t ZSTD_decompressBlock(void* ctx,
                                   void* dst, size_t maxDstSize,
                                   const void* src, size_t srcSize)
{
    /* blockType == blockCompressed, srcSize is trusted */
    const BYTE* ip = (const BYTE*)src;
    const BYTE* litPtr = NULL;
    size_t litSize = 0;
    size_t errorCode;

    /* Decode literals sub-block */
    errorCode = ZSTDv01_decodeLiteralsBlock(ctx, dst, maxDstSize, &litPtr, &litSize, src, srcSize);
    if (ZSTDv01_isError(errorCode)) return errorCode;
    ip += errorCode;
    srcSize -= errorCode;

    return ZSTD_decompressSequences(ctx, dst, maxDstSize, ip, srcSize, litPtr, litSize);
}

size_t ZSTDv01_decompressDCtx(void* ctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize)
{
    const BYTE* ip = (const BYTE*)src;
    const BYTE* iend = ip + srcSize;
    BYTE* const ostart = (BYTE* const)dst;
    BYTE* op = ostart;
    BYTE* const oend = ostart + maxDstSize;
    size_t remainingSize = srcSize;
    U32 magicNumber;
    size_t errorCode=0;
    blockProperties_t blockProperties;

    /* Frame Header */
    if (srcSize < ZSTD_frameHeaderSize+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong);
    magicNumber = ZSTD_readBE32(src);
    if (magicNumber != ZSTD_magicNumber) return
ERROR(prefix_unknown); ip += ZSTD_frameHeaderSize; remainingSize -= ZSTD_frameHeaderSize; /* Loop on each block */ while (1) { size_t blockSize = ZSTDv01_getcBlockSize(ip, iend-ip, &blockProperties); if (ZSTDv01_isError(blockSize)) return blockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (blockSize > remainingSize) return ERROR(srcSize_wrong); switch(blockProperties.blockType) { case bt_compressed: errorCode = ZSTD_decompressBlock(ctx, op, oend-op, ip, blockSize); break; case bt_raw : errorCode = ZSTD_copyUncompressedBlock(op, oend-op, ip, blockSize); break; case bt_rle : return ERROR(GENERIC); /* not yet supported */ break; case bt_end : /* end of frame */ if (remainingSize) return ERROR(srcSize_wrong); break; default: return ERROR(GENERIC); } if (blockSize == 0) break; /* bt_end */ if (ZSTDv01_isError(errorCode)) return errorCode; op += errorCode; ip += blockSize; remainingSize -= blockSize; } return op-ostart; } size_t ZSTDv01_decompress(void* dst, size_t maxDstSize, const void* src, size_t srcSize) { dctx_t ctx; ctx.base = dst; return ZSTDv01_decompressDCtx(&ctx, dst, maxDstSize, src, srcSize); } size_t ZSTDv01_findFrameCompressedSize(const void* src, size_t srcSize) { const BYTE* ip = (const BYTE*)src; size_t remainingSize = srcSize; U32 magicNumber; blockProperties_t blockProperties; /* Frame Header */ if (srcSize < ZSTD_frameHeaderSize+ZSTD_blockHeaderSize) return ERROR(srcSize_wrong); magicNumber = ZSTD_readBE32(src); if (magicNumber != ZSTD_magicNumber) return ERROR(prefix_unknown); ip += ZSTD_frameHeaderSize; remainingSize -= ZSTD_frameHeaderSize; /* Loop on each block */ while (1) { size_t blockSize = ZSTDv01_getcBlockSize(ip, remainingSize, &blockProperties); if (ZSTDv01_isError(blockSize)) return blockSize; ip += ZSTD_blockHeaderSize; remainingSize -= ZSTD_blockHeaderSize; if (blockSize > remainingSize) return ERROR(srcSize_wrong); if (blockSize == 0) break; /* bt_end */ ip += blockSize; remainingSize -= blockSize; } return ip - (const BYTE*)src; } /******************************* * Streaming Decompression API *******************************/ size_t ZSTDv01_resetDCtx(ZSTDv01_Dctx* dctx) { dctx->expected = ZSTD_frameHeaderSize; dctx->phase = 0; dctx->previousDstEnd = NULL; dctx->base = NULL; return 0; } ZSTDv01_Dctx* ZSTDv01_createDCtx(void) { ZSTDv01_Dctx* dctx = (ZSTDv01_Dctx*)malloc(sizeof(ZSTDv01_Dctx)); if (dctx==NULL) return NULL; ZSTDv01_resetDCtx(dctx); return dctx; } size_t ZSTDv01_freeDCtx(ZSTDv01_Dctx* dctx) { free(dctx); return 0; } size_t ZSTDv01_nextSrcSizeToDecompress(ZSTDv01_Dctx* dctx) { return ((dctx_t*)dctx)->expected; } size_t ZSTDv01_decompressContinue(ZSTDv01_Dctx* dctx, void* dst, size_t maxDstSize, const void* src, size_t srcSize) { dctx_t* ctx = (dctx_t*)dctx; /* Sanity check */ if (srcSize != ctx->expected) return ERROR(srcSize_wrong); if (dst != ctx->previousDstEnd) /* not contiguous */ ctx->base = dst; /* Decompress : frame header */ if (ctx->phase == 0) { /* Check frame magic header */ U32 magicNumber = ZSTD_readBE32(src); if (magicNumber != ZSTD_magicNumber) return ERROR(prefix_unknown); ctx->phase = 1; ctx->expected = ZSTD_blockHeaderSize; return 0; } /* Decompress : block header */ if (ctx->phase == 1) { blockProperties_t bp; size_t blockSize = ZSTDv01_getcBlockSize(src, ZSTD_blockHeaderSize, &bp); if (ZSTDv01_isError(blockSize)) return blockSize; if (bp.blockType == bt_end) { ctx->expected = 0; ctx->phase = 0; } else { ctx->expected = blockSize; ctx->bType = bp.blockType; ctx->phase = 2; } return 0; } /* Decompress 
: block content */
    {
        size_t rSize;
        switch(ctx->bType)
        {
        case bt_compressed:
            rSize = ZSTD_decompressBlock(ctx, dst, maxDstSize, src, srcSize);
            break;
        case bt_raw :
            rSize = ZSTD_copyUncompressedBlock(dst, maxDstSize, src, srcSize);
            break;
        case bt_rle :
            return ERROR(GENERIC);   /* not yet handled */
            break;
        case bt_end :   /* should never happen (filtered at phase 1) */
            rSize = 0;
            break;
        default:
            return ERROR(GENERIC);
        }
        ctx->phase = 1;
        ctx->expected = ZSTD_blockHeaderSize;
        ctx->previousDstEnd = (void*)( ((char*)dst) + rSize);
        return rSize;
    }
}

borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v05.h

/*
 * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under both the BSD-style license (found in the
 * LICENSE file in the root directory of this source tree) and the GPLv2 (found
 * in the COPYING file in the root directory of this source tree).
 * You may select, at your option, one of the above-listed licenses.
 */

#ifndef ZSTDv05_H
#define ZSTDv05_H

#if defined (__cplusplus)
extern "C" {
#endif

/*-*************************************
*  Dependencies
***************************************/
#include <stddef.h>   /* size_t */
#include "mem.h"      /* U64, U32 */

/* *************************************
*  Simple functions
***************************************/
/*! ZSTDv05_decompress() :
    `compressedSize` : is the _exact_ size of the compressed blob, otherwise decompression will fail.
    `dstCapacity` must be large enough, equal or larger than originalSize.
    @return : the number of bytes decompressed into `dst` (<= `dstCapacity`),
              or an errorCode if it fails (which can be tested using ZSTDv05_isError()) */
size_t ZSTDv05_decompress( void* dst, size_t dstCapacity,
                     const void* src, size_t compressedSize);

/**
ZSTDv05_getFrameSrcSize() : get the source length of a ZSTD frame
    compressedSize : The size of the 'src' buffer, at least as large as the frame pointed to by 'src'
    return : the number of bytes that would be read to decompress this frame
             or an errorCode if it fails (which can be tested using ZSTDv05_isError())
*/
size_t ZSTDv05_findFrameCompressedSize(const void* src, size_t compressedSize);

/* *************************************
*  Helper functions
***************************************/
/* Error Management */
unsigned    ZSTDv05_isError(size_t code);          /*!< tells if a `size_t` function result is an error code */
const char* ZSTDv05_getErrorName(size_t code);     /*!< provides readable string for an error code */

/* *************************************
*  Explicit memory management
***************************************/
/** Decompression context */
typedef struct ZSTDv05_DCtx_s ZSTDv05_DCtx;
ZSTDv05_DCtx* ZSTDv05_createDCtx(void);
size_t ZSTDv05_freeDCtx(ZSTDv05_DCtx* dctx);       /*!< @return : errorCode */

/** ZSTDv05_decompressDCtx() :
*   Same as ZSTDv05_decompress(), but requires an already allocated ZSTDv05_DCtx (see ZSTDv05_createDCtx()) */
size_t ZSTDv05_decompressDCtx(ZSTDv05_DCtx* ctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize);

/*-***********************
*  Simple Dictionary API
*************************/
/*! ZSTDv05_decompress_usingDict() :
*   Decompression using a pre-defined Dictionary content (see dictBuilder).
*   Dictionary must be identical to the one used during compression, otherwise regenerated data will be corrupted.
* Note : dict can be NULL, in which case, it's equivalent to ZSTDv05_decompressDCtx() */ size_t ZSTDv05_decompress_usingDict(ZSTDv05_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize, const void* dict,size_t dictSize); /*-************************ * Advanced Streaming API ***************************/ typedef enum { ZSTDv05_fast, ZSTDv05_greedy, ZSTDv05_lazy, ZSTDv05_lazy2, ZSTDv05_btlazy2, ZSTDv05_opt, ZSTDv05_btopt } ZSTDv05_strategy; typedef struct { U64 srcSize; U32 windowLog; /* the only useful information to retrieve */ U32 contentLog; U32 hashLog; U32 searchLog; U32 searchLength; U32 targetLength; ZSTDv05_strategy strategy; } ZSTDv05_parameters; size_t ZSTDv05_getFrameParams(ZSTDv05_parameters* params, const void* src, size_t srcSize); size_t ZSTDv05_decompressBegin_usingDict(ZSTDv05_DCtx* dctx, const void* dict, size_t dictSize); void ZSTDv05_copyDCtx(ZSTDv05_DCtx* dstDCtx, const ZSTDv05_DCtx* srcDCtx); size_t ZSTDv05_nextSrcSizeToDecompress(ZSTDv05_DCtx* dctx); size_t ZSTDv05_decompressContinue(ZSTDv05_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize); /*-*********************** * ZBUFF API *************************/ typedef struct ZBUFFv05_DCtx_s ZBUFFv05_DCtx; ZBUFFv05_DCtx* ZBUFFv05_createDCtx(void); size_t ZBUFFv05_freeDCtx(ZBUFFv05_DCtx* dctx); size_t ZBUFFv05_decompressInit(ZBUFFv05_DCtx* dctx); size_t ZBUFFv05_decompressInitDictionary(ZBUFFv05_DCtx* dctx, const void* dict, size_t dictSize); size_t ZBUFFv05_decompressContinue(ZBUFFv05_DCtx* dctx, void* dst, size_t* dstCapacityPtr, const void* src, size_t* srcSizePtr); /*-*************************************************************************** * Streaming decompression * * A ZBUFFv05_DCtx object is required to track streaming operations. * Use ZBUFFv05_createDCtx() and ZBUFFv05_freeDCtx() to create/release resources. * Use ZBUFFv05_decompressInit() to start a new decompression operation, * or ZBUFFv05_decompressInitDictionary() if decompression requires a dictionary. * Note that ZBUFFv05_DCtx objects can be reused multiple times. * * Use ZBUFFv05_decompressContinue() repetitively to consume your input. * *srcSizePtr and *dstCapacityPtr can be any size. * The function will report how many bytes were read or written by modifying *srcSizePtr and *dstCapacityPtr. * Note that it may not consume the entire input, in which case it's up to the caller to present remaining input again. * The content of @dst will be overwritten (up to *dstCapacityPtr) at each function call, so save its content if it matters or change @dst. * @return : a hint to preferred nb of bytes to use as input for next function call (it's only a hint, to help latency) * or 0 when a frame is completely decoded * or an error code, which can be tested using ZBUFFv05_isError(). * * Hint : recommended buffer sizes (not compulsory) : ZBUFFv05_recommendedDInSize() / ZBUFFv05_recommendedDOutSize() * output : ZBUFFv05_recommendedDOutSize==128 KB block size is the internal unit, it ensures it's always possible to write a full block when decoded. * input : ZBUFFv05_recommendedDInSize==128Kb+3; just follow indications from ZBUFFv05_decompressContinue() to minimize latency. It should always be <= 128 KB + 3 . 
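*
*  Illustrative decompression loop (editorial sketch, not part of the original header;
*  read_input(), write_output(), inBuff and outBuff are placeholders supplied by the caller) :
*      ZBUFFv05_DCtx* zbd = ZBUFFv05_createDCtx();
*      size_t hint = ZBUFFv05_decompressInit(zbd);
*      while (hint != 0 && !ZBUFFv05_isError(hint))
*      {
*          size_t srcSize = read_input(inBuff, ZBUFFv05_recommendedDInSize());
*          size_t dstSize = ZBUFFv05_recommendedDOutSize();
*          hint = ZBUFFv05_decompressContinue(zbd, outBuff, &dstSize, inBuff, &srcSize);
*          write_output(outBuff, dstSize);   (dstSize now holds the number of bytes written)
*          (if srcSize is smaller than the amount read, present the remaining input again)
*      }
*      ZBUFFv05_freeDCtx(zbd);   (hint == 0 means the frame decoded completely)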
* *******************************************************************************/ /* ************************************* * Tool functions ***************************************/ unsigned ZBUFFv05_isError(size_t errorCode); const char* ZBUFFv05_getErrorName(size_t errorCode); /** Functions below provide recommended buffer sizes for Compression or Decompression operations. * These sizes are just hints, and tend to offer better latency */ size_t ZBUFFv05_recommendedDInSize(void); size_t ZBUFFv05_recommendedDOutSize(void); /*-************************************* * Constants ***************************************/ #define ZSTDv05_MAGICNUMBER 0xFD2FB525 /* v0.5 */ #if defined (__cplusplus) } #endif #endif /* ZSTDv05_H */ borgbackup-1.1.5/src/borg/algorithms/zstd/lib/legacy/zstd_v02.c0000644000175000017500000037716613257361140023467 0ustar twtw00000000000000/* * Copyright (c) 2016-present, Yann Collet, Facebook, Inc. * All rights reserved. * * This source code is licensed under both the BSD-style license (found in the * LICENSE file in the root directory of this source tree) and the GPLv2 (found * in the COPYING file in the root directory of this source tree). * You may select, at your option, one of the above-listed licenses. */ #include <stddef.h> /* size_t, ptrdiff_t */ #include "zstd_v02.h" #include "error_private.h" /****************************************** * Compiler-specific ******************************************/ #if defined(_MSC_VER) /* Visual Studio */ # include <stdlib.h> /* _byteswap_ulong */ # include <intrin.h> /* _byteswap_* */ #endif /* ****************************************************************** mem.h low-level memory access routines Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at : - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef MEM_H_MODULE #define MEM_H_MODULE #if defined (__cplusplus) extern "C" { #endif /****************************************** * Includes ******************************************/ #include <stddef.h> /* size_t, ptrdiff_t */ #include <string.h> /* memcpy */ /****************************************** * Compiler-specific ******************************************/ #if defined(__GNUC__) # define MEM_STATIC static __attribute__((unused)) #elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) # define MEM_STATIC static inline #elif defined(_MSC_VER) # define MEM_STATIC static __inline #else # define MEM_STATIC static /* this version may generate warnings for unused static functions; disable the relevant warning */ #endif /**************************************************************** * Basic Types *****************************************************************/ #if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) # include <stdint.h> typedef uint8_t BYTE; typedef uint16_t U16; typedef int16_t S16; typedef uint32_t U32; typedef int32_t S32; typedef uint64_t U64; typedef int64_t S64; #else typedef unsigned char BYTE; typedef unsigned short U16; typedef signed short S16; typedef unsigned int U32; typedef signed int S32; typedef unsigned long long U64; typedef signed long long S64; #endif /**************************************************************** * Memory I/O *****************************************************************/ /* MEM_FORCE_MEMORY_ACCESS * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable. * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal. * The below switch allows selecting a different access method for improved performance. * Method 0 (default) : use `memcpy()`. Safe and portable. * Method 1 : `__packed` statement. It depends on compiler extension (ie, not portable). * This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`. * Method 2 : direct access. This method is portable but violates the C standard. * It can generate buggy code on targets generating assembly depending on alignment. * But in some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6) * See http://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details.
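* Editorial note (not in the original source) : since the #ifndef below only provides a default,
* a method can also be forced at build time, e.g. `cc -DMEM_FORCE_MEMORY_ACCESS=1 ...`.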
* Prefer these methods in priority order (0 > 1 > 2) */ #ifndef MEM_FORCE_MEMORY_ACCESS /* can be defined externally, on command line for example */ # if defined(__GNUC__) && ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) ) # define MEM_FORCE_MEMORY_ACCESS 2 # elif (defined(__INTEL_COMPILER) && !defined(WIN32)) || \ (defined(__GNUC__) && ( defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7S__) )) # define MEM_FORCE_MEMORY_ACCESS 1 # endif #endif MEM_STATIC unsigned MEM_32bits(void) { return sizeof(void*)==4; } MEM_STATIC unsigned MEM_64bits(void) { return sizeof(void*)==8; } MEM_STATIC unsigned MEM_isLittleEndian(void) { const union { U32 u; BYTE c[4]; } one = { 1 }; /* don't use static : performance detrimental */ return one.c[0]; } #if defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==2) /* violates C standard on structure alignment. Only use if no other choice to achieve best performance on target platform */ MEM_STATIC U16 MEM_read16(const void* memPtr) { return *(const U16*) memPtr; } MEM_STATIC U32 MEM_read32(const void* memPtr) { return *(const U32*) memPtr; } MEM_STATIC U64 MEM_read64(const void* memPtr) { return *(const U64*) memPtr; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; } #elif defined(MEM_FORCE_MEMORY_ACCESS) && (MEM_FORCE_MEMORY_ACCESS==1) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U16 u16; U32 u32; U64 u64; } __attribute__((packed)) unalign; MEM_STATIC U16 MEM_read16(const void* ptr) { return ((const unalign*)ptr)->u16; } MEM_STATIC U32 MEM_read32(const void* ptr) { return ((const unalign*)ptr)->u32; } MEM_STATIC U64 MEM_read64(const void* ptr) { return ((const unalign*)ptr)->u64; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { ((unalign*)memPtr)->u16 = value; } #else /* default method, safe and standard. 
can sometimes prove slower */ MEM_STATIC U16 MEM_read16(const void* memPtr) { U16 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U32 MEM_read32(const void* memPtr) { U32 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC U64 MEM_read64(const void* memPtr) { U64 val; memcpy(&val, memPtr, sizeof(val)); return val; } MEM_STATIC void MEM_write16(void* memPtr, U16 value) { memcpy(memPtr, &value, sizeof(value)); } #endif // MEM_FORCE_MEMORY_ACCESS MEM_STATIC U16 MEM_readLE16(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read16(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U16)(p[0] + (p[1]<<8)); } } MEM_STATIC void MEM_writeLE16(void* memPtr, U16 val) { if (MEM_isLittleEndian()) { MEM_write16(memPtr, val); } else { BYTE* p = (BYTE*)memPtr; p[0] = (BYTE)val; p[1] = (BYTE)(val>>8); } } MEM_STATIC U32 MEM_readLE32(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read32(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U32)((U32)p[0] + ((U32)p[1]<<8) + ((U32)p[2]<<16) + ((U32)p[3]<<24)); } } MEM_STATIC U64 MEM_readLE64(const void* memPtr) { if (MEM_isLittleEndian()) return MEM_read64(memPtr); else { const BYTE* p = (const BYTE*)memPtr; return (U64)((U64)p[0] + ((U64)p[1]<<8) + ((U64)p[2]<<16) + ((U64)p[3]<<24) + ((U64)p[4]<<32) + ((U64)p[5]<<40) + ((U64)p[6]<<48) + ((U64)p[7]<<56)); } } MEM_STATIC size_t MEM_readLEST(const void* memPtr) { if (MEM_32bits()) return (size_t)MEM_readLE32(memPtr); else return (size_t)MEM_readLE64(memPtr); } #if defined (__cplusplus) } #endif #endif /* MEM_H_MODULE */ /* ****************************************************************** bitstream Part of NewGen Entropy library header file (to include) Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef BITSTREAM_H_MODULE #define BITSTREAM_H_MODULE #if defined (__cplusplus) extern "C" { #endif /* * This API consists of small unitary functions, which highly benefit from being inlined. 
* Since link-time-optimization is not available for all compilers, * these functions are defined into a .h to be included. */ /********************************************** * bitStream decompression API (read backward) **********************************************/ typedef struct { size_t bitContainer; unsigned bitsConsumed; const char* ptr; const char* start; } BIT_DStream_t; typedef enum { BIT_DStream_unfinished = 0, BIT_DStream_endOfBuffer = 1, BIT_DStream_completed = 2, BIT_DStream_overflow = 3 } BIT_DStream_status; /* result of BIT_reloadDStream() */ /* 1,2,4,8 would be better for bitmap combinations, but slows down performance a bit ... :( */ MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize); MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, unsigned nbBits); MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD); MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* bitD); /* * Start by invoking BIT_initDStream(). * A chunk of the bitStream is then stored into a local register. * Local register size is 64-bits on 64-bits systems, 32-bits on 32-bits systems (size_t). * You can then retrieve bitFields stored into the local register, **in reverse order**. * Local register is manually filled from memory by the BIT_reloadDStream() method. * A reload guarantee a minimum of ((8*sizeof(size_t))-7) bits when its result is BIT_DStream_unfinished. * Otherwise, it can be less than that, so proceed accordingly. * Checking if DStream has reached its end can be performed with BIT_endOfDStream() */ /****************************************** * unsafe API ******************************************/ MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, unsigned nbBits); /* faster, but works only if nbBits >= 1 */ /**************************************************************** * Helper functions ****************************************************************/ MEM_STATIC unsigned BIT_highbit32 (register U32 val) { # if defined(_MSC_VER) /* Visual */ unsigned long r=0; _BitScanReverse ( &r, val ); return (unsigned) r; # elif defined(__GNUC__) && (__GNUC__ >= 3) /* Use GCC Intrinsic */ return 31 - __builtin_clz (val); # else /* Software version */ static const unsigned DeBruijnClz[32] = { 0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30, 8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31 }; U32 v = val; unsigned r; v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v |= v >> 16; r = DeBruijnClz[ (U32) (v * 0x07C4ACDDU) >> 27]; return r; # endif } /********************************************************** * bitStream decoding **********************************************************/ /*!BIT_initDStream * Initialize a BIT_DStream_t. 
* @bitD : a pointer to an already allocated BIT_DStream_t structure * @srcBuffer must point at the beginning of a bitStream * @srcSize must be the exact size of the bitStream * @result : size of stream (== srcSize) or an errorCode if a problem is detected */ MEM_STATIC size_t BIT_initDStream(BIT_DStream_t* bitD, const void* srcBuffer, size_t srcSize) { if (srcSize < 1) { memset(bitD, 0, sizeof(*bitD)); return ERROR(srcSize_wrong); } if (srcSize >= sizeof(size_t)) /* normal case */ { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = (const char*)srcBuffer + srcSize - sizeof(size_t); bitD->bitContainer = MEM_readLEST(bitD->ptr); contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BIT_highbit32(contain32); } else { U32 contain32; bitD->start = (const char*)srcBuffer; bitD->ptr = bitD->start; bitD->bitContainer = *(const BYTE*)(bitD->start); switch(srcSize) { case 7: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[6]) << (sizeof(size_t)*8 - 16); case 6: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[5]) << (sizeof(size_t)*8 - 24); case 5: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[4]) << (sizeof(size_t)*8 - 32); case 4: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[3]) << 24; case 3: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[2]) << 16; case 2: bitD->bitContainer += (size_t)(((const BYTE*)(bitD->start))[1]) << 8; default:; } contain32 = ((const BYTE*)srcBuffer)[srcSize-1]; if (contain32 == 0) return ERROR(GENERIC); /* endMark not present */ bitD->bitsConsumed = 8 - BIT_highbit32(contain32); bitD->bitsConsumed += (U32)(sizeof(size_t) - srcSize)*8; } return srcSize; } /*!BIT_lookBits * Provides next n bits from local register * local register is not modified (bits are still present for next read/look) * On 32-bits, maxNbBits==25 * On 64-bits, maxNbBits==57 * @return : value extracted */ MEM_STATIC size_t BIT_lookBits(BIT_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return ((bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> 1) >> ((bitMask-nbBits) & bitMask); } /*! BIT_lookBitsFast : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BIT_lookBitsFast(BIT_DStream_t* bitD, U32 nbBits) { const U32 bitMask = sizeof(bitD->bitContainer)*8 - 1; return (bitD->bitContainer << (bitD->bitsConsumed & bitMask)) >> (((bitMask+1)-nbBits) & bitMask); } MEM_STATIC void BIT_skipBits(BIT_DStream_t* bitD, U32 nbBits) { bitD->bitsConsumed += nbBits; } /*!BIT_readBits * Read next n bits from local register. * pay attention to not read more than nbBits contained into local register. * @return : extracted value. 
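*
*  Illustrative sketch (editorial addition, not part of the original source;
*  the 5- and 3-bit field widths are arbitrary placeholders) :
*      BIT_DStream_t bitD;
*      size_t r = BIT_initDStream(&bitD, srcBuffer, srcSize);   (r == srcSize on success, an error code otherwise)
*      size_t a = BIT_readBits(&bitD, 5);   (fields come back in reverse of the order they were written)
*      size_t b = BIT_readBits(&bitD, 3);
*      if (BIT_reloadDStream(&bitD) == BIT_DStream_unfinished) { (keep reading) }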
*/ MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, U32 nbBits) { size_t value = BIT_lookBits(bitD, nbBits); BIT_skipBits(bitD, nbBits); return value; } /*!BIT_readBitsFast : * unsafe version; only works only if nbBits >= 1 */ MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, U32 nbBits) { size_t value = BIT_lookBitsFast(bitD, nbBits); BIT_skipBits(bitD, nbBits); return value; } MEM_STATIC BIT_DStream_status BIT_reloadDStream(BIT_DStream_t* bitD) { if (bitD->bitsConsumed > (sizeof(bitD->bitContainer)*8)) /* should never happen */ return BIT_DStream_overflow; if (bitD->ptr >= bitD->start + sizeof(bitD->bitContainer)) { bitD->ptr -= bitD->bitsConsumed >> 3; bitD->bitsConsumed &= 7; bitD->bitContainer = MEM_readLEST(bitD->ptr); return BIT_DStream_unfinished; } if (bitD->ptr == bitD->start) { if (bitD->bitsConsumed < sizeof(bitD->bitContainer)*8) return BIT_DStream_endOfBuffer; return BIT_DStream_completed; } { U32 nbBytes = bitD->bitsConsumed >> 3; BIT_DStream_status result = BIT_DStream_unfinished; if (bitD->ptr - nbBytes < bitD->start) { nbBytes = (U32)(bitD->ptr - bitD->start); /* ptr > start */ result = BIT_DStream_endOfBuffer; } bitD->ptr -= nbBytes; bitD->bitsConsumed -= nbBytes*8; bitD->bitContainer = MEM_readLEST(bitD->ptr); /* reminder : srcSize > sizeof(bitD) */ return result; } } /*! BIT_endOfDStream * @return Tells if DStream has reached its exact end */ MEM_STATIC unsigned BIT_endOfDStream(const BIT_DStream_t* DStream) { return ((DStream->ptr == DStream->start) && (DStream->bitsConsumed == sizeof(DStream->bitContainer)*8)); } #if defined (__cplusplus) } #endif #endif /* BITSTREAM_H_MODULE */ /* ****************************************************************** Error codes and messages Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef ERROR_H_MODULE #define ERROR_H_MODULE #if defined (__cplusplus) extern "C" { #endif /****************************************** * Compiler-specific ******************************************/ #if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) # define ERR_STATIC static inline #elif defined(_MSC_VER) # define ERR_STATIC static __inline #elif defined(__GNUC__) # define ERR_STATIC static __attribute__((unused)) #else # define ERR_STATIC static /* this version may generate warnings for unused static functions; disable the relevant warning */ #endif /****************************************** * Error Management ******************************************/ #define PREFIX(name) ZSTD_error_##name #define ERROR(name) (size_t)-PREFIX(name) #define ERROR_LIST(ITEM) \ ITEM(PREFIX(No_Error)) ITEM(PREFIX(GENERIC)) \ ITEM(PREFIX(dstSize_tooSmall)) ITEM(PREFIX(srcSize_wrong)) \ ITEM(PREFIX(prefix_unknown)) ITEM(PREFIX(corruption_detected)) \ ITEM(PREFIX(tableLog_tooLarge)) ITEM(PREFIX(maxSymbolValue_tooLarge)) ITEM(PREFIX(maxSymbolValue_tooSmall)) \ ITEM(PREFIX(maxCode)) #define ERROR_GENERATE_ENUM(ENUM) ENUM, typedef enum { ERROR_LIST(ERROR_GENERATE_ENUM) } ERR_codes; /* enum is exposed, to detect & handle specific errors; compare function result to -enum value */ #define ERROR_CONVERTTOSTRING(STRING) #STRING, #define ERROR_GENERATE_STRING(EXPR) ERROR_CONVERTTOSTRING(EXPR) static const char* ERR_strings[] = { ERROR_LIST(ERROR_GENERATE_STRING) }; ERR_STATIC unsigned ERR_isError(size_t code) { return (code > ERROR(maxCode)); } ERR_STATIC const char* ERR_getErrorName(size_t code) { static const char* codeError = "Unspecified error code"; if (ERR_isError(code)) return ERR_strings[-(int)(code)]; return codeError; } #if defined (__cplusplus) } #endif #endif /* ERROR_H_MODULE */ /* Constructor and Destructor of type FSE_CTable Note that its size depends on 'tableLog' and 'maxSymbolValue' */ typedef unsigned FSE_CTable; /* don't allocate that. It's just a way to be more restrictive than void* */ typedef unsigned FSE_DTable; /* don't allocate that. It's just a way to be more restrictive than void* */ /* ****************************************************************** FSE : Finite State Entropy coder header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #if defined (__cplusplus) extern "C" { #endif /****************************************** * Static allocation ******************************************/ /* FSE buffer bounds */ #define FSE_NCOUNTBOUND 512 #define FSE_BLOCKBOUND(size) (size + (size>>7)) #define FSE_COMPRESSBOUND(size) (FSE_NCOUNTBOUND + FSE_BLOCKBOUND(size)) /* Macro version, useful for static allocation */ /* You can statically allocate FSE CTable/DTable as a table of unsigned using below macro */ #define FSE_CTABLE_SIZE_U32(maxTableLog, maxSymbolValue) (1 + (1<<(maxTableLog-1)) + ((maxSymbolValue+1)*2)) #define FSE_DTABLE_SIZE_U32(maxTableLog) (1 + (1<<maxTableLog)) /****************************************** * FSE symbol decompression API ******************************************/ typedef struct { size_t state; const void* table; /* precise table may vary, depending on U16 */ } FSE_DState_t; /* Decompression works symbol by symbol : initialize the bitStream with BIT_initDStream(), retrieve the initial state(s) with FSE_initDState(), then decode symbols with FSE_decodeSymbol(), reloading the bitStream from time to time, until the loop is stopped by : BIT_reloadDStream(&DStream) >= BIT_DStream_completed When it's done, verify decompression is fully completed, by checking both DStream and the relevant states. Checking if DStream has reached its end is performed by : BIT_endOfDStream(&DStream); Check also the states. There might be some symbols left there, if some high probability ones (>50%) are possible. FSE_endOfDState(&DState); */ /****************************************** * FSE unsafe API ******************************************/ static unsigned char FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD); /* faster, but works only if nbBits is always >= 1 (otherwise, result will be corrupted) */ /****************************************** * Implementation of inline functions ******************************************/ /* decompression */ typedef struct { U16 tableLog; U16 fastMode; } FSE_DTableHeader; /* sizeof U32 */ typedef struct { unsigned short newState; unsigned char symbol; unsigned char nbBits; } FSE_decode_t; /* size == U32 */ MEM_STATIC void FSE_initDState(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD, const FSE_DTable* dt) { FSE_DTableHeader DTableH; memcpy(&DTableH, dt, sizeof(DTableH)); DStatePtr->state = BIT_readBits(bitD, DTableH.tableLog); BIT_reloadDStream(bitD); DStatePtr->table = dt + 1; } MEM_STATIC BYTE FSE_decodeSymbol(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD) { const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state]; const U32 nbBits = DInfo.nbBits; BYTE symbol = DInfo.symbol; size_t lowBits = BIT_readBits(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; return symbol; } MEM_STATIC BYTE FSE_decodeSymbolFast(FSE_DState_t* DStatePtr, BIT_DStream_t* bitD) { const FSE_decode_t DInfo = ((const FSE_decode_t*)(DStatePtr->table))[DStatePtr->state]; const U32 nbBits = DInfo.nbBits; BYTE symbol = DInfo.symbol; size_t lowBits = BIT_readBitsFast(bitD, nbBits); DStatePtr->state = DInfo.newState + lowBits; return symbol; } MEM_STATIC unsigned FSE_endOfDState(const FSE_DState_t* DStatePtr) { return DStatePtr->state == 0; } #if defined (__cplusplus) } #endif /* ****************************************************************** Huff0 : Huffman
coder, part of New Generation Entropy library header file for static linking (only) Copyright (C) 2013-2015, Yann Collet BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - Source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #if defined (__cplusplus) extern "C" { #endif /****************************************** * Static allocation macros ******************************************/ /* Huff0 buffer bounds */ #define HUF_CTABLEBOUND 129 #define HUF_BLOCKBOUND(size) (size + (size>>8) + 8) /* only true if incompressible pre-filtered with fast heuristic */ #define HUF_COMPRESSBOUND(size) (HUF_CTABLEBOUND + HUF_BLOCKBOUND(size)) /* Macro version, useful for static allocation */ /* static allocation of Huff0's DTable */ #define HUF_DTABLE_SIZE(maxTableLog) (1 + (1<<maxTableLog)) #define HUF_CREATE_STATIC_DTABLEX2(DTable, maxTableLog) unsigned short DTable[HUF_DTABLE_SIZE(maxTableLog)] = { maxTableLog } #if defined (__cplusplus) } #endif #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Includes ***************************************/ #include <stddef.h> /* size_t */ /* ************************************* * Version ***************************************/ #define ZSTD_VERSION_MAJOR 0 /* for breaking interface changes */ #define ZSTD_VERSION_MINOR 2 /* for new (non-breaking) interface capabilities */ #define ZSTD_VERSION_RELEASE 2 /* for tweaks, bug-fixes, or development */ #define ZSTD_VERSION_NUMBER (ZSTD_VERSION_MAJOR *100*100 + ZSTD_VERSION_MINOR *100 + ZSTD_VERSION_RELEASE) /* ************************************* * Advanced functions ***************************************/ typedef struct ZSTD_CCtx_s ZSTD_CCtx; /* incomplete type */ #if defined (__cplusplus) } #endif /* zstd - standard compression library Header File for static linking only Copyright (C) 2014-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - zstd source repository : https://github.com/Cyan4973/zstd - ztsd public forum : https://groups.google.com/forum/#!forum/lz4c */ /* The objects defined into this file should be considered experimental. * They are not labelled stable, as their prototype may change in the future. * You can use them for tests, provide feedback, or if you can endure risk of future changes. */ #if defined (__cplusplus) extern "C" { #endif /* ************************************* * Streaming functions ***************************************/ typedef struct ZSTD_DCtx_s ZSTD_DCtx; /* Use above functions alternatively. ZSTD_nextSrcSizeToDecompress() tells how much bytes to provide as 'srcSize' to ZSTD_decompressContinue(). ZSTD_decompressContinue() will use previous data blocks to improve compression if they are located prior to current block. Result is the number of bytes regenerated within 'dst'. It can be zero, which is not an error; it just means ZSTD_decompressContinue() has decoded some header. */ /* ************************************* * Prefix - version detection ***************************************/ #define ZSTD_magicNumber 0xFD2FB522 /* v0.2 (current)*/ #if defined (__cplusplus) } #endif /* ****************************************************************** FSE : Finite State Entropy coder Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - FSE source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ #ifndef FSE_COMMONDEFS_ONLY /**************************************************************** * Tuning parameters ****************************************************************/ /* MEMORY_USAGE : * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.) * Increasing memory usage improves compression ratio * Reduced memory usage can improve speed, due to cache effect * Recommended max value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */ #define FSE_MAX_MEMORY_USAGE 14 #define FSE_DEFAULT_MEMORY_USAGE 13 /* FSE_MAX_SYMBOL_VALUE : * Maximum symbol value authorized. * Required for proper stack allocation */ #define FSE_MAX_SYMBOL_VALUE 255 /**************************************************************** * template functions type & suffix ****************************************************************/ #define FSE_FUNCTION_TYPE BYTE #define FSE_FUNCTION_EXTENSION /**************************************************************** * Byte symbol type ****************************************************************/ #endif /* !FSE_COMMONDEFS_ONLY */ /**************************************************************** * Compiler specifics ****************************************************************/ #ifdef _MSC_VER /* Visual Studio */ # define FORCE_INLINE static __forceinline # include <intrin.h> /* For Visual 2005 */ # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ # pragma warning(disable : 4214) /* disable: C4214: non-int bitfields */ #else # if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */ # ifdef __GNUC__ # define FORCE_INLINE static inline __attribute__((always_inline)) # else # define FORCE_INLINE static inline # endif # else # define FORCE_INLINE static # endif /* __STDC_VERSION__ */ #endif /**************************************************************** * Includes ****************************************************************/ #include <stdlib.h> /* malloc, free, qsort */ #include <string.h> /* memcpy, memset */ #include <stdio.h> /* printf (debug) */ /**************************************************************** * Constants *****************************************************************/ #define FSE_MAX_TABLELOG (FSE_MAX_MEMORY_USAGE-2) #define FSE_MAX_TABLESIZE (1U<<FSE_MAX_TABLELOG) #define FSE_MAXTABLESIZE_MASK (FSE_MAX_TABLESIZE-1) #define FSE_DEFAULT_TABLELOG (FSE_DEFAULT_MEMORY_USAGE-2) #define FSE_MIN_TABLELOG 5 #define FSE_TABLELOG_ABSOLUTE_MAX 15 #if FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX #error "FSE_MAX_TABLELOG > FSE_TABLELOG_ABSOLUTE_MAX is not supported" #endif /**************************************************************** * Error Management ****************************************************************/ #define FSE_STATIC_ASSERT(c) { enum { FSE_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */ /**************************************************************** * Complex types ****************************************************************/ typedef U32 DTable_max_t[FSE_DTABLE_SIZE_U32(FSE_MAX_TABLELOG)]; /**************************************************************** * Templates ****************************************************************/ /* designed to be included for type-specific functions (template emulation in C) Objective is to write these functions only once, for improved maintenance */ /* safety checks */ #ifndef FSE_FUNCTION_EXTENSION # error "FSE_FUNCTION_EXTENSION must be defined"
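/* Editorial illustration (not in the original source) : FSE_FUNCTION_NAME() below is plain
   token pasting, e.g. FSE_FUNCTION_NAME(FSE_buildDTable, _byte) expands to FSE_buildDTable_byte;
   this is how the file emulates C++-style templates in C, one instantiation per
   FSE_FUNCTION_TYPE / FSE_FUNCTION_EXTENSION pair included into the translation unit. */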
#endif #ifndef FSE_FUNCTION_TYPE # error "FSE_FUNCTION_TYPE must be defined" #endif /* Function names */ #define FSE_CAT(X,Y) X##Y #define FSE_FUNCTION_NAME(X,Y) FSE_CAT(X,Y) #define FSE_TYPE_NAME(X,Y) FSE_CAT(X,Y) /* Function templates */ #define FSE_DECODE_TYPE FSE_decode_t static U32 FSE_tableStep(U32 tableSize) { return (tableSize>>1) + (tableSize>>3) + 3; } static size_t FSE_buildDTable (FSE_DTable* dt, const short* normalizedCounter, unsigned maxSymbolValue, unsigned tableLog) { void* ptr = dt+1; FSE_DECODE_TYPE* const tableDecode = (FSE_DECODE_TYPE*)ptr; FSE_DTableHeader DTableH; const U32 tableSize = 1 << tableLog; const U32 tableMask = tableSize-1; const U32 step = FSE_tableStep(tableSize); U16 symbolNext[FSE_MAX_SYMBOL_VALUE+1]; U32 position = 0; U32 highThreshold = tableSize-1; const S16 largeLimit= (S16)(1 << (tableLog-1)); U32 noLarge = 1; U32 s; /* Sanity Checks */ if (maxSymbolValue > FSE_MAX_SYMBOL_VALUE) return ERROR(maxSymbolValue_tooLarge); if (tableLog > FSE_MAX_TABLELOG) return ERROR(tableLog_tooLarge); /* Init, lay down lowprob symbols */ DTableH.tableLog = (U16)tableLog; for (s=0; s<=maxSymbolValue; s++) { if (normalizedCounter[s]==-1) { tableDecode[highThreshold--].symbol = (FSE_FUNCTION_TYPE)s; symbolNext[s] = 1; } else { if (normalizedCounter[s] >= largeLimit) noLarge=0; symbolNext[s] = normalizedCounter[s]; } } /* Spread symbols */ for (s=0; s<=maxSymbolValue; s++) { int i; for (i=0; i<normalizedCounter[s]; i++) { tableDecode[position].symbol = (FSE_FUNCTION_TYPE)s; position = (position + step) & tableMask; while (position > highThreshold) position = (position + step) & tableMask; /* lowprob area */ } } if (position!=0) return ERROR(GENERIC); /* position must reach all cells once, otherwise normalizedCounter is incorrect */ /* Build Decoding table */ { U32 i; for (i=0; i<tableSize; i++) { FSE_FUNCTION_TYPE symbol = (FSE_FUNCTION_TYPE)(tableDecode[i].symbol); U16 nextState = symbolNext[symbol]++; tableDecode[i].nbBits = (BYTE)(tableLog - BIT_highbit32((U32)nextState)); tableDecode[i].newState = (U16)((nextState << tableDecode[i].nbBits) - tableSize); } } DTableH.fastMode = (U16)noLarge; memcpy(dt, &DTableH, sizeof(DTableH)); return 0; } /****************************************** * FSE helper functions ******************************************/ static unsigned FSE_isError(size_t code) { return ERR_isError(code); } /**************************************************************** * FSE NCount encoding-decoding ****************************************************************/ static short FSE_abs(short a) { return a<0 ? -a : a; } static size_t FSE_readNCount (short* normalizedCounter, unsigned* maxSVPtr, unsigned* tableLogPtr, const void* headerBuffer, size_t hbSize) { const BYTE* const istart = (const BYTE*) headerBuffer; const BYTE* const iend = istart + hbSize; const BYTE* ip = istart; int nbBits; int remaining; int threshold; U32 bitStream; int bitCount; unsigned charnum = 0; int previous0 = 0; if (hbSize < 4) return ERROR(srcSize_wrong); bitStream = MEM_readLE32(ip); nbBits = (bitStream & 0xF) + FSE_MIN_TABLELOG; /* extract tableLog */ if (nbBits > FSE_TABLELOG_ABSOLUTE_MAX) return ERROR(tableLog_tooLarge); bitStream >>= 4; bitCount = 4; *tableLogPtr = nbBits; remaining = (1<<nbBits)+1; threshold = 1<<nbBits; nbBits++; while ((remaining>1) && (charnum<=*maxSVPtr)) { if (previous0) { unsigned n0 = charnum; while ((bitStream & 0xFFFF) == 0xFFFF) { n0+=24; if (ip < iend-5) { ip+=2; bitStream = MEM_readLE32(ip) >> bitCount; } else { bitStream >>= 16; bitCount+=16; } } while ((bitStream & 3) == 3) { n0+=3; bitStream>>=2; bitCount+=2; } n0 += bitStream & 3; bitCount += 2; if (n0 > *maxSVPtr) return ERROR(maxSymbolValue_tooSmall); while (charnum < n0) normalizedCounter[charnum++] = 0; if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) { ip += bitCount>>3; bitCount &= 7; bitStream = MEM_readLE32(ip) >> bitCount; } else bitStream >>= 2; } { const short max = (short)((2*threshold-1)-remaining); short count; if ((bitStream & (threshold-1)) < (U32)max) { count = (short)(bitStream & (threshold-1)); bitCount += nbBits-1; } else { count = (short)(bitStream & (2*threshold-1)); if (count >= threshold) count -= max; bitCount += nbBits; } count--; /* extra accuracy */ remaining -= FSE_abs(count); normalizedCounter[charnum++] = count; previous0 = !count; while (remaining < threshold) { nbBits--; threshold >>= 1; } { if ((ip <= iend-7) || (ip + (bitCount>>3) <= iend-4)) { ip += bitCount>>3; bitCount &= 7; } else { bitCount -= (int)(8 * (iend - 4 - ip)); ip = iend - 4; } bitStream = MEM_readLE32(ip) >> (bitCount & 31); } } } if (remaining != 1) return ERROR(GENERIC); *maxSVPtr = charnum-1; ip += (bitCount+7)>>3; if ((size_t)(ip-istart) > hbSize) return ERROR(srcSize_wrong); return ip-istart; } /********************************************************* * Decompression (Byte symbols) *********************************************************/ static size_t FSE_buildDTable_rle (FSE_DTable* dt, BYTE symbolValue) { void* ptr = dt; FSE_DTableHeader* const DTableH =
(FSE_DTableHeader*)ptr; FSE_decode_t* const cell = (FSE_decode_t*)(ptr) + 1; /* because dt is unsigned */ DTableH->tableLog = 0; DTableH->fastMode = 0; cell->newState = 0; cell->symbol = symbolValue; cell->nbBits = 0; return 0; } static size_t FSE_buildDTable_raw (FSE_DTable* dt, unsigned nbBits) { void* ptr = dt; FSE_DTableHeader* const DTableH = (FSE_DTableHeader*)ptr; FSE_decode_t* const dinfo = (FSE_decode_t*)(ptr) + 1; /* because dt is unsigned */ const unsigned tableSize = 1 << nbBits; const unsigned tableMask = tableSize - 1; const unsigned maxSymbolValue = tableMask; unsigned s; /* Sanity checks */ if (nbBits < 1) return ERROR(GENERIC); /* min size */ /* Build Decoding Table */ DTableH->tableLog = (U16)nbBits; DTableH->fastMode = 1; for (s=0; s<=maxSymbolValue; s++) { dinfo[s].newState = 0; dinfo[s].symbol = (BYTE)s; dinfo[s].nbBits = (BYTE)nbBits; } return 0; } FORCE_INLINE size_t FSE_decompress_usingDTable_generic( void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt, const unsigned fast) { BYTE* const ostart = (BYTE*) dst; BYTE* op = ostart; BYTE* const omax = op + maxDstSize; BYTE* const olimit = omax-3; BIT_DStream_t bitD; FSE_DState_t state1; FSE_DState_t state2; size_t errorCode; /* Init */ errorCode = BIT_initDStream(&bitD, cSrc, cSrcSize); /* replaced last arg by maxCompressed Size */ if (FSE_isError(errorCode)) return errorCode; FSE_initDState(&state1, &bitD, dt); FSE_initDState(&state2, &bitD, dt); #define FSE_GETSYMBOL(statePtr) fast ? FSE_decodeSymbolFast(statePtr, &bitD) : FSE_decodeSymbol(statePtr, &bitD) /* 4 symbols per loop */ for ( ; (BIT_reloadDStream(&bitD)==BIT_DStream_unfinished) && (op<olimit) ; op+=4) { op[0] = FSE_GETSYMBOL(&state1); if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ BIT_reloadDStream(&bitD); op[1] = FSE_GETSYMBOL(&state2); if (FSE_MAX_TABLELOG*4+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ { if (BIT_reloadDStream(&bitD) > BIT_DStream_unfinished) { op+=2; break; } } op[2] = FSE_GETSYMBOL(&state1); if (FSE_MAX_TABLELOG*2+7 > sizeof(bitD.bitContainer)*8) /* This test must be static */ BIT_reloadDStream(&bitD); op[3] = FSE_GETSYMBOL(&state2); } /* tail */ /* note : BIT_reloadDStream(&bitD) >= FSE_DStream_partiallyFilled; Ends at exactly BIT_DStream_completed */ while (1) { if ( (BIT_reloadDStream(&bitD)>BIT_DStream_completed) || (op==omax) || (BIT_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state1))) ) break; *op++ = FSE_GETSYMBOL(&state1); if ( (BIT_reloadDStream(&bitD)>BIT_DStream_completed) || (op==omax) || (BIT_endOfDStream(&bitD) && (fast || FSE_endOfDState(&state2))) ) break; *op++ = FSE_GETSYMBOL(&state2); } /* end ?
*/ if (BIT_endOfDStream(&bitD) && FSE_endOfDState(&state1) && FSE_endOfDState(&state2)) return op-ostart; if (op==omax) return ERROR(dstSize_tooSmall); /* dst buffer is full, but cSrc unfinished */ return ERROR(corruption_detected); } static size_t FSE_decompress_usingDTable(void* dst, size_t originalSize, const void* cSrc, size_t cSrcSize, const FSE_DTable* dt) { FSE_DTableHeader DTableH; memcpy(&DTableH, dt, sizeof(DTableH)); /* select fast mode (static) */ if (DTableH.fastMode) return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 1); return FSE_decompress_usingDTable_generic(dst, originalSize, cSrc, cSrcSize, dt, 0); } static size_t FSE_decompress(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize) { const BYTE* const istart = (const BYTE*)cSrc; const BYTE* ip = istart; short counting[FSE_MAX_SYMBOL_VALUE+1]; DTable_max_t dt; /* Static analyzer seems unable to understand this table will be properly initialized later */ unsigned tableLog; unsigned maxSymbolValue = FSE_MAX_SYMBOL_VALUE; size_t errorCode; if (cSrcSize<2) return ERROR(srcSize_wrong); /* too small input size */ /* normal FSE decoding mode */ errorCode = FSE_readNCount (counting, &maxSymbolValue, &tableLog, istart, cSrcSize); if (FSE_isError(errorCode)) return errorCode; if (errorCode >= cSrcSize) return ERROR(srcSize_wrong); /* too small input size */ ip += errorCode; cSrcSize -= errorCode; errorCode = FSE_buildDTable (dt, counting, maxSymbolValue, tableLog); if (FSE_isError(errorCode)) return errorCode; /* always return, even if it is an error code */ return FSE_decompress_usingDTable (dst, maxDstSize, ip, cSrcSize, dt); } #endif /* FSE_COMMONDEFS_ONLY */ /* ****************************************************************** Huff0 : Huffman coder, part of New Generation Entropy library Copyright (C) 2013-2015, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - FSE+Huff0 source repository : https://github.com/Cyan4973/FiniteStateEntropy - Public forum : https://groups.google.com/forum/#!forum/lz4c ****************************************************************** */ /**************************************************************** * Compiler specifics ****************************************************************/ #if defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) /* inline is defined */ #elif defined(_MSC_VER) # define inline __inline #else # define inline /* disable inline */ #endif #ifdef _MSC_VER /* Visual Studio */ # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ #endif /**************************************************************** * Includes ****************************************************************/ #include <stdlib.h> /* malloc, free, qsort */ #include <string.h> /* memcpy, memset */ #include <stdio.h> /* printf (debug) */ /**************************************************************** * Error Management ****************************************************************/ #define HUF_STATIC_ASSERT(c) { enum { HUF_static_assert = 1/(int)(!!(c)) }; } /* use only *after* variable declarations */ /****************************** * Helper functions ******************************/ static unsigned HUF_isError(size_t code) { return ERR_isError(code); } #define HUF_ABSOLUTEMAX_TABLELOG 16 /* absolute limit of HUF_MAX_TABLELOG. Beyond that value, code does not work */ #define HUF_MAX_TABLELOG 12 /* max configured tableLog (for static allocation); can be modified up to HUF_ABSOLUTEMAX_TABLELOG */ #define HUF_DEFAULT_TABLELOG HUF_MAX_TABLELOG /* tableLog by default, when not specified */ #define HUF_MAX_SYMBOL_VALUE 255 #if (HUF_MAX_TABLELOG > HUF_ABSOLUTEMAX_TABLELOG) # error "HUF_MAX_TABLELOG is too large !" #endif /********************************************************* * Huff0 : Huffman block decompression *********************************************************/ typedef struct { BYTE byte; BYTE nbBits; } HUF_DEltX2; /* single-symbol decoding */ typedef struct { U16 sequence; BYTE nbBits; BYTE length; } HUF_DEltX4; /* double-symbols decoding */ typedef struct { BYTE symbol; BYTE weight; } sortedSymbol_t; /*! HUF_readStats Read compact Huffman tree, saved by HUF_writeCTable @huffWeight : destination buffer @return : size read from `src` */ static size_t HUF_readStats(BYTE* huffWeight, size_t hwSize, U32* rankStats, U32* nbSymbolsPtr, U32* tableLogPtr, const void* src, size_t srcSize) { U32 weightTotal; U32 tableLog; const BYTE* ip = (const BYTE*) src; size_t iSize; size_t oSize; U32 n; if (!srcSize) return ERROR(srcSize_wrong); iSize = ip[0]; //memset(huffWeight, 0, hwSize); /* is not necessary, even though some analyzer complain ...
*/ if (iSize >= 128) /* special header */ { if (iSize >= (242)) /* RLE */ { static int l[14] = { 1, 2, 3, 4, 7, 8, 15, 16, 31, 32, 63, 64, 127, 128 }; oSize = l[iSize-242]; memset(huffWeight, 1, hwSize); iSize = 0; } else /* Incompressible */ { oSize = iSize - 127; iSize = ((oSize+1)/2); if (iSize+1 > srcSize) return ERROR(srcSize_wrong); if (oSize >= hwSize) return ERROR(corruption_detected); ip += 1; for (n=0; n<oSize; n+=2) { huffWeight[n] = ip[n/2] >> 4; huffWeight[n+1] = ip[n/2] & 15; } } } else /* header compressed with FSE (normal case) */ { if (iSize+1 > srcSize) return ERROR(srcSize_wrong); oSize = FSE_decompress(huffWeight, hwSize-1, ip+1, iSize); /* max (hwSize-1) values decoded, as last one is implied */ if (FSE_isError(oSize)) return oSize; } /* collect weight stats */ memset(rankStats, 0, (HUF_ABSOLUTEMAX_TABLELOG + 1) * sizeof(U32)); weightTotal = 0; for (n=0; n<oSize; n++) { if (huffWeight[n] >= HUF_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected); rankStats[huffWeight[n]]++; weightTotal += (1 << huffWeight[n]) >> 1; } if (weightTotal == 0) return ERROR(corruption_detected); /* get last non-null symbol weight (implied, total must be 2^n) */ tableLog = BIT_highbit32(weightTotal) + 1; if (tableLog > HUF_ABSOLUTEMAX_TABLELOG) return ERROR(corruption_detected); { U32 total = 1 << tableLog; U32 rest = total - weightTotal; U32 verif = 1 << BIT_highbit32(rest); U32 lastWeight = BIT_highbit32(rest) + 1; if (verif != rest) return ERROR(corruption_detected); /* last value must be a clean power of 2 */ huffWeight[oSize] = (BYTE)lastWeight; rankStats[lastWeight]++; } /* check tree construction validity */ if ((rankStats[1] < 2) || (rankStats[1] & 1)) return ERROR(corruption_detected); /* by construction : at least 2 elts of rank 1, must be even */ /* results */ *nbSymbolsPtr = (U32)(oSize+1); *tableLogPtr = tableLog; return iSize+1; } /**************************/ /* single-symbol decoding */ /**************************/ static size_t HUF_readDTableX2 (U16* DTable, const void* src, size_t srcSize) { BYTE huffWeight[HUF_MAX_SYMBOL_VALUE + 1]; U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1]; /* large enough for values from 0 to 16 */ U32 tableLog = 0; const BYTE* ip = (const BYTE*) src; size_t iSize = ip[0]; U32 nbSymbols = 0; U32 n; U32 nextRankStart; void* ptr = DTable+1; HUF_DEltX2* const dt = (HUF_DEltX2*)ptr; HUF_STATIC_ASSERT(sizeof(HUF_DEltX2) == sizeof(U16)); /* if compilation fails here, assertion is false */ //memset(huffWeight, 0, sizeof(huffWeight)); /* is not necessary, even though some analyzer complain ...
*/ iSize = HUF_readStats(huffWeight, HUF_MAX_SYMBOL_VALUE + 1, rankVal, &nbSymbols, &tableLog, src, srcSize); if (HUF_isError(iSize)) return iSize; /* check result */ if (tableLog > DTable[0]) return ERROR(tableLog_tooLarge); /* DTable is too small */ DTable[0] = (U16)tableLog; /* maybe should separate sizeof DTable, as allocated, from used size of DTable, in case of DTable re-use */ /* Prepare ranks */ nextRankStart = 0; for (n=1; n<=tableLog; n++) { U32 current = nextRankStart; nextRankStart += (rankVal[n] << (n-1)); rankVal[n] = current; } /* fill DTable */ for (n=0; n<nbSymbols; n++) { const U32 w = huffWeight[n]; const U32 length = (1 << w) >> 1; U32 i; HUF_DEltX2 D; D.byte = (BYTE)n; D.nbBits = (BYTE)(tableLog + 1 - w); for (i = rankVal[w]; i < rankVal[w] + length; i++) dt[i] = D; rankVal[w] += length; } return iSize; } static BYTE HUF_decodeSymbolX2(BIT_DStream_t* Dstream, const HUF_DEltX2* dt, const U32 dtLog) { const size_t val = BIT_lookBitsFast(Dstream, dtLog); /* note : dtLog >= 1 */ const BYTE c = dt[val].byte; BIT_skipBits(Dstream, dt[val].nbBits); return c; } #define HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) \ *ptr++ = HUF_decodeSymbolX2(DStreamPtr, dt, dtLog) #define HUF_DECODE_SYMBOLX2_1(ptr, DStreamPtr) \ if (MEM_64bits() || (HUF_MAX_TABLELOG<=12)) \ HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) #define HUF_DECODE_SYMBOLX2_2(ptr, DStreamPtr) \ if (MEM_64bits()) \ HUF_DECODE_SYMBOLX2_0(ptr, DStreamPtr) static inline size_t HUF_decodeStreamX2(BYTE* p, BIT_DStream_t* const bitDPtr, BYTE* const pEnd, const HUF_DEltX2* const dt, const U32 dtLog) { BYTE* const pStart = p; /* up to 4 symbols at a time */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-4)) { HUF_DECODE_SYMBOLX2_2(p, bitDPtr); HUF_DECODE_SYMBOLX2_1(p, bitDPtr); HUF_DECODE_SYMBOLX2_2(p, bitDPtr); HUF_DECODE_SYMBOLX2_0(p, bitDPtr); } /* closer to the end */ while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd)) HUF_DECODE_SYMBOLX2_0(p, bitDPtr); /* no more data to retrieve from bitstream, hence no need to reload */ while (p < pEnd) HUF_DECODE_SYMBOLX2_0(p, bitDPtr); return pEnd-pStart; } static size_t HUF_decompress4X2_usingDTable( void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, const U16* DTable) { if (cSrcSize < 10) return ERROR(corruption_detected); /* strict minimum : jump table + 1 byte per stream */ { const BYTE* const istart = (const BYTE*) cSrc; BYTE* const ostart = (BYTE*) dst; BYTE* const oend = ostart + dstSize; const void* ptr = DTable; const HUF_DEltX2* const dt = ((const HUF_DEltX2*)ptr) +1; const U32 dtLog = DTable[0]; size_t errorCode; /* Init */ BIT_DStream_t bitD1; BIT_DStream_t bitD2; BIT_DStream_t bitD3; BIT_DStream_t bitD4; const size_t length1 = MEM_readLE16(istart); const size_t length2 = MEM_readLE16(istart+2); const size_t length3 = MEM_readLE16(istart+4); size_t length4; const BYTE* const istart1 = istart + 6; /* jumpTable */ const BYTE* const istart2 = istart1 + length1; const BYTE* const istart3 = istart2 + length2; const BYTE* const istart4 = istart3 + length3; const size_t segmentSize = (dstSize+3) / 4; BYTE* const opStart2 = ostart + segmentSize; BYTE* const opStart3 = opStart2 + segmentSize; BYTE* const opStart4 = opStart3 + segmentSize; BYTE* op1 = ostart; BYTE* op2 = opStart2; BYTE* op3 = opStart3; BYTE* op4 = opStart4; U32 endSignal; length4 = cSrcSize - (length1 + length2 + length3 + 6); if (length4 > cSrcSize) return ERROR(corruption_detected); /* overflow */ errorCode = BIT_initDStream(&bitD1, istart1, length1); if (HUF_isError(errorCode)) return errorCode; errorCode = BIT_initDStream(&bitD2, istart2,
static size_t HUF_decompress4X2_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const U16* DTable)
{
    if (cSrcSize < 10) return ERROR(corruption_detected);   /* strict minimum : jump table + 1 byte per stream */

    {
        const BYTE* const istart = (const BYTE*) cSrc;
        BYTE* const ostart = (BYTE*) dst;
        BYTE* const oend = ostart + dstSize;

        const void* ptr = DTable;
        const HUF_DEltX2* const dt = ((const HUF_DEltX2*)ptr) +1;
        const U32 dtLog = DTable[0];
        size_t errorCode;

        /* Init */
        BIT_DStream_t bitD1;
        BIT_DStream_t bitD2;
        BIT_DStream_t bitD3;
        BIT_DStream_t bitD4;
        const size_t length1 = MEM_readLE16(istart);
        const size_t length2 = MEM_readLE16(istart+2);
        const size_t length3 = MEM_readLE16(istart+4);
        size_t length4;
        const BYTE* const istart1 = istart + 6;   /* jumpTable */
        const BYTE* const istart2 = istart1 + length1;
        const BYTE* const istart3 = istart2 + length2;
        const BYTE* const istart4 = istart3 + length3;
        const size_t segmentSize = (dstSize+3) / 4;
        BYTE* const opStart2 = ostart + segmentSize;
        BYTE* const opStart3 = opStart2 + segmentSize;
        BYTE* const opStart4 = opStart3 + segmentSize;
        BYTE* op1 = ostart;
        BYTE* op2 = opStart2;
        BYTE* op3 = opStart3;
        BYTE* op4 = opStart4;
        U32 endSignal;

        length4 = cSrcSize - (length1 + length2 + length3 + 6);
        if (length4 > cSrcSize) return ERROR(corruption_detected);   /* overflow */
        errorCode = BIT_initDStream(&bitD1, istart1, length1);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD2, istart2, length2);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD3, istart3, length3);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD4, istart4, length4);
        if (HUF_isError(errorCode)) return errorCode;

        /* 16-32 symbols per loop (4-8 symbols per stream) */
        endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
        for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; )
        {
            HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
            HUF_DECODE_SYMBOLX2_1(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_1(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_1(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_1(op4, &bitD4);
            HUF_DECODE_SYMBOLX2_2(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_2(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_2(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_2(op4, &bitD4);
            HUF_DECODE_SYMBOLX2_0(op1, &bitD1);
            HUF_DECODE_SYMBOLX2_0(op2, &bitD2);
            HUF_DECODE_SYMBOLX2_0(op3, &bitD3);
            HUF_DECODE_SYMBOLX2_0(op4, &bitD4);

            endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
        }

        /* check corruption */
        if (op1 > opStart2) return ERROR(corruption_detected);
        if (op2 > opStart3) return ERROR(corruption_detected);
        if (op3 > opStart4) return ERROR(corruption_detected);
        /* note : op4 supposed already verified within main loop */

        /* finish bitStreams one by one */
        HUF_decodeStreamX2(op1, &bitD1, opStart2, dt, dtLog);
        HUF_decodeStreamX2(op2, &bitD2, opStart3, dt, dtLog);
        HUF_decodeStreamX2(op3, &bitD3, opStart4, dt, dtLog);
        HUF_decodeStreamX2(op4, &bitD4, oend,     dt, dtLog);

        /* check */
        endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
        if (!endSignal) return ERROR(corruption_detected);

        /* decoded size */
        return dstSize;
    }
}


static size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    HUF_CREATE_STATIC_DTABLEX2(DTable, HUF_MAX_TABLELOG);
    const BYTE* ip = (const BYTE*) cSrc;
    size_t errorCode;

    errorCode = HUF_readDTableX2 (DTable, cSrc, cSrcSize);
    if (HUF_isError(errorCode)) return errorCode;
    if (errorCode >= cSrcSize) return ERROR(srcSize_wrong);
    ip += errorCode;
    cSrcSize -= errorCode;

    return HUF_decompress4X2_usingDTable (dst, dstSize, ip, cSrcSize, DTable);
}
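/* Editor's note (illustrative, not part of the original source): the 4X2 frame
   splits its payload into four independent bitstreams so four symbols can be
   decoded per loop iteration without data dependencies. As parsed above, the
   compressed layout is:

       [len1:LE16][len2:LE16][len3:LE16][stream1][stream2][stream3][stream4]

   where len4 is deduced as cSrcSize - 6 - len1 - len2 - len3 and each stream
   fills one quarter (rounded up) of the output. A hypothetical standalone
   bounds check mirroring the parser, with made-up names, guarded out of
   compilation: */
#if 0
static int toy_checkJumpTable(const unsigned char* src, size_t srcSize)
{
    size_t len1, len2, len3;
    if (srcSize < 10) return 0;                     /* jump table + 1 byte per stream */
    len1 = (size_t)src[0] | ((size_t)src[1] << 8);  /* little-endian 16-bit reads */
    len2 = (size_t)src[2] | ((size_t)src[3] << 8);
    len3 = (size_t)src[4] | ((size_t)src[5] << 8);
    /* the implied fourth length must not underflow:
       mirrors the `length4 > cSrcSize` guard above */
    return (len1 + len2 + len3 + 6) <= srcSize;
}
#endif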
/***************************/
/* double-symbols decoding */
/***************************/

static void HUF_fillDTableX4Level2(HUF_DEltX4* DTable, U32 sizeLog, const U32 consumed,
                           const U32* rankValOrigin, const int minWeight,
                           const sortedSymbol_t* sortedSymbols, const U32 sortedListSize,
                           U32 nbBitsBaseline, U16 baseSeq)
{
    HUF_DEltX4 DElt;
    U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1];
    U32 s;

    /* get pre-calculated rankVal */
    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill skipped values */
    if (minWeight>1)
    {
        U32 i, skipSize = rankVal[minWeight];
        MEM_writeLE16(&(DElt.sequence), baseSeq);
        DElt.nbBits = (BYTE)(consumed);
        DElt.length = 1;
        for (i = 0; i < skipSize; i++)
            DTable[i] = DElt;
    }

    /* fill DTable */
    for (s=0; s<sortedListSize; s++)
    {
        const U32 symbol = sortedSymbols[s].symbol;
        const U32 weight = sortedSymbols[s].weight;
        const U32 nbBits = nbBitsBaseline - weight;
        const U32 length = 1 << (sizeLog-nbBits);
        const U32 start = rankVal[weight];
        U32 i = start;
        const U32 end = start + length;

        MEM_writeLE16(&(DElt.sequence), (U16)(baseSeq + (symbol << 8)));
        DElt.nbBits = (BYTE)(nbBits + consumed);
        DElt.length = 2;
        do { DTable[i++] = DElt; } while (i<end);   /* since length >= 1 */

        rankVal[weight] += length;
    }
}

typedef U32 rankVal_t[HUF_ABSOLUTEMAX_TABLELOG][HUF_ABSOLUTEMAX_TABLELOG + 1];

static void HUF_fillDTableX4(HUF_DEltX4* DTable, const U32 targetLog,
                           const sortedSymbol_t* sortedList, const U32 sortedListSize,
                           const U32* rankStart, rankVal_t rankValOrigin, const U32 maxWeight,
                           const U32 nbBitsBaseline)
{
    U32 rankVal[HUF_ABSOLUTEMAX_TABLELOG + 1];
    const int scaleLog = nbBitsBaseline - targetLog;   /* note : targetLog >= srcLog, hence scaleLog <= 1 */
    const U32 minBits = nbBitsBaseline - maxWeight;
    U32 s;

    memcpy(rankVal, rankValOrigin, sizeof(rankVal));

    /* fill DTable */
    for (s=0; s<sortedListSize; s++)
    {
        const U16 symbol = sortedList[s].symbol;
        const U32 weight = sortedList[s].weight;
        const U32 nbBits = nbBitsBaseline - weight;
        const U32 start = rankVal[weight];
        const U32 length = 1 << (targetLog-nbBits);

        if ((targetLog-nbBits) >= minBits)   /* enough room for a second symbol */
        {
            U32 sortedRank;
            int minWeight = nbBits + scaleLog;
            if (minWeight < 1) minWeight = 1;
            sortedRank = rankStart[minWeight];
            HUF_fillDTableX4Level2(DTable+start, targetLog-nbBits, nbBits,
                           rankValOrigin[nbBits], minWeight,
                           sortedList+sortedRank, sortedListSize-sortedRank,
                           nbBitsBaseline, symbol);
        }
        else
        {
            U32 i;
            const U32 end = start + length;
            HUF_DEltX4 DElt;

            MEM_writeLE16(&(DElt.sequence), symbol);
            DElt.nbBits = (BYTE)(nbBits);
            DElt.length = 1;
            for (i = start; i < end; i++)
                DTable[i] = DElt;
        }
        rankVal[weight] += length;
    }
}

static size_t HUF_readDTableX4 (U32* DTable, const void* src, size_t srcSize)
{
    BYTE weightList[HUF_MAX_SYMBOL_VALUE + 1];
    sortedSymbol_t sortedSymbol[HUF_MAX_SYMBOL_VALUE + 1];
    U32 rankStats[HUF_ABSOLUTEMAX_TABLELOG + 1] = { 0 };
    U32 rankStart0[HUF_ABSOLUTEMAX_TABLELOG + 2] = { 0 };
    U32* const rankStart = rankStart0+1;
    rankVal_t rankVal;
    U32 tableLog, maxW, sizeOfSort, nbSymbols;
    const U32 memLog = DTable[0];
    const BYTE* ip = (const BYTE*) src;
    size_t iSize = ip[0];

    void* ptr = DTable;
    HUF_DEltX4* const dt = ((HUF_DEltX4*)ptr) + 1;

    HUF_STATIC_ASSERT(sizeof(HUF_DEltX4) == sizeof(U32));   /* if compilation fails here, assertion is false */
    if (memLog > HUF_ABSOLUTEMAX_TABLELOG) return ERROR(tableLog_tooLarge);
    //memset(weightList, 0, sizeof(weightList));   /* is not necessary, even though some analyzer complain ... */

    iSize = HUF_readStats(weightList, HUF_MAX_SYMBOL_VALUE + 1, rankStats, &nbSymbols, &tableLog, src, srcSize);
    if (HUF_isError(iSize)) return iSize;

    /* check result */
    if (tableLog > memLog) return ERROR(tableLog_tooLarge);   /* DTable can't fit code depth */

    /* find maxWeight */
    for (maxW = tableLog; rankStats[maxW]==0; maxW--)
        { if (!maxW) return ERROR(GENERIC); }   /* necessarily finds a solution before maxW==0 */

    /* Get start index of each weight */
    {
        U32 w, nextRankStart = 0;
        for (w=1; w<=maxW; w++)
        {
            U32 current = nextRankStart;
            nextRankStart += rankStats[w];
            rankStart[w] = current;
        }
        rankStart[0] = nextRankStart;   /* put all 0w symbols at the end of sorted list*/
        sizeOfSort = nextRankStart;
    }

    /* sort symbols by weight */
    {
        U32 s;
        for (s=0; s<nbSymbols; s++)
        {
            U32 w = weightList[s];
            U32 r = rankStart[w]++;
            sortedSymbol[r].symbol = (BYTE)s;
            sortedSymbol[r].weight = (BYTE)w;
        }
        rankStart[0] = 0;   /* forget 0w symbols; this is beginning of weight(1) */
    }

    /* Build rankVal */
    {
        const U32 minBits = tableLog+1 - maxW;
        U32 nextRankVal = 0;
        U32 w, consumed;
        const int rescale = (memLog-tableLog) - 1;   /* tableLog <= memLog */
        U32* rankVal0 = rankVal[0];
        for (w=1; w<=maxW; w++)
        {
            U32 current = nextRankVal;
            nextRankVal += rankStats[w] << (w+rescale);
            rankVal0[w] = current;
        }
        for (consumed = minBits; consumed <= memLog - minBits; consumed++)
        {
            U32* rankValPtr = rankVal[consumed];
            for (w = 1; w <= maxW; w++)
            {
                rankValPtr[w] = rankVal0[w] >> consumed;
            }
        }
    }

    HUF_fillDTableX4(dt, memLog,
                   sortedSymbol, sizeOfSort,
                   rankStart0, rankVal, maxW,
                   tableLog+1);

    return iSize;
}
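/* Editor's note (illustrative, not part of the original source): an X4 table
   cell packs up to two decoded bytes into `sequence` (little-endian, first
   symbol in the low byte), the total bits both symbols consume into `nbBits`,
   and how many of the two bytes are valid output into `length`. That is why
   HUF_decodeSymbolX4 below can copy the pair in one memcpy and advance the
   output pointer by `length`. A hypothetical cell, guarded out of compilation,
   for intuition only: */
#if 0
static void toy_fillX4Cell_demo(void)
{
    HUF_DEltX4 cell;
    /* same write pattern as HUF_fillDTableX4Level2: baseSeq + (symbol << 8);
       decoding this cell would emit the two bytes "hi" */
    MEM_writeLE16(&(cell.sequence), (U16)('h' + ('i' << 8)));
    cell.nbBits = 7;    /* total bits consumed by both symbols (made-up value) */
    cell.length = 2;    /* both bytes of the sequence are valid output */
    (void)cell;
}
#endif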
static U32 HUF_decodeSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
{
    const size_t val = BIT_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 2);
    BIT_skipBits(DStream, dt[val].nbBits);
    return dt[val].length;
}

static U32 HUF_decodeLastSymbolX4(void* op, BIT_DStream_t* DStream, const HUF_DEltX4* dt, const U32 dtLog)
{
    const size_t val = BIT_lookBitsFast(DStream, dtLog);   /* note : dtLog >= 1 */
    memcpy(op, dt+val, 1);
    if (dt[val].length==1) BIT_skipBits(DStream, dt[val].nbBits);
    else
    {
        if (DStream->bitsConsumed < (sizeof(DStream->bitContainer)*8))
        {
            BIT_skipBits(DStream, dt[val].nbBits);
            if (DStream->bitsConsumed > (sizeof(DStream->bitContainer)*8))
                DStream->bitsConsumed = (sizeof(DStream->bitContainer)*8);   /* ugly hack; works only because it's the last symbol. Note : can't easily extract nbBits from just this symbol */
        }
    }
    return 1;
}

#define HUF_DECODE_SYMBOLX4_0(ptr, DStreamPtr) \
    ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

#define HUF_DECODE_SYMBOLX4_1(ptr, DStreamPtr) \
    if (MEM_64bits() || (HUF_MAX_TABLELOG<=12)) \
        ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

#define HUF_DECODE_SYMBOLX4_2(ptr, DStreamPtr) \
    if (MEM_64bits()) \
        ptr += HUF_decodeSymbolX4(ptr, DStreamPtr, dt, dtLog)

static inline size_t HUF_decodeStreamX4(BYTE* p, BIT_DStream_t* bitDPtr, BYTE* const pEnd, const HUF_DEltX4* const dt, const U32 dtLog)
{
    BYTE* const pStart = p;

    /* up to 8 symbols at a time */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p < pEnd-7))
    {
        HUF_DECODE_SYMBOLX4_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX4_1(p, bitDPtr);
        HUF_DECODE_SYMBOLX4_2(p, bitDPtr);
        HUF_DECODE_SYMBOLX4_0(p, bitDPtr);
    }

    /* closer to the end */
    while ((BIT_reloadDStream(bitDPtr) == BIT_DStream_unfinished) && (p <= pEnd-2))
        HUF_DECODE_SYMBOLX4_0(p, bitDPtr);

    while (p <= pEnd-2)
        HUF_DECODE_SYMBOLX4_0(p, bitDPtr);   /* no need to reload : reached the end of DStream */

    if (p < pEnd)
        p += HUF_decodeLastSymbolX4(p, bitDPtr, dt, dtLog);

    return p-pStart;
}

static size_t HUF_decompress4X4_usingDTable(
          void* dst,  size_t dstSize,
    const void* cSrc, size_t cSrcSize,
    const U32* DTable)
{
    if (cSrcSize < 10) return ERROR(corruption_detected);   /* strict minimum : jump table + 1 byte per stream */

    {
        const BYTE* const istart = (const BYTE*) cSrc;
        BYTE* const ostart = (BYTE*) dst;
        BYTE* const oend = ostart + dstSize;

        const void* ptr = DTable;
        const HUF_DEltX4* const dt = ((const HUF_DEltX4*)ptr) +1;
        const U32 dtLog = DTable[0];
        size_t errorCode;

        /* Init */
        BIT_DStream_t bitD1;
        BIT_DStream_t bitD2;
        BIT_DStream_t bitD3;
        BIT_DStream_t bitD4;
        const size_t length1 = MEM_readLE16(istart);
        const size_t length2 = MEM_readLE16(istart+2);
        const size_t length3 = MEM_readLE16(istart+4);
        size_t length4;
        const BYTE* const istart1 = istart + 6;   /* jumpTable */
        const BYTE* const istart2 = istart1 + length1;
        const BYTE* const istart3 = istart2 + length2;
        const BYTE* const istart4 = istart3 + length3;
        const size_t segmentSize = (dstSize+3) / 4;
        BYTE* const opStart2 = ostart + segmentSize;
        BYTE* const opStart3 = opStart2 + segmentSize;
        BYTE* const opStart4 = opStart3 + segmentSize;
        BYTE* op1 = ostart;
        BYTE* op2 = opStart2;
        BYTE* op3 = opStart3;
        BYTE* op4 = opStart4;
        U32 endSignal;

        length4 = cSrcSize - (length1 + length2 + length3 + 6);
        if (length4 > cSrcSize) return ERROR(corruption_detected);   /* overflow */
        errorCode = BIT_initDStream(&bitD1, istart1, length1);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD2, istart2, length2);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD3, istart3, length3);
        if (HUF_isError(errorCode)) return errorCode;
        errorCode = BIT_initDStream(&bitD4, istart4, length4);
        if (HUF_isError(errorCode)) return errorCode;

        /* 16-32 symbols per loop (4-8 symbols per stream) */
        endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
        for ( ; (endSignal==BIT_DStream_unfinished) && (op4<(oend-7)) ; )
        {
            HUF_DECODE_SYMBOLX4_2(op1, &bitD1);
            HUF_DECODE_SYMBOLX4_2(op2, &bitD2);
            HUF_DECODE_SYMBOLX4_2(op3, &bitD3);
            HUF_DECODE_SYMBOLX4_2(op4, &bitD4);
            HUF_DECODE_SYMBOLX4_1(op1, &bitD1);
            HUF_DECODE_SYMBOLX4_1(op2, &bitD2);
            HUF_DECODE_SYMBOLX4_1(op3, &bitD3);
            HUF_DECODE_SYMBOLX4_1(op4, &bitD4);
            HUF_DECODE_SYMBOLX4_2(op1, &bitD1);
            HUF_DECODE_SYMBOLX4_2(op2, &bitD2);
            HUF_DECODE_SYMBOLX4_2(op3, &bitD3);
            HUF_DECODE_SYMBOLX4_2(op4, &bitD4);
            HUF_DECODE_SYMBOLX4_0(op1, &bitD1);
            HUF_DECODE_SYMBOLX4_0(op2, &bitD2);
            HUF_DECODE_SYMBOLX4_0(op3, &bitD3);
            HUF_DECODE_SYMBOLX4_0(op4, &bitD4);

            endSignal = BIT_reloadDStream(&bitD1) | BIT_reloadDStream(&bitD2) | BIT_reloadDStream(&bitD3) | BIT_reloadDStream(&bitD4);
        }

        /* check corruption */
        if (op1 > opStart2) return ERROR(corruption_detected);
        if (op2 > opStart3) return ERROR(corruption_detected);
        if (op3 > opStart4) return ERROR(corruption_detected);
        /* note : op4 supposed already verified within main loop */

        /* finish bitStreams one by one */
        HUF_decodeStreamX4(op1, &bitD1, opStart2, dt, dtLog);
        HUF_decodeStreamX4(op2, &bitD2, opStart3, dt, dtLog);
        HUF_decodeStreamX4(op3, &bitD3, opStart4, dt, dtLog);
        HUF_decodeStreamX4(op4, &bitD4, oend,     dt, dtLog);

        /* check */
        endSignal = BIT_endOfDStream(&bitD1) & BIT_endOfDStream(&bitD2) & BIT_endOfDStream(&bitD3) & BIT_endOfDStream(&bitD4);
        if (!endSignal) return ERROR(corruption_detected);

        /* decoded size */
        return dstSize;
    }
}
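/* Editor's note (illustrative, not part of the original source): the _0/_1/_2
   macro ladders used above are a bit-budget guard, not mere loop unrolling.
   My reading of BIT_reloadDStream is that a reload leaves at least
   8*sizeof(bitContainer) - 7 valid bits in the container; with the usual
   HUF_MAX_TABLELOG of 12, the worst-case budget per reload works out as
   (hedged arithmetic, for intuition only):

       64-bit container : 4 lookups x 12 bits = 48 <= 64 - 7 = 57   (_2, _1, _2, _0 all run)
       32-bit container : 2 lookups x 12 bits = 24 <= 32 - 7 = 25   (only _1 and _0 run)

   which is why the _2 steps compile only when MEM_64bits() is true and the
   _1 steps additionally require HUF_MAX_TABLELOG <= 12 on 32-bit targets. */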
static size_t HUF_decompress4X4 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
{
    HUF_CREATE_STATIC_DTABLEX4(DTable, HUF_MAX_TABLELOG);
    const BYTE* ip = (const BYTE*) cSrc;

    size_t hSize = HUF_readDTableX4 (DTable, cSrc, cSrcSize);
    if (HUF_isError(hSize)) return hSize;
    if (hSize >= cSrcSize) return ERROR(srcSize_wrong);
    ip += hSize;
    cSrcSize -= hSize;

    return HUF_decompress4X4_usingDTable (dst, dstSize, ip, cSrcSize, DTable);
}


/**********************************/
/* quad-symbols decoding          */
/**********************************/
typedef struct { BYTE nbBits; BYTE nbBytes; } HUF_DDescX6;
typedef union { BYTE byte[4]; U32 sequence; } HUF_DSeqX6;

/* recursive, up to level 3; may benefit from