pax_global_header00006660000000000000000000000064136150507410014514gustar00rootroot0000000000000052 comment=a89c98daf863bb6a4866cb53d353700e3e42d171 vmem-1.8/000077500000000000000000000000001361505074100123305ustar00rootroot00000000000000vmem-1.8/.cirrus.yml000066400000000000000000000003511361505074100144370ustar00rootroot00000000000000freebsd_instance: image: freebsd-12-0-release-amd64 task: install_script: pkg install -y autoconf bash binutils coreutils e2fsprogs-libuuid git gmake libunwind ncurses pkgconf script: CFLAGS="-Wno-unused-value" gmake vmem-1.8/.codecov.yml000066400000000000000000000004461361505074100145570ustar00rootroot00000000000000ignore: - src/jemalloc/ - src/windows/ - src/test/ - src/common/valgrind/ - src/benchmarks/ comment: layout: "diff" behavior: default require_changes: yes parsers: gcov: branch_detection: conditional: false loop: false method: false macro: false vmem-1.8/.gitattributes000066400000000000000000000001711361505074100152220ustar00rootroot00000000000000* text=auto eol=lf *.jpg binary *.png binary *.gif binary *.ico binary *.match text -whitespace GIT_VERSION export-subst vmem-1.8/.gitignore000066400000000000000000000003721361505074100143220ustar00rootroot00000000000000.* !.gitignore !.gitattributes !.cirrus.yml !.clang-format !.travis.yml !.mailmap !.cstyleignore !.codecov.yml *~ *.swp *.o make.out core a.out nbproject/ /rpmbuild/ /dpkgbuild/ /rpm/ /dpkg/ /user.mk *.user ~* *.db *.htmp *.hpptmp *.aps tags *.link vmem-1.8/.mailmap000066400000000000000000000022251361505074100137520ustar00rootroot00000000000000Daria Lewandowska Gábor Buella Grzegorz Brzeziński Hu Wan Igor Chorążewicz Jacob Chang Jan M Michalski Kamil Diedrich Kamil Diedrich Krzysztof Czuryło Łukasz Godlewski Łukasz Godlewski Łukasz Plewa Łukasz Stolarczuk Łukasz Stolarczuk Maciej Ramotowski Michał Biesek Oksana Sałyk Oksana Sałyk Paul Luse Paweł Lebioda Piotr Balcer Sławomir Pawłowski Tomasz Kapela Weronika Lewandowska Weronika Lewandowska Wojciech Uss vmem-1.8/.travis.yml000066400000000000000000000027201361505074100144420ustar00rootroot00000000000000dist: trusty # use temporarily the previous version of Trusty image # until Travis fixes issue with mounting permissions group: deprecated-2017Q2 sudo: required language: c services: - docker env: global: - OS=ubuntu - OS_VER=18.04 - MAKE_PKG=0 - VMEM_CC=gcc - VMEM_CXX=g++ - VALGRIND=1 matrix: - COVERAGE=1 FAULT_INJECTION=1 TEST_BUILD=debug - FAULT_INJECTION=1 TEST_BUILD=debug - FAULT_INJECTION=1 TEST_BUILD=nondebug - VMEM_CC=clang VMEM_CXX=clang++ TEST_BUILD=debug - VMEM_CC=clang VMEM_CXX=clang++ TEST_BUILD=nondebug - OS=fedora OS_VER=28 VMEM_CC=clang VMEM_CXX=clang++ TEST_BUILD=debug - OS=fedora OS_VER=28 VMEM_CC=clang VMEM_CXX=clang++ TEST_BUILD=nondebug AUTO_DOC_UPDATE=1 - MAKE_PKG=1 EXPERIMENTAL=y VALGRIND=0 PUSH_IMAGE=1 - MAKE_PKG=1 EXPERIMENTAL=y VALGRIND=0 PUSH_IMAGE=1 OS=fedora OS_VER=28 - MAKE_PKG=1 EXPERIMENTAL=y VALGRIND=0 VMEM_CC=clang VMEM_CXX=clang++ - COVERITY=1 before_install: - echo $TRAVIS_COMMIT_RANGE - export HOST_WORKDIR=`pwd` - export GITHUB_REPO=pmem/vmem - export DOCKERHUB_REPO=pmem/vmem - cd utils/docker - ./pull-or-rebuild-image.sh - if [[ -f push_image_to_repo_flag ]]; then PUSH_THE_IMAGE=1; fi - if [[ -f skip_build_package_check ]]; then export SKIP_CHECK=1; fi - rm -f push_image_to_repo_flag skip_build_package_check script: - ./build-travis.sh after_success: - if [[ $PUSH_THE_IMAGE -eq 1 ]]; then images/push-image.sh $OS-$OS_VER; fi 
vmem-1.8/CODING_STYLE.md000066400000000000000000000161701361505074100146020ustar00rootroot00000000000000# C Style and Coding Standards for VMEM This document defines the coding standards and conventions for writing VMEM code. To ensure readability and consistency within the code, the contributed code must adhere to the rules below. ### Introduction VMEM coding style is quite similar to the style used for the SunOS product. A full description of that standard can be found [here.](https://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf) This document does not cover the entire set of recommendations and formatting rules used in writing VMEM code, but rather focuses on some VMEM-specific conventions, not described in the document mentioned above, as well as the ones the violation of which is most frequently observed during the code review. Also, keep in mind that more important than the particular style is **consistency** of coding style. So, when modifying the existing code, the changes should be coded in the same style as the file being modified. ### Code formatting Most of the common stylistic errors can be detected by the [style checker program](https://github.com/pmem/vmem/blob/master/utils/cstyle) included in the repo. Simply run `make cstyle` or `CSTYLE.ps1` to verify if your code is well-formatted. Here is the list of the most important rules: - The limit of line length is 80 characters. - Indent the code with TABs, not spaces. Tab width is 8 characters. - Do not break user-visible strings (even when they are longer than 80 characters) - Put each variable declaration in a separate line. - Do not use C++ comments (`//`). - Spaces around operators are mandatory. - No whitespace is allowed at the end of line. - For multi-line macros, do not put whitespace before `\` character. - Precede definition of each function with a brief, non-trivial description. (Usually a single line is enough.) - Use `XXX` tag to indicate a hack, problematic code, or something to be done. - For pointer variables, place the `*` close to the variable name not pointer type. - Avoid unnecessary variable initialization. - Never type `unsigned int` - just use `unsigned` in such case. Same with `long int` and `long`, etc. - Sized types like `uint32_t`, `int64_t` should be used when there is an on-media format. Otherwise, just use `unsigned`, `long`, etc. - Functions with local scope must be declared as `static`. ### License & copyright - Make sure you have the right to submit your contribution under the BSD license, especially if it is based upon previous work. See [CONTRIBUTING.md](https://github.com/pmem/vmem/blob/master/CONTRIBUTING.md) for details. - A copy of the [BSD-style License](https://github.com/pmem/vmem/blob/master/LICENSE) must be placed at the beginning of each source file, script or man page (Obviously, it does not apply to README's, Visual Studio projects and \*.match files.) - When adding a new file to the repo, or when making a contribution to an existing file, feel free to put your copyright string on top of it. ### Naming convention - Keep identifier names short, but meaningful. One-letter variables are discouraged. - Use proper prefix for function name, depending on the module it belongs to. - Use *under_score* pattern for function/variable names. Please, do not use CamelCase or Hungarian notation. - UPPERCASE constant/macro/enum names. - Capitalize first letter for variables with global or module-level scope. 
- Avoid using `l` as a variable name, because it is hard to distinguish `l` from `1` on some displays. ### Multi-OS support (Linux/FreeBSD/Windows) - Do not add `#ifdef ` sections lightly. They should be treated as technical debt and avoided when possible. - Use `_WIN32` macro for conditional directives when including code using Windows-specific API. - Use `__FreeBSD__` macro for conditional directives for FreeBSD-specific code. - Use `_MSC_VER` macro for conditional directives when including code using VC++ or gcc specific extensions. - In case of large portions of code (i.e. a whole function) that have different implementation for each OS, consider moving them to separate files. (i.e. *xxx_linux.c*, *xxx_freebsd.c* and *xxx_windows.c*) - Keep in mind that `long int` is always 32-bit in VC++, even when building for 64-bit platforms. Remember to use `long long` types whenever it applies, as well as proper formatting strings and type suffixes (i.. `%llu`, `ULL`). - Standard compliant solutions should be used in preference of compiler-specific ones. (i.e. static inline functions versus statement expressions) - Do not use formatting strings that are not supported by Windows implementations of printf()/scanf() family. (like `%m`) - It is recommended to use `PRI*` and `SCN*` macros in printf()/scanf() functions for width-based integral types (`uint32_t`, `int64_t`, etc.). ### Debug traces and assertions - Put `LOG(3, ...)` at the beginning of each function. Consider using higher log level for most frequently called routines. - Make use of `COMPILE_ERROR_ON` and `ASSERT*` macros. - Use `ERR()` macro to log error messages. ### Unit tests - There **must** be unit tests provided for each new function/module added. - Test scripts **must** start with `#!/usr/bin/env ` for portability between Linux and FreeBSD. - Please, see [this](https://github.com/pmem/vmem/blob/master/src/test/README) and [that](https://github.com/pmem/vmem/blob/master/src/test/unittest/README) document to get familiar with our test framework and the guidelines on how to write and run unit tests. ### Commit messages All commit lines (entered when you run `git commit`) must follow the common conventions for git commit messages: - The first line is a short summary, no longer than **50 characters,** starting with an area name and then a colon. There should be no period after the short summary. - Valid area names are: **pmem, obj, blk, log, vmem, vmmalloc, jemalloc, **test, doc, daxio, pmreorder, pool** (for *libpmempool* and *pmempool*), **rpmem** (for *librpmem* and *rpmemd*), **benchmark, examples** and **common** (for everything else). - It is acceptable for the short summary to be the only thing in the commit message if it is a trivial change. Otherwise, the second line must be a blank line. - Starting at the third line, additional information is given in complete English sentences and, optionally, bulleted points. This content must not extend beyond **column 72.** - The English sentences should be written in the imperative, so you say "Fix bug X" instead of "Fixed bug X" or "Fixes bug X". - Bullet points should use hanging indents when they take up more than one line (see example below). - There can be any number of paragraphs, separated by a blank line, as many as it takes to describe the change. - Any references to GitHub issues are at the end of the commit message. 
For example, here is a properly-formatted commit message: ``` doc: fix code formatting in man pages This section contains paragraph style text with complete English sentences. There can be as many paragraphs as necessary. - Bullet points are typically sentence fragments - The first word of the bullet point is usually capitalized and if the point is long, it is continued with a hanging indent - The sentence fragments don't typically end with a period Ref: pmem/issues#1 ``` vmem-1.8/ChangeLog000066400000000000000000000004111361505074100140760ustar00rootroot00000000000000Fri Jan 31 2020 Marcin Ślusarz * Version 1.8 This is a first release of vmem as separate project from PMDK. It fixes compatiblity with newer toolchains. ChangeLog for previous releases (with PMDK) can be found in ChangeLog.pmdk. vmem-1.8/ChangeLog.pmdk000066400000000000000000000761001361505074100150400ustar00rootroot00000000000000Mon Sep 30 2019 Marcin Ślusarz * Version 1.7 This release: - Introduces new APIs in libpmemobj for managing space used by transactions. (see pmemobj_tx_log_append_buffer man page for details) - Introduces new APIs in librpmem, splitting rpmem_persist into rpmem_flush and rpmem_drain, allowing applications to use the flush + drain model already known from libpmem. (libpmemobj does not use this feature yet) - Optimizes large libpmemobj transactions by significantly reducing the amount of memory modified at the commit phase. - Optimizes tracking of libpmemobj reservations. - Adds new flags for libpmemobj's pmemobj_tx_xadd_range[_direct] API: POBJ_XADD_NO_SNAPSHOT and POBJ_XADD_ASSUME_INITIALIZED, allowing applications to optimize how memory is tracked by the library. To support some of the above changes the libpmemobj on-media layout had to be changed, which means that old pools have to be converted using pmdk-convert >= 1.7. Other changes: - obj: fix merging of ranges when NOFLUSH flag is used (#1100) - rpmem: fix closing of ssh connection (#995, #1060) - obj: abort transaction on pmemobj_tx_publish failure Internal changes: - test: fault injection tests for pmemblk, pmemlog, and pmemobj - test: improved Python testing framework - test: support real pmem in bad blocks tests - common: allow not building examples and benchmarks Tue Aug 27 2019 Marcin Ślusarz * Version 1.6.1 This release fixes possible pool corruptions on Windows (see https://github.com/pmem/pmdk/pull/3728 for details), improves compatibility with newer Linux kernels with respect to Device DAX detection, fixes pmemobj space management for large pools, improves compatibility with newer toolchains, incorporates build fixes for FreeBSD and fixes a number of smaller bugs. 
Detailed list of bug fixes: - common: (win) fix possible pool file coruption (#972, #715, #603) - common: implement correct / robust device_dax_alignment (#1071) - obj: fix recycler not locating unused chunks - doc: update pmemobj_tx_lock documentation wrt behavior on fail - common: fix persistent domain detection (#1093) - common: vecq: fix a pointer-to-struct aliasing violation (crash on arm64) - common: fix minor issues related to ndctl linking - obj: drop recursion from pmemobj_next - common: fix bug in badblock file error handling - obj: fix handling of malloc failures - common: fix handling of malloc failures (ctl) - jemalloc: fix build with gcc 9 - obj: don't overwrite errno when palloc_heap_check_remote fails - doc: fix pmreorder emit log macro - rpmem: change order of rpmem init (#1043) - common: Fix build failure due to unused macro PAGE_SIZE - common: support older versions of pkg-config - tools: link with release variant of pmemcommon - common: add PMDK prefix to local copy of queue.h (#990) - rpmem: switch to using an open coded basename (FreeBSD) - common: posix_fallocate: guard against integer underflow in check (FreeBSD) - test: support Valgrind 3.15 - test: skip if fi_info is missing - test: (win) fix sparsefile error handling - test: fix libpmempool_feature tests that match logs - test: remove vmem_delete test (#1074) - test: adjust matchfiles in vmem_valgrind_region test (#1087) - test: remove old log files for windows (#1013) - test: remove invalid expect_normal_exit (#1092) - test: suppress ld leak (#1098) - test: Expose necessary symbols in libvmmalloc_dummy_funcs (FreeBSD) - test: fix tests failing because `tput` fails (FreeBSD) - test: avoid obj_critnib_mt taking very long on many-core machines - test: deal with libndctl's path without build system - test: overwrite old log in pmempool_create/TEST14.PS1 - test: fix match files in tests which use dax devices - test: fix match file in rpmem_addr_ext test - test: fix pmempool_check test Wed Aug 28 2019 Marcin Ślusarz * Version 1.5.2 This release fixes possible pool corruptions on Windows (see https://github.com/pmem/pmdk/pull/3728 for details), improves compatibility with newer Linux kernels with respect to Device DAX detection, fixes pmemobj space management for large pools, improves compatibility with newer toolchains and fixes a number of smaller bugs. 
Detailed list of bug fixes: - common: (win) fix possible pool file coruption (#972, #715, #603) - common: implement correct / robust device_dax_alignment (#1071) - obj: fix crash after large undo log recovery - obj: fix recycler not locating unused chunks - doc: update pmemobj_tx_lock documentation wrt behavior on fail - common: fix build of rpm packages on suse (#1023) - common: fix persistent domain detection (#1093) - common: vecq: fix a pointer-to-struct aliasing violation (crash on arm64) - rpmem: lock file prior to unlink (#833) - common: fix for pool_set error handling (#1036) - pmreorder: fix handling of store drain flush drain pattern - obj: fix possible memory leak in tx_add_lock - pool: free bad_block vector - common: fix bug in badblock file error handling - obj: fix handling of malloc failures - common: fix handling of malloc failures (ctl) - jemalloc: fix build with gcc 9 - obj: don't overwrite errno when palloc_heap_check_remote fails - doc: fix typos in pmreorder configuration - doc: fix pmreorder emit log macro - tools: link with release variant of pmemcommon - test: support Valgrind 3.15 - test: skip if fi_info is missing - test: split test obj_tx_lock into two test cases (#1027) - test: (win) fix sparsefile error handling - test: fix libpmempool_feature tests that match logs - test: remove vmem_delete test (#1074) - test: adjust matchfiles in vmem_valgrind_region test (#1087) - test: remove old log files for windows (#1013) - test: remove invalid expect_normal_exit (#1092) - test: suppress ld leak (#1098) - test: fix failing pmemdetect on Windows - test: fix match files in tests which use dax devices - test: fix pmempool_check test Fri Aug 30 2019 Marcin Ślusarz * Version 1.4.3 This release fixes possible pool corruptions on Windows (see https://github.com/pmem/pmdk/pull/3728 for details) and improves compatibility with newer Linux kernels with respect to Device DAX detection. 
Bug fixes: - common: (win) fix possible pool file coruption (#972, #715, #603) - common: implement correct / robust device_dax_alignment (#1071) - common: fix device dax detection - obj: fix pmemobj_check for pools with some sizes (#975) - obj: fix type numbers for pmemobj_list_insert_new - obj: fix pmemobj_tx_lock error handling - obj: fix possible memory leak in tx_add_lock - common: fix ctl_load_config during libpmemobj initialization (#917) - common: win: fix getopt returning "option is ambiguous" - common: fix persistent domain detection (#1093) - pool: do not copy same regions in update_uuids - test: split test obj_tx_lock into two test cases - test: remove checking errno in obj_tx_add_range_direct - test: remove invalid expect_normal_exit - test: fix int overflow in pmem_deep_persist test - test: fix pmempool_check test - test: (win) fix a few issues related to long paths Tue Aug 27 2019 Marcin Ślusarz * Version 1.3.3 Bug fixes: - pmem: fix clflush bit position - common: implement correct / robust device_dax_alignment - common: fix device dax detection - common: fix library dependencies (#767) - common: use rpm-config CFLAGS/LDFLAGS when building packages (#768) - test: fix vmmalloc_malloc_hooks (#773) - test: fix compilation with clang-5.0 (#783) - pool: fix set convert of v3 -> v4 - common: generate pkg-config files on make install (#610) - common: fix dependencies for Debian's dev packages - test: add missing include in unittest.h - common: (win) fix timed locks - common: provide src version in GitHub tarballs - common: fix free function in tls Tue Aug 27 2019 Marcin Ślusarz * Version 1.2.4 Bug fixes: - common: fix device dax detection (compatibility with newer kernels) Tue Mar 26 2019 Marcin Ślusarz * Version 1.6 This release: - Enables unsafe shutdown and bad block detection on Linux on systems with libndctl >= 63. It is expected that systems with libndctl >= 63 has necessary kernel support (Linux >= 4.20). However, due to bugs in libndctl = 63 and Linux = 4.20, it is recommended to use libndctl >= 64.1 and Linux >= 5.0.4. On systems with libndctl < 63, PMDK uses old superuser-only interfaces. Support for old or new interfaces is chosen at BUILD time. - Introduces arena control interface in pmemobj, allowing applications to tweak performance and scalability of heap operations. See pmemobj_ctl_get man page ("heap" namespace) for details. - Introduces copy_on_write mode, which allows testing applications using pmemobj with pmreorder. See pmemobj_ctl_get man page ("copy_on_write" namespace) for details. Other changes: - allocate file space when creating a pool on existing file (pmem/issues#167) - initial support for testing using fault injection - initial Python test framework - improve performance of pmemobj_pool_by_ptr Bug fixes: - common: work around tmpfs bug during pool creation (pmem/issues#1018) - pool: race-free pmempool create --max-size - obj: don't modify remote pools in pmemobj_check Tue Feb 19 2019 Marcin Ślusarz * Version 1.5.1 This release fixes minor bugs and improves compatibility with newer tool chains. 
Notable bug fixes: - common: make detection of device-dax instances more robust - obj: fix pmemobj_check for pools with some sizes - obj: don't use anon struct in an union (public header) - obj: fix pmemobj_tx_lock error handling - obj: don't use braces in an expression with clang (public header) - obj: suppress pmemcheck warnings for statistics - pmreorder: fix markers nontype issue Fri Oct 26 2018 Marcin Ślusarz * Version 1.5 This release has had two major focus areas - performance and RAS (Reliability, Availability and Serviceability). Beyond that, it introduces new APIs, new tools and many other improvements. As a side effect of performance optimizations, the libpmemobj on-media layout had to be changed, which means that old pools have to be converted using pmdk-convert. libpmemcto experiment has been finished and removed from the tree. For more details, please see http://pmem.io/2018/10/22/release-1-5.html. New features: - common: unsafe shutdown detection (SDS) - common: detection and repair of uncorrectable memory errors (bad blocks) - pool: new "feature" subcommand for enabling and disabling detection of unsafe shutdown and uncorrectable memory errors - common: auto flush detection on Windows (on Linux since 1.4) - pmreorder: new tool for verification of persistent memory algorithms - obj: new on media layout - pmem/obj: new flexible memcpy|memmove|memset API - obj: new flushing APIs: pmemobj_xpersist, pmemobj_xflush (PMEMOBJ_F_RELAXED) - rpmem: new flag RPMEM_PERSIST_RELAXED for rpmem_persist - obj: lazily initialized volatile variables (pmemobj_volatile) (EXPERIMENTAL) - obj: allocation classes with alignment - obj: new action APIs: pmemobj_defer_free, POBJ_XRESERVE_NEW, POBJ_XRESERVE_ALLOC - blk/log: new "ctl" API Optimizations: - obj: major performance improvements for AEP NVDIMMs - obj: better space utilization for small allocations - common: call msync only on one page for deep drain Other changes: - cto: removed - obj: remove actions limit - common: new dependency on libndctl on Linux - pmempool: "convert" subcommand is now a wrapper around pmdk-convert (please see https://github.com/pmem/pmdk-convert) - obj: C++ bindings have been moved to a new repository (please see https://github.com/pmem/libpmemobj-cpp) Bug fixes: - obj: fix type numbers for pmemobj_list_insert_new - pmem: fix inconsistency in pmem_is_pmem - common: fix windows mmap destruction - daxio: fix checking and adjusting length - common: fix long paths support on Windows Thu Aug 16 2018 Marcin Ślusarz * Version 1.4.2 This release fixes the way PMDK reports its version via pkg-config files. Bug fixes: - common: fix reported version - doc: use single "-" in NAME section (pmem/issues#914) Fri Jun 29 2018 Marcin Ślusarz * Version 1.4.1 In 1.4 development cycle, we created new daxio utility (command line tool for performing I/O on Device-DAX), but due to some complications we had to disable it just before the 1.4 release. In 1.4.1 we finally enable it. Daxio depends on ndctl v60.1. 
Bug fixes: - pmem: fix clflush bit position - obj: fix invalid OOMs when zones are fully packed - obj: don't register undo logs twice in memcheck - pool: fix bash completion script - pool: fix incorrect errno after transform - obj: fix clang-7 compilation - obj: test for msync failures in non-pmem path - doc: add missing field to alloc class entry point - common: (win) fix timed locks - common: provide src version in GitHub tarballs - common: fix free function in tls - common: fix double close - test: allow testing installed libraries - test: fix Valgrind vs stripped libraries issue - test: fix dependencies between tests and tools - test: fix races on make pcheck -jN - test: use libvmmalloc.so.1 - test: fix incorrect number of required dax devices - test: add suppression for leak in ld.so - test: fail if memcheck detects overlapping chunks - test: simplify time measurements in obj_sync - benchmark: check lseek() return value - examples: catch exceptions in map_cli Thu Mar 29 2018 Krzysztof Czurylo * Version 1.4 This is the first release of PMDK under a new name. The NVML project has been renamed to PMDK (Persistent Memory Development Kit). This is only the project/repo name change and it does not affect the names of the PMDK packages. See this blog article for more details on the reasons and impact of the name change: http://pmem.io/2017/12/11/NVML-is-now-PMDK.html New features: - common: support for concatenated Device-DAX devices with 2M/1G alignment - common: add support for MAP_SYNC flag - common: always enable Valgrind instrumentation (#292) - common: pool set options / headerless pools - pmem: add support for "deep flush" operation - rpmem: add rpmem_deep_persist - doc: split man pages and add per-function aliases (#385) Optimizations: - pmem: skip CPU cache flushing when eADR is available (no Windows support yet) - pmem: add AVX512F support in pmem_memcpy/memset (#656) Bug fixes: - common: fix library dependencies (#767, RHBZ #1539564) - common: use rpm-config CFLAGS/LDFLAGS when building packages (#768, RHBZ #1539564) - common: do not unload librpmem on close (#776) - common: fix NULL check in os_fopen (#813) - common: fix missing version in .pc files - obj: fix cancel of huge allocations (#726) - obj: fix error handling in pmemobj_open (#750) - obj: validate pe_offset in pmemobj_list_* APIs (#772) - obj: fix add_range with size == 0 (#781) - log: add check for negative iovcnt (#690) - rpmem: limit maximum number of lanes (#609) - rpmem: change order of memory registration (#655) - rpmem: fix removing remote pools (#721) - pool: fix error handling (#643) - pool: fix sync with switched parts (#730) - pool: fix sync with missing replica (#731) - pool: fix detection of Device DAX size (#805) - pool: fail pmempool_sync if there are no replicas (#816) - benchmark: fix calculating standard deviation (#318) - doc: clarify pmem_is_pmem behavior (#719) - doc: clarify pmemobj_root behavior (#733) Experimental features: - common: port PMDK to FreeBSD - common: add experimental support for aarch64 - obj: introduce allocation classes - obj: introduce two-phase heap ops (reserve/publish) (#380, #415) - obj: provide basic heap statistics (#676) - obj: implement run-time pool extending (#382) - cto: add close-to-open persistence library (#192) The following features are disabled by default, until ndctl v60.0 is available: - daxio: add utility to perform I/O on Device-DAX - RAS: unsafe shutdown detection/handling Wed Dec 20 2017 Krzysztof Czurylo * Version 1.3.1 Bug fixes: - rpmem: fix issues 
reported by Coverity - rpmem: fix read error handling - rpmem: add fip monitor (#597) - test: add rpmemd termination handling test - cpp: fix pop.persist function in obj_cpp_ptr - rpmem: return failure for a failed allocation - rpmem: fix potential memory leak - common: fix available rm options msg (#651) - pool: fix pmempool_get_max_size - obj: fix potential deadlock during realloc (#635, #636, #637) - obj: initialize TLS data - rpmem: fix cleanup if fork() failed (#634) - obj: fix bogus OOM after exhausting first zone Thu Jul 13 2017 Krzysztof Czurylo * Version 1.3 This release introduces some useful features and optimizations in libpmemobj. Most of them are experimental and controlled by the new pmemobj_ctl APIs. For details, please check the feature requests identified by the issue numbers listed next to the items below. Other important changes are related to performance tuning and stabilization of librpmem library, which is used by libpmemobj to get remote access to persistent memory and to provide basic data replication over RDMA. The librpmem is still considered experimental. NVML for Windows is feature complete (except for libvmmalloc). This release includes the support for Unicode, long paths and the NVML installer. New features: - common: add support for concatenated DAX Devices - common: add Unicode support on Windows - common: add long path support on Windows - common: add NVML installer for Windows - pmem: make pmem_is_pmem() true for Device DAX only - obj: add pmemobj_wcsdup()/pmemobj_tx_wcsdup() APIs - obj: export non-inlined pmemobj_direct() - obj: add PMEMOBJ_NLANES env variable - cpp: introduce the allocator - cpp: add wstring version of C++ entry points - vmem: add vmem_wcsdup() API entry - pool: add pmempool_rm() function (#307) - pool: add --force flag for create command (#529) - benchmark: add a minimal execution time option - benchmark: add thread affinity option - benchmark: print 99% and 99.9% percentiles - doc: separate Linux/Windows version of web-based man pages Optimizations: - obj: cache _pobj_cached_pool in pmemobj_direct() - obj: optimize thread utilization of buckets - obj: stop grabbing a lock when querying pool ptr - rpmem: use multiple endpoints Bug fixes: - common: fix issues reported by static code analyzers - pmem: fix mmap() implementation on Windows - pmem: fix mapping addr/length alignment on Windows - pmem: fix PMEM_MMAP_HINT implementation on Windows - pmem: fix pmem_is_pmem() on invalid memory ranges - pmem: fix wrong is_pmem returned by pmem_map_file() - pmem: fix mprotect() for private mappings on Windows - pmem: modify pmem_is_pmem() behavior for len==0 - obj: add failsafe to prevent allocs in constructor - cpp: fix swap implementation - cpp: fix sync primitives' constructors - cpp: fix wrong pointer type in the allocator - cpp: return persistent_ptr::swap to being public - pool: treat invalid answer as 'n' - pool: unify flags value for dry run - pool: transform for remote replicas - rpmem: persistency method detection - benchmark: fix time measurement Experimental features/optimizations: - obj: pmemobjctl - statistics and control submodule (#194, #211) - obj: zero-overhead allocations - customizable alloc header (#347) - obj: flexible run size index (#377) - obj: dynamic range cache (#378) - obj: asynchronous post-commit (#381) - obj: configurable object cache (#515) - obj: add cache size and threshold tx params - obj: add CTL var for suppressing expensive checks - rpmem: add rpmem_set_attr() API entry - rpmem: switch to libfabric v1.4.2 Thu 
May 18 2017 Krzysztof Czurylo * Version 1.2.3 Bug fixes: - test: extend timeout for selected tests - test: reduce number of operations in obj_tx_mt - test: define cfree() as free() in vmmalloc_calloc Other changes: - common: move Docker images to new repo Sat Apr 15 2017 Krzysztof Czurylo * Version 1.2.2 Bug fixes: - pmempool: fix mapping type in pool_params_parse - test: limit number of arenas in vmem_stats - test: do not run pool_lock test as root - common: fix pkg-config files - common: fix building packages for Debian Tue Feb 21 2017 Krzysztof Czurylo * Version 1.2.1 This NVML release changes the behavior of pmem_is_pmem() on Linux. The pmem_is_pmem() function will now return true only if the entire range is mapped directly from Device DAX (/dev/daxX.Y) without an intervening file system, and only if the corresponding file mapping was created with pmem_map_file(). See libpmem(3) for details. Bug fixes: - jemalloc: fix test compilation on Fedora 26 (rawhide) - test: fix cpp test compilation on Fedora 26 (rawhide) - common: use same queue.h on linux and windows - common: queue.h clang static analyzer fix - common: fix path handling in build-dpkg.sh - test: fix match files in pmempool_transform/TEST8 Fri Dec 30 2016 Krzysztof Czurylo * Version 1.2 - Windows Technical Preview #1 This is the first Technical Preview release of NVML for Windows. It is based on NVML 1.2 version, but not all the 1.2 features are ported to Windows. In particular, Device DAX and remote access to persistent memory (librpmem) are not supported by design. NOTE: This release has not gone through the full validation cycle, but only through some basic tests on Travis and AppVeyor. Thus, it cannot be assumed "Production quality" and should not be used in production environments. Besides several minor improvements and bug fixes, all the other changes since NVML 1.2 release were related to Windows support: - win: port libvmem (and jemalloc) - win: benchmarks Windows port - win: fix mapping files of unaligned length - win: clean up possible race condition in mmap_init() - win: enable QueryVirtualMemoryInformation() in pmem_is_pmem() - test: check open handles at START/DONE - test: port all the remaining unit tests (scope, pmem_map, obj_debug, util_poolset, pmempool_*) - win: add resource files for versioning Known issues and limitations of Windows version of NVML: - Unicode support is missing. The UTF/USC-encoded file paths or pool set files may not be handled correctly. - The libvmmalloc library is not ported yet. - The on-media format of pmem pools is not portable at the moment. The pmem pools created using Windows version of NVM libraries cannot be open on Linux and vice versa. - Despite the fact the current version of NVML would work with any recent version of Windows OS, to take full advantage of PMEM and NVML features and to benefit from the PMEM performance, the recommended platforms needs be equipped with the real NVDIMMs hardware and should support the native, Microsoft's implementation of DAX-enabled file system (i.e. Windows Server 2016 or later). In case of using NVML with older versions of Windows or with the custom implementation of PMEM/DAX drivers, the performance might not be satisfactory. Please, contact the provider of PMEM/DAX drivers for your platform to get the customized version of NVML in such case. Thu Dec 15 2016 Krzysztof Czurylo * Version 1.2 This NVML release causes a "flag day" for libpmemobj. The pmemobj pools built under NVML 1.1 are incompatible with pools built under NVML 1.2 and later. 
This is because an issue was discovered with the alignment of locks (#358) and, although rare, the issue potentially impacts program correctness, making the fix mandatory. The major version number of the pmemobj pool layout and the version of the libpmemobj API is changed to prevent the use of the potentially incorrect layout. Other key changes introduced in this release: - Add Device DAX support, providing that "optimized flush" mechanism defined in SNIA NVM Programming Model can safely be used, even if PMEM-aware file system supporting that model is not available, or if the user does not want to use the file system for some reason. - Add a package for libpmemobj C++ bindings. C++ API is no longer considered experimental. Web-based documentation for C++ API is available on http://pmem.io. - Add "sync" and "transform" commands to pmempool utility. The "sync" command allows one to recover missing or corrupted part(s) of a pool set from a healthy replica, while the "transform" command is a convenient way for modifying the structure of an existing pool set, i.e. by adding or removing replicas. - Add experimental support for remote access to persistent memory and basic remote data replication over RDMA (librpmem). Experimental support for remote replicas is also provided by libpmemobj library. New features: - common: add Device DAX support (#197) - obj: add C++ bindings package (libpmemobj++-devel) - obj: add TOID_OFFSETOF macro - pmempool: add "sync" and "transform" commands (#172, #196) Bug fixes: - obj: force alignment of pmem lock structures (#358) - blk: cast translation entry to uint64_t when calculating data offset - obj: fix Valgrind instrumentation of chunk headers and cancelled allocations - obj: set error message when user called pmemobj_tx_abort() - obj: fix status returned by pmemobj_list_insert() (#226) - obj: defer allocation of global structures Optimizations: - obj: fast path for pmemobj_pool_by_ptr() when inside a transaction - obj: simplify and optimize allocation class generation Experimental features: - rpmem: add support for remote access to persistent memory and basic remote data replication over RDMA - libpmempool: add pmempool_sync() and pmempool_transform() (#196) - obj: introduce pmemobj_oid() - obj: add pmemobj_tx_xalloc()/pmemobj_tx_xadd_range() APIs and the corresponding macros - obj: add transaction stage transition callbacks Thu Jun 23 2016 Krzysztof Czurylo * Version 1.1 This NVML release introduces a new version of libpmemobj pool layout. Internal undo log structure has been modified to improve performance of pmemobj transactions. Memory pools created with older versions of the libpmemobj library must be converted to the new format using "pmempool convert" command. See pmempool-convert(1) for details. A new "libpmempool" library is available, providing support for off-line pool management and diagnostics. Initially it provides only "check" and "repair" operations for log and blk memory pools, and for BTT devices. 
Other changes: - pmem: deprecate PCOMMIT - blk: match BTT Flog initialization with Linux NVDIMM BTT - pmem: defer pmem_is_pmem() initialization (#158) - obj: add TOID_TYPEOF macro Bug fixes: - doc: update description of valid file size units (#133) - pmempool: fix --version short option in man page (#135) - pmempool: print usage when running rm without arg (#136) - cpp: clarify polymorphism in persistent_ptr (#150) - obj: let the before flag be any non-zero value (#151) - obj: fix compare array pptr to nullptr (#152) - obj: cpp pool.get_root() fix (#156) - log/blk: set errno if replica section is specified (#161) - cpp: change exception message (#163) - doc: remove duplicated words in man page (#164) - common: always append EXTRA_CFLAGS after our CFLAGS Experimental features: - Implementation of C++ bindings for libpmempobj is complete. Web-based documentation for C++ API is available on http://pmem.io. Note that C++ API is still considered experimental. Do not use it in production environments. - Porting NVML to Windows is in progress. There are MS Visual Studio solution/projects available, allowing to compile libpmem, libpmemlog, libpmemblk and libpmemobj on Windows, but the libraries are not fully functional and most of the test are not enabled yet. Thu Apr 07 2016 Krzysztof Czurylo * Version 1.0 The API of six libraries (libpmem, libpmemblk, libpmemlog, libpmemobj, libvmem, libvmmalloc) is complete and stable. The on-media layout of persistent memory pools will be maintained from this point, and if changed it will be backward compatible. Man pages are all complete. This release has been validated to "Production quality". For the purpose of new features planned for next releases of NVML there have been some API modifications made: - pmem: pmem_map replaced with pmem_map_file - log/blk: 'off_t' substituted with 'long long' - obj: type numbers extended to 64-bit - obj: new entry points and macros added: pmemobj_tx_errno, pmemobj_tx_lock, pmemobj_mutex_timedlock, TX_ADD_DIRECT, TX_ADD_FIELD_DIRECT, TX_SET_DIRECT Other key changes since version 0.4 include: - common: updated/fixed installation scripts - common: eliminated dependency on libuuid - pmem: CPU features/ISA detection using CPUID - obj: improved error handling - obj: atomic allocation fails if constructor returns error - obj: multiple performance optimizations - obj: object store refactoring - obj: additional examples and benchmarks This release also introduces a prototype implementation of C++ bindings for libpmemobj. Note that C++ API is still experimental and should not be used in production environments. Fri Dec 04 2015 Krzysztof Czurylo * Version 0.4 This NVML version primarily focuses on improving code quality and reliability. In addition to a couple of bug fixes, the changes include: - benchmarks for libpmemobj, libpmemblk and libvmem - additional pmemobj tests and examples - pool mapping address randomization - added pmempool "rm" command - eliminated libpmem dependency on libpthread - enabled extra warnings - minor performance improvements Man pages are all complete. This release is considered "Beta quality" by the team, having been thoroughly validated, including significant performance analysis. The pmempool command does not yet support "check" and "repair" operations for pmemobj type pools. 
Sun Sep 13 2015 Andy Rudoff * Version 0.3 NVML is now feature complete, adding support for: - pool sets - pmemobj local replication (active/passive) - experimental valgrind support - pmempool support for all pool types Man pages are all complete. This release is considered "Alpha quality" by the team, having gone through significant validation but only some performance analysis at this point. Tue Jun 30 2015 Andy Rudoff * Version 0.2 NVML now consists of six libraries: - libpmem (basic flushing, etc) - libpmemblk, libpmemlog, libpmemobj (transactions) - libvmem, libvmmalloc (volatile use of pmem) The "pmempool" command is available for managing pmem files. Man pages for all the above are complete. The only things documented in man pages but not implemented are: - pmem sets (ability to spread a pool over a set of files) - replication (coming for libpmemobj) The pmempool command does not yet support pmemobj type pools. Thu Sep 11 2014 Andy Rudoff * Version 0.1 Initial development done in 0.1 builds vmem-1.8/LICENSE000066400000000000000000000036721361505074100133450ustar00rootroot00000000000000Copyright 2014-2018, Intel Corporation Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Everything in this source tree is covered by the previous license with the following exceptions: * src/jemalloc has its own (somewhat similar) license contained in src/jemalloc/COPYING. * src/common/valgrind/valgrind.h, src/common/valgrind/memcheck.h, src/common/valgrind/helgrind.h, src/common/valgrind/drd.h are covered by another similar BSD license variant, contained in those files. * utils/cstyle (used only during development) licensed under CDDL. vmem-1.8/Makefile000066400000000000000000000106011361505074100137660ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # Makefile -- top-level Makefile for VMEM # # Use "make" to build the library. # # Use "make doc" to build documentation. # # Use "make test" to build unit tests. Add "SKIP_SYNC_REMOTES=y" to skip # or "FORCE_SYNC_REMOTES=y" to force syncing remote nodes if any is defined. # # Use "make check" to run unit tests. # # Use "make check-remote" to run only remote unit tests. # # Use "make clean" to delete all intermediate files (*.o, etc). # # Use "make clobber" to delete everything re-buildable (binaries, etc.). # # Use "make cstyle" to run cstyle on all C source files # # Use "make check-license" to check copyright and license in all source files # # Use "make rpm" to build rpm packages # # Use "make dpkg" to build dpkg packages # # Use "make source DESTDIR=path_to_dir" to copy source files # from HEAD to 'path_to_dir/vmem' directory. # # As root, use "make install" to install the library in the usual # locations (/usr/local/lib, /usr/local/include, and /usr/local/share/man). # You can provide custom directory prefix for installation using # DESTDIR variable e.g.: "make install DESTDIR=/opt" # You can override the prefix within DESTDIR using prefix variable # e.g.: "make install prefix=/usr" include src/common.inc RPM_BUILDDIR=rpmbuild DPKG_BUILDDIR=dpkgbuild EXPERIMENTAL ?= n BUILD_PACKAGE_CHECK ?= y TEST_CONFIG_FILE ?= "$(CURDIR)"/src/test/testconfig.sh rpm : override DESTDIR="$(CURDIR)/$(RPM_BUILDDIR)" dpkg: override DESTDIR="$(CURDIR)/$(DPKG_BUILDDIR)" rpm dpkg: override prefix=/usr all: $(MAKE) -C src $@ doc: $(MAKE) -C doc all clean: $(MAKE) -C src $@ $(MAKE) -C doc $@ $(MAKE) -C utils $@ $(RM) -r $(RPM_BUILDDIR) $(DPKG_BUILDDIR) $(RM) -f $(GIT_VERSION) clobber: $(MAKE) -C src $@ $(MAKE) -C doc $@ $(MAKE) -C utils $@ $(RM) -r $(RPM_BUILDDIR) $(DPKG_BUILDDIR) rpm dpkg $(RM) -f $(GIT_VERSION) test check pcheck: all $(MAKE) -C src $@ cstyle: test -d .git && utils/check-commits.sh $(MAKE) -C src $@ $(MAKE) -C utils $@ @echo Checking files for whitespace issues... @utils/check_whitespace -g @echo Done. format: $(MAKE) -C src $@ $(MAKE) -C utils $@ @echo Done. check-license: $(MAKE) -C utils $@ @utils/check_license/check-headers.sh \ $(TOP) \ utils/check_license/check-license \ LICENSE @echo Done. 
sparse: $(MAKE) -C src sparse source: $(if "$(DESTDIR)", , $(error Please provide DESTDIR variable)) +utils/copy-source.sh "$(DESTDIR)" $(SRCVERSION) pkg-clean: $(RM) -r "$(DESTDIR)" rpm dpkg: pkg-clean $(MAKE) source DESTDIR="$(DESTDIR)" +utils/build-$@.sh -t $(SRCVERSION) -s "$(DESTDIR)"/vmem -w "$(DESTDIR)" -o $(CURDIR)/$@\ -e $(EXPERIMENTAL) -c $(BUILD_PACKAGE_CHECK)\ -f $(TEST_CONFIG_FILE) install uninstall: $(MAKE) -C src $@ $(MAKE) -C doc $@ .PHONY: all clean clobber test check cstyle check-license install uninstall\ source rpm dpkg pkg-clean pcheck check-remote format doc require-rpmem\ $(SUBDIRS) vmem-1.8/README.md000066400000000000000000000257541361505074100136240ustar00rootroot00000000000000# **libvmem and libvmmalloc: malloc-like volatile allocations** [![Build Status](https://travis-ci.org/pmem/vmem.svg?branch=master)](https://travis-ci.org/pmem/vmem) [![Build status](https://ci.appveyor.com/api/projects/status/5ba870ralywix5dh/branch/master?svg=true&pr=false)](https://ci.appveyor.com/project/pmem/vmem/branch/master) [![Build status](https://api.cirrus-ci.com/github/pmem/vmem.svg)](https://cirrus-ci.com/github/pmem/vmem/master) [![Release version](https://img.shields.io/github/release/pmem/vmem.svg?sort=semver)](https://github.com/pmem/vmem/releases/latest) [![Coverage Status](https://codecov.io/github/pmem/vmem/coverage.svg?branch=master)](https://codecov.io/gh/pmem/vmem/branch/master) **libvmem** and **libvmmalloc** are a couple of libraries for using persistent memory for malloc-like volatile uses. They have historically been a part of [PMDK](https://pmem.io/pmdk) despite being solely for volatile uses. Both of these libraries are considered code-complete and mature. You may want consider using [memkind](https://github.com/memkind/memkind) instead in code that benefits from extra features like NUMA awareness. To install vmem libraries, either install pre-built packages, which we build for every stable release, or clone the tree and build it yourself. **Pre-built** packages can be found in popular Linux distribution package repositories, or you can check out our recent stable releases on our [github release page](https://github.com/pmem/vmem/releases). Specific installation instructions are outlined below. ## Contents 1. [Libraries](#libraries) 2. [Getting Started](#getting-started) 3. [Version Conventions](#version-conventions) 4. [Pre-Built Packages for Windows](#pre-built-packages-for-windows) 5. [Dependencies](#dependencies) * [Linux](#linux) * [Windows](#windows) * [FreeBSD](#freebsd) 6. [Building vmem on Linux or FreeBSD](#building-vmem-on-linux-or-freebsd) * [Make Options](#make-options) * [Testing Libraries](#testing-libraries-on-linux-and-freebsd) * [Memory Management Tools](#memory-management-tools) 7. [Building vmem on Windows](#building-vmem-on-windows) * [Testing Libraries](#testing-libraries-on-windows) 8. [Experimental Packages](#experimental-packages) * [Experimental support for 64-bit ARM](#experimental-support-for-64-bit-arm) 9. [Contact Us](#contact-us) ## Libraries Available Libraries: - [libvmem](http://pmem.io/vmem/libvmem/): turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. - [libvmmalloc](http://pmem.io/vmem/libvmmalloc/)1: transparently converts all the dynamic memory allocations into persistent memory allocations. Currently these libraries only work on 64-bit Linux, Windows2, and 64-bit FreeBSD 11+. 
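For a quick feel for the **libvmem** API, here is a minimal sketch. It assumes a directory such as `/pmem` where the pool file can be created; the vmem_create(3) and vmem_malloc(3) man pages in this tree are the authoritative reference:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvmem.h>

int
main(void)
{
	/* create a volatile memory pool backed by a file in /pmem */
	VMEM *vmp = vmem_create("/pmem", VMEM_MIN_POOL);
	if (vmp == NULL) {
		perror("vmem_create");
		exit(1);
	}

	/* use the pool like a private, malloc-style heap */
	char *buf = vmem_malloc(vmp, 64);
	if (buf == NULL) {
		perror("vmem_malloc");
		exit(1);
	}
	strcpy(buf, "hello, libvmem");
	printf("%s\n", buf);

	vmem_free(vmp, buf);
	vmem_delete(vmp);
	return 0;
}
```

**libvmmalloc** takes a different approach: it is not called explicitly, but interposed at load time (typically via `LD_PRELOAD`), with the pool location and size taken from the `VMMALLOC_POOL_DIR` and `VMMALLOC_POOL_SIZE` environment variables; see the libvmmalloc(7) man page for details.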
For information on how these libraries are licensed, see our [LICENSE](LICENSE) file. >1 Not supported on Windows. > >2 VMEM for Windows is feature complete, but not yet considered production quality. ## Pre-Built Packages for Windows (Not yet available.) The recommended and easiest way to install VMEM on Windows ~~is~~will be to use Microsoft vcpkg. Vcpkg is an open source tool and ecosystem created for library management. To install the latest VMEM release and link it to your Visual Studio solution you first need to clone and set up vcpkg on your machine as described on the [vcpkg github page](https://github.com/Microsoft/vcpkg) in **Quick Start** section. In brief: ``` > git clone https://github.com/Microsoft/vcpkg > cd vcpkg > .\bootstrap-vcpkg.bat > .\vcpkg integrate install > .\vcpkg install vmem:x64-windows ``` The last command can take a while - it is VMEM building and installation time. After a successful completion of all of the above steps, the libraries are ready to be used in Visual Studio and no additional configuration is required. Just open VS with your already existing project or create a new one (remember to use platform **x64**) and then include headers to project as you always do. ## Dependencies Required packages for each supported OS are listed below. ### Linux You will need to install the following required packages on the build system: * **autoconf** * **pkg-config** ### Windows * **MS Visual Studio 2015** * [Windows SDK 10.0.16299.15](https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk) * **perl** (i.e. [StrawberryPerl](http://strawberryperl.com/)) * **PowerShell 5** ### FreeBSD * **autoconf** * **bash** * **binutils** * **coreutils** * **e2fsprogs-libuuid** * **gmake** * **libunwind** * **ncurses**4 * **pkgconf** >4 The pkg version of ncurses is required for proper operation; the base version included in FreeBSD is not sufficient. ## Building vmem on Linux or FreeBSD To build from source, clone this tree: ``` $ git clone https://github.com/pmem/vmem $ cd vmem ``` For a stable version, checkout a [release tag](https://github.com/pmem/vmem/releases) as follows. Otherwise skip this step to build the latest development release. ``` $ git checkout tags/1.7 ``` Once the build system is setup, vmem and vmmalloc are built using the `make` command at the top level: ``` $ make ``` For FreeBSD, use `gmake` rather than `make`. By default, all code is built with the `-Werror` flag, which fails the whole build when the compiler emits any warning. This is very useful during development, but can be annoying in deployment. If you want to **disable -Werror**, use the EXTRA_CFLAGS variable: ``` $ make EXTRA_CFLAGS="-Wno-error" ``` >or ``` $ make EXTRA_CFLAGS="-Wno-error=$(type-of-warning)" ``` ### Make Options There are many options that follow `make`. If you want to invoke make with the same variables multiple times, you can create a user.mk file in the top level directory and put all variables there. For example: ``` $ cat user.mk EXTRA_CFLAGS_RELEASE = -ggdb -fno-omit-frame-pointer PATH += :$HOME/valgrind/bin ``` This feature is intended to be used only by developers and it may not work for all variables. Please do not file bug reports about it. Just fix it and make a PR. **Built-in tests:** can be compiled and ran with different compiler. To do this, you must provide the `CC` and `CXX` variables. These variables are independent and setting `CC=clang` does not set `CXX=clang++`. 
For example: ``` $ make CC=clang CXX=clang++ ``` Once make completes, all the libraries and examples are built. You can play with the library within the build tree, or install it locally on your machine. For information about running different types of tests, please refer to the [src/test/README](src/test/README). **Installing the library** is convenient since it installs man pages and libraries in the standard system locations: ``` (as root...) # make install ``` To install this library into **other locations**, you can use the `prefix` variable, e.g.: ``` $ make install prefix=/usr/local ``` This will install files to /usr/local/lib, /usr/local/include /usr/local/share/man. **Prepare library for packaging** can be done using the DESTDIR variable, e.g.: ``` $ make install DESTDIR=/tmp ``` This will install files to /tmp/usr/lib, /tmp/usr/include /tmp/usr/share/man. **Man pages** (groff files) are generated as part of the `install` rule. To generate the documentation separately, run: ``` $ make doc ``` This call requires the following dependencies: **pandoc**. Pandoc is provided by the hs-pandoc package on FreeBSD. **Install copy of source tree** can be done by specifying the path where you want it installed. ``` $ make source DESTDIR=some_path ``` For this example, it will be installed at $(DESTDIR). **Build rpm packages** on rpm-based distributions is done by: ``` $ make rpm ``` To build rpm packages without running tests: ``` $ make BUILD_PACKAGE_CHECK=n rpm ``` This requires **rpmbuild** to be installed. **Build dpkg packages** on Debian-based distributions is done by: ``` $ make dpkg ``` To build dpkg packages without running tests: ``` $ make BUILD_PACKAGE_CHECK=n dpkg ``` This requires **devscripts** to be installed. ### Testing Libraries on Linux and FreeBSD Before running the tests, you may need to prepare a test configuration file (src/test/testconfig.sh). Please see the available configuration settings in the example file [src/test/testconfig.sh.example](src/test/testconfig.sh.example). To build and run the **unit tests**: ``` $ make check ``` To run a specific **subset of tests**, run for example: ``` $ make check TEST_BUILD=debug ``` To **modify the timeout** which is available for **check** type tests, run: ``` $ make check TEST_TIME=1m ``` This will set the timeout to 1 minute. Please refer to the **src/test/README** for more details on how to run different types of tests. ### Memory Management Tools The VMEM libraries support standard Valgrind DRD, Helgrind and Memcheck, as well as a PM-aware version of [Valgrind](https://github.com/pmem/valgrind) (not yet available for FreeBSD). By default, support for all tools is enabled. If you wish to disable it, supply the compiler with **VG_\_ENABLED** flag set to 0, for example: ``` $ make EXTRA_CFLAGS=-DVG_MEMCHECK_ENABLED=0 ``` **VALGRIND_ENABLED** flag, when set to 0, disables all Valgrind tools (drd, helgrind, memcheck and pmemcheck). The **SANITIZE** flag allows the libraries to be tested with various sanitizers. For example, to test the libraries with AddressSanitizer and UndefinedBehaviorSanitizer, run: ``` $ make SANITIZE=address,undefined clobber check ``` The address sanitizer is not supported for libvmmalloc on FreeBSD and will be ignored. ## Building vmem on Windows Clone the vmem tree and open the solution: ``` > git clone https://github.com/pmem/vmem > cd vmem/src > devenv VMEM.sln ``` Select the desired configuration (Debug or Release) and build the solution (i.e. by pressing Ctrl-Shift-B). 
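The solution can also be built from a command prompt with MSBuild (this is what the AppVeyor CI configuration in this tree does), for example from the top-level directory:

```
> msbuild src\VMEM.sln /property:Configuration=Release /m
```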
### Testing Libraries on Windows Before running the tests, you may need to prepare a test configuration file (src/test/testconfig.ps1). Please see the available configuration settings in the example file [src/test/testconfig.ps1.example](src/test/testconfig.ps1.example). To **run the unit tests**, open the PowerShell console and type: ``` > cd vmem/src/test > RUNTESTS.ps1 ``` To run a specific **subset of tests**, run for example: ``` > RUNTESTS.ps1 -b debug -t short ``` To run **just one test**, run for example: ``` > RUNTESTS.ps1 -b debug -i pmem_is_pmem ``` To **modify the timeout**, run: ``` > RUNTESTS.ps1 -o 3m ``` This will set the timeout to 3 minutes. To **display all the possible options**, run: ``` > RUNTESTS.ps1 -h ``` Please refer to the **[src/test/README](src/test/README)** for more details on how to run different types of tests. ### Experimental Support for 64-bit non-x86 architectures There's generally no architecture-specific parts anywhere in these libraries, but they have received no real testing outside of 64-bit x86. ## Contact Us For more information on this library, contact Marcin Slusarz (marcin.slusarz@intel.com), Andy Rudoff (andy.rudoff@intel.com), or post to our [Google group](http://groups.google.com/group/pmem). vmem-1.8/VERSION000066400000000000000000000000041361505074100133720ustar00rootroot000000000000001.8 vmem-1.8/appveyor.yml000066400000000000000000000030431361505074100147200ustar00rootroot00000000000000version: 1.4.{build} os: Visual Studio 2017 platform: x64 install: - ps: Install-PackageProvider -Name NuGet -Force - ps: Install-Module PsScriptAnalyzer -Force configuration: - Debug - Release environment: solutionname: VMEM.sln matrix: fast_finish: true before_build: - ps: >- if ($Env:CONFIGURATION -eq "Debug") { utils/CSTYLE.ps1 if ($LASTEXITCODE -ne 0) { exit 1 } utils/CHECK_WHITESPACE.ps1 if ($LASTEXITCODE -ne 0) { exit 1 } utils/ps_analyze.ps1 if ($LASTEXITCODE -ne 0) { exit 1 } perl utils/sort_solution check if ($LASTEXITCODE -ne 0) { exit 1 } ./utils/check_sdk_version.py -d . if ($LASTEXITCODE -ne 0) { exit 1 } } build_script: - ps: msbuild src\$Env:solutionname /property:Configuration=$Env:CONFIGURATION /m /v:m after_build: - ps: utils/CREATE-ZIP.ps1 -b $Env:CONFIGURATION test_script: - ps: >- if ($true) { cd src\test md C:\temp echo "`$Env:TEST_DIR = `"C:\temp`"" >> testconfig.ps1 echo "`$Env:VMEM_NO_ABORT_MSG = `"1`"" >> testconfig.ps1 echo "`$Env:TM = `"1`"" >> testconfig.ps1 if ($Env:CONFIGURATION -eq "Debug") { ./RUNTESTS.ps1 -b debug -o 4m } if ($Env:CONFIGURATION -eq "Release") { ./RUNTESTS.ps1 -b nondebug -o 4m } } artifacts: - path: 'src\x64\*.zip' name: VMEM - path: '*examples*.zip' name: VMEM_examples vmem-1.8/doc/000077500000000000000000000000001361505074100130755ustar00rootroot00000000000000vmem-1.8/doc/.gitignore000066400000000000000000000000621361505074100150630ustar00rootroot00000000000000*.txt *.html *.gz LICENSE web_linux/ web_windows/ vmem-1.8/doc/Makefile000066400000000000000000000124011361505074100145330ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # doc/Makefile -- Makefile for VMEM man pages # include ../src/common.inc MANPAGES_7_MD = libvmem/libvmem.7.md \ libvmmalloc/libvmmalloc.7.md MANPAGES_3_MD = libvmem/vmem_create.3.md libvmem/vmem_malloc.3.md MANPAGES_3_DUMMY = vmem_create_in_region.3 vmem_delete.3 vmem_check.3 vmem_stats_print.3 \ vmem_calloc.3 vmem_realloc.3 vmem_free.3 vmem_aligned_alloc.3 vmem_strdup.3 vmem_wcsdup.3 vmem_malloc_usable_size.3 \ vmem_check_version.3 vmem_errormsg.3 vmem_set_funcs.3 MANPAGES_BUILDDIR = generated MANPAGES_WEBDIR_LINUX = web_linux MANPAGES_WEBDIR_WINDOWS = web_windows # experimental MANPAGES_7_GROFF = $(MANPAGES_7_MD:.7.md=.7) MANPAGES_3_GROFF = $(MANPAGES_3_MD:.3.md=.3) MANPAGES_7_GROFF_EXP = $(MANPAGES_7_MD_EXP:.3.md=.7) MANPAGES_3_GROFF_EXP = $(MANPAGES_3_MD_EXP:.3.md=.3) ifeq ($(EXPERIMENTAL),y) $(MANPAGES_7_GROFF) += $(MANPAGES_7_GROFF_EXP) $(MANPAGES_3_GROFF) += $(MANPAGES_3_GROFF_EXP) else MANPAGES_7_NOINSTALL += $(MANPAGES_7_GROFF_EXP) MANPAGES_3_NOINSTALL += $(MANPAGES_3_GROFF_EXP) endif MANPAGES = $(MANPAGES_7_GROFF) $(MANPAGES_3_GROFF) \ $(MANPAGES_7_NOINSTALL) $(MANPAGES_3_NOINSTALL) MANPAGES_BUILD = $(addprefix $(MANPAGES_BUILDDIR)/, $(notdir $(MANPAGES))) HTMLFILES = $(MANPAGES_BUILD:=.html) TXTFILES = $(MANPAGES_BUILD:=.txt) GZFILES_7 = $(MANPAGES_7_GROFF:=.gz) GZFILES_3 = $(MANPAGES_3_GROFF:=.gz) GZFILES_7_NOINSTALL = $(MANPAGES_7_NOINSTALL:=.gz) GZFILES_3_NOINSTALL = $(MANPAGES_3_NOINSTALL:=.gz) GZFILES_3_DUMMY = $(MANPAGES_3_DUMMY:=.gz) GZFILES = $(GZFILES_7) $(GZFILES_3) \ $(GZFILES_7_NOINSTALL) $(GZFILES_3_NOINSTALL) \ $(GZFILES_3_DUMMY) GZFILES_BUILD = $(addprefix $(MANPAGES_BUILDDIR)/, $(notdir $(GZFILES))) GZFILES_7_BUILD = $(addprefix $(MANPAGES_BUILDDIR)/, $(notdir $(GZFILES_7))) GZFILES_3_BUILD = $(addprefix $(MANPAGES_BUILDDIR)/, $(notdir $(GZFILES_3))) GZFILES_3_BUILD += $(addprefix $(MANPAGES_BUILDDIR)/, $(GZFILES_3_DUMMY)) MANPAGES_DESTDIR_7 = $(DESTDIR)$(man7dir) MANPAGES_DESTDIR_3 = $(DESTDIR)$(man3dir) DOCS_DESTDIR = $(DESTDIR)$(docdir) all: md2man $(TXTFILES) | $(MANPAGES_BUILDDIR) $(MANPAGES_BUILDDIR) $(MANPAGES_WEBDIR_LINUX) $(MANPAGES_WEBDIR_WINDOWS): $(MKDIR) -p $@ %.txt: % man ./$< > $@ groff: $(MANPAGES_7_GROFF) $(MANPAGES_3_GROFF) html: $(HTMLFILES) %.html: % groff -mandoc -Thtml ./$< > $@ md2man: $(foreach f, $(MANPAGES_7_MD), ../utils/md2man.sh $(f) default.man 
$(MANPAGES_BUILDDIR)/$(basename $(notdir $(f)));) $(foreach f, $(MANPAGES_3_MD), ../utils/md2man.sh $(f) default.man $(MANPAGES_BUILDDIR)/$(basename $(notdir $(f)));) web: | $(MANPAGES_WEBDIR_LINUX) $(MANPAGES_WEBDIR_WINDOWS) $(MAKE) -C generated all $(foreach f, $(MANPAGES_7_MD), WEB=1 WIN32="" ../utils/md2man.sh $(f) default.man $(MANPAGES_WEBDIR_LINUX)/$(f);) $(foreach f, $(MANPAGES_3_MD), WEB=1 WIN32="" ../utils/md2man.sh $(f) default.man $(MANPAGES_WEBDIR_LINUX)/$(f);) $(foreach f, $(MANPAGES_7_MD), WEB=1 WIN32=1 ../utils/md2man.sh $(f) default.man $(MANPAGES_WEBDIR_WINDOWS)/$(f);) $(foreach f, $(MANPAGES_3_MD), WEB=1 WIN32=1 ../utils/md2man.sh $(f) default.man $(MANPAGES_WEBDIR_WINDOWS)/$(f);) compress: $(GZFILES_BUILD) %.gz: gzip -c ./$* > $@ clean: clobber: clean $(RM) -rf $(MANPAGES_BUILDDIR)/*.txt \ $(MANPAGES_BUILDDIR)/*.html \ $(MANPAGES_BUILDDIR)/*.gz \ $(MANPAGES_WEBDIR_LINUX) \ $(MANPAGES_WEBDIR_WINDOWS) install: compress install -d -v $(MANPAGES_DESTDIR_7) install -p -m 0644 $(GZFILES_7_BUILD) $(MANPAGES_DESTDIR_7) install -d -v $(MANPAGES_DESTDIR_3) install -p -m 0644 $(GZFILES_3_BUILD) $(MANPAGES_DESTDIR_3) uninstall: $(foreach f, $(notdir $(GZFILES_7_BUILD)), $(RM) $(MANPAGES_DESTDIR_7)/$(f)) $(foreach f, $(notdir $(GZFILES_3_BUILD)), $(RM) $(MANPAGES_DESTDIR_3)/$(f)) FORCE: .PHONY: all html clean compress clobber cstyle install uninstall vmem-1.8/doc/README000066400000000000000000000035511361505074100137610ustar00rootroot00000000000000This is doc/README. Subdirectories of this directory contain source for the man pages for vmem and vmmalloc in markdown format (.md files). To generate web-based documentation or Linux/FreeBSD man pages, you need to have groff and pandoc installed. m4 macros are used in the document sources to generate OS-specific variants of man pages and web-based documentation. The macros are defined in macros.man. Processing is performed by the ../utils/md2man.sh script. All files in the *generated* directory are automatically generated and updated by the pmdk-bot. **DO NOT MODIFY THE FILES IN THAT DIRECTORY**. All changes to the documentation must be made by modifying the *.md files in the following document subdirectories: libvmem -- volatile memory allocation library libvmmalloc -- general purpose volatile memory allocation library These man pages provide the API specification for the corresponding libraries and commands in this source tree, so any updates to one should be tested, reviewed, and committed with changes to the other. To create more readable text files from the source, use: $ [g]make NOTE: This will write man page output into the *generated* subdirectory. Files in this directory MUST NOT be included in any pull requests. The man(1) command may be used to format generated man pages for viewing in a terminal window (includes bold, underline, etc.), for example: $ man generated/libpmem.7 $ man generated/pmemobj_create.3 In addition, for testing purposes ../utils/md2man.sh will generate a preprocessed markdown file with the headers stripped off if the TESTOPTS variable is set. For example: $ export TESTOPTS="-DWIN32 -UFREEBSD -UWEB" $ ../utils/md2man.sh libpmemobj/libpmemobj.7.md x libpmemobj.7.win.md will generate a version of the libpmemobj.7 man page for Windows in markdown format. The resulting file may be viewed with a markdown-enabled browser. 
vmem-1.8/doc/default.man000066400000000000000000000041611361505074100152200ustar00rootroot00000000000000$if(has-tables)$ .\"t $endif$ $if(pandoc-version)$ .\" Automatically generated by Pandoc $pandoc-version$ .\" $endif$ $if(adjusting)$ .ad $adjusting$ $endif$ .TH "$title$" "$section$" "$date$" "VMEM - $version$" "VMEM Programmer's Manual" $if(hyphenate)$ .hy $else$ .nh \" Turn off hyphenation by default. $endif$ $for(header-includes)$ $header-includes$ $endfor$ .\" Copyright 2014-$year$, Intel Corporation .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" .\" * Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" .\" * Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in .\" the documentation and/or other materials provided with the .\" distribution. .\" .\" * Neither the name of the copyright holder nor the names of its .\" contributors may be used to endorse or promote products derived .\" from this software without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS .\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT .\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR .\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT .\" OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, .\" SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT .\" LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE .\" OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. $for(include-before)$ $include-before$ $endfor$ $body$ $for(include-after)$ $include-after$ $endfor$ $if(author)$ .SH AUTHORS $for(author)$$author$$sep$; $endfor$. $endif$ vmem-1.8/doc/generated/000077500000000000000000000000001361505074100150335ustar00rootroot00000000000000vmem-1.8/doc/generated/.gitignore000066400000000000000000000000061361505074100170170ustar00rootroot00000000000000*.yml vmem-1.8/doc/generated/Makefile000066400000000000000000000032451361505074100164770ustar00rootroot00000000000000# # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # generated/Makefile -- Makefile for generate pmem.io aliases map # all: ../../utils/get_aliases.sh clean: $(RM) -rf libs_map.yml vmem-1.8/doc/generated/libvmem.7000066400000000000000000000302531361505074100165610ustar00rootroot00000000000000.\" Automatically generated by Pandoc 2.0.6 .\" .TH "LIBVMEM" "7" "2020-01-27" "VMEM - vmem API version 1.1" "VMEM Programmer's Manual" .hy .\" Copyright 2014-2020, Intel Corporation .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" .\" * Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" .\" * Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in .\" the documentation and/or other materials provided with the .\" distribution. .\" .\" * Neither the name of the copyright holder nor the names of its .\" contributors may be used to endorse or promote products derived .\" from this software without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS .\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT .\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR .\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT .\" OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, .\" SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT .\" LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE .\" OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .SH NAME .PP \f[B]libvmem\f[] \- volatile memory allocation library .SH SYNOPSIS .IP .nf \f[C] #include\ cc\ ...\ \-lvmem \f[] .fi .SS Managing overall library behavior: .IP .nf \f[C] const\ char\ *vmem_check_version( \ \ \ \ unsigned\ major_required, \ \ \ \ unsigned\ minor_required); void\ vmem_set_funcs( \ \ \ \ void\ *(*malloc_func)(size_t\ size), \ \ \ \ void\ (*free_func)(void\ *ptr), \ \ \ \ void\ *(*realloc_func)(void\ *ptr,\ size_t\ size), \ \ \ \ char\ *(*strdup_func)(const\ char\ *s), \ \ \ \ void\ (*print_func)(const\ char\ *s)); \f[] .fi .SS Error handling: .IP .nf \f[C] const\ char\ *vmem_errormsg(void); \f[] .fi .SS Other library functions: .PP A description of other \f[B]libvmem\f[] functions can be found on the following manual pages: .IP \[bu] 2 memory pool management: \f[B]vmem_create\f[](3) .IP \[bu] 2 memory allocation related functions: \f[B]vmem_malloc\f[](3) .SH DESCRIPTION .PP \f[B]libvmem\f[] provides common \f[I]malloc\f[]\-like interfaces to memory pools built on memory\-mapped files. 
These interfaces are for traditional \f[B]volatile\f[] memory allocation but, unlike the functions described in \f[B]malloc\f[](3), the memory managed by \f[B]libvmem\f[] may have different attributes, depending on the file system containing the memory\-mapped files. .PP It is recommended that new code uses \f[B]memkind\f[](3) instead of \f[B]libvmem\f[], as this library is no longer actively developed and lacks certain features of \f[B]memkind\f[] such as NUMA awareness. Nevertheless, it is mature, and is expected to be maintained for foreseable future. .PP \f[B]libvmem\f[] uses the \f[B]mmap\f[](2) system call to create a pool of volatile memory. The library is most useful when used with \f[I]Direct Access\f[] storage (DAX), which is memory\-addressable persistent storage that supports load/store access without being paged via the system page cache. A Persistent Memory\-aware file system is typically used to provide this type of access. Memory\-mapping a file from a Persistent Memory\-aware file system provides the raw memory pools, and this library supplies the more familiar \f[I]malloc\f[]\-like interfaces on top of those pools. .PP Under normal usage, \f[B]libvmem\f[] will never print messages or intentionally cause the process to exit. Exceptions to this are prints caused by calls to \f[B]vmem_stats_print\f[](3), or by enabling debugging as described under \f[B]DEBUGGING AND ERROR HANDLING\f[] below. The library uses \f[B]pthreads\f[] to be fully MT\-safe, but never creates or destroys threads itself. The library does not make use of any signals, networking, and never calls \f[B]select\f[](2) or \f[B]poll\f[](2). The system memory allocation routines like \f[B]malloc\f[](3) and \f[B]free\f[](3) are used by \f[B]libvmem\f[] for managing a small amount of run\-time state, but applications are allowed to override these calls if necessary (see the description of \f[B]vmem_set_funcs\f[]() below). .PP \f[B]libvmem\f[] interfaces are grouped into three categories: those that manage memory pools, those providing the basic memory allocation functions, and those interfaces less commonly used for managing the overall library behavior. .SH MANAGING LIBRARY BEHAVIOR .PP The \f[B]vmem_check_version\f[]() function is used to see if the installed \f[B]libvmem\f[] supports the version of the library API required by an application. The easiest way to do this is for the application to supply the compile\-time version information, supplied by defines in \f[B]\f[], like this: .IP .nf \f[C] reason\ =\ vmem_check_version(VMEM_MAJOR_VERSION, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ VMEM_MINOR_VERSION); if\ (reason\ !=\ NULL)\ { \ \ \ \ /*\ version\ check\ failed,\ reason\ string\ tells\ you\ why\ */ } \f[] .fi .PP Any mismatch in the major version number is considered a failure, but a library with a newer minor version number will pass this check since increasing minor versions imply backwards compatibility. .PP An application can also check specifically for the existence of an interface by checking for the version where that interface was introduced. These versions are documented in this man page as follows: unless otherwise specified, all interfaces described here are available in version 1.0 of the library. Interfaces added after version 1.0 will contain the text \f[I]introduced in version x.y\f[] in the section of this manual describing the feature. .PP When the version check is successful, \f[B]vmem_check_version\f[]() returns NULL. 
Otherwise, \f[B]vmem_check_version\f[]() returns a static string describing the reason for failing the version check. The returned string must not be modified or freed. .PP The \f[B]vmem_set_funcs\f[]() function allows an application to override some interfaces used internally by \f[B]libvmem\f[]. Passing NULL for any of the handlers will cause the \f[B]libvmem\f[] default function to be used. The only functions in the malloc family used by the library are represented by the first four arguments to \f[B]vmem_set_funcs\f[](). While the library does not make heavy use of the system malloc functions, it does allocate approximately 4\-8 kilobytes for each memory pool in use. The \f[I]print_func\f[] function is called by \f[B]libvmem\f[] when the \f[B]vmem_stats_print\f[]() entry point is used, or when additional tracing is enabled in the debug version of the library as described in \f[B]DEBUGGING AND ERROR HANDLING\f[], below. The default \f[I]print_func\f[] used by the library prints to the file specified by the \f[B]VMEM_LOG_FILE\f[] environment variable, or to \f[I]stderr\f[] if that variable is not set. .SH CAVEATS .PP \f[B]libvmem\f[] relies on the library destructor being called from the main thread. For this reason, all functions that might trigger destruction (e.g. \f[B]dlclose\f[](3)) should be called in the main thread. Otherwise some of the resources associated with that thread might not be cleaned up properly. .SH DEBUGGING AND ERROR HANDLING .PP If an error is detected during the call to a \f[B]libvmem\f[] function, the application may retrieve an error message describing the reason for the failure from \f[B]vmem_errormsg\f[](). This function returns a pointer to a static buffer containing the last error message logged for the current thread. If \f[I]errno\f[] was set, the error message may include a description of the corresponding error code as returned by \f[B]strerror\f[](3). The error message buffer is thread\-local; errors encountered in one thread do not affect its value in other threads. The buffer is never cleared by any library function; its content is significant only when the return value of the immediately preceding call to a \f[B]libvmem\f[] function indicated an error, or if \f[I]errno\f[] was set. The application must not modify or free the error message string, but it may be modified by subsequent calls to other library functions. .PP Two versions of \f[B]libvmem\f[] are typically available on a development system. The normal version is optimized for performance. That version skips checks that impact performance and never logs any trace information or performs any run\-time assertions. A second version, accessed when using libraries from \f[B]/usr/lib/vmem_debug\f[], contains run\-time assertions and trace points. The typical way to access the debug version is to set the \f[B]LD_LIBRARY_PATH\f[] environment variable to \f[B]/usr/lib/vmem_debug\f[] or \f[B]/usr/lib64/vmem_debug\f[], as appropriate. Debugging output is controlled using the following environment variables. These variables have no effect on the non\-debug version of the library. .IP \[bu] 2 \f[B]VMEM_LOG_LEVEL\f[] .PP The value of \f[B]VMEM_LOG_LEVEL\f[] enables trace points in the debug version of the library, as follows: .IP \[bu] 2 \f[B]0\f[] \- Tracing is disabled. This is the default level when \f[B]VMEM_LOG_LEVEL\f[] is not set. Only statistics are logged, and then only in response to a call to \f[B]vmem_stats_print\f[](). 
.IP \[bu] 2 \f[B]1\f[] \- Additional details on any errors detected are logged, in addition to returning the \f[I]errno\f[]\-based errors as usual. .IP \[bu] 2 \f[B]2\f[] \- A trace of basic operations is logged. .IP \[bu] 2 \f[B]3\f[] \- Enables a very verbose amount of function call tracing in the library. .IP \[bu] 2 \f[B]4\f[] \- Enables voluminous tracing information about all memory allocations and deallocations. .PP Unless \f[B]VMEM_LOG_FILE\f[] is set, debugging output is written to \f[I]stderr\f[]. .IP \[bu] 2 \f[B]VMEM_LOG_FILE\f[] .PP Specifies the name of a file where all logging information should be written. If the last character in the name is \[lq]\-\[rq], the \f[I]PID\f[] of the current process will be appended to the file name when the log file is created. If \f[B]VMEM_LOG_FILE\f[] is not set, output is written to \f[I]stderr\f[]. .SH EXAMPLE .PP The following example creates a memory pool, allocates some memory to contain the string \[lq]hello, world\[rq], and then frees that memory. .IP .nf \f[C] #include\ #include\ #include\ #include\ int main(int\ argc,\ char\ *argv[]) { \ \ \ \ VMEM\ *vmp; \ \ \ \ char\ *ptr; \ \ \ \ /*\ create\ minimum\ size\ pool\ of\ memory\ */ \ \ \ \ if\ ((vmp\ =\ vmem_create("/pmem\-fs", \ \ \ \ \ \ \ \ \ \ \ \ VMEM_MIN_POOL))\ ==\ NULL)\ { \ \ \ \ \ \ \ \ perror("vmem_create"); \ \ \ \ \ \ \ \ exit(1); \ \ \ \ } \ \ \ \ if\ ((ptr\ =\ vmem_malloc(vmp,\ 100))\ ==\ NULL)\ { \ \ \ \ \ \ \ \ perror("vmem_malloc"); \ \ \ \ \ \ \ \ exit(1); \ \ \ \ } \ \ \ \ strcpy(ptr,\ "hello,\ world"); \ \ \ \ /*\ give\ the\ memory\ back\ */ \ \ \ \ vmem_free(vmp,\ ptr); \ \ \ \ /*\ ...\ */ \ \ \ \ vmem_delete(vmp); } \f[] .fi .PP See for more examples using the \f[B]libvmem\f[] API. .SH BUGS .PP Unlike the normal \f[B]malloc\f[](3), which asks the system for additional memory when it runs out, \f[B]libvmem\f[] allocates the size it is told to and never attempts to grow or shrink that memory pool. .SH ACKNOWLEDGEMENTS .PP \f[B]libvmem\f[] depends on jemalloc, written by Jason Evans, to do the heavy lifting of managing dynamic memory allocation. See: .PP \f[B]libvmem\f[] builds on the persistent memory programming model recommended by the SNIA NVM Programming Technical Work Group: .SH SEE ALSO .PP \f[B]mmap\f[](2), \f[B]dlclose\f[](3), \f[B]malloc\f[](3), \f[B]strerror\f[](3), \f[B]vmem_create\f[](3), \f[B]vmem_malloc\f[](3), and \f[B]\f[] .PP On Linux: .PP \f[B]jemalloc\f[](3), \f[B]pthreads\f[](7) .PP On FreeBSD: .PP \f[B]pthread\f[](3) vmem-1.8/doc/generated/libvmmalloc.7000066400000000000000000000301461361505074100174300ustar00rootroot00000000000000.\" Automatically generated by Pandoc 2.0.6 .\" .TH "LIBVMMALLOC" "7" "2020-01-27" "VMEM - vmmalloc API version 1.1" "VMEM Programmer's Manual" .hy .\" Copyright 2014-2020, Intel Corporation .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" .\" * Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" .\" * Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in .\" the documentation and/or other materials provided with the .\" distribution. .\" .\" * Neither the name of the copyright holder nor the names of its .\" contributors may be used to endorse or promote products derived .\" from this software without specific prior written permission. 
.\" .\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS .\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT .\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR .\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT .\" OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, .\" SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT .\" LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE .\" OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .SH NAME .PP \f[B]libvmmalloc\f[] \- general purpose volatile memory allocation library .SH SYNOPSIS .IP .nf \f[C] $\ LD_PRELOAD=libvmmalloc.so.1\ command\ [\ args...\ ] \f[] .fi .PP or .IP .nf \f[C] #include\ #ifndef\ __FreeBSD__ \ \ \ \ #include\ #else \ \ \ \ #include\ #endif #include\ cc\ [\ flag...\ ]\ file...\ \-lvmmalloc\ [\ library...\ ] \f[] .fi .IP .nf \f[C] void\ *malloc(size_t\ size); void\ free(void\ *ptr); void\ *calloc(size_t\ number,\ size_t\ size); void\ *realloc(void\ *ptr,\ size_t\ size); int\ posix_memalign(void\ **memptr,\ size_t\ alignment,\ size_t\ size); void\ *aligned_alloc(size_t\ alignment,\ size_t\ size); void\ *memalign(size_t\ alignment,\ size_t\ size); void\ *valloc(size_t\ size); void\ *pvalloc(size_t\ size); size_t\ malloc_usable_size(const\ void\ *ptr); void\ cfree(void\ *ptr); \f[] .fi .SH DESCRIPTION .PP \f[B]libvmmalloc\f[] transparently converts all dynamic memory allocations into Persistent Memory allocations. .PP The typical usage of \f[B]libvmmalloc\f[] does not require any modification of the target program. It is enough to load \f[B]libvmmalloc\f[] before all other libraries by setting the environment variable \f[B]LD_PRELOAD\f[]. When used in that way, \f[B]libvmmalloc\f[] interposes the standard system memory allocation routines, as defined in \f[B]malloc\f[](3), \f[B]posix_memalign\f[](3) and \f[B]malloc_usable_size\f[](3), and provides that all dynamic memory allocations are made from a \f[I]memory pool\f[] built on a memory\-mapped file, instead of the system heap. The memory managed by \f[B]libvmmalloc\f[] may have different attributes, depending on the file system containing the memory\-mapped file. .PP This library is no longer actively developed, and is in maintenance mode, same as its underlying code backend (\f[B]libvmem\f[]). It is mature, and is expected to be supported for foreseable future. .PP \f[B]libvmmalloc\f[] may be also linked to the program, by providing the **\-lvmmalloc* argument to the linker. Then it becomes the default memory allocator for the program. .RS .PP NOTE: Due to the fact the library operates on a memory\-mapped file, \f[B]it may not work properly with programs that perform fork(2) not followed by exec(3).\f[] There are two variants of experimental \f[B]fork\f[](2) support available in libvmmalloc. The desired library behavior may be selected by setting the \f[B]VMMALLOC_FORK\f[] environment variable. By default variant #1 is enabled. See \f[B]ENVIRONMENT\f[] for more details. .RE .PP \f[B]libvmmalloc\f[] uses the \f[B]mmap\f[](2) system call to create a pool of volatile memory. 
The library is most useful when used with \f[I]Direct Access\f[] storage (DAX), which is memory\-addressable persistent storage that supports load/store access without being paged via the system page cache. A Persistent Memory\-aware file system is typically used to provide this type of access. Memory\-mapping a file from a Persistent Memory\-aware file system provides the raw memory pools, and this library supplies the traditional \f[I]malloc\f[] interfaces on top of those pools. .PP The memory pool acting as a system heap replacement is created automatically at library initialization time. The user may control its location and size by setting the environment variables described in \f[B]ENVIRONMENT\f[], below. The allocated file space is reclaimed when the process terminates or in case of system crash. .PP Under normal usage, \f[B]libvmmalloc\f[] will never print messages or intentionally cause the process to exit. The library uses \f[B]pthreads\f[](7) to be fully MT\-safe, but never creates or destroys threads itself. The library does not make use of any signals, networking, and never calls \f[B]select\f[](2) or \f[B]poll\f[](2). .SH ENVIRONMENT .PP The \f[B]VMMALLOC_POOL_DIR\f[] and \f[B]VMMALLOC_POOL_SIZE\f[] environment variables \f[B]must\f[] be set for \f[B]libvmmalloc\f[] to work properly. If either of them is not specified, or if their values are not valid, the library prints an appropriate error message and terminates the process. Any other environment variables are optional. .IP \[bu] 2 \f[B]VMMALLOC_POOL_DIR\f[]=\f[I]path\f[] .PP Specifies a path to the directory where the memory pool file should be created. The directory must exist and be writable. .IP \[bu] 2 \f[B]VMMALLOC_POOL_SIZE\f[]=\f[I]len\f[] .PP Defines the desired size (in bytes) of the memory pool file. It must be not less than the minimum allowed size \f[B]VMMALLOC_MIN_POOL\f[] as defined in \f[B]\f[]. .RS .PP NOTE: Due to the fact the library adds some metadata to the memory pool, the amount of actual usable space is typically less than the size of the memory pool file. .RE .IP \[bu] 2 \f[B]VMMALLOC_FORK\f[]=\f[I]val\f[] (EXPERIMENTAL) .PP \f[B]VMMALLOC_FORK\f[] controls the behavior of \f[B]libvmmalloc\f[] in case of \f[B]fork\f[](3), and can be set to the following values: .IP \[bu] 2 \f[B]0\f[] \- \f[B]fork\f[](2) support is disabled. The behavior of \f[B]fork\f[](2) is undefined in this case, but most likely results in memory pool corruption and a program crash due to segmentation fault. .IP \[bu] 2 \f[B]1\f[] \- The memory pool file is remapped with the \f[B]MAP_PRIVATE\f[] flag before the fork completes. From this moment, any access to memory that modifies the heap pages, both in the parent and in the child process, will trigger creation of a copy of those pages in RAM (copy\-on\-write). The benefit of this approach is that it does not significantly increase the time of the initial fork operation, and does not require additional space on the file system. However, all subsequent memory allocations, and modifications of any memory allocated before fork, will consume system memory resources instead of the memory pool. .PP This is the default option if \f[B]VMMALLOC_FORK\f[] is not set. .IP \[bu] 2 \f[B]2\f[] \- A copy of the entire memory pool file is created for the use of the child process. This requires additional space on the file system, but both the parent and the child process may still operate on their memory pools, not consuming system memory resources. 
.RS .PP NOTE: In case of large memory pools, creating a copy of the pool file may stall the fork operation for a quite long time. .RE .IP \[bu] 2 \f[B]3\f[] \- The library first attempts to create a copy of the memory pool (as for option #2), but if it fails (i.e.\ because of insufficient free space on the file system), it will fall back to option #1. .RS .PP NOTE: Options \f[B]2\f[] and \f[B]3\f[] are not currently supported on FreeBSD. .RE .PP Environment variables used for debugging are described in \f[B]DEBUGGING\f[], below. .SH CAVEATS .PP \f[B]libvmmalloc\f[] relies on the library destructor being called from the main thread. For this reason, all functions that might trigger destruction (e.g. \f[B]dlclose\f[](3)) should be called in the main thread. Otherwise some of the resources associated with that thread might not be cleaned up properly. .SH DEBUGGING .PP Two versions of \f[B]libvmmalloc\f[] are typically available on a development system. The normal version is optimized for performance. That version skips checks that impact performance and never logs any trace information or performs any run\-time assertions. A second version, accessed when using libraries from \f[B]/usr/lib/vmem_debug\f[], contains run\-time assertions and trace points. The typical way to access the debug version is to set the \f[B]LD_LIBRARY_PATH\f[] environment variable to \f[B]/usr/lib/vmem_debug\f[] or \f[B]/usr/lib64/vmem_debug\f[], as appropriate. Debugging output is controlled using the following environment variables. These variables have no effect on the non\-debug version of the library. .IP \[bu] 2 \f[B]VMMALLOC_LOG_LEVEL\f[] .PP The value of \f[B]VMMALLOC_LOG_LEVEL\f[] enables trace points in the debug version of the library, as follows: .IP \[bu] 2 \f[B]0\f[] \- Tracing is disabled. This is the default level when \f[B]VMMALLOC_LOG_LEVEL\f[] is not set. .IP \[bu] 2 \f[B]1\f[] \- Additional details on any errors detected are logged, in addition to returning the \f[I]errno\f[]\-based errors as usual. .IP \[bu] 2 \f[B]2\f[] \- A trace of basic operations is logged. .IP \[bu] 2 \f[B]3\f[] \- Enables a very verbose amount of function call tracing in the library. .IP \[bu] 2 \f[B]4\f[] \- Enables voluminous tracing information about all memory allocations and deallocations. .PP Unless \f[B]VMMALLOC_LOG_FILE\f[] is set, debugging output is written to \f[I]stderr\f[]. .IP \[bu] 2 \f[B]VMMALLOC_LOG_FILE\f[] .PP Specifies the name of a file where all logging information should be written. If the last character in the name is \[lq]\-\[rq], the \f[I]PID\f[] of the current process will be appended to the file name when the log file is created. If \f[B]VMMALLOC_LOG_FILE\f[] is not set, output is written to \f[I]stderr\f[]. .IP \[bu] 2 \f[B]VMMALLOC_LOG_STATS\f[] .PP Setting \f[B]VMMALLOC_LOG_STATS\f[] to 1 enables logging human\-readable summary statistics at program termination. .SH NOTES .PP Unlike the normal \f[B]malloc\f[](3), which asks the system for additional memory when it runs out, \f[B]libvmmalloc\f[] allocates the size it is told to and never attempts to grow or shrink that memory pool. .SH BUGS .PP \f[B]libvmmalloc\f[] may not work properly with programs that perform \f[B]fork\f[](2) and do not call \f[B]exec\f[](3) immediately afterwards. See \f[B]ENVIRONMENT\f[] for more details about experimental \f[B]fork\f[](2) support. 
.PP If logging is enabled in the debug version of the library and the process performs \f[B]fork\f[](2), no new log file is created for the child process, even if the configured log file name ends with \[lq]\-\[rq]. All logging information from the child process will be written to the log file owned by the parent process, which may lead to corruption or partial loss of log data. .PP Malloc hooks (see \f[B]malloc_hook\f[](3)), are not supported when using \f[B]libvmmalloc\f[]. .SH ACKNOWLEDGEMENTS .PP \f[B]libvmmalloc\f[] depends on jemalloc, written by Jason Evans, to do the heavy lifting of managing dynamic memory allocation. See: .SH SEE ALSO .PP \f[B]fork\f[](2), \f[B]dlclose(3)\f[], \f[B]exec\f[](3), \f[B]malloc\f[](3), \f[B]malloc_usable_size\f[](3), \f[B]posix_memalign\f[](3), \f[B]libpmem\f[](7), \f[B]libvmem\f[](7) and \f[B]\f[] .PP On Linux: .PP \f[B]jemalloc\f[](3), \f[B]malloc_hook\f[](3), \f[B]pthreads\f[](7), \f[B]ld.so\f[](8) .PP On FreeBSD: .PP \f[B]ld.so\f[](1), \f[B]pthread\f[](3) vmem-1.8/doc/generated/pmemobj_f_mem_nodrain.3000066400000000000000000000000351361505074100214230ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_f_mem_noflush.3000066400000000000000000000000351361505074100214470ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_f_mem_nontemporal.3000066400000000000000000000000351361505074100223270ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_f_mem_temporal.3000066400000000000000000000000351361505074100216140ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_f_mem_wb.3000066400000000000000000000000351361505074100204010ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_f_mem_wc.3000066400000000000000000000000351361505074100204020ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_f_relaxed.3000066400000000000000000000000351361505074100205570ustar00rootroot00000000000000.so pmemobj_memcpy_persist.3 vmem-1.8/doc/generated/pmemobj_tx_log_append_buffer.3000066400000000000000000000000271361505074100230030ustar00rootroot00000000000000.so pmemobj_tx_begin.3 vmem-1.8/doc/generated/pmemobj_tx_log_auto_alloc.3000066400000000000000000000000271361505074100223250ustar00rootroot00000000000000.so pmemobj_tx_begin.3 vmem-1.8/doc/generated/pmemobj_tx_log_intents_max_size.3000066400000000000000000000000271361505074100235660ustar00rootroot00000000000000.so pmemobj_tx_begin.3 vmem-1.8/doc/generated/pmemobj_tx_log_snapshots_max_size.3000066400000000000000000000000271361505074100241240ustar00rootroot00000000000000.so pmemobj_tx_begin.3 vmem-1.8/doc/generated/rpmem_drain.3000066400000000000000000000000241361505074100174100ustar00rootroot00000000000000.so rpmem_persist.3 vmem-1.8/doc/generated/rpmem_flush.3000066400000000000000000000000241361505074100174340ustar00rootroot00000000000000.so rpmem_persist.3 vmem-1.8/doc/generated/vmem_aligned_alloc.3000066400000000000000000000000221361505074100207120ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/generated/vmem_calloc.3000066400000000000000000000000221361505074100173720ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/generated/vmem_check.3000066400000000000000000000000221361505074100172120ustar00rootroot00000000000000.so vmem_create.3 vmem-1.8/doc/generated/vmem_check_version.3000066400000000000000000000000231361505074100207600ustar00rootroot00000000000000.so man7/libvmem.7 
vmem-1.8/doc/generated/vmem_create.3000066400000000000000000000172301361505074100174110ustar00rootroot00000000000000.\" Automatically generated by Pandoc 2.0.6 .\" .TH "VMEM_CREATE" "3" "2020-01-27" "VMEM - vmem API version 1.1" "VMEM Programmer's Manual" .hy .\" Copyright 2014-2020, Intel Corporation .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" .\" * Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" .\" * Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in .\" the documentation and/or other materials provided with the .\" distribution. .\" .\" * Neither the name of the copyright holder nor the names of its .\" contributors may be used to endorse or promote products derived .\" from this software without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS .\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT .\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR .\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT .\" OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, .\" SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT .\" LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE .\" OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. .SH NAME .PP \f[B]vmem_create\f[](), \f[B]vmem_create_in_region\f[](), \f[B]vmem_delete\f[](), \f[B]vmem_check\f[](), \f[B]vmem_stats_print\f[]() \- volatile memory pool management .SH SYNOPSIS .IP .nf \f[C] #include\ VMEM\ *vmem_create(const\ char\ *dir,\ size_t\ size); VMEM\ *vmem_create_in_region(void\ *addr,\ size_t\ size); void\ vmem_delete(VMEM\ *vmp); int\ vmem_check(VMEM\ *vmp); void\ vmem_stats_print(VMEM\ *vmp,\ const\ char\ *opts); \f[] .fi .SH DESCRIPTION .PP To use \f[B]libvmem\f[], a \f[I]memory pool\f[] is first created. This is most commonly done with the \f[B]vmem_create\f[]() function described below. The other \f[B]libvmem\f[] functions are for less common cases, where applications have special needs for creating pools or examining library state. .PP The \f[B]vmem_create\f[]() function creates a memory pool and returns an opaque memory pool handle of type \f[I]VMEM*\f[]. The handle is then used with \f[B]libvmem\f[] functions such as \f[B]vmem_malloc\f[]() and \f[B]vmem_free\f[]() to provide the familiar \f[I]malloc\f[]\-like programming model for the memory pool. .PP The pool is created by allocating a temporary file in the directory \f[I]dir\f[], in a fashion similar to \f[B]tmpfile\f[](3), so that the file name does not appear when the directory is listed, and the space is automatically freed when the program terminates. \f[I]size\f[] bytes are allocated and the resulting space is memory\-mapped. The minimum \f[I]size\f[] value allowed by the library is defined in \f[B]\f[] as \f[B]VMEM_MIN_POOL\f[]. The maximum allowed size is not limited by \f[B]libvmem\f[], but by the file system on which \f[I]dir\f[] resides. 
The \f[I]size\f[] passed in is the raw size of the memory pool. \f[B]libvmem\f[] will use some of that space for its own metadata, so the usable space will be less. .PP \f[B]vmem_create\f[]() can also be called with the \f[B]dir\f[] argument pointing to a device DAX. In that case the entire device will serve as a volatile pool. Device DAX is the device\-centric analogue of Filesystem DAX. It allows memory ranges to be allocated and mapped without need of an intervening file system. For more information please see \f[B]ndctl\-create\-namespace\f[](1). .PP \f[B]vmem_create_in_region\f[]() is an alternate \f[B]libvmem\f[] entry point for creating a memory pool. It is for the rare case where an application needs to create a memory pool from an already memory\-mapped region. Instead of allocating space from a file system, \f[B]vmem_create_in_region\f[]() is given the memory region explicitly via the \f[I]addr\f[] and \f[I]size\f[] arguments. Any data in the region is lost by calling \f[B]vmem_create_in_region\f[](), which will immediately store its own data structures for managing the pool there. As with \f[B]vmem_create\f[](), the minimum \f[I]size\f[] allowed is defined as \f[B]VMEM_MIN_POOL\f[]. The \f[I]addr\f[] argument must be page aligned. Undefined behavior occurs if \f[I]addr\f[] does not point to a contiguous memory region in the virtual address space of the calling process, or if the \f[I]size\f[] is larger than the actual size of the memory region pointed to by \f[I]addr\f[]. .PP The \f[B]vmem_delete\f[]() function releases the memory pool \f[I]vmp\f[]. If the memory pool was created using \f[B]vmem_create\f[](), deleting it allows the space to be reclaimed. .PP The \f[B]vmem_check\f[]() function performs an extensive consistency check of all \f[B]libvmem\f[] internal data structures in memory pool \f[I]vmp\f[]. Since an error return indicates memory pool corruption, applications should not continue to use a pool in this state. Additional details about errors found are logged when the log level is at least 1 (see \f[B]DEBUGGING AND ERROR HANDLING\f[] in \f[B]libvmem\f[](7)). During the consistency check performed by \f[B]vmem_check\f[](), other operations on the same memory pool are locked out. The checks are all read\-only; \f[B]vmem_check\f[]() never modifies the memory pool. This function is mostly useful for \f[B]libvmem\f[] developers during testing/debugging. .PP The \f[B]vmem_stats_print\f[]() function produces messages containing statistics about the given memory pool. Output is sent to \f[I]stderr\f[] unless the user sets the environment variable \f[B]VMEM_LOG_FILE\f[], or the application supplies a replacement \f[I]print_func\f[] (see \f[B]MANAGING LIBRARY BEHAVIOR\f[] in \f[B]libvmem\f[](7)). The \f[I]opts\f[] string can either be NULL or it can contain a list of options that change the statistics printed. General information that never changes during execution can be omitted by specifying \[lq]g\[rq] as a character within the opts string. The characters \[lq]m\[rq] and \[lq]a\[rq] can be specified to omit merged arena and per arena statistics, respectively; \[lq]b\[rq] and \[lq]l\[rq] can be specified to omit per size class statistics for bins and large objects, respectively. Unrecognized characters are silently ignored. Note that thread caching may prevent some statistics from being completely up to date. See \f[B]jemalloc\f[](3) for more detail (the description of the available \f[I]opts\f[] above was taken from that man page). 
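.PP
A minimal usage sketch of the pool management interfaces described above
(the directory \f[I]/pmem\-fs\f[] is only an example of a writable,
DAX\-mounted directory; any directory accepted by \f[B]vmem_create\f[]()
will do):
.IP
.nf
\f[C]
VMEM *vmp = vmem_create("/pmem\-fs", VMEM_MIN_POOL);
if (vmp == NULL) {
    perror("vmem_create");
    exit(1);
}

/* ... allocate and free objects with vmem_malloc(3)/vmem_free(3) ... */

if (vmem_check(vmp) != 1)
    fprintf(stderr, "memory pool is inconsistent\n");

vmem_delete(vmp);
\f[]
.fi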
.SH RETURN VALUE .PP On success, \f[B]vmem_create\f[]() returns an opaque memory pool handle of type \f[I]VMEM*\f[]. On error, it returns NULL and sets \f[I]errno\f[] appropriately. .PP On success, \f[B]vmem_create_in_region\f[]() returns an opaque memory pool handle of type \f[I]VMEM*\f[]. On error, it returns NULL and sets \f[I]errno\f[] appropriately. .PP The \f[B]vmem_delete\f[]() function returns no value. .PP The \f[B]vmem_check\f[]() function returns 1 if the memory pool is found to be consistent, and 0 if the check was performed but the memory pool is not consistent. If the check could not be performed, \f[B]vmem_check\f[]() returns \-1. .PP The \f[B]vmem_stats_print\f[]() function returns no value. .SH SEE ALSO .PP \f[B]ndctl\-create\-namespace\f[](1), \f[B]jemalloc\f[](3), \f[B]tmpfile\f[](3), \f[B]libvmem\f[](7) and \f[B]\f[] vmem-1.8/doc/generated/vmem_create_in_region.3000066400000000000000000000000221361505074100214310ustar00rootroot00000000000000.so vmem_create.3 vmem-1.8/doc/generated/vmem_delete.3000066400000000000000000000000221361505074100173770ustar00rootroot00000000000000.so vmem_create.3 vmem-1.8/doc/generated/vmem_errormsg.3000066400000000000000000000000231361505074100177760ustar00rootroot00000000000000.so man7/libvmem.7 vmem-1.8/doc/generated/vmem_free.3000066400000000000000000000000221361505074100170560ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/generated/vmem_malloc.3000066400000000000000000000203451361505074100174160ustar00rootroot00000000000000.\" Automatically generated by Pandoc 2.0.6 .\" .TH "VMEM_MALLOC" "3" "2020-01-27" "VMEM - vmem API version 1.1" "VMEM Programmer's Manual" .hy .\" Copyright 2014-2020, Intel Corporation .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" .\" * Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" .\" * Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in .\" the documentation and/or other materials provided with the .\" distribution. .\" .\" * Neither the name of the copyright holder nor the names of its .\" contributors may be used to endorse or promote products derived .\" from this software without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS .\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT .\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR .\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT .\" OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, .\" SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT .\" LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, .\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY .\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT .\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE .\" OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
.SH NAME .PP \f[B]vmem_malloc\f[](), \f[B]vmem_calloc\f[](), \f[B]vmem_realloc\f[](), \f[B]vmem_free\f[](), \f[B]vmem_aligned_alloc\f[](), \f[B]vmem_strdup\f[](), \f[B]vmem_wcsdup\f[](), \f[B]vmem_malloc_usable_size\f[]() \- memory allocation related functions .SH SYNOPSIS .IP .nf \f[C] #include\ void\ *vmem_malloc(VMEM\ *vmp,\ size_t\ size); void\ vmem_free(VMEM\ *vmp,\ void\ *ptr); void\ *vmem_calloc(VMEM\ *vmp,\ size_t\ nmemb,\ size_t\ size); void\ *vmem_realloc(VMEM\ *vmp,\ void\ *ptr,\ size_t\ size); void\ *vmem_aligned_alloc(VMEM\ *vmp,\ size_t\ alignment,\ size_t\ size); char\ *vmem_strdup(VMEM\ *vmp,\ const\ char\ *s); wchar_t\ *vmem_wcsdup(VMEM\ *vmp,\ const\ wchar_t\ *s); size_t\ vmem_malloc_usable_size(VMEM\ *vmp,\ void\ *ptr); \f[] .fi .SH DESCRIPTION .PP This section describes the \f[I]malloc\f[]\-like API provided by \f[B]libvmem\f[](7). These functions provide the same semantics as their libc namesakes, but operate on the memory pools specified by their first arguments. .PP The \f[B]vmem_malloc\f[]() function provides the same semantics as \f[B]malloc\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. It allocates specified \f[I]size\f[] bytes. .PP The \f[B]vmem_free\f[]() function provides the same semantics as \f[B]free\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. It frees the memory space pointed to by \f[I]ptr\f[], which must have been returned by a previous call to \f[B]vmem_malloc\f[](), \f[B]vmem_calloc\f[]() or \f[B]vmem_realloc\f[]() for \f[I]the same pool of memory\f[]. If \f[I]ptr\f[] is NULL, no operation is performed. .PP The \f[B]vmem_calloc\f[]() function provides the same semantics as \f[B]calloc\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. It allocates memory for an array of \f[I]nmemb\f[] elements of \f[I]size\f[] bytes each. The memory is set to zero. .PP The \f[B]vmem_realloc\f[]() function provides the same semantics as \f[B]realloc\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. It changes the size of the memory block pointed to by \f[I]ptr\f[] to \f[I]size\f[] bytes. The contents will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. If the new size is larger than the old size, the added memory will \f[I]not\f[] be initialized. .PP Unless \f[I]ptr\f[] is NULL, it must have been returned by an earlier call to \f[B]vmem_malloc\f[](), \f[B]vmem_calloc\f[]() or \f[B]vmem_realloc\f[](). If \f[I]ptr\f[] is NULL, then the call is equivalent to \f[I]vmem_malloc(vmp, size)\f[], for all values of \f[I]size\f[]; if \f[I]size\f[] is equal to zero, and \f[I]ptr\f[] is not NULL, then the call is equivalent to \f[I]vmem_free(vmp, ptr)\f[]. .PP The \f[B]vmem_aligned_alloc\f[]() function provides the same semantics as \f[B]aligned_alloc\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. It allocates \f[I]size\f[] bytes from the memory pool. The memory address will be a multiple of \f[I]alignment\f[], which must be a power of two. .PP The \f[B]vmem_strdup\f[]() function provides the same semantics as \f[B]strdup\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. 
Memory for the new string is obtained with \f[B]vmem_malloc\f[](), on the given memory pool, and can be freed with \f[B]vmem_free\f[]() on the same memory pool. .PP The \f[B]vmem_wcsdup\f[]() function provides the same semantics as \f[B]wcsdup\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. Memory for the new string is obtained with \f[B]vmem_malloc\f[](), on the given memory pool, and can be freed with \f[B]vmem_free\f[]() on the same memory pool. .PP The \f[B]vmem_malloc_usable_size\f[]() function provides the same semantics as \f[B]malloc_usable_size\f[](3), but operates on the memory pool \f[I]vmp\f[] instead of the process heap supplied by the system. .SH RETURN VALUE .PP On success, \f[B]vmem_malloc\f[]() returns a pointer to the allocated memory. If \f[I]size\f[] is 0, then \f[B]vmem_malloc\f[]() returns either NULL, or a unique pointer value that can later be successfully passed to \f[B]vmem_free\f[](). If \f[B]vmem_malloc\f[]() is unable to satisfy the allocation request, it returns NULL and sets \f[I]errno\f[] appropriately. .PP The \f[B]vmem_free\f[]() function returns no value. Undefined behavior occurs if frees do not correspond to allocated memory from the same memory pool. .PP On success, \f[B]vmem_calloc\f[]() returns a pointer to the allocated memory. If \f[I]nmemb\f[] or \f[I]size\f[] is 0, then \f[B]vmem_calloc\f[]() returns either NULL, or a unique pointer value that can later be successfully passed to \f[B]vmem_free\f[](). If \f[B]vmem_calloc\f[]() is unable to satisfy the allocation request, it returns NULL and sets \f[I]errno\f[] appropriately. .PP On success, \f[B]vmem_realloc\f[]() returns a pointer to the allocated memory, which may be different from \f[I]ptr\f[]. If the area pointed to was moved, a \f[I]vmem_free(vmp, ptr)\f[] is done. If \f[B]vmem_realloc\f[]() is unable to satisfy the allocation request, it returns NULL and sets \f[I]errno\f[] appropriately. .PP On success, \f[B]vmem_aligned_alloc\f[]() returns a pointer to the allocated memory. If \f[B]vmem_aligned_alloc\f[]() is unable to satisfy the allocation request, it returns NULL and sets \f[I]errno\f[] appropriately. .PP On success, \f[B]vmem_strdup\f[]() returns a pointer to a new string which is a duplicate of the string \f[I]s\f[]. If \f[B]vmem_strdup\f[]() is unable to satisfy the allocation request, it returns NULL and sets \f[I]errno\f[] appropriately. .PP On success, \f[B]vmem_wcsdup\f[]() returns a pointer to a new wide character string which is a duplicate of the wide character string \f[I]s\f[]. If \f[B]vmem_wcsdup\f[]() is unable to satisfy the allocation request, it returns NULL and sets \f[I]errno\f[] appropriately. .PP The \f[B]vmem_malloc_usable_size\f[]() function returns the number of usable bytes in the block of allocated memory pointed to by \f[I]ptr\f[], a pointer to a block of memory allocated by \f[B]vmem_malloc\f[]() or a related function. If \f[I]ptr\f[] is NULL, 0 is returned. 
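.SH EXAMPLE
.PP
A brief sketch of the allocation interfaces described above, operating on a
memory pool \f[I]vmp\f[] previously obtained from \f[B]vmem_create\f[](3)
(error handling is shortened for clarity):
.IP
.nf
\f[C]
char *buf = vmem_malloc(vmp, 100);
if (buf == NULL) {
    perror("vmem_malloc");
    exit(1);
}

strcpy(buf, "hello, world");

/* grow the allocation; keep the old pointer if realloc fails */
char *newbuf = vmem_realloc(vmp, buf, 200);
if (newbuf != NULL)
    buf = newbuf;

vmem_free(vmp, buf);
\f[]
.fi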
.SH SEE ALSO .PP \f[B]calloc\f[](3), \f[B]free\f[](3), \f[B]malloc\f[](3), \f[B]malloc_usable_size\f[](3), \f[B]realloc\f[](3), \f[B]strdup\f[](3), \f[B]wcsdup\f[](3) \f[B]libvmem(7)\f[] and \f[B]\f[] vmem-1.8/doc/generated/vmem_malloc_usable_size.3000066400000000000000000000000221361505074100217710ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/generated/vmem_realloc.3000066400000000000000000000000221361505074100175560ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/generated/vmem_set_funcs.3000066400000000000000000000000231361505074100201270ustar00rootroot00000000000000.so man7/libvmem.7 vmem-1.8/doc/generated/vmem_stats_print.3000066400000000000000000000000221361505074100205070ustar00rootroot00000000000000.so vmem_create.3 vmem-1.8/doc/generated/vmem_strdup.3000066400000000000000000000000221361505074100174560ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/generated/vmem_wcsdup.3000066400000000000000000000000221361505074100174420ustar00rootroot00000000000000.so vmem_malloc.3 vmem-1.8/doc/libvmem/000077500000000000000000000000001361505074100145305ustar00rootroot00000000000000vmem-1.8/doc/libvmem/libvmem.7.md000066400000000000000000000274171361505074100166650ustar00rootroot00000000000000--- layout: manual Content-Style: 'text/css' title: _MP(LIBVMEM, 7) collection: libvmem header: VMEM date: vmem API version 1.1 ... [comment]: <> (Copyright 2016-2019, Intel Corporation) [comment]: <> (Redistribution and use in source and binary forms, with or without) [comment]: <> (modification, are permitted provided that the following conditions) [comment]: <> (are met:) [comment]: <> ( * Redistributions of source code must retain the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer.) [comment]: <> ( * Redistributions in binary form must reproduce the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer in) [comment]: <> ( the documentation and/or other materials provided with the) [comment]: <> ( distribution.) [comment]: <> ( * Neither the name of the copyright holder nor the names of its) [comment]: <> ( contributors may be used to endorse or promote products derived) [comment]: <> ( from this software without specific prior written permission.) [comment]: <> (THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS) [comment]: <> ("AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR) [comment]: <> (A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT) [comment]: <> (OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,) [comment]: <> (SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,) [comment]: <> (DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY) [comment]: <> (THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT) [comment]: <> ((INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE) [comment]: <> (OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.) [comment]: <> (libvmem.7 -- man page for libvmem) [NAME](#name)
[SYNOPSIS](#synopsis)
[DESCRIPTION](#description)
[MANAGING LIBRARY BEHAVIOR](#managing-library-behavior)
[DEBUGGING AND ERROR HANDLING](#debugging-and-error-handling)
[EXAMPLE](#example)
[BUGS](#bugs)
[ACKNOWLEDGEMENTS](#acknowledgements)
[SEE ALSO](#see-also)
# NAME # **libvmem** - volatile memory allocation library # SYNOPSIS # ```c #include cc ... -lvmem ``` _UNICODE() ##### Managing overall library behavior: ##### ```c _UWFUNC(vmem_check_version, =q= unsigned major_required, unsigned minor_required=e=) void vmem_set_funcs( void *(*malloc_func)(size_t size), void (*free_func)(void *ptr), void *(*realloc_func)(void *ptr, size_t size), char *(*strdup_func)(const char *s), void (*print_func)(const char *s)); ``` ##### Error handling: ##### ```c _UWFUNC(vmem_errormsg, void) ``` ##### Other library functions: ##### A description of other **libvmem** functions can be found on the following manual pages: + memory pool management: **vmem_create**(3) + memory allocation related functions: **vmem_malloc**(3) # DESCRIPTION # **libvmem** provides common *malloc*-like interfaces to memory pools built on memory-mapped files. These interfaces are for traditional **volatile** memory allocation but, unlike the functions described in **malloc**(3), the memory managed by **libvmem** may have different attributes, depending on the file system containing the memory-mapped files. It is recommended that new code uses **memkind**(3) instead of **libvmem**, as this library is no longer actively developed and lacks certain features of **memkind** such as NUMA awareness. Nevertheless, it is mature, and is expected to be maintained for foreseable future. **libvmem** uses the **mmap**(2) system call to create a pool of volatile memory. The library is most useful when used with *Direct Access* storage (DAX), which is memory-addressable persistent storage that supports load/store access without being paged via the system page cache. A Persistent Memory-aware file system is typically used to provide this type of access. Memory-mapping a file from a Persistent Memory-aware file system provides the raw memory pools, and this library supplies the more familiar *malloc*-like interfaces on top of those pools. Under normal usage, **libvmem** will never print messages or intentionally cause the process to exit. Exceptions to this are prints caused by calls to **vmem_stats_print**(3), or by enabling debugging as described under **DEBUGGING AND ERROR HANDLING** below. The library uses **pthreads** to be fully MT-safe, but never creates or destroys threads itself. The library does not make use of any signals, networking, and never calls **select**(2) or **poll**(2). The system memory allocation routines like **malloc**(3) and **free**(3) are used by **libvmem** for managing a small amount of run-time state, but applications are allowed to override these calls if necessary (see the description of **vmem_set_funcs**() below). **libvmem** interfaces are grouped into three categories: those that manage memory pools, those providing the basic memory allocation functions, and those interfaces less commonly used for managing the overall library behavior. # MANAGING LIBRARY BEHAVIOR # The _UW(vmem_check_version) function is used to see if the installed **libvmem** supports the version of the library API required by an application. 
The easiest way to do this is for the application to supply the compile-time version information, supplied by defines in **\**, like this: ```c reason = _U(vmem_check_version)(VMEM_MAJOR_VERSION, VMEM_MINOR_VERSION); if (reason != NULL) { /* version check failed, reason string tells you why */ } ``` Any mismatch in the major version number is considered a failure, but a library with a newer minor version number will pass this check since increasing minor versions imply backwards compatibility. An application can also check specifically for the existence of an interface by checking for the version where that interface was introduced. These versions are documented in this man page as follows: unless otherwise specified, all interfaces described here are available in version 1.0 of the library. Interfaces added after version 1.0 will contain the text *introduced in version x.y* in the section of this manual describing the feature. When the version check is successful, _UW(vmem_check_version) returns NULL. Otherwise, _UW(vmem_check_version) returns a static string describing the reason for failing the version check. The returned string must not be modified or freed. The **vmem_set_funcs**() function allows an application to override some interfaces used internally by **libvmem**. Passing NULL for any of the handlers will cause the **libvmem** default function to be used. The only functions in the malloc family used by the library are represented by the first four arguments to **vmem_set_funcs**(). While the library does not make heavy use of the system malloc functions, it does allocate approximately 4-8 kilobytes for each memory pool in use. The *print_func* function is called by **libvmem** when the **vmem_stats_print**() entry point is used, or when additional tracing is enabled in the debug version of the library as described in **DEBUGGING AND ERROR HANDLING**, below. The default *print_func* used by the library prints to the file specified by the **VMEM_LOG_FILE** environment variable, or to *stderr* if that variable is not set. # CAVEATS # **libvmem** relies on the library destructor being called from the main thread. For this reason, all functions that might trigger destruction (e.g. **dlclose**(3)) should be called in the main thread. Otherwise some of the resources associated with that thread might not be cleaned up properly. # DEBUGGING AND ERROR HANDLING # If an error is detected during the call to a **libvmem** function, the application may retrieve an error message describing the reason for the failure from _UW(vmem_errormsg). This function returns a pointer to a static buffer containing the last error message logged for the current thread. If *errno* was set, the error message may include a description of the corresponding error code as returned by **strerror**(3). The error message buffer is thread-local; errors encountered in one thread do not affect its value in other threads. The buffer is never cleared by any library function; its content is significant only when the return value of the immediately preceding call to a **libvmem** function indicated an error, or if *errno* was set. The application must not modify or free the error message string, but it may be modified by subsequent calls to other library functions. Two versions of **libvmem** are typically available on a development system. The normal version is optimized for performance. That version skips checks that impact performance and never logs any trace information or performs any run-time assertions. 
A second version, accessed when using libraries from _DEBUGLIBPATH(), contains run-time assertions and trace points. The typical way to access the debug version is to set the **LD_LIBRARY_PATH** environment variable to _LDLIBPATH(). Debugging output is controlled using the following environment variables. These variables have no effect on the non-debug version of the library. + **VMEM_LOG_LEVEL** The value of **VMEM_LOG_LEVEL** enables trace points in the debug version of the library, as follows: + **0** - Tracing is disabled. This is the default level when **VMEM_LOG_LEVEL** is not set. Only statistics are logged, and then only in response to a call to **vmem_stats_print**(). + **1** - Additional details on any errors detected are logged, in addition to returning the *errno*-based errors as usual. + **2** - A trace of basic operations is logged. + **3** - Enables a very verbose amount of function call tracing in the library. + **4** - Enables voluminous tracing information about all memory allocations and deallocations. Unless **VMEM_LOG_FILE** is set, debugging output is written to *stderr*. + **VMEM_LOG_FILE** Specifies the name of a file where all logging information should be written. If the last character in the name is "-", the *PID* of the current process will be appended to the file name when the log file is created. If **VMEM_LOG_FILE** is not set, output is written to *stderr*. # EXAMPLE # The following example creates a memory pool, allocates some memory to contain the string "hello, world", and then frees that memory. ```c #include #include #include #include int main(int argc, char *argv[]) { VMEM *vmp; char *ptr; /* create minimum size pool of memory */ if ((vmp = _U(vmem_create)("/pmem-fs", VMEM_MIN_POOL)) == NULL) { perror("_U(vmem_create)"); exit(1); } if ((ptr = vmem_malloc(vmp, 100)) == NULL) { perror("vmem_malloc"); exit(1); } strcpy(ptr, "hello, world"); /* give the memory back */ vmem_free(vmp, ptr); /* ... */ vmem_delete(vmp); } ``` See for more examples using the **libvmem** API. # BUGS # Unlike the normal **malloc**(3), which asks the system for additional memory when it runs out, **libvmem** allocates the size it is told to and never attempts to grow or shrink that memory pool. # ACKNOWLEDGEMENTS # **libvmem** depends on jemalloc, written by Jason Evans, to do the heavy lifting of managing dynamic memory allocation. See: **libvmem** builds on the persistent memory programming model recommended by the SNIA NVM Programming Technical Work Group: # SEE ALSO # **mmap**(2), **dlclose**(3), **malloc**(3), **strerror**(3), **vmem_create**(3), **vmem_malloc**(3), and **** On Linux: **jemalloc**(3), **pthreads**(7) On FreeBSD: **pthread**(3) vmem-1.8/doc/libvmem/vmem_create.3.md000066400000000000000000000171611361505074100175100ustar00rootroot00000000000000--- layout: manual Content-Style: 'text/css' title: _MP(VMEM_CREATE, 3) collection: libvmem header: VMEM date: vmem API version 1.1 ... [comment]: <> (Copyright 2017-2018, Intel Corporation) [comment]: <> (Redistribution and use in source and binary forms, with or without) [comment]: <> (modification, are permitted provided that the following conditions) [comment]: <> (are met:) [comment]: <> ( * Redistributions of source code must retain the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer.) 
[comment]: <> ( * Redistributions in binary form must reproduce the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer in) [comment]: <> ( the documentation and/or other materials provided with the) [comment]: <> ( distribution.) [comment]: <> ( * Neither the name of the copyright holder nor the names of its) [comment]: <> ( contributors may be used to endorse or promote products derived) [comment]: <> ( from this software without specific prior written permission.) [comment]: <> (THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS) [comment]: <> ("AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR) [comment]: <> (A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT) [comment]: <> (OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,) [comment]: <> (SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,) [comment]: <> (DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY) [comment]: <> (THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT) [comment]: <> ((INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE) [comment]: <> (OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.) [comment]: <> (vmem_create.3 -- man page for volatile memory pool management functions) [NAME](#name)
[SYNOPSIS](#synopsis)
[DESCRIPTION](#description)
[RETURN VALUE](#return-value)
[SEE ALSO](#see-also)
# NAME # _UW(vmem_create), **vmem_create_in_region**(), **vmem_delete**(), **vmem_check**(), **vmem_stats_print**() - volatile memory pool management # SYNOPSIS # ```c #include _UWFUNCR1(VMEM, *vmem_create, *dir, size_t size) VMEM *vmem_create_in_region(void *addr, size_t size); void vmem_delete(VMEM *vmp); int vmem_check(VMEM *vmp); void vmem_stats_print(VMEM *vmp, const char *opts); ``` _UNICODE() # DESCRIPTION # To use **libvmem**, a *memory pool* is first created. This is most commonly done with the _UW(vmem_create) function described below. The other **libvmem** functions are for less common cases, where applications have special needs for creating pools or examining library state. The _UW(vmem_create) function creates a memory pool and returns an opaque memory pool handle of type *VMEM\**. The handle is then used with **libvmem** functions such as **vmem_malloc**() and **vmem_free**() to provide the familiar *malloc*-like programming model for the memory pool. The pool is created by allocating a temporary file in the directory *dir*, in a fashion similar to **tmpfile**(3), so that the file name does not appear when the directory is listed, and the space is automatically freed when the program terminates. *size* bytes are allocated and the resulting space is memory-mapped. The minimum *size* value allowed by the library is defined in **\** as **VMEM_MIN_POOL**. The maximum allowed size is not limited by **libvmem**, but by the file system on which *dir* resides. The *size* passed in is the raw size of the memory pool. **libvmem** will use some of that space for its own metadata, so the usable space will be less. _UW(vmem_create) can also be called with the **dir** argument pointing to a device DAX. In that case the entire device will serve as a volatile pool. Device DAX is the device-centric analogue of Filesystem DAX. It allows memory ranges to be allocated and mapped without need of an intervening file system. For more information please see **ndctl-create-namespace**(1). **vmem_create_in_region**() is an alternate **libvmem** entry point for creating a memory pool. It is for the rare case where an application needs to create a memory pool from an already memory-mapped region. Instead of allocating space from a file system, **vmem_create_in_region**() is given the memory region explicitly via the *addr* and *size* arguments. Any data in the region is lost by calling **vmem_create_in_region**(), which will immediately store its own data structures for managing the pool there. As with _UW(vmem_create), the minimum *size* allowed is defined as **VMEM_MIN_POOL**. The *addr* argument must be page aligned. Undefined behavior occurs if *addr* does not point to a contiguous memory region in the virtual address space of the calling process, or if the *size* is larger than the actual size of the memory region pointed to by *addr*. The **vmem_delete**() function releases the memory pool *vmp*. If the memory pool was created using _UW(vmem_create), deleting it allows the space to be reclaimed. The **vmem_check**() function performs an extensive consistency check of all **libvmem** internal data structures in memory pool *vmp*. Since an error return indicates memory pool corruption, applications should not continue to use a pool in this state. Additional details about errors found are logged when the log level is at least 1 (see **DEBUGGING AND ERROR HANDLING** in **libvmem**(7)). During the consistency check performed by **vmem_check**(), other operations on the same memory pool are locked out. 
The checks are all read-only; **vmem_check**() never modifies the memory pool. This function is mostly useful for **libvmem** developers during testing/debugging. The **vmem_stats_print**() function produces messages containing statistics about the given memory pool. Output is sent to *stderr* unless the user sets the environment variable **VMEM_LOG_FILE**, or the application supplies a replacement *print_func* (see **MANAGING LIBRARY BEHAVIOR** in **libvmem**(7)). The *opts* string can either be NULL or it can contain a list of options that change the statistics printed. General information that never changes during execution can be omitted by specifying "g" as a character within the opts string. The characters "m" and "a" can be specified to omit merged arena and per arena statistics, respectively; "b" and "l" can be specified to omit per size class statistics for bins and large objects, respectively. Unrecognized characters are silently ignored. Note that thread caching may prevent some statistics from being completely up to date. See **jemalloc**(3) for more detail (the description of the available *opts* above was taken from that man page). # RETURN VALUE # On success, _UW(vmem_create) returns an opaque memory pool handle of type *VMEM\**. On error, it returns NULL and sets *errno* appropriately. On success, **vmem_create_in_region**() returns an opaque memory pool handle of type *VMEM\**. On error, it returns NULL and sets *errno* appropriately. The **vmem_delete**() function returns no value. The **vmem_check**() function returns 1 if the memory pool is found to be consistent, and 0 if the check was performed but the memory pool is not consistent. If the check could not be performed, **vmem_check**() returns -1. The **vmem_stats_print**() function returns no value. # SEE ALSO # **ndctl-create-namespace**(1), **jemalloc**(3), **tmpfile**(3), **libvmem**(7) and **** vmem-1.8/doc/libvmem/vmem_malloc.3.md000066400000000000000000000200071361505074100175050ustar00rootroot00000000000000--- layout: manual Content-Style: 'text/css' title: _MP(VMEM_MALLOC, 3) collection: libvmem header: VMEM date: vmem API version 1.1 ... [comment]: <> (Copyright 2017-2018, Intel Corporation) [comment]: <> (Redistribution and use in source and binary forms, with or without) [comment]: <> (modification, are permitted provided that the following conditions) [comment]: <> (are met:) [comment]: <> ( * Redistributions of source code must retain the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer.) [comment]: <> ( * Redistributions in binary form must reproduce the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer in) [comment]: <> ( the documentation and/or other materials provided with the) [comment]: <> ( distribution.) [comment]: <> ( * Neither the name of the copyright holder nor the names of its) [comment]: <> ( contributors may be used to endorse or promote products derived) [comment]: <> ( from this software without specific prior written permission.) [comment]: <> (THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS) [comment]: <> ("AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR) [comment]: <> (A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT) [comment]: <> (OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,) [comment]: <> (SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,) [comment]: <> (DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY) [comment]: <> (THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT) [comment]: <> ((INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE) [comment]: <> (OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.) [comment]: <> (vmem_malloc.3 -- man page for memory allocation related functions) [NAME](#name)
[SYNOPSIS](#synopsis)
[DESCRIPTION](#description)
[RETURN VALUE](#return-value)
[SEE ALSO](#see-also)
# NAME # **vmem_malloc**(), **vmem_calloc**(), **vmem_realloc**(), **vmem_free**(), **vmem_aligned_alloc**(), **vmem_strdup**(), **vmem_wcsdup**(), **vmem_malloc_usable_size**() - memory allocation related functions # SYNOPSIS # ```c #include void *vmem_malloc(VMEM *vmp, size_t size); void vmem_free(VMEM *vmp, void *ptr); void *vmem_calloc(VMEM *vmp, size_t nmemb, size_t size); void *vmem_realloc(VMEM *vmp, void *ptr, size_t size); void *vmem_aligned_alloc(VMEM *vmp, size_t alignment, size_t size); char *vmem_strdup(VMEM *vmp, const char *s); wchar_t *vmem_wcsdup(VMEM *vmp, const wchar_t *s); size_t vmem_malloc_usable_size(VMEM *vmp, void *ptr); ``` # DESCRIPTION # This section describes the *malloc*-like API provided by **libvmem**(7). These functions provide the same semantics as their libc namesakes, but operate on the memory pools specified by their first arguments. The **vmem_malloc**() function provides the same semantics as **malloc**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. It allocates specified *size* bytes. The **vmem_free**() function provides the same semantics as **free**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. It frees the memory space pointed to by *ptr*, which must have been returned by a previous call to **vmem_malloc**(), **vmem_calloc**() or **vmem_realloc**() for *the same pool of memory*. If *ptr* is NULL, no operation is performed. The **vmem_calloc**() function provides the same semantics as **calloc**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. It allocates memory for an array of *nmemb* elements of *size* bytes each. The memory is set to zero. The **vmem_realloc**() function provides the same semantics as **realloc**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. It changes the size of the memory block pointed to by *ptr* to *size* bytes. The contents will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. If the new size is larger than the old size, the added memory will *not* be initialized. Unless *ptr* is NULL, it must have been returned by an earlier call to **vmem_malloc**(), **vmem_calloc**() or **vmem_realloc**(). If *ptr* is NULL, then the call is equivalent to *vmem_malloc(vmp, size)*, for all values of *size*; if *size* is equal to zero, and *ptr* is not NULL, then the call is equivalent to *vmem_free(vmp, ptr)*. The **vmem_aligned_alloc**() function provides the same semantics as **aligned_alloc**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. It allocates *size* bytes from the memory pool. The memory address will be a multiple of *alignment*, which must be a power of two. The **vmem_strdup**() function provides the same semantics as **strdup**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. Memory for the new string is obtained with **vmem_malloc**(), on the given memory pool, and can be freed with **vmem_free**() on the same memory pool. The **vmem_wcsdup**() function provides the same semantics as **wcsdup**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. Memory for the new string is obtained with **vmem_malloc**(), on the given memory pool, and can be freed with **vmem_free**() on the same memory pool. 
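As a brief, informal illustration of the functions described above (a minimal sketch rather than a complete application), the following code allocates, resizes and duplicates memory from a single pool. It assumes a pool created with _UW(vmem_create) as described in **vmem_create**(3); the directory */pmem-fs* is only an example of a Persistent Memory-aware file system mount point, and error handling is reduced to exiting:

```c
#include <stdio.h>
#include <stdlib.h>
#include <libvmem.h>

int
main(int argc, char *argv[])
{
	VMEM *vmp;
	int *data;
	char *name;

	/* illustration only: create a minimum size pool in /pmem-fs */
	if ((vmp = _U(vmem_create)("/pmem-fs", VMEM_MIN_POOL)) == NULL) {
		perror("_U(vmem_create)");
		exit(1);
	}

	/* allocate an array from the pool, then grow it */
	if ((data = vmem_malloc(vmp, 100 * sizeof(*data))) == NULL) {
		perror("vmem_malloc");
		exit(1);
	}
	if ((data = vmem_realloc(vmp, data, 200 * sizeof(*data))) == NULL) {
		perror("vmem_realloc");
		exit(1);
	}

	/* duplicate a string into the same pool */
	if ((name = vmem_strdup(vmp, "hello, world")) == NULL) {
		perror("vmem_strdup");
		exit(1);
	}

	/* memory must be returned to the pool it was allocated from */
	vmem_free(vmp, name);
	vmem_free(vmp, data);

	vmem_delete(vmp);
	return 0;
}
```

Note that, as described under **RETURN VALUE** below, each of these calls reports failure by returning NULL and setting *errno*, so *errno*-based error reporting such as **perror**(3) can be used unchanged.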
The **vmem_malloc_usable_size**() function provides the same semantics as **malloc_usable_size**(3), but operates on the memory pool *vmp* instead of the process heap supplied by the system. # RETURN VALUE # On success, **vmem_malloc**() returns a pointer to the allocated memory. If *size* is 0, then **vmem_malloc**() returns either NULL, or a unique pointer value that can later be successfully passed to **vmem_free**(). If **vmem_malloc**() is unable to satisfy the allocation request, it returns NULL and sets *errno* appropriately. The **vmem_free**() function returns no value. Undefined behavior occurs if frees do not correspond to allocated memory from the same memory pool. On success, **vmem_calloc**() returns a pointer to the allocated memory. If *nmemb* or *size* is 0, then **vmem_calloc**() returns either NULL, or a unique pointer value that can later be successfully passed to **vmem_free**(). If **vmem_calloc**() is unable to satisfy the allocation request, it returns NULL and sets *errno* appropriately. On success, **vmem_realloc**() returns a pointer to the allocated memory, which may be different from *ptr*. If the area pointed to was moved, a *vmem_free(vmp, ptr)* is done. If **vmem_realloc**() is unable to satisfy the allocation request, it returns NULL and sets *errno* appropriately. On success, **vmem_aligned_alloc**() returns a pointer to the allocated memory. If **vmem_aligned_alloc**() is unable to satisfy the allocation request, it returns NULL and sets *errno* appropriately. On success, **vmem_strdup**() returns a pointer to a new string which is a duplicate of the string *s*. If **vmem_strdup**() is unable to satisfy the allocation request, it returns NULL and sets *errno* appropriately. On success, **vmem_wcsdup**() returns a pointer to a new wide character string which is a duplicate of the wide character string *s*. If **vmem_wcsdup**() is unable to satisfy the allocation request, it returns NULL and sets *errno* appropriately. The **vmem_malloc_usable_size**() function returns the number of usable bytes in the block of allocated memory pointed to by *ptr*, a pointer to a block of memory allocated by **vmem_malloc**() or a related function. If *ptr* is NULL, 0 is returned. # SEE ALSO # **calloc**(3), **free**(3), **malloc**(3), **malloc_usable_size**(3), **realloc**(3), **strdup**(3), **wcsdup**(3) **libvmem(7)** and **** vmem-1.8/doc/libvmmalloc/000077500000000000000000000000001361505074100153765ustar00rootroot00000000000000vmem-1.8/doc/libvmmalloc/libvmmalloc.7.md000066400000000000000000000273521361505074100203770ustar00rootroot00000000000000--- layout: manual Content-Style: 'text/css' title: _MP(LIBVMMALLOC, 7) collection: libvmmalloc header: VMEM date: vmmalloc API version 1.1 ... [comment]: <> (Copyright 2016-2019, Intel Corporation) [comment]: <> (Redistribution and use in source and binary forms, with or without) [comment]: <> (modification, are permitted provided that the following conditions) [comment]: <> (are met:) [comment]: <> ( * Redistributions of source code must retain the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer.) [comment]: <> ( * Redistributions in binary form must reproduce the above copyright) [comment]: <> ( notice, this list of conditions and the following disclaimer in) [comment]: <> ( the documentation and/or other materials provided with the) [comment]: <> ( distribution.) 
[comment]: <> ( * Neither the name of the copyright holder nor the names of its) [comment]: <> ( contributors may be used to endorse or promote products derived) [comment]: <> ( from this software without specific prior written permission.) [comment]: <> (THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS) [comment]: <> ("AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR) [comment]: <> (A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT) [comment]: <> (OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,) [comment]: <> (SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT) [comment]: <> (LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,) [comment]: <> (DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY) [comment]: <> (THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT) [comment]: <> ((INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE) [comment]: <> (OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.) [comment]: <> (libvmmalloc.7 -- man page for libvmmalloc) [NAME](#name)
[SYNOPSIS](#synopsis)
[DESCRIPTION](#description)
[ENVIRONMENT](#environment)
[CAVEATS](#caveats)
[DEBUGGING](#debugging)
[NOTES](#notes)
[BUGS](#bugs)
[ACKNOWLEDGEMENTS](#acknowledgements)
[SEE ALSO](#see-also) # NAME # **libvmmalloc** - general purpose volatile memory allocation library # SYNOPSIS # ``` $ LD_PRELOAD=libvmmalloc.so.1 command [ args... ] ``` or ```c #include #ifndef __FreeBSD__ #include #else #include #endif #include cc [ flag... ] file... -lvmmalloc [ library... ] ``` ```c void *malloc(size_t size); void free(void *ptr); void *calloc(size_t number, size_t size); void *realloc(void *ptr, size_t size); int posix_memalign(void **memptr, size_t alignment, size_t size); void *aligned_alloc(size_t alignment, size_t size); void *memalign(size_t alignment, size_t size); void *valloc(size_t size); void *pvalloc(size_t size); size_t malloc_usable_size(const void *ptr); void cfree(void *ptr); ``` # DESCRIPTION # **libvmmalloc** transparently converts all dynamic memory allocations into Persistent Memory allocations. The typical usage of **libvmmalloc** does not require any modification of the target program. It is enough to load **libvmmalloc** before all other libraries by setting the environment variable **LD_PRELOAD**. When used in that way, **libvmmalloc** interposes the standard system memory allocation routines, as defined in **malloc**(3), **posix_memalign**(3) and **malloc_usable_size**(3), and provides that all dynamic memory allocations are made from a *memory pool* built on a memory-mapped file, instead of the system heap. The memory managed by **libvmmalloc** may have different attributes, depending on the file system containing the memory-mapped file. This library is no longer actively developed, and is in maintenance mode, same as its underlying code backend (**libvmem**). It is mature, and is expected to be supported for foreseable future. **libvmmalloc** may be also linked to the program, by providing the **-lvmmalloc* argument to the linker. Then it becomes the default memory allocator for the program. >NOTE: Due to the fact the library operates on a memory-mapped file, **it may not work properly with programs that perform fork(2) not followed by exec(3).** There are two variants of experimental **fork**(2) support available in libvmmalloc. The desired library behavior may be selected by setting the **VMMALLOC_FORK** environment variable. By default variant #1 is enabled. See **ENVIRONMENT** for more details. **libvmmalloc** uses the **mmap**(2) system call to create a pool of volatile memory. The library is most useful when used with *Direct Access* storage (DAX), which is memory-addressable persistent storage that supports load/store access without being paged via the system page cache. A Persistent Memory-aware file system is typically used to provide this type of access. Memory-mapping a file from a Persistent Memory-aware file system provides the raw memory pools, and this library supplies the traditional *malloc* interfaces on top of those pools. The memory pool acting as a system heap replacement is created automatically at library initialization time. The user may control its location and size by setting the environment variables described in **ENVIRONMENT**, below. The allocated file space is reclaimed when the process terminates or in case of system crash. Under normal usage, **libvmmalloc** will never print messages or intentionally cause the process to exit. The library uses **pthreads**(7) to be fully MT-safe, but never creates or destroys threads itself. The library does not make use of any signals, networking, and never calls **select**(2) or **poll**(2). 
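As a minimal sketch of the typical, unmodified-program usage described above (the pool directory and size shown are only examples; the required environment variables are described in **ENVIRONMENT**, below), an ordinary C program can be redirected to a memory pool simply by the way it is launched:

```c
/*
 * Sketch only: this program contains no libvmmalloc-specific code.
 * Launched as, for example:
 *
 *   $ VMMALLOC_POOL_DIR=/mnt/pmem VMMALLOC_POOL_SIZE=1073741824 \
 *         LD_PRELOAD=libvmmalloc.so.1 ./a.out
 *
 * its malloc(3) and free(3) calls below are served from the memory
 * pool file created in /mnt/pmem instead of the system heap.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	char *buf = malloc(100);

	if (buf == NULL) {
		perror("malloc");
		exit(1);
	}

	strcpy(buf, "hello, world");
	puts(buf);

	free(buf);	/* returned to the pool, not the system heap */
	return 0;
}
```

Run without the **LD_PRELOAD** setting, the same binary uses the system heap, which is what makes this approach convenient for existing applications.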
# ENVIRONMENT # The **VMMALLOC_POOL_DIR** and **VMMALLOC_POOL_SIZE** environment variables **must** be set for **libvmmalloc** to work properly. If either of them is not specified, or if their values are not valid, the library prints an appropriate error message and terminates the process. Any other environment variables are optional. + **VMMALLOC_POOL_DIR**=*path* Specifies a path to the directory where the memory pool file should be created. The directory must exist and be writable. + **VMMALLOC_POOL_SIZE**=*len* Defines the desired size (in bytes) of the memory pool file. It must be not less than the minimum allowed size **VMMALLOC_MIN_POOL** as defined in **\**. >NOTE: Due to the fact the library adds some metadata to the memory pool, the amount of actual usable space is typically less than the size of the memory pool file. + **VMMALLOC_FORK**=*val* (EXPERIMENTAL) **VMMALLOC_FORK** controls the behavior of **libvmmalloc** in case of **fork**(3), and can be set to the following values: + **0** - **fork**(2) support is disabled. The behavior of **fork**(2) is undefined in this case, but most likely results in memory pool corruption and a program crash due to segmentation fault. + **1** - The memory pool file is remapped with the **MAP_PRIVATE** flag before the fork completes. From this moment, any access to memory that modifies the heap pages, both in the parent and in the child process, will trigger creation of a copy of those pages in RAM (copy-on-write). The benefit of this approach is that it does not significantly increase the time of the initial fork operation, and does not require additional space on the file system. However, all subsequent memory allocations, and modifications of any memory allocated before fork, will consume system memory resources instead of the memory pool. This is the default option if **VMMALLOC_FORK** is not set. + **2** - A copy of the entire memory pool file is created for the use of the child process. This requires additional space on the file system, but both the parent and the child process may still operate on their memory pools, not consuming system memory resources. >NOTE: In case of large memory pools, creating a copy of the pool file may stall the fork operation for a quite long time. + **3** - The library first attempts to create a copy of the memory pool (as for option #2), but if it fails (i.e. because of insufficient free space on the file system), it will fall back to option #1. >NOTE: Options **2** and **3** are not currently supported on FreeBSD. Environment variables used for debugging are described in **DEBUGGING**, below. # CAVEATS # **libvmmalloc** relies on the library destructor being called from the main thread. For this reason, all functions that might trigger destruction (e.g. **dlclose**(3)) should be called in the main thread. Otherwise some of the resources associated with that thread might not be cleaned up properly. # DEBUGGING # Two versions of **libvmmalloc** are typically available on a development system. The normal version is optimized for performance. That version skips checks that impact performance and never logs any trace information or performs any run-time assertions. A second version, accessed when using libraries from _DEBUGLIBPATH(), contains run-time assertions and trace points. The typical way to access the debug version is to set the **LD_LIBRARY_PATH** environment variable to _LDLIBPATH(). Debugging output is controlled using the following environment variables. 
These variables have no effect on the non-debug version of the library. + **VMMALLOC_LOG_LEVEL** The value of **VMMALLOC_LOG_LEVEL** enables trace points in the debug version of the library, as follows: + **0** - Tracing is disabled. This is the default level when **VMMALLOC_LOG_LEVEL** is not set. + **1** - Additional details on any errors detected are logged, in addition to returning the *errno*-based errors as usual. + **2** - A trace of basic operations is logged. + **3** - Enables a very verbose amount of function call tracing in the library. + **4** - Enables voluminous tracing information about all memory allocations and deallocations. Unless **VMMALLOC_LOG_FILE** is set, debugging output is written to *stderr*. + **VMMALLOC_LOG_FILE** Specifies the name of a file where all logging information should be written. If the last character in the name is "-", the *PID* of the current process will be appended to the file name when the log file is created. If **VMMALLOC_LOG_FILE** is not set, output is written to *stderr*. + **VMMALLOC_LOG_STATS** Setting **VMMALLOC_LOG_STATS** to 1 enables logging human-readable summary statistics at program termination. # NOTES # Unlike the normal **malloc**(3), which asks the system for additional memory when it runs out, **libvmmalloc** allocates the size it is told to and never attempts to grow or shrink that memory pool. # BUGS # **libvmmalloc** may not work properly with programs that perform **fork**(2) and do not call **exec**(3) immediately afterwards. See **ENVIRONMENT** for more details about experimental **fork**(2) support. If logging is enabled in the debug version of the library and the process performs **fork**(2), no new log file is created for the child process, even if the configured log file name ends with "-". All logging information from the child process will be written to the log file owned by the parent process, which may lead to corruption or partial loss of log data. Malloc hooks (see **malloc_hook**(3)), are not supported when using **libvmmalloc**. # ACKNOWLEDGEMENTS # **libvmmalloc** depends on jemalloc, written by Jason Evans, to do the heavy lifting of managing dynamic memory allocation. See: # SEE ALSO # **fork**(2), **dlclose(3)**, **exec**(3), **malloc**(3), **malloc_usable_size**(3), **posix_memalign**(3), **libpmem**(7), **libvmem**(7) and **** On Linux: **jemalloc**(3), **malloc_hook**(3), **pthreads**(7), **ld.so**(8) On FreeBSD: **ld.so**(1), **pthread**(3) vmem-1.8/doc/macros.man000066400000000000000000000143061361505074100150620ustar00rootroot00000000000000# # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # These macros are defined for the m4 preprocessor and are controlled by # the FREEBSD, WIN32 and WEB variables. These MUST be explicitly defined or # undefined on the m4 command line. # # This solution allows the maintenance of Windows, Linux and FreeBSD # documentation in the same file. # # The macros are: # # _BSDWX(FreeBSD,WinLinux): # Choose text based on FREEBSD. Both arguments are optional # (although the comma must be present if FreeBSD is omitted). # Bracket string with (=q=, =e=) if it contains commas. # _DEBUGLIBPATH() # Inserts pathnames for debug libraries depending on WIN32 and # FREEBSD. # _LDLIBPATH() # Inserts suggested pathnames for LD_LIBRARY_PATH depending on # WIN32 and FREEBSD. # _MP(man_page_name, section): # Include the man page section number if not building for WEB. # _UNICODE(): # Inserts a standard note regarding UNICODE support if WIN32. # _U(func_name): # Append "U" to func_name if WIN32. # _UW(func_name): # Emit **func_nameU**()/**func_nameW**() if WIN32. # _UWFUNC(func_name, args): # Define U and W prototypes of char/wchar_t *func_name if WIN32. # Bracket args string with (=q=, =e=) if it is a comma-separated # list. # _UWFUNCR(ret_type, func_name, char_arg): # Define U and W prototypes of ret_type func_name if WIN32. # Single char/wchar_t argument is char_arg. # _UWFUNCRUW(ret_type, func_name, args): # Define U and W prototypes of ret_type[U/W] func_name if WIN32. # Bracket args string with (=q=, =e=) if it is a comma-separated # list. # _UWFUNCR1(ret_type, func_name, char_arg, rest_of_args, comment): # Define U and W prototypes of ret_type func_name if WIN32. # First char/wchar_t argument is char_arg. Bracket rest_of_args # string with (=q=, =e=) if it is a comma-separated list. # Comment is added after prototype definition if present. # _UWFUNCR12(ret_type, func_name, char_arg1, char_arg2, rest_of_args, # comment): # Define U and W prototypes of ret_type func_name if WIN32. # Two char/wchar_t arguments are char_arg1-2. Bracket # rest_of_args string with (=q=, =e=) if it is a comma-separated # list. Comment is added after prototype definition if present. # _UWFUNCR1UW(ret_type, func_name, arg1_type, arg1, rest_of_args): # Define U and W prototypes of ret_type func_name, append [U/W] # to arg1_type arg1. Bracket rest_of_args string with (=q=, =e=) # if it is a comma-separated list. # _UWFUNCR2(ret_type, func_name, arg1, char_arg, rest_of_args, comment): # Define U and W prototypes of ret_type func_name if WIN32. # Second char/wchar_t argument is char_arg. Bracket rest_of_args # string with (=q=, =e=) if it is a comma-separated list. # Comment is added after prototype definition if present. # _UWS(struct_name): # Emit *struct struct_nameU*/*struct struct_nameW* if WIN32. 
# _WINUX(Windows,UX): # Choose text based on WIN32. Both arguments are optional # (although the comma must be present if Windows is omitted). # Bracket string with (=q=, =e=) if it contains commas. changequote(=q=,=e=) changecom() define(_BSDWX, ifdef(=q=FREEBSD=e=,$1,$2)) define(_DEBUGLIBPATH, ifdef(=q=WIN32=e=,**/pmdk/src/x64/Debug**, ifdef(=q=FREEBSD=e=,**/usr/local/lib/vmem_debug**, **/usr/lib/vmem_debug**))) define(_LDLIBPATH, ifdef(=q=WIN32=e=,**/pmdk/src/x64/Debug**, ifdef(=q=FREEBSD=e=,**/usr/local/lib/vmem_debug**, =q==q==q=**/usr/lib/vmem_debug** or **/usr/lib64/vmem_debug**, as appropriate=e==e==e=))) define(_MP, ifdef(=q=WEB=e=,$1,$1($2))) define(_UNICODE, ifdef(=q=WIN32=e=,=q==q= >NOTE: The VMEM API supports UNICODE. If the **PMDK_UTF8_API** macro is defined, basic API functions are expanded to the UTF-8 API with postfix *U*. Otherwise they are expanded to the UNICODE API with postfix *W*.=e==e=)) define(_U, ifdef(=q=WIN32=e=,$1U,$1)) define(_UW, ifdef(=q=WIN32=e=,**$1U**()/**$1W**(),**$1**())) define(_UWFUNC, ifdef(=q=WIN32=e=, const char *$1U($2); const wchar_t *$1W($2);, const char *$1($2);)) define(_UWFUNCR, ifdef(=q=WIN32=e=, $1 $2U(const char $3); $1 $2W(const wchar_t $3);, $1 $2(const char $3);)) define(_UWFUNCRUW, ifdef(=q=WIN32=e=, $1U $2U($3); $1W $2W($3);, $1 $2($3);)) define(_UWFUNCR1, ifdef(=q=WIN32=e=, $1 $2U(const char $3, $4);$5 $1 $2W(const wchar_t $3, $4);$5, $1 $2(const char $3, $4);$5)) define(_UWFUNCR12, ifdef(=q=WIN32=e=, $1 $2U(const char $3, const char $4, $5);$6 $1 $2W(const wchar_t $3, const wchar_t $4, $5);$6, $1 $2(const char $3, const char $4, $5);$6)) define(_UWFUNCR1UW, ifdef(=q=WIN32=e=, $1 $2U($3U $4, $5); $1 $2W($3W $4, $5);, $1 $2($3 $4, $5);)) define(_UWFUNCR2, ifdef(=q=WIN32=e=, $1 $2U($3, const char $4, $5);$6 $1 $2W($3, const wchar_t $4, $5);$6, $1 $2($3, const char $4, $5);$6)) define(_UWS, ifdef(=q=WIN32=e=,*struct $1U*/*struct $1W*,*struct $1*)) define(_WINUX, ifdef(=q=WIN32=e=,$1,$2)) vmem-1.8/res/000077500000000000000000000000001361505074100131215ustar00rootroot00000000000000vmem-1.8/res/PMDK.ico000066400000000000000000001475131361505074100143630ustar00rootroot000000000000002V(L2 h@@ (B#PNG  IHDR\rf2\IDATx]|~:hFA*"{ァV(ZP-C, pb/XVVn͑KK%wݽ^'eX9m5 8]Ɲ'tDĠ9Cێ3 3,?jY`5Y0bĨ"r3nyڢWS[$gr]O(z"FYq&D g#ObD)&L".C  cy,7{2&Cмwc 8w[%Ú ̆q,%#Fqp9#A\7Amh>Gh&b|n V,.D=)wd `HՋʣUBjw!ch(t:o\.'_YVXn2q?cl \h_\qO }0@,!Φi#LpOj#&LN]kݼlFZ qdd#FQϓ"w=4Og @p ڿY_w~D̶VUpMXbq0u.C8 "`\L-bMMJ; 1+oC@LCک؈@ <䤏@0P @@ |(ad #SV0[ @xIl8"#f:F/fY k`#F/]#F7Gxg_BbĈ4Ձ`N {x@\`w 7n!03owp_&ϳ95`W@G@xXn? =:{=3@0P @@ |(a > ‡aj%S<@p> srLKp&Q#JV( E//!9 w`[ %1?|q17!] )`1gg`tAdל!* ."FcFR"l\j4 j?5tx Ĩ}s/61`_-0+V[ZmMԃ;E~|(g4fD菠9[͡ǻ`v'Ε{a <8:}]?~mL{\nґ$6/³ _H0;~xkM`PI={`@ Np@BwpB+"܃qW[;)| ,1]eoOAp @hj^ &GJV 0 0@H?4 L&񑐘B@#߅&=jr=(nF9!,zٶDm3GE&7fѝY[;dZDN;'}n%h*8y u3[UY&Rsu-!\mVv.w3_pòSTUکa=u V\_O!ǝ״w}=JUw`VEpr}!-5-sT`K*>pV_%i^qPY[= Z nBM~ ׁHer0̺[aﶣc!qRFI;ֵC{:9} pxlyOB" ވ~s/oxI䉠9 JPzaݯ˨\,> {4 ,?! 
$I F$       K      yŔ) @)     " @k||hK )K k j )  )  jU 'UQ j  >  j        >j         \ +         Uk         A K ȟ' T \ T u*  ( =ԅ  Tŭu*                                                                                                                                                                                                                                    CO^aprqooqq]`EE`??( @D <I >G?J @M@PBSDUEYFWGR E]H]JX HIANFWJZM`J`KbLaL` Ma MeRh SbPhVgVcUnZn[jYV$NY)R[*T^/Xh'[m(^a2Ze1\b4\d5\s#at ar&`q*bt(cu-eg;ah;au1gv9jz;moDhrIltKn}@qwNq|Uv3n:s=r?tDu@wEzJ|QT}^}OV_aeg`otejmvxz~nnwzx Ȭ̮̽г־/P"p0>M[iy1Qqұ/Pp  >1\Qzq/Pp!+6@IZ1pQq/ P6pLbx1Qq,/KPip1Qq/-P?pRcv1Qqϑܱ/Pp!&,>X1qQq(**+%kq;7VP92  ,  D  JY |m pw YQ@] NY 0bD?n b NY . 1^gSv RR 1!     iE oA  4/    :6 [}   _5  Fvj&         )+((( D <J ?F>M AQBTDXFXG\H]I^ MH@KCLDQFQG]NaLgWhVmZkYo\r_S KZ)Sm#]b4[d6]n0b{2j}0l|4llAfnChqGk{BosJmuMo~Bq|VwUx9r>v@rAvFzJyY|SitlrrvzäǩǷ̾ϼж  =1[Qyq/"P0p=LYgx1Qq&/@PZpt1Qq/&PAp[tϩ1Qq/P"p0>M[iy1Qqұ/Pp  >1\Qzq/Pp!+6@IZ1pQq/ P6pLbx1Qq,/KPip1Qq/-P?pRcv1Qqϑܱ/Pp!&,>X1qQq   3"&L6G2?%#I!<(BP O;HAE <)CQ NM=0:7RD**- M@MS D .87-  ,J/>1/4F9M'$5 (@ F<3E =D D dD D L @_ L\H_Ip]}`J`J`JTWFVESCOAK ?H >E ɫ`I`J`JdO`J`Jq]}`J`J`Jr_`J`J`J`J`J`J`J\HWEd"WD D H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H >H ?H ?5G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >G >F <3E true vmem-1.8/src/LongPathSupport.props000066400000000000000000000006061361505074100173170ustar00rootroot00000000000000 $(SolutionDir)LongPath.manifest vmem-1.8/src/Makefile000066400000000000000000000152771361505074100145730ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/Makefile -- Makefile for VMEM # TOP := $(dir $(lastword $(MAKEFILE_LIST))).. 
include $(TOP)/src/common.inc TARGETS = libvmem libvmmalloc ALL_TARGETS = $(TARGETS) common POSSIBLE_TARGETS = $(TARGETS) common examples SCOPE_DIRS = $(TARGETS) common DEBUG_RELEASE_TARGETS = common libvmem libvmmalloc ifneq ($(BUILD_EXAMPLES),n) ALL_TARGETS += examples endif CLEAN_NO_JE_TARGETS = $(POSSIBLE_TARGETS) CLEAN_TARGETS = $(CLEAN_NO_JE_TARGETS) jemalloc CLOBBER_NO_JE_TARGETS = $(POSSIBLE_TARGETS) CLOBBER_TARGETS = $(CLOBBER_NO_JE_TARGETS) jemalloc CSTYLE_TARGETS = $(POSSIBLE_TARGETS) INSTALL_TARGETS = $(TARGETS) SPARSE_TARGETS = $(POSSIBLE_TARGETS) HEADERS_DESTDIR = $(DESTDIR)$(includedir) HEADERS_INSTALL = include/libvmem.h include/libvmmalloc.h OBJ_HEADERS_INSTALL = PKG_CONFIG_DESTDIR = $(DESTDIR)$(pkgconfigdir) PKG_CONFIG_COMMON = common.pc PKG_CONFIG_FILES = libvmem.pc libvmmalloc.pc rwildcard=$(strip $(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2)\ $(filter $(subst *,%,$2),$d))) SCOPE_SRC_DIRS = $(SCOPE_DIRS) include jemalloc/src SCOPE_HDR_DIRS = $(SCOPE_DIRS) include jemalloc/src\ jemalloc/include/jemalloc\ jemalloc/include/jemalloc/internal\ debug/libvmem/jemalloc/include/jemalloc\ debug/libvmmalloc/jemalloc/include/jemalloc\ debug/libvmem/jemalloc/include/jemalloc/internal\ debug/libvmmalloc/jemalloc/include/jemalloc/internal\ nondebug/libvmem/jemalloc/include/jemalloc\ nondebug/libvmmalloc/jemalloc/include/jemalloc\ nondebug/libvmem/jemalloc/include/jemalloc/internal\ nondebug/libvmmalloc/jemalloc/include/jemalloc/internal SCOPE_SRC_FILES = $(foreach d, $(SCOPE_SRC_DIRS), $(wildcard $(d)/*.c)) SCOPE_HDR_FILES = $(foreach d, $(SCOPE_HDR_DIRS), $(wildcard $(D)/*.h)) SCOPEFILES = $(SCOPE_SRC_FILES) $(SCOPE_HDR_FILES) # include/lib*.h - skip include/pmemcompat.h HEADERS =\ $(foreach f, $(wildcard\ freebsd/include/*.h\ freebsd/include/*/*.h\ include/lib*.h\ windows/include/*.h\ windows/include/*/*.h\ ), $(f)) ifneq ($(filter 1 2, $(CSTYLEON)),) TMP_HEADERS := $(addprefix debug/, $(addsuffix tmp, $(HEADERS))) endif SCRIPTS = $(call rwildcard,,*.sh) debug/%.htmp: %.h $(call check-cstyle, $<, $@) debug/%.hpptmp: %.hpp $(call check-cstyle, $<, $@) all: $(TMP_HEADERS) $(ALL_TARGETS) install: $(INSTALL_TARGETS:=-install) uninstall: $(INSTALL_TARGETS:=-uninstall) clean: $(CLEAN_TARGETS:=-clean) clobber: $(CLOBBER_TARGETS:=-clobber) cstyle: $(CSTYLE_TARGETS:=-cstyle) format: $(CSTYLE_TARGETS:=-format) examples benchmarks: $(TARGETS) benchmarks: examples sparse: $(SPARSE_TARGETS:=-sparse) custom_build = $(DEBUG)$(OBJDIR) libvmmalloc libvmem: jemalloc test: common pkg-cfg-common: @printf "version=%s\nlibdir=%s\nprefix=%s\n" "$(SRCVERSION)" "$(libdir)" "$(prefix)" > $(PKG_CONFIG_COMMON) $(PKG_CONFIG_COMMON): pkg-cfg-common %.pc: $(PKG_CONFIG_COMMON) $(TOP)/utils/%.pc.in @echo Generating $@ @cat $(PKG_CONFIG_COMMON) > $@ @cat $(TOP)/utils/$@.in >> $@ pkg-config: $(PKG_CONFIG_FILES) $(eval $(call sub-target,$(INSTALL_TARGETS),install,y)) $(eval $(call sub-target,$(INSTALL_TARGETS),uninstall,y)) $(eval $(call sub-target,$(CLEAN_NO_JE_TARGETS),clean,y)) $(eval $(call sub-target,$(CLOBBER_NO_JE_TARGETS),clobber,y)) $(eval $(call sub-target,$(CSTYLE_TARGETS),cstyle,n)) $(eval $(call sub-target,$(CSTYLE_TARGETS),format,n)) $(eval $(call sub-target,$(SPARSE_TARGETS),sparse,n)) $(DEBUG_RELEASE_TARGETS): $(MAKE) -C $@ ifeq ($(custom_build),) $(MAKE) -C $@ DEBUG=1 endif jemalloc-check: jemalloc-test test: all jemalloc-test $(MAKE) -C test test check pcheck: test jemalloc-check $(MAKE) -C test $@ jemalloc jemalloc-clean jemalloc-clobber jemalloc-test jemalloc-check: $(MAKE) -C jemalloc -f 
Makefile.libvmem $@ EXTRA_CFLAGS="$(EXTRA_CFLAGS) -I$(abspath $(TOP))/src/common" $(MAKE) -C jemalloc -f Makefile.libvmmalloc $@ EXTRA_CFLAGS="$(EXTRA_CFLAGS) -I$(abspath $(TOP))/src/common" ifeq ($(custom_build),) $(MAKE) -C jemalloc -f Makefile.libvmem $@ DEBUG=1 EXTRA_CFLAGS="$(EXTRA_CFLAGS) -I$(abspath $(TOP))/src/common" $(MAKE) -C jemalloc -f Makefile.libvmmalloc $@ DEBUG=1 EXTRA_CFLAGS="$(EXTRA_CFLAGS) -I$(abspath $(TOP))/src/common" endif # Re-generate pkg-config files on 'make install' (not on 'make all'), # to handle the case when prefix is specified only for 'install'. # Clean up generated files when done. install: all pkg-config install -d $(HEADERS_DESTDIR) install -p -m 0644 $(HEADERS_INSTALL) $(HEADERS_DESTDIR) install -d $(PKG_CONFIG_DESTDIR) install -p -m 0644 $(PKG_CONFIG_FILES) $(PKG_CONFIG_DESTDIR) $(RM) $(PKG_CONFIG_FILES) uninstall: $(foreach f, $(HEADERS_INSTALL), $(RM) $(HEADERS_DESTDIR)/$(notdir $(f))) $(foreach f, $(PKG_CONFIG_FILES), $(RM) $(PKG_CONFIG_DESTDIR)/$(notdir $(f))) cstyle: $(STYLE_CHECK) check $(HEADERS) $(CHECK_SHEBANG) $(SCRIPTS) format: $(STYLE_CHECK) format $(HEADERS) cscope: cscope -q -b $(SCOPEFILES) ctags -e $(SCOPEFILES) clean-here: $(RM) tags cscope.in.out cscope.out cscope.po.out *.pc $(TMP_HEADERS) clean: clean-here clobber: clean-here .NOTPARALLEL: libvmem libvmmalloc .PHONY: all install uninstall clean clobber cstyle format test check pcheck\ jemalloc jemalloc-clean jemalloc-test jemalloc-check cscope $(ALL_TARGETS)\ pkg-config clean-here pkg-cfg-common vmem-1.8/src/Makefile.inc000066400000000000000000000214761361505074100153410ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/Makefile.inc -- common Makefile rules for VMEM # TOP := $(dir $(lastword $(MAKEFILE_LIST))).. 
include $(TOP)/src/common.inc INCLUDE = $(TOP)/src/include COMMON = $(TOP)/src/common vpath %.c $(COMMON) INCS += -I../include -I../common/ $(OS_INCS) # default CFLAGS DEFAULT_CFLAGS += -std=gnu99 DEFAULT_CFLAGS += -Wall DEFAULT_CFLAGS += -Werror DEFAULT_CFLAGS += -Wmissing-prototypes DEFAULT_CFLAGS += -Wpointer-arith DEFAULT_CFLAGS += -Wsign-conversion DEFAULT_CFLAGS += -Wsign-compare ifeq ($(WCONVERSION_AVAILABLE), y) DEFAULT_CFLAGS += -Wconversion endif ifeq ($(IS_ICC), n) DEFAULT_CFLAGS += -Wunused-macros DEFAULT_CFLAGS += -Wmissing-field-initializers endif ifeq ($(WUNREACHABLE_CODE_RETURN_AVAILABLE), y) DEFAULT_CFLAGS += -Wunreachable-code-return endif ifeq ($(WMISSING_VARIABLE_DECLARATIONS_AVAILABLE), y) DEFAULT_CFLAGS += -Wmissing-variable-declarations endif ifeq ($(WFLOAT_EQUAL_AVAILABLE), y) DEFAULT_CFLAGS += -Wfloat-equal endif ifeq ($(WSWITCH_DEFAULT_AVAILABLE), y) DEFAULT_CFLAGS += -Wswitch-default endif ifeq ($(WCAST_FUNCTION_TYPE_AVAILABLE), y) DEFAULT_CFLAGS += -Wcast-function-type endif ifeq ($(WSTRINGOP_TRUNCATION_AVAILABLE), y) DEFAULT_CFLAGS += -DSTRINGOP_TRUNCATION_SUPPORTED endif ifeq ($(DEBUG),1) # Undefine _FORTIFY_SOURCE in case it's set in system-default or # user-defined CFLAGS as it conflicts with -O0. DEBUG_CFLAGS += -Wp,-U_FORTIFY_SOURCE DEBUG_CFLAGS += -O0 -ggdb -DDEBUG LIB_SUBDIR = /vmem_debug OBJDIR = debug else DEFAULT_CFLAGS += -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 LIB_SUBDIR = OBJDIR = nondebug endif # use defaults, if system or user-defined CFLAGS are not specified CFLAGS ?= $(DEFAULT_CFLAGS) CFLAGS += -std=gnu99 CFLAGS += -fno-common CFLAGS += -pthread CFLAGS += -DSRCVERSION=\"$(SRCVERSION)\" ifeq ($(COVERAGE),1) CFLAGS += $(GCOV_CFLAGS) LDFLAGS += $(GCOV_LDFLAGS) LIBS += $(GCOV_LIBS) endif ifeq ($(VALGRIND),0) CFLAGS += -DVALGRIND_ENABLED=0 CXXFLAGS += -DVALGRIND_ENABLED=0 endif ifeq ($(FAULT_INJECTION),1) CFLAGS += -DFAULT_INJECTION=1 CXXFLAGS += -DFAULT_INJECTION=1 endif # On FreeBSD libvmmalloc defines pthread_create, which conflicts with asan tsanitize := $(SANITIZE) ifeq ($(OS_KERNEL_NAME),FreeBSD) ifeq ($(JEMALLOC_VMEMDIR),libvmmalloc) hasaddrcomma := address, tsanitize := $(subst address,,$(subst $(hasaddrcomma),,$(SANITIZE))) ifneq ($(tsanitize),$(SANITIZE)) $(info SANITIZE=address not supported for libvmmalloc, ignored) endif endif endif ifneq ($(tsanitize),) CFLAGS += -fsanitize=$(tsanitize) LDFLAGS += -fsanitize=$(tsanitize) endif CFLAGS += $(EXTRA_CFLAGS) ifeq ($(DEBUG),1) CFLAGS += $(EXTRA_CFLAGS_DEBUG) $(DEBUG_CFLAGS) else CFLAGS += $(EXTRA_CFLAGS_RELEASE) endif LDFLAGS += -Wl,-z,relro -Wl,--fatal-warnings -Wl,--warn-common $(EXTRA_LDFLAGS) ifneq ($(NORPATH),1) LDFLAGS += -Wl,-rpath=$(libdir)$(LIB_SUBDIR) endif ifeq ($(LIBRT_NEEDED), y) LIBS += -lrt endif define arch32_error_msg ################################################## ### 32-bit builds of VMEM are not supported! ### ### Please, use 64-bit platform/compiler. ### ################################################## endef TESTCMD := $(CC) $(CFLAGS) -dM -E -x c /dev/null -o /dev/null TESTBUILD := $(shell $(TESTCMD) && echo 1 || echo 0) ifneq ($(TESTBUILD), 1) $(error "$(TESTCMD)" failed) endif LP64 := $(shell $(CC) $(CFLAGS) -dM -E -x c /dev/null | grep -Ec "__SIZEOF_LONG__.+8|__SIZEOF_POINTER__.+8" ) ifneq ($(LP64), 2) $(error $(arch32_error_msg)) endif LIBS_DESTDIR = $(DESTDIR)$(libdir)$(LIB_SUBDIR) DIRNAME = $(shell basename $(CURDIR)) ifeq ($(OBJDIR),$(abspath $(OBJDIR))) objdir = $(OBJDIR)/$(DIRNAME) else objdir = ../$(OBJDIR)/$(DIRNAME) endif LIB_OUTDIR ?= $(objdir)/.. 
ifneq ($(LIB_OUTDIR),) LDFLAGS += -L$(LIB_OUTDIR) endif ifneq ($(SOURCE),) _OBJS = $(SOURCE:.c=.o) _OBJS_COMMON = $(patsubst $(COMMON)/%, %, $(_OBJS)) OBJS += $(addprefix $(objdir)/, $(_OBJS_COMMON)) endif ifneq ($(HEADERS),) ifneq ($(filter 1 2, $(CSTYLEON)),) TMP_HEADERS := $(addsuffix tmp, $(HEADERS)) TMP_HEADERS := $(addprefix $(objdir)/, $(TMP_HEADERS)) endif endif ifneq ($(LIBRARY_NAME),) LIB_NAME = lib$(LIBRARY_NAME) endif ifneq ($(LIBRARY_SO_VERSION),) LIB_LINK = $(LIB_NAME).link LIB_SONAME = $(LIB_NAME).so.$(LIBRARY_SO_VERSION) LIB_SO = $(LIB_OUTDIR)/$(LIB_NAME).so LIB_SO_SONAME = $(LIB_SO).$(LIBRARY_SO_VERSION) ifneq ($(LIBRARY_VERSION),) LIB_SO_REAL = $(LIB_SO_SONAME).$(LIBRARY_VERSION) else $(error LIBRARY_VERSION not set) endif TARGET_LIBS = $(LIB_SO_REAL) TARGET_LINKS = $(LIB_SO_SONAME) $(LIB_SO) endif ifneq ($(LIB_NAME),) LIB_AR = $(LIB_OUTDIR)/$(LIB_NAME).a LIB_AR_UNSCOPED = $(objdir)/$(LIB_NAME)_unscoped.o LIB_AR_ALL = $(objdir)/$(LIB_NAME)_all.o TARGET_LIBS += $(LIB_AR) endif ifneq ($(EXTRA_TARGETS),) EXTRA_TARGETS_CLEAN = $(EXTRA_TARGETS:=-clean) EXTRA_TARGETS_CLOBBER = $(EXTRA_TARGETS:=-clobber) endif PMEMLOG_PRIV_OBJ=$(LIB_OUTDIR)/libpmemlog/libpmemlog_unscoped.o PMEMBLK_PRIV_OBJ=$(LIB_OUTDIR)/libpmemblk/libpmemblk_unscoped.o ifneq ($(LIBPMEMLOG_PRIV_FUNCS),) OBJS += pmemlog_priv_funcs.o endif ifneq ($(LIBPMEMBLK_PRIV_FUNCS),) OBJS += pmemblk_priv_funcs.o endif MAKEFILE_DEPS=../Makefile.inc Makefile $(TOP)/src/common.inc all: $(objdir) $(LIB_OUTDIR) $(EXTRA_TARGETS) $(LIB_AR) $(LIB_SO_SONAME) $(LIB_SO_REAL) $(LIB_SO) $(TMP_HEADERS) $(objdir) $(LIB_OUTDIR): $(MKDIR) -p $@ $(LIB_SO_REAL): $(OBJS) $(EXTRA_OBJS) $(LIB_LINK) $(MAKEFILE_DEPS) $(CC) $(LDFLAGS) -shared -Wl,--version-script=$(LIB_LINK),-soname,$(LIB_SONAME) -o $@ $(OBJS) $(EXTRA_OBJS) $(LIBS) $(LIB_SO_SONAME): $(LIB_SO_REAL) $(MAKEFILE_DEPS) $(LN) -sf $(shell basename $<) $@ $(LIB_SO): $(LIB_SO_SONAME) $(MAKEFILE_DEPS) $(LN) -sf $(shell basename $<) $@ $(LIB_AR_UNSCOPED): $(OBJS) $(EXTRA_OBJS) $(MAKEFILE_DEPS) $(LD) -o $@ -r $(OBJS) $(EXTRA_OBJS) ifeq ($(LIB_LINK),) $(LIB_AR_ALL): $(LIB_AR_UNSCOPED) $(MAKEFILE_DEPS) $(OBJCOPY) $< $@ else $(LIB_AR_ALL): $(LIB_AR_UNSCOPED) $(LIB_LINK) $(MAKEFILE_DEPS) $(OBJCOPY) --localize-hidden `sed -n 's/^ *\([a-zA-Z0-9_]*\);$$/-G \1/p' $(LIB_LINK)` $< $@ endif $(LIB_AR): $(LIB_AR_ALL) $(MAKEFILE_DEPS) $(AR) rv $@ $(LIB_AR_ALL) $(PMEMBLK_PRIV_OBJ): $(MAKE) -C $(LIBSDIR) libpmemblk install: all ifneq ($(LIBRARY_NAME),) $(INSTALL) -d $(LIBS_DESTDIR) $(INSTALL) -p -m 0755 $(TARGET_LIBS) $(LIBS_DESTDIR) $(CP) -d $(TARGET_LINKS) $(LIBS_DESTDIR) endif uninstall: ifneq ($(LIBRARY_NAME),) $(foreach f, $(TARGET_LIBS), $(RM) $(LIBS_DESTDIR)/$(notdir $(f))) $(foreach f, $(TARGET_LINKS), $(RM) $(LIBS_DESTDIR)/$(notdir $(f))) endif clean: $(EXTRA_TARGETS_CLEAN) ifneq ($(LIBRARY_NAME),) $(RM) $(OBJS) $(TMP_HEADERS) $(RM) $(LIB_AR_ALL) $(LIB_AR_UNSCOPED) endif clobber: clean $(EXTRA_TARGETS_CLOBBER) ifneq ($(LIBRARY_NAME),) $(RM) $(LIB_AR) $(LIB_SO_SONAME) $(LIB_SO_REAL) $(LIB_SO) $(RM) -r $(objdir)/.deps $(RM) -f *.link endif $(eval $(cstyle-rule)) $(objdir)/%.o: %.c $(MAKEFILE_DEPS) $(call check-cstyle, $<) @mkdir -p $(objdir)/.deps $(CC) -MD -c -o $@ $(CFLAGS) $(INCS) -fPIC $(call coverage-path, $<) $(call check-os, $@, $<) $(create-deps) sparse: $(if $(SOURCE), $(sparse-c)) $(objdir)/%.htmp: %.h $(call check-cstyle, $<, $@) .PHONY: all clean clobber install uninstall cstyle -include $(objdir)/.deps/*.P %.link: %.link.in ifeq ($(FAULT_INJECTION),1) @sed 
's/fault_injection;/$(LIBRARY_NAME)_inject_fault_at;\n\t\t$(LIBRARY_NAME)_fault_injection_enabled;/g' $< > $@_temp else @sed '/fault_injection;/d' $< > $@_temp endif @mv $@_temp $@ vmem-1.8/src/README000066400000000000000000000014241361505074100140000ustar00rootroot00000000000000Persistent Memory Development Kit This is src/README. This directory contains the source for the Persistent Memory Development Kit. libvmem is largely just a wrapper around a modified jemalloc library. See the "jemalloc" subdirectory and the git change log for those files for details. The subdirectory "include" contains header files that get delivered along with the libraries. Everything else is internal to the libraries and lives in this directory. Two versions of the libraries are built, a debug version and a nondebug version. The object files and the libraries themselves end up in the subdirectories "debug" and "nondebug". See the top-level README for build, test, and installation instructions. The basic "make" and "make test" targets also work from this directory. vmem-1.8/src/VMEM.sln000066400000000000000000000772121361505074100144120ustar00rootroot00000000000000Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio 14 VisualStudioVersion = 14.0.25420.1 MinimumVisualStudioVersion = 10.0.40219.1 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "traces_custom_function", "test\traces_custom_function\traces_custom_function.vcxproj", "{02BC3B44-C7F1-4793-86C1-6F36CA8A7F53}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_check_version", "test\vmem_check_version\vmem_check_version.vcxproj", "{04345B7D-B0A1-405B-8BB2-5B98A3400FEF}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "out_err_mt", "test\out_err_mt\out_err_mt.vcxproj", "{063037B2-CA35-4520-811C-19D9C4ED891E}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "libvmem", "libvmem\libvmem.vcxproj", "{08762559-E9DF-475B-BA99-49F4B5A1D80B}" ProjectSection(ProjectDependencies) = postProject {492BAA3D-0D5D-478E-9765-500463AE69AA} = {492BAA3D-0D5D-478E-9765-500463AE69AA} {901F04DB-E1A5-4A41-8B81-9D31C19ACD59} = {901F04DB-E1A5-4A41-8B81-9D31C19ACD59} EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "util_parse_size", "test\util_parse_size\util_parse_size.vcxproj", "{08B62E36-63D2-4FF1-A605-4BBABAEE73FB}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Examples", "Examples", "{0CC6D525-806E-433F-AB4A-6CFD546418B1}" ProjectSection(SolutionItems) = preProject examples\ex_common.h = examples\ex_common.h EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "dllview", "test\tools\dllview\dllview.vcxproj", "{179BEB5A-2C90-44F5-A734-FA756A5E668C}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "win_lists", "test\win_lists\win_lists.vcxproj", "{1F2E1C51-2B14-4047-BE6D-52E00FC3C780}" ProjectSection(ProjectDependencies) = postProject {CE3F2DFB-8470-4802-AD37-21CAF6CB2681} = {CE3F2DFB-8470-4802-AD37-21CAF6CB2681} EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_aligned_alloc", "test\vmem_aligned_alloc\vmem_aligned_alloc.vcxproj", "{25B5C601-03D7-4861-9C0F-7F0453B04227}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_out_of_memory", "test\vmem_out_of_memory\vmem_out_of_memory.vcxproj", "{26D24B3D-22CE-44EB-AA21-2BF594F80520}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "out_err_mt_win", "test\out_err_mt_win\out_err_mt_win.vcxproj", 
"{2B1A5104-A324-4D02-B5C7-D021FB8F880C}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_create", "test\vmem_create\vmem_create.vcxproj", "{2E7E8487-0BB0-4E8A-8672-ED8ABD80D468}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "libvmem", "libvmem", "{3AB2F5A9-5C1E-4077-811A-2F96BCF9EE89}" ProjectSection(SolutionItems) = preProject ..\doc\libvmem\libvmem.7.md = ..\doc\libvmem\libvmem.7.md ..\doc\libvmem\vmem_create.3.md = ..\doc\libvmem\vmem_create.3.md ..\doc\libvmem\vmem_malloc.3.md = ..\doc\libvmem\vmem_malloc.3.md EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_check_allocations", "test\vmem_check_allocations\vmem_check_allocations.vcxproj", "{3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_pages_purging", "test\vmem_pages_purging\vmem_pages_purging.vcxproj", "{3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "sparsefile", "test\tools\sparsefile\sparsefile.vcxproj", "{3EC30D6A-BDA4-4971-879A-8814204EAE31}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_malloc", "test\vmem_malloc\vmem_malloc.vcxproj", "{40DC66AD-F66D-4194-B9A4-A3A2222516FE}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "sys", "sys", "{45027FC5-4A32-47BD-AC5B-66CC7616B1D2}" ProjectSection(SolutionItems) = preProject windows\include\sys\file.h = windows\include\sys\file.h windows\include\sys\mman.h = windows\include\sys\mman.h windows\include\sys\mount.h = windows\include\sys\mount.h windows\include\sys\param.h = windows\include\sys\param.h windows\include\sys\resource.h = windows\include\sys\resource.h windows\include\sys\statvfs.h = windows\include\sys\statvfs.h windows\include\sys\uio.h = windows\include\sys\uio.h windows\include\sys\wait.h = windows\include\sys\wait.h EndProjectSection EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "libvmem", "libvmem", "{45E74E38-35CA-4CB6-8965-BC20D39659AF}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "libpmemcommon", "common\libpmemcommon.vcxproj", "{492BAA3D-0D5D-478E-9765-500463AE69AA}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "util", "util", "{4C291EEB-3874-4724-9CC2-1335D13FF0EE}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_custom_alloc", "test\vmem_custom_alloc\vmem_custom_alloc.vcxproj", "{4ED1E400-CF16-48C2-B176-2BF186E73531}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_mix_allocations", "test\vmem_mix_allocations\vmem_mix_allocations.vcxproj", "{537F759B-B617-48D9-A2F3-7FB769A8F9B7}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "win_common", "test\win_common\win_common.vcxproj", "{6AE1B8BE-D46A-4E99-87A2-F160FB950DCA}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "set_funcs", "test\set_funcs\set_funcs.vcxproj", "{6D7C1169-3246-465F-B630-ECFEF4F3179A}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "util_file_open", "test\util_file_open\util_file_open.vcxproj", "{715EADD7-0FFE-4F1F-94E7-49302968DF79}" ProjectSection(ProjectDependencies) = postProject {3EC30D6A-BDA4-4971-879A-8814204EAE31} = {3EC30D6A-BDA4-4971-879A-8814204EAE31} EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_calloc", "test\vmem_calloc\vmem_calloc.vcxproj", "{718CA6FA-6446-4E43-83DF-BA4E85E5886B}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_create_in_region", 
"test\vmem_create_in_region\vmem_create_in_region.vcxproj", "{74243B75-816C-4077-8DF0-98D2C78B0E5D}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Tests", "Tests", "{746BA101-5C93-42A5-AC7A-64DCEB186572}" ProjectSection(SolutionItems) = preProject test\match = test\match test\RUNTESTLIB.PS1 = test\RUNTESTLIB.PS1 test\RUNTESTS.ps1 = test\RUNTESTS.ps1 test\unittest\unittest.ps1 = test\unittest\unittest.ps1 EndProjectSection EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "linux", "linux", "{774627B7-6532-4464-AEE4-02F72CA44F95}" ProjectSection(SolutionItems) = preProject windows\include\linux\limits.h = windows\include\linux\limits.h EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_realloc", "test\vmem_realloc\vmem_realloc.vcxproj", "{7E0106F8-A597-48D5-B4F2-E0FC4D95EE95}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{853D45D8-980C-4991-B62A-DAC6FD245402}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Tools", "Tools", "{877E7D1D-8150-4FE5-A139-B6FBCEAEC393}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_strdup", "test\vmem_strdup\vmem_strdup.vcxproj", "{89B6AF14-08A0-437A-B31D-A8A3492FA965}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "out_err", "test\out_err\out_err.vcxproj", "{8A0FA780-068A-4534-AA2F-4FF4CF977AF2}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "jemalloc", "jemalloc\msvc\jemalloc.vcxproj", "{8D6BB292-9E1C-413D-9F98-4864BDC1514A}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "srcversion", "windows\srcversion\srcversion.vcxproj", "{901F04DB-E1A5-4A41-8B81-9D31C19ACD59}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "getopt", "windows\getopt\getopt.vcxproj", "{9186EAC4-2F34-4F17-B940-6585D7869BCD}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Windows", "Windows", "{95FAF291-03D1-42FC-9C10-424D551D475D}" ProjectSection(SolutionItems) = preProject common\common.rc = common\common.rc EndProjectSection EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "include", "include", "{9A8482A7-BF0C-423D-8266-189456ED41F6}" ProjectSection(SolutionItems) = preProject windows\include\dirent.h = windows\include\dirent.h windows\include\endian.h = windows\include\endian.h windows\include\err.h = windows\include\err.h windows\include\features.h = windows\include\features.h windows\include\libgen.h = windows\include\libgen.h windows\include\platform.h = windows\include\platform.h include\pmemcompat.h = include\pmemcompat.h windows\include\sched.h = windows\include\sched.h windows\include\srcversion.h = windows\include\srcversion.h windows\include\strings.h = windows\include\strings.h windows\include\unistd.h = windows\include\unistd.h windows\include\win_mmap.h = windows\include\win_mmap.h EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "out_err_win", "test\out_err_win\out_err_win.vcxproj", "{A57D9365-172E-4782-ADC6-82A594E30943}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_stats", "test\vmem_stats\vmem_stats.vcxproj", "{ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "signal_handle", "test\signal_handle\signal_handle.vcxproj", "{AE9E908D-BAEC-491F-9914-436B3CE35E94}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "windows", "windows", "{B870D8A6-12CD-4DD0-B843-833695C2310A}" EndProject 
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_create_win", "test\vmem_create_win\vmem_create_win.vcxproj", "{BF3B6C3A-3073-4AD4-BB41-A41047231982}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_malloc_usable_size", "test\vmem_malloc_usable_size\vmem_malloc_usable_size.vcxproj", "{C00B4A26-6C57-4968-AED5-B45FD31A22E7}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "scope", "test\scope\scope.vcxproj", "{C0E811E0-8942-4CFD-A817-74D99E9E6577}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_realloc_inplace", "test\vmem_realloc_inplace\vmem_realloc_inplace.vcxproj", "{C3A59B21-A287-4631-B4EC-F4A57D26A14F}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "manpage", "examples\libvmem\manpage.vcxproj", "{C84633F5-05B1-4AC1-A074-104D1DB2A91E}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "util_is_absolute", "test\util_is_absolute\util_is_absolute.vcxproj", "{C973CD39-D63B-4F5C-BE1D-DED17388B5A4}" ProjectSection(ProjectDependencies) = postProject {CE3F2DFB-8470-4802-AD37-21CAF6CB2681} = {CE3F2DFB-8470-4802-AD37-21CAF6CB2681} EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "traces", "test\traces\traces.vcxproj", "{CA4BBB24-D33E-42E2-A495-F10D80DE8C1D}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_create_error", "test\vmem_create_error\vmem_create_error.vcxproj", "{CD4B9690-7A06-4F7A-8492-9336979EE7E9}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_multiple_pools", "test\vmem_multiple_pools\vmem_multiple_pools.vcxproj", "{CD7A18D5-55D9-4922-A000-FFAA08ABB006}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "libut", "test\unittest\libut.vcxproj", "{CE3F2DFB-8470-4802-AD37-21CAF6CB2681}" ProjectSection(ProjectDependencies) = postProject {901F04DB-E1A5-4A41-8B81-9D31C19ACD59} = {901F04DB-E1A5-4A41-8B81-9D31C19ACD59} EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "util_file_create", "test\util_file_create\util_file_create.vcxproj", "{D829DB63-E046-474D-8EA3-43A6659294D8}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "examples", "examples", "{E23BB160-006E-44F2-8FB4-3A2240BBC20C}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "libvmem", "libvmem", "{EA0D2458-5FCD-4DAB-B07D-229327B98BEB}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "win_mmap_dtor", "test\win_mmap_dtor\win_mmap_dtor.vcxproj", "{F03DABEE-A03E-4437-BFD3-D012836F2D94}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Tools", "Tools", "{F09A0864-9221-47AD-872F-D4538104D747}" EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "win_signal", "test\win_signal\win_signal.vcxproj", "{F13108C4-4C86-4D56-A317-A4E5892A8AF7}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Documentation", "Documentation", "{F18C84B3-7898-4324-9D75-99A6048F442D}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Utils", "Utils", "{F8CCA5AE-2D75-4C79-BEAB-2588CD5956C8}" ProjectSection(SolutionItems) = preProject ..\appveyor.yml = ..\appveyor.yml ..\utils\CHECK_WHITESPACE.PS1 = ..\utils\CHECK_WHITESPACE.PS1 ..\utils\CREATE-ZIP.PS1 = ..\utils\CREATE-ZIP.PS1 ..\utils\cstyle = ..\utils\cstyle ..\utils\CSTYLE.ps1 = ..\utils\CSTYLE.ps1 EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "util_is_zeroed", "test\util_is_zeroed\util_is_zeroed.vcxproj", "{FD726AA3-D4FA-4597-B435-08CC7752888D}" EndProject 
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "vmem_check", "test\vmem_check\vmem_check.vcxproj", "{FF374D62-CBCF-401E-9A02-1D3DB8BE16E4}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|x64 = Debug|x64 Release|x64 = Release|x64 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {02BC3B44-C7F1-4793-86C1-6F36CA8A7F53}.Debug|x64.ActiveCfg = Debug|x64 {02BC3B44-C7F1-4793-86C1-6F36CA8A7F53}.Debug|x64.Build.0 = Debug|x64 {02BC3B44-C7F1-4793-86C1-6F36CA8A7F53}.Release|x64.ActiveCfg = Release|x64 {04345B7D-B0A1-405B-8BB2-5B98A3400FEF}.Debug|x64.ActiveCfg = Debug|x64 {04345B7D-B0A1-405B-8BB2-5B98A3400FEF}.Debug|x64.Build.0 = Debug|x64 {04345B7D-B0A1-405B-8BB2-5B98A3400FEF}.Release|x64.ActiveCfg = Release|x64 {04345B7D-B0A1-405B-8BB2-5B98A3400FEF}.Release|x64.Build.0 = Release|x64 {063037B2-CA35-4520-811C-19D9C4ED891E}.Debug|x64.ActiveCfg = Debug|x64 {063037B2-CA35-4520-811C-19D9C4ED891E}.Debug|x64.Build.0 = Debug|x64 {063037B2-CA35-4520-811C-19D9C4ED891E}.Release|x64.ActiveCfg = Release|x64 {063037B2-CA35-4520-811C-19D9C4ED891E}.Release|x64.Build.0 = Release|x64 {08762559-E9DF-475B-BA99-49F4B5A1D80B}.Debug|x64.ActiveCfg = Debug|x64 {08762559-E9DF-475B-BA99-49F4B5A1D80B}.Debug|x64.Build.0 = Debug|x64 {08762559-E9DF-475B-BA99-49F4B5A1D80B}.Release|x64.ActiveCfg = Release|x64 {08762559-E9DF-475B-BA99-49F4B5A1D80B}.Release|x64.Build.0 = Release|x64 {08B62E36-63D2-4FF1-A605-4BBABAEE73FB}.Debug|x64.ActiveCfg = Debug|x64 {08B62E36-63D2-4FF1-A605-4BBABAEE73FB}.Debug|x64.Build.0 = Debug|x64 {08B62E36-63D2-4FF1-A605-4BBABAEE73FB}.Release|x64.ActiveCfg = Release|x64 {08B62E36-63D2-4FF1-A605-4BBABAEE73FB}.Release|x64.Build.0 = Release|x64 {179BEB5A-2C90-44F5-A734-FA756A5E668C}.Debug|x64.ActiveCfg = Debug|x64 {179BEB5A-2C90-44F5-A734-FA756A5E668C}.Debug|x64.Build.0 = Debug|x64 {179BEB5A-2C90-44F5-A734-FA756A5E668C}.Release|x64.ActiveCfg = Release|x64 {179BEB5A-2C90-44F5-A734-FA756A5E668C}.Release|x64.Build.0 = Release|x64 {1F2E1C51-2B14-4047-BE6D-52E00FC3C780}.Debug|x64.ActiveCfg = Debug|x64 {1F2E1C51-2B14-4047-BE6D-52E00FC3C780}.Debug|x64.Build.0 = Debug|x64 {1F2E1C51-2B14-4047-BE6D-52E00FC3C780}.Release|x64.ActiveCfg = Release|x64 {1F2E1C51-2B14-4047-BE6D-52E00FC3C780}.Release|x64.Build.0 = Release|x64 {25B5C601-03D7-4861-9C0F-7F0453B04227}.Debug|x64.ActiveCfg = Debug|x64 {25B5C601-03D7-4861-9C0F-7F0453B04227}.Debug|x64.Build.0 = Debug|x64 {25B5C601-03D7-4861-9C0F-7F0453B04227}.Release|x64.ActiveCfg = Release|x64 {25B5C601-03D7-4861-9C0F-7F0453B04227}.Release|x64.Build.0 = Release|x64 {26D24B3D-22CE-44EB-AA21-2BF594F80520}.Debug|x64.ActiveCfg = Debug|x64 {26D24B3D-22CE-44EB-AA21-2BF594F80520}.Debug|x64.Build.0 = Debug|x64 {26D24B3D-22CE-44EB-AA21-2BF594F80520}.Release|x64.ActiveCfg = Release|x64 {26D24B3D-22CE-44EB-AA21-2BF594F80520}.Release|x64.Build.0 = Release|x64 {2B1A5104-A324-4D02-B5C7-D021FB8F880C}.Debug|x64.ActiveCfg = Debug|x64 {2B1A5104-A324-4D02-B5C7-D021FB8F880C}.Debug|x64.Build.0 = Debug|x64 {2B1A5104-A324-4D02-B5C7-D021FB8F880C}.Release|x64.ActiveCfg = Release|x64 {2B1A5104-A324-4D02-B5C7-D021FB8F880C}.Release|x64.Build.0 = Release|x64 {2E7E8487-0BB0-4E8A-8672-ED8ABD80D468}.Debug|x64.ActiveCfg = Debug|x64 {2E7E8487-0BB0-4E8A-8672-ED8ABD80D468}.Debug|x64.Build.0 = Debug|x64 {2E7E8487-0BB0-4E8A-8672-ED8ABD80D468}.Release|x64.ActiveCfg = Release|x64 {2E7E8487-0BB0-4E8A-8672-ED8ABD80D468}.Release|x64.Build.0 = Release|x64 {3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2}.Debug|x64.ActiveCfg = Debug|x64 
{3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2}.Debug|x64.Build.0 = Debug|x64 {3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2}.Release|x64.ActiveCfg = Release|x64 {3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2}.Release|x64.Build.0 = Release|x64 {3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5}.Debug|x64.ActiveCfg = Debug|x64 {3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5}.Debug|x64.Build.0 = Debug|x64 {3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5}.Release|x64.ActiveCfg = Release|x64 {3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5}.Release|x64.Build.0 = Release|x64 {3EC30D6A-BDA4-4971-879A-8814204EAE31}.Debug|x64.ActiveCfg = Debug|x64 {3EC30D6A-BDA4-4971-879A-8814204EAE31}.Debug|x64.Build.0 = Debug|x64 {3EC30D6A-BDA4-4971-879A-8814204EAE31}.Release|x64.ActiveCfg = Release|x64 {3EC30D6A-BDA4-4971-879A-8814204EAE31}.Release|x64.Build.0 = Release|x64 {40DC66AD-F66D-4194-B9A4-A3A2222516FE}.Debug|x64.ActiveCfg = Debug|x64 {40DC66AD-F66D-4194-B9A4-A3A2222516FE}.Debug|x64.Build.0 = Debug|x64 {40DC66AD-F66D-4194-B9A4-A3A2222516FE}.Release|x64.ActiveCfg = Release|x64 {40DC66AD-F66D-4194-B9A4-A3A2222516FE}.Release|x64.Build.0 = Release|x64 {492BAA3D-0D5D-478E-9765-500463AE69AA}.Debug|x64.ActiveCfg = Debug|x64 {492BAA3D-0D5D-478E-9765-500463AE69AA}.Debug|x64.Build.0 = Debug|x64 {492BAA3D-0D5D-478E-9765-500463AE69AA}.Release|x64.ActiveCfg = Release|x64 {492BAA3D-0D5D-478E-9765-500463AE69AA}.Release|x64.Build.0 = Release|x64 {4ED1E400-CF16-48C2-B176-2BF186E73531}.Debug|x64.ActiveCfg = Debug|x64 {4ED1E400-CF16-48C2-B176-2BF186E73531}.Debug|x64.Build.0 = Debug|x64 {4ED1E400-CF16-48C2-B176-2BF186E73531}.Release|x64.ActiveCfg = Release|x64 {4ED1E400-CF16-48C2-B176-2BF186E73531}.Release|x64.Build.0 = Release|x64 {537F759B-B617-48D9-A2F3-7FB769A8F9B7}.Debug|x64.ActiveCfg = Debug|x64 {537F759B-B617-48D9-A2F3-7FB769A8F9B7}.Debug|x64.Build.0 = Debug|x64 {537F759B-B617-48D9-A2F3-7FB769A8F9B7}.Release|x64.ActiveCfg = Release|x64 {537F759B-B617-48D9-A2F3-7FB769A8F9B7}.Release|x64.Build.0 = Release|x64 {6AE1B8BE-D46A-4E99-87A2-F160FB950DCA}.Debug|x64.ActiveCfg = Debug|x64 {6AE1B8BE-D46A-4E99-87A2-F160FB950DCA}.Debug|x64.Build.0 = Debug|x64 {6AE1B8BE-D46A-4E99-87A2-F160FB950DCA}.Release|x64.ActiveCfg = Release|x64 {6AE1B8BE-D46A-4E99-87A2-F160FB950DCA}.Release|x64.Build.0 = Release|x64 {6D7C1169-3246-465F-B630-ECFEF4F3179A}.Debug|x64.ActiveCfg = Debug|x64 {6D7C1169-3246-465F-B630-ECFEF4F3179A}.Debug|x64.Build.0 = Debug|x64 {6D7C1169-3246-465F-B630-ECFEF4F3179A}.Release|x64.ActiveCfg = Release|x64 {6D7C1169-3246-465F-B630-ECFEF4F3179A}.Release|x64.Build.0 = Release|x64 {715EADD7-0FFE-4F1F-94E7-49302968DF79}.Debug|x64.ActiveCfg = Debug|x64 {715EADD7-0FFE-4F1F-94E7-49302968DF79}.Debug|x64.Build.0 = Debug|x64 {715EADD7-0FFE-4F1F-94E7-49302968DF79}.Release|x64.ActiveCfg = Release|x64 {715EADD7-0FFE-4F1F-94E7-49302968DF79}.Release|x64.Build.0 = Release|x64 {718CA6FA-6446-4E43-83DF-BA4E85E5886B}.Debug|x64.ActiveCfg = Debug|x64 {718CA6FA-6446-4E43-83DF-BA4E85E5886B}.Debug|x64.Build.0 = Debug|x64 {718CA6FA-6446-4E43-83DF-BA4E85E5886B}.Release|x64.ActiveCfg = Release|x64 {718CA6FA-6446-4E43-83DF-BA4E85E5886B}.Release|x64.Build.0 = Release|x64 {74243B75-816C-4077-8DF0-98D2C78B0E5D}.Debug|x64.ActiveCfg = Debug|x64 {74243B75-816C-4077-8DF0-98D2C78B0E5D}.Debug|x64.Build.0 = Debug|x64 {74243B75-816C-4077-8DF0-98D2C78B0E5D}.Release|x64.ActiveCfg = Release|x64 {74243B75-816C-4077-8DF0-98D2C78B0E5D}.Release|x64.Build.0 = Release|x64 {7E0106F8-A597-48D5-B4F2-E0FC4D95EE95}.Debug|x64.ActiveCfg = Debug|x64 {7E0106F8-A597-48D5-B4F2-E0FC4D95EE95}.Debug|x64.Build.0 = Debug|x64 
{7E0106F8-A597-48D5-B4F2-E0FC4D95EE95}.Release|x64.ActiveCfg = Release|x64 {7E0106F8-A597-48D5-B4F2-E0FC4D95EE95}.Release|x64.Build.0 = Release|x64 {89B6AF14-08A0-437A-B31D-A8A3492FA965}.Debug|x64.ActiveCfg = Debug|x64 {89B6AF14-08A0-437A-B31D-A8A3492FA965}.Debug|x64.Build.0 = Debug|x64 {89B6AF14-08A0-437A-B31D-A8A3492FA965}.Release|x64.ActiveCfg = Release|x64 {89B6AF14-08A0-437A-B31D-A8A3492FA965}.Release|x64.Build.0 = Release|x64 {8A0FA780-068A-4534-AA2F-4FF4CF977AF2}.Debug|x64.ActiveCfg = Debug|x64 {8A0FA780-068A-4534-AA2F-4FF4CF977AF2}.Debug|x64.Build.0 = Debug|x64 {8A0FA780-068A-4534-AA2F-4FF4CF977AF2}.Release|x64.ActiveCfg = Release|x64 {8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.ActiveCfg = Debug|x64 {8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.Build.0 = Debug|x64 {8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.ActiveCfg = Release|x64 {8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.Build.0 = Release|x64 {901F04DB-E1A5-4A41-8B81-9D31C19ACD59}.Debug|x64.ActiveCfg = Debug|x64 {901F04DB-E1A5-4A41-8B81-9D31C19ACD59}.Debug|x64.Build.0 = Debug|x64 {901F04DB-E1A5-4A41-8B81-9D31C19ACD59}.Release|x64.ActiveCfg = Release|x64 {901F04DB-E1A5-4A41-8B81-9D31C19ACD59}.Release|x64.Build.0 = Release|x64 {9186EAC4-2F34-4F17-B940-6585D7869BCD}.Debug|x64.ActiveCfg = Debug|x64 {9186EAC4-2F34-4F17-B940-6585D7869BCD}.Debug|x64.Build.0 = Debug|x64 {9186EAC4-2F34-4F17-B940-6585D7869BCD}.Release|x64.ActiveCfg = Release|x64 {9186EAC4-2F34-4F17-B940-6585D7869BCD}.Release|x64.Build.0 = Release|x64 {A57D9365-172E-4782-ADC6-82A594E30943}.Debug|x64.ActiveCfg = Debug|x64 {A57D9365-172E-4782-ADC6-82A594E30943}.Debug|x64.Build.0 = Debug|x64 {A57D9365-172E-4782-ADC6-82A594E30943}.Release|x64.ActiveCfg = Release|x64 {A57D9365-172E-4782-ADC6-82A594E30943}.Release|x64.Build.0 = Release|x64 {ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296}.Debug|x64.ActiveCfg = Debug|x64 {ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296}.Debug|x64.Build.0 = Debug|x64 {ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296}.Release|x64.ActiveCfg = Release|x64 {ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296}.Release|x64.Build.0 = Release|x64 {AE9E908D-BAEC-491F-9914-436B3CE35E94}.Debug|x64.ActiveCfg = Debug|x64 {AE9E908D-BAEC-491F-9914-436B3CE35E94}.Debug|x64.Build.0 = Debug|x64 {AE9E908D-BAEC-491F-9914-436B3CE35E94}.Release|x64.ActiveCfg = Release|x64 {AE9E908D-BAEC-491F-9914-436B3CE35E94}.Release|x64.Build.0 = Release|x64 {BF3B6C3A-3073-4AD4-BB41-A41047231982}.Debug|x64.ActiveCfg = Debug|x64 {BF3B6C3A-3073-4AD4-BB41-A41047231982}.Debug|x64.Build.0 = Debug|x64 {BF3B6C3A-3073-4AD4-BB41-A41047231982}.Release|x64.ActiveCfg = Release|x64 {BF3B6C3A-3073-4AD4-BB41-A41047231982}.Release|x64.Build.0 = Release|x64 {C00B4A26-6C57-4968-AED5-B45FD31A22E7}.Debug|x64.ActiveCfg = Debug|x64 {C00B4A26-6C57-4968-AED5-B45FD31A22E7}.Debug|x64.Build.0 = Debug|x64 {C00B4A26-6C57-4968-AED5-B45FD31A22E7}.Release|x64.ActiveCfg = Release|x64 {C00B4A26-6C57-4968-AED5-B45FD31A22E7}.Release|x64.Build.0 = Release|x64 {C0E811E0-8942-4CFD-A817-74D99E9E6577}.Debug|x64.ActiveCfg = Debug|x64 {C0E811E0-8942-4CFD-A817-74D99E9E6577}.Debug|x64.Build.0 = Debug|x64 {C0E811E0-8942-4CFD-A817-74D99E9E6577}.Release|x64.ActiveCfg = Release|x64 {C0E811E0-8942-4CFD-A817-74D99E9E6577}.Release|x64.Build.0 = Release|x64 {C3A59B21-A287-4631-B4EC-F4A57D26A14F}.Debug|x64.ActiveCfg = Debug|x64 {C3A59B21-A287-4631-B4EC-F4A57D26A14F}.Debug|x64.Build.0 = Debug|x64 {C3A59B21-A287-4631-B4EC-F4A57D26A14F}.Release|x64.ActiveCfg = Release|x64 {C3A59B21-A287-4631-B4EC-F4A57D26A14F}.Release|x64.Build.0 = Release|x64 
{C84633F5-05B1-4AC1-A074-104D1DB2A91E}.Debug|x64.ActiveCfg = Debug|x64 {C84633F5-05B1-4AC1-A074-104D1DB2A91E}.Debug|x64.Build.0 = Debug|x64 {C84633F5-05B1-4AC1-A074-104D1DB2A91E}.Release|x64.ActiveCfg = Release|x64 {C84633F5-05B1-4AC1-A074-104D1DB2A91E}.Release|x64.Build.0 = Release|x64 {C973CD39-D63B-4F5C-BE1D-DED17388B5A4}.Debug|x64.ActiveCfg = Debug|x64 {C973CD39-D63B-4F5C-BE1D-DED17388B5A4}.Debug|x64.Build.0 = Debug|x64 {C973CD39-D63B-4F5C-BE1D-DED17388B5A4}.Release|x64.ActiveCfg = Release|x64 {C973CD39-D63B-4F5C-BE1D-DED17388B5A4}.Release|x64.Build.0 = Release|x64 {CA4BBB24-D33E-42E2-A495-F10D80DE8C1D}.Debug|x64.ActiveCfg = Debug|x64 {CA4BBB24-D33E-42E2-A495-F10D80DE8C1D}.Debug|x64.Build.0 = Debug|x64 {CA4BBB24-D33E-42E2-A495-F10D80DE8C1D}.Release|x64.ActiveCfg = Release|x64 {CD4B9690-7A06-4F7A-8492-9336979EE7E9}.Debug|x64.ActiveCfg = Debug|x64 {CD4B9690-7A06-4F7A-8492-9336979EE7E9}.Debug|x64.Build.0 = Debug|x64 {CD4B9690-7A06-4F7A-8492-9336979EE7E9}.Release|x64.ActiveCfg = Release|x64 {CD4B9690-7A06-4F7A-8492-9336979EE7E9}.Release|x64.Build.0 = Release|x64 {CD7A18D5-55D9-4922-A000-FFAA08ABB006}.Debug|x64.ActiveCfg = Debug|x64 {CD7A18D5-55D9-4922-A000-FFAA08ABB006}.Debug|x64.Build.0 = Debug|x64 {CD7A18D5-55D9-4922-A000-FFAA08ABB006}.Release|x64.ActiveCfg = Release|x64 {CD7A18D5-55D9-4922-A000-FFAA08ABB006}.Release|x64.Build.0 = Release|x64 {CE3F2DFB-8470-4802-AD37-21CAF6CB2681}.Debug|x64.ActiveCfg = Debug|x64 {CE3F2DFB-8470-4802-AD37-21CAF6CB2681}.Debug|x64.Build.0 = Debug|x64 {CE3F2DFB-8470-4802-AD37-21CAF6CB2681}.Release|x64.ActiveCfg = Release|x64 {CE3F2DFB-8470-4802-AD37-21CAF6CB2681}.Release|x64.Build.0 = Release|x64 {D829DB63-E046-474D-8EA3-43A6659294D8}.Debug|x64.ActiveCfg = Debug|x64 {D829DB63-E046-474D-8EA3-43A6659294D8}.Debug|x64.Build.0 = Debug|x64 {D829DB63-E046-474D-8EA3-43A6659294D8}.Release|x64.ActiveCfg = Release|x64 {D829DB63-E046-474D-8EA3-43A6659294D8}.Release|x64.Build.0 = Release|x64 {F03DABEE-A03E-4437-BFD3-D012836F2D94}.Debug|x64.ActiveCfg = Debug|x64 {F03DABEE-A03E-4437-BFD3-D012836F2D94}.Debug|x64.Build.0 = Debug|x64 {F03DABEE-A03E-4437-BFD3-D012836F2D94}.Release|x64.ActiveCfg = Release|x64 {F03DABEE-A03E-4437-BFD3-D012836F2D94}.Release|x64.Build.0 = Release|x64 {F13108C4-4C86-4D56-A317-A4E5892A8AF7}.Debug|x64.ActiveCfg = Debug|x64 {F13108C4-4C86-4D56-A317-A4E5892A8AF7}.Debug|x64.Build.0 = Debug|x64 {F13108C4-4C86-4D56-A317-A4E5892A8AF7}.Release|x64.ActiveCfg = Release|x64 {F13108C4-4C86-4D56-A317-A4E5892A8AF7}.Release|x64.Build.0 = Release|x64 {FD726AA3-D4FA-4597-B435-08CC7752888D}.Debug|x64.ActiveCfg = Debug|x64 {FD726AA3-D4FA-4597-B435-08CC7752888D}.Debug|x64.Build.0 = Debug|x64 {FD726AA3-D4FA-4597-B435-08CC7752888D}.Release|x64.ActiveCfg = Release|x64 {FD726AA3-D4FA-4597-B435-08CC7752888D}.Release|x64.Build.0 = Release|x64 {FF374D62-CBCF-401E-9A02-1D3DB8BE16E4}.Debug|x64.ActiveCfg = Debug|x64 {FF374D62-CBCF-401E-9A02-1D3DB8BE16E4}.Debug|x64.Build.0 = Debug|x64 {FF374D62-CBCF-401E-9A02-1D3DB8BE16E4}.Release|x64.ActiveCfg = Release|x64 {FF374D62-CBCF-401E-9A02-1D3DB8BE16E4}.Release|x64.Build.0 = Release|x64 EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSection GlobalSection(NestedProjects) = preSolution {02BC3B44-C7F1-4793-86C1-6F36CA8A7F53} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {04345B7D-B0A1-405B-8BB2-5B98A3400FEF} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {063037B2-CA35-4520-811C-19D9C4ED891E} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {08762559-E9DF-475B-BA99-49F4B5A1D80B} = 
{853D45D8-980C-4991-B62A-DAC6FD245402} {08B62E36-63D2-4FF1-A605-4BBABAEE73FB} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {0CC6D525-806E-433F-AB4A-6CFD546418B1} = {853D45D8-980C-4991-B62A-DAC6FD245402} {179BEB5A-2C90-44F5-A734-FA756A5E668C} = {F09A0864-9221-47AD-872F-D4538104D747} {1F2E1C51-2B14-4047-BE6D-52E00FC3C780} = {B870D8A6-12CD-4DD0-B843-833695C2310A} {25B5C601-03D7-4861-9C0F-7F0453B04227} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {26D24B3D-22CE-44EB-AA21-2BF594F80520} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {2B1A5104-A324-4D02-B5C7-D021FB8F880C} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {2E7E8487-0BB0-4E8A-8672-ED8ABD80D468} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {3AB2F5A9-5C1E-4077-811A-2F96BCF9EE89} = {F18C84B3-7898-4324-9D75-99A6048F442D} {3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {40DC66AD-F66D-4194-B9A4-A3A2222516FE} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {45027FC5-4A32-47BD-AC5B-66CC7616B1D2} = {9A8482A7-BF0C-423D-8266-189456ED41F6} {45E74E38-35CA-4CB6-8965-BC20D39659AF} = {746BA101-5C93-42A5-AC7A-64DCEB186572} {4C291EEB-3874-4724-9CC2-1335D13FF0EE} = {746BA101-5C93-42A5-AC7A-64DCEB186572} {4ED1E400-CF16-48C2-B176-2BF186E73531} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {537F759B-B617-48D9-A2F3-7FB769A8F9B7} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {6AE1B8BE-D46A-4E99-87A2-F160FB950DCA} = {B870D8A6-12CD-4DD0-B843-833695C2310A} {6D7C1169-3246-465F-B630-ECFEF4F3179A} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {715EADD7-0FFE-4F1F-94E7-49302968DF79} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {718CA6FA-6446-4E43-83DF-BA4E85E5886B} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {74243B75-816C-4077-8DF0-98D2C78B0E5D} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {746BA101-5C93-42A5-AC7A-64DCEB186572} = {853D45D8-980C-4991-B62A-DAC6FD245402} {774627B7-6532-4464-AEE4-02F72CA44F95} = {9A8482A7-BF0C-423D-8266-189456ED41F6} {7E0106F8-A597-48D5-B4F2-E0FC4D95EE95} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {877E7D1D-8150-4FE5-A139-B6FBCEAEC393} = {853D45D8-980C-4991-B62A-DAC6FD245402} {89B6AF14-08A0-437A-B31D-A8A3492FA965} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {8A0FA780-068A-4534-AA2F-4FF4CF977AF2} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {8D6BB292-9E1C-413D-9F98-4864BDC1514A} = {853D45D8-980C-4991-B62A-DAC6FD245402} {901F04DB-E1A5-4A41-8B81-9D31C19ACD59} = {95FAF291-03D1-42FC-9C10-424D551D475D} {9186EAC4-2F34-4F17-B940-6585D7869BCD} = {95FAF291-03D1-42FC-9C10-424D551D475D} {95FAF291-03D1-42FC-9C10-424D551D475D} = {853D45D8-980C-4991-B62A-DAC6FD245402} {9A8482A7-BF0C-423D-8266-189456ED41F6} = {95FAF291-03D1-42FC-9C10-424D551D475D} {A57D9365-172E-4782-ADC6-82A594E30943} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {AE9E908D-BAEC-491F-9914-436B3CE35E94} = {B870D8A6-12CD-4DD0-B843-833695C2310A} {B870D8A6-12CD-4DD0-B843-833695C2310A} = {746BA101-5C93-42A5-AC7A-64DCEB186572} {BF3B6C3A-3073-4AD4-BB41-A41047231982} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {C00B4A26-6C57-4968-AED5-B45FD31A22E7} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {C0E811E0-8942-4CFD-A817-74D99E9E6577} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {C3A59B21-A287-4631-B4EC-F4A57D26A14F} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {C84633F5-05B1-4AC1-A074-104D1DB2A91E} = {EA0D2458-5FCD-4DAB-B07D-229327B98BEB} {C973CD39-D63B-4F5C-BE1D-DED17388B5A4} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {CA4BBB24-D33E-42E2-A495-F10D80DE8C1D} = 
{4C291EEB-3874-4724-9CC2-1335D13FF0EE} {CD4B9690-7A06-4F7A-8492-9336979EE7E9} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {CD7A18D5-55D9-4922-A000-FFAA08ABB006} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} {CE3F2DFB-8470-4802-AD37-21CAF6CB2681} = {746BA101-5C93-42A5-AC7A-64DCEB186572} {D829DB63-E046-474D-8EA3-43A6659294D8} = {F09A0864-9221-47AD-872F-D4538104D747} {E23BB160-006E-44F2-8FB4-3A2240BBC20C} = {746BA101-5C93-42A5-AC7A-64DCEB186572} {EA0D2458-5FCD-4DAB-B07D-229327B98BEB} = {0CC6D525-806E-433F-AB4A-6CFD546418B1} {F03DABEE-A03E-4437-BFD3-D012836F2D94} = {B870D8A6-12CD-4DD0-B843-833695C2310A} {F09A0864-9221-47AD-872F-D4538104D747} = {746BA101-5C93-42A5-AC7A-64DCEB186572} {F13108C4-4C86-4D56-A317-A4E5892A8AF7} = {B870D8A6-12CD-4DD0-B843-833695C2310A} {F8CCA5AE-2D75-4C79-BEAB-2588CD5956C8} = {853D45D8-980C-4991-B62A-DAC6FD245402} {FD726AA3-D4FA-4597-B435-08CC7752888D} = {4C291EEB-3874-4724-9CC2-1335D13FF0EE} {FF374D62-CBCF-401E-9A02-1D3DB8BE16E4} = {45E74E38-35CA-4CB6-8965-BC20D39659AF} EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution SolutionGuid = {5E690324-2D48-486A-8D3C-DCB520D3F693} EndGlobalSection EndGlobal vmem-1.8/src/common.inc000066400000000000000000000223511361505074100151050ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/common.inc -- common Makefile rules for VMEM # TOP := $(dir $(lastword $(MAKEFILE_LIST))).. 
# import user variables ifneq ($(wildcard $(TOP)/user.mk),) include $(TOP)/user.mk endif LN = ln OBJCOPY ?= objcopy MKDIR = mkdir INSTALL = install CP = cp CSTYLE = $(TOP)/utils/cstyle CSTYLEON ?= 0 STYLE_CHECK = $(TOP)/utils/style_check.sh CHECK_SHEBANG = $(TOP)/utils/check-shebang.sh CHECK_OS = $(TOP)/utils/check-os.sh OS_BANNED = $(TOP)/utils/os-banned COVERAGE = 0 FAULT_INJECTION ?= 0 PKG_CONFIG ?= pkg-config HEADERS = $(wildcard *.h) $(wildcard *.hpp) ifeq ($(SRCVERSION),) export SRCVERSION := $(shell $(TOP)/utils/version.sh $(TOP)) else export SRCVERSION endif ifeq ($(SRCVERSION),) $(error Cannot evaluate version) endif ifeq ($(CLANG_FORMAT),) ifeq ($(shell command -v clang-format-6.0 > /dev/null && echo y || echo n), y) export CLANG_FORMAT ?= clang-format-6.0 else export CLANG_FORMAT ?= clang-format endif endif GCOV_CFLAGS=-fprofile-arcs -ftest-coverage --coverage GCOV_LDFLAGS=-fprofile-arcs -ftest-coverage GCOV_LIBS=-lgcov LIBS += $(EXTRA_LIBS) ifeq ($(OS_KERNEL_NAME),) export OS_KERNEL_NAME := $(shell uname -s) endif osdep = $(1)_$(shell echo $(OS_KERNEL_NAME) | tr "[:upper:]" "[:lower:]")$(2) get_arch = $(shell $(CC) -dumpmachine | awk -F'[/-]' '{print $$1}') ifeq ($(ARCH),) export ARCH := $(call get_arch) endif ifeq ($(ARCH),amd64) override ARCH := x86_64 endif ifeq ($(ARCH),arm64) override ARCH := aarch64 endif ifeq ($(PKG_CONFIG_CHECKED),) ifeq ($(shell command -v $(PKG_CONFIG) && echo y || echo n), n) $(error $(PKG_CONFIG) not found) endif endif export PKG_CONFIG_CHECKED := y check_package = $(shell $(PKG_CONFIG) $(1) && echo y || echo n) check_flag = $(shell echo "int main(){return 0;}" |\ $(CC) $(CFLAGS) -Werror $(1) -x c -o /dev/null - 2>/dev/null && echo y || echo n) check_compiler = $(shell $(CC) --version | grep $(1) && echo y || echo n) check_Wconversion = $(shell echo "long random(void); char test(void); char test(void){char a = 0; char b = 'a'; char ret = random() == 1 ? a : b; return ret;}" |\ $(CC) -c $(CFLAGS) -Wconversion -x c -o /dev/null - 2>/dev/null && echo y || echo n) check_librt = $(shell echo "int main() { struct timespec t; return clock_gettime(CLOCK_MONOTONIC, &t); }" |\ $(CC) $(CFLAGS) -x c -include time.h -o /dev/null - 2>/dev/null && echo n || echo y) # XXX: required by clock_gettime(), if glibc version < 2.17 # The os_clock_gettime() function is now in OS abstraction layer, # linked to all the librariess, unit tests and benchmarks. 
ifeq ($(LIBRT_NEEDED),) export LIBRT_NEEDED := $(call check_librt) else export LIBRT_NEEDED endif ifeq ($(IS_ICC),) export IS_ICC := $(call check_compiler, icc) else export IS_ICC endif ifeq ($(WCONVERSION_AVAILABLE),) export WCONVERSION_AVAILABLE := $(call check_Wconversion) else export WCONVERSION_AVAILABLE endif ifeq ($(WUNREACHABLE_CODE_RETURN_AVAILABLE),) ifeq ($(IS_ICC), n) export WUNREACHABLE_CODE_RETURN_AVAILABLE := $(call check_flag, -Wunreachable-code-return) else export WUNREACHABLE_CODE_RETURN_AVAILABLE := n endif else export WUNREACHABLE_CODE_RETURN_AVAILABLE endif ifeq ($(WMISSING_VARIABLE_DECLARATIONS_AVAILABLE),) ifeq ($(IS_ICC), n) export WMISSING_VARIABLE_DECLARATIONS_AVAILABLE := $(call check_flag, -Wmissing-variable-declarations) else export WMISSING_VARIABLE_DECLARATIONS_AVAILABLE := n endif else export WMISSING_VARIABLE_DECLARATIONS_AVAILABLE endif ifeq ($(WFLOAT_EQUAL_AVAILABLE),) ifeq ($(IS_ICC), n) export WFLOAT_EQUAL_AVAILABLE := $(call check_flag, -Wfloat-equal) else export WFLOAT_EQUAL_AVAILABLE := n endif else export WFLOAT_EQUAL_AVAILABLE endif ifeq ($(WSWITCH_DEFAULT_AVAILABLE),) ifeq ($(IS_ICC), n) export WSWITCH_DEFAULT_AVAILABLE := $(call check_flag, -Wswitch-default) else export WSWITCH_DEFAULT_AVAILABLE := n endif else export WSWITCH_DEFAULT_AVAILABLE endif ifeq ($(WCAST_FUNCTION_TYPE_AVAILABLE),) ifeq ($(IS_ICC), n) export WCAST_FUNCTION_TYPE_AVAILABLE := $(call check_flag, -Wcast-function-type) else export WCAST_FUNCTION_TYPE_AVAILABLE := n endif else export WCAST_FUNCTION_TYPE_AVAILABLE endif ifeq ($(WSTRINGOP_TRUNCATION_AVAILABLE),) export WSTRINGOP_TRUNCATION_AVAILABLE := $(call check_flag, -Wstringop-truncation) else export WSTRINGOP_TRUNCATION_AVAILABLE endif install_recursive = $(shell cd $(1) && find . -type f -exec install -m $(2) -D {} $(3)/{} \;) install_recursive_filter = $(shell cd $(1) && find . 
-type f -name "$(2)" -exec install -m $(3) -D {} $(4)/{} \;) define create-deps @cp $(objdir)/$*.d $(objdir)/.deps/$*.P; \ sed -e 's/#.*//' -e 's/^[^:]*: *//' -e 's/ *\\$$//' \ -e '/^$$/ d' -e 's/$$/ :/' < $(objdir)/$*.d >> $(objdir)/.deps/$*.P; \ $(RM) -f $(objdir)/$*.d endef check_defined = \ $(strip $(foreach 1,$1, \ $(call __check_defined,$1,$(strip $(value 2))))) __check_defined = \ $(if $(value $1),, \ $(error Undefined $1$(if $2, ($2)))) export prefix = /usr/local export exec_prefix := $(prefix) export sysconfdir := $(prefix)/etc export datarootdir := $(prefix)/share export mandir := $(datarootdir)/man export docdir := $(datarootdir)/doc export man1dir := $(mandir)/man1 export man3dir := $(mandir)/man3 export man5dir := $(mandir)/man5 export man7dir := $(mandir)/man7 export cstyle_bin := $(CSTYLE) export clang_format_bin := $(CLANG_FORMAT) ifneq ($(wildcard $(exec_prefix)/x86_64-linux-gnu),) LIB_PREFIX ?= x86_64-linux-gnu/lib endif ifneq ($(wildcard $(exec_prefix)/lib64),) LIB_PREFIX ?= lib64 endif LIB_PREFIX ?= lib all: cstyle-%: $(STYLE_CHECK) $* $(wildcard *.[ch]) $(wildcard *.[ch]pp) cstyle: cstyle-check format: cstyle-format ifeq ($(CSTYLEON),1) define check-cstyle @$(STYLE_CHECK) check $1 && if [ "$2" != "" ]; then mkdir -p `dirname $2` && touch $2; fi endef else ifeq ($(CSTYLEON),2) define check-cstyle @$(STYLE_CHECK) check $1 && if [ "$2" != "" ]; then mkdir -p `dirname $2` && touch $2; fi || true endef else define check-cstyle endef endif define check-os $(CHECK_OS) $(OS_BANNED) $(1) $(2) endef # XXX: to allow gcov tool to connect coverage with source code, we have to # use absolute path to source files ifeq ($(COVERAGE),1) define coverage-path `readlink -f $(1)` endef else define coverage-path $(1) endef endif define sub-target-foreach $(1)-$(2): $$(MAKE) -C $1 $2 ifeq ($(3),y) ifeq ($(custom_build),) $$(MAKE) -C $1 $2 DEBUG=1 endif endif endef define sub-target $(foreach f, $(1), $(eval $(call sub-target-foreach, $f,$(2),$(3)))) endef ifneq ($(wildcard $(prefix)/x86_64-linux-gnu),) INC_PREFIX ?= x86_64-linux-gnu/include endif INC_PREFIX ?= include test_build=$(addprefix "-b ", $(TEST_BUILD)) test_time=$(addprefix " -o ", $(TEST_TIME)) test_memcheck=$(addprefix " -m ", $(MEMCHECK)) test_pmemcheck=$(addprefix " -p ", $(PMEMCHECK)) test_helgrind=$(addprefix " -e ", $(HELGRIND)) test_drd=$(addprefix " -d ", $(DRD)) ifeq ($(CHECK_POOL),y) test_check_pool=" -c " endif RUNTEST_OPTIONS := "$(test_build)$(test_time)$(test_check_pool)" RUNTEST_OPTIONS += "$(test_memcheck)$(test_pmemcheck)$(test_helgrind)$(test_drd)" export libdir := $(exec_prefix)/$(LIB_PREFIX) export includedir := $(prefix)/$(INC_PREFIX) export pkgconfigdir := $(libdir)/pkgconfig export bindir := $(exec_prefix)/bin sparse-c = $(shell for c in *.c; do sparse -Wsparse-all -Wno-declaration-after-statement $(CFLAGS) $(INCS) $$c || true; done) ifeq ($(USE_LIBUNWIND),) export USE_LIBUNWIND := $(call check_package, libunwind) ifeq ($(USE_LIBUNWIND),y) export LIBUNWIND_LIBS := $(shell $(PKG_CONFIG) --libs libunwind) endif else export USE_LIBUNWIND export LIBUNWIND_LIBS endif ifeq ($(OS_KERNEL_NAME),FreeBSD) GLIBC_CXXFLAGS=-D_GLIBCXX_USE_C99 UNIX98_CFLAGS= OS_INCS=-I$(TOP)/src/freebsd/include -I/usr/local/include OS_LIBS=-L/usr/local/lib LIBDL= LIBUTIL=-lutil LIBUUID=-luuid else GLIBC_CXXFLAGS= UNIX98_CFLAGS=-D__USE_UNIX98 OS_INCS= OS_LIBS= LIBDL=-ldl LIBUTIL= endif 
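The compiler-flag probes defined above all follow one pattern: common.inc runs check_flag once, caches the answer in an exported *_AVAILABLE variable so sub-makes do not repeat the probe, and Makefile.inc turns a positive answer into an extra warning flag. A minimal sketch of that pattern follows; the -Wsome-new-warning flag and the WSOME_NEW_WARNING_AVAILABLE variable are hypothetical names used only for illustration and are not part of the actual build.
# common.inc: probe the compiler once and export the cached result
ifeq ($(WSOME_NEW_WARNING_AVAILABLE),)
export WSOME_NEW_WARNING_AVAILABLE := $(call check_flag, -Wsome-new-warning)
else
export WSOME_NEW_WARNING_AVAILABLE
endif
# Makefile.inc: consume the cached result when assembling the default CFLAGS
ifeq ($(WSOME_NEW_WARNING_AVAILABLE), y)
DEFAULT_CFLAGS += -Wsome-new-warning
endif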
vmem-1.8/src/common/000077500000000000000000000000001361505074100144075ustar00rootroot00000000000000vmem-1.8/src/common/.cstyleignore000066400000000000000000000000251361505074100171140ustar00rootroot00000000000000pmemcompat.h queue.h vmem-1.8/src/common/Makefile000066400000000000000000000032621361505074100160520ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/common/Makefile -- Makefile for common # LIBRARY_NAME = pmemcommon include pmemcommon.inc include ../Makefile.inc CFLAGS += -DUSE_LIBDL vmem-1.8/src/common/alloc.c000066400000000000000000000076561361505074100156630ustar00rootroot00000000000000/* * Copyright 2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include #include "alloc.h" #include "fault_injection.h" #include "out.h" Malloc_func fn_malloc = malloc; Realloc_func fn_realloc = realloc; #if FAULT_INJECTION static __thread int malloc_num; static __thread int fail_malloc_num; static __thread const char *fail_malloc_from; void * _flt_Malloc(size_t size, const char *func) { if (fail_malloc_from && strcmp(func, fail_malloc_from) == 0) { if (++malloc_num == fail_malloc_num) { errno = ENOMEM; return NULL; } } return fn_malloc(size); } static __thread int realloc_num; static __thread int fail_realloc_num; static __thread const char *fail_realloc_from; void * _flt_Realloc(void *ptr, size_t size, const char *func) { if (fail_realloc_from && strcmp(func, fail_realloc_from) == 0) { if (++realloc_num == fail_realloc_num) { errno = ENOMEM; return NULL; } } return fn_realloc(ptr, size); } void common_inject_fault_at(enum pmem_allocation_type type, int nth, const char *at) { switch (type) { case PMEM_MALLOC: malloc_num = 0; fail_malloc_num = nth; fail_malloc_from = at; break; case PMEM_REALLOC: realloc_num = 0; fail_realloc_num = nth; fail_realloc_from = at; break; default: FATAL("unknown allocation type"); } } int common_fault_injection_enabled(void) { return 1; } #else void *_Malloc(size_t size) { return fn_malloc(size); } void *_Realloc(void *ptr, size_t size) { return fn_realloc(ptr, size); } #endif void set_func_malloc(void *(*malloc_func)(size_t size)) { fn_malloc = (malloc_func == NULL) ? malloc : malloc_func; } void set_func_realloc(void *(*realloc_func)(void *ptr, size_t size)) { fn_realloc = (realloc_func == NULL) ? realloc : realloc_func; } /* * our versions of malloc & friends start off pointing to the libc versions */ Free_func Free = free; Strdup_func Strdup = strdup; /* * Zalloc -- allocate zeroed memory */ void * Zalloc(size_t sz) { void *ret = Malloc(sz); if (!ret) return NULL; return memset(ret, 0, sz); } /* * util_set_alloc_funcs -- allow one to override malloc, etc. */ void util_set_alloc_funcs(void *(*malloc_func)(size_t size), void (*free_func)(void *ptr), void *(*realloc_func)(void *ptr, size_t size), char *(*strdup_func)(const char *s)) { set_func_malloc(malloc_func); Free = (free_func == NULL) ? free : free_func; set_func_realloc(realloc_func); Strdup = (strdup_func == NULL) ? strdup : strdup_func; } vmem-1.8/src/common/alloc.h000066400000000000000000000051211361505074100156510ustar00rootroot00000000000000/* * Copyright 2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef COMMON_ALLOC_H #define COMMON_ALLOC_H #include #ifdef __cplusplus extern "C" { #endif typedef void *(*Malloc_func)(size_t size); typedef void *(*Realloc_func)(void *ptr, size_t size); extern Malloc_func fn_malloc; extern Realloc_func fn_realloc; #if FAULT_INJECTION void *_flt_Malloc(size_t, const char *); void *_flt_Realloc(void *, size_t, const char *); #define Malloc(size) _flt_Malloc(size, __func__) #define Realloc(ptr, size) _flt_Realloc(ptr, size, __func__) #else void *_Malloc(size_t); void *_Realloc(void *, size_t); #define Malloc(size) _Malloc(size) #define Realloc(ptr, size) _Realloc(ptr, size) #endif void set_func_malloc(void *(*malloc_func)(size_t size)); void set_func_realloc(void *(*realloc_func)(void *ptr, size_t size)); /* * overridable names for malloc & friends used by this library */ typedef void (*Free_func)(void *ptr); typedef char *(*Strdup_func)(const char *s); extern Free_func Free; extern Strdup_func Strdup; extern void *Zalloc(size_t sz); #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/common.rc000066400000000000000000000065001361505074100162260ustar00rootroot00000000000000/* * Copyright 2016-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * common.rc -- common part of PMDK rc files */ #include #include "srcversion.h" #define VERSION(major, minor, build, revision) major, minor, build, revision #ifdef _DEBUG #define VERSION_DEBUG VS_FF_DEBUG #else #define VERSION_DEBUG 0 #endif #ifdef PRERELEASE #define VERSION_PRERELEASE VS_FF_PRERELEASE #else #define VERSION_PRERELEASE 0 #endif #ifdef BUGFIX #define VERSION_PATCHED VS_FF_PATCHED #else #define VERSION_PATCHED 0 #endif #ifdef PRIVATE #define VERSION_PRIVATE VS_FF_PRIVATE #else #define VERSION_PRIVATE 0 #endif #ifdef CUSTOM #define VERSION_SPECIAL VS_FF_SPECIALBUILD #else #define VERSION_SPECIAL 0 #endif #define VERSION_PRIVATEBUILD VS_FF_PRIVATEBUILD #define VER_PATCHED VS_FF_PATCHED VS_VERSION_INFO VERSIONINFO FILEVERSION VERSION(MAJOR, MINOR, BUILD, REVISION) PRODUCTVERSION VERSION(MAJOR, MINOR, BUILD, REVISION) FILEFLAGSMASK VS_FFI_FILEFLAGSMASK FILEFLAGS (VERSION_PRIVATEBUILD | VERSION_PRERELEASE | VERSION_DEBUG | VERSION_SPECIAL | VERSION_PATCHED) FILEOS VOS__WINDOWS32 FILETYPE TYPE FILESUBTYPE VFT2_UNKNOWN BEGIN BLOCK "StringFileInfo" BEGIN BLOCK "040904b0" BEGIN VALUE "CompanyName", "Intel" VALUE "FileDescription", DESCRIPTION VALUE "FileVersion", SRCVERSION VALUE "InternalName", "VMEM" VALUE "LegalCopyright", "Copyright 2014-2019, Intel Corporation" VALUE "OriginalFilename", FILE_NAME VALUE "ProductName", "Volatile Persistent Memory Allocator" VALUE "ProductVersion", SRCVERSION #if VERSION_SPECIAL == VS_FF_SPECIALBUILD VALUE "SpecialBuild", VERSION_CUSTOM_MSG #endif #if VERSION_PRIVATEBUILD == VS_FF_SPECIALBUILD VALUE "PrivateBuild", "Not a release build" #endif END END BLOCK "VarFileInfo" BEGIN /* XXX: Update to UNICODE */ VALUE "Translation", 0x409, 0 END END vmem-1.8/src/common/ctl_fallocate.c000066400000000000000000000045431361505074100173550ustar00rootroot00000000000000/* * Copyright 2018-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ctl_fallocate.c -- implementation of the fallocate CTL namespace */ #include "ctl.h" #include "set.h" #include "out.h" #include "ctl_global.h" #include "file.h" static int CTL_READ_HANDLER(at_create)(void *ctx, enum ctl_query_source source, void *arg, struct ctl_indexes *indexes) { int *arg_out = arg; *arg_out = Fallocate_at_create; return 0; } static int CTL_WRITE_HANDLER(at_create)(void *ctx, enum ctl_query_source source, void *arg, struct ctl_indexes *indexes) { int arg_in = *(int *)arg; Fallocate_at_create = arg_in; return 0; } static struct ctl_argument CTL_ARG(at_create) = CTL_ARG_BOOLEAN; static const struct ctl_node CTL_NODE(fallocate)[] = { CTL_LEAF_RW(at_create), CTL_NODE_END }; void ctl_fallocate_register(void) { CTL_REGISTER_MODULE(NULL, fallocate); } vmem-1.8/src/common/dlsym.h000066400000000000000000000056701361505074100157200ustar00rootroot00000000000000/* * Copyright 2016-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * dlsym.h -- dynamic linking utilities with library-specific implementation */ #ifndef PMDK_DLSYM_H #define PMDK_DLSYM_H 1 #include "out.h" #if defined(USE_LIBDL) && !defined(_WIN32) #include /* * util_dlopen -- calls real dlopen() */ static inline void * util_dlopen(const char *filename) { LOG(3, "filename %s", filename); return dlopen(filename, RTLD_NOW); } /* * util_dlerror -- calls real dlerror() */ static inline char * util_dlerror(void) { return dlerror(); } /* * util_dlsym -- calls real dlsym() */ static inline void * util_dlsym(void *handle, const char *symbol) { LOG(3, "handle %p symbol %s", handle, symbol); return dlsym(handle, symbol); } /* * util_dlclose -- calls real dlclose() */ static inline int util_dlclose(void *handle) { LOG(3, "handle %p", handle); return dlclose(handle); } #else /* empty functions */ /* * util_dlopen -- empty function */ static inline void * util_dlopen(const char *filename) { errno = ENOSYS; return NULL; } /* * util_dlerror -- empty function */ static inline char * util_dlerror(void) { errno = ENOSYS; return NULL; } /* * util_dlsym -- empty function */ static inline void * util_dlsym(void *handle, const char *symbol) { errno = ENOSYS; return NULL; } /* * util_dlclose -- empty function */ static inline int util_dlclose(void *handle) { errno = ENOSYS; return 0; } #endif #endif vmem-1.8/src/common/errno_freebsd.h000066400000000000000000000035771361505074100174130ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * errno_freebsd.h -- map Linux errno's to something close on FreeBSD */ #ifndef PMDK_ERRNO_FREEBSD_H #define PMDK_ERRNO_FREEBSD_H 1 #ifdef __FreeBSD__ #define EBADFD EBADF #define ELIBACC EINVAL #define EMEDIUMTYPE EOPNOTSUPP #define ENOMEDIUM ENODEV #define EREMOTEIO EIO #endif #endif /* PMDK_ERRNO_FREEBSD_H */ vmem-1.8/src/common/fault_injection.h000066400000000000000000000041371361505074100177420ustar00rootroot00000000000000/* * Copyright 2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef COMMON_FAULT_INJECTION #define COMMON_FAULT_INJECTION #ifdef __cplusplus extern "C" { #endif enum pmem_allocation_type { PMEM_MALLOC, PMEM_REALLOC }; #if FAULT_INJECTION void common_inject_fault_at(enum pmem_allocation_type type, int nth, const char *at); int common_fault_injection_enabled(void); #else static inline void common_inject_fault_at(enum pmem_allocation_type type, int nth, const char *at) { abort(); } static inline int common_fault_injection_enabled(void) { return 0; } #endif #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/file.c000066400000000000000000000341661361505074100155040ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * file.c -- file utilities */ #include #include #include #include #include #include #include #include #include #if !defined(_WIN32) && !defined(__FreeBSD__) #include #endif #include "file.h" #include "os.h" #include "out.h" #include "mmap.h" #define MAX_SIZE_LENGTH 64 #define DEVICE_DAX_ZERO_LEN (2 * MEGABYTE) #ifndef _WIN32 /* * device_dax_size -- (internal) checks the size of a given dax device */ static ssize_t device_dax_size(const char *path) { LOG(3, "path \"%s\"", path); os_stat_t st; int olderrno; if (os_stat(path, &st) < 0) { ERR("!stat \"%s\"", path); return -1; } char spath[PATH_MAX]; snprintf(spath, PATH_MAX, "/sys/dev/char/%u:%u/size", os_major(st.st_rdev), os_minor(st.st_rdev)); LOG(4, "device size path \"%s\"", spath); int fd = os_open(spath, O_RDONLY); if (fd < 0) { ERR("!open \"%s\"", spath); return -1; } ssize_t size = -1; char sizebuf[MAX_SIZE_LENGTH + 1]; ssize_t nread; if ((nread = read(fd, sizebuf, MAX_SIZE_LENGTH)) < 0) { ERR("!read"); goto out; } sizebuf[nread] = 0; /* null termination */ char *endptr; olderrno = errno; errno = 0; size = strtoll(sizebuf, &endptr, 0); if (endptr == sizebuf || *endptr != '\n' || ((size == LLONG_MAX || size == LLONG_MIN) && errno == ERANGE)) { ERR("invalid device size %s", sizebuf); size = -1; goto out; } errno = olderrno; out: olderrno = errno; (void) os_close(fd); errno = olderrno; LOG(4, "device size %zu", size); return size; } #endif /* * util_file_exists -- checks whether file exists */ int util_file_exists(const char *path) { LOG(3, "path \"%s\"", path); if (os_access(path, F_OK) == 0) return 1; if (errno != ENOENT) { ERR("!os_access \"%s\"", path); return -1; } /* * ENOENT means that some component of a pathname does not exists. * * XXX - we should also call os_access on parent directory and * if this also results in ENOENT -1 should be returned. * * The problem is that we would need to use realpath, which fails * if file does not exist. 
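 *
 * A minimal sketch of that parent-directory check (hypothetical, not
 * part of this file), assuming dirname(3) from <libgen.h> and the
 * Strdup()/Free() wrappers from alloc.h, might look like:
 *
 *        char *copy = Strdup(path);
 *        if (copy == NULL)
 *                return -1;
 *        int rc = os_access(dirname(copy), F_OK);
 *        int oerrno = errno;
 *        Free(copy);
 *        if (rc != 0 && oerrno == ENOENT)
 *                return -1;
 *
 * The copy is needed because dirname() may modify its argument; this
 * approach avoids realpath() entirely, at the cost of not resolving
 * symlinks in the parent path.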
*/ return 0; } /* * util_stat_get_type -- checks whether stat structure describes * device dax or a normal file */ enum file_type util_stat_get_type(const os_stat_t *st) { #ifdef _WIN32 return TYPE_NORMAL; #else if (!S_ISCHR(st->st_mode)) { LOG(4, "not a character device"); return TYPE_NORMAL; } char spath[PATH_MAX]; snprintf(spath, PATH_MAX, "/sys/dev/char/%u:%u/subsystem", os_major(st->st_rdev), os_minor(st->st_rdev)); LOG(4, "device subsystem path \"%s\"", spath); char npath[PATH_MAX]; char *rpath = realpath(spath, npath); if (rpath == NULL) { ERR("!realpath \"%s\"", spath); return OTHER_ERROR; } char *basename = strrchr(rpath, '/'); if (!basename || strcmp("dax", basename + 1) != 0) { LOG(3, "%s path does not match device dax prefix path", rpath); errno = EINVAL; return OTHER_ERROR; } return TYPE_DEVDAX; #endif } /* * util_fd_get_type -- checks whether a file descriptor is associated * with a device dax or a normal file */ enum file_type util_fd_get_type(int fd) { LOG(3, "fd %d", fd); #ifdef _WIN32 return TYPE_NORMAL; #else os_stat_t st; if (os_fstat(fd, &st) < 0) { ERR("!fstat"); return OTHER_ERROR; } return util_stat_get_type(&st); #endif } /* * util_file_get_type -- checks whether the path points to a device dax, * normal file or non-existent file */ enum file_type util_file_get_type(const char *path) { LOG(3, "path \"%s\"", path); if (path == NULL) { ERR("invalid (NULL) path"); errno = EINVAL; return OTHER_ERROR; } int exists = util_file_exists(path); if (exists < 0) return OTHER_ERROR; if (!exists) return NOT_EXISTS; #ifdef _WIN32 return TYPE_NORMAL; #else os_stat_t st; if (os_stat(path, &st) < 0) { ERR("!stat"); return OTHER_ERROR; } return util_stat_get_type(&st); #endif } /* * util_file_get_size -- returns size of a file */ ssize_t util_file_get_size(const char *path) { LOG(3, "path \"%s\"", path); int file_type = util_file_get_type(path); if (file_type < 0) return -1; #ifndef _WIN32 if (file_type == TYPE_DEVDAX) { return device_dax_size(path); } #endif os_stat_t stbuf; if (os_stat(path, &stbuf) < 0) { ERR("!stat \"%s\"", path); return -1; } LOG(4, "file length %zu", stbuf.st_size); return stbuf.st_size; } /* * util_file_map_whole -- maps the entire file into memory */ void * util_file_map_whole(const char *path) { LOG(3, "path \"%s\"", path); int fd; int olderrno; void *addr = NULL; int flags = O_RDWR; #ifdef _WIN32 flags |= O_BINARY; #endif if ((fd = os_open(path, flags)) < 0) { ERR("!open \"%s\"", path); return NULL; } ssize_t size = util_file_get_size(path); if (size < 0) { LOG(2, "cannot determine file length \"%s\"", path); goto out; } addr = util_map(fd, (size_t)size, MAP_SHARED, 0, 0, NULL); if (addr == NULL) { LOG(2, "failed to map entire file \"%s\"", path); goto out; } out: olderrno = errno; (void) os_close(fd); errno = olderrno; return addr; } /* * util_file_zero -- zeroes the specified region of the file */ int util_file_zero(const char *path, os_off_t off, size_t len) { LOG(3, "path \"%s\" off %ju len %zu", path, off, len); int fd; int olderrno; int ret = 0; int flags = O_RDWR; #ifdef _WIN32 flags |= O_BINARY; #endif if ((fd = os_open(path, flags)) < 0) { ERR("!open \"%s\"", path); return -1; } ssize_t size = util_file_get_size(path); if (size < 0) { LOG(2, "cannot determine file length \"%s\"", path); ret = -1; goto out; } if (off > size) { LOG(2, "offset beyond file length, %ju > %ju", off, size); ret = -1; goto out; } if ((size_t)off + len > (size_t)size) { LOG(2, "requested size of write goes beyond the file length, " "%zu > %zu", (size_t)off + len, size); 
LOG(4, "adjusting len to %zu", size - off); len = (size_t)(size - off); } void *addr = util_map(fd, (size_t)size, MAP_SHARED, 0, 0, NULL); if (addr == NULL) { LOG(2, "failed to map entire file \"%s\"", path); ret = -1; goto out; } /* zero initialize the specified region */ memset((char *)addr + off, 0, len); util_unmap(addr, (size_t)size); out: olderrno = errno; (void) os_close(fd); errno = olderrno; return ret; } /* * util_file_pwrite -- writes to a file with an offset */ ssize_t util_file_pwrite(const char *path, const void *buffer, size_t size, os_off_t offset) { LOG(3, "path \"%s\" buffer %p size %zu offset %ju", path, buffer, size, offset); enum file_type type = util_file_get_type(path); if (type < 0) return -1; if (type == TYPE_NORMAL) { int fd = util_file_open(path, NULL, 0, O_RDWR); if (fd < 0) { LOG(2, "failed to open file \"%s\"", path); return -1; } ssize_t write_len = pwrite(fd, buffer, size, offset); int olderrno = errno; (void) os_close(fd); errno = olderrno; return write_len; } ssize_t file_size = util_file_get_size(path); if (file_size < 0) { LOG(2, "cannot determine file length \"%s\"", path); return -1; } size_t max_size = (size_t)(file_size - offset); if (size > max_size) { LOG(2, "requested size of write goes beyond the file length, " "%zu > %zu", size, max_size); LOG(4, "adjusting size to %zu", max_size); size = max_size; } void *addr = util_file_map_whole(path); if (addr == NULL) { LOG(2, "failed to map entire file \"%s\"", path); return -1; } memcpy(ADDR_SUM(addr, offset), buffer, size); util_unmap(addr, (size_t)file_size); return (ssize_t)size; } /* * util_file_pread -- reads from a file with an offset */ ssize_t util_file_pread(const char *path, void *buffer, size_t size, os_off_t offset) { LOG(3, "path \"%s\" buffer %p size %zu offset %ju", path, buffer, size, offset); enum file_type type = util_file_get_type(path); if (type < 0) return -1; if (type == TYPE_NORMAL) { int fd = util_file_open(path, NULL, 0, O_RDONLY); if (fd < 0) { LOG(2, "failed to open file \"%s\"", path); return -1; } ssize_t read_len = pread(fd, buffer, size, offset); int olderrno = errno; (void) os_close(fd); errno = olderrno; return read_len; } ssize_t file_size = util_file_get_size(path); if (file_size < 0) { LOG(2, "cannot determine file length \"%s\"", path); return -1; } size_t max_size = (size_t)(file_size - offset); if (size > max_size) { LOG(2, "requested size of read goes beyond the file length, " "%zu > %zu", size, max_size); LOG(4, "adjusting size to %zu", max_size); size = max_size; } void *addr = util_file_map_whole(path); if (addr == NULL) { LOG(2, "failed to map entire file \"%s\"", path); return -1; } memcpy(buffer, ADDR_SUM(addr, offset), size); util_unmap(addr, (size_t)file_size); return (ssize_t)size; } /* * util_file_create -- create a new memory pool file */ int util_file_create(const char *path, size_t size, size_t minsize) { LOG(3, "path \"%s\" size %zu minsize %zu", path, size, minsize); ASSERTne(size, 0); if (size < minsize) { ERR("size %zu smaller than %zu", size, minsize); errno = EINVAL; return -1; } if (((os_off_t)size) < 0) { ERR("invalid size (%zu) for os_off_t", size); errno = EFBIG; return -1; } int fd; int mode; int flags = O_RDWR | O_CREAT | O_EXCL; #ifndef _WIN32 mode = 0; #else mode = S_IWRITE | S_IREAD; flags |= O_BINARY; #endif /* * Create file without any permission. It will be granted once * initialization completes. 
*/ if ((fd = os_open(path, flags, mode)) < 0) { ERR("!open \"%s\"", path); return -1; } if ((errno = os_posix_fallocate(fd, 0, (os_off_t)size)) != 0) { ERR("!posix_fallocate \"%s\", %zu", path, size); goto err; } /* for windows we can't flock until after we fallocate */ if (os_flock(fd, OS_LOCK_EX | OS_LOCK_NB) < 0) { ERR("!flock \"%s\"", path); goto err; } return fd; err: LOG(4, "error clean up"); int oerrno = errno; if (fd != -1) (void) os_close(fd); os_unlink(path); errno = oerrno; return -1; } /* * util_file_open -- open a memory pool file */ int util_file_open(const char *path, size_t *size, size_t minsize, int flags) { LOG(3, "path \"%s\" size %p minsize %zu flags %d", path, size, minsize, flags); int oerrno; int fd; #ifdef _WIN32 flags |= O_BINARY; #endif if ((fd = os_open(path, flags)) < 0) { ERR("!open \"%s\"", path); return -1; } if (os_flock(fd, OS_LOCK_EX | OS_LOCK_NB) < 0) { ERR("!flock \"%s\"", path); (void) os_close(fd); return -1; } if (size || minsize) { if (size) ASSERTeq(*size, 0); ssize_t actual_size = util_file_get_size(path); if (actual_size < 0) { ERR("stat \"%s\": negative size", path); errno = EINVAL; goto err; } if ((size_t)actual_size < minsize) { ERR("size %zu smaller than %zu", (size_t)actual_size, minsize); errno = EINVAL; goto err; } if (size) { *size = (size_t)actual_size; LOG(4, "actual file size %zu", *size); } } return fd; err: oerrno = errno; if (os_flock(fd, OS_LOCK_UN)) ERR("!flock unlock"); (void) os_close(fd); errno = oerrno; return -1; } /* * util_unlink -- unlinks a file or zeroes a device dax */ int util_unlink(const char *path) { LOG(3, "path \"%s\"", path); enum file_type type = util_file_get_type(path); if (type < 0) return -1; if (type == TYPE_DEVDAX) { return util_file_zero(path, 0, DEVICE_DAX_ZERO_LEN); } else { #ifdef _WIN32 /* on Windows we can not unlink Read-Only files */ if (os_chmod(path, S_IREAD | S_IWRITE) == -1) { ERR("!chmod \"%s\"", path); return -1; } #endif return os_unlink(path); } } /* * util_unlink_flock -- flocks the file and unlinks it * * The unlink(2) call on a file which is opened and locked using flock(2) * by different process works on linux. Thus in order to forbid removing a * pool when in use by different process we need to flock(2) the pool files * first before unlinking. */ int util_unlink_flock(const char *path) { LOG(3, "path \"%s\"", path); #ifdef WIN32 /* * On Windows it is not possible to unlink the * file if it is flocked. */ return util_unlink(path); #else int fd = util_file_open(path, NULL, 0, O_RDONLY); if (fd < 0) { LOG(2, "failed to open file \"%s\"", path); return -1; } int ret = util_unlink(path); (void) os_close(fd); return ret; #endif } /* * util_write_all -- a wrapper for util_write * * writes exactly count bytes from buf to file referred to by fd * returns -1 on error, 0 otherwise */ int util_write_all(int fd, const char *buf, size_t count) { ssize_t n_wrote = 0; size_t total = 0; while (count > total) { n_wrote = util_write(fd, buf, count - total); if (n_wrote <= 0) return -1; buf += (size_t)n_wrote; total += (size_t)n_wrote; } return 0; } vmem-1.8/src/common/file.h000066400000000000000000000074671361505074100155150ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * file.h -- internal definitions for file module */ #ifndef PMDK_FILE_H #define PMDK_FILE_H 1 #include #include #include #include #include #include "os.h" #ifdef __cplusplus extern "C" { #endif #ifdef _WIN32 #define NAME_MAX _MAX_FNAME #endif struct file_info { char filename[NAME_MAX + 1]; int is_dir; }; struct dir_handle { const char *path; #ifdef _WIN32 HANDLE handle; char *_file; #else DIR *dirp; #endif }; enum file_type { OTHER_ERROR = -2, NOT_EXISTS = -1, TYPE_NORMAL = 1, TYPE_DEVDAX = 2 }; int util_file_dir_open(struct dir_handle *a, const char *path); int util_file_dir_next(struct dir_handle *a, struct file_info *info); int util_file_dir_close(struct dir_handle *a); int util_file_dir_remove(const char *path); int util_file_exists(const char *path); enum file_type util_stat_get_type(const os_stat_t *st); enum file_type util_fd_get_type(int fd); enum file_type util_file_get_type(const char *path); int util_ddax_region_find(const char *path); ssize_t util_file_get_size(const char *path); size_t util_file_device_dax_alignment(const char *path); void *util_file_map_whole(const char *path); int util_file_zero(const char *path, os_off_t off, size_t len); ssize_t util_file_pread(const char *path, void *buffer, size_t size, os_off_t offset); ssize_t util_file_pwrite(const char *path, const void *buffer, size_t size, os_off_t offset); int util_tmpfile(const char *dir, const char *templ, int flags); int util_is_absolute_path(const char *path); int util_file_create(const char *path, size_t size, size_t minsize); int util_file_open(const char *path, size_t *size, size_t minsize, int flags); int util_unlink(const char *path); int util_unlink_flock(const char *path); int util_file_mkdir(const char *path, mode_t mode); int util_write_all(int fd, const char *buf, size_t count); #ifndef _WIN32 #define util_read read #define util_write write #else /* XXX - consider adding an assertion on (count <= UINT_MAX) */ #define util_read(fd, buf, count) read(fd, buf, (unsigned)(count)) #define util_write(fd, buf, count) write(fd, buf, (unsigned)(count)) #define S_ISCHR(m) (((m) & S_IFMT) == S_IFCHR) #define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR) #endif #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/file_posix.c000066400000000000000000000216101361505074100167140ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel 
Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * file_posix.c -- Posix versions of file APIs */ /* for O_TMPFILE */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include #include #include "os.h" #include "file.h" #include "out.h" #define MAX_SIZE_LENGTH 64 #define DAX_REGION_ID_LEN 6 /* 5 digits + \0 */ /* * util_tmpfile_mkstemp -- (internal) create temporary file * if O_TMPFILE not supported */ static int util_tmpfile_mkstemp(const char *dir, const char *templ) { /* the templ must start with a path separator */ ASSERTeq(templ[0], '/'); int oerrno; int fd = -1; char *fullname = alloca(strlen(dir) + strlen(templ) + 1); (void) strcpy(fullname, dir); (void) strcat(fullname, templ); sigset_t set, oldset; sigfillset(&set); (void) sigprocmask(SIG_BLOCK, &set, &oldset); mode_t prev_umask = umask(S_IRWXG | S_IRWXO); fd = os_mkstemp(fullname); umask(prev_umask); if (fd < 0) { ERR("!mkstemp"); goto err; } (void) os_unlink(fullname); (void) sigprocmask(SIG_SETMASK, &oldset, NULL); LOG(3, "unlinked file is \"%s\"", fullname); return fd; err: oerrno = errno; (void) sigprocmask(SIG_SETMASK, &oldset, NULL); if (fd != -1) (void) os_close(fd); errno = oerrno; return -1; } /* * util_tmpfile -- create temporary file */ int util_tmpfile(const char *dir, const char *templ, int flags) { LOG(3, "dir \"%s\" template \"%s\" flags %x", dir, templ, flags); /* only O_EXCL is allowed here */ ASSERT(flags == 0 || flags == O_EXCL); #ifdef O_TMPFILE int fd = open(dir, O_TMPFILE | O_RDWR | flags, S_IRUSR | S_IWUSR); /* * Open can fail if underlying file system does not support O_TMPFILE * flag. 
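 *
 * Either way the caller does not need to care which path was taken;
 * a typical internal use, mirroring util_map_tmpfile() in mmap.c
 * (shown here only as an illustration), is:
 *
 *        int fd = util_tmpfile(dir, OS_DIR_SEP_STR "vmem.XXXXXX", O_EXCL);
 *        if (fd == -1)
 *                ... handle the error ...
 *
 * When O_TMPFILE is unsupported, util_tmpfile_mkstemp() above creates
 * the file from the template and unlinks it immediately, so it still
 * disappears once the descriptor is closed.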
*/ if (fd >= 0) return fd; if (errno != EOPNOTSUPP) { ERR("!open"); return -1; } #endif return util_tmpfile_mkstemp(dir, templ); } /* * util_is_absolute_path -- check if the path is an absolute one */ int util_is_absolute_path(const char *path) { LOG(3, "path: %s", path); if (path[0] == OS_DIR_SEPARATOR) return 1; else return 0; } /* * util_create_mkdir -- creates new dir */ int util_file_mkdir(const char *path, mode_t mode) { LOG(3, "path: %s mode: %o", path, mode); return mkdir(path, mode); } /* * util_file_dir_open -- open a directory */ int util_file_dir_open(struct dir_handle *handle, const char *path) { LOG(3, "handle: %p path: %s", handle, path); handle->dirp = opendir(path); return handle->dirp == NULL; } /* * util_file_dir_next -- read next file in directory */ int util_file_dir_next(struct dir_handle *handle, struct file_info *info) { LOG(3, "handle: %p info: %p", handle, info); struct dirent *d = readdir(handle->dirp); if (d == NULL) return 1; /* break */ info->filename[NAME_MAX] = '\0'; strncpy(info->filename, d->d_name, NAME_MAX + 1); if (info->filename[NAME_MAX] != '\0') return -1; /* filename truncated */ info->is_dir = d->d_type == DT_DIR; return 0; /* continue */ } /* * util_file_dir_close -- close a directory */ int util_file_dir_close(struct dir_handle *handle) { LOG(3, "path: %p", handle); return closedir(handle->dirp); } /* * util_file_dir_remove -- remove directory */ int util_file_dir_remove(const char *path) { LOG(3, "path: %s", path); return rmdir(path); } /* * device_dax_alignment -- (internal) checks the alignment of given Device DAX */ static size_t device_dax_alignment(const char *path) { char spath[PATH_MAX]; size_t size = 0; char *daxpath; os_stat_t st; int olderrno; LOG(3, "path \"%s\"", path); if (os_stat(path, &st) < 0) { ERR("!stat \"%s\"", path); return 0; } snprintf(spath, PATH_MAX, "/sys/dev/char/%u:%u", os_major(st.st_rdev), os_minor(st.st_rdev)); daxpath = realpath(spath, NULL); if (!daxpath) { ERR("!realpath \"%s\"", spath); return 0; } if (util_safe_strcpy(spath, daxpath, sizeof(spath))) { ERR("util_safe_strcpy failed"); free(daxpath); return 0; } free(daxpath); while (spath[0] != '\0') { char sizebuf[MAX_SIZE_LENGTH + 1]; char *pos = strrchr(spath, '/'); char *endptr; size_t len; ssize_t rc; int fd; if (strcmp(spath, "/sys/devices") == 0) break; if (!pos) break; *pos = '\0'; len = strlen(spath); snprintf(&spath[len], sizeof(spath) - len, "/dax_region/align"); fd = os_open(spath, O_RDONLY); *pos = '\0'; if (fd < 0) continue; LOG(4, "device align path \"%s\"", spath); rc = read(fd, sizebuf, MAX_SIZE_LENGTH); os_close(fd); if (rc < 0) { ERR("!read"); return 0; } sizebuf[rc] = 0; /* null termination */ olderrno = errno; errno = 0; /* 'align' is in decimal format */ size = strtoull(sizebuf, &endptr, 10); if (endptr == sizebuf || *endptr != '\n' || (size == ULLONG_MAX && errno == ERANGE)) { ERR("invalid device alignment %s", sizebuf); size = 0; errno = olderrno; break; } /* * If the alignment value is not a power of two, try with * hex format, as this is how it was printed in older kernels. * Just in case someone is using kernel <4.9. 
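 *
 * For illustration: a 2 MiB alignment is reported by current kernels
 * as the decimal string "2097152", while an old kernel might print
 * the same value as the bare hex digits "200000".  Parsed as decimal
 * that yields 200000, which is not a power of two, so the retry below
 * re-reads the buffer with base 16 and recovers 0x200000 == 2097152.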
*/ if ((size & (size - 1)) != 0) { size = strtoull(sizebuf, &endptr, 16); if (endptr == sizebuf || *endptr != '\n' || (size == ULLONG_MAX && errno == ERANGE)) { ERR("invalid device alignment %s", sizebuf); size = 0; } } errno = olderrno; break; } LOG(4, "device alignment %zu", size); return size; } /* * util_file_device_dax_alignment -- returns internal Device DAX alignment */ size_t util_file_device_dax_alignment(const char *path) { LOG(3, "path \"%s\"", path); return device_dax_alignment(path); } /* * util_ddax_region_find -- returns Device DAX region id */ int util_ddax_region_find(const char *path) { LOG(3, "path \"%s\"", path); int dax_reg_id_fd; char dax_region_path[PATH_MAX]; char reg_id[DAX_REGION_ID_LEN]; char *end_addr; os_stat_t st; ASSERTne(path, NULL); if (os_stat(path, &st) < 0) { ERR("!stat \"%s\"", path); return -1; } dev_t dev_id = st.st_rdev; unsigned major = os_major(dev_id); unsigned minor = os_minor(dev_id); int ret = snprintf(dax_region_path, PATH_MAX, "/sys/dev/char/%u:%u/device/dax_region/id", major, minor); if (ret < 0) { ERR("snprintf(%p, %d, /sys/dev/char/%u:%u/device/" "dax_region/id, %u, %u): %d", dax_region_path, PATH_MAX, major, minor, major, minor, ret); return -1; } if ((dax_reg_id_fd = os_open(dax_region_path, O_RDONLY)) < 0) { LOG(1, "!open(\"%s\", O_RDONLY)", dax_region_path); return -1; } ssize_t len = read(dax_reg_id_fd, reg_id, DAX_REGION_ID_LEN); if (len == -1) { ERR("!read(%d, %p, %d)", dax_reg_id_fd, reg_id, DAX_REGION_ID_LEN); goto err; } else if (len < 2 || reg_id[len - 1] != '\n') { errno = EINVAL; ERR("!read(%d, %p, %d) invalid format", dax_reg_id_fd, reg_id, DAX_REGION_ID_LEN); goto err; } int olderrno = errno; errno = 0; long reg_num = strtol(reg_id, &end_addr, 10); if ((errno == ERANGE && (reg_num == LONG_MAX || reg_num == LONG_MIN)) || (errno != 0 && reg_num == 0)) { ERR("!strtol(%p, %p, 10)", reg_id, end_addr); goto err; } errno = olderrno; if (end_addr == reg_id) { ERR("!strtol(%p, %p, 10) no digits were found", reg_id, end_addr); goto err; } if (*end_addr != '\n') { ERR("!strtol(%s, %s, 10) invalid format", reg_id, end_addr); goto err; } os_close(dax_reg_id_fd); return (int)reg_num; err: os_close(dax_reg_id_fd); return -1; } vmem-1.8/src/common/file_windows.c000066400000000000000000000130571361505074100172520ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * file_windows.c -- Windows emulation of Linux-specific system calls */ /* * XXX - The initial approach to PMDK for Windows port was to minimize the * amount of changes required in the core part of the library, and to avoid * preprocessor conditionals, if possible. For that reason, some of the * Linux system calls that have no equivalents on Windows have been emulated * using Windows API. * Note that it was not a goal to fully emulate POSIX-compliant behavior * of mentioned functions. They are used only internally, so current * implementation is just good enough to satisfy PMDK needs and to make it * work on Windows. */ #include #include #include #include "alloc.h" #include "file.h" #include "out.h" #include "os.h" /* * util_tmpfile -- create a temporary file */ int util_tmpfile(const char *dir, const char *templ, int flags) { LOG(3, "dir \"%s\" template \"%s\" flags %x", dir, templ, flags); /* only O_EXCL is allowed here */ ASSERT(flags == 0 || flags == O_EXCL); int oerrno; int fd = -1; size_t len = strlen(dir) + strlen(templ) + 1; char *fullname = Malloc(sizeof(*fullname) * len); if (fullname == NULL) { ERR("!Malloc"); return -1; } int ret = _snprintf(fullname, len, "%s%s", dir, templ); if (ret < 0 || ret >= len) { ERR("snprintf: %d", ret); goto err; } LOG(4, "fullname \"%s\"", fullname); /* * XXX - block signals and modify file creation mask for the time * of mkstmep() execution. Restore previous settings once the file * is created. */ fd = os_mkstemp(fullname); if (fd < 0) { ERR("!os_mkstemp"); goto err; } /* * There is no point to use unlink() here. First, because it does not * work on open files. Second, because the file is created with * O_TEMPORARY flag, and it looks like such temp files cannot be open * from another process, even though they are visible on * the filesystem. */ Free(fullname); return fd; err: Free(fullname); oerrno = errno; if (fd != -1) (void) os_close(fd); errno = oerrno; return -1; } /* * util_is_absolute_path -- check if the path is absolute */ int util_is_absolute_path(const char *path) { LOG(3, "path \"%s\"", path); if (path == NULL || path[0] == '\0') return 0; if (path[0] == '\\' || path[1] == ':') return 1; return 0; } /* * util_file_mkdir -- creates new dir */ int util_file_mkdir(const char *path, mode_t mode) { /* * On windows we cannot create read only dir so mode * parameter is useless. 
*/ UNREFERENCED_PARAMETER(mode); LOG(3, "path: %s mode: %d", path, mode); return _mkdir(path); } /* * util_file_dir_open -- open a directory */ int util_file_dir_open(struct dir_handle *handle, const char *path) { /* init handle */ handle->handle = NULL; handle->path = path; return 0; } /* * util_file_dir_next - read next file in directory */ int util_file_dir_next(struct dir_handle *handle, struct file_info *info) { WIN32_FIND_DATAA data; if (handle->handle == NULL) { handle->handle = FindFirstFileA(handle->path, &data); if (handle->handle == NULL) return 1; } else { if (FindNextFileA(handle->handle, &data) == 0) return 1; } info->filename[NAME_MAX] = '\0'; strncpy(info->filename, data.cFileName, NAME_MAX + 1); if (info->filename[NAME_MAX] != '\0') return -1; /* filename truncated */ info->is_dir = data.dwFileAttributes == FILE_ATTRIBUTE_DIRECTORY; return 0; } /* * util_file_dir_close -- close a directory */ int util_file_dir_close(struct dir_handle *handle) { return FindClose(handle->handle); } /* * util_file_dir_close -- remove directory */ int util_file_dir_remove(const char *path) { return RemoveDirectoryA(path) == 0 ? -1 : 0; } /* * util_file_device_dax_alignment -- returns internal Device DAX alignment */ size_t util_file_device_dax_alignment(const char *path) { LOG(3, "path \"%s\"", path); return 0; } /* * util_ddax_region_find -- returns DEV dax region id that contains file */ int util_ddax_region_find(const char *path) { LOG(3, "path \"%s\"", path); return -1; } vmem-1.8/src/common/libpmemcommon.vcxproj000066400000000000000000000151421361505074100206650ustar00rootroot00000000000000 Debug x64 Release x64 {901f04db-e1a5-4a41-8b81-9d31c19acd59} {492BAA3D-0D5D-478E-9765-500463AE69AA} Win32Proj libpmemcommon 10.0.16299.0 StaticLibrary true v140 NotSet StaticLibrary true v140 NotSet true .lib $(SolutionDir)\include;$(SolutionDir)\windows\include;$(VC_IncludePath);$(WindowsSDK_IncludePath); true .lib $(SolutionDir)\include;$(SolutionDir)\windows\include;$(VC_IncludePath);$(WindowsSDK_IncludePath); NotUsing Level3 PMDK_UTF8_API;NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CRT_SECURE_NO_WARNINGS;%(PreprocessorDefinitions) platform.h CompileAsC MultiThreadedDebugDLL false true Console true ntdll.lib;%(AdditionalDependencies) true NotUsing Level3 PMDK_UTF8_API;NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CRT_SECURE_NO_WARNINGS;%(PreprocessorDefinitions) platform.h CompileAsC MaxSpeed MultiThreadedDLL Default false ProgramDatabase true Console true ntdll.lib;%(AdditionalDependencies) true vmem-1.8/src/common/libpmemcommon.vcxproj.filters000066400000000000000000000070041361505074100223320ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hh;hpp;hxx;hm;inl;inc;xsd Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files vmem-1.8/src/common/mmap.c000066400000000000000000000320421361505074100155060ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of 
source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * mmap.c -- mmap utilities */ #include #include #include #include #include #include #include #include "file.h" #include "queue.h" #include "mmap.h" #include "sys_util.h" #include "os.h" #include "alloc.h" int Mmap_no_random; void *Mmap_hint; static os_rwlock_t Mmap_list_lock; static PMDK_SORTEDQ_HEAD(map_list_head, map_tracker) Mmap_list = PMDK_SORTEDQ_HEAD_INITIALIZER(Mmap_list); /* * util_mmap_init -- initialize the mmap utils * * This is called from the library initialization code. */ void util_mmap_init(void) { LOG(3, NULL); util_rwlock_init(&Mmap_list_lock); /* * For testing, allow overriding the default mmap() hint address. * If hint address is defined, it also disables address randomization. */ char *e = os_getenv("PMEM_MMAP_HINT"); if (e) { char *endp; errno = 0; unsigned long long val = strtoull(e, &endp, 16); if (errno || endp == e) { LOG(2, "Invalid PMEM_MMAP_HINT"); } else if (os_access(OS_MAPFILE, R_OK)) { LOG(2, "No /proc, PMEM_MMAP_HINT ignored"); } else { Mmap_hint = (void *)val; Mmap_no_random = 1; LOG(3, "PMEM_MMAP_HINT set to %p", Mmap_hint); } } } /* * util_mmap_fini -- clean up the mmap utils * * This is called before process stop. */ void util_mmap_fini(void) { LOG(3, NULL); util_rwlock_destroy(&Mmap_list_lock); } /* * util_map -- memory map a file * * This is just a convenience function that calls mmap() with the * appropriate arguments and includes our trace points. */ void * util_map(int fd, size_t len, int flags, int rdonly, size_t req_align, int *map_sync) { LOG(3, "fd %d len %zu flags %d rdonly %d req_align %zu map_sync %p", fd, len, flags, rdonly, req_align, map_sync); void *base; void *addr = util_map_hint(len, req_align); if (addr == MAP_FAILED) { LOG(1, "cannot find a contiguous region of given size"); return NULL; } if (req_align) ASSERTeq((uintptr_t)addr % req_align, 0); int proto = rdonly ? PROT_READ : PROT_READ|PROT_WRITE; base = util_map_sync(addr, len, proto, flags, fd, 0, map_sync); if (base == MAP_FAILED) { ERR("!mmap %zu bytes", len); return NULL; } LOG(3, "mapped at %p", base); return base; } /* * util_unmap -- unmap a file * * This is just a convenience function that calls munmap() with the * appropriate arguments and includes our trace points. 
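 *
 * Callers pair it with util_map(); a hedged sketch of the usual
 * pattern (compare util_file_map_whole() in file.c):
 *
 *        void *addr = util_map(fd, len, MAP_SHARED, 0, 0, NULL);
 *        if (addr == NULL)
 *                return -1;
 *        ... use the mapping ...
 *        if (util_unmap(addr, len) < 0)
 *                return -1;
 *
 * A mapping that was also registered with util_range_register()
 * should additionally be dropped from the tracking list with
 * util_range_unregister() (see below).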
*/ int util_unmap(void *addr, size_t len) { LOG(3, "addr %p len %zu", addr, len); /* * XXX Workaround for https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=169608 */ #ifdef __FreeBSD__ if (!IS_PAGE_ALIGNED((uintptr_t)addr)) { errno = EINVAL; ERR("!munmap"); return -1; } #endif int retval = munmap(addr, len); if (retval < 0) ERR("!munmap"); return retval; } /* * util_map_tmpfile -- reserve space in an unlinked file and memory-map it * * size must be multiple of page size. */ void * util_map_tmpfile(const char *dir, size_t size, size_t req_align) { int oerrno; if (((os_off_t)size) < 0) { ERR("invalid size (%zu) for os_off_t", size); errno = EFBIG; return NULL; } int fd = util_tmpfile(dir, OS_DIR_SEP_STR "vmem.XXXXXX", O_EXCL); if (fd == -1) { LOG(2, "cannot create temporary file in dir %s", dir); goto err; } if ((errno = os_posix_fallocate(fd, 0, (os_off_t)size)) != 0) { ERR("!posix_fallocate"); goto err; } void *base; if ((base = util_map(fd, size, MAP_SHARED, 0, req_align, NULL)) == NULL) { LOG(2, "cannot mmap temporary file"); goto err; } (void) os_close(fd); return base; err: oerrno = errno; if (fd != -1) (void) os_close(fd); errno = oerrno; return NULL; } /* * util_range_ro -- set a memory range read-only */ int util_range_ro(void *addr, size_t len) { LOG(3, "addr %p len %zu", addr, len); uintptr_t uptr; int retval; /* * mprotect requires addr to be a multiple of pagesize, so * adjust addr and len to represent the full 4k chunks * covering the given range. */ /* increase len by the amount we gain when we round addr down */ len += (uintptr_t)addr & (Pagesize - 1); /* round addr down to page boundary */ uptr = (uintptr_t)addr & ~(Pagesize - 1); if ((retval = mprotect((void *)uptr, len, PROT_READ)) < 0) ERR("!mprotect: PROT_READ"); return retval; } /* * util_range_rw -- set a memory range read-write */ int util_range_rw(void *addr, size_t len) { LOG(3, "addr %p len %zu", addr, len); uintptr_t uptr; int retval; /* * mprotect requires addr to be a multiple of pagesize, so * adjust addr and len to represent the full 4k chunks * covering the given range. */ /* increase len by the amount we gain when we round addr down */ len += (uintptr_t)addr & (Pagesize - 1); /* round addr down to page boundary */ uptr = (uintptr_t)addr & ~(Pagesize - 1); if ((retval = mprotect((void *)uptr, len, PROT_READ|PROT_WRITE)) < 0) ERR("!mprotect: PROT_READ|PROT_WRITE"); return retval; } /* * util_range_none -- set a memory range for no access allowed */ int util_range_none(void *addr, size_t len) { LOG(3, "addr %p len %zu", addr, len); uintptr_t uptr; int retval; /* * mprotect requires addr to be a multiple of pagesize, so * adjust addr and len to represent the full 4k chunks * covering the given range. */ /* increase len by the amount we gain when we round addr down */ len += (uintptr_t)addr & (Pagesize - 1); /* round addr down to page boundary */ uptr = (uintptr_t)addr & ~(Pagesize - 1); if ((retval = mprotect((void *)uptr, len, PROT_NONE)) < 0) ERR("!mprotect: PROT_NONE"); return retval; } /* * util_range_comparer -- (internal) compares the two mapping trackers */ static intptr_t util_range_comparer(struct map_tracker *a, struct map_tracker *b) { return ((intptr_t)a->base_addr - (intptr_t)b->base_addr); } /* * util_range_find_unlocked -- (internal) find the map tracker * for given address range * * Returns the first entry at least partially overlapping given range. * It's up to the caller to check whether the entry exactly matches the range, * or if the range spans multiple entries. 
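 *
 * A caller that has to cover an entire range therefore loops, clipping
 * each tracker against what is left of the range and advancing past
 * it, the way util_range_is_pmem() below does.  Roughly (sketch only):
 *
 *        struct map_tracker *mt = util_range_find(addr, len);
 *        if (mt == NULL || mt->base_addr > addr)
 *                return 0;
 *        uintptr_t map_len = mt->end_addr - addr;
 *        if (map_len > len)
 *                map_len = len;
 *        len -= map_len;
 *        addr += map_len;
 *        (and repeat while len > 0)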
*/ static struct map_tracker * util_range_find_unlocked(uintptr_t addr, size_t len) { LOG(10, "addr 0x%016" PRIxPTR " len %zu", addr, len); uintptr_t end = addr + len; struct map_tracker *mt; PMDK_SORTEDQ_FOREACH(mt, &Mmap_list, entry) { if (addr < mt->end_addr && (addr >= mt->base_addr || end > mt->base_addr)) goto out; /* break if there is no chance to find matching entry */ if (addr < mt->base_addr) break; } mt = NULL; out: return mt; } /* * util_range_find -- find the map tracker for given address range * the same as util_range_find_unlocked but locked */ struct map_tracker * util_range_find(uintptr_t addr, size_t len) { LOG(10, "addr 0x%016" PRIxPTR " len %zu", addr, len); util_rwlock_rdlock(&Mmap_list_lock); struct map_tracker *mt = util_range_find_unlocked(addr, len); util_rwlock_unlock(&Mmap_list_lock); return mt; } /* * util_range_register -- add a memory range into a map tracking list */ int util_range_register(const void *addr, size_t len, const char *path, enum pmem_map_type type) { LOG(3, "addr %p len %zu path %s type %d", addr, len, path, type); /* check if not tracked already */ if (util_range_find((uintptr_t)addr, len) != NULL) { ERR( "duplicated persistent memory range; presumably unmapped with munmap() instead of pmem_unmap(): addr %p len %zu", addr, len); errno = ENOMEM; return -1; } struct map_tracker *mt; mt = Malloc(sizeof(struct map_tracker)); if (mt == NULL) { ERR("!Malloc"); return -1; } mt->base_addr = (uintptr_t)addr; mt->end_addr = mt->base_addr + len; mt->type = type; if (type == PMEM_DEV_DAX) mt->region_id = util_ddax_region_find(path); util_rwlock_wrlock(&Mmap_list_lock); PMDK_SORTEDQ_INSERT(&Mmap_list, mt, entry, struct map_tracker, util_range_comparer); util_rwlock_unlock(&Mmap_list_lock); return 0; } /* * util_range_split -- (internal) remove or split a map tracking entry */ static int util_range_split(struct map_tracker *mt, const void *addrp, const void *endp) { LOG(3, "begin %p end %p", addrp, endp); uintptr_t addr = (uintptr_t)addrp; uintptr_t end = (uintptr_t)endp; ASSERTne(mt, NULL); if (addr == end || addr % Mmap_align != 0 || end % Mmap_align != 0) { ERR( "invalid munmap length, must be non-zero and page aligned"); return -1; } struct map_tracker *mtb = NULL; struct map_tracker *mte = NULL; /* * 1) b e b e * xxxxxxxxxxxxx => xxx.......xxxx - mtb+mte * 2) b e b e * xxxxxxxxxxxxx => xxxxxxx....... - mtb * 3) b e b e * xxxxxxxxxxxxx => ........xxxxxx - mte * 4) b e b e * xxxxxxxxxxxxx => .............. - */ if (addr > mt->base_addr) { /* case #1/2 */ /* new mapping at the beginning */ mtb = Malloc(sizeof(struct map_tracker)); if (mtb == NULL) { ERR("!Malloc"); goto err; } mtb->base_addr = mt->base_addr; mtb->end_addr = addr; mtb->region_id = mt->region_id; mtb->type = mt->type; } if (end < mt->end_addr) { /* case #1/3 */ /* new mapping at the end */ mte = Malloc(sizeof(struct map_tracker)); if (mte == NULL) { ERR("!Malloc"); goto err; } mte->base_addr = end; mte->end_addr = mt->end_addr; mte->region_id = mt->region_id; mte->type = mt->type; } PMDK_SORTEDQ_REMOVE(&Mmap_list, mt, entry); if (mtb) { PMDK_SORTEDQ_INSERT(&Mmap_list, mtb, entry, struct map_tracker, util_range_comparer); } if (mte) { PMDK_SORTEDQ_INSERT(&Mmap_list, mte, entry, struct map_tracker, util_range_comparer); } /* free entry for the original mapping */ Free(mt); return 0; err: Free(mtb); Free(mte); return -1; } /* * util_range_unregister -- remove a memory range * from map tracking list * * Remove the region between [begin,end]. 
If it's in a middle of the existing * mapping, it results in two new map trackers. */ int util_range_unregister(const void *addr, size_t len) { LOG(3, "addr %p len %zu", addr, len); int ret = 0; util_rwlock_wrlock(&Mmap_list_lock); /* * Changes in the map tracker list must match the underlying behavior. * * $ man 2 mmap: * The address addr must be a multiple of the page size (but length * need not be). All pages containing a part of the indicated range * are unmapped. * * This means that we must align the length to the page size. */ len = PAGE_ALIGNED_UP_SIZE(len); void *end = (char *)addr + len; /* XXX optimize the loop */ struct map_tracker *mt; while ((mt = util_range_find_unlocked((uintptr_t)addr, len)) != NULL) { if (util_range_split(mt, addr, end) != 0) { ret = -1; break; } } util_rwlock_unlock(&Mmap_list_lock); return ret; } /* * util_range_is_pmem -- return true if entire range * is persistent memory */ int util_range_is_pmem(const void *addrp, size_t len) { LOG(10, "addr %p len %zu", addrp, len); uintptr_t addr = (uintptr_t)addrp; int retval = 1; util_rwlock_rdlock(&Mmap_list_lock); do { struct map_tracker *mt = util_range_find(addr, len); if (mt == NULL) { LOG(4, "address not found 0x%016" PRIxPTR, addr); retval = 0; break; } LOG(10, "range found - begin 0x%016" PRIxPTR " end 0x%016" PRIxPTR, mt->base_addr, mt->end_addr); if (mt->base_addr > addr) { LOG(10, "base address doesn't match: " "0x%" PRIxPTR " > 0x%" PRIxPTR, mt->base_addr, addr); retval = 0; break; } uintptr_t map_len = mt->end_addr - addr; if (map_len > len) map_len = len; len -= map_len; addr += map_len; } while (len > 0); util_rwlock_unlock(&Mmap_list_lock); return retval; } vmem-1.8/src/common/mmap.h000066400000000000000000000113731361505074100155170ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * mmap.h -- internal definitions for mmap module */ #ifndef PMDK_MMAP_H #define PMDK_MMAP_H 1 #include #include #include #include #include #include #include "out.h" #include "queue.h" #include "os.h" #ifdef __cplusplus extern "C" { #endif extern int Mmap_no_random; extern void *Mmap_hint; extern char *Mmap_mapfile; void *util_map_sync(void *addr, size_t len, int proto, int flags, int fd, os_off_t offset, int *map_sync); void *util_map(int fd, size_t len, int flags, int rdonly, size_t req_align, int *map_sync); int util_unmap(void *addr, size_t len); void *util_map_tmpfile(const char *dir, size_t size, size_t req_align); #ifdef __FreeBSD__ #define MAP_NORESERVE 0 #define OS_MAPFILE "/proc/curproc/map" #else #define OS_MAPFILE "/proc/self/maps" #endif #ifndef MAP_SYNC #define MAP_SYNC 0x80000 #endif #ifndef MAP_SHARED_VALIDATE #define MAP_SHARED_VALIDATE 0x03 #endif /* * macros for micromanaging range protections for the debug version */ #ifdef DEBUG #define RANGE(addr, len, is_dev_dax, type) do {\ if (!is_dev_dax) ASSERT(util_range_##type(addr, len) >= 0);\ } while (0) #else #define RANGE(addr, len, is_dev_dax, type) do {} while (0) #endif #define RANGE_RO(addr, len, is_dev_dax) RANGE(addr, len, is_dev_dax, ro) #define RANGE_RW(addr, len, is_dev_dax) RANGE(addr, len, is_dev_dax, rw) #define RANGE_NONE(addr, len, is_dev_dax) RANGE(addr, len, is_dev_dax, none) /* pmem mapping type */ enum pmem_map_type { PMEM_DEV_DAX, /* device dax */ PMEM_MAP_SYNC, /* mapping with MAP_SYNC flag on dax fs */ MAX_PMEM_TYPE }; /* * this structure tracks the file mappings outstanding per file handle */ struct map_tracker { PMDK_SORTEDQ_ENTRY(map_tracker) entry; uintptr_t base_addr; uintptr_t end_addr; int region_id; enum pmem_map_type type; #ifdef _WIN32 /* Windows-specific data */ HANDLE FileHandle; HANDLE FileMappingHandle; DWORD Access; os_off_t Offset; size_t FileLen; #endif }; void util_mmap_init(void); void util_mmap_fini(void); int util_range_ro(void *addr, size_t len); int util_range_rw(void *addr, size_t len); int util_range_none(void *addr, size_t len); char *util_map_hint_unused(void *minaddr, size_t len, size_t align); char *util_map_hint(size_t len, size_t req_align); #define MEGABYTE ((uintptr_t)1 << 20) #define GIGABYTE ((uintptr_t)1 << 30) /* * util_map_hint_align -- choose the desired mapping alignment * * The smallest supported alignment is 2 megabytes because of the object * alignment requirements. Changing this value to 4 kilobytes constitues a * layout change. * * Use 1GB page alignment only if the mapping length is at least * twice as big as the page size. 
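 * (That is: when no explicit req_align is given, len >= 2 * GIGABYTE
 * selects GIGABYTE alignment and anything smaller falls back to the
 * 2 * MEGABYTE default, as implemented below.)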
*/ static inline size_t util_map_hint_align(size_t len, size_t req_align) { size_t align = 2 * MEGABYTE; if (req_align) align = req_align; else if (len >= 2 * GIGABYTE) align = GIGABYTE; return align; } int util_range_register(const void *addr, size_t len, const char *path, enum pmem_map_type type); int util_range_unregister(const void *addr, size_t len); struct map_tracker *util_range_find(uintptr_t addr, size_t len); int util_range_is_pmem(const void *addr, size_t len); #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/mmap_posix.c000066400000000000000000000154511361505074100167350ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * mmap_posix.c -- memory-mapped files for Posix */ #include #include #include #include "mmap.h" #include "out.h" #include "os.h" #define PROCMAXLEN 2048 /* maximum expected line length in /proc files */ char *Mmap_mapfile = OS_MAPFILE; /* Should be modified only for testing */ #ifdef __FreeBSD__ static const char * const sscanf_os = "%p %p"; #else static const char * const sscanf_os = "%p-%p"; #endif /* * util_map_hint_unused -- use /proc to determine a hint address for mmap() * * This is a helper function for util_map_hint(). * It opens up /proc/self/maps and looks for the first unused address * in the process address space that is: * - greater or equal 'minaddr' argument, * - large enough to hold range of given length, * - aligned to the specified unit. * * Asking for aligned address like this will allow the DAX code to use large * mappings. It is not an error if mmap() ignores the hint and chooses * different address. 
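 *
 * A typical /proc/self/maps entry parsed by the loop below looks roughly
 * like this (only the two leading addresses are consumed, using the
 * sscanf_os format):
 *
 *	7f2d4c000000-7f2d4c021000 rw-p 00000000 00:00 0
 *
 * The gap between one range's end and the next range's start is what the
 * search is looking for.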
*/ char * util_map_hint_unused(void *minaddr, size_t len, size_t align) { LOG(3, "minaddr %p len %zu align %zu", minaddr, len, align); ASSERT(align > 0); FILE *fp; if ((fp = os_fopen(Mmap_mapfile, "r")) == NULL) { ERR("!%s", Mmap_mapfile); return MAP_FAILED; } char line[PROCMAXLEN]; /* for fgets() */ char *lo = NULL; /* beginning of current range in maps file */ char *hi = NULL; /* end of current range in maps file */ char *raddr = minaddr; /* ignore regions below 'minaddr' */ if (raddr == NULL) raddr += Pagesize; raddr = (char *)roundup((uintptr_t)raddr, align); while (fgets(line, PROCMAXLEN, fp) != NULL) { /* check for range line */ if (sscanf(line, sscanf_os, &lo, &hi) == 2) { LOG(4, "%p-%p", lo, hi); if (lo > raddr) { if ((uintptr_t)(lo - raddr) >= len) { LOG(4, "unused region of size %zu " "found at %p", lo - raddr, raddr); break; } else { LOG(4, "region is too small: %zu < %zu", lo - raddr, len); } } if (hi > raddr) { raddr = (char *)roundup((uintptr_t)hi, align); LOG(4, "nearest aligned addr %p", raddr); } if (raddr == NULL) { LOG(4, "end of address space reached"); break; } } } /* * Check for a case when this is the last unused range in the address * space, but is not large enough. (very unlikely) */ if ((raddr != NULL) && (UINTPTR_MAX - (uintptr_t)raddr < len)) { ERR("end of address space reached"); raddr = MAP_FAILED; } fclose(fp); LOG(3, "returning %p", raddr); return raddr; } /* * util_map_hint -- determine hint address for mmap() * * If PMEM_MMAP_HINT environment variable is not set, we let the system to pick * the randomized mapping address. Otherwise, a user-defined hint address * is used. * * ALSR in 64-bit Linux kernel uses 28-bit of randomness for mmap * (bit positions 12-39), which means the base mapping address is randomized * within [0..1024GB] range, with 4KB granularity. Assuming additional * 1GB alignment, it results in 1024 possible locations. * * Configuring the hint address via PMEM_MMAP_HINT environment variable * disables address randomization. In such case, the function will search for * the first unused, properly aligned region of given size, above the specified * address. */ char * util_map_hint(size_t len, size_t req_align) { LOG(3, "len %zu req_align %zu", len, req_align); char *hint_addr = MAP_FAILED; /* choose the desired alignment based on the requested length */ size_t align = util_map_hint_align(len, req_align); if (Mmap_no_random) { LOG(4, "user-defined hint %p", Mmap_hint); hint_addr = util_map_hint_unused(Mmap_hint, len, align); } else { /* * Create dummy mapping to find an unused region of given size. * Request for increased size for later address alignment. * Use MAP_PRIVATE with read-only access to simulate * zero cost for overcommit accounting. Note: MAP_NORESERVE * flag is ignored if overcommit is disabled (mode 2). */ char *addr = mmap(NULL, len + align, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { ERR("!mmap MAP_ANONYMOUS"); } else { LOG(4, "system choice %p", addr); hint_addr = (char *)roundup((uintptr_t)addr, align); munmap(addr, len + align); } } LOG(4, "hint %p", hint_addr); return hint_addr; } /* * util_map_sync -- memory map given file into memory, if MAP_SHARED flag is * provided it attempts to use MAP_SYNC flag. Otherwise it fallbacks to * mmap(2). 
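 *
 * A minimal, hypothetical call site (fd and len are placeholders; fd is
 * assumed to refer to a file on a DAX-capable filesystem):
 *
 *	int map_sync;
 *	void *base = util_map_sync(NULL, len, PROT_READ|PROT_WRITE,
 *			MAP_SHARED, fd, 0, &map_sync);
 *
 * On success, map_sync is set to 1 only when the kernel accepted the
 * MAP_SHARED_VALIDATE | MAP_SYNC combination; otherwise the plain mmap(2)
 * result is returned and map_sync stays 0.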
*/ void * util_map_sync(void *addr, size_t len, int proto, int flags, int fd, os_off_t offset, int *map_sync) { LOG(15, "addr %p len %zu proto %x flags %x fd %d offset %ld " "map_sync %p", addr, len, proto, flags, fd, offset, map_sync); if (map_sync) *map_sync = 0; /* if map_sync is NULL do not even try to mmap with MAP_SYNC flag */ if (!map_sync || flags & MAP_PRIVATE) return mmap(addr, len, proto, flags, fd, offset); /* MAP_SHARED */ void *ret = mmap(addr, len, proto, flags | MAP_SHARED_VALIDATE | MAP_SYNC, fd, offset); if (ret != MAP_FAILED) { LOG(4, "mmap with MAP_SYNC succeeded"); *map_sync = 1; return ret; } if (errno == EINVAL || errno == ENOTSUP) { LOG(4, "mmap with MAP_SYNC not supported"); return mmap(addr, len, proto, flags, fd, offset); } /* other error */ return MAP_FAILED; } vmem-1.8/src/common/mmap_windows.c000066400000000000000000000114711361505074100172630ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * Copyright (c) 2015-2017, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * mmap_windows.c -- memory-mapped files for Windows */ #include #include "mmap.h" #include "out.h" /* * util_map_hint_unused -- use VirtualQuery to determine hint address * * This is a helper function for util_map_hint(). * It iterates through memory regions and looks for the first unused address * in the process address space that is: * - greater or equal 'minaddr' argument, * - large enough to hold range of given length, * - aligned to the specified unit. 
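 *
 * Unlike the POSIX implementation, which parses Mmap_mapfile, this variant
 * walks the address space with VirtualQuery() until a sufficiently large
 * MEM_FREE region is found.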
*/ char * util_map_hint_unused(void *minaddr, size_t len, size_t align) { LOG(3, "minaddr %p len %zu align %zu", minaddr, len, align); ASSERT(align > 0); MEMORY_BASIC_INFORMATION mi; char *lo = NULL; /* beginning of current range in maps file */ char *hi = NULL; /* end of current range in maps file */ char *raddr = minaddr; /* ignore regions below 'minaddr' */ if (raddr == NULL) raddr += Pagesize; raddr = (char *)roundup((uintptr_t)raddr, align); while ((uintptr_t)raddr < UINTPTR_MAX - len) { size_t ret = VirtualQuery(raddr, &mi, sizeof(mi)); if (ret == 0) { ERR("VirtualQuery %p", raddr); return MAP_FAILED; } LOG(4, "addr %p len %zu state %d", mi.BaseAddress, mi.RegionSize, mi.State); if ((mi.State != MEM_FREE) || (mi.RegionSize < len)) { raddr = (char *)mi.BaseAddress + mi.RegionSize; raddr = (char *)roundup((uintptr_t)raddr, align); LOG(4, "nearest aligned addr %p", raddr); } else { LOG(4, "unused region of size %zu found at %p", mi.RegionSize, mi.BaseAddress); return mi.BaseAddress; } } LOG(4, "end of address space reached"); return MAP_FAILED; } /* * util_map_hint -- determine hint address for mmap() * * XXX - Windows doesn't support large DAX pages yet, so there is * no point in aligning for the same. */ char * util_map_hint(size_t len, size_t req_align) { LOG(3, "len %zu req_align %zu", len, req_align); char *hint_addr = MAP_FAILED; /* choose the desired alignment based on the requested length */ size_t align = util_map_hint_align(len, req_align); if (Mmap_no_random) { LOG(4, "user-defined hint %p", Mmap_hint); hint_addr = util_map_hint_unused(Mmap_hint, len, align); } else { /* * Create dummy mapping to find an unused region of given size. * Request for increased size for later address alignment. * * Use MAP_NORESERVE flag to only reserve the range of pages * rather than commit. We don't want the pages to be actually * backed by the operating system paging file, as the swap * file is usually too small to handle terabyte pools. */ char *addr = mmap(NULL, len + align, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0); if (addr != MAP_FAILED) { LOG(4, "system choice %p", addr); hint_addr = (char *)roundup((uintptr_t)addr, align); munmap(addr, len + align); } } LOG(4, "hint %p", hint_addr); return hint_addr; } /* * util_map_sync -- memory map given file into memory */ void * util_map_sync(void *addr, size_t len, int proto, int flags, int fd, os_off_t offset, int *map_sync) { LOG(15, "addr %p len %zu proto %x flags %x fd %d offset %ld", addr, len, proto, flags, fd, offset); if (map_sync) *map_sync = 0; return mmap(addr, len, proto, flags, fd, offset); } vmem-1.8/src/common/os.h000066400000000000000000000074761361505074100152170ustar00rootroot00000000000000/* * Copyright 2017-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * os.h -- os abstaction layer */ #ifndef PMDK_OS_H #define PMDK_OS_H 1 #include #include #include #include "errno_freebsd.h" #ifdef __cplusplus extern "C" { #endif #ifndef _WIN32 #define OS_DIR_SEPARATOR '/' #define OS_DIR_SEP_STR "/" #else #define OS_DIR_SEPARATOR '\\' #define OS_DIR_SEP_STR "\\" #endif #ifndef _WIN32 /* madvise() */ #ifdef __FreeBSD__ #define os_madvise minherit #define MADV_DONTFORK INHERIT_NONE #else #define os_madvise madvise #endif /* dlopen() */ #ifdef __FreeBSD__ #define RTLD_DEEPBIND 0 /* XXX */ #endif /* major(), minor() */ #ifdef __FreeBSD__ #define os_major (unsigned)major #define os_minor (unsigned)minor #else #define os_major major #define os_minor minor #endif #endif /* #ifndef _WIN32 */ struct iovec; /* os_flock */ #define OS_LOCK_SH 1 #define OS_LOCK_EX 2 #define OS_LOCK_NB 4 #define OS_LOCK_UN 8 #ifndef _WIN32 typedef struct stat os_stat_t; #define os_fstat fstat #define os_lseek lseek #else typedef struct _stat64 os_stat_t; #define os_fstat _fstat64 #define os_lseek _lseeki64 #endif #define os_close close #define os_fclose fclose #ifndef _WIN32 typedef off_t os_off_t; #else /* XXX: os_off_t defined in platform.h */ #endif int os_open(const char *pathname, int flags, ...); int os_fsync(int fd); int os_fsync_dir(const char *dir_name); int os_stat(const char *pathname, os_stat_t *buf); int os_unlink(const char *pathname); int os_access(const char *pathname, int mode); FILE *os_fopen(const char *pathname, const char *mode); FILE *os_fdopen(int fd, const char *mode); int os_chmod(const char *pathname, mode_t mode); int os_mkstemp(char *temp); int os_posix_fallocate(int fd, os_off_t offset, os_off_t len); int os_ftruncate(int fd, os_off_t length); int os_flock(int fd, int operation); ssize_t os_writev(int fd, const struct iovec *iov, int iovcnt); int os_clock_gettime(int id, struct timespec *ts); unsigned os_rand_r(unsigned *seedp); int os_unsetenv(const char *name); int os_setenv(const char *name, const char *value, int overwrite); char *os_getenv(const char *name); const char *os_strsignal(int sig); int os_execv(const char *path, char *const argv[]); /* * XXX: missing APis (used in ut_file.c) * * rename * read * write */ #ifdef __cplusplus } #endif #endif /* os.h */ vmem-1.8/src/common/os_auto_flush_none.c000066400000000000000000000033631361505074100204510ustar00rootroot00000000000000/* * Copyright 2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "os_auto_flush.h" #include "out.h" /* * os_auto_flush -- check if platform supports auto flush for all regions */ int os_auto_flush(void) { LOG(15, NULL); return 0; } vmem-1.8/src/common/os_posix.c000066400000000000000000000204461361505074100164240ustar00rootroot00000000000000/* * Copyright 2017-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * os_posix.c -- abstraction layer for basic Posix functions */ #define _GNU_SOURCE #include #include #include #ifdef __FreeBSD__ #include #endif #include #include #include #include #include #include #include #include #include "util.h" #include "out.h" #include "os.h" /* * os_open -- open abstraction layer */ int os_open(const char *pathname, int flags, ...) 
{ int mode_required = (flags & O_CREAT) == O_CREAT; #ifdef O_TMPFILE mode_required |= (flags & O_TMPFILE) == O_TMPFILE; #endif if (mode_required) { va_list arg; va_start(arg, flags); /* Clang requires int due to auto-promotion */ int mode = va_arg(arg, int); va_end(arg); return open(pathname, flags, (mode_t)mode); } else { return open(pathname, flags); } } /* * os_fsync -- fsync abstraction layer */ int os_fsync(int fd) { return fsync(fd); } /* * os_fsync_dir -- fsync the directory */ int os_fsync_dir(const char *dir_name) { int fd = os_open(dir_name, O_RDONLY | O_DIRECTORY); if (fd < 0) return -1; int ret = os_fsync(fd); os_close(fd); return ret; } /* * os_stat -- stat abstraction layer */ int os_stat(const char *pathname, os_stat_t *buf) { return stat(pathname, buf); } /* * os_unlink -- unlink abstraction layer */ int os_unlink(const char *pathname) { return unlink(pathname); } /* * os_access -- access abstraction layer */ int os_access(const char *pathname, int mode) { return access(pathname, mode); } /* * os_fopen -- fopen abstraction layer */ FILE * os_fopen(const char *pathname, const char *mode) { return fopen(pathname, mode); } /* * os_fdopen -- fdopen abstraction layer */ FILE * os_fdopen(int fd, const char *mode) { return fdopen(fd, mode); } /* * os_chmod -- chmod abstraction layer */ int os_chmod(const char *pathname, mode_t mode) { return chmod(pathname, mode); } /* * os_mkstemp -- mkstemp abstraction layer */ int os_mkstemp(char *temp) { return mkstemp(temp); } /* * os_posix_fallocate -- posix_fallocate abstraction layer */ int os_posix_fallocate(int fd, os_off_t offset, off_t len) { #ifdef __FreeBSD__ struct stat fbuf; struct statfs fsbuf; /* * XXX Workaround for https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=223287 * * FreeBSD implements posix_fallocate with a simple block allocation/zero * loop. If the requested size is unreasonably large, this can result in * an uninterruptable system call that will suck up all the space in the * file system and could take hours to fail. To avoid this, make a crude * check to see if the requested allocation is larger than the available * space in the file system (minus any blocks already allocated to the * file), and if so, immediately return ENOSPC. We do the check only if * the offset is 0; otherwise, trying to figure out how many additional * blocks are required is too complicated. * * This workaround is here mostly to fail "absurdly" large requests for * testing purposes; however, it is coded to allow normal (albeit slow) * operation if the space can actually be allocated. Because of the way * PMDK uses posix_fallocate, supporting Linux-style fallocate in * FreeBSD should be considered. */ if (offset == 0) { if (fstatfs(fd, &fsbuf) == -1 || fstat(fd, &fbuf) == -1) return errno; size_t reqd_blocks = ((size_t)len + (fsbuf.f_bsize - 1)) / fsbuf.f_bsize; if (fbuf.st_blocks > 0) { if (reqd_blocks >= (size_t)fbuf.st_blocks) reqd_blocks -= (size_t)fbuf.st_blocks; else reqd_blocks = 0; } if (reqd_blocks > (size_t)fsbuf.f_bavail) return ENOSPC; } #endif /* * First, try to alloc the whole thing in one go. This allows ENOSPC to * fail immediately -- allocating piece by piece would fill the storage * just to abort halfway. */ int err = posix_fallocate(fd, offset, len); if (err != ENOMEM && err != EINTR) return err; /* * Workaround for a bug in tmpfs where it fails large but reasonable * requests that exceed available DRAM but fit within swap space. And * even if a request fits within DRAM, tmpfs will evict other tasks * just to reserve space. 
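	 * Retrying in bounded chunks (below) gives such large-but-reasonable
	 * requests a chance to complete piece by piece instead of failing
	 * outright.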
* * We also want to survive random unrelated signals. Profilers spam * the program with SIGVTALRM/SIGPROF, anything run from a terminal can * receive SIGNWINCH, etc. As fallocate is a long-running syscall, * let's restart it, but in a way that avoids infinite loops. * * Thus: * * limit a single syscall to 1GB * * ignore sporadic signals * * on repeated failures, start reducing syscall size * * ... but not below 1MB */ os_off_t chunk = 1LL << 30; /* 1GB */ int tries = 0; while (len) { if (chunk > len) chunk = len; int err = posix_fallocate(fd, offset, chunk); if (!err) { offset += chunk; len -= chunk; tries = 0; } else if (err != ENOMEM && err != EINTR) { return err; } else if (++tries == 5) { tries = 0; chunk /= 2; /* * Within memory pressure or a signal storm, small * allocs are more likely to get through, but once we * get this small, something is badly wrong. */ if (chunk < 1LL << 20) /* 1MB */ return err; } } return 0; } /* * os_ftruncate -- ftruncate abstraction layer */ int os_ftruncate(int fd, os_off_t length) { return ftruncate(fd, length); } /* * os_flock -- flock abstraction layer */ int os_flock(int fd, int operation) { int opt = 0; if (operation & OS_LOCK_EX) opt |= LOCK_EX; if (operation & OS_LOCK_SH) opt |= LOCK_SH; if (operation & OS_LOCK_UN) opt |= LOCK_UN; if (operation & OS_LOCK_NB) opt |= LOCK_NB; return flock(fd, opt); } /* * os_writev -- writev abstraction layer */ ssize_t os_writev(int fd, const struct iovec *iov, int iovcnt) { return writev(fd, iov, iovcnt); } /* * os_clock_gettime -- clock_gettime abstraction layer */ int os_clock_gettime(int id, struct timespec *ts) { return clock_gettime(id, ts); } /* * os_rand_r -- rand_r abstraction layer */ unsigned os_rand_r(unsigned *seedp) { return (unsigned)rand_r(seedp); } /* * os_unsetenv -- unsetenv abstraction layer */ int os_unsetenv(const char *name) { return unsetenv(name); } /* * os_setenv -- setenv abstraction layer */ int os_setenv(const char *name, const char *value, int overwrite) { return setenv(name, value, overwrite); } /* * secure_getenv -- provide GNU secure_getenv for FreeBSD */ #ifndef __USE_GNU static char * secure_getenv(const char *name) { if (issetugid() != 0) return NULL; return getenv(name); } #endif /* * os_getenv -- getenv abstraction layer */ char * os_getenv(const char *name) { return secure_getenv(name); } /* * os_strsignal -- strsignal abstraction layer */ const char * os_strsignal(int sig) { return strsignal(sig); } int os_execv(const char *path, char *const argv[]) { return execv(path, argv); } vmem-1.8/src/common/os_thread.h000066400000000000000000000133111361505074100165270ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * os_thread.h -- os thread abstraction layer */ #ifndef OS_THREAD_H #define OS_THREAD_H 1 #include #include #ifdef __cplusplus extern "C" { #endif typedef union { long long align; char padding[44]; /* linux: 40 windows: 44 */ } os_mutex_t; typedef union { long long align; char padding[56]; /* linux: 56 windows: 13 */ } os_rwlock_t; typedef union { long long align; char padding[48]; /* linux: 48 windows: 12 */ } os_cond_t; typedef union { long long align; char padding[32]; /* linux: 8 windows: 32 */ } os_thread_t; typedef union { long long align; /* linux: long windows: 8 FreeBSD: 12 */ char padding[16]; /* 16 to be safe */ } os_once_t; #define OS_ONCE_INIT { .padding = {0} } typedef unsigned os_tls_key_t; typedef union { long long align; char padding[56]; /* linux: 56 windows: 8 */ } os_semaphore_t; typedef union { long long align; char padding[56]; /* linux: 56 windows: 8 */ } os_thread_attr_t; typedef union { long long align; char padding[512]; } os_cpu_set_t; #ifdef __FreeBSD__ #define cpu_set_t cpuset_t typedef uintptr_t os_spinlock_t; #else typedef volatile int os_spinlock_t; /* XXX: not implemented on windows */ #endif void os_cpu_zero(os_cpu_set_t *set); void os_cpu_set(size_t cpu, os_cpu_set_t *set); #ifndef _WIN32 #define _When_(...) 
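/*
 * _When_() is a Windows SAL annotation used on several locking functions
 * below; on POSIX builds the macro above expands to nothing.
 */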
#endif int os_once(os_once_t *o, void (*func)(void)); int os_tls_key_create(os_tls_key_t *key, void (*destructor)(void *)); int os_tls_key_delete(os_tls_key_t key); int os_tls_set(os_tls_key_t key, const void *value); void *os_tls_get(os_tls_key_t key); int os_mutex_init(os_mutex_t *__restrict mutex); int os_mutex_destroy(os_mutex_t *__restrict mutex); _When_(return == 0, _Acquires_lock_(mutex->lock)) int os_mutex_lock(os_mutex_t *__restrict mutex); _When_(return == 0, _Acquires_lock_(mutex->lock)) int os_mutex_trylock(os_mutex_t *__restrict mutex); int os_mutex_unlock(os_mutex_t *__restrict mutex); /* XXX - non POSIX */ int os_mutex_timedlock(os_mutex_t *__restrict mutex, const struct timespec *abstime); int os_rwlock_init(os_rwlock_t *__restrict rwlock); int os_rwlock_destroy(os_rwlock_t *__restrict rwlock); int os_rwlock_rdlock(os_rwlock_t *__restrict rwlock); int os_rwlock_wrlock(os_rwlock_t *__restrict rwlock); int os_rwlock_tryrdlock(os_rwlock_t *__restrict rwlock); _When_(return == 0, _Acquires_exclusive_lock_(rwlock->lock)) int os_rwlock_trywrlock(os_rwlock_t *__restrict rwlock); _When_(rwlock->is_write != 0, _Requires_exclusive_lock_held_(rwlock->lock)) _When_(rwlock->is_write == 0, _Requires_shared_lock_held_(rwlock->lock)) int os_rwlock_unlock(os_rwlock_t *__restrict rwlock); int os_rwlock_timedrdlock(os_rwlock_t *__restrict rwlock, const struct timespec *abstime); int os_rwlock_timedwrlock(os_rwlock_t *__restrict rwlock, const struct timespec *abstime); int os_spin_init(os_spinlock_t *lock, int pshared); int os_spin_destroy(os_spinlock_t *lock); int os_spin_lock(os_spinlock_t *lock); int os_spin_unlock(os_spinlock_t *lock); int os_spin_trylock(os_spinlock_t *lock); int os_cond_init(os_cond_t *__restrict cond); int os_cond_destroy(os_cond_t *__restrict cond); int os_cond_broadcast(os_cond_t *__restrict cond); int os_cond_signal(os_cond_t *__restrict cond); int os_cond_timedwait(os_cond_t *__restrict cond, os_mutex_t *__restrict mutex, const struct timespec *abstime); int os_cond_wait(os_cond_t *__restrict cond, os_mutex_t *__restrict mutex); /* threading */ int os_thread_create(os_thread_t *thread, const os_thread_attr_t *attr, void *(*start_routine)(void *), void *arg); int os_thread_join(os_thread_t *thread, void **result); void os_thread_self(os_thread_t *thread); /* thread affinity */ int os_thread_setaffinity_np(os_thread_t *thread, size_t set_size, const os_cpu_set_t *set); int os_thread_atfork(void (*prepare)(void), void (*parent)(void), void (*child)(void)); int os_semaphore_init(os_semaphore_t *sem, unsigned value); int os_semaphore_destroy(os_semaphore_t *sem); int os_semaphore_wait(os_semaphore_t *sem); int os_semaphore_trywait(os_semaphore_t *sem); int os_semaphore_post(os_semaphore_t *sem); #ifdef __cplusplus } #endif #endif /* OS_THREAD_H */ vmem-1.8/src/common/os_thread_posix.c000066400000000000000000000247211361505074100177530ustar00rootroot00000000000000/* * Copyright 2017-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * os_thread_posix.c -- Posix thread abstraction layer */ #define _GNU_SOURCE #include #ifdef __FreeBSD__ #include #endif #include #include "os_thread.h" #include "util.h" typedef struct { pthread_t thread; } internal_os_thread_t; /* * os_once -- pthread_once abstraction layer */ int os_once(os_once_t *o, void (*func)(void)) { COMPILE_ERROR_ON(sizeof(os_once_t) < sizeof(pthread_once_t)); return pthread_once((pthread_once_t *)o, func); } /* * os_tls_key_create -- pthread_key_create abstraction layer */ int os_tls_key_create(os_tls_key_t *key, void (*destructor)(void *)) { COMPILE_ERROR_ON(sizeof(os_tls_key_t) < sizeof(pthread_key_t)); return pthread_key_create((pthread_key_t *)key, destructor); } /* * os_tls_key_delete -- pthread_key_delete abstraction layer */ int os_tls_key_delete(os_tls_key_t key) { return pthread_key_delete((pthread_key_t)key); } /* * os_tls_setspecific -- pthread_key_setspecific abstraction layer */ int os_tls_set(os_tls_key_t key, const void *value) { return pthread_setspecific((pthread_key_t)key, value); } /* * os_tls_get -- pthread_key_getspecific abstraction layer */ void * os_tls_get(os_tls_key_t key) { return pthread_getspecific((pthread_key_t)key); } /* * os_mutex_init -- pthread_mutex_init abstraction layer */ int os_mutex_init(os_mutex_t *__restrict mutex) { COMPILE_ERROR_ON(sizeof(os_mutex_t) < sizeof(pthread_mutex_t)); return pthread_mutex_init((pthread_mutex_t *)mutex, NULL); } /* * os_mutex_destroy -- pthread_mutex_destroy abstraction layer */ int os_mutex_destroy(os_mutex_t *__restrict mutex) { return pthread_mutex_destroy((pthread_mutex_t *)mutex); } /* * os_mutex_lock -- pthread_mutex_lock abstraction layer */ int os_mutex_lock(os_mutex_t *__restrict mutex) { return pthread_mutex_lock((pthread_mutex_t *)mutex); } /* * os_mutex_trylock -- pthread_mutex_trylock abstraction layer */ int os_mutex_trylock(os_mutex_t *__restrict mutex) { return pthread_mutex_trylock((pthread_mutex_t *)mutex); } /* * os_mutex_unlock -- pthread_mutex_unlock abstraction layer */ int os_mutex_unlock(os_mutex_t *__restrict mutex) { return pthread_mutex_unlock((pthread_mutex_t *)mutex); } /* * os_mutex_timedlock -- pthread_mutex_timedlock abstraction layer */ int os_mutex_timedlock(os_mutex_t *__restrict mutex, const struct timespec *abstime) { return pthread_mutex_timedlock((pthread_mutex_t *)mutex, abstime); } /* * os_rwlock_init -- pthread_rwlock_init abstraction layer */ int os_rwlock_init(os_rwlock_t *__restrict rwlock) { COMPILE_ERROR_ON(sizeof(os_rwlock_t) < sizeof(pthread_rwlock_t)); return 
pthread_rwlock_init((pthread_rwlock_t *)rwlock, NULL); } /* * os_rwlock_destroy -- pthread_rwlock_destroy abstraction layer */ int os_rwlock_destroy(os_rwlock_t *__restrict rwlock) { return pthread_rwlock_destroy((pthread_rwlock_t *)rwlock); } /* * os_rwlock_rdlock - pthread_rwlock_rdlock abstraction layer */ int os_rwlock_rdlock(os_rwlock_t *__restrict rwlock) { return pthread_rwlock_rdlock((pthread_rwlock_t *)rwlock); } /* * os_rwlock_wrlock -- pthread_rwlock_wrlock abstraction layer */ int os_rwlock_wrlock(os_rwlock_t *__restrict rwlock) { return pthread_rwlock_wrlock((pthread_rwlock_t *)rwlock); } /* * os_rwlock_unlock -- pthread_rwlock_unlock abstraction layer */ int os_rwlock_unlock(os_rwlock_t *__restrict rwlock) { return pthread_rwlock_unlock((pthread_rwlock_t *)rwlock); } /* * os_rwlock_tryrdlock -- pthread_rwlock_tryrdlock abstraction layer */ int os_rwlock_tryrdlock(os_rwlock_t *__restrict rwlock) { return pthread_rwlock_tryrdlock((pthread_rwlock_t *)rwlock); } /* * os_rwlock_tryrwlock -- pthread_rwlock_trywrlock abstraction layer */ int os_rwlock_trywrlock(os_rwlock_t *__restrict rwlock) { return pthread_rwlock_trywrlock((pthread_rwlock_t *)rwlock); } /* * os_rwlock_timedrdlock -- pthread_rwlock_timedrdlock abstraction layer */ int os_rwlock_timedrdlock(os_rwlock_t *__restrict rwlock, const struct timespec *abstime) { return pthread_rwlock_timedrdlock((pthread_rwlock_t *)rwlock, abstime); } /* * os_rwlock_timedwrlock -- pthread_rwlock_timedwrlock abstraction layer */ int os_rwlock_timedwrlock(os_rwlock_t *__restrict rwlock, const struct timespec *abstime) { return pthread_rwlock_timedwrlock((pthread_rwlock_t *)rwlock, abstime); } /* * os_spin_init -- pthread_spin_init abstraction layer */ int os_spin_init(os_spinlock_t *lock, int pshared) { COMPILE_ERROR_ON(sizeof(os_spinlock_t) < sizeof(pthread_spinlock_t)); return pthread_spin_init((pthread_spinlock_t *)lock, pshared); } /* * os_spin_destroy -- pthread_spin_destroy abstraction layer */ int os_spin_destroy(os_spinlock_t *lock) { return pthread_spin_destroy((pthread_spinlock_t *)lock); } /* * os_spin_lock -- pthread_spin_lock abstraction layer */ int os_spin_lock(os_spinlock_t *lock) { return pthread_spin_lock((pthread_spinlock_t *)lock); } /* * os_spin_unlock -- pthread_spin_unlock abstraction layer */ int os_spin_unlock(os_spinlock_t *lock) { return pthread_spin_unlock((pthread_spinlock_t *)lock); } /* * os_spin_trylock -- pthread_spin_trylock abstraction layer */ int os_spin_trylock(os_spinlock_t *lock) { return pthread_spin_trylock((pthread_spinlock_t *)lock); } /* * os_cond_init -- pthread_cond_init abstraction layer */ int os_cond_init(os_cond_t *__restrict cond) { COMPILE_ERROR_ON(sizeof(os_cond_t) < sizeof(pthread_cond_t)); return pthread_cond_init((pthread_cond_t *)cond, NULL); } /* * os_cond_destroy -- pthread_cond_destroy abstraction layer */ int os_cond_destroy(os_cond_t *__restrict cond) { return pthread_cond_destroy((pthread_cond_t *)cond); } /* * os_cond_broadcast -- pthread_cond_broadcast abstraction layer */ int os_cond_broadcast(os_cond_t *__restrict cond) { return pthread_cond_broadcast((pthread_cond_t *)cond); } /* * os_cond_signal -- pthread_cond_signal abstraction layer */ int os_cond_signal(os_cond_t *__restrict cond) { return pthread_cond_signal((pthread_cond_t *)cond); } /* * os_cond_timedwait -- pthread_cond_timedwait abstraction layer */ int os_cond_timedwait(os_cond_t *__restrict cond, os_mutex_t *__restrict mutex, const struct timespec *abstime) { return pthread_cond_timedwait((pthread_cond_t 
*)cond, (pthread_mutex_t *)mutex, abstime); } /* * os_cond_wait -- pthread_cond_wait abstraction layer */ int os_cond_wait(os_cond_t *__restrict cond, os_mutex_t *__restrict mutex) { return pthread_cond_wait((pthread_cond_t *)cond, (pthread_mutex_t *)mutex); } /* * os_thread_create -- pthread_create abstraction layer */ int os_thread_create(os_thread_t *thread, const os_thread_attr_t *attr, void *(*start_routine)(void *), void *arg) { COMPILE_ERROR_ON(sizeof(os_thread_t) < sizeof(internal_os_thread_t)); internal_os_thread_t *thread_info = (internal_os_thread_t *)thread; return pthread_create(&thread_info->thread, (pthread_attr_t *)attr, start_routine, arg); } /* * os_thread_join -- pthread_join abstraction layer */ int os_thread_join(os_thread_t *thread, void **result) { internal_os_thread_t *thread_info = (internal_os_thread_t *)thread; return pthread_join(thread_info->thread, result); } /* * os_thread_self -- pthread_self abstraction layer */ void os_thread_self(os_thread_t *thread) { internal_os_thread_t *thread_info = (internal_os_thread_t *)thread; thread_info->thread = pthread_self(); } /* * os_thread_atfork -- pthread_atfork abstraction layer */ int os_thread_atfork(void (*prepare)(void), void (*parent)(void), void (*child)(void)) { return pthread_atfork(prepare, parent, child); } /* * os_thread_setaffinity_np -- pthread_atfork abstraction layer */ int os_thread_setaffinity_np(os_thread_t *thread, size_t set_size, const os_cpu_set_t *set) { COMPILE_ERROR_ON(sizeof(os_cpu_set_t) < sizeof(cpu_set_t)); internal_os_thread_t *thread_info = (internal_os_thread_t *)thread; return pthread_setaffinity_np(thread_info->thread, set_size, (cpu_set_t *)set); } /* * os_cpu_zero -- CP_ZERO abstraction layer */ void os_cpu_zero(os_cpu_set_t *set) { CPU_ZERO((cpu_set_t *)set); } /* * os_cpu_set -- CP_SET abstraction layer */ void os_cpu_set(size_t cpu, os_cpu_set_t *set) { CPU_SET(cpu, (cpu_set_t *)set); } /* * os_semaphore_init -- initializes semaphore instance */ int os_semaphore_init(os_semaphore_t *sem, unsigned value) { COMPILE_ERROR_ON(sizeof(os_semaphore_t) < sizeof(sem_t)); return sem_init((sem_t *)sem, 0, value); } /* * os_semaphore_destroy -- destroys a semaphore instance */ int os_semaphore_destroy(os_semaphore_t *sem) { return sem_destroy((sem_t *)sem); } /* * os_semaphore_wait -- decreases the value of the semaphore */ int os_semaphore_wait(os_semaphore_t *sem) { return sem_wait((sem_t *)sem); } /* * os_semaphore_trywait -- tries to decrease the value of the semaphore */ int os_semaphore_trywait(os_semaphore_t *sem) { return sem_trywait((sem_t *)sem); } /* * os_semaphore_post -- increases the value of the semaphore */ int os_semaphore_post(os_semaphore_t *sem) { return sem_post((sem_t *)sem); } vmem-1.8/src/common/os_thread_windows.c000066400000000000000000000360251361505074100203030ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * os_thread_windows.c -- (imperfect) POSIX-like threads for Windows * * Loosely inspired by: * http://locklessinc.com/articles/pthreads_on_windows/ */ #include #include #include #include #include "os_thread.h" #include "util.h" #include "out.h" typedef struct { unsigned attr; CRITICAL_SECTION lock; } internal_os_mutex_t; typedef struct { unsigned attr; char is_write; SRWLOCK lock; } internal_os_rwlock_t; typedef struct { unsigned attr; CONDITION_VARIABLE cond; } internal_os_cond_t; typedef long long internal_os_once_t; typedef struct { HANDLE handle; } internal_semaphore_t; typedef struct { GROUP_AFFINITY affinity; } internal_os_cpu_set_t; typedef struct { HANDLE thread_handle; void *arg; void *(*start_routine)(void *); void *result; } internal_os_thread_t; /* number of useconds between 1970-01-01T00:00:00Z and 1601-01-01T00:00:00Z */ #define DELTA_WIN2UNIX (11644473600000000ull) #define TIMED_LOCK(action, ts) {\ if ((action) == TRUE)\ return 0;\ unsigned long long et = (ts)->tv_sec * 1000000000 + (ts)->tv_nsec;\ while (1) {\ FILETIME _t;\ GetSystemTimeAsFileTime(&_t);\ ULARGE_INTEGER _UI = {\ .HighPart = _t.dwHighDateTime,\ .LowPart = _t.dwLowDateTime,\ };\ if (100 * _UI.QuadPart - 1000 * DELTA_WIN2UNIX >= et)\ return ETIMEDOUT;\ if ((action) == TRUE)\ return 0;\ Sleep(1);\ }\ return ETIMEDOUT;\ } /* * os_mutex_init -- initializes mutex */ int os_mutex_init(os_mutex_t *__restrict mutex) { COMPILE_ERROR_ON(sizeof(os_mutex_t) < sizeof(internal_os_mutex_t)); internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; InitializeCriticalSection(&mutex_internal->lock); return 0; } /* * os_mutex_destroy -- destroys mutex */ int os_mutex_destroy(os_mutex_t *__restrict mutex) { internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; DeleteCriticalSection(&mutex_internal->lock); return 0; } /* * os_mutex_lock -- locks mutex */ _Use_decl_annotations_ int os_mutex_lock(os_mutex_t *__restrict mutex) { internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; EnterCriticalSection(&mutex_internal->lock); if (mutex_internal->lock.RecursionCount > 1) { LeaveCriticalSection(&mutex_internal->lock); FATAL("deadlock detected"); } return 0; } /* * os_mutex_trylock -- tries lock mutex */ _Use_decl_annotations_ int os_mutex_trylock(os_mutex_t *__restrict mutex) { internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; if (TryEnterCriticalSection(&mutex_internal->lock) == FALSE) return EBUSY; if (mutex_internal->lock.RecursionCount > 1) { LeaveCriticalSection(&mutex_internal->lock); return EBUSY; } return 0; 
} /* * os_mutex_timedlock -- tries lock mutex with timeout */ int os_mutex_timedlock(os_mutex_t *__restrict mutex, const struct timespec *abstime) { TIMED_LOCK((os_mutex_trylock(mutex) == 0), abstime); } /* * os_mutex_unlock -- unlocks mutex */ int os_mutex_unlock(os_mutex_t *__restrict mutex) { internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; LeaveCriticalSection(&mutex_internal->lock); return 0; } /* * os_rwlock_init -- initializes rwlock */ int os_rwlock_init(os_rwlock_t *__restrict rwlock) { COMPILE_ERROR_ON(sizeof(os_rwlock_t) < sizeof(internal_os_rwlock_t)); internal_os_rwlock_t *rwlock_internal = (internal_os_rwlock_t *)rwlock; InitializeSRWLock(&rwlock_internal->lock); return 0; } /* * os_rwlock_destroy -- destroys rwlock */ int os_rwlock_destroy(os_rwlock_t *__restrict rwlock) { /* do nothing */ UNREFERENCED_PARAMETER(rwlock); return 0; } /* * os_rwlock_rdlock -- get shared lock */ int os_rwlock_rdlock(os_rwlock_t *__restrict rwlock) { internal_os_rwlock_t *rwlock_internal = (internal_os_rwlock_t *)rwlock; AcquireSRWLockShared(&rwlock_internal->lock); rwlock_internal->is_write = 0; return 0; } /* * os_rwlock_wrlock -- get exclusive lock */ int os_rwlock_wrlock(os_rwlock_t *__restrict rwlock) { internal_os_rwlock_t *rwlock_internal = (internal_os_rwlock_t *)rwlock; AcquireSRWLockExclusive(&rwlock_internal->lock); rwlock_internal->is_write = 1; return 0; } /* * os_rwlock_tryrdlock -- tries get shared lock */ int os_rwlock_tryrdlock(os_rwlock_t *__restrict rwlock) { internal_os_rwlock_t *rwlock_internal = (internal_os_rwlock_t *)rwlock; if (TryAcquireSRWLockShared(&rwlock_internal->lock) == FALSE) { return EBUSY; } else { rwlock_internal->is_write = 0; return 0; } } /* * os_rwlock_trywrlock -- tries get exclusive lock */ _Use_decl_annotations_ int os_rwlock_trywrlock(os_rwlock_t *__restrict rwlock) { internal_os_rwlock_t *rwlock_internal = (internal_os_rwlock_t *)rwlock; if (TryAcquireSRWLockExclusive(&rwlock_internal->lock) == FALSE) { return EBUSY; } else { rwlock_internal->is_write = 1; return 0; } } /* * os_rwlock_timedrdlock -- gets shared lock with timeout */ int os_rwlock_timedrdlock(os_rwlock_t *__restrict rwlock, const struct timespec *abstime) { TIMED_LOCK((os_rwlock_tryrdlock(rwlock) == 0), abstime); } /* * os_rwlock_timedwrlock -- gets exclusive lock with timeout */ int os_rwlock_timedwrlock(os_rwlock_t *__restrict rwlock, const struct timespec *abstime) { TIMED_LOCK((os_rwlock_trywrlock(rwlock) == 0), abstime); } /* * os_rwlock_unlock -- unlocks rwlock */ _Use_decl_annotations_ int os_rwlock_unlock(os_rwlock_t *__restrict rwlock) { internal_os_rwlock_t *rwlock_internal = (internal_os_rwlock_t *)rwlock; if (rwlock_internal->is_write) ReleaseSRWLockExclusive(&rwlock_internal->lock); else ReleaseSRWLockShared(&rwlock_internal->lock); return 0; } /* * os_cond_init -- initializes condition variable */ int os_cond_init(os_cond_t *__restrict cond) { COMPILE_ERROR_ON(sizeof(os_cond_t) < sizeof(internal_os_cond_t)); internal_os_cond_t *cond_internal = (internal_os_cond_t *)cond; InitializeConditionVariable(&cond_internal->cond); return 0; } /* * os_cond_destroy -- destroys condition variable */ int os_cond_destroy(os_cond_t *__restrict cond) { /* do nothing */ UNREFERENCED_PARAMETER(cond); return 0; } /* * os_cond_broadcast -- broadcast condition variable */ int os_cond_broadcast(os_cond_t *__restrict cond) { internal_os_cond_t *cond_internal = (internal_os_cond_t *)cond; WakeAllConditionVariable(&cond_internal->cond); return 0; } /* * os_cond_wait -- signal 
condition variable */ int os_cond_signal(os_cond_t *__restrict cond) { internal_os_cond_t *cond_internal = (internal_os_cond_t *)cond; WakeConditionVariable(&cond_internal->cond); return 0; } /* * get_rel_wait -- (internal) convert timespec to windows timeout */ static DWORD get_rel_wait(const struct timespec *abstime) { struct __timeb64 t; _ftime64_s(&t); time_t now_ms = t.time * 1000 + t.millitm; time_t ms = (time_t)(abstime->tv_sec * 1000 + abstime->tv_nsec / 1000000); DWORD rel_wait = (DWORD)(ms - now_ms); return rel_wait < 0 ? 0 : rel_wait; } /* * os_cond_timedwait -- waits on condition variable with timeout */ int os_cond_timedwait(os_cond_t *__restrict cond, os_mutex_t *__restrict mutex, const struct timespec *abstime) { internal_os_cond_t *cond_internal = (internal_os_cond_t *)cond; internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; BOOL ret; SetLastError(0); ret = SleepConditionVariableCS(&cond_internal->cond, &mutex_internal->lock, get_rel_wait(abstime)); if (ret == FALSE) return (GetLastError() == ERROR_TIMEOUT) ? ETIMEDOUT : EINVAL; return 0; } /* * os_cond_wait -- waits on condition variable */ int os_cond_wait(os_cond_t *__restrict cond, os_mutex_t *__restrict mutex) { internal_os_cond_t *cond_internal = (internal_os_cond_t *)cond; internal_os_mutex_t *mutex_internal = (internal_os_mutex_t *)mutex; /* XXX - return error code based on GetLastError() */ BOOL ret; ret = SleepConditionVariableCS(&cond_internal->cond, &mutex_internal->lock, INFINITE); return (ret == FALSE) ? EINVAL : 0; } /* * os_once -- once-only function call */ int os_once(os_once_t *once, void (*func)(void)) { internal_os_once_t *once_internal = (internal_os_once_t *)once; internal_os_once_t tmp; while ((tmp = *once_internal) != 2) { if (tmp == 1) continue; /* another thread is already calling func() */ /* try to be the first one... 
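		 * (the once word goes 0 -> 1 -> 2: 0 = not yet run,
		 * 1 = func() in progress, 2 = done)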
*/ if (!util_bool_compare_and_swap64(once_internal, tmp, 1)) continue; /* sorry, another thread was faster */ func(); if (!util_bool_compare_and_swap64(once_internal, 1, 2)) { ERR("error setting once"); return -1; } } return 0; } /* * os_tls_key_create -- creates a new tls key */ int os_tls_key_create(os_tls_key_t *key, void (*destructor)(void *)) { *key = FlsAlloc(destructor); if (*key == TLS_OUT_OF_INDEXES) return EAGAIN; return 0; } /* * os_tls_key_delete -- deletes key from tls */ int os_tls_key_delete(os_tls_key_t key) { if (!FlsFree(key)) return EINVAL; return 0; } /* * os_tls_set -- sets a value in tls */ int os_tls_set(os_tls_key_t key, const void *value) { if (!FlsSetValue(key, (LPVOID)value)) return ENOENT; return 0; } /* * os_tls_get -- gets a value from tls */ void * os_tls_get(os_tls_key_t key) { return FlsGetValue(key); } /* threading */ /* * os_thread_start_routine_wrapper is a start routine for _beginthreadex() and * it helps: * * - wrap the os_thread_create's start function */ static unsigned __stdcall os_thread_start_routine_wrapper(void *arg) { internal_os_thread_t *thread_info = (internal_os_thread_t *)arg; thread_info->result = thread_info->start_routine(thread_info->arg); return 0; } /* * os_thread_create -- starts a new thread */ int os_thread_create(os_thread_t *thread, const os_thread_attr_t *attr, void *(*start_routine)(void *), void *arg) { COMPILE_ERROR_ON(sizeof(os_thread_t) < sizeof(internal_os_thread_t)); internal_os_thread_t *thread_info = (internal_os_thread_t *)thread; thread_info->start_routine = start_routine; thread_info->arg = arg; thread_info->thread_handle = (HANDLE)_beginthreadex(NULL, 0, os_thread_start_routine_wrapper, thread_info, CREATE_SUSPENDED, NULL); if (thread_info->thread_handle == 0) { free(thread_info); return errno; } if (ResumeThread(thread_info->thread_handle) == -1) { free(thread_info); return EAGAIN; } return 0; } /* * os_thread_join -- joins a thread */ int os_thread_join(os_thread_t *thread, void **result) { internal_os_thread_t *internal_thread = (internal_os_thread_t *)thread; WaitForSingleObject(internal_thread->thread_handle, INFINITE); CloseHandle(internal_thread->thread_handle); if (result != NULL) *result = internal_thread->result; return 0; } /* * os_thread_self -- returns handle to calling thread */ void os_thread_self(os_thread_t *thread) { internal_os_thread_t *internal_thread = (internal_os_thread_t *)thread; internal_thread->thread_handle = GetCurrentThread(); } /* * os_cpu_zero -- clears cpu set */ void os_cpu_zero(os_cpu_set_t *set) { internal_os_cpu_set_t *internal_set = (internal_os_cpu_set_t *)set; memset(&internal_set->affinity, 0, sizeof(internal_set->affinity)); } /* * os_cpu_set -- adds cpu to set */ void os_cpu_set(size_t cpu, os_cpu_set_t *set) { internal_os_cpu_set_t *internal_set = (internal_os_cpu_set_t *)set; int sum = 0; int group_max = GetActiveProcessorGroupCount(); int group = 0; while (group < group_max) { sum += GetActiveProcessorCount(group); if (sum > cpu) { /* * XXX: can't set affinity to two different cpu groups */ if (internal_set->affinity.Group != group) { internal_set->affinity.Mask = 0; internal_set->affinity.Group = group; } cpu -= sum - GetActiveProcessorCount(group); internal_set->affinity.Mask |= 1LL << cpu; return; } group++; } FATAL("os_cpu_set cpu out of bounds"); } /* * os_thread_setaffinity_np -- sets affinity of the thread */ int os_thread_setaffinity_np(os_thread_t *thread, size_t set_size, const os_cpu_set_t *set) { internal_os_cpu_set_t *internal_set = (internal_os_cpu_set_t 
*)set; internal_os_thread_t *internal_thread = (internal_os_thread_t *)thread; int ret = SetThreadGroupAffinity(internal_thread->thread_handle, &internal_set->affinity, NULL); return ret != 0 ? 0 : EINVAL; } /* * os_semaphore_init -- initializes a new semaphore instance */ int os_semaphore_init(os_semaphore_t *sem, unsigned value) { internal_semaphore_t *internal_sem = (internal_semaphore_t *)sem; internal_sem->handle = CreateSemaphore(NULL, value, LONG_MAX, NULL); return internal_sem->handle != 0 ? 0 : -1; } /* * os_semaphore_destroy -- destroys a semaphore instance */ int os_semaphore_destroy(os_semaphore_t *sem) { internal_semaphore_t *internal_sem = (internal_semaphore_t *)sem; BOOL ret = CloseHandle(internal_sem->handle); return ret ? 0 : -1; } /* * os_semaphore_wait -- decreases the value of the semaphore */ int os_semaphore_wait(os_semaphore_t *sem) { internal_semaphore_t *internal_sem = (internal_semaphore_t *)sem; DWORD ret = WaitForSingleObject(internal_sem->handle, INFINITE); return ret == WAIT_OBJECT_0 ? 0 : -1; } /* * os_semaphore_trywait -- tries to decrease the value of the semaphore */ int os_semaphore_trywait(os_semaphore_t *sem) { internal_semaphore_t *internal_sem = (internal_semaphore_t *)sem; DWORD ret = WaitForSingleObject(internal_sem->handle, 0); if (ret == WAIT_TIMEOUT) errno = EAGAIN; return ret == WAIT_OBJECT_0 ? 0 : -1; } /* * os_semaphore_post -- increases the value of the semaphore */ int os_semaphore_post(os_semaphore_t *sem) { internal_semaphore_t *internal_sem = (internal_semaphore_t *)sem; BOOL ret = ReleaseSemaphore(internal_sem->handle, 1, NULL); return ret ? 0 : -1; } vmem-1.8/src/common/os_windows.c000066400000000000000000000346311361505074100167550ustar00rootroot00000000000000/* * Copyright 2017-2019, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * os_windows.c -- windows abstraction layer */ #include #include #include #include #include "alloc.h" #include "util.h" #include "os.h" #include "out.h" #define UTF8_BOM "\xEF\xBB\xBF" /* * os_open -- open abstraction layer */ int os_open(const char *pathname, int flags, ...) { wchar_t *path = util_toUTF16(pathname); if (path == NULL) return -1; int ret; if (flags & O_CREAT) { va_list arg; va_start(arg, flags); mode_t mode = va_arg(arg, mode_t); va_end(arg); ret = _wopen(path, flags, mode); } else { ret = _wopen(path, flags); } util_free_UTF16(path); /* BOM skipping should not modify errno */ int orig_errno = errno; /* * text files on windows can contain BOM. As we open files * in binary mode we have to detect bom and skip it */ if (ret != -1) { char bom[3]; if (_read(ret, bom, sizeof(bom)) != 3 || memcmp(bom, UTF8_BOM, 3) != 0) { /* UTF-8 bom not found - reset file to the beginning */ _lseek(ret, 0, SEEK_SET); } } errno = orig_errno; return ret; } /* * os_fsync -- fsync abstraction layer */ int os_fsync(int fd) { HANDLE handle = (HANDLE) _get_osfhandle(fd); if (handle == INVALID_HANDLE_VALUE) { errno = EBADF; return -1; } if (!FlushFileBuffers(handle)) { errno = EINVAL; return -1; } return 0; } /* * os_fsync_dir -- fsync the directory */ int os_fsync_dir(const char *dir_name) { /* XXX not used and not implemented */ ASSERT(0); return -1; } /* * os_stat -- stat abstraction layer */ int os_stat(const char *pathname, os_stat_t *buf) { wchar_t *path = util_toUTF16(pathname); if (path == NULL) return -1; int ret = _wstat64(path, buf); util_free_UTF16(path); return ret; } /* * os_unlink -- unlink abstraction layer */ int os_unlink(const char *pathname) { wchar_t *path = util_toUTF16(pathname); if (path == NULL) return -1; int ret = _wunlink(path); util_free_UTF16(path); return ret; } /* * os_access -- access abstraction layer */ int os_access(const char *pathname, int mode) { wchar_t *path = util_toUTF16(pathname); if (path == NULL) return -1; int ret = _waccess(path, mode); util_free_UTF16(path); return ret; } /* * os_skipBOM -- (internal) Skip BOM in file stream * * text files on windows can contain BOM. We have to detect bom and skip it. 
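 *
 * The first three bytes of the stream are compared against UTF8_BOM;
 * on a mismatch the stream is rewound to offset 0, so no data is lost.
 * Both os_fopen() and os_fdopen() pass their freshly opened stream
 * through this helper.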
*/ static void os_skipBOM(FILE *file) { if (file == NULL) return; /* BOM skipping should not modify errno */ int orig_errno = errno; /* UTF-8 BOM */ uint8_t bom[3]; size_t read_num = fread(bom, sizeof(bom[0]), sizeof(bom), file); if (read_num != ARRAY_SIZE(bom)) goto out; if (memcmp(bom, UTF8_BOM, ARRAY_SIZE(bom)) != 0) { /* UTF-8 bom not found - reset file to the beginning */ fseek(file, 0, SEEK_SET); } out: errno = orig_errno; } /* * os_fopen -- fopen abstraction layer */ FILE * os_fopen(const char *pathname, const char *mode) { wchar_t *path = util_toUTF16(pathname); if (path == NULL) return NULL; wchar_t *wmode = util_toUTF16(mode); if (wmode == NULL) { util_free_UTF16(path); return NULL; } FILE *ret = _wfopen(path, wmode); util_free_UTF16(path); util_free_UTF16(wmode); os_skipBOM(ret); return ret; } /* * os_fdopen -- fdopen abstraction layer */ FILE * os_fdopen(int fd, const char *mode) { FILE *ret = fdopen(fd, mode); os_skipBOM(ret); return ret; } /* * os_chmod -- chmod abstraction layer */ int os_chmod(const char *pathname, mode_t mode) { wchar_t *path = util_toUTF16(pathname); if (path == NULL) return -1; int ret = _wchmod(path, mode); util_free_UTF16(path); return ret; } /* * os_mkstemp -- generate a unique temporary filename from template */ int os_mkstemp(char *temp) { unsigned rnd; wchar_t *utemp = util_toUTF16(temp); if (utemp == NULL) return -1; wchar_t *path = _wmktemp(utemp); if (path == NULL) { util_free_UTF16(utemp); return -1; } wchar_t *npath = Malloc(sizeof(*npath) * wcslen(path) + _MAX_FNAME); if (npath == NULL) { util_free_UTF16(utemp); return -1; } wcscpy(npath, path); util_free_UTF16(utemp); /* * Use rand_s to generate more unique tmp file name than _mktemp do. * In case with multiple threads and multiple files even after close() * file name conflicts occurred. * It resolved issue with synchronous removing * multiples files by system. */ rand_s(&rnd); int ret = _snwprintf(npath + wcslen(npath), _MAX_FNAME, L"%u", rnd); if (ret < 0) goto out; /* * Use O_TEMPORARY flag to make sure the file is deleted when * the last file descriptor is closed. Also, it prevents opening * this file from another process. */ ret = _wopen(npath, O_RDWR | O_CREAT | O_EXCL | O_TEMPORARY, S_IWRITE | S_IREAD); out: Free(npath); return ret; } /* * os_posix_fallocate -- allocate file space */ int os_posix_fallocate(int fd, os_off_t offset, os_off_t len) { /* * From POSIX: * "EINVAL -- The len argument was zero or the offset argument was * less than zero." * * From Linux man-page: * "EINVAL -- offset was less than 0, or len was less than or * equal to 0" */ if (offset < 0 || len <= 0) return EINVAL; /* * From POSIX: * "EFBIG -- The value of offset+len is greater than the maximum * file size." * * Overflow can't be checked for by _chsize_s, since it only gets * the sum. */ if (offset + len < offset) return EFBIG; /* * posix_fallocate should not clobber errno, but * _filelengthi64 might set errno. 
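 * The original errno is therefore saved before the call and restored
 * afterwards; a failure of _filelengthi64() is reported through the
 * return value instead of through errno.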
*/ int orig_errno = errno; __int64 current_size = _filelengthi64(fd); int file_length_errno = errno; errno = orig_errno; if (current_size < 0) return file_length_errno; __int64 requested_size = offset + len; if (requested_size <= current_size) return 0; return _chsize_s(fd, requested_size); } /* * os_ftruncate -- truncate a file to a specified length */ int os_ftruncate(int fd, os_off_t length) { return _chsize_s(fd, length); } /* * os_flock -- apply or remove an advisory lock on an open file */ int os_flock(int fd, int operation) { int flags = 0; SYSTEM_INFO systemInfo; GetSystemInfo(&systemInfo); switch (operation & (OS_LOCK_EX | OS_LOCK_SH | OS_LOCK_UN)) { case OS_LOCK_EX: case OS_LOCK_SH: if (operation & OS_LOCK_NB) flags = _LK_NBLCK; else flags = _LK_LOCK; break; case OS_LOCK_UN: flags = _LK_UNLCK; break; default: errno = EINVAL; return -1; } os_off_t filelen = _filelengthi64(fd); if (filelen < 0) return -1; /* for our purpose it's enough to lock the first page of the file */ long len = (filelen > systemInfo.dwPageSize) ? systemInfo.dwPageSize : (long)filelen; int res = _locking(fd, flags, len); if (res != 0 && errno == EACCES) errno = EWOULDBLOCK; /* for consistency with flock() */ return res; } /* * os_writev -- windows version of writev function * * XXX: _write and other similar functions are 32 bit on windows * if size of data is bigger then 2^32, this function * will be not atomic. */ ssize_t os_writev(int fd, const struct iovec *iov, int iovcnt) { size_t size = 0; /* XXX: _write is 32 bit on windows */ for (int i = 0; i < iovcnt; i++) size += iov[i].iov_len; void *buf = malloc(size); if (buf == NULL) return ENOMEM; char *it_buf = buf; for (int i = 0; i < iovcnt; i++) { memcpy(it_buf, iov[i].iov_base, iov[i].iov_len); it_buf += iov[i].iov_len; } ssize_t written = 0; while (size > 0) { int ret = _write(fd, buf, size >= MAXUINT ? MAXUINT : (unsigned)size); if (ret == -1) { written = -1; break; } written += ret; size -= ret; } free(buf); return written; } #define NSEC_IN_SEC 1000000000ull /* number of useconds between 1970-01-01T00:00:00Z and 1601-01-01T00:00:00Z */ #define DELTA_WIN2UNIX (11644473600000000ull) /* * clock_gettime -- returns elapsed time since the system was restarted * or since Epoch, depending on the mode id */ int os_clock_gettime(int id, struct timespec *ts) { switch (id) { case CLOCK_MONOTONIC: { LARGE_INTEGER time; LARGE_INTEGER frequency; QueryPerformanceFrequency(&frequency); QueryPerformanceCounter(&time); ts->tv_sec = time.QuadPart / frequency.QuadPart; ts->tv_nsec = (long)( (time.QuadPart % frequency.QuadPart) * NSEC_IN_SEC / frequency.QuadPart); } break; case CLOCK_REALTIME: { FILETIME ctime_ft; GetSystemTimeAsFileTime(&ctime_ft); ULARGE_INTEGER ctime = { .HighPart = ctime_ft.dwHighDateTime, .LowPart = ctime_ft.dwLowDateTime, }; ts->tv_sec = (ctime.QuadPart - DELTA_WIN2UNIX * 10) / 10000000; ts->tv_nsec = ((ctime.QuadPart - DELTA_WIN2UNIX * 10) % 10000000) * 100; } break; default: SetLastError(EINVAL); return -1; } return 0; } /* * os_setenv -- change or add an environment variable */ int os_setenv(const char *name, const char *value, int overwrite) { errno_t err; /* * If caller doesn't want to overwrite make sure that a environment * variable with the same name doesn't exist. */ if (!overwrite && getenv(name)) return 0; /* * _putenv_s returns a non-zero error code on failure but setenv * needs to return -1 on failure, let's translate the error code. 
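 * A non-zero result is stored in errno and -1 is returned, matching
 * the POSIX setenv() contract.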
*/ if ((err = _putenv_s(name, value)) != 0) { errno = err; return -1; } return 0; } /* * os_unsetenv -- remove an environment variable */ int os_unsetenv(const char *name) { errno_t err; if ((err = _putenv_s(name, "")) != 0) { errno = err; return -1; } return 0; } /* * os_getenv -- getenv abstraction layer */ char * os_getenv(const char *name) { return getenv(name); } /* * rand_r -- rand_r for windows * * XXX: RAND_MAX is equal 0x7fff on Windows, so to get 32 bit random number * we need to merge two numbers returned by rand_s(). * It is not to the best solution as subsequences returned by rand_s are * not guaranteed to be independent. * * XXX: Windows doesn't implement deterministic thread-safe pseudorandom * generator (generator which can be initialized by seed ). * We have to chose between a deterministic nonthread-safe generator * (rand(), srand()) or a non-deterministic thread-safe generator(rand_s()) * as thread-safety is more important, a seed parameter is ignored in this * implementation. */ unsigned os_rand_r(unsigned *seedp) { UNREFERENCED_PARAMETER(seedp); unsigned part1, part2; rand_s(&part1); rand_s(&part2); return part1 << 16 | part2; } /* * sys_siglist -- map of signal to human readable messages like sys_siglist */ const char * const sys_siglist[] = { "Unknown signal 0", /* 0 */ "Hangup", /* 1 */ "Interrupt", /* 2 */ "Quit", /* 3 */ "Illegal instruction", /* 4 */ "Trace/breakpoint trap", /* 5 */ "Aborted", /* 6 */ "Bus error", /* 7 */ "Floating point exception", /* 8 */ "Killed", /* 9 */ "User defined signal 1", /* 10 */ "Segmentation fault", /* 11 */ "User defined signal 2", /* 12 */ "Broken pipe", /* 13 */ "Alarm clock", /* 14 */ "Terminated", /* 15 */ "Stack fault", /* 16 */ "Child exited", /* 17 */ "Continued", /* 18 */ "Stopped (signal)", /* 19 */ "Stopped", /* 20 */ "Stopped (tty input)", /* 21 */ "Stopped (tty output)", /* 22 */ "Urgent I/O condition", /* 23 */ "CPU time limit exceeded", /* 24 */ "File size limit exceeded", /* 25 */ "Virtual timer expired", /* 26 */ "Profiling timer expired", /* 27 */ "Window changed", /* 28 */ "I/O possible", /* 29 */ "Power failure", /* 30 */ "Bad system call", /* 31 */ "Unknown signal 32" /* 32 */ }; int sys_siglist_size = ARRAYSIZE(sys_siglist); /* * string constants for strsignal * XXX: ideally this should have the signal number as the suffix but then we * should use a buffer from thread local storage, so deferring the same till * we need it * NOTE: In Linux strsignal uses TLS for the same reason but if it fails to get * a thread local buffer it falls back to using a static buffer trading the * thread safety. */ #define STR_REALTIME_SIGNAL "Real-time signal" #define STR_UNKNOWN_SIGNAL "Unknown signal" /* * strsignal -- returns a string describing the signal number 'sig' * * XXX: According to POSIX, this one is of type 'char *', but in our * implementation it returns 'const char *'. 
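 *
 * Signal numbers covered by sys_siglist[] map to the strings above,
 * numbers 34..64 are reported as a real-time signal, and anything else
 * as an unknown signal.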
*/ const char * os_strsignal(int sig) { if (sig >= 0 && sig < ARRAYSIZE(sys_siglist)) return sys_siglist[sig]; else if (sig >= 34 && sig <= 64) return STR_REALTIME_SIGNAL; else return STR_UNKNOWN_SIGNAL; } int os_execv(const char *path, char *const argv[]) { wchar_t *wpath = util_toUTF16(path); if (wpath == NULL) return -1; int argc = 0; while (argv[argc]) argc++; int ret; wchar_t **wargv = Zalloc((argc + 1) * sizeof(wargv[0])); if (!wargv) { ret = -1; goto wargv_alloc_failed; } for (int i = 0; i < argc; ++i) { wargv[i] = util_toUTF16(argv[i]); if (!wargv[i]) { ret = -1; goto end; } } intptr_t iret = _wexecv(wpath, wargv); if (iret == 0) ret = 0; else ret = -1; end: for (int i = 0; i < argc; ++i) util_free_UTF16(wargv[i]); Free(wargv); wargv_alloc_failed: util_free_UTF16(wpath); return ret; } vmem-1.8/src/common/out.c000066400000000000000000000316221361505074100153660ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * out.c -- support for logging, tracing, and assertion output * * Macros like LOG(), OUT, ASSERT(), etc. end up here. 
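 *
 * A typical call site looks roughly like this (illustrative only; the
 * fd and len variables are placeholders):
 *
 *	LOG(3, "fd %d len %zu", fd, len);
 *	ERR("!open");
 *
 * In debug builds a LOG() message is emitted only when its level does
 * not exceed the configured log level, and a leading '!' in the format
 * string appends the current errno message, as handled by out_common()
 * and out_error() below.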
*/ #include #include #include #include #include #include #include #include "out.h" #include "os.h" #include "os_thread.h" #include "valgrind_internal.h" #include "util.h" /* XXX - modify Linux makefiles to generate srcversion.h and remove #ifdef */ #ifdef _WIN32 #include "srcversion.h" #endif static const char *Log_prefix; static int Log_level; static FILE *Out_fp; static unsigned Log_alignment; #ifndef NO_LIBPTHREAD #define MAXPRINT 8192 /* maximum expected log line */ #else #define MAXPRINT 256 /* maximum expected log line for libpmem */ #endif struct errormsg { char msg[MAXPRINT]; #ifdef _WIN32 wchar_t wmsg[MAXPRINT]; #endif }; #ifndef NO_LIBPTHREAD static os_once_t Last_errormsg_key_once = OS_ONCE_INIT; static os_tls_key_t Last_errormsg_key; static void _Last_errormsg_key_alloc(void) { int pth_ret = os_tls_key_create(&Last_errormsg_key, free); if (pth_ret) FATAL("!os_thread_key_create"); VALGRIND_ANNOTATE_HAPPENS_BEFORE(&Last_errormsg_key_once); } static void Last_errormsg_key_alloc(void) { os_once(&Last_errormsg_key_once, _Last_errormsg_key_alloc); /* * Workaround Helgrind's bug: * https://bugs.kde.org/show_bug.cgi?id=337735 */ VALGRIND_ANNOTATE_HAPPENS_AFTER(&Last_errormsg_key_once); } static inline void Last_errormsg_fini(void) { void *p = os_tls_get(Last_errormsg_key); if (p) { free(p); (void) os_tls_set(Last_errormsg_key, NULL); } (void) os_tls_key_delete(Last_errormsg_key); } static inline struct errormsg * Last_errormsg_get(void) { Last_errormsg_key_alloc(); struct errormsg *errormsg = os_tls_get(Last_errormsg_key); if (errormsg == NULL) { errormsg = malloc(sizeof(struct errormsg)); if (errormsg == NULL) FATAL("!malloc"); /* make sure it contains empty string initially */ errormsg->msg[0] = '\0'; int ret = os_tls_set(Last_errormsg_key, errormsg); if (ret) FATAL("!os_tls_set"); } return errormsg; } #else /* * We don't want libpmem to depend on libpthread. Instead of using pthread * API to dynamically allocate thread-specific error message buffer, we put * it into TLS. However, keeping a pretty large static buffer (8K) in TLS * may lead to some issues, so the maximum message length is reduced. * Fortunately, it looks like the longest error message in libpmem should * not be longer than about 90 chars (in case of pmem_check_version()). */ static __thread struct errormsg Last_errormsg; static inline void Last_errormsg_key_alloc(void) { } static inline void Last_errormsg_fini(void) { } static inline const struct errormsg * Last_errormsg_get(void) { return &Last_errormsg.msg[0]; } #endif /* NO_LIBPTHREAD */ /* * out_init -- initialize the log * * This is called from the library initialization code. 
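 * (see common_init() in pmemcommon.h), e.g. with arguments along the
 * lines of (values illustrative):
 *
 *	out_init("libvmem", "VMEM_LOG_LEVEL", "VMEM_LOG_FILE", 1, 1);
 *
 * where the second and third arguments name the environment variables
 * that select the log level and log file.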
*/ void out_init(const char *log_prefix, const char *log_level_var, const char *log_file_var, int major_version, int minor_version) { static int once; /* only need to initialize the out module once */ if (once) return; once++; Log_prefix = log_prefix; #ifdef DEBUG char *log_level; char *log_file; if ((log_level = os_getenv(log_level_var)) != NULL) { Log_level = atoi(log_level); if (Log_level < 0) { Log_level = 0; } } if ((log_file = os_getenv(log_file_var)) != NULL && log_file[0] != '\0') { /* reserve more than enough space for a PID + '\0' */ char log_file_pid[PATH_MAX]; size_t len = strlen(log_file); if (len > 0 && log_file[len - 1] == '-') { int ret = snprintf(log_file_pid, PATH_MAX, "%s%d", log_file, getpid()); if (ret < 0 || ret >= PATH_MAX) { ERR("snprintf: %d", ret); abort(); } log_file = log_file_pid; } if ((Out_fp = os_fopen(log_file, "w")) == NULL) { char buff[UTIL_MAX_ERR_MSG]; util_strerror(errno, buff, UTIL_MAX_ERR_MSG); fprintf(stderr, "Error (%s): %s=%s: %s\n", log_prefix, log_file_var, log_file, buff); abort(); } } #endif /* DEBUG */ char *log_alignment = os_getenv("PMDK_LOG_ALIGN"); if (log_alignment) { int align = atoi(log_alignment); if (align > 0) Log_alignment = (unsigned)align; } if (Out_fp == NULL) Out_fp = stderr; else setlinebuf(Out_fp); #ifdef DEBUG static char namepath[PATH_MAX]; LOG(1, "pid %d: program: %s", getpid(), util_getexecname(namepath, PATH_MAX)); #endif LOG(1, "%s version %d.%d", log_prefix, major_version, minor_version); static __attribute__((used)) const char *version_msg = "src version: " SRCVERSION; LOG(1, "%s", version_msg); #if VG_PMEMCHECK_ENABLED /* * Attribute "used" to prevent compiler from optimizing out the variable * when LOG expands to no code (!DEBUG) */ static __attribute__((used)) const char *pmemcheck_msg = "compiled with support for Valgrind pmemcheck"; LOG(1, "%s", pmemcheck_msg); #endif /* VG_PMEMCHECK_ENABLED */ #if VG_HELGRIND_ENABLED static __attribute__((used)) const char *helgrind_msg = "compiled with support for Valgrind helgrind"; LOG(1, "%s", helgrind_msg); #endif /* VG_HELGRIND_ENABLED */ #if VG_MEMCHECK_ENABLED static __attribute__((used)) const char *memcheck_msg = "compiled with support for Valgrind memcheck"; LOG(1, "%s", memcheck_msg); #endif /* VG_MEMCHECK_ENABLED */ #if VG_DRD_ENABLED static __attribute__((used)) const char *drd_msg = "compiled with support for Valgrind drd"; LOG(1, "%s", drd_msg); #endif /* VG_DRD_ENABLED */ Last_errormsg_key_alloc(); } /* * out_fini -- close the log file * * This is called to close log file before process stop. */ void out_fini(void) { if (Out_fp != NULL && Out_fp != stderr) { fclose(Out_fp); Out_fp = stderr; } Last_errormsg_fini(); } /* * out_print_func -- default print_func, goes to stderr or Out_fp */ static void out_print_func(const char *s) { /* to suppress drd false-positive */ /* XXX: confirm real nature of this issue: pmem/issues#863 */ #ifdef SUPPRESS_FPUTS_DRD_ERROR VALGRIND_ANNOTATE_IGNORE_READS_BEGIN(); VALGRIND_ANNOTATE_IGNORE_WRITES_BEGIN(); #endif fputs(s, Out_fp); #ifdef SUPPRESS_FPUTS_DRD_ERROR VALGRIND_ANNOTATE_IGNORE_READS_END(); VALGRIND_ANNOTATE_IGNORE_WRITES_END(); #endif } /* * calling Print(s) calls the current print_func... 
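 * which defaults to out_print_func() writing to Out_fp and can be
 * overridden with out_set_print_func(); the same applies to Vsnprintf
 * via out_set_vsnprintf_func(). For example (illustrative only,
 * my_syslog_print is a placeholder callback):
 *
 *	out_set_print_func(my_syslog_print);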
*/ typedef void (*Print_func)(const char *s); typedef int (*Vsnprintf_func)(char *str, size_t size, const char *format, va_list ap); static Print_func Print = out_print_func; static Vsnprintf_func Vsnprintf = vsnprintf; /* * out_set_print_func -- allow override of print_func used by out module */ void out_set_print_func(void (*print_func)(const char *s)) { LOG(3, "print %p", print_func); Print = (print_func == NULL) ? out_print_func : print_func; } /* * out_set_vsnprintf_func -- allow override of vsnprintf_func used by out module */ void out_set_vsnprintf_func(int (*vsnprintf_func)(char *str, size_t size, const char *format, va_list ap)) { LOG(3, "vsnprintf %p", vsnprintf_func); Vsnprintf = (vsnprintf_func == NULL) ? vsnprintf : vsnprintf_func; } /* * out_snprintf -- (internal) custom snprintf implementation */ FORMAT_PRINTF(3, 4) static int out_snprintf(char *str, size_t size, const char *format, ...) { int ret; va_list ap; va_start(ap, format); ret = Vsnprintf(str, size, format, ap); va_end(ap); return (ret); } /* * out_common -- common output code, all output goes through here */ static void out_common(const char *file, int line, const char *func, int level, const char *suffix, const char *fmt, va_list ap) { int oerrno = errno; char buf[MAXPRINT]; unsigned cc = 0; int ret; const char *sep = ""; char errstr[UTIL_MAX_ERR_MSG] = ""; if (file) { char *f = strrchr(file, OS_DIR_SEPARATOR); if (f) file = f + 1; ret = out_snprintf(&buf[cc], MAXPRINT - cc, "<%s>: <%d> [%s:%d %s] ", Log_prefix, level, file, line, func); if (ret < 0) { Print("out_snprintf failed"); goto end; } cc += (unsigned)ret; if (cc < Log_alignment) { memset(buf + cc, ' ', Log_alignment - cc); cc = Log_alignment; } } if (fmt) { if (*fmt == '!') { fmt++; sep = ": "; util_strerror(errno, errstr, UTIL_MAX_ERR_MSG); } ret = Vsnprintf(&buf[cc], MAXPRINT - cc, fmt, ap); if (ret < 0) { Print("Vsnprintf failed"); goto end; } cc += (unsigned)ret; } out_snprintf(&buf[cc], MAXPRINT - cc, "%s%s%s", sep, errstr, suffix); Print(buf); end: errno = oerrno; } /* * out_error -- common error output code, all error messages go through here */ static void out_error(const char *file, int line, const char *func, const char *suffix, const char *fmt, va_list ap) { int oerrno = errno; unsigned cc = 0; int ret; const char *sep = ""; char errstr[UTIL_MAX_ERR_MSG] = ""; char *errormsg = (char *)out_get_errormsg(); if (fmt) { if (*fmt == '!') { fmt++; sep = ": "; util_strerror(errno, errstr, UTIL_MAX_ERR_MSG); } ret = Vsnprintf(&errormsg[cc], MAXPRINT, fmt, ap); if (ret < 0) { strcpy(errormsg, "Vsnprintf failed"); goto end; } cc += (unsigned)ret; out_snprintf(&errormsg[cc], MAXPRINT - cc, "%s%s", sep, errstr); } #ifdef DEBUG if (Log_level >= 1) { char buf[MAXPRINT]; cc = 0; if (file) { char *f = strrchr(file, OS_DIR_SEPARATOR); if (f) file = f + 1; ret = out_snprintf(&buf[cc], MAXPRINT, "<%s>: <1> [%s:%d %s] ", Log_prefix, file, line, func); if (ret < 0) { Print("out_snprintf failed"); goto end; } cc += (unsigned)ret; if (cc < Log_alignment) { memset(buf + cc, ' ', Log_alignment - cc); cc = Log_alignment; } } out_snprintf(&buf[cc], MAXPRINT - cc, "%s%s", errormsg, suffix); Print(buf); } #endif end: errno = oerrno; } /* * out -- output a line, newline added automatically */ void out(const char *fmt, ...) { va_list ap; va_start(ap, fmt); out_common(NULL, 0, NULL, 0, "\n", fmt, ap); va_end(ap); } /* * out_nonl -- output a line, no newline added automatically */ void out_nonl(int level, const char *fmt, ...) 
{ va_list ap; if (Log_level < level) return; va_start(ap, fmt); out_common(NULL, 0, NULL, level, "", fmt, ap); va_end(ap); } /* * out_log -- output a log line if Log_level >= level */ void out_log(const char *file, int line, const char *func, int level, const char *fmt, ...) { va_list ap; if (Log_level < level) return; va_start(ap, fmt); out_common(file, line, func, level, "\n", fmt, ap); va_end(ap); } /* * out_fatal -- output a fatal error & die (i.e. assertion failure) */ void out_fatal(const char *file, int line, const char *func, const char *fmt, ...) { va_list ap; va_start(ap, fmt); out_common(file, line, func, 1, "\n", fmt, ap); va_end(ap); abort(); } /* * out_err -- output an error message */ void out_err(const char *file, int line, const char *func, const char *fmt, ...) { va_list ap; va_start(ap, fmt); out_error(file, line, func, "\n", fmt, ap); va_end(ap); } /* * out_get_errormsg -- get the last error message */ const char * out_get_errormsg(void) { const struct errormsg *errormsg = Last_errormsg_get(); return &errormsg->msg[0]; } #ifdef _WIN32 /* * out_get_errormsgW -- get the last error message in wchar_t */ const wchar_t * out_get_errormsgW(void) { struct errormsg *errormsg = Last_errormsg_get(); const char *utf8 = &errormsg->msg[0]; wchar_t *utf16 = &errormsg->wmsg[0]; if (util_toUTF16_buff(utf8, utf16, sizeof(errormsg->wmsg)) != 0) FATAL("!Failed to convert string"); return (const wchar_t *)utf16; } #endif vmem-1.8/src/common/out.h000066400000000000000000000166351361505074100154020ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * out.h -- definitions for "out" module */ #ifndef PMDK_OUT_H #define PMDK_OUT_H 1 #include #include #include #include "util.h" #ifdef __cplusplus extern "C" { #endif /* * Suppress errors which are after appropriate ASSERT* macro for nondebug * builds. 
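 *
 * In nondebug builds the ASSERT* macros expand to out_fatal_discard();
 * marking it noreturn for clang analyzer, Coverity and Klocwork lets
 * those tools treat code after a failed assertion as unreachable and
 * keeps them from reporting conditions already guarded by the macro.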
*/ #if !defined(DEBUG) && (defined(__clang_analyzer__) || defined(__COVERITY__) ||\ defined(__KLOCWORK__)) #define OUT_FATAL_DISCARD_NORETURN __attribute__((noreturn)) #else #define OUT_FATAL_DISCARD_NORETURN #endif #ifndef EVALUATE_DBG_EXPRESSIONS #if defined(DEBUG) || defined(__clang_analyzer__) || defined(__COVERITY__) ||\ defined(__KLOCWORK__) #define EVALUATE_DBG_EXPRESSIONS 1 #else #define EVALUATE_DBG_EXPRESSIONS 0 #endif #endif #ifdef DEBUG #define OUT_LOG out_log #define OUT_NONL out_nonl #define OUT_FATAL out_fatal #define OUT_FATAL_ABORT out_fatal #else static __attribute__((always_inline)) inline void out_log_discard(const char *file, int line, const char *func, int level, const char *fmt, ...) { (void) file; (void) line; (void) func; (void) level; (void) fmt; } static __attribute__((always_inline)) inline void out_nonl_discard(int level, const char *fmt, ...) { (void) level; (void) fmt; } static __attribute__((always_inline)) OUT_FATAL_DISCARD_NORETURN inline void out_fatal_discard(const char *file, int line, const char *func, const char *fmt, ...) { (void) file; (void) line; (void) func; (void) fmt; } static __attribute__((always_inline)) NORETURN inline void out_fatal_abort(const char *file, int line, const char *func, const char *fmt, ...) { (void) file; (void) line; (void) func; (void) fmt; abort(); } #define OUT_LOG out_log_discard #define OUT_NONL out_nonl_discard #define OUT_FATAL out_fatal_discard #define OUT_FATAL_ABORT out_fatal_abort #endif #if defined(__KLOCWORK__) #define TEST_ALWAYS_TRUE_EXPR(cnd) #define TEST_ALWAYS_EQ_EXPR(cnd) #define TEST_ALWAYS_NE_EXPR(cnd) #else #define TEST_ALWAYS_TRUE_EXPR(cnd)\ if (__builtin_constant_p(cnd))\ ASSERT_COMPILE_ERROR_ON(cnd); #define TEST_ALWAYS_EQ_EXPR(lhs, rhs)\ if (__builtin_constant_p(lhs) && __builtin_constant_p(rhs))\ ASSERT_COMPILE_ERROR_ON((lhs) == (rhs)); #define TEST_ALWAYS_NE_EXPR(lhs, rhs)\ if (__builtin_constant_p(lhs) && __builtin_constant_p(rhs))\ ASSERT_COMPILE_ERROR_ON((lhs) != (rhs)); #endif /* produce debug/trace output */ #define LOG(level, ...) do { \ if (!EVALUATE_DBG_EXPRESSIONS) break;\ OUT_LOG(__FILE__, __LINE__, __func__, level, __VA_ARGS__);\ } while (0) /* produce debug/trace output without prefix and new line */ #define LOG_NONL(level, ...) 
do { \ if (!EVALUATE_DBG_EXPRESSIONS) break; \ OUT_NONL(level, __VA_ARGS__); \ } while (0) /* produce output and exit */ #define FATAL(...)\ OUT_FATAL_ABORT(__FILE__, __LINE__, __func__, __VA_ARGS__) /* assert a condition is true at runtime */ #define ASSERT_rt(cnd) do { \ if (!EVALUATE_DBG_EXPRESSIONS || (cnd)) break; \ OUT_FATAL(__FILE__, __LINE__, __func__, "assertion failure: %s", #cnd);\ } while (0) /* assertion with extra info printed if assertion fails at runtime */ #define ASSERTinfo_rt(cnd, info) do { \ if (!EVALUATE_DBG_EXPRESSIONS || (cnd)) break; \ OUT_FATAL(__FILE__, __LINE__, __func__, \ "assertion failure: %s (%s = %s)", #cnd, #info, info);\ } while (0) /* assert two integer values are equal at runtime */ #define ASSERTeq_rt(lhs, rhs) do { \ if (!EVALUATE_DBG_EXPRESSIONS || ((lhs) == (rhs))) break; \ OUT_FATAL(__FILE__, __LINE__, __func__,\ "assertion failure: %s (0x%llx) == %s (0x%llx)", #lhs,\ (unsigned long long)(lhs), #rhs, (unsigned long long)(rhs)); \ } while (0) /* assert two integer values are not equal at runtime */ #define ASSERTne_rt(lhs, rhs) do { \ if (!EVALUATE_DBG_EXPRESSIONS || ((lhs) != (rhs))) break; \ OUT_FATAL(__FILE__, __LINE__, __func__,\ "assertion failure: %s (0x%llx) != %s (0x%llx)", #lhs,\ (unsigned long long)(lhs), #rhs, (unsigned long long)(rhs)); \ } while (0) /* assert a condition is true */ #define ASSERT(cnd)\ do {\ /*\ * Detect useless asserts on always true expression. Please use\ * COMPILE_ERROR_ON(!cnd) or ASSERT_rt(cnd) in such cases.\ */\ TEST_ALWAYS_TRUE_EXPR(cnd);\ ASSERT_rt(cnd);\ } while (0) /* assertion with extra info printed if assertion fails */ #define ASSERTinfo(cnd, info)\ do {\ /* See comment in ASSERT. */\ TEST_ALWAYS_TRUE_EXPR(cnd);\ ASSERTinfo_rt(cnd, info);\ } while (0) /* assert two integer values are equal */ #define ASSERTeq(lhs, rhs)\ do {\ /* See comment in ASSERT. */\ TEST_ALWAYS_EQ_EXPR(lhs, rhs);\ ASSERTeq_rt(lhs, rhs);\ } while (0) /* assert two integer values are not equal */ #define ASSERTne(lhs, rhs)\ do {\ /* See comment in ASSERT. */\ TEST_ALWAYS_NE_EXPR(lhs, rhs);\ ASSERTne_rt(lhs, rhs);\ } while (0) #define ERR(...)\ out_err(__FILE__, __LINE__, __func__, __VA_ARGS__) void out_init(const char *log_prefix, const char *log_level_var, const char *log_file_var, int major_version, int minor_version); void out_fini(void); void out(const char *fmt, ...) FORMAT_PRINTF(1, 2); void out_nonl(int level, const char *fmt, ...) FORMAT_PRINTF(2, 3); void out_log(const char *file, int line, const char *func, int level, const char *fmt, ...) FORMAT_PRINTF(5, 6); void out_err(const char *file, int line, const char *func, const char *fmt, ...) FORMAT_PRINTF(4, 5); void NORETURN out_fatal(const char *file, int line, const char *func, const char *fmt, ...) 
FORMAT_PRINTF(4, 5); void out_set_print_func(void (*print_func)(const char *s)); void out_set_vsnprintf_func(int (*vsnprintf_func)(char *str, size_t size, const char *format, va_list ap)); #ifdef _WIN32 #ifndef PMDK_UTF8_API #define out_get_errormsg out_get_errormsgW #else #define out_get_errormsg out_get_errormsgU #endif #endif #ifndef _WIN32 const char *out_get_errormsg(void); #else const char *out_get_errormsgU(void); const wchar_t *out_get_errormsgW(void); #endif #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/pmemcommon.h000066400000000000000000000042061361505074100167310ustar00rootroot00000000000000/* * Copyright 2016-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * pmemcommon.h -- definitions for "common" module */ #ifndef PMEMCOMMON_H #define PMEMCOMMON_H 1 #include "util.h" #include "out.h" #include "mmap.h" #ifdef __cplusplus extern "C" { #endif static inline void common_init(const char *log_prefix, const char *log_level_var, const char *log_file_var, int major_version, int minor_version) { util_init(); out_init(log_prefix, log_level_var, log_file_var, major_version, minor_version); util_mmap_init(); } static inline void common_fini(void) { util_mmap_fini(); out_fini(); } #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/pmemcommon.inc000066400000000000000000000035461361505074100172610ustar00rootroot00000000000000# Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/pmemcommon.inc -- common SOURCE definitions for PMDK libraries # SOURCE =\ $(COMMON)/alloc.c\ $(COMMON)/file.c\ $(COMMON)/file_posix.c\ $(COMMON)/mmap.c\ $(COMMON)/mmap_posix.c\ $(COMMON)/os_posix.c\ $(COMMON)/os_thread_posix.c\ $(COMMON)/out.c\ $(COMMON)/pool_hdr.c\ $(COMMON)/util.c\ $(COMMON)/util_posix.c vmem-1.8/src/common/pmemcompat.h000066400000000000000000000046531361505074100167320ustar00rootroot00000000000000/* * Copyright 2016-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * pmemcompat.h -- compatibility layer for libpmem* libraries */ #ifndef PMEMCOMPAT_H #define PMEMCOMPAT_H #include struct iovec { void *iov_base; size_t iov_len; }; typedef int mode_t; /* * XXX: this code will not work on windows if our library is included in * an extern block. 
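 * The workaround below declares C++ templates in namespace pmem::detail
 * to emulate __typeof__ for MSVC, and templates cannot be declared
 * inside a block with C linkage.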
*/ #if defined(__cplusplus) && defined(_MSC_VER) && !defined(__typeof__) #include /* * These templates are used to remove a type reference(T&) which, in some * cases, is returned by decltype */ namespace pmem { namespace detail { template struct get_type { using type = T; }; template struct get_type { using type = T*; }; template struct get_type { using type = T; }; } /* namespace detail */ } /* namespace pmem */ #define __typeof__(p) pmem::detail::get_type::type #endif #endif vmem-1.8/src/common/pool_hdr.c000066400000000000000000000051171361505074100163650ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * pool_hdr.c -- pool header utilities */ #include #include #include #include #include "out.h" #include "pool_hdr.h" /* * util_convert2le_hdr -- convert pool_hdr into little-endian byte order */ void util_convert2le_hdr(struct pool_hdr *hdrp) { hdrp->major = htole32(hdrp->major); hdrp->features.compat = htole32(hdrp->features.compat); hdrp->features.incompat = htole32(hdrp->features.incompat); hdrp->features.ro_compat = htole32(hdrp->features.ro_compat); hdrp->crtime = htole64(hdrp->crtime); hdrp->checksum = htole64(hdrp->checksum); } /* * util_convert2h_hdr_nocheck -- convert pool_hdr into host byte order */ void util_convert2h_hdr_nocheck(struct pool_hdr *hdrp) { hdrp->major = le32toh(hdrp->major); hdrp->features.compat = le32toh(hdrp->features.compat); hdrp->features.incompat = le32toh(hdrp->features.incompat); hdrp->features.ro_compat = le32toh(hdrp->features.ro_compat); hdrp->crtime = le64toh(hdrp->crtime); hdrp->checksum = le64toh(hdrp->checksum); } vmem-1.8/src/common/pool_hdr.h000066400000000000000000000111001361505074100163570ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * pool_hdr.h -- internal definitions for pool header module */ #ifndef PMDK_POOL_HDR_H #define PMDK_POOL_HDR_H 1 #include #include #include #include "uuid.h" #ifdef __cplusplus extern "C" { #endif /* possible values of the machine class field in the above struct */ #define PMDK_MACHINE_CLASS_64 2 /* 64 bit pointers, 64 bit size_t */ /* possible values of the machine field in the above struct */ #define PMDK_MACHINE_X86_64 62 #define PMDK_MACHINE_AARCH64 183 /* possible values of the data field in the above struct */ #define PMDK_DATA_LE 1 /* 2's complement, little endian */ #define PMDK_DATA_BE 2 /* 2's complement, big endian */ /* * features flags */ typedef struct { uint32_t compat; /* mask: compatible "may" features */ uint32_t incompat; /* mask: "must support" features */ uint32_t ro_compat; /* mask: force RO if unsupported */ } features_t; /* * header used at the beginning of all types of memory pools * * for pools build on persistent memory, the integer types * below are stored in little-endian byte order. 
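 *
 * A reader is expected to copy the on-media header into a local
 * struct pool_hdr, run util_convert2h_hdr_nocheck() on it, and only
 * then inspect fields such as major or features in host byte order;
 * the write path does the reverse with util_convert2le_hdr().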
*/ #define POOL_HDR_SIG_LEN 8 struct pool_hdr { char signature[POOL_HDR_SIG_LEN]; uint32_t major; /* format major version number */ features_t features; /* features flags */ uuid_t poolset_uuid; /* pool set UUID */ uuid_t uuid; /* UUID of this file */ uuid_t prev_part_uuid; /* prev part */ uuid_t next_part_uuid; /* next part */ uuid_t prev_repl_uuid; /* prev replica */ uuid_t next_repl_uuid; /* next replica */ uint64_t crtime; /* when created (seconds since epoch) */ unsigned char unused[1920]; /* must be zero */ /* not checksumed */ unsigned char unused2[1976]; /* must be zero */ uint64_t padding[8]; /* !shutdown status */ uint64_t checksum; /* checksum of above fields */ }; #define POOL_HDR_SIZE (sizeof(struct pool_hdr)) #define POOL_DESC_SIZE 4096 void util_convert2le_hdr(struct pool_hdr *hdrp); void util_convert2h_hdr_nocheck(struct pool_hdr *hdrp); /* * set of macros for determining the alignment descriptor */ #define DESC_MASK ((1 << ALIGNMENT_DESC_BITS) - 1) #define alignment_of(t) offsetof(struct { char c; t x; }, x) #define alignment_desc_of(t) (((uint64_t)alignment_of(t) - 1) & DESC_MASK) #define alignment_desc()\ (alignment_desc_of(char) << 0 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(short) << 1 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(int) << 2 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(long) << 3 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(long long) << 4 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(size_t) << 5 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(off_t) << 6 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(float) << 7 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(double) << 8 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(long double) << 9 * ALIGNMENT_DESC_BITS) |\ (alignment_desc_of(void *) << 10 * ALIGNMENT_DESC_BITS) #define POOL_FEAT_ZERO 0x0000U static const features_t features_zero = {POOL_FEAT_ZERO, POOL_FEAT_ZERO, POOL_FEAT_ZERO}; #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/queue.h000066400000000000000000000532251361505074100157130ustar00rootroot00000000000000/* * Source: glibc 2.24 (git://sourceware.org/glibc.git /misc/sys/queue.h) * * Copyright (c) 1991, 1993 * The Regents of the University of California. All rights reserved. * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the University nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * @(#)queue.h 8.5 (Berkeley) 8/20/94 */ #ifndef _PMDK_QUEUE_H_ #define _PMDK_QUEUE_H_ /* * This file defines five types of data structures: singly-linked lists, * lists, simple queues, tail queues, and circular queues. * * A singly-linked list is headed by a single forward pointer. The * elements are singly linked for minimum space and pointer manipulation * overhead at the expense of O(n) removal for arbitrary elements. New * elements can be added to the list after an existing element or at the * head of the list. Elements being removed from the head of the list * should use the explicit macro for this purpose for optimum * efficiency. A singly-linked list may only be traversed in the forward * direction. Singly-linked lists are ideal for applications with large * datasets and few or no removals or for implementing a LIFO queue. * * A list is headed by a single forward pointer (or an array of forward * pointers for a hash table header). The elements are doubly linked * so that an arbitrary element can be removed without a need to * traverse the list. New elements can be added to the list before * or after an existing element or at the head of the list. A list * may only be traversed in the forward direction. * * A simple queue is headed by a pair of pointers, one the head of the * list and the other to the tail of the list. The elements are singly * linked to save space, so elements can only be removed from the * head of the list. New elements can be added to the list after * an existing element, at the head of the list, or at the end of the * list. A simple queue may only be traversed in the forward direction. * * A tail queue is headed by a pair of pointers, one to the head of the * list and the other to the tail of the list. The elements are doubly * linked so that an arbitrary element can be removed without a need to * traverse the list. New elements can be added to the list before or * after an existing element, at the head of the list, or at the end of * the list. A tail queue may be traversed in either direction. * * A circle queue is headed by a pair of pointers, one to the head of the * list and the other to the tail of the list. The elements are doubly * linked so that an arbitrary element can be removed without a need to * traverse the list. New elements can be added to the list before or after * an existing element, at the head of the list, or at the end of the list. * A circle queue may be traversed in either direction, but has a more * complex end of list detection. * * For details on the use of these macros, see the queue(3) manual page. */ /* * XXX This is a workaround for a bug in the llvm's static analyzer. For more * info see https://github.com/pmem/issues/issues/309. */ #ifdef __clang_analyzer__ static void custom_assert(void) { abort(); } #define ANALYZER_ASSERT(x) (__builtin_expect(!(x), 0) ? (void)0 : custom_assert()) #else #define ANALYZER_ASSERT(x) do {} while (0) #endif /* * List definitions. 
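 *
 * A minimal usage sketch (illustrative only, not taken from this tree;
 * some_item and use_item() are placeholders):
 *
 *	struct item { int v; PMDK_LIST_ENTRY(item) entry; };
 *	PMDK_LIST_HEAD(itemhead, item) head =
 *		PMDK_LIST_HEAD_INITIALIZER(head);
 *	struct item *it;
 *
 *	PMDK_LIST_INSERT_HEAD(&head, &some_item, entry);
 *	PMDK_LIST_FOREACH(it, &head, entry)
 *		use_item(it);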
*/ #define PMDK_LIST_HEAD(name, type) \ struct name { \ struct type *lh_first; /* first element */ \ } #define PMDK_LIST_HEAD_INITIALIZER(head) \ { NULL } #ifdef __cplusplus #define PMDK__CAST_AND_ASSIGN(x, y) x = (__typeof__(x))y; #else #define PMDK__CAST_AND_ASSIGN(x, y) x = (void *)(y); #endif #define PMDK_LIST_ENTRY(type) \ struct { \ struct type *le_next; /* next element */ \ struct type **le_prev; /* address of previous next element */ \ } /* * List functions. */ #define PMDK_LIST_INIT(head) do { \ (head)->lh_first = NULL; \ } while (/*CONSTCOND*/0) #define PMDK_LIST_INSERT_AFTER(listelm, elm, field) do { \ if (((elm)->field.le_next = (listelm)->field.le_next) != NULL) \ (listelm)->field.le_next->field.le_prev = \ &(elm)->field.le_next; \ (listelm)->field.le_next = (elm); \ (elm)->field.le_prev = &(listelm)->field.le_next; \ } while (/*CONSTCOND*/0) #define PMDK_LIST_INSERT_BEFORE(listelm, elm, field) do { \ (elm)->field.le_prev = (listelm)->field.le_prev; \ (elm)->field.le_next = (listelm); \ *(listelm)->field.le_prev = (elm); \ (listelm)->field.le_prev = &(elm)->field.le_next; \ } while (/*CONSTCOND*/0) #define PMDK_LIST_INSERT_HEAD(head, elm, field) do { \ if (((elm)->field.le_next = (head)->lh_first) != NULL) \ (head)->lh_first->field.le_prev = &(elm)->field.le_next;\ (head)->lh_first = (elm); \ (elm)->field.le_prev = &(head)->lh_first; \ } while (/*CONSTCOND*/0) #define PMDK_LIST_REMOVE(elm, field) do { \ ANALYZER_ASSERT((elm) != NULL); \ if ((elm)->field.le_next != NULL) \ (elm)->field.le_next->field.le_prev = \ (elm)->field.le_prev; \ *(elm)->field.le_prev = (elm)->field.le_next; \ } while (/*CONSTCOND*/0) #define PMDK_LIST_FOREACH(var, head, field) \ for ((var) = ((head)->lh_first); \ (var); \ (var) = ((var)->field.le_next)) /* * List access methods. */ #define PMDK_LIST_EMPTY(head) ((head)->lh_first == NULL) #define PMDK_LIST_FIRST(head) ((head)->lh_first) #define PMDK_LIST_NEXT(elm, field) ((elm)->field.le_next) /* * Singly-linked List definitions. */ #define PMDK_SLIST_HEAD(name, type) \ struct name { \ struct type *slh_first; /* first element */ \ } #define PMDK_SLIST_HEAD_INITIALIZER(head) \ { NULL } #define PMDK_SLIST_ENTRY(type) \ struct { \ struct type *sle_next; /* next element */ \ } /* * Singly-linked List functions. */ #define PMDK_SLIST_INIT(head) do { \ (head)->slh_first = NULL; \ } while (/*CONSTCOND*/0) #define PMDK_SLIST_INSERT_AFTER(slistelm, elm, field) do { \ (elm)->field.sle_next = (slistelm)->field.sle_next; \ (slistelm)->field.sle_next = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_SLIST_INSERT_HEAD(head, elm, field) do { \ (elm)->field.sle_next = (head)->slh_first; \ (head)->slh_first = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_SLIST_REMOVE_HEAD(head, field) do { \ (head)->slh_first = (head)->slh_first->field.sle_next; \ } while (/*CONSTCOND*/0) #define PMDK_SLIST_REMOVE(head, elm, type, field) do { \ if ((head)->slh_first == (elm)) { \ PMDK_SLIST_REMOVE_HEAD((head), field); \ } \ else { \ struct type *curelm = (head)->slh_first; \ while(curelm->field.sle_next != (elm)) \ curelm = curelm->field.sle_next; \ curelm->field.sle_next = \ curelm->field.sle_next->field.sle_next; \ } \ } while (/*CONSTCOND*/0) #define PMDK_SLIST_FOREACH(var, head, field) \ for((var) = (head)->slh_first; (var); (var) = (var)->field.sle_next) /* * Singly-linked List access methods. 
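 *
 * As noted above, a singly-linked list makes a natural LIFO; a hedged
 * sketch of draining one with the access methods below (names are
 * hypothetical):
 *
 *	while (!PMDK_SLIST_EMPTY(&head)) {
 *		struct node *n = PMDK_SLIST_FIRST(&head);
 *		PMDK_SLIST_REMOVE_HEAD(&head, link);
 *		consume(n);
 *	}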
*/ #define PMDK_SLIST_EMPTY(head) ((head)->slh_first == NULL) #define PMDK_SLIST_FIRST(head) ((head)->slh_first) #define PMDK_SLIST_NEXT(elm, field) ((elm)->field.sle_next) /* * Singly-linked Tail queue declarations. */ #define PMDK_STAILQ_HEAD(name, type) \ struct name { \ struct type *stqh_first; /* first element */ \ struct type **stqh_last; /* addr of last next element */ \ } #define PMDK_STAILQ_HEAD_INITIALIZER(head) \ { NULL, &(head).stqh_first } #define PMDK_STAILQ_ENTRY(type) \ struct { \ struct type *stqe_next; /* next element */ \ } /* * Singly-linked Tail queue functions. */ #define PMDK_STAILQ_INIT(head) do { \ (head)->stqh_first = NULL; \ (head)->stqh_last = &(head)->stqh_first; \ } while (/*CONSTCOND*/0) #define PMDK_STAILQ_INSERT_HEAD(head, elm, field) do { \ if (((elm)->field.stqe_next = (head)->stqh_first) == NULL) \ (head)->stqh_last = &(elm)->field.stqe_next; \ (head)->stqh_first = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_STAILQ_INSERT_TAIL(head, elm, field) do { \ (elm)->field.stqe_next = NULL; \ *(head)->stqh_last = (elm); \ (head)->stqh_last = &(elm)->field.stqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_STAILQ_INSERT_AFTER(head, listelm, elm, field) do { \ if (((elm)->field.stqe_next = (listelm)->field.stqe_next) == NULL)\ (head)->stqh_last = &(elm)->field.stqe_next; \ (listelm)->field.stqe_next = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_STAILQ_REMOVE_HEAD(head, field) do { \ if (((head)->stqh_first = (head)->stqh_first->field.stqe_next) == NULL) \ (head)->stqh_last = &(head)->stqh_first; \ } while (/*CONSTCOND*/0) #define PMDK_STAILQ_REMOVE(head, elm, type, field) do { \ if ((head)->stqh_first == (elm)) { \ PMDK_STAILQ_REMOVE_HEAD((head), field); \ } else { \ struct type *curelm = (head)->stqh_first; \ while (curelm->field.stqe_next != (elm)) \ curelm = curelm->field.stqe_next; \ if ((curelm->field.stqe_next = \ curelm->field.stqe_next->field.stqe_next) == NULL) \ (head)->stqh_last = &(curelm)->field.stqe_next; \ } \ } while (/*CONSTCOND*/0) #define PMDK_STAILQ_FOREACH(var, head, field) \ for ((var) = ((head)->stqh_first); \ (var); \ (var) = ((var)->field.stqe_next)) #define PMDK_STAILQ_CONCAT(head1, head2) do { \ if (!PMDK_STAILQ_EMPTY((head2))) { \ *(head1)->stqh_last = (head2)->stqh_first; \ (head1)->stqh_last = (head2)->stqh_last; \ PMDK_STAILQ_INIT((head2)); \ } \ } while (/*CONSTCOND*/0) /* * Singly-linked Tail queue access methods. */ #define PMDK_STAILQ_EMPTY(head) ((head)->stqh_first == NULL) #define PMDK_STAILQ_FIRST(head) ((head)->stqh_first) #define PMDK_STAILQ_NEXT(elm, field) ((elm)->field.stqe_next) /* * Simple queue definitions. */ #define PMDK_SIMPLEQ_HEAD(name, type) \ struct name { \ struct type *sqh_first; /* first element */ \ struct type **sqh_last; /* addr of last next element */ \ } #define PMDK_SIMPLEQ_HEAD_INITIALIZER(head) \ { NULL, &(head).sqh_first } #define PMDK_SIMPLEQ_ENTRY(type) \ struct { \ struct type *sqe_next; /* next element */ \ } /* * Simple queue functions. 
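 *
 * A simple queue is a natural FIFO: producers append at the tail and
 * consumers take from the head.  An illustrative sketch (hypothetical
 * names):
 *
 *	PMDK_SIMPLEQ_INSERT_TAIL(&head, &item, link);
 *	...
 *	while (!PMDK_SIMPLEQ_EMPTY(&head)) {
 *		struct node *next = PMDK_SIMPLEQ_FIRST(&head);
 *		PMDK_SIMPLEQ_REMOVE_HEAD(&head, link);
 *		consume(next);
 *	}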
*/ #define PMDK_SIMPLEQ_INIT(head) do { \ (head)->sqh_first = NULL; \ (head)->sqh_last = &(head)->sqh_first; \ } while (/*CONSTCOND*/0) #define PMDK_SIMPLEQ_INSERT_HEAD(head, elm, field) do { \ if (((elm)->field.sqe_next = (head)->sqh_first) == NULL) \ (head)->sqh_last = &(elm)->field.sqe_next; \ (head)->sqh_first = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_SIMPLEQ_INSERT_TAIL(head, elm, field) do { \ (elm)->field.sqe_next = NULL; \ *(head)->sqh_last = (elm); \ (head)->sqh_last = &(elm)->field.sqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_SIMPLEQ_INSERT_AFTER(head, listelm, elm, field) do { \ if (((elm)->field.sqe_next = (listelm)->field.sqe_next) == NULL)\ (head)->sqh_last = &(elm)->field.sqe_next; \ (listelm)->field.sqe_next = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_SIMPLEQ_REMOVE_HEAD(head, field) do { \ if (((head)->sqh_first = (head)->sqh_first->field.sqe_next) == NULL) \ (head)->sqh_last = &(head)->sqh_first; \ } while (/*CONSTCOND*/0) #define PMDK_SIMPLEQ_REMOVE(head, elm, type, field) do { \ if ((head)->sqh_first == (elm)) { \ PMDK_SIMPLEQ_REMOVE_HEAD((head), field); \ } else { \ struct type *curelm = (head)->sqh_first; \ while (curelm->field.sqe_next != (elm)) \ curelm = curelm->field.sqe_next; \ if ((curelm->field.sqe_next = \ curelm->field.sqe_next->field.sqe_next) == NULL) \ (head)->sqh_last = &(curelm)->field.sqe_next; \ } \ } while (/*CONSTCOND*/0) #define PMDK_SIMPLEQ_FOREACH(var, head, field) \ for ((var) = ((head)->sqh_first); \ (var); \ (var) = ((var)->field.sqe_next)) /* * Simple queue access methods. */ #define PMDK_SIMPLEQ_EMPTY(head) ((head)->sqh_first == NULL) #define PMDK_SIMPLEQ_FIRST(head) ((head)->sqh_first) #define PMDK_SIMPLEQ_NEXT(elm, field) ((elm)->field.sqe_next) /* * Tail queue definitions. */ #define PMDK__TAILQ_HEAD(name, type, qual) \ struct name { \ qual type *tqh_first; /* first element */ \ qual type *qual *tqh_last; /* addr of last next element */ \ } #define PMDK_TAILQ_HEAD(name, type) PMDK__TAILQ_HEAD(name, struct type,) #define PMDK_TAILQ_HEAD_INITIALIZER(head) \ { NULL, &(head).tqh_first } #define PMDK__TAILQ_ENTRY(type, qual) \ struct { \ qual type *tqe_next; /* next element */ \ qual type *qual *tqe_prev; /* address of previous next element */\ } #define PMDK_TAILQ_ENTRY(type) PMDK__TAILQ_ENTRY(struct type,) /* * Tail queue functions. 
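 *
 * Because both ends are reachable and elements are doubly linked, a tail
 * queue can be appended to and walked in either direction.  A brief
 * sketch (hypothetical names):
 *
 *	PMDK_TAILQ_HEAD(qhead, node) head = PMDK_TAILQ_HEAD_INITIALIZER(head);
 *	struct node *it;
 *
 *	PMDK_TAILQ_INSERT_TAIL(&head, &n, link);
 *	PMDK_TAILQ_FOREACH_REVERSE(it, &head, qhead, link)
 *		visit(it);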
*/ #define PMDK_TAILQ_INIT(head) do { \ (head)->tqh_first = NULL; \ (head)->tqh_last = &(head)->tqh_first; \ } while (/*CONSTCOND*/0) #define PMDK_TAILQ_INSERT_HEAD(head, elm, field) do { \ if (((elm)->field.tqe_next = (head)->tqh_first) != NULL) \ (head)->tqh_first->field.tqe_prev = \ &(elm)->field.tqe_next; \ else \ (head)->tqh_last = &(elm)->field.tqe_next; \ (head)->tqh_first = (elm); \ (elm)->field.tqe_prev = &(head)->tqh_first; \ } while (/*CONSTCOND*/0) #define PMDK_TAILQ_INSERT_TAIL(head, elm, field) do { \ (elm)->field.tqe_next = NULL; \ (elm)->field.tqe_prev = (head)->tqh_last; \ *(head)->tqh_last = (elm); \ (head)->tqh_last = &(elm)->field.tqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_TAILQ_INSERT_AFTER(head, listelm, elm, field) do { \ if (((elm)->field.tqe_next = (listelm)->field.tqe_next) != NULL)\ (elm)->field.tqe_next->field.tqe_prev = \ &(elm)->field.tqe_next; \ else \ (head)->tqh_last = &(elm)->field.tqe_next; \ (listelm)->field.tqe_next = (elm); \ (elm)->field.tqe_prev = &(listelm)->field.tqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_TAILQ_INSERT_BEFORE(listelm, elm, field) do { \ (elm)->field.tqe_prev = (listelm)->field.tqe_prev; \ (elm)->field.tqe_next = (listelm); \ *(listelm)->field.tqe_prev = (elm); \ (listelm)->field.tqe_prev = &(elm)->field.tqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_TAILQ_REMOVE(head, elm, field) do { \ ANALYZER_ASSERT((elm) != NULL); \ if (((elm)->field.tqe_next) != NULL) \ (elm)->field.tqe_next->field.tqe_prev = \ (elm)->field.tqe_prev; \ else \ (head)->tqh_last = (elm)->field.tqe_prev; \ *(elm)->field.tqe_prev = (elm)->field.tqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_TAILQ_FOREACH(var, head, field) \ for ((var) = ((head)->tqh_first); \ (var); \ (var) = ((var)->field.tqe_next)) #define PMDK_TAILQ_FOREACH_REVERSE(var, head, headname, field) \ for ((var) = (*(((struct headname *)((head)->tqh_last))->tqh_last)); \ (var); \ (var) = (*(((struct headname *)((var)->field.tqe_prev))->tqh_last))) #define PMDK_TAILQ_CONCAT(head1, head2, field) do { \ if (!PMDK_TAILQ_EMPTY(head2)) { \ *(head1)->tqh_last = (head2)->tqh_first; \ (head2)->tqh_first->field.tqe_prev = (head1)->tqh_last; \ (head1)->tqh_last = (head2)->tqh_last; \ PMDK_TAILQ_INIT((head2)); \ } \ } while (/*CONSTCOND*/0) /* * Tail queue access methods. */ #define PMDK_TAILQ_EMPTY(head) ((head)->tqh_first == NULL) #define PMDK_TAILQ_FIRST(head) ((head)->tqh_first) #define PMDK_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next) #define PMDK_TAILQ_LAST(head, headname) \ (*(((struct headname *)((head)->tqh_last))->tqh_last)) #define PMDK_TAILQ_PREV(elm, headname, field) \ (*(((struct headname *)((elm)->field.tqe_prev))->tqh_last)) /* * Circular queue definitions. */ #define PMDK_CIRCLEQ_HEAD(name, type) \ struct name { \ struct type *cqh_first; /* first element */ \ struct type *cqh_last; /* last element */ \ } #define PMDK_CIRCLEQ_HEAD_INITIALIZER(head) \ { (void *)&(head), (void *)&(head) } #define PMDK_CIRCLEQ_ENTRY(type) \ struct { \ struct type *cqe_next; /* next element */ \ struct type *cqe_prev; /* previous element */ \ } /* * Circular queue functions. 
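 *
 * In a circle queue the links never become NULL -- traversal terminates
 * when the iterator wraps back to the head itself, and the LOOP_NEXT/
 * LOOP_PREV access methods below can be used for round-robin stepping.
 * An illustrative walk (hypothetical names):
 *
 *	PMDK_CIRCLEQ_FOREACH(it, &head, link)
 *		visit(it);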
*/ #define PMDK_CIRCLEQ_INIT(head) do { \ PMDK__CAST_AND_ASSIGN((head)->cqh_first, (head)); \ PMDK__CAST_AND_ASSIGN((head)->cqh_last, (head)); \ } while (/*CONSTCOND*/0) #define PMDK_CIRCLEQ_INSERT_AFTER(head, listelm, elm, field) do { \ (elm)->field.cqe_next = (listelm)->field.cqe_next; \ (elm)->field.cqe_prev = (listelm); \ if ((listelm)->field.cqe_next == (void *)(head)) \ (head)->cqh_last = (elm); \ else \ (listelm)->field.cqe_next->field.cqe_prev = (elm); \ (listelm)->field.cqe_next = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_CIRCLEQ_INSERT_BEFORE(head, listelm, elm, field) do { \ (elm)->field.cqe_next = (listelm); \ (elm)->field.cqe_prev = (listelm)->field.cqe_prev; \ if ((listelm)->field.cqe_prev == (void *)(head)) \ (head)->cqh_first = (elm); \ else \ (listelm)->field.cqe_prev->field.cqe_next = (elm); \ (listelm)->field.cqe_prev = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_CIRCLEQ_INSERT_HEAD(head, elm, field) do { \ (elm)->field.cqe_next = (head)->cqh_first; \ (elm)->field.cqe_prev = (void *)(head); \ if ((head)->cqh_last == (void *)(head)) \ (head)->cqh_last = (elm); \ else \ (head)->cqh_first->field.cqe_prev = (elm); \ (head)->cqh_first = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_CIRCLEQ_INSERT_TAIL(head, elm, field) do { \ PMDK__CAST_AND_ASSIGN((elm)->field.cqe_next, (head)); \ (elm)->field.cqe_prev = (head)->cqh_last; \ if ((head)->cqh_first == (void *)(head)) \ (head)->cqh_first = (elm); \ else \ (head)->cqh_last->field.cqe_next = (elm); \ (head)->cqh_last = (elm); \ } while (/*CONSTCOND*/0) #define PMDK_CIRCLEQ_REMOVE(head, elm, field) do { \ if ((elm)->field.cqe_next == (void *)(head)) \ (head)->cqh_last = (elm)->field.cqe_prev; \ else \ (elm)->field.cqe_next->field.cqe_prev = \ (elm)->field.cqe_prev; \ if ((elm)->field.cqe_prev == (void *)(head)) \ (head)->cqh_first = (elm)->field.cqe_next; \ else \ (elm)->field.cqe_prev->field.cqe_next = \ (elm)->field.cqe_next; \ } while (/*CONSTCOND*/0) #define PMDK_CIRCLEQ_FOREACH(var, head, field) \ for ((var) = ((head)->cqh_first); \ (var) != (const void *)(head); \ (var) = ((var)->field.cqe_next)) #define PMDK_CIRCLEQ_FOREACH_REVERSE(var, head, field) \ for ((var) = ((head)->cqh_last); \ (var) != (const void *)(head); \ (var) = ((var)->field.cqe_prev)) /* * Circular queue access methods. */ #define PMDK_CIRCLEQ_EMPTY(head) ((head)->cqh_first == (void *)(head)) #define PMDK_CIRCLEQ_FIRST(head) ((head)->cqh_first) #define PMDK_CIRCLEQ_LAST(head) ((head)->cqh_last) #define PMDK_CIRCLEQ_NEXT(elm, field) ((elm)->field.cqe_next) #define PMDK_CIRCLEQ_PREV(elm, field) ((elm)->field.cqe_prev) #define PMDK_CIRCLEQ_LOOP_NEXT(head, elm, field) \ (((elm)->field.cqe_next == (void *)(head)) \ ? ((head)->cqh_first) \ : ((elm)->field.cqe_next)) #define PMDK_CIRCLEQ_LOOP_PREV(head, elm, field) \ (((elm)->field.cqe_prev == (void *)(head)) \ ? ((head)->cqh_last) \ : ((elm)->field.cqe_prev)) /* * Sorted queue functions. 
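 *
 * PMDK_SORTEDQ_INSERT() keeps elements ordered by a caller-supplied
 * comparer with the usual strcmp()-like contract (negative when the
 * first argument sorts earlier).  A hedged sketch (hypothetical names):
 *
 *	static int
 *	node_cmp(struct node *a, struct node *b)
 *	{
 *		return (a->key > b->key) - (a->key < b->key);
 *	}
 *
 *	PMDK_SORTEDQ_INSERT(&head, &n, link, struct node, node_cmp);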
*/ #define PMDK_SORTEDQ_HEAD(name, type) PMDK_CIRCLEQ_HEAD(name, type) #define PMDK_SORTEDQ_HEAD_INITIALIZER(head) PMDK_CIRCLEQ_HEAD_INITIALIZER(head) #define PMDK_SORTEDQ_ENTRY(type) PMDK_CIRCLEQ_ENTRY(type) #define PMDK_SORTEDQ_INIT(head) PMDK_CIRCLEQ_INIT(head) #define PMDK_SORTEDQ_INSERT(head, elm, field, type, comparer) { \ type *_elm_it; \ for (_elm_it = (head)->cqh_first; \ ((_elm_it != (void *)(head)) && \ (comparer(_elm_it, (elm)) < 0)); \ _elm_it = _elm_it->field.cqe_next) \ /*NOTHING*/; \ if (_elm_it == (void *)(head)) \ PMDK_CIRCLEQ_INSERT_TAIL(head, elm, field); \ else \ PMDK_CIRCLEQ_INSERT_BEFORE(head, _elm_it, elm, field); \ } #define PMDK_SORTEDQ_REMOVE(head, elm, field) PMDK_CIRCLEQ_REMOVE(head, elm, field) #define PMDK_SORTEDQ_FOREACH(var, head, field) PMDK_CIRCLEQ_FOREACH(var, head, field) #define PMDK_SORTEDQ_FOREACH_REVERSE(var, head, field) \ PMDK_CIRCLEQ_FOREACH_REVERSE(var, head, field) /* * Sorted queue access methods. */ #define PMDK_SORTEDQ_EMPTY(head) PMDK_CIRCLEQ_EMPTY(head) #define PMDK_SORTEDQ_FIRST(head) PMDK_CIRCLEQ_FIRST(head) #define PMDK_SORTEDQ_LAST(head) PMDK_CIRCLEQ_LAST(head) #define PMDK_SORTEDQ_NEXT(elm, field) PMDK_CIRCLEQ_NEXT(elm, field) #define PMDK_SORTEDQ_PREV(elm, field) PMDK_CIRCLEQ_PREV(elm, field) #endif /* sys/queue.h */ vmem-1.8/src/common/rand.c000066400000000000000000000073021361505074100155010ustar00rootroot00000000000000/* * Copyright 2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * rand.c -- random utils */ #include #include #include #include #include #include "rand.h" #ifdef _WIN32 #include #include #else #include #endif /* * hash64 -- a u64 -> u64 hash */ uint64_t hash64(uint64_t x) { x += 0x9e3779b97f4a7c15; x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9; x = (x ^ (x >> 27)) * 0x94d049bb133111eb; return x ^ (x >> 31); } /* * xoshiro256** random generator * * Fastest available good PRNG as of 2018 (sub-nanosecond per entry), produces * much better output than old stuff like rand() or Mersenne's Twister. * * By David Blackman and Sebastiano Vigna; PD/CC0 2018. 
* * It has a period of 2²⁵⁶-1, excluding all-zero state; it must always get * initialized to avoid that zero. */ static inline uint64_t rotl(const uint64_t x, int k) { /* optimized to a single instruction on x86 */ return (x << k) | (x >> (64 - k)); } /* * rnd64_r -- return 64-bits of randomness */ uint64_t rnd64_r(rng_t *state) { uint64_t *s = (void *)state; const uint64_t result = rotl(s[1] * 5, 7) * 9; const uint64_t t = s[1] << 17; s[2] ^= s[0]; s[3] ^= s[1]; s[1] ^= s[2]; s[0] ^= s[3]; s[2] ^= t; s[3] = rotl(s[3], 45); return result; } /* * randomize_r -- initialize random generator * * Seed of 0 means random. */ void randomize_r(rng_t *state, uint64_t seed) { if (!seed) { #ifdef SYS_getrandom /* We want getentropy() but ancient Red Hat lacks it. */ if (!syscall(SYS_getrandom, state, sizeof(rng_t), 0)) return; /* nofail, but ENOSYS on kernel < 3.16 */ #elif _WIN32 #pragma comment(lib, "Bcrypt.lib") if (BCryptGenRandom(NULL, (PUCHAR)state, sizeof(rng_t), BCRYPT_USE_SYSTEM_PREFERRED_RNG)) { return; } #endif seed = (uint64_t)getpid(); } uint64_t *s = (void *)state; s[0] = hash64(seed); s[1] = hash64(s[0]); s[2] = hash64(s[1]); s[3] = hash64(s[2]); } static rng_t global_rng; /* * rnd64 -- global state version of rnd64_t */ uint64_t rnd64(void) { return rnd64_r(&global_rng); } /* * randomize -- initialize global RNG */ void randomize(uint64_t seed) { randomize_r(&global_rng, seed); } vmem-1.8/src/common/rand.h000066400000000000000000000035261361505074100155120ustar00rootroot00000000000000/* * Copyright 2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * rand.h -- random utils */ #ifndef RAND_H #define RAND_H 1 #include typedef uint64_t rng_t[4]; uint64_t hash64(uint64_t x); uint64_t rnd64_r(rng_t *rng); void randomize_r(rng_t *rng, uint64_t seed); uint64_t rnd64(void); void randomize(uint64_t seed); #endif vmem-1.8/src/common/sys_util.h000066400000000000000000000167511361505074100164450ustar00rootroot00000000000000/* * Copyright 2016-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * sys_util.h -- internal utility wrappers around system functions */ #ifndef PMDK_SYS_UTIL_H #define PMDK_SYS_UTIL_H 1 #include #include "os_thread.h" #include "out.h" #ifdef __cplusplus extern "C" { #endif /* * util_mutex_init -- os_mutex_init variant that never fails from * caller perspective. If os_mutex_init failed, this function aborts * the program. */ static inline void util_mutex_init(os_mutex_t *m) { int tmp = os_mutex_init(m); if (tmp) { errno = tmp; FATAL("!os_mutex_init"); } } /* * util_mutex_destroy -- os_mutex_destroy variant that never fails from * caller perspective. If os_mutex_destroy failed, this function aborts * the program. */ static inline void util_mutex_destroy(os_mutex_t *m) { int tmp = os_mutex_destroy(m); if (tmp) { errno = tmp; FATAL("!os_mutex_destroy"); } } /* * util_mutex_lock -- os_mutex_lock variant that never fails from * caller perspective. If os_mutex_lock failed, this function aborts * the program. */ static inline void util_mutex_lock(os_mutex_t *m) { int tmp = os_mutex_lock(m); if (tmp) { errno = tmp; FATAL("!os_mutex_lock"); } } /* * util_mutex_trylock -- os_mutex_trylock variant that never fails from * caller perspective (other than EBUSY). If util_mutex_trylock failed, this * function aborts the program. * Returns 0 if locked successfully, otherwise returns EBUSY. */ static inline int util_mutex_trylock(os_mutex_t *m) { int tmp = os_mutex_trylock(m); if (tmp && tmp != EBUSY) { errno = tmp; FATAL("!os_mutex_trylock"); } return tmp; } /* * util_mutex_unlock -- os_mutex_unlock variant that never fails from * caller perspective. 
If os_mutex_unlock failed, this function aborts * the program. */ static inline void util_mutex_unlock(os_mutex_t *m) { int tmp = os_mutex_unlock(m); if (tmp) { errno = tmp; FATAL("!os_mutex_unlock"); } } /* * util_rwlock_init -- os_rwlock_init variant that never fails from * caller perspective. If os_rwlock_init failed, this function aborts * the program. */ static inline void util_rwlock_init(os_rwlock_t *m) { int tmp = os_rwlock_init(m); if (tmp) { errno = tmp; FATAL("!os_rwlock_init"); } } /* * util_rwlock_rdlock -- os_rwlock_rdlock variant that never fails from * caller perspective. If os_rwlock_rdlock failed, this function aborts * the program. */ static inline void util_rwlock_rdlock(os_rwlock_t *m) { int tmp = os_rwlock_rdlock(m); if (tmp) { errno = tmp; FATAL("!os_rwlock_rdlock"); } } /* * util_rwlock_wrlock -- os_rwlock_wrlock variant that never fails from * caller perspective. If os_rwlock_wrlock failed, this function aborts * the program. */ static inline void util_rwlock_wrlock(os_rwlock_t *m) { int tmp = os_rwlock_wrlock(m); if (tmp) { errno = tmp; FATAL("!os_rwlock_wrlock"); } } /* * util_rwlock_unlock -- os_rwlock_unlock variant that never fails from * caller perspective. If os_rwlock_unlock failed, this function aborts * the program. */ static inline void util_rwlock_unlock(os_rwlock_t *m) { int tmp = os_rwlock_unlock(m); if (tmp) { errno = tmp; FATAL("!os_rwlock_unlock"); } } /* * util_rwlock_destroy -- os_rwlock_destroy variant that never fails from * caller perspective. If os_rwlock_destroy failed, this function aborts * the program. */ static inline void util_rwlock_destroy(os_rwlock_t *m) { int tmp = os_rwlock_destroy(m); if (tmp) { errno = tmp; FATAL("!os_rwlock_destroy"); } } /* * util_spin_init -- os_spin_init variant that logs on fail and sets errno. */ static inline int util_spin_init(os_spinlock_t *lock, int pshared) { int tmp = os_spin_init(lock, pshared); if (tmp) { errno = tmp; ERR("!os_spin_init"); } return tmp; } /* * util_spin_destroy -- os_spin_destroy variant that never fails from * caller perspective. If os_spin_destroy failed, this function aborts * the program. */ static inline void util_spin_destroy(os_spinlock_t *lock) { int tmp = os_spin_destroy(lock); if (tmp) { errno = tmp; FATAL("!os_spin_destroy"); } } /* * util_spin_lock -- os_spin_lock variant that never fails from caller * perspective. If os_spin_lock failed, this function aborts the program. */ static inline void util_spin_lock(os_spinlock_t *lock) { int tmp = os_spin_lock(lock); if (tmp) { errno = tmp; FATAL("!os_spin_lock"); } } /* * util_spin_unlock -- os_spin_unlock variant that never fails * from caller perspective. If os_spin_unlock failed, * this function aborts the program. */ static inline void util_spin_unlock(os_spinlock_t *lock) { int tmp = os_spin_unlock(lock); if (tmp) { errno = tmp; FATAL("!os_spin_unlock"); } } /* * util_semaphore_init -- os_semaphore_init variant that never fails * from caller perspective. If os_semaphore_init failed, * this function aborts the program. 
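 *
 * Like the mutex, rwlock and spinlock wrappers above, the semaphore
 * wrappers below turn unexpected errors into a FATAL() abort, so call
 * sites need no error handling; e.g. (illustrative only):
 *
 *	util_semaphore_init(&sem, 0);
 *	util_semaphore_post(&sem);
 *	util_semaphore_wait(&sem);
 *	util_semaphore_destroy(&sem);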
*/ static inline void util_semaphore_init(os_semaphore_t *sem, unsigned value) { if (os_semaphore_init(sem, value)) FATAL("!os_semaphore_init"); } /* * util_semaphore_destroy -- deletes a semaphore instance */ static inline void util_semaphore_destroy(os_semaphore_t *sem) { if (os_semaphore_destroy(sem) != 0) FATAL("!os_semaphore_destroy"); } /* * util_semaphore_wait -- decreases the value of the semaphore */ static inline void util_semaphore_wait(os_semaphore_t *sem) { errno = 0; int ret; do { ret = os_semaphore_wait(sem); } while (errno == EINTR); /* signal interrupt */ if (ret != 0) FATAL("!os_semaphore_wait"); } /* * util_semaphore_trywait -- tries to decrease the value of the semaphore */ static inline int util_semaphore_trywait(os_semaphore_t *sem) { errno = 0; int ret; do { ret = os_semaphore_trywait(sem); } while (errno == EINTR); /* signal interrupt */ if (ret != 0 && errno != EAGAIN) FATAL("!os_semaphore_trywait"); return ret; } /* * util_semaphore_post -- increases the value of the semaphore */ static inline void util_semaphore_post(os_semaphore_t *sem) { if (os_semaphore_post(sem) != 0) FATAL("!os_semaphore_post"); } #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/util.c000066400000000000000000000230451361505074100155340ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * util.c -- very basic utilities */ #include #include #include #include #include #include #include #include "util.h" #include "valgrind_internal.h" #include "alloc.h" /* library-wide page size */ unsigned long long Pagesize; /* allocation/mmap granularity */ unsigned long long Mmap_align; #if ANY_VG_TOOL_ENABLED /* initialized to true if the process is running inside Valgrind */ unsigned _On_valgrind; #endif #if VG_PMEMCHECK_ENABLED #define LIB_LOG_LEN 20 #define FUNC_LOG_LEN 50 #define SUFFIX_LEN 7 /* true if pmreorder instrumentization has to be enabled */ int _Pmreorder_emit; /* * util_emit_log -- emits lib and func name with appropriate suffix * to pmemcheck store log file */ void util_emit_log(const char *lib, const char *func, int order) { char lib_name[LIB_LOG_LEN]; char func_name[FUNC_LOG_LEN]; char suffix[SUFFIX_LEN]; size_t lib_len = strlen(lib); size_t func_len = strlen(func); if (order == 0) strcpy(suffix, ".BEGIN"); else strcpy(suffix, ".END"); size_t suffix_len = strlen(suffix); if (lib_len + suffix_len + 1 > LIB_LOG_LEN) { VALGRIND_EMIT_LOG("Library name is too long"); return; } if (func_len + suffix_len + 1 > FUNC_LOG_LEN) { VALGRIND_EMIT_LOG("Function name is too long"); return; } strcpy(lib_name, lib); strcat(lib_name, suffix); strcpy(func_name, func); strcat(func_name, suffix); if (order == 0) { VALGRIND_EMIT_LOG(func_name); VALGRIND_EMIT_LOG(lib_name); } else { VALGRIND_EMIT_LOG(lib_name); VALGRIND_EMIT_LOG(func_name); } } #endif /* * util_is_zeroed -- check if given memory range is all zero */ int util_is_zeroed(const void *addr, size_t len) { const char *a = addr; if (len == 0) return 1; if (a[0] == 0 && memcmp(a, a + 1, len - 1) == 0) return 1; return 0; } /* * util_checksum_compute -- compute Fletcher64 checksum * * csump points to where the checksum lives, so that location * is treated as zeros while calculating the checksum. The * checksummed data is assumed to be in little endian order. */ uint64_t util_checksum_compute(void *addr, size_t len, uint64_t *csump, size_t skip_off) { if (len % 4 != 0) abort(); uint32_t *p32 = addr; uint32_t *p32end = (uint32_t *)((char *)addr + len); uint32_t *skip; uint32_t lo32 = 0; uint32_t hi32 = 0; if (skip_off) skip = (uint32_t *)((char *)addr + skip_off); else skip = (uint32_t *)((char *)addr + len); while (p32 < p32end) if (p32 == (uint32_t *)csump || p32 >= skip) { /* lo32 += 0; treat first 32-bits as zero */ p32++; hi32 += lo32; /* lo32 += 0; treat second 32-bits as zero */ p32++; hi32 += lo32; } else { lo32 += le32toh(*p32); ++p32; hi32 += lo32; } return (uint64_t)hi32 << 32 | lo32; } /* * util_checksum -- compute Fletcher64 checksum * * csump points to where the checksum lives, so that location * is treated as zeros while calculating the checksum. * If insert is true, the calculated checksum is inserted into * the range at *csump. Otherwise the calculated checksum is * checked against *csump and the result returned (true means * the range checksummed correctly). */ int util_checksum(void *addr, size_t len, uint64_t *csump, int insert, size_t skip_off) { uint64_t csum = util_checksum_compute(addr, len, csump, skip_off); if (insert) { *csump = htole64(csum); return 1; } return *csump == htole64(csum); } /* * util_checksum_seq -- compute sequential Fletcher64 checksum * * Merges checksum from the old buffer with checksum for current buffer. 
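 *
 * This allows checksumming several buffers as one logical stream; for
 * example (sketch):
 *
 *	csum = util_checksum_seq(buf1, len1, 0);
 *	csum = util_checksum_seq(buf2, len2, csum);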
*/ uint64_t util_checksum_seq(const void *addr, size_t len, uint64_t csum) { if (len % 4 != 0) abort(); const uint32_t *p32 = addr; const uint32_t *p32end = (const uint32_t *)((const char *)addr + len); uint32_t lo32 = (uint32_t)csum; uint32_t hi32 = (uint32_t)(csum >> 32); while (p32 < p32end) { lo32 += le32toh(*p32); ++p32; hi32 += lo32; } return (uint64_t)hi32 << 32 | lo32; } /* * util_fgets -- fgets wrapper with conversion CRLF to LF */ char * util_fgets(char *buffer, int max, FILE *stream) { char *str = fgets(buffer, max, stream); if (str == NULL) goto end; int len = (int)strlen(str); if (len < 2) goto end; if (str[len - 2] == '\r' && str[len - 1] == '\n') { str[len - 2] = '\n'; str[len - 1] = '\0'; } end: return str; } struct suff { const char *suff; uint64_t mag; }; /* * util_parse_size -- parse size from string */ int util_parse_size(const char *str, size_t *sizep) { const struct suff suffixes[] = { { "B", 1ULL }, { "K", 1ULL << 10 }, /* JEDEC */ { "M", 1ULL << 20 }, { "G", 1ULL << 30 }, { "T", 1ULL << 40 }, { "P", 1ULL << 50 }, { "KiB", 1ULL << 10 }, /* IEC */ { "MiB", 1ULL << 20 }, { "GiB", 1ULL << 30 }, { "TiB", 1ULL << 40 }, { "PiB", 1ULL << 50 }, { "kB", 1000ULL }, /* SI */ { "MB", 1000ULL * 1000 }, { "GB", 1000ULL * 1000 * 1000 }, { "TB", 1000ULL * 1000 * 1000 * 1000 }, { "PB", 1000ULL * 1000 * 1000 * 1000 * 1000 } }; int res = -1; unsigned i; size_t size = 0; char unit[9] = {0}; int ret = sscanf(str, "%zu%8s", &size, unit); if (ret == 1) { res = 0; } else if (ret == 2) { for (i = 0; i < ARRAY_SIZE(suffixes); ++i) { if (strcmp(suffixes[i].suff, unit) == 0) { size = size * suffixes[i].mag; res = 0; break; } } } else { return -1; } if (sizep && res == 0) *sizep = size; return res; } /* * util_init -- initialize the utils * * This is called from the library initialization code. */ void util_init(void) { /* XXX - replace sysconf() with util_get_sys_xxx() */ if (Pagesize == 0) Pagesize = (unsigned long) sysconf(_SC_PAGESIZE); #ifndef _WIN32 Mmap_align = Pagesize; #else if (Mmap_align == 0) { SYSTEM_INFO si; GetSystemInfo(&si); Mmap_align = si.dwAllocationGranularity; } #endif #if ANY_VG_TOOL_ENABLED _On_valgrind = RUNNING_ON_VALGRIND; #endif #if VG_PMEMCHECK_ENABLED if (On_valgrind) { char *pmreorder_env = getenv("PMREORDER_EMIT_LOG"); if (pmreorder_env) _Pmreorder_emit = atoi(pmreorder_env); } else { _Pmreorder_emit = 0; } #endif } /* * util_concat_str -- concatenate two strings */ char * util_concat_str(const char *s1, const char *s2) { char *result = malloc(strlen(s1) + strlen(s2) + 1); if (!result) return NULL; strcpy(result, s1); strcat(result, s2); return result; } /* * util_localtime -- a wrapper for localtime function * * localtime can set nonzero errno even if it succeeds (e.g. when there is no * /etc/localtime file under Linux) and we do not want the errno to be polluted * in such cases. 
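 *
 * The wrapper below therefore saves errno before calling localtime() and
 * restores it when the call succeeds, so the caller never observes such
 * a spurious errno change.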
*/ struct tm * util_localtime(const time_t *timep) { int oerrno = errno; struct tm *tm = localtime(timep); if (tm != NULL) errno = oerrno; return tm; } /* * util_safe_strcpy -- copies string from src to dst, returns -1 * when length of source string (including null-terminator) * is greater than max_length, 0 otherwise * * For gcc (found in version 8.1.1) calling this function with * max_length equal to dst size produces -Wstringop-truncation warning * * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85902 */ #ifdef STRINGOP_TRUNCATION_SUPPORTED #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wstringop-truncation" #endif int util_safe_strcpy(char *dst, const char *src, size_t max_length) { if (max_length == 0) return -1; strncpy(dst, src, max_length); return dst[max_length - 1] == '\0' ? 0 : -1; } #ifdef STRINGOP_TRUNCATION_SUPPORTED #pragma GCC diagnostic pop #endif #define PARSER_MAX_LINE (PATH_MAX + 1024) /* * util_readline -- read line from stream */ char * util_readline(FILE *fh) { size_t bufsize = PARSER_MAX_LINE; size_t position = 0; char *buffer = NULL; do { char *tmp = buffer; buffer = Realloc(buffer, bufsize); if (buffer == NULL) { Free(tmp); return NULL; } /* ensure if we can cast bufsize to int */ char *s = util_fgets(buffer + position, (int)bufsize / 2, fh); if (s == NULL) { Free(buffer); return NULL; } position = strlen(buffer); bufsize *= 2; } while (!feof(fh) && buffer[position - 1] != '\n'); return buffer; } vmem-1.8/src/common/util.h000066400000000000000000000374131361505074100155450ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * util.h -- internal definitions for util module */ #ifndef PMDK_UTIL_H #define PMDK_UTIL_H 1 #include #include #include #include #include #ifdef _MSC_VER #include /* popcnt, bitscan */ #endif #include #ifdef __cplusplus extern "C" { #endif extern unsigned long long Pagesize; extern unsigned long long Mmap_align; #define CACHELINE_SIZE 64ULL #define PAGE_ALIGNED_DOWN_SIZE(size) ((size) & ~(Pagesize - 1)) #define PAGE_ALIGNED_UP_SIZE(size)\ PAGE_ALIGNED_DOWN_SIZE((size) + (Pagesize - 1)) #define IS_PAGE_ALIGNED(size) (((size) & (Pagesize - 1)) == 0) #define PAGE_ALIGN_UP(addr) ((void *)PAGE_ALIGNED_UP_SIZE((uintptr_t)(addr))) #define ALIGN_UP(size, align) (((size) + (align) - 1) & ~((align) - 1)) #define ALIGN_DOWN(size, align) ((size) & ~((align) - 1)) #define ADDR_SUM(vp, lp) ((void *)((char *)(vp) + (lp))) #define util_alignof(t) offsetof(struct {char _util_c; t _util_m; }, _util_m) #define FORMAT_PRINTF(a, b) __attribute__((__format__(__printf__, (a), (b)))) void util_init(void); int util_is_zeroed(const void *addr, size_t len); uint64_t util_checksum_compute(void *addr, size_t len, uint64_t *csump, size_t skip_off); int util_checksum(void *addr, size_t len, uint64_t *csump, int insert, size_t skip_off); uint64_t util_checksum_seq(const void *addr, size_t len, uint64_t csum); int util_parse_size(const char *str, size_t *sizep); char *util_fgets(char *buffer, int max, FILE *stream); char *util_getexecname(char *path, size_t pathlen); char *util_part_realpath(const char *path); int util_compare_file_inodes(const char *path1, const char *path2); void *util_aligned_malloc(size_t alignment, size_t size); void util_aligned_free(void *ptr); struct tm *util_localtime(const time_t *timep); int util_safe_strcpy(char *dst, const char *src, size_t max_length); void util_emit_log(const char *lib, const char *func, int order); char *util_readline(FILE *fh); #ifdef _WIN32 char *util_toUTF8(const wchar_t *wstr); wchar_t *util_toUTF16(const char *wstr); void util_free_UTF8(char *str); void util_free_UTF16(wchar_t *str); int util_toUTF16_buff(const char *in, wchar_t *out, size_t out_size); int util_toUTF8_buff(const wchar_t *in, char *out, size_t out_size); void util_suppress_errmsg(void); #endif #define UTIL_MAX_ERR_MSG 128 void util_strerror(int errnum, char *buff, size_t bufflen); void util_set_alloc_funcs( void *(*malloc_func)(size_t size), void (*free_func)(void *ptr), void *(*realloc_func)(void *ptr, size_t size), char *(*strdup_func)(const char *s)); /* * Macro calculates number of elements in given table */ #ifndef ARRAY_SIZE #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) #endif #ifdef _MSC_VER #define force_inline inline __forceinline #define NORETURN __declspec(noreturn) #else #define force_inline __attribute__((always_inline)) inline #define NORETURN __attribute__((noreturn)) #endif #define util_get_not_masked_bits(x, mask) ((x) & ~(mask)) /* * util_setbit -- setbit macro substitution which properly deals with types */ static inline void util_setbit(uint8_t *b, uint32_t i) { b[i / 8] = (uint8_t)(b[i / 8] | (uint8_t)(1 << (i % 8))); } /* * util_clrbit -- clrbit macro substitution which properly deals with types */ static inline void util_clrbit(uint8_t *b, uint32_t i) { b[i / 8] = (uint8_t)(b[i / 8] & (uint8_t)(~(1 << (i % 8)))); } #define util_isset(a, i) isset(a, i) #define util_isclr(a, i) isclr(a, i) #define util_flag_isset(a, f) ((a) & (f)) #define util_flag_isclr(a, f) (((a) & (f)) == 0) /* * util_is_pow2 -- returns !0 when there's only 1 bit set in v, 0 otherwise */ static 
force_inline int util_is_pow2(uint64_t v) { return v && !(v & (v - 1)); } /* * util_div_ceil -- divides a by b and rounds up the result */ static force_inline unsigned util_div_ceil(unsigned a, unsigned b) { return (unsigned)(((unsigned long)a + b - 1) / b); } /* * util_bool_compare_and_swap -- perform an atomic compare and swap * util_fetch_and_* -- perform an operation atomically, return old value * util_synchronize -- issue a full memory barrier * util_popcount -- count number of set bits * util_lssb_index -- return index of least significant set bit, * undefined on zero * util_mssb_index -- return index of most significant set bit * undefined on zero * * XXX assertions needed on (value != 0) in both versions of bitscans * */ #ifndef _MSC_VER /* * ISO C11 -- 7.17.1.4 * memory_order - an enumerated type whose enumerators identify memory ordering * constraints. */ typedef enum { memory_order_relaxed = __ATOMIC_RELAXED, memory_order_consume = __ATOMIC_CONSUME, memory_order_acquire = __ATOMIC_ACQUIRE, memory_order_release = __ATOMIC_RELEASE, memory_order_acq_rel = __ATOMIC_ACQ_REL, memory_order_seq_cst = __ATOMIC_SEQ_CST } memory_order; /* * ISO C11 -- 7.17.7.2 The atomic_load generic functions * Integer width specific versions as supplement for: * * * #include * C atomic_load(volatile A *object); * C atomic_load_explicit(volatile A *object, memory_order order); * * The atomic_load interface doesn't return the loaded value, but instead * copies it to a specified address -- see comments at the MSVC version. * * Also, instead of generic functions, two versions are available: * for 32 bit fundamental integers, and for 64 bit ones. */ #define util_atomic_load_explicit32 __atomic_load #define util_atomic_load_explicit64 __atomic_load /* * ISO C11 -- 7.17.7.1 The atomic_store generic functions * Integer width specific versions as supplement for: * * #include * void atomic_store(volatile A *object, C desired); * void atomic_store_explicit(volatile A *object, C desired, * memory_order order); */ #define util_atomic_store_explicit32 __atomic_store_n #define util_atomic_store_explicit64 __atomic_store_n /* * https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html * https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html * https://clang.llvm.org/docs/LanguageExtensions.html#builtin-functions */ #define util_bool_compare_and_swap32 __sync_bool_compare_and_swap #define util_bool_compare_and_swap64 __sync_bool_compare_and_swap #define util_fetch_and_add32 __sync_fetch_and_add #define util_fetch_and_add64 __sync_fetch_and_add #define util_fetch_and_sub32 __sync_fetch_and_sub #define util_fetch_and_sub64 __sync_fetch_and_sub #define util_fetch_and_and32 __sync_fetch_and_and #define util_fetch_and_and64 __sync_fetch_and_and #define util_fetch_and_or32 __sync_fetch_and_or #define util_fetch_and_or64 __sync_fetch_and_or #define util_synchronize __sync_synchronize #define util_popcount(value) ((unsigned char)__builtin_popcount(value)) #define util_popcount64(value) ((unsigned char)__builtin_popcountll(value)) #define util_lssb_index(value) ((unsigned char)__builtin_ctz(value)) #define util_lssb_index64(value) ((unsigned char)__builtin_ctzll(value)) #define util_mssb_index(value) ((unsigned char)(31 - __builtin_clz(value))) #define util_mssb_index64(value) ((unsigned char)(63 - __builtin_clzll(value))) #else /* ISO C11 -- 7.17.1.4 */ typedef enum { memory_order_relaxed, memory_order_consume, memory_order_acquire, memory_order_release, memory_order_acq_rel, memory_order_seq_cst } memory_order; /* * ISO 
C11 -- 7.17.7.2 The atomic_load generic functions * Integer width specific versions as supplement for: * * * #include * C atomic_load(volatile A *object); * C atomic_load_explicit(volatile A *object, memory_order order); * * The atomic_load interface doesn't return the loaded value, but instead * copies it to a specified address. * The MSVC specific implementation needs to trigger a barrier (at least * compiler barrier) after the load from the volatile value. The actual load * from the volatile value itself is expected to be atomic. * * The actual isnterface here: * #include "util.h" * void util_atomic_load32(volatile A *object, A *destination); * void util_atomic_load64(volatile A *object, A *destination); * void util_atomic_load_explicit32(volatile A *object, A *destination, * memory_order order); * void util_atomic_load_explicit64(volatile A *object, A *destination, * memory_order order); */ #ifndef _M_X64 #error MSVC ports of util_atomic_ only work on X86_64 #endif #if _MSC_VER >= 2000 #error util_atomic_ utility functions not tested with this version of VC++ #error These utility functions are not future proof, as they are not #error based on publicly available documentation. #endif #define util_atomic_load_explicit(object, dest, order)\ do {\ COMPILE_ERROR_ON(order != memory_order_seq_cst &&\ order != memory_order_consume &&\ order != memory_order_acquire &&\ order != memory_order_relaxed);\ *dest = *object;\ if (order == memory_order_seq_cst ||\ order == memory_order_consume ||\ order == memory_order_acquire)\ _ReadWriteBarrier();\ } while (0) #define util_atomic_load_explicit32 util_atomic_load_explicit #define util_atomic_load_explicit64 util_atomic_load_explicit /* ISO C11 -- 7.17.7.1 The atomic_store generic functions */ #define util_atomic_store_explicit64(object, desired, order)\ do {\ COMPILE_ERROR_ON(order != memory_order_seq_cst &&\ order != memory_order_release &&\ order != memory_order_relaxed);\ if (order == memory_order_seq_cst) {\ _InterlockedExchange64(\ (volatile long long *)object, desired);\ } else {\ if (order == memory_order_release)\ _ReadWriteBarrier();\ *object = desired;\ }\ } while (0) #define util_atomic_store_explicit32(object, desired, order)\ do {\ COMPILE_ERROR_ON(order != memory_order_seq_cst &&\ order != memory_order_release &&\ order != memory_order_relaxed);\ if (order == memory_order_seq_cst) {\ _InterlockedExchange(\ (volatile long *)object, desired);\ } else {\ if (order == memory_order_release)\ _ReadWriteBarrier();\ *object = desired;\ }\ } while (0) /* * https://msdn.microsoft.com/en-us/library/hh977022.aspx */ static __inline int bool_compare_and_swap32_VC(volatile LONG *ptr, LONG oldval, LONG newval) { LONG old = InterlockedCompareExchange(ptr, newval, oldval); return (old == oldval); } static __inline int bool_compare_and_swap64_VC(volatile LONG64 *ptr, LONG64 oldval, LONG64 newval) { LONG64 old = InterlockedCompareExchange64(ptr, newval, oldval); return (old == oldval); } #define util_bool_compare_and_swap32(p, o, n)\ bool_compare_and_swap32_VC((LONG *)(p), (LONG)(o), (LONG)(n)) #define util_bool_compare_and_swap64(p, o, n)\ bool_compare_and_swap64_VC((LONG64 *)(p), (LONG64)(o), (LONG64)(n)) #define util_fetch_and_add32(ptr, value)\ InterlockedExchangeAdd((LONG *)(ptr), value) #define util_fetch_and_add64(ptr, value)\ InterlockedExchangeAdd64((LONG64 *)(ptr), value) #define util_fetch_and_sub32(ptr, value)\ InterlockedExchangeSubtract((LONG *)(ptr), value) #define util_fetch_and_sub64(ptr, value)\ InterlockedExchangeAdd64((LONG64 *)(ptr), 
-((LONG64)(value))) #define util_fetch_and_and32(ptr, value)\ InterlockedAnd((LONG *)(ptr), value) #define util_fetch_and_and64(ptr, value)\ InterlockedAnd64((LONG64 *)(ptr), value) #define util_fetch_and_or32(ptr, value)\ InterlockedOr((LONG *)(ptr), value) #define util_fetch_and_or64(ptr, value)\ InterlockedOr64((LONG64 *)(ptr), value) static __inline void util_synchronize(void) { MemoryBarrier(); } #define util_popcount(value) (unsigned char)__popcnt(value) #define util_popcount64(value) (unsigned char)__popcnt64(value) static __inline unsigned char util_lssb_index(int value) { unsigned long ret; _BitScanForward(&ret, value); return (unsigned char)ret; } static __inline unsigned char util_lssb_index64(long long value) { unsigned long ret; _BitScanForward64(&ret, value); return (unsigned char)ret; } static __inline unsigned char util_mssb_index(int value) { unsigned long ret; _BitScanReverse(&ret, value); return (unsigned char)ret; } static __inline unsigned char util_mssb_index64(long long value) { unsigned long ret; _BitScanReverse64(&ret, value); return (unsigned char)ret; } #endif /* ISO C11 -- 7.17.7 Operations on atomic types */ #define util_atomic_load32(object, dest)\ util_atomic_load_explicit32(object, dest, memory_order_seq_cst) #define util_atomic_load64(object, dest)\ util_atomic_load_explicit64(object, dest, memory_order_seq_cst) #define util_atomic_store32(object, desired)\ util_atomic_store_explicit32(object, desired, memory_order_seq_cst) #define util_atomic_store64(object, desired)\ util_atomic_store_explicit64(object, desired, memory_order_seq_cst) /* * util_get_printable_ascii -- convert non-printable ascii to dot '.' */ static inline char util_get_printable_ascii(char c) { return isprint((unsigned char)c) ? c : '.'; } char *util_concat_str(const char *s1, const char *s2); #if !defined(likely) #if defined(__GNUC__) #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #else #define likely(x) (!!(x)) #define unlikely(x) (!!(x)) #endif #endif #if defined(__CHECKER__) #define COMPILE_ERROR_ON(cond) #define ASSERT_COMPILE_ERROR_ON(cond) #elif defined(_MSC_VER) #define COMPILE_ERROR_ON(cond) C_ASSERT(!(cond)) /* XXX - can't be done with C_ASSERT() unless we have __builtin_constant_p() */ #define ASSERT_COMPILE_ERROR_ON(cond) do {} while (0) #else #define COMPILE_ERROR_ON(cond) ((void)sizeof(char[(cond) ? 
-1 : 1])) #define ASSERT_COMPILE_ERROR_ON(cond) COMPILE_ERROR_ON(cond) #endif #ifndef _MSC_VER #define ATTR_CONSTRUCTOR __attribute__((constructor)) static #define ATTR_DESTRUCTOR __attribute__((destructor)) static #else #define ATTR_CONSTRUCTOR #define ATTR_DESTRUCTOR #endif #ifndef _MSC_VER #define CONSTRUCTOR(fun) ATTR_CONSTRUCTOR #else #ifdef __cplusplus #define CONSTRUCTOR(fun) \ void fun(); \ struct _##fun { \ _##fun() { \ fun(); \ } \ }; static _##fun foo; \ static #else #define CONSTRUCTOR(fun) \ MSVC_CONSTR(fun) \ static #endif #endif #ifdef __GNUC__ #define CHECK_FUNC_COMPATIBLE(func1, func2)\ COMPILE_ERROR_ON(!__builtin_types_compatible_p(typeof(func1),\ typeof(func2))) #else #define CHECK_FUNC_COMPATIBLE(func1, func2) do {} while (0) #endif /* __GNUC__ */ #ifdef __cplusplus } #endif #endif /* util.h */ vmem-1.8/src/common/util_pmem.h000066400000000000000000000045361361505074100165630ustar00rootroot00000000000000/* * Copyright 2017-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * util_pmem.h -- internal definitions for pmem utils */ #ifndef PMDK_UTIL_PMEM_H #define PMDK_UTIL_PMEM_H 1 #include "libpmem.h" #include "out.h" #ifdef __cplusplus extern "C" { #endif /* * util_persist -- flush to persistence */ static inline void util_persist(int is_pmem, const void *addr, size_t len) { LOG(3, "is_pmem %d, addr %p, len %zu", is_pmem, addr, len); if (is_pmem) pmem_persist(addr, len); else if (pmem_msync(addr, len)) FATAL("!pmem_msync"); } /* * util_persist_auto -- flush to persistence */ static inline void util_persist_auto(int is_pmem, const void *addr, size_t len) { LOG(3, "is_pmem %d, addr %p, len %zu", is_pmem, addr, len); util_persist(is_pmem || pmem_is_pmem(addr, len), addr, len); } #ifdef __cplusplus } #endif #endif /* util_pmem.h */ vmem-1.8/src/common/util_posix.c000066400000000000000000000074101361505074100167540ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_posix.c -- Abstraction layer for misc utilities (Posix implementation) */ #include #include #include #include #include #include "os.h" #include "out.h" #include "util.h" /* pass through for Posix */ void util_strerror(int errnum, char *buff, size_t bufflen) { strerror_r(errnum, buff, bufflen); } /* * util_part_realpath -- get canonicalized absolute pathname * * As paths used in a poolset file have to be absolute (checked when parsing * a poolset file), here we only have to resolve symlinks. 
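 *
 * Note that realpath() is called with a NULL resolved-path buffer, so on
 * success it returns a malloc()ed string which the caller is expected to
 * free().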
*/ char * util_part_realpath(const char *path) { return realpath(path, NULL); } /* * util_compare_file_inodes -- compare device and inodes of two files; * this resolves hard links */ int util_compare_file_inodes(const char *path1, const char *path2) { struct stat sb1, sb2; if (os_stat(path1, &sb1)) { if (errno != ENOENT) { ERR("!stat failed for %s", path1); return -1; } LOG(1, "stat failed for %s", path1); errno = 0; return strcmp(path1, path2) != 0; } if (os_stat(path2, &sb2)) { if (errno != ENOENT) { ERR("!stat failed for %s", path2); return -1; } LOG(1, "stat failed for %s", path2); errno = 0; return strcmp(path1, path2) != 0; } return sb1.st_dev != sb2.st_dev || sb1.st_ino != sb2.st_ino; } /* * util_aligned_malloc -- allocate aligned memory */ void * util_aligned_malloc(size_t alignment, size_t size) { void *retval = NULL; errno = posix_memalign(&retval, alignment, size); return retval; } /* * util_aligned_free -- free allocated memory in util_aligned_malloc */ void util_aligned_free(void *ptr) { free(ptr); } /* * util_getexecname -- return name of current executable */ char * util_getexecname(char *path, size_t pathlen) { ASSERT(pathlen != 0); ssize_t cc; #ifdef __FreeBSD__ #include #include int mib[4] = {CTL_KERN, KERN_PROC, KERN_PROC_PATHNAME, -1}; cc = (sysctl(mib, 4, path, &pathlen, NULL, 0) == -1) ? -1 : (ssize_t)pathlen; #else cc = readlink("/proc/self/exe", path, pathlen); #endif if (cc == -1) { strncpy(path, "unknown", pathlen); path[pathlen - 1] = '\0'; } else { path[cc] = '\0'; } return path; } vmem-1.8/src/common/util_windows.c000066400000000000000000000143531361505074100173100ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * util_windows.c -- misc utilities with OS-specific implementation */ #include #include #include #include "alloc.h" #include "util.h" #include "out.h" #include "file.h" /* Windows CRT doesn't support all errors, add unmapped here */ #define ENOTSUP_STR "Operation not supported" #define ECANCELED_STR "Operation canceled" #define ENOERROR 0 #define ENOERROR_STR "Success" #define UNMAPPED_STR "Unmapped error" /* * util_strerror -- return string describing error number * * XXX: There are many other POSIX error codes that are not recognized by * strerror_s(), so eventually we may want to implement this in a similar * fashion as strsignal(). */ void util_strerror(int errnum, char *buff, size_t bufflen) { switch (errnum) { case ENOERROR: strcpy_s(buff, bufflen, ENOERROR_STR); break; case ENOTSUP: strcpy_s(buff, bufflen, ENOTSUP_STR); break; case ECANCELED: strcpy_s(buff, bufflen, ECANCELED_STR); break; default: if (strerror_s(buff, bufflen, errnum)) strcpy_s(buff, bufflen, UNMAPPED_STR); } } /* * util_part_realpath -- get canonicalized absolute pathname for a part file * * On Windows, paths cannot be symlinks and paths used in a poolset have to * be absolute (checked when parsing a poolset file), so we just return * the path. */ char * util_part_realpath(const char *path) { return strdup(path); } /* * util_compare_file_inodes -- compare device and inodes of two files */ int util_compare_file_inodes(const char *path1, const char *path2) { return strcmp(path1, path2) != 0; } /* * util_aligned_malloc -- allocate aligned memory */ void * util_aligned_malloc(size_t alignment, size_t size) { return _aligned_malloc(size, alignment); } /* * util_aligned_free -- free allocated memory in util_aligned_malloc */ void util_aligned_free(void *ptr) { _aligned_free(ptr); } /* * util_toUTF8 -- allocating conversion from wide char string to UTF8 */ char * util_toUTF8(const wchar_t *wstr) { int size = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, wstr, -1, NULL, 0, NULL, NULL); if (size == 0) goto err; char *str = Malloc(size * sizeof(char)); if (str == NULL) goto out; if (WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, wstr, -1, str, size, NULL, NULL) == 0) { Free(str); goto err; } out: return str; err: errno = EINVAL; return NULL; } /* * util_free_UTF8 -- free UTF8 string */ void util_free_UTF8(char *str) { Free(str); } /* * util_toUTF16 -- allocating conversion from UTF8 to wide char string */ wchar_t * util_toUTF16(const char *str) { int size = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, str, -1, NULL, 0); if (size == 0) goto err; wchar_t *wstr = Malloc(size * sizeof(wchar_t)); if (wstr == NULL) goto out; if (MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, str, -1, wstr, size) == 0) { Free(wstr); goto err; } out: return wstr; err: errno = EINVAL; return NULL; } /* * util_free_UTF16 -- free wide char string */ void util_free_UTF16(wchar_t *wstr) { Free(wstr); } /* * util_toUTF16_buff -- non-allocating conversion from UTF8 to wide char string * * The user responsible for supplying a large enough out buffer. 
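 *
 * A minimal sketch of the intended call pattern (hypothetical input and
 * buffer size). out_size is counted in wide characters; the required
 * capacity can be queried up front with
 * MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, in, -1, NULL, 0).
 *
 *	wchar_t wbuf[260];
 *	if (util_toUTF16_buff("pool.set", wbuf, 260) != 0)
 *		... handle the error; errno is set to EINVAL ...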
*/ int util_toUTF16_buff(const char *in, wchar_t *out, size_t out_size) { ASSERT(out != NULL); int size = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, in, -1, NULL, 0); if (size == 0 || out_size < size) goto err; if (MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, in, -1, out, size) == 0) goto err; return 0; err: errno = EINVAL; return -1; } /* * util_toUTF8_buff -- non-allocating conversion from wide char string to UTF8 * * The user responsible for supplying a large enough out buffer. */ int util_toUTF8_buff(const wchar_t *in, char *out, size_t out_size) { ASSERT(out != NULL); int size = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, in, -1, NULL, 0, NULL, NULL); if (size == 0 || out_size < size) goto err; if (WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, in, -1, out, size, NULL, NULL) == 0) goto err; return 0; err: errno = EINVAL; return -1; } /* * util_getexecname -- return name of current executable */ char * util_getexecname(char *path, size_t pathlen) { ssize_t cc; if ((cc = GetModuleFileNameA(NULL, path, (DWORD)pathlen)) == 0) strcpy(path, "unknown"); else path[cc] = '\0'; return path; } /* * util_suppress_errmsg -- suppresses "abort" window on Windows if env variable * is set, useful for automatic tests */ void util_suppress_errmsg(void) { if (os_getenv("VMEM_NO_ABORT_MSG") != NULL) { DWORD err = GetErrorMode(); SetErrorMode(err | SEM_NOGPFAULTERRORBOX | SEM_FAILCRITICALERRORS); _set_abort_behavior(0, _WRITE_ABORT_MSG | _CALL_REPORTFAULT); } } vmem-1.8/src/common/uuid.h000066400000000000000000000050771361505074100155370ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * uuid.h -- internal definitions for uuid module */ #ifndef PMDK_UUID_H #define PMDK_UUID_H 1 #include #include #ifdef __cplusplus extern "C" { #endif /* * Structure for binary version of uuid. 
From RFC4122, * https://tools.ietf.org/html/rfc4122 */ struct uuid { uint32_t time_low; uint16_t time_mid; uint16_t time_hi_and_ver; uint8_t clock_seq_hi; uint8_t clock_seq_low; uint8_t node[6]; }; #define POOL_HDR_UUID_LEN 16 /* uuid byte length */ #define POOL_HDR_UUID_STR_LEN 37 /* uuid string length */ #define POOL_HDR_UUID_GEN_FILE "/proc/sys/kernel/random/uuid" typedef unsigned char uuid_t[POOL_HDR_UUID_LEN]; /* 16 byte binary uuid value */ int util_uuid_to_string(const uuid_t u, char *buf); int util_uuid_from_string(const char uuid[POOL_HDR_UUID_STR_LEN], struct uuid *ud); /* * uuidcmp -- compare two uuids */ static inline int uuidcmp(const uuid_t uuid1, const uuid_t uuid2) { return memcmp(uuid1, uuid2, POOL_HDR_UUID_LEN); } #ifdef __cplusplus } #endif #endif vmem-1.8/src/common/valgrind/000077500000000000000000000000001361505074100162155ustar00rootroot00000000000000vmem-1.8/src/common/valgrind/.cstyleignore000066400000000000000000000000631361505074100207240ustar00rootroot00000000000000drd.h helgrind.h memcheck.h pmemcheck.h valgrind.h vmem-1.8/src/common/valgrind/README000066400000000000000000000001331361505074100170720ustar00rootroot00000000000000Files in this directory were imported from Valgrind 3.14: https://github.com/pmem/valgrind vmem-1.8/src/common/valgrind/drd.h000066400000000000000000000547061361505074100171530ustar00rootroot00000000000000/* ---------------------------------------------------------------- Notice that the following BSD-style license applies to this one file (drd.h) only. The rest of Valgrind is licensed under the terms of the GNU General Public License, version 2, unless otherwise indicated. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- This file is part of DRD, a Valgrind tool for verification of multithreaded programs. Copyright (C) 2006-2017 Bart Van Assche . All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 3. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 4. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
---------------------------------------------------------------- Notice that the above BSD-style license applies to this one file (drd.h) only. The entire rest of Valgrind is licensed under the terms of the GNU General Public License, version 2. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- */ #ifndef __VALGRIND_DRD_H #define __VALGRIND_DRD_H #include "valgrind.h" /** Obtain the thread ID assigned by Valgrind's core. */ #define DRD_GET_VALGRIND_THREADID \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__DRD_GET_VALGRIND_THREAD_ID, \ 0, 0, 0, 0, 0) /** Obtain the thread ID assigned by DRD. */ #define DRD_GET_DRD_THREADID \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__DRD_GET_DRD_THREAD_ID, \ 0, 0, 0, 0, 0) /** Tell DRD not to complain about data races for the specified variable. */ #define DRD_IGNORE_VAR(x) ANNOTATE_BENIGN_RACE_SIZED(&(x), sizeof(x), "") /** Tell DRD to no longer ignore data races for the specified variable. */ #define DRD_STOP_IGNORING_VAR(x) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_FINISH_SUPPRESSION, \ &(x), sizeof(x), 0, 0, 0) /** * Tell DRD to trace all memory accesses for the specified variable * until the memory that was allocated for the variable is freed. */ #define DRD_TRACE_VAR(x) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_START_TRACE_ADDR, \ &(x), sizeof(x), 0, 0, 0) /** * Tell DRD to stop tracing memory accesses for the specified variable. */ #define DRD_STOP_TRACING_VAR(x) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_STOP_TRACE_ADDR, \ &(x), sizeof(x), 0, 0, 0) /** * @defgroup RaceDetectionAnnotations Data race detection annotations. * * @see See also the source file producer-consumer. */ #define ANNOTATE_PCQ_CREATE(pcq) do { } while(0) /** Tell DRD that a FIFO queue has been destroyed. */ #define ANNOTATE_PCQ_DESTROY(pcq) do { } while(0) /** * Tell DRD that an element has been added to the FIFO queue at address pcq. */ #define ANNOTATE_PCQ_PUT(pcq) do { } while(0) /** * Tell DRD that an element has been removed from the FIFO queue at address pcq, * and that DRD should insert a happens-before relationship between the memory * accesses that occurred before the corresponding ANNOTATE_PCQ_PUT(pcq) * annotation and the memory accesses after this annotation. Correspondence * between PUT and GET annotations happens in FIFO order. Since locking * of the queue is needed anyway to add elements to or to remove elements from * the queue, for DRD all four FIFO annotations are defined as no-ops. */ #define ANNOTATE_PCQ_GET(pcq) do { } while(0) /** * Tell DRD that data races at the specified address are expected and must not * be reported. */ #define ANNOTATE_BENIGN_RACE(addr, descr) \ ANNOTATE_BENIGN_RACE_SIZED(addr, sizeof(*addr), descr) /* Same as ANNOTATE_BENIGN_RACE(addr, descr), but applies to the memory range [addr, addr + size). */ #define ANNOTATE_BENIGN_RACE_SIZED(addr, size, descr) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_START_SUPPRESSION, \ addr, size, 0, 0, 0) /** Tell DRD to ignore all reads performed by the current thread. */ #define ANNOTATE_IGNORE_READS_BEGIN() \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_RECORD_LOADS, \ 0, 0, 0, 0, 0); /** Tell DRD to no longer ignore the reads performed by the current thread. */ #define ANNOTATE_IGNORE_READS_END() \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_RECORD_LOADS, \ 1, 0, 0, 0, 0); /** Tell DRD to ignore all writes performed by the current thread. 
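 *
 * A minimal sketch of the intended begin/end pairing, assuming a
 * hypothetical, intentionally racy flag:
 *
 *	ANNOTATE_IGNORE_WRITES_BEGIN();
 *	ready_flag = 1;
 *	ANNOTATE_IGNORE_WRITES_END();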
*/ #define ANNOTATE_IGNORE_WRITES_BEGIN() \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_RECORD_STORES, \ 0, 0, 0, 0, 0) /** Tell DRD to no longer ignore the writes performed by the current thread. */ #define ANNOTATE_IGNORE_WRITES_END() \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_RECORD_STORES, \ 1, 0, 0, 0, 0) /** Tell DRD to ignore all memory accesses performed by the current thread. */ #define ANNOTATE_IGNORE_READS_AND_WRITES_BEGIN() \ do { ANNOTATE_IGNORE_READS_BEGIN(); ANNOTATE_IGNORE_WRITES_BEGIN(); } while(0) /** * Tell DRD to no longer ignore the memory accesses performed by the current * thread. */ #define ANNOTATE_IGNORE_READS_AND_WRITES_END() \ do { ANNOTATE_IGNORE_READS_END(); ANNOTATE_IGNORE_WRITES_END(); } while(0) /** * Tell DRD that size bytes starting at addr has been allocated by a custom * memory allocator. */ #define ANNOTATE_NEW_MEMORY(addr, size) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_CLEAN_MEMORY, \ addr, size, 0, 0, 0) /** Ask DRD to report every access to the specified address. */ #define ANNOTATE_TRACE_MEMORY(addr) DRD_TRACE_VAR(*(char*)(addr)) /** * Tell DRD to assign the specified name to the current thread. This name will * be used in error messages printed by DRD. */ #define ANNOTATE_THREAD_NAME(name) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DRD_SET_THREAD_NAME, \ name, 0, 0, 0, 0) /*@}*/ /* !! ABIWARNING !! ABIWARNING !! ABIWARNING !! ABIWARNING !! This enum comprises an ABI exported by Valgrind to programs which use client requests. DO NOT CHANGE THE ORDER OF THESE ENTRIES, NOR DELETE ANY -- add new ones at the end. */ enum { /* Ask the DRD tool to discard all information about memory accesses */ /* and client objects for the specified range. This client request is */ /* binary compatible with the similarly named Helgrind client request. */ VG_USERREQ__DRD_CLEAN_MEMORY = VG_USERREQ_TOOL_BASE('H','G'), /* args: Addr, SizeT. */ /* Ask the DRD tool the thread ID assigned by Valgrind. */ VG_USERREQ__DRD_GET_VALGRIND_THREAD_ID = VG_USERREQ_TOOL_BASE('D','R'), /* args: none. */ /* Ask the DRD tool the thread ID assigned by DRD. */ VG_USERREQ__DRD_GET_DRD_THREAD_ID, /* args: none. */ /* To tell the DRD tool to suppress data race detection on the */ /* specified address range. */ VG_USERREQ__DRD_START_SUPPRESSION, /* args: start address, size in bytes */ /* To tell the DRD tool no longer to suppress data race detection on */ /* the specified address range. */ VG_USERREQ__DRD_FINISH_SUPPRESSION, /* args: start address, size in bytes */ /* To ask the DRD tool to trace all accesses to the specified range. */ VG_USERREQ__DRD_START_TRACE_ADDR, /* args: Addr, SizeT. */ /* To ask the DRD tool to stop tracing accesses to the specified range. */ VG_USERREQ__DRD_STOP_TRACE_ADDR, /* args: Addr, SizeT. */ /* Tell DRD whether or not to record memory loads in the calling thread. */ VG_USERREQ__DRD_RECORD_LOADS, /* args: Bool. */ /* Tell DRD whether or not to record memory stores in the calling thread. */ VG_USERREQ__DRD_RECORD_STORES, /* args: Bool. */ /* Set the name of the thread that performs this client request. */ VG_USERREQ__DRD_SET_THREAD_NAME, /* args: null-terminated character string. */ /* Tell DRD that a DRD annotation has not yet been implemented. */ VG_USERREQ__DRD_ANNOTATION_UNIMP, /* args: char*. */ /* Tell DRD that a user-defined semaphore synchronization object * is about to be created. */ VG_USERREQ__DRD_ANNOTATE_SEM_INIT_PRE, /* args: Addr, UInt value. 
*/ /* Tell DRD that a user-defined semaphore synchronization object * has been destroyed. */ VG_USERREQ__DRD_ANNOTATE_SEM_DESTROY_POST, /* args: Addr. */ /* Tell DRD that a user-defined semaphore synchronization * object is going to be acquired (semaphore wait). */ VG_USERREQ__DRD_ANNOTATE_SEM_WAIT_PRE, /* args: Addr. */ /* Tell DRD that a user-defined semaphore synchronization * object has been acquired (semaphore wait). */ VG_USERREQ__DRD_ANNOTATE_SEM_WAIT_POST, /* args: Addr. */ /* Tell DRD that a user-defined semaphore synchronization * object is about to be released (semaphore post). */ VG_USERREQ__DRD_ANNOTATE_SEM_POST_PRE, /* args: Addr. */ /* Tell DRD to ignore the inter-thread ordering introduced by a mutex. */ VG_USERREQ__DRD_IGNORE_MUTEX_ORDERING, /* args: Addr. */ /* Tell DRD that a user-defined reader-writer synchronization object * has been created. */ VG_USERREQ__DRD_ANNOTATE_RWLOCK_CREATE = VG_USERREQ_TOOL_BASE('H','G') + 256 + 14, /* args: Addr. */ /* Tell DRD that a user-defined reader-writer synchronization object * is about to be destroyed. */ VG_USERREQ__DRD_ANNOTATE_RWLOCK_DESTROY = VG_USERREQ_TOOL_BASE('H','G') + 256 + 15, /* args: Addr. */ /* Tell DRD that a lock on a user-defined reader-writer synchronization * object has been acquired. */ VG_USERREQ__DRD_ANNOTATE_RWLOCK_ACQUIRED = VG_USERREQ_TOOL_BASE('H','G') + 256 + 17, /* args: Addr, Int is_rw. */ /* Tell DRD that a lock on a user-defined reader-writer synchronization * object is about to be released. */ VG_USERREQ__DRD_ANNOTATE_RWLOCK_RELEASED = VG_USERREQ_TOOL_BASE('H','G') + 256 + 18, /* args: Addr, Int is_rw. */ /* Tell DRD that a Helgrind annotation has not yet been implemented. */ VG_USERREQ__HELGRIND_ANNOTATION_UNIMP = VG_USERREQ_TOOL_BASE('H','G') + 256 + 32, /* args: char*. */ /* Tell DRD to insert a happens-before annotation. */ VG_USERREQ__DRD_ANNOTATE_HAPPENS_BEFORE = VG_USERREQ_TOOL_BASE('H','G') + 256 + 33, /* args: Addr. */ /* Tell DRD to insert a happens-after annotation. */ VG_USERREQ__DRD_ANNOTATE_HAPPENS_AFTER = VG_USERREQ_TOOL_BASE('H','G') + 256 + 34, /* args: Addr. */ }; /** * @addtogroup RaceDetectionAnnotations */ /*@{*/ #ifdef __cplusplus /* ANNOTATE_UNPROTECTED_READ is the preferred way to annotate racy reads. Instead of doing ANNOTATE_IGNORE_READS_BEGIN(); ... = x; ANNOTATE_IGNORE_READS_END(); one can use ... = ANNOTATE_UNPROTECTED_READ(x); */ template inline T ANNOTATE_UNPROTECTED_READ(const volatile T& x) { ANNOTATE_IGNORE_READS_BEGIN(); const T result = x; ANNOTATE_IGNORE_READS_END(); return result; } /* Apply ANNOTATE_BENIGN_RACE_SIZED to a static variable. */ #define ANNOTATE_BENIGN_RACE_STATIC(static_var, description) \ namespace { \ static class static_var##_annotator \ { \ public: \ static_var##_annotator() \ { \ ANNOTATE_BENIGN_RACE_SIZED(&static_var, sizeof(static_var), \ #static_var ": " description); \ } \ } the_##static_var##_annotator; \ } #endif /*@}*/ #endif /* __VALGRIND_DRD_H */ vmem-1.8/src/common/valgrind/helgrind.h000066400000000000000000001137331361505074100201720ustar00rootroot00000000000000/* ---------------------------------------------------------------- Notice that the above BSD-style license applies to this one file (helgrind.h) only. The entire rest of Valgrind is licensed under the terms of the GNU General Public License, version 2. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- This file is part of Helgrind, a Valgrind tool for detecting errors in threaded programs. 
Copyright (C) 2007-2017 OpenWorks LLP info@open-works.co.uk Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 3. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 4. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ---------------------------------------------------------------- Notice that the above BSD-style license applies to this one file (helgrind.h) only. The entire rest of Valgrind is licensed under the terms of the GNU General Public License, version 2. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- */ #ifndef __HELGRIND_H #define __HELGRIND_H #include "valgrind.h" /* !! ABIWARNING !! ABIWARNING !! ABIWARNING !! ABIWARNING !! This enum comprises an ABI exported by Valgrind to programs which use client requests. DO NOT CHANGE THE ORDER OF THESE ENTRIES, NOR DELETE ANY -- add new ones at the end. */ typedef enum { VG_USERREQ__HG_CLEAN_MEMORY = VG_USERREQ_TOOL_BASE('H','G'), /* The rest are for Helgrind's internal use. Not for end-user use. Do not use them unless you are a Valgrind developer. */ /* Notify the tool what this thread's pthread_t is. 
*/ _VG_USERREQ__HG_SET_MY_PTHREAD_T = VG_USERREQ_TOOL_BASE('H','G') + 256, _VG_USERREQ__HG_PTH_API_ERROR, /* char*, int */ _VG_USERREQ__HG_PTHREAD_JOIN_POST, /* pthread_t of quitter */ _VG_USERREQ__HG_PTHREAD_MUTEX_INIT_POST, /* pth_mx_t*, long mbRec */ _VG_USERREQ__HG_PTHREAD_MUTEX_DESTROY_PRE, /* pth_mx_t*, long isInit */ _VG_USERREQ__HG_PTHREAD_MUTEX_UNLOCK_PRE, /* pth_mx_t* */ _VG_USERREQ__HG_PTHREAD_MUTEX_UNLOCK_POST, /* pth_mx_t* */ _VG_USERREQ__HG_PTHREAD_MUTEX_ACQUIRE_PRE, /* void*, long isTryLock */ _VG_USERREQ__HG_PTHREAD_MUTEX_ACQUIRE_POST, /* void* */ _VG_USERREQ__HG_PTHREAD_COND_SIGNAL_PRE, /* pth_cond_t* */ _VG_USERREQ__HG_PTHREAD_COND_BROADCAST_PRE, /* pth_cond_t* */ _VG_USERREQ__HG_PTHREAD_COND_WAIT_PRE, /* pth_cond_t*, pth_mx_t* */ _VG_USERREQ__HG_PTHREAD_COND_WAIT_POST, /* pth_cond_t*, pth_mx_t* */ _VG_USERREQ__HG_PTHREAD_COND_DESTROY_PRE, /* pth_cond_t*, long isInit */ _VG_USERREQ__HG_PTHREAD_RWLOCK_INIT_POST, /* pth_rwlk_t* */ _VG_USERREQ__HG_PTHREAD_RWLOCK_DESTROY_PRE, /* pth_rwlk_t* */ _VG_USERREQ__HG_PTHREAD_RWLOCK_LOCK_PRE, /* pth_rwlk_t*, long isW */ _VG_USERREQ__HG_PTHREAD_RWLOCK_ACQUIRED, /* void*, long isW */ _VG_USERREQ__HG_PTHREAD_RWLOCK_RELEASED, /* void* */ _VG_USERREQ__HG_PTHREAD_RWLOCK_UNLOCK_POST, /* pth_rwlk_t* */ _VG_USERREQ__HG_POSIX_SEM_INIT_POST, /* sem_t*, ulong value */ _VG_USERREQ__HG_POSIX_SEM_DESTROY_PRE, /* sem_t* */ _VG_USERREQ__HG_POSIX_SEM_RELEASED, /* void* */ _VG_USERREQ__HG_POSIX_SEM_ACQUIRED, /* void* */ _VG_USERREQ__HG_PTHREAD_BARRIER_INIT_PRE, /* pth_bar_t*, ulong, ulong */ _VG_USERREQ__HG_PTHREAD_BARRIER_WAIT_PRE, /* pth_bar_t* */ _VG_USERREQ__HG_PTHREAD_BARRIER_DESTROY_PRE, /* pth_bar_t* */ _VG_USERREQ__HG_PTHREAD_SPIN_INIT_OR_UNLOCK_PRE, /* pth_slk_t* */ _VG_USERREQ__HG_PTHREAD_SPIN_INIT_OR_UNLOCK_POST, /* pth_slk_t* */ _VG_USERREQ__HG_PTHREAD_SPIN_LOCK_PRE, /* pth_slk_t* */ _VG_USERREQ__HG_PTHREAD_SPIN_LOCK_POST, /* pth_slk_t* */ _VG_USERREQ__HG_PTHREAD_SPIN_DESTROY_PRE, /* pth_slk_t* */ _VG_USERREQ__HG_CLIENTREQ_UNIMP, /* char* */ _VG_USERREQ__HG_USERSO_SEND_PRE, /* arbitrary UWord SO-tag */ _VG_USERREQ__HG_USERSO_RECV_POST, /* arbitrary UWord SO-tag */ _VG_USERREQ__HG_USERSO_FORGET_ALL, /* arbitrary UWord SO-tag */ _VG_USERREQ__HG_RESERVED2, /* Do not use */ _VG_USERREQ__HG_RESERVED3, /* Do not use */ _VG_USERREQ__HG_RESERVED4, /* Do not use */ _VG_USERREQ__HG_ARANGE_MAKE_UNTRACKED, /* Addr a, ulong len */ _VG_USERREQ__HG_ARANGE_MAKE_TRACKED, /* Addr a, ulong len */ _VG_USERREQ__HG_PTHREAD_BARRIER_RESIZE_PRE, /* pth_bar_t*, ulong */ _VG_USERREQ__HG_CLEAN_MEMORY_HEAPBLOCK, /* Addr start_of_block */ _VG_USERREQ__HG_PTHREAD_COND_INIT_POST, /* pth_cond_t*, pth_cond_attr_t*/ _VG_USERREQ__HG_GNAT_MASTER_HOOK, /* void*d,void*m,Word ml */ _VG_USERREQ__HG_GNAT_MASTER_COMPLETED_HOOK, /* void*s,Word ml */ _VG_USERREQ__HG_GET_ABITS, /* Addr a,Addr abits, ulong len */ _VG_USERREQ__HG_PTHREAD_CREATE_BEGIN, _VG_USERREQ__HG_PTHREAD_CREATE_END, _VG_USERREQ__HG_PTHREAD_MUTEX_LOCK_PRE, /* pth_mx_t*,long isTryLock */ _VG_USERREQ__HG_PTHREAD_MUTEX_LOCK_POST, /* pth_mx_t *,long tookLock */ _VG_USERREQ__HG_PTHREAD_RWLOCK_LOCK_POST, /* pth_rwlk_t*,long isW,long */ _VG_USERREQ__HG_PTHREAD_RWLOCK_UNLOCK_PRE, /* pth_rwlk_t* */ _VG_USERREQ__HG_POSIX_SEM_POST_PRE, /* sem_t* */ _VG_USERREQ__HG_POSIX_SEM_POST_POST, /* sem_t* */ _VG_USERREQ__HG_POSIX_SEM_WAIT_PRE, /* sem_t* */ _VG_USERREQ__HG_POSIX_SEM_WAIT_POST, /* sem_t*, long tookLock */ _VG_USERREQ__HG_PTHREAD_COND_SIGNAL_POST, /* pth_cond_t* */ _VG_USERREQ__HG_PTHREAD_COND_BROADCAST_POST,/* pth_cond_t* */ 
_VG_USERREQ__HG_RTLD_BIND_GUARD, /* int flags */ _VG_USERREQ__HG_RTLD_BIND_CLEAR, /* int flags */ _VG_USERREQ__HG_GNAT_DEPENDENT_MASTER_JOIN /* void*d, void*m */ } Vg_TCheckClientRequest; /*----------------------------------------------------------------*/ /*--- ---*/ /*--- Implementation-only facilities. Not for end-user use. ---*/ /*--- For end-user facilities see below (the next section in ---*/ /*--- this file.) ---*/ /*--- ---*/ /*----------------------------------------------------------------*/ /* Do a client request. These are macros rather than a functions so as to avoid having an extra frame in stack traces. NB: these duplicate definitions in hg_intercepts.c. But here, we have to make do with weaker typing (no definition of Word etc) and no assertions, whereas in helgrind.h we can use those facilities. Obviously it's important the two sets of definitions are kept in sync. The commented-out asserts should actually hold, but unfortunately they can't be allowed to be visible here, because that would require the end-user code to #include . */ #define DO_CREQ_v_W(_creqF, _ty1F,_arg1F) \ do { \ long int _arg1; \ /* assert(sizeof(_ty1F) == sizeof(long int)); */ \ _arg1 = (long int)(_arg1F); \ VALGRIND_DO_CLIENT_REQUEST_STMT( \ (_creqF), \ _arg1, 0,0,0,0); \ } while (0) #define DO_CREQ_W_W(_resF, _dfltF, _creqF, _ty1F,_arg1F) \ do { \ long int _arg1; \ /* assert(sizeof(_ty1F) == sizeof(long int)); */ \ _arg1 = (long int)(_arg1F); \ _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR( \ (_dfltF), \ (_creqF), \ _arg1, 0,0,0,0); \ _resF = _qzz_res; \ } while (0) #define DO_CREQ_v_WW(_creqF, _ty1F,_arg1F, _ty2F,_arg2F) \ do { \ long int _arg1, _arg2; \ /* assert(sizeof(_ty1F) == sizeof(long int)); */ \ /* assert(sizeof(_ty2F) == sizeof(long int)); */ \ _arg1 = (long int)(_arg1F); \ _arg2 = (long int)(_arg2F); \ VALGRIND_DO_CLIENT_REQUEST_STMT( \ (_creqF), \ _arg1,_arg2,0,0,0); \ } while (0) #define DO_CREQ_v_WWW(_creqF, _ty1F,_arg1F, \ _ty2F,_arg2F, _ty3F, _arg3F) \ do { \ long int _arg1, _arg2, _arg3; \ /* assert(sizeof(_ty1F) == sizeof(long int)); */ \ /* assert(sizeof(_ty2F) == sizeof(long int)); */ \ /* assert(sizeof(_ty3F) == sizeof(long int)); */ \ _arg1 = (long int)(_arg1F); \ _arg2 = (long int)(_arg2F); \ _arg3 = (long int)(_arg3F); \ VALGRIND_DO_CLIENT_REQUEST_STMT( \ (_creqF), \ _arg1,_arg2,_arg3,0,0); \ } while (0) #define DO_CREQ_W_WWW(_resF, _dfltF, _creqF, _ty1F,_arg1F, \ _ty2F,_arg2F, _ty3F, _arg3F) \ do { \ long int _qzz_res; \ long int _arg1, _arg2, _arg3; \ /* assert(sizeof(_ty1F) == sizeof(long int)); */ \ _arg1 = (long int)(_arg1F); \ _arg2 = (long int)(_arg2F); \ _arg3 = (long int)(_arg3F); \ _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR( \ (_dfltF), \ (_creqF), \ _arg1,_arg2,_arg3,0,0); \ _resF = _qzz_res; \ } while (0) #define _HG_CLIENTREQ_UNIMP(_qzz_str) \ DO_CREQ_v_W(_VG_USERREQ__HG_CLIENTREQ_UNIMP, \ (char*),(_qzz_str)) /*----------------------------------------------------------------*/ /*--- ---*/ /*--- Helgrind-native requests. These allow access to ---*/ /*--- the same set of annotation primitives that are used ---*/ /*--- to build the POSIX pthread wrappers. ---*/ /*--- ---*/ /*----------------------------------------------------------------*/ /* ---------------------------------------------------------- For describing ordinary mutexes (non-rwlocks). For rwlock descriptions see ANNOTATE_RWLOCK_* below. ---------------------------------------------------------- */ /* Notify here immediately after mutex creation. 
_mbRec == 0 for a non-recursive mutex, 1 for a recursive mutex. */ #define VALGRIND_HG_MUTEX_INIT_POST(_mutex, _mbRec) \ DO_CREQ_v_WW(_VG_USERREQ__HG_PTHREAD_MUTEX_INIT_POST, \ void*,(_mutex), long,(_mbRec)) /* Notify here immediately before mutex acquisition. _isTryLock == 0 for a normal acquisition, 1 for a "try" style acquisition. */ #define VALGRIND_HG_MUTEX_LOCK_PRE(_mutex, _isTryLock) \ DO_CREQ_v_WW(_VG_USERREQ__HG_PTHREAD_MUTEX_ACQUIRE_PRE, \ void*,(_mutex), long,(_isTryLock)) /* Notify here immediately after a successful mutex acquisition. */ #define VALGRIND_HG_MUTEX_LOCK_POST(_mutex) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_MUTEX_ACQUIRE_POST, \ void*,(_mutex)) /* Notify here immediately before a mutex release. */ #define VALGRIND_HG_MUTEX_UNLOCK_PRE(_mutex) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_MUTEX_UNLOCK_PRE, \ void*,(_mutex)) /* Notify here immediately after a mutex release. */ #define VALGRIND_HG_MUTEX_UNLOCK_POST(_mutex) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_MUTEX_UNLOCK_POST, \ void*,(_mutex)) /* Notify here immediately before mutex destruction. */ #define VALGRIND_HG_MUTEX_DESTROY_PRE(_mutex) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_MUTEX_DESTROY_PRE, \ void*,(_mutex)) /* ---------------------------------------------------------- For describing semaphores. ---------------------------------------------------------- */ /* Notify here immediately after semaphore creation. */ #define VALGRIND_HG_SEM_INIT_POST(_sem, _value) \ DO_CREQ_v_WW(_VG_USERREQ__HG_POSIX_SEM_INIT_POST, \ void*, (_sem), unsigned long, (_value)) /* Notify here immediately after a semaphore wait (an acquire-style operation) */ #define VALGRIND_HG_SEM_WAIT_POST(_sem) \ DO_CREQ_v_W(_VG_USERREQ__HG_POSIX_SEM_ACQUIRED, \ void*,(_sem)) /* Notify here immediately before semaphore post (a release-style operation) */ #define VALGRIND_HG_SEM_POST_PRE(_sem) \ DO_CREQ_v_W(_VG_USERREQ__HG_POSIX_SEM_RELEASED, \ void*,(_sem)) /* Notify here immediately before semaphore destruction. */ #define VALGRIND_HG_SEM_DESTROY_PRE(_sem) \ DO_CREQ_v_W(_VG_USERREQ__HG_POSIX_SEM_DESTROY_PRE, \ void*, (_sem)) /* ---------------------------------------------------------- For describing barriers. ---------------------------------------------------------- */ /* Notify here immediately before barrier creation. _count is the capacity. _resizable == 0 means the barrier may not be resized, 1 means it may be. */ #define VALGRIND_HG_BARRIER_INIT_PRE(_bar, _count, _resizable) \ DO_CREQ_v_WWW(_VG_USERREQ__HG_PTHREAD_BARRIER_INIT_PRE, \ void*,(_bar), \ unsigned long,(_count), \ unsigned long,(_resizable)) /* Notify here immediately before arrival at a barrier. */ #define VALGRIND_HG_BARRIER_WAIT_PRE(_bar) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_BARRIER_WAIT_PRE, \ void*,(_bar)) /* Notify here immediately before a resize (change of barrier capacity). If _newcount >= the existing capacity, then there is no change in the state of any threads waiting at the barrier. If _newcount < the existing capacity, and >= _newcount threads are currently waiting at the barrier, then this notification is considered to also have the effect of telling the checker that all waiting threads have now moved past the barrier. (I can't think of any other sane semantics.) */ #define VALGRIND_HG_BARRIER_RESIZE_PRE(_bar, _newcount) \ DO_CREQ_v_WW(_VG_USERREQ__HG_PTHREAD_BARRIER_RESIZE_PRE, \ void*,(_bar), \ unsigned long,(_newcount)) /* Notify here immediately before barrier destruction. 
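
   A minimal sketch of how the barrier annotations above fit together,
   for a hypothetical user-implemented barrier "bar" used by "nthreads"
   threads:

      VALGRIND_HG_BARRIER_INIT_PRE(&bar, nthreads, 0);
         immediately before the barrier is created (0 = not resizable);
      VALGRIND_HG_BARRIER_WAIT_PRE(&bar);
         immediately before each arrival at the barrier;
      VALGRIND_HG_BARRIER_DESTROY_PRE(&bar);
         immediately before the barrier is destroyed.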
*/ #define VALGRIND_HG_BARRIER_DESTROY_PRE(_bar) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_BARRIER_DESTROY_PRE, \ void*,(_bar)) /* ---------------------------------------------------------- For describing memory ownership changes. ---------------------------------------------------------- */ /* Clean memory state. This makes Helgrind forget everything it knew about the specified memory range. Effectively this announces that the specified memory range now "belongs" to the calling thread, so that: (1) the calling thread can access it safely without synchronisation, and (2) all other threads must sync with this one to access it safely. This is particularly useful for memory allocators that wish to recycle memory. */ #define VALGRIND_HG_CLEAN_MEMORY(_qzz_start, _qzz_len) \ DO_CREQ_v_WW(VG_USERREQ__HG_CLEAN_MEMORY, \ void*,(_qzz_start), \ unsigned long,(_qzz_len)) /* The same, but for the heap block starting at _qzz_blockstart. This allows painting when we only know the address of an object, but not its size, which is sometimes the case in C++ code involving inheritance, and in which RTTI is not, for whatever reason, available. Returns the number of bytes painted, which can be zero for a zero-sized block. Hence, return values >= 0 indicate success (the block was found), and the value -1 indicates block not found, and -2 is returned when not running on Helgrind. */ #define VALGRIND_HG_CLEAN_MEMORY_HEAPBLOCK(_qzz_blockstart) \ (__extension__ \ ({long int _npainted; \ DO_CREQ_W_W(_npainted, (-2)/*default*/, \ _VG_USERREQ__HG_CLEAN_MEMORY_HEAPBLOCK, \ void*,(_qzz_blockstart)); \ _npainted; \ })) /* ---------------------------------------------------------- For error control. ---------------------------------------------------------- */ /* Tell H that an address range is not to be "tracked" until further notice. This puts it in the NOACCESS state, in which case we ignore all reads and writes to it. Useful for ignoring ranges of memory where there might be races we don't want to see. If the memory is subsequently reallocated via malloc/new/stack allocation, then it is put back in the trackable state. Hence it is safe in the situation where checking is disabled, the containing area is deallocated and later reallocated for some other purpose. */ #define VALGRIND_HG_DISABLE_CHECKING(_qzz_start, _qzz_len) \ DO_CREQ_v_WW(_VG_USERREQ__HG_ARANGE_MAKE_UNTRACKED, \ void*,(_qzz_start), \ unsigned long,(_qzz_len)) /* And put it back into the normal "tracked" state, that is, make it once again subject to the normal race-checking machinery. This puts it in the same state as new memory allocated by this thread -- that is, basically owned exclusively by this thread. */ #define VALGRIND_HG_ENABLE_CHECKING(_qzz_start, _qzz_len) \ DO_CREQ_v_WW(_VG_USERREQ__HG_ARANGE_MAKE_TRACKED, \ void*,(_qzz_start), \ unsigned long,(_qzz_len)) /* Checks the accessibility bits for addresses [zza..zza+zznbytes-1]. If zzabits array is provided, copy the accessibility bits in zzabits. Return values: -2 if not running on helgrind -1 if any parts of zzabits is not addressable >= 0 : success. When success, it returns the nr of addressable bytes found. So, to check that a whole range is addressable, check VALGRIND_HG_GET_ABITS(addr,NULL,len) == len In addition, if you want to examine the addressability of each byte of the range, you need to provide a non NULL ptr as second argument, pointing to an array of unsigned char of length len. Addressable bytes are indicated with 0xff. Non-addressable bytes are indicated with 0x00. 
*/ #define VALGRIND_HG_GET_ABITS(zza,zzabits,zznbytes) \ (__extension__ \ ({long int _res; \ DO_CREQ_W_WWW(_res, (-2)/*default*/, \ _VG_USERREQ__HG_GET_ABITS, \ void*,(zza), void*,(zzabits), \ unsigned long,(zznbytes)); \ _res; \ })) /* End-user request for Ada applications compiled with GNAT. Helgrind understands the Ada concept of Ada task dependencies and terminations. See Ada Reference Manual section 9.3 "Task Dependence - Termination of Tasks". However, in some cases, the master of (terminated) tasks completes only when the application exits. An example of this is dynamically allocated tasks with an access type defined at Library Level. By default, the state of such tasks in Helgrind will be 'exited but join not done yet'. Many tasks in such a state are however causing Helgrind CPU and memory to increase significantly. VALGRIND_HG_GNAT_DEPENDENT_MASTER_JOIN can be used to indicate to Helgrind that a not yet completed master has however already 'seen' the termination of a dependent : this is conceptually the same as a pthread_join and causes the cleanup of the dependent as done by Helgrind when a master completes. This allows to avoid the overhead in helgrind caused by such tasks. A typical usage for a master to indicate it has done conceptually a join with a dependent task before the master completes is: while not Dep_Task'Terminated loop ... do whatever to wait for Dep_Task termination. end loop; VALGRIND_HG_GNAT_DEPENDENT_MASTER_JOIN (Dep_Task'Identity, Ada.Task_Identification.Current_Task); Note that VALGRIND_HG_GNAT_DEPENDENT_MASTER_JOIN should be a binding to a C function built with the below macro. */ #define VALGRIND_HG_GNAT_DEPENDENT_MASTER_JOIN(_qzz_dep, _qzz_master) \ DO_CREQ_v_WW(_VG_USERREQ__HG_GNAT_DEPENDENT_MASTER_JOIN, \ void*,(_qzz_dep), \ void*,(_qzz_master)) /*----------------------------------------------------------------*/ /*--- ---*/ /*--- ThreadSanitizer-compatible requests ---*/ /*--- (mostly unimplemented) ---*/ /*--- ---*/ /*----------------------------------------------------------------*/ /* A quite-broad set of annotations, as used in the ThreadSanitizer project. This implementation aims to be a (source-level) compatible implementation of the macros defined in: http://code.google.com/p/data-race-test/source /browse/trunk/dynamic_annotations/dynamic_annotations.h (some of the comments below are taken from the above file) The implementation here is very incomplete, and intended as a starting point. Many of the macros are unimplemented. Rather than allowing unimplemented macros to silently do nothing, they cause an assertion. Intention is to implement them on demand. The major use of these macros is to make visible to race detectors, the behaviour (effects) of user-implemented synchronisation primitives, that the detectors could not otherwise deduce from the normal observation of pthread etc calls. Some of the macros are no-ops in Helgrind. That's because Helgrind is a pure happens-before detector, whereas ThreadSanitizer uses a hybrid lockset and happens-before scheme, which requires more accurate annotations for correct operation. The macros are listed in the same order as in dynamic_annotations.h (URL just above). I should point out that I am less than clear about the intended semantics of quite a number of them. Comments and clarifications welcomed! */ /* ---------------------------------------------------------------- These four allow description of user-level condition variables, apparently in the style of POSIX's pthread_cond_t. 
Currently unimplemented and will assert. ---------------------------------------------------------------- */ /* Report that wait on the condition variable at address CV has succeeded and the lock at address LOCK is now held. CV and LOCK are completely arbitrary memory addresses which presumably mean something to the application, but are meaningless to Helgrind. */ #define ANNOTATE_CONDVAR_LOCK_WAIT(cv, lock) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_CONDVAR_LOCK_WAIT") /* Report that wait on the condition variable at CV has succeeded. Variant w/o lock. */ #define ANNOTATE_CONDVAR_WAIT(cv) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_CONDVAR_WAIT") /* Report that we are about to signal on the condition variable at address CV. */ #define ANNOTATE_CONDVAR_SIGNAL(cv) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_CONDVAR_SIGNAL") /* Report that we are about to signal_all on the condition variable at CV. */ #define ANNOTATE_CONDVAR_SIGNAL_ALL(cv) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_CONDVAR_SIGNAL_ALL") /* ---------------------------------------------------------------- Create completely arbitrary happens-before edges between threads. If threads T1 .. Tn all do ANNOTATE_HAPPENS_BEFORE(obj) and later (w.r.t. some notional global clock for the computation) thread Tm does ANNOTATE_HAPPENS_AFTER(obj), then Helgrind will regard all memory accesses done by T1 .. Tn before the ..BEFORE.. call as happening-before all memory accesses done by Tm after the ..AFTER.. call. Hence Helgrind won't complain about races if Tm's accesses afterwards are to the same locations as accesses before by any of T1 .. Tn. OBJ is a machine word (unsigned long, or void*), is completely arbitrary, and denotes the identity of some synchronisation object you're modelling. You must do the _BEFORE call just before the real sync event on the signaller's side, and _AFTER just after the real sync event on the waiter's side. If none of the rest of these macros make sense to you, at least take the time to understand these two. They form the very essence of describing arbitrary inter-thread synchronisation events to Helgrind. You can get a long way just with them alone. See also, extensive discussion on semantics of this in https://bugs.kde.org/show_bug.cgi?id=243935 ANNOTATE_HAPPENS_BEFORE_FORGET_ALL(obj) is interim until such time as bug 243935 is fully resolved. It instructs Helgrind to forget about any ANNOTATE_HAPPENS_BEFORE calls on the specified object, in effect putting it back in its original state. Once in that state, a use of ANNOTATE_HAPPENS_AFTER on it has no effect on the calling thread. An implementation may optionally release resources it has associated with 'obj' when ANNOTATE_HAPPENS_BEFORE_FORGET_ALL(obj) happens. Users are recommended to use ANNOTATE_HAPPENS_BEFORE_FORGET_ALL to indicate when a synchronisation object is no longer needed, so as to avoid potential indefinite resource leaks. ---------------------------------------------------------------- */ #define ANNOTATE_HAPPENS_BEFORE(obj) \ DO_CREQ_v_W(_VG_USERREQ__HG_USERSO_SEND_PRE, void*,(obj)) #define ANNOTATE_HAPPENS_AFTER(obj) \ DO_CREQ_v_W(_VG_USERREQ__HG_USERSO_RECV_POST, void*,(obj)) #define ANNOTATE_HAPPENS_BEFORE_FORGET_ALL(obj) \ DO_CREQ_v_W(_VG_USERREQ__HG_USERSO_FORGET_ALL, void*,(obj)) /* ---------------------------------------------------------------- Memory publishing. The TSan sources say: Report that the bytes in the range [pointer, pointer+size) are about to be published safely. 
The race checker will create a happens-before arc from the call ANNOTATE_PUBLISH_MEMORY_RANGE(pointer, size) to subsequent accesses to this memory. I'm not sure I understand what this means exactly, nor whether it is relevant for a pure h-b detector. Leaving unimplemented for now. ---------------------------------------------------------------- */ #define ANNOTATE_PUBLISH_MEMORY_RANGE(pointer, size) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_PUBLISH_MEMORY_RANGE") /* DEPRECATED. Don't use it. */ /* #define ANNOTATE_UNPUBLISH_MEMORY_RANGE(pointer, size) */ /* DEPRECATED. Don't use it. */ /* #define ANNOTATE_SWAP_MEMORY_RANGE(pointer, size) */ /* ---------------------------------------------------------------- TSan sources say: Instruct the tool to create a happens-before arc between MU->Unlock() and MU->Lock(). This annotation may slow down the race detector; normally it is used only when it would be difficult to annotate each of the mutex's critical sections individually using the annotations above. If MU is a posix pthread_mutex_t then Helgrind will do this anyway. In any case, leave as unimp for now. I'm unsure about the intended behaviour. ---------------------------------------------------------------- */ #define ANNOTATE_PURE_HAPPENS_BEFORE_MUTEX(mu) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_PURE_HAPPENS_BEFORE_MUTEX") /* Deprecated. Use ANNOTATE_PURE_HAPPENS_BEFORE_MUTEX. */ /* #define ANNOTATE_MUTEX_IS_USED_AS_CONDVAR(mu) */ /* ---------------------------------------------------------------- TSan sources say: Annotations useful when defining memory allocators, or when memory that was protected in one way starts to be protected in another. Report that a new memory at "address" of size "size" has been allocated. This might be used when the memory has been retrieved from a free list and is about to be reused, or when a the locking discipline for a variable changes. AFAICS this is the same as VALGRIND_HG_CLEAN_MEMORY. ---------------------------------------------------------------- */ #define ANNOTATE_NEW_MEMORY(address, size) \ VALGRIND_HG_CLEAN_MEMORY((address), (size)) /* ---------------------------------------------------------------- TSan sources say: Annotations useful when defining FIFO queues that transfer data between threads. All unimplemented. Am not claiming to understand this (yet). ---------------------------------------------------------------- */ /* Report that the producer-consumer queue object at address PCQ has been created. The ANNOTATE_PCQ_* annotations should be used only for FIFO queues. For non-FIFO queues use ANNOTATE_HAPPENS_BEFORE (for put) and ANNOTATE_HAPPENS_AFTER (for get). */ #define ANNOTATE_PCQ_CREATE(pcq) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_PCQ_CREATE") /* Report that the queue at address PCQ is about to be destroyed. */ #define ANNOTATE_PCQ_DESTROY(pcq) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_PCQ_DESTROY") /* Report that we are about to put an element into a FIFO queue at address PCQ. */ #define ANNOTATE_PCQ_PUT(pcq) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_PCQ_PUT") /* Report that we've just got an element from a FIFO queue at address PCQ. */ #define ANNOTATE_PCQ_GET(pcq) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_PCQ_GET") /* ---------------------------------------------------------------- Annotations that suppress errors. It is usually better to express the program's synchronization using the other annotations, but these can be used when all else fails. Currently these are all unimplemented. I can't think of a simple way to implement them without at least some performance overhead. 
---------------------------------------------------------------- */ /* Report that we may have a benign race at "pointer", with size "sizeof(*(pointer))". "pointer" must be a non-void* pointer. Insert at the point where "pointer" has been allocated, preferably close to the point where the race happens. See also ANNOTATE_BENIGN_RACE_STATIC. XXX: what's this actually supposed to do? And what's the type of DESCRIPTION? When does the annotation stop having an effect? */ #define ANNOTATE_BENIGN_RACE(pointer, description) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_BENIGN_RACE") /* Same as ANNOTATE_BENIGN_RACE(address, description), but applies to the memory range [address, address+size). */ #define ANNOTATE_BENIGN_RACE_SIZED(address, size, description) \ VALGRIND_HG_DISABLE_CHECKING(address, size) /* Request the analysis tool to ignore all reads in the current thread until ANNOTATE_IGNORE_READS_END is called. Useful to ignore intentional racey reads, while still checking other reads and all writes. */ #define ANNOTATE_IGNORE_READS_BEGIN() \ _HG_CLIENTREQ_UNIMP("ANNOTATE_IGNORE_READS_BEGIN") /* Stop ignoring reads. */ #define ANNOTATE_IGNORE_READS_END() \ _HG_CLIENTREQ_UNIMP("ANNOTATE_IGNORE_READS_END") /* Similar to ANNOTATE_IGNORE_READS_BEGIN, but ignore writes. */ #define ANNOTATE_IGNORE_WRITES_BEGIN() \ _HG_CLIENTREQ_UNIMP("ANNOTATE_IGNORE_WRITES_BEGIN") /* Stop ignoring writes. */ #define ANNOTATE_IGNORE_WRITES_END() \ _HG_CLIENTREQ_UNIMP("ANNOTATE_IGNORE_WRITES_END") /* Start ignoring all memory accesses (reads and writes). */ #define ANNOTATE_IGNORE_READS_AND_WRITES_BEGIN() \ do { \ ANNOTATE_IGNORE_READS_BEGIN(); \ ANNOTATE_IGNORE_WRITES_BEGIN(); \ } while (0) /* Stop ignoring all memory accesses. */ #define ANNOTATE_IGNORE_READS_AND_WRITES_END() \ do { \ ANNOTATE_IGNORE_WRITES_END(); \ ANNOTATE_IGNORE_READS_END(); \ } while (0) /* ---------------------------------------------------------------- Annotations useful for debugging. Again, so for unimplemented, partly for performance reasons. ---------------------------------------------------------------- */ /* Request to trace every access to ADDRESS. */ #define ANNOTATE_TRACE_MEMORY(address) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_TRACE_MEMORY") /* Report the current thread name to a race detector. */ #define ANNOTATE_THREAD_NAME(name) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_THREAD_NAME") /* ---------------------------------------------------------------- Annotations for describing behaviour of user-implemented lock primitives. In all cases, the LOCK argument is a completely arbitrary machine word (unsigned long, or void*) and can be any value which gives a unique identity to the lock objects being modelled. We just pretend they're ordinary posix rwlocks. That'll probably give some rather confusing wording in error messages, claiming that the arbitrary LOCK values are pthread_rwlock_t*'s, when in fact they are not. Ah well. ---------------------------------------------------------------- */ /* Report that a lock has just been created at address LOCK. */ #define ANNOTATE_RWLOCK_CREATE(lock) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_RWLOCK_INIT_POST, \ void*,(lock)) /* Report that the lock at address LOCK is about to be destroyed. */ #define ANNOTATE_RWLOCK_DESTROY(lock) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_RWLOCK_DESTROY_PRE, \ void*,(lock)) /* Report that the lock at address LOCK has just been acquired. is_w=1 for writer lock, is_w=0 for reader lock. 
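
   A minimal sketch of the full annotation sequence for a hypothetical
   user-level lock "lk" taken as a writer lock:

      ANNOTATE_RWLOCK_CREATE(&lk);        just after the lock is created
      ANNOTATE_RWLOCK_ACQUIRED(&lk, 1);   just after it is acquired
      ANNOTATE_RWLOCK_RELEASED(&lk, 1);   just before it is released
      ANNOTATE_RWLOCK_DESTROY(&lk);       just before it is destroyed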
*/ #define ANNOTATE_RWLOCK_ACQUIRED(lock, is_w) \ DO_CREQ_v_WW(_VG_USERREQ__HG_PTHREAD_RWLOCK_ACQUIRED, \ void*,(lock), unsigned long,(is_w)) /* Report that the lock at address LOCK is about to be released. */ #define ANNOTATE_RWLOCK_RELEASED(lock, is_w) \ DO_CREQ_v_W(_VG_USERREQ__HG_PTHREAD_RWLOCK_RELEASED, \ void*,(lock)) /* is_w is ignored */ /* ------------------------------------------------------------- Annotations useful when implementing barriers. They are not normally needed by modules that merely use barriers. The "barrier" argument is a pointer to the barrier object. ---------------------------------------------------------------- */ /* Report that the "barrier" has been initialized with initial "count". If 'reinitialization_allowed' is true, initialization is allowed to happen multiple times w/o calling barrier_destroy() */ #define ANNOTATE_BARRIER_INIT(barrier, count, reinitialization_allowed) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_BARRIER_INIT") /* Report that we are about to enter barrier_wait("barrier"). */ #define ANNOTATE_BARRIER_WAIT_BEFORE(barrier) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_BARRIER_DESTROY") /* Report that we just exited barrier_wait("barrier"). */ #define ANNOTATE_BARRIER_WAIT_AFTER(barrier) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_BARRIER_DESTROY") /* Report that the "barrier" has been destroyed. */ #define ANNOTATE_BARRIER_DESTROY(barrier) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_BARRIER_DESTROY") /* ---------------------------------------------------------------- Annotations useful for testing race detectors. ---------------------------------------------------------------- */ /* Report that we expect a race on the variable at ADDRESS. Use only in unit tests for a race detector. */ #define ANNOTATE_EXPECT_RACE(address, description) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_EXPECT_RACE") /* A no-op. Insert where you like to test the interceptors. */ #define ANNOTATE_NO_OP(arg) \ _HG_CLIENTREQ_UNIMP("ANNOTATE_NO_OP") /* Force the race detector to flush its state. The actual effect depends on * the implementation of the detector. */ #define ANNOTATE_FLUSH_STATE() \ _HG_CLIENTREQ_UNIMP("ANNOTATE_FLUSH_STATE") #endif /* __HELGRIND_H */ vmem-1.8/src/common/valgrind/memcheck.h000066400000000000000000000364051361505074100201520ustar00rootroot00000000000000 /* ---------------------------------------------------------------- Notice that the following BSD-style license applies to this one file (memcheck.h) only. The rest of Valgrind is licensed under the terms of the GNU General Public License, version 2, unless otherwise indicated. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- This file is part of MemCheck, a heavyweight Valgrind tool for detecting memory errors. Copyright (C) 2000-2017 Julian Seward. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 3. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 4. 
The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ---------------------------------------------------------------- Notice that the above BSD-style license applies to this one file (memcheck.h) only. The entire rest of Valgrind is licensed under the terms of the GNU General Public License, version 2. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- */ #ifndef __MEMCHECK_H #define __MEMCHECK_H /* This file is for inclusion into client (your!) code. You can use these macros to manipulate and query memory permissions inside your own programs. See comment near the top of valgrind.h on how to use them. */ #include "valgrind.h" /* !! ABIWARNING !! ABIWARNING !! ABIWARNING !! ABIWARNING !! This enum comprises an ABI exported by Valgrind to programs which use client requests. DO NOT CHANGE THE ORDER OF THESE ENTRIES, NOR DELETE ANY -- add new ones at the end. */ typedef enum { VG_USERREQ__MAKE_MEM_NOACCESS = VG_USERREQ_TOOL_BASE('M','C'), VG_USERREQ__MAKE_MEM_UNDEFINED, VG_USERREQ__MAKE_MEM_DEFINED, VG_USERREQ__DISCARD, VG_USERREQ__CHECK_MEM_IS_ADDRESSABLE, VG_USERREQ__CHECK_MEM_IS_DEFINED, VG_USERREQ__DO_LEAK_CHECK, VG_USERREQ__COUNT_LEAKS, VG_USERREQ__GET_VBITS, VG_USERREQ__SET_VBITS, VG_USERREQ__CREATE_BLOCK, VG_USERREQ__MAKE_MEM_DEFINED_IF_ADDRESSABLE, /* Not next to VG_USERREQ__COUNT_LEAKS because it was added later. */ VG_USERREQ__COUNT_LEAK_BLOCKS, VG_USERREQ__ENABLE_ADDR_ERROR_REPORTING_IN_RANGE, VG_USERREQ__DISABLE_ADDR_ERROR_REPORTING_IN_RANGE, VG_USERREQ__CHECK_MEM_IS_UNADDRESSABLE, VG_USERREQ__CHECK_MEM_IS_UNDEFINED, /* This is just for memcheck's internal use - don't use it */ _VG_USERREQ__MEMCHECK_RECORD_OVERLAP_ERROR = VG_USERREQ_TOOL_BASE('M','C') + 256 } Vg_MemCheckClientRequest; /* Client-code macros to manipulate the state of memory. */ /* Mark memory at _qzz_addr as unaddressable for _qzz_len bytes. */ #define VALGRIND_MAKE_MEM_NOACCESS(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__MAKE_MEM_NOACCESS, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Similarly, mark memory at _qzz_addr as addressable but undefined for _qzz_len bytes. */ #define VALGRIND_MAKE_MEM_UNDEFINED(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__MAKE_MEM_UNDEFINED, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Similarly, mark memory at _qzz_addr as addressable and defined for _qzz_len bytes. 
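   Illustrative usage sketch (editorial addition, not part of the upstream
   header): a custom pool allocator might drive the three MAKE_MEM_* states
   as a block changes hands. my_pool_alloc, my_pool_free and my_device_read
   are hypothetical names used only for this example.

      void *p = my_pool_alloc(pool, 64);
      VALGRIND_MAKE_MEM_UNDEFINED(p, 64);   // addressable, contents unknown
      my_device_read(p, 64);                // filled outside memcheck's view
      VALGRIND_MAKE_MEM_DEFINED(p, 64);     // tell memcheck it is initialized
      ...
      my_pool_free(pool, p);
      VALGRIND_MAKE_MEM_NOACCESS(p, 64);    // catch later use-after-free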
*/ #define VALGRIND_MAKE_MEM_DEFINED(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__MAKE_MEM_DEFINED, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Similar to VALGRIND_MAKE_MEM_DEFINED except that addressability is not altered: bytes which are addressable are marked as defined, but those which are not addressable are left unchanged. */ #define VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__MAKE_MEM_DEFINED_IF_ADDRESSABLE, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Create a block-description handle. The description is an ascii string which is included in any messages pertaining to addresses within the specified memory range. Has no other effect on the properties of the memory range. */ #define VALGRIND_CREATE_BLOCK(_qzz_addr,_qzz_len, _qzz_desc) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__CREATE_BLOCK, \ (_qzz_addr), (_qzz_len), (_qzz_desc), \ 0, 0) /* Discard a block-description-handle. Returns 1 for an invalid handle, 0 for a valid handle. */ #define VALGRIND_DISCARD(_qzz_blkindex) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__DISCARD, \ 0, (_qzz_blkindex), 0, 0, 0) /* Client-code macros to check the state of memory. */ /* Check that memory at _qzz_addr is addressable for _qzz_len bytes. If suitable addressibility is not established, Valgrind prints an error message and returns the address of the first offending byte. Otherwise it returns zero. */ #define VALGRIND_CHECK_MEM_IS_ADDRESSABLE(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__CHECK_MEM_IS_ADDRESSABLE, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Check that memory at _qzz_addr is addressable and defined for _qzz_len bytes. If suitable addressibility and definedness are not established, Valgrind prints an error message and returns the address of the first offending byte. Otherwise it returns zero. */ #define VALGRIND_CHECK_MEM_IS_DEFINED(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__CHECK_MEM_IS_DEFINED, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Use this macro to force the definedness and addressibility of an lvalue to be checked. If suitable addressibility and definedness are not established, Valgrind prints an error message and returns the address of the first offending byte. Otherwise it returns zero. */ #define VALGRIND_CHECK_VALUE_IS_DEFINED(__lvalue) \ VALGRIND_CHECK_MEM_IS_DEFINED( \ (volatile unsigned char *)&(__lvalue), \ (unsigned long)(sizeof (__lvalue))) /* Check that memory at _qzz_addr is unaddressable for _qzz_len bytes. If any byte in this range is addressable, Valgrind returns the address of the first offending byte. Otherwise it returns zero. */ #define VALGRIND_CHECK_MEM_IS_UNADDRESSABLE(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__CHECK_MEM_IS_UNADDRESSABLE,\ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Check that memory at _qzz_addr is undefined for _qzz_len bytes. If any byte in this range is defined or unaddressable, Valgrind returns the address of the first offending byte. Otherwise it returns zero. */ #define VALGRIND_CHECK_MEM_IS_UNDEFINED(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__CHECK_MEM_IS_UNDEFINED, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /* Do a full memory leak check (like --leak-check=full) mid-execution. 
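   Illustrative usage sketch (editorial addition, not part of the upstream
   header): a long-running program might trigger a leak check at a known
   idle point and read the counters back with VALGRIND_COUNT_LEAKS (defined
   a few macros below). handle_idle() is a hypothetical hook used only here.

      static void handle_idle(void)
      {
         unsigned long leaked, dubious, reachable, suppressed;
         VALGRIND_DO_LEAK_CHECK;
         VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed);
         if (leaked != 0)
            fprintf(stderr, "leaked %lu bytes so far\n", leaked);
      }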
*/ #define VALGRIND_DO_LEAK_CHECK \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DO_LEAK_CHECK, \ 0, 0, 0, 0, 0) /* Same as VALGRIND_DO_LEAK_CHECK but only showing the entries for which there was an increase in leaked bytes or leaked nr of blocks since the previous leak search. */ #define VALGRIND_DO_ADDED_LEAK_CHECK \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DO_LEAK_CHECK, \ 0, 1, 0, 0, 0) /* Same as VALGRIND_DO_ADDED_LEAK_CHECK but showing entries with increased or decreased leaked bytes/blocks since previous leak search. */ #define VALGRIND_DO_CHANGED_LEAK_CHECK \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DO_LEAK_CHECK, \ 0, 2, 0, 0, 0) /* Do a summary memory leak check (like --leak-check=summary) mid-execution. */ #define VALGRIND_DO_QUICK_LEAK_CHECK \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DO_LEAK_CHECK, \ 1, 0, 0, 0, 0) /* Return number of leaked, dubious, reachable and suppressed bytes found by all previous leak checks. They must be lvalues. */ #define VALGRIND_COUNT_LEAKS(leaked, dubious, reachable, suppressed) \ /* For safety on 64-bit platforms we assign the results to private unsigned long variables, then assign these to the lvalues the user specified, which works no matter what type 'leaked', 'dubious', etc are. We also initialise '_qzz_leaked', etc because VG_USERREQ__COUNT_LEAKS doesn't mark the values returned as defined. */ \ { \ unsigned long _qzz_leaked = 0, _qzz_dubious = 0; \ unsigned long _qzz_reachable = 0, _qzz_suppressed = 0; \ VALGRIND_DO_CLIENT_REQUEST_STMT( \ VG_USERREQ__COUNT_LEAKS, \ &_qzz_leaked, &_qzz_dubious, \ &_qzz_reachable, &_qzz_suppressed, 0); \ leaked = _qzz_leaked; \ dubious = _qzz_dubious; \ reachable = _qzz_reachable; \ suppressed = _qzz_suppressed; \ } /* Return number of leaked, dubious, reachable and suppressed bytes found by all previous leak checks. They must be lvalues. */ #define VALGRIND_COUNT_LEAK_BLOCKS(leaked, dubious, reachable, suppressed) \ /* For safety on 64-bit platforms we assign the results to private unsigned long variables, then assign these to the lvalues the user specified, which works no matter what type 'leaked', 'dubious', etc are. We also initialise '_qzz_leaked', etc because VG_USERREQ__COUNT_LEAKS doesn't mark the values returned as defined. */ \ { \ unsigned long _qzz_leaked = 0, _qzz_dubious = 0; \ unsigned long _qzz_reachable = 0, _qzz_suppressed = 0; \ VALGRIND_DO_CLIENT_REQUEST_STMT( \ VG_USERREQ__COUNT_LEAK_BLOCKS, \ &_qzz_leaked, &_qzz_dubious, \ &_qzz_reachable, &_qzz_suppressed, 0); \ leaked = _qzz_leaked; \ dubious = _qzz_dubious; \ reachable = _qzz_reachable; \ suppressed = _qzz_suppressed; \ } /* Get the validity data for addresses [zza..zza+zznbytes-1] and copy it into the provided zzvbits array. Return values: 0 if not running on valgrind 1 success 2 [previously indicated unaligned arrays; these are now allowed] 3 if any parts of zzsrc/zzvbits are not addressable. The metadata is not copied in cases 0, 2 or 3 so it should be impossible to segfault your system by using this call. */ #define VALGRIND_GET_VBITS(zza,zzvbits,zznbytes) \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__GET_VBITS, \ (const char*)(zza), \ (char*)(zzvbits), \ (zznbytes), 0, 0) /* Set the validity data for addresses [zza..zza+zznbytes-1], copying it from the provided zzvbits array. Return values: 0 if not running on valgrind 1 success 2 [previously indicated unaligned arrays; these are now allowed] 3 if any parts of zza/zzvbits are not addressable. 
The metadata is not copied in cases 0, 2 or 3 so it should be impossible to segfault your system by using this call. */ #define VALGRIND_SET_VBITS(zza,zzvbits,zznbytes) \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__SET_VBITS, \ (const char*)(zza), \ (const char*)(zzvbits), \ (zznbytes), 0, 0 ) /* Disable and re-enable reporting of addressing errors in the specified address range. */ #define VALGRIND_DISABLE_ADDR_ERROR_REPORTING_IN_RANGE(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__DISABLE_ADDR_ERROR_REPORTING_IN_RANGE, \ (_qzz_addr), (_qzz_len), 0, 0, 0) #define VALGRIND_ENABLE_ADDR_ERROR_REPORTING_IN_RANGE(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__ENABLE_ADDR_ERROR_REPORTING_IN_RANGE, \ (_qzz_addr), (_qzz_len), 0, 0, 0) #endif vmem-1.8/src/common/valgrind/pmemcheck.h000066400000000000000000000245511361505074100203310ustar00rootroot00000000000000/* * Copyright (c) 2014-2015, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of Intel Corporation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef __PMEMCHECK_H #define __PMEMCHECK_H /* This file is for inclusion into client (your!) code. You can use these macros to manipulate and query memory permissions inside your own programs. See comment near the top of valgrind.h on how to use them. */ #include "valgrind.h" /* !! ABIWARNING !! ABIWARNING !! ABIWARNING !! ABIWARNING !! This enum comprises an ABI exported by Valgrind to programs which use client requests. DO NOT CHANGE THE ORDER OF THESE ENTRIES, NOR DELETE ANY -- add new ones at the end. */ typedef enum { VG_USERREQ__PMC_REGISTER_PMEM_MAPPING = VG_USERREQ_TOOL_BASE('P','C'), VG_USERREQ__PMC_REGISTER_PMEM_FILE, VG_USERREQ__PMC_REMOVE_PMEM_MAPPING, VG_USERREQ__PMC_CHECK_IS_PMEM_MAPPING, VG_USERREQ__PMC_PRINT_PMEM_MAPPINGS, VG_USERREQ__PMC_DO_FLUSH, VG_USERREQ__PMC_DO_FENCE, VG_USERREQ__PMC_RESERVED1, /* Do not use. */ VG_USERREQ__PMC_WRITE_STATS, VG_USERREQ__PMC_RESERVED2, /* Do not use. */ VG_USERREQ__PMC_RESERVED3, /* Do not use. */ VG_USERREQ__PMC_RESERVED4, /* Do not use. */ VG_USERREQ__PMC_RESERVED5, /* Do not use. 
*/ VG_USERREQ__PMC_RESERVED7, /* Do not use. */ VG_USERREQ__PMC_RESERVED8, /* Do not use. */ VG_USERREQ__PMC_RESERVED9, /* Do not use. */ VG_USERREQ__PMC_RESERVED10, /* Do not use. */ VG_USERREQ__PMC_SET_CLEAN, /* transaction support */ VG_USERREQ__PMC_START_TX, VG_USERREQ__PMC_START_TX_N, VG_USERREQ__PMC_END_TX, VG_USERREQ__PMC_END_TX_N, VG_USERREQ__PMC_ADD_TO_TX, VG_USERREQ__PMC_ADD_TO_TX_N, VG_USERREQ__PMC_REMOVE_FROM_TX, VG_USERREQ__PMC_REMOVE_FROM_TX_N, VG_USERREQ__PMC_ADD_THREAD_TO_TX_N, VG_USERREQ__PMC_REMOVE_THREAD_FROM_TX_N, VG_USERREQ__PMC_ADD_TO_GLOBAL_TX_IGNORE, VG_USERREQ__PMC_RESERVED6, /* Do not use. */ VG_USERREQ__PMC_EMIT_LOG, } Vg_PMemCheckClientRequest; /* Client-code macros to manipulate pmem mappings */ /** Register a persistent memory mapping region */ #define VALGRIND_PMC_REGISTER_PMEM_MAPPING(_qzz_addr, _qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_REGISTER_PMEM_MAPPING, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Register a persistent memory file */ #define VALGRIND_PMC_REGISTER_PMEM_FILE(_qzz_desc, _qzz_addr_base, \ _qzz_size, _qzz_offset) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_REGISTER_PMEM_FILE, \ (_qzz_desc), (_qzz_addr_base), (_qzz_size), \ (_qzz_offset), 0) /** Remove a persistent memory mapping region */ #define VALGRIND_PMC_REMOVE_PMEM_MAPPING(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_REMOVE_PMEM_MAPPING, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Check if the given range is a registered persistent memory mapping */ #define VALGRIND_PMC_CHECK_IS_PMEM_MAPPING(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_CHECK_IS_PMEM_MAPPING, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Register an SFENCE */ #define VALGRIND_PMC_PRINT_PMEM_MAPPINGS \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__PMC_PRINT_PMEM_MAPPINGS, \ 0, 0, 0, 0, 0) /** Register a CLFLUSH-like operation */ #define VALGRIND_PMC_DO_FLUSH(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_DO_FLUSH, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Register an SFENCE */ #define VALGRIND_PMC_DO_FENCE \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__PMC_DO_FENCE, \ 0, 0, 0, 0, 0) /** Write tool stats */ #define VALGRIND_PMC_WRITE_STATS \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__PMC_WRITE_STATS, \ 0, 0, 0, 0, 0) /** Emit user log */ #define VALGRIND_PMC_EMIT_LOG(_qzz_emit_log) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_EMIT_LOG, \ (_qzz_emit_log), 0, 0, 0, 0) /** Set a region of persistent memory as clean */ #define VALGRIND_PMC_SET_CLEAN(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_SET_CLEAN, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Support for transactions */ /** Start an implicit persistent memory transaction */ #define VALGRIND_PMC_START_TX \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__PMC_START_TX, \ 0, 0, 0, 0, 0) /** Start an explicit persistent memory transaction */ #define VALGRIND_PMC_START_TX_N(_qzz_txn) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_START_TX_N, \ (_qzz_txn), 0, 0, 0, 0) /** End an implicit persistent memory transaction */ #define VALGRIND_PMC_END_TX \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__PMC_END_TX, \ 0, 0, 0, 0, 0) /** End an explicit persistent memory transaction */ #define VALGRIND_PMC_END_TX_N(_qzz_txn) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ 
VG_USERREQ__PMC_END_TX_N, \ (_qzz_txn), 0, 0, 0, 0) /** Add a persistent memory region to the implicit transaction */ #define VALGRIND_PMC_ADD_TO_TX(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_ADD_TO_TX, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Add a persistent memory region to an explicit transaction */ #define VALGRIND_PMC_ADD_TO_TX_N(_qzz_txn,_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_ADD_TO_TX_N, \ (_qzz_txn), (_qzz_addr), (_qzz_len), 0, 0) /** Remove a persistent memory region from the implicit transaction */ #define VALGRIND_PMC_REMOVE_FROM_TX(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_REMOVE_FROM_TX, \ (_qzz_addr), (_qzz_len), 0, 0, 0) /** Remove a persistent memory region from an explicit transaction */ #define VALGRIND_PMC_REMOVE_FROM_TX_N(_qzz_txn,_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_REMOVE_FROM_TX_N, \ (_qzz_txn), (_qzz_addr), (_qzz_len), 0, 0) /** End an explicit persistent memory transaction */ #define VALGRIND_PMC_ADD_THREAD_TX_N(_qzz_txn) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_ADD_THREAD_TO_TX_N, \ (_qzz_txn), 0, 0, 0, 0) /** End an explicit persistent memory transaction */ #define VALGRIND_PMC_REMOVE_THREAD_FROM_TX_N(_qzz_txn) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__PMC_REMOVE_THREAD_FROM_TX_N, \ (_qzz_txn), 0, 0, 0, 0) /** Remove a persistent memory region from the implicit transaction */ #define VALGRIND_PMC_ADD_TO_GLOBAL_TX_IGNORE(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__PMC_ADD_TO_GLOBAL_TX_IGNORE,\ (_qzz_addr), (_qzz_len), 0, 0, 0) #endif vmem-1.8/src/common/valgrind/valgrind.h000066400000000000000000013752211361505074100202070ustar00rootroot00000000000000/* -*- c -*- ---------------------------------------------------------------- Notice that the following BSD-style license applies to this one file (valgrind.h) only. The rest of Valgrind is licensed under the terms of the GNU General Public License, version 2, unless otherwise indicated. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- This file is part of Valgrind, a dynamic binary instrumentation framework. Copyright (C) 2000-2017 Julian Seward. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 3. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 4. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ---------------------------------------------------------------- Notice that the above BSD-style license applies to this one file (valgrind.h) only. The entire rest of Valgrind is licensed under the terms of the GNU General Public License, version 2. See the COPYING file in the source distribution for details. ---------------------------------------------------------------- */ /* This file is for inclusion into client (your!) code. You can use these macros to manipulate and query Valgrind's execution inside your own programs. The resulting executables will still run without Valgrind, just a little bit more slowly than they otherwise would, but otherwise unchanged. When not running on valgrind, each client request consumes very few (eg. 7) instructions, so the resulting performance loss is negligible unless you plan to execute client requests millions of times per second. Nevertheless, if that is still a problem, you can compile with the NVALGRIND symbol defined (gcc -DNVALGRIND) so that client requests are not even compiled in. */ #ifndef __VALGRIND_H #define __VALGRIND_H /* ------------------------------------------------------------------ */ /* VERSION NUMBER OF VALGRIND */ /* ------------------------------------------------------------------ */ /* Specify Valgrind's version number, so that user code can conditionally compile based on our version number. Note that these were introduced at version 3.6 and so do not exist in version 3.5 or earlier. The recommended way to use them to check for "version X.Y or later" is (eg) #if defined(__VALGRIND_MAJOR__) && defined(__VALGRIND_MINOR__) \ && (__VALGRIND_MAJOR__ > 3 \ || (__VALGRIND_MAJOR__ == 3 && __VALGRIND_MINOR__ >= 6)) */ #define __VALGRIND_MAJOR__ 3 #define __VALGRIND_MINOR__ 14 #include /* Nb: this file might be included in a file compiled with -ansi. So we can't use C++ style "//" comments nor the "asm" keyword (instead use "__asm__"). */ /* Derive some tags indicating what the target platform is. Note that in this file we're using the compiler's CPP symbols for identifying architectures, which are different to the ones we use within the rest of Valgrind. Note, __powerpc__ is active for both 32 and 64-bit PPC, whereas __powerpc64__ is only active for the latter (on Linux, that is). 
Misc note: how to find out what's predefined in gcc by default: gcc -Wp,-dM somefile.c */ #undef PLAT_x86_darwin #undef PLAT_amd64_darwin #undef PLAT_x86_win32 #undef PLAT_amd64_win64 #undef PLAT_x86_linux #undef PLAT_amd64_linux #undef PLAT_ppc32_linux #undef PLAT_ppc64be_linux #undef PLAT_ppc64le_linux #undef PLAT_arm_linux #undef PLAT_arm64_linux #undef PLAT_s390x_linux #undef PLAT_mips32_linux #undef PLAT_mips64_linux #undef PLAT_x86_solaris #undef PLAT_amd64_solaris #if defined(__APPLE__) && defined(__i386__) # define PLAT_x86_darwin 1 #elif defined(__APPLE__) && defined(__x86_64__) # define PLAT_amd64_darwin 1 #elif (defined(__MINGW32__) && !defined(__MINGW64__)) \ || defined(__CYGWIN32__) \ || (defined(_WIN32) && defined(_M_IX86)) # define PLAT_x86_win32 1 #elif defined(__MINGW64__) \ || (defined(_WIN64) && defined(_M_X64)) # define PLAT_amd64_win64 1 #elif defined(__linux__) && defined(__i386__) # define PLAT_x86_linux 1 #elif defined(__linux__) && defined(__x86_64__) && !defined(__ILP32__) # define PLAT_amd64_linux 1 #elif defined(__linux__) && defined(__powerpc__) && !defined(__powerpc64__) # define PLAT_ppc32_linux 1 #elif defined(__linux__) && defined(__powerpc__) && defined(__powerpc64__) && _CALL_ELF != 2 /* Big Endian uses ELF version 1 */ # define PLAT_ppc64be_linux 1 #elif defined(__linux__) && defined(__powerpc__) && defined(__powerpc64__) && _CALL_ELF == 2 /* Little Endian uses ELF version 2 */ # define PLAT_ppc64le_linux 1 #elif defined(__linux__) && defined(__arm__) && !defined(__aarch64__) # define PLAT_arm_linux 1 #elif defined(__linux__) && defined(__aarch64__) && !defined(__arm__) # define PLAT_arm64_linux 1 #elif defined(__linux__) && defined(__s390__) && defined(__s390x__) # define PLAT_s390x_linux 1 #elif defined(__linux__) && defined(__mips__) && (__mips==64) # define PLAT_mips64_linux 1 #elif defined(__linux__) && defined(__mips__) && (__mips!=64) # define PLAT_mips32_linux 1 #elif defined(__sun) && defined(__i386__) # define PLAT_x86_solaris 1 #elif defined(__sun) && defined(__x86_64__) # define PLAT_amd64_solaris 1 #else /* If we're not compiling for our target platform, don't generate any inline asms. */ # if !defined(NVALGRIND) # define NVALGRIND 1 # endif #endif /* ------------------------------------------------------------------ */ /* ARCHITECTURE SPECIFICS for SPECIAL INSTRUCTIONS. There is nothing */ /* in here of use to end-users -- skip to the next section. */ /* ------------------------------------------------------------------ */ /* * VALGRIND_DO_CLIENT_REQUEST(): a statement that invokes a Valgrind client * request. Accepts both pointers and integers as arguments. * * VALGRIND_DO_CLIENT_REQUEST_STMT(): a statement that invokes a Valgrind * client request that does not return a value. * VALGRIND_DO_CLIENT_REQUEST_EXPR(): a C expression that invokes a Valgrind * client request and whose value equals the client request result. Accepts * both pointers and integers as arguments. Note that such calls are not * necessarily pure functions -- they may have side effects. 
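   Illustrative sketch (editorial addition, not part of the upstream header):
   tool headers such as memcheck.h and pmemcheck.h above wrap these raw
   requests in thin convenience macros; a client can also issue one directly.
   MY_TOOL_REQUEST_CODE, addr and len are placeholders for this example, not
   part of any real tool ABI.

      unsigned long ret =
         VALGRIND_DO_CLIENT_REQUEST_EXPR(0,   // value when not on Valgrind
                                         MY_TOOL_REQUEST_CODE,
                                         addr, len, 0, 0, 0);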
*/ #define VALGRIND_DO_CLIENT_REQUEST(_zzq_rlval, _zzq_default, \ _zzq_request, _zzq_arg1, _zzq_arg2, \ _zzq_arg3, _zzq_arg4, _zzq_arg5) \ do { (_zzq_rlval) = VALGRIND_DO_CLIENT_REQUEST_EXPR((_zzq_default), \ (_zzq_request), (_zzq_arg1), (_zzq_arg2), \ (_zzq_arg3), (_zzq_arg4), (_zzq_arg5)); } while (0) #define VALGRIND_DO_CLIENT_REQUEST_STMT(_zzq_request, _zzq_arg1, \ _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ do { (void) VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ (_zzq_request), (_zzq_arg1), (_zzq_arg2), \ (_zzq_arg3), (_zzq_arg4), (_zzq_arg5)); } while (0) #if defined(NVALGRIND) /* Define NVALGRIND to completely remove the Valgrind magic sequence from the compiled code (analogous to NDEBUG's effects on assert()) */ #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ (_zzq_default) #else /* ! NVALGRIND */ /* The following defines the magic code sequences which the JITter spots and handles magically. Don't look too closely at them as they will rot your brain. The assembly code sequences for all architectures is in this one file. This is because this file must be stand-alone, and we don't want to have multiple files. For VALGRIND_DO_CLIENT_REQUEST, we must ensure that the default value gets put in the return slot, so that everything works when this is executed not under Valgrind. Args are passed in a memory block, and so there's no intrinsic limit to the number that could be passed, but it's currently five. The macro args are: _zzq_rlval result lvalue _zzq_default default value (result returned when running on real CPU) _zzq_request request code _zzq_arg1..5 request params The other two macros are used to support function wrapping, and are a lot simpler. VALGRIND_GET_NR_CONTEXT returns the value of the guest's NRADDR pseudo-register and whatever other information is needed to safely run the call original from the wrapper: on ppc64-linux, the R2 value at the divert point is also needed. This information is abstracted into a user-visible type, OrigFn. VALGRIND_CALL_NOREDIR_* behaves the same as the following on the guest, but guarantees that the branch instruction will not be redirected: x86: call *%eax, amd64: call *%rax, ppc32/ppc64: branch-and-link-to-r11. VALGRIND_CALL_NOREDIR is just text, not a complete inline asm, since it needs to be combined with more magic inline asm stuff to be useful. */ /* ----------------- x86-{linux,darwin,solaris} ---------------- */ #if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin) \ || (defined(PLAT_x86_win32) && defined(__GNUC__)) \ || defined(PLAT_x86_solaris) typedef struct { unsigned int nraddr; /* where's the code? 
*/ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "roll $3, %%edi ; roll $13, %%edi\n\t" \ "roll $29, %%edi ; roll $19, %%edi\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ __extension__ \ ({volatile unsigned int _zzq_args[6]; \ volatile unsigned int _zzq_result; \ _zzq_args[0] = (unsigned int)(_zzq_request); \ _zzq_args[1] = (unsigned int)(_zzq_arg1); \ _zzq_args[2] = (unsigned int)(_zzq_arg2); \ _zzq_args[3] = (unsigned int)(_zzq_arg3); \ _zzq_args[4] = (unsigned int)(_zzq_arg4); \ _zzq_args[5] = (unsigned int)(_zzq_arg5); \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %EDX = client_request ( %EAX ) */ \ "xchgl %%ebx,%%ebx" \ : "=d" (_zzq_result) \ : "a" (&_zzq_args[0]), "0" (_zzq_default) \ : "cc", "memory" \ ); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ volatile unsigned int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %EAX = guest_NRADDR */ \ "xchgl %%ecx,%%ecx" \ : "=a" (__addr) \ : \ : "cc", "memory" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_CALL_NOREDIR_EAX \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* call-noredir *%EAX */ \ "xchgl %%edx,%%edx\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "xchgl %%edi,%%edi\n\t" \ : : : "cc", "memory" \ ); \ } while (0) #endif /* PLAT_x86_linux || PLAT_x86_darwin || (PLAT_x86_win32 && __GNUC__) || PLAT_x86_solaris */ /* ------------------------- x86-Win32 ------------------------- */ #if defined(PLAT_x86_win32) && !defined(__GNUC__) typedef struct { unsigned int nraddr; /* where's the code? */ } OrigFn; #if defined(_MSC_VER) #define __SPECIAL_INSTRUCTION_PREAMBLE \ __asm rol edi, 3 __asm rol edi, 13 \ __asm rol edi, 29 __asm rol edi, 19 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ valgrind_do_client_request_expr((uintptr_t)(_zzq_default), \ (uintptr_t)(_zzq_request), (uintptr_t)(_zzq_arg1), \ (uintptr_t)(_zzq_arg2), (uintptr_t)(_zzq_arg3), \ (uintptr_t)(_zzq_arg4), (uintptr_t)(_zzq_arg5)) static __inline uintptr_t valgrind_do_client_request_expr(uintptr_t _zzq_default, uintptr_t _zzq_request, uintptr_t _zzq_arg1, uintptr_t _zzq_arg2, uintptr_t _zzq_arg3, uintptr_t _zzq_arg4, uintptr_t _zzq_arg5) { volatile uintptr_t _zzq_args[6]; volatile unsigned int _zzq_result; _zzq_args[0] = (uintptr_t)(_zzq_request); _zzq_args[1] = (uintptr_t)(_zzq_arg1); _zzq_args[2] = (uintptr_t)(_zzq_arg2); _zzq_args[3] = (uintptr_t)(_zzq_arg3); _zzq_args[4] = (uintptr_t)(_zzq_arg4); _zzq_args[5] = (uintptr_t)(_zzq_arg5); __asm { __asm lea eax, _zzq_args __asm mov edx, _zzq_default __SPECIAL_INSTRUCTION_PREAMBLE /* %EDX = client_request ( %EAX ) */ __asm xchg ebx,ebx __asm mov _zzq_result, edx } return _zzq_result; } #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ volatile unsigned int __addr; \ __asm { __SPECIAL_INSTRUCTION_PREAMBLE \ /* %EAX = guest_NRADDR */ \ __asm xchg ecx,ecx \ __asm mov __addr, eax \ } \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_CALL_NOREDIR_EAX ERROR #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm { __SPECIAL_INSTRUCTION_PREAMBLE \ __asm xchg edi,edi \ } \ } while (0) #else #error Unsupported compiler. 
#endif #endif /* PLAT_x86_win32 */ /* ----------------- amd64-{linux,darwin,solaris} --------------- */ #if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin) \ || defined(PLAT_amd64_solaris) \ || (defined(PLAT_amd64_win64) && defined(__GNUC__)) typedef struct { unsigned long int nraddr; /* where's the code? */ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "rolq $3, %%rdi ; rolq $13, %%rdi\n\t" \ "rolq $61, %%rdi ; rolq $51, %%rdi\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ __extension__ \ ({ volatile unsigned long int _zzq_args[6]; \ volatile unsigned long int _zzq_result; \ _zzq_args[0] = (unsigned long int)(_zzq_request); \ _zzq_args[1] = (unsigned long int)(_zzq_arg1); \ _zzq_args[2] = (unsigned long int)(_zzq_arg2); \ _zzq_args[3] = (unsigned long int)(_zzq_arg3); \ _zzq_args[4] = (unsigned long int)(_zzq_arg4); \ _zzq_args[5] = (unsigned long int)(_zzq_arg5); \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %RDX = client_request ( %RAX ) */ \ "xchgq %%rbx,%%rbx" \ : "=d" (_zzq_result) \ : "a" (&_zzq_args[0]), "0" (_zzq_default) \ : "cc", "memory" \ ); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ volatile unsigned long int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %RAX = guest_NRADDR */ \ "xchgq %%rcx,%%rcx" \ : "=a" (__addr) \ : \ : "cc", "memory" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_CALL_NOREDIR_RAX \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* call-noredir *%RAX */ \ "xchgq %%rdx,%%rdx\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "xchgq %%rdi,%%rdi\n\t" \ : : : "cc", "memory" \ ); \ } while (0) #endif /* PLAT_amd64_linux || PLAT_amd64_darwin || PLAT_amd64_solaris */ /* ------------------------- amd64-Win64 ------------------------- */ #if defined(PLAT_amd64_win64) && !defined(__GNUC__) #error Unsupported compiler. #endif /* PLAT_amd64_win64 */ /* ------------------------ ppc32-linux ------------------------ */ #if defined(PLAT_ppc32_linux) typedef struct { unsigned int nraddr; /* where's the code? 
*/ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "rlwinm 0,0,3,0,31 ; rlwinm 0,0,13,0,31\n\t" \ "rlwinm 0,0,29,0,31 ; rlwinm 0,0,19,0,31\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ \ __extension__ \ ({ unsigned int _zzq_args[6]; \ unsigned int _zzq_result; \ unsigned int* _zzq_ptr; \ _zzq_args[0] = (unsigned int)(_zzq_request); \ _zzq_args[1] = (unsigned int)(_zzq_arg1); \ _zzq_args[2] = (unsigned int)(_zzq_arg2); \ _zzq_args[3] = (unsigned int)(_zzq_arg3); \ _zzq_args[4] = (unsigned int)(_zzq_arg4); \ _zzq_args[5] = (unsigned int)(_zzq_arg5); \ _zzq_ptr = _zzq_args; \ __asm__ volatile("mr 3,%1\n\t" /*default*/ \ "mr 4,%2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = client_request ( %R4 ) */ \ "or 1,1,1\n\t" \ "mr %0,3" /*result*/ \ : "=b" (_zzq_result) \ : "b" (_zzq_default), "b" (_zzq_ptr) \ : "cc", "memory", "r3", "r4"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ unsigned int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = guest_NRADDR */ \ "or 2,2,2\n\t" \ "mr %0,3" \ : "=b" (__addr) \ : \ : "cc", "memory", "r3" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* branch-and-link-to-noredir *%R11 */ \ "or 3,3,3\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "or 5,5,5\n\t" \ ); \ } while (0) #endif /* PLAT_ppc32_linux */ /* ------------------------ ppc64-linux ------------------------ */ #if defined(PLAT_ppc64be_linux) typedef struct { unsigned long int nraddr; /* where's the code? */ unsigned long int r2; /* what tocptr do we need? 
*/ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \ "rotldi 0,0,61 ; rotldi 0,0,51\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ \ __extension__ \ ({ unsigned long int _zzq_args[6]; \ unsigned long int _zzq_result; \ unsigned long int* _zzq_ptr; \ _zzq_args[0] = (unsigned long int)(_zzq_request); \ _zzq_args[1] = (unsigned long int)(_zzq_arg1); \ _zzq_args[2] = (unsigned long int)(_zzq_arg2); \ _zzq_args[3] = (unsigned long int)(_zzq_arg3); \ _zzq_args[4] = (unsigned long int)(_zzq_arg4); \ _zzq_args[5] = (unsigned long int)(_zzq_arg5); \ _zzq_ptr = _zzq_args; \ __asm__ volatile("mr 3,%1\n\t" /*default*/ \ "mr 4,%2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = client_request ( %R4 ) */ \ "or 1,1,1\n\t" \ "mr %0,3" /*result*/ \ : "=b" (_zzq_result) \ : "b" (_zzq_default), "b" (_zzq_ptr) \ : "cc", "memory", "r3", "r4"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ unsigned long int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = guest_NRADDR */ \ "or 2,2,2\n\t" \ "mr %0,3" \ : "=b" (__addr) \ : \ : "cc", "memory", "r3" \ ); \ _zzq_orig->nraddr = __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = guest_NRADDR_GPR2 */ \ "or 4,4,4\n\t" \ "mr %0,3" \ : "=b" (__addr) \ : \ : "cc", "memory", "r3" \ ); \ _zzq_orig->r2 = __addr; \ } #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* branch-and-link-to-noredir *%R11 */ \ "or 3,3,3\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "or 5,5,5\n\t" \ ); \ } while (0) #endif /* PLAT_ppc64be_linux */ #if defined(PLAT_ppc64le_linux) typedef struct { unsigned long int nraddr; /* where's the code? */ unsigned long int r2; /* what tocptr do we need? 
*/ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \ "rotldi 0,0,61 ; rotldi 0,0,51\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ \ __extension__ \ ({ unsigned long int _zzq_args[6]; \ unsigned long int _zzq_result; \ unsigned long int* _zzq_ptr; \ _zzq_args[0] = (unsigned long int)(_zzq_request); \ _zzq_args[1] = (unsigned long int)(_zzq_arg1); \ _zzq_args[2] = (unsigned long int)(_zzq_arg2); \ _zzq_args[3] = (unsigned long int)(_zzq_arg3); \ _zzq_args[4] = (unsigned long int)(_zzq_arg4); \ _zzq_args[5] = (unsigned long int)(_zzq_arg5); \ _zzq_ptr = _zzq_args; \ __asm__ volatile("mr 3,%1\n\t" /*default*/ \ "mr 4,%2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = client_request ( %R4 ) */ \ "or 1,1,1\n\t" \ "mr %0,3" /*result*/ \ : "=b" (_zzq_result) \ : "b" (_zzq_default), "b" (_zzq_ptr) \ : "cc", "memory", "r3", "r4"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ unsigned long int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = guest_NRADDR */ \ "or 2,2,2\n\t" \ "mr %0,3" \ : "=b" (__addr) \ : \ : "cc", "memory", "r3" \ ); \ _zzq_orig->nraddr = __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %R3 = guest_NRADDR_GPR2 */ \ "or 4,4,4\n\t" \ "mr %0,3" \ : "=b" (__addr) \ : \ : "cc", "memory", "r3" \ ); \ _zzq_orig->r2 = __addr; \ } #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* branch-and-link-to-noredir *%R12 */ \ "or 3,3,3\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "or 5,5,5\n\t" \ ); \ } while (0) #endif /* PLAT_ppc64le_linux */ /* ------------------------- arm-linux ------------------------- */ #if defined(PLAT_arm_linux) typedef struct { unsigned int nraddr; /* where's the code? 
*/ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "mov r12, r12, ror #3 ; mov r12, r12, ror #13 \n\t" \ "mov r12, r12, ror #29 ; mov r12, r12, ror #19 \n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ \ __extension__ \ ({volatile unsigned int _zzq_args[6]; \ volatile unsigned int _zzq_result; \ _zzq_args[0] = (unsigned int)(_zzq_request); \ _zzq_args[1] = (unsigned int)(_zzq_arg1); \ _zzq_args[2] = (unsigned int)(_zzq_arg2); \ _zzq_args[3] = (unsigned int)(_zzq_arg3); \ _zzq_args[4] = (unsigned int)(_zzq_arg4); \ _zzq_args[5] = (unsigned int)(_zzq_arg5); \ __asm__ volatile("mov r3, %1\n\t" /*default*/ \ "mov r4, %2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* R3 = client_request ( R4 ) */ \ "orr r10, r10, r10\n\t" \ "mov %0, r3" /*result*/ \ : "=r" (_zzq_result) \ : "r" (_zzq_default), "r" (&_zzq_args[0]) \ : "cc","memory", "r3", "r4"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ unsigned int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* R3 = guest_NRADDR */ \ "orr r11, r11, r11\n\t" \ "mov %0, r3" \ : "=r" (__addr) \ : \ : "cc", "memory", "r3" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* branch-and-link-to-noredir *%R4 */ \ "orr r12, r12, r12\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "orr r9, r9, r9\n\t" \ : : : "cc", "memory" \ ); \ } while (0) #endif /* PLAT_arm_linux */ /* ------------------------ arm64-linux ------------------------- */ #if defined(PLAT_arm64_linux) typedef struct { unsigned long int nraddr; /* where's the code? */ } OrigFn; #define __SPECIAL_INSTRUCTION_PREAMBLE \ "ror x12, x12, #3 ; ror x12, x12, #13 \n\t" \ "ror x12, x12, #51 ; ror x12, x12, #61 \n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ \ __extension__ \ ({volatile unsigned long int _zzq_args[6]; \ volatile unsigned long int _zzq_result; \ _zzq_args[0] = (unsigned long int)(_zzq_request); \ _zzq_args[1] = (unsigned long int)(_zzq_arg1); \ _zzq_args[2] = (unsigned long int)(_zzq_arg2); \ _zzq_args[3] = (unsigned long int)(_zzq_arg3); \ _zzq_args[4] = (unsigned long int)(_zzq_arg4); \ _zzq_args[5] = (unsigned long int)(_zzq_arg5); \ __asm__ volatile("mov x3, %1\n\t" /*default*/ \ "mov x4, %2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* X3 = client_request ( X4 ) */ \ "orr x10, x10, x10\n\t" \ "mov %0, x3" /*result*/ \ : "=r" (_zzq_result) \ : "r" ((unsigned long int)(_zzq_default)), \ "r" (&_zzq_args[0]) \ : "cc","memory", "x3", "x4"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ unsigned long int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* X3 = guest_NRADDR */ \ "orr x11, x11, x11\n\t" \ "mov %0, x3" \ : "=r" (__addr) \ : \ : "cc", "memory", "x3" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* branch-and-link-to-noredir X8 */ \ "orr x12, x12, x12\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "orr x9, x9, x9\n\t" \ : : : "cc", "memory" \ ); \ } while (0) #endif /* PLAT_arm64_linux */ /* ------------------------ s390x-linux ------------------------ */ #if defined(PLAT_s390x_linux) typedef struct { 
unsigned long int nraddr; /* where's the code? */ } OrigFn; /* __SPECIAL_INSTRUCTION_PREAMBLE will be used to identify Valgrind specific * code. This detection is implemented in platform specific toIR.c * (e.g. VEX/priv/guest_s390_decoder.c). */ #define __SPECIAL_INSTRUCTION_PREAMBLE \ "lr 15,15\n\t" \ "lr 1,1\n\t" \ "lr 2,2\n\t" \ "lr 3,3\n\t" #define __CLIENT_REQUEST_CODE "lr 2,2\n\t" #define __GET_NR_CONTEXT_CODE "lr 3,3\n\t" #define __CALL_NO_REDIR_CODE "lr 4,4\n\t" #define __VEX_INJECT_IR_CODE "lr 5,5\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ __extension__ \ ({volatile unsigned long int _zzq_args[6]; \ volatile unsigned long int _zzq_result; \ _zzq_args[0] = (unsigned long int)(_zzq_request); \ _zzq_args[1] = (unsigned long int)(_zzq_arg1); \ _zzq_args[2] = (unsigned long int)(_zzq_arg2); \ _zzq_args[3] = (unsigned long int)(_zzq_arg3); \ _zzq_args[4] = (unsigned long int)(_zzq_arg4); \ _zzq_args[5] = (unsigned long int)(_zzq_arg5); \ __asm__ volatile(/* r2 = args */ \ "lgr 2,%1\n\t" \ /* r3 = default */ \ "lgr 3,%2\n\t" \ __SPECIAL_INSTRUCTION_PREAMBLE \ __CLIENT_REQUEST_CODE \ /* results = r3 */ \ "lgr %0, 3\n\t" \ : "=d" (_zzq_result) \ : "a" (&_zzq_args[0]), "0" (_zzq_default) \ : "cc", "2", "3", "memory" \ ); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ volatile unsigned long int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ __GET_NR_CONTEXT_CODE \ "lgr %0, 3\n\t" \ : "=a" (__addr) \ : \ : "cc", "3", "memory" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_CALL_NOREDIR_R1 \ __SPECIAL_INSTRUCTION_PREAMBLE \ __CALL_NO_REDIR_CODE #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ __VEX_INJECT_IR_CODE); \ } while (0) #endif /* PLAT_s390x_linux */ /* ------------------------- mips32-linux ---------------- */ #if defined(PLAT_mips32_linux) typedef struct { unsigned int nraddr; /* where's the code? 
*/ } OrigFn; /* .word 0x342 * .word 0x742 * .word 0xC2 * .word 0x4C2*/ #define __SPECIAL_INSTRUCTION_PREAMBLE \ "srl $0, $0, 13\n\t" \ "srl $0, $0, 29\n\t" \ "srl $0, $0, 3\n\t" \ "srl $0, $0, 19\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ __extension__ \ ({ volatile unsigned int _zzq_args[6]; \ volatile unsigned int _zzq_result; \ _zzq_args[0] = (unsigned int)(_zzq_request); \ _zzq_args[1] = (unsigned int)(_zzq_arg1); \ _zzq_args[2] = (unsigned int)(_zzq_arg2); \ _zzq_args[3] = (unsigned int)(_zzq_arg3); \ _zzq_args[4] = (unsigned int)(_zzq_arg4); \ _zzq_args[5] = (unsigned int)(_zzq_arg5); \ __asm__ volatile("move $11, %1\n\t" /*default*/ \ "move $12, %2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* T3 = client_request ( T4 ) */ \ "or $13, $13, $13\n\t" \ "move %0, $11\n\t" /*result*/ \ : "=r" (_zzq_result) \ : "r" (_zzq_default), "r" (&_zzq_args[0]) \ : "$11", "$12", "memory"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ volatile unsigned int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* %t9 = guest_NRADDR */ \ "or $14, $14, $14\n\t" \ "move %0, $11" /*result*/ \ : "=r" (__addr) \ : \ : "$11" \ ); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_CALL_NOREDIR_T9 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* call-noredir *%t9 */ \ "or $15, $15, $15\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "or $11, $11, $11\n\t" \ ); \ } while (0) #endif /* PLAT_mips32_linux */ /* ------------------------- mips64-linux ---------------- */ #if defined(PLAT_mips64_linux) typedef struct { unsigned long nraddr; /* where's the code? */ } OrigFn; /* dsll $0,$0, 3 * dsll $0,$0, 13 * dsll $0,$0, 29 * dsll $0,$0, 19*/ #define __SPECIAL_INSTRUCTION_PREAMBLE \ "dsll $0,$0, 3 ; dsll $0,$0,13\n\t" \ "dsll $0,$0,29 ; dsll $0,$0,19\n\t" #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \ _zzq_default, _zzq_request, \ _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ __extension__ \ ({ volatile unsigned long int _zzq_args[6]; \ volatile unsigned long int _zzq_result; \ _zzq_args[0] = (unsigned long int)(_zzq_request); \ _zzq_args[1] = (unsigned long int)(_zzq_arg1); \ _zzq_args[2] = (unsigned long int)(_zzq_arg2); \ _zzq_args[3] = (unsigned long int)(_zzq_arg3); \ _zzq_args[4] = (unsigned long int)(_zzq_arg4); \ _zzq_args[5] = (unsigned long int)(_zzq_arg5); \ __asm__ volatile("move $11, %1\n\t" /*default*/ \ "move $12, %2\n\t" /*ptr*/ \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* $11 = client_request ( $12 ) */ \ "or $13, $13, $13\n\t" \ "move %0, $11\n\t" /*result*/ \ : "=r" (_zzq_result) \ : "r" (_zzq_default), "r" (&_zzq_args[0]) \ : "$11", "$12", "memory"); \ _zzq_result; \ }) #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ volatile unsigned long int __addr; \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ /* $11 = guest_NRADDR */ \ "or $14, $14, $14\n\t" \ "move %0, $11" /*result*/ \ : "=r" (__addr) \ : \ : "$11"); \ _zzq_orig->nraddr = __addr; \ } #define VALGRIND_CALL_NOREDIR_T9 \ __SPECIAL_INSTRUCTION_PREAMBLE \ /* call-noredir $25 */ \ "or $15, $15, $15\n\t" #define VALGRIND_VEX_INJECT_IR() \ do { \ __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ "or $11, $11, $11\n\t" \ ); \ } while (0) #endif /* PLAT_mips64_linux */ /* Insert assembly code for other platforms here... 
*/ #endif /* NVALGRIND */ /* ------------------------------------------------------------------ */ /* PLATFORM SPECIFICS for FUNCTION WRAPPING. This is all very */ /* ugly. It's the least-worst tradeoff I can think of. */ /* ------------------------------------------------------------------ */ /* This section defines magic (a.k.a appalling-hack) macros for doing guaranteed-no-redirection macros, so as to get from function wrappers to the functions they are wrapping. The whole point is to construct standard call sequences, but to do the call itself with a special no-redirect call pseudo-instruction that the JIT understands and handles specially. This section is long and repetitious, and I can't see a way to make it shorter. The naming scheme is as follows: CALL_FN_{W,v}_{v,W,WW,WWW,WWWW,5W,6W,7W,etc} 'W' stands for "word" and 'v' for "void". Hence there are different macros for calling arity 0, 1, 2, 3, 4, etc, functions, and for each, the possibility of returning a word-typed result, or no result. */ /* Use these to write the name of your wrapper. NOTE: duplicates VG_WRAP_FUNCTION_Z{U,Z} in pub_tool_redir.h. NOTE also: inserts the default behaviour equivalance class tag "0000" into the name. See pub_tool_redir.h for details -- normally you don't need to think about this, though. */ /* Use an extra level of macroisation so as to ensure the soname/fnname args are fully macro-expanded before pasting them together. */ #define VG_CONCAT4(_aa,_bb,_cc,_dd) _aa##_bb##_cc##_dd #define I_WRAP_SONAME_FNNAME_ZU(soname,fnname) \ VG_CONCAT4(_vgw00000ZU_,soname,_,fnname) #define I_WRAP_SONAME_FNNAME_ZZ(soname,fnname) \ VG_CONCAT4(_vgw00000ZZ_,soname,_,fnname) /* Use this macro from within a wrapper function to collect the context (address and possibly other info) of the original function. Once you have that you can then use it in one of the CALL_FN_ macros. The type of the argument _lval is OrigFn. */ #define VALGRIND_GET_ORIG_FN(_lval) VALGRIND_GET_NR_CONTEXT(_lval) /* Also provide end-user facilities for function replacement, rather than wrapping. A replacement function differs from a wrapper in that it has no way to get hold of the original function being called, and hence no way to call onwards to it. In a replacement function, VALGRIND_GET_ORIG_FN always returns zero. */ #define I_REPLACE_SONAME_FNNAME_ZU(soname,fnname) \ VG_CONCAT4(_vgr00000ZU_,soname,_,fnname) #define I_REPLACE_SONAME_FNNAME_ZZ(soname,fnname) \ VG_CONCAT4(_vgr00000ZZ_,soname,_,fnname) /* Derivatives of the main macros below, for calling functions returning void. 
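   Both the void and the word-returning variants are normally used from a
   wrapper built with the pieces described above. Illustrative sketch
   (editorial addition, not part of the upstream header): wrapping a
   hypothetical int foo(int) exported by a library whose Z-encoded soname is
   assumed to be libfooZdsoZa, i.e. "libfoo.so.x", purely for this example.

      int I_WRAP_SONAME_FNNAME_ZU(libfooZdsoZa, foo)(int x)
      {
         int    result;
         OrigFn fn;
         VALGRIND_GET_ORIG_FN(fn);
         CALL_FN_W_W(result, fn, x);   // forward to the original foo(x)
         return result;
      }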
*/ #define CALL_FN_v_v(fnptr) \ do { volatile unsigned long _junk; \ CALL_FN_W_v(_junk,fnptr); } while (0) #define CALL_FN_v_W(fnptr, arg1) \ do { volatile unsigned long _junk; \ CALL_FN_W_W(_junk,fnptr,arg1); } while (0) #define CALL_FN_v_WW(fnptr, arg1,arg2) \ do { volatile unsigned long _junk; \ CALL_FN_W_WW(_junk,fnptr,arg1,arg2); } while (0) #define CALL_FN_v_WWW(fnptr, arg1,arg2,arg3) \ do { volatile unsigned long _junk; \ CALL_FN_W_WWW(_junk,fnptr,arg1,arg2,arg3); } while (0) #define CALL_FN_v_WWWW(fnptr, arg1,arg2,arg3,arg4) \ do { volatile unsigned long _junk; \ CALL_FN_W_WWWW(_junk,fnptr,arg1,arg2,arg3,arg4); } while (0) #define CALL_FN_v_5W(fnptr, arg1,arg2,arg3,arg4,arg5) \ do { volatile unsigned long _junk; \ CALL_FN_W_5W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5); } while (0) #define CALL_FN_v_6W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6) \ do { volatile unsigned long _junk; \ CALL_FN_W_6W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6); } while (0) #define CALL_FN_v_7W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6,arg7) \ do { volatile unsigned long _junk; \ CALL_FN_W_7W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6,arg7); } while (0) /* ----------------- x86-{linux,darwin,solaris} ---------------- */ #if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin) \ || defined(PLAT_x86_solaris) /* These regs are trashed by the hidden call. No need to mention eax as gcc can already see that, plus causes gcc to bomb. */ #define __CALLER_SAVED_REGS /*"eax"*/ "ecx", "edx" /* Macros to save and align the stack before making a function call and restore it afterwards as gcc may not keep the stack pointer aligned if it doesn't realise calls are being made to other functions. */ #define VALGRIND_ALIGN_STACK \ "movl %%esp,%%edi\n\t" \ "andl $0xfffffff0,%%esp\n\t" #define VALGRIND_RESTORE_STACK \ "movl %%edi,%%esp\n\t" /* These CALL_FN_ macros assume that on x86-linux, sizeof(unsigned long) == 4. 
*/ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $12, %%esp\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $8, %%esp\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $4, %%esp\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned 
long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $12, %%esp\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $8, %%esp\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $4, %%esp\n\t" \ "pushl 28(%%eax)\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "pushl 32(%%eax)\n\t" \ "pushl 28(%%eax)\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); 
\ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $12, %%esp\n\t" \ "pushl 36(%%eax)\n\t" \ "pushl 32(%%eax)\n\t" \ "pushl 28(%%eax)\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $8, %%esp\n\t" \ "pushl 40(%%eax)\n\t" \ "pushl 36(%%eax)\n\t" \ "pushl 32(%%eax)\n\t" \ "pushl 28(%%eax)\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "subl $4, %%esp\n\t" \ "pushl 44(%%eax)\n\t" \ "pushl 40(%%eax)\n\t" \ "pushl 36(%%eax)\n\t" \ "pushl 32(%%eax)\n\t" \ "pushl 28(%%eax)\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ 
arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ _argvec[12] = (unsigned long)(arg12); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "pushl 48(%%eax)\n\t" \ "pushl 44(%%eax)\n\t" \ "pushl 40(%%eax)\n\t" \ "pushl 36(%%eax)\n\t" \ "pushl 32(%%eax)\n\t" \ "pushl 28(%%eax)\n\t" \ "pushl 24(%%eax)\n\t" \ "pushl 20(%%eax)\n\t" \ "pushl 16(%%eax)\n\t" \ "pushl 12(%%eax)\n\t" \ "pushl 8(%%eax)\n\t" \ "pushl 4(%%eax)\n\t" \ "movl (%%eax), %%eax\n\t" /* target->%eax */ \ VALGRIND_CALL_NOREDIR_EAX \ VALGRIND_RESTORE_STACK \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "edi" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_x86_linux || PLAT_x86_darwin || PLAT_x86_solaris */ /* ---------------- amd64-{linux,darwin,solaris} --------------- */ #if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin) \ || defined(PLAT_amd64_solaris) /* ARGREGS: rdi rsi rdx rcx r8 r9 (the rest on stack in R-to-L order) */ /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS /*"rax",*/ "rcx", "rdx", "rsi", \ "rdi", "r8", "r9", "r10", "r11" /* This is all pretty complex. It's so as to make stack unwinding work reliably. See bug 243270. The basic problem is the sub and add of 128 of %rsp in all of the following macros. If gcc believes the CFA is in %rsp, then unwinding may fail, because what's at the CFA is not what gcc "expected" when it constructs the CFIs for the places where the macros are instantiated. But we can't just add a CFI annotation to increase the CFA offset by 128, to match the sub of 128 from %rsp, because we don't know whether gcc has chosen %rsp as the CFA at that point, or whether it has chosen some other register (eg, %rbp). In the latter case, adding a CFI annotation to change the CFA offset is simply wrong. So the solution is to get hold of the CFA using __builtin_dwarf_cfa(), put it in a known register, and add a CFI annotation to say what the register is. We choose %rbp for this (perhaps perversely), because: (1) %rbp is already subject to unwinding. If a new register was chosen then the unwinder would have to unwind it in all stack traces, which is expensive, and (2) %rbp is already subject to precise exception updates in the JIT. If a new register was chosen, we'd have to have precise exceptions for it too, which reduces performance of the generated code. However .. one extra complication. We can't just whack the result of __builtin_dwarf_cfa() into %rbp and then add %rbp to the list of trashed registers at the end of the inline assembly fragments; gcc won't allow %rbp to appear in that list. Hence instead we need to stash %rbp in %r15 for the duration of the asm, and say that %r15 is trashed instead. gcc seems happy to go with that. Oh .. and this all needs to be conditionalised so that it is unchanged from before this commit, when compiled with older gccs that don't support __builtin_dwarf_cfa. 
Furthermore, since this header file is freestanding, it has to be independent of config.h, and so the following conditionalisation cannot depend on configure time checks. Although it's not clear from 'defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)', this expression excludes Darwin. .cfi directives in Darwin assembly appear to be completely different and I haven't investigated how they work. For even more entertainment value, note we have to use the completely undocumented __builtin_dwarf_cfa(), which appears to really compute the CFA, whereas __builtin_frame_address(0) claims to but actually doesn't. See https://bugs.kde.org/show_bug.cgi?id=243270#c47 */ #if defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM) # define __FRAME_POINTER \ ,"r"(__builtin_dwarf_cfa()) # define VALGRIND_CFI_PROLOGUE \ "movq %%rbp, %%r15\n\t" \ "movq %2, %%rbp\n\t" \ ".cfi_remember_state\n\t" \ ".cfi_def_cfa rbp, 0\n\t" # define VALGRIND_CFI_EPILOGUE \ "movq %%r15, %%rbp\n\t" \ ".cfi_restore_state\n\t" #else # define __FRAME_POINTER # define VALGRIND_CFI_PROLOGUE # define VALGRIND_CFI_EPILOGUE #endif /* Macros to save and align the stack before making a function call and restore it afterwards as gcc may not keep the stack pointer aligned if it doesn't realise calls are being made to other functions. */ #define VALGRIND_ALIGN_STACK \ "movq %%rsp,%%r14\n\t" \ "andq $0xfffffffffffffff0,%%rsp\n\t" #define VALGRIND_RESTORE_STACK \ "movq %%r14,%%rsp\n\t" /* These CALL_FN_ macros assume that on amd64-linux, sizeof(unsigned long) == 8. */ /* NB 9 Sept 07. There is a nasty kludge here in all these CALL_FN_ macros. In order not to trash the stack redzone, we need to drop %rsp by 128 before the hidden call, and restore afterwards. The nastyness is that it is only by luck that the stack still appears to be unwindable during the hidden call - since then the behaviour of any routine using this macro does not match what the CFI data says. Sigh. Why is this important? Imagine that a wrapper has a stack allocated local, and passes to the hidden call, a pointer to it. Because gcc does not know about the hidden call, it may allocate that local in the redzone. Unfortunately the hidden call may then trash it before it comes to use it. So we must step clear of the redzone, for the duration of the hidden call, to make it safe. Probably the same problem afflicts the other redzone-style ABIs too (ppc64-linux); but for those, the stack is self describing (none of this CFI nonsense) so at least messing with the stack pointer doesn't give a danger of non-unwindable stack. 
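   To make the redzone hazard above concrete, a sketch (hypothetical
   wrapper, not taken from this header): gcc cannot see the hidden call,
   so it may legitimately place 'local' in the 128-byte redzone below
   %rsp; without the "subq $128,%%rsp" step the hidden call could clobber
   'local' before the wrapped function reads it through the pointer:

      int I_WRAP_SONAME_FNNAME_ZU(NONE, bar)(void)
      {
         int    local = 42;
         int    result;
         OrigFn fn;
         VALGRIND_GET_ORIG_FN(fn);
         CALL_FN_W_W(result, fn, &local);
         return result;
      }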
*/ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ 
: /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $136,%%rsp\n\t" \ "pushq 56(%%rax)\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned 
long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "pushq 64(%%rax)\n\t" \ "pushq 56(%%rax)\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $136,%%rsp\n\t" \ "pushq 72(%%rax)\n\t" \ "pushq 64(%%rax)\n\t" \ "pushq 56(%%rax)\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "pushq 80(%%rax)\n\t" \ "pushq 72(%%rax)\n\t" \ "pushq 64(%%rax)\n\t" \ "pushq 56(%%rax)\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned 
long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $136,%%rsp\n\t" \ "pushq 88(%%rax)\n\t" \ "pushq 80(%%rax)\n\t" \ "pushq 72(%%rax)\n\t" \ "pushq 64(%%rax)\n\t" \ "pushq 56(%%rax)\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ _argvec[12] = (unsigned long)(arg12); \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ VALGRIND_ALIGN_STACK \ "subq $128,%%rsp\n\t" \ "pushq 96(%%rax)\n\t" \ "pushq 88(%%rax)\n\t" \ "pushq 80(%%rax)\n\t" \ "pushq 72(%%rax)\n\t" \ "pushq 64(%%rax)\n\t" \ "pushq 56(%%rax)\n\t" \ "movq 48(%%rax), %%r9\n\t" \ "movq 40(%%rax), %%r8\n\t" \ "movq 32(%%rax), %%rcx\n\t" \ "movq 24(%%rax), %%rdx\n\t" \ "movq 16(%%rax), %%rsi\n\t" \ "movq 8(%%rax), %%rdi\n\t" \ "movq (%%rax), %%rax\n\t" /* target->%rax */ \ VALGRIND_CALL_NOREDIR_RAX \ VALGRIND_RESTORE_STACK \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=a" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r14", "r15" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_amd64_linux || PLAT_amd64_darwin || PLAT_amd64_solaris */ /* ------------------------ ppc32-linux ------------------------ */ #if defined(PLAT_ppc32_linux) /* This is useful for finding out about the on-stack stuff: extern int f9 ( int,int,int,int,int,int,int,int,int ); extern int f10 ( int,int,int,int,int,int,int,int,int,int ); extern int f11 ( int,int,int,int,int,int,int,int,int,int,int ); extern int f12 ( int,int,int,int,int,int,int,int,int,int,int,int ); int g9 ( void ) { return f9(11,22,33,44,55,66,77,88,99); } int g10 ( void ) { return f10(11,22,33,44,55,66,77,88,99,110); } int g11 ( void ) { return f11(11,22,33,44,55,66,77,88,99,110,121); } int g12 ( void ) { return f12(11,22,33,44,55,66,77,88,99,110,121,132); } */ /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */ /* These regs are trashed by the hidden call. 
*/ #define __CALLER_SAVED_REGS \ "lr", "ctr", "xer", \ "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \ "r0", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \ "r11", "r12", "r13" /* Macros to save and align the stack before making a function call and restore it afterwards as gcc may not keep the stack pointer aligned if it doesn't realise calls are being made to other functions. */ #define VALGRIND_ALIGN_STACK \ "mr 28,1\n\t" \ "rlwinm 1,1,0,0,27\n\t" #define VALGRIND_RESTORE_STACK \ "mr 1,28\n\t" /* These CALL_FN_ macros assume that on ppc32-linux, sizeof(unsigned long) == 4. */ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 
3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 9,28(11)\n\t" \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ 
_argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 9,28(11)\n\t" \ "lwz 10,32(11)\n\t" /* arg8->r10 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "addi 1,1,-16\n\t" \ /* arg9 */ \ "lwz 3,36(11)\n\t" \ "stw 3,8(1)\n\t" \ /* args1-8 */ \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 9,28(11)\n\t" \ "lwz 10,32(11)\n\t" /* arg8->r10 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ _argvec[10] = (unsigned long)arg10; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "addi 1,1,-16\n\t" \ /* arg10 */ \ "lwz 3,40(11)\n\t" \ "stw 3,12(1)\n\t" \ /* arg9 */ \ "lwz 3,36(11)\n\t" \ "stw 3,8(1)\n\t" \ /* args1-8 */ \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 9,28(11)\n\t" \ "lwz 10,32(11)\n\t" /* arg8->r10 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned 
long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ _argvec[10] = (unsigned long)arg10; \ _argvec[11] = (unsigned long)arg11; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "addi 1,1,-32\n\t" \ /* arg11 */ \ "lwz 3,44(11)\n\t" \ "stw 3,16(1)\n\t" \ /* arg10 */ \ "lwz 3,40(11)\n\t" \ "stw 3,12(1)\n\t" \ /* arg9 */ \ "lwz 3,36(11)\n\t" \ "stw 3,8(1)\n\t" \ /* args1-8 */ \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 9,28(11)\n\t" \ "lwz 10,32(11)\n\t" /* arg8->r10 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ _argvec[10] = (unsigned long)arg10; \ _argvec[11] = (unsigned long)arg11; \ _argvec[12] = (unsigned long)arg12; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "addi 1,1,-32\n\t" \ /* arg12 */ \ "lwz 3,48(11)\n\t" \ "stw 3,20(1)\n\t" \ /* arg11 */ \ "lwz 3,44(11)\n\t" \ "stw 3,16(1)\n\t" \ /* arg10 */ \ "lwz 3,40(11)\n\t" \ "stw 3,12(1)\n\t" \ /* arg9 */ \ "lwz 3,36(11)\n\t" \ "stw 3,8(1)\n\t" \ /* args1-8 */ \ "lwz 3,4(11)\n\t" /* arg1->r3 */ \ "lwz 4,8(11)\n\t" \ "lwz 5,12(11)\n\t" \ "lwz 6,16(11)\n\t" /* arg4->r6 */ \ "lwz 7,20(11)\n\t" \ "lwz 8,24(11)\n\t" \ "lwz 9,28(11)\n\t" \ "lwz 10,32(11)\n\t" /* arg8->r10 */ \ "lwz 11,0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ VALGRIND_RESTORE_STACK \ "mr %0,3" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_ppc32_linux */ /* ------------------------ ppc64-linux ------------------------ */ #if defined(PLAT_ppc64be_linux) /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */ /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS \ "lr", "ctr", "xer", \ "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \ "r0", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \ "r11", "r12", "r13" /* Macros to save and align the stack before making a function call and restore it afterwards as gcc may not keep the stack pointer aligned if it doesn't realise calls are being made to other functions. */ #define VALGRIND_ALIGN_STACK \ "mr 28,1\n\t" \ "rldicr 1,1,0,59\n\t" #define VALGRIND_RESTORE_STACK \ "mr 1,28\n\t" /* These CALL_FN_ macros assume that on ppc64-linux, sizeof(unsigned long) == 8. 
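   A layout note for these ppc64 macros (derived from the code below, not
   part of the original comment): the asm receives %1 = &_argvec[2], so
   the offsets it uses are, relative to that base:

      -16  _argvec[0]    scratch slot; the caller's r2 (TOC) saved across the call
       -8  _argvec[1]    _orig.r2, i.e. the callee's TOC pointer
        0  _argvec[2]    _orig.nraddr, the target address loaded into r11
      8*N  _argvec[2+N]  argN, loaded into r3..r10 or spilled to the stack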
*/ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+0]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+1]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+2]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+3]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while 
(0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+4]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+5]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+6]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, 
arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+7]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 9, 56(11)\n\t" /* arg7->r9 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+8]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 9, 56(11)\n\t" /* arg7->r9 */ \ "ld 10, 64(11)\n\t" /* arg8->r10 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+9]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-128\n\t" /* expand stack 
frame */ \ /* arg9 */ \ "ld 3,72(11)\n\t" \ "std 3,112(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 9, 56(11)\n\t" /* arg7->r9 */ \ "ld 10, 64(11)\n\t" /* arg8->r10 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+10]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ _argvec[2+10] = (unsigned long)arg10; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-128\n\t" /* expand stack frame */ \ /* arg10 */ \ "ld 3,80(11)\n\t" \ "std 3,120(1)\n\t" \ /* arg9 */ \ "ld 3,72(11)\n\t" \ "std 3,112(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 9, 56(11)\n\t" /* arg7->r9 */ \ "ld 10, 64(11)\n\t" /* arg8->r10 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+11]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ _argvec[2+10] = (unsigned long)arg10; \ _argvec[2+11] = (unsigned long)arg11; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-144\n\t" /* expand stack frame */ \ /* arg11 */ \ "ld 3,88(11)\n\t" \ "std 3,128(1)\n\t" \ /* arg10 */ \ "ld 3,80(11)\n\t" \ "std 3,120(1)\n\t" \ /* arg9 */ \ "ld 3,72(11)\n\t" \ "std 3,112(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ 
\ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 9, 56(11)\n\t" /* arg7->r9 */ \ "ld 10, 64(11)\n\t" /* arg8->r10 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+12]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ _argvec[2+10] = (unsigned long)arg10; \ _argvec[2+11] = (unsigned long)arg11; \ _argvec[2+12] = (unsigned long)arg12; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 11,%1\n\t" \ "std 2,-16(11)\n\t" /* save tocptr */ \ "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-144\n\t" /* expand stack frame */ \ /* arg12 */ \ "ld 3,96(11)\n\t" \ "std 3,136(1)\n\t" \ /* arg11 */ \ "ld 3,88(11)\n\t" \ "std 3,128(1)\n\t" \ /* arg10 */ \ "ld 3,80(11)\n\t" \ "std 3,120(1)\n\t" \ /* arg9 */ \ "ld 3,72(11)\n\t" \ "std 3,112(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(11)\n\t" /* arg1->r3 */ \ "ld 4, 16(11)\n\t" /* arg2->r4 */ \ "ld 5, 24(11)\n\t" /* arg3->r5 */ \ "ld 6, 32(11)\n\t" /* arg4->r6 */ \ "ld 7, 40(11)\n\t" /* arg5->r7 */ \ "ld 8, 48(11)\n\t" /* arg6->r8 */ \ "ld 9, 56(11)\n\t" /* arg7->r9 */ \ "ld 10, 64(11)\n\t" /* arg8->r10 */ \ "ld 11, 0(11)\n\t" /* target->r11 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ "mr 11,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(11)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_ppc64be_linux */ /* ------------------------- ppc64le-linux ----------------------- */ #if defined(PLAT_ppc64le_linux) /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */ /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS \ "lr", "ctr", "xer", \ "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \ "r0", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \ "r11", "r12", "r13" /* Macros to save and align the stack before making a function call and restore it afterwards as gcc may not keep the stack pointer aligned if it doesn't realise calls are being made to other functions. */ #define VALGRIND_ALIGN_STACK \ "mr 28,1\n\t" \ "rldicr 1,1,0,59\n\t" #define VALGRIND_RESTORE_STACK \ "mr 1,28\n\t" /* These CALL_FN_ macros assume that on ppc64-linux, sizeof(unsigned long) == 8. 
*/ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+0]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+1]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+2]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+3]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while 
(0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+4]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+5]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+6]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, 
arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+7]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 9, 56(12)\n\t" /* arg7->r9 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+8]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 9, 56(12)\n\t" /* arg7->r9 */ \ "ld 10, 64(12)\n\t" /* arg8->r10 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+9]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-128\n\t" /* expand stack 
frame */ \ /* arg9 */ \ "ld 3,72(12)\n\t" \ "std 3,96(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 9, 56(12)\n\t" /* arg7->r9 */ \ "ld 10, 64(12)\n\t" /* arg8->r10 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+10]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ _argvec[2+10] = (unsigned long)arg10; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-128\n\t" /* expand stack frame */ \ /* arg10 */ \ "ld 3,80(12)\n\t" \ "std 3,104(1)\n\t" \ /* arg9 */ \ "ld 3,72(12)\n\t" \ "std 3,96(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 9, 56(12)\n\t" /* arg7->r9 */ \ "ld 10, 64(12)\n\t" /* arg8->r10 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+11]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ _argvec[2+10] = (unsigned long)arg10; \ _argvec[2+11] = (unsigned long)arg11; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-144\n\t" /* expand stack frame */ \ /* arg11 */ \ "ld 3,88(12)\n\t" \ "std 3,112(1)\n\t" \ /* arg10 */ \ "ld 3,80(12)\n\t" \ "std 3,104(1)\n\t" \ /* arg9 */ \ "ld 3,72(12)\n\t" \ "std 3,96(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ 
"ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 9, 56(12)\n\t" /* arg7->r9 */ \ "ld 10, 64(12)\n\t" /* arg8->r10 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3+12]; \ volatile unsigned long _res; \ /* _argvec[0] holds current r2 across the call */ \ _argvec[1] = (unsigned long)_orig.r2; \ _argvec[2] = (unsigned long)_orig.nraddr; \ _argvec[2+1] = (unsigned long)arg1; \ _argvec[2+2] = (unsigned long)arg2; \ _argvec[2+3] = (unsigned long)arg3; \ _argvec[2+4] = (unsigned long)arg4; \ _argvec[2+5] = (unsigned long)arg5; \ _argvec[2+6] = (unsigned long)arg6; \ _argvec[2+7] = (unsigned long)arg7; \ _argvec[2+8] = (unsigned long)arg8; \ _argvec[2+9] = (unsigned long)arg9; \ _argvec[2+10] = (unsigned long)arg10; \ _argvec[2+11] = (unsigned long)arg11; \ _argvec[2+12] = (unsigned long)arg12; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "mr 12,%1\n\t" \ "std 2,-16(12)\n\t" /* save tocptr */ \ "ld 2,-8(12)\n\t" /* use nraddr's tocptr */ \ "addi 1,1,-144\n\t" /* expand stack frame */ \ /* arg12 */ \ "ld 3,96(12)\n\t" \ "std 3,120(1)\n\t" \ /* arg11 */ \ "ld 3,88(12)\n\t" \ "std 3,112(1)\n\t" \ /* arg10 */ \ "ld 3,80(12)\n\t" \ "std 3,104(1)\n\t" \ /* arg9 */ \ "ld 3,72(12)\n\t" \ "std 3,96(1)\n\t" \ /* args1-8 */ \ "ld 3, 8(12)\n\t" /* arg1->r3 */ \ "ld 4, 16(12)\n\t" /* arg2->r4 */ \ "ld 5, 24(12)\n\t" /* arg3->r5 */ \ "ld 6, 32(12)\n\t" /* arg4->r6 */ \ "ld 7, 40(12)\n\t" /* arg5->r7 */ \ "ld 8, 48(12)\n\t" /* arg6->r8 */ \ "ld 9, 56(12)\n\t" /* arg7->r9 */ \ "ld 10, 64(12)\n\t" /* arg8->r10 */ \ "ld 12, 0(12)\n\t" /* target->r12 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R12 \ "mr 12,%1\n\t" \ "mr %0,3\n\t" \ "ld 2,-16(12)\n\t" /* restore tocptr */ \ VALGRIND_RESTORE_STACK \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[2]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r28" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_ppc64le_linux */ /* ------------------------- arm-linux ------------------------- */ #if defined(PLAT_arm_linux) /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS "r0", "r1", "r2", "r3","r4", "r12", "r14" /* Macros to save and align the stack before making a function call and restore it afterwards as gcc may not keep the stack pointer aligned if it doesn't realise calls are being made to other functions. */ /* This is a bit tricky. We store the original stack pointer in r10 as it is callee-saves. gcc doesn't allow the use of r11 for some reason. Also, we can't directly "bic" the stack pointer in thumb mode since r13 isn't an allowed register number in that context. So use r4 as a temporary, since that is about to get trashed anyway, just after each use of this macro. Side effect is we need to be very careful about any future changes, since VALGRIND_ALIGN_STACK simply assumes r4 is usable. 
*/ #define VALGRIND_ALIGN_STACK \ "mov r10, sp\n\t" \ "mov r4, sp\n\t" \ "bic r4, r4, #7\n\t" \ "mov sp, r4\n\t" #define VALGRIND_RESTORE_STACK \ "mov sp, r10\n\t" /* These CALL_FN_ macros assume that on arm-linux, sizeof(unsigned long) == 4. */ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #4] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, 
arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #4 \n\t" \ "ldr r0, [%1, #20] \n\t" \ "push {r0} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "push {r0, r1} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #4 \n\t" \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "ldr r2, [%1, #28] \n\t" \ "push {r0, r1, r2} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "ldr r2, [%1, #28] \n\t" \ "ldr r3, [%1, 
#32] \n\t" \ "push {r0, r1, r2, r3} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #4 \n\t" \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "ldr r2, [%1, #28] \n\t" \ "ldr r3, [%1, #32] \n\t" \ "ldr r4, [%1, #36] \n\t" \ "push {r0, r1, r2, r3, r4} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #40] \n\t" \ "push {r0} \n\t" \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "ldr r2, [%1, #28] \n\t" \ "ldr r3, [%1, #32] \n\t" \ "ldr r4, [%1, #36] \n\t" \ "push {r0, r1, r2, r3, r4} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ 
_argvec[11] = (unsigned long)(arg11); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #4 \n\t" \ "ldr r0, [%1, #40] \n\t" \ "ldr r1, [%1, #44] \n\t" \ "push {r0, r1} \n\t" \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "ldr r2, [%1, #28] \n\t" \ "ldr r3, [%1, #32] \n\t" \ "ldr r4, [%1, #36] \n\t" \ "push {r0, r1, r2, r3, r4} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ _argvec[12] = (unsigned long)(arg12); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr r0, [%1, #40] \n\t" \ "ldr r1, [%1, #44] \n\t" \ "ldr r2, [%1, #48] \n\t" \ "push {r0, r1, r2} \n\t" \ "ldr r0, [%1, #20] \n\t" \ "ldr r1, [%1, #24] \n\t" \ "ldr r2, [%1, #28] \n\t" \ "ldr r3, [%1, #32] \n\t" \ "ldr r4, [%1, #36] \n\t" \ "push {r0, r1, r2, r3, r4} \n\t" \ "ldr r0, [%1, #4] \n\t" \ "ldr r1, [%1, #8] \n\t" \ "ldr r2, [%1, #12] \n\t" \ "ldr r3, [%1, #16] \n\t" \ "ldr r4, [%1] \n\t" /* target->r4 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \ VALGRIND_RESTORE_STACK \ "mov %0, r0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r10" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_arm_linux */ /* ------------------------ arm64-linux ------------------------ */ #if defined(PLAT_arm64_linux) /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS \ "x0", "x1", "x2", "x3","x4", "x5", "x6", "x7", "x8", "x9", \ "x10", "x11", "x12", "x13", "x14", "x15", "x16", "x17", \ "x18", "x19", "x20", "x30", \ "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8", "v9", \ "v10", "v11", "v12", "v13", "v14", "v15", "v16", "v17", \ "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25", \ "v26", "v27", "v28", "v29", "v30", "v31" /* x21 is callee-saved, so we can use it to save and restore SP around the hidden call. */ #define VALGRIND_ALIGN_STACK \ "mov x21, sp\n\t" \ "bic sp, x21, #15\n\t" #define VALGRIND_RESTORE_STACK \ "mov sp, x21\n\t" /* These CALL_FN_ macros assume that on arm64-linux, sizeof(unsigned long) == 8. 
*/ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned 
long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x6, [%1, #56] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x6, [%1, #56] \n\t" \ "ldr x7, [%1, #64] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ 
lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #0x20 \n\t" \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x6, [%1, #56] \n\t" \ "ldr x7, [%1, #64] \n\t" \ "ldr x8, [%1, #72] \n\t" \ "str x8, [sp, #0] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #0x20 \n\t" \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x6, [%1, #56] \n\t" \ "ldr x7, [%1, #64] \n\t" \ "ldr x8, [%1, #72] \n\t" \ "str x8, [sp, #0] \n\t" \ "ldr x8, [%1, #80] \n\t" \ "str x8, [sp, #8] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #0x30 \n\t" \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x6, [%1, #56] \n\t" \ "ldr x7, [%1, #64] \n\t" \ "ldr x8, [%1, #72] \n\t" \ "str x8, [sp, #0] \n\t" \ 
"ldr x8, [%1, #80] \n\t" \ "str x8, [sp, #8] \n\t" \ "ldr x8, [%1, #88] \n\t" \ "str x8, [sp, #16] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10,arg11, \ arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ _argvec[12] = (unsigned long)(arg12); \ __asm__ volatile( \ VALGRIND_ALIGN_STACK \ "sub sp, sp, #0x30 \n\t" \ "ldr x0, [%1, #8] \n\t" \ "ldr x1, [%1, #16] \n\t" \ "ldr x2, [%1, #24] \n\t" \ "ldr x3, [%1, #32] \n\t" \ "ldr x4, [%1, #40] \n\t" \ "ldr x5, [%1, #48] \n\t" \ "ldr x6, [%1, #56] \n\t" \ "ldr x7, [%1, #64] \n\t" \ "ldr x8, [%1, #72] \n\t" \ "str x8, [sp, #0] \n\t" \ "ldr x8, [%1, #80] \n\t" \ "str x8, [sp, #8] \n\t" \ "ldr x8, [%1, #88] \n\t" \ "str x8, [sp, #16] \n\t" \ "ldr x8, [%1, #96] \n\t" \ "str x8, [sp, #24] \n\t" \ "ldr x8, [%1] \n\t" /* target->x8 */ \ VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_X8 \ VALGRIND_RESTORE_STACK \ "mov %0, x0" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "x21" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_arm64_linux */ /* ------------------------- s390x-linux ------------------------- */ #if defined(PLAT_s390x_linux) /* Similar workaround as amd64 (see above), but we use r11 as frame pointer and save the old r11 in r7. r11 might be used for argvec, therefore we copy argvec in r1 since r1 is clobbered after the call anyway. */ #if defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM) # define __FRAME_POINTER \ ,"d"(__builtin_dwarf_cfa()) # define VALGRIND_CFI_PROLOGUE \ ".cfi_remember_state\n\t" \ "lgr 1,%1\n\t" /* copy the argvec pointer in r1 */ \ "lgr 7,11\n\t" \ "lgr 11,%2\n\t" \ ".cfi_def_cfa r11, 0\n\t" # define VALGRIND_CFI_EPILOGUE \ "lgr 11, 7\n\t" \ ".cfi_restore_state\n\t" #else # define __FRAME_POINTER # define VALGRIND_CFI_PROLOGUE \ "lgr 1,%1\n\t" # define VALGRIND_CFI_EPILOGUE #endif /* Nb: On s390 the stack pointer is properly aligned *at all times* according to the s390 GCC maintainer. (The ABI specification is not precise in this regard.) Therefore, VALGRIND_ALIGN_STACK and VALGRIND_RESTORE_STACK are not defined here. */ /* These regs are trashed by the hidden call. Note that we overwrite r14 in s390_irgen_noredir (VEX/priv/guest_s390_irgen.c) to give the function a proper return address. All others are ABI defined call clobbers. 
*/ #define __CALLER_SAVED_REGS "0","1","2","3","4","5","14", \ "f0","f1","f2","f3","f4","f5","f6","f7" /* Nb: Although r11 is modified in the asm snippets below (inside VALGRIND_CFI_PROLOGUE) it is not listed in the clobber section, for two reasons: (1) r11 is restored in VALGRIND_CFI_EPILOGUE, so effectively it is not modified (2) GCC will complain that r11 cannot appear inside a clobber section, when compiled with -O -fno-omit-frame-pointer */ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-160\n\t" \ "lg 1, 0(1)\n\t" /* target->r1 */ \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,160\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "d" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) /* The call abi has the arguments in r2-r6 and stack */ #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-160\n\t" \ "lg 2, 8(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,160\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1, arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-160\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,160\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1, arg2, arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-160\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,160\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1, arg2, arg3, arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-160\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 1, 0(1)\n\t" \ 
VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,160\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1, arg2, arg3, arg4, arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-160\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,160\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-168\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,168\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6, arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-176\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "mvc 168(8,15), 56(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,176\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6, arg7 ,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ __asm__ 
volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-184\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "mvc 168(8,15), 56(1)\n\t" \ "mvc 176(8,15), 64(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,184\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6, arg7 ,arg8, arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-192\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "mvc 168(8,15), 56(1)\n\t" \ "mvc 176(8,15), 64(1)\n\t" \ "mvc 184(8,15), 72(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,192\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6, arg7 ,arg8, arg9, arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ _argvec[10] = (unsigned long)arg10; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-200\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "mvc 168(8,15), 56(1)\n\t" \ "mvc 176(8,15), 64(1)\n\t" \ "mvc 184(8,15), 72(1)\n\t" \ "mvc 192(8,15), 80(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,200\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6, arg7 ,arg8, arg9, arg10, arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ _argvec[10] = (unsigned long)arg10; \ _argvec[11] = (unsigned long)arg11; \ __asm__ volatile( \ 
VALGRIND_CFI_PROLOGUE \ "aghi 15,-208\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "mvc 168(8,15), 56(1)\n\t" \ "mvc 176(8,15), 64(1)\n\t" \ "mvc 184(8,15), 72(1)\n\t" \ "mvc 192(8,15), 80(1)\n\t" \ "mvc 200(8,15), 88(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,208\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1, arg2, arg3, arg4, arg5, \ arg6, arg7 ,arg8, arg9, arg10, arg11, arg12)\ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)arg1; \ _argvec[2] = (unsigned long)arg2; \ _argvec[3] = (unsigned long)arg3; \ _argvec[4] = (unsigned long)arg4; \ _argvec[5] = (unsigned long)arg5; \ _argvec[6] = (unsigned long)arg6; \ _argvec[7] = (unsigned long)arg7; \ _argvec[8] = (unsigned long)arg8; \ _argvec[9] = (unsigned long)arg9; \ _argvec[10] = (unsigned long)arg10; \ _argvec[11] = (unsigned long)arg11; \ _argvec[12] = (unsigned long)arg12; \ __asm__ volatile( \ VALGRIND_CFI_PROLOGUE \ "aghi 15,-216\n\t" \ "lg 2, 8(1)\n\t" \ "lg 3,16(1)\n\t" \ "lg 4,24(1)\n\t" \ "lg 5,32(1)\n\t" \ "lg 6,40(1)\n\t" \ "mvc 160(8,15), 48(1)\n\t" \ "mvc 168(8,15), 56(1)\n\t" \ "mvc 176(8,15), 64(1)\n\t" \ "mvc 184(8,15), 72(1)\n\t" \ "mvc 192(8,15), 80(1)\n\t" \ "mvc 200(8,15), 88(1)\n\t" \ "mvc 208(8,15), 96(1)\n\t" \ "lg 1, 0(1)\n\t" \ VALGRIND_CALL_NOREDIR_R1 \ "lgr %0, 2\n\t" \ "aghi 15,216\n\t" \ VALGRIND_CFI_EPILOGUE \ : /*out*/ "=d" (_res) \ : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \ : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_s390x_linux */ /* ------------------------- mips32-linux ----------------------- */ #if defined(PLAT_mips32_linux) /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS "$2", "$3", "$4", "$5", "$6", \ "$7", "$8", "$9", "$10", "$11", "$12", "$13", "$14", "$15", "$24", \ "$25", "$31" /* These CALL_FN_ macros assume that on mips-linux, sizeof(unsigned long) == 4. 
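   A note on the stack adjustments used below: besides saving $28 (gp)
   and $31 (ra), the macros drop $29 (sp) by a further 16 bytes before
   the hidden call, since the MIPS o32 calling convention requires the
   caller to reserve a 16-byte argument save area for the register
   arguments $4-$7 even when the arguments are passed in registers.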
*/ #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[1]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "subu $29, $29, 16 \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 16\n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[2]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "subu $29, $29, 16 \n\t" \ "lw $4, 4(%1) \n\t" /* arg1*/ \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 16 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[3]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "subu $29, $29, 16 \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 16 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[4]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "subu $29, $29, 16 \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 16 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[5]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "subu $29, $29, 16 \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw 
$25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 16 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[6]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 24\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 24 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[7]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 32\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "nop\n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 32 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[8]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 32\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 28(%1) \n\t" \ "sw $4, 24($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 32 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ 
lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[9]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 40\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 28(%1) \n\t" \ "sw $4, 24($29) \n\t" \ "lw $4, 32(%1) \n\t" \ "sw $4, 28($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 40 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[10]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 40\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 28(%1) \n\t" \ "sw $4, 24($29) \n\t" \ "lw $4, 32(%1) \n\t" \ "sw $4, 28($29) \n\t" \ "lw $4, 36(%1) \n\t" \ "sw $4, 32($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 40 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[11]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 48\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 28(%1) \n\t" \ "sw $4, 24($29) 
\n\t" \ "lw $4, 32(%1) \n\t" \ "sw $4, 28($29) \n\t" \ "lw $4, 36(%1) \n\t" \ "sw $4, 32($29) \n\t" \ "lw $4, 40(%1) \n\t" \ "sw $4, 36($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 48 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[12]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 48\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 28(%1) \n\t" \ "sw $4, 24($29) \n\t" \ "lw $4, 32(%1) \n\t" \ "sw $4, 28($29) \n\t" \ "lw $4, 36(%1) \n\t" \ "sw $4, 32($29) \n\t" \ "lw $4, 40(%1) \n\t" \ "sw $4, 36($29) \n\t" \ "lw $4, 44(%1) \n\t" \ "sw $4, 40($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 48 \n\t" \ "lw $28, 0($29) \n\t" \ "lw $31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long _argvec[13]; \ volatile unsigned long _res; \ _argvec[0] = (unsigned long)_orig.nraddr; \ _argvec[1] = (unsigned long)(arg1); \ _argvec[2] = (unsigned long)(arg2); \ _argvec[3] = (unsigned long)(arg3); \ _argvec[4] = (unsigned long)(arg4); \ _argvec[5] = (unsigned long)(arg5); \ _argvec[6] = (unsigned long)(arg6); \ _argvec[7] = (unsigned long)(arg7); \ _argvec[8] = (unsigned long)(arg8); \ _argvec[9] = (unsigned long)(arg9); \ _argvec[10] = (unsigned long)(arg10); \ _argvec[11] = (unsigned long)(arg11); \ _argvec[12] = (unsigned long)(arg12); \ __asm__ volatile( \ "subu $29, $29, 8 \n\t" \ "sw $28, 0($29) \n\t" \ "sw $31, 4($29) \n\t" \ "lw $4, 20(%1) \n\t" \ "subu $29, $29, 56\n\t" \ "sw $4, 16($29) \n\t" \ "lw $4, 24(%1) \n\t" \ "sw $4, 20($29) \n\t" \ "lw $4, 28(%1) \n\t" \ "sw $4, 24($29) \n\t" \ "lw $4, 32(%1) \n\t" \ "sw $4, 28($29) \n\t" \ "lw $4, 36(%1) \n\t" \ "sw $4, 32($29) \n\t" \ "lw $4, 40(%1) \n\t" \ "sw $4, 36($29) \n\t" \ "lw $4, 44(%1) \n\t" \ "sw $4, 40($29) \n\t" \ "lw $4, 48(%1) \n\t" \ "sw $4, 44($29) \n\t" \ "lw $4, 4(%1) \n\t" \ "lw $5, 8(%1) \n\t" \ "lw $6, 12(%1) \n\t" \ "lw $7, 16(%1) \n\t" \ "lw $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "addu $29, $29, 56 \n\t" \ "lw $28, 0($29) \n\t" \ "lw 
$31, 4($29) \n\t" \ "addu $29, $29, 8 \n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) _res; \ } while (0) #endif /* PLAT_mips32_linux */ /* ------------------------- mips64-linux ------------------------- */ #if defined(PLAT_mips64_linux) /* These regs are trashed by the hidden call. */ #define __CALLER_SAVED_REGS "$2", "$3", "$4", "$5", "$6", \ "$7", "$8", "$9", "$10", "$11", "$12", "$13", "$14", "$15", "$24", \ "$25", "$31" /* These CALL_FN_ macros assume that on mips64-linux, sizeof(long long) == 8. */ #define MIPS64_LONG2REG_CAST(x) ((long long)(long)x) #define CALL_FN_W_v(lval, orig) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[1]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ __asm__ volatile( \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "0" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_W(lval, orig, arg1) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[2]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" /* arg1*/ \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[3]; \ volatile unsigned long long _res; \ _argvec[0] = _orig.nraddr; \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[4]; \ volatile unsigned long long _res; \ _argvec[0] = _orig.nraddr; \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[5]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : 
/*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[6]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[7]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[8]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ _argvec[7] = MIPS64_LONG2REG_CAST(arg7); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $10, 56(%1)\n\t" \ "ld $25, 0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[9]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ _argvec[7] = MIPS64_LONG2REG_CAST(arg7); \ _argvec[8] = MIPS64_LONG2REG_CAST(arg8); \ __asm__ volatile( \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $10, 56(%1)\n\t" \ "ld $11, 64(%1)\n\t" \ "ld $25, 
0(%1) \n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[10]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ _argvec[7] = MIPS64_LONG2REG_CAST(arg7); \ _argvec[8] = MIPS64_LONG2REG_CAST(arg8); \ _argvec[9] = MIPS64_LONG2REG_CAST(arg9); \ __asm__ volatile( \ "dsubu $29, $29, 8\n\t" \ "ld $4, 72(%1)\n\t" \ "sd $4, 0($29)\n\t" \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $10, 56(%1)\n\t" \ "ld $11, 64(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "daddu $29, $29, 8\n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ arg7,arg8,arg9,arg10) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[11]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ _argvec[7] = MIPS64_LONG2REG_CAST(arg7); \ _argvec[8] = MIPS64_LONG2REG_CAST(arg8); \ _argvec[9] = MIPS64_LONG2REG_CAST(arg9); \ _argvec[10] = MIPS64_LONG2REG_CAST(arg10); \ __asm__ volatile( \ "dsubu $29, $29, 16\n\t" \ "ld $4, 72(%1)\n\t" \ "sd $4, 0($29)\n\t" \ "ld $4, 80(%1)\n\t" \ "sd $4, 8($29)\n\t" \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $10, 56(%1)\n\t" \ "ld $11, 64(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "daddu $29, $29, 16\n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[12]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ _argvec[7] = MIPS64_LONG2REG_CAST(arg7); \ _argvec[8] = MIPS64_LONG2REG_CAST(arg8); \ _argvec[9] = MIPS64_LONG2REG_CAST(arg9); \ _argvec[10] = MIPS64_LONG2REG_CAST(arg10); \ _argvec[11] = MIPS64_LONG2REG_CAST(arg11); \ __asm__ volatile( \ "dsubu $29, $29, 24\n\t" \ "ld $4, 72(%1)\n\t" \ "sd $4, 0($29)\n\t" \ "ld $4, 80(%1)\n\t" \ "sd $4, 8($29)\n\t" 
\ "ld $4, 88(%1)\n\t" \ "sd $4, 16($29)\n\t" \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $10, 56(%1)\n\t" \ "ld $11, 64(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "daddu $29, $29, 24\n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ arg6,arg7,arg8,arg9,arg10, \ arg11,arg12) \ do { \ volatile OrigFn _orig = (orig); \ volatile unsigned long long _argvec[13]; \ volatile unsigned long long _res; \ _argvec[0] = MIPS64_LONG2REG_CAST(_orig.nraddr); \ _argvec[1] = MIPS64_LONG2REG_CAST(arg1); \ _argvec[2] = MIPS64_LONG2REG_CAST(arg2); \ _argvec[3] = MIPS64_LONG2REG_CAST(arg3); \ _argvec[4] = MIPS64_LONG2REG_CAST(arg4); \ _argvec[5] = MIPS64_LONG2REG_CAST(arg5); \ _argvec[6] = MIPS64_LONG2REG_CAST(arg6); \ _argvec[7] = MIPS64_LONG2REG_CAST(arg7); \ _argvec[8] = MIPS64_LONG2REG_CAST(arg8); \ _argvec[9] = MIPS64_LONG2REG_CAST(arg9); \ _argvec[10] = MIPS64_LONG2REG_CAST(arg10); \ _argvec[11] = MIPS64_LONG2REG_CAST(arg11); \ _argvec[12] = MIPS64_LONG2REG_CAST(arg12); \ __asm__ volatile( \ "dsubu $29, $29, 32\n\t" \ "ld $4, 72(%1)\n\t" \ "sd $4, 0($29)\n\t" \ "ld $4, 80(%1)\n\t" \ "sd $4, 8($29)\n\t" \ "ld $4, 88(%1)\n\t" \ "sd $4, 16($29)\n\t" \ "ld $4, 96(%1)\n\t" \ "sd $4, 24($29)\n\t" \ "ld $4, 8(%1)\n\t" \ "ld $5, 16(%1)\n\t" \ "ld $6, 24(%1)\n\t" \ "ld $7, 32(%1)\n\t" \ "ld $8, 40(%1)\n\t" \ "ld $9, 48(%1)\n\t" \ "ld $10, 56(%1)\n\t" \ "ld $11, 64(%1)\n\t" \ "ld $25, 0(%1)\n\t" /* target->t9 */ \ VALGRIND_CALL_NOREDIR_T9 \ "daddu $29, $29, 32\n\t" \ "move %0, $2\n" \ : /*out*/ "=r" (_res) \ : /*in*/ "r" (&_argvec[0]) \ : /*trash*/ "memory", __CALLER_SAVED_REGS \ ); \ lval = (__typeof__(lval)) (long)_res; \ } while (0) #endif /* PLAT_mips64_linux */ /* ------------------------------------------------------------------ */ /* ARCHITECTURE INDEPENDENT MACROS for CLIENT REQUESTS. */ /* */ /* ------------------------------------------------------------------ */ /* Some request codes. There are many more of these, but most are not exposed to end-user view. These are the public ones, all of the form 0x1000 + small_number. Core ones are in the range 0x00000000--0x0000ffff. The non-public ones start at 0x2000. */ /* These macros are used by tools -- they must be public, but don't embed them into other programs. */ #define VG_USERREQ_TOOL_BASE(a,b) \ ((unsigned int)(((a)&0xff) << 24 | ((b)&0xff) << 16)) #define VG_IS_TOOL_USERREQ(a, b, v) \ (VG_USERREQ_TOOL_BASE(a,b) == ((v) & 0xffff0000)) /* !! ABIWARNING !! ABIWARNING !! ABIWARNING !! ABIWARNING !! This enum comprises an ABI exported by Valgrind to programs which use client requests. DO NOT CHANGE THE NUMERIC VALUES OF THESE ENTRIES, NOR DELETE ANY -- add new ones at the end of the most relevant group. */ typedef enum { VG_USERREQ__RUNNING_ON_VALGRIND = 0x1001, VG_USERREQ__DISCARD_TRANSLATIONS = 0x1002, /* These allow any function to be called from the simulated CPU but run on the real CPU. Nb: the first arg passed to the function is always the ThreadId of the running thread! So CLIENT_CALL0 actually requires a 1 arg function, etc. */ VG_USERREQ__CLIENT_CALL0 = 0x1101, VG_USERREQ__CLIENT_CALL1 = 0x1102, VG_USERREQ__CLIENT_CALL2 = 0x1103, VG_USERREQ__CLIENT_CALL3 = 0x1104, /* Can be useful in regression testing suites -- eg. 
can send Valgrind's output to /dev/null and still count errors. */ VG_USERREQ__COUNT_ERRORS = 0x1201, /* Allows the client program and/or gdbserver to execute a monitor command. */ VG_USERREQ__GDB_MONITOR_COMMAND = 0x1202, /* These are useful and can be interpreted by any tool that tracks malloc() et al, by using vg_replace_malloc.c. */ VG_USERREQ__MALLOCLIKE_BLOCK = 0x1301, VG_USERREQ__RESIZEINPLACE_BLOCK = 0x130b, VG_USERREQ__FREELIKE_BLOCK = 0x1302, /* Memory pool support. */ VG_USERREQ__CREATE_MEMPOOL = 0x1303, VG_USERREQ__DESTROY_MEMPOOL = 0x1304, VG_USERREQ__MEMPOOL_ALLOC = 0x1305, VG_USERREQ__MEMPOOL_FREE = 0x1306, VG_USERREQ__MEMPOOL_TRIM = 0x1307, VG_USERREQ__MOVE_MEMPOOL = 0x1308, VG_USERREQ__MEMPOOL_CHANGE = 0x1309, VG_USERREQ__MEMPOOL_EXISTS = 0x130a, /* Allow printfs to valgrind log. */ /* The first two pass the va_list argument by value, which assumes it is the same size as or smaller than a UWord, which generally isn't the case. Hence are deprecated. The second two pass the vargs by reference and so are immune to this problem. */ /* both :: char* fmt, va_list vargs (DEPRECATED) */ VG_USERREQ__PRINTF = 0x1401, VG_USERREQ__PRINTF_BACKTRACE = 0x1402, /* both :: char* fmt, va_list* vargs */ VG_USERREQ__PRINTF_VALIST_BY_REF = 0x1403, VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF = 0x1404, /* Stack support. */ VG_USERREQ__STACK_REGISTER = 0x1501, VG_USERREQ__STACK_DEREGISTER = 0x1502, VG_USERREQ__STACK_CHANGE = 0x1503, /* Wine support */ VG_USERREQ__LOAD_PDB_DEBUGINFO = 0x1601, /* Querying of debug info. */ VG_USERREQ__MAP_IP_TO_SRCLOC = 0x1701, /* Disable/enable error reporting level. Takes a single Word arg which is the delta to this thread's error disablement indicator. Hence 1 disables or further disables errors, and -1 moves back towards enablement. Other values are not allowed. */ VG_USERREQ__CHANGE_ERR_DISABLEMENT = 0x1801, /* Some requests used for Valgrind internal, such as self-test or self-hosting. */ /* Initialise IR injection */ VG_USERREQ__VEX_INIT_FOR_IRI = 0x1901, /* Used by Inner Valgrind to inform Outer Valgrind where to find the list of inner guest threads */ VG_USERREQ__INNER_THREADS = 0x1902 } Vg_ClientRequest; #if !defined(__GNUC__) # define __extension__ /* */ #endif /* Returns the number of Valgrinds this code is running under. That is, 0 if running natively, 1 if running under Valgrind, 2 if running under Valgrind which is running under another Valgrind, etc. */ #define RUNNING_ON_VALGRIND \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* if not */, \ VG_USERREQ__RUNNING_ON_VALGRIND, \ 0, 0, 0, 0, 0) \ /* Discard translation of code in the range [_qzz_addr .. _qzz_addr + _qzz_len - 1]. Useful if you are debugging a JITter or some such, since it provides a way to make sure valgrind will retranslate the invalidated area. Returns no value. */ #define VALGRIND_DISCARD_TRANSLATIONS(_qzz_addr,_qzz_len) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DISCARD_TRANSLATIONS, \ _qzz_addr, _qzz_len, 0, 0, 0) #define VALGRIND_INNER_THREADS(_qzz_addr) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__INNER_THREADS, \ _qzz_addr, 0, 0, 0, 0) /* These requests are for getting Valgrind itself to print something. Possibly with a backtrace. This is a really ugly hack. The return value is the number of characters printed, excluding the "**** " part at the start and the backtrace (if present). */ #if defined(__GNUC__) || defined(__INTEL_COMPILER) && !defined(_MSC_VER) /* Modern GCC will optimize the static routine out if unused, and unused attribute will shut down warnings about it. 
*/ static int VALGRIND_PRINTF(const char *format, ...) __attribute__((format(__printf__, 1, 2), __unused__)); #endif static int #if defined(_MSC_VER) __inline #endif VALGRIND_PRINTF(const char *format, ...) { #if defined(NVALGRIND) (void)format; return 0; #else /* NVALGRIND */ #if defined(_MSC_VER) || defined(__MINGW64__) uintptr_t _qzz_res; #else unsigned long _qzz_res; #endif va_list vargs; va_start(vargs, format); #if defined(_MSC_VER) || defined(__MINGW64__) _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0, VG_USERREQ__PRINTF_VALIST_BY_REF, (uintptr_t)format, (uintptr_t)&vargs, 0, 0, 0); #else _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0, VG_USERREQ__PRINTF_VALIST_BY_REF, (unsigned long)format, (unsigned long)&vargs, 0, 0, 0); #endif va_end(vargs); return (int)_qzz_res; #endif /* NVALGRIND */ } #if defined(__GNUC__) || defined(__INTEL_COMPILER) && !defined(_MSC_VER) static int VALGRIND_PRINTF_BACKTRACE(const char *format, ...) __attribute__((format(__printf__, 1, 2), __unused__)); #endif static int #if defined(_MSC_VER) __inline #endif VALGRIND_PRINTF_BACKTRACE(const char *format, ...) { #if defined(NVALGRIND) (void)format; return 0; #else /* NVALGRIND */ #if defined(_MSC_VER) || defined(__MINGW64__) uintptr_t _qzz_res; #else unsigned long _qzz_res; #endif va_list vargs; va_start(vargs, format); #if defined(_MSC_VER) || defined(__MINGW64__) _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0, VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF, (uintptr_t)format, (uintptr_t)&vargs, 0, 0, 0); #else _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0, VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF, (unsigned long)format, (unsigned long)&vargs, 0, 0, 0); #endif va_end(vargs); return (int)_qzz_res; #endif /* NVALGRIND */ } /* These requests allow control to move from the simulated CPU to the real CPU, calling an arbitrary function. Note that the current ThreadId is inserted as the first argument. So this call: VALGRIND_NON_SIMD_CALL2(f, arg1, arg2) requires f to have this signature: Word f(Word tid, Word arg1, Word arg2) where "Word" is a word-sized type. Note that these client requests are not entirely reliable. For example, if you call a function with them that subsequently calls printf(), there's a high chance Valgrind will crash. Generally, your prospects of these working are made higher if the called function does not refer to any global variables, and does not refer to any libc or other functions (printf et al). Any kind of entanglement with libc or dynamic linking is likely to have a bad outcome, for tricky reasons which we've grappled with a lot in the past. */ #define VALGRIND_NON_SIMD_CALL0(_qyy_fn) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__CLIENT_CALL0, \ _qyy_fn, \ 0, 0, 0, 0) #define VALGRIND_NON_SIMD_CALL1(_qyy_fn, _qyy_arg1) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__CLIENT_CALL1, \ _qyy_fn, \ _qyy_arg1, 0, 0, 0) #define VALGRIND_NON_SIMD_CALL2(_qyy_fn, _qyy_arg1, _qyy_arg2) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__CLIENT_CALL2, \ _qyy_fn, \ _qyy_arg1, _qyy_arg2, 0, 0) #define VALGRIND_NON_SIMD_CALL3(_qyy_fn, _qyy_arg1, _qyy_arg2, _qyy_arg3) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \ VG_USERREQ__CLIENT_CALL3, \ _qyy_fn, \ _qyy_arg1, _qyy_arg2, \ _qyy_arg3, 0) /* Counts the number of errors that have been recorded by a tool. Nb: the tool must record the errors with VG_(maybe_record_error)() or VG_(unique_error)() for them to be counted. 
*/ #define VALGRIND_COUNT_ERRORS \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR( \ 0 /* default return */, \ VG_USERREQ__COUNT_ERRORS, \ 0, 0, 0, 0, 0) /* Several Valgrind tools (Memcheck, Massif, Helgrind, DRD) rely on knowing when heap blocks are allocated in order to give accurate results. This happens automatically for the standard allocator functions such as malloc(), calloc(), realloc(), memalign(), new, new[], free(), delete, delete[], etc. But if your program uses a custom allocator, this doesn't automatically happen, and Valgrind will not do as well. For example, if you allocate superblocks with mmap() and then allocates chunks of the superblocks, all Valgrind's observations will be at the mmap() level and it won't know that the chunks should be considered separate entities. In Memcheck's case, that means you probably won't get heap block overrun detection (because there won't be redzones marked as unaddressable) and you definitely won't get any leak detection. The following client requests allow a custom allocator to be annotated so that it can be handled accurately by Valgrind. VALGRIND_MALLOCLIKE_BLOCK marks a region of memory as having been allocated by a malloc()-like function. For Memcheck (an illustrative case), this does two things: - It records that the block has been allocated. This means any addresses within the block mentioned in error messages will be identified as belonging to the block. It also means that if the block isn't freed it will be detected by the leak checker. - It marks the block as being addressable and undefined (if 'is_zeroed' is not set), or addressable and defined (if 'is_zeroed' is set). This controls how accesses to the block by the program are handled. 'addr' is the start of the usable block (ie. after any redzone), 'sizeB' is its size. 'rzB' is the redzone size if the allocator can apply redzones -- these are blocks of padding at the start and end of each block. Adding redzones is recommended as it makes it much more likely Valgrind will spot block overruns. `is_zeroed' indicates if the memory is zeroed (or filled with another predictable value), as is the case for calloc(). VALGRIND_MALLOCLIKE_BLOCK should be put immediately after the point where a heap block -- that will be used by the client program -- is allocated. It's best to put it at the outermost level of the allocator if possible; for example, if you have a function my_alloc() which calls internal_alloc(), and the client request is put inside internal_alloc(), stack traces relating to the heap block will contain entries for both my_alloc() and internal_alloc(), which is probably not what you want. For Memcheck users: if you use VALGRIND_MALLOCLIKE_BLOCK to carve out custom blocks from within a heap block, B, that has been allocated with malloc/calloc/new/etc, then block B will be *ignored* during leak-checking -- the custom blocks will take precedence. VALGRIND_FREELIKE_BLOCK is the partner to VALGRIND_MALLOCLIKE_BLOCK. For Memcheck, it does two things: - It records that the block has been deallocated. This assumes that the block was annotated as having been allocated via VALGRIND_MALLOCLIKE_BLOCK. Otherwise, an error will be issued. - It marks the block as being unaddressable. VALGRIND_FREELIKE_BLOCK should be put immediately after the point where a heap block is deallocated. VALGRIND_RESIZEINPLACE_BLOCK informs a tool about reallocation. For Memcheck, it does four things: - It records that the size of a block has been changed. 
This assumes that the block was annotated as having been allocated via VALGRIND_MALLOCLIKE_BLOCK. Otherwise, an error will be issued. - If the block shrunk, it marks the freed memory as being unaddressable. - If the block grew, it marks the new area as undefined and defines a red zone past the end of the new block. - The V-bits of the overlap between the old and the new block are preserved. VALGRIND_RESIZEINPLACE_BLOCK should be put after allocation of the new block and before deallocation of the old block. In many cases, these three client requests will not be enough to get your allocator working well with Memcheck. More specifically, if your allocator writes to freed blocks in any way then a VALGRIND_MAKE_MEM_UNDEFINED call will be necessary to mark the memory as addressable just before the zeroing occurs, otherwise you'll get a lot of invalid write errors. For example, you'll need to do this if your allocator recycles freed blocks, but it zeroes them before handing them back out (via VALGRIND_MALLOCLIKE_BLOCK). Alternatively, if your allocator reuses freed blocks for allocator-internal data structures, VALGRIND_MAKE_MEM_UNDEFINED calls will also be necessary. Really, what's happening is a blurring of the lines between the client program and the allocator... after VALGRIND_FREELIKE_BLOCK is called, the memory should be considered unaddressable to the client program, but the allocator knows more than the rest of the client program and so may be able to safely access it. Extra client requests are necessary for Valgrind to understand the distinction between the allocator and the rest of the program. Ignored if addr == 0. */ #define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__MALLOCLIKE_BLOCK, \ addr, sizeB, rzB, is_zeroed, 0) /* See the comment for VALGRIND_MALLOCLIKE_BLOCK for details. Ignored if addr == 0. */ #define VALGRIND_RESIZEINPLACE_BLOCK(addr, oldSizeB, newSizeB, rzB) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__RESIZEINPLACE_BLOCK, \ addr, oldSizeB, newSizeB, rzB, 0) /* See the comment for VALGRIND_MALLOCLIKE_BLOCK for details. Ignored if addr == 0. */ #define VALGRIND_FREELIKE_BLOCK(addr, rzB) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__FREELIKE_BLOCK, \ addr, rzB, 0, 0, 0) /* Create a memory pool. */ #define VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CREATE_MEMPOOL, \ pool, rzB, is_zeroed, 0, 0) /* Create a memory pool with some flags specifying extended behaviour. When flags is zero, the behaviour is identical to VALGRIND_CREATE_MEMPOOL. The flag VALGRIND_MEMPOOL_METAPOOL specifies that the pieces of memory associated with the pool using VALGRIND_MEMPOOL_ALLOC will be used by the application as superblocks to dole out MALLOC_LIKE blocks using VALGRIND_MALLOCLIKE_BLOCK. In other words, a meta pool is a "2 levels" pool : first level is the blocks described by VALGRIND_MEMPOOL_ALLOC. The second level blocks are described using VALGRIND_MALLOCLIKE_BLOCK. Note that the association between the pool and the second level blocks is implicit : second level blocks will be located inside first level blocks. It is necessary to use the VALGRIND_MEMPOOL_METAPOOL flag for such 2 levels pools, as otherwise valgrind will detect overlapping memory blocks, and will abort execution (e.g. during leak search). Such a meta pool can also be marked as an 'auto free' pool using the flag VALGRIND_MEMPOOL_AUTO_FREE, which must be OR-ed together with the VALGRIND_MEMPOOL_METAPOOL. 
For an 'auto free' pool, VALGRIND_MEMPOOL_FREE will automatically free the second level blocks that are contained inside the first level block freed with VALGRIND_MEMPOOL_FREE. In other words, calling VALGRIND_MEMPOOL_FREE will cause implicit calls to VALGRIND_FREELIKE_BLOCK for all the second level blocks included in the first level block. Note: it is an error to use the VALGRIND_MEMPOOL_AUTO_FREE flag without the VALGRIND_MEMPOOL_METAPOOL flag. */ #define VALGRIND_MEMPOOL_AUTO_FREE 1 #define VALGRIND_MEMPOOL_METAPOOL 2 #define VALGRIND_CREATE_MEMPOOL_EXT(pool, rzB, is_zeroed, flags) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CREATE_MEMPOOL, \ pool, rzB, is_zeroed, flags, 0) /* Destroy a memory pool. */ #define VALGRIND_DESTROY_MEMPOOL(pool) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DESTROY_MEMPOOL, \ pool, 0, 0, 0, 0) /* Associate a piece of memory with a memory pool. */ #define VALGRIND_MEMPOOL_ALLOC(pool, addr, size) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__MEMPOOL_ALLOC, \ pool, addr, size, 0, 0) /* Disassociate a piece of memory from a memory pool. */ #define VALGRIND_MEMPOOL_FREE(pool, addr) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__MEMPOOL_FREE, \ pool, addr, 0, 0, 0) /* Disassociate any pieces outside a particular range. */ #define VALGRIND_MEMPOOL_TRIM(pool, addr, size) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__MEMPOOL_TRIM, \ pool, addr, size, 0, 0) /* Resize and/or move a piece associated with a memory pool. */ #define VALGRIND_MOVE_MEMPOOL(poolA, poolB) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__MOVE_MEMPOOL, \ poolA, poolB, 0, 0, 0) /* Resize and/or move a piece associated with a memory pool. */ #define VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__MEMPOOL_CHANGE, \ pool, addrA, addrB, size, 0) /* Return 1 if a mempool exists, else 0. */ #define VALGRIND_MEMPOOL_EXISTS(pool) \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__MEMPOOL_EXISTS, \ pool, 0, 0, 0, 0) /* Mark a piece of memory as being a stack. Returns a stack id. start is the lowest addressable stack byte, end is the highest addressable stack byte. */ #define VALGRIND_STACK_REGISTER(start, end) \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__STACK_REGISTER, \ start, end, 0, 0, 0) /* Unmark the piece of memory associated with a stack id as being a stack. */ #define VALGRIND_STACK_DEREGISTER(id) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__STACK_DEREGISTER, \ id, 0, 0, 0, 0) /* Change the start and end address of the stack id. start is the new lowest addressable stack byte, end is the new highest addressable stack byte. */ #define VALGRIND_STACK_CHANGE(id, start, end) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__STACK_CHANGE, \ id, start, end, 0, 0) /* Load PDB debug info for Wine PE image_map. */ #define VALGRIND_LOAD_PDB_DEBUGINFO(fd, ptr, total_size, delta) \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__LOAD_PDB_DEBUGINFO, \ fd, ptr, total_size, delta, 0) /* Map a code address to a source file name and line number. buf64 must point to a 64-byte buffer in the caller's address space. The result will be dumped in there and is guaranteed to be zero terminated. If no info is found, the first byte is set to zero. */ #define VALGRIND_MAP_IP_TO_SRCLOC(addr, buf64) \ (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \ VG_USERREQ__MAP_IP_TO_SRCLOC, \ addr, buf64, 0, 0, 0) /* Disable error reporting for this thread. 
Behaves in a stack like way, so you can safely call this multiple times provided that VALGRIND_ENABLE_ERROR_REPORTING is called the same number of times to re-enable reporting. The first call of this macro disables reporting. Subsequent calls have no effect except to increase the number of VALGRIND_ENABLE_ERROR_REPORTING calls needed to re-enable reporting. Child threads do not inherit this setting from their parents -- they are always created with reporting enabled. */ #define VALGRIND_DISABLE_ERROR_REPORTING \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CHANGE_ERR_DISABLEMENT, \ 1, 0, 0, 0, 0) /* Re-enable error reporting, as per comments on VALGRIND_DISABLE_ERROR_REPORTING. */ #define VALGRIND_ENABLE_ERROR_REPORTING \ VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__CHANGE_ERR_DISABLEMENT, \ -1, 0, 0, 0, 0) /* Execute a monitor command from the client program. If a connection is opened with GDB, the output will be sent according to the output mode set for vgdb. If no connection is opened, output will go to the log output. Returns 1 if command not recognised, 0 otherwise. */ #define VALGRIND_MONITOR_COMMAND(command) \ VALGRIND_DO_CLIENT_REQUEST_EXPR(0, VG_USERREQ__GDB_MONITOR_COMMAND, \ command, 0, 0, 0, 0) #undef PLAT_x86_darwin #undef PLAT_amd64_darwin #undef PLAT_x86_win32 #undef PLAT_amd64_win64 #undef PLAT_x86_linux #undef PLAT_amd64_linux #undef PLAT_ppc32_linux #undef PLAT_ppc64be_linux #undef PLAT_ppc64le_linux #undef PLAT_arm_linux #undef PLAT_s390x_linux #undef PLAT_mips32_linux #undef PLAT_mips64_linux #undef PLAT_x86_solaris #undef PLAT_amd64_solaris #endif /* __VALGRIND_H */ vmem-1.8/src/common/valgrind_internal.h000066400000000000000000000267251361505074100202760ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * valgrind_internal.h -- internal definitions for valgrind macros */ #ifndef PMDK_VALGRIND_INTERNAL_H #define PMDK_VALGRIND_INTERNAL_H 1 #if !defined(_WIN32) && !defined(__FreeBSD__) #ifndef VALGRIND_ENABLED #define VALGRIND_ENABLED 1 #endif #endif #if VALGRIND_ENABLED #define VG_PMEMCHECK_ENABLED 1 #define VG_HELGRIND_ENABLED 1 #define VG_MEMCHECK_ENABLED 1 #define VG_DRD_ENABLED 1 #endif #if VG_PMEMCHECK_ENABLED || VG_HELGRIND_ENABLED || VG_MEMCHECK_ENABLED || \ VG_DRD_ENABLED #define ANY_VG_TOOL_ENABLED 1 #else #define ANY_VG_TOOL_ENABLED 0 #endif #if ANY_VG_TOOL_ENABLED extern unsigned _On_valgrind; #define On_valgrind __builtin_expect(_On_valgrind, 0) #include "valgrind/valgrind.h" #else #define On_valgrind (0) #endif #if VG_HELGRIND_ENABLED #include "valgrind/helgrind.h" #endif #if VG_DRD_ENABLED #include "valgrind/drd.h" #endif #if VG_HELGRIND_ENABLED || VG_DRD_ENABLED #define VALGRIND_ANNOTATE_HAPPENS_BEFORE(obj) do {\ if (On_valgrind) \ ANNOTATE_HAPPENS_BEFORE((obj));\ } while (0) #define VALGRIND_ANNOTATE_HAPPENS_AFTER(obj) do {\ if (On_valgrind) \ ANNOTATE_HAPPENS_AFTER((obj));\ } while (0) #define VALGRIND_ANNOTATE_NEW_MEMORY(addr, size) do {\ if (On_valgrind) \ ANNOTATE_NEW_MEMORY((addr), (size));\ } while (0) #define VALGRIND_ANNOTATE_IGNORE_READS_BEGIN() do {\ if (On_valgrind) \ ANNOTATE_IGNORE_READS_BEGIN();\ } while (0) #define VALGRIND_ANNOTATE_IGNORE_READS_END() do {\ if (On_valgrind) \ ANNOTATE_IGNORE_READS_END();\ } while (0) #define VALGRIND_ANNOTATE_IGNORE_WRITES_BEGIN() do {\ if (On_valgrind) \ ANNOTATE_IGNORE_WRITES_BEGIN();\ } while (0) #define VALGRIND_ANNOTATE_IGNORE_WRITES_END() do {\ if (On_valgrind) \ ANNOTATE_IGNORE_WRITES_END();\ } while (0) /* Supported by both helgrind and drd. */ #define VALGRIND_HG_DRD_DISABLE_CHECKING(addr, size) do {\ if (On_valgrind) \ VALGRIND_HG_DISABLE_CHECKING((addr), (size));\ } while (0) #else #define VALGRIND_ANNOTATE_HAPPENS_BEFORE(obj) do { (void)(obj); } while (0) #define VALGRIND_ANNOTATE_HAPPENS_AFTER(obj) do { (void)(obj); } while (0) #define VALGRIND_ANNOTATE_NEW_MEMORY(addr, size) do {\ (void) (addr);\ (void) (size);\ } while (0) #define VALGRIND_ANNOTATE_IGNORE_READS_BEGIN() do {} while (0) #define VALGRIND_ANNOTATE_IGNORE_READS_END() do {} while (0) #define VALGRIND_ANNOTATE_IGNORE_WRITES_BEGIN() do {} while (0) #define VALGRIND_ANNOTATE_IGNORE_WRITES_END() do {} while (0) #define VALGRIND_HG_DRD_DISABLE_CHECKING(addr, size) do {\ (void) (addr);\ (void) (size);\ } while (0) #endif #if VG_PMEMCHECK_ENABLED #include "valgrind/pmemcheck.h" void pobj_emit_log(const char *func, int order); void pmem_emit_log(const char *func, int order); extern int _Pmreorder_emit; #define Pmreorder_emit __builtin_expect(_Pmreorder_emit, 0) #define VALGRIND_REGISTER_PMEM_MAPPING(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_REGISTER_PMEM_MAPPING((addr), (len));\ } while (0) #define VALGRIND_REGISTER_PMEM_FILE(desc, base_addr, size, offset) do {\ if (On_valgrind)\ VALGRIND_PMC_REGISTER_PMEM_FILE((desc), (base_addr), (size), \ (offset));\ } while (0) #define VALGRIND_REMOVE_PMEM_MAPPING(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_REMOVE_PMEM_MAPPING((addr), (len));\ } while (0) #define VALGRIND_CHECK_IS_PMEM_MAPPING(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_CHECK_IS_PMEM_MAPPING((addr), (len));\ } while (0) #define VALGRIND_PRINT_PMEM_MAPPINGS do {\ if (On_valgrind)\ VALGRIND_PMC_PRINT_PMEM_MAPPINGS;\ } while (0) #define VALGRIND_DO_FLUSH(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_DO_FLUSH((addr), (len));\ } while (0) 
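/*
 * Illustrative sketch only (pmem_dst, src and len are made-up names, not
 * part of this header): code that stores to persistent memory is expected
 * to flush and then fence those stores, and the wrappers above forward the
 * corresponding events to pmemcheck only when actually running under
 * Valgrind:
 *
 *	memcpy(pmem_dst, src, len);
 *	VALGRIND_DO_FLUSH(pmem_dst, len);
 *	VALGRIND_DO_FENCE;
 *
 * The VALGRIND_DO_PERSIST(pmem_dst, len) wrapper defined below combines
 * the flush and the fence into a single annotation.
 */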
#define VALGRIND_DO_FENCE do {\ if (On_valgrind)\ VALGRIND_PMC_DO_FENCE;\ } while (0) #define VALGRIND_DO_PERSIST(addr, len) do {\ if (On_valgrind) {\ VALGRIND_PMC_DO_FLUSH((addr), (len));\ VALGRIND_PMC_DO_FENCE;\ }\ } while (0) #define VALGRIND_SET_CLEAN(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_SET_CLEAN(addr, len);\ } while (0) #define VALGRIND_WRITE_STATS do {\ if (On_valgrind)\ VALGRIND_PMC_WRITE_STATS;\ } while (0) #define VALGRIND_EMIT_LOG(emit_log) do {\ if (On_valgrind)\ VALGRIND_PMC_EMIT_LOG((emit_log));\ } while (0) #define VALGRIND_START_TX do {\ if (On_valgrind)\ VALGRIND_PMC_START_TX;\ } while (0) #define VALGRIND_START_TX_N(txn) do {\ if (On_valgrind)\ VALGRIND_PMC_START_TX_N(txn);\ } while (0) #define VALGRIND_END_TX do {\ if (On_valgrind)\ VALGRIND_PMC_END_TX;\ } while (0) #define VALGRIND_END_TX_N(txn) do {\ if (On_valgrind)\ VALGRIND_PMC_END_TX_N(txn);\ } while (0) #define VALGRIND_ADD_TO_TX(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_ADD_TO_TX(addr, len);\ } while (0) #define VALGRIND_ADD_TO_TX_N(txn, addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_ADD_TO_TX_N(txn, addr, len);\ } while (0) #define VALGRIND_REMOVE_FROM_TX(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_REMOVE_FROM_TX(addr, len);\ } while (0) #define VALGRIND_REMOVE_FROM_TX_N(txn, addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_REMOVE_FROM_TX_N(txn, addr, len);\ } while (0) #define VALGRIND_ADD_TO_GLOBAL_TX_IGNORE(addr, len) do {\ if (On_valgrind)\ VALGRIND_PMC_ADD_TO_GLOBAL_TX_IGNORE(addr, len);\ } while (0) /* * Logs library and function name with proper suffix * to pmemcheck store log file. */ #define PMEMOBJ_API_START()\ if (Pmreorder_emit)\ pobj_emit_log(__func__, 0); #define PMEMOBJ_API_END()\ if (Pmreorder_emit)\ pobj_emit_log(__func__, 1); #define PMEM_API_START()\ if (Pmreorder_emit)\ pmem_emit_log(__func__, 0); #define PMEM_API_END()\ if (Pmreorder_emit)\ pmem_emit_log(__func__, 1); #else #define Pmreorder_emit (0) #define VALGRIND_REGISTER_PMEM_MAPPING(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_REGISTER_PMEM_FILE(desc, base_addr, size, offset) do {\ (void) (desc);\ (void) (base_addr);\ (void) (size);\ (void) (offset);\ } while (0) #define VALGRIND_REMOVE_PMEM_MAPPING(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_CHECK_IS_PMEM_MAPPING(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_PRINT_PMEM_MAPPINGS do {} while (0) #define VALGRIND_DO_FLUSH(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_DO_FENCE do {} while (0) #define VALGRIND_DO_PERSIST(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_SET_CLEAN(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_WRITE_STATS do {} while (0) #define VALGRIND_EMIT_LOG(emit_log) do {\ (void) (emit_log);\ } while (0) #define VALGRIND_START_TX do {} while (0) #define VALGRIND_START_TX_N(txn) do { (void) (txn); } while (0) #define VALGRIND_END_TX do {} while (0) #define VALGRIND_END_TX_N(txn) do {\ (void) (txn);\ } while (0) #define VALGRIND_ADD_TO_TX(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_ADD_TO_TX_N(txn, addr, len) do {\ (void) (txn);\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_REMOVE_FROM_TX(addr, len) do {\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_REMOVE_FROM_TX_N(txn, addr, len) do {\ (void) (txn);\ (void) (addr);\ (void) (len);\ } while (0) #define VALGRIND_ADD_TO_GLOBAL_TX_IGNORE(addr, len) do {\ 
(void) (addr);\ (void) (len);\ } while (0) #define PMEMOBJ_API_START() do {} while (0) #define PMEMOBJ_API_END() do {} while (0) #define PMEM_API_START() do {} while (0) #define PMEM_API_END() do {} while (0) #endif #if VG_MEMCHECK_ENABLED #include "valgrind/memcheck.h" #define VALGRIND_DO_DISABLE_ERROR_REPORTING do {\ if (On_valgrind)\ VALGRIND_DISABLE_ERROR_REPORTING;\ } while (0) #define VALGRIND_DO_ENABLE_ERROR_REPORTING do {\ if (On_valgrind)\ VALGRIND_ENABLE_ERROR_REPORTING;\ } while (0) #define VALGRIND_DO_CREATE_MEMPOOL(heap, rzB, is_zeroed) do {\ if (On_valgrind)\ VALGRIND_CREATE_MEMPOOL(heap, rzB, is_zeroed);\ } while (0) #define VALGRIND_DO_DESTROY_MEMPOOL(heap) do {\ if (On_valgrind)\ VALGRIND_DESTROY_MEMPOOL(heap);\ } while (0) #define VALGRIND_DO_MEMPOOL_ALLOC(heap, addr, size) do {\ if (On_valgrind)\ VALGRIND_MEMPOOL_ALLOC(heap, addr, size);\ } while (0) #define VALGRIND_DO_MEMPOOL_FREE(heap, addr) do {\ if (On_valgrind)\ VALGRIND_MEMPOOL_FREE(heap, addr);\ } while (0) #define VALGRIND_DO_MEMPOOL_CHANGE(heap, addrA, addrB, size) do {\ if (On_valgrind)\ VALGRIND_MEMPOOL_CHANGE(heap, addrA, addrB, size);\ } while (0) #define VALGRIND_DO_MAKE_MEM_DEFINED(addr, len) do {\ if (On_valgrind)\ VALGRIND_MAKE_MEM_DEFINED(addr, len);\ } while (0) #define VALGRIND_DO_MAKE_MEM_UNDEFINED(addr, len) do {\ if (On_valgrind)\ VALGRIND_MAKE_MEM_UNDEFINED(addr, len);\ } while (0) #define VALGRIND_DO_MAKE_MEM_NOACCESS(addr, len) do {\ if (On_valgrind)\ VALGRIND_MAKE_MEM_NOACCESS(addr, len);\ } while (0) #define VALGRIND_DO_CHECK_MEM_IS_ADDRESSABLE(addr, len) do {\ if (On_valgrind)\ VALGRIND_CHECK_MEM_IS_ADDRESSABLE(addr, len);\ } while (0) #else #define VALGRIND_DO_DISABLE_ERROR_REPORTING do {} while (0) #define VALGRIND_DO_ENABLE_ERROR_REPORTING do {} while (0) #define VALGRIND_DO_CREATE_MEMPOOL(heap, rzB, is_zeroed)\ do { (void) (heap); (void) (rzB); (void) (is_zeroed); } while (0) #define VALGRIND_DO_DESTROY_MEMPOOL(heap)\ do { (void) (heap); } while (0) #define VALGRIND_DO_MEMPOOL_ALLOC(heap, addr, size)\ do { (void) (heap); (void) (addr); (void) (size); } while (0) #define VALGRIND_DO_MEMPOOL_FREE(heap, addr)\ do { (void) (heap); (void) (addr); } while (0) #define VALGRIND_DO_MEMPOOL_CHANGE(heap, addrA, addrB, size)\ do {\ (void) (heap); (void) (addrA); (void) (addrB); (void) (size);\ } while (0) #define VALGRIND_DO_MAKE_MEM_DEFINED(addr, len)\ do { (void) (addr); (void) (len); } while (0) #define VALGRIND_DO_MAKE_MEM_UNDEFINED(addr, len)\ do { (void) (addr); (void) (len); } while (0) #define VALGRIND_DO_MAKE_MEM_NOACCESS(addr, len)\ do { (void) (addr); (void) (len); } while (0) #define VALGRIND_DO_CHECK_MEM_IS_ADDRESSABLE(addr, len)\ do { (void) (addr); (void) (len); } while (0) #endif #endif vmem-1.8/src/examples/000077500000000000000000000000001361505074100147355ustar00rootroot00000000000000vmem-1.8/src/examples/.gitignore000066400000000000000000000000041361505074100167170ustar00rootroot00000000000000*.o vmem-1.8/src/examples/Makefile000066400000000000000000000033431361505074100164000ustar00rootroot00000000000000# # Copyright 2014-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # examples/Makefile -- build the Persistent Memory Development Kit examples # include ../common.inc DIRS = libvmem include Makefile.inc rmtmp: $(RM) $(TMP_HEADERS) clobber clean: rmtmp vmem-1.8/src/examples/Makefile.inc000066400000000000000000000112551361505074100171510ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # examples/Makefile.inc -- build the Persistent Memory Development Kit examples # TOP_SRC := $(dir $(lastword $(MAKEFILE_LIST))).. TOP := $(TOP_SRC)/.. 
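# A minimal sketch of how an example directory is expected to consume this
# include (the 'hello' name below is hypothetical, not part of the tree):
#
#	PROGS = hello
#	LIBS = -lvmem
#	include ../Makefile.inc
#	hello: hello.o
#
# i.e. an example Makefile only lists its programs, libraries and link
# dependencies; the compile, cstyle and clean rules all come from this file.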
HEADERS = $(wildcard *.h) $(wildcard *.hpp) INCDIR = $(TOP_SRC)/include LIBDIR = $(TOP_SRC)/debug include $(TOP)/src/common.inc CXXFLAGS = -std=c++11 -ggdb -Wall -Werror CXXFLAGS += $(GLIBC_CXXFLAGS) CXXFLAGS += $(EXTRA_CXXFLAGS) CFLAGS = -std=gnu99 -ggdb -Wall -Werror -Wmissing-prototypes $(EXTRA_CFLAGS) LDFLAGS = -Wl,-rpath=$(LIBDIR) -L$(LIBDIR) $(EXTRA_LDFLAGS) ifneq ($(SANITIZE),) CFLAGS += -fsanitize=$(SANITIZE) CXXFLAGS += -fsanitize=$(SANITIZE) LDFLAGS += -fsanitize=$(SANITIZE) endif INCS = -I$(INCDIR) -I. -I$(TOP_SRC)/examples $(OS_INCS) LIBS += $(OS_LIBS) $(LIBUUID) LINKER=$(CC) ifeq ($(COMPILE_LANG), cpp) LINKER=$(CXX) endif all-dirs: TARGET = all clean-dirs: TARGET = clean clobber-dirs: TARGET = clobber cstyle-dirs: TARGET = cstyle format-dirs: TARGET = format sparse-dirs: TARGET = sparse all: $(if $(DIRS), all-dirs) $(if $(LIBRARIES), all-libraries) $(if $(PROGS), all-progs) clean: $(if $(DIRS), clean-dirs) $(if $(PROGS), clean-progs) $(if $(LIBRARIES), clean-libraries) clobber: $(if $(DIRS), clobber-dirs) $(if $(PROGS), clobber-progs) $(if $(LIBRARIES), clobber-libraries) cstyle: $(if $(DIRS), cstyle-dirs) format: $(if $(DIRS), format-dirs) sparse: $(if $(DIRS), sparse-dirs) $(if $(DIRS), , $(sparse-c)) DYNAMIC_LIBRARIES = $(addprefix lib, $(addsuffix .so, $(LIBRARIES))) STATIC_LIBRARIES = $(addprefix lib, $(addsuffix .a, $(LIBRARIES))) all-dirs clean-dirs clobber-dirs cstyle-dirs format-dirs sparse-dirs: $(DIRS) all-progs: $(PROGS) all-libraries: $(DYNAMIC_LIBRARIES) $(STATIC_LIBRARIES) $(foreach l, $(LIBRARIES), $(eval lib$(l).so: lib$(l).o)) $(foreach l, $(LIBRARIES), $(eval lib$(l).a: lib$(l).o)) $(foreach l, $(LIBRARIES), $(eval lib$(l).o: CFLAGS+=-fPIC)) $(foreach l, $(LIBRARIES), $(eval lib$(l).o: CXXFLAGS+=-fPIC)) $(foreach l, $(LIBRARIES), $(eval $(l): lib$(l).so lib$(l).a)) $(foreach l, $(LIBRARIES), $(eval .PHONY: $(l))) $(DIRS): $(MAKE) -C $@ $(TARGET) clobber-progs: clean-progs clobber-libraries: clean-libraries clobber-progs clobber-libraries: ifneq ($(PROGS),) $(RM) $(PROGS) endif ifneq ($(LIBRARIES),) $(RM) $(DYNAMIC_LIBRARIES) $(STATIC_LIBRARIES) endif clean-progs clean-libraries: $(RM) *.o $(TMP_HEADERS) MAKEFILE_DEPS=Makefile $(TOP)/src/examples/Makefile.inc $(TOP)/src/common.inc ifneq ($(HEADERS),) ifneq ($(filter 1 2, $(CSTYLEON)),) TMP_HEADERS := $(addsuffix tmp, $(HEADERS)) endif endif all: $(TMP_HEADERS) %.o: %.c $(MAKEFILE_DEPS) $(call check-cstyle, $<) $(CC) -c -o $@ $(CFLAGS) $(INCS) $< %.o: %.cpp $(MAKEFILE_DEPS) $(call check-cstyle, $<) $(CXX) -c -o $@ $(CXXFLAGS) $(INCS) $< %.htmp: %.h $(call check-cstyle, $<, $@) %.hpptmp: %.hpp $(call check-cstyle, $<, $@) $(PROGS): | $(TMP_HEADERS) $(LINKER) -o $@ $^ $(LDFLAGS) $(LIBS) lib%.o: $(LD) -o $@ -r $^ $(STATIC_LIBRARIES): $(AR) rv $@ $< $(DYNAMIC_LIBRARIES): $(LINKER) -shared -o $@ $(LDFLAGS) -Wl,-shared,-soname=$@ $(LIBS) $< .PHONY: all clean clobber cstyle\ all-dirs clean-dirs clobber-dirs cstyle-dirs\ all-progs clean-progs clobber-progs cstyle-progs\ $(DIRS) vmem-1.8/src/examples/README000066400000000000000000000013761361505074100156240ustar00rootroot00000000000000Persistent Memory Development Kit This is examples/README. This directory contains brief educational examples illustrating the use of the PMDK libraries. For many of these examples, the Makefile rules are here just to check that the example compiles, loads against the appropriate library, and passes cstyle. 
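For example, once the libraries under ../ have been built, checking the
examples from this directory is typically just "make" (and optionally
"make cstyle"), with the rules coming from Makefile.inc here.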
If you're looking for documentation to get you started using PMDK, start here: http://pmem.io/pmdk and follow the links to examples and man pages. Developers new to PMDK are probably looking for libpmemobj. Many of the examples in this directory are described in more detail on the above web site. libvmem(7) -- volatile memory allocation library Example programs are in the libvmem directory. More documentation: http://pmem.io/pmdk/libvmem vmem-1.8/src/examples/ex_common.h000066400000000000000000000052321361505074100170740ustar00rootroot00000000000000/* * Copyright 2016-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ex_common.h -- examples utilities */ #ifndef EX_COMMON_H #define EX_COMMON_H #include #define MIN(a, b) (((a) < (b)) ? 
(a) : (b)) #ifdef __cplusplus extern "C" { #endif #ifndef _WIN32 #include #define CREATE_MODE_RW (S_IWUSR | S_IRUSR) /* * file_exists -- checks if file exists */ static inline int file_exists(char const *file) { return access(file, F_OK); } /* * find_last_set_64 -- returns last set bit position or -1 if set bit not found */ static inline int find_last_set_64(uint64_t val) { return 64 - __builtin_clzll(val) - 1; } #else #include #include #include #define CREATE_MODE_RW (S_IWRITE | S_IREAD) /* * file_exists -- checks if file exists */ static inline int file_exists(char const *file) { return _access(file, 0); } /* * find_last_set_64 -- returns last set bit position or -1 if set bit not found */ static inline int find_last_set_64(uint64_t val) { DWORD lz = 0; if (BitScanReverse64(&lz, val)) return (int)lz; else return -1; } #endif #ifdef __cplusplus } #endif #endif /* ex_common.h */ vmem-1.8/src/examples/examples_debug.props000066400000000000000000000024701361505074100210110ustar00rootroot00000000000000 $(Platform)\$(Configuration)\$(TargetName)\ ex_$(RootNamespace)_$(ProjectName) $(SolutionDir)$(Platform)\$(Configuration)\examples\ .;$(solutionDir)include;$(solutionDir)..\include;$(ProjectDir)..\..\;$(ProjectDir)..\;$(IncludePath);$(WindowsSDK_IncludePath); $(ProjectDir)..\..\x64\$(Configuration)\libs;$(ProjectDir)..\..\..\x64\$(Configuration)\libs;$(VC_LibraryPath_x64);$(WindowsSDK_LibraryPath_x64);$(NETFXKitsDir)Lib\um\x64 Level3 Disabled true 4996 PMDK_UTF8_API;SDS_ENABLED;NTDDI_VERSION=NTDDI_WIN10_RS1;_MBCS;%(PreprocessorDefinitions) true true vmem-1.8/src/examples/examples_release.props000066400000000000000000000030161361505074100213400ustar00rootroot00000000000000 $(Platform)\$(Configuration)\$(TargetName)\ ex_$(RootNamespace)_$(ProjectName) $(SolutionDir)$(Platform)\$(Configuration)\examples\ $(ProjectDir)..\..\x64\$(Configuration)\libs;$(ProjectDir)..\..\..\x64\$(Configuration)\libs;$(VC_LibraryPath_x64);$(WindowsSDK_LibraryPath_x64);$(NETFXKitsDir)Lib\um\x64 .;$(solutionDir)include;$(solutionDir)..\include;$(ProjectDir)..\..\;$(ProjectDir)..\;$(IncludePath);$(WindowsSDK_IncludePath); Level3 MaxSpeed true 4996 true true PMDK_UTF8_API;SDS_ENABLED;NTDDI_VERSION=NTDDI_WIN10_RS1;_MBCS;%(PreprocessorDefinitions) true true true true vmem-1.8/src/examples/libvmem/000077500000000000000000000000001361505074100163705ustar00rootroot00000000000000vmem-1.8/src/examples/libvmem/.gitignore000066400000000000000000000000101361505074100203470ustar00rootroot00000000000000manpage vmem-1.8/src/examples/libvmem/Makefile000066400000000000000000000033041361505074100200300ustar00rootroot00000000000000# # Copyright 2014-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # examples/libvmem/Makefile -- build the libvmem examples # PROGS = manpage DIRS = libart LIBS = -lvmem -pthread include ../Makefile.inc manpage: manpage.o vmem-1.8/src/examples/libvmem/README000066400000000000000000000011401361505074100172440ustar00rootroot00000000000000Persistent Memory Development Kit This is examples/libvmem/README. This directory contains examples for libvmem, the library providing a volatile memory allocator for persistent memory. Some examples are described in more detail here: http://pmem.io/pmdk/libvmem manpage.c is the example used in the libvmem man page. To build these examples: make These examples can be built against an installed system using: make LIBDIR=/usr/lib INCDIR=/usr/include If you're looking for documentation to get you started using PMDK, start here: http://pmem.io/pmdk and follow the links to examples and man pages. vmem-1.8/src/examples/libvmem/libart/000077500000000000000000000000001361505074100176455ustar00rootroot00000000000000vmem-1.8/src/examples/libvmem/libart/.gitignore000066400000000000000000000000101361505074100216240ustar00rootroot00000000000000arttree vmem-1.8/src/examples/libvmem/libart/Makefile000066400000000000000000000044701361505074100213120ustar00rootroot00000000000000# # Copyright 2016, FUJITSU TECHNOLOGY SOLUTIONS GMBH # Copyright 2016-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # ========================================================================== # # Filename: Makefile # # Description: implement ART tree using libvmem based on libart # # Author: Andreas Bluemle, Dieter Kasper # Andreas.Bluemle.external@ts.fujitsu.com # dieter.kasper@ts.fujitsu.com # # Organization: FUJITSU TECHNOLOGY SOLUTIONS GMBH # ========================================================================== # include ../../../common.inc ifeq ($(ARCH), x86_64) # libart uses x86 intrinsics PROGS = arttree LIBRARIES = art endif include ../../Makefile.inc LIBS = -lvmem $(PROGS): | $(DYNAMIC_LIBRARIES) $(PROGS): LDFLAGS += -Wl,-rpath=. -L. -lart libart.o: art.o arttree: arttree.o vmem-1.8/src/examples/libvmem/libart/art.c000066400000000000000000000672201361505074100206060ustar00rootroot00000000000000/* * Copyright 2016, FUJITSU TECHNOLOGY SOLUTIONS GMBH * Copyright 2012, Armon Dadgar. All rights reserved. * Copyright 2016-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * ========================================================================== * * Filename: art.c * * Description: implement ART tree using libvmem based on libart * * Author: Andreas Bluemle, Dieter Kasper * Andreas.Bluemle.external@ts.fujitsu.com * dieter.kasper@ts.fujitsu.com * * Organization: FUJITSU TECHNOLOGY SOLUTIONS GMBH * ========================================================================== */ /* * based on https://github.com/armon/libart/src/art.c */ #include #include #include #include #include #include #include "libvmem.h" #include "art.h" /* * Macros to manipulate pointer tags */ #define IS_LEAF(x) (((uintptr_t)(x) & 1)) #define SET_LEAF(x) ((void *)((uintptr_t)(x) | 1)) #define LEAF_RAW(x) ((void *)((uintptr_t)(x) & ~1)) /* * Allocates a node of the given type, * initializes to zero and sets the type. */ static art_node * alloc_node(VMEM *vmp, uint8_t type) { art_node *n; switch (type) { case NODE4: n = vmem_calloc(vmp, 1, sizeof(art_node4)); break; case NODE16: n = vmem_calloc(vmp, 1, sizeof(art_node16)); break; case NODE48: n = vmem_calloc(vmp, 1, sizeof(art_node48)); break; case NODE256: n = vmem_calloc(vmp, 1, sizeof(art_node256)); break; default: abort(); } assert(n != NULL); n->type = type; return n; } /* * Initializes an ART tree * @return 0 on success. */ int art_tree_init(art_tree *t) { t->root = NULL; t->size = 0; return 0; } /* * Recursively destroys the tree */ static void destroy_node(VMEM *vmp, art_node *n) { // Break if null if (!n) return; // Special case leafs if (IS_LEAF(n)) { vmem_free(vmp, LEAF_RAW(n)); return; } // Handle each node type int i; union { art_node4 *p1; art_node16 *p2; art_node48 *p3; art_node256 *p4; } p; switch (n->type) { case NODE4: p.p1 = (art_node4 *)n; for (i = 0; i < n->num_children; i++) { destroy_node(vmp, p.p1->children[i]); } break; case NODE16: p.p2 = (art_node16 *)n; for (i = 0; i < n->num_children; i++) { destroy_node(vmp, p.p2->children[i]); } break; case NODE48: p.p3 = (art_node48 *)n; for (i = 0; i < n->num_children; i++) { destroy_node(vmp, p.p3->children[i]); } break; case NODE256: p.p4 = (art_node256 *)n; for (i = 0; i < 256; i++) { if (p.p4->children[i]) destroy_node(vmp, p.p4->children[i]); } break; default: abort(); } // Free ourself on the way up vmem_free(vmp, n); } /* * Destroys an ART tree * @return 0 on success. */ int art_tree_destroy(VMEM *vmp, art_tree *t) { destroy_node(vmp, t->root); return 0; } /* * Returns the size of the ART tree. */ static art_node ** find_child(art_node *n, unsigned char c) { __m128i cmp; int i, mask, bitfield; union { art_node4 *p1; art_node16 *p2; art_node48 *p3; art_node256 *p4; } p; switch (n->type) { case NODE4: p.p1 = (art_node4 *)n; for (i = 0; i < n->num_children; i++) { if (p.p1->keys[i] == c) return &p.p1->children[i]; } break; case NODE16: p.p2 = (art_node16 *)n; // Compare the key to all 16 stored keys cmp = _mm_cmpeq_epi8(_mm_set1_epi8(c), _mm_loadu_si128((__m128i *)p.p2->keys)); // Use a mask to ignore children that don't exist mask = (1 << n->num_children) - 1; bitfield = _mm_movemask_epi8(cmp) & mask; /* * If we have a match (any bit set) then we can * return the pointer match using ctz to get * the index. 
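 *
 * Worked example (hypothetical values): with c == 'c', the first three
 * stored keys being 'a', 'b', 'c' and at least three children present,
 * _mm_cmpeq_epi8() sets byte 2 of cmp to 0xFF, _mm_movemask_epi8()
 * then yields 0b100, and __builtin_ctz(0b100) == 2, so &children[2]
 * is returned.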
*/ if (bitfield) return &p.p2->children[__builtin_ctz(bitfield)]; break; case NODE48: p.p3 = (art_node48 *)n; i = p.p3->keys[c]; if (i) return &p.p3->children[i - 1]; break; case NODE256: p.p4 = (art_node256 *)n; if (p.p4->children[c]) return &p.p4->children[c]; break; default: abort(); } return NULL; } // Simple inlined if static inline int min(int a, int b) { return (a < b) ? a : b; } /* * Returns the number of prefix characters shared between * the key and node. */ static int check_prefix(const art_node *n, const unsigned char *key, int key_len, int depth) { int max_cmp = min(min(n->partial_len, MAX_PREFIX_LEN), key_len - depth); int idx; for (idx = 0; idx < max_cmp; idx++) { if (n->partial[idx] != key[depth + idx]) return idx; } return idx; } /* * Checks if a leaf matches * @return 0 on success. */ static int leaf_matches(const art_leaf *n, const unsigned char *key, int key_len, int depth) { (void) depth; // Fail if the key lengths are different if (n->key_len != (uint32_t)key_len) return 1; // Compare the keys starting at the depth return memcmp(n->key, key, key_len); } /* * Searches for a value in the ART tree * @arg t The tree * @arg key The key * @arg key_len The length of the key * @return NULL if the item was not found, otherwise * the value pointer is returned. */ void * art_search(const art_tree *t, const unsigned char *key, int key_len) { art_node **child; art_node *n = t->root; int prefix_len, depth = 0; while (n) { // Might be a leaf if (IS_LEAF(n)) { n = LEAF_RAW(n); // Check if the expanded path matches if (!leaf_matches((art_leaf *)n, key, key_len, depth)) { return ((art_leaf *)n)->value; } return NULL; } // Bail if the prefix does not match if (n->partial_len) { prefix_len = check_prefix(n, key, key_len, depth); if (prefix_len != min(MAX_PREFIX_LEN, n->partial_len)) return NULL; depth = depth + n->partial_len; } // Recursively search child = find_child(n, key[depth]); n = (child) ? 
*child : NULL; depth++; } return NULL; } // Find the minimum leaf under a node static art_leaf * minimum(const art_node *n) { // Handle base cases if (!n) return NULL; if (IS_LEAF(n)) return LEAF_RAW(n); int idx; switch (n->type) { case NODE4: return minimum(((art_node4 *)n)->children[0]); case NODE16: return minimum(((art_node16 *)n)->children[0]); case NODE48: idx = 0; while (!((art_node48 *)n)->keys[idx]) idx++; idx = ((art_node48 *)n)->keys[idx] - 1; assert(idx < 48); return minimum(((art_node48 *) n)->children[idx]); case NODE256: idx = 0; while (!((art_node256 *)n)->children[idx]) idx++; return minimum(((art_node256 *)n)->children[idx]); default: abort(); } } // Find the maximum leaf under a node static art_leaf * maximum(const art_node *n) { // Handle base cases if (!n) return NULL; if (IS_LEAF(n)) return LEAF_RAW(n); int idx; switch (n->type) { case NODE4: return maximum( ((art_node4 *)n)->children[n->num_children - 1]); case NODE16: return maximum( ((art_node16 *)n)->children[n->num_children - 1]); case NODE48: idx = 255; while (!((art_node48 *)n)->keys[idx]) idx--; idx = ((art_node48 *)n)->keys[idx] - 1; assert((idx >= 0) && (idx < 48)); return maximum(((art_node48 *)n)->children[idx]); case NODE256: idx = 255; while (!((art_node256 *)n)->children[idx]) idx--; return maximum(((art_node256 *)n)->children[idx]); default: abort(); } } /* * Returns the minimum valued leaf */ art_leaf * art_minimum(art_tree *t) { return minimum(t->root); } /* * Returns the maximum valued leaf */ art_leaf * art_maximum(art_tree *t) { return maximum(t->root); } static art_leaf * make_leaf(VMEM *vmp, const unsigned char *key, int key_len, void *value, int val_len) { art_leaf *l = vmem_malloc(vmp, sizeof(art_leaf) + key_len + val_len); assert(l != NULL); l->key_len = key_len; l->val_len = val_len; l->key = &(l->data[0]) + 0; l->value = &(l->data[0]) + key_len; memcpy(l->key, key, key_len); memcpy(l->value, value, val_len); return l; } static int longest_common_prefix(art_leaf *l1, art_leaf *l2, int depth) { int max_cmp = min(l1->key_len, l2->key_len) - depth; int idx; for (idx = 0; idx < max_cmp; idx++) { if (l1->key[depth + idx] != l2->key[depth + idx]) return idx; } return idx; } static void copy_header(art_node *dest, art_node *src) { dest->num_children = src->num_children; dest->partial_len = src->partial_len; memcpy(dest->partial, src->partial, min(MAX_PREFIX_LEN, src->partial_len)); } static void add_child256(VMEM *vmp, art_node256 *n, art_node **ref, unsigned char c, void *child) { (void) ref; n->n.num_children++; n->children[c] = child; } static void add_child48(VMEM *vmp, art_node48 *n, art_node **ref, unsigned char c, void *child) { if (n->n.num_children < 48) { int pos = 0; while (n->children[pos]) pos++; n->children[pos] = child; n->keys[c] = pos + 1; n->n.num_children++; } else { art_node256 *new = (art_node256 *)alloc_node(vmp, NODE256); for (int i = 0; i < 256; i++) { if (n->keys[i]) { new->children[i] = n->children[n->keys[i] - 1]; } } copy_header((art_node *)new, (art_node *)n); *ref = (art_node *)new; vmem_free(vmp, n); add_child256(vmp, new, ref, c, child); } } static void add_child16(VMEM *vmp, art_node16 *n, art_node **ref, unsigned char c, void *child) { if (n->n.num_children < 16) { __m128i cmp; // Compare the key to all 16 stored keys cmp = _mm_cmplt_epi8(_mm_set1_epi8(c), _mm_loadu_si128((__m128i *)n->keys)); // Use a mask to ignore children that don't exist unsigned mask = (1 << n->n.num_children) - 1; unsigned bitfield = _mm_movemask_epi8(cmp) & mask; // Check if less than any 
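		// (Walk-through: the cmplt/movemask bitfield has a bit set for
		// every existing key that is greater than c, so __builtin_ctz()
		// of it is the sorted insertion slot; the keys and children at
		// and after that slot are shifted right by one below.)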
unsigned idx; if (bitfield) { idx = __builtin_ctz(bitfield); memmove(n->keys + idx + 1, n->keys + idx, n->n.num_children - idx); memmove(n->children + idx + 1, n->children + idx, (n->n.num_children - idx) * sizeof(void *)); } else idx = n->n.num_children; // Set the child n->keys[idx] = c; n->children[idx] = child; n->n.num_children++; } else { art_node48 *new = (art_node48 *)alloc_node(vmp, NODE48); // Copy the child pointers and populate the key map memcpy(new->children, n->children, sizeof(void *) * n->n.num_children); for (int i = 0; i < n->n.num_children; i++) { new->keys[n->keys[i]] = i + 1; } copy_header((art_node *)new, (art_node *)n); *ref = (art_node *) new; vmem_free(vmp, n); add_child48(vmp, new, ref, c, child); } } static void add_child4(VMEM *vmp, art_node4 *n, art_node **ref, unsigned char c, void *child) { if (n->n.num_children < 4) { int idx; for (idx = 0; idx < n->n.num_children; idx++) { if (c < n->keys[idx]) break; } // Shift to make room memmove(n->keys + idx + 1, n->keys + idx, n->n.num_children - idx); memmove(n->children + idx + 1, n->children + idx, (n->n.num_children - idx) * sizeof(void *)); // Insert element n->keys[idx] = c; n->children[idx] = child; n->n.num_children++; } else { art_node16 *new = (art_node16 *)alloc_node(vmp, NODE16); // Copy the child pointers and the key map memcpy(new->children, n->children, sizeof(void *) * n->n.num_children); memcpy(new->keys, n->keys, sizeof(unsigned char) * n->n.num_children); copy_header((art_node *)new, (art_node *)n); *ref = (art_node *)new; vmem_free(vmp, n); add_child16(vmp, new, ref, c, child); } } static void add_child(VMEM *vmp, art_node *n, art_node **ref, unsigned char c, void *child) { switch (n->type) { case NODE4: return add_child4(vmp, (art_node4 *)n, ref, c, child); case NODE16: return add_child16(vmp, (art_node16 *)n, ref, c, child); case NODE48: return add_child48(vmp, (art_node48 *)n, ref, c, child); case NODE256: return add_child256(vmp, (art_node256 *)n, ref, c, child); default: abort(); } } /* * Calculates the index at which the prefixes mismatch */ static int prefix_mismatch(const art_node *n, const unsigned char *key, int key_len, int depth) { int max_cmp = min(min(MAX_PREFIX_LEN, n->partial_len), key_len - depth); int idx; for (idx = 0; idx < max_cmp; idx++) { if (n->partial[idx] != key[depth + idx]) return idx; } // If the prefix is short we can avoid finding a leaf if (n->partial_len > MAX_PREFIX_LEN) { // Prefix is longer than what we've checked, find a leaf art_leaf *l = minimum(n); assert(l != NULL); max_cmp = min(l->key_len, key_len) - depth; for (; idx < max_cmp; idx++) { if (l->key[idx + depth] != key[depth + idx]) return idx; } } return idx; } static void * recursive_insert(VMEM *vmp, art_node *n, art_node **ref, const unsigned char *key, int key_len, void *value, int val_len, int depth, int *old) { // If we are at a NULL node, inject a leaf if (!n) { *ref = (art_node *)SET_LEAF( make_leaf(vmp, key, key_len, value, val_len)); return NULL; } // If we are at a leaf, we need to replace it with a node if (IS_LEAF(n)) { art_leaf *l = LEAF_RAW(n); // Check if we are updating an existing value if (!leaf_matches(l, key, key_len, depth)) { *old = 1; void *old_val = l->value; l->value = value; return old_val; } // New value, we must split the leaf into a node4 art_node4 *new = (art_node4 *)alloc_node(vmp, NODE4); // Create a new leaf art_leaf *l2 = make_leaf(vmp, key, key_len, value, val_len); // Determine longest prefix int longest_prefix = longest_common_prefix(l, l2, depth); new->n.partial_len 
= longest_prefix; memcpy(new->n.partial, key + depth, min(MAX_PREFIX_LEN, longest_prefix)); // Add the leafs to the new node4 *ref = (art_node *)new; add_child4(vmp, new, ref, l->key[depth + longest_prefix], SET_LEAF(l)); add_child4(vmp, new, ref, l2->key[depth + longest_prefix], SET_LEAF(l2)); return NULL; } // Check if given node has a prefix if (n->partial_len) { // Determine if the prefixes differ, since we need to split int prefix_diff = prefix_mismatch(n, key, key_len, depth); if ((uint32_t)prefix_diff >= n->partial_len) { depth += n->partial_len; goto RECURSE_SEARCH; } // Create a new node art_node4 *new = (art_node4 *)alloc_node(vmp, NODE4); *ref = (art_node *)new; new->n.partial_len = prefix_diff; memcpy(new->n.partial, n->partial, min(MAX_PREFIX_LEN, prefix_diff)); // Adjust the prefix of the old node if (n->partial_len <= MAX_PREFIX_LEN) { add_child4(vmp, new, ref, n->partial[prefix_diff], n); n->partial_len -= (prefix_diff + 1); memmove(n->partial, n->partial + prefix_diff + 1, min(MAX_PREFIX_LEN, n->partial_len)); } else { n->partial_len -= (prefix_diff + 1); art_leaf *l = minimum(n); assert(l != NULL); add_child4(vmp, new, ref, l->key[depth + prefix_diff], n); memcpy(n->partial, l->key + depth + prefix_diff + 1, min(MAX_PREFIX_LEN, n->partial_len)); } // Insert the new leaf art_leaf *l = make_leaf(vmp, key, key_len, value, val_len); add_child4(vmp, new, ref, key[depth + prefix_diff], SET_LEAF(l)); return NULL; } RECURSE_SEARCH:; // Find a child to recurse to art_node **child = find_child(n, key[depth]); if (child) { return recursive_insert(vmp, *child, child, key, key_len, value, val_len, depth + 1, old); } // No child, node goes within us art_leaf *l = make_leaf(vmp, key, key_len, value, val_len); add_child(vmp, n, ref, key[depth], SET_LEAF(l)); return NULL; } /* * Inserts a new value into the ART tree * @arg t The tree * @arg key The key * @arg key_len The length of the key * @arg value Opaque value. * @return NULL if the item was newly inserted, otherwise * the old value pointer is returned. 
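 *
 * Minimal usage sketch (assumes a VMEM pool 'vmp' from vmem_create()
 * and a 'tree' initialized with art_tree_init(); names are illustrative):
 *
 *	char val[] = "world";
 *	void *old = art_insert(vmp, &tree, (const unsigned char *)"hello",
 *			6, val, sizeof(val));
 *	// old == NULL on a fresh insert; in that case the key and value
 *	// bytes are copied into the leaf, so 'val' need not outlive the call.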
*/ void * art_insert(VMEM *vmp, art_tree *t, const unsigned char *key, int key_len, void *value, int val_len) { int old_val = 0; void *old = recursive_insert(vmp, t->root, &t->root, key, key_len, value, val_len, 0, &old_val); if (!old_val) t->size++; return old; } static void remove_child256(VMEM *vmp, art_node256 *n, art_node **ref, unsigned char c) { n->children[c] = NULL; n->n.num_children--; // Resize to a node48 on underflow, not immediately to prevent // trashing if we sit on the 48/49 boundary if (n->n.num_children == 37) { art_node48 *new = (art_node48 *)alloc_node(vmp, NODE48); *ref = (art_node *) new; copy_header((art_node *)new, (art_node *)n); int pos = 0; for (int i = 0; i < 256; i++) { if (n->children[i]) { assert(pos < 48); new->children[pos] = n->children[i]; new->keys[i] = pos + 1; pos++; } } vmem_free(vmp, n); } } static void remove_child48(VMEM *vmp, art_node48 *n, art_node **ref, unsigned char c) { int pos = n->keys[c]; n->keys[c] = 0; n->children[pos - 1] = NULL; n->n.num_children--; if (n->n.num_children == 12) { art_node16 *new = (art_node16 *)alloc_node(vmp, NODE16); *ref = (art_node *)new; copy_header((art_node *) new, (art_node *)n); int child = 0; for (int i = 0; i < 256; i++) { pos = n->keys[i]; if (pos) { assert(child < 16); new->keys[child] = i; new->children[child] = n->children[pos - 1]; child++; } } vmem_free(vmp, n); } } static void remove_child16(VMEM *vmp, art_node16 *n, art_node **ref, art_node **l) { int pos = l - n->children; memmove(n->keys + pos, n->keys + pos + 1, n->n.num_children - 1 - pos); memmove(n->children + pos, n->children + pos + 1, (n->n.num_children - 1 - pos) * sizeof(void *)); n->n.num_children--; if (n->n.num_children == 3) { art_node4 *new = (art_node4 *)alloc_node(vmp, NODE4); *ref = (art_node *) new; copy_header((art_node *)new, (art_node *)n); memcpy(new->keys, n->keys, 4); memcpy(new->children, n->children, 4 * sizeof(void *)); vmem_free(vmp, n); } } static void remove_child4(VMEM *vmp, art_node4 *n, art_node **ref, art_node **l) { int pos = l - n->children; memmove(n->keys + pos, n->keys + pos + 1, n->n.num_children - 1 - pos); memmove(n->children + pos, n->children + pos + 1, (n->n.num_children - 1 - pos) * sizeof(void *)); n->n.num_children--; // Remove nodes with only a single child if (n->n.num_children == 1) { art_node *child = n->children[0]; if (!IS_LEAF(child)) { // Concatenate the prefixes int prefix = n->n.partial_len; if (prefix < MAX_PREFIX_LEN) { n->n.partial[prefix] = n->keys[0]; prefix++; } if (prefix < MAX_PREFIX_LEN) { int sub_prefix = min(child->partial_len, MAX_PREFIX_LEN - prefix); memcpy(n->n.partial + prefix, child->partial, sub_prefix); prefix += sub_prefix; } // Store the prefix in the child memcpy(child->partial, n->n.partial, min(prefix, MAX_PREFIX_LEN)); child->partial_len += n->n.partial_len + 1; } *ref = child; vmem_free(vmp, n); } } static void remove_child(VMEM *vmp, art_node *n, art_node **ref, unsigned char c, art_node **l) { switch (n->type) { case NODE4: return remove_child4(vmp, (art_node4 *)n, ref, l); case NODE16: return remove_child16(vmp, (art_node16 *)n, ref, l); case NODE48: return remove_child48(vmp, (art_node48 *)n, ref, c); case NODE256: return remove_child256(vmp, (art_node256 *)n, ref, c); default: abort(); } } static art_leaf * recursive_delete(VMEM *vmp, art_node *n, art_node **ref, const unsigned char *key, int key_len, int depth) { // Search terminated if (!n) return NULL; // Handle hitting a leaf node if (IS_LEAF(n)) { art_leaf *l = LEAF_RAW(n); if (!leaf_matches(l, key, 
key_len, depth)) { *ref = NULL; return l; } return NULL; } // Bail if the prefix does not match if (n->partial_len) { int prefix_len = check_prefix(n, key, key_len, depth); if (prefix_len != min(MAX_PREFIX_LEN, n->partial_len)) { return NULL; } depth = depth + n->partial_len; } // Find child node art_node **child = find_child(n, key[depth]); if (!child) return NULL; // If the child is leaf, delete from this node if (IS_LEAF(*child)) { art_leaf *l = LEAF_RAW(*child); if (!leaf_matches(l, key, key_len, depth)) { remove_child(vmp, n, ref, key[depth], child); return l; } return NULL; // Recurse } else { return recursive_delete(vmp, *child, child, key, key_len, depth + 1); } } /* * Deletes a value from the ART tree * @arg t The tree * @arg key The key * @arg key_len The length of the key * @return NULL if the item was not found, otherwise * the value pointer is returned. */ void * art_delete(VMEM *vmp, art_tree *t, const unsigned char *key, int key_len) { art_leaf *l = recursive_delete(vmp, t->root, &t->root, key, key_len, 0); if (l) { t->size--; void *old = l->value; vmem_free(vmp, l); return old; } return NULL; } // Recursively iterates over the tree static int recursive_iter(art_node *n, art_callback cb, void *data) { // Handle base cases if (!n) return 0; if (IS_LEAF(n)) { art_leaf *l = LEAF_RAW(n); return cb(data, (const unsigned char *)l->key, l->key_len, l->value, l->val_len); } int idx, res; switch (n->type) { case NODE4: for (int i = 0; i < n->num_children; i++) { res = recursive_iter(((art_node4 *)n)->children[i], cb, data); if (res) return res; } break; case NODE16: for (int i = 0; i < n->num_children; i++) { res = recursive_iter( ((art_node16 *)n)->children[i], cb, data); if (res) return res; } break; case NODE48: for (int i = 0; i < 256; i++) { idx = ((art_node48 *)n)->keys[i]; if (!idx) continue; res = recursive_iter( ((art_node48 *)n)->children[idx - 1], cb, data); if (res) return res; } break; case NODE256: for (int i = 0; i < 256; i++) { if (!((art_node256 *)n)->children[i]) continue; res = recursive_iter( ((art_node256 *)n)->children[i], cb, data); if (res) return res; } break; default: abort(); } return 0; } /* * Iterates through the entries pairs in the map, * invoking a callback for each. The call back gets a * key, value for each and returns an integer stop value. * If the callback returns non-zero, then the iteration stops. * @arg t The tree to iterate over * @arg cb The callback function to invoke * @arg data Opaque handle passed to the callback * @return 0 on success, or the return of the callback. 
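 *
 * A sketch of a callback that prints every pair (hypothetical helper,
 * not part of this file):
 *
 *	static int
 *	print_cb(void *data, const unsigned char *key, uint32_t key_len,
 *		const unsigned char *value, uint32_t val_len)
 *	{
 *		printf("%.*s = %.*s\n", (int)key_len, key,
 *			(int)val_len, value);
 *		return 0;	// keep iterating
 *	}
 *
 * used as: art_iter(&tree, print_cb, NULL);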
*/ int art_iter(art_tree *t, art_callback cb, void *data) { return recursive_iter(t->root, cb, data); } // Recursively iterates over the tree static int recursive_iter2(art_node *n, art_callback cb, void *data) { cb_data _cbd, *cbd = &_cbd; int first = 1; // Handle base cases if (!n) return 0; cbd->node = (void *)n; cbd->node_type = n->type; cbd->child_idx = -1; if (IS_LEAF(n)) { art_leaf *l = LEAF_RAW(n); return cb(cbd, (const unsigned char *)l->key, l->key_len, l->value, l->val_len); } int idx, res; switch (n->type) { case NODE4: for (int i = 0; i < n->num_children; i++) { cbd->first_child = first; first = 0; cbd->child_idx = i; cb((void *)cbd, NULL, 0, NULL, 0); res = recursive_iter2(((art_node4 *)n)->children[i], cb, data); if (res) return res; } break; case NODE16: for (int i = 0; i < n->num_children; i++) { cbd->first_child = first; first = 0; cbd->child_idx = i; cb((void *)cbd, NULL, 0, NULL, 0); res = recursive_iter2(((art_node16 *)n)->children[i], cb, data); if (res) return res; } break; case NODE48: for (int i = 0; i < 256; i++) { idx = ((art_node48 *)n)->keys[i]; if (!idx) continue; cbd->first_child = first; first = 0; cbd->child_idx = i; cb((void *)cbd, NULL, 0, NULL, 0); res = recursive_iter2( ((art_node48 *)n)->children[idx - 1], cb, data); if (res) return res; } break; case NODE256: for (int i = 0; i < 256; i++) { if (!((art_node256 *)n)->children[i]) continue; cbd->first_child = first; first = 0; cbd->child_idx = i; cb((void *)cbd, NULL, 0, NULL, 0); res = recursive_iter2( ((art_node256 *)n)->children[i], cb, data); if (res) return res; } break; default: abort(); } return 0; } /* * Iterates through the entries pairs in the map, * invoking a callback for each. The call back gets a * key, value for each and returns an integer stop value. * If the callback returns non-zero, then the iteration stops. * @arg t The tree to iterate over * @arg cb The callback function to invoke * @arg data Opaque handle passed to the callback * @return 0 on success, or the return of the callback. */ int art_iter2(art_tree *t, art_callback cb, void *data) { return recursive_iter2(t->root, cb, data); } /* * Checks if a leaf prefix matches * @return 0 on success. */ static int leaf_prefix_matches(const art_leaf *n, const unsigned char *prefix, int prefix_len) { // Fail if the key length is too short if (n->key_len < (uint32_t)prefix_len) return 1; // Compare the keys return memcmp(n->key, prefix, prefix_len); } /* * Iterates through the entries pairs in the map, * invoking a callback for each that matches a given prefix. * The call back gets a key, value for each and returns an integer stop value. * If the callback returns non-zero, then the iteration stops. * @arg t The tree to iterate over * @arg prefix The prefix of keys to read * @arg prefix_len The length of the prefix * @arg cb The callback function to invoke * @arg data Opaque handle passed to the callback * @return 0 on success, or the return of the callback. 
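 *
 * E.g. art_iter_prefix(&tree, (const unsigned char *)"ab", 2, print_cb,
 * NULL) (with the hypothetical print_cb sketched above) visits only the
 * entries whose keys start with "ab".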
*/ int art_iter_prefix(art_tree *t, const unsigned char *key, int key_len, art_callback cb, void *data) { art_node **child; art_node *n = t->root; int prefix_len, depth = 0; while (n) { // Might be a leaf if (IS_LEAF(n)) { n = LEAF_RAW(n); // Check if the expanded path matches if (!leaf_prefix_matches( (art_leaf *)n, key, key_len)) { art_leaf *l = (art_leaf *)n; return cb(data, (const unsigned char *)l->key, l->key_len, l->value, l->val_len); } return 0; } // If the depth matches the prefix, we need to handle this node if (depth == key_len) { art_leaf *l = minimum(n); assert(l != NULL); if (!leaf_prefix_matches(l, key, key_len)) return recursive_iter(n, cb, data); return 0; } // Bail if the prefix does not match if (n->partial_len) { prefix_len = prefix_mismatch(n, key, key_len, depth); // If there is no match, search is terminated if (!prefix_len) return 0; // If we've matched the prefix, iterate on this node else if (depth + prefix_len == key_len) { return recursive_iter(n, cb, data); } // if there is a full match, go deeper depth = depth + n->partial_len; } // Recursively search child = find_child(n, key[depth]); n = (child) ? *child : NULL; depth++; } return 0; } vmem-1.8/src/examples/libvmem/libart/art.h000066400000000000000000000154631361505074100206150ustar00rootroot00000000000000/* * Copyright 2016, FUJITSU TECHNOLOGY SOLUTIONS GMBH * Copyright 2012, Armon Dadgar. All rights reserved. * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * ========================================================================== * * Filename: art.h * * Description: implement ART tree using libvmem based on libart * * Author: Andreas Bluemle, Dieter Kasper * Andreas.Bluemle.external@ts.fujitsu.com * dieter.kasper@ts.fujitsu.com * * Organization: FUJITSU TECHNOLOGY SOLUTIONS GMBH * ========================================================================== */ /* * based on https://github.com/armon/libart/src/art.h */ #include #ifndef ART_H #define ART_H #ifdef __cplusplus extern "C" { #endif #define NODE4 1 #define NODE16 2 #define NODE48 3 #define NODE256 4 #define MAX_PREFIX_LEN 10 #if defined(__GNUC__) && !defined(__clang__) #if __STDC_VERSION__ >= 199901L && 402 == (__GNUC__ * 100 + __GNUC_MINOR__) /* * GCC 4.2.2's C99 inline keyword support is pretty broken; avoid. Introduced in * GCC 4.2.something, fixed in 4.3.0. So checking for specific major.minor of * 4.2 is fine. */ #define BROKEN_GCC_C99_INLINE #endif #endif typedef int(*art_callback)(void *data, const unsigned char *key, uint32_t key_len, const unsigned char *value, uint32_t val_len); /* * This struct is included as part * of all the various node sizes */ typedef struct { uint8_t type; uint8_t num_children; uint32_t partial_len; unsigned char partial[MAX_PREFIX_LEN]; } art_node; /* * Small node with only 4 children */ typedef struct { art_node n; unsigned char keys[4]; art_node *children[4]; } art_node4; /* * Node with 16 children */ typedef struct { art_node n; unsigned char keys[16]; art_node *children[16]; } art_node16; /* * Node with 48 children, but * a full 256 byte field. */ typedef struct { art_node n; unsigned char keys[256]; art_node *children[48]; } art_node48; /* * Full node with 256 children */ typedef struct { art_node n; art_node *children[256]; } art_node256; /* * Represents a leaf. These are * of arbitrary size, as they include the key. */ typedef struct { uint32_t key_len; uint32_t val_len; unsigned char *key; unsigned char *value; unsigned char data[]; } art_leaf; /* * Main struct, points to root. */ typedef struct { art_node *root; uint64_t size; } art_tree; /* * Initializes an ART tree * @return 0 on success. */ int art_tree_init(art_tree *t); /* * DEPRECATED * Initializes an ART tree * @return 0 on success. */ #define init_art_tree(...) art_tree_init(__VA_ARGS__) /* * Destroys an ART tree * @return 0 on success. */ int art_tree_destroy(VMEM *vmp, art_tree *t); /* * Returns the size of the ART tree. */ #ifdef BROKEN_GCC_C99_INLINE #define art_size(t) ((t)->size) #else static inline uint64_t art_size(art_tree *t) { return t->size; } #endif /* * Inserts a new value into the ART tree * @arg t The tree * @arg key The key * @arg key_len The length of the key * @arg value Opaque value. * @return NULL if the item was newly inserted, otherwise * the old value pointer is returned. */ void *art_insert(VMEM *vmp, art_tree *t, const unsigned char *key, int key_len, void *value, int val_len); /* * Deletes a value from the ART tree * @arg t The tree * @arg key The key * @arg key_len The length of the key * @return NULL if the item was not found, otherwise * the value pointer is returned. */ void *art_delete(VMEM *vmp, art_tree *t, const unsigned char *key, int key_len); /* * Searches for a value in the ART tree * @arg t The tree * @arg key The key * @arg key_len The length of the key * @return NULL if the item was not found, otherwise * the value pointer is returned. 
*/ void *art_search(const art_tree *t, const unsigned char *key, int key_len); /* * Returns the minimum valued leaf * @return The minimum leaf or NULL */ art_leaf *art_minimum(art_tree *t); /* * Returns the maximum valued leaf * @return The maximum leaf or NULL */ art_leaf *art_maximum(art_tree *t); /* * Iterates through the entries pairs in the map, * invoking a callback for each. The call back gets a * key, value for each and returns an integer stop value. * If the callback returns non-zero, then the iteration stops. * @arg t The tree to iterate over * @arg cb The callback function to invoke * @arg data Opaque handle passed to the callback * @return 0 on success, or the return of the callback. */ int art_iter(art_tree *t, art_callback cb, void *data); typedef struct _cb_data { int node_type; int child_idx; int first_child; void *node; } cb_data; int art_iter2(art_tree *t, art_callback cb, void *data); /* * Iterates through the entries pairs in the map, * invoking a callback for each that matches a given prefix. * The call back gets a key, value for each and returns an integer stop value. * If the callback returns non-zero, then the iteration stops. * @arg t The tree to iterate over * @arg prefix The prefix of keys to read * @arg prefix_len The length of the prefix * @arg cb The callback function to invoke * @arg data Opaque handle passed to the callback * @return 0 on success, or the return of the callback. */ int art_iter_prefix(art_tree *t, const unsigned char *prefix, int prefix_len, art_callback cb, void *data); #ifdef __cplusplus } #endif #endif vmem-1.8/src/examples/libvmem/libart/arttree.c000066400000000000000000000757301361505074100214730ustar00rootroot00000000000000/* * Copyright 2016, FUJITSU TECHNOLOGY SOLUTIONS GMBH * Copyright 2016-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * =========================================================================== * * Filename: arttree.c * * Description: implement ART tree using libpmemobj based on libart * * Author: Andreas Bluemle, Dieter Kasper * Andreas.Bluemle.external@ts.fujitsu.com * dieter.kasper@ts.fujitsu.com * * Organization: FUJITSU TECHNOLOGY SOLUTIONS GMBH * * =========================================================================== */ #include #include #include #include #include #include #include #include #ifdef __FreeBSD__ #define _WITH_GETLINE #endif #include #include #include #include #include #include #include #include #include "libvmem.h" #include "arttree.h" #define APPNAME "arttree" #define SRCVERSION "0.1" struct str2int_map { char *name; int value; }; #define ART_NODE 0 #define ART_NODE4 1 #define ART_NODE16 2 #define ART_NODE48 3 #define ART_NODE256 4 #define ART_TREE_ROOT 5 #define ART_LEAF 6 struct str2int_map art_node_types[] = { {"art_node", ART_NODE}, {"art_node4", ART_NODE4}, {"art_node16", ART_NODE16}, {"art_node48", ART_NODE48}, {"art_node256", ART_NODE256}, {"art_tree", ART_TREE_ROOT}, {"art_leaf", ART_LEAF}, {NULL, -1} }; struct datastore { void *priv; }; /* * context - main context of datastore */ struct ds_context { char *dirname; /* name of pool file */ int mode; /* operation mode */ int insertions; /* number of insert operations to perform */ int newpool; /* complete new memory pool */ size_t psize; /* size of pool */ VMEM *vmp; /* handle to vmem pool */ art_tree *art_tree; /* art_tree root */ bool fileio; unsigned fmode; FILE *input; FILE *output; uint64_t address; unsigned char *key; int type; int fd; /* file descriptor for file io mode */ }; #define FILL (1 << 1) #define INTERACTIVE (1 << 3) struct ds_context my_context; #define read_key(c, p) read_line(c, p) #define read_value(c, p) read_line(c, p) static void usage(char *progname); int initialize_context(struct ds_context *ctx, int ac, char *av[]); int add_elements(struct ds_context *ctx); ssize_t read_line(struct ds_context *ctx, unsigned char **line); void exit_handler(struct ds_context *ctx); int art_tree_map_init(struct datastore *ds, struct ds_context *ctx); void pmemobj_ds_set_priv(struct datastore *ds, void *priv); static int dump_art_leaf_callback(void *data, const unsigned char *key, uint32_t key_len, const unsigned char *val, uint32_t val_len); static int dump_art_tree_graph(void *data, const unsigned char *key, uint32_t key_len, const unsigned char *val, uint32_t val_len); static void print_node_info(char *nodetype, uint64_t addr, art_node *an); static void print_help(char *appname); static void print_version(char *appname); static struct command *get_command(char *cmd_str); static int help_func(char *appname, struct ds_context *ctx, int argc, char *argv[]); static void help_help(char *appname); static int quit_func(char *appname, struct ds_context *ctx, int argc, char *argv[]); static void quit_help(char *appname); static int set_output_func(char *appname, struct ds_context *ctx, int argc, char *argv[]); static void set_output_help(char *appname); static int arttree_fill_func(char *appname, struct ds_context *ctx, int ac, char *av[]); static void arttree_fill_help(char *appname); static int arttree_examine_func(char *appname, struct ds_context *ctx, int ac, char *av[]); static void arttree_examine_help(char *appname); static int arttree_search_func(char *appname, struct ds_context *ctx, int ac, char *av[]); static void arttree_search_help(char *appname); static int arttree_delete_func(char *appname, struct 
ds_context *ctx, int ac, char *av[]); static void arttree_delete_help(char *appname); static int arttree_dump_func(char *appname, struct ds_context *ctx, int ac, char *av[]); static void arttree_dump_help(char *appname); static int arttree_graph_func(char *appname, struct ds_context *ctx, int ac, char *av[]); static void arttree_graph_help(char *appname); static int map_lookup(struct str2int_map *map, char *name); static void arttree_examine(struct ds_context *ctx, void *addr, int node_type); static void dump_art_tree_root(struct ds_context *ctx, art_tree *node); static void dump_art_node(struct ds_context *ctx, art_node *node); static void dump_art_node4(struct ds_context *ctx, art_node4 *node); static void dump_art_node16(struct ds_context *ctx, art_node16 *node); static void dump_art_node48(struct ds_context *ctx, art_node48 *node); static void dump_art_node256(struct ds_context *ctx, art_node256 *node); static void dump_art_leaf(struct ds_context *ctx, art_leaf *node); static char *asciidump(unsigned char *s, int32_t len); void outv_err(const char *fmt, ...); void outv_err_vargs(const char *fmt, va_list ap); /* * command -- struct for commands definition */ struct command { const char *name; const char *brief; int (*func)(char *, struct ds_context *, int, char *[]); void (*help)(char *); }; struct command commands[] = { { .name = "fill", .brief = "create and fill an art tree", .func = arttree_fill_func, .help = arttree_fill_help, }, { .name = "dump", .brief = "dump an art tree", .func = arttree_dump_func, .help = arttree_dump_help, }, { .name = "graph", .brief = "dump an art tree for graphical conversion", .func = arttree_graph_func, .help = arttree_graph_help, }, { .name = "help", .brief = "print help text about a command", .func = help_func, .help = help_help, }, { .name = "examine", .brief = "examine art tree structures", .func = arttree_examine_func, .help = arttree_examine_help, }, { .name = "search", .brief = "search for key in art tree", .func = arttree_search_func, .help = arttree_search_help, }, { .name = "delete", .brief = "delete leaf with key from art tree", .func = arttree_delete_func, .help = arttree_delete_help, }, { .name = "set_output", .brief = "set output file", .func = set_output_func, .help = set_output_help, }, { .name = "quit", .brief = "quit arttree structure examiner", .func = quit_func, .help = quit_help, }, }; /* * number of arttree_structures commands */ #define COMMANDS_NUMBER (sizeof(commands) / sizeof(commands[0])) int initialize_context(struct ds_context *ctx, int ac, char *av[]) { int errors = 0; int opt; char mode; if ((ctx == NULL) || (ac < 2)) { errors++; } if (!errors) { ctx->dirname = NULL; ctx->psize = VMEM_MIN_POOL; ctx->newpool = 0; ctx->vmp = NULL; ctx->art_tree = NULL; ctx->fileio = false; ctx->fmode = 0666; ctx->mode = 0; ctx->input = stdin; ctx->output = stdout; ctx->fd = -1; } if (!errors) { while ((opt = getopt(ac, av, "m:n:s:")) != -1) { switch (opt) { case 'm': mode = optarg[0]; if (mode == 'f') { ctx->mode |= FILL; } else if (mode == 'i') { ctx->mode |= INTERACTIVE; } else { errors++; } break; case 'n': { long insertions; insertions = strtol(optarg, NULL, 0); if (insertions > 0 && insertions < LONG_MAX) { ctx->insertions = insertions; } break; } default: errors++; break; } } } if (optind >= ac) { errors++; } if (!errors) { ctx->dirname = strdup(av[optind]); } return errors; } void exit_handler(struct ds_context *ctx) { if (!ctx->fileio) { if (ctx->vmp) { vmem_delete(ctx->vmp); } } else { if (ctx->fd > - 1) { close(ctx->fd); } } } int 
art_tree_map_init(struct datastore *ds, struct ds_context *ctx) { int errors = 0; /* calculate a required pool size */ if (ctx->psize < VMEM_MIN_POOL) ctx->psize = VMEM_MIN_POOL; if (!ctx->fileio) { if (access(ctx->dirname, F_OK) == 0) { ctx->vmp = vmem_create(ctx->dirname, ctx->psize); if (ctx->vmp == NULL) { perror("vmem_create"); errors++; } ctx->newpool = 1; } } return errors; } /* * pmemobj_ds_set_priv -- set private structure of datastore */ void pmemobj_ds_set_priv(struct datastore *ds, void *priv) { ds->priv = priv; } struct datastore myds; static void usage(char *progname) { printf("usage: %s -m [f|d|g] dir\n", progname); printf(" -m mode known modes are\n"); printf(" f fill create and fill art tree\n"); printf(" i interactive interact with art tree\n"); printf(" -n insertions number of key-value pairs to insert" "into the tree\n"); printf(" -s size size of the vmem pool file " "[minimum: VMEM_MIN_POOL=%ld]\n", VMEM_MIN_POOL); printf("\nfilling an art tree is done by reading key value pairs\n" "from standard input.\n" "Both keys and values are single line only.\n"); } /* * print_version -- prints arttree version message */ static void print_version(char *appname) { printf("%s %s\n", appname, SRCVERSION); } /* * print_help -- prints arttree help message */ static void print_help(char *appname) { usage(appname); print_version(appname); printf("\n"); printf("Options:\n"); printf(" -h, --help display this help and exit\n"); printf("\n"); printf("The available commands are:\n"); int i; for (i = 0; i < COMMANDS_NUMBER; i++) printf("%s\t- %s\n", commands[i].name, commands[i].brief); printf("\n"); } static int map_lookup(struct str2int_map *map, char *name) { int idx; int value = -1; for (idx = 0; ; idx++) { if (map[idx].name == NULL) { break; } if (strcmp((const char *)map[idx].name, (const char *)name) == 0) { value = map[idx].value; break; } } return value; } /* * get_command -- returns command for specified command name */ static struct command * get_command(char *cmd_str) { int i; if (cmd_str == NULL) { return NULL; } for (i = 0; i < COMMANDS_NUMBER; i++) { if (strcmp(cmd_str, commands[i].name) == 0) return &commands[i]; } return NULL; } /* * quit_help -- prints help message for quit command */ static void quit_help(char *appname) { printf("Usage: quit\n"); printf(" terminate interactive arttree function\n"); } /* * quit_func -- quit arttree function */ static int quit_func(char *appname, struct ds_context *ctx, int argc, char *argv[]) { printf("\n"); exit(0); return 0; } static void set_output_help(char *appname) { printf("set_output output redirection\n"); printf("Usage: set_output []\n"); printf(" redirect subsequent output to specified file\n"); printf(" if file_name is not specified," "then reset to standard output\n"); } static int set_output_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { int errors = 0; if (ac == 1) { if ((ctx->output != NULL) && (ctx->output != stdout)) { (void) fclose(ctx->output); } ctx->output = stdout; } else if (ac == 2) { FILE *out_fp; out_fp = fopen(av[1], "w+"); if (out_fp == (FILE *)NULL) { outv_err("set_output: cannot open %s for writing\n", av[1]); errors++; } else { if ((ctx->output != NULL) && (ctx->output != stdout)) { (void) fclose(ctx->output); } ctx->output = out_fp; } } else { outv_err("set_output: too many arguments [%d]\n", ac); errors++; } return errors; } /* * help_help -- prints help message for help command */ static void help_help(char *appname) { printf("Usage: %s help \n", appname); } /* * help_func -- prints help 
message for specified command */ static int help_func(char *appname, struct ds_context *ctx, int argc, char *argv[]) { if (argc > 1) { char *cmd_str = argv[1]; struct command *cmdp = get_command(cmd_str); if (cmdp && cmdp->help) { cmdp->help(appname); return 0; } else { outv_err("No help text for '%s' command\n", cmd_str); return -1; } } else { print_help(appname); return -1; } } static int arttree_fill_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { int errors = 0; int opt; (void) appname; optind = 0; while ((opt = getopt(ac, av, "n:")) != -1) { switch (opt) { case 'n': { long insertions; insertions = strtol(optarg, NULL, 0); if (insertions > 0 && insertions < LONG_MAX) { ctx->insertions = insertions; } break; } default: errors++; break; } } if (optind >= ac) { outv_err("fill: missing input filename\n"); arttree_fill_help(appname); errors++; } if (!errors) { struct stat statbuf; FILE *in_fp; if (stat(av[optind], &statbuf)) { outv_err("fill: cannot stat %s\n", av[optind]); errors++; } else { in_fp = fopen(av[optind], "r"); if (in_fp == (FILE *)NULL) { outv_err("fill: cannot open %s for reading\n", av[optind]); errors++; } else { if ((ctx->input != NULL) && (ctx->input != stdin)) { (void) fclose(ctx->input); } ctx->input = in_fp; } } } if (!errors) { if (add_elements(ctx)) { perror("add elements"); errors++; } if ((ctx->input != NULL) && (ctx->input != stdin)) { (void) fclose(ctx->input); } ctx->input = stdin; } return errors; } static void arttree_fill_help(char *appname) { (void) appname; printf("create and fill an art tree\n"); printf("Usage: fill [-n ] \n"); printf(" number of key-val pairs to fill" "the art tree\n"); printf(" input file for key-val pairs\n"); } static char outbuf[1024]; static char * asciidump(unsigned char *s, int32_t len) { char *p; int l; p = outbuf; if ((s != 0) && (len > 0)) { while (len--) { if (isprint((*s)&0xff)) { l = sprintf(p, "%c", (*s)&0xff); } else { l = sprintf(p, "\\%.2x", (*s)&0xff); } p += l; s++; } } *p = '\0'; p++; return outbuf; } static void dump_art_tree_root(struct ds_context *ctx, art_tree *node) { fprintf(ctx->output, "art_tree 0x%" PRIxPTR " {\n" " size=%" PRId64 ";\n root=0x%" PRIxPTR ";\n}\n", (uintptr_t)node, node->size, (uintptr_t)(node->root)); } static void dump_art_node(struct ds_context *ctx, art_node *node) { fprintf(ctx->output, "art_node 0x%" PRIxPTR " {\n" " type=%s;\n" " num_children=%d;\n" " partial_len=%d;\n" " partial=[%s];\n" "}\n", (uintptr_t)node, art_node_types[node->type].name, node->num_children, node->partial_len, asciidump(node->partial, node->partial_len)); } static void dump_art_node4(struct ds_context *ctx, art_node4 *node) { int i; fprintf(ctx->output, "art_node4 0x%" PRIxPTR " {\n", (uintptr_t)node); dump_art_node(ctx, &(node->n)); for (i = 0; i < node->n.num_children; i++) { fprintf(ctx->output, " key[%d]=%s;\n", i, asciidump(&(node->keys[i]), 1)); fprintf(ctx->output, " child[%d]=0x%" PRIxPTR ";\n", i, (uintptr_t)(node->children[i])); } fprintf(ctx->output, "}\n"); } static void dump_art_node16(struct ds_context *ctx, art_node16 *node) { int i; fprintf(ctx->output, "art_node16 0x%" PRIxPTR " {\n", (uintptr_t)node); dump_art_node(ctx, &(node->n)); for (i = 0; i < node->n.num_children; i++) { fprintf(ctx->output, " key[%d]=%s;\n", i, asciidump(&(node->keys[i]), 1)); fprintf(ctx->output, " child[%d]=0x%" PRIxPTR ";\n", i, (uintptr_t)(node->children[i])); } fprintf(ctx->output, "}\n"); } static void dump_art_node48(struct ds_context *ctx, art_node48 *node) { int i; int idx; fprintf(ctx->output, 
"art_node48 0x%" PRIxPTR " {\n", (uintptr_t)node); dump_art_node(ctx, &(node->n)); for (i = 0; i < 256; i++) { idx = node->keys[i]; if (!idx) continue; fprintf(ctx->output, " key[%d]=%s;\n", i, asciidump((unsigned char *)(&i), 1)); fprintf(ctx->output, " child[%d]=0x%" PRIxPTR ";\n", idx, (uintptr_t)(node->children[idx])); } fprintf(ctx->output, "}\n"); } static void dump_art_node256(struct ds_context *ctx, art_node256 *node) { int i; fprintf(ctx->output, "art_node48 0x%" PRIxPTR " {\n", (uintptr_t)node); dump_art_node(ctx, &(node->n)); for (i = 0; i < 256; i++) { if (node->children[i] == NULL) continue; fprintf(ctx->output, " key[%i]=%s;\n", i, asciidump((unsigned char *)(&i), 1)); fprintf(ctx->output, " child[%d]=0x%" PRIxPTR ";\n", i, (uintptr_t)(node->children[i])); } fprintf(ctx->output, "}\n"); } static void dump_art_leaf(struct ds_context *ctx, art_leaf *node) { fprintf(ctx->output, "art_leaf 0x%" PRIxPTR " {\n" " key_len=%u;\n" " key=[%s];\n", (uintptr_t)node, node->key_len, asciidump(node->key, (int32_t)node->key_len)); fprintf(ctx->output, " val_len=%u;\n" " value=[%s];\n" "}\n", node->val_len, asciidump(node->value, (int32_t)node->val_len)); } static void arttree_examine(struct ds_context *ctx, void *addr, int node_type) { if (addr == NULL) return; switch (node_type) { case ART_TREE_ROOT: dump_art_tree_root(ctx, (art_tree *)addr); break; case ART_NODE: dump_art_node(ctx, (art_node *)addr); break; case ART_NODE4: dump_art_node4(ctx, (art_node4 *)addr); break; case ART_NODE16: dump_art_node16(ctx, (art_node16 *)addr); break; case ART_NODE48: dump_art_node48(ctx, (art_node48 *)addr); break; case ART_NODE256: dump_art_node256(ctx, (art_node256 *)addr); break; case ART_LEAF: dump_art_leaf(ctx, (art_leaf *)addr); break; default: break; } fflush(ctx->output); } static int arttree_examine_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { int errors = 0; (void) appname; if (ac > 1) { if (ac < 3) { outv_err("examine: missing argument\n"); arttree_examine_help(appname); errors++; } else { ctx->address = (uint64_t)strtol(av[1], NULL, 0); ctx->type = map_lookup(&(art_node_types[0]), av[2]); } } else { ctx->address = (uint64_t)ctx->art_tree; ctx->type = ART_TREE_ROOT; } if (!errors) { if (ctx->output == NULL) ctx->output = stdout; arttree_examine(ctx, (void *)(ctx->address), ctx->type); } return errors; } static void arttree_examine_help(char *appname) { (void) appname; printf("examine structures of an art tree\n"); printf("Usage: examine
\n"); printf("
address of art tree structure to examine\n"); printf(" input file for key-val pairs\n"); printf("Known types are\n art_tree\n art_node\n" " art_node4\n art_node16\n art_node48\n art_node256\n" " art_leaf\n"); printf("If invoked without arguments, then the root of the art tree" " is dumped\n"); } static int arttree_search_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { void *p; int errors = 0; (void) appname; if (ac > 1) { ctx->key = (unsigned char *)strdup(av[1]); assert(ctx->key != NULL); } else { outv_err("search: missing key\n"); arttree_search_help(appname); errors++; } if (!errors) { if (ctx->output == NULL) ctx->output = stdout; p = art_search(ctx->art_tree, ctx->key, (int)strlen((const char *)ctx->key)); if (p != NULL) { fprintf(ctx->output, "found key [%s]: ", asciidump(ctx->key, strlen((const char *)ctx->key))); fprintf(ctx->output, "value [%s]\n", asciidump((unsigned char *)p, 20)); } else { fprintf(ctx->output, "not found key [%s]\n", asciidump(ctx->key, strlen((const char *)ctx->key))); } } return errors; } static void arttree_search_help(char *appname) { (void) appname; printf("search for key in art tree\n"); printf("Usage: search \n"); printf(" the key to search for\n"); } static int arttree_delete_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { void *p; int errors = 0; (void) appname; if (ac > 1) { ctx->key = (unsigned char *)strdup(av[1]); assert(ctx->key != NULL); } else { outv_err("delete: missing key\n"); arttree_delete_help(appname); errors++; } if (!errors) { if (ctx->output == NULL) ctx->output = stdout; p = art_delete(ctx->vmp, ctx->art_tree, ctx->key, (int)strlen((const char *)ctx->key)); if (p != NULL) { fprintf(ctx->output, "delete leaf with key [%s]:", asciidump(ctx->key, strlen((const char *)ctx->key))); fprintf(ctx->output, " value [%s]\n", asciidump((unsigned char *)p, 20)); } else { fprintf(ctx->output, "no leaf with key [%s]\n", asciidump(ctx->key, strlen((const char *)ctx->key))); } } return errors; } static void arttree_delete_help(char *appname) { (void) appname; printf("delete leaf with key from art tree\n"); printf("Usage: delete \n"); printf(" the key of the leaf to delete\n"); } static int arttree_dump_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { (void) appname; (void) ac; (void) av; art_iter(ctx->art_tree, dump_art_leaf_callback, NULL); return 0; } static void arttree_dump_help(char *appname) { (void) appname; printf("dump all leafs of an art tree\n"); printf("Usage: dump\n"); printf("\nThis function uses the art_iter() interface to descend\n"); printf("to all leafs of the art tree\n"); } static int arttree_graph_func(char *appname, struct ds_context *ctx, int ac, char *av[]) { (void) appname; (void) ac; (void) av; fprintf(ctx->output, "digraph g {\nrankdir=LR;\n"); art_iter2(ctx->art_tree, dump_art_tree_graph, NULL); fprintf(ctx->output, "}\n"); return 0; } static void arttree_graph_help(char *appname) { (void) appname; printf("dump art tree for graphical output (graphiviz/dot)\n"); printf("Usage: graph\n"); printf("\nThis function uses the art_iter2() interface to descend\n"); printf("through the art tree and produces output for graphviz/dot\n"); } int main(int argc, char *argv[]) { if (initialize_context(&my_context, argc, argv) != 0) { usage(argv[0]); return 1; } if (art_tree_map_init(&myds, &my_context) != 0) { fprintf(stderr, "failed to initialize memory pool file\n"); return 1; } if (my_context.vmp == NULL) { perror("pool initialization"); return 1; } my_context.art_tree = (art_tree 
*)vmem_malloc(my_context.vmp, sizeof(art_tree)); assert(my_context.art_tree != NULL); if (art_tree_init(my_context.art_tree)) { perror("art tree setup"); return 1; } if ((my_context.mode & INTERACTIVE)) { char *line; ssize_t read; size_t len; char *args[20]; int nargs; struct command *cmdp; /* interactive mode: read commands and execute them */ line = NULL; printf("\n> "); while ((read = getline(&line, &len, stdin)) != -1) { if (line[read - 1] == '\n') { line[read - 1] = '\0'; } args[0] = strtok(line, " "); cmdp = get_command(args[0]); if (cmdp == NULL) { printf("[%s]: command not supported\n", args[0] ? args[0] : "NULL"); printf("\n> "); continue; } nargs = 1; while (1) { args[nargs] = strtok(NULL, " "); if (args[nargs] == NULL) { break; } nargs++; } (void) cmdp->func(APPNAME, &my_context, nargs, args); printf("\n> "); } if (line != NULL) { free(line); } } if ((my_context.mode & FILL)) { if (add_elements(&my_context)) { perror("add elements"); return 1; } } exit_handler(&my_context); return 0; } int add_elements(struct ds_context *ctx) { int errors = 0; int i; int key_len; int val_len; unsigned char *key; unsigned char *value; if (ctx == NULL) { errors++; } else if (ctx->vmp == NULL) { errors++; } if (!errors) { for (i = 0; i < ctx->insertions; i++) { key = NULL; value = NULL; key_len = read_key(ctx, &key); val_len = read_value(ctx, &value); art_insert(ctx->vmp, ctx->art_tree, key, key_len, value, val_len); if (key != NULL) free(key); if (value != NULL) free(value); } } return errors; } ssize_t read_line(struct ds_context *ctx, unsigned char **line) { size_t len = -1; ssize_t read = -1; *line = NULL; if ((read = getline((char **)line, &len, ctx->input)) > 0) { (*line)[read - 1] = '\0'; } return read - 1; } static int dump_art_leaf_callback(void *data, const unsigned char *key, uint32_t key_len, const unsigned char *val, uint32_t val_len) { fprintf(my_context.output, "key len %" PRIu32 " = [%s], ", key_len, asciidump((unsigned char *)key, key_len)); fprintf(my_context.output, "value len %" PRIu32 " = [%s]\n", val_len, asciidump((unsigned char *)val, val_len)); fflush(my_context.output); return 0; } /* * Macros to manipulate pointer tags */ #define IS_LEAF(x) (((uintptr_t)(x) & 1)) #define SET_LEAF(x) ((void *)((uintptr_t)(x) | 1)) #define LEAF_RAW(x) ((void *)((uintptr_t)(x) & ~1)) unsigned char hexvals[] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f, 0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7, 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf, 0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7, 0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf, 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7, 
0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf, 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7, 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf, 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7, 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef, 0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff, }; static void print_node_info(char *nodetype, uint64_t addr, art_node *an) { int p_len; p_len = an->partial_len; fprintf(my_context.output, "N%" PRIx64 " [label=\"%s at\\n0x%" PRIx64 "\\n%d children", addr, nodetype, addr, an->num_children); if (p_len != 0) { fprintf(my_context.output, "\\nlen %d", p_len); fprintf(my_context.output, ": "); asciidump(an->partial, p_len); } fprintf(my_context.output, "\"];\n"); } static int dump_art_tree_graph(void *data, const unsigned char *key, uint32_t key_len, const unsigned char *val, uint32_t val_len) { cb_data *cbd; art_node4 *an4; art_node16 *an16; art_node48 *an48; art_node256 *an256; art_leaf *al; void *child; int idx; if (data == NULL) return 0; cbd = (cb_data *)data; if (IS_LEAF(cbd->node)) { al = LEAF_RAW(cbd->node); fprintf(my_context.output, "N%" PRIxPTR " [shape=box, " "label=\"leaf at\\n0x%" PRIxPTR "\"];\n", (uintptr_t)al, (uintptr_t)al); fprintf(my_context.output, "N%" PRIxPTR " [shape=box, " "label=\"key at 0x%" PRIxPTR ": %s\"];\n", (uintptr_t)al->key, (uintptr_t)al->key, asciidump(al->key, al->key_len)); fprintf(my_context.output, "N%" PRIxPTR " [shape=box, " "label=\"value at 0x%" PRIxPTR ": %s\"];\n", (uintptr_t)al->value, (uintptr_t)al->value, asciidump(al->value, al->val_len)); fprintf(my_context.output, "N%" PRIxPTR " -> N%" PRIxPTR ";\n", (uintptr_t)al, (uintptr_t)al->key); fprintf(my_context.output, "N%" PRIxPTR " -> N%" PRIxPTR ";\n", (uintptr_t)al, (uintptr_t)al->value); return 0; } switch (cbd->node_type) { case NODE4: an4 = (art_node4 *)cbd->node; child = (void *)(an4->children[cbd->child_idx]); child = LEAF_RAW(child); if (child != NULL) { if (cbd->first_child) print_node_info("node4", (uint64_t)(cbd->node), &(an4->n)); fprintf(my_context.output, "N%" PRIxPTR " -> N%" PRIxPTR " [label=\"%s\"];\n", (uintptr_t)an4, (uintptr_t)child, asciidump(&(an4->keys[cbd->child_idx]), 1)); } break; case NODE16: an16 = (art_node16 *)cbd->node; child = (void *)(an16->children[cbd->child_idx]); child = LEAF_RAW(child); if (child != NULL) { if (cbd->first_child) print_node_info("node16", (uint64_t)(cbd->node), &(an16->n)); fprintf(my_context.output, "N%" PRIxPTR " -> N%" PRIxPTR " [label=\"%s\"];\n", (uintptr_t)an16, (uintptr_t)child, asciidump(&(an16->keys[cbd->child_idx]), 1)); } break; case NODE48: an48 = (art_node48 *)cbd->node; idx = an48->keys[cbd->child_idx]; child = (void *) (an48->children[idx - 1]); child = LEAF_RAW(child); if (child != NULL) { if (cbd->first_child) print_node_info("node48", (uint64_t)(cbd->node), &(an48->n)); fprintf(my_context.output, "N%" PRIxPTR " -> N%" PRIxPTR " [label=\"%s\"];\n", (uintptr_t)an48, (uintptr_t)child, asciidump(&(hexvals[cbd->child_idx]), 1)); } break; case NODE256: an256 = (art_node256 *)cbd->node; child = (void *)(an256->children[cbd->child_idx]); child = LEAF_RAW(child); if (child != NULL) { if (cbd->first_child) print_node_info("node256", (uint64_t)(cbd->node), &(an256->n)); fprintf(my_context.output, "N%" PRIxPTR " -> N%" PRIxPTR " [label=\"%s\"];\n", (uintptr_t)an256, (uintptr_t)child, asciidump(&(hexvals[cbd->child_idx]), 1)); } break; default: break; } return 0; } /* * outv_err -- print error message */ void outv_err(const char *fmt, ...) 
{ va_list ap; va_start(ap, fmt); outv_err_vargs(fmt, ap); va_end(ap); } /* * outv_err_vargs -- print error message */ void outv_err_vargs(const char *fmt, va_list ap) { fprintf(stderr, "error: "); vfprintf(stderr, fmt, ap); if (!strchr(fmt, '\n')) fprintf(stderr, "\n"); } vmem-1.8/src/examples/libvmem/libart/arttree.h000066400000000000000000000042771361505074100214760ustar00rootroot00000000000000/* * Copyright 2016, FUJITSU TECHNOLOGY SOLUTIONS GMBH * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ========================================================================== * * Filename: arttree.h * * Description: implement ART tree using libvmem based on libart * * Author: Andreas Bluemle, Dieter Kasper * Andreas.Bluemle.external@ts.fujitsu.com * dieter.kasper@ts.fujitsu.com * * Organization: FUJITSU TECHNOLOGY SOLUTIONS GMBH * ========================================================================== */ #ifndef _ARTTREE_H #define _ARTTREE_H #ifdef __cplusplus extern "C" { #endif #include "art.h" #ifdef __cplusplus } #endif #endif /* _ARTTREE_H */ vmem-1.8/src/examples/libvmem/manpage.c000066400000000000000000000041561361505074100201520ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * manpage.c -- simple example for the libvmem man page */ #include #include #include #include int main(int argc, char *argv[]) { VMEM *vmp; char *ptr; /* create minimum size pool of memory */ if ((vmp = vmem_create("/pmem-fs", VMEM_MIN_POOL)) == NULL) { perror("vmem_create"); exit(1); } if ((ptr = vmem_malloc(vmp, 100)) == NULL) { perror("vmem_malloc"); exit(1); } strcpy(ptr, "hello, world"); /* give the memory back */ vmem_free(vmp, ptr); /* ... */ vmem_delete(vmp); } vmem-1.8/src/examples/libvmem/manpage.vcxproj000066400000000000000000000060251361505074100214200ustar00rootroot00000000000000 Debug x64 Release x64 {C84633F5-05B1-4AC1-A074-104D1DB2A91E} Win32Proj vmem 10.0.16299.0 Application true v140 Application false v140 false ..\..\LongPath.manifest libvmem.lib {08762559-e9df-475b-ba99-49f4b5a1d80b} {08762559-e9df-475b-ba99-49f4b5a1d80b} vmem-1.8/src/examples/libvmem/manpage.vcxproj.filters000066400000000000000000000007551361505074100230730ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx Source Files vmem-1.8/src/freebsd/000077500000000000000000000000001361505074100145315ustar00rootroot00000000000000vmem-1.8/src/freebsd/README000066400000000000000000000011571361505074100154150ustar00rootroot00000000000000Persistent Memory Development Kit This is src/freebsd/README. This directory contains FreeBSD-specific files for the Persistent Memory Development Kit. The subdirectory "include" contains header files that have no equivalents on FreeBSD. Most of these files are empty, which is a cheap trick to avoid preprocessor errors when including non-existing files. Others are redirects for files that are in different locations on FreeBSD. This way we don't need a lot of preprocessor conditionals in all the source code files, although it does require conditionals in the Makefiles (which could be addressed by using autoconf). vmem-1.8/src/freebsd/include/000077500000000000000000000000001361505074100161545ustar00rootroot00000000000000vmem-1.8/src/freebsd/include/endian.h000066400000000000000000000032201361505074100175600ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * endian.h -- redirect for FreeBSD */ #include vmem-1.8/src/freebsd/include/features.h000066400000000000000000000031511361505074100201430ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * features.h -- Empty file redirect */ vmem-1.8/src/freebsd/include/linux/000077500000000000000000000000001361505074100173135ustar00rootroot00000000000000vmem-1.8/src/freebsd/include/linux/kdev_t.h000066400000000000000000000031551361505074100207440ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * linux/kdev_t.h -- Empty file redirect */ vmem-1.8/src/freebsd/include/linux/limits.h000066400000000000000000000031551361505074100207710ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * linux/limits.h -- Empty file redirect */ vmem-1.8/src/freebsd/include/sys/000077500000000000000000000000001361505074100167725ustar00rootroot00000000000000vmem-1.8/src/freebsd/include/sys/sysmacros.h000066400000000000000000000031561361505074100211730ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * sys/sysmacros.h -- Empty file redirect */ vmem-1.8/src/include/000077500000000000000000000000001361505074100145425ustar00rootroot00000000000000vmem-1.8/src/include/README000066400000000000000000000011401361505074100154160ustar00rootroot00000000000000VMEM This is src/include/README. This directory contains include files that are meant to be installed on a system when vmem and vmmalloc libraries are installed. These include files provide the public information exported by the libraries that is necessary for applications to call into the libraries. Private include files, used only internally in the libraries, don't live here -- they typically live next to the source for their module. Here you'll find: libvmem.h -- definitions of libvmem entry points (see libvmem(3)) libvmmalloc.h -- definitions of libvmmalloc entry points (see libvmmalloc(3)) vmem-1.8/src/include/libvmem.h000066400000000000000000000117231361505074100163520ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
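/*
 * A minimal usage sketch for libvmmalloc, whose entry points are named in
 * the src/include/README above and declared in libvmmalloc.h further below.
 * The library interposes the standard allocator, so the program itself only
 * calls malloc(3)/free(3); redirecting those calls to a memory-mapped file
 * happens at run time by preloading the library. The invocation shown in
 * the comment, the /pmem-fs path, the 1 GiB pool size and the library
 * soname are illustrative assumptions; the environment variables are the
 * ones described in libvmmalloc(3).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	/*
	 * Run as, for example:
	 *   VMMALLOC_POOL_DIR=/pmem-fs VMMALLOC_POOL_SIZE=1073741824 \
	 *       LD_PRELOAD=libvmmalloc.so.1 ./a.out
	 * so that the malloc()/free() below are served from the pool file.
	 */
	char *buf = malloc(100);
	if (buf == NULL) {
		perror("malloc");
		return 1;
	}
	strcpy(buf, "hello from a memory-mapped heap");
	puts(buf);
	free(buf);
	return 0;
}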
*/ /* * libvmem.h -- definitions of libvmem entry points * * This library exposes memory-mapped files as volatile memory (a la malloc) * * See libvmem(3) for details. */ #ifndef LIBVMEM_H #define LIBVMEM_H 1 #include #include #ifdef _WIN32 #ifndef PMDK_UTF8_API #define vmem_create vmem_createW #define vmem_check_version vmem_check_versionW #define vmem_errormsg vmem_errormsgW #else #define vmem_create vmem_createU #define vmem_check_version vmem_check_versionU #define vmem_errormsg vmem_errormsgU #endif #endif #ifdef __cplusplus extern "C" { #endif typedef struct vmem VMEM; /* opaque type internal to libvmem */ /* * managing volatile memory pools... */ #define VMEM_MIN_POOL ((size_t)(1024 * 1024 * 14)) /* min pool size: 14MB */ #ifndef _WIN32 VMEM *vmem_create(const char *dir, size_t size); #else VMEM *vmem_createU(const char *dir, size_t size); VMEM *vmem_createW(const wchar_t *dir, size_t size); #endif VMEM *vmem_create_in_region(void *addr, size_t size); void vmem_delete(VMEM *vmp); int vmem_check(VMEM *vmp); void vmem_stats_print(VMEM *vmp, const char *opts); /* * support for malloc and friends... */ void *vmem_malloc(VMEM *vmp, size_t size); void vmem_free(VMEM *vmp, void *ptr); void *vmem_calloc(VMEM *vmp, size_t nmemb, size_t size); void *vmem_realloc(VMEM *vmp, void *ptr, size_t size); void *vmem_aligned_alloc(VMEM *vmp, size_t alignment, size_t size); char *vmem_strdup(VMEM *vmp, const char *s); wchar_t *vmem_wcsdup(VMEM *vmp, const wchar_t *s); size_t vmem_malloc_usable_size(VMEM *vmp, void *ptr); /* * managing overall library behavior... */ /* * VMEM_MAJOR_VERSION and VMEM_MINOR_VERSION provide the current * version of the libvmem API as provided by this header file. * Applications can verify that the version available at run-time * is compatible with the version used at compile-time by passing * these defines to vmem_check_version(). */ #define VMEM_MAJOR_VERSION 1 #define VMEM_MINOR_VERSION 1 #ifndef _WIN32 const char *vmem_check_version(unsigned major_required, unsigned minor_required); #else const char *vmem_check_versionU(unsigned major_required, unsigned minor_required); const wchar_t *vmem_check_versionW(unsigned major_required, unsigned minor_required); #endif /* * Passing NULL to vmem_set_funcs() tells libvmem to continue to use * the default for that function. The replacement functions must * not make calls back into libvmem. * * The print_func is called by libvmem based on the environment * variable VMEM_LOG_LEVEL: * 0 or unset: print_func is only called for vmem_stats_print() * 1: additional details are logged when errors are returned * 2: basic operations (allocations/frees) are logged * 3: produce very verbose tracing of function calls in libvmem * 4: also log obscure stuff used to debug the library itself * * The default print_func prints to stderr. Applications can override this * by setting the environment variable VMEM_LOG_FILE, or by supplying a * replacement print function. 
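/*
 * A short sketch tying together the entry points declared above: the
 * compile-time/run-time version check via VMEM_MAJOR_VERSION and
 * VMEM_MINOR_VERSION, a pool placed in caller-supplied memory with
 * vmem_create_in_region(), and vmem_set_funcs() used only to replace the
 * print function. The 4 KiB page-aligned region obtained from
 * posix_memalign() and the message strings are illustrative choices.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libvmem.h>

/* replacement print function handed to libvmem via vmem_set_funcs() */
static void
my_print(const char *s)
{
	fprintf(stderr, "libvmem: %s", s);
}

int
main(void)
{
	const char *msg = vmem_check_version(VMEM_MAJOR_VERSION,
			VMEM_MINOR_VERSION);
	if (msg != NULL) {	/* NULL means the versions are compatible */
		fprintf(stderr, "libvmem version mismatch: %s\n", msg);
		return 1;
	}

	/* keep the default allocation functions, replace only print_func */
	vmem_set_funcs(NULL, NULL, NULL, NULL, my_print);

	/* a page-aligned region (4 KiB assumed) to hold the pool */
	void *region;
	if (posix_memalign(&region, 4096, VMEM_MIN_POOL) != 0) {
		perror("posix_memalign");
		return 1;
	}

	VMEM *vmp = vmem_create_in_region(region, VMEM_MIN_POOL);
	if (vmp == NULL) {
		fprintf(stderr, "vmem_create_in_region: %s\n",
				vmem_errormsg());
		free(region);
		return 1;
	}

	char *s = vmem_strdup(vmp, "volatile memory from ordinary DRAM");
	if (s != NULL) {
		puts(s);
		vmem_free(vmp, s);
	}

	vmem_delete(vmp);
	free(region);
	return 0;
}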
*/ void vmem_set_funcs( void *(*malloc_func)(size_t size), void (*free_func)(void *ptr), void *(*realloc_func)(void *ptr, size_t size), char *(*strdup_func)(const char *s), void (*print_func)(const char *s)); #ifndef _WIN32 const char *vmem_errormsg(void); #else const char *vmem_errormsgU(void); const wchar_t *vmem_errormsgW(void); #endif #ifdef __cplusplus } #endif #endif /* libvmem.h */ vmem-1.8/src/include/libvmmalloc.h000066400000000000000000000102701361505074100172140ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * libvmmalloc.h -- definitions of libvmmalloc entry points * * This library exposes memory-mapped files as volatile memory (a la malloc) * * See libvmmalloc(3) for details. */ #ifndef LIBVMMALLOC_H #define LIBVMMALLOC_H 1 #include #ifdef __cplusplus extern "C" { #endif #define VMMALLOC_MAJOR_VERSION 1 #define VMMALLOC_MINOR_VERSION 1 #define VMMALLOC_MIN_POOL ((size_t)(1024 * 1024 * 14)) /* min pool size: 14MB */ /* * check compiler support for various function attributes */ #if defined(__GNUC__) && !defined(__clang__) && !defined(__INTEL_COMPILER) #define GCC_VER (__GNUC__ * 100 + __GNUC_MINOR__) #if GCC_VER >= 296 #define __ATTR_MALLOC__ __attribute__((malloc)) #else #define __ATTR_MALLOC__ #endif #if GCC_VER >= 303 #define __ATTR_NONNULL__(x) __attribute__((nonnull(x))) #else #define __ATTR_NONNULL__(x) #endif #if GCC_VER >= 403 #define __ATTR_ALLOC_SIZE__(...) __attribute__((alloc_size(__VA_ARGS__))) #else #define __ATTR_ALLOC_SIZE__(...) #endif #if GCC_VER >= 409 #define __ATTR_ALLOC_ALIGN__(x) __attribute__((alloc_align(x))) #else #define __ATTR_ALLOC_ALIGN__(x) #endif #else /* clang, icc and other compilers */ #ifndef __has_attribute #define __has_attribute(x) 0 #endif #if __has_attribute(malloc) #define __ATTR_MALLOC__ __attribute__((malloc)) #else #define __ATTR_MALLOC__ #endif #if __has_attribute(nonnull) #define __ATTR_NONNULL__(x) __attribute__((nonnull(x))) #else #define __ATTR_NONNULL__(x) #endif #if __has_attribute(alloc_size) #define __ATTR_ALLOC_SIZE__(...) 
__attribute__((alloc_size(__VA_ARGS__))) #else #define __ATTR_ALLOC_SIZE__(...) #endif #if __has_attribute(alloc_align) #define __ATTR_ALLOC_ALIGN__(x) __attribute__((alloc_align(x))) #else #define __ATTR_ALLOC_ALIGN__(x) #endif #endif /* __GNUC__ */ extern void *malloc(size_t size) __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1); extern void *calloc(size_t nmemb, size_t size) __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1, 2); extern void *realloc(void *ptr, size_t size) __ATTR_ALLOC_SIZE__(2); extern void free(void *ptr); extern void cfree(void *ptr); extern int posix_memalign(void **memptr, size_t alignment, size_t size) __ATTR_NONNULL__(1); extern void *memalign(size_t boundary, size_t size) __ATTR_MALLOC__ __ATTR_ALLOC_ALIGN__(1) __ATTR_ALLOC_SIZE__(2); extern void *aligned_alloc(size_t alignment, size_t size) __ATTR_MALLOC__ __ATTR_ALLOC_ALIGN__(1) __ATTR_ALLOC_SIZE__(2); extern void *valloc(size_t size) __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1); extern void *pvalloc(size_t size) __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1); extern size_t malloc_usable_size(void *ptr); #ifdef __cplusplus } #endif #endif /* libvmmalloc.h */ vmem-1.8/src/jemalloc/000077500000000000000000000000001361505074100147055ustar00rootroot00000000000000vmem-1.8/src/jemalloc/.gitignore000066400000000000000000000027431361505074100167030ustar00rootroot00000000000000/*.gcov.* /autom4te.cache/ /bin/jemalloc.sh /config.stamp /config.log /config.status /configure /doc/html.xsl /doc/manpages.xsl /doc/jemalloc.xml /doc/jemalloc.html /doc/jemalloc.3 /lib/ /debug/ /nondebug/ /include/jemalloc/internal/jemalloc_internal.h /include/jemalloc/internal/jemalloc_internal_defs.h /include/jemalloc/internal/private_namespace.h /include/jemalloc/internal/private_unnamespace.h /include/jemalloc/internal/public_namespace.h /include/jemalloc/internal/public_symbols.txt /include/jemalloc/internal/public_unnamespace.h /include/jemalloc/internal/size_classes.h /include/jemalloc/jemalloc.h /include/jemalloc/jemalloc_defs.h /include/jemalloc/jemalloc_macros.h /include/jemalloc/jemalloc_mangle.h /include/jemalloc/jemalloc_mangle_jet.h /include/jemalloc/jemalloc_protos.h /include/jemalloc/jemalloc_protos_jet.h /include/jemalloc/jemalloc_rename.h /include/jemalloc/jemalloc_typedefs.h /src/*.[od] /src/*.gcda /src/*.gcno /test/test.sh test/include/test/jemalloc_test.h test/include/test/jemalloc_test_defs.h /test/integration/[A-Za-z]* !/test/integration/[A-Za-z]*.* /test/integration/*.[od] /test/integration/*.gcda /test/integration/*.gcno /test/integration/*.out /test/src/*.[od] /test/src/*.gcda /test/src/*.gcno /test/stress/[A-Za-z]* !/test/stress/[A-Za-z]*.* /test/stress/*.[od] /test/stress/*.gcda /test/stress/*.gcno /test/stress/*.out /test/unit/[A-Za-z]* !/test/unit/[A-Za-z]*.* /test/unit/*.[od] /test/unit/*.gcda /test/unit/*.gcno /test/unit/*.out /VERSION vmem-1.8/src/jemalloc/COPYING000066400000000000000000000032461361505074100157450ustar00rootroot00000000000000Unless otherwise specified, files in the jemalloc source distribution are subject to the following license: -------------------------------------------------------------------------------- Copyright (C) 2002-2016 Jason Evans . All rights reserved. Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved. Copyright (C) 2009-2016 Facebook, Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. 
Redistributions of source code must retain the above copyright notice(s), this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice(s), this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. --------------------------------------------------------------------------------vmem-1.8/src/jemalloc/ChangeLog000066400000000000000000000574031361505074100164700ustar00rootroot00000000000000Following are change highlights associated with official releases. Important bug fixes are all mentioned, but internal enhancements are omitted here for brevity (even though they are more fun to write about). Much more detail can be found in the git revision history: https://github.com/jemalloc/jemalloc * 3.6.0 (March 31, 2014) This version contains a critical bug fix for a regression present in 3.5.0 and 3.5.1. Bug fixes: - Fix a regression in arena_chunk_alloc() that caused crashes during small/large allocation if chunk allocation failed. In the absence of this bug, chunk allocation failure would result in allocation failure, e.g. NULL return from malloc(). This regression was introduced in 3.5.0. - Fix backtracing for gcc intrinsics-based backtracing by specifying -fno-omit-frame-pointer to gcc. Note that the application (and all the libraries it links to) must also be compiled with this option for backtracing to be reliable. - Use dss allocation precedence for huge allocations as well as small/large allocations. - Fix test assertion failure message formatting. This bug did not manifect on x86_64 systems because of implementation subtleties in va_list. - Fix inconsequential test failures for hash and SFMT code. New features: - Support heap profiling on FreeBSD. This feature depends on the proc filesystem being mounted during heap profile dumping. * 3.5.1 (February 25, 2014) This version primarily addresses minor bugs in test code. Bug fixes: - Configure Solaris/Illumos to use MADV_FREE. - Fix junk filling for mremap(2)-based huge reallocation. This is only relevant if configuring with the --enable-mremap option specified. - Avoid compilation failure if 'restrict' C99 keyword is not supported by the compiler. - Add a configure test for SSE2 rather than assuming it is usable on i686 systems. This fixes test compilation errors, especially on 32-bit Linux systems. - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit test. - Fix/remove flawed alignment-related overflow tests. - Prevent compiler optimizations that could change backtraces in the prof_accum unit test. 
* 3.5.0 (January 22, 2014) This version focuses on refactoring and automated testing, though it also includes some non-trivial heap profiling optimizations not mentioned below. New features: - Add the *allocx() API, which is a successor to the experimental *allocm() API. The *allocx() functions are slightly simpler to use because they have fewer parameters, they directly return the results of primary interest, and mallocx()/rallocx() avoid the strict aliasing pitfall that allocm()/rallocm() share with posix_memalign(). Note that *allocm() is slated for removal in the next non-bugfix release. - Add support for LinuxThreads. Bug fixes: - Unless heap profiling is enabled, disable floating point code and don't link with libm. This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64 systems, makes it possible to completely disable floating point register use. Some versions of glibc neglect to save/restore caller-saved floating point registers during dynamic lazy symbol loading, and the symbol loading code uses whatever malloc the application happens to have linked/loaded with, the result being potential floating point register corruption. - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling backtrace creation in imemalign(). This bug impacted posix_memalign() and aligned_alloc(). - Fix a file descriptor leak in a prof_dump_maps() error path. - Fix prof_dump() to close the dump file descriptor for all relevant error paths. - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for allocation, not just deallocation. - Fix a data race for large allocation stats counters. - Fix a potential infinite loop during thread exit. This bug occurred on Solaris, and could affect other platforms with similar pthreads TSD implementations. - Don't junk-fill reallocations unless usable size changes. This fixes a violation of the *allocx()/*allocm() semantics. - Fix growing large reallocation to junk fill new space. - Fix huge deallocation to junk fill when munmap is disabled. - Change the default private namespace prefix from empty to je_, and change --with-private-namespace-prefix so that it prepends an additional prefix rather than replacing je_. This reduces the likelihood of applications which statically link jemalloc experiencing symbol name collisions. - Add missing private namespace mangling (relevant when --with-private-namespace is specified). - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as static even for debug builds. - Add a missing mutex unlock in a malloc_init_hard() error path. In practice this error path is never executed. - Fix numerous bugs in malloc_strotumax() error handling/reporting. These bugs had no impact except for malformed inputs. - Fix numerous bugs in malloc_snprintf(). These bugs were not exercised by existing calls, so they had no impact. * 3.4.1 (October 20, 2013) Bug fixes: - Fix a race in the "arenas.extend" mallctl that could cause memory corruption of internal data structures and subsequent crashes. - Fix Valgrind integration flaws that caused Valgrind warnings about reads of uninitialized memory in: + arena chunk headers + internal zero-initialized data structures (relevant to tcache and prof code) - Preserve errno during the first allocation. A readlink(2) call during initialization fails unless /etc/malloc.conf exists, so errno was typically set during the first allocation prior to this fix. - Fix compilation warnings reported by gcc 4.8.1. 
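The 3.5.0 entry above introduces the *allocx() API; the short sketch below illustrates the calling convention it describes (flags expressed with MALLOCX_ALIGN() and MALLOCX_ZERO, results of primary interest returned directly). It assumes a standalone jemalloc 3.x build with the unmangled public names and the <jemalloc/jemalloc.h> header; the copy bundled in this tree places its symbols in a private namespace.

#include <stdlib.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
	/* one call: 4 KiB, 64-byte aligned, zero-filled */
	void *p = mallocx(4096, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
	if (p == NULL)
		return 1;

	/*
	 * rallocx() returns the new pointer directly; on failure the old
	 * allocation is left intact.
	 */
	void *q = rallocx(p, 8192, MALLOCX_ALIGN(64));
	if (q == NULL) {
		dallocx(p, 0);
		return 1;
	}
	p = q;

	size_t usable = sallocx(p, 0);	/* usable size of the allocation */
	(void)usable;

	dallocx(p, 0);
	return 0;
}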
* 3.4.0 (June 2, 2013) This version is essentially a small bugfix release, but the addition of aarch64 support requires that the minor version be incremented. Bug fixes: - Fix race-triggered deadlocks in chunk_record(). These deadlocks were typically triggered by multiple threads concurrently deallocating huge objects. New features: - Add support for the aarch64 architecture. * 3.3.1 (March 6, 2013) This version fixes bugs that are typically encountered only when utilizing custom run-time options. Bug fixes: - Fix a locking order bug that could cause deadlock during fork if heap profiling were enabled. - Fix a chunk recycling bug that could cause the allocator to lose track of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause corruption if allocating via sbrk(2) (unlikely unless running with the "dss:primary" option specified). This was completely harmless on Linux unless using mlockall(2) (and unlikely even then, unless the --disable-munmap configure option or the "dss:primary" option was specified). This regression was introduced in 3.1.0 by the mlockall(2)/madvise(2) interaction fix. - Fix TLS-related memory corruption that could occur during thread exit if the thread never allocated memory. Only the quarantine and prof facilities were susceptible. - Fix two quarantine bugs: + Internal reallocation of the quarantined object array leaked the old array. + Reallocation failure for internal reallocation of the quarantined object array (very unlikely) resulted in memory corruption. - Fix Valgrind integration to annotate all internally allocated memory in a way that keeps Valgrind happy about internal data structure access. - Fix building for s390 systems. * 3.3.0 (January 23, 2013) This version includes a few minor performance improvements in addition to the listed new features and bug fixes. New features: - Add clipping support to lg_chunk option processing. - Add the --enable-ivsalloc option. - Add the --without-export option. - Add the --disable-zone-allocator option. Bug fixes: - Fix "arenas.extend" mallctl to output the number of arenas. - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory is undefined. - Fix build break on FreeBSD related to alloca.h. * 3.2.0 (November 9, 2012) In addition to a couple of bug fixes, this version modifies page run allocation and dirty page purging algorithms in order to better control page-level virtual memory fragmentation. Incompatible changes: - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1). Bug fixes: - Fix dss/mmap allocation precedence code to use recyclable mmap memory only after primary dss allocation fails. - Fix deadlock in the "arenas.purge" mallctl. This regression was introduced in 3.1.0 by the addition of the "arena..purge" mallctl. * 3.1.0 (October 16, 2012) New features: - Auto-detect whether running inside Valgrind, thus removing the need to manually specify MALLOC_CONF=valgrind:true. - Add the "arenas.extend" mallctl, which allows applications to create manually managed arenas. - Add the ALLOCM_ARENA() flag for {,r,d}allocm(). - Add the "opt.dss", "arena..dss", and "stats.arenas..dss" mallctls, which provide control over dss/mmap precedence. - Add the "arena..purge" mallctl, which obsoletes "arenas.purge". - Define LG_QUANTUM for hppa. Incompatible changes: - Disable tcache by default if running inside Valgrind, in order to avoid making unallocated objects appear reachable to Valgrind. - Drop const from malloc_usable_size() argument on Linux. 
Bug fixes: - Fix heap profiling crash if sampled object is freed via realloc(p, 0). - Remove const from __*_hook variable declarations, so that glibc can modify them during process forking. - Fix mlockall(2)/madvise(2) interaction. - Fix fork(2)-related deadlocks. - Fix error return value for "thread.tcache.enabled" mallctl. * 3.0.0 (May 11, 2012) Although this version adds some major new features, the primary focus is on internal code cleanup that facilitates maintainability and portability, most of which is not reflected in the ChangeLog. This is the first release to incorporate substantial contributions from numerous other developers, and the result is a more broadly useful allocator (see the git revision history for contribution details). Note that the license has been unified, thanks to Facebook granting a license under the same terms as the other copyright holders (see COPYING). New features: - Implement Valgrind support, redzones, and quarantine. - Add support for additional platforms: + FreeBSD + Mac OS X Lion + MinGW + Windows (no support yet for replacing the system malloc) - Add support for additional architectures: + MIPS + SH4 + Tilera - Add support for cross compiling. - Add nallocm(), which rounds a request size up to the nearest size class without actually allocating. - Implement aligned_alloc() (blame C11). - Add the "thread.tcache.enabled" mallctl. - Add the "opt.prof_final" mallctl. - Update pprof (from gperftools 2.0). - Add the --with-mangling option. - Add the --disable-experimental option. - Add the --disable-munmap option, and make it the default on Linux. - Add the --enable-mremap option, which disables use of mremap(2) by default. Incompatible changes: - Enable stats by default. - Enable fill by default. - Disable lazy locking by default. - Rename the "tcache.flush" mallctl to "thread.tcache.flush". - Rename the "arenas.pagesize" mallctl to "arenas.page". - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB). - Change the "opt.prof_accum" default from true to false. Removed features: - Remove the swap feature, including the "config.swap", "swap.avail", "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls. - Remove highruns statistics, including the "stats.arenas..bins..highruns" and "stats.arenas..lruns..highruns" mallctls. - As part of small size class refactoring, remove the "opt.lg_[qc]space_max", "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and "arenas.[tqcs]bins" mallctls. - Remove the "arenas.chunksize" mallctl. - Remove the "opt.lg_prof_tcmax" option. - Remove the "opt.lg_prof_bt_max" option. - Remove the "opt.lg_tcache_gc_sweep" option. - Remove the --disable-tiny option, including the "config.tiny" mallctl. - Remove the --enable-dynamic-page-shift configure option. - Remove the --enable-sysv configure option. Bug fixes: - Fix a statistics-related bug in the "thread.arena" mallctl that could cause invalid statistics and crashes. - Work around TLS deallocation via free() on Linux. This bug could cause write-after-free memory corruption. - Fix a potential deadlock that could occur during interval- and growth-triggered heap profile dumps. - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags. - Fix chunk_alloc_dss() to stop claiming memory is zeroed. This bug could cause memory corruption and crashes with --enable-dss specified. - Fix fork-related bugs that could cause deadlock in children between fork and exec. - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter. 
- Fix realloc(p, 0) to act like free(p). - Do not enforce minimum alignment in memalign(). - Check for NULL pointer in malloc_usable_size(). - Fix an off-by-one heap profile statistics bug that could be observed in interval- and growth-triggered heap profiles. - Fix the "epoch" mallctl to update cached stats even if the passed in epoch is 0. - Fix bin->runcur management to fix a layout policy bug. This bug did not affect correctness. - Fix a bug in choose_arena_hard() that potentially caused more arenas to be initialized than necessary. - Add missing "opt.lg_tcache_max" mallctl implementation. - Use glibc allocator hooks to make mixed allocator usage less likely. - Fix build issues for --disable-tcache. - Don't mangle pthread_create() when --with-private-namespace is specified. * 2.2.5 (November 14, 2011) Bug fixes: - Fix huge_ralloc() race when using mremap(2). This is a serious bug that could cause memory corruption and/or crashes. - Fix huge_ralloc() to maintain chunk statistics. - Fix malloc_stats_print(..., "a") output. * 2.2.4 (November 5, 2011) Bug fixes: - Initialize arenas_tsd before using it. This bug existed for 2.2.[0-3], as well as for --disable-tls builds in earlier releases. - Do not assume a 4 KiB page size in test/rallocm.c. * 2.2.3 (August 31, 2011) This version fixes numerous bugs related to heap profiling. Bug fixes: - Fix a prof-related race condition. This bug could cause memory corruption, but only occurred in non-default configurations (prof_accum:false). - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is excluded from backtraces). - Fix a prof-related bug in realloc() (only triggered by OOM errors). - Fix prof-related bugs in allocm() and rallocm(). - Fix prof_tdata_cleanup() for --disable-tls builds. - Fix a relative include path, to fix objdir builds. * 2.2.2 (July 30, 2011) Bug fixes: - Fix a build error for --disable-tcache. - Fix assertions in arena_purge() (for real this time). - Add the --with-private-namespace option. This is a workaround for symbol conflicts that can inadvertently arise when using static libraries. * 2.2.1 (March 30, 2011) Bug fixes: - Implement atomic operations for x86/x64. This fixes compilation failures for versions of gcc that are still in wide use. - Fix an assertion in arena_purge(). * 2.2.0 (March 22, 2011) This version incorporates several improvements to algorithms and data structures that tend to reduce fragmentation and increase speed. New features: - Add the "stats.cactive" mallctl. - Update pprof (from google-perftools 1.7). - Improve backtracing-related configuration logic, and add the --disable-prof-libgcc option. Bug fixes: - Change default symbol visibility from "internal", to "hidden", which decreases the overhead of library-internal function calls. - Fix symbol visibility so that it is also set on OS X. - Fix a build dependency regression caused by the introduction of the .pic.o suffix for PIC object files. - Add missing checks for mutex initialization failures. - Don't use libgcc-based backtracing except on x64, where it is known to work. - Fix deadlocks on OS X that were due to memory allocation in pthread_mutex_lock(). - Heap profiling-specific fixes: + Fix memory corruption due to integer overflow in small region index computation, when using a small enough sample interval that profiling context pointers are stored in small run headers. + Fix a bootstrap ordering bug that only occurred with TLS disabled. + Fix a rallocm() rsize bug. + Fix error detection bugs for aligned memory allocation. 
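The "epoch" and "stats.cactive" mallctls mentioned in the entries above are easiest to see together in code. The following minimal sketch is hypothetical and assumes a stock jemalloc build with statistics enabled; the mallctl() signature and the pointer type read out by "stats.cactive" are taken from the jemalloc manual, and everything else (variable names, omitted error handling) is illustrative only.

    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);

        /* Writing "epoch" refreshes the cached statistics before they are read. */
        mallctl("epoch", &epoch, &sz, &epoch, sz);

        /* "stats.cactive" yields a pointer to the live active-bytes counter. */
        size_t *cactive;
        sz = sizeof(cactive);
        if (mallctl("stats.cactive", &cactive, &sz, NULL, 0) == 0)
            printf("active bytes: %zu\n", *cactive);
        return 0;
    }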
* 2.1.3 (March 14, 2011) Bug fixes: - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix for OS X in 2.1.2). - Fix a "thread.arena" mallctl bug. - Fix a thread cache stats merging bug. * 2.1.2 (March 2, 2011) Bug fixes: - Fix "thread.{de,}allocatedp" mallctl for OS X. - Add missing jemalloc.a to build system. * 2.1.1 (January 31, 2011) Bug fixes: - Fix aligned huge reallocation (affected allocm()). - Fix the ALLOCM_LG_ALIGN macro definition. - Fix a heap dumping deadlock. - Fix a "thread.arena" mallctl bug. * 2.1.0 (December 3, 2010) This version incorporates some optimizations that can't quite be considered bug fixes. New features: - Use Linux's mremap(2) for huge object reallocation when possible. - Avoid locking in mallctl*() when possible. - Add the "thread.[de]allocatedp" mallctl's. - Convert the manual page source from roff to DocBook, and generate both roff and HTML manuals. Bug fixes: - Fix a crash due to incorrect bootstrap ordering. This only impacted --enable-debug --enable-dss configurations. - Fix a minor statistics bug for mallctl("swap.avail", ...). * 2.0.1 (October 29, 2010) Bug fixes: - Fix a race condition in heap profiling that could cause undefined behavior if "opt.prof_accum" were disabled. - Add missing mutex unlocks for some OOM error paths in the heap profiling code. - Fix a compilation error for non-C99 builds. * 2.0.0 (October 24, 2010) This version focuses on the experimental *allocm() API, and on improved run-time configuration/introspection. Nonetheless, numerous performance improvements are also included. New features: - Implement the experimental {,r,s,d}allocm() API, which provides a superset of the functionality available via malloc(), calloc(), posix_memalign(), realloc(), malloc_usable_size(), and free(). These functions can be used to allocate/reallocate aligned zeroed memory, ask for optional extra memory during reallocation, prevent object movement during reallocation, etc. - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is more human-readable, and more flexible. For example: JEMALLOC_OPTIONS=AJP is now: MALLOC_CONF=abort:true,fill:true,stats_print:true - Port to Apple OS X. Sponsored by Mozilla. - Make it possible for the application to control thread-->arena mappings via the "thread.arena" mallctl. - Add compile-time support for all TLS-related functionality via pthreads TSD. This is mainly of interest for OS X, which does not support TLS, but has a TSD implementation with similar performance. - Override memalign() and valloc() if they are provided by the system. - Add the "arenas.purge" mallctl, which can be used to synchronously purge all dirty unused pages. - Make cumulative heap profiling data optional, so that it is possible to limit the amount of memory consumed by heap profiling data structures. - Add per thread allocation counters that can be accessed via the "thread.allocated" and "thread.deallocated" mallctls. Incompatible changes: - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above). - Increase default backtrace depth from 4 to 128 for heap profiling. - Disable interval-based profile dumps by default. Bug fixes: - Remove bad assertions in fork handler functions. These assertions could cause aborts for some combinations of configure settings. - Fix strerror_r() usage to deal with non-standard semantics in GNU libc. - Fix leak context reporting. This bug tended to cause the number of contexts to be underreported (though the reported number of objects and bytes were correct). 
- Fix a realloc() bug for large in-place growing reallocation. This bug could cause memory corruption, but it was hard to trigger. - Fix an allocation bug for small allocations that could be triggered if multiple threads raced to create a new run of backing pages. - Enhance the heap profiler to trigger samples based on usable size, rather than request size. - Fix a heap profiling bug due to sometimes losing track of requested object size for sampled objects. * 1.0.3 (August 12, 2010) Bug fixes: - Fix the libunwind-based implementation of stack backtracing (used for heap profiling). This bug could cause zero-length backtraces to be reported. - Add a missing mutex unlock in library initialization code. If multiple threads raced to initialize malloc, some of them could end up permanently blocked. * 1.0.2 (May 11, 2010) Bug fixes: - Fix junk filling of large objects, which could cause memory corruption. - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual memory limits could cause swap file configuration to fail. Contributed by Jordan DeLong. * 1.0.1 (April 14, 2010) Bug fixes: - Fix compilation when --enable-fill is specified. - Fix threads-related profiling bugs that affected accuracy and caused memory to be leaked during thread exit. - Fix dirty page purging race conditions that could cause crashes. - Fix crash in tcache flushing code during thread destruction. * 1.0.0 (April 11, 2010) This release focuses on speed and run-time introspection. Numerous algorithmic improvements make this release substantially faster than its predecessors. New features: - Implement autoconf-based configuration system. - Add mallctl*(), for the purposes of introspection and run-time configuration. - Make it possible for the application to manually flush a thread's cache, via the "tcache.flush" mallctl. - Base maximum dirty page count on proportion of active memory. - Compute various additional run-time statistics, including per size class statistics for large objects. - Expose malloc_stats_print(), which can be called repeatedly by the application. - Simplify the malloc_message() signature to only take one string argument, and incorporate an opaque data pointer argument for use by the application in combination with malloc_stats_print(). - Add support for allocation backed by one or more swap files, and allow the application to disable over-commit if swap files are in use. - Implement allocation profiling and leak checking. Removed features: - Remove the dynamic arena rebalancing code, since thread-specific caching reduces its utility. Bug fixes: - Modify chunk allocation to work when address space layout randomization (ASLR) is in use. - Fix thread cleanup bugs related to TLS destruction. - Handle 0-size allocation requests in posix_memalign(). - Fix a chunk leak. The leaked chunks were never touched, so this impacted virtual memory usage, but not physical memory usage. * linux_2008082[78]a (August 27/28, 2008) These snapshot releases are the simple result of incorporating Linux-specific support into the FreeBSD malloc sources. 
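The introspection interfaces introduced in the 1.0.0 and 2.0.0 entries (malloc_stats_print() with its single write callback and opaque data pointer, plus MALLOC_CONF for run-time options) combine as in the following minimal sketch. It is hypothetical and assumes a stock jemalloc build; write_cb is an illustrative name, and the meaning of the "bl" opts string (omit per-size-class bin and large-object detail) comes from the jemalloc manual rather than from this ChangeLog.

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Route the statistics text to an arbitrary FILE * via the opaque pointer. */
    static void
    write_cb(void *cbopaque, const char *s)
    {
        fputs(s, (FILE *)cbopaque);
    }

    int
    main(void)
    {
        malloc_stats_print(write_cb, stderr, "bl");
        return 0;
    }

The same build can also be tuned without recompiling, e.g. by running the program with MALLOC_CONF=abort:true,stats_print:true in the environment, as shown in the 2.0.0 entry.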
--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80
vmem-1.8/src/jemalloc/INSTALL000066400000000000000000000260171361505074100157440ustar00rootroot00000000000000How to build jemalloc for Linux
================================================================================
Building and installing jemalloc can be as simple as typing the following while
in the root directory of the source tree:

    ./configure
    make
    make install

=== Advanced configuration =====================================================

The 'configure' script supports numerous options that allow control of which
functionality is enabled, where jemalloc is installed, etc. Optionally, pass any
of the following arguments (not a definitive list) to 'configure':

--help
    Print a definitive list of options.

--prefix=<install-root-dir>
    Set the base directory in which to install. For example:
        ./configure --prefix=/usr/local
    will cause files to be installed into /usr/local/include, /usr/local/lib,
    and /usr/local/man.

--with-rpath=<colon-separated-rpath>
    Embed one or more library paths, so that libjemalloc can find the libraries
    it is linked to. This works only on ELF-based systems.

--with-mangling=<map>
    Mangle public symbols specified in <map> which is a comma-separated list of
    name:mangled pairs. For example, to use ld's --wrap option as an alternative
    method for overriding libc's malloc implementation, specify something like:
        --with-mangling=malloc:__wrap_malloc,free:__wrap_free[...]
    Note that mangling happens prior to application of the prefix specified by
    --with-jemalloc-prefix, and mangled symbols are then ignored when applying
    the prefix.

--with-jemalloc-prefix=<prefix>
    Prefix all public APIs with <prefix>. For example, if <prefix> is
    "prefix_", API changes like the following occur:
        malloc() --> prefix_malloc()
        malloc_conf --> prefix_malloc_conf
        /etc/malloc.conf --> /etc/prefix_malloc.conf
        MALLOC_CONF --> PREFIX_MALLOC_CONF
    This makes it possible to use jemalloc at the same time as the system
    allocator, or even to use multiple copies of jemalloc simultaneously. By
    default, the prefix is "", except on OS X, where it is "je_". On OS X,
    jemalloc overlays the default malloc zone, but makes no attempt to actually
    replace the "malloc", "calloc", etc. symbols.

--without-export
    Don't export public APIs. This can be useful when building jemalloc as a
    static library, or to avoid exporting public APIs when using the zone
    allocator on OSX.

--with-private-namespace=<prefix>
    Prefix all library-private APIs with je_<prefix>. For shared libraries,
    symbol visibility mechanisms prevent these symbols from being exported, but
    for static libraries, naming collisions are a real possibility. By default,
    <prefix> is empty, which results in a symbol prefix of je_ .

--with-install-suffix=<suffix>
    Append <suffix> to the base name of all installed files, such that multiple
    versions of jemalloc can coexist in the same installation directory. For
    example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.

--disable-cc-silence
    Disable code that silences non-useful compiler warnings. This is mainly
    useful during development when auditing the set of warnings that are being
    silenced.

--enable-debug
    Enable assertions and validation code. This incurs a substantial
    performance hit, but is very useful during application development. Implies
    --enable-ivsalloc.

--enable-code-coverage
    Enable code coverage support, for use during jemalloc test development.
Additional testing targets are available if this option is enabled: coverage coverage_unit coverage_integration coverage_stress These targets do not clear code coverage results from previous runs, and there are interactions between the various coverage targets, so it is usually advisable to run 'make clean' between repeated code coverage runs. --enable-ivsalloc Enable validation code, which verifies that pointers reside within jemalloc-owned chunks before dereferencing them. This incurs a substantial performance hit. --disable-stats Disable statistics gathering functionality. See the "opt.stats_print" option documentation for usage details. --enable-prof Enable heap profiling and leak detection functionality. See the "opt.prof" option documentation for usage details. When enabled, there are several approaches to backtracing, and the configure script chooses the first one in the following list that appears to function correctly: + libunwind (requires --enable-prof-libunwind) + libgcc (unless --disable-prof-libgcc) + gcc intrinsics (unless --disable-prof-gcc) --enable-prof-libunwind Use the libunwind library (http://www.nongnu.org/libunwind/) for stack backtracing. --disable-prof-libgcc Disable the use of libgcc's backtracing functionality. --disable-prof-gcc Disable the use of gcc intrinsics for backtracing. --with-static-libunwind= Statically link against the specified libunwind.a rather than dynamically linking with -lunwind. --disable-tcache Disable thread-specific caches for small objects. Objects are cached and released in bulk, thus reducing the total number of mutex operations. See the "opt.tcache" option for usage details. --disable-munmap Disable virtual memory deallocation via munmap(2); instead keep track of the virtual memory for later use. munmap() is disabled by default (i.e. --disable-munmap is implied) on Linux, which has a quirk in its virtual memory allocation algorithm that causes semi-permanent VM map holes under normal jemalloc operation. --disable-fill Disable support for junk/zero filling of memory, quarantine, and redzones. See the "opt.junk", "opt.zero", "opt.quarantine", and "opt.redzone" option documentation for usage details. --disable-valgrind Disable support for Valgrind. --disable-zone-allocator Disable zone allocator for Darwin. This means jemalloc won't be hooked as the default allocator on OSX/iOS. --enable-utrace Enable utrace(2)-based allocation tracing. This feature is not broadly portable (FreeBSD has it, but Linux and OS X do not). --enable-xmalloc Enable support for optional immediate termination due to out-of-memory errors, as is commonly implemented by "xmalloc" wrapper function for malloc. See the "opt.xmalloc" option documentation for usage details. --enable-lazy-lock Enable code that wraps pthread_create() to detect when an application switches from single-threaded to multi-threaded mode, so that it can avoid mutex locking/unlocking operations while in single-threaded mode. In practice, this feature usually has little impact on performance unless thread-specific caching is disabled. --disable-tls Disable thread-local storage (TLS), which allows for fast access to thread-local variables via the __thread keyword. If TLS is available, jemalloc uses it for several purposes. --with-xslroot= Specify where to find DocBook XSL stylesheets when building the documentation. The following environment variables (not a definitive list) impact configure's behavior: CFLAGS="?" Pass these flags to the compiler. 
You probably shouldn't define this unless you know what you are doing. (Use EXTRA_CFLAGS instead.) EXTRA_CFLAGS="?" Append these flags to CFLAGS. This makes it possible to add flags such as -Werror, while allowing the configure script to determine what other flags are appropriate for the specified configuration. The configure script specifically checks whether an optimization flag (-O*) is specified in EXTRA_CFLAGS, and refrains from specifying an optimization level if it finds that one has already been specified. CPPFLAGS="?" Pass these flags to the C preprocessor. Note that CFLAGS is not passed to 'cpp' when 'configure' is looking for include files, so you must use CPPFLAGS instead if you need to help 'configure' find header files. LD_LIBRARY_PATH="?" 'ld' uses this colon-separated list to find libraries. LDFLAGS="?" Pass these flags when linking. PATH="?" 'configure' uses this to find programs. === Advanced compilation ======================================================= To build only parts of jemalloc, use the following targets: build_lib_shared build_lib_static build_lib build_doc_html build_doc_man build_doc To install only parts of jemalloc, use the following targets: install_bin install_include install_lib_shared install_lib_static install_lib install_doc_html install_doc_man install_doc To clean up build results to varying degrees, use the following make targets: clean distclean relclean === Advanced installation ====================================================== Optionally, define make variables when invoking make, including (not exclusively): INCLUDEDIR="?" Use this as the installation prefix for header files. LIBDIR="?" Use this as the installation prefix for libraries. MANDIR="?" Use this as the installation prefix for man pages. DESTDIR="?" Prepend DESTDIR to INCLUDEDIR, LIBDIR, DATADIR, and MANDIR. This is useful when installing to a different path than was specified via --prefix. CC="?" Use this to invoke the C compiler. CFLAGS="?" Pass these flags to the compiler. CPPFLAGS="?" Pass these flags to the C preprocessor. LDFLAGS="?" Pass these flags when linking. PATH="?" Use this to search for programs used during configuration and building. === Development ================================================================ If you intend to make non-trivial changes to jemalloc, use the 'autogen.sh' script rather than 'configure'. This re-generates 'configure', enables configuration dependency rules, and enables re-generation of automatically generated source files. The build system supports using an object directory separate from the source tree. For example, you can create an 'obj' directory, and from within that directory, issue configuration and build commands: autoconf mkdir obj cd obj ../configure --enable-autogen make How to build jemalloc for Windows ================================================================================ 1. Install Cygwin with at least the following packages: * autoconf * autogen * gawk * grep * sed * xsltproc 2. Install Visual Studio 2015 with Visual C++ 3. Open "VS2015 x86 Native Tools Command Prompt" (note: x86/x64 doesn't matter at this point) 4. Add cygwin\bin to the PATH environment variable (PATH=%PATH%;) 5. Generate header files: In VS Command Prompt run: sh ./win_autogen.sh 6. Now the project can be opened and built in Visual Studio: src\VMEM.sln === Documentation ============================================================== The manual page is generated in both html and roff formats. 
Any web browser can be used to view the html manual. The roff manual page can be formatted prior to installation via the following command: nroff -man -t doc/jemalloc.3 vmem-1.8/src/jemalloc/Makefile.in000066400000000000000000000374371361505074100167700ustar00rootroot00000000000000# Clear out all vpaths, then set just one (default vpath) for the main build # directory. vpath vpath % . # Clear the default suffixes, so that built-in rules are not used. .SUFFIXES : SHELL := /bin/sh CC := @CC@ # Configuration parameters. DESTDIR = BINDIR := $(DESTDIR)@BINDIR@ INCLUDEDIR := $(DESTDIR)@INCLUDEDIR@ LIBDIR := $(DESTDIR)@LIBDIR@ DATADIR := $(DESTDIR)@DATADIR@ MANDIR := $(DESTDIR)@MANDIR@ srcroot := @srcroot@ objroot := @objroot@ abs_srcroot := @abs_srcroot@ abs_objroot := @abs_objroot@ # Build parameters. CPPFLAGS := @CPPFLAGS@ -I$(srcroot)include -I$(objroot)include CFLAGS := @CFLAGS@ LDFLAGS := @LDFLAGS@ EXTRA_LDFLAGS := @EXTRA_LDFLAGS@ LIBS := @LIBS@ RPATH_EXTRA := @RPATH_EXTRA@ SO := @so@ IMPORTLIB := @importlib@ O := @o@ A := @a@ EXE := @exe@ LIBPREFIX := @libprefix@ REV := @rev@ install_suffix := @install_suffix@ ABI := @abi@ XSLTPROC := @XSLTPROC@ AUTOCONF := @AUTOCONF@ _RPATH = @RPATH@ RPATH = $(if $(1),$(call _RPATH,$(1))) cfghdrs_in := $(addprefix $(srcroot),@cfghdrs_in@) cfghdrs_out := @cfghdrs_out@ cfgoutputs_in := $(addprefix $(srcroot),@cfgoutputs_in@) cfgoutputs_out := @cfgoutputs_out@ enable_autogen := @enable_autogen@ enable_code_coverage := @enable_code_coverage@ enable_valgrind := @enable_valgrind@ enable_zone_allocator := @enable_zone_allocator@ DSO_LDFLAGS = @DSO_LDFLAGS@ SOREV = @SOREV@ PIC_CFLAGS = @PIC_CFLAGS@ CTARGET = @CTARGET@ LDTARGET = @LDTARGET@ MKLIB = @MKLIB@ AR = @AR@ ARFLAGS = @ARFLAGS@ CC_MM = @CC_MM@ ifeq (macho, $(ABI)) TEST_LIBRARY_PATH := DYLD_FALLBACK_LIBRARY_PATH="$(objroot)lib" else ifeq (pecoff, $(ABI)) TEST_LIBRARY_PATH := PATH="$(PATH):$(objroot)lib" else TEST_LIBRARY_PATH := endif endif LIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix) # Lists of files. 
BINS := $(srcroot)bin/pprof $(objroot)bin/jemalloc.sh C_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h C_SRCS := $(srcroot)src/jemalloc.c $(srcroot)src/arena.c $(srcroot)src/pool.c \ $(srcroot)src/atomic.c $(srcroot)src/base.c $(srcroot)src/bitmap.c \ $(srcroot)src/chunk.c $(srcroot)src/chunk_dss.c $(srcroot)src/vector.c \ $(srcroot)src/chunk_mmap.c $(srcroot)src/ckh.c $(srcroot)src/ctl.c \ $(srcroot)src/extent.c $(srcroot)src/hash.c $(srcroot)src/huge.c \ $(srcroot)src/mb.c $(srcroot)src/mutex.c $(srcroot)src/prof.c \ $(srcroot)src/quarantine.c $(srcroot)src/rtree.c $(srcroot)src/stats.c \ $(srcroot)src/tcache.c $(srcroot)src/util.c $(srcroot)src/tsd.c ifeq ($(enable_valgrind), 1) C_SRCS += $(srcroot)src/valgrind.c endif ifeq ($(enable_zone_allocator), 1) C_SRCS += $(srcroot)src/zone.c endif ifeq ($(IMPORTLIB),$(SO)) STATIC_LIBS := $(objroot)lib/$(LIBJEMALLOC).$(A) endif ifdef PIC_CFLAGS STATIC_LIBS += $(objroot)lib/$(LIBJEMALLOC)_pic.$(A) else STATIC_LIBS += $(objroot)lib/$(LIBJEMALLOC)_s.$(A) endif DSOS := $(objroot)lib/$(LIBJEMALLOC).$(SOREV) ifneq ($(SOREV),$(SO)) DSOS += $(objroot)lib/$(LIBJEMALLOC).$(SO) endif MAN3 := $(objroot)doc/jemalloc$(install_suffix).3 DOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml DOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(srcroot)%.html) DOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(srcroot)%.3) DOCS := $(DOCS_HTML) $(DOCS_MAN3) C_TESTLIB_SRCS := $(srcroot)test/src/math.c $(srcroot)test/src/mtx.c \ $(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \ $(srcroot)test/src/thd.c C_UTIL_INTEGRATION_SRCS := $(srcroot)src/util.c TESTS_UNIT := $(srcroot)test/unit/bitmap.c \ $(srcroot)test/unit/ckh.c \ $(srcroot)test/unit/hash.c \ $(srcroot)test/unit/junk.c \ $(srcroot)test/unit/mallctl.c \ $(srcroot)test/unit/math.c \ $(srcroot)test/unit/mq.c \ $(srcroot)test/unit/mtx.c \ $(srcroot)test/unit/prof_accum.c \ $(srcroot)test/unit/prof_gdump.c \ $(srcroot)test/unit/prof_idump.c \ $(srcroot)test/unit/ql.c \ $(srcroot)test/unit/qr.c \ $(srcroot)test/unit/quarantine.c \ $(srcroot)test/unit/rb.c \ $(srcroot)test/unit/rtree.c \ $(srcroot)test/unit/SFMT.c \ $(srcroot)test/unit/stats.c \ $(srcroot)test/unit/tsd.c \ $(srcroot)test/unit/util.c \ $(srcroot)test/unit/zero.c \ $(srcroot)test/unit/pool_base_alloc.c \ $(srcroot)test/unit/pool_custom_alloc.c \ $(srcroot)test/unit/pool_custom_alloc_internal.c TESTS_UNIT_AUX := $(srcroot)test/unit/prof_accum_a.c \ $(srcroot)test/unit/prof_accum_b.c TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \ $(srcroot)test/integration/allocated.c \ $(srcroot)test/integration/mallocx.c \ $(srcroot)test/integration/MALLOCX_ARENA.c \ $(srcroot)test/integration/posix_memalign.c \ $(srcroot)test/integration/rallocx.c \ $(srcroot)test/integration/thread_arena.c \ $(srcroot)test/integration/thread_tcache_enabled.c \ $(srcroot)test/integration/xallocx.c \ $(srcroot)test/integration/chunk.c TESTS_STRESS := TESTS := $(TESTS_UNIT) $(TESTS_INTEGRATION) $(TESTS_STRESS) C_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.$(O)) C_PIC_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.pic.$(O)) C_JET_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.jet.$(O)) C_TESTLIB_UNIT_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.unit.$(O)) C_TESTLIB_INTEGRATION_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.integration.$(O)) C_UTIL_INTEGRATION_OBJS := $(C_UTIL_INTEGRATION_SRCS:$(srcroot)%.c=$(objroot)%.integration.$(O)) C_TESTLIB_STRESS_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.stress.$(O)) C_TESTLIB_OBJS := $(C_TESTLIB_UNIT_OBJS) 
$(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(C_TESTLIB_STRESS_OBJS) TESTS_UNIT_OBJS := $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%.$(O)) TESTS_UNIT_AUX_OBJS := $(TESTS_UNIT_AUX:$(srcroot)%.c=$(objroot)%.$(O)) TESTS_INTEGRATION_OBJS := $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%.$(O)) TESTS_STRESS_OBJS := $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%.$(O)) TESTS_OBJS := $(TESTS_UNIT_OBJS) $(TESTS_UNIT_AUX_OBJS) $(TESTS_INTEGRATION_OBJS) $(TESTS_STRESS_OBJS) .PHONY: all dist build_doc_html build_doc_man build_doc .PHONY: install_bin install_include install_lib .PHONY: install_doc_html install_doc_man install_doc install .PHONY: tests check clean distclean relclean .SECONDARY : $(TESTS_OBJS) # Default target. all: build_lib dist: build_doc $(srcroot)doc/%.html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl $(XSLTPROC) -o $@ $(objroot)doc/html.xsl $< $(srcroot)doc/%.3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl $(XSLTPROC) -o $@ $(objroot)doc/manpages.xsl $< build_doc_html: $(DOCS_HTML) build_doc_man: $(DOCS_MAN3) build_doc: $(DOCS) # # Include generated dependency files. # ifdef CC_MM -include $(C_OBJS:%.$(O)=%.d) -include $(C_PIC_OBJS:%.$(O)=%.d) -include $(C_JET_OBJS:%.$(O)=%.d) -include $(C_TESTLIB_OBJS:%.$(O)=%.d) -include $(TESTS_OBJS:%.$(O)=%.d) endif $(C_OBJS): $(objroot)src/%.$(O): $(srcroot)src/%.c $(C_PIC_OBJS): $(objroot)src/%.pic.$(O): $(srcroot)src/%.c $(C_PIC_OBJS): CFLAGS += $(PIC_CFLAGS) $(C_JET_OBJS): $(objroot)src/%.jet.$(O): $(srcroot)src/%.c $(C_JET_OBJS): CFLAGS += -DJEMALLOC_JET $(C_TESTLIB_UNIT_OBJS): $(objroot)test/src/%.unit.$(O): $(srcroot)test/src/%.c $(C_TESTLIB_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST $(C_TESTLIB_INTEGRATION_OBJS): $(objroot)test/src/%.integration.$(O): $(srcroot)test/src/%.c $(C_TESTLIB_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST $(C_UTIL_INTEGRATION_OBJS): $(objroot)src/%.integration.$(O): $(srcroot)src/%.c $(C_TESTLIB_STRESS_OBJS): $(objroot)test/src/%.stress.$(O): $(srcroot)test/src/%.c $(C_TESTLIB_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST -DJEMALLOC_STRESS_TESTLIB $(C_TESTLIB_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include $(TESTS_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST $(TESTS_UNIT_AUX_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST define make-unit-link-dep $(1): TESTS_UNIT_LINK_OBJS += $(2) $(1): $(2) endef $(foreach test, $(TESTS_UNIT:$(srcroot)test/unit/%.c=$(objroot)test/unit/%$(EXE)), $(eval $(call make-unit-link-dep,$(test),$(filter $(test:%$(EXE)=%_a.$(O)) $(test:%$(EXE)=%_b.$(O)),$(TESTS_UNIT_AUX_OBJS))))) $(TESTS_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST $(TESTS_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST $(TESTS_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.c $(TESTS_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include ifneq ($(IMPORTLIB),$(SO)) $(C_OBJS) $(C_JET_OBJS): CPPFLAGS += -DDLLEXPORT endif ifndef CC_MM # Dependencies. 
HEADER_DIRS = $(srcroot)include/jemalloc/internal \ $(objroot)include/jemalloc $(objroot)include/jemalloc/internal HEADERS = $(wildcard $(foreach dir,$(HEADER_DIRS),$(dir)/*.h)) $(C_OBJS) $(C_PIC_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): $(HEADERS) $(TESTS_OBJS): $(objroot)test/include/test/jemalloc_test.h endif $(C_OBJS) $(C_PIC_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): %.$(O): @mkdir -p $(@D) $(CC) $(CFLAGS) -c $(CPPFLAGS) $(CTARGET) $< ifdef CC_MM @$(CC) -MM $(CFLAGS) $(CPPFLAGS) -MT $@ -o $(@:%.$(O)=%.d) $< endif ifneq ($(SOREV),$(SO)) %.$(SO) : %.$(SOREV) @mkdir -p $(@D) ln -sf $( $(srcroot)config.stamp.in $(objroot)config.stamp : $(cfgoutputs_in) $(cfghdrs_in) $(srcroot)configure ./$(objroot)config.status @touch $@ # There must be some action in order for make to re-read Makefile when it is # out of date. $(cfgoutputs_out) $(cfghdrs_out) : $(objroot)config.stamp @true endif vmem-1.8/src/jemalloc/Makefile.libvmem000066400000000000000000000035331361505074100200030ustar00rootroot00000000000000# Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/jemalloc/Makefile.libvmem -- Makefile for building jemalloc # for libvmem # EXTRA_TARGETS = jemalloc default: all JEMALLOC_VMEMDIR=libvmem include ./jemalloc.mk JEMALLOC_CONFIG += --disable-bsd-malloc-hooks ifeq ($(DEBUG),1) JEMALLOC_CONFIG += --enable-debug endif LIB_OUTDIR = include ../Makefile.inc vmem-1.8/src/jemalloc/Makefile.libvmmalloc000066400000000000000000000034701361505074100206510ustar00rootroot00000000000000# Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/jemalloc/Makefile.libvmmalloc -- Makefile for building jemalloc # for libvmmalloc # EXTRA_TARGETS = jemalloc default: all JEMALLOC_VMEMDIR=libvmmalloc include ./jemalloc.mk ifeq ($(DEBUG),1) JEMALLOC_CONFIG += --enable-debug endif LIB_OUTDIR = include ../Makefile.inc vmem-1.8/src/jemalloc/README000066400000000000000000000021221361505074100155620ustar00rootroot00000000000000jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support. jemalloc first came into use as the FreeBSD libc allocator in 2005, and since then it has found its way into numerous applications that rely on its predictable behavior. In 2010 jemalloc development efforts broadened to include developer support features such as heap profiling, Valgrind integration, and extensive monitoring/tuning hooks. Modern jemalloc releases continue to be integrated back into FreeBSD, and therefore versatility remains critical. Ongoing development efforts trend toward making jemalloc among the best allocators for a broad range of demanding applications, and eliminating/mitigating weaknesses that have practical repercussions for real world applications. The COPYING file contains copyright and licensing information. The INSTALL file contains information on how to configure, build, and install jemalloc for Linux and Windows. The ChangeLog file contains a brief summary of changes for each release. URL: http://www.canonware.com/jemalloc/ vmem-1.8/src/jemalloc/README.libvmem000066400000000000000000000004621361505074100172210ustar00rootroot00000000000000Persistent Memory Development Kit This is src/jemalloc/README.libvmem. This is directory contains jemalloc version 3.6.0, released 2014-03-31, downloaded from: http://www.canonware.com/download/jemalloc/jemalloc-3.6.0.tar.bz2 with changes applied to support libvmem's need to use memory-mapped files. vmem-1.8/src/jemalloc/autogen.sh000077500000000000000000000004121361505074100167030ustar00rootroot00000000000000#!/bin/sh for i in autoconf; do echo "$i" $i if [ $? -ne 0 ]; then echo "Error $? in $i" exit 1 fi done echo "./configure --enable-autogen $@" ./configure --enable-autogen $@ if [ $? -ne 0 ]; then echo "Error $? 
in ./configure" exit 1 fi vmem-1.8/src/jemalloc/bin/000077500000000000000000000000001361505074100154555ustar00rootroot00000000000000vmem-1.8/src/jemalloc/bin/jemalloc.sh.in000066400000000000000000000002271361505074100202050ustar00rootroot00000000000000#!/bin/sh prefix=@prefix@ exec_prefix=@exec_prefix@ libdir=@libdir@ @LD_PRELOAD_VAR@=${libdir}/libjemalloc.@SOREV@ export @LD_PRELOAD_VAR@ exec "$@" vmem-1.8/src/jemalloc/bin/pprof000077500000000000000000005174171361505074100165500ustar00rootroot00000000000000#! /usr/bin/env perl # Copyright (c) 1998-2007, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # --- # Program for printing the profile generated by common/profiler.cc, # or by the heap profiler (common/debugallocation.cc) # # The profile contains a sequence of entries of the form: # # This program parses the profile, and generates user-readable # output. # # Examples: # # % tools/pprof "program" "profile" # Enters "interactive" mode # # % tools/pprof --text "program" "profile" # Generates one line per procedure # # % tools/pprof --gv "program" "profile" # Generates annotated call-graph and displays via "gv" # # % tools/pprof --gv --focus=Mutex "program" "profile" # Restrict to code paths that involve an entry that matches "Mutex" # # % tools/pprof --gv --focus=Mutex --ignore=string "program" "profile" # Restrict to code paths that involve an entry that matches "Mutex" # and does not match "string" # # % tools/pprof --list=IBF_CheckDocid "program" "profile" # Generates disassembly listing of all routines with at least one # sample that match the --list= pattern. The listing is # annotated with the flat and cumulative sample counts at each line. # # % tools/pprof --disasm=IBF_CheckDocid "program" "profile" # Generates disassembly listing of all routines with at least one # sample that match the --disasm= pattern. The listing is # annotated with the flat and cumulative sample counts at each PC value. # # TODO: Use color to indicate files? 
use strict; use warnings; use Getopt::Long; my $PPROF_VERSION = "2.0"; # These are the object tools we use which can come from a # user-specified location using --tools, from the PPROF_TOOLS # environment variable, or from the environment. my %obj_tool_map = ( "objdump" => "objdump", "nm" => "nm", "addr2line" => "addr2line", "c++filt" => "c++filt", ## ConfigureObjTools may add architecture-specific entries: #"nm_pdb" => "nm-pdb", # for reading windows (PDB-format) executables #"addr2line_pdb" => "addr2line-pdb", # ditto #"otool" => "otool", # equivalent of objdump on OS X ); # NOTE: these are lists, so you can put in commandline flags if you want. my @DOT = ("dot"); # leave non-absolute, since it may be in /usr/local my @GV = ("gv"); my @EVINCE = ("evince"); # could also be xpdf or perhaps acroread my @KCACHEGRIND = ("kcachegrind"); my @PS2PDF = ("ps2pdf"); # These are used for dynamic profiles my @URL_FETCHER = ("curl", "-s"); # These are the web pages that servers need to support for dynamic profiles my $HEAP_PAGE = "/pprof/heap"; my $PROFILE_PAGE = "/pprof/profile"; # must support cgi-param "?seconds=#" my $PMUPROFILE_PAGE = "/pprof/pmuprofile(?:\\?.*)?"; # must support cgi-param # ?seconds=#&event=x&period=n my $GROWTH_PAGE = "/pprof/growth"; my $CONTENTION_PAGE = "/pprof/contention"; my $WALL_PAGE = "/pprof/wall(?:\\?.*)?"; # accepts options like namefilter my $FILTEREDPROFILE_PAGE = "/pprof/filteredprofile(?:\\?.*)?"; my $CENSUSPROFILE_PAGE = "/pprof/censusprofile(?:\\?.*)?"; # must support cgi-param # "?seconds=#", # "?tags_regexp=#" and # "?type=#". my $SYMBOL_PAGE = "/pprof/symbol"; # must support symbol lookup via POST my $PROGRAM_NAME_PAGE = "/pprof/cmdline"; # These are the web pages that can be named on the command line. # All the alternatives must begin with /. my $PROFILES = "($HEAP_PAGE|$PROFILE_PAGE|$PMUPROFILE_PAGE|" . "$GROWTH_PAGE|$CONTENTION_PAGE|$WALL_PAGE|" . "$FILTEREDPROFILE_PAGE|$CENSUSPROFILE_PAGE)"; # default binary name my $UNKNOWN_BINARY = "(unknown)"; # There is a pervasive dependency on the length (in hex characters, # i.e., nibbles) of an address, distinguishing between 32-bit and # 64-bit profiles. To err on the safe size, default to 64-bit here: my $address_length = 16; my $dev_null = "/dev/null"; if (! -e $dev_null && $^O =~ /MSWin/) { # $^O is the OS perl was built for $dev_null = "nul"; } # A list of paths to search for shared object files my @prefix_list = (); # Special routine name that should not have any symbols. # Used as separator to parse "addr2line -i" output. my $sep_symbol = '_fini'; my $sep_address = undef; ##### Argument parsing ##### sub usage_string { return < is a space separated list of profile names. pprof [options] is a list of profile files where each file contains the necessary symbol mappings as well as profile data (likely generated with --raw). pprof [options] is a remote form. Symbols are obtained from host:port$SYMBOL_PAGE Each name can be: /path/to/profile - a path to a profile file host:port[/] - a location of a service to get profile from The / can be $HEAP_PAGE, $PROFILE_PAGE, /pprof/pmuprofile, $GROWTH_PAGE, $CONTENTION_PAGE, /pprof/wall, $CENSUSPROFILE_PAGE, or /pprof/filteredprofile. For instance: pprof http://myserver.com:80$HEAP_PAGE If / is omitted, the service defaults to $PROFILE_PAGE (cpu profiling). pprof --symbols Maps addresses to symbol names. 
In this mode, stdin should be a list of library mappings, in the same format as is found in the heap- and cpu-profile files (this loosely matches that of /proc/self/maps on linux), followed by a list of hex addresses to map, one per line. For more help with querying remote servers, including how to add the necessary server-side support code, see this filename (or one like it): /usr/doc/gperftools-$PPROF_VERSION/pprof_remote_servers.html Options: --cum Sort by cumulative data --base= Subtract from before display --interactive Run in interactive mode (interactive "help" gives help) [default] --seconds= Length of time for dynamic profiles [default=30 secs] --add_lib= Read additional symbols and line info from the given library --lib_prefix= Comma separated list of library path prefixes Reporting Granularity: --addresses Report at address level --lines Report at source line level --functions Report at function level [default] --files Report at source file level Output type: --text Generate text report --callgrind Generate callgrind format to stdout --gv Generate Postscript and display --evince Generate PDF and display --web Generate SVG and display --list= Generate source listing of matching routines --disasm= Generate disassembly of matching routines --symbols Print demangled symbol names found at given addresses --dot Generate DOT file to stdout --ps Generate Postcript to stdout --pdf Generate PDF to stdout --svg Generate SVG to stdout --gif Generate GIF to stdout --raw Generate symbolized pprof data (useful with remote fetch) Heap-Profile Options: --inuse_space Display in-use (mega)bytes [default] --inuse_objects Display in-use objects --alloc_space Display allocated (mega)bytes --alloc_objects Display allocated objects --show_bytes Display space in bytes --drop_negative Ignore negative differences Contention-profile options: --total_delay Display total delay at each region [default] --contentions Display number of delays at each region --mean_delay Display mean delay at each region Call-graph Options: --nodecount= Show at most so many nodes [default=80] --nodefraction= Hide nodes below *total [default=.005] --edgefraction= Hide edges below *total [default=.001] --maxdegree= Max incoming/outgoing edges per node [default=8] --focus= Focus on nodes matching --ignore= Ignore nodes matching --scale= Set GV scaling [default=0] --heapcheck Make nodes with non-0 object counts (i.e. direct leak generators) more visible Miscellaneous: --tools=[,...] \$PATH for object tool pathnames --test Run unit tests --help This message --version Version information Environment Variables: PPROF_TMPDIR Profiles directory. 
Defaults to \$HOME/pprof PPROF_TOOLS Prefix for object tools pathnames Examples: pprof /bin/ls ls.prof Enters "interactive" mode pprof --text /bin/ls ls.prof Outputs one line per procedure pprof --web /bin/ls ls.prof Displays annotated call-graph in web browser pprof --gv /bin/ls ls.prof Displays annotated call-graph via 'gv' pprof --gv --focus=Mutex /bin/ls ls.prof Restricts to code paths including a .*Mutex.* entry pprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof Code paths including Mutex but not string pprof --list=getdir /bin/ls ls.prof (Per-line) annotated source listing for getdir() pprof --disasm=getdir /bin/ls ls.prof (Per-PC) annotated disassembly for getdir() pprof http://localhost:1234/ Enters "interactive" mode pprof --text localhost:1234 Outputs one line per procedure for localhost:1234 pprof --raw localhost:1234 > ./local.raw pprof --text ./local.raw Fetches a remote profile for later analysis and then analyzes it in text mode. EOF } sub version_string { return < \$main::opt_help, "version!" => \$main::opt_version, "cum!" => \$main::opt_cum, "base=s" => \$main::opt_base, "seconds=i" => \$main::opt_seconds, "add_lib=s" => \$main::opt_lib, "lib_prefix=s" => \$main::opt_lib_prefix, "functions!" => \$main::opt_functions, "lines!" => \$main::opt_lines, "addresses!" => \$main::opt_addresses, "files!" => \$main::opt_files, "text!" => \$main::opt_text, "callgrind!" => \$main::opt_callgrind, "list=s" => \$main::opt_list, "disasm=s" => \$main::opt_disasm, "symbols!" => \$main::opt_symbols, "gv!" => \$main::opt_gv, "evince!" => \$main::opt_evince, "web!" => \$main::opt_web, "dot!" => \$main::opt_dot, "ps!" => \$main::opt_ps, "pdf!" => \$main::opt_pdf, "svg!" => \$main::opt_svg, "gif!" => \$main::opt_gif, "raw!" => \$main::opt_raw, "interactive!" => \$main::opt_interactive, "nodecount=i" => \$main::opt_nodecount, "nodefraction=f" => \$main::opt_nodefraction, "edgefraction=f" => \$main::opt_edgefraction, "maxdegree=i" => \$main::opt_maxdegree, "focus=s" => \$main::opt_focus, "ignore=s" => \$main::opt_ignore, "scale=i" => \$main::opt_scale, "heapcheck" => \$main::opt_heapcheck, "inuse_space!" => \$main::opt_inuse_space, "inuse_objects!" => \$main::opt_inuse_objects, "alloc_space!" => \$main::opt_alloc_space, "alloc_objects!" => \$main::opt_alloc_objects, "show_bytes!" => \$main::opt_show_bytes, "drop_negative!" => \$main::opt_drop_negative, "total_delay!" => \$main::opt_total_delay, "contentions!" => \$main::opt_contentions, "mean_delay!" => \$main::opt_mean_delay, "tools=s" => \$main::opt_tools, "test!" => \$main::opt_test, "debug!" 
=> \$main::opt_debug, # Undocumented flags used only by unittests: "test_stride=i" => \$main::opt_test_stride, ) || usage("Invalid option(s)"); # Deal with the standard --help and --version if ($main::opt_help) { print usage_string(); exit(0); } if ($main::opt_version) { print version_string(); exit(0); } # Disassembly/listing/symbols mode requires address-level info if ($main::opt_disasm || $main::opt_list || $main::opt_symbols) { $main::opt_functions = 0; $main::opt_lines = 0; $main::opt_addresses = 1; $main::opt_files = 0; } # Check heap-profiling flags if ($main::opt_inuse_space + $main::opt_inuse_objects + $main::opt_alloc_space + $main::opt_alloc_objects > 1) { usage("Specify at most on of --inuse/--alloc options"); } # Check output granularities my $grains = $main::opt_functions + $main::opt_lines + $main::opt_addresses + $main::opt_files + 0; if ($grains > 1) { usage("Only specify one output granularity option"); } if ($grains == 0) { $main::opt_functions = 1; } # Check output modes my $modes = $main::opt_text + $main::opt_callgrind + ($main::opt_list eq '' ? 0 : 1) + ($main::opt_disasm eq '' ? 0 : 1) + ($main::opt_symbols == 0 ? 0 : 1) + $main::opt_gv + $main::opt_evince + $main::opt_web + $main::opt_dot + $main::opt_ps + $main::opt_pdf + $main::opt_svg + $main::opt_gif + $main::opt_raw + $main::opt_interactive + 0; if ($modes > 1) { usage("Only specify one output mode"); } if ($modes == 0) { if (-t STDOUT) { # If STDOUT is a tty, activate interactive mode $main::opt_interactive = 1; } else { $main::opt_text = 1; } } if ($main::opt_test) { RunUnitTests(); # Should not return exit(1); } # Binary name and profile arguments list $main::prog = ""; @main::pfile_args = (); # Remote profiling without a binary (using $SYMBOL_PAGE instead) if (@ARGV > 0) { if (IsProfileURL($ARGV[0])) { $main::use_symbol_page = 1; } elsif (IsSymbolizedProfileFile($ARGV[0])) { $main::use_symbolized_profile = 1; $main::prog = $UNKNOWN_BINARY; # will be set later from the profile file } } if ($main::use_symbol_page || $main::use_symbolized_profile) { # We don't need a binary! my %disabled = ('--lines' => $main::opt_lines, '--disasm' => $main::opt_disasm); for my $option (keys %disabled) { usage("$option cannot be used without a binary") if $disabled{$option}; } # Set $main::prog later... scalar(@ARGV) || usage("Did not specify profile file"); } elsif ($main::opt_symbols) { # --symbols needs a binary-name (to run nm on, etc) but not profiles $main::prog = shift(@ARGV) || usage("Did not specify program"); } else { $main::prog = shift(@ARGV) || usage("Did not specify program"); scalar(@ARGV) || usage("Did not specify profile file"); } # Parse profile file/location arguments foreach my $farg (@ARGV) { if ($farg =~ m/(.*)\@([0-9]+)(|\/.*)$/ ) { my $machine = $1; my $num_machines = $2; my $path = $3; for (my $i = 0; $i < $num_machines; $i++) { unshift(@main::pfile_args, "$i.$machine$path"); } } else { unshift(@main::pfile_args, $farg); } } if ($main::use_symbol_page) { unless (IsProfileURL($main::pfile_args[0])) { error("The first profile should be a remote form to use $SYMBOL_PAGE\n"); } CheckSymbolPage(); $main::prog = FetchProgramName(); } elsif (!$main::use_symbolized_profile) { # may not need objtools! 
ConfigureObjTools($main::prog) } # Break the opt_lib_prefix into the prefix_list array @prefix_list = split (',', $main::opt_lib_prefix); # Remove trailing / from the prefixes, in the list to prevent # searching things like /my/path//lib/mylib.so foreach (@prefix_list) { s|/+$||; } } sub Main() { Init(); $main::collected_profile = undef; @main::profile_files = (); $main::op_time = time(); # Printing symbols is special and requires a lot less info that most. if ($main::opt_symbols) { PrintSymbols(*STDIN); # Get /proc/maps and symbols output from stdin return; } # Fetch all profile data FetchDynamicProfiles(); # this will hold symbols that we read from the profile files my $symbol_map = {}; # Read one profile, pick the last item on the list my $data = ReadProfile($main::prog, pop(@main::profile_files)); my $profile = $data->{profile}; my $pcs = $data->{pcs}; my $libs = $data->{libs}; # Info about main program and shared libraries $symbol_map = MergeSymbols($symbol_map, $data->{symbols}); # Add additional profiles, if available. if (scalar(@main::profile_files) > 0) { foreach my $pname (@main::profile_files) { my $data2 = ReadProfile($main::prog, $pname); $profile = AddProfile($profile, $data2->{profile}); $pcs = AddPcs($pcs, $data2->{pcs}); $symbol_map = MergeSymbols($symbol_map, $data2->{symbols}); } } # Subtract base from profile, if specified if ($main::opt_base ne '') { my $base = ReadProfile($main::prog, $main::opt_base); $profile = SubtractProfile($profile, $base->{profile}); $pcs = AddPcs($pcs, $base->{pcs}); $symbol_map = MergeSymbols($symbol_map, $base->{symbols}); } # Get total data in profile my $total = TotalProfile($profile); # Collect symbols my $symbols; if ($main::use_symbolized_profile) { $symbols = FetchSymbols($pcs, $symbol_map); } elsif ($main::use_symbol_page) { $symbols = FetchSymbols($pcs); } else { # TODO(csilvers): $libs uses the /proc/self/maps data from profile1, # which may differ from the data from subsequent profiles, especially # if they were run on different machines. Use appropriate libs for # each pc somehow. $symbols = ExtractSymbols($libs, $pcs); } # Remove uniniteresting stack items $profile = RemoveUninterestingFrames($symbols, $profile); # Focus? if ($main::opt_focus ne '') { $profile = FocusProfile($symbols, $profile, $main::opt_focus); } # Ignore? if ($main::opt_ignore ne '') { $profile = IgnoreProfile($symbols, $profile, $main::opt_ignore); } my $calls = ExtractCalls($symbols, $profile); # Reduce profiles to required output granularity, and also clean # each stack trace so a given entry exists at most once. 
my $reduced = ReduceProfile($symbols, $profile); # Get derived profiles my $flat = FlatProfile($reduced); my $cumulative = CumulativeProfile($reduced); # Print if (!$main::opt_interactive) { if ($main::opt_disasm) { PrintDisassembly($libs, $flat, $cumulative, $main::opt_disasm); } elsif ($main::opt_list) { PrintListing($total, $libs, $flat, $cumulative, $main::opt_list, 0); } elsif ($main::opt_text) { # Make sure the output is empty when have nothing to report # (only matters when --heapcheck is given but we must be # compatible with old branches that did not pass --heapcheck always): if ($total != 0) { printf("Total: %s %s\n", Unparse($total), Units()); } PrintText($symbols, $flat, $cumulative, -1); } elsif ($main::opt_raw) { PrintSymbolizedProfile($symbols, $profile, $main::prog); } elsif ($main::opt_callgrind) { PrintCallgrind($calls); } else { if (PrintDot($main::prog, $symbols, $profile, $flat, $cumulative, $total)) { if ($main::opt_gv) { RunGV(TempName($main::next_tmpfile, "ps"), ""); } elsif ($main::opt_evince) { RunEvince(TempName($main::next_tmpfile, "pdf"), ""); } elsif ($main::opt_web) { my $tmp = TempName($main::next_tmpfile, "svg"); RunWeb($tmp); # The command we run might hand the file name off # to an already running browser instance and then exit. # Normally, we'd remove $tmp on exit (right now), # but fork a child to remove $tmp a little later, so that the # browser has time to load it first. delete $main::tempnames{$tmp}; if (fork() == 0) { sleep 5; unlink($tmp); exit(0); } } } else { cleanup(); exit(1); } } } else { InteractiveMode($profile, $symbols, $libs, $total); } cleanup(); exit(0); } ##### Entry Point ##### Main(); # Temporary code to detect if we're running on a Goobuntu system. # These systems don't have the right stuff installed for the special # Readline libraries to work, so as a temporary workaround, we default # to using the normal stdio code, rather than the fancier readline-based # code sub ReadlineMightFail { if (-e '/lib/libtermcap.so.2') { return 0; # libtermcap exists, so readline should be okay } else { return 1; } } sub RunGV { my $fname = shift; my $bg = shift; # "" or " &" if we should run in background if (!system(ShellEscape(@GV, "--version") . " >$dev_null 2>&1")) { # Options using double dash are supported by this gv version. # Also, turn on noantialias to better handle bug in gv for # postscript files with large dimensions. # TODO: Maybe we should not pass the --noantialias flag # if the gv version is known to work properly without the flag. system(ShellEscape(@GV, "--scale=$main::opt_scale", "--noantialias", $fname) . $bg); } else { # Old gv version - only supports options that use single dash. print STDERR ShellEscape(@GV, "-scale", $main::opt_scale) . "\n"; system(ShellEscape(@GV, "-scale", "$main::opt_scale", $fname) . $bg); } } sub RunEvince { my $fname = shift; my $bg = shift; # "" or " &" if we should run in background system(ShellEscape(@EVINCE, $fname) . $bg); } sub RunWeb { my $fname = shift; print STDERR "Loading web page file:///$fname\n"; if (`uname` =~ /Darwin/) { # OS X: open will use standard preference for SVG files. system("/usr/bin/open", $fname); return; } # Some kind of Unix; try generic symlinks, then specific browsers. # (Stop once we find one.) # Works best if the browser is already running. 
my @alt = ( "/etc/alternatives/gnome-www-browser", "/etc/alternatives/x-www-browser", "google-chrome", "firefox", ); foreach my $b (@alt) { if (system($b, $fname) == 0) { return; } } print STDERR "Could not load web browser.\n"; } sub RunKcachegrind { my $fname = shift; my $bg = shift; # "" or " &" if we should run in background print STDERR "Starting '@KCACHEGRIND " . $fname . $bg . "'\n"; system(ShellEscape(@KCACHEGRIND, $fname) . $bg); } ##### Interactive helper routines ##### sub InteractiveMode { $| = 1; # Make output unbuffered for interactive mode my ($orig_profile, $symbols, $libs, $total) = @_; print STDERR "Welcome to pprof! For help, type 'help'.\n"; # Use ReadLine if it's installed and input comes from a console. if ( -t STDIN && !ReadlineMightFail() && defined(eval {require Term::ReadLine}) ) { my $term = new Term::ReadLine 'pprof'; while ( defined ($_ = $term->readline('(pprof) '))) { $term->addhistory($_) if /\S/; if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) { last; # exit when we get an interactive command to quit } } } else { # don't have readline while (1) { print STDERR "(pprof) "; $_ = ; last if ! defined $_ ; s/\r//g; # turn windows-looking lines into unix-looking lines # Save some flags that might be reset by InteractiveCommand() my $save_opt_lines = $main::opt_lines; if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) { last; # exit when we get an interactive command to quit } # Restore flags $main::opt_lines = $save_opt_lines; } } } # Takes two args: orig profile, and command to run. # Returns 1 if we should keep going, or 0 if we were asked to quit sub InteractiveCommand { my($orig_profile, $symbols, $libs, $total, $command) = @_; $_ = $command; # just to make future m//'s easier if (!defined($_)) { print STDERR "\n"; return 0; } if (m/^\s*quit/) { return 0; } if (m/^\s*help/) { InteractiveHelpMessage(); return 1; } # Clear all the mode options -- mode is controlled by "$command" $main::opt_text = 0; $main::opt_callgrind = 0; $main::opt_disasm = 0; $main::opt_list = 0; $main::opt_gv = 0; $main::opt_evince = 0; $main::opt_cum = 0; if (m/^\s*(text|top)(\d*)\s*(.*)/) { $main::opt_text = 1; my $line_limit = ($2 ne "") ? 
int($2) : 10; my $routine; my $ignore; ($routine, $ignore) = ParseInteractiveArgs($3); my $profile = ProcessProfile($total, $orig_profile, $symbols, "", $ignore); my $reduced = ReduceProfile($symbols, $profile); # Get derived profiles my $flat = FlatProfile($reduced); my $cumulative = CumulativeProfile($reduced); PrintText($symbols, $flat, $cumulative, $line_limit); return 1; } if (m/^\s*callgrind\s*([^ \n]*)/) { $main::opt_callgrind = 1; # Get derived profiles my $calls = ExtractCalls($symbols, $orig_profile); my $filename = $1; if ( $1 eq '' ) { $filename = TempName($main::next_tmpfile, "callgrind"); } PrintCallgrind($calls, $filename); if ( $1 eq '' ) { RunKcachegrind($filename, " & "); $main::next_tmpfile++; } return 1; } if (m/^\s*(web)?list\s*(.+)/) { my $html = (defined($1) && ($1 eq "web")); $main::opt_list = 1; my $routine; my $ignore; ($routine, $ignore) = ParseInteractiveArgs($2); my $profile = ProcessProfile($total, $orig_profile, $symbols, "", $ignore); my $reduced = ReduceProfile($symbols, $profile); # Get derived profiles my $flat = FlatProfile($reduced); my $cumulative = CumulativeProfile($reduced); PrintListing($total, $libs, $flat, $cumulative, $routine, $html); return 1; } if (m/^\s*disasm\s*(.+)/) { $main::opt_disasm = 1; my $routine; my $ignore; ($routine, $ignore) = ParseInteractiveArgs($1); # Process current profile to account for various settings my $profile = ProcessProfile($total, $orig_profile, $symbols, "", $ignore); my $reduced = ReduceProfile($symbols, $profile); # Get derived profiles my $flat = FlatProfile($reduced); my $cumulative = CumulativeProfile($reduced); PrintDisassembly($libs, $flat, $cumulative, $routine); return 1; } if (m/^\s*(gv|web|evince)\s*(.*)/) { $main::opt_gv = 0; $main::opt_evince = 0; $main::opt_web = 0; if ($1 eq "gv") { $main::opt_gv = 1; } elsif ($1 eq "evince") { $main::opt_evince = 1; } elsif ($1 eq "web") { $main::opt_web = 1; } my $focus; my $ignore; ($focus, $ignore) = ParseInteractiveArgs($2); # Process current profile to account for various settings my $profile = ProcessProfile($total, $orig_profile, $symbols, $focus, $ignore); my $reduced = ReduceProfile($symbols, $profile); # Get derived profiles my $flat = FlatProfile($reduced); my $cumulative = CumulativeProfile($reduced); if (PrintDot($main::prog, $symbols, $profile, $flat, $cumulative, $total)) { if ($main::opt_gv) { RunGV(TempName($main::next_tmpfile, "ps"), " &"); } elsif ($main::opt_evince) { RunEvince(TempName($main::next_tmpfile, "pdf"), " &"); } elsif ($main::opt_web) { RunWeb(TempName($main::next_tmpfile, "svg")); } $main::next_tmpfile++; } return 1; } if (m/^\s*$/) { return 1; } print STDERR "Unknown command: try 'help'.\n"; return 1; } sub ProcessProfile { my $total_count = shift; my $orig_profile = shift; my $symbols = shift; my $focus = shift; my $ignore = shift; # Process current profile to account for various settings my $profile = $orig_profile; printf("Total: %s %s\n", Unparse($total_count), Units()); if ($focus ne '') { $profile = FocusProfile($symbols, $profile, $focus); my $focus_count = TotalProfile($profile); printf("After focusing on '%s': %s %s of %s (%0.1f%%)\n", $focus, Unparse($focus_count), Units(), Unparse($total_count), ($focus_count*100.0) / $total_count); } if ($ignore ne '') { $profile = IgnoreProfile($symbols, $profile, $ignore); my $ignore_count = TotalProfile($profile); printf("After ignoring '%s': %s %s of %s (%0.1f%%)\n", $ignore, Unparse($ignore_count), Units(), Unparse($total_count), ($ignore_count*100.0) / $total_count); } return 
$profile; } sub InteractiveHelpMessage { print STDERR <{$k}; my @addrs = split(/\n/, $k); if ($#addrs >= 0) { my $depth = $#addrs + 1; # int(foo / 2**32) is the only reliable way to get rid of bottom # 32 bits on both 32- and 64-bit systems. print pack('L*', $count & 0xFFFFFFFF, int($count / 2**32)); print pack('L*', $depth & 0xFFFFFFFF, int($depth / 2**32)); foreach my $full_addr (@addrs) { my $addr = $full_addr; $addr =~ s/0x0*//; # strip off leading 0x, zeroes if (length($addr) > 16) { print STDERR "Invalid address in profile: $full_addr\n"; next; } my $low_addr = substr($addr, -8); # get last 8 hex chars my $high_addr = substr($addr, -16, 8); # get up to 8 more hex chars print pack('L*', hex('0x' . $low_addr), hex('0x' . $high_addr)); } } } } # Print symbols and profile data sub PrintSymbolizedProfile { my $symbols = shift; my $profile = shift; my $prog = shift; $SYMBOL_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $symbol_marker = $&; print '--- ', $symbol_marker, "\n"; if (defined($prog)) { print 'binary=', $prog, "\n"; } while (my ($pc, $name) = each(%{$symbols})) { my $sep = ' '; print '0x', $pc; # We have a list of function names, which include the inlined # calls. They are separated (and terminated) by --, which is # illegal in function names. for (my $j = 2; $j <= $#{$name}; $j += 3) { print $sep, $name->[$j]; $sep = '--'; } print "\n"; } print '---', "\n"; $PROFILE_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $profile_marker = $&; print '--- ', $profile_marker, "\n"; if (defined($main::collected_profile)) { # if used with remote fetch, simply dump the collected profile to output. open(SRC, "<$main::collected_profile"); while () { print $_; } close(SRC); } else { # dump a cpu-format profile to standard out PrintProfileData($profile); } } # Print text output sub PrintText { my $symbols = shift; my $flat = shift; my $cumulative = shift; my $line_limit = shift; my $total = TotalProfile($flat); # Which profile to sort by? my $s = $main::opt_cum ? $cumulative : $flat; my $running_sum = 0; my $lines = 0; foreach my $k (sort { GetEntry($s, $b) <=> GetEntry($s, $a) || $a cmp $b } keys(%{$cumulative})) { my $f = GetEntry($flat, $k); my $c = GetEntry($cumulative, $k); $running_sum += $f; my $sym = $k; if (exists($symbols->{$k})) { $sym = $symbols->{$k}->[0] . " " . $symbols->{$k}->[1]; if ($main::opt_addresses) { $sym = $k . " " . $sym; } } if ($f != 0 || $c != 0) { printf("%8s %6s %6s %8s %6s %s\n", Unparse($f), Percent($f, $total), Percent($running_sum, $total), Unparse($c), Percent($c, $total), $sym); } $lines++; last if ($line_limit >= 0 && $lines >= $line_limit); } } # Callgrind format has a compression for repeated function and file # names. You show the name the first time, and just use its number # subsequently. This can cut down the file to about a third or a # quarter of its uncompressed size. $key and $val are the key/value # pair that would normally be printed by callgrind; $map is a map from # value to number. sub CompressedCGName { my($key, $val, $map) = @_; my $idx = $map->{$val}; # For very short keys, providing an index hurts rather than helps. if (length($val) <= 3) { return "$key=$val\n"; } elsif (defined($idx)) { return "$key=($idx)\n"; } else { # scalar(keys $map) gives the number of items in the map. $idx = scalar(keys(%{$map})) + 1; $map->{$val} = $idx; return "$key=($idx) $val\n"; } } # Print the call graph in a way that's suiteable for callgrind. 
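# Illustration (file and function names are hypothetical): PrintCallgrind
# below writes an "events: Hits" header and then, per call edge, fl/fn records
# for the caller (plus cfl/cfn/calls= records when a callee is present). With
# the compression above, a long name is written once with its index, e.g.
#   fl=(1) /src/alloc.cc     first occurrence: index plus full name
#   fl=(1)                   later occurrences: the index alone
#   fl=??                    names of three characters or fewer stay verbatim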
sub PrintCallgrind { my $calls = shift; my $filename; my %filename_to_index_map; my %fnname_to_index_map; if ($main::opt_interactive) { $filename = shift; print STDERR "Writing callgrind file to '$filename'.\n" } else { $filename = "&STDOUT"; } open(CG, ">$filename"); printf CG ("events: Hits\n\n"); foreach my $call ( map { $_->[0] } sort { $a->[1] cmp $b ->[1] || $a->[2] <=> $b->[2] } map { /([^:]+):(\d+):([^ ]+)( -> ([^:]+):(\d+):(.+))?/; [$_, $1, $2] } keys %$calls ) { my $count = int($calls->{$call}); $call =~ /([^:]+):(\d+):([^ ]+)( -> ([^:]+):(\d+):(.+))?/; my ( $caller_file, $caller_line, $caller_function, $callee_file, $callee_line, $callee_function ) = ( $1, $2, $3, $5, $6, $7 ); # TODO(csilvers): for better compression, collect all the # caller/callee_files and functions first, before printing # anything, and only compress those referenced more than once. printf CG CompressedCGName("fl", $caller_file, \%filename_to_index_map); printf CG CompressedCGName("fn", $caller_function, \%fnname_to_index_map); if (defined $6) { printf CG CompressedCGName("cfl", $callee_file, \%filename_to_index_map); printf CG CompressedCGName("cfn", $callee_function, \%fnname_to_index_map); printf CG ("calls=$count $callee_line\n"); } printf CG ("$caller_line $count\n\n"); } } # Print disassembly for all all routines that match $main::opt_disasm sub PrintDisassembly { my $libs = shift; my $flat = shift; my $cumulative = shift; my $disasm_opts = shift; my $total = TotalProfile($flat); foreach my $lib (@{$libs}) { my $symbol_table = GetProcedureBoundaries($lib->[0], $disasm_opts); my $offset = AddressSub($lib->[1], $lib->[3]); foreach my $routine (sort ByName keys(%{$symbol_table})) { my $start_addr = $symbol_table->{$routine}->[0]; my $end_addr = $symbol_table->{$routine}->[1]; # See if there are any samples in this routine my $length = hex(AddressSub($end_addr, $start_addr)); my $addr = AddressAdd($start_addr, $offset); for (my $i = 0; $i < $length; $i++) { if (defined($cumulative->{$addr})) { PrintDisassembledFunction($lib->[0], $offset, $routine, $flat, $cumulative, $start_addr, $end_addr, $total); last; } $addr = AddressInc($addr); } } } } # Return reference to array of tuples of the form: # [start_address, filename, linenumber, instruction, limit_address] # E.g., # ["0x806c43d", "/foo/bar.cc", 131, "ret", "0x806c440"] sub Disassemble { my $prog = shift; my $offset = shift; my $start_addr = shift; my $end_addr = shift; my $objdump = $obj_tool_map{"objdump"}; my $cmd = ShellEscape($objdump, "-C", "-d", "-l", "--no-show-raw-insn", "--start-address=0x$start_addr", "--stop-address=0x$end_addr", $prog); open(OBJDUMP, "$cmd |") || error("$cmd: $!\n"); my @result = (); my $filename = ""; my $linenumber = -1; my $last = ["", "", "", ""]; while () { s/\r//g; # turn windows-looking lines into unix-looking lines chop; if (m|\s*([^:\s]+):(\d+)\s*$|) { # Location line of the form: # : $filename = $1; $linenumber = $2; } elsif (m/^ +([0-9a-f]+):\s*(.*)/) { # Disassembly line -- zero-extend address to full length my $addr = HexExtend($1); my $k = AddressAdd($addr, $offset); $last->[4] = $k; # Store ending address for previous instruction $last = [$k, $filename, $linenumber, $2, $end_addr]; push(@result, $last); } } close(OBJDUMP); return @result; } # The input file should contain lines of the form /proc/maps-like # output (same format as expected from the profiles) or that looks # like hex addresses (like "0xDEADBEEF"). 
We will parse all # /proc/maps output, and for all the hex addresses, we will output # "short" symbol names, one per line, in the same order as the input. sub PrintSymbols { my $maps_and_symbols_file = shift; # ParseLibraries expects pcs to be in a set. Fine by us... my @pclist = (); # pcs in sorted order my $pcs = {}; my $map = ""; foreach my $line (<$maps_and_symbols_file>) { $line =~ s/\r//g; # turn windows-looking lines into unix-looking lines if ($line =~ /\b(0x[0-9a-f]+)\b/i) { push(@pclist, HexExtend($1)); $pcs->{$pclist[-1]} = 1; } else { $map .= $line; } } my $libs = ParseLibraries($main::prog, $map, $pcs); my $symbols = ExtractSymbols($libs, $pcs); foreach my $pc (@pclist) { # ->[0] is the shortname, ->[2] is the full name print(($symbols->{$pc}->[0] || "??") . "\n"); } } # For sorting functions by name sub ByName { return ShortFunctionName($a) cmp ShortFunctionName($b); } # Print source-listing for all all routines that match $list_opts sub PrintListing { my $total = shift; my $libs = shift; my $flat = shift; my $cumulative = shift; my $list_opts = shift; my $html = shift; my $output = \*STDOUT; my $fname = ""; if ($html) { # Arrange to write the output to a temporary file $fname = TempName($main::next_tmpfile, "html"); $main::next_tmpfile++; if (!open(TEMP, ">$fname")) { print STDERR "$fname: $!\n"; return; } $output = \*TEMP; print $output HtmlListingHeader(); printf $output ("
<div class=\"legend\">%s<br>Total: %s %s</div>
\n", $main::prog, Unparse($total), Units()); } my $listed = 0; foreach my $lib (@{$libs}) { my $symbol_table = GetProcedureBoundaries($lib->[0], $list_opts); my $offset = AddressSub($lib->[1], $lib->[3]); foreach my $routine (sort ByName keys(%{$symbol_table})) { # Print if there are any samples in this routine my $start_addr = $symbol_table->{$routine}->[0]; my $end_addr = $symbol_table->{$routine}->[1]; my $length = hex(AddressSub($end_addr, $start_addr)); my $addr = AddressAdd($start_addr, $offset); for (my $i = 0; $i < $length; $i++) { if (defined($cumulative->{$addr})) { $listed += PrintSource( $lib->[0], $offset, $routine, $flat, $cumulative, $start_addr, $end_addr, $html, $output); last; } $addr = AddressInc($addr); } } } if ($html) { if ($listed > 0) { print $output HtmlListingFooter(); close($output); RunWeb($fname); } else { close($output); unlink($fname); } } } sub HtmlListingHeader { return <<'EOF'; Pprof listing EOF } sub HtmlListingFooter { return <<'EOF'; EOF } sub HtmlEscape { my $text = shift; $text =~ s/&/&/g; $text =~ s//>/g; return $text; } # Returns the indentation of the line, if it has any non-whitespace # characters. Otherwise, returns -1. sub Indentation { my $line = shift; if (m/^(\s*)\S/) { return length($1); } else { return -1; } } # If the symbol table contains inlining info, Disassemble() may tag an # instruction with a location inside an inlined function. But for # source listings, we prefer to use the location in the function we # are listing. So use MapToSymbols() to fetch full location # information for each instruction and then pick out the first # location from a location list (location list contains callers before # callees in case of inlining). # # After this routine has run, each entry in $instructions contains: # [0] start address # [1] filename for function we are listing # [2] line number for function we are listing # [3] disassembly # [4] limit address # [5] most specific filename (may be different from [1] due to inlining) # [6] most specific line number (may be different from [2] due to inlining) sub GetTopLevelLineNumbers { my ($lib, $offset, $instructions) = @_; my $pcs = []; for (my $i = 0; $i <= $#{$instructions}; $i++) { push(@{$pcs}, $instructions->[$i]->[0]); } my $symbols = {}; MapToSymbols($lib, $offset, $pcs, $symbols); for (my $i = 0; $i <= $#{$instructions}; $i++) { my $e = $instructions->[$i]; push(@{$e}, $e->[1]); push(@{$e}, $e->[2]); my $addr = $e->[0]; my $sym = $symbols->{$addr}; if (defined($sym)) { if ($#{$sym} >= 2 && $sym->[1] =~ m/^(.*):(\d+)$/) { $e->[1] = $1; # File name $e->[2] = $2; # Line number } } } } # Print source-listing for one routine sub PrintSource { my $prog = shift; my $offset = shift; my $routine = shift; my $flat = shift; my $cumulative = shift; my $start_addr = shift; my $end_addr = shift; my $html = shift; my $output = shift; # Disassemble all instructions (just to get line numbers) my @instructions = Disassemble($prog, $offset, $start_addr, $end_addr); GetTopLevelLineNumbers($prog, $offset, \@instructions); # Hack 1: assume that the first source file encountered in the # disassembly contains the routine my $filename = undef; for (my $i = 0; $i <= $#instructions; $i++) { if ($instructions[$i]->[2] >= 0) { $filename = $instructions[$i]->[1]; last; } } if (!defined($filename)) { print STDERR "no filename found in $routine\n"; return 0; } # Hack 2: assume that the largest line number from $filename is the # end of the procedure. 
This is typically safe since if P1 contains # an inlined call to P2, then P2 usually occurs earlier in the # source file. If this does not work, we might have to compute a # density profile or just print all regions we find. my $lastline = 0; for (my $i = 0; $i <= $#instructions; $i++) { my $f = $instructions[$i]->[1]; my $l = $instructions[$i]->[2]; if (($f eq $filename) && ($l > $lastline)) { $lastline = $l; } } # Hack 3: assume the first source location from "filename" is the start of # the source code. my $firstline = 1; for (my $i = 0; $i <= $#instructions; $i++) { if ($instructions[$i]->[1] eq $filename) { $firstline = $instructions[$i]->[2]; last; } } # Hack 4: Extend last line forward until its indentation is less than # the indentation we saw on $firstline my $oldlastline = $lastline; { if (!open(FILE, "<$filename")) { print STDERR "$filename: $!\n"; return 0; } my $l = 0; my $first_indentation = -1; while () { s/\r//g; # turn windows-looking lines into unix-looking lines $l++; my $indent = Indentation($_); if ($l >= $firstline) { if ($first_indentation < 0 && $indent >= 0) { $first_indentation = $indent; last if ($first_indentation == 0); } } if ($l >= $lastline && $indent >= 0) { if ($indent >= $first_indentation) { $lastline = $l+1; } else { last; } } } close(FILE); } # Assign all samples to the range $firstline,$lastline, # Hack 4: If an instruction does not occur in the range, its samples # are moved to the next instruction that occurs in the range. my $samples1 = {}; # Map from line number to flat count my $samples2 = {}; # Map from line number to cumulative count my $running1 = 0; # Unassigned flat counts my $running2 = 0; # Unassigned cumulative counts my $total1 = 0; # Total flat counts my $total2 = 0; # Total cumulative counts my %disasm = (); # Map from line number to disassembly my $running_disasm = ""; # Unassigned disassembly my $skip_marker = "---\n"; if ($html) { $skip_marker = ""; for (my $l = $firstline; $l <= $lastline; $l++) { $disasm{$l} = ""; } } my $last_dis_filename = ''; my $last_dis_linenum = -1; my $last_touched_line = -1; # To detect gaps in disassembly for a line foreach my $e (@instructions) { # Add up counts for all address that fall inside this instruction my $c1 = 0; my $c2 = 0; for (my $a = $e->[0]; $a lt $e->[4]; $a = AddressInc($a)) { $c1 += GetEntry($flat, $a); $c2 += GetEntry($cumulative, $a); } if ($html) { my $dis = sprintf(" %6s %6s \t\t%8s: %s ", HtmlPrintNumber($c1), HtmlPrintNumber($c2), UnparseAddress($offset, $e->[0]), CleanDisassembly($e->[3])); # Append the most specific source line associated with this instruction if (length($dis) < 80) { $dis .= (' ' x (80 - length($dis))) }; $dis = HtmlEscape($dis); my $f = $e->[5]; my $l = $e->[6]; if ($f ne $last_dis_filename) { $dis .= sprintf("%s:%d", HtmlEscape(CleanFileName($f)), $l); } elsif ($l ne $last_dis_linenum) { # De-emphasize the unchanged file name portion $dis .= sprintf("%s" . 
":%d", HtmlEscape(CleanFileName($f)), $l); } else { # De-emphasize the entire location $dis .= sprintf("%s:%d", HtmlEscape(CleanFileName($f)), $l); } $last_dis_filename = $f; $last_dis_linenum = $l; $running_disasm .= $dis; $running_disasm .= "\n"; } $running1 += $c1; $running2 += $c2; $total1 += $c1; $total2 += $c2; my $file = $e->[1]; my $line = $e->[2]; if (($file eq $filename) && ($line >= $firstline) && ($line <= $lastline)) { # Assign all accumulated samples to this line AddEntry($samples1, $line, $running1); AddEntry($samples2, $line, $running2); $running1 = 0; $running2 = 0; if ($html) { if ($line != $last_touched_line && $disasm{$line} ne '') { $disasm{$line} .= "\n"; } $disasm{$line} .= $running_disasm; $running_disasm = ''; $last_touched_line = $line; } } } # Assign any leftover samples to $lastline AddEntry($samples1, $lastline, $running1); AddEntry($samples2, $lastline, $running2); if ($html) { if ($lastline != $last_touched_line && $disasm{$lastline} ne '') { $disasm{$lastline} .= "\n"; } $disasm{$lastline} .= $running_disasm; } if ($html) { printf $output ( "

      <h1>%s</h1>%s\n<pre>\n" .
      "Total:%6s %6s (flat / cumulative %s)\n",
      HtmlEscape(ShortFunctionName($routine)),
      HtmlEscape(CleanFileName($filename)),
      Unparse($total1),
      Unparse($total2),
      Units());
  } else {
    printf $output (
      "ROUTINE ====================== %s in %s\n" .
      "%6s %6s Total %s (flat / cumulative)\n",
      ShortFunctionName($routine),
      CleanFileName($filename),
      Unparse($total1),
      Unparse($total2),
      Units());
  }
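  # Illustration (routine and file names are hypothetical): in text mode the
  # header printed above comes out roughly as
  #   ROUTINE ====================== DoWork in /src/worker.cc
  #      120    340 Total samples (flat / cumulative)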
  if (!open(FILE, "<$filename")) {
    print STDERR "$filename: $!\n";
    return 0;
  }
  my $l = 0;
  while (<FILE>) {
    s/\r//g;         # turn windows-looking lines into unix-looking lines
    $l++;
    if ($l >= $firstline - 5 &&
        (($l <= $oldlastline + 5) || ($l <= $lastline))) {
      chop;
      my $text = $_;
      if ($l == $firstline) { print $output $skip_marker; }
      my $n1 = GetEntry($samples1, $l);
      my $n2 = GetEntry($samples2, $l);
      if ($html) {
        # Emit a span that has one of the following classes:
        #    livesrc -- has samples
        #    deadsrc -- has disassembly, but with no samples
        #    nop     -- has no matching disassembly
        # Also emit an optional span containing disassembly.
        my $dis = $disasm{$l};
        my $asm = "";
        if (defined($dis) && $dis ne '') {
          $asm = "" . $dis . "";
        }
        my $source_class = (($n1 + $n2 > 0) 
                            ? "livesrc" 
                            : (($asm ne "") ? "deadsrc" : "nop"));
        printf $output (
          "%5d " .
          "%6s %6s %s%s\n",
          $l, $source_class,
          HtmlPrintNumber($n1),
          HtmlPrintNumber($n2),
          HtmlEscape($text),
          $asm);
      } else {
        printf $output(
          "%6s %6s %4d: %s\n",
          UnparseAlt($n1),
          UnparseAlt($n2),
          $l,
          $text);
      }
      if ($l == $lastline)  { print $output $skip_marker; }
    };
  }
  close(FILE);
  if ($html) {
    print $output "
\n"; } return 1; } # Return the source line for the specified file/linenumber. # Returns undef if not found. sub SourceLine { my $file = shift; my $line = shift; # Look in cache if (!defined($main::source_cache{$file})) { if (100 < scalar keys(%main::source_cache)) { # Clear the cache when it gets too big $main::source_cache = (); } # Read all lines from the file if (!open(FILE, "<$file")) { print STDERR "$file: $!\n"; $main::source_cache{$file} = []; # Cache the negative result return undef; } my $lines = []; push(@{$lines}, ""); # So we can use 1-based line numbers as indices while () { push(@{$lines}, $_); } close(FILE); # Save the lines in the cache $main::source_cache{$file} = $lines; } my $lines = $main::source_cache{$file}; if (($line < 0) || ($line > $#{$lines})) { return undef; } else { return $lines->[$line]; } } # Print disassembly for one routine with interspersed source if available sub PrintDisassembledFunction { my $prog = shift; my $offset = shift; my $routine = shift; my $flat = shift; my $cumulative = shift; my $start_addr = shift; my $end_addr = shift; my $total = shift; # Disassemble all instructions my @instructions = Disassemble($prog, $offset, $start_addr, $end_addr); # Make array of counts per instruction my @flat_count = (); my @cum_count = (); my $flat_total = 0; my $cum_total = 0; foreach my $e (@instructions) { # Add up counts for all address that fall inside this instruction my $c1 = 0; my $c2 = 0; for (my $a = $e->[0]; $a lt $e->[4]; $a = AddressInc($a)) { $c1 += GetEntry($flat, $a); $c2 += GetEntry($cumulative, $a); } push(@flat_count, $c1); push(@cum_count, $c2); $flat_total += $c1; $cum_total += $c2; } # Print header with total counts printf("ROUTINE ====================== %s\n" . "%6s %6s %s (flat, cumulative) %.1f%% of total\n", ShortFunctionName($routine), Unparse($flat_total), Unparse($cum_total), Units(), ($cum_total * 100.0) / $total); # Process instructions in order my $current_file = ""; for (my $i = 0; $i <= $#instructions; ) { my $e = $instructions[$i]; # Print the new file name whenever we switch files if ($e->[1] ne $current_file) { $current_file = $e->[1]; my $fname = $current_file; $fname =~ s|^\./||; # Trim leading "./" # Shorten long file names if (length($fname) >= 58) { $fname = "..." . substr($fname, -55); } printf("-------------------- %s\n", $fname); } # TODO: Compute range of lines to print together to deal with # small reorderings. 
my $first_line = $e->[2]; my $last_line = $first_line; my %flat_sum = (); my %cum_sum = (); for (my $l = $first_line; $l <= $last_line; $l++) { $flat_sum{$l} = 0; $cum_sum{$l} = 0; } # Find run of instructions for this range of source lines my $first_inst = $i; while (($i <= $#instructions) && ($instructions[$i]->[2] >= $first_line) && ($instructions[$i]->[2] <= $last_line)) { $e = $instructions[$i]; $flat_sum{$e->[2]} += $flat_count[$i]; $cum_sum{$e->[2]} += $cum_count[$i]; $i++; } my $last_inst = $i - 1; # Print source lines for (my $l = $first_line; $l <= $last_line; $l++) { my $line = SourceLine($current_file, $l); if (!defined($line)) { $line = "?\n"; next; } else { $line =~ s/^\s+//; } printf("%6s %6s %5d: %s", UnparseAlt($flat_sum{$l}), UnparseAlt($cum_sum{$l}), $l, $line); } # Print disassembly for (my $x = $first_inst; $x <= $last_inst; $x++) { my $e = $instructions[$x]; printf("%6s %6s %8s: %6s\n", UnparseAlt($flat_count[$x]), UnparseAlt($cum_count[$x]), UnparseAddress($offset, $e->[0]), CleanDisassembly($e->[3])); } } } # Print DOT graph sub PrintDot { my $prog = shift; my $symbols = shift; my $raw = shift; my $flat = shift; my $cumulative = shift; my $overall_total = shift; # Get total my $local_total = TotalProfile($flat); my $nodelimit = int($main::opt_nodefraction * $local_total); my $edgelimit = int($main::opt_edgefraction * $local_total); my $nodecount = $main::opt_nodecount; # Find nodes to include my @list = (sort { abs(GetEntry($cumulative, $b)) <=> abs(GetEntry($cumulative, $a)) || $a cmp $b } keys(%{$cumulative})); my $last = $nodecount - 1; if ($last > $#list) { $last = $#list; } while (($last >= 0) && (abs(GetEntry($cumulative, $list[$last])) <= $nodelimit)) { $last--; } if ($last < 0) { print STDERR "No nodes to print\n"; return 0; } if ($nodelimit > 0 || $edgelimit > 0) { printf STDERR ("Dropping nodes with <= %s %s; edges with <= %s abs(%s)\n", Unparse($nodelimit), Units(), Unparse($edgelimit), Units()); } # Open DOT output file my $output; my $escaped_dot = ShellEscape(@DOT); my $escaped_ps2pdf = ShellEscape(@PS2PDF); if ($main::opt_gv) { my $escaped_outfile = ShellEscape(TempName($main::next_tmpfile, "ps")); $output = "| $escaped_dot -Tps2 >$escaped_outfile"; } elsif ($main::opt_evince) { my $escaped_outfile = ShellEscape(TempName($main::next_tmpfile, "pdf")); $output = "| $escaped_dot -Tps2 | $escaped_ps2pdf - $escaped_outfile"; } elsif ($main::opt_ps) { $output = "| $escaped_dot -Tps2"; } elsif ($main::opt_pdf) { $output = "| $escaped_dot -Tps2 | $escaped_ps2pdf - -"; } elsif ($main::opt_web || $main::opt_svg) { # We need to post-process the SVG, so write to a temporary file always. my $escaped_outfile = ShellEscape(TempName($main::next_tmpfile, "svg")); $output = "| $escaped_dot -Tsvg >$escaped_outfile"; } elsif ($main::opt_gif) { $output = "| $escaped_dot -Tgif"; } else { $output = ">&STDOUT"; } open(DOT, $output) || error("$output: $!\n"); # Title printf DOT ("digraph \"%s; %s %s\" {\n", $prog, Unparse($overall_total), Units()); if ($main::opt_pdf) { # The output is more printable if we set the page size for dot. printf DOT ("size=\"8,11\"\n"); } printf DOT ("node [width=0.375,height=0.25];\n"); # Print legend printf DOT ("Legend [shape=box,fontsize=24,shape=plaintext," . 
"label=\"%s\\l%s\\l%s\\l%s\\l%s\\l\"];\n", $prog, sprintf("Total %s: %s", Units(), Unparse($overall_total)), sprintf("Focusing on: %s", Unparse($local_total)), sprintf("Dropped nodes with <= %s abs(%s)", Unparse($nodelimit), Units()), sprintf("Dropped edges with <= %s %s", Unparse($edgelimit), Units()) ); # Print nodes my %node = (); my $nextnode = 1; foreach my $a (@list[0..$last]) { # Pick font size my $f = GetEntry($flat, $a); my $c = GetEntry($cumulative, $a); my $fs = 8; if ($local_total > 0) { $fs = 8 + (50.0 * sqrt(abs($f * 1.0 / $local_total))); } $node{$a} = $nextnode++; my $sym = $a; $sym =~ s/\s+/\\n/g; $sym =~ s/::/\\n/g; # Extra cumulative info to print for non-leaves my $extra = ""; if ($f != $c) { $extra = sprintf("\\rof %s (%s)", Unparse($c), Percent($c, $local_total)); } my $style = ""; if ($main::opt_heapcheck) { if ($f > 0) { # make leak-causing nodes more visible (add a background) $style = ",style=filled,fillcolor=gray" } elsif ($f < 0) { # make anti-leak-causing nodes (which almost never occur) # stand out as well (triple border) $style = ",peripheries=3" } } printf DOT ("N%d [label=\"%s\\n%s (%s)%s\\r" . "\",shape=box,fontsize=%.1f%s];\n", $node{$a}, $sym, Unparse($f), Percent($f, $local_total), $extra, $fs, $style, ); } # Get edges and counts per edge my %edge = (); my $n; my $fullname_to_shortname_map = {}; FillFullnameToShortnameMap($symbols, $fullname_to_shortname_map); foreach my $k (keys(%{$raw})) { # TODO: omit low %age edges $n = $raw->{$k}; my @translated = TranslateStack($symbols, $fullname_to_shortname_map, $k); for (my $i = 1; $i <= $#translated; $i++) { my $src = $translated[$i]; my $dst = $translated[$i-1]; #next if ($src eq $dst); # Avoid self-edges? if (exists($node{$src}) && exists($node{$dst})) { my $edge_label = "$src\001$dst"; if (!exists($edge{$edge_label})) { $edge{$edge_label} = 0; } $edge{$edge_label} += $n; } } } # Print edges (process in order of decreasing counts) my %indegree = (); # Number of incoming edges added per node so far my %outdegree = (); # Number of outgoing edges added per node so far foreach my $e (sort { $edge{$b} <=> $edge{$a} } keys(%edge)) { my @x = split(/\001/, $e); $n = $edge{$e}; # Initialize degree of kept incoming and outgoing edges if necessary my $src = $x[0]; my $dst = $x[1]; if (!exists($outdegree{$src})) { $outdegree{$src} = 0; } if (!exists($indegree{$dst})) { $indegree{$dst} = 0; } my $keep; if ($indegree{$dst} == 0) { # Keep edge if needed for reachability $keep = 1; } elsif (abs($n) <= $edgelimit) { # Drop if we are below --edgefraction $keep = 0; } elsif ($outdegree{$src} >= $main::opt_maxdegree || $indegree{$dst} >= $main::opt_maxdegree) { # Keep limited number of in/out edges per node $keep = 0; } else { $keep = 1; } if ($keep) { $outdegree{$src}++; $indegree{$dst}++; # Compute line width based on edge count my $fraction = abs($local_total ? (3 * ($n / $local_total)) : 0); if ($fraction > 1) { $fraction = 1; } my $w = $fraction * 2; if ($w < 1 && ($main::opt_web || $main::opt_svg)) { # SVG output treats line widths < 1 poorly. 
$w = 1; } # Dot sometimes segfaults if given edge weights that are too large, so # we cap the weights at a large value my $edgeweight = abs($n) ** 0.7; if ($edgeweight > 100000) { $edgeweight = 100000; } $edgeweight = int($edgeweight); my $style = sprintf("setlinewidth(%f)", $w); if ($x[1] =~ m/\(inline\)/) { $style .= ",dashed"; } # Use a slightly squashed function of the edge count as the weight printf DOT ("N%s -> N%s [label=%s, weight=%d, style=\"%s\"];\n", $node{$x[0]}, $node{$x[1]}, Unparse($n), $edgeweight, $style); } } print DOT ("}\n"); close(DOT); if ($main::opt_web || $main::opt_svg) { # Rewrite SVG to be more usable inside web browser. RewriteSvg(TempName($main::next_tmpfile, "svg")); } return 1; } sub RewriteSvg { my $svgfile = shift; open(SVG, $svgfile) || die "open temp svg: $!"; my @svg = ; close(SVG); unlink $svgfile; my $svg = join('', @svg); # Dot's SVG output is # # # # ... # # # # Change it to # # # $svg_javascript # # # ... # # # # Fix width, height; drop viewBox. $svg =~ s/(?s) above first my $svg_javascript = SvgJavascript(); my $viewport = "\n"; $svg =~ s/ above . $svg =~ s/(.*)(<\/svg>)/$1<\/g>$2/; $svg =~ s/$svgfile") || die "open $svgfile: $!"; print SVG $svg; close(SVG); } } sub SvgJavascript { return <<'EOF'; EOF } # Provides a map from fullname to shortname for cases where the # shortname is ambiguous. The symlist has both the fullname and # shortname for all symbols, which is usually fine, but sometimes -- # such as overloaded functions -- two different fullnames can map to # the same shortname. In that case, we use the address of the # function to disambiguate the two. This function fills in a map that # maps fullnames to modified shortnames in such cases. If a fullname # is not present in the map, the 'normal' shortname provided by the # symlist is the appropriate one to use. sub FillFullnameToShortnameMap { my $symbols = shift; my $fullname_to_shortname_map = shift; my $shortnames_seen_once = {}; my $shortnames_seen_more_than_once = {}; foreach my $symlist (values(%{$symbols})) { # TODO(csilvers): deal with inlined symbols too. my $shortname = $symlist->[0]; my $fullname = $symlist->[2]; if ($fullname !~ /<[0-9a-fA-F]+>$/) { # fullname doesn't end in an address next; # the only collisions we care about are when addresses differ } if (defined($shortnames_seen_once->{$shortname}) && $shortnames_seen_once->{$shortname} ne $fullname) { $shortnames_seen_more_than_once->{$shortname} = 1; } else { $shortnames_seen_once->{$shortname} = $fullname; } } foreach my $symlist (values(%{$symbols})) { my $shortname = $symlist->[0]; my $fullname = $symlist->[2]; # TODO(csilvers): take in a list of addresses we care about, and only # store in the map if $symlist->[1] is in that list. Saves space. next if defined($fullname_to_shortname_map->{$fullname}); if (defined($shortnames_seen_more_than_once->{$shortname})) { if ($fullname =~ /<0*([^>]*)>$/) { # fullname has address at end of it $fullname_to_shortname_map->{$fullname} = "$shortname\@$1"; } } } } # Return a small number that identifies the argument. # Multiple calls with the same argument will return the same number. # Calls with different arguments will return different numbers. 
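# Illustration (keys are hypothetical): the first key passed to ShortIdFor
# below gets id 1, a different key gets 2, and repeating the first key returns
# 1 again; TranslateStack uses this to synthesize distinct "Run#<id>" frames
# for Callback::Run, one per caller address.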
sub ShortIdFor { my $key = shift; my $id = $main::uniqueid{$key}; if (!defined($id)) { $id = keys(%main::uniqueid) + 1; $main::uniqueid{$key} = $id; } return $id; } # Translate a stack of addresses into a stack of symbols sub TranslateStack { my $symbols = shift; my $fullname_to_shortname_map = shift; my $k = shift; my @addrs = split(/\n/, $k); my @result = (); for (my $i = 0; $i <= $#addrs; $i++) { my $a = $addrs[$i]; # Skip large addresses since they sometimes show up as fake entries on RH9 if (length($a) > 8 && $a gt "7fffffffffffffff") { next; } if ($main::opt_disasm || $main::opt_list) { # We want just the address for the key push(@result, $a); next; } my $symlist = $symbols->{$a}; if (!defined($symlist)) { $symlist = [$a, "", $a]; } # We can have a sequence of symbols for a particular entry # (more than one symbol in the case of inlining). Callers # come before callees in symlist, so walk backwards since # the translated stack should contain callees before callers. for (my $j = $#{$symlist}; $j >= 2; $j -= 3) { my $func = $symlist->[$j-2]; my $fileline = $symlist->[$j-1]; my $fullfunc = $symlist->[$j]; if (defined($fullname_to_shortname_map->{$fullfunc})) { $func = $fullname_to_shortname_map->{$fullfunc}; } if ($j > 2) { $func = "$func (inline)"; } # Do not merge nodes corresponding to Callback::Run since that # causes confusing cycles in dot display. Instead, we synthesize # a unique name for this frame per caller. if ($func =~ m/Callback.*::Run$/) { my $caller = ($i > 0) ? $addrs[$i-1] : 0; $func = "Run#" . ShortIdFor($caller); } if ($main::opt_addresses) { push(@result, "$a $func $fileline"); } elsif ($main::opt_lines) { if ($func eq '??' && $fileline eq '??:0') { push(@result, "$a"); } else { push(@result, "$func $fileline"); } } elsif ($main::opt_functions) { if ($func eq '??') { push(@result, "$a"); } else { push(@result, $func); } } elsif ($main::opt_files) { if ($fileline eq '??:0' || $fileline eq '') { push(@result, "$a"); } else { my $f = $fileline; $f =~ s/:\d+$//; push(@result, $f); } } else { push(@result, $a); last; # Do not print inlined info } } } # print join(",", @addrs), " => ", join(",", @result), "\n"; return @result; } # Generate percent string for a number and a total sub Percent { my $num = shift; my $tot = shift; if ($tot != 0) { return sprintf("%.1f%%", $num * 100.0 / $tot); } else { return ($num == 0) ? "nan" : (($num > 0) ? "+inf" : "-inf"); } } # Generate pretty-printed form of number sub Unparse { my $num = shift; if ($main::profile_type eq 'heap' || $main::profile_type eq 'growth') { if ($main::opt_inuse_objects || $main::opt_alloc_objects) { return sprintf("%d", $num); } else { if ($main::opt_show_bytes) { return sprintf("%d", $num); } else { return sprintf("%.1f", $num / 1048576.0); } } } elsif ($main::profile_type eq 'contention' && !$main::opt_contentions) { return sprintf("%.3f", $num / 1e9); # Convert nanoseconds to seconds } else { return sprintf("%d", $num); } } # Alternate pretty-printed form: 0 maps to "." 
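# Illustrative values (made up): UnparseAlt below prints "." for a zero count
# and otherwise defers to Unparse, which for heap/growth profiles reports
# megabytes unless --show_bytes or an object-count option is given (1048576
# bytes prints as "1.0"), for contention profiles converts nanoseconds to
# seconds (2500000000 prints as "2.500"), and for CPU profiles prints the raw
# sample count.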
sub UnparseAlt { my $num = shift; if ($num == 0) { return "."; } else { return Unparse($num); } } # Alternate pretty-printed form: 0 maps to "" sub HtmlPrintNumber { my $num = shift; if ($num == 0) { return ""; } else { return Unparse($num); } } # Return output units sub Units { if ($main::profile_type eq 'heap' || $main::profile_type eq 'growth') { if ($main::opt_inuse_objects || $main::opt_alloc_objects) { return "objects"; } else { if ($main::opt_show_bytes) { return "B"; } else { return "MB"; } } } elsif ($main::profile_type eq 'contention' && !$main::opt_contentions) { return "seconds"; } else { return "samples"; } } ##### Profile manipulation code ##### # Generate flattened profile: # If count is charged to stack [a,b,c,d], in generated profile, # it will be charged to [a] sub FlatProfile { my $profile = shift; my $result = {}; foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @addrs = split(/\n/, $k); if ($#addrs >= 0) { AddEntry($result, $addrs[0], $count); } } return $result; } # Generate cumulative profile: # If count is charged to stack [a,b,c,d], in generated profile, # it will be charged to [a], [b], [c], [d] sub CumulativeProfile { my $profile = shift; my $result = {}; foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @addrs = split(/\n/, $k); foreach my $a (@addrs) { AddEntry($result, $a, $count); } } return $result; } # If the second-youngest PC on the stack is always the same, returns # that pc. Otherwise, returns undef. sub IsSecondPcAlwaysTheSame { my $profile = shift; my $second_pc = undef; foreach my $k (keys(%{$profile})) { my @addrs = split(/\n/, $k); if ($#addrs < 1) { return undef; } if (not defined $second_pc) { $second_pc = $addrs[1]; } else { if ($second_pc ne $addrs[1]) { return undef; } } } return $second_pc; } sub ExtractSymbolLocation { my $symbols = shift; my $address = shift; # 'addr2line' outputs "??:0" for unknown locations; we do the # same to be consistent. my $location = "??:0:unknown"; if (exists $symbols->{$address}) { my $file = $symbols->{$address}->[1]; if ($file eq "?") { $file = "??:0" } $location = $file . ":" . $symbols->{$address}->[0]; } return $location; } # Extracts a graph of calls. 
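# Illustration (source locations are hypothetical): ExtractCalls below keys
# the graph by "caller -> callee" strings built via ExtractSymbolLocation, so
# a stack whose innermost two frames resolve to util.cc:55:DoWork called from
# main.cc:10:main adds its count to both
#   "util.cc:55:DoWork"                      (the destination itself)
#   "main.cc:10:main -> util.cc:55:DoWork"   (the call edge)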
sub ExtractCalls { my $symbols = shift; my $profile = shift; my $calls = {}; while( my ($stack_trace, $count) = each %$profile ) { my @address = split(/\n/, $stack_trace); my $destination = ExtractSymbolLocation($symbols, $address[0]); AddEntry($calls, $destination, $count); for (my $i = 1; $i <= $#address; $i++) { my $source = ExtractSymbolLocation($symbols, $address[$i]); my $call = "$source -> $destination"; AddEntry($calls, $call, $count); $destination = $source; } } return $calls; } sub RemoveUninterestingFrames { my $symbols = shift; my $profile = shift; # List of function names to skip my %skip = (); my $skip_regexp = 'NOMATCH'; if ($main::profile_type eq 'heap' || $main::profile_type eq 'growth') { foreach my $name ('calloc', 'cfree', 'malloc', 'free', 'memalign', 'posix_memalign', 'aligned_alloc', 'pvalloc', 'valloc', 'realloc', 'mallocx', # jemalloc 'rallocx', # jemalloc 'xallocx', # jemalloc 'dallocx', # jemalloc 'tc_calloc', 'tc_cfree', 'tc_malloc', 'tc_free', 'tc_memalign', 'tc_posix_memalign', 'tc_pvalloc', 'tc_valloc', 'tc_realloc', 'tc_new', 'tc_delete', 'tc_newarray', 'tc_deletearray', 'tc_new_nothrow', 'tc_newarray_nothrow', 'do_malloc', '::do_malloc', # new name -- got moved to an unnamed ns '::do_malloc_or_cpp_alloc', 'DoSampledAllocation', 'simple_alloc::allocate', '__malloc_alloc_template::allocate', '__builtin_delete', '__builtin_new', '__builtin_vec_delete', '__builtin_vec_new', 'operator new', 'operator new[]', # The entry to our memory-allocation routines on OS X 'malloc_zone_malloc', 'malloc_zone_calloc', 'malloc_zone_valloc', 'malloc_zone_realloc', 'malloc_zone_memalign', 'malloc_zone_free', # These mark the beginning/end of our custom sections '__start_google_malloc', '__stop_google_malloc', '__start_malloc_hook', '__stop_malloc_hook') { $skip{$name} = 1; $skip{"_" . $name} = 1; # Mach (OS X) adds a _ prefix to everything } # TODO: Remove TCMalloc once everything has been # moved into the tcmalloc:: namespace and we have flushed # old code out of the system. $skip_regexp = "TCMalloc|^tcmalloc::"; } elsif ($main::profile_type eq 'contention') { foreach my $vname ('base::RecordLockProfileData', 'base::SubmitMutexProfileData', 'base::SubmitSpinLockProfileData', 'Mutex::Unlock', 'Mutex::UnlockSlow', 'Mutex::ReaderUnlock', 'MutexLock::~MutexLock', 'SpinLock::Unlock', 'SpinLock::SlowUnlock', 'SpinLockHolder::~SpinLockHolder') { $skip{$vname} = 1; } } elsif ($main::profile_type eq 'cpu') { # Drop signal handlers used for CPU profile collection # TODO(dpeng): this should not be necessary; it's taken # care of by the general 2nd-pc mechanism below. foreach my $name ('ProfileData::Add', # historical 'ProfileData::prof_handler', # historical 'CpuProfiler::prof_handler', '__FRAME_END__', '__pthread_sighandler', '__restore') { $skip{$name} = 1; } } else { # Nothing skipped for unknown types } if ($main::profile_type eq 'cpu') { # If all the second-youngest program counters are the same, # this STRONGLY suggests that it is an artifact of measurement, # i.e., stack frames pushed by the CPU profiler signal handler. # Hence, we delete them. # (The topmost PC is read from the signal structure, not from # the stack, so it does not get involved.) 
while (my $second_pc = IsSecondPcAlwaysTheSame($profile)) { my $result = {}; my $func = ''; if (exists($symbols->{$second_pc})) { $second_pc = $symbols->{$second_pc}->[0]; } print STDERR "Removing $second_pc from all stack traces.\n"; foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @addrs = split(/\n/, $k); splice @addrs, 1, 1; my $reduced_path = join("\n", @addrs); AddEntry($result, $reduced_path, $count); } $profile = $result; } } my $result = {}; foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @addrs = split(/\n/, $k); my @path = (); foreach my $a (@addrs) { if (exists($symbols->{$a})) { my $func = $symbols->{$a}->[0]; if ($skip{$func} || ($func =~ m/$skip_regexp/)) { # Throw away the portion of the backtrace seen so far, under the # assumption that previous frames were for functions internal to the # allocator. @path = (); next; } } push(@path, $a); } my $reduced_path = join("\n", @path); AddEntry($result, $reduced_path, $count); } return $result; } # Reduce profile to granularity given by user sub ReduceProfile { my $symbols = shift; my $profile = shift; my $result = {}; my $fullname_to_shortname_map = {}; FillFullnameToShortnameMap($symbols, $fullname_to_shortname_map); foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @translated = TranslateStack($symbols, $fullname_to_shortname_map, $k); my @path = (); my %seen = (); $seen{''} = 1; # So that empty keys are skipped foreach my $e (@translated) { # To avoid double-counting due to recursion, skip a stack-trace # entry if it has already been seen if (!$seen{$e}) { $seen{$e} = 1; push(@path, $e); } } my $reduced_path = join("\n", @path); AddEntry($result, $reduced_path, $count); } return $result; } # Does the specified symbol array match the regexp? 
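# Illustration (regexps are hypothetical): SymbolMatches below is what the
# --focus and --ignore filters rely on; the pattern is tested against both the
# short function name and the file:line entry of each symbol triple, so either
# "--focus=DoWork" or "--focus=worker\.cc:55" would select frames resolving to
# DoWork at worker.cc:55.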
sub SymbolMatches { my $sym = shift; my $re = shift; if (defined($sym)) { for (my $i = 0; $i < $#{$sym}; $i += 3) { if ($sym->[$i] =~ m/$re/ || $sym->[$i+1] =~ m/$re/) { return 1; } } } return 0; } # Focus only on paths involving specified regexps sub FocusProfile { my $symbols = shift; my $profile = shift; my $focus = shift; my $result = {}; foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @addrs = split(/\n/, $k); foreach my $a (@addrs) { # Reply if it matches either the address/shortname/fileline if (($a =~ m/$focus/) || SymbolMatches($symbols->{$a}, $focus)) { AddEntry($result, $k, $count); last; } } } return $result; } # Focus only on paths not involving specified regexps sub IgnoreProfile { my $symbols = shift; my $profile = shift; my $ignore = shift; my $result = {}; foreach my $k (keys(%{$profile})) { my $count = $profile->{$k}; my @addrs = split(/\n/, $k); my $matched = 0; foreach my $a (@addrs) { # Reply if it matches either the address/shortname/fileline if (($a =~ m/$ignore/) || SymbolMatches($symbols->{$a}, $ignore)) { $matched = 1; last; } } if (!$matched) { AddEntry($result, $k, $count); } } return $result; } # Get total count in profile sub TotalProfile { my $profile = shift; my $result = 0; foreach my $k (keys(%{$profile})) { $result += $profile->{$k}; } return $result; } # Add A to B sub AddProfile { my $A = shift; my $B = shift; my $R = {}; # add all keys in A foreach my $k (keys(%{$A})) { my $v = $A->{$k}; AddEntry($R, $k, $v); } # add all keys in B foreach my $k (keys(%{$B})) { my $v = $B->{$k}; AddEntry($R, $k, $v); } return $R; } # Merges symbol maps sub MergeSymbols { my $A = shift; my $B = shift; my $R = {}; foreach my $k (keys(%{$A})) { $R->{$k} = $A->{$k}; } if (defined($B)) { foreach my $k (keys(%{$B})) { $R->{$k} = $B->{$k}; } } return $R; } # Add A to B sub AddPcs { my $A = shift; my $B = shift; my $R = {}; # add all keys in A foreach my $k (keys(%{$A})) { $R->{$k} = 1 } # add all keys in B foreach my $k (keys(%{$B})) { $R->{$k} = 1 } return $R; } # Subtract B from A sub SubtractProfile { my $A = shift; my $B = shift; my $R = {}; foreach my $k (keys(%{$A})) { my $v = $A->{$k} - GetEntry($B, $k); if ($v < 0 && $main::opt_drop_negative) { $v = 0; } AddEntry($R, $k, $v); } if (!$main::opt_drop_negative) { # Take care of when subtracted profile has more entries foreach my $k (keys(%{$B})) { if (!exists($A->{$k})) { AddEntry($R, $k, 0 - $B->{$k}); } } } return $R; } # Get entry from profile; zero if not present sub GetEntry { my $profile = shift; my $k = shift; if (exists($profile->{$k})) { return $profile->{$k}; } else { return 0; } } # Add entry to specified profile sub AddEntry { my $profile = shift; my $k = shift; my $n = shift; if (!exists($profile->{$k})) { $profile->{$k} = 0; } $profile->{$k} += $n; } # Add a stack of entries to specified profile, and add them to the $pcs # list. 
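# Illustration (addresses are hypothetical): AddEntries below turns a
# whitespace-separated stack such as "0x40a1 0x40b2" into zero-extended PCs,
# marks each one in $pcs, and charges the count to the newline-joined stack
# key in the profile.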
sub AddEntries { my $profile = shift; my $pcs = shift; my $stack = shift; my $count = shift; my @k = (); foreach my $e (split(/\s+/, $stack)) { my $pc = HexExtend($e); $pcs->{$pc} = 1; push @k, $pc; } AddEntry($profile, (join "\n", @k), $count); } ##### Code to profile a server dynamically ##### sub CheckSymbolPage { my $url = SymbolPageURL(); my $command = ShellEscape(@URL_FETCHER, $url); open(SYMBOL, "$command |") or error($command); my $line = ; $line =~ s/\r//g; # turn windows-looking lines into unix-looking lines close(SYMBOL); unless (defined($line)) { error("$url doesn't exist\n"); } if ($line =~ /^num_symbols:\s+(\d+)$/) { if ($1 == 0) { error("Stripped binary. No symbols available.\n"); } } else { error("Failed to get the number of symbols from $url\n"); } } sub IsProfileURL { my $profile_name = shift; if (-f $profile_name) { printf STDERR "Using local file $profile_name.\n"; return 0; } return 1; } sub ParseProfileURL { my $profile_name = shift; if (!defined($profile_name) || $profile_name eq "") { return (); } # Split profile URL - matches all non-empty strings, so no test. $profile_name =~ m,^(https?://)?([^/]+)(.*?)(/|$PROFILES)?$,; my $proto = $1 || "http://"; my $hostport = $2; my $prefix = $3; my $profile = $4 || "/"; my $host = $hostport; $host =~ s/:.*//; my $baseurl = "$proto$hostport$prefix"; return ($host, $baseurl, $profile); } # We fetch symbols from the first profile argument. sub SymbolPageURL { my ($host, $baseURL, $path) = ParseProfileURL($main::pfile_args[0]); return "$baseURL$SYMBOL_PAGE"; } sub FetchProgramName() { my ($host, $baseURL, $path) = ParseProfileURL($main::pfile_args[0]); my $url = "$baseURL$PROGRAM_NAME_PAGE"; my $command_line = ShellEscape(@URL_FETCHER, $url); open(CMDLINE, "$command_line |") or error($command_line); my $cmdline = ; $cmdline =~ s/\r//g; # turn windows-looking lines into unix-looking lines close(CMDLINE); error("Failed to get program name from $url\n") unless defined($cmdline); $cmdline =~ s/\x00.+//; # Remove argv[1] and latters. $cmdline =~ s!\n!!g; # Remove LFs. return $cmdline; } # Gee, curl's -L (--location) option isn't reliable at least # with its 7.12.3 version. Curl will forget to post data if # there is a redirection. This function is a workaround for # curl. Redirection happens on borg hosts. sub ResolveRedirectionForCurl { my $url = shift; my $command_line = ShellEscape(@URL_FETCHER, "--head", $url); open(CMDLINE, "$command_line |") or error($command_line); while () { s/\r//g; # turn windows-looking lines into unix-looking lines if (/^Location: (.*)/) { $url = $1; } } close(CMDLINE); return $url; } # Add a timeout flat to URL_FETCHER. Returns a new list. sub AddFetchTimeout { my $timeout = shift; my @fetcher = shift; if (defined($timeout)) { if (join(" ", @fetcher) =~ m/\bcurl -s/) { push(@fetcher, "--max-time", sprintf("%d", $timeout)); } elsif (join(" ", @fetcher) =~ m/\brpcget\b/) { push(@fetcher, sprintf("--deadline=%d", $timeout)); } } return @fetcher; } # Reads a symbol map from the file handle name given as $1, returning # the resulting symbol map. Also processes variables relating to symbols. # Currently, the only variable processed is 'binary=' which updates # $main::prog to have the correct program name. sub ReadSymbols { my $in = shift; my $map = {}; while (<$in>) { s/\r//g; # turn windows-looking lines into unix-looking lines # Removes all the leading zeroes from the symbols, see comment below. 
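    # Illustrative sketch (values are hypothetical): the lines parsed below
    # look like
    #   binary=/usr/local/bin/myprog
    #   0x00000000004010a0 DoWork
    #   ---
    # i.e. optional "name=value" settings, "0x<addr> <symbol>" pairs whose
    # leading zeroes are ignored, and "---" terminating the symbol section.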
if (m/^0x0*([0-9a-f]+)\s+(.+)/) { $map->{$1} = $2; } elsif (m/^---/) { last; } elsif (m/^([a-z][^=]*)=(.*)$/ ) { my ($variable, $value) = ($1, $2); for ($variable, $value) { s/^\s+//; s/\s+$//; } if ($variable eq "binary") { if ($main::prog ne $UNKNOWN_BINARY && $main::prog ne $value) { printf STDERR ("Warning: Mismatched binary name '%s', using '%s'.\n", $main::prog, $value); } $main::prog = $value; } else { printf STDERR ("Ignoring unknown variable in symbols list: " . "'%s' = '%s'\n", $variable, $value); } } } return $map; } # Fetches and processes symbols to prepare them for use in the profile output # code. If the optional 'symbol_map' arg is not given, fetches symbols from # $SYMBOL_PAGE for all PC values found in profile. Otherwise, the raw symbols # are assumed to have already been fetched into 'symbol_map' and are simply # extracted and processed. sub FetchSymbols { my $pcset = shift; my $symbol_map = shift; my %seen = (); my @pcs = grep { !$seen{$_}++ } keys(%$pcset); # uniq if (!defined($symbol_map)) { my $post_data = join("+", sort((map {"0x" . "$_"} @pcs))); open(POSTFILE, ">$main::tmpfile_sym"); print POSTFILE $post_data; close(POSTFILE); my $url = SymbolPageURL(); my $command_line; if (join(" ", @URL_FETCHER) =~ m/\bcurl -s/) { $url = ResolveRedirectionForCurl($url); $command_line = ShellEscape(@URL_FETCHER, "-d", "\@$main::tmpfile_sym", $url); } else { $command_line = (ShellEscape(@URL_FETCHER, "--post", $url) . " < " . ShellEscape($main::tmpfile_sym)); } # We use c++filt in case $SYMBOL_PAGE gives us mangled symbols. my $escaped_cppfilt = ShellEscape($obj_tool_map{"c++filt"}); open(SYMBOL, "$command_line | $escaped_cppfilt |") or error($command_line); $symbol_map = ReadSymbols(*SYMBOL{IO}); close(SYMBOL); } my $symbols = {}; foreach my $pc (@pcs) { my $fullname; # For 64 bits binaries, symbols are extracted with 8 leading zeroes. # Then /symbol reads the long symbols in as uint64, and outputs # the result with a "0x%08llx" format which get rid of the zeroes. # By removing all the leading zeroes in both $pc and the symbols from # /symbol, the symbols match and are retrievable from the map. my $shortpc = $pc; $shortpc =~ s/^0*//; # Each line may have a list of names, which includes the function # and also other functions it has inlined. They are separated (in # PrintSymbolizedProfile), by --, which is illegal in function names. my $fullnames; if (defined($symbol_map->{$shortpc})) { $fullnames = $symbol_map->{$shortpc}; } else { $fullnames = "0x" . 
$pc; # Just use addresses } my $sym = []; $symbols->{$pc} = $sym; foreach my $fullname (split("--", $fullnames)) { my $name = ShortFunctionName($fullname); push(@{$sym}, $name, "?", $fullname); } } return $symbols; } sub BaseName { my $file_name = shift; $file_name =~ s!^.*/!!; # Remove directory name return $file_name; } sub MakeProfileBaseName { my ($binary_name, $profile_name) = @_; my ($host, $baseURL, $path) = ParseProfileURL($profile_name); my $binary_shortname = BaseName($binary_name); return sprintf("%s.%s.%s", $binary_shortname, $main::op_time, $host); } sub FetchDynamicProfile { my $binary_name = shift; my $profile_name = shift; my $fetch_name_only = shift; my $encourage_patience = shift; if (!IsProfileURL($profile_name)) { return $profile_name; } else { my ($host, $baseURL, $path) = ParseProfileURL($profile_name); if ($path eq "" || $path eq "/") { # Missing type specifier defaults to cpu-profile $path = $PROFILE_PAGE; } my $profile_file = MakeProfileBaseName($binary_name, $profile_name); my $url = "$baseURL$path"; my $fetch_timeout = undef; if ($path =~ m/$PROFILE_PAGE|$PMUPROFILE_PAGE/) { if ($path =~ m/[?]/) { $url .= "&"; } else { $url .= "?"; } $url .= sprintf("seconds=%d", $main::opt_seconds); $fetch_timeout = $main::opt_seconds * 1.01 + 60; } else { # For non-CPU profiles, we add a type-extension to # the target profile file name. my $suffix = $path; $suffix =~ s,/,.,g; $profile_file .= $suffix; } my $profile_dir = $ENV{"PPROF_TMPDIR"} || ($ENV{HOME} . "/pprof"); if (! -d $profile_dir) { mkdir($profile_dir) || die("Unable to create profile directory $profile_dir: $!\n"); } my $tmp_profile = "$profile_dir/.tmp.$profile_file"; my $real_profile = "$profile_dir/$profile_file"; if ($fetch_name_only > 0) { return $real_profile; } my @fetcher = AddFetchTimeout($fetch_timeout, @URL_FETCHER); my $cmd = ShellEscape(@fetcher, $url) . " > " . 
ShellEscape($tmp_profile); if ($path =~ m/$PROFILE_PAGE|$PMUPROFILE_PAGE|$CENSUSPROFILE_PAGE/){ print STDERR "Gathering CPU profile from $url for $main::opt_seconds seconds to\n ${real_profile}\n"; if ($encourage_patience) { print STDERR "Be patient...\n"; } } else { print STDERR "Fetching $path profile from $url to\n ${real_profile}\n"; } (system($cmd) == 0) || error("Failed to get profile: $cmd: $!\n"); (system("mv", $tmp_profile, $real_profile) == 0) || error("Unable to rename profile\n"); print STDERR "Wrote profile to $real_profile\n"; $main::collected_profile = $real_profile; return $main::collected_profile; } } # Collect profiles in parallel sub FetchDynamicProfiles { my $items = scalar(@main::pfile_args); my $levels = log($items) / log(2); if ($items == 1) { $main::profile_files[0] = FetchDynamicProfile($main::prog, $main::pfile_args[0], 0, 1); } else { # math rounding issues if ((2 ** $levels) < $items) { $levels++; } my $count = scalar(@main::pfile_args); for (my $i = 0; $i < $count; $i++) { $main::profile_files[$i] = FetchDynamicProfile($main::prog, $main::pfile_args[$i], 1, 0); } print STDERR "Fetching $count profiles, Be patient...\n"; FetchDynamicProfilesRecurse($levels, 0, 0); $main::collected_profile = join(" \\\n ", @main::profile_files); } } # Recursively fork a process to get enough processes # collecting profiles sub FetchDynamicProfilesRecurse { my $maxlevel = shift; my $level = shift; my $position = shift; if (my $pid = fork()) { $position = 0 | ($position << 1); TryCollectProfile($maxlevel, $level, $position); wait; } else { $position = 1 | ($position << 1); TryCollectProfile($maxlevel, $level, $position); cleanup(); exit(0); } } # Collect a single profile sub TryCollectProfile { my $maxlevel = shift; my $level = shift; my $position = shift; if ($level >= ($maxlevel - 1)) { if ($position < scalar(@main::pfile_args)) { FetchDynamicProfile($main::prog, $main::pfile_args[$position], 0, 0); } } else { FetchDynamicProfilesRecurse($maxlevel, $level+1, $position); } } ##### Parsing code ##### # Provide a small streaming-read module to handle very large # cpu-profile files. Stream in chunks along a sliding window. # Provides an interface to get one 'slot', correctly handling # endian-ness differences. A slot is one 32-bit or 64-bit word # (depending on the input profile). We tell endianness and bit-size # for the profile by looking at the first 8 bytes: in cpu profiles, # the second slot is always 3 (we'll accept anything that's not 0). BEGIN { package CpuProfileStream; sub new { my ($class, $file, $fname) = @_; my $self = { file => $file, base => 0, stride => 512 * 1024, # must be a multiple of bitsize/8 slots => [], unpack_code => "", # N for big-endian, V for little perl_is_64bit => 1, # matters if profile is 64-bit }; bless $self, $class; # Let unittests adjust the stride if ($main::opt_test_stride > 0) { $self->{stride} = $main::opt_test_stride; } # Read the first two slots to figure out bitsize and endianness. my $slots = $self->{slots}; my $str; read($self->{file}, $str, 8); # Set the global $address_length based on what we see here. # 8 is 32-bit (8 hexadecimal chars); 16 is 64-bit (16 hexadecimal chars). $address_length = ($str eq (chr(0)x8)) ? 16 : 8; if ($address_length == 8) { if (substr($str, 6, 2) eq chr(0)x2) { $self->{unpack_code} = 'V'; # Little-endian. } elsif (substr($str, 4, 2) eq chr(0)x2) { $self->{unpack_code} = 'N'; # Big-endian } else { ::error("$fname: header size >= 2**16\n"); } @$slots = unpack($self->{unpack_code} . 
"*", $str); } else { # If we're a 64-bit profile, check if we're a 64-bit-capable # perl. Otherwise, each slot will be represented as a float # instead of an int64, losing precision and making all the # 64-bit addresses wrong. We won't complain yet, but will # later if we ever see a value that doesn't fit in 32 bits. my $has_q = 0; eval { $has_q = pack("Q", "1") ? 1 : 1; }; if (!$has_q) { $self->{perl_is_64bit} = 0; } read($self->{file}, $str, 8); if (substr($str, 4, 4) eq chr(0)x4) { # We'd love to use 'Q', but it's a) not universal, b) not endian-proof. $self->{unpack_code} = 'V'; # Little-endian. } elsif (substr($str, 0, 4) eq chr(0)x4) { $self->{unpack_code} = 'N'; # Big-endian } else { ::error("$fname: header size >= 2**32\n"); } my @pair = unpack($self->{unpack_code} . "*", $str); # Since we know one of the pair is 0, it's fine to just add them. @$slots = (0, $pair[0] + $pair[1]); } return $self; } # Load more data when we access slots->get(X) which is not yet in memory. sub overflow { my ($self) = @_; my $slots = $self->{slots}; $self->{base} += $#$slots + 1; # skip over data we're replacing my $str; read($self->{file}, $str, $self->{stride}); if ($address_length == 8) { # the 32-bit case # This is the easy case: unpack provides 32-bit unpacking primitives. @$slots = unpack($self->{unpack_code} . "*", $str); } else { # We need to unpack 32 bits at a time and combine. my @b32_values = unpack($self->{unpack_code} . "*", $str); my @b64_values = (); for (my $i = 0; $i < $#b32_values; $i += 2) { # TODO(csilvers): if this is a 32-bit perl, the math below # could end up in a too-large int, which perl will promote # to a double, losing necessary precision. Deal with that. # Right now, we just die. my ($lo, $hi) = ($b32_values[$i], $b32_values[$i+1]); if ($self->{unpack_code} eq 'N') { # big-endian ($lo, $hi) = ($hi, $lo); } my $value = $lo + $hi * (2**32); if (!$self->{perl_is_64bit} && # check value is exactly represented (($value % (2**32)) != $lo || int($value / (2**32)) != $hi)) { ::error("Need a 64-bit perl to process this 64-bit profile.\n"); } push(@b64_values, $value); } @$slots = @b64_values; } } # Access the i-th long in the file (logically), or -1 at EOF. sub get { my ($self, $idx) = @_; my $slots = $self->{slots}; while ($#$slots >= 0) { if ($idx < $self->{base}) { # The only time we expect a reference to $slots[$i - something] # after referencing $slots[$i] is reading the very first header. # Since $stride > |header|, that shouldn't cause any lookback # errors. And everything after the header is sequential. print STDERR "Unexpected look-back reading CPU profile"; return -1; # shrug, don't know what better to return } elsif ($idx > $self->{base} + $#$slots) { $self->overflow(); } else { return $slots->[$idx - $self->{base}]; } } # If we get here, $slots is [], which means we've reached EOF return -1; # unique since slots is supposed to hold unsigned numbers } } # Reads the top, 'header' section of a profile, and returns the last # line of the header, commonly called a 'header line'. The header # section of a profile consists of zero or more 'command' lines that # are instructions to pprof, which pprof executes when reading the # header. All 'command' lines start with a %. After the command # lines is the 'header line', which is a profile-specific line that # indicates what type of profile it is, and perhaps other global # information about the profile. 
For instance, here's a header line # for a heap profile: # heap profile: 53: 38236 [ 5525: 1284029] @ heapprofile # For historical reasons, the CPU profile does not contain a text- # readable header line. If the profile looks like a CPU profile, # this function returns "". If no header line could be found, this # function returns undef. # # The following commands are recognized: # %warn -- emit the rest of this line to stderr, prefixed by 'WARNING:' # # The input file should be in binmode. sub ReadProfileHeader { local *PROFILE = shift; my $firstchar = ""; my $line = ""; read(PROFILE, $firstchar, 1); seek(PROFILE, -1, 1); # unread the firstchar if ($firstchar !~ /[[:print:]]/) { # is not a text character return ""; } while (defined($line = )) { $line =~ s/\r//g; # turn windows-looking lines into unix-looking lines if ($line =~ /^%warn\s+(.*)/) { # 'warn' command # Note this matches both '%warn blah\n' and '%warn\n'. print STDERR "WARNING: $1\n"; # print the rest of the line } elsif ($line =~ /^%/) { print STDERR "Ignoring unknown command from profile header: $line"; } else { # End of commands, must be the header line. return $line; } } return undef; # got to EOF without seeing a header line } sub IsSymbolizedProfileFile { my $file_name = shift; if (!(-e $file_name) || !(-r $file_name)) { return 0; } # Check if the file contains a symbol-section marker. open(TFILE, "<$file_name"); binmode TFILE; my $firstline = ReadProfileHeader(*TFILE); close(TFILE); if (!$firstline) { return 0; } $SYMBOL_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $symbol_marker = $&; return $firstline =~ /^--- *$symbol_marker/; } # Parse profile generated by common/profiler.cc and return a reference # to a map: # $result->{version} Version number of profile file # $result->{period} Sampling period (in microseconds) # $result->{profile} Profile object # $result->{map} Memory map info from profile # $result->{pcs} Hash of all PC values seen, key is hex address sub ReadProfile { my $prog = shift; my $fname = shift; my $result; # return value $CONTENTION_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $contention_marker = $&; $GROWTH_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $growth_marker = $&; $SYMBOL_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $symbol_marker = $&; $PROFILE_PAGE =~ m,[^/]+$,; # matches everything after the last slash my $profile_marker = $&; # Look at first line to see if it is a heap or a CPU profile. # CPU profile may start with no header at all, and just binary data # (starting with \0\0\0\0) -- in that case, don't try to read the # whole firstline, since it may be gigabytes(!) of data. open(PROFILE, "<$fname") || error("$fname: $!\n"); binmode PROFILE; # New perls do UTF-8 processing my $header = ReadProfileHeader(*PROFILE); if (!defined($header)) { # means "at EOF" error("Profile is empty.\n"); } my $symbols; if ($header =~ m/^--- *$symbol_marker/o) { # Verify that the user asked for a symbolized profile if (!$main::use_symbolized_profile) { # we have both a binary and symbolized profiles, abort error("FATAL ERROR: Symbolized profile\n $fname\ncannot be used with " . "a binary arg. Try again without passing\n $prog\n"); } # Read the symbol section of the symbolized profile file. $symbols = ReadSymbols(*PROFILE{IO}); # Read the next line to get the header for the remaining profile. 
$header = ReadProfileHeader(*PROFILE) || ""; } $main::profile_type = ''; if ($header =~ m/^heap profile:.*$growth_marker/o) { $main::profile_type = 'growth'; $result = ReadHeapProfile($prog, *PROFILE, $header); } elsif ($header =~ m/^heap profile:/) { $main::profile_type = 'heap'; $result = ReadHeapProfile($prog, *PROFILE, $header); } elsif ($header =~ m/^--- *$contention_marker/o) { $main::profile_type = 'contention'; $result = ReadSynchProfile($prog, *PROFILE); } elsif ($header =~ m/^--- *Stacks:/) { print STDERR "Old format contention profile: mistakenly reports " . "condition variable signals as lock contentions.\n"; $main::profile_type = 'contention'; $result = ReadSynchProfile($prog, *PROFILE); } elsif ($header =~ m/^--- *$profile_marker/) { # the binary cpu profile data starts immediately after this line $main::profile_type = 'cpu'; $result = ReadCPUProfile($prog, $fname, *PROFILE); } else { if (defined($symbols)) { # a symbolized profile contains a format we don't recognize, bail out error("$fname: Cannot recognize profile section after symbols.\n"); } # no ascii header present -- must be a CPU profile $main::profile_type = 'cpu'; $result = ReadCPUProfile($prog, $fname, *PROFILE); } close(PROFILE); # if we got symbols along with the profile, return those as well if (defined($symbols)) { $result->{symbols} = $symbols; } return $result; } # Subtract one from caller pc so we map back to call instr. # However, don't do this if we're reading a symbolized profile # file, in which case the subtract-one was done when the file # was written. # # We apply the same logic to all readers, though ReadCPUProfile uses an # independent implementation. sub FixCallerAddresses { my $stack = shift; if ($main::use_symbolized_profile) { return $stack; } else { $stack =~ /(\s)/; my $delimiter = $1; my @addrs = split(' ', $stack); my @fixedaddrs; $#fixedaddrs = $#addrs; if ($#addrs >= 0) { $fixedaddrs[0] = $addrs[0]; } for (my $i = 1; $i <= $#addrs; $i++) { $fixedaddrs[$i] = AddressSub($addrs[$i], "0x1"); } return join $delimiter, @fixedaddrs; } } # CPU profile reader sub ReadCPUProfile { my $prog = shift; my $fname = shift; # just used for logging local *PROFILE = shift; my $version; my $period; my $i; my $profile = {}; my $pcs = {}; # Parse string into array of slots. my $slots = CpuProfileStream->new(*PROFILE, $fname); # Read header. The current header version is a 5-element structure # containing: # 0: header count (always 0) # 1: header "words" (after this one: 3) # 2: format version (0) # 3: sampling period (usec) # 4: unused padding (always 0) if ($slots->get(0) != 0 ) { error("$fname: not a profile file, or old format profile file\n"); } $i = 2 + $slots->get(1); $version = $slots->get(2); $period = $slots->get(3); # Do some sanity checking on these header values. if ($version > (2**32) || $period > (2**32) || $i > (2**32) || $i < 5) { error("$fname: not a profile file, or corrupted profile file\n"); } # Parse profile while ($slots->get($i) != -1) { my $n = $slots->get($i++); my $d = $slots->get($i++); if ($d > (2**16)) { # TODO(csilvers): what's a reasonable max-stack-depth? my $addr = sprintf("0%o", $i * ($address_length == 8 ? 4 : 8)); print STDERR "At index $i (address $addr):\n"; error("$fname: stack trace depth >= 2**32\n"); } if ($slots->get($i) == 0) { # End of profile data marker $i += $d; last; } # Make key out of the stack entries my @k = (); for (my $j = 0; $j < $d; $j++) { my $pc = $slots->get($i+$j); # Subtract one from caller pc so we map back to call instr. 
# However, don't do this if we're reading a symbolized profile # file, in which case the subtract-one was done when the file # was written. if ($j > 0 && !$main::use_symbolized_profile) { $pc--; } $pc = sprintf("%0*x", $address_length, $pc); $pcs->{$pc} = 1; push @k, $pc; } AddEntry($profile, (join "\n", @k), $n); $i += $d; } # Parse map my $map = ''; seek(PROFILE, $i * 4, 0); read(PROFILE, $map, (stat PROFILE)[7]); my $r = {}; $r->{version} = $version; $r->{period} = $period; $r->{profile} = $profile; $r->{libs} = ParseLibraries($prog, $map, $pcs); $r->{pcs} = $pcs; return $r; } sub ReadHeapProfile { my $prog = shift; local *PROFILE = shift; my $header = shift; my $index = 1; if ($main::opt_inuse_space) { $index = 1; } elsif ($main::opt_inuse_objects) { $index = 0; } elsif ($main::opt_alloc_space) { $index = 3; } elsif ($main::opt_alloc_objects) { $index = 2; } # Find the type of this profile. The header line looks like: # heap profile: 1246: 8800744 [ 1246: 8800744] @ /266053 # There are two pairs , the first inuse objects/space, and the # second allocated objects/space. This is followed optionally by a profile # type, and if that is present, optionally by a sampling frequency. # For remote heap profiles (v1): # The interpretation of the sampling frequency is that the profiler, for # each sample, calculates a uniformly distributed random integer less than # the given value, and records the next sample after that many bytes have # been allocated. Therefore, the expected sample interval is half of the # given frequency. By default, if not specified, the expected sample # interval is 128KB. Only remote-heap-page profiles are adjusted for # sample size. # For remote heap profiles (v2): # The sampling frequency is the rate of a Poisson process. This means that # the probability of sampling an allocation of size X with sampling rate Y # is 1 - exp(-X/Y) # For version 2, a typical header line might look like this: # heap profile: 1922: 127792360 [ 1922: 127792360] @ _v2/524288 # the trailing number (524288) is the sampling rate. (Version 1 showed # double the 'rate' here) my $sampling_algorithm = 0; my $sample_adjustment = 0; chomp($header); my $type = "unknown"; if ($header =~ m"^heap profile:\s*(\d+):\s+(\d+)\s+\[\s*(\d+):\s+(\d+)\](\s*@\s*([^/]*)(/(\d+))?)?") { if (defined($6) && ($6 ne '')) { $type = $6; my $sample_period = $8; # $type is "heapprofile" for profiles generated by the # heap-profiler, and either "heap" or "heap_v2" for profiles # generated by sampling directly within tcmalloc. It can also # be "growth" for heap-growth profiles. The first is typically # found for profiles generated locally, and the others for # remote profiles. if (($type eq "heapprofile") || ($type !~ /heap/) ) { # No need to adjust for the sampling rate with heap-profiler-derived data $sampling_algorithm = 0; } elsif ($type =~ /_v2/) { $sampling_algorithm = 2; # version 2 sampling if (defined($sample_period) && ($sample_period ne '')) { $sample_adjustment = int($sample_period); } } else { $sampling_algorithm = 1; # version 1 sampling if (defined($sample_period) && ($sample_period ne '')) { $sample_adjustment = int($sample_period)/2; } } } else { # We detect whether or not this is a remote-heap profile by checking # that the total-allocated stats ($n2,$s2) are exactly the # same as the in-use stats ($n1,$s1). It is remotely conceivable # that a non-remote-heap profile may pass this check, but it is hard # to imagine how that could happen. # In this case it's so old it's guaranteed to be remote-heap version 1. 
my ($n1, $s1, $n2, $s2) = ($1, $2, $3, $4); if (($n1 == $n2) && ($s1 == $s2)) { # This is likely to be a remote-heap based sample profile $sampling_algorithm = 1; } } } if ($sampling_algorithm > 0) { # For remote-heap generated profiles, adjust the counts and sizes to # account for the sample rate (we sample once every 128KB by default). if ($sample_adjustment == 0) { # Turn on profile adjustment. $sample_adjustment = 128*1024; print STDERR "Adjusting heap profiles for 1-in-128KB sampling rate\n"; } else { printf STDERR ("Adjusting heap profiles for 1-in-%d sampling rate\n", $sample_adjustment); } if ($sampling_algorithm > 1) { # We don't bother printing anything for the original version (version 1) printf STDERR "Heap version $sampling_algorithm\n"; } } my $profile = {}; my $pcs = {}; my $map = ""; while () { s/\r//g; # turn windows-looking lines into unix-looking lines if (/^MAPPED_LIBRARIES:/) { # Read the /proc/self/maps data while () { s/\r//g; # turn windows-looking lines into unix-looking lines $map .= $_; } last; } if (/^--- Memory map:/) { # Read /proc/self/maps data as formatted by DumpAddressMap() my $buildvar = ""; while () { s/\r//g; # turn windows-looking lines into unix-looking lines # Parse "build=" specification if supplied if (m/^\s*build=(.*)\n/) { $buildvar = $1; } # Expand "$build" variable if available $_ =~ s/\$build\b/$buildvar/g; $map .= $_; } last; } # Read entry of the form: # : [: ] @ a1 a2 a3 ... an s/^\s*//; s/\s*$//; if (m/^\s*(\d+):\s+(\d+)\s+\[\s*(\d+):\s+(\d+)\]\s+@\s+(.*)$/) { my $stack = $5; my ($n1, $s1, $n2, $s2) = ($1, $2, $3, $4); if ($sample_adjustment) { if ($sampling_algorithm == 2) { # Remote-heap version 2 # The sampling frequency is the rate of a Poisson process. # This means that the probability of sampling an allocation of # size X with sampling rate Y is 1 - exp(-X/Y) if ($n1 != 0) { my $ratio = (($s1*1.0)/$n1)/($sample_adjustment); my $scale_factor = 1/(1 - exp(-$ratio)); $n1 *= $scale_factor; $s1 *= $scale_factor; } if ($n2 != 0) { my $ratio = (($s2*1.0)/$n2)/($sample_adjustment); my $scale_factor = 1/(1 - exp(-$ratio)); $n2 *= $scale_factor; $s2 *= $scale_factor; } } else { # Remote-heap version 1 my $ratio; $ratio = (($s1*1.0)/$n1)/($sample_adjustment); if ($ratio < 1) { $n1 /= $ratio; $s1 /= $ratio; } $ratio = (($s2*1.0)/$n2)/($sample_adjustment); if ($ratio < 1) { $n2 /= $ratio; $s2 /= $ratio; } } } my @counts = ($n1, $s1, $n2, $s2); AddEntries($profile, $pcs, FixCallerAddresses($stack), $counts[$index]); } } my $r = {}; $r->{version} = "heap"; $r->{period} = 1; $r->{profile} = $profile; $r->{libs} = ParseLibraries($prog, $map, $pcs); $r->{pcs} = $pcs; return $r; } sub ReadSynchProfile { my $prog = shift; local *PROFILE = shift; my $header = shift; my $map = ''; my $profile = {}; my $pcs = {}; my $sampling_period = 1; my $cyclespernanosec = 2.8; # Default assumption for old binaries my $seen_clockrate = 0; my $line; my $index = 0; if ($main::opt_total_delay) { $index = 0; } elsif ($main::opt_contentions) { $index = 1; } elsif ($main::opt_mean_delay) { $index = 2; } while ( $line = ) { $line =~ s/\r//g; # turn windows-looking lines into unix-looking lines if ( $line =~ /^\s*(\d+)\s+(\d+) \@\s*(.*?)\s*$/ ) { my ($cycles, $count, $stack) = ($1, $2, $3); # Convert cycles to nanoseconds $cycles /= $cyclespernanosec; # Adjust for sampling done by application $cycles *= $sampling_period; $count *= $sampling_period; my @values = ($cycles, $count, $cycles / $count); AddEntries($profile, $pcs, FixCallerAddresses($stack), $values[$index]); } 
elsif ( $line =~ /^(slow release).*thread \d+ \@\s*(.*?)\s*$/ || $line =~ /^\s*(\d+) \@\s*(.*?)\s*$/ ) { my ($cycles, $stack) = ($1, $2); if ($cycles !~ /^\d+$/) { next; } # Convert cycles to nanoseconds $cycles /= $cyclespernanosec; # Adjust for sampling done by application $cycles *= $sampling_period; AddEntries($profile, $pcs, FixCallerAddresses($stack), $cycles); } elsif ( $line =~ m/^([a-z][^=]*)=(.*)$/ ) { my ($variable, $value) = ($1,$2); for ($variable, $value) { s/^\s+//; s/\s+$//; } if ($variable eq "cycles/second") { $cyclespernanosec = $value / 1e9; $seen_clockrate = 1; } elsif ($variable eq "sampling period") { $sampling_period = $value; } elsif ($variable eq "ms since reset") { # Currently nothing is done with this value in pprof # So we just silently ignore it for now } elsif ($variable eq "discarded samples") { # Currently nothing is done with this value in pprof # So we just silently ignore it for now } else { printf STDERR ("Ignoring unnknown variable in /contention output: " . "'%s' = '%s'\n",$variable,$value); } } else { # Memory map entry $map .= $line; } } if (!$seen_clockrate) { printf STDERR ("No cycles/second entry in profile; Guessing %.1f GHz\n", $cyclespernanosec); } my $r = {}; $r->{version} = 0; $r->{period} = $sampling_period; $r->{profile} = $profile; $r->{libs} = ParseLibraries($prog, $map, $pcs); $r->{pcs} = $pcs; return $r; } # Given a hex value in the form "0x1abcd" or "1abcd", return either # "0001abcd" or "000000000001abcd", depending on the current (global) # address length. sub HexExtend { my $addr = shift; $addr =~ s/^(0x)?0*//; my $zeros_needed = $address_length - length($addr); if ($zeros_needed < 0) { printf STDERR "Warning: address $addr is longer than address length $address_length\n"; return $addr; } return ("0" x $zeros_needed) . $addr; } ##### Symbol extraction ##### # Aggressively search the lib_prefix values for the given library # If all else fails, just return the name of the library unmodified. # If the lib_prefix is "/my/path,/other/path" and $file is "/lib/dir/mylib.so" # it will search the following locations in this order, until it finds a file: # /my/path/lib/dir/mylib.so # /other/path/lib/dir/mylib.so # /my/path/dir/mylib.so # /other/path/dir/mylib.so # /my/path/mylib.so # /other/path/mylib.so # /lib/dir/mylib.so (returned as last resort) sub FindLibrary { my $file = shift; my $suffix = $file; # Search for the library as described above do { foreach my $prefix (@prefix_list) { my $fullpath = $prefix . $suffix; if (-e $fullpath) { return $fullpath; } } } while ($suffix =~ s|^/[^/]+/|/|); return $file; } # Return path to library with debugging symbols. # For libc libraries, the copy in /usr/lib/debug contains debugging symbols sub DebuggingLibrary { my $file = shift; if ($file =~ m|^/|) { if (-f "/usr/lib/debug$file") { return "/usr/lib/debug$file"; } elsif (-f "/usr/lib/debug$file.debug") { return "/usr/lib/debug$file.debug"; } } return undef; } # Parse text section header of a library using objdump sub ParseTextSectionHeaderFromObjdump { my $lib = shift; my $size = undef; my $vma; my $file_offset; # Get objdump output from the library file to figure out how to # map between mapped addresses and addresses in the library. 
my $cmd = ShellEscape($obj_tool_map{"objdump"}, "-h", $lib); open(OBJDUMP, "$cmd |") || error("$cmd: $!\n"); while () { s/\r//g; # turn windows-looking lines into unix-looking lines # Idx Name Size VMA LMA File off Algn # 10 .text 00104b2c 420156f0 420156f0 000156f0 2**4 # For 64-bit objects, VMA and LMA will be 16 hex digits, size and file # offset may still be 8. But AddressSub below will still handle that. my @x = split; if (($#x >= 6) && ($x[1] eq '.text')) { $size = $x[2]; $vma = $x[3]; $file_offset = $x[5]; last; } } close(OBJDUMP); if (!defined($size)) { return undef; } my $r = {}; $r->{size} = $size; $r->{vma} = $vma; $r->{file_offset} = $file_offset; return $r; } # Parse text section header of a library using otool (on OS X) sub ParseTextSectionHeaderFromOtool { my $lib = shift; my $size = undef; my $vma = undef; my $file_offset = undef; # Get otool output from the library file to figure out how to # map between mapped addresses and addresses in the library. my $command = ShellEscape($obj_tool_map{"otool"}, "-l", $lib); open(OTOOL, "$command |") || error("$command: $!\n"); my $cmd = ""; my $sectname = ""; my $segname = ""; foreach my $line () { $line =~ s/\r//g; # turn windows-looking lines into unix-looking lines # Load command <#> # cmd LC_SEGMENT # [...] # Section # sectname __text # segname __TEXT # addr 0x000009f8 # size 0x00018b9e # offset 2552 # align 2^2 (4) # We will need to strip off the leading 0x from the hex addresses, # and convert the offset into hex. if ($line =~ /Load command/) { $cmd = ""; $sectname = ""; $segname = ""; } elsif ($line =~ /Section/) { $sectname = ""; $segname = ""; } elsif ($line =~ /cmd (\w+)/) { $cmd = $1; } elsif ($line =~ /sectname (\w+)/) { $sectname = $1; } elsif ($line =~ /segname (\w+)/) { $segname = $1; } elsif (!(($cmd eq "LC_SEGMENT" || $cmd eq "LC_SEGMENT_64") && $sectname eq "__text" && $segname eq "__TEXT")) { next; } elsif ($line =~ /\baddr 0x([0-9a-fA-F]+)/) { $vma = $1; } elsif ($line =~ /\bsize 0x([0-9a-fA-F]+)/) { $size = $1; } elsif ($line =~ /\boffset ([0-9]+)/) { $file_offset = sprintf("%016x", $1); } if (defined($vma) && defined($size) && defined($file_offset)) { last; } } close(OTOOL); if (!defined($vma) || !defined($size) || !defined($file_offset)) { return undef; } my $r = {}; $r->{size} = $size; $r->{vma} = $vma; $r->{file_offset} = $file_offset; return $r; } sub ParseTextSectionHeader { # obj_tool_map("otool") is only defined if we're in a Mach-O environment if (defined($obj_tool_map{"otool"})) { my $r = ParseTextSectionHeaderFromOtool(@_); if (defined($r)){ return $r; } } # If otool doesn't work, or we don't have it, fall back to objdump return ParseTextSectionHeaderFromObjdump(@_); } # Split /proc/pid/maps dump into a list of libraries sub ParseLibraries { return if $main::use_symbol_page; # We don't need libraries info. my $prog = shift; my $map = shift; my $pcs = shift; my $result = []; my $h = "[a-f0-9]+"; my $zero_offset = HexExtend("0"); my $buildvar = ""; foreach my $l (split("\n", $map)) { if ($l =~ m/^\s*build=(.*)$/) { $buildvar = $1; } my $start; my $finish; my $offset; my $lib; if ($l =~ /^($h)-($h)\s+..x.\s+($h)\s+\S+:\S+\s+\d+\s+(\S+\.(so|dll|dylib|bundle)((\.\d+)+\w*(\.\d+){0,3})?)$/i) { # Full line from /proc/self/maps. 
Example: # 40000000-40015000 r-xp 00000000 03:01 12845071 /lib/ld-2.3.2.so $start = HexExtend($1); $finish = HexExtend($2); $offset = HexExtend($3); $lib = $4; $lib =~ s|\\|/|g; # turn windows-style paths into unix-style paths } elsif ($l =~ /^\s*($h)-($h):\s*(\S+\.so(\.\d+)*)/) { # Cooked line from DumpAddressMap. Example: # 40000000-40015000: /lib/ld-2.3.2.so $start = HexExtend($1); $finish = HexExtend($2); $offset = $zero_offset; $lib = $3; } # FreeBSD 10.0 virtual memory map /proc/curproc/map as defined in # function procfs_doprocmap (sys/fs/procfs/procfs_map.c) # # Example: # 0x800600000 0x80061a000 26 0 0xfffff800035a0000 r-x 75 33 0x1004 COW NC vnode /libexec/ld-elf.s # o.1 NCH -1 elsif ($l =~ /^(0x$h)\s(0x$h)\s\d+\s\d+\s0x$h\sr-x\s\d+\s\d+\s0x\d+\s(COW|NCO)\s(NC|NNC)\svnode\s(\S+\.so(\.\d+)*)/) { $start = HexExtend($1); $finish = HexExtend($2); $offset = $zero_offset; $lib = FindLibrary($5); } else { next; } # Expand "$build" variable if available $lib =~ s/\$build\b/$buildvar/g; $lib = FindLibrary($lib); # Check for pre-relocated libraries, which use pre-relocated symbol tables # and thus require adjusting the offset that we'll use to translate # VM addresses into symbol table addresses. # Only do this if we're not going to fetch the symbol table from a # debugging copy of the library. if (!DebuggingLibrary($lib)) { my $text = ParseTextSectionHeader($lib); if (defined($text)) { my $vma_offset = AddressSub($text->{vma}, $text->{file_offset}); $offset = AddressAdd($offset, $vma_offset); } } if($main::opt_debug) { printf STDERR "$start:$finish ($offset) $lib\n"; } push(@{$result}, [$lib, $start, $finish, $offset]); } # Append special entry for additional library (not relocated) if ($main::opt_lib ne "") { my $text = ParseTextSectionHeader($main::opt_lib); if (defined($text)) { my $start = $text->{vma}; my $finish = AddressAdd($start, $text->{size}); push(@{$result}, [$main::opt_lib, $start, $finish, $start]); } } # Append special entry for the main program. This covers # 0..max_pc_value_seen, so that we assume pc values not found in one # of the library ranges will be treated as coming from the main # program binary. my $min_pc = HexExtend("0"); my $max_pc = $min_pc; # find the maximal PC value in any sample foreach my $pc (keys(%{$pcs})) { if (HexExtend($pc) gt $max_pc) { $max_pc = HexExtend($pc); } } push(@{$result}, [$prog, $min_pc, $max_pc, $zero_offset]); return $result; } # Add two hex addresses of length $address_length. # Run pprof --test for unit test if this is changed. sub AddressAdd { my $addr1 = shift; my $addr2 = shift; my $sum; if ($address_length == 8) { # Perl doesn't cope with wraparound arithmetic, so do it explicitly: $sum = (hex($addr1)+hex($addr2)) % (0x10000000 * 16); return sprintf("%08x", $sum); } else { # Do the addition in 7-nibble chunks to trivialize carry handling. if ($main::opt_debug and $main::opt_test) { print STDERR "AddressAdd $addr1 + $addr2 = "; } my $a1 = substr($addr1,-7); $addr1 = substr($addr1,0,-7); my $a2 = substr($addr2,-7); $addr2 = substr($addr2,0,-7); $sum = hex($a1) + hex($a2); my $c = 0; if ($sum > 0xfffffff) { $c = 1; $sum -= 0x10000000; } my $r = sprintf("%07x", $sum); $a1 = substr($addr1,-7); $addr1 = substr($addr1,0,-7); $a2 = substr($addr2,-7); $addr2 = substr($addr2,0,-7); $sum = hex($a1) + hex($a2) + $c; $c = 0; if ($sum > 0xfffffff) { $c = 1; $sum -= 0x10000000; } $r = sprintf("%07x", $sum) . $r; $sum = hex($addr1) + hex($addr2) + $c; if ($sum > 0xff) { $sum -= 0x100; } $r = sprintf("%02x", $sum) . 
$r; if ($main::opt_debug and $main::opt_test) { print STDERR "$r\n"; } return $r; } } # Subtract two hex addresses of length $address_length. # Run pprof --test for unit test if this is changed. sub AddressSub { my $addr1 = shift; my $addr2 = shift; my $diff; if ($address_length == 8) { # Perl doesn't cope with wraparound arithmetic, so do it explicitly: $diff = (hex($addr1)-hex($addr2)) % (0x10000000 * 16); return sprintf("%08x", $diff); } else { # Do the addition in 7-nibble chunks to trivialize borrow handling. # if ($main::opt_debug) { print STDERR "AddressSub $addr1 - $addr2 = "; } my $a1 = hex(substr($addr1,-7)); $addr1 = substr($addr1,0,-7); my $a2 = hex(substr($addr2,-7)); $addr2 = substr($addr2,0,-7); my $b = 0; if ($a2 > $a1) { $b = 1; $a1 += 0x10000000; } $diff = $a1 - $a2; my $r = sprintf("%07x", $diff); $a1 = hex(substr($addr1,-7)); $addr1 = substr($addr1,0,-7); $a2 = hex(substr($addr2,-7)) + $b; $addr2 = substr($addr2,0,-7); $b = 0; if ($a2 > $a1) { $b = 1; $a1 += 0x10000000; } $diff = $a1 - $a2; $r = sprintf("%07x", $diff) . $r; $a1 = hex($addr1); $a2 = hex($addr2) + $b; if ($a2 > $a1) { $a1 += 0x100; } $diff = $a1 - $a2; $r = sprintf("%02x", $diff) . $r; # if ($main::opt_debug) { print STDERR "$r\n"; } return $r; } } # Increment a hex addresses of length $address_length. # Run pprof --test for unit test if this is changed. sub AddressInc { my $addr = shift; my $sum; if ($address_length == 8) { # Perl doesn't cope with wraparound arithmetic, so do it explicitly: $sum = (hex($addr)+1) % (0x10000000 * 16); return sprintf("%08x", $sum); } else { # Do the addition in 7-nibble chunks to trivialize carry handling. # We are always doing this to step through the addresses in a function, # and will almost never overflow the first chunk, so we check for this # case and exit early. # if ($main::opt_debug) { print STDERR "AddressInc $addr1 = "; } my $a1 = substr($addr,-7); $addr = substr($addr,0,-7); $sum = hex($a1) + 1; my $r = sprintf("%07x", $sum); if ($sum <= 0xfffffff) { $r = $addr . $r; # if ($main::opt_debug) { print STDERR "$r\n"; } return HexExtend($r); } else { $r = "0000000"; } $a1 = substr($addr,-7); $addr = substr($addr,0,-7); $sum = hex($a1) + 1; $r = sprintf("%07x", $sum) . $r; if ($sum <= 0xfffffff) { $r = $addr . $r; # if ($main::opt_debug) { print STDERR "$r\n"; } return HexExtend($r); } else { $r = "00000000000000"; } $sum = hex($addr) + 1; if ($sum > 0xff) { $sum -= 0x100; } $r = sprintf("%02x", $sum) . $r; # if ($main::opt_debug) { print STDERR "$r\n"; } return $r; } } # Extract symbols for all PC values found in profile sub ExtractSymbols { my $libs = shift; my $pcset = shift; my $symbols = {}; # Map each PC value to the containing library. To make this faster, # we sort libraries by their starting pc value (highest first), and # advance through the libraries as we advance the pc. Sometimes the # addresses of libraries may overlap with the addresses of the main # binary, so to make sure the libraries 'win', we iterate over the # libraries in reverse order (which assumes the binary doesn't start # in the middle of a library, which seems a fair assumption). my @pcs = (sort { $a cmp $b } keys(%{$pcset})); # pcset is 0-extended strings foreach my $lib (sort {$b->[1] cmp $a->[1]} @{$libs}) { my $libname = $lib->[0]; my $start = $lib->[1]; my $finish = $lib->[2]; my $offset = $lib->[3]; # Use debug library if it exists my $debug_libname = DebuggingLibrary($libname); if ($debug_libname) { $libname = $debug_libname; } # Get list of pcs that belong in this library. 
my $contained = []; my ($start_pc_index, $finish_pc_index); # Find smallest finish_pc_index such that $finish < $pc[$finish_pc_index]. for ($finish_pc_index = $#pcs + 1; $finish_pc_index > 0; $finish_pc_index--) { last if $pcs[$finish_pc_index - 1] le $finish; } # Find smallest start_pc_index such that $start <= $pc[$start_pc_index]. for ($start_pc_index = $finish_pc_index; $start_pc_index > 0; $start_pc_index--) { last if $pcs[$start_pc_index - 1] lt $start; } # This keeps PC values higher than $pc[$finish_pc_index] in @pcs, # in case there are overlaps in libraries and the main binary. @{$contained} = splice(@pcs, $start_pc_index, $finish_pc_index - $start_pc_index); # Map to symbols MapToSymbols($libname, AddressSub($start, $offset), $contained, $symbols); } return $symbols; } # Map list of PC values to symbols for a given image sub MapToSymbols { my $image = shift; my $offset = shift; my $pclist = shift; my $symbols = shift; my $debug = 0; # Ignore empty binaries if ($#{$pclist} < 0) { return; } # Figure out the addr2line command to use my $addr2line = $obj_tool_map{"addr2line"}; my $cmd = ShellEscape($addr2line, "-f", "-C", "-e", $image); if (exists $obj_tool_map{"addr2line_pdb"}) { $addr2line = $obj_tool_map{"addr2line_pdb"}; $cmd = ShellEscape($addr2line, "--demangle", "-f", "-C", "-e", $image); } # If "addr2line" isn't installed on the system at all, just use # nm to get what info we can (function names, but not line numbers). if (system(ShellEscape($addr2line, "--help") . " >$dev_null 2>&1") != 0) { MapSymbolsWithNM($image, $offset, $pclist, $symbols); return; } # "addr2line -i" can produce a variable number of lines per input # address, with no separator that allows us to tell when data for # the next address starts. So we find the address for a special # symbol (_fini) and interleave this address between all real # addresses passed to addr2line. The name of this special symbol # can then be used as a separator. $sep_address = undef; # May be filled in by MapSymbolsWithNM() my $nm_symbols = {}; MapSymbolsWithNM($image, $offset, $pclist, $nm_symbols); if (defined($sep_address)) { # Only add " -i" to addr2line if the binary supports it. # addr2line --help returns 0, but not if it sees an unknown flag first. if (system("$cmd -i --help >$dev_null 2>&1") == 0) { $cmd .= " -i"; } else { $sep_address = undef; # no need for sep_address if we don't support -i } } # Make file with all PC values with intervening 'sep_address' so # that we can reliably detect the end of inlined function list open(ADDRESSES, ">$main::tmpfile_sym") || error("$main::tmpfile_sym: $!\n"); if ($debug) { print("---- $image ---\n"); } for (my $i = 0; $i <= $#{$pclist}; $i++) { # addr2line always reads hex addresses, and does not need '0x' prefix. if ($debug) { printf STDERR ("%s\n", $pclist->[$i]); } printf ADDRESSES ("%s\n", AddressSub($pclist->[$i], $offset)); if (defined($sep_address)) { printf ADDRESSES ("%s\n", $sep_address); } } close(ADDRESSES); if ($debug) { print("----\n"); system("cat", $main::tmpfile_sym); print("----\n"); system("$cmd < " . ShellEscape($main::tmpfile_sym)); print("----\n"); } open(SYMBOLS, "$cmd <" . ShellEscape($main::tmpfile_sym) . 
" |") || error("$cmd: $!\n"); my $count = 0; # Index in pclist while () { # Read fullfunction and filelineinfo from next pair of lines s/\r?\n$//g; my $fullfunction = $_; $_ = ; s/\r?\n$//g; my $filelinenum = $_; if (defined($sep_address) && $fullfunction eq $sep_symbol) { # Terminating marker for data for this address $count++; next; } $filelinenum =~ s|\\|/|g; # turn windows-style paths into unix-style paths my $pcstr = $pclist->[$count]; my $function = ShortFunctionName($fullfunction); my $nms = $nm_symbols->{$pcstr}; if (defined($nms)) { if ($fullfunction eq '??') { # nm found a symbol for us. $function = $nms->[0]; $fullfunction = $nms->[2]; } else { # MapSymbolsWithNM tags each routine with its starting address, # useful in case the image has multiple occurrences of this # routine. (It uses a syntax that resembles template parameters, # that are automatically stripped out by ShortFunctionName().) # addr2line does not provide the same information. So we check # if nm disambiguated our symbol, and if so take the annotated # (nm) version of the routine-name. TODO(csilvers): this won't # catch overloaded, inlined symbols, which nm doesn't see. # Better would be to do a check similar to nm's, in this fn. if ($nms->[2] =~ m/^\Q$function\E/) { # sanity check it's the right fn $function = $nms->[0]; $fullfunction = $nms->[2]; } } } # Prepend to accumulated symbols for pcstr # (so that caller comes before callee) my $sym = $symbols->{$pcstr}; if (!defined($sym)) { $sym = []; $symbols->{$pcstr} = $sym; } unshift(@{$sym}, $function, $filelinenum, $fullfunction); if ($debug) { printf STDERR ("%s => [%s]\n", $pcstr, join(" ", @{$sym})); } if (!defined($sep_address)) { # Inlining is off, so this entry ends immediately $count++; } } close(SYMBOLS); } # Use nm to map the list of referenced PCs to symbols. Return true iff we # are able to read procedure information via nm. sub MapSymbolsWithNM { my $image = shift; my $offset = shift; my $pclist = shift; my $symbols = shift; # Get nm output sorted by increasing address my $symbol_table = GetProcedureBoundaries($image, "."); if (!%{$symbol_table}) { return 0; } # Start addresses are already the right length (8 or 16 hex digits). my @names = sort { $symbol_table->{$a}->[0] cmp $symbol_table->{$b}->[0] } keys(%{$symbol_table}); if ($#names < 0) { # No symbols: just use addresses foreach my $pc (@{$pclist}) { my $pcstr = "0x" . $pc; $symbols->{$pc} = [$pcstr, "?", $pcstr]; } return 0; } # Sort addresses so we can do a join against nm output my $index = 0; my $fullname = $names[0]; my $name = ShortFunctionName($fullname); foreach my $pc (sort { $a cmp $b } @{$pclist}) { # Adjust for mapped offset my $mpc = AddressSub($pc, $offset); while (($index < $#names) && ($mpc ge $symbol_table->{$fullname}->[1])){ $index++; $fullname = $names[$index]; $name = ShortFunctionName($fullname); } if ($mpc lt $symbol_table->{$fullname}->[1]) { $symbols->{$pc} = [$name, "?", $fullname]; } else { my $pcstr = "0x" . 
$pc; $symbols->{$pc} = [$pcstr, "?", $pcstr]; } } return 1; } sub ShortFunctionName { my $function = shift; while ($function =~ s/\([^()]*\)(\s*const)?//g) { } # Argument types while ($function =~ s/<[^<>]*>//g) { } # Remove template arguments $function =~ s/^.*\s+(\w+::)/$1/; # Remove leading type return $function; } # Trim overly long symbols found in disassembler output sub CleanDisassembly { my $d = shift; while ($d =~ s/\([^()%]*\)(\s*const)?//g) { } # Argument types, not (%rax) while ($d =~ s/(\w+)<[^<>]*>/$1/g) { } # Remove template arguments return $d; } # Clean file name for display sub CleanFileName { my ($f) = @_; $f =~ s|^/proc/self/cwd/||; $f =~ s|^\./||; return $f; } # Make address relative to section and clean up for display sub UnparseAddress { my ($offset, $address) = @_; $address = AddressSub($address, $offset); $address =~ s/^0x//; $address =~ s/^0*//; return $address; } ##### Miscellaneous ##### # Find the right versions of the above object tools to use. The # argument is the program file being analyzed, and should be an ELF # 32-bit or ELF 64-bit executable file. The location of the tools # is determined by considering the following options in this order: # 1) --tools option, if set # 2) PPROF_TOOLS environment variable, if set # 3) the environment sub ConfigureObjTools { my $prog_file = shift; # Check for the existence of $prog_file because /usr/bin/file does not # predictably return error status in prod. (-e $prog_file) || error("$prog_file does not exist.\n"); my $file_type = undef; if (-e "/usr/bin/file") { # Follow symlinks (at least for systems where "file" supports that). my $escaped_prog_file = ShellEscape($prog_file); $file_type = `/usr/bin/file -L $escaped_prog_file 2>$dev_null || /usr/bin/file $escaped_prog_file`; } elsif ($^O == "MSWin32") { $file_type = "MS Windows"; } else { print STDERR "WARNING: Can't determine the file type of $prog_file"; } if ($file_type =~ /64-bit/) { # Change $address_length to 16 if the program file is ELF 64-bit. # We can't detect this from many (most?) heap or lock contention # profiles, since the actual addresses referenced are generally in low # memory even for 64-bit programs. $address_length = 16; } if ($file_type =~ /MS Windows/) { # For windows, we provide a version of nm and addr2line as part of # the opensource release, which is capable of parsing # Windows-style PDB executables. It should live in the path, or # in the same directory as pprof. $obj_tool_map{"nm_pdb"} = "nm-pdb"; $obj_tool_map{"addr2line_pdb"} = "addr2line-pdb"; } if ($file_type =~ /Mach-O/) { # OS X uses otool to examine Mach-O files, rather than objdump. $obj_tool_map{"otool"} = "otool"; $obj_tool_map{"addr2line"} = "false"; # no addr2line $obj_tool_map{"objdump"} = "false"; # no objdump } # Go fill in %obj_tool_map with the pathnames to use: foreach my $tool (keys %obj_tool_map) { $obj_tool_map{$tool} = ConfigureTool($obj_tool_map{$tool}); } } # Returns the path of a caller-specified object tool. If --tools or # PPROF_TOOLS are specified, then returns the full path to the tool # with that prefix. Otherwise, returns the path unmodified (which # means we will look for it on PATH). sub ConfigureTool { my $tool = shift; my $path; # --tools (or $PPROF_TOOLS) is a comma separated list, where each # item is either a) a pathname prefix, or b) a map of the form # :. First we look for an entry of type (b) for our # tool. If one is found, we use it. Otherwise, we consider all the # pathname prefixes in turn, until one yields an existing file. 
If # none does, we use a default path. my $tools = $main::opt_tools || $ENV{"PPROF_TOOLS"} || ""; if ($tools =~ m/(,|^)\Q$tool\E:([^,]*)/) { $path = $2; # TODO(csilvers): sanity-check that $path exists? Hard if it's relative. } elsif ($tools ne '') { foreach my $prefix (split(',', $tools)) { next if ($prefix =~ /:/); # ignore "tool:fullpath" entries in the list if (-x $prefix . $tool) { $path = $prefix . $tool; last; } } if (!$path) { error("No '$tool' found with prefix specified by " . "--tools (or \$PPROF_TOOLS) '$tools'\n"); } } else { # ... otherwise use the version that exists in the same directory as # pprof. If there's nothing there, use $PATH. $0 =~ m,[^/]*$,; # this is everything after the last slash my $dirname = $`; # this is everything up to and including the last slash if (-x "$dirname$tool") { $path = "$dirname$tool"; } else { $path = $tool; } } if ($main::opt_debug) { print STDERR "Using '$path' for '$tool'.\n"; } return $path; } sub ShellEscape { my @escaped_words = (); foreach my $word (@_) { my $escaped_word = $word; if ($word =~ m![^a-zA-Z0-9/.,_=-]!) { # check for anything not in whitelist $escaped_word =~ s/'/'\\''/; $escaped_word = "'$escaped_word'"; } push(@escaped_words, $escaped_word); } return join(" ", @escaped_words); } sub cleanup { unlink($main::tmpfile_sym); unlink(keys %main::tempnames); # We leave any collected profiles in $HOME/pprof in case the user wants # to look at them later. We print a message informing them of this. if ((scalar(@main::profile_files) > 0) && defined($main::collected_profile)) { if (scalar(@main::profile_files) == 1) { print STDERR "Dynamically gathered profile is in $main::collected_profile\n"; } print STDERR "If you want to investigate this profile further, you can do:\n"; print STDERR "\n"; print STDERR " pprof \\\n"; print STDERR " $main::prog \\\n"; print STDERR " $main::collected_profile\n"; print STDERR "\n"; } } sub sighandler { cleanup(); exit(1); } sub error { my $msg = shift; print STDERR $msg; cleanup(); exit(1); } # Run $nm_command and get all the resulting procedure boundaries whose # names match "$regexp" and returns them in a hashtable mapping from # procedure name to a two-element vector of [start address, end address] sub GetProcedureBoundariesViaNm { my $escaped_nm_command = shift; # shell-escaped my $regexp = shift; my $symbol_table = {}; open(NM, "$escaped_nm_command |") || error("$escaped_nm_command: $!\n"); my $last_start = "0"; my $routine = ""; while () { s/\r//g; # turn windows-looking lines into unix-looking lines if (m/^\s*([0-9a-f]+) (.) (..*)/) { my $start_val = $1; my $type = $2; my $this_routine = $3; # It's possible for two symbols to share the same address, if # one is a zero-length variable (like __start_google_malloc) or # one symbol is a weak alias to another (like __libc_malloc). # In such cases, we want to ignore all values except for the # actual symbol, which in nm-speak has type "T". The logic # below does this, though it's a bit tricky: what happens when # we have a series of lines with the same address, is the first # one gets queued up to be processed. However, it won't # *actually* be processed until later, when we read a line with # a different address. That means that as long as we're reading # lines with the same address, we have a chance to replace that # item in the queue, which we do whenever we see a 'T' entry -- # that is, a line with type 'T'. 
If we never see a 'T' entry, # we'll just go ahead and process the first entry (which never # got touched in the queue), and ignore the others. if ($start_val eq $last_start && $type =~ /t/i) { # We are the 'T' symbol at this address, replace previous symbol. $routine = $this_routine; next; } elsif ($start_val eq $last_start) { # We're not the 'T' symbol at this address, so ignore us. next; } if ($this_routine eq $sep_symbol) { $sep_address = HexExtend($start_val); } # Tag this routine with the starting address in case the image # has multiple occurrences of this routine. We use a syntax # that resembles template parameters that are automatically # stripped out by ShortFunctionName() $this_routine .= "<$start_val>"; if (defined($routine) && $routine =~ m/$regexp/) { $symbol_table->{$routine} = [HexExtend($last_start), HexExtend($start_val)]; } $last_start = $start_val; $routine = $this_routine; } elsif (m/^Loaded image name: (.+)/) { # The win32 nm workalike emits information about the binary it is using. if ($main::opt_debug) { print STDERR "Using Image $1\n"; } } elsif (m/^PDB file name: (.+)/) { # The win32 nm workalike emits information about the pdb it is using. if ($main::opt_debug) { print STDERR "Using PDB $1\n"; } } } close(NM); # Handle the last line in the nm output. Unfortunately, we don't know # how big this last symbol is, because we don't know how big the file # is. For now, we just give it a size of 0. # TODO(csilvers): do better here. if (defined($routine) && $routine =~ m/$regexp/) { $symbol_table->{$routine} = [HexExtend($last_start), HexExtend($last_start)]; } return $symbol_table; } # Gets the procedure boundaries for all routines in "$image" whose names # match "$regexp" and returns them in a hashtable mapping from procedure # name to a two-element vector of [start address, end address]. # Will return an empty map if nm is not installed or not working properly. sub GetProcedureBoundaries { my $image = shift; my $regexp = shift; # If $image doesn't start with /, then put ./ in front of it. This works # around an obnoxious bug in our probing of nm -f behavior. # "nm -f $image" is supposed to fail on GNU nm, but if: # # a. $image starts with [BbSsPp] (for example, bin/foo/bar), AND # b. you have a.out in your current directory (a not uncommon occurrence) # # then "nm -f $image" succeeds because -f only looks at the first letter of # the argument, which looks valid because it's [BbSsPp], and then since # there's no image provided, it looks for a.out and finds it. # # This regex makes sure that $image starts with . or /, forcing the -f # parsing to fail since . and / are not valid formats. $image =~ s#^[^/]#./$&#; # For libc libraries, the copy in /usr/lib/debug contains debugging symbols my $debugging = DebuggingLibrary($image); if ($debugging) { $image = $debugging; } my $nm = $obj_tool_map{"nm"}; my $cppfilt = $obj_tool_map{"c++filt"}; # nm can fail for two reasons: 1) $image isn't a debug library; 2) nm # binary doesn't support --demangle. In addition, for OS X we need # to use the -f flag to get 'flat' nm output (otherwise we don't sort # properly and get incorrect results). Unfortunately, GNU nm uses -f # in an incompatible way. So first we test whether our nm supports # --demangle and -f. my $demangle_flag = ""; my $cppfilt_flag = ""; my $to_devnull = ">$dev_null 2>&1"; if (system(ShellEscape($nm, "--demangle", "image") . 
$to_devnull) == 0) { # In this mode, we do "nm --demangle " $demangle_flag = "--demangle"; $cppfilt_flag = ""; } elsif (system(ShellEscape($cppfilt, $image) . $to_devnull) == 0) { # In this mode, we do "nm | c++filt" $cppfilt_flag = " | " . ShellEscape($cppfilt); }; my $flatten_flag = ""; if (system(ShellEscape($nm, "-f", $image) . $to_devnull) == 0) { $flatten_flag = "-f"; } # Finally, in the case $imagie isn't a debug library, we try again with # -D to at least get *exported* symbols. If we can't use --demangle, # we use c++filt instead, if it exists on this system. my @nm_commands = (ShellEscape($nm, "-n", $flatten_flag, $demangle_flag, $image) . " 2>$dev_null $cppfilt_flag", ShellEscape($nm, "-D", "-n", $flatten_flag, $demangle_flag, $image) . " 2>$dev_null $cppfilt_flag", # 6nm is for Go binaries ShellEscape("6nm", "$image") . " 2>$dev_null | sort", ); # If the executable is an MS Windows PDB-format executable, we'll # have set up obj_tool_map("nm_pdb"). In this case, we actually # want to use both unix nm and windows-specific nm_pdb, since # PDB-format executables can apparently include dwarf .o files. if (exists $obj_tool_map{"nm_pdb"}) { push(@nm_commands, ShellEscape($obj_tool_map{"nm_pdb"}, "--demangle", $image) . " 2>$dev_null"); } foreach my $nm_command (@nm_commands) { my $symbol_table = GetProcedureBoundariesViaNm($nm_command, $regexp); return $symbol_table if (%{$symbol_table}); } my $symbol_table = {}; return $symbol_table; } # The test vectors for AddressAdd/Sub/Inc are 8-16-nibble hex strings. # To make them more readable, we add underscores at interesting places. # This routine removes the underscores, producing the canonical representation # used by pprof to represent addresses, particularly in the tested routines. sub CanonicalHex { my $arg = shift; return join '', (split '_',$arg); } # Unit test for AddressAdd: sub AddressAddUnitTest { my $test_data_8 = shift; my $test_data_16 = shift; my $error_count = 0; my $fail_count = 0; my $pass_count = 0; # print STDERR "AddressAddUnitTest: ", 1+$#{$test_data_8}, " tests\n"; # First a few 8-nibble addresses. Note that this implementation uses # plain old arithmetic, so a quick sanity check along with verifying what # happens to overflow (we want it to wrap): $address_length = 8; foreach my $row (@{$test_data_8}) { if ($main::opt_debug and $main::opt_test) { print STDERR "@{$row}\n"; } my $sum = AddressAdd ($row->[0], $row->[1]); if ($sum ne $row->[2]) { printf STDERR "ERROR: %s != %s + %s = %s\n", $sum, $row->[0], $row->[1], $row->[2]; ++$fail_count; } else { ++$pass_count; } } printf STDERR "AddressAdd 32-bit tests: %d passes, %d failures\n", $pass_count, $fail_count; $error_count = $fail_count; $fail_count = 0; $pass_count = 0; # Now 16-nibble addresses. 
$address_length = 16; foreach my $row (@{$test_data_16}) { if ($main::opt_debug and $main::opt_test) { print STDERR "@{$row}\n"; } my $sum = AddressAdd (CanonicalHex($row->[0]), CanonicalHex($row->[1])); my $expected = join '', (split '_',$row->[2]); if ($sum ne CanonicalHex($row->[2])) { printf STDERR "ERROR: %s != %s + %s = %s\n", $sum, $row->[0], $row->[1], $row->[2]; ++$fail_count; } else { ++$pass_count; } } printf STDERR "AddressAdd 64-bit tests: %d passes, %d failures\n", $pass_count, $fail_count; $error_count += $fail_count; return $error_count; } # Unit test for AddressSub: sub AddressSubUnitTest { my $test_data_8 = shift; my $test_data_16 = shift; my $error_count = 0; my $fail_count = 0; my $pass_count = 0; # print STDERR "AddressSubUnitTest: ", 1+$#{$test_data_8}, " tests\n"; # First a few 8-nibble addresses. Note that this implementation uses # plain old arithmetic, so a quick sanity check along with verifying what # happens to overflow (we want it to wrap): $address_length = 8; foreach my $row (@{$test_data_8}) { if ($main::opt_debug and $main::opt_test) { print STDERR "@{$row}\n"; } my $sum = AddressSub ($row->[0], $row->[1]); if ($sum ne $row->[3]) { printf STDERR "ERROR: %s != %s - %s = %s\n", $sum, $row->[0], $row->[1], $row->[3]; ++$fail_count; } else { ++$pass_count; } } printf STDERR "AddressSub 32-bit tests: %d passes, %d failures\n", $pass_count, $fail_count; $error_count = $fail_count; $fail_count = 0; $pass_count = 0; # Now 16-nibble addresses. $address_length = 16; foreach my $row (@{$test_data_16}) { if ($main::opt_debug and $main::opt_test) { print STDERR "@{$row}\n"; } my $sum = AddressSub (CanonicalHex($row->[0]), CanonicalHex($row->[1])); if ($sum ne CanonicalHex($row->[3])) { printf STDERR "ERROR: %s != %s - %s = %s\n", $sum, $row->[0], $row->[1], $row->[3]; ++$fail_count; } else { ++$pass_count; } } printf STDERR "AddressSub 64-bit tests: %d passes, %d failures\n", $pass_count, $fail_count; $error_count += $fail_count; return $error_count; } # Unit test for AddressInc: sub AddressIncUnitTest { my $test_data_8 = shift; my $test_data_16 = shift; my $error_count = 0; my $fail_count = 0; my $pass_count = 0; # print STDERR "AddressIncUnitTest: ", 1+$#{$test_data_8}, " tests\n"; # First a few 8-nibble addresses. Note that this implementation uses # plain old arithmetic, so a quick sanity check along with verifying what # happens to overflow (we want it to wrap): $address_length = 8; foreach my $row (@{$test_data_8}) { if ($main::opt_debug and $main::opt_test) { print STDERR "@{$row}\n"; } my $sum = AddressInc ($row->[0]); if ($sum ne $row->[4]) { printf STDERR "ERROR: %s != %s + 1 = %s\n", $sum, $row->[0], $row->[4]; ++$fail_count; } else { ++$pass_count; } } printf STDERR "AddressInc 32-bit tests: %d passes, %d failures\n", $pass_count, $fail_count; $error_count = $fail_count; $fail_count = 0; $pass_count = 0; # Now 16-nibble addresses. $address_length = 16; foreach my $row (@{$test_data_16}) { if ($main::opt_debug and $main::opt_test) { print STDERR "@{$row}\n"; } my $sum = AddressInc (CanonicalHex($row->[0])); if ($sum ne CanonicalHex($row->[4])) { printf STDERR "ERROR: %s != %s + 1 = %s\n", $sum, $row->[0], $row->[4]; ++$fail_count; } else { ++$pass_count; } } printf STDERR "AddressInc 64-bit tests: %d passes, %d failures\n", $pass_count, $fail_count; $error_count += $fail_count; return $error_count; } # Driver for unit tests. # Currently just the address add/subtract/increment routines for 64-bit. 
sub RunUnitTests { my $error_count = 0; # This is a list of tuples [a, b, a+b, a-b, a+1] my $unit_test_data_8 = [ [qw(aaaaaaaa 50505050 fafafafa 5a5a5a5a aaaaaaab)], [qw(50505050 aaaaaaaa fafafafa a5a5a5a6 50505051)], [qw(ffffffff aaaaaaaa aaaaaaa9 55555555 00000000)], [qw(00000001 ffffffff 00000000 00000002 00000002)], [qw(00000001 fffffff0 fffffff1 00000011 00000002)], ]; my $unit_test_data_16 = [ # The implementation handles data in 7-nibble chunks, so those are the # interesting boundaries. [qw(aaaaaaaa 50505050 00_000000f_afafafa 00_0000005_a5a5a5a 00_000000a_aaaaaab)], [qw(50505050 aaaaaaaa 00_000000f_afafafa ff_ffffffa_5a5a5a6 00_0000005_0505051)], [qw(ffffffff aaaaaaaa 00_000001a_aaaaaa9 00_0000005_5555555 00_0000010_0000000)], [qw(00000001 ffffffff 00_0000010_0000000 ff_ffffff0_0000002 00_0000000_0000002)], [qw(00000001 fffffff0 00_000000f_ffffff1 ff_ffffff0_0000011 00_0000000_0000002)], [qw(00_a00000a_aaaaaaa 50505050 00_a00000f_afafafa 00_a000005_a5a5a5a 00_a00000a_aaaaaab)], [qw(0f_fff0005_0505050 aaaaaaaa 0f_fff000f_afafafa 0f_ffefffa_5a5a5a6 0f_fff0005_0505051)], [qw(00_000000f_fffffff 01_800000a_aaaaaaa 01_800001a_aaaaaa9 fe_8000005_5555555 00_0000010_0000000)], [qw(00_0000000_0000001 ff_fffffff_fffffff 00_0000000_0000000 00_0000000_0000002 00_0000000_0000002)], [qw(00_0000000_0000001 ff_fffffff_ffffff0 ff_fffffff_ffffff1 00_0000000_0000011 00_0000000_0000002)], ]; $error_count += AddressAddUnitTest($unit_test_data_8, $unit_test_data_16); $error_count += AddressSubUnitTest($unit_test_data_8, $unit_test_data_16); $error_count += AddressIncUnitTest($unit_test_data_8, $unit_test_data_16); if ($error_count > 0) { print STDERR $error_count, " errors: FAILED\n"; } else { print STDERR "PASS\n"; } exit ($error_count); }
vmem-1.8/src/jemalloc/config.guess
#! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2013 Free Software Foundation, Inc. timestamp='2013-06-10' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program.
This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner. # # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD # # Please send patches with a ChangeLog entry to config-patches@gnu.org. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system \`$me' is run on. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright 1992-2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi trap 'exit 1' 1 2 15 # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still # use `HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. set_cc_for_build=' trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ; trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ; : ${TMPDIR=/tmp} ; { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ; dummy=$tmp/dummy ; tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ; case $CC_FOR_BUILD,$HOST_CC,$CC in ,,) echo "int x;" > $dummy.c ; for c in cc gcc c89 c99 ; do if ($c -c -o $dummy.o $dummy.c) >/dev/null 2>&1 ; then CC_FOR_BUILD="$c"; break ; fi ; done ; if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found ; fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac ; set_cc_for_build= ;' # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi@noc.rutgers.edu 1994-08-24) if (test -f /.attbin/uname) >/dev/null 2>&1 ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown case "${UNAME_SYSTEM}" in Linux|GNU|GNU/*) # If the system lacks a compiler, then just pick glibc. # We could probably try harder. 
LIBC=gnu eval $set_cc_for_build cat <<-EOF > $dummy.c #include #if defined(__UCLIBC__) LIBC=uclibc #elif defined(__dietlibc__) LIBC=dietlibc #else LIBC=gnu #endif EOF eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC'` ;; esac # Note: order is significant - the case branches are not exclusive. case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward # compatibility and a consistent mechanism for selecting the # object file format. # # Note: NetBSD doesn't particularly care about the vendor # portion of the name. We always set it to "unknown". sysctl="sysctl -n hw.machine_arch" UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \ /usr/sbin/$sysctl 2>/dev/null || echo unknown)` case "${UNAME_MACHINE_ARCH}" in armeb) machine=armeb-unknown ;; arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; sh5el) machine=sh5le-unknown ;; *) machine=${UNAME_MACHINE_ARCH}-unknown ;; esac # The Operating System including object format, if it has switched # to ELF recently, or will in the future. case "${UNAME_MACHINE_ARCH}" in arm*|i386|m68k|ns32k|sh3*|sparc|vax) eval $set_cc_for_build if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ELF__ then # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). # Return netbsd for either. FIX? os=netbsd else os=netbsdelf fi ;; *) os=netbsd ;; esac # The OS release # Debian GNU/NetBSD machines have a different userland, and # thus, need a distinct triplet. However, they do not need # kernel version information, so it can be replaced with a # suitable tag, in the style of linux-gnu. case "${UNAME_VERSION}" in Debian*) release='-gnu' ;; *) release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'` ;; esac # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. echo "${machine}-${os}${release}" exit ;; *:Bitrig:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` echo ${UNAME_MACHINE_ARCH}-unknown-bitrig${UNAME_RELEASE} exit ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE} exit ;; *:ekkoBSD:*:*) echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE} exit ;; *:SolidBSD:*:*) echo ${UNAME_MACHINE}-unknown-solidbsd${UNAME_RELEASE} exit ;; macppc:MirBSD:*:*) echo powerpc-unknown-mirbsd${UNAME_RELEASE} exit ;; *:MirBSD:*:*) echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE} exit ;; alpha:OSF1:*:*) case $UNAME_RELEASE in *4.0) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` ;; *5.*) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` ;; esac # According to Compaq, /usr/sbin/psrinfo has been available on # OSF/1 and Tru64 systems produced since 1995. I hope that # covers most systems running today. This code pipes the CPU # types through head -n 1, so we only detect the type of CPU 0. 
ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` case "$ALPHA_CPU_TYPE" in "EV4 (21064)") UNAME_MACHINE="alpha" ;; "EV4.5 (21064)") UNAME_MACHINE="alpha" ;; "LCA4 (21066/21068)") UNAME_MACHINE="alpha" ;; "EV5 (21164)") UNAME_MACHINE="alphaev5" ;; "EV5.6 (21164A)") UNAME_MACHINE="alphaev56" ;; "EV5.6 (21164PC)") UNAME_MACHINE="alphapca56" ;; "EV5.7 (21164PC)") UNAME_MACHINE="alphapca57" ;; "EV6 (21264)") UNAME_MACHINE="alphaev6" ;; "EV6.7 (21264A)") UNAME_MACHINE="alphaev67" ;; "EV6.8CB (21264C)") UNAME_MACHINE="alphaev68" ;; "EV6.8AL (21264B)") UNAME_MACHINE="alphaev68" ;; "EV6.8CX (21264D)") UNAME_MACHINE="alphaev68" ;; "EV6.9A (21264/EV69A)") UNAME_MACHINE="alphaev69" ;; "EV7 (21364)") UNAME_MACHINE="alphaev7" ;; "EV7.9 (21364A)") UNAME_MACHINE="alphaev79" ;; esac # A Pn.n version is a patched version. # A Vn.n version is a released version. # A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` # Reset EXIT trap before exiting to avoid spurious non-zero exit code. exitcode=$? trap '' 0 exit $exitcode ;; Alpha\ *:Windows_NT*:*) # How do we know it's Interix rather than the generic POSIX subsystem? # Should we change UNAME_MACHINE based on the output of uname instead # of the specific Alpha model? echo alpha-pc-interix exit ;; 21064:Windows_NT:50:3) echo alpha-dec-winnt3.5 exit ;; Amiga*:UNIX_System_V:4.0:*) echo m68k-unknown-sysv4 exit ;; *:[Aa]miga[Oo][Ss]:*:*) echo ${UNAME_MACHINE}-unknown-amigaos exit ;; *:[Mm]orph[Oo][Ss]:*:*) echo ${UNAME_MACHINE}-unknown-morphos exit ;; *:OS/390:*:*) echo i370-ibm-openedition exit ;; *:z/VM:*:*) echo s390-ibm-zvmoe exit ;; *:OS400:*:*) echo powerpc-ibm-os400 exit ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) echo arm-acorn-riscix${UNAME_RELEASE} exit ;; arm*:riscos:*:*|arm*:RISCOS:*:*) echo arm-unknown-riscos exit ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) echo hppa1.1-hitachi-hiuxmpp exit ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. if test "`(/bin/universe) 2>/dev/null`" = att ; then echo pyramid-pyramid-sysv3 else echo pyramid-pyramid-bsd fi exit ;; NILE*:*:*:dcosx) echo pyramid-pyramid-svr4 exit ;; DRS?6000:unix:4.0:6*) echo sparc-icl-nx6 exit ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in sparc) echo sparc-icl-nx7; exit ;; esac ;; s390x:SunOS:*:*) echo ${UNAME_MACHINE}-ibm-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4H:SunOS:5.*:*) echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) echo i386-pc-auroraux${UNAME_RELEASE} exit ;; i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) eval $set_cc_for_build SUN_ARCH="i386" # If there is a compiler, see if it is configured for 64-bit objects. # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. # This test works for both compilers. 
if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then SUN_ARCH="x86_64" fi fi echo ${SUN_ARCH}-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4*:SunOS:*:*) case "`/usr/bin/arch -k`" in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like `4.1.3-JL'. echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'` exit ;; sun3*:SunOS:*:*) echo m68k-sun-sunos${UNAME_RELEASE} exit ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3 case "`/bin/arch`" in sun3) echo m68k-sun-sunos${UNAME_RELEASE} ;; sun4) echo sparc-sun-sunos${UNAME_RELEASE} ;; esac exit ;; aushp:SunOS:*:*) echo sparc-auspex-sunos${UNAME_RELEASE} exit ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor # > m68000). The system name ranges from "MiNT" over "FreeMiNT" # to the lowercase version "mint" (or "freemint"). Finally # the system name "TOS" denotes a system which is actually not # MiNT. But MiNT is downward compatible to TOS, so this should # be no problem. atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} exit ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} exit ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} exit ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) echo m68k-milan-mint${UNAME_RELEASE} exit ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) echo m68k-hades-mint${UNAME_RELEASE} exit ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) echo m68k-unknown-mint${UNAME_RELEASE} exit ;; m68k:machten:*:*) echo m68k-apple-machten${UNAME_RELEASE} exit ;; powerpc:machten:*:*) echo powerpc-apple-machten${UNAME_RELEASE} exit ;; RISC*:Mach:*:*) echo mips-dec-mach_bsd4.3 exit ;; RISC*:ULTRIX:*:*) echo mips-dec-ultrix${UNAME_RELEASE} exit ;; VAX*:ULTRIX*:*:*) echo vax-dec-ultrix${UNAME_RELEASE} exit ;; 2020:CLIX:*:* | 2430:CLIX:*:*) echo clipper-intergraph-clix${UNAME_RELEASE} exit ;; mips:*:*:UMIPS | mips:*:*:RISCos) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #ifdef __cplusplus #include /* for printf() prototype */ int main (int argc, char *argv[]) { #else int main (argc, argv) int argc; char *argv[]; { #endif #if defined (host_mips) && defined (MIPSEB) #if defined (SYSTYPE_SYSV) printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_SVR4) printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0); #endif #endif exit (-1); } EOF $CC_FOR_BUILD -o $dummy $dummy.c && dummyarg=`echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` && SYSTEM_NAME=`$dummy $dummyarg` && { echo "$SYSTEM_NAME"; exit; } echo mips-mips-riscos${UNAME_RELEASE} exit ;; Motorola:PowerMAX_OS:*:*) echo powerpc-motorola-powermax exit ;; 
Motorola:*:4.3:PL8-*) echo powerpc-harris-powermax exit ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) echo powerpc-harris-powermax exit ;; Night_Hawk:Power_UNIX:*:*) echo powerpc-harris-powerunix exit ;; m88k:CX/UX:7*:*) echo m88k-harris-cxux7 exit ;; m88k:*:4*:R4*) echo m88k-motorola-sysv4 exit ;; m88k:*:3*:R3*) echo m88k-motorola-sysv3 exit ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if [ $UNAME_PROCESSOR = mc88100 ] || [ $UNAME_PROCESSOR = mc88110 ] then if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx ] || \ [ ${TARGET_BINARY_INTERFACE}x = x ] then echo m88k-dg-dgux${UNAME_RELEASE} else echo m88k-dg-dguxbcs${UNAME_RELEASE} fi else echo i586-dg-dgux${UNAME_RELEASE} fi exit ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) echo m88k-dolphin-sysv3 exit ;; M88*:*:R3*:*) # Delta 88k system running SVR3 echo m88k-motorola-sysv3 exit ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) echo m88k-tektronix-sysv3 exit ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) echo m68k-tektronix-bsd exit ;; *:IRIX*:*:*) echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'` exit ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id exit ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) echo i386-ibm-aix exit ;; ia64:AIX:*:*) if [ -x /usr/bin/oslevel ] ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${UNAME_MACHINE}-ibm-aix${IBM_REV} exit ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #include main() { if (!__power_pc()) exit(1); puts("powerpc-ibm-aix3.2.5"); exit(0); } EOF if $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` then echo "$SYSTEM_NAME" else echo rs6000-ibm-aix3.2.5 fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then echo rs6000-ibm-aix3.2.4 else echo rs6000-ibm-aix3.2 fi exit ;; *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if [ -x /usr/bin/oslevel ] ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${IBM_ARCH}-ibm-aix${IBM_REV} exit ;; *:AIX:*:*) echo rs6000-ibm-aix exit ;; ibmrt:4.4BSD:*|romp-ibm:BSD:*) echo romp-ibm-bsd4.4 exit ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to exit ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) echo rs6000-bull-bosx exit ;; DPX/2?00:B.O.S.:*:*) echo m68k-bull-sysv3 exit ;; 9000/[34]??:4.3bsd:1.*:*) echo m68k-hp-bsd exit ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) echo m68k-hp-bsd4.4 exit ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` case "${UNAME_MACHINE}" in 9000/31? ) HP_ARCH=m68000 ;; 9000/[34]?? 
) HP_ARCH=m68k ;; 9000/[678][0-9][0-9]) if [ -x /usr/bin/getconf ]; then sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` case "${sc_cpu_version}" in 523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0 528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1 532) # CPU_PA_RISC2_0 case "${sc_kernel_bits}" in 32) HP_ARCH="hppa2.0n" ;; 64) HP_ARCH="hppa2.0w" ;; '') HP_ARCH="hppa2.0" ;; # HP-UX 10.20 esac ;; esac fi if [ "${HP_ARCH}" = "" ]; then eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #define _HPUX_SOURCE #include #include int main () { #if defined(_SC_KERNEL_BITS) long bits = sysconf(_SC_KERNEL_BITS); #endif long cpu = sysconf (_SC_CPU_VERSION); switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0"); break; case CPU_PA_RISC1_1: puts ("hppa1.1"); break; case CPU_PA_RISC2_0: #if defined(_SC_KERNEL_BITS) switch (bits) { case 64: puts ("hppa2.0w"); break; case 32: puts ("hppa2.0n"); break; default: puts ("hppa2.0"); break; } break; #else /* !defined(_SC_KERNEL_BITS) */ puts ("hppa2.0"); break; #endif default: puts ("hppa1.0"); break; } exit (0); } EOF (CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy` test -z "$HP_ARCH" && HP_ARCH=hppa fi ;; esac if [ ${HP_ARCH} = "hppa2.0w" ] then eval $set_cc_for_build # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler # generating 64-bit code. GNU and HP use different nomenclature: # # $ CC_FOR_BUILD=cc ./config.guess # => hppa2.0w-hp-hpux11.23 # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess # => hppa64-hp-hpux11.23 if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | grep -q __LP64__ then HP_ARCH="hppa2.0w" else HP_ARCH="hppa64" fi fi echo ${HP_ARCH}-hp-hpux${HPUX_REV} exit ;; ia64:HP-UX:*:*) HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` echo ia64-hp-hpux${HPUX_REV} exit ;; 3050*:HI-UX:*:*) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #include int main () { long cpu = sysconf (_SC_CPU_VERSION); /* The order matters, because CPU_IS_HP_MC68K erroneously returns true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct results, however. 
*/ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` && { echo "$SYSTEM_NAME"; exit; } echo unknown-hitachi-hiuxwe2 exit ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* ) echo hppa1.1-hp-bsd exit ;; 9000/8??:4.3bsd:*:*) echo hppa1.0-hp-bsd exit ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) echo hppa1.0-hp-mpeix exit ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* ) echo hppa1.1-hp-osf exit ;; hp8??:OSF1:*:*) echo hppa1.0-hp-osf exit ;; i*86:OSF1:*:*) if [ -x /usr/sbin/sysversion ] ; then echo ${UNAME_MACHINE}-unknown-osf1mk else echo ${UNAME_MACHINE}-unknown-osf1 fi exit ;; parisc*:Lites*:*:*) echo hppa1.1-hp-lites exit ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) echo c1-convex-bsd exit ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) echo c34-convex-bsd exit ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) echo c38-convex-bsd exit ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) echo c4-convex-bsd exit ;; CRAY*Y-MP:*:*:*) echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; CRAY*[A-Z]90:*:*:*) echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' exit ;; CRAY*TS:*:*:*) echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; CRAY*T3E:*:*:*) echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; CRAY*SV1:*:*:*) echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; *:UNICOS/mp:*:*) echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'` echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" exit ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'` echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" exit ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*) echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE} exit ;; sparc*:BSD/OS:*:*) echo sparc-unknown-bsdi${UNAME_RELEASE} exit ;; *:BSD/OS:*:*) echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE} exit ;; *:FreeBSD:*:*) UNAME_PROCESSOR=`/usr/bin/uname -p` case ${UNAME_PROCESSOR} in amd64) echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; *) echo ${UNAME_PROCESSOR}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; esac exit ;; i*:CYGWIN*:*) echo ${UNAME_MACHINE}-pc-cygwin exit ;; *:MINGW64*:*) echo ${UNAME_MACHINE}-pc-mingw64 exit ;; *:MINGW*:*) echo ${UNAME_MACHINE}-pc-mingw32 exit ;; i*:MSYS*:*) echo ${UNAME_MACHINE}-pc-msys exit ;; i*:windows32*:*) # uname -m includes "-pc" on this system. 
echo ${UNAME_MACHINE}-mingw32 exit ;; i*:PW*:*) echo ${UNAME_MACHINE}-pc-pw32 exit ;; *:Interix*:*) case ${UNAME_MACHINE} in x86) echo i586-pc-interix${UNAME_RELEASE} exit ;; authenticamd | genuineintel | EM64T) echo x86_64-unknown-interix${UNAME_RELEASE} exit ;; IA64) echo ia64-unknown-interix${UNAME_RELEASE} exit ;; esac ;; [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*) echo i${UNAME_MACHINE}-pc-mks exit ;; 8664:Windows_NT:*) echo x86_64-pc-mks exit ;; i*:Windows_NT*:* | Pentium*:Windows_NT*:*) # How do we know it's Interix rather than the generic POSIX subsystem? # It also conflicts with pre-2.0 versions of AT&T UWIN. Should we # UNAME_MACHINE based on the output of uname instead of i386? echo i586-pc-interix exit ;; i*:UWIN*:*) echo ${UNAME_MACHINE}-pc-uwin exit ;; amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) echo x86_64-unknown-cygwin exit ;; p*:CYGWIN*:*) echo powerpcle-unknown-cygwin exit ;; prep*:SunOS:5.*:*) echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; *:GNU:*:*) # the GNU system echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-${LIBC}`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'` exit ;; *:GNU/*:*:*) # other systems with GNU libc and userland echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-${LIBC} exit ;; i*86:Minix:*:*) echo ${UNAME_MACHINE}-pc-minix exit ;; aarch64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; aarch64_be:Linux:*:*) UNAME_MACHINE=aarch64_be echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in EV5) UNAME_MACHINE=alphaev5 ;; EV56) UNAME_MACHINE=alphaev56 ;; PCA56) UNAME_MACHINE=alphapca56 ;; PCA57) UNAME_MACHINE=alphapca56 ;; EV6) UNAME_MACHINE=alphaev6 ;; EV67) UNAME_MACHINE=alphaev67 ;; EV68*) UNAME_MACHINE=alphaev68 ;; esac objdump --private-headers /bin/sh | grep -q ld.so.1 if test "$?" 
= 0 ; then LIBC="gnulibc1" ; fi echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; arc:Linux:*:* | arceb:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; arm*:Linux:*:*) eval $set_cc_for_build if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_EABI__ then echo ${UNAME_MACHINE}-unknown-linux-${LIBC} else if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then echo ${UNAME_MACHINE}-unknown-linux-${LIBC}eabi else echo ${UNAME_MACHINE}-unknown-linux-${LIBC}eabihf fi fi exit ;; avr32*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; cris:Linux:*:*) echo ${UNAME_MACHINE}-axis-linux-${LIBC} exit ;; crisv32:Linux:*:*) echo ${UNAME_MACHINE}-axis-linux-${LIBC} exit ;; frv:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; hexagon:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; i*86:Linux:*:*) echo ${UNAME_MACHINE}-pc-linux-${LIBC} exit ;; ia64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; m32r*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; m68*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; mips:Linux:*:* | mips64:Linux:*:*) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #undef CPU #undef ${UNAME_MACHINE} #undef ${UNAME_MACHINE}el #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) CPU=${UNAME_MACHINE}el #else #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) CPU=${UNAME_MACHINE} #else CPU= #endif #endif EOF eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'` test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; } ;; or1k:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; or32:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; padre:Linux:*:*) echo sparc-unknown-linux-${LIBC} exit ;; parisc64:Linux:*:* | hppa64:Linux:*:*) echo hppa64-unknown-linux-${LIBC} exit ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in PA7*) echo hppa1.1-unknown-linux-${LIBC} ;; PA8*) echo hppa2.0-unknown-linux-${LIBC} ;; *) echo hppa-unknown-linux-${LIBC} ;; esac exit ;; ppc64:Linux:*:*) echo powerpc64-unknown-linux-${LIBC} exit ;; ppc:Linux:*:*) echo powerpc-unknown-linux-${LIBC} exit ;; ppc64le:Linux:*:*) echo powerpc64le-unknown-linux-${LIBC} exit ;; ppcle:Linux:*:*) echo powerpcle-unknown-linux-${LIBC} exit ;; s390:Linux:*:* | s390x:Linux:*:*) echo ${UNAME_MACHINE}-ibm-linux-${LIBC} exit ;; sh64*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; sh*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; sparc:Linux:*:* | sparc64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; tile*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; vax:Linux:*:*) echo ${UNAME_MACHINE}-dec-linux-${LIBC} exit ;; x86_64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; xtensa*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-${LIBC} exit ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. echo i386-sequent-sysv4 exit ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... # I am not positive that other SVR4 systems won't match this, # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. 
echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION} exit ;; i*86:OS/2:*:*) # If we were able to find `uname', then EMX Unix compatibility # is probably installed. echo ${UNAME_MACHINE}-pc-os2-emx exit ;; i*86:XTS-300:*:STOP) echo ${UNAME_MACHINE}-unknown-stop exit ;; i*86:atheos:*:*) echo ${UNAME_MACHINE}-unknown-atheos exit ;; i*86:syllable:*:*) echo ${UNAME_MACHINE}-pc-syllable exit ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) echo i386-unknown-lynxos${UNAME_RELEASE} exit ;; i*86:*DOS:*:*) echo ${UNAME_MACHINE}-pc-msdosdjgpp exit ;; i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*) UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then echo ${UNAME_MACHINE}-univel-sysv${UNAME_REL} else echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL} fi exit ;; i*86:*:5:[678]*) # UnixWare 7.x, OpenUNIX and OpenServer 6. case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} exit ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' /dev/null >/dev/null ; then UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ && UNAME_MACHINE=i586 (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ && UNAME_MACHINE=i686 (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ && UNAME_MACHINE=i686 echo ${UNAME_MACHINE}-pc-sco$UNAME_REL else echo ${UNAME_MACHINE}-pc-sysv32 fi exit ;; pc:*:*:*) # Left here for compatibility: # uname -m prints for DJGPP always 'pc', but it prints nothing about # the processor, so we play safe by assuming i586. # Note: whatever this is, it MUST be the same as what config.sub # prints for the "djgpp" host, or else GDB configury will decide that # this is a cross-build. echo i586-pc-msdosdjgpp exit ;; Intel:Mach:3*:*) echo i386-pc-mach3 exit ;; paragon:*:*:*) echo i860-intel-osf1 exit ;; i860:*:4.*:*) # i860-SVR4 if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered. 
echo i860-unknown-sysv${UNAME_RELEASE} # Unknown i860-SVR4 fi exit ;; mini*:CTIX:SYS*5:*) # "miniframe" echo m68010-convergent-sysv exit ;; mc68k:UNIX:SYSTEM5:3.51m) echo m68k-convergent-sysv exit ;; M680?0:D-NIX:5.3:*) echo m68k-diab-dnix exit ;; M68*:*:R3V[5678]*:*) test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4; exit; } ;; NCR*:*:4.2:* | MPRAS*:*:4.2:*) OS_REL='.3' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) echo m68k-unknown-lynxos${UNAME_RELEASE} exit ;; mc68030:UNIX_System_V:4.*:*) echo m68k-atari-sysv4 exit ;; TSUNAMI:LynxOS:2.*:*) echo sparc-unknown-lynxos${UNAME_RELEASE} exit ;; rs6000:LynxOS:2.*:*) echo rs6000-unknown-lynxos${UNAME_RELEASE} exit ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) echo powerpc-unknown-lynxos${UNAME_RELEASE} exit ;; SM[BE]S:UNIX_SV:*:*) echo mips-dde-sysv${UNAME_RELEASE} exit ;; RM*:ReliantUNIX-*:*:*) echo mips-sni-sysv4 exit ;; RM*:SINIX-*:*:*) echo mips-sni-sysv4 exit ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 2>/dev/null` echo ${UNAME_MACHINE}-sni-sysv4 else echo ns32k-sni-sysv fi exit ;; PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort # says echo i586-unisys-sysv4 exit ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes . # How about differentiating between stratus architectures? -djm echo hppa1.1-stratus-sysv4 exit ;; *:*:*:FTX*) # From seanf@swdc.stratus.com. echo i860-stratus-sysv4 exit ;; i*86:VOS:*:*) # From Paul.Green@stratus.com. echo ${UNAME_MACHINE}-stratus-vos exit ;; *:VOS:*:*) # From Paul.Green@stratus.com. echo hppa1.1-stratus-vos exit ;; mc68*:A/UX:*:*) echo m68k-apple-aux${UNAME_RELEASE} exit ;; news*:NEWS-OS:6*:*) echo mips-sony-newsos6 exit ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if [ -d /usr/nec ]; then echo mips-nec-sysv${UNAME_RELEASE} else echo mips-unknown-sysv${UNAME_RELEASE} fi exit ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. echo powerpc-be-beos exit ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. echo powerpc-apple-beos exit ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. echo i586-pc-beos exit ;; BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
echo i586-pc-haiku exit ;; x86_64:Haiku:*:*) echo x86_64-unknown-haiku exit ;; SX-4:SUPER-UX:*:*) echo sx4-nec-superux${UNAME_RELEASE} exit ;; SX-5:SUPER-UX:*:*) echo sx5-nec-superux${UNAME_RELEASE} exit ;; SX-6:SUPER-UX:*:*) echo sx6-nec-superux${UNAME_RELEASE} exit ;; SX-7:SUPER-UX:*:*) echo sx7-nec-superux${UNAME_RELEASE} exit ;; SX-8:SUPER-UX:*:*) echo sx8-nec-superux${UNAME_RELEASE} exit ;; SX-8R:SUPER-UX:*:*) echo sx8r-nec-superux${UNAME_RELEASE} exit ;; Power*:Rhapsody:*:*) echo powerpc-apple-rhapsody${UNAME_RELEASE} exit ;; *:Rhapsody:*:*) echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE} exit ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown eval $set_cc_for_build if test "$UNAME_PROCESSOR" = unknown ; then UNAME_PROCESSOR=powerpc fi if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then case $UNAME_PROCESSOR in i386) UNAME_PROCESSOR=x86_64 ;; powerpc) UNAME_PROCESSOR=powerpc64 ;; esac fi fi echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE} exit ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = "x86"; then UNAME_PROCESSOR=i386 UNAME_MACHINE=pc fi echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE} exit ;; *:QNX:*:4*) echo i386-pc-qnx exit ;; NEO-?:NONSTOP_KERNEL:*:*) echo neo-tandem-nsk${UNAME_RELEASE} exit ;; NSE-*:NONSTOP_KERNEL:*:*) echo nse-tandem-nsk${UNAME_RELEASE} exit ;; NSR-?:NONSTOP_KERNEL:*:*) echo nsr-tandem-nsk${UNAME_RELEASE} exit ;; *:NonStop-UX:*:*) echo mips-compaq-nonstopux exit ;; BS2000:POSIX*:*:*) echo bs2000-siemens-sysv exit ;; DS/*:UNIX_System_V:*:*) echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE} exit ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 386 # is converted to i386 for consistency with other x86 # operating systems. if test "$cputype" = "386"; then UNAME_MACHINE=i386 else UNAME_MACHINE="$cputype" fi echo ${UNAME_MACHINE}-unknown-plan9 exit ;; *:TOPS-10:*:*) echo pdp10-unknown-tops10 exit ;; *:TENEX:*:*) echo pdp10-unknown-tenex exit ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) echo pdp10-dec-tops20 exit ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) echo pdp10-xkl-tops20 exit ;; *:TOPS-20:*:*) echo pdp10-unknown-tops20 exit ;; *:ITS:*:*) echo pdp10-unknown-its exit ;; SEI:*:*:SEIUX) echo mips-sei-seiux${UNAME_RELEASE} exit ;; *:DragonFly:*:*) echo ${UNAME_MACHINE}-unknown-dragonfly`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` exit ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case "${UNAME_MACHINE}" in A*) echo alpha-dec-vms ; exit ;; I*) echo ia64-dec-vms ; exit ;; V*) echo vax-dec-vms ; exit ;; esac ;; *:XENIX:*:SysV) echo i386-pc-xenix exit ;; i*86:skyos:*:*) echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//' exit ;; i*86:rdos:*:*) echo ${UNAME_MACHINE}-pc-rdos exit ;; i*86:AROS:*:*) echo ${UNAME_MACHINE}-pc-aros exit ;; x86_64:VMkernel:*:*) echo ${UNAME_MACHINE}-unknown-esx exit ;; esac eval $set_cc_for_build cat >$dummy.c < # include #endif main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know.... 
*/ printf ("mips-sony-bsd\n"); exit (0); #else #include printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (__arm) && defined (__acorn) && defined (__unix) printf ("arm-acorn-riscix\n"); exit (0); #endif #if defined (hp300) && !defined (hpux) printf ("m68k-hp-bsd\n"); exit (0); #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) # if !defined (ultrix) # include # if defined (BSD) # if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); # else # if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); # else printf ("vax-dec-bsd\n"); exit (0); # endif # endif # else printf ("vax-dec-bsd\n"); exit (0); # endif # else printf ("vax-dec-ultrix\n"); exit (0); # endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && SYSTEM_NAME=`$dummy` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit; } # Convex versions that predate uname can use getsysinfo(1) if [ -x /usr/convex/getsysinfo ] then case `getsysinfo -f cpu_type` in c1*) echo c1-convex-bsd exit ;; c2*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; c34*) echo c34-convex-bsd exit ;; c38*) echo c38-convex-bsd exit ;; c4*) echo c4-convex-bsd exit ;; esac fi cat >&2 < in order to provide the needed information to handle your system. 
config.guess timestamp = $timestamp uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = ${UNAME_MACHINE} UNAME_RELEASE = ${UNAME_RELEASE} UNAME_SYSTEM = ${UNAME_SYSTEM} UNAME_VERSION = ${UNAME_VERSION} EOF exit 1 # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End:
vmem-1.8/src/jemalloc/config.stamp.in
vmem-1.8/src/jemalloc/config.sub
#! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2013 Free Software Foundation, Inc. timestamp='2013-10-01' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches with a ChangeLog entry to config-patches@gnu.org. # # Configuration subroutine to validate and canonicalize a configuration type.
# Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS $0 [OPTION] ALIAS Canonicalize a configuration name. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.sub ($timestamp) Copyright 1992-2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" exit 1 ;; *local*) # First pass through any local machine types. echo $1 exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any). # Here we must recognize all the valid KERNEL-OS combinations. maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` case $maybe_os in nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ knetbsd*-gnu* | netbsd*-gnu* | \ kopensolaris*-gnu* | \ storm-chaos* | os2-emx* | rtmk-nova*) os=-$maybe_os basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` ;; android-linux) os=-linux-android basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`-unknown ;; *) basic_machine=`echo $1 | sed 's/-[^-]*$//'` if [ $basic_machine != $1 ] then os=`echo $1 | sed 's/.*-/-/'` else os=; fi ;; esac ### Let's recognize common machines as not being operating systems so ### that things like config.sub decstation-3100 work. We also ### recognize some manufacturers as not being operating systems, so we ### can provide default operating systems below. case $os in -sun*os*) # Prevent following clause from handling this invalid input. 
;; -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \ -apple | -axis | -knuth | -cray | -microblaze*) os= basic_machine=$1 ;; -bluegene*) os=-cnk ;; -sim | -cisco | -oki | -wec | -winbond) os= basic_machine=$1 ;; -scout) ;; -wrs) os=-vxworks basic_machine=$1 ;; -chorusos*) os=-chorusos basic_machine=$1 ;; -chorusrdb) os=-chorusrdb basic_machine=$1 ;; -hiux*) os=-hiuxwe2 ;; -sco6) os=-sco5v6 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5) os=-sco3.2v5 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco4) os=-sco3.2v4 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2.[4-9]*) os=`echo $os | sed -e 's/sco3.2./sco3.2v/'` basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2v[4-9]*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5v6*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco*) os=-sco3.2v2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -udk*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -isc) os=-isc2.2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -clix*) basic_machine=clipper-intergraph ;; -isc*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -lynx*178) os=-lynxos178 ;; -lynx*5) os=-lynxos5 ;; -lynx*) os=-lynxos ;; -ptx*) basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'` ;; -windowsnt*) os=`echo $os | sed -e 's/windowsnt/winnt/'` ;; -psos*) os=-psos ;; -mint | -mint[0-9]*) basic_machine=m68k-atari os=-mint ;; esac # Decode aliases for certain CPU-COMPANY combinations. case $basic_machine in # Recognize the basic CPU types without company name. # Some are omitted here because they have special meanings below. 
1750a | 580 \ | a29k \ | aarch64 | aarch64_be \ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ | am33_2.0 \ | arc | arceb \ | arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \ | avr | avr32 \ | be32 | be64 \ | bfin \ | c4x | c8051 | clipper \ | d10v | d30v | dlx | dsp16xx \ | epiphany \ | fido | fr30 | frv \ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ | hexagon \ | i370 | i860 | i960 | ia64 \ | ip2k | iq2000 \ | k1om \ | le32 | le64 \ | lm32 \ | m32c | m32r | m32rle | m68000 | m68k | m88k \ | maxq | mb | microblaze | microblazeel | mcore | mep | metag \ | mips | mipsbe | mipseb | mipsel | mipsle \ | mips16 \ | mips64 | mips64el \ | mips64octeon | mips64octeonel \ | mips64orion | mips64orionel \ | mips64r5900 | mips64r5900el \ | mips64vr | mips64vrel \ | mips64vr4100 | mips64vr4100el \ | mips64vr4300 | mips64vr4300el \ | mips64vr5000 | mips64vr5000el \ | mips64vr5900 | mips64vr5900el \ | mipsisa32 | mipsisa32el \ | mipsisa32r2 | mipsisa32r2el \ | mipsisa64 | mipsisa64el \ | mipsisa64r2 | mipsisa64r2el \ | mipsisa64sb1 | mipsisa64sb1el \ | mipsisa64sr71k | mipsisa64sr71kel \ | mipsr5900 | mipsr5900el \ | mipstx39 | mipstx39el \ | mn10200 | mn10300 \ | moxie \ | mt \ | msp430 \ | nds32 | nds32le | nds32be \ | nios | nios2 | nios2eb | nios2el \ | ns16k | ns32k \ | open8 \ | or1k | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle \ | pyramid \ | rl78 | rx \ | score \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | spu \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | ubicom32 \ | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ | we32k \ | x86 | xc16x | xstormy16 | xtensa \ | z8k | z80) basic_machine=$basic_machine-unknown ;; c54x) basic_machine=tic54x-unknown ;; c55x) basic_machine=tic55x-unknown ;; c6x) basic_machine=tic6x-unknown ;; m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip) basic_machine=$basic_machine-unknown os=-none ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; ms1) basic_machine=mt-unknown ;; strongarm | thumb | xscale) basic_machine=arm-unknown ;; xgate) basic_machine=$basic_machine-unknown os=-none ;; xscaleeb) basic_machine=armeb-unknown ;; xscaleel) basic_machine=armel-unknown ;; # We use `pc' rather than `unknown' # because (1) that's what they normally are, and # (2) the word "unknown" tends to confuse beginning users. i*86 | x86_64) basic_machine=$basic_machine-pc ;; # Object if more than one company name word. *-*-*) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; # Recognize the basic CPU types with company name. 
580-* \ | a29k-* \ | aarch64-* | aarch64_be-* \ | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* | arceb-* \ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \ | avr-* | avr32-* \ | be32-* | be64-* \ | bfin-* | bs2000-* \ | c[123]* | c30-* | [cjt]90-* | c4x-* \ | c8051-* | clipper-* | craynv-* | cydra-* \ | d10v-* | d30v-* | dlx-* \ | elxsi-* \ | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ | h8300-* | h8500-* \ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ | hexagon-* \ | i*86-* | i860-* | i960-* | ia64-* \ | ip2k-* | iq2000-* \ | k1om-* \ | le32-* | le64-* \ | lm32-* \ | m32c-* | m32r-* | m32rle-* \ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ | m88110-* | m88k-* | maxq-* | mcore-* | metag-* \ | microblaze-* | microblazeel-* \ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ | mips16-* \ | mips64-* | mips64el-* \ | mips64octeon-* | mips64octeonel-* \ | mips64orion-* | mips64orionel-* \ | mips64r5900-* | mips64r5900el-* \ | mips64vr-* | mips64vrel-* \ | mips64vr4100-* | mips64vr4100el-* \ | mips64vr4300-* | mips64vr4300el-* \ | mips64vr5000-* | mips64vr5000el-* \ | mips64vr5900-* | mips64vr5900el-* \ | mipsisa32-* | mipsisa32el-* \ | mipsisa32r2-* | mipsisa32r2el-* \ | mipsisa64-* | mipsisa64el-* \ | mipsisa64r2-* | mipsisa64r2el-* \ | mipsisa64sb1-* | mipsisa64sb1el-* \ | mipsisa64sr71k-* | mipsisa64sr71kel-* \ | mipsr5900-* | mipsr5900el-* \ | mipstx39-* | mipstx39el-* \ | mmix-* \ | mt-* \ | msp430-* \ | nds32-* | nds32le-* | nds32be-* \ | nios-* | nios2-* | nios2eb-* | nios2el-* \ | none-* | np1-* | ns16k-* | ns32k-* \ | open8-* \ | orion-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ | pyramid-* \ | rl78-* | romp-* | rs6000-* | rx-* \ | sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ | sparclite-* \ | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx?-* \ | tahoe-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tile*-* \ | tron-* \ | ubicom32-* \ | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ | vax-* \ | we32k-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \ | xstormy16-* | xtensa*-* \ | ymp-* \ | z8k-* | z80-*) ;; # Recognize the basic CPU types without company name, with glob match. xtensa*) basic_machine=$basic_machine-unknown ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
386bsd) basic_machine=i386-unknown os=-bsd ;; 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) basic_machine=m68000-att ;; 3b*) basic_machine=we32k-att ;; a29khif) basic_machine=a29k-amd os=-udi ;; abacus) basic_machine=abacus-unknown ;; adobe68k) basic_machine=m68010-adobe os=-scout ;; alliant | fx80) basic_machine=fx80-alliant ;; altos | altos3068) basic_machine=m68k-altos ;; am29k) basic_machine=a29k-none os=-bsd ;; amd64) basic_machine=x86_64-pc ;; amd64-*) basic_machine=x86_64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; amdahl) basic_machine=580-amdahl os=-sysv ;; amiga | amiga-*) basic_machine=m68k-unknown ;; amigaos | amigados) basic_machine=m68k-unknown os=-amigaos ;; amigaunix | amix) basic_machine=m68k-unknown os=-sysv4 ;; apollo68) basic_machine=m68k-apollo os=-sysv ;; apollo68bsd) basic_machine=m68k-apollo os=-bsd ;; aros) basic_machine=i386-pc os=-aros ;; aux) basic_machine=m68k-apple os=-aux ;; balance) basic_machine=ns32k-sequent os=-dynix ;; blackfin) basic_machine=bfin-unknown os=-linux ;; blackfin-*) basic_machine=bfin-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; bluegene*) basic_machine=powerpc-ibm os=-cnk ;; c54x-*) basic_machine=tic54x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c55x-*) basic_machine=tic55x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c6x-*) basic_machine=tic6x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c90) basic_machine=c90-cray os=-unicos ;; cegcc) basic_machine=arm-unknown os=-cegcc ;; convex-c1) basic_machine=c1-convex os=-bsd ;; convex-c2) basic_machine=c2-convex os=-bsd ;; convex-c32) basic_machine=c32-convex os=-bsd ;; convex-c34) basic_machine=c34-convex os=-bsd ;; convex-c38) basic_machine=c38-convex os=-bsd ;; cray | j90) basic_machine=j90-cray os=-unicos ;; craynv) basic_machine=craynv-cray os=-unicosmp ;; cr16 | cr16-*) basic_machine=cr16-unknown os=-elf ;; crds | unos) basic_machine=m68k-crds ;; crisv32 | crisv32-* | etraxfs*) basic_machine=crisv32-axis ;; cris | cris-* | etrax*) basic_machine=cris-axis ;; crx) basic_machine=crx-unknown os=-elf ;; da30 | da30-*) basic_machine=m68k-da30 ;; decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn) basic_machine=mips-dec ;; decsystem10* | dec10*) basic_machine=pdp10-dec os=-tops10 ;; decsystem20* | dec20*) basic_machine=pdp10-dec os=-tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) basic_machine=m68k-motorola ;; delta88) basic_machine=m88k-motorola os=-sysv3 ;; dicos) basic_machine=i686-pc os=-dicos ;; djgpp) basic_machine=i586-pc os=-msdosdjgpp ;; dpx20 | dpx20-*) basic_machine=rs6000-bull os=-bosx ;; dpx2* | dpx2*-bull) basic_machine=m68k-bull os=-sysv3 ;; ebmon29k) basic_machine=a29k-amd os=-ebmon ;; elxsi) basic_machine=elxsi-elxsi os=-bsd ;; encore | umax | mmax) basic_machine=ns32k-encore ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson os=-ose ;; fx2800) basic_machine=i860-alliant ;; genix) basic_machine=ns32k-ns ;; gmicro) basic_machine=tron-gmicro os=-sysv ;; go32) basic_machine=i386-pc os=-go32 ;; h3050r* | hiux*) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; h8300hms) basic_machine=h8300-hitachi os=-hms ;; h8300xray) basic_machine=h8300-hitachi os=-xray ;; h8500hms) basic_machine=h8500-hitachi os=-hms ;; harris) basic_machine=m88k-harris os=-sysv3 ;; hp300-*) basic_machine=m68k-hp ;; hp300bsd) basic_machine=m68k-hp os=-bsd ;; hp300hpux) basic_machine=m68k-hp os=-hpux ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) 
basic_machine=m68000-hp ;; hp9k3[2-9][0-9]) basic_machine=m68k-hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) basic_machine=hppa1.1-hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) basic_machine=hppa1.1-hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) basic_machine=hppa1.0-hp ;; hppa-next) os=-nextstep3 ;; hppaosf) basic_machine=hppa1.1-hp os=-osf ;; hppro) basic_machine=hppa1.1-hp os=-proelf ;; i370-ibm* | ibm*) basic_machine=i370-ibm ;; i*86v32) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv32 ;; i*86v4*) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv4 ;; i*86v) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv ;; i*86sol2) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-solaris2 ;; i386mach) basic_machine=i386-mach os=-mach ;; i386-vsta | vsta) basic_machine=i386-unknown os=-vsta ;; iris | iris4d) basic_machine=mips-sgi case $os in -irix*) ;; *) os=-irix4 ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; m68knommu) basic_machine=m68k-unknown os=-linux ;; m68knommu-*) basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; m88k-omron*) basic_machine=m88k-omron ;; magnum | m3230) basic_machine=mips-mips os=-sysv ;; merlin) basic_machine=ns32k-utek os=-sysv ;; microblaze*) basic_machine=microblaze-xilinx ;; mingw64) basic_machine=x86_64-pc os=-mingw64 ;; mingw32) basic_machine=i686-pc os=-mingw32 ;; mingw32ce) basic_machine=arm-unknown os=-mingw32ce ;; miniframe) basic_machine=m68000-convergent ;; *mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*) basic_machine=m68k-atari os=-mint ;; mips3*-*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'` ;; mips3*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown ;; monitor) basic_machine=m68k-rom68k os=-coff ;; morphos) basic_machine=powerpc-unknown os=-morphos ;; msdos) basic_machine=i386-pc os=-msdos ;; ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; msys) basic_machine=i686-pc os=-msys ;; mvs) basic_machine=i370-ibm os=-mvs ;; nacl) basic_machine=le32-unknown os=-nacl ;; ncr3000) basic_machine=i486-ncr os=-sysv4 ;; netbsd386) basic_machine=i386-unknown os=-netbsd ;; netwinder) basic_machine=armv4l-rebel os=-linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony os=-newsos ;; news1000) basic_machine=m68030-sony os=-newsos ;; news-3600 | risc-news) basic_machine=mips-sony os=-newsos ;; necv70) basic_machine=v70-nec os=-sysv ;; next | m*-next ) basic_machine=m68k-next case $os in -nextstep* ) ;; -ns2*) os=-nextstep2 ;; *) os=-nextstep3 ;; esac ;; nh3000) basic_machine=m68k-harris os=-cxux ;; nh[45]000) basic_machine=m88k-harris os=-cxux ;; nindy960) basic_machine=i960-intel os=-nindy ;; mon960) basic_machine=i960-intel os=-mon960 ;; nonstopux) basic_machine=mips-compaq os=-nonstopux ;; np1) basic_machine=np1-gould ;; neo-tandem) basic_machine=neo-tandem ;; nse-tandem) basic_machine=nse-tandem ;; nsr-tandem) basic_machine=nsr-tandem ;; op50n-* | op60c-*) basic_machine=hppa1.1-oki os=-proelf ;; openrisc | openrisc-*) basic_machine=or32-unknown ;; os400) basic_machine=powerpc-ibm os=-os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson os=-ose ;; os68k) basic_machine=m68k-none os=-os68k ;; pa-hitachi) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; paragon) 
basic_machine=i860-intel os=-osf ;; parisc) basic_machine=hppa-unknown os=-linux ;; parisc-*) basic_machine=hppa-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; pbd) basic_machine=sparc-tti ;; pbb) basic_machine=m68k-tti ;; pc532 | pc532-*) basic_machine=ns32k-pc532 ;; pc98) basic_machine=i386-pc ;; pc98-*) basic_machine=i386-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium | p5 | k5 | k6 | nexgen | viac3) basic_machine=i586-pc ;; pentiumpro | p6 | 6x86 | athlon | athlon_*) basic_machine=i686-pc ;; pentiumii | pentium2 | pentiumiii | pentium3) basic_machine=i686-pc ;; pentium4) basic_machine=i786-pc ;; pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*) basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumpro-* | p6-* | 6x86-* | athlon-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium4-*) basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pn) basic_machine=pn-gould ;; power) basic_machine=power-ibm ;; ppc | ppcbe) basic_machine=powerpc-unknown ;; ppc-* | ppcbe-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppcle | powerpclittle | ppc-le | powerpc-little) basic_machine=powerpcle-unknown ;; ppcle-* | powerpclittle-*) basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64) basic_machine=powerpc64-unknown ;; ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64le | powerpc64little | ppc64-le | powerpc64-little) basic_machine=powerpc64le-unknown ;; ppc64le-* | powerpc64little-*) basic_machine=powerpc64le-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ps2) basic_machine=i386-ibm ;; pw32) basic_machine=i586-unknown os=-pw32 ;; rdos | rdos64) basic_machine=x86_64-pc os=-rdos ;; rdos32) basic_machine=i386-pc os=-rdos ;; rom68k) basic_machine=m68k-rom68k os=-coff ;; rm[46]00) basic_machine=mips-siemens ;; rtpc | rtpc-*) basic_machine=romp-ibm ;; s390 | s390-*) basic_machine=s390-ibm ;; s390x | s390x-*) basic_machine=s390x-ibm ;; sa29200) basic_machine=a29k-amd os=-udi ;; sb1) basic_machine=mipsisa64sb1-unknown ;; sb1el) basic_machine=mipsisa64sb1el-unknown ;; sde) basic_machine=mipsisa32-sde os=-elf ;; sei) basic_machine=mips-sei os=-seiux ;; sequent) basic_machine=i386-sequent ;; sh) basic_machine=sh-hitachi os=-hms ;; sh5el) basic_machine=sh5le-unknown ;; sh64) basic_machine=sh64-unknown ;; sparclite-wrs | simso-wrs) basic_machine=sparclite-wrs os=-vxworks ;; sps7) basic_machine=m68k-bull os=-sysv2 ;; spur) basic_machine=spur-unknown ;; st2000) basic_machine=m68k-tandem ;; stratus) basic_machine=i860-stratus os=-sysv4 ;; strongarm-* | thumb-*) basic_machine=arm-`echo $basic_machine | sed 's/^[^-]*-//'` ;; sun2) basic_machine=m68000-sun ;; sun2os3) basic_machine=m68000-sun os=-sunos3 ;; sun2os4) basic_machine=m68000-sun os=-sunos4 ;; sun3os3) basic_machine=m68k-sun os=-sunos3 ;; sun3os4) basic_machine=m68k-sun os=-sunos4 ;; sun4os3) basic_machine=sparc-sun os=-sunos3 ;; sun4os4) basic_machine=sparc-sun os=-sunos4 ;; sun4sol2) basic_machine=sparc-sun os=-solaris2 ;; sun3 | sun3-*) basic_machine=m68k-sun ;; sun4) basic_machine=sparc-sun ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun ;; sv1) basic_machine=sv1-cray os=-unicos ;; symmetry) basic_machine=i386-sequent os=-dynix ;; t3e) basic_machine=alphaev5-cray os=-unicos ;; t90) basic_machine=t90-cray os=-unicos ;; tile*) basic_machine=$basic_machine-unknown os=-linux-gnu ;; tx39) 
basic_machine=mipstx39-unknown ;; tx39el) basic_machine=mipstx39el-unknown ;; toad1) basic_machine=pdp10-xkl os=-tops20 ;; tower | tower-32) basic_machine=m68k-ncr ;; tpf) basic_machine=s390x-ibm os=-tpf ;; udi29k) basic_machine=a29k-amd os=-udi ;; ultra3) basic_machine=a29k-nyu os=-sym1 ;; v810 | necv810) basic_machine=v810-nec os=-none ;; vaxv) basic_machine=vax-dec os=-sysv ;; vms) basic_machine=vax-dec os=-vms ;; vpp*|vx|vx-*) basic_machine=f301-fujitsu ;; vxworks960) basic_machine=i960-wrs os=-vxworks ;; vxworks68) basic_machine=m68k-wrs os=-vxworks ;; vxworks29k) basic_machine=a29k-wrs os=-vxworks ;; w65*) basic_machine=w65-wdc os=-none ;; w89k-*) basic_machine=hppa1.1-winbond os=-proelf ;; xbox) basic_machine=i686-pc os=-mingw32 ;; xps | xps100) basic_machine=xps100-honeywell ;; xscale-* | xscalee[bl]-*) basic_machine=`echo $basic_machine | sed 's/^xscale/arm/'` ;; ymp) basic_machine=ymp-cray os=-unicos ;; z8k-*-coff) basic_machine=z8k-unknown os=-sim ;; z80-*-coff) basic_machine=z80-unknown os=-sim ;; none) basic_machine=none-none os=-none ;; # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) basic_machine=hppa1.1-winbond ;; op50n) basic_machine=hppa1.1-oki ;; op60c) basic_machine=hppa1.1-oki ;; romp) basic_machine=romp-ibm ;; mmix) basic_machine=mmix-knuth ;; rs6000) basic_machine=rs6000-ibm ;; vax) basic_machine=vax-dec ;; pdp10) # there are many clones, so DEC is not a safe bet basic_machine=pdp10-unknown ;; pdp11) basic_machine=pdp11-dec ;; we32k) basic_machine=we32k-att ;; sh[1234] | sh[24]a | sh[24]aeb | sh[34]eb | sh[1234]le | sh[23]ele) basic_machine=sh-unknown ;; sparc | sparcv8 | sparcv9 | sparcv9b | sparcv9v) basic_machine=sparc-sun ;; cydra) basic_machine=cydra-cydrome ;; orion) basic_machine=orion-highlevel ;; orion105) basic_machine=clipper-highlevel ;; mac | mpw | mac-mpw) basic_machine=m68k-apple ;; pmac | pmac-mpw) basic_machine=powerpc-apple ;; *-unknown) # Make sure to match an already-canonicalized machine name. ;; *) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; esac # Here we canonicalize certain aliases for manufacturers. case $basic_machine in *-digital*) basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'` ;; *-commodore*) basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'` ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if [ x"$os" != x"" ] then case $os in # First match some system type aliases # that might get confused with valid system types. # -solaris* is a basic system type, with this one exception. -auroraux) os=-auroraux ;; -solaris1 | -solaris1.*) os=`echo $os | sed -e 's|solaris1|sunos4|'` ;; -solaris) os=-solaris2 ;; -svr4*) os=-sysv4 ;; -unixware*) os=-sysv4.2uw ;; -gnu/linux*) os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'` ;; # First accept the basic system types. # The portable systems comes first. # Each alternative MUST END IN A *, to match a version number. # -sysv* is not here because it comes later, after sysvr4. 
-gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \ | -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \ | -sym* | -kopensolaris* | -plan9* \ | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \ | -aos* | -aros* \ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ | -bitrig* | -openbsd* | -solidbsd* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ | -linux-newlib* | -linux-musl* | -linux-uclibc* \ | -uxpv* | -beos* | -mpeix* | -udk* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es*) # Remember, each alternative MUST END IN *, to match a version number. ;; -qnx*) case $basic_machine in x86-* | i*86-*) ;; *) os=-nto$os ;; esac ;; -nto-qnx*) ;; -nto*) os=`echo $os | sed -e 's|nto|nto-qnx|'` ;; -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ | -windows* | -osx | -abug | -netware* | -os9* | -beos* | -haiku* \ | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*) ;; -mac*) os=`echo $os | sed -e 's|mac|macos|'` ;; -ios*) ;; -linux-dietlibc) os=-linux-dietlibc ;; -linux*) os=`echo $os | sed -e 's|linux|linux-gnu|'` ;; -sunos5*) os=`echo $os | sed -e 's|sunos5|solaris2|'` ;; -sunos6*) os=`echo $os | sed -e 's|sunos6|solaris3|'` ;; -opened*) os=-openedition ;; -os400*) os=-os400 ;; -wince*) os=-wince ;; -osfrose*) os=-osfrose ;; -osf*) os=-osf ;; -utek*) os=-bsd ;; -dynix*) os=-bsd ;; -acis*) os=-aos ;; -atheos*) os=-atheos ;; -syllable*) os=-syllable ;; -386bsd) os=-bsd ;; -ctix* | -uts*) os=-sysv ;; -nova*) os=-rtmk-nova ;; -ns2 ) os=-nextstep2 ;; -nsk*) os=-nsk ;; # Preserve the version number of sinix5. -sinix5.*) os=`echo $os | sed -e 's|sinix|sysv|'` ;; -sinix*) os=-sysv4 ;; -tpf*) os=-tpf ;; -triton*) os=-sysv3 ;; -oss*) os=-sysv3 ;; -svr4) os=-sysv4 ;; -svr3) os=-sysv3 ;; -sysvr4) os=-sysv4 ;; # This must come after -sysvr4. -sysv*) ;; -ose*) os=-ose ;; -es1800*) os=-ose ;; -xenix) os=-xenix ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) os=-mint ;; -aros*) os=-aros ;; -zvmoe) os=-zvmoe ;; -dicos*) os=-dicos ;; -nacl*) ;; -none) ;; *) # Get rid of the `-' at the beginning of $os. os=`echo $os | sed 's/[^-]*-//'` echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2 exit 1 ;; esac else # Here we handle the default operating systems that come with various machines. # The value should be what the vendor currently ships out the door with their # machine or put another way, the most popular os provided with the machine. # Note that if you're going to try to match "-MANUFACTURER" here (say, # "-sun"), then you have to tell the case statement up towards the top # that MANUFACTURER isn't an operating system. 
Otherwise, code above # will signal an error saying that MANUFACTURER isn't an operating # system, and we'll never get to this point. case $basic_machine in score-*) os=-elf ;; spu-*) os=-elf ;; *-acorn) os=-riscix1.2 ;; arm*-rebel) os=-linux ;; arm*-semi) os=-aout ;; c4x-* | tic4x-*) os=-coff ;; c8051-*) os=-elf ;; hexagon-*) os=-elf ;; tic54x-*) os=-coff ;; tic55x-*) os=-coff ;; tic6x-*) os=-coff ;; # This must come before the *-dec entry. pdp10-*) os=-tops20 ;; pdp11-*) os=-none ;; *-dec | vax-*) os=-ultrix4.2 ;; m68*-apollo) os=-domain ;; i386-sun) os=-sunos4.0.2 ;; m68000-sun) os=-sunos3 ;; m68*-cisco) os=-aout ;; mep-*) os=-elf ;; mips*-cisco) os=-elf ;; mips*-*) os=-elf ;; or1k-*) os=-elf ;; or32-*) os=-coff ;; *-tti) # must be before sparc entry or we get the wrong os. os=-sysv3 ;; sparc-* | *-sun) os=-sunos4.1.1 ;; *-be) os=-beos ;; *-haiku) os=-haiku ;; *-ibm) os=-aix ;; *-knuth) os=-mmixware ;; *-wec) os=-proelf ;; *-winbond) os=-proelf ;; *-oki) os=-proelf ;; *-hp) os=-hpux ;; *-hitachi) os=-hiux ;; i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent) os=-sysv ;; *-cbm) os=-amigaos ;; *-dg) os=-dgux ;; *-dolphin) os=-sysv3 ;; m68k-ccur) os=-rtu ;; m88k-omron*) os=-luna ;; *-next ) os=-nextstep ;; *-sequent) os=-ptx ;; *-crds) os=-unos ;; *-ns) os=-genix ;; i370-*) os=-mvs ;; *-next) os=-nextstep3 ;; *-gould) os=-sysv ;; *-highlevel) os=-bsd ;; *-encore) os=-bsd ;; *-sgi) os=-irix ;; *-siemens) os=-sysv4 ;; *-masscomp) os=-rtu ;; f30[01]-fujitsu | f700-fujitsu) os=-uxpv ;; *-rom68k) os=-coff ;; *-*bug) os=-coff ;; *-apple) os=-macos ;; *-atari*) os=-mint ;; *) os=-none ;; esac fi # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. vendor=unknown case $basic_machine in *-unknown) case $os in -riscix*) vendor=acorn ;; -sunos*) vendor=sun ;; -cnk*|-aix*) vendor=ibm ;; -beos*) vendor=be ;; -hpux*) vendor=hp ;; -mpeix*) vendor=hp ;; -hiux*) vendor=hitachi ;; -unos*) vendor=crds ;; -dgux*) vendor=dg ;; -luna*) vendor=omron ;; -genix*) vendor=ns ;; -mvs* | -opened*) vendor=ibm ;; -os400*) vendor=ibm ;; -ptx*) vendor=sequent ;; -tpf*) vendor=ibm ;; -vxsim* | -vxworks* | -windiss*) vendor=wrs ;; -aux*) vendor=apple ;; -hms*) vendor=hitachi ;; -mpw* | -macos*) vendor=apple ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) vendor=atari ;; -vos*) vendor=stratus ;; esac basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"` ;; esac echo $basic_machine$os exit # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������vmem-1.8/src/jemalloc/configure.ac������������������������������������������������������������������0000664�0000000�0000000�00000140507�13615050741�0017202�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������dnl Process this file with autoconf to produce a configure script. 
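dnl A minimal usage sketch, assuming autoconf and a C toolchain are available; in
dnl the vmem tree these steps are normally driven by the library's own makefiles,
dnl so running them by hand is only for experimentation:
dnl
dnl     autoconf        # turns configure.ac into ./configure
dnl     ./configure     # plus any of the --enable-*/--with-* options documented below
dnl     make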
AC_INIT([Makefile.in]) dnl ============================================================================ dnl Custom macro definitions. dnl JE_CFLAGS_APPEND(cflag) AC_DEFUN([JE_CFLAGS_APPEND], [ AC_MSG_CHECKING([whether compiler supports $1]) TCFLAGS="${CFLAGS}" if test "x${CFLAGS}" = "x" ; then CFLAGS="$1" else CFLAGS="${CFLAGS} $1" fi AC_COMPILE_IFELSE([AC_LANG_PROGRAM( [[ ]], [[ return 0; ]])], [je_cv_cflags_appended=$1] AC_MSG_RESULT([yes]), [je_cv_cflags_appended=] AC_MSG_RESULT([no]) [CFLAGS="${TCFLAGS}"] ) ]) dnl JE_COMPILABLE(label, hcode, mcode, rvar) dnl dnl Use AC_LINK_IFELSE() rather than AC_COMPILE_IFELSE() so that linker errors dnl cause failure. AC_DEFUN([JE_COMPILABLE], [ AC_CACHE_CHECK([whether $1 is compilable], [$4], [AC_LINK_IFELSE([AC_LANG_PROGRAM([$2], [$3])], [$4=yes], [$4=no])]) ]) dnl ============================================================================ dnl Library revision. rev=2 AC_SUBST([rev]) srcroot=$srcdir if test "x${srcroot}" = "x." ; then srcroot="" else srcroot="${srcroot}/" fi AC_SUBST([srcroot]) abs_srcroot="`cd \"${srcdir}\"; pwd`/" AC_SUBST([abs_srcroot]) objroot="" AC_SUBST([objroot]) abs_objroot="`pwd`/" AC_SUBST([abs_objroot]) dnl Munge install path variables. if test "x$prefix" = "xNONE" ; then prefix="/usr/local" fi if test "x$exec_prefix" = "xNONE" ; then exec_prefix=$prefix fi PREFIX=$prefix AC_SUBST([PREFIX]) BINDIR=`eval echo $bindir` BINDIR=`eval echo $BINDIR` AC_SUBST([BINDIR]) INCLUDEDIR=`eval echo $includedir` INCLUDEDIR=`eval echo $INCLUDEDIR` AC_SUBST([INCLUDEDIR]) LIBDIR=`eval echo $libdir` LIBDIR=`eval echo $LIBDIR` AC_SUBST([LIBDIR]) DATADIR=`eval echo $datadir` DATADIR=`eval echo $DATADIR` AC_SUBST([DATADIR]) MANDIR=`eval echo $mandir` MANDIR=`eval echo $MANDIR` AC_SUBST([MANDIR]) dnl Support for building documentation. AC_PATH_PROG([XSLTPROC], [xsltproc], [false], [$PATH]) if test -d "/usr/share/xml/docbook/stylesheet/docbook-xsl" ; then DEFAULT_XSLROOT="/usr/share/xml/docbook/stylesheet/docbook-xsl" elif test -d "/usr/share/sgml/docbook/xsl-stylesheets" ; then DEFAULT_XSLROOT="/usr/share/sgml/docbook/xsl-stylesheets" else dnl Documentation building will fail if this default gets used. DEFAULT_XSLROOT="" fi AC_ARG_WITH([xslroot], [AS_HELP_STRING([--with-xslroot=], [XSL stylesheet root path])], [ if test "x$with_xslroot" = "xno" ; then XSLROOT="${DEFAULT_XSLROOT}" else XSLROOT="${with_xslroot}" fi ], XSLROOT="${DEFAULT_XSLROOT}" ) AC_SUBST([XSLROOT]) dnl If CFLAGS isn't defined, set CFLAGS to something reasonable. Otherwise, dnl just prevent autoconf from molesting CFLAGS. CFLAGS=$CFLAGS AC_PROG_CC if test "x$GCC" != "xyes" ; then AC_CACHE_CHECK([whether compiler is MSVC], [je_cv_msvc], [AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [ #ifndef _MSC_VER int fail[-1]; #endif ])], [je_cv_msvc=yes], [je_cv_msvc=no])]) fi if test "x$CFLAGS" = "x" ; then no_CFLAGS="yes" if test "x$GCC" = "xyes" ; then JE_CFLAGS_APPEND([-std=gnu99]) if test "x$je_cv_cflags_appended" = "x-std=gnu99" ; then AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT]) fi JE_CFLAGS_APPEND([-Wall]) JE_CFLAGS_APPEND([-pipe]) JE_CFLAGS_APPEND([-g3]) elif test "x$je_cv_msvc" = "xyes" ; then CC="$CC -nologo" JE_CFLAGS_APPEND([-Zi]) JE_CFLAGS_APPEND([-MT]) JE_CFLAGS_APPEND([-W3]) JE_CFLAGS_APPEND([-FS]) CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat" fi fi dnl Append EXTRA_CFLAGS to CFLAGS, if defined. 
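dnl To make the JE_CFLAGS_APPEND behaviour above concrete: each candidate flag is
dnl test-compiled and silently dropped if the compiler rejects it, so with GCC and
dnl an initially empty CFLAGS the defaults accumulate to roughly
dnl
dnl     CFLAGS="-std=gnu99 -Wall -pipe -g3"
dnl
dnl Extra flags can be layered on without editing this file, for example
dnl
dnl     ./configure EXTRA_CFLAGS="-fno-omit-frame-pointer"
dnl
dnl (the flag is only an example; the append itself is the small block that follows).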
if test "x$EXTRA_CFLAGS" != "x" ; then JE_CFLAGS_APPEND([$EXTRA_CFLAGS]) fi AC_PROG_CPP AC_C_BIGENDIAN([ac_cv_big_endian=1], [ac_cv_big_endian=0]) if test "x${ac_cv_big_endian}" = "x1" ; then AC_DEFINE_UNQUOTED([JEMALLOC_BIG_ENDIAN], [ ]) fi if test "x${je_cv_msvc}" = "xyes" -a "x${ac_cv_header_inttypes_h}" = "xno"; then CPPFLAGS="$CPPFLAGS -I${srcdir}/include/msvc_compat/C99" fi if test "x${je_cv_msvc}" = "xyes" ; then LG_SIZEOF_PTR=LG_SIZEOF_PTR_WIN AC_MSG_RESULT([Using a predefined value for sizeof(void *): 4 for 32-bit, 8 for 64-bit]) else AC_CHECK_SIZEOF([void *]) if test "x${ac_cv_sizeof_void_p}" = "x8" ; then LG_SIZEOF_PTR=3 elif test "x${ac_cv_sizeof_void_p}" = "x4" ; then LG_SIZEOF_PTR=2 else AC_MSG_ERROR([Unsupported pointer size: ${ac_cv_sizeof_void_p}]) fi fi AC_DEFINE_UNQUOTED([LG_SIZEOF_PTR], [$LG_SIZEOF_PTR]) AC_CHECK_SIZEOF([int]) if test "x${ac_cv_sizeof_int}" = "x8" ; then LG_SIZEOF_INT=3 elif test "x${ac_cv_sizeof_int}" = "x4" ; then LG_SIZEOF_INT=2 else AC_MSG_ERROR([Unsupported int size: ${ac_cv_sizeof_int}]) fi AC_DEFINE_UNQUOTED([LG_SIZEOF_INT], [$LG_SIZEOF_INT]) AC_CHECK_SIZEOF([long]) if test "x${ac_cv_sizeof_long}" = "x8" ; then LG_SIZEOF_LONG=3 elif test "x${ac_cv_sizeof_long}" = "x4" ; then LG_SIZEOF_LONG=2 else AC_MSG_ERROR([Unsupported long size: ${ac_cv_sizeof_long}]) fi AC_DEFINE_UNQUOTED([LG_SIZEOF_LONG], [$LG_SIZEOF_LONG]) AC_CHECK_SIZEOF([intmax_t]) if test "x${ac_cv_sizeof_intmax_t}" = "x16" ; then LG_SIZEOF_INTMAX_T=4 elif test "x${ac_cv_sizeof_intmax_t}" = "x8" ; then LG_SIZEOF_INTMAX_T=3 elif test "x${ac_cv_sizeof_intmax_t}" = "x4" ; then LG_SIZEOF_INTMAX_T=2 else AC_MSG_ERROR([Unsupported intmax_t size: ${ac_cv_sizeof_intmax_t}]) fi AC_DEFINE_UNQUOTED([LG_SIZEOF_INTMAX_T], [$LG_SIZEOF_INTMAX_T]) AC_CANONICAL_HOST dnl CPU-specific settings. CPU_SPINWAIT="" case "${host_cpu}" in i[[345]]86) ;; i686|x86_64) JE_COMPILABLE([pause instruction], [], [[__asm__ volatile("pause"); return 0;]], [je_cv_pause]) if test "x${je_cv_pause}" = "xyes" ; then CPU_SPINWAIT='__asm__ volatile("pause")' fi dnl emmintrin.h fails to compile unless MMX, SSE, and SSE2 are dnl supported. JE_COMPILABLE([SSE2 intrinsics], [ #include ], [], [je_cv_sse2]) if test "x${je_cv_sse2}" = "xyes" ; then AC_DEFINE_UNQUOTED([HAVE_SSE2], [ ]) fi ;; powerpc) AC_DEFINE_UNQUOTED([HAVE_ALTIVEC], [ ]) ;; *) ;; esac AC_DEFINE_UNQUOTED([CPU_SPINWAIT], [$CPU_SPINWAIT]) LD_PRELOAD_VAR="LD_PRELOAD" so="so" importlib="${so}" o="$ac_objext" a="a" exe="$ac_exeext" libprefix="lib" DSO_LDFLAGS='-shared -Wl,-soname,$(@F)' RPATH='-Wl,-rpath,$(1)' SOREV="${so}.${rev}" PIC_CFLAGS='-fPIC -DPIC' CTARGET='-o $@' LDTARGET='-o $@' EXTRA_LDFLAGS= ARFLAGS='crus' AROUT=' $@' CC_MM=1 MALLOCINC="malloc.h" AN_MAKEVAR([AR], [AC_PROG_AR]) AN_PROGRAM([ar], [AC_PROG_AR]) AC_DEFUN([AC_PROG_AR], [AC_CHECK_TOOL(AR, ar, :)]) AC_PROG_AR dnl Platform-specific settings. abi and RPATH can probably be determined dnl programmatically, but doing so is error-prone, which makes it generally dnl not worth the trouble. dnl dnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the dnl definitions need to be seen before any headers are included, which is a pain dnl to make happen otherwise. 
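dnl Worked example for the size probes above, assuming a typical LP64 host:
dnl sizeof(void *) == 8 gives LG_SIZEOF_PTR == 3 (2^3 == 8), sizeof(int) == 4 gives
dnl LG_SIZEOF_INT == 2, and sizeof(long) == 8 gives LG_SIZEOF_LONG == 3. The values
dnl are emitted into the generated config header so the allocator can shift by
dnl these base-2 logarithms instead of recomputing sizeof() ratios. The
dnl platform-specific case statement below then fills in ABI and linker details
dnl per ${host}.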
default_munmap="1" case "${host}" in *-*-darwin* | *-*-ios*) CFLAGS="$CFLAGS" abi="macho" AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ]) RPATH="" LD_PRELOAD_VAR="DYLD_INSERT_LIBRARIES" so="dylib" importlib="${so}" force_tls="0" DSO_LDFLAGS='-shared -Wl,-dylib_install_name,$(@F)' SOREV="${rev}.${so}" sbrk_deprecated="1" ;; *-*-freebsd*) CFLAGS="$CFLAGS" abi="elf" AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ]) force_lazy_lock="1" MALLOCINC="stdlib.h" ;; *-*-linux*) CFLAGS="$CFLAGS" CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE" abi="elf" AC_DEFINE([JEMALLOC_HAS_ALLOCA_H]) AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ]) AC_DEFINE([JEMALLOC_THREADED_INIT], [ ]) default_munmap="0" ;; *-*-netbsd*) AC_MSG_CHECKING([ABI]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM( [[#ifdef __ELF__ /* ELF */ #else #error aout #endif ]])], [CFLAGS="$CFLAGS"; abi="elf"], [abi="aout"]) AC_MSG_RESULT([$abi]) AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ]) ;; *-*-solaris2*) CFLAGS="$CFLAGS" abi="elf" AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ]) RPATH='-Wl,-R,$(1)' dnl Solaris needs this for sigwait(). CPPFLAGS="$CPPFLAGS -D_POSIX_PTHREAD_SEMANTICS" LIBS="$LIBS -lposix4 -lsocket -lnsl" ;; *-ibm-aix*) if "$LG_SIZEOF_PTR" = "8"; then dnl 64bit AIX LD_PRELOAD_VAR="LDR_PRELOAD64" else dnl 32bit AIX LD_PRELOAD_VAR="LDR_PRELOAD" fi abi="xcoff" ;; *-*-mingw* | *-*-cygwin*) abi="pecoff" force_tls="0" force_lazy_lock="0" RPATH="" so="dll" if test "x$je_cv_msvc" = "xyes" ; then importlib="lib" DSO_LDFLAGS="-LD" EXTRA_LDFLAGS="-link -DEBUG" CTARGET='-Fo$@' LDTARGET='-Fe$@' AR='lib' ARFLAGS='-nologo -out:' AROUT='$@' CC_MM= else importlib="${so}" DSO_LDFLAGS="-shared" fi a="lib" libprefix="" SOREV="${so}" PIC_CFLAGS="" ;; *) AC_MSG_RESULT([Unsupported operating system: ${host}]) abi="elf" ;; esac JEMALLOC_USABLE_SIZE_CONST=const AC_CHECK_HEADERS([${MALLOCINC}], [ AC_MSG_CHECKING([whether malloc_usable_size definition can use const argument]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM( [#include <${MALLOCINC}> #include size_t malloc_usable_size(const void *ptr); ], [])],[ AC_MSG_RESULT([yes]) ],[ JEMALLOC_USABLE_SIZE_CONST= AC_MSG_RESULT([no]) ]) ]) AC_DEFINE_UNQUOTED([JEMALLOC_USABLE_SIZE_CONST], [$JEMALLOC_USABLE_SIZE_CONST]) AC_SUBST([abi]) AC_SUBST([RPATH]) AC_SUBST([LD_PRELOAD_VAR]) AC_SUBST([so]) AC_SUBST([importlib]) AC_SUBST([o]) AC_SUBST([a]) AC_SUBST([exe]) AC_SUBST([libprefix]) AC_SUBST([DSO_LDFLAGS]) AC_SUBST([EXTRA_LDFLAGS]) AC_SUBST([SOREV]) AC_SUBST([PIC_CFLAGS]) AC_SUBST([CTARGET]) AC_SUBST([LDTARGET]) AC_SUBST([MKLIB]) AC_SUBST([ARFLAGS]) AC_SUBST([AROUT]) AC_SUBST([CC_MM]) JE_COMPILABLE([__attribute__ syntax], [static __attribute__((unused)) void foo(void){}], [], [je_cv_attribute]) if test "x${je_cv_attribute}" = "xyes" ; then AC_DEFINE([JEMALLOC_HAVE_ATTR], [ ]) if test "x${GCC}" = "xyes" -a "x${abi}" = "xelf"; then JE_CFLAGS_APPEND([-fvisibility=hidden]) fi fi dnl Check for tls_model attribute support (clang 3.0 still lacks support). SAVED_CFLAGS="${CFLAGS}" JE_CFLAGS_APPEND([-Werror]) JE_COMPILABLE([tls_model attribute], [], [static __thread int __attribute__((tls_model("initial-exec"))) foo; foo = 0;], [je_cv_tls_model]) CFLAGS="${SAVED_CFLAGS}" if test "x${je_cv_tls_model}" = "xyes" ; then AC_DEFINE([JEMALLOC_TLS_MODEL], [__attribute__((tls_model("initial-exec")))]) else AC_DEFINE([JEMALLOC_TLS_MODEL], [ ]) fi dnl Support optional additions to rpath. 
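dnl As an illustration of the rpath hook that follows (the paths are hypothetical):
dnl
dnl     ./configure --with-rpath=/opt/vmem/lib:/usr/local/lib
dnl
dnl populates RPATH_EXTRA with the colon-separated directories, which the makefiles
dnl are meant to consume through the RPATH='-Wl,-rpath,$(1)' template defined
dnl earlier (and adjusted per platform in the case statement above); per the help
dnl string, this only applies to ELF systems.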
AC_ARG_WITH([rpath], [AS_HELP_STRING([--with-rpath=], [Colon-separated rpath (ELF systems only)])], if test "x$with_rpath" = "xno" ; then RPATH_EXTRA= else RPATH_EXTRA="`echo $with_rpath | tr \":\" \" \"`" fi, RPATH_EXTRA= ) AC_SUBST([RPATH_EXTRA]) dnl Disable rules that do automatic regeneration of configure output by default. AC_ARG_ENABLE([autogen], [AS_HELP_STRING([--enable-autogen], [Automatically regenerate configure output])], if test "x$enable_autogen" = "xno" ; then enable_autogen="0" else enable_autogen="1" fi , enable_autogen="0" ) AC_SUBST([enable_autogen]) AC_PROG_INSTALL AC_PROG_RANLIB AC_PATH_PROG([LD], [ld], [false], [$PATH]) AC_PATH_PROG([AUTOCONF], [autoconf], [false], [$PATH]) public_syms="pool_create pool_delete pool_malloc pool_calloc pool_ralloc pool_aligned_alloc pool_free pool_malloc_usable_size pool_malloc_stats_print pool_extend pool_set_alloc_funcs pool_check malloc_conf malloc_message malloc calloc posix_memalign aligned_alloc realloc free mallocx rallocx xallocx sallocx dallocx nallocx mallctl mallctlnametomib mallctlbymib navsnprintf malloc_stats_print malloc_usable_size" dnl Check for allocator-related functions that should be wrapped. AC_CHECK_FUNC([memalign], [AC_DEFINE([JEMALLOC_OVERRIDE_MEMALIGN], [ ]) public_syms="${public_syms} memalign"]) AC_CHECK_FUNC([valloc], [AC_DEFINE([JEMALLOC_OVERRIDE_VALLOC], [ ]) public_syms="${public_syms} valloc"]) dnl Do not compute test code coverage by default. GCOV_FLAGS= AC_ARG_ENABLE([code-coverage], [AS_HELP_STRING([--enable-code-coverage], [Enable code coverage])], [if test "x$enable_code_coverage" = "xno" ; then enable_code_coverage="0" else enable_code_coverage="1" fi ], [enable_code_coverage="0"] ) if test "x$enable_code_coverage" = "x1" ; then deoptimize="no" echo "$CFLAGS $EXTRA_CFLAGS" | grep '\-O' >/dev/null || deoptimize="yes" if test "x${deoptimize}" = "xyes" ; then JE_CFLAGS_APPEND([-O0]) fi JE_CFLAGS_APPEND([-fprofile-arcs -ftest-coverage]) EXTRA_LDFLAGS="$EXTRA_LDFLAGS -fprofile-arcs -ftest-coverage" AC_DEFINE([JEMALLOC_CODE_COVERAGE], [ ]) fi AC_SUBST([enable_code_coverage]) dnl Perform no name mangling by default. AC_ARG_WITH([mangling], [AS_HELP_STRING([--with-mangling=], [Mangle symbols in ])], [mangling_map="$with_mangling"], [mangling_map=""]) dnl Do not prefix public APIs by default. AC_ARG_WITH([jemalloc_prefix], [AS_HELP_STRING([--with-jemalloc-prefix=], [Prefix to prepend to all public APIs])], [JEMALLOC_PREFIX="$with_jemalloc_prefix"], [if test "x$abi" != "xmacho" -a "x$abi" != "xpecoff"; then JEMALLOC_PREFIX="" else JEMALLOC_PREFIX="je_" fi] ) if test "x$JEMALLOC_PREFIX" != "x" ; then JEMALLOC_CPREFIX=`echo ${JEMALLOC_PREFIX} | tr "a-z" "A-Z"` AC_DEFINE_UNQUOTED([JEMALLOC_PREFIX], ["$JEMALLOC_PREFIX"]) AC_DEFINE_UNQUOTED([JEMALLOC_CPREFIX], ["$JEMALLOC_CPREFIX"]) fi AC_ARG_WITH([export], [AS_HELP_STRING([--without-export], [disable exporting jemalloc public APIs])], [if test "x$with_export" = "xno"; then AC_DEFINE([JEMALLOC_EXPORT],[]) fi] ) dnl Mangle library-private APIs. AC_ARG_WITH([private_namespace], [AS_HELP_STRING([--with-private-namespace=], [Prefix to prepend to all library-private APIs])], [JEMALLOC_PRIVATE_NAMESPACE="${with_private_namespace}je_"], [JEMALLOC_PRIVATE_NAMESPACE="je_"] ) AC_DEFINE_UNQUOTED([JEMALLOC_PRIVATE_NAMESPACE], [$JEMALLOC_PRIVATE_NAMESPACE]) private_namespace="$JEMALLOC_PRIVATE_NAMESPACE" AC_SUBST([private_namespace]) dnl Do not add suffix to installed files by default. 
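dnl To illustrate the two mangling knobs above (the prefixes are examples only):
dnl
dnl     ./configure --with-jemalloc-prefix=je_vmem_ --with-private-namespace=vmem_
dnl
dnl publishes je_vmem_pool_create(), je_vmem_malloc_usable_size(), and so on for
dnl every name in public_syms, while library-private symbols gain a "vmem_je_"
dnl prefix; both measures keep this embedded jemalloc from clashing with another
dnl jemalloc linked into the same process. The install-suffix option just below
dnl plays the same role for installed file names.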
AC_ARG_WITH([install_suffix], [AS_HELP_STRING([--with-install-suffix=], [Suffix to append to all installed files])], [INSTALL_SUFFIX="$with_install_suffix"], [INSTALL_SUFFIX=] ) install_suffix="$INSTALL_SUFFIX" AC_SUBST([install_suffix]) dnl Substitute @je_@ in jemalloc_protos.h.in, primarily to make generation of dnl jemalloc_protos_jet.h easy. je_="je_" AC_SUBST([je_]) cfgoutputs_in="Makefile.in" cfgoutputs_in="${cfgoutputs_in} doc/html.xsl.in" cfgoutputs_in="${cfgoutputs_in} doc/manpages.xsl.in" cfgoutputs_in="${cfgoutputs_in} doc/jemalloc.xml.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_macros.h.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_protos.h.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/jemalloc_typedefs.h.in" cfgoutputs_in="${cfgoutputs_in} include/jemalloc/internal/jemalloc_internal.h.in" cfgoutputs_in="${cfgoutputs_in} test/test.sh.in" cfgoutputs_in="${cfgoutputs_in} test/include/test/jemalloc_test.h.in" cfgoutputs_out="Makefile" cfgoutputs_out="${cfgoutputs_out} doc/html.xsl" cfgoutputs_out="${cfgoutputs_out} doc/manpages.xsl" cfgoutputs_out="${cfgoutputs_out} doc/jemalloc.xml" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_macros.h" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_protos.h" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/jemalloc_typedefs.h" cfgoutputs_out="${cfgoutputs_out} include/jemalloc/internal/jemalloc_internal.h" cfgoutputs_out="${cfgoutputs_out} test/test.sh" cfgoutputs_out="${cfgoutputs_out} test/include/test/jemalloc_test.h" cfgoutputs_tup="Makefile" cfgoutputs_tup="${cfgoutputs_tup} doc/html.xsl:doc/html.xsl.in" cfgoutputs_tup="${cfgoutputs_tup} doc/manpages.xsl:doc/manpages.xsl.in" cfgoutputs_tup="${cfgoutputs_tup} doc/jemalloc.xml:doc/jemalloc.xml.in" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_macros.h:include/jemalloc/jemalloc_macros.h.in" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_protos.h:include/jemalloc/jemalloc_protos.h.in" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/jemalloc_typedefs.h:include/jemalloc/jemalloc_typedefs.h.in" cfgoutputs_tup="${cfgoutputs_tup} include/jemalloc/internal/jemalloc_internal.h" cfgoutputs_tup="${cfgoutputs_tup} test/test.sh:test/test.sh.in" cfgoutputs_tup="${cfgoutputs_tup} test/include/test/jemalloc_test.h:test/include/test/jemalloc_test.h.in" cfghdrs_in="include/jemalloc/jemalloc_defs.h.in" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/jemalloc_internal_defs.h.in" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_namespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_unnamespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/private_symbols.txt" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/public_namespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/public_unnamespace.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/internal/size_classes.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc_rename.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc_mangle.sh" cfghdrs_in="${cfghdrs_in} include/jemalloc/jemalloc.sh" cfghdrs_in="${cfghdrs_in} test/include/test/jemalloc_test_defs.h.in" cfghdrs_out="include/jemalloc/jemalloc_defs.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc${install_suffix}.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/private_namespace.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/private_unnamespace.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/public_symbols.txt" 
cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/public_namespace.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/public_unnamespace.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/size_classes.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc_protos_jet.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc_rename.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc_mangle.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/jemalloc_mangle_jet.h" cfghdrs_out="${cfghdrs_out} include/jemalloc/internal/jemalloc_internal_defs.h" cfghdrs_out="${cfghdrs_out} test/include/test/jemalloc_test_defs.h" cfghdrs_tup="include/jemalloc/jemalloc_defs.h:include/jemalloc/jemalloc_defs.h.in" cfghdrs_tup="${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:include/jemalloc/internal/jemalloc_internal_defs.h.in" cfghdrs_tup="${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:test/include/test/jemalloc_test_defs.h.in" dnl Silence irrelevant compiler warnings by default. AC_ARG_ENABLE([cc-silence], [AS_HELP_STRING([--disable-cc-silence], [Do not silence irrelevant compiler warnings])], [if test "x$enable_cc_silence" = "xno" ; then enable_cc_silence="0" else enable_cc_silence="1" fi ], [enable_cc_silence="1"] ) if test "x$enable_cc_silence" = "x1" ; then AC_DEFINE([JEMALLOC_CC_SILENCE], [ ]) fi dnl Do not compile with debugging by default. AC_ARG_ENABLE([debug], [AS_HELP_STRING([--enable-debug], [Build debugging code (implies --enable-ivsalloc)])], [if test "x$enable_debug" = "xno" ; then enable_debug="0" else enable_debug="1" fi ], [enable_debug="0"] ) if test "x$enable_debug" = "x1" ; then AC_DEFINE([JEMALLOC_DEBUG], [ ]) enable_ivsalloc="1" fi AC_SUBST([enable_debug]) dnl Do not validate pointers by default. AC_ARG_ENABLE([ivsalloc], [AS_HELP_STRING([--enable-ivsalloc], [Validate pointers passed through the public API])], [if test "x$enable_ivsalloc" = "xno" ; then enable_ivsalloc="0" else enable_ivsalloc="1" fi ], [enable_ivsalloc="0"] ) if test "x$enable_ivsalloc" = "x1" ; then AC_DEFINE([JEMALLOC_IVSALLOC], [ ]) fi dnl Only optimize if not debugging. if test "x$enable_debug" = "x0" -a "x$no_CFLAGS" = "xyes" ; then dnl Make sure that an optimization flag was not specified in EXTRA_CFLAGS. optimize="no" echo "$CFLAGS $EXTRA_CFLAGS" | grep '\-O' >/dev/null || optimize="yes" if test "x${optimize}" = "xyes" ; then if test "x$GCC" = "xyes" ; then JE_CFLAGS_APPEND([-O3]) JE_CFLAGS_APPEND([-funroll-loops]) elif test "x$je_cv_msvc" = "xyes" ; then JE_CFLAGS_APPEND([-O2]) else JE_CFLAGS_APPEND([-O]) fi fi fi dnl Enable statistics calculation by default. AC_ARG_ENABLE([stats], [AS_HELP_STRING([--disable-stats], [Disable statistics calculation/reporting])], [if test "x$enable_stats" = "xno" ; then enable_stats="0" else enable_stats="1" fi ], [enable_stats="1"] ) if test "x$enable_stats" = "x1" ; then AC_DEFINE([JEMALLOC_STATS], [ ]) fi AC_SUBST([enable_stats]) dnl Do not enable profiling by default. 
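dnl Putting the debug/optimization options above together (assuming GCC and a
dnl CFLAGS that this script chose itself):
dnl
dnl     ./configure                 # non-debug: -O3 -funroll-loops appended, stats enabled
dnl     ./configure --enable-debug  # JEMALLOC_DEBUG defined, --enable-ivsalloc implied,
dnl                                 # and the optimization block is skipped entirely
dnl
dnl Profiling, handled next, is a separate opt-in on top of either variant.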
AC_ARG_ENABLE([prof], [AS_HELP_STRING([--enable-prof], [Enable allocation profiling])], [if test "x$enable_prof" = "xno" ; then enable_prof="0" else enable_prof="1" fi ], [enable_prof="0"] ) if test "x$enable_prof" = "x1" ; then backtrace_method="" else backtrace_method="N/A" fi AC_ARG_ENABLE([prof-libunwind], [AS_HELP_STRING([--enable-prof-libunwind], [Use libunwind for backtracing])], [if test "x$enable_prof_libunwind" = "xno" ; then enable_prof_libunwind="0" else enable_prof_libunwind="1" fi ], [enable_prof_libunwind="0"] ) AC_ARG_WITH([static_libunwind], [AS_HELP_STRING([--with-static-libunwind=], [Path to static libunwind library; use rather than dynamically linking])], if test "x$with_static_libunwind" = "xno" ; then LUNWIND="-lunwind" else if test ! -f "$with_static_libunwind" ; then AC_MSG_ERROR([Static libunwind not found: $with_static_libunwind]) fi LUNWIND="$with_static_libunwind" fi, LUNWIND="-lunwind" ) if test "x$backtrace_method" = "x" -a "x$enable_prof_libunwind" = "x1" ; then AC_CHECK_HEADERS([libunwind.h], , [enable_prof_libunwind="0"]) if test "x$LUNWIND" = "x-lunwind" ; then AC_CHECK_LIB([unwind], [unw_backtrace], [LIBS="$LIBS $LUNWIND"], [enable_prof_libunwind="0"]) else LIBS="$LIBS $LUNWIND" fi if test "x${enable_prof_libunwind}" = "x1" ; then backtrace_method="libunwind" AC_DEFINE([JEMALLOC_PROF_LIBUNWIND], [ ]) fi fi AC_ARG_ENABLE([prof-libgcc], [AS_HELP_STRING([--disable-prof-libgcc], [Do not use libgcc for backtracing])], [if test "x$enable_prof_libgcc" = "xno" ; then enable_prof_libgcc="0" else enable_prof_libgcc="1" fi ], [enable_prof_libgcc="1"] ) if test "x$backtrace_method" = "x" -a "x$enable_prof_libgcc" = "x1" \ -a "x$GCC" = "xyes" ; then AC_CHECK_HEADERS([unwind.h], , [enable_prof_libgcc="0"]) AC_CHECK_LIB([gcc], [_Unwind_Backtrace], [LIBS="$LIBS -lgcc"], [enable_prof_libgcc="0"]) if test "x${enable_prof_libgcc}" = "x1" ; then backtrace_method="libgcc" AC_DEFINE([JEMALLOC_PROF_LIBGCC], [ ]) fi else enable_prof_libgcc="0" fi AC_ARG_ENABLE([prof-gcc], [AS_HELP_STRING([--disable-prof-gcc], [Do not use gcc intrinsics for backtracing])], [if test "x$enable_prof_gcc" = "xno" ; then enable_prof_gcc="0" else enable_prof_gcc="1" fi ], [enable_prof_gcc="1"] ) if test "x$backtrace_method" = "x" -a "x$enable_prof_gcc" = "x1" \ -a "x$GCC" = "xyes" ; then JE_CFLAGS_APPEND([-fno-omit-frame-pointer]) backtrace_method="gcc intrinsics" AC_DEFINE([JEMALLOC_PROF_GCC], [ ]) else enable_prof_gcc="0" fi if test "x$backtrace_method" = "x" ; then backtrace_method="none (disabling profiling)" enable_prof="0" fi AC_MSG_CHECKING([configured backtracing method]) AC_MSG_RESULT([$backtrace_method]) if test "x$enable_prof" = "x1" ; then if test "x${force_tls}" = "x0" ; then AC_MSG_ERROR([Heap profiling requires TLS]); fi force_tls="1" if test "x$abi" != "xpecoff"; then dnl Heap profiling uses the log(3) function. LIBS="$LIBS -lm" fi AC_DEFINE([JEMALLOC_PROF], [ ]) fi AC_SUBST([enable_prof]) dnl Enable thread-specific caching by default. AC_ARG_ENABLE([tcache], [AS_HELP_STRING([--disable-tcache], [Disable per thread caches])], [if test "x$enable_tcache" = "xno" ; then enable_tcache="0" else enable_tcache="1" fi ], [enable_tcache="1"] ) if test "x$enable_tcache" = "x1" ; then AC_DEFINE([JEMALLOC_TCACHE], [ ]) fi AC_SUBST([enable_tcache]) dnl Enable VM deallocation via munmap() by default. 
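dnl The backtrace selection above falls through in order: libunwind (only when
dnl explicitly requested), then libgcc's _Unwind_Backtrace, then plain GCC
dnl intrinsics, and finally "none", which quietly switches profiling back off.
dnl A hedged example of pinning the first choice (the archive path is illustrative;
dnl configure aborts if the named static library does not exist):
dnl
dnl     ./configure --enable-prof --enable-prof-libunwind \
dnl                 --with-static-libunwind=/usr/lib/libunwind.a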
AC_ARG_ENABLE([munmap], [AS_HELP_STRING([--disable-munmap], [Disable VM deallocation via munmap(2)])], [if test "x$enable_munmap" = "xno" ; then enable_munmap="0" else enable_munmap="1" fi ], [enable_munmap="${default_munmap}"] ) if test "x$enable_munmap" = "x1" ; then AC_DEFINE([JEMALLOC_MUNMAP], [ ]) fi AC_SUBST([enable_munmap]) dnl Enable allocation from DSS if supported by the OS. have_dss="1" dnl Check whether the BSD/SUSv1 sbrk() exists. If not, disable DSS support. AC_CHECK_FUNC([sbrk], [have_sbrk="1"], [have_sbrk="0"]) if test "x$have_sbrk" = "x1" ; then if test "x$sbrk_deprecated" == "x1" ; then AC_MSG_RESULT([Disabling dss allocation because sbrk is deprecated]) have_dss="0" fi else have_dss="0" fi if test "x$have_dss" = "x1" ; then AC_DEFINE([JEMALLOC_DSS], [ ]) fi dnl Support the junk/zero filling option by default. AC_ARG_ENABLE([fill], [AS_HELP_STRING([--disable-fill], [Disable support for junk/zero filling, quarantine, and redzones])], [if test "x$enable_fill" = "xno" ; then enable_fill="0" else enable_fill="1" fi ], [enable_fill="1"] ) if test "x$enable_fill" = "x1" ; then AC_DEFINE([JEMALLOC_FILL], [ ]) fi AC_SUBST([enable_fill]) dnl Disable utrace(2)-based tracing by default. AC_ARG_ENABLE([utrace], [AS_HELP_STRING([--enable-utrace], [Enable utrace(2)-based tracing])], [if test "x$enable_utrace" = "xno" ; then enable_utrace="0" else enable_utrace="1" fi ], [enable_utrace="0"] ) JE_COMPILABLE([utrace(2)], [ #include #include #include #include #include ], [ utrace((void *)0, 0); ], [je_cv_utrace]) if test "x${je_cv_utrace}" = "xno" ; then enable_utrace="0" fi if test "x$enable_utrace" = "x1" ; then AC_DEFINE([JEMALLOC_UTRACE], [ ]) fi AC_SUBST([enable_utrace]) dnl Support Valgrind by default. AC_ARG_ENABLE([valgrind], [AS_HELP_STRING([--disable-valgrind], [Disable support for Valgrind])], [if test "x$enable_valgrind" = "xno" ; then enable_valgrind="0" else enable_valgrind="1" fi ], [enable_valgrind="1"] ) if test "x$enable_valgrind" = "x1" ; then JE_COMPILABLE([valgrind], [ #include "valgrind/valgrind.h" #include "valgrind/memcheck.h" #if !defined(VALGRIND_RESIZEINPLACE_BLOCK) # error "Incompatible Valgrind version" #endif ], [], [je_cv_valgrind]) if test "x${je_cv_valgrind}" = "xno" ; then enable_valgrind="0" fi if test "x$enable_valgrind" = "x1" ; then AC_DEFINE([JEMALLOC_VALGRIND], [ ]) fi fi AC_SUBST([enable_valgrind]) dnl Do not support the xmalloc option by default. AC_ARG_ENABLE([xmalloc], [AS_HELP_STRING([--enable-xmalloc], [Support xmalloc option])], [if test "x$enable_xmalloc" = "xno" ; then enable_xmalloc="0" else enable_xmalloc="1" fi ], [enable_xmalloc="0"] ) if test "x$enable_xmalloc" = "x1" ; then AC_DEFINE([JEMALLOC_XMALLOC], [ ]) fi AC_SUBST([enable_xmalloc]) dnl ============================================================================ dnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither are found. dnl One of those two functions should (theoretically) exist on all platforms dnl that jemalloc currently has a chance of functioning on without modification. dnl We additionally assume ffs() or __builtin_ffs() are defined if dnl ffsl() or __builtin_ffsl() are defined, respectively. 
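dnl Worked example for the probe that follows: ffsl()/__builtin_ffsl() return the
dnl 1-based index of the least significant set bit, so ffsl(0x08) == 4, and the
dnl STATIC_PAGE_SHIFT check further down converts a 4096-byte page into
dnl ffsl(4096) - 1 == 13 - 1 == 12.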
JE_COMPILABLE([a program using __builtin_ffsl], [ #include #include #include ], [ { int rv = __builtin_ffsl(0x08); printf("%d\n", rv); } ], [je_cv_gcc_builtin_ffsl]) if test "x${je_cv_gcc_builtin_ffsl}" == "xyes" ; then AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [__builtin_ffsl]) AC_DEFINE([JEMALLOC_INTERNAL_FFS], [__builtin_ffs]) else JE_COMPILABLE([a program using ffsl], [ #include #include #include ], [ { int rv = ffsl(0x08); printf("%d\n", rv); } ], [je_cv_function_ffsl]) if test "x${je_cv_function_ffsl}" == "xyes" ; then AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [ffsl]) AC_DEFINE([JEMALLOC_INTERNAL_FFS], [ffs]) else AC_MSG_ERROR([Cannot build without ffsl(3) or __builtin_ffsl()]) fi fi AC_CACHE_CHECK([STATIC_PAGE_SHIFT], [je_cv_static_page_shift], AC_RUN_IFELSE([AC_LANG_PROGRAM( [[ #include #ifdef _WIN32 #include #else #include #endif #include ]], [[ int result; FILE *f; #ifdef _WIN32 SYSTEM_INFO si; GetSystemInfo(&si); result = si.dwPageSize; #else result = sysconf(_SC_PAGESIZE); #endif if (result == -1) { return 1; } result = JEMALLOC_INTERNAL_FFSL(result) - 1; f = fopen("conftest.out", "w"); if (f == NULL) { return 1; } fprintf(f, "%d\n", result); fclose(f); return 0; ]])], [je_cv_static_page_shift=`cat conftest.out`], [je_cv_static_page_shift=undefined], [je_cv_static_page_shift=12])) if test "x$je_cv_static_page_shift" != "xundefined"; then AC_DEFINE_UNQUOTED([STATIC_PAGE_SHIFT], [$je_cv_static_page_shift]) else AC_MSG_ERROR([cannot determine value for STATIC_PAGE_SHIFT]) fi dnl ============================================================================ dnl jemalloc configuration. dnl dnl Set VERSION if source directory has an embedded git repository. if test -d "${srcroot}.git" ; then git describe --long --abbrev=40 > ${srcroot}VERSION fi jemalloc_version=`cat ${srcroot}VERSION 2>/dev/null` jemalloc_version_major=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]1}'` jemalloc_version_minor=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]2}'` jemalloc_version_bugfix=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]3}'` jemalloc_version_nrev=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]4}'` jemalloc_version_gid=`echo ${jemalloc_version} | tr ".g-" " " | awk '{print [$]5}'` AC_SUBST([jemalloc_version]) AC_SUBST([jemalloc_version_major]) AC_SUBST([jemalloc_version_minor]) AC_SUBST([jemalloc_version_bugfix]) AC_SUBST([jemalloc_version_nrev]) AC_SUBST([jemalloc_version_gid]) dnl ============================================================================ dnl Configure pthreads. if test "x$abi" != "xpecoff" ; then AC_CHECK_HEADERS([pthread.h], , [AC_MSG_ERROR([pthread.h is missing])]) dnl Some systems may embed pthreads functionality in libc; check for libpthread dnl first, but try libc too before failing. AC_CHECK_LIB([pthread], [pthread_create], [LIBS="$LIBS -lpthread"], [AC_SEARCH_LIBS([pthread_create], , , AC_MSG_ERROR([libpthread is missing]))]) fi CPPFLAGS="$CPPFLAGS -D_REENTRANT" dnl The force_lazy_lock, _malloc_thread_cleanup and _pthread_mutex_init_calloc_cb dnl checks for FreeBSD assume that jemalloc is being built as a libc malloc dnl replacement. If jemalloc is being built for a different purpose, these checks dnl can be overridden with --disable-bsd-malloc-hooks. 
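dnl (For a build like vmem's, which reaches jemalloc only through the pool_* API
dnl rather than as a libc malloc replacement, the override mentioned above would be
dnl
dnl     ./configure --disable-bsd-malloc-hooks
dnl
dnl whereas the default, encoded just below, keeps the hooks enabled.)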
AC_ARG_ENABLE([bsd-malloc-hooks], [AS_HELP_STRING([--disable-bsd-malloc-hooks], [Disable force_lazy_lock, _malloc_thread_cleanup and _pthread_mutex_init_calloc_cb checks])], [if test "x$enable_bsd_malloc_hooks" = "xno" ; then AC_DEFINE([JEMALLOC_DISABLE_BSD_MALLOC_HOOKS], [ ]) enable_bsd_malloc_hooks="0" force_lazy_lock="0" else enable_bsd_malloc_hooks="1" fi ], [enable_bsd_malloc_hooks="1"] ) dnl Check whether the BSD-specific _malloc_thread_cleanup() exists. If so, use dnl it rather than pthreads TSD cleanup functions to support cleanup during dnl thread exit, in order to avoid pthreads library recursion during dnl bootstrapping. if test "x$enable_bsd_malloc_hooks" = "x1" ; then AC_CHECK_FUNC([_malloc_thread_cleanup], [have__malloc_thread_cleanup="1"], [have__malloc_thread_cleanup="0"] ) if test "x$have__malloc_thread_cleanup" = "x1" ; then AC_DEFINE([JEMALLOC_MALLOC_THREAD_CLEANUP], [ ]) force_tls="1" fi fi dnl Check whether the BSD-specific _pthread_mutex_init_calloc_cb() exists. If dnl so, mutex initialization causes allocation, and we need to implement this dnl callback function in order to prevent recursive allocation. if test "x$enable_bsd_malloc_hooks" = "x1" ; then AC_CHECK_FUNC([_pthread_mutex_init_calloc_cb], [have__pthread_mutex_init_calloc_cb="1"], [have__pthread_mutex_init_calloc_cb="0"] ) if test "x$have__pthread_mutex_init_calloc_cb" = "x1" ; then AC_DEFINE([JEMALLOC_MUTEX_INIT_CB]) fi fi dnl Disable lazy locking by default. AC_ARG_ENABLE([lazy_lock], [AS_HELP_STRING([--enable-lazy-lock], [Enable lazy locking (only lock when multi-threaded)])], [if test "x$enable_lazy_lock" = "xno" ; then enable_lazy_lock="0" else enable_lazy_lock="1" fi ], [enable_lazy_lock="0"] ) if test "x$enable_lazy_lock" = "x0" -a "x${force_lazy_lock}" = "x1" ; then AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues]) enable_lazy_lock="1" fi if test "x$enable_lazy_lock" = "x1" ; then if test "x$abi" != "xpecoff" ; then AC_CHECK_HEADERS([dlfcn.h], , [AC_MSG_ERROR([dlfcn.h is missing])]) AC_CHECK_FUNC([dlsym], [], [AC_CHECK_LIB([dl], [dlsym], [LIBS="$LIBS -ldl"], [AC_MSG_ERROR([libdl is missing])]) ]) fi AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ]) else enable_lazy_lock="0" fi AC_SUBST([enable_lazy_lock]) AC_ARG_ENABLE([tls], [AS_HELP_STRING([--disable-tls], [Disable thread-local storage (__thread keyword)])], if test "x$enable_tls" = "xno" ; then enable_tls="0" else enable_tls="1" fi , enable_tls="1" ) if test "x${enable_tls}" = "x0" -a "x${force_tls}" = "x1" ; then AC_MSG_RESULT([Forcing TLS to avoid allocator/threading bootstrap issues]) enable_tls="1" fi if test "x${enable_tls}" = "x1" -a "x${force_tls}" = "x0" ; then AC_MSG_RESULT([Forcing no TLS to avoid allocator/threading bootstrap issues]) enable_tls="0" fi if test "x${enable_tls}" = "x1" ; then AC_MSG_CHECKING([for TLS]) AC_COMPILE_IFELSE([AC_LANG_PROGRAM( [[ __thread int x; ]], [[ x = 42; return 0; ]])], AC_MSG_RESULT([yes]), AC_MSG_RESULT([no]) enable_tls="0") fi AC_SUBST([enable_tls]) if test "x${enable_tls}" = "x1" ; then AC_DEFINE_UNQUOTED([JEMALLOC_TLS], [ ]) elif test "x${force_tls}" = "x1" ; then AC_MSG_ERROR([Failed to configure TLS, which is mandatory for correct function]) fi dnl ============================================================================ dnl Check for atomic(9) operations as provided on FreeBSD. 
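dnl (Looking back at the TLS logic a few stanzas up: the decision is deliberately
dnl two-sided. A platform that set force_tls=1, for instance when
dnl _malloc_thread_cleanup was found, overrides --disable-tls; a platform that set
dnl force_tls=0, such as Darwin or Windows, overrides --enable-tls; and requesting
dnl heap profiling on a force_tls=0 platform was already rejected earlier as a
dnl hard configure error.)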
JE_COMPILABLE([atomic(9)], [ #include #include #include ], [ { uint32_t x32 = 0; volatile uint32_t *x32p = &x32; atomic_fetchadd_32(x32p, 1); } { unsigned long xlong = 0; volatile unsigned long *xlongp = &xlong; atomic_fetchadd_long(xlongp, 1); } ], [je_cv_atomic9]) if test "x${je_cv_atomic9}" = "xyes" ; then AC_DEFINE([JEMALLOC_ATOMIC9]) fi dnl ============================================================================ dnl Check for atomic(3) operations as provided on Darwin. JE_COMPILABLE([Darwin OSAtomic*()], [ #include #include ], [ { int32_t x32 = 0; volatile int32_t *x32p = &x32; OSAtomicAdd32(1, x32p); } { int64_t x64 = 0; volatile int64_t *x64p = &x64; OSAtomicAdd64(1, x64p); } ], [je_cv_osatomic]) if test "x${je_cv_osatomic}" = "xyes" ; then AC_DEFINE([JEMALLOC_OSATOMIC], [ ]) fi dnl ============================================================================ dnl Check for madvise(2). JE_COMPILABLE([madvise(2)], [ #include ], [ { madvise((void *)0, 0, 0); } ], [je_cv_madvise]) if test "x${je_cv_madvise}" = "xyes" ; then AC_DEFINE([JEMALLOC_HAVE_MADVISE], [ ]) fi dnl ============================================================================ dnl Check whether __sync_{add,sub}_and_fetch() are available despite dnl __GCC_HAVE_SYNC_COMPARE_AND_SWAP_n macros being undefined. AC_DEFUN([JE_SYNC_COMPARE_AND_SWAP_CHECK],[ AC_CACHE_CHECK([whether to force $1-bit __sync_{add,sub}_and_fetch()], [je_cv_sync_compare_and_swap_$2], [AC_LINK_IFELSE([AC_LANG_PROGRAM([ #include ], [ #ifndef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_$2 { uint$1_t x$1 = 0; __sync_add_and_fetch(&x$1, 42); __sync_sub_and_fetch(&x$1, 1); } #else #error __GCC_HAVE_SYNC_COMPARE_AND_SWAP_$2 is defined, no need to force #endif ])], [je_cv_sync_compare_and_swap_$2=yes], [je_cv_sync_compare_and_swap_$2=no])]) if test "x${je_cv_sync_compare_and_swap_$2}" = "xyes" ; then AC_DEFINE([JE_FORCE_SYNC_COMPARE_AND_SWAP_$2], [ ]) fi ]) if test "x${je_cv_atomic9}" != "xyes" -a "x${je_cv_osatomic}" != "xyes" ; then JE_SYNC_COMPARE_AND_SWAP_CHECK(32, 4) JE_SYNC_COMPARE_AND_SWAP_CHECK(64, 8) fi dnl ============================================================================ dnl Check for __builtin_clz() and __builtin_clzl(). AC_CACHE_CHECK([for __builtin_clz], [je_cv_builtin_clz], [AC_LINK_IFELSE([AC_LANG_PROGRAM([], [ { unsigned x = 0; int y = __builtin_clz(x); } { unsigned long x = 0; int y = __builtin_clzl(x); } ])], [je_cv_builtin_clz=yes], [je_cv_builtin_clz=no])]) if test "x${je_cv_builtin_clz}" = "xyes" ; then AC_DEFINE([JEMALLOC_HAVE_BUILTIN_CLZ], [ ]) fi dnl ============================================================================ dnl Check for spinlock(3) operations as provided on Darwin. JE_COMPILABLE([Darwin OSSpin*()], [ #include #include ], [ OSSpinLock lock = 0; OSSpinLockLock(&lock); OSSpinLockUnlock(&lock); ], [je_cv_osspin]) if test "x${je_cv_osspin}" = "xyes" ; then AC_DEFINE([JEMALLOC_OSSPIN], [ ]) fi dnl ============================================================================ dnl Darwin-related configuration. 
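dnl Net effect of the probes above: jemalloc ends up relying on FreeBSD's
dnl atomic(9), Darwin's OSAtomic*(), or, only when neither exists and the
dnl __GCC_HAVE_SYNC_COMPARE_AND_SWAP_{4,8} macros are absent, force-enabled
dnl __sync_* builtins; __builtin_clz and Darwin's OSSpinLock are detected
dnl independently as optional extras. After configure has run, the outcome can be
dnl inspected in the generated header, e.g.
dnl
dnl     grep -E 'ATOMIC9|OSATOMIC|FORCE_SYNC|OSSPIN' \
dnl         include/jemalloc/internal/jemalloc_internal_defs.h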
AC_ARG_ENABLE([zone-allocator], [AS_HELP_STRING([--disable-zone-allocator], [Disable zone allocator for Darwin])], [if test "x$enable_zone_allocator" = "xno" ; then enable_zone_allocator="0" else enable_zone_allocator="1" fi ], [if test "x${abi}" = "xmacho"; then enable_zone_allocator="1" fi ] ) AC_SUBST([enable_zone_allocator]) if test "x${enable_zone_allocator}" = "x1" ; then if test "x${abi}" != "xmacho"; then AC_MSG_ERROR([--enable-zone-allocator is only supported on Darwin]) fi AC_DEFINE([JEMALLOC_IVSALLOC], [ ]) AC_DEFINE([JEMALLOC_ZONE], [ ]) dnl The szone version jumped from 3 to 6 between the OS X 10.5.x and 10.6 dnl releases. malloc_zone_t and malloc_introspection_t have new fields in dnl 10.6, which is the only source-level indication of the change. AC_MSG_CHECKING([malloc zone version]) AC_DEFUN([JE_ZONE_PROGRAM], [AC_LANG_PROGRAM( [#include ], [static foo[[sizeof($1) $2 sizeof(void *) * $3 ? 1 : -1]]] )]) AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,14)],[JEMALLOC_ZONE_VERSION=3],[ AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,15)],[JEMALLOC_ZONE_VERSION=5],[ AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,16)],[ AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_introspection_t,==,9)],[JEMALLOC_ZONE_VERSION=6],[ AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_introspection_t,==,13)],[JEMALLOC_ZONE_VERSION=7],[JEMALLOC_ZONE_VERSION=] )])],[ AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,==,17)],[JEMALLOC_ZONE_VERSION=8],[ AC_COMPILE_IFELSE([JE_ZONE_PROGRAM(malloc_zone_t,>,17)],[JEMALLOC_ZONE_VERSION=9],[JEMALLOC_ZONE_VERSION=] )])])])]) if test "x${JEMALLOC_ZONE_VERSION}" = "x"; then AC_MSG_RESULT([unsupported]) AC_MSG_ERROR([Unsupported malloc zone version]) fi if test "${JEMALLOC_ZONE_VERSION}" = 9; then JEMALLOC_ZONE_VERSION=8 AC_MSG_RESULT([> 8]) else AC_MSG_RESULT([$JEMALLOC_ZONE_VERSION]) fi AC_DEFINE_UNQUOTED(JEMALLOC_ZONE_VERSION, [$JEMALLOC_ZONE_VERSION]) fi dnl ============================================================================ dnl Check for typedefs, structures, and compiler characteristics. AC_HEADER_STDBOOL dnl ============================================================================ dnl Define commands that generate output files. AC_CONFIG_COMMANDS([include/jemalloc/internal/private_namespace.h], [ mkdir -p "${objroot}include/jemalloc/internal" "${srcdir}/include/jemalloc/internal/private_namespace.sh" "${srcdir}/include/jemalloc/internal/private_symbols.txt" > "${objroot}include/jemalloc/internal/private_namespace.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/internal/private_unnamespace.h], [ mkdir -p "${objroot}include/jemalloc/internal" "${srcdir}/include/jemalloc/internal/private_unnamespace.sh" "${srcdir}/include/jemalloc/internal/private_symbols.txt" > "${objroot}include/jemalloc/internal/private_unnamespace.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/internal/public_symbols.txt], [ f="${objroot}include/jemalloc/internal/public_symbols.txt" mkdir -p "${objroot}include/jemalloc/internal" cp /dev/null "${f}" for nm in `echo ${mangling_map} |tr ',' ' '` ; do n=`echo ${nm} |tr ':' ' ' |awk '{print $[]1}'` m=`echo ${nm} |tr ':' ' ' |awk '{print $[]2}'` echo "${n}:${m}" >> "${f}" dnl Remove name from public_syms so that it isn't redefined later. 
public_syms=`for sym in ${public_syms}; do echo "${sym}"; done |grep -v "^${n}\$" |tr '\n' ' '` done for sym in ${public_syms} ; do n="${sym}" m="${JEMALLOC_PREFIX}${sym}" echo "${n}:${m}" >> "${f}" done ], [ srcdir="${srcdir}" objroot="${objroot}" mangling_map="${mangling_map}" public_syms="${public_syms}" JEMALLOC_PREFIX="${JEMALLOC_PREFIX}" ]) AC_CONFIG_COMMANDS([include/jemalloc/internal/public_namespace.h], [ mkdir -p "${objroot}include/jemalloc/internal" "${srcdir}/include/jemalloc/internal/public_namespace.sh" "${objroot}include/jemalloc/internal/public_symbols.txt" > "${objroot}include/jemalloc/internal/public_namespace.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/internal/public_unnamespace.h], [ mkdir -p "${objroot}include/jemalloc/internal" "${srcdir}/include/jemalloc/internal/public_unnamespace.sh" "${objroot}include/jemalloc/internal/public_symbols.txt" > "${objroot}include/jemalloc/internal/public_unnamespace.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/internal/size_classes.h], [ mkdir -p "${objroot}include/jemalloc/internal" "${srcdir}/include/jemalloc/internal/size_classes.sh" > "${objroot}include/jemalloc/internal/size_classes.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_protos_jet.h], [ mkdir -p "${objroot}include/jemalloc" cat "${srcdir}/include/jemalloc/jemalloc_protos.h.in" | sed -e 's/@je_@/jet_/g' > "${objroot}include/jemalloc/jemalloc_protos_jet.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_rename.h], [ mkdir -p "${objroot}include/jemalloc" "${srcdir}/include/jemalloc/jemalloc_rename.sh" "${objroot}include/jemalloc/internal/public_symbols.txt" > "${objroot}include/jemalloc/jemalloc_rename.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_mangle.h], [ mkdir -p "${objroot}include/jemalloc" "${srcdir}/include/jemalloc/jemalloc_mangle.sh" "${objroot}include/jemalloc/internal/public_symbols.txt" je_ > "${objroot}include/jemalloc/jemalloc_mangle.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/jemalloc_mangle_jet.h], [ mkdir -p "${objroot}include/jemalloc" "${srcdir}/include/jemalloc/jemalloc_mangle.sh" "${objroot}include/jemalloc/internal/public_symbols.txt" jet_ > "${objroot}include/jemalloc/jemalloc_mangle_jet.h" ], [ srcdir="${srcdir}" objroot="${objroot}" ]) AC_CONFIG_COMMANDS([include/jemalloc/jemalloc.h], [ mkdir -p "${objroot}include/jemalloc" "${srcdir}/include/jemalloc/jemalloc.sh" "${objroot}" > "${objroot}include/jemalloc/jemalloc${install_suffix}.h" ], [ srcdir="${srcdir}" objroot="${objroot}" install_suffix="${install_suffix}" ]) dnl Process .in files. AC_SUBST([cfghdrs_in]) AC_SUBST([cfghdrs_out]) AC_CONFIG_HEADERS([$cfghdrs_tup]) dnl ============================================================================ dnl Generate outputs. AC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc.sh]) AC_SUBST([cfgoutputs_in]) AC_SUBST([cfgoutputs_out]) AC_OUTPUT dnl ============================================================================ dnl Print out the results of configuration. 
AC_MSG_RESULT([===============================================================================]) AC_MSG_RESULT([jemalloc version : ${jemalloc_version}]) AC_MSG_RESULT([library revision : ${rev}]) AC_MSG_RESULT([]) AC_MSG_RESULT([CC : ${CC}]) AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}]) AC_MSG_RESULT([CFLAGS : ${CFLAGS}]) AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}]) AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}]) AC_MSG_RESULT([LIBS : ${LIBS}]) AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}]) AC_MSG_RESULT([]) AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}]) AC_MSG_RESULT([XSLROOT : ${XSLROOT}]) AC_MSG_RESULT([]) AC_MSG_RESULT([PREFIX : ${PREFIX}]) AC_MSG_RESULT([BINDIR : ${BINDIR}]) AC_MSG_RESULT([INCLUDEDIR : ${INCLUDEDIR}]) AC_MSG_RESULT([LIBDIR : ${LIBDIR}]) AC_MSG_RESULT([DATADIR : ${DATADIR}]) AC_MSG_RESULT([MANDIR : ${MANDIR}]) AC_MSG_RESULT([]) AC_MSG_RESULT([srcroot : ${srcroot}]) AC_MSG_RESULT([abs_srcroot : ${abs_srcroot}]) AC_MSG_RESULT([objroot : ${objroot}]) AC_MSG_RESULT([abs_objroot : ${abs_objroot}]) AC_MSG_RESULT([]) AC_MSG_RESULT([JEMALLOC_PREFIX : ${JEMALLOC_PREFIX}]) AC_MSG_RESULT([JEMALLOC_PRIVATE_NAMESPACE]) AC_MSG_RESULT([ : ${JEMALLOC_PRIVATE_NAMESPACE}]) AC_MSG_RESULT([install_suffix : ${install_suffix}]) AC_MSG_RESULT([autogen : ${enable_autogen}]) AC_MSG_RESULT([cc-silence : ${enable_cc_silence}]) AC_MSG_RESULT([debug : ${enable_debug}]) AC_MSG_RESULT([code-coverage : ${enable_code_coverage}]) AC_MSG_RESULT([stats : ${enable_stats}]) AC_MSG_RESULT([prof : ${enable_prof}]) AC_MSG_RESULT([prof-libunwind : ${enable_prof_libunwind}]) AC_MSG_RESULT([prof-libgcc : ${enable_prof_libgcc}]) AC_MSG_RESULT([prof-gcc : ${enable_prof_gcc}]) AC_MSG_RESULT([tcache : ${enable_tcache}]) AC_MSG_RESULT([fill : ${enable_fill}]) AC_MSG_RESULT([utrace : ${enable_utrace}]) AC_MSG_RESULT([valgrind : ${enable_valgrind}]) AC_MSG_RESULT([xmalloc : ${enable_xmalloc}]) AC_MSG_RESULT([munmap : ${enable_munmap}]) AC_MSG_RESULT([lazy_lock : ${enable_lazy_lock}]) AC_MSG_RESULT([tls : ${enable_tls}]) AC_MSG_RESULT([===============================================================================]) �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������vmem-1.8/src/jemalloc/coverage.sh�������������������������������������������������������������������0000775�0000000�0000000�00000000501�13615050741�0017033�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������#!/bin/sh set -e objdir=$1 suffix=$2 shift 2 objs=$@ gcov -b -p -f -o "${objdir}" ${objs} # Move gcov outputs so that subsequent gcov invocations won't clobber results # for the same sources with different compilation flags. for f in `find . 
-maxdepth 1 -type f -name '*.gcov'` ; do mv "${f}" "${f}.${suffix}" done �����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������vmem-1.8/src/jemalloc/doc/��������������������������������������������������������������������������0000775�0000000�0000000�00000000000�13615050741�0015452�5����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������vmem-1.8/src/jemalloc/doc/html.xsl.in���������������������������������������������������������������0000664�0000000�0000000�00000000313�13615050741�0017550�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ ���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������vmem-1.8/src/jemalloc/doc/jemalloc.xml.in�����������������������������������������������������������0000664�0000000�0000000�00000274505�13615050741�0020404�0����������������������������������������������������������������������������������������������������ustar�00root����������������������������root����������������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������ User Manual jemalloc @jemalloc_version@ Jason Evans Author JEMALLOC 3 jemalloc jemalloc general purpose memory allocation functions LIBRARY This manual describes jemalloc @jemalloc_version@. More information can be found at the jemalloc website. SYNOPSIS #include <stdlib.h> #include <jemalloc/jemalloc.h> Standard API void *malloc size_t size void *calloc size_t number size_t size int posix_memalign void **ptr size_t alignment size_t size void *aligned_alloc size_t alignment size_t size void *realloc void *ptr size_t size void free void *ptr Non-standard API void *mallocx size_t size int flags void *rallocx void *ptr size_t size int flags size_t xallocx void *ptr size_t size size_t extra int flags size_t sallocx void *ptr int flags void dallocx void *ptr int flags size_t nallocx size_t size int flags int mallctl const char *name void *oldp size_t *oldlenp void *newp size_t newlen int mallctlnametomib const char *name size_t *mibp size_t *miblenp int mallctlbymib const size_t *mib size_t miblen void *oldp size_t *oldlenp void *newp size_t newlen void malloc_stats_print void (*write_cb) void *, const char * void *cbopaque const char *opts size_t malloc_usable_size const void *ptr void (*malloc_message) void *cbopaque const char *s const char *malloc_conf; DESCRIPTION Standard API The malloc function allocates size bytes of uninitialized memory. 
The allocated space is suitably aligned (after possible pointer coercion) for storage of any type of object. The calloc function allocates space for number objects, each size bytes in length. The result is identical to calling malloc with an argument of number * size, with the exception that the allocated memory is explicitly initialized to zero bytes. The posix_memalign function allocates size bytes of memory such that the allocation's base address is a multiple of alignment, and returns the allocation in the value pointed to by ptr. The requested alignment must be a power of 2 at least as large as sizeof(void *). The aligned_alloc function allocates size bytes of memory such that the allocation's base address is a multiple of alignment. The requested alignment must be a power of 2. Behavior is undefined if size is not an integral multiple of alignment. The realloc function changes the size of the previously allocated memory referenced by ptr to size bytes. The contents of the memory are unchanged up to the lesser of the new and old sizes. If the new size is larger, the contents of the newly allocated portion of the memory are undefined. Upon success, the memory referenced by ptr is freed and a pointer to the newly allocated memory is returned. Note that realloc may move the memory allocation, resulting in a different return value than ptr. If ptr is NULL, the realloc function behaves identically to malloc for the specified size. The free function causes the allocated memory referenced by ptr to be made available for future allocations. If ptr is NULL, no action occurs. Non-standard API The mallocx, rallocx, xallocx, sallocx, dallocx, and nallocx functions all have a flags argument that can be used to specify options. The functions only check the options that are contextually relevant. Use bitwise or (|) operations to specify one or more of the following: MALLOCX_LG_ALIGN(la) Align the memory allocation to start at an address that is a multiple of (1 << la). This macro does not validate that la is within the valid range. MALLOCX_ALIGN(a) Align the memory allocation to start at an address that is a multiple of a, where a is a power of two. This macro does not validate that a is a power of 2. MALLOCX_ZERO Initialize newly allocated memory to contain zero bytes. In the growing reallocation case, the real size prior to reallocation defines the boundary between untouched bytes and those that are initialized to contain zero bytes. If this macro is absent, newly allocated memory is uninitialized. MALLOCX_ARENA(a) Use the arena specified by the index a (and by necessity bypass the thread cache). This macro has no effect for regions that were allocated via an arena other than the one specified. This macro does not validate that a specifies an arena index in the valid range. The mallocx function allocates at least size bytes of memory, and returns a pointer to the base address of the allocation. Behavior is undefined if size is 0, or if request size overflows due to size class and/or alignment constraints. The rallocx function resizes the allocation at ptr to be at least size bytes, and returns a pointer to the base address of the resulting allocation, which may or may not have moved from its original location. Behavior is undefined if size is 0, or if request size overflows due to size class and/or alignment constraints. The xallocx function resizes the allocation at ptr in place to be at least size bytes, and returns the real size of the allocation. 
If extra is non-zero, an attempt is made to resize the allocation to be at least (size + extra) bytes, though inability to allocate the extra byte(s) will not by itself result in failure to resize. Behavior is undefined if size is 0, or if (size + extra > SIZE_T_MAX). The sallocx function returns the real size of the allocation at ptr. The dallocx function causes the memory referenced by ptr to be made available for future allocations. The nallocx function allocates no memory, but it performs the same size computation as the mallocx function, and returns the real size of the allocation that would result from the equivalent mallocx function call. Behavior is undefined if size is 0, or if request size overflows due to size class and/or alignment constraints. The mallctl function provides a general interface for introspecting the memory allocator, as well as setting modifiable parameters and triggering actions. The period-separated name argument specifies a location in a tree-structured namespace; see the section for documentation on the tree contents. To read a value, pass a pointer via oldp to adequate space to contain the value, and a pointer to its length via oldlenp; otherwise pass NULL and NULL. Similarly, to write a value, pass a pointer to the value via newp, and its length via newlen; otherwise pass NULL and 0. The mallctlnametomib function provides a way to avoid repeated name lookups for applications that repeatedly query the same portion of the namespace, by translating a name to a “Management Information Base” (MIB) that can be passed repeatedly to mallctlbymib. Upon successful return from mallctlnametomib, mibp contains an array of *miblenp integers, where *miblenp is the lesser of the number of components in name and the input value of *miblenp. Thus it is possible to pass a *miblenp that is smaller than the number of period-separated name components, which results in a partial MIB that can be used as the basis for constructing a complete MIB. For name components that are integers (e.g. the 2 in arenas.bin.2.size), the corresponding MIB component will always be that integer. Therefore, it is legitimate to construct code like the following: The malloc_stats_print function writes human-readable summary statistics via the write_cb callback function pointer and cbopaque data passed to write_cb, or malloc_message if write_cb is NULL. This function can be called repeatedly. General information that never changes during execution can be omitted by specifying "g" as a character within the opts string. Note that malloc_message uses the mallctl* functions internally, so inconsistent statistics can be reported if multiple threads use these functions simultaneously. If is specified during configuration, “m” and “a” can be specified to omit merged arena and per arena statistics, respectively; “b” and “l” can be specified to omit per size class statistics for bins and large objects, respectively. Unrecognized characters are silently ignored. Note that thread caching may prevent some statistics from being completely up to date, since extra locking would be required to merge counters that track thread cache operations. The malloc_usable_size function returns the usable size of the allocation pointed to by ptr. The return value may be larger than the size that was requested during allocation. The malloc_usable_size function is not a mechanism for in-place realloc; rather it is provided solely as a tool for introspection purposes. 
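The partial-MIB construction referenced above lends itself to a short example. A minimal sketch in that spirit, which translates arenas.bin.0.size once and then varies the integer component to read every bin size (variable names are illustrative and error checking is omitted):

#include <stdlib.h>
#include <jemalloc/jemalloc.h>

void
print_bin_sizes(void)
{
	unsigned nbins, i;
	size_t mib[4];
	size_t len, miblen;

	/* Read the number of bin size classes. */
	len = sizeof(nbins);
	mallctl("arenas.nbins", &nbins, &len, NULL, 0);

	/* Translate the name once; reuse the MIB with a varying index. */
	miblen = sizeof(mib) / sizeof(mib[0]);
	mallctlnametomib("arenas.bin.0.size", mib, &miblen);
	for (i = 0; i < nbins; i++) {
		size_t bin_size;

		mib[2] = i;	/* Overwrite the integer name component. */
		len = sizeof(bin_size);
		mallctlbymib(mib, miblen, &bin_size, &len, NULL, 0);
		/* Do something with bin_size... */
	}
}

Translating the name once and reusing the MIB avoids the cost of repeated string lookups when the same portion of the namespace is queried many times.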
Any discrepancy between the requested allocation size and the size reported by malloc_usable_size should not be depended on, since such behavior is entirely implementation-dependent. TUNING Once, when the first call is made to one of the memory allocation routines, the allocator initializes its internals based in part on various options that can be specified at compile- or run-time. The string pointed to by the global variable malloc_conf, the “name” of the file referenced by the symbolic link named /etc/malloc.conf, and the value of the environment variable MALLOC_CONF, will be interpreted, in that order, from left to right as options. Note that malloc_conf may be read before main is entered, so the declaration of malloc_conf should specify an initializer that contains the final value to be read by jemalloc. malloc_conf is a compile-time setting, whereas /etc/malloc.conf and MALLOC_CONF can be safely set any time prior to program invocation. An options string is a comma-separated list of option:value pairs. There is one key corresponding to each opt.* mallctl (see the section for options documentation). For example, abort:true,narenas:1 sets the opt.abort and opt.narenas options. Some options have boolean values (true/false), others have integer values (base 8, 10, or 16, depending on prefix), and yet others have raw string values. IMPLEMENTATION NOTES Traditionally, allocators have used sbrk 2 to obtain memory, which is suboptimal for several reasons, including race conditions, increased fragmentation, and artificial limitations on maximum usable memory. If sbrk 2 is supported by the operating system, this allocator uses both mmap 2 and sbrk 2, in that order of preference; otherwise only mmap 2 is used. This allocator uses multiple arenas in order to reduce lock contention for threaded programs on multi-processor systems. This works well with regard to threading scalability, but incurs some costs. There is a small fixed per-arena overhead, and additionally, arenas manage memory completely independently of each other, which means a small fixed increase in overall memory fragmentation. These overheads are not generally an issue, given the number of arenas normally used. Note that using substantially more arenas than the default is not likely to improve performance, mainly due to reduced cache performance. However, it may make sense to reduce the number of arenas if an application does not make much use of the allocation functions. In addition to multiple arenas, unless is specified during configuration, this allocator supports thread-specific caching for small and large objects, in order to make it possible to completely avoid synchronization for most allocation requests. Such caching allows very fast allocation in the common case, but it increases memory usage and fragmentation, since a bounded number of objects can remain allocated in each thread cache. Memory is conceptually broken into equal-sized chunks, where the chunk size is a power of two that is greater than the page size. Chunks are always aligned to multiples of the chunk size. This alignment makes it possible to find metadata for user objects very quickly. User objects are broken into three categories according to size: small, large, and huge. Small objects are smaller than one page. Large objects are smaller than the chunk size. Huge objects are a multiple of the chunk size. 
Small and large objects are managed entirely by arenas; huge objects are additionally aggregated in a single data structure that is shared by all threads. Huge objects are typically used by applications infrequently enough that this single data structure is not a scalability issue. Each chunk that is managed by an arena tracks its contents as runs of contiguous pages (unused, backing a set of small objects, or backing one large object). The combination of chunk alignment and chunk page maps makes it possible to determine all metadata regarding small and large allocations in constant time. Small objects are managed in groups by page runs. Each run maintains a frontier and free list to track which regions are in use. Allocation requests that are no more than half the quantum (8 or 16, depending on architecture) are rounded up to the nearest power of two that is at least sizeof(double). All other small object size classes are multiples of the quantum, spaced such that internal fragmentation is limited to approximately 25% for all but the smallest size classes. Allocation requests that are larger than the maximum small size class, but small enough to fit in an arena-managed chunk (see the opt.lg_chunk option), are rounded up to the nearest run size. Allocation requests that are too large to fit in an arena-managed chunk are rounded up to the nearest multiple of the chunk size. Allocations are packed tightly together, which can be an issue for multi-threaded applications. If you need to assure that allocations do not suffer from cacheline sharing, round your allocation requests up to the nearest multiple of the cacheline size, or specify cacheline alignment when allocating. Assuming 4 MiB chunks, 4 KiB pages, and a 16-byte quantum on a 64-bit system, the size classes in each category are as shown in . Size classes Category Spacing Size Small lg [8] 16 [16, 32, 48, ..., 128] 32 [160, 192, 224, 256] 64 [320, 384, 448, 512] 128 [640, 768, 896, 1024] 256 [1280, 1536, 1792, 2048] 512 [2560, 3072, 3584] Large 4 KiB [4 KiB, 8 KiB, 12 KiB, ..., 4072 KiB] Huge 4 MiB [4 MiB, 8 MiB, 12 MiB, ...]
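As a usage note for the alignment guidance above, the following sketch requests a zeroed, cacheline-aligned block via the non-standard mallocx flags. The 64-byte cacheline size is an assumption for illustration; posix_memalign is the standard-API alternative (without the zeroing).

#include <stdlib.h>
#include <jemalloc/jemalloc.h>

#define CACHELINE 64	/* Assumed cacheline size for this sketch. */

void *
alloc_counter_array(size_t nelems, size_t elem_size)
{
	/* Zeroed, cacheline-aligned allocation to avoid false sharing. */
	void *p = mallocx(nelems * elem_size,
	    MALLOCX_ALIGN(CACHELINE) | MALLOCX_ZERO);

	/*
	 * posix_memalign(&p, CACHELINE, nelems * elem_size) is the
	 * standard-API equivalent, minus the zero filling.
	 */
	return (p);
}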
MALLCTL NAMESPACE The following names are defined in the namespace accessible via the mallctl* functions. Value types are specified in parentheses, their readable/writable statuses are encoded as rw, r-, -w, or --, and required build configuration flags follow, if any. A name element encoded as <i> or <j> indicates an integer component, where the integer varies from 0 to some upper value that must be determined via introspection. In the case of stats.arenas.<i>.*, <i> equal to arenas.narenas can be used to access the summation of statistics from all arenas. Take special note of the epoch mallctl, which controls refreshing of cached dynamic statistics. version (const char *) r- Return the jemalloc version string. epoch (uint64_t) rw If a value is passed in, refresh the data from which the mallctl* functions report values, and increment the epoch. Return the current epoch. This is useful for detecting whether another thread caused a refresh. config.debug (bool) r- was specified during build configuration. config.fill (bool) r- was specified during build configuration. config.lazy_lock (bool) r- was specified during build configuration. config.munmap (bool) r- was specified during build configuration. config.prof (bool) r- was specified during build configuration. config.prof_libgcc (bool) r- was not specified during build configuration. config.prof_libunwind (bool) r- was specified during build configuration. config.stats (bool) r- was specified during build configuration. config.tcache (bool) r- was not specified during build configuration. config.tls (bool) r- was not specified during build configuration. config.utrace (bool) r- was specified during build configuration. config.valgrind (bool) r- was specified during build configuration. config.xmalloc (bool) r- was specified during build configuration. opt.abort (bool) r- Abort-on-warning enabled/disabled. If true, most warnings are fatal. The process will call abort 3 in these cases. This option is disabled by default unless is specified during configuration, in which case it is enabled by default. opt.dss (const char *) r- dss (sbrk 2) allocation precedence as related to mmap 2 allocation. The following settings are supported if sbrk 2 is supported by the operating system: “disabled”, “primary”, and “secondary”; otherwise only “disabled” is supported. The default is “secondary” if sbrk 2 is supported by the operating system; “disabled” otherwise. opt.lg_chunk (size_t) r- Virtual memory chunk size (log base 2). If a chunk size outside the supported size range is specified, the size is silently clipped to the minimum/maximum supported size. The default chunk size is 4 MiB (2^22). opt.narenas (size_t) r- Maximum number of arenas to use for automatic multiplexing of threads and arenas. The default is four times the number of CPUs, or one if there is a single CPU. opt.lg_dirty_mult (ssize_t) r- Per-arena minimum ratio (log base 2) of active to dirty pages. Some dirty unused pages may be allowed to accumulate, within the limit set by the ratio (or one chunk worth of dirty pages, whichever is greater), before informing the kernel about some of those pages via madvise 2 or a similar system call. This provides the kernel with sufficient information to recycle dirty pages if physical memory becomes scarce and the pages remain unused. The default minimum ratio is 8:1 (2^3:1); an option value of -1 will disable dirty page purging. opt.stats_print (bool) r- Enable/disable statistics printing at exit. 
If enabled, the malloc_stats_print function is called at program exit via an atexit 3 function. If is specified during configuration, this has the potential to cause deadlock for a multi-threaded process that exits while one or more threads are executing in the memory allocation functions. Therefore, this option should only be used with care; it is primarily intended as a performance tuning aid during application development. This option is disabled by default. opt.junk (bool) r- [] Junk filling enabled/disabled. If enabled, each byte of uninitialized allocated memory will be initialized to 0xa5. All deallocated memory will be initialized to 0x5a. This is intended for debugging and will impact performance negatively. This option is disabled by default unless is specified during configuration, in which case it is enabled by default unless running inside Valgrind. opt.quarantine (size_t) r- [] Per thread quarantine size in bytes. If non-zero, each thread maintains a FIFO object quarantine that stores up to the specified number of bytes of memory. The quarantined memory is not freed until it is released from quarantine, though it is immediately junk-filled if the opt.junk option is enabled. This feature is of particular use in combination with Valgrind, which can detect attempts to access quarantined objects. This is intended for debugging and will impact performance negatively. The default quarantine size is 0 unless running inside Valgrind, in which case the default is 16 MiB. opt.redzone (bool) r- [] Redzones enabled/disabled. If enabled, small allocations have redzones before and after them. Furthermore, if the opt.junk option is enabled, the redzones are checked for corruption during deallocation. However, the primary intended purpose of this feature is to be used in combination with Valgrind, which needs redzones in order to do effective buffer overflow/underflow detection. This option is intended for debugging and will impact performance negatively. This option is disabled by default unless running inside Valgrind. opt.zero (bool) r- [] Zero filling enabled/disabled. If enabled, each byte of uninitialized allocated memory will be initialized to 0. Note that this initialization only happens once for each byte, so realloc and rallocx calls do not zero memory that was previously allocated. This is intended for debugging and will impact performance negatively. This option is disabled by default. opt.utrace (bool) r- [] Allocation tracing based on utrace 2 enabled/disabled. This option is disabled by default. opt.xmalloc (bool) r- [] Abort-on-out-of-memory enabled/disabled. If enabled, rather than returning failure for any allocation function, display a diagnostic message on STDERR_FILENO and cause the program to drop core (using abort 3). If an application is designed to depend on this behavior, set the option at compile time by including the following in the source code: This option is disabled by default. opt.tcache (bool) r- [] Thread-specific caching enabled/disabled. When there are multiple threads, each thread uses a thread-specific cache for objects up to a certain size. Thread-specific caching allows many allocations to be satisfied without performing any thread synchronization, at the cost of increased memory use. See the opt.lg_tcache_max option for related tuning information. This option is enabled by default unless running inside Valgrind, in which case it is forcefully disabled. 
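The opt.xmalloc entry above mentions setting the option at compile time by including a definition in the source code. A minimal sketch of such a definition, using the malloc_conf mechanism described in the TUNING section (the option string shown is only an example):

#include <jemalloc/jemalloc.h>

/*
 * Read during allocator bootstrap, before main() is entered.  The same
 * options can instead be supplied at run time via /etc/malloc.conf or the
 * MALLOC_CONF environment variable, e.g. MALLOC_CONF="abort:true,narenas:1".
 */
const char *malloc_conf = "xmalloc:true";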
opt.lg_tcache_max (size_t) r- [] Maximum size class (log base 2) to cache in the thread-specific cache. At a minimum, all small size classes are cached, and at a maximum all large size classes are cached. The default maximum is 32 KiB (2^15). opt.prof (bool) r- [] Memory profiling enabled/disabled. If enabled, profile memory allocation activity. See the opt.prof_active option for on-the-fly activation/deactivation. See the opt.lg_prof_sample option for probabilistic sampling control. See the opt.prof_accum option for control of cumulative sample reporting. See the opt.lg_prof_interval option for information on interval-triggered profile dumping, the opt.prof_gdump option for information on high-water-triggered profile dumping, and the opt.prof_final option for final profile dumping. Profile output is compatible with the included pprof Perl script, which originates from the gperftools package. opt.prof_prefix (const char *) r- [] Filename prefix for profile dumps. If the prefix is set to the empty string, no automatic dumps will occur; this is primarily useful for disabling the automatic final heap dump (which also disables leak reporting, if enabled). The default prefix is jeprof. opt.prof_active (bool) rw [] Profiling activated/deactivated. This is a secondary control mechanism that makes it possible to start the application with profiling enabled (see the opt.prof option) but inactive, then toggle profiling at any time during program execution with the prof.active mallctl. This option is enabled by default. opt.lg_prof_sample (ssize_t) r- [] Average interval (log base 2) between allocation samples, as measured in bytes of allocation activity. Increasing the sampling interval decreases profile fidelity, but also decreases the computational overhead. The default sample interval is 512 KiB (2^19 B). opt.prof_accum (bool) r- [] Reporting of cumulative object/byte counts in profile dumps enabled/disabled. If this option is enabled, every unique backtrace must be stored for the duration of execution. Depending on the application, this can impose a large memory overhead, and the cumulative counts are not always of interest. This option is disabled by default. opt.lg_prof_interval (ssize_t) r- [] Average interval (log base 2) between memory profile dumps, as measured in bytes of allocation activity. The actual interval between dumps may be sporadic because decentralized allocation counters are used to avoid synchronization bottlenecks. Profiles are dumped to files named according to the pattern <prefix>.<pid>.<seq>.i<iseq>.heap, where <prefix> is controlled by the opt.prof_prefix option. By default, interval-triggered profile dumping is disabled (encoded as -1). opt.prof_gdump (bool) r- [] Trigger a memory profile dump every time the total virtual memory exceeds the previous maximum. Profiles are dumped to files named according to the pattern <prefix>.<pid>.<seq>.u<useq>.heap, where <prefix> is controlled by the opt.prof_prefix option. This option is disabled by default. opt.prof_final (bool) r- [] Use an atexit 3 function to dump final memory usage to a file named according to the pattern <prefix>.<pid>.<seq>.f.heap, where <prefix> is controlled by the opt.prof_prefix option. This option is enabled by default. opt.prof_leak (bool) r- [] Leak reporting enabled/disabled. If enabled, use an atexit 3 function to report memory leaks detected by allocation sampling. See the opt.prof option for information on analyzing heap profile output. This option is disabled by default. 
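To tie the profiling options above together, the sketch below activates sampling around a workload and then requests a dump through the mallctl interface. It assumes a library built with profiling support and an application started with profiling enabled but inactive (for example MALLOC_CONF="prof:true,prof_active:false"); the workload callback is purely illustrative.

#include <stdbool.h>
#include <jemalloc/jemalloc.h>

/* Activate sampling, run the workload of interest, then dump a profile. */
void
profile_workload(void (*workload)(void))
{
	bool active = true;

	/* prof.active (bool, rw): turn sampling on. */
	mallctl("prof.active", NULL, NULL, &active, sizeof(active));

	workload();

	/*
	 * prof.dump (-w): a NULL newp dumps to a file named according to
	 * the <prefix>.<pid>.<seq>.m<mseq>.heap pattern.
	 */
	mallctl("prof.dump", NULL, NULL, NULL, 0);

	active = false;
	mallctl("prof.active", NULL, NULL, &active, sizeof(active));
}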
thread.arena (unsigned) rw Get or set the arena associated with the calling thread. If the specified arena was not initialized beforehand (see the arenas.initialized mallctl), it will be automatically initialized as a side effect of calling this interface. thread.allocated (uint64_t) r- [] Get the total number of bytes ever allocated by the calling thread. This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases. thread.allocatedp (uint64_t *) r- [] Get a pointer to the value that is returned by the thread.allocated mallctl. This is useful for avoiding the overhead of repeated mallctl* calls. thread.deallocated (uint64_t) r- [] Get the total number of bytes ever deallocated by the calling thread. This counter has the potential to wrap around; it is up to the application to appropriately interpret the counter in such cases. thread.deallocatedp (uint64_t *) r- [] Get a pointer to the value that is returned by the thread.deallocated mallctl. This is useful for avoiding the overhead of repeated mallctl* calls. thread.tcache.enabled (bool) rw [] Enable/disable calling thread's tcache. The tcache is implicitly flushed as a side effect of becoming disabled (see thread.tcache.flush). thread.tcache.flush (void) -- [] Flush calling thread's tcache. This interface releases all cached objects and internal data structures associated with the calling thread's thread-specific cache. Ordinarily, this interface need not be called, since automatic periodic incremental garbage collection occurs, and the thread cache is automatically discarded when a thread exits. However, garbage collection is triggered by allocation activity, so it is possible for a thread that stops allocating/deallocating to retain its cache indefinitely, in which case the developer may find manual flushing useful. arena.<i>.purge (void) -- Purge unused dirty pages for arena <i>, or for all arenas if <i> equals arenas.narenas. arena.<i>.dss (const char *) rw Set the precedence of dss allocation as related to mmap allocation for arena <i>, or for all arenas if <i> equals arenas.narenas. See opt.dss for supported settings. arena.<i>.chunk.alloc (chunk_alloc_t *) rw Get or set the chunk allocation function for arena <i>. If setting, the chunk deallocation function should also be set via arena.<i>.chunk.dalloc to a companion function that knows how to deallocate the chunks. typedef void *(chunk_alloc_t) void *chunk size_t size size_t alignment bool *zero unsigned arena_ind A chunk allocation function conforms to the chunk_alloc_t type and upon success returns a pointer to size bytes of memory on behalf of arena arena_ind such that the chunk's base address is a multiple of alignment, as well as setting *zero to indicate whether the chunk is zeroed. Upon error the function returns NULL and leaves *zero unmodified. The size parameter is always a multiple of the chunk size. The alignment parameter is always a power of two at least as large as the chunk size. Zeroing is mandatory if *zero is true upon function entry. If chunk is not NULL, the returned pointer must be chunk or NULL if it could not be allocated. Note that replacing the default chunk allocation function makes the arena's arena.<i>.dss setting irrelevant. arena.<i>.chunk.dalloc (chunk_dalloc_t *) rw Get or set the chunk deallocation function for arena <i>. 
If setting, the chunk deallocation function must be capable of deallocating all extant chunks associated with arena <i>, usually by passing unknown chunks to the deallocation function that was replaced. In practice, it is feasible to control allocation for arenas created via arenas.extend such that all chunks originate from an application-supplied chunk allocator (by setting custom chunk allocation/deallocation functions just after arena creation), but the automatically created arenas may have already created chunks prior to the application having an opportunity to take over chunk allocation. typedef void (chunk_dalloc_t) void *chunk size_t size unsigned arena_ind A chunk deallocation function conforms to the chunk_dalloc_t type and deallocates a chunk of given size on behalf of arena arena_ind. arenas.narenas (unsigned) r- Current limit on number of arenas. arenas.initialized (bool *) r- An array of arenas.narenas booleans. Each boolean indicates whether the corresponding arena is initialized. arenas.quantum (size_t) r- Quantum size. arenas.page (size_t) r- Page size. arenas.tcache_max (size_t) r- [] Maximum thread-cached size class. arenas.nbins (unsigned) r- Number of bin size classes. arenas.nhbins (unsigned) r- [] Total number of thread cache bin size classes. arenas.bin.<i>.size (size_t) r- Maximum size supported by size class. arenas.bin.<i>.nregs (uint32_t) r- Number of regions per page run. arenas.bin.<i>.run_size (size_t) r- Number of bytes per page run. arenas.nlruns (size_t) r- Total number of large size classes. arenas.lrun.<i>.size (size_t) r- Maximum size supported by this large size class. arenas.extend (unsigned) r- Extend the array of arenas by appending a new arena, and returning the new arena index. prof.active (bool) rw [] Control whether sampling is currently active. See the opt.prof_active option for additional information. prof.dump (const char *) -w [] Dump a memory profile to the specified file, or if NULL is specified, to a file according to the pattern <prefix>.<pid>.<seq>.m<mseq>.heap, where <prefix> is controlled by the opt.prof_prefix option. prof.interval (uint64_t) r- [] Average number of bytes allocated between inverval-based profile dumps. See the opt.lg_prof_interval option for additional information. stats.cactive (size_t *) r- [] Pointer to a counter that contains an approximate count of the current number of bytes in active pages. The estimate may be high, but never low, because each arena rounds up to the nearest multiple of the chunk size when computing its contribution to the counter. Note that the epoch mallctl has no bearing on this counter. Furthermore, counter consistency is maintained via atomic operations, so it is necessary to use an atomic operation in order to guarantee a consistent read when dereferencing the pointer. stats.allocated (size_t) r- [] Total number of bytes allocated by the application. stats.active (size_t) r- [] Total number of bytes in active pages allocated by the application. This is a multiple of the page size, and greater than or equal to stats.allocated. This does not include stats.arenas.<i>.pdirty and pages entirely devoted to allocator metadata. stats.mapped (size_t) r- [] Total number of bytes in chunks mapped on behalf of the application. This is a multiple of the chunk size, and is at least as large as stats.active. This does not include inactive chunks. stats.chunks.current (size_t) r- [] Total number of chunks actively mapped on behalf of the application. This does not include inactive chunks. 
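The chunk_alloc_t and chunk_dalloc_t hooks described above can be replaced per arena through the mallctl interface. The sketch below follows the signatures as documented here and backs chunks with anonymous mmap, over-allocating so that the documented alignment guarantee can be met. It is a simplified illustration: arena index 0 is hard-coded, size overflow is ignored, and the caveat about chunks created before the hooks were installed is not handled.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <jemalloc/jemalloc.h>

/* Hypothetical hook: carve an aligned chunk out of an oversized mapping. */
static void *
my_chunk_alloc(void *chunk, size_t size, size_t alignment, bool *zero,
    unsigned arena_ind)
{
	(void)arena_ind;
	if (chunk != NULL)
		return (NULL);	/* Fixed-address requests are not supported. */

	/* Over-allocate so an address that is a multiple of alignment exists. */
	size_t alloc_size = size + alignment;
	char *addr = mmap(NULL, alloc_size, PROT_READ|PROT_WRITE,
	    MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return (NULL);

	uintptr_t aligned = ((uintptr_t)addr + alignment - 1) &
	    ~(uintptr_t)(alignment - 1);
	/* Trim the unused head and tail of the mapping. */
	if (aligned != (uintptr_t)addr)
		munmap(addr, aligned - (uintptr_t)addr);
	size_t tail = ((uintptr_t)addr + alloc_size) - (aligned + size);
	if (tail != 0)
		munmap((void *)(aligned + size), tail);

	*zero = true;	/* Fresh anonymous pages are zero-filled. */
	return ((void *)aligned);
}

static void
my_chunk_dalloc(void *chunk, size_t size, unsigned arena_ind)
{
	(void)arena_ind;
	munmap(chunk, size);
}

/*
 * Install the hooks for arena 0; a real application would typically use an
 * index obtained from arenas.extend.
 */
static void
install_chunk_hooks(void)
{
	chunk_alloc_t *alloc_hook = my_chunk_alloc;
	chunk_dalloc_t *dalloc_hook = my_chunk_dalloc;

	mallctl("arena.0.chunk.dalloc", NULL, NULL, &dalloc_hook,
	    sizeof(dalloc_hook));
	mallctl("arena.0.chunk.alloc", NULL, NULL, &alloc_hook,
	    sizeof(alloc_hook));
}

Installing the deallocation hook before the allocation hook ensures that any chunk returned by the new allocator can always be released by a matching function.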
stats.chunks.total (uint64_t) r- [] Cumulative number of chunks allocated. stats.chunks.high (size_t) r- [] Maximum number of active chunks at any time thus far. stats.arenas.<i>.dss (const char *) r- dss (sbrk 2) allocation precedence as related to mmap 2 allocation. See opt.dss for details. stats.arenas.<i>.nthreads (unsigned) r- Number of threads currently assigned to arena. stats.arenas.<i>.pactive (size_t) r- Number of pages in active runs. stats.arenas.<i>.pdirty (size_t) r- Number of pages within unused runs that are potentially dirty, and for which madvise... MADV_DONTNEED or similar has not been called. stats.arenas.<i>.mapped (size_t) r- [] Number of mapped bytes. stats.arenas.<i>.npurge (uint64_t) r- [] Number of dirty page purge sweeps performed. stats.arenas.<i>.nmadvise (uint64_t) r- [] Number of madvise... MADV_DONTNEED or similar calls made to purge dirty pages. stats.arenas.<i>.purged (uint64_t) r- [] Number of pages purged. stats.arenas.<i>.small.allocated (size_t) r- [] Number of bytes currently allocated by small objects. stats.arenas.<i>.small.nmalloc (uint64_t) r- [] Cumulative number of allocation requests served by small bins. stats.arenas.<i>.small.ndalloc (uint64_t) r- [] Cumulative number of small objects returned to bins. stats.arenas.<i>.small.nrequests (uint64_t) r- [] Cumulative number of small allocation requests. stats.arenas.<i>.large.allocated (size_t) r- [] Number of bytes currently allocated by large objects. stats.arenas.<i>.large.nmalloc (uint64_t) r- [] Cumulative number of large allocation requests served directly by the arena. stats.arenas.<i>.large.ndalloc (uint64_t) r- [] Cumulative number of large deallocation requests served directly by the arena. stats.arenas.<i>.large.nrequests (uint64_t) r- [] Cumulative number of large allocation requests. stats.arenas.<i>.huge.allocated (size_t) r- [] Number of bytes currently allocated by huge objects. stats.arenas.<i>.huge.nmalloc (uint64_t) r- [] Cumulative number of huge allocation requests served directly by the arena. stats.arenas.<i>.huge.ndalloc (uint64_t) r- [] Cumulative number of huge deallocation requests served directly by the arena. stats.arenas.<i>.huge.nrequests (uint64_t) r- [] Cumulative number of huge allocation requests. stats.arenas.<i>.bins.<j>.allocated (size_t) r- [] Current number of bytes allocated by bin. stats.arenas.<i>.bins.<j>.nmalloc (uint64_t) r- [] Cumulative number of allocations served by bin. stats.arenas.<i>.bins.<j>.ndalloc (uint64_t) r- [] Cumulative number of allocations returned to bin. stats.arenas.<i>.bins.<j>.nrequests (uint64_t) r- [] Cumulative number of allocation requests. stats.arenas.<i>.bins.<j>.nfills (uint64_t) r- [ ] Cumulative number of tcache fills. stats.arenas.<i>.bins.<j>.nflushes (uint64_t) r- [ ] Cumulative number of tcache flushes. stats.arenas.<i>.bins.<j>.nruns (uint64_t) r- [] Cumulative number of runs created. stats.arenas.<i>.bins.<j>.nreruns (uint64_t) r- [] Cumulative number of times the current run from which to allocate changed. stats.arenas.<i>.bins.<j>.curruns (size_t) r- [] Current number of runs. stats.arenas.<i>.lruns.<j>.nmalloc (uint64_t) r- [] Cumulative number of allocation requests for this size class served directly by the arena. stats.arenas.<i>.lruns.<j>.ndalloc (uint64_t) r- [] Cumulative number of deallocation requests for this size class served directly by the arena. stats.arenas.<i>.lruns.<j>.nrequests (uint64_t) r- [] Cumulative number of allocation requests for this size class. 
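Because the counters above are cached, the epoch mallctl described earlier must be written before reading them. A small sketch that refreshes the statistics and prints a few of the global totals (meaningful only when the library is built with statistics support):

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Refresh cached statistics, then print a few of the global totals. */
void
print_alloc_stats(void)
{
	uint64_t epoch = 1;
	size_t sz, allocated, active, mapped;

	/* Writing any value to "epoch" refreshes the mallctl statistics. */
	sz = sizeof(epoch);
	mallctl("epoch", &epoch, &sz, &epoch, sz);

	sz = sizeof(size_t);
	if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0 &&
	    mallctl("stats.active", &active, &sz, NULL, 0) == 0 &&
	    mallctl("stats.mapped", &mapped, &sz, NULL, 0) == 0) {
		printf("allocated/active/mapped: %zu/%zu/%zu\n",
		    allocated, active, mapped);
	}
}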
stats.arenas.<i>.lruns.<j>.curruns (size_t) r- [] Current number of runs for this size class. DEBUGGING MALLOC PROBLEMS When debugging, it is a good idea to configure/build jemalloc with the and options, and recompile the program with suitable options and symbols for debugger support. When so configured, jemalloc incorporates a wide variety of run-time assertions that catch application errors such as double-free, write-after-free, etc. Programs often accidentally depend on “uninitialized” memory actually being filled with zero bytes. Junk filling (see the opt.junk option) tends to expose such bugs in the form of obviously incorrect results and/or coredumps. Conversely, zero filling (see the opt.zero option) eliminates the symptoms of such bugs. Between these two options, it is usually possible to quickly detect, diagnose, and eliminate such bugs. This implementation does not provide much detail about the problems it detects, because the performance impact for storing such information would be prohibitive. However, jemalloc does integrate with the most excellent Valgrind tool if the configuration option is enabled. DIAGNOSTIC MESSAGES If any of the memory allocation/deallocation functions detect an error or warning condition, a message will be printed to file descriptor STDERR_FILENO. Errors will result in the process dumping core. If the opt.abort option is set, most warnings are treated as errors. The malloc_message variable allows the programmer to override the function which emits the text strings forming the errors and warnings if for some reason the STDERR_FILENO file descriptor is not suitable for this. malloc_message takes the cbopaque pointer argument that is NULL unless overridden by the arguments in a call to malloc_stats_print, followed by a string pointer. Please note that doing anything which tries to allocate memory in this function is likely to result in a crash or deadlock. All messages are prefixed by “<jemalloc>: ”. RETURN VALUES Standard API The malloc and calloc functions return a pointer to the allocated memory if successful; otherwise a NULL pointer is returned and errno is set to ENOMEM. The posix_memalign function returns the value 0 if successful; otherwise it returns an error value. The posix_memalign function will fail if: EINVAL The alignment parameter is not a power of 2 at least as large as sizeof(void *). ENOMEM Memory allocation error. The aligned_alloc function returns a pointer to the allocated memory if successful; otherwise a NULL pointer is returned and errno is set. The aligned_alloc function will fail if: EINVAL The alignment parameter is not a power of 2. ENOMEM Memory allocation error. The realloc function returns a pointer, possibly identical to ptr, to the allocated memory if successful; otherwise a NULL pointer is returned, and errno is set to ENOMEM if the error was the result of an allocation failure. The realloc function always leaves the original buffer intact when an error occurs. The free function returns no value. Non-standard API The mallocx and rallocx functions return a pointer to the allocated memory if successful; otherwise a NULL pointer is returned to indicate insufficient contiguous memory was available to service the allocation request. The xallocx function returns the real size of the resulting resized allocation pointed to by ptr, which is a value less than size if the allocation could not be adequately grown in place. The sallocx function returns the real size of the allocation pointed to by ptr. 
The nallocx returns the real size that would result from a successful equivalent mallocx function call, or zero if insufficient memory is available to perform the size computation. The mallctl, mallctlnametomib, and mallctlbymib functions return 0 on success; otherwise they return an error value. The functions will fail if: EINVAL newp is not NULL, and newlen is too large or too small. Alternatively, *oldlenp is too large or too small; in this case as much data as possible are read despite the error. ENOENT name or mib specifies an unknown/invalid value. EPERM Attempt to read or write void value, or attempt to write read-only value. EAGAIN A memory allocation failure occurred. EFAULT An interface with side effects failed in some way not directly related to mallctl* read/write processing. The malloc_usable_size function returns the usable size of the allocation pointed to by ptr. ENVIRONMENT The following environment variable affects the execution of the allocation functions: MALLOC_CONF If the environment variable MALLOC_CONF is set, the characters it contains will be interpreted as options. EXAMPLES To dump core whenever a problem occurs: ln -s 'abort:true' /etc/malloc.conf To specify in the source a chunk size that is 16 MiB: SEE ALSO madvise 2, mmap 2, sbrk 2, utrace 2, alloca 3, atexit 3, getpagesize 3 STANDARDS The malloc, calloc, realloc, and free functions conform to ISO/IEC 9899:1990 (“ISO C90”). The posix_memalign function conforms to IEEE Std 1003.1-2001 (“POSIX.1”). vmem-1.8/src/jemalloc/doc/manpages.xsl.in000066400000000000000000000003171361505074100204030ustar00rootroot00000000000000 vmem-1.8/src/jemalloc/doc/stylesheet.xsl000066400000000000000000000004571361505074100204010ustar00rootroot00000000000000 ansi "" vmem-1.8/src/jemalloc/include/000077500000000000000000000000001361505074100163305ustar00rootroot00000000000000vmem-1.8/src/jemalloc/include/jemalloc/000077500000000000000000000000001361505074100201165ustar00rootroot00000000000000vmem-1.8/src/jemalloc/include/jemalloc/internal/000077500000000000000000000000001361505074100217325ustar00rootroot00000000000000vmem-1.8/src/jemalloc/include/jemalloc/internal/arena.h000066400000000000000000001071541361505074100232010ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* * RUN_MAX_OVRHD indicates maximum desired run header overhead. Runs are sized * as small as possible such that this setting is still honored, without * violating other constraints. The goal is to make runs as small as possible * without exceeding a per run external fragmentation threshold. * * We use binary fixed point math for overhead computations, where the binary * point is implicitly RUN_BFP bits to the left. * * Note that it is possible to set RUN_MAX_OVRHD low enough that it cannot be * honored for some/all object sizes, since when heap profiling is enabled * there is one pointer of header overhead per object (plus a constant). This * constraint is relaxed (ignored) for runs that are so small that the * per-region overhead is greater than: * * (RUN_MAX_OVRHD / (reg_interval << (3+RUN_BFP)) */ #define RUN_BFP 12 /* \/ Implicit binary fixed point. */ #define RUN_MAX_OVRHD 0x0000003dU #define RUN_MAX_OVRHD_RELAX 0x00001800U /* Maximum number of regions in one run. */ #define LG_RUN_MAXREGS 11 #define RUN_MAXREGS (1U << LG_RUN_MAXREGS) /* * Minimum redzone size. Redzones may be larger than this if necessary to * preserve region alignment. 
*/ #define REDZONE_MINSIZE 16 /* * The minimum ratio of active:dirty pages per arena is computed as: * * (nactive >> opt_lg_dirty_mult) >= ndirty * * So, supposing that opt_lg_dirty_mult is 3, there can be no less than 8 times * as many active pages as dirty pages. */ #define LG_DIRTY_MULT_DEFAULT 3 typedef struct arena_chunk_map_s arena_chunk_map_t; typedef struct arena_chunk_s arena_chunk_t; typedef struct arena_run_s arena_run_t; typedef struct arena_bin_info_s arena_bin_info_t; typedef struct arena_bin_s arena_bin_t; typedef struct arena_s arena_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS /* Each element of the chunk map corresponds to one page within the chunk. */ struct arena_chunk_map_s { #ifndef JEMALLOC_PROF /* * Overlay prof_ctx in order to allow it to be referenced by dead code. * Such antics aren't warranted for per arena data structures, but * chunk map overhead accounts for a percentage of memory, rather than * being just a fixed cost. */ union { #endif union { /* * Linkage for run trees. There are two disjoint uses: * * 1) arena_t's runs_avail tree. * 2) arena_run_t conceptually uses this linkage for in-use * non-full runs, rather than directly embedding linkage. */ rb_node(arena_chunk_map_t) rb_link; /* * List of runs currently in purgatory. arena_chunk_purge() * temporarily allocates runs that contain dirty pages while * purging, so that other threads cannot use the runs while the * purging thread is operating without the arena lock held. */ ql_elm(arena_chunk_map_t) ql_link; } u; /* Profile counters, used for large object runs. */ prof_ctx_t *prof_ctx; #ifndef JEMALLOC_PROF }; /* union { ... }; */ #endif /* * Run address (or size) and various flags are stored together. The bit * layout looks like (assuming 32-bit system): * * ???????? ???????? ????nnnn nnnndula * * ? : Unallocated: Run address for first/last pages, unset for internal * pages. * Small: Run page offset. * Large: Run size for first page, unset for trailing pages. * n : binind for small size class, BININD_INVALID for large size class. * d : dirty? * u : unzeroed? * l : large? * a : allocated? * * Following are example bit patterns for the three types of runs. 
* * p : run page offset * s : run size * n : binind for size class; large objects set these to BININD_INVALID * x : don't care * - : 0 * + : 1 * [DULA] : bit set * [dula] : bit unset * * Unallocated (clean): * ssssssss ssssssss ssss++++ ++++du-a * xxxxxxxx xxxxxxxx xxxxxxxx xxxx-Uxx * ssssssss ssssssss ssss++++ ++++dU-a * * Unallocated (dirty): * ssssssss ssssssss ssss++++ ++++D--a * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * ssssssss ssssssss ssss++++ ++++D--a * * Small: * pppppppp pppppppp ppppnnnn nnnnd--A * pppppppp pppppppp ppppnnnn nnnn---A * pppppppp pppppppp ppppnnnn nnnnd--A * * Large: * ssssssss ssssssss ssss++++ ++++D-LA * xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx * -------- -------- ----++++ ++++D-LA * * Large (sampled, size <= PAGE): * ssssssss ssssssss ssssnnnn nnnnD-LA * * Large (not sampled, size == PAGE): * ssssssss ssssssss ssss++++ ++++D-LA */ size_t bits; #define CHUNK_MAP_BININD_SHIFT 4 #define BININD_INVALID ((size_t)0xffU) /* CHUNK_MAP_BININD_MASK == (BININD_INVALID << CHUNK_MAP_BININD_SHIFT) */ #define CHUNK_MAP_BININD_MASK ((size_t)0xff0U) #define CHUNK_MAP_BININD_INVALID CHUNK_MAP_BININD_MASK #define CHUNK_MAP_FLAGS_MASK ((size_t)0xcU) #define CHUNK_MAP_DIRTY ((size_t)0x8U) #define CHUNK_MAP_UNZEROED ((size_t)0x4U) #define CHUNK_MAP_LARGE ((size_t)0x2U) #define CHUNK_MAP_ALLOCATED ((size_t)0x1U) #define CHUNK_MAP_KEY CHUNK_MAP_ALLOCATED }; typedef rb_tree(arena_chunk_map_t) arena_avail_tree_t; typedef rb_tree(arena_chunk_map_t) arena_run_tree_t; typedef ql_head(arena_chunk_map_t) arena_chunk_mapelms_t; /* Arena chunk header. */ struct arena_chunk_s { /* Arena that owns the chunk. */ arena_t *arena; /* Linkage for tree of arena chunks that contain dirty runs. */ rb_node(arena_chunk_t) dirty_link; /* Number of dirty pages. */ size_t ndirty; /* Number of available runs. */ size_t nruns_avail; /* * Number of available run adjacencies that purging could coalesce. * Clean and dirty available runs are not coalesced, which causes * virtual memory fragmentation. The ratio of * (nruns_avail-nruns_adjac):nruns_adjac is used for tracking this * fragmentation. */ size_t nruns_adjac; /* * Map of pages within chunk that keeps track of free/large/small. The * first map_bias entries are omitted, since the chunk header does not * need to be tracked in the map. This omission saves a header page * for common chunk sizes (e.g. 4 MiB). */ arena_chunk_map_t map[1]; /* Dynamically sized. */ }; typedef rb_tree(arena_chunk_t) arena_chunk_tree_t; struct arena_run_s { /* Bin this run is associated with. */ arena_bin_t *bin; /* Index of next region that has never been allocated, or nregs. */ uint32_t nextind; /* Number of free regions in run. */ unsigned nfree; }; /* * Read-only information associated with each element of arena_t's bins array * is stored separately, partly to reduce memory usage (only one copy, rather * than one per arena), but mainly to avoid false cacheline sharing. * * Each run has the following layout: * * /--------------------\ * | arena_run_t header | * | ... | * bitmap_offset | bitmap | * | ... | * |--------------------| * | redzone | * reg0_offset | region 0 | * | redzone | * |--------------------| \ * | redzone | | * | region 1 | > reg_interval * | redzone | / * |--------------------| * | ... | * | ... | * | ... | * |--------------------| * | redzone | * | region nregs-1 | * | redzone | * |--------------------| * | alignment pad? 
| * \--------------------/ * * reg_interval has at least the same minimum alignment as reg_size; this * preserves the alignment constraint that sa2u() depends on. Alignment pad is * either 0 or redzone_size; it is present only if needed to align reg0_offset. */ struct arena_bin_info_s { /* Size of regions in a run for this bin's size class. */ size_t reg_size; /* Redzone size. */ size_t redzone_size; /* Interval between regions (reg_size + (redzone_size << 1)). */ size_t reg_interval; /* Total size of a run for this bin's size class. */ size_t run_size; /* Total number of regions in a run for this bin's size class. */ uint32_t nregs; /* * Offset of first bitmap_t element in a run header for this bin's size * class. */ uint32_t bitmap_offset; /* * Metadata used to manipulate bitmaps for runs associated with this * bin. */ bitmap_info_t bitmap_info; /* Offset of first region in a run for this bin's size class. */ uint32_t reg0_offset; }; struct arena_bin_s { /* * All operations on runcur, runs, and stats require that lock be * locked. Run allocation/deallocation are protected by the arena lock, * which may be acquired while holding one or more bin locks, but not * vice versa. */ malloc_mutex_t lock; /* * Current run being used to service allocations of this bin's size * class. */ arena_run_t *runcur; /* * Tree of non-full runs. This tree is used when looking for an * existing run when runcur is no longer usable. We choose the * non-full run that is lowest in memory; this policy tends to keep * objects packed well, and it can also help reduce the number of * almost-empty chunks. */ arena_run_tree_t runs; /* Bin statistics. */ malloc_bin_stats_t stats; }; struct arena_s { /* This arena's index within the arenas array. */ unsigned ind; /* This arena's pool. */ pool_t *pool; /* * Number of threads currently assigned to this arena. This field is * protected by arenas_lock. */ unsigned nthreads; /* * There are three classes of arena operations from a locking * perspective: * 1) Thread assignment (modifies nthreads) is protected by * arenas_lock. * 2) Bin-related operations are protected by bin locks. * 3) Chunk- and run-related operations are protected by this mutex. */ malloc_mutex_t lock; arena_stats_t stats; /* * List of tcaches for extant threads associated with this arena. * Stats from these are merged incrementally, and at exit. */ ql_head(tcache_t) tcache_ql; uint64_t prof_accumbytes; dss_prec_t dss_prec; /* Tree of dirty-page-containing chunks this arena manages. */ arena_chunk_tree_t chunks_dirty; /* * In order to avoid rapid chunk allocation/deallocation when an arena * oscillates right on the cusp of needing a new chunk, cache the most * recently freed chunk. The spare is left in the arena's chunk trees * until it is deleted. * * There is one spare chunk per arena, rather than one spare total, in * order to avoid interactions between multiple threads that could make * a single spare inadequate. */ arena_chunk_t *spare; /* Number of pages in active runs and huge regions. */ size_t nactive; /* * Current count of pages within unused runs that are potentially * dirty, and for which madvise(... MADV_DONTNEED) has not been called. * By tracking this, we can institute a limit on how much dirty unused * memory is mapped for each arena. */ size_t ndirty; /* * Approximate number of pages being purged. It is possible for * multiple threads to purge dirty pages concurrently, and they use * npurgatory to indicate the total number of pages all threads are * attempting to purge.
*/ size_t npurgatory; /* * Size/address-ordered trees of this arena's available runs. The trees * are used for first-best-fit run allocation. */ arena_avail_tree_t runs_avail; /* * user-configurable chunk allocation and deallocation functions. */ chunk_alloc_t *chunk_alloc; chunk_dalloc_t *chunk_dalloc; /* bins is used to store trees of free regions. */ arena_bin_t bins[NBINS]; }; arena_chunk_map_t * arena_runs_avail_tree_iter(arena_t *arena, arena_chunk_map_t *(*cb) (arena_avail_tree_t *, arena_chunk_map_t *, void *), void *arg); #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern ssize_t opt_lg_dirty_mult; /* * small_size2bin_tab is a compact lookup table that rounds request sizes up to * size classes. In order to reduce cache footprint, the table is compressed, * and all accesses are via small_size2bin(). */ extern uint8_t const small_size2bin_tab[]; /* * small_bin2size_tab duplicates information in arena_bin_info, but in a const * array, for which it is easier for the compiler to optimize repeated * dereferences. */ extern uint32_t const small_bin2size_tab[NBINS]; extern arena_bin_info_t arena_bin_info[NBINS]; /* Number of large size classes. */ #define nlclasses (chunk_npages - map_bias) void *arena_chunk_alloc_huge(arena_t *arena, void *new_addr, size_t size, size_t alignment, bool *zero); void arena_chunk_dalloc_huge(arena_t *arena, void *chunk, size_t size); void arena_purge_all(arena_t *arena); void arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, size_t binind, uint64_t prof_accumbytes); void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info, bool zero); #ifdef JEMALLOC_JET typedef void (arena_redzone_corruption_t)(void *, size_t, bool, size_t, uint8_t); extern arena_redzone_corruption_t *arena_redzone_corruption; typedef void (arena_dalloc_junk_small_t)(void *, arena_bin_info_t *); extern arena_dalloc_junk_small_t *arena_dalloc_junk_small; #else void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info); #endif void arena_quarantine_junk_small(void *ptr, size_t usize); void *arena_malloc_small(arena_t *arena, size_t size, bool zero); void *arena_malloc_large(arena_t *arena, size_t size, bool zero); void *arena_palloc(arena_t *arena, size_t size, size_t alignment, bool zero); void arena_prof_promoted(const void *ptr, size_t size); void arena_dalloc_bin_locked(arena_t *arena, arena_chunk_t *chunk, void *ptr, arena_chunk_map_t *mapelm); void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t pageind, arena_chunk_map_t *mapelm); void arena_dalloc_small(arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t pageind); #ifdef JEMALLOC_JET typedef void (arena_dalloc_junk_large_t)(void *, size_t); extern arena_dalloc_junk_large_t *arena_dalloc_junk_large; #endif void arena_dalloc_large_locked(arena_t *arena, arena_chunk_t *chunk, void *ptr); void arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk, void *ptr); #ifdef JEMALLOC_JET typedef void (arena_ralloc_junk_large_t)(void *, size_t, size_t); extern arena_ralloc_junk_large_t *arena_ralloc_junk_large; #endif bool arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size, size_t extra, bool zero); void *arena_ralloc(arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc); dss_prec_t arena_dss_prec_get(arena_t *arena); bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec); void
arena_stats_merge(arena_t *arena, const char **dss, size_t *nactive, size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats); bool arena_new(pool_t *pool, arena_t *arena, unsigned ind); bool arena_boot(arena_t *arena); void arena_params_boot(void); void arena_prefork(arena_t *arena); void arena_postfork_parent(arena_t *arena); void arena_postfork_child(arena_t *arena); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE size_t small_size2bin_compute(size_t size); size_t small_size2bin_lookup(size_t size); size_t small_size2bin(size_t size); size_t small_bin2size_compute(size_t binind); size_t small_bin2size_lookup(size_t binind); size_t small_bin2size(size_t binind); size_t small_s2u_compute(size_t size); size_t small_s2u_lookup(size_t size); size_t small_s2u(size_t size); size_t arena_mapelm_to_pageind(arena_chunk_map_t *mapelm); arena_chunk_map_t *arena_mapp_get(arena_chunk_t *chunk, size_t pageind); size_t *arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbitsp_read(size_t *mapbitsp); size_t arena_mapbits_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind); size_t arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind); void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits); void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags); void arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind, size_t size); void arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags); void arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind, size_t binind); void arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind, size_t binind, size_t flags); void arena_mapbits_unzeroed_set(arena_chunk_t *chunk, size_t pageind, size_t unzeroed); bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes); bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes); bool arena_prof_accum(arena_t *arena, uint64_t accumbytes); size_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits); size_t arena_bin_index(arena_t *arena, arena_bin_t *bin); unsigned arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr); prof_ctx_t *arena_prof_ctx_get(const void *ptr); void arena_prof_ctx_set(const void *ptr, prof_ctx_t *ctx); void *arena_malloc(arena_t *arena, size_t size, bool zero, bool try_tcache); size_t arena_salloc(const void *ptr, bool demote); void arena_dalloc(arena_chunk_t *chunk, void *ptr, bool try_tcache); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_)) # ifdef JEMALLOC_ARENA_INLINE_A JEMALLOC_INLINE size_t small_size2bin_compute(size_t size) { #if (NTBINS != 0) if (size <= (ZU(1) << LG_TINY_MAXCLASS)) { size_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1; size_t lg_ceil = lg_floor(pow2_ceil(size)); return (lg_ceil < lg_tmin ? 
0 : lg_ceil - lg_tmin); } else #endif { size_t x = lg_floor((size<<1)-1); size_t shift = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM) ? 0 : x - (LG_SIZE_CLASS_GROUP + LG_QUANTUM); size_t grp = shift << LG_SIZE_CLASS_GROUP; size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1) ? LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1; size_t mod = ((size - 1) >> lg_delta) & ((ZU(1) << LG_SIZE_CLASS_GROUP) - 1); size_t bin = NTBINS + grp + mod; return (bin); } } JEMALLOC_ALWAYS_INLINE size_t small_size2bin_lookup(size_t size) { assert(size <= LOOKUP_MAXCLASS); { size_t ret = ((size_t)(small_size2bin_tab[(size-1) >> LG_TINY_MIN])); assert(ret == small_size2bin_compute(size)); return (ret); } } JEMALLOC_ALWAYS_INLINE size_t small_size2bin(size_t size) { assert(size > 0); if (size <= LOOKUP_MAXCLASS) return (small_size2bin_lookup(size)); else return (small_size2bin_compute(size)); } JEMALLOC_INLINE size_t small_bin2size_compute(size_t binind) { #if (NTBINS > 0) if (binind < NTBINS) return (ZU(1) << (LG_TINY_MAXCLASS - NTBINS + 1 + binind)); else #endif { size_t reduced_binind = binind - NTBINS; size_t grp = reduced_binind >> LG_SIZE_CLASS_GROUP; size_t mod = reduced_binind & ((ZU(1) << LG_SIZE_CLASS_GROUP) - 1); size_t grp_size_mask = ~((!!grp)-1); size_t grp_size = ((ZU(1) << (LG_QUANTUM + (LG_SIZE_CLASS_GROUP-1))) << grp) & grp_size_mask; size_t shift = (grp == 0) ? 1 : grp; size_t lg_delta = shift + (LG_QUANTUM-1); size_t mod_size = (mod+1) << lg_delta; size_t usize = grp_size + mod_size; return (usize); } } JEMALLOC_ALWAYS_INLINE size_t small_bin2size_lookup(size_t binind) { assert(binind < NBINS); { size_t ret = ((size_t)(small_bin2size_tab[binind])); assert(ret == small_bin2size_compute(binind)); return (ret); } } JEMALLOC_ALWAYS_INLINE size_t small_bin2size(size_t binind) { return (small_bin2size_lookup(binind)); } JEMALLOC_ALWAYS_INLINE size_t small_s2u_compute(size_t size) { #if (NTBINS > 0) if (size <= (ZU(1) << LG_TINY_MAXCLASS)) { size_t lg_tmin = LG_TINY_MAXCLASS - NTBINS + 1; size_t lg_ceil = lg_floor(pow2_ceil(size)); return (lg_ceil < lg_tmin ? (ZU(1) << lg_tmin) : (ZU(1) << lg_ceil)); } else #endif { size_t x = lg_floor((size<<1)-1); size_t lg_delta = (x < LG_SIZE_CLASS_GROUP + LG_QUANTUM + 1) ? 
LG_QUANTUM : x - LG_SIZE_CLASS_GROUP - 1; size_t delta = ZU(1) << lg_delta; size_t delta_mask = delta - 1; size_t usize = (size + delta_mask) & ~delta_mask; return (usize); } } JEMALLOC_ALWAYS_INLINE size_t small_s2u_lookup(size_t size) { size_t ret = (small_bin2size(small_size2bin(size))); assert(ret == small_s2u_compute(size)); return (ret); } JEMALLOC_ALWAYS_INLINE size_t small_s2u(size_t size) { assert(size > 0); if (size <= LOOKUP_MAXCLASS) return (small_s2u_lookup(size)); else return (small_s2u_compute(size)); } # endif /* JEMALLOC_ARENA_INLINE_A */ # ifdef JEMALLOC_ARENA_INLINE_B JEMALLOC_ALWAYS_INLINE size_t arena_mapelm_to_pageind(arena_chunk_map_t *mapelm) { uintptr_t map_offset = CHUNK_ADDR2OFFSET(mapelm) - offsetof(arena_chunk_t, map); return ((map_offset / sizeof(arena_chunk_map_t)) + map_bias); } JEMALLOC_ALWAYS_INLINE arena_chunk_map_t * arena_mapp_get(arena_chunk_t *chunk, size_t pageind) { assert(pageind >= map_bias); assert(pageind < chunk_npages); return (&chunk->map[pageind-map_bias]); } JEMALLOC_ALWAYS_INLINE size_t * arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind) { return (&arena_mapp_get(chunk, pageind)->bits); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbitsp_read(size_t *mapbitsp) { return (*mapbitsp); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_get(arena_chunk_t *chunk, size_t pageind) { return (arena_mapbitsp_read(arena_mapbitsp_get(chunk, pageind))); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); return (mapbits & ~PAGE_MASK); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)); return (mapbits & ~PAGE_MASK); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == CHUNK_MAP_ALLOCATED); return (mapbits >> LG_PAGE); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; size_t binind; mapbits = arena_mapbits_get(chunk, pageind); binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT; assert(binind < NBINS || binind == BININD_INVALID); return (binind); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); return (mapbits & CHUNK_MAP_DIRTY); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); return (mapbits & CHUNK_MAP_UNZEROED); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); return (mapbits & CHUNK_MAP_LARGE); } JEMALLOC_ALWAYS_INLINE size_t arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind) { size_t mapbits; mapbits = arena_mapbits_get(chunk, pageind); return (mapbits & CHUNK_MAP_ALLOCATED); } JEMALLOC_ALWAYS_INLINE void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits) { *mapbitsp = mapbits; } JEMALLOC_ALWAYS_INLINE void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t 
size, size_t flags) { size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); assert((size & PAGE_MASK) == 0); assert((flags & ~CHUNK_MAP_FLAGS_MASK) == 0); assert((flags & (CHUNK_MAP_DIRTY|CHUNK_MAP_UNZEROED)) == flags); arena_mapbitsp_write(mapbitsp, size | CHUNK_MAP_BININD_INVALID | flags); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind, size_t size) { size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); assert((size & PAGE_MASK) == 0); assert((mapbits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); arena_mapbitsp_write(mapbitsp, size | (mapbits & PAGE_MASK)); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, size_t size, size_t flags) { size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); size_t unzeroed; assert((size & PAGE_MASK) == 0); assert((flags & CHUNK_MAP_DIRTY) == flags); unzeroed = mapbits & CHUNK_MAP_UNZEROED; /* Preserve unzeroed. */ arena_mapbitsp_write(mapbitsp, size | CHUNK_MAP_BININD_INVALID | flags | unzeroed | CHUNK_MAP_LARGE | CHUNK_MAP_ALLOCATED); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind, size_t binind) { size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); assert(binind <= BININD_INVALID); assert(arena_mapbits_large_size_get(chunk, pageind) == PAGE); arena_mapbitsp_write(mapbitsp, (mapbits & ~CHUNK_MAP_BININD_MASK) | (binind << CHUNK_MAP_BININD_SHIFT)); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind, size_t binind, size_t flags) { size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); size_t unzeroed; assert(binind < BININD_INVALID); assert(pageind - runind >= map_bias); assert((flags & CHUNK_MAP_DIRTY) == flags); unzeroed = mapbits & CHUNK_MAP_UNZEROED; /* Preserve unzeroed. 
*/ arena_mapbitsp_write(mapbitsp, (runind << LG_PAGE) | (binind << CHUNK_MAP_BININD_SHIFT) | flags | unzeroed | CHUNK_MAP_ALLOCATED); } JEMALLOC_ALWAYS_INLINE void arena_mapbits_unzeroed_set(arena_chunk_t *chunk, size_t pageind, size_t unzeroed) { size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind); size_t mapbits = arena_mapbitsp_read(mapbitsp); arena_mapbitsp_write(mapbitsp, (mapbits & ~CHUNK_MAP_UNZEROED) | unzeroed); } JEMALLOC_INLINE bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes) { cassert(config_prof); assert(prof_interval != 0); arena->prof_accumbytes += accumbytes; if (arena->prof_accumbytes >= prof_interval) { arena->prof_accumbytes -= prof_interval; return (true); } return (false); } JEMALLOC_INLINE bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes) { cassert(config_prof); if (prof_interval == 0) return (false); return (arena_prof_accum_impl(arena, accumbytes)); } JEMALLOC_INLINE bool arena_prof_accum(arena_t *arena, uint64_t accumbytes) { cassert(config_prof); if (prof_interval == 0) return (false); { bool ret; malloc_mutex_lock(&arena->lock); ret = arena_prof_accum_impl(arena, accumbytes); malloc_mutex_unlock(&arena->lock); return (ret); } } JEMALLOC_ALWAYS_INLINE size_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits) { size_t binind; binind = (mapbits & CHUNK_MAP_BININD_MASK) >> CHUNK_MAP_BININD_SHIFT; if (config_debug) { arena_chunk_t *chunk; arena_t *arena; size_t pageind; size_t actual_mapbits; arena_run_t *run; arena_bin_t *bin; size_t actual_binind; arena_bin_info_t *bin_info; assert(binind != BININD_INVALID); assert(binind < NBINS); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); arena = chunk->arena; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; actual_mapbits = arena_mapbits_get(chunk, pageind); assert(mapbits == actual_mapbits); assert(arena_mapbits_large_get(chunk, pageind) == 0); assert(arena_mapbits_allocated_get(chunk, pageind) != 0); run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)((pageind - (actual_mapbits >> LG_PAGE)) << LG_PAGE)); bin = run->bin; actual_binind = bin - arena->bins; assert(binind == actual_binind); bin_info = &arena_bin_info[actual_binind]; assert(((uintptr_t)ptr - ((uintptr_t)run + (uintptr_t)bin_info->reg0_offset)) % bin_info->reg_interval == 0); } return (binind); } # endif /* JEMALLOC_ARENA_INLINE_B */ # ifdef JEMALLOC_ARENA_INLINE_C JEMALLOC_INLINE size_t arena_bin_index(arena_t *arena, arena_bin_t *bin) { size_t binind = bin - arena->bins; assert(binind < NBINS); return (binind); } JEMALLOC_INLINE unsigned arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr) { unsigned shift, diff, regind; size_t interval; /* * Freeing a pointer lower than region zero can cause assertion * failure. */ assert((uintptr_t)ptr >= (uintptr_t)run + (uintptr_t)bin_info->reg0_offset); /* * Avoid doing division with a variable divisor if possible. Using * actual division here can reduce allocator throughput by over 20%! */ diff = (unsigned)((uintptr_t)ptr - (uintptr_t)run - bin_info->reg0_offset); /* Rescale (factor powers of 2 out of the numerator and denominator). */ interval = bin_info->reg_interval; shift = jemalloc_ffs((int)interval) - 1; diff >>= shift; interval >>= shift; if (interval == 1) { /* The divisor was a power of 2. */ regind = diff; } else { /* * To divide by a number D that is not a power of two we * multiply by (2^21 / D) and then right shift by 21 positions. 
* * X / D * * becomes * * (X * interval_invs[D - 3]) >> SIZE_INV_SHIFT * * We can omit the first three elements, because we never * divide by 0, and 1 and 2 are both powers of two, which are * handled above. */ #define SIZE_INV_SHIFT ((sizeof(unsigned) << 3) - LG_RUN_MAXREGS) #define SIZE_INV(s) (((1U << SIZE_INV_SHIFT) / (s)) + 1) static const unsigned interval_invs[] = { SIZE_INV(3), SIZE_INV(4), SIZE_INV(5), SIZE_INV(6), SIZE_INV(7), SIZE_INV(8), SIZE_INV(9), SIZE_INV(10), SIZE_INV(11), SIZE_INV(12), SIZE_INV(13), SIZE_INV(14), SIZE_INV(15), SIZE_INV(16), SIZE_INV(17), SIZE_INV(18), SIZE_INV(19), SIZE_INV(20), SIZE_INV(21), SIZE_INV(22), SIZE_INV(23), SIZE_INV(24), SIZE_INV(25), SIZE_INV(26), SIZE_INV(27), SIZE_INV(28), SIZE_INV(29), SIZE_INV(30), SIZE_INV(31) }; if (interval <= ((sizeof(interval_invs) / sizeof(unsigned)) + 2)) { regind = (diff * interval_invs[interval - 3]) >> SIZE_INV_SHIFT; } else regind = diff / (unsigned)interval; #undef SIZE_INV #undef SIZE_INV_SHIFT } assert(diff == regind * interval); assert(regind < bin_info->nregs); return (regind); } JEMALLOC_INLINE prof_ctx_t * arena_prof_ctx_get(const void *ptr) { prof_ctx_t *ret; arena_chunk_t *chunk; size_t pageind, mapbits; cassert(config_prof); assert(ptr != NULL); assert(CHUNK_ADDR2BASE(ptr) != ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; mapbits = arena_mapbits_get(chunk, pageind); assert((mapbits & CHUNK_MAP_ALLOCATED) != 0); if ((mapbits & CHUNK_MAP_LARGE) == 0) ret = (prof_ctx_t *)(uintptr_t)1U; else ret = arena_mapp_get(chunk, pageind)->prof_ctx; return (ret); } JEMALLOC_INLINE void arena_prof_ctx_set(const void *ptr, prof_ctx_t *ctx) { arena_chunk_t *chunk; size_t pageind; cassert(config_prof); assert(ptr != NULL); assert(CHUNK_ADDR2BASE(ptr) != ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; assert(arena_mapbits_allocated_get(chunk, pageind) != 0); if (arena_mapbits_large_get(chunk, pageind) != 0) arena_mapp_get(chunk, pageind)->prof_ctx = ctx; } JEMALLOC_ALWAYS_INLINE void * arena_malloc(arena_t *arena, size_t size, bool zero, bool try_tcache) { tcache_t *tcache; pool_t *pool = arena->pool; assert(size != 0); assert(size <= arena_maxclass); if (size <= SMALL_MAXCLASS) { if (try_tcache && (tcache = tcache_get(pool, true)) != NULL) return (tcache_alloc_small(tcache, size, zero)); else { return (arena_malloc_small(choose_arena(arena), size, zero)); } } else { /* * Initialize tcache after checking size in order to avoid * infinite recursion during tcache initialization. */ if (try_tcache && size <= tcache_maxclass && (tcache = tcache_get(pool, true)) != NULL) return (tcache_alloc_large(tcache, size, zero)); else { return (arena_malloc_large(choose_arena(arena), size, zero)); } } } /* Return the size of the allocation pointed to by ptr. */ JEMALLOC_ALWAYS_INLINE size_t arena_salloc(const void *ptr, bool demote) { size_t ret; arena_chunk_t *chunk; size_t pageind, binind; assert(ptr != NULL); assert(CHUNK_ADDR2BASE(ptr) != ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; assert(arena_mapbits_allocated_get(chunk, pageind) != 0); binind = arena_mapbits_binind_get(chunk, pageind); if (binind == BININD_INVALID || (config_prof && demote == false && arena_mapbits_large_get(chunk, pageind) != 0)) { /* * Large allocation. 
In the common case (demote == true), and * as this is an inline function, most callers will only end up * looking at binind to determine that ptr is a small * allocation. */ assert(((uintptr_t)ptr & PAGE_MASK) == 0); ret = arena_mapbits_large_size_get(chunk, pageind); assert(ret != 0); assert(pageind + (ret>>LG_PAGE) <= chunk_npages); assert(ret == PAGE || arena_mapbits_large_size_get(chunk, pageind+(ret>>LG_PAGE)-1) == 0); assert(binind == arena_mapbits_binind_get(chunk, pageind+(ret>>LG_PAGE)-1)); assert(arena_mapbits_dirty_get(chunk, pageind) == arena_mapbits_dirty_get(chunk, pageind+(ret>>LG_PAGE)-1)); } else { /* Small allocation (possibly promoted to a large object). */ assert(arena_mapbits_large_get(chunk, pageind) != 0 || arena_ptr_small_binind_get(ptr, arena_mapbits_get(chunk, pageind)) == binind); ret = small_bin2size(binind); } return (ret); } JEMALLOC_ALWAYS_INLINE void arena_dalloc(arena_chunk_t *chunk, void *ptr, bool try_tcache) { size_t pageind, mapbits; tcache_t *tcache; assert(ptr != NULL); assert(CHUNK_ADDR2BASE(ptr) != ptr); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; mapbits = arena_mapbits_get(chunk, pageind); assert(arena_mapbits_allocated_get(chunk, pageind) != 0); if ((mapbits & CHUNK_MAP_LARGE) == 0) { /* Small allocation. */ if (try_tcache && (tcache = tcache_get(chunk->arena->pool, false)) != NULL) { size_t binind; binind = arena_ptr_small_binind_get(ptr, mapbits); tcache_dalloc_small(tcache, ptr, binind); } else arena_dalloc_small(chunk->arena, chunk, ptr, pageind); } else { size_t size = arena_mapbits_large_size_get(chunk, pageind); assert(((uintptr_t)ptr & PAGE_MASK) == 0); if (try_tcache && size <= tcache_maxclass && (tcache = tcache_get(chunk->arena->pool, false)) != NULL) { tcache_dalloc_large(tcache, ptr, size); } else arena_dalloc_large(chunk->arena, chunk, ptr); } } # endif /* JEMALLOC_ARENA_INLINE_C */ #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/atomic.h000066400000000000000000000160461361505074100233660ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #define atomic_read_uint64(p) atomic_add_uint64(p, 0) #define atomic_read_uint32(p) atomic_add_uint32(p, 0) #define atomic_read_z(p) atomic_add_z(p, 0) #define atomic_read_u(p) atomic_add_u(p, 0) #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x); uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x); uint32_t atomic_add_uint32(uint32_t *p, uint32_t x); uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x); size_t atomic_add_z(size_t *p, size_t x); size_t atomic_sub_z(size_t *p, size_t x); unsigned atomic_add_u(unsigned *p, unsigned x); unsigned atomic_sub_u(unsigned *p, unsigned x); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ATOMIC_C_)) /******************************************************************************/ /* 64-bit operations. 
*/ #if (LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3) # ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 JEMALLOC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x) { return (__sync_add_and_fetch(p, x)); } JEMALLOC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x) { return (__sync_sub_and_fetch(p, x)); } #elif (defined(_MSC_VER)) JEMALLOC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x) { return (InterlockedExchangeAdd64(p, x)); } JEMALLOC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x) { return (InterlockedExchangeAdd64(p, -((int64_t)x))); } #elif (defined(JEMALLOC_OSATOMIC)) JEMALLOC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x) { return (OSAtomicAdd64((int64_t)x, (int64_t *)p)); } JEMALLOC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x) { return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p)); } # elif (defined(__amd64__) || defined(__x86_64__)) JEMALLOC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x) { asm volatile ( "lock; xaddq %0, %1;" : "+r" (x), "=m" (*p) /* Outputs. */ : "m" (*p) /* Inputs. */ ); return (x); } JEMALLOC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x) { x = (uint64_t)(-(int64_t)x); asm volatile ( "lock; xaddq %0, %1;" : "+r" (x), "=m" (*p) /* Outputs. */ : "m" (*p) /* Inputs. */ ); return (x); } # elif (defined(JEMALLOC_ATOMIC9)) JEMALLOC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x) { /* * atomic_fetchadd_64() doesn't exist, but we only ever use this * function on LP64 systems, so atomic_fetchadd_long() will do. */ assert(sizeof(uint64_t) == sizeof(unsigned long)); return (atomic_fetchadd_long(p, (unsigned long)x) + x); } JEMALLOC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x) { assert(sizeof(uint64_t) == sizeof(unsigned long)); return (atomic_fetchadd_long(p, (unsigned long)(-(long)x)) - x); } # elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8)) JEMALLOC_INLINE uint64_t atomic_add_uint64(uint64_t *p, uint64_t x) { return (__sync_add_and_fetch(p, x)); } JEMALLOC_INLINE uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x) { return (__sync_sub_and_fetch(p, x)); } # else # error "Missing implementation for 64-bit atomic operations" # endif #endif /******************************************************************************/ /* 32-bit operations. */ #ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 JEMALLOC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x) { return (__sync_add_and_fetch(p, x)); } JEMALLOC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x) { return (__sync_sub_and_fetch(p, x)); } #elif (defined(_MSC_VER)) JEMALLOC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x) { return (InterlockedExchangeAdd(p, x)); } JEMALLOC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x) { return (InterlockedExchangeAdd(p, -((int32_t)x))); } #elif (defined(JEMALLOC_OSATOMIC)) JEMALLOC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x) { return (OSAtomicAdd32((int32_t)x, (int32_t *)p)); } JEMALLOC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x) { return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p)); } #elif (defined(__i386__) || defined(__amd64__) || defined(__x86_64__)) JEMALLOC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x) { asm volatile ( "lock; xaddl %0, %1;" : "+r" (x), "=m" (*p) /* Outputs. */ : "m" (*p) /* Inputs. */ ); return (x); } JEMALLOC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x) { x = (uint32_t)(-(int32_t)x); asm volatile ( "lock; xaddl %0, %1;" : "+r" (x), "=m" (*p) /* Outputs. 
*/ : "m" (*p) /* Inputs. */ ); return (x); } #elif (defined(JEMALLOC_ATOMIC9)) JEMALLOC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x) { return (atomic_fetchadd_32(p, x) + x); } JEMALLOC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x) { return (atomic_fetchadd_32(p, (uint32_t)(-(int32_t)x)) - x); } #elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4)) JEMALLOC_INLINE uint32_t atomic_add_uint32(uint32_t *p, uint32_t x) { return (__sync_add_and_fetch(p, x)); } JEMALLOC_INLINE uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x) { return (__sync_sub_and_fetch(p, x)); } #else # error "Missing implementation for 32-bit atomic operations" #endif /******************************************************************************/ /* size_t operations. */ JEMALLOC_INLINE size_t atomic_add_z(size_t *p, size_t x) { #if (LG_SIZEOF_PTR == 3) return ((size_t)atomic_add_uint64((uint64_t *)p, (uint64_t)x)); #elif (LG_SIZEOF_PTR == 2) return ((size_t)atomic_add_uint32((uint32_t *)p, (uint32_t)x)); #endif } JEMALLOC_INLINE size_t atomic_sub_z(size_t *p, size_t x) { #if (LG_SIZEOF_PTR == 3) return ((size_t)atomic_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x))); #elif (LG_SIZEOF_PTR == 2) return ((size_t)atomic_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x))); #endif } /******************************************************************************/ /* unsigned operations. */ JEMALLOC_INLINE unsigned atomic_add_u(unsigned *p, unsigned x) { #if (LG_SIZEOF_INT == 3) return ((unsigned)atomic_add_uint64((uint64_t *)p, (uint64_t)x)); #elif (LG_SIZEOF_INT == 2) return ((unsigned)atomic_add_uint32((uint32_t *)p, (uint32_t)x)); #endif } JEMALLOC_INLINE unsigned atomic_sub_u(unsigned *p, unsigned x) { #if (LG_SIZEOF_INT == 3) return ((unsigned)atomic_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x))); #elif (LG_SIZEOF_INT == 2) return ((unsigned)atomic_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x))); #endif } /******************************************************************************/ #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/base.h000066400000000000000000000020661361505074100230210ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void *base_alloc(pool_t *pool, size_t size); void *base_calloc(pool_t *pool, size_t number, size_t size); extent_node_t *base_node_alloc(pool_t *pool); void base_node_dalloc(pool_t *pool, extent_node_t *node); size_t base_node_prealloc(pool_t *pool, size_t number); bool base_boot(pool_t *pool); bool base_init(pool_t *pool); void base_prefork(pool_t *pool); void base_postfork_parent(pool_t *pool); void base_postfork_child(pool_t *pool); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ 
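/*
 * Editor's illustrative sketch (not part of the original vmem/jemalloc
 * sources).  It demonstrates, in isolation, the reciprocal-multiplication
 * trick that arena_run_regind() above uses instead of dividing by a
 * variable region interval: precompute ((1 << 21) / d) + 1 once, then
 * replace each division by a multiply and a right shift by 21, the same
 * shift the comment in arena_run_regind() derives from SIZE_INV_SHIFT.
 * The divisor of 48, the loop bound and the DEMO_* names are assumptions
 * made only for this demo.
 */
#include <assert.h>
#include <stdio.h>

#define DEMO_INV_SHIFT 21
#define DEMO_INV(d) (((1U << DEMO_INV_SHIFT) / (d)) + 1)

int
main(void)
{
	unsigned d = 48;		/* assumed region interval (divisor) */
	unsigned inv = DEMO_INV(d);	/* precomputed reciprocal */
	unsigned k;

	for (k = 0; k < 1024; k++) {
		unsigned diff = k * d;	/* region offsets are multiples of d */
		/* Multiply by the reciprocal and shift instead of dividing. */
		unsigned regind = (diff * inv) >> DEMO_INV_SHIFT;

		assert(regind == diff / d);
	}
	/*
	 * The "+ 1" in DEMO_INV() matters: with a plainly truncated
	 * reciprocal the product would round one region index too low.
	 */
	printf("reciprocal division matched for divisor %u\n", d);
	return (0);
}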
vmem-1.8/src/jemalloc/include/jemalloc/internal/bitmap.h000066400000000000000000000121701361505074100233600ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* Maximum bitmap bit count is 2^LG_BITMAP_MAXBITS. */ #define LG_BITMAP_MAXBITS LG_RUN_MAXREGS typedef struct bitmap_level_s bitmap_level_t; typedef struct bitmap_info_s bitmap_info_t; typedef unsigned long bitmap_t; #define LG_SIZEOF_BITMAP LG_SIZEOF_LONG /* Number of bits per group. */ #define LG_BITMAP_GROUP_NBITS (LG_SIZEOF_BITMAP + 3) #define BITMAP_GROUP_NBITS (ZU(1) << LG_BITMAP_GROUP_NBITS) #define BITMAP_GROUP_NBITS_MASK (BITMAP_GROUP_NBITS-1) /* Maximum number of levels possible. */ #define BITMAP_MAX_LEVELS \ (LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \ + !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct bitmap_level_s { /* Offset of this level's groups within the array of groups. */ size_t group_offset; }; struct bitmap_info_s { /* Logical number of bits in bitmap (stored at bottom level). */ size_t nbits; /* Number of levels necessary for nbits. */ unsigned nlevels; /* * Only the first (nlevels+1) elements are used, and levels are ordered * bottom to top (e.g. the bottom level is stored in levels[0]). */ bitmap_level_t levels[BITMAP_MAX_LEVELS+1]; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void bitmap_info_init(bitmap_info_t *binfo, size_t nbits); size_t bitmap_info_ngroups(const bitmap_info_t *binfo); size_t bitmap_size(size_t nbits); void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE bool bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo); bool bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit); void bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit); size_t bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo); void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_BITMAP_C_)) JEMALLOC_INLINE bool bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo) { size_t rgoff = binfo->levels[binfo->nlevels].group_offset - 1; bitmap_t rg = bitmap[rgoff]; /* The bitmap is full iff the root group is 0. */ return (rg == 0); } JEMALLOC_INLINE bool bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) { size_t goff; bitmap_t g; assert(bit < binfo->nbits); goff = bit >> LG_BITMAP_GROUP_NBITS; g = bitmap[goff]; return (!(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)))); } JEMALLOC_INLINE void bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) { size_t goff; bitmap_t *gp; bitmap_t g; assert(bit < binfo->nbits); assert(bitmap_get(bitmap, binfo, bit) == false); goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[goff]; g = *gp; assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; assert(bitmap_get(bitmap, binfo, bit)); /* Propagate group state transitions up the tree. 
*/ if (g == 0) { unsigned i; for (i = 1; i < binfo->nlevels; i++) { bit = goff; goff = bit >> LG_BITMAP_GROUP_NBITS; if (bitmap != NULL) gp = &bitmap[binfo->levels[i].group_offset + goff]; g = *gp; assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; if (g != 0) break; } } } /* sfu: set first unset. */ JEMALLOC_INLINE size_t bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo) { size_t bit; bitmap_t g; unsigned i; assert(bitmap_full(bitmap, binfo) == false); i = binfo->nlevels - 1; g = bitmap[binfo->levels[i].group_offset]; bit = jemalloc_ffsl(g) - 1; while (i > 0) { i--; g = bitmap[binfo->levels[i].group_offset + bit]; bit = (bit << LG_BITMAP_GROUP_NBITS) + (jemalloc_ffsl(g) - 1); } bitmap_set(bitmap, binfo, bit); return (bit); } JEMALLOC_INLINE void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) { size_t goff; bitmap_t *gp; bitmap_t g; bool propagate; assert(bit < binfo->nbits); assert(bitmap_get(bitmap, binfo, bit)); goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[goff]; g = *gp; propagate = (g == 0); assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))) == 0); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; assert(bitmap_get(bitmap, binfo, bit) == false); /* Propagate group state transitions up the tree. */ if (propagate) { unsigned i; for (i = 1; i < binfo->nlevels; i++) { bit = goff; goff = bit >> LG_BITMAP_GROUP_NBITS; gp = &bitmap[binfo->levels[i].group_offset + goff]; g = *gp; propagate = (g == 0); assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))) == 0); g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK); *gp = g; if (propagate == false) break; } } } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/chunk.h000066400000000000000000000046721361505074100232240ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* * Size and alignment of memory chunks that are allocated by the OS's virtual * memory system. */ #define LG_CHUNK_DEFAULT 22 /* Return the chunk address for allocation address a. */ #define CHUNK_ADDR2BASE(a) \ ((void *)((uintptr_t)(a) & ~chunksize_mask)) /* Return the chunk offset of address a. */ #define CHUNK_ADDR2OFFSET(a) \ ((size_t)((uintptr_t)(a) & chunksize_mask)) /* Return the smallest chunk multiple that is >= s. */ #define CHUNK_CEILING(s) \ (((s) + chunksize_mask) & ~chunksize_mask) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern size_t opt_lg_chunk; extern const char *opt_dss; extern size_t chunksize; extern size_t chunksize_mask; /* (chunksize - 1). */ extern size_t chunk_npages; extern size_t map_bias; /* Number of arena chunk header pages. */ extern size_t arena_maxclass; /* Max size class for arenas. 
*/ void *chunk_alloc_base(pool_t *pool, size_t size); void *chunk_alloc_arena(chunk_alloc_t *chunk_alloc, chunk_dalloc_t *chunk_dalloc, arena_t *arena, void *new_addr, size_t size, size_t alignment, bool *zero); void *chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero, unsigned arena_ind, pool_t *pool); void chunk_unmap(pool_t *pool, void *chunk, size_t size); bool chunk_dalloc_default(void *chunk, size_t size, unsigned arena_ind, pool_t *pool); void chunk_record(pool_t *pool, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, void *chunk, size_t size, bool zeroed); bool chunk_global_boot(); bool chunk_boot(pool_t *pool); bool chunk_init(pool_t *pool); void chunk_prefork0(pool_t *pool); void chunk_prefork1(pool_t *pool); void chunk_postfork_parent0(pool_t *pool); void chunk_postfork_parent1(pool_t *pool); void chunk_postfork_child0(pool_t *pool); void chunk_postfork_child1(pool_t *pool); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ #include "jemalloc/internal/chunk_dss.h" #include "jemalloc/internal/chunk_mmap.h" vmem-1.8/src/jemalloc/include/jemalloc/internal/chunk_dss.h000066400000000000000000000022541361505074100240670ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef enum { dss_prec_disabled = 0, dss_prec_primary = 1, dss_prec_secondary = 2, dss_prec_limit = 3 } dss_prec_t; #define DSS_PREC_DEFAULT dss_prec_secondary #define DSS_DEFAULT "secondary" #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS extern const char *dss_prec_names[]; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS dss_prec_t chunk_dss_prec_get(void); bool chunk_dss_prec_set(dss_prec_t dss_prec); void *chunk_alloc_dss(size_t size, size_t alignment, bool *zero); bool chunk_in_dss(void *chunk); bool chunk_dss_boot(void); void chunk_dss_prefork(void); void chunk_dss_postfork_parent(void); void chunk_dss_postfork_child(void); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/chunk_mmap.h000066400000000000000000000014631361505074100242310ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS bool pages_purge(void *addr, size_t length, bool file_mapped); void *chunk_alloc_mmap(size_t size, size_t alignment, bool *zero); bool chunk_dalloc_mmap(void *chunk, size_t size); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ 
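/*
 * Editor's illustrative sketch (not part of the original vmem/jemalloc
 * sources).  It shows how the CHUNK_ADDR2BASE(), CHUNK_ADDR2OFFSET() and
 * CHUNK_CEILING() macros declared in chunk.h above behave for the default
 * chunk size of 4 MiB (LG_CHUNK_DEFAULT == 22).  The sample address is an
 * arbitrary 64-bit assumption chosen only for this demo; the real
 * chunksize and chunksize_mask values are the runtime globals declared
 * above.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	size_t chunksize = (size_t)1 << 22;		/* 4 MiB */
	size_t chunksize_mask = chunksize - 1;
	uintptr_t a = (uintptr_t)0x7f3a12345678;	/* assumed address */

	/* CHUNK_ADDR2BASE(a): clear the low lg(chunksize) address bits. */
	uintptr_t base = a & ~(uintptr_t)chunksize_mask;
	/* CHUNK_ADDR2OFFSET(a): keep only the low lg(chunksize) bits. */
	size_t offset = (size_t)(a & chunksize_mask);
	/* CHUNK_CEILING(s): round a size up to the next chunk multiple. */
	size_t ceiling = (offset + chunksize_mask) & ~chunksize_mask;

	assert(base + offset == a);
	assert(ceiling == chunksize);	/* offset is nonzero and < chunksize */
	printf("base=%#zx offset=%#zx ceiling=%#zx\n",
	    (size_t)base, offset, ceiling);
	return (0);
}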
vmem-1.8/src/jemalloc/include/jemalloc/internal/ckh.h000066400000000000000000000051261361505074100226540ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct ckh_s ckh_t; typedef struct ckhc_s ckhc_t; /* Typedefs to allow easy function pointer passing. */ typedef void ckh_hash_t (const void *, size_t[2]); typedef bool ckh_keycomp_t (const void *, const void *); /* Maintain counters used to get an idea of performance. */ /* #define CKH_COUNT */ /* Print counter values in ckh_delete() (requires CKH_COUNT). */ /* #define CKH_VERBOSE */ /* * There are 2^LG_CKH_BUCKET_CELLS cells in each hash table bucket. Try to fit * one bucket per L1 cache line. */ #define LG_CKH_BUCKET_CELLS (LG_CACHELINE - LG_SIZEOF_PTR - 1) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS /* Hash table cell. */ struct ckhc_s { const void *key; const void *data; }; struct ckh_s { #ifdef CKH_COUNT /* Counters used to get an idea of performance. */ uint64_t ngrows; uint64_t nshrinks; uint64_t nshrinkfails; uint64_t ninserts; uint64_t nrelocs; #endif /* Used for pseudo-random number generation. */ #define CKH_A 1103515241 #define CKH_C 12347 uint32_t prng_state; /* Total number of items. */ size_t count; /* * Minimum and current number of hash table buckets. There are * 2^LG_CKH_BUCKET_CELLS cells per bucket. */ unsigned lg_minbuckets; unsigned lg_curbuckets; /* Hash and comparison functions. */ ckh_hash_t *hash; ckh_keycomp_t *keycomp; /* Hash table with 2^lg_curbuckets buckets. */ ckhc_t *tab; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS bool ckh_new(ckh_t *ckh, size_t minitems, ckh_hash_t *hash, ckh_keycomp_t *keycomp); void ckh_delete(ckh_t *ckh); size_t ckh_count(ckh_t *ckh); bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data); bool ckh_insert(ckh_t *ckh, const void *key, const void *data); bool ckh_remove(ckh_t *ckh, const void *searchkey, void **key, void **data); bool ckh_search(ckh_t *ckh, const void *seachkey, void **key, void **data); void ckh_string_hash(const void *key, size_t r_hash[2]); bool ckh_string_keycomp(const void *k1, const void *k2); void ckh_pointer_hash(const void *key, size_t r_hash[2]); bool ckh_pointer_keycomp(const void *k1, const void *k2); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/ctl.h000066400000000000000000000061441361505074100226720ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct ctl_node_s ctl_node_t; typedef struct ctl_named_node_s ctl_named_node_t; typedef struct ctl_indexed_node_s ctl_indexed_node_t; typedef struct ctl_arena_stats_s ctl_arena_stats_t; typedef struct ctl_stats_s ctl_stats_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct ctl_node_s { bool named; }; struct ctl_named_node_s { struct ctl_node_s node; const char *name; /* If (nchildren == 0), this is a terminal node. 
*/ unsigned nchildren; const ctl_node_t *children; int (*ctl)(const size_t *, size_t, void *, size_t *, void *, size_t); }; struct ctl_indexed_node_s { struct ctl_node_s node; const ctl_named_node_t *(*index)(const size_t *, size_t, size_t); }; struct ctl_arena_stats_s { bool initialized; unsigned nthreads; const char *dss; size_t pactive; size_t pdirty; arena_stats_t astats; /* Aggregate stats for small size classes, based on bin stats. */ size_t allocated_small; uint64_t nmalloc_small; uint64_t ndalloc_small; uint64_t nrequests_small; malloc_bin_stats_t bstats[NBINS]; malloc_large_stats_t *lstats; /* nlclasses elements. */ }; struct ctl_stats_s { struct { size_t current; /* stats_chunks.curchunks */ uint64_t total; /* stats_chunks.nchunks */ size_t high; /* stats_chunks.highchunks */ } chunks; unsigned narenas; ctl_arena_stats_t *arenas; /* (narenas + 1) elements. */ }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp); int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); bool ctl_boot(void); void ctl_prefork(void); void ctl_postfork_parent(void); void ctl_postfork_child(void); #define xmallctl(name, oldp, oldlenp, newp, newlen) do { \ if (je_mallctl(name, oldp, oldlenp, newp, newlen) \ != 0) { \ malloc_printf( \ ": Failure in xmallctl(\"%s\", ...)\n", \ name); \ abort(); \ } \ } while (0) #define xmallctlnametomib(name, mibp, miblenp) do { \ if (je_mallctlnametomib(name, mibp, miblenp) != 0) { \ malloc_printf(": Failure in " \ "xmallctlnametomib(\"%s\", ...)\n", name); \ abort(); \ } \ } while (0) #define xmallctlbymib(mib, miblen, oldp, oldlenp, newp, newlen) do { \ if (je_mallctlbymib(mib, miblen, oldp, oldlenp, newp, \ newlen) != 0) { \ malloc_write( \ ": Failure in xmallctlbymib()\n"); \ abort(); \ } \ } while (0) #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/extent.h000066400000000000000000000026001361505074100234100ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct extent_node_s extent_node_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS /* Tree of extents. */ struct extent_node_s { /* Linkage for the size/address-ordered tree. */ rb_node(extent_node_t) link_szad; /* Linkage for the address-ordered tree. */ rb_node(extent_node_t) link_ad; /* Profile counters, used for huge objects. */ prof_ctx_t *prof_ctx; /* Pointer to the extent that this tree node is responsible for. */ void *addr; /* Total region size. */ size_t size; /* Arena from which this extent came, if any */ arena_t *arena; /* True if zero-filled; used by chunk recycling code. 
*/ bool zeroed; }; typedef rb_tree(extent_node_t) extent_tree_t; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS rb_proto(, extent_tree_szad_, extent_tree_t, extent_node_t) rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t) #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/hash.h000066400000000000000000000174041361505074100230340ustar00rootroot00000000000000/* * The following hash function is based on MurmurHash3, placed into the public * domain by Austin Appleby. See http://code.google.com/p/smhasher/ for * details. */ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE uint32_t hash_x86_32(const void *key, int len, uint32_t seed); void hash_x86_128(const void *key, const int len, uint32_t seed, uint64_t r_out[2]); void hash_x64_128(const void *key, const int len, const uint32_t seed, uint64_t r_out[2]); void hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2]); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_HASH_C_)) /******************************************************************************/ /* Internal implementation. 
*/ JEMALLOC_INLINE uint32_t hash_rotl_32(uint32_t x, int8_t r) { return (x << r) | (x >> (32 - r)); } JEMALLOC_INLINE uint64_t hash_rotl_64(uint64_t x, int8_t r) { return (x << r) | (x >> (64 - r)); } JEMALLOC_INLINE uint32_t hash_get_block_32(const uint32_t *p, int i) { return (p[i]); } JEMALLOC_INLINE uint64_t hash_get_block_64(const uint64_t *p, int i) { return (p[i]); } JEMALLOC_INLINE uint32_t hash_fmix_32(uint32_t h) { h ^= h >> 16; h *= 0x85ebca6b; h ^= h >> 13; h *= 0xc2b2ae35; h ^= h >> 16; return (h); } JEMALLOC_INLINE uint64_t hash_fmix_64(uint64_t k) { k ^= k >> 33; k *= KQU(0xff51afd7ed558ccd); k ^= k >> 33; k *= KQU(0xc4ceb9fe1a85ec53); k ^= k >> 33; return (k); } JEMALLOC_INLINE uint32_t hash_x86_32(const void *key, int len, uint32_t seed) { const uint8_t *data = (const uint8_t *) key; const int nblocks = len / 4; uint32_t h1 = seed; const uint32_t c1 = 0xcc9e2d51; const uint32_t c2 = 0x1b873593; /* body */ { const uint32_t *blocks = (const uint32_t *) (data + nblocks*4); int i; for (i = -nblocks; i; i++) { uint32_t k1 = hash_get_block_32(blocks, i); k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; h1 = hash_rotl_32(h1, 13); h1 = h1*5 + 0xe6546b64; } } /* tail */ { const uint8_t *tail = (const uint8_t *) (data + nblocks*4); uint32_t k1 = 0; switch (len & 3) { case 3: k1 ^= tail[2] << 16; case 2: k1 ^= tail[1] << 8; case 1: k1 ^= tail[0]; k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; } } /* finalization */ h1 ^= len; h1 = hash_fmix_32(h1); return (h1); } UNUSED JEMALLOC_INLINE void hash_x86_128(const void *key, const int len, uint32_t seed, uint64_t r_out[2]) { const uint8_t * data = (const uint8_t *) key; const int nblocks = len / 16; uint32_t h1 = seed; uint32_t h2 = seed; uint32_t h3 = seed; uint32_t h4 = seed; const uint32_t c1 = 0x239b961b; const uint32_t c2 = 0xab0e9789; const uint32_t c3 = 0x38b34ae5; const uint32_t c4 = 0xa1e38b93; /* body */ { const uint32_t *blocks = (const uint32_t *) (data + nblocks*16); int i; for (i = -nblocks; i; i++) { uint32_t k1 = hash_get_block_32(blocks, i*4 + 0); uint32_t k2 = hash_get_block_32(blocks, i*4 + 1); uint32_t k3 = hash_get_block_32(blocks, i*4 + 2); uint32_t k4 = hash_get_block_32(blocks, i*4 + 3); k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; h1 = hash_rotl_32(h1, 19); h1 += h2; h1 = h1*5 + 0x561ccd1b; k2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2; h2 = hash_rotl_32(h2, 17); h2 += h3; h2 = h2*5 + 0x0bcaa747; k3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3; h3 = hash_rotl_32(h3, 15); h3 += h4; h3 = h3*5 + 0x96cd1c35; k4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4; h4 = hash_rotl_32(h4, 13); h4 += h1; h4 = h4*5 + 0x32ac3b17; } } /* tail */ { const uint8_t *tail = (const uint8_t *) (data + nblocks*16); uint32_t k1 = 0; uint32_t k2 = 0; uint32_t k3 = 0; uint32_t k4 = 0; switch (len & 15) { case 15: k4 ^= tail[14] << 16; case 14: k4 ^= tail[13] << 8; case 13: k4 ^= tail[12] << 0; k4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4; case 12: k3 ^= tail[11] << 24; case 11: k3 ^= tail[10] << 16; case 10: k3 ^= tail[ 9] << 8; case 9: k3 ^= tail[ 8] << 0; k3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3; case 8: k2 ^= tail[ 7] << 24; case 7: k2 ^= tail[ 6] << 16; case 6: k2 ^= tail[ 5] << 8; case 5: k2 ^= tail[ 4] << 0; k2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2; case 4: k1 ^= tail[ 3] << 24; case 3: k1 ^= tail[ 2] << 16; case 2: k1 ^= tail[ 1] << 8; case 1: k1 ^= tail[ 0] << 0; k1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1; } } /* finalization 
*/ h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len; h1 += h2; h1 += h3; h1 += h4; h2 += h1; h3 += h1; h4 += h1; h1 = hash_fmix_32(h1); h2 = hash_fmix_32(h2); h3 = hash_fmix_32(h3); h4 = hash_fmix_32(h4); h1 += h2; h1 += h3; h1 += h4; h2 += h1; h3 += h1; h4 += h1; r_out[0] = (((uint64_t) h2) << 32) | h1; r_out[1] = (((uint64_t) h4) << 32) | h3; } UNUSED JEMALLOC_INLINE void hash_x64_128(const void *key, const int len, const uint32_t seed, uint64_t r_out[2]) { const uint8_t *data = (const uint8_t *) key; const int nblocks = len / 16; uint64_t h1 = seed; uint64_t h2 = seed; const uint64_t c1 = KQU(0x87c37b91114253d5); const uint64_t c2 = KQU(0x4cf5ad432745937f); /* body */ { const uint64_t *blocks = (const uint64_t *) (data); int i; for (i = 0; i < nblocks; i++) { uint64_t k1 = hash_get_block_64(blocks, i*2 + 0); uint64_t k2 = hash_get_block_64(blocks, i*2 + 1); k1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1; h1 = hash_rotl_64(h1, 27); h1 += h2; h1 = h1*5 + 0x52dce729; k2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2; h2 = hash_rotl_64(h2, 31); h2 += h1; h2 = h2*5 + 0x38495ab5; } } /* tail */ { const uint8_t *tail = (const uint8_t*)(data + nblocks*16); uint64_t k1 = 0; uint64_t k2 = 0; switch (len & 15) { case 15: k2 ^= ((uint64_t)(tail[14])) << 48; case 14: k2 ^= ((uint64_t)(tail[13])) << 40; case 13: k2 ^= ((uint64_t)(tail[12])) << 32; case 12: k2 ^= ((uint64_t)(tail[11])) << 24; case 11: k2 ^= ((uint64_t)(tail[10])) << 16; case 10: k2 ^= ((uint64_t)(tail[ 9])) << 8; case 9: k2 ^= ((uint64_t)(tail[ 8])) << 0; k2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2; case 8: k1 ^= ((uint64_t)(tail[ 7])) << 56; case 7: k1 ^= ((uint64_t)(tail[ 6])) << 48; case 6: k1 ^= ((uint64_t)(tail[ 5])) << 40; case 5: k1 ^= ((uint64_t)(tail[ 4])) << 32; case 4: k1 ^= ((uint64_t)(tail[ 3])) << 24; case 3: k1 ^= ((uint64_t)(tail[ 2])) << 16; case 2: k1 ^= ((uint64_t)(tail[ 1])) << 8; case 1: k1 ^= ((uint64_t)(tail[ 0])) << 0; k1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1; } } /* finalization */ h1 ^= len; h2 ^= len; h1 += h2; h2 += h1; h1 = hash_fmix_64(h1); h2 = hash_fmix_64(h2); h1 += h2; h2 += h1; r_out[0] = h1; r_out[1] = h2; } /******************************************************************************/ /* API. 
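* hash() below selects a variant at compile time: hash_x64_128() on little-endian builds with 64-bit pointers (LG_SIZEOF_PTR == 3), hash_x86_128() otherwise, with the 128-bit result copied (and, on 32-bit platforms, truncated) into the two size_t output words.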
*/ JEMALLOC_INLINE void hash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2]) { #if (LG_SIZEOF_PTR == 3 && !defined(JEMALLOC_BIG_ENDIAN)) hash_x64_128(key, (int)len, seed, (uint64_t *)r_hash); #else uint64_t hashes[2]; hash_x86_128(key, len, seed, hashes); r_hash[0] = (size_t)hashes[0]; r_hash[1] = (size_t)hashes[1]; #endif } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/huge.h000066400000000000000000000030401361505074100230300ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void *huge_malloc(arena_t *arena, size_t size, bool zero); void *huge_palloc(arena_t *arena, size_t size, size_t alignment, bool zero); bool huge_ralloc_no_move(pool_t *pool, void *ptr, size_t oldsize, size_t size, size_t extra, bool zero); void *huge_ralloc(arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_dalloc); #ifdef JEMALLOC_JET typedef void (huge_dalloc_junk_t)(void *, size_t); extern huge_dalloc_junk_t *huge_dalloc_junk; #endif void huge_dalloc(pool_t *pool, void *ptr); size_t huge_salloc(const void *ptr); size_t huge_pool_salloc(pool_t *pool, const void *ptr); prof_ctx_t *huge_prof_ctx_get(const void *ptr); void huge_prof_ctx_set(const void *ptr, prof_ctx_t *ctx); bool huge_boot(pool_t *pool); bool huge_init(pool_t *pool); void huge_prefork(pool_t *pool); void huge_postfork_parent(pool_t *pool); void huge_postfork_child(pool_t *pool); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/jemalloc_internal.h.in000066400000000000000000000662461361505074100262100ustar00rootroot00000000000000#ifndef JEMALLOC_INTERNAL_H #define JEMALLOC_INTERNAL_H #include "jemalloc_internal_defs.h" #include "jemalloc/internal/jemalloc_internal_decls.h" #ifdef JEMALLOC_UTRACE #include #endif #define JEMALLOC_NO_DEMANGLE #ifdef JEMALLOC_JET # define JEMALLOC_N(n) jet_##n # include "jemalloc/internal/public_namespace.h" # define JEMALLOC_NO_RENAME # include "jemalloc/jemalloc@install_suffix@.h" # undef JEMALLOC_NO_RENAME #else # define JEMALLOC_N(n) @private_namespace@##n # include "jemalloc/jemalloc@install_suffix@.h" #endif #include "jemalloc/internal/private_namespace.h" static const bool config_debug = #ifdef JEMALLOC_DEBUG true #else false #endif ; static const bool have_dss = #ifdef JEMALLOC_DSS true #else false #endif ; static const bool config_fill = #ifdef JEMALLOC_FILL true #else false #endif ; static const bool config_lazy_lock = #ifdef JEMALLOC_LAZY_LOCK true #else false #endif ; static const bool config_prof = #ifdef JEMALLOC_PROF true #else false #endif ; static const bool config_prof_libgcc = #ifdef JEMALLOC_PROF_LIBGCC true #else false #endif ; static const bool config_prof_libunwind = #ifdef JEMALLOC_PROF_LIBUNWIND true #else false #endif ; static const bool config_munmap = #ifdef JEMALLOC_MUNMAP true #else false 
#endif ; static const bool config_stats = #ifdef JEMALLOC_STATS true #else false #endif ; static const bool config_tcache = #ifdef JEMALLOC_TCACHE true #else false #endif ; static const bool config_tls = #ifdef JEMALLOC_TLS true #else false #endif ; static const bool config_utrace = #ifdef JEMALLOC_UTRACE true #else false #endif ; static const bool config_valgrind = #ifdef JEMALLOC_VALGRIND true #else false #endif ; static const bool config_xmalloc = #ifdef JEMALLOC_XMALLOC true #else false #endif ; static const bool config_ivsalloc = #ifdef JEMALLOC_IVSALLOC true #else false #endif ; #ifdef JEMALLOC_ATOMIC9 #include #endif #if (defined(JEMALLOC_OSATOMIC) || defined(JEMALLOC_OSSPIN)) #include #endif #ifdef JEMALLOC_ZONE #include #include #include #include #endif #define RB_COMPACT #include "jemalloc/internal/rb.h" #include "jemalloc/internal/qr.h" #include "jemalloc/internal/ql.h" /* * jemalloc can conceptually be broken into components (arena, tcache, etc.), * but there are circular dependencies that cannot be broken without * substantial performance degradation. In order to reduce the effect on * visual code flow, read the header files in multiple passes, with one of the * following cpp variables defined during each pass: * * JEMALLOC_H_TYPES : Preprocessor-defined constants and psuedo-opaque data * types. * JEMALLOC_H_STRUCTS : Data structures. * JEMALLOC_H_EXTERNS : Extern data declarations and function prototypes. * JEMALLOC_H_INLINES : Inline functions. */ /******************************************************************************/ #define JEMALLOC_H_TYPES #include "jemalloc/internal/jemalloc_internal_macros.h" #define MALLOCX_LG_ALIGN_MASK ((int)0x3f) /* Smallest size class to support. */ #define LG_TINY_MIN 3 #define TINY_MIN (1U << LG_TINY_MIN) /* * Minimum alignment of allocations is 2^LG_QUANTUM bytes (ignoring tiny size * classes). */ #ifndef LG_QUANTUM # if (defined(__i386__) || defined(_M_IX86)) # define LG_QUANTUM 4 # endif # ifdef __ia64__ # define LG_QUANTUM 4 # endif # ifdef __alpha__ # define LG_QUANTUM 4 # endif # ifdef __sparc64__ # define LG_QUANTUM 4 # endif # if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64)) # define LG_QUANTUM 4 # endif # ifdef __arm__ # define LG_QUANTUM 3 # endif # ifdef __aarch64__ # define LG_QUANTUM 4 # endif # ifdef __hppa__ # define LG_QUANTUM 4 # endif # ifdef __mips__ # define LG_QUANTUM 3 # endif # ifdef __powerpc__ # define LG_QUANTUM 4 # endif # ifdef __s390__ # define LG_QUANTUM 4 # endif # ifdef __SH4__ # define LG_QUANTUM 4 # endif # ifdef __tile__ # define LG_QUANTUM 4 # endif # ifdef __le32__ # define LG_QUANTUM 4 # endif # ifndef LG_QUANTUM # error "No LG_QUANTUM definition for architecture; specify via CPPFLAGS" # endif #endif #define QUANTUM ((size_t)(1U << LG_QUANTUM)) #define QUANTUM_MASK (QUANTUM - 1) /* Return the smallest quantum multiple that is >= a. */ #define QUANTUM_CEILING(a) \ (((a) + QUANTUM_MASK) & ~QUANTUM_MASK) #define LONG ((size_t)(1U << LG_SIZEOF_LONG)) #define LONG_MASK (LONG - 1) /* Return the smallest long multiple that is >= a. */ #define LONG_CEILING(a) \ (((a) + LONG_MASK) & ~LONG_MASK) #define SIZEOF_PTR (1U << LG_SIZEOF_PTR) #define PTR_MASK (SIZEOF_PTR - 1) /* Return the smallest (void *) multiple that is >= a. */ #define PTR_CEILING(a) \ (((a) + PTR_MASK) & ~PTR_MASK) /* * Maximum size of L1 cache line. This is used to avoid cache line aliasing. * In addition, this controls the spacing of cacheline-spaced size classes. 
* * CACHELINE cannot be based on LG_CACHELINE because __declspec(align()) can * only handle raw constants. */ #define LG_CACHELINE 6 #define CACHELINE 64 #define CACHELINE_MASK (CACHELINE - 1) /* Return the smallest cacheline multiple that is >= s. */ #define CACHELINE_CEILING(s) \ (((s) + CACHELINE_MASK) & ~CACHELINE_MASK) /* Page size. STATIC_PAGE_SHIFT is determined by the configure script. */ #ifdef PAGE_MASK # undef PAGE_MASK #endif #define LG_PAGE STATIC_PAGE_SHIFT #define PAGE ((size_t)(1U << STATIC_PAGE_SHIFT)) #define PAGE_MASK ((size_t)(PAGE - 1)) /* Return the smallest pagesize multiple that is >= s. */ #define PAGE_CEILING(s) \ (((s) + PAGE_MASK) & ~PAGE_MASK) /* Return the nearest aligned address at or below a. */ #define ALIGNMENT_ADDR2BASE(a, alignment) \ ((void *)((uintptr_t)(a) & (-(alignment)))) /* Return the offset between a and the nearest aligned address at or below a. */ #define ALIGNMENT_ADDR2OFFSET(a, alignment) \ ((size_t)((uintptr_t)(a) & (alignment - 1))) /* Return the smallest alignment multiple that is >= s. */ #define ALIGNMENT_CEILING(s, alignment) \ (((s) + (alignment - 1)) & (-(alignment))) /* Declare a variable length array */ #if __STDC_VERSION__ < 199901L # ifdef _MSC_VER # include #ifndef alloca # define alloca _alloca #endif # else # ifdef JEMALLOC_HAS_ALLOCA_H # include # else # include # endif # endif # define VARIABLE_ARRAY(type, name, count) \ type *name = alloca(sizeof(type) * (count)) #else # define VARIABLE_ARRAY(type, name, count) type name[(count)] #endif #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/pool.h" #include "jemalloc/internal/vector.h" #undef JEMALLOC_H_TYPES /******************************************************************************/ #define JEMALLOC_H_STRUCTS #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/pool.h" #include "jemalloc/internal/vector.h" typedef struct { uint64_t allocated; uint64_t deallocated; } thread_allocated_t; /* * The JEMALLOC_ARG_CONCAT() 
wrapper is necessary to pass {0, 0} via a cpp macro * argument. */ #define THREAD_ALLOCATED_INITIALIZER JEMALLOC_ARG_CONCAT({0, 0}) #undef JEMALLOC_H_STRUCTS /******************************************************************************/ #define JEMALLOC_H_EXTERNS extern bool opt_abort; extern bool opt_junk; extern size_t opt_quarantine; extern bool opt_redzone; extern bool opt_utrace; extern bool opt_xmalloc; extern bool opt_zero; extern size_t opt_narenas; extern bool in_valgrind; /* Number of CPUs. */ extern unsigned ncpus; extern unsigned npools; extern unsigned npools_cnt; extern pool_t base_pool; extern pool_t **pools; extern malloc_mutex_t pools_lock; extern void *(*base_malloc_fn)(size_t); extern void (*base_free_fn)(void *); extern bool pools_shared_data_create(void); arena_t *arenas_extend(pool_t *pool, unsigned ind); bool arenas_tsd_extend(tsd_pool_t *tsd, unsigned len); void arenas_cleanup(void *arg); arena_t *choose_arena_hard(pool_t *pool); void jemalloc_prefork(void); void jemalloc_postfork_parent(void); void jemalloc_postfork_child(void); #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/pool.h" #include "jemalloc/internal/vector.h" #undef JEMALLOC_H_EXTERNS /******************************************************************************/ #define JEMALLOC_H_INLINES #include "jemalloc/internal/pool.h" #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" /* * Include arena.h the first time in order to provide inline functions for this * header's inlines. */ #define JEMALLOC_ARENA_INLINE_A #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_A #ifndef JEMALLOC_ENABLE_INLINE malloc_tsd_protos(JEMALLOC_ATTR(unused), arenas, tsd_pool_t) size_t s2u(size_t size); size_t sa2u(size_t size, size_t alignment); unsigned narenas_total_get(pool_t *pool); arena_t *choose_arena(arena_t *arena); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) /* * Map of pthread_self() --> arenas[???], used for selecting an arena to use * for allocations. */ malloc_tsd_externs(arenas, tsd_pool_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, arenas, tsd_pool_t, {0}, arenas_cleanup) /* * Check if the arena is dummy. 
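* A dummy arena is a caller-provided arena_t with ind set to ARENA_DUMMY_IND and only its pool field meaningful; it exists solely to carry a pool_t through the alloc/free entry points (see DUMMY_ARENA_INITIALIZE in pool.h).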
*/ JEMALLOC_ALWAYS_INLINE bool is_arena_dummy(arena_t *arena) { return (arena->ind == ARENA_DUMMY_IND); } /* * Compute usable size that would result from allocating an object with the * specified size. */ JEMALLOC_ALWAYS_INLINE size_t s2u(size_t size) { if (size <= SMALL_MAXCLASS) return (small_s2u(size)); if (size <= arena_maxclass) return (PAGE_CEILING(size)); return (CHUNK_CEILING(size)); } /* * Compute usable size that would result from allocating an object with the * specified size and alignment. */ JEMALLOC_ALWAYS_INLINE size_t sa2u(size_t size, size_t alignment) { size_t usize; assert(alignment != 0 && ((alignment - 1) & alignment) == 0); /* * Round size up to the nearest multiple of alignment. * * This done, we can take advantage of the fact that for each small * size class, every object is aligned at the smallest power of two * that is non-zero in the base two representation of the size. For * example: * * Size | Base 2 | Minimum alignment * -----+----------+------------------ * 96 | 1100000 | 32 * 144 | 10100000 | 32 * 192 | 11000000 | 64 */ usize = ALIGNMENT_CEILING(size, alignment); /* * (usize < size) protects against the combination of maximal * alignment and size greater than maximal alignment. */ if (usize < size) { /* size_t overflow. */ return (0); } if (usize <= arena_maxclass && alignment <= PAGE) { if (usize <= SMALL_MAXCLASS) return (small_s2u(usize)); return (PAGE_CEILING(usize)); } else { size_t run_size; /* * We can't achieve subpage alignment, so round up alignment * permanently; it makes later calculations simpler. */ alignment = PAGE_CEILING(alignment); usize = PAGE_CEILING(size); /* * (usize < size) protects against very large sizes within * PAGE of SIZE_T_MAX. * * (usize + alignment < usize) protects against the * combination of maximal alignment and usize large enough * to cause overflow. This is similar to the first overflow * check above, but it needs to be repeated due to the new * usize value, which may now be *equal* to maximal * alignment, whereas before we only detected overflow if the * original size was *greater* than maximal alignment. */ if (usize < size || usize + alignment < usize) { /* size_t overflow. */ return (0); } /* * Calculate the size of the over-size run that arena_palloc() * would need to allocate in order to guarantee the alignment. * If the run wouldn't fit within a chunk, round up to a huge * allocation size. */ run_size = usize + alignment - PAGE; if (run_size <= arena_maxclass) return (PAGE_CEILING(usize)); return (CHUNK_CEILING(usize)); } } JEMALLOC_INLINE unsigned narenas_total_get(pool_t *pool) { unsigned narenas; malloc_rwlock_rdlock(&pool->arenas_lock); narenas = pool->narenas_total; malloc_rwlock_unlock(&pool->arenas_lock); return (narenas); } /* * Choose an arena based on a per-thread value. * Arena pointer must be either a valid arena pointer or a dummy arena with * pool field filled. 
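* For a dummy arena the per-thread tsd_pool_t cache is used: its arrays are grown on demand for new pool ids, the cached arena is revalidated against the pool's seqno (pool ids may be reused), and choose_arena_hard() assigns an arena on a miss.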
*/ JEMALLOC_INLINE arena_t * choose_arena(arena_t *arena) { arena_t *ret; tsd_pool_t *tsd; pool_t *pool; if (!is_arena_dummy(arena)) return (arena); pool = arena->pool; tsd = arenas_tsd_get(); /* expand arenas array if necessary */ if ((tsd->npools <= pool->pool_id) && arenas_tsd_extend(tsd, pool->pool_id)) { return (NULL); } if ( (tsd->seqno[pool->pool_id] != pool->seqno) || (ret = tsd->arenas[pool->pool_id]) == NULL) { ret = choose_arena_hard(pool); assert(ret != NULL); } return (ret); } #endif #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/rtree.h" /* * Include arena.h the second and third times in order to resolve circular * dependencies with tcache.h. */ #define JEMALLOC_ARENA_INLINE_B #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_B #include "jemalloc/internal/tcache.h" #define JEMALLOC_ARENA_INLINE_C #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_C #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #ifndef JEMALLOC_ENABLE_INLINE void *imalloct(size_t size, bool try_tcache, arena_t *arena); void *imalloc(size_t size); void *pool_imalloc(pool_t *pool, size_t size); void *icalloct(size_t size, bool try_tcache, arena_t *arena); void *icalloc(size_t size); void *pool_icalloc(pool_t *pool, size_t size); void *ipalloct(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena); void *ipalloc(size_t usize, size_t alignment, bool zero); void *pool_ipalloc(pool_t *pool, size_t usize, size_t alignment, bool zero); size_t isalloc(const void *ptr, bool demote); size_t pool_isalloc(pool_t *pool, const void *ptr, bool demote); size_t ivsalloc(const void *ptr, bool demote); size_t u2rz(size_t usize); size_t p2rz(const void *ptr); void idalloct(void *ptr, bool try_tcache); void pool_idalloct(pool_t *pool, void *ptr, bool try_tcache); void idalloc(void *ptr); void iqalloct(void *ptr, bool try_tcache); void pool_iqalloct(pool_t *pool, void *ptr, bool try_tcache); void iqalloc(void *ptr); void *iralloct_realign(void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena); void *iralloct(void *ptr, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena); void *iralloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero); void *pool_iralloc(pool_t *pool, void *ptr, size_t size, size_t extra, size_t alignment, bool zero); bool ixalloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero); int msc_clz(unsigned int val); malloc_tsd_protos(JEMALLOC_ATTR(unused), thread_allocated, thread_allocated_t) #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) # ifdef _MSC_VER JEMALLOC_ALWAYS_INLINE int msc_clz(unsigned int val) { unsigned int res = 0; # if LG_SIZEOF_INT == 2 if (_BitScanReverse(&res, val)) { return 31 - res; } else { return 32; } # elif LG_SIZEOF_INT == 3 if (_BitScanReverse64(&res, val)) { return 63 - res; } else { return 64; } # else # error "Unsupported clz function for that size of int" # endif } #endif JEMALLOC_ALWAYS_INLINE void * imalloct(size_t size, bool try_tcache, arena_t *arena) { assert(size != 0); if (size <= arena_maxclass) return (arena_malloc(arena, size, false, try_tcache)); else return (huge_malloc(arena, size, false)); } JEMALLOC_ALWAYS_INLINE void * imalloc(size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (imalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE 
void * pool_imalloc(pool_t *pool, size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (imalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * icalloct(size_t size, bool try_tcache, arena_t *arena) { if (size <= arena_maxclass) return (arena_malloc(arena, size, true, try_tcache)); else return (huge_malloc(arena, size, true)); } JEMALLOC_ALWAYS_INLINE void * icalloc(size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (icalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * pool_icalloc(pool_t *pool, size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (icalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * ipalloct(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena) { void *ret; assert(usize != 0); assert(usize == sa2u(usize, alignment)); if (usize <= arena_maxclass && alignment <= PAGE) ret = arena_malloc(arena, usize, zero, try_tcache); else { if (usize <= arena_maxclass) { ret = arena_palloc(choose_arena(arena), usize, alignment, zero); } else if (alignment <= chunksize) ret = huge_malloc(arena, usize, zero); else ret = huge_palloc(arena, usize, alignment, zero); } assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret); return (ret); } JEMALLOC_ALWAYS_INLINE void * ipalloc(size_t usize, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (ipalloct(usize, alignment, zero, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * pool_ipalloc(pool_t *pool, size_t usize, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (ipalloct(usize, alignment, zero, true, &dummy)); } /* * Typical usage: * void *ptr = [...] * size_t sz = isalloc(ptr, config_prof); */ JEMALLOC_ALWAYS_INLINE size_t isalloc(const void *ptr, bool demote) { size_t ret; arena_chunk_t *chunk; assert(ptr != NULL); /* Demotion only makes sense if config_prof is true. */ assert(config_prof || demote == false); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) ret = arena_salloc(ptr, demote); else ret = huge_salloc(ptr); return (ret); } /* * Typical usage: * void *ptr = [...] * size_t sz = isalloc(ptr, config_prof); */ JEMALLOC_ALWAYS_INLINE size_t pool_isalloc(pool_t *pool, const void *ptr, bool demote) { size_t ret; arena_chunk_t *chunk; assert(ptr != NULL); /* Demotion only makes sense if config_prof is true. */ assert(config_prof || demote == false); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) ret = arena_salloc(ptr, demote); else ret = huge_pool_salloc(pool, ptr); return (ret); } JEMALLOC_ALWAYS_INLINE size_t ivsalloc(const void *ptr, bool demote) { size_t i; malloc_mutex_lock(&pools_lock); unsigned n = npools; for (i = 0; i < n; ++i) { pool_t *pool = pools[i]; if (pool == NULL) continue; /* Return 0 if ptr is not within a chunk managed by jemalloc. 
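* (the chunk base address is looked up in every live pool's chunks_rtree while pools_lock is held)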
*/ if (rtree_get(pool->chunks_rtree, (uintptr_t)CHUNK_ADDR2BASE(ptr)) != 0) break; } malloc_mutex_unlock(&pools_lock); if (i == n) return 0; return (isalloc(ptr, demote)); } JEMALLOC_INLINE size_t u2rz(size_t usize) { size_t ret; if (usize <= SMALL_MAXCLASS) { size_t binind = small_size2bin(usize); assert(binind < NBINS); ret = arena_bin_info[binind].redzone_size; } else ret = 0; return (ret); } JEMALLOC_INLINE size_t p2rz(const void *ptr) { size_t usize = isalloc(ptr, false); return (u2rz(usize)); } JEMALLOC_ALWAYS_INLINE void idalloct(void *ptr, bool try_tcache) { arena_chunk_t *chunk; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) arena_dalloc(chunk, ptr, try_tcache); else huge_dalloc(&base_pool, ptr); } JEMALLOC_ALWAYS_INLINE void pool_idalloct(pool_t *pool, void *ptr, bool try_tcache) { arena_chunk_t *chunk; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) arena_dalloc(chunk, ptr, try_tcache); else huge_dalloc(pool, ptr); } JEMALLOC_ALWAYS_INLINE void idalloc(void *ptr) { idalloct(ptr, true); } JEMALLOC_ALWAYS_INLINE void iqalloct(void *ptr, bool try_tcache) { if (config_fill && opt_quarantine) quarantine(ptr); else idalloct(ptr, try_tcache); } JEMALLOC_ALWAYS_INLINE void pool_iqalloct(pool_t *pool, void *ptr, bool try_tcache) { if (config_fill && opt_quarantine) quarantine(ptr); else pool_idalloct(pool, ptr, try_tcache); } JEMALLOC_ALWAYS_INLINE void iqalloc(void *ptr) { iqalloct(ptr, true); } JEMALLOC_ALWAYS_INLINE void * iralloct_realign(void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena) { void *p; size_t usize, copysize; usize = sa2u(size + extra, alignment); if (usize == 0) return (NULL); p = ipalloct(usize, alignment, zero, try_tcache_alloc, arena); if (p == NULL) { if (extra == 0) return (NULL); /* Try again, without extra this time. */ usize = sa2u(size, alignment); if (usize == 0) return (NULL); p = ipalloct(usize, alignment, zero, try_tcache_alloc, arena); if (p == NULL) return (NULL); } /* * Copy at most size bytes (not size+extra), since the caller has no * expectation that the extra bytes will be reliably preserved. */ copysize = (size < oldsize) ? size : oldsize; memcpy(p, ptr, copysize); pool_iqalloct(arena->pool, ptr, try_tcache_dalloc); return (p); } JEMALLOC_ALWAYS_INLINE void * iralloct(void *ptr, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena) { size_t oldsize; assert(ptr != NULL); assert(size != 0); oldsize = isalloc(ptr, config_prof); if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1)) != 0) { /* * Existing object alignment is inadequate; allocate new space * and copy. */ return (iralloct_realign(ptr, oldsize, size, extra, alignment, zero, try_tcache_alloc, try_tcache_dalloc, arena)); } if (size + extra <= arena_maxclass) { void *ret; ret = arena_ralloc(arena, ptr, oldsize, size, extra, alignment, zero, try_tcache_alloc, try_tcache_dalloc); if ((ret != NULL) || (size + extra > oldsize)) return (ret); if (oldsize > chunksize) { size_t old_usize JEMALLOC_CC_SILENCE_INIT(0); UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); if (config_valgrind && in_valgrind) { old_usize = isalloc(ptr, config_prof); old_rzsize = config_prof ? 
p2rz(ptr) : u2rz(old_usize); } ret = huge_ralloc(arena, ptr, oldsize, chunksize, 0, alignment, zero, try_tcache_dalloc); JEMALLOC_VALGRIND_REALLOC(true, ret, s2u(chunksize), true, ptr, old_usize, old_rzsize, true, false); if (ret != NULL) { /* Now, it should succeed... */ return arena_ralloc(arena, ret, chunksize, size, extra, alignment, zero, try_tcache_alloc, try_tcache_dalloc); } } return NULL; } else { return (huge_ralloc(arena, ptr, oldsize, size, extra, alignment, zero, try_tcache_dalloc)); } } JEMALLOC_ALWAYS_INLINE void * iralloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (iralloct(ptr, size, extra, alignment, zero, true, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * pool_iralloc(pool_t *pool, void *ptr, size_t size, size_t extra, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (iralloct(ptr, size, extra, alignment, zero, true, true, &dummy)); } JEMALLOC_ALWAYS_INLINE bool ixalloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero) { size_t oldsize; assert(ptr != NULL); assert(size != 0); oldsize = isalloc(ptr, config_prof); if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1)) != 0) { /* Existing object alignment is inadequate. */ return (true); } if (size <= arena_maxclass) return (arena_ralloc_no_move(ptr, oldsize, size, extra, zero)); else return (huge_ralloc_no_move(&base_pool, ptr, oldsize, size, extra, zero)); } malloc_tsd_externs(thread_allocated, thread_allocated_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, thread_allocated, thread_allocated_t, THREAD_ALLOCATED_INITIALIZER, malloc_tsd_no_cleanup) #endif #include "jemalloc/internal/prof.h" #undef JEMALLOC_H_INLINES #ifdef _WIN32 #define __builtin_clz(x) msc_clz(x) #endif /******************************************************************************/ #endif /* JEMALLOC_INTERNAL_H */ vmem-1.8/src/jemalloc/include/jemalloc/internal/jemalloc_internal_decls.h000066400000000000000000000022651361505074100267440ustar00rootroot00000000000000#ifndef JEMALLOC_INTERNAL_DECLS_H #define JEMALLOC_INTERNAL_DECLS_H #include #ifdef _WIN32 # include # include "msvc_compat/windows_extra.h" #else # include # include # if !defined(__pnacl__) && !defined(__native_client__) # include # if !defined(SYS_write) && defined(__NR_write) # define SYS_write __NR_write # endif # include # endif # include # include #endif #include #include #ifndef SIZE_T_MAX # define SIZE_T_MAX SIZE_MAX #endif #include #include #include #include #include #include #ifndef offsetof # define offsetof(type, member) ((size_t)&(((type *)NULL)->member)) #endif #include #include #include #include #ifdef _MSC_VER # include typedef intptr_t ssize_t; # define STDERR_FILENO 2 # define __func__ __FUNCTION__ /* Disable warnings about deprecated system functions */ # pragma warning(disable: 4996) #else # include #endif #include # define JE_PATH_MAX 1024 #endif /* JEMALLOC_INTERNAL_H */ vmem-1.8/src/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h.in000066400000000000000000000145531361505074100272030ustar00rootroot00000000000000#ifndef JEMALLOC_INTERNAL_DEFS_H_ #define JEMALLOC_INTERNAL_DEFS_H_ /* * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all * public APIs to be prefixed. This makes it possible, with some care, to use * multiple allocators simultaneously. */ #undef JEMALLOC_PREFIX #undef JEMALLOC_CPREFIX /* * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs. 
* For shared libraries, symbol visibility mechanisms prevent these symbols * from being exported, but for static libraries, naming collisions are a real * possibility. */ #undef JEMALLOC_PRIVATE_NAMESPACE /* * Hyper-threaded CPUs may need a special instruction inside spin loops in * order to yield to another virtual CPU. */ #undef CPU_SPINWAIT /* Defined if the equivalent of FreeBSD's atomic(9) functions are available. */ #undef JEMALLOC_ATOMIC9 /* * Defined if OSAtomic*() functions are available, as provided by Darwin, and * documented in the atomic(3) manual page. */ #undef JEMALLOC_OSATOMIC /* * Defined if __sync_add_and_fetch(uint32_t *, uint32_t) and * __sync_sub_and_fetch(uint32_t *, uint32_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 not being defined (which means the * functions are defined in libgcc instead of being inlines) */ #undef JE_FORCE_SYNC_COMPARE_AND_SWAP_4 /* * Defined if __sync_add_and_fetch(uint64_t *, uint64_t) and * __sync_sub_and_fetch(uint64_t *, uint64_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 not being defined (which means the * functions are defined in libgcc instead of being inlines) */ #undef JE_FORCE_SYNC_COMPARE_AND_SWAP_8 /* * Defined if __builtin_clz() and __builtin_clzl() are available. */ #undef JEMALLOC_HAVE_BUILTIN_CLZ /* * Defined if madvise(2) is available. */ #undef JEMALLOC_HAVE_MADVISE /* * Defined if OSSpin*() functions are available, as provided by Darwin, and * documented in the spinlock(3) manual page. */ #undef JEMALLOC_OSSPIN /* * Defined if _malloc_thread_cleanup() exists. At least in the case of * FreeBSD, pthread_key_create() allocates, which if used during malloc * bootstrapping will cause recursion into the pthreads library. Therefore, if * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in * malloc_tsd. */ #undef JEMALLOC_MALLOC_THREAD_CLEANUP /* * Defined if threaded initialization is known to be safe on this platform. * Among other things, it must be possible to initialize a mutex without * triggering allocation in order for threaded allocation to be safe. */ #undef JEMALLOC_THREADED_INIT /* * Defined if the pthreads implementation defines * _pthread_mutex_init_calloc_cb(), in which case the function is used in order * to avoid recursive allocation during mutex initialization. */ #undef JEMALLOC_MUTEX_INIT_CB /* * Defined if --disable_bsd_malloc_hooks is set. This overrides the * JEMALLOC_MUTEX_INIT_CB checks for prefork/postfork. */ #undef JEMALLOC_DISABLE_BSD_MALLOC_HOOKS /* Non-empty if the tls_model attribute is supported. */ #undef JEMALLOC_TLS_MODEL /* JEMALLOC_CC_SILENCE enables code that silences unuseful compiler warnings. */ #undef JEMALLOC_CC_SILENCE /* JEMALLOC_CODE_COVERAGE enables test code coverage analysis. */ #undef JEMALLOC_CODE_COVERAGE /* * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables * inline functions. */ #undef JEMALLOC_DEBUG /* JEMALLOC_STATS enables statistics calculation. */ #undef JEMALLOC_STATS /* JEMALLOC_PROF enables allocation profiling. */ #undef JEMALLOC_PROF /* Use libunwind for profile backtracing if defined. */ #undef JEMALLOC_PROF_LIBUNWIND /* Use libgcc for profile backtracing if defined. */ #undef JEMALLOC_PROF_LIBGCC /* Use gcc intrinsics for profile backtracing if defined. */ #undef JEMALLOC_PROF_GCC /* * JEMALLOC_TCACHE enables a thread-specific caching layer for small objects. * This makes it possible to allocate/deallocate objects without any locking * when the cache is in the steady state. 
*/ #undef JEMALLOC_TCACHE /* * JEMALLOC_DSS enables use of sbrk(2) to allocate chunks from the data storage * segment (DSS). */ #undef JEMALLOC_DSS /* Support memory filling (junk/zero/quarantine/redzone). */ #undef JEMALLOC_FILL /* Support utrace(2)-based tracing. */ #undef JEMALLOC_UTRACE /* Support Valgrind. */ #undef JEMALLOC_VALGRIND /* Support optional abort() on OOM. */ #undef JEMALLOC_XMALLOC /* Support lazy locking (avoid locking unless a second thread is launched). */ #undef JEMALLOC_LAZY_LOCK /* One page is 2^STATIC_PAGE_SHIFT bytes. */ #undef STATIC_PAGE_SHIFT /* * If defined, use munmap() to unmap freed chunks, rather than storing them for * later reuse. This is disabled by default on Linux because common sequences * of mmap()/munmap() calls will cause virtual memory map holes. */ #undef JEMALLOC_MUNMAP /* TLS is used to map arenas and magazine caches to threads. */ #undef JEMALLOC_TLS /* * ffs()/ffsl() functions to use for bitmapping. Don't use these directly; * instead, use jemalloc_ffs() or jemalloc_ffsl() from util.h. */ #undef JEMALLOC_INTERNAL_FFSL #undef JEMALLOC_INTERNAL_FFS /* * JEMALLOC_IVSALLOC enables ivsalloc(), which verifies that pointers reside * within jemalloc-owned chunks before dereferencing them. */ #undef JEMALLOC_IVSALLOC /* * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings. */ #undef JEMALLOC_ZONE #undef JEMALLOC_ZONE_VERSION /* * Methods for purging unused pages differ between operating systems. * * madvise(..., MADV_DONTNEED) : On Linux, this immediately discards pages, * such that new pages will be demand-zeroed if * the address region is later touched. * madvise(..., MADV_FREE) : On FreeBSD and Darwin, this marks pages as being * unused, such that they will be discarded rather * than swapped out. */ #undef JEMALLOC_PURGE_MADVISE_DONTNEED #undef JEMALLOC_PURGE_MADVISE_FREE /* * Define if operating system has alloca.h header. */ #undef JEMALLOC_HAS_ALLOCA_H /* C99 restrict keyword supported. */ #undef JEMALLOC_HAS_RESTRICT /* For use by hash code. */ #undef JEMALLOC_BIG_ENDIAN /* sizeof(int) == 2^LG_SIZEOF_INT. */ #undef LG_SIZEOF_INT /* sizeof(long) == 2^LG_SIZEOF_LONG. */ #undef LG_SIZEOF_LONG /* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */ #undef LG_SIZEOF_INTMAX_T #endif /* JEMALLOC_INTERNAL_DEFS_H_ */ vmem-1.8/src/jemalloc/include/jemalloc/internal/jemalloc_internal_macros.h000066400000000000000000000031271361505074100271340ustar00rootroot00000000000000/* * JEMALLOC_ALWAYS_INLINE and JEMALLOC_INLINE are used within header files for * functions that are static inline functions if inlining is enabled, and * single-definition library-private functions if inlining is disabled. * * JEMALLOC_ALWAYS_INLINE_C and JEMALLOC_INLINE_C are for use in .c files, in * which case the denoted functions are always static, regardless of whether * inlining is enabled. */ #if defined(JEMALLOC_DEBUG) || defined(JEMALLOC_CODE_COVERAGE) /* Disable inlining to make debugging/profiling easier. 
*/ # define JEMALLOC_ALWAYS_INLINE # define JEMALLOC_ALWAYS_INLINE_C static # define JEMALLOC_INLINE # define JEMALLOC_INLINE_C static # define inline #else # define JEMALLOC_ENABLE_INLINE # ifdef JEMALLOC_HAVE_ATTR # define JEMALLOC_ALWAYS_INLINE \ static inline JEMALLOC_ATTR(unused) JEMALLOC_ATTR(always_inline) # define JEMALLOC_ALWAYS_INLINE_C \ static inline JEMALLOC_ATTR(always_inline) # else # define JEMALLOC_ALWAYS_INLINE static inline # define JEMALLOC_ALWAYS_INLINE_C static inline # endif # define JEMALLOC_INLINE static inline # define JEMALLOC_INLINE_C static inline #endif #ifdef JEMALLOC_CC_SILENCE # define UNUSED JEMALLOC_ATTR(unused) #else # define UNUSED #endif #define ZU(z) ((size_t)(z)) #define ZI(z) ((ssize_t)(z)) #define QU(q) ((uint64_t)(q)) #define QI(q) ((int64_t)(q)) #define KZU(z) ZU(z##ULL) #define KZI(z) ZI(z##LL) #define KQU(q) QU(q##ULL) #define KQI(q) QI(q##LL) #ifndef __DECONST # define __DECONST(type, var) ((type)(uintptr_t)(const void *)(var)) #endif #ifndef JEMALLOC_HAS_RESTRICT # define restrict #endif vmem-1.8/src/jemalloc/include/jemalloc/internal/mb.h000066400000000000000000000051771361505074100225130ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE void mb_write(void); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MB_C_)) #ifdef __i386__ /* * According to the Intel Architecture Software Developer's Manual, current * processors execute instructions in order from the perspective of other * processors in a multiprocessor system, but 1) Intel reserves the right to * change that, and 2) the compiler's optimizer could re-order instructions if * there weren't some form of barrier. Therefore, even if running on an * architecture that does not need memory barriers (everything through at least * i686), an "optimizer barrier" is necessary. */ JEMALLOC_INLINE void mb_write(void) { # if 0 /* This is a true memory barrier. */ asm volatile ("pusha;" "xor %%eax,%%eax;" "cpuid;" "popa;" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); #else /* * This is hopefully enough to keep the compiler from reordering * instructions around this one. */ asm volatile ("nop;" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); #endif } #elif (defined(__amd64__) || defined(__x86_64__)) JEMALLOC_INLINE void mb_write(void) { asm volatile ("sfence" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); } #elif defined(__powerpc__) JEMALLOC_INLINE void mb_write(void) { asm volatile ("eieio" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); } #elif defined(__sparc64__) JEMALLOC_INLINE void mb_write(void) { asm volatile ("membar #StoreStore" : /* Outputs. */ : /* Inputs. */ : "memory" /* Clobbers. */ ); } #elif defined(__tile__) JEMALLOC_INLINE void mb_write(void) { __sync_synchronize(); } #else /* * This is much slower than a simple memory barrier, but the semantics of mutex * unlock make this work. 
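* (POSIX specifies that pthread_mutex_lock() and pthread_mutex_unlock() synchronize memory, so locking and unlocking a private, uncontended mutex provides the ordering guarantee even without an architecture-specific fence.)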
*/ JEMALLOC_INLINE void mb_write(void) { malloc_mutex_t mtx; malloc_mutex_init(&mtx); malloc_mutex_lock(&mtx); malloc_mutex_unlock(&mtx); } #endif #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/mutex.h000066400000000000000000000122411361505074100232450ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct malloc_mutex_s malloc_mutex_t; #if (defined(_WIN32) || defined(JEMALLOC_OSSPIN)\ || defined(JEMALLOC_MUTEX_INIT_CB)\ || defined(JEMALLOC_DISABLE_BSD_MALLOC_HOOKS)) #define JEMALLOC_NO_RWLOCKS typedef malloc_mutex_t malloc_rwlock_t; #else typedef struct malloc_rwlock_s malloc_rwlock_t; #endif #if (defined(JEMALLOC_OSSPIN)) # define MALLOC_MUTEX_INITIALIZER {0} #elif (defined(JEMALLOC_MUTEX_INIT_CB)) # define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL} #else # if (defined(PTHREAD_MUTEX_ADAPTIVE_NP) && \ defined(PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP)) # define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_ADAPTIVE_NP # define MALLOC_MUTEX_INITIALIZER {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP} # else # define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT # define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER} # endif #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct malloc_mutex_s { #ifdef _WIN32 CRITICAL_SECTION lock; #elif (defined(JEMALLOC_OSSPIN)) OSSpinLock lock; #elif (defined(JEMALLOC_MUTEX_INIT_CB)) pthread_mutex_t lock; malloc_mutex_t *postponed_next; #else pthread_mutex_t lock; #endif }; #ifndef JEMALLOC_NO_RWLOCKS struct malloc_rwlock_s { pthread_rwlock_t lock; }; #endif #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_LAZY_LOCK extern bool isthreaded; #else # undef isthreaded /* Undo private_namespace.h definition. 
*/ # define isthreaded true #endif bool malloc_mutex_init(malloc_mutex_t *mutex); void malloc_mutex_prefork(malloc_mutex_t *mutex); void malloc_mutex_postfork_parent(malloc_mutex_t *mutex); void malloc_mutex_postfork_child(malloc_mutex_t *mutex); bool mutex_boot(void); #ifdef JEMALLOC_NO_RWLOCKS #undef malloc_rwlock_init #undef malloc_rwlock_destroy #define malloc_rwlock_init malloc_mutex_init #define malloc_rwlock_destroy malloc_mutex_destroy #endif void malloc_rwlock_prefork(malloc_rwlock_t *rwlock); void malloc_rwlock_postfork_parent(malloc_rwlock_t *rwlock); void malloc_rwlock_postfork_child(malloc_rwlock_t *rwlock); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE void malloc_mutex_lock(malloc_mutex_t *mutex); void malloc_mutex_unlock(malloc_mutex_t *mutex); void malloc_mutex_destroy(malloc_mutex_t *mutex); #ifndef JEMALLOC_NO_RWLOCKS bool malloc_rwlock_init(malloc_rwlock_t *rwlock); void malloc_rwlock_destroy(malloc_rwlock_t *rwlock); #endif void malloc_rwlock_rdlock(malloc_rwlock_t *rwlock); void malloc_rwlock_wrlock(malloc_rwlock_t *rwlock); void malloc_rwlock_unlock(malloc_rwlock_t *rwlock); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_)) JEMALLOC_INLINE void malloc_mutex_lock(malloc_mutex_t *mutex) { if (isthreaded) { #ifdef _WIN32 EnterCriticalSection(&mutex->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockLock(&mutex->lock); #else pthread_mutex_lock(&mutex->lock); #endif } } JEMALLOC_INLINE void malloc_mutex_unlock(malloc_mutex_t *mutex) { if (isthreaded) { #ifdef _WIN32 LeaveCriticalSection(&mutex->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockUnlock(&mutex->lock); #else pthread_mutex_unlock(&mutex->lock); #endif } } JEMALLOC_INLINE void malloc_mutex_destroy(malloc_mutex_t *mutex) { #if (!defined(_WIN32) && !defined(JEMALLOC_OSSPIN)\ && !defined(JEMALLOC_MUTEX_INIT_CB) && !defined(JEMALLOC_JET)) pthread_mutex_destroy(&mutex->lock); #endif } JEMALLOC_INLINE void malloc_rwlock_rdlock(malloc_rwlock_t *rwlock) { if (isthreaded) { #ifdef _WIN32 EnterCriticalSection(&rwlock->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockLock(&rwlock->lock); #elif (defined(JEMALLOC_NO_RWLOCKS)) pthread_mutex_lock(&rwlock->lock); #else pthread_rwlock_rdlock(&rwlock->lock); #endif } } JEMALLOC_INLINE void malloc_rwlock_wrlock(malloc_rwlock_t *rwlock) { if (isthreaded) { #ifdef _WIN32 EnterCriticalSection(&rwlock->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockLock(&rwlock->lock); #elif (defined(JEMALLOC_NO_RWLOCKS)) pthread_mutex_lock(&rwlock->lock); #else pthread_rwlock_wrlock(&rwlock->lock); #endif } } JEMALLOC_INLINE void malloc_rwlock_unlock(malloc_rwlock_t *rwlock) { if (isthreaded) { #ifdef _WIN32 LeaveCriticalSection(&rwlock->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockUnlock(&rwlock->lock); #elif (defined(JEMALLOC_NO_RWLOCKS)) pthread_mutex_unlock(&rwlock->lock); #else pthread_rwlock_unlock(&rwlock->lock); #endif } } #ifndef JEMALLOC_NO_RWLOCKS JEMALLOC_INLINE bool malloc_rwlock_init(malloc_rwlock_t *rwlock) { if (isthreaded) { if (pthread_rwlock_init(&rwlock->lock, NULL) != 0) return (true); } return (false); } JEMALLOC_INLINE void malloc_rwlock_destroy(malloc_rwlock_t *rwlock) { if (isthreaded) { pthread_rwlock_destroy(&rwlock->lock); } } #endif #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ 
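/*
 * Typical usage of the wrappers above (an illustrative sketch only; the
 * counter, its mutex, and the boot hook are hypothetical, not jemalloc code):
 *
 *	static malloc_mutex_t	counter_mtx;
 *	static uint64_t		counter;
 *
 *	static bool
 *	counter_boot(void)
 *	{
 *
 *		return (malloc_mutex_init(&counter_mtx));
 *	}
 *
 *	static void
 *	counter_inc(void)
 *	{
 *
 *		malloc_mutex_lock(&counter_mtx);
 *		counter++;
 *		malloc_mutex_unlock(&counter_mtx);
 *	}
 *
 * malloc_mutex_init() follows the usual jemalloc convention of returning true
 * on failure, and the lock/unlock wrappers are no-ops while isthreaded is
 * false (lazy locking), so they are safe to call during single-threaded boot.
 */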
vmem-1.8/src/jemalloc/include/jemalloc/internal/pool.h000066400000000000000000000111171361505074100230550ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #define POOLS_MIN 16 #define POOLS_MAX 32768 /* * We want to expose pool_t to the library user * as a result typedef for pool_s is located in "jemalloc.h" */ typedef struct tsd_pool_s tsd_pool_t; /* * Dummy arena is used to pass pool structure to choose_arena function * through various alloc/free variants */ #define ARENA_DUMMY_IND (~0) #define DUMMY_ARENA_INITIALIZE(name, p) \ do { \ (name).ind = ARENA_DUMMY_IND; \ (name).pool = (p); \ } while (0) #define TSD_POOL_INITIALIZER JEMALLOC_ARG_CONCAT({.npools = 0, .arenas = NULL, .seqno = NULL }) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS typedef struct pool_memory_range_node_s { uintptr_t addr; uintptr_t addr_end; uintptr_t usable_addr; uintptr_t usable_addr_end; struct pool_memory_range_node_s *next; } pool_memory_range_node_t; struct pool_s { /* This pool's index within the pools array. */ unsigned pool_id; /* * Unique pool number. A pool_id can be reused, seqno helping to check * that data in Thread Storage Data are still valid. */ unsigned seqno; /* Protects arenas initialization (arenas, arenas_total). */ malloc_rwlock_t arenas_lock; /* * Arenas that are used to service external requests. Not all elements of the * arenas array are necessarily used; arenas are created lazily as needed. * * arenas[0..narenas_auto) are used for automatic multiplexing of threads and * arenas. arenas[narenas_auto..narenas_total) are only used if the application * takes some action to create them and allocate from them. */ arena_t **arenas; unsigned narenas_total; unsigned narenas_auto; /* Tree of chunks that are stand-alone huge allocations. */ extent_tree_t huge; /* Protects chunk-related data structures. */ malloc_mutex_t huge_mtx; malloc_mutex_t chunks_mtx; chunk_stats_t stats_chunks; /* * Trees of chunks that were previously allocated (trees differ only in node * ordering). These are used when allocating chunks, in an attempt to re-use * address space. Depending on function, different tree orderings are needed, * which is why there are two trees with the same contents. */ extent_tree_t chunks_szad_mmap; extent_tree_t chunks_ad_mmap; extent_tree_t chunks_szad_dss; extent_tree_t chunks_ad_dss; rtree_t *chunks_rtree; /* Protects base-related data structures. */ malloc_mutex_t base_mtx; malloc_mutex_t base_node_mtx; /* * Current pages that are being used for internal memory allocations. These * pages are carved up in cacheline-size quanta, so that there is no chance of * false cache line sharing. */ void *base_next_addr; void *base_past_addr; /* Addr immediately past base_pages. */ extent_node_t *base_nodes; /* * Per pool statistics variables */ bool ctl_initialized; ctl_stats_t ctl_stats; size_t ctl_stats_allocated; size_t ctl_stats_active; size_t ctl_stats_mapped; size_t stats_cactive; /* Protects list of memory ranges. */ malloc_mutex_t memory_range_mtx; /* List of memory ranges inside pool, useful for pool_check(). 
*/ pool_memory_range_node_t *memory_range_list; }; struct tsd_pool_s { size_t npools; /* size of the arrays */ unsigned *seqno; /* Sequence number of pool */ arena_t **arenas; /* array of arenas indexed by pool id */ }; /* * Minimal size of pool, includes header alignment to cache line size, * initial space for base allocator, and size of at least one chunk * of memory with address alignment to multiple of chunksize. */ #define POOL_MINIMAL_SIZE (3*chunksize) #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS bool pool_boot(pool_t *pool, unsigned pool_id); bool pool_runtime_init(pool_t *pool, unsigned pool_id); bool pool_new(pool_t *pool, unsigned pool_id); void pool_destroy(pool_t *pool); extern malloc_mutex_t pools_lock; extern malloc_mutex_t pool_base_lock; void pool_prefork(); void pool_postfork_parent(); void pool_postfork_child(); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE bool pool_is_file_mapped(pool_t *pool); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined (JEMALLOC_POOL_C_)) JEMALLOC_INLINE bool pool_is_file_mapped(pool_t *pool) { return pool->pool_id != 0; } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/private_namespace.sh000077500000000000000000000001351361505074100257560ustar00rootroot00000000000000#!/bin/sh for symbol in `cat $1` ; do echo "#define ${symbol} JEMALLOC_N(${symbol})" done vmem-1.8/src/jemalloc/include/jemalloc/internal/private_symbols.txt000066400000000000000000000175321361505074100257250ustar00rootroot00000000000000a0calloc a0free a0malloc arena_alloc_junk_small arena_bin_index arena_bin_info arena_boot arena_chunk_alloc_huge arena_chunk_dalloc_huge arena_dalloc arena_dalloc_bin arena_dalloc_bin_locked arena_dalloc_junk_large arena_dalloc_junk_small arena_dalloc_large arena_dalloc_large_locked arena_dalloc_small arena_dss_prec_get arena_dss_prec_set arena_malloc arena_malloc_large arena_malloc_small arena_mapbits_allocated_get arena_mapbits_binind_get arena_mapbits_dirty_get arena_mapbits_get arena_mapbits_large_binind_set arena_mapbits_large_get arena_mapbits_large_set arena_mapbits_large_size_get arena_mapbits_small_runind_get arena_mapbits_small_set arena_mapbits_unallocated_set arena_mapbits_unallocated_size_get arena_mapbits_unallocated_size_set arena_mapbits_unzeroed_get arena_mapbits_unzeroed_set arena_mapbitsp_get arena_mapbitsp_read arena_mapbitsp_write arena_mapelm_to_pageind arena_mapp_get arena_maxclass arena_new arena_palloc arena_postfork_child arena_postfork_parent arena_prefork arena_prof_accum arena_prof_accum_impl arena_prof_accum_locked arena_prof_ctx_get arena_prof_ctx_set arena_prof_promoted arena_ptr_small_binind_get arena_purge_all arena_quarantine_junk_small arena_ralloc arena_ralloc_junk_large arena_ralloc_no_move arena_redzone_corruption arena_run_regind arena_runs_avail_tree_iter arena_salloc arena_stats_merge arena_tcache_fill_small arenas pools arenas_booted arenas_cleanup arenas_extend arenas_initialized arenas_lock arenas_tls arenas_tsd arenas_tsd_boot arenas_tsd_cleanup_wrapper arenas_tsd_get arenas_tsd_get_wrapper arenas_tsd_init_head arenas_tsd_set atomic_add_u atomic_add_uint32 atomic_add_uint64 atomic_add_z atomic_sub_u atomic_sub_uint32 atomic_sub_uint64 atomic_sub_z base_alloc 
base_boot base_calloc base_free_fn base_malloc_fn base_node_alloc base_node_dalloc base_pool base_postfork_child base_postfork_parent base_prefork bitmap_full bitmap_get bitmap_info_init bitmap_info_ngroups bitmap_init bitmap_set bitmap_sfu bitmap_size bitmap_unset bt_init buferror choose_arena choose_arena_hard chunk_alloc_arena chunk_alloc_base chunk_alloc_default chunk_alloc_dss chunk_alloc_mmap chunk_global_boot chunk_boot chunk_dalloc_default chunk_dalloc_mmap chunk_dss_boot chunk_dss_postfork_child chunk_dss_postfork_parent chunk_dss_prec_get chunk_dss_prec_set chunk_dss_prefork chunk_in_dss chunk_npages chunk_postfork_child chunk_postfork_parent chunk_prefork chunk_unmap chunk_record chunks_mtx chunks_rtree chunksize chunksize_mask ckh_bucket_search ckh_count ckh_delete ckh_evict_reloc_insert ckh_insert ckh_isearch ckh_iter ckh_new ckh_pointer_hash ckh_pointer_keycomp ckh_rebuild ckh_remove ckh_search ckh_string_hash ckh_string_keycomp ckh_try_bucket_insert ckh_try_insert ctl_boot ctl_bymib ctl_byname ctl_nametomib ctl_postfork_child ctl_postfork_parent ctl_prefork dss_prec_names extent_tree_ad_first extent_tree_ad_insert extent_tree_ad_iter extent_tree_ad_iter_recurse extent_tree_ad_iter_start extent_tree_ad_last extent_tree_ad_new extent_tree_ad_next extent_tree_ad_nsearch extent_tree_ad_prev extent_tree_ad_psearch extent_tree_ad_remove extent_tree_ad_reverse_iter extent_tree_ad_reverse_iter_recurse extent_tree_ad_reverse_iter_start extent_tree_ad_search extent_tree_szad_first extent_tree_szad_insert extent_tree_szad_iter extent_tree_szad_iter_recurse extent_tree_szad_iter_start extent_tree_szad_last extent_tree_szad_new extent_tree_szad_next extent_tree_szad_nsearch extent_tree_szad_prev extent_tree_szad_psearch extent_tree_szad_remove extent_tree_szad_reverse_iter extent_tree_szad_reverse_iter_recurse extent_tree_szad_reverse_iter_start extent_tree_szad_search get_errno hash hash_fmix_32 hash_fmix_64 hash_get_block_32 hash_get_block_64 hash_rotl_32 hash_rotl_64 hash_x64_128 hash_x86_128 hash_x86_32 huge_allocated huge_boot huge_dalloc huge_dalloc_junk huge_malloc huge_ndalloc huge_nmalloc huge_palloc huge_postfork_child huge_postfork_parent huge_prefork huge_prof_ctx_get huge_prof_ctx_set huge_ralloc huge_ralloc_no_move huge_salloc icalloc icalloct idalloc idalloct imalloc imalloct in_valgrind ipalloc ipalloct iqalloc iqalloct iralloc iralloct iralloct_realign isalloc isthreaded ivsalloc ixalloc jemalloc_postfork_child jemalloc_postfork_parent jemalloc_prefork lg_floor malloc_cprintf malloc_mutex_init malloc_mutex_lock malloc_mutex_postfork_child malloc_mutex_postfork_parent malloc_mutex_prefork malloc_mutex_unlock malloc_rwlock_init malloc_rwlock_postfork_child malloc_rwlock_postfork_parent malloc_rwlock_prefork malloc_rwlock_rdlock malloc_rwlock_wrlock malloc_rwlock_unlock malloc_rwlock_destroy malloc_printf malloc_snprintf malloc_strtoumax malloc_tsd_boot malloc_tsd_cleanup_register malloc_tsd_dalloc malloc_tsd_malloc malloc_tsd_no_cleanup malloc_vcprintf malloc_vsnprintf malloc_write map_bias mb_write mutex_boot narenas_auto narenas_total narenas_total_get ncpus nhbins npools npools_cnt opt_abort opt_dss opt_junk opt_lg_chunk opt_lg_dirty_mult opt_lg_prof_interval opt_lg_prof_sample opt_lg_tcache_max opt_narenas opt_prof opt_prof_accum opt_prof_active opt_prof_final opt_prof_gdump opt_prof_leak opt_prof_prefix opt_quarantine opt_redzone opt_stats_print opt_tcache opt_utrace opt_xmalloc opt_zero p2rz pages_purge pools_shared_data_initialized pow2_ceil prof_backtrace 
prof_boot0 prof_boot1 prof_boot2 prof_bt_count prof_ctx_get prof_ctx_set prof_dump_open prof_free prof_gdump prof_idump prof_interval prof_lookup prof_malloc prof_malloc_record_object prof_mdump prof_postfork_child prof_postfork_parent prof_prefork prof_realloc prof_sample_accum_update prof_sample_threshold_update prof_tdata_booted prof_tdata_cleanup prof_tdata_get prof_tdata_init prof_tdata_initialized prof_tdata_tls prof_tdata_tsd prof_tdata_tsd_boot prof_tdata_tsd_cleanup_wrapper prof_tdata_tsd_get prof_tdata_tsd_get_wrapper prof_tdata_tsd_init_head prof_tdata_tsd_set quarantine quarantine_alloc_hook quarantine_boot quarantine_booted quarantine_cleanup quarantine_init quarantine_tls quarantine_tsd quarantine_tsd_boot quarantine_tsd_cleanup_wrapper quarantine_tsd_get quarantine_tsd_get_wrapper quarantine_tsd_init_head quarantine_tsd_set register_zone rtree_delete rtree_get rtree_get_locked rtree_new rtree_postfork_child rtree_postfork_parent rtree_prefork rtree_set s2u sa2u set_errno small_bin2size small_bin2size_compute small_bin2size_lookup small_bin2size_tab small_s2u small_s2u_compute small_s2u_lookup small_size2bin small_size2bin_compute small_size2bin_lookup small_size2bin_tab stats_cactive stats_cactive_add stats_cactive_get stats_cactive_sub stats_chunks stats_print tcache_alloc_easy tcache_alloc_large tcache_alloc_small tcache_alloc_small_hard tcache_arena_associate tcache_arena_dissociate tcache_bin_flush_large tcache_bin_flush_small tcache_bin_info tcache_boot0 tcache_boot1 tcache_booted tcache_create tcache_dalloc_large tcache_dalloc_small tcache_destroy tcache_enabled_booted tcache_enabled_get tcache_enabled_initialized tcache_enabled_set tcache_enabled_tls tcache_enabled_tsd tcache_enabled_tsd_boot tcache_enabled_tsd_cleanup_wrapper tcache_enabled_tsd_get tcache_enabled_tsd_get_wrapper tcache_enabled_tsd_init_head tcache_enabled_tsd_set tcache_event tcache_event_hard tcache_flush tcache_get tcache_get_hard tcache_initialized tcache_maxclass tcache_salloc tcache_stats_merge tcache_thread_cleanup tcache_tls tcache_tsd tcache_tsd_boot tcache_tsd_cleanup_wrapper tcache_tsd_get tcache_tsd_get_wrapper tcache_tsd_init_head tcache_tsd_set thread_allocated_booted thread_allocated_initialized thread_allocated_tls thread_allocated_tsd thread_allocated_tsd_boot thread_allocated_tsd_cleanup_wrapper thread_allocated_tsd_get thread_allocated_tsd_get_wrapper thread_allocated_tsd_init_head thread_allocated_tsd_set tsd_init_check_recursion tsd_init_finish u2rz valgrind_freelike_block valgrind_make_mem_defined valgrind_make_mem_noaccess valgrind_make_mem_undefined pool_new pool_destroy pools_lock pool_base_lock pool_prefork pool_postfork_parent pool_postfork_child pool_alloc vec_get vec_set vec_delete vmem-1.8/src/jemalloc/include/jemalloc/internal/private_unnamespace.sh000077500000000000000000000001061361505074100263170ustar00rootroot00000000000000#!/bin/sh for symbol in `cat $1` ; do echo "#undef ${symbol}" done vmem-1.8/src/jemalloc/include/jemalloc/internal/prng.h000066400000000000000000000037411361505074100230560ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* * Simple linear congruential pseudo-random number generator: * * prng(y) = (a*x + c) % m * * where the following constants ensure maximal period: * * a == Odd number (relatively prime to 2^n), and (a-1) is a multiple of 4. * c == Odd number (relatively prime to 2^n). * m == 2^32 * * See Knuth's TAOCP 3rd Ed., Vol. 2, pg. 
17 for details on these constraints. * * This choice of m has the disadvantage that the quality of the bits is * proportional to bit position. For example. the lowest bit has a cycle of 2, * the next has a cycle of 4, etc. For this reason, we prefer to use the upper * bits. * * Macro parameters: * uint32_t r : Result. * unsigned lg_range : (0..32], number of least significant bits to return. * uint32_t state : Seed value. * const uint32_t a, c : See above discussion. */ #define prng32(r, lg_range, state, a, c) do { \ assert(lg_range > 0); \ assert(lg_range <= 32); \ \ r = (state * (a)) + (c); \ state = r; \ r >>= (32 - lg_range); \ } while (false) /* Same as prng32(), but 64 bits of pseudo-randomness, using uint64_t. */ #define prng64(r, lg_range, state, a, c) do { \ assert(lg_range > 0); \ assert(lg_range <= 64); \ \ r = (state * (a)) + (c); \ state = r; \ r >>= (64 - lg_range); \ } while (false) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/prof.h000066400000000000000000000342721361505074100230610ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct prof_bt_s prof_bt_t; typedef struct prof_cnt_s prof_cnt_t; typedef struct prof_thr_cnt_s prof_thr_cnt_t; typedef struct prof_ctx_s prof_ctx_t; typedef struct prof_tdata_s prof_tdata_t; /* Option defaults. */ #ifdef JEMALLOC_PROF # define PROF_PREFIX_DEFAULT "jeprof" #else # define PROF_PREFIX_DEFAULT "" #endif #define LG_PROF_SAMPLE_DEFAULT 19 #define LG_PROF_INTERVAL_DEFAULT -1 /* * Hard limit on stack backtrace depth. The version of prof_backtrace() that * is based on __builtin_return_address() necessarily has a hard-coded number * of backtrace frame handlers, and should be kept in sync with this setting. */ #define PROF_BT_MAX 128 /* Maximum number of backtraces to store in each per thread LRU cache. */ #define PROF_TCMAX 1024 /* Initial hash table size. */ #define PROF_CKH_MINITEMS 64 /* Size of memory buffer to use when writing dump files. */ #define PROF_DUMP_BUFSIZE 65536 /* Size of stack-allocated buffer used by prof_printf(). */ #define PROF_PRINTF_BUFSIZE 128 /* * Number of mutexes shared among all ctx's. No space is allocated for these * unless profiling is enabled, so it's okay to over-provision. */ #define PROF_NCTX_LOCKS 1024 /* * prof_tdata pointers close to NULL are used to encode state information that * is used for cleaning up during thread shutdown. */ #define PROF_TDATA_STATE_REINCARNATED ((prof_tdata_t *)(uintptr_t)1) #define PROF_TDATA_STATE_PURGATORY ((prof_tdata_t *)(uintptr_t)2) #define PROF_TDATA_STATE_MAX PROF_TDATA_STATE_PURGATORY #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct prof_bt_s { /* Backtrace, stored as len program counters. */ void **vec; unsigned len; }; #ifdef JEMALLOC_PROF_LIBGCC /* Data structure passed to libgcc _Unwind_Backtrace() callback functions. 
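/*
 * Illustrative sketch (not part of jemalloc): driving the prng32() recurrence
 * defined above.  The constant pair a = 1103515245, c = 12345 is only an
 * example chosen to satisfy the stated constraints (a odd with (a-1) % 4 == 0,
 * c odd, m == 2^32); DEMO_PRNG32 mirrors prng32() so the sketch is
 * self-contained rather than depending on this header.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PRNG32(r, lg_range, state, a, c) do {			\
	assert((lg_range) > 0 && (lg_range) <= 32);			\
	(r) = ((state) * (a)) + (c);					\
	(state) = (r);							\
	(r) >>= (32 - (lg_range));					\
} while (0)

int
main(void)
{
	uint32_t state = 42;	/* seed */
	uint32_t r;
	int i;

	for (i = 0; i < 4; i++) {
		/* Keep only the 4 most significant bits: a value in [0, 16). */
		DEMO_PRNG32(r, 4, state, 1103515245U, 12345U);
		printf("%u\n", r);
	}
	return (0);
}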
*/ typedef struct { prof_bt_t *bt; unsigned max; } prof_unwind_data_t; #endif struct prof_cnt_s { /* * Profiling counters. An allocation/deallocation pair can operate on * different prof_thr_cnt_t objects that are linked into the same * prof_ctx_t cnts_ql, so it is possible for the cur* counters to go * negative. In principle it is possible for the *bytes counters to * overflow/underflow, but a general solution would require something * like 128-bit counters; this implementation doesn't bother to solve * that problem. */ int64_t curobjs; int64_t curbytes; uint64_t accumobjs; uint64_t accumbytes; }; struct prof_thr_cnt_s { /* Linkage into prof_ctx_t's cnts_ql. */ ql_elm(prof_thr_cnt_t) cnts_link; /* Linkage into thread's LRU. */ ql_elm(prof_thr_cnt_t) lru_link; /* * Associated context. If a thread frees an object that it did not * allocate, it is possible that the context is not cached in the * thread's hash table, in which case it must be able to look up the * context, insert a new prof_thr_cnt_t into the thread's hash table, * and link it into the prof_ctx_t's cnts_ql. */ prof_ctx_t *ctx; /* * Threads use memory barriers to update the counters. Since there is * only ever one writer, the only challenge is for the reader to get a * consistent read of the counters. * * The writer uses this series of operations: * * 1) Increment epoch to an odd number. * 2) Update counters. * 3) Increment epoch to an even number. * * The reader must assure 1) that the epoch is even while it reads the * counters, and 2) that the epoch doesn't change between the time it * starts and finishes reading the counters. */ unsigned epoch; /* Profiling counters. */ prof_cnt_t cnts; }; struct prof_ctx_s { /* Associated backtrace. */ prof_bt_t *bt; /* Protects nlimbo, cnt_merged, and cnts_ql. */ malloc_mutex_t *lock; /* * Number of threads that currently cause this ctx to be in a state of * limbo due to one of: * - Initializing per thread counters associated with this ctx. * - Preparing to destroy this ctx. * - Dumping a heap profile that includes this ctx. * nlimbo must be 1 (single destroyer) in order to safely destroy the * ctx. */ unsigned nlimbo; /* Temporary storage for summation during dump. */ prof_cnt_t cnt_summed; /* When threads exit, they merge their stats into cnt_merged. */ prof_cnt_t cnt_merged; /* * List of profile counters, one for each thread that has allocated in * this context. */ ql_head(prof_thr_cnt_t) cnts_ql; /* Linkage for list of contexts to be dumped. */ ql_elm(prof_ctx_t) dump_link; }; typedef ql_head(prof_ctx_t) prof_ctx_list_t; struct prof_tdata_s { /* * Hash of (prof_bt_t *)-->(prof_thr_cnt_t *). Each thread keeps a * cache of backtraces, with associated thread-specific prof_thr_cnt_t * objects. Other threads may read the prof_thr_cnt_t contents, but no * others will ever write them. * * Upon thread exit, the thread must merge all the prof_thr_cnt_t * counter data into the associated prof_ctx_t objects, and unlink/free * the prof_thr_cnt_t objects. */ ckh_t bt2cnt; /* LRU for contents of bt2cnt. */ ql_head(prof_thr_cnt_t) lru_ql; /* Backtrace vector, used for calls to prof_backtrace(). */ void **vec; /* Sampling state. */ uint64_t prng_state; uint64_t bytes_until_sample; /* State used to avoid dumping while operating on prof internals. 
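/*
 * Reader-side sketch of the epoch protocol described for prof_thr_cnt_s
 * above.  The demo_* types are illustrative stand-ins, not jemalloc types,
 * and the memory barriers a real reader would need (the counterpart of the
 * writer's mb_write() calls) are only noted in comments, so this shows the
 * control flow rather than a complete lock-free reader.
 */
#include <stdint.h>

typedef struct {
	int64_t curobjs;
	int64_t curbytes;
	uint64_t accumobjs;
	uint64_t accumbytes;
} demo_cnt_t;

typedef struct {
	unsigned epoch;		/* odd while the writer is mid-update */
	demo_cnt_t cnts;
} demo_thr_cnt_t;

/*
 * Retry until the epoch is even and unchanged across the copy -- exactly the
 * two conditions the comment above requires of the reader.
 */
demo_cnt_t
demo_read_counters(const volatile demo_thr_cnt_t *c)
{
	demo_cnt_t snap;
	unsigned e0, e1;

	do {
		e0 = c->epoch;
		/* A read barrier would be needed here... */
		snap.curobjs = c->cnts.curobjs;
		snap.curbytes = c->cnts.curbytes;
		snap.accumobjs = c->cnts.accumobjs;
		snap.accumbytes = c->cnts.accumbytes;
		/* ...and here, before re-reading the epoch. */
		e1 = c->epoch;
	} while ((e0 & 1U) != 0 || e0 != e1);

	return (snap);
}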
*/ bool enq; bool enq_idump; bool enq_gdump; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern bool opt_prof; /* * Even if opt_prof is true, sampling can be temporarily disabled by setting * opt_prof_active to false. No locking is used when updating opt_prof_active, * so there are no guarantees regarding how long it will take for all threads * to notice state changes. */ extern bool opt_prof_active; extern size_t opt_lg_prof_sample; /* Mean bytes between samples. */ extern ssize_t opt_lg_prof_interval; /* lg(prof_interval). */ extern bool opt_prof_gdump; /* High-water memory dumping. */ extern bool opt_prof_final; /* Final profile dumping. */ extern bool opt_prof_leak; /* Dump leak summary at exit. */ extern bool opt_prof_accum; /* Report cumulative bytes. */ extern char opt_prof_prefix[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF JE_PATH_MAX + #endif 1]; /* * Profile dump interval, measured in bytes allocated. Each arena triggers a * profile dump when it reaches this threshold. The effect is that the * interval between profile dumps averages prof_interval, though the actual * interval between dumps will tend to be sporadic, and the interval will be a * maximum of approximately (prof_interval * narenas). */ extern uint64_t prof_interval; void bt_init(prof_bt_t *bt, void **vec); void prof_backtrace(prof_bt_t *bt); prof_thr_cnt_t *prof_lookup(prof_bt_t *bt); #ifdef JEMALLOC_JET size_t prof_bt_count(void); typedef int (prof_dump_open_t)(bool, const char *); extern prof_dump_open_t *prof_dump_open; #endif void prof_idump(void); bool prof_mdump(const char *filename); void prof_gdump(void); prof_tdata_t *prof_tdata_init(void); void prof_tdata_cleanup(void *arg); void prof_boot0(void); void prof_boot1(void); bool prof_boot2(void); void prof_prefork(void); void prof_postfork_parent(void); void prof_postfork_child(void); void prof_sample_threshold_update(prof_tdata_t *prof_tdata); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #define PROF_ALLOC_PREP(size, ret) do { \ prof_tdata_t *prof_tdata; \ prof_bt_t bt; \ \ assert(size == s2u(size)); \ \ if (!opt_prof_active || \ prof_sample_accum_update(size, false, &prof_tdata)) { \ ret = (prof_thr_cnt_t *)(uintptr_t)1U; \ } else { \ bt_init(&bt, prof_tdata->vec); \ prof_backtrace(&bt); \ ret = prof_lookup(&bt); \ } \ } while (0) #ifndef JEMALLOC_ENABLE_INLINE malloc_tsd_protos(JEMALLOC_ATTR(unused), prof_tdata, prof_tdata_t *) prof_tdata_t *prof_tdata_get(bool create); bool prof_sample_accum_update(size_t size, bool commit, prof_tdata_t **prof_tdata_out); prof_ctx_t *prof_ctx_get(const void *ptr); void prof_ctx_set(const void *ptr, prof_ctx_t *ctx); void prof_malloc_record_object(const void *ptr, size_t usize, prof_thr_cnt_t *cnt); void prof_malloc(const void *ptr, size_t usize, prof_thr_cnt_t *cnt); void prof_realloc(const void *ptr, size_t usize, prof_thr_cnt_t *cnt, size_t old_usize, prof_ctx_t *old_ctx); void prof_free(const void *ptr, size_t size); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_PROF_C_)) /* Thread-specific backtrace cache, used to reduce bt2ctx contention. 
*/ malloc_tsd_externs(prof_tdata, prof_tdata_t *) malloc_tsd_funcs(JEMALLOC_INLINE, prof_tdata, prof_tdata_t *, NULL, prof_tdata_cleanup) JEMALLOC_INLINE prof_tdata_t * prof_tdata_get(bool create) { prof_tdata_t *prof_tdata; cassert(config_prof); prof_tdata = *prof_tdata_tsd_get(); if (create && prof_tdata == NULL) prof_tdata = prof_tdata_init(); return (prof_tdata); } JEMALLOC_INLINE prof_ctx_t * prof_ctx_get(const void *ptr) { prof_ctx_t *ret; arena_chunk_t *chunk; cassert(config_prof); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) { /* Region. */ ret = arena_prof_ctx_get(ptr); } else ret = huge_prof_ctx_get(ptr); return (ret); } JEMALLOC_INLINE void prof_ctx_set(const void *ptr, prof_ctx_t *ctx) { arena_chunk_t *chunk; cassert(config_prof); assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) { /* Region. */ arena_prof_ctx_set(ptr, ctx); } else huge_prof_ctx_set(ptr, ctx); } JEMALLOC_INLINE bool prof_sample_accum_update(size_t size, bool commit, prof_tdata_t **prof_tdata_out) { prof_tdata_t *prof_tdata; cassert(config_prof); prof_tdata = prof_tdata_get(true); if ((uintptr_t)prof_tdata <= (uintptr_t)PROF_TDATA_STATE_MAX) prof_tdata = NULL; if (prof_tdata_out != NULL) *prof_tdata_out = prof_tdata; if (prof_tdata == NULL) return (true); if (prof_tdata->bytes_until_sample >= size) { if (commit) prof_tdata->bytes_until_sample -= size; return (true); } else { /* Compute new sample threshold. */ if (commit) prof_sample_threshold_update(prof_tdata); return (false); } } JEMALLOC_INLINE void prof_malloc_record_object(const void *ptr, size_t usize, prof_thr_cnt_t *cnt) { prof_ctx_set(ptr, cnt->ctx); cnt->epoch++; /*********/ mb_write(); /*********/ cnt->cnts.curobjs++; cnt->cnts.curbytes += usize; if (opt_prof_accum) { cnt->cnts.accumobjs++; cnt->cnts.accumbytes += usize; } /*********/ mb_write(); /*********/ cnt->epoch++; /*********/ mb_write(); /*********/ } JEMALLOC_INLINE void prof_malloc(const void *ptr, size_t usize, prof_thr_cnt_t *cnt) { cassert(config_prof); assert(ptr != NULL); assert(usize == isalloc(ptr, true)); if (prof_sample_accum_update(usize, true, NULL)) { /* * Don't sample. For malloc()-like allocation, it is * always possible to tell in advance how large an * object's usable size will be, so there should never * be a difference between the usize passed to * PROF_ALLOC_PREP() and prof_malloc(). */ assert((uintptr_t)cnt == (uintptr_t)1U); } if ((uintptr_t)cnt > (uintptr_t)1U) prof_malloc_record_object(ptr, usize, cnt); else prof_ctx_set(ptr, (prof_ctx_t *)(uintptr_t)1U); } JEMALLOC_INLINE void prof_realloc(const void *ptr, size_t usize, prof_thr_cnt_t *cnt, size_t old_usize, prof_ctx_t *old_ctx) { prof_thr_cnt_t *told_cnt; cassert(config_prof); assert(ptr != NULL || (uintptr_t)cnt <= (uintptr_t)1U); if (ptr != NULL) { assert(usize == isalloc(ptr, true)); if (prof_sample_accum_update(usize, true, NULL)) { /* * Don't sample. The usize passed to * PROF_ALLOC_PREP() was larger than what * actually got allocated, so a backtrace was * captured for this allocation, even though * its actual usize was insufficient to cross * the sample threshold. */ cnt = (prof_thr_cnt_t *)(uintptr_t)1U; } } if ((uintptr_t)old_ctx > (uintptr_t)1U) { told_cnt = prof_lookup(old_ctx->bt); if (told_cnt == NULL) { /* * It's too late to propagate OOM for this realloc(), * so operate directly on old_cnt->ctx->cnt_merged. 
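/*
 * Stripped-down sketch of the sampling accumulator implemented by
 * prof_sample_accum_update() above: each allocation is charged against a
 * byte countdown, and a backtrace is only sampled once the countdown is
 * crossed.  The demo_* names are illustrative, and a fixed mean stands in
 * for jemalloc's PRNG-drawn threshold (see prof_sample_threshold_update()).
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
	uint64_t bytes_until_sample;
} demo_tdata_t;

static void
demo_threshold_update(demo_tdata_t *td, uint64_t mean_bytes)
{
	/* jemalloc randomizes this so samples average 2^lg_prof_sample apart. */
	td->bytes_until_sample = mean_bytes;
}

/*
 * Returns true when an allocation of `size` bytes should NOT be sampled,
 * mirroring the commit path of prof_sample_accum_update().
 */
static bool
demo_sample_accum_update(demo_tdata_t *td, size_t size, uint64_t mean_bytes)
{
	if (td->bytes_until_sample >= size) {
		td->bytes_until_sample -= size;
		return (true);		/* below threshold: skip sampling */
	}
	demo_threshold_update(td, mean_bytes);
	return (false);			/* take a backtrace sample */
}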
*/ malloc_mutex_lock(old_ctx->lock); old_ctx->cnt_merged.curobjs--; old_ctx->cnt_merged.curbytes -= old_usize; malloc_mutex_unlock(old_ctx->lock); told_cnt = (prof_thr_cnt_t *)(uintptr_t)1U; } } else told_cnt = (prof_thr_cnt_t *)(uintptr_t)1U; if ((uintptr_t)told_cnt > (uintptr_t)1U) told_cnt->epoch++; if ((uintptr_t)cnt > (uintptr_t)1U) { prof_ctx_set(ptr, cnt->ctx); cnt->epoch++; } else if (ptr != NULL) prof_ctx_set(ptr, (prof_ctx_t *)(uintptr_t)1U); /*********/ mb_write(); /*********/ if ((uintptr_t)told_cnt > (uintptr_t)1U) { told_cnt->cnts.curobjs--; told_cnt->cnts.curbytes -= old_usize; } if ((uintptr_t)cnt > (uintptr_t)1U) { cnt->cnts.curobjs++; cnt->cnts.curbytes += usize; if (opt_prof_accum) { cnt->cnts.accumobjs++; cnt->cnts.accumbytes += usize; } } /*********/ mb_write(); /*********/ if ((uintptr_t)told_cnt > (uintptr_t)1U) told_cnt->epoch++; if ((uintptr_t)cnt > (uintptr_t)1U) cnt->epoch++; /*********/ mb_write(); /* Not strictly necessary. */ } JEMALLOC_INLINE void prof_free(const void *ptr, size_t size) { prof_ctx_t *ctx = prof_ctx_get(ptr); cassert(config_prof); if ((uintptr_t)ctx > (uintptr_t)1) { prof_thr_cnt_t *tcnt; assert(size == isalloc(ptr, true)); tcnt = prof_lookup(ctx->bt); if (tcnt != NULL) { tcnt->epoch++; /*********/ mb_write(); /*********/ tcnt->cnts.curobjs--; tcnt->cnts.curbytes -= size; /*********/ mb_write(); /*********/ tcnt->epoch++; /*********/ mb_write(); /*********/ } else { /* * OOM during free() cannot be propagated, so operate * directly on cnt->ctx->cnt_merged. */ malloc_mutex_lock(ctx->lock); ctx->cnt_merged.curobjs--; ctx->cnt_merged.curbytes -= size; malloc_mutex_unlock(ctx->lock); } } } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/public_namespace.sh000077500000000000000000000002011361505074100255540ustar00rootroot00000000000000#!/bin/sh for nm in `cat $1` ; do n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'` echo "#define je_${n} JEMALLOC_N(${n})" done vmem-1.8/src/jemalloc/include/jemalloc/internal/public_unnamespace.sh000077500000000000000000000001571361505074100261310ustar00rootroot00000000000000#!/bin/sh for nm in `cat $1` ; do n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'` echo "#undef je_${n}" done vmem-1.8/src/jemalloc/include/jemalloc/internal/ql.h000066400000000000000000000045051361505074100225230ustar00rootroot00000000000000/* * List definitions. */ #define ql_head(a_type) \ struct { \ a_type *qlh_first; \ } #define ql_head_initializer(a_head) {NULL} #define ql_elm(a_type) qr(a_type) /* List functions. */ #define ql_new(a_head) do { \ (a_head)->qlh_first = NULL; \ } while (0) #define ql_elm_new(a_elm, a_field) qr_new((a_elm), a_field) #define ql_first(a_head) ((a_head)->qlh_first) #define ql_last(a_head, a_field) \ ((ql_first(a_head) != NULL) \ ? qr_prev(ql_first(a_head), a_field) : NULL) #define ql_next(a_head, a_elm, a_field) \ ((ql_last(a_head, a_field) != (a_elm)) \ ? qr_next((a_elm), a_field) : NULL) #define ql_prev(a_head, a_elm, a_field) \ ((ql_first(a_head) != (a_elm)) ? 
qr_prev((a_elm), a_field) \ : NULL) #define ql_before_insert(a_head, a_qlelm, a_elm, a_field) do { \ qr_before_insert((a_qlelm), (a_elm), a_field); \ if (ql_first(a_head) == (a_qlelm)) { \ ql_first(a_head) = (a_elm); \ } \ } while (0) #define ql_after_insert(a_qlelm, a_elm, a_field) \ qr_after_insert((a_qlelm), (a_elm), a_field) #define ql_head_insert(a_head, a_elm, a_field) do { \ if (ql_first(a_head) != NULL) { \ qr_before_insert(ql_first(a_head), (a_elm), a_field); \ } \ ql_first(a_head) = (a_elm); \ } while (0) #define ql_tail_insert(a_head, a_elm, a_field) do { \ if (ql_first(a_head) != NULL) { \ qr_before_insert(ql_first(a_head), (a_elm), a_field); \ } \ ql_first(a_head) = qr_next((a_elm), a_field); \ } while (0) #define ql_remove(a_head, a_elm, a_field) do { \ if (ql_first(a_head) == (a_elm)) { \ ql_first(a_head) = qr_next(ql_first(a_head), a_field); \ } \ if (ql_first(a_head) != (a_elm)) { \ qr_remove((a_elm), a_field); \ } else { \ ql_first(a_head) = NULL; \ } \ } while (0) #define ql_head_remove(a_head, a_type, a_field) do { \ a_type *t = ql_first(a_head); \ ql_remove((a_head), t, a_field); \ } while (0) #define ql_tail_remove(a_head, a_type, a_field) do { \ a_type *t = ql_last(a_head, a_field); \ ql_remove((a_head), t, a_field); \ } while (0) #define ql_foreach(a_var, a_head, a_field) \ qr_foreach((a_var), ql_first(a_head), a_field) #define ql_reverse_foreach(a_var, a_head, a_field) \ qr_reverse_foreach((a_var), ql_first(a_head), a_field) vmem-1.8/src/jemalloc/include/jemalloc/internal/qr.h000066400000000000000000000043171361505074100225320ustar00rootroot00000000000000/* Ring definitions. */ #define qr(a_type) \ struct { \ a_type *qre_next; \ a_type *qre_prev; \ } /* Ring functions. */ #define qr_new(a_qr, a_field) do { \ (a_qr)->a_field.qre_next = (a_qr); \ (a_qr)->a_field.qre_prev = (a_qr); \ } while (0) #define qr_next(a_qr, a_field) ((a_qr)->a_field.qre_next) #define qr_prev(a_qr, a_field) ((a_qr)->a_field.qre_prev) #define qr_before_insert(a_qrelm, a_qr, a_field) do { \ (a_qr)->a_field.qre_prev = (a_qrelm)->a_field.qre_prev; \ (a_qr)->a_field.qre_next = (a_qrelm); \ (a_qr)->a_field.qre_prev->a_field.qre_next = (a_qr); \ (a_qrelm)->a_field.qre_prev = (a_qr); \ } while (0) #define qr_after_insert(a_qrelm, a_qr, a_field) \ do \ { \ (a_qr)->a_field.qre_next = (a_qrelm)->a_field.qre_next; \ (a_qr)->a_field.qre_prev = (a_qrelm); \ (a_qr)->a_field.qre_next->a_field.qre_prev = (a_qr); \ (a_qrelm)->a_field.qre_next = (a_qr); \ } while (0) #define qr_meld(a_qr_a, a_qr_b, a_field) do { \ void *t; \ (a_qr_a)->a_field.qre_prev->a_field.qre_next = (a_qr_b); \ (a_qr_b)->a_field.qre_prev->a_field.qre_next = (a_qr_a); \ t = (a_qr_a)->a_field.qre_prev; \ (a_qr_a)->a_field.qre_prev = (a_qr_b)->a_field.qre_prev; \ (a_qr_b)->a_field.qre_prev = t; \ } while (0) /* qr_meld() and qr_split() are functionally equivalent, so there's no need to * have two copies of the code. */ #define qr_split(a_qr_a, a_qr_b, a_field) \ qr_meld((a_qr_a), (a_qr_b), a_field) #define qr_remove(a_qr, a_field) do { \ (a_qr)->a_field.qre_prev->a_field.qre_next \ = (a_qr)->a_field.qre_next; \ (a_qr)->a_field.qre_next->a_field.qre_prev \ = (a_qr)->a_field.qre_prev; \ (a_qr)->a_field.qre_next = (a_qr); \ (a_qr)->a_field.qre_prev = (a_qr); \ } while (0) #define qr_foreach(var, a_qr, a_field) \ for ((var) = (a_qr); \ (var) != NULL; \ (var) = (((var)->a_field.qre_next != (a_qr)) \ ? (var)->a_field.qre_next : NULL)) #define qr_reverse_foreach(var, a_qr, a_field) \ for ((var) = ((a_qr) != NULL) ? 
qr_prev(a_qr, a_field) : NULL; \ (var) != NULL; \ (var) = (((var) != (a_qr)) \ ? (var)->a_field.qre_prev : NULL)) vmem-1.8/src/jemalloc/include/jemalloc/internal/quarantine.h000066400000000000000000000034711361505074100242570ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct quarantine_obj_s quarantine_obj_t; typedef struct quarantine_s quarantine_t; /* Default per thread quarantine size if valgrind is enabled. */ #define JEMALLOC_VALGRIND_QUARANTINE_DEFAULT (ZU(1) << 24) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct quarantine_obj_s { void *ptr; size_t usize; }; struct quarantine_s { size_t curbytes; size_t curobjs; size_t first; #define LG_MAXOBJS_INIT 10 size_t lg_maxobjs; quarantine_obj_t objs[1]; /* Dynamically sized ring buffer. */ }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS quarantine_t *quarantine_init(size_t lg_maxobjs); void quarantine(void *ptr); void quarantine_cleanup(void *arg); bool quarantine_boot(void); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE malloc_tsd_protos(JEMALLOC_ATTR(unused), quarantine, quarantine_t *) void quarantine_alloc_hook(void); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_QUARANTINE_C_)) malloc_tsd_externs(quarantine, quarantine_t *) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, quarantine, quarantine_t *, NULL, quarantine_cleanup) JEMALLOC_ALWAYS_INLINE void quarantine_alloc_hook(void) { quarantine_t *quarantine; assert(config_fill && opt_quarantine); quarantine = *quarantine_tsd_get(); if (quarantine == NULL) quarantine_init(LG_MAXOBJS_INIT); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/rb.h000066400000000000000000001105501361505074100225100ustar00rootroot00000000000000/*- ******************************************************************************* * * cpp macro implementation of left-leaning 2-3 red-black trees. Parent * pointers are not used, and color bits are stored in the least significant * bit of right-child pointers (if RB_COMPACT is defined), thus making node * linkage as compact as is possible for red-black trees. * * Usage: * * #include * #include * #define NDEBUG // (Optional, see assert(3).) * #include * #define RB_COMPACT // (Optional, embed color bits in right-child pointers.) * #include * ... * ******************************************************************************* */ #ifndef RB_H_ #define RB_H_ /* XXX Avoid super-slow compile with older versions of clang */ #define NOSANITIZE #if (__clang_major__ == 3 && __clang_minor__ < 9) #if __has_attribute(__no_sanitize__) #undef NOSANITIZE #define NOSANITIZE __attribute__((no_sanitize("undefined"))) #endif #endif #ifdef RB_COMPACT /* Node structure. */ #define rb_node(a_type) \ struct { \ a_type *rbn_left; \ a_type *rbn_right_red; \ } #else #define rb_node(a_type) \ struct { \ a_type *rbn_left; \ a_type *rbn_right; \ bool rbn_red; \ } #endif /* Root structure. */ #define rb_tree(a_type) \ struct { \ a_type *rbt_root; \ a_type rbt_nil; \ } /* Left accessors. 
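/*
 * Usage sketch for the ql_()/qr_() intrusive list macros defined above.
 * It assumes qr.h and ql.h can be included standalone (in-tree they are
 * pulled in through the internal jemalloc header), and widget_t is just an
 * illustrative type.
 */
#include <stdio.h>
#include "qr.h"
#include "ql.h"

typedef struct widget_s widget_t;
struct widget_s {
	int id;
	ql_elm(widget_t) link;		/* intrusive linkage */
};

int
main(void)
{
	ql_head(widget_t) head;
	widget_t a = {1}, b = {2}, c = {3};
	widget_t *w;

	ql_new(&head);
	ql_elm_new(&a, link);
	ql_elm_new(&b, link);
	ql_elm_new(&c, link);

	ql_tail_insert(&head, &a, link);
	ql_tail_insert(&head, &b, link);
	ql_after_insert(&a, &c, link);		/* list is now a, c, b */

	ql_foreach(w, &head, link) {
		printf("%d\n", w->id);		/* prints 1 3 2 */
	}

	ql_remove(&head, &c, link);
	return (0);
}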
*/ #define rbtn_left_get(a_type, a_field, a_node) \ ((a_node)->a_field.rbn_left) #define rbtn_left_set(a_type, a_field, a_node, a_left) do { \ (a_node)->a_field.rbn_left = a_left; \ } while (0) #ifdef RB_COMPACT /* Right accessors. */ #define rbtn_right_get(a_type, a_field, a_node) \ ((a_type *) (((intptr_t) (a_node)->a_field.rbn_right_red) \ & ((ssize_t)-2))) #define rbtn_right_set(a_type, a_field, a_node, a_right) do { \ (a_node)->a_field.rbn_right_red = (a_type *) (((uintptr_t) a_right) \ | (((uintptr_t) (a_node)->a_field.rbn_right_red) & ((size_t)1))); \ } while (0) /* Color accessors. */ #define rbtn_red_get(a_type, a_field, a_node) \ ((bool) (((uintptr_t) (a_node)->a_field.rbn_right_red) \ & ((size_t)1))) #define rbtn_color_set(a_type, a_field, a_node, a_red) do { \ (a_node)->a_field.rbn_right_red = (a_type *) ((((intptr_t) \ (a_node)->a_field.rbn_right_red) & ((ssize_t)-2)) \ | ((ssize_t)a_red)); \ } while (0) #define rbtn_red_set(a_type, a_field, a_node) do { \ (a_node)->a_field.rbn_right_red = (a_type *) (((uintptr_t) \ (a_node)->a_field.rbn_right_red) | ((size_t)1)); \ } while (0) #define rbtn_black_set(a_type, a_field, a_node) do { \ (a_node)->a_field.rbn_right_red = (a_type *) (((intptr_t) \ (a_node)->a_field.rbn_right_red) & ((ssize_t)-2)); \ } while (0) #else /* Right accessors. */ #define rbtn_right_get(a_type, a_field, a_node) \ ((a_node)->a_field.rbn_right) #define rbtn_right_set(a_type, a_field, a_node, a_right) do { \ (a_node)->a_field.rbn_right = a_right; \ } while (0) /* Color accessors. */ #define rbtn_red_get(a_type, a_field, a_node) \ ((a_node)->a_field.rbn_red) #define rbtn_color_set(a_type, a_field, a_node, a_red) do { \ (a_node)->a_field.rbn_red = (a_red); \ } while (0) #define rbtn_red_set(a_type, a_field, a_node) do { \ (a_node)->a_field.rbn_red = true; \ } while (0) #define rbtn_black_set(a_type, a_field, a_node) do { \ (a_node)->a_field.rbn_red = false; \ } while (0) #endif /* Node initializer. */ #define rbt_node_new(a_type, a_field, a_rbt, a_node) do { \ rbtn_left_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \ rbtn_right_set(a_type, a_field, (a_node), &(a_rbt)->rbt_nil); \ rbtn_red_set(a_type, a_field, (a_node)); \ } while (0) /* Tree initializer. */ #define rb_new(a_type, a_field, a_rbt) do { \ (a_rbt)->rbt_root = &(a_rbt)->rbt_nil; \ rbt_node_new(a_type, a_field, a_rbt, &(a_rbt)->rbt_nil); \ rbtn_black_set(a_type, a_field, &(a_rbt)->rbt_nil); \ } while (0) /* Internal utility macros. 
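/*
 * Standalone sketch of the RB_COMPACT trick used by the accessors above:
 * because nodes are at least pointer-aligned, bit 0 of the right-child
 * pointer is always zero and can carry the node colour.  The demo_* names
 * are illustrative, and a plain uintptr_t field stands in for rbn_right_red.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct demo_node_s demo_node_t;
struct demo_node_s {
	demo_node_t *left;
	uintptr_t right_red;	/* right pointer with the colour in bit 0 */
};

static demo_node_t *
demo_right_get(const demo_node_t *n)
{
	return ((demo_node_t *)(n->right_red & ~(uintptr_t)1));
}

static bool
demo_red_get(const demo_node_t *n)
{
	return ((n->right_red & (uintptr_t)1) != 0);
}

static void
demo_right_set(demo_node_t *n, demo_node_t *right)
{
	assert(((uintptr_t)right & 1) == 0);	/* alignment keeps bit 0 free */
	n->right_red = (uintptr_t)right | (n->right_red & (uintptr_t)1);
}

static void
demo_red_set(demo_node_t *n, bool red)
{
	n->right_red = (n->right_red & ~(uintptr_t)1) | (uintptr_t)red;
}

int
main(void)
{
	demo_node_t a = {0, 0}, b = {0, 0};

	demo_right_set(&a, &b);
	demo_red_set(&a, true);
	printf("right ok: %d, red: %d\n", demo_right_get(&a) == &b,
	    demo_red_get(&a));
	return (0);
}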
*/ #define rbtn_first(a_type, a_field, a_rbt, a_root, r_node) do { \ (r_node) = (a_root); \ if ((r_node) != &(a_rbt)->rbt_nil) { \ for (; \ rbtn_left_get(a_type, a_field, (r_node)) != &(a_rbt)->rbt_nil;\ (r_node) = rbtn_left_get(a_type, a_field, (r_node))) { \ } \ } \ } while (0) #define rbtn_last(a_type, a_field, a_rbt, a_root, r_node) do { \ (r_node) = (a_root); \ if ((r_node) != &(a_rbt)->rbt_nil) { \ for (; rbtn_right_get(a_type, a_field, (r_node)) != \ &(a_rbt)->rbt_nil; (r_node) = rbtn_right_get(a_type, a_field, \ (r_node))) { \ } \ } \ } while (0) #define rbtn_rotate_left(a_type, a_field, a_node, r_node) do { \ (r_node) = rbtn_right_get(a_type, a_field, (a_node)); \ rbtn_right_set(a_type, a_field, (a_node), \ rbtn_left_get(a_type, a_field, (r_node))); \ rbtn_left_set(a_type, a_field, (r_node), (a_node)); \ } while (0) #define rbtn_rotate_right(a_type, a_field, a_node, r_node) do { \ (r_node) = rbtn_left_get(a_type, a_field, (a_node)); \ rbtn_left_set(a_type, a_field, (a_node), \ rbtn_right_get(a_type, a_field, (r_node))); \ rbtn_right_set(a_type, a_field, (r_node), (a_node)); \ } while (0) /* * The rb_proto() macro generates function prototypes that correspond to the * functions generated by an equivalently parameterized call to rb_gen(). */ #define rb_proto(a_attr, a_prefix, a_rbt_type, a_type) \ a_attr void \ a_prefix##new(a_rbt_type *rbtree); \ a_attr a_type * \ a_prefix##first(a_rbt_type *rbtree); \ a_attr a_type * \ a_prefix##last(a_rbt_type *rbtree); \ a_attr a_type * \ a_prefix##next(a_rbt_type *rbtree, a_type *node); \ a_attr a_type * \ a_prefix##prev(a_rbt_type *rbtree, a_type *node); \ a_attr a_type * \ a_prefix##search(a_rbt_type *rbtree, a_type *key); \ a_attr a_type * \ a_prefix##nsearch(a_rbt_type *rbtree, a_type *key); \ a_attr a_type * \ a_prefix##psearch(a_rbt_type *rbtree, a_type *key); \ a_attr void \ a_prefix##insert(a_rbt_type *rbtree, a_type *node); \ a_attr void \ a_prefix##remove(a_rbt_type *rbtree, a_type *node); \ a_attr a_type * \ a_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)( \ a_rbt_type *, a_type *, void *), void *arg); \ a_attr a_type * \ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \ a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg); /* * The rb_gen() macro generates a type-specific red-black tree implementation, * based on the above cpp macros. * * Arguments: * * a_attr : Function attribute for generated functions (ex: static). * a_prefix : Prefix for generated functions (ex: ex_). * a_rb_type : Type for red-black tree data structure (ex: ex_t). * a_type : Type for red-black tree node data structure (ex: ex_node_t). * a_field : Name of red-black tree node linkage (ex: ex_link). * a_cmp : Node comparison function name, with the following prototype: * int (a_cmp *)(a_type *a_node, a_type *a_other); * ^^^^^^ * or a_key * Interpretation of comparison function return values: * -1 : a_node < a_other * 0 : a_node == a_other * 1 : a_node > a_other * In all cases, the a_node or a_key macro argument is the first * argument to the comparison function, which makes it possible * to write comparison functions that treat the first argument * specially. * * Assuming the following setup: * * typedef struct ex_node_s ex_node_t; * struct ex_node_s { * rb_node(ex_node_t) ex_link; * }; * typedef rb_tree(ex_node_t) ex_t; * rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp) * * The following API is generated: * * static void * ex_new(ex_t *tree); * Description: Initialize a red-black tree structure. 
* Args: * tree: Pointer to an uninitialized red-black tree object. * * static ex_node_t * * ex_first(ex_t *tree); * static ex_node_t * * ex_last(ex_t *tree); * Description: Get the first/last node in tree. * Args: * tree: Pointer to an initialized red-black tree object. * Ret: First/last node in tree, or NULL if tree is empty. * * static ex_node_t * * ex_next(ex_t *tree, ex_node_t *node); * static ex_node_t * * ex_prev(ex_t *tree, ex_node_t *node); * Description: Get node's successor/predecessor. * Args: * tree: Pointer to an initialized red-black tree object. * node: A node in tree. * Ret: node's successor/predecessor in tree, or NULL if node is * last/first. * * static ex_node_t * * ex_search(ex_t *tree, ex_node_t *key); * Description: Search for node that matches key. * Args: * tree: Pointer to an initialized red-black tree object. * key : Search key. * Ret: Node in tree that matches key, or NULL if no match. * * static ex_node_t * * ex_nsearch(ex_t *tree, ex_node_t *key); * static ex_node_t * * ex_psearch(ex_t *tree, ex_node_t *key); * Description: Search for node that matches key. If no match is found, * return what would be key's successor/predecessor, were * key in tree. * Args: * tree: Pointer to an initialized red-black tree object. * key : Search key. * Ret: Node in tree that matches key, or if no match, hypothetical node's * successor/predecessor (NULL if no successor/predecessor). * * static void * ex_insert(ex_t *tree, ex_node_t *node); * Description: Insert node into tree. * Args: * tree: Pointer to an initialized red-black tree object. * node: Node to be inserted into tree. * * static void * ex_remove(ex_t *tree, ex_node_t *node); * Description: Remove node from tree. * Args: * tree: Pointer to an initialized red-black tree object. * node: Node in tree to be removed. * * static ex_node_t * * ex_iter(ex_t *tree, ex_node_t *start, ex_node_t *(*cb)(ex_t *, * ex_node_t *, void *), void *arg); * static ex_node_t * * ex_reverse_iter(ex_t *tree, ex_node_t *start, ex_node *(*cb)(ex_t *, * ex_node_t *, void *), void *arg); * Description: Iterate forward/backward over tree, starting at node. If * tree is modified, iteration must be immediately * terminated by the callback function that causes the * modification. * Args: * tree : Pointer to an initialized red-black tree object. * start: Node at which to start iteration, or NULL to start at * first/last node. * cb : Callback function, which is called for each node during * iteration. Under normal circumstances the callback function * should return NULL, which causes iteration to continue. If a * callback function returns non-NULL, iteration is immediately * terminated and the non-NULL return value is returned by the * iterator. This is useful for re-starting iteration after * modifying tree. * arg : Opaque pointer passed to cb(). * Ret: NULL if iteration completed, or the non-NULL callback return value * that caused termination of the iteration. 
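/*
 * A compilable version of the ex_ sketch from the comment above, assuming
 * rb.h can be included standalone as "rb.h" on a POSIX system (it uses
 * ssize_t), and using a deliberately trivial integer key.  Keys must be
 * distinct: the generated insert asserts that the comparator never returns 0.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#define RB_COMPACT
#include "rb.h"

typedef struct ex_node_s ex_node_t;
struct ex_node_s {
	int key;
	rb_node(ex_node_t) ex_link;
};
typedef rb_tree(ex_node_t) ex_t;

static int
ex_cmp(ex_node_t *a, ex_node_t *b)
{
	return ((a->key > b->key) - (a->key < b->key));
}

rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp)

int
main(void)
{
	ex_t tree;
	ex_node_t nodes[4] = {{42}, {7}, {13}, {99}};
	ex_node_t *n;
	int i;

	ex_new(&tree);
	for (i = 0; i < 4; i++)
		ex_insert(&tree, &nodes[i]);

	for (n = ex_first(&tree); n != NULL; n = ex_next(&tree, n))
		printf("%d\n", n->key);		/* 7 13 42 99, in key order */

	ex_remove(&tree, &nodes[0]);
	assert(ex_search(&tree, &nodes[0]) == NULL);
	return (0);
}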
*/ #define rb_gen(a_attr, a_prefix, a_rbt_type, a_type, a_field, a_cmp) \ a_attr void \ a_prefix##new(a_rbt_type *rbtree) { \ rb_new(a_type, a_field, rbtree); \ } \ a_attr a_type * \ a_prefix##first(a_rbt_type *rbtree) { \ a_type *ret; \ rbtn_first(a_type, a_field, rbtree, rbtree->rbt_root, ret); \ if (ret == &rbtree->rbt_nil) { \ ret = NULL; \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##last(a_rbt_type *rbtree) { \ a_type *ret; \ rbtn_last(a_type, a_field, rbtree, rbtree->rbt_root, ret); \ if (ret == &rbtree->rbt_nil) { \ ret = NULL; \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##next(a_rbt_type *rbtree, a_type *node) { \ a_type *ret; \ if (rbtn_right_get(a_type, a_field, node) != &rbtree->rbt_nil) { \ rbtn_first(a_type, a_field, rbtree, rbtn_right_get(a_type, \ a_field, node), ret); \ } else { \ a_type *tnode = rbtree->rbt_root; \ assert(tnode != &rbtree->rbt_nil); \ ret = &rbtree->rbt_nil; \ while (true) { \ int cmp = (a_cmp)(node, tnode); \ if (cmp < 0) { \ ret = tnode; \ tnode = rbtn_left_get(a_type, a_field, tnode); \ } else if (cmp > 0) { \ tnode = rbtn_right_get(a_type, a_field, tnode); \ } else { \ break; \ } \ assert(tnode != &rbtree->rbt_nil); \ } \ } \ if (ret == &rbtree->rbt_nil) { \ ret = (NULL); \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##prev(a_rbt_type *rbtree, a_type *node) { \ a_type *ret; \ if (rbtn_left_get(a_type, a_field, node) != &rbtree->rbt_nil) { \ rbtn_last(a_type, a_field, rbtree, rbtn_left_get(a_type, \ a_field, node), ret); \ } else { \ a_type *tnode = rbtree->rbt_root; \ assert(tnode != &rbtree->rbt_nil); \ ret = &rbtree->rbt_nil; \ while (true) { \ int cmp = (a_cmp)(node, tnode); \ if (cmp < 0) { \ tnode = rbtn_left_get(a_type, a_field, tnode); \ } else if (cmp > 0) { \ ret = tnode; \ tnode = rbtn_right_get(a_type, a_field, tnode); \ } else { \ break; \ } \ assert(tnode != &rbtree->rbt_nil); \ } \ } \ if (ret == &rbtree->rbt_nil) { \ ret = (NULL); \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##search(a_rbt_type *rbtree, a_type *key) { \ a_type *ret; \ int cmp; \ ret = rbtree->rbt_root; \ while (ret != &rbtree->rbt_nil \ && (cmp = (a_cmp)(key, ret)) != 0) { \ if (cmp < 0) { \ ret = rbtn_left_get(a_type, a_field, ret); \ } else { \ ret = rbtn_right_get(a_type, a_field, ret); \ } \ } \ if (ret == &rbtree->rbt_nil) { \ ret = (NULL); \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##nsearch(a_rbt_type *rbtree, a_type *key) { \ a_type *ret; \ a_type *tnode = rbtree->rbt_root; \ ret = &rbtree->rbt_nil; \ while (tnode != &rbtree->rbt_nil) { \ int cmp = (a_cmp)(key, tnode); \ if (cmp < 0) { \ ret = tnode; \ tnode = rbtn_left_get(a_type, a_field, tnode); \ } else if (cmp > 0) { \ tnode = rbtn_right_get(a_type, a_field, tnode); \ } else { \ ret = tnode; \ break; \ } \ } \ if (ret == &rbtree->rbt_nil) { \ ret = (NULL); \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##psearch(a_rbt_type *rbtree, a_type *key) { \ a_type *ret; \ a_type *tnode = rbtree->rbt_root; \ ret = &rbtree->rbt_nil; \ while (tnode != &rbtree->rbt_nil) { \ int cmp = (a_cmp)(key, tnode); \ if (cmp < 0) { \ tnode = rbtn_left_get(a_type, a_field, tnode); \ } else if (cmp > 0) { \ ret = tnode; \ tnode = rbtn_right_get(a_type, a_field, tnode); \ } else { \ ret = tnode; \ break; \ } \ } \ if (ret == &rbtree->rbt_nil) { \ ret = (NULL); \ } \ return (ret); \ } \ a_attr void \ a_prefix##insert(a_rbt_type *rbtree, a_type *node) { \ struct { \ a_type *node; \ int cmp; \ } path[sizeof(void *) << 4], *pathp; \ rbt_node_new(a_type, a_field, rbtree, node); \ /* Wind. 
*/ \ path->node = rbtree->rbt_root; \ for (pathp = path; pathp->node != &rbtree->rbt_nil; pathp++) { \ int cmp = pathp->cmp = a_cmp(node, pathp->node); \ assert(cmp != 0); \ if (cmp < 0) { \ pathp[1].node = rbtn_left_get(a_type, a_field, \ pathp->node); \ } else { \ pathp[1].node = rbtn_right_get(a_type, a_field, \ pathp->node); \ } \ } \ pathp->node = node; \ /* Unwind. */ \ for (pathp--; (uintptr_t)pathp >= (uintptr_t)path; pathp--) { \ a_type *cnode = pathp->node; \ if (pathp->cmp < 0) { \ a_type *left = pathp[1].node; \ rbtn_left_set(a_type, a_field, cnode, left); \ if (rbtn_red_get(a_type, a_field, left)) { \ a_type *leftleft = rbtn_left_get(a_type, a_field, left);\ if (rbtn_red_get(a_type, a_field, leftleft)) { \ /* Fix up 4-node. */ \ a_type *tnode; \ rbtn_black_set(a_type, a_field, leftleft); \ rbtn_rotate_right(a_type, a_field, cnode, tnode); \ cnode = tnode; \ } \ } else { \ return; \ } \ } else { \ a_type *right = pathp[1].node; \ rbtn_right_set(a_type, a_field, cnode, right); \ if (rbtn_red_get(a_type, a_field, right)) { \ a_type *left = rbtn_left_get(a_type, a_field, cnode); \ if (rbtn_red_get(a_type, a_field, left)) { \ /* Split 4-node. */ \ rbtn_black_set(a_type, a_field, left); \ rbtn_black_set(a_type, a_field, right); \ rbtn_red_set(a_type, a_field, cnode); \ } else { \ /* Lean left. */ \ a_type *tnode; \ bool tred = rbtn_red_get(a_type, a_field, cnode); \ rbtn_rotate_left(a_type, a_field, cnode, tnode); \ rbtn_color_set(a_type, a_field, tnode, tred); \ rbtn_red_set(a_type, a_field, cnode); \ cnode = tnode; \ } \ } else { \ return; \ } \ } \ pathp->node = cnode; \ } \ /* Set root, and make it black. */ \ rbtree->rbt_root = path->node; \ rbtn_black_set(a_type, a_field, rbtree->rbt_root); \ } \ a_attr void NOSANITIZE \ a_prefix##remove(a_rbt_type *rbtree, a_type *node) { \ struct { \ a_type *node; \ int cmp; \ } *pathp, *nodep, path[sizeof(void *) << 4]; \ /* Wind. */ \ nodep = NULL; /* Silence compiler warning. */ \ path->node = rbtree->rbt_root; \ for (pathp = path; pathp->node != &rbtree->rbt_nil; pathp++) { \ int cmp = pathp->cmp = a_cmp(node, pathp->node); \ if (cmp < 0) { \ pathp[1].node = rbtn_left_get(a_type, a_field, \ pathp->node); \ } else { \ pathp[1].node = rbtn_right_get(a_type, a_field, \ pathp->node); \ if (cmp == 0) { \ /* Find node's successor, in preparation for swap. */ \ pathp->cmp = 1; \ nodep = pathp; \ for (pathp++; pathp->node != &rbtree->rbt_nil; \ pathp++) { \ pathp->cmp = -1; \ pathp[1].node = rbtn_left_get(a_type, a_field, \ pathp->node); \ } \ break; \ } \ } \ } \ assert(nodep->node == node); \ pathp--; \ if (pathp->node != node) { \ /* Swap node with its successor. */ \ bool tred = rbtn_red_get(a_type, a_field, pathp->node); \ rbtn_color_set(a_type, a_field, pathp->node, \ rbtn_red_get(a_type, a_field, node)); \ rbtn_left_set(a_type, a_field, pathp->node, \ rbtn_left_get(a_type, a_field, node)); \ /* If node's successor is its right child, the following code */\ /* will do the wrong thing for the right child pointer. */\ /* However, it doesn't matter, because the pointer will be */\ /* properly set when the successor is pruned. */\ rbtn_right_set(a_type, a_field, pathp->node, \ rbtn_right_get(a_type, a_field, node)); \ rbtn_color_set(a_type, a_field, node, tred); \ /* The pruned leaf node's child pointers are never accessed */\ /* again, so don't bother setting them to nil. 
*/\ nodep->node = pathp->node; \ pathp->node = node; \ if (nodep == path) { \ rbtree->rbt_root = nodep->node; \ } else { \ if (nodep[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, nodep[-1].node, \ nodep->node); \ } else { \ rbtn_right_set(a_type, a_field, nodep[-1].node, \ nodep->node); \ } \ } \ } else { \ a_type *left = rbtn_left_get(a_type, a_field, node); \ if (left != &rbtree->rbt_nil) { \ /* node has no successor, but it has a left child. */\ /* Splice node out, without losing the left child. */\ assert(rbtn_red_get(a_type, a_field, node) == false); \ assert(rbtn_red_get(a_type, a_field, left)); \ rbtn_black_set(a_type, a_field, left); \ if (pathp == path) { \ rbtree->rbt_root = left; \ } else { \ if (pathp[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, pathp[-1].node, \ left); \ } else { \ rbtn_right_set(a_type, a_field, pathp[-1].node, \ left); \ } \ } \ return; \ } else if (pathp == path) { \ /* The tree only contained one node. */ \ rbtree->rbt_root = &rbtree->rbt_nil; \ return; \ } \ } \ if (rbtn_red_get(a_type, a_field, pathp->node)) { \ /* Prune red node, which requires no fixup. */ \ assert(pathp[-1].cmp < 0); \ rbtn_left_set(a_type, a_field, pathp[-1].node, \ &rbtree->rbt_nil); \ return; \ } \ /* The node to be pruned is black, so unwind until balance is */\ /* restored. */\ pathp->node = &rbtree->rbt_nil; \ for (pathp--; (uintptr_t)pathp >= (uintptr_t)path; pathp--) { \ assert(pathp->cmp != 0); \ if (pathp->cmp < 0) { \ rbtn_left_set(a_type, a_field, pathp->node, \ pathp[1].node); \ assert(rbtn_red_get(a_type, a_field, pathp[1].node) \ == false); \ if (rbtn_red_get(a_type, a_field, pathp->node)) { \ a_type *right = rbtn_right_get(a_type, a_field, \ pathp->node); \ a_type *rightleft = rbtn_left_get(a_type, a_field, \ right); \ a_type *tnode; \ if (rbtn_red_get(a_type, a_field, rightleft)) { \ /* In the following diagrams, ||, //, and \\ */\ /* indicate the path to the removed node. */\ /* */\ /* || */\ /* pathp(r) */\ /* // \ */\ /* (b) (b) */\ /* / */\ /* (r) */\ /* */\ rbtn_black_set(a_type, a_field, pathp->node); \ rbtn_rotate_right(a_type, a_field, right, tnode); \ rbtn_right_set(a_type, a_field, pathp->node, tnode);\ rbtn_rotate_left(a_type, a_field, pathp->node, \ tnode); \ } else { \ /* || */\ /* pathp(r) */\ /* // \ */\ /* (b) (b) */\ /* / */\ /* (b) */\ /* */\ rbtn_rotate_left(a_type, a_field, pathp->node, \ tnode); \ } \ /* Balance restored, but rotation modified subtree */\ /* root. */\ assert((uintptr_t)pathp > (uintptr_t)path); \ if (pathp[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, pathp[-1].node, \ tnode); \ } else { \ rbtn_right_set(a_type, a_field, pathp[-1].node, \ tnode); \ } \ return; \ } else { \ a_type *right = rbtn_right_get(a_type, a_field, \ pathp->node); \ a_type *rightleft = rbtn_left_get(a_type, a_field, \ right); \ if (rbtn_red_get(a_type, a_field, rightleft)) { \ /* || */\ /* pathp(b) */\ /* // \ */\ /* (b) (b) */\ /* / */\ /* (r) */\ a_type *tnode; \ rbtn_black_set(a_type, a_field, rightleft); \ rbtn_rotate_right(a_type, a_field, right, tnode); \ rbtn_right_set(a_type, a_field, pathp->node, tnode);\ rbtn_rotate_left(a_type, a_field, pathp->node, \ tnode); \ /* Balance restored, but rotation modified */\ /* subree root, which may actually be the tree */\ /* root. */\ if (pathp == path) { \ /* Set root. 
*/ \ rbtree->rbt_root = tnode; \ } else { \ if (pathp[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, \ pathp[-1].node, tnode); \ } else { \ rbtn_right_set(a_type, a_field, \ pathp[-1].node, tnode); \ } \ } \ return; \ } else { \ /* || */\ /* pathp(b) */\ /* // \ */\ /* (b) (b) */\ /* / */\ /* (b) */\ a_type *tnode; \ rbtn_red_set(a_type, a_field, pathp->node); \ rbtn_rotate_left(a_type, a_field, pathp->node, \ tnode); \ pathp->node = tnode; \ } \ } \ } else { \ a_type *left; \ rbtn_right_set(a_type, a_field, pathp->node, \ pathp[1].node); \ left = rbtn_left_get(a_type, a_field, pathp->node); \ if (rbtn_red_get(a_type, a_field, left)) { \ a_type *tnode; \ a_type *leftright = rbtn_right_get(a_type, a_field, \ left); \ a_type *leftrightleft = rbtn_left_get(a_type, a_field, \ leftright); \ if (rbtn_red_get(a_type, a_field, leftrightleft)) { \ /* || */\ /* pathp(b) */\ /* / \\ */\ /* (r) (b) */\ /* \ */\ /* (b) */\ /* / */\ /* (r) */\ a_type *unode; \ rbtn_black_set(a_type, a_field, leftrightleft); \ rbtn_rotate_right(a_type, a_field, pathp->node, \ unode); \ rbtn_rotate_right(a_type, a_field, pathp->node, \ tnode); \ rbtn_right_set(a_type, a_field, unode, tnode); \ rbtn_rotate_left(a_type, a_field, unode, tnode); \ } else { \ /* || */\ /* pathp(b) */\ /* / \\ */\ /* (r) (b) */\ /* \ */\ /* (b) */\ /* / */\ /* (b) */\ assert(leftright != &rbtree->rbt_nil); \ rbtn_red_set(a_type, a_field, leftright); \ rbtn_rotate_right(a_type, a_field, pathp->node, \ tnode); \ rbtn_black_set(a_type, a_field, tnode); \ } \ /* Balance restored, but rotation modified subtree */\ /* root, which may actually be the tree root. */\ if (pathp == path) { \ /* Set root. */ \ rbtree->rbt_root = tnode; \ } else { \ if (pathp[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, pathp[-1].node, \ tnode); \ } else { \ rbtn_right_set(a_type, a_field, pathp[-1].node, \ tnode); \ } \ } \ return; \ } else if (rbtn_red_get(a_type, a_field, pathp->node)) { \ a_type *leftleft = rbtn_left_get(a_type, a_field, left);\ if (rbtn_red_get(a_type, a_field, leftleft)) { \ /* || */\ /* pathp(r) */\ /* / \\ */\ /* (b) (b) */\ /* / */\ /* (r) */\ a_type *tnode; \ rbtn_black_set(a_type, a_field, pathp->node); \ rbtn_red_set(a_type, a_field, left); \ rbtn_black_set(a_type, a_field, leftleft); \ rbtn_rotate_right(a_type, a_field, pathp->node, \ tnode); \ /* Balance restored, but rotation modified */\ /* subtree root. */\ assert((uintptr_t)pathp > (uintptr_t)path); \ if (pathp[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, pathp[-1].node, \ tnode); \ } else { \ rbtn_right_set(a_type, a_field, pathp[-1].node, \ tnode); \ } \ return; \ } else { \ /* || */\ /* pathp(r) */\ /* / \\ */\ /* (b) (b) */\ /* / */\ /* (b) */\ rbtn_red_set(a_type, a_field, left); \ rbtn_black_set(a_type, a_field, pathp->node); \ /* Balance restored. */ \ return; \ } \ } else { \ a_type *leftleft = rbtn_left_get(a_type, a_field, left);\ if (rbtn_red_get(a_type, a_field, leftleft)) { \ /* || */\ /* pathp(b) */\ /* / \\ */\ /* (b) (b) */\ /* / */\ /* (r) */\ a_type *tnode; \ rbtn_black_set(a_type, a_field, leftleft); \ rbtn_rotate_right(a_type, a_field, pathp->node, \ tnode); \ /* Balance restored, but rotation modified */\ /* subtree root, which may actually be the tree */\ /* root. */\ if (pathp == path) { \ /* Set root. 
*/ \ rbtree->rbt_root = tnode; \ } else { \ if (pathp[-1].cmp < 0) { \ rbtn_left_set(a_type, a_field, \ pathp[-1].node, tnode); \ } else { \ rbtn_right_set(a_type, a_field, \ pathp[-1].node, tnode); \ } \ } \ return; \ } else { \ /* || */\ /* pathp(b) */\ /* / \\ */\ /* (b) (b) */\ /* / */\ /* (b) */\ rbtn_red_set(a_type, a_field, left); \ } \ } \ } \ } \ /* Set root. */ \ rbtree->rbt_root = path->node; \ assert(rbtn_red_get(a_type, a_field, rbtree->rbt_root) == false); \ } \ a_attr a_type * \ a_prefix##iter_recurse(a_rbt_type *rbtree, a_type *node, \ a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) { \ if (node == &rbtree->rbt_nil) { \ return (&rbtree->rbt_nil); \ } else { \ a_type *ret; \ if ((ret = a_prefix##iter_recurse(rbtree, rbtn_left_get(a_type, \ a_field, node), cb, arg)) != &rbtree->rbt_nil \ || (ret = cb(rbtree, node, arg)) != NULL) { \ return (ret); \ } \ return (a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type, \ a_field, node), cb, arg)); \ } \ } \ a_attr a_type * \ a_prefix##iter_start(a_rbt_type *rbtree, a_type *start, a_type *node, \ a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) { \ int cmp = a_cmp(start, node); \ if (cmp < 0) { \ a_type *ret; \ if ((ret = a_prefix##iter_start(rbtree, start, \ rbtn_left_get(a_type, a_field, node), cb, arg)) != \ &rbtree->rbt_nil || (ret = cb(rbtree, node, arg)) != NULL) { \ return (ret); \ } \ return (a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type, \ a_field, node), cb, arg)); \ } else if (cmp > 0) { \ return (a_prefix##iter_start(rbtree, start, \ rbtn_right_get(a_type, a_field, node), cb, arg)); \ } else { \ a_type *ret; \ if ((ret = cb(rbtree, node, arg)) != NULL) { \ return (ret); \ } \ return (a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type, \ a_field, node), cb, arg)); \ } \ } \ a_attr a_type * \ a_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)( \ a_rbt_type *, a_type *, void *), void *arg) { \ a_type *ret; \ if (start != NULL) { \ ret = a_prefix##iter_start(rbtree, start, rbtree->rbt_root, \ cb, arg); \ } else { \ ret = a_prefix##iter_recurse(rbtree, rbtree->rbt_root, cb, arg);\ } \ if (ret == &rbtree->rbt_nil) { \ ret = NULL; \ } \ return (ret); \ } \ a_attr a_type * \ a_prefix##reverse_iter_recurse(a_rbt_type *rbtree, a_type *node, \ a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) { \ if (node == &rbtree->rbt_nil) { \ return (&rbtree->rbt_nil); \ } else { \ a_type *ret; \ if ((ret = a_prefix##reverse_iter_recurse(rbtree, \ rbtn_right_get(a_type, a_field, node), cb, arg)) != \ &rbtree->rbt_nil || (ret = cb(rbtree, node, arg)) != NULL) { \ return (ret); \ } \ return (a_prefix##reverse_iter_recurse(rbtree, \ rbtn_left_get(a_type, a_field, node), cb, arg)); \ } \ } \ a_attr a_type * \ a_prefix##reverse_iter_start(a_rbt_type *rbtree, a_type *start, \ a_type *node, a_type *(*cb)(a_rbt_type *, a_type *, void *), \ void *arg) { \ int cmp = a_cmp(start, node); \ if (cmp > 0) { \ a_type *ret; \ if ((ret = a_prefix##reverse_iter_start(rbtree, start, \ rbtn_right_get(a_type, a_field, node), cb, arg)) != \ &rbtree->rbt_nil || (ret = cb(rbtree, node, arg)) != NULL) { \ return (ret); \ } \ return (a_prefix##reverse_iter_recurse(rbtree, \ rbtn_left_get(a_type, a_field, node), cb, arg)); \ } else if (cmp < 0) { \ return (a_prefix##reverse_iter_start(rbtree, start, \ rbtn_left_get(a_type, a_field, node), cb, arg)); \ } else { \ a_type *ret; \ if ((ret = cb(rbtree, node, arg)) != NULL) { \ return (ret); \ } \ return (a_prefix##reverse_iter_recurse(rbtree, \ rbtn_left_get(a_type, a_field, 
node), cb, arg)); \ } \ } \ a_attr a_type * \ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \ a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) { \ a_type *ret; \ if (start != NULL) { \ ret = a_prefix##reverse_iter_start(rbtree, start, \ rbtree->rbt_root, cb, arg); \ } else { \ ret = a_prefix##reverse_iter_recurse(rbtree, rbtree->rbt_root, \ cb, arg); \ } \ if (ret == &rbtree->rbt_nil) { \ ret = NULL; \ } \ return (ret); \ } #endif /* RB_H_ */ vmem-1.8/src/jemalloc/include/jemalloc/internal/rtree.h000066400000000000000000000121261361505074100232260ustar00rootroot00000000000000/* * This radix tree implementation is tailored to the singular purpose of * tracking which chunks are currently owned by jemalloc. This functionality * is mandatory for OS X, where jemalloc must be able to respond to object * ownership queries. * ******************************************************************************* */ #ifdef JEMALLOC_H_TYPES typedef struct rtree_s rtree_t; /* * Size of each radix tree node (must be a power of 2). This impacts tree * depth. */ #define RTREE_NODESIZE (1U << 16) typedef void *(rtree_alloc_t)(pool_t *, size_t); typedef void (rtree_dalloc_t)(pool_t *, void *); #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct rtree_s { rtree_alloc_t *alloc; rtree_dalloc_t *dalloc; pool_t *pool; malloc_mutex_t mutex; void **root; unsigned height; unsigned level2bits[1]; /* Dynamically sized. */ }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS rtree_t *rtree_new(unsigned bits, rtree_alloc_t *alloc, rtree_dalloc_t *dalloc, pool_t *pool); void rtree_delete(rtree_t *rtree); void rtree_prefork(rtree_t *rtree); void rtree_postfork_parent(rtree_t *rtree); void rtree_postfork_child(rtree_t *rtree); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE #ifdef JEMALLOC_DEBUG uint8_t rtree_get_locked(rtree_t *rtree, uintptr_t key); #endif uint8_t rtree_get(rtree_t *rtree, uintptr_t key); bool rtree_set(rtree_t *rtree, uintptr_t key, uint8_t val); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_RTREE_C_)) #define RTREE_GET_GENERATE(f) \ /* The least significant bits of the key are ignored. */ \ JEMALLOC_INLINE uint8_t \ f(rtree_t *rtree, uintptr_t key) \ { \ uint8_t ret; \ uintptr_t subkey; \ unsigned i, lshift, height, bits; \ void **node, **child; \ \ RTREE_LOCK(&rtree->mutex); \ for (i = lshift = 0, height = rtree->height, node = rtree->root;\ i < height - 1; \ i++, lshift += bits, node = child) { \ bits = rtree->level2bits[i]; \ subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR + \ 3)) - bits); \ child = (void**)node[subkey]; \ if (child == NULL) { \ RTREE_UNLOCK(&rtree->mutex); \ return (0); \ } \ } \ \ /* \ * node is a leaf, so it contains values rather than node \ * pointers. 
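/*
 * Standalone sketch of the subkey arithmetic used by rtree_get()/rtree_set()
 * above: each level consumes the most significant level2bits[i] bits of the
 * key via (key << lshift) >> (ptr_bits - bits).  The three-level 14/14/14
 * split and the example address are made up for illustration; the real split
 * is derived by rtree_new() from RTREE_NODESIZE.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_PTR_BITS	((unsigned)(sizeof(uintptr_t) << 3))

static void
demo_split_key(uintptr_t key, const unsigned *level2bits, unsigned height)
{
	unsigned i, lshift = 0;

	for (i = 0; i < height; i++) {
		unsigned bits = level2bits[i];
		uintptr_t subkey = (key << lshift) >> (DEMO_PTR_BITS - bits);

		printf("level %u: subkey 0x%" PRIxPTR "\n", i, subkey);
		lshift += bits;
	}
}

int
main(void)
{
	/*
	 * Example: 64-bit pointers and 4 MiB chunks (lg_chunk == 22), so the
	 * tree distinguishes the upper 42 address bits; as the comment above
	 * notes, the least significant bits of the key are ignored.
	 */
	unsigned level2bits[3] = {14, 14, 14};

	demo_split_key((uintptr_t)0x00007f1234c00000ULL, level2bits, 3);
	return (0);
}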
\ */ \ bits = rtree->level2bits[i]; \ subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR+3)) - \ bits); \ { \ uint8_t *leaf = (uint8_t *)node; \ ret = leaf[subkey]; \ } \ RTREE_UNLOCK(&rtree->mutex); \ \ RTREE_GET_VALIDATE \ return (ret); \ } #ifdef JEMALLOC_DEBUG # define RTREE_LOCK(l) malloc_mutex_lock(l) # define RTREE_UNLOCK(l) malloc_mutex_unlock(l) # define RTREE_GET_VALIDATE RTREE_GET_GENERATE(rtree_get_locked) # undef RTREE_LOCK # undef RTREE_UNLOCK # undef RTREE_GET_VALIDATE #endif #define RTREE_LOCK(l) #define RTREE_UNLOCK(l) #ifdef JEMALLOC_DEBUG /* * Suppose that it were possible for a jemalloc-allocated chunk to be * munmap()ped, followed by a different allocator in another thread re-using * overlapping virtual memory, all without invalidating the cached rtree * value. The result would be a false positive (the rtree would claim that * jemalloc owns memory that it had actually discarded). This scenario * seems impossible, but the following assertion is a prudent sanity check. */ # define RTREE_GET_VALIDATE \ assert(rtree_get_locked(rtree, key) == ret); #else # define RTREE_GET_VALIDATE #endif RTREE_GET_GENERATE(rtree_get) #undef RTREE_LOCK #undef RTREE_UNLOCK #undef RTREE_GET_VALIDATE JEMALLOC_INLINE bool rtree_set(rtree_t *rtree, uintptr_t key, uint8_t val) { uintptr_t subkey; unsigned i, lshift, height, bits; void **node, **child; malloc_mutex_lock(&rtree->mutex); for (i = lshift = 0, height = rtree->height, node = rtree->root; i < height - 1; i++, lshift += bits, node = child) { bits = rtree->level2bits[i]; subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR+3)) - bits); child = (void**)node[subkey]; if (child == NULL) { size_t size = ((i + 1 < height - 1) ? sizeof(void *) : (sizeof(uint8_t))) << rtree->level2bits[i+1]; child = (void**)rtree->alloc(rtree->pool, size); if (child == NULL) { malloc_mutex_unlock(&rtree->mutex); return (true); } memset(child, 0, size); node[subkey] = child; } } /* node is a leaf, so it contains values rather than node pointers. */ bits = rtree->level2bits[i]; subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR+3)) - bits); { uint8_t *leaf = (uint8_t *)node; leaf[subkey] = val; } malloc_mutex_unlock(&rtree->mutex); return (false); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/size_classes.sh000077500000000000000000000160601361505074100247630ustar00rootroot00000000000000#!/bin/sh # The following limits are chosen such that they cover all supported platforms. # Pointer sizes. lg_zarr="2 3" # Quanta. lg_qarr="3 4" # The range of tiny size classes is [2^lg_tmin..2^(lg_q-1)]. lg_tmin=3 # Maximum lookup size. lg_kmax=12 # Page sizes. lg_parr="12 13 16" # Size class group size (number of size classes for each size doubling). 
lg_g=2 pow2() { e=$1 pow2_result=1 while [ ${e} -gt 0 ] ; do pow2_result=$((${pow2_result} + ${pow2_result})) e=$((${e} - 1)) done } lg() { x=$1 lg_result=0 while [ ${x} -gt 1 ] ; do lg_result=$((${lg_result} + 1)) x=$((${x} / 2)) done } size_class() { index=$1 lg_grp=$2 lg_delta=$3 ndelta=$4 lg_p=$5 lg_kmax=$6 lg ${ndelta}; lg_ndelta=${lg_result}; pow2 ${lg_ndelta} if [ ${pow2_result} -lt ${ndelta} ] ; then rem="yes" else rem="no" fi lg_size=${lg_grp} if [ $((${lg_delta} + ${lg_ndelta})) -eq ${lg_grp} ] ; then lg_size=$((${lg_grp} + 1)) else lg_size=${lg_grp} rem="yes" fi if [ ${lg_size} -lt ${lg_p} ] ; then bin="yes" else bin="no" fi if [ ${lg_size} -lt ${lg_kmax} \ -o ${lg_size} -eq ${lg_kmax} -a ${rem} = "no" ] ; then lg_delta_lookup=${lg_delta} else lg_delta_lookup="no" fi printf ' SC(%3d, %6d, %8d, %6d, %3s, %2s) \\\n' ${index} ${lg_grp} ${lg_delta} ${ndelta} ${bin} ${lg_delta_lookup} # Defined upon return: # - lg_delta_lookup (${lg_delta} or "no") # - bin ("yes" or "no") } sep_line() { echo " \\" } size_classes() { lg_z=$1 lg_q=$2 lg_t=$3 lg_p=$4 lg_g=$5 pow2 $((${lg_z} + 3)); ptr_bits=${pow2_result} pow2 ${lg_g}; g=${pow2_result} echo "#define SIZE_CLASSES \\" echo " /* index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup */ \\" ntbins=0 nlbins=0 lg_tiny_maxclass='"NA"' nbins=0 # Tiny size classes. ndelta=0 index=0 lg_grp=${lg_t} lg_delta=${lg_grp} while [ ${lg_grp} -lt ${lg_q} ] ; do size_class ${index} ${lg_grp} ${lg_delta} ${ndelta} ${lg_p} ${lg_kmax} if [ ${lg_delta_lookup} != "no" ] ; then nlbins=$((${index} + 1)) fi if [ ${bin} != "no" ] ; then nbins=$((${index} + 1)) fi ntbins=$((${ntbins} + 1)) lg_tiny_maxclass=${lg_grp} # Final written value is correct. index=$((${index} + 1)) lg_delta=${lg_grp} lg_grp=$((${lg_grp} + 1)) done # First non-tiny group. if [ ${ntbins} -gt 0 ] ; then sep_line # The first size class has an unusual encoding, because the size has to be # split between grp and delta*ndelta. lg_grp=$((${lg_grp} - 1)) ndelta=1 size_class ${index} ${lg_grp} ${lg_delta} ${ndelta} ${lg_p} ${lg_kmax} index=$((${index} + 1)) lg_grp=$((${lg_grp} + 1)) lg_delta=$((${lg_delta} + 1)) fi while [ ${ndelta} -lt ${g} ] ; do size_class ${index} ${lg_grp} ${lg_delta} ${ndelta} ${lg_p} ${lg_kmax} index=$((${index} + 1)) ndelta=$((${ndelta} + 1)) done # All remaining groups. 
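# (Clarifying note, an addition: in the top group, where lg_grp has reached
# ptr_bits - 1, ndelta is capped at g - 1 below, presumably so that the
# largest generated class still fits in the pointer-sized address range.)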
lg_grp=$((${lg_grp} + ${lg_g})) while [ ${lg_grp} -lt ${ptr_bits} ] ; do sep_line ndelta=1 if [ ${lg_grp} -eq $((${ptr_bits} - 1)) ] ; then ndelta_limit=$((${g} - 1)) else ndelta_limit=${g} fi while [ ${ndelta} -le ${ndelta_limit} ] ; do size_class ${index} ${lg_grp} ${lg_delta} ${ndelta} ${lg_p} ${lg_kmax} if [ ${lg_delta_lookup} != "no" ] ; then nlbins=$((${index} + 1)) # Final written value is correct: lookup_maxclass="((((size_t)1) << ${lg_grp}) + (((size_t)${ndelta}) << ${lg_delta}))" fi if [ ${bin} != "no" ] ; then nbins=$((${index} + 1)) # Final written value is correct: small_maxclass="((((size_t)1) << ${lg_grp}) + (((size_t)${ndelta}) << ${lg_delta}))" fi index=$((${index} + 1)) ndelta=$((${ndelta} + 1)) done lg_grp=$((${lg_grp} + 1)) lg_delta=$((${lg_delta} + 1)) done echo # Defined upon completion: # - ntbins # - nlbins # - nbins # - lg_tiny_maxclass # - lookup_maxclass # - small_maxclass } cat < 255) # error "Too many small size classes" #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ EOF vmem-1.8/src/jemalloc/include/jemalloc/internal/stats.h000066400000000000000000000107741361505074100232520ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct tcache_bin_stats_s tcache_bin_stats_t; typedef struct malloc_bin_stats_s malloc_bin_stats_t; typedef struct malloc_large_stats_s malloc_large_stats_t; typedef struct arena_stats_s arena_stats_t; typedef struct chunk_stats_s chunk_stats_t; #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct tcache_bin_stats_s { /* * Number of allocation requests that corresponded to the size of this * bin. */ uint64_t nrequests; }; struct malloc_bin_stats_s { /* * Current number of bytes allocated, including objects currently * cached by tcache. */ size_t allocated; /* * Total number of allocation/deallocation requests served directly by * the bin. Note that tcache may allocate an object, then recycle it * many times, resulting many increments to nrequests, but only one * each to nmalloc and ndalloc. */ uint64_t nmalloc; uint64_t ndalloc; /* * Number of allocation requests that correspond to the size of this * bin. This includes requests served by tcache, though tcache only * periodically merges into this counter. */ uint64_t nrequests; /* Number of tcache fills from this bin. */ uint64_t nfills; /* Number of tcache flushes to this bin. */ uint64_t nflushes; /* Total number of runs created for this bin's size class. */ uint64_t nruns; /* * Total number of runs reused by extracting them from the runs tree for * this bin's size class. */ uint64_t reruns; /* Current number of runs in this bin. */ size_t curruns; }; struct malloc_large_stats_s { /* * Total number of allocation/deallocation requests served directly by * the arena. Note that tcache may allocate an object, then recycle it * many times, resulting many increments to nrequests, but only one * each to nmalloc and ndalloc. 
*/ uint64_t nmalloc; uint64_t ndalloc; /* * Number of allocation requests that correspond to this size class. * This includes requests served by tcache, though tcache only * periodically merges into this counter. */ uint64_t nrequests; /* Current number of runs of this size class. */ size_t curruns; }; struct arena_stats_s { /* Number of bytes currently mapped. */ size_t mapped; /* * Total number of purge sweeps, total number of madvise calls made, * and total pages purged in order to keep dirty unused memory under * control. */ uint64_t npurge; uint64_t nmadvise; uint64_t purged; /* Per-size-category statistics. */ size_t allocated_large; uint64_t nmalloc_large; uint64_t ndalloc_large; uint64_t nrequests_large; size_t allocated_huge; uint64_t nmalloc_huge; uint64_t ndalloc_huge; uint64_t nrequests_huge; /* * One element for each possible size class, including sizes that * overlap with bin size classes. This is necessary because ipalloc() * sometimes has to use such large objects in order to assure proper * alignment. */ malloc_large_stats_t *lstats; }; struct chunk_stats_s { /* Number of chunks that were allocated. */ uint64_t nchunks; /* High-water mark for number of chunks allocated. */ size_t highchunks; /* * Current number of chunks allocated. This value isn't maintained for * any other purpose, so keep track of it in order to be able to set * highchunks. */ size_t curchunks; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern bool opt_stats_print; void stats_print(pool_t *pool, void (*write)(void *, const char *), void *cbopaque, const char *opts); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE size_t stats_cactive_get(pool_t *pool); void stats_cactive_add(pool_t *pool, size_t size); void stats_cactive_sub(pool_t *pool, size_t size); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_STATS_C_)) JEMALLOC_INLINE size_t stats_cactive_get(pool_t *pool) { return (atomic_read_z(&(pool->stats_cactive))); } JEMALLOC_INLINE void stats_cactive_add(pool_t *pool, size_t size) { atomic_add_z(&(pool->stats_cactive), size); } JEMALLOC_INLINE void stats_cactive_sub(pool_t *pool, size_t size) { atomic_sub_z(&(pool->stats_cactive), size); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/tcache.h000066400000000000000000000276561361505074100233520ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct tcache_bin_info_s tcache_bin_info_t; typedef struct tcache_bin_s tcache_bin_t; typedef struct tcache_s tcache_t; typedef struct tsd_tcache_s tsd_tcache_t; /* * tcache pointers close to NULL are used to encode state information that is * used for two purposes: preventing thread caching on a per thread basis and * cleaning up during thread shutdown. */ #define TCACHE_STATE_DISABLED ((tcache_t *)(uintptr_t)1) #define TCACHE_STATE_REINCARNATED ((tcache_t *)(uintptr_t)2) #define TCACHE_STATE_PURGATORY ((tcache_t *)(uintptr_t)3) #define TCACHE_STATE_MAX TCACHE_STATE_PURGATORY /* * Absolute maximum number of cache slots for each small bin in the thread * cache. This is an additional constraint beyond that imposed as: twice the * number of regions per run for this size class. 
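 * (Illustrative restatement, an assumption rather than a quote from the
 * sources: the effective per-bin limit is then min(2 * nregs,
 * TCACHE_NSLOTS_SMALL_MAX).)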
* * This constant must be an even number. */ #define TCACHE_NSLOTS_SMALL_MAX 200 /* Number of cache slots for large size classes. */ #define TCACHE_NSLOTS_LARGE 20 /* (1U << opt_lg_tcache_max) is used to compute tcache_maxclass. */ #define LG_TCACHE_MAXCLASS_DEFAULT 15 /* * TCACHE_GC_SWEEP is the approximate number of allocation events between * full GC sweeps. Integer rounding may cause the actual number to be * slightly higher, since GC is performed incrementally. */ #define TCACHE_GC_SWEEP 8192 /* Number of tcache allocation/deallocation events between incremental GCs. */ #define TCACHE_GC_INCR \ ((TCACHE_GC_SWEEP / NBINS) + ((TCACHE_GC_SWEEP / NBINS == 0) ? 0 : 1)) #define TSD_TCACHE_INITIALIZER JEMALLOC_ARG_CONCAT({.npools = 0, .seqno = NULL, .tcaches = NULL}) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS typedef enum { tcache_enabled_false = 0, /* Enable cast to/from bool. */ tcache_enabled_true = 1, tcache_enabled_default = 2 } tcache_enabled_t; /* * Read-only information associated with each element of tcache_t's tbins array * is stored separately, mainly to reduce memory usage. */ struct tcache_bin_info_s { unsigned ncached_max; /* Upper limit on ncached. */ }; struct tcache_bin_s { tcache_bin_stats_t tstats; int low_water; /* Min # cached since last GC. */ unsigned lg_fill_div; /* Fill (ncached_max >> lg_fill_div). */ unsigned ncached; /* # of cached objects. */ void **avail; /* Stack of available objects. */ }; struct tcache_s { ql_elm(tcache_t) link; /* Used for aggregating stats. */ uint64_t prof_accumbytes;/* Cleared after arena_prof_accum() */ arena_t *arena; /* This thread's arena. */ unsigned ev_cnt; /* Event count since incremental GC. */ unsigned next_gc_bin; /* Next bin to GC. */ tcache_bin_t tbins[1]; /* Dynamically sized. */ /* * The pointer stacks associated with tbins follow as a contiguous * array. During tcache initialization, the avail pointer in each * element of tbins is initialized to point to the proper offset within * this array. */ }; struct tsd_tcache_s { size_t npools; unsigned *seqno; /* Sequence number of pool */ tcache_t **tcaches; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS extern bool opt_tcache; extern ssize_t opt_lg_tcache_max; extern tcache_bin_info_t *tcache_bin_info; /* * Number of tcache bins. There are NBINS small-object bins, plus 0 or more * large-object bins. */ extern size_t nhbins; /* Maximum cached size class. 
*/ extern size_t tcache_maxclass; size_t tcache_salloc(const void *ptr); void tcache_event_hard(tcache_t *tcache); void *tcache_alloc_small_hard(tcache_t *tcache, tcache_bin_t *tbin, size_t binind); void tcache_bin_flush_small(tcache_bin_t *tbin, size_t binind, unsigned rem, tcache_t *tcache); void tcache_bin_flush_large(tcache_bin_t *tbin, size_t binind, unsigned rem, tcache_t *tcache); void tcache_arena_associate(tcache_t *tcache, arena_t *arena); void tcache_arena_dissociate(tcache_t *tcache); tcache_t *tcache_get_hard(tcache_t *tcache, pool_t *pool, bool create); tcache_t *tcache_create(arena_t *arena); void tcache_destroy(tcache_t *tcache); bool tcache_tsd_extend(tsd_tcache_t *tsd, unsigned len); void tcache_thread_cleanup(void *arg); void tcache_stats_merge(tcache_t *tcache, arena_t *arena); bool tcache_boot0(void); bool tcache_boot1(void); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE malloc_tsd_protos(JEMALLOC_ATTR(unused), tcache, tsd_tcache_t) malloc_tsd_protos(JEMALLOC_ATTR(unused), tcache_enabled, tcache_enabled_t) void tcache_event(tcache_t *tcache); void tcache_flush(pool_t *pool); bool tcache_enabled_get(void); tcache_t *tcache_get(pool_t *pool, bool create); void tcache_enabled_set(bool enabled); void *tcache_alloc_easy(tcache_bin_t *tbin); void *tcache_alloc_small(tcache_t *tcache, size_t size, bool zero); void *tcache_alloc_large(tcache_t *tcache, size_t size, bool zero); void tcache_dalloc_small(tcache_t *tcache, void *ptr, size_t binind); void tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TCACHE_C_)) /* Map of thread-specific caches. */ malloc_tsd_externs(tcache, tsd_tcache_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, tcache, tsd_tcache_t, { 0 }, tcache_thread_cleanup) /* Per thread flag that allows thread caches to be disabled. 
*/ malloc_tsd_externs(tcache_enabled, tcache_enabled_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, tcache_enabled, tcache_enabled_t, tcache_enabled_default, malloc_tsd_no_cleanup) JEMALLOC_INLINE void tcache_flush(pool_t *pool) { tsd_tcache_t *tsd = tcache_tsd_get(); tcache_t *tcache = tsd->tcaches[pool->pool_id]; if (tsd->seqno[pool->pool_id] == pool->seqno) { cassert(config_tcache); if ((uintptr_t)tcache <= (uintptr_t)TCACHE_STATE_MAX) return; tcache_destroy(tcache); } tsd->tcaches[pool->pool_id] = NULL; } JEMALLOC_INLINE bool tcache_enabled_get(void) { tcache_enabled_t tcache_enabled; cassert(config_tcache); tcache_enabled = *tcache_enabled_tsd_get(); if (tcache_enabled == tcache_enabled_default) { tcache_enabled = (tcache_enabled_t)opt_tcache; tcache_enabled_tsd_set(&tcache_enabled); } return ((bool)tcache_enabled); } JEMALLOC_INLINE void tcache_enabled_set(bool enabled) { tcache_enabled_t tcache_enabled; tsd_tcache_t *tsd; tcache_t *tcache; int i; cassert(config_tcache); tcache_enabled = (tcache_enabled_t)enabled; tcache_enabled_tsd_set(&tcache_enabled); tsd = tcache_tsd_get(); malloc_mutex_lock(&pools_lock); for (i = 0; i < tsd->npools; i++) { tcache = tsd->tcaches[i]; if (tcache != NULL) { if (enabled) { if (tcache == TCACHE_STATE_DISABLED) { tsd->tcaches[i] = NULL; } } else /* disabled */ { if (tcache > TCACHE_STATE_MAX) { if (pools[i] != NULL && tsd->seqno[i] == pools[i]->seqno) tcache_destroy(tcache); tcache = NULL; } if (tcache == NULL) { tsd->tcaches[i] = TCACHE_STATE_DISABLED; } } } } malloc_mutex_unlock(&pools_lock); } JEMALLOC_ALWAYS_INLINE tcache_t * tcache_get(pool_t *pool, bool create) { tcache_t *tcache; tsd_tcache_t *tsd; if (config_tcache == false) return (NULL); if (config_lazy_lock && isthreaded == false) return (NULL); tsd = tcache_tsd_get(); /* expand tcaches array if necessary */ if ((tsd->npools <= pool->pool_id) && tcache_tsd_extend(tsd, pool->pool_id)) { return (NULL); } /* * All subsequent pools with the same id have to cleanup tcache before * calling tcache_get_hard. 
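 * (Added clarification, inferred from the check below: pool ids can be
 * reused after a pool is deleted, so the per-thread seqno recorded for a
 * cached tcache is compared against the pool's current seqno; on a
 * mismatch the stale cache entry is dropped rather than handed to the
 * new pool.)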
*/ if (tsd->seqno[pool->pool_id] != pool->seqno) { tsd->tcaches[pool->pool_id] = NULL; } tcache = tsd->tcaches[pool->pool_id]; if ((uintptr_t)tcache <= (uintptr_t)TCACHE_STATE_MAX) { if (tcache == TCACHE_STATE_DISABLED) return (NULL); tcache = tcache_get_hard(tcache, pool, create); } return (tcache); } JEMALLOC_ALWAYS_INLINE void tcache_event(tcache_t *tcache) { if (TCACHE_GC_INCR == 0) return; tcache->ev_cnt++; assert(tcache->ev_cnt <= TCACHE_GC_INCR); if (tcache->ev_cnt == TCACHE_GC_INCR) tcache_event_hard(tcache); } JEMALLOC_ALWAYS_INLINE void * tcache_alloc_easy(tcache_bin_t *tbin) { void *ret; if (tbin->ncached == 0) { tbin->low_water = -1; return (NULL); } tbin->ncached--; if ((int)tbin->ncached < tbin->low_water) tbin->low_water = tbin->ncached; ret = tbin->avail[tbin->ncached]; return (ret); } JEMALLOC_ALWAYS_INLINE void * tcache_alloc_small(tcache_t *tcache, size_t size, bool zero) { void *ret; size_t binind; tcache_bin_t *tbin; binind = small_size2bin(size); assert(binind < NBINS); tbin = &tcache->tbins[binind]; size = small_bin2size(binind); ret = tcache_alloc_easy(tbin); if (ret == NULL) { ret = tcache_alloc_small_hard(tcache, tbin, binind); if (ret == NULL) return (NULL); } assert(tcache_salloc(ret) == size); if (zero == false) { if (config_fill) { if (opt_junk) { arena_alloc_junk_small(ret, &arena_bin_info[binind], false); } else if (opt_zero) memset(ret, 0, size); } } else { if (config_fill && opt_junk) { arena_alloc_junk_small(ret, &arena_bin_info[binind], true); } memset(ret, 0, size); } if (config_stats) tbin->tstats.nrequests++; if (config_prof) tcache->prof_accumbytes += size; tcache_event(tcache); return (ret); } JEMALLOC_ALWAYS_INLINE void * tcache_alloc_large(tcache_t *tcache, size_t size, bool zero) { void *ret; size_t binind; tcache_bin_t *tbin; size = PAGE_CEILING(size); assert(size <= tcache_maxclass); binind = NBINS + (size >> LG_PAGE) - 1; assert(binind < nhbins); tbin = &tcache->tbins[binind]; ret = tcache_alloc_easy(tbin); if (ret == NULL) { /* * Only allocate one large object at a time, because it's quite * expensive to create one and not use it. 
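 * (Added note: unlike the small-object path, a miss here is not used to
 * refill the bin; the request simply falls through to
 * arena_malloc_large() below.)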
*/ ret = arena_malloc_large(tcache->arena, size, zero); if (ret == NULL) return (NULL); } else { if (config_prof && size == PAGE) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ret); size_t pageind = (((uintptr_t)ret - (uintptr_t)chunk) >> LG_PAGE); arena_mapbits_large_binind_set(chunk, pageind, BININD_INVALID); } if (zero == false) { if (config_fill) { if (opt_junk) memset(ret, 0xa5, size); else if (opt_zero) memset(ret, 0, size); } } else memset(ret, 0, size); if (config_stats) tbin->tstats.nrequests++; if (config_prof) tcache->prof_accumbytes += size; } tcache_event(tcache); return (ret); } JEMALLOC_ALWAYS_INLINE void tcache_dalloc_small(tcache_t *tcache, void *ptr, size_t binind) { tcache_bin_t *tbin; tcache_bin_info_t *tbin_info; assert(tcache_salloc(ptr) <= SMALL_MAXCLASS); if (config_fill && opt_junk) arena_dalloc_junk_small(ptr, &arena_bin_info[binind]); tbin = &tcache->tbins[binind]; tbin_info = &tcache_bin_info[binind]; if (tbin->ncached == tbin_info->ncached_max) { tcache_bin_flush_small(tbin, binind, (tbin_info->ncached_max >> 1), tcache); } assert(tbin->ncached < tbin_info->ncached_max); tbin->avail[tbin->ncached] = ptr; tbin->ncached++; tcache_event(tcache); } JEMALLOC_ALWAYS_INLINE void tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size) { size_t binind; tcache_bin_t *tbin; tcache_bin_info_t *tbin_info; assert((size & PAGE_MASK) == 0); assert(tcache_salloc(ptr) > SMALL_MAXCLASS); assert(tcache_salloc(ptr) <= tcache_maxclass); binind = NBINS + (size >> LG_PAGE) - 1; if (config_fill && opt_junk) memset(ptr, 0x5a, size); tbin = &tcache->tbins[binind]; tbin_info = &tcache_bin_info[binind]; if (tbin->ncached == tbin_info->ncached_max) { tcache_bin_flush_large(tbin, binind, (tbin_info->ncached_max >> 1), tcache); } assert(tbin->ncached < tbin_info->ncached_max); tbin->avail[tbin->ncached] = ptr; tbin->ncached++; tcache_event(tcache); } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/tsd.h000066400000000000000000000342351361505074100227040ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* Maximum number of malloc_tsd users with cleanup functions. */ #define MALLOC_TSD_CLEANUPS_MAX 8 typedef bool (*malloc_tsd_cleanup_t)(void); #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) typedef struct tsd_init_block_s tsd_init_block_t; typedef struct tsd_init_head_s tsd_init_head_t; #endif /* * TLS/TSD-agnostic macro-based implementation of thread-specific data. There * are four macros that support (at least) three use cases: file-private, * library-private, and library-private inlined. 
Following is an example * library-private tsd variable: * * In example.h: * typedef struct { * int x; * int y; * } example_t; * #define EX_INITIALIZER JEMALLOC_CONCAT({0, 0}) * malloc_tsd_protos(, example, example_t *) * malloc_tsd_externs(example, example_t *) * In example.c: * malloc_tsd_data(, example, example_t *, EX_INITIALIZER) * malloc_tsd_funcs(, example, example_t *, EX_INITIALIZER, * example_tsd_cleanup) * * The result is a set of generated functions, e.g.: * * bool example_tsd_boot(void) {...} * example_t **example_tsd_get() {...} * void example_tsd_set(example_t **val) {...} * * Note that all of the functions deal in terms of (a_type *) rather than * (a_type) so that it is possible to support non-pointer types (unlike * pthreads TSD). example_tsd_cleanup() is passed an (a_type *) pointer that is * cast to (void *). This means that the cleanup function needs to cast *and* * dereference the function argument, e.g.: * * void * example_tsd_cleanup(void *arg) * { * example_t *example = *(example_t **)arg; * * [...] * if ([want the cleanup function to be called again]) { * example_tsd_set(&example); * } * } * * If example_tsd_set() is called within example_tsd_cleanup(), it will be * called again. This is similar to how pthreads TSD destruction works, except * that pthreads only calls the cleanup function again if the value was set to * non-NULL. */ /* malloc_tsd_protos(). */ #define malloc_tsd_protos(a_attr, a_name, a_type) \ a_attr bool \ a_name##_tsd_boot(void); \ a_attr a_type * \ a_name##_tsd_get(void); \ a_attr void \ a_name##_tsd_set(a_type *val); /* malloc_tsd_externs(). */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_externs(a_name, a_type) \ extern __thread a_type a_name##_tls; \ extern __thread bool a_name##_initialized; \ extern bool a_name##_booted; #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_externs(a_name, a_type) \ extern __thread a_type a_name##_tls; \ extern pthread_key_t a_name##_tsd; \ extern bool a_name##_booted; #elif (defined(_WIN32)) #define malloc_tsd_externs(a_name, a_type) \ extern DWORD a_name##_tsd; \ extern bool a_name##_booted; #else #define malloc_tsd_externs(a_name, a_type) \ extern pthread_key_t a_name##_tsd; \ extern tsd_init_head_t a_name##_tsd_init_head; \ extern bool a_name##_booted; #endif /* malloc_tsd_data(). */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr __thread a_type JEMALLOC_TLS_MODEL \ a_name##_tls = a_initializer; \ a_attr __thread bool JEMALLOC_TLS_MODEL \ a_name##_initialized = false; \ a_attr bool a_name##_booted = false; #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr __thread a_type JEMALLOC_TLS_MODEL \ a_name##_tls = a_initializer; \ a_attr pthread_key_t a_name##_tsd; \ a_attr bool a_name##_booted = false; #elif (defined(_WIN32)) #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr DWORD a_name##_tsd; \ a_attr bool a_name##_booted = false; #else #define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \ a_attr pthread_key_t a_name##_tsd; \ a_attr tsd_init_head_t a_name##_tsd_init_head = { \ ql_head_initializer(blocks), \ MALLOC_MUTEX_INITIALIZER \ }; \ a_attr bool a_name##_booted = false; #endif /* malloc_tsd_funcs(). */ #ifdef JEMALLOC_MALLOC_THREAD_CLEANUP #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Initialization/cleanup. 
*/ \ a_attr bool \ a_name##_tsd_cleanup_wrapper(void) \ { \ \ if (a_name##_initialized) { \ a_name##_initialized = false; \ a_cleanup(&a_name##_tls); \ } \ return (a_name##_initialized); \ } \ a_attr bool \ a_name##_tsd_boot(void) \ { \ \ if (a_cleanup != malloc_tsd_no_cleanup) { \ malloc_tsd_cleanup_register( \ &a_name##_tsd_cleanup_wrapper); \ } \ a_name##_booted = true; \ return (false); \ } \ /* Get/set. */ \ a_attr a_type * \ a_name##_tsd_get(void) \ { \ \ assert(a_name##_booted); \ return (&a_name##_tls); \ } \ a_attr void \ a_name##_tsd_set(a_type *val) \ { \ \ assert(a_name##_booted); \ a_name##_tls = (*val); \ if (a_cleanup != malloc_tsd_no_cleanup) \ a_name##_initialized = true; \ } #elif (defined(JEMALLOC_TLS)) #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Initialization/cleanup. */ \ a_attr bool \ a_name##_tsd_boot(void) \ { \ \ if (a_cleanup != malloc_tsd_no_cleanup) { \ if (pthread_key_create(&a_name##_tsd, a_cleanup) != 0) \ return (true); \ } \ a_name##_booted = true; \ return (false); \ } \ /* Get/set. */ \ a_attr a_type * \ a_name##_tsd_get(void) \ { \ \ assert(a_name##_booted); \ return (&a_name##_tls); \ } \ a_attr void \ a_name##_tsd_set(a_type *val) \ { \ \ assert(a_name##_booted); \ a_name##_tls = (*val); \ if (a_cleanup != malloc_tsd_no_cleanup) { \ if (pthread_setspecific(a_name##_tsd, \ (void *)(&a_name##_tls))) { \ malloc_write(": Error" \ " setting TSD for "#a_name"\n"); \ if (opt_abort) \ abort(); \ } \ } \ } #elif (defined(_WIN32)) #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Data structure. */ \ typedef struct { \ bool initialized; \ a_type val; \ } a_name##_tsd_wrapper_t; \ /* Initialization/cleanup. */ \ a_attr bool \ a_name##_tsd_cleanup_wrapper(void) \ { \ a_name##_tsd_wrapper_t *wrapper; \ \ wrapper = (a_name##_tsd_wrapper_t *) TlsGetValue(a_name##_tsd); \ if (wrapper == NULL) \ return (false); \ if (a_cleanup != malloc_tsd_no_cleanup && \ wrapper->initialized) { \ a_type val = wrapper->val; \ a_type tsd_static_data = a_initializer; \ wrapper->initialized = false; \ wrapper->val = tsd_static_data; \ a_cleanup(&val); \ if (wrapper->initialized) { \ /* Trigger another cleanup round. */ \ return (true); \ } \ } \ malloc_tsd_dalloc(wrapper); \ return (false); \ } \ a_attr bool \ a_name##_tsd_boot(void) \ { \ \ a_name##_tsd = TlsAlloc(); \ if (a_name##_tsd == TLS_OUT_OF_INDEXES) \ return (true); \ if (a_cleanup != malloc_tsd_no_cleanup) { \ malloc_tsd_cleanup_register( \ &a_name##_tsd_cleanup_wrapper); \ } \ a_name##_booted = true; \ return (false); \ } \ /* Get/set. 
*/ \ a_attr a_name##_tsd_wrapper_t * \ a_name##_tsd_get_wrapper(void) \ { \ a_name##_tsd_wrapper_t *wrapper = (a_name##_tsd_wrapper_t *) \ TlsGetValue(a_name##_tsd); \ \ if (wrapper == NULL) { \ wrapper = (a_name##_tsd_wrapper_t *) \ malloc_tsd_malloc(sizeof(a_name##_tsd_wrapper_t)); \ if (wrapper == NULL) { \ malloc_write(": Error allocating" \ " TSD for "#a_name"\n"); \ abort(); \ } else { \ static a_type tsd_static_data = a_initializer; \ wrapper->initialized = false; \ wrapper->val = tsd_static_data; \ } \ if (!TlsSetValue(a_name##_tsd, (void *)wrapper)) { \ malloc_write(": Error setting" \ " TSD for "#a_name"\n"); \ abort(); \ } \ } \ return (wrapper); \ } \ a_attr a_type * \ a_name##_tsd_get(void) \ { \ a_name##_tsd_wrapper_t *wrapper; \ \ assert(a_name##_booted); \ wrapper = a_name##_tsd_get_wrapper(); \ return (&wrapper->val); \ } \ a_attr void \ a_name##_tsd_set(a_type *val) \ { \ a_name##_tsd_wrapper_t *wrapper; \ \ assert(a_name##_booted); \ wrapper = a_name##_tsd_get_wrapper(); \ wrapper->val = *(val); \ if (a_cleanup != malloc_tsd_no_cleanup) \ wrapper->initialized = true; \ } #else #define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \ a_cleanup) \ /* Data structure. */ \ typedef struct { \ bool initialized; \ a_type val; \ } a_name##_tsd_wrapper_t; \ /* Initialization/cleanup. */ \ a_attr void \ a_name##_tsd_cleanup_wrapper(void *arg) \ { \ a_name##_tsd_wrapper_t *wrapper = (a_name##_tsd_wrapper_t *)arg;\ \ if (a_cleanup != malloc_tsd_no_cleanup && \ wrapper->initialized) { \ wrapper->initialized = false; \ a_cleanup(&wrapper->val); \ if (wrapper->initialized) { \ /* Trigger another cleanup round. */ \ if (pthread_setspecific(a_name##_tsd, \ (void *)wrapper)) { \ malloc_write(": Error" \ " setting TSD for "#a_name"\n"); \ if (opt_abort) \ abort(); \ } \ return; \ } \ } \ malloc_tsd_dalloc(wrapper); \ } \ a_attr bool \ a_name##_tsd_boot(void) \ { \ \ if (pthread_key_create(&a_name##_tsd, \ a_name##_tsd_cleanup_wrapper) != 0) \ return (true); \ a_name##_booted = true; \ return (false); \ } \ /* Get/set. */ \ a_attr a_name##_tsd_wrapper_t * \ a_name##_tsd_get_wrapper(void) \ { \ a_name##_tsd_wrapper_t *wrapper = (a_name##_tsd_wrapper_t *) \ pthread_getspecific(a_name##_tsd); \ \ if (wrapper == NULL) { \ tsd_init_block_t block; \ wrapper = tsd_init_check_recursion( \ &a_name##_tsd_init_head, &block); \ if (wrapper) \ return (wrapper); \ wrapper = (a_name##_tsd_wrapper_t *) \ malloc_tsd_malloc(sizeof(a_name##_tsd_wrapper_t)); \ block.data = wrapper; \ if (wrapper == NULL) { \ malloc_write(": Error allocating" \ " TSD for "#a_name"\n"); \ abort(); \ } else { \ static a_type tsd_static_data = a_initializer; \ wrapper->initialized = false; \ wrapper->val = tsd_static_data; \ } \ if (pthread_setspecific(a_name##_tsd, \ (void *)wrapper)) { \ malloc_write(": Error setting" \ " TSD for "#a_name"\n"); \ abort(); \ } \ tsd_init_finish(&a_name##_tsd_init_head, &block); \ } \ return (wrapper); \ } \ a_attr a_type * \ a_name##_tsd_get(void) \ { \ a_name##_tsd_wrapper_t *wrapper; \ \ assert(a_name##_booted); \ wrapper = a_name##_tsd_get_wrapper(); \ return (&wrapper->val); \ } \ a_attr void \ a_name##_tsd_set(a_type *val) \ { \ a_name##_tsd_wrapper_t *wrapper; \ \ assert(a_name##_booted); \ wrapper = a_name##_tsd_get_wrapper(); \ wrapper->val = *(val); \ if (a_cleanup != malloc_tsd_no_cleanup) \ wrapper->initialized = true; \ } #endif /* * Vector data container implemented as TSD/TLS macros. 
* These functions behave exactly like the regular version, * except for the fact that they take an index argument in accessor functions. */ /* malloc_tsd_vector_protos(). */ #define malloc_tsd_vector_protos(a_attr, a_name) \ malloc_tsd_protos(a_attr, a_name, vector_t) #define malloc_tsd_vector_externs(a_name) \ malloc_tsd_externs(a_name, vector_t) #define malloc_tsd_vector_data(a_attr, a_name) \ malloc_tsd_data(a_attr, a_name, vector_t, VECTOR_INITIALIZER) #define malloc_tsd_vector_funcs(a_attr, a_name, a_type, a_cleanup) \ malloc_tsd_funcs(a_attr, a_name, vector_t, VECTOR_INITIALIZER, \ a_cleanup) \ \ a_attr a_type * \ a_name##_vec_tsd_get(uint32_t index) \ { \ vector_t *v = a_name##_tsd_get(); \ return (a_type *)vec_get(v, index); \ } \ \ a_attr void \ a_name##_vec_tsd_set(uint32_t index, a_type *val) \ { \ vector_t *v = a_name##_tsd_get(); \ vec_set(v, index, (void *)val); \ } \ #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) struct tsd_init_block_s { ql_elm(tsd_init_block_t) link; pthread_t thread; void *data; }; struct tsd_init_head_s { ql_head(tsd_init_block_t) blocks; malloc_mutex_t lock; }; #endif #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void *malloc_tsd_malloc(size_t size); void malloc_tsd_dalloc(void *wrapper); void malloc_tsd_no_cleanup(void *); void malloc_tsd_cleanup_register(bool (*f)(void)); void malloc_tsd_boot(void); #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) void *tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block); void tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block); #endif #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/util.h000066400000000000000000000137101361505074100230620ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* Size of stack-allocated buffer passed to buferror(). */ #define BUFERROR_BUF 64 /* * Size of stack-allocated buffer used by malloc_{,v,vc}printf(). This must be * large enough for all possible uses within jemalloc. */ #define MALLOC_PRINTF_BUFSIZE 4096 /* * Wrap a cpp argument that contains commas such that it isn't broken up into * multiple arguments. */ #define JEMALLOC_ARG_CONCAT(...) __VA_ARGS__ /* * Silence compiler warnings due to uninitialized values. This is used * wherever the compiler fails to recognize that the variable is never used * uninitialized. */ #ifdef JEMALLOC_CC_SILENCE # define JEMALLOC_CC_SILENCE_INIT(v) = v #else # define JEMALLOC_CC_SILENCE_INIT(v) #endif #ifndef likely #ifdef __GNUC__ #define likely(x) __builtin_expect(!!(x), 1) #define unlikely(x) __builtin_expect(!!(x), 0) #else #define likely(x) !!(x) #define unlikely(x) !!(x) #endif #endif /* * Define a custom assert() in order to reduce the chances of deadlock during * assertion failure. 
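 * (Added note: config_debug is a compile-time constant, so in non-debug
 * builds the check below is expected to be optimized away entirely.)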
*/ #ifndef assert #define assert(e) do { \ if (config_debug && !(e)) { \ malloc_printf( \ ": %s:%d: Failed assertion: \"%s\"\n", \ __FILE__, __LINE__, #e); \ abort(); \ } \ } while (0) #endif #ifndef not_reached #define not_reached() do { \ if (config_debug) { \ malloc_printf( \ ": %s:%d: Unreachable code reached\n", \ __FILE__, __LINE__); \ abort(); \ } \ } while (0) #endif #ifndef not_implemented #define not_implemented() do { \ if (config_debug) { \ malloc_printf(": %s:%d: Not implemented\n", \ __FILE__, __LINE__); \ abort(); \ } \ } while (0) #endif #ifndef assert_not_implemented #define assert_not_implemented(e) do { \ if (config_debug && !(e)) \ not_implemented(); \ } while (0) #endif /* Use to assert a particular configuration, e.g., cassert(config_debug). */ #define cassert(c) do { \ if ((c) == false) \ not_reached(); \ } while (0) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS int buferror(int err, char *buf, size_t buflen); uintmax_t malloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base); void malloc_write(const char *s); /* * malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating * point math. */ int malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap); int malloc_snprintf(char *str, size_t size, const char *format, ...) JEMALLOC_ATTR(format(printf, 3, 4)); void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque, const char *format, va_list ap); void malloc_cprintf(void (*write)(void *, const char *), void *cbopaque, const char *format, ...) JEMALLOC_ATTR(format(printf, 3, 4)); void malloc_printf(const char *format, ...) JEMALLOC_ATTR(format(printf, 1, 2)); #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #ifndef JEMALLOC_ENABLE_INLINE int jemalloc_ffsl(long bitmap); int jemalloc_ffs(int bitmap); size_t pow2_ceil(size_t x); size_t lg_floor(size_t x); void set_errno(int errnum); int get_errno(void); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_UTIL_C_)) /* Sanity check: */ #if !defined(JEMALLOC_INTERNAL_FFSL) || !defined(JEMALLOC_INTERNAL_FFS) # error Both JEMALLOC_INTERNAL_FFSL && JEMALLOC_INTERNAL_FFS should have been defined by configure #endif JEMALLOC_ALWAYS_INLINE int jemalloc_ffsl(long bitmap) { return (JEMALLOC_INTERNAL_FFSL(bitmap)); } JEMALLOC_ALWAYS_INLINE int jemalloc_ffs(int bitmap) { return (JEMALLOC_INTERNAL_FFS(bitmap)); } /* Compute the smallest power of 2 that is >= x. */ JEMALLOC_INLINE size_t pow2_ceil(size_t x) { x--; x |= x >> 1; x |= x >> 2; x |= x >> 4; x |= x >> 8; x |= x >> 16; #if (LG_SIZEOF_PTR == 3) x |= x >> 32; #endif x++; return (x); } #if (defined(__i386__) || defined(__amd64__) || defined(__x86_64__)) JEMALLOC_INLINE size_t lg_floor(size_t x) { size_t ret; asm ("bsr %1, %0" : "=r"(ret) // Outputs. : "r"(x) // Inputs. 
); return (ret); } #elif (defined(_MSC_VER)) JEMALLOC_INLINE size_t lg_floor(size_t x) { unsigned long ret; #if (LG_SIZEOF_PTR == 3) _BitScanReverse64(&ret, x); #elif (LG_SIZEOF_PTR == 2) _BitScanReverse(&ret, x); #else # error "Unsupported type size for lg_floor()" #endif return ((unsigned)ret); } #elif (defined(JEMALLOC_HAVE_BUILTIN_CLZ)) JEMALLOC_INLINE size_t lg_floor(size_t x) { #if (LG_SIZEOF_PTR == LG_SIZEOF_INT) return (((8 << LG_SIZEOF_PTR) - 1) - __builtin_clz(x)); #elif (LG_SIZEOF_PTR == LG_SIZEOF_LONG) return (((8 << LG_SIZEOF_PTR) - 1) - __builtin_clzl(x)); #else # error "Unsupported type sizes for lg_floor()" #endif } #else JEMALLOC_INLINE size_t lg_floor(size_t x) { x |= (x >> 1); x |= (x >> 2); x |= (x >> 4); x |= (x >> 8); x |= (x >> 16); #if (LG_SIZEOF_PTR == 3 && LG_SIZEOF_PTR == LG_SIZEOF_LONG) x |= (x >> 32); if (x == KZU(0xffffffffffffffff)) return (63); x++; return (jemalloc_ffsl(x) - 2); #elif (LG_SIZEOF_PTR == 2) if (x == KZU(0xffffffff)) return (31); x++; return (jemalloc_ffs(x) - 2); #else # error "Unsupported type sizes for lg_floor()" #endif } #endif /* Sets error code */ JEMALLOC_INLINE void set_errno(int errnum) { #ifdef _WIN32 int err = errnum; errno = err; SetLastError(errnum); #else errno = errnum; #endif } /* Get last error code */ JEMALLOC_INLINE int get_errno(void) { #ifdef _WIN32 return (GetLastError()); #else return (errno); #endif } #endif #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/valgrind.h000066400000000000000000000100651361505074100237130ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES #ifdef JEMALLOC_VALGRIND #include /* * The size that is reported to Valgrind must be consistent through a chain of * malloc..realloc..realloc calls. Request size isn't recorded anywhere in * jemalloc, so it is critical that all callers of these macros provide usize * rather than request size. As a result, buffer overflow detection is * technically weakened for the standard API, though it is generally accepted * practice to consider any extra bytes reported by malloc_usable_size() as * usable space. */ #define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do { \ if (in_valgrind) \ valgrind_make_mem_noaccess(ptr, usize); \ } while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do { \ if (in_valgrind) \ valgrind_make_mem_undefined(ptr, usize); \ } while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do { \ if (in_valgrind) \ valgrind_make_mem_defined(ptr, usize); \ } while (0) /* * The VALGRIND_MALLOCLIKE_BLOCK() and VALGRIND_RESIZEINPLACE_BLOCK() macro * calls must be embedded in macros rather than in functions so that when * Valgrind reports errors, there are no extra stack frames in the backtraces. 
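 *
 * Illustrative call-site sketch (an addition, not copied from this tree;
 * alloc_internal() stands in for whatever internal path produced ptr):
 *
 *	ptr = alloc_internal(usize);
 *	JEMALLOC_VALGRIND_MALLOC(ptr != NULL, ptr, usize, false);
 *	...
 *	JEMALLOC_VALGRIND_FREE(ptr, rzsize);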
*/ #define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do { \ if (in_valgrind && cond) \ VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(ptr), zero); \ } while (0) #define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \ ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \ zero) do { \ if (in_valgrind) { \ if (!maybe_moved || ptr == old_ptr) { \ VALGRIND_RESIZEINPLACE_BLOCK(ptr, old_usize, \ usize, p2rz(ptr)); \ if (zero && old_usize < usize) { \ valgrind_make_mem_defined( \ (void *)((uintptr_t)ptr + \ old_usize), usize - old_usize); \ } \ } else { \ if (!old_ptr_maybe_null || old_ptr != NULL) { \ valgrind_freelike_block(old_ptr, \ old_rzsize); \ } \ if (!ptr_maybe_null || ptr != NULL) { \ size_t copy_size = (old_usize < usize) \ ? old_usize : usize; \ size_t tail_size = usize - copy_size; \ VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, \ p2rz(ptr), false); \ if (copy_size > 0) { \ valgrind_make_mem_defined(ptr, \ copy_size); \ } \ if (zero && tail_size > 0) { \ valgrind_make_mem_defined( \ (void *)((uintptr_t)ptr + \ copy_size), tail_size); \ } \ } \ } \ } \ } while (0) #define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do { \ if (in_valgrind) \ valgrind_freelike_block(ptr, rzsize); \ } while (0) #else #define RUNNING_ON_VALGRIND ((unsigned)0) #define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do {} while (0) #define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \ ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \ zero) do {} while (0) #define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do {} while (0) #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #ifdef JEMALLOC_VALGRIND void valgrind_make_mem_noaccess(void *ptr, size_t usize); void valgrind_make_mem_undefined(void *ptr, size_t usize); void valgrind_make_mem_defined(void *ptr, size_t usize); void valgrind_freelike_block(void *ptr, size_t usize); #endif #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/internal/vector.h000066400000000000000000000020471361505074100234100ustar00rootroot00000000000000/******************************************************************************/ #ifdef JEMALLOC_H_TYPES typedef struct vector_s vector_t; typedef struct vec_list_s vec_list_t; #define VECTOR_MIN_PART_SIZE 8 #define VECTOR_INITIALIZER JEMALLOC_ARG_CONCAT({.data = NULL, .size = 0}) #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS struct vec_list_s { vec_list_t *next; int length; void *data[]; }; struct vector_s { vec_list_t *list; }; #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS void *vec_get(vector_t *vector, int index); void vec_set(vector_t *vector, int index, void *val); void vec_delete(vector_t *vector); #endif /* JEMALLOC_H_EXTERNS */ 
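/*
 * Illustrative usage sketch (an addition for clarity, not part of the
 * original header; assumes a zero-initialized vector_t and that an index
 * which was never set reads back as NULL):
 *
 *	vector_t v;
 *	memset(&v, 0, sizeof(v));
 *	vec_set(&v, 3, some_ptr);
 *	p = vec_get(&v, 3);
 *	vec_delete(&v);
 */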
/******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/jemalloc/include/jemalloc/jemalloc.sh000077500000000000000000000007621361505074100222500ustar00rootroot00000000000000#!/bin/sh objroot=$1 cat < #include #include #include #define JEMALLOC_VERSION "@jemalloc_version@" #define JEMALLOC_VERSION_MAJOR @jemalloc_version_major@ #define JEMALLOC_VERSION_MINOR @jemalloc_version_minor@ #define JEMALLOC_VERSION_BUGFIX @jemalloc_version_bugfix@ #define JEMALLOC_VERSION_NREV @jemalloc_version_nrev@ #define JEMALLOC_VERSION_GID "@jemalloc_version_gid@" # define MALLOCX_LG_ALIGN(la) (la) # if LG_SIZEOF_PTR == 2 # define MALLOCX_ALIGN(a) (ffs(a)-1) # else # define MALLOCX_ALIGN(a) \ ((a < (size_t)INT_MAX) ? ffs(a)-1 : ffs(a>>32)+31) # endif # define MALLOCX_ZERO ((int)0x40) /* Bias arena index bits so that 0 encodes "MALLOCX_ARENA() unspecified". */ # define MALLOCX_ARENA(a) ((int)(((a)+1) << 8)) #ifdef JEMALLOC_HAVE_ATTR # define JEMALLOC_ATTR(s) __attribute__((s)) # define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default")) # define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s)) # define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s)) # define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline) #elif _MSC_VER # define JEMALLOC_ATTR(s) # ifndef JEMALLOC_EXPORT # ifdef DLLEXPORT # define JEMALLOC_EXPORT __declspec(dllexport) # else # define JEMALLOC_EXPORT __declspec(dllimport) # endif # endif # define JEMALLOC_ALIGNED(s) __declspec(align(s)) # define JEMALLOC_SECTION(s) __declspec(allocate(s)) # define JEMALLOC_NOINLINE __declspec(noinline) #else # define JEMALLOC_ATTR(s) # define JEMALLOC_EXPORT # define JEMALLOC_ALIGNED(s) # define JEMALLOC_SECTION(s) # define JEMALLOC_NOINLINE #endif vmem-1.8/src/jemalloc/include/jemalloc/jemalloc_mangle.sh000077500000000000000000000023521361505074100235700ustar00rootroot00000000000000#!/bin/sh public_symbols_txt=$1 symbol_prefix=$2 cat < 1000 #pragma once #endif #include "stdint.h" // 7.8 Format conversion of integer types typedef struct { intmax_t quot; intmax_t rem; } imaxdiv_t; // 7.8.1 Macros for format specifiers #if !defined(__cplusplus) || defined(__STDC_FORMAT_MACROS) // [ See footnote 185 at page 198 #ifdef _WIN64 # define __PRI64_PREFIX "l" # define __PRIPTR_PREFIX "l" #else # define __PRI64_PREFIX "ll" # define __PRIPTR_PREFIX #endif // The fprintf macros for signed integers are: #define PRId8 "d" #define PRIi8 "i" #define PRIdLEAST8 "d" #define PRIiLEAST8 "i" #define PRIdFAST8 "d" #define PRIiFAST8 "i" #define PRId16 "hd" #define PRIi16 "hi" #define PRIdLEAST16 "hd" #define PRIiLEAST16 "hi" #define PRIdFAST16 "hd" #define PRIiFAST16 "hi" #define PRId32 "d" #define PRIi32 "i" #define PRIdLEAST32 "d" #define PRIiLEAST32 "i" #define PRIdFAST32 "d" #define PRIiFAST32 "i" #define PRId64 __PRI64_PREFIX "d" #define PRIi64 __PRI64_PREFIX "i" #define PRIdLEAST64 __PRI64_PREFIX "d" #define PRIiLEAST64 __PRI64_PREFIX "i" #define PRIdFAST64 __PRI64_PREFIX "d" #define PRIiFAST64 __PRI64_PREFIX "i" #define PRIdMAX __PRI64_PREFIX "d" #define PRIiMAX __PRI64_PREFIX "i" #define PRIdPTR __PRIPTR_PREFIX "d" #define PRIiPTR __PRIPTR_PREFIX "i" // The fprintf macros for unsigned integers are: #define PRIo8 "o" #define PRIu8 "u" #define PRIx8 "x" #define PRIX8 "X" #define PRIoLEAST8 "o" #define PRIuLEAST8 "u" #define PRIxLEAST8 "x" #define PRIXLEAST8 "X" #define PRIoFAST8 "o" #define PRIuFAST8 "u" #define 
PRIxFAST8 "x" #define PRIXFAST8 "X" #define PRIo16 "ho" #define PRIu16 "hu" #define PRIx16 "hx" #define PRIX16 "hX" #define PRIoLEAST16 "ho" #define PRIuLEAST16 "hu" #define PRIxLEAST16 "hx" #define PRIXLEAST16 "hX" #define PRIoFAST16 "ho" #define PRIuFAST16 "hu" #define PRIxFAST16 "hx" #define PRIXFAST16 "hX" #define PRIo32 "o" #define PRIu32 "u" #define PRIx32 "x" #define PRIX32 "X" #define PRIoLEAST32 "o" #define PRIuLEAST32 "u" #define PRIxLEAST32 "x" #define PRIXLEAST32 "X" #define PRIoFAST32 "o" #define PRIuFAST32 "u" #define PRIxFAST32 "x" #define PRIXFAST32 "X" #define PRIo64 __PRI64_PREFIX "o" #define PRIu64 __PRI64_PREFIX "u" #define PRIx64 __PRI64_PREFIX "x" #define PRIX64 __PRI64_PREFIX "X" #define PRIoLEAST64 __PRI64_PREFIX "o" #define PRIuLEAST64 __PRI64_PREFIX "u" #define PRIxLEAST64 __PRI64_PREFIX "x" #define PRIXLEAST64 __PRI64_PREFIX "X" #define PRIoFAST64 __PRI64_PREFIX "o" #define PRIuFAST64 __PRI64_PREFIX "u" #define PRIxFAST64 __PRI64_PREFIX "x" #define PRIXFAST64 __PRI64_PREFIX "X" #define PRIoMAX __PRI64_PREFIX "o" #define PRIuMAX __PRI64_PREFIX "u" #define PRIxMAX __PRI64_PREFIX "x" #define PRIXMAX __PRI64_PREFIX "X" #define PRIoPTR __PRIPTR_PREFIX "o" #define PRIuPTR __PRIPTR_PREFIX "u" #define PRIxPTR __PRIPTR_PREFIX "x" #define PRIXPTR __PRIPTR_PREFIX "X" // The fscanf macros for signed integers are: #define SCNd8 "d" #define SCNi8 "i" #define SCNdLEAST8 "d" #define SCNiLEAST8 "i" #define SCNdFAST8 "d" #define SCNiFAST8 "i" #define SCNd16 "hd" #define SCNi16 "hi" #define SCNdLEAST16 "hd" #define SCNiLEAST16 "hi" #define SCNdFAST16 "hd" #define SCNiFAST16 "hi" #define SCNd32 "ld" #define SCNi32 "li" #define SCNdLEAST32 "ld" #define SCNiLEAST32 "li" #define SCNdFAST32 "ld" #define SCNiFAST32 "li" #define SCNd64 "I64d" #define SCNi64 "I64i" #define SCNdLEAST64 "I64d" #define SCNiLEAST64 "I64i" #define SCNdFAST64 "I64d" #define SCNiFAST64 "I64i" #define SCNdMAX "I64d" #define SCNiMAX "I64i" #ifdef _WIN64 // [ # define SCNdPTR "I64d" # define SCNiPTR "I64i" #else // _WIN64 ][ # define SCNdPTR "ld" # define SCNiPTR "li" #endif // _WIN64 ] // The fscanf macros for unsigned integers are: #define SCNo8 "o" #define SCNu8 "u" #define SCNx8 "x" #define SCNX8 "X" #define SCNoLEAST8 "o" #define SCNuLEAST8 "u" #define SCNxLEAST8 "x" #define SCNXLEAST8 "X" #define SCNoFAST8 "o" #define SCNuFAST8 "u" #define SCNxFAST8 "x" #define SCNXFAST8 "X" #define SCNo16 "ho" #define SCNu16 "hu" #define SCNx16 "hx" #define SCNX16 "hX" #define SCNoLEAST16 "ho" #define SCNuLEAST16 "hu" #define SCNxLEAST16 "hx" #define SCNXLEAST16 "hX" #define SCNoFAST16 "ho" #define SCNuFAST16 "hu" #define SCNxFAST16 "hx" #define SCNXFAST16 "hX" #define SCNo32 "lo" #define SCNu32 "lu" #define SCNx32 "lx" #define SCNX32 "lX" #define SCNoLEAST32 "lo" #define SCNuLEAST32 "lu" #define SCNxLEAST32 "lx" #define SCNXLEAST32 "lX" #define SCNoFAST32 "lo" #define SCNuFAST32 "lu" #define SCNxFAST32 "lx" #define SCNXFAST32 "lX" #define SCNo64 "I64o" #define SCNu64 "I64u" #define SCNx64 "I64x" #define SCNX64 "I64X" #define SCNoLEAST64 "I64o" #define SCNuLEAST64 "I64u" #define SCNxLEAST64 "I64x" #define SCNXLEAST64 "I64X" #define SCNoFAST64 "I64o" #define SCNuFAST64 "I64u" #define SCNxFAST64 "I64x" #define SCNXFAST64 "I64X" #define SCNoMAX "I64o" #define SCNuMAX "I64u" #define SCNxMAX "I64x" #define SCNXMAX "I64X" #ifdef _WIN64 // [ # define SCNoPTR "I64o" # define SCNuPTR "I64u" # define SCNxPTR "I64x" # define SCNXPTR "I64X" #else // _WIN64 ][ # define SCNoPTR "lo" # define SCNuPTR "lu" # define SCNxPTR "lx" # define 
SCNXPTR "lX" #endif // _WIN64 ] #endif // __STDC_FORMAT_MACROS ] // 7.8.2 Functions for greatest-width integer types // 7.8.2.1 The imaxabs function #define imaxabs _abs64 // 7.8.2.2 The imaxdiv function // This is modified version of div() function from Microsoft's div.c found // in %MSVC.NET%\crt\src\div.c #ifdef STATIC_IMAXDIV // [ static #else // STATIC_IMAXDIV ][ _inline #endif // STATIC_IMAXDIV ] imaxdiv_t __cdecl imaxdiv(intmax_t numer, intmax_t denom) { imaxdiv_t result; result.quot = numer / denom; result.rem = numer % denom; if (numer < 0 && result.rem > 0) { // did division wrong; must fix up ++result.quot; result.rem -= denom; } return result; } // 7.8.2.3 The strtoimax and strtoumax functions #define strtoimax _strtoi64 #define strtoumax _strtoui64 // 7.8.2.4 The wcstoimax and wcstoumax functions #define wcstoimax _wcstoi64 #define wcstoumax _wcstoui64 #endif // _MSC_INTTYPES_H_ ] vmem-1.8/src/jemalloc/include/msvc_compat/C99/stdbool.h000066400000000000000000000007011361505074100230240ustar00rootroot00000000000000#ifndef stdbool_h #define stdbool_h #include /* MSVC doesn't define _Bool or bool in C, but does have BOOL */ /* Note this doesn't pass autoconf's test because (bool) 0.5 != true */ /* Clang-cl uses MSVC headers, so needs msvc_compat, but has _Bool as * a built-in type. */ #ifndef __clang__ typedef BOOL _Bool; #endif #define bool _Bool #define true 1 #define false 0 #define __bool_true_false_are_defined 1 #endif /* stdbool_h */ vmem-1.8/src/jemalloc/include/msvc_compat/C99/stdint.h000066400000000000000000000170601361505074100226710ustar00rootroot00000000000000// ISO C9x compliant stdint.h for Microsoft Visual Studio // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 // // Copyright (c) 2006-2008 Alexander Chemeris // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. The name of the author may be used to endorse or promote products // derived from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO // EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // /////////////////////////////////////////////////////////////////////////////// #ifndef _MSC_VER // [ #error "Use this header only with Microsoft Visual C++ compilers!" 
#endif // _MSC_VER ] #ifndef _MSC_STDINT_H_ // [ #define _MSC_STDINT_H_ #if _MSC_VER > 1000 #pragma once #endif #include // For Visual Studio 6 in C++ mode and for many Visual Studio versions when // compiling for ARM we should wrap include with 'extern "C++" {}' // or compiler give many errors like this: // error C2733: second C linkage of overloaded function 'wmemchr' not allowed #ifdef __cplusplus extern "C" { #endif # include #ifdef __cplusplus } #endif // Define _W64 macros to mark types changing their size, like intptr_t. #ifndef _W64 # if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300 # define _W64 __w64 # else # define _W64 # endif #endif // 7.18.1 Integer types // 7.18.1.1 Exact-width integer types // Visual Studio 6 and Embedded Visual C++ 4 doesn't // realize that, e.g. char has the same size as __int8 // so we give up on __intX for them. #if (_MSC_VER < 1300) typedef signed char int8_t; typedef signed short int16_t; typedef signed int int32_t; typedef unsigned char uint8_t; typedef unsigned short uint16_t; typedef unsigned int uint32_t; #else typedef signed __int8 int8_t; typedef signed __int16 int16_t; typedef signed __int32 int32_t; typedef unsigned __int8 uint8_t; typedef unsigned __int16 uint16_t; typedef unsigned __int32 uint32_t; #endif typedef signed __int64 int64_t; typedef unsigned __int64 uint64_t; // 7.18.1.2 Minimum-width integer types typedef int8_t int_least8_t; typedef int16_t int_least16_t; typedef int32_t int_least32_t; typedef int64_t int_least64_t; typedef uint8_t uint_least8_t; typedef uint16_t uint_least16_t; typedef uint32_t uint_least32_t; typedef uint64_t uint_least64_t; // 7.18.1.3 Fastest minimum-width integer types typedef int8_t int_fast8_t; typedef int16_t int_fast16_t; typedef int32_t int_fast32_t; typedef int64_t int_fast64_t; typedef uint8_t uint_fast8_t; typedef uint16_t uint_fast16_t; typedef uint32_t uint_fast32_t; typedef uint64_t uint_fast64_t; // 7.18.1.4 Integer types capable of holding object pointers #ifdef _WIN64 // [ typedef signed __int64 intptr_t; typedef unsigned __int64 uintptr_t; #else // _WIN64 ][ typedef _W64 signed int intptr_t; typedef _W64 unsigned int uintptr_t; #endif // _WIN64 ] // 7.18.1.5 Greatest-width integer types typedef int64_t intmax_t; typedef uint64_t uintmax_t; // 7.18.2 Limits of specified-width integer types #if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [ See footnote 220 at page 257 and footnote 221 at page 259 // 7.18.2.1 Limits of exact-width integer types #define INT8_MIN ((int8_t)_I8_MIN) #define INT8_MAX _I8_MAX #define INT16_MIN ((int16_t)_I16_MIN) #define INT16_MAX _I16_MAX #define INT32_MIN ((int32_t)_I32_MIN) #define INT32_MAX _I32_MAX #define INT64_MIN ((int64_t)_I64_MIN) #define INT64_MAX _I64_MAX #define UINT8_MAX _UI8_MAX #define UINT16_MAX _UI16_MAX #define UINT32_MAX _UI32_MAX #define UINT64_MAX _UI64_MAX // 7.18.2.2 Limits of minimum-width integer types #define INT_LEAST8_MIN INT8_MIN #define INT_LEAST8_MAX INT8_MAX #define INT_LEAST16_MIN INT16_MIN #define INT_LEAST16_MAX INT16_MAX #define INT_LEAST32_MIN INT32_MIN #define INT_LEAST32_MAX INT32_MAX #define INT_LEAST64_MIN INT64_MIN #define INT_LEAST64_MAX INT64_MAX #define UINT_LEAST8_MAX UINT8_MAX #define UINT_LEAST16_MAX UINT16_MAX #define UINT_LEAST32_MAX UINT32_MAX #define UINT_LEAST64_MAX UINT64_MAX // 7.18.2.3 Limits of fastest minimum-width integer types #define INT_FAST8_MIN INT8_MIN #define INT_FAST8_MAX INT8_MAX #define INT_FAST16_MIN INT16_MIN #define INT_FAST16_MAX INT16_MAX #define 
INT_FAST32_MIN INT32_MIN #define INT_FAST32_MAX INT32_MAX #define INT_FAST64_MIN INT64_MIN #define INT_FAST64_MAX INT64_MAX #define UINT_FAST8_MAX UINT8_MAX #define UINT_FAST16_MAX UINT16_MAX #define UINT_FAST32_MAX UINT32_MAX #define UINT_FAST64_MAX UINT64_MAX // 7.18.2.4 Limits of integer types capable of holding object pointers #ifdef _WIN64 // [ # define INTPTR_MIN INT64_MIN # define INTPTR_MAX INT64_MAX # define UINTPTR_MAX UINT64_MAX #else // _WIN64 ][ # define INTPTR_MIN INT32_MIN # define INTPTR_MAX INT32_MAX # define UINTPTR_MAX UINT32_MAX #endif // _WIN64 ] // 7.18.2.5 Limits of greatest-width integer types #define INTMAX_MIN INT64_MIN #define INTMAX_MAX INT64_MAX #define UINTMAX_MAX UINT64_MAX // 7.18.3 Limits of other integer types #ifdef _WIN64 // [ # define PTRDIFF_MIN _I64_MIN # define PTRDIFF_MAX _I64_MAX #else // _WIN64 ][ # define PTRDIFF_MIN _I32_MIN # define PTRDIFF_MAX _I32_MAX #endif // _WIN64 ] #define SIG_ATOMIC_MIN INT_MIN #define SIG_ATOMIC_MAX INT_MAX #ifndef SIZE_MAX // [ # ifdef _WIN64 // [ # define SIZE_MAX _UI64_MAX # else // _WIN64 ][ # define SIZE_MAX _UI32_MAX # endif // _WIN64 ] #endif // SIZE_MAX ] // WCHAR_MIN and WCHAR_MAX are also defined in #ifndef WCHAR_MIN // [ # define WCHAR_MIN 0 #endif // WCHAR_MIN ] #ifndef WCHAR_MAX // [ # define WCHAR_MAX _UI16_MAX #endif // WCHAR_MAX ] #define WINT_MIN 0 #define WINT_MAX _UI16_MAX #endif // __STDC_LIMIT_MACROS ] // 7.18.4 Limits of other integer types #if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [ See footnote 224 at page 260 // 7.18.4.1 Macros for minimum-width integer constants #define INT8_C(val) val##i8 #define INT16_C(val) val##i16 #define INT32_C(val) val##i32 #define INT64_C(val) val##i64 #define UINT8_C(val) val##ui8 #define UINT16_C(val) val##ui16 #define UINT32_C(val) val##ui32 #define UINT64_C(val) val##ui64 // 7.18.4.2 Macros for greatest-width integer constants #define INTMAX_C INT64_C #define UINTMAX_C UINT64_C #endif // __STDC_CONSTANT_MACROS ] #endif // _MSC_STDINT_H_ ] vmem-1.8/src/jemalloc/include/msvc_compat/inttypes.h000066400000000000000000000204531361505074100226770ustar00rootroot00000000000000// ISO C9x compliant inttypes.h for Microsoft Visual Studio // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 // // Copyright (c) 2006 Alexander Chemeris // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. The name of the author may be used to endorse or promote products // derived from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO // EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // /////////////////////////////////////////////////////////////////////////////// #ifndef _MSC_VER // [ #error "Use this header only with Microsoft Visual C++ compilers!" #endif // _MSC_VER ] #ifndef _MSC_INTTYPES_H_ // [ #define _MSC_INTTYPES_H_ #if _MSC_VER > 1000 #pragma once #endif #include "stdint.h" // 7.8 Format conversion of integer types typedef struct { intmax_t quot; intmax_t rem; } imaxdiv_t; // 7.8.1 Macros for format specifiers #if !defined(__cplusplus) || defined(__STDC_FORMAT_MACROS) // [ See footnote 185 at page 198 #ifdef _WIN64 # define __PRI64_PREFIX "l" # define __PRIPTR_PREFIX "l" #else # define __PRI64_PREFIX "ll" # define __PRIPTR_PREFIX #endif // The fprintf macros for signed integers are: #define PRId8 "d" #define PRIi8 "i" #define PRIdLEAST8 "d" #define PRIiLEAST8 "i" #define PRIdFAST8 "d" #define PRIiFAST8 "i" #define PRId16 "hd" #define PRIi16 "hi" #define PRIdLEAST16 "hd" #define PRIiLEAST16 "hi" #define PRIdFAST16 "hd" #define PRIiFAST16 "hi" #define PRId32 "d" #define PRIi32 "i" #define PRIdLEAST32 "d" #define PRIiLEAST32 "i" #define PRIdFAST32 "d" #define PRIiFAST32 "i" #define PRId64 __PRI64_PREFIX "d" #define PRIi64 __PRI64_PREFIX "i" #define PRIdLEAST64 __PRI64_PREFIX "d" #define PRIiLEAST64 __PRI64_PREFIX "i" #define PRIdFAST64 __PRI64_PREFIX "d" #define PRIiFAST64 __PRI64_PREFIX "i" #define PRIdMAX __PRI64_PREFIX "d" #define PRIiMAX __PRI64_PREFIX "i" #define PRIdPTR __PRIPTR_PREFIX "d" #define PRIiPTR __PRIPTR_PREFIX "i" // The fprintf macros for unsigned integers are: #define PRIo8 "o" #define PRIu8 "u" #define PRIx8 "x" #define PRIX8 "X" #define PRIoLEAST8 "o" #define PRIuLEAST8 "u" #define PRIxLEAST8 "x" #define PRIXLEAST8 "X" #define PRIoFAST8 "o" #define PRIuFAST8 "u" #define PRIxFAST8 "x" #define PRIXFAST8 "X" #define PRIo16 "ho" #define PRIu16 "hu" #define PRIx16 "hx" #define PRIX16 "hX" #define PRIoLEAST16 "ho" #define PRIuLEAST16 "hu" #define PRIxLEAST16 "hx" #define PRIXLEAST16 "hX" #define PRIoFAST16 "ho" #define PRIuFAST16 "hu" #define PRIxFAST16 "hx" #define PRIXFAST16 "hX" #define PRIo32 "o" #define PRIu32 "u" #define PRIx32 "x" #define PRIX32 "X" #define PRIoLEAST32 "o" #define PRIuLEAST32 "u" #define PRIxLEAST32 "x" #define PRIXLEAST32 "X" #define PRIoFAST32 "o" #define PRIuFAST32 "u" #define PRIxFAST32 "x" #define PRIXFAST32 "X" #define PRIo64 __PRI64_PREFIX "o" #define PRIu64 __PRI64_PREFIX "u" #define PRIx64 __PRI64_PREFIX "x" #define PRIX64 __PRI64_PREFIX "X" #define PRIoLEAST64 __PRI64_PREFIX "o" #define PRIuLEAST64 __PRI64_PREFIX "u" #define PRIxLEAST64 __PRI64_PREFIX "x" #define PRIXLEAST64 __PRI64_PREFIX "X" #define PRIoFAST64 __PRI64_PREFIX "o" #define PRIuFAST64 __PRI64_PREFIX "u" #define PRIxFAST64 __PRI64_PREFIX "x" #define PRIXFAST64 __PRI64_PREFIX "X" #define PRIoMAX __PRI64_PREFIX "o" #define PRIuMAX __PRI64_PREFIX "u" #define PRIxMAX __PRI64_PREFIX "x" #define PRIXMAX __PRI64_PREFIX "X" #define PRIoPTR __PRIPTR_PREFIX "o" #define PRIuPTR __PRIPTR_PREFIX "u" #define PRIxPTR __PRIPTR_PREFIX "x" #define PRIXPTR 
__PRIPTR_PREFIX "X" // The fscanf macros for signed integers are: #define SCNd8 "d" #define SCNi8 "i" #define SCNdLEAST8 "d" #define SCNiLEAST8 "i" #define SCNdFAST8 "d" #define SCNiFAST8 "i" #define SCNd16 "hd" #define SCNi16 "hi" #define SCNdLEAST16 "hd" #define SCNiLEAST16 "hi" #define SCNdFAST16 "hd" #define SCNiFAST16 "hi" #define SCNd32 "ld" #define SCNi32 "li" #define SCNdLEAST32 "ld" #define SCNiLEAST32 "li" #define SCNdFAST32 "ld" #define SCNiFAST32 "li" #define SCNd64 "I64d" #define SCNi64 "I64i" #define SCNdLEAST64 "I64d" #define SCNiLEAST64 "I64i" #define SCNdFAST64 "I64d" #define SCNiFAST64 "I64i" #define SCNdMAX "I64d" #define SCNiMAX "I64i" #ifdef _WIN64 // [ # define SCNdPTR "I64d" # define SCNiPTR "I64i" #else // _WIN64 ][ # define SCNdPTR "ld" # define SCNiPTR "li" #endif // _WIN64 ] // The fscanf macros for unsigned integers are: #define SCNo8 "o" #define SCNu8 "u" #define SCNx8 "x" #define SCNX8 "X" #define SCNoLEAST8 "o" #define SCNuLEAST8 "u" #define SCNxLEAST8 "x" #define SCNXLEAST8 "X" #define SCNoFAST8 "o" #define SCNuFAST8 "u" #define SCNxFAST8 "x" #define SCNXFAST8 "X" #define SCNo16 "ho" #define SCNu16 "hu" #define SCNx16 "hx" #define SCNX16 "hX" #define SCNoLEAST16 "ho" #define SCNuLEAST16 "hu" #define SCNxLEAST16 "hx" #define SCNXLEAST16 "hX" #define SCNoFAST16 "ho" #define SCNuFAST16 "hu" #define SCNxFAST16 "hx" #define SCNXFAST16 "hX" #define SCNo32 "lo" #define SCNu32 "lu" #define SCNx32 "lx" #define SCNX32 "lX" #define SCNoLEAST32 "lo" #define SCNuLEAST32 "lu" #define SCNxLEAST32 "lx" #define SCNXLEAST32 "lX" #define SCNoFAST32 "lo" #define SCNuFAST32 "lu" #define SCNxFAST32 "lx" #define SCNXFAST32 "lX" #define SCNo64 "I64o" #define SCNu64 "I64u" #define SCNx64 "I64x" #define SCNX64 "I64X" #define SCNoLEAST64 "I64o" #define SCNuLEAST64 "I64u" #define SCNxLEAST64 "I64x" #define SCNXLEAST64 "I64X" #define SCNoFAST64 "I64o" #define SCNuFAST64 "I64u" #define SCNxFAST64 "I64x" #define SCNXFAST64 "I64X" #define SCNoMAX "I64o" #define SCNuMAX "I64u" #define SCNxMAX "I64x" #define SCNXMAX "I64X" #ifdef _WIN64 // [ # define SCNoPTR "I64o" # define SCNuPTR "I64u" # define SCNxPTR "I64x" # define SCNXPTR "I64X" #else // _WIN64 ][ # define SCNoPTR "lo" # define SCNuPTR "lu" # define SCNxPTR "lx" # define SCNXPTR "lX" #endif // _WIN64 ] #endif // __STDC_FORMAT_MACROS ] // 7.8.2 Functions for greatest-width integer types // 7.8.2.1 The imaxabs function #define imaxabs _abs64 // 7.8.2.2 The imaxdiv function // This is modified version of div() function from Microsoft's div.c found // in %MSVC.NET%\crt\src\div.c #ifdef STATIC_IMAXDIV // [ static #else // STATIC_IMAXDIV ][ _inline #endif // STATIC_IMAXDIV ] imaxdiv_t __cdecl imaxdiv(intmax_t numer, intmax_t denom) { imaxdiv_t result; result.quot = numer / denom; result.rem = numer % denom; if (numer < 0 && result.rem > 0) { // did division wrong; must fix up ++result.quot; result.rem -= denom; } return result; } // 7.8.2.3 The strtoimax and strtoumax functions #define strtoimax _strtoi64 #define strtoumax _strtoui64 // 7.8.2.4 The wcstoimax and wcstoumax functions #define wcstoimax _wcstoi64 #define wcstoumax _wcstoui64 #endif // _MSC_INTTYPES_H_ ] vmem-1.8/src/jemalloc/include/msvc_compat/stdint.h000066400000000000000000000170601361505074100223250ustar00rootroot00000000000000// ISO C9x compliant stdint.h for Microsoft Visual Studio // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 // // Copyright (c) 2006-2008 Alexander Chemeris // // Redistribution and use in source and binary forms, with or without 
// modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. The name of the author may be used to endorse or promote products // derived from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO // EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // /////////////////////////////////////////////////////////////////////////////// #ifndef _MSC_VER // [ #error "Use this header only with Microsoft Visual C++ compilers!" #endif // _MSC_VER ] #ifndef _MSC_STDINT_H_ // [ #define _MSC_STDINT_H_ #if _MSC_VER > 1000 #pragma once #endif #include // For Visual Studio 6 in C++ mode and for many Visual Studio versions when // compiling for ARM we should wrap include with 'extern "C++" {}' // or compiler give many errors like this: // error C2733: second C linkage of overloaded function 'wmemchr' not allowed #ifdef __cplusplus extern "C" { #endif # include #ifdef __cplusplus } #endif // Define _W64 macros to mark types changing their size, like intptr_t. #ifndef _W64 # if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300 # define _W64 __w64 # else # define _W64 # endif #endif // 7.18.1 Integer types // 7.18.1.1 Exact-width integer types // Visual Studio 6 and Embedded Visual C++ 4 doesn't // realize that, e.g. char has the same size as __int8 // so we give up on __intX for them. 
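// As a concrete consequence of the typedefs below (illustrative use only,
// variable names are made up): under _WIN64 intptr_t is a signed __int64,
// so a data pointer survives a round trip through it unchanged,
//
//   void *p = &some_object;
//   assert((void *)(intptr_t)p == p);
//
// while 32-bit builds fall back to a _W64-annotated int of the same width
// as a pointer.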
#if (_MSC_VER < 1300) typedef signed char int8_t; typedef signed short int16_t; typedef signed int int32_t; typedef unsigned char uint8_t; typedef unsigned short uint16_t; typedef unsigned int uint32_t; #else typedef signed __int8 int8_t; typedef signed __int16 int16_t; typedef signed __int32 int32_t; typedef unsigned __int8 uint8_t; typedef unsigned __int16 uint16_t; typedef unsigned __int32 uint32_t; #endif typedef signed __int64 int64_t; typedef unsigned __int64 uint64_t; // 7.18.1.2 Minimum-width integer types typedef int8_t int_least8_t; typedef int16_t int_least16_t; typedef int32_t int_least32_t; typedef int64_t int_least64_t; typedef uint8_t uint_least8_t; typedef uint16_t uint_least16_t; typedef uint32_t uint_least32_t; typedef uint64_t uint_least64_t; // 7.18.1.3 Fastest minimum-width integer types typedef int8_t int_fast8_t; typedef int16_t int_fast16_t; typedef int32_t int_fast32_t; typedef int64_t int_fast64_t; typedef uint8_t uint_fast8_t; typedef uint16_t uint_fast16_t; typedef uint32_t uint_fast32_t; typedef uint64_t uint_fast64_t; // 7.18.1.4 Integer types capable of holding object pointers #ifdef _WIN64 // [ typedef signed __int64 intptr_t; typedef unsigned __int64 uintptr_t; #else // _WIN64 ][ typedef _W64 signed int intptr_t; typedef _W64 unsigned int uintptr_t; #endif // _WIN64 ] // 7.18.1.5 Greatest-width integer types typedef int64_t intmax_t; typedef uint64_t uintmax_t; // 7.18.2 Limits of specified-width integer types #if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [ See footnote 220 at page 257 and footnote 221 at page 259 // 7.18.2.1 Limits of exact-width integer types #define INT8_MIN ((int8_t)_I8_MIN) #define INT8_MAX _I8_MAX #define INT16_MIN ((int16_t)_I16_MIN) #define INT16_MAX _I16_MAX #define INT32_MIN ((int32_t)_I32_MIN) #define INT32_MAX _I32_MAX #define INT64_MIN ((int64_t)_I64_MIN) #define INT64_MAX _I64_MAX #define UINT8_MAX _UI8_MAX #define UINT16_MAX _UI16_MAX #define UINT32_MAX _UI32_MAX #define UINT64_MAX _UI64_MAX // 7.18.2.2 Limits of minimum-width integer types #define INT_LEAST8_MIN INT8_MIN #define INT_LEAST8_MAX INT8_MAX #define INT_LEAST16_MIN INT16_MIN #define INT_LEAST16_MAX INT16_MAX #define INT_LEAST32_MIN INT32_MIN #define INT_LEAST32_MAX INT32_MAX #define INT_LEAST64_MIN INT64_MIN #define INT_LEAST64_MAX INT64_MAX #define UINT_LEAST8_MAX UINT8_MAX #define UINT_LEAST16_MAX UINT16_MAX #define UINT_LEAST32_MAX UINT32_MAX #define UINT_LEAST64_MAX UINT64_MAX // 7.18.2.3 Limits of fastest minimum-width integer types #define INT_FAST8_MIN INT8_MIN #define INT_FAST8_MAX INT8_MAX #define INT_FAST16_MIN INT16_MIN #define INT_FAST16_MAX INT16_MAX #define INT_FAST32_MIN INT32_MIN #define INT_FAST32_MAX INT32_MAX #define INT_FAST64_MIN INT64_MIN #define INT_FAST64_MAX INT64_MAX #define UINT_FAST8_MAX UINT8_MAX #define UINT_FAST16_MAX UINT16_MAX #define UINT_FAST32_MAX UINT32_MAX #define UINT_FAST64_MAX UINT64_MAX // 7.18.2.4 Limits of integer types capable of holding object pointers #ifdef _WIN64 // [ # define INTPTR_MIN INT64_MIN # define INTPTR_MAX INT64_MAX # define UINTPTR_MAX UINT64_MAX #else // _WIN64 ][ # define INTPTR_MIN INT32_MIN # define INTPTR_MAX INT32_MAX # define UINTPTR_MAX UINT32_MAX #endif // _WIN64 ] // 7.18.2.5 Limits of greatest-width integer types #define INTMAX_MIN INT64_MIN #define INTMAX_MAX INT64_MAX #define UINTMAX_MAX UINT64_MAX // 7.18.3 Limits of other integer types #ifdef _WIN64 // [ # define PTRDIFF_MIN _I64_MIN # define PTRDIFF_MAX _I64_MAX #else // _WIN64 ][ # define PTRDIFF_MIN _I32_MIN # define 
PTRDIFF_MAX _I32_MAX #endif // _WIN64 ] #define SIG_ATOMIC_MIN INT_MIN #define SIG_ATOMIC_MAX INT_MAX #ifndef SIZE_MAX // [ # ifdef _WIN64 // [ # define SIZE_MAX _UI64_MAX # else // _WIN64 ][ # define SIZE_MAX _UI32_MAX # endif // _WIN64 ] #endif // SIZE_MAX ] // WCHAR_MIN and WCHAR_MAX are also defined in #ifndef WCHAR_MIN // [ # define WCHAR_MIN 0 #endif // WCHAR_MIN ] #ifndef WCHAR_MAX // [ # define WCHAR_MAX _UI16_MAX #endif // WCHAR_MAX ] #define WINT_MIN 0 #define WINT_MAX _UI16_MAX #endif // __STDC_LIMIT_MACROS ] // 7.18.4 Limits of other integer types #if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [ See footnote 224 at page 260 // 7.18.4.1 Macros for minimum-width integer constants #define INT8_C(val) val##i8 #define INT16_C(val) val##i16 #define INT32_C(val) val##i32 #define INT64_C(val) val##i64 #define UINT8_C(val) val##ui8 #define UINT16_C(val) val##ui16 #define UINT32_C(val) val##ui32 #define UINT64_C(val) val##ui64 // 7.18.4.2 Macros for greatest-width integer constants #define INTMAX_C INT64_C #define UINTMAX_C UINT64_C #endif // __STDC_CONSTANT_MACROS ] #endif // _MSC_STDINT_H_ ] vmem-1.8/src/jemalloc/include/msvc_compat/strings.h000066400000000000000000000020271361505074100225060ustar00rootroot00000000000000#ifndef strings_h #define strings_h /* MSVC doesn't define ffs/ffsl. This dummy strings.h header is provided * for both */ #ifdef _MSC_VER # include # pragma intrinsic(_BitScanForward) static __forceinline int ffsl(long x) { unsigned long i; if (_BitScanForward(&i, x)) return (i + 1); return (0); } static __forceinline int ffs(int x) { return (ffsl(x)); } # ifdef _M_X64 # pragma intrinsic(_BitScanForward64) # endif static __forceinline int ffsll(unsigned __int64 x) { unsigned long i; #ifdef _M_X64 if (_BitScanForward64(&i, x)) return (i + 1); return (0); #else // Fallback for 32-bit build where 64-bit version not available // assuming little endian union { unsigned __int64 ll; unsigned long l[2]; } s; s.ll = x; if (_BitScanForward(&i, s.l[0])) return (i + 1); else if(_BitScanForward(&i, s.l[1])) return (i + 33); return (0); #endif } #else # define ffsll(x) __builtin_ffsll(x) # define ffsl(x) __builtin_ffsl(x) # define ffs(x) __builtin_ffs(x) #endif #endif /* strings_h */ vmem-1.8/src/jemalloc/include/msvc_compat/windows_extra.h000066400000000000000000000010211361505074100237030ustar00rootroot00000000000000#ifndef MSVC_COMPAT_WINDOWS_EXTRA_H #define MSVC_COMPAT_WINDOWS_EXTRA_H #ifndef ENOENT # define ENOENT ERROR_PATH_NOT_FOUND #endif #ifndef EINVAL # define EINVAL ERROR_BAD_ARGUMENTS #endif #ifndef EAGAIN # define EAGAIN ERROR_OUTOFMEMORY #endif #ifndef EPERM # define EPERM ERROR_WRITE_FAULT #endif #ifndef EFAULT # define EFAULT ERROR_INVALID_ADDRESS #endif #ifndef ENOMEM # define ENOMEM ERROR_NOT_ENOUGH_MEMORY #endif #ifndef ERANGE # define ERANGE ERROR_INVALID_DATA #endif #endif /* MSVC_COMPAT_WINDOWS_EXTRA_H */ vmem-1.8/src/jemalloc/install-sh000077500000000000000000000127211361505074100167140ustar00rootroot00000000000000#! /bin/sh # # install - install a program, script, or datafile # This comes from X11R5 (mit/util/scripts/install.sh). # # Copyright 1991 by the Massachusetts Institute of Technology # # Permission to use, copy, modify, distribute, and sell this software and its # documentation for any purpose is hereby granted without fee, provided that # the above copyright notice appear in all copies and that both that # copyright notice and this permission notice appear in supporting # documentation, and that the name of M.I.T. 
not be used in advertising or # publicity pertaining to distribution of the software without specific, # written prior permission. M.I.T. makes no representations about the # suitability of this software for any purpose. It is provided "as is" # without express or implied warranty. # # Calling this script install-sh is preferred over install.sh, to prevent # `make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. It can only install one file at a time, a restriction # shared with many OS's install programs. # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit="${DOITPROG-}" # put in absolute paths if you don't have them in your path; or use env. vars. mvprog="${MVPROG-mv}" cpprog="${CPPROG-cp}" chmodprog="${CHMODPROG-chmod}" chownprog="${CHOWNPROG-chown}" chgrpprog="${CHGRPPROG-chgrp}" stripprog="${STRIPPROG-strip}" rmprog="${RMPROG-rm}" mkdirprog="${MKDIRPROG-mkdir}" transformbasename="" transform_arg="" instcmd="$mvprog" chmodcmd="$chmodprog 0755" chowncmd="" chgrpcmd="" stripcmd="" rmcmd="$rmprog -f" mvcmd="$mvprog" src="" dst="" dir_arg="" while [ x"$1" != x ]; do case $1 in -c) instcmd="$cpprog" shift continue;; -d) dir_arg=true shift continue;; -m) chmodcmd="$chmodprog $2" shift shift continue;; -o) chowncmd="$chownprog $2" shift shift continue;; -g) chgrpcmd="$chgrpprog $2" shift shift continue;; -s) stripcmd="$stripprog" shift continue;; -t=*) transformarg=`echo $1 | sed 's/-t=//'` shift continue;; -b=*) transformbasename=`echo $1 | sed 's/-b=//'` shift continue;; *) if [ x"$src" = x ] then src=$1 else # this colon is to work around a 386BSD /bin/sh bug : dst=$1 fi shift continue;; esac done if [ x"$src" = x ] then echo "install: no input file specified" exit 1 else true fi if [ x"$dir_arg" != x ]; then dst=$src src="" if [ -d $dst ]; then instcmd=: else instcmd=mkdir fi else # Waiting for this to be detected by the "$instcmd $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if [ -f $src -o -d $src ] then true else echo "install: $src does not exist" exit 1 fi if [ x"$dst" = x ] then echo "install: no destination specified" exit 1 else true fi # If destination is a directory, append the input filename; if your system # does not like double slashes in filenames, you may need to add some logic if [ -d $dst ] then dst="$dst"/`basename $src` else true fi fi ## this sed command emulates the dirname command dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` # Make sure that the destination directory exists. # this part is taken from Noah Friedman's mkinstalldirs script # Skip lots of stat calls in the usual case. if [ ! -d "$dstdir" ]; then defaultIFS=' ' IFS="${IFS-${defaultIFS}}" oIFS="${IFS}" # Some sh's can't handle IFS=/ for some reason. IFS='%' set - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'` IFS="${oIFS}" pathcomp='' while [ $# -ne 0 ] ; do pathcomp="${pathcomp}${1}" shift if [ ! 
-d "${pathcomp}" ] ; then $mkdirprog "${pathcomp}" else true fi pathcomp="${pathcomp}/" done fi if [ x"$dir_arg" != x ] then $doit $instcmd $dst && if [ x"$chowncmd" != x ]; then $doit $chowncmd $dst; else true ; fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dst; else true ; fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dst; else true ; fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dst; else true ; fi else # If we're going to rename the final executable, determine the name now. if [ x"$transformarg" = x ] then dstfile=`basename $dst` else dstfile=`basename $dst $transformbasename | sed $transformarg`$transformbasename fi # don't allow the sed command to completely eliminate the filename if [ x"$dstfile" = x ] then dstfile=`basename $dst` else true fi # Make a temp file name in the proper directory. dsttmp=$dstdir/#inst.$$# # Move or copy the file name to the temp name $doit $instcmd $src $dsttmp && trap "rm -f ${dsttmp}" 0 && # and set any options; do chmod last to preserve setuid bits # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $instcmd $src $dsttmp" command. if [ x"$chowncmd" != x ]; then $doit $chowncmd $dsttmp; else true;fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dsttmp; else true;fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dsttmp; else true;fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dsttmp; else true;fi && # Now rename the file to the real destination. $doit $rmcmd -f $dstdir/$dstfile && $doit $mvcmd $dsttmp $dstdir/$dstfile fi && exit 0 vmem-1.8/src/jemalloc/jemalloc.cfg000066400000000000000000000001661361505074100171570ustar00rootroot00000000000000--without-export --with-jemalloc-prefix=je_vmem_ --with-private-namespace=je_vmem_ --disable-xmalloc --disable-munmap vmem-1.8/src/jemalloc/jemalloc.mk000066400000000000000000000122111361505074100170210ustar00rootroot00000000000000# Copyright 2014-2020, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
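# The configure flags consumed below come from jemalloc.cfg (above):
# --with-jemalloc-prefix=je_vmem_ renames jemalloc's public entry points
# (for example je_vmem_malloc()/je_vmem_free() instead of malloc()/free()),
# so the embedded allocator never collides with the system malloc, while
# --with-private-namespace=je_vmem_ similarly mangles the library's internal
# symbols. --disable-xmalloc drops the abort-on-OOM xmalloc option and
# --disable-munmap keeps retained virtual memory mapped instead of
# unmapping it. A hand-run equivalent of the configure step driven by this
# makefile would look roughly like:
#
#   cd $(JEMALLOC_OBJDIR) && \
#       CFLAGS="$(JEMALLOC_CFLAGS)" LDFLAGS="$(JEMALLOC_LDFLAGS)" \
#       $(JEMALLOC_DIR)/configure \
#       --without-export --with-jemalloc-prefix=je_vmem_ \
#       --with-private-namespace=je_vmem_ --disable-xmalloc --disable-munmap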
# # src/deps/jemalloc.mk -- rules for jemalloc # ifeq ($(DEBUG),1) OBJDIR = debug else OBJDIR = nondebug endif JEMALLOC_DIR = $(realpath ../jemalloc) ifeq ($(OBJDIR),$(abspath $(OBJDIR))) JEMALLOC_OBJDIR = $(OBJDIR)/jemalloc else JEMALLOC_OBJDIR = ../$(OBJDIR)/$(JEMALLOC_VMEMDIR)/jemalloc endif JEMALLOC_MAKEFILE = $(JEMALLOC_OBJDIR)/Makefile JEMALLOC_CFG = $(JEMALLOC_DIR)/configure JEMALLOC_CFG_AC = $(JEMALLOC_DIR)/configure.ac JEMALLOC_LIB_AR = libjemalloc_pic.a JEMALLOC_LIB = $(JEMALLOC_OBJDIR)/lib/$(JEMALLOC_LIB_AR) JEMALLOC_CFG_IN_FILES = $(shell find $(JEMALLOC_DIR) -name "*.in") JEMALLOC_CFG_GEN_FILES = $(JEMALLOC_CFG_IN_FILES:.in=) JEMALLOC_CFG_OUT_FILES = $(patsubst $(JEMALLOC_DIR)/%, $(JEMALLOC_OBJDIR)/%, $(JEMALLOC_CFG_GEN_FILES)) JEMALLOC_AUTOM4TE_CACHE=autom4te.cache JEMALLOC_CONFIG_FILE = $(JEMALLOC_DIR)/jemalloc.cfg JEMALLOC_CONFIG = $(shell cat $(JEMALLOC_CONFIG_FILE)) ifeq ($(shell uname -s),FreeBSD) ifndef $(CC) JEMALLOC_CONFIG += CC=$(CC) # Default to system compiler (not gcc) on FreeBSD endif endif CFLAGS_FILTER += -fno-common CFLAGS_FILTER += -Wmissing-prototypes CFLAGS_FILTER += -Wpointer-arith CFLAGS_FILTER += -Wunused-macros CFLAGS_FILTER += -Wmissing-field-initializers CFLAGS_FILTER += -Wunreachable-code-return CFLAGS_FILTER += -Wmissing-variable-declarations CFLAGS_FILTER += -Weverything CFLAGS_FILTER += -Wextra CFLAGS_FILTER += -Wsign-conversion CFLAGS_FILTER += -Wsign-compare CFLAGS_FILTER += -Wconversion CFLAGS_FILTER += -Wunused-parameter CFLAGS_FILTER += -Wpadded CFLAGS_FILTER += -Wcast-align CFLAGS_FILTER += -Wvla CFLAGS_FILTER += -Wpedantic CFLAGS_FILTER += -Wshadow CFLAGS_FILTER += -Wdisabled-macro-expansion CFLAGS_FILTER += -Wlanguage-extension-token CFLAGS_FILTER += -Wfloat-equal CFLAGS_FILTER += -Wswitch-default CFLAGS_FILTER += -Wcast-function-type JEMALLOC_CFLAGS=$(filter-out $(CFLAGS_FILTER), $(CFLAGS)) -fcommon ifeq ($(shell uname -s),FreeBSD) JEMALLOC_CFLAGS += -I/usr/local/include endif JEMALLOC_REMOVE_LDFLAGS_TMP = -Wl,--warn-common JEMALLOC_LDFLAGS=$(filter-out $(JEMALLOC_REMOVE_LDFLAGS_TMP), $(LDFLAGS)) -fcommon JEMALLOC_CFG_OUT_FILES_FIRST=$(firstword $(JEMALLOC_CFG_OUT_FILES)) JEMALLOC_CFG_OUT_FILES_REST=$(filter-out $(JEMALLOC_CFG_OUT_FILES_FIRST), $(JEMALLOC_CFG_OUT_FILES)) jemalloc $(JEMALLOC_LIB): $(JEMALLOC_CFG_OUT_FILES) $(MAKE) objroot=$(JEMALLOC_OBJDIR)/ -f $(JEMALLOC_MAKEFILE) -C $(JEMALLOC_DIR) all $(JEMALLOC_CFG_OUT_FILES_FIRST): $(JEMALLOC_CFG) $(JEMALLOC_CONFIG_FILE) $(MKDIR) -p $(JEMALLOC_OBJDIR) $(RM) -f $(JEMALLOC_CFG_OUT_FILES) cd $(JEMALLOC_OBJDIR) && \ CFLAGS="$(JEMALLOC_CFLAGS)" LDFLAGS="$(JEMALLOC_LDFLAGS)"\ $(JEMALLOC_DIR)/configure $(JEMALLOC_CONFIG) touch $(JEMALLOC_CFG_OUT_FILES_REST) $(JEMALLOC_CFG_OUT_FILES_REST): $(JEMALLOC_CFG_OUT_FILES_FIRST) $(JEMALLOC_CFG): $(JEMALLOC_CFG_AC) cd $(JEMALLOC_DIR) && \ autoconf jemalloc-clean: @if [ -f $(JEMALLOC_MAKEFILE) ];\ then\ $(MAKE) cfgoutputs_out+=$(JEMALLOC_MAKEFILE) objroot=$(JEMALLOC_OBJDIR)/ -f $(JEMALLOC_MAKEFILE) -C $(JEMALLOC_DIR) clean;\ fi jemalloc-clobber: @if [ -f $(JEMALLOC_MAKEFILE) ];\ then\ $(MAKE) cfgoutputs_out+=$(JEMALLOC_MAKEFILE) objroot=$(JEMALLOC_OBJDIR)/ -f $(JEMALLOC_MAKEFILE) -C $(JEMALLOC_DIR) distclean;\ fi $(RM) $(JEMALLOC_CFG) $(JEMALLOC_CFG_GEN_FILES) $(JEMALLOC_CFG_OUT_FILES) $(RM) -r $(JEMALLOC_OBJDIR) $(RM) -r $(JEMALLOC_AUTOM4TE_CACHE) jemalloc-test: jemalloc $(MAKE) objroot=$(JEMALLOC_OBJDIR)/ -f $(JEMALLOC_MAKEFILE) -C $(JEMALLOC_DIR) tests jemalloc-check: jemalloc-test $(MAKE) objroot=$(JEMALLOC_OBJDIR)/ -f $(JEMALLOC_MAKEFILE) -C 
$(JEMALLOC_DIR) check .PHONY: jemalloc jemalloc-clean jemalloc-clobber jemalloc-test jemalloc-check vmem-1.8/src/jemalloc/msvc/000077500000000000000000000000001361505074100156555ustar00rootroot00000000000000vmem-1.8/src/jemalloc/msvc/jemalloc.vcxproj000066400000000000000000000234451361505074100210700ustar00rootroot00000000000000 Debug x64 Release x64 {8D6BB292-9E1C-413D-9F98-4864BDC1514A} Win32Proj jemalloc 10.0.16299.0 StaticLibrary true v140 NotSet StaticLibrary false v140 false NotSet $(SolutionDir)\windows\jemalloc_gen\include;$(IncludePath);..\..\..\..\include\msvc_compat $(SolutionDir)\windows\jemalloc_gen\include;$(IncludePath);..\..\..\..\include\msvc_compat NotUsing Level3 _REENTRANT;JEMALLOC_EXPORT=;JEMALLOC_DEBUG;_DEBUG;JEMALLOC_IVSALLOC;%(PreprocessorDefinitions) $(SolutionDir)windows\jemalloc_gen\include\;..\include;..\include\msvc_compat;%(AdditionalIncludeDirectories) 4090;4146;4267 CompileAsC true Windows true %(AdditionalDependencies); $(OutDir)$(TargetName)$(TargetExt) true true true Level3 NotUsing $(SolutionDir)windows\jemalloc_gen\include\;..\include;..\include\msvc_compat;%(AdditionalIncludeDirectories) _REENTRANT;JEMALLOC_EXPORT=;NDEBUG;%(PreprocessorDefinitions) 4090;4146;4267 true CompileAsC true Windows true true true kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies) true vmem-1.8/src/jemalloc/msvc/jemalloc.vcxproj.filters000066400000000000000000000215471361505074100225400ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hh;hpp;hxx;hm;inl;inc;xsd {0cbd2ca6-42a7-4f82-8517-d7e7a14fd986} {0abe6f30-49b5-46dd-8aca-6e33363fa52c} Header Files\msvc_compat Header Files\msvc_compat Header Files\msvc_compat\C99 Header Files\msvc_compat\C99 Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files vmem-1.8/src/jemalloc/src/000077500000000000000000000000001361505074100154745ustar00rootroot00000000000000vmem-1.8/src/jemalloc/src/arena.c000066400000000000000000002262351361505074100167400ustar00rootroot00000000000000#define JEMALLOC_ARENA_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. 
*/ ssize_t opt_lg_dirty_mult = LG_DIRTY_MULT_DEFAULT; arena_bin_info_t arena_bin_info[NBINS]; JEMALLOC_ALIGNED(CACHELINE) const uint32_t small_bin2size_tab[NBINS] = { #define B2S_bin_yes(size) \ size, #define B2S_bin_no(size) #define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \ B2S_bin_##bin((ZU(1)<<(lg_grp)) + (ZU(ndelta)<<(lg_delta))) SIZE_CLASSES #undef B2S_bin_yes #undef B2S_bin_no #undef SC }; JEMALLOC_ALIGNED(CACHELINE) const uint8_t small_size2bin_tab[] = { #define S2B_3(i) i, #define S2B_4(i) S2B_3(i) S2B_3(i) #define S2B_5(i) S2B_4(i) S2B_4(i) #define S2B_6(i) S2B_5(i) S2B_5(i) #define S2B_7(i) S2B_6(i) S2B_6(i) #define S2B_8(i) S2B_7(i) S2B_7(i) #define S2B_9(i) S2B_8(i) S2B_8(i) #define S2B_no(i) #define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \ S2B_##lg_delta_lookup(index) SIZE_CLASSES #undef S2B_3 #undef S2B_4 #undef S2B_5 #undef S2B_6 #undef S2B_7 #undef S2B_8 #undef S2B_9 #undef S2B_no #undef SC }; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static void arena_purge(arena_t *arena, bool all); static void arena_run_dalloc(arena_t *arena, arena_run_t *run, bool dirty, bool cleaned); static void arena_dalloc_bin_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin); static void arena_bin_lower_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin); /******************************************************************************/ JEMALLOC_INLINE_C size_t arena_mapelm_to_bits(arena_chunk_map_t *mapelm) { return (mapelm->bits); } static inline int arena_run_comp(arena_chunk_map_t *a, arena_chunk_map_t *b) { uintptr_t a_mapelm = (uintptr_t)a; uintptr_t b_mapelm = (uintptr_t)b; assert(a != NULL); assert(b != NULL); return ((a_mapelm > b_mapelm) - (a_mapelm < b_mapelm)); } /* Generate red-black tree functions. */ rb_gen(static UNUSED, arena_run_tree_, arena_run_tree_t, arena_chunk_map_t, u.rb_link, arena_run_comp) static inline int arena_avail_comp(arena_chunk_map_t *a, arena_chunk_map_t *b) { int ret; size_t a_size; size_t b_size = arena_mapelm_to_bits(b) & ~PAGE_MASK; uintptr_t a_mapelm = (uintptr_t)a; uintptr_t b_mapelm = (uintptr_t)b; if (a_mapelm & CHUNK_MAP_KEY) a_size = a_mapelm & ~PAGE_MASK; else a_size = arena_mapelm_to_bits(a) & ~PAGE_MASK; ret = (a_size > b_size) - (a_size < b_size); if (ret == 0 && (!(a_mapelm & CHUNK_MAP_KEY))) ret = (a_mapelm > b_mapelm) - (a_mapelm < b_mapelm); return (ret); } /* Generate red-black tree functions. */ rb_gen(static UNUSED, arena_avail_tree_, arena_avail_tree_t, arena_chunk_map_t, u.rb_link, arena_avail_comp) arena_chunk_map_t * arena_runs_avail_tree_iter(arena_t *arena, arena_chunk_map_t *(*cb) (arena_avail_tree_t *, arena_chunk_map_t *, void *), void *arg) { return arena_avail_tree_iter(&arena->runs_avail, NULL, cb, arg); } static inline int arena_chunk_dirty_comp(arena_chunk_t *a, arena_chunk_t *b) { assert(a != NULL); assert(b != NULL); /* * Short-circuit for self comparison. The following comparison code * would come to the same result, but at the cost of executing the slow * path. */ if (a == b) return (0); /* * Order such that chunks with higher fragmentation are "less than" * those with lower fragmentation -- purging order is from "least" to * "greatest". 
Fragmentation is measured as: * * mean current avail run size * -------------------------------- * mean defragmented avail run size * * navail * ----------- * nruns_avail nruns_avail-nruns_adjac * = ========================= = ----------------------- * navail nruns_avail * ----------------------- * nruns_avail-nruns_adjac * * The following code multiplies away the denominator prior to * comparison, in order to avoid division. * */ { size_t a_val = (a->nruns_avail - a->nruns_adjac) * b->nruns_avail; size_t b_val = (b->nruns_avail - b->nruns_adjac) * a->nruns_avail; if (a_val < b_val) return (1); if (a_val > b_val) return (-1); } /* * Break ties by chunk address. For fragmented chunks, report lower * addresses as "lower", so that fragmentation reduction happens first * at lower addresses. However, use the opposite ordering for * unfragmented chunks, in order to increase the chances of * re-allocating dirty runs. */ { uintptr_t a_chunk = (uintptr_t)a; uintptr_t b_chunk = (uintptr_t)b; int ret = ((a_chunk > b_chunk) - (a_chunk < b_chunk)); if (a->nruns_adjac == 0) { assert(b->nruns_adjac == 0); ret = -ret; } return (ret); } } /* Generate red-black tree functions. */ rb_gen(static UNUSED, arena_chunk_dirty_, arena_chunk_tree_t, arena_chunk_t, dirty_link, arena_chunk_dirty_comp) static inline bool arena_avail_adjac_pred(arena_chunk_t *chunk, size_t pageind) { bool ret; if (pageind-1 < map_bias) ret = false; else { ret = (arena_mapbits_allocated_get(chunk, pageind-1) == 0); assert(ret == false || arena_mapbits_dirty_get(chunk, pageind-1) != arena_mapbits_dirty_get(chunk, pageind)); } return (ret); } static inline bool arena_avail_adjac_succ(arena_chunk_t *chunk, size_t pageind, size_t npages) { bool ret; if (pageind+npages == chunk_npages) ret = false; else { assert(pageind+npages < chunk_npages); ret = (arena_mapbits_allocated_get(chunk, pageind+npages) == 0); assert(ret == false || arena_mapbits_dirty_get(chunk, pageind) != arena_mapbits_dirty_get(chunk, pageind+npages)); } return (ret); } static inline bool arena_avail_adjac(arena_chunk_t *chunk, size_t pageind, size_t npages) { return (arena_avail_adjac_pred(chunk, pageind) || arena_avail_adjac_succ(chunk, pageind, npages)); } static void arena_avail_insert(arena_t *arena, arena_chunk_t *chunk, size_t pageind, size_t npages, bool maybe_adjac_pred, bool maybe_adjac_succ) { assert(npages == (arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE)); /* * chunks_dirty is keyed by nruns_{avail,adjac}, so the chunk must be * removed and reinserted even if the run to be inserted is clean. */ if (chunk->ndirty != 0) arena_chunk_dirty_remove(&arena->chunks_dirty, chunk); if (maybe_adjac_pred && arena_avail_adjac_pred(chunk, pageind)) chunk->nruns_adjac++; if (maybe_adjac_succ && arena_avail_adjac_succ(chunk, pageind, npages)) chunk->nruns_adjac++; chunk->nruns_avail++; assert(chunk->nruns_avail > chunk->nruns_adjac); if (arena_mapbits_dirty_get(chunk, pageind) != 0) { arena->ndirty += npages; chunk->ndirty += npages; } if (chunk->ndirty != 0) arena_chunk_dirty_insert(&arena->chunks_dirty, chunk); arena_avail_tree_insert(&arena->runs_avail, arena_mapp_get(chunk, pageind)); } static void arena_avail_remove(arena_t *arena, arena_chunk_t *chunk, size_t pageind, size_t npages, bool maybe_adjac_pred, bool maybe_adjac_succ) { assert(npages == (arena_mapbits_unallocated_size_get(chunk, pageind) >> LG_PAGE)); /* * chunks_dirty is keyed by nruns_{avail,adjac}, so the chunk must be * removed and reinserted even if the run to be removed is clean. 
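 *
 * To see how that keying behaves, take two hypothetical chunks: one with
 * nruns_avail == 8 and nruns_adjac == 6 has fragmentation ratio
 * (8 - 6) / 8 == 0.25, another with nruns_avail == 10 and nruns_adjac == 2
 * has (10 - 2) / 10 == 0.8. arena_chunk_dirty_comp() compares the
 * cross-multiplied products (8 - 6) * 10 == 20 and (10 - 2) * 8 == 64
 * instead, which order the two chunks exactly as the ratios would, while
 * avoiding integer division.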
*/ if (chunk->ndirty != 0) arena_chunk_dirty_remove(&arena->chunks_dirty, chunk); if (maybe_adjac_pred && arena_avail_adjac_pred(chunk, pageind)) chunk->nruns_adjac--; if (maybe_adjac_succ && arena_avail_adjac_succ(chunk, pageind, npages)) chunk->nruns_adjac--; chunk->nruns_avail--; assert(chunk->nruns_avail > chunk->nruns_adjac || (chunk->nruns_avail == 0 && chunk->nruns_adjac == 0)); if (arena_mapbits_dirty_get(chunk, pageind) != 0) { arena->ndirty -= npages; chunk->ndirty -= npages; } if (chunk->ndirty != 0) arena_chunk_dirty_insert(&arena->chunks_dirty, chunk); arena_avail_tree_remove(&arena->runs_avail, arena_mapp_get(chunk, pageind)); } static inline void * arena_run_reg_alloc(arena_run_t *run, arena_bin_info_t *bin_info) { void *ret; unsigned regind; bitmap_t *bitmap = (bitmap_t *)((uintptr_t)run + (uintptr_t)bin_info->bitmap_offset); assert(run->nfree > 0); assert(bitmap_full(bitmap, &bin_info->bitmap_info) == false); regind = bitmap_sfu(bitmap, &bin_info->bitmap_info); ret = (void *)((uintptr_t)run + (uintptr_t)bin_info->reg0_offset + (uintptr_t)(bin_info->reg_interval * regind)); run->nfree--; if (regind == run->nextind) run->nextind++; assert(regind < run->nextind); return (ret); } static inline void arena_run_reg_dalloc(arena_run_t *run, void *ptr) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; size_t mapbits = arena_mapbits_get(chunk, pageind); size_t binind = arena_ptr_small_binind_get(ptr, mapbits); arena_bin_info_t *bin_info = &arena_bin_info[binind]; unsigned regind = arena_run_regind(run, bin_info, ptr); bitmap_t *bitmap = (bitmap_t *)((uintptr_t)run + (uintptr_t)bin_info->bitmap_offset); assert(run->nfree < bin_info->nregs); /* Freeing an interior pointer can cause assertion failure. */ assert(((uintptr_t)ptr - ((uintptr_t)run + (uintptr_t)bin_info->reg0_offset)) % (uintptr_t)bin_info->reg_interval == 0); assert((uintptr_t)ptr >= (uintptr_t)run + (uintptr_t)bin_info->reg0_offset); /* Freeing an unallocated pointer can cause assertion failure. 
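 *
 * (Regions in a run are laid out at run + reg0_offset + regind *
 * reg_interval, mirroring arena_run_reg_alloc() above; with hypothetical
 * values reg0_offset == 32 and reg_interval == 16, region index 5 sits at
 * run + 112, and any pointer that is not of that form trips the
 * interior-pointer assertions above.)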
*/ assert(bitmap_get(bitmap, &bin_info->bitmap_info, regind)); bitmap_unset(bitmap, &bin_info->bitmap_info, regind); run->nfree++; } static inline void arena_run_zero(arena_chunk_t *chunk, size_t run_ind, size_t npages) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (npages << LG_PAGE)); memset((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), 0, (npages << LG_PAGE)); } static inline void arena_run_page_mark_zeroed(arena_chunk_t *chunk, size_t run_ind) { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), PAGE); } static inline void arena_run_page_validate_zeroed(arena_chunk_t *chunk, size_t run_ind) { size_t i; UNUSED size_t *p = (size_t *)((uintptr_t)chunk + (run_ind << LG_PAGE)); arena_run_page_mark_zeroed(chunk, run_ind); for (i = 0; i < PAGE / sizeof(size_t); i++) assert(p[i] == 0); } static void arena_cactive_update(arena_t *arena, size_t add_pages, size_t sub_pages) { if (config_stats) { ssize_t cactive_diff = CHUNK_CEILING((arena->nactive + add_pages) << LG_PAGE) - CHUNK_CEILING((arena->nactive - sub_pages) << LG_PAGE); if (cactive_diff != 0) stats_cactive_add(arena->pool, cactive_diff); } } static void arena_run_split_remove(arena_t *arena, arena_chunk_t *chunk, size_t run_ind, size_t flag_dirty, size_t need_pages) { size_t total_pages, rem_pages; total_pages = arena_mapbits_unallocated_size_get(chunk, run_ind) >> LG_PAGE; assert(arena_mapbits_dirty_get(chunk, run_ind+total_pages-1) == flag_dirty); assert(need_pages <= total_pages); rem_pages = total_pages - need_pages; arena_avail_remove(arena, chunk, run_ind, total_pages, true, true); arena_cactive_update(arena, need_pages, 0); arena->nactive += need_pages; /* Keep track of trailing unused pages for later use. */ if (rem_pages > 0) { if (flag_dirty != 0) { arena_mapbits_unallocated_set(chunk, run_ind+need_pages, (rem_pages << LG_PAGE), flag_dirty); arena_mapbits_unallocated_set(chunk, run_ind+total_pages-1, (rem_pages << LG_PAGE), flag_dirty); } else { arena_mapbits_unallocated_set(chunk, run_ind+need_pages, (rem_pages << LG_PAGE), arena_mapbits_unzeroed_get(chunk, run_ind+need_pages)); arena_mapbits_unallocated_set(chunk, run_ind+total_pages-1, (rem_pages << LG_PAGE), arena_mapbits_unzeroed_get(chunk, run_ind+total_pages-1)); } arena_avail_insert(arena, chunk, run_ind+need_pages, rem_pages, false, true); } } static void arena_run_split_large_helper(arena_t *arena, arena_run_t *run, size_t size, bool remove, bool zero) { arena_chunk_t *chunk; size_t flag_dirty, run_ind, need_pages, i; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); run_ind = (unsigned)(((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE); flag_dirty = arena_mapbits_dirty_get(chunk, run_ind); need_pages = (size >> LG_PAGE); assert(need_pages > 0); if (remove) { arena_run_split_remove(arena, chunk, run_ind, flag_dirty, need_pages); } if (zero) { if (flag_dirty == 0) { /* * The run is clean, so some pages may be zeroed (i.e. * never before touched). */ for (i = 0; i < need_pages; i++) { if (arena_mapbits_unzeroed_get(chunk, run_ind+i) != 0) arena_run_zero(chunk, run_ind+i, 1); else if (config_debug) { arena_run_page_validate_zeroed(chunk, run_ind+i); } else { arena_run_page_mark_zeroed(chunk, run_ind+i); } } } else { /* The run is dirty, so all pages must be zeroed. 
*/ arena_run_zero(chunk, run_ind, need_pages); } } else { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (need_pages << LG_PAGE)); } /* * Set the last element first, in case the run only contains one page * (i.e. both statements set the same element). */ arena_mapbits_large_set(chunk, run_ind+need_pages-1, 0, flag_dirty); arena_mapbits_large_set(chunk, run_ind, size, flag_dirty); } static void arena_run_split_large(arena_t *arena, arena_run_t *run, size_t size, bool zero) { arena_run_split_large_helper(arena, run, size, true, zero); } static void arena_run_init_large(arena_t *arena, arena_run_t *run, size_t size, bool zero) { arena_run_split_large_helper(arena, run, size, false, zero); } static void arena_run_split_small(arena_t *arena, arena_run_t *run, size_t size, size_t binind) { arena_chunk_t *chunk; size_t flag_dirty, run_ind, need_pages, i; assert(binind != BININD_INVALID); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); run_ind = (unsigned)(((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE); flag_dirty = arena_mapbits_dirty_get(chunk, run_ind); need_pages = (size >> LG_PAGE); assert(need_pages > 0); arena_run_split_remove(arena, chunk, run_ind, flag_dirty, need_pages); /* * Propagate the dirty and unzeroed flags to the allocated small run, * so that arena_dalloc_bin_run() has the ability to conditionally trim * clean pages. */ arena_mapbits_small_set(chunk, run_ind, 0, binind, flag_dirty); /* * The first page will always be dirtied during small run * initialization, so a validation failure here would not actually * cause an observable failure. */ if (config_debug && flag_dirty == 0 && arena_mapbits_unzeroed_get(chunk, run_ind) == 0) arena_run_page_validate_zeroed(chunk, run_ind); for (i = 1; i < need_pages - 1; i++) { arena_mapbits_small_set(chunk, run_ind+i, i, binind, 0); if (config_debug && flag_dirty == 0 && arena_mapbits_unzeroed_get(chunk, run_ind+i) == 0) arena_run_page_validate_zeroed(chunk, run_ind+i); } arena_mapbits_small_set(chunk, run_ind+need_pages-1, need_pages-1, binind, flag_dirty); if (config_debug && flag_dirty == 0 && arena_mapbits_unzeroed_get(chunk, run_ind+need_pages-1) == 0) arena_run_page_validate_zeroed(chunk, run_ind+need_pages-1); JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED((void *)((uintptr_t)chunk + (run_ind << LG_PAGE)), (need_pages << LG_PAGE)); } static arena_chunk_t * arena_chunk_init_spare(arena_t *arena) { arena_chunk_t *chunk; assert(arena->spare != NULL); chunk = arena->spare; arena->spare = NULL; assert(arena_mapbits_allocated_get(chunk, map_bias) == 0); assert(arena_mapbits_allocated_get(chunk, chunk_npages-1) == 0); assert(arena_mapbits_unallocated_size_get(chunk, map_bias) == arena_maxclass); assert(arena_mapbits_unallocated_size_get(chunk, chunk_npages-1) == arena_maxclass); assert(arena_mapbits_dirty_get(chunk, map_bias) == arena_mapbits_dirty_get(chunk, chunk_npages-1)); return (chunk); } static arena_chunk_t * arena_chunk_alloc_internal(arena_t *arena, size_t size, size_t alignment, bool *zero) { arena_chunk_t *chunk; chunk_alloc_t *chunk_alloc; chunk_dalloc_t *chunk_dalloc; chunk_alloc = arena->chunk_alloc; chunk_dalloc = arena->chunk_dalloc; malloc_mutex_unlock(&arena->lock); chunk = (arena_chunk_t *)chunk_alloc_arena(chunk_alloc, chunk_dalloc, arena, NULL, size, alignment, zero); malloc_mutex_lock(&arena->lock); if (config_stats && chunk != NULL) arena->stats.mapped += chunksize; return (chunk); } void * arena_chunk_alloc_huge(arena_t *arena, void *new_addr, size_t size, size_t alignment, bool *zero) { 
void *ret; chunk_alloc_t *chunk_alloc; chunk_dalloc_t *chunk_dalloc; malloc_mutex_lock(&arena->lock); chunk_alloc = arena->chunk_alloc; chunk_dalloc = arena->chunk_dalloc; if (config_stats) { /* Optimistically update stats prior to unlocking. */ arena->stats.mapped += size; arena->stats.allocated_huge += size; arena->stats.nmalloc_huge++; arena->stats.nrequests_huge++; } arena->nactive += (size >> LG_PAGE); malloc_mutex_unlock(&arena->lock); ret = chunk_alloc_arena(chunk_alloc, chunk_dalloc, arena, new_addr, size, alignment, zero); if (config_stats) { if (ret != NULL) stats_cactive_add(arena->pool, size); else { /* Revert optimistic stats updates. */ malloc_mutex_lock(&arena->lock); arena->stats.mapped -= size; arena->stats.allocated_huge -= size; arena->stats.nmalloc_huge--; malloc_mutex_unlock(&arena->lock); } } return (ret); } static arena_chunk_t * arena_chunk_init_hard(arena_t *arena) { arena_chunk_t *chunk; bool zero; size_t unzeroed, i; assert(arena->spare == NULL); zero = false; chunk = arena_chunk_alloc_internal(arena, chunksize, chunksize, &zero); if (chunk == NULL) return (NULL); chunk->arena = arena; /* * Claim that no pages are in use, since the header is merely overhead. */ chunk->ndirty = 0; chunk->nruns_avail = 0; chunk->nruns_adjac = 0; /* * Initialize the map to contain one maximal free untouched run. Mark * the pages as zeroed iff chunk_alloc() returned a zeroed chunk. */ unzeroed = zero ? 0 : CHUNK_MAP_UNZEROED; arena_mapbits_unallocated_set(chunk, map_bias, arena_maxclass, unzeroed); /* * There is no need to initialize the internal page map entries unless * the chunk is not zeroed. */ if (zero == false) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED( (void *)arena_mapp_get(chunk, map_bias+1), (size_t)((uintptr_t) arena_mapp_get(chunk, chunk_npages-1) - (uintptr_t)arena_mapp_get(chunk, map_bias+1))); for (i = map_bias+1; i < chunk_npages-1; i++) arena_mapbits_unzeroed_set(chunk, i, unzeroed); } else { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED((void *)arena_mapp_get(chunk, map_bias+1), (size_t)((uintptr_t) arena_mapp_get(chunk, chunk_npages-1) - (uintptr_t)arena_mapp_get(chunk, map_bias+1))); if (config_debug) { for (i = map_bias+1; i < chunk_npages-1; i++) { assert(arena_mapbits_unzeroed_get(chunk, i) == unzeroed); } } } arena_mapbits_unallocated_set(chunk, chunk_npages-1, arena_maxclass, unzeroed); return (chunk); } static arena_chunk_t * arena_chunk_alloc(arena_t *arena) { arena_chunk_t *chunk; if (arena->spare != NULL) chunk = arena_chunk_init_spare(arena); else { chunk = arena_chunk_init_hard(arena); if (chunk == NULL) return (NULL); } /* Insert the run into the runs_avail tree. 
*/ arena_avail_insert(arena, chunk, map_bias, chunk_npages-map_bias, false, false); return (chunk); } static void arena_chunk_dalloc_internal(arena_t *arena, arena_chunk_t *chunk) { chunk_dalloc_t *chunk_dalloc; chunk_dalloc = arena->chunk_dalloc; malloc_mutex_unlock(&arena->lock); chunk_dalloc((void *)chunk, chunksize, arena->ind, arena->pool); malloc_mutex_lock(&arena->lock); if (config_stats) arena->stats.mapped -= chunksize; } void arena_chunk_dalloc_huge(arena_t *arena, void *chunk, size_t size) { chunk_dalloc_t *chunk_dalloc; malloc_mutex_lock(&arena->lock); chunk_dalloc = arena->chunk_dalloc; if (config_stats) { arena->stats.mapped -= size; arena->stats.allocated_huge -= size; arena->stats.ndalloc_huge++; stats_cactive_sub(arena->pool, size); } arena->nactive -= (size >> LG_PAGE); malloc_mutex_unlock(&arena->lock); chunk_dalloc(chunk, size, arena->ind, arena->pool); } static void arena_chunk_dalloc(arena_t *arena, arena_chunk_t *chunk) { assert(arena_mapbits_allocated_get(chunk, map_bias) == 0); assert(arena_mapbits_allocated_get(chunk, chunk_npages-1) == 0); assert(arena_mapbits_unallocated_size_get(chunk, map_bias) == arena_maxclass); assert(arena_mapbits_unallocated_size_get(chunk, chunk_npages-1) == arena_maxclass); assert(arena_mapbits_dirty_get(chunk, map_bias) == arena_mapbits_dirty_get(chunk, chunk_npages-1)); /* * Remove run from the runs_avail tree, so that the arena does not use * it. */ arena_avail_remove(arena, chunk, map_bias, chunk_npages-map_bias, false, false); if (arena->spare != NULL) { arena_chunk_t *spare = arena->spare; arena->spare = chunk; arena_chunk_dalloc_internal(arena, spare); } else arena->spare = chunk; } static arena_run_t * arena_run_alloc_large_helper(arena_t *arena, size_t size, bool zero) { arena_run_t *run; arena_chunk_map_t *mapelm; arena_chunk_map_t *key; key = (arena_chunk_map_t *)(size | CHUNK_MAP_KEY); mapelm = arena_avail_tree_nsearch(&arena->runs_avail, key); if (mapelm != NULL) { arena_chunk_t *run_chunk = CHUNK_ADDR2BASE(mapelm); size_t pageind = arena_mapelm_to_pageind(mapelm); run = (arena_run_t *)((uintptr_t)run_chunk + (pageind << LG_PAGE)); arena_run_split_large(arena, run, size, zero); return (run); } return (NULL); } static arena_run_t * arena_run_alloc_large(arena_t *arena, size_t size, bool zero) { arena_chunk_t *chunk; arena_run_t *run; assert(size <= arena_maxclass); assert((size & PAGE_MASK) == 0); /* Search the arena's chunks for the lowest best fit. */ run = arena_run_alloc_large_helper(arena, size, zero); if (run != NULL) return (run); /* * No usable runs. Create a new chunk from which to allocate the run. */ chunk = arena_chunk_alloc(arena); if (chunk != NULL) { run = (arena_run_t *)((uintptr_t)chunk + (map_bias << LG_PAGE)); arena_run_split_large(arena, run, size, zero); return (run); } /* * arena_chunk_alloc() failed, but another thread may have made * sufficient memory available while this one dropped arena->lock in * arena_chunk_alloc(), so search one more time. 
*/ return (arena_run_alloc_large_helper(arena, size, zero)); } static arena_run_t * arena_run_alloc_small_helper(arena_t *arena, size_t size, size_t binind) { arena_run_t *run; arena_chunk_map_t *mapelm; arena_chunk_map_t *key; key = (arena_chunk_map_t *)(size | CHUNK_MAP_KEY); mapelm = arena_avail_tree_nsearch(&arena->runs_avail, key); if (mapelm != NULL) { arena_chunk_t *run_chunk = CHUNK_ADDR2BASE(mapelm); size_t pageind = arena_mapelm_to_pageind(mapelm); run = (arena_run_t *)((uintptr_t)run_chunk + (pageind << LG_PAGE)); arena_run_split_small(arena, run, size, binind); return (run); } return (NULL); } static arena_run_t * arena_run_alloc_small(arena_t *arena, size_t size, size_t binind) { arena_chunk_t *chunk; arena_run_t *run; assert(size <= arena_maxclass); assert((size & PAGE_MASK) == 0); assert(binind != BININD_INVALID); /* Search the arena's chunks for the lowest best fit. */ run = arena_run_alloc_small_helper(arena, size, binind); if (run != NULL) return (run); /* * No usable runs. Create a new chunk from which to allocate the run. */ chunk = arena_chunk_alloc(arena); if (chunk != NULL) { run = (arena_run_t *)((uintptr_t)chunk + (map_bias << LG_PAGE)); arena_run_split_small(arena, run, size, binind); return (run); } /* * arena_chunk_alloc() failed, but another thread may have made * sufficient memory available while this one dropped arena->lock in * arena_chunk_alloc(), so search one more time. */ return (arena_run_alloc_small_helper(arena, size, binind)); } static inline void arena_maybe_purge(arena_t *arena) { size_t npurgeable, threshold; /* Don't purge if the option is disabled. */ if (opt_lg_dirty_mult < 0) return; /* Don't purge if all dirty pages are already being purged. */ if (arena->ndirty <= arena->npurgatory) return; npurgeable = arena->ndirty - arena->npurgatory; threshold = (arena->nactive >> opt_lg_dirty_mult); /* * Don't purge unless the number of purgeable pages exceeds the * threshold. */ if (npurgeable <= threshold) return; arena_purge(arena, false); } static arena_chunk_t * chunks_dirty_iter_cb(arena_chunk_tree_t *tree, arena_chunk_t *chunk, void *arg) { size_t *ndirty = (size_t *)arg; assert(chunk->ndirty != 0); *ndirty += chunk->ndirty; return (NULL); } static size_t arena_compute_npurgatory(arena_t *arena, bool all) { size_t npurgatory, npurgeable; /* * Compute the minimum number of pages that this thread should try to * purge. */ npurgeable = arena->ndirty - arena->npurgatory; if (all == false) { size_t threshold = (arena->nactive >> opt_lg_dirty_mult); npurgatory = npurgeable - threshold; } else npurgatory = npurgeable; return (npurgatory); } static void arena_chunk_stash_dirty(arena_t *arena, arena_chunk_t *chunk, bool all, arena_chunk_mapelms_t *mapelms) { size_t pageind, npages; /* * Temporarily allocate free dirty runs within chunk. If all is false, * only operate on dirty runs that are fragments; otherwise operate on * all dirty runs. 
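 *
 * For scale, assuming the default LG_DIRTY_MULT_DEFAULT of 3 for
 * opt_lg_dirty_mult (other numbers here are hypothetical):
 * arena_maybe_purge() above only reaches arena_purge() once the purgeable
 * dirty pages exceed nactive >> 3, so an arena with 4096 active pages
 * tolerates up to 512 purgeable dirty pages before any of this stashing
 * and purging work is triggered.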
*/ for (pageind = map_bias; pageind < chunk_npages; pageind += npages) { arena_chunk_map_t *mapelm = arena_mapp_get(chunk, pageind); if (arena_mapbits_allocated_get(chunk, pageind) == 0) { size_t run_size = arena_mapbits_unallocated_size_get(chunk, pageind); npages = run_size >> LG_PAGE; assert(pageind + npages <= chunk_npages); assert(arena_mapbits_dirty_get(chunk, pageind) == arena_mapbits_dirty_get(chunk, pageind+npages-1)); if (arena_mapbits_dirty_get(chunk, pageind) != 0 && (all || arena_avail_adjac(chunk, pageind, npages))) { arena_run_t *run = (arena_run_t *)((uintptr_t) chunk + (uintptr_t)(pageind << LG_PAGE)); arena_run_split_large(arena, run, run_size, false); /* Append to list for later processing. */ ql_elm_new(mapelm, u.ql_link); ql_tail_insert(mapelms, mapelm, u.ql_link); } } else { /* Skip run. */ if (arena_mapbits_large_get(chunk, pageind) != 0) { npages = arena_mapbits_large_size_get(chunk, pageind) >> LG_PAGE; } else { size_t binind; arena_bin_info_t *bin_info; arena_run_t *run = (arena_run_t *)((uintptr_t) chunk + (uintptr_t)(pageind << LG_PAGE)); assert(arena_mapbits_small_runind_get(chunk, pageind) == 0); binind = arena_bin_index(arena, run->bin); bin_info = &arena_bin_info[binind]; npages = bin_info->run_size >> LG_PAGE; } } } assert(pageind == chunk_npages); assert(chunk->ndirty == 0 || all == false); assert(chunk->nruns_adjac == 0); } static size_t arena_chunk_purge_stashed(arena_t *arena, arena_chunk_t *chunk, arena_chunk_mapelms_t *mapelms) { size_t npurged, pageind, npages, nmadvise; arena_chunk_map_t *mapelm; malloc_mutex_unlock(&arena->lock); if (config_stats) nmadvise = 0; npurged = 0; ql_foreach(mapelm, mapelms, u.ql_link) { bool unzeroed, file_mapped; size_t flag_unzeroed, i; pageind = arena_mapelm_to_pageind(mapelm); npages = arena_mapbits_large_size_get(chunk, pageind) >> LG_PAGE; assert(pageind + npages <= chunk_npages); file_mapped = pool_is_file_mapped(arena->pool); unzeroed = pages_purge((void *)((uintptr_t)chunk + (pageind << LG_PAGE)), (npages << LG_PAGE), file_mapped); flag_unzeroed = unzeroed ? CHUNK_MAP_UNZEROED : 0; /* * Set the unzeroed flag for all pages, now that pages_purge() * has returned whether the pages were zeroed as a side effect * of purging. This chunk map modification is safe even though * the arena mutex isn't currently owned by this thread, * because the run is marked as allocated, thus protecting it * from being modified by any other thread. As long as these * writes don't perturb the first and last elements' * CHUNK_MAP_ALLOCATED bits, behavior is well defined. */ for (i = 0; i < npages; i++) { arena_mapbits_unzeroed_set(chunk, pageind+i, flag_unzeroed); } npurged += npages; if (config_stats) nmadvise++; } malloc_mutex_lock(&arena->lock); if (config_stats) arena->stats.nmadvise += nmadvise; return (npurged); } static void arena_chunk_unstash_purged(arena_t *arena, arena_chunk_t *chunk, arena_chunk_mapelms_t *mapelms) { arena_chunk_map_t *mapelm; size_t pageind; /* Deallocate runs. 
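 *
 * Each stashed map element is unlinked from mapelms and its run is
 * handed back through arena_run_dalloc() with dirty == false and
 * cleaned == true, since its pages have just been purged.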
*/ for (mapelm = ql_first(mapelms); mapelm != NULL; mapelm = ql_first(mapelms)) { arena_run_t *run; pageind = arena_mapelm_to_pageind(mapelm); run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)(pageind << LG_PAGE)); ql_remove(mapelms, mapelm, u.ql_link); arena_run_dalloc(arena, run, false, true); } } static inline size_t arena_chunk_purge(arena_t *arena, arena_chunk_t *chunk, bool all) { size_t npurged; arena_chunk_mapelms_t mapelms; ql_new(&mapelms); /* * If chunk is the spare, temporarily re-allocate it, 1) so that its * run is reinserted into runs_avail, and 2) so that it cannot be * completely discarded by another thread while arena->lock is dropped * by this thread. Note that the arena_run_dalloc() call will * implicitly deallocate the chunk, so no explicit action is required * in this function to deallocate the chunk. * * Note that once a chunk contains dirty pages, it cannot again contain * a single run unless 1) it is a dirty run, or 2) this function purges * dirty pages and causes the transition to a single clean run. Thus * (chunk == arena->spare) is possible, but it is not possible for * this function to be called on the spare unless it contains a dirty * run. */ if (chunk == arena->spare) { assert(arena_mapbits_dirty_get(chunk, map_bias) != 0); assert(arena_mapbits_dirty_get(chunk, chunk_npages-1) != 0); arena_chunk_alloc(arena); } if (config_stats) arena->stats.purged += chunk->ndirty; /* * Operate on all dirty runs if there is no clean/dirty run * fragmentation. */ if (chunk->nruns_adjac == 0) all = true; arena_chunk_stash_dirty(arena, chunk, all, &mapelms); npurged = arena_chunk_purge_stashed(arena, chunk, &mapelms); arena_chunk_unstash_purged(arena, chunk, &mapelms); return (npurged); } static void arena_purge(arena_t *arena, bool all) { arena_chunk_t *chunk; size_t npurgatory; if (config_debug) { size_t ndirty = 0; arena_chunk_dirty_iter(&arena->chunks_dirty, NULL, chunks_dirty_iter_cb, (void *)&ndirty); assert(ndirty == arena->ndirty); } assert(arena->ndirty > arena->npurgatory || all); assert((arena->nactive >> opt_lg_dirty_mult) < (arena->ndirty - arena->npurgatory) || all); if (config_stats) arena->stats.npurge++; /* * Add the minimum number of pages this thread should try to purge to * arena->npurgatory. This will keep multiple threads from racing to * reduce ndirty below the threshold. */ npurgatory = arena_compute_npurgatory(arena, all); arena->npurgatory += npurgatory; while (npurgatory > 0) { size_t npurgeable, npurged, nunpurged; /* Get next chunk with dirty pages. */ chunk = arena_chunk_dirty_first(&arena->chunks_dirty); if (chunk == NULL) { /* * This thread was unable to purge as many pages as * originally intended, due to races with other threads * that either did some of the purging work, or re-used * dirty pages. */ arena->npurgatory -= npurgatory; return; } npurgeable = chunk->ndirty; assert(npurgeable != 0); if (npurgeable > npurgatory && chunk->nruns_adjac == 0) { /* * This thread will purge all the dirty pages in chunk, * so set npurgatory to reflect this thread's intent to * purge the pages. This tends to reduce the chances * of the following scenario: * * 1) This thread sets arena->npurgatory such that * (arena->ndirty - arena->npurgatory) is at the * threshold. * 2) This thread drops arena->lock. * 3) Another thread causes one or more pages to be * dirtied, and immediately determines that it must * purge dirty pages. * * If this scenario *does* play out, that's okay, * because all of the purging work being done really * needs to happen. 
*/ arena->npurgatory += npurgeable - npurgatory; npurgatory = npurgeable; } /* * Keep track of how many pages are purgeable, versus how many * actually get purged, and adjust counters accordingly. */ arena->npurgatory -= npurgeable; npurgatory -= npurgeable; npurged = arena_chunk_purge(arena, chunk, all); nunpurged = npurgeable - npurged; arena->npurgatory += nunpurged; npurgatory += nunpurged; } } void arena_purge_all(arena_t *arena) { malloc_mutex_lock(&arena->lock); arena_purge(arena, true); malloc_mutex_unlock(&arena->lock); } static void arena_run_coalesce(arena_t *arena, arena_chunk_t *chunk, size_t *p_size, size_t *p_run_ind, size_t *p_run_pages, size_t flag_dirty) { size_t size = *p_size; size_t run_ind = *p_run_ind; size_t run_pages = *p_run_pages; /* Try to coalesce forward. */ if (run_ind + run_pages < chunk_npages && arena_mapbits_allocated_get(chunk, run_ind+run_pages) == 0 && arena_mapbits_dirty_get(chunk, run_ind+run_pages) == flag_dirty) { size_t nrun_size = arena_mapbits_unallocated_size_get(chunk, run_ind+run_pages); size_t nrun_pages = nrun_size >> LG_PAGE; /* * Remove successor from runs_avail; the coalesced run is * inserted later. */ assert(arena_mapbits_unallocated_size_get(chunk, run_ind+run_pages+nrun_pages-1) == nrun_size); assert(arena_mapbits_dirty_get(chunk, run_ind+run_pages+nrun_pages-1) == flag_dirty); arena_avail_remove(arena, chunk, run_ind+run_pages, nrun_pages, false, true); size += nrun_size; run_pages += nrun_pages; arena_mapbits_unallocated_size_set(chunk, run_ind, size); arena_mapbits_unallocated_size_set(chunk, run_ind+run_pages-1, size); } /* Try to coalesce backward. */ if (run_ind > map_bias && arena_mapbits_allocated_get(chunk, run_ind-1) == 0 && arena_mapbits_dirty_get(chunk, run_ind-1) == flag_dirty) { size_t prun_size = arena_mapbits_unallocated_size_get(chunk, run_ind-1); size_t prun_pages = prun_size >> LG_PAGE; run_ind -= prun_pages; /* * Remove predecessor from runs_avail; the coalesced run is * inserted later. */ assert(arena_mapbits_unallocated_size_get(chunk, run_ind) == prun_size); assert(arena_mapbits_dirty_get(chunk, run_ind) == flag_dirty); arena_avail_remove(arena, chunk, run_ind, prun_pages, true, false); size += prun_size; run_pages += prun_pages; arena_mapbits_unallocated_size_set(chunk, run_ind, size); arena_mapbits_unallocated_size_set(chunk, run_ind+run_pages-1, size); } *p_size = size; *p_run_ind = run_ind; *p_run_pages = run_pages; } static void arena_run_dalloc(arena_t *arena, arena_run_t *run, bool dirty, bool cleaned) { arena_chunk_t *chunk; size_t size, run_ind, run_pages, flag_dirty; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); run_ind = (size_t)(((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE); assert(run_ind >= map_bias); assert(run_ind < chunk_npages); if (arena_mapbits_large_get(chunk, run_ind) != 0) { size = arena_mapbits_large_size_get(chunk, run_ind); assert(size == PAGE || arena_mapbits_large_size_get(chunk, run_ind+(size>>LG_PAGE)-1) == 0); } else { size_t binind = arena_bin_index(arena, run->bin); arena_bin_info_t *bin_info = &arena_bin_info[binind]; size = bin_info->run_size; } run_pages = (size >> LG_PAGE); arena_cactive_update(arena, 0, run_pages); arena->nactive -= run_pages; /* * The run is dirty if the caller claims to have dirtied it, as well as * if it was already dirty before being allocated and the caller * doesn't claim to have cleaned it. 
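 *
 * Equivalently, the check below computes:
 *
 *   dirty (arg)   cleaned   map dirty bit   resulting run
 *   -----------   -------   -------------   -------------
 *   true          any       any             dirty
 *   false         true      any             clean
 *   false         false     set             dirty
 *   false         false     clear           clean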
*/ assert(arena_mapbits_dirty_get(chunk, run_ind) == arena_mapbits_dirty_get(chunk, run_ind+run_pages-1)); if (cleaned == false && arena_mapbits_dirty_get(chunk, run_ind) != 0) dirty = true; flag_dirty = dirty ? CHUNK_MAP_DIRTY : 0; /* Mark pages as unallocated in the chunk map. */ if (dirty) { arena_mapbits_unallocated_set(chunk, run_ind, size, CHUNK_MAP_DIRTY); arena_mapbits_unallocated_set(chunk, run_ind+run_pages-1, size, CHUNK_MAP_DIRTY); } else { arena_mapbits_unallocated_set(chunk, run_ind, size, arena_mapbits_unzeroed_get(chunk, run_ind)); arena_mapbits_unallocated_set(chunk, run_ind+run_pages-1, size, arena_mapbits_unzeroed_get(chunk, run_ind+run_pages-1)); } arena_run_coalesce(arena, chunk, &size, &run_ind, &run_pages, flag_dirty); /* Insert into runs_avail, now that coalescing is complete. */ assert(arena_mapbits_unallocated_size_get(chunk, run_ind) == arena_mapbits_unallocated_size_get(chunk, run_ind+run_pages-1)); assert(arena_mapbits_dirty_get(chunk, run_ind) == arena_mapbits_dirty_get(chunk, run_ind+run_pages-1)); arena_avail_insert(arena, chunk, run_ind, run_pages, true, true); /* Deallocate chunk if it is now completely unused. */ if (size == arena_maxclass) { assert(run_ind == map_bias); assert(run_pages == (arena_maxclass >> LG_PAGE)); arena_chunk_dalloc(arena, chunk); } /* * It is okay to do dirty page processing here even if the chunk was * deallocated above, since in that case it is the spare. Waiting * until after possible chunk deallocation to do dirty processing * allows for an old spare to be fully deallocated, thus decreasing the * chances of spuriously crossing the dirty page purging threshold. */ if (dirty) arena_maybe_purge(arena); } static void arena_run_trim_head(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, size_t oldsize, size_t newsize) { size_t pageind = ((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE; size_t head_npages = (oldsize - newsize) >> LG_PAGE; size_t flag_dirty = arena_mapbits_dirty_get(chunk, pageind); assert(oldsize > newsize); /* * Update the chunk map so that arena_run_dalloc() can treat the * leading run as separately allocated. Set the last element of each * run first, in case of single-page runs. */ assert(arena_mapbits_large_size_get(chunk, pageind) == oldsize); arena_mapbits_large_set(chunk, pageind+head_npages-1, 0, flag_dirty); arena_mapbits_large_set(chunk, pageind, oldsize-newsize, flag_dirty); if (config_debug) { UNUSED size_t tail_npages = newsize >> LG_PAGE; assert(arena_mapbits_large_size_get(chunk, pageind+head_npages+tail_npages-1) == 0); assert(arena_mapbits_dirty_get(chunk, pageind+head_npages+tail_npages-1) == flag_dirty); } arena_mapbits_large_set(chunk, pageind+head_npages, newsize, flag_dirty); arena_run_dalloc(arena, run, false, false); } static void arena_run_trim_tail(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, size_t oldsize, size_t newsize, bool dirty) { size_t pageind = ((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE; size_t head_npages = newsize >> LG_PAGE; size_t flag_dirty = arena_mapbits_dirty_get(chunk, pageind); assert(oldsize > newsize); /* * Update the chunk map so that arena_run_dalloc() can treat the * trailing run as separately allocated. Set the last element of each * run first, in case of single-page runs. 
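 *
 * Sketch of the split performed below (the head is kept, the tail is
 * freed):
 *
 *   before:  |<------------------ oldsize ------------------>|
 *   after:   |<----- newsize ----->|<-- oldsize - newsize --->|
 *             (kept by the caller)  (freed via arena_run_dalloc())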
*/ assert(arena_mapbits_large_size_get(chunk, pageind) == oldsize); arena_mapbits_large_set(chunk, pageind+head_npages-1, 0, flag_dirty); arena_mapbits_large_set(chunk, pageind, newsize, flag_dirty); if (config_debug) { UNUSED size_t tail_npages = (oldsize - newsize) >> LG_PAGE; assert(arena_mapbits_large_size_get(chunk, pageind+head_npages+tail_npages-1) == 0); assert(arena_mapbits_dirty_get(chunk, pageind+head_npages+tail_npages-1) == flag_dirty); } arena_mapbits_large_set(chunk, pageind+head_npages, oldsize-newsize, flag_dirty); arena_run_dalloc(arena, (arena_run_t *)((uintptr_t)run + newsize), dirty, false); } static arena_run_t * arena_bin_runs_first(arena_bin_t *bin) { arena_chunk_map_t *mapelm = arena_run_tree_first(&bin->runs); if (mapelm != NULL) { arena_chunk_t *chunk; size_t pageind; arena_run_t *run; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(mapelm); pageind = arena_mapelm_to_pageind(mapelm); run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)((pageind - arena_mapbits_small_runind_get(chunk, pageind)) << LG_PAGE)); return (run); } return (NULL); } static void arena_bin_runs_insert(arena_bin_t *bin, arena_run_t *run) { arena_chunk_t *chunk = CHUNK_ADDR2BASE(run); size_t pageind = ((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE; arena_chunk_map_t *mapelm = arena_mapp_get(chunk, pageind); assert(arena_run_tree_search(&bin->runs, mapelm) == NULL); arena_run_tree_insert(&bin->runs, mapelm); } static void arena_bin_runs_remove(arena_bin_t *bin, arena_run_t *run) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); size_t pageind = ((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE; arena_chunk_map_t *mapelm = arena_mapp_get(chunk, pageind); assert(arena_run_tree_search(&bin->runs, mapelm) != NULL); arena_run_tree_remove(&bin->runs, mapelm); } static arena_run_t * arena_bin_nonfull_run_tryget(arena_bin_t *bin) { arena_run_t *run = arena_bin_runs_first(bin); if (run != NULL) { arena_bin_runs_remove(bin, run); if (config_stats) bin->stats.reruns++; } return (run); } static arena_run_t * arena_bin_nonfull_run_get(arena_t *arena, arena_bin_t *bin) { arena_run_t *run; size_t binind; arena_bin_info_t *bin_info; /* Look for a usable run. */ run = arena_bin_nonfull_run_tryget(bin); if (run != NULL) return (run); /* No existing runs have any space available. */ binind = arena_bin_index(arena, bin); bin_info = &arena_bin_info[binind]; /* Allocate a new run. */ malloc_mutex_unlock(&bin->lock); /******************************/ malloc_mutex_lock(&arena->lock); run = arena_run_alloc_small(arena, bin_info->run_size, binind); if (run != NULL) { bitmap_t *bitmap = (bitmap_t *)((uintptr_t)run + (uintptr_t)bin_info->bitmap_offset); /* Initialize run internals. */ run->bin = bin; run->nextind = 0; run->nfree = bin_info->nregs; bitmap_init(bitmap, &bin_info->bitmap_info); } malloc_mutex_unlock(&arena->lock); /********************************/ malloc_mutex_lock(&bin->lock); if (run != NULL) { if (config_stats) { bin->stats.nruns++; bin->stats.curruns++; } return (run); } /* * arena_run_alloc_small() failed, but another thread may have made * sufficient memory available while this one dropped bin->lock above, * so search one more time. */ run = arena_bin_nonfull_run_tryget(bin); if (run != NULL) return (run); return (NULL); } /* Re-fill bin->runcur, then call arena_run_reg_alloc(). 
*/ static void * arena_bin_malloc_hard(arena_t *arena, arena_bin_t *bin) { void *ret; size_t binind; arena_bin_info_t *bin_info; arena_run_t *run; binind = arena_bin_index(arena, bin); bin_info = &arena_bin_info[binind]; bin->runcur = NULL; run = arena_bin_nonfull_run_get(arena, bin); if (bin->runcur != NULL && bin->runcur->nfree > 0) { /* * Another thread updated runcur while this one ran without the * bin lock in arena_bin_nonfull_run_get(). */ assert(bin->runcur->nfree > 0); ret = arena_run_reg_alloc(bin->runcur, bin_info); if (run != NULL) { arena_chunk_t *chunk; /* * arena_run_alloc_small() may have allocated run, or * it may have pulled run from the bin's run tree. * Therefore it is unsafe to make any assumptions about * how run has previously been used, and * arena_bin_lower_run() must be called, as if a region * were just deallocated from the run. */ chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); if (run->nfree == bin_info->nregs) arena_dalloc_bin_run(arena, chunk, run, bin); else arena_bin_lower_run(arena, chunk, run, bin); } return (ret); } if (run == NULL) return (NULL); bin->runcur = run; assert(bin->runcur->nfree > 0); return (arena_run_reg_alloc(bin->runcur, bin_info)); } void arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin, size_t binind, uint64_t prof_accumbytes) { unsigned i, nfill; arena_bin_t *bin; arena_run_t *run; void *ptr; assert(tbin->ncached == 0); if (config_prof && arena_prof_accum(arena, prof_accumbytes)) prof_idump(); bin = &arena->bins[binind]; malloc_mutex_lock(&bin->lock); for (i = 0, nfill = (tcache_bin_info[binind].ncached_max >> tbin->lg_fill_div); i < nfill; i++) { if ((run = bin->runcur) != NULL && run->nfree > 0) ptr = arena_run_reg_alloc(run, &arena_bin_info[binind]); else ptr = arena_bin_malloc_hard(arena, bin); if (ptr == NULL) break; if (config_fill && opt_junk) { arena_alloc_junk_small(ptr, &arena_bin_info[binind], true); } tbin->avail[i] = ptr; } if (config_stats) { bin->stats.allocated += i * arena_bin_info[binind].reg_size; bin->stats.nmalloc += i; bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.nfills++; tbin->tstats.nrequests = 0; } malloc_mutex_unlock(&bin->lock); tbin->ncached = i; } void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info, bool zero) { if (zero) { size_t redzone_size = bin_info->redzone_size; memset((void *)((uintptr_t)ptr - redzone_size), 0xa5, redzone_size); memset((void *)((uintptr_t)ptr + bin_info->reg_size), 0xa5, redzone_size); } else { memset((void *)((uintptr_t)ptr - bin_info->redzone_size), 0xa5, bin_info->reg_interval); } } #ifdef JEMALLOC_JET #undef arena_redzone_corruption #define arena_redzone_corruption JEMALLOC_N(arena_redzone_corruption_impl) #endif static void arena_redzone_corruption(void *ptr, size_t usize, bool after, size_t offset, uint8_t byte) { malloc_printf(": Corrupt redzone %zu byte%s %s %p " "(size %zu), byte=%#x\n", offset, (offset == 1) ? "" : "s", after ? 
"after" : "before", ptr, usize, byte); } #ifdef JEMALLOC_JET #undef arena_redzone_corruption #define arena_redzone_corruption JEMALLOC_N(arena_redzone_corruption) arena_redzone_corruption_t *arena_redzone_corruption = JEMALLOC_N(arena_redzone_corruption_impl); #endif static void arena_redzones_validate(void *ptr, arena_bin_info_t *bin_info, bool reset) { size_t size = bin_info->reg_size; size_t redzone_size = bin_info->redzone_size; size_t i; bool error = false; for (i = 1; i <= redzone_size; i++) { uint8_t *byte = (uint8_t *)((uintptr_t)ptr - i); if (*byte != 0xa5) { error = true; arena_redzone_corruption(ptr, size, false, i, *byte); if (reset) *byte = 0xa5; } } for (i = 0; i < redzone_size; i++) { uint8_t *byte = (uint8_t *)((uintptr_t)ptr + size + i); if (*byte != 0xa5) { error = true; arena_redzone_corruption(ptr, size, true, i, *byte); if (reset) *byte = 0xa5; } } if (opt_abort && error) abort(); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_small #define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small_impl) #endif void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info) { size_t redzone_size = bin_info->redzone_size; arena_redzones_validate(ptr, bin_info, false); memset((void *)((uintptr_t)ptr - redzone_size), 0x5a, bin_info->reg_interval); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_small #define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small) arena_dalloc_junk_small_t *arena_dalloc_junk_small = JEMALLOC_N(arena_dalloc_junk_small_impl); #endif void arena_quarantine_junk_small(void *ptr, size_t usize) { size_t binind; arena_bin_info_t *bin_info; cassert(config_fill); assert(opt_junk); assert(opt_quarantine); assert(usize <= SMALL_MAXCLASS); binind = small_size2bin(usize); assert(binind < NBINS); bin_info = &arena_bin_info[binind]; arena_redzones_validate(ptr, bin_info, true); } void * arena_malloc_small(arena_t *arena, size_t size, bool zero) { void *ret; arena_bin_t *bin; arena_run_t *run; size_t binind; if (arena == NULL) return NULL; binind = small_size2bin(size); assert(binind < NBINS); bin = &arena->bins[binind]; size = small_bin2size(binind); malloc_mutex_lock(&bin->lock); if ((run = bin->runcur) != NULL && run->nfree > 0) ret = arena_run_reg_alloc(run, &arena_bin_info[binind]); else ret = arena_bin_malloc_hard(arena, bin); if (ret == NULL) { malloc_mutex_unlock(&bin->lock); return (NULL); } if (config_stats) { bin->stats.allocated += size; bin->stats.nmalloc++; bin->stats.nrequests++; } malloc_mutex_unlock(&bin->lock); if (config_prof && isthreaded == false && arena_prof_accum(arena, size)) prof_idump(); if (zero == false) { if (config_fill) { if (opt_junk) { arena_alloc_junk_small(ret, &arena_bin_info[binind], false); } else if (opt_zero) memset(ret, 0, size); } JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size); } else { if (config_fill && opt_junk) { arena_alloc_junk_small(ret, &arena_bin_info[binind], true); } JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, size); memset(ret, 0, size); } return (ret); } void * arena_malloc_large(arena_t *arena, size_t size, bool zero) { void *ret; UNUSED bool idump; if (arena == NULL) return NULL; /* Large allocation. 
*/ size = PAGE_CEILING(size); malloc_mutex_lock(&arena->lock); ret = (void *)arena_run_alloc_large(arena, size, zero); if (ret == NULL) { malloc_mutex_unlock(&arena->lock); return (NULL); } if (config_stats) { arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += size; arena->stats.lstats[(size >> LG_PAGE) - 1].nmalloc++; arena->stats.lstats[(size >> LG_PAGE) - 1].nrequests++; arena->stats.lstats[(size >> LG_PAGE) - 1].curruns++; } if (config_prof) idump = arena_prof_accum_locked(arena, size); malloc_mutex_unlock(&arena->lock); if (config_prof && idump) prof_idump(); if (zero == false) { if (config_fill) { if (opt_junk) memset(ret, 0xa5, size); else if (opt_zero) memset(ret, 0, size); } } return (ret); } /* Only handles large allocations that require more than page alignment. */ void * arena_palloc(arena_t *arena, size_t size, size_t alignment, bool zero) { void *ret; size_t alloc_size, leadsize, trailsize; arena_run_t *run; arena_chunk_t *chunk; assert((size & PAGE_MASK) == 0); alignment = PAGE_CEILING(alignment); alloc_size = size + alignment - PAGE; malloc_mutex_lock(&arena->lock); run = arena_run_alloc_large(arena, alloc_size, false); if (run == NULL) { malloc_mutex_unlock(&arena->lock); return (NULL); } chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(run); leadsize = ALIGNMENT_CEILING((uintptr_t)run, alignment) - (uintptr_t)run; assert(alloc_size >= leadsize + size); trailsize = alloc_size - leadsize - size; ret = (void *)((uintptr_t)run + leadsize); if (leadsize != 0) { arena_run_trim_head(arena, chunk, run, alloc_size, alloc_size - leadsize); } if (trailsize != 0) { arena_run_trim_tail(arena, chunk, ret, size + trailsize, size, false); } arena_run_init_large(arena, (arena_run_t *)ret, size, zero); if (config_stats) { arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += size; arena->stats.lstats[(size >> LG_PAGE) - 1].nmalloc++; arena->stats.lstats[(size >> LG_PAGE) - 1].nrequests++; arena->stats.lstats[(size >> LG_PAGE) - 1].curruns++; } malloc_mutex_unlock(&arena->lock); if (config_fill && zero == false) { if (opt_junk) memset(ret, 0xa5, size); else if (opt_zero) memset(ret, 0, size); } return (ret); } void arena_prof_promoted(const void *ptr, size_t size) { arena_chunk_t *chunk; size_t pageind, binind; cassert(config_prof); assert(ptr != NULL); assert(CHUNK_ADDR2BASE(ptr) != ptr); assert(isalloc(ptr, false) == PAGE); assert(isalloc(ptr, true) == PAGE); assert(size <= SMALL_MAXCLASS); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; binind = small_size2bin(size); assert(binind < NBINS); arena_mapbits_large_binind_set(chunk, pageind, binind); assert(isalloc(ptr, false) == PAGE); assert(isalloc(ptr, true) == size); } static void arena_dissociate_bin_run(arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin) { /* Dissociate run from bin. */ if (run == bin->runcur) bin->runcur = NULL; else { size_t binind = arena_bin_index(chunk->arena, bin); arena_bin_info_t *bin_info = &arena_bin_info[binind]; if (bin_info->nregs != 1) { /* * This block's conditional is necessary because if the * run only contains one region, then it never gets * inserted into the non-full runs tree. 
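 *
 * (With nregs == 1 a run goes directly from full to empty when its
 * only region is freed, so it is never observed as a non-full run and
 * never enters bin->runs.)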
*/ arena_bin_runs_remove(bin, run); } } } static void arena_dalloc_bin_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin) { size_t binind; arena_bin_info_t *bin_info; size_t npages, run_ind, past; assert(run != bin->runcur); assert(arena_run_tree_search(&bin->runs, arena_mapp_get(chunk, ((uintptr_t)run-(uintptr_t)chunk)>>LG_PAGE)) == NULL); binind = arena_bin_index(chunk->arena, run->bin); bin_info = &arena_bin_info[binind]; malloc_mutex_unlock(&bin->lock); /******************************/ npages = bin_info->run_size >> LG_PAGE; run_ind = (size_t)(((uintptr_t)run - (uintptr_t)chunk) >> LG_PAGE); past = (size_t)(PAGE_CEILING((uintptr_t)run + (uintptr_t)bin_info->reg0_offset + (uintptr_t)(run->nextind * bin_info->reg_interval - bin_info->redzone_size) - (uintptr_t)chunk) >> LG_PAGE); malloc_mutex_lock(&arena->lock); /* * If the run was originally clean, and some pages were never touched, * trim the clean pages before deallocating the dirty portion of the * run. */ assert(arena_mapbits_dirty_get(chunk, run_ind) == arena_mapbits_dirty_get(chunk, run_ind+npages-1)); if (arena_mapbits_dirty_get(chunk, run_ind) == 0 && past - run_ind < npages) { /* Trim clean pages. Convert to large run beforehand. */ assert(npages > 0); arena_mapbits_large_set(chunk, run_ind, bin_info->run_size, 0); arena_mapbits_large_set(chunk, run_ind+npages-1, 0, 0); arena_run_trim_tail(arena, chunk, run, (npages << LG_PAGE), ((past - run_ind) << LG_PAGE), false); /* npages = past - run_ind; */ } arena_run_dalloc(arena, run, true, false); malloc_mutex_unlock(&arena->lock); /****************************/ malloc_mutex_lock(&bin->lock); if (config_stats) bin->stats.curruns--; } static void arena_bin_lower_run(arena_t *arena, arena_chunk_t *chunk, arena_run_t *run, arena_bin_t *bin) { /* * Make sure that if bin->runcur is non-NULL, it refers to the lowest * non-full run. It is okay to NULL runcur out rather than proactively * keeping it pointing at the lowest non-full run. */ if ((uintptr_t)run < (uintptr_t)bin->runcur) { /* Switch runcur. 
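 *
 * The displaced runcur is reinserted into bin->runs only if it still
 * has free regions; a completely full run stays untracked until a
 * region is freed from it again.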
*/ if (bin->runcur->nfree > 0) arena_bin_runs_insert(bin, bin->runcur); bin->runcur = run; if (config_stats) bin->stats.reruns++; } else arena_bin_runs_insert(bin, run); } void arena_dalloc_bin_locked(arena_t *arena, arena_chunk_t *chunk, void *ptr, arena_chunk_map_t *mapelm) { size_t pageind; arena_run_t *run; arena_bin_t *bin; arena_bin_info_t *bin_info; size_t size, binind; pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)((pageind - arena_mapbits_small_runind_get(chunk, pageind)) << LG_PAGE)); bin = run->bin; binind = arena_ptr_small_binind_get(ptr, arena_mapbits_get(chunk, pageind)); bin_info = &arena_bin_info[binind]; if (config_fill || config_stats) size = bin_info->reg_size; if (config_fill && opt_junk) arena_dalloc_junk_small(ptr, bin_info); arena_run_reg_dalloc(run, ptr); if (run->nfree == bin_info->nregs) { arena_dissociate_bin_run(chunk, run, bin); arena_dalloc_bin_run(arena, chunk, run, bin); } else if (run->nfree == 1 && run != bin->runcur) arena_bin_lower_run(arena, chunk, run, bin); if (config_stats) { bin->stats.allocated -= size; bin->stats.ndalloc++; } } void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t pageind, arena_chunk_map_t *mapelm) { arena_run_t *run; arena_bin_t *bin; run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)((pageind - arena_mapbits_small_runind_get(chunk, pageind)) << LG_PAGE)); bin = run->bin; malloc_mutex_lock(&bin->lock); arena_dalloc_bin_locked(arena, chunk, ptr, mapelm); malloc_mutex_unlock(&bin->lock); } void arena_dalloc_small(arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t pageind) { arena_chunk_map_t *mapelm; if (config_debug) { /* arena_ptr_small_binind_get() does extra sanity checking. */ assert(arena_ptr_small_binind_get(ptr, arena_mapbits_get(chunk, pageind)) != BININD_INVALID); } mapelm = arena_mapp_get(chunk, pageind); arena_dalloc_bin(arena, chunk, ptr, pageind, mapelm); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_large #define arena_dalloc_junk_large JEMALLOC_N(arena_dalloc_junk_large_impl) #endif static void arena_dalloc_junk_large(void *ptr, size_t usize) { if (config_fill && opt_junk) memset(ptr, 0x5a, usize); } #ifdef JEMALLOC_JET #undef arena_dalloc_junk_large #define arena_dalloc_junk_large JEMALLOC_N(arena_dalloc_junk_large) arena_dalloc_junk_large_t *arena_dalloc_junk_large = JEMALLOC_N(arena_dalloc_junk_large_impl); #endif void arena_dalloc_large_locked(arena_t *arena, arena_chunk_t *chunk, void *ptr) { if (config_fill || config_stats) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; size_t usize = arena_mapbits_large_size_get(chunk, pageind); arena_dalloc_junk_large(ptr, usize); if (config_stats) { arena->stats.ndalloc_large++; arena->stats.allocated_large -= usize; arena->stats.lstats[(usize >> LG_PAGE) - 1].ndalloc++; arena->stats.lstats[(usize >> LG_PAGE) - 1].curruns--; } } arena_run_dalloc(arena, (arena_run_t *)ptr, true, false); } void arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk, void *ptr) { malloc_mutex_lock(&arena->lock); arena_dalloc_large_locked(arena, chunk, ptr); malloc_mutex_unlock(&arena->lock); } static void arena_ralloc_large_shrink(arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t oldsize, size_t size) { assert(size < oldsize); /* * Shrink the run, and make trailing pages available for other * allocations. 
*/ malloc_mutex_lock(&arena->lock); arena_run_trim_tail(arena, chunk, (arena_run_t *)ptr, oldsize, size, true); if (config_stats) { arena->stats.ndalloc_large++; arena->stats.allocated_large -= oldsize; arena->stats.lstats[(oldsize >> LG_PAGE) - 1].ndalloc++; arena->stats.lstats[(oldsize >> LG_PAGE) - 1].curruns--; arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += size; arena->stats.lstats[(size >> LG_PAGE) - 1].nmalloc++; arena->stats.lstats[(size >> LG_PAGE) - 1].nrequests++; arena->stats.lstats[(size >> LG_PAGE) - 1].curruns++; } malloc_mutex_unlock(&arena->lock); } static bool arena_ralloc_large_grow(arena_t *arena, arena_chunk_t *chunk, void *ptr, size_t oldsize, size_t size, size_t extra, bool zero) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; size_t npages = oldsize >> LG_PAGE; size_t followsize; assert(oldsize == arena_mapbits_large_size_get(chunk, pageind)); /* Try to extend the run. */ assert(size + extra > oldsize); malloc_mutex_lock(&arena->lock); if (pageind + npages < chunk_npages && arena_mapbits_allocated_get(chunk, pageind+npages) == 0 && (followsize = arena_mapbits_unallocated_size_get(chunk, pageind+npages)) >= size - oldsize) { /* * The next run is available and sufficiently large. Split the * following run, then merge the first part with the existing * allocation. */ size_t flag_dirty; size_t splitsize = (oldsize + followsize <= size + extra) ? followsize : size + extra - oldsize; arena_run_split_large(arena, (arena_run_t *)((uintptr_t)chunk + ((pageind+npages) << LG_PAGE)), splitsize, zero); size = oldsize + splitsize; npages = size >> LG_PAGE; /* * Mark the extended run as dirty if either portion of the run * was dirty before allocation. This is rather pedantic, * because there's not actually any sequence of events that * could cause the resulting run to be passed to * arena_run_dalloc() with the dirty argument set to false * (which is when dirty flag consistency would really matter). */ flag_dirty = arena_mapbits_dirty_get(chunk, pageind) | arena_mapbits_dirty_get(chunk, pageind+npages-1); arena_mapbits_large_set(chunk, pageind, size, flag_dirty); arena_mapbits_large_set(chunk, pageind+npages-1, 0, flag_dirty); if (config_stats) { arena->stats.ndalloc_large++; arena->stats.allocated_large -= oldsize; arena->stats.lstats[(oldsize >> LG_PAGE) - 1].ndalloc++; arena->stats.lstats[(oldsize >> LG_PAGE) - 1].curruns--; arena->stats.nmalloc_large++; arena->stats.nrequests_large++; arena->stats.allocated_large += size; arena->stats.lstats[(size >> LG_PAGE) - 1].nmalloc++; arena->stats.lstats[(size >> LG_PAGE) - 1].nrequests++; arena->stats.lstats[(size >> LG_PAGE) - 1].curruns++; } malloc_mutex_unlock(&arena->lock); return (false); } malloc_mutex_unlock(&arena->lock); return (true); } #ifdef JEMALLOC_JET #undef arena_ralloc_junk_large #define arena_ralloc_junk_large JEMALLOC_N(arena_ralloc_junk_large_impl) #endif static void arena_ralloc_junk_large(void *ptr, size_t old_usize, size_t usize) { if (config_fill && opt_junk) { memset((void *)((uintptr_t)ptr + usize), 0x5a, old_usize - usize); } } #ifdef JEMALLOC_JET #undef arena_ralloc_junk_large #define arena_ralloc_junk_large JEMALLOC_N(arena_ralloc_junk_large) arena_ralloc_junk_large_t *arena_ralloc_junk_large = JEMALLOC_N(arena_ralloc_junk_large_impl); #endif /* * Try to resize a large allocation, in order to avoid copying. This will * always fail if growing an object, and the following run is already in use. 
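 *
 * Growing in place succeeds only when the run immediately following
 * the allocation in the same chunk is unallocated and large enough
 * (see arena_ralloc_large_grow() above); shrinking in place always
 * succeeds.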
*/ static bool arena_ralloc_large(void *ptr, size_t oldsize, size_t size, size_t extra, bool zero) { size_t psize; psize = PAGE_CEILING(size + extra); if (psize == oldsize) { /* Same size class. */ return (false); } else { arena_chunk_t *chunk; arena_t *arena; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); arena = chunk->arena; if (psize < oldsize) { /* Fill before shrinking in order avoid a race. */ arena_ralloc_junk_large(ptr, oldsize, psize); arena_ralloc_large_shrink(arena, chunk, ptr, oldsize, psize); return (false); } else { bool ret = arena_ralloc_large_grow(arena, chunk, ptr, oldsize, PAGE_CEILING(size), psize - PAGE_CEILING(size), zero); if (config_fill && ret == false && zero == false) { if (opt_junk) { memset((void *)((uintptr_t)ptr + oldsize), 0xa5, isalloc(ptr, config_prof) - oldsize); } else if (opt_zero) { memset((void *)((uintptr_t)ptr + oldsize), 0, isalloc(ptr, config_prof) - oldsize); } } return (ret); } } } bool arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size, size_t extra, bool zero) { /* * Avoid moving the allocation if the size class can be left the same. */ if (oldsize <= arena_maxclass) { if (oldsize <= SMALL_MAXCLASS) { assert(small_size2bin(oldsize) < NBINS); assert(arena_bin_info[small_size2bin(oldsize)].reg_size == oldsize); if ((size + extra <= SMALL_MAXCLASS && small_size2bin(size + extra) == small_size2bin(oldsize)) || (size <= oldsize && size + extra >= oldsize)) return (false); } else { assert(size <= arena_maxclass); if (size + extra > SMALL_MAXCLASS) { if (arena_ralloc_large(ptr, oldsize, size, extra, zero) == false) return (false); } } } /* Reallocation would require a move. */ return (true); } void * arena_ralloc(arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc) { void *ret; size_t copysize; /* Try to avoid moving the allocation. */ if (arena_ralloc_no_move(ptr, oldsize, size, extra, zero) == false) return (ptr); /* * size and oldsize are different enough that we need to move the * object. In that case, fall back to allocating new space and * copying. */ if (alignment != 0) { size_t usize = sa2u(size + extra, alignment); if (usize == 0) return (NULL); ret = ipalloct(usize, alignment, zero, try_tcache_alloc, arena); } else ret = arena_malloc(arena, size + extra, zero, try_tcache_alloc); if (ret == NULL) { if (extra == 0) return (NULL); /* Try again, this time without extra. */ if (alignment != 0) { size_t usize = sa2u(size, alignment); if (usize == 0) return (NULL); ret = ipalloct(usize, alignment, zero, try_tcache_alloc, arena); } else ret = arena_malloc(arena, size, zero, try_tcache_alloc); if (ret == NULL) return (NULL); } /* Junk/zero-filling were already done by ipalloc()/arena_malloc(). */ /* * Copy at most size bytes (not size+extra), since the caller has no * expectation that the extra bytes will be reliably preserved. */ copysize = (size < oldsize) ? 
size : oldsize; JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, copysize); memcpy(ret, ptr, copysize); pool_iqalloct(arena->pool, ptr, try_tcache_dalloc); return (ret); } dss_prec_t arena_dss_prec_get(arena_t *arena) { dss_prec_t ret; malloc_mutex_lock(&arena->lock); ret = arena->dss_prec; malloc_mutex_unlock(&arena->lock); return (ret); } bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec) { if (have_dss == false) return (dss_prec != dss_prec_disabled); malloc_mutex_lock(&arena->lock); arena->dss_prec = dss_prec; malloc_mutex_unlock(&arena->lock); return (false); } void arena_stats_merge(arena_t *arena, const char **dss, size_t *nactive, size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats) { unsigned i; malloc_mutex_lock(&arena->lock); *dss = dss_prec_names[arena->dss_prec]; *nactive += arena->nactive; *ndirty += arena->ndirty; astats->mapped += arena->stats.mapped; astats->npurge += arena->stats.npurge; astats->nmadvise += arena->stats.nmadvise; astats->purged += arena->stats.purged; astats->allocated_large += arena->stats.allocated_large; astats->nmalloc_large += arena->stats.nmalloc_large; astats->ndalloc_large += arena->stats.ndalloc_large; astats->nrequests_large += arena->stats.nrequests_large; astats->allocated_huge += arena->stats.allocated_huge; astats->nmalloc_huge += arena->stats.nmalloc_huge; astats->ndalloc_huge += arena->stats.ndalloc_huge; astats->nrequests_huge += arena->stats.nrequests_huge; for (i = 0; i < nlclasses; i++) { lstats[i].nmalloc += arena->stats.lstats[i].nmalloc; lstats[i].ndalloc += arena->stats.lstats[i].ndalloc; lstats[i].nrequests += arena->stats.lstats[i].nrequests; lstats[i].curruns += arena->stats.lstats[i].curruns; } malloc_mutex_unlock(&arena->lock); for (i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; malloc_mutex_lock(&bin->lock); bstats[i].allocated += bin->stats.allocated; bstats[i].nmalloc += bin->stats.nmalloc; bstats[i].ndalloc += bin->stats.ndalloc; bstats[i].nrequests += bin->stats.nrequests; if (config_tcache) { bstats[i].nfills += bin->stats.nfills; bstats[i].nflushes += bin->stats.nflushes; } bstats[i].nruns += bin->stats.nruns; bstats[i].reruns += bin->stats.reruns; bstats[i].curruns += bin->stats.curruns; malloc_mutex_unlock(&bin->lock); } } /* * Called at each pool opening. */ bool arena_boot(arena_t *arena) { unsigned i; arena_bin_t *bin; if (malloc_mutex_init(&arena->lock)) return (true); /* Initialize bins. */ for (i = 0; i < NBINS; i++) { bin = &arena->bins[i]; if (malloc_mutex_init(&bin->lock)) return (true); } arena->nthreads = 0; return (false); } /* * Called only at pool/arena creation. */ bool arena_new(pool_t *pool, arena_t *arena, unsigned ind) { unsigned i; arena_bin_t *bin; arena->ind = ind; arena->nthreads = 0; arena->chunk_alloc = chunk_alloc_default; arena->chunk_dalloc = chunk_dalloc_default; arena->pool = pool; if (malloc_mutex_init(&arena->lock)) return (true); if (config_stats) { memset(&arena->stats, 0, sizeof(arena_stats_t)); arena->stats.lstats = (malloc_large_stats_t *)base_alloc(pool, nlclasses * sizeof(malloc_large_stats_t)); if (arena->stats.lstats == NULL) return (true); memset(arena->stats.lstats, 0, nlclasses * sizeof(malloc_large_stats_t)); if (config_tcache) ql_new(&arena->tcache_ql); } if (config_prof) arena->prof_accumbytes = 0; arena->dss_prec = chunk_dss_prec_get(); /* Initialize chunks. 
*/ arena_chunk_dirty_new(&arena->chunks_dirty); arena->spare = NULL; arena->nactive = 0; arena->ndirty = 0; arena->npurgatory = 0; arena_avail_tree_new(&arena->runs_avail); /* Initialize bins. */ for (i = 0; i < NBINS; i++) { bin = &arena->bins[i]; if (malloc_mutex_init(&bin->lock)) return (true); bin->runcur = NULL; arena_run_tree_new(&bin->runs); if (config_stats) memset(&bin->stats, 0, sizeof(malloc_bin_stats_t)); } return (false); } /* * Calculate bin_info->run_size such that it meets the following constraints: * * *) bin_info->run_size >= min_run_size * *) bin_info->run_size <= arena_maxclass * *) run header overhead <= RUN_MAX_OVRHD (or header overhead relaxed). * *) bin_info->nregs <= RUN_MAXREGS * * bin_info->nregs, bin_info->bitmap_offset, and bin_info->reg0_offset are also * calculated here, since these settings are all interdependent. */ static size_t bin_info_run_size_calc(arena_bin_info_t *bin_info, size_t min_run_size) { size_t pad_size; size_t try_run_size, good_run_size; uint32_t try_nregs, good_nregs; uint32_t try_hdr_size, good_hdr_size; uint32_t try_bitmap_offset, good_bitmap_offset; uint32_t try_redzone0_offset, good_redzone0_offset; assert(min_run_size >= PAGE); assert(min_run_size <= arena_maxclass); /* * Determine redzone size based on minimum alignment and minimum * redzone size. Add padding to the end of the run if it is needed to * align the regions. The padding allows each redzone to be half the * minimum alignment; without the padding, each redzone would have to * be twice as large in order to maintain alignment. */ if (config_fill && opt_redzone) { size_t align_min = ZU(1) << (jemalloc_ffs(bin_info->reg_size) - 1); if (align_min <= REDZONE_MINSIZE) { bin_info->redzone_size = REDZONE_MINSIZE; pad_size = 0; } else { bin_info->redzone_size = align_min >> 1; pad_size = bin_info->redzone_size; } } else { bin_info->redzone_size = 0; pad_size = 0; } bin_info->reg_interval = bin_info->reg_size + (bin_info->redzone_size << 1); /* * Calculate known-valid settings before entering the run_size * expansion loop, so that the first part of the loop always copies * valid settings. * * The do..while loop iteratively reduces the number of regions until * the run header and the regions no longer overlap. A closed formula * would be quite messy, since there is an interdependency between the * header's mask length and the number of regions. */ try_run_size = min_run_size; try_nregs = ((try_run_size - sizeof(arena_run_t)) / bin_info->reg_interval) + 1; /* Counter-act try_nregs-- in loop. */ if (try_nregs > RUN_MAXREGS) { try_nregs = RUN_MAXREGS + 1; /* Counter-act try_nregs-- in loop. */ } do { try_nregs--; try_hdr_size = sizeof(arena_run_t); /* Pad to a long boundary. */ try_hdr_size = LONG_CEILING(try_hdr_size); try_bitmap_offset = try_hdr_size; /* Add space for bitmap. */ try_hdr_size += bitmap_size(try_nregs); try_redzone0_offset = try_run_size - (try_nregs * bin_info->reg_interval) - pad_size; } while (try_hdr_size > try_redzone0_offset); /* run_size expansion loop. */ do { /* * Copy valid settings before trying more aggressive settings. */ good_run_size = try_run_size; good_nregs = try_nregs; good_hdr_size = try_hdr_size; good_bitmap_offset = try_bitmap_offset; good_redzone0_offset = try_redzone0_offset; /* Try more aggressive settings. */ try_run_size += PAGE; try_nregs = ((try_run_size - sizeof(arena_run_t) - pad_size) / bin_info->reg_interval) + 1; /* Counter-act try_nregs-- in loop. 
*/ if (try_nregs > RUN_MAXREGS) { try_nregs = RUN_MAXREGS + 1; /* Counter-act try_nregs-- in loop. */ } do { try_nregs--; try_hdr_size = sizeof(arena_run_t); /* Pad to a long boundary. */ try_hdr_size = LONG_CEILING(try_hdr_size); try_bitmap_offset = try_hdr_size; /* Add space for bitmap. */ try_hdr_size += bitmap_size(try_nregs); try_redzone0_offset = try_run_size - (try_nregs * bin_info->reg_interval) - pad_size; } while (try_hdr_size > try_redzone0_offset); } while (try_run_size <= arena_maxclass && RUN_MAX_OVRHD * (bin_info->reg_interval << 3) > RUN_MAX_OVRHD_RELAX && (try_redzone0_offset << RUN_BFP) > RUN_MAX_OVRHD * try_run_size && try_nregs < RUN_MAXREGS); assert(good_hdr_size <= good_redzone0_offset); /* Copy final settings. */ bin_info->run_size = good_run_size; bin_info->nregs = good_nregs; bin_info->bitmap_offset = good_bitmap_offset; bin_info->reg0_offset = good_redzone0_offset + bin_info->redzone_size; assert(bin_info->reg0_offset - bin_info->redzone_size + (bin_info->nregs * bin_info->reg_interval) + pad_size == bin_info->run_size); return (good_run_size); } static void bin_info_init(void) { arena_bin_info_t *bin_info; size_t prev_run_size = PAGE; #define BIN_INFO_INIT_bin_yes(index, size) \ bin_info = &arena_bin_info[index]; \ bin_info->reg_size = size; \ prev_run_size = bin_info_run_size_calc(bin_info, prev_run_size);\ bitmap_info_init(&bin_info->bitmap_info, bin_info->nregs); #define BIN_INFO_INIT_bin_no(index, size) #define SC(index, lg_grp, lg_delta, ndelta, bin, lg_delta_lookup) \ BIN_INFO_INIT_bin_##bin(index, (ZU(1)<<(lg_grp)) + (ZU(ndelta)<<(lg_delta))) SIZE_CLASSES #undef BIN_INFO_INIT_bin_yes #undef BIN_INFO_INIT_bin_no #undef SC } void arena_params_boot(void) { size_t header_size; unsigned i; /* * Compute the header size such that it is large enough to contain the * page map. The page map is biased to omit entries for the header * itself, so some iteration is necessary to compute the map bias. * * 1) Compute safe header_size and map_bias values that include enough * space for an unbiased page map. * 2) Refine map_bias based on (1) to omit the header pages in the page * map. The resulting map_bias may be one too small. * 3) Refine map_bias based on (2). The result will be >= the result * from (2), and will always be correct. 
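 *
 * Illustrative sketch (a hypothetical standalone helper, kept under
 * "#if 0" so it is never compiled): the same fixed-point iteration
 * with the chunk geometry passed in as parameters.
 */
#if 0
static size_t
example_map_bias(size_t npages, size_t map_offset, size_t map_entry_size,
    unsigned lg_page)
{
	size_t bias = 0;
	size_t page_mask = ((size_t)1 << lg_page) - 1;
	unsigned i;

	/* Three refinement passes, mirroring the loop below. */
	for (i = 0; i < 3; i++) {
		size_t header_size = map_offset +
		    (map_entry_size * (npages - bias));
		bias = (header_size >> lg_page) +
		    ((header_size & page_mask) != 0);
	}
	return (bias);
}
#endif
/*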
*/ map_bias = 0; for (i = 0; i < 3; i++) { header_size = offsetof(arena_chunk_t, map) + (sizeof(arena_chunk_map_t) * (chunk_npages-map_bias)); map_bias = (header_size >> LG_PAGE) + ((header_size & PAGE_MASK) != 0); } assert(map_bias > 0); arena_maxclass = chunksize - (map_bias << LG_PAGE); bin_info_init(); } void arena_prefork(arena_t *arena) { unsigned i; malloc_mutex_prefork(&arena->lock); for (i = 0; i < NBINS; i++) malloc_mutex_prefork(&arena->bins[i].lock); } void arena_postfork_parent(arena_t *arena) { unsigned i; for (i = 0; i < NBINS; i++) malloc_mutex_postfork_parent(&arena->bins[i].lock); malloc_mutex_postfork_parent(&arena->lock); } void arena_postfork_child(arena_t *arena) { unsigned i; for (i = 0; i < NBINS; i++) malloc_mutex_postfork_child(&arena->bins[i].lock); malloc_mutex_postfork_child(&arena->lock); } vmem-1.8/src/jemalloc/src/atomic.c000066400000000000000000000001141361505074100171100ustar00rootroot00000000000000#define JEMALLOC_ATOMIC_C_ #include "jemalloc/internal/jemalloc_internal.h" vmem-1.8/src/jemalloc/src/base.c000066400000000000000000000063161361505074100165600ustar00rootroot00000000000000#define JEMALLOC_BASE_C_ #include "jemalloc/internal/jemalloc_internal.h" static bool base_pages_alloc(pool_t *pool, size_t minsize) { size_t csize; void* base_pages; assert(minsize != 0); csize = CHUNK_CEILING(minsize); base_pages = chunk_alloc_base(pool, csize); if (base_pages == NULL) return (true); pool->base_next_addr = base_pages; pool->base_past_addr = (void *)((uintptr_t)base_pages + csize); return (false); } void * base_alloc(pool_t *pool, size_t size) { void *ret; size_t csize; /* Round size up to nearest multiple of the cacheline size. */ csize = CACHELINE_CEILING(size); malloc_mutex_lock(&pool->base_mtx); /* Make sure there's enough space for the allocation. */ if ((uintptr_t)pool->base_next_addr + csize > (uintptr_t)pool->base_past_addr) { if (base_pages_alloc(pool, csize)) { malloc_mutex_unlock(&pool->base_mtx); return (NULL); } } /* Allocate. 
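 *
 * This is plain bump allocation: base_next_addr advances by the
 * cacheline-rounded size, new chunks are requested via
 * base_pages_alloc() when the current region runs out, and memory
 * obtained here is never freed individually.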
*/ ret = pool->base_next_addr; pool->base_next_addr = (void *)((uintptr_t)pool->base_next_addr + csize); malloc_mutex_unlock(&pool->base_mtx); JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, csize); return (ret); } void * base_calloc(pool_t *pool, size_t number, size_t size) { void *ret = base_alloc(pool, number * size); if (ret != NULL) memset(ret, 0, number * size); return (ret); } extent_node_t * base_node_alloc(pool_t *pool) { extent_node_t *ret; malloc_mutex_lock(&pool->base_node_mtx); if (pool->base_nodes != NULL) { ret = pool->base_nodes; pool->base_nodes = *(extent_node_t **)ret; JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ret, sizeof(extent_node_t)); } else { /* preallocated nodes for pools other than 0 */ if (pool->pool_id == 0) { ret = (extent_node_t *)base_alloc(pool, sizeof(extent_node_t)); } else { ret = NULL; } } malloc_mutex_unlock(&pool->base_node_mtx); return (ret); } void base_node_dalloc(pool_t *pool, extent_node_t *node) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t)); malloc_mutex_lock(&pool->base_node_mtx); *(extent_node_t **)node = pool->base_nodes; pool->base_nodes = node; malloc_mutex_unlock(&pool->base_node_mtx); } size_t base_node_prealloc(pool_t *pool, size_t number) { extent_node_t *node; malloc_mutex_lock(&pool->base_node_mtx); for (; number > 0; --number) { node = (extent_node_t *)base_alloc(pool, sizeof(extent_node_t)); if (node == NULL) break; JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t)); *(extent_node_t **)node = pool->base_nodes; pool->base_nodes = node; } malloc_mutex_unlock(&pool->base_node_mtx); /* return number of nodes that couldn't be allocated */ return number; } /* * Called at each pool opening. */ bool base_boot(pool_t *pool) { if (malloc_mutex_init(&pool->base_mtx)) return (true); if (malloc_mutex_init(&pool->base_node_mtx)) return (true); return (false); } /* * Called only at pool creation. */ bool base_init(pool_t *pool) { if (base_boot(pool)) return (true); pool->base_nodes = NULL; return (false); } void base_prefork(pool_t *pool) { malloc_mutex_prefork(&pool->base_mtx); } void base_postfork_parent(pool_t *pool) { malloc_mutex_postfork_parent(&pool->base_mtx); } void base_postfork_child(pool_t *pool) { malloc_mutex_postfork_child(&pool->base_mtx); } vmem-1.8/src/jemalloc/src/bitmap.c000066400000000000000000000047241361505074100171230ustar00rootroot00000000000000#define JEMALLOC_BITMAP_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static size_t bits2groups(size_t nbits); /******************************************************************************/ static size_t bits2groups(size_t nbits) { return ((nbits >> LG_BITMAP_GROUP_NBITS) + !!(nbits & BITMAP_GROUP_NBITS_MASK)); } void bitmap_info_init(bitmap_info_t *binfo, size_t nbits) { unsigned i; size_t group_count; assert(nbits > 0); assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS)); /* * Compute the number of groups necessary to store nbits bits, and * progressively work upward through the levels until reaching a level * that requires only one group. 
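 *
 * Worked example, assuming 64-bit groups (LG_BITMAP_GROUP_NBITS == 6):
 * for nbits == 500, level 0 needs bits2groups(500) == 8 groups and
 * level 1 needs bits2groups(8) == 1 group, so the loop stops with
 * nlevels == 2 and levels[2].group_offset == 9, the total group count.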
*/ binfo->levels[0].group_offset = 0; group_count = bits2groups(nbits); for (i = 1; group_count > 1; i++) { assert(i < BITMAP_MAX_LEVELS); binfo->levels[i].group_offset = binfo->levels[i-1].group_offset + group_count; group_count = bits2groups(group_count); } binfo->levels[i].group_offset = binfo->levels[i-1].group_offset + group_count; binfo->nlevels = i; binfo->nbits = nbits; } size_t bitmap_info_ngroups(const bitmap_info_t *binfo) { return (binfo->levels[binfo->nlevels].group_offset << LG_SIZEOF_BITMAP); } size_t bitmap_size(size_t nbits) { bitmap_info_t binfo; bitmap_info_init(&binfo, nbits); return (bitmap_info_ngroups(&binfo)); } void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo) { size_t extra; unsigned i; /* * Bits are actually inverted with regard to the external bitmap * interface, so the bitmap starts out with all 1 bits, except for * trailing unused bits (if any). Note that each group uses bit 0 to * correspond to the first logical bit in the group, so extra bits * are the most significant bits of the last group. */ memset(bitmap, 0xffU, binfo->levels[binfo->nlevels].group_offset << LG_SIZEOF_BITMAP); extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK)) & BITMAP_GROUP_NBITS_MASK; if (extra != 0) bitmap[binfo->levels[1].group_offset - 1] >>= extra; for (i = 1; i < binfo->nlevels; i++) { size_t group_count = binfo->levels[i].group_offset - binfo->levels[i-1].group_offset; extra = (BITMAP_GROUP_NBITS - (group_count & BITMAP_GROUP_NBITS_MASK)) & BITMAP_GROUP_NBITS_MASK; if (extra != 0) bitmap[binfo->levels[i+1].group_offset - 1] >>= extra; } } vmem-1.8/src/jemalloc/src/chunk.c000066400000000000000000000327421361505074100167600ustar00rootroot00000000000000#define JEMALLOC_CHUNK_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ const char *opt_dss = DSS_DEFAULT; size_t opt_lg_chunk = LG_CHUNK_DEFAULT; /* Various chunk-related settings. */ size_t chunksize; size_t chunksize_mask; /* (chunksize - 1). */ size_t chunk_npages; size_t map_bias; size_t arena_maxclass; /* Max size class for arenas. */ /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static void chunk_dalloc_core(pool_t *pool, void *chunk, size_t size); /******************************************************************************/ static void * chunk_recycle(pool_t *pool, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, void *new_addr, size_t size, size_t alignment, bool base, bool *zero) { void *ret; extent_node_t *node; extent_node_t key; size_t alloc_size, leadsize, trailsize; bool zeroed; if (base) { /* * This function may need to call base_node_{,de}alloc(), but * the current chunk allocation request is on behalf of the * base allocator. Avoid deadlock (and if that weren't an * issue, potential for infinite recursion) by returning NULL. */ return (NULL); } alloc_size = size + alignment - chunksize; /* Beware size_t wrap-around. 
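 *
 * Since alignment is at least chunksize here, alloc_size can be
 * smaller than size only if the addition above wrapped around; that is
 * exactly the case rejected by the check below.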
*/ if (alloc_size < size) return (NULL); key.addr = new_addr; key.size = alloc_size; malloc_mutex_lock(&pool->chunks_mtx); node = extent_tree_szad_nsearch(chunks_szad, &key); if (node == NULL || (new_addr && node->addr != new_addr)) { malloc_mutex_unlock(&pool->chunks_mtx); return (NULL); } leadsize = ALIGNMENT_CEILING((uintptr_t)node->addr, alignment) - (uintptr_t)node->addr; assert(node->size >= leadsize + size); trailsize = node->size - leadsize - size; ret = (void *)((uintptr_t)node->addr + leadsize); zeroed = node->zeroed; if (zeroed) *zero = true; /* Remove node from the tree. */ extent_tree_szad_remove(chunks_szad, node); extent_tree_ad_remove(chunks_ad, node); if (leadsize != 0) { /* Insert the leading space as a smaller chunk. */ node->size = leadsize; extent_tree_szad_insert(chunks_szad, node); extent_tree_ad_insert(chunks_ad, node); node = NULL; } if (trailsize != 0) { /* Insert the trailing space as a smaller chunk. */ if (node == NULL) { /* * An additional node is required, but * base_node_alloc() can cause a new base chunk to be * allocated. Drop chunks_mtx in order to avoid * deadlock, and if node allocation fails, deallocate * the result before returning an error. */ malloc_mutex_unlock(&pool->chunks_mtx); node = base_node_alloc(pool); if (node == NULL) { chunk_dalloc_core(pool, ret, size); return (NULL); } malloc_mutex_lock(&pool->chunks_mtx); } node->addr = (void *)((uintptr_t)(ret) + size); node->size = trailsize; node->zeroed = zeroed; extent_tree_szad_insert(chunks_szad, node); extent_tree_ad_insert(chunks_ad, node); node = NULL; } malloc_mutex_unlock(&pool->chunks_mtx); if (node != NULL) base_node_dalloc(pool, node); if (*zero) { if (zeroed == false) memset(ret, 0, size); else if (config_debug) { size_t i; size_t *p = (size_t *)(uintptr_t)ret; JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, size); for (i = 0; i < size / sizeof(size_t); i++) assert(p[i] == 0); } } return (ret); } /* * If the caller specifies (*zero == false), it is still possible to receive * zeroed memory, in which case *zero is toggled to true. arena_chunk_alloc() * takes advantage of this to avoid demanding zeroed chunks, but taking * advantage of them if they are returned. */ static void * chunk_alloc_core(pool_t *pool, void *new_addr, size_t size, size_t alignment, bool base, bool *zero, dss_prec_t dss_prec) { void *ret; assert(size != 0); assert((size & chunksize_mask) == 0); assert(alignment != 0); assert((alignment & chunksize_mask) == 0); /* "primary" dss. */ if (have_dss && dss_prec == dss_prec_primary) { if ((ret = chunk_recycle(pool, &pool->chunks_szad_dss, &pool->chunks_ad_dss, new_addr, size, alignment, base, zero)) != NULL) return (ret); /* requesting an address only implemented for recycle */ if (new_addr == NULL && (ret = chunk_alloc_dss(size, alignment, zero)) != NULL) return (ret); } /* mmap. */ if ((ret = chunk_recycle(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, new_addr, size, alignment, base, zero)) != NULL) return (ret); /* requesting an address only implemented for recycle */ if (new_addr == NULL && (ret = chunk_alloc_mmap(size, alignment, zero)) != NULL) return (ret); /* "secondary" dss. */ if (have_dss && dss_prec == dss_prec_secondary) { if ((ret = chunk_recycle(pool, &pool->chunks_szad_dss, &pool->chunks_ad_dss, new_addr, size, alignment, base, zero)) != NULL) return (ret); /* requesting an address only implemented for recycle */ if (new_addr == NULL && (ret = chunk_alloc_dss(size, alignment, zero)) != NULL) return (ret); } /* All strategies for allocation failed. 
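 *
 * At this point every source has been tried in precedence order:
 * recycled and fresh dss when dss is the primary preference, then
 * recycled and fresh mmap, then dss again as the secondary preference
 * (fresh allocations are skipped when a specific address was
 * requested).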
*/ return (NULL); } static bool chunk_register(pool_t *pool, void *chunk, size_t size, bool base) { assert(chunk != NULL); assert(CHUNK_ADDR2BASE(chunk) == chunk); if (config_ivsalloc && base == false) { if (rtree_set(pool->chunks_rtree, (uintptr_t)chunk, 1)) return (true); } if (config_stats || config_prof) { bool gdump; malloc_mutex_lock(&pool->chunks_mtx); if (config_stats) pool->stats_chunks.nchunks += (size / chunksize); pool->stats_chunks.curchunks += (size / chunksize); if (pool->stats_chunks.curchunks > pool->stats_chunks.highchunks) { pool->stats_chunks.highchunks = pool->stats_chunks.curchunks; if (config_prof) gdump = true; } else if (config_prof) gdump = false; malloc_mutex_unlock(&pool->chunks_mtx); if (config_prof && opt_prof && opt_prof_gdump && gdump) prof_gdump(); } if (config_valgrind) JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(chunk, size); return (false); } void * chunk_alloc_base(pool_t *pool, size_t size) { void *ret; bool zero; zero = false; if (pool->pool_id != 0) { /* Custom pools can only use existing chunks. */ ret = chunk_recycle(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, NULL, size, chunksize, false, &zero); } else { ret = chunk_alloc_core(pool, NULL, size, chunksize, true, &zero, chunk_dss_prec_get()); } if (ret == NULL) return (NULL); if (chunk_register(pool, ret, size, true)) { chunk_dalloc_core(pool, ret, size); return (NULL); } return (ret); } void * chunk_alloc_arena(chunk_alloc_t *chunk_alloc, chunk_dalloc_t *chunk_dalloc, arena_t *arena, void *new_addr, size_t size, size_t alignment, bool *zero) { void *ret; ret = chunk_alloc(new_addr, size, alignment, zero, arena->ind, arena->pool); if (ret != NULL && chunk_register(arena->pool, ret, size, false)) { chunk_dalloc(ret, size, arena->ind, arena->pool); ret = NULL; } return (ret); } /* Default arena chunk allocation routine in the absence of user override. */ void * chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero, unsigned arena_ind, pool_t *pool) { if (pool->pool_id != 0) { /* Custom pools can only use existing chunks. */ return (chunk_recycle(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, new_addr, size, alignment, false, zero)); } else { malloc_rwlock_rdlock(&pool->arenas_lock); dss_prec_t dss_prec = pool->arenas[arena_ind]->dss_prec; malloc_rwlock_unlock(&pool->arenas_lock); return (chunk_alloc_core(pool, new_addr, size, alignment, false, zero, dss_prec)); } } void chunk_record(pool_t *pool, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, void *chunk, size_t size, bool zeroed) { bool unzeroed, file_mapped; extent_node_t *xnode, *node, *prev, *xprev, key; file_mapped = pool_is_file_mapped(pool); unzeroed = pages_purge(chunk, size, file_mapped); JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size); /* * If pages_purge() returned that the pages were zeroed * as a side effect of purging we can safely do this assignment. */ if (zeroed == false && unzeroed == false) { zeroed = true; } /* * Allocate a node before acquiring chunks_mtx even though it might not * be needed, because base_node_alloc() may cause a new base chunk to * be allocated, which could cause deadlock if chunks_mtx were already * held. */ xnode = base_node_alloc(pool); /* Use xprev to implement conditional deferred deallocation of prev. */ xprev = NULL; malloc_mutex_lock(&pool->chunks_mtx); key.addr = (void *)((uintptr_t)chunk + size); node = extent_tree_ad_nsearch(chunks_ad, &key); /* Try to coalesce forward. 
*/ if (node != NULL && node->addr == key.addr) { /* * Coalesce chunk with the following address range. This does * not change the position within chunks_ad, so only * remove/insert from/into chunks_szad. */ extent_tree_szad_remove(chunks_szad, node); node->addr = chunk; node->size += size; node->zeroed = (node->zeroed && zeroed); extent_tree_szad_insert(chunks_szad, node); } else { /* Coalescing forward failed, so insert a new node. */ if (xnode == NULL) { /* * base_node_alloc() failed, which is an exceedingly * unlikely failure. Leak chunk; its pages have * already been purged, so this is only a virtual * memory leak. */ goto label_return; } node = xnode; xnode = NULL; /* Prevent deallocation below. */ node->addr = chunk; node->size = size; node->zeroed = zeroed; extent_tree_ad_insert(chunks_ad, node); extent_tree_szad_insert(chunks_szad, node); } /* Try to coalesce backward. */ prev = extent_tree_ad_prev(chunks_ad, node); if (prev != NULL && (void *)((uintptr_t)prev->addr + prev->size) == chunk) { /* * Coalesce chunk with the previous address range. This does * not change the position within chunks_ad, so only * remove/insert node from/into chunks_szad. */ extent_tree_szad_remove(chunks_szad, prev); extent_tree_ad_remove(chunks_ad, prev); extent_tree_szad_remove(chunks_szad, node); node->addr = prev->addr; node->size += prev->size; node->zeroed = (node->zeroed && prev->zeroed); extent_tree_szad_insert(chunks_szad, node); xprev = prev; } label_return: malloc_mutex_unlock(&pool->chunks_mtx); /* * Deallocate xnode and/or xprev after unlocking chunks_mtx in order to * avoid potential deadlock. */ if (xnode != NULL) base_node_dalloc(pool, xnode); if (xprev != NULL) base_node_dalloc(pool, xprev); } void chunk_unmap(pool_t *pool, void *chunk, size_t size) { assert(chunk != NULL); assert(CHUNK_ADDR2BASE(chunk) == chunk); assert(size != 0); assert((size & chunksize_mask) == 0); if (have_dss && chunk_in_dss(chunk)) chunk_record(pool, &pool->chunks_szad_dss, &pool->chunks_ad_dss, chunk, size, false); else if (chunk_dalloc_mmap(chunk, size)) chunk_record(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, chunk, size, false); } static void chunk_dalloc_core(pool_t *pool, void *chunk, size_t size) { assert(chunk != NULL); assert(CHUNK_ADDR2BASE(chunk) == chunk); assert(size != 0); assert((size & chunksize_mask) == 0); if (config_ivsalloc) rtree_set(pool->chunks_rtree, (uintptr_t)chunk, 0); if (config_stats || config_prof) { malloc_mutex_lock(&pool->chunks_mtx); assert(pool->stats_chunks.curchunks >= (size / chunksize)); pool->stats_chunks.curchunks -= (size / chunksize); malloc_mutex_unlock(&pool->chunks_mtx); } chunk_unmap(pool, chunk, size); } /* Default arena chunk deallocation routine in the absence of user override. */ bool chunk_dalloc_default(void *chunk, size_t size, unsigned arena_ind, pool_t *pool) { chunk_dalloc_core(pool, chunk, size); return (false); } bool chunk_global_boot() { if (have_dss && chunk_dss_boot()) return (true); /* Set variables according to the value of opt_lg_chunk. */ chunksize = (ZU(1) << opt_lg_chunk); assert(chunksize >= PAGE); chunksize_mask = chunksize - 1; chunk_npages = (chunksize >> LG_PAGE); return (false); } /* * Called at each pool opening. */ bool chunk_boot(pool_t *pool) { if (config_stats || config_prof) { if (malloc_mutex_init(&pool->chunks_mtx)) return (true); } if (pool->chunks_rtree) { rtree_t *rtree = pool->chunks_rtree; if (malloc_mutex_init(&rtree->mutex)) return (true); } return (false); } /* * Called only at pool creation. 
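 * (Before chunk_init() itself, an illustrative, self-contained sketch added
 * for this document only; it is not part of the original chunk.c. It spells
 * out the chunk-alignment arithmetic set up by chunk_global_boot() above:
 * chunksize = 1 << opt_lg_chunk and chunksize_mask = chunksize - 1, so an
 * address maps to its chunk base or in-chunk offset with a single mask,
 * the same masking CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET() perform
 * elsewhere in this file. All "example_" names below are hypothetical.)
 */

#include <assert.h>
#include <stdint.h>

/* Round an address down to the base of its chunk (cf. CHUNK_ADDR2BASE()). */
static inline uintptr_t
example_chunk_base(uintptr_t addr, unsigned lg_chunk)
{
	uintptr_t mask = ((uintptr_t)1 << lg_chunk) - 1;

	return (addr & ~mask);
}

/* Offset of an address within its chunk (cf. CHUNK_ADDR2OFFSET()). */
static inline uintptr_t
example_chunk_offset(uintptr_t addr, unsigned lg_chunk)
{
	uintptr_t mask = ((uintptr_t)1 << lg_chunk) - 1;

	return (addr & mask);
}

static inline void
example_chunk_math(void)
{
	/* With 4 MiB chunks (lg_chunk == 22), for instance: */
	assert(example_chunk_base(0x12345678u, 22) == 0x12000000u);
	assert(example_chunk_offset(0x12345678u, 22) == 0x345678u);
	/* A chunk-aligned size has no bits set under the mask. */
	assert(example_chunk_offset((uintptr_t)8 << 22, 22) == 0);
}

/*
 * End of illustrative sketch; chunk_init(), to which the comment above
 * ("Called only at pool creation") refers, follows.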
*/ bool chunk_init(pool_t *pool) { if (chunk_boot(pool)) return (true); if (config_stats || config_prof) memset(&pool->stats_chunks, 0, sizeof(chunk_stats_t)); extent_tree_szad_new(&pool->chunks_szad_mmap); extent_tree_ad_new(&pool->chunks_ad_mmap); extent_tree_szad_new(&pool->chunks_szad_dss); extent_tree_ad_new(&pool->chunks_ad_dss); if (config_ivsalloc) { pool->chunks_rtree = rtree_new((ZU(1) << (LG_SIZEOF_PTR+3)) - opt_lg_chunk, base_alloc, NULL, pool); if (pool->chunks_rtree == NULL) return (true); } return (false); } void chunk_prefork0(pool_t *pool) { if (config_ivsalloc) rtree_prefork(pool->chunks_rtree); } void chunk_prefork1(pool_t *pool) { malloc_mutex_prefork(&pool->chunks_mtx); } void chunk_postfork_parent0(pool_t *pool) { if (config_ivsalloc) rtree_postfork_parent(pool->chunks_rtree); } void chunk_postfork_parent1(pool_t *pool) { malloc_mutex_postfork_parent(&pool->chunks_mtx); } void chunk_postfork_child0(pool_t *pool) { if (config_ivsalloc) rtree_postfork_child(pool->chunks_rtree); } void chunk_postfork_child1(pool_t *pool) { malloc_mutex_postfork_child(&pool->chunks_mtx); } vmem-1.8/src/jemalloc/src/chunk_dss.c000066400000000000000000000102601361505074100176200ustar00rootroot00000000000000#define JEMALLOC_CHUNK_DSS_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ const char *dss_prec_names[] = { "disabled", "primary", "secondary", "N/A" }; /* Current dss precedence default, used when creating new arenas. */ static dss_prec_t dss_prec_default = DSS_PREC_DEFAULT; /* * Protects sbrk() calls. This avoids malloc races among threads, though it * does not protect against races with threads that call sbrk() directly. */ static malloc_mutex_t dss_mtx; /* Base address of the DSS. */ static void *dss_base; /* Current end of the DSS, or ((void *)-1) if the DSS is exhausted. */ static void *dss_prev; /* Current upper limit on DSS addresses. */ static void *dss_max; /******************************************************************************/ static void * chunk_dss_sbrk(intptr_t increment) { #ifdef JEMALLOC_DSS return (sbrk(increment)); #else not_implemented(); return (NULL); #endif } dss_prec_t chunk_dss_prec_get(void) { dss_prec_t ret; if (have_dss == false) return (dss_prec_disabled); malloc_mutex_lock(&dss_mtx); ret = dss_prec_default; malloc_mutex_unlock(&dss_mtx); return (ret); } bool chunk_dss_prec_set(dss_prec_t dss_prec) { if (have_dss == false) return (dss_prec != dss_prec_disabled); malloc_mutex_lock(&dss_mtx); dss_prec_default = dss_prec; malloc_mutex_unlock(&dss_mtx); return (false); } void * chunk_alloc_dss(size_t size, size_t alignment, bool *zero) { void *ret; cassert(have_dss); assert(size > 0 && (size & chunksize_mask) == 0); assert(alignment > 0 && (alignment & chunksize_mask) == 0); /* * sbrk() uses a signed increment argument, so take care not to * interpret a huge allocation request as a negative increment. */ if ((intptr_t)size < 0) return (NULL); malloc_mutex_lock(&dss_mtx); if (dss_prev != (void *)-1) { size_t gap_size, cpad_size; void *cpad, *dss_next; intptr_t incr; /* * The loop is necessary to recover from races with other * threads that are using the DSS for something other than * malloc. */ do { /* Get the current end of the DSS. */ dss_max = chunk_dss_sbrk(0); /* * Calculate how much padding is necessary to * chunk-align the end of the DSS. 
*/ gap_size = (chunksize - CHUNK_ADDR2OFFSET(dss_max)) & chunksize_mask; /* * Compute how much chunk-aligned pad space (if any) is * necessary to satisfy alignment. This space can be * recycled for later use. */ cpad = (void *)((uintptr_t)dss_max + gap_size); ret = (void *)ALIGNMENT_CEILING((uintptr_t)dss_max, alignment); cpad_size = (uintptr_t)ret - (uintptr_t)cpad; dss_next = (void *)((uintptr_t)ret + size); if ((uintptr_t)ret < (uintptr_t)dss_max || (uintptr_t)dss_next < (uintptr_t)dss_max) { /* Wrap-around. */ malloc_mutex_unlock(&dss_mtx); return (NULL); } incr = gap_size + cpad_size + size; dss_prev = chunk_dss_sbrk(incr); if (dss_prev == dss_max) { /* Success. */ dss_max = dss_next; malloc_mutex_unlock(&dss_mtx); if (cpad_size != 0) chunk_unmap(&base_pool, cpad, cpad_size); if (*zero) { JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED( ret, size); memset(ret, 0, size); } return (ret); } } while (dss_prev != (void *)-1); } malloc_mutex_unlock(&dss_mtx); return (NULL); } bool chunk_in_dss(void *chunk) { bool ret; cassert(have_dss); malloc_mutex_lock(&dss_mtx); if ((uintptr_t)chunk >= (uintptr_t)dss_base && (uintptr_t)chunk < (uintptr_t)dss_max) ret = true; else ret = false; malloc_mutex_unlock(&dss_mtx); return (ret); } bool chunk_dss_boot(void) { cassert(have_dss); if (malloc_mutex_init(&dss_mtx)) return (true); dss_base = chunk_dss_sbrk(0); dss_prev = dss_base; dss_max = dss_base; return (false); } void chunk_dss_prefork(void) { if (have_dss) malloc_mutex_prefork(&dss_mtx); } void chunk_dss_postfork_parent(void) { if (have_dss) malloc_mutex_postfork_parent(&dss_mtx); } void chunk_dss_postfork_child(void) { if (have_dss) malloc_mutex_postfork_child(&dss_mtx); } /******************************************************************************/ vmem-1.8/src/jemalloc/src/chunk_mmap.c000066400000000000000000000117261361505074100177710ustar00rootroot00000000000000#define JEMALLOC_CHUNK_MMAP_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static void *pages_map(void *addr, size_t size); static void pages_unmap(void *addr, size_t size); static void *chunk_alloc_mmap_slow(size_t size, size_t alignment, bool *zero); /******************************************************************************/ static void * pages_map(void *addr, size_t size) { void *ret; assert(size != 0); #ifdef _WIN32 /* * If VirtualAlloc can't allocate at the given address when one is * given, it fails and returns NULL. */ ret = VirtualAlloc(addr, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE); #else /* * We don't use MAP_FIXED here, because it can cause the *replacement* * of existing mappings, and we only want to create new mappings. */ ret = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0); assert(ret != NULL); if (ret == MAP_FAILED) ret = NULL; else if (addr != NULL && ret != addr) { /* * We succeeded in mapping memory, but not in the right place. 
*/ if (munmap(ret, size) == -1) { char buf[BUFERROR_BUF]; buferror(get_errno(), buf, sizeof(buf)); malloc_printf(": Error in " #ifdef _WIN32 "VirtualFree" #else "munmap" #endif "(): %s\n", buf); if (opt_abort) abort(); } } static void * pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size) { void *ret = (void *)((uintptr_t)addr + leadsize); assert(alloc_size >= leadsize + size); #ifdef _WIN32 { void *new_addr; pages_unmap(addr, alloc_size); new_addr = pages_map(ret, size); if (new_addr == ret) return (ret); if (new_addr) pages_unmap(new_addr, size); return (NULL); } #else { size_t trailsize = alloc_size - leadsize - size; if (leadsize != 0) pages_unmap(addr, leadsize); if (trailsize != 0) pages_unmap((void *)((uintptr_t)ret + size), trailsize); return (ret); } #endif } bool pages_purge(void *addr, size_t length, bool file_mapped) { bool unzeroed; #ifdef _WIN32 VirtualAlloc(addr, length, MEM_RESET, PAGE_READWRITE); unzeroed = true; #elif defined(JEMALLOC_HAVE_MADVISE) # ifdef JEMALLOC_PURGE_MADVISE_DONTNEED # define JEMALLOC_MADV_PURGE MADV_DONTNEED # define JEMALLOC_MADV_ZEROS true # elif defined(JEMALLOC_PURGE_MADVISE_FREE) # define JEMALLOC_MADV_PURGE MADV_FREE # define JEMALLOC_MADV_ZEROS false # else # error "No madvise(2) flag defined for purging unused dirty pages." # endif int err = madvise(addr, length, JEMALLOC_MADV_PURGE); unzeroed = (JEMALLOC_MADV_ZEROS == false || file_mapped || err != 0); # undef JEMALLOC_MADV_PURGE # undef JEMALLOC_MADV_ZEROS #else /* Last resort no-op. */ unzeroed = true; #endif return (unzeroed); } static void * chunk_alloc_mmap_slow(size_t size, size_t alignment, bool *zero) { void *ret, *pages; size_t alloc_size, leadsize; alloc_size = size + alignment - PAGE; /* Beware size_t wrap-around. */ if (alloc_size < size) return (NULL); do { pages = pages_map(NULL, alloc_size); if (pages == NULL) return (NULL); leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) - (uintptr_t)pages; ret = pages_trim(pages, alloc_size, leadsize, size); } while (ret == NULL); assert(ret != NULL); *zero = true; return (ret); } void * chunk_alloc_mmap(size_t size, size_t alignment, bool *zero) { void *ret; size_t offset; /* * Ideally, there would be a way to specify alignment to mmap() (like * NetBSD has), but in the absence of such a feature, we have to work * hard to efficiently create aligned mappings. The reliable, but * slow method is to create a mapping that is over-sized, then trim the * excess. However, that always results in one or two calls to * pages_unmap(). * * Optimistically try mapping precisely the right amount before falling * back to the slow method, with the expectation that the optimistic * approach works most of the time. */ assert(alignment != 0); assert((alignment & chunksize_mask) == 0); ret = pages_map(NULL, size); if (ret == NULL) return (NULL); offset = ALIGNMENT_ADDR2OFFSET(ret, alignment); if (offset != 0) { pages_unmap(ret, size); return (chunk_alloc_mmap_slow(size, alignment, zero)); } assert(ret != NULL); *zero = true; return (ret); } bool chunk_dalloc_mmap(void *chunk, size_t size) { if (config_munmap) pages_unmap(chunk, size); return (config_munmap == false); } vmem-1.8/src/jemalloc/src/ckh.c000066400000000000000000000331001361505074100164020ustar00rootroot00000000000000/* ******************************************************************************* * Implementation of (2^1+,2) cuckoo hashing, where 2^1+ indicates that each * hash bucket contains 2^n cells, for n >= 1, and 2 indicates that two hash * functions are employed. 
The original cuckoo hashing algorithm was described * in: * * Pagh, R., F.F. Rodler (2004) Cuckoo Hashing. Journal of Algorithms * 51(2):122-144. * * Generalization of cuckoo hashing was discussed in: * * Erlingsson, U., M. Manasse, F. McSherry (2006) A cool and practical * alternative to traditional hash tables. In Proceedings of the 7th * Workshop on Distributed Data and Structures (WDAS'06), Santa Clara, CA, * January 2006. * * This implementation uses precisely two hash functions because that is the * fewest that can work, and supporting multiple hashes is an implementation * burden. Here is a reproduction of Figure 1 from Erlingsson et al. (2006) * that shows approximate expected maximum load factors for various * configurations: * * | #cells/bucket | * #hashes | 1 | 2 | 4 | 8 | * --------+-------+-------+-------+-------+ * 1 | 0.006 | 0.006 | 0.03 | 0.12 | * 2 | 0.49 | 0.86 |>0.93< |>0.96< | * 3 | 0.91 | 0.97 | 0.98 | 0.999 | * 4 | 0.97 | 0.99 | 0.999 | | * * The number of cells per bucket is chosen such that a bucket fits in one cache * line. So, on 32- and 64-bit systems, we use (8,2) and (4,2) cuckoo hashing, * respectively. * ******************************************************************************/ #define JEMALLOC_CKH_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static bool ckh_grow(ckh_t *ckh); static void ckh_shrink(ckh_t *ckh); /******************************************************************************/ /* * Search bucket for key and return the cell number if found; SIZE_T_MAX * otherwise. */ JEMALLOC_INLINE_C size_t ckh_bucket_search(ckh_t *ckh, size_t bucket, const void *key) { ckhc_t *cell; unsigned i; for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) { cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i]; if (cell->key != NULL && ckh->keycomp(key, cell->key)) return ((bucket << LG_CKH_BUCKET_CELLS) + i); } return (SIZE_T_MAX); } /* * Search table for key and return cell number if found; SIZE_T_MAX otherwise. */ JEMALLOC_INLINE_C size_t ckh_isearch(ckh_t *ckh, const void *key) { size_t hashes[2], bucket, cell; assert(ckh != NULL); ckh->hash(key, hashes); /* Search primary bucket. */ bucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1); cell = ckh_bucket_search(ckh, bucket, key); if (cell != SIZE_T_MAX) return (cell); /* Search secondary bucket. */ bucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1); cell = ckh_bucket_search(ckh, bucket, key); return (cell); } JEMALLOC_INLINE_C bool ckh_try_bucket_insert(ckh_t *ckh, size_t bucket, const void *key, const void *data) { ckhc_t *cell; unsigned offset, i; /* * Cycle through the cells in the bucket, starting at a random position. * The randomness avoids worst-case search overhead as buckets fill up. */ prng32(offset, LG_CKH_BUCKET_CELLS, ckh->prng_state, CKH_A, CKH_C); for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) { cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + ((i + offset) & ((ZU(1) << LG_CKH_BUCKET_CELLS) - 1))]; if (cell->key == NULL) { cell->key = key; cell->data = data; ckh->count++; return (false); } } return (true); } /* * No space is available in bucket. Randomly evict an item, then try to find an * alternate location for that item. Iteratively repeat this * eviction/relocation procedure until either success or detection of an * eviction/relocation bucket cycle. 
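 * (An illustrative, self-contained sketch added for this document only; it
 * is not part of the original ckh.c. It shows how a key's two candidate
 * buckets are derived from its two hashes and lg(#buckets), and which of
 * them an evicted item is displaced to, mirroring the tbucket selection in
 * ckh_evict_reloc_insert() below. All "example_" names are hypothetical.)
 */

#include <stddef.h>

/*
 * Return the bucket an item should move to when evicted from
 * current_bucket: the other of its two candidate buckets. Each candidate
 * is one of the item's two hashes masked down to the current table size.
 */
static inline size_t
example_ckh_alternate_bucket(const size_t hashes[2], unsigned lg_buckets,
    size_t current_bucket)
{
	size_t mask = ((size_t)1 << lg_buckets) - 1;
	size_t primary = hashes[0] & mask;
	size_t secondary = hashes[1] & mask;

	/*
	 * Prefer the secondary bucket; if the item already sits there (or
	 * both candidates coincide), fall back to the primary one.
	 */
	return ((secondary == current_bucket) ? primary : secondary);
}

/*
 * End of illustrative sketch; the eviction/relocation routine that the
 * comment above describes follows.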
*/ JEMALLOC_INLINE_C bool ckh_evict_reloc_insert(ckh_t *ckh, size_t argbucket, void const **argkey, void const **argdata) { const void *key, *data, *tkey, *tdata; ckhc_t *cell; size_t hashes[2], bucket, tbucket; unsigned i; bucket = argbucket; key = *argkey; data = *argdata; while (true) { /* * Choose a random item within the bucket to evict. This is * critical to correct function, because without (eventually) * evicting all items within a bucket during iteration, it * would be possible to get stuck in an infinite loop if there * were an item for which both hashes indicated the same * bucket. */ prng32(i, LG_CKH_BUCKET_CELLS, ckh->prng_state, CKH_A, CKH_C); cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i]; assert(cell->key != NULL); /* Swap cell->{key,data} and {key,data} (evict). */ tkey = cell->key; tdata = cell->data; cell->key = key; cell->data = data; key = tkey; data = tdata; #ifdef CKH_COUNT ckh->nrelocs++; #endif /* Find the alternate bucket for the evicted item. */ ckh->hash(key, hashes); tbucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1); if (tbucket == bucket) { tbucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1); /* * It may be that (tbucket == bucket) still, if the * item's hashes both indicate this bucket. However, * we are guaranteed to eventually escape this bucket * during iteration, assuming pseudo-random item * selection (true randomness would make infinite * looping a remote possibility). The reason we can * never get trapped forever is that there are two * cases: * * 1) This bucket == argbucket, so we will quickly * detect an eviction cycle and terminate. * 2) An item was evicted to this bucket from another, * which means that at least one item in this bucket * has hashes that indicate distinct buckets. */ } /* Check for a cycle. */ if (tbucket == argbucket) { *argkey = key; *argdata = data; return (true); } bucket = tbucket; if (ckh_try_bucket_insert(ckh, bucket, key, data) == false) return (false); } } JEMALLOC_INLINE_C bool ckh_try_insert(ckh_t *ckh, void const**argkey, void const**argdata) { size_t hashes[2], bucket; const void *key = *argkey; const void *data = *argdata; ckh->hash(key, hashes); /* Try to insert in primary bucket. */ bucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1); if (ckh_try_bucket_insert(ckh, bucket, key, data) == false) return (false); /* Try to insert in secondary bucket. */ bucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1); if (ckh_try_bucket_insert(ckh, bucket, key, data) == false) return (false); /* * Try to find a place for this item via iterative eviction/relocation. */ return (ckh_evict_reloc_insert(ckh, bucket, argkey, argdata)); } /* * Try to rebuild the hash table from scratch by inserting all items from the * old table into the new. */ JEMALLOC_INLINE_C bool ckh_rebuild(ckh_t *ckh, ckhc_t *aTab) { size_t count, i, nins; const void *key, *data; count = ckh->count; ckh->count = 0; for (i = nins = 0; nins < count; i++) { if (aTab[i].key != NULL) { key = aTab[i].key; data = aTab[i].data; if (ckh_try_insert(ckh, &key, &data)) { ckh->count = count; return (true); } nins++; } } return (false); } static bool ckh_grow(ckh_t *ckh) { bool ret; ckhc_t *tab, *ttab; size_t lg_curcells; unsigned lg_prevbuckets; #ifdef CKH_COUNT ckh->ngrows++; #endif /* * It is possible (though unlikely, given well behaved hashes) that the * table will have to be doubled more than once in order to create a * usable table. 
*/ lg_prevbuckets = ckh->lg_curbuckets; lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS; while (true) { size_t usize; lg_curcells++; usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); if (usize == 0) { ret = true; goto label_return; } tab = (ckhc_t *)ipalloc(usize, CACHELINE, true); if (tab == NULL) { ret = true; goto label_return; } /* Swap in new table. */ ttab = ckh->tab; ckh->tab = tab; tab = ttab; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; if (ckh_rebuild(ckh, tab) == false) { idalloc(tab); break; } /* Rebuilding failed, so back out partially rebuilt table. */ idalloc(ckh->tab); ckh->tab = tab; ckh->lg_curbuckets = lg_prevbuckets; } ret = false; label_return: return (ret); } static void ckh_shrink(ckh_t *ckh) { ckhc_t *tab, *ttab; size_t lg_curcells, usize; unsigned lg_prevbuckets; /* * It is possible (though unlikely, given well behaved hashes) that the * table rebuild will fail. */ lg_prevbuckets = ckh->lg_curbuckets; lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 1; usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); if (usize == 0) return; tab = (ckhc_t *)ipalloc(usize, CACHELINE, true); if (tab == NULL) { /* * An OOM error isn't worth propagating, since it doesn't * prevent this or future operations from proceeding. */ return; } /* Swap in new table. */ ttab = ckh->tab; ckh->tab = tab; tab = ttab; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; if (ckh_rebuild(ckh, tab) == false) { idalloc(tab); #ifdef CKH_COUNT ckh->nshrinks++; #endif return; } /* Rebuilding failed, so back out partially rebuilt table. */ idalloc(ckh->tab); ckh->tab = tab; ckh->lg_curbuckets = lg_prevbuckets; #ifdef CKH_COUNT ckh->nshrinkfails++; #endif } bool ckh_new(ckh_t *ckh, size_t minitems, ckh_hash_t *hash, ckh_keycomp_t *keycomp) { bool ret; size_t mincells, usize; unsigned lg_mincells; assert(minitems > 0); assert(hash != NULL); assert(keycomp != NULL); #ifdef CKH_COUNT ckh->ngrows = 0; ckh->nshrinks = 0; ckh->nshrinkfails = 0; ckh->ninserts = 0; ckh->nrelocs = 0; #endif ckh->prng_state = 42; /* Value doesn't really matter. */ ckh->count = 0; /* * Find the minimum power of 2 that is large enough to fit aBaseCount * entries. We are using (2+,2) cuckoo hashing, which has an expected * maximum load factor of at least ~0.86, so 0.75 is a conservative load * factor that will typically allow 2^aLgMinItems to fit without ever * growing the table. */ assert(LG_CKH_BUCKET_CELLS > 0); mincells = ((minitems + (3 - (minitems % 3))) / 3) << 2; for (lg_mincells = LG_CKH_BUCKET_CELLS; (ZU(1) << lg_mincells) < mincells; lg_mincells++) ; /* Do nothing. 
*/ ckh->lg_minbuckets = lg_mincells - LG_CKH_BUCKET_CELLS; ckh->lg_curbuckets = lg_mincells - LG_CKH_BUCKET_CELLS; ckh->hash = hash; ckh->keycomp = keycomp; usize = sa2u(sizeof(ckhc_t) << lg_mincells, CACHELINE); if (usize == 0) { ret = true; goto label_return; } ckh->tab = (ckhc_t *)ipalloc(usize, CACHELINE, true); if (ckh->tab == NULL) { ret = true; goto label_return; } ret = false; label_return: return (ret); } void ckh_delete(ckh_t *ckh) { assert(ckh != NULL); #ifdef CKH_VERBOSE malloc_printf( "%s(%p): ngrows: %"PRIu64", nshrinks: %"PRIu64"," " nshrinkfails: %"PRIu64", ninserts: %"PRIu64"," " nrelocs: %"PRIu64"\n", __func__, ckh, (unsigned long long)ckh->ngrows, (unsigned long long)ckh->nshrinks, (unsigned long long)ckh->nshrinkfails, (unsigned long long)ckh->ninserts, (unsigned long long)ckh->nrelocs); #endif idalloc(ckh->tab); if (config_debug) memset(ckh, 0x5a, sizeof(ckh_t)); } size_t ckh_count(ckh_t *ckh) { assert(ckh != NULL); return (ckh->count); } bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data) { size_t i, ncells; for (i = *tabind, ncells = (ZU(1) << (ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS)); i < ncells; i++) { if (ckh->tab[i].key != NULL) { if (key != NULL) *key = (void *)ckh->tab[i].key; if (data != NULL) *data = (void *)ckh->tab[i].data; *tabind = i + 1; return (false); } } return (true); } bool ckh_insert(ckh_t *ckh, const void *key, const void *data) { bool ret; assert(ckh != NULL); assert(ckh_search(ckh, key, NULL, NULL)); #ifdef CKH_COUNT ckh->ninserts++; #endif while (ckh_try_insert(ckh, &key, &data)) { if (ckh_grow(ckh)) { ret = true; goto label_return; } } ret = false; label_return: return (ret); } bool ckh_remove(ckh_t *ckh, const void *searchkey, void **key, void **data) { size_t cell; assert(ckh != NULL); cell = ckh_isearch(ckh, searchkey); if (cell != SIZE_T_MAX) { if (key != NULL) *key = (void *)ckh->tab[cell].key; if (data != NULL) *data = (void *)ckh->tab[cell].data; ckh->tab[cell].key = NULL; ckh->tab[cell].data = NULL; /* Not necessary. */ ckh->count--; /* Try to halve the table if it is less than 1/4 full. */ if (ckh->count < (ZU(1) << (ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 2)) && ckh->lg_curbuckets > ckh->lg_minbuckets) { /* Ignore error due to OOM. */ ckh_shrink(ckh); } return (false); } return (true); } bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data) { size_t cell; assert(ckh != NULL); cell = ckh_isearch(ckh, searchkey); if (cell != SIZE_T_MAX) { if (key != NULL) *key = (void *)ckh->tab[cell].key; if (data != NULL) *data = (void *)ckh->tab[cell].data; return (false); } return (true); } void ckh_string_hash(const void *key, size_t r_hash[2]) { hash(key, strlen((const char *)key), 0x94122f33U, r_hash); } bool ckh_string_keycomp(const void *k1, const void *k2) { assert(k1 != NULL); assert(k2 != NULL); return (strcmp((char *)k1, (char *)k2) ? false : true); } void ckh_pointer_hash(const void *key, size_t r_hash[2]) { union { const void *v; size_t i; } u; assert(sizeof(u.v) == sizeof(u.i)); u.v = key; hash(&u.i, sizeof(u.i), 0xd983396eU, r_hash); } bool ckh_pointer_keycomp(const void *k1, const void *k2) { return ((k1 == k2) ? true : false); } vmem-1.8/src/jemalloc/src/ctl.c000066400000000000000000001534361361505074100164360ustar00rootroot00000000000000#define JEMALLOC_CTL_C_ #include "jemalloc/internal/jemalloc_internal.h" #include "jemalloc/internal/pool.h" /******************************************************************************/ /* Data. 
*/ /* * ctl_mtx protects the following: * - ctl_stats.* * - opt_prof_active */ static malloc_mutex_t ctl_mtx; /* XXX separate mutex for each pool? */ static uint64_t ctl_epoch; /******************************************************************************/ /* Helpers for named and indexed nodes. */ static inline const ctl_named_node_t * ctl_named_node(const ctl_node_t *node) { return ((node->named) ? (const ctl_named_node_t *)node : NULL); } static inline const ctl_named_node_t * ctl_named_children(const ctl_named_node_t *node, int index) { const ctl_named_node_t *children = ctl_named_node(node->children); return (children ? &children[index] : NULL); } static inline const ctl_indexed_node_t * ctl_indexed_node(const ctl_node_t *node) { return ((node->named == false) ? (const ctl_indexed_node_t *)node : NULL); } /******************************************************************************/ /* Function prototypes for non-inline static functions. */ #define CTL_PROTO(n) \ static int n##_ctl(const size_t *mib, size_t miblen, void *oldp, \ size_t *oldlenp, void *newp, size_t newlen); #define INDEX_PROTO(n) \ static const ctl_named_node_t *n##_index(const size_t *mib, \ size_t miblen, size_t i); static bool ctl_arena_init(pool_t *pool, ctl_arena_stats_t *astats); static void ctl_arena_clear(ctl_arena_stats_t *astats); static void ctl_arena_stats_amerge(ctl_arena_stats_t *cstats, arena_t *arena); static void ctl_arena_stats_smerge(ctl_arena_stats_t *sstats, ctl_arena_stats_t *astats); static void ctl_arena_refresh(arena_t *arena, unsigned i); static bool ctl_grow(pool_t *pool); static void ctl_refresh_pool(pool_t *pool); static void ctl_refresh(void); static bool ctl_init_pool(pool_t *pool); static bool ctl_init(void); static int ctl_lookup(const char *name, ctl_node_t const **nodesp, size_t *mibp, size_t *depthp); CTL_PROTO(version) CTL_PROTO(epoch) INDEX_PROTO(thread_pool_i) CTL_PROTO(thread_tcache_enabled) CTL_PROTO(thread_tcache_flush) CTL_PROTO(thread_arena) CTL_PROTO(thread_allocated) CTL_PROTO(thread_allocatedp) CTL_PROTO(thread_deallocated) CTL_PROTO(thread_deallocatedp) CTL_PROTO(config_debug) CTL_PROTO(config_fill) CTL_PROTO(config_lazy_lock) CTL_PROTO(config_munmap) CTL_PROTO(config_prof) CTL_PROTO(config_prof_libgcc) CTL_PROTO(config_prof_libunwind) CTL_PROTO(config_stats) CTL_PROTO(config_tcache) CTL_PROTO(config_tls) CTL_PROTO(config_utrace) CTL_PROTO(config_valgrind) CTL_PROTO(config_xmalloc) CTL_PROTO(opt_abort) CTL_PROTO(opt_dss) CTL_PROTO(opt_lg_chunk) CTL_PROTO(opt_narenas) CTL_PROTO(opt_lg_dirty_mult) CTL_PROTO(opt_stats_print) CTL_PROTO(opt_junk) CTL_PROTO(opt_zero) CTL_PROTO(opt_quarantine) CTL_PROTO(opt_redzone) CTL_PROTO(opt_utrace) CTL_PROTO(opt_xmalloc) CTL_PROTO(opt_tcache) CTL_PROTO(opt_lg_tcache_max) CTL_PROTO(opt_prof) CTL_PROTO(opt_prof_prefix) CTL_PROTO(opt_prof_active) CTL_PROTO(opt_lg_prof_sample) CTL_PROTO(opt_lg_prof_interval) CTL_PROTO(opt_prof_gdump) CTL_PROTO(opt_prof_final) CTL_PROTO(opt_prof_leak) CTL_PROTO(opt_prof_accum) CTL_PROTO(arena_i_purge) static void arena_purge(pool_t *pool, unsigned arena_ind); CTL_PROTO(arena_i_dss) CTL_PROTO(arena_i_chunk_alloc) CTL_PROTO(arena_i_chunk_dalloc) INDEX_PROTO(arena_i) CTL_PROTO(arenas_bin_i_size) CTL_PROTO(arenas_bin_i_nregs) CTL_PROTO(arenas_bin_i_run_size) INDEX_PROTO(arenas_bin_i) CTL_PROTO(arenas_lrun_i_size) INDEX_PROTO(arenas_lrun_i) CTL_PROTO(arenas_narenas) CTL_PROTO(arenas_initialized) CTL_PROTO(arenas_quantum) CTL_PROTO(arenas_page) CTL_PROTO(arenas_tcache_max) CTL_PROTO(arenas_nbins) 
CTL_PROTO(arenas_nhbins) CTL_PROTO(arenas_nlruns) CTL_PROTO(arenas_extend) CTL_PROTO(prof_active) CTL_PROTO(prof_dump) CTL_PROTO(prof_interval) CTL_PROTO(stats_chunks_current) CTL_PROTO(stats_chunks_total) CTL_PROTO(stats_chunks_high) CTL_PROTO(stats_arenas_i_small_allocated) CTL_PROTO(stats_arenas_i_small_nmalloc) CTL_PROTO(stats_arenas_i_small_ndalloc) CTL_PROTO(stats_arenas_i_small_nrequests) CTL_PROTO(stats_arenas_i_large_allocated) CTL_PROTO(stats_arenas_i_large_nmalloc) CTL_PROTO(stats_arenas_i_large_ndalloc) CTL_PROTO(stats_arenas_i_large_nrequests) CTL_PROTO(stats_arenas_i_huge_allocated) CTL_PROTO(stats_arenas_i_huge_nmalloc) CTL_PROTO(stats_arenas_i_huge_ndalloc) CTL_PROTO(stats_arenas_i_huge_nrequests) CTL_PROTO(stats_arenas_i_bins_j_allocated) CTL_PROTO(stats_arenas_i_bins_j_nmalloc) CTL_PROTO(stats_arenas_i_bins_j_ndalloc) CTL_PROTO(stats_arenas_i_bins_j_nrequests) CTL_PROTO(stats_arenas_i_bins_j_nfills) CTL_PROTO(stats_arenas_i_bins_j_nflushes) CTL_PROTO(stats_arenas_i_bins_j_nruns) CTL_PROTO(stats_arenas_i_bins_j_nreruns) CTL_PROTO(stats_arenas_i_bins_j_curruns) INDEX_PROTO(stats_arenas_i_bins_j) CTL_PROTO(stats_arenas_i_lruns_j_nmalloc) CTL_PROTO(stats_arenas_i_lruns_j_ndalloc) CTL_PROTO(stats_arenas_i_lruns_j_nrequests) CTL_PROTO(stats_arenas_i_lruns_j_curruns) INDEX_PROTO(stats_arenas_i_lruns_j) CTL_PROTO(stats_arenas_i_nthreads) CTL_PROTO(stats_arenas_i_dss) CTL_PROTO(stats_arenas_i_pactive) CTL_PROTO(stats_arenas_i_pdirty) CTL_PROTO(stats_arenas_i_mapped) CTL_PROTO(stats_arenas_i_npurge) CTL_PROTO(stats_arenas_i_nmadvise) CTL_PROTO(stats_arenas_i_purged) INDEX_PROTO(stats_arenas_i) CTL_PROTO(stats_cactive) CTL_PROTO(stats_allocated) CTL_PROTO(stats_active) CTL_PROTO(stats_mapped) INDEX_PROTO(pool_i) CTL_PROTO(pools_npools) CTL_PROTO(pool_i_base) CTL_PROTO(pool_i_size) /******************************************************************************/ /* mallctl tree. */ /* Maximum tree depth. */ #define CTL_MAX_DEPTH 8 #define NAME(n) {true}, n #define CHILD(t, c) \ sizeof(c##_node) / sizeof(ctl_##t##_node_t), \ (ctl_node_t *)c##_node, \ NULL #define CTL(c) 0, NULL, c##_ctl /* * Only handles internal indexed nodes, since there are currently no external * ones. 
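 * (Before the INDEX() macro itself, an illustrative, self-contained sketch
 * added for this document only; it is not part of the original ctl.c. It
 * shows, on a toy two-entry table, the name-matching walk that
 * ctl_lookup(), defined later in this file, performs over the real
 * NAME()/CTL() node tables below, one dotted-name element at a time. All
 * "example_" names are hypothetical.)
 */

#include <stddef.h>
#include <string.h>

typedef int (example_handler_t)(void *oldp, size_t *oldlenp);

typedef struct {
	const char		*name;
	example_handler_t	*ctl;
} example_named_node_t;

static int
example_version_ctl(void *oldp, size_t *oldlenp)
{
	(void)oldp;
	(void)oldlenp;
	return (0);
}

static const example_named_node_t example_root[] = {
	{"version",	example_version_ctl},
	{"epoch",	NULL}
};

/*
 * Find the child of example_root whose name matches the element
 * elm[0..elen); ctl_lookup() repeats this comparison for every element of
 * a dotted name such as "opt.abort" as it descends the tree.
 */
static inline const example_named_node_t *
example_lookup(const char *elm, size_t elen)
{
	size_t j;

	for (j = 0; j < sizeof(example_root) / sizeof(example_root[0]); j++) {
		if (strlen(example_root[j].name) == elen &&
		    strncmp(elm, example_root[j].name, elen) == 0)
			return (&example_root[j]);
	}
	return (NULL);
}

/*
 * End of illustrative sketch; the INDEX() macro documented above and the
 * real node tables follow.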
*/ #define INDEX(i) {false}, i##_index static const ctl_named_node_t tcache_node[] = { {NAME("enabled"), CTL(thread_tcache_enabled)}, {NAME("flush"), CTL(thread_tcache_flush)} }; static const ctl_named_node_t thread_pool_i_node[] = { {NAME("arena"), CTL(thread_arena)}, }; static const ctl_named_node_t super_thread_pool_i_node[] = { {NAME(""), CHILD(named, thread_pool_i)} }; static const ctl_indexed_node_t thread_pool_node[] = { {INDEX(thread_pool_i)} }; static const ctl_named_node_t thread_node[] = { {NAME("pool"), CHILD(indexed, thread_pool)}, {NAME("allocated"), CTL(thread_allocated)}, {NAME("allocatedp"), CTL(thread_allocatedp)}, {NAME("deallocated"), CTL(thread_deallocated)}, {NAME("deallocatedp"), CTL(thread_deallocatedp)}, {NAME("tcache"), CHILD(named, tcache)} }; static const ctl_named_node_t config_node[] = { {NAME("debug"), CTL(config_debug)}, {NAME("fill"), CTL(config_fill)}, {NAME("lazy_lock"), CTL(config_lazy_lock)}, {NAME("munmap"), CTL(config_munmap)}, {NAME("prof"), CTL(config_prof)}, {NAME("prof_libgcc"), CTL(config_prof_libgcc)}, {NAME("prof_libunwind"), CTL(config_prof_libunwind)}, {NAME("stats"), CTL(config_stats)}, {NAME("tcache"), CTL(config_tcache)}, {NAME("tls"), CTL(config_tls)}, {NAME("utrace"), CTL(config_utrace)}, {NAME("valgrind"), CTL(config_valgrind)}, {NAME("xmalloc"), CTL(config_xmalloc)} }; static const ctl_named_node_t opt_node[] = { {NAME("abort"), CTL(opt_abort)}, {NAME("dss"), CTL(opt_dss)}, {NAME("lg_chunk"), CTL(opt_lg_chunk)}, {NAME("narenas"), CTL(opt_narenas)}, {NAME("lg_dirty_mult"), CTL(opt_lg_dirty_mult)}, {NAME("stats_print"), CTL(opt_stats_print)}, {NAME("junk"), CTL(opt_junk)}, {NAME("zero"), CTL(opt_zero)}, {NAME("quarantine"), CTL(opt_quarantine)}, {NAME("redzone"), CTL(opt_redzone)}, {NAME("utrace"), CTL(opt_utrace)}, {NAME("xmalloc"), CTL(opt_xmalloc)}, {NAME("tcache"), CTL(opt_tcache)}, {NAME("lg_tcache_max"), CTL(opt_lg_tcache_max)}, {NAME("prof"), CTL(opt_prof)}, {NAME("prof_prefix"), CTL(opt_prof_prefix)}, {NAME("prof_active"), CTL(opt_prof_active)}, {NAME("lg_prof_sample"), CTL(opt_lg_prof_sample)}, {NAME("lg_prof_interval"), CTL(opt_lg_prof_interval)}, {NAME("prof_gdump"), CTL(opt_prof_gdump)}, {NAME("prof_final"), CTL(opt_prof_final)}, {NAME("prof_leak"), CTL(opt_prof_leak)}, {NAME("prof_accum"), CTL(opt_prof_accum)} }; static const ctl_named_node_t chunk_node[] = { {NAME("alloc"), CTL(arena_i_chunk_alloc)}, {NAME("dalloc"), CTL(arena_i_chunk_dalloc)} }; static const ctl_named_node_t arena_i_node[] = { {NAME("purge"), CTL(arena_i_purge)}, {NAME("dss"), CTL(arena_i_dss)}, {NAME("chunk"), CHILD(named, chunk)}, }; static const ctl_named_node_t super_arena_i_node[] = { {NAME(""), CHILD(named, arena_i)} }; static const ctl_indexed_node_t arena_node[] = { {INDEX(arena_i)} }; static const ctl_named_node_t arenas_bin_i_node[] = { {NAME("size"), CTL(arenas_bin_i_size)}, {NAME("nregs"), CTL(arenas_bin_i_nregs)}, {NAME("run_size"), CTL(arenas_bin_i_run_size)} }; static const ctl_named_node_t super_arenas_bin_i_node[] = { {NAME(""), CHILD(named, arenas_bin_i)} }; static const ctl_indexed_node_t arenas_bin_node[] = { {INDEX(arenas_bin_i)} }; static const ctl_named_node_t arenas_lrun_i_node[] = { {NAME("size"), CTL(arenas_lrun_i_size)} }; static const ctl_named_node_t super_arenas_lrun_i_node[] = { {NAME(""), CHILD(named, arenas_lrun_i)} }; static const ctl_indexed_node_t arenas_lrun_node[] = { {INDEX(arenas_lrun_i)} }; static const ctl_named_node_t arenas_node[] = { {NAME("narenas"), CTL(arenas_narenas)}, {NAME("initialized"), 
CTL(arenas_initialized)}, {NAME("quantum"), CTL(arenas_quantum)}, {NAME("page"), CTL(arenas_page)}, {NAME("tcache_max"), CTL(arenas_tcache_max)}, {NAME("nbins"), CTL(arenas_nbins)}, {NAME("nhbins"), CTL(arenas_nhbins)}, {NAME("bin"), CHILD(indexed, arenas_bin)}, {NAME("nlruns"), CTL(arenas_nlruns)}, {NAME("lrun"), CHILD(indexed, arenas_lrun)}, {NAME("extend"), CTL(arenas_extend)} }; static const ctl_named_node_t prof_node[] = { {NAME("active"), CTL(prof_active)}, {NAME("dump"), CTL(prof_dump)}, {NAME("interval"), CTL(prof_interval)} }; static const ctl_named_node_t stats_chunks_node[] = { {NAME("current"), CTL(stats_chunks_current)}, {NAME("total"), CTL(stats_chunks_total)}, {NAME("high"), CTL(stats_chunks_high)} }; static const ctl_named_node_t stats_arenas_i_small_node[] = { {NAME("allocated"), CTL(stats_arenas_i_small_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_small_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_small_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_small_nrequests)} }; static const ctl_named_node_t stats_arenas_i_large_node[] = { {NAME("allocated"), CTL(stats_arenas_i_large_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_large_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_large_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_large_nrequests)} }; static const ctl_named_node_t stats_arenas_i_huge_node[] = { {NAME("allocated"), CTL(stats_arenas_i_huge_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_huge_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_huge_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_huge_nrequests)}, }; static const ctl_named_node_t stats_arenas_i_bins_j_node[] = { {NAME("allocated"), CTL(stats_arenas_i_bins_j_allocated)}, {NAME("nmalloc"), CTL(stats_arenas_i_bins_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_bins_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_bins_j_nrequests)}, {NAME("nfills"), CTL(stats_arenas_i_bins_j_nfills)}, {NAME("nflushes"), CTL(stats_arenas_i_bins_j_nflushes)}, {NAME("nruns"), CTL(stats_arenas_i_bins_j_nruns)}, {NAME("nreruns"), CTL(stats_arenas_i_bins_j_nreruns)}, {NAME("curruns"), CTL(stats_arenas_i_bins_j_curruns)} }; static const ctl_named_node_t super_stats_arenas_i_bins_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_bins_j)} }; static const ctl_indexed_node_t stats_arenas_i_bins_node[] = { {INDEX(stats_arenas_i_bins_j)} }; static const ctl_named_node_t stats_arenas_i_lruns_j_node[] = { {NAME("nmalloc"), CTL(stats_arenas_i_lruns_j_nmalloc)}, {NAME("ndalloc"), CTL(stats_arenas_i_lruns_j_ndalloc)}, {NAME("nrequests"), CTL(stats_arenas_i_lruns_j_nrequests)}, {NAME("curruns"), CTL(stats_arenas_i_lruns_j_curruns)} }; static const ctl_named_node_t super_stats_arenas_i_lruns_j_node[] = { {NAME(""), CHILD(named, stats_arenas_i_lruns_j)} }; static const ctl_indexed_node_t stats_arenas_i_lruns_node[] = { {INDEX(stats_arenas_i_lruns_j)} }; static const ctl_named_node_t stats_arenas_i_node[] = { {NAME("nthreads"), CTL(stats_arenas_i_nthreads)}, {NAME("dss"), CTL(stats_arenas_i_dss)}, {NAME("pactive"), CTL(stats_arenas_i_pactive)}, {NAME("pdirty"), CTL(stats_arenas_i_pdirty)}, {NAME("mapped"), CTL(stats_arenas_i_mapped)}, {NAME("npurge"), CTL(stats_arenas_i_npurge)}, {NAME("nmadvise"), CTL(stats_arenas_i_nmadvise)}, {NAME("purged"), CTL(stats_arenas_i_purged)}, {NAME("small"), CHILD(named, stats_arenas_i_small)}, {NAME("large"), CHILD(named, stats_arenas_i_large)}, {NAME("huge"), CHILD(named, stats_arenas_i_huge)}, {NAME("bins"), CHILD(indexed, stats_arenas_i_bins)}, {NAME("lruns"), CHILD(indexed, 
stats_arenas_i_lruns)} }; static const ctl_named_node_t super_stats_arenas_i_node[] = { {NAME(""), CHILD(named, stats_arenas_i)} }; static const ctl_indexed_node_t stats_arenas_node[] = { {INDEX(stats_arenas_i)} }; static const ctl_named_node_t pool_stats_node[] = { {NAME("chunks"), CHILD(named, stats_chunks)}, {NAME("arenas"), CHILD(indexed, stats_arenas)}, {NAME("cactive"), CTL(stats_cactive)}, {NAME("allocated"), CTL(stats_allocated)}, {NAME("active"), CTL(stats_active)}, {NAME("mapped"), CTL(stats_mapped)} }; static const ctl_named_node_t pools_node[] = { {NAME("npools"), CTL(pools_npools)}, }; static const ctl_named_node_t pool_i_node[] = { {NAME("mem_base"), CTL(pool_i_base)}, {NAME("mem_size"), CTL(pool_i_size)}, {NAME("arena"), CHILD(indexed, arena)}, {NAME("arenas"), CHILD(named, arenas)}, {NAME("stats"), CHILD(named, pool_stats)} }; static const ctl_named_node_t super_pool_i_node[] = { {NAME(""), CHILD(named, pool_i)} }; static const ctl_indexed_node_t pool_node[] = { {INDEX(pool_i)} }; static const ctl_named_node_t root_node[] = { {NAME("version"), CTL(version)}, {NAME("epoch"), CTL(epoch)}, {NAME("thread"), CHILD(named, thread)}, {NAME("config"), CHILD(named, config)}, {NAME("opt"), CHILD(named, opt)}, {NAME("pool"), CHILD(indexed, pool)}, {NAME("pools"), CHILD(named, pools)}, {NAME("prof"), CHILD(named, prof)} }; static const ctl_named_node_t super_root_node[] = { {NAME(""), CHILD(named, root)} }; #undef NAME #undef CHILD #undef CTL #undef INDEX /******************************************************************************/ static bool ctl_arena_init(pool_t *pool, ctl_arena_stats_t *astats) { if (astats->lstats == NULL) { astats->lstats = (malloc_large_stats_t *)base_alloc(pool, nlclasses * sizeof(malloc_large_stats_t)); if (astats->lstats == NULL) return (true); } return (false); } static void ctl_arena_clear(ctl_arena_stats_t *astats) { astats->dss = dss_prec_names[dss_prec_limit]; astats->pactive = 0; astats->pdirty = 0; if (config_stats) { memset(&astats->astats, 0, sizeof(arena_stats_t)); astats->allocated_small = 0; astats->nmalloc_small = 0; astats->ndalloc_small = 0; astats->nrequests_small = 0; memset(astats->bstats, 0, NBINS * sizeof(malloc_bin_stats_t)); memset(astats->lstats, 0, nlclasses * sizeof(malloc_large_stats_t)); } } static void ctl_arena_stats_amerge(ctl_arena_stats_t *cstats, arena_t *arena) { unsigned i; arena_stats_merge(arena, &cstats->dss, &cstats->pactive, &cstats->pdirty, &cstats->astats, cstats->bstats, cstats->lstats); for (i = 0; i < NBINS; i++) { cstats->allocated_small += cstats->bstats[i].allocated; cstats->nmalloc_small += cstats->bstats[i].nmalloc; cstats->ndalloc_small += cstats->bstats[i].ndalloc; cstats->nrequests_small += cstats->bstats[i].nrequests; } } static void ctl_arena_stats_smerge(ctl_arena_stats_t *sstats, ctl_arena_stats_t *astats) { unsigned i; sstats->pactive += astats->pactive; sstats->pdirty += astats->pdirty; sstats->astats.mapped += astats->astats.mapped; sstats->astats.npurge += astats->astats.npurge; sstats->astats.nmadvise += astats->astats.nmadvise; sstats->astats.purged += astats->astats.purged; sstats->allocated_small += astats->allocated_small; sstats->nmalloc_small += astats->nmalloc_small; sstats->ndalloc_small += astats->ndalloc_small; sstats->nrequests_small += astats->nrequests_small; sstats->astats.allocated_large += astats->astats.allocated_large; sstats->astats.nmalloc_large += astats->astats.nmalloc_large; sstats->astats.ndalloc_large += astats->astats.ndalloc_large; sstats->astats.nrequests_large += 
astats->astats.nrequests_large; sstats->astats.allocated_huge += astats->astats.allocated_huge; sstats->astats.nmalloc_huge += astats->astats.nmalloc_huge; sstats->astats.ndalloc_huge += astats->astats.ndalloc_huge; sstats->astats.nrequests_huge += astats->astats.nrequests_huge; for (i = 0; i < nlclasses; i++) { sstats->lstats[i].nmalloc += astats->lstats[i].nmalloc; sstats->lstats[i].ndalloc += astats->lstats[i].ndalloc; sstats->lstats[i].nrequests += astats->lstats[i].nrequests; sstats->lstats[i].curruns += astats->lstats[i].curruns; } for (i = 0; i < NBINS; i++) { sstats->bstats[i].allocated += astats->bstats[i].allocated; sstats->bstats[i].nmalloc += astats->bstats[i].nmalloc; sstats->bstats[i].ndalloc += astats->bstats[i].ndalloc; sstats->bstats[i].nrequests += astats->bstats[i].nrequests; if (config_tcache) { sstats->bstats[i].nfills += astats->bstats[i].nfills; sstats->bstats[i].nflushes += astats->bstats[i].nflushes; } sstats->bstats[i].nruns += astats->bstats[i].nruns; sstats->bstats[i].reruns += astats->bstats[i].reruns; sstats->bstats[i].curruns += astats->bstats[i].curruns; } } static void ctl_arena_refresh(arena_t *arena, unsigned i) { pool_t *pool = arena->pool; ctl_arena_stats_t *astats = &pool->ctl_stats.arenas[i]; ctl_arena_stats_t *sstats = &pool->ctl_stats.arenas[pool->ctl_stats.narenas]; ctl_arena_clear(astats); sstats->nthreads += astats->nthreads; if (config_stats) { ctl_arena_stats_amerge(astats, arena); /* Merge into sum stats as well. */ ctl_arena_stats_smerge(sstats, astats); } else { astats->pactive += arena->nactive; astats->pdirty += arena->ndirty; /* Merge into sum stats as well. */ sstats->pactive += arena->nactive; sstats->pdirty += arena->ndirty; } } static bool ctl_grow(pool_t *pool) { ctl_arena_stats_t *astats; arena_t **tarenas; /* Allocate extended arena stats and arenas arrays. */ astats = (ctl_arena_stats_t *)imalloc((pool->ctl_stats.narenas + 2) * sizeof(ctl_arena_stats_t)); if (astats == NULL) return (true); tarenas = (arena_t **)imalloc((pool->ctl_stats.narenas + 1) * sizeof(arena_t *)); if (tarenas == NULL) { idalloc(astats); return (true); } /* Initialize the new astats element. */ memcpy(astats, pool->ctl_stats.arenas, (pool->ctl_stats.narenas + 1) * sizeof(ctl_arena_stats_t)); memset(&astats[pool->ctl_stats.narenas + 1], 0, sizeof(ctl_arena_stats_t)); if (ctl_arena_init(pool, &astats[pool->ctl_stats.narenas + 1])) { idalloc(tarenas); idalloc(astats); return (true); } /* Swap merged stats to their new location. */ { ctl_arena_stats_t tstats; memcpy(&tstats, &astats[pool->ctl_stats.narenas], sizeof(ctl_arena_stats_t)); memcpy(&astats[pool->ctl_stats.narenas], &astats[pool->ctl_stats.narenas + 1], sizeof(ctl_arena_stats_t)); memcpy(&astats[pool->ctl_stats.narenas + 1], &tstats, sizeof(ctl_arena_stats_t)); } /* Initialize the new arenas element. */ tarenas[pool->ctl_stats.narenas] = NULL; { arena_t **arenas_old = pool->arenas; /* * Swap extended arenas array into place. Although ctl_mtx * protects this function from other threads extending the * array, it does not protect from other threads mutating it * (i.e. initializing arenas and setting array elements to * point to them). Therefore, array copying must happen under * the protection of arenas_lock. 
*/ malloc_rwlock_wrlock(&pool->arenas_lock); pool->arenas = tarenas; memcpy(pool->arenas, arenas_old, pool->ctl_stats.narenas * sizeof(arena_t *)); pool->narenas_total++; arenas_extend(pool, pool->narenas_total - 1); malloc_rwlock_unlock(&pool->arenas_lock); /* * Deallocate arenas_old only if it came from imalloc() (not * base_alloc()). */ if (pool->ctl_stats.narenas != pool->narenas_auto) idalloc(arenas_old); } pool->ctl_stats.arenas = astats; pool->ctl_stats.narenas++; return (false); } static void ctl_refresh_pool(pool_t *pool) { unsigned i; VARIABLE_ARRAY(arena_t *, tarenas, pool->ctl_stats.narenas); if (config_stats) { malloc_mutex_lock(&pool->chunks_mtx); pool->ctl_stats.chunks.current = pool->stats_chunks.curchunks; pool->ctl_stats.chunks.total = pool->stats_chunks.nchunks; pool->ctl_stats.chunks.high = pool->stats_chunks.highchunks; malloc_mutex_unlock(&pool->chunks_mtx); } /* * Clear sum stats, since they will be merged into by * ctl_arena_refresh(). */ pool->ctl_stats.arenas[pool->ctl_stats.narenas].nthreads = 0; ctl_arena_clear(&pool->ctl_stats.arenas[pool->ctl_stats.narenas]); malloc_rwlock_wrlock(&pool->arenas_lock); memcpy(tarenas, pool->arenas, sizeof(arena_t *) * pool->ctl_stats.narenas); for (i = 0; i < pool->ctl_stats.narenas; i++) { if (pool->arenas[i] != NULL) pool->ctl_stats.arenas[i].nthreads = pool->arenas[i]->nthreads; else pool->ctl_stats.arenas[i].nthreads = 0; } malloc_rwlock_unlock(&pool->arenas_lock); for (i = 0; i < pool->ctl_stats.narenas; i++) { bool initialized = (tarenas[i] != NULL); pool->ctl_stats.arenas[i].initialized = initialized; if (initialized) ctl_arena_refresh(tarenas[i], i); } if (config_stats) { pool->ctl_stats_allocated = pool->ctl_stats.arenas[pool->ctl_stats.narenas].allocated_small + pool->ctl_stats.arenas[pool->ctl_stats.narenas].astats.allocated_large + pool->ctl_stats.arenas[pool->ctl_stats.narenas].astats.allocated_huge; pool->ctl_stats_active = (pool->ctl_stats.arenas[pool->ctl_stats.narenas].pactive << LG_PAGE); pool->ctl_stats_mapped = (pool->ctl_stats.chunks.current << opt_lg_chunk); } ctl_epoch++; } static void ctl_refresh(void) { for (size_t i = 0; i < npools; ++i) { if (pools[i] != NULL) { ctl_refresh_pool(pools[i]); } } } static bool ctl_init_pool(pool_t *pool) { bool ret; /* * Allocate space for one extra arena stats element, which * contains summed stats across all arenas. */ assert(pool->narenas_auto == narenas_total_get(pool)); pool->ctl_stats.narenas = pool->narenas_auto; pool->ctl_stats.arenas = (ctl_arena_stats_t *)base_alloc(pool, (pool->ctl_stats.narenas + 1) * sizeof(ctl_arena_stats_t)); if (pool->ctl_stats.arenas == NULL) { ret = true; goto label_return; } memset(pool->ctl_stats.arenas, 0, (pool->ctl_stats.narenas + 1) * sizeof(ctl_arena_stats_t)); /* * Initialize all stats structures, regardless of whether they * ever get used. Lazy initialization would allow errors to * cause inconsistent state to be viewable by the application. 
*/ if (config_stats) { unsigned i; for (i = 0; i <= pool->ctl_stats.narenas; i++) { if (ctl_arena_init(pool, &pool->ctl_stats.arenas[i])) { ret = true; goto label_return; } } } pool->ctl_stats.arenas[pool->ctl_stats.narenas].initialized = true; ctl_epoch = 0; ctl_refresh_pool(pool); pool->ctl_initialized = true; ret = false; label_return: return (ret); } static bool ctl_init(void) { bool ret; malloc_mutex_lock(&ctl_mtx); for (size_t i = 0; i < npools; ++i) { if (pools[i] != NULL && pools[i]->ctl_initialized == false) { if (ctl_init_pool(pools[i])) { ret = true; goto label_return; } } } /* false means that functions ends with success */ ret = false; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static int ctl_lookup(const char *name, ctl_node_t const **nodesp, size_t *mibp, size_t *depthp) { int ret; const char *elm, *tdot, *dot; size_t elen, i, j; const ctl_named_node_t *node; elm = name; /* Equivalent to strchrnul(). */ dot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\0'); elen = (size_t)((uintptr_t)dot - (uintptr_t)elm); if (elen == 0) { ret = ENOENT; goto label_return; } node = super_root_node; for (i = 0; i < *depthp; i++) { assert(node); assert(node->nchildren > 0); if (ctl_named_node(node->children) != NULL) { const ctl_named_node_t *pnode = node; /* Children are named. */ for (j = 0; j < node->nchildren; j++) { const ctl_named_node_t *child = ctl_named_children(node, j); if (strlen(child->name) == elen && strncmp(elm, child->name, elen) == 0) { node = child; if (nodesp != NULL) nodesp[i] = (const ctl_node_t *)node; mibp[i] = j; break; } } if (node == pnode) { ret = ENOENT; goto label_return; } } else { uintmax_t index; const ctl_indexed_node_t *inode; /* Children are indexed. */ index = malloc_strtoumax(elm, NULL, 10); if (index == UINTMAX_MAX || index > SIZE_T_MAX) { ret = ENOENT; goto label_return; } inode = ctl_indexed_node(node->children); node = inode->index(mibp, *depthp, (size_t)index); if (node == NULL) { ret = ENOENT; goto label_return; } if (nodesp != NULL) nodesp[i] = (const ctl_node_t *)node; mibp[i] = (size_t)index; } if (node->ctl != NULL) { /* Terminal node. */ if (*dot != '\0') { /* * The name contains more elements than are * in this path through the tree. */ ret = ENOENT; goto label_return; } /* Complete lookup successful. */ *depthp = i + 1; break; } /* Update elm. */ if (*dot == '\0') { /* No more elements. */ ret = ENOENT; goto label_return; } elm = &dot[1]; dot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\0'); elen = (size_t)((uintptr_t)dot - (uintptr_t)elm); } ret = 0; label_return: return (ret); } int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; size_t depth; ctl_node_t const *nodes[CTL_MAX_DEPTH]; size_t mib[CTL_MAX_DEPTH]; const ctl_named_node_t *node; if (ctl_init()) { ret = EAGAIN; goto label_return; } depth = CTL_MAX_DEPTH; ret = ctl_lookup(name, nodes, mib, &depth); if (ret != 0) goto label_return; node = ctl_named_node(nodes[depth-1]); if (node != NULL && node->ctl) ret = node->ctl(mib, depth, oldp, oldlenp, newp, newlen); else { /* The name refers to a partial path through the ctl tree. 
*/ ret = ENOENT; } label_return: return(ret); } int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp) { int ret; if (ctl_init()) { ret = EAGAIN; goto label_return; } ret = ctl_lookup(name, NULL, mibp, miblenp); label_return: return(ret); } int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; const ctl_named_node_t *node; size_t i; if (ctl_init()) { ret = EAGAIN; goto label_return; } /* Iterate down the tree. */ node = super_root_node; for (i = 0; i < miblen; i++) { assert(node); assert(node->nchildren > 0); if (ctl_named_node(node->children) != NULL) { /* Children are named. */ if (node->nchildren <= mib[i]) { ret = ENOENT; goto label_return; } node = ctl_named_children(node, mib[i]); } else { const ctl_indexed_node_t *inode; /* Indexed element. */ inode = ctl_indexed_node(node->children); node = inode->index(mib, miblen, mib[i]); if (node == NULL) { ret = ENOENT; goto label_return; } } } /* Call the ctl function. */ if (node && node->ctl) ret = node->ctl(mib, miblen, oldp, oldlenp, newp, newlen); else { /* Partial MIB. */ ret = ENOENT; } label_return: return(ret); } bool ctl_boot(void) { if (malloc_mutex_init(&ctl_mtx)) return (true); return (false); } void ctl_prefork(void) { malloc_mutex_prefork(&ctl_mtx); } void ctl_postfork_parent(void) { malloc_mutex_postfork_parent(&ctl_mtx); } void ctl_postfork_child(void) { malloc_mutex_postfork_child(&ctl_mtx); } /******************************************************************************/ /* *_ctl() functions. */ #define READONLY() do { \ if (newp != NULL || newlen != 0) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define WRITEONLY() do { \ if (oldp != NULL || oldlenp != NULL) { \ ret = EPERM; \ goto label_return; \ } \ } while (0) #define READ(v, t) do { \ if (oldp != NULL && oldlenp != NULL) { \ if (*oldlenp != sizeof(t)) { \ size_t copylen = (sizeof(t) <= *oldlenp) \ ? sizeof(t) : *oldlenp; \ memcpy(oldp, (void *)&(v), copylen); \ ret = EINVAL; \ goto label_return; \ } else \ *(t *)oldp = (v); \ } \ } while (0) #define WRITE(v, t) do { \ if (newp != NULL) { \ if (newlen != sizeof(t)) { \ ret = EINVAL; \ goto label_return; \ } \ (v) = *(t *)newp; \ } \ } while (0) /* * There's a lot of code duplication in the following macros due to limitations * in how nested cpp macros are expanded. 
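 * (Before the macro definitions themselves, an illustrative, self-contained
 * sketch added for this document only; it is not part of the original
 * ctl.c. It shows the shape of the read-only handler these CTL_RO_*GEN
 * macros expand to: reject writes, then copy the current value out while
 * honoring the caller-supplied length, as the READONLY() and READ() macros
 * above do. The mib/miblen arguments of the real handlers are omitted, and
 * all "example_" names are hypothetical.)
 */

#include <errno.h>
#include <stddef.h>
#include <string.h>

static const unsigned example_value = 42;

static inline int
example_value_ctl(void *oldp, size_t *oldlenp, void *newp, size_t newlen)
{
	unsigned oldval;

	/* READONLY(): any attempt to write yields EPERM. */
	if (newp != NULL || newlen != 0)
		return (EPERM);

	oldval = example_value;

	/* READ(): a size mismatch copies what fits and reports EINVAL. */
	if (oldp != NULL && oldlenp != NULL) {
		if (*oldlenp != sizeof(oldval)) {
			size_t copylen = (sizeof(oldval) <= *oldlenp) ?
			    sizeof(oldval) : *oldlenp;

			memcpy(oldp, &oldval, copylen);
			return (EINVAL);
		}
		*(unsigned *)oldp = oldval;
	}
	return (0);
}

/*
 * End of illustrative sketch; the macro family that the comment above
 * refers to follows.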
*/ #define CTL_RO_CLGEN(c, l, n, v, t) \ static int \ n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ if ((c) == false) \ return (ENOENT); \ if (l) \ malloc_mutex_lock(&ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ if (l) \ malloc_mutex_unlock(&ctl_mtx); \ return (ret); \ } #define CTL_RO_CGEN(c, n, v, t) \ static int \ n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ if ((c) == false) \ return (ENOENT); \ malloc_mutex_lock(&ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ malloc_mutex_unlock(&ctl_mtx); \ return (ret); \ } #define CTL_RO_GEN(n, v, t) \ static int \ n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ malloc_mutex_lock(&ctl_mtx); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ malloc_mutex_unlock(&ctl_mtx); \ return (ret); \ } /* * ctl_mtx is not acquired, under the assumption that no pertinent data will * mutate during the call. */ #define CTL_RO_NL_CGEN(c, n, v, t) \ static int \ n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ if ((c) == false) \ return (ENOENT); \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return (ret); \ } #define CTL_RO_NL_GEN(n, v, t) \ static int \ n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ void *newp, size_t newlen) \ { \ int ret; \ t oldval; \ \ READONLY(); \ oldval = (v); \ READ(oldval, t); \ \ ret = 0; \ label_return: \ return (ret); \ } #define CTL_RO_BOOL_CONFIG_GEN(n) \ static int \ n##_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, \ void *newp, size_t newlen) \ { \ int ret; \ bool oldval; \ \ READONLY(); \ oldval = n; \ READ(oldval, bool); \ \ ret = 0; \ label_return: \ return (ret); \ } /******************************************************************************/ CTL_RO_NL_GEN(version, JEMALLOC_VERSION, const char *) static int epoch_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; UNUSED uint64_t newval; malloc_mutex_lock(&ctl_mtx); WRITE(newval, uint64_t); if (newp != NULL) ctl_refresh(); READ(ctl_epoch, uint64_t); ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } /******************************************************************************/ CTL_RO_BOOL_CONFIG_GEN(config_debug) CTL_RO_BOOL_CONFIG_GEN(config_fill) CTL_RO_BOOL_CONFIG_GEN(config_lazy_lock) CTL_RO_BOOL_CONFIG_GEN(config_munmap) CTL_RO_BOOL_CONFIG_GEN(config_prof) CTL_RO_BOOL_CONFIG_GEN(config_prof_libgcc) CTL_RO_BOOL_CONFIG_GEN(config_prof_libunwind) CTL_RO_BOOL_CONFIG_GEN(config_stats) CTL_RO_BOOL_CONFIG_GEN(config_tcache) CTL_RO_BOOL_CONFIG_GEN(config_tls) CTL_RO_BOOL_CONFIG_GEN(config_utrace) CTL_RO_BOOL_CONFIG_GEN(config_valgrind) CTL_RO_BOOL_CONFIG_GEN(config_xmalloc) /******************************************************************************/ CTL_RO_NL_GEN(opt_abort, opt_abort, bool) CTL_RO_NL_GEN(opt_dss, opt_dss, const char *) CTL_RO_NL_GEN(opt_lg_chunk, opt_lg_chunk, size_t) CTL_RO_NL_GEN(opt_narenas, opt_narenas, size_t) CTL_RO_NL_GEN(opt_lg_dirty_mult, opt_lg_dirty_mult, ssize_t) CTL_RO_NL_GEN(opt_stats_print, opt_stats_print, bool) CTL_RO_NL_CGEN(config_fill, 
opt_junk, opt_junk, bool) CTL_RO_NL_CGEN(config_fill, opt_quarantine, opt_quarantine, size_t) CTL_RO_NL_CGEN(config_fill, opt_redzone, opt_redzone, bool) CTL_RO_NL_CGEN(config_fill, opt_zero, opt_zero, bool) CTL_RO_NL_CGEN(config_utrace, opt_utrace, opt_utrace, bool) CTL_RO_NL_CGEN(config_xmalloc, opt_xmalloc, opt_xmalloc, bool) CTL_RO_NL_CGEN(config_tcache, opt_tcache, opt_tcache, bool) CTL_RO_NL_CGEN(config_tcache, opt_lg_tcache_max, opt_lg_tcache_max, ssize_t) CTL_RO_NL_CGEN(config_prof, opt_prof, opt_prof, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_prefix, opt_prof_prefix, const char *) CTL_RO_CGEN(config_prof, opt_prof_active, opt_prof_active, bool) /* Mutable. */ CTL_RO_NL_CGEN(config_prof, opt_lg_prof_sample, opt_lg_prof_sample, size_t) CTL_RO_NL_CGEN(config_prof, opt_prof_accum, opt_prof_accum, bool) CTL_RO_NL_CGEN(config_prof, opt_lg_prof_interval, opt_lg_prof_interval, ssize_t) CTL_RO_NL_CGEN(config_prof, opt_prof_gdump, opt_prof_gdump, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_final, opt_prof_final, bool) CTL_RO_NL_CGEN(config_prof, opt_prof_leak, opt_prof_leak, bool) /******************************************************************************/ static int thread_arena_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned newind, oldind; size_t pool_ind = mib[1]; pool_t *pool; arena_t dummy; if (pool_ind >= npools) return (ENOENT); pool = pools[pool_ind]; DUMMY_ARENA_INITIALIZE(dummy, pool); tsd_tcache_t *tcache_tsd = tcache_tsd_get(); if (tcache_tsd->npools <= pool_ind) { assert(pool_ind < POOLS_MAX); size_t npools = 1ULL << (32 - __builtin_clz(pool_ind + 1)); if (npools < POOLS_MIN) npools = POOLS_MIN; unsigned *tseqno = base_malloc_fn(npools * sizeof (unsigned)); if (tseqno == NULL) return (ENOMEM); if (tcache_tsd->seqno != NULL) memcpy(tseqno, tcache_tsd->seqno, tcache_tsd->npools * sizeof (unsigned)); memset(&tseqno[tcache_tsd->npools], 0, (npools - tcache_tsd->npools) * sizeof (unsigned)); tcache_t **tcaches = base_malloc_fn(npools * sizeof (tcache_t *)); if (tcaches == NULL) { base_free_fn(tseqno); return (ENOMEM); } if (tcache_tsd->tcaches != NULL) memcpy(tcaches, tcache_tsd->tcaches, tcache_tsd->npools * sizeof (tcache_t *)); memset(&tcaches[tcache_tsd->npools], 0, (npools - tcache_tsd->npools) * sizeof (tcache_t *)); base_free_fn(tcache_tsd->seqno); tcache_tsd->seqno = tseqno; base_free_fn(tcache_tsd->tcaches); tcache_tsd->tcaches = tcaches; tcache_tsd->npools = npools; } malloc_mutex_lock(&ctl_mtx); arena_t *arena = choose_arena(&dummy); if (arena == NULL) { ret = EFAULT; goto label_return; } newind = oldind = arena->ind; WRITE(newind, unsigned); READ(oldind, unsigned); if (newind != oldind) { arena_t *arena; tsd_pool_t *tsd; if (newind >= pool->ctl_stats.narenas) { /* New arena index is out of range. */ ret = EFAULT; goto label_return; } /* Initialize arena if necessary. */ malloc_rwlock_wrlock(&pool->arenas_lock); if ((arena = pool->arenas[newind]) == NULL && (arena = arenas_extend(pool, newind)) == NULL) { malloc_rwlock_unlock(&pool->arenas_lock); ret = EAGAIN; goto label_return; } assert(arena == pool->arenas[newind]); pool->arenas[oldind]->nthreads--; pool->arenas[newind]->nthreads++; malloc_rwlock_unlock(&pool->arenas_lock); /* Set new arena association. 
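Re-binding a thread to a different arena has two parts: re-associating the thread's tcache (done just below when config_tcache, so future fills and flushes hit the new arena) and recording the new arena together with the pool's current seqno in this thread's TSD. A hedged caller-side sketch, assuming the handler is reachable under a "thread.<pool>.arena"-style name (the exact name table lives elsewhere in this file) and using a made-up target index of 2:

	unsigned new_ind = 2, old_ind;
	size_t sz = sizeof(old_ind);
	ctl_byname("thread.0.arena", &old_ind, &sz, &new_ind, sizeof(new_ind));

The npools growth above uses the same "round up to a power of two, but not below POOLS_MIN" rule as arenas_tsd_extend(): 1ULL << (32 - __builtin_clz(pool_ind + 1)) turns, e.g., pool_ind 4 into 8 slots, so the per-thread arrays grow geometrically rather than one pool at a time.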
*/ if (config_tcache) { tcache_t *tcache = tcache_tsd->tcaches[pool->pool_id]; if ((uintptr_t)(tcache) > (uintptr_t)TCACHE_STATE_MAX) { if(tcache_tsd->seqno[pool->pool_id] == pool->seqno) tcache_arena_dissociate(tcache); tcache_arena_associate(tcache, arena); tcache_tsd->seqno[pool->pool_id] = pool->seqno; } } tsd = arenas_tsd_get(); tsd->seqno[0] = pool->seqno; tsd->arenas[0] = arena; } ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } CTL_RO_NL_CGEN(config_stats, thread_allocated, thread_allocated_tsd_get()->allocated, uint64_t) CTL_RO_NL_CGEN(config_stats, thread_allocatedp, &thread_allocated_tsd_get()->allocated, uint64_t *) CTL_RO_NL_CGEN(config_stats, thread_deallocated, thread_allocated_tsd_get()->deallocated, uint64_t) CTL_RO_NL_CGEN(config_stats, thread_deallocatedp, &thread_allocated_tsd_get()->deallocated, uint64_t *) static int thread_tcache_enabled_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (config_tcache == false) return (ENOENT); oldval = tcache_enabled_get(); if (newp != NULL) { if (newlen != sizeof(bool)) { ret = EINVAL; goto label_return; } tcache_enabled_set(*(bool *)newp); } READ(oldval, bool); ret = 0; label_return: return (ret); } static int thread_tcache_flush_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (config_tcache == false) return (ENOENT); READONLY(); WRITEONLY(); tcache_flush(pools[0]); ret = 0; label_return: return (ret); } /******************************************************************************/ /* ctl_mutex must be held during execution of this function. */ static void arena_purge(pool_t *pool, unsigned arena_ind) { VARIABLE_ARRAY(arena_t *, tarenas, pool->ctl_stats.narenas); malloc_rwlock_wrlock(&pool->arenas_lock); memcpy(tarenas, pool->arenas, sizeof(arena_t *) * pool->ctl_stats.narenas); malloc_rwlock_unlock(&pool->arenas_lock); if (arena_ind == pool->ctl_stats.narenas) { unsigned i; for (i = 0; i < pool->ctl_stats.narenas; i++) { if (tarenas[i] != NULL) arena_purge_all(tarenas[i]); } } else { assert(arena_ind < pool->ctl_stats.narenas); if (tarenas[arena_ind] != NULL) arena_purge_all(tarenas[arena_ind]); } } static int arena_i_purge_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; if (mib[1] >= npools) return (ENOENT); READONLY(); WRITEONLY(); malloc_mutex_lock(&ctl_mtx); arena_purge(pools[mib[1]], mib[3]); malloc_mutex_unlock(&ctl_mtx); ret = 0; label_return: return (ret); } static int arena_i_dss_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret, i; bool match, err; const char *dss = ""; size_t pool_ind = mib[1]; size_t arena_ind = mib[3]; dss_prec_t dss_prec_old = dss_prec_limit; dss_prec_t dss_prec = dss_prec_limit; pool_t *pool; if (pool_ind >= npools) return (ENOENT); malloc_mutex_lock(&ctl_mtx); pool = pools[pool_ind]; WRITE(dss, const char *); match = false; for (i = 0; i < dss_prec_limit; i++) { if (strcmp(dss_prec_names[i], dss) == 0) { dss_prec = i; match = true; break; } } if (match == false) { ret = EINVAL; goto label_return; } if (arena_ind < pool->ctl_stats.narenas) { arena_t *arena = pool->arenas[arena_ind]; if (arena != NULL) { dss_prec_old = arena_dss_prec_get(arena); err = arena_dss_prec_set(arena, dss_prec); } else err = true; } else { dss_prec_old = chunk_dss_prec_get(); err = chunk_dss_prec_set(dss_prec); } dss = dss_prec_names[dss_prec_old]; READ(dss, const 
char *); if (err) { ret = EFAULT; goto label_return; } ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static int arena_i_chunk_alloc_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; size_t pool_ind = mib[1]; size_t arena_ind = mib[3]; arena_t *arena; pool_t *pool; if (pool_ind >= npools) return (ENOENT); malloc_mutex_lock(&ctl_mtx); pool = pools[pool_ind]; if (arena_ind < pool->narenas_total && (arena = pool->arenas[arena_ind]) != NULL) { malloc_mutex_lock(&arena->lock); READ(arena->chunk_alloc, chunk_alloc_t *); WRITE(arena->chunk_alloc, chunk_alloc_t *); /* * There could be direct jump to label_return from inside * of READ/WRITE macros. This is why unlocking the arena mutex * must be moved there. */ } else { ret = EFAULT; goto label_outer_return; } ret = 0; label_return: malloc_mutex_unlock(&arena->lock); label_outer_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static int arena_i_chunk_dalloc_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; size_t pool_ind = mib[1]; size_t arena_ind = mib[3]; arena_t *arena; pool_t *pool; if (pool_ind >= npools) return (ENOENT); malloc_mutex_lock(&ctl_mtx); pool = pools[pool_ind]; if (arena_ind < pool->narenas_total && (arena = pool->arenas[arena_ind]) != NULL) { malloc_mutex_lock(&arena->lock); READ(arena->chunk_dalloc, chunk_dalloc_t *); WRITE(arena->chunk_dalloc, chunk_dalloc_t *); /* * There could be direct jump to label_return from inside * of READ/WRITE macros. This is why unlocking the arena mutex * must be moved there. */ } else { ret = EFAULT; goto label_outer_return; } ret = 0; label_return: malloc_mutex_unlock(&arena->lock); label_outer_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static const ctl_named_node_t * arena_i_index(const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t * ret; malloc_mutex_lock(&ctl_mtx); if (i > pools[mib[1]]->ctl_stats.narenas) { ret = NULL; goto label_return; } ret = super_arena_i_node; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } /******************************************************************************/ static int arenas_narenas_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned narenas; malloc_mutex_lock(&ctl_mtx); READONLY(); if (*oldlenp != sizeof(unsigned)) { ret = EINVAL; goto label_return; } narenas = pools[mib[1]]->ctl_stats.narenas; READ(narenas, unsigned); ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static int arenas_initialized_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned nread, i; pool_t *pool; malloc_mutex_lock(&ctl_mtx); READONLY(); pool = pools[mib[1]]; if (*oldlenp != pool->ctl_stats.narenas * sizeof(bool)) { ret = EINVAL; nread = (*oldlenp < pool->ctl_stats.narenas * sizeof(bool)) ? 
(*oldlenp / sizeof(bool)) : pool->ctl_stats.narenas; } else { ret = 0; nread = pool->ctl_stats.narenas; } for (i = 0; i < nread; i++) ((bool *)oldp)[i] = pool->ctl_stats.arenas[i].initialized; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } CTL_RO_NL_GEN(arenas_quantum, QUANTUM, size_t) CTL_RO_NL_GEN(arenas_page, PAGE, size_t) CTL_RO_NL_CGEN(config_tcache, arenas_tcache_max, tcache_maxclass, size_t) CTL_RO_NL_GEN(arenas_nbins, NBINS, unsigned) CTL_RO_NL_CGEN(config_tcache, arenas_nhbins, nhbins, unsigned) CTL_RO_NL_GEN(arenas_bin_i_size, arena_bin_info[mib[4]].reg_size, size_t) CTL_RO_NL_GEN(arenas_bin_i_nregs, arena_bin_info[mib[4]].nregs, uint32_t) CTL_RO_NL_GEN(arenas_bin_i_run_size, arena_bin_info[mib[4]].run_size, size_t) static const ctl_named_node_t * arenas_bin_i_index(const size_t *mib, size_t miblen, size_t i) { if (i > NBINS) return (NULL); return (super_arenas_bin_i_node); } CTL_RO_NL_GEN(arenas_nlruns, nlclasses, size_t) CTL_RO_NL_GEN(arenas_lrun_i_size, ((mib[4]+1) << LG_PAGE), size_t) static const ctl_named_node_t * arenas_lrun_i_index(const size_t *mib, size_t miblen, size_t i) { if (i > nlclasses) return (NULL); return (super_arenas_lrun_i_node); } static int arenas_extend_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned narenas; unsigned pool_ind = mib[1]; pool_t *pool; if (pool_ind >= npools) return (ENOENT); pool = pools[pool_ind]; malloc_mutex_lock(&ctl_mtx); READONLY(); if (ctl_grow(pool)) { ret = EAGAIN; goto label_return; } narenas = pool->ctl_stats.narenas - 1; READ(narenas, unsigned); ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } /** * @stub */ static int pools_npools_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; unsigned _npools; malloc_mutex_lock(&ctl_mtx); READONLY(); if (*oldlenp != sizeof(unsigned)) { ret = EINVAL; goto label_return; } _npools = npools_cnt; READ(_npools, unsigned); ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } /** * @stub */ static int pool_i_base_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; READONLY(); WRITEONLY(); malloc_mutex_lock(&ctl_mtx); //TODO malloc_mutex_unlock(&ctl_mtx); ret = 0; label_return: return (ret); } /** * @stub */ static int pool_i_size_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; READONLY(); WRITEONLY(); malloc_mutex_lock(&ctl_mtx); //TODO malloc_mutex_unlock(&ctl_mtx); ret = 0; label_return: return (ret); } /** * @stub */ static const ctl_named_node_t * pool_i_index(const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t * ret; malloc_mutex_lock(&ctl_mtx); if (i > npools) { ret = NULL; goto label_return; } ret = super_pool_i_node; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } /******************************************************************************/ static int prof_active_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; bool oldval; if (config_prof == false) return (ENOENT); malloc_mutex_lock(&ctl_mtx); /* Protect opt_prof_active. */ oldval = opt_prof_active; if (newp != NULL) { /* * The memory barriers will tend to make opt_prof_active * propagate faster on systems with weak memory ordering. 
*/ mb_write(); WRITE(opt_prof_active, bool); mb_write(); } READ(oldval, bool); ret = 0; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static int prof_dump_ctl(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { int ret; const char *filename = NULL; if (config_prof == false) return (ENOENT); WRITEONLY(); WRITE(filename, const char *); if (prof_mdump(filename)) { ret = EFAULT; goto label_return; } ret = 0; label_return: return (ret); } CTL_RO_NL_CGEN(config_prof, prof_interval, prof_interval, uint64_t) /******************************************************************************/ /* * @TODO remember to split up stats to arena-related and th rest */ CTL_RO_CGEN(config_stats, stats_cactive, &(pools[mib[1]]->stats_cactive), size_t *) CTL_RO_CGEN(config_stats, stats_allocated, pools[mib[1]]->ctl_stats_allocated, size_t) CTL_RO_CGEN(config_stats, stats_active, pools[mib[1]]->ctl_stats_active, size_t) CTL_RO_CGEN(config_stats, stats_mapped, pools[mib[1]]->ctl_stats_mapped, size_t) CTL_RO_CGEN(config_stats, stats_chunks_current, pools[mib[1]]->ctl_stats.chunks.current, size_t) CTL_RO_CGEN(config_stats, stats_chunks_total, pools[mib[1]]->ctl_stats.chunks.total, uint64_t) CTL_RO_CGEN(config_stats, stats_chunks_high, pools[mib[1]]->ctl_stats.chunks.high, size_t) CTL_RO_GEN(stats_arenas_i_dss, pools[mib[1]]->ctl_stats.arenas[mib[4]].dss, const char *) CTL_RO_GEN(stats_arenas_i_nthreads, pools[mib[1]]->ctl_stats.arenas[mib[4]].nthreads, unsigned) CTL_RO_GEN(stats_arenas_i_pactive, pools[mib[1]]->ctl_stats.arenas[mib[4]].pactive, size_t) CTL_RO_GEN(stats_arenas_i_pdirty, pools[mib[1]]->ctl_stats.arenas[mib[4]].pdirty, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_mapped, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.mapped, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_npurge, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.npurge, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_nmadvise, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.nmadvise, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_purged, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.purged, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_allocated, pools[mib[1]]->ctl_stats.arenas[mib[4]].allocated_small, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_nmalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].nmalloc_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_ndalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].ndalloc_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_small_nrequests, pools[mib[1]]->ctl_stats.arenas[mib[4]].nrequests_small, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_allocated, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.allocated_large, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_nmalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.nmalloc_large, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_ndalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.ndalloc_large, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_large_nrequests, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.nrequests_large, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_huge_allocated, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.allocated_huge, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_huge_nmalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.nmalloc_huge, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_huge_ndalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.ndalloc_huge, uint64_t) 
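/*
 * Note on the mib[] indices used throughout these stats readers: mib[1]
 * selects the pool, mib[4] the arena and mib[6] the bin or large-run class,
 * matching the <i>/<j> components of the dotted names. A hedged sketch of a
 * MIB-based read -- the name string and its component layout are assumptions,
 * only ctl_nametomib() and ctl_bymib() themselves are defined in this file:
 *
 *	size_t mib[CTL_MAX_DEPTH];
 *	size_t miblen = CTL_MAX_DEPTH;
 *	if (ctl_nametomib("stats.pool.0.arenas.0.small.allocated", mib,
 *	    &miblen) == 0) {
 *		size_t allocated, sz = sizeof(allocated);
 *		ctl_bymib(mib, miblen, &allocated, &sz, NULL, 0);
 *	}
 */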
CTL_RO_CGEN(config_stats, stats_arenas_i_huge_nrequests, pools[mib[1]]->ctl_stats.arenas[mib[4]].astats.nrequests_huge, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_allocated, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].allocated, size_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nmalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].nmalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_ndalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].ndalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nrequests, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].nrequests, uint64_t) CTL_RO_CGEN(config_stats && config_tcache, stats_arenas_i_bins_j_nfills, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].nfills, uint64_t) CTL_RO_CGEN(config_stats && config_tcache, stats_arenas_i_bins_j_nflushes, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].nflushes, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nruns, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].nruns, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nreruns, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].reruns, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curruns, pools[mib[1]]->ctl_stats.arenas[mib[4]].bstats[mib[6]].curruns, size_t) static const ctl_named_node_t * stats_arenas_i_bins_j_index(const size_t *mib, size_t miblen, size_t j) { if (j > NBINS) return (NULL); return (super_stats_arenas_i_bins_j_node); } CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_nmalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].lstats[mib[6]].nmalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_ndalloc, pools[mib[1]]->ctl_stats.arenas[mib[4]].lstats[mib[6]].ndalloc, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_nrequests, pools[mib[1]]->ctl_stats.arenas[mib[4]].lstats[mib[6]].nrequests, uint64_t) CTL_RO_CGEN(config_stats, stats_arenas_i_lruns_j_curruns, pools[mib[1]]->ctl_stats.arenas[mib[4]].lstats[mib[6]].curruns, size_t) static const ctl_named_node_t * stats_arenas_i_lruns_j_index(const size_t *mib, size_t miblen, size_t j) { if (j > nlclasses) return (NULL); return (super_stats_arenas_i_lruns_j_node); } static const ctl_named_node_t * stats_arenas_i_index(const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t *ret; malloc_mutex_lock(&ctl_mtx); if (i > pools[mib[1]]->ctl_stats.narenas || pools[mib[1]]->ctl_stats.arenas[i].initialized == false) { ret = NULL; goto label_return; } ret = super_stats_arenas_i_node; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } static const ctl_named_node_t * thread_pool_i_index(const size_t *mib, size_t miblen, size_t i) { const ctl_named_node_t *ret; malloc_mutex_lock(&ctl_mtx); if (i > npools) { ret = NULL; goto label_return; } ret = super_thread_pool_i_node; label_return: malloc_mutex_unlock(&ctl_mtx); return (ret); } vmem-1.8/src/jemalloc/src/extent.c000066400000000000000000000017151361505074100171530ustar00rootroot00000000000000#define JEMALLOC_EXTENT_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ static inline int extent_szad_comp(extent_node_t *a, extent_node_t *b) { int ret; size_t a_size = a->size; size_t b_size = b->size; ret = (a_size > b_size) - (a_size < b_size); if (ret == 0) { uintptr_t a_addr = (uintptr_t)a->addr; uintptr_t b_addr = (uintptr_t)b->addr; ret = (a_addr > b_addr) - (a_addr < b_addr); } return (ret); } /* Generate red-black 
tree functions. */ rb_gen(, extent_tree_szad_, extent_tree_t, extent_node_t, link_szad, extent_szad_comp) static inline int extent_ad_comp(extent_node_t *a, extent_node_t *b) { uintptr_t a_addr = (uintptr_t)a->addr; uintptr_t b_addr = (uintptr_t)b->addr; return ((a_addr > b_addr) - (a_addr < b_addr)); } /* Generate red-black tree functions. */ rb_gen(, extent_tree_ad_, extent_tree_t, extent_node_t, link_ad, extent_ad_comp) vmem-1.8/src/jemalloc/src/hash.c000066400000000000000000000001121361505074100165550ustar00rootroot00000000000000#define JEMALLOC_HASH_C_ #include "jemalloc/internal/jemalloc_internal.h" vmem-1.8/src/jemalloc/src/huge.c000066400000000000000000000222161361505074100165730ustar00rootroot00000000000000#define JEMALLOC_HUGE_C_ #include "jemalloc/internal/jemalloc_internal.h" void * huge_malloc(arena_t *arena, size_t size, bool zero) { return (huge_palloc(arena, size, chunksize, zero)); } void * huge_palloc(arena_t *arena, size_t size, size_t alignment, bool zero) { void *ret; size_t csize; extent_node_t *node; bool is_zeroed; pool_t *pool; /* Allocate one or more contiguous chunks for this request. */ csize = CHUNK_CEILING(size); if (csize == 0) { /* size is large enough to cause size_t wrap-around. */ return (NULL); } /* * Copy zero into is_zeroed and pass the copy to chunk_alloc(), so that * it is possible to make correct junk/zero fill decisions below. */ is_zeroed = zero; arena = choose_arena(arena); if (arena == NULL) return (NULL); pool = arena->pool; /* Allocate an extent node with which to track the chunk. */ node = base_node_alloc(pool); if (node == NULL) return (NULL); ret = arena_chunk_alloc_huge(arena, NULL, csize, alignment, &is_zeroed); if (ret == NULL) { base_node_dalloc(pool, node); return (NULL); } /* Insert node into huge. */ node->addr = ret; node->size = csize; node->arena = arena; malloc_mutex_lock(&pool->huge_mtx); extent_tree_ad_insert(&pool->huge, node); malloc_mutex_unlock(&pool->huge_mtx); if (config_fill && zero == false) { if (opt_junk) memset(ret, 0xa5, csize); else if (opt_zero && is_zeroed == false) memset(ret, 0, csize); } return (ret); } #ifdef JEMALLOC_JET #undef huge_dalloc_junk #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk_impl) #endif static void huge_dalloc_junk(void *ptr, size_t usize) { if (config_fill && have_dss && unlikely(opt_junk)) { /* * Only bother junk filling if the chunk isn't about to be * unmapped. */ if (config_munmap == false || (have_dss && chunk_in_dss(ptr))) memset(ptr, 0x5a, usize); } } #ifdef JEMALLOC_JET #undef huge_dalloc_junk #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk) huge_dalloc_junk_t *huge_dalloc_junk = JEMALLOC_N(huge_dalloc_junk_impl); #endif static bool huge_ralloc_no_move_expand(pool_t *pool, char *ptr, size_t oldsize, size_t size, bool zero) { size_t csize; void *expand_addr; size_t expand_size; extent_node_t *node, key; arena_t *arena; bool is_zeroed; void *ret; csize = CHUNK_CEILING(size); if (csize == 0) { /* size is large enough to cause size_t wrap-around. */ return (true); } expand_addr = ptr + oldsize; expand_size = csize - oldsize; malloc_mutex_lock(&pool->huge_mtx); key.addr = ptr; node = extent_tree_ad_search(&pool->huge, &key); assert(node != NULL); assert(node->addr == ptr); /* Find the current arena. */ arena = node->arena; malloc_mutex_unlock(&pool->huge_mtx); /* * Copy zero into is_zeroed and pass the copy to chunk_alloc(), so that * it is possible to make correct junk/zero fill decisions below. 
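The expansion itself is purely chunk-granular arithmetic: the new request is rounded up with CHUNK_CEILING(), and only the tail (expand_addr = ptr + oldsize, expand_size = csize - oldsize) is mapped in place. For example, assuming a 4 MiB chunksize (the value is configurable via the lg_chunk option), growing an 8 MiB huge allocation to a 10 MiB request gives csize = 12 MiB, so a single 4 MiB extension is requested immediately after the existing chunks; if that adjacent mapping cannot be obtained, huge_ralloc() falls back to allocating a new region and copying.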
*/ is_zeroed = zero; ret = arena_chunk_alloc_huge(arena, expand_addr, expand_size, chunksize, &is_zeroed); if (ret == NULL) return (true); assert(ret == expand_addr); malloc_mutex_lock(&pool->huge_mtx); /* Update the size of the huge allocation. */ node->size = csize; malloc_mutex_unlock(&pool->huge_mtx); if (config_fill && !zero) { if (unlikely(opt_junk)) memset(expand_addr, 0xa5, expand_size); else if (unlikely(opt_zero) && !is_zeroed) memset(expand_addr, 0, expand_size); } return (false); } bool huge_ralloc_no_move(pool_t *pool, void *ptr, size_t oldsize, size_t size, size_t extra, bool zero) { /* Both allocations must be huge to avoid a move. */ if (oldsize <= arena_maxclass) return (true); assert(CHUNK_CEILING(oldsize) == oldsize); /* * Avoid moving the allocation if the size class can be left the same. */ if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(size) && CHUNK_CEILING(oldsize) <= CHUNK_CEILING(size+extra)) { return (false); } /* Overflow. */ if (CHUNK_CEILING(size) == 0) return (true); /* Shrink the allocation in-place. */ if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(size)) { extent_node_t *node, key; void *excess_addr; size_t excess_size; malloc_mutex_lock(&pool->huge_mtx); key.addr = ptr; node = extent_tree_ad_search(&pool->huge, &key); assert(node != NULL); assert(node->addr == ptr); /* Update the size of the huge allocation. */ node->size = CHUNK_CEILING(size); malloc_mutex_unlock(&pool->huge_mtx); excess_addr = (char *)node->addr + CHUNK_CEILING(size); excess_size = CHUNK_CEILING(oldsize) - CHUNK_CEILING(size); /* Zap the excess chunks. */ huge_dalloc_junk(excess_addr, excess_size); arena_chunk_dalloc_huge(node->arena, excess_addr, excess_size); return (false); } /* Attempt to expand the allocation in-place. */ if (huge_ralloc_no_move_expand(pool, ptr, oldsize, size + extra, zero)) { if (extra == 0) return (true); /* Try again, this time without extra. */ return (huge_ralloc_no_move_expand(pool, ptr, oldsize, size, zero)); } return (false); } void * huge_ralloc(arena_t *arena, void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_dalloc) { void *ret; size_t copysize; /* Try to avoid moving the allocation. */ if (huge_ralloc_no_move(arena->pool, ptr, oldsize, size, extra, zero) == false) return (ptr); /* * size and oldsize are different enough that we need to use a * different size class. In that case, fall back to allocating new * space and copying. */ if (alignment > chunksize) ret = huge_palloc(arena, size + extra, alignment, zero); else ret = huge_malloc(arena, size + extra, zero); if (ret == NULL) { if (extra == 0) return (NULL); /* Try again, this time without extra. */ if (alignment > chunksize) ret = huge_palloc(arena, size, alignment, zero); else ret = huge_malloc(arena, size, zero); if (ret == NULL) return (NULL); } /* * Copy at most size bytes (not size+extra), since the caller has no * expectation that the extra bytes will be reliably preserved. */ copysize = (size < oldsize) ? size : oldsize; memcpy(ret, ptr, copysize); pool_iqalloct(arena->pool, ptr, try_tcache_dalloc); return (ret); } void huge_dalloc(pool_t *pool, void *ptr) { extent_node_t *node, key; malloc_mutex_lock(&pool->huge_mtx); /* Extract from tree of huge allocations. 
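The stack-allocated extent_node_t below is used purely as a search key: extent_ad_comp() compares nothing but ->addr, so setting key.addr is sufficient and the remaining fields may stay uninitialized. The same pattern appears in huge_salloc(), huge_pool_salloc() and the prof_ctx accessors later in this file.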
*/ key.addr = ptr; node = extent_tree_ad_search(&pool->huge, &key); assert(node != NULL); assert(node->addr == ptr); extent_tree_ad_remove(&pool->huge, node); malloc_mutex_unlock(&pool->huge_mtx); huge_dalloc_junk(node->addr, node->size); arena_chunk_dalloc_huge(node->arena, node->addr, node->size); base_node_dalloc(pool, node); } size_t huge_salloc(const void *ptr) { size_t ret = 0; size_t i; extent_node_t *node, key; malloc_mutex_lock(&pools_lock); for (i = 0; i < npools; ++i) { pool_t *pool = pools[i]; if (pool == NULL) continue; malloc_mutex_lock(&pool->huge_mtx); /* Extract from tree of huge allocations. */ key.addr = __DECONST(void *, ptr); node = extent_tree_ad_search(&pool->huge, &key); if (node != NULL) ret = node->size; malloc_mutex_unlock(&pool->huge_mtx); if (ret != 0) break; } malloc_mutex_unlock(&pools_lock); return (ret); } size_t huge_pool_salloc(pool_t *pool, const void *ptr) { size_t ret = 0; extent_node_t *node, key; malloc_mutex_lock(&pool->huge_mtx); /* Extract from tree of huge allocations. */ key.addr = __DECONST(void *, ptr); node = extent_tree_ad_search(&pool->huge, &key); if (node != NULL) ret = node->size; malloc_mutex_unlock(&pool->huge_mtx); return (ret); } prof_ctx_t * huge_prof_ctx_get(const void *ptr) { prof_ctx_t *ret = NULL; size_t i; extent_node_t *node, key; malloc_mutex_lock(&pools_lock); for (i = 0; i < npools; ++i) { pool_t *pool = pools[i]; if (pool == NULL) continue; malloc_mutex_lock(&pool->huge_mtx); /* Extract from tree of huge allocations. */ key.addr = __DECONST(void *, ptr); node = extent_tree_ad_search(&pool->huge, &key); if (node != NULL) ret = node->prof_ctx; malloc_mutex_unlock(&pool->huge_mtx); if (ret != NULL) break; } malloc_mutex_unlock(&pools_lock); return (ret); } void huge_prof_ctx_set(const void *ptr, prof_ctx_t *ctx) { extent_node_t *node, key; size_t i; malloc_mutex_lock(&pools_lock); for (i = 0; i < npools; ++i) { pool_t *pool = pools[i]; if (pool == NULL) continue; malloc_mutex_lock(&pool->huge_mtx); /* Extract from tree of huge allocations. */ key.addr = __DECONST(void *, ptr); node = extent_tree_ad_search(&pool->huge, &key); if (node != NULL) node->prof_ctx = ctx; malloc_mutex_unlock(&pool->huge_mtx); if (node != NULL) break; } malloc_mutex_unlock(&pools_lock); } /* * Called at each pool opening. */ bool huge_boot(pool_t *pool) { if (malloc_mutex_init(&pool->huge_mtx)) return (true); return (false); } /* * Called only at pool creation. */ bool huge_init(pool_t *pool) { if (huge_boot(pool)) return (true); /* Initialize chunks data. */ extent_tree_ad_new(&pool->huge); return (false); } void huge_prefork(pool_t *pool) { malloc_mutex_prefork(&pool->huge_mtx); } void huge_postfork_parent(pool_t *pool) { malloc_mutex_postfork_parent(&pool->huge_mtx); } void huge_postfork_child(pool_t *pool) { malloc_mutex_postfork_child(&pool->huge_mtx); } vmem-1.8/src/jemalloc/src/jemalloc.c000066400000000000000000002351741361505074100174420ustar00rootroot00000000000000#define JEMALLOC_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ malloc_tsd_data(, arenas, tsd_pool_t, TSD_POOL_INITIALIZER) malloc_tsd_data(, thread_allocated, thread_allocated_t, THREAD_ALLOCATED_INITIALIZER) /* Runtime configuration options. 
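These opt_* globals are filled in by malloc_conf_init() from up to three sources, processed in order: the compiled-in je_malloc_conf string, the name (link target) of the /etc/malloc.conf symbolic link, and the MALLOC_CONF environment variable (possibly prefixed via JEMALLOC_CPREFIX). Options use a comma-separated key:value syntax; a hedged, illustrative string -- the option names are taken from the handlers in malloc_conf_init(), the particular combination is made up:

	"junk:true,narenas:4,lg_dirty_mult:8"

Sources parsed later override earlier ones for any key they both set.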
*/ const char *je_malloc_conf; bool opt_abort = #ifdef JEMALLOC_DEBUG true #else false #endif ; bool opt_junk = #if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL)) true #else false #endif ; size_t opt_quarantine = ZU(0); bool opt_redzone = false; bool opt_utrace = false; bool opt_xmalloc = false; bool opt_zero = false; size_t opt_narenas = 0; /* Initialized to true if the process is running inside Valgrind. */ bool in_valgrind; unsigned npools_cnt; /* actual number of pools */ unsigned npools; /* size of the pools[] array */ unsigned ncpus; pool_t **pools; pool_t base_pool; unsigned pool_seqno = 0; bool pools_shared_data_initialized; /* * Custom malloc() and free() for shared data and for data needed to * initialize pool. If not defined functions then base_pool will be * created for allocations from RAM. */ void *(*base_malloc_fn)(size_t); void (*base_free_fn)(void *); /* Set to true once the allocator has been initialized. */ static bool malloc_initialized = false; static bool base_pool_initialized = false; #ifdef JEMALLOC_THREADED_INIT /* Used to let the initializing thread recursively allocate. */ # define NO_INITIALIZER ((unsigned long)0) # define INITIALIZER pthread_self() # define IS_INITIALIZER (malloc_initializer == pthread_self()) static pthread_t malloc_initializer = NO_INITIALIZER; #else # define NO_INITIALIZER false # define INITIALIZER true # define IS_INITIALIZER malloc_initializer static bool malloc_initializer = NO_INITIALIZER; #endif /* Used to avoid initialization races. */ #ifdef _WIN32 static malloc_mutex_t init_lock; JEMALLOC_ATTR(constructor) static void WINAPI _init_init_lock(void) { malloc_mutex_init(&init_lock); malloc_mutex_init(&pools_lock); malloc_mutex_init(&pool_base_lock); } #ifdef _MSC_VER # pragma comment(linker, "/include:__init_init_lock") # pragma section(".CRT$XCU", read) JEMALLOC_SECTION(".CRT$XCU") JEMALLOC_ATTR(used) const void (WINAPI *__init_init_lock)(void) = _init_init_lock; #endif #else static malloc_mutex_t init_lock = MALLOC_MUTEX_INITIALIZER; #endif typedef struct { void *p; /* Input pointer (as in realloc(p, s)). */ size_t s; /* Request size. */ void *r; /* Result pointer. */ } malloc_utrace_t; #ifdef JEMALLOC_UTRACE # define UTRACE(a, b, c) do { \ if (opt_utrace) { \ int utrace_serrno = errno; \ malloc_utrace_t ut; \ ut.p = (a); \ ut.s = (b); \ ut.r = (c); \ utrace(&ut, sizeof(ut)); \ errno = utrace_serrno; \ } \ } while (0) #else # define UTRACE(a, b, c) do { (void)(a); (void)(b); (void)(c); } while (0) #endif /* data structures for callbacks used in je_pool_check() to browse trees */ typedef struct { pool_memory_range_node_t *list; size_t size; int error; } check_data_cb_t; /******************************************************************************/ /* * Function prototypes for static functions that are referenced prior to * definition. */ static bool malloc_init_hard(void); static bool malloc_init_base_pool(void); static void *base_malloc_default(size_t size); static void base_free_default(void *ptr); /******************************************************************************/ /* * Begin miscellaneous support functions. */ /* Create a new arena and insert it into the arenas array at index ind. */ arena_t * arenas_extend(pool_t *pool, unsigned ind) { arena_t *ret; ret = (arena_t *)base_alloc(pool, sizeof(arena_t)); if (ret != NULL && arena_new(pool, ret, ind) == false) { pool->arenas[ind] = ret; return (ret); } /* Only reached if there is an OOM error. 
*/ /* * OOM here is quite inconvenient to propagate, since dealing with it * would require a check for failure in the fast path. Instead, punt * by using arenas[0]. In practice, this is an extremely unlikely * failure. */ malloc_write(": Error initializing arena\n"); if (opt_abort) abort(); return (pool->arenas[0]); } /* Slow path, called only by choose_arena(). */ arena_t * choose_arena_hard(pool_t *pool) { arena_t *ret; tsd_pool_t *tsd; if (pool->narenas_auto > 1) { unsigned i, choose, first_null; choose = 0; first_null = pool->narenas_auto; malloc_rwlock_wrlock(&pool->arenas_lock); assert(pool->arenas[0] != NULL); for (i = 1; i < pool->narenas_auto; i++) { if (pool->arenas[i] != NULL) { /* * Choose the first arena that has the lowest * number of threads assigned to it. */ if (pool->arenas[i]->nthreads < pool->arenas[choose]->nthreads) choose = i; } else if (first_null == pool->narenas_auto) { /* * Record the index of the first uninitialized * arena, in case all extant arenas are in use. * * NB: It is possible for there to be * discontinuities in terms of initialized * versus uninitialized arenas, due to the * "thread.arena" mallctl. */ first_null = i; } } if (pool->arenas[choose]->nthreads == 0 || first_null == pool->narenas_auto) { /* * Use an unloaded arena, or the least loaded arena if * all arenas are already initialized. */ ret = pool->arenas[choose]; } else { /* Initialize a new arena. */ ret = arenas_extend(pool, first_null); } ret->nthreads++; malloc_rwlock_unlock(&pool->arenas_lock); } else { ret = pool->arenas[0]; malloc_rwlock_wrlock(&pool->arenas_lock); ret->nthreads++; malloc_rwlock_unlock(&pool->arenas_lock); } tsd = arenas_tsd_get(); tsd->seqno[pool->pool_id] = pool->seqno; tsd->arenas[pool->pool_id] = ret; return (ret); } static void stats_print_atexit(void) { if (config_tcache && config_stats) { unsigned narenas, i, j; pool_t *pool; /* * Merge stats from extant threads. This is racy, since * individual threads do not lock when recording tcache stats * events. As a consequence, the final stats may be slightly * out of date by the time they are reported, if other threads * continue to allocate. */ malloc_mutex_lock(&pools_lock); for (i = 0; i < npools; i++) { pool = pools[i]; if (pool != NULL) { for (j = 0, narenas = narenas_total_get(pool); j < narenas; j++) { arena_t *arena = pool->arenas[j]; if (arena != NULL) { tcache_t *tcache; /* * tcache_stats_merge() locks bins, so if any * code is introduced that acquires both arena * and bin locks in the opposite order, * deadlocks may result. */ malloc_mutex_lock(&arena->lock); ql_foreach(tcache, &arena->tcache_ql, link) { tcache_stats_merge(tcache, arena); } malloc_mutex_unlock(&arena->lock); } } } } malloc_mutex_unlock(&pools_lock); } je_malloc_stats_print(NULL, NULL, NULL); } /* * End miscellaneous support functions. */ /******************************************************************************/ /* * Begin initialization functions. */ static unsigned malloc_ncpus(void) { long result; #ifdef _WIN32 SYSTEM_INFO si; GetSystemInfo(&si); result = si.dwNumberOfProcessors; #else result = sysconf(_SC_NPROCESSORS_ONLN); #endif return ((result == -1) ? 1 : (unsigned)result); } bool arenas_tsd_extend(tsd_pool_t *tsd, unsigned len) { assert(len < POOLS_MAX); /* round up the new length to the nearest power of 2... */ size_t npools = 1ULL << (32 - __builtin_clz(len + 1)); /* ... 
but not less than */ if (npools < POOLS_MIN) npools = POOLS_MIN; unsigned *tseqno = base_malloc_fn(npools * sizeof (unsigned)); if (tseqno == NULL) return (true); if (tsd->seqno != NULL) memcpy(tseqno, tsd->seqno, tsd->npools * sizeof (unsigned)); memset(&tseqno[tsd->npools], 0, (npools - tsd->npools) * sizeof (unsigned)); arena_t **tarenas = base_malloc_fn(npools * sizeof (arena_t *)); if (tarenas == NULL) { base_free_fn(tseqno); return (true); } if (tsd->arenas != NULL) memcpy(tarenas, tsd->arenas, tsd->npools * sizeof (arena_t *)); memset(&tarenas[tsd->npools], 0, (npools - tsd->npools) * sizeof (arena_t *)); base_free_fn(tsd->seqno); tsd->seqno = tseqno; base_free_fn(tsd->arenas); tsd->arenas = tarenas; tsd->npools = npools; return (false); } void arenas_cleanup(void *arg) { unsigned i; pool_t *pool; tsd_pool_t *tsd = arg; malloc_mutex_lock(&pools_lock); for (i = 0; i < tsd->npools; i++) { pool = pools[i]; if (pool != NULL) { if (pool->seqno == tsd->seqno[i] && tsd->arenas[i] != NULL) { malloc_rwlock_wrlock(&pool->arenas_lock); tsd->arenas[i]->nthreads--; malloc_rwlock_unlock(&pool->arenas_lock); } } } base_free_fn(tsd->seqno); base_free_fn(tsd->arenas); tsd->npools = 0; malloc_mutex_unlock(&pools_lock); } JEMALLOC_ALWAYS_INLINE_C bool malloc_thread_init(void) { if (config_fill && opt_quarantine && base_malloc_fn == base_malloc_default) { /* create pool base and call quarantine_alloc_hook() inside */ return (malloc_init_base_pool()); } return (false); } JEMALLOC_ALWAYS_INLINE_C bool malloc_init(void) { if (malloc_initialized == false && malloc_init_hard()) return (true); return (false); } static bool malloc_init_base_pool(void) { malloc_mutex_lock(&pool_base_lock); if (base_pool_initialized) { /* * Another thread initialized the base pool before this one * acquired pools_lock. */ malloc_mutex_unlock(&pool_base_lock); return (false); } if (malloc_init()) { malloc_mutex_unlock(&pool_base_lock); return (true); } if (pool_new(&base_pool, 0)) { malloc_mutex_unlock(&pool_base_lock); return (true); } pools = base_calloc(&base_pool, sizeof(pool_t *), POOLS_MIN); if (pools == NULL) { malloc_mutex_unlock(&pool_base_lock); return (true); } pools[0] = &base_pool; pools[0]->seqno = ++pool_seqno; npools_cnt++; npools = POOLS_MIN; base_pool_initialized = true; malloc_mutex_unlock(&pool_base_lock); /* * TSD initialization can't be safely done as a side effect of * deallocation, because it is possible for a thread to do nothing but * deallocate its TLS data via free(), in which case writing to TLS * would cause write-after-free memory corruption. The quarantine * facility *only* gets used as a side effect of deallocation, so make * a best effort attempt at initializing its TSD by hooking all * allocation events. */ if (config_fill && opt_quarantine) quarantine_alloc_hook(); /* * In the JEMALLOC_LAZY_LOCK case we had to defer initializing the * arenas_lock until base pool initialization was complete. Deferral * is safe because there are no other threads yet. We will actually * recurse here, but since base_pool_initialized is set we will * drop out of the recursion in the check at the top of this function. 
*/ if (!isthreaded) { if (malloc_rwlock_init(&base_pool.arenas_lock)) return (true); } return (false); } static bool malloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p, char const **v_p, size_t *vlen_p) { bool accept; const char *opts = *opts_p; *k_p = opts; for (accept = false; accept == false;) { switch (*opts) { case 'A': case 'B': case 'C': case 'D': case 'E': case 'F': case 'G': case 'H': case 'I': case 'J': case 'K': case 'L': case 'M': case 'N': case 'O': case 'P': case 'Q': case 'R': case 'S': case 'T': case 'U': case 'V': case 'W': case 'X': case 'Y': case 'Z': case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': case 'g': case 'h': case 'i': case 'j': case 'k': case 'l': case 'm': case 'n': case 'o': case 'p': case 'q': case 'r': case 's': case 't': case 'u': case 'v': case 'w': case 'x': case 'y': case 'z': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case '_': opts++; break; case ':': opts++; *klen_p = (uintptr_t)opts - 1 - (uintptr_t)*k_p; *v_p = opts; accept = true; break; case '\0': if (opts != *opts_p) { malloc_write(": Conf string ends " "with key\n"); } return (true); default: malloc_write(": Malformed conf string\n"); return (true); } } for (accept = false; accept == false;) { switch (*opts) { case ',': opts++; /* * Look ahead one character here, because the next time * this function is called, it will assume that end of * input has been cleanly reached if no input remains, * but we have optimistically already consumed the * comma if one exists. */ if (*opts == '\0') { malloc_write(": Conf string ends " "with comma\n"); } *vlen_p = (uintptr_t)opts - 1 - (uintptr_t)*v_p; accept = true; break; case '\0': *vlen_p = (uintptr_t)opts - (uintptr_t)*v_p; accept = true; break; default: opts++; break; } } *opts_p = opts; return (false); } static void malloc_conf_error(const char *msg, const char *k, size_t klen, const char *v, size_t vlen) { malloc_printf(": %s: %.*s:%.*s\n", msg, (int)klen, k, (int)vlen, v); } static void malloc_conf_init(void) { unsigned i; char buf[JE_PATH_MAX + 1]; const char *opts, *k, *v; size_t klen, vlen; /* * Automatically configure valgrind before processing options. The * valgrind option remains in jemalloc 3.x for compatibility reasons. */ if (config_valgrind) { in_valgrind = (RUNNING_ON_VALGRIND != 0) ? true : false; if (config_fill && in_valgrind) { opt_junk = false; assert(opt_zero == false); opt_quarantine = JEMALLOC_VALGRIND_QUARANTINE_DEFAULT; opt_redzone = true; } if (config_tcache && in_valgrind) opt_tcache = false; } for (i = 0; i < 3; i++) { /* Get runtime configuration. */ switch (i) { case 0: if (je_malloc_conf != NULL) { /* * Use options that were compiled into the * program. */ opts = je_malloc_conf; } else { /* No configuration specified. */ buf[0] = '\0'; opts = buf; } break; case 1: { int linklen = 0; #ifndef _WIN32 int saved_errno = errno; const char *linkname = # ifdef JEMALLOC_PREFIX "/etc/"JEMALLOC_PREFIX"malloc.conf" # else "/etc/malloc.conf" # endif ; /* * Try to use the contents of the "/etc/malloc.conf" * symbolic link's name. */ linklen = readlink(linkname, buf, sizeof(buf) - 1); if (linklen == -1) { /* No configuration specified. 
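Note that it is the symlink's target text itself, not the contents of a file, that supplies the options here: a link created so that /etc/malloc.conf points at a name such as "lg_chunk:24,stats_print:true" (an illustrative pair) feeds that string straight into the option parser. A failing readlink() simply means this source contributes nothing, and parsing falls through to the environment variable on the next iteration.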
*/ linklen = 0; /* restore errno */ set_errno(saved_errno); } #endif buf[linklen] = '\0'; opts = buf; break; } case 2: { const char *envname = #ifdef JEMALLOC_PREFIX JEMALLOC_CPREFIX"MALLOC_CONF" #else "MALLOC_CONF" #endif ; if ((opts = getenv(envname)) != NULL) { /* * Do nothing; opts is already initialized to * the value of the MALLOC_CONF environment * variable. */ } else { /* No configuration specified. */ buf[0] = '\0'; opts = buf; } break; } default: not_reached(); buf[0] = '\0'; opts = buf; } while (*opts != '\0' && malloc_conf_next(&opts, &k, &klen, &v, &vlen) == false) { #define CONF_MATCH(n) \ (sizeof(n)-1 == klen && strncmp(n, k, klen) == 0) #define CONF_HANDLE_BOOL(o, n, cont) \ if (CONF_MATCH(n)) { \ if (strncmp("true", v, vlen) == 0 && \ vlen == sizeof("true")-1) \ o = true; \ else if (strncmp("false", v, vlen) == \ 0 && vlen == sizeof("false")-1) \ o = false; \ else { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } \ if (cont) \ continue; \ } #define CONF_HANDLE_SIZE_T(o, n, min, max, clip) \ if (CONF_MATCH(n)) { \ uintmax_t um; \ char *end; \ \ set_errno(0); \ um = malloc_strtoumax(v, &end, 0); \ if (get_errno() != 0 || (uintptr_t)end -\ (uintptr_t)v != vlen) { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } else if (clip) { \ if ((min) != 0 && um < (min)) \ o = min; \ else if (um > (max)) \ o = max; \ else \ o = um; \ } else { \ if (((min) != 0 && um < (min)) || \ um > (max)) { \ malloc_conf_error( \ "Out-of-range " \ "conf value", \ k, klen, v, vlen); \ } else \ o = um; \ } \ continue; \ } #define CONF_HANDLE_SSIZE_T(o, n, min, max) \ if (CONF_MATCH(n)) { \ long l; \ char *end; \ \ set_errno(0); \ l = strtol(v, &end, 0); \ if (get_errno() != 0 || (uintptr_t)end -\ (uintptr_t)v != vlen) { \ malloc_conf_error( \ "Invalid conf value", \ k, klen, v, vlen); \ } else if (l < (ssize_t)(min) || l > \ (ssize_t)(max)) { \ malloc_conf_error( \ "Out-of-range conf value", \ k, klen, v, vlen); \ } else \ o = l; \ continue; \ } #define CONF_HANDLE_CHAR_P(o, n, d) \ if (CONF_MATCH(n)) { \ size_t cpylen = (vlen <= \ sizeof(o)-1) ? vlen : \ sizeof(o)-1; \ strncpy(o, v, cpylen); \ o[cpylen] = '\0'; \ continue; \ } CONF_HANDLE_BOOL(opt_abort, "abort", true) /* * Chunks always require at least one header page, plus * one data page in the absence of redzones, or three * pages in the presence of redzones. In order to * simplify options processing, fix the limit based on * config_fill. */ CONF_HANDLE_SIZE_T(opt_lg_chunk, "lg_chunk", LG_PAGE + (config_fill ? 
2 : 1), (sizeof(size_t) << 3) - 1, true) if (strncmp("dss", k, klen) == 0) { int i; bool match = false; for (i = 0; i < dss_prec_limit; i++) { if (strncmp(dss_prec_names[i], v, vlen) == 0) { if (chunk_dss_prec_set(i)) { malloc_conf_error( "Error setting dss", k, klen, v, vlen); } else { opt_dss = dss_prec_names[i]; match = true; break; } } } if (match == false) { malloc_conf_error("Invalid conf value", k, klen, v, vlen); } continue; } CONF_HANDLE_SIZE_T(opt_narenas, "narenas", 1, SIZE_T_MAX, false) CONF_HANDLE_SSIZE_T(opt_lg_dirty_mult, "lg_dirty_mult", -1, (sizeof(size_t) << 3) - 1) CONF_HANDLE_BOOL(opt_stats_print, "stats_print", true) if (config_fill) { CONF_HANDLE_BOOL(opt_junk, "junk", true) CONF_HANDLE_SIZE_T(opt_quarantine, "quarantine", 0, SIZE_T_MAX, false) CONF_HANDLE_BOOL(opt_redzone, "redzone", true) CONF_HANDLE_BOOL(opt_zero, "zero", true) } if (config_utrace) { CONF_HANDLE_BOOL(opt_utrace, "utrace", true) } if (config_xmalloc) { CONF_HANDLE_BOOL(opt_xmalloc, "xmalloc", true) } if (config_tcache) { CONF_HANDLE_BOOL(opt_tcache, "tcache", !config_valgrind || !in_valgrind) if (CONF_MATCH("tcache")) { assert(config_valgrind && in_valgrind); if (opt_tcache) { opt_tcache = false; malloc_conf_error( "tcache cannot be enabled " "while running inside Valgrind", k, klen, v, vlen); } continue; } CONF_HANDLE_SSIZE_T(opt_lg_tcache_max, "lg_tcache_max", -1, (sizeof(size_t) << 3) - 1) } if (config_prof) { CONF_HANDLE_BOOL(opt_prof, "prof", true) CONF_HANDLE_CHAR_P(opt_prof_prefix, "prof_prefix", "jeprof") CONF_HANDLE_BOOL(opt_prof_active, "prof_active", true) CONF_HANDLE_SSIZE_T(opt_lg_prof_sample, "lg_prof_sample", 0, (sizeof(uint64_t) << 3) - 1) CONF_HANDLE_BOOL(opt_prof_accum, "prof_accum", true) CONF_HANDLE_SSIZE_T(opt_lg_prof_interval, "lg_prof_interval", -1, (sizeof(uint64_t) << 3) - 1) CONF_HANDLE_BOOL(opt_prof_gdump, "prof_gdump", true) CONF_HANDLE_BOOL(opt_prof_final, "prof_final", true) CONF_HANDLE_BOOL(opt_prof_leak, "prof_leak", true) } malloc_conf_error("Invalid conf pair", k, klen, v, vlen); #undef CONF_MATCH #undef CONF_HANDLE_BOOL #undef CONF_HANDLE_SIZE_T #undef CONF_HANDLE_SSIZE_T #undef CONF_HANDLE_CHAR_P } } } static bool malloc_init_hard(void) { malloc_mutex_lock(&init_lock); if (malloc_initialized || IS_INITIALIZER) { /* * Another thread initialized the allocator before this one * acquired init_lock, or this thread is the initializing * thread, and it is recursively allocating. */ malloc_mutex_unlock(&init_lock); return (false); } #ifdef JEMALLOC_THREADED_INIT if (malloc_initializer != NO_INITIALIZER && IS_INITIALIZER == false) { /* Busy-wait until the initializing thread completes. */ do { malloc_mutex_unlock(&init_lock); CPU_SPINWAIT; malloc_mutex_lock(&init_lock); } while (malloc_initialized == false); malloc_mutex_unlock(&init_lock); return (false); } #endif malloc_initializer = INITIALIZER; malloc_tsd_boot(); if (config_prof) prof_boot0(); malloc_conf_init(); if (opt_stats_print) { /* Print statistics at exit. */ if (atexit(stats_print_atexit) != 0) { malloc_write(": Error in atexit()\n"); if (opt_abort) abort(); } } pools_shared_data_initialized = false; if (base_malloc_fn == NULL && base_free_fn == NULL) { base_malloc_fn = base_malloc_default; base_free_fn = base_free_default; } if (chunk_global_boot()) { malloc_mutex_unlock(&init_lock); return (true); } if (ctl_boot()) { malloc_mutex_unlock(&init_lock); return (true); } if (config_prof) prof_boot1(); arena_params_boot(); /* Initialize allocation counters before any allocations can occur. 
*/ if (config_stats && thread_allocated_tsd_boot()) { malloc_mutex_unlock(&init_lock); return (true); } if (arenas_tsd_boot()) { malloc_mutex_unlock(&init_lock); return (true); } if (config_tcache && tcache_boot1()) { malloc_mutex_unlock(&init_lock); return (true); } if (config_fill && quarantine_boot()) { malloc_mutex_unlock(&init_lock); return (true); } if (config_prof && prof_boot2()) { malloc_mutex_unlock(&init_lock); return (true); } malloc_mutex_unlock(&init_lock); /**********************************************************************/ /* Recursive allocation may follow. */ ncpus = malloc_ncpus(); #if (!defined(JEMALLOC_MUTEX_INIT_CB) && !defined(JEMALLOC_ZONE) \ && !defined(_WIN32) && !defined(__native_client__)) /* LinuxThreads's pthread_atfork() allocates. */ if (pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent, jemalloc_postfork_child) != 0) { malloc_write(": Error in pthread_atfork()\n"); if (opt_abort) abort(); } #endif /* Done recursively allocating. */ /**********************************************************************/ malloc_mutex_lock(&init_lock); if (mutex_boot()) { malloc_mutex_unlock(&init_lock); return (true); } if (opt_narenas == 0) { /* * For SMP systems, create more than one arena per CPU by * default. */ if (ncpus > 1) opt_narenas = ncpus << 2; else opt_narenas = 1; } malloc_initialized = true; malloc_mutex_unlock(&init_lock); return (false); } /* * End initialization functions. */ /******************************************************************************/ /* * Begin malloc(3)-compatible functions. */ static void * imalloc_prof_sample(size_t usize, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = imalloc(SMALL_MAXCLASS+1); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = imalloc(usize); return (p); } JEMALLOC_ALWAYS_INLINE_C void * imalloc_prof(size_t usize) { void *p; prof_thr_cnt_t *cnt; PROF_ALLOC_PREP(usize, cnt); if ((uintptr_t)cnt != (uintptr_t)1U) p = imalloc_prof_sample(usize, cnt); else p = imalloc(usize); if (p == NULL) return (NULL); prof_malloc(p, usize, cnt); return (p); } JEMALLOC_ALWAYS_INLINE_C void * imalloc_body(size_t size, size_t *usize) { if (malloc_init_base_pool()) return (NULL); if (config_prof && opt_prof) { *usize = s2u(size); return (imalloc_prof(*usize)); } if (config_stats || (config_valgrind && in_valgrind)) *usize = s2u(size); return (imalloc(size)); } void * je_malloc(size_t size) { void *ret; size_t usize JEMALLOC_CC_SILENCE_INIT(0); if (size == 0) size = 1; ret = imalloc_body(size, &usize); if (ret == NULL) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in malloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && ret != NULL) { assert(usize == isalloc(ret, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, size, ret); JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, false); return (ret); } static void * imemalign_prof_sample(size_t alignment, size_t usize, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { assert(sa2u(SMALL_MAXCLASS+1, alignment) != 0); p = ipalloc(sa2u(SMALL_MAXCLASS+1, alignment), alignment, false); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = ipalloc(usize, alignment, false); return (p); } JEMALLOC_ALWAYS_INLINE_C void * imemalign_prof(size_t alignment, size_t usize, prof_thr_cnt_t *cnt) { void *p; if ((uintptr_t)cnt != (uintptr_t)1U) p = imemalign_prof_sample(alignment, usize, cnt); else 
p = ipalloc(usize, alignment, false); if (p == NULL) return (NULL); prof_malloc(p, usize, cnt); return (p); } JEMALLOC_ATTR(nonnull(1)) static int imemalign(void **memptr, size_t alignment, size_t size, size_t min_alignment) { int ret; size_t usize; void *result; assert(min_alignment != 0); if (malloc_init_base_pool()) { result = NULL; goto label_oom; } else { if (size == 0) size = 1; /* Make sure that alignment is a large enough power of 2. */ if (((alignment - 1) & alignment) != 0 || (alignment < min_alignment)) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error allocating " "aligned memory: invalid alignment\n"); abort(); } result = NULL; ret = EINVAL; goto label_return; } usize = sa2u(size, alignment); if (usize == 0) { result = NULL; goto label_oom; } if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; PROF_ALLOC_PREP(usize, cnt); result = imemalign_prof(alignment, usize, cnt); } else result = ipalloc(usize, alignment, false); if (result == NULL) goto label_oom; } *memptr = result; ret = 0; label_return: if (config_stats && result != NULL) { assert(usize == isalloc(result, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, size, result); return (ret); label_oom: assert(result == NULL); if (config_xmalloc && opt_xmalloc) { malloc_write(": Error allocating aligned memory: " "out of memory\n"); abort(); } ret = ENOMEM; goto label_return; } int je_posix_memalign(void **memptr, size_t alignment, size_t size) { int ret = imemalign(memptr, alignment, size, sizeof(void *)); JEMALLOC_VALGRIND_MALLOC(ret == 0, *memptr, isalloc(*memptr, config_prof), false); return (ret); } void * je_aligned_alloc(size_t alignment, size_t size) { void *ret; int err; if ((err = imemalign(&ret, alignment, size, 1)) != 0) { ret = NULL; set_errno(err); } JEMALLOC_VALGRIND_MALLOC(err == 0, ret, isalloc(ret, config_prof), false); return (ret); } static void * icalloc_prof_sample(size_t usize, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = icalloc(SMALL_MAXCLASS+1); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = icalloc(usize); return (p); } JEMALLOC_ALWAYS_INLINE_C void * icalloc_prof(size_t usize, prof_thr_cnt_t *cnt) { void *p; if ((uintptr_t)cnt != (uintptr_t)1U) p = icalloc_prof_sample(usize, cnt); else p = icalloc(usize); if (p == NULL) return (NULL); prof_malloc(p, usize, cnt); return (p); } void * je_calloc(size_t num, size_t size) { void *ret; size_t num_size; size_t usize JEMALLOC_CC_SILENCE_INIT(0); if (malloc_init_base_pool()) { num_size = 0; ret = NULL; goto label_return; } num_size = num * size; if (num_size == 0) { if (num == 0 || size == 0) num_size = 1; else { ret = NULL; goto label_return; } /* * Try to avoid division here. We know that it isn't possible to * overflow during multiplication if neither operand uses any of the * most significant half of the bits in a size_t. */ } else if (((num | size) & (SIZE_T_MAX << (sizeof(size_t) << 2))) && (num_size / size != num)) { /* size_t overflow. 
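The guard above avoids a division in the common case: sizeof(size_t) << 2 is half the width of size_t in bits (32 on an LP64 target), so the mask keeps only the upper half of the bits. If neither num nor size has any of those high bits set, both values are below 2^32 and their 64-bit product cannot wrap, since (2^32 - 1)^2 = 2^64 - 2^33 + 1 < 2^64; only when one operand has high bits set is the num_size / size != num check actually evaluated.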
*/ ret = NULL; goto label_return; } if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; usize = s2u(num_size); PROF_ALLOC_PREP(usize, cnt); ret = icalloc_prof(usize, cnt); } else { if (config_stats || (config_valgrind && in_valgrind)) usize = s2u(num_size); ret = icalloc(num_size); } label_return: if (ret == NULL) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in calloc(): out of " "memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && ret != NULL) { assert(usize == isalloc(ret, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, num_size, ret); JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, true); return (ret); } static void * irealloc_prof_sample(void *oldptr, size_t usize, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = iralloc(oldptr, SMALL_MAXCLASS+1, 0, 0, false); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = iralloc(oldptr, usize, 0, 0, false); return (p); } JEMALLOC_ALWAYS_INLINE_C void * irealloc_prof(void *oldptr, size_t old_usize, size_t usize, prof_thr_cnt_t *cnt) { void *p; prof_ctx_t *old_ctx; old_ctx = prof_ctx_get(oldptr); if ((uintptr_t)cnt != (uintptr_t)1U) p = irealloc_prof_sample(oldptr, usize, cnt); else p = iralloc(oldptr, usize, 0, 0, false); if (p == NULL) return (NULL); prof_realloc(p, usize, cnt, old_usize, old_ctx); return (p); } JEMALLOC_INLINE_C void ifree(void *ptr) { size_t usize; UNUSED size_t rzsize JEMALLOC_CC_SILENCE_INIT(0); assert(ptr != NULL); assert(malloc_initialized || IS_INITIALIZER); if (config_prof && opt_prof) { usize = isalloc(ptr, config_prof); prof_free(ptr, usize); } else if (config_stats || config_valgrind) usize = isalloc(ptr, config_prof); if (config_stats) thread_allocated_tsd_get()->deallocated += usize; if (config_valgrind && in_valgrind) rzsize = p2rz(ptr); iqalloc(ptr); JEMALLOC_VALGRIND_FREE(ptr, rzsize); } void * je_realloc(void *ptr, size_t size) { void *ret; size_t usize JEMALLOC_CC_SILENCE_INIT(0); size_t old_usize = 0; UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); if (size == 0) { if (ptr != NULL) { /* realloc(ptr, 0) is equivalent to free(ptr). */ UTRACE(ptr, 0, 0); ifree(ptr); return (NULL); } size = 1; } if (ptr != NULL) { assert(malloc_initialized || IS_INITIALIZER); if (malloc_thread_init()) return (NULL); if ((config_prof && opt_prof) || config_stats || (config_valgrind && in_valgrind)) old_usize = isalloc(ptr, config_prof); if (config_valgrind && in_valgrind) old_rzsize = config_prof ? p2rz(ptr) : u2rz(old_usize); if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; usize = s2u(size); PROF_ALLOC_PREP(usize, cnt); ret = irealloc_prof(ptr, old_usize, usize, cnt); } else { if (config_stats || (config_valgrind && in_valgrind)) usize = s2u(size); ret = iralloc(ptr, size, 0, 0, false); } } else { /* realloc(NULL, size) is equivalent to malloc(size). */ ret = imalloc_body(size, &usize); } if (ret == NULL) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in realloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && ret != NULL) { thread_allocated_t *ta; assert(usize == isalloc(ret, config_prof)); ta = thread_allocated_tsd_get(); ta->allocated += usize; ta->deallocated += old_usize; } UTRACE(ptr, size, ret); JEMALLOC_VALGRIND_REALLOC(true, ret, usize, true, ptr, old_usize, old_rzsize, true, false); return (ret); } void je_free(void *ptr) { UTRACE(ptr, 0, 0); if (ptr != NULL) ifree(ptr); } /* * End malloc(3)-compatible functions. 
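 *
 * Taken together these entry points preserve the usual libc contract.
 * A rough usage sketch (illustration only):
 *
 *	void *p = je_malloc(100);	returns NULL and sets errno to
 *					ENOMEM on failure
 *	p = je_realloc(p, 200);		je_realloc(NULL, n) acts as malloc
 *	je_realloc(p, 0);		acts as free and returns NULL
 *
 * (With opt_xmalloc enabled the failure paths above abort instead.)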
*/ /******************************************************************************/ /* * Begin non-standard override functions. */ #ifdef JEMALLOC_OVERRIDE_MEMALIGN void * je_memalign(size_t alignment, size_t size) { void *ret JEMALLOC_CC_SILENCE_INIT(NULL); imemalign(&ret, alignment, size, 1); JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, size, false); return (ret); } #endif #ifdef JEMALLOC_OVERRIDE_VALLOC void * je_valloc(size_t size) { void *ret JEMALLOC_CC_SILENCE_INIT(NULL); imemalign(&ret, PAGE, size, 1); JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, size, false); return (ret); } #endif /* * is_malloc(je_malloc) is some macro magic to detect if jemalloc_defs.h has * #define je_malloc malloc */ #define malloc_is_malloc 1 #define is_malloc_(a) malloc_is_ ## a #define is_malloc(a) is_malloc_(a) #if ((is_malloc(je_malloc) == 1) && defined(__GLIBC__) && !defined(__UCLIBC__)) /* * glibc provides the RTLD_DEEPBIND flag for dlopen which can make it possible * to inconsistently reference libc's malloc(3)-compatible functions * (https://bugzilla.mozilla.org/show_bug.cgi?id=493541). * * These definitions interpose hooks in glibc. The functions are actually * passed an extra argument for the caller return address, which will be * ignored. */ JEMALLOC_EXPORT void (*__free_hook)(void *ptr) = je_free; JEMALLOC_EXPORT void *(*__malloc_hook)(size_t size) = je_malloc; JEMALLOC_EXPORT void *(*__realloc_hook)(void *ptr, size_t size) = je_realloc; JEMALLOC_EXPORT void *(*__memalign_hook)(size_t alignment, size_t size) = je_memalign; #endif /* * End non-standard override functions. */ /******************************************************************************/ /* * Begin non-standard functions. */ static void * base_malloc_default(size_t size) { return base_alloc(&base_pool, size); } static void base_free_default(void *ptr) { } static void je_base_pool_destroy(void) { if (base_pool_initialized == false) return; #ifndef JEMALLOC_MUTEX_INIT_CB pool_destroy(&base_pool); malloc_mutex_destroy(&pool_base_lock); malloc_mutex_destroy(&pools_lock); #endif } bool pools_shared_data_create(void) { if (malloc_init()) return (true); if (pools_shared_data_initialized) return (false); if (config_tcache && tcache_boot0()) return (true); pools_shared_data_initialized = true; return (false); } void pools_shared_data_destroy(void) { /* Only destroy when no pools exist */ if (npools == 0) { pools_shared_data_initialized = false; base_free_fn(tcache_bin_info); tcache_bin_info = NULL; } } #ifdef JEMALLOC_VALGRIND /* * Iterates through all the chunks/allocations on the heap and marks them * as defined/undefined. */ static extent_node_t * vg_tree_binary_iter_cb(extent_tree_t *tree, extent_node_t *node, void *arg) { assert(node->size != 0); int noaccess = *(int *)arg; if (noaccess) { JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(node->addr, node->size); } else { /* assume memory is defined */ JEMALLOC_VALGRIND_MALLOC(1, node->addr, node->size, 1); JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(node->addr, node->size); } return (NULL); } /* * Iterates through all the chunks/allocations on the heap and marks them * as defined/undefined. 
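 *
 * These callbacks are driven from vg_pool_init() below: nodes coming from
 * the free-chunk tree are passed noaccess == 1 and get marked NOACCESS,
 * while live huge allocations are passed noaccess == 0, re-registered as
 * malloc-like blocks and marked DEFINED.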
*/ static arena_chunk_map_t * vg_tree_chunks_avail_iter_cb(arena_avail_tree_t *tree, arena_chunk_map_t *map, void *arg) { int noaccess = *(int *)arg; JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(map, sizeof(*map)); assert((map->bits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) == 0); assert((map->bits & ~PAGE_MASK) != 0); size_t chunk_size = (map->bits & ~PAGE_MASK); arena_chunk_t *run_chunk = CHUNK_ADDR2BASE(map); JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(run_chunk, sizeof(*run_chunk)); size_t pageind = arena_mapelm_to_pageind(map); void *chunk_addr = (void *)((uintptr_t)run_chunk + (pageind << LG_PAGE)); if (noaccess) { JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk_addr, chunk_size); } else { JEMALLOC_VALGRIND_MALLOC(1, chunk_addr, chunk_size, 1); JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(chunk_addr, chunk_size); } return (NULL); } /* * Reinitializes memcheck state if run under Valgrind. * Iterates through all the chunks/allocations on the heap and marks them * as defined/undefined. */ static int vg_pool_init(pool_t *pool, size_t size) { /* * There is no need to grab any locks here, as the pool is not * being used yet. */ /* mark base_alloc used space as defined */ char *base_start = (char *)CACHELINE_CEILING((uintptr_t)pool + sizeof(pool_t)); char *base_end = pool->base_next_addr; JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(base_start, base_end - base_start); JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(base_end, (char *)pool->base_past_addr - base_end); /* pointer to the address of chunks, align the address to chunksize */ void *usable_addr = (void *)CHUNK_CEILING((uintptr_t)pool->base_next_addr); /* usable chunks space, must be multiple of chunksize */ size_t usable_size = (size - (uintptr_t)((char *)usable_addr - (char *)pool)) & ~chunksize_mask; /* initially mark the entire heap as defined */ JEMALLOC_VALGRIND_MAKE_MEM_DEFINED( usable_addr, usable_size); /* iterate through unused (available) chunks - mark as NOACCESS */ int noaccess = 1; extent_tree_szad_iter(&pool->chunks_szad_mmap, NULL, vg_tree_binary_iter_cb, &noaccess); /* iterate through huge allocations - mark as MALLOCLIKE */ noaccess = 0; extent_tree_ad_iter(&pool->huge, NULL, vg_tree_binary_iter_cb, &noaccess); /* iterate through arenas/runs */ for (unsigned i = 0; i < pool->narenas_total; ++i) { arena_t *arena = pool->arenas[i]; if (arena != NULL) { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(arena, sizeof(*arena)); /* bins */ for (unsigned b = 0; b < NBINS; b++) { arena_bin_t *bin = &arena->bins[b]; if (bin->runcur != NULL) JEMALLOC_VALGRIND_MAKE_MEM_DEFINED( bin->runcur, sizeof(*(bin->runcur))); } noaccess = 1; /* XXX */ arena_runs_avail_tree_iter(arena, vg_tree_chunks_avail_iter_cb, &noaccess); arena_chunk_t *spare = arena->spare; if (spare != NULL) { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED( spare, sizeof(*spare)); } } } return 1; } #endif /* JEMALLOC_VALGRIND */ /* * Creates a new pool. * Initializes the heap and all the allocator metadata. 
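 *
 * A rough sketch of how the pool API defined in this file fits together
 * (illustration only; 'addr', 'size', 'extra_addr' and 'extra_size'
 * stand for caller-provided memory, e.g. a memory-mapped region):
 *
 *	pool_t *pool = je_pool_create(addr, size, 0, 1);    creates empty pool
 *	void *p = je_pool_malloc(pool, 100);
 *	je_pool_free(pool, p);
 *	je_pool_extend(pool, extra_addr, extra_size, 0);     optional growth
 *	je_pool_delete(pool);
 *
 * Passing empty == 0 to je_pool_create() instead reopens memory that
 * already contains pool metadata (see pool_open() below).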
*/ static pool_t * pool_create_empty(pool_t *pool, size_t size, int zeroed, unsigned pool_id) { size_t result; if (!zeroed) memset(pool, 0, sizeof (pool_t)); /* * preinit base allocator in unused space, align the address * to the cache line */ pool->base_next_addr = (void *)CACHELINE_CEILING((uintptr_t)pool + sizeof (pool_t)); pool->base_past_addr = (void *)((uintptr_t)pool + size); /* prepare pool and internal structures */ if (pool_new(pool, pool_id)) { assert(pools[pool_id] == NULL); pools_shared_data_destroy(); return NULL; } /* * preallocate the chunk tree nodes for the maximum possible * number of chunks */ result = base_node_prealloc(pool, size/chunksize); assert(result == 0); assert(pools[pool_id] == NULL); pool->seqno = pool_seqno++; pools[pool_id] = pool; npools_cnt++; pool->memory_range_list = base_alloc(pool, sizeof(*pool->memory_range_list)); /* pointer to the address of chunks, align the address to chunksize */ void *usable_addr = (void *)CHUNK_CEILING((uintptr_t)pool->base_next_addr); /* reduce end of base allocator up to chunks start */ pool->base_past_addr = usable_addr; /* usable chunks space, must be multiple of chunksize */ size_t usable_size = (size - (uintptr_t)((char *)usable_addr - (char *)pool)) & ~chunksize_mask; assert(usable_size > 0); malloc_mutex_lock(&pool->memory_range_mtx); pool->memory_range_list->next = NULL; pool->memory_range_list->addr = (uintptr_t)pool; pool->memory_range_list->addr_end = (uintptr_t)pool + size; pool->memory_range_list->usable_addr = (uintptr_t)usable_addr; pool->memory_range_list->usable_addr_end = (uintptr_t)usable_addr + usable_size; malloc_mutex_unlock(&pool->memory_range_mtx); /* register the usable pool space as a single big chunk */ chunk_record(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, usable_addr, usable_size, zeroed); pool->ctl_initialized = false; return pool; } /* * Opens an existing pool (i.e. pmemcto pool). * Only the run-time state needs to be re-initialized. */ static pool_t * pool_open(pool_t *pool, size_t size, unsigned pool_id) { JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(pool, sizeof(pool_t)); /* prepare pool's runtime state */ if (pool_runtime_init(pool, pool_id)) { malloc_mutex_unlock(&pools_lock); return NULL; } assert(pools[pool_id] == NULL); pool->seqno = pool_seqno++; pools[pool_id] = pool; npools_cnt++; return pool; } pool_t * je_pool_create(void *addr, size_t size, int zeroed, int empty) { if (malloc_init()) return (NULL); if (addr == NULL || size < POOL_MINIMAL_SIZE) return (NULL); pool_t *pool = (pool_t *)addr; unsigned pool_id; /* Preinit base pool if not exist, before lock pool_lock */ if (malloc_init_base_pool()) return (NULL); malloc_mutex_lock(&pools_lock); assert(pools != NULL); assert(npools > 0); /* * Find unused pool ID. * Pool 0 is a special pool with reserved ID. Pool is created during * malloc_init_pool_base() and allocates memory from RAM. 
*/ for (pool_id = 1; pool_id < npools; ++pool_id) { if (pools[pool_id] == NULL) break; } if (pool_id == npools && npools < POOLS_MAX) { size_t npools_new = npools * 2; pool_t **pools_new = base_alloc(&base_pool, npools_new * sizeof (pool_t *)); if (pools_new == NULL) goto err; memcpy(pools_new, pools, npools * sizeof (pool_t *)); memset(&pools_new[npools], 0, (npools_new - npools) * sizeof (pool_t *)); pools = pools_new; npools = npools_new; } if (pool_id == POOLS_MAX) { malloc_printf(": Error in pool_create(): " "exceeded max number of pools (%u)\n", POOLS_MAX); goto err; } pool_t *ret; if (empty) { ret = pool_create_empty(pool, size, zeroed, pool_id); } else { ret = pool_open(pool, size, pool_id); } malloc_mutex_unlock(&pools_lock); #ifdef JEMALLOC_VALGRIND /* must be done with unlocked 'pools_lock' */ if (config_valgrind && !empty) vg_pool_init(pool, size); #endif return ret; err: malloc_mutex_unlock(&pools_lock); return (NULL); } int je_pool_delete(pool_t *pool) { unsigned pool_id = pool->pool_id; /* Remove pool from global array */ malloc_mutex_lock(&pools_lock); if ((pool_id == 0) || (pool_id >= npools) || (pools[pool_id] != pool)) { malloc_mutex_unlock(&pools_lock); malloc_printf(": Error in pool_delete(): " "invalid pool_id (%u)\n", pool_id); return -1; } pool_destroy(pool); pools[pool_id] = NULL; npools_cnt--; pools_shared_data_destroy(); malloc_mutex_unlock(&pools_lock); return 0; } static int check_is_unzeroed(void *ptr, size_t size) { size_t i; size_t *p = (size_t *)ptr; size /= sizeof(size_t); for (i = 0; i < size; i++) { if (p[i]) return 1; } return 0; } static extent_node_t * check_tree_binary_iter_cb(extent_tree_t *tree, extent_node_t *node, void *arg) { check_data_cb_t *arg_cb = arg; if (node->size == 0) { arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "chunk 0x%p size is zero\n", node); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } arg_cb->size += node->size; if (node->zeroed && check_is_unzeroed(node->addr, node->size)) { arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "chunk 0x%p, is marked as zeroed, but is dirty\n", node->addr); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } /* check chunks address is inside pool memory */ pool_memory_range_node_t *list = arg_cb->list; uintptr_t addr = (uintptr_t)node->addr; uintptr_t addr_end = (uintptr_t)node->addr + node->size; while (list != NULL) { if ((list->usable_addr <= addr) && (addr < list->usable_addr_end) && (list->usable_addr < addr_end) && (addr_end <= list->usable_addr_end)) { /* return NULL to continue iterations of tree */ return (NULL); } list = list->next; } arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "incorrect address chunk 0x%p, out of memory pool\n", node->addr); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } static arena_chunk_map_t * check_tree_chunks_avail_iter_cb(arena_avail_tree_t *tree, arena_chunk_map_t *map, void *arg) { check_data_cb_t *arg_cb = arg; if ((map->bits & (CHUNK_MAP_LARGE|CHUNK_MAP_ALLOCATED)) != 0) { arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "flags in map->bits %zu are incorrect\n", map->bits); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } if ((map->bits & ~PAGE_MASK) == 0) { arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "chunk_map 0x%p size is zero\n", map); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } size_t 
chunk_size = (map->bits & ~PAGE_MASK); arg_cb->size += chunk_size; arena_chunk_t *run_chunk = CHUNK_ADDR2BASE(map); size_t pageind = arena_mapelm_to_pageind(map); void *chunk_addr = (void *)((uintptr_t)run_chunk + (pageind << LG_PAGE)); if (((map->bits & (CHUNK_MAP_UNZEROED | CHUNK_MAP_DIRTY)) == 0) && check_is_unzeroed(chunk_addr, chunk_size)) { arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "chunk_map 0x%p, is marked as zeroed, but is dirty\n", map); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } /* check chunks address is inside pool memory */ pool_memory_range_node_t *list = arg_cb->list; uintptr_t addr = (uintptr_t)chunk_addr; uintptr_t addr_end = (uintptr_t)chunk_addr + chunk_size; while (list != NULL) { if ((list->usable_addr <= addr) && (addr < list->usable_addr_end) && (list->usable_addr < addr_end) && (addr_end <= list->usable_addr_end)) { /* return NULL to continue iterations of tree */ return (NULL); } list = list->next; } arg_cb->error += 1; malloc_printf(": Error in pool_check(): " "incorrect address chunk_map 0x%p, out of memory pool\n", chunk_addr); /* returns value other than NULL to break iteration */ return (void*)(UINTPTR_MAX); } int je_pool_check(pool_t *pool) { size_t total_size = 0; unsigned i; pool_memory_range_node_t *node; malloc_mutex_lock(&pools_lock); if ((pool->pool_id == 0) || (pool->pool_id >= npools)) { malloc_write(": Error in pool_check(): " "invalid pool id\n"); malloc_mutex_unlock(&pools_lock); return -1; } if (pools[pool->pool_id] != pool) { malloc_write(": Error in pool_check(): " "invalid pool handle, probably pool was deleted\n"); malloc_mutex_unlock(&pools_lock); return -1; } malloc_mutex_lock(&pool->memory_range_mtx); /* check memory regions defined correctly */ node = pool->memory_range_list; while (node != NULL) { size_t node_size = node->usable_addr_end - node->usable_addr; total_size += node_size; if ((node->addr > node->usable_addr) || (node->addr_end < node->usable_addr_end) || (node->usable_addr >= node->usable_addr_end)) { malloc_write(": Error in pool_check(): " "corrupted pool memory\n"); malloc_mutex_unlock(&pool->memory_range_mtx); malloc_mutex_unlock(&pools_lock); return 0; } /* for the purpose of further checks we need to mark it as defined */ JEMALLOC_VALGRIND_MAKE_MEM_DEFINED((void *)node->usable_addr, node_size); node = node->next; } /* check memory collision with other pools */ for (i = 1; i < npools; i++) { pool_t *pool_cmp = pools[i]; if (pool_cmp != NULL && i != pool->pool_id) { node = pool->memory_range_list; while (node != NULL) { pool_memory_range_node_t *node2 = pool_cmp->memory_range_list; while (node2 != NULL) { if ((node->addr <= node2->addr && node2->addr < node->addr_end) || (node2->addr <= node->addr && node->addr < node2->addr_end)) { malloc_write(": Error in pool_check(): " "pool uses the same as another pool\n"); malloc_mutex_unlock(&pool->memory_range_mtx); malloc_mutex_unlock(&pools_lock); return 0; } node2 = node2->next; } node = node->next; } } } /* check the addresses of the chunks are inside memory region */ check_data_cb_t arg_cb; arg_cb.list = pool->memory_range_list; arg_cb.size = 0; arg_cb.error = 0; malloc_mutex_lock(&pool->chunks_mtx); malloc_rwlock_wrlock(&pool->arenas_lock); extent_tree_szad_iter(&pool->chunks_szad_mmap, NULL, check_tree_binary_iter_cb, &arg_cb); for (i = 0; i < pool->narenas_total && arg_cb.error == 0; ++i) { arena_t *arena = pool->arenas[i]; if (arena != NULL) { malloc_mutex_lock(&arena->lock); arena_runs_avail_tree_iter(arena, 
check_tree_chunks_avail_iter_cb, &arg_cb); arena_chunk_t *spare = arena->spare; if (spare != NULL) { size_t spare_size = arena_mapbits_unallocated_size_get(spare, map_bias); arg_cb.size += spare_size; /* check that spare is zeroed */ if ((arena_mapbits_unzeroed_get(spare, map_bias) == 0) && check_is_unzeroed( (void *)((uintptr_t)spare + (map_bias << LG_PAGE)), spare_size)) { arg_cb.error += 1; malloc_printf(": Error in pool_check(): " "spare 0x%p, is marked as zeroed, but is dirty\n", spare); } } malloc_mutex_unlock(&arena->lock); } } malloc_rwlock_unlock(&pool->arenas_lock); malloc_mutex_unlock(&pool->chunks_mtx); malloc_mutex_unlock(&pool->memory_range_mtx); malloc_mutex_unlock(&pools_lock); if (arg_cb.error != 0) { return 0; } if (total_size < arg_cb.size) { malloc_printf(": Error in pool_check(): total size of all " "chunks: %zu is greater than associated memory range size: %zu\n", arg_cb.size, total_size); return 0; } return 1; } /* * add more memory to a pool */ size_t je_pool_extend(pool_t *pool, void *addr, size_t size, int zeroed) { char *usable_addr = addr; size_t nodes_number = size/chunksize; if (size < POOL_MINIMAL_SIZE) return 0; /* preallocate the chunk tree nodes for the max possible number of chunks */ nodes_number = base_node_prealloc(pool, nodes_number); pool_memory_range_node_t *node = base_alloc(pool, sizeof (*pool->memory_range_list)); if (nodes_number > 0 || node == NULL) { /* * If base allocation using existing chunks fails, then use the new * chunk as a source for further base allocations. */ malloc_mutex_lock(&pool->base_mtx); /* preinit base allocator in unused space */ pool->base_next_addr = (void *)CACHELINE_CEILING((uintptr_t)addr); pool->base_past_addr = (void *)((uintptr_t)addr + size); malloc_mutex_unlock(&pool->base_mtx); if (nodes_number > 0) nodes_number = base_node_prealloc(pool, nodes_number); assert(nodes_number == 0); if (node == NULL) node = base_alloc(pool, sizeof (*pool->memory_range_list)); assert(node != NULL); /* pointer to the address of chunks, align the address to chunksize */ usable_addr = (void *)CHUNK_CEILING((uintptr_t)pool->base_next_addr); /* reduce end of base allocator up to chunks */ pool->base_past_addr = usable_addr; } usable_addr = (void *)CHUNK_CEILING((uintptr_t)usable_addr); size_t usable_size = (size - (uintptr_t)(usable_addr - (char *)addr)) & ~chunksize_mask; assert(usable_size > 0); node->addr = (uintptr_t)addr; node->addr_end = (uintptr_t)addr + size; node->usable_addr = (uintptr_t)usable_addr; node->usable_addr_end = (uintptr_t)usable_addr + usable_size; malloc_mutex_lock(&pool->memory_range_mtx); node->next = pool->memory_range_list; pool->memory_range_list = node; chunk_record(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, usable_addr, usable_size, zeroed); malloc_mutex_unlock(&pool->memory_range_mtx); return usable_size; } static void * pool_ialloc_prof_sample(pool_t *pool, size_t usize, prof_thr_cnt_t *cnt, void *(*ialloc)(pool_t *, size_t)) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = ialloc(pool, SMALL_MAXCLASS+1); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = ialloc(pool, usize); return (p); } JEMALLOC_ALWAYS_INLINE_C void * pool_ialloc_prof(pool_t *pool, size_t usize, void *(*ialloc)(pool_t *, size_t)) { void *p; prof_thr_cnt_t *cnt; PROF_ALLOC_PREP(usize, cnt); if ((uintptr_t)cnt != (uintptr_t)1U) p = pool_ialloc_prof_sample(pool, usize, cnt, ialloc); else p = ialloc(pool, usize); if (p == NULL) return (NULL); prof_malloc(p, usize, cnt); return 
(p); } JEMALLOC_ALWAYS_INLINE_C void * pool_imalloc_body(pool_t *pool, size_t size, size_t *usize) { if (malloc_init()) return (NULL); if (config_prof && opt_prof) { *usize = s2u(size); return (pool_ialloc_prof(pool, *usize, pool_imalloc)); } if (config_stats || (config_valgrind && in_valgrind)) *usize = s2u(size); return (pool_imalloc(pool, size)); } void * je_pool_malloc(pool_t *pool, size_t size) { void *ret; size_t usize JEMALLOC_CC_SILENCE_INIT(0); if (size == 0) size = 1; ret = pool_imalloc_body(pool, size, &usize); if (ret == NULL) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in pool_malloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && ret != NULL) { assert(usize == isalloc(ret, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, size, ret); JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, false); return (ret); } void * je_pool_calloc(pool_t *pool, size_t num, size_t size) { void *ret; size_t usize JEMALLOC_CC_SILENCE_INIT(0); size_t num_size; num_size = num * size; if (num_size == 0) { if (num == 0 || size == 0) num_size = 1; else { ret = NULL; goto label_return; } } else if (((num | size) & (SIZE_T_MAX << (sizeof(size_t) << 2))) && (num_size / size != num)) { ret = NULL; goto label_return; } if (config_prof && opt_prof) { usize = s2u(num_size); ret = pool_ialloc_prof(pool, usize, pool_icalloc); } else { if (config_stats || (config_valgrind && in_valgrind)) usize = s2u(num_size); ret = pool_icalloc(pool, num_size); } label_return: if (ret == NULL) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in pool_calloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && ret != NULL) { assert(usize == isalloc(ret, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, num_size, ret); JEMALLOC_VALGRIND_MALLOC(ret != NULL, ret, usize, true); return (ret); } static void * pool_irealloc_prof_sample(pool_t *pool, void *oldptr, size_t usize, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = pool_iralloc(pool, oldptr, SMALL_MAXCLASS+1, 0, 0, false); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = pool_iralloc(pool, oldptr, usize, 0, 0, false); return (p); } JEMALLOC_ALWAYS_INLINE_C void * pool_irealloc_prof(pool_t *pool, void *oldptr, size_t old_usize, size_t usize, prof_thr_cnt_t *cnt) { void *p; prof_ctx_t *old_ctx; old_ctx = prof_ctx_get(oldptr); if ((uintptr_t)cnt != (uintptr_t)1U) p = pool_irealloc_prof_sample(pool, oldptr, usize, cnt); else p = pool_iralloc(pool, oldptr, usize, 0, 0, false); if (p == NULL) return (NULL); prof_realloc(p, usize, cnt, old_usize, old_ctx); return (p); } JEMALLOC_INLINE_C void pool_ifree(pool_t *pool, void *ptr) { size_t usize; UNUSED size_t rzsize JEMALLOC_CC_SILENCE_INIT(0); arena_chunk_t *chunk; assert(ptr != NULL); assert(malloc_initialized || IS_INITIALIZER); if (config_prof && opt_prof) { usize = isalloc(ptr, config_prof); prof_free(ptr, usize); } else if (config_stats || config_valgrind) usize = isalloc(ptr, config_prof); if (config_stats) thread_allocated_tsd_get()->deallocated += usize; if (config_valgrind && in_valgrind) rzsize = p2rz(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) arena_dalloc(chunk, ptr, true); else huge_dalloc(pool, ptr); JEMALLOC_VALGRIND_FREE(ptr, rzsize); } void * je_pool_ralloc(pool_t *pool, void *ptr, size_t size) { void *ret; size_t usize JEMALLOC_CC_SILENCE_INIT(0); size_t old_usize = 0; UNUSED size_t 
old_rzsize JEMALLOC_CC_SILENCE_INIT(0); if (size == 0) { if (ptr != NULL) { /* realloc(ptr, 0) is equivalent to free(ptr). */ UTRACE(ptr, 0, 0); pool_ifree(pool, ptr); return (NULL); } size = 1; } if (ptr != NULL) { assert(malloc_initialized || IS_INITIALIZER); malloc_init(); if ((config_prof && opt_prof) || config_stats || (config_valgrind && in_valgrind)) old_usize = isalloc(ptr, config_prof); if (config_valgrind && in_valgrind) old_rzsize = config_prof ? p2rz(ptr) : u2rz(old_usize); if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; usize = s2u(size); PROF_ALLOC_PREP(usize, cnt); ret = pool_irealloc_prof(pool, ptr, old_usize, usize, cnt); } else { if (config_stats || (config_valgrind && in_valgrind)) usize = s2u(size); ret = pool_iralloc(pool, ptr, size, 0, 0, false); } } else { /* realloc(NULL, size) is equivalent to malloc(size). */ ret = pool_imalloc_body(pool, size, &usize); } if (ret == NULL) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in pool_ralloc(): " "out of memory\n"); abort(); } set_errno(ENOMEM); } if (config_stats && ret != NULL) { thread_allocated_t *ta; assert(usize == isalloc(ret, config_prof)); ta = thread_allocated_tsd_get(); ta->allocated += usize; ta->deallocated += old_usize; } UTRACE(ptr, size, ret); JEMALLOC_VALGRIND_REALLOC(true, ret, usize, true, ptr, old_usize, old_rzsize, true, false); return (ret); } static void * pool_imemalign_prof_sample(pool_t *pool, size_t alignment, size_t usize, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { assert(sa2u(SMALL_MAXCLASS+1, alignment) != 0); p = pool_ipalloc(pool, sa2u(SMALL_MAXCLASS+1, alignment), alignment, false); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = pool_ipalloc(pool, usize, alignment, false); return (p); } JEMALLOC_ALWAYS_INLINE_C void * pool_imemalign_prof(pool_t *pool, size_t alignment, size_t usize, prof_thr_cnt_t *cnt) { void *p; if ((uintptr_t)cnt != (uintptr_t)1U) p = pool_imemalign_prof_sample(pool, alignment, usize, cnt); else p = pool_ipalloc(pool, usize, alignment, false); if (p == NULL) return (NULL); prof_malloc(p, usize, cnt); return (p); } JEMALLOC_ATTR(nonnull(1)) static int pool_imemalign(pool_t *pool, void **memptr, size_t alignment, size_t size, size_t min_alignment) { int ret; size_t usize; void *result; assert(min_alignment != 0); if (malloc_init()) { result = NULL; goto label_oom; } else { if (size == 0) size = 1; /* Make sure that alignment is a large enough power of 2. 
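 * The ((alignment - 1) & alignment) test below is the usual power-of-two
 * check: for alignment == 64, 63 & 64 == 0, so it is accepted; for
 * alignment == 48, 47 & 48 == 0x20, so it is rejected with EINVAL (as is
 * anything smaller than min_alignment).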
*/ if (((alignment - 1) & alignment) != 0 || (alignment < min_alignment)) { if (config_xmalloc && opt_xmalloc) { malloc_write(": Error allocating pool" " aligned memory: invalid alignment\n"); abort(); } result = NULL; ret = EINVAL; goto label_return; } usize = sa2u(size, alignment); if (usize == 0) { result = NULL; goto label_oom; } if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; PROF_ALLOC_PREP(usize, cnt); result = pool_imemalign_prof(pool, alignment, usize, cnt); } else result = pool_ipalloc(pool, usize, alignment, false); if (result == NULL) goto label_oom; } *memptr = result; ret = 0; label_return: if (config_stats && result != NULL) { assert(usize == isalloc(result, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, size, result); return (ret); label_oom: assert(result == NULL); if (config_xmalloc && opt_xmalloc) { malloc_write(": Error allocating pool " "aligned memory: out of memory\n"); abort(); } ret = ENOMEM; goto label_return; } void * je_pool_aligned_alloc(pool_t *pool, size_t alignment, size_t size) { void *ret; int err; if ((err = pool_imemalign(pool, &ret, alignment, size, 1)) != 0) { ret = NULL; set_errno(err); } JEMALLOC_VALGRIND_MALLOC(err == 0, ret, isalloc(ret, config_prof), false); return (ret); } void je_pool_free(pool_t *pool, void *ptr) { UTRACE(ptr, 0, 0); if (ptr != NULL) pool_ifree(pool, ptr); } void je_pool_malloc_stats_print(pool_t *pool, void (*write_cb)(void *, const char *), void *cbopaque, const char *opts) { stats_print(pool, write_cb, cbopaque, opts); } void je_pool_set_alloc_funcs(void *(*malloc_func)(size_t), void (*free_func)(void *)) { if (malloc_func != NULL && free_func != NULL) { malloc_mutex_lock(&pool_base_lock); if (pools == NULL) { base_malloc_fn = malloc_func; base_free_fn = free_func; } malloc_mutex_unlock(&pool_base_lock); } } size_t je_pool_malloc_usable_size(pool_t *pool, void *ptr) { assert(malloc_initialized || IS_INITIALIZER); if (malloc_thread_init()) return 0; if (config_ivsalloc) { /* Return 0 if ptr is not within a chunk managed by jemalloc. */ if (rtree_get(pool->chunks_rtree, (uintptr_t)CHUNK_ADDR2BASE(ptr)) == 0) return 0; } return (ptr != NULL) ? pool_isalloc(pool, ptr, config_prof) : 0; } JEMALLOC_ALWAYS_INLINE_C void * imallocx(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena) { assert(usize == ((alignment == 0) ? s2u(usize) : sa2u(usize, alignment))); if (alignment != 0) return (ipalloct(usize, alignment, zero, try_tcache, arena)); else if (zero) return (icalloct(usize, try_tcache, arena)); else return (imalloct(usize, try_tcache, arena)); } static void * imallocx_prof_sample(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { size_t usize_promoted = (alignment == 0) ? 
s2u(SMALL_MAXCLASS+1) : sa2u(SMALL_MAXCLASS+1, alignment); assert(usize_promoted != 0); p = imallocx(usize_promoted, alignment, zero, try_tcache, arena); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else p = imallocx(usize, alignment, zero, try_tcache, arena); return (p); } JEMALLOC_ALWAYS_INLINE_C void * imallocx_prof(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena, prof_thr_cnt_t *cnt) { void *p; if ((uintptr_t)cnt != (uintptr_t)1U) { p = imallocx_prof_sample(usize, alignment, zero, try_tcache, arena, cnt); } else p = imallocx(usize, alignment, zero, try_tcache, arena); if (p == NULL) return (NULL); prof_malloc(p, usize, cnt); return (p); } void * je_mallocx(size_t size, int flags) { void *p; size_t usize; size_t alignment = (ZU(1) << (flags & MALLOCX_LG_ALIGN_MASK) & (SIZE_T_MAX-1)); bool zero = flags & MALLOCX_ZERO; unsigned arena_ind = ((unsigned)(flags >> 8)) - 1; pool_t *pool = &base_pool; arena_t dummy_arena; DUMMY_ARENA_INITIALIZE(dummy_arena, pool); arena_t *arena; bool try_tcache; assert(size != 0); if (malloc_init_base_pool()) goto label_oom; if (arena_ind != UINT_MAX) { malloc_rwlock_rdlock(&pool->arenas_lock); arena = pool->arenas[arena_ind]; malloc_rwlock_unlock(&pool->arenas_lock); try_tcache = false; } else { arena = &dummy_arena; try_tcache = true; } usize = (alignment == 0) ? s2u(size) : sa2u(size, alignment); assert(usize != 0); if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; PROF_ALLOC_PREP(usize, cnt); p = imallocx_prof(usize, alignment, zero, try_tcache, arena, cnt); } else p = imallocx(usize, alignment, zero, try_tcache, arena); if (p == NULL) goto label_oom; if (config_stats) { assert(usize == isalloc(p, config_prof)); thread_allocated_tsd_get()->allocated += usize; } UTRACE(0, size, p); JEMALLOC_VALGRIND_MALLOC(true, p, usize, zero); return (p); label_oom: if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in mallocx(): out of memory\n"); abort(); } UTRACE(0, size, 0); return (NULL); } static void * irallocx_prof_sample(void *oldptr, size_t size, size_t alignment, size_t usize, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena, prof_thr_cnt_t *cnt) { void *p; if (cnt == NULL) return (NULL); if (usize <= SMALL_MAXCLASS) { p = iralloct(oldptr, SMALL_MAXCLASS+1, (SMALL_MAXCLASS+1 >= size) ? 0 : size - (SMALL_MAXCLASS+1), alignment, zero, try_tcache_alloc, try_tcache_dalloc, arena); if (p == NULL) return (NULL); arena_prof_promoted(p, usize); } else { p = iralloct(oldptr, size, 0, alignment, zero, try_tcache_alloc, try_tcache_dalloc, arena); } return (p); } JEMALLOC_ALWAYS_INLINE_C void * irallocx_prof(void *oldptr, size_t old_usize, size_t size, size_t alignment, size_t *usize, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena, prof_thr_cnt_t *cnt) { void *p; prof_ctx_t *old_ctx; old_ctx = prof_ctx_get(oldptr); if ((uintptr_t)cnt != (uintptr_t)1U) p = irallocx_prof_sample(oldptr, size, alignment, *usize, zero, try_tcache_alloc, try_tcache_dalloc, arena, cnt); else { p = iralloct(oldptr, size, 0, alignment, zero, try_tcache_alloc, try_tcache_dalloc, arena); } if (p == NULL) return (NULL); if (p == oldptr && alignment != 0) { /* * The allocation did not move, so it is possible that the size * class is smaller than would guarantee the requested * alignment, and that the alignment constraint was * serendipitously satisfied. Additionally, old_usize may not * be the same as the current usize because of in-place large * reallocation. 
Therefore, query the actual value of usize. */ *usize = isalloc(p, config_prof); } prof_realloc(p, *usize, cnt, old_usize, old_ctx); return (p); } void * je_rallocx(void *ptr, size_t size, int flags) { void *p; size_t usize, old_usize; UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); size_t alignment = (ZU(1) << (flags & MALLOCX_LG_ALIGN_MASK) & (SIZE_T_MAX-1)); bool zero = flags & MALLOCX_ZERO; unsigned arena_ind = ((unsigned)(flags >> 8)) - 1; pool_t *pool = &base_pool; arena_t dummy_arena; DUMMY_ARENA_INITIALIZE(dummy_arena, pool); bool try_tcache_alloc, try_tcache_dalloc; arena_t *arena; assert(ptr != NULL); assert(size != 0); assert(malloc_initialized || IS_INITIALIZER); if (malloc_thread_init()) return (NULL); if (arena_ind != UINT_MAX) { arena_chunk_t *chunk; try_tcache_alloc = false; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); try_tcache_dalloc = (chunk == ptr || chunk->arena != pool->arenas[arena_ind]); arena = pool->arenas[arena_ind]; } else { try_tcache_alloc = true; try_tcache_dalloc = true; arena = &dummy_arena; } if ((config_prof && opt_prof) || config_stats || (config_valgrind && in_valgrind)) old_usize = isalloc(ptr, config_prof); if (config_valgrind && in_valgrind) old_rzsize = u2rz(old_usize); if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; usize = (alignment == 0) ? s2u(size) : sa2u(size, alignment); assert(usize != 0); PROF_ALLOC_PREP(usize, cnt); p = irallocx_prof(ptr, old_usize, size, alignment, &usize, zero, try_tcache_alloc, try_tcache_dalloc, arena, cnt); if (p == NULL) goto label_oom; } else { p = iralloct(ptr, size, 0, alignment, zero, try_tcache_alloc, try_tcache_dalloc, arena); if (p == NULL) goto label_oom; if (config_stats || (config_valgrind && in_valgrind)) usize = isalloc(p, config_prof); } if (config_stats) { thread_allocated_t *ta; ta = thread_allocated_tsd_get(); ta->allocated += usize; ta->deallocated += old_usize; } UTRACE(ptr, size, p); JEMALLOC_VALGRIND_REALLOC(true, p, usize, false, ptr, old_usize, old_rzsize, false, zero); return (p); label_oom: if (config_xmalloc && opt_xmalloc) { malloc_write(": Error in rallocx(): out of memory\n"); abort(); } UTRACE(ptr, size, 0); return (NULL); } JEMALLOC_ALWAYS_INLINE_C size_t ixallocx_helper(void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, bool zero, arena_t *arena) { size_t usize; if (ixalloc(ptr, size, extra, alignment, zero)) return (old_usize); usize = isalloc(ptr, config_prof); return (usize); } static size_t ixallocx_prof_sample(void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, size_t max_usize, bool zero, arena_t *arena, prof_thr_cnt_t *cnt) { size_t usize; if (cnt == NULL) return (old_usize); /* Use minimum usize to determine whether promotion may happen. */ if (((alignment == 0) ? s2u(size) : sa2u(size, alignment)) <= SMALL_MAXCLASS) { if (ixalloc(ptr, SMALL_MAXCLASS+1, (SMALL_MAXCLASS+1 >= size+extra) ? 
0 : size+extra - (SMALL_MAXCLASS+1), alignment, zero)) return (old_usize); usize = isalloc(ptr, config_prof); if (max_usize < PAGE) arena_prof_promoted(ptr, usize); } else { usize = ixallocx_helper(ptr, old_usize, size, extra, alignment, zero, arena); } return (usize); } JEMALLOC_ALWAYS_INLINE_C size_t ixallocx_prof(void *ptr, size_t old_usize, size_t size, size_t extra, size_t alignment, size_t max_usize, bool zero, arena_t *arena, prof_thr_cnt_t *cnt) { size_t usize; prof_ctx_t *old_ctx; old_ctx = prof_ctx_get(ptr); if ((uintptr_t)cnt != (uintptr_t)1U) { usize = ixallocx_prof_sample(ptr, old_usize, size, extra, alignment, zero, max_usize, arena, cnt); } else { usize = ixallocx_helper(ptr, old_usize, size, extra, alignment, zero, arena); } if (usize == old_usize) return (usize); prof_realloc(ptr, usize, cnt, old_usize, old_ctx); return (usize); } size_t je_xallocx(void *ptr, size_t size, size_t extra, int flags) { size_t usize, old_usize; UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); size_t alignment = (ZU(1) << (flags & MALLOCX_LG_ALIGN_MASK) & (SIZE_T_MAX-1)); bool zero = flags & MALLOCX_ZERO; unsigned arena_ind = ((unsigned)(flags >> 8)) - 1; pool_t *pool = &base_pool; arena_t dummy_arena; DUMMY_ARENA_INITIALIZE(dummy_arena, pool); arena_t *arena; assert(ptr != NULL); assert(size != 0); assert(SIZE_T_MAX - size >= extra); assert(malloc_initialized || IS_INITIALIZER); if (malloc_thread_init()) return (0); if (arena_ind != UINT_MAX) arena = pool->arenas[arena_ind]; else arena = &dummy_arena; old_usize = isalloc(ptr, config_prof); if (config_valgrind && in_valgrind) old_rzsize = u2rz(old_usize); if (config_prof && opt_prof) { prof_thr_cnt_t *cnt; /* * usize isn't knowable before ixalloc() returns when extra is * non-zero. Therefore, compute its maximum possible value and * use that in PROF_ALLOC_PREP() to decide whether to capture a * backtrace. prof_realloc() will use the actual usize to * decide whether to sample. */ size_t max_usize = (alignment == 0) ? 
s2u(size+extra) : sa2u(size+extra, alignment); PROF_ALLOC_PREP(max_usize, cnt); usize = ixallocx_prof(ptr, old_usize, size, extra, alignment, max_usize, zero, arena, cnt); } else { usize = ixallocx_helper(ptr, old_usize, size, extra, alignment, zero, arena); } if (usize == old_usize) goto label_not_resized; if (config_stats) { thread_allocated_t *ta; ta = thread_allocated_tsd_get(); ta->allocated += usize; ta->deallocated += old_usize; } JEMALLOC_VALGRIND_REALLOC(false, ptr, usize, false, ptr, old_usize, old_rzsize, false, zero); label_not_resized: UTRACE(ptr, size, ptr); return (usize); } size_t je_sallocx(const void *ptr, int flags) { size_t usize; assert(malloc_initialized || IS_INITIALIZER); if (malloc_thread_init()) return (0); if (config_ivsalloc) usize = ivsalloc(ptr, config_prof); else { assert(ptr != NULL); usize = isalloc(ptr, config_prof); } return (usize); } void je_dallocx(void *ptr, int flags) { size_t usize; UNUSED size_t rzsize JEMALLOC_CC_SILENCE_INIT(0); unsigned arena_ind = ((unsigned)(flags >> 8)) - 1; pool_t *pool = &base_pool; bool try_tcache; assert(ptr != NULL); assert(malloc_initialized || IS_INITIALIZER); if (arena_ind != UINT_MAX) { arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); try_tcache = (chunk == ptr || chunk->arena != pool->arenas[arena_ind]); } else try_tcache = true; UTRACE(ptr, 0, 0); if (config_stats || config_valgrind) usize = isalloc(ptr, config_prof); if (config_prof && opt_prof) { if (config_stats == false && config_valgrind == false) usize = isalloc(ptr, config_prof); prof_free(ptr, usize); } if (config_stats) thread_allocated_tsd_get()->deallocated += usize; if (config_valgrind && in_valgrind) rzsize = p2rz(ptr); iqalloct(ptr, try_tcache); JEMALLOC_VALGRIND_FREE(ptr, rzsize); } size_t je_nallocx(size_t size, int flags) { size_t usize; size_t alignment = (ZU(1) << (flags & MALLOCX_LG_ALIGN_MASK) & (SIZE_T_MAX-1)); assert(size != 0); if (malloc_init_base_pool()) return (0); usize = (alignment == 0) ? s2u(size) : sa2u(size, alignment); assert(usize != 0); return (usize); } int je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { return (ctl_byname(name, oldp, oldlenp, newp, newlen)); } int je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp) { return (ctl_nametomib(name, mibp, miblenp)); } int je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) { return (ctl_bymib(mib, miblen, oldp, oldlenp, newp, newlen)); } int je_navsnprintf(char *str, size_t size, const char *format, va_list ap) { return malloc_vsnprintf(str, size, format, ap); } void je_malloc_stats_print(void (*write_cb)(void *, const char *), void *cbopaque, const char *opts) { stats_print(&base_pool, write_cb, cbopaque, opts); } size_t je_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) { size_t ret; assert(malloc_initialized || IS_INITIALIZER); if (malloc_thread_init()) return (0); if (config_ivsalloc) ret = ivsalloc(ptr, config_prof); else ret = (ptr != NULL) ? isalloc(ptr, config_prof) : 0; return (ret); } /* * End non-standard functions. */ /******************************************************************************/ /* * The following functions are used by threading libraries for protection of * malloc during fork(). 
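 *
 * They are the handlers registered earlier with
 *
 *	pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent,
 *	    jemalloc_postfork_child);
 *
 * prefork acquires every allocator mutex in a fixed order so that no lock
 * is held in an inconsistent state at the moment of fork(), the parent
 * simply releases them again, and the child releases or re-initializes
 * them (see mutex.c).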
*/ /* * If an application creates a thread before doing any allocation in the main * thread, then calls fork(2) in the main thread followed by memory allocation * in the child process, a race can occur that results in deadlock within the * child: the main thread may have forked while the created thread had * partially initialized the allocator. Ordinarily jemalloc prevents * fork/malloc races via the following functions it registers during * initialization using pthread_atfork(), but of course that does no good if * the allocator isn't fully initialized at fork time. The following library * constructor is a partial solution to this problem. It may still be possible to * trigger the deadlock described above, but doing so would involve forking via * a library constructor that runs before jemalloc's runs. */ JEMALLOC_ATTR(constructor(102)) void jemalloc_constructor(void) { malloc_init(); } JEMALLOC_ATTR(destructor(101)) void jemalloc_destructor(void) { if (base_pool_initialized == false) return; tcache_thread_cleanup(tcache_tsd_get()); arenas_cleanup(arenas_tsd_get()); je_base_pool_destroy(); } #define FOREACH_POOL(func) \ do { \ unsigned i; \ for (i = 0; i < npools; i++) { \ if (pools[i] != NULL) \ (func)(pools[i]); \ } \ } while(0) #ifndef JEMALLOC_MUTEX_INIT_CB void jemalloc_prefork(void) #else JEMALLOC_EXPORT void _malloc_prefork(void) #endif { unsigned i, j; pool_t *pool; #ifdef JEMALLOC_MUTEX_INIT_CB if (malloc_initialized == false) return; #endif assert(malloc_initialized); /* Acquire all mutexes in a safe order. */ ctl_prefork(); prof_prefork(); pool_prefork(); for (i = 0; i < npools; i++) { pool = pools[i]; if (pool != NULL) { malloc_rwlock_prefork(&pool->arenas_lock); for (j = 0; j < pool->narenas_total; j++) { if (pool->arenas[j] != NULL) arena_prefork(pool->arenas[j]); } } } FOREACH_POOL(chunk_prefork0); FOREACH_POOL(base_prefork); FOREACH_POOL(chunk_prefork1); chunk_dss_prefork(); FOREACH_POOL(huge_prefork); } #ifndef JEMALLOC_MUTEX_INIT_CB void jemalloc_postfork_parent(void) #else JEMALLOC_EXPORT void _malloc_postfork(void) #endif { unsigned i, j; pool_t *pool; #ifdef JEMALLOC_MUTEX_INIT_CB if (malloc_initialized == false) return; #endif assert(malloc_initialized); /* Release all mutexes, now that fork() has completed. */ FOREACH_POOL(huge_postfork_parent); chunk_dss_postfork_parent(); FOREACH_POOL(chunk_postfork_parent1); FOREACH_POOL(base_postfork_parent); FOREACH_POOL(chunk_postfork_parent0); for (i = 0; i < npools; i++) { pool = pools[i]; if (pool != NULL) { for (j = 0; j < pool->narenas_total; j++) { if (pool->arenas[j] != NULL) arena_postfork_parent(pool->arenas[j]); } malloc_rwlock_postfork_parent(&pool->arenas_lock); } } pool_postfork_parent(); prof_postfork_parent(); ctl_postfork_parent(); } void jemalloc_postfork_child(void) { unsigned i, j; pool_t *pool; assert(malloc_initialized); /* Release all mutexes, now that fork() has completed. 
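 * ("Release" is slightly loose here: on builds without
 * JEMALLOC_MUTEX_INIT_CB the child actually re-creates each mutex via
 * malloc_mutex_init() rather than unlocking it; see
 * malloc_mutex_postfork_child() in mutex.c.)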
*/ FOREACH_POOL(huge_postfork_child); chunk_dss_postfork_child(); FOREACH_POOL(chunk_postfork_child1); FOREACH_POOL(base_postfork_child); FOREACH_POOL(chunk_postfork_child0); for (i = 0; i < npools; i++) { pool = pools[i]; if (pool != NULL) { for (j = 0; j < pool->narenas_total; j++) { if (pool->arenas[j] != NULL) arena_postfork_child(pool->arenas[j]); } malloc_rwlock_postfork_child(&pool->arenas_lock); } } pool_postfork_child(); prof_postfork_child(); ctl_postfork_child(); } /******************************************************************************/ /* * The following functions are used for TLS allocation/deallocation in static * binaries on FreeBSD. The primary difference between these and i[mcd]alloc() * is that these avoid accessing TLS variables. */ static void * a0alloc(size_t size, bool zero) { if (malloc_init_base_pool()) return (NULL); if (size == 0) size = 1; if (size <= arena_maxclass) return (arena_malloc(base_pool.arenas[0], size, zero, false)); else return (huge_malloc(NULL, size, zero)); } void * a0malloc(size_t size) { return (a0alloc(size, false)); } void * a0calloc(size_t num, size_t size) { return (a0alloc(num * size, true)); } void a0free(void *ptr) { arena_chunk_t *chunk; if (ptr == NULL) return; chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) arena_dalloc(chunk, ptr, false); else huge_dalloc(&base_pool, ptr); } /******************************************************************************/ vmem-1.8/src/jemalloc/src/mb.c000066400000000000000000000001101361505074100162260ustar00rootroot00000000000000#define JEMALLOC_MB_C_ #include "jemalloc/internal/jemalloc_internal.h" vmem-1.8/src/jemalloc/src/mutex.c000066400000000000000000000106101361505074100170000ustar00rootroot00000000000000#define JEMALLOC_MUTEX_C_ #include "jemalloc/internal/jemalloc_internal.h" #if defined(JEMALLOC_LAZY_LOCK) && !defined(_WIN32) #include #endif #ifndef _CRT_SPINCOUNT #define _CRT_SPINCOUNT 4000 #endif /******************************************************************************/ /* Data. */ #ifdef JEMALLOC_LAZY_LOCK bool isthreaded = false; #endif #ifdef JEMALLOC_MUTEX_INIT_CB static bool postpone_init = true; static malloc_mutex_t *postponed_mutexes = NULL; #endif /******************************************************************************/ /* * We intercept pthread_create() calls in order to toggle isthreaded if the * process goes multi-threaded. 
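 * The wrapper below resolves the real symbol once via
 * dlsym(RTLD_NEXT, "pthread_create") and flips isthreaded to true before
 * delegating, so with JEMALLOC_LAZY_LOCK a strictly single-threaded
 * process never pays for locking: the lock primitives elsewhere are gated
 * on isthreaded (roughly, "if (isthreaded) pthread_mutex_lock(...)" -- a
 * simplification of the real macros, not their exact definition).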
*/ #if defined(JEMALLOC_LAZY_LOCK) && !defined(_WIN32) static void pthread_create_once(void); static int (*pthread_create_fptr)(pthread_t *__restrict, const pthread_attr_t *, void *(*)(void *), void *__restrict); static void pthread_create_once(void) { pthread_create_fptr = dlsym(RTLD_NEXT, "pthread_create"); if (pthread_create_fptr == NULL) { malloc_write(": Error in dlsym(RTLD_NEXT, " "\"pthread_create\")\n"); abort(); } isthreaded = true; } JEMALLOC_EXPORT int pthread_create(pthread_t *__restrict thread, const pthread_attr_t *__restrict attr, void *(*start_routine)(void *), void *__restrict arg) { static pthread_once_t once_control = PTHREAD_ONCE_INIT; pthread_once(&once_control, pthread_create_once); return (pthread_create_fptr(thread, attr, start_routine, arg)); } #endif /******************************************************************************/ #ifdef JEMALLOC_MUTEX_INIT_CB JEMALLOC_EXPORT int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex, void *(calloc_cb)(size_t, size_t)); static void * base_calloc_wrapper(size_t number, size_t size) { return base_calloc(&base_pool, number, size); } /* XXX We need somewhere to allocate mutexes from during early initialization */ #define BOOTSTRAP_POOL_SIZE 4096 #define BP_MASK 0xfffffffffffffff0UL static char bootstrap_pool[BOOTSTRAP_POOL_SIZE] __attribute__((aligned (16))); static char *bpp = bootstrap_pool; static void * bootstrap_calloc(size_t number, size_t size) { size_t my_size = ((number * size) + 0xf) & BP_MASK; bpp += my_size; if ((bpp - bootstrap_pool) > BOOTSTRAP_POOL_SIZE) { return NULL; } return (void *)(bpp - my_size); } #endif bool malloc_mutex_init(malloc_mutex_t *mutex) { #ifdef _WIN32 if (!InitializeCriticalSectionAndSpinCount(&mutex->lock, _CRT_SPINCOUNT)) return (true); #elif (defined(JEMALLOC_OSSPIN)) mutex->lock = 0; #elif (defined(JEMALLOC_MUTEX_INIT_CB)) if (postpone_init) { mutex->postponed_next = postponed_mutexes; postponed_mutexes = mutex; } else { if (_pthread_mutex_init_calloc_cb(&mutex->lock, base_calloc_wrapper) != 0) return (true); } #else pthread_mutexattr_t attr; if (pthread_mutexattr_init(&attr) != 0) return (true); pthread_mutexattr_settype(&attr, MALLOC_MUTEX_TYPE); if (pthread_mutex_init(&mutex->lock, &attr) != 0) { pthread_mutexattr_destroy(&attr); return (true); } pthread_mutexattr_destroy(&attr); #endif return (false); } void malloc_mutex_prefork(malloc_mutex_t *mutex) { malloc_mutex_lock(mutex); } void malloc_mutex_postfork_parent(malloc_mutex_t *mutex) { malloc_mutex_unlock(mutex); } bool mutex_boot(void) { #ifdef JEMALLOC_MUTEX_INIT_CB postpone_init = false; while (postponed_mutexes != NULL) { if (_pthread_mutex_init_calloc_cb(&postponed_mutexes->lock, bootstrap_calloc) != 0) return (true); postponed_mutexes = postponed_mutexes->postponed_next; } #endif return (false); } void malloc_mutex_postfork_child(malloc_mutex_t *mutex) { #if (defined(JEMALLOC_MUTEX_INIT_CB) || defined(JEMALLOC_DISABLE_BSD_MALLOC_HOOKS)) malloc_mutex_unlock(mutex); #else if (malloc_mutex_init(mutex)) { malloc_printf(": Error re-initializing mutex in " "child\n"); if (opt_abort) abort(); } #endif } void malloc_rwlock_prefork(malloc_rwlock_t *rwlock) { malloc_rwlock_wrlock(rwlock); } void malloc_rwlock_postfork_parent(malloc_rwlock_t *rwlock) { malloc_rwlock_unlock(rwlock); } void malloc_rwlock_postfork_child(malloc_rwlock_t *rwlock) { #if (defined(JEMALLOC_MUTEX_INIT_CB) || defined(JEMALLOC_DISABLE_BSD_MALLOC_HOOKS)) malloc_rwlock_unlock(rwlock); #else if (malloc_rwlock_init(rwlock)) { malloc_printf(": Error 
re-initializing rwlock in " "child\n"); if (opt_abort) abort(); } #endif } vmem-1.8/src/jemalloc/src/pool.c000066400000000000000000000072131361505074100166140ustar00rootroot00000000000000#define JEMALLOC_POOL_C_ #include "jemalloc/internal/jemalloc_internal.h" malloc_mutex_t pool_base_lock; malloc_mutex_t pools_lock; /* * Initialize runtime state of the pool. * Called both at pool creation and each pool opening. */ bool pool_boot(pool_t *pool, unsigned pool_id) { pool->pool_id = pool_id; if (malloc_mutex_init(&pool->memory_range_mtx)) return (true); /* * Rwlock initialization must be deferred if we are * creating the base pool in the JEMALLOC_LAZY_LOCK case. * This is safe because the lock won't be used until * isthreaded has been set. */ if ((isthreaded || (pool != &base_pool)) && malloc_rwlock_init(&pool->arenas_lock)) return (true); return (false); } /* * Initialize runtime state of the pool. * Called at each pool opening. */ bool pool_runtime_init(pool_t *pool, unsigned pool_id) { if (pool_boot(pool, pool_id)) return (true); if (base_boot(pool)) return (true); if (chunk_boot(pool)) return (true); if (huge_boot(pool)) return (true); JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(pool->arenas, sizeof(arena_t) * pool->narenas_total); for (size_t i = 0; i < pool->narenas_total; ++i) { if (pool->arenas[i] != NULL) { arena_t *arena = pool->arenas[i]; if (arena_boot(arena)) return (true); } } return (false); } /* * Initialize pool and create its base arena. * Called only at pool creation. */ bool pool_new(pool_t *pool, unsigned pool_id) { if (pool_boot(pool, pool_id)) return (true); if (base_init(pool)) return (true); if (chunk_init(pool)) return (true); if (huge_init(pool)) return (true); if (pools_shared_data_create()) return (true); pool->stats_cactive = 0; pool->ctl_stats_active = 0; pool->ctl_stats_allocated = 0; pool->ctl_stats_mapped = 0; pool->narenas_auto = opt_narenas; /* * Make sure that the arenas array can be allocated. In practice, this * limit is enough to allow the allocator to function, but the ctl * machinery will fail to allocate memory at far lower limits. */ if (pool->narenas_auto > chunksize / sizeof(arena_t *)) { pool->narenas_auto = chunksize / sizeof(arena_t *); malloc_printf(": Reducing narenas to limit (%d)\n", pool->narenas_auto); } pool->narenas_total = pool->narenas_auto; /* Allocate and initialize arenas. */ pool->arenas = (arena_t **)base_calloc(pool, sizeof(arena_t *), pool->narenas_total); if (pool->arenas == NULL) return (true); arenas_extend(pool, 0); return false; } /* Release the arenas associated with a pool. */ void pool_destroy(pool_t *pool) { size_t i, j; for (i = 0; i < pool->narenas_total; ++i) { if (pool->arenas[i] != NULL) { arena_t *arena = pool->arenas[i]; //arena_purge_all(arena); /* XXX */ for (j = 0; j < NBINS; j++) malloc_mutex_destroy(&arena->bins[j].lock); malloc_mutex_destroy(&arena->lock); } } /* * Set 'pool_id' to an incorrect value so that the pool cannot be used * after being deleted. 
*/ pool->pool_id = UINT_MAX; if (pool->chunks_rtree) { rtree_t *rtree = pool->chunks_rtree; malloc_mutex_destroy(&rtree->mutex); } malloc_mutex_destroy(&pool->memory_range_mtx); malloc_mutex_destroy(&pool->base_mtx); malloc_mutex_destroy(&pool->base_node_mtx); malloc_mutex_destroy(&pool->chunks_mtx); malloc_mutex_destroy(&pool->huge_mtx); malloc_rwlock_destroy(&pool->arenas_lock); } void pool_prefork() { malloc_mutex_prefork(&pools_lock); malloc_mutex_prefork(&pool_base_lock); } void pool_postfork_parent() { malloc_mutex_postfork_parent(&pools_lock); malloc_mutex_postfork_parent(&pool_base_lock); } void pool_postfork_child() { malloc_mutex_postfork_child(&pools_lock); malloc_mutex_postfork_child(&pool_base_lock); } vmem-1.8/src/jemalloc/src/prof.c000066400000000000000000000776301361505074100166230ustar00rootroot00000000000000#define JEMALLOC_PROF_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ #ifdef JEMALLOC_PROF_LIBUNWIND #define UNW_LOCAL_ONLY #include #endif #ifdef JEMALLOC_PROF_LIBGCC #include #endif /******************************************************************************/ /* Data. */ malloc_tsd_data(, prof_tdata, prof_tdata_t *, NULL) bool opt_prof = false; bool opt_prof_active = true; size_t opt_lg_prof_sample = LG_PROF_SAMPLE_DEFAULT; ssize_t opt_lg_prof_interval = LG_PROF_INTERVAL_DEFAULT; bool opt_prof_gdump = false; bool opt_prof_final = true; bool opt_prof_leak = false; bool opt_prof_accum = false; char opt_prof_prefix[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF JE_PATH_MAX + #endif 1]; uint64_t prof_interval = 0; /* * Table of mutexes that are shared among ctx's. These are leaf locks, so * there is no problem with using them for more than one ctx at the same time. * The primary motivation for this sharing though is that ctx's are ephemeral, * and destroying mutexes causes complications for systems that allocate when * creating/destroying mutexes. */ static malloc_mutex_t *ctx_locks; static unsigned cum_ctxs; /* Atomic counter. */ /* * Global hash of (prof_bt_t *)-->(prof_ctx_t *). This is the master data * structure that knows about all backtraces currently captured. */ static ckh_t bt2ctx; static malloc_mutex_t bt2ctx_mtx; static malloc_mutex_t prof_dump_seq_mtx; static uint64_t prof_dump_seq; static uint64_t prof_dump_iseq; static uint64_t prof_dump_mseq; static uint64_t prof_dump_useq; /* * This buffer is rather large for stack allocation, so use a single buffer for * all profile dumps. */ static malloc_mutex_t prof_dump_mtx; static char prof_dump_buf[ /* Minimize memory bloat for non-prof builds. */ #ifdef JEMALLOC_PROF PROF_DUMP_BUFSIZE #else 1 #endif ]; static size_t prof_dump_buf_end; static int prof_dump_fd; /* Do not dump any profiles until bootstrapping is complete. */ static bool prof_booted = false; /******************************************************************************/ void bt_init(prof_bt_t *bt, void **vec) { cassert(config_prof); bt->vec = vec; bt->len = 0; } static void bt_destroy(prof_bt_t *bt) { cassert(config_prof); idalloc(bt); } static prof_bt_t * bt_dup(prof_bt_t *bt) { prof_bt_t *ret; cassert(config_prof); /* * Create a single allocation that has space for vec immediately * following the prof_bt_t structure. The backtraces that get * stored in the backtrace caches are copied from stack-allocated * temporary variables, so size is known at creation time. Making this * a contiguous object improves cache locality. 
*/ ret = (prof_bt_t *)imalloc(QUANTUM_CEILING(sizeof(prof_bt_t)) + (bt->len * sizeof(void *))); if (ret == NULL) return (NULL); ret->vec = (void **)((uintptr_t)ret + QUANTUM_CEILING(sizeof(prof_bt_t))); memcpy(ret->vec, bt->vec, bt->len * sizeof(void *)); ret->len = bt->len; return (ret); } static inline void prof_enter(prof_tdata_t *prof_tdata) { cassert(config_prof); assert(prof_tdata->enq == false); prof_tdata->enq = true; malloc_mutex_lock(&bt2ctx_mtx); } static inline void prof_leave(prof_tdata_t *prof_tdata) { bool idump, gdump; cassert(config_prof); malloc_mutex_unlock(&bt2ctx_mtx); assert(prof_tdata->enq); prof_tdata->enq = false; idump = prof_tdata->enq_idump; prof_tdata->enq_idump = false; gdump = prof_tdata->enq_gdump; prof_tdata->enq_gdump = false; if (idump) prof_idump(); if (gdump) prof_gdump(); } #ifdef JEMALLOC_PROF_LIBUNWIND void prof_backtrace(prof_bt_t *bt) { int nframes; cassert(config_prof); assert(bt->len == 0); assert(bt->vec != NULL); nframes = unw_backtrace(bt->vec, PROF_BT_MAX); if (nframes <= 0) return; bt->len = nframes; } #elif (defined(JEMALLOC_PROF_LIBGCC)) static _Unwind_Reason_Code prof_unwind_init_callback(struct _Unwind_Context *context, void *arg) { cassert(config_prof); return (_URC_NO_REASON); } static _Unwind_Reason_Code prof_unwind_callback(struct _Unwind_Context *context, void *arg) { prof_unwind_data_t *data = (prof_unwind_data_t *)arg; void *ip; cassert(config_prof); ip = (void *)_Unwind_GetIP(context); if (ip == NULL) return (_URC_END_OF_STACK); data->bt->vec[data->bt->len] = ip; data->bt->len++; if (data->bt->len == data->max) return (_URC_END_OF_STACK); return (_URC_NO_REASON); } void prof_backtrace(prof_bt_t *bt) { prof_unwind_data_t data = {bt, PROF_BT_MAX}; cassert(config_prof); _Unwind_Backtrace(prof_unwind_callback, &data); } #elif (defined(JEMALLOC_PROF_GCC)) void prof_backtrace(prof_bt_t *bt) { #define BT_FRAME(i) \ if ((i) < PROF_BT_MAX) { \ void *p; \ if (__builtin_frame_address(i) == 0) \ return; \ p = __builtin_return_address(i); \ if (p == NULL) \ return; \ bt->vec[(i)] = p; \ bt->len = (i) + 1; \ } else \ return; cassert(config_prof); BT_FRAME(0) BT_FRAME(1) BT_FRAME(2) BT_FRAME(3) BT_FRAME(4) BT_FRAME(5) BT_FRAME(6) BT_FRAME(7) BT_FRAME(8) BT_FRAME(9) BT_FRAME(10) BT_FRAME(11) BT_FRAME(12) BT_FRAME(13) BT_FRAME(14) BT_FRAME(15) BT_FRAME(16) BT_FRAME(17) BT_FRAME(18) BT_FRAME(19) BT_FRAME(20) BT_FRAME(21) BT_FRAME(22) BT_FRAME(23) BT_FRAME(24) BT_FRAME(25) BT_FRAME(26) BT_FRAME(27) BT_FRAME(28) BT_FRAME(29) BT_FRAME(30) BT_FRAME(31) BT_FRAME(32) BT_FRAME(33) BT_FRAME(34) BT_FRAME(35) BT_FRAME(36) BT_FRAME(37) BT_FRAME(38) BT_FRAME(39) BT_FRAME(40) BT_FRAME(41) BT_FRAME(42) BT_FRAME(43) BT_FRAME(44) BT_FRAME(45) BT_FRAME(46) BT_FRAME(47) BT_FRAME(48) BT_FRAME(49) BT_FRAME(50) BT_FRAME(51) BT_FRAME(52) BT_FRAME(53) BT_FRAME(54) BT_FRAME(55) BT_FRAME(56) BT_FRAME(57) BT_FRAME(58) BT_FRAME(59) BT_FRAME(60) BT_FRAME(61) BT_FRAME(62) BT_FRAME(63) BT_FRAME(64) BT_FRAME(65) BT_FRAME(66) BT_FRAME(67) BT_FRAME(68) BT_FRAME(69) BT_FRAME(70) BT_FRAME(71) BT_FRAME(72) BT_FRAME(73) BT_FRAME(74) BT_FRAME(75) BT_FRAME(76) BT_FRAME(77) BT_FRAME(78) BT_FRAME(79) BT_FRAME(80) BT_FRAME(81) BT_FRAME(82) BT_FRAME(83) BT_FRAME(84) BT_FRAME(85) BT_FRAME(86) BT_FRAME(87) BT_FRAME(88) BT_FRAME(89) BT_FRAME(90) BT_FRAME(91) BT_FRAME(92) BT_FRAME(93) BT_FRAME(94) BT_FRAME(95) BT_FRAME(96) BT_FRAME(97) BT_FRAME(98) BT_FRAME(99) BT_FRAME(100) BT_FRAME(101) BT_FRAME(102) BT_FRAME(103) BT_FRAME(104) BT_FRAME(105) BT_FRAME(106) BT_FRAME(107) BT_FRAME(108) 
BT_FRAME(109) BT_FRAME(110) BT_FRAME(111) BT_FRAME(112) BT_FRAME(113) BT_FRAME(114) BT_FRAME(115) BT_FRAME(116) BT_FRAME(117) BT_FRAME(118) BT_FRAME(119) BT_FRAME(120) BT_FRAME(121) BT_FRAME(122) BT_FRAME(123) BT_FRAME(124) BT_FRAME(125) BT_FRAME(126) BT_FRAME(127) #undef BT_FRAME } #else void prof_backtrace(prof_bt_t *bt) { cassert(config_prof); not_reached(); } #endif static malloc_mutex_t * prof_ctx_mutex_choose(void) { unsigned nctxs = atomic_add_u(&cum_ctxs, 1); return (&ctx_locks[(nctxs - 1) % PROF_NCTX_LOCKS]); } static void prof_ctx_init(prof_ctx_t *ctx, prof_bt_t *bt) { ctx->bt = bt; ctx->lock = prof_ctx_mutex_choose(); /* * Set nlimbo to 1, in order to avoid a race condition with * prof_ctx_merge()/prof_ctx_destroy(). */ ctx->nlimbo = 1; ql_elm_new(ctx, dump_link); memset(&ctx->cnt_merged, 0, sizeof(prof_cnt_t)); ql_new(&ctx->cnts_ql); } static void prof_ctx_destroy(prof_ctx_t *ctx) { prof_tdata_t *prof_tdata; cassert(config_prof); /* * Check that ctx is still unused by any thread cache before destroying * it. prof_lookup() increments ctx->nlimbo in order to avoid a race * condition with this function, as does prof_ctx_merge() in order to * avoid a race between the main body of prof_ctx_merge() and entry * into this function. */ prof_tdata = prof_tdata_get(false); assert((uintptr_t)prof_tdata > (uintptr_t)PROF_TDATA_STATE_MAX); prof_enter(prof_tdata); malloc_mutex_lock(ctx->lock); if (ql_first(&ctx->cnts_ql) == NULL && ctx->cnt_merged.curobjs == 0 && ctx->nlimbo == 1) { assert(ctx->cnt_merged.curbytes == 0); assert(ctx->cnt_merged.accumobjs == 0); assert(ctx->cnt_merged.accumbytes == 0); /* Remove ctx from bt2ctx. */ if (ckh_remove(&bt2ctx, ctx->bt, NULL, NULL)) not_reached(); prof_leave(prof_tdata); /* Destroy ctx. */ malloc_mutex_unlock(ctx->lock); bt_destroy(ctx->bt); idalloc(ctx); } else { /* * Compensate for increment in prof_ctx_merge() or * prof_lookup(). */ ctx->nlimbo--; malloc_mutex_unlock(ctx->lock); prof_leave(prof_tdata); } } static void prof_ctx_merge(prof_ctx_t *ctx, prof_thr_cnt_t *cnt) { bool destroy; cassert(config_prof); /* Merge cnt stats and detach from ctx. */ malloc_mutex_lock(ctx->lock); ctx->cnt_merged.curobjs += cnt->cnts.curobjs; ctx->cnt_merged.curbytes += cnt->cnts.curbytes; ctx->cnt_merged.accumobjs += cnt->cnts.accumobjs; ctx->cnt_merged.accumbytes += cnt->cnts.accumbytes; ql_remove(&ctx->cnts_ql, cnt, cnts_link); if (opt_prof_accum == false && ql_first(&ctx->cnts_ql) == NULL && ctx->cnt_merged.curobjs == 0 && ctx->nlimbo == 0) { /* * Increment ctx->nlimbo in order to keep another thread from * winning the race to destroy ctx while this one has ctx->lock * dropped. Without this, it would be possible for another * thread to: * * 1) Sample an allocation associated with ctx. * 2) Deallocate the sampled object. * 3) Successfully prof_ctx_destroy(ctx). * * The result would be that ctx no longer exists by the time * this thread accesses it in prof_ctx_destroy(). */ ctx->nlimbo++; destroy = true; } else destroy = false; malloc_mutex_unlock(ctx->lock); if (destroy) prof_ctx_destroy(ctx); } static bool prof_lookup_global(prof_bt_t *bt, prof_tdata_t *prof_tdata, void **p_btkey, prof_ctx_t **p_ctx, bool *p_new_ctx) { union { prof_ctx_t *p; void *v; } ctx; union { prof_bt_t *p; void *v; } btkey; bool new_ctx; prof_enter(prof_tdata); if (ckh_search(&bt2ctx, bt, &btkey.v, &ctx.v)) { /* bt has never been seen before. Insert it. 
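 *
 * (Added note, not part of the original source: the insertion path
 * below allocates a prof_ctx_t, duplicates the backtrace via
 * bt_dup() to serve as the hash key, initializes the ctx with
 * nlimbo == 1 to fend off a racing prof_ctx_destroy(), and finally
 * ckh_insert()s it into bt2ctx; on OOM at any step the earlier
 * allocations are released and failure is reported.)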
*/ ctx.v = imalloc(sizeof(prof_ctx_t)); if (ctx.v == NULL) { prof_leave(prof_tdata); return (true); } btkey.p = bt_dup(bt); if (btkey.v == NULL) { prof_leave(prof_tdata); idalloc(ctx.v); return (true); } prof_ctx_init(ctx.p, btkey.p); if (ckh_insert(&bt2ctx, btkey.v, ctx.v)) { /* OOM. */ prof_leave(prof_tdata); idalloc(btkey.v); idalloc(ctx.v); return (true); } new_ctx = true; } else { /* * Increment nlimbo, in order to avoid a race condition with * prof_ctx_merge()/prof_ctx_destroy(). */ malloc_mutex_lock(ctx.p->lock); ctx.p->nlimbo++; malloc_mutex_unlock(ctx.p->lock); new_ctx = false; } prof_leave(prof_tdata); *p_btkey = btkey.v; *p_ctx = ctx.p; *p_new_ctx = new_ctx; return (false); } prof_thr_cnt_t * prof_lookup(prof_bt_t *bt) { union { prof_thr_cnt_t *p; void *v; } ret; prof_tdata_t *prof_tdata; cassert(config_prof); prof_tdata = prof_tdata_get(false); if ((uintptr_t)prof_tdata <= (uintptr_t)PROF_TDATA_STATE_MAX) return (NULL); if (ckh_search(&prof_tdata->bt2cnt, bt, NULL, &ret.v)) { void *btkey; prof_ctx_t *ctx; bool new_ctx; /* * This thread's cache lacks bt. Look for it in the global * cache. */ if (prof_lookup_global(bt, prof_tdata, &btkey, &ctx, &new_ctx)) return (NULL); /* Link a prof_thd_cnt_t into ctx for this thread. */ if (ckh_count(&prof_tdata->bt2cnt) == PROF_TCMAX) { assert(ckh_count(&prof_tdata->bt2cnt) > 0); /* * Flush the least recently used cnt in order to keep * bt2cnt from becoming too large. */ ret.p = ql_last(&prof_tdata->lru_ql, lru_link); assert(ret.v != NULL); if (ckh_remove(&prof_tdata->bt2cnt, ret.p->ctx->bt, NULL, NULL)) not_reached(); ql_remove(&prof_tdata->lru_ql, ret.p, lru_link); prof_ctx_merge(ret.p->ctx, ret.p); /* ret can now be re-used. */ } else { assert(ckh_count(&prof_tdata->bt2cnt) < PROF_TCMAX); /* Allocate and partially initialize a new cnt. */ ret.v = imalloc(sizeof(prof_thr_cnt_t)); if (ret.p == NULL) { if (new_ctx) prof_ctx_destroy(ctx); return (NULL); } ql_elm_new(ret.p, cnts_link); ql_elm_new(ret.p, lru_link); } /* Finish initializing ret. */ ret.p->ctx = ctx; ret.p->epoch = 0; memset(&ret.p->cnts, 0, sizeof(prof_cnt_t)); if (ckh_insert(&prof_tdata->bt2cnt, btkey, ret.v)) { if (new_ctx) prof_ctx_destroy(ctx); idalloc(ret.v); return (NULL); } ql_head_insert(&prof_tdata->lru_ql, ret.p, lru_link); malloc_mutex_lock(ctx->lock); ql_tail_insert(&ctx->cnts_ql, ret.p, cnts_link); ctx->nlimbo--; malloc_mutex_unlock(ctx->lock); } else { /* Move ret to the front of the LRU. */ ql_remove(&prof_tdata->lru_ql, ret.p, lru_link); ql_head_insert(&prof_tdata->lru_ql, ret.p, lru_link); } return (ret.p); } void prof_sample_threshold_update(prof_tdata_t *prof_tdata) { /* * The body of this function is compiled out unless heap profiling is * enabled, so that it is possible to compile jemalloc with floating * point support completely disabled. Avoiding floating point code is * important on memory-constrained systems, but it also enables a * workaround for versions of glibc that don't properly save/restore * floating point registers during dynamic lazy symbol loading (which * internally calls into whatever malloc implementation happens to be * integrated into the application). Note that some compilers (e.g. * gcc 4.8) may use floating point registers for fast memory moves, so * jemalloc must be compiled with such optimizations disabled (e.g. * -mno-sse) in order for the workaround to be complete. 
*/ #ifdef JEMALLOC_PROF uint64_t r; double u; if (!config_prof) return; if (prof_tdata == NULL) prof_tdata = prof_tdata_get(false); if (opt_lg_prof_sample == 0) { prof_tdata->bytes_until_sample = 0; return; } /* * Compute sample threshold as a geometrically distributed random * variable with mean (2^opt_lg_prof_sample). * * __ __ * | log(u) | 1 * prof_tdata->threshold = | -------- |, where p = ------------------- * | log(1-p) | opt_lg_prof_sample * 2 * * For more information on the math, see: * * Non-Uniform Random Variate Generation * Luc Devroye * Springer-Verlag, New York, 1986 * pp 500 * (http://luc.devroye.org/rnbookindex.html) */ prng64(r, 53, prof_tdata->prng_state, UINT64_C(6364136223846793005), UINT64_C(1442695040888963407)); u = (double)r * (1.0/9007199254740992.0L); prof_tdata->bytes_until_sample = (uint64_t)(log(u) / log(1.0 - (1.0 / (double)((uint64_t)1U << opt_lg_prof_sample)))) + (uint64_t)1U; #endif } #ifdef JEMALLOC_JET size_t prof_bt_count(void) { size_t bt_count; prof_tdata_t *prof_tdata; prof_tdata = prof_tdata_get(false); if ((uintptr_t)prof_tdata <= (uintptr_t)PROF_TDATA_STATE_MAX) return (0); prof_enter(prof_tdata); bt_count = ckh_count(&bt2ctx); prof_leave(prof_tdata); return (bt_count); } #endif #ifdef JEMALLOC_JET #undef prof_dump_open #define prof_dump_open JEMALLOC_N(prof_dump_open_impl) #endif static int prof_dump_open(bool propagate_err, const char *filename) { int fd; fd = creat(filename, 0644); if (fd == -1 && propagate_err == false) { malloc_printf(": creat(\"%s\"), 0644) failed\n", filename); if (opt_abort) abort(); } return (fd); } #ifdef JEMALLOC_JET #undef prof_dump_open #define prof_dump_open JEMALLOC_N(prof_dump_open) prof_dump_open_t *prof_dump_open = JEMALLOC_N(prof_dump_open_impl); #endif static bool prof_dump_flush(bool propagate_err) { bool ret = false; ssize_t err; cassert(config_prof); err = write(prof_dump_fd, prof_dump_buf, prof_dump_buf_end); if (err == -1) { if (propagate_err == false) { malloc_write(": write() failed during heap " "profile flush\n"); if (opt_abort) abort(); } ret = true; } prof_dump_buf_end = 0; return (ret); } static bool prof_dump_close(bool propagate_err) { bool ret; assert(prof_dump_fd != -1); ret = prof_dump_flush(propagate_err); close(prof_dump_fd); prof_dump_fd = -1; return (ret); } static bool prof_dump_write(bool propagate_err, const char *s) { unsigned i, slen, n; cassert(config_prof); i = 0; slen = strlen(s); while (i < slen) { /* Flush the buffer if it is full. */ if (prof_dump_buf_end == sizeof(prof_dump_buf)) if (prof_dump_flush(propagate_err) && propagate_err) return (true); if (prof_dump_buf_end + slen <= sizeof(prof_dump_buf)) { /* Finish writing. */ n = slen - i; } else { /* Write as much of s as will fit. */ n = sizeof(prof_dump_buf) - prof_dump_buf_end; } memcpy(&prof_dump_buf[prof_dump_buf_end], &s[i], n); prof_dump_buf_end += n; i += n; } return (false); } JEMALLOC_ATTR(format(printf, 2, 3)) static bool prof_dump_printf(bool propagate_err, const char *format, ...) { bool ret; va_list ap; char buf[PROF_PRINTF_BUFSIZE]; va_start(ap, format); malloc_vsnprintf(buf, sizeof(buf), format, ap); va_end(ap); ret = prof_dump_write(propagate_err, buf); return (ret); } static void prof_dump_ctx_prep(prof_ctx_t *ctx, prof_cnt_t *cnt_all, size_t *leak_nctx, prof_ctx_list_t *ctx_ql) { prof_thr_cnt_t *thr_cnt; prof_cnt_t tcnt; cassert(config_prof); malloc_mutex_lock(ctx->lock); /* * Increment nlimbo so that ctx won't go away before dump. 
* Additionally, link ctx into the dump list so that it is included in * prof_dump()'s second pass. */ ctx->nlimbo++; ql_tail_insert(ctx_ql, ctx, dump_link); memcpy(&ctx->cnt_summed, &ctx->cnt_merged, sizeof(prof_cnt_t)); ql_foreach(thr_cnt, &ctx->cnts_ql, cnts_link) { volatile unsigned *epoch = &thr_cnt->epoch; while (true) { unsigned epoch0 = *epoch; /* Make sure epoch is even. */ if (epoch0 & 1U) continue; memcpy(&tcnt, &thr_cnt->cnts, sizeof(prof_cnt_t)); /* Terminate if epoch didn't change while reading. */ if (*epoch == epoch0) break; } ctx->cnt_summed.curobjs += tcnt.curobjs; ctx->cnt_summed.curbytes += tcnt.curbytes; if (opt_prof_accum) { ctx->cnt_summed.accumobjs += tcnt.accumobjs; ctx->cnt_summed.accumbytes += tcnt.accumbytes; } } if (ctx->cnt_summed.curobjs != 0) (*leak_nctx)++; /* Add to cnt_all. */ cnt_all->curobjs += ctx->cnt_summed.curobjs; cnt_all->curbytes += ctx->cnt_summed.curbytes; if (opt_prof_accum) { cnt_all->accumobjs += ctx->cnt_summed.accumobjs; cnt_all->accumbytes += ctx->cnt_summed.accumbytes; } malloc_mutex_unlock(ctx->lock); } static bool prof_dump_header(bool propagate_err, const prof_cnt_t *cnt_all) { if (opt_lg_prof_sample == 0) { if (prof_dump_printf(propagate_err, "heap profile: %"PRId64": %"PRId64 " [%"PRIu64": %"PRIu64"] @ heapprofile\n", cnt_all->curobjs, cnt_all->curbytes, cnt_all->accumobjs, cnt_all->accumbytes)) return (true); } else { if (prof_dump_printf(propagate_err, "heap profile: %"PRId64": %"PRId64 " [%"PRIu64": %"PRIu64"] @ heap_v2/%"PRIu64"\n", cnt_all->curobjs, cnt_all->curbytes, cnt_all->accumobjs, cnt_all->accumbytes, ((uint64_t)1U << opt_lg_prof_sample))) return (true); } return (false); } static void prof_dump_ctx_cleanup_locked(prof_ctx_t *ctx, prof_ctx_list_t *ctx_ql) { ctx->nlimbo--; ql_remove(ctx_ql, ctx, dump_link); } static void prof_dump_ctx_cleanup(prof_ctx_t *ctx, prof_ctx_list_t *ctx_ql) { malloc_mutex_lock(ctx->lock); prof_dump_ctx_cleanup_locked(ctx, ctx_ql); malloc_mutex_unlock(ctx->lock); } static bool prof_dump_ctx(bool propagate_err, prof_ctx_t *ctx, const prof_bt_t *bt, prof_ctx_list_t *ctx_ql) { bool ret; unsigned i; cassert(config_prof); /* * Current statistics can sum to 0 as a result of unmerged per thread * statistics. Additionally, interval- and growth-triggered dumps can * occur between the time a ctx is created and when its statistics are * filled in. Avoid dumping any ctx that is an artifact of either * implementation detail. 
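 *
 * (Added illustration, not part of the original source: for a ctx
 * that passes the filter below, one record of the form
 *
 *   <curobjs>: <curbytes> [<accumobjs>: <accumbytes>] @ <frame> <frame> ...
 *
 * is emitted, e.g. "3: 96 [7: 224] @ 0x400f31 0x400a12", where the
 * numbers and addresses are invented for illustration; this is the
 * heap profile record format that the pprof reference in
 * prof_leakcheck() alludes to.)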
*/ malloc_mutex_lock(ctx->lock); if ((opt_prof_accum == false && ctx->cnt_summed.curobjs == 0) || (opt_prof_accum && ctx->cnt_summed.accumobjs == 0)) { assert(ctx->cnt_summed.curobjs == 0); assert(ctx->cnt_summed.curbytes == 0); assert(ctx->cnt_summed.accumobjs == 0); assert(ctx->cnt_summed.accumbytes == 0); ret = false; goto label_return; } if (prof_dump_printf(propagate_err, "%"PRId64": %"PRId64 " [%"PRIu64": %"PRIu64"] @", ctx->cnt_summed.curobjs, ctx->cnt_summed.curbytes, ctx->cnt_summed.accumobjs, ctx->cnt_summed.accumbytes)) { ret = true; goto label_return; } for (i = 0; i < bt->len; i++) { if (prof_dump_printf(propagate_err, " %#"PRIxPTR, (uintptr_t)bt->vec[i])) { ret = true; goto label_return; } } if (prof_dump_write(propagate_err, "\n")) { ret = true; goto label_return; } ret = false; label_return: prof_dump_ctx_cleanup_locked(ctx, ctx_ql); malloc_mutex_unlock(ctx->lock); return (ret); } static int prof_getpid(void) { #ifdef _WIN32 return (GetCurrentProcessId()); #else return (getpid()); #endif } static bool prof_dump_maps(bool propagate_err) { bool ret; int mfd; char filename[JE_PATH_MAX + 1]; cassert(config_prof); #ifdef __FreeBSD__ malloc_snprintf(filename, sizeof(filename), "/proc/curproc/map"); #else malloc_snprintf(filename, sizeof(filename), "/proc/%d/maps", (int)prof_getpid()); #endif mfd = open(filename, O_RDONLY); if (mfd != -1) { ssize_t nread; if (prof_dump_write(propagate_err, "\nMAPPED_LIBRARIES:\n") && propagate_err) { ret = true; goto label_return; } nread = 0; do { prof_dump_buf_end += nread; if (prof_dump_buf_end == sizeof(prof_dump_buf)) { /* Make space in prof_dump_buf before read(). */ if (prof_dump_flush(propagate_err) && propagate_err) { ret = true; goto label_return; } } nread = read(mfd, &prof_dump_buf[prof_dump_buf_end], sizeof(prof_dump_buf) - prof_dump_buf_end); } while (nread > 0); } else { ret = true; goto label_return; } ret = false; label_return: if (mfd != -1) close(mfd); return (ret); } static void prof_leakcheck(const prof_cnt_t *cnt_all, size_t leak_nctx, const char *filename) { if (cnt_all->curbytes != 0) { malloc_printf(": Leak summary: %"PRId64" byte%s, %" PRId64" object%s, %zu context%s\n", cnt_all->curbytes, (cnt_all->curbytes != 1) ? "s" : "", cnt_all->curobjs, (cnt_all->curobjs != 1) ? "s" : "", leak_nctx, (leak_nctx != 1) ? "s" : ""); malloc_printf( ": Run pprof on \"%s\" for leak detail\n", filename); } } static bool prof_dump(bool propagate_err, const char *filename, bool leakcheck) { prof_tdata_t *prof_tdata; prof_cnt_t cnt_all; size_t tabind; union { prof_ctx_t *p; void *v; } ctx; size_t leak_nctx; prof_ctx_list_t ctx_ql; cassert(config_prof); prof_tdata = prof_tdata_get(false); if ((uintptr_t)prof_tdata <= (uintptr_t)PROF_TDATA_STATE_MAX) return (true); malloc_mutex_lock(&prof_dump_mtx); /* Merge per thread profile stats, and sum them in cnt_all. */ memset(&cnt_all, 0, sizeof(prof_cnt_t)); leak_nctx = 0; ql_new(&ctx_ql); prof_enter(prof_tdata); for (tabind = 0; ckh_iter(&bt2ctx, &tabind, NULL, &ctx.v) == false;) prof_dump_ctx_prep(ctx.p, &cnt_all, &leak_nctx, &ctx_ql); prof_leave(prof_tdata); /* Create dump file. */ if ((prof_dump_fd = prof_dump_open(propagate_err, filename)) == -1) goto label_open_close_error; /* Dump profile header. */ if (prof_dump_header(propagate_err, &cnt_all)) goto label_write_error; /* Dump per ctx profile stats. */ while ((ctx.p = ql_first(&ctx_ql)) != NULL) { if (prof_dump_ctx(propagate_err, ctx.p, ctx.p->bt, &ctx_ql)) goto label_write_error; } /* Dump /proc//maps if possible. 
*/ if (prof_dump_maps(propagate_err)) goto label_write_error; if (prof_dump_close(propagate_err)) goto label_open_close_error; malloc_mutex_unlock(&prof_dump_mtx); if (leakcheck) prof_leakcheck(&cnt_all, leak_nctx, filename); return (false); label_write_error: prof_dump_close(propagate_err); label_open_close_error: while ((ctx.p = ql_first(&ctx_ql)) != NULL) prof_dump_ctx_cleanup(ctx.p, &ctx_ql); malloc_mutex_unlock(&prof_dump_mtx); return (true); } #define DUMP_FILENAME_BUFSIZE (JE_PATH_MAX + 1) #define VSEQ_INVALID UINT64_C(0xffffffffffffffff) static void prof_dump_filename(char *filename, char v, uint64_t vseq) { cassert(config_prof); if (vseq != VSEQ_INVALID) { /* "...v.heap" */ malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE, "%s.%d.%"PRIu64".%c%"PRIu64".heap", opt_prof_prefix, (int)prof_getpid(), prof_dump_seq, v, vseq); } else { /* "....heap" */ malloc_snprintf(filename, DUMP_FILENAME_BUFSIZE, "%s.%d.%"PRIu64".%c.heap", opt_prof_prefix, (int)prof_getpid(), prof_dump_seq, v); } prof_dump_seq++; } static void prof_fdump(void) { char filename[DUMP_FILENAME_BUFSIZE]; cassert(config_prof); if (prof_booted == false) return; if (opt_prof_final && opt_prof_prefix[0] != '\0') { malloc_mutex_lock(&prof_dump_seq_mtx); prof_dump_filename(filename, 'f', VSEQ_INVALID); malloc_mutex_unlock(&prof_dump_seq_mtx); prof_dump(false, filename, opt_prof_leak); } } void prof_idump(void) { prof_tdata_t *prof_tdata; char filename[JE_PATH_MAX + 1]; cassert(config_prof); if (prof_booted == false) return; prof_tdata = prof_tdata_get(false); if ((uintptr_t)prof_tdata <= (uintptr_t)PROF_TDATA_STATE_MAX) return; if (prof_tdata->enq) { prof_tdata->enq_idump = true; return; } if (opt_prof_prefix[0] != '\0') { malloc_mutex_lock(&prof_dump_seq_mtx); prof_dump_filename(filename, 'i', prof_dump_iseq); prof_dump_iseq++; malloc_mutex_unlock(&prof_dump_seq_mtx); prof_dump(false, filename, false); } } bool prof_mdump(const char *filename) { char filename_buf[DUMP_FILENAME_BUFSIZE]; cassert(config_prof); if (opt_prof == false || prof_booted == false) return (true); if (filename == NULL) { /* No filename specified, so automatically generate one. */ if (opt_prof_prefix[0] == '\0') return (true); malloc_mutex_lock(&prof_dump_seq_mtx); prof_dump_filename(filename_buf, 'm', prof_dump_mseq); prof_dump_mseq++; malloc_mutex_unlock(&prof_dump_seq_mtx); filename = filename_buf; } return (prof_dump(true, filename, false)); } void prof_gdump(void) { prof_tdata_t *prof_tdata; char filename[DUMP_FILENAME_BUFSIZE]; cassert(config_prof); if (prof_booted == false) return; prof_tdata = prof_tdata_get(false); if ((uintptr_t)prof_tdata <= (uintptr_t)PROF_TDATA_STATE_MAX) return; if (prof_tdata->enq) { prof_tdata->enq_gdump = true; return; } if (opt_prof_prefix[0] != '\0') { malloc_mutex_lock(&prof_dump_seq_mtx); prof_dump_filename(filename, 'u', prof_dump_useq); prof_dump_useq++; malloc_mutex_unlock(&prof_dump_seq_mtx); prof_dump(false, filename, false); } } static void prof_bt_hash(const void *key, size_t r_hash[2]) { prof_bt_t *bt = (prof_bt_t *)key; cassert(config_prof); hash(bt->vec, bt->len * sizeof(void *), 0x94122f33U, r_hash); } static bool prof_bt_keycomp(const void *k1, const void *k2) { const prof_bt_t *bt1 = (prof_bt_t *)k1; const prof_bt_t *bt2 = (prof_bt_t *)k2; cassert(config_prof); if (bt1->len != bt2->len) return (false); return (memcmp(bt1->vec, bt2->vec, bt1->len * sizeof(void *)) == 0); } prof_tdata_t * prof_tdata_init(void) { prof_tdata_t *prof_tdata; cassert(config_prof); /* Initialize an empty cache for this thread. 
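 *
 * (Added summary, not part of the original source: the per-thread
 * state built below holds the bt2cnt cuckoo hash mapping backtraces
 * to per-thread counters, the LRU list that prof_lookup() uses to
 * evict entries once bt2cnt reaches PROF_TCMAX, a vec[] scratch
 * buffer of PROF_BT_MAX frames for backtrace capture, and the PRNG
 * state that drives prof_sample_threshold_update().)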
*/ prof_tdata = (prof_tdata_t *)imalloc(sizeof(prof_tdata_t)); if (prof_tdata == NULL) return (NULL); if (ckh_new(&prof_tdata->bt2cnt, PROF_CKH_MINITEMS, prof_bt_hash, prof_bt_keycomp)) { idalloc(prof_tdata); return (NULL); } ql_new(&prof_tdata->lru_ql); prof_tdata->vec = imalloc(sizeof(void *) * PROF_BT_MAX); if (prof_tdata->vec == NULL) { ckh_delete(&prof_tdata->bt2cnt); idalloc(prof_tdata); return (NULL); } prof_tdata->prng_state = (uint64_t)(uintptr_t)prof_tdata; prof_sample_threshold_update(prof_tdata); prof_tdata->enq = false; prof_tdata->enq_idump = false; prof_tdata->enq_gdump = false; prof_tdata_tsd_set(&prof_tdata); return (prof_tdata); } void prof_tdata_cleanup(void *arg) { prof_thr_cnt_t *cnt; prof_tdata_t *prof_tdata = *(prof_tdata_t **)arg; cassert(config_prof); if (prof_tdata == PROF_TDATA_STATE_REINCARNATED) { /* * Another destructor deallocated memory after this destructor * was called. Reset prof_tdata to PROF_TDATA_STATE_PURGATORY * in order to receive another callback. */ prof_tdata = PROF_TDATA_STATE_PURGATORY; prof_tdata_tsd_set(&prof_tdata); } else if (prof_tdata == PROF_TDATA_STATE_PURGATORY) { /* * The previous time this destructor was called, we set the key * to PROF_TDATA_STATE_PURGATORY so that other destructors * wouldn't cause re-creation of the prof_tdata. This time, do * nothing, so that the destructor will not be called again. */ } else if (prof_tdata != NULL) { /* * Delete the hash table. All of its contents can still be * iterated over via the LRU. */ ckh_delete(&prof_tdata->bt2cnt); /* * Iteratively merge cnt's into the global stats and delete * them. */ while ((cnt = ql_last(&prof_tdata->lru_ql, lru_link)) != NULL) { ql_remove(&prof_tdata->lru_ql, cnt, lru_link); prof_ctx_merge(cnt->ctx, cnt); idalloc(cnt); } idalloc(prof_tdata->vec); idalloc(prof_tdata); prof_tdata = PROF_TDATA_STATE_PURGATORY; prof_tdata_tsd_set(&prof_tdata); } } void prof_boot0(void) { cassert(config_prof); memcpy(opt_prof_prefix, PROF_PREFIX_DEFAULT, sizeof(PROF_PREFIX_DEFAULT)); } void prof_boot1(void) { cassert(config_prof); /* * opt_prof must be in its final state before any arenas are * initialized, so this function must be executed early. */ if (opt_prof_leak && opt_prof == false) { /* * Enable opt_prof, but in such a way that profiles are never * automatically dumped. */ opt_prof = true; opt_prof_gdump = false; } else if (opt_prof) { if (opt_lg_prof_interval >= 0) { prof_interval = (((uint64_t)1U) << opt_lg_prof_interval); } } } bool prof_boot2(void) { cassert(config_prof); if (opt_prof) { unsigned i; if (ckh_new(&bt2ctx, PROF_CKH_MINITEMS, prof_bt_hash, prof_bt_keycomp)) return (true); if (malloc_mutex_init(&bt2ctx_mtx)) return (true); if (prof_tdata_tsd_boot()) { malloc_write( ": Error in pthread_key_create()\n"); abort(); } if (malloc_mutex_init(&prof_dump_seq_mtx)) return (true); if (malloc_mutex_init(&prof_dump_mtx)) return (true); if (atexit(prof_fdump) != 0) { malloc_write(": Error in atexit()\n"); if (opt_abort) abort(); } ctx_locks = (malloc_mutex_t *)base_malloc_fn(PROF_NCTX_LOCKS * sizeof(malloc_mutex_t)); if (ctx_locks == NULL) return (true); for (i = 0; i < PROF_NCTX_LOCKS; i++) { if (malloc_mutex_init(&ctx_locks[i])) return (true); } } #ifdef JEMALLOC_PROF_LIBGCC /* * Cause the backtracing machinery to allocate its internal state * before enabling profiling. 
*/ _Unwind_Backtrace(prof_unwind_init_callback, NULL); #endif prof_booted = true; return (false); } void prof_prefork(void) { if (opt_prof) { unsigned i; malloc_mutex_prefork(&bt2ctx_mtx); malloc_mutex_prefork(&prof_dump_seq_mtx); for (i = 0; i < PROF_NCTX_LOCKS; i++) malloc_mutex_prefork(&ctx_locks[i]); } } void prof_postfork_parent(void) { if (opt_prof) { unsigned i; for (i = 0; i < PROF_NCTX_LOCKS; i++) malloc_mutex_postfork_parent(&ctx_locks[i]); malloc_mutex_postfork_parent(&prof_dump_seq_mtx); malloc_mutex_postfork_parent(&bt2ctx_mtx); } } void prof_postfork_child(void) { if (opt_prof) { unsigned i; for (i = 0; i < PROF_NCTX_LOCKS; i++) malloc_mutex_postfork_child(&ctx_locks[i]); malloc_mutex_postfork_child(&prof_dump_seq_mtx); malloc_mutex_postfork_child(&bt2ctx_mtx); } } /******************************************************************************/ vmem-1.8/src/jemalloc/src/quarantine.c000066400000000000000000000132401361505074100200070ustar00rootroot00000000000000#define JEMALLOC_QUARANTINE_C_ #include "jemalloc/internal/jemalloc_internal.h" /* * quarantine pointers close to NULL are used to encode state information that * is used for cleaning up during thread shutdown. */ #define QUARANTINE_STATE_REINCARNATED ((quarantine_t *)(uintptr_t)1) #define QUARANTINE_STATE_PURGATORY ((quarantine_t *)(uintptr_t)2) #define QUARANTINE_STATE_MAX QUARANTINE_STATE_PURGATORY /******************************************************************************/ /* Data. */ malloc_tsd_data(, quarantine, quarantine_t *, NULL) /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static quarantine_t *quarantine_grow(quarantine_t *quarantine); static void quarantine_drain_one(quarantine_t *quarantine); static void quarantine_drain(quarantine_t *quarantine, size_t upper_bound); /******************************************************************************/ quarantine_t * quarantine_init(size_t lg_maxobjs) { quarantine_t *quarantine; quarantine = (quarantine_t *)imalloc(offsetof(quarantine_t, objs) + ((ZU(1) << lg_maxobjs) * sizeof(quarantine_obj_t))); if (quarantine == NULL) return (NULL); quarantine->curbytes = 0; quarantine->curobjs = 0; quarantine->first = 0; quarantine->lg_maxobjs = lg_maxobjs; quarantine_tsd_set(&quarantine); return (quarantine); } static quarantine_t * quarantine_grow(quarantine_t *quarantine) { quarantine_t *ret; ret = quarantine_init(quarantine->lg_maxobjs + 1); if (ret == NULL) { quarantine_drain_one(quarantine); return (quarantine); } ret->curbytes = quarantine->curbytes; ret->curobjs = quarantine->curobjs; if (quarantine->first + quarantine->curobjs <= (ZU(1) << quarantine->lg_maxobjs)) { /* objs ring buffer data are contiguous. */ memcpy(ret->objs, &quarantine->objs[quarantine->first], quarantine->curobjs * sizeof(quarantine_obj_t)); } else { /* objs ring buffer data wrap around. 
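 *
 * (Added sketch, not part of the original source: the two memcpy()
 * calls below linearize the wrapped ring into the new buffer.
 *
 *   old objs: [ b b b . . . . a a a ]   the 'a' run starts at 'first'
 *   new objs: [ a a a b b b . . . . ]   ncopy_a 'a's followed by ncopy_b 'b's)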
*/ size_t ncopy_a = (ZU(1) << quarantine->lg_maxobjs) - quarantine->first; size_t ncopy_b = quarantine->curobjs - ncopy_a; memcpy(ret->objs, &quarantine->objs[quarantine->first], ncopy_a * sizeof(quarantine_obj_t)); memcpy(&ret->objs[ncopy_a], quarantine->objs, ncopy_b * sizeof(quarantine_obj_t)); } idalloc(quarantine); return (ret); } static void quarantine_drain_one(quarantine_t *quarantine) { quarantine_obj_t *obj = &quarantine->objs[quarantine->first]; assert(obj->usize == isalloc(obj->ptr, config_prof)); idalloc(obj->ptr); quarantine->curbytes -= obj->usize; quarantine->curobjs--; quarantine->first = (quarantine->first + 1) & ((ZU(1) << quarantine->lg_maxobjs) - 1); } static void quarantine_drain(quarantine_t *quarantine, size_t upper_bound) { while (quarantine->curbytes > upper_bound && quarantine->curobjs > 0) quarantine_drain_one(quarantine); } void quarantine(void *ptr) { quarantine_t *quarantine; size_t usize = isalloc(ptr, config_prof); cassert(config_fill); assert(opt_quarantine); quarantine = *quarantine_tsd_get(); if ((uintptr_t)quarantine <= (uintptr_t)QUARANTINE_STATE_MAX) { if (quarantine == QUARANTINE_STATE_PURGATORY) { /* * Make a note that quarantine() was called after * quarantine_cleanup() was called. */ quarantine = QUARANTINE_STATE_REINCARNATED; quarantine_tsd_set(&quarantine); } idalloc(ptr); return; } /* * Drain one or more objects if the quarantine size limit would be * exceeded by appending ptr. */ if (quarantine->curbytes + usize > opt_quarantine) { size_t upper_bound = (opt_quarantine >= usize) ? opt_quarantine - usize : 0; quarantine_drain(quarantine, upper_bound); } /* Grow the quarantine ring buffer if it's full. */ if (quarantine->curobjs == (ZU(1) << quarantine->lg_maxobjs)) quarantine = quarantine_grow(quarantine); /* quarantine_grow() must free a slot if it fails to grow. */ assert(quarantine->curobjs < (ZU(1) << quarantine->lg_maxobjs)); /* Append ptr if its size doesn't exceed the quarantine size. */ if (quarantine->curbytes + usize <= opt_quarantine) { size_t offset = (quarantine->first + quarantine->curobjs) & ((ZU(1) << quarantine->lg_maxobjs) - 1); quarantine_obj_t *obj = &quarantine->objs[offset]; obj->ptr = ptr; obj->usize = usize; quarantine->curbytes += usize; quarantine->curobjs++; if (config_fill && opt_junk) { /* * Only do redzone validation if Valgrind isn't in * operation. */ if ((config_valgrind == false || in_valgrind == false) && usize <= SMALL_MAXCLASS) arena_quarantine_junk_small(ptr, usize); else memset(ptr, 0x5a, usize); } } else { assert(quarantine->curbytes == 0); idalloc(ptr); } } void quarantine_cleanup(void *arg) { quarantine_t *quarantine = *(quarantine_t **)arg; if (quarantine == QUARANTINE_STATE_REINCARNATED) { /* * Another destructor deallocated memory after this destructor * was called. Reset quarantine to QUARANTINE_STATE_PURGATORY * in order to receive another callback. */ quarantine = QUARANTINE_STATE_PURGATORY; quarantine_tsd_set(&quarantine); } else if (quarantine == QUARANTINE_STATE_PURGATORY) { /* * The previous time this destructor was called, we set the key * to QUARANTINE_STATE_PURGATORY so that other destructors * wouldn't cause re-creation of the quarantine. This time, do * nothing, so that the destructor will not be called again. 
*/ } else if (quarantine != NULL) { quarantine_drain(quarantine, 0); idalloc(quarantine); quarantine = QUARANTINE_STATE_PURGATORY; quarantine_tsd_set(&quarantine); } } bool quarantine_boot(void) { cassert(config_fill); if (quarantine_tsd_boot()) return (true); return (false); } vmem-1.8/src/jemalloc/src/rtree.c000066400000000000000000000047651361505074100167750ustar00rootroot00000000000000#define JEMALLOC_RTREE_C_ #include "jemalloc/internal/jemalloc_internal.h" rtree_t * rtree_new(unsigned bits, rtree_alloc_t *alloc, rtree_dalloc_t *dalloc, pool_t *pool) { rtree_t *ret; unsigned bits_per_level, bits_in_leaf, height, i; assert(bits > 0 && bits <= (sizeof(uintptr_t) << 3)); bits_per_level = jemalloc_ffs(pow2_ceil((RTREE_NODESIZE / sizeof(void *)))) - 1; bits_in_leaf = jemalloc_ffs(pow2_ceil((RTREE_NODESIZE / sizeof(uint8_t)))) - 1; if (bits > bits_in_leaf) { height = 1 + (bits - bits_in_leaf) / bits_per_level; if ((height-1) * bits_per_level + bits_in_leaf != bits) height++; } else { height = 1; } assert((height-1) * bits_per_level + bits_in_leaf >= bits); ret = (rtree_t*)alloc(pool, offsetof(rtree_t, level2bits) + (sizeof(unsigned) * height)); if (ret == NULL) return (NULL); memset(ret, 0, offsetof(rtree_t, level2bits) + (sizeof(unsigned) * height)); ret->alloc = alloc; ret->dalloc = dalloc; ret->pool = pool; if (malloc_mutex_init(&ret->mutex)) { if (dalloc != NULL) dalloc(pool, ret); return (NULL); } ret->height = height; if (height > 1) { if ((height-1) * bits_per_level + bits_in_leaf > bits) { ret->level2bits[0] = (bits - bits_in_leaf) % bits_per_level; } else ret->level2bits[0] = bits_per_level; for (i = 1; i < height-1; i++) ret->level2bits[i] = bits_per_level; ret->level2bits[height-1] = bits_in_leaf; } else ret->level2bits[0] = bits; ret->root = (void**)alloc(pool, sizeof(void *) << ret->level2bits[0]); if (ret->root == NULL) { if (dalloc != NULL) dalloc(pool, ret); return (NULL); } memset(ret->root, 0, sizeof(void *) << ret->level2bits[0]); return (ret); } static void rtree_delete_subtree(rtree_t *rtree, void **node, unsigned level) { if (level < rtree->height - 1) { size_t nchildren, i; nchildren = ZU(1) << rtree->level2bits[level]; for (i = 0; i < nchildren; i++) { void **child = (void **)node[i]; if (child != NULL) rtree_delete_subtree(rtree, child, level + 1); } } if (rtree->dalloc) rtree->dalloc(rtree->pool, node); } void rtree_delete(rtree_t *rtree) { rtree_delete_subtree(rtree, rtree->root, 0); malloc_mutex_destroy(&rtree->mutex); if (rtree->dalloc) rtree->dalloc(rtree->pool, rtree); } void rtree_prefork(rtree_t *rtree) { malloc_mutex_prefork(&rtree->mutex); } void rtree_postfork_parent(rtree_t *rtree) { malloc_mutex_postfork_parent(&rtree->mutex); } void rtree_postfork_child(rtree_t *rtree) { malloc_mutex_postfork_child(&rtree->mutex); } vmem-1.8/src/jemalloc/src/stats.c000066400000000000000000000433311361505074100170020ustar00rootroot00000000000000#define JEMALLOC_STATS_C_ #include "jemalloc/internal/jemalloc_internal.h" #define CTL_GET(n, v, t) do { \ size_t sz = sizeof(t); \ xmallctl(n, v, &sz, NULL, 0); \ } while (0) #define CTL_P_GET_ARRAY(n, v, t, c) do { \ size_t mib[8]; \ size_t miblen = sizeof(mib) / sizeof(size_t); \ size_t sz = sizeof(t) * (c); \ xmallctlnametomib(n, mib, &miblen); \ mib[1] = p; \ xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \ } while (0) #define CTL_P_GET(n, v, t) CTL_P_GET_ARRAY(n, v, t, 1) #define CTL_PI_GET(n, v, t) do { \ size_t mib[8]; \ char buf[256]; \ snprintf(buf, sizeof(buf), n, p); \ size_t miblen = sizeof(mib) / sizeof(size_t); \ 
size_t sz = sizeof(t); \ xmallctlnametomib(buf, mib, &miblen); \ mib[1] = p; \ mib[4] = i; \ xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \ } while (0) #define CTL_PJ_GET(n, v, t) do { \ size_t mib[8]; \ char buf[256]; \ snprintf(buf, sizeof(buf), n, p); \ size_t miblen = sizeof(mib) / sizeof(size_t); \ size_t sz = sizeof(t); \ xmallctlnametomib(buf, mib, &miblen); \ mib[1] = p; \ mib[4] = j; \ xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \ } while (0) #define CTL_PIJ_GET(n, v, t) do { \ size_t mib[8]; \ char buf[256]; \ snprintf(buf, sizeof(buf), n, p); \ size_t miblen = sizeof(mib) / sizeof(size_t); \ size_t sz = sizeof(t); \ xmallctlnametomib(buf, mib, &miblen); \ mib[1] = p; \ mib[4] = i; \ mib[6] = j; \ xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \ } while (0) /******************************************************************************/ /* Data. */ bool opt_stats_print = false; /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static void stats_arena_bins_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned p, unsigned i); static void stats_arena_lruns_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned p, unsigned i); static void stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned p, unsigned i, bool bins, bool large); /******************************************************************************/ static void stats_arena_bins_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned p, unsigned i) { size_t page; bool config_tcache; unsigned nbins, j, gap_start; CTL_P_GET("pool.0.arenas.page", &page, size_t); CTL_P_GET("config.tcache", &config_tcache, bool); if (config_tcache) { malloc_cprintf(write_cb, cbopaque, "bins: bin size regs pgs allocated nmalloc" " ndalloc nrequests nfills nflushes" " newruns reruns curruns\n"); } else { malloc_cprintf(write_cb, cbopaque, "bins: bin size regs pgs allocated nmalloc" " ndalloc newruns reruns curruns\n"); } CTL_P_GET("pool.0.arenas.nbins", &nbins, unsigned); for (j = 0, gap_start = UINT_MAX; j < nbins; j++) { uint64_t nruns; CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.nruns", &nruns, uint64_t); if (nruns == 0) { if (gap_start == UINT_MAX) gap_start = j; } else { size_t reg_size, run_size, allocated; uint32_t nregs; uint64_t nmalloc, ndalloc, nrequests, nfills, nflushes; uint64_t reruns; size_t curruns; if (gap_start != UINT_MAX) { if (j > gap_start + 1) { /* Gap of more than one size class. */ malloc_cprintf(write_cb, cbopaque, "[%u..%u]\n", gap_start, j - 1); } else { /* Gap of one size class. 
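 *
 * (Added note, not part of the original source: bins with no runs
 * are not printed row by row; a run of unused size classes is
 * collapsed into a "[first..last]" marker in the branch above, while
 * a single skipped class, as in this branch, prints as "[index]".)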
*/ malloc_cprintf(write_cb, cbopaque, "[%u]\n", gap_start); } gap_start = UINT_MAX; } CTL_PJ_GET("pool.%u.arenas.bin.0.size", ®_size, size_t); CTL_PJ_GET("pool.%u.arenas.bin.0.nregs", &nregs, uint32_t); CTL_PJ_GET("pool.%u.arenas.bin.0.run_size", &run_size, size_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.allocated", &allocated, size_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.nmalloc", &nmalloc, uint64_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.ndalloc", &ndalloc, uint64_t); if (config_tcache) { CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.nrequests", &nrequests, uint64_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.nfills", &nfills, uint64_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.nflushes", &nflushes, uint64_t); } CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.nreruns", &reruns, uint64_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.bins.0.curruns", &curruns, size_t); if (config_tcache) { malloc_cprintf(write_cb, cbopaque, "%13u %5zu %4u %3zu %12zu %12"PRIu64 " %12"PRIu64" %12"PRIu64" %12"PRIu64 " %12"PRIu64" %12"PRIu64" %12"PRIu64 " %12zu\n", j, reg_size, nregs, run_size / page, allocated, nmalloc, ndalloc, nrequests, nfills, nflushes, nruns, reruns, curruns); } else { malloc_cprintf(write_cb, cbopaque, "%13u %5zu %4u %3zu %12zu %12"PRIu64 " %12"PRIu64" %12"PRIu64" %12"PRIu64 " %12zu\n", j, reg_size, nregs, run_size / page, allocated, nmalloc, ndalloc, nruns, reruns, curruns); } } } if (gap_start != UINT_MAX) { if (j > gap_start + 1) { /* Gap of more than one size class. */ malloc_cprintf(write_cb, cbopaque, "[%u..%u]\n", gap_start, j - 1); } else { /* Gap of one size class. */ malloc_cprintf(write_cb, cbopaque, "[%u]\n", gap_start); } } } static void stats_arena_lruns_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned p, unsigned i) { size_t page, nlruns, j; ssize_t gap_start; CTL_P_GET("pool.0.arenas.page", &page, size_t); malloc_cprintf(write_cb, cbopaque, "large: size pages nmalloc ndalloc nrequests" " curruns\n"); CTL_P_GET("pool.0.arenas.nlruns", &nlruns, size_t); for (j = 0, gap_start = -1; j < nlruns; j++) { uint64_t nmalloc, ndalloc, nrequests; size_t run_size, curruns; CTL_PIJ_GET("pool.%u.stats.arenas.0.lruns.0.nmalloc", &nmalloc, uint64_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.lruns.0.ndalloc", &ndalloc, uint64_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.lruns.0.nrequests", &nrequests, uint64_t); if (nrequests == 0) { if (gap_start == -1) gap_start = j; } else { CTL_PJ_GET("pool.%u.arenas.lrun.0.size", &run_size, size_t); CTL_PIJ_GET("pool.%u.stats.arenas.0.lruns.0.curruns", &curruns, size_t); if (gap_start != -1) { malloc_cprintf(write_cb, cbopaque, "[%zu]\n", j - gap_start); gap_start = -1; } malloc_cprintf(write_cb, cbopaque, "%13zu %5zu %12"PRIu64" %12"PRIu64" %12"PRIu64 " %12zu\n", run_size, run_size / page, nmalloc, ndalloc, nrequests, curruns); } } if (gap_start != -1) malloc_cprintf(write_cb, cbopaque, "[%zu]\n", j - gap_start); } static void stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque, unsigned p, unsigned i, bool bins, bool large) { unsigned nthreads; const char *dss; size_t page, pactive, pdirty, mapped; uint64_t npurge, nmadvise, purged; size_t small_allocated; uint64_t small_nmalloc, small_ndalloc, small_nrequests; size_t large_allocated; uint64_t large_nmalloc, large_ndalloc, large_nrequests; size_t huge_allocated; uint64_t huge_nmalloc, huge_ndalloc, huge_nrequests; CTL_P_GET("pool.0.arenas.page", &page, size_t); CTL_PI_GET("pool.%u.stats.arenas.0.nthreads", &nthreads, unsigned); malloc_cprintf(write_cb, cbopaque, 
"assigned threads: %u\n", nthreads); CTL_PI_GET("pool.%u.stats.arenas.0.dss", &dss, const char *); malloc_cprintf(write_cb, cbopaque, "dss allocation precedence: %s\n", dss); CTL_PI_GET("pool.%u.stats.arenas.0.pactive", &pactive, size_t); CTL_PI_GET("pool.%u.stats.arenas.0.pdirty", &pdirty, size_t); CTL_PI_GET("pool.%u.stats.arenas.0.npurge", &npurge, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.nmadvise", &nmadvise, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.purged", &purged, uint64_t); malloc_cprintf(write_cb, cbopaque, "dirty pages: %zu:%zu active:dirty, %"PRIu64" sweep%s," " %"PRIu64" madvise%s, %"PRIu64" purged\n", pactive, pdirty, npurge, npurge == 1 ? "" : "s", nmadvise, nmadvise == 1 ? "" : "s", purged); malloc_cprintf(write_cb, cbopaque, " allocated nmalloc ndalloc nrequests\n"); CTL_PI_GET("pool.%u.stats.arenas.0.small.allocated", &small_allocated, size_t); CTL_PI_GET("pool.%u.stats.arenas.0.small.nmalloc", &small_nmalloc, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.small.ndalloc", &small_ndalloc, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.small.nrequests", &small_nrequests, uint64_t); malloc_cprintf(write_cb, cbopaque, "small: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n", small_allocated, small_nmalloc, small_ndalloc, small_nrequests); CTL_PI_GET("pool.%u.stats.arenas.0.large.allocated", &large_allocated, size_t); CTL_PI_GET("pool.%u.stats.arenas.0.large.nmalloc", &large_nmalloc, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.large.ndalloc", &large_ndalloc, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.large.nrequests", &large_nrequests, uint64_t); malloc_cprintf(write_cb, cbopaque, "large: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n", large_allocated, large_nmalloc, large_ndalloc, large_nrequests); CTL_PI_GET("pool.%u.stats.arenas.0.huge.allocated", &huge_allocated, size_t); CTL_PI_GET("pool.%u.stats.arenas.0.huge.nmalloc", &huge_nmalloc, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.huge.ndalloc", &huge_ndalloc, uint64_t); CTL_PI_GET("pool.%u.stats.arenas.0.huge.nrequests", &huge_nrequests, uint64_t); malloc_cprintf(write_cb, cbopaque, "huge: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n", huge_allocated, huge_nmalloc, huge_ndalloc, huge_nrequests); malloc_cprintf(write_cb, cbopaque, "total: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n", small_allocated + large_allocated + huge_allocated, small_nmalloc + large_nmalloc + huge_nmalloc, small_ndalloc + large_ndalloc + huge_ndalloc, small_nrequests + large_nrequests + huge_nrequests); malloc_cprintf(write_cb, cbopaque, "active: %12zu\n", pactive * page); CTL_PI_GET("pool.%u.stats.arenas.0.mapped", &mapped, size_t); malloc_cprintf(write_cb, cbopaque, "mapped: %12zu\n", mapped); if (bins) stats_arena_bins_print(write_cb, cbopaque, p, i); if (large) stats_arena_lruns_print(write_cb, cbopaque, p, i); } void stats_print(pool_t *pool, void (*write_cb)(void *, const char *), void *cbopaque, const char *opts) { int err; uint64_t epoch; size_t u64sz; bool general = true; bool merged = true; bool unmerged = true; bool bins = true; bool large = true; unsigned p = pool->pool_id; /* * Refresh stats, in case mallctl() was called by the application. * * Check for OOM here, since refreshing the ctl cache can trigger * allocation. In practice, none of the subsequent mallctl()-related * calls in this function will cause OOM if this one succeeds. 
* */ epoch = 1; u64sz = sizeof(uint64_t); err = je_mallctl("epoch", &epoch, &u64sz, &epoch, sizeof(uint64_t)); if (err != 0) { if (err == EAGAIN) { malloc_write(": Memory allocation failure in " "mallctl(\"epoch\", ...)\n"); return; } malloc_write(": Failure in mallctl(\"epoch\", " "...)\n"); abort(); } if (opts != NULL) { unsigned i; for (i = 0; opts[i] != '\0'; i++) { switch (opts[i]) { case 'g': general = false; break; case 'm': merged = false; break; case 'a': unmerged = false; break; case 'b': bins = false; break; case 'l': large = false; break; default:; } } } malloc_cprintf(write_cb, cbopaque, "___ Begin jemalloc statistics ___\n"); if (general) { int err; const char *cpv; bool bv; unsigned uv; ssize_t ssv; size_t sv, bsz, ssz, sssz, cpsz; bsz = sizeof(bool); ssz = sizeof(size_t); sssz = sizeof(ssize_t); cpsz = sizeof(const char *); CTL_GET("version", &cpv, const char *); malloc_cprintf(write_cb, cbopaque, "Version: %s\n", cpv); CTL_GET("config.debug", &bv, bool); malloc_cprintf(write_cb, cbopaque, "Assertions %s\n", bv ? "enabled" : "disabled"); #define OPT_WRITE_BOOL(n) \ if ((err = je_mallctl("opt."#n, &bv, &bsz, NULL, 0)) \ == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %s\n", bv ? "true" : "false"); \ } #define OPT_WRITE_SIZE_T(n) \ if ((err = je_mallctl("opt."#n, &sv, &ssz, NULL, 0)) \ == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %zu\n", sv); \ } #define OPT_WRITE_SSIZE_T(n) \ if ((err = je_mallctl("opt."#n, &ssv, &sssz, NULL, 0)) \ == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": %zd\n", ssv); \ } #define OPT_WRITE_CHAR_P(n) \ if ((err = je_mallctl("opt."#n, &cpv, &cpsz, NULL, 0)) \ == 0) { \ malloc_cprintf(write_cb, cbopaque, \ " opt."#n": \"%s\"\n", cpv); \ } malloc_cprintf(write_cb, cbopaque, "Run-time option settings:\n"); OPT_WRITE_BOOL(abort) OPT_WRITE_SIZE_T(lg_chunk) OPT_WRITE_CHAR_P(dss) OPT_WRITE_SIZE_T(narenas) OPT_WRITE_SSIZE_T(lg_dirty_mult) OPT_WRITE_BOOL(stats_print) OPT_WRITE_BOOL(junk) OPT_WRITE_SIZE_T(quarantine) OPT_WRITE_BOOL(redzone) OPT_WRITE_BOOL(zero) OPT_WRITE_BOOL(utrace) OPT_WRITE_BOOL(valgrind) OPT_WRITE_BOOL(xmalloc) OPT_WRITE_BOOL(tcache) OPT_WRITE_SSIZE_T(lg_tcache_max) OPT_WRITE_BOOL(prof) OPT_WRITE_CHAR_P(prof_prefix) OPT_WRITE_BOOL(prof_active) OPT_WRITE_SSIZE_T(lg_prof_sample) OPT_WRITE_BOOL(prof_accum) OPT_WRITE_SSIZE_T(lg_prof_interval) OPT_WRITE_BOOL(prof_gdump) OPT_WRITE_BOOL(prof_final) OPT_WRITE_BOOL(prof_leak) #undef OPT_WRITE_BOOL #undef OPT_WRITE_SIZE_T #undef OPT_WRITE_SSIZE_T #undef OPT_WRITE_CHAR_P malloc_cprintf(write_cb, cbopaque, "CPUs: %u\n", ncpus); CTL_P_GET("pool.0.arenas.narenas", &uv, unsigned); malloc_cprintf(write_cb, cbopaque, "Arenas: %u\n", uv); malloc_cprintf(write_cb, cbopaque, "Pointer size: %zu\n", sizeof(void *)); CTL_P_GET("pool.0.arenas.quantum", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Quantum size: %zu\n", sv); CTL_P_GET("pool.0.arenas.page", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Page size: %zu\n", sv); CTL_P_GET("opt.lg_dirty_mult", &ssv, ssize_t); if (ssv >= 0) { malloc_cprintf(write_cb, cbopaque, "Min active:dirty page ratio per arena: %u:1\n", (1U << ssv)); } else { malloc_cprintf(write_cb, cbopaque, "Min active:dirty page ratio per arena: N/A\n"); } if ((err = je_mallctl("arenas.tcache_max", &sv, &ssz, NULL, 0)) == 0) { malloc_cprintf(write_cb, cbopaque, "Maximum thread-cached size class: %zu\n", sv); } if ((err = je_mallctl("opt.prof", &bv, &bsz, NULL, 0)) == 0 && bv) { CTL_GET("opt.lg_prof_sample", &sv, size_t); malloc_cprintf(write_cb, 
cbopaque, "Average profile sample interval: %"PRIu64 " (2^%zu)\n", (((uint64_t)1U) << sv), sv); CTL_GET("opt.lg_prof_interval", &ssv, ssize_t); if (ssv >= 0) { malloc_cprintf(write_cb, cbopaque, "Average profile dump interval: %"PRIu64 " (2^%zd)\n", (((uint64_t)1U) << ssv), ssv); } else { malloc_cprintf(write_cb, cbopaque, "Average profile dump interval: N/A\n"); } } CTL_GET("opt.lg_chunk", &sv, size_t); malloc_cprintf(write_cb, cbopaque, "Chunk size: %zu (2^%zu)\n", (ZU(1) << sv), sv); } if (config_stats) { size_t *cactive; size_t allocated, active, mapped; size_t chunks_current, chunks_high; uint64_t chunks_total; CTL_P_GET("pool.0.stats.cactive", &cactive, size_t *); CTL_P_GET("pool.0.stats.allocated", &allocated, size_t); CTL_P_GET("pool.0.stats.active", &active, size_t); CTL_P_GET("pool.0.stats.mapped", &mapped, size_t); malloc_cprintf(write_cb, cbopaque, "Allocated: %zu, active: %zu, mapped: %zu\n", allocated, active, mapped); malloc_cprintf(write_cb, cbopaque, "Current active ceiling: %zu\n", atomic_read_z(cactive)); /* Print chunk stats. */ CTL_P_GET("pool.0.stats.chunks.total", &chunks_total, uint64_t); CTL_P_GET("pool.0.stats.chunks.high", &chunks_high, size_t); CTL_P_GET("pool.0.stats.chunks.current", &chunks_current, size_t); malloc_cprintf(write_cb, cbopaque, "chunks: nchunks " "highchunks curchunks\n"); malloc_cprintf(write_cb, cbopaque, " %13"PRIu64" %12zu %12zu\n", chunks_total, chunks_high, chunks_current); if (merged) { unsigned narenas; CTL_P_GET("pool.0.arenas.narenas", &narenas, unsigned); { VARIABLE_ARRAY(bool, initialized, narenas); unsigned i, ninitialized; CTL_P_GET_ARRAY("pool.0.arenas.initialized", initialized, bool, narenas); for (i = ninitialized = 0; i < narenas; i++) { if (initialized[i]) ninitialized++; } if (ninitialized > 1 || unmerged == false) { /* Print merged arena stats. */ malloc_cprintf(write_cb, cbopaque, "\nMerged arenas stats:\n"); stats_arena_print(write_cb, cbopaque, p, narenas, bins, large); } } } if (unmerged) { unsigned narenas; /* Print stats for each arena. */ CTL_P_GET("pool.0.arenas.narenas", &narenas, unsigned); { VARIABLE_ARRAY(bool, initialized, narenas); unsigned i; CTL_P_GET_ARRAY("pool.0.arenas.initialized", initialized, bool, narenas); for (i = 0; i < narenas; i++) { if (initialized[i]) { malloc_cprintf(write_cb, cbopaque, "\narenas[%u]:\n", i); stats_arena_print(write_cb, cbopaque, p, i, bins, large); } } } } } malloc_cprintf(write_cb, cbopaque, "--- End jemalloc statistics ---\n"); } vmem-1.8/src/jemalloc/src/tcache.c000066400000000000000000000370121361505074100170720ustar00rootroot00000000000000#define JEMALLOC_TCACHE_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ #define ARR_INITIALIZER JEMALLOC_ARG_CONCAT({0}) malloc_tsd_data(, tcache, tsd_tcache_t, TSD_TCACHE_INITIALIZER) malloc_tsd_data(, tcache_enabled, tcache_enabled_t, tcache_enabled_default) bool opt_tcache = true; ssize_t opt_lg_tcache_max = LG_TCACHE_MAXCLASS_DEFAULT; tcache_bin_info_t *tcache_bin_info; static unsigned stack_nelms; /* Total stack elms per tcache. 
*/ size_t nhbins; size_t tcache_maxclass; /******************************************************************************/ size_t tcache_salloc(const void *ptr) { return (arena_salloc(ptr, false)); } void tcache_event_hard(tcache_t *tcache) { size_t binind = tcache->next_gc_bin; tcache_bin_t *tbin = &tcache->tbins[binind]; tcache_bin_info_t *tbin_info = &tcache_bin_info[binind]; if (tbin->low_water > 0) { /* * Flush (ceiling) 3/4 of the objects below the low water mark. */ if (binind < NBINS) { tcache_bin_flush_small(tbin, binind, tbin->ncached - tbin->low_water + (tbin->low_water >> 2), tcache); } else { tcache_bin_flush_large(tbin, binind, tbin->ncached - tbin->low_water + (tbin->low_water >> 2), tcache); } /* * Reduce fill count by 2X. Limit lg_fill_div such that the * fill count is always at least 1. */ if ((tbin_info->ncached_max >> (tbin->lg_fill_div+1)) >= 1) tbin->lg_fill_div++; } else if (tbin->low_water < 0) { /* * Increase fill count by 2X. Make sure lg_fill_div stays * greater than 0. */ if (tbin->lg_fill_div > 1) tbin->lg_fill_div--; } tbin->low_water = tbin->ncached; tcache->next_gc_bin++; if (tcache->next_gc_bin == nhbins) tcache->next_gc_bin = 0; tcache->ev_cnt = 0; } void * tcache_alloc_small_hard(tcache_t *tcache, tcache_bin_t *tbin, size_t binind) { void *ret; arena_tcache_fill_small(tcache->arena, tbin, binind, config_prof ? tcache->prof_accumbytes : 0); if (config_prof) tcache->prof_accumbytes = 0; ret = tcache_alloc_easy(tbin); return (ret); } void tcache_bin_flush_small(tcache_bin_t *tbin, size_t binind, unsigned rem, tcache_t *tcache) { void *ptr; unsigned i, nflush, ndeferred; bool merged_stats = false; assert(binind < NBINS); assert(rem <= tbin->ncached); for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) { /* Lock the arena bin associated with the first object. */ arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( tbin->avail[0]); arena_t *arena = chunk->arena; arena_bin_t *bin = &arena->bins[binind]; if (config_prof && arena == tcache->arena) { if (arena_prof_accum(arena, tcache->prof_accumbytes)) prof_idump(); tcache->prof_accumbytes = 0; } malloc_mutex_lock(&bin->lock); if (config_stats && arena == tcache->arena) { assert(merged_stats == false); merged_stats = true; bin->stats.nflushes++; bin->stats.nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } ndeferred = 0; for (i = 0; i < nflush; i++) { ptr = tbin->avail[i]; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk->arena == arena) { size_t pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE; arena_chunk_map_t *mapelm = arena_mapp_get(chunk, pageind); if (config_fill && opt_junk) { arena_alloc_junk_small(ptr, &arena_bin_info[binind], true); } arena_dalloc_bin_locked(arena, chunk, ptr, mapelm); } else { /* * This object was allocated via a different * arena bin than the one that is currently * locked. Stash the object, so that it can be * handled in a future pass. */ tbin->avail[ndeferred] = ptr; ndeferred++; } } malloc_mutex_unlock(&bin->lock); } if (config_stats && merged_stats == false) { /* * The flush loop didn't happen to flush to this thread's * arena, so the stats didn't get merged. Manually do so now. 
*/ arena_bin_t *bin = &tcache->arena->bins[binind]; malloc_mutex_lock(&bin->lock); bin->stats.nflushes++; bin->stats.nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; malloc_mutex_unlock(&bin->lock); } memmove(tbin->avail, &tbin->avail[tbin->ncached - rem], rem * sizeof(void *)); tbin->ncached = rem; if ((int)tbin->ncached < tbin->low_water) tbin->low_water = tbin->ncached; } void tcache_bin_flush_large(tcache_bin_t *tbin, size_t binind, unsigned rem, tcache_t *tcache) { void *ptr; unsigned i, nflush, ndeferred; bool merged_stats = false; assert(binind < nhbins); assert(rem <= tbin->ncached); for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) { /* Lock the arena associated with the first object. */ arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE( tbin->avail[0]); arena_t *arena = chunk->arena; UNUSED bool idump; if (config_prof) idump = false; malloc_mutex_lock(&arena->lock); if ((config_prof || config_stats) && arena == tcache->arena) { if (config_prof) { idump = arena_prof_accum_locked(arena, tcache->prof_accumbytes); tcache->prof_accumbytes = 0; } if (config_stats) { merged_stats = true; arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.lstats[binind - NBINS].nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } } ndeferred = 0; for (i = 0; i < nflush; i++) { ptr = tbin->avail[i]; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk->arena == arena) arena_dalloc_large_locked(arena, chunk, ptr); else { /* * This object was allocated via a different * arena than the one that is currently locked. * Stash the object, so that it can be handled * in a future pass. */ tbin->avail[ndeferred] = ptr; ndeferred++; } } malloc_mutex_unlock(&arena->lock); if (config_prof && idump) prof_idump(); } if (config_stats && merged_stats == false) { /* * The flush loop didn't happen to flush to this thread's * arena, so the stats didn't get merged. Manually do so now. */ arena_t *arena = tcache->arena; malloc_mutex_lock(&arena->lock); arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.lstats[binind - NBINS].nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; malloc_mutex_unlock(&arena->lock); } memmove(tbin->avail, &tbin->avail[tbin->ncached - rem], rem * sizeof(void *)); tbin->ncached = rem; if ((int)tbin->ncached < tbin->low_water) tbin->low_water = tbin->ncached; } void tcache_arena_associate(tcache_t *tcache, arena_t *arena) { if (config_stats) { /* Link into list of extant tcaches. */ malloc_mutex_lock(&arena->lock); ql_elm_new(tcache, link); ql_tail_insert(&arena->tcache_ql, tcache, link); malloc_mutex_unlock(&arena->lock); } tcache->arena = arena; } void tcache_arena_dissociate(tcache_t *tcache) { if (config_stats) { /* Unlink from list of extant tcaches. */ malloc_mutex_lock(&tcache->arena->lock); ql_remove(&tcache->arena->tcache_ql, tcache, link); tcache_stats_merge(tcache, tcache->arena); malloc_mutex_unlock(&tcache->arena->lock); } } tcache_t * tcache_get_hard(tcache_t *tcache, pool_t *pool, bool create) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); if (tcache == NULL) { if (create == false) { /* * Creating a tcache here would cause * allocation as a side effect of free(). * Ordinarily that would be okay since * tcache_create() failure is a soft failure * that doesn't propagate. However, if TLS * data are freed via free() as in glibc, * subtle corruption could result from setting * a TLS variable after its backing memory is * freed. 
*/ return (NULL); } if (tcache_enabled_get() == false) { tcache_enabled_set(false); /* Memoize. */ return (NULL); } return (tcache_create(choose_arena(&dummy))); } if (tcache == TCACHE_STATE_PURGATORY) { /* * Make a note that an allocator function was called * after tcache_thread_cleanup() was called. */ tsd_tcache_t *tsd = tcache_tsd_get(); tcache = TCACHE_STATE_REINCARNATED; tsd->seqno[pool->pool_id] = pool->seqno; tsd->tcaches[pool->pool_id] = tcache; return (NULL); } if (tcache == TCACHE_STATE_REINCARNATED) return (NULL); not_reached(); return (NULL); } tcache_t * tcache_create(arena_t *arena) { tcache_t *tcache; size_t size, stack_offset; unsigned i; tsd_tcache_t *tsd = tcache_tsd_get(); size = offsetof(tcache_t, tbins) + (sizeof(tcache_bin_t) * nhbins); /* Naturally align the pointer stacks. */ size = PTR_CEILING(size); stack_offset = size; size += stack_nelms * sizeof(void *); /* * Round up to the nearest multiple of the cacheline size, in order to * avoid the possibility of false cacheline sharing. * * That this works relies on the same logic as in ipalloc(), but we * cannot directly call ipalloc() here due to tcache bootstrapping * issues. */ size = (size + CACHELINE_MASK) & (-CACHELINE); if (size <= SMALL_MAXCLASS) tcache = (tcache_t *)arena_malloc_small(arena, size, true); else if (size <= tcache_maxclass) tcache = (tcache_t *)arena_malloc_large(arena, size, true); else tcache = (tcache_t *)icalloct(size, false, arena); if (tcache == NULL) return (NULL); tcache_arena_associate(tcache, arena); assert((TCACHE_NSLOTS_SMALL_MAX & 1U) == 0); for (i = 0; i < nhbins; i++) { tcache->tbins[i].lg_fill_div = 1; tcache->tbins[i].avail = (void **)((uintptr_t)tcache + (uintptr_t)stack_offset); stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *); } tsd->seqno[arena->pool->pool_id] = arena->pool->seqno; tsd->tcaches[arena->pool->pool_id] = tcache; return (tcache); } void tcache_destroy(tcache_t *tcache) { unsigned i; size_t tcache_size; tcache_arena_dissociate(tcache); for (i = 0; i < NBINS; i++) { tcache_bin_t *tbin = &tcache->tbins[i]; tcache_bin_flush_small(tbin, i, 0, tcache); if (config_stats && tbin->tstats.nrequests != 0) { arena_t *arena = tcache->arena; arena_bin_t *bin = &arena->bins[i]; malloc_mutex_lock(&bin->lock); bin->stats.nrequests += tbin->tstats.nrequests; malloc_mutex_unlock(&bin->lock); } } for (; i < nhbins; i++) { tcache_bin_t *tbin = &tcache->tbins[i]; tcache_bin_flush_large(tbin, i, 0, tcache); if (config_stats && tbin->tstats.nrequests != 0) { arena_t *arena = tcache->arena; malloc_mutex_lock(&arena->lock); arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.lstats[i - NBINS].nrequests += tbin->tstats.nrequests; malloc_mutex_unlock(&arena->lock); } } if (config_prof && tcache->prof_accumbytes > 0 && arena_prof_accum(tcache->arena, tcache->prof_accumbytes)) prof_idump(); tcache_size = arena_salloc(tcache, false); if (tcache_size <= SMALL_MAXCLASS) { arena_chunk_t *chunk = CHUNK_ADDR2BASE(tcache); arena_t *arena = chunk->arena; size_t pageind = ((uintptr_t)tcache - (uintptr_t)chunk) >> LG_PAGE; arena_chunk_map_t *mapelm = arena_mapp_get(chunk, pageind); arena_dalloc_bin(arena, chunk, tcache, pageind, mapelm); } else if (tcache_size <= tcache_maxclass) { arena_chunk_t *chunk = CHUNK_ADDR2BASE(tcache); arena_t *arena = chunk->arena; arena_dalloc_large(arena, chunk, tcache); } else idalloct(tcache, false); } bool tcache_tsd_extend(tsd_tcache_t *tsd, unsigned len) { if (len == UINT_MAX) return (true); assert(len < POOLS_MAX); /* round up the new 
length to the nearest power of 2... */ size_t npools = 1ULL << (32 - __builtin_clz(len + 1)); /* ... but not less than */ if (npools < POOLS_MIN) npools = POOLS_MIN; unsigned *tseqno = base_malloc_fn(npools * sizeof (unsigned)); if (tseqno == NULL) return (true); if (tsd->seqno != NULL) memcpy(tseqno, tsd->seqno, tsd->npools * sizeof (unsigned)); memset(&tseqno[tsd->npools], 0, (npools - tsd->npools) * sizeof (unsigned)); tcache_t **tcaches = base_malloc_fn(npools * sizeof (tcache_t *)); if (tcaches == NULL) { base_free_fn(tseqno); return (true); } if (tsd->tcaches != NULL) memcpy(tcaches, tsd->tcaches, tsd->npools * sizeof (tcache_t *)); memset(&tcaches[tsd->npools], 0, (npools - tsd->npools) * sizeof (tcache_t *)); base_free_fn(tsd->seqno); tsd->seqno = tseqno; base_free_fn(tsd->tcaches); tsd->tcaches = tcaches; tsd->npools = npools; return (false); } void tcache_thread_cleanup(void *arg) { int i; tsd_tcache_t *tsd_array = arg; malloc_mutex_lock(&pools_lock); for (i = 0; i < tsd_array->npools; ++i) { tcache_t *tcache = tsd_array->tcaches[i]; if (tcache != NULL) { if (tcache == TCACHE_STATE_DISABLED) { /* Do nothing. */ } else if (tcache == TCACHE_STATE_REINCARNATED) { /* * Another destructor called an allocator function after this * destructor was called. Reset tcache to * TCACHE_STATE_PURGATORY in order to receive another callback. */ tsd_array->tcaches[i] = TCACHE_STATE_PURGATORY; } else if (tcache == TCACHE_STATE_PURGATORY) { /* * The previous time this destructor was called, we set the key * to TCACHE_STATE_PURGATORY so that other destructors wouldn't * cause re-creation of the tcache. This time, do nothing, so * that the destructor will not be called again. */ } else if (tcache != NULL) { assert(tcache != TCACHE_STATE_PURGATORY); if (pools[i] != NULL && tsd_array->seqno[i] == pools[i]->seqno) tcache_destroy(tcache); tsd_array->tcaches[i] = TCACHE_STATE_PURGATORY; } } } base_free_fn(tsd_array->seqno); base_free_fn(tsd_array->tcaches); tsd_array->npools = 0; malloc_mutex_unlock(&pools_lock); } /* Caller must own arena->lock. */ void tcache_stats_merge(tcache_t *tcache, arena_t *arena) { unsigned i; cassert(config_stats); /* Merge and reset tcache stats. */ for (i = 0; i < NBINS; i++) { arena_bin_t *bin = &arena->bins[i]; tcache_bin_t *tbin = &tcache->tbins[i]; malloc_mutex_lock(&bin->lock); bin->stats.nrequests += tbin->tstats.nrequests; malloc_mutex_unlock(&bin->lock); tbin->tstats.nrequests = 0; } for (; i < nhbins; i++) { malloc_large_stats_t *lstats = &arena->stats.lstats[i - NBINS]; tcache_bin_t *tbin = &tcache->tbins[i]; arena->stats.nrequests_large += tbin->tstats.nrequests; lstats->nrequests += tbin->tstats.nrequests; tbin->tstats.nrequests = 0; } } bool tcache_boot0(void) { unsigned i; /* Array still initialized */ if (tcache_bin_info != NULL) return (false); /* * If necessary, clamp opt_lg_tcache_max, now that arena_maxclass is * known. */ if (opt_lg_tcache_max < 0 || (1ULL << opt_lg_tcache_max) < SMALL_MAXCLASS) tcache_maxclass = SMALL_MAXCLASS; else if ((1ULL << opt_lg_tcache_max) > arena_maxclass) tcache_maxclass = arena_maxclass; else tcache_maxclass = (1ULL << opt_lg_tcache_max); nhbins = NBINS + (tcache_maxclass >> LG_PAGE); /* Initialize tcache_bin_info. 
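	 * nhbins covers the NBINS small bins plus one large bin for each
	 * page-sized step up to tcache_maxclass (for example, with 4 KiB
	 * pages and a 32 KiB tcache_maxclass this adds 8 large bins).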
*/ tcache_bin_info = (tcache_bin_info_t *)base_alloc(&base_pool, nhbins * sizeof(tcache_bin_info_t)); if (tcache_bin_info == NULL) return (true); stack_nelms = 0; for (i = 0; i < NBINS; i++) { if ((arena_bin_info[i].nregs << 1) <= TCACHE_NSLOTS_SMALL_MAX) { tcache_bin_info[i].ncached_max = (arena_bin_info[i].nregs << 1); } else { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_SMALL_MAX; } stack_nelms += tcache_bin_info[i].ncached_max; } for (; i < nhbins; i++) { tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_LARGE; stack_nelms += tcache_bin_info[i].ncached_max; } return (false); } bool tcache_boot1(void) { if (tcache_tsd_boot() || tcache_enabled_tsd_boot()) return (true); return (false); } vmem-1.8/src/jemalloc/src/tsd.c000066400000000000000000000054101361505074100164320ustar00rootroot00000000000000#define JEMALLOC_TSD_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Data. */ static unsigned ncleanups; static malloc_tsd_cleanup_t cleanups[MALLOC_TSD_CLEANUPS_MAX]; /******************************************************************************/ void * malloc_tsd_malloc(size_t size) { /* Avoid choose_arena() in order to dodge bootstrapping issues. */ return (arena_malloc(base_pool.arenas[0], size, false, false)); } void malloc_tsd_dalloc(void *wrapper) { idalloct(wrapper, false); } void malloc_tsd_no_cleanup(void *arg) { not_reached(); } #if defined(JEMALLOC_MALLOC_THREAD_CLEANUP) || defined(_WIN32) #ifndef _WIN32 JEMALLOC_EXPORT #endif void _malloc_thread_cleanup(void) { bool pending[MALLOC_TSD_CLEANUPS_MAX], again; unsigned i; for (i = 0; i < ncleanups; i++) pending[i] = true; do { again = false; for (i = 0; i < ncleanups; i++) { if (pending[i]) { pending[i] = cleanups[i](); if (pending[i]) again = true; } } } while (again); } #endif void malloc_tsd_cleanup_register(bool (*f)(void)) { assert(ncleanups < MALLOC_TSD_CLEANUPS_MAX); cleanups[ncleanups] = f; ncleanups++; } void malloc_tsd_boot(void) { ncleanups = 0; } #ifdef _WIN32 static BOOL WINAPI _tls_callback(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) { switch (fdwReason) { #ifdef JEMALLOC_LAZY_LOCK case DLL_THREAD_ATTACH: isthreaded = true; break; #endif case DLL_THREAD_DETACH: _malloc_thread_cleanup(); break; default: break; } return (true); } #ifdef _MSC_VER # ifdef _M_IX86 # pragma comment(linker, "/INCLUDE:__tls_used") # else # pragma comment(linker, "/INCLUDE:_tls_used") # endif # pragma section(".CRT$XLY",long,read) #endif JEMALLOC_SECTION(".CRT$XLY") JEMALLOC_ATTR(used) static const BOOL (WINAPI *tls_callback)(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) = _tls_callback; #endif #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ !defined(_WIN32)) void * tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block) { pthread_t self = pthread_self(); tsd_init_block_t *iter; /* Check whether this thread has already inserted into the list. */ malloc_mutex_lock(&head->lock); ql_foreach(iter, &head->blocks, link) { if (iter->thread == self) { malloc_mutex_unlock(&head->lock); return (iter->data); } } /* Insert block into list. 
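	 * The block remains listed until tsd_init_finish() removes it, so a
	 * re-entrant call from the same thread during initialization finds
	 * its own entry above and returns the in-progress data rather than
	 * recursing indefinitely.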
*/ ql_elm_new(block, link); block->thread = self; ql_tail_insert(&head->blocks, block, link); malloc_mutex_unlock(&head->lock); return (NULL); } void tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block) { malloc_mutex_lock(&head->lock); ql_remove(&head->blocks, block, link); malloc_mutex_unlock(&head->lock); } #endif vmem-1.8/src/jemalloc/src/util.c000066400000000000000000000334061361505074100166230ustar00rootroot00000000000000#define assert(e) do { \ if (config_debug && !(e)) { \ malloc_write(": Failed assertion\n"); \ abort(); \ } \ } while (0) #define not_reached() do { \ if (config_debug) { \ malloc_write(": Unreachable code reached\n"); \ abort(); \ } \ } while (0) #define not_implemented() do { \ if (config_debug) { \ malloc_write(": Not implemented\n"); \ abort(); \ } \ } while (0) #define JEMALLOC_UTIL_C_ #include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* Function prototypes for non-inline static functions. */ static void wrtmessage(void *cbopaque, const char *s); #define U2S_BUFSIZE ((1U << (LG_SIZEOF_INTMAX_T + 3)) + 1) static char *u2s(uintmax_t x, unsigned base, bool uppercase, char *s, size_t *slen_p); #define D2S_BUFSIZE (1 + U2S_BUFSIZE) static char *d2s(intmax_t x, char sign, char *s, size_t *slen_p); #define O2S_BUFSIZE (1 + U2S_BUFSIZE) static char *o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p); #define X2S_BUFSIZE (2 + U2S_BUFSIZE) static char *x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p); /******************************************************************************/ /* malloc_message() setup. */ static void wrtmessage(void *cbopaque, const char *s) { #ifdef SYS_write /* * Use syscall(2) rather than write(2) when possible in order to avoid * the possibility of memory allocation within libc. This is necessary * on FreeBSD; most operating systems do not have this problem though. */ UNUSED int result = syscall(SYS_write, STDERR_FILENO, s, strlen(s)); #else UNUSED int result = write(STDERR_FILENO, s, strlen(s)); #endif } JEMALLOC_EXPORT void (*je_malloc_message)(void *, const char *s); /* * Wrapper around malloc_message() that avoids the need for * je_malloc_message(...) throughout the code. */ void malloc_write(const char *s) { if (je_malloc_message != NULL) je_malloc_message(NULL, s); else wrtmessage(NULL, s); } /* * glibc provides a non-standard strerror_r() when _GNU_SOURCE is defined, so * provide a wrapper. */ int buferror(int err, char *buf, size_t buflen) { #ifdef _WIN32 FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, GetLastError(), 0, (LPSTR)buf, buflen, NULL); return (0); #elif defined(_GNU_SOURCE) char *b = strerror_r(err, buf, buflen); if (b != buf) { strncpy(buf, b, buflen); buf[buflen-1] = '\0'; } return (0); #else return (strerror_r(err, buf, buflen)); #endif } uintmax_t malloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base) { uintmax_t ret, digit; unsigned b; bool neg; const char *p, *ns; p = nptr; if (base < 0 || base == 1 || base > 36) { ns = p; set_errno(EINVAL); ret = UINTMAX_MAX; goto label_return; } b = base; /* Swallow leading whitespace and get sign, if any. */ neg = false; while (true) { switch (*p) { case '\t': case '\n': case '\v': case '\f': case '\r': case ' ': p++; break; case '-': neg = true; /* Fall through. */ case '+': p++; /* Fall through. */ default: goto label_prefix; } } /* Get prefix, if any. 
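	 * With no explicit base, "0" followed by an octal digit selects base
	 * 8 and "0x"/"0X" followed by a hex digit selects base 16; with an
	 * explicit base of 8 or 16 the matching prefix is simply consumed.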
*/ label_prefix: /* * Note where the first non-whitespace/sign character is so that it is * possible to tell whether any digits are consumed (e.g., " 0" vs. * " -x"). */ ns = p; if (*p == '0') { switch (p[1]) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': if (b == 0) b = 8; if (b == 8) p++; break; case 'X': case 'x': switch (p[2]) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case 'A': case 'B': case 'C': case 'D': case 'E': case 'F': case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': if (b == 0) b = 16; if (b == 16) p += 2; break; default: break; } break; default: p++; ret = 0; goto label_return; } } if (b == 0) b = 10; /* Convert. */ ret = 0; while ((*p >= '0' && *p <= '9' && (digit = *p - '0') < b) || (*p >= 'A' && *p <= 'Z' && (digit = 10 + *p - 'A') < b) || (*p >= 'a' && *p <= 'z' && (digit = 10 + *p - 'a') < b)) { uintmax_t pret = ret; ret *= b; ret += digit; if (ret < pret) { /* Overflow. */ set_errno(ERANGE); ret = UINTMAX_MAX; goto label_return; } p++; } if (neg) ret = -ret; if (p == ns) { /* No conversion performed. */ set_errno(EINVAL); ret = UINTMAX_MAX; goto label_return; } label_return: if (endptr != NULL) { if (p == ns) { /* No characters were converted. */ *endptr = (char *)nptr; } else *endptr = (char *)p; } return (ret); } static char * u2s(uintmax_t x, unsigned base, bool uppercase, char *s, size_t *slen_p) { unsigned i; i = U2S_BUFSIZE - 1; s[i] = '\0'; switch (base) { case 10: do { i--; s[i] = "0123456789"[x % (uint64_t)10]; x /= (uint64_t)10; } while (x > 0); break; case 16: { const char *digits = (uppercase) ? "0123456789ABCDEF" : "0123456789abcdef"; do { i--; s[i] = digits[x & 0xf]; x >>= 4; } while (x > 0); break; } default: { const char *digits = (uppercase) ? "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" : "0123456789abcdefghijklmnopqrstuvwxyz"; assert(base >= 2 && base <= 36); do { i--; s[i] = digits[x % (uint64_t)base]; x /= (uint64_t)base; } while (x > 0); }} *slen_p = U2S_BUFSIZE - 1 - i; return (&s[i]); } static char * d2s(intmax_t x, char sign, char *s, size_t *slen_p) { bool neg; if ((neg = (x < 0))) x = -x; s = u2s(x, 10, false, s, slen_p); if (neg) sign = '-'; switch (sign) { case '-': if (neg == false) break; /* Fall through. */ case ' ': case '+': s--; (*slen_p)++; *s = sign; break; default: not_reached(); } return (s); } static char * o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p) { s = u2s(x, 8, false, s, slen_p); if (alt_form && *s != '0') { s--; (*slen_p)++; *s = '0'; } return (s); } static char * x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p) { s = u2s(x, 16, uppercase, s, slen_p); if (alt_form) { s -= 2; (*slen_p) += 2; memcpy(s, uppercase ? "0X" : "0x", 2); } return (s); } int malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) { int ret; size_t i; const char *f; #define APPEND_C(c) do { \ if (i < size) \ str[i] = (c); \ i++; \ } while (0) #define APPEND_S(s, slen) do { \ if (i < size) { \ size_t cpylen = ((slen) <= size - i) ? (slen) : size - i; \ memcpy(&str[i], s, cpylen); \ } \ i += (slen); \ } while (0) #define APPEND_PADDED_S(s, slen, width, left_justify) do { \ /* Left padding. */ \ size_t pad_len = ((width) == -1) ? 0 : (((slen) < (size_t)(width)) ? \ (size_t)(width) - (slen) : 0); \ if ((left_justify) == false && pad_len != 0) { \ size_t j; \ for (j = 0; j < pad_len; j++) \ APPEND_C(' '); \ } \ /* Value. */ \ APPEND_S(s, slen); \ /* Right padding. 
*/ \ if ((left_justify) && pad_len != 0) { \ size_t j; \ for (j = 0; j < pad_len; j++) \ APPEND_C(' '); \ } \ } while (0) #define GET_ARG_NUMERIC(val, len) do { \ switch ((int)(len)) { \ case '?': \ val = va_arg(ap, int); \ break; \ case '?' | 0x80: \ val = va_arg(ap, unsigned int); \ break; \ case 'l': \ val = va_arg(ap, long); \ break; \ case 'l' | 0x80: \ val = va_arg(ap, unsigned long); \ break; \ case 'q': \ val = va_arg(ap, long long); \ break; \ case 'q' | 0x80: \ val = va_arg(ap, unsigned long long); \ break; \ case 'j': \ val = va_arg(ap, intmax_t); \ break; \ case 'j' | 0x80: \ val = va_arg(ap, uintmax_t); \ break; \ case 't': \ val = va_arg(ap, ptrdiff_t); \ break; \ case 'z': \ val = va_arg(ap, ssize_t); \ break; \ case 'z' | 0x80: \ val = va_arg(ap, size_t); \ break; \ case 'p': /* Synthetic; used for %p. */ \ val = va_arg(ap, uintptr_t); \ break; \ default: \ not_reached(); \ val = 0; \ } \ } while (0) i = 0; f = format; while (true) { switch (*f) { case '\0': goto label_out; case '%': { bool alt_form = false; bool left_justify = false; bool plus_space = false; bool plus_plus = false; int prec = -1; int width = -1; unsigned char len = '?'; f++; /* Flags. */ while (true) { switch (*f) { case '#': assert(alt_form == false); alt_form = true; break; case '-': assert(left_justify == false); left_justify = true; break; case ' ': assert(plus_space == false); plus_space = true; break; case '+': assert(plus_plus == false); plus_plus = true; break; default: goto label_width; } f++; } /* Width. */ label_width: switch (*f) { case '*': width = va_arg(ap, int); f++; if (width < 0) { left_justify = true; width = -width; } break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': { uintmax_t uwidth; set_errno(0); uwidth = malloc_strtoumax(f, (char **)&f, 10); assert(uwidth != UINTMAX_MAX || get_errno() != ERANGE); width = (int)uwidth; break; } default: break; } /* Width/precision separator. */ if (*f == '.') f++; else goto label_length; /* Precision. */ switch (*f) { case '*': prec = va_arg(ap, int); f++; break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': { uintmax_t uprec; set_errno(0); uprec = malloc_strtoumax(f, (char **)&f, 10); assert(uprec != UINTMAX_MAX || get_errno() != ERANGE); prec = (int)uprec; break; } default: break; } /* Length. */ label_length: switch (*f) { case 'l': f++; if (*f == 'l') { len = 'q'; f++; } else len = 'l'; break; case 'q': case 'j': case 't': case 'z': len = *f; f++; break; default: break; } /* Conversion specifier. */ switch (*f) { char *s; size_t slen; case '%': /* %% */ APPEND_C(*f); f++; break; case 'd': case 'i': { intmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[D2S_BUFSIZE]; GET_ARG_NUMERIC(val, len); s = d2s(val, (plus_plus ? '+' : (plus_space ? 
' ' : '-')), buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'o': { uintmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[O2S_BUFSIZE]; GET_ARG_NUMERIC(val, len | 0x80); s = o2s(val, alt_form, buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'u': { uintmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[U2S_BUFSIZE]; GET_ARG_NUMERIC(val, len | 0x80); s = u2s(val, 10, false, buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'x': case 'X': { uintmax_t val JEMALLOC_CC_SILENCE_INIT(0); char buf[X2S_BUFSIZE]; GET_ARG_NUMERIC(val, len | 0x80); s = x2s(val, alt_form, *f == 'X', buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } case 'c': { unsigned char val; char buf[2]; assert(len == '?' || len == 'l'); assert_not_implemented(len != 'l'); val = va_arg(ap, int); buf[0] = val; buf[1] = '\0'; APPEND_PADDED_S(buf, 1, width, left_justify); f++; break; } case 's': assert(len == '?' || len == 'l'); assert_not_implemented(len != 'l'); s = va_arg(ap, char *); if (s) { slen = (prec < 0) ? strlen(s) : (size_t)prec; APPEND_PADDED_S(s, slen, width, left_justify); } else { APPEND_S("(null)", 6); } f++; break; case 'p': { uintmax_t val; char buf[X2S_BUFSIZE]; GET_ARG_NUMERIC(val, 'p'); s = x2s(val, true, false, buf, &slen); APPEND_PADDED_S(s, slen, width, left_justify); f++; break; } default: not_reached(); } break; } default: { APPEND_C(*f); f++; break; }} } label_out: if (i < size) str[i] = '\0'; else str[size - 1] = '\0'; ret = i; #undef APPEND_C #undef APPEND_S #undef APPEND_PADDED_S #undef GET_ARG_NUMERIC return (ret); } JEMALLOC_ATTR(format(printf, 3, 4)) int malloc_snprintf(char *str, size_t size, const char *format, ...) { int ret; va_list ap; va_start(ap, format); ret = malloc_vsnprintf(str, size, format, ap); va_end(ap); return (ret); } void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque, const char *format, va_list ap) { char buf[MALLOC_PRINTF_BUFSIZE]; if (write_cb == NULL) { /* * The caller did not provide an alternate write_cb callback * function, so use the default one. malloc_write() is an * inline function, so use malloc_message() directly here. */ write_cb = (je_malloc_message != NULL) ? je_malloc_message : wrtmessage; cbopaque = NULL; } malloc_vsnprintf(buf, sizeof(buf), format, ap); write_cb(cbopaque, buf); } /* * Print to a callback function in such a way as to (hopefully) avoid memory * allocation. */ JEMALLOC_ATTR(format(printf, 3, 4)) void malloc_cprintf(void (*write_cb)(void *, const char *), void *cbopaque, const char *format, ...) { va_list ap; va_start(ap, format); malloc_vcprintf(write_cb, cbopaque, format, ap); va_end(ap); } /* Print to stderr in such a way as to avoid memory allocation. */ JEMALLOC_ATTR(format(printf, 1, 2)) void malloc_printf(const char *format, ...) { va_list ap; va_start(ap, format); malloc_vcprintf(NULL, NULL, format, ap); va_end(ap); } vmem-1.8/src/jemalloc/src/valgrind.c000066400000000000000000000011271361505074100174470ustar00rootroot00000000000000#include "jemalloc/internal/jemalloc_internal.h" #ifndef JEMALLOC_VALGRIND # error "This source file is for Valgrind integration." 
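/*
 * This file is compiled only when the build defines JEMALLOC_VALGRIND.
 * The out-of-line wrappers below forward to Valgrind's client-request
 * macros, giving the rest of the allocator plain functions for
 * annotating memory state (no-access, undefined, defined, freed).
 */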
#endif #include void valgrind_make_mem_noaccess(void *ptr, size_t usize) { (void)VALGRIND_MAKE_MEM_NOACCESS(ptr, usize); } void valgrind_make_mem_undefined(void *ptr, size_t usize) { (void)VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize); } void valgrind_make_mem_defined(void *ptr, size_t usize) { (void)VALGRIND_MAKE_MEM_DEFINED(ptr, usize); } void valgrind_freelike_block(void *ptr, size_t usize) { VALGRIND_FREELIKE_BLOCK(ptr, usize); } vmem-1.8/src/jemalloc/src/vector.c000066400000000000000000000034651361505074100171520ustar00rootroot00000000000000#define JEMALLOC_VECTOR_C_ #include "jemalloc/internal/jemalloc_internal.h" /* Round up the value to the closest power of two. */ static inline unsigned ceil_p2(unsigned n) { return 1 << (32 - __builtin_clz(n)); } /* Calculate how big should be the vector list array. */ static inline unsigned get_vec_part_len(unsigned n) { return MAX(ceil_p2(n), VECTOR_MIN_PART_SIZE); } /* * Find the vector list element in which the index should be stored, * if no such list exist return a pointer to a place in memory where it should * be allocated. */ static vec_list_t ** find_vec_list(vector_t *vector, int *index) { vec_list_t **vec_list; for (vec_list = &vector->list; *vec_list != NULL; vec_list = &(*vec_list)->next) { if (*index < (*vec_list)->length) break; *index -= (*vec_list)->length; } return vec_list; } /* Return a value from vector at index. */ void * vec_get(vector_t *vector, int index) { vec_list_t *vec_list = *find_vec_list(vector, &index); return (vec_list == NULL) ? NULL : vec_list->data[index]; } /* Set a value to vector at index. */ void vec_set(vector_t *vector, int index, void *val) { vec_list_t **vec_list = find_vec_list(vector, &index); /* * There's no array to put the value in, * which means a new one has to be allocated. */ if (*vec_list == NULL) { int vec_part_len = get_vec_part_len(index); *vec_list = base_malloc_fn(sizeof(vec_list_t) + sizeof(void *) * vec_part_len); if (*vec_list == NULL) return; (*vec_list)->next = NULL; (*vec_list)->length = vec_part_len; } (*vec_list)->data[index] = val; } /* Free all the memory in the container. */ void vec_delete(vector_t *vector) { vec_list_t *vec_list_next, *vec_list = vector->list; while (vec_list != NULL) { vec_list_next = vec_list->next; base_free_fn(vec_list); vec_list = vec_list_next; } }vmem-1.8/src/jemalloc/src/zone.c000066400000000000000000000167751361505074100166330ustar00rootroot00000000000000#include "jemalloc/internal/jemalloc_internal.h" #ifndef JEMALLOC_ZONE # error "This source file is for zones on Darwin (OS X)." #endif /* * The malloc_default_purgeable_zone function is only available on >= 10.6. * We need to check whether it is present at runtime, thus the weak_import. */ extern malloc_zone_t *malloc_default_purgeable_zone(void) JEMALLOC_ATTR(weak_import); /******************************************************************************/ /* Data. */ static malloc_zone_t zone; static struct malloc_introspection_t zone_introspect; /******************************************************************************/ /* Function prototypes for non-inline static functions. 
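 * Each of these wrappers adapts a jemalloc entry point to the
 * corresponding slot in OS X's malloc_zone_t interface.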
*/ static size_t zone_size(malloc_zone_t *zone, void *ptr); static void *zone_malloc(malloc_zone_t *zone, size_t size); static void *zone_calloc(malloc_zone_t *zone, size_t num, size_t size); static void *zone_valloc(malloc_zone_t *zone, size_t size); static void zone_free(malloc_zone_t *zone, void *ptr); static void *zone_realloc(malloc_zone_t *zone, void *ptr, size_t size); #if (JEMALLOC_ZONE_VERSION >= 5) static void *zone_memalign(malloc_zone_t *zone, size_t alignment, #endif #if (JEMALLOC_ZONE_VERSION >= 6) size_t size); static void zone_free_definite_size(malloc_zone_t *zone, void *ptr, size_t size); #endif static void *zone_destroy(malloc_zone_t *zone); static size_t zone_good_size(malloc_zone_t *zone, size_t size); static void zone_force_lock(malloc_zone_t *zone); static void zone_force_unlock(malloc_zone_t *zone); /******************************************************************************/ /* * Functions. */ static size_t zone_size(malloc_zone_t *zone, void *ptr) { /* * There appear to be places within Darwin (such as setenv(3)) that * cause calls to this function with pointers that *no* zone owns. If * we knew that all pointers were owned by *some* zone, we could split * our zone into two parts, and use one as the default allocator and * the other as the default deallocator/reallocator. Since that will * not work in practice, we must check all pointers to assure that they * reside within a mapped chunk before determining size. */ return (ivsalloc(ptr, config_prof)); } static void * zone_malloc(malloc_zone_t *zone, size_t size) { return (je_malloc(size)); } static void * zone_calloc(malloc_zone_t *zone, size_t num, size_t size) { return (je_calloc(num, size)); } static void * zone_valloc(malloc_zone_t *zone, size_t size) { void *ret = NULL; /* Assignment avoids useless compiler warning. */ je_posix_memalign(&ret, PAGE, size); return (ret); } static void zone_free(malloc_zone_t *zone, void *ptr) { if (ivsalloc(ptr, config_prof) != 0) { je_free(ptr); return; } free(ptr); } static void * zone_realloc(malloc_zone_t *zone, void *ptr, size_t size) { if (ivsalloc(ptr, config_prof) != 0) return (je_realloc(ptr, size)); return (realloc(ptr, size)); } #if (JEMALLOC_ZONE_VERSION >= 5) static void * zone_memalign(malloc_zone_t *zone, size_t alignment, size_t size) { void *ret = NULL; /* Assignment avoids useless compiler warning. */ je_posix_memalign(&ret, alignment, size); return (ret); } #endif #if (JEMALLOC_ZONE_VERSION >= 6) static void zone_free_definite_size(malloc_zone_t *zone, void *ptr, size_t size) { if (ivsalloc(ptr, config_prof) != 0) { assert(ivsalloc(ptr, config_prof) == size); je_free(ptr); return; } free(ptr); } #endif static void * zone_destroy(malloc_zone_t *zone) { /* This function should never be called. */ not_reached(); return (NULL); } static size_t zone_good_size(malloc_zone_t *zone, size_t size) { if (size == 0) size = 1; return (s2u(size)); } static void zone_force_lock(malloc_zone_t *zone) { if (isthreaded) jemalloc_prefork(); } static void zone_force_unlock(malloc_zone_t *zone) { if (isthreaded) jemalloc_postfork_parent(); } JEMALLOC_ATTR(constructor) void register_zone(void) { /* * If something else replaced the system default zone allocator, don't * register jemalloc's. 
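	 * The check below keys off the zone name: the stock allocator names
	 * its default zone "DefaultMallocZone", so any other name means a
	 * different allocator got there first.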
*/ malloc_zone_t *default_zone = malloc_default_zone(); malloc_zone_t *purgeable_zone = NULL; if (!default_zone->zone_name || strcmp(default_zone->zone_name, "DefaultMallocZone") != 0) { return; } zone.size = (void *)zone_size; zone.malloc = (void *)zone_malloc; zone.calloc = (void *)zone_calloc; zone.valloc = (void *)zone_valloc; zone.free = (void *)zone_free; zone.realloc = (void *)zone_realloc; zone.destroy = (void *)zone_destroy; zone.zone_name = "jemalloc_zone"; zone.batch_malloc = NULL; zone.batch_free = NULL; zone.introspect = &zone_introspect; zone.version = JEMALLOC_ZONE_VERSION; #if (JEMALLOC_ZONE_VERSION >= 5) zone.memalign = zone_memalign; #endif #if (JEMALLOC_ZONE_VERSION >= 6) zone.free_definite_size = zone_free_definite_size; #endif #if (JEMALLOC_ZONE_VERSION >= 8) zone.pressure_relief = NULL; #endif zone_introspect.enumerator = NULL; zone_introspect.good_size = (void *)zone_good_size; zone_introspect.check = NULL; zone_introspect.print = NULL; zone_introspect.log = NULL; zone_introspect.force_lock = (void *)zone_force_lock; zone_introspect.force_unlock = (void *)zone_force_unlock; zone_introspect.statistics = NULL; #if (JEMALLOC_ZONE_VERSION >= 6) zone_introspect.zone_locked = NULL; #endif #if (JEMALLOC_ZONE_VERSION >= 7) zone_introspect.enable_discharge_checking = NULL; zone_introspect.disable_discharge_checking = NULL; zone_introspect.discharge = NULL; #ifdef __BLOCKS__ zone_introspect.enumerate_discharged_pointers = NULL; #else zone_introspect.enumerate_unavailable_without_blocks = NULL; #endif #endif /* * The default purgeable zone is created lazily by OSX's libc. It uses * the default zone when it is created for "small" allocations * (< 15 KiB), but assumes the default zone is a scalable_zone. This * obviously fails when the default zone is the jemalloc zone, so * malloc_default_purgeable_zone is called beforehand so that the * default purgeable zone is created when the default zone is still * a scalable_zone. As purgeable zones only exist on >= 10.6, we need * to check for the existence of malloc_default_purgeable_zone() at * run time. */ if (malloc_default_purgeable_zone != NULL) purgeable_zone = malloc_default_purgeable_zone(); /* Register the custom zone. At this point it won't be the default. */ malloc_zone_register(&zone); do { default_zone = malloc_default_zone(); /* * Unregister and reregister the default zone. On OSX >= 10.6, * unregistering takes the last registered zone and places it * at the location of the specified zone. Unregistering the * default zone thus makes the last registered one the default. * On OSX < 10.6, unregistering shifts all registered zones. * The first registered zone then becomes the default. */ malloc_zone_unregister(default_zone); malloc_zone_register(default_zone); /* * On OSX 10.6, having the default purgeable zone appear before * the default zone makes some things crash because it thinks it * owns the default zone allocated pointers. We thus unregister/ * re-register it in order to ensure it's always after the * default zone. On OSX < 10.6, there is no purgeable zone, so * this does nothing. On OSX >= 10.6, unregistering replaces the * purgeable zone with the last registered zone above, i.e the * default zone. Registering it again then puts it at the end, * obviously after the default zone. 
*/ if (purgeable_zone) { malloc_zone_unregister(purgeable_zone); malloc_zone_register(purgeable_zone); } } while (malloc_default_zone() != &zone); } vmem-1.8/src/jemalloc/test/000077500000000000000000000000001361505074100156645ustar00rootroot00000000000000vmem-1.8/src/jemalloc/test/include/000077500000000000000000000000001361505074100173075ustar00rootroot00000000000000vmem-1.8/src/jemalloc/test/include/test/000077500000000000000000000000001361505074100202665ustar00rootroot00000000000000vmem-1.8/src/jemalloc/test/include/test/SFMT-alti.h000066400000000000000000000134461361505074100221470ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /** * @file SFMT-alti.h * * @brief SIMD oriented Fast Mersenne Twister(SFMT) * pseudorandom number generator * * @author Mutsuo Saito (Hiroshima University) * @author Makoto Matsumoto (Hiroshima University) * * Copyright (C) 2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * The new BSD License is applied to this software. * see LICENSE.txt */ #ifndef SFMT_ALTI_H #define SFMT_ALTI_H /** * This function represents the recursion formula in AltiVec and BIG ENDIAN. 
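 *
 * In scalar terms (following the portable SFMT reference code):
 *   r = a ^ (a <<128 8*SL2 bits) ^ ((b >> SR1) & MSK)
 *         ^ (c >>128 8*SR2 bits) ^ (d << SL1),
 * where <<128 / >>128 shift the whole 128-bit word and the remaining
 * shifts act on each 32-bit lane (MSK stands for the per-lane masks
 * MSK1..MSK4).
 *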
* @param a a 128-bit part of the internal state array * @param b a 128-bit part of the internal state array * @param c a 128-bit part of the internal state array * @param d a 128-bit part of the internal state array * @return output */ JEMALLOC_ALWAYS_INLINE vector unsigned int vec_recursion(vector unsigned int a, vector unsigned int b, vector unsigned int c, vector unsigned int d) { const vector unsigned int sl1 = ALTI_SL1; const vector unsigned int sr1 = ALTI_SR1; #ifdef ONLY64 const vector unsigned int mask = ALTI_MSK64; const vector unsigned char perm_sl = ALTI_SL2_PERM64; const vector unsigned char perm_sr = ALTI_SR2_PERM64; #else const vector unsigned int mask = ALTI_MSK; const vector unsigned char perm_sl = ALTI_SL2_PERM; const vector unsigned char perm_sr = ALTI_SR2_PERM; #endif vector unsigned int v, w, x, y, z; x = vec_perm(a, (vector unsigned int)perm_sl, perm_sl); v = a; y = vec_sr(b, sr1); z = vec_perm(c, (vector unsigned int)perm_sr, perm_sr); w = vec_sl(d, sl1); z = vec_xor(z, w); y = vec_and(y, mask); v = vec_xor(v, x); z = vec_xor(z, y); z = vec_xor(z, v); return z; } /** * This function fills the internal state array with pseudorandom * integers. */ JEMALLOC_INLINE void gen_rand_all(sfmt_t *ctx) { int i; vector unsigned int r, r1, r2; r1 = ctx->sfmt[N - 2].s; r2 = ctx->sfmt[N - 1].s; for (i = 0; i < N - POS1; i++) { r = vec_recursion(ctx->sfmt[i].s, ctx->sfmt[i + POS1].s, r1, r2); ctx->sfmt[i].s = r; r1 = r2; r2 = r; } for (; i < N; i++) { r = vec_recursion(ctx->sfmt[i].s, ctx->sfmt[i + POS1 - N].s, r1, r2); ctx->sfmt[i].s = r; r1 = r2; r2 = r; } } /** * This function fills the user-specified array with pseudorandom * integers. * * @param array an 128-bit array to be filled by pseudorandom numbers. * @param size number of 128-bit pesudorandom numbers to be generated. */ JEMALLOC_INLINE void gen_rand_array(sfmt_t *ctx, w128_t *array, int size) { int i, j; vector unsigned int r, r1, r2; r1 = ctx->sfmt[N - 2].s; r2 = ctx->sfmt[N - 1].s; for (i = 0; i < N - POS1; i++) { r = vec_recursion(ctx->sfmt[i].s, ctx->sfmt[i + POS1].s, r1, r2); array[i].s = r; r1 = r2; r2 = r; } for (; i < N; i++) { r = vec_recursion(ctx->sfmt[i].s, array[i + POS1 - N].s, r1, r2); array[i].s = r; r1 = r2; r2 = r; } /* main loop */ for (; i < size - N; i++) { r = vec_recursion(array[i - N].s, array[i + POS1 - N].s, r1, r2); array[i].s = r; r1 = r2; r2 = r; } for (j = 0; j < 2 * N - size; j++) { ctx->sfmt[j].s = array[j + size - N].s; } for (; i < size; i++) { r = vec_recursion(array[i - N].s, array[i + POS1 - N].s, r1, r2); array[i].s = r; ctx->sfmt[j++].s = r; r1 = r2; r2 = r; } } #ifndef ONLY64 #if defined(__APPLE__) #define ALTI_SWAP (vector unsigned char) \ (4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11) #else #define ALTI_SWAP {4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11} #endif /** * This function swaps high and low 32-bit of 64-bit integers in user * specified array. * * @param array an 128-bit array to be swapped. * @param size size of 128-bit array. 
*/ JEMALLOC_INLINE void swap(w128_t *array, int size) { int i; const vector unsigned char perm = ALTI_SWAP; for (i = 0; i < size; i++) { array[i].s = vec_perm(array[i].s, (vector unsigned int)perm, perm); } } #endif #endif vmem-1.8/src/jemalloc/test/include/test/SFMT-params.h000066400000000000000000000102761361505074100224770ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef SFMT_PARAMS_H #define SFMT_PARAMS_H #if !defined(MEXP) #ifdef __GNUC__ #warning "MEXP is not defined. I assume MEXP is 19937." #endif #define MEXP 19937 #endif /*----------------- BASIC DEFINITIONS -----------------*/ /** Mersenne Exponent. The period of the sequence * is a multiple of 2^MEXP-1. * #define MEXP 19937 */ /** SFMT generator has an internal state array of 128-bit integers, * and N is its size. */ #define N (MEXP / 128 + 1) /** N32 is the size of internal state array when regarded as an array * of 32-bit integers.*/ #define N32 (N * 4) /** N64 is the size of internal state array when regarded as an array * of 64-bit integers.*/ #define N64 (N * 2) /*---------------------- the parameters of SFMT following definitions are in paramsXXXX.h file. ----------------------*/ /** the pick up position of the array. #define POS1 122 */ /** the parameter of shift left as four 32-bit registers. #define SL1 18 */ /** the parameter of shift left as one 128-bit register. * The 128-bit integer is shifted by (SL2 * 8) bits. #define SL2 1 */ /** the parameter of shift right as four 32-bit registers. #define SR1 11 */ /** the parameter of shift right as one 128-bit register. * The 128-bit integer is shifted by (SL2 * 8) bits. #define SR2 1 */ /** A bitmask, used in the recursion. These parameters are introduced * to break symmetry of SIMD. 
#define MSK1 0xdfffffefU #define MSK2 0xddfecb7fU #define MSK3 0xbffaffffU #define MSK4 0xbffffff6U */ /** These definitions are part of a 128-bit period certification vector. #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0x00000000U #define PARITY4 0xc98e126aU */ #if MEXP == 607 #include "test/SFMT-params607.h" #elif MEXP == 1279 #include "test/SFMT-params1279.h" #elif MEXP == 2281 #include "test/SFMT-params2281.h" #elif MEXP == 4253 #include "test/SFMT-params4253.h" #elif MEXP == 11213 #include "test/SFMT-params11213.h" #elif MEXP == 19937 #include "test/SFMT-params19937.h" #elif MEXP == 44497 #include "test/SFMT-params44497.h" #elif MEXP == 86243 #include "test/SFMT-params86243.h" #elif MEXP == 132049 #include "test/SFMT-params132049.h" #elif MEXP == 216091 #include "test/SFMT-params216091.h" #else #ifdef __GNUC__ #error "MEXP is not valid." #undef MEXP #else #undef MEXP #endif #endif #endif /* SFMT_PARAMS_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params11213.h000066400000000000000000000067561361505074100230770ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS11213_H #define SFMT_PARAMS11213_H #define POS1 68 #define SL1 14 #define SL2 3 #define SR1 7 #define SR2 3 #define MSK1 0xeffff7fbU #define MSK2 0xffffffefU #define MSK3 0xdfdfbfffU #define MSK4 0x7fffdbfdU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0xe8148000U #define PARITY4 0xd0c7afa3U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10) #define ALTI_SL2_PERM64 \ (vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2) #define ALTI_SR2_PERM \ (vector unsigned char)(5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12) #define ALTI_SR2_PERM64 \ (vector unsigned char)(13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10} #define ALTI_SL2_PERM64 {3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2} #define ALTI_SR2_PERM {5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12} #define ALTI_SR2_PERM64 {13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12} #endif /* For OSX */ #define IDSTR "SFMT-11213:68-14-3-7-3:effff7fb-ffffffef-dfdfbfff-7fffdbfd" #endif /* SFMT_PARAMS11213_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params1279.h000066400000000000000000000067401361505074100230230ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS1279_H #define SFMT_PARAMS1279_H #define POS1 7 #define SL1 14 #define SL2 3 #define SR1 5 #define SR2 1 #define MSK1 0xf7fefffdU #define MSK2 0x7fefcfffU #define MSK3 0xaff3ef3fU #define MSK4 0xb5ffff7fU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0x00000000U #define PARITY4 0x20000000U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10) #define ALTI_SL2_PERM64 \ (vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10} #define ALTI_SL2_PERM64 {3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-1279:7-14-3-5-1:f7fefffd-7fefcfff-aff3ef3f-b5ffff7f" #endif /* SFMT_PARAMS1279_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params132049.h000066400000000000000000000067541361505074100231700ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS132049_H #define SFMT_PARAMS132049_H #define POS1 110 #define SL1 19 #define SL2 1 #define SR1 21 #define SR2 1 #define MSK1 0xffffbb5fU #define MSK2 0xfb6ebf95U #define MSK3 0xfffefffaU #define MSK4 0xcff77fffU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0xcb520000U #define PARITY4 0xc7e91c7dU /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8) #define ALTI_SL2_PERM64 \ (vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8} #define ALTI_SL2_PERM64 {1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-132049:110-19-1-21-1:ffffbb5f-fb6ebf95-fffefffa-cff77fff" #endif /* SFMT_PARAMS132049_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params19937.h000066400000000000000000000067501361505074100231160ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS19937_H #define SFMT_PARAMS19937_H #define POS1 122 #define SL1 18 #define SL2 1 #define SR1 11 #define SR2 1 #define MSK1 0xdfffffefU #define MSK2 0xddfecb7fU #define MSK3 0xbffaffffU #define MSK4 0xbffffff6U #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0x00000000U #define PARITY4 0x13c9e684U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8) #define ALTI_SL2_PERM64 \ (vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8} #define ALTI_SL2_PERM64 {1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-19937:122-18-1-11-1:dfffffef-ddfecb7f-bffaffff-bffffff6" #endif /* SFMT_PARAMS19937_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params216091.h000066400000000000000000000067561361505074100231720ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS216091_H #define SFMT_PARAMS216091_H #define POS1 627 #define SL1 11 #define SL2 3 #define SR1 10 #define SR2 1 #define MSK1 0xbff7bff7U #define MSK2 0xbfffffffU #define MSK3 0xbffffa7fU #define MSK4 0xffddfbfbU #define PARITY1 0xf8000001U #define PARITY2 0x89e80709U #define PARITY3 0x3bd2b64bU #define PARITY4 0x0c64b1e4U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10) #define ALTI_SL2_PERM64 \ (vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10} #define ALTI_SL2_PERM64 {3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-216091:627-11-3-10-1:bff7bff7-bfffffff-bffffa7f-ffddfbfb" #endif /* SFMT_PARAMS216091_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params2281.h000066400000000000000000000067401361505074100230150ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS2281_H #define SFMT_PARAMS2281_H #define POS1 12 #define SL1 19 #define SL2 1 #define SR1 5 #define SR2 1 #define MSK1 0xbff7ffbfU #define MSK2 0xfdfffffeU #define MSK3 0xf7ffef7fU #define MSK4 0xf2f7cbbfU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0x00000000U #define PARITY4 0x41dfa600U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8) #define ALTI_SL2_PERM64 \ (vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8} #define ALTI_SL2_PERM64 {1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-2281:12-19-1-5-1:bff7ffbf-fdfffffe-f7ffef7f-f2f7cbbf" #endif /* SFMT_PARAMS2281_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params4253.h000066400000000000000000000067401361505074100230160ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS4253_H #define SFMT_PARAMS4253_H #define POS1 17 #define SL1 20 #define SL2 1 #define SR1 7 #define SR2 1 #define MSK1 0x9f7bffffU #define MSK2 0x9fffff5fU #define MSK3 0x3efffffbU #define MSK4 0xfffff7bbU #define PARITY1 0xa8000001U #define PARITY2 0xaf5390a3U #define PARITY3 0xb740b3f8U #define PARITY4 0x6c11486dU /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8) #define ALTI_SL2_PERM64 \ (vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8} #define ALTI_SL2_PERM64 {1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-4253:17-20-1-7-1:9f7bffff-9fffff5f-3efffffb-fffff7bb" #endif /* SFMT_PARAMS4253_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params44497.h000066400000000000000000000067561361505074100231230ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS44497_H #define SFMT_PARAMS44497_H #define POS1 330 #define SL1 5 #define SL2 3 #define SR1 9 #define SR2 3 #define MSK1 0xeffffffbU #define MSK2 0xdfbebfffU #define MSK3 0xbfbf7befU #define MSK4 0x9ffd7bffU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0xa3ac4000U #define PARITY4 0xecc1327aU /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10) #define ALTI_SL2_PERM64 \ (vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2) #define ALTI_SR2_PERM \ (vector unsigned char)(5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12) #define ALTI_SR2_PERM64 \ (vector unsigned char)(13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10} #define ALTI_SL2_PERM64 {3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2} #define ALTI_SR2_PERM {5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12} #define ALTI_SR2_PERM64 {13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12} #endif /* For OSX */ #define IDSTR "SFMT-44497:330-5-3-9-3:effffffb-dfbebfff-bfbf7bef-9ffd7bff" #endif /* SFMT_PARAMS44497_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params607.h000066400000000000000000000067461361505074100227430ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS607_H #define SFMT_PARAMS607_H #define POS1 2 #define SL1 15 #define SL2 3 #define SR1 13 #define SR2 3 #define MSK1 0xfdff37ffU #define MSK2 0xef7f3f7dU #define MSK3 0xff777b7dU #define MSK4 0x7ff7fb2fU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0x00000000U #define PARITY4 0x5986f054U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10) #define ALTI_SL2_PERM64 \ (vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2) #define ALTI_SR2_PERM \ (vector unsigned char)(5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12) #define ALTI_SR2_PERM64 \ (vector unsigned char)(13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10} #define ALTI_SL2_PERM64 {3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2} #define ALTI_SR2_PERM {5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12} #define ALTI_SR2_PERM64 {13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12} #endif /* For OSX */ #define IDSTR "SFMT-607:2-15-3-13-3:fdff37ff-ef7f3f7d-ff777b7d-7ff7fb2f" #endif /* SFMT_PARAMS607_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-params86243.h000066400000000000000000000067541361505074100231140ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef SFMT_PARAMS86243_H #define SFMT_PARAMS86243_H #define POS1 366 #define SL1 6 #define SL2 7 #define SR1 19 #define SR2 1 #define MSK1 0xfdbffbffU #define MSK2 0xbff7ff3fU #define MSK3 0xfd77efffU #define MSK4 0xbf9ff3ffU #define PARITY1 0x00000001U #define PARITY2 0x00000000U #define PARITY3 0x00000000U #define PARITY4 0xe9528d85U /* PARAMETERS FOR ALTIVEC */ #if defined(__APPLE__) /* For OSX */ #define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1) #define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1) #define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4) #define ALTI_MSK64 \ (vector unsigned int)(MSK2, MSK1, MSK4, MSK3) #define ALTI_SL2_PERM \ (vector unsigned char)(25,25,25,25,3,25,25,25,7,0,1,2,11,4,5,6) #define ALTI_SL2_PERM64 \ (vector unsigned char)(7,25,25,25,25,25,25,25,15,0,1,2,3,4,5,6) #define ALTI_SR2_PERM \ (vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14) #define ALTI_SR2_PERM64 \ (vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14) #else /* For OTHER OSs(Linux?) */ #define ALTI_SL1 {SL1, SL1, SL1, SL1} #define ALTI_SR1 {SR1, SR1, SR1, SR1} #define ALTI_MSK {MSK1, MSK2, MSK3, MSK4} #define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3} #define ALTI_SL2_PERM {25,25,25,25,3,25,25,25,7,0,1,2,11,4,5,6} #define ALTI_SL2_PERM64 {7,25,25,25,25,25,25,25,15,0,1,2,3,4,5,6} #define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14} #define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14} #endif /* For OSX */ #define IDSTR "SFMT-86243:366-6-7-19-1:fdbffbff-bff7ff3f-fd77efff-bf9ff3ff" #endif /* SFMT_PARAMS86243_H */ vmem-1.8/src/jemalloc/test/include/test/SFMT-sse2.h000066400000000000000000000121431361505074100220630ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /** * @file SFMT-sse2.h * @brief SIMD oriented Fast Mersenne Twister(SFMT) for Intel SSE2 * * @author Mutsuo Saito (Hiroshima University) * @author Makoto Matsumoto (Hiroshima University) * * @note We assume LITTLE ENDIAN in this file * * Copyright (C) 2006, 2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * The new BSD License is applied to this software, see LICENSE.txt */ #ifndef SFMT_SSE2_H #define SFMT_SSE2_H /** * This function represents the recursion formula. * @param a a 128-bit part of the internal state array * @param b a 128-bit part of the internal state array * @param c a 128-bit part of the internal state array * @param d a 128-bit part of the internal state array * @param mask 128-bit mask * @return output */ JEMALLOC_ALWAYS_INLINE __m128i mm_recursion(__m128i *a, __m128i *b, __m128i c, __m128i d, __m128i mask) { __m128i v, x, y, z; x = _mm_load_si128(a); y = _mm_srli_epi32(*b, SR1); z = _mm_srli_si128(c, SR2); v = _mm_slli_epi32(d, SL1); z = _mm_xor_si128(z, x); z = _mm_xor_si128(z, v); x = _mm_slli_si128(x, SL2); y = _mm_and_si128(y, mask); z = _mm_xor_si128(z, x); z = _mm_xor_si128(z, y); return z; } /** * This function fills the internal state array with pseudorandom * integers. */ JEMALLOC_INLINE void gen_rand_all(sfmt_t *ctx) { int i; __m128i r, r1, r2, mask; mask = _mm_set_epi32(MSK4, MSK3, MSK2, MSK1); r1 = _mm_load_si128(&ctx->sfmt[N - 2].si); r2 = _mm_load_si128(&ctx->sfmt[N - 1].si); for (i = 0; i < N - POS1; i++) { r = mm_recursion(&ctx->sfmt[i].si, &ctx->sfmt[i + POS1].si, r1, r2, mask); _mm_store_si128(&ctx->sfmt[i].si, r); r1 = r2; r2 = r; } for (; i < N; i++) { r = mm_recursion(&ctx->sfmt[i].si, &ctx->sfmt[i + POS1 - N].si, r1, r2, mask); _mm_store_si128(&ctx->sfmt[i].si, r); r1 = r2; r2 = r; } } /** * This function fills the user-specified array with pseudorandom * integers. * * @param array an 128-bit array to be filled by pseudorandom numbers. * @param size number of 128-bit pesudorandom numbers to be generated. */ JEMALLOC_INLINE void gen_rand_array(sfmt_t *ctx, w128_t *array, int size) { int i, j; __m128i r, r1, r2, mask; mask = _mm_set_epi32(MSK4, MSK3, MSK2, MSK1); r1 = _mm_load_si128(&ctx->sfmt[N - 2].si); r2 = _mm_load_si128(&ctx->sfmt[N - 1].si); for (i = 0; i < N - POS1; i++) { r = mm_recursion(&ctx->sfmt[i].si, &ctx->sfmt[i + POS1].si, r1, r2, mask); _mm_store_si128(&array[i].si, r); r1 = r2; r2 = r; } for (; i < N; i++) { r = mm_recursion(&ctx->sfmt[i].si, &array[i + POS1 - N].si, r1, r2, mask); _mm_store_si128(&array[i].si, r); r1 = r2; r2 = r; } /* main loop */ for (; i < size - N; i++) { r = mm_recursion(&array[i - N].si, &array[i + POS1 - N].si, r1, r2, mask); _mm_store_si128(&array[i].si, r); r1 = r2; r2 = r; } for (j = 0; j < 2 * N - size; j++) { r = _mm_load_si128(&array[j + size - N].si); _mm_store_si128(&ctx->sfmt[j].si, r); } for (; i < size; i++) { r = mm_recursion(&array[i - N].si, &array[i + POS1 - N].si, r1, r2, mask); _mm_store_si128(&array[i].si, r); _mm_store_si128(&ctx->sfmt[j++].si, r); r1 = r2; r2 = r; } } #endif vmem-1.8/src/jemalloc/test/include/test/SFMT.h000066400000000000000000000132551361505074100212160ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /** * @file SFMT.h * * @brief SIMD oriented Fast Mersenne Twister(SFMT) pseudorandom * number generator * * @author Mutsuo Saito (Hiroshima University) * @author Makoto Matsumoto (Hiroshima University) * * Copyright (C) 2006, 2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * The new BSD License is applied to this software. * see LICENSE.txt * * @note We assume that your system has inttypes.h. If your system * doesn't have inttypes.h, you have to typedef uint32_t and uint64_t, * and you have to define PRIu64 and PRIx64 in this file as follows: * @verbatim typedef unsigned int uint32_t typedef unsigned long long uint64_t #define PRIu64 "llu" #define PRIx64 "llx" @endverbatim * uint32_t must be exactly 32-bit unsigned integer type (no more, no * less), and uint64_t must be exactly 64-bit unsigned integer type. * PRIu64 and PRIx64 are used for printf function to print 64-bit * unsigned int and 64-bit unsigned int in hexadecimal format. 
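 *
 * A minimal usage sketch of the interface declared below (seed stands for
 * any caller-chosen 32-bit value; gen_rand32() and genrand_real2() may be
 * called repeatedly between initialization and cleanup):
 * @verbatim
    sfmt_t *ctx = init_gen_rand(seed);
    uint32_t r = gen_rand32(ctx);
    double u = genrand_real2(ctx);
    fini_gen_rand(ctx);
   @endverbatim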
*/ #ifndef SFMT_H #define SFMT_H typedef struct sfmt_s sfmt_t; uint32_t gen_rand32(sfmt_t *ctx); uint32_t gen_rand32_range(sfmt_t *ctx, uint32_t limit); uint64_t gen_rand64(sfmt_t *ctx); uint64_t gen_rand64_range(sfmt_t *ctx, uint64_t limit); void fill_array32(sfmt_t *ctx, uint32_t *array, int size); void fill_array64(sfmt_t *ctx, uint64_t *array, int size); sfmt_t *init_gen_rand(uint32_t seed); sfmt_t *init_by_array(uint32_t *init_key, int key_length); void fini_gen_rand(sfmt_t *ctx); const char *get_idstring(void); int get_min_array_size32(void); int get_min_array_size64(void); #ifndef JEMALLOC_ENABLE_INLINE double to_real1(uint32_t v); double genrand_real1(sfmt_t *ctx); double to_real2(uint32_t v); double genrand_real2(sfmt_t *ctx); double to_real3(uint32_t v); double genrand_real3(sfmt_t *ctx); double to_res53(uint64_t v); double to_res53_mix(uint32_t x, uint32_t y); double genrand_res53(sfmt_t *ctx); double genrand_res53_mix(sfmt_t *ctx); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(SFMT_C_)) /* These real versions are due to Isaku Wada */ /** generates a random number on [0,1]-real-interval */ JEMALLOC_INLINE double to_real1(uint32_t v) { return v * (1.0/4294967295.0); /* divided by 2^32-1 */ } /** generates a random number on [0,1]-real-interval */ JEMALLOC_INLINE double genrand_real1(sfmt_t *ctx) { return to_real1(gen_rand32(ctx)); } /** generates a random number on [0,1)-real-interval */ JEMALLOC_INLINE double to_real2(uint32_t v) { return v * (1.0/4294967296.0); /* divided by 2^32 */ } /** generates a random number on [0,1)-real-interval */ JEMALLOC_INLINE double genrand_real2(sfmt_t *ctx) { return to_real2(gen_rand32(ctx)); } /** generates a random number on (0,1)-real-interval */ JEMALLOC_INLINE double to_real3(uint32_t v) { return (((double)v) + 0.5)*(1.0/4294967296.0); /* divided by 2^32 */ } /** generates a random number on (0,1)-real-interval */ JEMALLOC_INLINE double genrand_real3(sfmt_t *ctx) { return to_real3(gen_rand32(ctx)); } /** These real versions are due to Isaku Wada */ /** generates a random number on [0,1) with 53-bit resolution*/ JEMALLOC_INLINE double to_res53(uint64_t v) { return v * (1.0/18446744073709551616.0L); } /** generates a random number on [0,1) with 53-bit resolution from two * 32 bit integers */ JEMALLOC_INLINE double to_res53_mix(uint32_t x, uint32_t y) { return to_res53(x | ((uint64_t)y << 32)); } /** generates a random number on [0,1) with 53-bit resolution */ JEMALLOC_INLINE double genrand_res53(sfmt_t *ctx) { return to_res53(gen_rand64(ctx)); } /** generates a random number on [0,1) with 53-bit resolution using 32bit integer. */ JEMALLOC_INLINE double genrand_res53_mix(sfmt_t *ctx) { uint32_t x, y; x = gen_rand32(ctx); y = gen_rand32(ctx); return to_res53_mix(x, y); } #endif #endif vmem-1.8/src/jemalloc/test/include/test/jemalloc_test.h.in000066400000000000000000000077151361505074100237030ustar00rootroot00000000000000#include #include #include #include #include #include #include #ifdef _WIN32 # include #else # include #endif /******************************************************************************/ /* * Define always-enabled assertion macros, so that test assertions execute even * if assertions are disabled in the library code. These definitions must * exist prior to including "jemalloc/internal/util.h". 
*/ #define assert(e) do { \ if (!(e)) { \ malloc_printf( \ ": %s:%d: Failed assertion: \"%s\"\n", \ __FILE__, __LINE__, #e); \ abort(); \ } \ } while (0) #define not_reached() do { \ malloc_printf( \ ": %s:%d: Unreachable code reached\n", \ __FILE__, __LINE__); \ abort(); \ } while (0) #define not_implemented() do { \ malloc_printf(": %s:%d: Not implemented\n", \ __FILE__, __LINE__); \ abort(); \ } while (0) #define assert_not_implemented(e) do { \ if (!(e)) \ not_implemented(); \ } while (0) #include "test/jemalloc_test_defs.h" #ifdef JEMALLOC_OSSPIN # include #endif #if defined(HAVE_ALTIVEC) && !defined(__APPLE__) # include #endif #ifdef HAVE_SSE2 # include #endif /******************************************************************************/ /* * For unit tests, expose all public and private interfaces. */ #ifdef JEMALLOC_UNIT_TEST # define JEMALLOC_JET # define JEMALLOC_MANGLE # include "jemalloc/internal/jemalloc_internal.h" /******************************************************************************/ /* * For integration tests, expose the public jemalloc interfaces, but only * expose the minimum necessary internal utility code (to avoid re-implementing * essentially identical code within the test infrastructure). */ #elif defined(JEMALLOC_INTEGRATION_TEST) # define JEMALLOC_MANGLE # include "jemalloc/jemalloc@install_suffix@.h" # include "jemalloc/internal/jemalloc_internal_defs.h" # include "jemalloc/internal/jemalloc_internal_macros.h" # define JEMALLOC_N(n) @private_namespace@##n # include "jemalloc/internal/private_namespace.h" # define JEMALLOC_H_TYPES # define JEMALLOC_H_STRUCTS # define JEMALLOC_H_EXTERNS # define JEMALLOC_H_INLINES # include "jemalloc/internal/util.h" # include "jemalloc/internal/qr.h" # include "jemalloc/internal/ql.h" # undef JEMALLOC_H_TYPES # undef JEMALLOC_H_STRUCTS # undef JEMALLOC_H_EXTERNS # undef JEMALLOC_H_INLINES /******************************************************************************/ /* * For stress tests, expose the public jemalloc interfaces with name mangling * so that they can be tested as e.g. malloc() and free(). Also expose the * public jemalloc interfaces with jet_ prefixes, so that stress tests can use * a separate allocator for their internal data structures. */ #elif defined(JEMALLOC_STRESS_TEST) # include "jemalloc/jemalloc@install_suffix@.h" # include "jemalloc/jemalloc_protos_jet.h" # define JEMALLOC_JET # include "jemalloc/internal/jemalloc_internal.h" # include "jemalloc/internal/public_unnamespace.h" # undef JEMALLOC_JET # include "jemalloc/jemalloc_rename.h" # define JEMALLOC_MANGLE # ifdef JEMALLOC_STRESS_TESTLIB # include "jemalloc/jemalloc_mangle_jet.h" # else # include "jemalloc/jemalloc_mangle.h" # endif /******************************************************************************/ /* * This header does dangerous things, the effects of which only test code * should be subject to. */ #else # error "This header cannot be included outside a testing context" #endif /******************************************************************************/ /* * Common test utilities. */ #include "test/math.h" #include "test/mtx.h" #include "test/mq.h" #include "test/test.h" #include "test/thd.h" #define MEXP 19937 #include "test/SFMT.h" vmem-1.8/src/jemalloc/test/include/test/jemalloc_test_defs.h.in000066400000000000000000000002521361505074100246710ustar00rootroot00000000000000#include "jemalloc/internal/jemalloc_internal_defs.h" #include "jemalloc/internal/jemalloc_internal_decls.h" /* For use by SFMT. 
*/ #undef HAVE_SSE2 #undef HAVE_ALTIVEC vmem-1.8/src/jemalloc/test/include/test/math.h000066400000000000000000000177551361505074100214070ustar00rootroot00000000000000#ifndef JEMALLOC_ENABLE_INLINE double ln_gamma(double x); double i_gamma(double x, double p, double ln_gamma_p); double pt_norm(double p); double pt_chi2(double p, double df, double ln_gamma_df_2); double pt_gamma(double p, double shape, double scale, double ln_gamma_shape); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(MATH_C_)) /* * Compute the natural log of Gamma(x), accurate to 10 decimal places. * * This implementation is based on: * * Pike, M.C., I.D. Hill (1966) Algorithm 291: Logarithm of Gamma function * [S14]. Communications of the ACM 9(9):684. */ JEMALLOC_INLINE double ln_gamma(double x) { double f, z; assert(x > 0.0); if (x < 7.0) { f = 1.0; z = x; while (z < 7.0) { f *= z; z += 1.0; } x = z; f = -log(f); } else f = 0.0; z = 1.0 / (x * x); return (f + (x-0.5) * log(x) - x + 0.918938533204673 + (((-0.000595238095238 * z + 0.000793650793651) * z - 0.002777777777778) * z + 0.083333333333333) / x); } /* * Compute the incomplete Gamma ratio for [0..x], where p is the shape * parameter, and ln_gamma_p is ln_gamma(p). * * This implementation is based on: * * Bhattacharjee, G.P. (1970) Algorithm AS 32: The incomplete Gamma integral. * Applied Statistics 19:285-287. */ JEMALLOC_INLINE double i_gamma(double x, double p, double ln_gamma_p) { double acu, factor, oflo, gin, term, rn, a, b, an, dif; double pn[6]; unsigned i; assert(p > 0.0); assert(x >= 0.0); if (x == 0.0) return (0.0); acu = 1.0e-10; oflo = 1.0e30; gin = 0.0; factor = exp(p * log(x) - x - ln_gamma_p); if (x <= 1.0 || x < p) { /* Calculation by series expansion. */ gin = 1.0; term = 1.0; rn = p; while (true) { rn += 1.0; term *= x / rn; gin += term; if (term <= acu) { gin *= factor / p; return (gin); } } } else { /* Calculation by continued fraction. */ a = 1.0 - p; b = a + x + 1.0; term = 0.0; pn[0] = 1.0; pn[1] = x; pn[2] = x + 1.0; pn[3] = x * b; gin = pn[2] / pn[3]; while (true) { a += 1.0; b += 2.0; term += 1.0; an = a * term; for (i = 0; i < 2; i++) pn[i+4] = b * pn[i+2] - an * pn[i]; if (pn[5] != 0.0) { rn = pn[4] / pn[5]; dif = fabs(gin - rn); if (dif <= acu && dif <= acu * rn) { gin = 1.0 - factor * gin; return (gin); } gin = rn; } for (i = 0; i < 4; i++) pn[i] = pn[i+2]; if (fabs(pn[4]) >= oflo) { for (i = 0; i < 4; i++) pn[i] /= oflo; } } } } /* * Given a value p in [0..1] of the lower tail area of the normal distribution, * compute the limit on the definite integral from [-inf..z] that satisfies p, * accurate to 16 decimal places. * * This implementation is based on: * * Wichura, M.J. (1988) Algorithm AS 241: The percentage points of the normal * distribution. Applied Statistics 37(3):477-484. */ JEMALLOC_INLINE double pt_norm(double p) { double q, r, ret; assert(p > 0.0 && p < 1.0); q = p - 0.5; if (fabs(q) <= 0.425) { /* p close to 1/2. 
*/ r = 0.180625 - q * q; return (q * (((((((2.5090809287301226727e3 * r + 3.3430575583588128105e4) * r + 6.7265770927008700853e4) * r + 4.5921953931549871457e4) * r + 1.3731693765509461125e4) * r + 1.9715909503065514427e3) * r + 1.3314166789178437745e2) * r + 3.3871328727963666080e0) / (((((((5.2264952788528545610e3 * r + 2.8729085735721942674e4) * r + 3.9307895800092710610e4) * r + 2.1213794301586595867e4) * r + 5.3941960214247511077e3) * r + 6.8718700749205790830e2) * r + 4.2313330701600911252e1) * r + 1.0)); } else { if (q < 0.0) r = p; else r = 1.0 - p; assert(r > 0.0); r = sqrt(-log(r)); if (r <= 5.0) { /* p neither close to 1/2 nor 0 or 1. */ r -= 1.6; ret = ((((((((7.74545014278341407640e-4 * r + 2.27238449892691845833e-2) * r + 2.41780725177450611770e-1) * r + 1.27045825245236838258e0) * r + 3.64784832476320460504e0) * r + 5.76949722146069140550e0) * r + 4.63033784615654529590e0) * r + 1.42343711074968357734e0) / (((((((1.05075007164441684324e-9 * r + 5.47593808499534494600e-4) * r + 1.51986665636164571966e-2) * r + 1.48103976427480074590e-1) * r + 6.89767334985100004550e-1) * r + 1.67638483018380384940e0) * r + 2.05319162663775882187e0) * r + 1.0)); } else { /* p near 0 or 1. */ r -= 5.0; ret = ((((((((2.01033439929228813265e-7 * r + 2.71155556874348757815e-5) * r + 1.24266094738807843860e-3) * r + 2.65321895265761230930e-2) * r + 2.96560571828504891230e-1) * r + 1.78482653991729133580e0) * r + 5.46378491116411436990e0) * r + 6.65790464350110377720e0) / (((((((2.04426310338993978564e-15 * r + 1.42151175831644588870e-7) * r + 1.84631831751005468180e-5) * r + 7.86869131145613259100e-4) * r + 1.48753612908506148525e-2) * r + 1.36929880922735805310e-1) * r + 5.99832206555887937690e-1) * r + 1.0)); } if (q < 0.0) ret = -ret; return (ret); } } /* * Given a value p in [0..1] of the lower tail area of the Chi^2 distribution * with df degrees of freedom, where ln_gamma_df_2 is ln_gamma(df/2.0), compute * the upper limit on the definite integral from [0..z] that satisfies p, * accurate to 12 decimal places. * * This implementation is based on: * * Best, D.J., D.E. Roberts (1975) Algorithm AS 91: The percentage points of * the Chi^2 distribution. Applied Statistics 24(3):385-388. * * Shea, B.L. (1991) Algorithm AS R85: A remark on AS 91: The percentage * points of the Chi^2 distribution. Applied Statistics 40(1):233-235. */ JEMALLOC_INLINE double pt_chi2(double p, double df, double ln_gamma_df_2) { double e, aa, xx, c, ch, a, q, p1, p2, t, x, b, s1, s2, s3, s4, s5, s6; unsigned i; assert(p >= 0.0 && p < 1.0); assert(df > 0.0); e = 5.0e-7; aa = 0.6931471805; xx = 0.5 * df; c = xx - 1.0; if (df < -1.24 * log(p)) { /* Starting approximation for small Chi^2. */ ch = pow(p * xx * exp(ln_gamma_df_2 + xx * aa), 1.0 / xx); if (ch - e < 0.0) return (ch); } else { if (df > 0.32) { x = pt_norm(p); /* * Starting approximation using Wilson and Hilferty * estimate. */ p1 = 0.222222 / df; ch = df * pow(x * sqrt(p1) + 1.0 - p1, 3.0); /* Starting approximation for p tending to 1. */ if (ch > 2.2 * df + 6.0) { ch = -2.0 * (log(1.0 - p) - c * log(0.5 * ch) + ln_gamma_df_2); } } else { ch = 0.4; a = log(1.0 - p); while (true) { q = ch; p1 = 1.0 + ch * (4.67 + ch); p2 = ch * (6.73 + ch * (6.66 + ch)); t = -0.5 + (4.67 + 2.0 * ch) / p1 - (6.73 + ch * (13.32 + 3.0 * ch)) / p2; ch -= (1.0 - exp(a + ln_gamma_df_2 + 0.5 * ch + c * aa) * p2 / p1) / t; if (fabs(q / ch - 1.0) - 0.01 <= 0.0) break; } } } for (i = 0; i < 20; i++) { /* Calculation of seven-term Taylor series. 
*/ q = ch; p1 = 0.5 * ch; if (p1 < 0.0) return (-1.0); p2 = p - i_gamma(p1, xx, ln_gamma_df_2); t = p2 * exp(xx * aa + ln_gamma_df_2 + p1 - c * log(ch)); b = t / ch; a = 0.5 * t - b * c; s1 = (210.0 + a * (140.0 + a * (105.0 + a * (84.0 + a * (70.0 + 60.0 * a))))) / 420.0; s2 = (420.0 + a * (735.0 + a * (966.0 + a * (1141.0 + 1278.0 * a)))) / 2520.0; s3 = (210.0 + a * (462.0 + a * (707.0 + 932.0 * a))) / 2520.0; s4 = (252.0 + a * (672.0 + 1182.0 * a) + c * (294.0 + a * (889.0 + 1740.0 * a))) / 5040.0; s5 = (84.0 + 264.0 * a + c * (175.0 + 606.0 * a)) / 2520.0; s6 = (120.0 + c * (346.0 + 127.0 * c)) / 5040.0; ch += t * (1.0 + 0.5 * t * s1 - b * c * (s1 - b * (s2 - b * (s3 - b * (s4 - b * (s5 - b * s6)))))); if (fabs(q / ch - 1.0) <= e) break; } return (ch); } /* * Given a value p in [0..1] and Gamma distribution shape and scale parameters, * compute the upper limit on the definite integeral from [0..z] that satisfies * p. */ JEMALLOC_INLINE double pt_gamma(double p, double shape, double scale, double ln_gamma_shape) { return (pt_chi2(p, shape * 2.0, ln_gamma_shape) * 0.5 * scale); } #endif vmem-1.8/src/jemalloc/test/include/test/mq.h000066400000000000000000000056601361505074100210630ustar00rootroot00000000000000/* * Simple templated message queue implementation that relies on only mutexes for * synchronization (which reduces portability issues). Given the following * setup: * * typedef struct mq_msg_s mq_msg_t; * struct mq_msg_s { * mq_msg(mq_msg_t) link; * [message data] * }; * mq_gen(, mq_, mq_t, mq_msg_t, link) * * The API is as follows: * * bool mq_init(mq_t *mq); * void mq_fini(mq_t *mq); * unsigned mq_count(mq_t *mq); * mq_msg_t *mq_tryget(mq_t *mq); * mq_msg_t *mq_get(mq_t *mq); * void mq_put(mq_t *mq, mq_msg_t *msg); * * The message queue linkage embedded in each message is to be treated as * externally opaque (no need to initialize or clean up externally). mq_fini() * does not perform any cleanup of messages, since it knows nothing of their * payloads. */ #define mq_msg(a_mq_msg_type) ql_elm(a_mq_msg_type) #define mq_gen(a_attr, a_prefix, a_mq_type, a_mq_msg_type, a_field) \ typedef struct { \ mtx_t lock; \ ql_head(a_mq_msg_type) msgs; \ unsigned count; \ } a_mq_type; \ a_attr bool \ a_prefix##init(a_mq_type *mq) { \ \ if (mtx_init(&mq->lock)) \ return (true); \ ql_new(&mq->msgs); \ mq->count = 0; \ return (false); \ } \ a_attr void \ a_prefix##fini(a_mq_type *mq) \ { \ \ mtx_fini(&mq->lock); \ } \ a_attr unsigned \ a_prefix##count(a_mq_type *mq) \ { \ unsigned count; \ \ mtx_lock(&mq->lock); \ count = mq->count; \ mtx_unlock(&mq->lock); \ return (count); \ } \ a_attr a_mq_msg_type * \ a_prefix##tryget(a_mq_type *mq) \ { \ a_mq_msg_type *msg; \ \ mtx_lock(&mq->lock); \ msg = ql_first(&mq->msgs); \ if (msg != NULL) { \ ql_head_remove(&mq->msgs, a_mq_msg_type, a_field); \ mq->count--; \ } \ mtx_unlock(&mq->lock); \ return (msg); \ } \ a_attr a_mq_msg_type * \ a_prefix##get(a_mq_type *mq) \ { \ a_mq_msg_type *msg; \ struct timespec timeout; \ \ msg = a_prefix##tryget(mq); \ if (msg != NULL) \ return (msg); \ \ timeout.tv_sec = 0; \ timeout.tv_nsec = 1; \ while (true) { \ nanosleep(&timeout, NULL); \ msg = a_prefix##tryget(mq); \ if (msg != NULL) \ return (msg); \ if (timeout.tv_sec == 0) { \ /* Double sleep time, up to max 1 second. 
*/ \ timeout.tv_nsec <<= 1; \ if (timeout.tv_nsec >= 1000*1000*1000) { \ timeout.tv_sec = 1; \ timeout.tv_nsec = 0; \ } \ } \ } \ } \ a_attr void \ a_prefix##put(a_mq_type *mq, a_mq_msg_type *msg) \ { \ \ mtx_lock(&mq->lock); \ ql_elm_new(msg, a_field); \ ql_tail_insert(&mq->msgs, msg, a_field); \ mq->count++; \ mtx_unlock(&mq->lock); \ } vmem-1.8/src/jemalloc/test/include/test/mtx.h000066400000000000000000000010101361505074100212370ustar00rootroot00000000000000/* * mtx is a slightly simplified version of malloc_mutex. This code duplication * is unfortunate, but there are allocator bootstrapping considerations that * would leak into the test infrastructure if malloc_mutex were used directly * in tests. */ typedef struct { #ifdef _WIN32 CRITICAL_SECTION lock; #elif (defined(JEMALLOC_OSSPIN)) OSSpinLock lock; #else pthread_mutex_t lock; #endif } mtx_t; bool mtx_init(mtx_t *mtx); void mtx_fini(mtx_t *mtx); void mtx_lock(mtx_t *mtx); void mtx_unlock(mtx_t *mtx); vmem-1.8/src/jemalloc/test/include/test/test.h000066400000000000000000000317751361505074100214330ustar00rootroot00000000000000#define ASSERT_BUFSIZE 256 #define assert_cmp(t, a, b, cmp, neg_cmp, pri, ...) do { \ t a_ = (a); \ t b_ = (b); \ if (!(a_ cmp b_)) { \ char prefix[ASSERT_BUFSIZE]; \ char message[ASSERT_BUFSIZE]; \ malloc_snprintf(prefix, sizeof(prefix), \ "%s:%s:%d: Failed assertion: " \ "(%s) "#cmp" (%s) --> " \ "%"pri" "#neg_cmp" %"pri": ", \ __func__, __FILE__, __LINE__, \ #a, #b, a_, b_); \ malloc_snprintf(message, sizeof(message), __VA_ARGS__); \ p_test_fail(prefix, message); \ } \ } while (0) #define assert_ptr_eq(a, b, ...) assert_cmp(void *, a, b, ==, \ !=, "p", __VA_ARGS__) #define assert_ptr_ne(a, b, ...) assert_cmp(void *, a, b, !=, \ ==, "p", __VA_ARGS__) #define assert_ptr_null(a, ...) assert_cmp(void *, a, NULL, ==, \ !=, "p", __VA_ARGS__) #define assert_ptr_not_null(a, ...) assert_cmp(void *, a, NULL, !=, \ ==, "p", __VA_ARGS__) #define assert_c_eq(a, b, ...) assert_cmp(char, a, b, ==, !=, "c", __VA_ARGS__) #define assert_c_ne(a, b, ...) assert_cmp(char, a, b, !=, ==, "c", __VA_ARGS__) #define assert_c_lt(a, b, ...) assert_cmp(char, a, b, <, >=, "c", __VA_ARGS__) #define assert_c_le(a, b, ...) assert_cmp(char, a, b, <=, >, "c", __VA_ARGS__) #define assert_c_ge(a, b, ...) assert_cmp(char, a, b, >=, <, "c", __VA_ARGS__) #define assert_c_gt(a, b, ...) assert_cmp(char, a, b, >, <=, "c", __VA_ARGS__) #define assert_x_eq(a, b, ...) assert_cmp(int, a, b, ==, !=, "#x", __VA_ARGS__) #define assert_x_ne(a, b, ...) assert_cmp(int, a, b, !=, ==, "#x", __VA_ARGS__) #define assert_x_lt(a, b, ...) assert_cmp(int, a, b, <, >=, "#x", __VA_ARGS__) #define assert_x_le(a, b, ...) assert_cmp(int, a, b, <=, >, "#x", __VA_ARGS__) #define assert_x_ge(a, b, ...) assert_cmp(int, a, b, >=, <, "#x", __VA_ARGS__) #define assert_x_gt(a, b, ...) assert_cmp(int, a, b, >, <=, "#x", __VA_ARGS__) #define assert_d_eq(a, b, ...) assert_cmp(int, a, b, ==, !=, "d", __VA_ARGS__) #define assert_d_ne(a, b, ...) assert_cmp(int, a, b, !=, ==, "d", __VA_ARGS__) #define assert_d_lt(a, b, ...) assert_cmp(int, a, b, <, >=, "d", __VA_ARGS__) #define assert_d_le(a, b, ...) assert_cmp(int, a, b, <=, >, "d", __VA_ARGS__) #define assert_d_ge(a, b, ...) assert_cmp(int, a, b, >=, <, "d", __VA_ARGS__) #define assert_d_gt(a, b, ...) assert_cmp(int, a, b, >, <=, "d", __VA_ARGS__) #define assert_u_eq(a, b, ...) assert_cmp(int, a, b, ==, !=, "u", __VA_ARGS__) #define assert_u_ne(a, b, ...) assert_cmp(int, a, b, !=, ==, "u", __VA_ARGS__) #define assert_u_lt(a, b, ...) 
assert_cmp(int, a, b, <, >=, "u", __VA_ARGS__) #define assert_u_le(a, b, ...) assert_cmp(int, a, b, <=, >, "u", __VA_ARGS__) #define assert_u_ge(a, b, ...) assert_cmp(int, a, b, >=, <, "u", __VA_ARGS__) #define assert_u_gt(a, b, ...) assert_cmp(int, a, b, >, <=, "u", __VA_ARGS__) #define assert_ld_eq(a, b, ...) assert_cmp(long, a, b, ==, \ !=, "ld", __VA_ARGS__) #define assert_ld_ne(a, b, ...) assert_cmp(long, a, b, !=, \ ==, "ld", __VA_ARGS__) #define assert_ld_lt(a, b, ...) assert_cmp(long, a, b, <, \ >=, "ld", __VA_ARGS__) #define assert_ld_le(a, b, ...) assert_cmp(long, a, b, <=, \ >, "ld", __VA_ARGS__) #define assert_ld_ge(a, b, ...) assert_cmp(long, a, b, >=, \ <, "ld", __VA_ARGS__) #define assert_ld_gt(a, b, ...) assert_cmp(long, a, b, >, \ <=, "ld", __VA_ARGS__) #define assert_lu_eq(a, b, ...) assert_cmp(unsigned long, \ a, b, ==, !=, "lu", __VA_ARGS__) #define assert_lu_ne(a, b, ...) assert_cmp(unsigned long, \ a, b, !=, ==, "lu", __VA_ARGS__) #define assert_lu_lt(a, b, ...) assert_cmp(unsigned long, \ a, b, <, >=, "lu", __VA_ARGS__) #define assert_lu_le(a, b, ...) assert_cmp(unsigned long, \ a, b, <=, >, "lu", __VA_ARGS__) #define assert_lu_ge(a, b, ...) assert_cmp(unsigned long, \ a, b, >=, <, "lu", __VA_ARGS__) #define assert_lu_gt(a, b, ...) assert_cmp(unsigned long, \ a, b, >, <=, "lu", __VA_ARGS__) #define assert_qd_eq(a, b, ...) assert_cmp(long long, a, b, ==, \ !=, "qd", __VA_ARGS__) #define assert_qd_ne(a, b, ...) assert_cmp(long long, a, b, !=, \ ==, "qd", __VA_ARGS__) #define assert_qd_lt(a, b, ...) assert_cmp(long long, a, b, <, \ >=, "qd", __VA_ARGS__) #define assert_qd_le(a, b, ...) assert_cmp(long long, a, b, <=, \ >, "qd", __VA_ARGS__) #define assert_qd_ge(a, b, ...) assert_cmp(long long, a, b, >=, \ <, "qd", __VA_ARGS__) #define assert_qd_gt(a, b, ...) assert_cmp(long long, a, b, >, \ <=, "qd", __VA_ARGS__) #define assert_qu_eq(a, b, ...) assert_cmp(unsigned long long, \ a, b, ==, !=, "qu", __VA_ARGS__) #define assert_qu_ne(a, b, ...) assert_cmp(unsigned long long, \ a, b, !=, ==, "qu", __VA_ARGS__) #define assert_qu_lt(a, b, ...) assert_cmp(unsigned long long, \ a, b, <, >=, "qu", __VA_ARGS__) #define assert_qu_le(a, b, ...) assert_cmp(unsigned long long, \ a, b, <=, >, "qu", __VA_ARGS__) #define assert_qu_ge(a, b, ...) assert_cmp(unsigned long long, \ a, b, >=, <, "qu", __VA_ARGS__) #define assert_qu_gt(a, b, ...) assert_cmp(unsigned long long, \ a, b, >, <=, "qu", __VA_ARGS__) #define assert_jd_eq(a, b, ...) assert_cmp(intmax_t, a, b, ==, \ !=, "jd", __VA_ARGS__) #define assert_jd_ne(a, b, ...) assert_cmp(intmax_t, a, b, !=, \ ==, "jd", __VA_ARGS__) #define assert_jd_lt(a, b, ...) assert_cmp(intmax_t, a, b, <, \ >=, "jd", __VA_ARGS__) #define assert_jd_le(a, b, ...) assert_cmp(intmax_t, a, b, <=, \ >, "jd", __VA_ARGS__) #define assert_jd_ge(a, b, ...) assert_cmp(intmax_t, a, b, >=, \ <, "jd", __VA_ARGS__) #define assert_jd_gt(a, b, ...) assert_cmp(intmax_t, a, b, >, \ <=, "jd", __VA_ARGS__) #define assert_ju_eq(a, b, ...) assert_cmp(uintmax_t, a, b, ==, \ !=, "ju", __VA_ARGS__) #define assert_ju_ne(a, b, ...) assert_cmp(uintmax_t, a, b, !=, \ ==, "ju", __VA_ARGS__) #define assert_ju_lt(a, b, ...) assert_cmp(uintmax_t, a, b, <, \ >=, "ju", __VA_ARGS__) #define assert_ju_le(a, b, ...) assert_cmp(uintmax_t, a, b, <=, \ >, "ju", __VA_ARGS__) #define assert_ju_ge(a, b, ...) assert_cmp(uintmax_t, a, b, >=, \ <, "ju", __VA_ARGS__) #define assert_ju_gt(a, b, ...) assert_cmp(uintmax_t, a, b, >, \ <=, "ju", __VA_ARGS__) #define assert_zd_eq(a, b, ...) 
assert_cmp(ssize_t, a, b, ==, \ !=, "zd", __VA_ARGS__) #define assert_zd_ne(a, b, ...) assert_cmp(ssize_t, a, b, !=, \ ==, "zd", __VA_ARGS__) #define assert_zd_lt(a, b, ...) assert_cmp(ssize_t, a, b, <, \ >=, "zd", __VA_ARGS__) #define assert_zd_le(a, b, ...) assert_cmp(ssize_t, a, b, <=, \ >, "zd", __VA_ARGS__) #define assert_zd_ge(a, b, ...) assert_cmp(ssize_t, a, b, >=, \ <, "zd", __VA_ARGS__) #define assert_zd_gt(a, b, ...) assert_cmp(ssize_t, a, b, >, \ <=, "zd", __VA_ARGS__) #define assert_zu_eq(a, b, ...) assert_cmp(size_t, a, b, ==, \ !=, "zu", __VA_ARGS__) #define assert_zu_ne(a, b, ...) assert_cmp(size_t, a, b, !=, \ ==, "zu", __VA_ARGS__) #define assert_zu_lt(a, b, ...) assert_cmp(size_t, a, b, <, \ >=, "zu", __VA_ARGS__) #define assert_zu_le(a, b, ...) assert_cmp(size_t, a, b, <=, \ >, "zu", __VA_ARGS__) #define assert_zu_ge(a, b, ...) assert_cmp(size_t, a, b, >=, \ <, "zu", __VA_ARGS__) #define assert_zu_gt(a, b, ...) assert_cmp(size_t, a, b, >, \ <=, "zu", __VA_ARGS__) #define assert_d32_eq(a, b, ...) assert_cmp(int32_t, a, b, ==, \ !=, PRId32, __VA_ARGS__) #define assert_d32_ne(a, b, ...) assert_cmp(int32_t, a, b, !=, \ ==, PRId32, __VA_ARGS__) #define assert_d32_lt(a, b, ...) assert_cmp(int32_t, a, b, <, \ >=, PRId32, __VA_ARGS__) #define assert_d32_le(a, b, ...) assert_cmp(int32_t, a, b, <=, \ >, PRId32, __VA_ARGS__) #define assert_d32_ge(a, b, ...) assert_cmp(int32_t, a, b, >=, \ <, PRId32, __VA_ARGS__) #define assert_d32_gt(a, b, ...) assert_cmp(int32_t, a, b, >, \ <=, PRId32, __VA_ARGS__) #define assert_u32_eq(a, b, ...) assert_cmp(uint32_t, a, b, ==, \ !=, PRIu32, __VA_ARGS__) #define assert_u32_ne(a, b, ...) assert_cmp(uint32_t, a, b, !=, \ ==, PRIu32, __VA_ARGS__) #define assert_u32_lt(a, b, ...) assert_cmp(uint32_t, a, b, <, \ >=, PRIu32, __VA_ARGS__) #define assert_u32_le(a, b, ...) assert_cmp(uint32_t, a, b, <=, \ >, PRIu32, __VA_ARGS__) #define assert_u32_ge(a, b, ...) assert_cmp(uint32_t, a, b, >=, \ <, PRIu32, __VA_ARGS__) #define assert_u32_gt(a, b, ...) assert_cmp(uint32_t, a, b, >, \ <=, PRIu32, __VA_ARGS__) #define assert_d64_eq(a, b, ...) assert_cmp(int64_t, a, b, ==, \ !=, PRId64, __VA_ARGS__) #define assert_d64_ne(a, b, ...) assert_cmp(int64_t, a, b, !=, \ ==, PRId64, __VA_ARGS__) #define assert_d64_lt(a, b, ...) assert_cmp(int64_t, a, b, <, \ >=, PRId64, __VA_ARGS__) #define assert_d64_le(a, b, ...) assert_cmp(int64_t, a, b, <=, \ >, PRId64, __VA_ARGS__) #define assert_d64_ge(a, b, ...) assert_cmp(int64_t, a, b, >=, \ <, PRId64, __VA_ARGS__) #define assert_d64_gt(a, b, ...) assert_cmp(int64_t, a, b, >, \ <=, PRId64, __VA_ARGS__) #define assert_u64_eq(a, b, ...) assert_cmp(uint64_t, a, b, ==, \ !=, PRIu64, __VA_ARGS__) #define assert_u64_ne(a, b, ...) assert_cmp(uint64_t, a, b, !=, \ ==, PRIu64, __VA_ARGS__) #define assert_u64_lt(a, b, ...) assert_cmp(uint64_t, a, b, <, \ >=, PRIu64, __VA_ARGS__) #define assert_u64_le(a, b, ...) assert_cmp(uint64_t, a, b, <=, \ >, PRIu64, __VA_ARGS__) #define assert_u64_ge(a, b, ...) assert_cmp(uint64_t, a, b, >=, \ <, PRIu64, __VA_ARGS__) #define assert_u64_gt(a, b, ...) assert_cmp(uint64_t, a, b, >, \ <=, PRIu64, __VA_ARGS__) #define assert_b_eq(a, b, ...) do { \ bool a_ = (a); \ bool b_ = (b); \ if (!(a_ == b_)) { \ char prefix[ASSERT_BUFSIZE]; \ char message[ASSERT_BUFSIZE]; \ malloc_snprintf(prefix, sizeof(prefix), \ "%s:%s:%d: Failed assertion: " \ "(%s) == (%s) --> %s != %s: ", \ __func__, __FILE__, __LINE__, \ #a, #b, a_ ? "true" : "false", \ b_ ? 
"true" : "false"); \ malloc_snprintf(message, sizeof(message), __VA_ARGS__); \ p_test_fail(prefix, message); \ } \ } while (0) #define assert_b_ne(a, b, ...) do { \ bool a_ = (a); \ bool b_ = (b); \ if (!(a_ != b_)) { \ char prefix[ASSERT_BUFSIZE]; \ char message[ASSERT_BUFSIZE]; \ malloc_snprintf(prefix, sizeof(prefix), \ "%s:%s:%d: Failed assertion: " \ "(%s) != (%s) --> %s == %s: ", \ __func__, __FILE__, __LINE__, \ #a, #b, a_ ? "true" : "false", \ b_ ? "true" : "false"); \ malloc_snprintf(message, sizeof(message), __VA_ARGS__); \ p_test_fail(prefix, message); \ } \ } while (0) #define assert_true(a, ...) assert_b_eq(a, true, __VA_ARGS__) #define assert_false(a, ...) assert_b_eq(a, false, __VA_ARGS__) #define assert_str_eq(a, b, ...) do { \ if (strcmp((a), (b))) { \ char prefix[ASSERT_BUFSIZE]; \ char message[ASSERT_BUFSIZE]; \ malloc_snprintf(prefix, sizeof(prefix), \ "%s:%s:%d: Failed assertion: " \ "(%s) same as (%s) --> " \ "\"%s\" differs from \"%s\": ", \ __func__, __FILE__, __LINE__, #a, #b, a, b); \ malloc_snprintf(message, sizeof(message), __VA_ARGS__); \ p_test_fail(prefix, message); \ } \ } while (0) #define assert_str_ne(a, b, ...) do { \ if (!strcmp((a), (b))) { \ char prefix[ASSERT_BUFSIZE]; \ char message[ASSERT_BUFSIZE]; \ malloc_snprintf(prefix, sizeof(prefix), \ "%s:%s:%d: Failed assertion: " \ "(%s) differs from (%s) --> " \ "\"%s\" same as \"%s\": ", \ __func__, __FILE__, __LINE__, #a, #b, a, b); \ malloc_snprintf(message, sizeof(message), __VA_ARGS__); \ p_test_fail(prefix, message); \ } \ } while (0) #define assert_not_reached(...) do { \ char prefix[ASSERT_BUFSIZE]; \ char message[ASSERT_BUFSIZE]; \ malloc_snprintf(prefix, sizeof(prefix), \ "%s:%s:%d: Unreachable code reached: ", \ __func__, __FILE__, __LINE__); \ malloc_snprintf(message, sizeof(message), __VA_ARGS__); \ p_test_fail(prefix, message); \ } while (0) /* * If this enum changes, corresponding changes in test/test.sh.in are also * necessary. */ typedef enum { test_status_pass = 0, test_status_skip = 1, test_status_fail = 2, test_status_count = 3 } test_status_t; typedef void (test_t)(void); #define TEST_BEGIN(f) \ static void \ f(void) \ { \ p_test_init(#f); #define TEST_END \ goto label_test_end; \ label_test_end: \ p_test_fini(); \ } #define test(...) \ p_test(__VA_ARGS__, NULL) #define test_not_init(...) \ p_test_not_init(__VA_ARGS__, NULL) #define test_skip_if(e) do { \ if (e) { \ test_skip("%s:%s:%d: Test skipped: (%s)", \ __func__, __FILE__, __LINE__, #e); \ goto label_test_end; \ } \ } while (0) void test_skip(const char *format, ...) JEMALLOC_ATTR(format(printf, 1, 2)); void test_fail(const char *format, ...) JEMALLOC_ATTR(format(printf, 1, 2)); /* For private use by macros. 
*/ test_status_t p_test(test_t *t, ...); test_status_t p_test_not_init(test_t *t, ...); void p_test_init(const char *name); void p_test_fini(void); void p_test_fail(const char *prefix, const char *message); vmem-1.8/src/jemalloc/test/include/test/thd.h000066400000000000000000000003371361505074100212210ustar00rootroot00000000000000/* Abstraction layer for threading in tests */ #ifdef _WIN32 typedef HANDLE thd_t; #else typedef pthread_t thd_t; #endif void thd_create(thd_t *thd, void *(*proc)(void *), void *arg); void thd_join(thd_t thd, void **ret); vmem-1.8/src/jemalloc/test/integration/000077500000000000000000000000001361505074100202075ustar00rootroot00000000000000vmem-1.8/src/jemalloc/test/integration/MALLOCX_ARENA.c000066400000000000000000000026561361505074100223710ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define NTHREADS 10 static bool have_dss = #ifdef JEMALLOC_DSS true #else false #endif ; void * thd_start(void *arg) { unsigned thread_ind = (unsigned)(uintptr_t)arg; unsigned arena_ind; void *p; size_t sz; sz = sizeof(arena_ind); assert_d_eq(mallctl("pool.0.arenas.extend", &arena_ind, &sz, NULL, 0), 0, "Error in pool.0.arenas.extend"); if (thread_ind % 4 != 3) { size_t mib[5]; size_t miblen = sizeof(mib) / sizeof(size_t); const char *dss_precs[] = {"disabled", "primary", "secondary"}; unsigned prec_ind = thread_ind % (sizeof(dss_precs)/sizeof(char*)); const char *dss = dss_precs[prec_ind]; int expected_err = (have_dss || prec_ind == 0) ? 0 : EFAULT; assert_d_eq(mallctlnametomib("pool.0.arena.0.dss", mib, &miblen), 0, "Error in mallctlnametomib()"); mib[3] = arena_ind; assert_d_eq(mallctlbymib(mib, miblen, NULL, NULL, (void *)&dss, sizeof(const char *)), expected_err, "Error in mallctlbymib()"); } p = mallocx(1, MALLOCX_ARENA(arena_ind)); assert_ptr_not_null(p, "Unexpected mallocx() error"); dallocx(p, 0); return (NULL); } TEST_BEGIN(test_MALLOCX_ARENA) { thd_t thds[NTHREADS]; unsigned i; for (i = 0; i < NTHREADS; i++) { thd_create(&thds[i], thd_start, (void *)(uintptr_t)i); } for (i = 0; i < NTHREADS; i++) thd_join(thds[i], NULL); } TEST_END int main(void) { return (test( test_MALLOCX_ARENA)); } vmem-1.8/src/jemalloc/test/integration/aligned_alloc.c000066400000000000000000000053101361505074100231270ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define CHUNK 0x400000 /* #define MAXALIGN ((size_t)UINT64_C(0x80000000000)) */ #define MAXALIGN ((size_t)0x2000000LU) #define NITER 4 TEST_BEGIN(test_alignment_errors) { size_t alignment; void *p; alignment = 0; set_errno(0); p = aligned_alloc(alignment, 1); assert_false(p != NULL || get_errno() != EINVAL, "Expected error for invalid alignment %zu", alignment); for (alignment = sizeof(size_t); alignment < MAXALIGN; alignment <<= 1) { set_errno(0); p = aligned_alloc(alignment + 1, 1); assert_false(p != NULL || get_errno() != EINVAL, "Expected error for invalid alignment %zu", alignment + 1); } } TEST_END TEST_BEGIN(test_oom_errors) { size_t alignment, size; void *p; #if LG_SIZEOF_PTR == 3 alignment = UINT64_C(0x8000000000000000); size = UINT64_C(0x8000000000000000); #else alignment = 0x80000000LU; size = 0x80000000LU; #endif set_errno(0); p = aligned_alloc(alignment, size); assert_false(p != NULL || get_errno() != ENOMEM, "Expected error for aligned_alloc(%zu, %zu)", alignment, size); #if LG_SIZEOF_PTR == 3 alignment = UINT64_C(0x4000000000000000); size = UINT64_C(0xc000000000000001); #else alignment = 0x40000000LU; size = 0xc0000001LU; #endif set_errno(0); p = aligned_alloc(alignment, size); assert_false(p != 
NULL || get_errno() != ENOMEM, "Expected error for aligned_alloc(%zu, %zu)", alignment, size); alignment = 0x10LU; #if LG_SIZEOF_PTR == 3 size = UINT64_C(0xfffffffffffffff0); #else size = 0xfffffff0LU; #endif set_errno(0); p = aligned_alloc(alignment, size); assert_false(p != NULL || get_errno() != ENOMEM, "Expected error for aligned_alloc(&p, %zu, %zu)", alignment, size); } TEST_END TEST_BEGIN(test_alignment_and_size) { size_t alignment, size, total; unsigned i; void *ps[NITER]; for (i = 0; i < NITER; i++) ps[i] = NULL; for (alignment = 8; alignment <= MAXALIGN; alignment <<= 1) { total = 0; for (size = 1; size < 3 * alignment && size < (1U << 31); size += (alignment >> (LG_SIZEOF_PTR-1)) - 1) { for (i = 0; i < NITER; i++) { ps[i] = aligned_alloc(alignment, size); if (ps[i] == NULL) { char buf[BUFERROR_BUF]; buferror(get_errno(), buf, sizeof(buf)); test_fail( "Error for alignment=%zu, " "size=%zu (%#zx): %s", alignment, size, size, buf); } total += malloc_usable_size(ps[i]); if (total >= (MAXALIGN << 1)) break; } for (i = 0; i < NITER; i++) { if (ps[i] != NULL) { free(ps[i]); ps[i] = NULL; } } } } } TEST_END int main(void) { return (test( test_alignment_errors, test_oom_errors, test_alignment_and_size)); } vmem-1.8/src/jemalloc/test/integration/allocated.c000066400000000000000000000056551361505074100223160ustar00rootroot00000000000000#include "test/jemalloc_test.h" static const bool config_stats = #ifdef JEMALLOC_STATS true #else false #endif ; void * thd_start(void *arg) { int err; void *p; uint64_t a0, a1, d0, d1; uint64_t *ap0, *ap1, *dp0, *dp1; size_t sz, usize; sz = sizeof(a0); if ((err = mallctl("thread.allocated", &a0, &sz, NULL, 0))) { if (err == ENOENT) goto label_ENOENT; test_fail("%s(): Error in mallctl(): %s", __func__, strerror(err)); } sz = sizeof(ap0); if ((err = mallctl("thread.allocatedp", &ap0, &sz, NULL, 0))) { if (err == ENOENT) goto label_ENOENT; test_fail("%s(): Error in mallctl(): %s", __func__, strerror(err)); } assert_u64_eq(*ap0, a0, "\"thread.allocatedp\" should provide a pointer to internal " "storage"); sz = sizeof(d0); if ((err = mallctl("thread.deallocated", &d0, &sz, NULL, 0))) { if (err == ENOENT) goto label_ENOENT; test_fail("%s(): Error in mallctl(): %s", __func__, strerror(err)); } sz = sizeof(dp0); if ((err = mallctl("thread.deallocatedp", &dp0, &sz, NULL, 0))) { if (err == ENOENT) goto label_ENOENT; test_fail("%s(): Error in mallctl(): %s", __func__, strerror(err)); } assert_u64_eq(*dp0, d0, "\"thread.deallocatedp\" should provide a pointer to internal " "storage"); p = malloc(1); assert_ptr_not_null(p, "Unexpected malloc() error"); sz = sizeof(a1); mallctl("thread.allocated", &a1, &sz, NULL, 0); sz = sizeof(ap1); mallctl("thread.allocatedp", &ap1, &sz, NULL, 0); assert_u64_eq(*ap1, a1, "Dereferenced \"thread.allocatedp\" value should equal " "\"thread.allocated\" value"); assert_ptr_eq(ap0, ap1, "Pointer returned by \"thread.allocatedp\" should not change"); usize = malloc_usable_size(p); assert_u64_le(a0 + usize, a1, "Allocated memory counter should increase by at least the amount " "explicitly allocated"); free(p); sz = sizeof(d1); mallctl("thread.deallocated", &d1, &sz, NULL, 0); sz = sizeof(dp1); mallctl("thread.deallocatedp", &dp1, &sz, NULL, 0); assert_u64_eq(*dp1, d1, "Dereferenced \"thread.deallocatedp\" value should equal " "\"thread.deallocated\" value"); assert_ptr_eq(dp0, dp1, "Pointer returned by \"thread.deallocatedp\" should not change"); assert_u64_le(d0 + usize, d1, "Deallocated memory counter should increase by at least the amount " 
"explicitly deallocated"); return (NULL); label_ENOENT: assert_false(config_stats, "ENOENT should only be returned if stats are disabled"); test_skip("\"thread.allocated\" mallctl not available"); return (NULL); } TEST_BEGIN(test_main_thread) { thd_start(NULL); } TEST_END TEST_BEGIN(test_subthread) { thd_t thd; thd_create(&thd, thd_start, NULL); thd_join(thd, NULL); } TEST_END int main(void) { /* Run tests multiple times to check for bad interactions. */ return (test( test_main_thread, test_subthread, test_main_thread, test_subthread, test_main_thread)); } vmem-1.8/src/jemalloc/test/integration/allocm.c000066400000000000000000000052371361505074100216310ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define CHUNK 0x400000 #define MAXALIGN (((size_t)1) << 25) #define NITER 4 TEST_BEGIN(test_basic) { size_t nsz, rsz, sz; void *p; sz = 42; nsz = 0; assert_d_eq(nallocm(&nsz, sz, 0), ALLOCM_SUCCESS, "Unexpected nallocm() error"); rsz = 0; assert_d_eq(allocm(&p, &rsz, sz, 0), ALLOCM_SUCCESS, "Unexpected allocm() error"); assert_zu_ge(rsz, sz, "Real size smaller than expected"); assert_zu_eq(nsz, rsz, "nallocm()/allocm() rsize mismatch"); assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); assert_d_eq(allocm(&p, NULL, sz, 0), ALLOCM_SUCCESS, "Unexpected allocm() error"); assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); nsz = 0; assert_d_eq(nallocm(&nsz, sz, ALLOCM_ZERO), ALLOCM_SUCCESS, "Unexpected nallocm() error"); rsz = 0; assert_d_eq(allocm(&p, &rsz, sz, ALLOCM_ZERO), ALLOCM_SUCCESS, "Unexpected allocm() error"); assert_zu_eq(nsz, rsz, "nallocm()/allocm() rsize mismatch"); assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); } TEST_END TEST_BEGIN(test_alignment_and_size) { int r; size_t nsz, rsz, sz, alignment, total; unsigned i; void *ps[NITER]; for (i = 0; i < NITER; i++) ps[i] = NULL; for (alignment = 8; alignment <= MAXALIGN; alignment <<= 1) { total = 0; for (sz = 1; sz < 3 * alignment && sz < (1U << 31); sz += (alignment >> (LG_SIZEOF_PTR-1)) - 1) { for (i = 0; i < NITER; i++) { nsz = 0; r = nallocm(&nsz, sz, ALLOCM_ALIGN(alignment) | ALLOCM_ZERO); assert_d_eq(r, ALLOCM_SUCCESS, "nallocm() error for alignment=%zu, " "size=%zu (%#zx): %d", alignment, sz, sz, r); rsz = 0; r = allocm(&ps[i], &rsz, sz, ALLOCM_ALIGN(alignment) | ALLOCM_ZERO); assert_d_eq(r, ALLOCM_SUCCESS, "allocm() error for alignment=%zu, " "size=%zu (%#zx): %d", alignment, sz, sz, r); assert_zu_ge(rsz, sz, "Real size smaller than expected for " "alignment=%zu, size=%zu", alignment, sz); assert_zu_eq(nsz, rsz, "nallocm()/allocm() rsize mismatch for " "alignment=%zu, size=%zu", alignment, sz); assert_ptr_null( (void *)((uintptr_t)ps[i] & (alignment-1)), "%p inadequately aligned for" " alignment=%zu, size=%zu", ps[i], alignment, sz); sallocm(ps[i], &rsz, 0); total += rsz; if (total >= (MAXALIGN << 1)) break; } for (i = 0; i < NITER; i++) { if (ps[i] != NULL) { dallocm(ps[i], 0); ps[i] = NULL; } } } } } TEST_END int main(void) { return (test( test_basic, test_alignment_and_size)); } vmem-1.8/src/jemalloc/test/integration/chunk.c000066400000000000000000000026751361505074100214750ustar00rootroot00000000000000#include "test/jemalloc_test.h" chunk_alloc_t *old_alloc; chunk_dalloc_t *old_dalloc; bool chunk_dalloc(void *chunk, size_t size, unsigned arena_ind, pool_t *pool) { return (old_dalloc(chunk, size, arena_ind, pool)); } void * chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero, unsigned arena_ind, pool_t *pool) { return 
(old_alloc(new_addr, size, alignment, zero, arena_ind, pool)); } TEST_BEGIN(test_chunk) { void *p; chunk_alloc_t *new_alloc; chunk_dalloc_t *new_dalloc; size_t old_size, new_size; new_alloc = chunk_alloc; new_dalloc = chunk_dalloc; old_size = sizeof(chunk_alloc_t *); new_size = sizeof(chunk_alloc_t *); assert_d_eq(mallctl("pool.0.arena.0.chunk.alloc", &old_alloc, &old_size, &new_alloc, new_size), 0, "Unexpected alloc error"); assert_ptr_ne(old_alloc, new_alloc, "Unexpected alloc error"); assert_d_eq(mallctl("pool.0.arena.0.chunk.dalloc", &old_dalloc, &old_size, &new_dalloc, new_size), 0, "Unexpected dalloc error"); assert_ptr_ne(old_dalloc, new_dalloc, "Unexpected dalloc error"); p = mallocx(42, 0); assert_ptr_ne(p, NULL, "Unexpected alloc error"); free(p); assert_d_eq(mallctl("pool.0.arena.0.chunk.alloc", NULL, NULL, &old_alloc, old_size), 0, "Unexpected alloc error"); assert_d_eq(mallctl("pool.0.arena.0.chunk.dalloc", NULL, NULL, &old_dalloc, old_size), 0, "Unexpected dalloc error"); } TEST_END int main(void) { return (test(test_chunk)); } vmem-1.8/src/jemalloc/test/integration/mallocx.c000066400000000000000000000045231361505074100220160ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define CHUNK 0x400000 #define MAXALIGN (((size_t)1) << 25) #define NITER 4 TEST_BEGIN(test_basic) { size_t nsz, rsz, sz; void *p; sz = 42; nsz = nallocx(sz, 0); assert_zu_ne(nsz, 0, "Unexpected nallocx() error"); p = mallocx(sz, 0); assert_ptr_not_null(p, "Unexpected mallocx() error"); rsz = sallocx(p, 0); assert_zu_ge(rsz, sz, "Real size smaller than expected"); assert_zu_eq(nsz, rsz, "nallocx()/sallocx() size mismatch"); dallocx(p, 0); p = mallocx(sz, 0); assert_ptr_not_null(p, "Unexpected mallocx() error"); dallocx(p, 0); nsz = nallocx(sz, MALLOCX_ZERO); assert_zu_ne(nsz, 0, "Unexpected nallocx() error"); p = mallocx(sz, MALLOCX_ZERO); assert_ptr_not_null(p, "Unexpected mallocx() error"); rsz = sallocx(p, 0); assert_zu_eq(nsz, rsz, "nallocx()/sallocx() rsize mismatch"); dallocx(p, 0); } TEST_END TEST_BEGIN(test_alignment_and_size) { size_t nsz, rsz, sz, alignment, total; unsigned i; void *ps[NITER]; for (i = 0; i < NITER; i++) ps[i] = NULL; for (alignment = 8; alignment <= MAXALIGN; alignment <<= 1) { total = 0; for (sz = 1; sz < 3 * alignment && sz < (1U << 31); sz += (alignment >> (LG_SIZEOF_PTR-1)) - 1) { for (i = 0; i < NITER; i++) { nsz = nallocx(sz, MALLOCX_ALIGN(alignment) | MALLOCX_ZERO); assert_zu_ne(nsz, 0, "nallocx() error for alignment=%zu, " "size=%zu (%#zx)", alignment, sz, sz); ps[i] = mallocx(sz, MALLOCX_ALIGN(alignment) | MALLOCX_ZERO); assert_ptr_not_null(ps[i], "mallocx() error for alignment=%zu, " "size=%zu (%#zx)", alignment, sz, sz); rsz = sallocx(ps[i], 0); assert_zu_ge(rsz, sz, "Real size smaller than expected for " "alignment=%zu, size=%zu", alignment, sz); assert_zu_eq(nsz, rsz, "nallocx()/sallocx() size mismatch for " "alignment=%zu, size=%zu", alignment, sz); assert_ptr_null( (void *)((uintptr_t)ps[i] & (alignment-1)), "%p inadequately aligned for" " alignment=%zu, size=%zu", ps[i], alignment, sz); total += rsz; if (total >= (MAXALIGN << 1)) break; } for (i = 0; i < NITER; i++) { if (ps[i] != NULL) { dallocx(ps[i], 0); ps[i] = NULL; } } } } } TEST_END int main(void) { return (test( test_basic, test_alignment_and_size)); } vmem-1.8/src/jemalloc/test/integration/mremap.c000066400000000000000000000017411361505074100216370ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_mremap) { int err; size_t sz, lg_chunk, chunksize, i; char *p, *q; sz = 
sizeof(lg_chunk); err = mallctl("opt.lg_chunk", &lg_chunk, &sz, NULL, 0); assert_d_eq(err, 0, "Error in mallctl(): %s", strerror(err)); chunksize = ((size_t)1U) << lg_chunk; p = (char *)malloc(chunksize); assert_ptr_not_null(p, "malloc(%zu) --> %p", chunksize, p); memset(p, 'a', chunksize); q = (char *)realloc(p, chunksize * 2); assert_ptr_not_null(q, "realloc(%p, %zu) --> %p", p, chunksize * 2, q); for (i = 0; i < chunksize; i++) { assert_c_eq(q[i], 'a', "realloc() should preserve existing bytes across copies"); } p = q; q = (char *)realloc(p, chunksize); assert_ptr_not_null(q, "realloc(%p, %zu) --> %p", p, chunksize, q); for (i = 0; i < chunksize; i++) { assert_c_eq(q[i], 'a', "realloc() should preserve existing bytes across copies"); } free(q); } TEST_END int main(void) { return (test( test_mremap)); } vmem-1.8/src/jemalloc/test/integration/posix_memalign.c000066400000000000000000000050531361505074100233710ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define CHUNK 0x400000 /* #define MAXALIGN ((size_t)UINT64_C(0x80000000000)) */ #define MAXALIGN ((size_t)0x2000000LU) #define NITER 4 TEST_BEGIN(test_alignment_errors) { size_t alignment; void *p; for (alignment = 0; alignment < sizeof(void *); alignment++) { assert_d_eq(posix_memalign(&p, alignment, 1), EINVAL, "Expected error for invalid alignment %zu", alignment); } for (alignment = sizeof(size_t); alignment < MAXALIGN; alignment <<= 1) { assert_d_ne(posix_memalign(&p, alignment + 1, 1), 0, "Expected error for invalid alignment %zu", alignment + 1); } } TEST_END TEST_BEGIN(test_oom_errors) { size_t alignment, size; void *p; #if LG_SIZEOF_PTR == 3 alignment = UINT64_C(0x8000000000000000); size = UINT64_C(0x8000000000000000); #else alignment = 0x80000000LU; size = 0x80000000LU; #endif assert_d_ne(posix_memalign(&p, alignment, size), 0, "Expected error for posix_memalign(&p, %zu, %zu)", alignment, size); #if LG_SIZEOF_PTR == 3 alignment = UINT64_C(0x4000000000000000); size = UINT64_C(0xc000000000000001); #else alignment = 0x40000000LU; size = 0xc0000001LU; #endif assert_d_ne(posix_memalign(&p, alignment, size), 0, "Expected error for posix_memalign(&p, %zu, %zu)", alignment, size); alignment = 0x10LU; #if LG_SIZEOF_PTR == 3 size = UINT64_C(0xfffffffffffffff0); #else size = 0xfffffff0LU; #endif assert_d_ne(posix_memalign(&p, alignment, size), 0, "Expected error for posix_memalign(&p, %zu, %zu)", alignment, size); } TEST_END TEST_BEGIN(test_alignment_and_size) { size_t alignment, size, total; unsigned i; int err; void *ps[NITER]; for (i = 0; i < NITER; i++) ps[i] = NULL; for (alignment = 8; alignment <= MAXALIGN; alignment <<= 1) { total = 0; for (size = 1; size < 3 * alignment && size < (1U << 31); size += (alignment >> (LG_SIZEOF_PTR-1)) - 1) { for (i = 0; i < NITER; i++) { err = posix_memalign(&ps[i], alignment, size); if (err) { char buf[BUFERROR_BUF]; buferror(get_errno(), buf, sizeof(buf)); test_fail( "Error for alignment=%zu, " "size=%zu (%#zx): %s", alignment, size, size, buf); } total += malloc_usable_size(ps[i]); if (total >= (MAXALIGN << 1)) break; } for (i = 0; i < NITER; i++) { if (ps[i] != NULL) { free(ps[i]); ps[i] = NULL; } } } } } TEST_END int main(void) { return (test( test_alignment_errors, test_oom_errors, test_alignment_and_size)); } vmem-1.8/src/jemalloc/test/integration/rallocm.c000066400000000000000000000051151361505074100220060ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_same_size) { void *p, *q; size_t sz, tsz; assert_d_eq(allocm(&p, &sz, 42, 0), ALLOCM_SUCCESS, 
"Unexpected allocm() error"); q = p; assert_d_eq(rallocm(&q, &tsz, sz, 0, ALLOCM_NO_MOVE), ALLOCM_SUCCESS, "Unexpected rallocm() error"); assert_ptr_eq(q, p, "Unexpected object move"); assert_zu_eq(tsz, sz, "Unexpected size change: %zu --> %zu", sz, tsz); assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); } TEST_END TEST_BEGIN(test_extra_no_move) { void *p, *q; size_t sz, tsz; assert_d_eq(allocm(&p, &sz, 42, 0), ALLOCM_SUCCESS, "Unexpected allocm() error"); q = p; assert_d_eq(rallocm(&q, &tsz, sz, sz-42, ALLOCM_NO_MOVE), ALLOCM_SUCCESS, "Unexpected rallocm() error"); assert_ptr_eq(q, p, "Unexpected object move"); assert_zu_eq(tsz, sz, "Unexpected size change: %zu --> %zu", sz, tsz); assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); } TEST_END TEST_BEGIN(test_no_move_fail) { void *p, *q; size_t sz, tsz; assert_d_eq(allocm(&p, &sz, 42, 0), ALLOCM_SUCCESS, "Unexpected allocm() error"); q = p; assert_d_eq(rallocm(&q, &tsz, sz + 5, 0, ALLOCM_NO_MOVE), ALLOCM_ERR_NOT_MOVED, "Unexpected rallocm() result"); assert_ptr_eq(q, p, "Unexpected object move"); assert_zu_eq(tsz, sz, "Unexpected size change: %zu --> %zu", sz, tsz); assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); } TEST_END TEST_BEGIN(test_grow_and_shrink) { void *p, *q; size_t tsz; #define NCYCLES 3 unsigned i, j; #define NSZS 2500 size_t szs[NSZS]; #define MAXSZ ZU(12 * 1024 * 1024) assert_d_eq(allocm(&p, &szs[0], 1, 0), ALLOCM_SUCCESS, "Unexpected allocm() error"); for (i = 0; i < NCYCLES; i++) { for (j = 1; j < NSZS && szs[j-1] < MAXSZ; j++) { q = p; assert_d_eq(rallocm(&q, &szs[j], szs[j-1]+1, 0, 0), ALLOCM_SUCCESS, "Unexpected rallocm() error for size=%zu-->%zu", szs[j-1], szs[j-1]+1); assert_zu_ne(szs[j], szs[j-1]+1, "Expected size to at least: %zu", szs[j-1]+1); p = q; } for (j--; j > 0; j--) { q = p; assert_d_eq(rallocm(&q, &tsz, szs[j-1], 0, 0), ALLOCM_SUCCESS, "Unexpected rallocm() error for size=%zu-->%zu", szs[j], szs[j-1]); assert_zu_eq(tsz, szs[j-1], "Expected size=%zu, got size=%zu", szs[j-1], tsz); p = q; } } assert_d_eq(dallocm(p, 0), ALLOCM_SUCCESS, "Unexpected dallocm() error"); } TEST_END int main(void) { return (test( test_same_size, test_extra_no_move, test_no_move_fail, test_grow_and_shrink)); } vmem-1.8/src/jemalloc/test/integration/rallocx.c000066400000000000000000000104151361505074100220200ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_grow_and_shrink) { void *p, *q; size_t tsz; #define NCYCLES 3 unsigned i, j; #define NSZS 2500 size_t szs[NSZS]; #define MAXSZ ZU(12 * 1024 * 1024) p = mallocx(1, 0); assert_ptr_not_null(p, "Unexpected mallocx() error"); szs[0] = sallocx(p, 0); for (i = 0; i < NCYCLES; i++) { for (j = 1; j < NSZS && szs[j-1] < MAXSZ; j++) { q = rallocx(p, szs[j-1]+1, 0); assert_ptr_not_null(q, "Unexpected rallocx() error for size=%zu-->%zu", szs[j-1], szs[j-1]+1); szs[j] = sallocx(q, 0); assert_zu_ne(szs[j], szs[j-1]+1, "Expected size to at least: %zu", szs[j-1]+1); p = q; } for (j--; j > 0; j--) { q = rallocx(p, szs[j-1], 0); assert_ptr_not_null(q, "Unexpected rallocx() error for size=%zu-->%zu", szs[j], szs[j-1]); tsz = sallocx(q, 0); assert_zu_eq(tsz, szs[j-1], "Expected size=%zu, got size=%zu", szs[j-1], tsz); p = q; } } dallocx(p, 0); #undef MAXSZ #undef NSZS #undef NCYCLES } TEST_END static bool validate_fill(const void *p, uint8_t c, size_t offset, size_t len) { bool ret = false; const uint8_t *buf = (const uint8_t *)p; size_t i; for (i = 0; i < len; i++) { uint8_t b = buf[offset+i]; if (b 
!= c) { test_fail("Allocation at %p contains %#x rather than " "%#x at offset %zu", p, b, c, offset+i); ret = true; } } return (ret); } TEST_BEGIN(test_zero) { void *p, *q; size_t psz, qsz, i, j; size_t start_sizes[] = {1, 3*1024, 63*1024, 4095*1024}; #define FILL_BYTE 0xaaU #define RANGE 2048 for (i = 0; i < sizeof(start_sizes)/sizeof(size_t); i++) { size_t start_size = start_sizes[i]; p = mallocx(start_size, MALLOCX_ZERO); assert_ptr_not_null(p, "Unexpected mallocx() error"); psz = sallocx(p, 0); assert_false(validate_fill(p, 0, 0, psz), "Expected zeroed memory"); memset(p, FILL_BYTE, psz); assert_false(validate_fill(p, FILL_BYTE, 0, psz), "Expected filled memory"); for (j = 1; j < RANGE; j++) { q = rallocx(p, start_size+j, MALLOCX_ZERO); assert_ptr_not_null(q, "Unexpected rallocx() error"); qsz = sallocx(q, 0); if (q != p || qsz != psz) { assert_false(validate_fill(q, FILL_BYTE, 0, psz), "Expected filled memory"); assert_false(validate_fill(q, 0, psz, qsz-psz), "Expected zeroed memory"); } if (psz != qsz) { memset((void *)((uintptr_t)q+psz), FILL_BYTE, qsz-psz); psz = qsz; } p = q; } assert_false(validate_fill(p, FILL_BYTE, 0, psz), "Expected filled memory"); dallocx(p, 0); } #undef FILL_BYTE } TEST_END TEST_BEGIN(test_align) { void *p, *q; size_t align; #define MAX_ALIGN (ZU(1) << 25) align = ZU(1); p = mallocx(1, MALLOCX_ALIGN(align)); assert_ptr_not_null(p, "Unexpected mallocx() error"); for (align <<= 1; align <= MAX_ALIGN; align <<= 1) { q = rallocx(p, 1, MALLOCX_ALIGN(align)); assert_ptr_not_null(q, "Unexpected rallocx() error for align=%zu", align); assert_ptr_null( (void *)((uintptr_t)q & (align-1)), "%p inadequately aligned for align=%zu", q, align); p = q; } dallocx(p, 0); #undef MAX_ALIGN } TEST_END TEST_BEGIN(test_lg_align_and_zero) { void *p, *q; size_t lg_align, sz; #define MAX_LG_ALIGN 25 #define MAX_VALIDATE (ZU(1) << 22) lg_align = ZU(0); p = mallocx(1, MALLOCX_LG_ALIGN(lg_align)|MALLOCX_ZERO); assert_ptr_not_null(p, "Unexpected mallocx() error"); for (lg_align++; lg_align <= MAX_LG_ALIGN; lg_align++) { q = rallocx(p, 1, MALLOCX_LG_ALIGN(lg_align)|MALLOCX_ZERO); assert_ptr_not_null(q, "Unexpected rallocx() error for lg_align=%zu", lg_align); assert_ptr_null( (void *)((uintptr_t)q & ((ZU(1) << lg_align)-1)), "%p inadequately aligned for lg_align=%zu", q, lg_align); sz = sallocx(q, 0); if ((sz << 1) <= MAX_VALIDATE) { assert_false(validate_fill(q, 0, 0, sz), "Expected zeroed memory"); } else { assert_false(validate_fill(q, 0, 0, MAX_VALIDATE), "Expected zeroed memory"); assert_false(validate_fill( (void *)((uintptr_t)q+sz-MAX_VALIDATE), 0, 0, MAX_VALIDATE), "Expected zeroed memory"); } p = q; } dallocx(p, 0); #undef MAX_VALIDATE #undef MAX_LG_ALIGN } TEST_END int main(void) { return (test( test_grow_and_shrink, test_zero, test_align, test_lg_align_and_zero)); } vmem-1.8/src/jemalloc/test/integration/thread_arena.c000066400000000000000000000030331361505074100227670ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define NTHREADS 10 void * thd_start(void *arg) { unsigned main_arena_ind = *(unsigned *)arg; void *p; unsigned arena_ind; size_t size; int err; p = malloc(1); assert_ptr_not_null(p, "Error in malloc()"); free(p); size = sizeof(arena_ind); if ((err = mallctl("thread.pool.0.arena", &arena_ind, &size, &main_arena_ind, sizeof(main_arena_ind)))) { char buf[BUFERROR_BUF]; buferror(err, buf, sizeof(buf)); test_fail("Error in mallctl(): %s", buf); } size = sizeof(arena_ind); if ((err = mallctl("thread.pool.0.arena", &arena_ind, &size, NULL, 0))) { char 
buf[BUFERROR_BUF]; buferror(err, buf, sizeof(buf)); test_fail("Error in mallctl(): %s", buf); } assert_u_eq(arena_ind, main_arena_ind, "Arena index should be same as for main thread"); return (NULL); } TEST_BEGIN(test_thread_arena) { void *p; unsigned arena_ind; size_t size; int err; thd_t thds[NTHREADS]; unsigned i; p = malloc(1); assert_ptr_not_null(p, "Error in malloc()"); size = sizeof(arena_ind); if ((err = mallctl("thread.pool.0.arena", &arena_ind, &size, NULL, 0))) { char buf[BUFERROR_BUF]; buferror(err, buf, sizeof(buf)); test_fail("Error in mallctl(): %s", buf); } for (i = 0; i < NTHREADS; i++) { thd_create(&thds[i], thd_start, (void *)&arena_ind); } for (i = 0; i < NTHREADS; i++) { intptr_t join_ret; thd_join(thds[i], (void *)&join_ret); assert_zd_eq(join_ret, 0, "Unexpected thread join error"); } } TEST_END int main(void) { return (test( test_thread_arena)); } vmem-1.8/src/jemalloc/test/integration/thread_tcache_enabled.c000066400000000000000000000047471361505074100246170ustar00rootroot00000000000000#include "test/jemalloc_test.h" static const bool config_tcache = #ifdef JEMALLOC_TCACHE true #else false #endif ; void * thd_start(void *arg) { int err; size_t sz; bool e0, e1; sz = sizeof(bool); if ((err = mallctl("thread.tcache.enabled", &e0, &sz, NULL, 0))) { if (err == ENOENT) { assert_false(config_tcache, "ENOENT should only be returned if tcache is " "disabled"); } goto label_ENOENT; } if (e0) { e1 = false; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_true(e0, "tcache should be enabled"); } e1 = true; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_false(e0, "tcache should be disabled"); e1 = true; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_true(e0, "tcache should be enabled"); e1 = false; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_true(e0, "tcache should be enabled"); e1 = false; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_false(e0, "tcache should be disabled"); free(malloc(1)); e1 = true; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_false(e0, "tcache should be disabled"); free(malloc(1)); e1 = true; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_true(e0, "tcache should be enabled"); free(malloc(1)); e1 = false; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_true(e0, "tcache should be enabled"); free(malloc(1)); e1 = false; assert_d_eq(mallctl("thread.tcache.enabled", &e0, &sz, &e1, sz), 0, "Unexpected mallctl() error"); assert_false(e0, "tcache should be disabled"); free(malloc(1)); return (NULL); label_ENOENT: test_skip("\"thread.tcache.enabled\" mallctl not available"); return (NULL); } TEST_BEGIN(test_main_thread) { thd_start(NULL); } TEST_END TEST_BEGIN(test_subthread) { thd_t thd; thd_create(&thd, thd_start, NULL); thd_join(thd, NULL); } TEST_END int main(void) { /* Run tests multiple times to check for bad interactions. 
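Re-running thd_start() in the main thread also exercises it against whatever per-thread tcache state the previous pass left behind, rather than against the configuration default.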
*/ return (test( test_main_thread, test_subthread, test_main_thread, test_subthread, test_main_thread)); } vmem-1.8/src/jemalloc/test/integration/xallocx.c000066400000000000000000000017621361505074100220330ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_same_size) { void *p; size_t sz, tsz; p = mallocx(42, 0); assert_ptr_not_null(p, "Unexpected mallocx() error"); sz = sallocx(p, 0); tsz = xallocx(p, sz, 0, 0); assert_zu_eq(tsz, sz, "Unexpected size change: %zu --> %zu", sz, tsz); dallocx(p, 0); } TEST_END TEST_BEGIN(test_extra_no_move) { void *p; size_t sz, tsz; p = mallocx(42, 0); assert_ptr_not_null(p, "Unexpected mallocx() error"); sz = sallocx(p, 0); tsz = xallocx(p, sz, sz-42, 0); assert_zu_eq(tsz, sz, "Unexpected size change: %zu --> %zu", sz, tsz); dallocx(p, 0); } TEST_END TEST_BEGIN(test_no_move_fail) { void *p; size_t sz, tsz; p = mallocx(42, 0); assert_ptr_not_null(p, "Unexpected mallocx() error"); sz = sallocx(p, 0); tsz = xallocx(p, sz + 5, 0, 0); assert_zu_eq(tsz, sz, "Unexpected size change: %zu --> %zu", sz, tsz); dallocx(p, 0); } TEST_END int main(void) { return (test( test_same_size, test_extra_no_move, test_no_move_fail)); } vmem-1.8/src/jemalloc/test/src/000077500000000000000000000000001361505074100164535ustar00rootroot00000000000000vmem-1.8/src/jemalloc/test/src/SFMT.c000066400000000000000000000504351361505074100173770ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /** * @file SFMT.c * @brief SIMD oriented Fast Mersenne Twister(SFMT) * * @author Mutsuo Saito (Hiroshima University) * @author Makoto Matsumoto (Hiroshima University) * * Copyright (C) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. 
* * The new BSD License is applied to this software, see LICENSE.txt */ #define SFMT_C_ #include "test/jemalloc_test.h" #include "test/SFMT-params.h" #if defined(JEMALLOC_BIG_ENDIAN) && !defined(BIG_ENDIAN64) #define BIG_ENDIAN64 1 #endif #if defined(__BIG_ENDIAN__) && !defined(__amd64) && !defined(BIG_ENDIAN64) #define BIG_ENDIAN64 1 #endif #if defined(HAVE_ALTIVEC) && !defined(BIG_ENDIAN64) #define BIG_ENDIAN64 1 #endif #if defined(ONLY64) && !defined(BIG_ENDIAN64) #if defined(__GNUC__) #error "-DONLY64 must be specified with -DBIG_ENDIAN64" #endif #undef ONLY64 #endif /*------------------------------------------------------ 128-bit SIMD data type for Altivec, SSE2 or standard C ------------------------------------------------------*/ #if defined(HAVE_ALTIVEC) /** 128-bit data structure */ union W128_T { vector unsigned int s; uint32_t u[4]; }; /** 128-bit data type */ typedef union W128_T w128_t; #elif defined(HAVE_SSE2) /** 128-bit data structure */ union W128_T { __m128i si; uint32_t u[4]; }; /** 128-bit data type */ typedef union W128_T w128_t; #else /** 128-bit data structure */ struct W128_T { uint32_t u[4]; }; /** 128-bit data type */ typedef struct W128_T w128_t; #endif struct sfmt_s { /** the 128-bit internal state array */ w128_t sfmt[N]; /** index counter to the 32-bit internal state array */ int idx; /** a flag: it is 0 if and only if the internal state is not yet * initialized. */ int initialized; }; /*-------------------------------------- FILE GLOBAL VARIABLES internal state, index counter and flag --------------------------------------*/ /** a parity check vector which certificate the period of 2^{MEXP} */ static uint32_t parity[4] = {PARITY1, PARITY2, PARITY3, PARITY4}; /*---------------- STATIC FUNCTIONS ----------------*/ JEMALLOC_INLINE_C int idxof(int i); #if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2)) JEMALLOC_INLINE_C void rshift128(w128_t *out, w128_t const *in, int shift); JEMALLOC_INLINE_C void lshift128(w128_t *out, w128_t const *in, int shift); #endif JEMALLOC_INLINE_C void gen_rand_all(sfmt_t *ctx); JEMALLOC_INLINE_C void gen_rand_array(sfmt_t *ctx, w128_t *array, int size); JEMALLOC_INLINE_C uint32_t func1(uint32_t x); JEMALLOC_INLINE_C uint32_t func2(uint32_t x); static void period_certification(sfmt_t *ctx); #if defined(BIG_ENDIAN64) && !defined(ONLY64) JEMALLOC_INLINE_C void swap(w128_t *array, int size); #endif #if defined(HAVE_ALTIVEC) #include "test/SFMT-alti.h" #elif defined(HAVE_SSE2) #include "test/SFMT-sse2.h" #endif /** * This function simulate a 64-bit index of LITTLE ENDIAN * in BIG ENDIAN machine. */ #ifdef ONLY64 JEMALLOC_INLINE_C int idxof(int i) { return i ^ 1; } #else JEMALLOC_INLINE_C int idxof(int i) { return i; } #endif /** * This function simulates SIMD 128-bit right shift by the standard C. * The 128-bit integer given in in is shifted by (shift * 8) bits. * This function simulates the LITTLE ENDIAN SIMD. 
* @param out the output of this function * @param in the 128-bit data to be shifted * @param shift the shift value */ #if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2)) #ifdef ONLY64 JEMALLOC_INLINE_C void rshift128(w128_t *out, w128_t const *in, int shift) { uint64_t th, tl, oh, ol; th = ((uint64_t)in->u[2] << 32) | ((uint64_t)in->u[3]); tl = ((uint64_t)in->u[0] << 32) | ((uint64_t)in->u[1]); oh = th >> (shift * 8); ol = tl >> (shift * 8); ol |= th << (64 - shift * 8); out->u[0] = (uint32_t)(ol >> 32); out->u[1] = (uint32_t)ol; out->u[2] = (uint32_t)(oh >> 32); out->u[3] = (uint32_t)oh; } #else JEMALLOC_INLINE_C void rshift128(w128_t *out, w128_t const *in, int shift) { uint64_t th, tl, oh, ol; th = ((uint64_t)in->u[3] << 32) | ((uint64_t)in->u[2]); tl = ((uint64_t)in->u[1] << 32) | ((uint64_t)in->u[0]); oh = th >> (shift * 8); ol = tl >> (shift * 8); ol |= th << (64 - shift * 8); out->u[1] = (uint32_t)(ol >> 32); out->u[0] = (uint32_t)ol; out->u[3] = (uint32_t)(oh >> 32); out->u[2] = (uint32_t)oh; } #endif /** * This function simulates SIMD 128-bit left shift by the standard C. * The 128-bit integer given in in is shifted by (shift * 8) bits. * This function simulates the LITTLE ENDIAN SIMD. * @param out the output of this function * @param in the 128-bit data to be shifted * @param shift the shift value */ #ifdef ONLY64 JEMALLOC_INLINE_C void lshift128(w128_t *out, w128_t const *in, int shift) { uint64_t th, tl, oh, ol; th = ((uint64_t)in->u[2] << 32) | ((uint64_t)in->u[3]); tl = ((uint64_t)in->u[0] << 32) | ((uint64_t)in->u[1]); oh = th << (shift * 8); ol = tl << (shift * 8); oh |= tl >> (64 - shift * 8); out->u[0] = (uint32_t)(ol >> 32); out->u[1] = (uint32_t)ol; out->u[2] = (uint32_t)(oh >> 32); out->u[3] = (uint32_t)oh; } #else JEMALLOC_INLINE_C void lshift128(w128_t *out, w128_t const *in, int shift) { uint64_t th, tl, oh, ol; th = ((uint64_t)in->u[3] << 32) | ((uint64_t)in->u[2]); tl = ((uint64_t)in->u[1] << 32) | ((uint64_t)in->u[0]); oh = th << (shift * 8); ol = tl << (shift * 8); oh |= tl >> (64 - shift * 8); out->u[1] = (uint32_t)(ol >> 32); out->u[0] = (uint32_t)ol; out->u[3] = (uint32_t)(oh >> 32); out->u[2] = (uint32_t)oh; } #endif #endif /** * This function represents the recursion formula. 
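* In the generic C path below, each 32-bit lane computes r = a ^ (a <<128 SL2) ^ ((b >> SR1) & MSK) ^ (c >>128 SR2) ^ (d << SL1), where <<128 and >>128 denote the whole-128-bit byte shifts implemented by lshift128()/rshift128().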
* @param r output * @param a a 128-bit part of the internal state array * @param b a 128-bit part of the internal state array * @param c a 128-bit part of the internal state array * @param d a 128-bit part of the internal state array */ #if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2)) #ifdef ONLY64 JEMALLOC_INLINE_C void do_recursion(w128_t *r, w128_t *a, w128_t *b, w128_t *c, w128_t *d) { w128_t x; w128_t y; lshift128(&x, a, SL2); rshift128(&y, c, SR2); r->u[0] = a->u[0] ^ x.u[0] ^ ((b->u[0] >> SR1) & MSK2) ^ y.u[0] ^ (d->u[0] << SL1); r->u[1] = a->u[1] ^ x.u[1] ^ ((b->u[1] >> SR1) & MSK1) ^ y.u[1] ^ (d->u[1] << SL1); r->u[2] = a->u[2] ^ x.u[2] ^ ((b->u[2] >> SR1) & MSK4) ^ y.u[2] ^ (d->u[2] << SL1); r->u[3] = a->u[3] ^ x.u[3] ^ ((b->u[3] >> SR1) & MSK3) ^ y.u[3] ^ (d->u[3] << SL1); } #else JEMALLOC_INLINE_C void do_recursion(w128_t *r, w128_t *a, w128_t *b, w128_t *c, w128_t *d) { w128_t x; w128_t y; lshift128(&x, a, SL2); rshift128(&y, c, SR2); r->u[0] = a->u[0] ^ x.u[0] ^ ((b->u[0] >> SR1) & MSK1) ^ y.u[0] ^ (d->u[0] << SL1); r->u[1] = a->u[1] ^ x.u[1] ^ ((b->u[1] >> SR1) & MSK2) ^ y.u[1] ^ (d->u[1] << SL1); r->u[2] = a->u[2] ^ x.u[2] ^ ((b->u[2] >> SR1) & MSK3) ^ y.u[2] ^ (d->u[2] << SL1); r->u[3] = a->u[3] ^ x.u[3] ^ ((b->u[3] >> SR1) & MSK4) ^ y.u[3] ^ (d->u[3] << SL1); } #endif #endif #if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2)) /** * This function fills the internal state array with pseudorandom * integers. */ JEMALLOC_INLINE_C void gen_rand_all(sfmt_t *ctx) { int i; w128_t *r1, *r2; r1 = &ctx->sfmt[N - 2]; r2 = &ctx->sfmt[N - 1]; for (i = 0; i < N - POS1; i++) { do_recursion(&ctx->sfmt[i], &ctx->sfmt[i], &ctx->sfmt[i + POS1], r1, r2); r1 = r2; r2 = &ctx->sfmt[i]; } for (; i < N; i++) { do_recursion(&ctx->sfmt[i], &ctx->sfmt[i], &ctx->sfmt[i + POS1 - N], r1, r2); r1 = r2; r2 = &ctx->sfmt[i]; } } /** * This function fills the user-specified array with pseudorandom * integers. * * @param array an 128-bit array to be filled by pseudorandom numbers. * @param size number of 128-bit pseudorandom numbers to be generated. 
*/ JEMALLOC_INLINE_C void gen_rand_array(sfmt_t *ctx, w128_t *array, int size) { int i, j; w128_t *r1, *r2; r1 = &ctx->sfmt[N - 2]; r2 = &ctx->sfmt[N - 1]; for (i = 0; i < N - POS1; i++) { do_recursion(&array[i], &ctx->sfmt[i], &ctx->sfmt[i + POS1], r1, r2); r1 = r2; r2 = &array[i]; } for (; i < N; i++) { do_recursion(&array[i], &ctx->sfmt[i], &array[i + POS1 - N], r1, r2); r1 = r2; r2 = &array[i]; } for (; i < size - N; i++) { do_recursion(&array[i], &array[i - N], &array[i + POS1 - N], r1, r2); r1 = r2; r2 = &array[i]; } for (j = 0; j < 2 * N - size; j++) { ctx->sfmt[j] = array[j + size - N]; } for (; i < size; i++, j++) { do_recursion(&array[i], &array[i - N], &array[i + POS1 - N], r1, r2); r1 = r2; r2 = &array[i]; ctx->sfmt[j] = array[i]; } } #endif #if defined(BIG_ENDIAN64) && !defined(ONLY64) && !defined(HAVE_ALTIVEC) JEMALLOC_INLINE_C void swap(w128_t *array, int size) { int i; uint32_t x, y; for (i = 0; i < size; i++) { x = array[i].u[0]; y = array[i].u[2]; array[i].u[0] = array[i].u[1]; array[i].u[2] = array[i].u[3]; array[i].u[1] = x; array[i].u[3] = y; } } #endif /** * This function represents a function used in the initialization * by init_by_array * @param x 32-bit integer * @return 32-bit integer */ static uint32_t func1(uint32_t x) { return (x ^ (x >> 27)) * (uint32_t)1664525UL; } /** * This function represents a function used in the initialization * by init_by_array * @param x 32-bit integer * @return 32-bit integer */ static uint32_t func2(uint32_t x) { return (x ^ (x >> 27)) * (uint32_t)1566083941UL; } /** * This function certificate the period of 2^{MEXP} */ static void period_certification(sfmt_t *ctx) { int inner = 0; int i, j; uint32_t work; uint32_t *psfmt32 = &ctx->sfmt[0].u[0]; for (i = 0; i < 4; i++) inner ^= psfmt32[idxof(i)] & parity[i]; for (i = 16; i > 0; i >>= 1) inner ^= inner >> i; inner &= 1; /* check OK */ if (inner == 1) { return; } /* check NG, and modification */ for (i = 0; i < 4; i++) { work = 1; for (j = 0; j < 32; j++) { if ((work & parity[i]) != 0) { psfmt32[idxof(i)] ^= work; return; } work = work << 1; } } } /*---------------- PUBLIC FUNCTIONS ----------------*/ /** * This function returns the identification string. * The string shows the word size, the Mersenne exponent, * and all parameters of this generator. */ const char *get_idstring(void) { return IDSTR; } /** * This function returns the minimum size of array used for \b * fill_array32() function. * @return minimum size of array used for fill_array32() function. */ int get_min_array_size32(void) { return N32; } /** * This function returns the minimum size of array used for \b * fill_array64() function. * @return minimum size of array used for fill_array64() function. */ int get_min_array_size64(void) { return N64; } #ifndef ONLY64 /** * This function generates and returns 32-bit pseudorandom number. * init_gen_rand or init_by_array must be called before this function. * @return 32-bit pseudorandom number */ uint32_t gen_rand32(sfmt_t *ctx) { uint32_t r; uint32_t *psfmt32 = &ctx->sfmt[0].u[0]; assert(ctx->initialized); if (ctx->idx >= N32) { gen_rand_all(ctx); ctx->idx = 0; } r = psfmt32[ctx->idx++]; return r; } /* Generate a random integer in [0..limit). */ uint32_t gen_rand32_range(sfmt_t *ctx, uint32_t limit) { uint32_t ret, above; above = 0xffffffffU - (0xffffffffU % limit); while (1) { ret = gen_rand32(ctx); if (ret < above) { ret %= limit; break; } } return ret; } #endif /** * This function generates and returns 64-bit pseudorandom number. 
* init_gen_rand or init_by_array must be called before this function. * The function gen_rand64 should not be called after gen_rand32, * unless an initialization is again executed. * @return 64-bit pseudorandom number */ uint64_t gen_rand64(sfmt_t *ctx) { #if defined(BIG_ENDIAN64) && !defined(ONLY64) uint32_t r1, r2; uint32_t *psfmt32 = &ctx->sfmt[0].u[0]; #else uint64_t r; uint64_t *psfmt64 = (uint64_t *)&ctx->sfmt[0].u[0]; #endif assert(ctx->initialized); assert(ctx->idx % 2 == 0); if (ctx->idx >= N32) { gen_rand_all(ctx); ctx->idx = 0; } #if defined(BIG_ENDIAN64) && !defined(ONLY64) r1 = psfmt32[ctx->idx]; r2 = psfmt32[ctx->idx + 1]; ctx->idx += 2; return ((uint64_t)r2 << 32) | r1; #else r = psfmt64[ctx->idx / 2]; ctx->idx += 2; return r; #endif } /* Generate a random integer in [0..limit). */ uint64_t gen_rand64_range(sfmt_t *ctx, uint64_t limit) { uint64_t ret, above; above = KQU(0xffffffffffffffff) - (KQU(0xffffffffffffffff) % limit); while (1) { ret = gen_rand64(ctx); if (ret < above) { ret %= limit; break; } } return ret; } #ifndef ONLY64 /** * This function generates pseudorandom 32-bit integers in the * specified array[] by one call. The number of pseudorandom integers * is specified by the argument size, which must be at least 624 and a * multiple of four. The generation by this function is much faster * than the following gen_rand function. * * For initialization, init_gen_rand or init_by_array must be called * before the first call of this function. This function can not be * used after calling gen_rand function, without initialization. * * @param array an array where pseudorandom 32-bit integers are filled * by this function. The pointer to the array must be \b "aligned" * (namely, must be a multiple of 16) in the SIMD version, since it * refers to the address of a 128-bit integer. In the standard C * version, the pointer is arbitrary. * * @param size the number of 32-bit pseudorandom integers to be * generated. size must be a multiple of 4, and greater than or equal * to (MEXP / 128 + 1) * 4. * * @note \b memalign or \b posix_memalign is available to get aligned * memory. Mac OSX doesn't have these functions, but \b malloc of OSX * returns the pointer to the aligned memory block. */ void fill_array32(sfmt_t *ctx, uint32_t *array, int size) { assert(ctx->initialized); assert(ctx->idx == N32); assert(size % 4 == 0); assert(size >= N32); gen_rand_array(ctx, (w128_t *)array, size / 4); ctx->idx = N32; } #endif /** * This function generates pseudorandom 64-bit integers in the * specified array[] by one call. The number of pseudorandom integers * is specified by the argument size, which must be at least 312 and a * multiple of two. The generation by this function is much faster * than the following gen_rand function. * * For initialization, init_gen_rand or init_by_array must be called * before the first call of this function. This function can not be * used after calling gen_rand function, without initialization. * * @param array an array where pseudorandom 64-bit integers are filled * by this function. The pointer to the array must be "aligned" * (namely, must be a multiple of 16) in the SIMD version, since it * refers to the address of a 128-bit integer. In the standard C * version, the pointer is arbitrary. * * @param size the number of 64-bit pseudorandom integers to be * generated. size must be a multiple of 2, and greater than or equal * to (MEXP / 128 + 1) * 2 * * @note \b memalign or \b posix_memalign is available to get aligned * memory. 
Mac OSX doesn't have these functions, but \b malloc of OSX * returns the pointer to the aligned memory block. */ void fill_array64(sfmt_t *ctx, uint64_t *array, int size) { assert(ctx->initialized); assert(ctx->idx == N32); assert(size % 2 == 0); assert(size >= N64); gen_rand_array(ctx, (w128_t *)array, size / 2); ctx->idx = N32; #if defined(BIG_ENDIAN64) && !defined(ONLY64) swap((w128_t *)array, size /2); #endif } /** * This function initializes the internal state array with a 32-bit * integer seed. * * @param seed a 32-bit integer used as the seed. */ sfmt_t *init_gen_rand(uint32_t seed) { void *p; sfmt_t *ctx; int i; uint32_t *psfmt32; if (posix_memalign(&p, sizeof(w128_t), sizeof(sfmt_t)) != 0) { return NULL; } ctx = (sfmt_t *)p; psfmt32 = &ctx->sfmt[0].u[0]; psfmt32[idxof(0)] = seed; for (i = 1; i < N32; i++) { psfmt32[idxof(i)] = 1812433253UL * (psfmt32[idxof(i - 1)] ^ (psfmt32[idxof(i - 1)] >> 30)) + i; } ctx->idx = N32; period_certification(ctx); ctx->initialized = 1; return ctx; } /** * This function initializes the internal state array, * with an array of 32-bit integers used as the seeds * @param init_key the array of 32-bit integers, used as a seed. * @param key_length the length of init_key. */ sfmt_t *init_by_array(uint32_t *init_key, int key_length) { void *p; sfmt_t *ctx; int i, j, count; uint32_t r; int lag; int mid; int size = N * 4; uint32_t *psfmt32; if (posix_memalign(&p, sizeof(w128_t), sizeof(sfmt_t)) != 0) { return NULL; } ctx = (sfmt_t *)p; psfmt32 = &ctx->sfmt[0].u[0]; if (size >= 623) { lag = 11; } else if (size >= 68) { lag = 7; } else if (size >= 39) { lag = 5; } else { lag = 3; } mid = (size - lag) / 2; memset(ctx->sfmt, 0x8b, sizeof(ctx->sfmt)); if (key_length + 1 > N32) { count = key_length + 1; } else { count = N32; } r = func1(psfmt32[idxof(0)] ^ psfmt32[idxof(mid)] ^ psfmt32[idxof(N32 - 1)]); psfmt32[idxof(mid)] += r; r += key_length; psfmt32[idxof(mid + lag)] += r; psfmt32[idxof(0)] = r; count--; for (i = 1, j = 0; (j < count) && (j < key_length); j++) { r = func1(psfmt32[idxof(i)] ^ psfmt32[idxof((i + mid) % N32)] ^ psfmt32[idxof((i + N32 - 1) % N32)]); psfmt32[idxof((i + mid) % N32)] += r; r += init_key[j] + i; psfmt32[idxof((i + mid + lag) % N32)] += r; psfmt32[idxof(i)] = r; i = (i + 1) % N32; } for (; j < count; j++) { r = func1(psfmt32[idxof(i)] ^ psfmt32[idxof((i + mid) % N32)] ^ psfmt32[idxof((i + N32 - 1) % N32)]); psfmt32[idxof((i + mid) % N32)] += r; r += i; psfmt32[idxof((i + mid + lag) % N32)] += r; psfmt32[idxof(i)] = r; i = (i + 1) % N32; } for (j = 0; j < N32; j++) { r = func2(psfmt32[idxof(i)] + psfmt32[idxof((i + mid) % N32)] + psfmt32[idxof((i + N32 - 1) % N32)]); psfmt32[idxof((i + mid) % N32)] ^= r; r -= i; psfmt32[idxof((i + mid + lag) % N32)] ^= r; psfmt32[idxof(i)] = r; i = (i + 1) % N32; } ctx->idx = N32; period_certification(ctx); ctx->initialized = 1; return ctx; } void fini_gen_rand(sfmt_t *ctx) { assert(ctx != NULL); ctx->initialized = 0; free(ctx); } vmem-1.8/src/jemalloc/test/src/math.c000066400000000000000000000000601361505074100175440ustar00rootroot00000000000000#define MATH_C_ #include "test/jemalloc_test.h" vmem-1.8/src/jemalloc/test/src/mtx.c000066400000000000000000000021201361505074100174220ustar00rootroot00000000000000#include "test/jemalloc_test.h" #ifndef _CRT_SPINCOUNT #define _CRT_SPINCOUNT 4000 #endif bool mtx_init(mtx_t *mtx) { #ifdef _WIN32 if (!InitializeCriticalSectionAndSpinCount(&mtx->lock, _CRT_SPINCOUNT)) return (true); #elif (defined(JEMALLOC_OSSPIN)) mtx->lock = 0; #else pthread_mutexattr_t attr; if 
(pthread_mutexattr_init(&attr) != 0) return (true); pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT); if (pthread_mutex_init(&mtx->lock, &attr) != 0) { pthread_mutexattr_destroy(&attr); return (true); } pthread_mutexattr_destroy(&attr); #endif return (false); } void mtx_fini(mtx_t *mtx) { #ifdef _WIN32 #elif (defined(JEMALLOC_OSSPIN)) #else pthread_mutex_destroy(&mtx->lock); #endif } void mtx_lock(mtx_t *mtx) { #ifdef _WIN32 EnterCriticalSection(&mtx->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockLock(&mtx->lock); #else pthread_mutex_lock(&mtx->lock); #endif } void mtx_unlock(mtx_t *mtx) { #ifdef _WIN32 LeaveCriticalSection(&mtx->lock); #elif (defined(JEMALLOC_OSSPIN)) OSSpinLockUnlock(&mtx->lock); #else pthread_mutex_unlock(&mtx->lock); #endif } vmem-1.8/src/jemalloc/test/src/test.c000066400000000000000000000055501361505074100176030ustar00rootroot00000000000000#include "test/jemalloc_test.h" static unsigned test_count = 0; static test_status_t test_counts[test_status_count] = {0, 0, 0}; static test_status_t test_status = test_status_pass; static const char * test_name = ""; JEMALLOC_ATTR(format(printf, 1, 2)) void test_skip(const char *format, ...) { va_list ap; va_start(ap, format); malloc_vcprintf(NULL, NULL, format, ap); va_end(ap); malloc_printf("\n"); test_status = test_status_skip; } JEMALLOC_ATTR(format(printf, 1, 2)) void test_fail(const char *format, ...) { va_list ap; va_start(ap, format); malloc_vcprintf(NULL, NULL, format, ap); va_end(ap); malloc_printf("\n"); test_status = test_status_fail; } static const char * test_status_string(test_status_t test_status) { switch (test_status) { case test_status_pass: return "pass"; case test_status_skip: return "skip"; case test_status_fail: return "fail"; default: not_reached(); } } void p_test_init(const char *name) { test_count++; test_status = test_status_pass; test_name = name; } void p_test_fini(void) { test_counts[test_status]++; malloc_printf("%s: %s\n", test_name, test_status_string(test_status)); } test_status_t p_test(test_t *t, ...) { test_status_t ret; va_list ap; /* * Make sure initialization occurs prior to running tests. Tests are * special because they may use internal facilities prior to triggering * initialization as a side effect of calling into the public API. This * is a final safety that works even if jemalloc_constructor() doesn't * run, as for MSVC builds. */ if (nallocx(1, 0) == 0) { malloc_printf("Initialization error"); return (test_status_fail); } ret = test_status_pass; va_start(ap, t); for (; t != NULL; t = va_arg(ap, test_t *)) { t(); if (test_status > ret) ret = test_status; } va_end(ap); malloc_printf("--- %s: %u/%u, %s: %u/%u, %s: %u/%u ---\n", test_status_string(test_status_pass), test_counts[test_status_pass], test_count, test_status_string(test_status_skip), test_counts[test_status_skip], test_count, test_status_string(test_status_fail), test_counts[test_status_fail], test_count); return (ret); } test_status_t p_test_not_init(test_t *t, ...) 
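/* Same driver loop as p_test() above, minus the nallocx() initialization probe; reached through the test_not_init() macro. */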
{ test_status_t ret; va_list ap; ret = test_status_pass; va_start(ap, t); for (; t != NULL; t = va_arg(ap, test_t *)) { t(); if (test_status > ret) ret = test_status; } va_end(ap); malloc_printf("--- %s: %u/%u, %s: %u/%u, %s: %u/%u ---\n", test_status_string(test_status_pass), test_counts[test_status_pass], test_count, test_status_string(test_status_skip), test_counts[test_status_skip], test_count, test_status_string(test_status_fail), test_counts[test_status_fail], test_count); return (ret); } void p_test_fail(const char *prefix, const char *message) { malloc_cprintf(NULL, NULL, "%s%s\n", prefix, message); test_status = test_status_fail; } vmem-1.8/src/jemalloc/test/src/thd.c000066400000000000000000000013601361505074100173760ustar00rootroot00000000000000#include "test/jemalloc_test.h" #ifdef _WIN32 void thd_create(thd_t *thd, void *(*proc)(void *), void *arg) { LPTHREAD_START_ROUTINE routine = (LPTHREAD_START_ROUTINE)proc; *thd = CreateThread(NULL, 0, routine, arg, 0, NULL); if (*thd == NULL) test_fail("Error in CreateThread()\n"); } void thd_join(thd_t thd, void **ret) { if (WaitForSingleObject(thd, INFINITE) == WAIT_OBJECT_0 && ret) { DWORD exit_code; GetExitCodeThread(thd, (LPDWORD) &exit_code); *ret = (void *)(uintptr_t)exit_code; } } #else void thd_create(thd_t *thd, void *(*proc)(void *), void *arg) { if (pthread_create(thd, NULL, proc, arg) != 0) test_fail("Error in pthread_create()\n"); } void thd_join(thd_t thd, void **ret) { pthread_join(thd, ret); } #endif vmem-1.8/src/jemalloc/test/test.sh.in000066400000000000000000000017561361505074100176150ustar00rootroot00000000000000#!/bin/sh case @abi@ in macho) export DYLD_FALLBACK_LIBRARY_PATH="@objroot@lib" ;; pecoff) export PATH="${PATH}:@objroot@lib" ;; *) ;; esac # Corresponds to test_status_t. pass_code=0 skip_code=1 fail_code=2 pass_count=0 skip_count=0 fail_count=0 for t in $@; do if [ $pass_count -ne 0 -o $skip_count -ne 0 -o $fail_count != 0 ] ; then echo fi echo "=== ${t} ===" ${t}@exe@ @abs_srcroot@ @abs_objroot@ result_code=$? case ${result_code} in ${pass_code}) pass_count=$((pass_count+1)) ;; ${skip_code}) skip_count=$((skip_count+1)) ;; ${fail_code}) fail_count=$((fail_count+1)) ;; *) echo "Test harness error" 1>&2 exit 1 esac done total_count=`expr ${pass_count} + ${skip_count} + ${fail_count}` echo echo "Test suite summary: pass: ${pass_count}/${total_count}, skip: ${skip_count}/${total_count}, fail: ${fail_count}/${total_count}" if [ ${fail_count} -eq 0 ] ; then exit 0 else exit 1 fi vmem-1.8/src/jemalloc/test/unit/000077500000000000000000000000001361505074100166435ustar00rootroot00000000000000vmem-1.8/src/jemalloc/test/unit/SFMT.c000066400000000000000000002530731361505074100175720ustar00rootroot00000000000000/* * This file derives from SFMT 1.3.3 * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was * released under the terms of the following license: * * Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima * University. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. 
* * Neither the name of the Hiroshima University nor the names of * its contributors may be used to endorse or promote products * derived from this software without specific prior written * permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "test/jemalloc_test.h" #define BLOCK_SIZE 10000 #define BLOCK_SIZE64 (BLOCK_SIZE / 2) #define COUNT_1 1000 #define COUNT_2 700 static const uint32_t init_gen_rand_32_expected[] = { 3440181298U, 1564997079U, 1510669302U, 2930277156U, 1452439940U, 3796268453U, 423124208U, 2143818589U, 3827219408U, 2987036003U, 2674978610U, 1536842514U, 2027035537U, 2534897563U, 1686527725U, 545368292U, 1489013321U, 1370534252U, 4231012796U, 3994803019U, 1764869045U, 824597505U, 862581900U, 2469764249U, 812862514U, 359318673U, 116957936U, 3367389672U, 2327178354U, 1898245200U, 3206507879U, 2378925033U, 1040214787U, 2524778605U, 3088428700U, 1417665896U, 964324147U, 2282797708U, 2456269299U, 313400376U, 2245093271U, 1015729427U, 2694465011U, 3246975184U, 1992793635U, 463679346U, 3721104591U, 3475064196U, 856141236U, 1499559719U, 3522818941U, 3721533109U, 1954826617U, 1282044024U, 1543279136U, 1301863085U, 2669145051U, 4221477354U, 3896016841U, 3392740262U, 462466863U, 1037679449U, 1228140306U, 922298197U, 1205109853U, 1872938061U, 3102547608U, 2742766808U, 1888626088U, 4028039414U, 157593879U, 1136901695U, 4038377686U, 3572517236U, 4231706728U, 2997311961U, 1189931652U, 3981543765U, 2826166703U, 87159245U, 1721379072U, 3897926942U, 1790395498U, 2569178939U, 1047368729U, 2340259131U, 3144212906U, 2301169789U, 2442885464U, 3034046771U, 3667880593U, 3935928400U, 2372805237U, 1666397115U, 2460584504U, 513866770U, 3810869743U, 2147400037U, 2792078025U, 2941761810U, 3212265810U, 984692259U, 346590253U, 1804179199U, 3298543443U, 750108141U, 2880257022U, 243310542U, 1869036465U, 1588062513U, 2983949551U, 1931450364U, 4034505847U, 2735030199U, 1628461061U, 2539522841U, 127965585U, 3992448871U, 913388237U, 559130076U, 1202933193U, 4087643167U, 2590021067U, 2256240196U, 1746697293U, 1013913783U, 1155864921U, 2715773730U, 915061862U, 1948766573U, 2322882854U, 3761119102U, 1343405684U, 3078711943U, 3067431651U, 3245156316U, 3588354584U, 3484623306U, 3899621563U, 4156689741U, 3237090058U, 3880063844U, 862416318U, 4039923869U, 2303788317U, 3073590536U, 701653667U, 2131530884U, 3169309950U, 2028486980U, 747196777U, 3620218225U, 432016035U, 1449580595U, 2772266392U, 444224948U, 1662832057U, 3184055582U, 3028331792U, 1861686254U, 1104864179U, 342430307U, 1350510923U, 3024656237U, 1028417492U, 2870772950U, 290847558U, 3675663500U, 508431529U, 4264340390U, 2263569913U, 1669302976U, 519511383U, 2706411211U, 3764615828U, 3883162495U, 4051445305U, 2412729798U, 3299405164U, 3991911166U, 2348767304U, 2664054906U, 3763609282U, 593943581U, 3757090046U, 
2075338894U, 2020550814U, 4287452920U, 4290140003U, 1422957317U, 2512716667U, 2003485045U, 2307520103U, 2288472169U, 3940751663U, 4204638664U, 2892583423U, 1710068300U, 3904755993U, 2363243951U, 3038334120U, 547099465U, 771105860U, 3199983734U, 4282046461U, 2298388363U, 934810218U, 2837827901U, 3952500708U, 2095130248U, 3083335297U, 26885281U, 3932155283U, 1531751116U, 1425227133U, 495654159U, 3279634176U, 3855562207U, 3957195338U, 4159985527U, 893375062U, 1875515536U, 1327247422U, 3754140693U, 1028923197U, 1729880440U, 805571298U, 448971099U, 2726757106U, 2749436461U, 2485987104U, 175337042U, 3235477922U, 3882114302U, 2020970972U, 943926109U, 2762587195U, 1904195558U, 3452650564U, 108432281U, 3893463573U, 3977583081U, 2636504348U, 1110673525U, 3548479841U, 4258854744U, 980047703U, 4057175418U, 3890008292U, 145653646U, 3141868989U, 3293216228U, 1194331837U, 1254570642U, 3049934521U, 2868313360U, 2886032750U, 1110873820U, 279553524U, 3007258565U, 1104807822U, 3186961098U, 315764646U, 2163680838U, 3574508994U, 3099755655U, 191957684U, 3642656737U, 3317946149U, 3522087636U, 444526410U, 779157624U, 1088229627U, 1092460223U, 1856013765U, 3659877367U, 368270451U, 503570716U, 3000984671U, 2742789647U, 928097709U, 2914109539U, 308843566U, 2816161253U, 3667192079U, 2762679057U, 3395240989U, 2928925038U, 1491465914U, 3458702834U, 3787782576U, 2894104823U, 1296880455U, 1253636503U, 989959407U, 2291560361U, 2776790436U, 1913178042U, 1584677829U, 689637520U, 1898406878U, 688391508U, 3385234998U, 845493284U, 1943591856U, 2720472050U, 222695101U, 1653320868U, 2904632120U, 4084936008U, 1080720688U, 3938032556U, 387896427U, 2650839632U, 99042991U, 1720913794U, 1047186003U, 1877048040U, 2090457659U, 517087501U, 4172014665U, 2129713163U, 2413533132U, 2760285054U, 4129272496U, 1317737175U, 2309566414U, 2228873332U, 3889671280U, 1110864630U, 3576797776U, 2074552772U, 832002644U, 3097122623U, 2464859298U, 2679603822U, 1667489885U, 3237652716U, 1478413938U, 1719340335U, 2306631119U, 639727358U, 3369698270U, 226902796U, 2099920751U, 1892289957U, 2201594097U, 3508197013U, 3495811856U, 3900381493U, 841660320U, 3974501451U, 3360949056U, 1676829340U, 728899254U, 2047809627U, 2390948962U, 670165943U, 3412951831U, 4189320049U, 1911595255U, 2055363086U, 507170575U, 418219594U, 4141495280U, 2692088692U, 4203630654U, 3540093932U, 791986533U, 2237921051U, 2526864324U, 2956616642U, 1394958700U, 1983768223U, 1893373266U, 591653646U, 228432437U, 1611046598U, 3007736357U, 1040040725U, 2726180733U, 2789804360U, 4263568405U, 829098158U, 3847722805U, 1123578029U, 1804276347U, 997971319U, 4203797076U, 4185199713U, 2811733626U, 2343642194U, 2985262313U, 1417930827U, 3759587724U, 1967077982U, 1585223204U, 1097475516U, 1903944948U, 740382444U, 1114142065U, 1541796065U, 1718384172U, 1544076191U, 1134682254U, 3519754455U, 2866243923U, 341865437U, 645498576U, 2690735853U, 1046963033U, 2493178460U, 1187604696U, 1619577821U, 488503634U, 3255768161U, 2306666149U, 1630514044U, 2377698367U, 2751503746U, 3794467088U, 1796415981U, 3657173746U, 409136296U, 1387122342U, 1297726519U, 219544855U, 4270285558U, 437578827U, 1444698679U, 2258519491U, 963109892U, 3982244073U, 3351535275U, 385328496U, 1804784013U, 698059346U, 3920535147U, 708331212U, 784338163U, 785678147U, 1238376158U, 1557298846U, 2037809321U, 271576218U, 4145155269U, 1913481602U, 2763691931U, 588981080U, 1201098051U, 3717640232U, 1509206239U, 662536967U, 3180523616U, 1133105435U, 2963500837U, 2253971215U, 3153642623U, 1066925709U, 2582781958U, 3034720222U, 1090798544U, 2942170004U, 
4036187520U, 686972531U, 2610990302U, 2641437026U, 1837562420U, 722096247U, 1315333033U, 2102231203U, 3402389208U, 3403698140U, 1312402831U, 2898426558U, 814384596U, 385649582U, 1916643285U, 1924625106U, 2512905582U, 2501170304U, 4275223366U, 2841225246U, 1467663688U, 3563567847U, 2969208552U, 884750901U, 102992576U, 227844301U, 3681442994U, 3502881894U, 4034693299U, 1166727018U, 1697460687U, 1737778332U, 1787161139U, 1053003655U, 1215024478U, 2791616766U, 2525841204U, 1629323443U, 3233815U, 2003823032U, 3083834263U, 2379264872U, 3752392312U, 1287475550U, 3770904171U, 3004244617U, 1502117784U, 918698423U, 2419857538U, 3864502062U, 1751322107U, 2188775056U, 4018728324U, 983712955U, 440071928U, 3710838677U, 2001027698U, 3994702151U, 22493119U, 3584400918U, 3446253670U, 4254789085U, 1405447860U, 1240245579U, 1800644159U, 1661363424U, 3278326132U, 3403623451U, 67092802U, 2609352193U, 3914150340U, 1814842761U, 3610830847U, 591531412U, 3880232807U, 1673505890U, 2585326991U, 1678544474U, 3148435887U, 3457217359U, 1193226330U, 2816576908U, 154025329U, 121678860U, 1164915738U, 973873761U, 269116100U, 52087970U, 744015362U, 498556057U, 94298882U, 1563271621U, 2383059628U, 4197367290U, 3958472990U, 2592083636U, 2906408439U, 1097742433U, 3924840517U, 264557272U, 2292287003U, 3203307984U, 4047038857U, 3820609705U, 2333416067U, 1839206046U, 3600944252U, 3412254904U, 583538222U, 2390557166U, 4140459427U, 2810357445U, 226777499U, 2496151295U, 2207301712U, 3283683112U, 611630281U, 1933218215U, 3315610954U, 3889441987U, 3719454256U, 3957190521U, 1313998161U, 2365383016U, 3146941060U, 1801206260U, 796124080U, 2076248581U, 1747472464U, 3254365145U, 595543130U, 3573909503U, 3758250204U, 2020768540U, 2439254210U, 93368951U, 3155792250U, 2600232980U, 3709198295U, 3894900440U, 2971850836U, 1578909644U, 1443493395U, 2581621665U, 3086506297U, 2443465861U, 558107211U, 1519367835U, 249149686U, 908102264U, 2588765675U, 1232743965U, 1001330373U, 3561331654U, 2259301289U, 1564977624U, 3835077093U, 727244906U, 4255738067U, 1214133513U, 2570786021U, 3899704621U, 1633861986U, 1636979509U, 1438500431U, 58463278U, 2823485629U, 2297430187U, 2926781924U, 3371352948U, 1864009023U, 2722267973U, 1444292075U, 437703973U, 1060414512U, 189705863U, 910018135U, 4077357964U, 884213423U, 2644986052U, 3973488374U, 1187906116U, 2331207875U, 780463700U, 3713351662U, 3854611290U, 412805574U, 2978462572U, 2176222820U, 829424696U, 2790788332U, 2750819108U, 1594611657U, 3899878394U, 3032870364U, 1702887682U, 1948167778U, 14130042U, 192292500U, 947227076U, 90719497U, 3854230320U, 784028434U, 2142399787U, 1563449646U, 2844400217U, 819143172U, 2883302356U, 2328055304U, 1328532246U, 2603885363U, 3375188924U, 933941291U, 3627039714U, 2129697284U, 2167253953U, 2506905438U, 1412424497U, 2981395985U, 1418359660U, 2925902456U, 52752784U, 3713667988U, 3924669405U, 648975707U, 1145520213U, 4018650664U, 3805915440U, 2380542088U, 2013260958U, 3262572197U, 2465078101U, 1114540067U, 3728768081U, 2396958768U, 590672271U, 904818725U, 4263660715U, 700754408U, 1042601829U, 4094111823U, 4274838909U, 2512692617U, 2774300207U, 2057306915U, 3470942453U, 99333088U, 1142661026U, 2889931380U, 14316674U, 2201179167U, 415289459U, 448265759U, 3515142743U, 3254903683U, 246633281U, 1184307224U, 2418347830U, 2092967314U, 2682072314U, 2558750234U, 2000352263U, 1544150531U, 399010405U, 1513946097U, 499682937U, 461167460U, 3045570638U, 1633669705U, 851492362U, 4052801922U, 2055266765U, 635556996U, 368266356U, 2385737383U, 3218202352U, 2603772408U, 349178792U, 226482567U, 
3102426060U, 3575998268U, 2103001871U, 3243137071U, 225500688U, 1634718593U, 4283311431U, 4292122923U, 3842802787U, 811735523U, 105712518U, 663434053U, 1855889273U, 2847972595U, 1196355421U, 2552150115U, 4254510614U, 3752181265U, 3430721819U, 3828705396U, 3436287905U, 3441964937U, 4123670631U, 353001539U, 459496439U, 3799690868U, 1293777660U, 2761079737U, 498096339U, 3398433374U, 4080378380U, 2304691596U, 2995729055U, 4134660419U, 3903444024U, 3576494993U, 203682175U, 3321164857U, 2747963611U, 79749085U, 2992890370U, 1240278549U, 1772175713U, 2111331972U, 2655023449U, 1683896345U, 2836027212U, 3482868021U, 2489884874U, 756853961U, 2298874501U, 4013448667U, 4143996022U, 2948306858U, 4132920035U, 1283299272U, 995592228U, 3450508595U, 1027845759U, 1766942720U, 3861411826U, 1446861231U, 95974993U, 3502263554U, 1487532194U, 601502472U, 4129619129U, 250131773U, 2050079547U, 3198903947U, 3105589778U, 4066481316U, 3026383978U, 2276901713U, 365637751U, 2260718426U, 1394775634U, 1791172338U, 2690503163U, 2952737846U, 1568710462U, 732623190U, 2980358000U, 1053631832U, 1432426951U, 3229149635U, 1854113985U, 3719733532U, 3204031934U, 735775531U, 107468620U, 3734611984U, 631009402U, 3083622457U, 4109580626U, 159373458U, 1301970201U, 4132389302U, 1293255004U, 847182752U, 4170022737U, 96712900U, 2641406755U, 1381727755U, 405608287U, 4287919625U, 1703554290U, 3589580244U, 2911403488U, 2166565U, 2647306451U, 2330535117U, 1200815358U, 1165916754U, 245060911U, 4040679071U, 3684908771U, 2452834126U, 2486872773U, 2318678365U, 2940627908U, 1837837240U, 3447897409U, 4270484676U, 1495388728U, 3754288477U, 4204167884U, 1386977705U, 2692224733U, 3076249689U, 4109568048U, 4170955115U, 4167531356U, 4020189950U, 4261855038U, 3036907575U, 3410399885U, 3076395737U, 1046178638U, 144496770U, 230725846U, 3349637149U, 17065717U, 2809932048U, 2054581785U, 3608424964U, 3259628808U, 134897388U, 3743067463U, 257685904U, 3795656590U, 1562468719U, 3589103904U, 3120404710U, 254684547U, 2653661580U, 3663904795U, 2631942758U, 1063234347U, 2609732900U, 2332080715U, 3521125233U, 1180599599U, 1935868586U, 4110970440U, 296706371U, 2128666368U, 1319875791U, 1570900197U, 3096025483U, 1799882517U, 1928302007U, 1163707758U, 1244491489U, 3533770203U, 567496053U, 2757924305U, 2781639343U, 2818420107U, 560404889U, 2619609724U, 4176035430U, 2511289753U, 2521842019U, 3910553502U, 2926149387U, 3302078172U, 4237118867U, 330725126U, 367400677U, 888239854U, 545570454U, 4259590525U, 134343617U, 1102169784U, 1647463719U, 3260979784U, 1518840883U, 3631537963U, 3342671457U, 1301549147U, 2083739356U, 146593792U, 3217959080U, 652755743U, 2032187193U, 3898758414U, 1021358093U, 4037409230U, 2176407931U, 3427391950U, 2883553603U, 985613827U, 3105265092U, 3423168427U, 3387507672U, 467170288U, 2141266163U, 3723870208U, 916410914U, 1293987799U, 2652584950U, 769160137U, 3205292896U, 1561287359U, 1684510084U, 3136055621U, 3765171391U, 639683232U, 2639569327U, 1218546948U, 4263586685U, 3058215773U, 2352279820U, 401870217U, 2625822463U, 1529125296U, 2981801895U, 1191285226U, 4027725437U, 3432700217U, 4098835661U, 971182783U, 2443861173U, 3881457123U, 3874386651U, 457276199U, 2638294160U, 4002809368U, 421169044U, 1112642589U, 3076213779U, 3387033971U, 2499610950U, 3057240914U, 1662679783U, 461224431U, 1168395933U }; static const uint32_t init_by_array_32_expected[] = { 2920711183U, 3885745737U, 3501893680U, 856470934U, 1421864068U, 277361036U, 1518638004U, 2328404353U, 3355513634U, 64329189U, 1624587673U, 3508467182U, 2481792141U, 3706480799U, 1925859037U, 
2913275699U, 882658412U, 384641219U, 422202002U, 1873384891U, 2006084383U, 3924929912U, 1636718106U, 3108838742U, 1245465724U, 4195470535U, 779207191U, 1577721373U, 1390469554U, 2928648150U, 121399709U, 3170839019U, 4044347501U, 953953814U, 3821710850U, 3085591323U, 3666535579U, 3577837737U, 2012008410U, 3565417471U, 4044408017U, 433600965U, 1637785608U, 1798509764U, 860770589U, 3081466273U, 3982393409U, 2451928325U, 3437124742U, 4093828739U, 3357389386U, 2154596123U, 496568176U, 2650035164U, 2472361850U, 3438299U, 2150366101U, 1577256676U, 3802546413U, 1787774626U, 4078331588U, 3706103141U, 170391138U, 3806085154U, 1680970100U, 1961637521U, 3316029766U, 890610272U, 1453751581U, 1430283664U, 3051057411U, 3597003186U, 542563954U, 3796490244U, 1690016688U, 3448752238U, 440702173U, 347290497U, 1121336647U, 2540588620U, 280881896U, 2495136428U, 213707396U, 15104824U, 2946180358U, 659000016U, 566379385U, 2614030979U, 2855760170U, 334526548U, 2315569495U, 2729518615U, 564745877U, 1263517638U, 3157185798U, 1604852056U, 1011639885U, 2950579535U, 2524219188U, 312951012U, 1528896652U, 1327861054U, 2846910138U, 3966855905U, 2536721582U, 855353911U, 1685434729U, 3303978929U, 1624872055U, 4020329649U, 3164802143U, 1642802700U, 1957727869U, 1792352426U, 3334618929U, 2631577923U, 3027156164U, 842334259U, 3353446843U, 1226432104U, 1742801369U, 3552852535U, 3471698828U, 1653910186U, 3380330939U, 2313782701U, 3351007196U, 2129839995U, 1800682418U, 4085884420U, 1625156629U, 3669701987U, 615211810U, 3294791649U, 4131143784U, 2590843588U, 3207422808U, 3275066464U, 561592872U, 3957205738U, 3396578098U, 48410678U, 3505556445U, 1005764855U, 3920606528U, 2936980473U, 2378918600U, 2404449845U, 1649515163U, 701203563U, 3705256349U, 83714199U, 3586854132U, 922978446U, 2863406304U, 3523398907U, 2606864832U, 2385399361U, 3171757816U, 4262841009U, 3645837721U, 1169579486U, 3666433897U, 3174689479U, 1457866976U, 3803895110U, 3346639145U, 1907224409U, 1978473712U, 1036712794U, 980754888U, 1302782359U, 1765252468U, 459245755U, 3728923860U, 1512894209U, 2046491914U, 207860527U, 514188684U, 2288713615U, 1597354672U, 3349636117U, 2357291114U, 3995796221U, 945364213U, 1893326518U, 3770814016U, 1691552714U, 2397527410U, 967486361U, 776416472U, 4197661421U, 951150819U, 1852770983U, 4044624181U, 1399439738U, 4194455275U, 2284037669U, 1550734958U, 3321078108U, 1865235926U, 2912129961U, 2664980877U, 1357572033U, 2600196436U, 2486728200U, 2372668724U, 1567316966U, 2374111491U, 1839843570U, 20815612U, 3727008608U, 3871996229U, 824061249U, 1932503978U, 3404541726U, 758428924U, 2609331364U, 1223966026U, 1299179808U, 648499352U, 2180134401U, 880821170U, 3781130950U, 113491270U, 1032413764U, 4185884695U, 2490396037U, 1201932817U, 4060951446U, 4165586898U, 1629813212U, 2887821158U, 415045333U, 628926856U, 2193466079U, 3391843445U, 2227540681U, 1907099846U, 2848448395U, 1717828221U, 1372704537U, 1707549841U, 2294058813U, 2101214437U, 2052479531U, 1695809164U, 3176587306U, 2632770465U, 81634404U, 1603220563U, 644238487U, 302857763U, 897352968U, 2613146653U, 1391730149U, 4245717312U, 4191828749U, 1948492526U, 2618174230U, 3992984522U, 2178852787U, 3596044509U, 3445573503U, 2026614616U, 915763564U, 3415689334U, 2532153403U, 3879661562U, 2215027417U, 3111154986U, 2929478371U, 668346391U, 1152241381U, 2632029711U, 3004150659U, 2135025926U, 948690501U, 2799119116U, 4228829406U, 1981197489U, 4209064138U, 684318751U, 3459397845U, 201790843U, 4022541136U, 3043635877U, 492509624U, 3263466772U, 1509148086U, 921459029U, 3198857146U, 705479721U, 
3835966910U, 3603356465U, 576159741U, 1742849431U, 594214882U, 2055294343U, 3634861861U, 449571793U, 3246390646U, 3868232151U, 1479156585U, 2900125656U, 2464815318U, 3960178104U, 1784261920U, 18311476U, 3627135050U, 644609697U, 424968996U, 919890700U, 2986824110U, 816423214U, 4003562844U, 1392714305U, 1757384428U, 2569030598U, 995949559U, 3875659880U, 2933807823U, 2752536860U, 2993858466U, 4030558899U, 2770783427U, 2775406005U, 2777781742U, 1931292655U, 472147933U, 3865853827U, 2726470545U, 2668412860U, 2887008249U, 408979190U, 3578063323U, 3242082049U, 1778193530U, 27981909U, 2362826515U, 389875677U, 1043878156U, 581653903U, 3830568952U, 389535942U, 3713523185U, 2768373359U, 2526101582U, 1998618197U, 1160859704U, 3951172488U, 1098005003U, 906275699U, 3446228002U, 2220677963U, 2059306445U, 132199571U, 476838790U, 1868039399U, 3097344807U, 857300945U, 396345050U, 2835919916U, 1782168828U, 1419519470U, 4288137521U, 819087232U, 596301494U, 872823172U, 1526888217U, 805161465U, 1116186205U, 2829002754U, 2352620120U, 620121516U, 354159268U, 3601949785U, 209568138U, 1352371732U, 2145977349U, 4236871834U, 1539414078U, 3558126206U, 3224857093U, 4164166682U, 3817553440U, 3301780278U, 2682696837U, 3734994768U, 1370950260U, 1477421202U, 2521315749U, 1330148125U, 1261554731U, 2769143688U, 3554756293U, 4235882678U, 3254686059U, 3530579953U, 1215452615U, 3574970923U, 4057131421U, 589224178U, 1000098193U, 171190718U, 2521852045U, 2351447494U, 2284441580U, 2646685513U, 3486933563U, 3789864960U, 1190528160U, 1702536782U, 1534105589U, 4262946827U, 2726686826U, 3584544841U, 2348270128U, 2145092281U, 2502718509U, 1027832411U, 3571171153U, 1287361161U, 4011474411U, 3241215351U, 2419700818U, 971242709U, 1361975763U, 1096842482U, 3271045537U, 81165449U, 612438025U, 3912966678U, 1356929810U, 733545735U, 537003843U, 1282953084U, 884458241U, 588930090U, 3930269801U, 2961472450U, 1219535534U, 3632251943U, 268183903U, 1441240533U, 3653903360U, 3854473319U, 2259087390U, 2548293048U, 2022641195U, 2105543911U, 1764085217U, 3246183186U, 482438805U, 888317895U, 2628314765U, 2466219854U, 717546004U, 2322237039U, 416725234U, 1544049923U, 1797944973U, 3398652364U, 3111909456U, 485742908U, 2277491072U, 1056355088U, 3181001278U, 129695079U, 2693624550U, 1764438564U, 3797785470U, 195503713U, 3266519725U, 2053389444U, 1961527818U, 3400226523U, 3777903038U, 2597274307U, 4235851091U, 4094406648U, 2171410785U, 1781151386U, 1378577117U, 654643266U, 3424024173U, 3385813322U, 679385799U, 479380913U, 681715441U, 3096225905U, 276813409U, 3854398070U, 2721105350U, 831263315U, 3276280337U, 2628301522U, 3984868494U, 1466099834U, 2104922114U, 1412672743U, 820330404U, 3491501010U, 942735832U, 710652807U, 3972652090U, 679881088U, 40577009U, 3705286397U, 2815423480U, 3566262429U, 663396513U, 3777887429U, 4016670678U, 404539370U, 1142712925U, 1140173408U, 2913248352U, 2872321286U, 263751841U, 3175196073U, 3162557581U, 2878996619U, 75498548U, 3836833140U, 3284664959U, 1157523805U, 112847376U, 207855609U, 1337979698U, 1222578451U, 157107174U, 901174378U, 3883717063U, 1618632639U, 1767889440U, 4264698824U, 1582999313U, 884471997U, 2508825098U, 3756370771U, 2457213553U, 3565776881U, 3709583214U, 915609601U, 460833524U, 1091049576U, 85522880U, 2553251U, 132102809U, 2429882442U, 2562084610U, 1386507633U, 4112471229U, 21965213U, 1981516006U, 2418435617U, 3054872091U, 4251511224U, 2025783543U, 1916911512U, 2454491136U, 3938440891U, 3825869115U, 1121698605U, 3463052265U, 802340101U, 1912886800U, 4031997367U, 3550640406U, 1596096923U, 610150600U, 
431464457U, 2541325046U, 486478003U, 739704936U, 2862696430U, 3037903166U, 1129749694U, 2611481261U, 1228993498U, 510075548U, 3424962587U, 2458689681U, 818934833U, 4233309125U, 1608196251U, 3419476016U, 1858543939U, 2682166524U, 3317854285U, 631986188U, 3008214764U, 613826412U, 3567358221U, 3512343882U, 1552467474U, 3316162670U, 1275841024U, 4142173454U, 565267881U, 768644821U, 198310105U, 2396688616U, 1837659011U, 203429334U, 854539004U, 4235811518U, 3338304926U, 3730418692U, 3852254981U, 3032046452U, 2329811860U, 2303590566U, 2696092212U, 3894665932U, 145835667U, 249563655U, 1932210840U, 2431696407U, 3312636759U, 214962629U, 2092026914U, 3020145527U, 4073039873U, 2739105705U, 1308336752U, 855104522U, 2391715321U, 67448785U, 547989482U, 854411802U, 3608633740U, 431731530U, 537375589U, 3888005760U, 696099141U, 397343236U, 1864511780U, 44029739U, 1729526891U, 1993398655U, 2010173426U, 2591546756U, 275223291U, 1503900299U, 4217765081U, 2185635252U, 1122436015U, 3550155364U, 681707194U, 3260479338U, 933579397U, 2983029282U, 2505504587U, 2667410393U, 2962684490U, 4139721708U, 2658172284U, 2452602383U, 2607631612U, 1344296217U, 3075398709U, 2949785295U, 1049956168U, 3917185129U, 2155660174U, 3280524475U, 1503827867U, 674380765U, 1918468193U, 3843983676U, 634358221U, 2538335643U, 1873351298U, 3368723763U, 2129144130U, 3203528633U, 3087174986U, 2691698871U, 2516284287U, 24437745U, 1118381474U, 2816314867U, 2448576035U, 4281989654U, 217287825U, 165872888U, 2628995722U, 3533525116U, 2721669106U, 872340568U, 3429930655U, 3309047304U, 3916704967U, 3270160355U, 1348884255U, 1634797670U, 881214967U, 4259633554U, 174613027U, 1103974314U, 1625224232U, 2678368291U, 1133866707U, 3853082619U, 4073196549U, 1189620777U, 637238656U, 930241537U, 4042750792U, 3842136042U, 2417007212U, 2524907510U, 1243036827U, 1282059441U, 3764588774U, 1394459615U, 2323620015U, 1166152231U, 3307479609U, 3849322257U, 3507445699U, 4247696636U, 758393720U, 967665141U, 1095244571U, 1319812152U, 407678762U, 2640605208U, 2170766134U, 3663594275U, 4039329364U, 2512175520U, 725523154U, 2249807004U, 3312617979U, 2414634172U, 1278482215U, 349206484U, 1573063308U, 1196429124U, 3873264116U, 2400067801U, 268795167U, 226175489U, 2961367263U, 1968719665U, 42656370U, 1010790699U, 561600615U, 2422453992U, 3082197735U, 1636700484U, 3977715296U, 3125350482U, 3478021514U, 2227819446U, 1540868045U, 3061908980U, 1087362407U, 3625200291U, 361937537U, 580441897U, 1520043666U, 2270875402U, 1009161260U, 2502355842U, 4278769785U, 473902412U, 1057239083U, 1905829039U, 1483781177U, 2080011417U, 1207494246U, 1806991954U, 2194674403U, 3455972205U, 807207678U, 3655655687U, 674112918U, 195425752U, 3917890095U, 1874364234U, 1837892715U, 3663478166U, 1548892014U, 2570748714U, 2049929836U, 2167029704U, 697543767U, 3499545023U, 3342496315U, 1725251190U, 3561387469U, 2905606616U, 1580182447U, 3934525927U, 4103172792U, 1365672522U, 1534795737U, 3308667416U, 2841911405U, 3943182730U, 4072020313U, 3494770452U, 3332626671U, 55327267U, 478030603U, 411080625U, 3419529010U, 1604767823U, 3513468014U, 570668510U, 913790824U, 2283967995U, 695159462U, 3825542932U, 4150698144U, 1829758699U, 202895590U, 1609122645U, 1267651008U, 2910315509U, 2511475445U, 2477423819U, 3932081579U, 900879979U, 2145588390U, 2670007504U, 580819444U, 1864996828U, 2526325979U, 1019124258U, 815508628U, 2765933989U, 1277301341U, 3006021786U, 855540956U, 288025710U, 1919594237U, 2331223864U, 177452412U, 2475870369U, 2689291749U, 865194284U, 253432152U, 2628531804U, 2861208555U, 2361597573U, 1653952120U, 
1039661024U, 2159959078U, 3709040440U, 3564718533U, 2596878672U, 2041442161U, 31164696U, 2662962485U, 3665637339U, 1678115244U, 2699839832U, 3651968520U, 3521595541U, 458433303U, 2423096824U, 21831741U, 380011703U, 2498168716U, 861806087U, 1673574843U, 4188794405U, 2520563651U, 2632279153U, 2170465525U, 4171949898U, 3886039621U, 1661344005U, 3424285243U, 992588372U, 2500984144U, 2993248497U, 3590193895U, 1535327365U, 515645636U, 131633450U, 3729760261U, 1613045101U, 3254194278U, 15889678U, 1493590689U, 244148718U, 2991472662U, 1401629333U, 777349878U, 2501401703U, 4285518317U, 3794656178U, 955526526U, 3442142820U, 3970298374U, 736025417U, 2737370764U, 1271509744U, 440570731U, 136141826U, 1596189518U, 923399175U, 257541519U, 3505774281U, 2194358432U, 2518162991U, 1379893637U, 2667767062U, 3748146247U, 1821712620U, 3923161384U, 1947811444U, 2392527197U, 4127419685U, 1423694998U, 4156576871U, 1382885582U, 3420127279U, 3617499534U, 2994377493U, 4038063986U, 1918458672U, 2983166794U, 4200449033U, 353294540U, 1609232588U, 243926648U, 2332803291U, 507996832U, 2392838793U, 4075145196U, 2060984340U, 4287475136U, 88232602U, 2491531140U, 4159725633U, 2272075455U, 759298618U, 201384554U, 838356250U, 1416268324U, 674476934U, 90795364U, 141672229U, 3660399588U, 4196417251U, 3249270244U, 3774530247U, 59587265U, 3683164208U, 19392575U, 1463123697U, 1882205379U, 293780489U, 2553160622U, 2933904694U, 675638239U, 2851336944U, 1435238743U, 2448730183U, 804436302U, 2119845972U, 322560608U, 4097732704U, 2987802540U, 641492617U, 2575442710U, 4217822703U, 3271835300U, 2836418300U, 3739921620U, 2138378768U, 2879771855U, 4294903423U, 3121097946U, 2603440486U, 2560820391U, 1012930944U, 2313499967U, 584489368U, 3431165766U, 897384869U, 2062537737U, 2847889234U, 3742362450U, 2951174585U, 4204621084U, 1109373893U, 3668075775U, 2750138839U, 3518055702U, 733072558U, 4169325400U, 788493625U }; static const uint64_t init_gen_rand_64_expected[] = { KQU(16924766246869039260), KQU( 8201438687333352714), KQU( 2265290287015001750), KQU(18397264611805473832), KQU( 3375255223302384358), KQU( 6345559975416828796), KQU(18229739242790328073), KQU( 7596792742098800905), KQU( 255338647169685981), KQU( 2052747240048610300), KQU(18328151576097299343), KQU(12472905421133796567), KQU(11315245349717600863), KQU(16594110197775871209), KQU(15708751964632456450), KQU(10452031272054632535), KQU(11097646720811454386), KQU( 4556090668445745441), KQU(17116187693090663106), KQU(14931526836144510645), KQU( 9190752218020552591), KQU( 9625800285771901401), KQU(13995141077659972832), KQU( 5194209094927829625), KQU( 4156788379151063303), KQU( 8523452593770139494), KQU(14082382103049296727), KQU( 2462601863986088483), KQU( 3030583461592840678), KQU( 5221622077872827681), KQU( 3084210671228981236), KQU(13956758381389953823), KQU(13503889856213423831), KQU(15696904024189836170), KQU( 4612584152877036206), KQU( 6231135538447867881), KQU(10172457294158869468), KQU( 6452258628466708150), KQU(14044432824917330221), KQU( 370168364480044279), KQU(10102144686427193359), KQU( 667870489994776076), KQU( 2732271956925885858), KQU(18027788905977284151), KQU(15009842788582923859), KQU( 7136357960180199542), KQU(15901736243475578127), KQU(16951293785352615701), KQU(10551492125243691632), KQU(17668869969146434804), KQU(13646002971174390445), KQU( 9804471050759613248), KQU( 5511670439655935493), KQU(18103342091070400926), KQU(17224512747665137533), KQU(15534627482992618168), KQU( 1423813266186582647), KQU(15821176807932930024), KQU( 30323369733607156), 
KQU(11599382494723479403), KQU( 653856076586810062), KQU( 3176437395144899659), KQU(14028076268147963917), KQU(16156398271809666195), KQU( 3166955484848201676), KQU( 5746805620136919390), KQU(17297845208891256593), KQU(11691653183226428483), KQU(17900026146506981577), KQU(15387382115755971042), KQU(16923567681040845943), KQU( 8039057517199388606), KQU(11748409241468629263), KQU( 794358245539076095), KQU(13438501964693401242), KQU(14036803236515618962), KQU( 5252311215205424721), KQU(17806589612915509081), KQU( 6802767092397596006), KQU(14212120431184557140), KQU( 1072951366761385712), KQU(13098491780722836296), KQU( 9466676828710797353), KQU(12673056849042830081), KQU(12763726623645357580), KQU(16468961652999309493), KQU(15305979875636438926), KQU(17444713151223449734), KQU( 5692214267627883674), KQU(13049589139196151505), KQU( 880115207831670745), KQU( 1776529075789695498), KQU(16695225897801466485), KQU(10666901778795346845), KQU( 6164389346722833869), KQU( 2863817793264300475), KQU( 9464049921886304754), KQU( 3993566636740015468), KQU( 9983749692528514136), KQU(16375286075057755211), KQU(16042643417005440820), KQU(11445419662923489877), KQU( 7999038846885158836), KQU( 6721913661721511535), KQU( 5363052654139357320), KQU( 1817788761173584205), KQU(13290974386445856444), KQU( 4650350818937984680), KQU( 8219183528102484836), KQU( 1569862923500819899), KQU( 4189359732136641860), KQU(14202822961683148583), KQU( 4457498315309429058), KQU(13089067387019074834), KQU(11075517153328927293), KQU(10277016248336668389), KQU( 7070509725324401122), KQU(17808892017780289380), KQU(13143367339909287349), KQU( 1377743745360085151), KQU( 5749341807421286485), KQU(14832814616770931325), KQU( 7688820635324359492), KQU(10960474011539770045), KQU( 81970066653179790), KQU(12619476072607878022), KQU( 4419566616271201744), KQU(15147917311750568503), KQU( 5549739182852706345), KQU( 7308198397975204770), KQU(13580425496671289278), KQU(17070764785210130301), KQU( 8202832846285604405), KQU( 6873046287640887249), KQU( 6927424434308206114), KQU( 6139014645937224874), KQU(10290373645978487639), KQU(15904261291701523804), KQU( 9628743442057826883), KQU(18383429096255546714), KQU( 4977413265753686967), KQU( 7714317492425012869), KQU( 9025232586309926193), KQU(14627338359776709107), KQU(14759849896467790763), KQU(10931129435864423252), KQU( 4588456988775014359), KQU(10699388531797056724), KQU( 468652268869238792), KQU( 5755943035328078086), KQU( 2102437379988580216), KQU( 9986312786506674028), KQU( 2654207180040945604), KQU( 8726634790559960062), KQU( 100497234871808137), KQU( 2800137176951425819), KQU( 6076627612918553487), KQU( 5780186919186152796), KQU( 8179183595769929098), KQU( 6009426283716221169), KQU( 2796662551397449358), KQU( 1756961367041986764), KQU( 6972897917355606205), KQU(14524774345368968243), KQU( 2773529684745706940), KQU( 4853632376213075959), KQU( 4198177923731358102), KQU( 8271224913084139776), KQU( 2741753121611092226), KQU(16782366145996731181), KQU(15426125238972640790), KQU(13595497100671260342), KQU( 3173531022836259898), KQU( 6573264560319511662), KQU(18041111951511157441), KQU( 2351433581833135952), KQU( 3113255578908173487), KQU( 1739371330877858784), KQU(16046126562789165480), KQU( 8072101652214192925), KQU(15267091584090664910), KQU( 9309579200403648940), KQU( 5218892439752408722), KQU(14492477246004337115), KQU(17431037586679770619), KQU( 7385248135963250480), KQU( 9580144956565560660), KQU( 4919546228040008720), KQU(15261542469145035584), KQU(18233297270822253102), KQU( 
5453248417992302857), KQU( 9309519155931460285), KQU(10342813012345291756), KQU(15676085186784762381), KQU(15912092950691300645), KQU( 9371053121499003195), KQU( 9897186478226866746), KQU(14061858287188196327), KQU( 122575971620788119), KQU(12146750969116317754), KQU( 4438317272813245201), KQU( 8332576791009527119), KQU(13907785691786542057), KQU(10374194887283287467), KQU( 2098798755649059566), KQU( 3416235197748288894), KQU( 8688269957320773484), KQU( 7503964602397371571), KQU(16724977015147478236), KQU( 9461512855439858184), KQU(13259049744534534727), KQU( 3583094952542899294), KQU( 8764245731305528292), KQU(13240823595462088985), KQU(13716141617617910448), KQU(18114969519935960955), KQU( 2297553615798302206), KQU( 4585521442944663362), KQU(17776858680630198686), KQU( 4685873229192163363), KQU( 152558080671135627), KQU(15424900540842670088), KQU(13229630297130024108), KQU(17530268788245718717), KQU(16675633913065714144), KQU( 3158912717897568068), KQU(15399132185380087288), KQU( 7401418744515677872), KQU(13135412922344398535), KQU( 6385314346100509511), KQU(13962867001134161139), KQU(10272780155442671999), KQU(12894856086597769142), KQU(13340877795287554994), KQU(12913630602094607396), KQU(12543167911119793857), KQU(17343570372251873096), KQU(10959487764494150545), KQU( 6966737953093821128), KQU(13780699135496988601), KQU( 4405070719380142046), KQU(14923788365607284982), KQU( 2869487678905148380), KQU( 6416272754197188403), KQU(15017380475943612591), KQU( 1995636220918429487), KQU( 3402016804620122716), KQU(15800188663407057080), KQU(11362369990390932882), KQU(15262183501637986147), KQU(10239175385387371494), KQU( 9352042420365748334), KQU( 1682457034285119875), KQU( 1724710651376289644), KQU( 2038157098893817966), KQU( 9897825558324608773), KQU( 1477666236519164736), KQU(16835397314511233640), KQU(10370866327005346508), KQU(10157504370660621982), KQU(12113904045335882069), KQU(13326444439742783008), KQU(11302769043000765804), KQU(13594979923955228484), KQU(11779351762613475968), KQU( 3786101619539298383), KQU( 8021122969180846063), KQU(15745904401162500495), KQU(10762168465993897267), KQU(13552058957896319026), KQU(11200228655252462013), KQU( 5035370357337441226), KQU( 7593918984545500013), KQU( 5418554918361528700), KQU( 4858270799405446371), KQU( 9974659566876282544), KQU(18227595922273957859), KQU( 2772778443635656220), KQU(14285143053182085385), KQU( 9939700992429600469), KQU(12756185904545598068), KQU( 2020783375367345262), KQU( 57026775058331227), KQU( 950827867930065454), KQU( 6602279670145371217), KQU( 2291171535443566929), KQU( 5832380724425010313), KQU( 1220343904715982285), KQU(17045542598598037633), KQU(15460481779702820971), KQU(13948388779949365130), KQU(13975040175430829518), KQU(17477538238425541763), KQU(11104663041851745725), KQU(15860992957141157587), KQU(14529434633012950138), KQU( 2504838019075394203), KQU( 7512113882611121886), KQU( 4859973559980886617), KQU( 1258601555703250219), KQU(15594548157514316394), KQU( 4516730171963773048), KQU(11380103193905031983), KQU( 6809282239982353344), KQU(18045256930420065002), KQU( 2453702683108791859), KQU( 977214582986981460), KQU( 2006410402232713466), KQU( 6192236267216378358), KQU( 3429468402195675253), KQU(18146933153017348921), KQU(17369978576367231139), KQU( 1246940717230386603), KQU(11335758870083327110), KQU(14166488801730353682), KQU( 9008573127269635732), KQU(10776025389820643815), KQU(15087605441903942962), KQU( 1359542462712147922), KQU(13898874411226454206), KQU(17911176066536804411), KQU( 9435590428600085274), 
KQU( 294488509967864007), KQU( 8890111397567922046), KQU( 7987823476034328778), KQU(13263827582440967651), KQU( 7503774813106751573), KQU(14974747296185646837), KQU( 8504765037032103375), KQU(17340303357444536213), KQU( 7704610912964485743), KQU( 8107533670327205061), KQU( 9062969835083315985), KQU(16968963142126734184), KQU(12958041214190810180), KQU( 2720170147759570200), KQU( 2986358963942189566), KQU(14884226322219356580), KQU( 286224325144368520), KQU(11313800433154279797), KQU(18366849528439673248), KQU(17899725929482368789), KQU( 3730004284609106799), KQU( 1654474302052767205), KQU( 5006698007047077032), KQU( 8196893913601182838), KQU(15214541774425211640), KQU(17391346045606626073), KQU( 8369003584076969089), KQU( 3939046733368550293), KQU(10178639720308707785), KQU( 2180248669304388697), KQU( 62894391300126322), KQU( 9205708961736223191), KQU( 6837431058165360438), KQU( 3150743890848308214), KQU(17849330658111464583), KQU(12214815643135450865), KQU(13410713840519603402), KQU( 3200778126692046802), KQU(13354780043041779313), KQU( 800850022756886036), KQU(15660052933953067433), KQU( 6572823544154375676), KQU(11030281857015819266), KQU(12682241941471433835), KQU(11654136407300274693), KQU( 4517795492388641109), KQU( 9757017371504524244), KQU(17833043400781889277), KQU(12685085201747792227), KQU(10408057728835019573), KQU( 98370418513455221), KQU( 6732663555696848598), KQU(13248530959948529780), KQU( 3530441401230622826), KQU(18188251992895660615), KQU( 1847918354186383756), KQU( 1127392190402660921), KQU(11293734643143819463), KQU( 3015506344578682982), KQU(13852645444071153329), KQU( 2121359659091349142), KQU( 1294604376116677694), KQU( 5616576231286352318), KQU( 7112502442954235625), KQU(11676228199551561689), KQU(12925182803007305359), KQU( 7852375518160493082), KQU( 1136513130539296154), KQU( 5636923900916593195), KQU( 3221077517612607747), KQU(17784790465798152513), KQU( 3554210049056995938), KQU(17476839685878225874), KQU( 3206836372585575732), KQU( 2765333945644823430), KQU(10080070903718799528), KQU( 5412370818878286353), KQU( 9689685887726257728), KQU( 8236117509123533998), KQU( 1951139137165040214), KQU( 4492205209227980349), KQU(16541291230861602967), KQU( 1424371548301437940), KQU( 9117562079669206794), KQU(14374681563251691625), KQU(13873164030199921303), KQU( 6680317946770936731), KQU(15586334026918276214), KQU(10896213950976109802), KQU( 9506261949596413689), KQU( 9903949574308040616), KQU( 6038397344557204470), KQU( 174601465422373648), KQU(15946141191338238030), KQU(17142225620992044937), KQU( 7552030283784477064), KQU( 2947372384532947997), KQU( 510797021688197711), KQU( 4962499439249363461), KQU( 23770320158385357), KQU( 959774499105138124), KQU( 1468396011518788276), KQU( 2015698006852312308), KQU( 4149400718489980136), KQU( 5992916099522371188), KQU(10819182935265531076), KQU(16189787999192351131), KQU( 342833961790261950), KQU(12470830319550495336), KQU(18128495041912812501), KQU( 1193600899723524337), KQU( 9056793666590079770), KQU( 2154021227041669041), KQU( 4963570213951235735), KQU( 4865075960209211409), KQU( 2097724599039942963), KQU( 2024080278583179845), KQU(11527054549196576736), KQU(10650256084182390252), KQU( 4808408648695766755), KQU( 1642839215013788844), KQU(10607187948250398390), KQU( 7076868166085913508), KQU( 730522571106887032), KQU(12500579240208524895), KQU( 4484390097311355324), KQU(15145801330700623870), KQU( 8055827661392944028), KQU( 5865092976832712268), KQU(15159212508053625143), KQU( 3560964582876483341), KQU( 4070052741344438280), KQU( 
6032585709886855634), KQU(15643262320904604873), KQU( 2565119772293371111), KQU( 318314293065348260), KQU(15047458749141511872), KQU( 7772788389811528730), KQU( 7081187494343801976), KQU( 6465136009467253947), KQU(10425940692543362069), KQU( 554608190318339115), KQU(14796699860302125214), KQU( 1638153134431111443), KQU(10336967447052276248), KQU( 8412308070396592958), KQU( 4004557277152051226), KQU( 8143598997278774834), KQU(16413323996508783221), KQU(13139418758033994949), KQU( 9772709138335006667), KQU( 2818167159287157659), KQU(17091740573832523669), KQU(14629199013130751608), KQU(18268322711500338185), KQU( 8290963415675493063), KQU( 8830864907452542588), KQU( 1614839084637494849), KQU(14855358500870422231), KQU( 3472996748392519937), KQU(15317151166268877716), KQU( 5825895018698400362), KQU(16730208429367544129), KQU(10481156578141202800), KQU( 4746166512382823750), KQU(12720876014472464998), KQU( 8825177124486735972), KQU(13733447296837467838), KQU( 6412293741681359625), KQU( 8313213138756135033), KQU(11421481194803712517), KQU( 7997007691544174032), KQU( 6812963847917605930), KQU( 9683091901227558641), KQU(14703594165860324713), KQU( 1775476144519618309), KQU( 2724283288516469519), KQU( 717642555185856868), KQU( 8736402192215092346), KQU(11878800336431381021), KQU( 4348816066017061293), KQU( 6115112756583631307), KQU( 9176597239667142976), KQU(12615622714894259204), KQU(10283406711301385987), KQU( 5111762509485379420), KQU( 3118290051198688449), KQU( 7345123071632232145), KQU( 9176423451688682359), KQU( 4843865456157868971), KQU(12008036363752566088), KQU(12058837181919397720), KQU( 2145073958457347366), KQU( 1526504881672818067), KQU( 3488830105567134848), KQU(13208362960674805143), KQU( 4077549672899572192), KQU( 7770995684693818365), KQU( 1398532341546313593), KQU(12711859908703927840), KQU( 1417561172594446813), KQU(17045191024194170604), KQU( 4101933177604931713), KQU(14708428834203480320), KQU(17447509264469407724), KQU(14314821973983434255), KQU(17990472271061617265), KQU( 5087756685841673942), KQU(12797820586893859939), KQU( 1778128952671092879), KQU( 3535918530508665898), KQU( 9035729701042481301), KQU(14808661568277079962), KQU(14587345077537747914), KQU(11920080002323122708), KQU( 6426515805197278753), KQU( 3295612216725984831), KQU(11040722532100876120), KQU(12305952936387598754), KQU(16097391899742004253), KQU( 4908537335606182208), KQU(12446674552196795504), KQU(16010497855816895177), KQU( 9194378874788615551), KQU( 3382957529567613384), KQU( 5154647600754974077), KQU( 9801822865328396141), KQU( 9023662173919288143), KQU(17623115353825147868), KQU( 8238115767443015816), KQU(15811444159859002560), KQU( 9085612528904059661), KQU( 6888601089398614254), KQU( 258252992894160189), KQU( 6704363880792428622), KQU( 6114966032147235763), KQU(11075393882690261875), KQU( 8797664238933620407), KQU( 5901892006476726920), KQU( 5309780159285518958), KQU(14940808387240817367), KQU(14642032021449656698), KQU( 9808256672068504139), KQU( 3670135111380607658), KQU(11211211097845960152), KQU( 1474304506716695808), KQU(15843166204506876239), KQU( 7661051252471780561), KQU(10170905502249418476), KQU( 7801416045582028589), KQU( 2763981484737053050), KQU( 9491377905499253054), KQU(16201395896336915095), KQU( 9256513756442782198), KQU( 5411283157972456034), KQU( 5059433122288321676), KQU( 4327408006721123357), KQU( 9278544078834433377), KQU( 7601527110882281612), KQU(11848295896975505251), KQU(12096998801094735560), KQU(14773480339823506413), KQU(15586227433895802149), KQU(12786541257830242872), 
KQU( 6904692985140503067), KQU( 5309011515263103959), KQU(12105257191179371066), KQU(14654380212442225037), KQU( 2556774974190695009), KQU( 4461297399927600261), KQU(14888225660915118646), KQU(14915459341148291824), KQU( 2738802166252327631), KQU( 6047155789239131512), KQU(12920545353217010338), KQU(10697617257007840205), KQU( 2751585253158203504), KQU(13252729159780047496), KQU(14700326134672815469), KQU(14082527904374600529), KQU(16852962273496542070), KQU(17446675504235853907), KQU(15019600398527572311), KQU(12312781346344081551), KQU(14524667935039810450), KQU( 5634005663377195738), KQU(11375574739525000569), KQU( 2423665396433260040), KQU( 5222836914796015410), KQU( 4397666386492647387), KQU( 4619294441691707638), KQU( 665088602354770716), KQU(13246495665281593610), KQU( 6564144270549729409), KQU(10223216188145661688), KQU( 3961556907299230585), KQU(11543262515492439914), KQU(16118031437285993790), KQU( 7143417964520166465), KQU(13295053515909486772), KQU( 40434666004899675), KQU(17127804194038347164), KQU( 8599165966560586269), KQU( 8214016749011284903), KQU(13725130352140465239), KQU( 5467254474431726291), KQU( 7748584297438219877), KQU(16933551114829772472), KQU( 2169618439506799400), KQU( 2169787627665113463), KQU(17314493571267943764), KQU(18053575102911354912), KQU(11928303275378476973), KQU(11593850925061715550), KQU(17782269923473589362), KQU( 3280235307704747039), KQU( 6145343578598685149), KQU(17080117031114086090), KQU(18066839902983594755), KQU( 6517508430331020706), KQU( 8092908893950411541), KQU(12558378233386153732), KQU( 4476532167973132976), KQU(16081642430367025016), KQU( 4233154094369139361), KQU( 8693630486693161027), KQU(11244959343027742285), KQU(12273503967768513508), KQU(14108978636385284876), KQU( 7242414665378826984), KQU( 6561316938846562432), KQU( 8601038474994665795), KQU(17532942353612365904), KQU(17940076637020912186), KQU( 7340260368823171304), KQU( 7061807613916067905), KQU(10561734935039519326), KQU(17990796503724650862), KQU( 6208732943911827159), KQU( 359077562804090617), KQU(14177751537784403113), KQU(10659599444915362902), KQU(15081727220615085833), KQU(13417573895659757486), KQU(15513842342017811524), KQU(11814141516204288231), KQU( 1827312513875101814), KQU( 2804611699894603103), KQU(17116500469975602763), KQU(12270191815211952087), KQU(12256358467786024988), KQU(18435021722453971267), KQU( 671330264390865618), KQU( 476504300460286050), KQU(16465470901027093441), KQU( 4047724406247136402), KQU( 1322305451411883346), KQU( 1388308688834322280), KQU( 7303989085269758176), KQU( 9323792664765233642), KQU( 4542762575316368936), KQU(17342696132794337618), KQU( 4588025054768498379), KQU(13415475057390330804), KQU(17880279491733405570), KQU(10610553400618620353), KQU( 3180842072658960139), KQU(13002966655454270120), KQU( 1665301181064982826), KQU( 7083673946791258979), KQU( 190522247122496820), KQU(17388280237250677740), KQU( 8430770379923642945), KQU(12987180971921668584), KQU( 2311086108365390642), KQU( 2870984383579822345), KQU(14014682609164653318), KQU(14467187293062251484), KQU( 192186361147413298), KQU(15171951713531796524), KQU( 9900305495015948728), KQU(17958004775615466344), KQU(14346380954498606514), KQU(18040047357617407096), KQU( 5035237584833424532), KQU(15089555460613972287), KQU( 4131411873749729831), KQU( 1329013581168250330), KQU(10095353333051193949), KQU(10749518561022462716), KQU( 9050611429810755847), KQU(15022028840236655649), KQU( 8775554279239748298), KQU(13105754025489230502), KQU(15471300118574167585), KQU( 89864764002355628), 
KQU( 8776416323420466637), KQU( 5280258630612040891), KQU( 2719174488591862912), KQU( 7599309137399661994), KQU(15012887256778039979), KQU(14062981725630928925), KQU(12038536286991689603), KQU( 7089756544681775245), KQU(10376661532744718039), KQU( 1265198725901533130), KQU(13807996727081142408), KQU( 2935019626765036403), KQU( 7651672460680700141), KQU( 3644093016200370795), KQU( 2840982578090080674), KQU(17956262740157449201), KQU(18267979450492880548), KQU(11799503659796848070), KQU( 9942537025669672388), KQU(11886606816406990297), KQU( 5488594946437447576), KQU( 7226714353282744302), KQU( 3784851653123877043), KQU( 878018453244803041), KQU(12110022586268616085), KQU( 734072179404675123), KQU(11869573627998248542), KQU( 469150421297783998), KQU( 260151124912803804), KQU(11639179410120968649), KQU( 9318165193840846253), KQU(12795671722734758075), KQU(15318410297267253933), KQU( 691524703570062620), KQU( 5837129010576994601), KQU(15045963859726941052), KQU( 5850056944932238169), KQU(12017434144750943807), KQU( 7447139064928956574), KQU( 3101711812658245019), KQU(16052940704474982954), KQU(18195745945986994042), KQU( 8932252132785575659), KQU(13390817488106794834), KQU(11582771836502517453), KQU( 4964411326683611686), KQU( 2195093981702694011), KQU(14145229538389675669), KQU(16459605532062271798), KQU( 866316924816482864), KQU( 4593041209937286377), KQU( 8415491391910972138), KQU( 4171236715600528969), KQU(16637569303336782889), KQU( 2002011073439212680), KQU(17695124661097601411), KQU( 4627687053598611702), KQU( 7895831936020190403), KQU( 8455951300917267802), KQU( 2923861649108534854), KQU( 8344557563927786255), KQU( 6408671940373352556), KQU(12210227354536675772), KQU(14294804157294222295), KQU(10103022425071085127), KQU(10092959489504123771), KQU( 6554774405376736268), KQU(12629917718410641774), KQU( 6260933257596067126), KQU( 2460827021439369673), KQU( 2541962996717103668), KQU( 597377203127351475), KQU( 5316984203117315309), KQU( 4811211393563241961), KQU(13119698597255811641), KQU( 8048691512862388981), KQU(10216818971194073842), KQU( 4612229970165291764), KQU(10000980798419974770), KQU( 6877640812402540687), KQU( 1488727563290436992), KQU( 2227774069895697318), KQU(11237754507523316593), KQU(13478948605382290972), KQU( 1963583846976858124), KQU( 5512309205269276457), KQU( 3972770164717652347), KQU( 3841751276198975037), KQU(10283343042181903117), KQU( 8564001259792872199), KQU(16472187244722489221), KQU( 8953493499268945921), KQU( 3518747340357279580), KQU( 4003157546223963073), KQU( 3270305958289814590), KQU( 3966704458129482496), KQU( 8122141865926661939), KQU(14627734748099506653), KQU(13064426990862560568), KQU( 2414079187889870829), KQU( 5378461209354225306), KQU(10841985740128255566), KQU( 538582442885401738), KQU( 7535089183482905946), KQU(16117559957598879095), KQU( 8477890721414539741), KQU( 1459127491209533386), KQU(17035126360733620462), KQU( 8517668552872379126), KQU(10292151468337355014), KQU(17081267732745344157), KQU(13751455337946087178), KQU(14026945459523832966), KQU( 6653278775061723516), KQU(10619085543856390441), KQU( 2196343631481122885), KQU(10045966074702826136), KQU(10082317330452718282), KQU( 5920859259504831242), KQU( 9951879073426540617), KQU( 7074696649151414158), KQU(15808193543879464318), KQU( 7385247772746953374), KQU( 3192003544283864292), KQU(18153684490917593847), KQU(12423498260668568905), KQU(10957758099756378169), KQU(11488762179911016040), KQU( 2099931186465333782), KQU(11180979581250294432), KQU( 8098916250668367933), KQU( 
3529200436790763465), KQU(12988418908674681745), KQU( 6147567275954808580), KQU( 3207503344604030989), KQU(10761592604898615360), KQU( 229854861031893504), KQU( 8809853962667144291), KQU(13957364469005693860), KQU( 7634287665224495886), KQU(12353487366976556874), KQU( 1134423796317152034), KQU( 2088992471334107068), KQU( 7393372127190799698), KQU( 1845367839871058391), KQU( 207922563987322884), KQU(11960870813159944976), KQU(12182120053317317363), KQU(17307358132571709283), KQU(13871081155552824936), KQU(18304446751741566262), KQU( 7178705220184302849), KQU(10929605677758824425), KQU(16446976977835806844), KQU(13723874412159769044), KQU( 6942854352100915216), KQU( 1726308474365729390), KQU( 2150078766445323155), KQU(15345558947919656626), KQU(12145453828874527201), KQU( 2054448620739726849), KQU( 2740102003352628137), KQU(11294462163577610655), KQU( 756164283387413743), KQU(17841144758438810880), KQU(10802406021185415861), KQU( 8716455530476737846), KQU( 6321788834517649606), KQU(14681322910577468426), KQU(17330043563884336387), KQU(12701802180050071614), KQU(14695105111079727151), KQU( 5112098511654172830), KQU( 4957505496794139973), KQU( 8270979451952045982), KQU(12307685939199120969), KQU(12425799408953443032), KQU( 8376410143634796588), KQU(16621778679680060464), KQU( 3580497854566660073), KQU( 1122515747803382416), KQU( 857664980960597599), KQU( 6343640119895925918), KQU(12878473260854462891), KQU(10036813920765722626), KQU(14451335468363173812), KQU( 5476809692401102807), KQU(16442255173514366342), KQU(13060203194757167104), KQU(14354124071243177715), KQU(15961249405696125227), KQU(13703893649690872584), KQU( 363907326340340064), KQU( 6247455540491754842), KQU(12242249332757832361), KQU( 156065475679796717), KQU( 9351116235749732355), KQU( 4590350628677701405), KQU( 1671195940982350389), KQU(13501398458898451905), KQU( 6526341991225002255), KQU( 1689782913778157592), KQU( 7439222350869010334), KQU(13975150263226478308), KQU(11411961169932682710), KQU(17204271834833847277), KQU( 541534742544435367), KQU( 6591191931218949684), KQU( 2645454775478232486), KQU( 4322857481256485321), KQU( 8477416487553065110), KQU(12902505428548435048), KQU( 971445777981341415), KQU(14995104682744976712), KQU( 4243341648807158063), KQU( 8695061252721927661), KQU( 5028202003270177222), KQU( 2289257340915567840), KQU(13870416345121866007), KQU(13994481698072092233), KQU( 6912785400753196481), KQU( 2278309315841980139), KQU( 4329765449648304839), KQU( 5963108095785485298), KQU( 4880024847478722478), KQU(16015608779890240947), KQU( 1866679034261393544), KQU( 914821179919731519), KQU( 9643404035648760131), KQU( 2418114953615593915), KQU( 944756836073702374), KQU(15186388048737296834), KQU( 7723355336128442206), KQU( 7500747479679599691), KQU(18013961306453293634), KQU( 2315274808095756456), KQU(13655308255424029566), KQU(17203800273561677098), KQU( 1382158694422087756), KQU( 5090390250309588976), KQU( 517170818384213989), KQU( 1612709252627729621), KQU( 1330118955572449606), KQU( 300922478056709885), KQU(18115693291289091987), KQU(13491407109725238321), KQU(15293714633593827320), KQU( 5151539373053314504), KQU( 5951523243743139207), KQU(14459112015249527975), KQU( 5456113959000700739), KQU( 3877918438464873016), KQU(12534071654260163555), KQU(15871678376893555041), KQU(11005484805712025549), KQU(16353066973143374252), KQU( 4358331472063256685), KQU( 8268349332210859288), KQU(12485161590939658075), KQU(13955993592854471343), KQU( 5911446886848367039), KQU(14925834086813706974), KQU( 6590362597857994805), KQU( 
1280544923533661875), KQU( 1637756018947988164), KQU( 4734090064512686329), KQU(16693705263131485912), KQU( 6834882340494360958), KQU( 8120732176159658505), KQU( 2244371958905329346), KQU(10447499707729734021), KQU( 7318742361446942194), KQU( 8032857516355555296), KQU(14023605983059313116), KQU( 1032336061815461376), KQU( 9840995337876562612), KQU( 9869256223029203587), KQU(12227975697177267636), KQU(12728115115844186033), KQU( 7752058479783205470), KQU( 729733219713393087), KQU(12954017801239007622) }; static const uint64_t init_by_array_64_expected[] = { KQU( 2100341266307895239), KQU( 8344256300489757943), KQU(15687933285484243894), KQU( 8268620370277076319), KQU(12371852309826545459), KQU( 8800491541730110238), KQU(18113268950100835773), KQU( 2886823658884438119), KQU( 3293667307248180724), KQU( 9307928143300172731), KQU( 7688082017574293629), KQU( 900986224735166665), KQU( 9977972710722265039), KQU( 6008205004994830552), KQU( 546909104521689292), KQU( 7428471521869107594), KQU(14777563419314721179), KQU(16116143076567350053), KQU( 5322685342003142329), KQU( 4200427048445863473), KQU( 4693092150132559146), KQU(13671425863759338582), KQU( 6747117460737639916), KQU( 4732666080236551150), KQU( 5912839950611941263), KQU( 3903717554504704909), KQU( 2615667650256786818), KQU(10844129913887006352), KQU(13786467861810997820), KQU(14267853002994021570), KQU(13767807302847237439), KQU(16407963253707224617), KQU( 4802498363698583497), KQU( 2523802839317209764), KQU( 3822579397797475589), KQU( 8950320572212130610), KQU( 3745623504978342534), KQU(16092609066068482806), KQU( 9817016950274642398), KQU(10591660660323829098), KQU(11751606650792815920), KQU( 5122873818577122211), KQU(17209553764913936624), KQU( 6249057709284380343), KQU(15088791264695071830), KQU(15344673071709851930), KQU( 4345751415293646084), KQU( 2542865750703067928), KQU(13520525127852368784), KQU(18294188662880997241), KQU( 3871781938044881523), KQU( 2873487268122812184), KQU(15099676759482679005), KQU(15442599127239350490), KQU( 6311893274367710888), KQU( 3286118760484672933), KQU( 4146067961333542189), KQU(13303942567897208770), KQU( 8196013722255630418), KQU( 4437815439340979989), KQU(15433791533450605135), KQU( 4254828956815687049), KQU( 1310903207708286015), KQU(10529182764462398549), KQU(14900231311660638810), KQU( 9727017277104609793), KQU( 1821308310948199033), KQU(11628861435066772084), KQU( 9469019138491546924), KQU( 3145812670532604988), KQU( 9938468915045491919), KQU( 1562447430672662142), KQU(13963995266697989134), KQU( 3356884357625028695), KQU( 4499850304584309747), KQU( 8456825817023658122), KQU(10859039922814285279), KQU( 8099512337972526555), KQU( 348006375109672149), KQU(11919893998241688603), KQU( 1104199577402948826), KQU(16689191854356060289), KQU(10992552041730168078), KQU( 7243733172705465836), KQU( 5668075606180319560), KQU(18182847037333286970), KQU( 4290215357664631322), KQU( 4061414220791828613), KQU(13006291061652989604), KQU( 7140491178917128798), KQU(12703446217663283481), KQU( 5500220597564558267), KQU(10330551509971296358), KQU(15958554768648714492), KQU( 5174555954515360045), KQU( 1731318837687577735), KQU( 3557700801048354857), KQU(13764012341928616198), KQU(13115166194379119043), KQU( 7989321021560255519), KQU( 2103584280905877040), KQU( 9230788662155228488), KQU(16396629323325547654), KQU( 657926409811318051), KQU(15046700264391400727), KQU( 5120132858771880830), KQU( 7934160097989028561), KQU( 6963121488531976245), KQU(17412329602621742089), KQU(15144843053931774092), 
KQU(17204176651763054532), KQU(13166595387554065870), KQU( 8590377810513960213), KQU( 5834365135373991938), KQU( 7640913007182226243), KQU( 3479394703859418425), KQU(16402784452644521040), KQU( 4993979809687083980), KQU(13254522168097688865), KQU(15643659095244365219), KQU( 5881437660538424982), KQU(11174892200618987379), KQU( 254409966159711077), KQU(17158413043140549909), KQU( 3638048789290376272), KQU( 1376816930299489190), KQU( 4622462095217761923), KQU(15086407973010263515), KQU(13253971772784692238), KQU( 5270549043541649236), KQU(11182714186805411604), KQU(12283846437495577140), KQU( 5297647149908953219), KQU(10047451738316836654), KQU( 4938228100367874746), KQU(12328523025304077923), KQU( 3601049438595312361), KQU( 9313624118352733770), KQU(13322966086117661798), KQU(16660005705644029394), KQU(11337677526988872373), KQU(13869299102574417795), KQU(15642043183045645437), KQU( 3021755569085880019), KQU( 4979741767761188161), KQU(13679979092079279587), KQU( 3344685842861071743), KQU(13947960059899588104), KQU( 305806934293368007), KQU( 5749173929201650029), KQU(11123724852118844098), KQU(15128987688788879802), KQU(15251651211024665009), KQU( 7689925933816577776), KQU(16732804392695859449), KQU(17087345401014078468), KQU(14315108589159048871), KQU( 4820700266619778917), KQU(16709637539357958441), KQU( 4936227875177351374), KQU( 2137907697912987247), KQU(11628565601408395420), KQU( 2333250549241556786), KQU( 5711200379577778637), KQU( 5170680131529031729), KQU(12620392043061335164), KQU( 95363390101096078), KQU( 5487981914081709462), KQU( 1763109823981838620), KQU( 3395861271473224396), KQU( 1300496844282213595), KQU( 6894316212820232902), KQU(10673859651135576674), KQU( 5911839658857903252), KQU(17407110743387299102), KQU( 8257427154623140385), KQU(11389003026741800267), KQU( 4070043211095013717), KQU(11663806997145259025), KQU(15265598950648798210), KQU( 630585789434030934), KQU( 3524446529213587334), KQU( 7186424168495184211), KQU(10806585451386379021), KQU(11120017753500499273), KQU( 1586837651387701301), KQU(17530454400954415544), KQU( 9991670045077880430), KQU( 7550997268990730180), KQU( 8640249196597379304), KQU( 3522203892786893823), KQU(10401116549878854788), KQU(13690285544733124852), KQU( 8295785675455774586), KQU(15535716172155117603), KQU( 3112108583723722511), KQU(17633179955339271113), KQU(18154208056063759375), KQU( 1866409236285815666), KQU(13326075895396412882), KQU( 8756261842948020025), KQU( 6281852999868439131), KQU(15087653361275292858), KQU(10333923911152949397), KQU( 5265567645757408500), KQU(12728041843210352184), KQU( 6347959327507828759), KQU( 154112802625564758), KQU(18235228308679780218), KQU( 3253805274673352418), KQU( 4849171610689031197), KQU(17948529398340432518), KQU(13803510475637409167), KQU(13506570190409883095), KQU(15870801273282960805), KQU( 8451286481299170773), KQU( 9562190620034457541), KQU( 8518905387449138364), KQU(12681306401363385655), KQU( 3788073690559762558), KQU( 5256820289573487769), KQU( 2752021372314875467), KQU( 6354035166862520716), KQU( 4328956378309739069), KQU( 449087441228269600), KQU( 5533508742653090868), KQU( 1260389420404746988), KQU(18175394473289055097), KQU( 1535467109660399420), KQU( 8818894282874061442), KQU(12140873243824811213), KQU(15031386653823014946), KQU( 1286028221456149232), KQU( 6329608889367858784), KQU( 9419654354945132725), KQU( 6094576547061672379), KQU(17706217251847450255), KQU( 1733495073065878126), KQU(16918923754607552663), KQU( 8881949849954945044), KQU(12938977706896313891), 
KQU(14043628638299793407), KQU(18393874581723718233), KQU( 6886318534846892044), KQU(14577870878038334081), KQU(13541558383439414119), KQU(13570472158807588273), KQU(18300760537910283361), KQU( 818368572800609205), KQU( 1417000585112573219), KQU(12337533143867683655), KQU(12433180994702314480), KQU( 778190005829189083), KQU(13667356216206524711), KQU( 9866149895295225230), KQU(11043240490417111999), KQU( 1123933826541378598), KQU( 6469631933605123610), KQU(14508554074431980040), KQU(13918931242962026714), KQU( 2870785929342348285), KQU(14786362626740736974), KQU(13176680060902695786), KQU( 9591778613541679456), KQU( 9097662885117436706), KQU( 749262234240924947), KQU( 1944844067793307093), KQU( 4339214904577487742), KQU( 8009584152961946551), KQU(16073159501225501777), KQU( 3335870590499306217), KQU(17088312653151202847), KQU( 3108893142681931848), KQU(16636841767202792021), KQU(10423316431118400637), KQU( 8008357368674443506), KQU(11340015231914677875), KQU(17687896501594936090), KQU(15173627921763199958), KQU( 542569482243721959), KQU(15071714982769812975), KQU( 4466624872151386956), KQU( 1901780715602332461), KQU( 9822227742154351098), KQU( 1479332892928648780), KQU( 6981611948382474400), KQU( 7620824924456077376), KQU(14095973329429406782), KQU( 7902744005696185404), KQU(15830577219375036920), KQU(10287076667317764416), KQU(12334872764071724025), KQU( 4419302088133544331), KQU(14455842851266090520), KQU(12488077416504654222), KQU( 7953892017701886766), KQU( 6331484925529519007), KQU( 4902145853785030022), KQU(17010159216096443073), KQU(11945354668653886087), KQU(15112022728645230829), KQU(17363484484522986742), KQU( 4423497825896692887), KQU( 8155489510809067471), KQU( 258966605622576285), KQU( 5462958075742020534), KQU( 6763710214913276228), KQU( 2368935183451109054), KQU(14209506165246453811), KQU( 2646257040978514881), KQU( 3776001911922207672), KQU( 1419304601390147631), KQU(14987366598022458284), KQU( 3977770701065815721), KQU( 730820417451838898), KQU( 3982991703612885327), KQU( 2803544519671388477), KQU(17067667221114424649), KQU( 2922555119737867166), KQU( 1989477584121460932), KQU(15020387605892337354), KQU( 9293277796427533547), KQU(10722181424063557247), KQU(16704542332047511651), KQU( 5008286236142089514), KQU(16174732308747382540), KQU(17597019485798338402), KQU(13081745199110622093), KQU( 8850305883842258115), KQU(12723629125624589005), KQU( 8140566453402805978), KQU(15356684607680935061), KQU(14222190387342648650), KQU(11134610460665975178), KQU( 1259799058620984266), KQU(13281656268025610041), KQU( 298262561068153992), KQU(12277871700239212922), KQU(13911297774719779438), KQU(16556727962761474934), KQU(17903010316654728010), KQU( 9682617699648434744), KQU(14757681836838592850), KQU( 1327242446558524473), KQU(11126645098780572792), KQU( 1883602329313221774), KQU( 2543897783922776873), KQU(15029168513767772842), KQU(12710270651039129878), KQU(16118202956069604504), KQU(15010759372168680524), KQU( 2296827082251923948), KQU(10793729742623518101), KQU(13829764151845413046), KQU(17769301223184451213), KQU( 3118268169210783372), KQU(17626204544105123127), KQU( 7416718488974352644), KQU(10450751996212925994), KQU( 9352529519128770586), KQU( 259347569641110140), KQU( 8048588892269692697), KQU( 1774414152306494058), KQU(10669548347214355622), KQU(13061992253816795081), KQU(18432677803063861659), KQU( 8879191055593984333), KQU(12433753195199268041), KQU(14919392415439730602), KQU( 6612848378595332963), KQU( 6320986812036143628), KQU(10465592420226092859), KQU( 
4196009278962570808), KQU( 3747816564473572224), KQU(17941203486133732898), KQU( 2350310037040505198), KQU( 5811779859134370113), KQU(10492109599506195126), KQU( 7699650690179541274), KQU( 1954338494306022961), KQU(14095816969027231152), KQU( 5841346919964852061), KQU(14945969510148214735), KQU( 3680200305887550992), KQU( 6218047466131695792), KQU( 8242165745175775096), KQU(11021371934053307357), KQU( 1265099502753169797), KQU( 4644347436111321718), KQU( 3609296916782832859), KQU( 8109807992218521571), KQU(18387884215648662020), KQU(14656324896296392902), KQU(17386819091238216751), KQU(17788300878582317152), KQU( 7919446259742399591), KQU( 4466613134576358004), KQU(12928181023667938509), KQU(13147446154454932030), KQU(16552129038252734620), KQU( 8395299403738822450), KQU(11313817655275361164), KQU( 434258809499511718), KQU( 2074882104954788676), KQU( 7929892178759395518), KQU( 9006461629105745388), KQU( 5176475650000323086), KQU(11128357033468341069), KQU(12026158851559118955), KQU(14699716249471156500), KQU( 448982497120206757), KQU( 4156475356685519900), KQU( 6063816103417215727), KQU(10073289387954971479), KQU( 8174466846138590962), KQU( 2675777452363449006), KQU( 9090685420572474281), KQU( 6659652652765562060), KQU(12923120304018106621), KQU(11117480560334526775), KQU( 937910473424587511), KQU( 1838692113502346645), KQU(11133914074648726180), KQU( 7922600945143884053), KQU(13435287702700959550), KQU( 5287964921251123332), KQU(11354875374575318947), KQU(17955724760748238133), KQU(13728617396297106512), KQU( 4107449660118101255), KQU( 1210269794886589623), KQU(11408687205733456282), KQU( 4538354710392677887), KQU(13566803319341319267), KQU(17870798107734050771), KQU( 3354318982568089135), KQU( 9034450839405133651), KQU(13087431795753424314), KQU( 950333102820688239), KQU( 1968360654535604116), KQU(16840551645563314995), KQU( 8867501803892924995), KQU(11395388644490626845), KQU( 1529815836300732204), KQU(13330848522996608842), KQU( 1813432878817504265), KQU( 2336867432693429560), KQU(15192805445973385902), KQU( 2528593071076407877), KQU( 128459777936689248), KQU( 9976345382867214866), KQU( 6208885766767996043), KQU(14982349522273141706), KQU( 3099654362410737822), KQU(13776700761947297661), KQU( 8806185470684925550), KQU( 8151717890410585321), KQU( 640860591588072925), KQU(14592096303937307465), KQU( 9056472419613564846), KQU(14861544647742266352), KQU(12703771500398470216), KQU( 3142372800384138465), KQU( 6201105606917248196), KQU(18337516409359270184), KQU(15042268695665115339), KQU(15188246541383283846), KQU(12800028693090114519), KQU( 5992859621101493472), KQU(18278043971816803521), KQU( 9002773075219424560), KQU( 7325707116943598353), KQU( 7930571931248040822), KQU( 5645275869617023448), KQU( 7266107455295958487), KQU( 4363664528273524411), KQU(14313875763787479809), KQU(17059695613553486802), KQU( 9247761425889940932), KQU(13704726459237593128), KQU( 2701312427328909832), KQU(17235532008287243115), KQU(14093147761491729538), KQU( 6247352273768386516), KQU( 8268710048153268415), KQU( 7985295214477182083), KQU(15624495190888896807), KQU( 3772753430045262788), KQU( 9133991620474991698), KQU( 5665791943316256028), KQU( 7551996832462193473), KQU(13163729206798953877), KQU( 9263532074153846374), KQU( 1015460703698618353), KQU(17929874696989519390), KQU(18257884721466153847), KQU(16271867543011222991), KQU( 3905971519021791941), KQU(16814488397137052085), KQU( 1321197685504621613), KQU( 2870359191894002181), KQU(14317282970323395450), KQU(13663920845511074366), KQU( 2052463995796539594), 
KQU(14126345686431444337), KQU( 1727572121947022534), KQU(17793552254485594241), KQU( 6738857418849205750), KQU( 1282987123157442952), KQU(16655480021581159251), KQU( 6784587032080183866), KQU(14726758805359965162), KQU( 7577995933961987349), KQU(12539609320311114036), KQU(10789773033385439494), KQU( 8517001497411158227), KQU(10075543932136339710), KQU(14838152340938811081), KQU( 9560840631794044194), KQU(17445736541454117475), KQU(10633026464336393186), KQU(15705729708242246293), KQU( 1117517596891411098), KQU( 4305657943415886942), KQU( 4948856840533979263), KQU(16071681989041789593), KQU(13723031429272486527), KQU( 7639567622306509462), KQU(12670424537483090390), KQU( 9715223453097197134), KQU( 5457173389992686394), KQU( 289857129276135145), KQU(17048610270521972512), KQU( 692768013309835485), KQU(14823232360546632057), KQU(18218002361317895936), KQU( 3281724260212650204), KQU(16453957266549513795), KQU( 8592711109774511881), KQU( 929825123473369579), KQU(15966784769764367791), KQU( 9627344291450607588), KQU(10849555504977813287), KQU( 9234566913936339275), KQU( 6413807690366911210), KQU(10862389016184219267), KQU(13842504799335374048), KQU( 1531994113376881174), KQU( 2081314867544364459), KQU(16430628791616959932), KQU( 8314714038654394368), KQU( 9155473892098431813), KQU(12577843786670475704), KQU( 4399161106452401017), KQU( 1668083091682623186), KQU( 1741383777203714216), KQU( 2162597285417794374), KQU(15841980159165218736), KQU( 1971354603551467079), KQU( 1206714764913205968), KQU( 4790860439591272330), KQU(14699375615594055799), KQU( 8374423871657449988), KQU(10950685736472937738), KQU( 697344331343267176), KQU(10084998763118059810), KQU(12897369539795983124), KQU(12351260292144383605), KQU( 1268810970176811234), KQU( 7406287800414582768), KQU( 516169557043807831), KQU( 5077568278710520380), KQU( 3828791738309039304), KQU( 7721974069946943610), KQU( 3534670260981096460), KQU( 4865792189600584891), KQU(16892578493734337298), KQU( 9161499464278042590), KQU(11976149624067055931), KQU(13219479887277343990), KQU(14161556738111500680), KQU(14670715255011223056), KQU( 4671205678403576558), KQU(12633022931454259781), KQU(14821376219869187646), KQU( 751181776484317028), KQU( 2192211308839047070), KQU(11787306362361245189), KQU(10672375120744095707), KQU( 4601972328345244467), KQU(15457217788831125879), KQU( 8464345256775460809), KQU(10191938789487159478), KQU( 6184348739615197613), KQU(11425436778806882100), KQU( 2739227089124319793), KQU( 461464518456000551), KQU( 4689850170029177442), KQU( 6120307814374078625), KQU(11153579230681708671), KQU( 7891721473905347926), KQU(10281646937824872400), KQU( 3026099648191332248), KQU( 8666750296953273818), KQU(14978499698844363232), KQU(13303395102890132065), KQU( 8182358205292864080), KQU(10560547713972971291), KQU(11981635489418959093), KQU( 3134621354935288409), KQU(11580681977404383968), KQU(14205530317404088650), KQU( 5997789011854923157), KQU(13659151593432238041), KQU(11664332114338865086), KQU( 7490351383220929386), KQU( 7189290499881530378), KQU(15039262734271020220), KQU( 2057217285976980055), KQU( 555570804905355739), KQU(11235311968348555110), KQU(13824557146269603217), KQU(16906788840653099693), KQU( 7222878245455661677), KQU( 5245139444332423756), KQU( 4723748462805674292), KQU(12216509815698568612), KQU(17402362976648951187), KQU(17389614836810366768), KQU( 4880936484146667711), KQU( 9085007839292639880), KQU(13837353458498535449), KQU(11914419854360366677), KQU(16595890135313864103), KQU( 6313969847197627222), 
KQU(18296909792163910431), KQU(10041780113382084042), KQU( 2499478551172884794), KQU(11057894246241189489), KQU( 9742243032389068555), KQU(12838934582673196228), KQU(13437023235248490367), KQU(13372420669446163240), KQU( 6752564244716909224), KQU( 7157333073400313737), KQU(12230281516370654308), KQU( 1182884552219419117), KQU( 2955125381312499218), KQU(10308827097079443249), KQU( 1337648572986534958), KQU(16378788590020343939), KQU( 108619126514420935), KQU( 3990981009621629188), KQU( 5460953070230946410), KQU( 9703328329366531883), KQU(13166631489188077236), KQU( 1104768831213675170), KQU( 3447930458553877908), KQU( 8067172487769945676), KQU( 5445802098190775347), KQU( 3244840981648973873), KQU(17314668322981950060), KQU( 5006812527827763807), KQU(18158695070225526260), KQU( 2824536478852417853), KQU(13974775809127519886), KQU( 9814362769074067392), KQU(17276205156374862128), KQU(11361680725379306967), KQU( 3422581970382012542), KQU(11003189603753241266), KQU(11194292945277862261), KQU( 6839623313908521348), KQU(11935326462707324634), KQU( 1611456788685878444), KQU(13112620989475558907), KQU( 517659108904450427), KQU(13558114318574407624), KQU(15699089742731633077), KQU( 4988979278862685458), KQU( 8111373583056521297), KQU( 3891258746615399627), KQU( 8137298251469718086), KQU(12748663295624701649), KQU( 4389835683495292062), KQU( 5775217872128831729), KQU( 9462091896405534927), KQU( 8498124108820263989), KQU( 8059131278842839525), KQU(10503167994254090892), KQU(11613153541070396656), KQU(18069248738504647790), KQU( 570657419109768508), KQU( 3950574167771159665), KQU( 5514655599604313077), KQU( 2908460854428484165), KQU(10777722615935663114), KQU(12007363304839279486), KQU( 9800646187569484767), KQU( 8795423564889864287), KQU(14257396680131028419), KQU( 6405465117315096498), KQU( 7939411072208774878), KQU(17577572378528990006), KQU(14785873806715994850), KQU(16770572680854747390), KQU(18127549474419396481), KQU(11637013449455757750), KQU(14371851933996761086), KQU( 3601181063650110280), KQU( 4126442845019316144), KQU(10198287239244320669), KQU(18000169628555379659), KQU(18392482400739978269), KQU( 6219919037686919957), KQU( 3610085377719446052), KQU( 2513925039981776336), KQU(16679413537926716955), KQU(12903302131714909434), KQU( 5581145789762985009), KQU(12325955044293303233), KQU(17216111180742141204), KQU( 6321919595276545740), KQU( 3507521147216174501), KQU( 9659194593319481840), KQU(11473976005975358326), KQU(14742730101435987026), KQU( 492845897709954780), KQU(16976371186162599676), KQU(17712703422837648655), KQU( 9881254778587061697), KQU( 8413223156302299551), KQU( 1563841828254089168), KQU( 9996032758786671975), KQU( 138877700583772667), KQU(13003043368574995989), KQU( 4390573668650456587), KQU( 8610287390568126755), KQU(15126904974266642199), KQU( 6703637238986057662), KQU( 2873075592956810157), KQU( 6035080933946049418), KQU(13382846581202353014), KQU( 7303971031814642463), KQU(18418024405307444267), KQU( 5847096731675404647), KQU( 4035880699639842500), KQU(11525348625112218478), KQU( 3041162365459574102), KQU( 2604734487727986558), KQU(15526341771636983145), KQU(14556052310697370254), KQU(12997787077930808155), KQU( 9601806501755554499), KQU(11349677952521423389), KQU(14956777807644899350), KQU(16559736957742852721), KQU(12360828274778140726), KQU( 6685373272009662513), KQU(16932258748055324130), KQU(15918051131954158508), KQU( 1692312913140790144), KQU( 546653826801637367), KQU( 5341587076045986652), KQU(14975057236342585662), KQU(12374976357340622412), 
KQU(10328833995181940552), KQU(12831807101710443149), KQU(10548514914382545716), KQU( 2217806727199715993), KQU(12627067369242845138), KQU( 4598965364035438158), KQU( 150923352751318171), KQU(14274109544442257283), KQU( 4696661475093863031), KQU( 1505764114384654516), KQU(10699185831891495147), KQU( 2392353847713620519), KQU( 3652870166711788383), KQU( 8640653276221911108), KQU( 3894077592275889704), KQU( 4918592872135964845), KQU(16379121273281400789), KQU(12058465483591683656), KQU(11250106829302924945), KQU( 1147537556296983005), KQU( 6376342756004613268), KQU(14967128191709280506), KQU(18007449949790627628), KQU( 9497178279316537841), KQU( 7920174844809394893), KQU(10037752595255719907), KQU(15875342784985217697), KQU(15311615921712850696), KQU( 9552902652110992950), KQU(14054979450099721140), KQU( 5998709773566417349), KQU(18027910339276320187), KQU( 8223099053868585554), KQU( 7842270354824999767), KQU( 4896315688770080292), KQU(12969320296569787895), KQU( 2674321489185759961), KQU( 4053615936864718439), KQU(11349775270588617578), KQU( 4743019256284553975), KQU( 5602100217469723769), KQU(14398995691411527813), KQU( 7412170493796825470), KQU( 836262406131744846), KQU( 8231086633845153022), KQU( 5161377920438552287), KQU( 8828731196169924949), KQU(16211142246465502680), KQU( 3307990879253687818), KQU( 5193405406899782022), KQU( 8510842117467566693), KQU( 6070955181022405365), KQU(14482950231361409799), KQU(12585159371331138077), KQU( 3511537678933588148), KQU( 2041849474531116417), KQU(10944936685095345792), KQU(18303116923079107729), KQU( 2720566371239725320), KQU( 4958672473562397622), KQU( 3032326668253243412), KQU(13689418691726908338), KQU( 1895205511728843996), KQU( 8146303515271990527), KQU(16507343500056113480), KQU( 473996939105902919), KQU( 9897686885246881481), KQU(14606433762712790575), KQU( 6732796251605566368), KQU( 1399778120855368916), KQU( 935023885182833777), KQU(16066282816186753477), KQU( 7291270991820612055), KQU(17530230393129853844), KQU(10223493623477451366), KQU(15841725630495676683), KQU(17379567246435515824), KQU( 8588251429375561971), KQU(18339511210887206423), KQU(17349587430725976100), KQU(12244876521394838088), KQU( 6382187714147161259), KQU(12335807181848950831), KQU(16948885622305460665), KQU(13755097796371520506), KQU(14806740373324947801), KQU( 4828699633859287703), KQU( 8209879281452301604), KQU(12435716669553736437), KQU(13970976859588452131), KQU( 6233960842566773148), KQU(12507096267900505759), KQU( 1198713114381279421), KQU(14989862731124149015), KQU(15932189508707978949), KQU( 2526406641432708722), KQU( 29187427817271982), KQU( 1499802773054556353), KQU(10816638187021897173), KQU( 5436139270839738132), KQU( 6659882287036010082), KQU( 2154048955317173697), KQU(10887317019333757642), KQU(16281091802634424955), KQU(10754549879915384901), KQU(10760611745769249815), KQU( 2161505946972504002), KQU( 5243132808986265107), KQU(10129852179873415416), KQU( 710339480008649081), KQU( 7802129453068808528), KQU(17967213567178907213), KQU(15730859124668605599), KQU(13058356168962376502), KQU( 3701224985413645909), KQU(14464065869149109264), KQU( 9959272418844311646), KQU(10157426099515958752), KQU(14013736814538268528), KQU(17797456992065653951), KQU(17418878140257344806), KQU(15457429073540561521), KQU( 2184426881360949378), KQU( 2062193041154712416), KQU( 8553463347406931661), KQU( 4913057625202871854), KQU( 2668943682126618425), KQU(17064444737891172288), KQU( 4997115903913298637), KQU(12019402608892327416), KQU(17603584559765897352), 
KQU(11367529582073647975), KQU( 8211476043518436050), KQU( 8676849804070323674), KQU(18431829230394475730), KQU(10490177861361247904), KQU( 9508720602025651349), KQU( 7409627448555722700), KQU( 5804047018862729008), KQU(11943858176893142594), KQU(11908095418933847092), KQU( 5415449345715887652), KQU( 1554022699166156407), KQU( 9073322106406017161), KQU( 7080630967969047082), KQU(18049736940860732943), KQU(12748714242594196794), KQU( 1226992415735156741), KQU(17900981019609531193), KQU(11720739744008710999), KQU( 3006400683394775434), KQU(11347974011751996028), KQU( 3316999628257954608), KQU( 8384484563557639101), KQU(18117794685961729767), KQU( 1900145025596618194), KQU(17459527840632892676), KQU( 5634784101865710994), KQU( 7918619300292897158), KQU( 3146577625026301350), KQU( 9955212856499068767), KQU( 1873995843681746975), KQU( 1561487759967972194), KQU( 8322718804375878474), KQU(11300284215327028366), KQU( 4667391032508998982), KQU( 9820104494306625580), KQU(17922397968599970610), KQU( 1784690461886786712), KQU(14940365084341346821), KQU( 5348719575594186181), KQU(10720419084507855261), KQU(14210394354145143274), KQU( 2426468692164000131), KQU(16271062114607059202), KQU(14851904092357070247), KQU( 6524493015693121897), KQU( 9825473835127138531), KQU(14222500616268569578), KQU(15521484052007487468), KQU(14462579404124614699), KQU(11012375590820665520), KQU(11625327350536084927), KQU(14452017765243785417), KQU( 9989342263518766305), KQU( 3640105471101803790), KQU( 4749866455897513242), KQU(13963064946736312044), KQU(10007416591973223791), KQU(18314132234717431115), KQU( 3286596588617483450), KQU( 7726163455370818765), KQU( 7575454721115379328), KQU( 5308331576437663422), KQU(18288821894903530934), KQU( 8028405805410554106), KQU(15744019832103296628), KQU( 149765559630932100), KQU( 6137705557200071977), KQU(14513416315434803615), KQU(11665702820128984473), KQU( 218926670505601386), KQU( 6868675028717769519), KQU(15282016569441512302), KQU( 5707000497782960236), KQU( 6671120586555079567), KQU( 2194098052618985448), KQU(16849577895477330978), KQU(12957148471017466283), KQU( 1997805535404859393), KQU( 1180721060263860490), KQU(13206391310193756958), KQU(12980208674461861797), KQU( 3825967775058875366), KQU(17543433670782042631), KQU( 1518339070120322730), KQU(16344584340890991669), KQU( 2611327165318529819), KQU(11265022723283422529), KQU( 4001552800373196817), KQU(14509595890079346161), KQU( 3528717165416234562), KQU(18153222571501914072), KQU( 9387182977209744425), KQU(10064342315985580021), KQU(11373678413215253977), KQU( 2308457853228798099), KQU( 9729042942839545302), KQU( 7833785471140127746), KQU( 6351049900319844436), KQU(14454610627133496067), KQU(12533175683634819111), KQU(15570163926716513029), KQU(13356980519185762498) }; TEST_BEGIN(test_gen_rand_32) { uint32_t array32[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16)); uint32_t array32_2[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16)); int i; uint32_t r32; sfmt_t *ctx; assert_d_le(get_min_array_size32(), BLOCK_SIZE, "Array size too small"); ctx = init_gen_rand(1234); fill_array32(ctx, array32, BLOCK_SIZE); fill_array32(ctx, array32_2, BLOCK_SIZE); fini_gen_rand(ctx); ctx = init_gen_rand(1234); for (i = 0; i < BLOCK_SIZE; i++) { if (i < COUNT_1) { assert_u32_eq(array32[i], init_gen_rand_32_expected[i], "Output mismatch for i=%d", i); } r32 = gen_rand32(ctx); assert_u32_eq(r32, array32[i], "Mismatch at array32[%d]=%x, gen=%x", i, array32[i], r32); } for (i = 0; i < COUNT_2; i++) { r32 = gen_rand32(ctx); assert_u32_eq(r32, array32_2[i], "Mismatch at 
array32_2[%d]=%x, gen=%x", i, array32_2[i], r32); } fini_gen_rand(ctx); } TEST_END TEST_BEGIN(test_by_array_32) { uint32_t array32[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16)); uint32_t array32_2[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16)); int i; uint32_t ini[4] = {0x1234, 0x5678, 0x9abc, 0xdef0}; uint32_t r32; sfmt_t *ctx; assert_d_le(get_min_array_size32(), BLOCK_SIZE, "Array size too small"); ctx = init_by_array(ini, 4); fill_array32(ctx, array32, BLOCK_SIZE); fill_array32(ctx, array32_2, BLOCK_SIZE); fini_gen_rand(ctx); ctx = init_by_array(ini, 4); for (i = 0; i < BLOCK_SIZE; i++) { if (i < COUNT_1) { assert_u32_eq(array32[i], init_by_array_32_expected[i], "Output mismatch for i=%d", i); } r32 = gen_rand32(ctx); assert_u32_eq(r32, array32[i], "Mismatch at array32[%d]=%x, gen=%x", i, array32[i], r32); } for (i = 0; i < COUNT_2; i++) { r32 = gen_rand32(ctx); assert_u32_eq(r32, array32_2[i], "Mismatch at array32_2[%d]=%x, gen=%x", i, array32_2[i], r32); } fini_gen_rand(ctx); } TEST_END TEST_BEGIN(test_gen_rand_64) { uint64_t array64[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16)); uint64_t array64_2[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16)); int i; uint64_t r; sfmt_t *ctx; assert_d_le(get_min_array_size64(), BLOCK_SIZE64, "Array size too small"); ctx = init_gen_rand(4321); fill_array64(ctx, array64, BLOCK_SIZE64); fill_array64(ctx, array64_2, BLOCK_SIZE64); fini_gen_rand(ctx); ctx = init_gen_rand(4321); for (i = 0; i < BLOCK_SIZE64; i++) { if (i < COUNT_1) { assert_u64_eq(array64[i], init_gen_rand_64_expected[i], "Output mismatch for i=%d", i); } r = gen_rand64(ctx); assert_u64_eq(r, array64[i], "Mismatch at array64[%d]=%"PRIx64", gen=%"PRIx64, i, array64[i], r); } for (i = 0; i < COUNT_2; i++) { r = gen_rand64(ctx); assert_u64_eq(r, array64_2[i], "Mismatch at array64_2[%d]=%"PRIx64" gen=%"PRIx64"", i, array64_2[i], r); } fini_gen_rand(ctx); } TEST_END TEST_BEGIN(test_by_array_64) { uint64_t array64[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16)); uint64_t array64_2[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16)); int i; uint64_t r; uint32_t ini[] = {5, 4, 3, 2, 1}; sfmt_t *ctx; assert_d_le(get_min_array_size64(), BLOCK_SIZE64, "Array size too small"); ctx = init_by_array(ini, 5); fill_array64(ctx, array64, BLOCK_SIZE64); fill_array64(ctx, array64_2, BLOCK_SIZE64); fini_gen_rand(ctx); ctx = init_by_array(ini, 5); for (i = 0; i < BLOCK_SIZE64; i++) { if (i < COUNT_1) { assert_u64_eq(array64[i], init_by_array_64_expected[i], "Output mismatch for i=%d", i); } r = gen_rand64(ctx); assert_u64_eq(r, array64[i], "Mismatch at array64[%d]=%"PRIx64" gen=%"PRIx64, i, array64[i], r); } for (i = 0; i < COUNT_2; i++) { r = gen_rand64(ctx); assert_u64_eq(r, array64_2[i], "Mismatch at array64_2[%d]=%"PRIx64" gen=%"PRIx64, i, array64_2[i], r); } fini_gen_rand(ctx); } TEST_END int main(void) { return (test( test_gen_rand_32, test_by_array_32, test_gen_rand_64, test_by_array_64)); } vmem-1.8/src/jemalloc/test/unit/bitmap.c000066400000000000000000000070361361505074100202710ustar00rootroot00000000000000#include "test/jemalloc_test.h" #if (LG_BITMAP_MAXBITS > 12) # define MAXBITS 4500 #else # define MAXBITS (1U << LG_BITMAP_MAXBITS) #endif TEST_BEGIN(test_bitmap_size) { size_t i, prev_size; prev_size = 0; for (i = 1; i <= MAXBITS; i++) { size_t size = bitmap_size(i); assert_true(size >= prev_size, "Bitmap size is smaller than expected"); prev_size = size; } } TEST_END TEST_BEGIN(test_bitmap_init) { size_t i; for (i = 1; i <= MAXBITS; i++) { bitmap_info_t binfo; bitmap_info_init(&binfo, i); { size_t j; bitmap_t *bitmap = 
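/* One bitmap_t group per group reported by bitmap_info_ngroups(); after bitmap_init() the loop below expects every bit to read back as unset. */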
malloc(sizeof(bitmap_t) * bitmap_info_ngroups(&binfo)); bitmap_init(bitmap, &binfo); for (j = 0; j < i; j++) { assert_false(bitmap_get(bitmap, &binfo, j), "Bit should be unset"); } free(bitmap); } } } TEST_END TEST_BEGIN(test_bitmap_set) { size_t i; for (i = 1; i <= MAXBITS; i++) { bitmap_info_t binfo; bitmap_info_init(&binfo, i); { size_t j; bitmap_t *bitmap = malloc(sizeof(bitmap_t) * bitmap_info_ngroups(&binfo)); bitmap_init(bitmap, &binfo); for (j = 0; j < i; j++) bitmap_set(bitmap, &binfo, j); assert_true(bitmap_full(bitmap, &binfo), "All bits should be set"); free(bitmap); } } } TEST_END TEST_BEGIN(test_bitmap_unset) { size_t i; for (i = 1; i <= MAXBITS; i++) { bitmap_info_t binfo; bitmap_info_init(&binfo, i); { size_t j; bitmap_t *bitmap = malloc(sizeof(bitmap_t) * bitmap_info_ngroups(&binfo)); bitmap_init(bitmap, &binfo); for (j = 0; j < i; j++) bitmap_set(bitmap, &binfo, j); assert_true(bitmap_full(bitmap, &binfo), "All bits should be set"); for (j = 0; j < i; j++) bitmap_unset(bitmap, &binfo, j); for (j = 0; j < i; j++) bitmap_set(bitmap, &binfo, j); assert_true(bitmap_full(bitmap, &binfo), "All bits should be set"); free(bitmap); } } } TEST_END TEST_BEGIN(test_bitmap_sfu) { size_t i; for (i = 1; i <= MAXBITS; i++) { bitmap_info_t binfo; bitmap_info_init(&binfo, i); { ssize_t j; bitmap_t *bitmap = malloc(sizeof(bitmap_t) * bitmap_info_ngroups(&binfo)); bitmap_init(bitmap, &binfo); /* Iteratively set bits starting at the beginning. */ for (j = 0; j < i; j++) { assert_zd_eq(bitmap_sfu(bitmap, &binfo), j, "First unset bit should be just after " "previous first unset bit"); } assert_true(bitmap_full(bitmap, &binfo), "All bits should be set"); /* * Iteratively unset bits starting at the end, and * verify that bitmap_sfu() reaches the unset bits. */ for (j = i - 1; j >= 0; j--) { bitmap_unset(bitmap, &binfo, j); assert_zd_eq(bitmap_sfu(bitmap, &binfo), j, "First unset bit should the bit previously " "unset"); bitmap_unset(bitmap, &binfo, j); } assert_false(bitmap_get(bitmap, &binfo, 0), "Bit should be unset"); /* * Iteratively set bits starting at the beginning, and * verify that bitmap_sfu() looks past them. */ for (j = 1; j < i; j++) { bitmap_set(bitmap, &binfo, j - 1); assert_zd_eq(bitmap_sfu(bitmap, &binfo), j, "First unset bit should be just after the " "bit previously set"); bitmap_unset(bitmap, &binfo, j); } assert_zd_eq(bitmap_sfu(bitmap, &binfo), i - 1, "First unset bit should be the last bit"); assert_true(bitmap_full(bitmap, &binfo), "All bits should be set"); free(bitmap); } } } TEST_END int main(void) { return (test( test_bitmap_size, test_bitmap_init, test_bitmap_set, test_bitmap_unset, test_bitmap_sfu)); } vmem-1.8/src/jemalloc/test/unit/ckh.c000066400000000000000000000122651361505074100175620ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_new_delete) { ckh_t ckh; assert_false(ckh_new(&ckh, 2, ckh_string_hash, ckh_string_keycomp), "Unexpected ckh_new() error"); ckh_delete(&ckh); assert_false(ckh_new(&ckh, 3, ckh_pointer_hash, ckh_pointer_keycomp), "Unexpected ckh_new() error"); ckh_delete(&ckh); } TEST_END TEST_BEGIN(test_count_insert_search_remove) { ckh_t ckh; const char *strs[] = { "a string", "A string", "a string.", "A string." }; const char *missing = "A string not in the hash table."; size_t i; assert_false(ckh_new(&ckh, 2, ckh_string_hash, ckh_string_keycomp), "Unexpected ckh_new() error"); assert_zu_eq(ckh_count(&ckh), 0, "ckh_count() should return %zu, but it returned %zu", ZU(0), ckh_count(&ckh)); /* Insert. 
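 * Each string key is inserted with itself as the value, and
 * ckh_count() is checked after every ckh_insert() so the table
 * grows by exactly one entry per insertion.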
*/ for (i = 0; i < sizeof(strs)/sizeof(const char *); i++) { ckh_insert(&ckh, strs[i], strs[i]); assert_zu_eq(ckh_count(&ckh), i+1, "ckh_count() should return %zu, but it returned %zu", i+1, ckh_count(&ckh)); } /* Search. */ for (i = 0; i < sizeof(strs)/sizeof(const char *); i++) { union { void *p; const char *s; } k, v; void **kp, **vp; const char *ks, *vs; kp = (i & 1) ? &k.p : NULL; vp = (i & 2) ? &v.p : NULL; k.p = NULL; v.p = NULL; assert_false(ckh_search(&ckh, strs[i], kp, vp), "Unexpected ckh_search() error"); ks = (i & 1) ? strs[i] : (const char *)NULL; vs = (i & 2) ? strs[i] : (const char *)NULL; assert_ptr_eq((void *)ks, (void *)k.s, "Key mismatch, i=%zu", i); assert_ptr_eq((void *)vs, (void *)v.s, "Value mismatch, i=%zu", i); } assert_true(ckh_search(&ckh, missing, NULL, NULL), "Unexpected ckh_search() success"); /* Remove. */ for (i = 0; i < sizeof(strs)/sizeof(const char *); i++) { union { void *p; const char *s; } k, v; void **kp, **vp; const char *ks, *vs; kp = (i & 1) ? &k.p : NULL; vp = (i & 2) ? &v.p : NULL; k.p = NULL; v.p = NULL; assert_false(ckh_remove(&ckh, strs[i], kp, vp), "Unexpected ckh_remove() error"); ks = (i & 1) ? strs[i] : (const char *)NULL; vs = (i & 2) ? strs[i] : (const char *)NULL; assert_ptr_eq((void *)ks, (void *)k.s, "Key mismatch, i=%zu", i); assert_ptr_eq((void *)vs, (void *)v.s, "Value mismatch, i=%zu", i); assert_zu_eq(ckh_count(&ckh), sizeof(strs)/sizeof(const char *) - i - 1, "ckh_count() should return %zu, but it returned %zu", sizeof(strs)/sizeof(const char *) - i - 1, ckh_count(&ckh)); } ckh_delete(&ckh); } TEST_END TEST_BEGIN(test_insert_iter_remove) { #define NITEMS ZU(1000) ckh_t ckh; void **p[NITEMS]; void *q, *r; size_t i; assert_false(ckh_new(&ckh, 2, ckh_pointer_hash, ckh_pointer_keycomp), "Unexpected ckh_new() error"); for (i = 0; i < NITEMS; i++) { p[i] = mallocx(i+1, 0); assert_ptr_not_null(p[i], "Unexpected mallocx() failure"); } for (i = 0; i < NITEMS; i++) { size_t j; for (j = i; j < NITEMS; j++) { assert_false(ckh_insert(&ckh, p[j], p[j]), "Unexpected ckh_insert() failure"); assert_false(ckh_search(&ckh, p[j], &q, &r), "Unexpected ckh_search() failure"); assert_ptr_eq(p[j], q, "Key pointer mismatch"); assert_ptr_eq(p[j], r, "Value pointer mismatch"); } assert_zu_eq(ckh_count(&ckh), NITEMS, "ckh_count() should return %zu, but it returned %zu", NITEMS, ckh_count(&ckh)); for (j = i + 1; j < NITEMS; j++) { assert_false(ckh_search(&ckh, p[j], NULL, NULL), "Unexpected ckh_search() failure"); assert_false(ckh_remove(&ckh, p[j], &q, &r), "Unexpected ckh_remove() failure"); assert_ptr_eq(p[j], q, "Key pointer mismatch"); assert_ptr_eq(p[j], r, "Value pointer mismatch"); assert_true(ckh_search(&ckh, p[j], NULL, NULL), "Unexpected ckh_search() success"); assert_true(ckh_remove(&ckh, p[j], &q, &r), "Unexpected ckh_remove() success"); } { bool seen[NITEMS]; size_t tabind; memset(seen, 0, sizeof(seen)); for (tabind = 0; ckh_iter(&ckh, &tabind, &q, &r) == false;) { size_t k; assert_ptr_eq(q, r, "Key and val not equal"); for (k = 0; k < NITEMS; k++) { if (p[k] == q) { assert_false(seen[k], "Item %zu already seen", k); seen[k] = true; break; } } } for (j = 0; j < i + 1; j++) assert_true(seen[j], "Item %zu not seen", j); for (; j < NITEMS; j++) assert_false(seen[j], "Item %zu seen", j); } } for (i = 0; i < NITEMS; i++) { assert_false(ckh_search(&ckh, p[i], NULL, NULL), "Unexpected ckh_search() failure"); assert_false(ckh_remove(&ckh, p[i], &q, &r), "Unexpected ckh_remove() failure"); assert_ptr_eq(p[i], q, "Key pointer mismatch"); 
assert_ptr_eq(p[i], r, "Value pointer mismatch"); assert_true(ckh_search(&ckh, p[i], NULL, NULL), "Unexpected ckh_search() success"); assert_true(ckh_remove(&ckh, p[i], &q, &r), "Unexpected ckh_remove() success"); dallocx(p[i], 0); } assert_zu_eq(ckh_count(&ckh), 0, "ckh_count() should return %zu, but it returned %zu", ZU(0), ckh_count(&ckh)); ckh_delete(&ckh); #undef NITEMS } TEST_END int main(void) { return (test( test_new_delete, test_count_insert_search_remove, test_insert_iter_remove)); } vmem-1.8/src/jemalloc/test/unit/hash.c000066400000000000000000000112121361505074100177270ustar00rootroot00000000000000/* * This file is based on code that is part of SMHasher * (https://code.google.com/p/smhasher/), and is subject to the MIT license * (http://www.opensource.org/licenses/mit-license.php). Both email addresses * associated with the source code's revision history belong to Austin Appleby, * and the revision history ranges from 2010 to 2012. Therefore the copyright * and license are here taken to be: * * Copyright (c) 2010-2012 Austin Appleby * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. */ #include "test/jemalloc_test.h" typedef enum { hash_variant_x86_32, hash_variant_x86_128, hash_variant_x64_128 } hash_variant_t; static size_t hash_variant_bits(hash_variant_t variant) { switch (variant) { case hash_variant_x86_32: return (32); case hash_variant_x86_128: return (128); case hash_variant_x64_128: return (128); default: not_reached(); } } static const char * hash_variant_string(hash_variant_t variant) { switch (variant) { case hash_variant_x86_32: return ("hash_x86_32"); case hash_variant_x86_128: return ("hash_x86_128"); case hash_variant_x64_128: return ("hash_x64_128"); default: not_reached(); } } static void hash_variant_verify(hash_variant_t variant) { const size_t hashbytes = hash_variant_bits(variant) / 8; uint8_t key[256]; VARIABLE_ARRAY(uint8_t, hashes, hashbytes * 256); VARIABLE_ARRAY(uint8_t, final, hashbytes); unsigned i; uint32_t computed, expected; memset(key, 0, sizeof(key)); memset(hashes, 0, sizeof(hashes)); memset(final, 0, sizeof(final)); /* * Hash keys of the form {0}, {0,1}, {0,1,2}, ..., {0,1,...,255} as the * seed. 
*/ for (i = 0; i < 256; i++) { key[i] = (uint8_t)i; switch (variant) { case hash_variant_x86_32: { uint32_t out; out = hash_x86_32(key, i, 256-i); memcpy(&hashes[i*hashbytes], &out, hashbytes); break; } case hash_variant_x86_128: { uint64_t out[2]; hash_x86_128(key, i, 256-i, out); memcpy(&hashes[i*hashbytes], out, hashbytes); break; } case hash_variant_x64_128: { uint64_t out[2]; hash_x64_128(key, i, 256-i, out); memcpy(&hashes[i*hashbytes], out, hashbytes); break; } default: not_reached(); } } /* Hash the result array. */ switch (variant) { case hash_variant_x86_32: { uint32_t out = hash_x86_32(hashes, hashbytes*256, 0); memcpy(final, &out, sizeof(out)); break; } case hash_variant_x86_128: { uint64_t out[2]; hash_x86_128(hashes, hashbytes*256, 0, out); memcpy(final, out, sizeof(out)); break; } case hash_variant_x64_128: { uint64_t out[2]; hash_x64_128(hashes, hashbytes*256, 0, out); memcpy(final, out, sizeof(out)); break; } default: not_reached(); } computed = (final[0] << 0) | (final[1] << 8) | (final[2] << 16) | (final[3] << 24); switch (variant) { #ifdef JEMALLOC_BIG_ENDIAN case hash_variant_x86_32: expected = 0x6213303eU; break; case hash_variant_x86_128: expected = 0x266820caU; break; case hash_variant_x64_128: expected = 0xcc622b6fU; break; #else case hash_variant_x86_32: expected = 0xb0f57ee3U; break; case hash_variant_x86_128: expected = 0xb3ece62aU; break; case hash_variant_x64_128: expected = 0x6384ba69U; break; #endif default: not_reached(); } assert_u32_eq(computed, expected, "Hash mismatch for %s(): expected %#x but got %#x", hash_variant_string(variant), expected, computed); } TEST_BEGIN(test_hash_x86_32) { hash_variant_verify(hash_variant_x86_32); } TEST_END TEST_BEGIN(test_hash_x86_128) { hash_variant_verify(hash_variant_x86_128); } TEST_END TEST_BEGIN(test_hash_x64_128) { hash_variant_verify(hash_variant_x64_128); } TEST_END int main(void) { return (test( test_hash_x86_32, test_hash_x86_128, test_hash_x64_128)); } vmem-1.8/src/jemalloc/test/unit/junk.c000066400000000000000000000126451361505074100177660ustar00rootroot00000000000000#include "test/jemalloc_test.h" #ifdef JEMALLOC_FILL const char *malloc_conf = "abort:false,junk:true,zero:false,redzone:true,quarantine:0"; #endif static arena_dalloc_junk_small_t *arena_dalloc_junk_small_orig; static arena_dalloc_junk_large_t *arena_dalloc_junk_large_orig; static huge_dalloc_junk_t *huge_dalloc_junk_orig; static void *most_recently_junked; static void arena_dalloc_junk_small_intercept(void *ptr, arena_bin_info_t *bin_info) { size_t i; arena_dalloc_junk_small_orig(ptr, bin_info); for (i = 0; i < bin_info->reg_size; i++) { assert_c_eq(((char *)ptr)[i], 0x5a, "Missing junk fill for byte %zu/%zu of deallocated region", i, bin_info->reg_size); } most_recently_junked = ptr; } static void arena_dalloc_junk_large_intercept(void *ptr, size_t usize) { size_t i; arena_dalloc_junk_large_orig(ptr, usize); for (i = 0; i < usize; i++) { assert_c_eq(((char *)ptr)[i], 0x5a, "Missing junk fill for byte %zu/%zu of deallocated region", i, usize); } most_recently_junked = ptr; } static void huge_dalloc_junk_intercept(void *ptr, size_t usize) { huge_dalloc_junk_orig(ptr, usize); /* * The conditions under which junk filling actually occurs are nuanced * enough that it doesn't make sense to duplicate the decision logic in * test code, so don't actually check that the region is junk-filled. 
*/ most_recently_junked = ptr; } static void test_junk(size_t sz_min, size_t sz_max) { char *s; size_t sz_prev, sz, i; arena_dalloc_junk_small_orig = arena_dalloc_junk_small; arena_dalloc_junk_small = arena_dalloc_junk_small_intercept; arena_dalloc_junk_large_orig = arena_dalloc_junk_large; arena_dalloc_junk_large = arena_dalloc_junk_large_intercept; huge_dalloc_junk_orig = huge_dalloc_junk; huge_dalloc_junk = huge_dalloc_junk_intercept; sz_prev = 0; s = (char *)mallocx(sz_min, 0); assert_ptr_not_null((void *)s, "Unexpected mallocx() failure"); for (sz = sallocx(s, 0); sz <= sz_max; sz_prev = sz, sz = sallocx(s, 0)) { if (sz_prev > 0) { assert_c_eq(s[0], 'a', "Previously allocated byte %zu/%zu is corrupted", ZU(0), sz_prev); assert_c_eq(s[sz_prev-1], 'a', "Previously allocated byte %zu/%zu is corrupted", sz_prev-1, sz_prev); } for (i = sz_prev; i < sz; i++) { assert_c_eq(s[i], 0xa5, "Newly allocated byte %zu/%zu isn't junk-filled", i, sz); s[i] = 'a'; } if (xallocx(s, sz+1, 0, 0) == sz) { void *junked = (void *)s; s = (char *)rallocx(s, sz+1, 0); assert_ptr_not_null((void *)s, "Unexpected rallocx() failure"); assert_ptr_eq(most_recently_junked, junked, "Expected region of size %zu to be junk-filled", sz); } } dallocx(s, 0); assert_ptr_eq(most_recently_junked, (void *)s, "Expected region of size %zu to be junk-filled", sz); arena_dalloc_junk_small = arena_dalloc_junk_small_orig; arena_dalloc_junk_large = arena_dalloc_junk_large_orig; huge_dalloc_junk = huge_dalloc_junk_orig; } TEST_BEGIN(test_junk_small) { test_skip_if(!config_fill); test_junk(1, SMALL_MAXCLASS-1); } TEST_END TEST_BEGIN(test_junk_large) { test_skip_if(!config_fill); test_junk(SMALL_MAXCLASS+1, arena_maxclass); } TEST_END TEST_BEGIN(test_junk_huge) { test_skip_if(!config_fill); test_junk(arena_maxclass+1, chunksize*2); } TEST_END arena_ralloc_junk_large_t *arena_ralloc_junk_large_orig; static void *most_recently_trimmed; static void arena_ralloc_junk_large_intercept(void *ptr, size_t old_usize, size_t usize) { arena_ralloc_junk_large_orig(ptr, old_usize, usize); assert_zu_eq(old_usize, arena_maxclass, "Unexpected old_usize"); assert_zu_eq(usize, arena_maxclass-PAGE, "Unexpected usize"); most_recently_trimmed = ptr; } TEST_BEGIN(test_junk_large_ralloc_shrink) { void *p1, *p2; p1 = mallocx(arena_maxclass, 0); assert_ptr_not_null(p1, "Unexpected mallocx() failure"); arena_ralloc_junk_large_orig = arena_ralloc_junk_large; arena_ralloc_junk_large = arena_ralloc_junk_large_intercept; p2 = rallocx(p1, arena_maxclass-PAGE, 0); assert_ptr_eq(p1, p2, "Unexpected move during shrink"); arena_ralloc_junk_large = arena_ralloc_junk_large_orig; assert_ptr_eq(most_recently_trimmed, p1, "Expected trimmed portion of region to be junk-filled"); } TEST_END static bool detected_redzone_corruption; static void arena_redzone_corruption_replacement(void *ptr, size_t usize, bool after, size_t offset, uint8_t byte) { detected_redzone_corruption = true; } TEST_BEGIN(test_junk_redzone) { char *s; arena_redzone_corruption_t *arena_redzone_corruption_orig; test_skip_if(!config_fill); arena_redzone_corruption_orig = arena_redzone_corruption; arena_redzone_corruption = arena_redzone_corruption_replacement; /* Test underflow. */ detected_redzone_corruption = false; s = (char *)mallocx(1, 0); assert_ptr_not_null((void *)s, "Unexpected mallocx() failure"); s[-1] = 0xbb; dallocx(s, 0); assert_true(detected_redzone_corruption, "Did not detect redzone corruption"); /* Test overflow. 
*/ detected_redzone_corruption = false; s = (char *)mallocx(1, 0); assert_ptr_not_null((void *)s, "Unexpected mallocx() failure"); s[sallocx(s, 0)] = 0xbb; dallocx(s, 0); assert_true(detected_redzone_corruption, "Did not detect redzone corruption"); arena_redzone_corruption = arena_redzone_corruption_orig; } TEST_END int main(void) { return (test( test_junk_small, test_junk_large, test_junk_huge, test_junk_large_ralloc_shrink, test_junk_redzone)); } vmem-1.8/src/jemalloc/test/unit/mallctl.c000066400000000000000000000375671361505074100204610ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_mallctl_errors) { uint64_t epoch; size_t sz; assert_d_eq(mallctl("no_such_name", NULL, NULL, NULL, 0), ENOENT, "mallctl() should return ENOENT for non-existent names"); assert_d_eq(mallctl("version", NULL, NULL, "0.0.0", strlen("0.0.0")), EPERM, "mallctl() should return EPERM on attempt to write " "read-only value"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)-1), EINVAL, "mallctl() should return EINVAL for input size mismatch"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)+1), EINVAL, "mallctl() should return EINVAL for input size mismatch"); sz = sizeof(epoch)-1; assert_d_eq(mallctl("epoch", &epoch, &sz, NULL, 0), EINVAL, "mallctl() should return EINVAL for output size mismatch"); sz = sizeof(epoch)+1; assert_d_eq(mallctl("epoch", &epoch, &sz, NULL, 0), EINVAL, "mallctl() should return EINVAL for output size mismatch"); } TEST_END TEST_BEGIN(test_mallctlnametomib_errors) { size_t mib[1]; size_t miblen; miblen = sizeof(mib)/sizeof(size_t); assert_d_eq(mallctlnametomib("no_such_name", mib, &miblen), ENOENT, "mallctlnametomib() should return ENOENT for non-existent names"); } TEST_END TEST_BEGIN(test_mallctlbymib_errors) { uint64_t epoch; size_t sz; size_t mib[1]; size_t miblen; miblen = sizeof(mib)/sizeof(size_t); assert_d_eq(mallctlnametomib("version", mib, &miblen), 0, "Unexpected mallctlnametomib() failure"); assert_d_eq(mallctlbymib(mib, miblen, NULL, NULL, "0.0.0", strlen("0.0.0")), EPERM, "mallctl() should return EPERM on " "attempt to write read-only value"); miblen = sizeof(mib)/sizeof(size_t); assert_d_eq(mallctlnametomib("epoch", mib, &miblen), 0, "Unexpected mallctlnametomib() failure"); assert_d_eq(mallctlbymib(mib, miblen, NULL, NULL, &epoch, sizeof(epoch)-1), EINVAL, "mallctlbymib() should return EINVAL for input size mismatch"); assert_d_eq(mallctlbymib(mib, miblen, NULL, NULL, &epoch, sizeof(epoch)+1), EINVAL, "mallctlbymib() should return EINVAL for input size mismatch"); sz = sizeof(epoch)-1; assert_d_eq(mallctlbymib(mib, miblen, &epoch, &sz, NULL, 0), EINVAL, "mallctlbymib() should return EINVAL for output size mismatch"); sz = sizeof(epoch)+1; assert_d_eq(mallctlbymib(mib, miblen, &epoch, &sz, NULL, 0), EINVAL, "mallctlbymib() should return EINVAL for output size mismatch"); } TEST_END TEST_BEGIN(test_mallctl_read_write) { uint64_t old_epoch, new_epoch; size_t sz = sizeof(old_epoch); /* Blind. */ assert_d_eq(mallctl("epoch", NULL, NULL, NULL, 0), 0, "Unexpected mallctl() failure"); assert_zu_eq(sz, sizeof(old_epoch), "Unexpected output size"); /* Read. */ assert_d_eq(mallctl("epoch", &old_epoch, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_zu_eq(sz, sizeof(old_epoch), "Unexpected output size"); /* Write. */ assert_d_eq(mallctl("epoch", NULL, NULL, &new_epoch, sizeof(new_epoch)), 0, "Unexpected mallctl() failure"); assert_zu_eq(sz, sizeof(old_epoch), "Unexpected output size"); /* Read+write. 
*/ assert_d_eq(mallctl("epoch", &old_epoch, &sz, &new_epoch, sizeof(new_epoch)), 0, "Unexpected mallctl() failure"); assert_zu_eq(sz, sizeof(old_epoch), "Unexpected output size"); } TEST_END TEST_BEGIN(test_mallctlnametomib_short_mib) { size_t mib[6]; size_t miblen; void *mem; pool_t *pool; unsigned npools; size_t sz = sizeof(npools); mem = calloc(1, POOL_MINIMAL_SIZE); assert_ptr_ne(mem, NULL, "Unexpected calloc() failure"); pool = je_pool_create(mem, POOL_MINIMAL_SIZE, 1, 1); assert_ptr_ne((void*)pool, NULL, "Unexpected je_pool_create() failure"); assert_d_eq(mallctl("pools.npools", &npools, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_u_eq(npools, 2, "Unexpected number of pools"); miblen = 5; mib[5] = 42; assert_d_eq(mallctlnametomib("pool.1.arenas.bin.0.nregs", mib, &miblen), 0, "Unexpected mallctlnametomib() failure"); assert_zu_eq(miblen, 5, "Unexpected mib output length"); assert_zu_eq(mib[5], 42, "mallctlnametomib() wrote past the end of the input mib"); je_pool_delete(pool); free(mem); } TEST_END TEST_BEGIN(test_mallctl_config) { #define TEST_MALLCTL_CONFIG(config) do { \ bool oldval; \ size_t sz = sizeof(oldval); \ assert_d_eq(mallctl("config."#config, &oldval, &sz, NULL, 0), \ 0, "Unexpected mallctl() failure"); \ assert_b_eq(oldval, config_##config, "Incorrect config value"); \ assert_zu_eq(sz, sizeof(oldval), "Unexpected output size"); \ } while (0) TEST_MALLCTL_CONFIG(debug); TEST_MALLCTL_CONFIG(fill); TEST_MALLCTL_CONFIG(lazy_lock); TEST_MALLCTL_CONFIG(munmap); TEST_MALLCTL_CONFIG(prof); TEST_MALLCTL_CONFIG(prof_libgcc); TEST_MALLCTL_CONFIG(prof_libunwind); TEST_MALLCTL_CONFIG(stats); TEST_MALLCTL_CONFIG(tcache); TEST_MALLCTL_CONFIG(tls); TEST_MALLCTL_CONFIG(utrace); TEST_MALLCTL_CONFIG(valgrind); TEST_MALLCTL_CONFIG(xmalloc); #undef TEST_MALLCTL_CONFIG } TEST_END TEST_BEGIN(test_mallctl_opt) { bool config_always = true; #define TEST_MALLCTL_OPT(t, opt, config) do { \ t oldval; \ size_t sz = sizeof(oldval); \ int expected = config_##config ? 
0 : ENOENT; \ int result = mallctl("opt."#opt, &oldval, &sz, NULL, 0); \ assert_d_eq(result, expected, \ "Unexpected mallctl() result for opt."#opt); \ assert_zu_eq(sz, sizeof(oldval), "Unexpected output size"); \ } while (0) TEST_MALLCTL_OPT(bool, abort, always); TEST_MALLCTL_OPT(size_t, lg_chunk, always); TEST_MALLCTL_OPT(const char *, dss, always); TEST_MALLCTL_OPT(size_t, narenas, always); TEST_MALLCTL_OPT(ssize_t, lg_dirty_mult, always); TEST_MALLCTL_OPT(bool, stats_print, always); TEST_MALLCTL_OPT(bool, junk, fill); TEST_MALLCTL_OPT(size_t, quarantine, fill); TEST_MALLCTL_OPT(bool, redzone, fill); TEST_MALLCTL_OPT(bool, zero, fill); TEST_MALLCTL_OPT(bool, utrace, utrace); TEST_MALLCTL_OPT(bool, xmalloc, xmalloc); TEST_MALLCTL_OPT(bool, tcache, tcache); TEST_MALLCTL_OPT(size_t, lg_tcache_max, tcache); TEST_MALLCTL_OPT(bool, prof, prof); TEST_MALLCTL_OPT(const char *, prof_prefix, prof); TEST_MALLCTL_OPT(bool, prof_active, prof); TEST_MALLCTL_OPT(ssize_t, lg_prof_sample, prof); TEST_MALLCTL_OPT(bool, prof_accum, prof); TEST_MALLCTL_OPT(ssize_t, lg_prof_interval, prof); TEST_MALLCTL_OPT(bool, prof_gdump, prof); TEST_MALLCTL_OPT(bool, prof_final, prof); TEST_MALLCTL_OPT(bool, prof_leak, prof); #undef TEST_MALLCTL_OPT } TEST_END /* * create a couple of pools and check their size * using mib feature */ TEST_BEGIN(test_mallctl_with_multiple_pools) { #define NPOOLS 4 pool_t *pools[NPOOLS]; void *mem; unsigned npools; int i; size_t sz = sizeof(npools); size_t mib[4], miblen; mem = calloc(NPOOLS, POOL_MINIMAL_SIZE); assert_ptr_ne(mem, NULL, "Unexpected calloc() failure"); for (i = 0; i < NPOOLS; ++i) { pools[i] = je_pool_create( mem + (i*POOL_MINIMAL_SIZE), POOL_MINIMAL_SIZE, 1, 1); assert_ptr_ne( (void*)pools[i], NULL, "Unexpected je_pool_create() failure"); } assert_d_eq(mallctl("pools.npools", &npools, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_u_eq(npools, NPOOLS+1, "Unexpected number of pools"); miblen = 4; assert_d_eq(mallctlnametomib("pool.0.arenas.narenas", mib, &miblen), 0, "Unexpected mallctlnametomib() failure"); /* * This loop does not use local variable pools. * Moreover we omit pool[0]. */ for (i = 1; i <= NPOOLS; ++i) { unsigned narenas; mib[1] = i; sz = sizeof(narenas); assert_d_eq(mallctlbymib(mib, miblen, &narenas, &sz, NULL, 0), 0, "Unexpected mallctlbymib() failure"); } for (i = 0; i < NPOOLS; ++i) { je_pool_delete( pools[i]); } free(mem); #undef NPOOLS } TEST_END TEST_BEGIN(test_manpage_example) { unsigned nbins, i; size_t mib[6]; size_t len, miblen; len = sizeof(nbins); assert_d_eq(mallctl("pool.0.arenas.nbins", &nbins, &len, NULL, 0), 0, "Unexpected mallctl() failure"); miblen = 6; assert_d_eq(mallctlnametomib("pool.0.arenas.bin.0.size", mib, &miblen), 0, "Unexpected mallctlnametomib() failure"); for (i = 0; i < nbins; i++) { size_t bin_size; mib[4] = i; len = sizeof(bin_size); assert_d_eq(mallctlbymib(mib, miblen, &bin_size, &len, NULL, 0), 0, "Unexpected mallctlbymib() failure"); /* Do something with bin_size... 
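 * As in the mallctl(3) manual page example, the name is translated to a
 * MIB once with mallctlnametomib() and then reused for every bin by
 * rewriting mib[4], avoiding a name lookup per iteration.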
*/ } } TEST_END TEST_BEGIN(test_thread_arena) { unsigned arena_old, arena_new, narenas; size_t sz = sizeof(unsigned); assert_d_eq(mallctl("pool.0.arenas.narenas", &narenas, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_u_eq(narenas, opt_narenas, "Number of arenas incorrect"); arena_new = narenas - 1; assert_d_eq(mallctl("thread.pool.0.arena", &arena_old, &sz, &arena_new, sizeof(unsigned)), 0, "Unexpected mallctl() failure"); arena_new = 0; assert_d_eq(mallctl("thread.pool.0.arena", &arena_old, &sz, &arena_new, sizeof(unsigned)), 0, "Unexpected mallctl() failure"); } TEST_END TEST_BEGIN(test_arena_i_purge) { unsigned narenas; unsigned npools; size_t sz = sizeof(unsigned); size_t mib[5]; size_t miblen = 5; void *mem; pool_t *pool; mem = calloc(1, POOL_MINIMAL_SIZE); assert_ptr_ne(mem, NULL, "Unexpected calloc() failure"); pool = je_pool_create(mem, POOL_MINIMAL_SIZE, 1, 1); assert_ptr_ne( (void*)pool, NULL, "Unexpected je_pool_create() failure"); assert_d_eq(mallctl("pools.npools", &npools, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_u_eq(npools, 2, "Unexpected number of pools"); assert_d_eq(mallctl("pool.1.arena.0.purge", NULL, NULL, NULL, 0), 0, "Unexpected mallctl() failure"); assert_d_eq(mallctl("pool.1.arenas.narenas", &narenas, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_d_eq(mallctlnametomib("pool.1.arena.0.purge", mib, &miblen), 0, "Unexpected mallctlnametomib() failure"); mib[3] = narenas; assert_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0, "Unexpected mallctlbymib() failure"); je_pool_delete(pool); free(mem); } TEST_END TEST_BEGIN(test_arena_i_dss) { const char *dss_prec_old, *dss_prec_new; size_t sz = sizeof(dss_prec_old); size_t mib[5]; size_t miblen; miblen = sizeof(mib)/sizeof(size_t); assert_d_eq(mallctlnametomib("pool.0.arena.0.dss", mib, &miblen), 0, "Unexpected mallctlnametomib() error"); dss_prec_new = "disabled"; assert_d_eq(mallctlbymib(mib, miblen, &dss_prec_old, &sz, &dss_prec_new, sizeof(dss_prec_new)), 0, "Unexpected mallctl() failure"); assert_str_ne(dss_prec_old, "primary", "Unexpected default for dss precedence"); assert_d_eq(mallctlbymib(mib, miblen, &dss_prec_new, &sz, &dss_prec_old, sizeof(dss_prec_old)), 0, "Unexpected mallctl() failure"); mib[3] = narenas_total_get(pools[0]); dss_prec_new = "disabled"; assert_d_eq(mallctlbymib(mib, miblen, &dss_prec_old, &sz, &dss_prec_new, sizeof(dss_prec_new)), 0, "Unexpected mallctl() failure"); assert_str_ne(dss_prec_old, "primary", "Unexpected default for dss precedence"); } TEST_END TEST_BEGIN(test_arenas_initialized) { unsigned narenas; size_t sz = sizeof(narenas); assert_d_eq(mallctl("pool.0.arenas.narenas", &narenas, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); { VARIABLE_ARRAY(bool, initialized, narenas); sz = narenas * sizeof(bool); assert_d_eq(mallctl("pool.0.arenas.initialized", initialized, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); } } TEST_END TEST_BEGIN(test_arenas_constants) { #define TEST_ARENAS_CONSTANT(t, name, expected) do { \ t name; \ size_t sz = sizeof(t); \ assert_d_eq(mallctl("pool.0.arenas."#name, &(name), &sz, NULL, 0), 0, \ "Unexpected mallctl() failure"); \ assert_zu_eq(name, expected, "Incorrect "#name" size"); \ } while (0) TEST_ARENAS_CONSTANT(size_t, quantum, QUANTUM); TEST_ARENAS_CONSTANT(size_t, page, PAGE); TEST_ARENAS_CONSTANT(unsigned, nbins, NBINS); TEST_ARENAS_CONSTANT(size_t, nlruns, nlclasses); #undef TEST_ARENAS_CONSTANT } TEST_END TEST_BEGIN(test_arenas_bin_constants) { #define TEST_ARENAS_BIN_CONSTANT(t, name, 
expected) do { \ t name; \ size_t sz = sizeof(t); \ assert_d_eq(mallctl("pool.0.arenas.bin.0."#name, &(name), &sz, NULL, 0), \ 0, "Unexpected mallctl() failure"); \ assert_zu_eq(name, expected, "Incorrect "#name" size"); \ } while (0) TEST_ARENAS_BIN_CONSTANT(size_t, size, arena_bin_info[0].reg_size); TEST_ARENAS_BIN_CONSTANT(uint32_t, nregs, arena_bin_info[0].nregs); TEST_ARENAS_BIN_CONSTANT(size_t, run_size, arena_bin_info[0].run_size); #undef TEST_ARENAS_BIN_CONSTANT } TEST_END TEST_BEGIN(test_arenas_lrun_constants) { #define TEST_ARENAS_LRUN_CONSTANT(t, name, expected) do { \ t name; \ size_t sz = sizeof(t); \ assert_d_eq(mallctl("pool.0.arenas.lrun.0."#name, &(name), &sz, NULL, \ 0), 0, "Unexpected mallctl() failure"); \ assert_zu_eq(name, expected, "Incorrect "#name" size"); \ } while (0) TEST_ARENAS_LRUN_CONSTANT(size_t, size, (1 << LG_PAGE)); #undef TEST_ARENAS_LRUN_CONSTANT } TEST_END /* * create a couple of pools and extend their arenas */ TEST_BEGIN(test_arenas_extend) { #define NPOOLS 4 pool_t *pools[NPOOLS]; void *mem; unsigned npools, narenas_before, arena, narenas_after; int i; size_t mib_narenas[4], mib_extend[4], miblen = sizeof(mib_narenas), sz = sizeof(unsigned); mem = calloc(NPOOLS, POOL_MINIMAL_SIZE); assert_ptr_ne(mem, NULL, "Unexpected calloc() failure"); for (i = 0; i < NPOOLS; ++i) { pools[i] = je_pool_create(mem + (i*POOL_MINIMAL_SIZE), POOL_MINIMAL_SIZE, 0, 1); assert_ptr_ne((void *)pools[i], NULL, "Unexpected je_pool_create() failure"); } assert_d_eq(mallctl("pools.npools", &npools, &sz, NULL, 0), 0, "Unexpected mallctl() failure"); assert_u_eq(npools, NPOOLS+1, "Unexpected number of pools"); assert_d_eq(mallctlnametomib("pool.0.arenas.narenas", mib_narenas, &miblen), 0, "Unexpected mallctlnametomib() failure"); assert_d_eq(mallctlnametomib("pool.0.arenas.extend", mib_extend, &miblen), 0, "Unexpected mallctlnametomib() failure"); /* * This loop does not use local variable pools. * Moreover we omit pool[0]. */ for (i = 1; i <= NPOOLS; ++i) { mib_narenas[1] = i; mib_extend[1] = i; assert_d_eq(mallctlbymib(mib_narenas, miblen, &narenas_before, &sz, NULL, 0), 0, "Unexpected mallctlbymib() failure"); assert_d_eq(mallctlbymib(mib_extend, miblen, &arena, &sz, NULL, 0), 0, "Unexpected mallctlbymib() failure"); assert_d_eq(mallctlbymib(mib_narenas, miblen, &narenas_after, &sz, NULL, 0), 0, "Unexpected mallctlbymib() failure"); assert_u_eq(narenas_before+1, narenas_after, "Unexpected number of arenas before versus after extension"); assert_u_eq(arena, narenas_after-1, "Unexpected arena index"); } for (i = 0; i < NPOOLS; ++i) { je_pool_delete( pools[i]); } free(mem); #undef NPOOLS } TEST_END TEST_BEGIN(test_stats_arenas) { #define TEST_STATS_ARENAS(t, name) do { \ t name; \ size_t sz = sizeof(t); \ assert_d_eq(mallctl("pool.0.stats.arenas.0."#name, &(name), &sz, NULL, \ 0), 0, "Unexpected mallctl() failure"); \ } while (0) TEST_STATS_ARENAS(const char *, dss); TEST_STATS_ARENAS(unsigned, nthreads); TEST_STATS_ARENAS(size_t, pactive); TEST_STATS_ARENAS(size_t, pdirty); #undef TEST_STATS_ARENAS } TEST_END /* * Each arena allocates 32 kilobytes of CTL metadata, and since we only * have 12 megabytes, we have to hard-limit it to a known value, otherwise * on systems with high CPU count, the tests might run out of memory. 
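 * main() therefore sets opt_narenas to NARENAS_IN_POOL (64) before any
 * test runs.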
*/ #define NARENAS_IN_POOL 64 int main(void) { opt_narenas = NARENAS_IN_POOL; return (test( test_mallctl_errors, test_mallctlnametomib_errors, test_mallctlbymib_errors, test_mallctl_read_write, test_mallctlnametomib_short_mib, test_mallctl_config, test_mallctl_opt, test_mallctl_with_multiple_pools, test_manpage_example, test_thread_arena, test_arena_i_purge, test_arena_i_dss, test_arenas_initialized, test_arenas_constants, test_arenas_bin_constants, test_arenas_lrun_constants, test_arenas_extend, test_stats_arenas)); } vmem-1.8/src/jemalloc/test/unit/math.c000066400000000000000000000440201361505074100177400ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define MAX_REL_ERR 1.0e-9 #define MAX_ABS_ERR 1.0e-9 #include #ifndef INFINITY #define INFINITY (DBL_MAX + DBL_MAX) #endif static bool double_eq_rel(double a, double b, double max_rel_err, double max_abs_err) { double rel_err; if (fabs(a - b) < max_abs_err) return (true); rel_err = (fabs(b) > fabs(a)) ? fabs((a-b)/b) : fabs((a-b)/a); return (rel_err < max_rel_err); } static uint64_t factorial(unsigned x) { uint64_t ret = 1; unsigned i; for (i = 2; i <= x; i++) ret *= (uint64_t)i; return (ret); } TEST_BEGIN(test_ln_gamma_factorial) { unsigned x; /* exp(ln_gamma(x)) == (x-1)! for integer x. */ for (x = 1; x <= 21; x++) { assert_true(double_eq_rel(exp(ln_gamma(x)), (double)factorial(x-1), MAX_REL_ERR, MAX_ABS_ERR), "Incorrect factorial result for x=%u", x); } } TEST_END /* Expected ln_gamma([0.0..100.0] increment=0.25). */ static const double ln_gamma_misc_expected[] = { INFINITY, 1.28802252469807743, 0.57236494292470008, 0.20328095143129538, 0.00000000000000000, -0.09827183642181320, -0.12078223763524518, -0.08440112102048555, 0.00000000000000000, 0.12487171489239651, 0.28468287047291918, 0.47521466691493719, 0.69314718055994529, 0.93580193110872523, 1.20097360234707429, 1.48681557859341718, 1.79175946922805496, 2.11445692745037128, 2.45373657084244234, 2.80857141857573644, 3.17805383034794575, 3.56137591038669710, 3.95781396761871651, 4.36671603662228680, 4.78749174278204581, 5.21960398699022932, 5.66256205985714178, 6.11591589143154568, 6.57925121201010121, 7.05218545073853953, 7.53436423675873268, 8.02545839631598312, 8.52516136106541467, 9.03318691960512332, 9.54926725730099690, 10.07315123968123949, 10.60460290274525086, 11.14340011995171231, 11.68933342079726856, 12.24220494005076176, 12.80182748008146909, 13.36802367147604720, 13.94062521940376342, 14.51947222506051816, 15.10441257307551943, 15.69530137706046524, 16.29200047656724237, 16.89437797963419285, 17.50230784587389010, 18.11566950571089407, 18.73434751193644843, 19.35823122022435427, 19.98721449566188468, 20.62119544270163018, 21.26007615624470048, 21.90376249182879320, 22.55216385312342098, 23.20519299513386002, 23.86276584168908954, 24.52480131594137802, 25.19122118273868338, 25.86194990184851861, 26.53691449111561340, 27.21604439872720604, 27.89927138384089389, 28.58652940490193828, 29.27775451504081516, 29.97288476399884871, 30.67186010608067548, 31.37462231367769050, 32.08111489594735843, 32.79128302226991565, 33.50507345013689076, 34.22243445715505317, 34.94331577687681545, 35.66766853819134298, 36.39544520803305261, 37.12659953718355865, 37.86108650896109395, 38.59886229060776230, 39.33988418719949465, 40.08411059791735198, 40.83150097453079752, 41.58201578195490100, 42.33561646075348506, 43.09226539146988699, 43.85192586067515208, 44.61456202863158893, 45.38013889847690052, 46.14862228684032885, 46.91997879580877395, 47.69417578616628361, 
48.47118135183522014, 49.25096429545256882, 50.03349410501914463, 50.81874093156324790, 51.60667556776436982, 52.39726942748592364, 53.19049452616926743, 53.98632346204390586, 54.78472939811231157, 55.58568604486942633, 56.38916764371992940, 57.19514895105859864, 58.00360522298051080, 58.81451220059079787, 59.62784609588432261, 60.44358357816834371, 61.26170176100199427, 62.08217818962842927, 62.90499082887649962, 63.73011805151035958, 64.55753862700632340, 65.38723171073768015, 66.21917683354901385, 67.05335389170279825, 67.88974313718154008, 68.72832516833013017, 69.56908092082363737, 70.41199165894616385, 71.25703896716800045, 72.10420474200799390, 72.95347118416940191, 73.80482079093779646, 74.65823634883015814, 75.51370092648485866, 76.37119786778275454, 77.23071078519033961, 78.09222355331530707, 78.95572030266725960, 79.82118541361435859, 80.68860351052903468, 81.55795945611502873, 82.42923834590904164, 83.30242550295004378, 84.17750647261028973, 85.05446701758152983, 85.93329311301090456, 86.81397094178107920, 87.69648688992882057, 88.58082754219766741, 89.46697967771913795, 90.35493026581838194, 91.24466646193963015, 92.13617560368709292, 93.02944520697742803, 93.92446296229978486, 94.82121673107967297, 95.71969454214321615, 96.61988458827809723, 97.52177522288820910, 98.42535495673848800, 99.33061245478741341, 100.23753653310367895, 101.14611615586458981, 102.05634043243354370, 102.96819861451382394, 103.88168009337621811, 104.79677439715833032, 105.71347118823287303, 106.63176026064346047, 107.55163153760463501, 108.47307506906540198, 109.39608102933323153, 110.32063971475740516, 111.24674154146920557, 112.17437704317786995, 113.10353686902013237, 114.03421178146170689, 114.96639265424990128, 115.90007047041454769, 116.83523632031698014, 117.77188139974506953, 118.70999700805310795, 119.64957454634490830, 120.59060551569974962, 121.53308151543865279, 122.47699424143097247, 123.42233548443955726, 124.36909712850338394, 125.31727114935689826, 126.26684961288492559, 127.21782467361175861, 128.17018857322420899, 129.12393363912724453, 130.07905228303084755, 131.03553699956862033, 131.99338036494577864, 132.95257503561629164, 133.91311374698926784, 134.87498931216194364, 135.83819462068046846, 136.80272263732638294, 137.76856640092901785, 138.73571902320256299, 139.70417368760718091, 140.67392364823425055, 141.64496222871400732, 142.61728282114600574, 143.59087888505104047, 144.56574394634486680, 145.54187159633210058, 146.51925549072063859, 147.49788934865566148, 148.47776695177302031, 149.45888214327129617, 150.44122882700193600, 151.42480096657754984, 152.40959258449737490, 153.39559776128982094, 154.38281063467164245, 155.37122539872302696, 156.36083630307879844, 157.35163765213474107, 158.34362380426921391, 159.33678917107920370, 160.33112821663092973, 161.32663545672428995, 162.32330545817117695, 163.32113283808695314, 164.32011226319519892, 165.32023844914485267, 166.32150615984036790, 167.32391020678358018, 168.32744544842768164, 169.33210678954270634, 170.33788918059275375, 171.34478761712384198, 172.35279713916281707, 173.36191283062726143, 174.37212981874515094, 175.38344327348534080, 176.39584840699734514, 177.40934047306160437, 178.42391476654847793, 179.43956662288721304, 180.45629141754378111, 181.47408456550741107, 182.49294152078630304, 183.51285777591152737, 184.53382886144947861, 185.55585034552262869, 186.57891783333786861, 187.60302696672312095, 188.62817342367162610, 189.65435291789341932, 190.68156119837468054, 191.70979404894376330, 192.73904728784492590, 
193.76931676731820176, 194.80059837318714244, 195.83288802445184729, 196.86618167288995096, 197.90047530266301123, 198.93576492992946214, 199.97204660246373464, 201.00931639928148797, 202.04757043027063901, 203.08680483582807597, 204.12701578650228385, 205.16819948264117102, 206.21035215404597807, 207.25347005962987623, 208.29754948708190909, 209.34258675253678916, 210.38857820024875878, 211.43552020227099320, 212.48340915813977858, 213.53224149456323744, 214.58201366511514152, 215.63272214993284592, 216.68436345542014010, 217.73693411395422004, 218.79043068359703739, 219.84484974781133815, 220.90018791517996988, 221.95644181913033322, 223.01360811766215875, 224.07168349307951871, 225.13066465172661879, 226.19054832372759734, 227.25133126272962159, 228.31301024565024704, 229.37558207242807384, 230.43904356577689896, 231.50339157094342113, 232.56862295546847008, 233.63473460895144740, 234.70172344281823484, 235.76958639009222907, 236.83832040516844586, 237.90792246359117712, 238.97838956183431947, 240.04971871708477238, 241.12190696702904802, 242.19495136964280846, 243.26884900298270509, 244.34359696498191283, 245.41919237324782443, 246.49563236486270057, 247.57291409618682110, 248.65103474266476269, 249.72999149863338175, 250.80978157713354904, 251.89040220972316320, 252.97185064629374551, 254.05412415488834199, 255.13722002152300661, 256.22113555000953511, 257.30586806178126835, 258.39141489572085675, 259.47777340799029844, 260.56494097186322279, 261.65291497755913497, 262.74169283208021852, 263.83127195904967266, 264.92164979855277807, 266.01282380697938379, 267.10479145686849733, 268.19755023675537586, 269.29109765101975427, 270.38543121973674488, 271.48054847852881721, 272.57644697842033565, 273.67312428569374561, 274.77057798174683967, 275.86880566295326389, 276.96780494052313770, 278.06757344036617496, 279.16810880295668085, 280.26940868320008349, 281.37147075030043197, 282.47429268763045229, 283.57787219260217171, 284.68220697654078322, 285.78729476455760050, 286.89313329542699194, 287.99972032146268930, 289.10705360839756395, 290.21513093526289140, 291.32395009427028754, 292.43350889069523646, 293.54380514276073200, 294.65483668152336350, 295.76660135076059532, 296.87909700685889902, 297.99232151870342022, 299.10627276756946458, 300.22094864701409733, 301.33634706277030091, 302.45246593264130297, 303.56930318639643929, 304.68685676566872189, 305.80512462385280514, 306.92410472600477078, 308.04379504874236773, 309.16419358014690033, 310.28529831966631036, 311.40710727801865687, 312.52961847709792664, 313.65282994987899201, 314.77673974032603610, 315.90134590329950015, 317.02664650446632777, 318.15263962020929966, 319.27932333753892635, 320.40669575400545455, 321.53475497761127144, 322.66349912672620803, 323.79292633000159185, 324.92303472628691452, 326.05382246454587403, 327.18528770377525916, 328.31742861292224234, 329.45024337080525356, 330.58373016603343331, 331.71788719692847280, 332.85271267144611329, 333.98820480709991898, 335.12436183088397001, 336.26118197919845443, 337.39866349777429377, 338.53680464159958774, 339.67560367484657036, 340.81505887079896411, 341.95516851178109619, 343.09593088908627578, 344.23734430290727460, 345.37940706226686416, 346.52211748494903532, 347.66547389743118401, 348.80947463481720661, 349.95411804077025408, 351.09940246744753267, 352.24532627543504759, 353.39188783368263103, 354.53908551944078908, 355.68691771819692349, 356.83538282361303118, 357.98447923746385868, 359.13420536957539753 }; TEST_BEGIN(test_ln_gamma_misc) { unsigned i; for (i = 1; 
i < sizeof(ln_gamma_misc_expected)/sizeof(double); i++) { double x = (double)i * 0.25; assert_true(double_eq_rel(ln_gamma(x), ln_gamma_misc_expected[i], MAX_REL_ERR, MAX_ABS_ERR), "Incorrect ln_gamma result for i=%u", i); } } TEST_END /* Expected pt_norm([0.01..0.99] increment=0.01). */ static const double pt_norm_expected[] = { -INFINITY, -2.32634787404084076, -2.05374891063182252, -1.88079360815125085, -1.75068607125216946, -1.64485362695147264, -1.55477359459685305, -1.47579102817917063, -1.40507156030963221, -1.34075503369021654, -1.28155156554460081, -1.22652812003661049, -1.17498679206608991, -1.12639112903880045, -1.08031934081495606, -1.03643338949378938, -0.99445788320975281, -0.95416525314619416, -0.91536508784281390, -0.87789629505122846, -0.84162123357291418, -0.80642124701824025, -0.77219321418868492, -0.73884684918521371, -0.70630256284008752, -0.67448975019608171, -0.64334540539291685, -0.61281299101662701, -0.58284150727121620, -0.55338471955567281, -0.52440051270804067, -0.49585034734745320, -0.46769879911450812, -0.43991316567323380, -0.41246312944140462, -0.38532046640756751, -0.35845879325119373, -0.33185334643681652, -0.30548078809939738, -0.27931903444745404, -0.25334710313579978, -0.22754497664114931, -0.20189347914185077, -0.17637416478086135, -0.15096921549677725, -0.12566134685507399, -0.10043372051146975, -0.07526986209982976, -0.05015358346473352, -0.02506890825871106, 0.00000000000000000, 0.02506890825871106, 0.05015358346473366, 0.07526986209982990, 0.10043372051146990, 0.12566134685507413, 0.15096921549677739, 0.17637416478086146, 0.20189347914185105, 0.22754497664114931, 0.25334710313579978, 0.27931903444745404, 0.30548078809939738, 0.33185334643681652, 0.35845879325119373, 0.38532046640756762, 0.41246312944140484, 0.43991316567323391, 0.46769879911450835, 0.49585034734745348, 0.52440051270804111, 0.55338471955567303, 0.58284150727121620, 0.61281299101662701, 0.64334540539291685, 0.67448975019608171, 0.70630256284008752, 0.73884684918521371, 0.77219321418868492, 0.80642124701824036, 0.84162123357291441, 0.87789629505122879, 0.91536508784281423, 0.95416525314619460, 0.99445788320975348, 1.03643338949378938, 1.08031934081495606, 1.12639112903880045, 1.17498679206608991, 1.22652812003661049, 1.28155156554460081, 1.34075503369021654, 1.40507156030963265, 1.47579102817917085, 1.55477359459685394, 1.64485362695147308, 1.75068607125217102, 1.88079360815125041, 2.05374891063182208, 2.32634787404084076 }; TEST_BEGIN(test_pt_norm) { unsigned i; for (i = 1; i < sizeof(pt_norm_expected)/sizeof(double); i++) { double p = (double)i * 0.01; assert_true(double_eq_rel(pt_norm(p), pt_norm_expected[i], MAX_REL_ERR, MAX_ABS_ERR), "Incorrect pt_norm result for i=%u", i); } } TEST_END /* * Expected pt_chi2(p=[0.01..0.99] increment=0.07, * df={0.1, 1.1, 10.1, 100.1, 1000.1}). 
*/ static const double pt_chi2_df[] = {0.1, 1.1, 10.1, 100.1, 1000.1}; static const double pt_chi2_expected[] = { 1.168926411457320e-40, 1.347680397072034e-22, 3.886980416666260e-17, 8.245951724356564e-14, 2.068936347497604e-11, 1.562561743309233e-09, 5.459543043426564e-08, 1.114775688149252e-06, 1.532101202364371e-05, 1.553884683726585e-04, 1.239396954915939e-03, 8.153872320255721e-03, 4.631183739647523e-02, 2.473187311701327e-01, 2.175254800183617e+00, 0.0003729887888876379, 0.0164409238228929513, 0.0521523015190650113, 0.1064701372271216612, 0.1800913735793082115, 0.2748704281195626931, 0.3939246282787986497, 0.5420727552260817816, 0.7267265822221973259, 0.9596554296000253670, 1.2607440376386165326, 1.6671185084541604304, 2.2604828984738705167, 3.2868613342148607082, 6.9298574921692139839, 2.606673548632508, 4.602913725294877, 5.646152813924212, 6.488971315540869, 7.249823275816285, 7.977314231410841, 8.700354939944047, 9.441728024225892, 10.224338321374127, 11.076435368801061, 12.039320937038386, 13.183878752697167, 14.657791935084575, 16.885728216339373, 23.361991680031817, 70.14844087392152, 80.92379498849355, 85.53325420085891, 88.94433120715347, 91.83732712857017, 94.46719943606301, 96.96896479994635, 99.43412843510363, 101.94074719829733, 104.57228644307247, 107.43900093448734, 110.71844673417287, 114.76616819871325, 120.57422505959563, 135.92318818757556, 899.0072447849649, 937.9271278858220, 953.8117189560207, 965.3079371501154, 974.8974061207954, 983.4936235182347, 991.5691170518946, 999.4334123954690, 1007.3391826856553, 1015.5445154999951, 1024.3777075619569, 1034.3538789836223, 1046.4872561869577, 1063.5717461999654, 1107.0741966053859 }; TEST_BEGIN(test_pt_chi2) { unsigned i, j; unsigned e = 0; for (i = 0; i < sizeof(pt_chi2_df)/sizeof(double); i++) { double df = pt_chi2_df[i]; double ln_gamma_df = ln_gamma(df * 0.5); for (j = 1; j < 100; j += 7) { double p = (double)j * 0.01; assert_true(double_eq_rel(pt_chi2(p, df, ln_gamma_df), pt_chi2_expected[e], MAX_REL_ERR, MAX_ABS_ERR), "Incorrect pt_chi2 result for i=%u, j=%u", i, j); e++; } } } TEST_END /* * Expected pt_gamma(p=[0.1..0.99] increment=0.07, * shape=[0.5..3.0] increment=0.5). 
*/ static const double pt_gamma_shape[] = {0.5, 1.0, 1.5, 2.0, 2.5, 3.0}; static const double pt_gamma_expected[] = { 7.854392895485103e-05, 5.043466107888016e-03, 1.788288957794883e-02, 3.900956150232906e-02, 6.913847560638034e-02, 1.093710833465766e-01, 1.613412523825817e-01, 2.274682115597864e-01, 3.114117323127083e-01, 4.189466220207417e-01, 5.598106789059246e-01, 7.521856146202706e-01, 1.036125427911119e+00, 1.532450860038180e+00, 3.317448300510606e+00, 0.01005033585350144, 0.08338160893905107, 0.16251892949777497, 0.24846135929849966, 0.34249030894677596, 0.44628710262841947, 0.56211891815354142, 0.69314718055994529, 0.84397007029452920, 1.02165124753198167, 1.23787435600161766, 1.51412773262977574, 1.89711998488588196, 2.52572864430825783, 4.60517018598809091, 0.05741590094955853, 0.24747378084860744, 0.39888572212236084, 0.54394139997444901, 0.69048812513915159, 0.84311389861296104, 1.00580622221479898, 1.18298694218766931, 1.38038096305861213, 1.60627736383027453, 1.87396970522337947, 2.20749220408081070, 2.65852391865854942, 3.37934630984842244, 5.67243336507218476, 0.1485547402532659, 0.4657458011640391, 0.6832386130709406, 0.8794297834672100, 1.0700752852474524, 1.2629614217350744, 1.4638400448580779, 1.6783469900166610, 1.9132338090606940, 2.1778589228618777, 2.4868823970010991, 2.8664695666264195, 3.3724415436062114, 4.1682658512758071, 6.6383520679938108, 0.2771490383641385, 0.7195001279643727, 0.9969081732265243, 1.2383497880608061, 1.4675206597269927, 1.6953064251816552, 1.9291243435606809, 2.1757300955477641, 2.4428032131216391, 2.7406534569230616, 3.0851445039665513, 3.5043101122033367, 4.0575997065264637, 4.9182956424675286, 7.5431362346944937, 0.4360451650782932, 0.9983600902486267, 1.3306365880734528, 1.6129750834753802, 1.8767241606994294, 2.1357032436097660, 2.3988853336865565, 2.6740603137235603, 2.9697561737517959, 3.2971457713883265, 3.6731795898504660, 4.1275751617770631, 4.7230515633946677, 5.6417477865306020, 8.4059469148854635 }; TEST_BEGIN(test_pt_gamma_shape) { unsigned i, j; unsigned e = 0; for (i = 0; i < sizeof(pt_gamma_shape)/sizeof(double); i++) { double shape = pt_gamma_shape[i]; double ln_gamma_shape = ln_gamma(shape); for (j = 1; j < 100; j += 7) { double p = (double)j * 0.01; assert_true(double_eq_rel(pt_gamma(p, shape, 1.0, ln_gamma_shape), pt_gamma_expected[e], MAX_REL_ERR, MAX_ABS_ERR), "Incorrect pt_gamma result for i=%u, j=%u", i, j); e++; } } } TEST_END TEST_BEGIN(test_pt_gamma_scale) { double shape = 1.0; double ln_gamma_shape = ln_gamma(shape); assert_true(double_eq_rel( pt_gamma(0.5, shape, 1.0, ln_gamma_shape) * 10.0, pt_gamma(0.5, shape, 10.0, ln_gamma_shape), MAX_REL_ERR, MAX_ABS_ERR), "Scale should be trivially equivalent to external multiplication"); } TEST_END int main(void) { return (test( test_ln_gamma_factorial, test_ln_gamma_misc, test_pt_norm, test_pt_chi2, test_pt_gamma_shape, test_pt_gamma_scale)); } vmem-1.8/src/jemalloc/test/unit/mq.c000066400000000000000000000034051361505074100174260ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define NSENDERS 3 #define NMSGS 100000 typedef struct mq_msg_s mq_msg_t; struct mq_msg_s { mq_msg(mq_msg_t) link; }; mq_gen(static, mq_, mq_t, mq_msg_t, link) TEST_BEGIN(test_mq_basic) { mq_t mq; mq_msg_t msg; assert_false(mq_init(&mq), "Unexpected mq_init() failure"); assert_u_eq(mq_count(&mq), 0, "mq should be empty"); assert_ptr_null(mq_tryget(&mq), "mq_tryget() should fail when the queue is empty"); mq_put(&mq, &msg); assert_u_eq(mq_count(&mq), 1, "mq should contain one message"); 
assert_ptr_eq(mq_tryget(&mq), &msg, "mq_tryget() should return msg"); mq_put(&mq, &msg); assert_ptr_eq(mq_get(&mq), &msg, "mq_get() should return msg"); mq_fini(&mq); } TEST_END static void * thd_receiver_start(void *arg) { mq_t *mq = (mq_t *)arg; unsigned i; for (i = 0; i < (NSENDERS * NMSGS); i++) { mq_msg_t *msg = mq_get(mq); assert_ptr_not_null(msg, "mq_get() should never return NULL"); dallocx(msg, 0); } return (NULL); } static void * thd_sender_start(void *arg) { mq_t *mq = (mq_t *)arg; unsigned i; for (i = 0; i < NMSGS; i++) { mq_msg_t *msg; void *p; p = mallocx(sizeof(mq_msg_t), 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); msg = (mq_msg_t *)p; mq_put(mq, msg); } return (NULL); } TEST_BEGIN(test_mq_threaded) { mq_t mq; thd_t receiver; thd_t senders[NSENDERS]; unsigned i; assert_false(mq_init(&mq), "Unexpected mq_init() failure"); thd_create(&receiver, thd_receiver_start, (void *)&mq); for (i = 0; i < NSENDERS; i++) thd_create(&senders[i], thd_sender_start, (void *)&mq); thd_join(receiver, NULL); for (i = 0; i < NSENDERS; i++) thd_join(senders[i], NULL); mq_fini(&mq); } TEST_END int main(void) { return (test( test_mq_basic, test_mq_threaded)); } vmem-1.8/src/jemalloc/test/unit/mtx.c000066400000000000000000000017531361505074100176250ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define NTHREADS 2 #define NINCRS 2000000 TEST_BEGIN(test_mtx_basic) { mtx_t mtx; assert_false(mtx_init(&mtx), "Unexpected mtx_init() failure"); mtx_lock(&mtx); mtx_unlock(&mtx); mtx_fini(&mtx); } TEST_END typedef struct { mtx_t mtx; unsigned x; } thd_start_arg_t; static void * thd_start(void *varg) { thd_start_arg_t *arg = (thd_start_arg_t *)varg; unsigned i; for (i = 0; i < NINCRS; i++) { mtx_lock(&arg->mtx); arg->x++; mtx_unlock(&arg->mtx); } return (NULL); } TEST_BEGIN(test_mtx_race) { thd_start_arg_t arg; thd_t thds[NTHREADS]; unsigned i; assert_false(mtx_init(&arg.mtx), "Unexpected mtx_init() failure"); arg.x = 0; for (i = 0; i < NTHREADS; i++) thd_create(&thds[i], thd_start, (void *)&arg); for (i = 0; i < NTHREADS; i++) thd_join(thds[i], NULL); assert_u_eq(arg.x, NTHREADS * NINCRS, "Race-related counter corruption"); } TEST_END int main(void) { return (test( test_mtx_basic, test_mtx_race)); } vmem-1.8/src/jemalloc/test/unit/pool.h000066400000000000000000000323071361505074100177720ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define TEST_POOL_SIZE (16L * 1024L * 1024L) #define TEST_TOO_SMALL_POOL_SIZE (2L * 1024L * 1024L) #define TEST_VALUE 123456 #define TEST_MALLOC_FREE_LOOPS 2 #define TEST_MALLOC_SIZE 1024 #define TEST_ALLOCS_SIZE (TEST_POOL_SIZE / 8) #define TEST_BUFFOR_CMP_SIZE (4L * 1024L * 1024L) static char mem_pool[TEST_POOL_SIZE]; static char mem_extend_ok[TEST_POOL_SIZE]; static void* allocs[TEST_ALLOCS_SIZE]; static int custom_allocs; TEST_BEGIN(test_pool_create_errors) { pool_t *pool; memset(mem_pool, 1, TEST_POOL_SIZE); pool = pool_create(mem_pool, 0, 0, 1); assert_ptr_null(pool, "pool_create() should return NULL for size 0"); pool = pool_create(NULL, TEST_POOL_SIZE, 0, 1); assert_ptr_null(pool, "pool_create() should return NULL for input addr NULL"); } TEST_END TEST_BEGIN(test_pool_create) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); assert_ptr_eq(pool, mem_pool, "pool_create() should return addr with valid input"); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_malloc) { pool_t *pool; 
custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); int *test = pool_malloc(pool, sizeof(int)); assert_ptr_not_null(test, "pool_malloc should return valid ptr"); *test = TEST_VALUE; assert_x_eq(*test, TEST_VALUE, "ptr should be usable"); assert_lu_gt((uintptr_t)test, (uintptr_t)mem_pool, "pool_malloc() should return pointer to memory from pool"); assert_lu_lt((uintptr_t)test, (uintptr_t)mem_pool+TEST_POOL_SIZE, "pool_malloc() should return pointer to memory from pool"); pool_free(pool, test); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_free) { pool_t *pool; int i, j, s = 0, prev_s = 0; int allocs = TEST_POOL_SIZE/TEST_MALLOC_SIZE; void *arr[allocs]; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); for (i = 0; i < TEST_MALLOC_FREE_LOOPS; ++i) { for (j = 0; j < allocs; ++j) { arr[j] = pool_malloc(pool, TEST_MALLOC_SIZE); if (arr[j] != NULL) { s++; } } for (j = 0; j < allocs; ++j) { if (arr[j] != NULL) { pool_free(pool, arr[j]); } } if (prev_s != 0) { assert_x_eq(s, prev_s, "pool_free() should record back used chunks"); } prev_s = s; s = 0; } pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_calloc) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 1, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 0, 1); int *test = pool_calloc(pool, 1, sizeof(int)); assert_ptr_not_null(test, "pool_calloc should return valid ptr"); assert_x_eq(*test, 0, "pool_calloc should return zeroed memory"); pool_free(pool, test); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_realloc) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); int *test = pool_ralloc(pool, NULL, sizeof(int)); assert_ptr_not_null(test, "pool_ralloc with NULL addr should return valid ptr"); int *test2 = pool_ralloc(pool, test, sizeof(int)*2); assert_ptr_not_null(test, "pool_ralloc should return valid ptr"); test2[0] = TEST_VALUE; test2[1] = TEST_VALUE; assert_x_eq(test[1], TEST_VALUE, "ptr should be usable"); pool_free(pool, test2); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_aligned_alloc) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); int *test = pool_aligned_alloc(pool, 1024, 1024); assert_ptr_not_null(test, "pool_aligned_alloc should return valid ptr"); assert_x_eq(((uintptr_t)(test) & 1023), 0, "ptr should be aligned"); assert_lu_gt((uintptr_t)test, (uintptr_t)mem_pool, "pool_aligned_alloc() should return pointer to memory from pool"); assert_lu_lt((uintptr_t)test, (uintptr_t)mem_pool+TEST_POOL_SIZE, "pool_aligned_alloc() should return pointer to memory from pool"); *test = TEST_VALUE; assert_x_eq(*test, TEST_VALUE, "ptr should be usable"); pool_free(pool, test); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_reuse_pool) { pool_t *pool; size_t pool_num = 0; custom_allocs = 0; /* create and destroy pool multiple times */ for (; pool_num<100; ++pool_num) { pool = pool_create(mem_pool, TEST_POOL_SIZE, 0, 1); assert_ptr_not_null(pool, "Can not create pool!!!"); if (pool == NULL) { 
break; } void *prev = NULL; size_t i = 0; /* allocate memory from pool */ for (; i<100; ++i) { void **next = pool_malloc(pool, sizeof (void *)); assert_lu_gt((uintptr_t)next, (uintptr_t)mem_pool, "pool_malloc() should return pointer to memory from pool"); assert_lu_lt((uintptr_t)next, (uintptr_t)mem_pool+TEST_POOL_SIZE, "pool_malloc() should return pointer to memory from pool"); *next = prev; prev = next; } /* free all allocated memory from pool */ while (prev != NULL) { void **act = prev; prev = *act; pool_free(pool, act); } pool_delete(pool); } assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_check_memory) { pool_t *pool; size_t pool_size = POOL_MINIMAL_SIZE; assert_lu_lt(POOL_MINIMAL_SIZE, TEST_POOL_SIZE, "Too small pool size"); size_t object_size; size_t size_allocated; size_t i; size_t j; for (object_size = 8; object_size <= TEST_BUFFOR_CMP_SIZE ; object_size *= 2) { custom_allocs = 0; pool = pool_create(mem_pool, pool_size, 0, 1); assert_ptr_not_null(pool, "Can not create pool!!!"); size_allocated = 0; memset(allocs, 0, TEST_ALLOCS_SIZE * sizeof(void *)); for (i = 0; i < TEST_ALLOCS_SIZE;++i) { allocs[i] = pool_malloc(pool, object_size); if (allocs[i] == NULL) { /* out of memory in pool */ break; } assert_lu_gt((uintptr_t)allocs[i], (uintptr_t)mem_pool, "pool_malloc() should return pointer to memory from pool"); assert_lu_lt((uintptr_t)allocs[i], (uintptr_t)mem_pool+pool_size, "pool_malloc() should return pointer to memory from pool"); size_allocated += object_size; /* fill each allocation with a unique value */ memset(allocs[i], (char)i, object_size); } assert_ptr_not_null(allocs[0], "pool_malloc should return valid ptr"); assert_lu_lt(i + 1, TEST_ALLOCS_SIZE, "All memory should be used"); /* check for unexpected modifications of prepare data */ for (i = 0; i < TEST_ALLOCS_SIZE && allocs[i] != NULL; ++i) { char *buffer = allocs[i]; for (j = 0; j < object_size; ++j) if (buffer[j] != (char)i) { assert_true(0, "Content of data object was modified unexpectedly" " for object size: %zu, id: %zu", object_size, j); break; } } pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } } TEST_END TEST_BEGIN(test_pool_use_all_memory) { pool_t *pool; size_t size = 0; size_t pool_size = POOL_MINIMAL_SIZE; assert_lu_lt(POOL_MINIMAL_SIZE, TEST_POOL_SIZE, "Too small pool size"); custom_allocs = 0; pool = pool_create(mem_pool, pool_size, 0, 1); assert_ptr_not_null(pool, "Can not create pool!!!"); void *prev = NULL; for (;;) { void **next = pool_malloc(pool, sizeof (void *)); if (next == NULL) { /* Out of memory in pool, test end */ break; } size += sizeof (void *); assert_ptr_not_null(next, "pool_malloc should return valid ptr"); assert_lu_gt((uintptr_t)next, (uintptr_t)mem_pool, "pool_malloc() should return pointer to memory from pool"); assert_lu_lt((uintptr_t)next, (uintptr_t)mem_pool+pool_size, "pool_malloc() should return pointer to memory from pool"); *next = prev; assert_x_eq((uintptr_t)(*next), (uintptr_t)(prev), "ptr should be usable"); prev = next; } assert_lu_gt(size, 0, "Can not alloc any memory from pool"); /* Free all allocated memory from pool */ while (prev != NULL) { void **act = prev; prev = *act; pool_free(pool, act); } pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_extend_errors) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); 
memset(mem_extend_ok, 0, TEST_TOO_SMALL_POOL_SIZE); size_t usable_size = pool_extend(pool, mem_extend_ok, TEST_TOO_SMALL_POOL_SIZE, 0); assert_zu_eq(usable_size, 0, "pool_extend() should return 0" " when provided with memory size smaller than chunksize"); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_extend) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); memset(mem_extend_ok, 0, TEST_POOL_SIZE); size_t usable_size = pool_extend(pool, mem_extend_ok, TEST_POOL_SIZE, 0); assert_zu_ne(usable_size, 0, "pool_extend() should return value" " after alignment when provided with enough memory"); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END TEST_BEGIN(test_pool_extend_after_out_of_memory) { pool_t *pool; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool = pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); /* use all the memory from pool and from base allocator */ while (pool_malloc(pool, sizeof (void *))); pool->base_next_addr = pool->base_past_addr; memset(mem_extend_ok, 0, TEST_POOL_SIZE); size_t usable_size = pool_extend(pool, mem_extend_ok, TEST_POOL_SIZE, 0); assert_zu_ne(usable_size, 0, "pool_extend() should return value" " after alignment when provided with enough memory"); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); } TEST_END /* * print_jemalloc_messages -- custom print function, for jemalloc */ static void print_jemalloc_messages(void* ignore, const char *s) { } TEST_BEGIN(test_pool_check_extend) { je_malloc_message = print_jemalloc_messages; pool_t *pool; custom_allocs = 0; pool = pool_create(mem_pool, TEST_POOL_SIZE, 0, 1); pool_malloc(pool, 100); assert_d_eq(je_pool_check(pool), 1, "je_pool_check() return error"); pool_delete(pool); assert_d_ne(je_pool_check(pool), 1, "je_pool_check() not return error"); pool = pool_create(mem_pool, TEST_POOL_SIZE, 0, 1); assert_d_eq(je_pool_check(pool), 1, "je_pool_check() return error"); size_t size_extend = pool_extend(pool, mem_extend_ok, TEST_POOL_SIZE, 1); assert_zu_ne(size_extend, 0, "pool_extend() should add some free space"); assert_d_eq(je_pool_check(pool), 1, "je_pool_check() return error"); pool_malloc(pool, 100); pool_delete(pool); assert_d_ne(je_pool_check(pool), 1, "je_pool_check() not return error"); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); je_malloc_message = NULL; } TEST_END TEST_BEGIN(test_pool_check_memory_out_of_range) { je_malloc_message = print_jemalloc_messages; pool_t *pool; custom_allocs = 0; pool = pool_create(mem_pool, TEST_POOL_SIZE, 0, 1); assert_d_eq(je_pool_check(pool), 1, "je_pool_check() return error"); void *usable_addr = (void *)CHUNK_CEILING((uintptr_t)mem_extend_ok); size_t usable_size = (TEST_POOL_SIZE - (uintptr_t)(usable_addr - (void *)mem_extend_ok)) & ~chunksize_mask; chunk_record(pool, &pool->chunks_szad_mmap, &pool->chunks_ad_mmap, usable_addr, usable_size, 0); assert_d_ne(je_pool_check(pool), 1, "je_pool_check() not return error"); pool_delete(pool); assert_d_ne(je_pool_check(pool), 1, "je_pool_check() return error"); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); je_malloc_message = NULL; } TEST_END TEST_BEGIN(test_pool_check_memory_overlap) { je_malloc_message = print_jemalloc_messages; pool_t *pool; pool_t *pool2; custom_allocs = 0; memset(mem_pool, 0, TEST_POOL_SIZE); pool =
pool_create(mem_pool, TEST_POOL_SIZE, 1, 1); size_t size_extend = pool_extend(pool, mem_extend_ok, TEST_POOL_SIZE, 1); assert_zu_ne(size_extend, 0, "pool_extend() should add some free space"); assert_d_eq(je_pool_check(pool), 1, "je_pool_check() return error"); /* create another pool in the same memory region */ pool2 = pool_create(mem_extend_ok, TEST_POOL_SIZE, 0, 1); assert_d_ne(je_pool_check(pool), 1, "je_pool_check() not return error"); assert_d_ne(je_pool_check(pool2), 1, "je_pool_check() not return error"); pool_delete(pool2); pool_delete(pool); assert_d_eq(custom_allocs, 0, "memory leak when using custom allocator"); je_malloc_message = NULL; } TEST_END #define POOL_TEST_CASES\ test_pool_create_errors, \ test_pool_create, \ test_pool_malloc, \ test_pool_free, \ test_pool_calloc, \ test_pool_realloc, \ test_pool_aligned_alloc, \ test_pool_reuse_pool, \ test_pool_check_memory, \ test_pool_use_all_memory, \ test_pool_extend_errors, \ test_pool_extend, \ test_pool_extend_after_out_of_memory, \ test_pool_check_extend, \ test_pool_check_memory_out_of_range, \ test_pool_check_memory_overlap vmem-1.8/src/jemalloc/test/unit/pool_base_alloc.c000066400000000000000000000001171361505074100221230ustar00rootroot00000000000000#include "pool.h" int main(void) { return test_not_init(POOL_TEST_CASES); } vmem-1.8/src/jemalloc/test/unit/pool_custom_alloc.c000066400000000000000000000006451361505074100225310ustar00rootroot00000000000000#include "pool.h" static char buff_alloc[4*1024]; static char *buff_ptr = buff_alloc; void * malloc_test(size_t size) { custom_allocs++; void *ret = buff_ptr; buff_ptr = buff_ptr + size; return ret; } void free_test(void *ptr) { custom_allocs--; if(custom_allocs == 0) { buff_ptr = buff_alloc; } } int main(void) { je_pool_set_alloc_funcs(malloc_test, free_test); return test_not_init(POOL_TEST_CASES); } vmem-1.8/src/jemalloc/test/unit/pool_custom_alloc_internal.c000066400000000000000000000006701361505074100244230ustar00rootroot00000000000000#include "pool.h" void * malloc_test(size_t size) { custom_allocs++; return malloc(size); } void free_test(void *ptr) { custom_allocs--; free(ptr); } int main(void) { /* * Initialize custom allocator who call malloc from jemalloc. 
*/ if (nallocx(1, 0) == 0) { malloc_printf("Initialization error"); return (test_status_fail); } je_pool_set_alloc_funcs(malloc_test, free_test); return test_not_init(POOL_TEST_CASES); } vmem-1.8/src/jemalloc/test/unit/prof_accum.c000066400000000000000000000033661361505074100211350ustar00rootroot00000000000000#include "prof_accum.h" #ifdef JEMALLOC_PROF const char *malloc_conf = "prof:true,prof_accum:true,prof_active:false,lg_prof_sample:0"; #endif static int prof_dump_open_intercept(bool propagate_err, const char *filename) { int fd; fd = open("/dev/null", O_WRONLY); assert_d_ne(fd, -1, "Unexpected open() failure"); return (fd); } static void * alloc_from_permuted_backtrace(unsigned thd_ind, unsigned iteration) { return (alloc_0(thd_ind*NALLOCS_PER_THREAD + iteration)); } static void * thd_start(void *varg) { unsigned thd_ind = *(unsigned *)varg; size_t bt_count_prev, bt_count; unsigned i_prev, i; i_prev = 0; bt_count_prev = 0; for (i = 0; i < NALLOCS_PER_THREAD; i++) { void *p = alloc_from_permuted_backtrace(thd_ind, i); dallocx(p, 0); if (i % DUMP_INTERVAL == 0) { assert_d_eq(mallctl("prof.dump", NULL, NULL, NULL, 0), 0, "Unexpected error while dumping heap profile"); } if (i % BT_COUNT_CHECK_INTERVAL == 0 || i+1 == NALLOCS_PER_THREAD) { bt_count = prof_bt_count(); assert_zu_le(bt_count_prev+(i-i_prev), bt_count, "Expected larger backtrace count increase"); i_prev = i; bt_count_prev = bt_count; } } return (NULL); } TEST_BEGIN(test_idump) { bool active; thd_t thds[NTHREADS]; unsigned thd_args[NTHREADS]; unsigned i; test_skip_if(!config_prof); active = true; assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)), 0, "Unexpected mallctl failure while activating profiling"); prof_dump_open = prof_dump_open_intercept; for (i = 0; i < NTHREADS; i++) { thd_args[i] = i; thd_create(&thds[i], thd_start, (void *)&thd_args[i]); } for (i = 0; i < NTHREADS; i++) thd_join(thds[i], NULL); } TEST_END int main(void) { return (test( test_idump)); } vmem-1.8/src/jemalloc/test/unit/prof_accum.h000066400000000000000000000014321361505074100211320ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define NTHREADS 4 #define NALLOCS_PER_THREAD 50 #define DUMP_INTERVAL 1 #define BT_COUNT_CHECK_INTERVAL 5 #define alloc_n_proto(n) \ void *alloc_##n(unsigned bits); alloc_n_proto(0) alloc_n_proto(1) #define alloc_n_gen(n) \ void * \ alloc_##n(unsigned bits) \ { \ void *p; \ \ if (bits == 0) \ p = mallocx(1, 0); \ else { \ switch (bits & 0x1U) { \ case 0: \ p = (alloc_0(bits >> 1)); \ break; \ case 1: \ p = (alloc_1(bits >> 1)); \ break; \ default: not_reached(); \ } \ } \ /* Intentionally sabotage tail call optimization. 
*/ \ assert_ptr_not_null(p, "Unexpected mallocx() failure"); \ return (p); \ } vmem-1.8/src/jemalloc/test/unit/prof_accum_a.c000066400000000000000000000000501361505074100214200ustar00rootroot00000000000000#include "prof_accum.h" alloc_n_gen(0) vmem-1.8/src/jemalloc/test/unit/prof_accum_b.c000066400000000000000000000000501361505074100214210ustar00rootroot00000000000000#include "prof_accum.h" alloc_n_gen(1) vmem-1.8/src/jemalloc/test/unit/prof_gdump.c000066400000000000000000000021471361505074100211550ustar00rootroot00000000000000#include "test/jemalloc_test.h" #ifdef JEMALLOC_PROF const char *malloc_conf = "prof:true,prof_active:false,prof_gdump:true"; #endif static bool did_prof_dump_open; static int prof_dump_open_intercept(bool propagate_err, const char *filename) { int fd; did_prof_dump_open = true; fd = open("/dev/null", O_WRONLY); assert_d_ne(fd, -1, "Unexpected open() failure"); return (fd); } TEST_BEGIN(test_gdump) { bool active; void *p, *q; test_skip_if(!config_prof); active = true; assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)), 0, "Unexpected mallctl failure while activating profiling"); prof_dump_open = prof_dump_open_intercept; did_prof_dump_open = false; p = mallocx(chunksize, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); assert_true(did_prof_dump_open, "Expected a profile dump"); did_prof_dump_open = false; q = mallocx(chunksize, 0); assert_ptr_not_null(q, "Unexpected mallocx() failure"); assert_true(did_prof_dump_open, "Expected a profile dump"); dallocx(p, 0); dallocx(q, 0); } TEST_END int main(void) { return (test( test_gdump)); } vmem-1.8/src/jemalloc/test/unit/prof_idump.c000066400000000000000000000017111361505074100211530ustar00rootroot00000000000000#include "test/jemalloc_test.h" #ifdef JEMALLOC_PROF const char *malloc_conf = "prof:true,prof_accum:true,prof_active:false,lg_prof_sample:0," "lg_prof_interval:0"; #endif static bool did_prof_dump_open; static int prof_dump_open_intercept(bool propagate_err, const char *filename) { int fd; did_prof_dump_open = true; fd = open("/dev/null", O_WRONLY); assert_d_ne(fd, -1, "Unexpected open() failure"); return (fd); } TEST_BEGIN(test_idump) { bool active; void *p; test_skip_if(!config_prof); active = true; assert_d_eq(mallctl("prof.active", NULL, NULL, &active, sizeof(active)), 0, "Unexpected mallctl failure while activating profiling"); prof_dump_open = prof_dump_open_intercept; did_prof_dump_open = false; p = mallocx(1, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); dallocx(p, 0); assert_true(did_prof_dump_open, "Expected a profile dump"); } TEST_END int main(void) { return (test( test_idump)); } vmem-1.8/src/jemalloc/test/unit/ql.c000066400000000000000000000106031361505074100174230ustar00rootroot00000000000000#include "test/jemalloc_test.h" /* Number of ring entries, in [2..26]. 
*/ #define NENTRIES 9 typedef struct list_s list_t; typedef ql_head(list_t) list_head_t; struct list_s { ql_elm(list_t) link; char id; }; static void test_empty_list(list_head_t *head) { list_t *t; unsigned i; assert_ptr_null(ql_first(head), "Unexpected element for empty list"); assert_ptr_null(ql_last(head, link), "Unexpected element for empty list"); i = 0; ql_foreach(t, head, link) { i++; } assert_u_eq(i, 0, "Unexpected element for empty list"); i = 0; ql_reverse_foreach(t, head, link) { i++; } assert_u_eq(i, 0, "Unexpected element for empty list"); } TEST_BEGIN(test_ql_empty) { list_head_t head; ql_new(&head); test_empty_list(&head); } TEST_END static void init_entries(list_t *entries, unsigned nentries) { unsigned i; for (i = 0; i < nentries; i++) { entries[i].id = 'a' + i; ql_elm_new(&entries[i], link); } } static void test_entries_list(list_head_t *head, list_t *entries, unsigned nentries) { list_t *t; unsigned i; assert_c_eq(ql_first(head)->id, entries[0].id, "Element id mismatch"); assert_c_eq(ql_last(head, link)->id, entries[nentries-1].id, "Element id mismatch"); i = 0; ql_foreach(t, head, link) { assert_c_eq(t->id, entries[i].id, "Element id mismatch"); i++; } i = 0; ql_reverse_foreach(t, head, link) { assert_c_eq(t->id, entries[nentries-i-1].id, "Element id mismatch"); i++; } for (i = 0; i < nentries-1; i++) { t = ql_next(head, &entries[i], link); assert_c_eq(t->id, entries[i+1].id, "Element id mismatch"); } assert_ptr_null(ql_next(head, &entries[nentries-1], link), "Unexpected element"); assert_ptr_null(ql_prev(head, &entries[0], link), "Unexpected element"); for (i = 1; i < nentries; i++) { t = ql_prev(head, &entries[i], link); assert_c_eq(t->id, entries[i-1].id, "Element id mismatch"); } } TEST_BEGIN(test_ql_tail_insert) { list_head_t head; list_t entries[NENTRIES]; unsigned i; ql_new(&head); init_entries(entries, sizeof(entries)/sizeof(list_t)); for (i = 0; i < NENTRIES; i++) ql_tail_insert(&head, &entries[i], link); test_entries_list(&head, entries, NENTRIES); } TEST_END TEST_BEGIN(test_ql_tail_remove) { list_head_t head; list_t entries[NENTRIES]; unsigned i; ql_new(&head); init_entries(entries, sizeof(entries)/sizeof(list_t)); for (i = 0; i < NENTRIES; i++) ql_tail_insert(&head, &entries[i], link); for (i = 0; i < NENTRIES; i++) { test_entries_list(&head, entries, NENTRIES-i); ql_tail_remove(&head, list_t, link); } test_empty_list(&head); } TEST_END TEST_BEGIN(test_ql_head_insert) { list_head_t head; list_t entries[NENTRIES]; unsigned i; ql_new(&head); init_entries(entries, sizeof(entries)/sizeof(list_t)); for (i = 0; i < NENTRIES; i++) ql_head_insert(&head, &entries[NENTRIES-i-1], link); test_entries_list(&head, entries, NENTRIES); } TEST_END TEST_BEGIN(test_ql_head_remove) { list_head_t head; list_t entries[NENTRIES]; unsigned i; ql_new(&head); init_entries(entries, sizeof(entries)/sizeof(list_t)); for (i = 0; i < NENTRIES; i++) ql_head_insert(&head, &entries[NENTRIES-i-1], link); for (i = 0; i < NENTRIES; i++) { test_entries_list(&head, &entries[i], NENTRIES-i); ql_head_remove(&head, list_t, link); } test_empty_list(&head); } TEST_END TEST_BEGIN(test_ql_insert) { list_head_t head; list_t entries[8]; list_t *a, *b, *c, *d, *e, *f, *g, *h; ql_new(&head); init_entries(entries, sizeof(entries)/sizeof(list_t)); a = &entries[0]; b = &entries[1]; c = &entries[2]; d = &entries[3]; e = &entries[4]; f = &entries[5]; g = &entries[6]; h = &entries[7]; /* * ql_remove(), ql_before_insert(), and ql_after_insert() are used * internally by other macros that are already tested, so 
there's no * need to test them completely. However, insertion/deletion from the * middle of lists is not otherwise tested; do so here. */ ql_tail_insert(&head, f, link); ql_before_insert(&head, f, b, link); ql_before_insert(&head, f, c, link); ql_after_insert(f, h, link); ql_after_insert(f, g, link); ql_before_insert(&head, b, a, link); ql_after_insert(c, d, link); ql_before_insert(&head, f, e, link); test_entries_list(&head, entries, sizeof(entries)/sizeof(list_t)); } TEST_END int main(void) { return (test( test_ql_empty, test_ql_tail_insert, test_ql_tail_remove, test_ql_head_insert, test_ql_head_remove, test_ql_insert)); } vmem-1.8/src/jemalloc/test/unit/qr.c000066400000000000000000000120641361505074100174340ustar00rootroot00000000000000#include "test/jemalloc_test.h" /* Number of ring entries, in [2..26]. */ #define NENTRIES 9 /* Split index, in [1..NENTRIES). */ #define SPLIT_INDEX 5 typedef struct ring_s ring_t; struct ring_s { qr(ring_t) link; char id; }; static void init_entries(ring_t *entries) { unsigned i; for (i = 0; i < NENTRIES; i++) { qr_new(&entries[i], link); entries[i].id = 'a' + i; } } static void test_independent_entries(ring_t *entries) { ring_t *t; unsigned i, j; for (i = 0; i < NENTRIES; i++) { j = 0; qr_foreach(t, &entries[i], link) { j++; } assert_u_eq(j, 1, "Iteration over single-element ring should visit precisely " "one element"); } for (i = 0; i < NENTRIES; i++) { j = 0; qr_reverse_foreach(t, &entries[i], link) { j++; } assert_u_eq(j, 1, "Iteration over single-element ring should visit precisely " "one element"); } for (i = 0; i < NENTRIES; i++) { t = qr_next(&entries[i], link); assert_ptr_eq(t, &entries[i], "Next element in single-element ring should be same as " "current element"); } for (i = 0; i < NENTRIES; i++) { t = qr_prev(&entries[i], link); assert_ptr_eq(t, &entries[i], "Previous element in single-element ring should be same as " "current element"); } } TEST_BEGIN(test_qr_one) { ring_t entries[NENTRIES]; init_entries(entries); test_independent_entries(entries); } TEST_END static void test_entries_ring(ring_t *entries) { ring_t *t; unsigned i, j; for (i = 0; i < NENTRIES; i++) { j = 0; qr_foreach(t, &entries[i], link) { assert_c_eq(t->id, entries[(i+j) % NENTRIES].id, "Element id mismatch"); j++; } } for (i = 0; i < NENTRIES; i++) { j = 0; qr_reverse_foreach(t, &entries[i], link) { assert_c_eq(t->id, entries[(NENTRIES+i-j-1) % NENTRIES].id, "Element id mismatch"); j++; } } for (i = 0; i < NENTRIES; i++) { t = qr_next(&entries[i], link); assert_c_eq(t->id, entries[(i+1) % NENTRIES].id, "Element id mismatch"); } for (i = 0; i < NENTRIES; i++) { t = qr_prev(&entries[i], link); assert_c_eq(t->id, entries[(NENTRIES+i-1) % NENTRIES].id, "Element id mismatch"); } } TEST_BEGIN(test_qr_after_insert) { ring_t entries[NENTRIES]; unsigned i; init_entries(entries); for (i = 1; i < NENTRIES; i++) qr_after_insert(&entries[i - 1], &entries[i], link); test_entries_ring(entries); } TEST_END TEST_BEGIN(test_qr_remove) { ring_t entries[NENTRIES]; ring_t *t; unsigned i, j; init_entries(entries); for (i = 1; i < NENTRIES; i++) qr_after_insert(&entries[i - 1], &entries[i], link); for (i = 0; i < NENTRIES; i++) { j = 0; qr_foreach(t, &entries[i], link) { assert_c_eq(t->id, entries[i+j].id, "Element id mismatch"); j++; } j = 0; qr_reverse_foreach(t, &entries[i], link) { assert_c_eq(t->id, entries[NENTRIES - 1 - j].id, "Element id mismatch"); j++; } qr_remove(&entries[i], link); } test_independent_entries(entries); } TEST_END TEST_BEGIN(test_qr_before_insert) { ring_t 
entries[NENTRIES]; ring_t *t; unsigned i, j; init_entries(entries); for (i = 1; i < NENTRIES; i++) qr_before_insert(&entries[i - 1], &entries[i], link); for (i = 0; i < NENTRIES; i++) { j = 0; qr_foreach(t, &entries[i], link) { assert_c_eq(t->id, entries[(NENTRIES+i-j) % NENTRIES].id, "Element id mismatch"); j++; } } for (i = 0; i < NENTRIES; i++) { j = 0; qr_reverse_foreach(t, &entries[i], link) { assert_c_eq(t->id, entries[(i+j+1) % NENTRIES].id, "Element id mismatch"); j++; } } for (i = 0; i < NENTRIES; i++) { t = qr_next(&entries[i], link); assert_c_eq(t->id, entries[(NENTRIES+i-1) % NENTRIES].id, "Element id mismatch"); } for (i = 0; i < NENTRIES; i++) { t = qr_prev(&entries[i], link); assert_c_eq(t->id, entries[(i+1) % NENTRIES].id, "Element id mismatch"); } } TEST_END static void test_split_entries(ring_t *entries) { ring_t *t; unsigned i, j; for (i = 0; i < NENTRIES; i++) { j = 0; qr_foreach(t, &entries[i], link) { if (i < SPLIT_INDEX) { assert_c_eq(t->id, entries[(i+j) % SPLIT_INDEX].id, "Element id mismatch"); } else { assert_c_eq(t->id, entries[(i+j-SPLIT_INDEX) % (NENTRIES-SPLIT_INDEX) + SPLIT_INDEX].id, "Element id mismatch"); } j++; } } } TEST_BEGIN(test_qr_meld_split) { ring_t entries[NENTRIES]; unsigned i; init_entries(entries); for (i = 1; i < NENTRIES; i++) qr_after_insert(&entries[i - 1], &entries[i], link); qr_split(&entries[0], &entries[SPLIT_INDEX], link); test_split_entries(entries); qr_meld(&entries[0], &entries[SPLIT_INDEX], link); test_entries_ring(entries); qr_meld(&entries[0], &entries[SPLIT_INDEX], link); test_split_entries(entries); qr_split(&entries[0], &entries[SPLIT_INDEX], link); test_entries_ring(entries); qr_split(&entries[0], &entries[0], link); test_entries_ring(entries); qr_meld(&entries[0], &entries[0], link); test_entries_ring(entries); } TEST_END int main(void) { return (test( test_qr_one, test_qr_after_insert, test_qr_remove, test_qr_before_insert, test_qr_meld_split)); } vmem-1.8/src/jemalloc/test/unit/quarantine.c000066400000000000000000000050271361505074100211620ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define QUARANTINE_SIZE 8192 #define STRINGIFY_HELPER(x) #x #define STRINGIFY(x) STRINGIFY_HELPER(x) #ifdef JEMALLOC_FILL const char *malloc_conf = "abort:false,junk:true,redzone:true,quarantine:" STRINGIFY(QUARANTINE_SIZE); #endif void quarantine_clear(void) { void *p; p = mallocx(QUARANTINE_SIZE*2, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); dallocx(p, 0); } TEST_BEGIN(test_quarantine) { #define SZ ZU(256) #define NQUARANTINED (QUARANTINE_SIZE/SZ) void *quarantined[NQUARANTINED+1]; size_t i, j; test_skip_if(!config_fill); assert_zu_eq(nallocx(SZ, 0), SZ, "SZ=%zu does not precisely equal a size class", SZ); quarantine_clear(); /* * Allocate enough regions to completely fill the quarantine, plus one * more. The last iteration occurs with a completely full quarantine, * but no regions should be drained from the quarantine until the last * deallocation occurs. Therefore no region recycling should occur * until after this loop completes. 
*/ for (i = 0; i < NQUARANTINED+1; i++) { void *p = mallocx(SZ, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); quarantined[i] = p; dallocx(p, 0); for (j = 0; j < i; j++) { assert_ptr_ne(p, quarantined[j], "Quarantined region recycled too early; " "i=%zu, j=%zu", i, j); } } #undef NQUARANTINED #undef SZ } TEST_END static bool detected_redzone_corruption; static void arena_redzone_corruption_replacement(void *ptr, size_t usize, bool after, size_t offset, uint8_t byte) { detected_redzone_corruption = true; } TEST_BEGIN(test_quarantine_redzone) { char *s; arena_redzone_corruption_t *arena_redzone_corruption_orig; test_skip_if(!config_fill); arena_redzone_corruption_orig = arena_redzone_corruption; arena_redzone_corruption = arena_redzone_corruption_replacement; /* Test underflow. */ detected_redzone_corruption = false; s = (char *)mallocx(1, 0); assert_ptr_not_null((void *)s, "Unexpected mallocx() failure"); s[-1] = 0xbb; dallocx(s, 0); assert_true(detected_redzone_corruption, "Did not detect redzone corruption"); /* Test overflow. */ detected_redzone_corruption = false; s = (char *)mallocx(1, 0); assert_ptr_not_null((void *)s, "Unexpected mallocx() failure"); s[sallocx(s, 0)] = 0xbb; dallocx(s, 0); assert_true(detected_redzone_corruption, "Did not detect redzone corruption"); arena_redzone_corruption = arena_redzone_corruption_orig; } TEST_END int main(void) { return (test( test_quarantine, test_quarantine_redzone)); } vmem-1.8/src/jemalloc/test/unit/rb.c000066400000000000000000000164061361505074100174210ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define rbtn_black_height(a_type, a_field, a_rbt, r_height) do { \ a_type *rbp_bh_t; \ for (rbp_bh_t = (a_rbt)->rbt_root, (r_height) = 0; \ rbp_bh_t != &(a_rbt)->rbt_nil; \ rbp_bh_t = rbtn_left_get(a_type, a_field, rbp_bh_t)) { \ if (rbtn_red_get(a_type, a_field, rbp_bh_t) == false) { \ (r_height)++; \ } \ } \ } while (0) typedef struct node_s node_t; struct node_s { #define NODE_MAGIC 0x9823af7e uint32_t magic; rb_node(node_t) link; uint64_t key; }; static int node_cmp(node_t *a, node_t *b) { int ret; assert_u32_eq(a->magic, NODE_MAGIC, "Bad magic"); assert_u32_eq(b->magic, NODE_MAGIC, "Bad magic"); ret = (a->key > b->key) - (a->key < b->key); if (ret == 0) { /* * Duplicates are not allowed in the tree, so force an * arbitrary ordering for non-identical items with equal keys. */ ret = (((uintptr_t)a) > ((uintptr_t)b)) - (((uintptr_t)a) < ((uintptr_t)b)); } return (ret); } typedef rb_tree(node_t) tree_t; rb_gen(static, tree_, tree_t, node_t, link, node_cmp); TEST_BEGIN(test_rb_empty) { tree_t tree; node_t key; tree_new(&tree); assert_ptr_null(tree_first(&tree), "Unexpected node"); assert_ptr_null(tree_last(&tree), "Unexpected node"); key.key = 0; key.magic = NODE_MAGIC; assert_ptr_null(tree_search(&tree, &key), "Unexpected node"); key.key = 0; key.magic = NODE_MAGIC; assert_ptr_null(tree_nsearch(&tree, &key), "Unexpected node"); key.key = 0; key.magic = NODE_MAGIC; assert_ptr_null(tree_psearch(&tree, &key), "Unexpected node"); } TEST_END static unsigned tree_recurse(node_t *node, unsigned black_height, unsigned black_depth, node_t *nil) { unsigned ret = 0; node_t *left_node = rbtn_left_get(node_t, link, node); node_t *right_node = rbtn_right_get(node_t, link, node); if (rbtn_red_get(node_t, link, node) == false) black_depth++; /* Red nodes must be interleaved with black nodes. 
*/ if (rbtn_red_get(node_t, link, node)) { assert_false(rbtn_red_get(node_t, link, left_node), "Node should be black"); assert_false(rbtn_red_get(node_t, link, right_node), "Node should be black"); } if (node == nil) return (ret); /* Self. */ assert_u32_eq(node->magic, NODE_MAGIC, "Bad magic"); /* Left subtree. */ if (left_node != nil) ret += tree_recurse(left_node, black_height, black_depth, nil); else ret += (black_depth != black_height); /* Right subtree. */ if (right_node != nil) ret += tree_recurse(right_node, black_height, black_depth, nil); else ret += (black_depth != black_height); return (ret); } static node_t * tree_iterate_cb(tree_t *tree, node_t *node, void *data) { unsigned *i = (unsigned *)data; node_t *search_node; assert_u32_eq(node->magic, NODE_MAGIC, "Bad magic"); /* Test rb_search(). */ search_node = tree_search(tree, node); assert_ptr_eq(search_node, node, "tree_search() returned unexpected node"); /* Test rb_nsearch(). */ search_node = tree_nsearch(tree, node); assert_ptr_eq(search_node, node, "tree_nsearch() returned unexpected node"); /* Test rb_psearch(). */ search_node = tree_psearch(tree, node); assert_ptr_eq(search_node, node, "tree_psearch() returned unexpected node"); (*i)++; return (NULL); } static unsigned tree_iterate(tree_t *tree) { unsigned i; i = 0; tree_iter(tree, NULL, tree_iterate_cb, (void *)&i); return (i); } static unsigned tree_iterate_reverse(tree_t *tree) { unsigned i; i = 0; tree_reverse_iter(tree, NULL, tree_iterate_cb, (void *)&i); return (i); } static void node_remove(tree_t *tree, node_t *node, unsigned nnodes) { node_t *search_node; unsigned black_height, imbalances; tree_remove(tree, node); /* Test rb_nsearch(). */ search_node = tree_nsearch(tree, node); if (search_node != NULL) { assert_u64_ge(search_node->key, node->key, "Key ordering error"); } /* Test rb_psearch(). */ search_node = tree_psearch(tree, node); if (search_node != NULL) { assert_u64_le(search_node->key, node->key, "Key ordering error"); } node->magic = 0; rbtn_black_height(node_t, link, tree, black_height); imbalances = tree_recurse(tree->rbt_root, black_height, 0, &(tree->rbt_nil)); assert_u_eq(imbalances, 0, "Tree is unbalanced"); assert_u_eq(tree_iterate(tree), nnodes-1, "Unexpected node iteration count"); assert_u_eq(tree_iterate_reverse(tree), nnodes-1, "Unexpected node iteration count"); } static node_t * remove_iterate_cb(tree_t *tree, node_t *node, void *data) { unsigned *nnodes = (unsigned *)data; node_t *ret = tree_next(tree, node); node_remove(tree, node, *nnodes); return (ret); } static node_t * remove_reverse_iterate_cb(tree_t *tree, node_t *node, void *data) { unsigned *nnodes = (unsigned *)data; node_t *ret = tree_prev(tree, node); node_remove(tree, node, *nnodes); return (ret); } TEST_BEGIN(test_rb_random) { #define NNODES 25 #define NBAGS 250 #define SEED 42 sfmt_t *sfmt; uint64_t bag[NNODES]; tree_t tree; node_t nodes[NNODES]; unsigned i, j, k, black_height, imbalances; sfmt = init_gen_rand(SEED); for (i = 0; i < NBAGS; i++) { switch (i) { case 0: /* Insert in order. */ for (j = 0; j < NNODES; j++) bag[j] = j; break; case 1: /* Insert in reverse order. */ for (j = 0; j < NNODES; j++) bag[j] = NNODES - j - 1; break; default: for (j = 0; j < NNODES; j++) bag[j] = gen_rand64_range(sfmt, NNODES); } for (j = 1; j <= NNODES; j++) { /* Initialize tree and nodes. */ tree_new(&tree); tree.rbt_nil.magic = 0; for (k = 0; k < j; k++) { nodes[k].magic = NODE_MAGIC; nodes[k].key = bag[k]; } /* Insert nodes. 
*/ for (k = 0; k < j; k++) { tree_insert(&tree, &nodes[k]); rbtn_black_height(node_t, link, &tree, black_height); imbalances = tree_recurse(tree.rbt_root, black_height, 0, &(tree.rbt_nil)); assert_u_eq(imbalances, 0, "Tree is unbalanced"); assert_u_eq(tree_iterate(&tree), k+1, "Unexpected node iteration count"); assert_u_eq(tree_iterate_reverse(&tree), k+1, "Unexpected node iteration count"); assert_ptr_not_null(tree_first(&tree), "Tree should not be empty"); assert_ptr_not_null(tree_last(&tree), "Tree should not be empty"); tree_next(&tree, &nodes[k]); tree_prev(&tree, &nodes[k]); } /* Remove nodes. */ switch (i % 4) { case 0: for (k = 0; k < j; k++) node_remove(&tree, &nodes[k], j - k); break; case 1: for (k = j; k > 0; k--) node_remove(&tree, &nodes[k-1], k); break; case 2: { node_t *start; unsigned nnodes = j; start = NULL; do { start = tree_iter(&tree, start, remove_iterate_cb, (void *)&nnodes); nnodes--; } while (start != NULL); assert_u_eq(nnodes, 0, "Removal terminated early"); break; } case 3: { node_t *start; unsigned nnodes = j; start = NULL; do { start = tree_reverse_iter(&tree, start, remove_reverse_iterate_cb, (void *)&nnodes); nnodes--; } while (start != NULL); assert_u_eq(nnodes, 0, "Removal terminated early"); break; } default: not_reached(); } } } fini_gen_rand(sfmt); #undef NNODES #undef NBAGS #undef SEED } TEST_END int main(void) { return (test( test_rb_empty, test_rb_random)); } vmem-1.8/src/jemalloc/test/unit/rtree.c000066400000000000000000000057301361505074100201350ustar00rootroot00000000000000#include "test/jemalloc_test.h" void * rtree_malloc(pool_t *pool, size_t size) { return imalloc(size); } void rtree_free(pool_t *pool, void *ptr) { return idalloc(ptr); } TEST_BEGIN(test_rtree_get_empty) { unsigned i; for (i = 1; i <= (sizeof(uintptr_t) << 3); i++) { rtree_t *rtree = rtree_new(i, rtree_malloc, rtree_free, pools[0]); assert_u_eq(rtree_get(rtree, 0), 0, "rtree_get() should return NULL for empty tree"); rtree_delete(rtree); } } TEST_END TEST_BEGIN(test_rtree_extrema) { unsigned i; for (i = 1; i <= (sizeof(uintptr_t) << 3); i++) { rtree_t *rtree = rtree_new(i, rtree_malloc, rtree_free, pools[0]); rtree_set(rtree, 0, 1); assert_u_eq(rtree_get(rtree, 0), 1, "rtree_get() should return previously set value"); rtree_set(rtree, ~((uintptr_t)0), 1); assert_u_eq(rtree_get(rtree, ~((uintptr_t)0)), 1, "rtree_get() should return previously set value"); rtree_delete(rtree); } } TEST_END TEST_BEGIN(test_rtree_bits) { unsigned i, j, k; for (i = 1; i < (sizeof(uintptr_t) << 3); i++) { uintptr_t keys[] = {0, 1, (((uintptr_t)1) << (sizeof(uintptr_t)*8-i)) - 1}; rtree_t *rtree = rtree_new(i, rtree_malloc, rtree_free, pools[0]); for (j = 0; j < sizeof(keys)/sizeof(uintptr_t); j++) { rtree_set(rtree, keys[j], 1); for (k = 0; k < sizeof(keys)/sizeof(uintptr_t); k++) { assert_u_eq(rtree_get(rtree, keys[k]), 1, "rtree_get() should return previously set " "value and ignore insignificant key bits; " "i=%u, j=%u, k=%u, set key=%#"PRIxPTR", " "get key=%#"PRIxPTR, i, j, k, keys[j], keys[k]); } assert_u_eq(rtree_get(rtree, (((uintptr_t)1) << (sizeof(uintptr_t)*8-i))), 0, "Only leftmost rtree leaf should be set; " "i=%u, j=%u", i, j); rtree_set(rtree, keys[j], 0); } rtree_delete(rtree); } } TEST_END TEST_BEGIN(test_rtree_random) { unsigned i; sfmt_t *sfmt; #define NSET 100 #define SEED 42 sfmt = init_gen_rand(SEED); for (i = 1; i <= (sizeof(uintptr_t) << 3); i++) { rtree_t *rtree = rtree_new(i, rtree_malloc, rtree_free, pools[0]); uintptr_t keys[NSET]; unsigned j; for (j = 0; j < NSET; j++) { 
keys[j] = (uintptr_t)gen_rand64(sfmt); rtree_set(rtree, keys[j], 1); assert_u_eq(rtree_get(rtree, keys[j]), 1, "rtree_get() should return previously set value"); } for (j = 0; j < NSET; j++) { assert_u_eq(rtree_get(rtree, keys[j]), 1, "rtree_get() should return previously set value"); } for (j = 0; j < NSET; j++) { rtree_set(rtree, keys[j], 0); assert_u_eq(rtree_get(rtree, keys[j]), 0, "rtree_get() should return previously set value"); } for (j = 0; j < NSET; j++) { assert_u_eq(rtree_get(rtree, keys[j]), 0, "rtree_get() should return previously set value"); } rtree_delete(rtree); } fini_gen_rand(sfmt); #undef NSET #undef SEED } TEST_END int main(void) { return (test( test_rtree_get_empty, test_rtree_extrema, test_rtree_bits, test_rtree_random)); } vmem-1.8/src/jemalloc/test/unit/stats.c000066400000000000000000000300361361505074100201470ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_stats_summary) { size_t *cactive; size_t sz, allocated, active, mapped; int expected = config_stats ? 0 : ENOENT; sz = sizeof(cactive); assert_d_eq(mallctl("pool.0.stats.cactive", &cactive, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.allocated", &allocated, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.active", &active, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.mapped", &mapped, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_zu_le(active, *cactive, "active should be no larger than cactive"); assert_zu_le(allocated, active, "allocated should be no larger than active"); assert_zu_le(active, mapped, "active should be no larger than mapped"); } } TEST_END TEST_BEGIN(test_stats_chunks) { size_t current, high; uint64_t total; size_t sz; int expected = config_stats ? 0 : ENOENT; sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.chunks.current", ¤t, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.chunks.total", &total, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.chunks.high", &high, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_zu_le(current, high, "current should be no larger than high"); assert_u64_le((uint64_t)high, total, "high should be no larger than total"); } } TEST_END TEST_BEGIN(test_stats_huge) { void *p; uint64_t epoch; size_t allocated; uint64_t nmalloc, ndalloc, nrequests; size_t sz; int expected = config_stats ? 
0 : ENOENT; p = mallocx(arena_maxclass+1, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0, "Unexpected mallctl() failure"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.huge.allocated", &allocated, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.huge.nmalloc", &nmalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.huge.ndalloc", &ndalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.huge.nrequests", &nrequests, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_zu_gt(allocated, 0, "allocated should be greater than zero"); assert_u64_ge(nmalloc, ndalloc, "nmalloc should be at least as large as ndalloc"); assert_u64_le(nmalloc, nrequests, "nmalloc should no larger than nrequests"); } dallocx(p, 0); } TEST_END TEST_BEGIN(test_stats_arenas_summary) { unsigned arena; void *little, *large; uint64_t epoch; size_t sz; int expected = config_stats ? 0 : ENOENT; size_t mapped; uint64_t npurge, nmadvise, purged; arena = 0; assert_d_eq(mallctl("thread.pool.0.arena", NULL, NULL, &arena, sizeof(arena)), 0, "Unexpected mallctl() failure"); little = mallocx(SMALL_MAXCLASS, 0); assert_ptr_not_null(little, "Unexpected mallocx() failure"); large = mallocx(arena_maxclass, 0); assert_ptr_not_null(large, "Unexpected mallocx() failure"); assert_d_eq(mallctl("pool.0.arena.0.purge", NULL, NULL, NULL, 0), 0, "Unexpected mallctl() failure"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0, "Unexpected mallctl() failure"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.mapped", &mapped, &sz, NULL, 0), expected, "Unexepected mallctl() result"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.npurge", &npurge, &sz, NULL, 0), expected, "Unexepected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.nmadvise", &nmadvise, &sz, NULL, 0), expected, "Unexepected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.purged", &purged, &sz, NULL, 0), expected, "Unexepected mallctl() result"); if (config_stats) { assert_u64_gt(npurge, 0, "At least one purge should have occurred"); assert_u64_le(nmadvise, purged, "nmadvise should be no greater than purged"); } dallocx(little, 0); dallocx(large, 0); } TEST_END void * thd_start(void *arg) { return (NULL); } static void no_lazy_lock(void) { thd_t thd; thd_create(&thd, thd_start, NULL); thd_join(thd, NULL); } TEST_BEGIN(test_stats_arenas_small) { unsigned arena; void *p; size_t sz, allocated; uint64_t epoch, nmalloc, ndalloc, nrequests; int expected = config_stats ? 0 : ENOENT; no_lazy_lock(); /* Lazy locking would dodge tcache testing. */ arena = 0; assert_d_eq(mallctl("thread.pool.0.arena", NULL, NULL, &arena, sizeof(arena)), 0, "Unexpected mallctl() failure"); p = mallocx(SMALL_MAXCLASS, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); assert_d_eq(mallctl("thread.tcache.flush", NULL, NULL, NULL, 0), config_tcache ? 
0 : ENOENT, "Unexpected mallctl() result"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0, "Unexpected mallctl() failure"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.small.allocated", &allocated, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.small.nmalloc", &nmalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.small.ndalloc", &ndalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.small.nrequests", &nrequests, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_zu_gt(allocated, 0, "allocated should be greater than zero"); assert_u64_gt(nmalloc, 0, "nmalloc should be no greater than zero"); assert_u64_ge(nmalloc, ndalloc, "nmalloc should be at least as large as ndalloc"); assert_u64_gt(nrequests, 0, "nrequests should be greater than zero"); } dallocx(p, 0); } TEST_END TEST_BEGIN(test_stats_arenas_large) { unsigned arena; void *p; size_t sz, allocated; uint64_t epoch, nmalloc, ndalloc, nrequests; int expected = config_stats ? 0 : ENOENT; arena = 0; assert_d_eq(mallctl("thread.pool.0.arena", NULL, NULL, &arena, sizeof(arena)), 0, "Unexpected mallctl() failure"); p = mallocx(arena_maxclass, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0, "Unexpected mallctl() failure"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.large.allocated", &allocated, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.large.nmalloc", &nmalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.large.ndalloc", &ndalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.large.nrequests", &nrequests, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_zu_gt(allocated, 0, "allocated should be greater than zero"); assert_zu_gt(nmalloc, 0, "nmalloc should be greater than zero"); assert_zu_ge(nmalloc, ndalloc, "nmalloc should be at least as large as ndalloc"); assert_zu_gt(nrequests, 0, "nrequests should be greater than zero"); } dallocx(p, 0); } TEST_END TEST_BEGIN(test_stats_arenas_bins) { unsigned arena; void *p; size_t sz, allocated, curruns; uint64_t epoch, nmalloc, ndalloc, nrequests, nfills, nflushes; uint64_t nruns, nreruns; int expected = config_stats ? 0 : ENOENT; arena = 0; assert_d_eq(mallctl("thread.pool.0.arena", NULL, NULL, &arena, sizeof(arena)), 0, "Unexpected mallctl() failure"); p = mallocx(arena_bin_info[0].reg_size, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); assert_d_eq(mallctl("thread.tcache.flush", NULL, NULL, NULL, 0), config_tcache ? 
0 : ENOENT, "Unexpected mallctl() result"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0, "Unexpected mallctl() failure"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.allocated", &allocated, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.nmalloc", &nmalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.ndalloc", &ndalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.nrequests", &nrequests, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.nfills", &nfills, &sz, NULL, 0), config_tcache ? expected : ENOENT, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.nflushes", &nflushes, &sz, NULL, 0), config_tcache ? expected : ENOENT, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.nruns", &nruns, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.nreruns", &nreruns, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.bins.0.curruns", &curruns, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_zu_gt(allocated, 0, "allocated should be greater than zero"); assert_u64_gt(nmalloc, 0, "nmalloc should be greater than zero"); assert_u64_ge(nmalloc, ndalloc, "nmalloc should be at least as large as ndalloc"); assert_u64_gt(nrequests, 0, "nrequests should be greater than zero"); if (config_tcache) { assert_u64_gt(nfills, 0, "At least one fill should have occurred"); assert_u64_gt(nflushes, 0, "At least one flush should have occurred"); } assert_u64_gt(nruns, 0, "At least one run should have been allocated"); assert_zu_gt(curruns, 0, "At least one run should be currently allocated"); } dallocx(p, 0); } TEST_END TEST_BEGIN(test_stats_arenas_lruns) { unsigned arena; void *p; uint64_t epoch, nmalloc, ndalloc, nrequests; size_t curruns, sz; int expected = config_stats ? 
0 : ENOENT; arena = 0; assert_d_eq(mallctl("thread.pool.0.arena", NULL, NULL, &arena, sizeof(arena)), 0, "Unexpected mallctl() failure"); p = mallocx(SMALL_MAXCLASS+1, 0); assert_ptr_not_null(p, "Unexpected mallocx() failure"); assert_d_eq(mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch)), 0, "Unexpected mallctl() failure"); sz = sizeof(uint64_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.lruns.0.nmalloc", &nmalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.lruns.0.ndalloc", &ndalloc, &sz, NULL, 0), expected, "Unexpected mallctl() result"); assert_d_eq(mallctl("pool.0.stats.arenas.0.lruns.0.nrequests", &nrequests, &sz, NULL, 0), expected, "Unexpected mallctl() result"); sz = sizeof(size_t); assert_d_eq(mallctl("pool.0.stats.arenas.0.lruns.0.curruns", &curruns, &sz, NULL, 0), expected, "Unexpected mallctl() result"); if (config_stats) { assert_u64_gt(nmalloc, 0, "nmalloc should be greater than zero"); assert_u64_ge(nmalloc, ndalloc, "nmalloc should be at least as large as ndalloc"); assert_u64_gt(nrequests, 0, "nrequests should be greater than zero"); assert_u64_gt(curruns, 0, "At least one run should be currently allocated"); } dallocx(p, 0); } TEST_END int main(void) { return (test( test_stats_summary, test_stats_chunks, test_stats_huge, test_stats_arenas_summary, test_stats_arenas_small, test_stats_arenas_large, test_stats_arenas_bins, test_stats_arenas_lruns)); } vmem-1.8/src/jemalloc/test/unit/tsd.c000066400000000000000000000025701361505074100176050ustar00rootroot00000000000000#include "test/jemalloc_test.h" #define THREAD_DATA 0x72b65c10 typedef unsigned int data_t; static bool data_cleanup_executed; void data_cleanup(void *arg) { data_t *data = (data_t *)arg; assert_x_eq(*data, THREAD_DATA, "Argument passed into cleanup function should match tsd value"); data_cleanup_executed = true; } malloc_tsd_protos(, data, data_t) malloc_tsd_externs(data, data_t) #define DATA_INIT 0x12345678 malloc_tsd_data(, data, data_t, DATA_INIT) malloc_tsd_funcs(, data, data_t, DATA_INIT, data_cleanup) static void * thd_start(void *arg) { data_t d = (data_t)(uintptr_t)arg; assert_x_eq(*data_tsd_get(), DATA_INIT, "Initial tsd get should return initialization value"); data_tsd_set(&d); assert_x_eq(*data_tsd_get(), d, "After tsd set, tsd get should return value that was set"); d = 0; assert_x_eq(*data_tsd_get(), (data_t)(uintptr_t)arg, "Resetting local data should have no effect on tsd"); return (NULL); } TEST_BEGIN(test_tsd_main_thread) { thd_start((void *) 0xa5f3e329); } TEST_END TEST_BEGIN(test_tsd_sub_thread) { thd_t thd; data_cleanup_executed = false; thd_create(&thd, thd_start, (void *)THREAD_DATA); thd_join(thd, NULL); assert_true(data_cleanup_executed, "Cleanup function should have executed"); } TEST_END int main(void) { data_tsd_boot(); return (test( test_tsd_main_thread, test_tsd_sub_thread)); } vmem-1.8/src/jemalloc/test/unit/util.c000066400000000000000000000213111361505074100177620ustar00rootroot00000000000000#include "test/jemalloc_test.h" TEST_BEGIN(test_pow2_ceil) { unsigned i, pow2; size_t x; assert_zu_eq(pow2_ceil(0), 0, "Unexpected result"); for (i = 0; i < sizeof(size_t) * 8; i++) { assert_zu_eq(pow2_ceil(ZU(1) << i), ZU(1) << i, "Unexpected result"); } for (i = 2; i < sizeof(size_t) * 8; i++) { assert_zu_eq(pow2_ceil((ZU(1) << i) - 1), ZU(1) << i, "Unexpected result"); } for (i = 0; i < sizeof(size_t) * 8 - 1; i++) { assert_zu_eq(pow2_ceil((ZU(1) << i) + 1), ZU(1) << (i+1), "Unexpected result"); } for (pow2 = 1; pow2 < 25; pow2++) 
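	/* check every x in ((1 << (pow2 - 1)), (1 << pow2)] exhaustively */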
{ for (x = (ZU(1) << (pow2-1)) + 1; x <= ZU(1) << pow2; x++) { assert_zu_eq(pow2_ceil(x), ZU(1) << pow2, "Unexpected result, x=%zu", x); } } } TEST_END TEST_BEGIN(test_malloc_strtoumax_no_endptr) { int err; set_errno(0); assert_ju_eq(malloc_strtoumax("0", NULL, 0), 0, "Unexpected result"); err = get_errno(); assert_d_eq(err, 0, "Unexpected failure"); } TEST_END TEST_BEGIN(test_malloc_strtoumax) { struct test_s { const char *input; const char *expected_remainder; int base; int expected_errno; const char *expected_errno_name; uintmax_t expected_x; }; #define ERR(e) e, #e #define KUMAX(x) ((uintmax_t)x##ULL) struct test_s tests[] = { {"0", "0", -1, ERR(EINVAL), UINTMAX_MAX}, {"0", "0", 1, ERR(EINVAL), UINTMAX_MAX}, {"0", "0", 37, ERR(EINVAL), UINTMAX_MAX}, {"", "", 0, ERR(EINVAL), UINTMAX_MAX}, {"+", "+", 0, ERR(EINVAL), UINTMAX_MAX}, {"++3", "++3", 0, ERR(EINVAL), UINTMAX_MAX}, {"-", "-", 0, ERR(EINVAL), UINTMAX_MAX}, {"42", "", 0, ERR(0), KUMAX(42)}, {"+42", "", 0, ERR(0), KUMAX(42)}, {"-42", "", 0, ERR(0), KUMAX(-42)}, {"042", "", 0, ERR(0), KUMAX(042)}, {"+042", "", 0, ERR(0), KUMAX(042)}, {"-042", "", 0, ERR(0), KUMAX(-042)}, {"0x42", "", 0, ERR(0), KUMAX(0x42)}, {"+0x42", "", 0, ERR(0), KUMAX(0x42)}, {"-0x42", "", 0, ERR(0), KUMAX(-0x42)}, {"0", "", 0, ERR(0), KUMAX(0)}, {"1", "", 0, ERR(0), KUMAX(1)}, {"42", "", 0, ERR(0), KUMAX(42)}, {" 42", "", 0, ERR(0), KUMAX(42)}, {"42 ", " ", 0, ERR(0), KUMAX(42)}, {"0x", "x", 0, ERR(0), KUMAX(0)}, {"42x", "x", 0, ERR(0), KUMAX(42)}, {"07", "", 0, ERR(0), KUMAX(7)}, {"010", "", 0, ERR(0), KUMAX(8)}, {"08", "8", 0, ERR(0), KUMAX(0)}, {"0_", "_", 0, ERR(0), KUMAX(0)}, {"0x", "x", 0, ERR(0), KUMAX(0)}, {"0X", "X", 0, ERR(0), KUMAX(0)}, {"0xg", "xg", 0, ERR(0), KUMAX(0)}, {"0XA", "", 0, ERR(0), KUMAX(10)}, {"010", "", 10, ERR(0), KUMAX(10)}, {"0x3", "x3", 10, ERR(0), KUMAX(0)}, {"12", "2", 2, ERR(0), KUMAX(1)}, {"78", "8", 8, ERR(0), KUMAX(7)}, {"9a", "a", 10, ERR(0), KUMAX(9)}, {"9A", "A", 10, ERR(0), KUMAX(9)}, {"fg", "g", 16, ERR(0), KUMAX(15)}, {"FG", "G", 16, ERR(0), KUMAX(15)}, {"0xfg", "g", 16, ERR(0), KUMAX(15)}, {"0XFG", "G", 16, ERR(0), KUMAX(15)}, {"z_", "_", 36, ERR(0), KUMAX(35)}, {"Z_", "_", 36, ERR(0), KUMAX(35)} }; #undef ERR #undef KUMAX unsigned i; for (i = 0; i < sizeof(tests)/sizeof(struct test_s); i++) { struct test_s *test = &tests[i]; int err; uintmax_t result; char *remainder; set_errno(0); result = malloc_strtoumax(test->input, &remainder, test->base); err = get_errno(); assert_d_eq(err, test->expected_errno, "Expected errno %s for \"%s\", base %d", test->expected_errno_name, test->input, test->base); assert_str_eq(remainder, test->expected_remainder, "Unexpected remainder for \"%s\", base %d", test->input, test->base); if (err == 0) { assert_ju_eq(result, test->expected_x, "Unexpected result for \"%s\", base %d", test->input, test->base); } } } TEST_END TEST_BEGIN(test_malloc_snprintf_truncated) { #define BUFLEN 15 char buf[BUFLEN]; int result; size_t len; #define TEST(expected_str_untruncated, ...) 
do { \ result = malloc_snprintf(buf, len, __VA_ARGS__); \ assert_d_eq(strncmp(buf, expected_str_untruncated, len-1), 0, \ "Unexpected string inequality (\"%s\" vs \"%s\")", \ buf, expected_str_untruncated); \ assert_d_eq(result, strlen(expected_str_untruncated), \ "Unexpected result"); \ } while (0) for (len = 1; len < BUFLEN; len++) { TEST("012346789", "012346789"); TEST("a0123b", "a%sb", "0123"); TEST("a01234567", "a%s%s", "0123", "4567"); TEST("a0123 ", "a%-6s", "0123"); TEST("a 0123", "a%6s", "0123"); TEST("a 012", "a%6.3s", "0123"); TEST("a 012", "a%*.*s", 6, 3, "0123"); TEST("a 123b", "a% db", 123); TEST("a123b", "a%-db", 123); TEST("a-123b", "a%-db", -123); TEST("a+123b", "a%+db", 123); } #undef BUFLEN #undef TEST } TEST_END TEST_BEGIN(test_malloc_snprintf) { #define BUFLEN 128 char buf[BUFLEN]; int result; #define TEST(expected_str, ...) do { \ result = malloc_snprintf(buf, sizeof(buf), __VA_ARGS__); \ assert_str_eq(buf, expected_str, "Unexpected output"); \ assert_d_eq(result, strlen(expected_str), "Unexpected result"); \ } while (0) TEST("hello", "hello"); TEST("50%, 100%", "50%%, %d%%", 100); TEST("a0123b", "a%sb", "0123"); TEST("a 0123b", "a%5sb", "0123"); TEST("a 0123b", "a%*sb", 5, "0123"); TEST("a0123 b", "a%-5sb", "0123"); TEST("a0123b", "a%*sb", -1, "0123"); TEST("a0123 b", "a%*sb", -5, "0123"); TEST("a0123 b", "a%-*sb", -5, "0123"); TEST("a012b", "a%.3sb", "0123"); TEST("a012b", "a%.*sb", 3, "0123"); TEST("a0123b", "a%.*sb", -3, "0123"); TEST("a 012b", "a%5.3sb", "0123"); TEST("a 012b", "a%5.*sb", 3, "0123"); TEST("a 012b", "a%*.3sb", 5, "0123"); TEST("a 012b", "a%*.*sb", 5, 3, "0123"); TEST("a 0123b", "a%*.*sb", 5, -3, "0123"); TEST("_abcd_", "_%x_", 0xabcd); TEST("_0xabcd_", "_%#x_", 0xabcd); TEST("_1234_", "_%o_", 01234); TEST("_01234_", "_%#o_", 01234); TEST("_1234_", "_%u_", 1234); TEST("_1234_", "_%d_", 1234); TEST("_ 1234_", "_% d_", 1234); TEST("_+1234_", "_%+d_", 1234); TEST("_-1234_", "_%d_", -1234); TEST("_-1234_", "_% d_", -1234); TEST("_-1234_", "_%+d_", -1234); TEST("_-1234_", "_%d_", -1234); TEST("_1234_", "_%d_", 1234); TEST("_-1234_", "_%i_", -1234); TEST("_1234_", "_%i_", 1234); TEST("_01234_", "_%#o_", 01234); TEST("_1234_", "_%u_", 1234); TEST("_0x1234abc_", "_%#x_", 0x1234abc); TEST("_0X1234ABC_", "_%#X_", 0x1234abc); TEST("_c_", "_%c_", 'c'); TEST("_string_", "_%s_", "string"); TEST("_0x42_", "_%p_", ((void *)0x42)); TEST("_-1234_", "_%ld_", ((long)-1234)); TEST("_1234_", "_%ld_", ((long)1234)); TEST("_-1234_", "_%li_", ((long)-1234)); TEST("_1234_", "_%li_", ((long)1234)); TEST("_01234_", "_%#lo_", ((long)01234)); TEST("_1234_", "_%lu_", ((long)1234)); TEST("_0x1234abc_", "_%#lx_", ((long)0x1234abc)); TEST("_0X1234ABC_", "_%#lX_", ((long)0x1234ABC)); TEST("_-1234_", "_%lld_", ((long long)-1234)); TEST("_1234_", "_%lld_", ((long long)1234)); TEST("_-1234_", "_%lli_", ((long long)-1234)); TEST("_1234_", "_%lli_", ((long long)1234)); TEST("_01234_", "_%#llo_", ((long long)01234)); TEST("_1234_", "_%llu_", ((long long)1234)); TEST("_0x1234abc_", "_%#llx_", ((long long)0x1234abc)); TEST("_0X1234ABC_", "_%#llX_", ((long long)0x1234ABC)); #ifdef __INTEL_COMPILER /* turn off ICC warnings on invalid format string conversion */ #pragma warning (push) #pragma warning (disable: 269) #endif TEST("_-1234_", "_%qd_", ((long long)-1234)); TEST("_1234_", "_%qd_", ((long long)1234)); TEST("_-1234_", "_%qi_", ((long long)-1234)); TEST("_1234_", "_%qi_", ((long long)1234)); TEST("_01234_", "_%#qo_", ((long long)01234)); TEST("_1234_", "_%qu_", ((long long)1234)); 
TEST("_0x1234abc_", "_%#qx_", ((long long)0x1234abc)); TEST("_0X1234ABC_", "_%#qX_", ((long long)0x1234ABC)); #ifdef __INTEL_COMPILER #pragma warning (pop) #endif TEST("_-1234_", "_%jd_", ((intmax_t)-1234)); TEST("_1234_", "_%jd_", ((intmax_t)1234)); TEST("_-1234_", "_%ji_", ((intmax_t)-1234)); TEST("_1234_", "_%ji_", ((intmax_t)1234)); TEST("_01234_", "_%#jo_", ((intmax_t)01234)); TEST("_1234_", "_%ju_", ((intmax_t)1234)); TEST("_0x1234abc_", "_%#jx_", ((intmax_t)0x1234abc)); TEST("_0X1234ABC_", "_%#jX_", ((intmax_t)0x1234ABC)); TEST("_1234_", "_%td_", ((ptrdiff_t)1234)); TEST("_-1234_", "_%td_", ((ptrdiff_t)-1234)); TEST("_1234_", "_%ti_", ((ptrdiff_t)1234)); TEST("_-1234_", "_%ti_", ((ptrdiff_t)-1234)); TEST("_-1234_", "_%zd_", ((ssize_t)-1234)); TEST("_1234_", "_%zd_", ((ssize_t)1234)); TEST("_-1234_", "_%zi_", ((ssize_t)-1234)); TEST("_1234_", "_%zi_", ((ssize_t)1234)); TEST("_01234_", "_%#zo_", ((ssize_t)01234)); TEST("_1234_", "_%zu_", ((ssize_t)1234)); TEST("_0x1234abc_", "_%#zx_", ((ssize_t)0x1234abc)); TEST("_0X1234ABC_", "_%#zX_", ((ssize_t)0x1234ABC)); #undef BUFLEN } TEST_END int main(void) { return (test( test_pow2_ceil, test_malloc_strtoumax_no_endptr, test_malloc_strtoumax, test_malloc_snprintf_truncated, test_malloc_snprintf)); } vmem-1.8/src/jemalloc/test/unit/zero.c000066400000000000000000000026611361505074100177730ustar00rootroot00000000000000#include "test/jemalloc_test.h" #ifdef JEMALLOC_FILL const char *malloc_conf = "abort:false,junk:false,zero:true,redzone:false,quarantine:0"; #endif static void test_zero(size_t sz_min, size_t sz_max) { char *s; size_t sz_prev, sz, i; sz_prev = 0; s = (char *)mallocx(sz_min, 0); assert_ptr_not_null((void *)s, "Unexpected mallocx() failure"); for (sz = sallocx(s, 0); sz <= sz_max; sz_prev = sz, sz = sallocx(s, 0)) { if (sz_prev > 0) { assert_c_eq(s[0], 'a', "Previously allocated byte %zu/%zu is corrupted", ZU(0), sz_prev); assert_c_eq(s[sz_prev-1], 'a', "Previously allocated byte %zu/%zu is corrupted", sz_prev-1, sz_prev); } for (i = sz_prev; i < sz; i++) { assert_c_eq(s[i], 0x0, "Newly allocated byte %zu/%zu isn't zero-filled", i, sz); s[i] = 'a'; } if (xallocx(s, sz+1, 0, 0) == sz) { s = (char *)rallocx(s, sz+1, 0); assert_ptr_not_null((void *)s, "Unexpected rallocx() failure"); } } dallocx(s, 0); } TEST_BEGIN(test_zero_small) { test_skip_if(!config_fill); test_zero(1, SMALL_MAXCLASS-1); } TEST_END TEST_BEGIN(test_zero_large) { test_skip_if(!config_fill); test_zero(SMALL_MAXCLASS+1, arena_maxclass); } TEST_END TEST_BEGIN(test_zero_huge) { test_skip_if(!config_fill); test_zero(arena_maxclass+1, chunksize*2); } TEST_END int main(void) { return (test( test_zero_small, test_zero_large, test_zero_huge)); } vmem-1.8/src/jemalloc/win_autogen.sh000066400000000000000000000041611361505074100175620ustar00rootroot00000000000000#!/bin/sh # Copyright 2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # JEMALLOC_GEN=./../windows/jemalloc_gen AC_PATH=./../../jemalloc autoconf if [ $? -ne 0 ]; then echo "Error $? in $i" exit 1 fi if [ ! -d "$JEMALLOC_GEN" ]; then echo Creating... $JEMALLOC_GEN mkdir "$JEMALLOC_GEN" fi cd $JEMALLOC_GEN echo "Run configure..." $AC_PATH/configure \ --enable-autogen \ CC=cl \ --enable-lazy-lock=no \ --without-export \ --with-jemalloc-prefix=je_vmem_ \ --with-private-namespace=je_vmem_ \ --disable-xmalloc \ --disable-munmap \ EXTRA_CFLAGS="-DJEMALLOC_LIBVMEM" if [ $? -ne 0 ]; then echo "Error $? in $AC_PATH/configure" exit 1 fi vmem-1.8/src/libvmem/000077500000000000000000000000001361505074100145525ustar00rootroot00000000000000vmem-1.8/src/libvmem/Makefile000066400000000000000000000041701361505074100162140ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # src/libvmem/Makefile -- Makefile for libvmem # LIBRARY_NAME = vmem LIBRARY_SO_VERSION = 1 LIBRARY_VERSION = 0.0 SOURCE = libvmem.c vmem.c\ $(COMMON)/alloc.c\ $(COMMON)/file.c\ $(COMMON)/file_posix.c\ $(COMMON)/mmap.c\ $(COMMON)/mmap_posix.c\ $(COMMON)/os_posix.c\ $(COMMON)/os_thread_posix.c\ $(COMMON)/out.c\ $(COMMON)/util.c\ $(COMMON)/util_posix.c default: all JEMALLOC_VMEMDIR = libvmem include ../jemalloc/jemalloc.mk INCS += -I$(JEMALLOC_DIR)/include/jemalloc INCS += -I$(JEMALLOC_OBJDIR)/include/jemalloc EXTRA_OBJS += $(JEMALLOC_LIB) LIBS += -pthread include ../Makefile.inc vmem-1.8/src/libvmem/libvmem.c000066400000000000000000000075441361505074100163630ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * libvmem.c -- basic libvmem functions */ #include #include #include "libvmem.h" #include "jemalloc.h" #include "out.h" #include "vmem.h" /* * vmem_check_versionU -- see if library meets application version requirements */ #ifndef _WIN32 static inline #endif const char * vmem_check_versionU(unsigned major_required, unsigned minor_required) { vmem_construct(); LOG(3, "major_required %u minor_required %u", major_required, minor_required); if (major_required != VMEM_MAJOR_VERSION) { ERR("libvmem major version mismatch (need %u, found %u)", major_required, VMEM_MAJOR_VERSION); return out_get_errormsg(); } if (minor_required > VMEM_MINOR_VERSION) { ERR("libvmem minor version mismatch (need %u, found %u)", minor_required, VMEM_MINOR_VERSION); return out_get_errormsg(); } return NULL; } #ifndef _WIN32 /* * vmem_check_version -- see if library meets application version requirements */ const char * vmem_check_version(unsigned major_required, unsigned minor_required) { return vmem_check_versionU(major_required, minor_required); } #else /* * vmem_check_versionW -- see if library meets application version requirements */ const wchar_t * vmem_check_versionW(unsigned major_required, unsigned minor_required) { if (vmem_check_versionU(major_required, minor_required) != NULL) return out_get_errormsgW(); else return NULL; } #endif /* * vmem_set_funcs -- allow overriding libvmem's call to malloc, etc. */ void vmem_set_funcs( void *(*malloc_func)(size_t size), void (*free_func)(void *ptr), void *(*realloc_func)(void *ptr, size_t size), char *(*strdup_func)(const char *s), void (*print_func)(const char *s)) { vmem_construct(); LOG(3, NULL); util_set_alloc_funcs(malloc_func, free_func, realloc_func, strdup_func); out_set_print_func(print_func); je_vmem_pool_set_alloc_funcs(malloc_func, free_func); } /* * vmem_errormsgU -- return last error message */ #ifndef _WIN32 static inline #endif const char * vmem_errormsgU(void) { return out_get_errormsg(); } #ifndef _WIN32 /* * vmem_errormsg -- return last error message */ const char * vmem_errormsg(void) { return vmem_errormsgU(); } #else /* * vmem_errormsgW -- return last error message as wchar_t */ const wchar_t * vmem_errormsgW(void) { return out_get_errormsgW(); } #endif vmem-1.8/src/libvmem/libvmem.def000066400000000000000000000036731361505074100166760ustar00rootroot00000000000000;;;; Begin Copyright Notice ; ; Copyright 2016-2017, Intel Corporation ; ; Redistribution and use in source and binary forms, with or without ; modification, are permitted provided that the following conditions ; are met: ; ; * Redistributions of source code must retain the above copyright ; notice, this list of conditions and the following disclaimer. ; ; * Redistributions in binary form must reproduce the above copyright ; notice, this list of conditions and the following disclaimer in ; the documentation and/or other materials provided with the ; distribution. ; ; * Neither the name of the copyright holder nor the names of its ; contributors may be used to endorse or promote products derived ; from this software without specific prior written permission. ; ; THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ; "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT ; LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ; A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT ; OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, ; SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT ; LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, ; DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY ; THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT ; (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE ; OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ; ;;;; End Copyright Notice LIBRARY libvmem VERSION 1.0 EXPORTS vmem_createU vmem_createW vmem_create_in_region vmem_delete vmem_check vmem_stats_print vmem_malloc vmem_free vmem_calloc vmem_realloc vmem_aligned_alloc vmem_strdup vmem_wcsdup vmem_malloc_usable_size vmem_check_versionU vmem_check_versionW vmem_set_funcs vmem_errormsgU vmem_errormsgW DllMain vmem-1.8/src/libvmem/libvmem.link.in000066400000000000000000000036331361505074100174760ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/libvmem.link -- linker link file for libvmem # LIBVMEM_1.0 { global: vmem_create; vmem_create_in_region; vmem_delete; vmem_check; vmem_stats_print; vmem_malloc; vmem_free; vmem_calloc; vmem_realloc; vmem_aligned_alloc; vmem_strdup; vmem_wcsdup; vmem_malloc_usable_size; vmem_check_version; vmem_set_funcs; vmem_errormsg; local: *; }; vmem-1.8/src/libvmem/libvmem.rc000066400000000000000000000034161361505074100165370ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * libvmem.rc -- libvmem resource file */ #include #define FILE_NAME "libvmem.dll" #define DESCRIPTION "libvmem - volatile memory allocation library" #define TYPE VFT_DLL #include vmem-1.8/src/libvmem/libvmem.vcxproj000066400000000000000000000125001361505074100176200ustar00rootroot00000000000000 Debug x64 Release x64 {8d6bb292-9e1c-413d-9f98-4864bdc1514a} {901f04db-e1a5-4a41-8b81-9d31c19acd59} {08762559-E9DF-475B-BA99-49F4B5A1D80B} DynamicLibrary libvmem libvmem en-US 14.0 10.0.10240.0 10.0.16299.0 DynamicLibrary true v140 DynamicLibrary false false v140 $(SolutionDir)/windows/jemalloc_gen/include/jemalloc;$(SolutionDir)/jemalloc/include/jemalloc;%(AdditionalIncludeDirectories) JEMALLOC_EXPORT=;%(PreprocessorDefinitions) $(SolutionDir)/windows/jemalloc_gen/include/jemalloc;$(SolutionDir)/jemalloc/include/jemalloc;%(AdditionalIncludeDirectories) JEMALLOC_EXPORT=;%(PreprocessorDefinitions) vmem-1.8/src/libvmem/libvmem.vcxproj.filters000066400000000000000000000064351361505074100213010ustar00rootroot00000000000000 Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files {49cfa2b4-cfcb-4c02-928a-c04d1cceffb8} {ac09c2fe-a24b-4a86-8763-d4e06d996ef3} Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files vmem-1.8/src/libvmem/libvmem_main.c000066400000000000000000000044341361505074100173620ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * libvmem_main.c -- entry point for libvmem.dll * * XXX - This is a placeholder. All the library initialization/cleanup * that is done in library ctors/dtors, as well as TLS initialization * should be moved here. */ #include "win_mmap.h" void vmem_init(void); void vmem_fini(void); void jemalloc_constructor(void); void jemalloc_destructor(void); int APIENTRY DllMain(HINSTANCE hInstance, DWORD dwReason, LPVOID lpReserved) { switch (dwReason) { case DLL_PROCESS_ATTACH: jemalloc_constructor(); vmem_init(); win_mmap_init(); break; case DLL_THREAD_ATTACH: case DLL_THREAD_DETACH: break; case DLL_PROCESS_DETACH: win_mmap_fini(); vmem_fini(); jemalloc_destructor(); break; } return TRUE; } vmem-1.8/src/libvmem/vmem.c000066400000000000000000000257601361505074100156740ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem.c -- memory pool & allocation entry points for libvmem */ #include #include #include #include #include #include #include #include #include #include #include "libvmem.h" #include "jemalloc.h" #include "pmemcommon.h" #include "sys_util.h" #include "file.h" #include "vmem.h" #include "valgrind_internal.h" /* * private to this file... 
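 *
 * Header_size is the pool header size rounded up to a page boundary;
 * Vmem_init_lock serializes one-time library initialization, while
 * Pool_lock guards vmem_create() and vmem_delete().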
*/ static size_t Header_size; static os_mutex_t Vmem_init_lock; static os_mutex_t Pool_lock; /* guards vmem_create and vmem_delete */ /* * print_jemalloc_messages -- custom print function, for jemalloc * * Prints traces from jemalloc. All traces from jemalloc * are considered as error messages. */ static void print_jemalloc_messages(void *ignore, const char *s) { ERR("%s", s); } /* * print_jemalloc_stats -- print function, for jemalloc statistics * * Prints statistics from jemalloc. All statistics are printed with level 0. */ static void print_jemalloc_stats(void *ignore, const char *s) { LOG_NONL(0, "%s", s); } /* * vmem_construct -- initialization for vmem * * Called automatically by the run-time loader or on the first use of vmem. */ void vmem_construct(void) { static bool initialized = false; int (*je_vmem_navsnprintf) (char *, size_t, const char *, va_list) = NULL; if (initialized) return; util_mutex_lock(&Vmem_init_lock); if (!initialized) { common_init(VMEM_LOG_PREFIX, VMEM_LOG_LEVEL_VAR, VMEM_LOG_FILE_VAR, VMEM_MAJOR_VERSION, VMEM_MINOR_VERSION); out_set_vsnprintf_func(je_vmem_navsnprintf); LOG(3, NULL); Header_size = roundup(sizeof(VMEM), Pagesize); /* Set up jemalloc messages to a custom print function */ je_vmem_malloc_message = print_jemalloc_messages; initialized = true; } util_mutex_unlock(&Vmem_init_lock); } /* * vmem_init -- load-time initialization for vmem * * Called automatically by the run-time loader. */ ATTR_CONSTRUCTOR void vmem_init(void) { util_mutex_init(&Vmem_init_lock); util_mutex_init(&Pool_lock); vmem_construct(); } /* * vmem_fini -- libvmem cleanup routine * * Called automatically when the process terminates. */ ATTR_DESTRUCTOR void vmem_fini(void) { LOG(3, NULL); util_mutex_destroy(&Pool_lock); util_mutex_destroy(&Vmem_init_lock); /* set up jemalloc messages back to stderr */ je_vmem_malloc_message = NULL; common_fini(); } /* * vmem_createU -- create a memory pool in a temp file */ #ifndef _WIN32 static inline #endif VMEM * vmem_createU(const char *dir, size_t size) { vmem_construct(); LOG(3, "dir \"%s\" size %zu", dir, size); if (size < VMEM_MIN_POOL) { ERR("size %zu smaller than %zu", size, VMEM_MIN_POOL); errno = EINVAL; return NULL; } enum file_type type = util_file_get_type(dir); if (type == OTHER_ERROR) return NULL; util_mutex_lock(&Pool_lock); /* silently enforce multiple of mapping alignment */ size = roundup(size, Mmap_align); void *addr; if (type == TYPE_DEVDAX) { if ((addr = util_file_map_whole(dir)) == NULL) { util_mutex_unlock(&Pool_lock); return NULL; } } else { if ((addr = util_map_tmpfile(dir, size, 4 * MEGABYTE)) == NULL) { util_mutex_unlock(&Pool_lock); return NULL; } } /* store opaque info at beginning of mapped area */ struct vmem *vmp = addr; memset(&vmp->hdr, '\0', sizeof(vmp->hdr)); memcpy(vmp->hdr.signature, VMEM_HDR_SIG, POOL_HDR_SIG_LEN); vmp->addr = addr; vmp->size = size; vmp->caller_mapped = 0; /* Prepare pool for jemalloc */ if (je_vmem_pool_create((void *)((uintptr_t)addr + Header_size), size - Header_size, /* zeroed if */ type != TYPE_DEVDAX, /* empty */ 1) == NULL) { ERR("pool creation failed"); util_unmap(vmp->addr, vmp->size); util_mutex_unlock(&Pool_lock); return NULL; } /* * If possible, turn off all permissions on the pool header page. * * The prototype PMFS doesn't allow this when large pages are in * use. It is not considered an error if this fails. 
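 *
 * (The header page permissions are restored with util_range_rw() in
 * vmem_delete() before the pool memory is unmapped or handed back to
 * the caller.)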
*/ if (type != TYPE_DEVDAX) util_range_none(addr, sizeof(struct pool_hdr)); util_mutex_unlock(&Pool_lock); LOG(3, "vmp %p", vmp); return vmp; } #ifndef _WIN32 /* * vmem_create -- create a memory pool in a temp file */ VMEM * vmem_create(const char *dir, size_t size) { return vmem_createU(dir, size); } #else /* * vmem_createW -- create a memory pool in a temp file */ VMEM * vmem_createW(const wchar_t *dir, size_t size) { char *udir = util_toUTF8(dir); if (udir == NULL) return NULL; VMEM *ret = vmem_createU(udir, size); util_free_UTF8(udir); return ret; } #endif /* * vmem_create_in_region -- create a memory pool in a given range */ VMEM * vmem_create_in_region(void *addr, size_t size) { vmem_construct(); LOG(3, "addr %p size %zu", addr, size); if (((uintptr_t)addr & (Pagesize - 1)) != 0) { ERR("addr %p not aligned to pagesize %llu", addr, Pagesize); errno = EINVAL; return NULL; } if (size < VMEM_MIN_POOL) { ERR("size %zu smaller than %zu", size, VMEM_MIN_POOL); errno = EINVAL; return NULL; } /* * Initially, treat this memory region as undefined. * Once jemalloc initializes its metadata, it will also mark * registered free chunks (usable heap space) as unaddressable. */ VALGRIND_DO_MAKE_MEM_UNDEFINED(addr, size); /* store opaque info at beginning of mapped area */ struct vmem *vmp = addr; memset(&vmp->hdr, '\0', sizeof(vmp->hdr)); memcpy(vmp->hdr.signature, VMEM_HDR_SIG, POOL_HDR_SIG_LEN); vmp->addr = addr; vmp->size = size; vmp->caller_mapped = 1; util_mutex_lock(&Pool_lock); /* Prepare pool for jemalloc */ if (je_vmem_pool_create((void *)((uintptr_t)addr + Header_size), size - Header_size, 0, /* empty */ 1) == NULL) { ERR("pool creation failed"); util_mutex_unlock(&Pool_lock); return NULL; } #ifndef _WIN32 /* * If possible, turn off all permissions on the pool header page. * * The prototype PMFS doesn't allow this when large pages are in * use. It is not considered an error if this fails. */ util_range_none(addr, sizeof(struct pool_hdr)); #endif util_mutex_unlock(&Pool_lock); LOG(3, "vmp %p", vmp); return vmp; } /* * vmem_delete -- delete a memory pool */ void vmem_delete(VMEM *vmp) { LOG(3, "vmp %p", vmp); util_mutex_lock(&Pool_lock); int ret = je_vmem_pool_delete((pool_t *)((uintptr_t)vmp + Header_size)); if (ret != 0) { ERR("invalid pool handle: 0x%" PRIxPTR, (uintptr_t)vmp); errno = EINVAL; util_mutex_unlock(&Pool_lock); return; } #ifndef _WIN32 util_range_rw(vmp->addr, sizeof(struct pool_hdr)); #endif if (vmp->caller_mapped == 0) { util_unmap(vmp->addr, vmp->size); } else { /* * The application cannot do any assumptions about the content * of this memory region once the pool is destroyed. */ VALGRIND_DO_MAKE_MEM_UNDEFINED(vmp->addr, vmp->size); } util_mutex_unlock(&Pool_lock); } /* * vmem_check -- memory pool consistency check */ int vmem_check(VMEM *vmp) { vmem_construct(); LOG(3, "vmp %p", vmp); util_mutex_lock(&Pool_lock); int ret = je_vmem_pool_check((pool_t *)((uintptr_t)vmp + Header_size)); util_mutex_unlock(&Pool_lock); return ret; } /* * vmem_stats_print -- spew memory allocator stats for a pool */ void vmem_stats_print(VMEM *vmp, const char *opts) { LOG(3, "vmp %p opts \"%s\"", vmp, opts ? 
opts : ""); je_vmem_pool_malloc_stats_print( (pool_t *)((uintptr_t)vmp + Header_size), print_jemalloc_stats, NULL, opts); } /* * vmem_malloc -- allocate memory */ void * vmem_malloc(VMEM *vmp, size_t size) { LOG(3, "vmp %p size %zu", vmp, size); return je_vmem_pool_malloc( (pool_t *)((uintptr_t)vmp + Header_size), size); } /* * vmem_free -- free memory */ void vmem_free(VMEM *vmp, void *ptr) { LOG(3, "vmp %p ptr %p", vmp, ptr); je_vmem_pool_free((pool_t *)((uintptr_t)vmp + Header_size), ptr); } /* * vmem_calloc -- allocate zeroed memory */ void * vmem_calloc(VMEM *vmp, size_t nmemb, size_t size) { LOG(3, "vmp %p nmemb %zu size %zu", vmp, nmemb, size); return je_vmem_pool_calloc((pool_t *)((uintptr_t)vmp + Header_size), nmemb, size); } /* * vmem_realloc -- resize a memory allocation */ void * vmem_realloc(VMEM *vmp, void *ptr, size_t size) { LOG(3, "vmp %p ptr %p size %zu", vmp, ptr, size); return je_vmem_pool_ralloc((pool_t *)((uintptr_t)vmp + Header_size), ptr, size); } /* * vmem_aligned_alloc -- allocate aligned memory */ void * vmem_aligned_alloc(VMEM *vmp, size_t alignment, size_t size) { LOG(3, "vmp %p alignment %zu size %zu", vmp, alignment, size); return je_vmem_pool_aligned_alloc( (pool_t *)((uintptr_t)vmp + Header_size), alignment, size); } /* * vmem_strdup -- allocate memory for copy of string */ char * vmem_strdup(VMEM *vmp, const char *s) { LOG(3, "vmp %p s %p", vmp, s); size_t size = strlen(s) + 1; void *retaddr = je_vmem_pool_malloc( (pool_t *)((uintptr_t)vmp + Header_size), size); if (retaddr == NULL) return NULL; return (char *)memcpy(retaddr, s, size); } /* * vmem_wcsdup -- allocate memory for copy of wide character string */ wchar_t * vmem_wcsdup(VMEM *vmp, const wchar_t *s) { LOG(3, "vmp %p s %p", vmp, s); size_t size = (wcslen(s) + 1) * sizeof(wchar_t); void *retaddr = je_vmem_pool_malloc( (pool_t *)((uintptr_t)vmp + Header_size), size); if (retaddr == NULL) return NULL; return (wchar_t *)memcpy(retaddr, s, size); } /* * vmem_malloc_usable_size -- get usable size of allocation */ size_t vmem_malloc_usable_size(VMEM *vmp, void *ptr) { LOG(3, "vmp %p ptr %p", vmp, ptr); return je_vmem_pool_malloc_usable_size( (pool_t *)((uintptr_t)vmp + Header_size), ptr); } vmem-1.8/src/libvmem/vmem.h000066400000000000000000000043541361505074100156750ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem.h -- internal definitions for libvmem */ #ifndef VMEM_H #define VMEM_H 1 #include #include "pool_hdr.h" #ifdef __cplusplus extern "C" { #endif #define VMEM_LOG_PREFIX "libvmem" #define VMEM_LOG_LEVEL_VAR "VMEM_LOG_LEVEL" #define VMEM_LOG_FILE_VAR "VMEM_LOG_FILE" /* attributes of the vmem memory pool format for the pool header */ #define VMEM_HDR_SIG "VMEM " /* must be 8 bytes including '\0' */ #define VMEM_FORMAT_MAJOR 1 struct vmem { struct pool_hdr hdr; /* memory pool header */ void *addr; /* mapped region */ size_t size; /* size of mapped region */ int caller_mapped; }; void vmem_construct(void); #ifdef __cplusplus } #endif #endif vmem-1.8/src/libvmmalloc/000077500000000000000000000000001361505074100154205ustar00rootroot00000000000000vmem-1.8/src/libvmmalloc/Makefile000066400000000000000000000042051361505074100170610ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # src/libvmmalloc/Makefile -- Makefile for libvmmalloc # LIBRARY_NAME = vmmalloc LIBRARY_SO_VERSION = 1 LIBRARY_VERSION = 0.0 SOURCE = libvmmalloc.c\ $(COMMON)/alloc.c\ $(COMMON)/file_posix.c\ $(COMMON)/mmap.c\ $(COMMON)/mmap_posix.c\ $(COMMON)/os_posix.c\ $(COMMON)/os_thread_posix.c\ $(COMMON)/out.c\ $(COMMON)/util.c\ $(COMMON)/util_posix.c default: all JEMALLOC_VMEMDIR=libvmmalloc include ../jemalloc/jemalloc.mk INCS += -I$(JEMALLOC_DIR)/include/jemalloc INCS += -I$(JEMALLOC_OBJDIR)/include/jemalloc INCS += -I../libvmem EXTRA_OBJS += $(JEMALLOC_LIB) LIBS += -pthread include ../Makefile.inc vmem-1.8/src/libvmmalloc/libvmmalloc.c000066400000000000000000000452171361505074100200760ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * libvmmalloc.c -- entry points for libvmmalloc * * NOTES: * 1) Since some standard library functions (fopen, sprintf) use malloc * internally, then at initialization phase, malloc(3) calls are redirected * to the standard jemalloc interfaces that operate on a system heap. * There is no need to track these allocations. For small allocations, * jemalloc is able to detect the corresponding pool the memory was * allocated from, and Vmp argument is actually ignored. So, it is safe * to reclaim this memory using je_vmem_pool_free(). * The problem may occur for huge allocations only (>2MB), but it seems * like such allocations do not happen at initialization phase. * * 2) Debug traces in malloc(3) functions are not available until library * initialization (vmem pool creation) is completed. This is to avoid * recursive calls to malloc, leading to stack overflow. * * 3) Malloc hooks in glibc are overridden to prevent any references to glibc's * malloc(3) functions in case the application uses dlopen with * RTLD_DEEPBIND flag. (Not relevant for FreeBSD since FreeBSD supports * neither malloc hooks nor RTLD_DEEPBIND.) * * 4) If the process forks, there is no separate log file open for a new * process, even if the configured log file name is terminated with "-". 
* * 5) Fork options 2 and 3 are currently not supported on FreeBSD because * locks are dynamically allocated on FreeBSD and hence they would be cloned * as part of the pool. This may be solvable. */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include #include #include #include #ifndef __FreeBSD__ #include #endif #include "libvmem.h" #include "libvmmalloc.h" #include "jemalloc.h" #include "pmemcommon.h" #include "file.h" #include "os.h" #include "os_thread.h" #include "vmem.h" #include "vmmalloc.h" #include "valgrind_internal.h" #define HUGE (2 * 1024 * 1024) /* * private to this file... */ static size_t Header_size; static VMEM *Vmp; static char *Dir; static int Fd; static int Fd_clone; static int Private; static int Forkopt = 1; /* default behavior - remap as private */ static bool Destructed; /* when set - ignore all calls (do not call jemalloc) */ /* * malloc -- allocate a block of size bytes */ __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1) void * malloc(size_t size) { if (unlikely(Destructed)) return NULL; if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_malloc(size); } LOG(4, "size %zu", size); return je_vmem_pool_malloc( (pool_t *)((uintptr_t)Vmp + Header_size), size); } /* * calloc -- allocate a block of nmemb * size bytes and set its contents to zero */ __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1, 2) void * calloc(size_t nmemb, size_t size) { if (unlikely(Destructed)) return NULL; if (Vmp == NULL) { ASSERT((nmemb * size) <= HUGE); return je_vmem_calloc(nmemb, size); } LOG(4, "nmemb %zu, size %zu", nmemb, size); return je_vmem_pool_calloc((pool_t *)((uintptr_t)Vmp + Header_size), nmemb, size); } /* * realloc -- resize a block previously allocated by malloc */ __ATTR_ALLOC_SIZE__(2) void * realloc(void *ptr, size_t size) { if (unlikely(Destructed)) return NULL; if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_realloc(ptr, size); } LOG(4, "ptr %p, size %zu", ptr, size); return je_vmem_pool_ralloc((pool_t *)((uintptr_t)Vmp + Header_size), ptr, size); } /* * free -- free a block previously allocated by malloc */ void free(void *ptr) { if (unlikely(Destructed)) return; if (Vmp == NULL) { je_vmem_free(ptr); return; } LOG(4, "ptr %p", ptr); je_vmem_pool_free((pool_t *)((uintptr_t)Vmp + Header_size), ptr); } /* * cfree -- free a block previously allocated by calloc * * the implementation is identical to free() * * XXX Not supported on FreeBSD, but we define it anyway */ void cfree(void *ptr) { if (unlikely(Destructed)) return; if (Vmp == NULL) { je_vmem_free(ptr); return; } LOG(4, "ptr %p", ptr); je_vmem_pool_free((pool_t *)((uintptr_t)Vmp + Header_size), ptr); } /* * memalign -- allocate a block of size bytes, starting on an address * that is a multiple of boundary * * XXX Not supported on FreeBSD, but we define it anyway */ __ATTR_MALLOC__ __ATTR_ALLOC_ALIGN__(1) __ATTR_ALLOC_SIZE__(2) void * memalign(size_t boundary, size_t size) { if (unlikely(Destructed)) return NULL; if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_aligned_alloc(boundary, size); } LOG(4, "boundary %zu size %zu", boundary, size); return je_vmem_pool_aligned_alloc( (pool_t *)((uintptr_t)Vmp + Header_size), boundary, size); } /* * aligned_alloc -- allocate a block of size bytes, starting on an address * that is a multiple of alignment * * size must be a multiple of alignment */ __ATTR_MALLOC__ __ATTR_ALLOC_ALIGN__(1) __ATTR_ALLOC_SIZE__(2) void * aligned_alloc(size_t alignment, size_t size) { if (unlikely(Destructed)) return NULL; /* XXX - check if size is a multiple of 
alignment */ if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_aligned_alloc(alignment, size); } LOG(4, "alignment %zu size %zu", alignment, size); return je_vmem_pool_aligned_alloc( (pool_t *)((uintptr_t)Vmp + Header_size), alignment, size); } /* * posix_memalign -- allocate a block of size bytes, starting on an address * that is a multiple of alignment */ __ATTR_NONNULL__(1) int posix_memalign(void **memptr, size_t alignment, size_t size) { if (unlikely(Destructed)) return ENOMEM; int ret = 0; int oerrno = errno; if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_posix_memalign(memptr, alignment, size); } LOG(4, "alignment %zu size %zu", alignment, size); *memptr = je_vmem_pool_aligned_alloc( (pool_t *)((uintptr_t)Vmp + Header_size), alignment, size); if (*memptr == NULL) ret = errno; errno = oerrno; return ret; } /* * valloc -- allocate a block of size bytes, starting on a page boundary */ __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1) void * valloc(size_t size) { if (unlikely(Destructed)) return NULL; ASSERTne(Pagesize, 0); if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_aligned_alloc(Pagesize, size); } LOG(4, "size %zu", size); return je_vmem_pool_aligned_alloc( (pool_t *)((uintptr_t)Vmp + Header_size), Pagesize, size); } /* * pvalloc -- allocate a block of size bytes, starting on a page boundary * * Requested size is also aligned to page boundary. * * XXX Not supported on FreeBSD, but we define it anyway. */ __ATTR_MALLOC__ __ATTR_ALLOC_SIZE__(1) void * pvalloc(size_t size) { if (unlikely(Destructed)) return NULL; ASSERTne(Pagesize, 0); if (Vmp == NULL) { ASSERT(size <= HUGE); return je_vmem_aligned_alloc(Pagesize, roundup(size, Pagesize)); } LOG(4, "size %zu", size); return je_vmem_pool_aligned_alloc( (pool_t *)((uintptr_t)Vmp + Header_size), Pagesize, roundup(size, Pagesize)); } /* * malloc_usable_size -- get usable size of allocation */ size_t malloc_usable_size(void *ptr) { if (unlikely(Destructed)) return 0; if (Vmp == NULL) { return je_vmem_malloc_usable_size(ptr); } LOG(4, "ptr %p", ptr); return je_vmem_pool_malloc_usable_size( (pool_t *)((uintptr_t)Vmp + Header_size), ptr); } #if (defined(__GLIBC__) && !defined(__UCLIBC__)) #ifndef __MALLOC_HOOK_VOLATILE #define __MALLOC_HOOK_VOLATILE #endif /* * Interpose malloc hooks in glibc. Even if the application uses dlopen * with RTLD_DEEPBIND flag, all the references to libc's malloc(3) functions * will be redirected to libvmmalloc. */ void *(*__MALLOC_HOOK_VOLATILE __malloc_hook) (size_t size, const void *caller) = (void *)malloc; void *(*__MALLOC_HOOK_VOLATILE __realloc_hook) (void *ptr, size_t size, const void *caller) = (void *)realloc; void (*__MALLOC_HOOK_VOLATILE __free_hook) (void *ptr, const void *caller) = (void *)free; void *(*__MALLOC_HOOK_VOLATILE __memalign_hook) (size_t size, size_t alignment, const void *caller) = (void *)memalign; #endif /* * print_jemalloc_messages -- (internal) custom print function, for jemalloc * * Prints traces from jemalloc. All traces from jemalloc * are considered as error messages. 
*/ static void print_jemalloc_messages(void *ignore, const char *s) { LOG_NONL(1, "%s", s); } /* * print_jemalloc_stats -- (internal) print function for jemalloc statistics */ static void print_jemalloc_stats(void *ignore, const char *s) { LOG_NONL(0, "%s", s); } /* * libvmmalloc_create -- (internal) create a memory pool in a temp file */ static VMEM * libvmmalloc_create(const char *dir, size_t size) { LOG(3, "dir \"%s\" size %zu", dir, size); if (size < VMMALLOC_MIN_POOL) { LOG(1, "size %zu smaller than %zu", size, VMMALLOC_MIN_POOL); errno = EINVAL; return NULL; } /* silently enforce multiple of page size */ size = roundup(size, Pagesize); Fd = util_tmpfile(dir, "/vmem.XXXXXX", O_EXCL); if (Fd == -1) return NULL; if ((errno = os_posix_fallocate(Fd, 0, (os_off_t)size)) != 0) { ERR("!posix_fallocate"); (void) os_close(Fd); return NULL; } void *addr; if ((addr = util_map(Fd, size, MAP_SHARED, 0, 4 << 20, NULL)) == NULL) { (void) os_close(Fd); return NULL; } /* store opaque info at beginning of mapped area */ struct vmem *vmp = addr; memset(&vmp->hdr, '\0', sizeof(vmp->hdr)); memcpy(vmp->hdr.signature, VMEM_HDR_SIG, POOL_HDR_SIG_LEN); vmp->addr = addr; vmp->size = size; vmp->caller_mapped = 0; /* Prepare pool for jemalloc */ if (je_vmem_pool_create((void *)((uintptr_t)addr + Header_size), size - Header_size, 1 /* zeroed */, 1 /* empty */) == NULL) { LOG(1, "vmem pool creation failed"); util_unmap(vmp->addr, vmp->size); return NULL; } /* * If possible, turn off all permissions on the pool header page. * * The prototype PMFS doesn't allow this when large pages are in * use. It is not considered an error if this fails. */ util_range_none(addr, sizeof(struct pool_hdr)); LOG(3, "vmp %p", vmp); return vmp; } /* * libvmmalloc_clone - (internal) clone the entire pool */ static int libvmmalloc_clone(void) { LOG(3, NULL); int err; Fd_clone = util_tmpfile(Dir, "/vmem.XXXXXX", O_EXCL); if (Fd_clone == -1) return -1; err = os_posix_fallocate(Fd_clone, 0, (os_off_t)Vmp->size); if (err != 0) { errno = err; ERR("!posix_fallocate"); goto err_close; } void *addr = mmap(NULL, Vmp->size, PROT_READ|PROT_WRITE, MAP_SHARED, Fd_clone, 0); if (addr == MAP_FAILED) { LOG(1, "!mmap"); goto err_close; } LOG(3, "copy the entire pool file: dst %p src %p size %zu", addr, Vmp->addr, Vmp->size); util_range_rw(Vmp->addr, sizeof(struct pool_hdr)); /* * Part of vmem pool was probably freed at some point, so Valgrind * marked it as undefined/inaccessible. We need to duplicate the whole * pool, so as a workaround temporarily disable error reporting. */ VALGRIND_DO_DISABLE_ERROR_REPORTING; memcpy(addr, Vmp->addr, Vmp->size); VALGRIND_DO_ENABLE_ERROR_REPORTING; if (munmap(addr, Vmp->size)) { ERR("!munmap"); goto err_close; } util_range_none(Vmp->addr, sizeof(struct pool_hdr)); return 0; err_close: (void) os_close(Fd_clone); return -1; } /* * remap_as_private -- (internal) remap the pool as private */ static void remap_as_private(void) { LOG(3, "remap the pool file as private"); void *r = mmap(Vmp->addr, Vmp->size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, Fd, 0); if (r == MAP_FAILED) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): remapping failed\n"); abort(); } if (r != Vmp->addr) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): wrong address\n"); abort(); } Private = 1; } /* * libvmmalloc_prefork -- (internal) prepare for fork() * * Clones the entire pool or remaps it with MAP_PRIVATE flag. 
*/ static void libvmmalloc_prefork(void) { LOG(3, NULL); /* * There's no need to grab any locks here, as jemalloc pre-fork handler * is executed first, and it does all the synchronization. */ ASSERTne(Vmp, NULL); ASSERTne(Dir, NULL); if (Private) { LOG(3, "already mapped as private - do nothing"); return; } switch (Forkopt) { case 3: /* clone the entire pool; if it fails - remap it as private */ LOG(3, "clone or remap"); case 2: LOG(3, "clone the entire pool file"); if (libvmmalloc_clone() == 0) break; if (Forkopt == 2) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "pool cloning failed\n"); abort(); } /* cloning failed; fall-thru to remapping */ case 1: remap_as_private(); break; case 0: LOG(3, "do nothing"); break; default: FATAL("invalid fork action %d", Forkopt); } } /* * libvmmalloc_postfork_parent -- (internal) parent post-fork handler */ static void libvmmalloc_postfork_parent(void) { LOG(3, NULL); if (Forkopt == 0) { /* do nothing */ return; } if (Private) { LOG(3, "pool mapped as private - do nothing"); } else { LOG(3, "close the cloned pool file"); (void) os_close(Fd_clone); } } /* * libvmmalloc_postfork_child -- (internal) child post-fork handler */ static void libvmmalloc_postfork_child(void) { LOG(3, NULL); if (Forkopt == 0) { /* do nothing */ return; } if (Private) { LOG(3, "pool mapped as private - do nothing"); } else { LOG(3, "close the original pool file"); (void) os_close(Fd); Fd = Fd_clone; void *addr = Vmp->addr; size_t size = Vmp->size; LOG(3, "mapping cloned pool file at %p", addr); Vmp = mmap(addr, size, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_FIXED, Fd, 0); if (Vmp == MAP_FAILED) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "mapping failed\n"); abort(); } if (Vmp != addr) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "wrong address\n"); abort(); } } /* XXX - open a new log file, with the new PID in the name */ } /* * libvmmalloc_init -- load-time initialization for libvmmalloc * * Called automatically by the run-time loader. * The constructor priority guarantees this is executed before * libjemalloc constructor. */ __attribute__((constructor(101))) static void libvmmalloc_init(void) { char *env_str; size_t size; /* * Register fork handlers before jemalloc initialization. * This provides the correct order of fork handlers execution. * Note that the first malloc() will trigger jemalloc init, so we * have to register fork handlers before the call to out_init(), * as it may indirectly call malloc() when opening the log file. 
*/ if (os_thread_atfork(libvmmalloc_prefork, libvmmalloc_postfork_parent, libvmmalloc_postfork_child) != 0) { perror("Error (libvmmalloc): os_thread_atfork"); abort(); } common_init(VMMALLOC_LOG_PREFIX, VMMALLOC_LOG_LEVEL_VAR, VMMALLOC_LOG_FILE_VAR, VMMALLOC_MAJOR_VERSION, VMMALLOC_MINOR_VERSION); out_set_vsnprintf_func(je_vmem_navsnprintf); LOG(3, NULL); /* set up jemalloc messages to a custom print function */ je_vmem_malloc_message = print_jemalloc_messages; Header_size = roundup(sizeof(VMEM), Pagesize); if ((Dir = os_getenv(VMMALLOC_POOL_DIR_VAR)) == NULL) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "environment variable %s not specified", VMMALLOC_POOL_DIR_VAR); abort(); } if ((env_str = os_getenv(VMMALLOC_POOL_SIZE_VAR)) == NULL) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "environment variable %s not specified", VMMALLOC_POOL_SIZE_VAR); abort(); } else { long long v = atoll(env_str); if (v < 0) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): negative %s", VMMALLOC_POOL_SIZE_VAR); abort(); } size = (size_t)v; } if (size < VMMALLOC_MIN_POOL) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "%s value is less than minimum (%zu < %zu)", VMMALLOC_POOL_SIZE_VAR, size, VMMALLOC_MIN_POOL); abort(); } if ((env_str = os_getenv(VMMALLOC_FORK_VAR)) != NULL) { Forkopt = atoi(env_str); if (Forkopt < 0 || Forkopt > 3) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "incorrect %s value (%d)", VMMALLOC_FORK_VAR, Forkopt); abort(); } #ifdef __FreeBSD__ if (Forkopt > 1) { out_log(NULL, 0, NULL, 0, "Error (libvmmalloc): " "%s value %d not supported on FreeBSD", VMMALLOC_FORK_VAR, Forkopt); abort(); } #endif LOG(4, "Fork action %d", Forkopt); } /* * XXX - vmem_create() could be used here, but then we need to * link vmem.o, including all the vmem API. */ Vmp = libvmmalloc_create(Dir, size); if (Vmp == NULL) { out_log(NULL, 0, NULL, 0, "!Error (libvmmalloc): " "vmem pool creation failed"); abort(); } LOG(2, "initialization completed"); } /* * libvmmalloc_fini -- libvmmalloc cleanup routine * * Called automatically when the process terminates and prints * some basic allocator statistics. */ __attribute__((destructor(102))) static void libvmmalloc_fini(void) { LOG(3, NULL); char *env_str = os_getenv(VMMALLOC_LOG_STATS_VAR); if ((env_str != NULL) && strcmp(env_str, "1") == 0) { LOG_NONL(0, "\n========= system heap ========\n"); je_vmem_malloc_stats_print( print_jemalloc_stats, NULL, "gba"); LOG_NONL(0, "\n========= vmem pool ========\n"); je_vmem_pool_malloc_stats_print( (pool_t *)((uintptr_t)Vmp + Header_size), print_jemalloc_stats, NULL, "gba"); } common_fini(); Destructed = true; } vmem-1.8/src/libvmmalloc/libvmmalloc.link.in000066400000000000000000000036351361505074100212140ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
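For context, here is the kind of program libvmmalloc is meant to serve. Nothing in it references the library: the redirection happens only because libvmmalloc is preloaded into the process (for example via the dynamic linker's preload mechanism) with the VMMALLOC_POOL_DIR and VMMALLOC_POOL_SIZE environment variables that libvmmalloc_init() above insists on. This is only an illustrative sketch, not part of the library or its test suite.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	/* plain malloc(3)/realloc(3)/free(3); with libvmmalloc preloaded these
	 * are served from the file-backed pool created in libvmmalloc_init() */
	char *buf = malloc(64);
	if (buf == NULL) {
		perror("malloc");
		return 1;
	}
	strcpy(buf, "transparently allocated");

	char *bigger = realloc(buf, 4096);
	if (bigger == NULL) {
		perror("realloc");
		free(buf);
		return 1;
	}

	puts(bigger);
	free(bigger);
	return 0;
}

A typical (assumed) invocation would point VMMALLOC_POOL_DIR at a directory on the target file system and set VMMALLOC_POOL_SIZE to at least VMMALLOC_MIN_POOL bytes before preloading the library, matching the checks performed in libvmmalloc_init() above.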
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/libvmmalloc.link -- linker link file for libvmmalloc # { global: _malloc_prefork; _malloc_postfork; _malloc_thread_cleanup; pthread_create; malloc; calloc; realloc; free; memalign; posix_memalign; aligned_alloc; valloc; pvalloc; cfree; malloc_usable_size; __malloc_hook; __realloc_hook; __memalign_hook; __free_hook; local: *; }; vmem-1.8/src/libvmmalloc/vmmalloc.h000066400000000000000000000037251361505074100174120ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc.h -- internal definitions for libvmmalloc */ #define VMMALLOC_LOG_PREFIX "libvmmalloc" #define VMMALLOC_LOG_LEVEL_VAR "VMMALLOC_LOG_LEVEL" #define VMMALLOC_LOG_FILE_VAR "VMMALLOC_LOG_FILE" #define VMMALLOC_LOG_STATS_VAR "VMMALLOC_LOG_STATS" #define VMMALLOC_POOL_DIR_VAR "VMMALLOC_POOL_DIR" #define VMMALLOC_POOL_SIZE_VAR "VMMALLOC_POOL_SIZE" #define VMMALLOC_FORK_VAR "VMMALLOC_FORK" vmem-1.8/src/test/000077500000000000000000000000001361505074100140765ustar00rootroot00000000000000vmem-1.8/src/test/.gitignore000066400000000000000000000007451361505074100160740ustar00rootroot00000000000000# testconfig.sh or testconfig.ps1 should not be checked into git. # It describes the configuration of the local machine where the local copy # of the source tree is being tested. 
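The VMMALLOC_FORK variable defined just above (and the prefork/postfork handlers earlier in libvmmalloc.c) exist for programs shaped like the following sketch, where both parent and child keep using the heap after fork(). This is a hypothetical illustration only: with the default setting of 1 the pre-fork handler remaps the pool MAP_PRIVATE, so the two processes stop sharing writable pool pages.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
	char *before = malloc(64);	/* lives in the pool before the fork */
	if (before == NULL)
		return 1;
	strcpy(before, "allocated before fork");

	pid_t pid = fork();		/* libvmmalloc's atfork handlers run here */
	if (pid < 0)
		return 1;

	/* both processes keep allocating and freeing after the fork;
	 * this is exactly the case the clone/remap-as-private logic covers */
	char *after = malloc(64);
	if (after == NULL)
		return 1;
	snprintf(after, 64, "%s: %s, then allocated more",
	    pid == 0 ? "child" : "parent", before);
	puts(after);

	free(after);
	free(before);

	if (pid > 0)
		waitpid(pid, NULL, 0);
	return 0;
}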
testconfig.sh testconfig.ps1 testconfig.py # ignore files generated during test runs (left around for analysis) *.log testfile* # ignore static binaries generated for testing *.static-debug *.static-nondebug libs.tar *.synced .sync-dir # ignore lock files *.lock # ignore python modules cache __pycache__/ vmem-1.8/src/test/Makefile000066400000000000000000000113311361505074100155350ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/Makefile -- build all unit tests # # Makefile -- build all unit tests # include ../common.inc TEST_DEPS = \ unittest\ tools OTHER_TESTS = \ out_err\ out_err_mt\ scope\ set_funcs\ traces\ traces_custom_function\ unicode_api\ unicode_match_script\ util_file_create\ util_file_open\ util_is_absolute\ util_is_zeroed\ util_map_proc\ util_parse_size VMEM_TESTS = \ vmem_aligned_alloc\ vmem_calloc\ vmem_check_allocations\ vmem_check_version\ vmem_check\ vmem_create\ vmem_create_error\ vmem_create_in_region\ vmem_custom_alloc\ vmem_malloc\ vmem_malloc_usable_size\ vmem_mix_allocations\ vmem_multiple_pools\ vmem_out_of_memory\ vmem_pages_purging\ vmem_realloc\ vmem_realloc_inplace\ vmem_stats\ vmem_strdup\ vmem_valgrind\ vmem_valgrind_region VMMALLOC_DUMMY_FUNCS_DEPS = \ vmmalloc_dummy_funcs VMMALLOC_DUMMY_FUNCS_TESTS = \ vmmalloc_malloc_hooks\ vmmalloc_memalign\ vmmalloc_valloc VMMALLOC_TESTS = \ vmmalloc_calloc\ vmmalloc_check_allocations\ vmmalloc_fork\ vmmalloc_init\ vmmalloc_malloc\ vmmalloc_malloc_usable_size\ vmmalloc_out_of_memory\ vmmalloc_realloc\ vmmalloc_valgrind LOCAL_TESTS = \ $(OTHER_TESTS)\ $(VMEM_TESTS)\ $(VMMALLOC_DUMMY_FUNCS_TESTS)\ $(VMMALLOC_TESTS) TESTS = $(LOCAL_TESTS) TESTS_BUILD = \ $(TEST_DEPS)\ $(VMMALLOC_DUMMY_FUNCS_DEPS)\ $(TESTS) all : TARGET = all clean : TARGET = clean clobber : TARGET = clobber test : TARGET = test cstyle : TARGET = cstyle format : TARGET = format check : TARGET = check pcheck : TARGET = pcheck sparse : TARGET = sparse DIR_SYNC=$(TOP)/src/test/.sync-dir SYNC_EXT=synced TESTCONFIG=$(TOP)/src/test/testconfig.sh FILE_MAX_DAX_DEVICES=$(TOP)/src/test/tools/anonymous_mmap/max_dax_devices all test format sparse: $(TESTS_BUILD) cstyle: $(TESTS_BUILD) $(CHECK_SHEBANG) $(foreach dir,$(TESTS_BUILD),$(dir)/TEST? $(dir)/TEST??) clean clobber: $(TESTS_BUILD) $(RM) -r $(DIR_SYNC) $(RM) *.$(SYNC_EXT) $(RM) $(FILE_MAX_DAX_DEVICES) $(TESTS) $(VMMALLOC_DUMMY_FUNCS_DEPS): $(TEST_DEPS) $(VMMALLOC_DUMMY_FUNCS_TESTS): $(VMMALLOC_DUMMY_FUNCS_DEPS) $(TESTS_BUILD): $(MAKE) -C $@ $(TARGET) memcheck-summary: grep ERROR */memcheck*.log memcheck-summary-errors: grep ERROR */memcheck*.log | grep -v " 0 errors" || true memcheck-summary-leaks: grep "in use at exit" */memcheck*.log | grep -v " 0 bytes in 0 blocks" || true check: @./RUNTESTS $(RUNTEST_OPTIONS) $(TESTS) @echo "No failures." pcheck: pcheck-local-quiet @echo "No failures." pcheck-other: TARGET = pcheck pcheck-other: $(OTHER_TESTS) @echo "No failures." pcheck-vmem: TARGET = pcheck pcheck-vmem: $(VMEM_TESTS) @echo "No failures." pcheck-vmmalloc: TARGET = pcheck pcheck-vmmalloc: $(VMMALLOC_TESTS) @echo "No failures." pcheck-local-quiet: TARGET = pcheck pcheck-local-quiet: $(LOCAL_TESTS) pcheck-local: pcheck-local-quiet @echo "No failures." 
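The vmem_* directories listed above exercise the public libvmem API. As a reference point for reading those tests, a minimal allocation sequence of the kind they drive looks roughly like this (a sketch against the documented vmem_create/vmem_malloc/vmem_free/vmem_delete entry points; error handling is trimmed to the essentials and "/tmp" is just a placeholder directory):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvmem.h>

int
main(int argc, char *argv[])
{
	const char *dir = argc > 1 ? argv[1] : "/tmp";

	/* create a volatile memory pool backed by a temporary file in dir */
	VMEM *vmp = vmem_create(dir, VMEM_MIN_POOL);
	if (vmp == NULL) {
		perror("vmem_create");
		return 1;
	}

	char *msg = vmem_malloc(vmp, 64);
	if (msg == NULL) {
		perror("vmem_malloc");
		vmem_delete(vmp);
		return 1;
	}
	strcpy(msg, "allocated from a vmem pool");
	puts(msg);

	vmem_free(vmp, msg);
	vmem_delete(vmp);
	return 0;
}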
TESTCONFIG=$(TOP)/src/test/testconfig.sh $(TESTCONFIG): SUPP_SYNC_FILES=$(shell echo *.supp | sed s/supp/$(SYNC_EXT)/g) %.$(SYNC_EXT): %.supp $(TESTCONFIG) cp $(shell echo $^ | cut -d" " -f1) $(DIR_SYNC) @touch $@ .PHONY: all check clean clobber cstyle pcheck\ pcheck-other pcheck-vmem pcheck-vmmalloc\ test unittest tools format \ pcheck-local $(TESTS_BUILD) vmem-1.8/src/test/Makefile.inc000066400000000000000000000200371361505074100163100ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/Makefile.inc -- common Makefile defs for unit tests # # These defaults apply to most unit tests. The individual Makefile # for each unit test overrides the defaults as necessary. # TOP := $(dir $(lastword $(MAKEFILE_LIST)))../.. 
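The Makefile.inc that follows contains an extract_funcs helper which scans test sources for FUNC_MOCK* macros and emits a -Wl,--wrap=<name> flag for each hit. The GNU ld feature it leans on is sketched below, independent of the unittest framework's own macro layer (the names here are generic, not the framework's):

/* build with:  cc -Wl,--wrap=malloc wrap_demo.c */
#include <stdio.h>
#include <stdlib.h>

void *__real_malloc(size_t size);	/* bound by the linker to the real malloc */

/* with --wrap=malloc every undefined reference to malloc lands here */
void *
__wrap_malloc(size_t size)
{
	fprintf(stderr, "intercepted malloc(%zu)\n", size);
	return __real_malloc(size);
}

int
main(void)
{
	free(malloc(16));		/* goes through __wrap_malloc() above */
	return 0;
}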
include $(TOP)/src/common.inc INCS += $(OS_INCS) LDFLAGS += $(OS_LIBS) LIBS_DIR=$(TOP)/src EXAMPLES_DIR=$(TOP)/src/examples UT = ../unittest/libut.a LIBS += $(UT) $(LIBUUID) ifeq ($(USE_LIBUNWIND),y) LIBS += $(LIBDL) $(LIBUNWIND_LIBS) endif LIBS += -L$(LIBS_DIR)/debug LIBS += -pthread $(LIBUTIL) ifeq ($(LIBRT_NEEDED), y) LIBS += -lrt endif ifeq ($(LIBPMEMCOMMON), y) LIBPMEM=y OBJS += $(LIBS_DIR)/debug/libpmemcommon.a INCS += -I$(TOP)/src/common endif ifeq ($(LIBPMEMCOMMON), internal-nondebug) OBJS +=\ $(TOP)/src/nondebug/common/file.o\ $(TOP)/src/nondebug/common/file_posix.o\ $(TOP)/src/nondebug/common/mmap.o\ $(TOP)/src/nondebug/common/mmap_posix.o\ $(TOP)/src/nondebug/common/os_posix.o\ $(TOP)/src/nondebug/common/os_thread_posix.o\ $(TOP)/src/nondebug/common/out.o\ $(TOP)/src/nondebug/common/pool_hdr.o\ $(TOP)/src/nondebug/common/util.o\ $(TOP)/src/nondebug/common/util_posix.o INCS += -I$(TOP)/src/common endif ifeq ($(LIBPMEMCOMMON), internal-debug) OBJS +=\ $(TOP)/src/debug/common/file.o\ $(TOP)/src/debug/common/file_posix.o\ $(TOP)/src/debug/common/mmap.o\ $(TOP)/src/debug/common/mmap_posix.o\ $(TOP)/src/debug/common/os_posix.o\ $(TOP)/src/debug/common/os_thread_posix.o\ $(TOP)/src/debug/common/out.o\ $(TOP)/src/debug/common/pool_hdr.o\ $(TOP)/src/debug/common/util.o\ $(TOP)/src/debug/common/util_posix.o\ # $(TOP)/src/debug/common/uuid.o\ # $(call osdep, $(TOP)/src/debug/common/uuid,.o) INCS += -I$(TOP)/src/common endif ifeq ($(LIBVMEM),y) DYNAMIC_LIBS += -lvmem STATIC_DEBUG_LIBS += $(LIBS_DIR)/debug/libvmem.a STATIC_NONDEBUG_LIBS += $(LIBS_DIR)/nondebug/libvmem.a endif ifneq ($(LIBPMEMCOMMON)$(LIBVMEM),) LIBS += -pthread endif # # This is a helper function to be combined with usage of macros available # in the unittest framework. It scans the code for functions that should be # wrapped and adds required linker flags. # PAREN=( extract_funcs = $(shell \ awk -F '[$(PAREN),]' \ '/(FUNC_MOCK_RET_ALWAYS|FUNC_MOCK_RET_ALWAYS_VOID|FUNC_MOCK)\$(PAREN)[^,]/ \ { \ print "-Wl,--wrap=" $$2 \ }' $(1) ) INCS += -I../unittest -I$(TOP)/src/include -I$(TOP)/src/common COMMON_FLAGS = -ggdb COMMON_FLAGS += -Wall COMMON_FLAGS += -Werror COMMON_FLAGS += -Wpointer-arith ifeq ($(IS_ICC), n) COMMON_FLAGS += -Wunused-macros endif COMMON_FLAGS += -fno-common CXXFLAGS = -std=c++11 CXXFLAGS += $(GLIBC_CXXFLAGS) CXXFLAGS += -ggdb CXXFLAGS += $(COMMON_FLAGS) CXXFLAGS += $(EXTRA_CXXFLAGS) CFLAGS = -std=gnu99 CFLAGS += -Wmissing-prototypes CFLAGS += $(COMMON_FLAGS) ifneq ($(USING_JEMALLOC_HEADERS),y) CFLAGS += -Wsign-conversion endif ifeq ($(WUNREACHABLE_CODE_RETURN_AVAILABLE), y) CFLAGS += -Wunreachable-code-return endif ifeq ($(WMISSING_VARIABLE_DECLARATIONS_AVAILABLE), y) CFLAGS += -Wmissing-variable-declarations endif ifeq ($(WFLOAT_EQUAL_AVAILABLE), y) CFLAGS += -Wfloat-equal endif ifeq ($(WCAST_FUNCTION_TYPE_AVAILABLE), y) CFLAGS += -Wcast-function-type endif CFLAGS += $(EXTRA_CFLAGS) LDFLAGS = -Wl,--warn-common -Wl,--fatal-warnings $(EXTRA_LDFLAGS) ifeq ($(COVERAGE),1) CFLAGS += $(GCOV_CFLAGS) CXXFLAGS += $(GCOV_CFLAGS) LDFLAGS += $(GCOV_LDFLAGS) LIBS += $(GCOV_LIBS) endif ifeq ($(VALGRIND),0) CFLAGS += -DVALGRIND_ENABLED=0 CXXFLAGS += -DVALGRIND_ENABLED=0 endif ifeq ($(FAULT_INJECTION),1) CFLAGS += -DFAULT_INJECTION=1 CXXFLAGS += -DFAULT_INJECTION=1 endif ifneq ($(SANITIZE),) CFLAGS += -fsanitize=$(SANITIZE) CXXFLAGS += -fsanitize=$(SANITIZE) LDFLAGS += -fsanitize=$(SANITIZE) endif LINKER=$(CC) ifeq ($(COMPILE_LANG), cpp) LINKER=$(CXX) endif ifneq ($(TARGET),) SCP_TARGET=$(TARGET) SCP_SRC_DIR=. 
# # By default debug and non-debug static versions are built. # It can be changed by setting BUILD_STATIC_DEBUG, BUILD_STATIC_NONDEBUG # or BUILD_STATIC (for both of them) to 'n'. # ifneq ($(BUILD_STATIC),n) ifneq ($(BUILD_STATIC_DEBUG),n) TARGET_STATIC_DEBUG=$(TARGET).static-debug SCP_TARGET_STATIC_DEBUG=$(SCP_SRC_DIR)/$(SCP_TARGET).static-debug endif ifneq ($(BUILD_STATIC_NONDEBUG),n) ifneq ($(DEBUG),1) TARGET_STATIC_NONDEBUG=$(TARGET).static-nondebug SCP_TARGET_STATIC_NONDEBUG=$(SCP_SRC_DIR)/$(SCP_TARGET).static-nondebug endif endif endif endif TESTCONFIG=../testconfig.sh SYNC_FILE=.synced MAKEFILE_DEPS=Makefile ../Makefile.inc $(TOP)/src/common.inc ifneq ($(HEADERS),) ifneq ($(filter 1 2, $(CSTYLEON)),) TMP_HEADERS := $(addsuffix tmp, $(HEADERS)) endif endif all: $(TARGET) $(TARGET_STATIC_DEBUG) $(TARGET_STATIC_NONDEBUG) $(UT): $(MAKE) -C ../unittest $(TARGET_STATIC_DEBUG): $(TMP_HEADERS) $(OBJS) $(UT) $(STATIC_DEBUG_LIBS) $(EXTRA_DEPS) $(MAKEFILE_DEPS) $(LINKER) -o $@ $(LDFLAGS) $(OBJS) $(STATIC_DEBUG_LIBS) $(LIBS) $(TARGET_STATIC_NONDEBUG): $(TMP_HEADERS) $(OBJS) $(UT) $(STATIC_NONDEBUG_LIBS) $(EXTRA_DEPS) $(MAKEFILE_DEPS) $(LINKER) -o $@ $(LDFLAGS) $(OBJS) $(STATIC_NONDEBUG_LIBS) $(LIBS) $(TARGET): $(TMP_HEADERS) $(OBJS) $(UT) $(EXTRA_DEPS) $(MAKEFILE_DEPS) $(LINKER) -o $@ $(LDFLAGS) $(OBJS) $(DYNAMIC_LIBS) $(LIBS) objdir=. %.o: %.c $(MAKEFILE_DEPS) $(call check-cstyle, $<) @mkdir -p .deps $(CC) -MD -c $(CFLAGS) $(INCS) $(call coverage-path, $<) -o $@ $(call check-os, $@, $<) $(create-deps) %.o: %.cpp $(MAKEFILE_DEPS) $(call check-cstyle, $<) @mkdir -p .deps $(CXX) -MD -c $(CXXFLAGS) $(INCS) $(call coverage-path, $<) -o $@ $(call check-os, $@, $<) $(create-deps) %.htmp: %.h $(call check-cstyle, $<, $@) clean: $(RM) *.o */*.o core *.core a.out *.log testfile* $(SYNC_FILE) $(TMP_HEADERS) clobber: clean $(RM) $(TARGET) $(TARGET_STATIC_DEBUG) $(TARGET_STATIC_NONDEBUG) $(RM) -r .deps $(TESTCONFIG): $(SYNC_FILE): $(TARGET) $(TESTCONFIG) ifeq ($(SCP_TO_REMOTE_NODES), y) ifeq ($(SCP_TARGET),) $(SCP) test else ifeq ($(SCP_SRC_DIR),) $(error SCP_SRC_DIR is not set) endif $(SCP) common $(SCP) test $(SCP_SRC_DIR)/$(SCP_TARGET) $(SCP_TARGET_STATIC_DEBUG) $(SCP_TARGET_STATIC_NONDEBUG) endif @touch $(SYNC_FILE) endif sync-test: all $(SYNC_FILE) $(TESTCONFIG) TST=$(shell basename `pwd`) TSTCHECKS=$(shell ls -1 TEST* 2> /dev/null | grep '^TEST[0-9]\+$$' | sort -V) $(TSTCHECKS): sync-test @cd .. && ./RUNTESTS ${TST} $(RUNTEST_OPTIONS) -s $@ check: sync-test @cd .. && ./RUNTESTS ${TST} $(RUNTEST_OPTIONS) pcheck: export NOTTY=1 pcheck: $(TSTCHECKS) test: all TOOLS=../tools all: sparse: $(if $(TARGET), $(sparse-c)) .PHONY: all check clean clobber pcheck test sync-test $(TSTCHECKS) -include .deps/*.P vmem-1.8/src/test/README000066400000000000000000000225121361505074100147600ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/README. This directory contains the unit tests for the Persistent Memory Development Kit. Unit tests require a config file, testconfig.sh, to exist in this directory. That file describes the local machine configuration (where to find persistent memory, for example) and must be created by hand in each repo as it makes no sense to check in that configuration description to the main repo. testconfig.sh.example provides more detail. The script RUNTESTS, when run with no arguments, will run all unit tests through all build-types, running the "check" level test. 
DETAILS ON HOW TO RUN UNIT TESTS See the top-level README for instructions on building, installing, running tests for the entire tree. Here are some additional details on running tests. Once the libraries are built, tests may be built from this directory using: $ make test A testconfig.sh must exist to run these tests! $ cp testconfig.sh.example testconfig.sh $ ...edit testconfig.sh and modify as appropriate... Tests may be run using the RUNTESTS script: $ RUNTESTS (runs them all) $ RUNTESTS testname (runs just the named test) Each test script (named something like "TEST0") is potentially run multiple times with a different set of environment variables so run the test with different target file systems or different versions of the libraries. To see how RUNTESTS will run a test, use the -n option. For example: $ RUNTESTS -n blk_nblock -s TEST0 (in ./blk_nblock) TEST=check FS=none BUILD=debug ./TEST0 (in ./blk_nblock) TEST=check FS=none BUILD=nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=none BUILD=static-debug ./TEST0 (in ./blk_nblock) TEST=check FS=none BUILD=static-nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=pmem BUILD=debug ./TEST0 (in ./blk_nblock) TEST=check FS=pmem BUILD=nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=pmem BUILD=static-debug ./TEST0 (in ./blk_nblock) TEST=check FS=pmem BUILD=static-nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=non-pmem BUILD=debug ./TEST0 (in ./blk_nblock) TEST=check FS=non-pmem BUILD=nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=non-pmem BUILD=static-debug ./TEST0 (in ./blk_nblock) TEST=check FS=non-pmem BUILD=static-nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=any BUILD=debug ./TEST0 (in ./blk_nblock) TEST=check FS=any BUILD=nondebug ./TEST0 (in ./blk_nblock) TEST=check FS=any BUILD=static-debug ./TEST0 (in ./blk_nblock) TEST=check FS=any BUILD=static-nondebug ./TEST0 Notice how the TEST0 script is run repeatedly with different settings for the three environment variables TEST, FS, and BUILD, providing the test type, file system type, and build type to test. RUNTESTS takes options to limit what it runs. The usage is: RUNTESTS [ -hnv ] [ -b build-type ] [ -o timeout ] [ -s test-file ] [ -k skip-dir ] [ -m memcheck ] [-p pmemcheck ] [ -e helgrind ] [ -d drd ] [ -c ] [tests...] Build types are: debug, nondebug, static-debug, static-nondebug, all (default) Timeout is: a floating point number with an optional suffix: 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. Default value is 3 minutes. Test file is: a name of the particular test script (test case). all (default), TEST0, TEST1, ... Memcheck, helgrind, drd and pmemcheck modes are: auto (default, enable/disable based on test requirements), force-enable (enable when test does not require given valgrind tool, but obey test's explicit tool disable) For example: $ RUNTESTS -b debug blk_nblock -s TEST0 blk_nblock/TEST0: SETUP (check/debug) blk_nblock/TEST0: START: blk_nblock blk_nblock/TEST0: PASS Since the "-b debug" option was given, the RUNTESTS run above only executes the test for the debug version of the library and skips the other variants. Running the TEST* scripts directly is also common, especially when debugging an issue. 
Just running the script, like this: $ cd blk_nblock $ ./TEST0 will use default values for the environment, namely: BUILD=debug these defaults can be overridden on the command line: $ cd blk_nblock $ BUILD=nondebug ./TEST0 The above example runs TEST0 with the nondebug library, just as using RUNTESTS with "-b nondebug" would from the parent directory. In addition to overriding the BUILD environment variable, the unit test framework also looks for several other variables: For tests that run a local program, insert the word "echo" in front of the program execution so the full command being run is displayed. This is useful to modify the command for debugging. $ ECHO=echo ./TEST0 Insert the word "strace" in front of the local command execution: $ TRACE=strace ./TEST0 Run test under Valgrind/memcheck: $ MEMCHECK=force-enable ./TEST0 Display test run times: $ TM=1 ./TEST0 DETAILS ON HOW TO WRITE UNIT TESTS A minimal unit test consists of a sub-directory here with an executable called TEST0 in it that exits normally when the test passes. Most tests, however, source the file unittest/unittest.sh to use the utility functions for setting up and checking tests. Additionally, most unit tests build a local test program and call it from the TEST* scripts. In addition to TEST0, there can be as many TEST scripts as desired, and RUNTESTS will execute them in numeric order for each of the test runs it executes. There are two ways of setting test requirements: - using the config.sh file located in a test sub-directory. It is the new and recommended method but it is still under development and does not cover all possible requirements. It is applied prior to the second method. If config.sh includes test requirements which are not met, the test will not be run at all. For details see config.sh.example. - using require_* utility functions. It is the old but still supported method. It can be used simultaneously with config.sh. Available require_* utility functions are described below. Tests can require specific build types: require_build_type debug Most tests are short "make check" tests, designed to run quickly when smoke-checking a build. Using the unittest library, the C programs run during unit testing get their output and tracing information logged to various files. These are named with the test number embedded in them, so a script called TEST0 would commonly produce: err0.log log of stderr out0.log log of stdout trace0.log trace points from unittest library vmem0.log trace from libvmem (VMEM_LOG_FILE points here) Although the above log files are the common case, the TEST* scripts are free to create any files. It is recommended, however, that the script creates files that are listed in .gitignore so they don't accidentally get committed to the repo. The TEST* scripts typically use the shell function "check" to check their results. That function calls the perl script "match" for any .match files found. For example, to check that out0.log contains the expected output, the test author creates a file called out0.log.match and commits that into the repo. The match script provides several pattern-matching macros that allow for normal variation in the test output: $(N) an integer (i.e.
one or more decimal digits) $(NC) one or more decimal digits with optional comma separators $(FP) a floating point number $(S) ascii string $(X) hex number $(W) whitespace $(nW) non-whitespace $(*) any string $(DD) output of a "dd" run $(OPT) the entire line is optional $(OPX) ends a contiguous list of $(OPT)...$(OPX) lines, at least one of which must match The small C programs used for unit testing are designed to not worry about the return values for things not under test. This is done with the all-caps versions of many common libc calls, for example: fd = OPEN(fname, O_RDWR); The above call is just like calling open(2), except that the framework checks for unexpected errors and halts the unit test if those happen. The result is usually a very short, compact unit test where most of the code is the interesting code under test. A full list of unit test macros is in unittest/unittest.h. The best way to create a new unit test is to start with an existing one. The blk_rw test makes an excellent starting point for new tests. It illustrates the common idioms used in the TEST* scripts, how the small C program is usually command-line driven to create multiple cases, and the use of unit test macros like START, OUT, FATAL, and DONE. Every case is different, but a common pattern for creating new unit tests is to follow steps like these: $ cp blk_rw new_test_name $ ...edit Makefile to add new_test_name to TEST list... $ cd new_test_name $ mv blk_rw.c new_test_name.c $ ...edit .gitignore, README, Makefile, new_test_name.c... $ ...edit TEST0 to create first test case & get it working... $ ...add more TEST* scripts as appropriate... When a "check" type test is run (the default), each test should try to limit the real execution time to a couple minutes or less. "short" and "long" versions of the tests are encouraged, especially for stress testing. PORTABILITY CONSIDERATIONS unittest.sh defines a number of macros to support portability across POSIX operating systems. See the Portability section of unittest.sh for details. When a matching line count of an output file is required, use "$GREP -c" rather than "$GREP | wc -l". Use canonical ordering of program options and arguments, i.e., put all options before any arguments. Not all operating systems support option reordering. vmem-1.8/src/test/RUNTESTLIB.PS1000066400000000000000000000215551361505074100161260ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED.
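Putting the pieces of the README together, the body of a typical test program tends to look like the sketch below. It is only illustrative: the macro names (START, OUT, FATAL, DONE and the all-caps OPEN/CLOSE wrappers) follow the description above, and the exact spellings and signatures should be taken from unittest/unittest.h rather than from this sketch.

#include "unittest.h"

int
main(int argc, char *argv[])
{
	START(argc, argv, "example_test");

	if (argc < 2)
		FATAL("usage: %s file", argv[0]);

	/* all-caps wrappers halt the test on unexpected errors, so the
	 * interesting logic is not buried in return-value checks */
	int fd = OPEN(argv[1], O_RDWR);

	OUT("opened %s", argv[1]);

	CLOSE(fd);

	DONE(NULL);
}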
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # RUNTESTSLIB.PS1 -- functions used in runtest script # class Config { [bool] $dryrun [String] $buildtype [timespan] $timeout [string]$testfile [bool] $check_pool [string] $skip_dir [Object[]] $testdir [string] $verbose # # setTimeout -- parse timeout # setTimeout([string]$time) { if($time -match "^\d+$") { $this.timeout = [timespan]::FromSeconds($time) return } [int64]$timeval = $time.Substring(0,$time.length-1) if ($time -match "m") { $this.timeout = [timespan]::FromMinutes($timeval) } elseif ($time -match "h") { $this.timeout = [timespan]::FromHours($timeval) } elseif ($time -match "d") { $this.timeout = [timespan]::FromDays($timeval) } else { $this.timeout = [timespan]::FromSeconds($timeval) } } # # setBuildtype -- parse build type # setBuildtype([string]$buildtype) { if ($buildtype -eq "all") { $this.buildtype = "debug nondebug" } else { $this.buildtype = $buildtype } } # # setTestdir -- parse test directory # setTestdir($testdir) { if ($testdir -eq "all") { $this.testdir = Get-ChildItem -Directory } else { $this.testdir = Get-Item $testdir } } } # # usage -- print usage message and exit # function usage { Param ( [parameter(Position=0)] [ValidateNotNullOrEmpty()] [String]$name = $(throw "Missing application name") ) Write-Host "Usage: $name [ -hnv ] [ -b build-type ] [ -o timeout ] [ -s test-file ] [ -k skip-dir ] [ -c ] [ -i testdir ] [-j jobs] -h print this help message -n dry run -v be verbose -i test-dir run test(s) from this test directory (default is all) -b build-type run only specified build type build-type: debug, nondebug, all (default) -k skip-dir skip a specific test directories (for >1 dir enclose in "" and separate with spaces) -o timeout set timeout for test execution timeout: floating point number with an optional suffix: 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. Default value is 180 seconds. -s test-file run only specified test file test-file: all (default), TEST0, TEST1, ... -j jobs number of tests to run simultaneously -c check pool files with pmempool check utility" exit 1 } # # get_build_dirs -- returns the directories to pick the test binaries from # # example, to get release build dirs # get_build_dirs "nondebug" # $DEBUG_DIR = '..\..\x64\Debug' $RELEASE_DIR = '..\..\x64\Release' function get_build_dirs() { param( [ValidateSet("debug", "nondebug")] [string]$build ) $build_dirs = @() if ($build -eq "debug") { $build_dirs += $DEBUG_DIR + "\tests" $build_dirs += $DEBUG_DIR + "\examples" $build_dirs += $DEBUG_DIR + "\libs" } else { $build_dirs += $RELEASE_DIR + "\tests" $build_dirs += $RELEASE_DIR + "\examples" $build_dirs += $RELEASE_DIR + "\libs" } return $build_dirs } # # read_global_test_configuration -- load per test configuration # function read_global_test_configuration { if ((Test-Path "config.PS1")) { # source the test configuration file . 
".\config.PS1" return; } } # # save_env_variables -- save environment variables # function save_env_variables { $old = New-Object System.Collections.ArrayList $old.AddRange((Get-ChildItem Env:)) return $old } # # restore_env_variables -- restore environment variables # function restore_env_variables { param([System.Collections.ArrayList]$old) $new = New-Object System.Collections.ArrayList $new.AddRange((Get-ChildItem Env:)) $old.ToArray() | foreach { if($new.contains($_)) { $old.Remove($_) $new.Remove($_) } } $new.ToArray() | foreach { [Environment]::SetEnvironmentVariable($_.Key, $null) } $old.ToArray() | foreach { [Environment]::SetEnvironmentVariable($_.Key, $_.Value) } } # # runtest -- given the test directory name, run tests found inside it # function runtest { param ( [Parameter(ValueFromPipeline = $true, Mandatory = $true)] [string] $testName, [Parameter(Mandatory = $true)] [Config] $config ) # reset environment variables which can affect test execution $Env:UNITTEST_NAME=$null if (-Not $Env:UNITTEST_LOG_LEVEL) { $Env:UNITTEST_LOG_LEVEL = 1 } if ($config.testfile -eq "all") { $dirCheck = ".\TEST*.ps1" } else { $dirCheck = ".\" + $config.testfile + ".ps1" } $runscripts = "" Get-ChildItem $dirCheck | Sort-Object { $_.BaseName -replace "\D+" -as [Int] } | % { $runscripts += $_.Name + " " } $runscripts = $runscripts.trim() if (-not $runscripts) { return } # for each TEST script found... Foreach ($runscript in $runscripts.split(" ")) { Write-Verbose "RUNTESTS: Test: $testName/$runscript " read_global_test_configuration Foreach ($build in $config.buildtype.split(" ").trim()) { Write-Verbose "RUNTESTS: Testing build-type: $build..." $Env:CHECK_POOL = $config.check_pool $Env:BUILD = $build $Env:EXE_DIR = $(get_build_dirs $build)[0] $Env:EXAMPLES_DIR = $(get_build_dirs $build)[1] if ($Env:BUILD -eq 'nondebug') { if (-Not $Env:PMDK_LIB_PATH_NONDEBUG) { $Env:LIBS_DIR = $(get_build_dirs $build)[2] } else { $Env:LIBS_DIR = $Env:PMDK_LIB_PATH_NONDEBUG } } elseif ($Env:BUILD -eq 'debug') { if (-Not $Env:PMDK_LIB_PATH_DEBUG) { $Env:LIBS_DIR = $(get_build_dirs $build)[2] } else { $Env:LIBS_DIR = $Env:PMDK_LIB_PATH_DEBUG } } if ($dryrun -eq "1") { Write-Host "(in ./$testName) BUILD=$build .\$runscript" continue } $save = save_env_variables # run test Invoke-Expression ./$runscript $ret = $? # reset timeout timer New-Event -SourceIdentifier "timeout-reset" | out-null restore_env_variables($save) if (-not $ret) { throw "RUNTESTS: stopping: $testName/$runscript FAILED BUILD=$build" } } # for builds } # for runscripts } function isDir { if (-Not $args[0]) { return $false } return Test-Path $args[0] -PathType Container } if (-Not (Test-Path "./testconfig.ps1")) { throw $MyInvocation.MyCommand.Name + " stopping because no testconfig.ps1 is found. To create one: Copy-Item testconfig.ps1.example testconfig.ps1 and edit testconfig.ps1 to describe the local machine configuration. " } # need to manually clear variables, if definition of a variable was removed # from testconfig RUNTEST would use previously exported value $Env:TEST_DIR="" . 
.\testconfig.ps1 if ($Env:TEST_DIR -And (-Not (isDir($Env:TEST_DIR)))) { throw "error: TEST_DIR=$Env:TEST_DIR doesn't exist" } vmem-1.8/src/test/RUNTESTS000077500000000000000000000322701361505074100153170ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # RUNTESTS -- setup the environment and run each test # # # usage -- print usage message and exit # usage() { [ "$1" ] && echo Error: $1 cat >&2 </dev/null && errmsg="$(tput setaf 1)$errmsg$(tput sgr0)" echo "RUNTESTS: stopping: $RUNTEST_DIR/$RUNTEST_SCRIPT $errmsg, $RUNTEST_PARAMS" >&2 if [ "$keep_going" == "y" ]; then keep_going_exit_code=1 keep_going_skip=y fail_list="$fail_list $RUNTEST_DIR/$RUNTEST_SCRIPT" ((fail_count+=1)) if [ "$CLEAN_FAILED" == "y" ]; then dir_rm=$(<$TEMP_LOC) rm -Rf $dir_rm if [ $? -ne 0 ]; then echo -e "Cannot remove directory with data: $dir_rm" fi fi else exit 1 fi } rm -f $TEMP_LOC [ "$verbose_old" != "-1" ] && verbose=$verbose_old return 0 } # # load_default_global_test_configuration -- load a default global configuration # load_default_global_test_configuration() { global_req_buildtype=all global_req_timeout='3m' return 0 } # switch_hyphen -- substitute hyphen for underscores switch_hyphen() { echo ${1//-/_} } # # read_global_test_configuration -- read a global configuration from a test # config file and overwrite a global configuration # read_global_test_configuration() { if [ ! -e "config.sh" ]; then return fi # unset all global settings unset CONF_GLOBAL_TEST_TYPE unset CONF_GLOBAL_FS_TYPE unset CONF_GLOBAL_BUILD_TYPE unset CONF_GLOBAL_TIMEOUT # unset all local settings unset CONF_TEST_TYPE unset CONF_FS_TYPE unset CONF_BUILD_TYPE unset CONF_TIMEOUT . 
config.sh [ -n "$CONF_GLOBAL_BUILD_TYPE" ] && global_req_buildtype=$CONF_GLOBAL_BUILD_TYPE [ -n "$CONF_GLOBAL_TIMEOUT" ] && global_req_timeout=$CONF_GLOBAL_TIMEOUT return 0 } # # read_test_configuration -- generate a test configuration from a global # configuration and a test configuration read from a test config file # usage: read_test_configuration # read_test_configuration() { req_buildtype=$global_req_buildtype req_timeout=$global_req_timeout [ -n "${CONF_BUILD_TYPE[$1]}" ] && req_buildtype=${CONF_BUILD_TYPE[$1]} if [ -n "$runtest_timeout" ]; then req_timeout="$runtest_timeout" else [ -n "${CONF_TIMEOUT[$1]}" ] && req_timeout=${CONF_TIMEOUT[$1]} fi special_params= return 0 } # # intersection -- return common elements of collection of available and required # values # usage: intersection # intersection() { collection=$1 [ "$collection" == "all" ] && collection=$3 [ "$2" == "all" ] && echo $collection && return for e in $collection; do for r in $2; do [ "$e" == "$r" ] && { subset="$subset $e" } done done echo $subset } # # runtest -- given the test directory name, run tests found inside it # runtest() { [ "$UNITTEST_LOG_LEVEL" ] || UNITTEST_LOG_LEVEL=1 export UNITTEST_LOG_LEVEL [ -f "$1/TEST0" ] || { echo FAIL: $1: test not found. >&2 exit 1 } [ -x "$1/TEST0" ] || { echo FAIL: $1: test not executable. >&2 exit 1 } cd $1 load_default_global_test_configuration read_global_test_configuration runscripts=$testfile if [ "$runscripts" = all ]; then if [ "$testseq" = all ]; then runscripts=`ls -1 TEST* | grep '^TEST[0-9]\+$' | sort -V` else # generate test sequence seqs=(${testseq//,/ }) runscripts= for seq in ${seqs[@]}; do limits=(${seq//-/ }) if [ "${#limits[@]}" -eq "2" ]; then if [ ${limits[0]} -lt ${limits[1]} ]; then nos="$(seq ${limits[0]} ${limits[1]})" else nos="$(seq ${limits[1]} ${limits[0]})" fi else nos=${limits[0]} fi for no in $nos; do runscripts="$runscripts TEST$no" done done fi fi # for each TEST script found... for runscript in $runscripts do UNITTEST_NAME="$1/$runscript" local sid=${runscript#TEST} read_test_configuration $sid builds=$(intersection "$buildtype" "$req_buildtype" "debug nondebug static-debug static-nondebug") # for each build-type being tested... for build in $builds do export RUNTEST_DIR=$1 export RUNTEST_PARAMS="BUILD=$build" export RUNTEST_EXTRA="CHECK_TYPE=$checktype CHECK_POOL=$check_pool \ $special_params" export RUNTEST_SCRIPT="$runscript" export RUNTEST_TIMEOUT="$req_timeout" if [ "$KEEP_GOING" == "y" ] && [ "$CLEAN_FAILED" == "y" ]; then # temporary file used for sharing data # between RUNTESTS and tests processes temp_loc=$(mktemp /tmp/data-location.XXXXXXXX) export TEMP_LOC=$temp_loc fi # to not overwrite logs skip other tests from the group # if KEEP_GOING=y and test fail if [ "$keep_going_skip" == "n" ]; then runtest_local fi done keep_going_skip=n done cd .. } [ -f testconfig.sh ] || { cat >&2 </dev/null if [ $? != 0 ]; then unset killopt fi # check if timeout can be run in the foreground timeout --foreground 1s true &>/dev/null if [ $? 
!= 0 ]; then unset use_timeout fi if [ -n "$TRACE" ]; then unset use_timeout fi if [ "$1" ]; then for test in $* do [ -d "$test" ] || echo "RUNTESTS: Test does not exist: $test" [ -f "$test/TEST0" ] && runtest $test done else # no arguments means run them all for testfile0 in */TEST0 do testdir=`dirname $testfile0` if [[ "$skip_dir" =~ "$testdir" ]]; then echo "RUNTESTS: Skipping: $testdir" continue fi runtest $testdir done fi if [ "$fail_count" != "0" ]; then echo "$(tput setaf 1)$fail_count tests failed:$(tput sgr0)" # remove duplicates and print each test name in a new line echo $fail_list | xargs -n1 | uniq exit $keep_going_exit_code else exit 0 fi vmem-1.8/src/test/RUNTESTS.PS1000066400000000000000000000172051361505074100157170ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # RUNTESTS.PS1 -- setup the environment and run each test # # # parameter handling # [CmdletBinding(PositionalBinding=$false)] Param( [alias("n")] [switch]$dryrun = $false, [alias("b")] [ValidateSet("all", "debug", "nondebug")] [string]$buildtype = "all", [alias("o")] [ValidateScript({ if( $_ -match "^\d+[smhd]?$") { $true } else { throw "$_ is not valid timeout value." } })] [string]$time = "180s", [alias("s")] [string]$testfile = "all", [alias("i")] [ValidateScript({ if($_ -eq "all") { $true } elseif(Test-Path -Path $_ -pathType container) { $true } else { throw "Directory $_ doesn't exist." } })] [string]$testdir = "all", [alias("c")] [switch]$check_pool = $false, [alias("k")] [string]$skip_dir = "", [alias("j")] [uint32]$jobs = 1, [alias("h")] [switch]$help= $false ) if ($PSVersionTable.PSVersion.Major -lt 5) { throw $MyInvocation.MyCommand.Name + " require powershell version >= 5" } . 
.\RUNTESTLIB.PS1 if($help) { usage $MyInvocation.MyCommand.Name } Write-Verbose "Options: -v $(if ($dryrun) {"-n"})" Write-Verbose " build-type: $buildtype" Write-Verbose " check-pool: $(if ($check_pool -eq "1") {"yes"} else {"no"})" $config = New-Object Config $config.setBuildtype($buildtype) $config.setTimeout($time) $config.setTestdir($testdir) $config.check_pool = $check_pool $config.skip_dir = $skip_dir $config.testfile = $testfile $config.verbose = $VerbosePreference Register-EngineEvent -SourceIdentifier "timeout-reset" -Action { $Global:stopwatch = [diagnostics.stopwatch]::StartNew() } | Out-Null $Global:stopwatch = [diagnostics.stopwatch]::StartNew() # script blocks - job's start functions $sb_ST = { param ([string]$dir, $config) # Config type is unknown here $VerbosePreference = $config.verbose Register-EngineEvent -SourceIdentifier "timeout-reset" -Forward # catch event and forward it to parent job cd $dir . .\RUNTESTLIB.PS1 $config.testdir | % { if ($config.skip_dir.split() -contains $_) { Write-Host "RUNTESTS: Skipping: $testName" return } cd $_ $_ | runtest -config $config cd .. } } $sb_MT = { param ([string]$dir, $config, [string]$test) # Config type is unknown here $VerbosePreference = $config.verbose Register-EngineEvent -SourceIdentifier "timeout-reset" -Forward # catch event and forward it to parent job cd $dir . .\RUNTESTLIB.PS1 if ($config.skip_dir.split() -contains $_) { Write-Host "RUNTESTS: Skipping: $testName" return } cd $test $test | runtest -config $config cd .. } # unique name for all jobs $name = [guid]::NewGuid().ToString() try { if ($jobs -gt 1) { $it = 0 $threads = 0 $tests = $config.testdir # start worker jobs 1..$jobs | % { if ($it -lt $tests.Length) { Start-Job -Name $name -Args $PSScriptRoot, $config, $tests[$it] -ScriptBlock $sb_MT | Out-Null $it++ $threads++ } } $fail = $false # control loop for receiving job outputs and starting new jobs while ($threads -ne 0) { if ($config.timeout.TotalSeconds -ne 0 -and $Global:stopwatch.Elapsed.TotalSeconds -ge $config.timeout.TotalSeconds) { Get-Job -name $name | Remove-Job -Force throw "RUNTESTS: stopping: TIMED OUT" } Get-Job -name $name | Receive-Job Get-Job -name $name | % { if ($_.State -eq "Running" -or $_.State -eq "NotStarted") { return } if ($_.State -eq "Failed") { $fail = $true } Receive-Job $_ Remove-Job $_ -Force $threads-- if ($fail -eq $false) { if ($it -lt $tests.Length) { Start-Job -Name $name -Args $PSScriptRoot, $config, $tests[$it] -ScriptBlock $sb_MT | Out-Null $it++ $threads++ } } } } if ($fail -eq $true) { throw "one of the tests failed" } } else { # if there is no timeout don't run tests in separate thread - useful for script debugging if ($config.timeout.TotalSeconds -eq 0) { & $sb_ST $PSScriptRoot $config } else { $job = Start-Job -Name $name -Args $PSScriptRoot, $config -ScriptBlock $sb_ST -Verbose $threads++ while ($Global:stopwatch.Elapsed.TotalSeconds -lt $config.timeout.TotalSeconds -and $(Get-Job).ChildJobs.Count -ne 0) { Receive-Job -Job $job if ($job.State -eq "Running" -or $job.State -eq "NotStarted") { sleep -Milliseconds 100 continue } if ($job.State -eq "Failed") { Remove-Job -job $job -Force | out-null $threads-- throw "one of the tests failed" } Receive-Job $job Remove-Job $job -Force $threads-- return } if ($Global:stopwatch.Elapsed.TotalSeconds -ge $config.timeout.TotalSeconds) { Receive-Job -Job $job throw "TIMED OUT" } } } } catch { # in case of fail test without timeout configured # we have to return to src/test dir if ($config.timeout.TotalSeconds -eq 0) { cd .. 
} Write-Error "RUNTESTS FAILED: $_" $status = 1 } finally { # cleanup jobs in case of exception or C-c if ($config.timeout.TotalSeconds -ne 0) { Get-Job -name "timeout-reset"| Remove-Job -Force } if ($threads -gt 0) { Get-Job -name $name | Remove-Job -Force } } Exit $status vmem-1.8/src/test/config.sh.example000066400000000000000000000012401361505074100173260ustar00rootroot00000000000000# # src/test/config.sh.example -- example of configuration file for a single unit # test (real config file for the 'unit_test' should have the following # path: src/test//config.sh) # # GLOBAL REQUIREMENTS # Global requirements are applied for all TEST scripts in the sub-directory # where config.sh file is located. # Tests can require a specific build types: CONF_GLOBAL_BUILD_TYPE=debug # Test can also require custom timeout: CONF_GLOBAL_TIMEOUT=5m # PER TEST REQUIREMENTS # Per TEST requirements are applied only for single TEST script. # The same for any other global requirements: # CONF_BUILD_TYPE[]="debug nondebug" vmem-1.8/src/test/drd-log.supp000066400000000000000000000011761361505074100163440ustar00rootroot00000000000000{ drd:ConflictingAccess fun:*mempcpy ... fun:_IO_file_xsputn@@GLIBC* fun:fputs fun:out_print_func fun:out_common fun:out_log } { drd:ConflictingAccess fun:*memmove fun:_IO_file_xsputn@@GLIBC* fun:fputs fun:out_print_func fun:out_common fun:out_log } { drd:ConflictingAccess fun:*mempcpy fun:_IO_file_xsputn@@GLIBC* fun:fputs ... fun:ut_out } { drd:ConflictingAccess fun:*memmove fun:_IO_file_xsputn@@GLIBC* fun:fputs ... fun:ut_out } vmem-1.8/src/test/freebsd.supp000066400000000000000000000043041361505074100164220ustar00rootroot00000000000000{ memcheck_FreeBSD_libc_catopen Memcheck:Leak match-leak-kinds: reachable ... fun:malloc ... fun:catopen ... } { memcheck_FreeBSD_libc_setvbuf Memcheck:Leak match-leak-kinds: reachable ... fun:*malloc fun:setvbuf fun:ut_start_common fun:ut_start fun:main } { memcheck_FreeBSD_ld-elf.so.1 Memcheck:Leak match-leak-kinds: reachable ... fun:*alloc obj:/lib/libthr.so.3 ... obj:/libexec/ld-elf.so.1 obj:/libexec/ld-elf.so.1 } { drd_FreeBSD_ld-elf.so.1 drd:ConflictingAccess obj:/libexec/ld-elf.so.1 } { drd_FreeBSD_libthr.so.3 drd:ConflictingAccess obj:/lib/libthr.so.3 } { helgrind_FreeBSD_libthr.so.3 Helgrind:Race obj:/lib/libthr.so.3 } { helgrind_FreeBSD___set_error_selector Helgrind:Race fun:__set_error_selector obj:/lib/libthr.so.3 ... } { drd_FreeBSD_libgcc_s.so.1 drd:ConflictingAccess obj:/lib/libgcc_s.so.1 ... obj:/lib/libthr.so.3 fun:pthread_exit } { drd_FreeBSD_flockfile drd:ConflictingAccess fun:flockfile ... } { helgrind_FreeBSD_flockfile Helgrind:Race fun:flockfile ... } { drd_FreeBSD_fopen drd:ConflictingAccess obj:/lib/libc.so.7 fun:fopen ... } { helgrind_FreeBSD_fopen Helgrind:Race obj:/lib/libc.so.7 fun:fopen ... } { drd_FreeBSD_fputs drd:ConflictingAccess obj:/lib/libc.so.7 ... fun:fputs ... } { helgrind_FreeBSD_fputs Helgrind:Race obj:/lib/libc.so.7 ... fun:fputs ... } { drd_FreeBSD_funlockfile drd:ConflictingAccess fun:funlockfile ... } { helgrind_FreeBSD_funlockfile Helgrind:Race fun:funlockfile ... } { helgrind_FreeBSD__rtld_allocate_tls Helgrind:Race obj:/lib/libthr.so.3 ... obj:/libexec/ld-elf.so.1 fun:_rtld_allocate_tls ... fun:pthread_create ... } { helgrind_FreeBSD_pthread_mutex_lock Helgrind:Race obj:/lib/libthr.so.3 ... fun:pthread_mutex_lock ... obj:/lib/libthr.so.3 } { helgrind_FreeBSD_pthread_mutex_unlock Helgrind:Race obj:/lib/libthr.so.3 ... fun:pthread_mutex_unlock ... 
obj:/lib/libthr.so.3 } { helgrind_FreeBSD_pthread_exit Helgrind:Race ... obj:/lib/libthr.so.3 ... fun:pthread_exit obj:/lib/libthr.so.3 } vmem-1.8/src/test/helgrind-cxgb4.supp000066400000000000000000000002701361505074100176070ustar00rootroot00000000000000{ Helgrind:Misc ... fun:pthread_spin_lock fun:c4iw_flush_qps fun:c4iw_poll_cq fun:ibv_poll_cq fun:fi_ibv_poll_cq ... } vmem-1.8/src/test/helgrind-log.supp000066400000000000000000000011361361505074100173630ustar00rootroot00000000000000{ Helgrind:Race fun:*mempcpy ... fun:_IO_file_xsputn@@GLIBC* fun:fputs fun:out_print_func fun:out_common fun:out_log } { Helgrind:Race fun:*memmove fun:_IO_file_xsputn@@GLIBC* fun:fputs fun:out_print_func fun:out_common fun:out_log } { Helgrind:Race fun:*mempcpy fun:_IO_file_xsputn@@GLIBC* fun:fputs ... fun:ut_out } { Helgrind:Race fun:*memmove fun:_IO_file_xsputn@@GLIBC* fun:fputs ... fun:ut_out } vmem-1.8/src/test/ld.supp000066400000000000000000000012721361505074100154100ustar00rootroot00000000000000{ Memcheck:Cond fun:index fun:expand_dynamic_string_token fun:_dl_map_object fun:map_doit fun:_dl_catch_error fun:do_preload fun:dl_main fun:_dl_sysdep_start fun:_dl_start obj:/lib/x86_64-linux-gnu/ld-2.*.so obj:* obj:* } { Memcheck:Cond fun:index fun:expand_dynamic_string_token fun:_dl_map_object fun:map_doit fun:_dl_catch_error fun:do_preload fun:handle_ld_preload fun:dl_main fun:_dl_sysdep_start fun:_dl_start obj:/lib/x86_64-linux-gnu/ld-2.*.so obj:* } { Memcheck:Leak ... fun:_dl_init ... } vmem-1.8/src/test/match000077500000000000000000000214731361505074100151270ustar00rootroot00000000000000#!/usr/bin/env perl # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # match -- compare an output file with expected results # # usage: match [-adoqv] [match-file]... # # this script compares the output from a test run, stored in a file, with # the expected output. comparison is done line-by-line until either all # lines compare correctly (exit code 0) or a miscompare is found (exit # code nonzero). 
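#
# for example (hypothetical invocations, shown only to illustrate the
# usage line above): "./match out0.log.match" compares the file out0.log
# against the patterns in out0.log.match, and "./match -a" checks every
# "X.match" file found in the current directory against its
# corresponding output file "X".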
# # expected output is stored in a ".match" file, which contains a copy of # the expected output with embedded tokens for things that should not be # exact matches. the supported tokens are: # # $(N) an integer (i.e. one or more decimal digits) # $(NC) one or more decimal digits with comma separators # $(FP) a floating point number # $(S) ascii string # $(X) hex number # $(XX) hex number prefixed with 0x # $(W) whitespace # $(nW) non-whitespace # $(*) any string # $(DD) output of a "dd" run # $(OPT) line is optional (may be missing, matched if found) # $(OPX) ends a contiguous list of $(OPT)...$(OPX) lines, at least # one of which must match # ${string1|string2} string1 OR string2 # # Additionally, if any "X.ignore" file exists, strings or phrases found per # line in the file will be ignored if found as a substring in the # corresponding output file (making it easy to skip entire output lines). # # arguments are: # # -a find all files of the form "X.match" in the current # directory and match them again the corresponding file "X". # # -o custom output filename - only one match file can be given # # -d debug -- show lots of debug output # # -q don't print log files on mismatch # # -v verbose -- show every line as it is being matched # use strict; use Getopt::Std; use Encode; use v5.16; select STDERR; binmode(STDOUT, ":utf8"); binmode(STDERR, ":utf8"); my ($color_ok, $color_bad, $color_end) = -t STDOUT ? ("", "\e[31;1m", "\e[0m") : ('')x3; my $Me = $0; $Me =~ s,.*/,,; our ($opt_a, $opt_d, $opt_q, $opt_v, $opt_o); $SIG{HUP} = $SIG{INT} = $SIG{TERM} = $SIG{__DIE__} = sub { die @_ if $^S; my $errstr = shift; die "FAIL: $Me: $errstr"; }; sub usage { my $msg = shift; warn "$Me: $msg\n" if $msg; warn "Usage: $Me [-adqv] [match-file]...\n"; warn " or: $Me [-dqv] -o output-file match-file...\n"; exit 1; } getopts('adoqv') or usage; my %match2file; if ($opt_a) { usage("-a and filename arguments are mutually exclusive") if $#ARGV != -1; opendir(DIR, '.') or die "opendir: .: $!\n"; my @matchfiles = grep { /(.*)\.match$/ && -f $1 } readdir(DIR); closedir(DIR); die "no files found to process\n" unless @matchfiles; foreach my $mfile (@matchfiles) { die "$mfile: $!\n" unless open(F, $mfile); close(F); my $ofile = $mfile; $ofile =~ s/\.match$//; die "$mfile found but cannot open $ofile: $!\n" unless open(F, $ofile); close(F); $match2file{$mfile} = $ofile; } } elsif ($opt_o) { usage("-o argument requires two paths") if $#ARGV != 1; $match2file{$ARGV[1]} = $ARGV[0]; } else { usage("no match-file arguments found") if $#ARGV == -1; # to improve the failure case, check all filename args exist and # are provided in pairs now, before going through and processing them foreach my $mfile (@ARGV) { my $ofile = $mfile; usage("$mfile: not a .match file") unless $ofile =~ s/\.match$//; usage("$mfile: $!") unless open(F, $mfile); close(F); usage("$ofile: $!") unless open(F, $ofile); close(F); $match2file{$mfile} = $ofile; } } my $mfile; my $ofile; my $ifile; print "Files to be processed:\n" if $opt_v; foreach $mfile (sort keys %match2file) { $ofile = $match2file{$mfile}; $ifile = $ofile . ".ignore"; $ifile = undef unless (-f $ifile); if ($opt_v) { print " match-file \"$mfile\" output-file \"$ofile\""; if ($ifile) { print " ignore-file $ifile\n"; } else { print "\n"; } } match($mfile, $ofile, $ifile); } exit 0; # # strip_it - user can optionally ignore lines from files that contain # any number of substrings listed in a file called "X.ignore" where X # is the name of the output file. 
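#
# for illustration only (hypothetical content, not taken from any real
# test): a match line such as
#
#	allocated $(N) bytes at $(XX) in $(FP) seconds
#
# would accept an output line like "allocated 4096 bytes at
# 0x7fda80000000 in 0.25 seconds", and if the corresponding "X.ignore"
# file contained the single line "WARNING", every output or match-file
# line containing that substring would be skipped before the comparison
# is made.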
# sub strip_it { my ($ifile, $file, $input) = @_; # if there is no ignore file just return unaltered input return $input unless $ifile; my @lines_in = split /^/, $input; my $output; my $line_in; my @i_file = split /^/, snarf($ifile); my $i_line; my $ignore_it = 0; foreach $line_in (@lines_in) { my @i_lines = @i_file; foreach $i_line (@i_lines) { chop($i_line); if (index($line_in, $i_line) != -1) { $ignore_it = 1; if ($opt_v) { print "Ignoring (from $file): $line_in"; } } } if ($ignore_it == 0) { $output .= $line_in; } $ignore_it = 0; } return $output; } # # match -- process a match-file, output-file pair # sub match { my ($mfile, $ofile, $ifile) = @_; my $pat; my $output = snarf($ofile); $output = strip_it($ifile, $ofile, $output); my $all_lines = $output; my $line_pat = 0; my $line_out = 0; my $opt = 0; my $opx = 0; my $opt_found = 0; my $fstr = snarf($mfile); $fstr = strip_it($ifile, $mfile, $fstr); for (split /^/, $fstr) { $pat = $_; $line_pat++; $line_out++; s/([*+?|{}.\\^\$\[()])/\\$1/g; s/\\\$\\\(FP\\\)/[-+]?\\d*\\.?\\d+([eE][-+]?\\d+)?/g; s/\\\$\\\(N\\\)/[-+]?\\d+/g; s/\\\$\\\(NC\\\)/[-+]?\\d+(,[0-9]+)*/g; s/\\\$\\\(\\\*\\\)/\\p{Print}*/g; s/\\\$\\\(S\\\)/\\P{IsC}+/g; s/\\\$\\\(X\\\)/\\p{XPosixXDigit}+/g; s/\\\$\\\(XX\\\)/0x\\p{XPosixXDigit}+/g; s/\\\$\\\(W\\\)/\\p{Blank}*/g; s/\\\$\\\(nW\\\)/\\p{Graph}*/g; s/\\\$\\\{([^|]*)\\\|([^|]*)\\\}/($1|$2)/g; s/\\\$\\\(DD\\\)/\\d+\\+\\d+ records in\n\\d+\\+\\d+ records out\n\\d+ bytes \\\(\\d+ .B\\\) copied, [.0-9e-]+[^,]*, [.0-9]+ .B.s/g; if (s/\\\$\\\(OPT\\\)//) { $opt = 1; } elsif (s/\\\$\\\(OPX\\\)//) { $opx = 1; } else { $opt_found = 0; } if ($opt_v) { my @lines = split /\n/, $output; my $line; if (@lines) { $line = $lines[0]; } else { $line = "[EOF]"; } printf("%s%s:%-3d %s%s:%-3d %s%s\n", ($output =~ /^$_/) ? $color_ok : $color_bad, $mfile, $line_pat, $pat, $ofile, $line_out, $line, $color_end); } print " => /$_/\n" if $opt_d; print " [$output]\n" if $opt_d; unless ($output =~ s/^$_//) { if ($opt || ($opx && $opt_found)) { printf("%s:%-3d [skipping optional line]\n", $ofile, $line_out) if $opt_v; $line_out--; $opt = 0; } else { if (!$opt_v) { if ($opt_q) { print "[MATCHING FAILED]\n"; } else { print "[MATCHING FAILED, COMPLETE FILE ($ofile) BELOW]\n$all_lines\n[EOF]\n"; } $opt_v = 1; match($mfile, $ofile); } die "$mfile:$line_pat did not match pattern\n"; } } elsif ($opt) { $opt_found = 1; } $opx = 0; } if ($output ne '') { if (!$opt_v) { if ($opt_q) { print "[MATCHING FAILED]\n"; } else { print "[MATCHING FAILED, COMPLETE FILE ($ofile) BELOW]\n$all_lines\n[EOF]\n"; } } # make it a little more print-friendly... $output =~ s/\n/\\n/g; die "line $line_pat: unexpected output: \"$output\"\n"; } } # # snarf -- slurp an entire file into memory # sub snarf { my ($file) = @_; my $fh; open($fh, '<', $file) or die "$file $!\n"; local $/; $_ = <$fh>; close $fh; # check known encodings or die my $decoded; my @encodings = ("UTF-8", "UTF-16", "UTF-16LE", "UTF-16BE"); foreach my $enc (@encodings) { eval { $decoded = decode( $enc, $_, Encode::FB_CROAK ) }; if (!$@) { $decoded =~ s/\R/\n/g; return $decoded; } } die "$Me: ERROR: Unknown file encoding"; } vmem-1.8/src/test/memcheck-dlopen.supp000066400000000000000000000004561361505074100200470ustar00rootroot00000000000000{ dlopen suppression Memcheck:Leak fun:malloc fun:strdup ... fun:call_init fun:_dl_init fun:dl_open_worker fun:_dl_catch_exception fun:_dl_open ... } { Memcheck:Leak ... fun:_dlerror_run fun:dlopen@@GLIBC* ... 
} vmem-1.8/src/test/memcheck-libibverbs.supp000066400000000000000000000003431361505074100207040ustar00rootroot00000000000000{ Memcheck:Param write(buf) ... fun:ibv_cmd_modify_qp fun:modify_rc_qp fun:c4iw_modify_qp fun:ibv_modify_qp@@IBVERBS_1.1 ... } vmem-1.8/src/test/memcheck-libunwind.supp000066400000000000000000000004571361505074100205620ustar00rootroot00000000000000{ generic libunwind suppression Memcheck:Param msync(start) ... obj:*libunwind* ... } { generic libunwind suppression Memcheck:Param rt_sigprocmask(set) ... obj:*libunwind* ... } { generic libunwind suppression Memcheck:Addr8 ... obj:*libunwind* ... } vmem-1.8/src/test/memcheck-stdcpp.supp000066400000000000000000000004611361505074100200570ustar00rootroot00000000000000{ https://bugs.kde.org/show_bug.cgi?id=345307, https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65434, https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64535 Memcheck:Leak match-leak-kinds: reachable fun:malloc obj:*/libstdc++.so.* fun:call_init.part.0 ... fun:_dl_init obj:*/ld-*.so } vmem-1.8/src/test/out_err/000077500000000000000000000000001361505074100155555ustar00rootroot00000000000000vmem-1.8/src/test/out_err/.gitignore000066400000000000000000000000101361505074100175340ustar00rootroot00000000000000out_err vmem-1.8/src/test/out_err/Makefile000066400000000000000000000033741361505074100172240ustar00rootroot00000000000000# # Copyright 2015-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/out_err/Makefile -- build unit test for out_err() # TARGET = out_err OBJS = out_err.o BUILD_STATIC_DEBUG=n BUILD_STATIC_NONDEBUG=n LIBPMEMCOMMON=internal-debug include ../Makefile.inc CFLAGS += -DDEBUG vmem-1.8/src/test/out_err/TEST0000077500000000000000000000034511361505074100163450ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err/TEST0 -- unit test for out_err() # . ../unittest/unittest.sh require_build_type debug setup export TRACE_LOG_LEVEL=1 export TRACE_LOG_FILE=./traces$UNITTEST_NUM.log expect_normal_exit ./out_err$EXESUFFIX check pass vmem-1.8/src/test/out_err/TEST0.PS1000066400000000000000000000034531361505074100167460ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err/TEST0 -- unit test for out_err() # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:TRACE_LOG_LEVEL = 1 $Env:TRACE_LOG_FILE = ".\traces$Env:UNITTEST_NUM.log" expect_normal_exit $Env:EXE_DIR\out_err$Env:EXESUFFIX check pass vmem-1.8/src/test/out_err/out0.log.match000066400000000000000000000003401361505074100202370ustar00rootroot00000000000000out_err$(nW)TEST0: START: out_err$(nW) $(nW)out_err$(nW) ERR #1 $(OPT)ERR #2: Success $(OPX)ERR #2: No error: 0 ERR #3: Invalid argument ERR1: Bad file descriptor:1234 ERR2: Bad file descriptor:1234 out_err$(nW)TEST0: DONE vmem-1.8/src/test/out_err/out_err.c000066400000000000000000000051451361505074100174050ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * traces.c -- unit test for traces */ #define LOG_PREFIX "trace" #define LOG_LEVEL_VAR "TRACE_LOG_LEVEL" #define LOG_FILE_VAR "TRACE_LOG_FILE" #define MAJOR_VERSION 1 #define MINOR_VERSION 0 #include #include #include "unittest.h" #include "pmemcommon.h" int main(int argc, char *argv[]) { char buff[UT_MAX_ERR_MSG]; START(argc, argv, "out_err"); /* Execute test */ common_init(LOG_PREFIX, LOG_LEVEL_VAR, LOG_FILE_VAR, MAJOR_VERSION, MINOR_VERSION); errno = 0; ERR("ERR #%d", 1); UT_OUT("%s", out_get_errormsg()); errno = 0; ERR("!ERR #%d", 2); UT_OUT("%s", out_get_errormsg()); errno = EINVAL; ERR("!ERR #%d", 3); UT_OUT("%s", out_get_errormsg()); errno = EBADF; ut_strerror(errno, buff, UT_MAX_ERR_MSG); out_err(__FILE__, 100, __func__, "ERR1: %s:%d", buff, 1234); UT_OUT("%s", out_get_errormsg()); errno = EBADF; ut_strerror(errno, buff, UT_MAX_ERR_MSG); out_err(NULL, 0, NULL, "ERR2: %s:%d", buff, 1234); UT_OUT("%s", out_get_errormsg()); /* Cleanup */ common_fini(); DONE(NULL); } vmem-1.8/src/test/out_err/out_err.vcxproj000066400000000000000000000064461361505074100206630ustar00rootroot00000000000000 Debug x64 Release x64 {8A0FA780-068A-4534-AA2F-4FF4CF977AF2} Win32Proj out_err 10.0.16299.0 Application true v140 Application false v140 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/out_err/out_err.vcxproj.filters000066400000000000000000000020111361505074100223120ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {de41a380-e1fd-45b9-bca5-1c60b0a288f6} {dd367362-9e19-4a5d-bd31-01247ae35f9a} Source Files Test scripts Match Files Match Files vmem-1.8/src/test/out_err/traces0.log.match000066400000000000000000000020461361505074100207160ustar00rootroot00000000000000: <1> [out.c:$(N) out_init]$(W)pid $(N): program: $(nW) : <1> [out.c:$(N) out_init]$(W)trace version 1.0 : <1> [out.c:$(N) out_init]$(W)src version: $(nW) $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind pmemcheck $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind helgrind $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind memcheck $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind drd $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for shutdown state $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with libndctl 63+ : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #1 $(OPT): <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #2: Success $(OPX): <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #2: No error: 0 : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #3: Invalid argument : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR1: Bad file descriptor:1234 ERR2: Bad file descriptor:1234 vmem-1.8/src/test/out_err_mt/000077500000000000000000000000001361505074100162555ustar00rootroot00000000000000vmem-1.8/src/test/out_err_mt/.gitignore000066400000000000000000000000131361505074100202370ustar00rootroot00000000000000out_err_mt vmem-1.8/src/test/out_err_mt/Makefile000066400000000000000000000032671361505074100177250ustar00rootroot00000000000000# # Copyright 2015-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_mt/Makefile -- build unit test for error messages # TARGET = out_err_mt OBJS = out_err_mt.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/out_err_mt/TEST0000077500000000000000000000033261361505074100170460ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_mt/TEST0 -- unit test for error messages # . ../unittest/unittest.sh setup expect_normal_exit ./out_err_mt$EXESUFFIX $DIR check pass vmem-1.8/src/test/out_err_mt/TEST0.PS1000066400000000000000000000034271361505074100174470ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. 
# # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_mt/TEST0 -- unit test for error messages # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\out_err_mt$Env:EXESUFFIX ` $DIR check pass vmem-1.8/src/test/out_err_mt/TEST1000077500000000000000000000034721361505074100170510ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_mt/TEST1 -- unit test for error messages # . 
../unittest/unittest.sh require_valgrind 3.7 configure_valgrind drd force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./out_err_mt$EXESUFFIX $DIR check pass vmem-1.8/src/test/out_err_mt/TEST2000077500000000000000000000034771361505074100170570ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_mt/TEST2 -- unit test for error messages # . 
../unittest/unittest.sh require_valgrind 3.7 configure_valgrind helgrind force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./out_err_mt$EXESUFFIX $DIR check pass vmem-1.8/src/test/out_err_mt/out0.log.match000066400000000000000000000003621361505074100207430ustar00rootroot00000000000000out_err_mt$(nW)TEST0: START: out_err_mt$(nW) $(nW)out_err_mt$(nW) $(nW) start VMEM: version check VMEM: libvmem major version mismatch (need 10005, found $(N)) vmem_create_in_region VMEM: size 1 smaller than $(N) out_err_mt$(nW)TEST0: DONE vmem-1.8/src/test/out_err_mt/out1.log.match000066400000000000000000000003621361505074100207440ustar00rootroot00000000000000out_err_mt$(nW)TEST1: START: out_err_mt$(nW) $(nW)out_err_mt$(nW) $(nW) start VMEM: version check VMEM: libvmem major version mismatch (need 10005, found $(N)) vmem_create_in_region VMEM: size 1 smaller than $(N) out_err_mt$(nW)TEST1: DONE vmem-1.8/src/test/out_err_mt/out2.log.match000066400000000000000000000003621361505074100207450ustar00rootroot00000000000000out_err_mt$(nW)TEST2: START: out_err_mt$(nW) $(nW)out_err_mt$(nW) $(nW) start VMEM: version check VMEM: libvmem major version mismatch (need 10005, found $(N)) vmem_create_in_region VMEM: size 1 smaller than $(N) out_err_mt$(nW)TEST2: DONE vmem-1.8/src/test/out_err_mt/out_err_mt.c000066400000000000000000000060731361505074100206060ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * out_err_mt.c -- unit test for error messages */ #include #include #include #include "unittest.h" #include "valgrind_internal.h" #include "util.h" #define NUM_THREADS 16 static void print_errors(const char *msg) { UT_OUT("%s", msg); UT_OUT("VMEM: %s", vmem_errormsg()); } static void check_errors(unsigned ver) { int ret; int err_need; int err_found; ret = sscanf(vmem_errormsg(), "libvmem major version mismatch (need %d, found %d)", &err_need, &err_found); UT_ASSERTeq(ret, 2); UT_ASSERTeq(err_need, ver); UT_ASSERTeq(err_found, VMEM_MAJOR_VERSION); } static void * do_test(void *arg) { unsigned ver = *(unsigned *)arg; vmem_check_version(ver, 0); check_errors(ver); return NULL; } static void run_mt_test(void *(*worker)(void *)) { os_thread_t thread[NUM_THREADS]; unsigned ver[NUM_THREADS]; for (unsigned i = 0; i < NUM_THREADS; ++i) { ver[i] = 10000 + i; PTHREAD_CREATE(&thread[i], NULL, worker, &ver[i]); } for (unsigned i = 0; i < NUM_THREADS; ++i) { PTHREAD_JOIN(&thread[i], NULL); } } int main(int argc, char *argv[]) { START(argc, argv, "out_err_mt"); if (argc != 2) UT_FATAL("usage: %s dir", argv[0]); print_errors("start"); VMEM *vmp = vmem_create(argv[1], VMEM_MIN_POOL); UT_ASSERT(vmp); vmem_check_version(10005, 0); print_errors("version check"); VMEM *vmp2 = vmem_create_in_region(NULL, 1); UT_ASSERTeq(vmp2, NULL); print_errors("vmem_create_in_region"); run_mt_test(do_test); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/out_err_mt/out_err_mt.vcxproj000066400000000000000000000064451361505074100220620ustar00rootroot00000000000000 Debug x64 Release x64 {063037B2-CA35-4520-811C-19D9C4ED891E} Win32Proj out_err_mt 10.0.16299.0 Application true v140 Application false v140 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/out_err_mt/out_err_mt.vcxproj.filters000066400000000000000000000017761361505074100235330ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {1d395904-81fc-4e68-a335-f663b431dbf7} match {62ac7650-9f5d-4d5f-96ff-d652a437ea51} ps1 Source Files Match Files Test Scripts vmem-1.8/src/test/out_err_mt_win/000077500000000000000000000000001361505074100171325ustar00rootroot00000000000000vmem-1.8/src/test/out_err_mt_win/TEST0.PS1000066400000000000000000000035331361505074100203220ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_mt_win/TEST0 -- unit test for error messages # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\out_err_mt_win$Env:EXESUFFIX ` $DIR\testfile1 $DIR\testfile2 $DIR\testfile3 $DIR\testfile4 $DIR check pass vmem-1.8/src/test/out_err_mt_win/out0.log.match000066400000000000000000000004761361505074100216260ustar00rootroot00000000000000out_err_mt_win$(nW)TEST0: START: out_err_mt_win$(nW) $(nW)out_err_mt$(nW) $(nW)testfile1 $(nW)testfile2 $(nW)testfile3 $(nW)testfile4 $(nW) start VMEM: version check VMEM: libvmem major version mismatch (need 10005, found $(N)) vmem_create_in_region VMEM: size 1 smaller than 14680064 out_err_mt_win$(nW)TEST0: DONE vmem-1.8/src/test/out_err_mt_win/out_err_mt_win.c000066400000000000000000000061201361505074100223310ustar00rootroot00000000000000/* * Copyright 2016-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * out_err_mt_win.c -- unit test for error messages */ #include #include #include #include "unittest.h" #include "valgrind_internal.h" #include "util.h" #define NUM_THREADS 16 static void print_errors(const wchar_t *msg) { UT_OUT("%S", msg); UT_OUT("VMEM: %S", vmem_errormsgW()); } static void check_errors(int ver) { int ret; int err_need; int err_found; ret = swscanf(vmem_errormsgW(), L"libvmem major version mismatch (need %d, found %d)", &err_need, &err_found); UT_ASSERTeq(ret, 2); UT_ASSERTeq(err_need, ver); UT_ASSERTeq(err_found, VMEM_MAJOR_VERSION); } static void * do_test(void *arg) { int ver = *(int *)arg; vmem_check_version(ver, 0); check_errors(ver); return NULL; } static void run_mt_test(void *(*worker)(void *)) { os_thread_t thread[NUM_THREADS]; int ver[NUM_THREADS]; for (int i = 0; i < NUM_THREADS; ++i) { ver[i] = 10000 + i; PTHREAD_CREATE(&thread[i], NULL, worker, &ver[i]); } for (int i = 0; i < NUM_THREADS; ++i) { PTHREAD_JOIN(&thread[i], NULL); } } int wmain(int argc, wchar_t *argv[]) { STARTW(argc, argv, "out_err_mt_win"); if (argc != 6) UT_FATAL("usage: %S file1 file2 file3 file4 dir", argv[0]); print_errors(L"start"); VMEM *vmp = vmem_createW(argv[5], VMEM_MIN_POOL); util_init(); vmem_check_version(10005, 0); print_errors(L"version check"); VMEM *vmp2 = vmem_create_in_region(NULL, 1); UT_ASSERTeq(vmp2, NULL); print_errors(L"vmem_create_in_region"); run_mt_test(do_test); vmem_delete(vmp); DONEW(NULL); } vmem-1.8/src/test/out_err_mt_win/out_err_mt_win.vcxproj000066400000000000000000000064551361505074100236150ustar00rootroot00000000000000 Debug x64 Release x64 {2B1A5104-A324-4D02-B5C7-D021FB8F880C} Win32Proj out_err_mt_win 10.0.16299.0 Application true v140 Application false v140 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/out_err_mt_win/out_err_mt_win.vcxproj.filters000066400000000000000000000020021361505074100252440ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {1d395904-81fc-4e68-a335-f663b431dbf7} match {62ac7650-9f5d-4d5f-96ff-d652a437ea51} ps1 Source Files Test Scripts Match Files vmem-1.8/src/test/out_err_win/000077500000000000000000000000001361505074100164325ustar00rootroot00000000000000vmem-1.8/src/test/out_err_win/TEST0.PS1000066400000000000000000000034631361505074100176240ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/out_err_win/TEST0 -- unit test for out_err() # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:TRACE_LOG_LEVEL = 1 $Env:TRACE_LOG_FILE = ".\traces$Env:UNITTEST_NUM.log" expect_normal_exit $Env:EXE_DIR\out_err_win$Env:EXESUFFIX check pass vmem-1.8/src/test/out_err_win/out0.log.match000066400000000000000000000003201361505074100211120ustar00rootroot00000000000000out_err_win$(nW)TEST0: START: out_err_win$(nW) $(nW)out_err_win$(nW) ERR #1 ERR #2: Success ERR #3: Invalid argument ERR1: Bad file descriptor:1234 ERR2: Bad file descriptor:1234 out_err_win$(nW)TEST0: DONE vmem-1.8/src/test/out_err_win/out_err_win.c000066400000000000000000000052011361505074100211300ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * out_err_win.c -- unit test for error messages */ #define LOG_PREFIX "trace" #define LOG_LEVEL_VAR "TRACE_LOG_LEVEL" #define LOG_FILE_VAR "TRACE_LOG_FILE" #define MAJOR_VERSION 1 #define MINOR_VERSION 0 #include #include #include "unittest.h" #include "pmemcommon.h" int wmain(int argc, wchar_t *argv[]) { char buff[UT_MAX_ERR_MSG]; STARTW(argc, argv, "out_err_win"); /* Execute test */ common_init(LOG_PREFIX, LOG_LEVEL_VAR, LOG_FILE_VAR, MAJOR_VERSION, MINOR_VERSION); errno = 0; ERR("ERR #%d", 1); UT_OUT("%S", out_get_errormsgW()); errno = 0; ERR("!ERR #%d", 2); UT_OUT("%S", out_get_errormsgW()); errno = EINVAL; ERR("!ERR #%d", 3); UT_OUT("%S", out_get_errormsgW()); errno = EBADF; ut_strerror(errno, buff, UT_MAX_ERR_MSG); out_err(__FILE__, 100, __func__, "ERR1: %s:%d", buff, 1234); UT_OUT("%S", out_get_errormsgW()); errno = EBADF; ut_strerror(errno, buff, UT_MAX_ERR_MSG); out_err(NULL, 0, NULL, "ERR2: %s:%d", buff, 1234); UT_OUT("%S", out_get_errormsgW()); /* Cleanup */ common_fini(); DONEW(NULL); } vmem-1.8/src/test/out_err_win/out_err_win.vcxproj000066400000000000000000000064561361505074100224160ustar00rootroot00000000000000 Debug x64 Release x64 {A57D9365-172E-4782-ADC6-82A594E30943} Win32Proj out_err_win 10.0.16299.0 Application true v140 Application false v140 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/out_err_win/out_err_win.vcxproj.filters000066400000000000000000000020151361505074100240500ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {de41a380-e1fd-45b9-bca5-1c60b0a288f6} {dd367362-9e19-4a5d-bd31-01247ae35f9a} Source Files Test scripts Match Files Match Files vmem-1.8/src/test/out_err_win/traces0.log.match000066400000000000000000000017261361505074100215770ustar00rootroot00000000000000: <1> [out.c:$(N) out_init]$(W)pid $(N): program: $(nW) : <1> [out.c:$(N) out_init]$(W)trace version 1.0 : <1> [out.c:$(N) out_init]$(W)src version: $(nW) $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind pmemcheck $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind helgrind $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind memcheck $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for Valgrind drd $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with support for shutdown state $(OPT): <1> [out.c:$(N) out_init]$(W)compiled with libndctl 63+ : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #1 : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #2: Success : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR #3: Invalid argument : <1> [out_err$(nW).c:$(N) $(nW)main]$(W)ERR1: Bad file descriptor:1234 ERR2: Bad file descriptor:1234 vmem-1.8/src/test/scope/000077500000000000000000000000001361505074100152075ustar00rootroot00000000000000vmem-1.8/src/test/scope/Makefile000066400000000000000000000031611361505074100166500ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/scope/Makefile -- build scope unit test # include ../Makefile.inc vmem-1.8/src/test/scope/TEST0000077500000000000000000000037661361505074100160100ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/scope/TEST0 -- scope test to check libvmem symbols # . 
../unittest/unittest.sh setup parse_lib() { local lib_path=$2 echo "$lib_path:" >> out$UNITTEST_NUM.log (nm $1 $lib_path |\ perl -ne 'print if s/^[0-9a-z]+ T (\w+)/$1/' |\ sort >> out$UNITTEST_NUM.log) 2>&1 } if [ "$BUILD" = "debug" -o "$BUILD" = "nondebug" ]; then parse_lib -D $PMDK_LIB_PATH/libvmem.so.1 else parse_lib -g $PMDK_LIB_PATH/libvmem.a fi check pass vmem-1.8/src/test/scope/TEST0w.PS1000066400000000000000000000036241361505074100165670ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/scope/TEST0 -- scope test to check libvmem symbols # . ..\unittest\unittest.ps1 require_build_type debug setup function parse_lib { echo "${args}:" >> out$Env:UNITTEST_NUM.log & $DLLVIEW $args | Sort-Object >> out$Env:UNITTEST_NUM.log } echo "${Env:UNITTEST_NAME}:" > out$Env:UNITTEST_NUM.log parse_lib "$Env:LIBS_DIR\libvmem.dll" check pass vmem-1.8/src/test/scope/TEST5000077500000000000000000000036611361505074100160070ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/scope/TEST5 -- scope test to check libvmmalloc symbols # . ../unittest/unittest.sh require_build_type debug nondebug setup parse_lib() { local lib_path=$2 echo "$lib_path:" >> out$UNITTEST_NUM.log (nm $1 $lib_path |\ perl -ne 'print if s/^[0-9a-z]+ T (\w+)/$1/' |\ sort >> out$UNITTEST_NUM.log) 2>&1 } parse_lib -D $PMDK_LIB_PATH/$VMMALLOC check pass vmem-1.8/src/test/scope/TEST5w.PS1000066400000000000000000000037151361505074100165750ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/scope/TEST5 -- scope test to check libvmmalloc symbols # . 
..\unittest\unittest.ps1 require_build_type debug setup function parse_lib { echo "${args}:" >> out$Env:UNITTEST_NUM.log & $DLLVIEW $args | Sort-Object >> out$Env:UNITTEST_NUM.log } # XXX: libvmmalloc not yet ported to Windows #echo "${Env:UNITTEST_NAME}:" > out$Env:UNITTEST_NUM.log #parse_lib "$Env:LIBS_DIR\libvmmalloc.dll" #check pass vmem-1.8/src/test/scope/out0.log.match000066400000000000000000000003611361505074100176740ustar00rootroot00000000000000$(*) vmem_aligned_alloc vmem_calloc vmem_check vmem_check_version vmem_create vmem_create_in_region vmem_delete vmem_errormsg vmem_free vmem_malloc vmem_malloc_usable_size vmem_realloc vmem_set_funcs vmem_stats_print vmem_strdup vmem_wcsdup vmem-1.8/src/test/scope/out0w.log.match000066400000000000000000000005071361505074100200650ustar00rootroot00000000000000scope/TEST0w: $(*)\libvmem.dll: DllMain vmem_aligned_alloc vmem_calloc vmem_check vmem_check_versionU vmem_check_versionW vmem_create_in_region vmem_createU vmem_createW vmem_delete vmem_errormsgU vmem_errormsgW vmem_free vmem_malloc vmem_malloc_usable_size vmem_realloc vmem_set_funcs vmem_stats_print vmem_strdup vmem_wcsdup vmem-1.8/src/test/scope/out5.log.match000066400000000000000000000003151361505074100177000ustar00rootroot00000000000000$(*) $(OPT)_malloc_postfork $(OPT)_malloc_prefork $(OPT)_malloc_thread_cleanup aligned_alloc calloc cfree free malloc malloc_usable_size memalign posix_memalign $(OPT)pthread_create pvalloc realloc valloc vmem-1.8/src/test/scope/out5w.log.match000066400000000000000000000002251361505074100200670ustar00rootroot00000000000000scope/TEST5w: $(*)\libvmmalloc.dll: DllMain aligned_alloc calloc cfree free malloc malloc_usable_size memalign posix_memalign pvalloc realloc valloc vmem-1.8/src/test/scope/scope.vcxproj000066400000000000000000000101701361505074100177340ustar00rootroot00000000000000 Debug x64 Release x64 {C0E811E0-8942-4CFD-A817-74D99E9E6577} Win32Proj scope 10.0.16299.0 Application true v140 Application false v140 Level3 Disabled NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) true Level3 MaxSpeed true true NTDDI_VERSION=NTDDI_WIN10_RS1;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) true {179beb5a-2c90-44f5-a734-fa756a5e668c} vmem-1.8/src/test/scope/scope.vcxproj.filters000066400000000000000000000032221361505074100214030ustar00rootroot00000000000000 {dac6c718-f0f8-43a0-ba38-bd72ab98e456} ps1 {822d02f2-ed7a-4f61-9773-b579384486f5} match Match Files Match Files Match Files Match Files Match Files Match Files Match Files Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts vmem-1.8/src/test/set_funcs/000077500000000000000000000000001361505074100160675ustar00rootroot00000000000000vmem-1.8/src/test/set_funcs/.gitignore000066400000000000000000000000121361505074100200500ustar00rootroot00000000000000set_funcs vmem-1.8/src/test/set_funcs/Makefile000066400000000000000000000033341361505074100175320ustar00rootroot00000000000000# # Copyright 2015-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/set_funcs/Makefile -- build set_funcs unit test # TARGET = set_funcs OBJS = set_funcs.o LIBPMEM=y LIBPMEMOBJ=y LIBPMEMBLK=y LIBPMEMLOG=y LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/set_funcs/TEST0000077500000000000000000000033341361505074100166570ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/set_funcs/TEST0 -- unit test for pmem*_set_funcs # . ../unittest/unittest.sh setup expect_normal_exit ./set_funcs$EXESUFFIX $DIR/testfile $DIR pass vmem-1.8/src/test/set_funcs/TEST0.PS1000066400000000000000000000033301361505074100172520ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/set_funcs/TEST0 -- unit test for pmem*_set_funcs # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\set_funcs$Env:EXESUFFIX $DIR\testfile $DIR pass vmem-1.8/src/test/set_funcs/set_funcs.c000066400000000000000000000110051361505074100202210ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * set_funcs.c -- unit test for pmem*_set_funcs() */ #include "unittest.h" #define EXISTING_FILE "/root" #define GUARD 0x2BEE5AFEULL #define EXTRA sizeof(GUARD) #define VMEM_ 0 #define VMEM_POOLS 4 static struct counters { int mallocs; int frees; int reallocs; int reallocs_null; int strdups; } cnt[5]; static void * test_malloc(size_t size) { unsigned long long *p = malloc(size + EXTRA); UT_ASSERTne(p, NULL); *p = GUARD; return ++p; } static void test_free(void *ptr) { if (ptr == NULL) return; unsigned long long *p = ptr; --p; UT_ASSERTeq(*p, GUARD); free(p); } static void * test_realloc(void *ptr, size_t size) { unsigned long long *p; if (ptr != NULL) { p = ptr; --p; UT_ASSERTeq(*p, GUARD); p = realloc(p, size + EXTRA); } else { p = malloc(size + EXTRA); } UT_ASSERTne(p, NULL); *p = GUARD; return ++p; } static char * test_strdup(const char *s) { if (s == NULL) return NULL; size_t size = strlen(s) + 1; unsigned long long *p = malloc(size + EXTRA); UT_ASSERTne(p, NULL); *p = GUARD; ++p; strcpy((char *)p, s); return (char *)p; } static void * _vmem_malloc(size_t size) { cnt[VMEM_].mallocs++; return test_malloc(size); } static void _vmem_free(void *ptr) { if (ptr) cnt[VMEM_].frees++; test_free(ptr); } static void * _vmem_realloc(void *ptr, size_t size) { if (ptr == NULL) cnt[VMEM_].reallocs_null++; else cnt[VMEM_].reallocs++; return test_realloc(ptr, size); } static char * _vmem_strdup(const char *s) { cnt[VMEM_].strdups++; return test_strdup(s); } static void test_vmem(const char *dir) { vmem_set_funcs(_vmem_malloc, _vmem_free, _vmem_realloc, _vmem_strdup, NULL); /* * Generate ERR() call, that calls malloc() once, * but only when it is called for the first time * (free() is called in the destructor of the library). */ vmem_create(EXISTING_FILE, 0); memset(cnt, 0, sizeof(cnt)); VMEM *v[VMEM_POOLS]; void *ptr[VMEM_POOLS]; for (int i = 0; i < VMEM_POOLS; i++) { v[i] = vmem_create(dir, VMEM_MIN_POOL); ptr[i] = vmem_malloc(v[i], 64); vmem_free(v[i], ptr[i]); } for (int i = 0; i < VMEM_POOLS; i++) vmem_delete(v[i]); UT_OUT("vmem_mallocs: %d", cnt[VMEM_].mallocs); UT_OUT("vmem_frees: %d", cnt[VMEM_].frees); UT_OUT("vmem_reallocs: %d", cnt[VMEM_].reallocs); UT_OUT("vmem_reallocs_null: %d", cnt[VMEM_].reallocs_null); UT_OUT("vmem_strdups: %d", cnt[VMEM_].strdups); if (cnt[VMEM_].mallocs == 0 && cnt[VMEM_].frees == 0) UT_FATAL("VMEM mallocs: %d, frees: %d", cnt[VMEM_].mallocs, cnt[VMEM_].frees); for (int i = 0; i < 5; ++i) { if (i == VMEM_) continue; if (cnt[i].mallocs || cnt[i].frees) UT_FATAL("VMEM allocation used %d functions", i); } if (cnt[VMEM_].mallocs + cnt[VMEM_].strdups + cnt[VMEM_].reallocs_null > cnt[VMEM_].frees + 4) UT_FATAL("VMEM memory leak"); } int main(int argc, char *argv[]) { START(argc, argv, "set_funcs"); if (argc < 3) UT_FATAL("usage: %s file dir", argv[0]); test_vmem(argv[2]); DONE(NULL); } vmem-1.8/src/test/set_funcs/set_funcs.vcxproj000066400000000000000000000065421361505074100215040ustar00rootroot00000000000000 Debug x64 Release x64 {6D7C1169-3246-465F-B630-ECFEF4F3179A} Win32Proj set_funcs 10.0.16299.0 Application true v140 Application false v140 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/set_funcs/set_funcs.vcxproj.filters000066400000000000000000000012261361505074100231450ustar00rootroot00000000000000 {b80fb68c-bae8-4552-93e3-9e3b52ccf381} {0ac97c68-4204-4fac-b810-2708c5111c30} Source Files Test Scripts 
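The set_funcs.c test above exercises vmem_set_funcs() by installing guard-word wrappers around malloc/free/realloc/strdup and counting how often libvmem calls them. As a minimal, hypothetical sketch (not part of the repository or its test suite), an application could install its own hooks the same way. Only libvmem calls already used in set_funcs.c are assumed here (vmem_set_funcs, vmem_create, vmem_malloc, vmem_free, vmem_delete, VMEM_MIN_POOL); the <libvmem.h> include, the hook names and the counting logic are illustrative, and the print hook passed as the fifth argument is left NULL as in the test.

/*
 * counting_hooks.c -- hypothetical example (not part of the repo): plugging
 * application-provided allocation hooks into libvmem via vmem_set_funcs()
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvmem.h>

static size_t Nallocs;	/* how many times libvmem asked us for memory */

static void *
count_malloc(size_t size)
{
	Nallocs++;
	return malloc(size);
}

static void
count_free(void *ptr)
{
	free(ptr);
}

static void *
count_realloc(void *ptr, size_t size)
{
	return realloc(ptr, size);
}

static char *
count_strdup(const char *s)
{
	return strdup(s);
}

int
main(int argc, char *argv[])
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s dir\n", argv[0]);
		return 1;
	}

	/* install the hooks first, before any pool is created (as set_funcs.c does) */
	vmem_set_funcs(count_malloc, count_free, count_realloc,
			count_strdup, NULL);

	VMEM *vmp = vmem_create(argv[1], VMEM_MIN_POOL);
	if (vmp == NULL) {
		perror("vmem_create");
		return 1;
	}

	void *p = vmem_malloc(vmp, 64);
	vmem_free(vmp, p);
	vmem_delete(vmp);

	printf("libvmem called the malloc hook %zu times\n", Nallocs);
	return 0;
}

As in the unit test, the hooks are installed once, up front; set_funcs.c additionally verifies that no other pmem library routes allocations through them and that the wrapped allocations balance out (no leak).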
vmem-1.8/src/test/signal_handle/000077500000000000000000000000001361505074100166665ustar00rootroot00000000000000vmem-1.8/src/test/signal_handle/TEST0w.PS1000066400000000000000000000033701361505074100202440ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/signal_handle/TEST0w.PS1 -- unit test for signal_handle # . ..\unittest\unittest.ps1 require_build_type debug setup expect_normal_exit $Env:EXE_DIR\signal_handle$Env:EXESUFFIX s a a i v check pass vmem-1.8/src/test/signal_handle/out0w.log.match000066400000000000000000000005071361505074100215440ustar00rootroot00000000000000signal_handle$(nW)TEST0w: START: signal_handle $(nW)signal_handle$(nW) s a a i v Testing SIGSEGV... signal_handler_2: $(*) Testing SIGABRT... signal_handler_1: $(*) Testing SIGABRT... signal_handler_1: $(*) Testing SIGILL... signal_handler_2: $(*) Testing SIGABRT... signal_handler_3: $(*) signal_handle$(nW)TEST0w: DONE vmem-1.8/src/test/signal_handle/signal_handle.c000066400000000000000000000077401361505074100216320ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * signal_handle.c -- unit test for signal_handle * * * operations are: 's', 'a', 'a', 'i', 'v' * s: testing SIGSEGV with signal_handler_2 * a: testing SIGABRT with signal_handler_1 * a: testing second occurrence of SIGABRT with signal_handler_1 * i: testing SIGILL with signal_handler_2 * v: testing third occurrence of SIGABRT with other signal_handler_3 * */ #include "unittest.h" ut_jmp_buf_t Jmp; static void signal_handler_1(int sig) { UT_OUT("\tsignal_handler_1: %s", os_strsignal(sig)); ut_siglongjmp(Jmp); } static void signal_handler_2(int sig) { UT_OUT("\tsignal_handler_2: %s", os_strsignal(sig)); ut_siglongjmp(Jmp); } static void signal_handler_3(int sig) { UT_OUT("\tsignal_handler_3: %s", os_strsignal(sig)); ut_siglongjmp(Jmp); } int main(int argc, char *argv[]) { START(argc, argv, "signal_handle"); if (argc < 2) UT_FATAL("usage: %s op:s|a|a|i|v", argv[0]); struct sigaction v1, v2, v3; sigemptyset(&v1.sa_mask); v1.sa_flags = 0; v1.sa_handler = signal_handler_1; sigemptyset(&v2.sa_mask); v2.sa_flags = 0; v2.sa_handler = signal_handler_2; SIGACTION(SIGSEGV, &v2, NULL); SIGACTION(SIGABRT, &v1, NULL); SIGACTION(SIGABRT, &v2, NULL); SIGACTION(SIGABRT, &v1, NULL); SIGACTION(SIGILL, &v2, NULL); for (int arg = 1; arg < argc; arg++) { if (strchr("sabiv", argv[arg][0]) == NULL || argv[arg][1] != '\0') UT_FATAL("op must be one of: s, a, a, i, v"); switch (argv[arg][0]) { case 's': UT_OUT("Testing SIGSEGV..."); if (!ut_sigsetjmp(Jmp)) { if (!raise(SIGSEGV)) { UT_OUT("\t SIGSEGV occurrence"); } else { UT_OUT("\t Issue with SIGSEGV raise"); } } break; case 'a': UT_OUT("Testing SIGABRT..."); if (!ut_sigsetjmp(Jmp)) { if (!raise(SIGABRT)) { UT_OUT("\t SIGABRT occurrence"); } else { UT_OUT("\t Issue with SIGABRT raise"); } } break; case 'i': UT_OUT("Testing SIGILL..."); if (!ut_sigsetjmp(Jmp)) { if (!raise(SIGILL)) { UT_OUT("\t SIGILL occurrence"); } else { UT_OUT("\t Issue with SIGILL raise"); } } break; case 'v': if (!ut_sigsetjmp(Jmp)) { sigemptyset(&v3.sa_mask); v3.sa_flags = 0; v3.sa_handler = signal_handler_3; UT_OUT("Testing SIGABRT..."); SIGACTION(SIGABRT, &v3, NULL); if (!raise(SIGABRT)) { UT_OUT("\t SIGABRT occurrence"); } else { UT_OUT("\t Issue with SIGABRT raise"); } } break; } } DONE(NULL); } vmem-1.8/src/test/signal_handle/signal_handle.vcxproj000066400000000000000000000064231361505074100231000ustar00rootroot00000000000000 Debug x64 Release x64 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {AE9E908D-BAEC-491F-9914-436B3CE35E94} Win32Proj signal_handle 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/signal_handle/signal_handle.vcxproj.filters000066400000000000000000000015601361505074100245440ustar00rootroot00000000000000 Tests Files Match Files {69f676d7-e653-4644-892c-be4e71754264} {00ea252a-e40d-4b35-8ac5-7d8ea7d8b365} {9fa92886-a9ee-412f-bb0a-ce241dc6deaf} Source Files vmem-1.8/src/test/test_debug.props000066400000000000000000000026411361505074100173130ustar00rootroot00000000000000 $(SolutionDir)$(Platform)\$(Configuration)\tests\ 
$(FrameworkSDKdir)bin\$(TargetPlatformVersion)\$(Platform);$(ExecutablePath) Level3 true PMDK_UTF8_API;SDS_ENABLED;NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) $(ProjectDir);$(SolutionDir)\windows\include;$(SolutionDir)\include;$(SolutionDir)\common;$(SolutionDir)\test\unittest CompileAsC platform.h true DbgHelp.lib;Shlwapi.lib;%(AdditionalDependencies) Console vmem-1.8/src/test/test_release.props000066400000000000000000000025031361505074100176420ustar00rootroot00000000000000 $(SolutionDir)$(Platform)\$(Configuration)\tests\ $(FrameworkSDKdir)bin\$(TargetPlatformVersion)\$(Platform);$(ExecutablePath) PMDK_UTF8_API;SDS_ENABLED;NTDDI_VERSION=NTDDI_WIN10_RS1;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) $(ProjectDir);$(SolutionDir)\windows\include;$(SolutionDir)\include;$(SolutionDir)\common;$(SolutionDir)\test\unittest Level3 true CompileAsC platform.h true DbgHelp.lib;%(AdditionalDependencies) Console vmem-1.8/src/test/testconfig.ps1.example000066400000000000000000000031061361505074100203220ustar00rootroot00000000000000# # src/test/testconfig.ps1 -- configuration for local and remote unit tests # # # 1) *** LOCAL CONFIGURATION *** # # The first part of the file tells the script unittest/unittest.ps1 # which file system locations are to be used for local testing. # # # Appended to TEST_DIR to test PMDK with file path longer than 255 # characters. Due to limitation of powershell you have to use UNC prefix # (i.e. \\?\c:\tmp) in PMEM_FS_DIR and NON_PMEM_FS_DIR variables. # # $Local:LONGDIR = "PhngluimglwnafhCthulhuRlyehwgahnaglfhtagnHaizhronaDagonhaiepngmnahnhriikadishtugnaiihcuhesyhahfgnaiihsgnwahlnogsgnwahlnghahaiChaugnarFaugnhlirghHshtungglingnogRlyehnghaogShub-NiggurathothhgofnnlloigshuggsllhannnCthulhuahnyth" #$Env:DIRSUFFIX = "$LONGDIR\$LONGDIR\$LONGDIR\$LONGDIR\$LONGDIR" # # Directory for scratch files during tests. # #$Env:TEST_DIR = "\temp" # # Overwrite available build types: # debug, nondebug, static-debug, static-nondebug, all (default) # #$Env:TEST_BUILD = "all" # # Overwrite default timeout # (floating point number with an optional suffix: 's' for seconds (the default), # 'm' for minutes, 'h' for hours or 'd' for days) # #$Env:TEST_TIMEOUT = "60s" # # To display execution time of each test # $Env:TM = "1" # # Test against installed libraries, NOT the one built in tree. # Note that these variable won't affect tests that link statically. You should # disabled them using TEST_BUILD variable. # # $Env:VMEM_LIB_PATH_NONDEBUG = "C:\vcpkg\buildtrees\pmdk\src\1.4.1-0ecc9f7f1f\src\x64\Release" # $Env:VMEM_LIB_PATH_DEBUG = "C:\vcpkg\buildtrees\pmdk\src\1.4.1-0ecc9f7f1f\src\x64\Debug" vmem-1.8/src/test/testconfig.sh.example000066400000000000000000000055231361505074100202360ustar00rootroot00000000000000# # src/test/testconfig.sh -- configuration for local and remote unit tests # # # The first part of the file tells the script unittest/unittest.sh # which file system locations are to be used for local testing. # # # Appended to TEST_DIR to test PMDK with file path longer than 255 # characters. # # LONGDIR="PhngluimglwnafhCthulhuRlyehwgahnaglfhtagnHaizhronaDagonhaiepngmnahnhriikadishtugnaiihcuhesyhahfgnaiihsgnwahlnogsgnwahlnghahaiChaugnarFaugnhlirghHshtungglingnogRlyehnghaogShub-NiggurathothhgofnnlloigshuggsllhannnCthulhuahnyth" # DIRSUFFIX="$LONGDIR/$LONGDIR/$LONGDIR/$LONGDIR/$LONGDIR" # # # Directory to be used for scratch files during tests. 
# #TEST_DIR=/tmp # # For tests that require raw dax devices without a file system, set a path to # those devices in an array format. For most tests one device is enough, but # some might require more. # # For big sizes of DAX devices, some tests ran against Valgrind might fail due # to length of anonymous mmap and Valgrind limitations. Maximum possible length # is being calculated each time testconfig.sh changes. Tests which require more # than detected maximum possible length are skipped. # # It is required to have R/W access to these devices and at least RO access # to all of the following resource files (containing physical addresses) # of NVDIMM devices (only root can read them by default): # # /sys/bus/nd/devices/ndbus*/region*/resource # /sys/bus/nd/devices/ndbus*/region*/dax*/resource # # Note: some tests require write access to '/sys/bus/nd/devices/region*/deep_flush'. # #DEVICE_DAX_PATH=(/dev/dax0.0 /dev/dax1.0) # # Overwrite available build types: # debug, nondebug, static-debug, static-nondebug, all (default) # #TEST_BUILD=all # # Overwrite default timeout # (floating point number with an optional suffix: 's' for seconds (the default), # 'm' for minutes, 'h' for hours or 'd' for days) # #TEST_TIMEOUT=3m # # To display execution time of each test # TM=1 # # Normally the first failed test terminates the test run. If KEEP_GOING # is set, continues executing all tests. If any tests fail, once all tests # have completed reports number of failures, lists failed tests and exits # with error status. # #KEEP_GOING=y # # This option works only if KEEP_GOING=y, then if CLEAN_FAILED is set # all data created by test is removed on test failure. # #CLEAN_FAILED=y # # Changes logging level. Possible values: # 0 - silent (only error messages) # 1 - normal (above + SETUP + START + DONE + PASS + important SKIP messages) # 2 - verbose (above + all SKIP messages + stdout from test binaries) # #UNITTEST_LOG_LEVEL=1 # # Test against installed libraries, NOT the one built in tree. # Note that these variable won't affect tests that link statically. You should # disabled them using TEST_BUILD variable. # #VMEM_LIB_PATH_NONDEBUG=/usr/lib/x86_64-linux-gnu/ #VMEM_LIB_PATH_DEBUG=/usr/lib/x86_64-linux-gnu/vmem_dbg vmem-1.8/src/test/tools/000077500000000000000000000000001361505074100152365ustar00rootroot00000000000000vmem-1.8/src/test/tools/Makefile000066400000000000000000000040021361505074100166720ustar00rootroot00000000000000# # Copyright 2015-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/tools/Makefile -- build unit test helpers # TOP = ../../.. TESTCONFIG=$(TOP)/src/test/testconfig.sh DIRS = fallocate_detect all : TARGET = all clean : TARGET = clean clobber : TARGET = clobber cstyle : TARGET = cstyle format : TARGET = format sparse : TARGET = sparse all test cstyle clean clobber format sparse: $(DIRS) $(TESTCONFIG): $(DIRS): $(MAKE) -C $@ $(TARGET) check pcheck: all .PHONY: all clean clobber cstyle format check pcheck $(DIRS) vmem-1.8/src/test/tools/anonymous_mmap/000077500000000000000000000000001361505074100203005ustar00rootroot00000000000000vmem-1.8/src/test/tools/anonymous_mmap/.gitignore000066400000000000000000000000371361505074100222700ustar00rootroot00000000000000anonymous_mmap max_dax_devices vmem-1.8/src/test/tools/anonymous_mmap/Makefile000066400000000000000000000036261361505074100217470ustar00rootroot00000000000000# # Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/tools/anonymous_mmap/Makefile -- Makefile for anonymous_mmap # TOP = ../../../.. TARGET = anonymous_mmap OBJS = anonymous_mmap.o LIBPMEM=y LIBPMEMCOMMON=y include $(TOP)/src/tools/Makefile.inc TESTCONFIG=$(TOP)/src/test/testconfig.sh all: max_dax_devices max_dax_devices: $(TESTCONFIG) check_max_mmap.sh anonymous_mmap.static-nondebug @./check_max_mmap.sh vmem-1.8/src/test/tools/anonymous_mmap/README000066400000000000000000000004471361505074100211650ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/tools/anonymous_mmap/README. 
This directory contains a simple tool 'anonymous_mmap' for verifying ability for anonymously mmapping (huge) amount of memory into the virtual address space. 'anonymous_mmap' takes one argument - length in bytes vmem-1.8/src/test/tools/anonymous_mmap/anonymous_mmap.c000066400000000000000000000043071361505074100235120ustar00rootroot00000000000000/* * Copyright 2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * anonymous_mmap.c -- tool for verifying if given memory length can be * anonymously mmapped */ #include #include #include #include "out.h" int main(int argc, char *argv[]) { out_init("ANONYMOUS_MMAP", "ANONYMOUS_MMAP", "", 1, 0); if (argc != 2) { out("Usage: %s ", argv[0]); return -1; } const size_t length = (size_t)atoll(argv[1]); char *addr = mmap(NULL, length, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { out("anonymous_mmap.c: Failed to mmap length=%lu of memory, " "errno=%d", length, errno); return errno; } out_fini(); return 0; } vmem-1.8/src/test/tools/anonymous_mmap/check_max_mmap.sh000077500000000000000000000076541361505074100236070ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/tools/anonymous_mmap/check_max_mmap.sh -- checks how many DAX # devices can be mapped under Valgrind and saves the number in # src/test/tools/anonymous_mmap/max_dax_devices. # DIR_CHECK_MAX_MMAP="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" FILE_MAX_DAX_DEVICES="$DIR_CHECK_MAX_MMAP/max_dax_devices" ANONYMOUS_MMAP="$DIR_CHECK_MAX_MMAP/anonymous_mmap.static-nondebug" source "$DIR_CHECK_MAX_MMAP/../../testconfig.sh" # # get_devdax_size -- get the size of a device dax # function get_devdax_size() { local device=$1 local path=${DEVICE_DAX_PATH[$device]} local major_hex=$(stat -c "%t" $path) local minor_hex=$(stat -c "%T" $path) local major_dec=$((16#$major_hex)) local minor_dec=$((16#$minor_hex)) cat /sys/dev/char/$major_dec:$minor_dec/size } function msg_skip() { echo "0" > "$FILE_MAX_DAX_DEVICES" echo "$0: SKIP: $*" exit 0 } function msg_failed() { echo "$0: FATAL: $*" >&2 exit 1 } # check if DEVICE_DAX_PATH specifies at least one DAX device if [ ${#DEVICE_DAX_PATH[@]} -lt 1 ]; then msg_skip "DEVICE_DAX_PATH does not specify path to DAX device." fi # check if valgrind package is installed VALGRINDEXE=`which valgrind 2>/dev/null` ret=$? if [ $ret -ne 0 ]; then msg_skip "Valgrind required." fi # check if memcheck tool is installed $VALGRINDEXE --tool=memcheck --help 2>&1 | grep -qi "memcheck is Copyright (c)" && true if [ $? -ne 0 ]; then msg_skip "Valgrind with memcheck required." fi # check if anonymous_mmap tool is built if [ ! -f "${ANONYMOUS_MMAP}" ]; then msg_failed "${ANONYMOUS_MMAP} does not exist" fi # checks how many DAX devices can be mmapped under Valgrind and save the number # in $FILE_MAX_DAX_DEVICES file bytes="0" max_devices="0" for index in ${!DEVICE_DAX_PATH[@]} do if [ ! -e "${DEVICE_DAX_PATH[$index]}" ]; then msg_failed "${DEVICE_DAX_PATH[$index]} does not exist" fi curr=$(get_devdax_size $index) if [[ curr -eq 0 ]]; then msg_failed "size of DAX device pointed by DEVICE_DAX_PATH[$index] equals 0." fi $VALGRINDEXE --tool=memcheck --quiet $ANONYMOUS_MMAP $((bytes + curr)) status=$? if [[ status -ne 0 ]]; then break fi bytes=$((bytes + curr)) max_devices=$((max_devices + 1)) done echo "$max_devices" > "$FILE_MAX_DAX_DEVICES" echo "$0: maximum possible anonymous mmap under Valgrind: $bytes bytes, equals to size of $max_devices DAX device(s). Value saved in $FILE_MAX_DAX_DEVICES." vmem-1.8/src/test/tools/dllview/000077500000000000000000000000001361505074100167045ustar00rootroot00000000000000vmem-1.8/src/test/tools/dllview/README000066400000000000000000000003241361505074100175630ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/tools/dllview/README. This directory contains a simple command-line utility that displays the list of symbols exported by given DLL. 
Usage: $ dllview vmem-1.8/src/test/tools/dllview/dllview.c000066400000000000000000000052741361505074100205260ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * dllview.c -- a simple utility displaying the list of symbols exported by DLL * * usage: dllview filename */ #include #include #include #include #include "util.h" int main(int argc, char *argv[]) { util_suppress_errmsg(); if (argc < 2) { fprintf(stderr, "usage: %s dllname\n", argv[0]); exit(1); } const char *dllname = argv[1]; LOADED_IMAGE img; if (MapAndLoad(dllname, NULL, &img, 1, 1) == FALSE) { fprintf(stderr, "cannot load DLL image\n"); exit(2); } IMAGE_EXPORT_DIRECTORY *dir; ULONG dirsize; dir = (IMAGE_EXPORT_DIRECTORY *)ImageDirectoryEntryToData( img.MappedAddress, 0 /* mapped as image */, IMAGE_DIRECTORY_ENTRY_EXPORT, &dirsize); if (dir == NULL) { fprintf(stderr, "cannot read image directory\n"); UnMapAndLoad(&img); exit(3); } DWORD *rva; rva = (DWORD *)ImageRvaToVa(img.FileHeader, img.MappedAddress, dir->AddressOfNames, NULL); for (DWORD i = 0; i < dir->NumberOfNames; i++) { char *name = (char *)ImageRvaToVa(img.FileHeader, img.MappedAddress, rva[i], NULL); printf("%s\n", name); } UnMapAndLoad(&img); return 0; } vmem-1.8/src/test/tools/dllview/dllview.vcxproj000066400000000000000000000122171361505074100217720ustar00rootroot00000000000000 Debug x64 Release x64 {492baa3d-0d5d-478e-9765-500463ae69aa} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {179BEB5A-2C90-44F5-A734-FA756A5E668C} Win32Proj dllview 10.0.16299.0 Application true v140 NotSet Application false v140 true NotSet true $(SolutionDir)\common;$(SolutionDir)\test\unittest;$(SolutionDir)\windows\include;$(SolutionDir)\include;$(SolutionDir)\windows\getopt;$(SolutionDir)\libpmemlog;$(SolutionDir)\libpmemblk;$(SolutionDir)\libpmemobj;$(IncludePath) false $(SolutionDir)\common;$(SolutionDir)\test\unittest;$(SolutionDir)\windows\include;$(SolutionDir)\include;$(SolutionDir)\windows\getopt;$(SolutionDir)\libpmemlog;$(SolutionDir)\libpmemblk;$(SolutionDir)\libpmemobj;$(IncludePath) NotUsing Disabled 
PMDK_UTF8_API; NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) CompileAsC Debug imagehlp.lib;Dbghelp.lib NotUsing MaxSpeed true PMDK_UTF8_API; NTDDI_VERSION=NTDDI_WIN10_RS1;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) CompileAsC true true imagehlp.lib;Dbghelp.lib DebugFastLink vmem-1.8/src/test/tools/dllview/dllview.vcxproj.filters000066400000000000000000000010501361505074100234320ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx Source Files vmem-1.8/src/test/tools/fallocate_detect/000077500000000000000000000000001361505074100205205ustar00rootroot00000000000000vmem-1.8/src/test/tools/fallocate_detect/.gitignore000066400000000000000000000000211361505074100225010ustar00rootroot00000000000000fallocate_detect vmem-1.8/src/test/tools/fallocate_detect/Makefile000066400000000000000000000033201361505074100221560ustar00rootroot00000000000000# Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # Makefile -- Makefile for fallocate detection tool # TOP = ../../../.. TARGET = fallocate_detect OBJS = fallocate_detect.o LIBPMEMCOMMON=y include $(TOP)/src/tools/Makefile.inc vmem-1.8/src/test/tools/fallocate_detect/fallocate_detect.c000066400000000000000000000062031361505074100241470ustar00rootroot00000000000000/* * Copyright 2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fallocate_detect -- checks fallocate support on filesystem */ #define _GNU_SOURCE #include "file.h" #include "os.h" #ifdef __linux__ #include #include #include #include /* * posix_fallocate on Linux is implemented using fallocate * syscall. This syscall requires file system-specific code on * the kernel side and not all file systems have this code. * So when posix_fallocate gets 'not supported' error from * fallocate it falls back to just writing zeroes. * Detect it and return information to the caller. */ static int check_fallocate(const char *file) { int exit_code = 0; int fd = os_open(file, O_RDWR | O_CREAT | O_EXCL, 0644); if (fd < 0) { perror("os_open"); return 2; } if (fallocate(fd, 0, 0, 4096)) { if (errno == EOPNOTSUPP) { exit_code = 1; goto exit; } perror("fallocate"); exit_code = 2; goto exit; } struct statfs fs; if (!fstatfs(fd, &fs)) { if (fs.f_type != EXT4_SUPER_MAGIC /* also ext2, ext3 */) { /* * On CoW filesystems, fallocate reserves _amount * of_ space but doesn't allocate a specific block. * As we're interested in DAX filesystems only, just * skip these tests anywhere else. */ exit_code = 1; goto exit; } } exit: os_close(fd); os_unlink(file); return exit_code; } #else /* no support for fallocate in FreeBSD */ static int check_fallocate(const char *file) { return 1; } #endif int main(int argc, char *argv[]) { if (argc != 2) { fprintf(stderr, "usage: %s filename\n", argv[0]); return 1; } return check_fallocate(argv[1]); } vmem-1.8/src/test/tools/sparsefile/000077500000000000000000000000001361505074100173735ustar00rootroot00000000000000vmem-1.8/src/test/tools/sparsefile/README000066400000000000000000000013561361505074100202600ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/tools/sparsefile/README. This directory contains a simple command-line utility for fast sparse file creation on Windows. It's used as a helper program by unit tests. (See 'create_holey_file' in unittest.ps1) Usage: $ sparsefile [options] where 'options' can be: -v - verbose output -s - fail if volume/filesystem does not support sparse files -f - overwrite file if already exists Note that using 'sparsefile' is over 50x faster than 'fsutil': $ FSUtil File CreateNew $ FSUtil Sparse SetFlag $ FSUtil Sparse SetRange 0 Use 'TEST_SPARSE.ps1' script to compare performance of various implementations of 'create_holey_file' routine. 
vmem-1.8/src/test/tools/sparsefile/TEST_SPARSE.ps1000077500000000000000000000100241361505074100216540ustar00rootroot00000000000000# # Copyright 2016-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # TEST_SPARSE.ps1 -- compare performance of various methods of creating # sparse files on Windows # # usage: .\TEST_SPARSE.ps1 filename length repeats # if ($args.count -lt 3) { Write-Error "usage: sparse.ps1 filename length repeats" exit 1 } $path = $args[0] $size = $args[1] $count = $args[2] # # epoch -- get timestamp # function epoch { return [int64](([datetime]::UtcNow)-(get-date "1/1/1970")).TotalMilliseconds } # # remove_file -- remove file if exists # function remove_file { if (test-path $args[0]) { rm -force $args[0] } } # # create_holey_file1 -- create sparse file using 'sparsefile' utility # function create_holey_file1 { $fname = $args[0] $size = $args[1] & '..\..\..\x64\debug\sparsefile.exe' $fname $size if ($LASTEXITCODE -ne 0) { Write-Error "Error $LASTEXITCODE with sparsefile create" exit $LASTEXITCODE } Write-Host -NoNewline "." } # # create_holey_file2 -- create sparse file using 'powershell' & 'fsutil' # function create_holey_file2 { $fname = $args[0] $size = $args[1] $f = [System.IO.File]::Create($fname) $f.Close() # XXX: How to mark file as sparse using pure PowerShell API? # Setting 'SparseFile' attribute in PS does not work for some reason. # mark file as sparse & fsutil sparse setflag $path $f = [System.IO.File]::Open($fname, "Append") $f.SetLength($size) $f.Close() Write-Host -NoNewline "." 
} # # create_holey_file3 -- create sparse file using 'fsutil' # function create_holey_file3 { $fname = $args[0] $size = $args[1] & "FSUtil" File CreateNew $fname $size if ($LASTEXITCODE -ne 0) { Write-Error "Error $LASTEXITCODE with FSUTIL create" exit $LASTEXITCODE } & "FSUtil" Sparse SetFlag $fname if ($LASTEXITCODE -ne 0) { Write-Error "Error $LASTEXITCODE with FSUTIL setFlag" exit $LASTEXITCODE } & "FSUtil" Sparse SetRange $fname 0 $size if ($LASTEXITCODE -ne 0) { Write-Error "Error $LASTEXITCODE with FSUTIL setRange" exit $LASTEXITCODE } Write-Host -NoNewline "." } $start = epoch for ($i=1;$i -lt $count;$i++) { remove_file $path create_holey_file1 $path $size } $end = epoch $t = ($end - $start) / 1000 Write-Host "`nsparsefile: $t seconds" $start = epoch for ($i=1;$i -lt $count;$i++) { remove_file $path create_holey_file2 $path $size } $end = epoch $t = ($end - $start) / 1000 Write-Host "`npowershell + fsutil: $t seconds" $start = epoch for ($i=1;$i -lt $count;$i++) { remove_file $path create_holey_file3 $path $size } $end = epoch $t = ($end - $start) / 1000 Write-Host "`nfsutil: $t seconds" vmem-1.8/src/test/tools/sparsefile/sparsefile.c000066400000000000000000000137601361505074100217030ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * sparsefile.c -- a simple utility to create sparse files on Windows * * usage: sparsefile [options] filename len * where options can be: * -v - verbose output * -s - do not create file if sparse files are not supported * -f - overwrite file if already exists */ #include #include #include "util.h" #define MAXPRINT 8192 static int Opt_verbose; static int Opt_sparse; static int Opt_force; /* * out_err_vargs -- print error message */ static void out_err_vargs(const wchar_t *fmt, va_list ap) { wchar_t errmsg[MAXPRINT]; DWORD lasterr = GetLastError(); vfwprintf(stderr, fmt, ap); if (lasterr) { size_t size = FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM, NULL, lasterr, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), errmsg, MAXPRINT, NULL); fwprintf(stderr, L": %s", errmsg); } else { fwprintf(stderr, L"\n"); } SetLastError(0); } /* * out_err -- print error message */ static void out_err(const wchar_t *fmt, ...) { va_list ap; va_start(ap, fmt); out_err_vargs(fmt, ap); va_end(ap); } /* * print_file_size -- prints file size and its size on disk */ static void print_file_size(const wchar_t *filename) { LARGE_INTEGER filesize; FILE_COMPRESSION_INFO fci; HANDLE fh = CreateFileW(filename, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); if (fh == INVALID_HANDLE_VALUE) { out_err(L"CreateFile"); return; } BOOL ret = GetFileSizeEx(fh, &filesize); if (ret == FALSE) { out_err(L"GetFileSizeEx"); goto err; } ret = GetFileInformationByHandleEx(fh, FileCompressionInfo, &fci, sizeof(fci)); if (ret == FALSE) { out_err(L"GetFileInformationByHandleEx"); goto err; } if (filesize.QuadPart < 65536) fwprintf(stderr, L"\ntotal size: %lluB", filesize.QuadPart); else fwprintf(stderr, L"\ntotal size: %lluKB", filesize.QuadPart / 1024); if (fci.CompressedFileSize.QuadPart < 65536) fwprintf(stderr, L", actual size on disk: %lluKB\n", fci.CompressedFileSize.QuadPart); else fwprintf(stderr, L", actual size on disk: %lluKB\n", fci.CompressedFileSize.QuadPart / 1024); err: CloseHandle(fh); } /* * create_sparse_file -- creates sparse file of given size */ static int create_sparse_file(const wchar_t *filename, size_t len) { /* create zero-length file */ DWORD create = Opt_force ? 
CREATE_ALWAYS : CREATE_NEW; HANDLE fh = CreateFileW(filename, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, create, FILE_ATTRIBUTE_NORMAL, NULL); if (fh == INVALID_HANDLE_VALUE) { out_err(L"CreateFile"); return -1; } SetLastError(0); /* check if sparse files are supported */ DWORD flags = 0; BOOL ret = GetVolumeInformationByHandleW(fh, NULL, 0, NULL, NULL, &flags, NULL, 0); if (ret == FALSE) { if (Opt_verbose || Opt_sparse) out_err(L"GetVolumeInformationByHandle"); } else if ((flags & FILE_SUPPORTS_SPARSE_FILES) == 0) { if (Opt_verbose || Opt_sparse) out_err(L"Volume does not support sparse files."); if (Opt_sparse) goto err; } /* mark file as sparse */ if (flags & FILE_SUPPORTS_SPARSE_FILES) { DWORD nbytes; ret = DeviceIoControl(fh, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &nbytes, NULL); if (ret == FALSE) { if (Opt_verbose || Opt_sparse) out_err(L"DeviceIoControl"); if (Opt_sparse) goto err; } } /* set file length */ LARGE_INTEGER llen; llen.QuadPart = len; ret = SetFilePointerEx(fh, llen, NULL, FILE_BEGIN); if (ret == FALSE) { out_err(L"SetFilePointerEx"); goto err; } ret = SetEndOfFile(fh); if (ret == FALSE) { out_err(L"SetEndOfFile"); goto err; } CloseHandle(fh); return 0; err: CloseHandle(fh); DeleteFileW(filename); return -1; } int wmain(int argc, const wchar_t *argv[]) { util_suppress_errmsg(); if (argc < 2) { fwprintf(stderr, L"Usage: %s filename len\n", argv[0]); exit(1); } int i = 1; while (i < argc && argv[i][0] == '-') { switch (argv[i][1]) { case 'v': Opt_verbose = 1; break; case 's': Opt_sparse = 1; break; case 'f': Opt_force = 1; break; default: out_err(L"Unknown option: \'%c\'.", argv[i][1]); exit(2); } ++i; } const wchar_t *filename = argv[i]; long long len = _wtoll(argv[i + 1]); if (len < 0) { out_err(L"Invalid file length: %lld.\n", len); exit(3); } if (create_sparse_file(filename, len) < 0) { out_err(L"File creation failed."); exit(4); } if (Opt_verbose) print_file_size(filename); return 0; } vmem-1.8/src/test/tools/sparsefile/sparsefile.vcxproj000066400000000000000000000121361361505074100231500ustar00rootroot00000000000000 Debug x64 Release x64 {492baa3d-0d5d-478e-9765-500463ae69aa} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {3EC30D6A-BDA4-4971-879A-8814204EAE31} Win32Proj sparsefile 10.0.16299.0 Application true v140 NotSet Application false v140 true NotSet true $(SolutionDir)\common;$(SolutionDir)\test\unittest;$(SolutionDir)\windows\include;$(SolutionDir)\include;$(SolutionDir)\windows\getopt;$(SolutionDir)\libpmemlog;$(SolutionDir)\libpmemblk;$(SolutionDir)\libpmemobj;$(IncludePath) false $(SolutionDir)\common;$(SolutionDir)\test\unittest;$(SolutionDir)\windows\include;$(SolutionDir)\include;$(SolutionDir)\windows\getopt;$(SolutionDir)\libpmemlog;$(SolutionDir)\libpmemblk;$(SolutionDir)\libpmemobj;$(IncludePath) NotUsing Disabled PMDK_UTF8_API; NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) CompileAsC Debug NotUsing MaxSpeed true PMDK_UTF8_API; NTDDI_VERSION=NTDDI_WIN10_RS1;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) CompileAsC true true DebugFastLink vmem-1.8/src/test/tools/sparsefile/sparsefile.vcxproj.filters000066400000000000000000000017171361505074100246220ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {9efc97e7-7948-4037-b872-d0ad95ae1167} ps1 {4057fad4-c807-493e-898f-c4de8f8289a3} match Source Files Test Scripts vmem-1.8/src/test/tools/tools_debug.props000066400000000000000000000012131361505074100206260ustar00rootroot00000000000000 
$(SolutionDir)$(Platform)\$(Configuration)\tests\ Level3 true Console true vmem-1.8/src/test/tools/tools_release.props000066400000000000000000000013031361505074100211600ustar00rootroot00000000000000 $(SolutionDir)$(Platform)\$(Configuration)\tests\ Level3 true true Console true vmem-1.8/src/test/tools/usc_permission_check/000077500000000000000000000000001361505074100214355ustar00rootroot00000000000000vmem-1.8/src/test/tools/usc_permission_check/.gitignore000066400000000000000000000000251361505074100234220ustar00rootroot00000000000000usc_permission_check vmem-1.8/src/test/tools/usc_permission_check/Makefile000066400000000000000000000033421361505074100230770ustar00rootroot00000000000000# Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # Makefile -- Makefile for fallocate detection tool # TOP = ../../../.. TARGET = usc_permission_check OBJS = usc_permission_check.o LIBPMEM=y LIBPMEMCOMMON=y include $(TOP)/src/tools/Makefile.inc vmem-1.8/src/test/tools/usc_permission_check/usc_permission_check.c000066400000000000000000000042201361505074100257760ustar00rootroot00000000000000/* * Copyright 2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * usc_permission_check.c -- checks whether it's possible to read usc * with current permissions */ #include #include #include "os_dimm.h" /* * This program returns: * - 0 when usc can be read with current permissions * - 1 when permissions are not sufficient * - 2 when other error occurs */ int main(int argc, char *argv[]) { if (argc != 2) { fprintf(stderr, "usage: %s filename\n", argv[0]); return 2; } uint64_t usc; int ret = os_dimm_usc(argv[1], &usc); if (ret == 0) return 0; else if (errno == EACCES) return 1; else return 2; } vmem-1.8/src/test/traces/000077500000000000000000000000001361505074100153575ustar00rootroot00000000000000vmem-1.8/src/test/traces/.gitignore000066400000000000000000000000311361505074100173410ustar00rootroot00000000000000traces custom_file.log-* vmem-1.8/src/test/traces/Makefile000066400000000000000000000033441361505074100170230ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/traces/Makefile -- build traces unit test # TARGET = traces OBJS = traces.o BUILD_STATIC_DEBUG=n BUILD_STATIC_NONDEBUG=n LIBPMEMCOMMON=y include ../Makefile.inc CFLAGS += -DDEBUG vmem-1.8/src/test/traces/TEST0000077500000000000000000000037531361505074100161540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST0 -- unit test for traces # . ../unittest/unittest.sh require_build_type debug setup shopt -u failglob rm -f ./custom_file.log-* shopt -s failglob export UT_LOG_LEVEL=4 export UT_LOG_FILE=./custom_file.log- expect_normal_exit ./traces$EXESUFFIX # check results [ -s ./custom_file.log-* ] || { fatal "error: ./custom_file.log-PID not found" } mv ./custom_file.log-* custom_file$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST0.PS1000066400000000000000000000040541361505074100165460ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST0 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup rm -Force .\custom_file.log-* $Env:UT_LOG_LEVEL = "4" $Env:UT_LOG_FILE = ".\custom_file.log-" expect_normal_exit $Env:EXE_DIR\traces$Env:EXESUFFIX # check results if (-not (Test-Path .\custom_file.log-*)) { echo "error: .\custom_file.log-PID not found" exit 1 } mv -Force .\custom_file.log-* custom_file$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST1000077500000000000000000000034201361505074100161440ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST1 -- unit test for traces # . ../unittest/unittest.sh require_build_type debug setup export UT_LOG_LEVEL=0 expect_normal_exit ./traces$EXESUFFIX 2>redir_stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST1.PS1000066400000000000000000000042111361505074100165420ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST1 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:UT_LOG_LEVEL = "0" # NOTE: Any test that need to redirect stderr could follow the below syntax: # 1. primarily to avoid powershell converting the error strings to exception # objects # 2. as a side benefit this will leave the error file in ASCII, so you can # avoid a conversion from UNICODE to ASCII expect_normal_exit "cmd /c $Env:EXE_DIR\traces$Env:EXESUFFIX 2```>redir_stderr$Env:UNITTEST_NUM.log" check pass vmem-1.8/src/test/traces/TEST2000077500000000000000000000034201361505074100161450ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST2 -- unit test for traces # . 
../unittest/unittest.sh require_build_type debug setup export UT_LOG_LEVEL=1 expect_normal_exit ./traces$EXESUFFIX 2>redir_stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST2.PS1000066400000000000000000000042061361505074100165470ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST2 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:UT_LOG_LEVEL = 1 # NOTE: Any test that need to redirect stderr could follow the below syntax: # 1. primarily to avoid powershell converting the error strings to exception # objects # 2. as a side benefit this will leave the error file in ASCII, so you can # avoid a conversion from UNICODE to ASCII expect_normal_exit "cmd /c $Env:EXE_DIR\traces$Env:EXESUFFIX 2```>redir_stderr$Env:UNITTEST_NUM.log" check pass vmem-1.8/src/test/traces/TEST3000077500000000000000000000034201361505074100161460ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST3 -- unit test for traces # . ../unittest/unittest.sh require_build_type debug setup export UT_LOG_LEVEL=2 expect_normal_exit ./traces$EXESUFFIX 2>redir_stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST3.PS1000066400000000000000000000042101361505074100165430ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST3 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:UT_LOG_LEVEL = "2" # NOTE: Any test that need to redirect stderr could follow the below syntax: # 1. primarily to avoid powershell converting the error strings to exception # objects # 2. as a side benefit this will leave the error file in ASCII, so you can # avoid a conversion from UNICODE to ASCII expect_normal_exit "cmd /c $Env:EXE_DIR\traces$Env:EXESUFFIX 2```>redir_stderr$Env:UNITTEST_NUM.log" check pass vmem-1.8/src/test/traces/TEST4000077500000000000000000000034201361505074100161470ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST4 -- unit test for traces # . ../unittest/unittest.sh require_build_type debug setup export UT_LOG_LEVEL=3 expect_normal_exit ./traces$EXESUFFIX 2>redir_stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST4.PS1000066400000000000000000000042101361505074100165440ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST4 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:UT_LOG_LEVEL = "3" # NOTE: Any test that need to redirect stderr could follow the below syntax: # 1. primarily to avoid powershell converting the error strings to exception # objects # 2. 
as a side benefit this will leave the error file in ASCII, so you can # avoid a conversion from UNICODE to ASCII expect_normal_exit "cmd /c $Env:EXE_DIR\traces$Env:EXESUFFIX 2```>redir_stderr$Env:UNITTEST_NUM.log" check pass vmem-1.8/src/test/traces/TEST5000077500000000000000000000034201361505074100161500ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST5 -- unit test for traces # . ../unittest/unittest.sh require_build_type debug setup export UT_LOG_LEVEL=4 expect_normal_exit ./traces$EXESUFFIX 2>redir_stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST5.PS1000066400000000000000000000042101361505074100165450ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST5 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:UT_LOG_LEVEL = "4" # NOTE: Any test that need to redirect stderr could follow the below syntax: # 1. primarily to avoid powershell converting the error strings to exception # objects # 2. as a side benefit this will leave the error file in ASCII, so you can # avoid a conversion from UNICODE to ASCII expect_normal_exit "cmd /c $Env:EXE_DIR\traces$Env:EXESUFFIX 2```>redir_stderr$Env:UNITTEST_NUM.log" check pass vmem-1.8/src/test/traces/TEST6000077500000000000000000000034201361505074100161510ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces/TEST6 -- unit test for traces # . ../unittest/unittest.sh require_build_type debug setup export UT_LOG_LEVEL=4 expect_normal_exit ./traces$EXESUFFIX 2>redir_stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/traces/TEST6.PS1000066400000000000000000000042101361505074100165460ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src\test\traces\TEST6 -- unit test for traces # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:UT_LOG_LEVEL = "4" # NOTE: Any test that need to redirect stderr could follow the below syntax: # 1. primarily to avoid powershell converting the error strings to exception # objects # 2. as a side benefit this will leave the error file in ASCII, so you can # avoid a conversion from UNICODE to ASCII expect_normal_exit "cmd /c $Env:EXE_DIR\traces$Env:EXESUFFIX 2```>redir_stderr$Env:UNITTEST_NUM.log" check pass vmem-1.8/src/test/traces/custom_file0.log.match000066400000000000000000000006571361505074100215560ustar00rootroot00000000000000$(*) $(*) $(*) $(OPT)$(*) compiled with support for Valgrind pmemcheck $(OPT)$(*) compiled with support for Valgrind helgrind $(OPT)$(*) compiled with support for Valgrind memcheck $(OPT)$(*) compiled with support for Valgrind drd $(OPT)$(*) compiled with support for shutdown state $(OPT)$(*) compiled with libndctl 63+ $(*) $(*)Log level NONE $(*)Log level ERROR $(*)Log level WARNING $(*)Log level INFO $(*)Log level DEBUG $(*) vmem-1.8/src/test/traces/redir_stderr1.log.match000066400000000000000000000000231361505074100217210ustar00rootroot00000000000000$(*)Log level NONE vmem-1.8/src/test/traces/redir_stderr2.log.match000066400000000000000000000005501361505074100217270ustar00rootroot00000000000000$(*) $(*) $(*) $(OPT)$(*) compiled with support for Valgrind pmemcheck $(OPT)$(*) compiled with support for Valgrind helgrind $(OPT)$(*) compiled with support for Valgrind memcheck $(OPT)$(*) compiled with support for Valgrind drd $(OPT)$(*) compiled with support for shutdown state $(OPT)$(*) compiled with libndctl 63+ $(*)Log level NONE $(*)Log level ERROR vmem-1.8/src/test/traces/redir_stderr3.log.match000066400000000000000000000005761361505074100217400ustar00rootroot00000000000000$(*) $(*) $(*) $(OPT)$(*) compiled with support for Valgrind pmemcheck $(OPT)$(*) compiled with support for Valgrind helgrind $(OPT)$(*) compiled with support for Valgrind memcheck $(OPT)$(*) compiled with support for Valgrind drd $(OPT)$(*) compiled with support for shutdown state $(OPT)$(*) compiled with libndctl 63+ $(*)Log level NONE $(*)Log level ERROR $(*)Log level WARNING 
vmem-1.8/src/test/traces/redir_stderr4.log.match000066400000000000000000000006331361505074100217330ustar00rootroot00000000000000$(*) $(*) $(*) $(OPT)$(*) compiled with support for Valgrind pmemcheck $(OPT)$(*) compiled with support for Valgrind helgrind $(OPT)$(*) compiled with support for Valgrind memcheck $(OPT)$(*) compiled with support for Valgrind drd $(OPT)$(*) compiled with support for shutdown state $(OPT)$(*) compiled with libndctl 63+ $(*) $(*)Log level NONE $(*)Log level ERROR $(*)Log level WARNING $(*)Log level INFO $(*) vmem-1.8/src/test/traces/redir_stderr5.log.match000066400000000000000000000006571361505074100217420ustar00rootroot00000000000000$(*) $(*) $(*) $(OPT)$(*) compiled with support for Valgrind pmemcheck $(OPT)$(*) compiled with support for Valgrind helgrind $(OPT)$(*) compiled with support for Valgrind memcheck $(OPT)$(*) compiled with support for Valgrind drd $(OPT)$(*) compiled with support for shutdown state $(OPT)$(*) compiled with libndctl 63+ $(*) $(*)Log level NONE $(*)Log level ERROR $(*)Log level WARNING $(*)Log level INFO $(*)Log level DEBUG $(*) vmem-1.8/src/test/traces/redir_stderr6.log.match000066400000000000000000000011051361505074100217300ustar00rootroot00000000000000$(*) $(*) $(*) $(OPT)$(*) compiled with support for Valgrind pmemcheck $(OPT)$(*) compiled with support for Valgrind helgrind $(OPT)$(*) compiled with support for Valgrind memcheck $(OPT)$(*) compiled with support for Valgrind drd $(OPT)$(*) compiled with support for shutdown state $(OPT)$(*) compiled with libndctl 63+ $(*) : <0> [traces.c:$(*) main]$(W)Log level NONE : <1> [traces.c:$(*) main]$(W)Log level ERROR : <2> [traces.c:$(*) main]$(W)Log level WARNING : <3> [traces.c:$(*) main]$(W)Log level INFO : <4> [traces.c:$(*) main]$(W)Log level DEBUG $(*) vmem-1.8/src/test/traces/traces.c000066400000000000000000000043031361505074100170040ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * traces.c -- unit test for traces */ #define LOG_PREFIX "ut" #define LOG_LEVEL_VAR "UT_LOG_LEVEL" #define LOG_FILE_VAR "UT_LOG_FILE" #define MAJOR_VERSION 1 #define MINOR_VERSION 0 #include #include #include "pmemcommon.h" #include "unittest.h" int main(int argc, char *argv[]) { START(argc, argv, "traces"); /* Execute test */ common_init(LOG_PREFIX, LOG_LEVEL_VAR, LOG_FILE_VAR, MAJOR_VERSION, MINOR_VERSION); LOG(0, "Log level NONE"); LOG(1, "Log level ERROR"); LOG(2, "Log level WARNING"); LOG(3, "Log level INFO"); LOG(4, "Log level DEBUG"); /* Cleanup */ common_fini(); DONE(NULL); } vmem-1.8/src/test/traces/traces.vcxproj000066400000000000000000000071551361505074100202650ustar00rootroot00000000000000 Debug x64 Release x64 {CA4BBB24-D33E-42E2-A495-F10D80DE8C1D} Win32Proj traces 10.0.16299.0 Application true v140 Application false v140 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/traces/traces.vcxproj.filters000066400000000000000000000040031361505074100217210ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {dac6c718-f0f8-43a0-ba38-bd72ab98e456} ps1 {822d02f2-ed7a-4f61-9773-b579384486f5} match Source Files Match Files Match Files Match Files Match Files Match Files Match Files Match Files Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts vmem-1.8/src/test/traces_custom_function/000077500000000000000000000000001361505074100206565ustar00rootroot00000000000000vmem-1.8/src/test/traces_custom_function/.gitignore000066400000000000000000000000271361505074100226450ustar00rootroot00000000000000traces_custom_function vmem-1.8/src/test/traces_custom_function/Makefile000066400000000000000000000034241361505074100223210ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/traces_custom_function/Makefile -- build traces unit test # TARGET = traces_custom_function OBJS = traces_custom_function.o BUILD_STATIC_DEBUG=n BUILD_STATIC_NONDEBUG=n LIBPMEMCOMMON=y include ../Makefile.inc CFLAGS += -DDEBUG vmem-1.8/src/test/traces_custom_function/TEST0000077500000000000000000000034551361505074100214520ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces_custom_function/TEST0 -- unit test for traces custom # print function # . ../unittest/unittest.sh require_build_type debug setup export TRACE_LOG_LEVEL=4 expect_normal_exit ./traces_custom_function$EXESUFFIX p check pass vmem-1.8/src/test/traces_custom_function/TEST0.PS1000066400000000000000000000034521361505074100220460ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces_custom_function/TEST0 -- unit test for traces custom # print function # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:TRACE_LOG_LEVEL = 4 expect_normal_exit $Env:EXE_DIR\traces_custom_function$Env:EXESUFFIX p check pass vmem-1.8/src/test/traces_custom_function/TEST1000077500000000000000000000034611361505074100214500ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces_custom_function/TEST1 -- unit test for traces custom # vsnprintf function # . ../unittest/unittest.sh require_build_type debug setup export TRACE_LOG_LEVEL=4 expect_normal_exit ./traces_custom_function$EXESUFFIX v check pass vmem-1.8/src/test/traces_custom_function/TEST1.PS1000066400000000000000000000034561361505074100220530ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/traces_custom_function/TEST1 -- unit test for traces custom # vsnprintf function # . ..\unittest\unittest.ps1 require_build_type debug setup $Env:TRACE_LOG_LEVEL = 4 expect_normal_exit $Env:EXE_DIR\traces_custom_function$Env:EXESUFFIX v check pass vmem-1.8/src/test/traces_custom_function/out0.log.match000066400000000000000000000027611361505074100233510ustar00rootroot00000000000000traces_custom_function$(nW)TEST0: START: traces_custom_function $(nW)traces_custom_function$(nW) p CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)pid $(N): program: $(nW) CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)trace_func version $(S) CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)src version: $(nW) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind pmemcheck $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind helgrind $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind memcheck $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind drd $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for shutdown state $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with libndctl 63+ $(OPT) CUSTOM_PRINT: : <3> [$(nW):$(N) util_mmap_init]$(W) CUSTOM_PRINT: : <0> [$(nW):$(N) main]$(W)Log level NONE CUSTOM_PRINT: : <1> [$(nW):$(N) main]$(W)Log level ERROR CUSTOM_PRINT: : <2> [$(nW):$(N) main]$(W)Log level WARNING CUSTOM_PRINT: : <3> [$(nW):$(N) main]$(W)Log level INFO CUSTOM_PRINT: : <4> [$(nW):$(N) main]$(W)Log level DEBUG CUSTOM_PRINT: : <3> [$(nW):$(N) util_mmap_fini]$(W) traces_custom_function$(nW)TEST0: DONE vmem-1.8/src/test/traces_custom_function/out1.log.match000066400000000000000000000032771361505074100233550ustar00rootroot00000000000000traces_custom_function$(nW)TEST1: START: traces_custom_function $(nW)traces_custom_function$(nW) v CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)pid $(N): program: $(nW) CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)trace_func version $(S) CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)src version: $(nW) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind pmemcheck $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind helgrind $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind memcheck $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with support for Valgrind drd $(OPT) $(OPT)CUSTOM_PRINT: 
: <1> [$(nW):$(N) out_init]$(W)compiled with support for shutdown state $(OPT) $(OPT)CUSTOM_PRINT: : <1> [$(nW):$(N) out_init]$(W)compiled with libndctl 63+ $(OPT) CUSTOM_PRINT: : <3> [$(nW):$(N) util_mmap_init]$(W) CUSTOM_PRINT: : <3> [$(nW):$(N) out_set_vsnprintf_func]$(W)vsnprintf $(nW)$(X) CUSTOM_PRINT: <@@trace_func>: <@@0> [@@$(nW):@@$(N) @@main]$(W)no format@@@@@@ CUSTOM_PRINT: <@@trace_func>: <@@0> [@@$(nW):@@$(N) @@main]$(W)pointer: @@$(nW)12345678@@@@@@ CUSTOM_PRINT: <@@trace_func>: <@@0> [@@$(nW):@@$(N) @@main]$(W)string: @@Hello world!@@@@@@ CUSTOM_PRINT: <@@trace_func>: <@@0> [@@$(nW):@@$(N) @@main]$(W)number: @@12345678@@@@@@ CUSTOM_PRINT: <@@trace_func>: <@@0> [@@$(nW):@@$(N) @@main]$(W)error@@: @@Invalid argument@@ CUSTOM_PRINT: <@@trace_func>: <@@3> [@@$(nW):@@$(N) @@util_mmap_fini] @@@@@@ traces_custom_function$(nW)TEST1: DONE vmem-1.8/src/test/traces_custom_function/traces_custom_function.c000066400000000000000000000071271361505074100256110ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * traces_custom_function.c -- unit test for traces with custom print or * vsnprintf functions * * usage: traces_custom_function [v|p] * */ #define LOG_PREFIX "trace_func" #define LOG_LEVEL_VAR "TRACE_LOG_LEVEL" #define LOG_FILE_VAR "TRACE_LOG_FILE" #define MAJOR_VERSION 1 #define MINOR_VERSION 0 #include #include #include "pmemcommon.h" #include "unittest.h" /* * print_custom_function -- Custom function to handle output * * This is called from the library to print text instead of output to stderr. */ static void print_custom_function(const char *s) { if (s) { UT_OUT("CUSTOM_PRINT: %s", s); } else { UT_OUT("CUSTOM_PRINT(NULL)"); } } /* * vsnprintf_custom_function -- Custom vsnprintf implementation * * It modifies format by adding @@ in front of each conversion specification. 
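 *
 * For example, the format string "number: %u" is rewritten to
 * "number: @@%u" before being passed to the real vsnprintf().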
*/ static int vsnprintf_custom_function(char *str, size_t size, const char *format, va_list ap) { char *format2 = MALLOC(strlen(format) * 3); int i = 0; int ret_val; while (*format != '\0') { if (*format == '%') { format2[i++] = '@'; format2[i++] = '@'; } format2[i++] = *format++; } format2[i++] = '\0'; ret_val = vsnprintf(str, size, format2, ap); FREE(format2); return ret_val; } int main(int argc, char *argv[]) { START(argc, argv, "traces_custom_function"); if (argc != 2) UT_FATAL("usage: %s [v|p]", argv[0]); out_set_print_func(print_custom_function); common_init(LOG_PREFIX, LOG_LEVEL_VAR, LOG_FILE_VAR, MAJOR_VERSION, MINOR_VERSION); switch (argv[1][0]) { case 'p': { LOG(0, "Log level NONE"); LOG(1, "Log level ERROR"); LOG(2, "Log level WARNING"); LOG(3, "Log level INFO"); LOG(4, "Log level DEBUG"); } break; case 'v': out_set_vsnprintf_func(vsnprintf_custom_function); LOG(0, "no format"); LOG(0, "pointer: %p", (void *)0x12345678); LOG(0, "string: %s", "Hello world!"); LOG(0, "number: %u", 12345678); errno = EINVAL; LOG(0, "!error"); break; default: UT_FATAL("usage: %s [v|p]", argv[0]); } /* Cleanup */ common_fini(); DONE(NULL); } vmem-1.8/src/test/traces_custom_function/traces_custom_function.vcxproj000066400000000000000000000063551361505074100270640ustar00rootroot00000000000000 Debug x64 Release x64 {02BC3B44-C7F1-4793-86C1-6F36CA8A7F53} Win32Proj traces_custom_function 10.0.16299.0 Application true v140 Application false v140 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/traces_custom_function/traces_custom_function.vcxproj.filters000066400000000000000000000021441361505074100305230ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {10ad8ca9-b73b-4997-913e-e733bebc7f29} {cf0d175d-d125-4070-b45c-f3eba9be619d} Source Files Test scripts Test scripts Match Files Match Files vmem-1.8/src/test/unicode_api/000077500000000000000000000000001361505074100163555ustar00rootroot00000000000000vmem-1.8/src/test/unicode_api/Makefile000066400000000000000000000032101361505074100200110ustar00rootroot00000000000000# # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/unicode_api/Makefile -- build unittest for unicode api completeness # include ../Makefile.inc vmem-1.8/src/test/unicode_api/README000066400000000000000000000002071361505074100172340ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/unicode_api/README. This directory contains a check for unicode API completeness. vmem-1.8/src/test/unicode_api/TEST0000077500000000000000000000053611361505074100171470ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/unicode_api/TEST0 -- unicode C API check # . ../unittest/unittest.sh require_command bc # there's no point in testing different builds require_build_type debug require_command clang SRC=../.. 
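# check_file (below) dumps the AST of a header with clang, collects every
# vmem function declared with a "char *" argument (except those matching
# EXC_PATT) and verifies that the same header also declares the <func>U(
# and <func>W( unicode counterparts.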
HEADERS_DIR=$SRC/include EXC_PATT="set_funcs|strdup|vmem_stats_print" FAILED=0 DEF_COL=6 function pick_col { require_command bc local ver=$(clang --version | grep version | sed "s/.*clang version \([0-9]*\)\.\([0-9]*\).*/\1*100+\2*10/" | bc) if [ $ver -le 340 ]; then DEF_COL=5 fi } function check_file { local file=$1 local pat=$2 local funcs=$(clang -Xclang -ast-dump -I$HEADERS_DIR $file -fno-color-diagnostics 2> /dev/null |\ grep "FunctionDecl.*\(vmem\).*char \*" | cut -d " " -f $DEF_COL) for func in $funcs do local good=1 # Not starting at 0 allows set -e to_check="$file" if [ -n "${pat:+x}" ] && [[ $func =~ $pat ]]; then continue fi for f in $to_check do let good+=$(grep -c "$func[UW][ ]*(" $f) done if [ $good -ne 3 ]; then echo "Function $func in file $file does not have unicode U/W counterparts" FAILED=1; fi done } setup pick_col for f in $HEADERS_DIR/*.h do check_file $f $EXC_PATT done if [ $FAILED -ne 0 ]; then exit 1 fi pass vmem-1.8/src/test/unicode_match_script/000077500000000000000000000000001361505074100202645ustar00rootroot00000000000000vmem-1.8/src/test/unicode_match_script/Makefile000066400000000000000000000032201361505074100217210ustar00rootroot00000000000000# # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/unicode_match_script/Makefile -- build unittest for unicode match # scripts # include ../Makefile.inc vmem-1.8/src/test/unicode_match_script/README000066400000000000000000000002211361505074100211370ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/unicode_match_script/README. This directory contains a unit test for unicode match scripts. 
vmem-1.8/src/test/unicode_match_script/TEST0000077500000000000000000000034061361505074100210540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/unicode_match_script/TEST0 -- unicode encoding unit test for match # . ../unittest/unittest.sh # there's no point in testing different builds require_build_type debug setup ../match -a pass vmem-1.8/src/test/unicode_match_script/unicodetest✭✮000066400000000000000000000007471361505074100246740ustar00rootroot00000000000000आ इ ई उ ऊ ऋ ऌ অ আ ই ঈ উ ঊ ঋ ঌ এ ঐ ơ Ƣ ƣ Ƥ ƥ Ʀ Ƨ ƨ Ʃ ƪ ƫ Ƭ ƭ Ʈ Ư ư Ʊ Ʋ a̘ ̙ ̚ ̛ ̜ ̝ ̞ ̟ ̠ ̡ ̢ ̣ ̤ ̥ ̦ ̧ ̨ ̩ ̪ ̫ ̬ ̭ ̮ ̯ ̰ ̱ ̲ ̳ ̴ ̵ ̶ ̷ ̸ ̹ ̺ ̻ ̼ ̽ ̾ ̿ ̀ ́ ͂ ̓ ̈́ ͅ ͠ ͡ 奈 懶 癩 羅 蘿螺 裸 邏 樂 洛 烙 珞 落 酪 駱 か が き ぎ く ぐけ げ こ ご さ ざ し じ す ず せ ぜ そ か が き ぎ く ぐ け げ こ ご さ ざ し じ す ず せ ぜ そ ÄÄ vmem-1.8/src/test/unicode_match_script/unicodetest✭✮.match000066400000000000000000000007361361505074100257650ustar00rootroot00000000000000आ इ ई उ ऊ ऋ ऌ অ আ ই ঈ উ ঊ ঋ ঌ এ ঐ ơ Ƣ ƣ Ƥ ƥ Ʀ Ƨ ƨ Ʃ ƪ ƫ Ƭ ƭ Ʈ Ư ư Ʊ Ʋ $(nW) ̙ ̚ ̛ ̜ ̝ ̞ ̟ ̠ ̡ ̢ ̣ ̤ ̥ ̦ ̧ ̨ ̩ ̪ ̫ ̬ ̭ ̮ ̯ ̰ ̱ ̲ ̳ ̴ ̵ ̶ ̷ ̸ ̹ ̺ ̻ ̼ ̽ ̾ ̿ ̀ ́ ͂ ̓ ̈́ ͅ ͠ ͡ 奈 懶 癩 羅 $(nW) 裸 邏 樂 洛 烙 珞 落 酪 駱 か が き ぎ く $(nW) げ$(W)$(*) じ す ず せ ぜ そ$(W) か が き ぎ く ぐ け げ こ ご さ ざ し じ す ず せ ぜ そ ÄÄ vmem-1.8/src/test/unittest/000077500000000000000000000000001361505074100157555ustar00rootroot00000000000000vmem-1.8/src/test/unittest/.gitignore000066400000000000000000000000101361505074100177340ustar00rootroot00000000000000libut.a vmem-1.8/src/test/unittest/Makefile000066400000000000000000000071201361505074100174150ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/unittest/Makefile -- build unittest support library # TOP := $(dir $(lastword $(MAKEFILE_LIST)))../../.. include $(TOP)/src/common.inc vpath %.c $(TOP)/src/common vpath %.h $(TOP)/src/common TARGET = libut.a OBJS = ut.o ut_alloc.o ut_file.o ut_pthread.o ut_signal.o ut_backtrace.o\ os_posix.o os_thread_posix.o rand.o alloc.o CFLAGS = -I$(TOP)/src/include CFLAGS += -I$(TOP)/src/common CFLAGS += $(OS_INCS) CFLAGS += -std=gnu99 CFLAGS += -ggdb CFLAGS += -Wall CFLAGS += -Werror CFLAGS += -Wmissing-prototypes CFLAGS += -Wpointer-arith CFLAGS += -Wsign-conversion CFLAGS += -Wsign-compare ifeq ($(WCONVERSION_AVAILABLE), y) CFLAGS += -Wconversion endif CFLAGS += -pthread CFLAGS += -fno-common ifeq ($(IS_ICC), n) CFLAGS += -Wunused-macros CFLAGS += -Wmissing-field-initializers endif ifeq ($(WUNREACHABLE_CODE_RETURN_AVAILABLE), y) CFLAGS += -Wunreachable-code-return endif ifeq ($(WMISSING_VARIABLE_DECLARATIONS_AVAILABLE), y) CFLAGS += -Wmissing-variable-declarations endif ifeq ($(WFLOAT_EQUAL_AVAILABLE), y) CFLAGS += -Wfloat-equal endif ifeq ($(WSWITCH_DEFAULT_AVAILABLE), y) CFLAGS += -Wswitch-default endif ifeq ($(WCAST_FUNCTION_TYPE_AVAILABLE), y) CFLAGS += -Wcast-function-type endif ifeq ($(USE_LIBUNWIND),y) CFLAGS += $(shell $(PKG_CONFIG) --cflags libunwind) -DUSE_LIBUNWIND endif ifeq ($(COVERAGE),1) CFLAGS += $(GCOV_CFLAGS) LDFLAGS += $(GCOV_LDFLAGS) LIBS += $(GCOV_LIBS) endif ifeq ($(FAULT_INJECTION),1) CFLAGS += -DFAULT_INJECTION=1 CXXFLAGS += -DFAULT_INJECTION=1 endif CFLAGS += $(EXTRA_CFLAGS) LIBS += $(LIBUTIL) all test: $(TARGET) $(TARGET): $(OBJS) $(AR) rv $@ $(OBJS) ifneq ($(CSTYLEON),0) $(TARGET): unittest.htmp endif objdir=. .c.o: $(call check-cstyle, $<) @mkdir -p .deps $(CC) -MD -c $(CFLAGS) $(INCS) $(COMMONINCS) $(call coverage-path, $<) -o $@ $(create-deps) %.htmp: %.h $(call check-cstyle, $<, $@) clean: $(RM) *.o core a.out unittest.htmp clobber: clean $(RM) $(TARGET) $(RM) -r .deps test check pcheck: all sparse: $(sparse-c) .PHONY: all test check clean clobber cstyle format pcheck -include .deps/*.P vmem-1.8/src/test/unittest/README000066400000000000000000000033501361505074100166360ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/unittest/README. This directory contains the unit test framework used by the Persistent Memory Development Kit unit tests. 
This framework provides a support for mock objects. To mock an interface use FUNC_MOCK_RET_ALWAYS or FUNC_MOCK macros in the test code. The FUNC_MOCK_RET_ALWAYS is quite straightforward, it simply takes a function name and a value that the given function has to return. For example: FUNC_MOCK_RET_ALWAYS(malloc, NULL) This declaration causes all malloc calls to return NULL and thus facilitates error path tests. The rest of FUNC_MOCK set of macros is used in more complicated cases. It allows to implement replacement logic for different runs of the given function. For example: FUNC_MOCK(malloc, void *, size_t size) FUNC_MOCK_RUN_RET_DEFAULT_REAL(malloc, size) FUNC_MOCK_RUN(2) { UT_ASSERTeq(size, 8); return NULL; } FUNC_MOCK_END This declaration causes the third malloc call to return NULL and also verifies if the size argument is of expected size. All other mallocs fallback to the real implementation. Those macros can be used on all non-static functions that are in a different compilation unit than the test itself. Because the mocking framework uses the linker to wrap the functions, all tests have to add appropriate linker flags. For convenience, there is a makefile function 'extract_funcs' which parses the source file looking for the FUNC_MOCK_RET_ALWAYS or FUNC_MOCK and adds the wrap flag for all the encountered functions. Test makefile that wishes to use this functionality should contain following line: LDFLAGS += $(call extract_funcs, [test_name].c) And after that no changes in makefile is required at all when adding new mocks. vmem-1.8/src/test/unittest/libut.vcxproj000066400000000000000000000141401361505074100205110ustar00rootroot00000000000000 Debug x64 Release x64 {492baa3d-0d5d-478e-9765-500463ae69aa} {CE3F2DFB-8470-4802-AD37-21CAF6CB2681} Win32Proj libut 10.0.16299.0 StaticLibrary true v140 NotSet StaticLibrary true v140 NotSet true .lib $(SolutionDir)\common;$(SolutionDir)\test\unittest;$(SolutionDir)\windows\include;$(SolutionDir)\include;$(IncludePath) true .lib $(SolutionDir)\common;$(SolutionDir)\test\unittest;$(SolutionDir)\windows\include;$(SolutionDir)\include;$(IncludePath) NotUsing Level3 PMDK_UTF8_API; NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) platform.h CompileAsC MultiThreadedDebugDLL true Console true ole32.lib;ntdll.lib false NotUsing Level3 PMDK_UTF8_API; NTDDI_VERSION=NTDDI_WIN10_RS1;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) platform.h CompileAsC MultiThreadedDLL MaxSpeed Default ProgramDatabase true Console true ole32.lib;ntdll.lib false vmem-1.8/src/test/unittest/libut.vcxproj.filters000066400000000000000000000046071361505074100221670ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hh;hpp;hxx;hm;inl;inc;xsd Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Header Files vmem-1.8/src/test/unittest/unittest.h000066400000000000000000000606471361505074100200220ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * unittest.h -- the mundane stuff shared by all unit tests * * we want unit tests to be very thorough and check absolutely everything * in order to nail down the test case as precisely as possible and flag * anything at all unexpected. as a result, most unit tests are 90% code * checking stuff that isn't really interesting to what is being tested. * to help address this, the macros defined here include all the boilerplate * error checking which prints information and exits on unexpected errors. * * the result changes this code: * * if ((buf = malloc(size)) == NULL) { * fprintf(stderr, "cannot allocate %d bytes for buf\n", size); * exit(1); * } * * into this code: * * buf = MALLOC(size); * * and the error message includes the calling context information (file:line). * in general, using the all-caps version of a call means you're using the * unittest.h version which does the most common checking for you. so * calling VMEM_CREATE() instead of vmem_create() returns the same * thing, but can never return an error since the unit test library checks for * it. * for routines like vmem_delete() there is no corresponding * VMEM_DELETE() because there's no error to check for. * * all unit tests should use the same initialization: * * START(argc, argv, "brief test description", ...); * * all unit tests should use these exit calls: * * DONE("message", ...); * UT_FATAL("message", ...); * * uniform stderr and stdout messages: * * UT_OUT("message", ...); * UT_ERR("message", ...); * * in all cases above, the message is printf-like, taking variable args. * the message can be NULL. it can start with "!" in which case the "!" is * skipped and the message gets the errno string appended to it, like this: * * if (somesyscall(..) 
< 0) * UT_FATAL("!my message"); */ #ifndef _UNITTEST_H #define _UNITTEST_H 1 #include #ifdef __cplusplus extern "C" { #endif #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifndef __FreeBSD__ #include #endif #include #include #include #include /* XXX: move OS abstraction layer out of common */ #include "os.h" #include "os_thread.h" #include "util.h" int ut_get_uuid_str(char *); #define UT_MAX_ERR_MSG 128 #define UT_POOL_HDR_UUID_STR_LEN 37 /* uuid string length */ #define UT_POOL_HDR_UUID_GEN_FILE "/proc/sys/kernel/random/uuid" /* XXX - fix this temp hack dup'ing util_strerror when we get mock for win */ void ut_strerror(int errnum, char *buff, size_t bufflen); /* XXX - eliminate duplicated definitions in unittest.h and util.h */ #ifdef _WIN32 static inline int ut_util_statW(const wchar_t *path, os_stat_t *st_bufp) { int retVal = _wstat64(path, st_bufp); /* clear unused bits to avoid confusion */ st_bufp->st_mode &= 0600; return retVal; } #endif /* * unit test support... */ void ut_start(const char *file, int line, const char *func, int argc, char * const argv[], const char *fmt, ...) __attribute__((format(printf, 6, 7))); void ut_startW(const char *file, int line, const char *func, int argc, wchar_t * const argv[], const char *fmt, ...) __attribute__((format(printf, 6, 7))); void NORETURN ut_done(const char *file, int line, const char *func, const char *fmt, ...) __attribute__((format(printf, 4, 5))); void NORETURN ut_fatal(const char *file, int line, const char *func, const char *fmt, ...) __attribute__((format(printf, 4, 5))); void NORETURN ut_end(const char *file, int line, const char *func, int ret); void ut_out(const char *file, int line, const char *func, const char *fmt, ...) __attribute__((format(printf, 4, 5))); void ut_err(const char *file, int line, const char *func, const char *fmt, ...) __attribute__((format(printf, 4, 5))); /* indicate the start of the test */ #ifndef _WIN32 #define START(argc, argv, ...)\ ut_start(__FILE__, __LINE__, __func__, argc, argv, __VA_ARGS__) #else #define START(argc, argv, ...)\ wchar_t **wargv = CommandLineToArgvW(GetCommandLineW(), &argc);\ for (int i = 0; i < argc; i++) {\ argv[i] = ut_toUTF8(wargv[i]);\ if (argv[i] == NULL) {\ for (i--; i >= 0; i--)\ free(argv[i]);\ UT_FATAL("Error during arguments conversion\n");\ }\ }\ ut_start(__FILE__, __LINE__, __func__, argc, argv, __VA_ARGS__) #endif /* indicate the start of the test */ #define STARTW(argc, argv, ...)\ ut_startW(__FILE__, __LINE__, __func__, argc, argv, __VA_ARGS__) /* normal exit from test */ #ifndef _WIN32 #define DONE(...)\ ut_done(__FILE__, __LINE__, __func__, __VA_ARGS__) #else #define DONE(...)\ for (int i = argc; i > 0; i--)\ free(argv[i - 1]);\ ut_done(__FILE__, __LINE__, __func__, __VA_ARGS__) #endif #define DONEW(...)\ ut_done(__FILE__, __LINE__, __func__, __VA_ARGS__) #define END(ret, ...)\ ut_end(__FILE__, __LINE__, __func__, ret) /* fatal error detected */ #define UT_FATAL(...)\ ut_fatal(__FILE__, __LINE__, __func__, __VA_ARGS__) /* normal output */ #define UT_OUT(...)\ ut_out(__FILE__, __LINE__, __func__, __VA_ARGS__) /* error output */ #define UT_ERR(...)\ ut_err(__FILE__, __LINE__, __func__, __VA_ARGS__) /* * assertions... 
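 *
 * each UT_ASSERT* macro below reports the failing condition together with
 * the caller's file:line context and terminates the test via ut_fatal();
 * e.g. UT_ASSERTeq(ret, 0) additionally prints both operand values.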
*/ /* assert a condition is true at runtime */ #define UT_ASSERT_rt(cnd)\ ((void)((cnd) || (ut_fatal(__FILE__, __LINE__, __func__,\ "assertion failure: %s", #cnd), 0))) /* assertion with extra info printed if assertion fails at runtime */ #define UT_ASSERTinfo_rt(cnd, info) \ ((void)((cnd) || (ut_fatal(__FILE__, __LINE__, __func__,\ "assertion failure: %s (%s = %s)", #cnd, #info, info), 0))) /* assert two integer values are equal at runtime */ #define UT_ASSERTeq_rt(lhs, rhs)\ ((void)(((lhs) == (rhs)) || (ut_fatal(__FILE__, __LINE__, __func__,\ "assertion failure: %s (0x%llx) == %s (0x%llx)", #lhs,\ (unsigned long long)(lhs), #rhs, (unsigned long long)(rhs)), 0))) /* assert two integer values are not equal at runtime */ #define UT_ASSERTne_rt(lhs, rhs)\ ((void)(((lhs) != (rhs)) || (ut_fatal(__FILE__, __LINE__, __func__,\ "assertion failure: %s (0x%llx) != %s (0x%llx)", #lhs,\ (unsigned long long)(lhs), #rhs, (unsigned long long)(rhs)), 0))) #if defined(__CHECKER__) #define UT_COMPILE_ERROR_ON(cond) #define UT_ASSERT_COMPILE_ERROR_ON(cond) #elif defined(_MSC_VER) #define UT_COMPILE_ERROR_ON(cond) C_ASSERT(!(cond)) /* XXX - can't be done with C_ASSERT() unless we have __builtin_constant_p() */ #define UT_ASSERT_COMPILE_ERROR_ON(cond) (void)(cond) #else #define UT_COMPILE_ERROR_ON(cond) ((void)sizeof(char[(cond) ? -1 : 1])) #ifndef __cplusplus #define UT_ASSERT_COMPILE_ERROR_ON(cond) UT_COMPILE_ERROR_ON(cond) #else /* __cplusplus */ /* * XXX - workaround for http://github.com/pmem/issues/issues/189 */ #define UT_ASSERT_COMPILE_ERROR_ON(cond) UT_ASSERT_rt(!(cond)) #endif /* __cplusplus */ #endif /* _MSC_VER */ /* assert a condition is true */ #define UT_ASSERT(cnd)\ do {\ /*\ * Detect useless asserts on always true expression. Please use\ * UT_COMPILE_ERROR_ON(!cnd) or UT_ASSERT_rt(cnd) in such\ * cases.\ */\ if (__builtin_constant_p(cnd))\ UT_ASSERT_COMPILE_ERROR_ON(cnd);\ UT_ASSERT_rt(cnd);\ } while (0) /* assertion with extra info printed if assertion fails */ #define UT_ASSERTinfo(cnd, info) \ do {\ /* See comment in UT_ASSERT. */\ if (__builtin_constant_p(cnd))\ UT_ASSERT_COMPILE_ERROR_ON(cnd);\ UT_ASSERTinfo_rt(cnd, info);\ } while (0) /* assert two integer values are equal */ #define UT_ASSERTeq(lhs, rhs)\ do {\ /* See comment in UT_ASSERT. */\ if (__builtin_constant_p(lhs) && __builtin_constant_p(rhs))\ UT_ASSERT_COMPILE_ERROR_ON((lhs) == (rhs));\ UT_ASSERTeq_rt(lhs, rhs);\ } while (0) /* assert two integer values are not equal */ #define UT_ASSERTne(lhs, rhs)\ do {\ /* See comment in UT_ASSERT. */\ if (__builtin_constant_p(lhs) && __builtin_constant_p(rhs))\ UT_ASSERT_COMPILE_ERROR_ON((lhs) != (rhs));\ UT_ASSERTne_rt(lhs, rhs);\ } while (0) /* assert pointer is fits range of [start, start + size) */ #define UT_ASSERTrange(ptr, start, size)\ ((void)(((uintptr_t)(ptr) >= (uintptr_t)(start) &&\ (uintptr_t)(ptr) < (uintptr_t)(start) + (uintptr_t)(size)) ||\ (ut_fatal(__FILE__, __LINE__, __func__,\ "assert failure: %s (%p) is outside range [%s (%p), %s (%p))", #ptr,\ (void *)(ptr), #start, (void *)(start), #start"+"#size,\ (void *)((uintptr_t)(start) + (uintptr_t)(size))), 0))) /* * memory allocation... 
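 *
 * the MALLOC/CALLOC/REALLOC/STRDUP wrappers declared below behave like
 * their lowercase counterparts, except that on failure they abort the
 * test with the calling file:line context instead of returning NULL,
 * so callers do not need to check the result.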
*/ void *ut_malloc(const char *file, int line, const char *func, size_t size); void *ut_calloc(const char *file, int line, const char *func, size_t nmemb, size_t size); void ut_free(const char *file, int line, const char *func, void *ptr); void ut_aligned_free(const char *file, int line, const char *func, void *ptr); void *ut_realloc(const char *file, int line, const char *func, void *ptr, size_t size); char *ut_strdup(const char *file, int line, const char *func, const char *str); void *ut_pagealignmalloc(const char *file, int line, const char *func, size_t size); void *ut_memalign(const char *file, int line, const char *func, size_t alignment, size_t size); void *ut_mmap_anon_aligned(const char *file, int line, const char *func, size_t alignment, size_t size); int ut_munmap_anon_aligned(const char *file, int line, const char *func, void *start, size_t size); /* a malloc() that can't return NULL */ #define MALLOC(size)\ ut_malloc(__FILE__, __LINE__, __func__, size) /* a calloc() that can't return NULL */ #define CALLOC(nmemb, size)\ ut_calloc(__FILE__, __LINE__, __func__, nmemb, size) /* a malloc() of zeroed memory */ #define ZALLOC(size)\ ut_calloc(__FILE__, __LINE__, __func__, 1, size) #define FREE(ptr)\ ut_free(__FILE__, __LINE__, __func__, ptr) #define ALIGNED_FREE(ptr)\ ut_aligned_free(__FILE__, __LINE__, __func__, ptr) /* a realloc() that can't return NULL */ #define REALLOC(ptr, size)\ ut_realloc(__FILE__, __LINE__, __func__, ptr, size) /* a strdup() that can't return NULL */ #define STRDUP(str)\ ut_strdup(__FILE__, __LINE__, __func__, str) /* a malloc() that only returns page aligned memory */ #define PAGEALIGNMALLOC(size)\ ut_pagealignmalloc(__FILE__, __LINE__, __func__, size) /* a malloc() that returns memory with given alignment */ #define MEMALIGN(alignment, size)\ ut_memalign(__FILE__, __LINE__, __func__, alignment, size) /* * A mmap() that returns anonymous memory with given alignment and guard * pages. 
*/ #define MMAP_ANON_ALIGNED(size, alignment)\ ut_mmap_anon_aligned(__FILE__, __LINE__, __func__, alignment, size) #define MUNMAP_ANON_ALIGNED(start, size)\ ut_munmap_anon_aligned(__FILE__, __LINE__, __func__, start, size) /* * file operations */ int ut_open(const char *file, int line, const char *func, const char *path, int flags, ...); int ut_wopen(const char *file, int line, const char *func, const wchar_t *path, int flags, ...); int ut_close(const char *file, int line, const char *func, int fd); FILE *ut_fopen(const char *file, int line, const char *func, const char *path, const char *mode); int ut_fclose(const char *file, int line, const char *func, FILE *stream); int ut_unlink(const char *file, int line, const char *func, const char *path); size_t ut_write(const char *file, int line, const char *func, int fd, const void *buf, size_t len); size_t ut_read(const char *file, int line, const char *func, int fd, void *buf, size_t len); os_off_t ut_lseek(const char *file, int line, const char *func, int fd, os_off_t offset, int whence); int ut_posix_fallocate(const char *file, int line, const char *func, int fd, os_off_t offset, os_off_t len); int ut_stat(const char *file, int line, const char *func, const char *path, os_stat_t *st_bufp); int ut_statW(const char *file, int line, const char *func, const wchar_t *path, os_stat_t *st_bufp); int ut_fstat(const char *file, int line, const char *func, int fd, os_stat_t *st_bufp); void *ut_mmap(const char *file, int line, const char *func, void *addr, size_t length, int prot, int flags, int fd, os_off_t offset); int ut_munmap(const char *file, int line, const char *func, void *addr, size_t length); int ut_mprotect(const char *file, int line, const char *func, void *addr, size_t len, int prot); int ut_ftruncate(const char *file, int line, const char *func, int fd, os_off_t length); long long ut_strtoll(const char *file, int line, const char *func, const char *nptr, char **endptr, int base); long ut_strtol(const char *file, int line, const char *func, const char *nptr, char **endptr, int base); int ut_strtoi(const char *file, int line, const char *func, const char *nptr, char **endptr, int base); unsigned long long ut_strtoull(const char *file, int line, const char *func, const char *nptr, char **endptr, int base); unsigned long ut_strtoul(const char *file, int line, const char *func, const char *nptr, char **endptr, int base); unsigned ut_strtou(const char *file, int line, const char *func, const char *nptr, char **endptr, int base); /* an open() that can't return < 0 */ #define OPEN(path, ...)\ ut_open(__FILE__, __LINE__, __func__, path, __VA_ARGS__) /* a _wopen() that can't return < 0 */ #define WOPEN(path, ...)\ ut_wopen(__FILE__, __LINE__, __func__, path, __VA_ARGS__) /* a close() that can't return -1 */ #define CLOSE(fd)\ ut_close(__FILE__, __LINE__, __func__, fd) /* an fopen() that can't return != 0 */ #define FOPEN(path, mode)\ ut_fopen(__FILE__, __LINE__, __func__, path, mode) /* a fclose() that can't return != 0 */ #define FCLOSE(stream)\ ut_fclose(__FILE__, __LINE__, __func__, stream) /* an unlink() that can't return -1 */ #define UNLINK(path)\ ut_unlink(__FILE__, __LINE__, __func__, path) /* a write() that can't return -1 */ #define WRITE(fd, buf, len)\ ut_write(__FILE__, __LINE__, __func__, fd, buf, len) /* a read() that can't return -1 */ #define READ(fd, buf, len)\ ut_read(__FILE__, __LINE__, __func__, fd, buf, len) /* a lseek() that can't return -1 */ #define LSEEK(fd, offset, whence)\ ut_lseek(__FILE__, __LINE__, __func__, fd, 
offset, whence) #define POSIX_FALLOCATE(fd, off, len)\ ut_posix_fallocate(__FILE__, __LINE__, __func__, fd, off, len) #define FSTAT(fd, st_bufp)\ ut_fstat(__FILE__, __LINE__, __func__, fd, st_bufp) /* a mmap() that can't return MAP_FAILED */ #define MMAP(addr, len, prot, flags, fd, offset)\ ut_mmap(__FILE__, __LINE__, __func__, addr, len, prot, flags, fd, offset); /* a munmap() that can't return -1 */ #define MUNMAP(addr, length)\ ut_munmap(__FILE__, __LINE__, __func__, addr, length); /* a mprotect() that can't return -1 */ #define MPROTECT(addr, len, prot)\ ut_mprotect(__FILE__, __LINE__, __func__, addr, len, prot); #define STAT(path, st_bufp)\ ut_stat(__FILE__, __LINE__, __func__, path, st_bufp) #define STATW(path, st_bufp)\ ut_statW(__FILE__, __LINE__, __func__, path, st_bufp) #define FTRUNCATE(fd, length)\ ut_ftruncate(__FILE__, __LINE__, __func__, fd, length) #define ATOU(nptr) STRTOU(nptr, NULL, 10) #define ATOUL(nptr) STRTOUL(nptr, NULL, 10) #define ATOULL(nptr) STRTOULL(nptr, NULL, 10) #define ATOI(nptr) STRTOI(nptr, NULL, 10) #define ATOL(nptr) STRTOL(nptr, NULL, 10) #define ATOLL(nptr) STRTOLL(nptr, NULL, 10) #define STRTOULL(nptr, endptr, base)\ ut_strtoull(__FILE__, __LINE__, __func__, nptr, endptr, base) #define STRTOUL(nptr, endptr, base)\ ut_strtoul(__FILE__, __LINE__, __func__, nptr, endptr, base) #define STRTOL(nptr, endptr, base)\ ut_strtol(__FILE__, __LINE__, __func__, nptr, endptr, base) #define STRTOLL(nptr, endptr, base)\ ut_strtoll(__FILE__, __LINE__, __func__, nptr, endptr, base) #define STRTOU(nptr, endptr, base)\ ut_strtou(__FILE__, __LINE__, __func__, nptr, endptr, base) #define STRTOI(nptr, endptr, base)\ ut_strtoi(__FILE__, __LINE__, __func__, nptr, endptr, base) #ifndef _WIN32 #define ut_jmp_buf_t sigjmp_buf #define ut_siglongjmp(b) siglongjmp(b, 1) #define ut_sigsetjmp(b) sigsetjmp(b, 1) #else #define ut_jmp_buf_t jmp_buf #define ut_siglongjmp(b) longjmp(b, 1) #define ut_sigsetjmp(b) setjmp(b) static DWORD ErrMode; static BOOL Suppressed = FALSE; static UINT AbortBehave; #endif void ut_suppress_errmsg(void); void ut_unsuppress_errmsg(void); /* * signals... */ int ut_sigaction(const char *file, int line, const char *func, int signum, struct sigaction *act, struct sigaction *oldact); /* a sigaction() that can't return an error */ #define SIGACTION(signum, act, oldact)\ ut_sigaction(__FILE__, __LINE__, __func__, signum, act, oldact) /* * pthreads... */ int ut_thread_create(const char *file, int line, const char *func, os_thread_t *__restrict thread, const os_thread_attr_t *__restrict attr, void *(*start_routine)(void *), void *__restrict arg); int ut_thread_join(const char *file, int line, const char *func, os_thread_t *thread, void **value_ptr); /* a os_thread_create() that can't return an error */ #define PTHREAD_CREATE(thread, attr, start_routine, arg)\ ut_thread_create(__FILE__, __LINE__, __func__,\ thread, attr, start_routine, arg) /* a os_thread_join() that can't return an error */ #define PTHREAD_JOIN(thread, value_ptr)\ ut_thread_join(__FILE__, __LINE__, __func__, thread, value_ptr) /* * processes... */ #ifdef _WIN32 intptr_t ut_spawnv(int argc, const char **argv, ...); #endif /* * mocks... * * NOTE: On Linux, function mocking is implemented using wrapper functions. * See "--wrap" option of the GNU linker. 
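 * With "--wrap symbol", every undefined reference to <symbol> is resolved
 * to <__wrap_symbol>, while <__real_symbol> still resolves to the original
 * definition, so a mock can intercept a call and optionally forward it to
 * the real implementation.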
* There is no such feature in VC++, so on Windows we do the mocking at * compile time, by redefining symbol names: * - all the references to are replaced with <__wrap_symbol> * in all the compilation units, except the one where the is * defined and the test source file * - the original definition of is replaced with <__real_symbol> * - a wrapper function <__wrap_symbol> must be defined in the test program * (it may still call the original function via <__real_symbol>) * Such solution seems to be sufficient for the purpose of our tests, even * though it has some limitations. I.e. it does no work well with malloc/free, * so to wrap the system memory allocator functions, we use the built-in * feature of all the PMDK libraries, allowing to override default memory * allocator with the custom one. */ #ifndef _WIN32 #define _FUNC_REAL_DECL(name, ret_type, ...)\ ret_type __real_##name(__VA_ARGS__) __attribute__((unused)); #else #define _FUNC_REAL_DECL(name, ret_type, ...)\ ret_type name(__VA_ARGS__); #endif #ifndef _WIN32 #define _FUNC_REAL(name)\ __real_##name #else #define _FUNC_REAL(name)\ name #endif #define RCOUNTER(name)\ _rcounter##name #define FUNC_MOCK_RCOUNTER_SET(name, val)\ RCOUNTER(name) = val; #define FUNC_MOCK(name, ret_type, ...)\ _FUNC_REAL_DECL(name, ret_type, ##__VA_ARGS__)\ static unsigned RCOUNTER(name);\ ret_type __wrap_##name(__VA_ARGS__);\ ret_type __wrap_##name(__VA_ARGS__) {\ switch (util_fetch_and_add32(&RCOUNTER(name), 1)) { #define FUNC_MOCK_DLLIMPORT(name, ret_type, ...)\ __declspec(dllimport) _FUNC_REAL_DECL(name, ret_type, ##__VA_ARGS__)\ static unsigned RCOUNTER(name);\ ret_type __wrap_##name(__VA_ARGS__);\ ret_type __wrap_##name(__VA_ARGS__) {\ switch (util_fetch_and_add32(&RCOUNTER(name), 1)) { #define FUNC_MOCK_END\ }} #define FUNC_MOCK_RUN(run)\ case run: #define FUNC_MOCK_RUN_DEFAULT\ default: #define FUNC_MOCK_RUN_RET(run, ret)\ case run: return (ret); #define FUNC_MOCK_RUN_RET_DEFAULT_REAL(name, ...)\ default: return _FUNC_REAL(name)(__VA_ARGS__); #define FUNC_MOCK_RUN_RET_DEFAULT(ret)\ default: return (ret); #define FUNC_MOCK_RET_ALWAYS(name, ret_type, ret, ...)\ FUNC_MOCK(name, ret_type, __VA_ARGS__)\ FUNC_MOCK_RUN_RET_DEFAULT(ret);\ FUNC_MOCK_END #define FUNC_MOCK_RET_ALWAYS_VOID(name, ...)\ FUNC_MOCK(name, void, __VA_ARGS__)\ default: return;\ FUNC_MOCK_END extern unsigned long Ut_pagesize; extern unsigned long long Ut_mmap_align; extern os_mutex_t Sigactions_lock; void ut_dump_backtrace(void); void ut_sighandler(int); void ut_register_sighandlers(void); uint16_t ut_checksum(uint8_t *addr, size_t len); char *ut_toUTF8(const wchar_t *wstr); wchar_t *ut_toUTF16(const char *wstr); struct test_case { const char *name; int (*func)(const struct test_case *tc, int argc, char *argv[]); }; /* * get_tc -- return test case of specified name */ static inline const struct test_case * get_tc(const char *name, const struct test_case *test_cases, size_t ntests) { for (size_t i = 0; i < ntests; i++) { if (strcmp(name, test_cases[i].name) == 0) return &test_cases[i]; } return NULL; } static inline void TEST_CASE_PROCESS(int argc, char *argv[], const struct test_case *test_cases, size_t ntests) { if (argc < 2) UT_FATAL("usage: %s []", argv[0]); for (int i = 1; i < argc; i++) { char *str_test = argv[i]; const int args_off = i + 1; const struct test_case *tc = get_tc(str_test, test_cases, ntests); if (!tc) UT_FATAL("unknown test case -- '%s'", str_test); i += tc->func(tc, argc - args_off, &argv[args_off]); } } #define TEST_CASE_DECLARE(_name)\ int \ _name(const struct 
test_case *tc, int argc, char *argv[]) #define TEST_CASE(_name)\ {\ .name = #_name,\ .func = (_name),\ } #define STR(x) #x #define ASSERT_ALIGNED_BEGIN(type) do {\ size_t off = 0;\ const char *last = "(none)";\ type t; #define ASSERT_ALIGNED_FIELD(type, field) do {\ if (offsetof(type, field) != off)\ UT_FATAL("%s: padding, missing field or fields not in order between "\ "'%s' and '%s' -- offset %lu, real offset %lu",\ STR(type), last, STR(field), off, offsetof(type, field));\ off += sizeof(t.field);\ last = STR(field);\ } while (0) #define ASSERT_FIELD_SIZE(field, size) do {\ UT_COMPILE_ERROR_ON(size != sizeof(t.field));\ } while (0) #define ASSERT_OFFSET_CHECKPOINT(type, checkpoint) do {\ if (off != checkpoint)\ UT_FATAL("%s: violated offset checkpoint -- "\ "checkpoint %lu, real offset %lu",\ STR(type), checkpoint, off);\ } while (0) #define ASSERT_ALIGNED_CHECK(type)\ if (off != sizeof(type))\ UT_FATAL("%s: missing field or padding after '%s': "\ "sizeof(%s) = %lu, fields size = %lu",\ STR(type), last, STR(type), sizeof(type), off);\ } while (0) /* * AddressSanitizer */ #ifdef __clang__ #if __has_feature(address_sanitizer) #define UT_DEFINE_ASAN_POISON #endif #else #ifdef __SANITIZE_ADDRESS__ #define UT_DEFINE_ASAN_POISON #endif #endif #ifdef UT_DEFINE_ASAN_POISON void __asan_poison_memory_region(void const volatile *addr, size_t size); void __asan_unpoison_memory_region(void const volatile *addr, size_t size); #define ASAN_POISON_MEMORY_REGION(addr, size) \ __asan_poison_memory_region((addr), (size)) #define ASAN_UNPOISON_MEMORY_REGION(addr, size) \ __asan_unpoison_memory_region((addr), (size)) #else #define ASAN_POISON_MEMORY_REGION(addr, size) \ ((void)(addr), (void)(size)) #define ASAN_UNPOISON_MEMORY_REGION(addr, size) \ ((void)(addr), (void)(size)) #endif #ifdef __cplusplus } #endif #endif /* unittest.h */ vmem-1.8/src/test/unittest/unittest.ps1000066400000000000000000001112711361505074100202640ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. . 
"..\testconfig.ps1" function verbose_msg { if ($Env:UNITTEST_LOG_LEVEL -ge "2") { Write-Host $args[0] } } function msg { if ($Env:UNITTEST_LOG_LEVEL -ge "1") { Write-Host $args[0] } } function fatal { throw $args[0] } function touch { out-file -InputObject $null -Encoding ascii -literalpath $args[0] } function epoch { return [int64](([datetime]::UtcNow)-(get-date "1/1/1970")).TotalMilliseconds } function isDir { if (-Not $args[0]) { return $false } return Test-Path $args[0] -PathType Container } # force dir w/wildcard to fail if no match function dirFailOnEmpty { if (0 -eq (Get-ChildItem $args[0]).Count) { throw -Message 'No match: $args[0]' } } function getLineCount { [int64]$numLines = 0 $buff = New-Object IO.StreamReader $args[0] while ($buff.ReadLine() -ne $null){ $numLines++ } $buff.Close() return $numLines } # # convert_to_bytes -- converts the string with K, M, G or T suffixes # to bytes # # example: # "1G" --> "1073741824" # "2T" --> "2199023255552" # "3k" --> "3072" # "1K" --> "1024" # "10" --> "10" # function convert_to_bytes() { param([string]$size) if ($size.ToLower().EndsWith("kib")) { $size = [int64]($size.Substring(0, $size.Length - 3)) * 1kb } elseif ($size.ToLower().EndsWith("mib")) { $size = [int64]($size.Substring(0, $size.Length - 3)) * 1mb } elseif ($size.ToLower().EndsWith("gib")) { $size = [int64]($size.Substring(0, $size.Length - 3)) * 1gb } elseif ($size.ToLower().EndsWith("tib")) { $size = [int64]($size.Substring(0, $size.Length - 3)) * 1tb } elseif ($size.ToLower().EndsWith("pib")) { $size = [int64]($size.Substring(0, $size.Length - 3)) * 1pb } elseif ($size.ToLower().EndsWith("kb")) { $size = [int64]($size.Substring(0, $size.Length - 2)) * 1000 } elseif ($size.ToLower().EndsWith("mb")) { $size = [int64]($size.Substring(0, $size.Length - 2)) * 1000 * 1000 } elseif ($size.ToLower().EndsWith("gb")) { $size = [int64]($size.Substring(0, $size.Length - 2)) * 1000 * 1000 * 1000 } elseif ($size.ToLower().EndsWith("tb")) { $size = [int64]($size.Substring(0, $size.Length - 2)) * 1000 * 1000 * 1000 * 1000 } elseif ($size.ToLower().EndsWith("pb")) { $size = [int64]($size.Substring(0, $size.Length - 2)) * 1000 * 1000 * 1000 * 1000 * 1000 } elseif ($size.ToLower().EndsWith("b")) { $size = [int64]($size.Substring(0, $size.Length - 1)) } elseif ($size.ToLower().EndsWith("k")) { $size = [int64]($size.Substring(0, $size.Length - 1)) * 1kb } elseif ($size.ToLower().EndsWith("m")) { $size = [int64]($size.Substring(0, $size.Length - 1)) * 1mb } elseif ($size.ToLower().EndsWith("g")) { $size = [int64]($size.Substring(0, $size.Length - 1)) * 1gb } elseif ($size.ToLower().EndsWith("t")) { $size = [int64]($size.Substring(0, $size.Length - 1)) * 1tb } elseif ($size.ToLower().EndsWith("p")) { $size = [int64]($size.Substring(0, $size.Length - 1)) * 1pb } elseif (($size -match "^[0-9]*$") -and ([int64]$size -gt 1023)) { # # Because powershell converts 1kb to 1024, and we convert it to 1000, we # catch byte values greater than 1023 to be suspicious that caller might # not be aware of the silent conversion by powershell. If the caller # knows what she is doing, she can always append 'b' to the number. # fatal "Error suspicious byte value to convert_to_bytes" } return [Int64]$size } # # truncate -- shrink or extend a file to the specified size # # A file that does not exist is created (holey). # # XXX: Modify/rename 'sparsefile' to make it work as Linux 'truncate'. # Then, this cmdlet is not needed anymore. 
# function truncate { [CmdletBinding(PositionalBinding=$true)] Param( [alias("s")][Parameter(Mandatory = $true)][string]$size, [Parameter(Mandatory = $true)][string]$fname ) [int64]$size_in_bytes = (convert_to_bytes $size) if (-Not (Test-Path $fname)) { & $SPARSEFILE $fname $size_in_bytes 2>&1 1>> $Env:PREP_LOG_FILE } else { $file = new-object System.IO.FileStream $fname, Open, ReadWrite $file.SetLength($size_in_bytes) $file.Close() } } # # create_file -- create zeroed out files of a given length # # example, to create two files, each 1GB in size: # create_file 1G testfile1 testfile2 # # Note: this literally fills the file with 0's to make sure its # not a sparse file. Its slow but the fastest method I could find # # Input unit size is in bytes with optional suffixes like k, KB, M, etc. # function create_file { [int64]$size = (convert_to_bytes $args[0]) for ($i=1;$i -lt $args.count;$i++) { $stream = new-object system.IO.StreamWriter($args[$i], "False", [System.Text.Encoding]::Ascii) 1..$size | %{ $stream.Write("0") } $stream.close() Get-ChildItem $args[$i]* >> $Env:PREP_LOG_FILE } } # # create_holey_file -- create holey files of a given length # # example: # create_holey_file 1024k testfile1 testfile2 # create_holey_file 2048M testfile1 testfile2 # create_holey_file 234 testfile1 # create_holey_file 2340b testfile1 # # Input unit size is in bytes with optional suffixes like k, KB, M, etc. # function create_holey_file { [int64]$size = (convert_to_bytes $args[0]) # it causes CreateFile with CREATE_ALWAYS flag $mode = "-f" for ($i=1;$i -lt $args.count;$i++) { # need to call out to sparsefile.exe to create a sparse file, note # that initial version of DAX doesn't support sparse $fname = $args[$i] & $SPARSEFILE $mode $fname $size if ($Global:LASTEXITCODE -ne 0) { fatal "Error $Global:LASTEXITCODE with sparsefile create" } Get-ChildItem $fname >> $Env:PREP_LOG_FILE } } # # create_nonzeroed_file -- create non-zeroed files of a given length # # A given first kilobytes of the file is zeroed out. # # example, to create two files, each 1GB in size, with first 4K zeroed # create_nonzeroed_file 1G 4K testfile1 testfile2 # # Note: from 0 to offset is sparse, after that filled with Z # # Input unit size is in bytes with optional suffixes like k, KB, M, etc. # function create_nonzeroed_file { [int64]$offset = (convert_to_bytes $args[1]) [int64]$size = ((convert_to_bytes $args[0]) - $offset) [int64]$numz = $size / 1024 [string] $z = "Z" * 1024 # using a 1K string to speed up writing for ($i=2;$i -lt $args.count;$i++) { # create sparse file of offset length $file = new-object System.IO.FileStream $args[$i], Create, ReadWrite $file.SetLength($offset) $file.Close() Get-ChildItem $args[$i] >> $Env:PREP_LOG_FILE $stream = new-object system.IO.StreamWriter($args[$i], "True", [System.Text.Encoding]::Ascii) 1..$numz | %{ $stream.Write($Z) } $stream.close() Get-ChildItem $args[$i] >> $Env:PREP_LOG_FILE } } # # create_poolset -- create a dummy pool set # # Creates a pool set file using the provided list of part sizes and paths. # Optionally, it also creates the selected part files (zeroed, partially zeroed # or non-zeroed) with requested size and mode. The actual file size may be # different than the part size in the pool set file. # 'r' or 'R' on the list of arguments indicate the beginning of the next # replica set and 'm' or 'M' the beginning of the next remote replica set. # A remote replica requires two parameters: a target node and a pool set # descriptor. 
# # Each part argument has the following format: # psize:ppath[:cmd[:fsize[:mode]]] # # where: # psize - part size or AUTO (only for DAX device) # ppath - path # cmd - (optional) can be: # x - do nothing (may be skipped if there's no 'fsize', 'mode') # z - create zeroed (holey) file # n - create non-zeroed file # h - create non-zeroed file, but with zeroed header (first 4KB) # d - create empty directory # fsize - (optional) the actual size of the part file (if 'cmd' is not 'x') # mode - (optional) same format as for 'chmod' command # # Each remote replica argument has the following format: # node:desc # # where: # node - target node # desc - pool set descriptor # # example: # The following command define a pool set consisting of two parts: 16MB # and 32MB, a local replica with only one part of 48MB and a remote replica. # The first part file is not created, the second is zeroed. The only replica # part is non-zeroed. Also, the last file is read-only and its size # does not match the information from pool set file. The last line describes # a remote replica. # # create_poolset .\pool.set 16M:testfile1 32M:testfile2:z \ # R 48M:testfile3:n:11M:0400 \ # M remote_node:remote_pool.set # # function create_poolset { $psfile = $args[0] echo "PMEMPOOLSET" | out-file -encoding utf8 -literalpath $psfile for ($i=1;$i -lt $args.count;$i++) { if ($args[$i] -eq "M" -Or $args[$i] -eq 'm') { # remote replica $i++ $cmd = $args[$i] $fparms = ($cmd.Split("{:}")) $node = $fparms[0] $desc = $fparms[1] echo "REPLICA $node $desc" | out-file -Append -encoding utf8 -literalpath $psfile continue } if ($args[$i] -eq "R" -Or $args[$i] -eq 'r') { echo "REPLICA" | out-file -Append -encoding utf8 -literalpath $psfile continue } if ($args[$i] -eq "O" -Or $args[$i] -eq 'o') { $i++ $opt = $args[$i] echo "OPTION $opt" | out-file -Append -encoding utf8 -literalpath $psfile continue } $cmd = $args[$i] # need to strip out a drive letter if included because we use : # as a delimiter in the argument $driveLetter = "" if ($cmd -match ":([a-zA-Z]):\\") { # for path names in the following format: "C:\foo\bar" $tmp = ($cmd.Split("{:\\}",2,[System.StringSplitOptions]::RemoveEmptyEntries)) $cmd = $tmp[0] + ":" + $tmp[1].SubString(2) $driveLetter = $tmp[1].SubString(0,2) } elseif ($cmd -match ":\\\\\?\\([a-zA-Z]):\\") { # for _long_ path names in the following format: "\\?\C:\foo\bar" $tmp = ($cmd.Split("{:}",2,[System.StringSplitOptions]::RemoveEmptyEntries)) $cmd = $tmp[0] + ":" + $tmp[1].SubString(6) $driveLetter = $tmp[1].SubString(0,6) } $fparms = ($cmd.Split("{:}")) $fsize = $fparms[0] # XXX: unclear how to follow a symlink # like linux "fpath=`readlink -mn ${fparms[1]}`" but I've not tested # that it works with a symlink or shortcut $fpath = $fparms[1] if (-Not $driveLetter -eq "") { $fpath = $driveLetter + $fpath } $cmd = $fparms[2] $asize = $fparms[3] $mode = $fparms[4] if (-not $asize) { $asize = $fsize } switch -regex ($cmd) { # do nothing 'x' { } # zeroed (holey) file 'z' { create_holey_file $asize $fpath } # non-zeroed file 'n' { create_file $asize $fpath } # non-zeroed file, except 4K header 'h' { create_nonzeroed_file $asize 4K $fpath } # create empty directory 'd' { new-item $fpath -force -itemtype directory >> $Env:PREP_LOG_FILE } } # XXX: didn't convert chmod # if [ $mode ]; then # chmod $mode $fpath # fi echo "$fsize $fpath" | out-file -Append -encoding utf8 -literalpath $psfile } # for args } # # dump_last_n_lines -- dumps the last N lines of given log file to stdout # function dump_last_n_lines { if ($Args[0] -And 
(Test-Path $Args[0])) { sv -Name fname ((Get-Location).path + "\" + $Args[0]) sv -Name ln (getLineCount $fname) if ($ln -gt $UT_DUMP_LINES) { msg "Last $UT_DUMP_LINES lines of $fname below (whole file has $ln lines)." $ln = $UT_DUMP_LINES } else { msg "$fname below." } foreach ($line in Get-Content $fname -Tail $ln) { msg $line } } } # # check_exit_code -- check if $LASTEXITCODE is equal to 0 # function check_exit_code { if ($Global:LASTEXITCODE -ne 0) { sv -Name msg "failed with exit code $Global:LASTEXITCODE" if (Test-Path $Env:ERR_LOG_FILE) { if ($Env:UNITTEST_LOG_LEVEL -ge "1") { echo "${Env:UNITTEST_NAME}: $msg. $Env:ERR_LOG_FILE" >> $Env:ERR_LOG_FILE } else { Write-Error "${Env:UNITTEST_NAME}: $msg. $Env:ERR_LOG_FILE" } } else { Write-Error "${Env:UNITTEST_NAME}: $msg" } dump_last_n_lines $Env:PREP_LOG_FILE dump_last_n_lines $Env:TRACE_LOG_FILE dump_last_n_lines $Env:PMEM_LOG_FILE dump_last_n_lines $Env:PMEMOBJ_LOG_FILE dump_last_n_lines $Env:PMEMLOG_LOG_FILE dump_last_n_lines $Env:PMEMBLK_LOG_FILE dump_last_n_lines $Env:PMEMPOOL_LOG_FILE dump_last_n_lines $Env:VMEM_LOG_FILE dump_last_n_lines $Env:VMMALLOC_LOG_FILE fail "" } } # # expect_normal_exit -- run a given command, expect it to exit 0 # function expect_normal_exit { #XXX: bash sets up LD_PRELOAD and other gcc options here # that we can't do, investigating how to address API hooking... sv -Name command $args[0] $params = New-Object System.Collections.ArrayList foreach ($param in $Args[1 .. $Args.Count]) { if ($param -is [array]) { foreach ($param_entry in $param) { [string]$params += -join(" '", $param_entry, "' ") } } else { [string]$params += -join(" '", $param, "' ") } } # Set $LASTEXITCODE to the value indicating failure. It should be # overwritten with the exit status of the invoked command. # It is to catch the case when the command is not executed (i.e. because # of missing binaries / wrong path / etc.) and $LASTEXITCODE contains the # status of some other command executed before. $Global:LASTEXITCODE = 1 Invoke-Expression "$command $params" check_exit_code } # # expect_abnormal_exit -- run a given command, expect it to exit non-zero # function expect_abnormal_exit { #XXX: bash sets up LD_PRELOAD and other gcc options here # that we can't do, investigating how to address API hooking... sv -Name command $args[0] $params = New-Object System.Collections.ArrayList foreach ($param in $Args[1 .. $Args.Count]) { if ($param -is [array]) { foreach ($param_entry in $param) { [string]$params += -join(" '", $param_entry, "' ") } } else { [string]$params += -join(" '", $param, "' ") } } # Suppress abort window $prev_abort = $Env:VMEM_NO_ABORT_MSG $Env:VMEM_NO_ABORT_MSG = 1 # Set $LASTEXITCODE to the value indicating success. It should be # overwritten with the exit status of the invoked command. # It is to catch the case when the command is not executed (i.e. because # of missing binaries / wrong path / etc.) and $LASTEXITCODE contains the # status of some other command executed before. $Global:LASTEXITCODE = 0 Invoke-Expression "$command $params" $Env:VMEM_NO_ABORT_MSG = $prev_abort if ($Global:LASTEXITCODE -eq 0) { fail "${Env:UNITTEST_NAME}: command succeeded unexpectedly."
} } # # check_pool -- run pmempool check on specified pool file # function check_pool { $file = $Args[0] if ($Env:CHECK_POOL -eq "1") { Write-Verbose "$Env:UNITTEST_NAME: checking consistency of pool $file" Invoke-Expression "$PMEMPOOL check $file 2>&1 1>>$Env:CHECK_POOL_LOG_FILE" if ($Global:LASTEXITCODE -ne 0) { fail "error: $PMEMPOOL returned error code ${Global:LASTEXITCODE}" } } } # # check_pools -- run pmempool check on specified pool files # function check_pools { if ($Env:CHECK_POOL -eq "1") { foreach ($arg in $Args[0 .. $Args.Count]) { check_pool $arg } } } # # require_unlimited_vm -- require unlimited virtual memory # # This implies requirements for: # - overcommit_memory enabled (/proc/sys/vm/overcommit_memory is 0 or 1) # - unlimited virtual memory (ulimit -v is unlimited) # function require_unlimited_vm { msg "${Env:UNITTEST_NAME}: SKIP required: overcommit_memory enabled and unlimited virtual memory" exit 0 } # # require_no_superuser -- require user without superuser rights # # XXX: not sure how to translate # function require_no_superuser { msg "${Env:UNITTEST_NAME}: SKIP required: run without superuser rights" exit 0 } # # require_build_type -- only allow script to continue for a certain build type # function require_build_type { for ($i=0;$i -lt $args.count;$i++) { if ($args[$i] -eq $Env:BUILD) { return } } verbose_msg "${Env:UNITTEST_NAME}: SKIP build-type $Env:BUILD ($* required)" exit 0 } # # require_pkg -- only allow script to continue if specified package exists # function require_pkg { # XXX: placeholder for checking dependencies if we have a need } # # require_binary -- continue script execution only if the binary has been compiled # # In case of conditional compilation, skip this test. # function require_binary() { # XXX: check if binary provided if (-Not (Test-Path $Args[0])) { msg "${Env:UNITTEST_NAME}: SKIP no binary found" exit 0 } } # # match -- execute match # function match { Invoke-Expression "perl ..\..\..\src\test\match $args" if ($Global:LASTEXITCODE -ne 0) { fail "" } } # # check -- check test results (using .match files) # # note: win32 version slightly different since the caller can't as # easily bail when a cmd fails # function check { # ..\match $(find . 
-regex "[^0-9]*${UNITTEST_NUM}\.log\.match" | xargs) $perl = Get-Command -Name perl -ErrorAction SilentlyContinue If ($perl -eq $null) { fail "error: Perl is missing, cannot check test results" } # If errX.log.match does not exist, assume errX.log should be empty $ERR_LOG_LEN=0 if (Test-Path $Env:ERR_LOG_FILE) { $ERR_LOG_LEN = (Get-Item $Env:ERR_LOG_FILE).length } if (-not (Test-Path "${Env:ERR_LOG_FILE}.match") -and ($ERR_LOG_LEN -ne 0)) { Write-Error "unexpected output in ${Env:ERR_LOG_FILE}" dump_last_n_lines $Env:ERR_LOG_FILE fail "" } [string]$listing = Get-ChildItem -File | Where-Object {$_.Name -match "[^0-9]${Env:UNITTEST_NUM}.log.match"} if ($listing) { match $listing } } # # pass -- print message that the test has passed # function pass { if ($Env:TM -eq 1) { $end_time = $script:tm.Elapsed.ToString('ddd\:hh\:mm\:ss\.fff') -Replace "^(000:)","" -Replace "^(00:){1,2}","" $script:tm.reset() } else { sv -Name end_time $null } if ($Env:UNITTEST_LOG_LEVEL -ge "1") { Write-Host -NoNewline ($Env:UNITTEST_NAME + ": ") Write-Host -NoNewline -foregroundcolor green "PASS" if ($end_time) { Write-Host -NoNewline ("`t`t`t" + "[" + $end_time + " s]") } } if (isDir $DIR) { rm -Force -Recurse $DIR } msg "" } # # fail -- print message that the test has failed # function fail { Write-Error $args[0] Write-Host -NoNewline ($Env:UNITTEST_NAME + ": ") Write-Host -NoNewLine -foregroundcolor red "FAILED" throw "${Env:UNITTEST_NAME}: FAILED" } # # remove_files - removes list of files included in variable # function remove_files { for ($i=0;$i -lt $args.count;$i++) { $arr = $args[$i] -split ' ' ForEach ($file In $arr) { Remove-Item $file -Force -ea si } } } # # check_file -- check if file exists and print error message if not # function check_file { sv -Name fname $Args[0] if (-Not (Test-Path $fname)) { fail "error: Missing File: $fname" } } # # check_files -- check if files exist and print error message if not # function check_files { for ($i=0;$i -lt $args.count;$i++) { check_file $args[$i] } } # # check_no_file -- check if file has been deleted and print error message if not # function check_no_file { sv -Name fname $Args[0] if (Test-Path $fname) { fail "error: Not deleted file: $fname" } } # # check_no_files -- check if files has been deleted and print error message if not # function check_no_files { for ($i=0;$i -lt $args.count;$i++) { check_no_file $args[$i] } } # # get_size -- return size of file # function get_size { if (Test-Path $args[0]) { return (Get-Item $args[0]).length } } # # set_file_mode - set access mode to one or multiple files # parameters: # arg0 - access mode you want to change # arg1 - true or false to admit or deny given mode # # example: # set_file_mode IsReadOnly $true file1 file2 # function set_file_mode { $mode = $args[0] $flag = $args[1] for ($i=2;$i -lt $args.count;$i++) { Set-ItemProperty $args[$i] -name $mode -value $flag } } # # get_mode -- return mode of file # function get_mode { if (Test-Path $args[0]) { return (Get-Item $args[0]).mode } } # # check_size -- validate file size # function check_size { sv -Name size -Scope "Local" $args[0] sv -Name file -Scope "Local" $args[1] sv -Name file_size -Scope "Local" (get_size $file) if ($file_size -ne $size) { fail "error: wrong size $file_size != $size" } } # # check_mode -- validate file mode # function check_mode { sv -Name mode -Scope "Local" $args[0] sv -Name file -Scope "Local" $args[1] $mode = [math]::floor($mode / 100) # get first digit (user/owner permission) $read_only = (gp $file IsReadOnly).IsReadOnly if ($mode -band 
2) { if ($read_only -eq $true) { fail "error: wrong file mode" } else { return } } if ($read_only -eq $false) { fail "error: wrong file mode" } else { return } } # # check_signature -- check if file contains specified signature # function check_signature { sv -Name sig -Scope "Local" $args[0] sv -Name file -Scope "Local" ($args[1]) sv -Name file_sig -Scope "Local" "" $stream = [System.IO.File]::OpenRead($file) $buff = New-Object Byte[] $SIG_LEN # you must assign return value otherwise PS will print it to stdout $num = $stream.Read($buff, 0, $SIG_LEN) $file_sig = [System.Text.Encoding]::Ascii.GetString($buff) $stream.Close() if ($file_sig -ne $sig) { fail "error: $file signature doesn't match $file_sig != $sig" } } # # check_signatures -- check if multiple files contain specified signature # function check_signatures { for ($i=1;$i -lt $args.count;$i+=1) { check_signature $args[0] $args[$i] } } # # check_layout -- check if pmemobj pool contains specified layout # function check_layout { sv -Name layout -Scope "Local" $args[0] sv -Name file -Scope "Local" ($args[1]) $stream = [System.IO.File]::OpenRead($file) $stream.Position = $LAYOUT_OFFSET $buff = New-Object Byte[] $LAYOUT_LEN # you must assign return value otherwise PS will print it to stdout $num = $stream.Read($buff, 0, $LAYOUT_LEN) $enc = [System.Text.Encoding]::UTF8.GetString($buff) $stream.Close() if ($enc -ne $layout) { fail "error: layout doesn't match $enc != $layout" } } # # check_arena -- check if file contains specified arena signature # function check_arena { sv -Name file -Scope "Local" ($args[0]) $stream = [System.IO.File]::OpenRead($file) $stream.Position = $ARENA_OFF $buff = New-Object Byte[] $ARENA_SIG_LEN # you must assign return value otherwise PS will print it to stdout $num = $stream.Read($buff, 0, $ARENA_SIG_LEN) $enc = [System.Text.Encoding]::ASCII.GetString($buff) $stream.Close() if ($enc -ne $ARENA_SIG) { fail "error: can't find arena signature" } } # # dump_pool_info -- dump selected pool metadata and/or user data # function dump_pool_info { $params = "" for ($i=0;$i -lt $args.count;$i++) { [string]$params += -join($args[$i], " ") } # ignore selected header fields that differ by definition # this is equivalent of: 'sed -e "/^UUID/,/^Checksum/d"' $print = $True Invoke-Expression "$PMEMPOOL info $params" | % { If ($_ -match '^UUID') { $print = $False } If ($print -eq $True) { $_ } If ($_ -match '^Checksum') { $print = $True } } } # # dump_replica_info -- dump selected pool metadata and/or user data # # Used by compare_replicas() - filters out file paths and sizes. # function dump_replica_info { $params = "" for ($i=0;$i -lt $args.count;$i++) { [string]$params += -join($args[$i], " ") } # ignore selected header fields that differ by definition # this is equivalent of: 'sed -e "/^UUID/,/^Checksum/d"' $print = $True Invoke-Expression "$PMEMPOOL info $params" | % { If ($_ -match '^UUID') { $print = $False } If ($print -eq $True) { # 'sed -e "/^path/d" -e "/^size/d" If (-not ($_ -match '^path' -or $_ -match '^size')) { $_ } } If ($_ -match '^Checksum') { $print = $True } } } # # compare_replicas -- check replicas consistency by comparing `pmempool info` output # function compare_replicas { $count = $args foreach ($param in $Args[0 .. 
($Args.Count - 3)]) { if ($param -is [array]) { foreach ($param_entry in $param) { [string]$params += -join(" '", $param_entry, "' ") } } else { [string]$params += -join($param, " ") } } $rep1 = $args[$cnt + 1] $rep2 = $args[$cnt + 2] diff (dump_replica_info $params $rep1) (dump_replica_info $params $rep2) } # # require_dax_devices -- only allow script to continue for a dax device # function require_dax_devices() { # XXX: no device dax on Windows msg "${Env:UNITTEST_NAME}: SKIP DEVICE_DAX_PATH does not specify enough dax devices" exit 0 } function dax_device_zero() { # XXX: no device dax on Windows } # # require_no_unicode -- require $DIR w/o non-ASCII characters # function require_no_unicode { $Env:SUFFIX = "" $u = [System.Text.Encoding]::UNICODE [string]$DIR_ASCII = [System.Text.Encoding]::Convert([System.Text.Encoding]::UNICODE, [System.Text.Encoding]::ASCII, $u.getbytes($DIR)) [string]$DIR_UTF8 = [System.Text.Encoding]::Convert([System.Text.Encoding]::UNICODE, [System.Text.Encoding]::UTF8, $u.getbytes($DIR)) if ($DIR_UTF8 -ne $DIR_ASCII) { msg "${Env:UNITTEST_NAME}: SKIP required: test directory path without non-ASCII characters" exit 0 } } # # require_short_path -- require $DIR length less than 256 characters # function require_short_path { $Env:DIRSUFFIX = "" if ($DIR.Length -ge 256) { msg "${Env:UNITTEST_NAME}: SKIP required: test directory path below 256 characters" exit 0 } } # # get_files -- returns all files in cwd with given pattern # function get_files { dir |% {$_.Name} | select-string -Pattern $args[0] } # # setup -- print message that test setup is commencing # function setup { $curtestdir = (Get-Item -Path ".\").BaseName # just in case if (-Not $curtestdir) { fatal "curtestdir does not exist" } $curtestdir = "test_" + $curtestdir $Script:DIR = $DIR + "\" + $Env:DIRSUFFIX + "\" + $curtestdir + $Env:UNITTEST_NUM + $Env:SUFFIX msg "${Env:UNITTEST_NAME}: SETUP ($Env:TYPE\$Global\$Env:BUILD)" foreach ($f in $(get_files "[a-zA-Z_]*${Env:UNITTEST_NUM}\.log$")) { Remove-Item $f } rm -Force check_pool_${Env:BUILD}_${Env:UNITTEST_NUM}.log -ErrorAction SilentlyContinue if (isDir $DIR) { rm -Force -Recurse $DIR } md -force $DIR > $null # XXX: do it before setup() is invoked # set console encoding to UTF-8 [Console]::OutputEncoding = [System.Text.Encoding]::UTF8 if ($Env:TM -eq "1" ) { $script:tm = [system.diagnostics.stopwatch]::startNew() } $DEBUG_DIR = '..\..\x64\Debug' $RELEASE_DIR = '..\..\x64\Release' if ($Env:BUILD -eq 'nondebug') { if (-Not $Env:PMDK_LIB_PATH_NONDEBUG) { $Env:PMDK_LIB_PATH_NONDEBUG = $RELEASE_DIR + '\libs\' } $Env:Path = $Env:PMDK_LIB_PATH_NONDEBUG + ';' + $Env:Path } elseif ($Env:BUILD -eq 'debug') { if (-Not $Env:PMDK_LIB_PATH_DEBUG) { $Env:PMDK_LIB_PATH_DEBUG = $DEBUG_DIR + '\libs\' } $Env:Path = $Env:PMDK_LIB_PATH_DEBUG + ';' + $Env:Path } $Env:PMEMBLK_CONF="fallocate.at_create=0;" $Env:PMEMOBJ_CONF="fallocate.at_create=0;" $Env:PMEMLOG_CONF="fallocate.at_create=0;" } # # cmp -- compare two files # function cmp { $file1 = $Args[0] $file2 = $Args[1] $argc = $Args.Count if($argc -le 2) { # fc does not support / in file path fc.exe /b ([String]$file1).Replace('/','\') ([string]$file2).Replace('/','\') > $null if ($Global:LASTEXITCODE -ne 0) { "$args differ" } return } $limit = $Args[2] $s1 = Get-Content $file1 -totalcount $limit -encoding byte $s2 = Get-Content $file1 -totalcount $limit -encoding byte if ("$s1" -ne "$s2") { "$args differ" } } ####################################################### ####################################################### if 
(-Not $Env:UNITTEST_NAME) { $CURDIR = (Get-Item -Path ".\").BaseName $SCRIPTNAME = (Get-Item $MyInvocation.ScriptName).BaseName $Env:UNITTEST_NAME = "$CURDIR/$SCRIPTNAME" $Env:UNITTEST_NUM = ($SCRIPTNAME).Replace("TEST", "") } # defaults if (-Not $Env:TYPE) { $Env:TYPE = 'check'} if (-Not $Env:BUILD) { $Env:BUILD = 'debug'} if (-Not $Env:CHECK_POOL) { $Env:CHECK_POOL = '0'} if (-Not $Env:EXESUFFIX) { $Env:EXESUFFIX = ".exe"} if (-Not $Env:SUFFIX) { $Env:SUFFIX = "😕⠧⠍⠑⠍ɗVMEMӜ⥺🙍"} if (-Not $Env:DIRSUFFIX) { $Env:DIRSUFFIX = ""} if ($Env:BUILD -eq 'nondebug') { if (-Not $Env:PMDK_LIB_PATH_NONDEBUG) { $PMEMPOOL = $RELEASE_DIR + "\libs\pmempool$Env:EXESUFFIX" } else { $PMEMPOOL = "$Env:PMDK_LIB_PATH_NONDEBUG\pmempool$Env:EXESUFFIX" } } elseif ($Env:BUILD -eq 'debug') { if (-Not $Env:PMDK_LIB_PATH_DEBUG) { $PMEMPOOL = $DEBUG_DIR + "\libs\pmempool$Env:EXESUFFIX" } else { $PMEMPOOL = "$Env:PMDK_LIB_PATH_DEBUG\pmempool$Env:EXESUFFIX" } } $PMEMSPOIL="$Env:EXE_DIR\pmemspoil$Env:EXESUFFIX" $PMEMWRITE="$Env:EXE_DIR\pmemwrite$Env:EXESUFFIX" $PMEMALLOC="$Env:EXE_DIR\pmemalloc$Env:EXESUFFIX" $PMEMOBJCLI="$Env:EXE_DIR\pmemobjcli$Env:EXESUFFIX" $DDMAP="$Env:EXE_DIR\ddmap$Env:EXESUFFIX" $BTTCREATE="$Env:EXE_DIR\bttcreate$Env:EXESUFFIX" $SPARSEFILE="$Env:EXE_DIR\sparsefile$Env:EXESUFFIX" $DLLVIEW="$Env:EXE_DIR\dllview$Env:EXESUFFIX" $Global:req_fs_type=0 # # The variable DIR is constructed so the test uses that directory when # constructing test files. DIR is chosen based on the fs-type for # this test, and if the appropriate fs-type doesn't have a directory # defined in testconfig.sh, the test is skipped. if (-Not $Env:UNITTEST_NUM) { fatal "UNITTEST_NUM does not have a value" } if (-Not $Env:UNITTEST_NAME) { fatal "UNITTEST_NAME does not have a value" } sv -Name DIR $Env:TEST_DIR # Length of pool file's signature sv -Name SIG_LEN 8 # Offset and length of pmemobj layout sv -Name LAYOUT_OFFSET 4096 sv -Name LAYOUT_LEN 1024 # Length of arena's signature sv -Name ARENA_SIG_LEN 16 # Signature of BTT Arena sv -Name ARENA_SIG "BTT_ARENA_INFO" # Offset to first arena sv -Name ARENA_OFF 8192 # # The default is to turn on library logging to level 3 and save it to local files. # Tests that don't want it on, should override these environment variables. 
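#
# example (illustrative): a TESTn.ps1 script that does not want the default
# vmem logging can lower the level after dot-sourcing unittest.ps1, e.g.:
#   $Env:VMEM_LOG_LEVEL = 0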
# $Env:VMEM_LOG_LEVEL = 3 $Env:VMEM_LOG_FILE = "vmem${Env:UNITTEST_NUM}.log" $Env:PMEM_LOG_LEVEL = 3 $Env:PMEM_LOG_FILE = "pmem${Env:UNITTEST_NUM}.log" $Env:PMEMBLK_LOG_LEVEL=3 $Env:PMEMBLK_LOG_FILE = "pmemblk${Env:UNITTEST_NUM}.log" $Env:PMEMLOG_LOG_LEVEL = 3 $Env:PMEMLOG_LOG_FILE = "pmemlog${Env:UNITTEST_NUM}.log" $Env:PMEMOBJ_LOG_LEVEL = 3 $Env:PMEMOBJ_LOG_FILE= "pmemobj${Env:UNITTEST_NUM}.log" $Env:PMEMPOOL_LOG_LEVEL = 3 $Env:PMEMPOOL_LOG_FILE= "pmempool${Env:UNITTEST_NUM}.log" $Env:VMMALLOC_POOL_DIR = $DIR $Env:VMMALLOC_POOL_SIZE = $((16 * 1024 * 1024)) $Env:VMMALLOC_LOG_LEVEL = 3 $Env:VMMALLOC_LOG_FILE = "vmmalloc${Env:UNITTEST_NUM}.log" $Env:TRACE_LOG_FILE = "trace${Env:UNITTEST_NUM}.log" $Env:ERR_LOG_FILE = "err${Env:UNITTEST_NUM}.log" $Env:OUT_LOG_FILE = "out${Env:UNITTEST_NUM}.log" $Env:PREP_LOG_FILE = "prep${Env:UNITTEST_NUM}.log" if (-Not($UT_DUMP_LINES)) { sv -Name "UT_DUMP_LINES" 30 } $Env:CHECK_POOL_LOG_FILE = "check_pool_${Env:BUILD}_${Env:UNITTEST_NUM}.log" # # enable_log_append -- turn on appending to the log files rather than truncating them # It also removes all log files created by tests: out*.log, err*.log and trace*.log # function enable_log_append() { rm -Force -ErrorAction SilentlyContinue $Env:OUT_LOG_FILE rm -Force -ErrorAction SilentlyContinue $Env:ERR_LOG_FILE rm -Force -ErrorAction SilentlyContinue $Env:TRACE_LOG_FILE rm -Force -ErrorAction SilentlyContinue $Env:PREP_LOG_FILE $Env:UNITTEST_LOG_APPEND=1 } # # require_free_space -- check if there is enough free space to run the test # Example, checking if there is 1 GB of free space on disk: # require_free_space 1G # function require_free_space() { $req_free_space = (convert_to_bytes $args[0]) # actually require 5% or 8MB (whichever is higher) more, just in case # file system requires some space for its meta data $pct = 5 * $req_free_space / 100 $abs = (convert_to_bytes 8M) if ($pct -gt $abs) { $req_free_space = $req_free_space + $pct } else { $req_free_space = $req_free_space + $abs } $path = $DIR -replace '\\\\\?\\', '' $device_name = (Get-Item $path).PSDrive.Root $filter = "Name='$($device_name -replace '\\', '\\')'" $free_space = (gwmi Win32_Volume -Filter $filter | select FreeSpace).freespace if ([INT64]$free_space -lt [INT64]$req_free_space) { msg "${Env:UNITTEST_NAME}: SKIP not enough free space ($args required)" exit 0 } } # # require_free_physical_memory -- check if there is enough free physical memory # space to run the test # Example, checking if there is 1 GB of free physical memory space: # require_free_physical_memory 1G # function require_free_physical_memory() { $req_free_physical_memory = (convert_to_bytes $args[0]) $free_physical_memory = (Get-CimInstance Win32_OperatingSystem | Select-Object -ExpandProperty FreePhysicalMemory) * 1024 if ($free_physical_memory -lt $req_free_physical_memory) { msg "${Env:UNITTEST_NAME}: SKIP not enough free physical memory ($args required, free: $free_physical_memory B)" exit 0 } } # # require_automatic_managed_pagefile -- check if system manages the page file size # function require_automatic_managed_pagefile() { $c = Get-WmiObject Win32_computersystem -EnableAllPrivileges if($c.AutomaticManagedPagefile -eq $false) { msg "${Env:UNITTEST_NAME}: SKIP automatic page file management is disabled" exit 0 } } vmem-1.8/src/test/unittest/unittest.sh000066400000000000000000001367511361505074100202050ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. 
# # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # set -e # make sure we have a well defined locale for string operations here export LC_ALL="C" #export LC_ALL="en_US.UTF-8" . ../testconfig.sh if [ -t 1 ]; then IS_TERMINAL_STDOUT=YES fi if [ -t 2 ]; then IS_TERMINAL_STDERR=YES fi function is_terminal() { local fd fd=$1 case $(eval "echo \${IS_TERMINAL_${fd}}") in YES) : ;; *) false ;; esac } function interactive_color() { local color fd color=$1 fd=$2 shift 2 if is_terminal ${fd} && command -v tput >/dev/null; then echo "$(tput setaf $color || :)$*$(tput sgr0 || :)" else echo "$*" fi } function interactive_red() { interactive_color 1 "$@" } function interactive_green() { interactive_color 2 "$@" } function verbose_msg() { if [ "$UNITTEST_LOG_LEVEL" -ge 2 ]; then echo "$*" fi } function msg() { if [ "$UNITTEST_LOG_LEVEL" -ge 1 ]; then echo "$*" fi } function fatal() { echo "$*" >&2 exit 1 } if [ -z "${UNITTEST_NAME}" ]; then CURDIR=$(basename $(pwd)) SCRIPTNAME=$(basename $0) export UNITTEST_NAME=$CURDIR/$SCRIPTNAME export UNITTEST_NUM=$(echo $SCRIPTNAME | sed "s/TEST//") fi # defaults [ "$UNITTEST_LOG_LEVEL" ] || UNITTEST_LOG_LEVEL=2 [ "$GREP" ] || GREP="grep -a" [ "$TEST" ] || TEST=check [ "$BUILD" ] || BUILD=debug [ "$CHECK_TYPE" ] || CHECK_TYPE=auto [ "$CHECK_POOL" ] || CHECK_POOL=0 [ "$VERBOSE" ] || VERBOSE=0 [ -n "${SUFFIX+x}" ] || SUFFIX="😕⠧⠍⠑⠍ɗVMEMӜ⥺🙍" export UNITTEST_LOG_LEVEL GREP TEST FS BUILD CHECK_TYPE CHECK_POOL VERBOSE SUFFIX VMMALLOC=libvmmalloc.so.1 TOOLS=../tools LIB_TOOLS="../../tools" # Paths to some useful tools [ "$FALLOCATE_DETECT" ] || FALLOCATE_DETECT=$TOOLS/fallocate_detect/fallocate_detect.static-nondebug # force globs to fail if they don't match shopt -s failglob # number of remote nodes required in the current unit test NODES_MAX=-1 # sizes of aligments SIZE_4KB=4096 SIZE_2MB=2097152 # PMEMOBJ limitations PMEMOBJ_MAX_ALLOC_SIZE=17177771968 # SSH and SCP options SSH_OPTS="-o BatchMode=yes" SCP_OPTS="-o BatchMode=yes -r -p" # list of common files to be copied to all remote nodes DIR_SRC="../.." 
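# (paths here are resolved from the per-test working directory, i.e.
# src/test/<testname>, so DIR_SRC points at the root of the src/ tree)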
FILES_COMMON_DIR="\ $DIR_SRC/test/*.supp \ $DIR_SRC/tools/pmempool/pmempool \ $DIR_SRC/test/tools/extents/extents \ $DIR_SRC/test/tools/obj_verify/obj_verify \ $DIR_SRC/test/tools/ctrld/ctrld \ $DIR_SRC/test/tools/fip/fip" # Portability VALGRIND_SUPP="--suppressions=../ld.supp \ --suppressions=../memcheck-libunwind.supp" if [ "$(uname -s)" = "FreeBSD" ]; then DATE="gdate" DD="gdd" FALLOCATE="mkfile" VM_OVERCOMMIT="[ $(sysctl vm.overcommit | awk '{print $2}') == 0 ]" RM_ONEFS="-x" STAT_MODE="-f%Lp" STAT_PERM="-f%Sp" STAT_SIZE="-f%z" STRACE="truss" VALGRIND_SUPP="$VALGRIND_SUPP --suppressions=../freebsd.supp" else DATE="date" DD="dd" FALLOCATE="fallocate -l" VM_OVERCOMMIT="[ $(cat /proc/sys/vm/overcommit_memory) != 2 ]" RM_ONEFS="--one-file-system" STAT_MODE="-c%a" STAT_PERM="-c%A" STAT_SIZE="-c%s" STRACE="strace" fi # array of lists of PID files to be cleaned in case of an error NODE_PID_FILES[0]="" case "$BUILD" in debug|static-debug) if [ -z "$PMDK_LIB_PATH_DEBUG" ]; then PMDK_LIB_PATH=../../debug REMOTE_PMDK_LIB_PATH=../debug else PMDK_LIB_PATH=$PMDK_LIB_PATH_DEBUG REMOTE_PMDK_LIB_PATH=$PMDK_LIB_PATH_DEBUG fi ;; nondebug|static-nondebug) if [ -z "$PMDK_LIB_PATH_NONDEBUG" ]; then PMDK_LIB_PATH=../../nondebug REMOTE_PMDK_LIB_PATH=../nondebug else PMDK_LIB_PATH=$PMDK_LIB_PATH_NONDEBUG REMOTE_PMDK_LIB_PATH=$PMDK_LIB_PATH_NONDEBUG fi ;; esac export LD_LIBRARY_PATH=$PMDK_LIB_PATH:$GLOBAL_LIB_PATH:$LD_LIBRARY_PATH export REMOTE_LD_LIBRARY_PATH=$REMOTE_PMDK_LIB_PATH:$GLOBAL_LIB_PATH:\$LD_LIBRARY_PATH # # When running static binary tests, append the build type to the binary # case "$BUILD" in static-*) EXESUFFIX=.$BUILD ;; esac # # The variable DIR is constructed so the test uses that directory when # constructing test files. DIR is chosen based on the fs-type for # this test, and if the appropriate fs-type doesn't have a directory # defined in testconfig.sh, the test is skipped. # # This behavior can be overridden by setting DIR. For example: # DIR=/force/test/dir ./TEST0 # curtestdir=`basename $PWD` # just in case if [ ! "$curtestdir" ]; then fatal "curtestdir does not have a value" fi curtestdir=test_$curtestdir if [ ! "$UNITTEST_NUM" ]; then fatal "UNITTEST_NUM does not have a value" fi if [ ! "$UNITTEST_NAME" ]; then fatal "UNITTEST_NAME does not have a value" fi if [ "$DIR" ]; then DIR=$DIR/$curtestdir$UNITTEST_NUM else # if a variable is set - it must point to a valid directory if [ "$TEST_DIR" == "" ]; then fatal "$UNITTEST_NAME: TEST_DIR is not set" fi DIR=$TEST_DIR/$DIRSUFFIX/$curtestdir$UNITTEST_NUM fi # # The default is to turn on library logging to level 3 and save it to local files. # Tests that don't want it on, should override these environment variables. 
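#
# example (illustrative): a TEST script that does not want the default logging
# can override the level after sourcing unittest/unittest.sh, e.g.:
#   export VMEM_LOG_LEVEL=0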
# export VMEM_LOG_LEVEL=3 export VMEM_LOG_FILE=vmem$UNITTEST_NUM.log export VMMALLOC_POOL_SIZE=$((16 * 1024 * 1024)) export VMMALLOC_LOG_LEVEL=3 export VMMALLOC_LOG_FILE=vmmalloc$UNITTEST_NUM.log export OUT_LOG_FILE=out$UNITTEST_NUM.log export ERR_LOG_FILE=err$UNITTEST_NUM.log export TRACE_LOG_FILE=trace$UNITTEST_NUM.log export PREP_LOG_FILE=prep$UNITTEST_NUM.log export VALGRIND_LOG_FILE=${CHECK_TYPE}${UNITTEST_NUM}.log export VALIDATE_VALGRIND_LOG=1 [ "$UT_DUMP_LINES" ] || UT_DUMP_LINES=30 export CHECK_POOL_LOG_FILE=check_pool_${BUILD}_${UNITTEST_NUM}.log # In case a lock is required for Device DAXes DEVDAX_LOCK=../devdax.lock # # store_exit_on_error -- store on a stack a sign that reflects the current state # of the 'errexit' shell option # function store_exit_on_error() { if [ "${-#*e}" != "$-" ]; then estack+=- else estack+=+ fi } # # restore_exit_on_error -- restore the state of the 'errexit' shell option # function restore_exit_on_error() { if [ -z $estack ]; then fatal "error: store_exit_on_error function has to be called first" fi eval "set ${estack:${#estack}-1:1}e" estack=${estack%?} } # # disable_exit_on_error -- store the state of the 'errexit' shell option and # disable it # function disable_exit_on_error() { store_exit_on_error set +e } # # get_files -- print list of files in the current directory matching the given regex to stdout # # This function has been implemented to workaround a race condition in # `find`, which fails if any file disappears in the middle of the operation. # # example, to list all *.log files in the current directory # get_files ".*\.log" function get_files() { disable_exit_on_error ls -1 | grep -E "^$*$" restore_exit_on_error } # # get_executables -- print list of executable files in the current directory to stdout # # This function has been implemented to workaround a race condition in # `find`, which fails if any file disappears in the middle of the operation. 
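#
# example (illustrative), to capture the names of all executables built in
# the current test directory:
#   bins=$(get_executables)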
# function get_executables() { disable_exit_on_error for c in * do if [ -f $c -a -x $c ] then echo "$c" fi done restore_exit_on_error } # # convert_to_bytes -- converts the string with K, M, G or T suffixes # to bytes # # example: # "1G" --> "1073741824" # "2T" --> "2199023255552" # "3k" --> "3072" # "1K" --> "1024" # "10" --> "10" # function convert_to_bytes() { size="$(echo $1 | tr '[:upper:]' '[:lower:]')" if [[ $size == *kib ]] then size=$(($(echo $size | tr -d 'kib') * 1024)) elif [[ $size == *mib ]] then size=$(($(echo $size | tr -d 'mib') * 1024 * 1024)) elif [[ $size == *gib ]] then size=$(($(echo $size | tr -d 'gib') * 1024 * 1024 * 1024)) elif [[ $size == *tib ]] then size=$(($(echo $size | tr -d 'tib') * 1024 * 1024 * 1024 * 1024)) elif [[ $size == *pib ]] then size=$(($(echo $size | tr -d 'pib') * 1024 * 1024 * 1024 * 1024 * 1024)) elif [[ $size == *kb ]] then size=$(($(echo $size | tr -d 'kb') * 1000)) elif [[ $size == *mb ]] then size=$(($(echo $size | tr -d 'mb') * 1000 * 1000)) elif [[ $size == *gb ]] then size=$(($(echo $size | tr -d 'gb') * 1000 * 1000 * 1000)) elif [[ $size == *tb ]] then size=$(($(echo $size | tr -d 'tb') * 1000 * 1000 * 1000 * 1000)) elif [[ $size == *pb ]] then size=$(($(echo $size | tr -d 'pb') * 1000 * 1000 * 1000 * 1000 * 1000)) elif [[ $size == *b ]] then size=$(($(echo $size | tr -d 'b'))) elif [[ $size == *k ]] then size=$(($(echo $size | tr -d 'k') * 1024)) elif [[ $size == *m ]] then size=$(($(echo $size | tr -d 'm') * 1024 * 1024)) elif [[ $size == *g ]] then size=$(($(echo $size | tr -d 'g') * 1024 * 1024 * 1024)) elif [[ $size == *t ]] then size=$(($(echo $size | tr -d 't') * 1024 * 1024 * 1024 * 1024)) elif [[ $size == *p ]] then size=$(($(echo $size | tr -d 'p') * 1024 * 1024 * 1024 * 1024 * 1024)) fi echo "$size" } # # create_file -- create zeroed out files of a given length # # example, to create two files, each 1GB in size: # create_file 1G testfile1 testfile2 # function create_file() { size=$(convert_to_bytes $1) shift for file in $* do $DD if=/dev/zero of=$file bs=1M count=$size iflag=count_bytes status=none >> $PREP_LOG_FILE done } # # create_nonzeroed_file -- create non-zeroed files of a given length # # A given first kilobytes of the file is zeroed out. # # example, to create two files, each 1GB in size, with first 4K zeroed # create_nonzeroed_file 1G 4K testfile1 testfile2 # function create_nonzeroed_file() { offset=$(convert_to_bytes $2) size=$(($(convert_to_bytes $1) - $offset)) shift 2 for file in $* do truncate -s ${offset} $file >> $PREP_LOG_FILE $DD if=/dev/zero bs=1K count=${size} iflag=count_bytes 2>>$PREP_LOG_FILE | tr '\0' '\132' >> $file done } # # create_holey_file -- create holey files of a given length # # examples: # create_holey_file 1024k testfile1 testfile2 # create_holey_file 2048M testfile1 testfile2 # create_holey_file 234 testfile1 # create_holey_file 2340b testfile1 # # Input unit size is in bytes with optional suffixes like k, KB, M, etc. # function create_holey_file() { size=$(convert_to_bytes $1) shift for file in $* do truncate -s ${size} $file >> $PREP_LOG_FILE done } # # create_poolset -- create a dummy pool set # # Creates a pool set file using the provided list of part sizes and paths. # Optionally, it also creates the selected part files (zeroed, partially zeroed # or non-zeroed) with requested size and mode. The actual file size may be # different than the part size in the pool set file. 
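# (for instance, the example below declares a 48M part backed by an 11M
# read-only file, which lets a test exercise error paths)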
# 'r' or 'R' on the list of arguments indicate the beginning of the next # replica set and 'm' or 'M' the beginning of the next remote replica set. # 'o' or 'O' indicates the next argument is a pool set option. # A remote replica requires two parameters: a target node and a pool set # descriptor. # # Each part argument has the following format: # psize:ppath[:cmd[:fsize[:mode]]] # # where: # psize - part size or AUTO (only for DAX device) # ppath - path # cmd - (optional) can be: # x - do nothing (may be skipped if there's no 'fsize', 'mode') # z - create zeroed (holey) file # n - create non-zeroed file # h - create non-zeroed file, but with zeroed header (first 4KB) # d - create directory # fsize - (optional) the actual size of the part file (if 'cmd' is not 'x') # mode - (optional) same format as for 'chmod' command # # Each remote replica argument has the following format: # node:desc # # where: # node - target node # desc - pool set descriptor # # example: # The following command define a pool set consisting of two parts: 16MB # and 32MB, a local replica with only one part of 48MB and a remote replica. # The first part file is not created, the second is zeroed. The only replica # part is non-zeroed. Also, the last file is read-only and its size # does not match the information from pool set file. The last but one line # describes a remote replica. The SINGLEHDR poolset option is set, so only # the first part in each replica contains a pool header. The remote poolset # also has to have the SINGLEHDR option. # # create_poolset ./pool.set 16M:testfile1 32M:testfile2:z \ # R 48M:testfile3:n:11M:0400 \ # M remote_node:remote_pool.set \ # O SINGLEHDR # function create_poolset() { psfile=$1 shift 1 echo "PMEMPOOLSET" > $psfile while [ "$1" ] do if [ "$1" = "M" ] || [ "$1" = "m" ] # remote replica then shift 1 cmd=$1 shift 1 # extract last ":" separated segment as descriptor # extract everything before last ":" as node address # this extraction method is compatible with IPv6 and IPv4 node=${cmd%:*} desc=${cmd##*:} echo "REPLICA $node $desc" >> $psfile continue fi if [ "$1" = "R" ] || [ "$1" = "r" ] then echo "REPLICA" >> $psfile shift 1 continue fi if [ "$1" = "O" ] || [ "$1" = "o" ] then echo "OPTION $2" >> $psfile shift 2 continue fi cmd=$1 fparms=(${cmd//:/ }) shift 1 fsize=${fparms[0]} fpath=${fparms[1]} cmd=${fparms[2]} asize=${fparms[3]} mode=${fparms[4]} if [ ! $asize ]; then asize=$fsize fi if [ "$asize" != "AUTO" ]; then asize=$(convert_to_bytes $asize) fi case "$cmd" in x) # do nothing ;; z) # zeroed (holey) file truncate -s $asize $fpath >> $PREP_LOG_FILE ;; n) # non-zeroed file $DD if=/dev/zero bs=$asize count=1 2>>$PREP_LOG_FILE | tr '\0' '\132' >> $fpath ;; h) # non-zeroed file, except 4K header truncate -s 4K $fpath >> prep$UNITTEST_NUM.log $DD if=/dev/zero bs=$asize count=1 2>>$PREP_LOG_FILE | tr '\0' '\132' >> $fpath truncate -s $asize $fpath >> $PREP_LOG_FILE ;; d) mkdir -p $fpath ;; esac if [ $mode ]; then chmod $mode $fpath fi echo "$fsize $fpath" >> $psfile done } function dump_last_n_lines() { if [ "$1" != "" -a -f "$1" ]; then ln=`wc -l < $1` if [ $ln -gt $UT_DUMP_LINES ]; then echo -e "Last $UT_DUMP_LINES lines of $1 below (whole file has $ln lines)." >&2 ln=$UT_DUMP_LINES else echo -e "$1 below." 
>&2 fi paste -d " " <(yes $UNITTEST_NAME $1 | head -n $ln) <(tail -n $ln $1) >&2 echo >&2 fi } # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810295 # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=780173 # https://bugs.kde.org/show_bug.cgi?id=303877 # # valgrind issues an unsuppressable warning when exceeding # the brk segment, causing matching failures. We can safely # ignore it because malloc() will fallback to mmap() anyway. function valgrind_ignore_warnings() { cat $1 | grep -v \ -e "WARNING: Serious error when reading debug info" \ -e "When reading debug info from " \ -e "Ignoring non-Dwarf2/3/4 block in .debug_info" \ -e "Last block truncated in .debug_info; ignoring" \ -e "parse_CU_Header: is neither DWARF2 nor DWARF3 nor DWARF4" \ -e "brk segment overflow" \ -e "see section Limitations in user manual" \ -e "Warning: set address range perms: large range"\ -e "further instances of this message will not be shown"\ -e "get_Form_contents: DW_FORM_GNU_strp_alt used, but no alternate .debug_str"\ > $1.tmp mv $1.tmp $1 } # # valgrind_ignore_messages -- cuts off Valgrind messages that are irrelevant # to the correctness of the test, but changes during Valgrind rebase # usage: valgrind_ignore_messages # function valgrind_ignore_messages() { if [ -e "$1.match" ]; then cat $1 | grep -v \ -e "For lists of detected and suppressed errors, rerun with: -s" \ -e "For counts of detected and suppressed errors, rerun with: -v" \ > $1.tmp mv $1.tmp $1 fi } # # get_trace -- return tracing tool command line if applicable # usage: get_trace [] # function get_trace() { if [ "$1" == "none" ]; then echo "$TRACE" return fi local exe=$VALGRINDEXE local check_type=$1 local log_file=$2 local opts="$VALGRIND_OPTS" local node=-1 [ "$#" -eq 3 ] && node=$3 if [ "$check_type" = "memcheck" -a "$MEMCHECK_DONT_CHECK_LEAKS" != "1" ]; then opts="$opts --leak-check=full" fi if [ "$check_type" = "pmemcheck" ]; then # Before Skylake, Intel CPUs did not have clflushopt instruction, so # pmem_flush and pmem_persist both translated to clflush. # This means that missing pmem_drain after pmem_flush could only be # detected on Skylake+ CPUs. # This option tells pmemcheck to expect fence (sfence or # VALGRIND_PMC_DO_FENCE client request, used by pmem_drain) after # clflush and makes pmemcheck output the same on pre-Skylake and # post-Skylake CPUs. opts="$opts --expect-fence-after-clflush=yes" fi opts="$opts $VALGRIND_SUPP" if [ "$node" -ne -1 ]; then exe=${NODE_VALGRINDEXE[$node]} opts="$opts" case "$check_type" in memcheck) opts="$opts --suppressions=../memcheck-libibverbs.supp" ;; helgrind) opts="$opts --suppressions=../helgrind-cxgb4.supp" ;; drd) ;; esac fi echo "$exe --tool=$check_type --log-file=$log_file $opts $TRACE" return } # # validate_valgrind_log -- validate valgrind log # usage: validate_valgrind_log # function validate_valgrind_log() { [ "$VALIDATE_VALGRIND_LOG" != "1" ] && return # fail if there are valgrind errors found or # if it detects overlapping chunks if [ ! -e "$1.match" ] && grep \ -e "ERROR SUMMARY: [^0]" \ -e "Bad mempool" \ $1 >/dev/null ; then msg=$(interactive_red STDERR "failed") echo -e "$UNITTEST_NAME $msg with Valgrind. See $1. Last 20 lines below." 
>&2 paste -d " " <(yes $UNITTEST_NAME $1 | head -n 20) <(tail -n 20 $1) >&2 false fi } # # expect_normal_exit -- run a given command, expect it to exit 0 # # if VALGRIND_DISABLED is not empty valgrind tool will be omitted # function expect_normal_exit() { local VALGRIND_LOG_FILE=${CHECK_TYPE}${UNITTEST_NUM}.log local N=$2 # in case of a remote execution disable valgrind check if valgrind is not # enabled on node local _CHECK_TYPE=$CHECK_TYPE if [ "x$VALGRIND_DISABLED" != "x" ]; then _CHECK_TYPE=none fi if [ "$1" == "run_on_node" -o "$1" == "run_on_node_background" ]; then if [ -z $(is_valgrind_enabled_on_node $N) ]; then _CHECK_TYPE="none" fi else N=-1 fi if [ -n "$TRACE" ]; then case "$1" in *_on_node*) msg "$UNITTEST_NAME: SKIP: TRACE is not supported if test is executed on remote nodes" exit 0 esac fi local trace=$(get_trace $_CHECK_TYPE $VALGRIND_LOG_FILE $N) if [ "$MEMCHECK_DONT_CHECK_LEAKS" = "1" -a "$CHECK_TYPE" = "memcheck" ]; then export OLD_ASAN_OPTIONS="${ASAN_OPTIONS}" export ASAN_OPTIONS="detect_leaks=0 ${ASAN_OPTIONS}" fi if [ "$CHECK_TYPE" = "helgrind" ]; then export VALGRIND_OPTS="--suppressions=../helgrind-log.supp" fi if [ "$CHECK_TYPE" = "memcheck" ]; then export VALGRIND_OPTS="$VALGRIND_OPTS --suppressions=../memcheck-dlopen.supp" fi # in case of preloading libvmmalloc.so.1 force valgrind to not override malloc if [ -n "$VALGRINDEXE" -a -n "$TEST_LD_PRELOAD" ]; then if [ $(valgrind_version) -ge 312 ]; then preload=`basename $TEST_LD_PRELOAD` fi if [ "$preload" == "$VMMALLOC" ]; then export VALGRIND_OPTS="$VALGRIND_OPTS --soname-synonyms=somalloc=nouserintercepts" fi fi local REMOTE_VALGRIND_LOG=0 if [ "$CHECK_TYPE" != "none" ]; then case "$1" in run_on_node) REMOTE_VALGRIND_LOG=1 trace="$1 $2 $trace" [ $# -ge 2 ] && shift 2 || shift $# ;; run_on_node_background) trace="$1 $2 $3 $trace" [ $# -ge 3 ] && shift 3 || shift $# ;; wait_on_node|wait_on_node_port|kill_on_node) [ "$1" = "wait_on_node" ] && REMOTE_VALGRIND_LOG=1 trace="$1 $2 $3 $4" [ $# -ge 4 ] && shift 4 || shift $# ;; esac fi if [ "$CHECK_TYPE" = "drd" ]; then export VALGRIND_OPTS="$VALGRIND_OPTS --suppressions=../drd-log.supp" fi disable_exit_on_error eval $ECHO LD_PRELOAD=$TEST_LD_PRELOAD $trace "$*" ret=$? if [ $REMOTE_VALGRIND_LOG -eq 1 ]; then for node in $CHECK_NODES do local new_log_file=node\_$node\_$VALGRIND_LOG_FILE copy_files_from_node $node "." ${NODE_TEST_DIR[$node]}/$VALGRIND_LOG_FILE mv $VALGRIND_LOG_FILE $new_log_file done fi restore_exit_on_error if [ "$ret" -ne "0" ]; then if [ "$ret" -gt "128" ]; then msg="crashed (signal $(($ret - 128)))" else msg="failed with exit code $ret" fi msg=$(interactive_red STDERR $msg) if [ -f $ERR_LOG_FILE ]; then if [ "$UNITTEST_LOG_LEVEL" -ge "1" ]; then echo -e "$UNITTEST_NAME $msg. $ERR_LOG_FILE below." >&2 cat $ERR_LOG_FILE >&2 else echo -e "$UNITTEST_NAME $msg. $ERR_LOG_FILE above." >&2 fi else echo -e "$UNITTEST_NAME $msg." 
>&2 fi # ignore Ctrl-C if [ $ret != 130 ]; then for f in $(get_files ".*[a-zA-Z_]${UNITTEST_NUM}\.log"); do dump_last_n_lines $f done fi [ $NODES_MAX -ge 0 ] && clean_all_remote_nodes false fi if [ "$CHECK_TYPE" != "none" ]; then if [ $REMOTE_VALGRIND_LOG -eq 1 ]; then for node in $CHECK_NODES do local log_file=node\_$node\_$VALGRIND_LOG_FILE valgrind_ignore_warnings $new_log_file valgrind_ignore_messages $new_log_file validate_valgrind_log $new_log_file done else if [ -f $VALGRIND_LOG_FILE ]; then valgrind_ignore_warnings $VALGRIND_LOG_FILE valgrind_ignore_messages $VALGRIND_LOG_FILE validate_valgrind_log $VALGRIND_LOG_FILE fi fi fi if [ "$MEMCHECK_DONT_CHECK_LEAKS" = "1" -a "$CHECK_TYPE" = "memcheck" ]; then export ASAN_OPTIONS="${OLD_ASAN_OPTIONS}" fi } # # expect_abnormal_exit -- run a given command, expect it to exit non-zero # function expect_abnormal_exit() { if [ -n "$TRACE" ]; then case "$1" in *_on_node*) msg "$UNITTEST_NAME: SKIP: TRACE is not supported if test is executed on remote nodes" exit 0 esac fi # in case of preloading libvmmalloc.so.1 force valgrind to not override malloc if [ -n "$VALGRINDEXE" -a -n "$TEST_LD_PRELOAD" ]; then if [ $(valgrind_version) -ge 312 ]; then preload=`basename $TEST_LD_PRELOAD` fi if [ "$preload" == "$VMMALLOC" ]; then export VALGRIND_OPTS="$VALGRIND_OPTS --soname-synonyms=somalloc=nouserintercepts" fi fi if [ "$CHECK_TYPE" = "drd" ]; then export VALGRIND_OPTS="$VALGRIND_OPTS --suppressions=../drd-log.supp" fi disable_exit_on_error eval $ECHO ASAN_OPTIONS="detect_leaks=0 ${ASAN_OPTIONS}" \ LD_PRELOAD=$TEST_LD_PRELOAD $TRACE "$*" ret=$? restore_exit_on_error if [ "$ret" -eq "0" ]; then msg=$(interactive_red STDERR "succeeded") echo -e "$UNITTEST_NAME command $msg unexpectedly." >&2 [ $NODES_MAX -ge 0 ] && clean_all_remote_nodes false fi } # # check_pool -- run pmempool check on specified pool file # function check_pool() { if [ "$CHECK_POOL" == "1" ] then if [ "$VERBOSE" != "0" ] then echo "$UNITTEST_NAME: checking consistency of pool ${1}" fi ${PMEMPOOL}.static-nondebug check $1 2>&1 1>>$CHECK_POOL_LOG_FILE fi } # # check_pools -- run pmempool check on specified pool files # function check_pools() { if [ "$CHECK_POOL" == "1" ] then for f in $* do check_pool $f done fi } # # require_unlimited_vm -- require unlimited virtual memory # # This implies requirements for: # - overcommit_memory enabled (/proc/sys/vm/overcommit_memory is 0 or 1) # - unlimited virtual memory (ulimit -v is unlimited) # function require_unlimited_vm() { $VM_OVERCOMMIT && [ $(ulimit -v) = "unlimited" ] && return msg "$UNITTEST_NAME: SKIP required: overcommit_memory enabled and unlimited virtual memory" exit 0 } # # require_no_superuser -- require user without superuser rights # function require_no_superuser() { local user_id=$(id -u) [ "$user_id" != "0" ] && return msg "$UNITTEST_NAME: SKIP required: run without superuser rights" exit 0 } # # require_no_freebsd -- Skip test on FreeBSD # function require_no_freebsd() { [ "$(uname -s)" != "FreeBSD" ] && return msg "$UNITTEST_NAME: SKIP: Not supported on FreeBSD" exit 0 } # # require_procfs -- Skip test if /proc is not mounted # function require_procfs() { mount | grep -q "/proc" && return msg "$UNITTEST_NAME: SKIP: /proc not mounted" exit 0 } function get_arch() { gcc -dumpmachine | awk -F'[/-]' '{print $1}' } function require_x86_64() { [ $(get_arch) = "x86_64" ] && return msg "$UNITTEST_NAME: SKIP: Not supported on arch != x86_64" exit 0 } # # dax_device_zero -- zero all local dax devices # dax_device_zero() { for path in 
${DEVICE_DAX_PATH[@]} do daxio -z -b no -o "$path" done } # # require_dev_dax -- check if given dev dax is indeed that (has valid size) # function require_dev_dax() { local prefix="$UNITTEST_NAME: SKIP" for path in ${DEVICE_DAX_PATH[@]} do disable_exit_on_error out=$(get_devdax_size $path) ret=$? restore_exit_on_error if [ "$ret" == "0" ]; then continue elif [ "$ret" == "1" ]; then msg "$prefix $out" exit 0 else fatal "$UNITTEST_NAME: get_devdax_size: $out" fi done DEVDAX_TO_LOCK=1 } # # lock_devdax -- acquire a lock on Device DAXes # lock_devdax() { exec {DEVDAX_LOCK_FD}> $DEVDAX_LOCK flock $DEVDAX_LOCK_FD } # # unlock_devdax -- release a lock on Device DAXes # unlock_devdax() { flock -u $DEVDAX_LOCK_FD eval "exec ${DEVDAX_LOCK_FD}>&-" } # # require_dax_device -- only allow script to continue if there is at least # one Device DAX devices # function require_dax_device() { require_dev_dax ${DEVICE_DAX_PATH[0]} } # # require_no_unicode -- overwrite unicode suffix to empty string # function require_no_unicode() { export SUFFIX="" } # # get_devdax_size -- get the size of a device dax # function get_devdax_size() { local path=$1 local major_hex=$(stat -c "%t" $path) local minor_hex=$(stat -c "%T" $path) local major_dec=$((16#$major_hex)) local minor_dec=$((16#$minor_hex)) cat /sys/dev/char/$major_dec:$minor_dec/size } # # dax_get_alignment -- get the alignment of a device dax # function dax_get_alignment() { major_hex=$(stat -c "%t" $1) minor_hex=$(stat -c "%T" $1) major_dec=$((16#$major_hex)) minor_dec=$((16#$minor_hex)) cat /sys/dev/char/$major_dec:$minor_dec/device/align } # # require_dax_device_alignments -- only allow script to continue if # the internal Device DAX alignments are as specified. # If necessary, it sorts DEVICE_DAX_PATH entries to match # the requested alignment order. # # usage: require_dax_device_alignments alignment1 [ alignment2 ... ] # require_dax_device_alignments() { require_node_dax_device_alignments -1 $* } # # require_native_fallocate -- verify if filesystem supports fallocate # function require_native_fallocate() { set +e $FALLOCATE_DETECT $1 status=$? set -e if [ $status -eq 1 ]; then msg "$UNITTEST_NAME: SKIP: filesystem does not support fallocate" exit 0 elif [ $status -ne 0 ]; then msg "$UNITTEST_NAME: fallocate_detect failed" exit 1 fi } # # require_usc_permission -- verify if usc can be read with current permissions # function require_usc_permission() { set +e $USC_PERMISSION $1 2> $DIR/usc_permission.txt status=$? set -e # check if there were any messages printed to stderr, skip test if there were usc_stderr=$(cat $DIR/usc_permission.txt | wc -c) rm -f $DIR/usc_permission.txt if [ $status -eq 1 ] || [ $usc_stderr -ne 0 ]; then msg "$UNITTEST_NAME: SKIP: missing permissions to read usc" exit 0 elif [ $status -ne 0 ]; then msg "$UNITTEST_NAME: usc_permission_check failed" exit 1 fi } # # require_build_type -- only allow script to continue for a certain build type # function require_build_type() { for type in $* do [ "$type" = "$BUILD" ] && return done verbose_msg "$UNITTEST_NAME: SKIP build-type $BUILD ($* required)" exit 0 } # # require_command -- only allow script to continue if specified command exists # function require_command() { if ! which $1 &>/dev/null; then msg "$UNITTEST_NAME: SKIP: '$1' command required" exit 0 fi } # # require_pkg -- only allow script to continue if specified package exists # usage: require_pkg [] # function require_pkg() { if ! 
command -v pkg-config 1>/dev/null then msg "$UNITTEST_NAME: SKIP pkg-config required" exit 0 fi local COMMAND="pkg-config $1" local MSG="$UNITTEST_NAME: SKIP '$1' package" if [ "$#" -eq "2" ]; then COMMAND="$COMMAND --atleast-version $2" MSG="$MSG (version >= $2)" fi MSG="$MSG required" if ! $COMMAND then msg "$MSG" exit 0 fi } # # configure_valgrind -- only allow script to continue when settings match # function configure_valgrind() { case "$1" in memcheck|pmemcheck|helgrind|drd|force-disable) ;; *) usage "bad test-type: $1" ;; esac if [ "$CHECK_TYPE" == "none" ]; then if [ "$1" == "force-disable" ]; then msg "$UNITTEST_NAME: all valgrind tests disabled" elif [ "$2" = "force-enable" ]; then CHECK_TYPE="$1" require_valgrind_tool $1 $3 elif [ "$2" = "force-disable" ]; then CHECK_TYPE=none else fatal "invalid parameter" fi else if [ "$1" == "force-disable" ]; then msg "$UNITTEST_NAME: SKIP RUNTESTS script parameter $CHECK_TYPE tries to enable valgrind test when all valgrind tests are disabled in TEST" exit 0 elif [ "$CHECK_TYPE" != "$1" -a "$2" == "force-enable" ]; then msg "$UNITTEST_NAME: SKIP RUNTESTS script parameter $CHECK_TYPE tries to enable different valgrind test than one defined in TEST" exit 0 elif [ "$CHECK_TYPE" == "$1" -a "$2" == "force-disable" ]; then msg "$UNITTEST_NAME: SKIP RUNTESTS script parameter $CHECK_TYPE tries to enable test defined in TEST as force-disable" exit 0 fi require_valgrind_tool $CHECK_TYPE $3 fi if [ "$UT_VALGRIND_SKIP_PRINT_MISMATCHED" == 1 ]; then export UT_SKIP_PRINT_MISMATCHED=1 fi } # # valgrind_version_no_check -- returns Valgrind version without checking # for valgrind first # function valgrind_version_no_check() { $VALGRINDEXE --version | sed "s/valgrind-\([0-9]*\)\.\([0-9]*\).*/\1*100+\2/" | bc } # # require_valgrind -- continue script execution only if # valgrind package is installed # function require_valgrind() { # bc is used inside valgrind_version_no_check require_command bc require_no_asan disable_exit_on_error VALGRINDEXE=`which valgrind 2>/dev/null` local ret=$? restore_exit_on_error if [ $ret -ne 0 ]; then msg "$UNITTEST_NAME: SKIP valgrind required" exit 0 fi [ $NODES_MAX -lt 0 ] && return; if [ ! -z "$1" ]; then available=$(valgrind_version_no_check) required=`echo $1 | sed "s/\([0-9]*\)\.\([0-9]*\).*/\1*100+\2/" | bc` if [ $available -lt $required ]; then msg "$UNITTEST_NAME: SKIP valgrind required (ver $1 or later)" exit 0 fi fi for N in $NODES_SEQ; do if [ "${NODE_VALGRINDEXE[$N]}" = "" ]; then disable_exit_on_error NODE_VALGRINDEXE[$N]=$(ssh $SSH_OPTS ${NODE[$N]} "which valgrind 2>/dev/null") ret=$? restore_exit_on_error if [ $ret -ne 0 ]; then msg "$UNITTEST_NAME: SKIP valgrind required on remote node #$N" exit 0 fi fi done } # # valgrind_version -- returns Valgrind version # function valgrind_version() { require_valgrind valgrind_version_no_check } # # require_valgrind_tool -- continue script execution only if valgrind with # specified tool is installed # # usage: require_valgrind_tool [] # function require_valgrind_tool() { require_valgrind local tool=$1 local binary=$2 local dir=. [ -d "$2" ] && dir="$2" && binary= pushd "$dir" > /dev/null [ -n "$binary" ] || binary=$(get_executables) if [ -z "$binary" ]; then fatal "require_valgrind_tool: error: no binary found" fi strings ${binary} 2>&1 | \ grep -q "compiled with support for Valgrind $tool" && true if [ $? 
-ne 0 ]; then msg "$UNITTEST_NAME: SKIP not compiled with support for Valgrind $tool" exit 0 fi if [ "$tool" == "helgrind" ]; then valgrind --tool=$tool --help 2>&1 | \ grep -qi "$tool is Copyright (c)" && true if [ $? -ne 0 ]; then msg "$UNITTEST_NAME: SKIP Valgrind with $tool required" exit 0; fi fi if [ "$tool" == "pmemcheck" ]; then out=`valgrind --tool=$tool --help 2>&1` && true echo "$out" | grep -qi "$tool is Copyright (c)" && true if [ $? -ne 0 ]; then msg "$UNITTEST_NAME: SKIP Valgrind with $tool required" exit 0; fi echo "$out" | grep -qi "expect-fence-after-clflush" && true if [ $? -ne 0 ]; then msg "$UNITTEST_NAME: SKIP pmemcheck does not support --expect-fence-after-clflush option. Please update it to the latest version." exit 0; fi fi popd > /dev/null return 0 } # # set_valgrind_exe_name -- set the actual Valgrind executable name # # On some systems (Ubuntu), "valgrind" is a shell script that calls # the actual executable "valgrind.bin". # The wrapper script doesn't work well with LD_PRELOAD, so we want # to call Valgrind directly. # function set_valgrind_exe_name() { if [ "$VALGRINDEXE" = "" ]; then fatal "set_valgrind_exe_name: error: valgrind is not set up" fi local VALGRINDDIR=`dirname $VALGRINDEXE` if [ -x $VALGRINDDIR/valgrind.bin ]; then VALGRINDEXE=$VALGRINDDIR/valgrind.bin fi [ $NODES_MAX -lt 0 ] && return; for N in $NODES_SEQ; do local COMMAND="\ [ -x $(dirname ${NODE_VALGRINDEXE[$N]})/valgrind.bin ] && \ echo $(dirname ${NODE_VALGRINDEXE[$N]})/valgrind.bin || \ echo ${NODE_VALGRINDEXE[$N]}" NODE_VALGRINDEXE[$N]=$(ssh $SSH_OPTS ${NODE[$N]} $COMMAND) if [ $? -ne 0 ]; then fatal ${NODE_VALGRINDEXE[$N]} fi done } # # require_no_asan_for - continue script execution only if passed binary does # NOT require libasan # function require_no_asan_for() { disable_exit_on_error nm $1 | grep -q __asan_ ASAN_ENABLED=$? restore_exit_on_error if [ "$ASAN_ENABLED" == "0" ]; then msg "$UNITTEST_NAME: SKIP: ASAN enabled" exit 0 fi } # # require_no_asan - continue script execution only if libpmem does NOT require # libasan # function require_no_asan() { case "$BUILD" in esac } # # require_tty - continue script execution only if standard output is a terminal # function require_tty() { if ! tty >/dev/null; then msg "$UNITTEST_NAME: SKIP no terminal" exit 0 fi } # # require_binary -- continue script execution only if the binary has been compiled # # In case of conditional compilation, skip this test. # function require_binary() { if [ -z "$1" ]; then fatal "require_binary: error: binary not provided" fi if [ ! -x "$1" ]; then msg "$UNITTEST_NAME: SKIP no binary found" exit 0 fi return } # # require_preload - continue script execution only if supplied # executable does not generate SIGABRT # # Used to check that LD_PRELOAD of, e.g., libvmmalloc is possible # # usage: require_preload [] # function require_preload() { msg=$1 shift trap SIGABRT disable_exit_on_error ret=$(LD_PRELOAD=$TEST_LD_PRELOAD $* 2>&1 /dev/null) ret=$? restore_exit_on_error if [ $ret == 134 ]; then msg "$UNITTEST_NAME: SKIP: $msg not supported" rm -f $1.core exit 0 fi } # # check_absolute_path -- continue script execution only if $DIR path is # an absolute path; do not resolve symlinks # function check_absolute_path() { if [ "${DIR:0:1}" != "/" ]; then fatal "Directory \$DIR has to be an absolute path. $DIR was given." 
fi } # # run_command -- run a command in a verbose or quiet way # function run_command() { local COMMAND="$*" if [ "$VERBOSE" != "0" ]; then echo "$ $COMMAND" $COMMAND else $COMMAND fi } # # create_holey_file_on_node -- create holey files of a given length # usage: create_holey_file_on_node # # example, to create two files, each 1GB in size on node 0: # create_holey_file_on_node 0 1G testfile1 testfile2 # # Input unit size is in bytes with optional suffixes like k, KB, M, etc. # function create_holey_file_on_node() { validate_node_number $1 local N=$1 size=$(convert_to_bytes $2) shift 2 for file in $* do run_on_node $N truncate -s ${size} $file >> $PREP_LOG_FILE done } # # require_mmap_under_valgrind -- only allow script to continue if mapping is # possible under Valgrind with required length # (sum of required DAX devices size). # This function is being called internally in # setup() function. # function require_mmap_under_valgrind() { local FILE_MAX_DAX_DEVICES="../tools/anonymous_mmap/max_dax_devices" if [ -z "$REQUIRE_DAX_DEVICES" ]; then return fi if [ ! -f "$FILE_MAX_DAX_DEVICES" ]; then fatal "$FILE_MAX_DAX_DEVICES not found. Run make test." fi if [ "$REQUIRE_DAX_DEVICES" -gt "$(< $FILE_MAX_DAX_DEVICES)" ]; then msg "$UNITTEST_NAME: SKIP: anonymous mmap under Valgrind not possible for $REQUIRE_DAX_DEVICES DAX device(s)." exit 0 fi } # # setup -- print message that test setup is commencing # function setup() { DIR=$DIR$SUFFIX export VMMALLOC_POOL_DIR="$DIR" # writes test working directory to temporary file # that allows read location of data after test failure if [ -f "$TEMP_LOC" ]; then echo "$DIR" > $TEMP_LOC fi if [ "$CHECK_TYPE" != "none" ]; then require_valgrind # detect possible Valgrind mmap issues and skip uncertain tests require_mmap_under_valgrind export VALGRIND_LOG_FILE=$CHECK_TYPE${UNITTEST_NUM}.log MCSTR="/$CHECK_TYPE" else MCSTR="" fi msg "$UNITTEST_NAME: SETUP ($TEST/$BUILD$MCSTR)" for f in $(get_files ".*[a-zA-Z_]${UNITTEST_NUM}\.log"); do rm -f $f done # $DIR has to be an absolute path check_absolute_path if [ "$FS" != "none" ]; then if [ -d "$DIR" ]; then rm $RM_ONEFS -rf -- $DIR fi mkdir -p $DIR fi if [ "$TM" = "1" ]; then start_time=$($DATE +%s.%N) fi if [ "$DEVDAX_TO_LOCK" == 1 ]; then lock_devdax fi export PMEMBLK_CONF="fallocate.at_create=0;" export PMEMOBJ_CONF="fallocate.at_create=0;" export PMEMLOG_CONF="fallocate.at_create=0;" } # # check_log_empty -- if match file does not exist, assume log should be empty # function check_log_empty() { if [ ! 
-f ${1}.match ] && [ $(get_size $1) -ne 0 ]; then echo "unexpected output in $1" dump_last_n_lines $1 exit 1 fi } # # check_local -- check local test results (using .match files) # function check_local() { if [ "$UT_SKIP_PRINT_MISMATCHED" == 1 ]; then option=-q fi check_log_empty $ERR_LOG_FILE FILES=$(get_files "[^0-9w]*${UNITTEST_NUM}\.log\.match") if [ -n "$FILES" ]; then ../match $option $FILES fi } # # match -- execute match # function match() { ../match $@ } # # check -- check local or remote test results (using .match files) # function check() { if [ $NODES_MAX -lt 0 ]; then check_local else FILES=$(get_files "node_[0-9]+_[^0-9w]*${UNITTEST_NUM}\.log\.match") local NODE_MATCH_FILES[0]="" local NODE_SCP_MATCH_FILES[0]="" for file in $FILES; do local N=`echo $file | cut -d"_" -f2` local DIR=${NODE_WORKING_DIR[$N]}/$curtestdir local FILE=`echo $file | cut -d"_" -f3 | sed "s/\.match$//g"` validate_node_number $N NODE_MATCH_FILES[$N]="${NODE_MATCH_FILES[$N]} $FILE" NODE_SCP_MATCH_FILES[$N]="${NODE_SCP_MATCH_FILES[$N]} ${NODE[$N]}:$DIR/$FILE" done for N in $NODES_SEQ; do [ "${NODE_SCP_MATCH_FILES[$N]}" ] && run_command scp $SCP_OPTS ${NODE_SCP_MATCH_FILES[$N]} . > /dev/null for file in ${NODE_MATCH_FILES[$N]}; do mv $file node_${N}_${file} done done if [ "$UT_SKIP_PRINT_MISMATCHED" == 1 ]; then option=-q fi for N in $NODES_SEQ; do check_log_empty node_${N}_${ERR_LOG_FILE} done if [ -n "$FILES" ]; then match $option $FILES fi fi } # # pass -- print message that the test has passed # function pass() { if [ "$DEVDAX_TO_LOCK" == 1 ]; then unlock_devdax fi if [ "$TM" = "1" ]; then end_time=$($DATE +%s.%N) start_time_sec=$($DATE -d "0 $start_time sec" +%s) end_time_sec=$($DATE -d "0 $end_time sec" +%s) days=$(((end_time_sec - start_time_sec) / (24*3600))) days=$(printf "%03d" $days) tm=$($DATE -d "0 $end_time sec - $start_time sec" +%H:%M:%S.%N) tm=$(echo "$days:$tm" | sed -e "s/^000://g" -e "s/^00://g" -e "s/^00://g" -e "s/\([0-9]*\)\.\([0-9][0-9][0-9]\).*/\1.\2/") tm="\t\t\t[$tm s]" else tm="" fi msg=$(interactive_green STDOUT "PASS") if [ "$UNITTEST_LOG_LEVEL" -ge 1 ]; then echo -e "$UNITTEST_NAME: $msg$tm" fi if [ "$FS" != "none" ]; then rm $RM_ONEFS -rf -- $DIR fi } # Length of pool file's signature SIG_LEN=8 # Offset and length of pmemobj layout LAYOUT_OFFSET=4096 LAYOUT_LEN=1024 # Length of arena's signature ARENA_SIG_LEN=16 # Signature of BTT Arena ARENA_SIG="BTT_ARENA_INFO" # Offset to first arena ARENA_OFF=8192 # # check_file -- check if file exists and print error message if not # check_file() { if [ ! -f $1 ] then fatal "Missing file: ${1}" fi } # # check_files -- check if files exist and print error message if not # check_files() { for file in $* do check_file $file done } # # check_no_file -- check if file has been deleted and print error message if not # check_no_file() { if [ -f $1 ] then fatal "Not deleted file: ${1}" fi } # # check_no_files -- check if files has been deleted and print error message if not # check_no_files() { for file in $* do check_no_file $file done } # # get_size -- return size of file (0 if file does not exist) # get_size() { if [ ! 
-f $1 ]; then echo "0" else stat $STAT_SIZE $1 fi } # # get_mode -- return mode of file # get_mode() { stat $STAT_MODE $1 } # # check_size -- validate file size # check_size() { local size=$1 local file=$2 local file_size=$(get_size $file) if [[ $size != $file_size ]] then fatal "error: wrong size ${file_size} != ${size}" fi } # # check_mode -- validate file mode # check_mode() { local mode=$1 local file=$2 local file_mode=$(get_mode $file) if [[ $mode != $file_mode ]] then fatal "error: wrong mode ${file_mode} != ${mode}" fi } # # check_signature -- check if file contains specified signature # check_signature() { local sig=$1 local file=$2 local file_sig=$($DD if=$file bs=1 count=$SIG_LEN 2>/dev/null | tr -d \\0) if [[ $sig != $file_sig ]] then fatal "error: $file: signature doesn't match ${file_sig} != ${sig}" fi } # # check_signatures -- check if multiple files contain specified signature # check_signatures() { local sig=$1 shift 1 for file in $* do check_signature $sig $file done } # # check_layout -- check if pmemobj pool contains specified layout # check_layout() { local layout=$1 local file=$2 local file_layout=$($DD if=$file bs=1\ skip=$LAYOUT_OFFSET count=$LAYOUT_LEN 2>/dev/null | tr -d \\0) if [[ $layout != $file_layout ]] then fatal "error: layout doesn't match ${file_layout} != ${layout}" fi } # # check_arena -- check if file contains specified arena signature # check_arena() { local file=$1 local sig=$($DD if=$file bs=1 skip=$ARENA_OFF count=$ARENA_SIG_LEN 2>/dev/null | tr -d \\0) if [[ $sig != $ARENA_SIG ]] then fatal "error: can't find arena signature" fi } # # enable_log_append -- turn on appending to the log files rather than truncating them # It also removes all log files created by tests: out*.log, err*.log and trace*.log # function enable_log_append() { rm -f $OUT_LOG_FILE rm -f $ERR_LOG_FILE rm -f $TRACE_LOG_FILE export UNITTEST_LOG_APPEND=1 } # clean data directory on all remote # nodes if remote test failed if [ "$CLEAN_FAILED_REMOTE" == "y" ]; then NODES_ALL=$((${#NODE[@]} - 1)) MYPID=$$ for ((i=0;i<=$NODES_ALL;i++)); do if [[ -z "${NODE_WORKING_DIR[$i]}" || -z "$curtestdir" ]]; then echo "Invalid path to tests data: ${NODE_WORKING_DIR[$i]}/$curtestdir/data/" exit 1 fi N[$i]=${NODE_WORKING_DIR[$i]}/$curtestdir/data/ run_command ssh $SSH_OPTS ${NODE[$i]} "rm -rf ${N[$i]}; mkdir ${N[$i]}" if [ $? -eq 0 ]; then verbose_msg "Removed data from: ${NODE[$i]}:${N[$i]}" fi done exit 0 fi # calculate the minimum of two or more numbers minimum() { local min=$1 shift for val in $*; do if [[ "$val" < "$min" ]]; then min=$val fi done echo $min } # # count_lines - count number of lines that match pattern $1 in file $2 # function count_lines() { # grep returns 1 on no match disable_exit_on_error $GREP -ce "$1" $2 restore_exit_on_error } # # get_pmemcheck_version() - return pmemcheck API major or minor version # usage: get_pmemcheck_version <0|1> # function get_pmemcheck_version() { PMEMCHECK_VERSION=$($VALGRINDEXE --tool=pmemcheck true 2>&1 \ | head -n 1 | sed "s/.*-\([0-9.]*\),.*/\1/") OIFS=$IFS IFS="." 
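# split the dotted version string on "." so that element 0 of the array assigned below is the major version and element 1 the minor version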
PMEMCHECK_MAJ_MIN=($PMEMCHECK_VERSION) IFS=$OIFS PMEMCHECK_VERSION_PART=${PMEMCHECK_MAJ_MIN[$1]} echo "$PMEMCHECK_VERSION_PART" } # # require_pmemcheck_version_ge - check if pmemcheck API # version is greater or equal to required value # usage: require_pmemcheck_version_ge [binary] # function require_pmemcheck_version_ge() { require_valgrind_tool pmemcheck $3 REQUIRE_MAJOR=$1 REQUIRE_MINOR=$2 PMEMCHECK_MAJOR=$(get_pmemcheck_version 0) PMEMCHECK_MINOR=$(get_pmemcheck_version 1) # compare MAJOR if [ $PMEMCHECK_MAJOR -gt $REQUIRE_MAJOR ]; then return 0 fi # compare MINOR if [ $PMEMCHECK_MAJOR -eq $REQUIRE_MAJOR ]; then if [ $PMEMCHECK_MINOR -ge $REQUIRE_MINOR ]; then return 0 fi fi msg "$UNITTEST_NAME: SKIP pmemcheck API version:" \ "$PMEMCHECK_MAJOR.$PMEMCHECK_MINOR" \ "is less than required" \ "$REQUIRE_MAJOR.$REQUIRE_MINOR" exit 0 } # # require_pmemcheck_version_lt - check if pmemcheck API # version is less than required value # usage: require_pmemcheck_version_lt [binary] # function require_pmemcheck_version_lt() { require_valgrind_tool pmemcheck $3 REQUIRE_MAJOR=$1 REQUIRE_MINOR=$2 PMEMCHECK_MAJOR=$(get_pmemcheck_version 0) PMEMCHECK_MINOR=$(get_pmemcheck_version 1) # compare MAJOR if [ $PMEMCHECK_MAJOR -lt $REQUIRE_MAJOR ]; then return 0 fi # compare MINOR if [ $PMEMCHECK_MAJOR -eq $REQUIRE_MAJOR ]; then if [ $PMEMCHECK_MINOR -lt $REQUIRE_MINOR ]; then return 0 fi fi msg "$UNITTEST_NAME: SKIP pmemcheck API version:" \ "$PMEMCHECK_MAJOR.$PMEMCHECK_MINOR" \ "is greater or equal than" \ "$REQUIRE_MAJOR.$REQUIRE_MINOR" exit 0 } # # require_free_space -- check if there is enough free space to run the test # Example, checking if there is 1 GB of free space on disk: # require_free_space 1G # function require_free_space() { req_free_space=$(convert_to_bytes $1) # actually require 5% or 8MB (whichever is higher) more, just in case # file system requires some space for its meta data pct=$((5 * $req_free_space / 100)) abs=$(convert_to_bytes 8M) if [ $pct -gt $abs ]; then req_free_space=$(($req_free_space + $pct)) else req_free_space=$(($req_free_space + $abs)) fi output=$(df -k $DIR) found=false i=1 for elem in $(echo "$output" | head -1); do if [ ${elem:0:5} == "Avail" ]; then found=true break else let "i+=1" fi done if [ $found = true ]; then row=$(echo "$output" | tail -1) free_space=$(( $(echo $row | awk "{print \$$i}")*1024 )) else msg "$UNITTEST_NAME: SKIP: unable to check free space" exit 0 fi if [ $free_space -lt $req_free_space ]; then msg "$UNITTEST_NAME: SKIP: not enough free space ($1 required)" exit 0 fi } vmem-1.8/src/test/unittest/ut.c000066400000000000000000000640341361505074100165600ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ut.c -- unit test support routines * * some of these functions look at errno, but none of them * change errno -- it is preserved across these calls. * * ut_done() and ut_fatal() never return. */ #include "unittest.h" #ifndef _WIN32 #ifdef __FreeBSD__ #include int ut_get_uuid_str(char *uu) { uuid_t uuid; uuid_generate(uuid); uuid_unparse(uuid, uu); return 0; } #else int ut_get_uuid_str(char *uu) { int fd = OPEN(UT_POOL_HDR_UUID_GEN_FILE, O_RDONLY); size_t num = READ(fd, uu, UT_POOL_HDR_UUID_STR_LEN); UT_ASSERTeq(num, UT_POOL_HDR_UUID_STR_LEN); uu[UT_POOL_HDR_UUID_STR_LEN - 1] = '\0'; CLOSE(fd); return 0; } #endif /* RHEL5 seems to be missing decls, even though libc supports them */ extern DIR *fdopendir(int fd); extern ssize_t readlinkat(int, const char *restrict, char *__restrict, size_t); void ut_strerror(int errnum, char *buff, size_t bufflen) { strerror_r(errnum, buff, bufflen); } void ut_suppress_errmsg(void) {} void ut_unsuppress_errmsg(void) {} #else #pragma comment(lib, "rpcrt4.lib") void ut_suppress_errmsg(void) { ErrMode = GetErrorMode(); SetErrorMode(ErrMode | SEM_NOGPFAULTERRORBOX | SEM_FAILCRITICALERRORS); AbortBehave = _set_abort_behavior(0, _WRITE_ABORT_MSG | _CALL_REPORTFAULT); Suppressed = TRUE; } void ut_unsuppress_errmsg(void) { if (Suppressed) { SetErrorMode(ErrMode); _set_abort_behavior(AbortBehave, _WRITE_ABORT_MSG | _CALL_REPORTFAULT); Suppressed = FALSE; } } int ut_get_uuid_str(char *uuid_str) { UUID uuid; char *buff; if (UuidCreate(&uuid) == 0) if (UuidToStringA(&uuid, &buff) == RPC_S_OK) { strcpy_s(uuid_str, UT_POOL_HDR_UUID_STR_LEN, buff); return 0; } return -1; } /* XXX - fix this temp hack dup'ing util_strerror when we get mock for win */ #define ENOTSUP_STR "Operation not supported" #define UNMAPPED_STR "Unmapped error" void ut_strerror(int errnum, char *buff, size_t bufflen) { switch (errnum) { case ENOTSUP: strcpy_s(buff, bufflen, ENOTSUP_STR); break; default: if (strerror_s(buff, bufflen, errnum)) strcpy_s(buff, bufflen, UNMAPPED_STR); } } /* * ut_spawnv -- creates and executes new synchronous process, * ... are additional parameters to new process, * the last argument must be a NULL * * XXX: argc/argv are ignored actually, as we need to use the unmodified * UTF16-encoded command line args. */ intptr_t ut_spawnv(int argc, const char **argv, ...) 
{ int va_count = 0; int wargc; wchar_t **wargv = CommandLineToArgvW(GetCommandLineW(), &wargc); va_list ap; va_start(ap, argv); while (va_arg(ap, char *)) { va_count++; } va_end(ap); /* 1 for terminating NULL */ wchar_t **wargv2 = calloc(wargc + va_count + 1, sizeof(wchar_t *)); if (wargv2 == NULL) { UT_ERR("Cannot calloc memory for new array"); return -1; } memcpy(wargv2, wargv, wargc * sizeof(wchar_t *)); va_start(ap, argv); for (int i = 0; i < va_count; i++) { char *a = va_arg(ap, char *); wargv2[wargc + i] = ut_toUTF16(a); } va_end(ap); intptr_t ret = _wspawnv(_P_WAIT, wargv2[0], wargv2); for (int i = 0; i < va_count; i++) { free(wargv2[wargc + i]); } free(wargv2); return ret; } #endif #define MAXLOGFILENAME 100 /* maximum expected .log file name length */ #define MAXPRINT 8192 /* maximum expected single print length */ /* * output gets replicated to these files */ static FILE *Outfp; static FILE *Errfp; static FILE *Tracefp; static int LogLevel; /* set by UNITTEST_LOG_LEVEL env variable */ static int Force_quiet; /* set by UNITTEST_FORCE_QUIET env variable */ static char *Testname; /* set by UNITTEST_NAME env variable */ /* set by UNITTEST_CHECK_OPEN_FILES_IGNORE_BADBLOCKS env variable */ static int Ignore_bb; unsigned long Ut_pagesize; unsigned long long Ut_mmap_align; os_mutex_t Sigactions_lock; static char Buff_out[MAXPRINT]; static char Buff_err[MAXPRINT]; static char Buff_trace[MAXPRINT]; static char Buff_stdout[MAXPRINT]; /* * flags that control output */ #define OF_NONL 1 /* do not append newline */ #define OF_ERR 2 /* output is error output */ #define OF_TRACE 4 /* output to trace file only */ #define OF_NAME 16 /* include Testname in the output */ /* * vout -- common output code, all output happens here */ static void vout(int flags, const char *prepend, const char *fmt, va_list ap) { char buf[MAXPRINT]; unsigned cc = 0; int sn; const char *sep = ""; char errstr[UT_MAX_ERR_MSG] = ""; const char *nl = "\n"; if (Force_quiet) return; if (flags & OF_NONL) nl = ""; if (flags & OF_NAME && Testname) { sn = snprintf(&buf[cc], MAXPRINT - cc, "%s: ", Testname); if (sn < 0) abort(); cc += (unsigned)sn; } if (prepend) { const char *colon = ""; if (fmt) colon = ": "; sn = snprintf(&buf[cc], MAXPRINT - cc, "%s%s", prepend, colon); if (sn < 0) abort(); cc += (unsigned)sn; } if (fmt) { if (*fmt == '!') { fmt++; sep = ": "; ut_strerror(errno, errstr, UT_MAX_ERR_MSG); } sn = vsnprintf(&buf[cc], MAXPRINT - cc, fmt, ap); if (sn < 0) abort(); cc += (unsigned)sn; } int ret = snprintf(&buf[cc], MAXPRINT - cc, "%s%s%s", sep, errstr, nl); if (ret < 0 || ret >= MAXPRINT - (int)cc) UT_FATAL("snprintf: %d", ret); /* buf has the fully-baked output, send it everywhere it goes... */ fputs(buf, Tracefp); if (flags & OF_ERR) { fputs(buf, Errfp); if (LogLevel >= 2) fputs(buf, stderr); } else if ((flags & OF_TRACE) == 0) { fputs(buf, Outfp); if (LogLevel >= 2) fputs(buf, stdout); } } /* * out -- printf-like output controlled by flags */ static void out(int flags, const char *fmt, ...) 
{ va_list ap; va_start(ap, fmt); vout(flags, NULL, fmt, ap); va_end(ap); } /* * prefix -- emit the trace line prefix */ static void prefix(const char *file, int line, const char *func, int flags) { out(OF_NONL|OF_TRACE|flags, "{%s:%d %s} ", file, line, func); } /* * lookup table for open files */ static struct fd_lut { struct fd_lut *left; struct fd_lut *right; int fdnum; char *fdfile; } *Fd_lut; static int Fd_errcount; /* * open_file_add -- add an open file to the lut */ static struct fd_lut * open_file_add(struct fd_lut *root, int fdnum, const char *fdfile) { if (root == NULL) { root = ZALLOC(sizeof(*root)); root->fdnum = fdnum; root->fdfile = STRDUP(fdfile); } else if (root->fdnum == fdnum) UT_FATAL("duplicate fdnum: %d", fdnum); else if (root->fdnum < fdnum) root->left = open_file_add(root->left, fdnum, fdfile); else root->right = open_file_add(root->right, fdnum, fdfile); return root; } /* * open_file_remove -- find exact match & remove it from lut * * prints error if exact match not found, increments Fd_errcount */ static void open_file_remove(struct fd_lut *root, int fdnum, const char *fdfile) { if (root == NULL) { if (!Ignore_bb || strstr(fdfile, "badblocks") == NULL) { UT_ERR("unexpected open file: fd %d => \"%s\"", fdnum, fdfile); Fd_errcount++; } } else if (root->fdnum == fdnum) { if (root->fdfile == NULL) { UT_ERR("open file dup: fd %d => \"%s\"", fdnum, fdfile); Fd_errcount++; } else if (strcmp(root->fdfile, fdfile) == 0) { /* found exact match */ FREE(root->fdfile); root->fdfile = NULL; } else { UT_ERR("open file changed: fd %d was \"%s\" now \"%s\"", fdnum, root->fdfile, fdfile); #ifdef __FreeBSD__ /* * XXX Pathname list not definitive on FreeBSD, * so treat as warning */ FREE(root->fdfile); root->fdfile = NULL; #else Fd_errcount++; #endif } } else if (root->fdnum < fdnum) open_file_remove(root->left, fdnum, fdfile); else open_file_remove(root->right, fdnum, fdfile); } /* * open_file_walk -- walk lut for any left-overs * * prints error if any found, increments Fd_errcount */ static void open_file_walk(struct fd_lut *root) { if (root) { open_file_walk(root->left); if (root->fdfile) { UT_ERR("open file missing: fd %d => \"%s\"", root->fdnum, root->fdfile); Fd_errcount++; } open_file_walk(root->right); } } /* * open_file_free -- free the lut */ static void open_file_free(struct fd_lut *root) { if (root) { open_file_free(root->left); open_file_free(root->right); if (root->fdfile) FREE(root->fdfile); FREE(root); } } /* * close_output_files -- close opened output files */ static void close_output_files(void) { if (Outfp != NULL) fclose(Outfp); if (Errfp != NULL) fclose(Errfp); if (Tracefp != NULL) fclose(Tracefp); } #ifndef _WIN32 #ifdef __FreeBSD__ /* XXX Note: Pathname retrieval is not really supported in FreeBSD */ #include #include /* * record_open_files -- make a list of open files (used at START() time) */ static void record_open_files(void) { int numfds, i; struct kinfo_file *fip, *f; if ((fip = kinfo_getfile(getpid(), &numfds)) == NULL) { UT_FATAL("!kinfo_getfile"); } for (i = 0, f = fip; i < numfds; i++, f++) { if (f->kf_fd >= 0) { Fd_lut = open_file_add(Fd_lut, f->kf_fd, f->kf_path); } } free(fip); } /* * check_open_files -- verify open files match recorded open files */ static void check_open_files(void) { int numfds, i; struct kinfo_file *fip, *f; if ((fip = kinfo_getfile(getpid(), &numfds)) == NULL) { UT_FATAL("!kinfo_getfile"); } for (i = 0, f = fip; i < numfds; i++, f++) { if (f->kf_fd >= 0) { open_file_remove(Fd_lut, f->kf_fd, f->kf_path); } } 
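/* report any recorded fds that are no longer open, then fail if the set of open files changed between START() and DONE() */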
open_file_walk(Fd_lut); if (Fd_errcount) UT_FATAL("open file list changed between START() and DONE()"); open_file_free(Fd_lut); free(fip); } #else /* !__FreeBSD__ */ /* * record_open_files -- make a list of open files (used at START() time) */ static void record_open_files(void) { int dirfd; DIR *dirp = NULL; struct dirent *dp; if ((dirfd = os_open("/proc/self/fd", O_RDONLY)) < 0 || (dirp = fdopendir(dirfd)) == NULL) UT_FATAL("!/proc/self/fd"); while ((dp = readdir(dirp)) != NULL) { int fdnum; char fdfile[PATH_MAX]; ssize_t cc; if (*dp->d_name == '.') continue; if ((cc = readlinkat(dirfd, dp->d_name, fdfile, PATH_MAX)) < 0) UT_FATAL("!readlinkat: /proc/self/fd/%s", dp->d_name); fdfile[cc] = '\0'; fdnum = atoi(dp->d_name); if (dirfd == fdnum) continue; Fd_lut = open_file_add(Fd_lut, fdnum, fdfile); } closedir(dirp); } /* * check_open_files -- verify open files match recorded open files */ static void check_open_files(void) { int dirfd; DIR *dirp = NULL; struct dirent *dp; if ((dirfd = os_open("/proc/self/fd", O_RDONLY)) < 0 || (dirp = fdopendir(dirfd)) == NULL) UT_FATAL("!/proc/self/fd"); while ((dp = readdir(dirp)) != NULL) { int fdnum; char fdfile[PATH_MAX]; ssize_t cc; if (*dp->d_name == '.') continue; if ((cc = readlinkat(dirfd, dp->d_name, fdfile, PATH_MAX)) < 0) UT_FATAL("!readlinkat: /proc/self/fd/%s", dp->d_name); fdfile[cc] = '\0'; fdnum = atoi(dp->d_name); if (dirfd == fdnum) continue; open_file_remove(Fd_lut, fdnum, fdfile); } closedir(dirp); open_file_walk(Fd_lut); if (Fd_errcount) UT_FATAL("open file list changed between START() and DONE()"); open_file_free(Fd_lut); } #endif /* __FreeBSD__ */ #else /* _WIN32 */ #include #define STATUS_INFO_LENGTH_MISMATCH 0xc0000004 #define ObjectTypeInformation 2 #define SystemExtendedHandleInformation 64 typedef struct _SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX { PVOID Object; HANDLE UniqueProcessId; HANDLE HandleValue; ULONG GrantedAccess; USHORT CreatorBackTraceIndex; USHORT ObjectTypeIndex; ULONG HandleAttributes; ULONG Reserved; } SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX, *PSYSTEM_HANDLE_TABLE_ENTRY_INFO_EX; typedef struct _SYSTEM_HANDLE_INFORMATION_EX { ULONG_PTR NumberOfHandles; ULONG_PTR Reserved; SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX Handles[1]; } SYSTEM_HANDLE_INFORMATION_EX, *PSYSTEM_HANDLE_INFORMATION_EX; typedef enum _POOL_TYPE { NonPagedPool, PagedPool, NonPagedPoolMustSucceed, DontUseThisType, NonPagedPoolCacheAligned, PagedPoolCacheAligned, NonPagedPoolCacheAlignedMustS } POOL_TYPE, *PPOOL_TYPE; typedef struct _OBJECT_TYPE_INFORMATION { UNICODE_STRING Name; ULONG TotalNumberOfObjects; ULONG TotalNumberOfHandles; ULONG TotalPagedPoolUsage; ULONG TotalNonPagedPoolUsage; ULONG TotalNamePoolUsage; ULONG TotalHandleTableUsage; ULONG HighWaterNumberOfObjects; ULONG HighWaterNumberOfHandles; ULONG HighWaterPagedPoolUsage; ULONG HighWaterNonPagedPoolUsage; ULONG HighWaterNamePoolUsage; ULONG HighWaterHandleTableUsage; ULONG InvalidAttributes; GENERIC_MAPPING GenericMapping; ULONG ValidAccess; BOOLEAN SecurityRequired; BOOLEAN MaintainHandleCount; USHORT MaintainTypeList; POOL_TYPE PoolType; ULONG PagedPoolUsage; ULONG NonPagedPoolUsage; } OBJECT_TYPE_INFORMATION, *POBJECT_TYPE_INFORMATION; /* * enum_handles -- (internal) record or check a list of open handles */ static void enum_handles(int op) { ULONG hi_size = 0x200000; /* default size */ ULONG req_size = 0; PSYSTEM_HANDLE_INFORMATION_EX hndl_info = (PSYSTEM_HANDLE_INFORMATION_EX)MALLOC(hi_size); /* if it fails with the default info size, realloc and try again */ NTSTATUS status; while ((status = 
NtQuerySystemInformation( SystemExtendedHandleInformation, hndl_info, hi_size, &req_size) == STATUS_INFO_LENGTH_MISMATCH)) { hi_size = req_size + 4096; hndl_info = (PSYSTEM_HANDLE_INFORMATION_EX)REALLOC(hndl_info, hi_size); } UT_ASSERT(status >= 0); DWORD pid = GetProcessId(GetCurrentProcess()); DWORD ti_size = 4096; /* initial size */ POBJECT_TYPE_INFORMATION type_info = (POBJECT_TYPE_INFORMATION)MALLOC(ti_size); DWORD ni_size = 4096; /* initial size */ PVOID name_info = MALLOC(ni_size); for (ULONG i = 0; i < hndl_info->NumberOfHandles; i++) { SYSTEM_HANDLE_TABLE_ENTRY_INFO_EX handle = hndl_info->Handles[i]; char name[MAX_PATH]; /* ignore handles not owned by current process */ if ((ULONGLONG)handle.UniqueProcessId != pid) continue; /* query the object type */ status = NtQueryObject(handle.HandleValue, ObjectTypeInformation, type_info, ti_size, NULL); if (status < 0) continue; /* if handle can't be queried, ignore it */ /* * Register/verify only handles of selected types. * Do not rely on type numbers - check type name instead. */ if (wcscmp(type_info->Name.Buffer, L"Directory") && wcscmp(type_info->Name.Buffer, L"Mutant") && wcscmp(type_info->Name.Buffer, L"Semaphore") && wcscmp(type_info->Name.Buffer, L"File")) { /* does not match any of the above types */ continue; } /* * Skip handles with access 0x0012019f. NtQueryObject() may * hang on querying the handles pointing to named pipes. */ if (handle.GrantedAccess == 0x0012019f) continue; int ret = snprintf(name, MAX_PATH, "%.*S", type_info->Name.Length / 2, type_info->Name.Buffer); if (ret < 0 || ret >= MAX_PATH) UT_FATAL("snprintf: %d", ret); int fd = (int)(ULONGLONG)handle.HandleValue; if (op == 0) Fd_lut = open_file_add(Fd_lut, fd, name); else open_file_remove(Fd_lut, fd, name); } FREE(type_info); FREE(name_info); FREE(hndl_info); } /* * record_open_files -- record a number of open handles (used at START() time) * * On Windows, it records not only file handles, but some other handle types * as well. * XXX: We can't register all the handles, as spawning new process in the test * may result in opening new handles of some types (i.e. registry keys). */ static void record_open_files() { /* * XXX: Dummy call to CoCreateGuid() to ignore files/handles open * by this function. They won't be closed until process termination. 
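* The GUID value itself is discarded; the call is made only so that any handles it opens are already part of the recorded snapshot instead of being reported as leaked at DONE().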
*/ GUID uuid; HRESULT res = CoCreateGuid(&uuid); enum_handles(0); } /* * check_open_files -- verify open handles match recorded open handles */ static void check_open_files() { enum_handles(1); open_file_walk(Fd_lut); if (Fd_errcount) UT_FATAL("open file list changed between START() and DONE()"); open_file_free(Fd_lut); } #endif /* _WIN32 */ /* * ut_start_common -- (internal) initialize unit test framework, * indicate test started */ static void ut_start_common(const char *file, int line, const char *func, const char *fmt, va_list ap) { int saveerrno = errno; char logname[MAXLOGFILENAME]; char *logsuffix; long long sc = sysconf(_SC_PAGESIZE); if (sc < 0) abort(); Ut_pagesize = (unsigned long)sc; #ifdef _WIN32 util_init(); SYSTEM_INFO si; GetSystemInfo(&si); Ut_mmap_align = si.dwAllocationGranularity; if (os_getenv("VMEM_NO_ABORT_MSG") != NULL) { /* disable windows error message boxes */ ut_suppress_errmsg(); } os_mutex_init(&Sigactions_lock); #else Ut_mmap_align = Ut_pagesize; char *ignore_bb = os_getenv("UNITTEST_CHECK_OPEN_FILES_IGNORE_BADBLOCKS"); if (ignore_bb && *ignore_bb) Ignore_bb = 1; #endif if (os_getenv("UNITTEST_NO_SIGHANDLERS") == NULL) ut_register_sighandlers(); if (os_getenv("UNITTEST_LOG_LEVEL") != NULL) LogLevel = atoi(os_getenv("UNITTEST_LOG_LEVEL")); else LogLevel = 2; if (os_getenv("UNITTEST_FORCE_QUIET") != NULL) Force_quiet++; Testname = os_getenv("UNITTEST_NAME"); if ((logsuffix = os_getenv("UNITTEST_NUM")) == NULL) logsuffix = ""; const char *fmode = "w"; if (os_getenv("UNITTEST_LOG_APPEND") != NULL) fmode = "a"; int ret = snprintf(logname, MAXLOGFILENAME, "out%s.log", logsuffix); if (ret < 0 || ret >= MAXLOGFILENAME) UT_FATAL("snprintf: %d", ret); if ((Outfp = os_fopen(logname, fmode)) == NULL) { perror(logname); exit(1); } ret = snprintf(logname, MAXLOGFILENAME, "err%s.log", logsuffix); if (ret < 0 || ret >= MAXLOGFILENAME) UT_FATAL("snprintf: %d", ret); if ((Errfp = os_fopen(logname, fmode)) == NULL) { perror(logname); exit(1); } ret = snprintf(logname, MAXLOGFILENAME, "trace%s.log", logsuffix); if (ret < 0 || ret >= MAXLOGFILENAME) UT_FATAL("snprintf: %d", ret); if ((Tracefp = os_fopen(logname, fmode)) == NULL) { perror(logname); exit(1); } setvbuf(Outfp, Buff_out, _IOLBF, MAXPRINT); setvbuf(Errfp, Buff_err, _IOLBF, MAXPRINT); setvbuf(Tracefp, Buff_trace, _IOLBF, MAXPRINT); setvbuf(stdout, Buff_stdout, _IOLBF, MAXPRINT); prefix(file, line, func, 0); vout(OF_NAME, "START", fmt, ap); #ifdef __FreeBSD__ /* XXX Record the fd that will be leaked by uuid_generate */ uuid_t u; uuid_generate(u); #endif record_open_files(); errno = saveerrno; } /* * ut_start -- initialize unit test framework, indicate test started */ void ut_start(const char *file, int line, const char *func, int argc, char * const argv[], const char *fmt, ...) { va_list ap; va_start(ap, fmt); ut_start_common(file, line, func, fmt, ap); out(OF_NONL, 0, " args:"); for (int i = 0; i < argc; i++) out(OF_NONL, " %s", argv[i]); out(0, NULL); va_end(ap); } #ifdef _WIN32 /* * ut_startW -- initialize unit test framework, indicate test started */ void ut_startW(const char *file, int line, const char *func, int argc, wchar_t * const argv[], const char *fmt, ...) 
{ va_list ap; va_start(ap, fmt); ut_start_common(file, line, func, fmt, ap); out(OF_NONL, 0, " args:"); for (int i = 0; i < argc; i++) { char *str = ut_toUTF8(argv[i]); UT_ASSERTne(str, NULL); out(OF_NONL, " %s", str); free(str); } out(0, NULL); va_end(ap); } #endif /* * ut_end -- indicate test is done, exit program with specified value */ void ut_end(const char *file, int line, const char *func, int ret) { #ifdef _WIN32 os_mutex_destroy(&Sigactions_lock); #endif if (!os_getenv("UNITTEST_DO_NOT_CHECK_OPEN_FILES")) check_open_files(); prefix(file, line, func, 0); out(OF_NAME, "END %d", ret); close_output_files(); exit(ret); } /* * ut_done -- indicate test is done, exit program */ void ut_done(const char *file, int line, const char *func, const char *fmt, ...) { #ifdef _WIN32 os_mutex_destroy(&Sigactions_lock); #endif if (!os_getenv("UNITTEST_DO_NOT_CHECK_OPEN_FILES")) check_open_files(); va_list ap; va_start(ap, fmt); prefix(file, line, func, 0); vout(OF_NAME, "DONE", fmt, ap); va_end(ap); close_output_files(); exit(0); } /* * ut_fatal -- indicate fatal error, exit program */ void ut_fatal(const char *file, int line, const char *func, const char *fmt, ...) { va_list ap; va_start(ap, fmt); prefix(file, line, func, OF_ERR); vout(OF_ERR|OF_NAME, "Error", fmt, ap); va_end(ap); abort(); } /* * ut_out -- output to stdout */ void ut_out(const char *file, int line, const char *func, const char *fmt, ...) { va_list ap; int saveerrno = errno; va_start(ap, fmt); prefix(file, line, func, 0); vout(0, NULL, fmt, ap); va_end(ap); errno = saveerrno; } /* * ut_err -- output to stderr */ void ut_err(const char *file, int line, const char *func, const char *fmt, ...) { va_list ap; int saveerrno = errno; va_start(ap, fmt); prefix(file, line, func, OF_ERR); vout(OF_ERR|OF_NAME, NULL, fmt, ap); va_end(ap); errno = saveerrno; } /* * ut_checksum -- compute checksum using Fletcher16 algorithm */ uint16_t ut_checksum(uint8_t *addr, size_t len) { uint16_t sum1 = 0; uint16_t sum2 = 0; for (size_t i = 0; i < len; ++i) { sum1 = (uint16_t)(sum1 + addr[i]) % 255; sum2 = (uint16_t)(sum2 + sum1) % 255; } return (uint16_t)((sum2 << 8) | sum1); } #ifdef _WIN32 /* * ut_toUTF8 -- convert WCS to UTF-8 string */ char * ut_toUTF8(const wchar_t *wstr) { int size = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, wstr, -1, NULL, 0, NULL, NULL); if (size == 0) { UT_FATAL("!ut_toUTF8"); } char *str = malloc(size * sizeof(char)); if (str == NULL) { UT_FATAL("!ut_toUTF8"); } if (WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS, wstr, -1, str, size, NULL, NULL) == 0) { UT_FATAL("!ut_toUTF8"); } return str; } /* * ut_toUTF16 -- convert UTF-8 to WCS string */ wchar_t * ut_toUTF16(const char *wstr) { int size = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, wstr, -1, NULL, 0); if (size == 0) { UT_FATAL("!ut_toUTF16"); } wchar_t *str = malloc(size * sizeof(wchar_t)); if (str == NULL) { UT_FATAL("!ut_toUTF16"); } if (MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, wstr, -1, str, size) == 0) { UT_FATAL("!ut_toUTF16"); } return str; } #endif /* * ut_strtoi -- a strtoi call that cannot return error */ int ut_strtoi(const char *file, int line, const char *func, const char *nptr, char **endptr, int base) { long ret = ut_strtol(file, line, func, nptr, endptr, base); if (ret > INT_MAX || ret < INT_MIN) ut_fatal(file, line, func, "!strtoi: nptr=%s, endptr=%s, base=%d", nptr, endptr ? 
*endptr : "NULL", base); return (int)ret; } /* * ut_strtou -- a strtou call that cannot return error */ unsigned ut_strtou(const char *file, int line, const char *func, const char *nptr, char **endptr, int base) { unsigned long ret = ut_strtoul(file, line, func, nptr, endptr, base); if (ret > UINT_MAX) ut_fatal(file, line, func, "!strtou: nptr=%s, endptr=%s, base=%d", nptr, endptr ? *endptr : "NULL", base); return (unsigned)ret; } /* * ut_strtol -- a strtol call that cannot return error */ long ut_strtol(const char *file, int line, const char *func, const char *nptr, char **endptr, int base) { long long ret = ut_strtoll(file, line, func, nptr, endptr, base); if (ret > LONG_MAX || ret < LONG_MIN) ut_fatal(file, line, func, "!strtol: nptr=%s, endptr=%s, base=%d", nptr, endptr ? *endptr : "NULL", base); return (long)ret; } /* * ut_strtoul -- a strtou call that cannot return error */ unsigned long ut_strtoul(const char *file, int line, const char *func, const char *nptr, char **endptr, int base) { unsigned long long ret = ut_strtoull(file, line, func, nptr, endptr, base); if (ret > ULONG_MAX) ut_fatal(file, line, func, "!strtoul: nptr=%s, endptr=%s, base=%d", nptr, endptr ? *endptr : "NULL", base); return (unsigned long)ret; } /* * ut_strtoull -- a strtoul call that cannot return error */ unsigned long long ut_strtoull(const char *file, int line, const char *func, const char *nptr, char **endptr, int base) { unsigned long long retval; errno = 0; if (*nptr == '\0') { errno = EINVAL; goto fatal; } if (endptr != NULL) { retval = strtoull(nptr, endptr, base); } else { char *end; retval = strtoull(nptr, &end, base); if (*end != '\0') goto fatal; } if (errno != 0) goto fatal; return retval; fatal: ut_fatal(file, line, func, "!strtoull: nptr=%s, endptr=%s, base=%d", nptr, endptr ? *endptr : "NULL", base); } /* * ut_strtoll -- a strtol call that cannot return error */ long long ut_strtoll(const char *file, int line, const char *func, const char *nptr, char **endptr, int base) { long long retval; errno = 0; if (*nptr == '\0') { errno = EINVAL; goto fatal; } if (endptr != NULL) { retval = strtoll(nptr, endptr, base); } else { char *end; retval = strtoll(nptr, &end, base); if (*end != '\0') goto fatal; } if (errno != 0) goto fatal; return retval; fatal: ut_fatal(file, line, func, "!strtoll: nptr=%s, endptr=%s, base=%d", nptr, endptr ? *endptr : "NULL", base); } vmem-1.8/src/test/unittest/ut_alloc.c000066400000000000000000000131711361505074100177260ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ut_alloc.c -- unit test memory allocation routines */ #include "unittest.h" /* * ut_malloc -- a malloc that cannot return NULL */ void * ut_malloc(const char *file, int line, const char *func, size_t size) { void *retval = malloc(size); if (retval == NULL) ut_fatal(file, line, func, "cannot malloc %zu bytes", size); return retval; } /* * ut_calloc -- a calloc that cannot return NULL */ void * ut_calloc(const char *file, int line, const char *func, size_t nmemb, size_t size) { void *retval = calloc(nmemb, size); if (retval == NULL) ut_fatal(file, line, func, "cannot calloc %zu bytes", size); return retval; } /* * ut_free -- wrapper for free * * technically we don't need to wrap free since there's no return to * check. using this wrapper to add memory allocation tracking later. */ void ut_free(const char *file, int line, const char *func, void *ptr) { free(ptr); } /* * ut_aligned_free -- wrapper for aligned memory free */ void ut_aligned_free(const char *file, int line, const char *func, void *ptr) { #ifndef _WIN32 free(ptr); #else _aligned_free(ptr); #endif } /* * ut_realloc -- a realloc that cannot return NULL */ void * ut_realloc(const char *file, int line, const char *func, void *ptr, size_t size) { void *retval = realloc(ptr, size); if (retval == NULL) ut_fatal(file, line, func, "cannot realloc %zu bytes", size); return retval; } /* * ut_strdup -- a strdup that cannot return NULL */ char * ut_strdup(const char *file, int line, const char *func, const char *str) { char *retval = strdup(str); if (retval == NULL) ut_fatal(file, line, func, "cannot strdup %zu bytes", strlen(str)); return retval; } /* * ut_memalign -- like malloc but page-aligned memory */ void * ut_memalign(const char *file, int line, const char *func, size_t alignment, size_t size) { void *retval; #ifndef _WIN32 if ((errno = posix_memalign(&retval, alignment, size)) != 0) ut_fatal(file, line, func, "!memalign %zu bytes (%zu alignment)", size, alignment); #else retval = _aligned_malloc(size, alignment); if (!retval) { ut_fatal(file, line, func, "!memalign %zu bytes (%zu alignment)", size, alignment); } #endif return retval; } /* * ut_pagealignmalloc -- like malloc but page-aligned memory */ void * ut_pagealignmalloc(const char *file, int line, const char *func, size_t size) { return ut_memalign(file, line, func, (size_t)Ut_pagesize, size); } /* * ut_mmap_anon_aligned -- mmaps anonymous memory with specified (power of two, * multiple of page size) alignment and adds guard * pages around it */ void * ut_mmap_anon_aligned(const char *file, int line, const char *func, size_t alignment, size_t size) { char *d, *d_aligned; uintptr_t di, di_aligned; size_t sz; if (alignment == 0) alignment = Ut_mmap_align; /* alignment must be a multiple of page size */ if (alignment & (Ut_mmap_align - 1)) return NULL; /* power of two */ if (alignment & (alignment - 1)) return NULL; d = ut_mmap(file, line, func, NULL, size + 2 * alignment, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); di = (uintptr_t)d; 
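/* round the mapping address up to the requested alignment; if it is already aligned, advance by one full alignment unit so the leading guard page has room */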
di_aligned = (di + alignment - 1) & ~(alignment - 1); if (di == di_aligned) di_aligned += alignment; d_aligned = (void *)di_aligned; sz = di_aligned - di; if (sz - Ut_mmap_align) ut_munmap(file, line, func, d, sz - Ut_mmap_align); /* guard page before */ ut_mprotect(file, line, func, d_aligned - Ut_mmap_align, Ut_mmap_align, PROT_NONE); /* guard page after */ ut_mprotect(file, line, func, d_aligned + size, Ut_mmap_align, PROT_NONE); sz = di + size + 2 * alignment - (di_aligned + size) - Ut_mmap_align; if (sz) ut_munmap(file, line, func, d_aligned + size + Ut_mmap_align, sz); return d_aligned; } /* * ut_munmap_anon_aligned -- unmaps anonymous memory allocated by * ut_mmap_anon_aligned */ int ut_munmap_anon_aligned(const char *file, int line, const char *func, void *start, size_t size) { return ut_munmap(file, line, func, (char *)start - Ut_mmap_align, size + 2 * Ut_mmap_align); } vmem-1.8/src/test/unittest/ut_backtrace.c000066400000000000000000000124661361505074100205610ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * ut_backtrace.c -- backtrace reporting routines */ #ifndef _GNU_SOURCE #define _GNU_SOURCE #endif #include "unittest.h" #ifdef USE_LIBUNWIND #define UNW_LOCAL_ONLY #include #include #define PROCNAMELEN 256 /* * ut_dump_backtrace -- dump stacktrace to error log using libunwind */ void ut_dump_backtrace(void) { unw_context_t context; unw_proc_info_t pip; pip.unwind_info = NULL; int ret = unw_getcontext(&context); if (ret) { UT_ERR("unw_getcontext: %s [%d]", unw_strerror(ret), ret); return; } unw_cursor_t cursor; ret = unw_init_local(&cursor, &context); if (ret) { UT_ERR("unw_init_local: %s [%d]", unw_strerror(ret), ret); return; } ret = unw_step(&cursor); char procname[PROCNAMELEN]; unsigned i = 0; while (ret > 0) { ret = unw_get_proc_info(&cursor, &pip); if (ret) { UT_ERR("unw_get_proc_info: %s [%d]", unw_strerror(ret), ret); break; } unw_word_t off; ret = unw_get_proc_name(&cursor, procname, PROCNAMELEN, &off); if (ret && ret != -UNW_ENOMEM) { if (ret != -UNW_EUNSPEC) { UT_ERR("unw_get_proc_name: %s [%d]", unw_strerror(ret), ret); } strcpy(procname, "?"); } void *ptr = (void *)(pip.start_ip + off); Dl_info dlinfo; const char *fname = "?"; uintptr_t in_object_offset = 0; if (dladdr(ptr, &dlinfo) && dlinfo.dli_fname && *dlinfo.dli_fname) { fname = dlinfo.dli_fname; uintptr_t base = (uintptr_t)dlinfo.dli_fbase; if ((uintptr_t)ptr >= base) in_object_offset = (uintptr_t)ptr - base; } UT_ERR("%u: %s (%s%s+0x%lx) [%p] [0x%" PRIxPTR "]", i++, fname, procname, ret == -UNW_ENOMEM ? "..." : "", off, ptr, in_object_offset); ret = unw_step(&cursor); if (ret < 0) UT_ERR("unw_step: %s [%d]", unw_strerror(ret), ret); } } #else /* USE_LIBUNWIND */ #define SIZE 100 #ifndef _WIN32 #include /* * ut_dump_backtrace -- dump stacktrace to error log using libc's backtrace */ void ut_dump_backtrace(void) { int j, nptrs; void *buffer[SIZE]; char **strings; nptrs = backtrace(buffer, SIZE); strings = backtrace_symbols(buffer, nptrs); if (strings == NULL) { UT_ERR("!backtrace_symbols"); return; } for (j = 0; j < nptrs; j++) UT_ERR("%u: %s", j, strings[j]); free(strings); } #else /* _WIN32 */ #include /* * ut_dump_backtrace -- dump stacktrace to error log */ void ut_dump_backtrace(void) { void *buffer[SIZE]; unsigned nptrs; SYMBOL_INFO *symbol; HANDLE proc_hndl = GetCurrentProcess(); SymInitialize(proc_hndl, NULL, TRUE); nptrs = CaptureStackBackTrace(0, SIZE, buffer, NULL); symbol = CALLOC(sizeof(SYMBOL_INFO) + MAX_SYM_NAME * sizeof(CHAR), 1); symbol->MaxNameLen = MAX_SYM_NAME - 1; symbol->SizeOfStruct = sizeof(SYMBOL_INFO); for (unsigned i = 0; i < nptrs; i++) { if (SymFromAddr(proc_hndl, (DWORD64)buffer[i], 0, symbol)) { UT_ERR("%u: %s [%p]", nptrs - i - 1, symbol->Name, buffer[i]); } else { UT_ERR("%u: [%p]", nptrs - i - 1, buffer[i]); } } FREE(symbol); } #endif /* _WIN32 */ #endif /* USE_LIBUNWIND */ /* * ut_sighandler -- fatal signal handler */ void ut_sighandler(int sig) { /* * Usually SIGABRT is a result of ASSERT() or FATAL(). * We don't need backtrace, as the reason of the failure * is logged in debug traces. 
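* For any other fatal signal, the handler below dumps a backtrace and exits with 128 + the signal number.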
*/ if (sig != SIGABRT) { UT_ERR("\n"); UT_ERR("Signal %d, backtrace:", sig); ut_dump_backtrace(); UT_ERR("\n"); } exit(128 + sig); } /* * ut_register_sighandlers -- register signal handlers for various fatal signals */ void ut_register_sighandlers(void) { signal(SIGSEGV, ut_sighandler); signal(SIGABRT, ut_sighandler); signal(SIGILL, ut_sighandler); signal(SIGFPE, ut_sighandler); signal(SIGINT, ut_sighandler); #ifndef _WIN32 signal(SIGQUIT, ut_sighandler); signal(SIGBUS, ut_sighandler); #endif } vmem-1.8/src/test/unittest/ut_file.c000066400000000000000000000200251361505074100175470ustar00rootroot00000000000000/* * Copyright 2014-2019, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ut_file.c -- unit test file operations */ #include "unittest.h" /* * ut_open -- an open that cannot return < 0 */ int ut_open(const char *file, int line, const char *func, const char *path, int flags, ...) { va_list ap; int mode; va_start(ap, flags); mode = va_arg(ap, int); int retval = os_open(path, flags, mode); va_end(ap); if (retval < 0) ut_fatal(file, line, func, "!open: %s", path); return retval; } #ifdef _WIN32 /* * ut_wopen -- a _wopen that cannot return < 0 */ int ut_wopen(const char *file, int line, const char *func, const wchar_t *path, int flags, ...) 
{ va_list ap; int mode; va_start(ap, flags); mode = va_arg(ap, int); int retval = _wopen(path, flags, mode); va_end(ap); if (retval < 0) ut_fatal(file, line, func, "!wopen: %s", ut_toUTF8(path)); return retval; } #endif /* * ut_close -- a close that cannot return -1 */ int ut_close(const char *file, int line, const char *func, int fd) { int retval = os_close(fd); if (retval != 0) ut_fatal(file, line, func, "!close: %d", fd); return retval; } /* * ut_fopen --an fopen that cannot return != 0 */ FILE * ut_fopen(const char *file, int line, const char *func, const char *path, const char *mode) { FILE *retval = os_fopen(path, mode); if (retval == NULL) ut_fatal(file, line, func, "!fopen: %s", path); return retval; } /* * ut_fclose -- a fclose that cannot return != 0 */ int ut_fclose(const char *file, int line, const char *func, FILE *stream) { int retval = os_fclose(stream); if (retval != 0) { ut_fatal(file, line, func, "!fclose: 0x%llx", (unsigned long long)stream); } return retval; } /* * ut_unlink -- an unlink that cannot return -1 */ int ut_unlink(const char *file, int line, const char *func, const char *path) { int retval = os_unlink(path); if (retval != 0) ut_fatal(file, line, func, "!unlink: %s", path); return retval; } /* * ut_posix_fallocate -- a posix_fallocate that cannot return -1 */ int ut_posix_fallocate(const char *file, int line, const char *func, int fd, os_off_t offset, os_off_t len) { int retval = os_posix_fallocate(fd, offset, len); if (retval != 0) { errno = retval; ut_fatal(file, line, func, "!fallocate: fd %d offset 0x%llx len %llu", fd, (unsigned long long)offset, (unsigned long long)len); } return retval; } /* * ut_write -- a write that can't return -1 */ size_t ut_write(const char *file, int line, const char *func, int fd, const void *buf, size_t count) { #ifndef _WIN32 ssize_t retval = write(fd, buf, count); #else /* * XXX - do multiple write() calls in a loop? * Or just use native Windows API? */ if (count > UINT_MAX) ut_fatal(file, line, func, "read: count > UINT_MAX (%zu > %u)", count, UINT_MAX); ssize_t retval = _write(fd, buf, (unsigned)count); #endif if (retval < 0) ut_fatal(file, line, func, "!write: %d", fd); return (size_t)retval; } /* * ut_read -- a read that can't return -1 */ size_t ut_read(const char *file, int line, const char *func, int fd, void *buf, size_t count) { #ifndef _WIN32 ssize_t retval = read(fd, buf, count); #else /* * XXX - do multiple read() calls in a loop? * Or just use native Windows API? 
*/ if (count > UINT_MAX) ut_fatal(file, line, func, "read: count > UINT_MAX (%zu > %u)", count, UINT_MAX); ssize_t retval = read(fd, buf, (unsigned)count); #endif if (retval < 0) ut_fatal(file, line, func, "!read: %d", fd); return (size_t)retval; } /* * ut_lseek -- an lseek that can't return -1 */ os_off_t ut_lseek(const char *file, int line, const char *func, int fd, os_off_t offset, int whence) { os_off_t retval = os_lseek(fd, offset, whence); if (retval == -1) ut_fatal(file, line, func, "!lseek: %d", fd); return retval; } /* * ut_fstat -- a fstat that cannot return -1 */ int ut_fstat(const char *file, int line, const char *func, int fd, os_stat_t *st_bufp) { int retval = os_fstat(fd, st_bufp); if (retval < 0) ut_fatal(file, line, func, "!fstat: %d", fd); #ifdef _WIN32 /* clear unused bits to avoid confusion */ st_bufp->st_mode &= 0600; #endif return retval; } /* * ut_stat -- a stat that cannot return -1 */ int ut_stat(const char *file, int line, const char *func, const char *path, os_stat_t *st_bufp) { int retval = os_stat(path, st_bufp); if (retval < 0) ut_fatal(file, line, func, "!stat: %s", path); #ifdef _WIN32 /* clear unused bits to avoid confusion */ st_bufp->st_mode &= 0600; #endif return retval; } #ifdef _WIN32 /* * ut_statW -- a stat that cannot return -1 */ int ut_statW(const char *file, int line, const char *func, const wchar_t *path, os_stat_t *st_bufp) { int retval = ut_util_statW(path, st_bufp); if (retval < 0) ut_fatal(file, line, func, "!stat: %S", path); #ifdef _WIN32 /* clear unused bits to avoid confusion */ st_bufp->st_mode &= 0600; #endif return retval; } #endif /* * ut_mmap -- a mmap call that cannot return MAP_FAILED */ void * ut_mmap(const char *file, int line, const char *func, void *addr, size_t length, int prot, int flags, int fd, os_off_t offset) { void *ret_addr = mmap(addr, length, prot, flags, fd, offset); if (ret_addr == MAP_FAILED) { const char *error = ""; #ifdef _WIN32 /* * XXX: on Windows mmap is implemented and exported by libpmem */ // error = pmem_errormsg(); #endif ut_fatal(file, line, func, "!mmap: addr=0x%llx length=0x%zx prot=%d flags=%d fd=%d offset=0x%llx %s", (unsigned long long)addr, length, prot, flags, fd, (unsigned long long)offset, error); } return ret_addr; } /* * ut_munmap -- a munmap call that cannot return -1 */ int ut_munmap(const char *file, int line, const char *func, void *addr, size_t length) { int retval = munmap(addr, length); if (retval < 0) ut_fatal(file, line, func, "!munmap: addr=0x%llx length=0x%zx", (unsigned long long)addr, length); return retval; } /* * ut_mprotect -- a mprotect call that cannot return -1 */ int ut_mprotect(const char *file, int line, const char *func, void *addr, size_t len, int prot) { int retval = mprotect(addr, len, prot); if (retval < 0) ut_fatal(file, line, func, "!mprotect: addr=0x%llx length=0x%zx prot=0x%x", (unsigned long long)addr, len, prot); return retval; } /* * ut_ftruncate -- a ftruncate that cannot return -1 */ int ut_ftruncate(const char *file, int line, const char *func, int fd, os_off_t length) { int retval = os_ftruncate(fd, length); if (retval < 0) ut_fatal(file, line, func, "!ftruncate: %d %llu", fd, (unsigned long long)length); return retval; } vmem-1.8/src/test/unittest/ut_pthread.c000066400000000000000000000045601361505074100202650ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of 
source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ut_pthread.c -- unit test wrappers for pthread routines */ #include "unittest.h" /* * ut_thread_create -- a os_thread_create that cannot return an error */ int ut_thread_create(const char *file, int line, const char *func, os_thread_t *__restrict thread, const os_thread_attr_t *__restrict attr, void *(*start_routine)(void *), void *__restrict arg) { if ((errno = os_thread_create(thread, attr, start_routine, arg)) != 0) ut_fatal(file, line, func, "!os_thread_create"); return 0; } /* * ut_thread_join -- a os_thread_join that cannot return an error */ int ut_thread_join(const char *file, int line, const char *func, os_thread_t *thread, void **value_ptr) { if ((errno = os_thread_join(thread, value_ptr)) != 0) ut_fatal(file, line, func, "!os_thread_join"); return 0; } vmem-1.8/src/test/unittest/ut_signal.c000066400000000000000000000073551361505074100201200ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * ut_signal.c -- unit test signal operations */ #include "unittest.h" #ifdef _WIN32 /* * On Windows, Access Violation exception does not raise SIGSEGV signal. * The trick is to catch the exception and... call the signal handler. */ /* * Sigactions[] - allows registering more than one signal/exception handler */ static struct sigaction Sigactions[NSIG]; /* * exception_handler -- called for unhandled exceptions */ static LONG CALLBACK exception_handler(_In_ PEXCEPTION_POINTERS ExceptionInfo) { DWORD excode = ExceptionInfo->ExceptionRecord->ExceptionCode; if (excode == EXCEPTION_ACCESS_VIOLATION) Sigactions[SIGSEGV].sa_handler(SIGSEGV); return EXCEPTION_CONTINUE_EXECUTION; } /* * signal_handler_wrapper -- (internal) wrapper for user-defined signal handler * * Before the specified handler function is executed, signal disposition * is reset to SIG_DFL. This wrapper allows to handle subsequent signals * without the need to set the signal disposition again. */ static void signal_handler_wrapper(int signum) { _crt_signal_t retval = signal(signum, signal_handler_wrapper); if (retval == SIG_ERR) UT_FATAL("!signal: %d", signum); if (Sigactions[signum].sa_handler) Sigactions[signum].sa_handler(signum); else UT_FATAL("handler for signal: %d is not defined", signum); } #endif /* * ut_sigaction -- a sigaction that cannot return < 0 */ int ut_sigaction(const char *file, int line, const char *func, int signum, struct sigaction *act, struct sigaction *oldact) { #ifndef _WIN32 int retval = sigaction(signum, act, oldact); if (retval != 0) ut_fatal(file, line, func, "!sigaction: %s", os_strsignal(signum)); return retval; #else UT_ASSERT(signum < NSIG); os_mutex_lock(&Sigactions_lock); if (oldact) *oldact = Sigactions[signum]; if (act) Sigactions[signum] = *act; os_mutex_unlock(&Sigactions_lock); if (signum == SIGABRT) { ut_suppress_errmsg(); } if (signum == SIGSEGV) { AddVectoredExceptionHandler(0, exception_handler); } _crt_signal_t retval = signal(signum, signal_handler_wrapper); if (retval == SIG_ERR) ut_fatal(file, line, func, "!signal: %d", signum); if (oldact != NULL) oldact->sa_handler = retval; return 0; #endif } vmem-1.8/src/test/util_file_create/000077500000000000000000000000001361505074100173755ustar00rootroot00000000000000vmem-1.8/src/test/util_file_create/.gitignore000066400000000000000000000000211361505074100213560ustar00rootroot00000000000000util_file_create vmem-1.8/src/test/util_file_create/Makefile000066400000000000000000000033141361505074100210360ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/Makefile -- build util_file_create unit test # TARGET = util_file_create OBJS = util_file_create.o LIBPMEMCOMMON=y include ../Makefile.inc vmem-1.8/src/test/util_file_create/README000066400000000000000000000010151361505074100202520ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/util_file_create/README. This directory contains a unit test for util_file_create(). The program in util_file_create.c takes a minimal pool size along with the list of len:path pairs. For example: ./util_file_create minlen len1:path1 This will call util_file_create() on path1 using len1 as pool size. minlen and len are interpreted as a decimal values unless they start with 0x. If len is zero, the file pointed by path must exist. Otherwise, the file is created. vmem-1.8/src/test/util_file_create/TEST0000077500000000000000000000043001361505074100201570ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/TEST0 -- unit test for util_file_create() # . ../unittest/unittest.sh setup MIN_POOL=0x4000 truncate -s 32K $DIR/testfile1 mkdir $DIR/testdir1 ln -s $DIR/testfile0 $DIR/testlink1 ln -s $DIR/testfile1 $DIR/testlink2 ln -s $DIR/testdir1 $DIR/testlink3 ln -s /dev/zero $DIR/testlink4 expect_normal_exit ./util_file_create$EXESUFFIX $MIN_POOL \ $MIN_POOL:$DIR/testdir1\ $MIN_POOL:/dev/zero\ $MIN_POOL:$DIR/testlink1\ $MIN_POOL:$DIR/testlink2\ $MIN_POOL:$DIR/testlink3\ $MIN_POOL:$DIR/testlink4\ $MIN_POOL:$DIR/testfile1\ 0x1000:$DIR/testfile2\ $MIN_POOL:$DIR/testfile3 check pass vmem-1.8/src/test/util_file_create/TEST0w.PS1000066400000000000000000000035571361505074100207620ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/TEST0 -- unit test for util_file_create() # . 
..\unittest\unittest.ps1 setup create_holey_file 32K $DIR\testfile1 mkdir $DIR\testdir1 > $null expect_normal_exit $Env:EXE_DIR\util_file_create$Env:EXESUFFIX 0x4000 ` 0x4000:$DIR\testdir1 ` 0x4000:NUL ` 0x4000:$DIR\testfile1 check pass vmem-1.8/src/test/util_file_create/TEST1000077500000000000000000000035441361505074100201710ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/TEST1 -- unit test for util_file_create() # . ../unittest/unittest.sh require_no_superuser setup MIN_POOL=0x4000 mkdir $DIR/testdir1 chmod -w $DIR/testdir1 expect_normal_exit ./util_file_create$EXESUFFIX $MIN_POOL \ $MIN_POOL:$DIR/testdir1/testfile check pass vmem-1.8/src/test/util_file_create/TEST1.PS1000066400000000000000000000041241361505074100205630ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/TEST1 -- unit test for util_file_create() # # . ..\unittest\unittest.ps1 require_no_superuser # icacls does have problems with handling long paths in the correct way. require_short_path setup mkdir $DIR\testdir1 > $null # remove write permissions & icacls $DIR/testdir1 /deny ${Env:USERNAME}:W >$null expect_normal_exit $Env:EXE_DIR\util_file_create$Env:EXESUFFIX 0x4000 ` 0x4000:$DIR\testdir1\testfile # grant full permissions so test code can cleanup & icacls $DIR/testdir1 /grant ${Env:USERNAME}:F >$null check pass vmem-1.8/src/test/util_file_create/TEST2000077500000000000000000000035701361505074100201710ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/TEST2 -- unit test for util_file_create() # . ../unittest/unittest.sh setup # without fallocate this test takes forever require_native_fallocate $DIR/testfile1 MIN_POOL=0x4000 expect_normal_exit ./util_file_create$EXESUFFIX $MIN_POOL \ 0x7FFFFFFFFFFFFFFF:$DIR/testfile check pass vmem-1.8/src/test/util_file_create/TEST2.PS1000066400000000000000000000034151361505074100205660ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_create/TEST2 -- unit test for util_file_create() # # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\util_file_create$Env:EXESUFFIX 0x4000 ` 0x7FFFFFFFFFFFFFFF:$DIR\testfile check pass vmem-1.8/src/test/util_file_create/out0.log.match000066400000000000000000000013041361505074100220600ustar00rootroot00000000000000util_file_create/TEST0: START: util_file_create ./util_file_create$(nW) 0x4000 0x4000:$(nW)/testdir1 0x4000:/dev/zero 0x4000:$(nW)/testlink1 0x4000:$(nW)/testlink2 0x4000:$(nW)/testlink3 0x4000:$(nW)/testlink4 0x4000:$(nW)/testfile1 0x1000:$(nW)/testfile2 0x4000:$(nW)/testfile3 $(nW)/testdir1: util_file_create: File exists /dev/zero: util_file_create: File exists $(nW)/testlink1: util_file_create: File exists $(nW)/testlink2: util_file_create: File exists $(nW)/testlink3: util_file_create: File exists $(nW)/testlink4: util_file_create: File exists $(nW)/testfile1: util_file_create: File exists $(nW)/testfile2: util_file_create: Invalid argument $(nW)/testfile3: created util_file_create/TEST0: DONE vmem-1.8/src/test/util_file_create/out0w.log.match000066400000000000000000000004231361505074100222500ustar00rootroot00000000000000util_file_create$(nW)TEST0w: START: util_file_create $(nW)util_file_create$(nW) 0x4000 0x4000:$(nW)testdir1 0x4000:$(nW)NUL 0x4000:$(nW)testfile1 $(nW)testdir1: util_file_create: $(*) NUL: $(*) $(nW)testfile1: util_file_create: File exists util_file_create$(nW)TEST0w: DONE vmem-1.8/src/test/util_file_create/out1.log.match000066400000000000000000000003321361505074100220610ustar00rootroot00000000000000util_file_create$(nW)TEST1: START: util_file_create $(nW)util_file_create$(nW) 0x4000 0x4000:$(nW)testdir1$(nW)testfile $(nW)testdir1$(nW)testfile: util_file_create: Permission denied util_file_create$(nW)TEST1: DONE vmem-1.8/src/test/util_file_create/out2.log.match000066400000000000000000000002551361505074100220660ustar00rootroot00000000000000util_file_create$(nW)TEST2: START: util_file_create $(nW)util_file_create$(nW) 0x4000 0x7FFFFFFFFFFFFFFF:$(nW)testfile $(nW)testfile: $(*) util_file_create$(nW)TEST2: DONE vmem-1.8/src/test/util_file_create/util_file_create.c000066400000000000000000000044501361505074100230430ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without 
* modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_file_create.c -- unit test for util_file_create() * * usage: util_file_create minlen len:path [len:path]... */ #include "unittest.h" #include "file.h" int main(int argc, char *argv[]) { START(argc, argv, "util_file_create"); if (argc < 3) UT_FATAL("usage: %s minlen len:path...", argv[0]); char *fname; size_t minsize = strtoul(argv[1], &fname, 0); for (int arg = 2; arg < argc; arg++) { size_t size = strtoul(argv[arg], &fname, 0); if (*fname != ':') UT_FATAL("usage: %s minlen len:path...", argv[0]); fname++; int fd; if ((fd = util_file_create(fname, size, minsize)) == -1) UT_OUT("!%s: util_file_create", fname); else { UT_OUT("%s: created", fname); os_close(fd); } } DONE(NULL); } vmem-1.8/src/test/util_file_create/util_file_create.vcxproj000066400000000000000000000065151361505074100243200ustar00rootroot00000000000000 Debug x64 Release x64 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {D829DB63-E046-474D-8EA3-43A6659294D8} Win32Proj util_file_create 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/util_file_create/util_file_create.vcxproj.filters000066400000000000000000000025401361505074100257610ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {2d237952-c50b-4d8c-a83f-e28a96d88887} {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hh;hpp;hxx;hm;inl;inc;xsd Source Files Test Scripts Test Scripts Test Scripts Match Files Match Files Match Files vmem-1.8/src/test/util_file_open/000077500000000000000000000000001361505074100170735ustar00rootroot00000000000000vmem-1.8/src/test/util_file_open/.gitignore000066400000000000000000000000171361505074100210610ustar00rootroot00000000000000util_file_open vmem-1.8/src/test/util_file_open/Makefile000066400000000000000000000033041361505074100205330ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_open/Makefile -- build util_file_open unit test # TARGET = util_file_open OBJS = util_file_open.o LIBPMEMCOMMON=y include ../Makefile.inc vmem-1.8/src/test/util_file_open/README000066400000000000000000000006271361505074100177600ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/util_file_open/README. This directory contains a unit test for util_file_open(). The program in util_file_open.c takes a minimal pool size along with the list of file paths. For example: ./util_file_open minlen path1 path2 This will call util_file_open() on path1 and path2. minlen is interpreted as a decimal value unless it starts with 0x. vmem-1.8/src/test/util_file_open/TEST0000077500000000000000000000040771361505074100176700ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
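#
# For illustration (based on the README in this directory and on
# out0.log.match): util_file_open takes a minimum length followed by a
# list of paths, e.g.
#
#	./util_file_open 0x4000 path1 path2
#
# and prints "<path>: open, len <size>" for each file that can be opened
# with at least the given length, or an error (e.g. "Is a directory",
# "Invalid argument") for paths that cannot.
#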
# # # src/test/util_file_open/TEST0 -- unit test for util_file_open() # . ../unittest/unittest.sh setup MIN_POOL=0x4000 truncate -s 1K $DIR/testfile1 truncate -s 32K $DIR/testfile2 mkdir $DIR/testdir1 ln -s testfile0 $DIR/testlink0 ln -s testdir1 $DIR/testlink1 ln -s /dev/zero $DIR/testlink2 expect_normal_exit ./util_file_open$EXESUFFIX $MIN_POOL \ $DIR/testdir1\ /dev/zero\ $DIR/testlink0\ $DIR/testlink1\ $DIR/testlink2\ $DIR/testfile0\ $DIR/testfile1\ $DIR/testfile2 check pass vmem-1.8/src/test/util_file_open/TEST0w.PS1000066400000000000000000000036411361505074100204520ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_open/TEST0 -- unit test for util_file_open() # . ..\unittest\unittest.ps1 setup create_holey_file 1K $DIR\testfile1 create_holey_file 32K $DIR\testfile2 mkdir $DIR\testdir1 > $null expect_normal_exit $Env:EXE_DIR\util_file_open$Env:EXESUFFIX 0x4000 ` $DIR\testdir1 ` NUL ` $DIR\testfile0 ` $DIR\testfile1 ` $DIR\testfile2 check pass vmem-1.8/src/test/util_file_open/TEST1000077500000000000000000000040131361505074100176570ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_open/TEST1 -- unit test for util_file_open() # . ../unittest/unittest.sh require_no_superuser setup MIN_POOL=0x4000 truncate -s 32K $DIR/testfile1 chmod -rw $DIR/testfile1 truncate -s 32K $DIR/testfile2 chmod -w $DIR/testfile2 ln -s testfile1 $DIR/testlink1 ln -s testfile2 $DIR/testlink2 expect_normal_exit ./util_file_open$EXESUFFIX $MIN_POOL \ $DIR/testlink1\ $DIR/testlink2\ $DIR/testfile1\ $DIR/testfile2\ check pass vmem-1.8/src/test/util_file_open/TEST1w.PS1000066400000000000000000000044161361505074100204540ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_file_open/TEST1 -- unit test for util_file_open() # . ..\unittest\unittest.ps1 require_no_superuser # icacls does have problems with handling long paths in the correct way. 
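# (The icacls calls below are the Windows counterpart of the chmod -rw/-w
# steps in TEST1: "/deny ...:W" drops write access before running
# util_file_open, and "/grant ...:F" restores full access afterwards so the
# test files can be cleaned up.)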
require_short_path setup create_holey_file 32K $DIR\testfile1 # remove write permissions & icacls $DIR\testfile1 /deny ${Env:USERNAME}:W >$null create_holey_file 32K $DIR\testfile2 # remove write permissions & icacls $DIR\testfile2 /deny ${Env:USERNAME}:W >$null expect_normal_exit $Env:EXE_DIR\util_file_open$Env:EXESUFFIX 0x4000 ` $DIR\testfile1 ` $DIR\testfile2 # grant full permissions so test code can cleanup & icacls $DIR\testfile1 /grant ${Env:USERNAME}:F >$null & icacls $DIR\testfile2 /grant ${Env:USERNAME}:F >$null check pass vmem-1.8/src/test/util_file_open/out0.log.match000066400000000000000000000011441361505074100215600ustar00rootroot00000000000000util_file_open/TEST0: START: util_file_open ./util_file_open$(nW) 0x4000 $(nW)/testdir1 /dev/zero $(nW)/testlink0 $(nW)/testlink1 $(nW)/testlink2 $(nW)/testfile0 $(nW)/testfile1 $(nW)/testfile2 $(nW)/testdir1: util_file_open: Is a directory /dev/zero: util_file_open: Invalid argument $(nW)/testlink0: util_file_open: No such file or directory $(nW)/testlink1: util_file_open: Is a directory $(nW)/testlink2: util_file_open: Invalid argument $(nW)/testfile0: util_file_open: No such file or directory $(nW)/testfile1: util_file_open: Invalid argument $(nW)/testfile2: open, len 32768 util_file_open/TEST0: DONE vmem-1.8/src/test/util_file_open/out0w.log.match000066400000000000000000000005771361505074100217600ustar00rootroot00000000000000util_file_open$(nW)TEST0w: START: util_file_open $(nW)util_file_open$(nW) 0x4000 $(nW)testdir1 $(nW)NUL $(nW)testfile0 $(nW)testfile1 $(nW)testfile2 $(nW)testdir1: util_file_open: $(*) NUL: util_file_open: $(*) $(nW)testfile0: util_file_open: No such file or directory $(nW)testfile1: util_file_open: Invalid argument $(nW)testfile2: open, len 32768 util_file_open$(nW)TEST0w: DONE vmem-1.8/src/test/util_file_open/out1.log.match000066400000000000000000000005611361505074100215630ustar00rootroot00000000000000util_file_open/TEST1: START: util_file_open ./util_file_open$(nW) 0x4000 $(nW)/testlink1 $(nW)/testlink2 $(nW)/testfile1 $(nW)/testfile2 $(nW)/testlink1: util_file_open: Permission denied $(nW)/testlink2: util_file_open: Permission denied $(nW)/testfile1: util_file_open: Permission denied $(nW)/testfile2: util_file_open: Permission denied util_file_open/TEST1: DONE vmem-1.8/src/test/util_file_open/out1w.log.match000066400000000000000000000003641361505074100217530ustar00rootroot00000000000000util_file_open$(nW)TEST1w: START: util_file_open $(nW)util_file_open$(nW) 0x4000 $(nW)testfile1 $(nW)testfile2 $(nW)testfile1: util_file_open: Permission denied $(nW)testfile2: util_file_open: Permission denied util_file_open$(nW)TEST1w: DONE vmem-1.8/src/test/util_file_open/util_file_open.c000066400000000000000000000042751361505074100222440ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_file_open.c -- unit test for util_file_open() * * usage: util_file_open minlen path [path]... */ #include "unittest.h" #include "file.h" int main(int argc, char *argv[]) { START(argc, argv, "util_file_open"); if (argc < 3) UT_FATAL("usage: %s minlen path...", argv[0]); char *fname; size_t minsize = strtoul(argv[1], &fname, 0); for (int arg = 2; arg < argc; arg++) { size_t size = 0; int fd = util_file_open(argv[arg], &size, minsize, O_RDWR); if (fd == -1) UT_OUT("!%s: util_file_open", argv[arg]); else { UT_OUT("%s: open, len %zu", argv[arg], size); os_close(fd); } } DONE(NULL); } vmem-1.8/src/test/util_file_open/util_file_open.vcxproj000066400000000000000000000064021361505074100235070ustar00rootroot00000000000000 Debug x64 Release x64 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {715EADD7-0FFE-4F1F-94E7-49302968DF79} Win32Proj util_file_open 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/util_file_open/util_file_open.vcxproj.filters000066400000000000000000000022741361505074100251610ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {2d237952-c50b-4d8c-a83f-e28a96d88887} {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hh;hpp;hxx;hm;inl;inc;xsd Source Files Test Scripts Test Scripts Match Files Match Files vmem-1.8/src/test/util_is_absolute/000077500000000000000000000000001361505074100174445ustar00rootroot00000000000000vmem-1.8/src/test/util_is_absolute/.gitignore000066400000000000000000000000211361505074100214250ustar00rootroot00000000000000util_is_absolute vmem-1.8/src/test/util_is_absolute/Makefile000066400000000000000000000033061361505074100211060ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_is_absolute/Makefile -- check if path is absolute # TARGET = util_is_absolute OBJS = util_is_absolute.o LIBPMEMCOMMON=y include ../Makefile.inc vmem-1.8/src/test/util_is_absolute/TEST0000077500000000000000000000034711361505074100202360ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_is_absolute/TEST0 -- unit test for util_is_absolute_path() # # NOTE: This is for Linux only! # . ../unittest/unittest.sh setup expect_normal_exit ./util_is_absolute$EXESUFFIX \ "/foo/bar" \ "foo/bar" \ "/" \ "./foo/bar" check pass vmem-1.8/src/test/util_is_absolute/TEST1.PS1000066400000000000000000000035421361505074100206350ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/test/util_is_absolute/TEST1 -- unit test for util_is_absolute_path # # NOTE: This is for Windows only! # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\util_is_absolute$Env:EXESUFFIX ` C:\foo\bar ` \foo\bar ` foo\bar ` D:\foo\bar ` /foo/bar ` e: ` E:\ check pass vmem-1.8/src/test/util_is_absolute/out0.log.match000066400000000000000000000002731361505074100221330ustar00rootroot00000000000000util_is_absolute/TEST0: START: util_is_absolute $(nW)util_is_absolute$(nW) /foo/bar foo/bar / ./foo/bar "/foo/bar" - 1 "foo/bar" - 0 "/" - 1 "./foo/bar" - 0 util_is_absolute/TEST0: DONE vmem-1.8/src/test/util_is_absolute/out1.log.match000066400000000000000000000004011361505074100221250ustar00rootroot00000000000000util_is_absolute/TEST1: START: util_is_absolute $(nW)util_is_absolute$(nW) C:\foo\bar \foo\bar foo\bar D:\foo\bar /foo/bar e: E:\ "C:\foo\bar" - 1 "\foo\bar" - 1 "foo\bar" - 0 "D:\foo\bar" - 1 "/foo/bar" - 0 "e:" - 1 "E:\" - 1 util_is_absolute/TEST1: DONE vmem-1.8/src/test/util_is_absolute/util_is_absolute.c000066400000000000000000000036461361505074100231670ustar00rootroot00000000000000/* * Copyright 2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_is_absolute.c -- unit test for testing if path is absolute * * usage: util_is_absolute path [path ...] 
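 *
 * For example, with the inputs used by TEST0 in this directory, the
 * expected output (as captured in out0.log.match) is:
 *
 *	./util_is_absolute /foo/bar foo/bar / ./foo/bar
 *	"/foo/bar" - 1
 *	"foo/bar" - 0
 *	"/" - 1
 *	"./foo/bar" - 0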
*/ #include "unittest.h" #include "file.h" int main(int argc, char *argv[]) { START(argc, argv, "util_is_absolute"); for (int i = 1; i < argc; i++) { UT_OUT("\"%s\" - %d", argv[i], util_is_absolute_path(argv[i])); } DONE(NULL); } vmem-1.8/src/test/util_is_absolute/util_is_absolute.vcxproj000066400000000000000000000062321361505074100244320ustar00rootroot00000000000000 Debug x64 Release x64 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {C973CD39-D63B-4F5C-BE1D-DED17388B5A4} Win32Proj util_is_absolute 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/util_is_absolute/util_is_absolute.vcxproj.filters000066400000000000000000000020041361505074100260720ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {aa2a2c2a-cd47-4f1a-841b-e3d019df7f13} match {5c4a9c4d-89c3-4de4-9f98-2a940a42e2d4} ps1 Source Files Match Files Test Scripts vmem-1.8/src/test/util_is_zeroed/000077500000000000000000000000001361505074100171165ustar00rootroot00000000000000vmem-1.8/src/test/util_is_zeroed/.gitignore000066400000000000000000000000171361505074100211040ustar00rootroot00000000000000util_is_zeroed vmem-1.8/src/test/util_is_zeroed/Makefile000066400000000000000000000033001361505074100205520ustar00rootroot00000000000000# # Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_is_zeroed/Makefile -- build util_is_zeroed unit test # TARGET = util_is_zeroed OBJS = util_is_zeroed.o LIBPMEMCOMMON=y include ../Makefile.inc vmem-1.8/src/test/util_is_zeroed/TEST0000077500000000000000000000034611361505074100177070ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2018-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_is_zeroed/TEST0 -- unit test for util_is_zeroed # . ../unittest/unittest.sh require_build_type debug nondebug # covered by TEST1 configure_valgrind memcheck force-disable setup expect_normal_exit ./util_is_zeroed$EXESUFFIX pass vmem-1.8/src/test/util_is_zeroed/TEST0.PS1000066400000000000000000000033601361505074100203040ustar00rootroot00000000000000# # Copyright 2018-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_is_zeroed/TEST0 -- unit test for util_is_zeroed # . 
..\unittest\unittest.ps1 require_build_type debug nondebug setup expect_normal_exit $Env:EXE_DIR\util_is_zeroed$Env:EXESUFFIX pass vmem-1.8/src/test/util_is_zeroed/TEST1000077500000000000000000000034351361505074100177110ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2018-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_is_zeroed/TEST1 -- unit test for util_is_zeroed # . ../unittest/unittest.sh require_build_type debug nondebug configure_valgrind memcheck force-enable setup expect_normal_exit ./util_is_zeroed$EXESUFFIX pass vmem-1.8/src/test/util_is_zeroed/util_is_zeroed.c000066400000000000000000000052271361505074100223100ustar00rootroot00000000000000/* * Copyright 2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_is_zeroed.c -- unit test for util_is_zeroed */ #include "unittest.h" #include "util.h" int main(int argc, char *argv[]) { START(argc, argv, "util_is_zeroed"); util_init(); char bigbuf[3000]; memset(bigbuf + 0, 0x11, 1000); memset(bigbuf + 1000, 0x0, 1000); memset(bigbuf + 2000, 0xff, 1000); UT_ASSERTeq(util_is_zeroed(bigbuf, 1000), 0); UT_ASSERTeq(util_is_zeroed(bigbuf + 1000, 1000), 1); UT_ASSERTeq(util_is_zeroed(bigbuf + 2000, 1000), 0); UT_ASSERTeq(util_is_zeroed(bigbuf, 0), 1); UT_ASSERTeq(util_is_zeroed(bigbuf + 999, 1000), 0); UT_ASSERTeq(util_is_zeroed(bigbuf + 1000, 1001), 0); UT_ASSERTeq(util_is_zeroed(bigbuf + 1001, 1000), 0); char *buf = bigbuf + 1000; buf[0] = 1; UT_ASSERTeq(util_is_zeroed(buf, 1000), 0); memset(buf, 0, 1000); buf[1] = 1; UT_ASSERTeq(util_is_zeroed(buf, 1000), 0); memset(buf, 0, 1000); buf[239] = 1; UT_ASSERTeq(util_is_zeroed(buf, 1000), 0); memset(buf, 0, 1000); buf[999] = 1; UT_ASSERTeq(util_is_zeroed(buf, 1000), 0); memset(buf, 0, 1000); buf[1000] = 1; UT_ASSERTeq(util_is_zeroed(buf, 1000), 1); DONE(NULL); } vmem-1.8/src/test/util_is_zeroed/util_is_zeroed.vcxproj000066400000000000000000000066051361505074100235620ustar00rootroot00000000000000 Debug x64 Release x64 {FD726AA3-D4FA-4597-B435-08CC7752888D} Win32Proj util_is_zeroed 10.0.16299.0 Application true v140 Application false v140 $(SolutionDir)\libpmem;%(AdditionalIncludeDirectories) $(SolutionDir)\libpmem;%(AdditionalIncludeDirectories) {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/util_is_zeroed/util_is_zeroed.vcxproj.filters000066400000000000000000000013451361505074100252250ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {cb0140fc-b255-4ef9-a417-11f1a13525ba} Source Files Test Scripts vmem-1.8/src/test/util_map_proc/000077500000000000000000000000001361505074100167335ustar00rootroot00000000000000vmem-1.8/src/test/util_map_proc/.gitignore000066400000000000000000000000161361505074100207200ustar00rootroot00000000000000util_map_proc vmem-1.8/src/test/util_map_proc/Makefile000066400000000000000000000033221361505074100203730ustar00rootroot00000000000000# # Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/Makefile -- build util_map_proc unit test # TARGET = util_map_proc OBJS = util_map_proc.o LIBPMEMCOMMON=y include ../Makefile.inc LIBS += $(LIBDL) vmem-1.8/src/test/util_map_proc/README000066400000000000000000000010001361505074100176020ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/util_map_proc/README. This test is Linux specific. This directory contains a unit test for util_map_hint(). The program in util_map_proc.c takes a fake /proc/self/maps file as an argument, along with a length. It arranges for util_map_hint() to open the fake /proc file when looking up the range. usage: util_map_proc maps_file len [len]... len is interpreted as a decimal value unless it starts with 0x. Each len is tested against the given maps-file. vmem-1.8/src/test/util_map_proc/TEST0000077500000000000000000000037401361505074100175240ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST0 -- unit test for util_map /proc parsing # . 
../unittest/unittest.sh configure_valgrind memcheck force-disable setup # there should be an unused region for each length mapfile="maps_all_"$(uname -s | tr "[:upper:]" "[:lower:]") expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile\ 0x0000100000\ 0x0001000000\ 0x0040000000\ 0x0400000000\ 0x4000000000 check pass vmem-1.8/src/test/util_map_proc/TEST1000077500000000000000000000037451361505074100175320ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST1 -- unit test for util_map /proc parsing # . ../unittest/unittest.sh configure_valgrind memcheck force-disable setup # there should be no hint address for any range length mapfile="maps_none_"$(uname -s | tr "[:upper:]" "[:lower:]") expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile\ 0x0000100000\ 0x0001000000\ 0x0040000000\ 0x0400000000\ 0x4000000000 check pass vmem-1.8/src/test/util_map_proc/TEST2000077500000000000000000000037461361505074100175340ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST2 -- unit test for util_map /proc parsing # . ../unittest/unittest.sh setup # due to alignment requirements there should be no hint address for # the last two range lengths mapfile="maps_align_"$(uname -s | tr "[:upper:]" "[:lower:]") expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile\ 0x0001000000\ 0x0001100000\ 0x0001110000\ 0x0001110800\ 0x0001111000 check pass vmem-1.8/src/test/util_map_proc/TEST3000077500000000000000000000040721361505074100175260ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST3 -- unit test for util_map /proc parsing # . ../unittest/unittest.sh configure_valgrind memcheck force-disable setup # unused region at the end of the address space # due to alignment requirements there should be no hint address for # the last range length mapfile="maps_end_"$(uname -s | tr "[:upper:]" "[:lower:]") expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile\ 0x0000100000\ 0x0001000000\ 0x003F000000\ 0x003FFFF000\ 0x0040000000\ check pass vmem-1.8/src/test/util_map_proc/TEST4000077500000000000000000000040221361505074100175220ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST4 -- unit test for util_map /proc parsing # . ../unittest/unittest.sh require_procfs configure_valgrind memcheck force-disable setup export PMEM_MMAP_HINT=2E000000000 # there should be an unused region for each length mapfile="maps_all_"$(uname -s | tr "[:upper:]" "[:lower:]") expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile\ 0x0000100000\ 0x0001000000\ 0x0040000000\ 0x0400000000\ 0x4000000000 check pass vmem-1.8/src/test/util_map_proc/TEST5000077500000000000000000000041171361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST5 -- unit test for util_map /proc parsing # . 
../unittest/unittest.sh require_procfs configure_valgrind memcheck force-disable setup export TEST_LOG_LEVEL=10 export TEST_LOG_FILE=./test$UNITTEST_NUM.log export PMEM_MMAP_HINT=0 # there should be an unused region for each length mapfile="maps_all_"$(uname -s | tr "[:upper:]" "[:lower:]") expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile\ 0x0000100000\ 0x0001000000\ 0x0040000000\ 0x0400000000\ 0x4000000000 check pass vmem-1.8/src/test/util_map_proc/TEST6000077500000000000000000000035431361505074100175330ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_map_proc/TEST6 -- unit test for util_map /proc parsing # . 
../unittest/unittest.sh require_procfs configure_valgrind memcheck force-disable setup export PMEM_MMAP_HINT=0 mapfile="mapfile_nonexistent" expect_normal_exit ./util_map_proc$EXESUFFIX $mapfile 0x12345678 check pass vmem-1.8/src/test/util_map_proc/maps_align_freebsd000066400000000000000000000104141361505074100224620ustar00rootroot0000000000000000400000 0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000 0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000 0061d000 rw-p 00000000 00:00 0 0081c000 0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000 026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000 32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000 32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000 32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000 32b5222000 rw-p 00000000 00:00 0 32b5400000 32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000 32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000 32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000 32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000 32b5794000 rw-p 00000000 00:00 0 32b5c00000 32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000 32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000 32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000 32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000 32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000 32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000 32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000 32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000 32b621d000 rw-p 00000000 00:00 0 32b6800000 32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000 32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000 32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000 32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000 32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000 32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000 32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000 32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000 32b721f000 rw-p 00000000 00:00 0 32c3800000 32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000 32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000 32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000 32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000 32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000 32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000 32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000 32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000000000 010050100000 rw-p 00000000 00:00 0 [dummy] 010081110000 7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000 7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000 7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000 7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000 7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000 7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000 800000000000 r-xp 00000000 00:00 0 [dummy] 0000800000000000 ffffffffff600000 rw-p 00000000 00:00 0 [dummy] 
ffffffffff600000 ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] vmem-1.8/src/test/util_map_proc/maps_align_linux000066400000000000000000000104141361505074100222070ustar00rootroot0000000000000000400000-0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000-0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000-0061d000 rw-p 00000000 00:00 0 0081c000-0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000-026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000-32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000-32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000-32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000-32b5222000 rw-p 00000000 00:00 0 32b5400000-32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000-32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000-32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000-32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000-32b5794000 rw-p 00000000 00:00 0 32b5c00000-32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000-32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000-32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000-32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000-32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000-32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000-32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000-32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000-32b621d000 rw-p 00000000 00:00 0 32b6800000-32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000-32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000-32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000-32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000-32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000-32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000-32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000-32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000-32b721f000 rw-p 00000000 00:00 0 32c3800000-32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000-32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000-32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000-32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000-32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000-32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000-32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000-32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000000000-010050100000 rw-p 00000000 00:00 0 [dummy] 010081110000-7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000-7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000-7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000-7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000-7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000-7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000-800000000000 r-xp 00000000 00:00 0 [dummy] 0000800000000000-ffffffffff600000 rw-p 00000000 00:00 0 [dummy] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] 
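The maps_align_* fixtures above are fed to util_map_hint_unused(), which walks a (possibly redirected) /proc/self/maps file looking for a gap large enough to hold a request of a given length at a given alignment. Below is a minimal sketch of that kind of scan, assuming the Linux-style "lo-hi perms ..." record format used in maps_align_linux; the function name find_unused_range() and the parsing details are illustrative assumptions, not the library's actual implementation.

/*
 * find_unused_range -- illustrative sketch only: scan a Linux-style maps
 * file for the lowest unused region of at least 'len' bytes aligned to
 * 'align' (align must be a power of two).  Not the libvmem code.
 */
#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static void *
find_unused_range(const char *path, size_t len, size_t align)
{
	FILE *fp = fopen(path, "r");
	if (fp == NULL)
		return NULL;

	char line[256];
	uintptr_t prev_end = align;	/* never propose address zero */
	void *hint = NULL;

	while (fgets(line, sizeof(line), fp) != NULL) {
		uintptr_t lo, hi;

		/* each record starts with "<lo>-<hi> perms offset ..." */
		if (sscanf(line, "%" SCNxPTR "-%" SCNxPTR, &lo, &hi) != 2)
			continue;

		/* round the candidate start up to the alignment boundary */
		uintptr_t cand = (prev_end + align - 1) & ~(align - 1);
		if (cand < lo && lo - cand >= len) {
			hint = (void *)cand;	/* gap before this mapping */
			break;
		}
		if (hi > prev_end)
			prev_end = hi;
	}

	fclose(fp);
	return hint;	/* NULL when no suitable gap was found */
}

A real helper also has to consider the gap above the last mapping and report failure when no gap satisfies the alignment, which is what the maps_end_* and maps_none_* fixtures that follow exercise.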
vmem-1.8/src/test/util_map_proc/maps_all_freebsd000066400000000000000000000115031361505074100221400ustar00rootroot0000000000000000400000 0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000 0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000 0061d000 rw-p 00000000 00:00 0 0081c000 0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000 026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000 32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000 32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000 32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000 32b5222000 rw-p 00000000 00:00 0 32b5400000 32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000 32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000 32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000 32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000 32b5794000 rw-p 00000000 00:00 0 32b5c00000 32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000 32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000 32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000 32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000 32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000 32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000 32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000 32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000 32b621d000 rw-p 00000000 00:00 0 32b6800000 32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000 32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000 32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000 32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000 32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000 32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000 32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000 32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000 32b721f000 rw-p 00000000 00:00 0 32c3800000 32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000 32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000 32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000 32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000 32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000 32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000 32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000 32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000010000 010050001000 rw-p 00000000 00:00 0 [dummy] 010080080000 010080900000 rw-p 00000000 00:00 0 [dummy] 010090030000 010090500000 rw-p 00000000 00:00 0 [dummy] 0100C0100000 0100C0200000 rw-p 00000000 00:00 0 [dummy] 010130000000 0101f0000000 rw-p 00000000 00:00 0 [dummy] 010200050000 010210000000 rw-p 00000000 00:00 0 [dummy] 010290050000 0102A0000000 rw-p 00000000 00:00 0 [dummy] 0106C0050000 0106D0000000 rw-p 00000000 00:00 0 [dummy] 0106D0080000 0107D0000000 rw-p 00000000 00:00 0 [dummy] 030000000000 7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000 7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000 7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000 7fff02165000 rw-p 00000000 00:00 0 [stack] 
7fff02165000 7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000 7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000 800000000000 r-xp 00000000 00:00 0 [dummy] ffffffffff600000 ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] vmem-1.8/src/test/util_map_proc/maps_all_linux000066400000000000000000000115031361505074100216650ustar00rootroot0000000000000000400000-0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000-0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000-0061d000 rw-p 00000000 00:00 0 0081c000-0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000-026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000-32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000-32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000-32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000-32b5222000 rw-p 00000000 00:00 0 32b5400000-32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000-32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000-32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000-32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000-32b5794000 rw-p 00000000 00:00 0 32b5c00000-32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000-32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000-32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000-32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000-32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000-32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000-32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000-32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000-32b621d000 rw-p 00000000 00:00 0 32b6800000-32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000-32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000-32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000-32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000-32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000-32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000-32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000-32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000-32b721f000 rw-p 00000000 00:00 0 32c3800000-32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000-32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000-32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000-32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000-32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000-32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000-32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000-32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000010000-010050001000 rw-p 00000000 00:00 0 [dummy] 010080080000-010080900000 rw-p 00000000 00:00 0 [dummy] 010090030000-010090500000 rw-p 00000000 00:00 0 [dummy] 0100C0100000-0100C0200000 rw-p 00000000 00:00 0 [dummy] 010130000000-0101f0000000 rw-p 00000000 00:00 0 [dummy] 010200050000-010210000000 rw-p 00000000 00:00 0 [dummy] 010290050000-0102A0000000 rw-p 00000000 00:00 0 [dummy] 0106C0050000-0106D0000000 rw-p 00000000 00:00 0 [dummy] 0106D0080000-0107D0000000 rw-p 00000000 00:00 0 [dummy] 
030000000000-7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000-7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000-7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000-7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000-7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000-7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000-800000000000 r-xp 00000000 00:00 0 [dummy] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] vmem-1.8/src/test/util_map_proc/maps_end_freebsd000066400000000000000000000101471361505074100221410ustar00rootroot0000000000000000400000 0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000 0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000 0061d000 rw-p 00000000 00:00 0 0081c000 0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000 026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000 32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000 32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000 32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000 32b5222000 rw-p 00000000 00:00 0 32b5400000 32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000 32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000 32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000 32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000 32b5794000 rw-p 00000000 00:00 0 32b5c00000 32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000 32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000 32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000 32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000 32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000 32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000 32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000 32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000 32b621d000 rw-p 00000000 00:00 0 32b6800000 32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000 32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000 32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000 32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000 32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000 32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000 32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000 32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000 32b721f000 rw-p 00000000 00:00 0 32c3800000 32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000 32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000 32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000 32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000 32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000 32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000 32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000 32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000000000 7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000 7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000 7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000 7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000 
7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000 7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000 800000000000 r-xp 00000000 00:00 0 [dummy] 0000800000000000 ffffffff90000000 rw-p 00000000 00:00 0 [dummy] vmem-1.8/src/test/util_map_proc/maps_end_linux000066400000000000000000000101471361505074100216660ustar00rootroot0000000000000000400000-0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000-0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000-0061d000 rw-p 00000000 00:00 0 0081c000-0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000-026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000-32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000-32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000-32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000-32b5222000 rw-p 00000000 00:00 0 32b5400000-32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000-32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000-32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000-32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000-32b5794000 rw-p 00000000 00:00 0 32b5c00000-32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000-32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000-32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000-32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000-32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000-32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000-32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000-32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000-32b621d000 rw-p 00000000 00:00 0 32b6800000-32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000-32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000-32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000-32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000-32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000-32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000-32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000-32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000-32b721f000 rw-p 00000000 00:00 0 32c3800000-32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000-32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000-32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000-32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000-32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000-32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000-32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000-32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000000000-7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000-7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000-7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000-7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000-7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000-7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000-800000000000 r-xp 00000000 00:00 0 [dummy] 0000800000000000-ffffffff90000000 rw-p 00000000 00:00 0 [dummy] 
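The maps_end_* fixtures just above leave the only usable gap at the very top of the address space, where a naive "candidate + length <= top" test can wrap around and accept a region that does not fit; the expected results for this case appear in out3.log.match further below. A wrap-safe form of that check is sketched here, with the helper name range_end_fits() and the use of UINTPTR_MAX as the upper bound being assumptions for illustration only.

/*
 * range_end_fits -- illustrative sketch only: check whether a candidate
 * region [cand, cand + len) still ends at a representable address,
 * written so that the comparison itself cannot overflow.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

static bool
range_end_fits(uintptr_t cand, size_t len)
{
	const uintptr_t top = UINTPTR_MAX;	/* assumed upper bound */

	/*
	 * Equivalent to "cand + len <= top", but "cand + len" could wrap
	 * to a small value and pass by accident; "top - cand" cannot.
	 */
	return len <= top - cand;
}

With the maps_end layout, a 1 GiB-aligned candidate of 0xffffffffc0000000 passes this check for a 0x3FFFF000 request but not for the full 0x40000000 one, which lines up with the pass/fail split visible in out3.log.match.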
vmem-1.8/src/test/util_map_proc/maps_none_freebsd000066400000000000000000000102731361505074100223320ustar00rootroot0000000000000000400000 0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 0061b000 0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000 0061d000 rw-p 00000000 00:00 0 0081c000 0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000 026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000 32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000 32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000 32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000 32b5222000 rw-p 00000000 00:00 0 32b5400000 32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000 32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000 32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000 32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000 32b5794000 rw-p 00000000 00:00 0 32b5c00000 32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000 32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000 32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000 32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000 32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000 32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000 32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000 32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000 32b621d000 rw-p 00000000 00:00 0 32b6800000 32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000 32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000 32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000 32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000 32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000 32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000 32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000 32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000 32b721f000 rw-p 00000000 00:00 0 32c3800000 32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000 32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000 32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000 32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000 32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000 32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000 32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000 32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000000000 7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000 7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000 7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000 7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000 7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000 7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000 800000000000 r-xp 00000000 00:00 0 [dummy] 0000800000000000 ffffffffff600000 rw-p 00000000 00:00 0 [dummy] ffffffffff600000 ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] vmem-1.8/src/test/util_map_proc/maps_none_linux000066400000000000000000000102731361505074100220570ustar00rootroot0000000000000000400000-0041c000 r-xp 00000000 fd:01 1310785 /some/path/testfile 
0061b000-0061c000 rw-p 0001b000 fd:01 1310785 /some/path/testfile 0061c000-0061d000 rw-p 00000000 00:00 0 0081c000-0081d000 rw-p 0001c000 fd:01 1310785 /some/path/testfile 02699000-026ba000 rw-p 00000000 00:00 0 [heap] 32b5000000-32b5020000 r-xp 00000000 fd:01 917587 /lib64/ld-2.12.so 32b521f000-32b5220000 r--p 0001f000 fd:01 917587 /lib64/ld-2.12.so 32b5220000-32b5221000 rw-p 00020000 fd:01 917587 /lib64/ld-2.12.so 32b5221000-32b5222000 rw-p 00000000 00:00 0 32b5400000-32b558b000 r-xp 00000000 fd:01 917588 /lib64/libc-2.12.so 32b558b000-32b578a000 ---p 0018b000 fd:01 917588 /lib64/libc-2.12.so 32b578a000-32b578e000 r--p 0018a000 fd:01 917588 /lib64/libc-2.12.so 32b578e000-32b578f000 rw-p 0018e000 fd:01 917588 /lib64/libc-2.12.so 32b578f000-32b5794000 rw-p 00000000 00:00 0 32b5c00000-32b5c02000 r-xp 00000000 fd:01 917594 /lib64/libdl-2.12.so 32b5c02000-32b5e02000 ---p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e02000-32b5e03000 r--p 00002000 fd:01 917594 /lib64/libdl-2.12.so 32b5e03000-32b5e04000 rw-p 00003000 fd:01 917594 /lib64/libdl-2.12.so 32b6000000-32b6017000 r-xp 00000000 fd:01 917592 /lib64/libpthread-2.12.so 32b6017000-32b6217000 ---p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6217000-32b6218000 r--p 00017000 fd:01 917592 /lib64/libpthread-2.12.so 32b6218000-32b6219000 rw-p 00018000 fd:01 917592 /lib64/libpthread-2.12.so 32b6219000-32b621d000 rw-p 00000000 00:00 0 32b6800000-32b6807000 r-xp 00000000 fd:01 917631 /lib64/librt-2.12.so 32b6807000-32b6a06000 ---p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b6a06000-32b6a07000 r--p 00006000 fd:01 917631 /lib64/librt-2.12.so 32b6a07000-32b6a08000 rw-p 00007000 fd:01 917631 /lib64/librt-2.12.so 32b7000000-32b701d000 r-xp 00000000 fd:01 917605 /lib64/libselinux.so.1 32b701d000-32b721c000 ---p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721c000-32b721d000 r--p 0001c000 fd:01 917605 /lib64/libselinux.so.1 32b721d000-32b721e000 rw-p 0001d000 fd:01 917605 /lib64/libselinux.so.1 32b721e000-32b721f000 rw-p 00000000 00:00 0 32c3800000-32c3804000 r-xp 00000000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3804000-32c3a03000 ---p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a03000-32c3a04000 r--p 00003000 fd:01 917625 /lib64/libattr.so.1.1.0 32c3a04000-32c3a05000 rw-p 00004000 fd:01 917625 /lib64/libattr.so.1.1.0 32c4400000-32c4407000 r-xp 00000000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4407000-32c4606000 ---p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4606000-32c4607000 r--p 00006000 fd:01 917627 /lib64/libacl.so.1.1.0 32c4607000-32c4608000 rw-p 00007000 fd:01 917627 /lib64/libacl.so.1.1.0 010000000000-7ffa04540000 rw-p 00000000 00:00 0 [dummy] 7ffa0454f000-7ffa0a3e0000 r--p 00000000 fd:01 2132914 /usr/lib/locale/locale-archive 7ffa0a3e0000-7fff02144000 rw-p 00000000 00:00 0 [dummy] 7fff02144000-7fff02165000 rw-p 00000000 00:00 0 [stack] 7fff02165000-7fff02190000 rw-p 00000000 00:00 0 [dummy] 7fff02190000-7fff02191000 r-xp 00000000 00:00 0 [vdso] 7fff02191000-800000000000 r-xp 00000000 00:00 0 [dummy] 0000800000000000-ffffffffff600000 rw-p 00000000 00:00 0 [dummy] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] vmem-1.8/src/test/util_map_proc/out0.log.match000066400000000000000000000006061361505074100214220ustar00rootroot00000000000000util_map_proc/TEST0: START: util_map_proc ./util_map_proc$(nW) maps_all$(*) 0x0000100000 0x0001000000 0x0040000000 0x0400000000 0x4000000000 redirecting /proc/$(*) to maps_all$(*) len 1048576: 0x100c0000000 0x$(X) len 16777216: 0x10100000000 0x$(X) len 1073741824: 
0x10240000000 0x$(X) len 17179869184: 0x102c0000000 0x$(X) len 274877906944: 0x10800000000 0x$(X) util_map_proc/TEST0: DONE vmem-1.8/src/test/util_map_proc/out1.log.match000066400000000000000000000005401361505074100214200ustar00rootroot00000000000000util_map_proc/TEST1: START: util_map_proc ./util_map_proc$(nW) maps_none$(*) 0x0000100000 0x0001000000 0x0040000000 0x0400000000 0x4000000000 redirecting /proc/$(*) to maps_none$(*) len 1048576: (nil) 0x$(X) len 16777216: (nil) 0x$(X) len 1073741824: (nil) 0x$(X) len 17179869184: (nil) 0x$(X) len 274877906944: (nil) 0x$(X) util_map_proc/TEST1: DONE vmem-1.8/src/test/util_map_proc/out2.log.match000066400000000000000000000005621361505074100214250ustar00rootroot00000000000000util_map_proc/TEST2: START: util_map_proc ./util_map_proc$(nW) maps_align$(*) 0x0001000000 0x0001100000 0x0001110000 0x0001110800 0x0001111000 redirecting /proc/$(*) to maps_align$(*) len 16777216: 0x10080000000 0x$(X) len 17825792: 0x10080000000 0x$(X) len 17891328: 0x10080000000 0x$(X) len 17893376: (nil) 0x$(X) len 17895424: (nil) 0x$(X) util_map_proc/TEST2: DONE vmem-1.8/src/test/util_map_proc/out3.log.match000066400000000000000000000006341361505074100214260ustar00rootroot00000000000000util_map_proc/TEST3: START: util_map_proc ./util_map_proc$(nW) maps_end$(*) 0x0000100000 0x0001000000 0x003F000000 0x003FFFF000 0x0040000000 redirecting /proc/$(*) to maps_end$(*) len 1048576: 0xffffffffc0000000 0x$(X) len 16777216: 0xffffffffc0000000 0x$(X) len 1056964608: 0xffffffffc0000000 0x$(X) len 1073737728: 0xffffffffc0000000 0x$(X) len 1073741824: 0xffffffffffffffff 0x$(X) util_map_proc/TEST3: DONE vmem-1.8/src/test/util_map_proc/out4.log.match000066400000000000000000000006521361505074100214270ustar00rootroot00000000000000util_map_proc/TEST4: START: util_map_proc ./util_map_proc$(nW) maps_all$(*) 0x0000100000 0x0001000000 0x0040000000 0x0400000000 0x4000000000 redirecting /proc/$(*) to maps_all$(*) len 1048576: 0x100c0000000 0x2e000000000 len 16777216: 0x10100000000 0x2e000000000 len 1073741824: 0x10240000000 0x2e000000000 len 17179869184: 0x102c0000000 0x2e000000000 len 274877906944: 0x10800000000 0x800000000000 util_map_proc/TEST4: DONE vmem-1.8/src/test/util_map_proc/out5.log.match000066400000000000000000000006271361505074100214320ustar00rootroot00000000000000util_map_proc/TEST5: START: util_map_proc ./util_map_proc$(nW) maps_all$(*) 0x0000100000 0x0001000000 0x0040000000 0x0400000000 0x4000000000 redirecting /proc/$(*) to maps_all$(*) len 1048576: 0x100c0000000 0x200000 len 16777216: 0x10100000000 0xa00000 len 1073741824: 0x10240000000 0x2800000 len 17179869184: 0x102c0000000 0x40000000 len 274877906944: 0x10800000000 0x3300000000 util_map_proc/TEST5: DONE vmem-1.8/src/test/util_map_proc/out6.log.match000066400000000000000000000003341361505074100214260ustar00rootroot00000000000000util_map_proc/TEST6: START: util_map_proc ./util_map_proc$(nW) mapfile_nonexistent 0x12345678 redirecting /proc/$(*) to mapfile_nonexistent len 305419896: 0xffffffffffffffff 0xffffffffffffffff util_map_proc/TEST6: DONE vmem-1.8/src/test/util_map_proc/util_map_proc.c000066400000000000000000000054421361505074100217410ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_map_proc.c -- unit test for util_map() /proc parsing * * usage: util_map_proc maps_file len [len]... */ #define _GNU_SOURCE #include #include "unittest.h" #include "util.h" #include "mmap.h" #define GIGABYTE ((uintptr_t)1 << 30) #define TERABYTE ((uintptr_t)1 << 40) int main(int argc, char *argv[]) { START(argc, argv, "util_map_proc"); util_init(); util_mmap_init(); if (argc < 3) UT_FATAL("usage: %s maps_file len [len]...", argv[0]); Mmap_mapfile = argv[1]; UT_OUT("redirecting " OS_MAPFILE " to %s", Mmap_mapfile); for (int arg = 2; arg < argc; arg++) { size_t len = (size_t)strtoull(argv[arg], NULL, 0); size_t align = 2 * MEGABYTE; if (len >= 2 * GIGABYTE) align = GIGABYTE; void *h1 = util_map_hint_unused((void *)TERABYTE, len, GIGABYTE); void *h2 = util_map_hint(len, 0); if (h1 != MAP_FAILED && h1 != NULL) UT_ASSERTeq((uintptr_t)h1 & (GIGABYTE - 1), 0); if (h2 != MAP_FAILED && h2 != NULL) UT_ASSERTeq((uintptr_t)h2 & (align - 1), 0); if (h1 == NULL) /* XXX portability */ UT_OUT("len %zu: (nil) %p", len, h2); else if (h2 == NULL) UT_OUT("len %zu: %p (nil)", len, h1); else UT_OUT("len %zu: %p %p", len, h1, h2); } util_mmap_fini(); DONE(NULL); } vmem-1.8/src/test/util_parse_size/000077500000000000000000000000001361505074100172775ustar00rootroot00000000000000vmem-1.8/src/test/util_parse_size/.gitignore000066400000000000000000000000201361505074100212570ustar00rootroot00000000000000util_parse_size vmem-1.8/src/test/util_parse_size/Makefile000066400000000000000000000032711361505074100207420ustar00rootroot00000000000000# # Copyright 2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_parse_size/Makefile -- check parsing results # TARGET = util_parse_size OBJS = util_parse_size.o LIBPMEMCOMMON=y include ../Makefile.inc vmem-1.8/src/test/util_parse_size/TEST0000077500000000000000000000036041361505074100200670ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_parse_size/TEST0 -- unit test for util_parse_size. # . ../unittest/unittest.sh setup expect_normal_exit ./util_parse_size$EXESUFFIX 11K 11M 11G 11T 11P\ 11KiB 11MiB 11GiB 11TiB 11PiB 11kB 11MB 11GB 11TB 11PB 1234\ 10k 10KB 10mB 10mb 10Mb 10B B 10ki 10KiC KiD\ 10Kiboli 10Kboli 10boli 10KiBoli check pass vmem-1.8/src/test/util_parse_size/TEST0.PS1000066400000000000000000000036031361505074100204650ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/util_parse_size/TEST0 -- unit test for util_parse_size. # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\util_parse_size$Env:EXESUFFIX 11K 11M 11G 11T 11P ` 11KiB 11MiB 11GiB 11TiB 11PiB 11kB 11MB 11GB 11TB 11PB 1234 ` 10k 10KB 10mB 10mb 10Mb 10B B 10ki 10KiC KiD ` 10Kiboli 10Kboli 10boli 10KiBoli check pass vmem-1.8/src/test/util_parse_size/out0.log.match000066400000000000000000000014141361505074100217640ustar00rootroot00000000000000util_parse_size$(nW)TEST0: START: util_parse_size $(nW)util_parse_size$(S) 11K - correct 11264 11M - correct 11534336 11G - correct 11811160064 11T - correct 12094627905536 11P - correct 12384898975268864 11KiB - correct 11264 11MiB - correct 11534336 11GiB - correct 11811160064 11TiB - correct 12094627905536 11PiB - correct 12384898975268864 11kB - correct 11000 11MB - correct 11000000 11GB - correct 11000000000 11TB - correct 11000000000000 11PB - correct 11000000000000000 1234 - correct 1234 10k - incorrect 10KB - incorrect 10mB - incorrect 10mb - incorrect 10Mb - incorrect 10B - correct 10 B - incorrect 10ki - incorrect 10KiC - incorrect KiD - incorrect 10Kiboli - incorrect 10Kboli - incorrect 10boli - incorrect 10KiBoli - incorrect util_parse_size$(nW)TEST0: DONE vmem-1.8/src/test/util_parse_size/util_parse_size.c000066400000000000000000000040121361505074100226410ustar00rootroot00000000000000/* * Copyright 2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * util_parse_size.c -- unit test for parsing a size */ #include "unittest.h" #include "util.h" #include int main(int argc, char *argv[]) { int ret = 0; uint64_t size = 0; START(argc, argv, "util_parse_size"); for (int arg = 1; arg < argc; ++arg) { ret = util_parse_size(argv[arg], &size); if (ret == 0) { UT_OUT("%s - correct %"PRIu64, argv[arg], size); } else { UT_OUT("%s - incorrect", argv[arg]); } } DONE(NULL); } vmem-1.8/src/test/util_parse_size/util_parse_size.vcxproj000066400000000000000000000063101361505074100241150ustar00rootroot00000000000000 Debug x64 Release x64 {08B62E36-63D2-4FF1-A605-4BBABAEE73FB} Win32Proj util_parse_size 10.0.16299.0 Application true v140 Application false v140 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/util_parse_size/util_parse_size.vcxproj.filters000066400000000000000000000026071361505074100255710ustar00rootroot00000000000000 Source Files Source Files Source Files Test Scripts Match Files Test Scripts Match Files {123c699c-29a6-434e-8584-36f216dcc624} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {3bb361b7-844f-4234-9a67-39f6ae260599} match {2f14a64d-7d7c-4617-ae54-6ec929e6b689} ps1 vmem-1.8/src/test/vmem_aligned_alloc/000077500000000000000000000000001361505074100176775ustar00rootroot00000000000000vmem-1.8/src/test/vmem_aligned_alloc/.gitignore000066400000000000000000000000231361505074100216620ustar00rootroot00000000000000vmem_aligned_alloc vmem-1.8/src/test/vmem_aligned_alloc/Makefile000066400000000000000000000033171361505074100213430ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_aligned_alloc/Makefile -- build vmem_aligned_alloc unit test # TARGET = vmem_aligned_alloc OBJS = vmem_aligned_alloc.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_aligned_alloc/TEST0000077500000000000000000000033451361505074100204710ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_aligned_alloc/TEST0 -- unit test for vmem_aligned_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_aligned_alloc$EXESUFFIX check pass vmem-1.8/src/test/vmem_aligned_alloc/TEST0.PS1000066400000000000000000000033451361505074100210700ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_aligned_alloc/TEST0.PS1 -- unit test for vmem_aligned_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_aligned_alloc$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_aligned_alloc/TEST1000077500000000000000000000033521361505074100204700ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_aligned_alloc/TEST1 -- unit test for vmem_aligned_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_aligned_alloc$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_aligned_alloc/TEST1.PS1000066400000000000000000000033521361505074100210670ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_aligned_alloc/TEST1.PS1 -- unit test for vmem_aligned_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_aligned_alloc$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_aligned_alloc/vmem_aligned_alloc.c000066400000000000000000000110661361505074100236500ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_aligned_alloc.c -- unit test for vmem_aligned_alloc * * usage: vmem_aligned_alloc [directory] */ #include "unittest.h" #define MAX_ALLOCS (100) static int custom_allocs; static int custom_alloc_calls; /* * malloc_custom -- custom malloc function * * This function updates statistics about custom alloc functions, * and returns allocated memory. */ static void * malloc_custom(size_t size) { ++custom_alloc_calls; ++custom_allocs; return malloc(size); } /* * free_custom -- custom free function * * This function updates statistics about custom alloc functions, * and frees allocated memory. 
*/ static void free_custom(void *ptr) { ++custom_alloc_calls; --custom_allocs; free(ptr); } /* * realloc_custom -- custom realloc function * * This function updates statistics about custom alloc functions, * and returns reallocated memory. */ static void * realloc_custom(void *ptr, size_t size) { ++custom_alloc_calls; return realloc(ptr, size); } /* * strdup_custom -- custom strdup function * * This function updates statistics about custom alloc functions, * and returns allocated memory with a duplicated string. */ static char * strdup_custom(const char *s) { ++custom_alloc_calls; ++custom_allocs; return strdup(s); } int main(int argc, char *argv[]) { const int test_value = 123456; char *dir = NULL; VMEM *vmp; size_t alignment; unsigned i; int *ptr; int *ptrs[MAX_ALLOCS]; START(argc, argv, "vmem_aligned_alloc"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } /* allocate memory for function vmem_create_in_region() */ void *mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); /* use custom alloc functions to check for memory leaks */ vmem_set_funcs(malloc_custom, free_custom, realloc_custom, strdup_custom, NULL); /* test with address alignment from 2B to 4MB */ for (alignment = 2; alignment <= 4 * 1024 * 1024; alignment *= 2) { if (dir == NULL) { vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } memset(ptrs, 0, MAX_ALLOCS * sizeof(ptrs[0])); for (i = 0; i < MAX_ALLOCS; ++i) { ptr = vmem_aligned_alloc(vmp, alignment, sizeof(int)); ptrs[i] = ptr; /* at least one allocation must succeed */ UT_ASSERT(i != 0 || ptr != NULL); if (ptr == NULL) break; /* ptr should be usable */ *ptr = test_value; UT_ASSERTeq(*ptr, test_value); /* check for correct address alignment */ UT_ASSERTeq((uintptr_t)(ptr) & (alignment - 1), 0); /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(ptr, mem_pool, VMEM_MIN_POOL); } } for (i = 0; i < MAX_ALLOCS; ++i) { if (ptrs[i] == NULL) break; vmem_free(vmp, ptrs[i]); } vmem_delete(vmp); } /* check memory leaks */ UT_ASSERTne(custom_alloc_calls, 0); UT_ASSERTeq(custom_allocs, 0); DONE(NULL); } vmem-1.8/src/test/vmem_aligned_alloc/vmem_aligned_alloc.vcxproj000066400000000000000000000065151361505074100251240ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {25B5C601-03D7-4861-9C0F-7F0453B04227} Win32Proj vmem_aligned_alloc 10.0.16299.0 Application true v140 Application false v140 false vmem-1.8/src/test/vmem_aligned_alloc/vmem_aligned_alloc.vcxproj.filters000066400000000000000000000013611361505074100265650ustar00rootroot00000000000000 Tests Scripts Tests Scripts {e9faa074-f7a9-4c1a-823b-d80bea67f96a} {0715b1f5-327e-4f53-97dc-39273ebdd0d6} Source Files vmem-1.8/src/test/vmem_calloc/000077500000000000000000000000001361505074100163575ustar00rootroot00000000000000vmem-1.8/src/test/vmem_calloc/.gitignore000066400000000000000000000000141361505074100203420ustar00rootroot00000000000000vmem_calloc vmem-1.8/src/test/vmem_calloc/Makefile000066400000000000000000000032631361505074100200230ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, 
this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_calloc/Makefile -- build vmem_calloc unit test # TARGET = vmem_calloc OBJS = vmem_calloc.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_calloc/TEST0000077500000000000000000000033201361505074100171420ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_calloc/TEST0 -- unit test for vmem_calloc # . 
../unittest/unittest.sh setup expect_normal_exit ./vmem_calloc$EXESUFFIX check pass vmem-1.8/src/test/vmem_calloc/TEST0.PS1000066400000000000000000000033201361505074100175410ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_calloc/TEST0.PS1 -- unit test for vmem_calloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_calloc$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_calloc/TEST1000077500000000000000000000033251361505074100171500ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_calloc/TEST1 -- unit test for vmem_calloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_calloc$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_calloc/TEST1.PS1000066400000000000000000000033251361505074100175470ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_calloc/TEST1.PS1 -- unit test for vmem_calloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_calloc$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_calloc/vmem_calloc.c000066400000000000000000000052351361505074100210110ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_calloc.c -- unit test for vmem_calloc * * usage: vmem_calloc [directory] */ #include "unittest.h" int main(int argc, char *argv[]) { const int test_value = 123456; char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_calloc"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } int *test = vmem_calloc(vmp, 1, sizeof(int)); UT_ASSERTne(test, NULL); /* pool_calloc should return zeroed memory */ UT_ASSERTeq(*test, 0); *test = test_value; UT_ASSERTeq(*test, test_value); /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(test, mem_pool, VMEM_MIN_POOL); } vmem_free(vmp, test); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_calloc/vmem_calloc.vcxproj000066400000000000000000000063771361505074100222720ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {718CA6FA-6446-4E43-83DF-BA4E85E5886B} Win32Proj vmem_calloc 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_calloc/vmem_calloc.vcxproj.filters000066400000000000000000000013521361505074100237250ustar00rootroot00000000000000 {c2b89085-7edf-4ec6-ba39-4feedfe85a92} {30272684-6afe-496f-ad02-c05307e4aac0} Tests Scripts Tests Scripts Source Files vmem-1.8/src/test/vmem_check/000077500000000000000000000000001361505074100161775ustar00rootroot00000000000000vmem-1.8/src/test/vmem_check/.gitignore000066400000000000000000000000131361505074100201610ustar00rootroot00000000000000vmem_check vmem-1.8/src/test/vmem_check/Makefile000066400000000000000000000032601361505074100176400ustar00rootroot00000000000000# # Copyright 2014-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check/Makefile -- build vmem_check unit test # TARGET = vmem_check OBJS = vmem_check.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_check/TEST0000077500000000000000000000034331361505074100167670ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check/TEST0 -- unit test for vmem_check # . ../unittest/unittest.sh # this test generates sigsegvs on purpose configure_valgrind memcheck force-disable setup expect_normal_exit ./vmem_check$EXESUFFIX pass vmem-1.8/src/test/vmem_check/TEST0.PS1000066400000000000000000000033151361505074100173650ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check/TEST0.PS1 -- unit test for vmem_check # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_check$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_check/TEST1000077500000000000000000000034401361505074100167660ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check/TEST1 -- unit test for vmem_check # . ../unittest/unittest.sh # this test generates sigsegvs on purpose configure_valgrind memcheck force-disable setup expect_normal_exit ./vmem_check$EXESUFFIX $DIR pass vmem-1.8/src/test/vmem_check/TEST1.PS1000066400000000000000000000033221361505074100173640ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check/TEST1.PS1 -- unit test for vmem_check # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_check$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_check/vmem_check.c000066400000000000000000000056051361505074100204520ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_check.c -- unit test for vmem_check * * usage: vmem_check [directory] */ #include "unittest.h" int main(int argc, char *argv[]) { char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_check"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL * 2, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } UT_ASSERTeq(1, vmem_check(vmp)); /* create pool in this same memory region */ if (dir == NULL) { void *mem_pool2 = (void *)(((uintptr_t)mem_pool + VMEM_MIN_POOL / 2) & ~(Ut_mmap_align - 1)); VMEM *vmp2 = vmem_create_in_region(mem_pool2, VMEM_MIN_POOL); if (vmp2 == NULL) UT_FATAL("!vmem_create_in_region"); /* detect memory range collision */ UT_ASSERTne(1, vmem_check(vmp)); UT_ASSERTne(1, vmem_check(vmp2)); vmem_delete(vmp2); UT_ASSERTne(1, vmem_check(vmp2)); } vmem_delete(vmp); /* for vmem_create() memory unmapped after delete pool */ if (!dir) UT_ASSERTne(1, vmem_check(vmp)); DONE(NULL); } vmem-1.8/src/test/vmem_check/vmem_check.vcxproj000066400000000000000000000063751361505074100217300ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {FF374D62-CBCF-401E-9A02-1D3DB8BE16E4} Win32Proj vmem_check 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_check/vmem_check.vcxproj.filters000066400000000000000000000014571361505074100233730ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {bdfb5bc0-aa19-4831-b79b-359182bd4c74} Source Files Test Scripts Test Scripts vmem-1.8/src/test/vmem_check_allocations/000077500000000000000000000000001361505074100205675ustar00rootroot00000000000000vmem-1.8/src/test/vmem_check_allocations/.gitignore000066400000000000000000000000271361505074100225560ustar00rootroot00000000000000vmem_check_allocations vmem-1.8/src/test/vmem_check_allocations/Makefile000066400000000000000000000033371361505074100222350ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_allocations/Makefile -- build vmem_check_allocations unit test # TARGET = vmem_check_allocations OBJS = vmem_check_allocations.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_check_allocations/TEST0000077500000000000000000000035571361505074100213660ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_allocations/TEST0 -- unit test for vmem_check_allocations # . ../unittest/unittest.sh require_build_type debug nondebug setup # limit output for file vmem*.log to reduce time of execution test export VMEM_LOG_LEVEL=2 expect_normal_exit ./vmem_check_allocations$EXESUFFIX check pass vmem-1.8/src/test/vmem_check_allocations/TEST0.PS1000066400000000000000000000035151361505074100217570ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_allocations/TEST0.PS1 -- unit test for vmem_check_allocations # . ..\unittest\unittest.ps1 setup # limit output for file vmem*.log to reduce time of test execution $Env:VMEM_LOG_LEVEL = "2" expect_normal_exit $Env:EXE_DIR\vmem_check_allocations$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_check_allocations/TEST1000077500000000000000000000035641361505074100213650ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_allocations/TEST1 -- unit test for vmem_check_allocations # . 
../unittest/unittest.sh require_build_type debug nondebug setup # limit output for file vmem*.log to reduce time of test execution export VMEM_LOG_LEVEL=2 expect_normal_exit ./vmem_check_allocations$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_check_allocations/vmem_check_allocations.c000066400000000000000000000071771361505074100254400ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_check_allocations -- unit test for vmem_check_allocations * * usage: vmem_check_allocations [directory] */ #include "unittest.h" #define TEST_MAX_ALLOCATION_SIZE (4L * 1024L * 1024L) #define TEST_ALLOCS_SIZE (VMEM_MIN_POOL / 8) /* buffer for all allocations */ static void *allocs[TEST_ALLOCS_SIZE]; int main(int argc, char *argv[]) { char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_check_allocations"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } size_t object_size; for (object_size = 8; object_size <= TEST_MAX_ALLOCATION_SIZE; object_size *= 2) { size_t i; size_t j; if (dir == NULL) { mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); /* vmem_create should align pool to 4MB */ UT_ASSERTeq(((uintptr_t)vmp) & ((4 << 20) - 1), 0); } memset(allocs, 0, sizeof(allocs)); for (i = 0; i < TEST_ALLOCS_SIZE; ++i) { allocs[i] = vmem_malloc(vmp, object_size); if (allocs[i] == NULL) { /* out of memory in pool */ break; } /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(allocs[i], mem_pool, VMEM_MIN_POOL); } /* fill each allocation with a unique value */ memset(allocs[i], (char)i, object_size); } UT_ASSERT((i > 0) && (i + 1 < TEST_MAX_ALLOCATION_SIZE)); /* check for unexpected modifications of the data */ for (i = 0; i < TEST_ALLOCS_SIZE && allocs[i] != NULL; ++i) { char *buffer = allocs[i]; for (j = 0; j < object_size; ++j) { if (buffer[j] != (char)i) UT_FATAL("Content of data object was " "modified unexpectedly for " "object size: %zu, id: %zu", object_size, j); } } for (i = 0; i < TEST_ALLOCS_SIZE && allocs[i] != NULL; ++i) vmem_free(vmp, allocs[i]); vmem_delete(vmp); } DONE(NULL); } vmem-1.8/src/test/vmem_check_allocations/vmem_check_allocations.vcxproj000066400000000000000000000064251361505074100267040ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {3BAB8FDF-42F7-4D46-AA10-E282FD41B9F2} Win32Proj vmem_check_allocations 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_check_allocations/vmem_check_allocations.vcxproj.filters000066400000000000000000000014731361505074100303510ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {bdfb5bc0-aa19-4831-b79b-359182bd4c74} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_check_version/000077500000000000000000000000001361505074100177445ustar00rootroot00000000000000vmem-1.8/src/test/vmem_check_version/.gitignore000066400000000000000000000000231361505074100217270ustar00rootroot00000000000000vmem_check_version vmem-1.8/src/test/vmem_check_version/Makefile000066400000000000000000000033171361505074100214100ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_version/Makefile -- build vmem_check_version unit test # TARGET = vmem_check_version OBJS = vmem_check_version.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_check_version/TEST0000077500000000000000000000033451361505074100205360ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_version/TEST0 -- unit test for vmem_check_version # . 
../unittest/unittest.sh setup expect_normal_exit ./vmem_check_version$EXESUFFIX check pass vmem-1.8/src/test/vmem_check_version/TEST0.PS1000066400000000000000000000033431361505074100211330ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_check_version/TEST0.PS1 -- unit test for vmem_check_version # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_check_version$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_check_version/out0.log.match000066400000000000000000000004001361505074100224230ustar00rootroot00000000000000vmem_check_version$(nW)TEST0: START: vmem_check_version $(nW)vmem_check_version$(nW) compile-time libvmem version is 1.1 for major version 2, vmem_check_version returned: libvmem major version mismatch (need 2, found 1) vmem_check_version$(nW)TEST0: DONE vmem-1.8/src/test/vmem_check_version/vmem_check_version.c000066400000000000000000000042451361505074100237630ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_check_version.c -- unit test for vmem_check_version */ #include "unittest.h" int main(int argc, char *argv[]) { START(argc, argv, "vmem_check_version"); UT_OUT("compile-time libvmem version is %d.%d", VMEM_MAJOR_VERSION, VMEM_MINOR_VERSION); const char *errstr = vmem_check_version(VMEM_MAJOR_VERSION, VMEM_MINOR_VERSION); UT_ASSERTinfo(errstr == NULL, errstr); errstr = vmem_check_version(VMEM_MAJOR_VERSION + 1, VMEM_MINOR_VERSION); UT_ASSERT(errstr != NULL); UT_OUT("for major version %d, vmem_check_version returned: %s", VMEM_MAJOR_VERSION + 1, errstr); DONE(NULL); } vmem-1.8/src/test/vmem_check_version/vmem_check_version.vcxproj000066400000000000000000000064221361505074100252330ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {04345B7D-B0A1-405B-8BB2-5B98A3400FEF} Win32Proj vmem_check_version 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_check_version/vmem_check_version.vcxproj.filters000066400000000000000000000016761361505074100267100ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {4d39e0e5-0c47-48a1-8ceb-6c174a170013} {bdfb5bc0-aa19-4831-b79b-359182bd4c74} Match Files Test Scripts Source Files vmem-1.8/src/test/vmem_create/000077500000000000000000000000001361505074100163655ustar00rootroot00000000000000vmem-1.8/src/test/vmem_create/.gitignore000066400000000000000000000000141361505074100203500ustar00rootroot00000000000000vmem_create vmem-1.8/src/test/vmem_create/Makefile000066400000000000000000000032631361505074100200310ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_create/Makefile -- build vmem_create unit test # TARGET = vmem_create OBJS = vmem_create.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_create/TEST0000077500000000000000000000034371361505074100171610ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_create/TEST0 -- unit test for vmem_create # . ../unittest/unittest.sh setup # this test invokes sigsegvs by design export ASAN_OPTIONS=handle_segv=0 expect_normal_exit ./vmem_create$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_create/TEST0.PS1000066400000000000000000000033201361505074100175470ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src\test\vmem_create\TEST0.PS1 -- unit test for vmem_create # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_create$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_create/out0.log.match000066400000000000000000000002031361505074100210450ustar00rootroot00000000000000vmem_create$(nW)TEST0: START: vmem_create$(nW) $(nW)vmem_create$(nW) $(nW) signal: Segmentation fault vmem_create$(nW)TEST0: DONE vmem-1.8/src/test/vmem_create/vmem_create.c000066400000000000000000000046121361505074100210230ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_create.c -- unit test for vmem_create * * usage: vmem_create directory */ #include "unittest.h" static VMEM *Vmp; /* * signal_handler -- called on SIGSEGV */ static void signal_handler(int sig) { UT_OUT("signal: %s", os_strsignal(sig)); vmem_delete(Vmp); DONEW(NULL); } int main(int argc, char *argv[]) { START(argc, argv, "vmem_create"); if (argc < 2 || argc > 3) UT_FATAL("usage: %s directory", argv[0]); Vmp = vmem_create(argv[1], VMEM_MIN_POOL); if (Vmp == NULL) { UT_OUT("!vmem_create"); } else { struct sigaction v; sigemptyset(&v.sa_mask); v.sa_flags = 0; v.sa_handler = signal_handler; if (SIGACTION(SIGSEGV, &v, NULL) != 0) UT_FATAL("!sigaction"); /* try to dereference the opaque handle */ char x = *(char *)Vmp; UT_OUT("x = %c", x); } UT_FATAL("no signal received"); } vmem-1.8/src/test/vmem_create/vmem_create.vcxproj000066400000000000000000000064041361505074100222750ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {2E7E8487-0BB0-4E8A-8672-ED8ABD80D468} Win32Proj vmem_create 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_create/vmem_create.vcxproj.filters000066400000000000000000000016671361505074100237520ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {4d39e0e5-0c47-48a1-8ceb-6c174a170013} {bdfb5bc0-aa19-4831-b79b-359182bd4c74} Match Files Test Scripts Source Files vmem-1.8/src/test/vmem_create_error/000077500000000000000000000000001361505074100175765ustar00rootroot00000000000000vmem-1.8/src/test/vmem_create_error/.gitignore000066400000000000000000000000221361505074100215600ustar00rootroot00000000000000vmem_create_error vmem-1.8/src/test/vmem_create_error/Makefile000066400000000000000000000033131361505074100212360ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_create_error/Makefile -- build vmem_create_error unit test # TARGET = vmem_create_error OBJS = vmem_create_error.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_create_error/TEST0000077500000000000000000000034041361505074100203640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_create_error/TEST0 -- unit test for vmem_create_error # . ../unittest/unittest.sh require_build_type debug nondebug setup expect_normal_exit ./vmem_create_error$EXESUFFIX check pass vmem-1.8/src/test/vmem_create_error/TEST0.PS1000066400000000000000000000034021361505074100207610ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_create_error/TEST0.PS1 -- unit test for vmem_create_error # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup expect_normal_exit $Env:EXE_DIR\vmem_create_error$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_create_error/vmem_create_error.c000066400000000000000000000043051361505074100234440ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_create_error.c -- unit test for vmem_create_error * * usage: vmem_create_error */ #include "unittest.h" static char mem_pool[VMEM_MIN_POOL]; int main(int argc, char *argv[]) { VMEM *vmp; START(argc, argv, "vmem_create_error"); if (argc > 1) UT_FATAL("usage: %s", argv[0]); errno = 0; vmp = vmem_create_in_region(mem_pool, 0); UT_ASSERTeq(vmp, NULL); UT_ASSERTeq(errno, EINVAL); errno = 0; vmp = vmem_create("./", 0); UT_ASSERTeq(vmp, NULL); UT_ASSERTeq(errno, EINVAL); errno = 0; vmp = vmem_create("invalid dir !@#$%^&*()=", VMEM_MIN_POOL); UT_ASSERTeq(vmp, NULL); UT_ASSERTne(errno, 0); DONE(NULL); } vmem-1.8/src/test/vmem_create_error/vmem_create_error.vcxproj000066400000000000000000000063521361505074100247210ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {CD4B9690-7A06-4F7A-8492-9336979EE7E9} Win32Proj vmem_create_error 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_create_error/vmem_create_error.vcxproj.filters000066400000000000000000000012361361505074100263640ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Source Files vmem-1.8/src/test/vmem_create_in_region/000077500000000000000000000000001361505074100204165ustar00rootroot00000000000000vmem-1.8/src/test/vmem_create_in_region/.gitignore000066400000000000000000000000261361505074100224040ustar00rootroot00000000000000vmem_create_in_region vmem-1.8/src/test/vmem_create_in_region/Makefile000066400000000000000000000033331361505074100220600ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_create_in_region/Makefile -- build vmem_create_in_region unit test # TARGET = vmem_create_in_region OBJS = vmem_create_in_region.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_create_in_region/TEST0000077500000000000000000000033561361505074100212120ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_create_in_region/TEST0 -- unit test for vmem_create_in_region # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_create_in_region$EXESUFFIX check pass vmem-1.8/src/test/vmem_create_in_region/TEST0.PS1000066400000000000000000000033541361505074100216070ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_create_in_region/TEST0.PS1 -- unit test for vmem_create_in_region # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_create_in_region$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_create_in_region/vmem_create_in_region.c000066400000000000000000000047431361505074100251120ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_create_in_region.c -- unit test for vmem_create_in_region * * usage: vmem_create_in_region */ #include "unittest.h" #define TEST_ALLOCATIONS (300) static void *allocs[TEST_ALLOCATIONS]; int main(int argc, char *argv[]) { VMEM *vmp; size_t i; START(argc, argv, "vmem_create_in_region"); if (argc > 1) UT_FATAL("usage: %s", argv[0]); /* allocate memory for function vmem_create_in_region() */ void *mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); for (i = 0; i < TEST_ALLOCATIONS; ++i) { allocs[i] = vmem_malloc(vmp, sizeof(int)); UT_ASSERTne(allocs[i], NULL); /* check that pointer came from mem_pool */ UT_ASSERTrange(allocs[i], mem_pool, VMEM_MIN_POOL); } for (i = 0; i < TEST_ALLOCATIONS; ++i) { vmem_free(vmp, allocs[i]); } vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_create_in_region/vmem_create_in_region.vcxproj000066400000000000000000000063621361505074100263620ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {74243B75-816C-4077-8DF0-98D2C78B0E5D} Win32Proj vmem_create_in_region 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_create_in_region/vmem_create_in_region.vcxproj.filters000066400000000000000000000012421361505074100300210ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Source Files vmem-1.8/src/test/vmem_create_win/000077500000000000000000000000001361505074100172425ustar00rootroot00000000000000vmem-1.8/src/test/vmem_create_win/TEST0.PS1000066400000000000000000000033301361505074100204250ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src\test\vmem_create_win\TEST0.PS1 -- unit test for vmem_create # . 
..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_create_win$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_create_win/out0.log.match000066400000000000000000000002231361505074100217240ustar00rootroot00000000000000vmem_create_win$(nW)TEST0: START: vmem_create_win$(nW) $(nW)vmem_create_win$(nW) $(nW) signal: Segmentation fault vmem_create_win$(nW)TEST0: DONE vmem-1.8/src/test/vmem_create_win/vmem_create_win.c000066400000000000000000000046351361505074100225620ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_create_win.c -- unit test for vmem_createW * * usage: vmem_create_win directory */ #include "unittest.h" VMEM *Vmp; /* * signal_handler -- called on SIGSEGV */ static void signal_handler(int sig) { UT_OUT("signal: %s", os_strsignal(sig)); vmem_delete(Vmp); DONEW(NULL); } int wmain(int argc, wchar_t *argv[]) { STARTW(argc, argv, "vmem_create_win"); if (argc < 2 || argc > 3) UT_FATAL("usage: %s directory", ut_toUTF8(argv[0])); Vmp = vmem_createW(argv[1], VMEM_MIN_POOL); if (Vmp == NULL) UT_OUT("!vmem_create"); else { struct sigaction v; sigemptyset(&v.sa_mask); v.sa_flags = 0; v.sa_handler = signal_handler; if (SIGACTION(SIGSEGV, &v, NULL) != 0) UT_FATAL("!sigaction"); /* try to dereference the opaque handle */ char x = *(char *)Vmp; UT_OUT("x = %c", x); } UT_FATAL("no signal received"); } vmem-1.8/src/test/vmem_create_win/vmem_create_win.filters000066400000000000000000000020121361505074100237730ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {4d39e0e5-0c47-48a1-8ceb-6c174a170013} {bdfb5bc0-aa19-4831-b79b-359182bd4c74} Match Files Match Files Test Scripts Source Files vmem-1.8/src/test/vmem_create_win/vmem_create_win.vcxproj000066400000000000000000000064101361505074100240240ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {BF3B6C3A-3073-4AD4-BB41-A41047231982} Win32Proj vmem_create 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_create_win/vmem_create_win.vcxproj.filters000066400000000000000000000015621361505074100254760ustar00rootroot00000000000000 {3cc9cff5-203b-47a7-ab65-14c2b7565a66} {137e2ec9-3c20-4b5b-af20-7ca3db95eb10} {89a00039-32e4-4144-baed-5f9c48a505be} Source Files Test Scripts Match Files vmem-1.8/src/test/vmem_custom_alloc/000077500000000000000000000000001361505074100176065ustar00rootroot00000000000000vmem-1.8/src/test/vmem_custom_alloc/.gitignore000066400000000000000000000000221361505074100215700ustar00rootroot00000000000000vmem_custom_alloc vmem-1.8/src/test/vmem_custom_alloc/Makefile000066400000000000000000000033131361505074100212460ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/Makefile -- build vmem_custom_alloc unit test # TARGET = vmem_custom_alloc OBJS = vmem_custom_alloc.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_custom_alloc/TEST0000077500000000000000000000033351361505074100203770ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST0 -- unit test for vmem_custom_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_custom_alloc$EXESUFFIX 0 pass vmem-1.8/src/test/vmem_custom_alloc/TEST0.PS1000066400000000000000000000033331361505074100207740ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST0.PS1 -- unit test for vmem_custom_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_custom_alloc$Env:EXESUFFIX 0 pass vmem-1.8/src/test/vmem_custom_alloc/TEST1000077500000000000000000000033351361505074100204000ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST1 -- unit test for vmem_custom_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_custom_alloc$EXESUFFIX 1 pass vmem-1.8/src/test/vmem_custom_alloc/TEST1.PS1000066400000000000000000000033331361505074100207750ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST1.PS1 -- unit test for vmem_custom_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_custom_alloc$Env:EXESUFFIX 1 pass vmem-1.8/src/test/vmem_custom_alloc/TEST2000077500000000000000000000033351361505074100204010ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST2 -- unit test for vmem_custom_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_custom_alloc$EXESUFFIX 2 pass vmem-1.8/src/test/vmem_custom_alloc/TEST2.PS1000066400000000000000000000033331361505074100207760ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST2.PS1 -- unit test for vmem_custom_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_custom_alloc$Env:EXESUFFIX 2 pass vmem-1.8/src/test/vmem_custom_alloc/TEST3000077500000000000000000000033421361505074100204000ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST3 -- unit test for vmem_custom_alloc # . 
../unittest/unittest.sh setup expect_normal_exit ./vmem_custom_alloc$EXESUFFIX 0 $DIR pass vmem-1.8/src/test/vmem_custom_alloc/TEST3.PS1000066400000000000000000000033401361505074100207750ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST3.PS1 -- unit test for vmem_custom_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_custom_alloc$Env:EXESUFFIX 0 $DIR pass vmem-1.8/src/test/vmem_custom_alloc/TEST4000077500000000000000000000033421361505074100204010ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST4 -- unit test for vmem_custom_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_custom_alloc$EXESUFFIX 1 $DIR pass vmem-1.8/src/test/vmem_custom_alloc/TEST4.PS1000066400000000000000000000033401361505074100207760ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST4.PS1 -- unit test for vmem_custom_alloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_custom_alloc$Env:EXESUFFIX 1 $DIR pass vmem-1.8/src/test/vmem_custom_alloc/TEST5000077500000000000000000000033421361505074100204020ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST5 -- unit test for vmem_custom_alloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_custom_alloc$EXESUFFIX 2 $DIR pass vmem-1.8/src/test/vmem_custom_alloc/TEST5.PS1000066400000000000000000000033401361505074100207770ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_custom_alloc/TEST5.PS1 -- unit test for vmem_custom_alloc # . 
..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_custom_alloc$Env:EXESUFFIX 2 $DIR pass vmem-1.8/src/test/vmem_custom_alloc/out0.log.match000066400000000000000000000001671361505074100222770ustar00rootroot00000000000000vmem_custom_alloc$(nW)TEST0: START: vmem_custom_alloc $(nW)vmem_custom_alloc$(nW) 0 vmem_custom_alloc$(nW)TEST0: DONE vmem-1.8/src/test/vmem_custom_alloc/out1.log.match000066400000000000000000000001671361505074100223000ustar00rootroot00000000000000vmem_custom_alloc$(nW)TEST1: START: vmem_custom_alloc $(nW)vmem_custom_alloc$(nW) 1 vmem_custom_alloc$(nW)TEST1: DONE vmem-1.8/src/test/vmem_custom_alloc/out2.log.match000066400000000000000000000001671361505074100223010ustar00rootroot00000000000000vmem_custom_alloc$(nW)TEST2: START: vmem_custom_alloc $(nW)vmem_custom_alloc$(nW) 2 vmem_custom_alloc$(nW)TEST2: DONE vmem-1.8/src/test/vmem_custom_alloc/out3.log.match000066400000000000000000000001751361505074100223010ustar00rootroot00000000000000vmem_custom_alloc$(nW)TEST3: START: vmem_custom_alloc $(nW)vmem_custom_alloc$(nW) 0 $(nW) vmem_custom_alloc$(nW)TEST3: DONE vmem-1.8/src/test/vmem_custom_alloc/out4.log.match000066400000000000000000000001751361505074100223020ustar00rootroot00000000000000vmem_custom_alloc$(nW)TEST4: START: vmem_custom_alloc $(nW)vmem_custom_alloc$(nW) 1 $(nW) vmem_custom_alloc$(nW)TEST4: DONE vmem-1.8/src/test/vmem_custom_alloc/out5.log.match000066400000000000000000000001751361505074100223030ustar00rootroot00000000000000vmem_custom_alloc$(nW)TEST5: START: vmem_custom_alloc $(nW)vmem_custom_alloc$(nW) 2 $(nW) vmem_custom_alloc$(nW)TEST5: DONE vmem-1.8/src/test/vmem_custom_alloc/vmem_custom_alloc.c000066400000000000000000000127671361505074100234770ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_custom_alloc.c -- unit test for vmem_custom_alloc * * usage: vmem_custom_alloc (0-2) [directory] */ #include "unittest.h" #define TEST_STRING_VALUE "Some test text, to check memory" #define TEST_REPEAT_CREATE_POOLS (20) static int custom_allocs; static int custom_alloc_calls; static int expect_malloc; /* * malloc_null -- custom malloc function with error * * This function updates statistics about custom alloc functions, * and returns NULL. */ static void * malloc_null(size_t size) { ++custom_alloc_calls; #ifdef _WIN32 /* * Because Windows version requires UTF-16 string conversion * which requires four malloc calls and one free to succeed due to * long path support */ if (custom_alloc_calls < 6) { custom_allocs++; return malloc(size); } #endif errno = ENOMEM; return NULL; } /* * malloc_custom -- custom malloc function * * This function updates statistics about custom alloc functions, * and returns allocated memory. */ static void * malloc_custom(size_t size) { ++custom_alloc_calls; ++custom_allocs; return malloc(size); } /* * free_custom -- custom free function * * This function updates statistics about custom alloc functions, * and frees allocated memory. */ static void free_custom(void *ptr) { ++custom_alloc_calls; --custom_allocs; free(ptr); } /* * realloc_custom -- custom realloc function * * This function updates statistics about custom alloc functions, * and returns reallocated memory. */ static void * realloc_custom(void *ptr, size_t size) { ++custom_alloc_calls; return realloc(ptr, size); } /* * strdup_custom -- custom strdup function * * This function updates statistics about custom alloc functions, * and returns allocated memory with a duplicated string. */ static char * strdup_custom(const char *s) { ++custom_alloc_calls; ++custom_allocs; return strdup(s); } /* * pool_test -- test pool * * This function creates a memory pool in a file (if dir is not NULL), * or in RAM (if dir is NULL) and allocates memory for the test. 
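 *
 * The file-backed path calls vmem_create(dir, VMEM_MIN_POOL), which keeps
 * the pool in a temporary file under the given directory; the in-memory
 * path instead hands a 4 MiB-aligned anonymous mapping obtained from
 * MMAP_ANON_ALIGNED() to vmem_create_in_region(), so no file system is
 * required for that case.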
*/ static void pool_test(const char *dir) { VMEM *vmp = NULL; if (dir != NULL) { vmp = vmem_create(dir, VMEM_MIN_POOL); } else { /* allocate memory for function vmem_create_in_region() */ void *mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); } if (vmp == NULL) { if (dir == NULL) { UT_FATAL("!vmem_create_in_region"); } else { UT_FATAL("!vmem_create"); } } char *test = vmem_malloc(vmp, strlen(TEST_STRING_VALUE) + 1); if (expect_malloc == 0) { UT_ASSERTeq(test, NULL); } else { strcpy(test, TEST_STRING_VALUE); UT_ASSERTeq(strcmp(test, TEST_STRING_VALUE), 0); UT_ASSERT(vmem_malloc_usable_size(vmp, test) > 0); vmem_free(vmp, test); } vmem_delete(vmp); } int main(int argc, char *argv[]) { int expect_custom_alloc = 0; START(argc, argv, "vmem_custom_alloc"); if (argc < 2 || argc > 3 || strlen(argv[1]) != 1) UT_FATAL("usage: %s (0-2) [directory]", argv[0]); switch (argv[1][0]) { case '0': { /* use default allocator */ expect_custom_alloc = 0; expect_malloc = 1; break; } case '1': { /* error in custom malloc function */ expect_custom_alloc = 1; expect_malloc = 0; vmem_set_funcs(malloc_null, free_custom, realloc_custom, strdup_custom, NULL); break; } case '2': { /* use custom alloc functions */ expect_custom_alloc = 1; expect_malloc = 1; vmem_set_funcs(malloc_custom, free_custom, realloc_custom, strdup_custom, NULL); break; } default: { UT_FATAL("usage: %s (0-2) [directory]", argv[0]); break; } } if (argc == 3) { pool_test(argv[2]); } else { int i; /* repeat create pool */ for (i = 0; i < TEST_REPEAT_CREATE_POOLS; ++i) pool_test(NULL); } /* check memory leak in custom allocator */ UT_ASSERTeq(custom_allocs, 0); if (expect_custom_alloc == 0) { UT_ASSERTeq(custom_alloc_calls, 0); } else { UT_ASSERTne(custom_alloc_calls, 0); } DONE(NULL); } vmem-1.8/src/test/vmem_custom_alloc/vmem_custom_alloc.vcxproj000066400000000000000000000071631361505074100247420ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {4ED1E400-CF16-48C2-B176-2BF186E73531} Win32Proj vmem_custom_alloc 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_custom_alloc/vmem_custom_alloc.vcxproj.filters000066400000000000000000000032361361505074100264060ustar00rootroot00000000000000 {9a6bc21c-8036-4ce5-8745-9d76afbbd200} {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Match Files Match Files Match Files Match Files Match Files Match Files Source Files vmem-1.8/src/test/vmem_malloc/000077500000000000000000000000001361505074100163715ustar00rootroot00000000000000vmem-1.8/src/test/vmem_malloc/.gitignore000066400000000000000000000000141361505074100203540ustar00rootroot00000000000000vmem_malloc vmem-1.8/src/test/vmem_malloc/Makefile000066400000000000000000000032631361505074100200350ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc/Makefile -- build vmem_malloc unit test # TARGET = vmem_malloc OBJS = vmem_malloc.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_malloc/TEST0000077500000000000000000000033201361505074100171540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc/TEST0 -- unit test for vmem_malloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_malloc$EXESUFFIX check pass vmem-1.8/src/test/vmem_malloc/TEST0.PS1000066400000000000000000000033161361505074100175600ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc/TEST0.PS1 -- unit test for vmem_malloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_malloc$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_malloc/TEST1000077500000000000000000000033251361505074100171620ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc/TEST1 -- unit test for vmem_malloc # . 
../unittest/unittest.sh setup expect_normal_exit ./vmem_malloc$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_malloc/TEST1.PS1000066400000000000000000000033231361505074100175570ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc/TEST1.PS1 -- unit test for vmem_malloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_malloc$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_malloc/TEST2000077500000000000000000000034241361505074100171630ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_malloc/TEST2 -- unit test for vmem_malloc # . ../unittest/unittest.sh require_command daxio require_dax_device setup dax_device_zero expect_normal_exit ./vmem_malloc$EXESUFFIX $DEVICE_DAX_PATH pass vmem-1.8/src/test/vmem_malloc/vmem_malloc.c000066400000000000000000000051211361505074100210270ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_malloc.c -- unit test for vmem_malloc * * usage: vmem_malloc [directory] */ #include "unittest.h" int main(int argc, char *argv[]) { const int test_value = 123456; char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_malloc"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } int *test = vmem_malloc(vmp, sizeof(int)); UT_ASSERTne(test, NULL); *test = test_value; UT_ASSERTeq(*test, test_value); /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(test, mem_pool, VMEM_MIN_POOL); } vmem_free(vmp, test); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_malloc/vmem_malloc.vcxproj000066400000000000000000000063771361505074100223160ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {40DC66AD-F66D-4194-B9A4-A3A2222516FE} Win32Proj vmem_malloc 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_malloc/vmem_malloc.vcxproj.filters000066400000000000000000000013471361505074100237550ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_malloc_usable_size/000077500000000000000000000000001361505074100207565ustar00rootroot00000000000000vmem-1.8/src/test/vmem_malloc_usable_size/.gitignore000066400000000000000000000000301361505074100227370ustar00rootroot00000000000000vmem_malloc_usable_size vmem-1.8/src/test/vmem_malloc_usable_size/Makefile000066400000000000000000000033421361505074100224200ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_malloc_usable_size/Makefile -- build vmem_malloc_usable_size unit test # TARGET = vmem_malloc_usable_size OBJS =vmem_malloc_usable_size.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_malloc_usable_size/TEST0000077500000000000000000000033641361505074100215510ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc_usable_size/TEST0 -- unit test for vmem_malloc_usable_size # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_malloc_usable_size$EXESUFFIX check pass vmem-1.8/src/test/vmem_malloc_usable_size/TEST0.PS1000066400000000000000000000033621361505074100221460ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc_usable_size/TEST0.PS1 -- unit test for vmem_malloc_usable_size # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_malloc_usable_size$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_malloc_usable_size/TEST1000077500000000000000000000033711361505074100215500ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc_usable_size/TEST1 -- unit test for vmem_malloc_usable_size # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_malloc_usable_size$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_malloc_usable_size/TEST1.PS1000066400000000000000000000033671361505074100221540ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_malloc_usable_size/TEST1.PS1 -- unit test for vmem_malloc_usable_size # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_malloc_usable_size$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_malloc_usable_size/vmem_malloc_usable_size.c000066400000000000000000000074601361505074100260110ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_malloc_usable_size.c -- unit test for vmem_malloc_usable_size * * usage: vmem_malloc_usable_size [directory] */ #include "unittest.h" #define POOL_SIZE (VMEM_MIN_POOL * 2) static const struct { size_t size; size_t spacing; } Check_sizes[] = { {.size = 10, .spacing = 8}, {.size = 100, .spacing = 16}, {.size = 200, .spacing = 32}, {.size = 500, .spacing = 64}, {.size = 1000, .spacing = 128}, {.size = 2000, .spacing = 256}, {.size = 3000, .spacing = 512}, {.size = 1 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 2 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 3 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 4 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 5 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 6 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 7 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 8 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 9 * 1024 * 1024, .spacing = 4 * 1024 * 1024} }; int main(int argc, char *argv[]) { char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; void *alloc; size_t usable_size; size_t size; unsigned i; START(argc, argv, "vmem_malloc_usable_size"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(POOL_SIZE, 4 << 20); vmp = vmem_create_in_region(mem_pool, POOL_SIZE); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, POOL_SIZE); if (vmp == NULL) UT_FATAL("!vmem_create"); } UT_ASSERTeq(vmem_malloc_usable_size(vmp, NULL), 0); for (i = 0; i < (sizeof(Check_sizes) / sizeof(Check_sizes[0])); ++i) { size = Check_sizes[i].size; alloc = vmem_malloc(vmp, size); UT_ASSERTne(alloc, NULL); usable_size = vmem_malloc_usable_size(vmp, alloc); UT_ASSERT(usable_size >= size); if (usable_size - size > Check_sizes[i].spacing) { UT_FATAL("Size %zu: spacing %zu is bigger" "than expected: %zu", size, (usable_size - size), Check_sizes[i].spacing); } memset(alloc, 0xEE, usable_size); vmem_free(vmp, alloc); } UT_ASSERTeq(vmem_check(vmp), 1); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_malloc_usable_size/vmem_malloc_usable_size.vcxproj000066400000000000000000000064271361505074100272640ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {C00B4A26-6C57-4968-AED5-B45FD31A22E7} Win32Proj vmem_malloc_usable_size 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_malloc_usable_size/vmem_malloc_usable_size.vcxproj.filters000066400000000000000000000013631361505074100307250ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_mix_allocations/000077500000000000000000000000001361505074100203075ustar00rootroot00000000000000vmem-1.8/src/test/vmem_mix_allocations/.gitignore000066400000000000000000000000251361505074100222740ustar00rootroot00000000000000vmem_mix_allocations vmem-1.8/src/test/vmem_mix_allocations/Makefile000066400000000000000000000033271361505074100217540ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_mix_allocations/Makefile -- build vmem_mix_allocations unit test # TARGET = vmem_mix_allocations OBJS = vmem_mix_allocations.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_mix_allocations/TEST0000077500000000000000000000034151361505074100210770ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_mix_allocations/TEST0 -- unit test for vmem_mix_allocations # . 
../unittest/unittest.sh require_build_type debug nondebug setup expect_normal_exit ./vmem_mix_allocations$EXESUFFIX check pass vmem-1.8/src/test/vmem_mix_allocations/TEST0.PS1000066400000000000000000000034131361505074100214740ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_mix_allocations/TEST0.PS1 -- unit test for vmem_mix_allocations # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup expect_normal_exit $Env:EXE_DIR\vmem_mix_allocations$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_mix_allocations/TEST1000077500000000000000000000034221361505074100210760ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_mix_allocations/TEST1 -- unit test for vmem_mix_allocations # . ../unittest/unittest.sh require_build_type debug nondebug setup expect_normal_exit ./vmem_mix_allocations$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_mix_allocations/TEST1.PS1000066400000000000000000000034201361505074100214730ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_mix_allocations/TEST1.PS1 -- unit test for vmem_mix_allocations # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup expect_normal_exit $Env:EXE_DIR\vmem_mix_allocations$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_mix_allocations/vmem_mix_allocations.c000066400000000000000000000057051361505074100246730ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_mix_allocations.c -- unit test for vmem_mix_allocations * * usage: vmem_mix_allocations [directory] */ #include "unittest.h" #define COUNT 24 #define POOL_SIZE VMEM_MIN_POOL #define MAX_SIZE (1 << (COUNT - 1)) /* 8MB */ int main(int argc, char *argv[]) { char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; size_t obj_size; int *ptr[COUNT + 1]; int i = 0; size_t sum_alloc = 0; START(argc, argv, "vmem_mix_allocations"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(POOL_SIZE, 4 << 20); vmp = vmem_create_in_region(mem_pool, POOL_SIZE); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, POOL_SIZE); if (vmp == NULL) UT_FATAL("!vmem_create"); } obj_size = MAX_SIZE; /* test with multiple size of allocations from 8MB to 1B */ for (i = 0; i < COUNT; ++i, obj_size /= 2) { ptr[i] = vmem_malloc(vmp, obj_size); if (ptr[i] == NULL) continue; sum_alloc += obj_size; /* check that pointer came from mem_pool */ if (dir == NULL) UT_ASSERTrange(ptr[i], mem_pool, POOL_SIZE); } /* allocate more than half of pool size */ UT_ASSERT(sum_alloc * 2 > POOL_SIZE); while (i > 0) vmem_free(vmp, ptr[--i]); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_mix_allocations/vmem_mix_allocations.vcxproj000066400000000000000000000070741361505074100261450ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {537F759B-B617-48D9-A2F3-7FB769A8F9B7} Win32Proj vmem_mix_allocations 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_mix_allocations/vmem_mix_allocations.vcxproj.filters000066400000000000000000000013601361505074100276040ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_multiple_pools/000077500000000000000000000000001361505074100201715ustar00rootroot00000000000000vmem-1.8/src/test/vmem_multiple_pools/.gitignore000066400000000000000000000000241361505074100221550ustar00rootroot00000000000000vmem_multiple_pools vmem-1.8/src/test/vmem_multiple_pools/Makefile000066400000000000000000000033231361505074100216320ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/Makefile -- build vmem_multiple_pools unit test # TARGET = vmem_multiple_pools OBJS = vmem_multiple_pools.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_multiple_pools/TEST0000077500000000000000000000034541361505074100207640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/TEST0 -- unit test for vmem_multiple_pools # . 
../unittest/unittest.sh require_build_type debug nondebug setup require_free_space 2G expect_normal_exit ./vmem_multiple_pools$EXESUFFIX $DIR 128 1 check pass vmem-1.8/src/test/vmem_multiple_pools/TEST0.PS1000066400000000000000000000034521361505074100213610ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/TEST0.PS1 -- unit test for vmem_multiple_pools # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup require_free_space 2G expect_normal_exit $Env:EXE_DIR\vmem_multiple_pools$Env:EXESUFFIX $DIR 128 1 check pass vmem-1.8/src/test/vmem_multiple_pools/TEST1000077500000000000000000000036611361505074100207650ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/TEST1 -- unit test for vmem_multiple_pools # . ../unittest/unittest.sh require_build_type debug nondebug # TEST2/TEST3 are specifically for helgrind/drd tests configure_valgrind helgrind force-disable configure_valgrind drd force-disable setup require_free_space 1G expect_normal_exit ./vmem_multiple_pools$EXESUFFIX $DIR 4 16 check pass vmem-1.8/src/test/vmem_multiple_pools/TEST1.PS1000066400000000000000000000034511361505074100213610ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/TEST1.PS1 -- unit test for vmem_multiple_pools # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup require_free_space 1G expect_normal_exit $Env:EXE_DIR\vmem_multiple_pools$Env:EXESUFFIX $DIR 4 16 check pass vmem-1.8/src/test/vmem_multiple_pools/TEST2000077500000000000000000000035241361505074100207640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/TEST2 -- unit test for vmem_multiple_pools # . ../unittest/unittest.sh require_build_type debug nondebug configure_valgrind helgrind force-enable setup export VMEM_LOG_LEVEL=0 expect_normal_exit ./vmem_multiple_pools$EXESUFFIX $DIR 2 4 check pass vmem-1.8/src/test/vmem_multiple_pools/TEST3000077500000000000000000000035201361505074100207610ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_multiple_pools/TEST3 -- unit test for vmem_multiple_pools # . 
../unittest/unittest.sh require_build_type debug nondebug configure_valgrind drd force-enable setup export VMEM_LOG_LEVEL=0 expect_normal_exit ./vmem_multiple_pools$EXESUFFIX $DIR 2 4 check pass vmem-1.8/src/test/vmem_multiple_pools/out0.log.match000066400000000000000000000002511361505074100226540ustar00rootroot00000000000000vmem_multiple_pools$(nW)TEST0: START: vmem_multiple_pools $(nW)vmem_multiple_pools$(nW) $(nW) 128 1 create 128 pools in 1 thread(s) vmem_multiple_pools$(nW)TEST0: DONE vmem-1.8/src/test/vmem_multiple_pools/out1.log.match000066400000000000000000000002471361505074100226620ustar00rootroot00000000000000vmem_multiple_pools$(nW)TEST1: START: vmem_multiple_pools $(nW)vmem_multiple_pools$(nW) $(nW) 4 16 create 4 pools in 16 thread(s) vmem_multiple_pools$(nW)TEST1: DONE vmem-1.8/src/test/vmem_multiple_pools/out2.log.match000066400000000000000000000002321361505074100226550ustar00rootroot00000000000000vmem_multiple_pools/TEST2: START: vmem_multiple_pools ./vmem_multiple_pools$(nW) $(nW) 2 4 create 2 pools in 4 thread(s) vmem_multiple_pools/TEST2: DONE vmem-1.8/src/test/vmem_multiple_pools/out3.log.match000066400000000000000000000002321361505074100226560ustar00rootroot00000000000000vmem_multiple_pools/TEST3: START: vmem_multiple_pools ./vmem_multiple_pools$(nW) $(nW) 2 4 create 2 pools in 4 thread(s) vmem_multiple_pools/TEST3: DONE vmem-1.8/src/test/vmem_multiple_pools/vmem_multiple_pools.c000066400000000000000000000101671361505074100244350ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_multiple_pools.c -- unit test for vmem_multiple_pools * * usage: vmem_multiple_pools directory npools [nthreads] */ #include "unittest.h" #define TEST_REPEAT_CREATE_POOLS (10) static char **mem_pools; static VMEM **pools; static unsigned npools; static const char *dir; static void * thread_func(void *arg) { unsigned start_idx = *(unsigned *)arg; for (int repeat = 0; repeat < TEST_REPEAT_CREATE_POOLS; ++repeat) { for (unsigned idx = 0; idx < npools; ++idx) { unsigned pool_id = start_idx + idx; /* delete old pool with the same id if exist */ if (pools[pool_id] != NULL) { vmem_delete(pools[pool_id]); pools[pool_id] = NULL; } if (pool_id % 2 == 0) { /* for even pool_id, create in region */ pools[pool_id] = vmem_create_in_region( mem_pools[pool_id / 2], VMEM_MIN_POOL); if (pools[pool_id] == NULL) UT_FATAL("!vmem_create_in_region"); } else { /* for odd pool_id, create in file */ pools[pool_id] = vmem_create(dir, VMEM_MIN_POOL); if (pools[pool_id] == NULL) UT_FATAL("!vmem_create"); } void *test = vmem_malloc(pools[pool_id], sizeof(void *)); UT_ASSERTne(test, NULL); vmem_free(pools[pool_id], test); } } return NULL; } int main(int argc, char *argv[]) { START(argc, argv, "vmem_multiple_pools"); if (argc < 4) UT_FATAL("usage: %s directory npools nthreads", argv[0]); dir = argv[1]; npools = ATOU(argv[2]); unsigned nthreads = ATOU(argv[3]); UT_OUT("create %d pools in %d thread(s)", npools, nthreads); const unsigned mem_pools_size = (npools / 2 + npools % 2) * nthreads; mem_pools = MALLOC(mem_pools_size * sizeof(char *)); pools = CALLOC(npools * nthreads, sizeof(VMEM *)); os_thread_t *threads = CALLOC(nthreads, sizeof(os_thread_t)); UT_ASSERTne(threads, NULL); unsigned *pool_idx = CALLOC(nthreads, sizeof(pool_idx[0])); UT_ASSERTne(pool_idx, NULL); for (unsigned pool_id = 0; pool_id < mem_pools_size; ++pool_id) { /* allocate memory for function vmem_create_in_region() */ mem_pools[pool_id] = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); } /* create and destroy pools multiple times */ for (unsigned t = 0; t < nthreads; t++) { pool_idx[t] = npools * t; PTHREAD_CREATE(&threads[t], NULL, thread_func, &pool_idx[t]); } for (unsigned t = 0; t < nthreads; t++) PTHREAD_JOIN(&threads[t], NULL); for (unsigned pool_id = 0; pool_id < npools * nthreads; ++pool_id) { if (pools[pool_id] != NULL) { vmem_delete(pools[pool_id]); pools[pool_id] = NULL; } } FREE(mem_pools); FREE(pools); FREE(threads); FREE(pool_idx); DONE(NULL); } vmem-1.8/src/test/vmem_multiple_pools/vmem_multiple_pools.vcxproj000066400000000000000000000067461361505074100257160ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {CD7A18D5-55D9-4922-A000-FFAA08ABB006} Win32Proj vmem_multiple_pools 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_multiple_pools/vmem_multiple_pools.vcxproj.filters000066400000000000000000000024371361505074100273560ustar00rootroot00000000000000 {9a6bc21c-8036-4ce5-8745-9d76afbbd200} {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} {81be03f1-8556-458c-9dff-9f97c0943837} Match Files Match Files Test Scripts Test Scripts Source Files Header files vmem-1.8/src/test/vmem_out_of_memory/000077500000000000000000000000001361505074100200055ustar00rootroot00000000000000vmem-1.8/src/test/vmem_out_of_memory/.gitignore000066400000000000000000000000231361505074100217700ustar00rootroot00000000000000vmem_out_of_memory 
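For reference, a minimal single-threaded sketch of the create/allocate/free/delete cycle that vmem_multiple_pools.c above runs concurrently across many pools. This is an illustrative sketch only, not part of the test suite: the <libvmem.h> include, the /tmp directory, the file name and the plain perror()-based error handling are assumptions made here for a self-contained example; the real test instead uses the repo's unittest macros (UT_FATAL, MMAP_ANON_ALIGNED) and alternates vmem_create() with vmem_create_in_region(). Only API calls that appear in the test itself are used.

/*
 * vmem_two_pools_sketch.c -- illustrative sketch (not part of the test suite)
 *
 * Single-threaded version of the pool create/allocate/free/delete cycle
 * exercised concurrently by vmem_multiple_pools.c. Assumes <libvmem.h>
 * and a writable /tmp directory.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libvmem.h>

int
main(void)
{
	/* two independent pools backed by temporary files in /tmp */
	VMEM *vmp_a = vmem_create("/tmp", VMEM_MIN_POOL);
	VMEM *vmp_b = vmem_create("/tmp", VMEM_MIN_POOL);
	if (vmp_a == NULL || vmp_b == NULL) {
		perror("vmem_create");
		exit(1);
	}

	/* allocations from one pool are independent of the other */
	int *a = vmem_malloc(vmp_a, sizeof(int));
	int *b = vmem_malloc(vmp_b, sizeof(int));
	if (a == NULL || b == NULL) {
		perror("vmem_malloc");
		exit(1);
	}
	*a = 1;
	*b = 2;

	vmem_free(vmp_a, a);
	vmem_free(vmp_b, b);

	/* each pool is deleted independently */
	vmem_delete(vmp_a);
	vmem_delete(vmp_b);

	return 0;
}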
vmem-1.8/src/test/vmem_out_of_memory/Makefile000066400000000000000000000033171361505074100214510ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_out_of_memory/Makefile -- build vmem_out_of_memory unit test # TARGET = vmem_out_of_memory OBJS = vmem_out_of_memory.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_out_of_memory/TEST0000077500000000000000000000035431361505074100205770ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_out_of_memory/TEST0 -- unit test for vmem_out_of_memory # . ../unittest/unittest.sh require_build_type debug nondebug setup # limit output for file vmem*.log to reduce time of test execution export VMEM_LOG_LEVEL=2 expect_normal_exit ./vmem_out_of_memory$EXESUFFIX check pass vmem-1.8/src/test/vmem_out_of_memory/TEST0.PS1000066400000000000000000000035431361505074100211760ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_out_of_memory/TEST0.PS1 -- unit test for vmem_out_of_memory # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup # limit output for file vmem*.log to reduce time of test execution $Env:VMEM_LOG_LEVEL = "2" expect_normal_exit $Env:EXE_DIR\vmem_out_of_memory$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_out_of_memory/TEST1000077500000000000000000000035501361505074100205760ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_out_of_memory/TEST1 -- unit test for vmem_out_of_memory # . ../unittest/unittest.sh require_build_type debug nondebug setup # limit output for file vmem*.log to reduce time of test execution export VMEM_LOG_LEVEL=2 expect_normal_exit ./vmem_out_of_memory$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_out_of_memory/TEST1.PS1000066400000000000000000000035501361505074100211750ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_out_of_memory/TEST1.PS1 -- unit test for vmem_out_of_memory # . ..\unittest\unittest.ps1 require_build_type debug nondebug setup # limit output for file vmem*.log to reduce time of test execution $Env:VMEM_LOG_LEVEL = "2" expect_normal_exit $Env:EXE_DIR\vmem_out_of_memory$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_out_of_memory/vmem_out_of_memory.c000066400000000000000000000054231361505074100240640ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_out_of_memory -- unit test for vmem_out_of_memory * * usage: vmem_out_of_memory [directory] */ #include "unittest.h" int main(int argc, char *argv[]) { char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_out_of_memory"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } /* allocate all memory */ void *prev = NULL; for (;;) { void **next = vmem_malloc(vmp, sizeof(void *)); if (next == NULL) { /* out of memory */ break; } /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(next, mem_pool, VMEM_MIN_POOL); } *next = prev; prev = next; } UT_ASSERTne(prev, NULL); /* free all allocations */ while (prev != NULL) { void **act = prev; prev = *act; vmem_free(vmp, act); } vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_out_of_memory/vmem_out_of_memory.vcxproj000066400000000000000000000064151361505074100253370ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {26D24B3D-22CE-44EB-AA21-2BF594F80520} Win32Proj vmem_out_of_memory 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_out_of_memory/vmem_out_of_memory.vcxproj.filters000066400000000000000000000013561361505074100270050ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_pages_purging/000077500000000000000000000000001361505074100177545ustar00rootroot00000000000000vmem-1.8/src/test/vmem_pages_purging/.gitignore000066400000000000000000000000231361505074100217370ustar00rootroot00000000000000vmem_pages_purging vmem-1.8/src/test/vmem_pages_purging/Makefile000066400000000000000000000036131361505074100214170ustar00rootroot00000000000000# # Copyright 2014-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_pages_purging/Makefile -- build vmem_pages_purging unit test # TARGET = vmem_pages_purging OBJS = vmem_pages_purging.o LIBVMEM=y USING_JEMALLOC_HEADERS=y include ../Makefile.inc INCS += -I../../jemalloc/include/ ifneq ($(DEBUG),1) INCS += -I../../nondebug/libvmem/jemalloc/include/ else INCS += -I../../debug/libvmem/jemalloc/include/ endif vmem-1.8/src/test/vmem_pages_purging/TEST0000077500000000000000000000035001361505074100205370ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_pages_purging/TEST0 -- unit test for vmem_pages_purging # # This test covers the issue related to zeroing pages mapped to # underlying file. . 
../unittest/unittest.sh setup expect_normal_exit ./vmem_pages_purging$EXESUFFIX n $DIR check pass vmem-1.8/src/test/vmem_pages_purging/TEST0.PS1000066400000000000000000000034751361505074100211510ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_pages_purging/TEST0.PS1 -- unit test for vmem_pages_purging # # This test covers the issue related to zeroing pages mapped to # underlying file. . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_pages_purging$Env:EXESUFFIX n $DIR check pass vmem-1.8/src/test/vmem_pages_purging/TEST1000077500000000000000000000035001361505074100205400ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_pages_purging/TEST0 -- unit test for vmem_pages_purging # # This test covers the issue related to zeroing pages mapped to # underlying file. . ../unittest/unittest.sh setup expect_normal_exit ./vmem_pages_purging$EXESUFFIX z $DIR check pass vmem-1.8/src/test/vmem_pages_purging/TEST1.PS1000066400000000000000000000034751361505074100211520ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_pages_purging/TEST1.PS1 -- unit test for vmem_pages_purging # # This test covers the issue related to zeroing pages mapped to # underlying file. . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_pages_purging$Env:EXESUFFIX z $DIR check pass vmem-1.8/src/test/vmem_pages_purging/vmem_pages_purging.c000066400000000000000000000057541361505074100240110ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_pages_purging.c -- unit test for vmem_pages_purging * * usage: vmem_pages_purging [-z] directory */ #include #include "unittest.h" #include "jemalloc/internal/jemalloc_internal.h" #include "jemalloc/internal/size_classes.h" #define DEFAULT_COUNT (SMALL_MAXCLASS / 4) #define DEFAULT_N 100 static void usage(char *appname) { UT_FATAL("usage: %s directory ", appname); } int main(int argc, char *argv[]) { const int test_value = 123456; char *dir = NULL; int count = DEFAULT_COUNT; int n = DEFAULT_N; VMEM *vmp; int i, j; int use_calloc = 0; START(argc, argv, "vmem_pages_purging"); switch (argv[1][0]) { case 'z': use_calloc = 1; break; case 'n': break; default: usage(argv[0]); } if (argv[2]) { dir = argv[2]; } else { usage(argv[0]); } vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); for (i = 0; i < n; i++) { int *test = NULL; if (use_calloc) test = vmem_calloc(vmp, 1, count * sizeof(int)); else test = vmem_malloc(vmp, count * sizeof(int)); UT_ASSERTne(test, NULL); if (use_calloc) { /* vmem_calloc should return zeroed memory */ for (j = 0; j < count; j++) UT_ASSERTeq(test[j], 0); } for (j = 0; j < count; j++) test[j] = test_value; for (j = 0; j < count; j++) UT_ASSERTeq(test[j], test_value); vmem_free(vmp, test); } vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_pages_purging/vmem_pages_purging.vcxproj000066400000000000000000000077611361505074100252620ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {3D9A580B-5F0F-434F-B4D6-228B8E7ADAA5} Win32Proj vmem_pages_purging 10.0.16299.0 Application true v140 Application false v140 $(SolutionDir)\jemalloc\include;$(SolutionDir)\windows\jemalloc_gen\include;$(SolutionDir)\windows\getopt;%(AdditionalIncludeDirectories) 4013;4146 $(SolutionDir)\jemalloc\include;$(SolutionDir)\windows\jemalloc_gen\include;$(SolutionDir)\windows\getopt;%(AdditionalIncludeDirectories) 4013;4146 vmem-1.8/src/test/vmem_pages_purging/vmem_pages_purging.vcxproj.filters000066400000000000000000000021601361505074100267150ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {bdfb5bc0-aa19-4831-b79b-359182bd4c74} {f5942387-31e5-49ba-bd74-4c4332fb1acc} Test Scripts Test Scripts Source Files Header Files vmem-1.8/src/test/vmem_realloc/000077500000000000000000000000001361505074100165435ustar00rootroot00000000000000vmem-1.8/src/test/vmem_realloc/.gitignore000066400000000000000000000000151361505074100205270ustar00rootroot00000000000000vmem_realloc 
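The following is a minimal sketch of the vmem_calloc() vs. vmem_malloc() distinction that vmem_pages_purging.c above exercises in a loop: vmem_calloc() must return zero-filled memory even after pages of the file-backed pool have been purged and reused, while vmem_malloc() makes no such promise. It is illustrative only, not part of the test suite: the <libvmem.h> include, the NELEM constant, the file name and the assert()-based checks are assumptions for a self-contained example; the real test selects the variant with its 'z'/'n' command-line flag and repeats the allocate/verify/free cycle many times.

/*
 * vmem_calloc_sketch.c -- illustrative sketch (not part of the test suite)
 *
 * Contrasts vmem_calloc() (zero-initialized) with vmem_malloc()
 * (uninitialized) on a file-backed pool, as vmem_pages_purging.c does.
 * Assumes <libvmem.h> and a directory passed as argv[1].
 */
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <libvmem.h>

#define NELEM 1024

int
main(int argc, char *argv[])
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s directory\n", argv[0]);
		exit(1);
	}

	VMEM *vmp = vmem_create(argv[1], VMEM_MIN_POOL);
	if (vmp == NULL) {
		perror("vmem_create");
		exit(1);
	}

	/* vmem_calloc() returns zero-initialized memory */
	int *zeroed = vmem_calloc(vmp, NELEM, sizeof(int));
	assert(zeroed != NULL);
	for (int i = 0; i < NELEM; i++)
		assert(zeroed[i] == 0);

	/* vmem_malloc() does not -- initialize before use */
	int *raw = vmem_malloc(vmp, NELEM * sizeof(int));
	assert(raw != NULL);
	for (int i = 0; i < NELEM; i++)
		raw[i] = i;

	vmem_free(vmp, zeroed);
	vmem_free(vmp, raw);
	vmem_delete(vmp);

	return 0;
}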
vmem-1.8/src/test/vmem_realloc/Makefile000066400000000000000000000032671361505074100202130ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc/Makefile -- build vmem_realloc unit test # TARGET = vmem_realloc OBJS = vmem_realloc.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_realloc/TEST0000077500000000000000000000033231361505074100173310ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc/TEST0 -- unit test for vmem_realloc # . 
../unittest/unittest.sh setup expect_normal_exit ./vmem_realloc$EXESUFFIX check pass vmem-1.8/src/test/vmem_realloc/TEST0.PS1000066400000000000000000000033211361505074100177260ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc/TEST0.PS1 -- unit test for vmem_realloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_realloc$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_realloc/TEST1000077500000000000000000000033301361505074100173300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmem_realloc/TEST1 -- unit test for vmem_realloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_realloc$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_realloc/TEST1.PS1000066400000000000000000000033261361505074100177340ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc/TEST1.PS1 -- unit test for vmem_realloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_realloc$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_realloc/vmem_realloc.c000066400000000000000000000055641361505074100213660ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_realloc -- unit test for vmem_realloc * * usage: vmem_realloc [directory] */ #include "unittest.h" int main(int argc, char *argv[]) { const int test_value = 123456; char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_realloc"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } int *test = vmem_realloc(vmp, NULL, sizeof(int)); UT_ASSERTne(test, NULL); test[0] = test_value; UT_ASSERTeq(test[0], test_value); /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(test, mem_pool, VMEM_MIN_POOL); } test = vmem_realloc(vmp, test, sizeof(int) * 10); UT_ASSERTne(test, NULL); UT_ASSERTeq(test[0], test_value); test[1] = test_value; test[9] = test_value; /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(test, mem_pool, VMEM_MIN_POOL); } vmem_free(vmp, test); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_realloc/vmem_realloc.vcxproj000066400000000000000000000064011361505074100226260ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {7E0106F8-A597-48D5-B4F2-E0FC4D95EE95} Win32Proj vmem_realloc 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_realloc/vmem_realloc.vcxproj.filters000066400000000000000000000013501361505074100242730ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_realloc_inplace/000077500000000000000000000000001361505074100202365ustar00rootroot00000000000000vmem-1.8/src/test/vmem_realloc_inplace/.gitignore000066400000000000000000000000251361505074100222230ustar00rootroot00000000000000vmem_realloc_inplace vmem-1.8/src/test/vmem_realloc_inplace/Makefile000066400000000000000000000033271361505074100217030ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc_inplace/Makefile -- build vmem_realloc_inplace unit test # TARGET = vmem_realloc_inplace OBJS = vmem_realloc_inplace.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_realloc_inplace/TEST0000077500000000000000000000033431361505074100210260ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc_inplace/TEST0 -- unit test for vmem_realloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_realloc_inplace$EXESUFFIX check pass vmem-1.8/src/test/vmem_realloc_inplace/TEST0.PS1000066400000000000000000000033411361505074100214230ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc_inplace/TEST0.PS1 -- unit test for vmem_realloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_realloc_inplace$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_realloc_inplace/TEST1000077500000000000000000000033501361505074100210250ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc_inplace/TEST1 -- unit test for vmem_realloc # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_realloc_inplace$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_realloc_inplace/TEST1.PS1000066400000000000000000000033461361505074100214310ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_realloc_inplace/TEST1.PS1 -- unit test for vmem_realloc # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_realloc_inplace$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_realloc_inplace/vmem_realloc_inplace.c000066400000000000000000000070261361505074100245470ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_realloc_inplace -- unit test for vmem_realloc * * usage: vmem_realloc_inplace [directory] */ #include "unittest.h" #define POOL_SIZE (16 * 1024 * 1024) int main(int argc, char *argv[]) { char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_realloc_inplace"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(POOL_SIZE, 4 << 20); vmp = vmem_create_in_region(mem_pool, POOL_SIZE); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, POOL_SIZE); if (vmp == NULL) UT_FATAL("!vmem_create"); } int *test1 = vmem_malloc(vmp, 12 * 1024 * 1024); UT_ASSERTne(test1, NULL); int *test1r = vmem_realloc(vmp, test1, 6 * 1024 * 1024); UT_ASSERTeq(test1r, test1); test1r = vmem_realloc(vmp, test1, 12 * 1024 * 1024); UT_ASSERTeq(test1r, test1); test1r = vmem_realloc(vmp, test1, 8 * 1024 * 1024); UT_ASSERTeq(test1r, test1); int *test2 = vmem_malloc(vmp, 4 * 1024 * 1024); UT_ASSERTne(test2, NULL); /* 4MB => 16B */ int *test2r = vmem_realloc(vmp, test2, 16); UT_ASSERTeq(test2r, NULL); /* ... but the usable size is still 4MB. */ UT_ASSERTeq(vmem_malloc_usable_size(vmp, test2), 4 * 1024 * 1024); /* 8MB => 16B */ test1r = vmem_realloc(vmp, test1, 16); /* * If the old size of the allocation is larger than * the chunk size (4MB), we can reallocate it to 4MB first (in place), * releasing some space, which makes it possible to do the actual * shrinking... */ UT_ASSERTne(test1r, NULL); UT_ASSERTne(test1r, test1); UT_ASSERTeq(vmem_malloc_usable_size(vmp, test1r), 16); /* ... and leaves some memory for new allocations. */ int *test3 = vmem_malloc(vmp, 3 * 1024 * 1024); UT_ASSERTne(test3, NULL); vmem_free(vmp, test1r); /* the shrink of test2 failed (test2r is NULL), so free the still-live test2 */ vmem_free(vmp, test2); vmem_free(vmp, test3); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_realloc_inplace/vmem_realloc_inplace.vcxproj000066400000000000000000000064211361505074100260160ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {C3A59B21-A287-4631-B4EC-F4A57D26A14F} Win32Proj vmem_realloc_inplace 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_realloc_inplace/vmem_realloc_inplace.vcxproj.filters000066400000000000000000000013601361505074100274620ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_stats/000077500000000000000000000000001361505074100162605ustar00rootroot00000000000000vmem-1.8/src/test/vmem_stats/.gitignore000066400000000000000000000000131361505074100202420ustar00rootroot00000000000000vmem_stats vmem-1.8/src/test/vmem_stats/Makefile000066400000000000000000000032571361505074100177270ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution.
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/Makefile -- build vmem_stats unit test # TARGET = vmem_stats OBJS = vmem_stats.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_stats/TEST0000077500000000000000000000037251361505074100170540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST0 -- unit test for vmem_stats # . 
../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 0 $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST0.PS1000066400000000000000000000037021361505074100174460ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST0.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 0 Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST1000077500000000000000000000037271361505074100170570ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST1 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 0 g $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST1.PS1000066400000000000000000000037031361505074100174500ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST1.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 0 g Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST2000077500000000000000000000037311361505074100170530ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST2 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 0 gbl $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST2.PS1000066400000000000000000000037051361505074100174530ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST2.PS1 -- unit test for vmem_stats # . 
..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 0 gbl Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST3000077500000000000000000000037321361505074100170550ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST3 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 0 gbla $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST3.PS1000066400000000000000000000037061361505074100174550ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST3.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 0 gbla Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST4000077500000000000000000000037331361505074100170570ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST4 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 0 gblma $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST4.PS1000066400000000000000000000037071361505074100174570ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST4.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 0 gblma Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST5000077500000000000000000000037251361505074100170610ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST5 -- unit test for vmem_stats # . 
../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 1 $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST5.PS1000066400000000000000000000037011361505074100174520ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST5.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 1 Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST6000077500000000000000000000037271361505074100170640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST6 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 1 g $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST6.PS1000066400000000000000000000036771361505074100174670ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST6 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 1 g Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST7000077500000000000000000000037311361505074100170600ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST7 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 1 gbl $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST7.PS1000066400000000000000000000037051361505074100174600ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST7.PS1 -- unit test for vmem_stats # . 
..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 1 gbl Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST8000077500000000000000000000037321361505074100170620ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST8 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 1 gbla $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST8.PS1000066400000000000000000000037061361505074100174620ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST8.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 1 gbla Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST9000077500000000000000000000037331361505074100170640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST9 -- unit test for vmem_stats # . ../unittest/unittest.sh require_build_type debug # valgrind affects stats configure_valgrind force-disable setup # limit the number of arenas to fit into the minimal VMEM pool size export JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit ./vmem_stats$EXESUFFIX 1 gblma $GREP -v ':' vmem$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/TEST9.PS1000066400000000000000000000037071361505074100174640ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_stats/TEST9.PS1 -- unit test for vmem_stats # . ..\unittest\unittest.ps1 require_build_type debug setup # limit the number of arenas to fit into the minimal VMEM pool size $Env:JE_VMEM_MALLOC_CONF="narenas:64" expect_normal_exit $Env:EXE_DIR\vmem_stats$Env:EXESUFFIX 1 gblma Get-Content vmem$Env:UNITTEST_NUM.log | Where-Object ` {$_ -notmatch ':'} > grep$Env:UNITTEST_NUM.log check pass vmem-1.8/src/test/vmem_stats/grep0.log.match000066400000000000000000000052111361505074100210720ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Version:$(*) Assertions enabled Run-time option settings: opt.abort: $(*) opt.lg_chunk: $(*) opt.dss: $(*) opt.narenas: $(*) opt.lg_dirty_mult: $(*) opt.stats_print: $(*) opt.junk: $(*) opt.quarantine: $(*) opt.redzone: $(*) opt.zero: $(*) opt.tcache: $(*) opt.lg_tcache_max: $(*) CPUs: $(*) Arenas: $(*) Pointer size: $(*) Quantum size: $(*) Page size: $(*) Min active:dirty page ratio per arena: $(*) Chunk size: $(*) Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 bins: bin size regs pgs allocated nmalloc ndalloc nrequests nfills nflushes newruns reruns curruns [0..$(*)] $(*) $(*) --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Version:$(*) Assertions enabled Run-time option settings: opt.abort: $(*) opt.lg_chunk: $(*) opt.dss: $(*) opt.narenas: $(*) opt.lg_dirty_mult: $(*) opt.stats_print: $(*) opt.junk: $(*) opt.quarantine: $(*) opt.redzone: $(*) opt.zero: $(*) opt.tcache: $(*) opt.lg_tcache_max: $(*) CPUs: $(*) Arenas: $(*) Pointer size: $(*) Quantum size: $(*) Page size: $(*) Min active:dirty page ratio per arena: $(*) Chunk size: $(*) Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 
mapped: $(*) $(*) $(*) $(*) [$(*)..$(*)] large: size pages nmalloc ndalloc nrequests curruns [$(*)] $(*) $(*) $(*) $(*) $(*) $(*) [$(*)] --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep1.log.match000066400000000000000000000031541361505074100210770ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 bins: bin size regs pgs allocated nmalloc ndalloc nrequests nfills nflushes newruns reruns curruns $(*) $(*) $(*) --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) $(*) $(*) $(*) $(*) $(*) $(*) $(*) $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep2.log.match000066400000000000000000000026431361505074100211020ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep3.log.match000066400000000000000000000026671361505074100211110ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) Merged arenas stats: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) Merged arenas stats: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) --- End jemalloc statistics --- 
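The vmem_stats grep*.log.match files in this directory each check two jemalloc statistics dumps: one taken on a freshly created pool (all counters zero) and one taken after an allocation has been made from it (non-zero allocated/active totals). For orientation, a minimal sketch of the call sequence that produces such dumps follows; it sticks to libvmem's documented public API (vmem_create, vmem_malloc, vmem_stats_print, vmem_free, vmem_delete), assumes a hypothetical /tmp pool directory, and reuses the "gbla" opts string seen in TEST8/TEST9. It is an illustration only, not part of the test suite.
/*
 * stats_sketch.c -- illustrative sketch only (not part of the vmem test suite)
 */
#include <stdio.h>
#include <stdlib.h>
#include <libvmem.h>
int
main(void)
{
	/* create a minimum-size pool backed by a temporary file in /tmp */
	VMEM *vmp = vmem_create("/tmp", VMEM_MIN_POOL);
	if (vmp == NULL) {
		perror("vmem_create");
		exit(1);
	}
	/* first dump -- pool is empty, so allocated/active are reported as 0 */
	vmem_stats_print(vmp, "gbla");
	int *p = vmem_malloc(vmp, 100 * sizeof(int));
	if (p == NULL) {
		perror("vmem_malloc");
		exit(1);
	}
	/* second dump -- allocated/active now reflect the allocation above */
	vmem_stats_print(vmp, "gbla");
	vmem_free(vmp, p);
	vmem_delete(vmp);
	return 0;
}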
vmem-1.8/src/test/vmem_stats/grep4.log.match000066400000000000000000000007141361505074100211010ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep5.log.match000066400000000000000000000052111361505074100210770ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Version:$(*) Assertions enabled Run-time option settings: opt.abort: $(*) opt.lg_chunk: $(*) opt.dss: $(*) opt.narenas: $(*) opt.lg_dirty_mult: $(*) opt.stats_print: $(*) opt.junk: $(*) opt.quarantine: $(*) opt.redzone: $(*) opt.zero: $(*) opt.tcache: $(*) opt.lg_tcache_max: $(*) CPUs: $(*) Arenas: $(*) Pointer size: $(*) Quantum size: $(*) Page size: $(*) Min active:dirty page ratio per arena: $(*) Chunk size: $(*) Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 bins: bin size regs pgs allocated nmalloc ndalloc nrequests nfills nflushes newruns reruns curruns [0..$(*)] $(*) $(*) --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Version:$(*) Assertions enabled Run-time option settings: opt.abort: $(*) opt.lg_chunk: $(*) opt.dss: $(*) opt.narenas: $(*) opt.lg_dirty_mult: $(*) opt.stats_print: $(*) opt.junk: $(*) opt.quarantine: $(*) opt.redzone: $(*) opt.zero: $(*) opt.tcache: $(*) opt.lg_tcache_max: $(*) CPUs: $(*) Arenas: $(*) Pointer size: $(*) Quantum size: $(*) Page size: $(*) Min active:dirty page ratio per arena: $(*) Chunk size: $(*) Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) $(*) $(*) $(*) [$(*)..$(*)] large: size pages nmalloc ndalloc nrequests curruns [$(*)] $(*) $(*) $(*) $(*) $(*) $(*) [$(*)] --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep6.log.match000066400000000000000000000031541361505074100211040ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 bins: bin size regs pgs allocated nmalloc ndalloc nrequests nfills nflushes newruns reruns curruns $(*) $(*) $(*) --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) 
active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) $(*) $(*) $(*) $(*) $(*) $(*) $(*) $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep7.log.match000066400000000000000000000026431361505074100211070ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) arenas[0]: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep8.log.match000066400000000000000000000026671361505074100211160ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) Merged arenas stats: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 0 0 0 0 large: 0 0 0 0 huge: 0 0 0 0 total: 0 0 0 0 active: 0 mapped: 0 --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) Merged arenas stats: assigned threads: $(*) dss allocation precedence: $(*) dirty pages: $(*):$(*) active:dirty, $(*) sweeps, $(*) madvises, $(*) purged allocated nmalloc ndalloc nrequests small: 28224 63 0 0 large: 32768 1 0 1 huge: 0 0 0 0 total: 60992 64 0 1 active: 61440 mapped: $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/grep9.log.match000066400000000000000000000007141361505074100211060ustar00rootroot00000000000000___ Begin jemalloc statistics ___ Allocated: 0, active: 0, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) --- End jemalloc statistics --- ___ Begin jemalloc statistics ___ Allocated: 60992, active: 61440, mapped: $(*) Current active ceiling: $(*) chunks: nchunks highchunks curchunks $(*) $(*) $(*) --- End jemalloc statistics --- vmem-1.8/src/test/vmem_stats/vmem_stats.c000066400000000000000000000077541361505074100206230ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_stats.c -- unit test for vmem_stats * * usage: vmem_stats 0|1 [opts] */ #include "unittest.h" static int custom_allocs; static int custom_alloc_calls; /* * malloc_custom -- custom malloc function * * This function updates statistics about custom alloc functions, * and returns allocated memory. */ static void * malloc_custom(size_t size) { ++custom_alloc_calls; ++custom_allocs; return malloc(size); } /* * free_custom -- custom free function * * This function updates statistics about custom alloc functions, * and frees allocated memory. */ static void free_custom(void *ptr) { ++custom_alloc_calls; --custom_allocs; free(ptr); } /* * realloc_custom -- custom realloc function * * This function updates statistics about custom alloc functions, * and returns reallocated memory. */ static void * realloc_custom(void *ptr, size_t size) { ++custom_alloc_calls; return realloc(ptr, size); } /* * strdup_custom -- custom strdup function * * This function updates statistics about custom alloc functions, * and returns allocated memory with a duplicated string. 
*/ static char * strdup_custom(const char *s) { ++custom_alloc_calls; ++custom_allocs; return strdup(s); } int main(int argc, char *argv[]) { int expect_custom_alloc = 0; char *opts = ""; void *mem_pool; VMEM *vmp_unused; VMEM *vmp_used; START(argc, argv, "vmem_stats"); if (argc > 3 || argc < 2) { UT_FATAL("usage: %s 0|1 [opts]", argv[0]); } else { expect_custom_alloc = atoi(argv[1]); if (argc > 2) opts = argv[2]; } if (expect_custom_alloc) vmem_set_funcs(malloc_custom, free_custom, realloc_custom, strdup_custom, NULL); mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp_unused = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp_unused == NULL) UT_FATAL("!vmem_create_in_region"); mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp_used = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp_used == NULL) UT_FATAL("!vmem_create_in_region"); int *test = vmem_malloc(vmp_used, sizeof(int)*100); UT_ASSERTne(test, NULL); vmem_stats_print(vmp_unused, opts); vmem_stats_print(vmp_used, opts); vmem_free(vmp_used, test); vmem_delete(vmp_unused); vmem_delete(vmp_used); /* check memory leak in custom allocator */ UT_ASSERTeq(custom_allocs, 0); if (expect_custom_alloc == 0) { UT_ASSERTeq(custom_alloc_calls, 0); } else { UT_ASSERTne(custom_alloc_calls, 0); } DONE(NULL); } vmem-1.8/src/test/vmem_stats/vmem_stats.vcxproj000066400000000000000000000076131361505074100220660ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {ABD4B53D-94CD-4C6A-B30A-CB6FEBA16296} Win32Proj vmem_stats 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_stats/vmem_stats.vcxproj.filters000066400000000000000000000044511361505074100235320ustar00rootroot00000000000000 {9a6bc21c-8036-4ce5-8745-9d76afbbd200} {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Match Files Match Files Match Files Match Files Match Files Match Files Match Files Match Files Match Files Match Files Test Scripts Test Scripts Test Scripts Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_strdup/000077500000000000000000000000001361505074100164435ustar00rootroot00000000000000vmem-1.8/src/test/vmem_strdup/.gitignore000066400000000000000000000000141361505074100204260ustar00rootroot00000000000000vmem_strdup vmem-1.8/src/test/vmem_strdup/Makefile000066400000000000000000000032621361505074100201060ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_strdup/Makefile -- build vmem_strdup unit test # TARGET = vmem_strdup OBJS =vmem_strdup.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_strdup/TEST0000077500000000000000000000033201361505074100172260ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_strdup/TEST0 -- unit test for vmem_strdup # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_strdup$EXESUFFIX check pass vmem-1.8/src/test/vmem_strdup/TEST0.PS1000066400000000000000000000033161361505074100176320ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_strdup/TEST0.PS1 -- unit test for vmem_strdup # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_strdup$Env:EXESUFFIX check pass vmem-1.8/src/test/vmem_strdup/TEST1000077500000000000000000000033251361505074100172340ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_strdup/TEST1 -- unit test for vmem_strdup # . ../unittest/unittest.sh setup expect_normal_exit ./vmem_strdup$EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_strdup/TEST1.PS1000066400000000000000000000033231361505074100176310ustar00rootroot00000000000000# Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_strdup/TEST1.PS1 -- unit test for vmem_strdup # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\vmem_strdup$Env:EXESUFFIX $DIR check pass vmem-1.8/src/test/vmem_strdup/vmem_strdup.c000066400000000000000000000064561361505074100211670ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmem_strdup.c -- unit test for vmem_strdup * * usage: vmem_strdup [directory] */ #include "unittest.h" #include int main(int argc, char *argv[]) { const char *text = "Some test text"; const char *text_empty = ""; const wchar_t *wtext = L"Some test text"; const wchar_t *wtext_empty = L""; char *dir = NULL; void *mem_pool = NULL; VMEM *vmp; START(argc, argv, "vmem_strdup"); if (argc == 2) { dir = argv[1]; } else if (argc > 2) { UT_FATAL("usage: %s [directory]", argv[0]); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } char *str1 = vmem_strdup(vmp, text); wchar_t *wcs1 = vmem_wcsdup(vmp, wtext); UT_ASSERTne(str1, NULL); UT_ASSERTne(wcs1, NULL); UT_ASSERTeq(strcmp(text, str1), 0); UT_ASSERTeq(wcscmp(wtext, wcs1), 0); /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(str1, mem_pool, VMEM_MIN_POOL); UT_ASSERTrange(wcs1, mem_pool, VMEM_MIN_POOL); } char *str2 = vmem_strdup(vmp, text_empty); wchar_t *wcs2 = vmem_wcsdup(vmp, wtext_empty); UT_ASSERTne(str2, NULL); UT_ASSERTne(wcs2, NULL); UT_ASSERTeq(strcmp(text_empty, str2), 0); UT_ASSERTeq(wcscmp(wtext_empty, wcs2), 0); /* check that pointer came from mem_pool */ if (dir == NULL) { UT_ASSERTrange(str2, mem_pool, VMEM_MIN_POOL); UT_ASSERTrange(wcs2, mem_pool, VMEM_MIN_POOL); } vmem_free(vmp, str1); vmem_free(vmp, wcs1); vmem_free(vmp, str2); vmem_free(vmp, wcs2); vmem_delete(vmp); DONE(NULL); } vmem-1.8/src/test/vmem_strdup/vmem_strdup.vcxproj000066400000000000000000000063771361505074100224420ustar00rootroot00000000000000 Debug x64 Release x64 {08762559-e9df-475b-ba99-49f4b5a1d80b} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {89B6AF14-08A0-437A-B31D-A8A3492FA965} Win32Proj vmem_strdup 10.0.16299.0 Application true v140 Application false v140 vmem-1.8/src/test/vmem_strdup/vmem_strdup.vcxproj.filters000066400000000000000000000013471361505074100241010ustar00rootroot00000000000000 {bf3433a8-7b81-45df-9ac7-cf6a2edce86b} {2ac658e1-777f-4c4d-aa07-8a7950c1a588} Test Scripts Test Scripts Source Files vmem-1.8/src/test/vmem_valgrind/000077500000000000000000000000001361505074100167305ustar00rootroot00000000000000vmem-1.8/src/test/vmem_valgrind/.gitignore000066400000000000000000000000161361505074100207150ustar00rootroot00000000000000vmem_valgrind vmem-1.8/src/test/vmem_valgrind/Makefile000066400000000000000000000032721361505074100203740ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/Makefile -- build vmem_valgrind unit test # TARGET = vmem_valgrind OBJS =vmem_valgrind.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_valgrind/TEST0000077500000000000000000000037551361505074100175270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/TEST0 -- unit test for vmem_valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE [ "$FS" == "pmem" ] && DIR_WORK=$DIR expect_normal_exit ./vmem_valgrind$EXESUFFIX 0 $DIR_WORK pass vmem-1.8/src/test/vmem_valgrind/TEST1000077500000000000000000000037541361505074100175270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/TEST1 -- unit test for vmem_valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE [ "$FS" == "pmem" ] && DIR_WORK=$DIR expect_normal_exit ./vmem_valgrind$EXESUFFIX 1 $DIR_WORK pass vmem-1.8/src/test/vmem_valgrind/TEST2000077500000000000000000000037631361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/TEST2 -- unit test for vmem_valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . 
../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE [ "$FS" == "pmem" ] && DIR_WORK=$DIR expect_normal_exit ./vmem_valgrind$EXESUFFIX 2 $DIR_WORK check pass vmem-1.8/src/test/vmem_valgrind/TEST3000077500000000000000000000037511361505074100175260ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/TEST3 -- unit test for vmem_valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE [ "$FS" == "pmem" ] && DIR_WORK=$DIR expect_normal_exit ./vmem_valgrind$EXESUFFIX 3 check pass vmem-1.8/src/test/vmem_valgrind/TEST4000077500000000000000000000037511361505074100175270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/TEST4 -- unit test for vmem_valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.8 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE [ "$FS" == "pmem" ] && DIR_WORK=$DIR expect_normal_exit ./vmem_valgrind$EXESUFFIX 4 check pass vmem-1.8/src/test/vmem_valgrind/TEST5000077500000000000000000000037421361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind/TEST5 -- unit test for vmem_valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE [ "$FS" == "pmem" ] && DIR_WORK=$DIR expect_normal_exit ./vmem_valgrind$EXESUFFIX 5 pass vmem-1.8/src/test/vmem_valgrind/excluded-errors.supp000066400000000000000000000001351361505074100227470ustar00rootroot00000000000000{ Bullseye Coverage - Memory leaks Memcheck:Leak ... fun:cov_probe_v12 ... 
} vmem-1.8/src/test/vmem_valgrind/memcheck2.log.match000066400000000000000000000016321361505074100223660ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== ==$(N)== $(N) bytes in 1 blocks are definitely lost in loss record 1 of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): main (vmem_valgrind.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: 8 bytes in 1 blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== ==$(N)== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmem_valgrind/memcheck3.log.match000066400000000000000000000016321361505074100223670ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== ==$(N)== $(N) bytes in 1 blocks are definitely lost in loss record 1 of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): main (vmem_valgrind.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: 8 bytes in 1 blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== ==$(N)== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmem_valgrind/memcheck4.log.match000066400000000000000000000021171361505074100223670ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== Invalid write of size 4 ==$(N)== at 0x$(X): main (vmem_valgrind.c:$(N)) ==$(N)== Address 0x$(X) is 0 bytes after a block of size $(N) alloc'd ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): main (vmem_valgrind.c:$(N)) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== $(OPT)==$(N)== All heap blocks were freed -- no leaks are possible $(OPX)==$(N)== LEAK SUMMARY: $(OPT)==$(N)== definitely lost: 0 bytes in 0 blocks $(OPT)==$(N)== indirectly lost: 0 bytes in 0 blocks $(OPT)==$(N)== possibly lost: 0 bytes in 0 blocks $(OPT)==$(N)== still reachable: 0 bytes in 0 blocks $(OPT)==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== ==$(N)== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmem_valgrind/vmem_valgrind.c000066400000000000000000000124231361505074100217300ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * 
Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_valgrind.c -- unit test for vmem_valgrind * * usage: vmem_valgrind [directory] * * test-number can be a number from 0 to 9 */ #include "unittest.h" static int custom_allocs; static int custom_alloc_calls; /* * malloc_custom -- custom malloc function * * This function updates statistics about custom alloc functions, * and returns allocated memory. */ static void * malloc_custom(size_t size) { ++custom_alloc_calls; ++custom_allocs; return malloc(size); } /* * free_custom -- custom free function * * This function updates statistics about custom alloc functions, * and frees allocated memory. */ static void free_custom(void *ptr) { ++custom_alloc_calls; --custom_allocs; free(ptr); } /* * realloc_custom -- custom realloc function * * This function updates statistics about custom alloc functions, * and returns reallocated memory. */ static void * realloc_custom(void *ptr, size_t size) { ++custom_alloc_calls; return realloc(ptr, size); } /* * strdup_custom -- custom strdup function * * This function updates statistics about custom alloc functions, * and returns allocated memory with a duplicated string. 
*/ static char * strdup_custom(const char *s) { ++custom_alloc_calls; ++custom_allocs; return strdup(s); } int main(int argc, char *argv[]) { char *dir = NULL; VMEM *vmp; int *ptr; int test_case = -1; int expect_custom_alloc = 0; START(argc, argv, "vmem_valgrind"); if (argc >= 2 && argc <= 3) { test_case = atoi(argv[1]); if (test_case > 9) test_case = -1; if (argc > 2) dir = argv[2]; } if (test_case < 0) UT_FATAL("usage: %s [directory]", argv[0]); if (test_case < 5) { UT_OUT("use default allocator"); expect_custom_alloc = 0; } else { UT_OUT("use custom alloc functions"); test_case -= 5; expect_custom_alloc = 1; vmem_set_funcs(malloc_custom, free_custom, realloc_custom, strdup_custom, NULL); } if (dir == NULL) { /* allocate memory for function vmem_create_in_region() */ void *mem_pool = MMAP_ANON_ALIGNED(VMEM_MIN_POOL, 4 << 20); vmp = vmem_create_in_region(mem_pool, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); } else { vmp = vmem_create(dir, VMEM_MIN_POOL); if (vmp == NULL) UT_FATAL("!vmem_create"); } switch (test_case) { case 0: { UT_OUT("remove all allocations and delete pool"); ptr = vmem_malloc(vmp, sizeof(int)); if (ptr == NULL) UT_FATAL("!vmem_malloc"); vmem_free(vmp, ptr); vmem_delete(vmp); break; } case 1: { UT_OUT("only remove allocations"); ptr = vmem_malloc(vmp, sizeof(int)); if (ptr == NULL) UT_FATAL("!vmem_malloc"); vmem_free(vmp, ptr); break; } case 2: { UT_OUT("only delete pool"); ptr = vmem_malloc(vmp, sizeof(int)); if (ptr == NULL) UT_FATAL("!vmem_malloc"); vmem_delete(vmp); /* prevent reporting leaked memory as still reachable */ ptr = NULL; break; } case 3: { UT_OUT("memory leaks"); ptr = vmem_malloc(vmp, sizeof(int)); if (ptr == NULL) UT_FATAL("!vmem_malloc"); /* prevent reporting leaked memory as still reachable */ ptr = NULL; /* Clean up pool, above malloc will still leak */ vmem_delete(vmp); break; } case 4: { UT_OUT("heap block overrun"); ptr = vmem_malloc(vmp, 12 * sizeof(int)); if (ptr == NULL) UT_FATAL("!vmem_malloc"); /* heap block overrun */ ptr[12] = 7; vmem_free(vmp, ptr); vmem_delete(vmp); break; } default: { UT_FATAL("!unknown test-number"); } } /* check memory leak in custom allocator */ UT_ASSERTeq(custom_allocs, 0); if (expect_custom_alloc == 0) { UT_ASSERTeq(custom_alloc_calls, 0); } else { UT_ASSERTne(custom_alloc_calls, 0); } DONE(NULL); } vmem-1.8/src/test/vmem_valgrind_region/000077500000000000000000000000001361505074100202735ustar00rootroot00000000000000vmem-1.8/src/test/vmem_valgrind_region/.gitignore000066400000000000000000000000251361505074100222600ustar00rootroot00000000000000vmem_valgrind_region vmem-1.8/src/test/vmem_valgrind_region/Makefile000066400000000000000000000033231361505074100217340ustar00rootroot00000000000000# # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind_region/Makefile -- build vmem_valgrind_region # unit test # TARGET = vmem_valgrind_region OBJS =vmem_valgrind_region.o LIBVMEM=y include ../Makefile.inc vmem-1.8/src/test/vmem_valgrind_region/TEST0000077500000000000000000000037341361505074100210670ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind_region/TEST0 -- unit test for vmem_valgrind_region # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./vmem_valgrind_region$EXESUFFIX 0 check pass vmem-1.8/src/test/vmem_valgrind_region/TEST1000077500000000000000000000037341361505074100210700ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind_region/TEST1 -- unit test for vmem_valgrind_region # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./vmem_valgrind_region$EXESUFFIX 1 check pass vmem-1.8/src/test/vmem_valgrind_region/TEST2000077500000000000000000000037341361505074100210710ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind_region/TEST2 -- unit test for vmem_valgrind_region # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . 
../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./vmem_valgrind_region$EXESUFFIX 2 check pass vmem-1.8/src/test/vmem_valgrind_region/TEST3000077500000000000000000000037341361505074100210720ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind_region/TEST3 -- unit test for vmem_valgrind_region # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./vmem_valgrind_region$EXESUFFIX 3 check pass vmem-1.8/src/test/vmem_valgrind_region/TEST4000077500000000000000000000037341361505074100210730ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmem_valgrind_region/TEST4 -- unit test for vmem_valgrind_region # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh require_build_type debug nondebug require_valgrind 3.7 configure_valgrind memcheck force-enable setup unset VMEM_LOG_LEVEL unset VMEM_LOG_FILE expect_normal_exit ./vmem_valgrind_region$EXESUFFIX 4 check pass vmem-1.8/src/test/vmem_valgrind_region/excluded-errors.supp000066400000000000000000000003131361505074100243100ustar00rootroot00000000000000{ Bullseye Coverage - Memory leaks Memcheck:Leak ... fun:cov_probe_v12 ... } { vmem_valgrind_region - Ignore uninitialised cond jumps Memcheck:Cond ... fun:do_iterate ... } vmem-1.8/src/test/vmem_valgrind_region/memcheck0.log.match000066400000000000000000000014401361505074100237240ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== $(OPT)==$(N)== All heap blocks were freed -- no leaks are possible $(OPT)==$(N)== $(OPT)==$(N)== $(OPX)==$(N)== LEAK SUMMARY: $(OPT)==$(N)== definitely lost: 0 bytes in 0 blocks $(OPT)==$(N)== indirectly lost: 0 bytes in 0 blocks $(OPT)==$(N)== possibly lost: 0 bytes in 0 blocks $(OPT)==$(N)== still reachable: 0 bytes in 0 blocks $(OPT)==$(N)== suppressed: $(NC) bytes in $(N) blocks $(OPT)==$(N)== ==$(N)== ERROR SUMMARY: 0 errors from 0 contexts $(*) vmem-1.8/src/test/vmem_valgrind_region/memcheck1.log.match000066400000000000000000000017561361505074100237370ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== ==$(N)== 9,807,424 bytes in 8 blocks are definitely lost in loss record $(N) of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): do_alloc (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: $(NC) bytes in $(N) blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== ==$(N)== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmem_valgrind_region/memcheck2.log.match000066400000000000000000000037671361505074100237440ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== $(OPT)==$(N)== 
Use of uninitialised value of size 8 $(OPX)==$(N)== Syscall param write(buf) points to uninitialised byte(s) $(OPT)==$(N)== at 0x$(X): _itoa_word $(*) $(OPX)==$(N)== at 0x$(X): _write $(*) $(OPT)==$(N)== by 0x$(X): vfprintf $(*) $(OPT)==$(N)== by 0x$(X): __vfprintf_internal $(*) $(OPX)==$(N)== by 0x$(X): $(*) (in /lib/libc.so.$(N)) $(OPT)==$(N)== by 0x$(X): vsnprintf $(*) $(OPT)==$(N)== by 0x$(X): __vsnprintf_internal $(*) $(OPT)==$(N)== by 0x$(X): vsnprintf $(*) $(OPX)==$(N)== by 0x$(X): $(*) (in /lib/libc.so.$(N)) $(OPT)==$(N)== by 0x$(X): $(*) (in /lib/libc.so.$(N)) $(OPT)==$(N)== by 0x$(X): fputs $(*) ==$(N)== by 0x$(X): vout (ut.c:$(N)) ==$(N)== by 0x$(X): ut_out (ut.c:$(N)) ==$(N)== by 0x$(X): do_iterate (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) $(OPT)==$(N)== Address 0x$(X) is $(N) bytes inside data symbol "Buff_trace" ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== ==$(N)== 9,807,424 bytes in 8 blocks are definitely lost in loss record $(N) of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): do_alloc (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: 9,807,424 bytes in 8 blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== ==$(N)== Use --track-origins=yes to see where uninitialised values come from ==$(N)== ERROR SUMMARY: $(N) errors from 2 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmem_valgrind_region/memcheck3.log.match000066400000000000000000000030231361505074100237260ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== Invalid read of size 8 ==$(N)== at 0x$(X): do_iterate (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== Address 0x$(X) is 0 bytes inside a block of size $(*) alloc'd ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc (vmem.c:$(N)) ==$(N)== by 0x$(X): do_alloc (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(*) bytes allocated ==$(N)== ==$(N)== 9,807,424 bytes in 8 blocks are definitely lost in loss record $(N) of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): do_alloc (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: 9,807,424 bytes in 8 blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== $(OPT)==$(N)== Use --track-origins=yes to see where uninitialised values come from ==$(N)== ERROR SUMMARY: 9 errors from 2 contexts (suppressed: $(N) from $(N)) 
vmem-1.8/src/test/vmem_valgrind_region/memcheck4.log.match000066400000000000000000000047161361505074100237410ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== $(OPT)==$(N)== Use of uninitialised value of size 8 $(OPX)==$(N)== Syscall param write(buf) points to uninitialised byte(s) $(OPT)==$(N)== at 0x$(X): _itoa_word $(*) $(OPX)==$(N)== at 0x$(X): _write $(*) $(OPT)==$(N)== by 0x$(X): vfprintf $(*) $(OPT)==$(N)== by 0x$(X): __vfprintf_internal $(*) $(OPX)==$(N)== by 0x$(X): $(*) (in /lib/libc.so.$(N)) $(OPT)==$(N)== by 0x$(X): vsnprintf $(*) $(OPT)==$(N)== by 0x$(X): __vsnprintf_internal $(*) $(OPT)==$(N)== by 0x$(X): vsnprintf $(*) $(OPX)==$(N)== by 0x$(X): $(*) (in /lib/libc.so.$(N)) $(OPT)==$(N)== by 0x$(X): $(*) (in /lib/libc.so.$(N)) $(OPT)==$(N)== by 0x$(X): fputs $(*) ==$(N)== by 0x$(X): vout (ut.c:$(N)) ==$(N)== by 0x$(X): ut_out (ut.c:$(N)) ==$(N)== by 0x$(X): do_iterate (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) $(OPT)==$(N)== Address 0x$(X) is $(N) bytes inside data symbol "Buff_trace" ==$(N)== ==$(N)== Invalid read of size 8 ==$(N)== at 0x$(X): do_iterate (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== Address 0x$(X) is 0 bytes inside a block of size $(*) alloc'd ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc (vmem.c:$(N)) ==$(N)== by 0x$(X): do_alloc (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(*) bytes allocated ==$(N)== ==$(N)== 9,807,424 bytes in 8 blocks are definitely lost in loss record $(N) of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): vmem_malloc $(*) ==$(N)== by 0x$(X): do_alloc (vmem_valgrind_region.c:$(N)) ==$(N)== by 0x$(X): main (vmem_valgrind_region.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: 9,807,424 bytes in 8 blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== $(OPT)==$(N)== Use --track-origins=yes to see where uninitialised values come from ==$(N)== ERROR SUMMARY: $(N) errors from 3 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmem_valgrind_region/vmem_valgrind_region.c000066400000000000000000000077761361505074100246550ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmem_valgrind_region.c -- unit test for vmem_valgrind_region */ #include "unittest.h" #define POOLSIZE (16 << 20) #define CHUNKSIZE (4 << 20) #define NOBJS 8 struct foo { size_t size; char data[1]; /* dynamically sized */ }; static struct foo *objs[NOBJS]; static void do_alloc(VMEM *vmp) { size_t size = 256; /* allocate objects */ for (int i = 0; i < NOBJS; i++) { objs[i] = vmem_malloc(vmp, size + sizeof(size_t)); UT_ASSERTne(objs[i], NULL); objs[i]->size = size; memset(objs[i]->data, '0' + i, size - 1); objs[i]->data[size] = '\0'; size *= 4; } } static void do_iterate(void) { /* dump selected objects */ for (int i = 0; i < NOBJS; i++) UT_OUT("%p size %zu", objs[i], objs[i]->size); } static void do_free(VMEM *vmp) { /* free objects */ for (int i = 0; i < NOBJS; i++) vmem_free(vmp, objs[i]); } int main(int argc, char *argv[]) { VMEM *vmp; START(argc, argv, "vmem_valgrind_region"); if (argc < 2) UT_FATAL("usage: %s <0..4>", argv[0]); int test = atoi(argv[1]); /* * Allocate memory for vmem_create_in_region(). * Reserve more space for test case #4. 
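 * (Test case #4 re-creates the pool at addr + CHUNKSIZE, i.e. shifted by one
 * chunk within the same mapping, so an extra CHUNKSIZE is reserved here on
 * top of the minimum pool size.)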
*/ char *addr = MMAP_ANON_ALIGNED(VMEM_MIN_POOL + CHUNKSIZE, CHUNKSIZE); vmp = vmem_create_in_region(addr, POOLSIZE); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); do_alloc(vmp); switch (test) { case 0: /* free objects and delete pool */ do_free(vmp); vmem_delete(vmp); break; case 1: /* delete pool without freeing objects */ vmem_delete(vmp); break; case 2: /* * delete pool without freeing objects * try to access objects * expected: use of uninitialized value */ vmem_delete(vmp); do_iterate(); break; case 3: /* * delete pool without freeing objects * re-create pool in the same region * try to access objects * expected: invalid read */ vmem_delete(vmp); vmp = vmem_create_in_region(addr, POOLSIZE); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); do_iterate(); vmem_delete(vmp); break; case 4: /* * delete pool without freeing objects * re-create pool in the overlapping region * try to access objects * expected: use of uninitialized value & invalid read */ vmem_delete(vmp); vmp = vmem_create_in_region(addr + CHUNKSIZE, POOLSIZE); if (vmp == NULL) UT_FATAL("!vmem_create_in_region"); do_iterate(); vmem_delete(vmp); break; default: UT_FATAL("wrong test case %d", test); } MUNMAP(addr, VMEM_MIN_POOL + CHUNKSIZE); DONE(NULL); } vmem-1.8/src/test/vmmalloc_calloc/000077500000000000000000000000001361505074100172255ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_calloc/.gitignore000066400000000000000000000000201361505074100212050ustar00rootroot00000000000000vmmalloc_calloc vmem-1.8/src/test/vmmalloc_calloc/Makefile000066400000000000000000000035751361505074100206770ustar00rootroot00000000000000# # Copyright 2014-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmmalloc_calloc/Makefile -- build vmmalloc_calloc unit test # TARGET = vmmalloc_calloc OBJS = vmmalloc_calloc.o USING_JEMALLOC_HEADERS=y include ../Makefile.inc INCS += -I../../jemalloc/include/ ifneq ($(DEBUG),1) INCS += -I../../nondebug/libvmmalloc/jemalloc/include/ else INCS += -I../../debug/libvmmalloc/jemalloc/include/ endif vmem-1.8/src/test/vmmalloc_calloc/TEST0000077500000000000000000000035521361505074100200170ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_calloc/TEST0 -- unit test for libvmmalloc calloc # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_calloc$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_calloc/out0.log.match000066400000000000000000000001421361505074100217070ustar00rootroot00000000000000vmmalloc_calloc/TEST0: START: vmmalloc_calloc ./vmmalloc_calloc$(nW) vmmalloc_calloc/TEST0: DONE vmem-1.8/src/test/vmmalloc_calloc/vmmalloc_calloc.c000066400000000000000000000047121361505074100225240ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_calloc.c -- unit test for libvmmalloc calloc * * usage: vmmalloc_calloc */ #include "unittest.h" #include "jemalloc/internal/jemalloc_internal.h" #include "jemalloc/internal/size_classes.h" #define DEFAULT_COUNT (SMALL_MAXCLASS / 4) #define DEFAULT_N 100 /* cfree() has been removed from glibc since version 2.26 */ #ifndef cfree #define cfree free #endif int main(int argc, char *argv[]) { const int test_value = 123456; int count = DEFAULT_COUNT; int n = DEFAULT_N; int *ptr; int i, j; START(argc, argv, "vmmalloc_calloc"); for (i = 0; i < n; i++) { ptr = calloc(1, count * sizeof(int)); UT_ASSERTne(ptr, NULL); /* calloc should return zeroed memory */ for (j = 0; j < count; j++) UT_ASSERTeq(ptr[j], 0); for (j = 0; j < count; j++) ptr[j] = test_value; for (j = 0; j < count; j++) UT_ASSERTeq(ptr[j], test_value); cfree(ptr); } DONE(NULL); } vmem-1.8/src/test/vmmalloc_check_allocations/000077500000000000000000000000001361505074100214355ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_check_allocations/.gitignore000066400000000000000000000000331361505074100234210ustar00rootroot00000000000000vmmalloc_check_allocations vmem-1.8/src/test/vmmalloc_check_allocations/Makefile000066400000000000000000000033441361505074100231010ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_check_allocations/Makefile -- build vmmalloc_check_allocations unit test # TARGET = vmmalloc_check_allocations OBJS = vmmalloc_check_allocations.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_check_allocations/TEST0000077500000000000000000000036151361505074100222270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_check_allocations/TEST0 -- unit test for # libvmmalloc check_allocations # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_check_allocations$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_check_allocations/out0.log.match000066400000000000000000000005431361505074100241240ustar00rootroot00000000000000vmmalloc_check_allocations/TEST0: START: vmmalloc_check_allocations ./vmmalloc_check_allocations$(nW) size 4194304 size 2097152 size 1048576 size 524288 size 262144 size 131072 size 65536 size 32768 size 16384 size 8192 size 4096 size 2048 size 1024 size 512 size 256 size 128 size 64 size 32 size 16 size 8 size 4 vmmalloc_check_allocations/TEST0: DONE vmem-1.8/src/test/vmmalloc_check_allocations/vmmalloc_check_allocations.c000066400000000000000000000054621361505074100271470ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmmalloc_check_allocations -- unit test for * libvmmalloc check_allocations * * usage: vmmalloc_check_allocations */ #include "unittest.h" #define MIN_SIZE (sizeof(int)) #define MAX_SIZE (4L * 1024L * 1024L) #define MAX_ALLOCS (VMEM_MIN_POOL / MIN_SIZE) /* buffer for all allocations */ static void *allocs[MAX_ALLOCS]; int main(int argc, char *argv[]) { int i, j; size_t size; START(argc, argv, "vmmalloc_check_allocations"); for (size = MAX_SIZE; size >= MIN_SIZE; size /= 2) { UT_OUT("size %zu", size); memset(allocs, 0, sizeof(allocs)); for (i = 0; i < MAX_ALLOCS; ++i) { allocs[i] = malloc(size); if (allocs[i] == NULL) { /* out of memory in pool */ break; } /* fill each allocation with a unique value */ memset(allocs[i], (char)i, size); } /* at least one allocation for each size must succeed */ UT_ASSERT(i > 0); /* check for unexpected modifications of the data */ for (i = 0; i < MAX_ALLOCS && allocs[i] != NULL; ++i) { char *buffer = allocs[i]; for (j = 0; j < size; ++j) { if (buffer[j] != (char)i) UT_FATAL("Content of data object was " "modified unexpectedly for " "object size: %zu, id: %d", size, j); } free(allocs[i]); } } DONE(NULL); } vmem-1.8/src/test/vmmalloc_dummy_funcs/000077500000000000000000000000001361505074100203215ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_dummy_funcs/.gitignore000066400000000000000000000000331361505074100223050ustar00rootroot00000000000000libvmmalloc_dummy_funcs.so vmem-1.8/src/test/vmmalloc_dummy_funcs/Makefile000066400000000000000000000040701361505074100217620ustar00rootroot00000000000000# # Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_dummy_funcs/Makefile -- build libvmmalloc_dummy_funcs.so # used by vmmalloc_malloc_hooks, vmmalloc_memalign and vmmalloc_valloc. 
# OBJS = vmmalloc_dummy_funcs.o BUILD_STATIC=n include ../Makefile.inc libvmmalloc_dummy_funcs.so: vmmalloc_dummy_funcs.c $(CC) $(CFLAGS) -fPIC -shared -Wl,--version-script=libvmmalloc_dummy_funcs.map,-soname,libvmmalloc_dummy_funcs.so -o $@ $^ all: libvmmalloc_dummy_funcs.so clobber: libvmmalloc_dummy_funcs_clean libvmmalloc_dummy_funcs_clean: $(RM) libvmmalloc_dummy_funcs.so vmem-1.8/src/test/vmmalloc_dummy_funcs/Makefile.inc000066400000000000000000000036421361505074100225360ustar00rootroot00000000000000# # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_dummy_funcs/Makefile.inc -- common Makefile rules # for vmmalloc tests sharing libvmmalloc_dummy_funcs.so # EXTRA_DEPS = ../vmmalloc_dummy_funcs/libvmmalloc_dummy_funcs.so include ../Makefile.inc $(EXTRA_DEPS): $(MAKE) -C ../vmmalloc_dummy_funcs all all: $(EXTRA_DEPS) INCS += -I../vmmalloc_dummy_funcs LIBS += $(EXTRA_DEPS) -Wl,-rpath=../vmmalloc_dummy_funcs vmem-1.8/src/test/vmmalloc_dummy_funcs/libvmmalloc_dummy_funcs.map000066400000000000000000000034611361505074100257360ustar00rootroot00000000000000# # Copyright 2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_dummy_funcs/libvmmalloc_dummy_funcs.map -- # linker map file for libvmmalloc_dummy_funcs # LIBVMMALLOC_DUMMY_FUNCS_1.0 { global: __free_hook; __malloc_hook; __memalign_hook; __realloc_hook; aligned_alloc; memalign; pvalloc; local: *; }; vmem-1.8/src/test/vmmalloc_dummy_funcs/vmmalloc_dummy_funcs.c000066400000000000000000000043611361505074100247140ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmmalloc_dummy_funcs.c -- dummy functions for vmmalloc tests */ #include "vmmalloc_dummy_funcs.h" __attribute__((weak)) void * aligned_alloc(size_t alignment, size_t size) { return NULL; } #ifdef __FreeBSD__ __attribute__((weak)) void * memalign(size_t alignment, size_t size) { return NULL; } __attribute__((weak)) void * pvalloc(size_t size) { return NULL; } /* XXX These exist only to allow the tests to link - they are never used */ void (*__free_hook)(void *, const void *); void *(*__malloc_hook)(size_t size, const void *); void *(*__memalign_hook)(size_t alignment, size_t size, const void *); void *(*__realloc_hook)(void *ptr, size_t size, const void *); #endif vmem-1.8/src/test/vmmalloc_dummy_funcs/vmmalloc_dummy_funcs.h000066400000000000000000000043641361505074100247240ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmmalloc_weakfuncs.h -- definitions for vmmalloc tests */ #ifndef VMMALLOC_WEAKFUNCS_H #define VMMALLOC_WEAKFUNCS_H #include #ifndef __FreeBSD__ #include #endif void *aligned_alloc(size_t alignment, size_t size); #ifdef __FreeBSD__ void *memalign(size_t boundary, size_t size); void *pvalloc(size_t size); /* XXX These exist only to allow the tests to compile - they are never used */ extern void (*__free_hook)(void *, const void *); extern void *(*__malloc_hook)(size_t size, const void *); extern void *(*__memalign_hook)(size_t alignment, size_t size, const void *); extern void *(*__realloc_hook)(void *ptr, size_t size, const void *); #endif #endif vmem-1.8/src/test/vmmalloc_fork/000077500000000000000000000000001361505074100167315ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_fork/.gitignore000066400000000000000000000000161361505074100207160ustar00rootroot00000000000000vmmalloc_fork vmem-1.8/src/test/vmmalloc_fork/Makefile000066400000000000000000000032601361505074100203720ustar00rootroot00000000000000# # Copyright 2015-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_fork/Makefile -- build vmmalloc_fork unit test # TARGET = vmmalloc_fork OBJS = vmmalloc_fork.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_fork/README000066400000000000000000000022221361505074100176070ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/vmmalloc_fork/README. This directory contains a multithreaded unit test for libvmmalloc fork() support. The program in vmmalloc_fork.c takes: 'operation', 'nfork' and 'nthread' arguments. Operation can be: c - child process is a duplicate of the parent e - child process calls execl() immediately after fork The test allocates some amount of memory first, then spawns a number of threads, that also allocate memory in a loop. While the new threads are running, the main thread creates a new process by calling fork(). 
If 'operation' is 'c', then each child process performs the same actions as parent, spawning new threads and a new process, until some predefined number of processes is reached. 'nfork' argument defines a maximum height of the process tree, so eventually, there could be 2^nfork processes created, each running 'nthread' threads. For example: ./vmmalloc_fork c 4 2 This will create 16 (2^4) processes, each running 3 threads (1 + 2). If 'operation' is 'e', then when the 'nfork' limit is reached, the child process calls execl() immediately after fork(), executing another program. vmem-1.8/src/test/vmmalloc_fork/TEST0000077500000000000000000000036251361505074100175240ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_fork/TEST0 -- unit test for libvmmalloc fork() support # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup # this test is leaky by design export MEMCHECK_DONT_CHECK_LEAKS=1 expect_normal_exit ./vmmalloc_fork$EXESUFFIX c 4 2 check pass vmem-1.8/src/test/vmmalloc_fork/TEST1000077500000000000000000000043731361505074100175260ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_fork/TEST1 -- unit test for libvmmalloc fork() support # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan # This test uses pthread mutexes across fork, and recreates them in the child # process. configure_valgrind helgrind force-disable configure_valgrind drd force-disable setup # VMMALLOC_POOL_SIZE * 2^argv[2] require_free_space 1G export VMMALLOC_POOL_SIZE=$((64 * 1024 * 1024)) export VMMALLOC_LOG_LEVEL=3 export VMMALLOC_FORK=1 export TEST_LD_PRELOAD=$VMMALLOC # this test is leaky by design export MEMCHECK_DONT_CHECK_LEAKS=1 expect_normal_exit ./vmmalloc_fork$EXESUFFIX c 4 2 check pass vmem-1.8/src/test/vmmalloc_fork/TEST2000077500000000000000000000045421361505074100175250ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_fork/TEST2 -- unit test for libvmmalloc fork() support # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan # This test uses pthread mutexes across fork, and recreates them in the child # process. 
configure_valgrind helgrind force-disable configure_valgrind drd force-disable setup # VMMALLOC_POOL_SIZE * 2^argv[2] require_free_space 1G # Must be defined before require_preload export VMMALLOC_POOL_SIZE=$((64 * 1024 * 1024)) export VMMALLOC_LOG_LEVEL=3 export VMMALLOC_FORK=2 export TEST_LD_PRELOAD=$VMMALLOC require_preload "VMMALLOC_FORK value 2" ./vmmalloc_fork x x t # this test is leaky by design export MEMCHECK_DONT_CHECK_LEAKS=1 expect_normal_exit ./vmmalloc_fork$EXESUFFIX c 4 2 check pass vmem-1.8/src/test/vmmalloc_fork/TEST3000077500000000000000000000043731361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_fork/TEST3 -- unit test for libvmmalloc fork() support # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan # This test uses pthread mutexes across fork, and recreates them in the child # process. configure_valgrind helgrind force-disable configure_valgrind drd force-disable setup # VMMALLOC_POOL_SIZE * 2^argv[2] require_free_space 1G export VMMALLOC_POOL_SIZE=$((64 * 1024 * 1024)) export VMMALLOC_LOG_LEVEL=3 export VMMALLOC_FORK=1 export TEST_LD_PRELOAD=$VMMALLOC # this test is leaky by design export MEMCHECK_DONT_CHECK_LEAKS=1 expect_normal_exit ./vmmalloc_fork$EXESUFFIX e 4 2 check pass vmem-1.8/src/test/vmmalloc_fork/TEST4000077500000000000000000000045421361505074100175270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_fork/TEST4 -- unit test for libvmmalloc fork() support # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan # This test uses pthread mutexes across fork, and recreates them in the child # process. configure_valgrind helgrind force-disable configure_valgrind drd force-disable setup # VMMALLOC_POOL_SIZE * 2^argv[2] require_free_space 1G # Must be defined before require_preload export VMMALLOC_POOL_SIZE=$((64 * 1024 * 1024)) export VMMALLOC_LOG_LEVEL=3 export VMMALLOC_FORK=2 export TEST_LD_PRELOAD=$VMMALLOC require_preload "VMMALLOC_FORK value 2" ./vmmalloc_fork x x t # this test is leaky by design export MEMCHECK_DONT_CHECK_LEAKS=1 expect_normal_exit ./vmmalloc_fork$EXESUFFIX e 4 2 check pass vmem-1.8/src/test/vmmalloc_fork/out0.log.match000066400000000000000000000001401361505074100214110ustar00rootroot00000000000000vmmalloc_fork/TEST0: START: vmmalloc_fork ./vmmalloc_fork$(nW) c 4 2 vmmalloc_fork/TEST0: DONE vmem-1.8/src/test/vmmalloc_fork/out1.log.match000066400000000000000000000001401361505074100214120ustar00rootroot00000000000000vmmalloc_fork/TEST1: START: vmmalloc_fork ./vmmalloc_fork$(nW) c 4 2 vmmalloc_fork/TEST1: DONE vmem-1.8/src/test/vmmalloc_fork/out2.log.match000066400000000000000000000001401361505074100214130ustar00rootroot00000000000000vmmalloc_fork/TEST2: START: vmmalloc_fork ./vmmalloc_fork$(nW) c 4 2 vmmalloc_fork/TEST2: DONE vmem-1.8/src/test/vmmalloc_fork/out3.log.match000066400000000000000000000001401361505074100214140ustar00rootroot00000000000000vmmalloc_fork/TEST3: START: vmmalloc_fork ./vmmalloc_fork$(nW) e 4 2 vmmalloc_fork/TEST3: DONE vmem-1.8/src/test/vmmalloc_fork/out4.log.match000066400000000000000000000002741361505074100214250ustar00rootroot00000000000000$(OPT)vmmalloc_fork/TEST4: START: vmmalloc_fork $(OPT) ./vmmalloc_fork$(nW) e 4 2 $(OPT)vmmalloc_fork/TEST4: DONE $(OPX)Error (libvmmalloc): VMMALLOC_FORK value 2 not supported on FreeBSD vmem-1.8/src/test/vmmalloc_fork/vmmalloc_fork.c000066400000000000000000000125031361505074100217310ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_fork.c -- unit test for libvmmalloc fork() support * * usage: vmmalloc_fork [c|e] */ #ifdef __FreeBSD__ #include #include #else #include #endif #include #include "unittest.h" #define NBUFS 16 /* * get_rand_size -- returns random size of allocation */ static size_t get_rand_size(void) { return sizeof(int) + 64 * ((unsigned)rand() % 100); } /* * do_test -- thread callback * * This function is called in a separate thread while the main thread * forks child processes. Please be aware that any locks held in this * function may potentially cause a deadlock. * * For example, using rand() in this function may cause a deadlock because * it grabs an internal lock.
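 *
 * A rough sketch of that failure mode (illustrative only, not part of the
 * test logic):
 *
 *   this thread                        main thread
 *   -----------                        -----------
 *   rand()/malloc() takes a lock       fork()
 *   ...                                child inherits the lock as "held",
 *                                      but its owner thread does not exist
 *   releases the lock (parent only)    child calls rand()/malloc() and
 *                                      blocks forever on that lock
 *
 * libvmmalloc's fork() support (VMMALLOC_FORK) is meant to keep the child's
 * heap usable despite this, which is what the vmmalloc_fork tests exercise.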
*/ static void * do_test(void *arg) { int **bufs = malloc(NBUFS * sizeof(void *)); UT_ASSERTne(bufs, NULL); size_t *sizes = (size_t *)arg; UT_ASSERTne(sizes, NULL); for (int j = 0; j < NBUFS; j++) { bufs[j] = malloc(sizes[j]); UT_ASSERTne(bufs[j], NULL); } for (int j = 0; j < NBUFS; j++) { UT_ASSERT(malloc_usable_size(bufs[j]) >= sizes[j]); free(bufs[j]); } free(bufs); return NULL; } int main(int argc, char *argv[]) { if (argc == 4 && argv[3][0] == 't') { exit(0); } START(argc, argv, "vmmalloc_fork"); if (argc < 4) UT_FATAL("usage: %s [c|e] ", argv[0]); unsigned nfork = ATOU(argv[2]); unsigned nthread = ATOU(argv[3]); os_thread_t thread[nthread]; unsigned first_child = 0; unsigned **bufs = malloc(nfork * NBUFS * sizeof(void *)); UT_ASSERTne(bufs, NULL); size_t *sizes = malloc(nfork * NBUFS * sizeof(size_t)); UT_ASSERTne(sizes, NULL); int *pids1 = malloc(nfork * sizeof(pid_t)); UT_ASSERTne(pids1, NULL); int *pids2 = malloc(nfork * sizeof(pid_t)); UT_ASSERTne(pids2, NULL); for (unsigned i = 0; i < nfork; i++) { for (unsigned j = 0; j < NBUFS; j++) { unsigned idx = i * NBUFS + j; sizes[idx] = get_rand_size(); bufs[idx] = malloc(sizes[idx]); UT_ASSERTne(bufs[idx], NULL); UT_ASSERT(malloc_usable_size(bufs[idx]) >= sizes[idx]); } size_t **thread_sizes = malloc(sizeof(size_t *) * nthread); UT_ASSERTne(thread_sizes, NULL); for (int t = 0; t < nthread; ++t) { thread_sizes[t] = malloc(NBUFS * sizeof(size_t)); UT_ASSERTne(thread_sizes[t], NULL); for (int j = 0; j < NBUFS; j++) thread_sizes[t][j] = get_rand_size(); } for (int t = 0; t < nthread; ++t) { PTHREAD_CREATE(&thread[t], NULL, do_test, thread_sizes[t]); } pids1[i] = fork(); if (pids1[i] == -1) UT_OUT("fork failed"); UT_ASSERTne(pids1[i], -1); if (pids1[i] == 0 && argv[1][0] == 'e' && i == nfork - 1) { int fd = os_open("/dev/null", O_RDWR, S_IWUSR); int res = dup2(fd, 1); UT_ASSERTne(res, -1); os_close(fd); execl("/bin/echo", "/bin/echo", "Hello world!", NULL); } pids2[i] = getpid(); for (unsigned j = 0; j < NBUFS; j++) { *bufs[i * NBUFS + j] = ((unsigned)pids2[i] << 16) + j; } if (pids1[i]) { /* parent */ for (int t = 0; t < nthread; ++t) { PTHREAD_JOIN(&thread[t], NULL); free(thread_sizes[t]); } free(thread_sizes); } else { /* child */ first_child = i + 1; } for (unsigned ii = 0; ii < i; ii++) { for (unsigned j = 0; j < NBUFS; j++) { UT_ASSERTeq(*bufs[ii * NBUFS + j], ((unsigned)pids2[ii] << 16) + j); } } } for (unsigned i = first_child; i < nfork; i++) { int status; waitpid(pids1[i], &status, 0); UT_ASSERT(WIFEXITED(status)); UT_ASSERTeq(WEXITSTATUS(status), 0); } free(pids1); free(pids2); for (int i = 0; i < nfork; i++) { for (int j = 0; j < NBUFS; j++) { int idx = i * NBUFS + j; UT_ASSERT(malloc_usable_size(bufs[idx]) >= sizes[idx]); free(bufs[idx]); } } free(sizes); free(bufs); if (first_child == 0) { DONE(NULL); } } vmem-1.8/src/test/vmmalloc_init/000077500000000000000000000000001361505074100167335ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_init/.gitignore000066400000000000000000000000161361505074100207200ustar00rootroot00000000000000vmmalloc_init vmem-1.8/src/test/vmmalloc_init/Makefile000066400000000000000000000036121361505074100203750ustar00rootroot00000000000000# # Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/Makefile -- build vmmalloc_init unit test # TARGET = vmmalloc_init OBJS = vmmalloc_init.o include ../Makefile.inc all: libtest.so libtest.so: libtest.c $(CC) $(CFLAGS) -fPIC -shared -Wl,-soname,libtest.so -o $@ $^ clobber: libtest_clean libtest_clean: $(RM) libtest.so LIBS += $(LIBDL) CFLAGS += -Wno-deprecated-declarations vmem-1.8/src/test/vmmalloc_init/README000066400000000000000000000003501361505074100176110ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/vmmalloc_init/README. This test is Linux specific. This directory contains a unit test for vmmalloc_init. Usage: $ vmmalloc_init [d|l] d - deep binding l - lazy binding vmem-1.8/src/test/vmmalloc_init/TEST0000077500000000000000000000040221361505074100175160ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/vmmalloc_init/TEST0 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST1000077500000000000000000000037711361505074100175310ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST1 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 unset VMMALLOC_POOL_SIZE export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST10000077500000000000000000000035671361505074100176140ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST10 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST11000077500000000000000000000037371361505074100176140ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST11 -- unit test for vmmalloc_init # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup unset VMMALLOC_POOL_SIZE export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' stderr$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST12000077500000000000000000000037571361505074100176170ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST12 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup export VMMALLOC_POOL_SIZE=$((1024*1024)) export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' stderr$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST13000077500000000000000000000037701361505074100176130ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST13 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup export VMMALLOC_POOL_DIR="$DIR/nonexistingsubdir" export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' stderr$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST14000077500000000000000000000037471361505074100176200ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST14 -- unit test for vmmalloc_init # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup export VMMALLOC_POOL_DIR="/proc" export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' stderr$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST15000077500000000000000000000037361361505074100176170ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST15 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup unset VMMALLOC_POOL_DIR export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' stderr$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST16000077500000000000000000000042651361505074100176160ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST16 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan require_no_freebsd # Valgrind does not call vmmalloc's malloc implementation from library # loaded with RTLD_DEEPBIND. configure_valgrind memcheck force-disable $PMDK_LIB_PATH/$VMMALLOC configure_valgrind helgrind force-disable $PMDK_LIB_PATH/$VMMALLOC configure_valgrind drd force-disable $PMDK_LIB_PATH/$VMMALLOC setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_init$EXESUFFIX d 2> stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST17000077500000000000000000000035711361505074100176160ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST17 -- unit test for vmmalloc_init # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_init$EXESUFFIX l 2> stderr$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST18000077500000000000000000000037351361505074100176210ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST18 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup export VMMALLOC_FORK=4 export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP 'Error (libvmmalloc)' stderr$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST2000077500000000000000000000040751361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST2 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 export VMMALLOC_POOL_SIZE=$((1024*1024)) export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST3000077500000000000000000000041131361505074100175220ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST3 -- unit test for vmmalloc_init # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 export VMMALLOC_POOL_DIR="$DIR/nonexistingsubdir" export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|open|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST4000077500000000000000000000040721361505074100175270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST4 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 export VMMALLOC_POOL_DIR="/proc" export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|open|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST5000077500000000000000000000040541361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST5 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 unset VMMALLOC_POOL_DIR export TEST_LD_PRELOAD=$VMMALLOC expect_abnormal_exit ./vmmalloc_init$EXESUFFIX 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST6000077500000000000000000000045211361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST6 -- unit test for vmmalloc_init # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan require_no_freebsd # Valgrind does not call vmmalloc's malloc implementation from library # loaded with RTLD_DEEPBIND.
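#
# Roughly speaking (the exact invocation is handled by the test framework),
# this test amounts to something like:
#
#   LD_PRELOAD=$PMDK_LIB_PATH/$VMMALLOC ./vmmalloc_init d
#
# where 'd' makes vmmalloc_init dlopen() libtest.so with RTLD_DEEPBIND, so
# the library's malloc() binds to libc instead of the preloaded libvmmalloc.
# Outside of Valgrind the overridden glibc malloc hooks still redirect such
# calls to libvmmalloc (see the NOTE in vmmalloc_init.c); under Valgrind,
# which supplies its own malloc replacement, that redirection presumably
# does not happen, hence the force-disable below.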
configure_valgrind memcheck force-disable $PMDK_LIB_PATH/$VMMALLOC configure_valgrind helgrind force-disable $PMDK_LIB_PATH/$VMMALLOC configure_valgrind drd force-disable $PMDK_LIB_PATH/$VMMALLOC setup export VMMALLOC_LOG_LEVEL=4 export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_init$EXESUFFIX d 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/TEST7000077500000000000000000000040251361505074100175300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_init/TEST7 -- unit test for vmmalloc_init # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_init$EXESUFFIX l 2> stderr$UNITTEST_NUM.log $GREP -E 'VMMALLOC_POOL_SIZE|VMMALLOC_POOL_DIR|TMPDIR|mkstemp|size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_init/grep0.log.match000066400000000000000000000001711361505074100215450ustar00rootroot00000000000000$(OPT): <3> [$(*) util_tmpfile_mkstemp] unlinked file is $(*) : <4> [$(*) malloc]$(W)size 4321 vmem-1.8/src/test/vmmalloc_init/grep1.log.match000066400000000000000000000001131361505074100215420ustar00rootroot00000000000000Error (libvmmalloc): environment variable VMMALLOC_POOL_SIZE not specified vmem-1.8/src/test/vmmalloc_init/grep11.log.match000066400000000000000000000001131361505074100216230ustar00rootroot00000000000000Error (libvmmalloc): environment variable VMMALLOC_POOL_SIZE not specified vmem-1.8/src/test/vmmalloc_init/grep12.log.match000066400000000000000000000001301361505074100216230ustar00rootroot00000000000000Error (libvmmalloc): VMMALLOC_POOL_SIZE value is less than minimum (1048576 < 14680064) vmem-1.8/src/test/vmmalloc_init/grep13.log.match000066400000000000000000000001121361505074100216240ustar00rootroot00000000000000Error (libvmmalloc): vmem pool creation failed: No such file or directory vmem-1.8/src/test/vmmalloc_init/grep14.log.match000066400000000000000000000000651361505074100216340ustar00rootroot00000000000000Error (libvmmalloc): vmem pool creation failed: $(*) vmem-1.8/src/test/vmmalloc_init/grep15.log.match000066400000000000000000000001121361505074100216260ustar00rootroot00000000000000Error (libvmmalloc): environment variable VMMALLOC_POOL_DIR not specified vmem-1.8/src/test/vmmalloc_init/grep18.log.match000066400000000000000000000000671361505074100216420ustar00rootroot00000000000000Error (libvmmalloc): incorrect VMMALLOC_FORK value (4) vmem-1.8/src/test/vmmalloc_init/grep2.log.match000066400000000000000000000001301361505074100215420ustar00rootroot00000000000000Error (libvmmalloc): VMMALLOC_POOL_SIZE value is less than minimum (1048576 < 14680064) vmem-1.8/src/test/vmmalloc_init/grep3.log.match000066400000000000000000000002531361505074100215510ustar00rootroot00000000000000$(OPT): <1> [$(*) util_tmpfile_mkstemp]$(W)mkstemp: No such file or directory $(OPX): <1> [$(*) util_tmpfile]$(W)open: No such file or directory vmem-1.8/src/test/vmmalloc_init/grep4.log.match000066400000000000000000000002011361505074100215430ustar00rootroot00000000000000$(OPT): <1> [$(*) util_tmpfile_mkstemp]$(W)mkstemp: $(*) $(OPX): <1> [$(*) util_tmpfile]$(W)open: $(*) vmem-1.8/src/test/vmmalloc_init/grep5.log.match000066400000000000000000000001121361505074100215450ustar00rootroot00000000000000Error (libvmmalloc): environment variable VMMALLOC_POOL_DIR not specified vmem-1.8/src/test/vmmalloc_init/grep6.log.match000066400000000000000000000002471361505074100215570ustar00rootroot00000000000000$(OPT): <3> [$(*) util_tmpfile_mkstemp] unlinked file is $(*) : <4> [$(*) malloc]$(W)size 4321 : <4> [$(*) malloc]$(W)size 4321 vmem-1.8/src/test/vmmalloc_init/grep7.log.match000066400000000000000000000002471361505074100215600ustar00rootroot00000000000000$(OPT): <3> [$(*) util_tmpfile_mkstemp] unlinked file is $(*) : <4> [$(*) malloc]$(W)size 4321 : <4> [$(*) malloc]$(W)size 4321 
vmem-1.8/src/test/vmmalloc_init/libtest.c000066400000000000000000000041341361505074100205470ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * libtest.c -- a simple test library that uses malloc() * * This is used to test libvmmalloc behavior in case when an application * loads a shared library depending on libc using RTLD_DEEPBIND option. */ #include #include #include "libtest.h" /* * falloc -- allocate a block of size bytes and fill it with a constant byte * * The memory obtained from falloc() can be freed using free(). */ void * falloc(size_t size, int c) { void *ptr = malloc(size); if (ptr) memset(ptr, c, size); return ptr; } vmem-1.8/src/test/vmmalloc_init/libtest.h000066400000000000000000000032451361505074100205560ustar00rootroot00000000000000/* * Copyright 2015-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef LIBTEST_H #define LIBTEST_H #include void *falloc(size_t size, int c); #endif vmem-1.8/src/test/vmmalloc_init/out0.log.match000066400000000000000000000001321361505074100214140ustar00rootroot00000000000000vmmalloc_init/TEST0: START: vmmalloc_init ./vmmalloc_init$(nW) vmmalloc_init/TEST0: DONE vmem-1.8/src/test/vmmalloc_init/out10.log.match000066400000000000000000000001341361505074100214770ustar00rootroot00000000000000vmmalloc_init/TEST10: START: vmmalloc_init ./vmmalloc_init$(nW) vmmalloc_init/TEST10: DONE vmem-1.8/src/test/vmmalloc_init/out16.log.match000066400000000000000000000001531361505074100215060ustar00rootroot00000000000000vmmalloc_init/TEST16: START: vmmalloc_init ./vmmalloc_init$(nW) d deep binding vmmalloc_init/TEST16: DONE vmem-1.8/src/test/vmmalloc_init/out17.log.match000066400000000000000000000001531361505074100215070ustar00rootroot00000000000000vmmalloc_init/TEST17: START: vmmalloc_init ./vmmalloc_init$(nW) l lazy binding vmmalloc_init/TEST17: DONE vmem-1.8/src/test/vmmalloc_init/out6.log.match000066400000000000000000000001511361505074100214230ustar00rootroot00000000000000vmmalloc_init/TEST6: START: vmmalloc_init ./vmmalloc_init$(nW) d deep binding vmmalloc_init/TEST6: DONE vmem-1.8/src/test/vmmalloc_init/out7.log.match000066400000000000000000000001511361505074100214240ustar00rootroot00000000000000vmmalloc_init/TEST7: START: vmmalloc_init ./vmmalloc_init$(nW) l lazy binding vmmalloc_init/TEST7: DONE vmem-1.8/src/test/vmmalloc_init/stderr10.log.match000066400000000000000000000000001361505074100221630ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_init/stderr16.log.match000066400000000000000000000000001361505074100221710ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_init/stderr17.log.match000066400000000000000000000000001361505074100221720ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_init/vmmalloc_init.c000066400000000000000000000060161361505074100217370ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_init.c -- unit test for libvmmalloc initialization * * usage: vmmalloc_init [d|l] */ #include #include #include "unittest.h" static void *(*Falloc)(size_t size, int val); int main(int argc, char *argv[]) { void *handle = NULL; void *ptr; START(argc, argv, "vmmalloc_init"); if (argc > 2) UT_FATAL("usage: %s [d|l]", argv[0]); if (argc == 2) { switch (argv[1][0]) { case 'd': UT_OUT("deep binding"); handle = dlopen("./libtest.so", RTLD_NOW | RTLD_LOCAL | RTLD_DEEPBIND); break; case 'l': UT_OUT("lazy binding"); handle = dlopen("./libtest.so", RTLD_LAZY); break; default: UT_FATAL("usage: %s [d|l]", argv[0]); } if (handle == NULL) UT_OUT("dlopen: %s", dlerror()); UT_ASSERTne(handle, NULL); Falloc = dlsym(handle, "falloc"); UT_ASSERTne(Falloc, NULL); } ptr = malloc(4321); free(ptr); if (argc == 2) { /* * NOTE: falloc calls malloc internally. * If libtest is loaded with RTLD_DEEPBIND flag, then it will * use its own lookup scope in preference to global symbols * from already loaded (LD_PRELOAD) libvmmalloc. So, falloc * will call the stock libc's malloc. * However, since we override the malloc hooks, a call to libc's * malloc will be redirected to libvmmalloc anyway, and the * memory can be safely reclaimed using libvmmalloc's free. */ ptr = Falloc(4321, 0xaa); free(ptr); } if (handle != NULL) dlclose(handle); DONE(NULL); } vmem-1.8/src/test/vmmalloc_malloc/000077500000000000000000000000001361505074100172375ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_malloc/.gitignore000066400000000000000000000000201361505074100212170ustar00rootroot00000000000000vmmalloc_malloc vmem-1.8/src/test/vmmalloc_malloc/Makefile000066400000000000000000000032701361505074100207010ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc/Makefile -- build vmmalloc_malloc unit test # TARGET = vmmalloc_malloc OBJS = vmmalloc_malloc.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_malloc/TEST0000077500000000000000000000035521361505074100200310ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc/TEST0 -- unit test for libvmmalloc malloc # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_malloc$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_malloc/out0.log.match000066400000000000000000000001421361505074100217210ustar00rootroot00000000000000vmmalloc_malloc/TEST0: START: vmmalloc_malloc ./vmmalloc_malloc$(nW) vmmalloc_malloc/TEST0: DONE vmem-1.8/src/test/vmmalloc_malloc/vmmalloc_malloc.c000066400000000000000000000046601361505074100225520ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_malloc.c -- unit test for libvmmalloc malloc * * usage: vmmalloc_malloc */ #include "unittest.h" #define MIN_SIZE (sizeof(int)) #define SIZE 20 #define MAX_SIZE (MIN_SIZE << SIZE) int main(int argc, char *argv[]) { const int test_value = 12345; size_t size; int *ptr[SIZE]; int i = 0; size_t sum_alloc = 0; START(argc, argv, "vmmalloc_malloc"); /* test with multiple size of allocations from 4MB to sizeof(int) */ for (size = MAX_SIZE; size > MIN_SIZE; size /= 2) { ptr[i] = malloc(size); if (ptr[i] == NULL) continue; *ptr[i] = test_value; UT_ASSERTeq(*ptr[i], test_value); sum_alloc += size; i++; } /* at least one allocation for each size must succeed */ UT_ASSERTeq(size, MIN_SIZE); /* allocate more than half of pool size */ UT_ASSERT(sum_alloc * 2 > VMEM_MIN_POOL); while (i > 0) free(ptr[--i]); DONE(NULL); } vmem-1.8/src/test/vmmalloc_malloc_hooks/000077500000000000000000000000001361505074100204425ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_malloc_hooks/.gitignore000066400000000000000000000000261361505074100224300ustar00rootroot00000000000000vmmalloc_malloc_hooks vmem-1.8/src/test/vmmalloc_malloc_hooks/Makefile000066400000000000000000000034461361505074100221110ustar00rootroot00000000000000# # Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc_hooks/Makefile -- build vmmalloc_malloc_hooks unit test # TARGET = vmmalloc_malloc_hooks OBJS = vmmalloc_malloc_hooks.o vmmalloc_malloc_hooks.o: CFLAGS += -Wno-deprecated-declarations include ../vmmalloc_dummy_funcs/Makefile.inc vmem-1.8/src/test/vmmalloc_malloc_hooks/README000066400000000000000000000007251361505074100213260ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/vmmalloc_malloc_hooks/README. This directory contains a unit test for libvmmalloc malloc hooks. The program in vmmalloc_malloc_hooks.c modifies the behavior of the system memory allocation routines by specifying its custom hook functions. The libvmmalloc library is expected to override malloc hooks, so when the test program is run with libvmmalloc.so.1 pre-loaded, the user-defined hooks should never be called. vmem-1.8/src/test/vmmalloc_malloc_hooks/TEST0000077500000000000000000000041561361505074100212350ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc_hooks/TEST0 -- unit test for # libvmmalloc malloc hooks # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan require_no_freebsd # user-defined hooks won't be called if valgrind is enabled configure_valgrind helgrind force-disable configure_valgrind drd force-disable configure_valgrind memcheck force-disable setup # do not pre-load libvmmalloc.so.1 # user-defined hooks should be called expect_normal_exit ./vmmalloc_malloc_hooks$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_malloc_hooks/TEST1000077500000000000000000000036421361505074100212350ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc_hooks/TEST1 -- unit test for # libvmmalloc malloc hooks # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type nondebug require_no_asan setup # user-defined hooks should not be called export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_malloc_hooks$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_malloc_hooks/TEST2000077500000000000000000000037401361505074100212350ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc_hooks/TEST2 -- unit test for # libvmmalloc malloc hooks # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug require_no_asan setup export VMMALLOC_LOG_LEVEL=4 export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_malloc_hooks$EXESUFFIX $GREP -E 'size\ 4321' \ vmmalloc$UNITTEST_NUM.log > grep$UNITTEST_NUM.log check pass vmem-1.8/src/test/vmmalloc_malloc_hooks/grep2.log.match000066400000000000000000000003351361505074100232600ustar00rootroot00000000000000: <4> [$(*) malloc]$(W)size 4321 : <4> [$(*) calloc]$(W)nmemb 1, size 4321 : <4> [$(*) realloc]$(W)ptr 0x$(X), size 4321 : <4> [$(*) memalign]$(W)boundary 16 size 4321 vmem-1.8/src/test/vmmalloc_malloc_hooks/out0.log.match000066400000000000000000000002601361505074100231250ustar00rootroot00000000000000vmmalloc_malloc_hooks/TEST0: START: vmmalloc_malloc_hooks ./vmmalloc_malloc_hooks$(nW) installing hooks malloc 3 realloc 1 memalign 1 free 4 vmmalloc_malloc_hooks/TEST0: DONE vmem-1.8/src/test/vmmalloc_malloc_hooks/out1.log.match000066400000000000000000000002531361505074100231300ustar00rootroot00000000000000vmmalloc_malloc_hooks/TEST1: START: vmmalloc_malloc_hooks ./vmmalloc_malloc_hooks installing hooks malloc 0 realloc 0 memalign 0 free 0 vmmalloc_malloc_hooks/TEST1: DONE vmem-1.8/src/test/vmmalloc_malloc_hooks/out2.log.match000066400000000000000000000002531361505074100231310ustar00rootroot00000000000000vmmalloc_malloc_hooks/TEST2: START: vmmalloc_malloc_hooks ./vmmalloc_malloc_hooks installing hooks malloc 0 realloc 0 memalign 0 free 0 vmmalloc_malloc_hooks/TEST2: DONE vmem-1.8/src/test/vmmalloc_malloc_hooks/vmmalloc_malloc_hooks.c000066400000000000000000000073751361505074100251660ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_malloc_hooks.c -- unit test for libvmmalloc malloc hooks * * usage: vmmalloc_malloc_hooks */ #include #include "unittest.h" #include "vmmalloc_dummy_funcs.h" static void *(*old_malloc_hook) (size_t, const void *); static void *(*old_realloc_hook) (void *, size_t, const void *); static void *(*old_memalign_hook) (size_t, size_t, const void *); static void (*old_free_hook) (void *, const void *); static int malloc_cnt = 0; static int realloc_cnt = 0; static int memalign_cnt = 0; static int free_cnt = 0; static void * hook_malloc(size_t size, const void *caller) { void *p; malloc_cnt++; __malloc_hook = old_malloc_hook; p = malloc(size); old_malloc_hook = __malloc_hook; /* might changed */ __malloc_hook = hook_malloc; return p; } static void * hook_realloc(void *ptr, size_t size, const void *caller) { void *p; realloc_cnt++; __realloc_hook = old_realloc_hook; p = realloc(ptr, size); old_realloc_hook = __realloc_hook; /* might changed */ __realloc_hook = hook_realloc; return p; } static void * hook_memalign(size_t alignment, size_t size, const void *caller) { void *p; memalign_cnt++; __memalign_hook = old_memalign_hook; p = memalign(alignment, size); old_memalign_hook = __memalign_hook; /* might changed */ __memalign_hook = hook_memalign; return p; } static void hook_free(void *ptr, const void *caller) { free_cnt++; __free_hook = old_free_hook; free(ptr); old_free_hook = __free_hook; /* might changed */ __free_hook = hook_free; } static void hook_init(void) { UT_OUT("installing hooks"); old_malloc_hook = __malloc_hook; old_realloc_hook = __realloc_hook; old_memalign_hook = __memalign_hook; old_free_hook = __free_hook; __malloc_hook = hook_malloc; __realloc_hook = hook_realloc; __memalign_hook = hook_memalign; __free_hook = hook_free; } int main(int argc, char *argv[]) { void *ptr; START(argc, argv, "vmmalloc_malloc_hooks"); hook_init(); ptr = malloc(4321); free(ptr); ptr = calloc(1, 4321); free(ptr); ptr = malloc(8); ptr = realloc(ptr, 4321); free(ptr); ptr = memalign(16, 4321); free(ptr); UT_OUT("malloc %d realloc %d memalign %d free %d", malloc_cnt, realloc_cnt, memalign_cnt, free_cnt); DONE(NULL); } vmem-1.8/src/test/vmmalloc_malloc_usable_size/000077500000000000000000000000001361505074100216245ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_malloc_usable_size/.gitignore000066400000000000000000000000341361505074100236110ustar00rootroot00000000000000vmmalloc_malloc_usable_size vmem-1.8/src/test/vmmalloc_malloc_usable_size/Makefile000066400000000000000000000033501361505074100232650ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
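The hook counters in vmmalloc_malloc_hooks.c above rely on the glibc-specific __malloc_hook family, which is deprecated (hence -Wno-deprecated-declarations in its Makefile) and unavailable on FreeBSD (hence require_no_freebsd in TEST0). For comparison only -- the test suite does not do this -- a rough sketch of counting allocations portably by interposing malloc() via dlsym(RTLD_NEXT, ...); the counter name is arbitrary:

/*
 * Illustrative malloc() interposer (not part of the test suite):
 * counts calls and forwards to the next malloc in the lookup chain.
 * A production interposer would also need a guard against recursion
 * while dlsym() resolves the real symbol.
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

static int Malloc_cnt;

void *
malloc(size_t size)
{
	static void *(*real_malloc)(size_t);

	if (real_malloc == NULL)
		real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

	Malloc_cnt++;
	return real_malloc(size);
}

Such an interposer would be preloaded much like libvmmalloc itself, which is why the hook-based test expects its counters to stay at zero whenever libvmmalloc.so.1 is preloaded ahead of it (TEST1 and TEST2).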
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc_usable_size/Makefile -- build vmmalloc_malloc_usable_size unit test # TARGET = vmmalloc_malloc_usable_size OBJS = vmmalloc_malloc_usable_size.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_malloc_usable_size/TEST0000077500000000000000000000040051361505074100224100ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_malloc_usable_size/TEST0 -- unit test for # libvmmalloc malloc_usable_size # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup # override default pool size to test all the allocation size classes export VMMALLOC_POOL_SIZE=$((32 * 1024 * 1024)) export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_malloc_usable_size$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_malloc_usable_size/out0.log.match000066400000000000000000000005101361505074100243050ustar00rootroot00000000000000vmmalloc_malloc_usable_size/TEST0: START: vmmalloc_malloc_usable_size ./vmmalloc_malloc_usable_size$(nW) size 10 size 100 size 200 size 500 size 1000 size 2000 size 3000 size 1048576 size 2097152 size 3145728 size 4194304 size 5242880 size 6291456 size 7340032 size 8388608 size 9437184 vmmalloc_malloc_usable_size/TEST0: DONE vmem-1.8/src/test/vmmalloc_malloc_usable_size/vmmalloc_malloc_usable_size.c000066400000000000000000000065001361505074100275170ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmmalloc_malloc_usable_size.c -- unit test for * libvmmalloc malloc_usable_size * * usage: vmmalloc_malloc_usable_size */ #ifdef __FreeBSD__ #include #else #include #endif #include "unittest.h" static const struct { size_t size; size_t spacing; } Check_sizes[] = { {.size = 10, .spacing = 8}, {.size = 100, .spacing = 16}, {.size = 200, .spacing = 32}, {.size = 500, .spacing = 64}, {.size = 1000, .spacing = 128}, {.size = 2000, .spacing = 256}, {.size = 3000, .spacing = 512}, {.size = 1 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 2 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 3 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 4 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 5 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 6 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 7 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 8 * 1024 * 1024, .spacing = 4 * 1024 * 1024}, {.size = 9 * 1024 * 1024, .spacing = 4 * 1024 * 1024} }; int main(int argc, char *argv[]) { void *ptr; size_t usable_size; size_t size; int i; START(argc, argv, "vmmalloc_malloc_usable_size"); UT_ASSERTeq(malloc_usable_size(NULL), 0); for (i = 0; i < (sizeof(Check_sizes) / sizeof(Check_sizes[0])); ++i) { size = Check_sizes[i].size; UT_OUT("size %zu", size); ptr = malloc(size); UT_ASSERTne(ptr, NULL); usable_size = malloc_usable_size(ptr); UT_ASSERT(usable_size >= size); if (usable_size - size > Check_sizes[i].spacing) { UT_FATAL("Size %zu: spacing %zu is bigger" "than expected: %zu", size, (usable_size - size), Check_sizes[i].spacing); } memset(ptr, 0xEE, usable_size); UT_ASSERTeq(*(unsigned char *)ptr, 0xEE); free(ptr); } DONE(NULL); } vmem-1.8/src/test/vmmalloc_memalign/000077500000000000000000000000001361505074100175615ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_memalign/.gitignore000066400000000000000000000000221361505074100215430ustar00rootroot00000000000000vmmalloc_memalign vmem-1.8/src/test/vmmalloc_memalign/Makefile000066400000000000000000000034061361505074100212240ustar00rootroot00000000000000# # Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
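vmmalloc_malloc_usable_size.c above asserts that the slack between the requested size and malloc_usable_size() stays within a per-size-class bound, and TEST0 shrinks the pool to 32 MB precisely to walk all the classes. As a reminder of the plain libc contract, independent of libvmmalloc, a minimal self-contained use of the call looks roughly like this (glibc declares it in <malloc.h>, FreeBSD in <malloc_np.h>):

/*
 * Minimal malloc_usable_size() usage: the reported region may be
 * larger than requested and is fully writable.
 */
#include <malloc.h>	/* malloc_usable_size() on glibc */
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	char *buf;
	size_t usable;

	buf = malloc(100);
	if (buf == NULL)
		return 1;

	usable = malloc_usable_size(buf);	/* at least 100 */
	memset(buf, 0, usable);	/* writing the whole usable region is safe */

	free(buf);
	return 0;
}

The only portable guarantee is that the usable region is at least as large as requested; relying on a particular spacing, as the test does, is specific to the allocator under test.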
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_memalign/Makefile -- build vmmalloc_memalign unit test # TARGET = vmmalloc_memalign OBJS = vmmalloc_memalign.o vmmalloc_memalign.o: CFLAGS += -D_ISOC11_SOURCE include ../vmmalloc_dummy_funcs/Makefile.inc vmem-1.8/src/test/vmmalloc_memalign/TEST0000077500000000000000000000035621361505074100203540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_memalign/TEST0 -- unit test for libvmmalloc memalign # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_memalign$EXESUFFIX m check pass vmem-1.8/src/test/vmmalloc_memalign/TEST1000077500000000000000000000035701361505074100203540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_memalign/TEST1 -- unit test for libvmmalloc posix_memalign # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_memalign$EXESUFFIX p check pass vmem-1.8/src/test/vmmalloc_memalign/TEST2000077500000000000000000000035671361505074100203630ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_memalign/TEST2 -- unit test for libvmmalloc aligned_alloc # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_memalign$EXESUFFIX a check pass vmem-1.8/src/test/vmmalloc_memalign/out0.log.match000066400000000000000000000007071361505074100222520ustar00rootroot00000000000000vmmalloc_memalign/TEST0: START: vmmalloc_memalign ./vmmalloc_memalign$(nW) m testing memalign alignment 4194304 alignment 2097152 alignment 1048576 alignment 524288 alignment 262144 alignment 131072 alignment 65536 alignment 32768 alignment 16384 alignment 8192 alignment 4096 alignment 2048 alignment 1024 alignment 512 alignment 256 alignment 128 alignment 64 alignment 32 alignment 16 alignment 8 alignment 4 alignment 2 vmmalloc_memalign/TEST0: DONE vmem-1.8/src/test/vmmalloc_memalign/out1.log.match000066400000000000000000000007151361505074100222520ustar00rootroot00000000000000vmmalloc_memalign/TEST1: START: vmmalloc_memalign ./vmmalloc_memalign$(nW) p testing posix_memalign alignment 4194304 alignment 2097152 alignment 1048576 alignment 524288 alignment 262144 alignment 131072 alignment 65536 alignment 32768 alignment 16384 alignment 8192 alignment 4096 alignment 2048 alignment 1024 alignment 512 alignment 256 alignment 128 alignment 64 alignment 32 alignment 16 alignment 8 alignment 4 alignment 2 vmmalloc_memalign/TEST1: DONE vmem-1.8/src/test/vmmalloc_memalign/out2.log.match000066400000000000000000000007141361505074100222520ustar00rootroot00000000000000vmmalloc_memalign/TEST2: START: vmmalloc_memalign ./vmmalloc_memalign$(nW) a testing aligned_alloc alignment 4194304 alignment 2097152 alignment 1048576 alignment 524288 alignment 262144 alignment 131072 alignment 65536 alignment 32768 alignment 16384 alignment 8192 alignment 4096 alignment 2048 alignment 1024 alignment 512 alignment 256 alignment 128 alignment 64 alignment 32 alignment 16 alignment 8 alignment 4 alignment 2 vmmalloc_memalign/TEST2: DONE vmem-1.8/src/test/vmmalloc_memalign/vmmalloc_memalign.c000066400000000000000000000070311361505074100234110ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_memalign.c -- unit test for libvmmalloc memalign, posix_memalign * and aligned_alloc (if available) * * usage: vmmalloc_memalign [m|p|a] */ #include #include #include "unittest.h" #include "vmmalloc_dummy_funcs.h" #define USAGE "usage: %s [m|p|a]" #define MIN_ALIGN (2) #define MAX_ALIGN (4L * 1024L * 1024L) #define MAX_ALLOCS (100) /* buffer for all allocations */ static int *allocs[MAX_ALLOCS]; static void *(*Aalloc)(size_t alignment, size_t size); static void * posix_memalign_wrap(size_t alignment, size_t size) { void *ptr; int err = posix_memalign(&ptr, alignment, size); /* ignore OOM */ if (err) { char buff[UT_MAX_ERR_MSG]; ptr = NULL; ut_strerror(err, buff, UT_MAX_ERR_MSG); if (err != ENOMEM) UT_OUT("posix_memalign: %s", buff); } return ptr; } int main(int argc, char *argv[]) { const int test_value = 123456; size_t alignment; int i; START(argc, argv, "vmmalloc_memalign"); if (argc != 2) UT_FATAL(USAGE, argv[0]); switch (argv[1][0]) { case 'm': UT_OUT("testing memalign"); Aalloc = memalign; break; case 'p': UT_OUT("testing posix_memalign"); Aalloc = posix_memalign_wrap; break; case 'a': UT_OUT("testing aligned_alloc"); Aalloc = aligned_alloc; break; default: UT_FATAL(USAGE, argv[0]); } /* test with address alignment from 2B to 4MB */ for (alignment = MAX_ALIGN; alignment >= MIN_ALIGN; alignment /= 2) { UT_OUT("alignment %zu", alignment); memset(allocs, 0, sizeof(allocs)); for (i = 0; i < MAX_ALLOCS; ++i) { allocs[i] = Aalloc(alignment, sizeof(int)); if (allocs[i] == NULL) break; /* ptr should be usable */ *allocs[i] = test_value; UT_ASSERTeq(*allocs[i], test_value); /* check for correct address alignment */ UT_ASSERTeq( (uintptr_t)(allocs[i]) & (alignment - 1), 0); } /* at least one allocation must succeed */ UT_ASSERT(i > 0); for (i = 0; i < MAX_ALLOCS && allocs[i] != NULL; ++i) free(allocs[i]); } DONE(NULL); } vmem-1.8/src/test/vmmalloc_out_of_memory/000077500000000000000000000000001361505074100206535ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_out_of_memory/.gitignore000066400000000000000000000000271361505074100226420ustar00rootroot00000000000000vmmalloc_out_of_memory vmem-1.8/src/test/vmmalloc_out_of_memory/Makefile000066400000000000000000000033241361505074100223150ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
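The posix_memalign_wrap() helper in vmmalloc_memalign.c above exists because the three aligned-allocation interfaces exercised by the test report failure differently: memalign() and aligned_alloc() return NULL, while posix_memalign() returns an error number and leaves errno alone. A small sketch of those conventions, with a 64-byte alignment chosen arbitrarily:

/*
 * Failure conventions of the three interfaces exercised by the test.
 * aligned_alloc() needs C11; the test's Makefile adds -D_ISOC11_SOURCE.
 */
#include <stdlib.h>
#include <malloc.h>	/* memalign() on glibc */

int
main(void)
{
	void *p1, *p2, *p3;
	int err;
	size_t size = 100;

	p1 = memalign(64, size);	/* NULL on failure */
	p2 = aligned_alloc(64, size);	/* NULL on failure */

	err = posix_memalign(&p3, 64, size);
	if (err != 0)		/* error number returned; errno not set */
		p3 = NULL;	/* e.g. ENOMEM or EINVAL */

	free(p1);	/* free(NULL) is a no-op, so no checks needed */
	free(p2);
	free(p3);

	return 0;
}

Note that C11 additionally expected the aligned_alloc() size to be a multiple of the alignment; glibc is more permissive, which is what lets the test request sizeof(int) at multi-megabyte alignments.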
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_out_of_memory/Makefile -- build vmmalloc_out_of_memory unit test # TARGET = vmmalloc_out_of_memory OBJS = vmmalloc_out_of_memory.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_out_of_memory/TEST0000077500000000000000000000036011361505074100214400ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_out_of_memory/TEST0 -- unit test for # libvmmalloc out_of_memory # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_out_of_memory$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_out_of_memory/out0.log.match000066400000000000000000000001761361505074100233440ustar00rootroot00000000000000vmmalloc_out_of_memory/TEST0: START: vmmalloc_out_of_memory ./vmmalloc_out_of_memory$(nW) vmmalloc_out_of_memory/TEST0: DONE vmem-1.8/src/test/vmmalloc_out_of_memory/vmmalloc_out_of_memory.c000066400000000000000000000041611361505074100255760ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_out_of_memory -- unit test for libvmmalloc out_of_memory * * usage: vmmalloc_out_of_memory */ #include "unittest.h" int main(int argc, char *argv[]) { START(argc, argv, "vmmalloc_out_of_memory"); /* allocate all memory */ void *prev = NULL; for (;;) { void **next = malloc(sizeof(void *)); if (next == NULL) { /* out of memory */ break; } *next = prev; prev = next; } UT_ASSERTne(prev, NULL); /* free all allocations */ while (prev != NULL) { void **act = prev; prev = *act; free(act); } DONE(NULL); } vmem-1.8/src/test/vmmalloc_realloc/000077500000000000000000000000001361505074100174115ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_realloc/.gitignore000066400000000000000000000000211361505074100213720ustar00rootroot00000000000000vmmalloc_realloc vmem-1.8/src/test/vmmalloc_realloc/Makefile000066400000000000000000000032741361505074100210570ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_realloc/Makefile -- build vmmalloc_realloc unit test # TARGET = vmmalloc_realloc OBJS = vmmalloc_realloc.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_realloc/TEST0000077500000000000000000000035551361505074100202060ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_realloc/TEST0 -- unit test for libvmmalloc realloc # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_realloc$EXESUFFIX check pass vmem-1.8/src/test/vmmalloc_realloc/out0.log.match000066400000000000000000000001461361505074100220770ustar00rootroot00000000000000vmmalloc_realloc/TEST0: START: vmmalloc_realloc ./vmmalloc_realloc$(nW) vmmalloc_realloc/TEST0: DONE vmem-1.8/src/test/vmmalloc_realloc/vmmalloc_realloc.c000066400000000000000000000041261361505074100230730ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * vmmalloc_realloc -- unit test for libvmmalloc realloc * * usage: vmmalloc_realloc */ #include "unittest.h" int main(int argc, char *argv[]) { const int test_value = 123456; START(argc, argv, "vmmalloc_realloc"); int *test = realloc(NULL, sizeof(int)); UT_ASSERTne(test, NULL); test[0] = test_value; UT_ASSERTeq(test[0], test_value); test = realloc(test, sizeof(int) * 10); UT_ASSERTne(test, NULL); UT_ASSERTeq(test[0], test_value); test[1] = test_value; test[9] = test_value; free(test); DONE(NULL); } vmem-1.8/src/test/vmmalloc_valgrind/000077500000000000000000000000001361505074100175765ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_valgrind/.gitignore000066400000000000000000000000221361505074100215600ustar00rootroot00000000000000vmmalloc_valgrind vmem-1.8/src/test/vmmalloc_valgrind/Makefile000066400000000000000000000033011361505074100212330ustar00rootroot00000000000000# # Copyright 2014-2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valgrind/Makefile -- build vmmalloc_valgrind unit test # TARGET = vmmalloc_valgrind OBJS = vmmalloc_valgrind.o include ../Makefile.inc vmem-1.8/src/test/vmmalloc_valgrind/TEST0000077500000000000000000000041421361505074100203640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valgrind/TEST0 -- unit test for libvmmalloc valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_valgrind 3.7 set_valgrind_exe_name configure_valgrind memcheck force-enable $PMDK_LIB_PATH/$VMMALLOC setup unset VMMALLOC_LOG_LEVEL unset VMMALLOC_LOG_FILE export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_valgrind$EXESUFFIX 0 check pass vmem-1.8/src/test/vmmalloc_valgrind/TEST1000077500000000000000000000041421361505074100203650ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valgrind/TEST1 -- unit test for libvmmalloc valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_valgrind 3.7 set_valgrind_exe_name configure_valgrind memcheck force-enable $PMDK_LIB_PATH/$VMMALLOC setup unset VMMALLOC_LOG_LEVEL unset VMMALLOC_LOG_FILE export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_valgrind$EXESUFFIX 1 check pass vmem-1.8/src/test/vmmalloc_valgrind/TEST2000077500000000000000000000041431361505074100203670ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valgrind/TEST2 -- unit test for libvmmalloc valgrind # export VALGRIND_OPTS="--suppressions=excluded-errors.supp --leak-check=full\ --show-reachable=yes" . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_valgrind 3.8 set_valgrind_exe_name configure_valgrind memcheck force-enable $PMDK_LIB_PATH/$VMMALLOC setup unset VMMALLOC_LOG_LEVEL unset VMMALLOC_LOG_FILE export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_valgrind$EXESUFFIX 2 check pass vmem-1.8/src/test/vmmalloc_valgrind/excluded-errors.supp000066400000000000000000000001351361505074100236150ustar00rootroot00000000000000{ Bullseye Coverage - Memory leaks Memcheck:Leak ... fun:cov_probe_v12 ... } vmem-1.8/src/test/vmmalloc_valgrind/memcheck1.log.match000066400000000000000000000016311361505074100232320ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== ==$(N)== $(N) bytes in 1 blocks are definitely lost in loss record 1 of $(N) ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): malloc $(*) ==$(N)== by 0x$(X): main (vmmalloc_valgrind.c:$(N)) ==$(N)== ==$(N)== LEAK SUMMARY: ==$(N)== definitely lost: 8 bytes in 1 blocks ==$(N)== indirectly lost: 0 bytes in 0 blocks ==$(N)== possibly lost: 0 bytes in 0 blocks ==$(N)== still reachable: 0 bytes in 0 blocks ==$(N)== suppressed: $(NC) bytes in $(N) blocks ==$(N)== ==$(N)== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmmalloc_valgrind/memcheck2.log.match000066400000000000000000000021331361505074100232310ustar00rootroot00000000000000==$(N)== Memcheck, a memory error detector ==$(N)== Copyright $(*) ==$(N)== Using $(*) ==$(N)== Command:$(*) ==$(N)== Parent PID: $(N) ==$(N)== ==$(N)== Invalid write of size 4 ==$(N)== at 0x$(X): main (vmmalloc_valgrind.c:$(N)) ==$(N)== Address 0x$(X) is 0 bytes after $(*) block of size $(N) alloc'd ==$(N)== at 0x$(X): ${je_vmem_pool_malloc|???} $(*) $(OPT)==$(N)== by 0x$(X): malloc $(*) ==$(N)== by 0x$(X): main (vmmalloc_valgrind.c:$(N)) ==$(N)== ==$(N)== ==$(N)== HEAP SUMMARY: ==$(N)== in use at exit: $(NC) bytes in $(N) blocks ==$(N)== total heap usage: $(N) allocs, $(N) frees, $(NC) bytes allocated ==$(N)== $(OPT)==$(N)== All heap blocks were freed -- no leaks are possible $(OPX)==$(N)== LEAK SUMMARY: $(OPT)==$(N)== definitely lost: 0 bytes in 0 blocks $(OPT)==$(N)== indirectly lost: 0 bytes in 0 blocks $(OPT)==$(N)== possibly lost: 0 bytes in 
0 blocks $(OPT)==$(N)== still reachable: 0 bytes in 0 blocks $(OPT)==$(N)== suppressed: $(NC) bytes in $(N) blocks $(OPT)==$(N)== ==$(N)== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: $(N) from $(N)) vmem-1.8/src/test/vmmalloc_valgrind/out0.log.match000066400000000000000000000002031361505074100222560ustar00rootroot00000000000000vmmalloc_valgrind/TEST0: START: vmmalloc_valgrind ./vmmalloc_valgrind$(nW) 0 remove all allocations vmmalloc_valgrind/TEST0: DONE vmem-1.8/src/test/vmmalloc_valgrind/out1.log.match000066400000000000000000000001711361505074100222630ustar00rootroot00000000000000vmmalloc_valgrind/TEST1: START: vmmalloc_valgrind ./vmmalloc_valgrind$(nW) 1 memory leaks vmmalloc_valgrind/TEST1: DONE vmem-1.8/src/test/vmmalloc_valgrind/out2.log.match000066400000000000000000000001771361505074100222720ustar00rootroot00000000000000vmmalloc_valgrind/TEST2: START: vmmalloc_valgrind ./vmmalloc_valgrind$(nW) 2 heap block overrun vmmalloc_valgrind/TEST2: DONE vmem-1.8/src/test/vmmalloc_valgrind/vmmalloc_valgrind.c000066400000000000000000000051471361505074100234510ustar00rootroot00000000000000/* * Copyright 2014-2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmmalloc_valgrind.c -- unit test for libvmmalloc valgrind * * usage: vmmalloc_valgrind * * test-number can be a number from 0 to 2 */ #include "unittest.h" int main(int argc, char *argv[]) { int *ptr; int test_case = -1; START(argc, argv, "vmmalloc_valgrind"); if ((argc != 2) || (test_case = atoi(argv[1])) > 2) UT_FATAL("usage: %s ", argv[0]); switch (test_case) { case 0: { UT_OUT("remove all allocations"); ptr = malloc(sizeof(int)); if (ptr == NULL) UT_FATAL("!malloc"); free(ptr); break; } case 1: { UT_OUT("memory leaks"); ptr = malloc(sizeof(int)); if (ptr == NULL) UT_FATAL("!malloc"); /* prevent reporting leaked memory as still reachable */ ptr = NULL; break; } case 2: { UT_OUT("heap block overrun"); ptr = malloc(12 * sizeof(int)); if (ptr == NULL) UT_FATAL("!malloc"); /* heap block overrun */ ptr[12] = 7; free(ptr); break; } default: { UT_FATAL("!unknown test-number"); } } DONE(NULL); } vmem-1.8/src/test/vmmalloc_valloc/000077500000000000000000000000001361505074100172505ustar00rootroot00000000000000vmem-1.8/src/test/vmmalloc_valloc/.gitignore000066400000000000000000000000201361505074100212300ustar00rootroot00000000000000vmmalloc_valloc vmem-1.8/src/test/vmmalloc_valloc/Makefile000066400000000000000000000033151361505074100207120ustar00rootroot00000000000000# # Copyright 2014-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valloc/Makefile -- build vmmalloc_valloc unit test # TARGET = vmmalloc_valloc OBJS = vmmalloc_valloc.o include ../vmmalloc_dummy_funcs/Makefile.inc vmem-1.8/src/test/vmmalloc_valloc/TEST0000077500000000000000000000035541361505074100200440ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
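A minimal stand-alone sketch of the distinction the vmmalloc_valgrind test above relies on; it is illustration only, not a file of the repository, and uses plain malloc(3). Memcheck reports a block as "definitely lost" once its last pointer is gone (compare memcheck1.log.match), while a block that is still referenced when the process exits is merely "still reachable" and is shown only with --show-reachable=yes, which is why case 1 of the test overwrites its pointer with NULL.

#include <stdlib.h>

static int *keep;	/* a live pointer keeps its block "still reachable" */

int
main(void)
{
	/* definitely lost: the only pointer to the block is overwritten */
	int *lost = malloc(sizeof(int));
	lost = NULL;
	(void)lost;

	/* still reachable: a pointer to the block survives until exit */
	keep = malloc(sizeof(int));

	return 0;
}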
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valloc/TEST0 -- unit test for libvmmalloc valloc # . ../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_valloc$EXESUFFIX v check pass vmem-1.8/src/test/vmmalloc_valloc/TEST1000077500000000000000000000035551361505074100200460ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/vmmalloc_valloc/TEST1 -- unit test for libvmmalloc pvalloc # . 
../unittest/unittest.sh # there's no point in testing statically linked builds require_build_type debug nondebug require_no_asan setup export TEST_LD_PRELOAD=$VMMALLOC expect_normal_exit ./vmmalloc_valloc$EXESUFFIX p check pass vmem-1.8/src/test/vmmalloc_valloc/out0.log.match000066400000000000000000000001631361505074100217350ustar00rootroot00000000000000vmmalloc_valloc/TEST0: START: vmmalloc_valloc ./vmmalloc_valloc$(nW) v testing valloc vmmalloc_valloc/TEST0: DONE vmem-1.8/src/test/vmmalloc_valloc/out1.log.match000066400000000000000000000001641361505074100217370ustar00rootroot00000000000000vmmalloc_valloc/TEST1: START: vmmalloc_valloc ./vmmalloc_valloc$(nW) p testing pvalloc vmmalloc_valloc/TEST1: DONE vmem-1.8/src/test/vmmalloc_valloc/vmmalloc_valloc.c000066400000000000000000000055621361505074100225760ustar00rootroot00000000000000/* * Copyright 2014-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * vmmalloc_valloc.c -- unit test for libvmmalloc valloc/pvalloc * * usage: vmmalloc_valloc [v|p] */ #include #include #include "unittest.h" #include "vmmalloc_dummy_funcs.h" static void *(*Valloc)(size_t size); int main(int argc, char *argv[]) { const int test_value = 123456; size_t pagesize = (size_t)sysconf(_SC_PAGESIZE); size_t min_size = sizeof(int); size_t max_size = 4 * pagesize; size_t size; int *ptr; START(argc, argv, "vmmalloc_valloc"); if (argc != 2) UT_FATAL("usage: %s [v|p]", argv[0]); switch (argv[1][0]) { case 'v': UT_OUT("testing valloc"); Valloc = valloc; break; case 'p': UT_OUT("testing pvalloc"); Valloc = pvalloc; break; default: UT_FATAL("usage: %s [v|p]", argv[0]); } for (size = min_size; size < max_size; size *= 2) { ptr = Valloc(size); /* at least one allocation must succeed */ UT_ASSERT(ptr != NULL); if (ptr == NULL) break; /* ptr should be usable */ *ptr = test_value; UT_ASSERTeq(*ptr, test_value); /* check for correct address alignment */ UT_ASSERTeq((uintptr_t)(ptr) & (pagesize - 1), 0); if (Valloc == pvalloc) { /* check for correct allocation size */ size_t usable = malloc_usable_size(ptr); UT_ASSERTeq(usable, roundup(size, pagesize)); } free(ptr); } DONE(NULL); } vmem-1.8/src/test/win_common/000077500000000000000000000000001361505074100162435ustar00rootroot00000000000000vmem-1.8/src/test/win_common/README000066400000000000000000000003141361505074100171210ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/win_common/README. This test is Windows specific. This directory contains a unit test for miscellaneous Linux APIs that are implemented for Windows. vmem-1.8/src/test/win_common/TEST0.PS1000066400000000000000000000034401361505074100174300ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/test/win_common/TEST0 -- unit test for windows list macros # $DIR = "" . 
..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\win_common$Env:EXESUFFIX setunsetenv pass vmem-1.8/src/test/win_common/win_common.c000066400000000000000000000057341361505074100205650ustar00rootroot00000000000000/* * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * win_common.c -- test common POSIX or Linux API that were implemented * for Windows by our library. */ #include "unittest.h" /* * test_setunsetenv - test the setenv and unsetenv APIs */ static void test_setunsetenv(void) { os_unsetenv("TEST_SETUNSETENV_ONE"); /* set a new variable without overwriting - expect the new value */ UT_ASSERT(os_setenv("TEST_SETUNSETENV_ONE", "test_setunsetenv_one", 0) == 0); UT_ASSERT(strcmp(os_getenv("TEST_SETUNSETENV_ONE"), "test_setunsetenv_one") == 0); /* set an existing variable without overwriting - expect old value */ UT_ASSERT(os_setenv("TEST_SETUNSETENV_ONE", "test_setunsetenv_two", 0) == 0); UT_ASSERT(strcmp(os_getenv("TEST_SETUNSETENV_ONE"), "test_setunsetenv_one") == 0); /* set an existing variable with overwriting - expect the new value */ UT_ASSERT(os_setenv("TEST_SETUNSETENV_ONE", "test_setunsetenv_two", 1) == 0); UT_ASSERT(strcmp(os_getenv("TEST_SETUNSETENV_ONE"), "test_setunsetenv_two") == 0); /* unset our test value - expect it to be empty */ UT_ASSERT(os_unsetenv("TEST_SETUNSETENV_ONE") == 0); UT_ASSERT(os_getenv("TEST_SETUNSETENV_ONE") == NULL); } int main(int argc, char *argv[]) { START(argc, argv, "win_common - testing %s", (argc > 1) ? 
argv[1] : "setunsetenv"); if (argc == 1 || (stricmp(argv[1], "setunsetenv") == 0)) test_setunsetenv(); DONE(NULL); } vmem-1.8/src/test/win_common/win_common.vcxproj000066400000000000000000000064711361505074100220350ustar00rootroot00000000000000 Debug x64 Release x64 {6AE1B8BE-D46A-4E99-87A2-F160FB950DCA} Win32Proj win_common 10.0.16299.0 Application true v140 Application false v140 true Disabled MaxSpeed false {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/win_common/win_common.vcxproj.filters000066400000000000000000000014031361505074100234720ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {67dc9c5f-914c-4c00-84e8-9c4d09ac6ebc} ps1 Source Files Test Scripts vmem-1.8/src/test/win_lists/000077500000000000000000000000001361505074100161115ustar00rootroot00000000000000vmem-1.8/src/test/win_lists/README000066400000000000000000000004461361505074100167750ustar00rootroot00000000000000Persistent Memory Development Kit This is src/test/win_lists/README. This test is Windows specific. This directory contains a unit test for src\windows\include\sys\queue.h The file sys\queue.h has the Windows implementation of sys\queue.h and little more based on what's required by PMDK. vmem-1.8/src/test/win_lists/TEST0.PS1000066400000000000000000000033301361505074100172740ustar00rootroot00000000000000# # Copyright 2015-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/test/win_lists/TEST0 -- unit test for windows list macros # $DIR = "" . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\win_lists$Env:EXESUFFIX check pass vmem-1.8/src/test/win_lists/TEST1.PS1000066400000000000000000000034201361505074100172750ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # Copyright (c) 2016, Microsoft Corporation. All rights reserved. 
# # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/test/win_lists/TEST0 -- unit test for windows list macros # . ..\unittest\unittest.ps1 setup expect_normal_exit $ENV:EXE_DIR\win_lists$ENV:EXESUFFIX sortedq pass vmem-1.8/src/test/win_lists/out0.log.match000066400000000000000000000003501361505074100205740ustar00rootroot00000000000000win_lists/TEST0: START: win_lists - testing list $(nW)win_lists$(nW) Node value: 0 Node value: 9 Node value: 8 Node value: 7 Node value: 6 Node value: 5 Node value: 4 Node value: 3 Node value: 2 Node value: 1 win_lists/TEST0: DONE vmem-1.8/src/test/win_lists/win_lists.c000066400000000000000000000124201361505074100202670ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * win_lists.c -- test list routines used in windows implementation */ #include "unittest.h" #include "queue.h" typedef struct TEST_LIST_NODE { PMDK_LIST_ENTRY(TEST_LIST_NODE) ListEntry; int dummy; } *PTEST_LIST_NODE; PMDK_LIST_HEAD(TestList, TEST_LIST_NODE); static void dump_list(struct TestList *head) { PTEST_LIST_NODE pNode = NULL; pNode = (PTEST_LIST_NODE)PMDK_LIST_FIRST(head); while (pNode != NULL) { UT_OUT("Node value: %d", pNode->dummy); pNode = (PTEST_LIST_NODE)PMDK_LIST_NEXT(pNode, ListEntry); } } static int get_list_count(struct TestList *head) { PTEST_LIST_NODE pNode = NULL; int listCount = 0; pNode = (PTEST_LIST_NODE)PMDK_LIST_FIRST(head); while (pNode != NULL) { listCount++; pNode = (PTEST_LIST_NODE)PMDK_LIST_NEXT(pNode, ListEntry); } return listCount; } /* * test_list - Do some basic list manipulations and output to log for * script comparison. Only testing the macros we use. */ static void test_list(void) { PTEST_LIST_NODE pNode = NULL; struct TestList head = PMDK_LIST_HEAD_INITIALIZER(head); PMDK_LIST_INIT(&head); UT_ASSERT_rt(PMDK_LIST_EMPTY(&head)); pNode = MALLOC(sizeof(struct TEST_LIST_NODE)); pNode->dummy = 0; PMDK_LIST_INSERT_HEAD(&head, pNode, ListEntry); UT_ASSERTeq_rt(1, get_list_count(&head)); dump_list(&head); /* Remove one node */ PMDK_LIST_REMOVE(pNode, ListEntry); UT_ASSERTeq_rt(0, get_list_count(&head)); dump_list(&head); free(pNode); /* Add a bunch of nodes */ for (int i = 1; i < 10; i++) { pNode = MALLOC(sizeof(struct TEST_LIST_NODE)); pNode->dummy = i; PMDK_LIST_INSERT_HEAD(&head, pNode, ListEntry); } UT_ASSERTeq_rt(9, get_list_count(&head)); dump_list(&head); /* Remove all of them */ while (!PMDK_LIST_EMPTY(&head)) { pNode = (PTEST_LIST_NODE)PMDK_LIST_FIRST(&head); PMDK_LIST_REMOVE(pNode, ListEntry); free(pNode); } UT_ASSERTeq_rt(0, get_list_count(&head)); dump_list(&head); } typedef struct TEST_SORTEDQ_NODE { PMDK_SORTEDQ_ENTRY(TEST_SORTEDQ_NODE) queue_link; int dummy; } TEST_SORTEDQ_NODE, *PTEST_SORTEDQ_NODE; PMDK_SORTEDQ_HEAD(TEST_SORTEDQ, TEST_SORTEDQ_NODE); static int sortedq_node_comparer(TEST_SORTEDQ_NODE *a, TEST_SORTEDQ_NODE *b) { return a->dummy - b->dummy; } struct TEST_DATA_SORTEDQ { int count; int data[10]; }; /* * test_sortedq - Do some basic operations on SORTEDQ and make sure that the * queue is sorted for different input sequences. 
*/ void test_sortedq(void) { PTEST_SORTEDQ_NODE node = NULL; struct TEST_SORTEDQ head = PMDK_SORTEDQ_HEAD_INITIALIZER(head); struct TEST_DATA_SORTEDQ test_data[] = { {5, {5, 7, 9, 100, 101}}, {7, {1, 2, 3, 4, 5, 6, 7}}, {5, {100, 90, 80, 70, 40}}, {6, {10, 9, 8, 7, 6, 5}}, {5, {23, 13, 27, 4, 15}}, {5, {2, 2, 2, 2, 2}} }; PMDK_SORTEDQ_INIT(&head); UT_ASSERT_rt(PMDK_SORTEDQ_EMPTY(&head)); for (int i = 0; i < _countof(test_data); i++) { for (int j = 0; j < test_data[i].count; j++) { node = MALLOC(sizeof(TEST_SORTEDQ_NODE)); node->dummy = test_data[i].data[j]; PMDK_SORTEDQ_INSERT(&head, node, queue_link, TEST_SORTEDQ_NODE, sortedq_node_comparer); } int prev = MININT; int num_entries = 0; PMDK_SORTEDQ_FOREACH(node, &head, queue_link) { UT_ASSERT(prev <= node->dummy); num_entries++; } UT_ASSERT(num_entries == test_data[i].count); while (!PMDK_SORTEDQ_EMPTY(&head)) { node = PMDK_SORTEDQ_FIRST(&head); PMDK_SORTEDQ_REMOVE(&head, node, queue_link); FREE(node); } } } int main(int argc, char *argv[]) { START(argc, argv, "win_lists - testing %s", (argc > 1) ? argv[1] : "list"); if (argc == 1 || (stricmp(argv[1], "list") == 0)) test_list(); if (argc > 1 && (stricmp(argv[1], "sortedq") == 0)) test_sortedq(); DONE(NULL); } vmem-1.8/src/test/win_lists/win_lists.vcxproj000066400000000000000000000065651361505074100215550ustar00rootroot00000000000000 Debug x64 Release x64 {492baa3d-0d5d-478e-9765-500463ae69aa} {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {1F2E1C51-2B14-4047-BE6D-52E00FC3C780} Win32Proj win_lists 10.0.16299.0 win_lists Application true v140 Application false v140 vmem-1.8/src/test/win_lists/win_lists.vcxproj.filters000066400000000000000000000021261361505074100232110ustar00rootroot00000000000000 {8d8f2f00-0fff-43fb-a7ca-7ad92eac11a5} {d018cf38-4041-4e05-9eb1-7a0a79a00b5a} match {2f7a5125-095e-40ce-bcfa-e396fb8d62db} ps1 Source Files Match Files Match Files Test Scripts Test Scripts vmem-1.8/src/test/win_mmap_dtor/000077500000000000000000000000001361505074100167355ustar00rootroot00000000000000vmem-1.8/src/test/win_mmap_dtor/TEST0.PS1000066400000000000000000000034411361505074100201230ustar00rootroot00000000000000# # Copyright 2018-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
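The PMDK_LIST_* macros exercised by test_list() above mirror the classic BSD <sys/queue.h> interface (the PMDK_SORTEDQ_* family is the "little more" that the win_lists README refers to). For readers unfamiliar with that interface, here is a minimal sketch using the standard LIST macros from <sys/queue.h>; it is illustration only and not a file of the repository.

#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct node {
	LIST_ENTRY(node) link;	/* embedded next/prev links */
	int value;
};

LIST_HEAD(node_list, node);

int
main(void)
{
	struct node_list head = LIST_HEAD_INITIALIZER(head);
	struct node *n;

	LIST_INIT(&head);

	for (int i = 0; i < 3; i++) {
		n = malloc(sizeof(*n));
		if (n == NULL)
			return 1;
		n->value = i;
		LIST_INSERT_HEAD(&head, n, link);
	}

	LIST_FOREACH(n, &head, link)
		printf("%d\n", n->value);	/* prints 2 1 0 */

	while (!LIST_EMPTY(&head)) {
		n = LIST_FIRST(&head);
		LIST_REMOVE(n, link);
		free(n);
	}

	return 0;
}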
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # src/test/win_mmap_dtor/TEST0 -- unit test for win_mmap destructor # . ..\unittest\unittest.ps1 setup create_holey_file 2M $DIR\testfile1 expect_normal_exit $Env:EXE_DIR\win_mmap_dtor$Env:EXESUFFIX $DIR\testfile1 check_files $DIR\testfile1 pass vmem-1.8/src/test/win_mmap_dtor/win_mmap_dtor.c000066400000000000000000000063351361505074100217470ustar00rootroot00000000000000/* * Copyright 2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * win_mmap_dtor.c -- unit test for windows mmap destructor */ #include "unittest.h" #include "os.h" #include "win_mmap.h" #define KILOBYTE (1 << 10) #define MEGABYTE (1 << 20) unsigned long long Mmap_align; int main(int argc, char *argv[]) { START(argc, argv, "win_mmap_dtor"); if (argc != 2) UT_FATAL("usage: %s path", argv[0]); SYSTEM_INFO si; GetSystemInfo(&si); /* set pagesize for mmap */ Mmap_align = si.dwAllocationGranularity; const char *path = argv[1]; int fd = os_open(path, O_RDWR); UT_ASSERTne(fd, -1); /* * Input file has size equal to 2MB, but the mapping is 3MB. * In this case mmap should map whole file and reserve 1MB * of virtual address space for remaining part of the mapping. 
*/ void *addr = mmap(NULL, 3 * MEGABYTE, PROT_READ, MAP_SHARED, fd, 0); UT_ASSERTne(addr, MAP_FAILED); MEMORY_BASIC_INFORMATION basic_info; SIZE_T bytes_returned; bytes_returned = VirtualQuery(addr, &basic_info, sizeof(basic_info)); UT_ASSERTeq(bytes_returned, sizeof(basic_info)); UT_ASSERTeq(basic_info.RegionSize, 2 * MEGABYTE); UT_ASSERTeq(basic_info.State, MEM_COMMIT); bytes_returned = VirtualQuery((char *)addr + 2 * MEGABYTE, &basic_info, sizeof(basic_info)); UT_ASSERTeq(bytes_returned, sizeof(basic_info)); UT_ASSERTeq(basic_info.RegionSize, MEGABYTE); UT_ASSERTeq(basic_info.State, MEM_RESERVE); win_mmap_fini(); bytes_returned = VirtualQuery((char *)addr + 2 * MEGABYTE, &basic_info, sizeof(basic_info)); UT_ASSERTeq(bytes_returned, sizeof(basic_info)); /* * region size can be bigger than 1MB because there was probably * free space after this mapping */ UT_ASSERTeq(basic_info.State, MEM_FREE); DONE(NULL); } vmem-1.8/src/test/win_mmap_dtor/win_mmap_dtor.filters000066400000000000000000000017101361505074100231650ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {6043ccf6-d070-43a3-867f-2aa9612ac158} ps1 {9e75d124-4f98-4b16-ad1d-f62881ec9f30} match Source Files Test Scripts vmem-1.8/src/test/win_mmap_dtor/win_mmap_dtor.vcxproj000066400000000000000000000064121361505074100232140ustar00rootroot00000000000000 Debug x64 Release x64 {ce3f2dfb-8470-4802-ad37-21caf6cb2681} {F03DABEE-A03E-4437-BFD3-D012836F2D94} Win32Proj win_mmap_dtor 10.0.16299.0 Application true v140 Application false v140 NotUsing NotUsing vmem-1.8/src/test/win_mmap_dtor/win_mmap_dtor.vcxproj.filters000066400000000000000000000011201361505074100246520ustar00rootroot00000000000000 {0da09383-3374-4523-b95d-d943028e8202} Source Files Source Files vmem-1.8/src/test/win_signal/000077500000000000000000000000001361505074100162305ustar00rootroot00000000000000vmem-1.8/src/test/win_signal/TEST0.PS1000066400000000000000000000033341361505074100174170ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # # src/test/win_signal/TEST0 -- unit test for windows signal implementation # . ..\unittest\unittest.ps1 setup expect_normal_exit $Env:EXE_DIR\win_signal$Env:EXESUFFIX check pass vmem-1.8/src/test/win_signal/out0.log.match000066400000000000000000000026151361505074100207210ustar00rootroot00000000000000win_signal$(nW)TEST0: START: win_signal $(nW)win_signal.exe 0; Unknown signal 0 1; Hangup 2; Interrupt 3; Quit 4; Illegal instruction 5; Trace/breakpoint trap 6; Aborted 7; Bus error 8; Floating point exception 9; Killed 10; User defined signal 1 11; Segmentation fault 12; User defined signal 2 13; Broken pipe 14; Alarm clock 15; Terminated 16; Stack fault 17; Child exited 18; Continued 19; Stopped (signal) 20; Stopped 21; Stopped (tty input) 22; Stopped (tty output) 23; Urgent I/O condition 24; CPU time limit exceeded 25; File size limit exceeded 26; Virtual timer expired 27; Profiling timer expired 28; Window changed 29; I/O possible 30; Power failure 31; Bad system call 32; Unknown signal 32 33; Unknown signal 34; Real-time signal 35; Real-time signal 36; Real-time signal 37; Real-time signal 38; Real-time signal 39; Real-time signal 40; Real-time signal 41; Real-time signal 42; Real-time signal 43; Real-time signal 44; Real-time signal 45; Real-time signal 46; Real-time signal 47; Real-time signal 48; Real-time signal 49; Real-time signal 50; Real-time signal 51; Real-time signal 52; Real-time signal 53; Real-time signal 54; Real-time signal 55; Real-time signal 56; Real-time signal 57; Real-time signal 58; Real-time signal 59; Real-time signal 60; Real-time signal 61; Real-time signal 62; Real-time signal 63; Real-time signal 64; Real-time signal 65; Unknown signal win_signal$(nW)TEST0: DONE vmem-1.8/src/test/win_signal/win_signal.c000066400000000000000000000036771361505074100205430ustar00rootroot00000000000000/* * Copyright 2014-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * win_signal.c -- test signal related routines */ #include "unittest.h" extern int sys_siglist_size; int main(int argc, char *argv[]) { int sig; START(argc, argv, "win_signal"); for (sig = 0; sig < sys_siglist_size; sig++) { UT_OUT("%d; %s", sig, os_strsignal(sig)); } for (sig = 33; sig < 66; sig++) { UT_OUT("%d; %s", sig, os_strsignal(sig)); } DONE(NULL); } vmem-1.8/src/test/win_signal/win_signal.vcxproj000066400000000000000000000064361361505074100220100ustar00rootroot00000000000000 Debug x64 Release x64 {F13108C4-4C86-4D56-A317-A4E5892A8AF7} Win32Proj win_signal 10.0.16299.0 Application true v140 Application false v140 true Disabled MaxSpeed {ce3f2dfb-8470-4802-ad37-21caf6cb2681} vmem-1.8/src/test/win_signal/win_signal.vcxproj.filters000066400000000000000000000016661361505074100234570ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {fd43f65a-4a91-4191-825a-d7dc98fe5f33} {a517de2c-1baf-4e3e-8774-a652dd45ef4a} Test scripts Match Files Source Files vmem-1.8/src/tools/000077500000000000000000000000001361505074100142575ustar00rootroot00000000000000vmem-1.8/src/tools/Makefile.inc000066400000000000000000000142631361505074100164750ustar00rootroot00000000000000# Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # src/tools/Makefile.inc -- Makefile include for all tools # TOP := $(dir $(lastword $(MAKEFILE_LIST)))../.. include $(TOP)/src/common.inc INSTALL_TARGET ?= y INCS += -I. 
INCS += -I$(TOP)/src/include INCS += $(OS_INCS) CFLAGS += -std=gnu99 CFLAGS += -Wall CFLAGS += -Werror CFLAGS += -Wmissing-prototypes CFLAGS += -Wpointer-arith CFLAGS += -Wsign-conversion CFLAGS += -Wsign-compare ifeq ($(WCONVERSION_AVAILABLE), y) CFLAGS += -Wconversion endif CFLAGS += -fno-common CFLAGS += -DSRCVERSION='"$(SRCVERSION)"' ifeq ($(IS_ICC), n) CFLAGS += -Wunused-macros CFLAGS += -Wmissing-field-initializers endif ifeq ($(WUNREACHABLE_CODE_RETURN_AVAILABLE), y) CFLAGS += -Wunreachable-code-return endif ifeq ($(WMISSING_VARIABLE_DECLARATIONS_AVAILABLE), y) CFLAGS += -Wmissing-variable-declarations endif ifeq ($(WFLOAT_EQUAL_AVAILABLE), y) CFLAGS += -Wfloat-equal endif ifeq ($(WSWITCH_DEFAULT_AVAILABLE), y) CFLAGS += -Wswitch-default endif ifeq ($(WCAST_FUNCTION_TYPE_AVAILABLE), y) CFLAGS += -Wcast-function-type endif ifeq ($(DEBUG),1) CFLAGS += -ggdb $(EXTRA_CFLAGS_DEBUG) else CFLAGS += -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 $(EXTRA_CFLAGS_RELEASE) endif ifeq ($(VALGRIND),0) CFLAGS += -DVALGRIND_ENABLED=0 CXXFLAGS += -DVALGRIND_ENABLED=0 endif ifeq ($(FAULT_INJECTION),1) CFLAGS += -DFAULT_INJECTION=1 CXXFLAGS += -DFAULT_INJECTION=1 endif ifneq ($(SANITIZE),) CFLAGS += -fsanitize=$(SANITIZE) LDFLAGS += -fsanitize=$(SANITIZE) endif LDFLAGS += $(OS_LIBS) CFLAGS += $(EXTRA_CFLAGS) LDFLAGS += -Wl,-z,relro -Wl,--warn-common -Wl,--fatal-warnings $(EXTRA_LDFLAGS) ifeq ($(DEBUG),1) LDFLAGS += -L$(TOP)/src/debug else LDFLAGS += -L$(TOP)/src/nondebug endif TARGET_DIR=$(DESTDIR)$(bindir) ifneq ($(DEBUG),1) TARGET_STATIC_NONDEBUG=$(TARGET).static-nondebug endif TARGET_STATIC_DEBUG=$(TARGET).static-debug LIBSDIR=$(TOP)/src LIBSDIR_DEBUG=$(LIBSDIR)/debug LIBSDIR_NONDEBUG=$(LIBSDIR)/nondebug ifneq ($(DEBUG),) LIBSDIR_PRIV=$(LIBSDIR_DEBUG) else LIBSDIR_PRIV=$(LIBSDIR_NONDEBUG) endif LIBS += $(LIBUUID) ifeq ($(LIBRT_NEEDED), y) LIBS += -lrt endif ifeq ($(TOOLS_COMMON), y) LIBPMEMCOMMON=y endif ifeq ($(LIBPMEMCOMMON), y) DYNAMIC_LIBS += -lpmemcommon STATIC_DEBUG_LIBS += $(LIBSDIR_DEBUG)/libpmemcommon.a STATIC_NONDEBUG_LIBS += $(LIBSDIR_NONDEBUG)/libpmemcommon.a CFLAGS += -I$(TOP)/src/common endif ifeq ($(LIBVMEM),y) DYNAMIC_LIBS += -lvmem STATIC_DEBUG_LIBS += $(LIBSDIR_DEBUG)/libvmem.a STATIC_NONDEBUG_LIBS += $(LIBSDIR_NONDEBUG)/libvmem.a endif # If any of these libraries is required, we need to link libpthread ifneq ($(LIBPMEMCOMMON)$(LIBVMEM),) LIBS += -pthread endif # If any of these libraries is required, we need to link libdl ifneq ($(LIBPMEMCOMMON),) LIBS += $(LIBDL) endif ifeq ($(TOOLS_COMMON), y) OBJS += common.o output.o CFLAGS += -I$(TOP)/src/common CFLAGS += $(UNIX98_CFLAGS) endif ifneq ($(HEADERS),) ifneq ($(filter 1 2, $(CSTYLEON)),) TMP_HEADERS := $(addsuffix tmp, $(HEADERS)) endif endif ifeq ($(COVERAGE),1) CFLAGS += $(GCOV_CFLAGS) LDFLAGS += $(GCOV_LDFLAGS) LIBS += $(GCOV_LIBS) endif MAKEFILE_DEPS=$(TOP)/src/tools/Makefile.inc $(TOP)/src/common.inc ifneq ($(TARGET),) all: $(TARGET) $(TARGET_STATIC_NONDEBUG) $(TARGET_STATIC_DEBUG) else all: endif SYNC_FILE=.synced clean: $(RM) $(OBJS) $(CLEAN_FILES) $(SYNC_FILE) $(TMP_HEADERS) clobber: clean ifneq ($(TARGET),) $(RM) $(TARGET) $(RM) $(TARGET_STATIC_NONDEBUG) $(RM) $(TARGET_STATIC_DEBUG) $(RM) -r .deps endif install: all ifeq ($(INSTALL_TARGET),y) ifneq ($(TARGET),) install -d $(TARGET_DIR) install -p -m 0755 $(TARGET) $(TARGET_DIR) endif endif uninstall: ifeq ($(INSTALL_TARGET),y) ifneq ($(TARGET),) $(RM) $(TARGET_DIR)/$(TARGET) endif endif %.gz: % gzip -c ./$< > $@ %.txt: % man ./$< > $@ $(TARGET) $(TARGET_STATIC_DEBUG) 
$(TARGET_STATIC_NONDEBUG): $(TMP_HEADERS) $(OBJS) $(MAKEFILE_DEPS) $(TARGET_STATIC_DEBUG): $(STATIC_DEBUG_LIBS) $(CC) $(LDFLAGS) -o $@ $(OBJS) $(STATIC_DEBUG_LIBS) $(LIBS) $(TARGET_STATIC_NONDEBUG): $(STATIC_NONDEBUG_LIBS) $(CC) $(LDFLAGS) -o $@ $(OBJS) $(STATIC_NONDEBUG_LIBS) $(LIBS) $(TARGET): $(CC) $(LDFLAGS) -o $@ $(OBJS) $(DYNAMIC_LIBS) $(LIBS) objdir=. %.o: %.c $(MAKEFILE_DEPS) $(call check-cstyle, $<) @mkdir -p .deps $(CC) -MD $(CFLAGS) $(INCS) -c -o $@ $(call coverage-path, $<) $(call check-os, $@, $<) $(create-deps) %.htmp: %.h $(call check-cstyle, $<, $@) test check pcheck: all TESTCONFIG=$(TOP)/src/test/testconfig.sh DIR_SYNC=$(TOP)/src/test/.sync-dir $(TESTCONFIG): sync-remotes: all $(SYNC_FILE) $(SYNC_FILE): $(TARGET) $(TESTCONFIG) ifeq ($(SCP_TO_REMOTE_NODES), y) cp $(TARGET) $(DIR_SYNC) @touch $(SYNC_FILE) endif sparse: $(if $(TARGET), $(sparse-c)) .PHONY: all clean clobber install uninstall test check pcheck -include .deps/*.P vmem-1.8/src/windows/000077500000000000000000000000001361505074100146115ustar00rootroot00000000000000vmem-1.8/src/windows/README000066400000000000000000000014321361505074100154710ustar00rootroot00000000000000Persistent Memory Development Kit This is src/windows/README. This directory contains the Windows-specific source for the PMDK. The subdirectory "include" contains header files that have no equivalents on Windows OS, when building PMDK using VC++ compiler. Some of those files are empty, which is a cheap trick to avoid preprocessor errors when including non-existing files. This way we don't need a lot of preprocessor conditionals in all the source code files. The "platform.h" file contains definitions of all the basic types and macros that are not available under VC++. When building PMDK with Visual Studio, "platform.h" file is included to each source file using "/FI" (forced include) option. The subdirectory "getopt" contains a windows implementation of getopt and getopt_long vmem-1.8/src/windows/getopt/000077500000000000000000000000001361505074100161135ustar00rootroot00000000000000vmem-1.8/src/windows/getopt/.cstyleignore000066400000000000000000000000221361505074100206150ustar00rootroot00000000000000getopt.c getopt.h vmem-1.8/src/windows/getopt/LICENSE.txt000066400000000000000000000027131361505074100177410ustar00rootroot00000000000000Copyright (c) 2012, Kim Gräsman All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Kim Gräsman nor the names of contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL KIM GRÄSMAN BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. vmem-1.8/src/windows/getopt/README000066400000000000000000000004021361505074100167670ustar00rootroot00000000000000Persistent Memory Development Kit This is src/windows/getopt/README. This is directory contains windows getopt implementation downloaded from: https://github.com/kimgr/getopt_port with changes applied to compile it with "compile as c code(/TC)" option. vmem-1.8/src/windows/getopt/getopt.c000066400000000000000000000232111361505074100175600ustar00rootroot00000000000000/* * *Copyright (c) 2012, Kim Gräsman * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of Kim Gräsman nor the * names of contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL KIM GRÄSMAN BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "getopt.h" #include #include #include char* optarg; int optopt; /* The variable optind [...] shall be initialized to 1 by the system. */ int optind = 1; int opterr; static char* optcursor = NULL; static char *first = NULL; /* rotates argv array */ static void rotate(char **argv, int argc) { if (argc <= 1) return; char *tmp = argv[0]; memmove(argv, argv + 1, (argc - 1) * sizeof(char *)); argv[argc - 1] = tmp; } /* Implemented based on [1] and [2] for optional arguments. optopt is handled FreeBSD-style, per [3]. Other GNU and FreeBSD extensions are purely accidental. [1] http://pubs.opengroup.org/onlinepubs/000095399/functions/getopt.html [2] http://www.kernel.org/doc/man-pages/online/pages/man3/getopt.3.html [3] http://www.freebsd.org/cgi/man.cgi?query=getopt&sektion=3&manpath=FreeBSD+9.0-RELEASE */ int getopt(int argc, char* const argv[], const char* optstring) { int optchar = -1; const char* optdecl = NULL; optarg = NULL; opterr = 0; optopt = 0; /* Unspecified, but we need it to avoid overrunning the argv bounds. 
*/ if (optind >= argc) goto no_more_optchars; /* If, when getopt() is called argv[optind] is a null pointer, getopt() shall return -1 without changing optind. */ if (argv[optind] == NULL) goto no_more_optchars; /* If, when getopt() is called *argv[optind] is not the character '-', permute argv to move non options to the end */ if (*argv[optind] != '-') { if (argc - optind <= 1) goto no_more_optchars; if (!first) first = argv[optind]; do { rotate((char **)(argv + optind), argc - optind); } while (*argv[optind] != '-' && argv[optind] != first); if (argv[optind] == first) goto no_more_optchars; } /* If, when getopt() is called argv[optind] points to the string "-", getopt() shall return -1 without changing optind. */ if (strcmp(argv[optind], "-") == 0) goto no_more_optchars; /* If, when getopt() is called argv[optind] points to the string "--", getopt() shall return -1 after incrementing optind. */ if (strcmp(argv[optind], "--") == 0) { ++optind; if (first) { do { rotate((char **)(argv + optind), argc - optind); } while (argv[optind] != first); } goto no_more_optchars; } if (optcursor == NULL || *optcursor == '\0') optcursor = argv[optind] + 1; optchar = *optcursor; /* FreeBSD: The variable optopt saves the last known option character returned by getopt(). */ optopt = optchar; /* The getopt() function shall return the next option character (if one is found) from argv that matches a character in optstring, if there is one that matches. */ optdecl = strchr(optstring, optchar); if (optdecl) { /* [I]f a character is followed by a colon, the option takes an argument. */ if (optdecl[1] == ':') { optarg = ++optcursor; if (*optarg == '\0') { /* GNU extension: Two colons mean an option takes an optional arg; if there is text in the current argv-element (i.e., in the same word as the option name itself, for example, "-oarg"), then it is returned in optarg, otherwise optarg is set to zero. */ if (optdecl[2] != ':') { /* If the option was the last character in the string pointed to by an element of argv, then optarg shall contain the next element of argv, and optind shall be incremented by 2. If the resulting value of optind is greater than argc, this indicates a missing option-argument, and getopt() shall return an error indication. Otherwise, optarg shall point to the string following the option character in that element of argv, and optind shall be incremented by 1. */ if (++optind < argc) { optarg = argv[optind]; } else { /* If it detects a missing option-argument, it shall return the colon character ( ':' ) if the first character of optstring was a colon, or a question-mark character ( '?' ) otherwise. */ optarg = NULL; fprintf(stderr, "%s: option requires an argument -- '%c'\n", argv[0], optchar); optchar = (optstring[0] == ':') ? ':' : '?'; } } else { optarg = NULL; } } optcursor = NULL; } } else { fprintf(stderr,"%s: invalid option -- '%c'\n", argv[0], optchar); /* If getopt() encounters an option character that is not contained in optstring, it shall return the question-mark ( '?' ) character. */ optchar = '?'; } if (optcursor == NULL || *++optcursor == '\0') ++optind; return optchar; no_more_optchars: optcursor = NULL; first = NULL; return -1; } /* Implementation based on [1]. 
[1] http://www.kernel.org/doc/man-pages/online/pages/man3/getopt.3.html */ int getopt_long(int argc, char* const argv[], const char* optstring, const struct option* longopts, int* longindex) { const struct option* o = longopts; const struct option* match = NULL; int num_matches = 0; size_t argument_name_length = 0; const char* current_argument = NULL; int retval = -1; optarg = NULL; optopt = 0; if (optind >= argc) return -1; /* If, when getopt() is called argv[optind] is a null pointer, getopt_long() shall return -1 without changing optind. */ if (argv[optind] == NULL) goto no_more_optchars; /* If, when getopt_long() is called *argv[optind] is not the character '-', permute argv to move non options to the end */ if (*argv[optind] != '-') { if (argc - optind <= 1) goto no_more_optchars; if (!first) first = argv[optind]; do { rotate((char **)(argv + optind), argc - optind); } while (*argv[optind] != '-' && argv[optind] != first); if (argv[optind] == first) goto no_more_optchars; } if (strlen(argv[optind]) < 3 || strncmp(argv[optind], "--", 2) != 0) return getopt(argc, argv, optstring); /* It's an option; starts with -- and is longer than two chars. */ current_argument = argv[optind] + 2; argument_name_length = strcspn(current_argument, "="); for (; o->name; ++o) { if (strncmp(o->name, current_argument, argument_name_length) == 0) { match = o; ++num_matches; if (strlen(o->name) == argument_name_length) { /* found match is exactly the one which we are looking for */ num_matches = 1; break; } } } if (num_matches == 1) { /* If longindex is not NULL, it points to a variable which is set to the index of the long option relative to longopts. */ if (longindex) *longindex = (int)(match - longopts); /* If flag is NULL, then getopt_long() shall return val. Otherwise, getopt_long() returns 0, and flag shall point to a variable which shall be set to val if the option is found, but left unchanged if the option is not found. */ if (match->flag) *(match->flag) = match->val; retval = match->flag ? 0 : match->val; if (match->has_arg != no_argument) { optarg = strchr(argv[optind], '='); if (optarg != NULL) ++optarg; if (match->has_arg == required_argument) { /* Only scan the next argv for required arguments. Behavior is not specified, but has been observed with Ubuntu and Mac OSX. */ if (optarg == NULL && ++optind < argc) { optarg = argv[optind]; } if (optarg == NULL) retval = ':'; } } else if (strchr(argv[optind], '=')) { /* An argument was provided to a non-argument option. I haven't seen this specified explicitly, but both GNU and BSD-based implementations show this behavior. */ retval = '?'; } } else { /* Unknown option or ambiguous match. */ retval = '?'; if (num_matches == 0) { fprintf(stderr, "%s: unrecognized option -- '%s'\n", argv[0], argv[optind]); } else { fprintf(stderr, "%s: option '%s' is ambiguous\n", argv[0], argv[optind]); } } ++optind; return retval; no_more_optchars: first = NULL; return -1; } vmem-1.8/src/windows/getopt/getopt.h000066400000000000000000000041341361505074100175700ustar00rootroot00000000000000/* * *Copyright (c) 2012, Kim Gräsman * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
* * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of Kim Gräsman nor the * names of contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL KIM GRÄSMAN BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef INCLUDED_GETOPT_PORT_H #define INCLUDED_GETOPT_PORT_H #if defined(__cplusplus) extern "C" { #endif #define no_argument 0 #define required_argument 1 #define optional_argument 2 extern char* optarg; extern int optind, opterr, optopt; struct option { const char* name; int has_arg; int* flag; int val; }; int getopt(int argc, char* const argv[], const char* optstring); int getopt_long(int argc, char* const argv[], const char* optstring, const struct option* longopts, int* longindex); #if defined(__cplusplus) } #endif #endif // INCLUDED_GETOPT_PORT_H vmem-1.8/src/windows/getopt/getopt.vcxproj000066400000000000000000000076231361505074100210420ustar00rootroot00000000000000 Debug x64 Release x64 {9186EAC4-2F34-4F17-B940-6585D7869BCD} getopt 10.0.16299.0 StaticLibrary true v140 NotSet StaticLibrary false v140 NotSet Level3 Disabled CompileAsC true NTDDI_VERSION=NTDDI_WIN10_RS1;_DEBUG;_CRT_SECURE_NO_WARNINGS;_MBCS;%(PreprocessorDefinitions) 4819 true Level3 MaxSpeed true true CompileAsC true NTDDI_VERSION=NTDDI_WIN10_RS1;NDEBUG;_CRT_SECURE_NO_WARNINGS;_MBCS;%(PreprocessorDefinitions) true true true vmem-1.8/src/windows/getopt/getopt.vcxproj.filters000066400000000000000000000014401361505074100225000ustar00rootroot00000000000000 {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hh;hpp;hxx;hm;inl;inc;xsd Source Files Header Files vmem-1.8/src/windows/include/000077500000000000000000000000001361505074100162345ustar00rootroot00000000000000vmem-1.8/src/windows/include/.cstyleignore000066400000000000000000000000151361505074100207400ustar00rootroot00000000000000srcversion.h vmem-1.8/src/windows/include/dirent.h000066400000000000000000000031321361505074100176710ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
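A minimal usage sketch of the interface declared in getopt.h above (struct option, getopt(), getopt_long(), optarg, optind); the program and the option names ("verbose", "output") are hypothetical and not part of the vmem sources.

/*
 * Illustrative sketch only -- exercises the getopt()/getopt_long()
 * declarations from the getopt port above; option names are made up.
 */
#include <stdio.h>
#include "getopt.h"

int
main(int argc, char *argv[])
{
	static const struct option longopts[] = {
		{"verbose", no_argument, NULL, 'v'},
		{"output", required_argument, NULL, 'o'},
		{NULL, 0, NULL, 0}
	};

	int opt;
	while ((opt = getopt_long(argc, argv, "vo:", longopts, NULL)) != -1) {
		switch (opt) {
		case 'v':
			printf("verbose on\n");
			break;
		case 'o':
			printf("output file: %s\n", optarg);
			break;
		default: /* '?' or ':' -- unknown option or missing argument */
			fprintf(stderr, "usage: %s [-v] [-o file] args...\n",
			    argv[0]);
			return 1;
		}
	}

	/* after the loop, non-option arguments start at argv[optind] */
	for (int i = optind; i < argc; i++)
		printf("arg: %s\n", argv[i]);
	return 0;
}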
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake dirent.h */ vmem-1.8/src/windows/include/endian.h000066400000000000000000000042431361505074100176460ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * endian.h -- convert values between host and big-/little-endian byte order */ #ifndef ENDIAN_H #define ENDIAN_H 1 /* * XXX: On Windows we can assume little-endian architecture */ #include #define htole16(a) (a) #define htole32(a) (a) #define htole64(a) (a) #define le16toh(a) (a) #define le32toh(a) (a) #define le64toh(a) (a) #define htobe16(x) _byteswap_ushort(x) #define htobe32(x) _byteswap_ulong(x) #define htobe64(x) _byteswap_uint64(x) #define be16toh(x) _byteswap_ushort(x) #define be32toh(x) _byteswap_ulong(x) #define be64toh(x) _byteswap_uint64(x) #endif /* ENDIAN_H */ vmem-1.8/src/windows/include/err.h000066400000000000000000000042161361505074100172000ustar00rootroot00000000000000/* * Copyright 2016-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * err.h - error and warning messages */ #ifndef ERR_H #define ERR_H 1 #include #include #include /* * err - windows implementation of unix err function */ __declspec(noreturn) static void err(int eval, const char *fmt, ...) { va_list vl; va_start(vl, fmt); vfprintf(stderr, fmt, vl); va_end(vl); exit(eval); } /* * warn - windows implementation of unix warn function */ static void warn(const char *fmt, ...) { va_list vl; va_start(vl, fmt); fprintf(stderr, "Warning: "); vfprintf(stderr, fmt, vl); va_end(vl); } #endif /* ERR_H */ vmem-1.8/src/windows/include/features.h000066400000000000000000000031271361505074100202260ustar00rootroot00000000000000/* * Copyright 2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
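A short sketch of how the byte-order macros from the endian.h shim above are meant to be used; the values are arbitrary. On this port htole32() is a no-op (little-endian is assumed, per the XXX note in that header), while htobe32() maps to _byteswap_ulong().

/*
 * Illustrative sketch only -- round-trips a value through the
 * host/big-endian conversion macros defined above.
 */
#include <stdint.h>
#include <stdio.h>
#include <endian.h>

int
main(void)
{
	uint32_t host = 0x12345678;
	uint32_t be = htobe32(host);	/* byte-swapped on little-endian */
	uint32_t le = htole32(host);	/* unchanged -- Windows is assumed LE */

	printf("host %#x big-endian form %#x round-trip %#x\n",
	    host, be, be32toh(be));
	printf("little-endian form %#x\n", le);
	return 0;
}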
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake features.h */ vmem-1.8/src/windows/include/libgen.h000066400000000000000000000031251361505074100176460ustar00rootroot00000000000000/* * Copyright 2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake libgen.h */ vmem-1.8/src/windows/include/linux/000077500000000000000000000000001361505074100173735ustar00rootroot00000000000000vmem-1.8/src/windows/include/linux/limits.h000066400000000000000000000037021361505074100210470ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. 
* * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * linux/limits.h -- fake header file */ /* * XXX - The only purpose of this empty file is to avoid preprocessor * errors when including a Linux-specific header file that has no equivalent * on Windows. With this cheap trick, we don't need a lot of preprocessor * conditionals in all the source code files. * * In the future, this will be addressed in some other way. */ vmem-1.8/src/windows/include/platform.h000066400000000000000000000124151361505074100202340ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * platform.h -- dirty hacks to compile Linux code on Windows using VC++ * * This is included to each source file using "/FI" (forced include) option. 
* * XXX - it is a subject for refactoring */ #ifndef PLATFORM_H #define PLATFORM_H 1 #pragma warning(disable : 4996) #pragma warning(disable : 4200) /* allow flexible array member */ #pragma warning(disable : 4819) /* non unicode characteres */ #ifdef __cplusplus extern "C" { #endif /* Prevent PMDK compilation for 32-bit platforms */ #if defined(_WIN32) && !defined(_WIN64) #error "32-bit builds of PMDK are not supported!" #endif #define _CRT_RAND_S /* rand_s() */ #include #include #include #include #include #include #include #include #include #include #include /* use uuid_t definition from util.h */ #ifdef uuid_t #undef uuid_t #endif /* a few trivial substitutions */ #define PATH_MAX MAX_PATH #define __thread __declspec(thread) #define __func__ __FUNCTION__ #ifdef _DEBUG #define DEBUG #endif /* * The inline keyword is available only in VC++. * https://msdn.microsoft.com/en-us/library/bw1hbe6y.aspx */ #ifndef __cplusplus #define inline __inline #endif /* XXX - no equivalents in VC++ */ #define __attribute__(a) #define __builtin_constant_p(cnd) 0 /* * missing definitions */ /* errno.h */ #define ELIBACC 79 /* cannot access a needed shared library */ /* sys/stat.h */ #define S_IRUSR S_IREAD #define S_IWUSR S_IWRITE #define S_IRGRP S_IRUSR #define S_IWGRP S_IWUSR #define O_SYNC 0 typedef int mode_t; #define fchmod(fd, mode) 0 /* XXX - dummy */ #define setlinebuf(fp) setvbuf(fp, NULL, _IOLBF, BUFSIZ); /* unistd.h */ typedef long long os_off_t; typedef long long ssize_t; int setenv(const char *name, const char *value, int overwrite); int unsetenv(const char *name); /* fcntl.h */ int posix_fallocate(int fd, os_off_t offset, os_off_t len); /* string.h */ #define strtok_r strtok_s /* time.h */ #define CLOCK_MONOTONIC 1 #define CLOCK_REALTIME 2 int clock_gettime(int id, struct timespec *ts); /* signal.h */ typedef unsigned long long sigset_t; /* one bit for each signal */ C_ASSERT(NSIG <= sizeof(sigset_t) * 8); struct sigaction { void (*sa_handler) (int signum); /* void (*sa_sigaction)(int, siginfo_t *, void *); */ sigset_t sa_mask; int sa_flags; void (*sa_restorer) (void); }; __inline int sigemptyset(sigset_t *set) { *set = 0; return 0; } __inline int sigfillset(sigset_t *set) { *set = ~0; return 0; } __inline int sigaddset(sigset_t *set, int signum) { if (signum <= 0 || signum >= NSIG) { errno = EINVAL; return -1; } *set |= (1ULL << (signum - 1)); return 0; } __inline int sigdelset(sigset_t *set, int signum) { if (signum <= 0 || signum >= NSIG) { errno = EINVAL; return -1; } *set &= ~(1ULL << (signum - 1)); return 0; } __inline int sigismember(const sigset_t *set, int signum) { if (signum <= 0 || signum >= NSIG) { errno = EINVAL; return -1; } return ((*set & (1ULL << (signum - 1))) ? 
1 : 0); } /* sched.h */ /* * sched_yield -- yield the processor */ __inline int sched_yield(void) { SwitchToThread(); return 0; /* always succeeds */ } /* * helper macros for library ctor/dtor function declarations */ #define MSVC_CONSTR(func) \ void func(void); \ __pragma(comment(linker, "/include:_" #func)) \ __pragma(section(".CRT$XCU", read)) \ __declspec(allocate(".CRT$XCU")) \ const void (WINAPI *_##func)(void) = (const void (WINAPI *)(void))func; #define MSVC_DESTR(func) \ void func(void); \ static void _##func##_reg(void) { atexit(func); }; \ MSVC_CONSTR(_##func##_reg) #ifdef __cplusplus } #endif #endif /* PLATFORM_H */ vmem-1.8/src/windows/include/sched.h000066400000000000000000000031241361505074100174730ustar00rootroot00000000000000/* * Copyright 2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake sched.h */ vmem-1.8/src/windows/include/strings.h000066400000000000000000000031331361505074100200760ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
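A hedged, MSVC-only sketch of how the MSVC_CONSTR()/MSVC_DESTR() helpers above might be applied to a hypothetical module named "mylib"; it assumes platform.h is force-included (as its /FI note describes) and that the registered init function has external linkage so the /include linker pragma can resolve it. This is not code from the vmem tree.

/*
 * Illustrative sketch only (MSVC build, platform.h force-included).
 * mylib_init() runs before main(), like a GCC __attribute__((constructor));
 * mylib_fini() is registered with atexit(), like a destructor.
 */
#include <stdio.h>

static int Initialized;

void
mylib_init(void)
{
	Initialized = 1;	/* library startup work would go here */
}
MSVC_CONSTR(mylib_init)

void
mylib_fini(void)
{
	Initialized = 0;	/* library teardown work would go here */
}
MSVC_DESTR(mylib_fini)

int
main(void)
{
	printf("initialized at startup: %d\n", Initialized);
	return 0;
}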
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake strings.h */ vmem-1.8/src/windows/include/sys/000077500000000000000000000000001361505074100170525ustar00rootroot00000000000000vmem-1.8/src/windows/include/sys/file.h000066400000000000000000000032521361505074100201440ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * sys/file.h -- file locking */ vmem-1.8/src/windows/include/sys/mman.h000066400000000000000000000044651361505074100201640ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * sys/mman.h -- memory-mapped files for Windows */ #ifndef SYS_MMAN_H #define SYS_MMAN_H 1 #ifdef __cplusplus extern "C" { #endif #define PROT_NONE 0x0 #define PROT_READ 0x1 #define PROT_WRITE 0x2 #define PROT_EXEC 0x4 #define MAP_SHARED 0x1 #define MAP_PRIVATE 0x2 #define MAP_FIXED 0x10 #define MAP_ANONYMOUS 0x20 #define MAP_ANON MAP_ANONYMOUS #define MAP_NORESERVE 0x04000 #define MS_ASYNC 1 #define MS_SYNC 4 #define MS_INVALIDATE 2 #define MAP_FAILED ((void *)(-1)) void *mmap(void *addr, size_t len, int prot, int flags, int fd, os_off_t offset); int munmap(void *addr, size_t len); int msync(void *addr, size_t len, int flags); int mprotect(void *addr, size_t len, int prot); #ifdef __cplusplus } #endif #endif /* SYS_MMAN_H */ vmem-1.8/src/windows/include/sys/mount.h000066400000000000000000000031351361505074100203670ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake sys/mount.h */ vmem-1.8/src/windows/include/sys/param.h000066400000000000000000000041171361505074100203260ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. 
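A minimal sketch exercising the mmap()/mprotect()/munmap() declarations from the sys/mman.h shim above with an anonymous private mapping; it assumes the Windows implementation honors MAP_ANONYMOUS with fd set to -1, as the flag definitions suggest.

/*
 * Illustrative sketch only -- maps, touches, write-protects and unmaps
 * 1 MiB of anonymous memory through the POSIX-style wrappers above.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int
main(void)
{
	size_t len = 1 << 20;	/* 1 MiB */

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(p, 0xab, len);	/* the mapping behaves like ordinary memory */

	if (mprotect(p, len, PROT_READ))	/* drop write access */
		perror("mprotect");

	if (munmap(p, len))
		perror("munmap");
	return 0;
}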
* * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * sys/param.h -- a few useful macros */ #ifndef SYS_PARAM_H #define SYS_PARAM_H 1 #define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y)) #define howmany(x, y) (((x) + ((y) - 1)) / (y)) #define BPB 8 /* bits per byte */ #define setbit(b, i) ((b)[(i) / BPB] |= 1 << ((i) % BPB)) #define isset(b, i) ((b)[(i) / BPB] & (1 << ((i) % BPB))) #define isclr(b, i) (((b)[(i) / BPB] & (1 << ((i) % BPB))) == 0) #define MIN(a, b) (((a) < (b)) ? (a) : (b)) #define MAX(a, b) (((a) > (b)) ? (a) : (b)) #endif /* SYS_PARAM_H */ vmem-1.8/src/windows/include/sys/resource.h000066400000000000000000000031331361505074100210520ustar00rootroot00000000000000/* * Copyright 2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
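A small, self-contained sketch (arbitrary values) showing what the sys/param.h helpers above compute: roundup()/howmany() for unit rounding, and setbit()/isset()/isclr() for a plain byte-array bitmap.

/*
 * Illustrative sketch only -- demonstrates the sys/param.h macros above.
 */
#include <stdio.h>
#include <string.h>
#include <sys/param.h>

int
main(void)
{
	/* 100 bytes expressed in 64-byte units */
	printf("roundup(100, 64) = %d\n", roundup(100, 64));	/* 128 */
	printf("howmany(100, 64) = %d\n", howmany(100, 64));	/* 2 */

	/* the bit macros index bits across a byte array */
	unsigned char bitmap[4];
	memset(bitmap, 0, sizeof(bitmap));

	setbit(bitmap, 10);	/* sets bit 2 of byte 1 */
	printf("bit 10 set: %d\n", isset(bitmap, 10) != 0);	/* 1 */
	printf("bit 11 clear: %d\n", isclr(bitmap, 11) != 0);	/* 1 */

	printf("MIN(3, 7) = %d, MAX(3, 7) = %d\n", MIN(3, 7), MAX(3, 7));
	return 0;
}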
*/ /* * fake sys/resource.h */ vmem-1.8/src/windows/include/sys/statvfs.h000066400000000000000000000031261361505074100207170ustar00rootroot00000000000000/* * Copyright 2016, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake statvfs.h */ vmem-1.8/src/windows/include/sys/uio.h000066400000000000000000000035221361505074100200210ustar00rootroot00000000000000/* * Copyright 2015-2018, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ /* * sys/uio.h -- definition of iovec structure */ #ifndef SYS_UIO_H #define SYS_UIO_H 1 #include #ifdef __cplusplus extern "C" { #endif ssize_t writev(int fd, const struct iovec *iov, int iovcnt); #ifdef __cplusplus } #endif #endif /* SYS_UIO_H */ vmem-1.8/src/windows/include/sys/wait.h000066400000000000000000000031341361505074100201700ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * fake sys/wait.h */ vmem-1.8/src/windows/include/unistd.h000066400000000000000000000075721361505074100177260ustar00rootroot00000000000000/* * Copyright 2015-2017, Intel Corporation * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
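A minimal sketch of the writev() declaration above, gathering two buffers into a single write on file descriptor 1; it assumes struct iovec carries the standard POSIX members iov_base and iov_len.

/*
 * Illustrative sketch only -- scatter-gather write of two buffers
 * through the writev() wrapper declared above.
 */
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

int
main(void)
{
	const char *hdr = "header: ";
	const char *body = "payload\n";

	struct iovec iov[2];
	iov[0].iov_base = (void *)hdr;
	iov[0].iov_len = strlen(hdr);
	iov[1].iov_base = (void *)body;
	iov[1].iov_len = strlen(body);

	/* fd 1 is stdout; one call emits both pieces in order */
	if (writev(1, iov, 2) < 0)
		perror("writev");
	return 0;
}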
*/ /* * unistd.h -- compatibility layer for POSIX operating system API */ #ifndef UNISTD_H #define UNISTD_H 1 #include #define _SC_PAGESIZE 0 #define _SC_NPROCESSORS_ONLN 1 #define R_OK 04 #define W_OK 02 #define X_OK 00 /* execute permission doesn't exist on Windows */ #define F_OK 00 /* * sysconf -- get configuration information at run time */ static __inline long sysconf(int p) { SYSTEM_INFO si; int ret = 0; switch (p) { case _SC_PAGESIZE: GetSystemInfo(&si); return si.dwPageSize; case _SC_NPROCESSORS_ONLN: for (int i = 0; i < GetActiveProcessorGroupCount(); i++) { ret += GetActiveProcessorCount(i); } return ret; default: return 0; } } #define getpid _getpid /* * pread -- read from a file descriptor at given offset */ static ssize_t pread(int fd, void *buf, size_t count, os_off_t offset) { __int64 position = _lseeki64(fd, 0, SEEK_CUR); _lseeki64(fd, offset, SEEK_SET); int ret = _read(fd, buf, (unsigned)count); _lseeki64(fd, position, SEEK_SET); return ret; } /* * pwrite -- write to a file descriptor at given offset */ static ssize_t pwrite(int fd, const void *buf, size_t count, os_off_t offset) { __int64 position = _lseeki64(fd, 0, SEEK_CUR); _lseeki64(fd, offset, SEEK_SET); int ret = _write(fd, buf, (unsigned)count); _lseeki64(fd, position, SEEK_SET); return ret; } #define S_ISBLK(x) 0 /* BLK devices not exist on Windows */ /* * basename -- parse pathname and return filename component */ static char * basename(char *path) { char fname[_MAX_FNAME]; char ext[_MAX_EXT]; _splitpath(path, NULL, NULL, fname, ext); sprintf(path, "%s%s", fname, ext); return path; } /* * dirname -- parse pathname and return directory component */ static char * dirname(char *path) { if (path == NULL) return "."; size_t len = strlen(path); if (len == 0) return "."; char *end = path + len; /* strip trailing forslashes and backslashes */ while ((--end) > path) { if (*end != '\\' && *end != '/') { *(end + 1) = '\0'; break; } } /* strip basename */ while ((--end) > path) { if (*end == '\\' || *end == '/') { *end = '\0'; break; } } if (end != path) { return path; /* handle edge cases */ } else if (*end == '\\' || *end == '/') { *(end + 1) = '\0'; } else { *end++ = '.'; *end = '\0'; } return path; } #endif /* UNISTD_H */ vmem-1.8/src/windows/include/win_mmap.h000066400000000000000000000054131361505074100202170ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
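A hedged sketch of the pread()/pwrite() shims above doing positional I/O; the file name is hypothetical and a POSIX-style open() is assumed (the MSVC build would go through the CRT's _open()/_close() instead). Note that, unlike POSIX pread(), the shim saves and restores the file position around a plain read, so it is not atomic with respect to other users of the same descriptor.

/*
 * Illustrative sketch only -- writes and reads back a string at a
 * fixed offset without (net) movement of the file position.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* hypothetical scratch file; POSIX-style open() assumed */
	int fd = open("example.tmp", O_RDWR | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	const char msg[] = "hello, pwrite";
	/* store at byte offset 4096; the file position is restored after */
	if (pwrite(fd, msg, sizeof(msg), 4096) < 0)
		perror("pwrite");

	char buf[sizeof(msg)];
	if (pread(fd, buf, sizeof(buf), 4096) < 0)
		perror("pread");
	else
		printf("read back: %s\n", buf);

	close(fd);
	return 0;
}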
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * win_mmap.h -- (internal) tracks the regions mapped by mmap */ #ifndef WIN_MMAP_H #define WIN_MMAP_H 1 #include "queue.h" #define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y)) #define rounddown(x, y) (((x) / (y)) * (y)) void win_mmap_init(void); void win_mmap_fini(void); /* allocation/mmap granularity */ extern unsigned long long Mmap_align; typedef enum FILE_MAPPING_TRACKER_FLAGS { FILE_MAPPING_TRACKER_FLAG_DIRECT_MAPPED = 0x0001, /* * This should hold the value of all flags ORed for debug purpose. */ FILE_MAPPING_TRACKER_FLAGS_MASK = FILE_MAPPING_TRACKER_FLAG_DIRECT_MAPPED } FILE_MAPPING_TRACKER_FLAGS; /* * this structure tracks the file mappings outstanding per file handle */ typedef struct FILE_MAPPING_TRACKER { PMDK_SORTEDQ_ENTRY(FILE_MAPPING_TRACKER) ListEntry; HANDLE FileHandle; HANDLE FileMappingHandle; void *BaseAddress; void *EndAddress; DWORD Access; os_off_t Offset; size_t FileLen; FILE_MAPPING_TRACKER_FLAGS Flags; } FILE_MAPPING_TRACKER, *PFILE_MAPPING_TRACKER; extern SRWLOCK FileMappingQLock; extern PMDK_SORTEDQ_HEAD(FMLHead, FILE_MAPPING_TRACKER) FileMappingQHead; #endif /* WIN_MMAP_H */ vmem-1.8/src/windows/jemalloc_gen/000077500000000000000000000000001361505074100172305ustar00rootroot00000000000000vmem-1.8/src/windows/jemalloc_gen/include/000077500000000000000000000000001361505074100206535ustar00rootroot00000000000000vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/000077500000000000000000000000001361505074100224415ustar00rootroot00000000000000vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/000077500000000000000000000000001361505074100242555ustar00rootroot00000000000000vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/jemalloc_internal.h000066400000000000000000000662041361505074100301200ustar00rootroot00000000000000#ifndef JEMALLOC_INTERNAL_H #define JEMALLOC_INTERNAL_H #include "jemalloc_internal_defs.h" #include "jemalloc/internal/jemalloc_internal_decls.h" #ifdef JEMALLOC_UTRACE #include #endif #define JEMALLOC_NO_DEMANGLE #ifdef JEMALLOC_JET # define JEMALLOC_N(n) jet_##n # include "jemalloc/internal/public_namespace.h" # define JEMALLOC_NO_RENAME # include "jemalloc/jemalloc.h" # undef JEMALLOC_NO_RENAME #else # define JEMALLOC_N(n) je_vmem_je_##n # include "jemalloc/jemalloc.h" #endif #include "jemalloc/internal/private_namespace.h" static const bool config_debug = #ifdef JEMALLOC_DEBUG true #else false #endif ; static const bool have_dss = #ifdef JEMALLOC_DSS true #else false #endif ; static const bool config_fill = #ifdef JEMALLOC_FILL true #else false #endif ; static const bool config_lazy_lock = #ifdef JEMALLOC_LAZY_LOCK true #else false #endif ; static const bool config_prof = #ifdef JEMALLOC_PROF true #else false #endif ; static const bool config_prof_libgcc = #ifdef JEMALLOC_PROF_LIBGCC true #else false #endif ; static const bool config_prof_libunwind = #ifdef JEMALLOC_PROF_LIBUNWIND true #else false #endif ; static const bool config_munmap = #ifdef JEMALLOC_MUNMAP true #else false #endif ; static const bool 
config_stats = #ifdef JEMALLOC_STATS true #else false #endif ; static const bool config_tcache = #ifdef JEMALLOC_TCACHE true #else false #endif ; static const bool config_tls = #ifdef JEMALLOC_TLS true #else false #endif ; static const bool config_utrace = #ifdef JEMALLOC_UTRACE true #else false #endif ; static const bool config_valgrind = #ifdef JEMALLOC_VALGRIND true #else false #endif ; static const bool config_xmalloc = #ifdef JEMALLOC_XMALLOC true #else false #endif ; static const bool config_ivsalloc = #ifdef JEMALLOC_IVSALLOC true #else false #endif ; #ifdef JEMALLOC_ATOMIC9 #include #endif #if (defined(JEMALLOC_OSATOMIC) || defined(JEMALLOC_OSSPIN)) #include #endif #ifdef JEMALLOC_ZONE #include #include #include #include #endif #define RB_COMPACT #include "jemalloc/internal/rb.h" #include "jemalloc/internal/qr.h" #include "jemalloc/internal/ql.h" /* * jemalloc can conceptually be broken into components (arena, tcache, etc.), * but there are circular dependencies that cannot be broken without * substantial performance degradation. In order to reduce the effect on * visual code flow, read the header files in multiple passes, with one of the * following cpp variables defined during each pass: * * JEMALLOC_H_TYPES : Preprocessor-defined constants and psuedo-opaque data * types. * JEMALLOC_H_STRUCTS : Data structures. * JEMALLOC_H_EXTERNS : Extern data declarations and function prototypes. * JEMALLOC_H_INLINES : Inline functions. */ /******************************************************************************/ #define JEMALLOC_H_TYPES #include "jemalloc/internal/jemalloc_internal_macros.h" #define MALLOCX_LG_ALIGN_MASK ((int)0x3f) /* Smallest size class to support. */ #define LG_TINY_MIN 3 #define TINY_MIN (1U << LG_TINY_MIN) /* * Minimum alignment of allocations is 2^LG_QUANTUM bytes (ignoring tiny size * classes). */ #ifndef LG_QUANTUM # if (defined(__i386__) || defined(_M_IX86)) # define LG_QUANTUM 4 # endif # ifdef __ia64__ # define LG_QUANTUM 4 # endif # ifdef __alpha__ # define LG_QUANTUM 4 # endif # ifdef __sparc64__ # define LG_QUANTUM 4 # endif # if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64)) # define LG_QUANTUM 4 # endif # ifdef __arm__ # define LG_QUANTUM 3 # endif # ifdef __aarch64__ # define LG_QUANTUM 4 # endif # ifdef __hppa__ # define LG_QUANTUM 4 # endif # ifdef __mips__ # define LG_QUANTUM 3 # endif # ifdef __powerpc__ # define LG_QUANTUM 4 # endif # ifdef __s390__ # define LG_QUANTUM 4 # endif # ifdef __SH4__ # define LG_QUANTUM 4 # endif # ifdef __tile__ # define LG_QUANTUM 4 # endif # ifdef __le32__ # define LG_QUANTUM 4 # endif # ifndef LG_QUANTUM # error "No LG_QUANTUM definition for architecture; specify via CPPFLAGS" # endif #endif #define QUANTUM ((size_t)(1U << LG_QUANTUM)) #define QUANTUM_MASK (QUANTUM - 1) /* Return the smallest quantum multiple that is >= a. */ #define QUANTUM_CEILING(a) \ (((a) + QUANTUM_MASK) & ~QUANTUM_MASK) #define LONG ((size_t)(1U << LG_SIZEOF_LONG)) #define LONG_MASK (LONG - 1) /* Return the smallest long multiple that is >= a. */ #define LONG_CEILING(a) \ (((a) + LONG_MASK) & ~LONG_MASK) #define SIZEOF_PTR (1U << LG_SIZEOF_PTR) #define PTR_MASK (SIZEOF_PTR - 1) /* Return the smallest (void *) multiple that is >= a. */ #define PTR_CEILING(a) \ (((a) + PTR_MASK) & ~PTR_MASK) /* * Maximum size of L1 cache line. This is used to avoid cache line aliasing. * In addition, this controls the spacing of cacheline-spaced size classes. 
* * CACHELINE cannot be based on LG_CACHELINE because __declspec(align()) can * only handle raw constants. */ #define LG_CACHELINE 6 #define CACHELINE 64 #define CACHELINE_MASK (CACHELINE - 1) /* Return the smallest cacheline multiple that is >= s. */ #define CACHELINE_CEILING(s) \ (((s) + CACHELINE_MASK) & ~CACHELINE_MASK) /* Page size. STATIC_PAGE_SHIFT is determined by the configure script. */ #ifdef PAGE_MASK # undef PAGE_MASK #endif #define LG_PAGE STATIC_PAGE_SHIFT #define PAGE ((size_t)(1U << STATIC_PAGE_SHIFT)) #define PAGE_MASK ((size_t)(PAGE - 1)) /* Return the smallest pagesize multiple that is >= s. */ #define PAGE_CEILING(s) \ (((s) + PAGE_MASK) & ~PAGE_MASK) /* Return the nearest aligned address at or below a. */ #define ALIGNMENT_ADDR2BASE(a, alignment) \ ((void *)((uintptr_t)(a) & (-(alignment)))) /* Return the offset between a and the nearest aligned address at or below a. */ #define ALIGNMENT_ADDR2OFFSET(a, alignment) \ ((size_t)((uintptr_t)(a) & ((alignment) - 1))) /* Return the smallest alignment multiple that is >= s. */ #define ALIGNMENT_CEILING(s, alignment) \ (((s) + ((alignment) - 1)) & (-(alignment))) /* Declare a variable length array */ #if __STDC_VERSION__ < 199901L # ifdef _MSC_VER # include #ifndef alloca # define alloca _alloca #endif # else # ifdef JEMALLOC_HAS_ALLOCA_H # include # else # include # endif # endif # define VARIABLE_ARRAY(type, name, count) \ type *name = alloca(sizeof(type) * (count)) #else # define VARIABLE_ARRAY(type, name, count) type name[(count)] #endif #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/pool.h" #include "jemalloc/internal/vector.h" #undef JEMALLOC_H_TYPES /******************************************************************************/ #define JEMALLOC_H_STRUCTS #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/pool.h" #include "jemalloc/internal/vector.h" typedef struct { uint64_t allocated; uint64_t deallocated; } thread_allocated_t; /* * The 
JEMALLOC_ARG_CONCAT() wrapper is necessary to pass {0, 0} via a cpp macro * argument. */ #define THREAD_ALLOCATED_INITIALIZER JEMALLOC_ARG_CONCAT({0, 0}) #undef JEMALLOC_H_STRUCTS /******************************************************************************/ #define JEMALLOC_H_EXTERNS extern bool opt_abort; extern bool opt_junk; extern size_t opt_quarantine; extern bool opt_redzone; extern bool opt_utrace; extern bool opt_xmalloc; extern bool opt_zero; extern size_t opt_narenas; extern bool in_valgrind; /* Number of CPUs. */ extern unsigned ncpus; extern unsigned npools; extern unsigned npools_cnt; extern pool_t base_pool; extern pool_t **pools; extern malloc_mutex_t pools_lock; extern void *(*base_malloc_fn)(size_t); extern void (*base_free_fn)(void *); extern bool pools_shared_data_create(void); arena_t *arenas_extend(pool_t *pool, unsigned ind); bool arenas_tsd_extend(tsd_pool_t *tsd, unsigned len); void arenas_cleanup(void *arg); arena_t *choose_arena_hard(pool_t *pool); void jemalloc_prefork(void); void jemalloc_postfork_parent(void); void jemalloc_postfork_child(void); #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/arena.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" #include "jemalloc/internal/rtree.h" #include "jemalloc/internal/tcache.h" #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #include "jemalloc/internal/prof.h" #include "jemalloc/internal/pool.h" #include "jemalloc/internal/vector.h" #undef JEMALLOC_H_EXTERNS /******************************************************************************/ #define JEMALLOC_H_INLINES #include "jemalloc/internal/pool.h" #include "jemalloc/internal/valgrind.h" #include "jemalloc/internal/util.h" #include "jemalloc/internal/atomic.h" #include "jemalloc/internal/prng.h" #include "jemalloc/internal/ckh.h" #include "jemalloc/internal/size_classes.h" #include "jemalloc/internal/stats.h" #include "jemalloc/internal/ctl.h" #include "jemalloc/internal/mutex.h" #include "jemalloc/internal/tsd.h" #include "jemalloc/internal/mb.h" #include "jemalloc/internal/extent.h" #include "jemalloc/internal/base.h" #include "jemalloc/internal/chunk.h" #include "jemalloc/internal/huge.h" /* * Include arena.h the first time in order to provide inline functions for this * header's inlines. */ #define JEMALLOC_ARENA_INLINE_A #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_A #ifndef JEMALLOC_ENABLE_INLINE malloc_tsd_protos(JEMALLOC_ATTR(unused), arenas, tsd_pool_t) size_t s2u(size_t size); size_t sa2u(size_t size, size_t alignment); unsigned narenas_total_get(pool_t *pool); arena_t *choose_arena(arena_t *arena); #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) /* * Map of pthread_self() --> arenas[???], used for selecting an arena to use * for allocations. */ malloc_tsd_externs(arenas, tsd_pool_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, arenas, tsd_pool_t, {0}, arenas_cleanup) /* * Check if the arena is dummy. 
*/ JEMALLOC_ALWAYS_INLINE bool is_arena_dummy(arena_t *arena) { return (arena->ind == ARENA_DUMMY_IND); } /* * Compute usable size that would result from allocating an object with the * specified size. */ JEMALLOC_ALWAYS_INLINE size_t s2u(size_t size) { if (size <= SMALL_MAXCLASS) return (small_s2u(size)); if (size <= arena_maxclass) return (PAGE_CEILING(size)); return (CHUNK_CEILING(size)); } /* * Compute usable size that would result from allocating an object with the * specified size and alignment. */ JEMALLOC_ALWAYS_INLINE size_t sa2u(size_t size, size_t alignment) { size_t usize; assert(alignment != 0 && ((alignment - 1) & alignment) == 0); /* * Round size up to the nearest multiple of alignment. * * This done, we can take advantage of the fact that for each small * size class, every object is aligned at the smallest power of two * that is non-zero in the base two representation of the size. For * example: * * Size | Base 2 | Minimum alignment * -----+----------+------------------ * 96 | 1100000 | 32 * 144 | 10100000 | 32 * 192 | 11000000 | 64 */ usize = ALIGNMENT_CEILING(size, alignment); /* * (usize < size) protects against the combination of maximal * alignment and size greater than maximal alignment. */ if (usize < size) { /* size_t overflow. */ return (0); } if (usize <= arena_maxclass && alignment <= PAGE) { if (usize <= SMALL_MAXCLASS) return (small_s2u(usize)); return (PAGE_CEILING(usize)); } else { size_t run_size; /* * We can't achieve subpage alignment, so round up alignment * permanently; it makes later calculations simpler. */ alignment = PAGE_CEILING(alignment); usize = PAGE_CEILING(size); /* * (usize < size) protects against very large sizes within * PAGE of SIZE_T_MAX. * * (usize + alignment < usize) protects against the * combination of maximal alignment and usize large enough * to cause overflow. This is similar to the first overflow * check above, but it needs to be repeated due to the new * usize value, which may now be *equal* to maximal * alignment, whereas before we only detected overflow if the * original size was *greater* than maximal alignment. */ if (usize < size || usize + alignment < usize) { /* size_t overflow. */ return (0); } /* * Calculate the size of the over-size run that arena_palloc() * would need to allocate in order to guarantee the alignment. * If the run wouldn't fit within a chunk, round up to a huge * allocation size. */ run_size = usize + alignment - PAGE; if (run_size <= arena_maxclass) return (PAGE_CEILING(usize)); return (CHUNK_CEILING(usize)); } } JEMALLOC_INLINE unsigned narenas_total_get(pool_t *pool) { unsigned narenas; malloc_rwlock_rdlock(&pool->arenas_lock); narenas = pool->narenas_total; malloc_rwlock_unlock(&pool->arenas_lock); return (narenas); } /* * Choose an arena based on a per-thread value. * Arena pointer must be either a valid arena pointer or a dummy arena with * pool field filled. 
*/ JEMALLOC_INLINE arena_t * choose_arena(arena_t *arena) { arena_t *ret; tsd_pool_t *tsd; pool_t *pool; if (!is_arena_dummy(arena)) return (arena); pool = arena->pool; tsd = arenas_tsd_get(); /* expand arenas array if necessary */ if ((tsd->npools <= pool->pool_id) && arenas_tsd_extend(tsd, pool->pool_id)) { return (NULL); } if ( (tsd->seqno[pool->pool_id] != pool->seqno) || (ret = tsd->arenas[pool->pool_id]) == NULL) { ret = choose_arena_hard(pool); assert(ret != NULL); } return (ret); } #endif #include "jemalloc/internal/bitmap.h" #include "jemalloc/internal/rtree.h" /* * Include arena.h the second and third times in order to resolve circular * dependencies with tcache.h. */ #define JEMALLOC_ARENA_INLINE_B #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_B #include "jemalloc/internal/tcache.h" #define JEMALLOC_ARENA_INLINE_C #include "jemalloc/internal/arena.h" #undef JEMALLOC_ARENA_INLINE_C #include "jemalloc/internal/hash.h" #include "jemalloc/internal/quarantine.h" #ifndef JEMALLOC_ENABLE_INLINE void *imalloct(size_t size, bool try_tcache, arena_t *arena); void *imalloc(size_t size); void *pool_imalloc(pool_t *pool, size_t size); void *icalloct(size_t size, bool try_tcache, arena_t *arena); void *icalloc(size_t size); void *pool_icalloc(pool_t *pool, size_t size); void *ipalloct(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena); void *ipalloc(size_t usize, size_t alignment, bool zero); void *pool_ipalloc(pool_t *pool, size_t usize, size_t alignment, bool zero); size_t isalloc(const void *ptr, bool demote); size_t pool_isalloc(pool_t *pool, const void *ptr, bool demote); size_t ivsalloc(const void *ptr, bool demote); size_t u2rz(size_t usize); size_t p2rz(const void *ptr); void idalloct(void *ptr, bool try_tcache); void pool_idalloct(pool_t *pool, void *ptr, bool try_tcache); void idalloc(void *ptr); void iqalloct(void *ptr, bool try_tcache); void pool_iqalloct(pool_t *pool, void *ptr, bool try_tcache); void iqalloc(void *ptr); void *iralloct_realign(void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena); void *iralloct(void *ptr, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena); void *iralloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero); void *pool_iralloc(pool_t *pool, void *ptr, size_t size, size_t extra, size_t alignment, bool zero); bool ixalloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero); int msc_clz(unsigned int val); malloc_tsd_protos(JEMALLOC_ATTR(unused), thread_allocated, thread_allocated_t) #endif #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_)) # ifdef _MSC_VER JEMALLOC_ALWAYS_INLINE int msc_clz(unsigned int val) { unsigned int res = 0; # if LG_SIZEOF_INT == 2 if (_BitScanReverse(&res, val)) { return 31 - res; } else { return 32; } # elif LG_SIZEOF_INT == 3 if (_BitScanReverse64(&res, val)) { return 63 - res; } else { return 64; } # else # error "Unsupported clz function for that size of int" # endif } #endif JEMALLOC_ALWAYS_INLINE void * imalloct(size_t size, bool try_tcache, arena_t *arena) { assert(size != 0); if (size <= arena_maxclass) return (arena_malloc(arena, size, false, try_tcache)); else return (huge_malloc(arena, size, false)); } JEMALLOC_ALWAYS_INLINE void * imalloc(size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (imalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE 
void * pool_imalloc(pool_t *pool, size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (imalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * icalloct(size_t size, bool try_tcache, arena_t *arena) { if (size <= arena_maxclass) return (arena_malloc(arena, size, true, try_tcache)); else return (huge_malloc(arena, size, true)); } JEMALLOC_ALWAYS_INLINE void * icalloc(size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (icalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * pool_icalloc(pool_t *pool, size_t size) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (icalloct(size, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * ipalloct(size_t usize, size_t alignment, bool zero, bool try_tcache, arena_t *arena) { void *ret; assert(usize != 0); assert(usize == sa2u(usize, alignment)); if (usize <= arena_maxclass && alignment <= PAGE) ret = arena_malloc(arena, usize, zero, try_tcache); else { if (usize <= arena_maxclass) { ret = arena_palloc(choose_arena(arena), usize, alignment, zero); } else if (alignment <= chunksize) ret = huge_malloc(arena, usize, zero); else ret = huge_palloc(arena, usize, alignment, zero); } assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret); return (ret); } JEMALLOC_ALWAYS_INLINE void * ipalloc(size_t usize, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (ipalloct(usize, alignment, zero, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * pool_ipalloc(pool_t *pool, size_t usize, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (ipalloct(usize, alignment, zero, true, &dummy)); } /* * Typical usage: * void *ptr = [...] * size_t sz = isalloc(ptr, config_prof); */ JEMALLOC_ALWAYS_INLINE size_t isalloc(const void *ptr, bool demote) { size_t ret; arena_chunk_t *chunk; assert(ptr != NULL); /* Demotion only makes sense if config_prof is true. */ assert(config_prof || demote == false); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) ret = arena_salloc(ptr, demote); else ret = huge_salloc(ptr); return (ret); } /* * Typical usage: * void *ptr = [...] * size_t sz = isalloc(ptr, config_prof); */ JEMALLOC_ALWAYS_INLINE size_t pool_isalloc(pool_t *pool, const void *ptr, bool demote) { size_t ret; arena_chunk_t *chunk; assert(ptr != NULL); /* Demotion only makes sense if config_prof is true. */ assert(config_prof || demote == false); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) ret = arena_salloc(ptr, demote); else ret = huge_pool_salloc(pool, ptr); return (ret); } JEMALLOC_ALWAYS_INLINE size_t ivsalloc(const void *ptr, bool demote) { size_t i; malloc_mutex_lock(&pools_lock); unsigned n = npools; for (i = 0; i < n; ++i) { pool_t *pool = pools[i]; if (pool == NULL) continue; /* Return 0 if ptr is not within a chunk managed by jemalloc. 
*/ if (rtree_get(pool->chunks_rtree, (uintptr_t)CHUNK_ADDR2BASE(ptr)) != 0) break; } malloc_mutex_unlock(&pools_lock); if (i == n) return 0; return (isalloc(ptr, demote)); } JEMALLOC_INLINE size_t u2rz(size_t usize) { size_t ret; if (usize <= SMALL_MAXCLASS) { size_t binind = small_size2bin(usize); assert(binind < NBINS); ret = arena_bin_info[binind].redzone_size; } else ret = 0; return (ret); } JEMALLOC_INLINE size_t p2rz(const void *ptr) { size_t usize = isalloc(ptr, false); return (u2rz(usize)); } JEMALLOC_ALWAYS_INLINE void idalloct(void *ptr, bool try_tcache) { arena_chunk_t *chunk; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) arena_dalloc(chunk, ptr, try_tcache); else huge_dalloc(&base_pool, ptr); } JEMALLOC_ALWAYS_INLINE void pool_idalloct(pool_t *pool, void *ptr, bool try_tcache) { arena_chunk_t *chunk; assert(ptr != NULL); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); if (chunk != ptr) arena_dalloc(chunk, ptr, try_tcache); else huge_dalloc(pool, ptr); } JEMALLOC_ALWAYS_INLINE void idalloc(void *ptr) { idalloct(ptr, true); } JEMALLOC_ALWAYS_INLINE void iqalloct(void *ptr, bool try_tcache) { if (config_fill && opt_quarantine) quarantine(ptr); else idalloct(ptr, try_tcache); } JEMALLOC_ALWAYS_INLINE void pool_iqalloct(pool_t *pool, void *ptr, bool try_tcache) { if (config_fill && opt_quarantine) quarantine(ptr); else pool_idalloct(pool, ptr, try_tcache); } JEMALLOC_ALWAYS_INLINE void iqalloc(void *ptr) { iqalloct(ptr, true); } JEMALLOC_ALWAYS_INLINE void * iralloct_realign(void *ptr, size_t oldsize, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena) { void *p; size_t usize, copysize; usize = sa2u(size + extra, alignment); if (usize == 0) return (NULL); p = ipalloct(usize, alignment, zero, try_tcache_alloc, arena); if (p == NULL) { if (extra == 0) return (NULL); /* Try again, without extra this time. */ usize = sa2u(size, alignment); if (usize == 0) return (NULL); p = ipalloct(usize, alignment, zero, try_tcache_alloc, arena); if (p == NULL) return (NULL); } /* * Copy at most size bytes (not size+extra), since the caller has no * expectation that the extra bytes will be reliably preserved. */ copysize = (size < oldsize) ? size : oldsize; memcpy(p, ptr, copysize); pool_iqalloct(arena->pool, ptr, try_tcache_dalloc); return (p); } JEMALLOC_ALWAYS_INLINE void * iralloct(void *ptr, size_t size, size_t extra, size_t alignment, bool zero, bool try_tcache_alloc, bool try_tcache_dalloc, arena_t *arena) { size_t oldsize; assert(ptr != NULL); assert(size != 0); oldsize = isalloc(ptr, config_prof); if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1)) != 0) { /* * Existing object alignment is inadequate; allocate new space * and copy. */ return (iralloct_realign(ptr, oldsize, size, extra, alignment, zero, try_tcache_alloc, try_tcache_dalloc, arena)); } if (size + extra <= arena_maxclass) { void *ret; ret = arena_ralloc(arena, ptr, oldsize, size, extra, alignment, zero, try_tcache_alloc, try_tcache_dalloc); if ((ret != NULL) || (size + extra > oldsize)) return (ret); if (oldsize > chunksize) { size_t old_usize JEMALLOC_CC_SILENCE_INIT(0); UNUSED size_t old_rzsize JEMALLOC_CC_SILENCE_INIT(0); if (config_valgrind && in_valgrind) { old_usize = isalloc(ptr, config_prof); old_rzsize = config_prof ? 
p2rz(ptr) : u2rz(old_usize); } ret = huge_ralloc(arena, ptr, oldsize, chunksize, 0, alignment, zero, try_tcache_dalloc); JEMALLOC_VALGRIND_REALLOC(true, ret, s2u(chunksize), true, ptr, old_usize, old_rzsize, true, false); if (ret != NULL) { /* Now, it should succeed... */ return arena_ralloc(arena, ret, chunksize, size, extra, alignment, zero, try_tcache_alloc, try_tcache_dalloc); } } return NULL; } else { return (huge_ralloc(arena, ptr, oldsize, size, extra, alignment, zero, try_tcache_dalloc)); } } JEMALLOC_ALWAYS_INLINE void * iralloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, &base_pool); return (iralloct(ptr, size, extra, alignment, zero, true, true, &dummy)); } JEMALLOC_ALWAYS_INLINE void * pool_iralloc(pool_t *pool, void *ptr, size_t size, size_t extra, size_t alignment, bool zero) { arena_t dummy; DUMMY_ARENA_INITIALIZE(dummy, pool); return (iralloct(ptr, size, extra, alignment, zero, true, true, &dummy)); } JEMALLOC_ALWAYS_INLINE bool ixalloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero) { size_t oldsize; assert(ptr != NULL); assert(size != 0); oldsize = isalloc(ptr, config_prof); if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1)) != 0) { /* Existing object alignment is inadequate. */ return (true); } if (size <= arena_maxclass) return (arena_ralloc_no_move(ptr, oldsize, size, extra, zero)); else return (huge_ralloc_no_move(&base_pool, ptr, oldsize, size, extra, zero)); } malloc_tsd_externs(thread_allocated, thread_allocated_t) malloc_tsd_funcs(JEMALLOC_ALWAYS_INLINE, thread_allocated, thread_allocated_t, THREAD_ALLOCATED_INITIALIZER, malloc_tsd_no_cleanup) #endif #include "jemalloc/internal/prof.h" #undef JEMALLOC_H_INLINES #ifdef _WIN32 #define __builtin_clz(x) msc_clz(x) #endif /******************************************************************************/ #endif /* JEMALLOC_INTERNAL_H */ vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/jemalloc_internal_defs.h000066400000000000000000000151131361505074100311120ustar00rootroot00000000000000/* ./../windows/jemalloc_gen/include/jemalloc/internal/jemalloc_internal_defs.h. Generated from jemalloc_internal_defs.h.in by configure. */ #ifndef JEMALLOC_INTERNAL_DEFS_H_ #define JEMALLOC_INTERNAL_DEFS_H_ /* * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all * public APIs to be prefixed. This makes it possible, with some care, to use * multiple allocators simultaneously. */ #define JEMALLOC_PREFIX "je_vmem_" #define JEMALLOC_CPREFIX "JE_VMEM_" /* * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs. * For shared libraries, symbol visibility mechanisms prevent these symbols * from being exported, but for static libraries, naming collisions are a real * possibility. */ #define JEMALLOC_PRIVATE_NAMESPACE je_vmem_je_ /* * Hyper-threaded CPUs may need a special instruction inside spin loops in * order to yield to another virtual CPU. */ #define CPU_SPINWAIT /* Defined if the equivalent of FreeBSD's atomic(9) functions are available. */ /* #undef JEMALLOC_ATOMIC9 */ /* * Defined if OSAtomic*() functions are available, as provided by Darwin, and * documented in the atomic(3) manual page. 
*/ /* #undef JEMALLOC_OSATOMIC */ /* * Defined if __sync_add_and_fetch(uint32_t *, uint32_t) and * __sync_sub_and_fetch(uint32_t *, uint32_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 not being defined (which means the * functions are defined in libgcc instead of being inlines) */ /* #undef JE_FORCE_SYNC_COMPARE_AND_SWAP_4 */ /* * Defined if __sync_add_and_fetch(uint64_t *, uint64_t) and * __sync_sub_and_fetch(uint64_t *, uint64_t) are available, despite * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 not being defined (which means the * functions are defined in libgcc instead of being inlines) */ /* #undef JE_FORCE_SYNC_COMPARE_AND_SWAP_8 */ /* * Defined if __builtin_clz() and __builtin_clzl() are available. */ /* #undef JEMALLOC_HAVE_BUILTIN_CLZ */ /* * Defined if madvise(2) is available. */ /* #undef JEMALLOC_HAVE_MADVISE */ /* * Defined if OSSpin*() functions are available, as provided by Darwin, and * documented in the spinlock(3) manual page. */ /* #undef JEMALLOC_OSSPIN */ /* * Defined if _malloc_thread_cleanup() exists. At least in the case of * FreeBSD, pthread_key_create() allocates, which if used during malloc * bootstrapping will cause recursion into the pthreads library. Therefore, if * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in * malloc_tsd. */ /* #undef JEMALLOC_MALLOC_THREAD_CLEANUP */ /* * Defined if threaded initialization is known to be safe on this platform. * Among other things, it must be possible to initialize a mutex without * triggering allocation in order for threaded allocation to be safe. */ /* #undef JEMALLOC_THREADED_INIT */ /* * Defined if the pthreads implementation defines * _pthread_mutex_init_calloc_cb(), in which case the function is used in order * to avoid recursive allocation during mutex initialization. */ /* #undef JEMALLOC_MUTEX_INIT_CB */ /* Non-empty if the tls_model attribute is supported. */ #define JEMALLOC_TLS_MODEL /* JEMALLOC_CC_SILENCE enables code that silences unuseful compiler warnings. */ #define JEMALLOC_CC_SILENCE /* JEMALLOC_CODE_COVERAGE enables test code coverage analysis. */ /* #undef JEMALLOC_CODE_COVERAGE */ /* * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables * inline functions. */ /* #undef JEMALLOC_DEBUG */ /* JEMALLOC_STATS enables statistics calculation. */ #define JEMALLOC_STATS /* JEMALLOC_PROF enables allocation profiling. */ /* #undef JEMALLOC_PROF */ /* Use libunwind for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_LIBUNWIND */ /* Use libgcc for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_LIBGCC */ /* Use gcc intrinsics for profile backtracing if defined. */ /* #undef JEMALLOC_PROF_GCC */ /* * JEMALLOC_TCACHE enables a thread-specific caching layer for small objects. * This makes it possible to allocate/deallocate objects without any locking * when the cache is in the steady state. */ #define JEMALLOC_TCACHE /* * JEMALLOC_DSS enables use of sbrk(2) to allocate chunks from the data storage * segment (DSS). */ /* #undef JEMALLOC_DSS */ /* Support memory filling (junk/zero/quarantine/redzone). */ #define JEMALLOC_FILL /* Support utrace(2)-based tracing. */ /* #undef JEMALLOC_UTRACE */ /* Support Valgrind. */ /* #undef JEMALLOC_VALGRIND */ /* Support optional abort() on OOM. */ /* #undef JEMALLOC_XMALLOC */ /* Support lazy locking (avoid locking unless a second thread is launched). */ /* #undef JEMALLOC_LAZY_LOCK */ /* One page is 2^STATIC_PAGE_SHIFT bytes. 
*/ #define STATIC_PAGE_SHIFT 12 /* * If defined, use munmap() to unmap freed chunks, rather than storing them for * later reuse. This is disabled by default on Linux because common sequences * of mmap()/munmap() calls will cause virtual memory map holes. */ /* #undef JEMALLOC_MUNMAP */ /* TLS is used to map arenas and magazine caches to threads. */ /* #undef JEMALLOC_TLS */ /* * ffs()/ffsl() functions to use for bitmapping. Don't use these directly; * instead, use jemalloc_ffs() or jemalloc_ffsl() from util.h. */ #define JEMALLOC_INTERNAL_FFSL ffsl #define JEMALLOC_INTERNAL_FFS ffs /* * JEMALLOC_IVSALLOC enables ivsalloc(), which verifies that pointers reside * within jemalloc-owned chunks before dereferencing them. */ /* #undef JEMALLOC_IVSALLOC */ /* * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings. */ /* #undef JEMALLOC_ZONE */ /* #undef JEMALLOC_ZONE_VERSION */ /* * Methods for purging unused pages differ between operating systems. * * madvise(..., MADV_DONTNEED) : On Linux, this immediately discards pages, * such that new pages will be demand-zeroed if * the address region is later touched. * madvise(..., MADV_FREE) : On FreeBSD and Darwin, this marks pages as being * unused, such that they will be discarded rather * than swapped out. */ /* #undef JEMALLOC_PURGE_MADVISE_DONTNEED */ /* #undef JEMALLOC_PURGE_MADVISE_FREE */ /* * Define if operating system has alloca.h header. */ /* #undef JEMALLOC_HAS_ALLOCA_H */ /* C99 restrict keyword supported. */ /* #undef JEMALLOC_HAS_RESTRICT */ /* For use by hash code. */ /* #undef JEMALLOC_BIG_ENDIAN */ /* sizeof(int) == 2^LG_SIZEOF_INT. */ #define LG_SIZEOF_INT 2 /* sizeof(long) == 2^LG_SIZEOF_LONG. */ #define LG_SIZEOF_LONG 2 /* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */ #define LG_SIZEOF_INTMAX_T 3 #endif /* JEMALLOC_INTERNAL_DEFS_H_ */ vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/private_namespace.h000066400000000000000000000612441361505074100301230ustar00rootroot00000000000000#define a0calloc JEMALLOC_N(a0calloc) #define a0free JEMALLOC_N(a0free) #define a0malloc JEMALLOC_N(a0malloc) #define arena_alloc_junk_small JEMALLOC_N(arena_alloc_junk_small) #define arena_bin_index JEMALLOC_N(arena_bin_index) #define arena_bin_info JEMALLOC_N(arena_bin_info) #define arena_boot JEMALLOC_N(arena_boot) #define arena_chunk_alloc_huge JEMALLOC_N(arena_chunk_alloc_huge) #define arena_chunk_dalloc_huge JEMALLOC_N(arena_chunk_dalloc_huge) #define arena_dalloc JEMALLOC_N(arena_dalloc) #define arena_dalloc_bin JEMALLOC_N(arena_dalloc_bin) #define arena_dalloc_bin_locked JEMALLOC_N(arena_dalloc_bin_locked) #define arena_dalloc_junk_large JEMALLOC_N(arena_dalloc_junk_large) #define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small) #define arena_dalloc_large JEMALLOC_N(arena_dalloc_large) #define arena_dalloc_large_locked JEMALLOC_N(arena_dalloc_large_locked) #define arena_dalloc_small JEMALLOC_N(arena_dalloc_small) #define arena_dss_prec_get JEMALLOC_N(arena_dss_prec_get) #define arena_dss_prec_set JEMALLOC_N(arena_dss_prec_set) #define arena_malloc JEMALLOC_N(arena_malloc) #define arena_malloc_large JEMALLOC_N(arena_malloc_large) #define arena_malloc_small JEMALLOC_N(arena_malloc_small) #define arena_mapbits_allocated_get JEMALLOC_N(arena_mapbits_allocated_get) #define arena_mapbits_binind_get JEMALLOC_N(arena_mapbits_binind_get) #define arena_mapbits_dirty_get JEMALLOC_N(arena_mapbits_dirty_get) #define arena_mapbits_get JEMALLOC_N(arena_mapbits_get) #define arena_mapbits_large_binind_set 
JEMALLOC_N(arena_mapbits_large_binind_set) #define arena_mapbits_large_get JEMALLOC_N(arena_mapbits_large_get) #define arena_mapbits_large_set JEMALLOC_N(arena_mapbits_large_set) #define arena_mapbits_large_size_get JEMALLOC_N(arena_mapbits_large_size_get) #define arena_mapbits_small_runind_get JEMALLOC_N(arena_mapbits_small_runind_get) #define arena_mapbits_small_set JEMALLOC_N(arena_mapbits_small_set) #define arena_mapbits_unallocated_set JEMALLOC_N(arena_mapbits_unallocated_set) #define arena_mapbits_unallocated_size_get JEMALLOC_N(arena_mapbits_unallocated_size_get) #define arena_mapbits_unallocated_size_set JEMALLOC_N(arena_mapbits_unallocated_size_set) #define arena_mapbits_unzeroed_get JEMALLOC_N(arena_mapbits_unzeroed_get) #define arena_mapbits_unzeroed_set JEMALLOC_N(arena_mapbits_unzeroed_set) #define arena_mapbitsp_get JEMALLOC_N(arena_mapbitsp_get) #define arena_mapbitsp_read JEMALLOC_N(arena_mapbitsp_read) #define arena_mapbitsp_write JEMALLOC_N(arena_mapbitsp_write) #define arena_mapelm_to_pageind JEMALLOC_N(arena_mapelm_to_pageind) #define arena_mapp_get JEMALLOC_N(arena_mapp_get) #define arena_maxclass JEMALLOC_N(arena_maxclass) #define arena_new JEMALLOC_N(arena_new) #define arena_palloc JEMALLOC_N(arena_palloc) #define arena_postfork_child JEMALLOC_N(arena_postfork_child) #define arena_postfork_parent JEMALLOC_N(arena_postfork_parent) #define arena_prefork JEMALLOC_N(arena_prefork) #define arena_prof_accum JEMALLOC_N(arena_prof_accum) #define arena_prof_accum_impl JEMALLOC_N(arena_prof_accum_impl) #define arena_prof_accum_locked JEMALLOC_N(arena_prof_accum_locked) #define arena_prof_ctx_get JEMALLOC_N(arena_prof_ctx_get) #define arena_prof_ctx_set JEMALLOC_N(arena_prof_ctx_set) #define arena_prof_promoted JEMALLOC_N(arena_prof_promoted) #define arena_ptr_small_binind_get JEMALLOC_N(arena_ptr_small_binind_get) #define arena_purge_all JEMALLOC_N(arena_purge_all) #define arena_quarantine_junk_small JEMALLOC_N(arena_quarantine_junk_small) #define arena_ralloc JEMALLOC_N(arena_ralloc) #define arena_ralloc_junk_large JEMALLOC_N(arena_ralloc_junk_large) #define arena_ralloc_no_move JEMALLOC_N(arena_ralloc_no_move) #define arena_redzone_corruption JEMALLOC_N(arena_redzone_corruption) #define arena_run_regind JEMALLOC_N(arena_run_regind) #define arena_runs_avail_tree_iter JEMALLOC_N(arena_runs_avail_tree_iter) #define arena_salloc JEMALLOC_N(arena_salloc) #define arena_stats_merge JEMALLOC_N(arena_stats_merge) #define arena_tcache_fill_small JEMALLOC_N(arena_tcache_fill_small) #define arenas JEMALLOC_N(arenas) #define pools JEMALLOC_N(pools) #define arenas_booted JEMALLOC_N(arenas_booted) #define arenas_cleanup JEMALLOC_N(arenas_cleanup) #define arenas_extend JEMALLOC_N(arenas_extend) #define arenas_initialized JEMALLOC_N(arenas_initialized) #define arenas_lock JEMALLOC_N(arenas_lock) #define arenas_tls JEMALLOC_N(arenas_tls) #define arenas_tsd JEMALLOC_N(arenas_tsd) #define arenas_tsd_boot JEMALLOC_N(arenas_tsd_boot) #define arenas_tsd_cleanup_wrapper JEMALLOC_N(arenas_tsd_cleanup_wrapper) #define arenas_tsd_get JEMALLOC_N(arenas_tsd_get) #define arenas_tsd_get_wrapper JEMALLOC_N(arenas_tsd_get_wrapper) #define arenas_tsd_init_head JEMALLOC_N(arenas_tsd_init_head) #define arenas_tsd_set JEMALLOC_N(arenas_tsd_set) #define atomic_add_u JEMALLOC_N(atomic_add_u) #define atomic_add_uint32 JEMALLOC_N(atomic_add_uint32) #define atomic_add_uint64 JEMALLOC_N(atomic_add_uint64) #define atomic_add_z JEMALLOC_N(atomic_add_z) #define atomic_sub_u JEMALLOC_N(atomic_sub_u) #define 
atomic_sub_uint32 JEMALLOC_N(atomic_sub_uint32) #define atomic_sub_uint64 JEMALLOC_N(atomic_sub_uint64) #define atomic_sub_z JEMALLOC_N(atomic_sub_z) #define base_alloc JEMALLOC_N(base_alloc) #define base_boot JEMALLOC_N(base_boot) #define base_calloc JEMALLOC_N(base_calloc) #define base_free_fn JEMALLOC_N(base_free_fn) #define base_malloc_fn JEMALLOC_N(base_malloc_fn) #define base_node_alloc JEMALLOC_N(base_node_alloc) #define base_node_dalloc JEMALLOC_N(base_node_dalloc) #define base_pool JEMALLOC_N(base_pool) #define base_postfork_child JEMALLOC_N(base_postfork_child) #define base_postfork_parent JEMALLOC_N(base_postfork_parent) #define base_prefork JEMALLOC_N(base_prefork) #define bitmap_full JEMALLOC_N(bitmap_full) #define bitmap_get JEMALLOC_N(bitmap_get) #define bitmap_info_init JEMALLOC_N(bitmap_info_init) #define bitmap_info_ngroups JEMALLOC_N(bitmap_info_ngroups) #define bitmap_init JEMALLOC_N(bitmap_init) #define bitmap_set JEMALLOC_N(bitmap_set) #define bitmap_sfu JEMALLOC_N(bitmap_sfu) #define bitmap_size JEMALLOC_N(bitmap_size) #define bitmap_unset JEMALLOC_N(bitmap_unset) #define bt_init JEMALLOC_N(bt_init) #define buferror JEMALLOC_N(buferror) #define choose_arena JEMALLOC_N(choose_arena) #define choose_arena_hard JEMALLOC_N(choose_arena_hard) #define chunk_alloc_arena JEMALLOC_N(chunk_alloc_arena) #define chunk_alloc_base JEMALLOC_N(chunk_alloc_base) #define chunk_alloc_default JEMALLOC_N(chunk_alloc_default) #define chunk_alloc_dss JEMALLOC_N(chunk_alloc_dss) #define chunk_alloc_mmap JEMALLOC_N(chunk_alloc_mmap) #define chunk_global_boot JEMALLOC_N(chunk_global_boot) #define chunk_boot JEMALLOC_N(chunk_boot) #define chunk_dalloc_default JEMALLOC_N(chunk_dalloc_default) #define chunk_dalloc_mmap JEMALLOC_N(chunk_dalloc_mmap) #define chunk_dss_boot JEMALLOC_N(chunk_dss_boot) #define chunk_dss_postfork_child JEMALLOC_N(chunk_dss_postfork_child) #define chunk_dss_postfork_parent JEMALLOC_N(chunk_dss_postfork_parent) #define chunk_dss_prec_get JEMALLOC_N(chunk_dss_prec_get) #define chunk_dss_prec_set JEMALLOC_N(chunk_dss_prec_set) #define chunk_dss_prefork JEMALLOC_N(chunk_dss_prefork) #define chunk_in_dss JEMALLOC_N(chunk_in_dss) #define chunk_npages JEMALLOC_N(chunk_npages) #define chunk_postfork_child JEMALLOC_N(chunk_postfork_child) #define chunk_postfork_parent JEMALLOC_N(chunk_postfork_parent) #define chunk_prefork JEMALLOC_N(chunk_prefork) #define chunk_unmap JEMALLOC_N(chunk_unmap) #define chunk_record JEMALLOC_N(chunk_record) #define chunks_mtx JEMALLOC_N(chunks_mtx) #define chunks_rtree JEMALLOC_N(chunks_rtree) #define chunksize JEMALLOC_N(chunksize) #define chunksize_mask JEMALLOC_N(chunksize_mask) #define ckh_bucket_search JEMALLOC_N(ckh_bucket_search) #define ckh_count JEMALLOC_N(ckh_count) #define ckh_delete JEMALLOC_N(ckh_delete) #define ckh_evict_reloc_insert JEMALLOC_N(ckh_evict_reloc_insert) #define ckh_insert JEMALLOC_N(ckh_insert) #define ckh_isearch JEMALLOC_N(ckh_isearch) #define ckh_iter JEMALLOC_N(ckh_iter) #define ckh_new JEMALLOC_N(ckh_new) #define ckh_pointer_hash JEMALLOC_N(ckh_pointer_hash) #define ckh_pointer_keycomp JEMALLOC_N(ckh_pointer_keycomp) #define ckh_rebuild JEMALLOC_N(ckh_rebuild) #define ckh_remove JEMALLOC_N(ckh_remove) #define ckh_search JEMALLOC_N(ckh_search) #define ckh_string_hash JEMALLOC_N(ckh_string_hash) #define ckh_string_keycomp JEMALLOC_N(ckh_string_keycomp) #define ckh_try_bucket_insert JEMALLOC_N(ckh_try_bucket_insert) #define ckh_try_insert JEMALLOC_N(ckh_try_insert) #define ctl_boot JEMALLOC_N(ctl_boot) #define 
ctl_bymib JEMALLOC_N(ctl_bymib) #define ctl_byname JEMALLOC_N(ctl_byname) #define ctl_nametomib JEMALLOC_N(ctl_nametomib) #define ctl_postfork_child JEMALLOC_N(ctl_postfork_child) #define ctl_postfork_parent JEMALLOC_N(ctl_postfork_parent) #define ctl_prefork JEMALLOC_N(ctl_prefork) #define dss_prec_names JEMALLOC_N(dss_prec_names) #define extent_tree_ad_first JEMALLOC_N(extent_tree_ad_first) #define extent_tree_ad_insert JEMALLOC_N(extent_tree_ad_insert) #define extent_tree_ad_iter JEMALLOC_N(extent_tree_ad_iter) #define extent_tree_ad_iter_recurse JEMALLOC_N(extent_tree_ad_iter_recurse) #define extent_tree_ad_iter_start JEMALLOC_N(extent_tree_ad_iter_start) #define extent_tree_ad_last JEMALLOC_N(extent_tree_ad_last) #define extent_tree_ad_new JEMALLOC_N(extent_tree_ad_new) #define extent_tree_ad_next JEMALLOC_N(extent_tree_ad_next) #define extent_tree_ad_nsearch JEMALLOC_N(extent_tree_ad_nsearch) #define extent_tree_ad_prev JEMALLOC_N(extent_tree_ad_prev) #define extent_tree_ad_psearch JEMALLOC_N(extent_tree_ad_psearch) #define extent_tree_ad_remove JEMALLOC_N(extent_tree_ad_remove) #define extent_tree_ad_reverse_iter JEMALLOC_N(extent_tree_ad_reverse_iter) #define extent_tree_ad_reverse_iter_recurse JEMALLOC_N(extent_tree_ad_reverse_iter_recurse) #define extent_tree_ad_reverse_iter_start JEMALLOC_N(extent_tree_ad_reverse_iter_start) #define extent_tree_ad_search JEMALLOC_N(extent_tree_ad_search) #define extent_tree_szad_first JEMALLOC_N(extent_tree_szad_first) #define extent_tree_szad_insert JEMALLOC_N(extent_tree_szad_insert) #define extent_tree_szad_iter JEMALLOC_N(extent_tree_szad_iter) #define extent_tree_szad_iter_recurse JEMALLOC_N(extent_tree_szad_iter_recurse) #define extent_tree_szad_iter_start JEMALLOC_N(extent_tree_szad_iter_start) #define extent_tree_szad_last JEMALLOC_N(extent_tree_szad_last) #define extent_tree_szad_new JEMALLOC_N(extent_tree_szad_new) #define extent_tree_szad_next JEMALLOC_N(extent_tree_szad_next) #define extent_tree_szad_nsearch JEMALLOC_N(extent_tree_szad_nsearch) #define extent_tree_szad_prev JEMALLOC_N(extent_tree_szad_prev) #define extent_tree_szad_psearch JEMALLOC_N(extent_tree_szad_psearch) #define extent_tree_szad_remove JEMALLOC_N(extent_tree_szad_remove) #define extent_tree_szad_reverse_iter JEMALLOC_N(extent_tree_szad_reverse_iter) #define extent_tree_szad_reverse_iter_recurse JEMALLOC_N(extent_tree_szad_reverse_iter_recurse) #define extent_tree_szad_reverse_iter_start JEMALLOC_N(extent_tree_szad_reverse_iter_start) #define extent_tree_szad_search JEMALLOC_N(extent_tree_szad_search) #define get_errno JEMALLOC_N(get_errno) #define hash JEMALLOC_N(hash) #define hash_fmix_32 JEMALLOC_N(hash_fmix_32) #define hash_fmix_64 JEMALLOC_N(hash_fmix_64) #define hash_get_block_32 JEMALLOC_N(hash_get_block_32) #define hash_get_block_64 JEMALLOC_N(hash_get_block_64) #define hash_rotl_32 JEMALLOC_N(hash_rotl_32) #define hash_rotl_64 JEMALLOC_N(hash_rotl_64) #define hash_x64_128 JEMALLOC_N(hash_x64_128) #define hash_x86_128 JEMALLOC_N(hash_x86_128) #define hash_x86_32 JEMALLOC_N(hash_x86_32) #define huge_allocated JEMALLOC_N(huge_allocated) #define huge_boot JEMALLOC_N(huge_boot) #define huge_dalloc JEMALLOC_N(huge_dalloc) #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk) #define huge_malloc JEMALLOC_N(huge_malloc) #define huge_ndalloc JEMALLOC_N(huge_ndalloc) #define huge_nmalloc JEMALLOC_N(huge_nmalloc) #define huge_palloc JEMALLOC_N(huge_palloc) #define huge_postfork_child JEMALLOC_N(huge_postfork_child) #define huge_postfork_parent 
JEMALLOC_N(huge_postfork_parent) #define huge_prefork JEMALLOC_N(huge_prefork) #define huge_prof_ctx_get JEMALLOC_N(huge_prof_ctx_get) #define huge_prof_ctx_set JEMALLOC_N(huge_prof_ctx_set) #define huge_ralloc JEMALLOC_N(huge_ralloc) #define huge_ralloc_no_move JEMALLOC_N(huge_ralloc_no_move) #define huge_salloc JEMALLOC_N(huge_salloc) #define icalloc JEMALLOC_N(icalloc) #define icalloct JEMALLOC_N(icalloct) #define idalloc JEMALLOC_N(idalloc) #define idalloct JEMALLOC_N(idalloct) #define imalloc JEMALLOC_N(imalloc) #define imalloct JEMALLOC_N(imalloct) #define in_valgrind JEMALLOC_N(in_valgrind) #define ipalloc JEMALLOC_N(ipalloc) #define ipalloct JEMALLOC_N(ipalloct) #define iqalloc JEMALLOC_N(iqalloc) #define iqalloct JEMALLOC_N(iqalloct) #define iralloc JEMALLOC_N(iralloc) #define iralloct JEMALLOC_N(iralloct) #define iralloct_realign JEMALLOC_N(iralloct_realign) #define isalloc JEMALLOC_N(isalloc) #define isthreaded JEMALLOC_N(isthreaded) #define ivsalloc JEMALLOC_N(ivsalloc) #define ixalloc JEMALLOC_N(ixalloc) #define jemalloc_postfork_child JEMALLOC_N(jemalloc_postfork_child) #define jemalloc_postfork_parent JEMALLOC_N(jemalloc_postfork_parent) #define jemalloc_prefork JEMALLOC_N(jemalloc_prefork) #define lg_floor JEMALLOC_N(lg_floor) #define malloc_cprintf JEMALLOC_N(malloc_cprintf) #define malloc_mutex_init JEMALLOC_N(malloc_mutex_init) #define malloc_mutex_lock JEMALLOC_N(malloc_mutex_lock) #define malloc_mutex_postfork_child JEMALLOC_N(malloc_mutex_postfork_child) #define malloc_mutex_postfork_parent JEMALLOC_N(malloc_mutex_postfork_parent) #define malloc_mutex_prefork JEMALLOC_N(malloc_mutex_prefork) #define malloc_mutex_unlock JEMALLOC_N(malloc_mutex_unlock) #define malloc_rwlock_init JEMALLOC_N(malloc_rwlock_init) #define malloc_rwlock_postfork_child JEMALLOC_N(malloc_rwlock_postfork_child) #define malloc_rwlock_postfork_parent JEMALLOC_N(malloc_rwlock_postfork_parent) #define malloc_rwlock_prefork JEMALLOC_N(malloc_rwlock_prefork) #define malloc_rwlock_rdlock JEMALLOC_N(malloc_rwlock_rdlock) #define malloc_rwlock_wrlock JEMALLOC_N(malloc_rwlock_wrlock) #define malloc_rwlock_unlock JEMALLOC_N(malloc_rwlock_unlock) #define malloc_rwlock_destroy JEMALLOC_N(malloc_rwlock_destroy) #define malloc_printf JEMALLOC_N(malloc_printf) #define malloc_snprintf JEMALLOC_N(malloc_snprintf) #define malloc_strtoumax JEMALLOC_N(malloc_strtoumax) #define malloc_tsd_boot JEMALLOC_N(malloc_tsd_boot) #define malloc_tsd_cleanup_register JEMALLOC_N(malloc_tsd_cleanup_register) #define malloc_tsd_dalloc JEMALLOC_N(malloc_tsd_dalloc) #define malloc_tsd_malloc JEMALLOC_N(malloc_tsd_malloc) #define malloc_tsd_no_cleanup JEMALLOC_N(malloc_tsd_no_cleanup) #define malloc_vcprintf JEMALLOC_N(malloc_vcprintf) #define malloc_vsnprintf JEMALLOC_N(malloc_vsnprintf) #define malloc_write JEMALLOC_N(malloc_write) #define map_bias JEMALLOC_N(map_bias) #define mb_write JEMALLOC_N(mb_write) #define mutex_boot JEMALLOC_N(mutex_boot) #define narenas_auto JEMALLOC_N(narenas_auto) #define narenas_total JEMALLOC_N(narenas_total) #define narenas_total_get JEMALLOC_N(narenas_total_get) #define ncpus JEMALLOC_N(ncpus) #define nhbins JEMALLOC_N(nhbins) #define npools JEMALLOC_N(npools) #define npools_cnt JEMALLOC_N(npools_cnt) #define opt_abort JEMALLOC_N(opt_abort) #define opt_dss JEMALLOC_N(opt_dss) #define opt_junk JEMALLOC_N(opt_junk) #define opt_lg_chunk JEMALLOC_N(opt_lg_chunk) #define opt_lg_dirty_mult JEMALLOC_N(opt_lg_dirty_mult) #define opt_lg_prof_interval JEMALLOC_N(opt_lg_prof_interval) #define 
opt_lg_prof_sample JEMALLOC_N(opt_lg_prof_sample) #define opt_lg_tcache_max JEMALLOC_N(opt_lg_tcache_max) #define opt_narenas JEMALLOC_N(opt_narenas) #define opt_prof JEMALLOC_N(opt_prof) #define opt_prof_accum JEMALLOC_N(opt_prof_accum) #define opt_prof_active JEMALLOC_N(opt_prof_active) #define opt_prof_final JEMALLOC_N(opt_prof_final) #define opt_prof_gdump JEMALLOC_N(opt_prof_gdump) #define opt_prof_leak JEMALLOC_N(opt_prof_leak) #define opt_prof_prefix JEMALLOC_N(opt_prof_prefix) #define opt_quarantine JEMALLOC_N(opt_quarantine) #define opt_redzone JEMALLOC_N(opt_redzone) #define opt_stats_print JEMALLOC_N(opt_stats_print) #define opt_tcache JEMALLOC_N(opt_tcache) #define opt_utrace JEMALLOC_N(opt_utrace) #define opt_xmalloc JEMALLOC_N(opt_xmalloc) #define opt_zero JEMALLOC_N(opt_zero) #define p2rz JEMALLOC_N(p2rz) #define pages_purge JEMALLOC_N(pages_purge) #define pools_shared_data_initialized JEMALLOC_N(pools_shared_data_initialized) #define pow2_ceil JEMALLOC_N(pow2_ceil) #define prof_backtrace JEMALLOC_N(prof_backtrace) #define prof_boot0 JEMALLOC_N(prof_boot0) #define prof_boot1 JEMALLOC_N(prof_boot1) #define prof_boot2 JEMALLOC_N(prof_boot2) #define prof_bt_count JEMALLOC_N(prof_bt_count) #define prof_ctx_get JEMALLOC_N(prof_ctx_get) #define prof_ctx_set JEMALLOC_N(prof_ctx_set) #define prof_dump_open JEMALLOC_N(prof_dump_open) #define prof_free JEMALLOC_N(prof_free) #define prof_gdump JEMALLOC_N(prof_gdump) #define prof_idump JEMALLOC_N(prof_idump) #define prof_interval JEMALLOC_N(prof_interval) #define prof_lookup JEMALLOC_N(prof_lookup) #define prof_malloc JEMALLOC_N(prof_malloc) #define prof_malloc_record_object JEMALLOC_N(prof_malloc_record_object) #define prof_mdump JEMALLOC_N(prof_mdump) #define prof_postfork_child JEMALLOC_N(prof_postfork_child) #define prof_postfork_parent JEMALLOC_N(prof_postfork_parent) #define prof_prefork JEMALLOC_N(prof_prefork) #define prof_realloc JEMALLOC_N(prof_realloc) #define prof_sample_accum_update JEMALLOC_N(prof_sample_accum_update) #define prof_sample_threshold_update JEMALLOC_N(prof_sample_threshold_update) #define prof_tdata_booted JEMALLOC_N(prof_tdata_booted) #define prof_tdata_cleanup JEMALLOC_N(prof_tdata_cleanup) #define prof_tdata_get JEMALLOC_N(prof_tdata_get) #define prof_tdata_init JEMALLOC_N(prof_tdata_init) #define prof_tdata_initialized JEMALLOC_N(prof_tdata_initialized) #define prof_tdata_tls JEMALLOC_N(prof_tdata_tls) #define prof_tdata_tsd JEMALLOC_N(prof_tdata_tsd) #define prof_tdata_tsd_boot JEMALLOC_N(prof_tdata_tsd_boot) #define prof_tdata_tsd_cleanup_wrapper JEMALLOC_N(prof_tdata_tsd_cleanup_wrapper) #define prof_tdata_tsd_get JEMALLOC_N(prof_tdata_tsd_get) #define prof_tdata_tsd_get_wrapper JEMALLOC_N(prof_tdata_tsd_get_wrapper) #define prof_tdata_tsd_init_head JEMALLOC_N(prof_tdata_tsd_init_head) #define prof_tdata_tsd_set JEMALLOC_N(prof_tdata_tsd_set) #define quarantine JEMALLOC_N(quarantine) #define quarantine_alloc_hook JEMALLOC_N(quarantine_alloc_hook) #define quarantine_boot JEMALLOC_N(quarantine_boot) #define quarantine_booted JEMALLOC_N(quarantine_booted) #define quarantine_cleanup JEMALLOC_N(quarantine_cleanup) #define quarantine_init JEMALLOC_N(quarantine_init) #define quarantine_tls JEMALLOC_N(quarantine_tls) #define quarantine_tsd JEMALLOC_N(quarantine_tsd) #define quarantine_tsd_boot JEMALLOC_N(quarantine_tsd_boot) #define quarantine_tsd_cleanup_wrapper JEMALLOC_N(quarantine_tsd_cleanup_wrapper) #define quarantine_tsd_get JEMALLOC_N(quarantine_tsd_get) #define quarantine_tsd_get_wrapper 
JEMALLOC_N(quarantine_tsd_get_wrapper) #define quarantine_tsd_init_head JEMALLOC_N(quarantine_tsd_init_head) #define quarantine_tsd_set JEMALLOC_N(quarantine_tsd_set) #define register_zone JEMALLOC_N(register_zone) #define rtree_delete JEMALLOC_N(rtree_delete) #define rtree_get JEMALLOC_N(rtree_get) #define rtree_get_locked JEMALLOC_N(rtree_get_locked) #define rtree_new JEMALLOC_N(rtree_new) #define rtree_postfork_child JEMALLOC_N(rtree_postfork_child) #define rtree_postfork_parent JEMALLOC_N(rtree_postfork_parent) #define rtree_prefork JEMALLOC_N(rtree_prefork) #define rtree_set JEMALLOC_N(rtree_set) #define s2u JEMALLOC_N(s2u) #define sa2u JEMALLOC_N(sa2u) #define set_errno JEMALLOC_N(set_errno) #define small_bin2size JEMALLOC_N(small_bin2size) #define small_bin2size_compute JEMALLOC_N(small_bin2size_compute) #define small_bin2size_lookup JEMALLOC_N(small_bin2size_lookup) #define small_bin2size_tab JEMALLOC_N(small_bin2size_tab) #define small_s2u JEMALLOC_N(small_s2u) #define small_s2u_compute JEMALLOC_N(small_s2u_compute) #define small_s2u_lookup JEMALLOC_N(small_s2u_lookup) #define small_size2bin JEMALLOC_N(small_size2bin) #define small_size2bin_compute JEMALLOC_N(small_size2bin_compute) #define small_size2bin_lookup JEMALLOC_N(small_size2bin_lookup) #define small_size2bin_tab JEMALLOC_N(small_size2bin_tab) #define stats_cactive JEMALLOC_N(stats_cactive) #define stats_cactive_add JEMALLOC_N(stats_cactive_add) #define stats_cactive_get JEMALLOC_N(stats_cactive_get) #define stats_cactive_sub JEMALLOC_N(stats_cactive_sub) #define stats_chunks JEMALLOC_N(stats_chunks) #define stats_print JEMALLOC_N(stats_print) #define tcache_alloc_easy JEMALLOC_N(tcache_alloc_easy) #define tcache_alloc_large JEMALLOC_N(tcache_alloc_large) #define tcache_alloc_small JEMALLOC_N(tcache_alloc_small) #define tcache_alloc_small_hard JEMALLOC_N(tcache_alloc_small_hard) #define tcache_arena_associate JEMALLOC_N(tcache_arena_associate) #define tcache_arena_dissociate JEMALLOC_N(tcache_arena_dissociate) #define tcache_bin_flush_large JEMALLOC_N(tcache_bin_flush_large) #define tcache_bin_flush_small JEMALLOC_N(tcache_bin_flush_small) #define tcache_bin_info JEMALLOC_N(tcache_bin_info) #define tcache_boot0 JEMALLOC_N(tcache_boot0) #define tcache_boot1 JEMALLOC_N(tcache_boot1) #define tcache_booted JEMALLOC_N(tcache_booted) #define tcache_create JEMALLOC_N(tcache_create) #define tcache_dalloc_large JEMALLOC_N(tcache_dalloc_large) #define tcache_dalloc_small JEMALLOC_N(tcache_dalloc_small) #define tcache_destroy JEMALLOC_N(tcache_destroy) #define tcache_enabled_booted JEMALLOC_N(tcache_enabled_booted) #define tcache_enabled_get JEMALLOC_N(tcache_enabled_get) #define tcache_enabled_initialized JEMALLOC_N(tcache_enabled_initialized) #define tcache_enabled_set JEMALLOC_N(tcache_enabled_set) #define tcache_enabled_tls JEMALLOC_N(tcache_enabled_tls) #define tcache_enabled_tsd JEMALLOC_N(tcache_enabled_tsd) #define tcache_enabled_tsd_boot JEMALLOC_N(tcache_enabled_tsd_boot) #define tcache_enabled_tsd_cleanup_wrapper JEMALLOC_N(tcache_enabled_tsd_cleanup_wrapper) #define tcache_enabled_tsd_get JEMALLOC_N(tcache_enabled_tsd_get) #define tcache_enabled_tsd_get_wrapper JEMALLOC_N(tcache_enabled_tsd_get_wrapper) #define tcache_enabled_tsd_init_head JEMALLOC_N(tcache_enabled_tsd_init_head) #define tcache_enabled_tsd_set JEMALLOC_N(tcache_enabled_tsd_set) #define tcache_event JEMALLOC_N(tcache_event) #define tcache_event_hard JEMALLOC_N(tcache_event_hard) #define tcache_flush JEMALLOC_N(tcache_flush) #define tcache_get 
JEMALLOC_N(tcache_get) #define tcache_get_hard JEMALLOC_N(tcache_get_hard) #define tcache_initialized JEMALLOC_N(tcache_initialized) #define tcache_maxclass JEMALLOC_N(tcache_maxclass) #define tcache_salloc JEMALLOC_N(tcache_salloc) #define tcache_stats_merge JEMALLOC_N(tcache_stats_merge) #define tcache_thread_cleanup JEMALLOC_N(tcache_thread_cleanup) #define tcache_tls JEMALLOC_N(tcache_tls) #define tcache_tsd JEMALLOC_N(tcache_tsd) #define tcache_tsd_boot JEMALLOC_N(tcache_tsd_boot) #define tcache_tsd_cleanup_wrapper JEMALLOC_N(tcache_tsd_cleanup_wrapper) #define tcache_tsd_get JEMALLOC_N(tcache_tsd_get) #define tcache_tsd_get_wrapper JEMALLOC_N(tcache_tsd_get_wrapper) #define tcache_tsd_init_head JEMALLOC_N(tcache_tsd_init_head) #define tcache_tsd_set JEMALLOC_N(tcache_tsd_set) #define thread_allocated_booted JEMALLOC_N(thread_allocated_booted) #define thread_allocated_initialized JEMALLOC_N(thread_allocated_initialized) #define thread_allocated_tls JEMALLOC_N(thread_allocated_tls) #define thread_allocated_tsd JEMALLOC_N(thread_allocated_tsd) #define thread_allocated_tsd_boot JEMALLOC_N(thread_allocated_tsd_boot) #define thread_allocated_tsd_cleanup_wrapper JEMALLOC_N(thread_allocated_tsd_cleanup_wrapper) #define thread_allocated_tsd_get JEMALLOC_N(thread_allocated_tsd_get) #define thread_allocated_tsd_get_wrapper JEMALLOC_N(thread_allocated_tsd_get_wrapper) #define thread_allocated_tsd_init_head JEMALLOC_N(thread_allocated_tsd_init_head) #define thread_allocated_tsd_set JEMALLOC_N(thread_allocated_tsd_set) #define tsd_init_check_recursion JEMALLOC_N(tsd_init_check_recursion) #define tsd_init_finish JEMALLOC_N(tsd_init_finish) #define u2rz JEMALLOC_N(u2rz) #define valgrind_freelike_block JEMALLOC_N(valgrind_freelike_block) #define valgrind_make_mem_defined JEMALLOC_N(valgrind_make_mem_defined) #define valgrind_make_mem_noaccess JEMALLOC_N(valgrind_make_mem_noaccess) #define valgrind_make_mem_undefined JEMALLOC_N(valgrind_make_mem_undefined) #define pool_new JEMALLOC_N(pool_new) #define pool_destroy JEMALLOC_N(pool_destroy) #define pools_lock JEMALLOC_N(pools_lock) #define pool_base_lock JEMALLOC_N(pool_base_lock) #define pool_prefork JEMALLOC_N(pool_prefork) #define pool_postfork_parent JEMALLOC_N(pool_postfork_parent) #define pool_postfork_child JEMALLOC_N(pool_postfork_child) #define pool_alloc JEMALLOC_N(pool_alloc) #define vec_get JEMALLOC_N(vec_get) #define vec_set JEMALLOC_N(vec_set) #define vec_delete JEMALLOC_N(vec_delete) vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/private_unnamespace.h000066400000000000000000000257561361505074100304760ustar00rootroot00000000000000#undef a0calloc #undef a0free #undef a0malloc #undef arena_alloc_junk_small #undef arena_bin_index #undef arena_bin_info #undef arena_boot #undef arena_chunk_alloc_huge #undef arena_chunk_dalloc_huge #undef arena_dalloc #undef arena_dalloc_bin #undef arena_dalloc_bin_locked #undef arena_dalloc_junk_large #undef arena_dalloc_junk_small #undef arena_dalloc_large #undef arena_dalloc_large_locked #undef arena_dalloc_small #undef arena_dss_prec_get #undef arena_dss_prec_set #undef arena_malloc #undef arena_malloc_large #undef arena_malloc_small #undef arena_mapbits_allocated_get #undef arena_mapbits_binind_get #undef arena_mapbits_dirty_get #undef arena_mapbits_get #undef arena_mapbits_large_binind_set #undef arena_mapbits_large_get #undef arena_mapbits_large_set #undef arena_mapbits_large_size_get #undef arena_mapbits_small_runind_get #undef arena_mapbits_small_set #undef arena_mapbits_unallocated_set 
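/*
 * A minimal sketch of the symbol-mangling scheme these generated headers
 * implement (an illustration added for clarity, not part of the upstream
 * jemalloc sources; the identifier arena_salloc is only an example).
 * private_namespace.h maps every library-private name through JEMALLOC_N(),
 * which -- with JEMALLOC_PRIVATE_NAMESPACE set to je_vmem_je_ in
 * jemalloc_internal_defs.h above -- prefixes the name so a static build of
 * the allocator cannot collide with identically named symbols in the
 * application.  This header then removes those mappings again so the same
 * sources can be re-read without the mangled names.
 */
#if 0 /* illustration only, never compiled */
/* In private_namespace.h: every internal reference to arena_salloc is
 * rewritten to the prefixed symbol (je_vmem_je_arena_salloc). */
#define arena_salloc JEMALLOC_N(arena_salloc)
/* In this file: the mapping is taken back so the plain name is usable. */
#undef arena_salloc
#endif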
#undef arena_mapbits_unallocated_size_get #undef arena_mapbits_unallocated_size_set #undef arena_mapbits_unzeroed_get #undef arena_mapbits_unzeroed_set #undef arena_mapbitsp_get #undef arena_mapbitsp_read #undef arena_mapbitsp_write #undef arena_mapelm_to_pageind #undef arena_mapp_get #undef arena_maxclass #undef arena_new #undef arena_palloc #undef arena_postfork_child #undef arena_postfork_parent #undef arena_prefork #undef arena_prof_accum #undef arena_prof_accum_impl #undef arena_prof_accum_locked #undef arena_prof_ctx_get #undef arena_prof_ctx_set #undef arena_prof_promoted #undef arena_ptr_small_binind_get #undef arena_purge_all #undef arena_quarantine_junk_small #undef arena_ralloc #undef arena_ralloc_junk_large #undef arena_ralloc_no_move #undef arena_redzone_corruption #undef arena_run_regind #undef arena_runs_avail_tree_iter #undef arena_salloc #undef arena_stats_merge #undef arena_tcache_fill_small #undef arenas #undef pools #undef arenas_booted #undef arenas_cleanup #undef arenas_extend #undef arenas_initialized #undef arenas_lock #undef arenas_tls #undef arenas_tsd #undef arenas_tsd_boot #undef arenas_tsd_cleanup_wrapper #undef arenas_tsd_get #undef arenas_tsd_get_wrapper #undef arenas_tsd_init_head #undef arenas_tsd_set #undef atomic_add_u #undef atomic_add_uint32 #undef atomic_add_uint64 #undef atomic_add_z #undef atomic_sub_u #undef atomic_sub_uint32 #undef atomic_sub_uint64 #undef atomic_sub_z #undef base_alloc #undef base_boot #undef base_calloc #undef base_free_fn #undef base_malloc_fn #undef base_node_alloc #undef base_node_dalloc #undef base_pool #undef base_postfork_child #undef base_postfork_parent #undef base_prefork #undef bitmap_full #undef bitmap_get #undef bitmap_info_init #undef bitmap_info_ngroups #undef bitmap_init #undef bitmap_set #undef bitmap_sfu #undef bitmap_size #undef bitmap_unset #undef bt_init #undef buferror #undef choose_arena #undef choose_arena_hard #undef chunk_alloc_arena #undef chunk_alloc_base #undef chunk_alloc_default #undef chunk_alloc_dss #undef chunk_alloc_mmap #undef chunk_global_boot #undef chunk_boot #undef chunk_dalloc_default #undef chunk_dalloc_mmap #undef chunk_dss_boot #undef chunk_dss_postfork_child #undef chunk_dss_postfork_parent #undef chunk_dss_prec_get #undef chunk_dss_prec_set #undef chunk_dss_prefork #undef chunk_in_dss #undef chunk_npages #undef chunk_postfork_child #undef chunk_postfork_parent #undef chunk_prefork #undef chunk_unmap #undef chunk_record #undef chunks_mtx #undef chunks_rtree #undef chunksize #undef chunksize_mask #undef ckh_bucket_search #undef ckh_count #undef ckh_delete #undef ckh_evict_reloc_insert #undef ckh_insert #undef ckh_isearch #undef ckh_iter #undef ckh_new #undef ckh_pointer_hash #undef ckh_pointer_keycomp #undef ckh_rebuild #undef ckh_remove #undef ckh_search #undef ckh_string_hash #undef ckh_string_keycomp #undef ckh_try_bucket_insert #undef ckh_try_insert #undef ctl_boot #undef ctl_bymib #undef ctl_byname #undef ctl_nametomib #undef ctl_postfork_child #undef ctl_postfork_parent #undef ctl_prefork #undef dss_prec_names #undef extent_tree_ad_first #undef extent_tree_ad_insert #undef extent_tree_ad_iter #undef extent_tree_ad_iter_recurse #undef extent_tree_ad_iter_start #undef extent_tree_ad_last #undef extent_tree_ad_new #undef extent_tree_ad_next #undef extent_tree_ad_nsearch #undef extent_tree_ad_prev #undef extent_tree_ad_psearch #undef extent_tree_ad_remove #undef extent_tree_ad_reverse_iter #undef extent_tree_ad_reverse_iter_recurse #undef extent_tree_ad_reverse_iter_start #undef 
extent_tree_ad_search #undef extent_tree_szad_first #undef extent_tree_szad_insert #undef extent_tree_szad_iter #undef extent_tree_szad_iter_recurse #undef extent_tree_szad_iter_start #undef extent_tree_szad_last #undef extent_tree_szad_new #undef extent_tree_szad_next #undef extent_tree_szad_nsearch #undef extent_tree_szad_prev #undef extent_tree_szad_psearch #undef extent_tree_szad_remove #undef extent_tree_szad_reverse_iter #undef extent_tree_szad_reverse_iter_recurse #undef extent_tree_szad_reverse_iter_start #undef extent_tree_szad_search #undef get_errno #undef hash #undef hash_fmix_32 #undef hash_fmix_64 #undef hash_get_block_32 #undef hash_get_block_64 #undef hash_rotl_32 #undef hash_rotl_64 #undef hash_x64_128 #undef hash_x86_128 #undef hash_x86_32 #undef huge_allocated #undef huge_boot #undef huge_dalloc #undef huge_dalloc_junk #undef huge_malloc #undef huge_ndalloc #undef huge_nmalloc #undef huge_palloc #undef huge_postfork_child #undef huge_postfork_parent #undef huge_prefork #undef huge_prof_ctx_get #undef huge_prof_ctx_set #undef huge_ralloc #undef huge_ralloc_no_move #undef huge_salloc #undef icalloc #undef icalloct #undef idalloc #undef idalloct #undef imalloc #undef imalloct #undef in_valgrind #undef ipalloc #undef ipalloct #undef iqalloc #undef iqalloct #undef iralloc #undef iralloct #undef iralloct_realign #undef isalloc #undef isthreaded #undef ivsalloc #undef ixalloc #undef jemalloc_postfork_child #undef jemalloc_postfork_parent #undef jemalloc_prefork #undef lg_floor #undef malloc_cprintf #undef malloc_mutex_init #undef malloc_mutex_lock #undef malloc_mutex_postfork_child #undef malloc_mutex_postfork_parent #undef malloc_mutex_prefork #undef malloc_mutex_unlock #undef malloc_rwlock_init #undef malloc_rwlock_postfork_child #undef malloc_rwlock_postfork_parent #undef malloc_rwlock_prefork #undef malloc_rwlock_rdlock #undef malloc_rwlock_wrlock #undef malloc_rwlock_unlock #undef malloc_rwlock_destroy #undef malloc_printf #undef malloc_snprintf #undef malloc_strtoumax #undef malloc_tsd_boot #undef malloc_tsd_cleanup_register #undef malloc_tsd_dalloc #undef malloc_tsd_malloc #undef malloc_tsd_no_cleanup #undef malloc_vcprintf #undef malloc_vsnprintf #undef malloc_write #undef map_bias #undef mb_write #undef mutex_boot #undef narenas_auto #undef narenas_total #undef narenas_total_get #undef ncpus #undef nhbins #undef npools #undef npools_cnt #undef opt_abort #undef opt_dss #undef opt_junk #undef opt_lg_chunk #undef opt_lg_dirty_mult #undef opt_lg_prof_interval #undef opt_lg_prof_sample #undef opt_lg_tcache_max #undef opt_narenas #undef opt_prof #undef opt_prof_accum #undef opt_prof_active #undef opt_prof_final #undef opt_prof_gdump #undef opt_prof_leak #undef opt_prof_prefix #undef opt_quarantine #undef opt_redzone #undef opt_stats_print #undef opt_tcache #undef opt_utrace #undef opt_xmalloc #undef opt_zero #undef p2rz #undef pages_purge #undef pools_shared_data_initialized #undef pow2_ceil #undef prof_backtrace #undef prof_boot0 #undef prof_boot1 #undef prof_boot2 #undef prof_bt_count #undef prof_ctx_get #undef prof_ctx_set #undef prof_dump_open #undef prof_free #undef prof_gdump #undef prof_idump #undef prof_interval #undef prof_lookup #undef prof_malloc #undef prof_malloc_record_object #undef prof_mdump #undef prof_postfork_child #undef prof_postfork_parent #undef prof_prefork #undef prof_realloc #undef prof_sample_accum_update #undef prof_sample_threshold_update #undef prof_tdata_booted #undef prof_tdata_cleanup #undef prof_tdata_get #undef prof_tdata_init #undef 
prof_tdata_initialized #undef prof_tdata_tls #undef prof_tdata_tsd #undef prof_tdata_tsd_boot #undef prof_tdata_tsd_cleanup_wrapper #undef prof_tdata_tsd_get #undef prof_tdata_tsd_get_wrapper #undef prof_tdata_tsd_init_head #undef prof_tdata_tsd_set #undef quarantine #undef quarantine_alloc_hook #undef quarantine_boot #undef quarantine_booted #undef quarantine_cleanup #undef quarantine_init #undef quarantine_tls #undef quarantine_tsd #undef quarantine_tsd_boot #undef quarantine_tsd_cleanup_wrapper #undef quarantine_tsd_get #undef quarantine_tsd_get_wrapper #undef quarantine_tsd_init_head #undef quarantine_tsd_set #undef register_zone #undef rtree_delete #undef rtree_get #undef rtree_get_locked #undef rtree_new #undef rtree_postfork_child #undef rtree_postfork_parent #undef rtree_prefork #undef rtree_set #undef s2u #undef sa2u #undef set_errno #undef small_bin2size #undef small_bin2size_compute #undef small_bin2size_lookup #undef small_bin2size_tab #undef small_s2u #undef small_s2u_compute #undef small_s2u_lookup #undef small_size2bin #undef small_size2bin_compute #undef small_size2bin_lookup #undef small_size2bin_tab #undef stats_cactive #undef stats_cactive_add #undef stats_cactive_get #undef stats_cactive_sub #undef stats_chunks #undef stats_print #undef tcache_alloc_easy #undef tcache_alloc_large #undef tcache_alloc_small #undef tcache_alloc_small_hard #undef tcache_arena_associate #undef tcache_arena_dissociate #undef tcache_bin_flush_large #undef tcache_bin_flush_small #undef tcache_bin_info #undef tcache_boot0 #undef tcache_boot1 #undef tcache_booted #undef tcache_create #undef tcache_dalloc_large #undef tcache_dalloc_small #undef tcache_destroy #undef tcache_enabled_booted #undef tcache_enabled_get #undef tcache_enabled_initialized #undef tcache_enabled_set #undef tcache_enabled_tls #undef tcache_enabled_tsd #undef tcache_enabled_tsd_boot #undef tcache_enabled_tsd_cleanup_wrapper #undef tcache_enabled_tsd_get #undef tcache_enabled_tsd_get_wrapper #undef tcache_enabled_tsd_init_head #undef tcache_enabled_tsd_set #undef tcache_event #undef tcache_event_hard #undef tcache_flush #undef tcache_get #undef tcache_get_hard #undef tcache_initialized #undef tcache_maxclass #undef tcache_salloc #undef tcache_stats_merge #undef tcache_thread_cleanup #undef tcache_tls #undef tcache_tsd #undef tcache_tsd_boot #undef tcache_tsd_cleanup_wrapper #undef tcache_tsd_get #undef tcache_tsd_get_wrapper #undef tcache_tsd_init_head #undef tcache_tsd_set #undef thread_allocated_booted #undef thread_allocated_initialized #undef thread_allocated_tls #undef thread_allocated_tsd #undef thread_allocated_tsd_boot #undef thread_allocated_tsd_cleanup_wrapper #undef thread_allocated_tsd_get #undef thread_allocated_tsd_get_wrapper #undef thread_allocated_tsd_init_head #undef thread_allocated_tsd_set #undef tsd_init_check_recursion #undef tsd_init_finish #undef u2rz #undef valgrind_freelike_block #undef valgrind_make_mem_defined #undef valgrind_make_mem_noaccess #undef valgrind_make_mem_undefined #undef pool_new #undef pool_destroy #undef pools_lock #undef pool_base_lock #undef pool_prefork #undef pool_postfork_parent #undef pool_postfork_child #undef pool_alloc #undef vec_get #undef vec_set #undef vec_delete vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/public_namespace.h000066400000000000000000000030001361505074100277110ustar00rootroot00000000000000#define je_pool_create JEMALLOC_N(pool_create) #define je_pool_delete JEMALLOC_N(pool_delete) #define je_pool_malloc JEMALLOC_N(pool_malloc) #define 
je_pool_calloc JEMALLOC_N(pool_calloc) #define je_pool_ralloc JEMALLOC_N(pool_ralloc) #define je_pool_aligned_alloc JEMALLOC_N(pool_aligned_alloc) #define je_pool_free JEMALLOC_N(pool_free) #define je_pool_malloc_usable_size JEMALLOC_N(pool_malloc_usable_size) #define je_pool_malloc_stats_print JEMALLOC_N(pool_malloc_stats_print) #define je_pool_extend JEMALLOC_N(pool_extend) #define je_pool_set_alloc_funcs JEMALLOC_N(pool_set_alloc_funcs) #define je_pool_check JEMALLOC_N(pool_check) #define je_malloc_conf JEMALLOC_N(malloc_conf) #define je_malloc_message JEMALLOC_N(malloc_message) #define je_malloc JEMALLOC_N(malloc) #define je_calloc JEMALLOC_N(calloc) #define je_posix_memalign JEMALLOC_N(posix_memalign) #define je_aligned_alloc JEMALLOC_N(aligned_alloc) #define je_realloc JEMALLOC_N(realloc) #define je_free JEMALLOC_N(free) #define je_mallocx JEMALLOC_N(mallocx) #define je_rallocx JEMALLOC_N(rallocx) #define je_xallocx JEMALLOC_N(xallocx) #define je_sallocx JEMALLOC_N(sallocx) #define je_dallocx JEMALLOC_N(dallocx) #define je_nallocx JEMALLOC_N(nallocx) #define je_mallctl JEMALLOC_N(mallctl) #define je_mallctlnametomib JEMALLOC_N(mallctlnametomib) #define je_mallctlbymib JEMALLOC_N(mallctlbymib) #define je_navsnprintf JEMALLOC_N(navsnprintf) #define je_malloc_stats_print JEMALLOC_N(malloc_stats_print) #define je_malloc_usable_size JEMALLOC_N(malloc_usable_size) vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/public_unnamespace.h000066400000000000000000000013201361505074100302570ustar00rootroot00000000000000#undef je_pool_create #undef je_pool_delete #undef je_pool_malloc #undef je_pool_calloc #undef je_pool_ralloc #undef je_pool_aligned_alloc #undef je_pool_free #undef je_pool_malloc_usable_size #undef je_pool_malloc_stats_print #undef je_pool_extend #undef je_pool_set_alloc_funcs #undef je_pool_check #undef je_malloc_conf #undef je_malloc_message #undef je_malloc #undef je_calloc #undef je_posix_memalign #undef je_aligned_alloc #undef je_realloc #undef je_free #undef je_mallocx #undef je_rallocx #undef je_xallocx #undef je_sallocx #undef je_dallocx #undef je_nallocx #undef je_mallctl #undef je_mallctlnametomib #undef je_mallctlbymib #undef je_navsnprintf #undef je_malloc_stats_print #undef je_malloc_usable_size vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/internal/size_classes.h000066400000000000000000006063031361505074100271250ustar00rootroot00000000000000/* This file was automatically generated by size_classes.sh. */ /******************************************************************************/ #ifdef JEMALLOC_H_TYPES /* * This header requires LG_SIZEOF_PTR, LG_TINY_MIN, LG_QUANTUM, and LG_PAGE to * be defined prior to inclusion, and it in turn defines: * * LG_SIZE_CLASS_GROUP: Lg of size class count for each size doubling. * SIZE_CLASSES: Complete table of * SC(index, lg_delta, size, bin, lg_delta_lookup) tuples. * index: Size class index. * lg_grp: Lg group base size (no deltas added). * lg_delta: Lg delta to previous size class. * ndelta: Delta multiplier. 
size == 1< 255) # error "Too many small size classes" #endif #endif /* JEMALLOC_H_TYPES */ /******************************************************************************/ #ifdef JEMALLOC_H_STRUCTS #endif /* JEMALLOC_H_STRUCTS */ /******************************************************************************/ #ifdef JEMALLOC_H_EXTERNS #endif /* JEMALLOC_H_EXTERNS */ /******************************************************************************/ #ifdef JEMALLOC_H_INLINES #endif /* JEMALLOC_H_INLINES */ /******************************************************************************/ vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc.h000066400000000000000000000246621361505074100244120ustar00rootroot00000000000000#ifndef JEMALLOC_H_ #define JEMALLOC_H_ #ifdef __cplusplus extern "C" { #endif /* Defined if __attribute__((...)) syntax is supported. */ /* #undef JEMALLOC_HAVE_ATTR */ /* Defined if alloc_size attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_ALLOC_SIZE */ /* Defined if format(gnu_printf, ...) attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF */ /* Defined if format(printf, ...) attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_FORMAT_PRINTF */ /* * Define overrides for non-standard allocator-related functions if they are * present on the system. */ /* #undef JEMALLOC_OVERRIDE_MEMALIGN */ /* #undef JEMALLOC_OVERRIDE_VALLOC */ /* * At least Linux omits the "const" in: * * size_t malloc_usable_size(const void *ptr); * * Match the operating system's prototype. */ #define JEMALLOC_USABLE_SIZE_CONST const /* * If defined, specify throw() for the public function prototypes when compiling * with C++. The only justification for this is to match the prototypes that * glibc defines. */ /* #undef JEMALLOC_USE_CXX_THROW */ #ifdef _MSC_VER # ifdef _WIN64 # define LG_SIZEOF_PTR_WIN 3 # else # define LG_SIZEOF_PTR_WIN 2 # endif #endif /* sizeof(void *) == 2^LG_SIZEOF_PTR. */ #define LG_SIZEOF_PTR LG_SIZEOF_PTR_WIN /* * Name mangling for public symbols is controlled by --with-mangling and * --with-jemalloc-prefix. With default settings the je_ prefix is stripped by * these macro definitions. 
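 *
 * Editorial illustration (not generated text): unless JEMALLOC_NO_RENAME is
 * defined, the block below maps each public name onto a je_vmem_-prefixed
 * symbol, so a reference such as
 *
 *   void *p = je_malloc(64);
 *
 * links against je_vmem_malloc() and is kept from colliding with another
 * jemalloc built into the same process.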
*/ #ifndef JEMALLOC_NO_RENAME # define je_pool_create je_vmem_pool_create # define je_pool_delete je_vmem_pool_delete # define je_pool_malloc je_vmem_pool_malloc # define je_pool_calloc je_vmem_pool_calloc # define je_pool_ralloc je_vmem_pool_ralloc # define je_pool_aligned_alloc je_vmem_pool_aligned_alloc # define je_pool_free je_vmem_pool_free # define je_pool_malloc_usable_size je_vmem_pool_malloc_usable_size # define je_pool_malloc_stats_print je_vmem_pool_malloc_stats_print # define je_pool_extend je_vmem_pool_extend # define je_pool_set_alloc_funcs je_vmem_pool_set_alloc_funcs # define je_pool_check je_vmem_pool_check # define je_malloc_conf je_vmem_malloc_conf # define je_malloc_message je_vmem_malloc_message # define je_malloc je_vmem_malloc # define je_calloc je_vmem_calloc # define je_posix_memalign je_vmem_posix_memalign # define je_aligned_alloc je_vmem_aligned_alloc # define je_realloc je_vmem_realloc # define je_free je_vmem_free # define je_mallocx je_vmem_mallocx # define je_rallocx je_vmem_rallocx # define je_xallocx je_vmem_xallocx # define je_sallocx je_vmem_sallocx # define je_dallocx je_vmem_dallocx # define je_nallocx je_vmem_nallocx # define je_mallctl je_vmem_mallctl # define je_mallctlnametomib je_vmem_mallctlnametomib # define je_mallctlbymib je_vmem_mallctlbymib # define je_navsnprintf je_vmem_navsnprintf # define je_malloc_stats_print je_vmem_malloc_stats_print # define je_malloc_usable_size je_vmem_malloc_usable_size #endif #include #include #include #include #define JEMALLOC_VERSION "" #define JEMALLOC_VERSION_MAJOR #define JEMALLOC_VERSION_MINOR #define JEMALLOC_VERSION_BUGFIX #define JEMALLOC_VERSION_NREV #define JEMALLOC_VERSION_GID "" # define MALLOCX_LG_ALIGN(la) (la) # if LG_SIZEOF_PTR == 2 # define MALLOCX_ALIGN(a) (ffs(a)-1) # else # define MALLOCX_ALIGN(a) \ (((a) < (size_t)INT_MAX) ? ffs(a)-1 : ffs((a)>>32)+31) # endif # define MALLOCX_ZERO ((int)0x40) /* Bias arena index bits so that 0 encodes "MALLOCX_ARENA() unspecified". */ # define MALLOCX_ARENA(a) ((int)(((a)+1) << 8)) #ifdef JEMALLOC_HAVE_ATTR # define JEMALLOC_ATTR(s) __attribute__((s)) # define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default")) # define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s)) # define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s)) # define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline) #elif _MSC_VER # define JEMALLOC_ATTR(s) # ifndef JEMALLOC_EXPORT # ifdef DLLEXPORT # define JEMALLOC_EXPORT __declspec(dllexport) # else # define JEMALLOC_EXPORT __declspec(dllimport) # endif # endif # define JEMALLOC_ALIGNED(s) __declspec(align(s)) # define JEMALLOC_SECTION(s) __declspec(allocate(s)) # define JEMALLOC_NOINLINE __declspec(noinline) #else # define JEMALLOC_ATTR(s) # define JEMALLOC_EXPORT # define JEMALLOC_ALIGNED(s) # define JEMALLOC_SECTION(s) # define JEMALLOC_NOINLINE #endif /* * The je_ prefix on the following public symbol declarations is an artifact * of namespace management, and should be omitted in application code unless * JEMALLOC_NO_DEMANGLE is defined (see jemalloc_mangle.h). 
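 *
 * Minimal usage sketch (editorial illustration; buf, buf_size and the flag
 * values are hypothetical, not taken from this header):
 *
 *   pool_t *pool = je_pool_create(buf, buf_size, 1, 0);
 *   void *obj = je_pool_malloc(pool, 64);
 *   je_pool_free(pool, obj);
 *   je_pool_delete(pool);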
*/ extern JEMALLOC_EXPORT const char *je_malloc_conf; extern JEMALLOC_EXPORT void (*je_malloc_message)(void *cbopaque, const char *s); typedef struct pool_s pool_t; JEMALLOC_EXPORT pool_t *je_pool_create(void *addr, size_t size, int zeroed, int empty); JEMALLOC_EXPORT int je_pool_delete(pool_t *pool); JEMALLOC_EXPORT size_t je_pool_extend(pool_t *pool, void *addr, size_t size, int zeroed); JEMALLOC_EXPORT void *je_pool_malloc(pool_t *pool, size_t size); JEMALLOC_EXPORT void *je_pool_calloc(pool_t *pool, size_t nmemb, size_t size); JEMALLOC_EXPORT void *je_pool_ralloc(pool_t *pool, void *ptr, size_t size); JEMALLOC_EXPORT void *je_pool_aligned_alloc(pool_t *pool, size_t alignment, size_t size); JEMALLOC_EXPORT void je_pool_free(pool_t *pool, void *ptr); JEMALLOC_EXPORT size_t je_pool_malloc_usable_size(pool_t *pool, void *ptr); JEMALLOC_EXPORT void je_pool_malloc_stats_print(pool_t *pool, void (*write_cb)(void *, const char *), void *cbopaque, const char *opts); JEMALLOC_EXPORT void je_pool_set_alloc_funcs(void *(*malloc_func)(size_t), void (*free_func)(void *)); JEMALLOC_EXPORT int je_pool_check(pool_t *pool); JEMALLOC_EXPORT void *je_malloc(size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT void *je_calloc(size_t num, size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT int je_posix_memalign(void **memptr, size_t alignment, size_t size) JEMALLOC_ATTR(nonnull(1)); JEMALLOC_EXPORT void *je_aligned_alloc(size_t alignment, size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT void *je_realloc(void *ptr, size_t size); JEMALLOC_EXPORT void je_free(void *ptr); JEMALLOC_EXPORT void *je_mallocx(size_t size, int flags); JEMALLOC_EXPORT void *je_rallocx(void *ptr, size_t size, int flags); JEMALLOC_EXPORT size_t je_xallocx(void *ptr, size_t size, size_t extra, int flags); JEMALLOC_EXPORT size_t je_sallocx(const void *ptr, int flags); JEMALLOC_EXPORT void je_dallocx(void *ptr, int flags); JEMALLOC_EXPORT size_t je_nallocx(size_t size, int flags); JEMALLOC_EXPORT int je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT int je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp); JEMALLOC_EXPORT int je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT void je_malloc_stats_print(void (*write_cb)(void *, const char *), void *je_cbopaque, const char *opts); JEMALLOC_EXPORT size_t je_malloc_usable_size( JEMALLOC_USABLE_SIZE_CONST void *ptr); JEMALLOC_EXPORT int je_navsnprintf(char *str, size_t size, const char *format, va_list ap); #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT void * je_memalign(size_t alignment, size_t size) JEMALLOC_ATTR(malloc); #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT void * je_valloc(size_t size) JEMALLOC_ATTR(malloc); #endif typedef void *(chunk_alloc_t)(void *, size_t, size_t, bool *, unsigned, pool_t *); typedef bool (chunk_dalloc_t)(void *, size_t, unsigned, pool_t *); /* * By default application code must explicitly refer to mangled symbol names, * so that it is possible to use jemalloc in conjunction with another allocator * in the same application. Define JEMALLOC_MANGLE in order to cause automatic * name mangling that matches the API prefixing that happened as a result of * --with-mangling and/or --with-jemalloc-prefix configuration settings. 
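 *
 * Editorial illustration (assumed compile-time expansion): with
 * JEMALLOC_MANGLE defined, a call written as
 *
 *   pool_malloc(pool, 64);
 *
 * expands to je_pool_malloc(pool, 64), which the rename macros above map
 * further to je_vmem_pool_malloc(pool, 64).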
*/ #ifdef JEMALLOC_MANGLE # ifndef JEMALLOC_NO_DEMANGLE # define JEMALLOC_NO_DEMANGLE # endif # define pool_create je_pool_create # define pool_delete je_pool_delete # define pool_malloc je_pool_malloc # define pool_calloc je_pool_calloc # define pool_ralloc je_pool_ralloc # define pool_aligned_alloc je_pool_aligned_alloc # define pool_free je_pool_free # define pool_malloc_usable_size je_pool_malloc_usable_size # define pool_malloc_stats_print je_pool_malloc_stats_print # define pool_extend je_pool_extend # define pool_set_alloc_funcs je_pool_set_alloc_funcs # define pool_check je_pool_check # define malloc_conf je_malloc_conf # define malloc_message je_malloc_message # define malloc je_malloc # define calloc je_calloc # define posix_memalign je_posix_memalign # define aligned_alloc je_aligned_alloc # define realloc je_realloc # define free je_free # define mallocx je_mallocx # define rallocx je_rallocx # define xallocx je_xallocx # define sallocx je_sallocx # define dallocx je_dallocx # define nallocx je_nallocx # define mallctl je_mallctl # define mallctlnametomib je_mallctlnametomib # define mallctlbymib je_mallctlbymib # define navsnprintf je_navsnprintf # define malloc_stats_print je_malloc_stats_print # define malloc_usable_size je_malloc_usable_size #endif /* * The je_* macros can be used as stable alternative names for the * public jemalloc API if JEMALLOC_NO_DEMANGLE is defined. This is primarily * meant for use in jemalloc itself, but it can be used by application code to * provide isolation from the name mangling specified via --with-mangling * and/or --with-jemalloc-prefix. */ #ifndef JEMALLOC_NO_DEMANGLE # undef je_pool_create # undef je_pool_delete # undef je_pool_malloc # undef je_pool_calloc # undef je_pool_ralloc # undef je_pool_aligned_alloc # undef je_pool_free # undef je_pool_malloc_usable_size # undef je_pool_malloc_stats_print # undef je_pool_extend # undef je_pool_set_alloc_funcs # undef je_pool_check # undef je_malloc_conf # undef je_malloc_message # undef je_malloc # undef je_calloc # undef je_posix_memalign # undef je_aligned_alloc # undef je_realloc # undef je_free # undef je_mallocx # undef je_rallocx # undef je_xallocx # undef je_sallocx # undef je_dallocx # undef je_nallocx # undef je_mallctl # undef je_mallctlnametomib # undef je_mallctlbymib # undef je_navsnprintf # undef je_malloc_stats_print # undef je_malloc_usable_size #endif #ifdef __cplusplus } #endif #endif /* JEMALLOC_H_ */ vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_defs.h000066400000000000000000000024571361505074100254110ustar00rootroot00000000000000/* ./../windows/jemalloc_gen/include/jemalloc/jemalloc_defs.h. Generated from jemalloc_defs.h.in by configure. */ /* Defined if __attribute__((...)) syntax is supported. */ /* #undef JEMALLOC_HAVE_ATTR */ /* Defined if alloc_size attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_ALLOC_SIZE */ /* Defined if format(gnu_printf, ...) attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF */ /* Defined if format(printf, ...) attribute is supported. */ /* #undef JEMALLOC_HAVE_ATTR_FORMAT_PRINTF */ /* * Define overrides for non-standard allocator-related functions if they are * present on the system. */ /* #undef JEMALLOC_OVERRIDE_MEMALIGN */ /* #undef JEMALLOC_OVERRIDE_VALLOC */ /* * At least Linux omits the "const" in: * * size_t malloc_usable_size(const void *ptr); * * Match the operating system's prototype. 
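 *
 * Worked expansion (editorial note): because JEMALLOC_USABLE_SIZE_CONST is
 * defined as const just below, the public declaration expands to
 *
 *   size_t je_malloc_usable_size(const void *ptr);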
*/ #define JEMALLOC_USABLE_SIZE_CONST const /* * If defined, specify throw() for the public function prototypes when compiling * with C++. The only justification for this is to match the prototypes that * glibc defines. */ /* #undef JEMALLOC_USE_CXX_THROW */ #ifdef _MSC_VER # ifdef _WIN64 # define LG_SIZEOF_PTR_WIN 3 # else # define LG_SIZEOF_PTR_WIN 2 # endif #endif /* sizeof(void *) == 2^LG_SIZEOF_PTR. */ #define LG_SIZEOF_PTR LG_SIZEOF_PTR_WIN vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_macros.h000066400000000000000000000026221361505074100257460ustar00rootroot00000000000000#include #include #include #include #define JEMALLOC_VERSION "" #define JEMALLOC_VERSION_MAJOR #define JEMALLOC_VERSION_MINOR #define JEMALLOC_VERSION_BUGFIX #define JEMALLOC_VERSION_NREV #define JEMALLOC_VERSION_GID "" # define MALLOCX_LG_ALIGN(la) (la) # if LG_SIZEOF_PTR == 2 # define MALLOCX_ALIGN(a) (ffs(a)-1) # else # define MALLOCX_ALIGN(a) \ (((a) < (size_t)INT_MAX) ? ffs(a)-1 : ffs((a)>>32)+31) # endif # define MALLOCX_ZERO ((int)0x40) /* Bias arena index bits so that 0 encodes "MALLOCX_ARENA() unspecified". */ # define MALLOCX_ARENA(a) ((int)(((a)+1) << 8)) #ifdef JEMALLOC_HAVE_ATTR # define JEMALLOC_ATTR(s) __attribute__((s)) # define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility("default")) # define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s)) # define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s)) # define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline) #elif _MSC_VER # define JEMALLOC_ATTR(s) # ifdef DLLEXPORT # define JEMALLOC_EXPORT __declspec(dllexport) # else # define JEMALLOC_EXPORT __declspec(dllimport) # endif # define JEMALLOC_ALIGNED(s) __declspec(align(s)) # define JEMALLOC_SECTION(s) __declspec(allocate(s)) # define JEMALLOC_NOINLINE __declspec(noinline) #else # define JEMALLOC_ATTR(s) # define JEMALLOC_EXPORT # define JEMALLOC_ALIGNED(s) # define JEMALLOC_SECTION(s) # define JEMALLOC_NOINLINE #endif vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_mangle.h000066400000000000000000000054721361505074100257330ustar00rootroot00000000000000/* * By default application code must explicitly refer to mangled symbol names, * so that it is possible to use jemalloc in conjunction with another allocator * in the same application. Define JEMALLOC_MANGLE in order to cause automatic * name mangling that matches the API prefixing that happened as a result of * --with-mangling and/or --with-jemalloc-prefix configuration settings. 
*/ #ifdef JEMALLOC_MANGLE # ifndef JEMALLOC_NO_DEMANGLE # define JEMALLOC_NO_DEMANGLE # endif # define pool_create je_pool_create # define pool_delete je_pool_delete # define pool_malloc je_pool_malloc # define pool_calloc je_pool_calloc # define pool_ralloc je_pool_ralloc # define pool_aligned_alloc je_pool_aligned_alloc # define pool_free je_pool_free # define pool_malloc_usable_size je_pool_malloc_usable_size # define pool_malloc_stats_print je_pool_malloc_stats_print # define pool_extend je_pool_extend # define pool_set_alloc_funcs je_pool_set_alloc_funcs # define pool_check je_pool_check # define malloc_conf je_malloc_conf # define malloc_message je_malloc_message # define malloc je_malloc # define calloc je_calloc # define posix_memalign je_posix_memalign # define aligned_alloc je_aligned_alloc # define realloc je_realloc # define free je_free # define mallocx je_mallocx # define rallocx je_rallocx # define xallocx je_xallocx # define sallocx je_sallocx # define dallocx je_dallocx # define nallocx je_nallocx # define mallctl je_mallctl # define mallctlnametomib je_mallctlnametomib # define mallctlbymib je_mallctlbymib # define navsnprintf je_navsnprintf # define malloc_stats_print je_malloc_stats_print # define malloc_usable_size je_malloc_usable_size #endif /* * The je_* macros can be used as stable alternative names for the * public jemalloc API if JEMALLOC_NO_DEMANGLE is defined. This is primarily * meant for use in jemalloc itself, but it can be used by application code to * provide isolation from the name mangling specified via --with-mangling * and/or --with-jemalloc-prefix. */ #ifndef JEMALLOC_NO_DEMANGLE # undef je_pool_create # undef je_pool_delete # undef je_pool_malloc # undef je_pool_calloc # undef je_pool_ralloc # undef je_pool_aligned_alloc # undef je_pool_free # undef je_pool_malloc_usable_size # undef je_pool_malloc_stats_print # undef je_pool_extend # undef je_pool_set_alloc_funcs # undef je_pool_check # undef je_malloc_conf # undef je_malloc_message # undef je_malloc # undef je_calloc # undef je_posix_memalign # undef je_aligned_alloc # undef je_realloc # undef je_free # undef je_mallocx # undef je_rallocx # undef je_xallocx # undef je_sallocx # undef je_dallocx # undef je_nallocx # undef je_mallctl # undef je_mallctlnametomib # undef je_mallctlbymib # undef je_navsnprintf # undef je_malloc_stats_print # undef je_malloc_usable_size #endif vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_mangle_jet.h000066400000000000000000000055731361505074100265770ustar00rootroot00000000000000/* * By default application code must explicitly refer to mangled symbol names, * so that it is possible to use jemalloc in conjunction with another allocator * in the same application. Define JEMALLOC_MANGLE in order to cause automatic * name mangling that matches the API prefixing that happened as a result of * --with-mangling and/or --with-jemalloc-prefix configuration settings. 
*/ #ifdef JEMALLOC_MANGLE # ifndef JEMALLOC_NO_DEMANGLE # define JEMALLOC_NO_DEMANGLE # endif # define pool_create jet_pool_create # define pool_delete jet_pool_delete # define pool_malloc jet_pool_malloc # define pool_calloc jet_pool_calloc # define pool_ralloc jet_pool_ralloc # define pool_aligned_alloc jet_pool_aligned_alloc # define pool_free jet_pool_free # define pool_malloc_usable_size jet_pool_malloc_usable_size # define pool_malloc_stats_print jet_pool_malloc_stats_print # define pool_extend jet_pool_extend # define pool_set_alloc_funcs jet_pool_set_alloc_funcs # define pool_check jet_pool_check # define malloc_conf jet_malloc_conf # define malloc_message jet_malloc_message # define malloc jet_malloc # define calloc jet_calloc # define posix_memalign jet_posix_memalign # define aligned_alloc jet_aligned_alloc # define realloc jet_realloc # define free jet_free # define mallocx jet_mallocx # define rallocx jet_rallocx # define xallocx jet_xallocx # define sallocx jet_sallocx # define dallocx jet_dallocx # define nallocx jet_nallocx # define mallctl jet_mallctl # define mallctlnametomib jet_mallctlnametomib # define mallctlbymib jet_mallctlbymib # define navsnprintf jet_navsnprintf # define malloc_stats_print jet_malloc_stats_print # define malloc_usable_size jet_malloc_usable_size #endif /* * The jet_* macros can be used as stable alternative names for the * public jemalloc API if JEMALLOC_NO_DEMANGLE is defined. This is primarily * meant for use in jemalloc itself, but it can be used by application code to * provide isolation from the name mangling specified via --with-mangling * and/or --with-jemalloc-prefix. */ #ifndef JEMALLOC_NO_DEMANGLE # undef jet_pool_create # undef jet_pool_delete # undef jet_pool_malloc # undef jet_pool_calloc # undef jet_pool_ralloc # undef jet_pool_aligned_alloc # undef jet_pool_free # undef jet_pool_malloc_usable_size # undef jet_pool_malloc_stats_print # undef jet_pool_extend # undef jet_pool_set_alloc_funcs # undef jet_pool_check # undef jet_malloc_conf # undef jet_malloc_message # undef jet_malloc # undef jet_calloc # undef jet_posix_memalign # undef jet_aligned_alloc # undef jet_realloc # undef jet_free # undef jet_mallocx # undef jet_rallocx # undef jet_xallocx # undef jet_sallocx # undef jet_dallocx # undef jet_nallocx # undef jet_mallctl # undef jet_mallctlnametomib # undef jet_mallctlbymib # undef jet_navsnprintf # undef jet_malloc_stats_print # undef jet_malloc_usable_size #endif vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_protos.h000066400000000000000000000060641361505074100260140ustar00rootroot00000000000000/* * The je_ prefix on the following public symbol declarations is an artifact * of namespace management, and should be omitted in application code unless * JEMALLOC_NO_DEMANGLE is defined (see jemalloc_mangle.h). 
*/ extern JEMALLOC_EXPORT const char *je_malloc_conf; extern JEMALLOC_EXPORT void (*je_malloc_message)(void *cbopaque, const char *s); typedef struct pool_s pool_t; JEMALLOC_EXPORT pool_t *je_pool_create(void *addr, size_t size, int zeroed); JEMALLOC_EXPORT int je_pool_delete(pool_t *pool); JEMALLOC_EXPORT size_t je_pool_extend(pool_t *pool, void *addr, size_t size, int zeroed); JEMALLOC_EXPORT void *je_pool_malloc(pool_t *pool, size_t size); JEMALLOC_EXPORT void *je_pool_calloc(pool_t *pool, size_t nmemb, size_t size); JEMALLOC_EXPORT void *je_pool_ralloc(pool_t *pool, void *ptr, size_t size); JEMALLOC_EXPORT void *je_pool_aligned_alloc(pool_t *pool, size_t alignment, size_t size); JEMALLOC_EXPORT void je_pool_free(pool_t *pool, void *ptr); JEMALLOC_EXPORT size_t je_pool_malloc_usable_size(pool_t *pool, void *ptr); JEMALLOC_EXPORT void je_pool_malloc_stats_print(pool_t *pool, void (*write_cb)(void *, const char *), void *cbopaque, const char *opts); JEMALLOC_EXPORT void je_pool_set_alloc_funcs(void *(*malloc_func)(size_t), void (*free_func)(void *)); JEMALLOC_EXPORT int je_pool_check(pool_t *pool); JEMALLOC_EXPORT void *je_malloc(size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT void *je_calloc(size_t num, size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT int je_posix_memalign(void **memptr, size_t alignment, size_t size) JEMALLOC_ATTR(nonnull(1)); JEMALLOC_EXPORT void *je_aligned_alloc(size_t alignment, size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT void *je_realloc(void *ptr, size_t size); JEMALLOC_EXPORT void je_free(void *ptr); JEMALLOC_EXPORT void *je_mallocx(size_t size, int flags); JEMALLOC_EXPORT void *je_rallocx(void *ptr, size_t size, int flags); JEMALLOC_EXPORT size_t je_xallocx(void *ptr, size_t size, size_t extra, int flags); JEMALLOC_EXPORT size_t je_sallocx(const void *ptr, int flags); JEMALLOC_EXPORT void je_dallocx(void *ptr, int flags); JEMALLOC_EXPORT size_t je_nallocx(size_t size, int flags); JEMALLOC_EXPORT int je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT int je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp); JEMALLOC_EXPORT int je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT void je_malloc_stats_print(void (*write_cb)(void *, const char *), void *je_cbopaque, const char *opts); JEMALLOC_EXPORT size_t je_malloc_usable_size( JEMALLOC_USABLE_SIZE_CONST void *ptr); JEMALLOC_EXPORT int je_navsnprintf(char *str, size_t size, const char *format, va_list ap); #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT void * je_memalign(size_t alignment, size_t size) JEMALLOC_ATTR(malloc); #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT void * je_valloc(size_t size) JEMALLOC_ATTR(malloc); #endif vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_protos_jet.h000066400000000000000000000061501361505074100266520ustar00rootroot00000000000000/* * The jet_ prefix on the following public symbol declarations is an artifact * of namespace management, and should be omitted in application code unless * JEMALLOC_NO_DEMANGLE is defined (see jemalloc_mangle@install_suffix@.h). 
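 *
 * Editorial note with a hypothetical example: the jet_ names declared below
 * are the re-prefixed copies intended for jemalloc's own test code, so a
 * test translation unit may call, e.g.:
 *
 *   void *p = jet_malloc(64);
 *   jet_free(p);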
*/ extern JEMALLOC_EXPORT const char *jet_malloc_conf; extern JEMALLOC_EXPORT void (*jet_malloc_message)(void *cbopaque, const char *s); typedef struct pool_s pool_t; JEMALLOC_EXPORT pool_t *jet_pool_create(void *addr, size_t size, int zeroed); JEMALLOC_EXPORT int jet_pool_delete(pool_t *pool); JEMALLOC_EXPORT size_t jet_pool_extend(pool_t *pool, void *addr, size_t size, int zeroed); JEMALLOC_EXPORT void *jet_pool_malloc(pool_t *pool, size_t size); JEMALLOC_EXPORT void *jet_pool_calloc(pool_t *pool, size_t nmemb, size_t size); JEMALLOC_EXPORT void *jet_pool_ralloc(pool_t *pool, void *ptr, size_t size); JEMALLOC_EXPORT void *jet_pool_aligned_alloc(pool_t *pool, size_t alignment, size_t size); JEMALLOC_EXPORT void jet_pool_free(pool_t *pool, void *ptr); JEMALLOC_EXPORT size_t jet_pool_malloc_usable_size(pool_t *pool, void *ptr); JEMALLOC_EXPORT void jet_pool_malloc_stats_print(pool_t *pool, void (*write_cb)(void *, const char *), void *cbopaque, const char *opts); JEMALLOC_EXPORT void jet_pool_set_alloc_funcs(void *(*malloc_func)(size_t), void (*free_func)(void *)); JEMALLOC_EXPORT int jet_pool_check(pool_t *pool); JEMALLOC_EXPORT void *jet_malloc(size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT void *jet_calloc(size_t num, size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT int jet_posix_memalign(void **memptr, size_t alignment, size_t size) JEMALLOC_ATTR(nonnull(1)); JEMALLOC_EXPORT void *jet_aligned_alloc(size_t alignment, size_t size) JEMALLOC_ATTR(malloc); JEMALLOC_EXPORT void *jet_realloc(void *ptr, size_t size); JEMALLOC_EXPORT void jet_free(void *ptr); JEMALLOC_EXPORT void *jet_mallocx(size_t size, int flags); JEMALLOC_EXPORT void *jet_rallocx(void *ptr, size_t size, int flags); JEMALLOC_EXPORT size_t jet_xallocx(void *ptr, size_t size, size_t extra, int flags); JEMALLOC_EXPORT size_t jet_sallocx(const void *ptr, int flags); JEMALLOC_EXPORT void jet_dallocx(void *ptr, int flags); JEMALLOC_EXPORT size_t jet_nallocx(size_t size, int flags); JEMALLOC_EXPORT int jet_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT int jet_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp); JEMALLOC_EXPORT int jet_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen); JEMALLOC_EXPORT void jet_malloc_stats_print(void (*write_cb)(void *, const char *), void *jet_cbopaque, const char *opts); JEMALLOC_EXPORT size_t jet_malloc_usable_size( JEMALLOC_USABLE_SIZE_CONST void *ptr); JEMALLOC_EXPORT int jet_navsnprintf(char *str, size_t size, const char *format, va_list ap); #ifdef JEMALLOC_OVERRIDE_MEMALIGN JEMALLOC_EXPORT void * jet_memalign(size_t alignment, size_t size) JEMALLOC_ATTR(malloc); #endif #ifdef JEMALLOC_OVERRIDE_VALLOC JEMALLOC_EXPORT void * jet_valloc(size_t size) JEMALLOC_ATTR(malloc); #endif vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_rename.h000066400000000000000000000032361361505074100257330ustar00rootroot00000000000000/* * Name mangling for public symbols is controlled by --with-mangling and * --with-jemalloc-prefix. With default settings the je_ prefix is stripped by * these macro definitions. 
*/ #ifndef JEMALLOC_NO_RENAME # define je_pool_create je_vmem_pool_create # define je_pool_delete je_vmem_pool_delete # define je_pool_malloc je_vmem_pool_malloc # define je_pool_calloc je_vmem_pool_calloc # define je_pool_ralloc je_vmem_pool_ralloc # define je_pool_aligned_alloc je_vmem_pool_aligned_alloc # define je_pool_free je_vmem_pool_free # define je_pool_malloc_usable_size je_vmem_pool_malloc_usable_size # define je_pool_malloc_stats_print je_vmem_pool_malloc_stats_print # define je_pool_extend je_vmem_pool_extend # define je_pool_set_alloc_funcs je_vmem_pool_set_alloc_funcs # define je_pool_check je_vmem_pool_check # define je_malloc_conf je_vmem_malloc_conf # define je_malloc_message je_vmem_malloc_message # define je_malloc je_vmem_malloc # define je_calloc je_vmem_calloc # define je_posix_memalign je_vmem_posix_memalign # define je_aligned_alloc je_vmem_aligned_alloc # define je_realloc je_vmem_realloc # define je_free je_vmem_free # define je_mallocx je_vmem_mallocx # define je_rallocx je_vmem_rallocx # define je_xallocx je_vmem_xallocx # define je_sallocx je_vmem_sallocx # define je_dallocx je_vmem_dallocx # define je_nallocx je_vmem_nallocx # define je_mallctl je_vmem_mallctl # define je_mallctlnametomib je_vmem_mallctlnametomib # define je_mallctlbymib je_vmem_mallctlbymib # define je_navsnprintf je_vmem_navsnprintf # define je_malloc_stats_print je_vmem_malloc_stats_print # define je_malloc_usable_size je_vmem_malloc_usable_size #endif vmem-1.8/src/windows/jemalloc_gen/include/jemalloc/jemalloc_typedefs.h000066400000000000000000000002261361505074100263030ustar00rootroot00000000000000typedef void *(chunk_alloc_t)(void *, size_t, size_t, bool *, unsigned, pool_t *); typedef bool (chunk_dalloc_t)(void *, size_t, unsigned, pool_t *); vmem-1.8/src/windows/libs_debug.props000066400000000000000000000035371361505074100200050ustar00rootroot00000000000000 $(SolutionDir)$(Platform)\$(Configuration)\libs\ $(FrameworkSDKdir)bin\$(TargetPlatformVersion)\$(Platform);$(ExecutablePath) $(SolutionDir)\include;$(SolutionDir)\windows\include;$(SolutionDir)\common;$(SolutionDir)\$(TargetName) PMDK_UTF8_API;SDS_ENABLED;NTDDI_VERSION=NTDDI_WIN10_RS1;_CRT_SECURE_NO_WARNINGS;_WINDLL;_DEBUG;%(PreprocessorDefinitions) CompileAsC true platform.h Level3 true true false true shlwapi.lib;ntdll.lib;%(AdditionalDependencies) $(TargetName).def true true false _DEBUG $(SolutionDir)\common;$(SolutionDir)\windows\include vmem-1.8/src/windows/libs_release.props000066400000000000000000000036351361505074100203360ustar00rootroot00000000000000 $(SolutionDir)$(Platform)\$(Configuration)\libs\ $(FrameworkSDKdir)bin\$(TargetPlatformVersion)\$(Platform);$(ExecutablePath) $(SolutionDir)\include;$(SolutionDir)\windows\include;$(SolutionDir)\common;$(SolutionDir)\$(TargetName) PMDK_UTF8_API;SDS_ENABLED;NTDDI_VERSION=NTDDI_WIN10_RS1;_CRT_SECURE_NO_WARNINGS;_WINDLL;NDEBUG;%(PreprocessorDefinitions) CompileAsC true platform.h Level3 true true false Neither true shlwapi.lib;ntdll.lib;%(AdditionalDependencies) $(TargetName).def DebugFastLink false false $(SolutionDir)\common;$(SolutionDir)\windows\include vmem-1.8/src/windows/srcversion/000077500000000000000000000000001361505074100170065ustar00rootroot00000000000000vmem-1.8/src/windows/srcversion/srcversion.vcxproj000066400000000000000000000115761361505074100226320ustar00rootroot00000000000000 Debug x64 Release x64 {901F04DB-E1A5-4A41-8B81-9D31C19ACD59} Win32Proj srcversion 10.0.16299.0 Application true v140 NotSet Application true v140 NotSet true true NotUsing Level3 
_DEBUG;_CONSOLE;WINAPI_PARTITION_SYSTEM;%(PreprocessorDefinitions) platform.h 4996 CompileAsC MultiThreadedDebugDLL Console true powershell.exe -ExecutionPolicy Bypass -file "$(SolutionDir)..\utils\SRCVERSION.ps1" $(SRCVERSION) __NON_EXISTENT_FILE__ generate srcversion.h NotUsing Level3 NDEBUG;_CONSOLE;WINAPI_PARTITION_SYSTEM;%(PreprocessorDefinitions) platform.h 4996 CompileAsC MaxSpeed MultiThreadedDLL Default Console true powershell.exe -ExecutionPolicy Bypass -file "$(SolutionDir)..\utils\SRCVERSION.ps1" $(SRCVERSION) __NON_EXISTENT_FILE__ generate srcversion.h vmem-1.8/src/windows/win_mmap.c000066400000000000000000000675001361505074100165740ustar00rootroot00000000000000/* * Copyright 2015-2019, Intel Corporation * Copyright (c) 2015-2017, Microsoft Corporation. All rights reserved. * Copyright (c) 2016, Hewlett Packard Enterprise Development LP * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * win_mmap.c -- memory-mapped files for Windows */ /* * XXX - The initial approach to PMDK for Windows port was to minimize the * amount of changes required in the core part of the library, and to avoid * preprocessor conditionals, if possible. For that reason, some of the * Linux system calls that have no equivalents on Windows have been emulated * using Windows API. * Note that it was not a goal to fully emulate POSIX-compliant behavior * of mentioned functions. They are used only internally, so current * implementation is just good enough to satisfy PMDK needs and to make it * work on Windows. * * This is a subject for change in the future. Likely, all these functions * will be replaced with "util_xxx" wrappers with OS-specific implementation * for Linux and Windows. * * Known issues: * - on Windows, mapping granularity/alignment is 64KB, not 4KB; * - mprotect() behavior and protection flag handling in mmap() is slightly * different than on Linux (see comments below). 
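 *
 * Editorial illustration of the first point (hypothetical values): with the
 * 64KB mapping granularity, a call such as
 *
 *   mmap(NULL, 8192, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 4096);
 *
 * fails with EINVAL because the 4096-byte offset is not 64KB-aligned,
 * while an offset of 65536 passes the alignment check.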
*/ #include #include "mmap.h" #include "util.h" #include "out.h" #include "win_mmap.h" /* uncomment for more debug information on mmap trackers */ /* #define MMAP_DEBUG_INFO */ NTSTATUS NtFreeVirtualMemory(_In_ HANDLE ProcessHandle, _Inout_ PVOID *BaseAddress, _Inout_ PSIZE_T RegionSize, _In_ ULONG FreeType); /* * XXX Unify the Linux and Windows code and replace this structure with * the map tracking list defined in mmap.h. */ SRWLOCK FileMappingQLock = SRWLOCK_INIT; struct FMLHead FileMappingQHead = PMDK_SORTEDQ_HEAD_INITIALIZER(FileMappingQHead); /* * mmap_file_mapping_comparer -- (internal) compares the two file mapping * trackers */ static LONG_PTR mmap_file_mapping_comparer(PFILE_MAPPING_TRACKER a, PFILE_MAPPING_TRACKER b) { return ((LONG_PTR)a->BaseAddress - (LONG_PTR)b->BaseAddress); } #ifdef MMAP_DEBUG_INFO /* * mmap_info -- (internal) dump info about all the mapping trackers */ static void mmap_info(void) { LOG(4, NULL); AcquireSRWLockShared(&FileMappingQLock); PFILE_MAPPING_TRACKER mt; for (mt = PMDK_SORTEDQ_FIRST(&FileMappingQHead); mt != (void *)&FileMappingQHead; mt = PMDK_SORTEDQ_NEXT(mt, ListEntry)) { LOG(4, "FH %08x FMH %08x AD %p-%p (%zu) " "OF %08x FL %zu AC %d F %d", mt->FileHandle, mt->FileMappingHandle, mt->BaseAddress, mt->EndAddress, (char *)mt->EndAddress - (char *)mt->BaseAddress, mt->Offset, mt->FileLen, mt->Access, mt->Flags); } ReleaseSRWLockShared(&FileMappingQLock); } #endif /* * mmap_reserve -- (internal) reserve virtual address range */ static void * mmap_reserve(void *addr, size_t len) { LOG(4, "addr %p len %zu", addr, len); ASSERTeq((uintptr_t)addr % Mmap_align, 0); ASSERTeq(len % Mmap_align, 0); void *reserved_addr = VirtualAlloc(addr, len, MEM_RESERVE, PAGE_NOACCESS); if (reserved_addr == NULL) { ERR("cannot find a contiguous region - " "addr: %p, len: %lx, gle: 0x%08x", addr, len, GetLastError()); errno = ENOMEM; return MAP_FAILED; } return reserved_addr; } /* * mmap_unreserve -- (internal) frees the range that's previously reserved */ static int mmap_unreserve(void *addr, size_t len) { LOG(4, "addr %p len %zu", addr, len); ASSERTeq((uintptr_t)addr % Mmap_align, 0); ASSERTeq(len % Mmap_align, 0); size_t bytes_returned; MEMORY_BASIC_INFORMATION basic_info; bytes_returned = VirtualQuery(addr, &basic_info, sizeof(basic_info)); if (bytes_returned != sizeof(basic_info)) { ERR("cannot query the virtual address properties of the range " "- addr: %p, len: %d", addr, len); errno = EINVAL; return -1; } if (basic_info.State == MEM_RESERVE) { DWORD nt_status; void *release_addr = addr; size_t release_size = len; nt_status = NtFreeVirtualMemory(GetCurrentProcess(), &release_addr, &release_size, MEM_RELEASE); if (nt_status != 0) { ERR("cannot release the reserved virtual space - " "addr: %p, len: %d, nt_status: 0x%08x", addr, len, nt_status); errno = EINVAL; return -1; } ASSERTeq(release_addr, addr); ASSERTeq(release_size, len); LOG(4, "freed reservation - addr: %p, size: %d", release_addr, release_size); } else { LOG(4, "range not reserved - addr: %p, size: %d", addr, len); } return 0; } /* * win_mmap_init -- initialization of file mapping tracker */ void win_mmap_init(void) { AcquireSRWLockExclusive(&FileMappingQLock); PMDK_SORTEDQ_INIT(&FileMappingQHead); ReleaseSRWLockExclusive(&FileMappingQLock); } /* * win_mmap_fini -- file mapping tracker cleanup routine */ void win_mmap_fini(void) { /* * Let's make sure that no one is in the middle of updating the * list by grabbing the lock. 
*/ AcquireSRWLockExclusive(&FileMappingQLock); while (!PMDK_SORTEDQ_EMPTY(&FileMappingQHead)) { PFILE_MAPPING_TRACKER mt; mt = (PFILE_MAPPING_TRACKER)PMDK_SORTEDQ_FIRST( &FileMappingQHead); PMDK_SORTEDQ_REMOVE(&FileMappingQHead, mt, ListEntry); if (mt->BaseAddress != NULL) UnmapViewOfFile(mt->BaseAddress); size_t release_size = (char *)mt->EndAddress - (char *)mt->BaseAddress; /* * Free reservation after file mapping (if reservation was * bigger than length of mapped file) */ void *release_addr = (char *)mt->BaseAddress + mt->FileLen; mmap_unreserve(release_addr, release_size - mt->FileLen); if (mt->FileMappingHandle != NULL) CloseHandle(mt->FileMappingHandle); if (mt->FileHandle != NULL) CloseHandle(mt->FileHandle); free(mt); } ReleaseSRWLockExclusive(&FileMappingQLock); } #define PROT_ALL (PROT_READ|PROT_WRITE|PROT_EXEC) /* * mmap -- map file into memory * * XXX - If read-only mapping was created initially, it is not possible * to change protection to R/W, even if the file itself was open in R/W mode. * To workaround that, we could modify mmap() to create R/W mapping first, * then change the protection to R/O. This way, it should be possible * to elevate permissions later. */ void * mmap(void *addr, size_t len, int prot, int flags, int fd, os_off_t offset) { LOG(4, "addr %p len %zu prot %d flags %d fd %d offset %ju", addr, len, prot, flags, fd, offset); if (len == 0) { ERR("invalid length: %zu", len); errno = EINVAL; return MAP_FAILED; } if ((prot & ~PROT_ALL) != 0) { ERR("invalid flags: 0x%08x", flags); /* invalid protection flags */ errno = EINVAL; return MAP_FAILED; } if (((flags & MAP_PRIVATE) && (flags & MAP_SHARED)) || ((flags & (MAP_PRIVATE | MAP_SHARED)) == 0)) { ERR("neither MAP_PRIVATE or MAP_SHARED is set, or both: 0x%08x", flags); errno = EINVAL; return MAP_FAILED; } /* XXX shall we use SEC_LARGE_PAGES flag? */ DWORD protect = 0; DWORD access = 0; /* on x86, PROT_WRITE implies PROT_READ */ if (prot & PROT_WRITE) { if (flags & MAP_PRIVATE) { access = FILE_MAP_COPY; if (prot & PROT_EXEC) protect = PAGE_EXECUTE_WRITECOPY; else protect = PAGE_WRITECOPY; } else { /* FILE_MAP_ALL_ACCESS == FILE_MAP_WRITE */ access = FILE_MAP_ALL_ACCESS; if (prot & PROT_EXEC) protect = PAGE_EXECUTE_READWRITE; else protect = PAGE_READWRITE; } } else if (prot & PROT_READ) { access = FILE_MAP_READ; if (prot & PROT_EXEC) protect = PAGE_EXECUTE_READ; else protect = PAGE_READONLY; } else { /* XXX - PAGE_NOACCESS is not supported by CreateFileMapping */ ERR("PAGE_NOACCESS is not supported"); errno = ENOTSUP; return MAP_FAILED; } if (((uintptr_t)addr % Mmap_align) != 0) { if ((flags & MAP_FIXED) == 0) { /* ignore invalid hint if no MAP_FIXED flag is set */ addr = NULL; } else { ERR("hint address is not well-aligned: %p", addr); errno = EINVAL; return MAP_FAILED; } } if ((offset % Mmap_align) != 0) { ERR("offset is not well-aligned: %ju", offset); errno = EINVAL; return MAP_FAILED; } if ((flags & MAP_FIXED) != 0) { /* * Free any reservations that the caller might have, also we * have to unmap any existing mappings in this region as per * mmap's manual. * XXX - Ideally we should unmap only if the prot and flags * are similar, we are deferring it as we don't rely on it * yet. */ int ret = munmap(addr, len); if (ret != 0) { ERR("!munmap: addr %p len %zu", addr, len); return MAP_FAILED; } } size_t len_align = roundup(len, Mmap_align); size_t filelen; size_t filelen_align; HANDLE fh; if (flags & MAP_ANON) { /* * In our implementation we are choosing to ignore fd when * MAP_ANON is set, instead of failing. 
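 *
 * Editorial example (assumed caller code): both calls below behave the
 * same here, because the descriptor argument is simply ignored:
 *
 *   mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0);
 *   mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, fd, 0);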
*/ fh = INVALID_HANDLE_VALUE; /* ignore/override offset */ offset = 0; filelen = len; filelen_align = len_align; if ((flags & MAP_NORESERVE) != 0) { /* * For anonymous mappings the meaning of MAP_NORESERVE * flag is pretty much the same as SEC_RESERVE. */ protect |= SEC_RESERVE; } } else { LARGE_INTEGER filesize; if (fd == -1) { ERR("invalid file descriptor: %d", fd); errno = EBADF; return MAP_FAILED; } /* * We need to keep file handle open for proper * implementation of msync() and to hold the file lock. */ if (!DuplicateHandle(GetCurrentProcess(), (HANDLE)_get_osfhandle(fd), GetCurrentProcess(), &fh, 0, FALSE, DUPLICATE_SAME_ACCESS)) { ERR("cannot duplicate handle - fd: %d, gle: 0x%08x", fd, GetLastError()); errno = ENOMEM; return MAP_FAILED; } /* * If we are asked to map more than the file size, map till the * file size and reserve the following. */ if (!GetFileSizeEx(fh, &filesize)) { ERR("cannot query the file size - fh: %d, gle: 0x%08x", fd, GetLastError()); CloseHandle(fh); return MAP_FAILED; } if (offset >= (os_off_t)filesize.QuadPart) { errno = EINVAL; ERR("offset is beyond the file size"); CloseHandle(fh); return MAP_FAILED; } /* calculate length of the mapped portion of the file */ filelen = filesize.QuadPart - offset; if (filelen > len) filelen = len; filelen_align = roundup(filelen, Mmap_align); if ((offset + len) > (size_t)filesize.QuadPart) { /* * Reserve virtual address for the rest of range we need * to map, and free a portion in the beginning for this * allocation. */ void *reserved_addr = mmap_reserve(addr, len_align); if (reserved_addr == MAP_FAILED) { ERR("cannot reserve region"); CloseHandle(fh); return MAP_FAILED; } if (addr != reserved_addr && (flags & MAP_FIXED) != 0) { ERR("cannot find a contiguous region - " "addr: %p, len: %lx, gle: 0x%08x", addr, len, GetLastError()); if (mmap_unreserve(reserved_addr, len_align) != 0) { ASSERT(FALSE); ERR("cannot free reserved region"); } errno = ENOMEM; CloseHandle(fh); return MAP_FAILED; } addr = reserved_addr; if (mmap_unreserve(reserved_addr, filelen_align) != 0) { ASSERT(FALSE); ERR("cannot free reserved region"); CloseHandle(fh); return MAP_FAILED; } } } HANDLE fmh = CreateFileMapping(fh, NULL, /* security attributes */ protect, (DWORD) ((filelen + offset) >> 32), (DWORD) ((filelen + offset) & 0xFFFFFFFF), NULL); if (fmh == NULL) { DWORD gle = GetLastError(); ERR("CreateFileMapping, gle: 0x%08x", gle); if (gle == ERROR_ACCESS_DENIED) errno = EACCES; else errno = EINVAL; /* XXX */ CloseHandle(fh); return MAP_FAILED; } void *base = MapViewOfFileEx(fmh, access, (DWORD) (offset >> 32), (DWORD) (offset & 0xFFFFFFFF), filelen, addr); /* hint address */ if (base == NULL) { if (addr == NULL || (flags & MAP_FIXED) != 0) { ERR("MapViewOfFileEx, gle: 0x%08x", GetLastError()); errno = EINVAL; CloseHandle(fh); CloseHandle(fmh); return MAP_FAILED; } /* try again w/o hint */ base = MapViewOfFileEx(fmh, access, (DWORD) (offset >> 32), (DWORD) (offset & 0xFFFFFFFF), filelen, NULL); /* no hint address */ } if (base == NULL) { ERR("MapViewOfFileEx, gle: 0x%08x", GetLastError()); errno = ENOMEM; CloseHandle(fh); CloseHandle(fmh); return MAP_FAILED; } /* * We will track the file mapping handle on a lookaside list so that * we don't have to modify the fact that we only return back the base * address rather than a more elaborate structure. 
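 *
 * Editorial sketch of why the list is needed (hypothetical caller): the
 * caller only ever holds the base pointer,
 *
 *   void *base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 *   ...
 *   msync(base, len, MS_SYNC);
 *
 * so msync()/munmap() below recover the file handle and mapping bounds by
 * searching this tracker list for the range covering the given address.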
*/ PFILE_MAPPING_TRACKER mt = malloc(sizeof(struct FILE_MAPPING_TRACKER)); if (mt == NULL) { ERR("!malloc"); CloseHandle(fh); CloseHandle(fmh); return MAP_FAILED; } mt->Flags = 0; mt->FileHandle = fh; mt->FileMappingHandle = fmh; mt->BaseAddress = base; mt->EndAddress = (void *)((char *)base + len_align); mt->Access = access; mt->Offset = offset; mt->FileLen = filelen_align; /* * XXX: Use the QueryVirtualMemoryInformation when available in the new * SDK. If the file is DAX mapped say so in the FILE_MAPPING_TRACKER * Flags. */ DWORD filesystemFlags; if (fh == INVALID_HANDLE_VALUE) { LOG(4, "anonymous mapping - not DAX mapped - handle: %p", fh); } else if (GetVolumeInformationByHandleW(fh, NULL, 0, NULL, NULL, &filesystemFlags, NULL, 0)) { if (filesystemFlags & FILE_DAX_VOLUME) { mt->Flags |= FILE_MAPPING_TRACKER_FLAG_DIRECT_MAPPED; } else { LOG(4, "file is not DAX mapped - handle: %p", fh); } } else { ERR("failed to query volume information : %08x", GetLastError()); } AcquireSRWLockExclusive(&FileMappingQLock); PMDK_SORTEDQ_INSERT(&FileMappingQHead, mt, ListEntry, FILE_MAPPING_TRACKER, mmap_file_mapping_comparer); ReleaseSRWLockExclusive(&FileMappingQLock); #ifdef MMAP_DEBUG_INFO mmap_info(); #endif return base; } /* * mmap_split -- (internal) replace existing mapping with another one(s) * * Unmaps the region between [begin,end]. If it's in a middle of the existing * mapping, it results in two new mappings and duplicated file/mapping handles. */ static int mmap_split(PFILE_MAPPING_TRACKER mt, void *begin, void *end) { LOG(4, "begin %p end %p", begin, end); ASSERTeq((uintptr_t)begin % Mmap_align, 0); ASSERTeq((uintptr_t)end % Mmap_align, 0); PFILE_MAPPING_TRACKER mtb = NULL; PFILE_MAPPING_TRACKER mte = NULL; HANDLE fh = mt->FileHandle; HANDLE fmh = mt->FileMappingHandle; size_t len; /* * In this routine we copy flags from mt to the two subsets that we * create. All flags may not be appropriate to propagate so let's * assert about the flags we know, if some one adds a new flag in the * future they would know about this copy and take appropricate action. */ C_ASSERT(FILE_MAPPING_TRACKER_FLAGS_MASK == 1); /* * 1) b e b e * xxxxxxxxxxxxx => xxx.......xxxx - mtb+mte * 2) b e b e * xxxxxxxxxxxxx => xxxxxxx....... - mtb * 3) b e b e * xxxxxxxxxxxxx => ........xxxxxx - mte * 4) b e b e * xxxxxxxxxxxxx => .............. - */ if (begin > mt->BaseAddress) { /* case #1/2 */ /* new mapping at the beginning */ mtb = malloc(sizeof(struct FILE_MAPPING_TRACKER)); if (mtb == NULL) { ERR("!malloc"); goto err; } mtb->Flags = mt->Flags; mtb->FileHandle = fh; mtb->FileMappingHandle = fmh; mtb->BaseAddress = mt->BaseAddress; mtb->EndAddress = begin; mtb->Access = mt->Access; mtb->Offset = mt->Offset; len = (char *)begin - (char *)mt->BaseAddress; mtb->FileLen = len >= mt->FileLen ? 
mt->FileLen : len; } if (end < mt->EndAddress) { /* case #1/3 */ /* new mapping at the end */ mte = malloc(sizeof(struct FILE_MAPPING_TRACKER)); if (mte == NULL) { ERR("!malloc"); goto err; } if (!mtb) { /* case #3 */ mte->FileHandle = fh; mte->FileMappingHandle = fmh; } else { /* case #1 - need to duplicate handles */ mte->FileHandle = NULL; mte->FileMappingHandle = NULL; if (!DuplicateHandle(GetCurrentProcess(), fh, GetCurrentProcess(), &mte->FileHandle, 0, FALSE, DUPLICATE_SAME_ACCESS)) { ERR("DuplicateHandle, gle: 0x%08x", GetLastError()); goto err; } if (!DuplicateHandle(GetCurrentProcess(), fmh, GetCurrentProcess(), &mte->FileMappingHandle, 0, FALSE, DUPLICATE_SAME_ACCESS)) { ERR("DuplicateHandle, gle: 0x%08x", GetLastError()); goto err; } } mte->Flags = mt->Flags; mte->BaseAddress = end; mte->EndAddress = mt->EndAddress; mte->Access = mt->Access; mte->Offset = mt->Offset + ((char *)mte->BaseAddress - (char *)mt->BaseAddress); len = (char *)end - (char *)mt->BaseAddress; mte->FileLen = len >= mt->FileLen ? 0 : mt->FileLen - len; } if (mt->FileLen > 0 && UnmapViewOfFile(mt->BaseAddress) == FALSE) { ERR("UnmapViewOfFile, gle: 0x%08x", GetLastError()); goto err; } len = (char *)mt->EndAddress - (char *)mt->BaseAddress; if (len > mt->FileLen) { void *addr = (char *)mt->BaseAddress + mt->FileLen; mmap_unreserve(addr, len - mt->FileLen); } if (!mtb && !mte) { /* case #4 */ CloseHandle(fmh); CloseHandle(fh); } /* * free entry for the original mapping */ PMDK_SORTEDQ_REMOVE(&FileMappingQHead, mt, ListEntry); free(mt); if (mtb) { len = (char *)mtb->EndAddress - (char *)mtb->BaseAddress; if (len > mtb->FileLen) { void *addr = (char *)mtb->BaseAddress + mtb->FileLen; void *raddr = mmap_reserve(addr, len - mtb->FileLen); if (raddr == MAP_FAILED) { ERR("cannot find a contiguous region - " "addr: %p, len: %lx, gle: 0x%08x", addr, len, GetLastError()); goto err; } } if (mtb->FileLen > 0) { void *base = MapViewOfFileEx(mtb->FileMappingHandle, mtb->Access, (DWORD) (mtb->Offset >> 32), (DWORD) (mtb->Offset & 0xFFFFFFFF), mtb->FileLen, mtb->BaseAddress); /* hint address */ if (base == NULL) { ERR("MapViewOfFileEx, gle: 0x%08x", GetLastError()); goto err; } } PMDK_SORTEDQ_INSERT(&FileMappingQHead, mtb, ListEntry, FILE_MAPPING_TRACKER, mmap_file_mapping_comparer); } if (mte) { len = (char *)mte->EndAddress - (char *)mte->BaseAddress; if (len > mte->FileLen) { void *addr = (char *)mte->BaseAddress + mte->FileLen; void *raddr = mmap_reserve(addr, len - mte->FileLen); if (raddr == MAP_FAILED) { ERR("cannot find a contiguous region - " "addr: %p, len: %lx, gle: 0x%08x", addr, len, GetLastError()); goto err; } } if (mte->FileLen > 0) { void *base = MapViewOfFileEx(mte->FileMappingHandle, mte->Access, (DWORD) (mte->Offset >> 32), (DWORD) (mte->Offset & 0xFFFFFFFF), mte->FileLen, mte->BaseAddress); /* hint address */ if (base == NULL) { ERR("MapViewOfFileEx, gle: 0x%08x", GetLastError()); goto err_mte; } } PMDK_SORTEDQ_INSERT(&FileMappingQHead, mte, ListEntry, FILE_MAPPING_TRACKER, mmap_file_mapping_comparer); } return 0; err: if (mtb) { ASSERTeq(mtb->FileMappingHandle, fmh); ASSERTeq(mtb->FileHandle, fh); CloseHandle(mtb->FileMappingHandle); CloseHandle(mtb->FileHandle); len = (char *)mtb->EndAddress - (char *)mtb->BaseAddress; if (len > mtb->FileLen) { void *addr = (char *)mtb->BaseAddress + mtb->FileLen; mmap_unreserve(addr, len - mtb->FileLen); } } err_mte: if (mte) { if (mte->FileMappingHandle) CloseHandle(mte->FileMappingHandle); if (mte->FileHandle) CloseHandle(mte->FileHandle); len = (char 
*)mte->EndAddress - (char *)mte->BaseAddress; if (len > mte->FileLen) { void *addr = (char *)mte->BaseAddress + mte->FileLen; mmap_unreserve(addr, len - mte->FileLen); } } free(mtb); free(mte); return -1; } /* * munmap -- delete mapping */ int munmap(void *addr, size_t len) { LOG(4, "addr %p len %zu", addr, len); if (((uintptr_t)addr % Mmap_align) != 0) { ERR("address is not well-aligned: %p", addr); errno = EINVAL; return -1; } if (len == 0) { ERR("invalid length: %zu", len); errno = EINVAL; return -1; } int retval = -1; if (len > UINTPTR_MAX - (uintptr_t)addr) { /* limit len to not get beyond address space */ len = UINTPTR_MAX - (uintptr_t)addr; } void *begin = addr; void *end = (void *)((char *)addr + len); AcquireSRWLockExclusive(&FileMappingQLock); PFILE_MAPPING_TRACKER mt; PFILE_MAPPING_TRACKER next; for (mt = PMDK_SORTEDQ_FIRST(&FileMappingQHead); mt != (void *)&FileMappingQHead; mt = next) { /* * Pick the next entry before we split there by delete the * this one (NOTE: mmap_spilt could delete this entry). */ next = PMDK_SORTEDQ_NEXT(mt, ListEntry); if (mt->BaseAddress >= end) { LOG(4, "ignoring all mapped ranges beyond given range"); break; } if (mt->EndAddress <= begin) { LOG(4, "skipping a mapped range before given range"); continue; } void *begin2 = begin > mt->BaseAddress ? begin : mt->BaseAddress; void *end2 = end < mt->EndAddress ? end : mt->EndAddress; size_t len2 = (char *)end2 - (char *)begin2; void *align_end = (void *)roundup((uintptr_t)end2, Mmap_align); if (mmap_split(mt, begin2, align_end) != 0) { LOG(2, "mapping split failed"); goto err; } if (len > len2) { len -= len2; } else { len = 0; break; } } /* * If we didn't find any mapped regions in our list attempt to free * as if the entire range is reserved. * * XXX: We don't handle a range having few mapped regions and few * reserved regions. */ if (len > 0) mmap_unreserve(addr, roundup(len, Mmap_align)); retval = 0; err: ReleaseSRWLockExclusive(&FileMappingQLock); if (retval == -1) errno = EINVAL; #ifdef MMAP_DEBUG_INFO mmap_info(); #endif return retval; } #define MS_ALL (MS_SYNC|MS_ASYNC|MS_INVALIDATE) /* * msync -- synchronize a file with a memory map */ int msync(void *addr, size_t len, int flags) { LOG(4, "addr %p len %zu flags %d", addr, len, flags); if ((flags & ~MS_ALL) != 0) { ERR("invalid flags: 0x%08x", flags); errno = EINVAL; return -1; } /* * XXX - On Linux it is allowed to call msync() without MS_SYNC * nor MS_ASYNC. */ if (((flags & MS_SYNC) && (flags & MS_ASYNC)) || ((flags & (MS_SYNC | MS_ASYNC)) == 0)) { ERR("neither MS_SYNC or MS_ASYNC is set, or both: 0x%08x", flags); errno = EINVAL; return -1; } if (((uintptr_t)addr % Pagesize) != 0) { ERR("address is not page-aligned: %p", addr); errno = EINVAL; return -1; } if (len == 0) { LOG(4, "zero-length region - do nothing"); return 0; /* do nothing */ } if (len > UINTPTR_MAX - (uintptr_t)addr) { /* limit len to not get beyond address space */ len = UINTPTR_MAX - (uintptr_t)addr; } int retval = -1; void *begin = addr; void *end = (void *)((char *)addr + len); AcquireSRWLockShared(&FileMappingQLock); PFILE_MAPPING_TRACKER mt; PMDK_SORTEDQ_FOREACH(mt, &FileMappingQHead, ListEntry) { if (mt->BaseAddress >= end) { LOG(4, "ignoring all mapped ranges beyond given range"); break; } if (mt->EndAddress <= begin) { LOG(4, "skipping a mapped range before given range"); continue; } void *begin2 = begin > mt->BaseAddress ? begin : mt->BaseAddress; void *end2 = end < mt->EndAddress ? 
end : mt->EndAddress; size_t len2 = (char *)end2 - (char *)begin2; /* do nothing for anonymous mappings */ if (mt->FileHandle != INVALID_HANDLE_VALUE) { if (FlushViewOfFile(begin2, len2) == FALSE) { ERR("FlushViewOfFile, gle: 0x%08x", GetLastError()); errno = ENOMEM; goto err; } if (FlushFileBuffers(mt->FileHandle) == FALSE) { ERR("FlushFileBuffers, gle: 0x%08x", GetLastError()); errno = EINVAL; goto err; } } if (len > len2) { len -= len2; } else { len = 0; break; } } if (len > 0) { ERR("indicated memory (or part of it) was not mapped"); errno = ENOMEM; } else { retval = 0; } err: ReleaseSRWLockShared(&FileMappingQLock); return retval; } #define PROT_ALL (PROT_READ|PROT_WRITE|PROT_EXEC) /* * mprotect -- set protection on a region of memory * * XXX - If the memory range passed to mprotect() includes invalid pages, * returned status will indicate error, and errno is set to ENOMEM. * However, the protection change is actually applied to all the valid pages, * ignoring the rest. * This is different than on Linux, where it stops on the first invalid page. */ int mprotect(void *addr, size_t len, int prot) { LOG(4, "addr %p len %zu prot %d", addr, len, prot); if (((uintptr_t)addr % Pagesize) != 0) { ERR("address is not page-aligned: %p", addr); errno = EINVAL; return -1; } if (len == 0) { LOG(4, "zero-length region - do nothing"); return 0; /* do nothing */ } if (len > UINTPTR_MAX - (uintptr_t)addr) { len = UINTPTR_MAX - (uintptr_t)addr; LOG(4, "limit len to %zu to not get beyond address space", len); } DWORD protect = 0; if ((prot & PROT_READ) && (prot & PROT_WRITE)) { protect |= PAGE_READWRITE; if (prot & PROT_EXEC) protect |= PAGE_EXECUTE_READWRITE; } else if (prot & PROT_READ) { protect |= PAGE_READONLY; if (prot & PROT_EXEC) protect |= PAGE_EXECUTE_READ; } else { protect |= PAGE_NOACCESS; } int retval = -1; void *begin = addr; void *end = (void *)((char *)addr + len); AcquireSRWLockShared(&FileMappingQLock); PFILE_MAPPING_TRACKER mt; PMDK_SORTEDQ_FOREACH(mt, &FileMappingQHead, ListEntry) { if (mt->BaseAddress >= end) { LOG(4, "ignoring all mapped ranges beyond given range"); break; } if (mt->EndAddress <= begin) { LOG(4, "skipping a mapped range before given range"); continue; } void *begin2 = begin > mt->BaseAddress ? begin : mt->BaseAddress; void *end2 = end < mt->EndAddress ? 
end : mt->EndAddress; /* * protect of region to VirtualProtection must be compatible * with the access protection specified for this region * when the view was mapped using MapViewOfFileEx */ if (mt->Access == FILE_MAP_COPY) { if (protect & PAGE_READWRITE) { protect &= ~PAGE_READWRITE; protect |= PAGE_WRITECOPY; } else if (protect & PAGE_EXECUTE_READWRITE) { protect &= ~PAGE_EXECUTE_READWRITE; protect |= PAGE_EXECUTE_WRITECOPY; } } size_t len2 = (char *)end2 - (char *)begin2; DWORD oldprot = 0; BOOL ret; ret = VirtualProtect(begin2, len2, protect, &oldprot); if (ret == FALSE) { DWORD gle = GetLastError(); ERR("VirtualProtect, gle: 0x%08x", gle); /* translate error code */ switch (gle) { case ERROR_INVALID_PARAMETER: errno = EACCES; break; case ERROR_INVALID_ADDRESS: errno = ENOMEM; break; default: errno = EINVAL; break; } goto err; } if (len > len2) { len -= len2; } else { len = 0; break; } } if (len > 0) { ERR("indicated memory (or part of it) was not mapped"); errno = ENOMEM; } else { retval = 0; } err: ReleaseSRWLockShared(&FileMappingQLock); return retval; } vmem-1.8/utils/000077500000000000000000000000001361505074100134705ustar00rootroot00000000000000vmem-1.8/utils/.gitignore000066400000000000000000000000061361505074100154540ustar00rootroot00000000000000*.zip vmem-1.8/utils/CHECK_WHITESPACE.PS1000066400000000000000000000037601361505074100163540ustar00rootroot00000000000000# # Copyright 2016-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # CHECK_WHITESPACE.PS1 -- script to check coding style # # XXX - integrate with VS projects and execute for each build # $scriptdir = Split-Path -Parent $PSCommandPath $rootdir = $scriptdir + "\.." 
$whitepace = $rootdir + "\utils\check_whitespace" If ( Get-Command -Name perl -ErrorAction SilentlyContinue ) { &perl $whitepace -g if ($LASTEXITCODE -ne 0) { Exit $LASTEXITCODE } } else { Write-Output "Cannot execute check_whitespace - perl is missing" } vmem-1.8/utils/CREATE-ZIP.PS1000066400000000000000000000110241361505074100154160ustar00rootroot00000000000000# # Copyright 2016-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # CREATE-ZIP.PS1 -- script to create release zip package # # # parameter handling # [CmdletBinding(PositionalBinding=$false)] Param( [alias("b")] $build = "debug", [alias("v")] $version = "0", [alias("e")] $extended = "0" ) $scriptdir = Split-Path -Parent $PSCommandPath $rootdir = $scriptdir + "\..\" $builddir = $rootdir + "\src\x64\" $srcdir = $rootdir + "\src\" $zipdir = $builddir + "\vmem\" $zipexpdir = $rootdir + "\vmem_examples\" if ($version -eq "0") { $git = Get-Command -Name git -ErrorAction SilentlyContinue if ($git) { $version = $(git describe) } else { $version = "0" } } $zipfile = $builddir + "\vmem-" + $version + "-win-x64-" + $build + ".zip" $expfile = $rootdir + "\vmem_examples-" + $version + "-win-x64.zip" Remove-Item $zipdir -Force -Recurse -ea si Get-ChildItem | Where-Object {$_.Name -Match "vmem-.*-win-x64.zip"} | Remove-Item -Force -ea si New-Item -ItemType directory -Path ( $zipdir) -Force | Out-Null New-Item -ItemType directory -Path ( $zipdir + "\bin\") -Force | Out-Null New-Item -ItemType directory -Path ( $zipdir + "\lib\") -Force | Out-Null $libs = @("libvmem") $exp_types = @("*.c", "*.h", "*.cpp", "*.hpp", "*.props", "*.sln", "*.vcxproj", "*.vcxproj.filters", "README") foreach ($lib in $libs) { Copy-Item ($builddir + $build + "\libs\" + $lib + ".dll") ($zipdir + "\bin\") foreach ($ex in @(".lib", ".pdb")) { Copy-Item ($builddir + $build + "\libs\" + $lib + $ex) ($zipdir + "\lib\") } } Copy-Item -Recurse ($rootdir + "src\include") ($zipdir) Remove-Item -Force ($zipdir + "include\README") Remove-Item -Force ($zipdir + "include\libvmmalloc.h") Copy-Item ($rootdir + "README.md") ($zipdir) Copy-Item ($rootdir + "LICENSE") ($zipdir) Copy-Item ($rootdir + "ChangeLog") ($zipdir) Add-Type -Assembly System.IO.Compression.FileSystem $comprlevel = [System.IO.Compression.CompressionLevel]::Optimal if($build -eq "Release") { Remove-Item $zipexpdir -Force -Recurse -ea si New-Item -ItemType directory -Path ($zipexpdir) -Force | Out-Null Copy-Item ($srcdir + "LongPath.manifest") ($zipexpdir) foreach ($type in $exp_types) { Copy-Item -Path ($srcdir + "examples") -Filter $type -Recurse -Destination $zipexpdir -Container -Force } do { $empty_dirs = $(Get-ChildItem $zipexpdir -Recurse | Where-Object {$_.PsIsContainer -eq $true}) $to_remove = $($empty_dirs | Where-Object{$_.GetDirectories().Count -eq 0 -and $_.GetFiles().Count -eq 0}) for($i=0; $i -lt $to_remove.count; $i++) { Remove-Item $to_remove[$i].FullName -Force } } while ($null -ne $to_remove) if (Test-Path ($zipexpdir)) { [System.IO.Compression.ZipFile]::CreateFromDirectory($zipexpdir, $expfile, $comprlevel, $true) } Remove-Item $zipexpdir -Force -Recurse -ea si } if (Test-Path ($zipdir)) { [System.IO.Compression.ZipFile]::CreateFromDirectory($zipdir, $zipfile, $comprlevel, $true) } Remove-Item $zipdir -Force -Recurse -ea si vmem-1.8/utils/CSTYLE.ps1000066400000000000000000000047211361505074100151240ustar00rootroot00000000000000# # Copyright 2016-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # CSTYLE.ps1 -- script to check coding style # # XXX - integrate with VS projects and execute for each build # $scriptdir = Split-Path -Parent $PSCommandPath $rootdir = $scriptdir + "\.." $cstyle = $rootdir + "\utils\cstyle" $checkdir = $rootdir # XXX - *.cpp/*.hpp files not supported yet $include = @( "*.c", "*.h" ) If ( Get-Command -Name perl -ErrorAction SilentlyContinue ) { Get-ChildItem -Path $checkdir -Recurse -Include $include | ` Where-Object { $_.FullName -notlike "*jemalloc*" } | ` ForEach-Object { $IGNORE = $_.DirectoryName + "\.cstyleignore" if(Test-Path $IGNORE) { if((Select-String $_.Name $IGNORE)) { return } } $_ } | ForEach-Object { Write-Output $_.FullName & perl $cstyle $_.FullName if ($LASTEXITCODE -ne 0) { Exit $LASTEXITCODE } } } else { Write-Output "Cannot execute cstyle - perl is missing" } vmem-1.8/utils/Makefile000066400000000000000000000037531361505074100151400ustar00rootroot00000000000000# # Copyright 2016-2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
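# rwildcard -- recursively expand the wildcard pattern $2 below directory $1,
# e.g. $(call rwildcard,,*.sh) collects every *.sh file under the current
# tree; the result is passed to check-shebang.sh by the cstyle target below.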
# rwildcard=$(strip $(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2)\ $(filter $(subst *,%,$2),$d))) SCRIPTS = $(call rwildcard,,*.sh) all: $(MAKE) -C check_license $@ check-license: $(MAKE) -C check_license $@ clean: $(MAKE) -C check_license $@ clobber: $(MAKE) -C check_license $@ cstyle: $(MAKE) -C check_license $@ ./check-shebang.sh $(SCRIPTS) format: $(MAKE) -C check_license $@ .PHONY: all check-license clean clobber cstyle format vmem-1.8/utils/README000066400000000000000000000001661361505074100143530ustar00rootroot00000000000000Persistent Memory Development Kit This is utils/README. The scripts found here are used during library development. vmem-1.8/utils/SRCVERSION.ps1000066400000000000000000000152511361505074100156160ustar00rootroot00000000000000# # Copyright 2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # SRCVERSION.PS1 -- script to create SCRVERSION macro and generate srcversion.h # # # Windows dll versioning supports only fixed number of fields. The most # important are MAJOR, MINOR and REVISION. We have 3-compoment releases # (e.g. 1.5.1) with release candidates, so we have to encode this information # into this fixed number of fields. That's why we abuse REVISION to encode both # 3rd component and rc status. # REVISION = 3RDCOMP * 1000 + (!is_rc) * 100 + rc. 
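#
# A minimal PowerShell sketch of this encoding, for illustration only.
# The function name and parameters below are hypothetical and are not
# used anywhere else in this script; the table of examples that follows
# shows the same mapping.
function Get-ExampleRevision {
    param([int]$ThirdComp, [bool]$IsRc, [int]$Rc)
    # REVISION = 3RDCOMP * 1000 + (!is_rc) * 100 + rc
    $notRc = if ($IsRc) { 0 } else { 1 }
    return ($ThirdComp * 1000) + ($notRc * 100) + $Rc
}
# Get-ExampleRevision -ThirdComp 0 -IsRc $true  -Rc 2    # 1.5-rc2   -> 2
# Get-ExampleRevision -ThirdComp 0 -IsRc $false -Rc 0    # 1.5       -> 100
# Get-ExampleRevision -ThirdComp 2 -IsRc $true  -Rc 4    # 1.5.2-rc4 -> 2004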
# # Examples: # +---------------------+-----+-----+--------+-----+------+-------+----------+ # |git describe --long |MAJOR|MINOR|REVISION|BUILD|BUGFIX|PRIVATE|PRERELEASE| # +---------------------+-----+-----+--------+-----+------+-------+----------+ # |1.5-rc2-0-12345678 | 1| 5| 2| 0| false| false| true| # |1.5-rc3-6-12345678 | 1| 5| 3| 6| false| true| true| # |1.5-0-12345678 | 1| 5| 100| 0| false| false| false| # |1.5-6-123345678 | 1| 5| 100| 6| false| true| false| # |1.5.2-rc1-0-12345678 | 1| 5| 2001| 0| true| false| true| # |1.5.2-rc4-6-12345678 | 1| 5| 2004| 6| true| true| true| # |1.5.2-0-12345678 | 1| 5| 2100| 0| true| false| false| # |1.5.2-6-12345678 | 1| 5| 2100| 6| true| true| false| # +---------------------+-----+-----+--------+-----+------+-------+----------+ # $scriptPath = Split-Path -parent $MyInvocation.MyCommand.Definition $file_path = $scriptPath + "\..\src\windows\include\srcversion.h" $git_version_file = $scriptPath + "\..\GIT_VERSION" $version_file = $scriptPath + "\..\VERSION" $git = Get-Command -Name git -ErrorAction SilentlyContinue if (Test-Path $file_path) { $old_src_version = Get-Content $file_path | ` Where-Object { $_ -like '#define SRCVERSION*' } } else { $old_src_version = "" } $git_version = "" $git_version_tag = "" $git_version_hash = "" if (Test-Path $git_version_file) { $git_version = Get-Content $git_version_file if ($git_version -eq "`$Format:%h %d`$") { $git_version = "" } elseif ($git_version -match "tag: ") { if ($git_version -match "tag: (?[0-9a-z.+-]*)") { $git_version_tag = $matches["tag"]; } } else { $git_version_hash = ($git_version -split " ")[0] } } $PRERELEASE = $false $BUGFIX = $false $PRIVATE = $true $CUSTOM = $false if ($null -ne $args[0]) { $version = $args[0] $ver_array = $version.split("-+") } elseif (Test-Path $version_file) { $version = Get-Content $version_file $ver_array = $version.split("-+") } elseif ($null -ne $git) { $version = $(git describe) $ver_array = $(git describe --long).split("-+") } elseif ($git_version_tag -ne "") { $version = $git_version_tag $ver_array = $git_version_tag.split("-+") } elseif ($git_version_hash -ne "") { $MAJOR = 0 $MINOR = 0 $REVISION = 0 $BUILD = 0 $version = $git_version_hash $CUSTOM = $true $version_custom_msg = "#define VERSION_CUSTOM_MSG `"$git_version_hash`"" } else { $MAJOR = 0 $MINOR = 0 $REVISION = 0 $BUILD = 0 $version = "UNKNOWN_VERSION" $CUSTOM = $true $version_custom_msg = "#define VERSION_CUSTOM_MSG `"UNKNOWN_VERSION`"" } if ($null -ne $ver_array) { $ver_dots = $ver_array[0].split(".") $MAJOR = $ver_dots[0] $MINOR = $ver_dots[1] if ($ver_dots.length -ge 3) { $REV = $ver_dots[2] $BUGFIX = $true } else { $REV = 0 } $REVISION = 1000 * $REV $BUILD = $ver_array[$ver_array.length - 2] if ($ver_array.length -eq 4) { # .[.]--- if ($ver_array[1].StartsWith("rc")) { # .[.]-rc-- $REVISION += $ver_array[1].Substring("rc".Length) $PRERELEASE = $true $version = "$($ver_array[0])-$($ver_array[1])+git$($ver_array[2]).$($ver_array[3])" } else { # .[.]--- throw "Unknown version format" } } else { # .[.]-- $REVISION += 100 $version = "$($ver_array[0])+git$($ver_array[1]).$($ver_array[2])" } if ($BUILD -eq 0) { # it is not a (pre)release build $PRIVATE = $false } } $src_version = "#define SRCVERSION `"$version`"" if ($old_src_version -eq $src_version) { exit 0 } Write-Output "updating source version: $version" Write-Output $src_version > $file_path Write-Output "#ifdef RC_INVOKED" >> $file_path Write-Output "#define MAJOR $MAJOR" >> $file_path Write-Output "#define MINOR $MINOR" >> $file_path Write-Output 
"#define REVISION $REVISION" >> $file_path Write-Output "#define BUILD $BUILD" >> $file_path if ($PRERELEASE) { Write-Output "#define PRERELEASE 1" >> $file_path } if ($BUGFIX) { Write-Output "#define BUGFIX 1" >> $file_path } if ($PRIVATE) { Write-Output "#define PRIVATE 1" >> $file_path } if ($CUSTOM) { Write-Output "#define CUSTOM 1" >> $file_path Write-Output $version_custom_msg >> $file_path } Write-Output "#endif" >> $file_path vmem-1.8/utils/build-dpkg.sh000077500000000000000000000263261361505074100160620ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # build-dpkg.sh - Script for building deb packages # set -e SCRIPT_DIR=$(dirname $0) source $SCRIPT_DIR/pkg-common.sh # # usage -- print usage message and exit # usage() { [ "$1" ] && echo Error: $1 cat >&2 < src/test/testconfig.sh; \ echo 'TEST_BUILD=\"debug nondebug\"' >> src/test/testconfig.sh; \ fi make pcheck ${PCHECK_OPTS} " else CHECK_CMD=" override_dh_auto_test: " fi check_tool debuild check_tool dch check_file $SCRIPT_DIR/pkg-config.sh source $SCRIPT_DIR/pkg-config.sh PACKAGE_VERSION=$(get_version $PACKAGE_VERSION_TAG) PACKAGE_RELEASE=1 PACKAGE_SOURCE=${PACKAGE_NAME}-${PACKAGE_VERSION} PACKAGE_TARBALL_ORIG=${PACKAGE_NAME}_${PACKAGE_VERSION}.orig.tar.gz CONTROL_FILE=debian/control [ -d $WORKING_DIR ] || mkdir $WORKING_DIR [ -d $OUT_DIR ] || mkdir $OUT_DIR OLD_DIR=$PWD cd $WORKING_DIR check_dir $SOURCE mv $SOURCE $PACKAGE_SOURCE tar zcf $PACKAGE_TARBALL_ORIG $PACKAGE_SOURCE cd $PACKAGE_SOURCE rm -rf debian mkdir debian # Generate compat file cat << EOF > debian/compat 9 EOF # Generate control file cat << EOF > $CONTROL_FILE Source: $PACKAGE_NAME Maintainer: $PACKAGE_MAINTAINER Section: libs Priority: optional Standards-version: 4.1.4 Build-Depends: debhelper (>= 9) Homepage: http://pmem.io/vmem/ Package: libvmem Architecture: any Depends: \${shlibs:Depends}, \${misc:Depends} Description: Persistent Memory volatile memory support library The libvmem library turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. . libvmem supports the traditional malloc/free interfaces on a memory mapped file. This allows the use of persistent memory as volatile memory, for cases where the pool of persistent memory is useful to an application, but when the application doesn’t need it to be persistent. Package: libvmem-dev Section: libdevel Architecture: any Depends: libvmem (=\${binary:Version}), \${shlibs:Depends}, \${misc:Depends} Description: Development files for libvmem The libvmem library turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. . This package contains libraries and header files used for linking programs against libvmem. Package: libvmmalloc Architecture: any Depends: \${shlibs:Depends}, \${misc:Depends} Description: Persistent Memory dynamic allocation support library The libvmmalloc library transparently converts all the dynamic memory allocations into persistent memory allocations. This allows the use of persistent memory as volatile memory without modifying the target application. Package: libvmmalloc-dev Section: libdevel Architecture: any Depends: libvmmalloc (=\${binary:Version}), \${shlibs:Depends}, \${misc:Depends} Description: Development files for libvmmalloc The libvmmalloc library transparently converts all the dynamic memory allocations into persistent memory allocations. . This package contains libraries and header files used for linking programs against libvmalloc. Package: $PACKAGE_NAME-dbg Section: debug Priority: optional Architecture: any Depends: libvmem (=\${binary:Version}), libvmmalloc (=\${binary:Version}), \${misc:Depends} Description: Debug symbols for VMEM libraries Debug symbols for all VMEM libraries. 
EOF cp LICENSE debian/copyright cat << EOF > debian/rules #!/usr/bin/make -f #export DH_VERBOSE=1 %: dh \$@ override_dh_strip: dh_strip --dbg-package=$PACKAGE_NAME-dbg override_dh_auto_build: dh_auto_build -- EXPERIMENTAL=${EXPERIMENTAL} prefix=/$PREFIX libdir=/$LIB_DIR includedir=/$INC_DIR docdir=/$DOC_DIR man1dir=/$MAN1_DIR man3dir=/$MAN3_DIR man5dir=/$MAN5_DIR man7dir=/$MAN7_DIR SRCVERSION=$SRCVERSION override_dh_auto_install: dh_auto_install -- EXPERIMENTAL=${EXPERIMENTAL} prefix=/$PREFIX libdir=/$LIB_DIR includedir=/$INC_DIR docdir=/$DOC_DIR man1dir=/$MAN1_DIR man3dir=/$MAN3_DIR man5dir=/$MAN5_DIR man7dir=/$MAN7_DIR SRCVERSION=$SRCVERSION find -path './debian/*usr/share/man/man*/*.gz' -exec gunzip {} \; override_dh_install: mkdir -p debian/tmp/usr/share/vmem/ dh_install ${CHECK_CMD} EOF chmod +x debian/rules mkdir debian/source ITP_BUG_EXCUSE="# This is our first package but we do not want to upload it yet. # Please refer to Debian Developer's Reference section 5.1 (New packages) for details: # https://www.debian.org/doc/manuals/developers-reference/pkgs.html#newpackage" cat << EOF > debian/source/format 3.0 (quilt) EOF cat << EOF > debian/libvmem.install $LIB_DIR/libvmem.so.* EOF cat << EOF > debian/libvmem.lintian-overrides $ITP_BUG_EXCUSE new-package-should-close-itp-bug libvmem: package-name-doesnt-match-sonames EOF cat << EOF > debian/libvmem-dev.install $LIB_DIR/vmem_debug/libvmem.a $LIB_DIR/vmem_dbg/ $LIB_DIR/vmem_debug/libvmem.so $LIB_DIR/vmem_dbg/ $LIB_DIR/vmem_debug/libvmem.so.* $LIB_DIR/vmem_dbg/ $LIB_DIR/libvmem.so $LIB_DIR/pkgconfig/libvmem.pc $INC_DIR/libvmem.h $MAN7_DIR/libvmem.7 $MAN3_DIR/vmem_*.3 EOF cat << EOF > debian/libvmem-dev.lintian-overrides $ITP_BUG_EXCUSE new-package-should-close-itp-bug # The following warnings are triggered by a bug in debhelper: # http://bugs.debian.org/204975 postinst-has-useless-call-to-ldconfig postrm-has-useless-call-to-ldconfig # We do not want to compile with -O2 for debug version hardening-no-fortify-functions $LIB_DIR/vmem_dbg/* # vmem provides second set of libraries for debugging. # These are in /usr/lib/$arch/vmem_dbg/, but still trigger ldconfig. # Related issue: https://github.com/pmem/issues/issues/841 libvmem-dev: package-has-unnecessary-activation-of-ldconfig-trigger EOF cat << EOF > debian/libvmmalloc.install $LIB_DIR/libvmmalloc.so.* EOF cat << EOF > debian/libvmmalloc.lintian-overrides $ITP_BUG_EXCUSE new-package-should-close-itp-bug libvmmalloc: package-name-doesnt-match-sonames EOF cat << EOF > debian/libvmmalloc-dev.install $LIB_DIR/vmem_debug/libvmmalloc.a $LIB_DIR/vmem_dbg/ $LIB_DIR/vmem_debug/libvmmalloc.so $LIB_DIR/vmem_dbg/ $LIB_DIR/vmem_debug/libvmmalloc.so.* $LIB_DIR/vmem_dbg/ $LIB_DIR/libvmmalloc.so $LIB_DIR/pkgconfig/libvmmalloc.pc $INC_DIR/libvmmalloc.h $MAN7_DIR/libvmmalloc.7 EOF cat << EOF > debian/libvmmalloc-dev.lintian-overrides $ITP_BUG_EXCUSE new-package-should-close-itp-bug # The following warnings are triggered by a bug in debhelper: # http://bugs.debian.org/204975 postinst-has-useless-call-to-ldconfig postrm-has-useless-call-to-ldconfig # We do not want to compile with -O2 for debug version hardening-no-fortify-functions $LIB_DIR/vmem_dbg/* # vmem provides second set of libraries for debugging. # These are in /usr/lib/$arch/vmem_dbg/, but still trigger ldconfig. 
# Related issue: https://github.com/pmem/issues/issues/841 libvmmalloc-dev: package-has-unnecessary-activation-of-ldconfig-trigger EOF cat << EOF > debian/$PACKAGE_NAME-dbg.lintian-overrides $ITP_BUG_EXCUSE new-package-should-close-itp-bug EOF # Convert ChangeLog to debian format CHANGELOG_TMP=changelog.tmp dch --create --empty --package $PACKAGE_NAME -v $PACKAGE_VERSION-$PACKAGE_RELEASE -M -c $CHANGELOG_TMP touch debian/changelog head -n1 $CHANGELOG_TMP >> debian/changelog echo "" >> debian/changelog convert_changelog ChangeLog >> debian/changelog echo "" >> debian/changelog tail -n1 $CHANGELOG_TMP >> debian/changelog rm $CHANGELOG_TMP # This is our first release but we do debuild --preserve-envvar=EXTRA_CFLAGS_RELEASE \ --preserve-envvar=EXTRA_CFLAGS_DEBUG \ --preserve-envvar=EXTRA_CFLAGS \ --preserve-envvar=EXTRA_CXXFLAGS \ --preserve-envvar=EXTRA_LDFLAGS \ -us -uc cd $OLD_DIR find $WORKING_DIR -name "*.deb"\ -or -name "*.dsc"\ -or -name "*.changes"\ -or -name "*.orig.tar.gz"\ -or -name "*.debian.tar.gz" | while read FILE do mv -v $FILE $OUT_DIR/ done exit 0 vmem-1.8/utils/build-rpm.sh000077500000000000000000000140371361505074100157270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # build-rpm.sh - Script for building rpm packages # set -e SCRIPT_DIR=$(dirname $0) source $SCRIPT_DIR/pkg-common.sh check_tool rpmbuild check_file $SCRIPT_DIR/pkg-config.sh source $SCRIPT_DIR/pkg-config.sh # # usage -- print usage message and exit # usage() { [ "$1" ] && echo Error: $1 cat >&2 < $RPM_SPEC_FILE if [ "$DISTRO" = "SLES_like" ] then sed -i '/^#.*bugzilla.redhat/d' $RPM_SPEC_FILE fi # do not split on space IFS=$'\n' # experimental features if [ "${EXPERIMENTAL}" = "y" ] then # no experimental features for now RPMBUILD_OPTS+=( ) fi # use specified testconfig file or default if [[( -n "${TEST_CONFIG_FILE}") && ( -f "$TEST_CONFIG_FILE" ) ]] then echo "Test config file: $TEST_CONFIG_FILE" RPMBUILD_OPTS+=(--define "_testconfig $TEST_CONFIG_FILE") else echo -e "Test config file $TEST_CONFIG_FILE does not exist.\n"\ "Default test config will be used." fi # run make check or not if [ "${BUILD_PACKAGE_CHECK}" == "n" ] then RPMBUILD_OPTS+=(--define "_skip_check 1") fi tar zcf $PACKAGE_TARBALL $PACKAGE_SOURCE # Create directory structure for rpmbuild mkdir -v BUILD SPECS echo "opts: ${RPMBUILD_OPTS[@]}" rpmbuild --define "_topdir `pwd`"\ --define "_rpmdir ${OUT_DIR}"\ --define "_srcrpmdir ${OUT_DIR}"\ -ta $PACKAGE_TARBALL \ ${RPMBUILD_OPTS[@]} echo "Building rpm packages done" exit 0 vmem-1.8/utils/check-commit.sh000077500000000000000000000046161361505074100164010ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # Used to check whether all the commit messages in a pull request # follow the GIT/VMEM guidelines. 
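#
# Illustrative example only (the subject line below is hypothetical, not
# taken from the repository history): a message such as
#   "common: fix error message formatting"
# follows the convention of prefixing the subject with one of the area
# names listed in AREAS below and keeps every line within the 72-character
# limit that this script checks.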
# # usage: ./check-commit.sh commit # if [ -z "$1" ]; then echo "Usage: check-commit.sh commit-id" exit 1 fi echo "Checking $1" subject=$(git log --format="%s" -n 1 $1) if [[ $subject =~ ^Merge.* ]]; then # skip exit 0 fi if [[ $subject =~ ^Revert.* ]]; then # skip exit 0 fi # valid area names AREAS="test\|benchmark\|examples\|vmem\|vmmalloc\|jemalloc\|doc\|common" # Check commit message for commit in $commits; do subject=$(git log --format="%s" -n 1 $commit) commit_len=$(git log --format="%s%n%b" -n 1 $commit | wc -L) if [[ $subject =~ ^Merge.* ]]; then # skip continue fi if [ $commit_len -gt 73 ]; then echo "FAIL: commit message exceeds 72 chars per line (commit_len)" echo git log -n 1 $1 | cat exit 1 fi done vmem-1.8/utils/check-commits.sh000077500000000000000000000043531361505074100165620ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # Used to check whether all the commit messages in a pull request # follow the GIT/VMEM guidelines. # # usage: ./check-commits.sh [range] # if [ -z "$1" ]; then # on Travis run this check only for pull requests if [ -n "$TRAVIS_REPO_SLUG" ]; then if [[ "$TRAVIS_REPO_SLUG" != "$GITHUB_REPO" \ || $TRAVIS_EVENT_TYPE != "pull_request" ]]; then echo "SKIP: $0 can only be executed for pull requests to $GITHUB_REPO" exit 0 fi fi last_merge=$(git log --pretty="%cN:%H" | grep GitHub | head -n1 | cut -d: -f2) range=${last_merge}..HEAD else range="$1" fi commits=$(git log --pretty=%H $range) set -e for commit in $commits; do `dirname $0`/check-commit.sh $commit done vmem-1.8/utils/check-doc.sh000077500000000000000000000055051361505074100156540ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # Used to check whether changes to the generated documentation directory # are made by the authorised user. Used only by travis builds. # # usage: ./check-doc.sh # directory=doc/generated allowed_user="pmem-bot " if [[ -z "$TRAVIS" ]]; then echo "ERROR: $0 can only be executed on Travis CI." exit 1 fi if [[ "$TRAVIS_REPO_SLUG" != "$GITHUB_REPO" \ || $TRAVIS_EVENT_TYPE != "pull_request" ]]; then echo "SKIP: $0 can only be executed for pull requests to ${GITHUB_REPO}" exit 0 fi # Find all the commits for the current build if [[ -n "$TRAVIS_COMMIT_RANGE" ]]; then # $TRAVIS_COMMIT_RANGE contains "..." instead of ".." # https://github.com/travis-ci/travis-ci/issues/4596 PR_COMMIT_RANGE="${TRAVIS_COMMIT_RANGE/.../..}" commits=$(git rev-list $PR_COMMIT_RANGE) else commits=$TRAVIS_COMMIT fi # Check for changes in the generated docs directory # Only new files are allowed (first version) for commit in $commits; do last_author=$(git --no-pager show -s --format='%aN <%aE>' $commit) if [ "$last_author" == "$allowed_user" ]; then continue fi fail=$(git diff-tree --no-commit-id --name-status -r $commit | grep -c ^M.*$directory) if [ $fail -ne 0 ]; then echo "FAIL: changes to ${directory} allowed only by \"${allowed_user}\"" exit 1 fi done vmem-1.8/utils/check-os.sh000077500000000000000000000042011361505074100155200ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # Used to check if there are no banned functions in .o file # # usage: ./check-os.sh [os.h path] [.o file] [.c file] EXCLUDE="os_linux|os_thread_linux" if [[ $2 =~ $EXCLUDE ]]; then echo "skip $2" exit 0 fi symbols=$(nm --demangle --undefined-only --format=posix $2 | sed 's/ U *//g') functions=$(cat $1 | tr '\n' '|') out=$( for sym in $symbols do grep -w $functions <<<"$sym" done | sed 's/$/\(\)/g') [[ ! -z $out ]] && echo -e "`pwd`/$3:1: non wrapped function(s):\n$out\nplease use os wrappers" && rm -f $2 && # remove .o file as it don't match requirements exit 1 exit 0 vmem-1.8/utils/check-shebang.sh000077500000000000000000000043371361505074100165200ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # utils/check-shebang.sh -- interpreter directive check script # set -e err_count=0 for file in $@ ; do [ ! -f $file ] && continue SHEBANG=`head -n1 $file | cut -d" " -f1` [ "${SHEBANG:0:2}" != "#!" ] && continue if [ "$SHEBANG" != "#!/usr/bin/env" -a $SHEBANG != "#!/bin/sh" ]; then INTERP=`echo $SHEBANG | rev | cut -d"/" -f1 | rev` echo "$file:1: error: invalid interpreter directive:" >&2 echo " (is: \"$SHEBANG\", should be: \"#!/usr/bin/env $INTERP\")" >&2 ((err_count+=1)) fi done if [ "$err_count" == "0" ]; then echo "Interpreter directives are OK." else echo "Found $err_count errors in interpreter directives!" 
>&2 err_count=1 fi exit $err_count vmem-1.8/utils/check_license/000077500000000000000000000000001361505074100162475ustar00rootroot00000000000000vmem-1.8/utils/check_license/.gitignore000066400000000000000000000000161361505074100202340ustar00rootroot00000000000000check-license vmem-1.8/utils/check_license/Makefile000066400000000000000000000046601361505074100177150ustar00rootroot00000000000000# # Copyright 2016-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # include ../../src/common.inc CFLAGS += -std=gnu99 CFLAGS += -Wall CFLAGS += -Werror CFLAGS += -Wmissing-prototypes CFLAGS += -Wpointer-arith CFLAGS += -Wunused-macros CFLAGS += -Wmissing-field-initializers CFLAGS += -Wsign-conversion CFLAGS += -Wsign-compare ifeq ($(WCONVERSION_AVAILABLE), y) CFLAGS += -Wconversion endif CFLAGS += -fno-common ifeq ($(WUNREACHABLE_CODE_RETURN_AVAILABLE), y) CFLAGS += -Wunreachable-code-return endif ifeq ($(WMISSING_VARIABLE_DECLARATIONS_AVAILABLE), y) CFLAGS += -Wmissing-variable-declarations endif ifeq ($(WFLOAT_EQUAL_AVAILABLE), y) CFLAGS += -Wfloat-equal endif ifeq ($(WSWITCH_DEFAULT_AVAILABLE), y) CFLAGS += -Wswitch-default endif ifeq ($(WCAST_FUNCTION_TYPE_AVAILABLE), y) CFLAGS += -Wcast-function-type endif TARGET=check-license all: $(TARGET) $(TARGET): $(TARGET).o clean: $(RM) -f *.o clobber: clean $(RM) -f $(TARGET) .PHONY: all clean clobber vmem-1.8/utils/check_license/check-headers.sh000077500000000000000000000143551361505074100213040ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # check-headers.sh - check copyright and license in source files SELF=$0 function usage() { echo "Usage: $SELF [-h|-v|-a]" echo " -h, --help this help message" echo " -v, --verbose verbose mode" echo " -a, --all check all files (only modified files are checked by default)" } if [ "$#" -lt 3 ]; then usage >&2 exit 2 fi SOURCE_ROOT=$1 shift CHECK_LICENSE=$1 shift LICENSE=$1 shift PATTERN=`mktemp` TMP=`mktemp` TMP2=`mktemp` TEMPFILE=`mktemp` rm -f $PATTERN $TMP $TMP2 function exit_if_not_exist() { if [ ! -f $1 ]; then echo "Error: file $1 does not exist. Exiting..." >&2 exit 1 fi } if [ "$1" == "-h" -o "$1" == "--help" ]; then usage exit 0 fi exit_if_not_exist $LICENSE exit_if_not_exist $CHECK_LICENSE export GIT="git -C ${SOURCE_ROOT}" $GIT rev-parse || exit 1 if [ -f $SOURCE_ROOT/.git/shallow ]; then SHALLOW_CLONE=1 echo echo "Warning: This is a shallow clone. Checking dates in copyright headers" echo " will be skipped in case of files that have no history." echo else SHALLOW_CLONE=0 fi VERBOSE=0 CHECK_ALL=0 while [ "$1" != "" ]; do case $1 in -v|--verbose) VERBOSE=1 ;; -a|--all) CHECK_ALL=1 ;; esac shift done if [ $CHECK_ALL -eq 0 ]; then CURRENT_COMMIT=$($GIT log --pretty=%H -1) MERGE_BASE=$($GIT merge-base HEAD origin/master 2>/dev/null) [ -z $MERGE_BASE ] && \ MERGE_BASE=$($GIT log --pretty="%cN:%H" | grep GitHub | head -n1 | cut -d: -f2) [ -z $MERGE_BASE -o "$CURRENT_COMMIT" = "$MERGE_BASE" ] && \ CHECK_ALL=1 fi if [ $CHECK_ALL -eq 1 ]; then echo "Checking copyright headers of all files..." GIT_COMMAND="ls-tree -r --name-only HEAD" else echo echo "Warning: will check copyright headers of modified files only," echo " in order to check all files issue the following command:" echo " $ $SELF -a" echo " (e.g.: $ $SELF $SOURCE_ROOT $CHECK_LICENSE $LICENSE -a)" echo echo "Checking copyright headers of modified files only..." GIT_COMMAND="diff --name-only $MERGE_BASE $CURRENT_COMMIT" fi FILES=$($GIT $GIT_COMMAND | ${SOURCE_ROOT}/utils/check_license/file-exceptions.sh | \ grep -E -e '*\.[chs]$' -e '*\.[ch]pp$' -e '*\.sh$' \ -e '*\.py$' -e '*\.link$' -e 'Makefile*' -e 'TEST*' \ -e '/common.inc$' -e '/match$' -e '/check_whitespace$' \ -e 'LICENSE$' -e 'CMakeLists.txt$' -e '*\.cmake$' | \ xargs) # jemalloc.mk has to be checked always, because of the grep rules above FILES="$FILES src/jemalloc/jemalloc.mk" # create a license pattern file $CHECK_LICENSE create $LICENSE $PATTERN [ $? -ne 0 ] && exit 1 RV=0 for file in $FILES ; do # The src_path is a path which should be used in every command except git. 
# git is called with -C flag so filepaths should be relative to SOURCE_ROOT src_path="${SOURCE_ROOT}/$file" [ ! -f $src_path ] && continue # ensure that file is UTF-8 encoded ENCODING=`file -b --mime-encoding $src_path` iconv -f $ENCODING -t "UTF-8" $src_path > $TEMPFILE YEARS=`$CHECK_LICENSE check-pattern $PATTERN $TEMPFILE $src_path` if [ $? -ne 0 ]; then echo -n $YEARS RV=1 else HEADER_FIRST=`echo $YEARS | cut -d"-" -f1` HEADER_LAST=` echo $YEARS | cut -d"-" -f2` if [ $SHALLOW_CLONE -eq 0 ]; then $GIT log --no-merges --format="%ai %aE" -- $file | sort > $TMP else # mark the grafted commits (commits with no parents) $GIT log --no-merges --format="%ai %aE grafted-%p-commit" -- $file | sort > $TMP fi # skip checking dates for non-Intel commits [[ ! $(tail -n1 $TMP) =~ "@intel.com" ]] && continue # skip checking dates for new files [ $(cat $TMP | wc -l) -le 1 ] && continue # grep out the grafted commits (commits with no parents) # and skip checking dates for non-Intel commits grep -v -e "grafted--commit" $TMP | grep -e "@intel.com" > $TMP2 [ $(cat $TMP2 | wc -l) -eq 0 ] && continue FIRST=`head -n1 $TMP2` LAST=` tail -n1 $TMP2` COMMIT_FIRST=`echo $FIRST | cut -d"-" -f1` COMMIT_LAST=` echo $LAST | cut -d"-" -f1` if [ "$COMMIT_FIRST" != "" -a "$COMMIT_LAST" != "" ]; then if [ $HEADER_LAST -lt $COMMIT_LAST ]; then if [ $HEADER_FIRST -lt $COMMIT_FIRST ]; then COMMIT_FIRST=$HEADER_FIRST fi COMMIT_LAST=`date +%G` if [ $COMMIT_FIRST -eq $COMMIT_LAST ]; then NEW=$COMMIT_LAST else NEW=$COMMIT_FIRST-$COMMIT_LAST fi echo "$file:1: error: wrong copyright date: (is: $YEARS, should be: $NEW)" >&2 RV=1 fi else echo "error: unknown commit dates in file: $file" >&2 RV=1 fi fi done rm -f $TMP $TMP2 $TEMPFILE # check if error found if [ $RV -eq 0 ]; then echo "Copyright headers are OK." else echo "Error(s) in copyright headers found!" >&2 fi exit $RV vmem-1.8/utils/check_license/check-license.c000066400000000000000000000312661361505074100211200ustar00rootroot00000000000000/* * Copyright 2016-2019, Intel Corporation * Copyright (c) 2016, Microsoft Corporation. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* * check-license.c -- check the license in the file */ #include #include #include #include #include #include #include #include #include #include #include #include #define LICENSE_MAX_LEN 2048 #define COPYRIGHT "Copyright " #define COPYRIGHT_LEN 10 #define COPYRIGHT_SYMBOL "(c) " #define COPYRIGHT_SYMBOL_LEN 4 #define YEAR_MIN 1900 #define YEAR_MAX 9999 #define YEAR_INIT_MIN 9999 #define YEAR_INIT_MAX 0 #define YEAR_LEN 4 #define LICENSE_BEG "Redistribution and use" #define LICENSE_END "THE POSSIBILITY OF SUCH DAMAGE." #define DIFF_LEN 50 #define COMMENT_STR_LEN 5 #define STR_MODE_CREATE "create" #define STR_MODE_PATTERN "check-pattern" #define STR_MODE_LICENSE "check-license" #define ERROR(fmt, ...) fprintf(stderr, "error: " fmt "\n", __VA_ARGS__) #define ERROR2(fmt, ...) fprintf(stderr, fmt "\n", __VA_ARGS__) /* * help_str -- string for the help message */ static const char * const help_str = "Usage: %s [filename]\n" "\n" "Modes:\n" " create \n" " - create a license pattern file \n" " from the license text file \n" "\n" " check-pattern \n" " - check if a license in \n" " matches the license pattern in ,\n" " if it does, copyright dates are printed out (see below)\n" "\n" " check-license \n" " - check if a license in \n" " matches the license text in ,\n" " if it does, copyright dates are printed out (see below)\n" "\n" "In case of 'check_pattern' and 'check_license' modes,\n" "if the license is correct, it prints out copyright dates\n" "in the following format: OLDEST_YEAR-NEWEST_YEAR\n" "\n" "Return value: returns 0 on success and -1 on error.\n" "\n"; /* * read_pattern -- read the pattern from the 'path_pattern' file to 'pattern' */ static int read_pattern(const char *path_pattern, char *pattern) { int file_pattern; ssize_t ret; if ((file_pattern = open(path_pattern, O_RDONLY)) == -1) { ERROR("open(): %s: %s", strerror(errno), path_pattern); return -1; } ret = read(file_pattern, pattern, LICENSE_MAX_LEN); close(file_pattern); if (ret == -1) { ERROR("read(): %s: %s", strerror(errno), path_pattern); return -1; } else if (ret != LICENSE_MAX_LEN) { ERROR("read(): incorrect format of the license pattern" " file (%s)", path_pattern); return -1; } return 0; } /* * write_pattern -- write 'pattern' to the 'path_pattern' file */ static int write_pattern(const char *path_pattern, char *pattern) { int file_pattern; ssize_t ret; if ((file_pattern = open(path_pattern, O_WRONLY | O_CREAT | O_EXCL, S_IRUSR | S_IRGRP | S_IROTH)) == -1) { ERROR("open(): %s: %s", strerror(errno), path_pattern); return -1; } ret = write(file_pattern, pattern, LICENSE_MAX_LEN); close(file_pattern); if (ret < LICENSE_MAX_LEN) { ERROR("write(): %s: %s", strerror(errno), path_pattern); return -1; } return 0; } /* * strstr2 -- locate two substrings in the string */ static int strstr2(const char *str, const char *sub1, const char *sub2, char **pos1, char **pos2) { *pos1 = strstr(str, sub1); *pos2 = strstr(str, sub2); if (*pos1 == NULL || *pos2 == NULL) return -1; return 0; } /* * format_license -- remove 
comments and redundant whitespaces from the license */ static void format_license(char *license, size_t length) { char comment_str[COMMENT_STR_LEN]; char *comment = license; size_t comment_len; int was_space; size_t w, r; /* detect a comment string */ while (*comment != '\n') comment--; /* is there any comment? */ if (comment + 1 != license) { /* separate out a comment */ strncpy(comment_str, comment, COMMENT_STR_LEN - 1); comment_str[COMMENT_STR_LEN - 1] = 0; comment = comment_str + 1; while (isspace(*comment)) comment++; while (!isspace(*comment)) comment++; *comment = '\0'; comment_len = strlen(comment_str); /* replace comments with spaces */ if (comment_len > 2) { while ((comment = strstr(license, comment_str)) != NULL) for (w = 1; w < comment_len; w++) comment[w] = ' '; } else { while ((comment = strstr(license, comment_str)) != NULL) comment[1] = ' '; } } /* replace multiple spaces with one space */ was_space = 0; for (r = w = 0; r < length; r++) { if (!isspace(license[r])) { if (was_space) { license[w++] = ' '; was_space = 0; } if (w < r) license[w] = license[r]; w++; } else { if (!was_space) was_space = 1; } } license[w] = '\0'; } /* * analyze_license -- check correctness of the license */ static int analyze_license(const char *name_to_print, char *buffer, char **license) { char *_license; size_t _length; char *beg_str, *end_str; if (strstr2(buffer, LICENSE_BEG, LICENSE_END, &beg_str, &end_str)) { if (!beg_str) ERROR2("%s:1: error: incorrect license" " (license should start with the string '%s')", name_to_print, LICENSE_BEG); else ERROR2("%s:1: error: incorrect license" " (license should end with the string '%s')", name_to_print, LICENSE_END); return -1; } _license = beg_str; assert((uintptr_t)end_str > (uintptr_t)beg_str); _length = (size_t)(end_str - beg_str) + strlen(LICENSE_END); _license[_length] = '\0'; format_license(_license, _length); *license = _license; return 0; } /* * create_pattern -- create 'pattern' from the 'path_license' file */ static int create_pattern(const char *path_license, char *pattern) { char buffer[LICENSE_MAX_LEN]; char *license; ssize_t ret; int file_license; if ((file_license = open(path_license, O_RDONLY)) == -1) { ERROR("open(): %s: %s", strerror(errno), path_license); return -1; } memset(buffer, 0, sizeof(buffer)); ret = read(file_license, buffer, LICENSE_MAX_LEN); close(file_license); if (ret == -1) { ERROR("read(): %s: %s", strerror(errno), path_license); return -1; } if (analyze_license(path_license, buffer, &license) == -1) return -1; strncpy(pattern, license, LICENSE_MAX_LEN); return 0; } /* * print_diff -- print the first difference between 'license' and 'pattern' */ static void print_diff(char *license, char *pattern, size_t len) { size_t i = 0; while (i < len && license[i] == pattern[i]) i++; license[i + 1] = '\0'; pattern[i + 1] = '\0'; i = (i - DIFF_LEN > 0) ? (i - DIFF_LEN) : 0; while (i > 0 && license[i] != ' ') i--; fprintf(stderr, " The first difference is at the end of the line:\n"); fprintf(stderr, " * License: %s\n", license + i); fprintf(stderr, " * Pattern: %s\n", pattern + i); } /* * verify_license -- compare 'license' with 'pattern' and check correctness * of the copyright line */ static int verify_license(const char *path_to_check, char *pattern, const char *filename) { char buffer[LICENSE_MAX_LEN]; char *license, *copyright; int file_to_check; ssize_t ret; int year_first, year_last; int min_year_first = YEAR_INIT_MIN; int max_year_last = YEAR_INIT_MAX; char *err_str = NULL; const char *name_to_print = filename ? 
filename : path_to_check; if ((file_to_check = open(path_to_check, O_RDONLY)) == -1) { ERROR("open(): %s: %s", strerror(errno), path_to_check); return -1; } memset(buffer, 0, sizeof(buffer)); ret = read(file_to_check, buffer, LICENSE_MAX_LEN); close(file_to_check); if (ret == -1) { ERROR("read(): %s: %s", strerror(errno), name_to_print); return -1; } if (analyze_license(name_to_print, buffer, &license) == -1) return -1; /* check the copyright notice */ copyright = buffer; while ((copyright = strstr(copyright, COPYRIGHT)) != NULL) { copyright += COPYRIGHT_LEN; /* skip the copyright symbol '(c)' if any */ if (strncmp(copyright, COPYRIGHT_SYMBOL, COPYRIGHT_SYMBOL_LEN) == 0) copyright += COPYRIGHT_SYMBOL_LEN; /* look for the first year */ if (!isdigit(*copyright)) { err_str = "no digit just after the 'Copyright ' string"; break; } year_first = atoi(copyright); if (year_first < YEAR_MIN || year_first > YEAR_MAX) { err_str = "the first year is wrong"; break; } copyright += YEAR_LEN; if (year_first < min_year_first) min_year_first = year_first; if (year_first > max_year_last) max_year_last = year_first; /* check if there is the second year */ if (*copyright == ',') continue; else if (*copyright != '-') { err_str = "'-' or ',' expected after the first year"; break; } copyright++; /* look for the second year */ if (!isdigit(*copyright)) { err_str = "no digit after '-'"; break; } year_last = atoi(copyright); if (year_last < YEAR_MIN || year_last > YEAR_MAX) { err_str = "the second year is wrong"; break; } copyright += YEAR_LEN; if (year_last > max_year_last) max_year_last = year_last; if (*copyright != ',') { err_str = "',' expected after the second year"; break; } } if (!err_str && min_year_first == YEAR_INIT_MIN) err_str = "no 'Copyright ' string found"; if (err_str) /* found an error in the copyright notice */ ERROR2("%s:1: error: incorrect copyright notice: %s", name_to_print, err_str); /* now check the license */ if (memcmp(license, pattern, strlen(pattern)) != 0) { ERROR2("%s:1: error: incorrect license", name_to_print); print_diff(license, pattern, strlen(pattern)); return -1; } if (err_str) return -1; /* all checks passed */ if (min_year_first != max_year_last && max_year_last != YEAR_INIT_MAX) { printf("%i-%i\n", min_year_first, max_year_last); } else { printf("%i\n", min_year_first); } return 0; } /* * mode_create_pattern_file -- 'create' mode function */ static int mode_create_pattern_file(const char *path_license, const char *path_pattern) { char pattern[LICENSE_MAX_LEN]; if (create_pattern(path_license, pattern) == -1) return -1; return write_pattern(path_pattern, pattern); } /* * mode_check_pattern -- 'check_pattern' mode function */ static int mode_check_pattern(const char *path_license, const char *path_to_check) { char pattern[LICENSE_MAX_LEN]; if (create_pattern(path_license, pattern) == -1) return -1; return verify_license(path_to_check, pattern, NULL); } /* * mode_check_license -- 'check_license' mode function */ static int mode_check_license(const char *path_pattern, const char *path_to_check, const char *filename) { char pattern[LICENSE_MAX_LEN]; if (read_pattern(path_pattern, pattern) == -1) return -1; return verify_license(path_to_check, pattern, filename); } int main(int argc, char *argv[]) { if (strcmp(argv[1], STR_MODE_CREATE) == 0) { if (argc != 4) goto invalid_args; return mode_create_pattern_file(argv[2], argv[3]); } else if (strcmp(argv[1], STR_MODE_PATTERN) == 0) { if (argc != 5) goto invalid_args; return mode_check_license(argv[2], argv[3], argv[4]); } else if 
(strcmp(argv[1], STR_MODE_LICENSE) == 0) { if (argc != 4) goto invalid_args; return mode_check_pattern(argv[2], argv[3]); } else { ERROR("wrong mode: %s\n", argv[1]); } invalid_args: printf(help_str, argv[0]); return -1; } vmem-1.8/utils/check_license/file-exceptions.sh000077500000000000000000000034321361505074100217060ustar00rootroot00000000000000#!/bin/sh -e # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # file-exceptions.sh - filter out files not checked for copyright and license grep -v -E -e 'src/jemalloc/' -e 'src/windows/jemalloc_gen/' -e '/queue.h$' -e '/getopt.h$' -e '/getopt.c$' -e 'src/common/valgrind/' -e '/testconfig\...$' vmem-1.8/utils/check_sdk_version.py000077500000000000000000000074111361505074100175330ustar00rootroot00000000000000#!/usr/bin/env python3 # # Copyright 2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. import argparse import os from subprocess import check_output, CalledProcessError import sys import shlex from xml.dom import minidom from xml.parsers.expat import ExpatError VALID_SDK_VERSION = '10.0.16299.0' def get_vcxproj_files(root_dir, ignored): """Get a list ".vcxproj" files under VMEM directory.""" to_format = [] command = 'git ls-files *.vcxproj' try: output = check_output(shlex.split(command), cwd=root_dir).decode("UTF-8") except CalledProcessError as e: sys.exit('Error: "' + command + '" failed with returncode: ' + str(e.returncode)) for line in output.splitlines(): if not line: continue file_path = os.path.join(root_dir, line) if os.path.isfile(file_path): to_format.append(file_path) return to_format def get_sdk_version(file): """ Get Windows SDK version from modified/new files from the current pull request. """ tag = 'WindowsTargetPlatformVersion' try: xml_file = minidom.parse(file) except ExpatError as e: sys.exit('Error: "' + file + '" is incorrect.\n' + str(e)) version_list = xml_file.getElementsByTagName(tag) if len(version_list) != 1: sys.exit('Error: the amount of tags "' + tag + '" is other than 1.') version = version_list[0].firstChild.data return version def main(): parser = argparse.ArgumentParser(prog='check_sdk_version.py', description='The script checks Windows SDK version in .vcxproj files.') parser.add_argument('-d', '--directory', help='Directory of VMEM tree.', required=True) args = parser.parse_args() current_directory = args.directory if not os.path.isdir(current_directory): sys.exit('"' + current_directory + '" is not a directory.') files = get_vcxproj_files(current_directory, '') if not files: sys.exit(0) for file in files: sdk_version = get_sdk_version(file) if sdk_version != VALID_SDK_VERSION: sys.exit('Wrong Windows SDK version: ' + sdk_version + ' in file: "' + file + '". Please use: ' + VALID_SDK_VERSION) if __name__ == '__main__': main() vmem-1.8/utils/check_whitespace000077500000000000000000000116311361505074100167110ustar00rootroot00000000000000#!/usr/bin/env perl # # Copyright 2015-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # check_whitespace -- scrub source tree for whitespace errors # use strict; use warnings; use File::Basename; use File::Find; use Encode; use v5.16; my $Me = $0; $Me =~ s,.*/,,; $SIG{HUP} = $SIG{INT} = $SIG{TERM} = $SIG{__DIE__} = sub { die @_ if $^S; my $errstr = shift; die "$Me: ERROR: $errstr"; }; my $Errcount = 0; # # err -- emit error, keep total error count # sub err { warn @_, "\n"; $Errcount++; } # # decode_file_as_string -- slurp an entire file into memory and decode # sub decode_file_as_string { my ($full, $file) = @_; my $fh; open($fh, '<', $full) or die "$full $!\n"; local $/; $_ = <$fh>; close $fh; # check known encodings or die my $decoded; my @encodings = ("UTF-8", "UTF-16", "UTF-16LE", "UTF-16BE"); foreach my $enc (@encodings) { eval { $decoded = decode( $enc, $_, Encode::FB_CROAK ) }; if (!$@) { $decoded =~ s/\R/\n/g; return $decoded; } } die "$Me: ERROR: Unknown file encoding"; } # # check_whitespace -- run the checks on the given file # sub check_whitespace { my ($full, $file) = @_; my $line = 0; my $eol; my $nf = 0; my $fstr = decode_file_as_string($full, $file); for (split /^/, $fstr) { $line++; $eol = /[\n]/s; if (/^\.nf$/) { err("$full:$line: ERROR: nested .nf") if $nf; $nf = 1; } elsif (/^\.fi$/) { $nf = 0; } elsif ($nf == 0) { chomp; err("$full:$line: ERROR: trailing whitespace") if /\s$/; err("$full:$line: ERROR: spaces before tabs") if / \t/; } } err("$full:$line: .nf without .fi") if $nf; err("$full:$line: noeol") unless $eol; } sub check_whitespace_with_exc { my ($full) = @_; $_ = $full; return 0 if /^[.\/]*src\/jemalloc.*/; return 0 if /^[.\/]*src\/common\/queue\.h/; return 0 if /^[.\/]*src\/common\/valgrind\/.*\.h/; $_ = basename($full); return 0 unless /^(README.*|LICENSE.*|Makefile.*|CMakeLists.txt|.gitignore|TEST.*|RUNTESTS|check_whitespace|.*\.([chp13s]|sh|map|cpp|hpp|inc|PS1|ps1|py|md|cmake))$/; return 0 if -z; check_whitespace($full, $_); return 1; } my $verbose = 0; my $force = 0; my $recursive = 0; sub check { my ($file) = @_; my $r; if ($force) { $r = check_whitespace($file, basename($file)); } else { $r = check_whitespace_with_exc($file); } if ($verbose) { if ($r == 0) { printf("skipped $file\n"); } else { printf("checked $file\n"); } } } my @files = (); foreach my $arg (@ARGV) { if ($arg eq '-v') { $verbose = 1; next; } if ($arg eq '-f') { $force = 1; next; } if ($arg eq '-r') { $recursive = 1; next; } if ($arg eq '-g') { @files = `git ls-tree -r --name-only HEAD`; chomp(@files); next; } if ($arg eq '-h') { printf "Options: -g - check all files tracked by git -r dir - recursively check all files in specified directory -v verbose - print whether file was checked or not -f force - disable blacklist\n"; exit 1; } if ($recursive == 1) { find(sub { my $full = $File::Find::name; if 
(!$force && ($full eq './.git' || $full eq './src/jemalloc' || $full eq './src/debug' || $full eq './src/nondebug' || $full eq './rpmbuild' || $full eq './dpkgbuild')) { $File::Find::prune = 1; return; } return unless -f; push @files, $full; }, $arg); $recursive = 0; next; } push @files, $arg; } if (!@files) { printf "Empty file list!\n"; } foreach (@files) { check($_); } exit $Errcount; vmem-1.8/utils/copy-source.sh000077500000000000000000000044001361505074100162750ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # utils/copy-source.sh -- copy source files (from HEAD) to 'path_to_dir/vmem' # directory whether in git repository or not. # # usage: ./copy-source.sh [path_to_dir] [srcversion] set -e DESTDIR="$1" SRCVERSION=$2 if [ -d .git ]; then if [ -n "$(git status --porcelain)" ]; then echo "Error: Working directory is dirty: $(git status --porcelain)" exit 1 fi else echo "Warning: You are not in git repository, working directory might be dirty." fi mkdir -p "$DESTDIR"/vmem echo -n $SRCVERSION > "$DESTDIR"/vmem/.version if [ -d .git ]; then git archive HEAD | tar -x -C "$DESTDIR"/vmem else find . \ -maxdepth 1 \ -not -name $(basename "$DESTDIR") \ -not -name . \ -exec cp -r "{}" "$DESTDIR"/vmem \; fi vmem-1.8/utils/cstyle000077500000000000000000000656461361505074100147420ustar00rootroot00000000000000#!/usr/bin/env perl # # CDDL HEADER START # # The contents of this file are subject to the terms of the # Common Development and Distribution License (the "License"). # You may not use this file except in compliance with the License. # # You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE # or http://www.opensolaris.org/os/licensing. # See the License for the specific language governing permissions # and limitations under the License. # # When distributing Covered Code, include this CDDL HEADER in each # file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
# If applicable, add the following below this CDDL HEADER, with the # fields enclosed by brackets "[]" replaced with your own identifying # information: Portions Copyright [yyyy] [name of copyright owner] # # CDDL HEADER END # # # Copyright 2008 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. # # Portions copyright 2017, Intel Corporation. # # @(#)cstyle 1.58 98/09/09 (from shannon) #ident "%Z%%M% %I% %E% SMI" # # cstyle - check for some common stylistic errors. # # cstyle is a sort of "lint" for C coding style. # It attempts to check for the style used in the # kernel, sometimes known as "Bill Joy Normal Form". # # There's a lot this can't check for, like proper indentation # of code blocks. There's also a lot more this could check for. # # A note to the non perl literate: # # perl regular expressions are pretty much like egrep # regular expressions, with the following special symbols # # \s any space character # \S any non-space character # \w any "word" character [a-zA-Z0-9_] # \W any non-word character # \d a digit [0-9] # \D a non-digit # \b word boundary (between \w and \W) # \B non-word boundary # require 5.0; use IO::File; use Getopt::Std; use strict; use warnings; my $usage = "usage: cstyle [-chpvCP] [-o constructs] file ... -c check continuation indentation inside functions -h perform heuristic checks that are sometimes wrong -p perform some of the more picky checks -v verbose -C don't check anything in header block comments -P check for use of non-POSIX types -o constructs allow a comma-seperated list of optional constructs: doxygen allow doxygen-style block comments (/** /*!) splint allow splint-style lint comments (/*@ ... @*/) "; my %opts; if (!getopts("cho:pvCP", \%opts)) { print $usage; exit 2; } my $check_continuation = $opts{'c'}; my $heuristic = $opts{'h'}; my $picky = $opts{'p'}; my $verbose = $opts{'v'}; my $ignore_hdr_comment = $opts{'C'}; my $check_posix_types = $opts{'P'}; my $doxygen_comments = 0; my $splint_comments = 0; if (defined($opts{'o'})) { for my $x (split /,/, $opts{'o'}) { if ($x eq "doxygen") { $doxygen_comments = 1; } elsif ($x eq "splint") { $splint_comments = 1; } else { print "cstyle: unrecognized construct \"$x\"\n"; print $usage; exit 2; } } } my ($filename, $line, $prev); # shared globals my $fmt; my $hdr_comment_start; if ($verbose) { $fmt = "%s:%d: %s\n%s\n"; } else { $fmt = "%s:%d: %s\n"; } if ($doxygen_comments) { # doxygen comments look like "/*!" or "/**"; allow them. $hdr_comment_start = qr/^\s*\/\*[\!\*]?$/; } else { $hdr_comment_start = qr/^\s*\/\*$/; } # Note, following must be in single quotes so that \s and \w work right. my $typename = '(int|char|short|long|unsigned|float|double' . '|\w+_t|struct\s+\w+|union\s+\w+|FILE|BOOL)'; # mapping of old types to POSIX compatible types my %old2posix = ( 'unchar' => 'uchar_t', 'ushort' => 'ushort_t', 'uint' => 'uint_t', 'ulong' => 'ulong_t', 'u_int' => 'uint_t', 'u_short' => 'ushort_t', 'u_long' => 'ulong_t', 'u_char' => 'uchar_t', 'quad' => 'quad_t' ); my $lint_re = qr/\/\*(?: ARGSUSED[0-9]*|NOTREACHED|LINTLIBRARY|VARARGS[0-9]*| CONSTCOND|CONSTANTCOND|CONSTANTCONDITION|EMPTY| FALLTHRU|FALLTHROUGH|LINTED.*?|PRINTFLIKE[0-9]*| PROTOLIB[0-9]*|SCANFLIKE[0-9]*|CSTYLED.*? 
)\*\//x; my $splint_re = qr/\/\*@.*?@\*\//x; my $warlock_re = qr/\/\*\s*(?: VARIABLES\ PROTECTED\ BY| MEMBERS\ PROTECTED\ BY| ALL\ MEMBERS\ PROTECTED\ BY| READ-ONLY\ VARIABLES:| READ-ONLY\ MEMBERS:| VARIABLES\ READABLE\ WITHOUT\ LOCK:| MEMBERS\ READABLE\ WITHOUT\ LOCK:| LOCKS\ COVERED\ BY| LOCK\ UNNEEDED\ BECAUSE| LOCK\ NEEDED:| LOCK\ HELD\ ON\ ENTRY:| READ\ LOCK\ HELD\ ON\ ENTRY:| WRITE\ LOCK\ HELD\ ON\ ENTRY:| LOCK\ ACQUIRED\ AS\ SIDE\ EFFECT:| READ\ LOCK\ ACQUIRED\ AS\ SIDE\ EFFECT:| WRITE\ LOCK\ ACQUIRED\ AS\ SIDE\ EFFECT:| LOCK\ RELEASED\ AS\ SIDE\ EFFECT:| LOCK\ UPGRADED\ AS\ SIDE\ EFFECT:| LOCK\ DOWNGRADED\ AS\ SIDE\ EFFECT:| FUNCTIONS\ CALLED\ THROUGH\ POINTER| FUNCTIONS\ CALLED\ THROUGH\ MEMBER| LOCK\ ORDER: )/x; my $err_stat = 0; # exit status if ($#ARGV >= 0) { foreach my $arg (@ARGV) { my $fh = new IO::File $arg, "r"; if (!defined($fh)) { printf "%s: can not open\n", $arg; } else { &cstyle($arg, $fh); close $fh; } } } else { &cstyle("", *STDIN); } exit $err_stat; my $no_errs = 0; # set for CSTYLED-protected lines sub err($) { my ($error) = @_; unless ($no_errs) { if ($verbose) { printf $fmt, $filename, $., $error, $line; } else { printf $fmt, $filename, $., $error; } $err_stat = 1; } } sub err_prefix($$) { my ($prevline, $error) = @_; my $out = $prevline."\n".$line; unless ($no_errs) { printf $fmt, $filename, $., $error, $out; $err_stat = 1; } } sub err_prev($) { my ($error) = @_; unless ($no_errs) { printf $fmt, $filename, $. - 1, $error, $prev; $err_stat = 1; } } sub cstyle($$) { my ($fn, $filehandle) = @_; $filename = $fn; # share it globally my $in_cpp = 0; my $next_in_cpp = 0; my $in_comment = 0; my $in_header_comment = 0; my $comment_done = 0; my $in_warlock_comment = 0; my $in_function = 0; my $in_function_header = 0; my $in_declaration = 0; my $note_level = 0; my $nextok = 0; my $nocheck = 0; my $in_string = 0; my ($okmsg, $comment_prefix); $line = ''; $prev = ''; reset_indent(); line: while (<$filehandle>) { s/\r?\n$//; # strip return and newline # save the original line, then remove all text from within # double or single quotes, we do not want to check such text. $line = $_; # # C allows strings to be continued with a backslash at the end of # the line. We translate that into a quoted string on the previous # line followed by an initial quote on the next line. # # (we assume that no-one will use backslash-continuation with character # constants) # $_ = '"' . $_ if ($in_string && !$nocheck && !$in_comment); # # normal strings and characters # s/'([^\\']|\\[^xX0]|\\0[0-9]*|\\[xX][0-9a-fA-F]*)'/''/g; s/"([^\\"]|\\.)*"/\"\"/g; # # detect string continuation # if ($nocheck || $in_comment) { $in_string = 0; } else { # # Now that all full strings are replaced with "", we check # for unfinished strings continuing onto the next line. # $in_string = (s/([^"](?:"")*)"([^\\"]|\\.)*\\$/$1""/ || s/^("")*"([^\\"]|\\.)*\\$/""/); } # # figure out if we are in a cpp directive # $in_cpp = $next_in_cpp || /^\s*#/; # continued or started $next_in_cpp = $in_cpp && /\\$/; # only if continued # strip off trailing backslashes, which appear in long macros s/\s*\\$//; # an /* END CSTYLED */ comment ends a no-check block. if ($nocheck) { if (/\/\* *END *CSTYLED *\*\//) { $nocheck = 0; } else { reset_indent(); next line; } } # a /*CSTYLED*/ comment indicates that the next line is ok. 
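# For example (illustrative): a line containing /* CSTYLED */ sets $nextok, so the line that follows it is excused from the checks; any optional text written after the CSTYLED keyword is captured into $okmsg.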
if ($nextok) { if ($okmsg) { err($okmsg); } $nextok = 0; $okmsg = 0; if (/\/\* *CSTYLED.*\*\//) { /^.*\/\* *CSTYLED *(.*) *\*\/.*$/; $okmsg = $1; $nextok = 1; } $no_errs = 1; } elsif ($no_errs) { $no_errs = 0; } # check length of line. # first, a quick check to see if there is any chance of being too long. if (($line =~ tr/\t/\t/) * 7 + length($line) > 80) { # yes, there is a chance. # replace tabs with spaces and check again. my $eline = $line; 1 while $eline =~ s/\t+/' ' x (length($&) * 8 - length($`) % 8)/e; if (length($eline) > 80) { # allow long line if it is user visible string # find if line start from " and ends # with " + 2 optional characters # (these characters can be i.e. '");' '" \' or '",' etc...) if($eline =~ /^ *".*"[^"]{0,2}$/) { # check if entire line is one string literal $eline =~ s/^ *"//; $eline =~ s/"[^"]{0,2}$//; if($eline =~ /[^\\]"|[^\\](\\\\)+"/) { err("line > 80 characters"); } } else { err("line > 80 characters"); } } } # ignore NOTE(...) annotations (assumes NOTE is on lines by itself). if ($note_level || /\b_?NOTE\s*\(/) { # if in NOTE or this is NOTE s/[^()]//g; # eliminate all non-parens $note_level += s/\(//g - length; # update paren nest level next; } # a /* BEGIN CSTYLED */ comment starts a no-check block. if (/\/\* *BEGIN *CSTYLED *\*\//) { $nocheck = 1; } # a /*CSTYLED*/ comment indicates that the next line is ok. if (/\/\* *CSTYLED.*\*\//) { /^.*\/\* *CSTYLED *(.*) *\*\/.*$/; $okmsg = $1; $nextok = 1; } if (/\/\/ *CSTYLED/) { /^.*\/\/ *CSTYLED *(.*)$/; $okmsg = $1; $nextok = 1; } # universal checks; apply to everything if (/\t +\t/) { err("spaces between tabs"); } if (/ \t+ /) { err("tabs between spaces"); } if (/\s$/) { err("space or tab at end of line"); } if (/[^ \t(]\/\*/ && !/\w\(\/\*.*\*\/\);/) { err("comment preceded by non-blank"); } # is this the beginning or ending of a function? # (not if "struct foo\n{\n") if (/^{$/ && $prev =~ /\)\s*(const\s*)?(\/\*.*\*\/\s*)?\\?$/) { $in_function = 1; $in_declaration = 1; $in_function_header = 0; $prev = $line; next line; } if (/^}\s*(\/\*.*\*\/\s*)*$/) { if ($prev =~ /^\s*return\s*;/) { err_prev("unneeded return at end of function"); } $in_function = 0; reset_indent(); # we don't check between functions $prev = $line; next line; } if (/^\w*\($/) { $in_function_header = 1; } if ($in_warlock_comment && /\*\//) { $in_warlock_comment = 0; $prev = $line; next line; } # a blank line terminates the declarations within a function. # XXX - but still a problem in sub-blocks. if ($in_declaration && /^$/) { $in_declaration = 0; } if ($comment_done) { $in_comment = 0; $in_header_comment = 0; $comment_done = 0; } # does this looks like the start of a block comment? if (/$hdr_comment_start/) { if (!/^\t*\/\*/) { err("block comment not indented by tabs"); } $in_comment = 1; /^(\s*)\//; $comment_prefix = $1; if ($comment_prefix eq "") { $in_header_comment = 1; } $prev = $line; next line; } # are we still in the block comment? if ($in_comment) { if (/^$comment_prefix \*\/$/) { $comment_done = 1; } elsif (/\*\//) { $comment_done = 1; err("improper block comment close") unless ($ignore_hdr_comment && $in_header_comment); } elsif (!/^$comment_prefix \*[ \t]/ && !/^$comment_prefix \*$/) { err("improper block comment") unless ($ignore_hdr_comment && $in_header_comment); } } if ($in_header_comment && $ignore_hdr_comment) { $prev = $line; next line; } # check for errors that might occur in comments and in code. # allow spaces to be used to draw pictures in header and block comments. 
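# (illustrative) e.g. aligning text with a run of spaces where a tab is expected triggers "spaces instead of tabs" below, and a code line whose leading indentation uses spaces (with exceptions for block-comment continuation lines) triggers "indent by spaces instead of tabs".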
if (/[^ ] / && !/".* .*"/ && !$in_header_comment && !$in_comment) { err("spaces instead of tabs"); } if (/^ / && !/^ \*[ \t\/]/ && !/^ \*$/ && (!/^ \w/ || $in_function != 0)) { err("indent by spaces instead of tabs"); } if (/^\t+ [^ \t\*]/ || /^\t+ \S/ || /^\t+ \S/) { err("continuation line not indented by 4 spaces"); } if (/$warlock_re/ && !/\*\//) { $in_warlock_comment = 1; $prev = $line; next line; } if (/^\s*\/\*./ && !/^\s*\/\*.*\*\// && !/$hdr_comment_start/) { err("improper first line of block comment"); } if ($in_comment) { # still in comment, don't do further checks $prev = $line; next line; } if ((/[^(]\/\*\S/ || /^\/\*\S/) && !(/$lint_re/ || ($splint_comments && /$splint_re/))) { err("missing blank after open comment"); } if (/\S\*\/[^)]|\S\*\/$/ && !(/$lint_re/ || ($splint_comments && /$splint_re/))) { err("missing blank before close comment"); } if (/\/\/\S/) { # C++ comments err("missing blank after start comment"); } # check for unterminated single line comments, but allow them when # they are used to comment out the argument list of a function # declaration. if (/\S.*\/\*/ && !/\S.*\/\*.*\*\// && !/\(\/\*/) { err("unterminated single line comment"); } if (/^(#else|#endif|#include)(.*)$/) { $prev = $line; if ($picky) { my $directive = $1; my $clause = $2; # Enforce ANSI rules for #else and #endif: no noncomment # identifiers are allowed after #endif or #else. Allow # C++ comments since they seem to be a fact of life. if ((($1 eq "#endif") || ($1 eq "#else")) && ($clause ne "") && (!($clause =~ /^\s+\/\*.*\*\/$/)) && (!($clause =~ /^\s+\/\/.*$/))) { err("non-comment text following " . "$directive (or malformed $directive " . "directive)"); } } next line; } # # delete any comments and check everything else. Note that # ".*?" is a non-greedy match, so that we don't get confused by # multiple comments on the same line. # s/\/\*.*?\*\//\x01/g; s/\/\/.*$/\x01/; # C++ comments # delete any trailing whitespace; we have already checked for that. s/\s*$//; # following checks do not apply to text in comments. 
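# (illustrative) e.g. "a+=b" is reported as "missing space around += operator" while "a += b" passes; similarly "x<y" is reported as "missing space around relational operator".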
if (/[^ \t\+]\+[^\+=]/ || /[^\+]\+[^ \+=]/) { err("missing space around + operator"); } if (/[^ \t]\+=/ || /\+=[^ ]/) { err("missing space around += operator"); } if (/[^ \t\-]\-[^\->]/ && !/\(\w+\)\-\w/ && !/[\(\[]\-[\w \t]+[\)\],]/) { err("missing space before - operator"); } if (/[^\-]\-[^ \-=>]/ && !/\(\-\w+\)/ && !/(return|case|=|>|<|\?|:|,|^[ \t]+)[ \t]+\-[\w\(]/ && !/(\([^\)]+\)|\[|\()\-[\w\(\]]/) { err("missing space after - operator"); } if (/(return|case|=|\?|:|,|\[)[ \t]+\-[ \t]/ || /[\(\[]\-[ \t]/) { err("extra space after - operator"); } if (/[^ \t]\-=/ || /\-=[^ ]/) { err("missing space around -= operator"); } if (/[^ \t][\%\/]/ || /[\%\/][^ =]/ || /[\%\/]=[^ ]/) { err("missing space around one of operators: % %= / /="); } if (/[^ \t]\*=/ || /\*=[^ ]/) { err("missing space around *= operator"); } if (/[^ \t\(\)\*\[]\*/) { err("missing space before * operator"); } if (/\*[^ =\*\w\(,]/ && !/\(.+ \*+\)/ && !/\*\[\]/ && !/\*\-\-\w/ && !/\*\+\+\w/ && !/\*\)/) { err("missing space after * operator"); } if (/[^<>\s][!<>=]=/ || /[^<>][!<>=]=[^\s,]/ || (/[^->]>[^,=>\s]/ && !/[^->]>$/) || (/[^<]<[^,=<\s]/ && !/[^<]<$/) || /[^<\s]<[^<]/ || /[^->\s]>[^>]/) { err("missing space around relational operator"); } if (/\S>>=/ || /\S<<=/ || />>=\S/ || /<<=\S/ || /\S[-+*\/&|^%]=/ || (/[^-+*\/&|^%!<>=\s]=[^=]/ && !/[^-+*\/&|^%!<>=\s]=$/) || (/[^!<>=]=[^=\s]/ && !/[^!<>=]=$/)) { # XXX - should only check this for C++ code # XXX - there are probably other forms that should be allowed if (!/\soperator=/) { err("missing space around assignment operator"); } } if (/[,;]\S/ && !/\bfor \(;;\)/) { err("comma or semicolon followed by non-blank"); } # allow "for" statements to have empty "while" clauses if (/\s[,;]/ && !/^[\t]+;$/ && !/^\s*for \([^;]*; ;[^;]*\)/) { err("comma or semicolon preceded by blank"); } if (/^\s*(&&|\|\|)/) { err("improper boolean continuation"); } if (/\S *(&&|\|\|)/ || /(&&|\|\|) *\S/) { err("more than one space around boolean operator"); } if (/\b(for|if|while|switch|return|case)\(/) { err("missing space between keyword and paren"); } if (/(\b(for|if|while|switch|return)\b.*){2,}/ && !/^#define/) { # multiple "case" and "sizeof" allowed err("more than one keyword on line"); } if (/\b(for|if|while|switch|return|case)\s\s+\(/ && !/^#if\s+\(/) { err("extra space between keyword and paren"); } # try to detect "func (x)" but not "if (x)" or # "#define foo (x)" or "int (*func)();" if (/\w\s\(/) { my $s = $_; # strip off all keywords on the line s/\b(for|if|while|switch|return|case)\s\(/XXX(/g; s/\b(sizeof|typeof|__typeof__)\s*\(/XXX(/g; s/#elif\s\(/XXX(/g; s/^#define\s+\w+\s+\(/XXX(/; # do not match things like "void (*f)();" # or "typedef void (func_t)();" s/\w\s\(+\*/XXX(*/g; s/\b($typename|void)\s+\(+/XXX(/og; s/\btypedef\s($typename|void)\s+\(+/XXX(/og; # do not match "__attribute__ ((format (...)))" s/\b__attribute__\s*\(\(format\s*\(/__attribute__((XXX(/g; if (/\w\s\(/) { err("extra space between function name and left paren"); } $_ = $s; } # try to detect "int foo(x)", but not "extern int foo(x);" # XXX - this still trips over too many legitimate things, # like "int foo(x,\n\ty);" # if (/^(\w+(\s|\*)+)+\w+\(/ && !/\)[;,](\s|\x01)*$/ && # !/^(extern|static)\b/) { # err("return type of function not on separate line"); # } # this is a close approximation if (/^(\w+(\s|\*)+)+\w+\(.*\)(\s|\x01)*$/ && !/^(extern|static)\b/) { err("return type of function not on separate line"); } if (/^#define\t/ || /^#ifdef\t/ || /^#ifndef\t/) { err("#define/ifdef/ifndef followed by tab instead of 
space"); } if (/^#define\s\s+/ || /^#ifdef\s\s+/ || /^#ifndef\s\s+/) { err("#define/ifdef/ifndef followed by more than one space"); } # AON C-style doesn't require this. #if (/^\s*return\W[^;]*;/ && !/^\s*return\s*\(.*\);/) { # err("unparenthesized return expression"); #} if (/\bsizeof\b/ && !/\bsizeof\s*\(.*\)/) { err("unparenthesized sizeof expression"); } if (/\b(sizeof|typeof)\b/ && /\b(sizeof|typeof)\s+\(.*\)/) { err("spaces between sizeof/typeof expression and paren"); } if (/\(\s/) { err("whitespace after left paren"); } # allow "for" statements to have empty "continue" clauses if (/\s\)/ && !/^\s*for \([^;]*;[^;]*; \)/) { err("whitespace before right paren"); } if (/^\s*\(void\)[^ ]/) { err("missing space after (void) cast"); } if (/\S\{/ && !/\{\{/ && !/\(struct \w+\)\{/) { err("missing space before left brace"); } if ($in_function && /^\s+{/ && ($prev =~ /\)\s*$/ || $prev =~ /\bstruct\s+\w+$/)) { err("left brace starting a line"); } if (/}(else|while)/) { err("missing space after right brace"); } if (/}\s\s+(else|while)/) { err("extra space after right brace"); } if (/\b_VOID\b|\bVOID\b|\bSTATIC\b/) { err("obsolete use of VOID or STATIC"); } if (/\b($typename|void)\*/o) { err("missing space between type name and *"); } if (/^\s+#/) { err("preprocessor statement not in column 1"); } if (/^#\s/) { err("blank after preprocessor #"); } if (/!\s*(strcmp|strncmp|bcmp)\s*\(/) { err("don't use boolean ! with comparison functions"); } if (/^\S+\([\S\s]*\)\s*{/) { err("brace of function definition not at beginning of line"); } if (/static\s+\S+\s*=\s*(0|NULL)\s*;/) { err("static variable initialized with 0 or NULL"); } if (/typedef[\S\s]+\*\s*\w+\s*;/) { err("typedefed pointer type"); } if (/unsigned\s+int\s/) { err("'unsigned int' instead of just 'unsigned'"); } if (/long\s+long\s+int\s/) { err("'long long int' instead of just 'long long'"); } elsif (/long\s+int\s/) { err("'long int' instead of just 'long'"); } # # We completely ignore, for purposes of indentation: # * lines outside of functions # * preprocessor lines # if ($check_continuation && $in_function && !$in_cpp) { process_indent($_); } if ($picky) { # try to detect spaces after casts, but allow (e.g.) # "sizeof (int) + 1", "void (*funcptr)(int) = foo;", and # "int foo(int) __NORETURN;" if ((/^\($typename( \*+)?\)\s/o || /\W\($typename( \*+)?\)\s/o) && !/sizeof\($typename( \*)?\)\s/o && !/\($typename( \*+)?\)\s+=[^=]/o) { err("space after cast"); } if (/\b($typename|void)\s*\*\s/o && !/\b($typename|void)\s*\*\s+const\b/o) { err("unary * followed by space"); } } if ($check_posix_types) { # try to detect old non-POSIX types. # POSIX requires all non-standard typedefs to end in _t, # but historically these have been used. 
if (/\b(unchar|ushort|uint|ulong|u_int|u_short|u_long|u_char|quad)\b/) { err("non-POSIX typedef $1 used: use $old2posix{$1} instead"); } } if ($heuristic) { # cannot check this everywhere due to "struct {\n...\n} foo;" if ($in_function && !$in_declaration && /}./ && !/}\s+=/ && !/{.*}[;,]$/ && !/}(\s|\x01)*$/ && !/} (else|while)/ && !/}}/) { err("possible bad text following right brace"); } # cannot check this because sub-blocks in # the middle of code are ok if ($in_function && /^\s+{/) { err("possible left brace starting a line"); } } if (/^\s*else\W/) { if ($prev =~ /^\s*}$/) { err_prefix($prev, "else and right brace should be on same line"); } } $prev = $line; } if ($prev eq "") { err("last line in file is blank"); } } # # Continuation-line checking # # The rest of this file contains the code for the continuation checking # engine. It's a pretty simple state machine which tracks the expression # depth (unmatched '('s and '['s). # # Keep in mind that the argument to process_indent() has already been heavily # processed; all comments have been replaced by control-A, and the contents of # strings and character constants have been elided. # my $cont_in; # currently inside of a continuation my $cont_off; # skipping an initializer or definition my $cont_noerr; # suppress cascading errors my $cont_start; # the line being continued my $cont_base; # the base indentation my $cont_first; # this is the first line of a statement my $cont_multiseg; # this continuation has multiple segments my $cont_special; # this is a C statement (if, for, etc.) my $cont_macro; # this is a macro my $cont_case; # this is a multi-line case my @cont_paren; # the stack of unmatched ( and [s we've seen sub reset_indent() { $cont_in = 0; $cont_off = 0; } sub delabel($) { # # replace labels with tabs. Note that there may be multiple # labels on a line. # local $_ = $_[0]; while (/^(\t*)( *(?:(?:\w+\s*)|(?:case\b[^:]*)): *)(.*)$/) { my ($pre_tabs, $label, $rest) = ($1, $2, $3); $_ = $pre_tabs; while ($label =~ s/^([^\t]*)(\t+)//) { $_ .= "\t" x (length($2) + length($1) / 8); } $_ .= ("\t" x (length($label) / 8)).$rest; } return ($_); } sub process_indent($) { require strict; local $_ = $_[0]; # preserve the global $_ s/\x01//g; # No comments s/\s+$//; # Strip trailing whitespace return if (/^$/); # skip empty lines # regexps used below; keywords taking (), macros, and continued cases my $special = '(?:(?:\}\s*)?else\s+)?(?:if|for|while|switch)\b'; my $macro = '[A-Z_][A-Z_0-9]*\('; my $case = 'case\b[^:]*$'; # skip over enumerations, array definitions, initializers, etc. 
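# (illustrative) e.g. a line such as "struct foo {" or "static int a[] = {" turns continuation checking off here; $cont_off counts the unmatched braces, so checking resumes once the matching "}" has been seen.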
if ($cont_off <= 0 && !/^\s*$special/ && (/(?:(?:\b(?:enum|struct|union)\s*[^\{]*)|(?:\s+=\s*))\{/ || (/^\s*{/ && $prev =~ /=\s*(?:\/\*.*\*\/\s*)*$/))) { $cont_in = 0; $cont_off = tr/{/{/ - tr/}/}/; return; } if ($cont_off) { $cont_off += tr/{/{/ - tr/}/}/; return; } if (!$cont_in) { $cont_start = $line; if (/^\t* /) { err("non-continuation indented 4 spaces"); $cont_noerr = 1; # stop reporting } $_ = delabel($_); # replace labels with tabs # check if the statement is complete return if (/^\s*\}?$/); return if (/^\s*\}?\s*else\s*\{?$/); return if (/^\s*do\s*\{?$/); return if (/{$/); return if (/}[,;]?$/); # Allow macros on their own lines return if (/^\s*[A-Z_][A-Z_0-9]*$/); # cases we don't deal with, generally non-kosher if (/{/) { err("stuff after {"); return; } # Get the base line, and set up the state machine /^(\t*)/; $cont_base = $1; $cont_in = 1; @cont_paren = (); $cont_first = 1; $cont_multiseg = 0; # certain things need special processing $cont_special = /^\s*$special/? 1 : 0; $cont_macro = /^\s*$macro/? 1 : 0; $cont_case = /^\s*$case/? 1 : 0; } else { $cont_first = 0; # Strings may be pulled back to an earlier (half-)tabstop unless ($cont_noerr || /^$cont_base / || (/^\t*(?: )?(?:gettext\()?\"/ && !/^$cont_base\t/)) { err_prefix($cont_start, "continuation should be indented 4 spaces"); } } my $rest = $_; # keeps the remainder of the line # # The split matches 0 characters, so that each 'special' character # is processed separately. Parens and brackets are pushed and # popped off the @cont_paren stack. For normal processing, we wait # until a ; or { terminates the statement. "special" processing # (if/for/while/switch) is allowed to stop when the stack empties, # as is macro processing. Case statements are terminated with a : # and an empty paren stack. # foreach $_ (split /[^\(\)\[\]\{\}\;\:]*/) { next if (length($_) == 0); # rest contains the remainder of the line my $rxp = "[^\Q$_\E]*\Q$_\E"; $rest =~ s/^$rxp//; if (/\(/ || /\[/) { push @cont_paren, $_; } elsif (/\)/ || /\]/) { my $cur = $_; tr/\)\]/\(\[/; my $old = (pop @cont_paren); if (!defined($old)) { err("unexpected '$cur'"); $cont_in = 0; last; } elsif ($old ne $_) { err("'$cur' mismatched with '$old'"); $cont_in = 0; last; } # # If the stack is now empty, do special processing # for if/for/while/switch and macro statements. # next if (@cont_paren != 0); if ($cont_special) { if ($rest =~ /^\s*{?$/) { $cont_in = 0; last; } if ($rest =~ /^\s*;$/) { err("empty if/for/while body ". "not on its own line"); $cont_in = 0; last; } if (!$cont_first && $cont_multiseg == 1) { err_prefix($cont_start, "multiple statements continued ". "over multiple lines"); $cont_multiseg = 2; } elsif ($cont_multiseg == 0) { $cont_multiseg = 1; } # We've finished this section, start # processing the next. goto section_ended; } if ($cont_macro) { if ($rest =~ /^$/) { $cont_in = 0; last; } } } elsif (/\;/) { if ($cont_case) { err("unexpected ;"); } elsif (!$cont_special) { err("unexpected ;") if (@cont_paren != 0); if (!$cont_first && $cont_multiseg == 1) { err_prefix($cont_start, "multiple statements continued ". "over multiple lines"); $cont_multiseg = 2; } elsif ($cont_multiseg == 0) { $cont_multiseg = 1; } if ($rest =~ /^$/) { $cont_in = 0; last; } if ($rest =~ /^\s*special/) { err("if/for/while/switch not started ". 
"on its own line"); } goto section_ended; } } elsif (/\{/) { err("{ while in parens/brackets") if (@cont_paren != 0); err("stuff after {") if ($rest =~ /[^\s}]/); $cont_in = 0; last; } elsif (/\}/) { err("} while in parens/brackets") if (@cont_paren != 0); if (!$cont_special && $rest !~ /^\s*(while|else)\b/) { if ($rest =~ /^$/) { err("unexpected }"); } else { err("stuff after }"); } $cont_in = 0; last; } } elsif (/\:/ && $cont_case && @cont_paren == 0) { err("stuff after multi-line case") if ($rest !~ /$^/); $cont_in = 0; last; } next; section_ended: # End of a statement or if/while/for loop. Reset # cont_special and cont_macro based on the rest of the # line. $cont_special = ($rest =~ /^\s*$special/)? 1 : 0; $cont_macro = ($rest =~ /^\s*$macro/)? 1 : 0; $cont_case = 0; next; } $cont_noerr = 0 if (!$cont_in); } vmem-1.8/utils/docker/000077500000000000000000000000001361505074100147375ustar00rootroot00000000000000vmem-1.8/utils/docker/0001-travis-fix-travisci_build_coverity_scan.sh.patch000066400000000000000000000016251361505074100270000ustar00rootroot00000000000000From b5179dc4822eaab192361da05aa95d98f523960f Mon Sep 17 00:00:00 2001 From: Lukasz Dorau Date: Mon, 7 May 2018 12:05:40 +0200 Subject: [PATCH] travis: fix travisci_build_coverity_scan.sh --- travisci_build_coverity_scan.sh | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/travisci_build_coverity_scan.sh b/travisci_build_coverity_scan.sh index ad9d4afcf..562b08bcc 100644 --- a/travisci_build_coverity_scan.sh +++ b/travisci_build_coverity_scan.sh @@ -92,8 +92,8 @@ response=$(curl \ --form description="Travis CI build" \ $UPLOAD_URL) status_code=$(echo "$response" | sed -n '$p') -if [ "$status_code" != "201" ]; then +if [ "$status_code" != "200" ]; then TEXT=$(echo "$response" | sed '$d') - echo -e "\033[33;1mCoverity Scan upload failed: $TEXT.\033[0m" + echo -e "\033[33;1mCoverity Scan upload failed: $response.\033[0m" exit 1 fi -- 2.13.6 vmem-1.8/utils/docker/README000066400000000000000000000014761361505074100156270ustar00rootroot00000000000000Persistent Memory Development Kit This is utils/docker/README. Scripts in this directory let Travis CI run a Docker container with ubuntu- or fedora-based environment and build VMEM project inside it. 'build-local.sh' can be used to build VMEM locally. 'build-travis.sh' is used for building VMEM on Travis. NOTE: (for those, who have Travis builds enabled for their fork of the VMEM project) If you commit changes to any Dockerfile or shell script in the 'images' subdirectory and then do git-rebase before pushing your commits to the repository, make sure that you do not squash the commit which is the head in your repository. This will let Travis recreate Docker images used during the build before the build. Otherwise the not-updated Docker image will be pulled from the Docker Hub and used during the build on Travis. vmem-1.8/utils/docker/build-local.sh000077500000000000000000000106031361505074100174650ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # build-local.sh - runs a Docker container from a Docker image with environment # prepared for building VMEM project and starts building VMEM # # this script is for building VMEM locally (not on Travis) # # Notes: # - run this script from its location or set the variable 'HOST_WORKDIR' to # where the root of the VMEM project is on the host machine, # - set variables 'OS' and 'OS_VER' properly to a system you want to build VMEM # on (for proper values take a look on the list of Dockerfiles at the # utils/docker/images directory), eg. OS=ubuntu, OS_VER=16.04. # - set 'KEEP_TEST_CONFIG' variable to 1 if you do not want the tests to be # reconfigured (your current test configuration will be preserved and used) # - tests with Device Dax are not supported by pcheck yet, so do not provide # these devices in your configuration # set -e # Environment variables that can be customized (default values are after dash): export KEEP_CONTAINER=${KEEP_CONTAINER:-0} export KEEP_TEST_CONFIG=${KEEP_TEST_CONFIG:-0} export TEST_BUILD=${TEST_BUILD:-all} export REMOTE_TESTS=${REMOTE_TESTS:-1} export MAKE_PKG=${MAKE_PKG:-0} export EXTRA_CFLAGS=${EXTRA_CFLAGS} export EXTRA_CXXFLAGS=${EXTRA_CXXFLAGS:-} export VMEM_CC=${VMEM_CC:-gcc} export VMEM_CXX=${VMEM_CXX:-g++} export EXPERIMENTAL=${EXPERIMENTAL:-n} export VALGRIND=${VALGRIND:-1} export DOCKERHUB_REPO=${DOCKERHUB_REPO:-pmem/vmem} export GITHUB_REPO=${GITHUB_REPO:-pmem/vmem} if [[ -z "$OS" || -z "$OS_VER" ]]; then echo "ERROR: The variables OS and OS_VER have to be set " \ "(eg. OS=ubuntu, OS_VER=16.04)." exit 1 fi if [[ -z "$HOST_WORKDIR" ]]; then HOST_WORKDIR=$(readlink -f ../..) 
fi if [[ "$KEEP_CONTAINER" != "1" ]]; then RM_SETTING=" --rm" fi imageName=${DOCKERHUB_REPO}:1.8-${OS}-${OS_VER} containerName=vmem-${OS}-${OS_VER} if [[ $MAKE_PKG -eq 1 ]] ; then command="./run-build-package.sh" else command="./run-build.sh" fi if [ -n "$DNS_SERVER" ]; then DNS_SETTING=" --dns=$DNS_SERVER "; fi WORKDIR=/vmem SCRIPTSDIR=$WORKDIR/utils/docker echo Building ${OS}-${OS_VER} # Run a container with # - environment variables set (--env) # - host directory containing VMEM source mounted (-v) # - working directory set (-w) docker run --privileged=true --name=$containerName -ti \ $RM_SETTING \ $DNS_SETTING \ --env http_proxy=$http_proxy \ --env https_proxy=$https_proxy \ --env CC=$VMEM_CC \ --env CXX=$VMEM_CXX \ --env VALGRIND=$VALGRIND \ --env EXTRA_CFLAGS=$EXTRA_CFLAGS \ --env EXTRA_CXXFLAGS=$EXTRA_CXXFLAGS \ --env REMOTE_TESTS=$REMOTE_TESTS \ --env CONFIGURE_TESTS=$CONFIGURE_TESTS \ --env TEST_BUILD=$TEST_BUILD \ --env WORKDIR=$WORKDIR \ --env EXPERIMENTAL=$EXPERIMENTAL \ --env SCRIPTSDIR=$SCRIPTSDIR \ --env KEEP_TEST_CONFIG=$KEEP_TEST_CONFIG \ -v $HOST_WORKDIR:$WORKDIR \ -v /etc/localtime:/etc/localtime \ $DAX_SETTING \ -w $SCRIPTSDIR \ $imageName $command vmem-1.8/utils/docker/build-travis.sh000077500000000000000000000114551361505074100177110ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# # build-travis.sh - runs a Docker container from a Docker image with environment # prepared for building VMEM project and starts building VMEM # # this script is for building VMEM on Travis only # set -e source `dirname $0`/valid-branches.sh if [[ "$TRAVIS_EVENT_TYPE" != "cron" && "$TRAVIS_BRANCH" != "coverity_scan" \ && "$COVERITY" -eq 1 ]]; then echo "INFO: Skip Coverity scan job if build is triggered neither by " \ "'cron' nor by a push to 'coverity_scan' branch" exit 0 fi if [[ ( "$TRAVIS_EVENT_TYPE" == "cron" || "$TRAVIS_BRANCH" == "coverity_scan" )\ && "$COVERITY" -ne 1 ]]; then echo "INFO: Skip regular jobs if build is triggered either by 'cron'" \ " or by a push to 'coverity_scan' branch" exit 0 fi if [[ -z "$OS" || -z "$OS_VER" ]]; then echo "ERROR: The variables OS and OS_VER have to be set properly " \ "(eg. OS=ubuntu, OS_VER=16.04)." exit 1 fi if [[ -z "$HOST_WORKDIR" ]]; then echo "ERROR: The variable HOST_WORKDIR has to contain a path to " \ "the root of the VMEM project on the host machine" exit 1 fi if [[ -z "$TEST_BUILD" ]]; then TEST_BUILD=all fi imageName=${DOCKERHUB_REPO}:1.8-${OS}-${OS_VER} containerName=vmem-${OS}-${OS_VER} if [[ $MAKE_PKG -eq 0 ]] ; then command="./run-build.sh"; fi if [[ $MAKE_PKG -eq 1 ]] ; then command="./run-build-package.sh"; fi if [[ $COVERAGE -eq 1 ]] ; then command="./run-coverage.sh"; ci_env=`bash <(curl -s https://codecov.io/env)`; fi if [[ ( "$TRAVIS_EVENT_TYPE" == "cron" || "$TRAVIS_BRANCH" == "coverity_scan" )\ && "$COVERITY" -eq 1 ]]; then command="./run-coverity.sh" fi if [ -n "$DNS_SERVER" ]; then DNS_SETTING=" --dns=$DNS_SERVER "; fi if [[ $SKIP_CHECK -eq 1 ]]; then BUILD_PACKAGE_CHECK=n; else BUILD_PACKAGE_CHECK=y; fi # Only run doc update on $GITHUB_REPO master or stable branch if [[ ! "${VALID_BRANCHES[@]}" =~ "${TRAVIS_BRANCH}" || "$TRAVIS_PULL_REQUEST" != "false" || "$TRAVIS_REPO_SLUG" != "${GITHUB_REPO}" ]]; then AUTO_DOC_UPDATE=0 fi WORKDIR=/vmem SCRIPTSDIR=$WORKDIR/utils/docker # Run a container with # - environment variables set (--env) # - host directory containing VMEM source mounted (-v) # - working directory set (-w) docker run --rm --privileged=true --name=$containerName -ti \ $DNS_SETTING \ $ci_env \ --env http_proxy=$http_proxy \ --env https_proxy=$https_proxy \ --env AUTO_DOC_UPDATE=$AUTO_DOC_UPDATE \ --env CC=$VMEM_CC \ --env CXX=$VMEM_CXX \ --env VALGRIND=$VALGRIND \ --env EXTRA_CFLAGS=$EXTRA_CFLAGS \ --env EXTRA_CXXFLAGS=$EXTRA_CXXFLAGS \ --env REMOTE_TESTS=$REMOTE_TESTS \ --env TEST_BUILD=$TEST_BUILD \ --env WORKDIR=$WORKDIR \ --env EXPERIMENTAL=$EXPERIMENTAL \ --env BUILD_PACKAGE_CHECK=$BUILD_PACKAGE_CHECK \ --env SCRIPTSDIR=$SCRIPTSDIR \ --env TRAVIS=$TRAVIS \ --env TRAVIS_COMMIT_RANGE=$TRAVIS_COMMIT_RANGE \ --env TRAVIS_COMMIT=$TRAVIS_COMMIT \ --env TRAVIS_REPO_SLUG=$TRAVIS_REPO_SLUG \ --env TRAVIS_BRANCH=$TRAVIS_BRANCH \ --env TRAVIS_EVENT_TYPE=$TRAVIS_EVENT_TYPE \ --env GITHUB_TOKEN=$GITHUB_TOKEN \ --env COVERITY_SCAN_TOKEN=$COVERITY_SCAN_TOKEN \ --env COVERITY_SCAN_NOTIFICATION_EMAIL=$COVERITY_SCAN_NOTIFICATION_EMAIL \ --env FAULT_INJECTION=$FAULT_INJECTION \ --env GITHUB_REPO=$GITHUB_REPO \ -v $HOST_WORKDIR:$WORKDIR \ -v /etc/localtime:/etc/localtime \ -w $SCRIPTSDIR \ $imageName $command vmem-1.8/utils/docker/configure-tests.sh000077500000000000000000000041671361505074100204270ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following 
conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # configure-tests.sh - is called inside a Docker container; configures tests # and ssh server for use during build of VMEM project. # set -e # Configure tests cat << EOF > $WORKDIR/src/test/testconfig.sh LONGDIR=PhngluimglwnafhCthulhuRlyehwgahnaglfhtagnHaizhronaDagonhaiepngmnahnhriikadishtugnaiihcuhesyhahfgnaiihsgnwahlnogsgnwahlnghahaiChaugnarFaugnhlirghHshtungglingnogRlyehnghaogShub-NiggurathothhgofnnlloigshuggsllhannnCthulhuahnyth # this path is ~3000 characters long DIRSUFFIX="$LONGDIR/$LONGDIR/$LONGDIR/$LONGDIR/$LONGDIR" TEST_DIR=/tmp TM=1 EOF vmem-1.8/utils/docker/images/000077500000000000000000000000001361505074100162045ustar00rootroot00000000000000vmem-1.8/utils/docker/images/Dockerfile.fedora-28000066400000000000000000000047461361505074100216770ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # Dockerfile - a 'recipe' for Docker to build an image of fedora-based # environment for building the VMEM project. # # Pull base image FROM fedora:28 MAINTAINER marcin.slusarz@intel.com # Install basic tools RUN dnf update -y RUN dnf install -y \ asciidoc \ asciidoctor \ autoconf \ automake \ bc \ clang \ file \ findutils \ gcc \ gdb \ git \ hub \ lbzip2 \ libtool \ libunwind-devel \ make \ man \ pandoc \ passwd \ pkgconfig \ rpm-build \ rpm-build-libs \ rpmdevtools \ rsync \ sudo \ tar \ wget \ which \ xmlto RUN dnf clean all # Install valgrind COPY install-valgrind.sh install-valgrind.sh RUN ./install-valgrind.sh # Add user ENV USER vmemuser ENV USERPASS vmempass RUN useradd -m $USER RUN echo $USERPASS | passwd $USER --stdin RUN gpasswd wheel -a $USER USER $USER # Set required environment variables ENV OS fedora ENV OS_VER 28 ENV START_SSH_COMMAND /usr/sbin/sshd ENV PACKAGE_MANAGER rpm ENV NOTTY 1 vmem-1.8/utils/docker/images/Dockerfile.ubuntu-18.04000066400000000000000000000060561361505074100221760ustar00rootroot00000000000000# # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # Dockerfile - a 'recipe' for Docker to build an image of ubuntu-based # environment for building the VMEM project. 
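#
# A minimal sketch of building this image by hand (normally done through
# utils/docker/images/build-image.sh); the tag below is only an example
# and assumes DOCKERHUB_REPO is set in the environment:
#
#    docker build -t ${DOCKERHUB_REPO}:1.8-ubuntu-18.04 \
#        -f Dockerfile.ubuntu-18.04 .
#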
# # Pull base image FROM ubuntu:18.04 MAINTAINER marcin.slusarz@intel.com ENV DEBIAN_FRONTEND noninteractive # Update the Apt cache and install basic tools RUN apt-get update && apt-get dist-upgrade -y ENV VALGRIND_DEPS "autoconf \ automake \ build-essential \ git" # vmem base ENV BASE_DEPS "build-essential \ git \ pkg-config" # jemalloc ENV JEMALLOC_DEPS autoconf # documentation (optional) ENV DOC_DEPS pandoc # tests ENV TESTS_DEPS "bc \ libc6-dbg \ libunwind-dev" # packaging ENV PACKAGING_DEPS "debhelper \ devscripts \ fakeroot" # CodeCov ENV CODECOV_DEPS curl # Coverity ENV COVERITY_DEPS ruby gcc-6 g++-6 wget # misc ENV MISC_DEPS "clang \ clang-format \ flake8 \ sudo \ whois" RUN apt-get install -y --no-install-recommends \ $VALGRIND_DEPS \ $BASE_DEPS \ $JEMALLOC_DEPS \ $DOC_DEPS \ $TESTS_DEPS \ $PACKAGING_DEPS \ $CODECOV_DEPS \ $COVERITY_DEPS \ $MISC_DEPS # Install valgrind COPY install-valgrind.sh install-valgrind.sh RUN ./install-valgrind.sh # Add user ENV USER vmemuser ENV USERPASS vmempass RUN useradd -m $USER -g sudo -p `mkpasswd $USERPASS` # remove stuff no longer needed RUN apt remove -y \ unzip \ whois RUN apt autoremove -y RUN apt-get clean RUN rm -rf /var/lib/apt/lists/* # switch user USER $USER # Set required environment variables ENV OS ubuntu ENV OS_VER 18.04 ENV START_SSH_COMMAND service ssh start ENV PACKAGE_MANAGER dpkg ENV NOTTY 1 vmem-1.8/utils/docker/images/README000066400000000000000000000002771361505074100170720ustar00rootroot00000000000000Persistent Memory Development Kit This is utils/docker/images/README. Scripts in this directory let you prepare Docker images for building VMEM project under specified OS (ubuntu, fedora). vmem-1.8/utils/docker/images/build-image.sh000077500000000000000000000051461361505074100207300ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # build-image.sh - prepares a Docker image with -based # environment for building VMEM project, according # to the Dockerfile. file located # in the same directory. # # The script can be run locally. 
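#
# Example usage (a sketch; the repository name is a placeholder and the
# script must be run from the directory containing the Dockerfiles):
#
#    DOCKERHUB_REPO=<user>/<repo> ./build-image.sh ubuntu-16.04
#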
# set -e function usage { echo "Usage:" echo " build-image.sh " echo "where , for example, can be 'ubuntu-16.04', provided " \ "a Dockerfile named 'Dockerfile.ubuntu-16.04' exists in the " \ "current directory." } # Check if the first argument is nonempty if [[ -z "$1" ]]; then usage exit 1 fi # Check if the file Dockerfile.OS-VER exists if [[ ! -f "Dockerfile.$1" ]]; then echo "ERROR: wrong argument." usage exit 1 fi if [[ -z "${DOCKERHUB_REPO}" ]]; then echo "DOCKERHUB_REPO environment variable is not set" exit 1 fi # Build a Docker image tagged with ${DOCKERHUB_REPO}:OS-VER tag=${DOCKERHUB_REPO}:1.8-$1 docker build -t $tag \ --build-arg http_proxy=$http_proxy \ --build-arg https_proxy=$https_proxy \ -f Dockerfile.$1 . vmem-1.8/utils/docker/images/install-valgrind.sh000077500000000000000000000035231361505074100220200ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # install-valgrind.sh - installs valgrind for persistent memory # set -e git clone https://github.com/pmem/valgrind.git cd valgrind # valgrind v3.15 with pmemcheck git checkout c27a8a2f973414934e63f1e94bc84c0a580e3840 ./autogen.sh ./configure make make install cd .. rm -rf valgrind vmem-1.8/utils/docker/images/push-image.sh000077500000000000000000000051741361505074100206110ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # push-image.sh - pushes the Docker image tagged with OS-VER # to the Docker Hub. # # The script utilizes $DOCKER_USER and $DOCKER_PASSWORD variables to log in to # Docker Hub. The variables can be set in the Travis project's configuration # for automated builds. # set -e function usage { echo "Usage:" echo " push-image.sh " echo "where , for example, can be 'ubuntu-16.04', provided " \ "a Docker image tagged with ${DOCKERHUB_REPO}:ubuntu-16.04 exists " \ "locally." } # Check if the first argument is nonempty if [[ -z "$1" ]]; then usage exit 1 fi if [[ -z "${DOCKERHUB_REPO}" ]]; then echo "DOCKERHUB_REPO environment variable is not set" exit 1 fi # Check if the image tagged with vmem/OS-VER exists locally if [[ ! $(docker images -a | awk -v pattern="^${DOCKERHUB_REPO}:1.8-$1\$" \ '$1":"$2 ~ pattern') ]] then echo "ERROR: wrong argument." usage exit 1 fi # Log in to the Docker Hub docker login -u="$DOCKER_USER" -p="$DOCKER_PASSWORD" # Push the image to the repository docker push ${DOCKERHUB_REPO}:1.8-$1 vmem-1.8/utils/docker/prepare-for-build.sh000077500000000000000000000042521361505074100206200ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # prepare-for-build.sh - is called inside a Docker container; prepares # the environment inside a Docker container for # running build of VMEM project. # set -e # Mount filesystem for tests echo $USERPASS | sudo -S mount -t tmpfs none /tmp -osize=6G # Configure tests (e.g. ssh for remote tests) unless the current configuration # should be preserved KEEP_TEST_CONFIG=${KEEP_TEST_CONFIG:-0} if [[ "$KEEP_TEST_CONFIG" == 0 ]]; then ./configure-tests.sh fi # Check for changes in automatically generated docs (only when on Travis) if [[ -n "$TRAVIS" ]]; then ../check-doc.sh fi vmem-1.8/utils/docker/pull-or-rebuild-image.sh000077500000000000000000000133001361505074100213710ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # pull-or-rebuild-image.sh - rebuilds the Docker image used in the # current Travis build if necessary. # # The script rebuilds the Docker image if the Dockerfile for the current # OS version (Dockerfile.${OS}-${OS_VER}) or any .sh script from the directory # with Dockerfiles were modified and committed. # # If the Travis build is not of the "pull_request" type (i.e. in case of # merge after pull_request) and it succeed, the Docker image should be pushed # to the Docker Hub repository. An empty file is created to signal that to # further scripts. # # If the Docker image does not have to be rebuilt, it will be pulled from # Docker Hub. 
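#
# A rough sketch of the variables this script expects (values are examples
# only; on Travis they are set by the CI configuration together with the
# TRAVIS_* variables provided by the service):
#
#    OS=ubuntu OS_VER=16.04 \
#    HOST_WORKDIR=/path/to/vmem \
#    GITHUB_REPO=<user>/<repo> DOCKERHUB_REPO=<user>/<repo> \
#    ./pull-or-rebuild-image.sh
#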
# set -e if [[ "$TRAVIS_EVENT_TYPE" != "cron" && "$TRAVIS_BRANCH" != "coverity_scan" \ && "$COVERITY" -eq 1 ]]; then echo "INFO: Skip Coverity scan job if build is triggered neither by " \ "'cron' nor by a push to 'coverity_scan' branch" exit 0 fi if [[ ( "$TRAVIS_EVENT_TYPE" == "cron" || "$TRAVIS_BRANCH" == "coverity_scan" )\ && "$COVERITY" -ne 1 ]]; then echo "INFO: Skip regular jobs if build is triggered either by 'cron'" \ " or by a push to 'coverity_scan' branch" exit 0 fi if [[ -z "$OS" || -z "$OS_VER" ]]; then echo "ERROR: The variables OS and OS_VER have to be set properly " \ "(eg. OS=ubuntu, OS_VER=16.04)." exit 1 fi if [[ -z "$HOST_WORKDIR" ]]; then echo "ERROR: The variable HOST_WORKDIR has to contain a path to " \ "the root of the VMEM project on the host machine" exit 1 fi # TRAVIS_COMMIT_RANGE is usually invalid for force pushes - fix it when used # with non-upstream repository if [ -n "$TRAVIS_COMMIT_RANGE" -a "$TRAVIS_REPO_SLUG" != "$GITHUB_REPO" ]; then if ! git rev-list $TRAVIS_COMMIT_RANGE; then # get commit id of the last merge LAST_MERGE=$(git log --merges --pretty=%H -1) if [ "$LAST_MERGE" == "" ]; then # possible in case of shallow clones TRAVIS_COMMIT_RANGE="" else TRAVIS_COMMIT_RANGE="$LAST_MERGE..HEAD" # make sure it works now if ! git rev-list $TRAVIS_COMMIT_RANGE; then TRAVIS_COMMIT_RANGE="" fi fi fi fi # Find all the commits for the current build if [[ -n "$TRAVIS_COMMIT_RANGE" ]]; then # $TRAVIS_COMMIT_RANGE contains "..." instead of ".." # https://github.com/travis-ci/travis-ci/issues/4596 PR_COMMIT_RANGE="${TRAVIS_COMMIT_RANGE/.../..}" commits=$(git rev-list $PR_COMMIT_RANGE) else commits=$TRAVIS_COMMIT fi echo "Commits in the commit range:" for commit in $commits; do echo $commit; done # Get the list of files modified by the commits files=$(for commit in $commits; do git diff-tree --no-commit-id --name-only \ -r $commit; done | sort -u) echo "Files modified within the commit range:" for file in $files; do echo $file; done # Path to directory with Dockerfiles and image building scripts images_dir_name=images base_dir=utils/docker/$images_dir_name # Check if committed file modifications require the Docker image to be rebuilt for file in $files; do # Check if modified files are relevant to the current build if [[ $file =~ ^($base_dir)\/Dockerfile\.($OS)-($OS_VER)$ ]] \ || [[ $file =~ ^($base_dir)\/.*\.sh$ ]] then # Rebuild Docker image for the current OS version echo "Rebuilding the Docker image for the Dockerfile.$OS-$OS_VER" pushd $images_dir_name ./build-image.sh ${OS}-${OS_VER} popd # Check if the image has to be pushed to Docker Hub # (i.e. the build is triggered by commits to the $GITHUB_REPO # repository's stable-* or master branch, and the Travis build is not # of the "pull_request" type). In that case, create the empty # file. if [[ "$TRAVIS_REPO_SLUG" == "$GITHUB_REPO" \ && ($TRAVIS_BRANCH == stable-* || $TRAVIS_BRANCH == master) \ && $TRAVIS_EVENT_TYPE != "pull_request" \ && $PUSH_IMAGE == "1" ]] then echo "The image will be pushed to Docker Hub" touch push_image_to_repo_flag else echo "Skip pushing the image to Docker Hub" fi if [[ $PUSH_IMAGE == "1" ]] then echo "Skip build package check if image has to be pushed" touch skip_build_package_check fi exit 0 fi done # Getting here means rebuilding the Docker image is not required. # Pull the image from Docker Hub. 
docker pull ${DOCKERHUB_REPO}:1.8-${OS}-${OS_VER} vmem-1.8/utils/docker/run-build-package.sh000077500000000000000000000043541361505074100205760ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # run-build-package.sh - is called inside a Docker container; prepares # the environment and starts a build of VMEM project. # set -e # Prepare build enviromnent ./prepare-for-build.sh # Create fake tag, so that package has proper 'version' field git config user.email "test@package.com" git config user.name "test package" git tag -a 1.4.99 -m "1.4" HEAD~1 || true # Build all and run tests cd $WORKDIR export PCHECK_OPTS=-j2 make -j2 $PACKAGE_MANAGER # Install packages if [[ "$PACKAGE_MANAGER" == "dpkg" ]]; then cd $PACKAGE_MANAGER echo $USERPASS | sudo -S dpkg --install *.deb else cd $PACKAGE_MANAGER/x86_64 echo $USERPASS | sudo -S rpm --install *.rpm fi vmem-1.8/utils/docker/run-build.sh000077500000000000000000000040441361505074100172010ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # run-build.sh - is called inside a Docker container; prepares the environment # and starts a build of VMEM project. # set -e # Prepare build environment ./prepare-for-build.sh # Build all and run tests cd $WORKDIR make check-license make cstyle make -j2 make -j2 test make -j2 pcheck TEST_BUILD=$TEST_BUILD make DESTDIR=/tmp source # Create PR with generated docs if [[ "$AUTO_DOC_UPDATE" == "1" ]]; then echo "Running auto doc update" ./utils/docker/run-doc-update.sh fi vmem-1.8/utils/docker/run-coverage.sh000077500000000000000000000045071361505074100177010ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # run-coverage.sh - is called inside a Docker container; runs the coverage # test # set -e # Get and prepare VMEM source ./prepare-for-build.sh # Hush error messages, mainly from Valgrind export UT_DUMP_LINES=0 # Skip printing mismatched files for tests with Valgrind export UT_VALGRIND_SKIP_PRINT_MISMATCHED=1 # Build all and run tests cd $WORKDIR make -j2 COVERAGE=1 make -j2 test COVERAGE=1 # XXX: unfortunately valgrind raports issues in coverage instrumentation # which we have to ignore (-k flag), also there is dependency between # local and remote tests (which cannot be easily removed) we have to # run local and remote tests separately cd src/test make -kj2 pcheck-local-quiet TEST_BUILD=debug || true cd ../.. 
bash <(curl -s https://codecov.io/bash) vmem-1.8/utils/docker/run-coverity.sh000077500000000000000000000065311361505074100177510ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # run-coverity.sh - runs the Coverity scan build # set -e # Prepare build environment ./prepare-for-build.sh # Coverity doesn't support gcc 7+, because of newly introduced _Float* types. # It manifests as number of "compilation units ready for analysis" # below 85% threshold and compile errors like these in cov-int/build-log.txt: #"/usr/include/math.h", line 381: error #20: identifier "_Float32" is undefined # # define _Mdouble_ _Float32 # Work around this by replacing gcc symlink with gcc-6. Setting CC/CXX # environment variables is unfortunately not enough. sudo rm /usr/bin/gcc sudo rm /usr/bin/g++ sudo ln -s gcc-6 /usr/bin/gcc sudo ln -s g++-6 /usr/bin/g++ # Download Coverity certificate echo -n | openssl s_client -connect scan.coverity.com:443 | \ sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | \ sudo tee -a /etc/ssl/certs/ca-; export COVERITY_SCAN_PROJECT_NAME="$TRAVIS_REPO_SLUG" [[ "$TRAVIS_EVENT_TYPE" == "cron" ]] \ && export COVERITY_SCAN_BRANCH_PATTERN="master" \ || export COVERITY_SCAN_BRANCH_PATTERN="coverity_scan" export COVERITY_SCAN_BUILD_COMMAND="make all" cd $WORKDIR # Run the Coverity scan # XXX: Patch the Coverity script. # Recently, this script regularly exits with an error, even though # the build is successfully submitted. Probably because the status code # is missing in response, or it's not 201. # Changes: # 1) change the expected status code to 200 and # 2) print the full response string. # # This change should be reverted when the Coverity script is fixed. 
# # The previous version was: # curl -s https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh | bash wget https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh patch < utils/docker/0001-travis-fix-travisci_build_coverity_scan.sh.patch bash ./travisci_build_coverity_scan.sh vmem-1.8/utils/docker/run-doc-update.sh000077500000000000000000000071671361505074100201400ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. set -e source `dirname $0`/valid-branches.sh BOT_NAME="pmem-bot" USER_NAME="pmem" REPO_NAME="vmem" ORIGIN="https://${GITHUB_TOKEN}@github.com/${BOT_NAME}/${REPO_NAME}" UPSTREAM="https://github.com/${USER_NAME}/${REPO_NAME}" # master or stable-* branch TARGET_BRANCH=${TRAVIS_BRANCH} VERSION=${TARGET_BRANCHES[$TARGET_BRANCH]} if [ -z $VERSION ]; then echo "Target location for branch $TARGET_BRANCH is not defined." exit 1 fi # Clone bot repo git clone ${ORIGIN} cd ${REPO_NAME} git remote add upstream ${UPSTREAM} git config --local user.name ${BOT_NAME} git config --local user.email "pmem-bot@intel.com" git remote update git checkout -B ${TARGET_BRANCH} upstream/${TARGET_BRANCH} make doc # Build & PR groff git add -A ./doc git commit -m "doc: automatic $TARGET_BRANCH docs update" && true git push -f ${ORIGIN} ${TARGET_BRANCH} # Makes pull request. # When there is already an open PR or there are no changes an error is thrown, which we ignore. hub pull-request -f -b ${USER_NAME}:${TARGET_BRANCH} -h ${BOT_NAME}:${TARGET_BRANCH} -m "doc: automatic $TARGET_BRANCH docs update" && true git clean -dfx # Copy man & PR web md cd ./doc make web cd .. mv ./doc/web_linux ../ mv ./doc/web_windows ../ mv ./doc/generated/libs_map.yml ../ # Checkout gh-pages and copy docs GH_PAGES_NAME="gh-pages-for-${TARGET_BRANCH}" git checkout -B $GH_PAGES_NAME upstream/gh-pages git clean -dfx rsync -a ../web_linux/ ./manpages/linux/${VERSION}/ rsync -a ../web_windows/ ./manpages/windows/${VERSION}/ \ --exclude='libvmmalloc' rm -r ../web_linux rm -r ../web_windows if [ $TARGET_BRANCH = "master" ]; then [ ! 
-d _data ] && mkdir _data cp ../libs_map.yml _data fi # Add and push changes. # git commit command may fail if there is nothing to commit. # In that case we want to force push anyway (there might be open pull request with # changes which were reverted). git add -A git commit -m "doc: automatic gh-pages docs update" && true git push -f ${ORIGIN} $GH_PAGES_NAME hub pull-request -f -b ${USER_NAME}:gh-pages -h ${BOT_NAME}:${GH_PAGES_NAME} -m "doc: automatic gh-pages docs update" && true exit 0 vmem-1.8/utils/docker/valid-branches.sh000077500000000000000000000033171361505074100201640ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. VALID_BRANCHES=("master" "stable-1.5" "stable-1.6") declare -A TARGET_BRANCHES=( \ ["master"]="master" \ ["stable-1.5"]="v1.5" \ ["stable-1.6"]="v1.6") vmem-1.8/utils/get_aliases.sh000077500000000000000000000057451361505074100163220ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # get_aliases.sh -- generate map of manuals functions and libraries # # usage: run from /vmem/doc/generated location without parameters: # ./../../utils/get_aliases.sh # # This script searches manpages from section 7 then # takes all functions from each section using specified pattern # and at the end to every function it assign real markdown file # representation based on *.gz file content # # Generated libs_map.yml file is used on gh-pages # to handle functions and their aliases # list=("$@") man_child=("$@") function search_aliases { children=$1 for i in ${children[@]} do if [ -e $i ] then echo "Man: $i" content=$(head -c 150 $i) if [[ "$content" == ".so "* ]] ; then content=$(basename ${content#".so"}) i="${i%.*}" echo " $i: $content" >> $map_file else r="${i%.*}" echo " $r: $i" >> $map_file fi fi done } function list_pages { parent="${1%.*}" list=("$@") man_child=("$@") if [ "$parent" == "libvmmalloc" ]; then man_child=($(ls vmmalloc_*.3 2>/dev/null)) echo -n "- $parent: " >> $map_file fi if [ "$parent" == "libvmem" ]; then man_child=($(ls vmem_*.3)) echo -n "- $parent: " >> $map_file echo "${man_child[@]}" >> $map_file fi if [ ${#man_child[@]} -ne 0 ] then list=${man_child[@]} search_aliases "${list[@]}" fi } man7=($(ls *.7)) map_file=libs_map.yml [ -e $map_file ] && rm $map_file touch $map_file for i in "${man7[@]}" do echo "Library: $i" list_pages $i done vmem-1.8/utils/git-years000077500000000000000000000033321361505074100153230ustar00rootroot00000000000000#!/bin/sh # Copyright 2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
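#
# Illustrative usage of this helper (see the description just below; the
# file path is hypothetical and the output depends entirely on the git
# history of the queried file):
#
#    ./utils/git-years src/libvmem/libvmem.c
#    # prints a year range such as: 2014-2019
#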
# git-years -- calculate the range of years for a given file from git git log --pretty='%aI %aE' "$@"|grep '@intel\.com'|cut -d- -f1|sort| sed '$p;2,$d'|uniq|tr '\n' -|sed 's/-$//' vmem-1.8/utils/libvmem.pc.in000066400000000000000000000003031361505074100160500ustar00rootroot00000000000000includedir=${prefix}/include Name: libvmem Description: libvmem library from VMEM project Version: ${version} URL: http://pmem.io/vmem Requires: Libs: -L${libdir} -lvmem Cflags: -I${includedir} vmem-1.8/utils/libvmmalloc.pc.in000066400000000000000000000003171361505074100167230ustar00rootroot00000000000000includedir=${prefix}/include Name: libvmmalloc Description: libvmmalloc library from VMEM project Version: ${version} URL: http://pmem.io/vmem Requires: Libs: -L${libdir} -lvmmalloc Cflags: -I${includedir} vmem-1.8/utils/md2man.sh000077500000000000000000000074341361505074100152150ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # # md2man.sh -- convert markdown to groff man pages # # usage: md2man.sh file template outfile # # This script converts markdown file into groff man page using pandoc. # It performs some pre- and post-processing for better results: # - uses m4 to preprocess OS-specific directives. See doc/macros.man. # - parse input file for YAML metadata block and read man page title, # section and version # - cut-off metadata block and license # - unindent code blocks # - cut-off windows and web specific parts of documentation # # If the TESTOPTS variable is set, generates a preprocessed markdown file # with the header stripped off for testing purposes. 
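#
# Illustrative invocation (a sketch only -- the paths are hypothetical and
# depend on where the markdown sources and the man page template live):
#
#    ./md2man.sh libvmem.7.md default.man ../generated/libvmem.7
#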
# set -e set -o pipefail filename=$1 template=$2 outfile=$3 title=`sed -n 's/^title:\ _MP(*\([A-Za-z_-]*\).*$/\1/p' $filename` section=`sed -n 's/^title:.*\([0-9]\))$/\1/p' $filename` version=`sed -n 's/^date:\ *\(.*\)$/\1/p' $filename` if [ "$TESTOPTS" != "" ]; then m4 $TESTOPTS macros.man $filename | sed -n -e '/# NAME #/,$p' > $outfile else OPTS= if [ "$WIN32" == 1 ]; then OPTS="$OPTS -DWIN32" else OPTS="$OPTS -UWIN32" fi if [ "$(uname -s)" == "FreeBSD" ]; then OPTS="$OPTS -DFREEBSD" else OPTS="$OPTS -UFREEBSD" fi if [ "$WEB" == 1 ]; then OPTS="$OPTS -DWEB" mkdir -p "$(dirname $outfile)" m4 $OPTS macros.man $filename | sed -n -e '/---/,$p' > $outfile else SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date +%s)}" YEAR=$(date -u -d "@$SOURCE_DATE_EPOCH" +%Y 2>/dev/null || date -u -r "$SOURCE_DATE_EPOCH" +%Y 2>/dev/null || date -u +%Y) dt=$(date -u -d "@$SOURCE_DATE_EPOCH" +%F 2>/dev/null || date -u -r "$SOURCE_DATE_EPOCH" +%F 2>/dev/null || date -u +%F) m4 $OPTS macros.man $filename | sed -n -e '/# NAME #/,$p' |\ pandoc -s -t man -o $outfile.tmp --template=$template \ -V title=$title -V section=$section \ -V date="$dt" -V version="$version" \ -V year="$YEAR" | sed '/^\.IP/{ N /\n\.nf/{ s/IP/PP/ } }' # don't overwrite the output file if the only thing that changed # is modification date (diff output has exactly 4 lines in this case) difflines=`diff $outfile $outfile.tmp | wc -l || true` onlydates=`diff $outfile $outfile.tmp | grep "$dt" | wc -l || true` if [ $difflines -eq 4 -a $onlydates -eq 1 ]; then rm $outfile.tmp else mv $outfile.tmp $outfile fi fi fi vmem-1.8/utils/os-banned000066400000000000000000000015571361505074100152710ustar00rootroot00000000000000pthread_once pthread_key_create pthread_key_delete pthread_setspecific pthread_getspecific pthread_mutex_init pthread_mutex_destroy pthread_mutex_lock pthread_mutex_trylock pthread_mutex_unlock pthread_mutex_timedlock pthread_rwlock_init pthread_rwlock_destroy pthread_rwlock_rdlock pthread_rwlock_wrlock pthread_rwlock_tryrdlock pthread_rwlock_trywrlock pthread_rwlock_unlock pthread_rwlock_timedrdlock pthread_rwlock_timedwrlock pthread_spin_init pthread_spin_destroy pthread_spin_lock pthread_spin_unlock pthread_spin_trylock pthread_cond_init pthread_cond_destroy pthread_cond_broadcast pthread_cond_signal pthread_cond_timedwait pthread_cond_wait pthread_create pthread_join cpu_zero cpu_set pthread_setaffinity_np pthread_atfork open stat unlink access fopen fdopen chmod mkstemp posix_fallocate ftruncate flock writev clock_gettime rand_r unsetenv setenv getenv strsignal vmem-1.8/utils/pkg-common.sh000066400000000000000000000047401361505074100161000ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # pkg-common.sh - common functions and variables for building packages # export LC_ALL="C" function error() { echo -e "error: $@" } function check_dir() { if [ ! -d $1 ] then error "Directory '$1' does not exist." exit 1 fi } function check_file() { if [ ! -f $1 ] then error "File '$1' does not exist." exit 1 fi } function check_tool() { local tool=$1 if [ -z "$(which $tool 2>/dev/null)" ] then error "'${tool}' not installed or not in PATH" exit 1 fi } function get_version() { echo -n $1 | sed "s/-rc/~rc/" } function get_os() { if [ -f /etc/os-release ] then local OS=$(cat /etc/os-release | grep -m1 -o -P '(?<=NAME=).*($)') [[ "$OS" =~ SLES|openSUSE ]] && echo -n "SLES_like" || ([[ "$OS" =~ "Fedora"|"Red Hat"|"CentOS" ]] && echo -n "RHEL_like" || echo 1) else echo 1 fi } REGEX_DATE_AUTHOR="([a-zA-Z]{3} [a-zA-Z]{3} [0-9]{2} [0-9]{4})\s*(.*)" REGEX_MESSAGE_START="\s*\*\s*(.*)" REGEX_MESSAGE="\s*(\S.*)" vmem-1.8/utils/pkg-config.sh000066400000000000000000000036661361505074100160630ustar00rootroot00000000000000# # Copyright 2014-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
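#
# This file only defines package metadata; a packaging script would
# typically source it and use the variables defined below, e.g.
# (sketch only):
#
#    . "$(dirname "$0")/pkg-config.sh"
#    echo "packaging ${PACKAGE_NAME}: ${PACKAGE_SUMMARY}"
#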
# Name of package PACKAGE_NAME="vmem" # Name and email of package maintainer PACKAGE_MAINTAINER="Marcin Slusarz " # Brief description of the package PACKAGE_SUMMARY="Volatile Persistent Memory Allocator" # Full description of the package PACKAGE_DESCRIPTION="The collection of libraries for volatile use case for persistent memory" # Website PACKAGE_URL="http://pmem.io/vmem" vmem-1.8/utils/ps_analyze.ps1000066400000000000000000000044511361505074100162660ustar00rootroot00000000000000# # Copyright 2017, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # ps_analyze -- script to analyze ps1 files # Write-Output "Starting PSScript analyzing ..." $scriptdir = Split-Path -Parent $PSCommandPath $rootdir = $scriptdir + "\.." $detected = 0 $include = @("*.ps1" ) Get-ChildItem -Path $rootdir -Recurse -Include $include | ` Where-Object { $_.FullName -notlike "*test*" } | ` ForEach-Object { $analyze_result = Invoke-ScriptAnalyzer -Path $_.FullName if ($analyze_result) { $detected = $detected + $analyze_result.Count Write-Output $_.FullName Write-Output $analyze_result } } if ($detected) { Write-Output "PSScriptAnalyzer FAILED. Issues detected: $detected" Exit 1 } else { Write-Output "PSScriptAnalyzer PASSED. No issue detected." Exit 0 } vmem-1.8/utils/sort_solution000077500000000000000000000100141361505074100163350ustar00rootroot00000000000000#!/usr/bin/perl # # Copyright 2016, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. 
# # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # sort_solution -- sort visual studio solution projects lists # use strict; use warnings; use Text::Diff; use Cwd 'abs_path'; use File::Basename; use File::Compare; sub help { print "Usage: sort_solution [check|sort]\n"; exit; } sub sort_global_section { my ($solution_fh, $temp_fh, $section_name) = @_; my $line = ""; my @array; while (defined($line = <$solution_fh>) && ($line !~ $section_name)) { print $temp_fh $line; } print $temp_fh $line; while (defined($line = <$solution_fh>) && ($line !~ "EndGlobalSection")) { push @array, $line; } @array = sort @array; foreach (@array) { print $temp_fh $_; } print $temp_fh $line; # print EndGlobalSection line } my $num_args = $#ARGV + 1; if ($num_args != 1) { help; } my $arg = $ARGV[0]; if($arg ne "check" && $arg ne "sort") { help; } my $filename = dirname(abs_path($0)).'/../src/VMEM.sln'; my $tempfile = dirname(abs_path($0)).'/../src/temp.sln'; open(my $temp_fh, '>', $tempfile) or die "Could not open file '$tempfile' $!"; open(my $solution_fh, '<:crlf', $filename) or die "Could not open file '$filename' $!"; my $line; # Read a header of file while (defined($line = <$solution_fh>) && ($line !~ "^Project")) { print $temp_fh $line; } my @part1; my $buff; my $guid; # Read the projects list with project dependencies do { if($line =~ "^Project") { $buff = $line; $guid = (split(/\,/, $line))[2]; } elsif($line =~ "^EndProject") { $buff .= $line; my %table = ( guid => $guid, buff => $buff, ); push @part1, \%table; } else { $buff .= $line; } } while (defined($line = <$solution_fh>) && $line ne "Global\n"); # sort the project list by a project GIUD and write to the tempfile @part1 = sort { $a->{guid} cmp $b->{guid} } @part1; foreach (@part1) { my %hash = %$_; print $temp_fh $hash{"buff"}; } print $temp_fh $line; # EndProject line sort_global_section $solution_fh, $temp_fh, "ProjectConfigurationPlatforms"; sort_global_section $solution_fh, $temp_fh, "NestedProjects"; # read solution file to the end and copy it to the temp file while (defined($line = <$solution_fh>)){ print $temp_fh $line; } close($temp_fh); close($solution_fh); if($arg eq "check") { my $diff = diff $filename => $tempfile; if ($diff eq "") { unlink $tempfile; exit; } print "VMEM solution file is not sorted, " . 
"please use sort_solution script before pushing your changes\n"; unlink $tempfile; exit 1; } else { unlink $filename or die "Cannot replace solution file $!"; rename $tempfile, $filename; } vmem-1.8/utils/style_check.sh000077500000000000000000000067451361505074100163400ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2016-2018, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # utils/style_check.sh -- common style checking script # set -e ARGS=("$@") CSTYLE_ARGS=() CLANG_ARGS=() CHECK_TYPE=$1 [ -z "$clang_format_bin" ] && which clang-format-6.0 >/dev/null && clang_format_bin=clang-format-6.0 [ -z "$clang_format_bin" ] && which clang-format >/dev/null && clang_format_bin=clang-format [ -z "$clang_format_bin" ] && clang_format_bin=clang-format # # print script usage # function usage() { echo "$0 [C/C++ files]" } # # require clang-format version 6.0 # function check_clang_version() { set +e which ${clang_format_bin} &> /dev/null && ${clang_format_bin} --version |\ grep "version 6\.0"\ &> /dev/null if [ $? 
-ne 0 ]; then echo "SKIP: requires clang-format version 6.0" exit 0 fi set -e } # # run old cstyle check # function run_cstyle() { if [ $# -eq 0 ]; then return fi ${cstyle_bin} -pP $@ } # # generate diff with clang-format rules # function run_clang_check() { if [ $# -eq 0 ]; then return fi check_clang_version for file in $@ do LINES=$(${clang_format_bin} -style=file $file |\ git diff --no-index $file - | wc -l) if [ $LINES -ne 0 ]; then ${clang_format_bin} -style=file $file | git diff --no-index $file - fi done } # # in-place format according to clang-format rules # function run_clang_format() { if [ $# -eq 0 ]; then return fi check_clang_version ${clang_format_bin} -style=file -i $@ } for ((i=1; i<$#; i++)) { IGNORE="$(dirname ${ARGS[$i]})/.cstyleignore" if [ -e $IGNORE ]; then if grep -q ${ARGS[$i]} $IGNORE ; then echo "SKIP ${ARGS[$i]}" continue fi fi case ${ARGS[$i]} in *.[ch]pp) CLANG_ARGS+="${ARGS[$i]} " ;; *.[ch]) CSTYLE_ARGS+="${ARGS[$i]} " ;; *) echo "Unknown argument" exit 1 ;; esac } case $CHECK_TYPE in check) run_cstyle ${CSTYLE_ARGS} run_clang_check ${CLANG_ARGS} ;; format) run_clang_format ${CLANG_ARGS} ;; *) echo "Invalid parameters" usage exit 1 ;; esac vmem-1.8/utils/version.sh000077500000000000000000000057251361505074100155250ustar00rootroot00000000000000#!/usr/bin/env bash # # Copyright 2017-2019, Intel Corporation # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # utils/version.sh -- determine project's version # set -e if [ -f "$1/VERSION" ]; then cat "$1/VERSION" exit 0 fi if [ -f $1/GIT_VERSION ]; then echo -n "\$Format:%h %d\$" | cmp -s $1/GIT_VERSION - && true if [ $? 
-eq 0 ]; then PARSE_GIT_VERSION=0 else PARSE_GIT_VERSION=1 fi else PARSE_GIT_VERSION=0 fi if [ $PARSE_GIT_VERSION -eq 1 ]; then GIT_VERSION_TAG=$(cat $1/GIT_VERSION | grep tag: | sed 's/.*tag: \([0-9a-z.+-]*\).*/\1/') GIT_VERSION_HASH=$(cat $1/GIT_VERSION | sed -e 's/ .*//') if [ -n "$GIT_VERSION_TAG" ]; then echo "$GIT_VERSION_TAG" exit 0 fi if [ -n "$GIT_VERSION_HASH" ]; then echo "$GIT_VERSION_HASH" exit 0 fi fi cd "$1" GIT_DESCRIBE=$(git describe 2>/dev/null) && true if [ -n "$GIT_DESCRIBE" ]; then # 1.5-19-gb8f78a329 -> 1.5+git19.gb8f78a329 # 1.5-rc1-19-gb8f78a329 -> 1.5-rc1+git19.gb8f78a329 echo "$GIT_DESCRIBE" | sed "s/\([0-9.]*\)-rc\([0-9]*\)-\([0-9]*\)-\([0-9a-g]*\)/\1-rc\2+git\3.\4/" | sed "s/\([0-9.]*\)-\([0-9]*\)-\([0-9a-g]*\)/\1+git\2.\3/" exit 0 fi # try commit it, git describe can fail when there are no tags (e.g. with shallow clone, like on Travis) GIT_COMMIT=$(git log -1 --format=%h) && true if [ -n "$GIT_COMMIT" ]; then echo "$GIT_COMMIT" exit 0 fi cd - >/dev/null # If nothing works, try to get version from directory name VER=$(basename `realpath "$1"` | sed 's/vmem[-]*\([0-9a-z.+-]*\).*/\1/') if [ -n "$VER" ]; then echo "$VER" exit 0 fi exit 1 vmem-1.8/utils/vmem.spec.in000066400000000000000000000147311361505074100157230ustar00rootroot00000000000000 # rpmbuild options: # --define _testconfig # --define _skip_check 1 # do not terminate build if files in the $RPM_BUILD_ROOT # directory are not found in the %files %define _unpackaged_files_terminate_build 0 # disable 'make check' on suse %if %{defined suse_version} %define _skip_check 1 %define dist .suse%{suse_version} %endif Name: vmem Version: __VERSION__ Release: 1%{?dist} Summary: __PACKAGE_SUMMARY__ Packager: __PACKAGE_MAINTAINER__ Group: __GROUP_SYS_LIBS__ License: __LICENSE__ URL: http://pmem.io/vmem Source0: %{name}-%{version}.tar.gz BuildRequires: gcc BuildRequires: make BuildRequires: glibc-devel BuildRequires: autoconf BuildRequires: automake BuildRequires: man BuildRequires: pkgconfig BuildRequires: gdb # Debug variants of the libraries should be filtered out of the provides. %global __provides_exclude_from ^%{_libdir}/vmem_debug/.*\\.so.*$ # By design, vmem does not support any 32-bit architecture. It has not # been validated on any architecture other than x86_64. ExclusiveArch: x86_64 %description The Persistent Memory Development Kit is a collection of libraries for using memory-mapped persistence, optimized specifically for persistent memory. %package -n libvmem__PKG_NAME_SUFFIX__ Summary: Volatile Memory allocation library Group: __GROUP_SYS_LIBS__ %description -n libvmem__PKG_NAME_SUFFIX__ The libvmem library turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. %files -n libvmem__PKG_NAME_SUFFIX__ %defattr(-,root,root,-) %{_libdir}/libvmem.so.* %license LICENSE %doc ChangeLog README.md %package -n libvmem-devel Summary: Development files for the Volatile Memory allocation library Group: __GROUP_DEV_LIBS__ Requires: libvmem__PKG_NAME_SUFFIX__ = %{version}-%{release} %description -n libvmem-devel The libvmem library turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. This sub-package contains libraries and header files for developing applications that want to make use of libvmem. 
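# Illustration only (not used by rpmbuild): the development files packaged
# below include a pkg-config file, so an application using the malloc-style
# API described above can typically be compiled against libvmem with e.g.:
#
#	cc example.c $(pkg-config --cflags --libs libvmem) -o example
#
# where example.c is a placeholder for the application source.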
%files -n libvmem-devel %defattr(-,root,root,-) %{_libdir}/libvmem.so %{_libdir}/pkgconfig/libvmem.pc %{_includedir}/libvmem.h %{_mandir}/man7/libvmem.7.gz %{_mandir}/man3/vmem_*.3.gz %license LICENSE %doc ChangeLog README.md %package -n libvmem-debug Summary: Debug variant of the Volatile Memory allocation library Group: __GROUP_DEV_LIBS__ Requires: libvmem__PKG_NAME_SUFFIX__ = %{version}-%{release} %description -n libvmem-debug The libvmem library turns a pool of persistent memory into a volatile memory pool, similar to the system heap but kept separate and with its own malloc-style API. This sub-package contains debug variant of the library, providing run-time assertions and trace points. The typical way to access the debug version is to set the environment variable LD_LIBRARY_PATH to /usr/lib64/vmem_debug. %files -n libvmem-debug %defattr(-,root,root,-) %dir %{_libdir}/vmem_debug %{_libdir}/vmem_debug/libvmem.so %{_libdir}/vmem_debug/libvmem.so.* %license LICENSE %doc ChangeLog README.md %package -n libvmmalloc__PKG_NAME_SUFFIX__ Summary: Dynamic to Persistent Memory allocation translation library Group: __GROUP_SYS_LIBS__ %description -n libvmmalloc__PKG_NAME_SUFFIX__ The libvmmalloc library transparently converts all the dynamic memory allocations into persistent memory allocations. This allows the use of persistent memory as volatile memory without modifying the target application. The typical usage of libvmmalloc is to load it via the LD_PRELOAD environment variable. %files -n libvmmalloc__PKG_NAME_SUFFIX__ %defattr(-,root,root,-) %{_libdir}/libvmmalloc.so.* %license LICENSE %doc ChangeLog README.md %package -n libvmmalloc-devel Summary: Development files for the Dynamic-to-Persistent allocation library Group: __GROUP_DEV_LIBS__ Requires: libvmmalloc__PKG_NAME_SUFFIX__ = %{version}-%{release} %description -n libvmmalloc-devel The libvmmalloc library transparently converts all the dynamic memory allocations into persistent memory allocations. This allows the use of persistent memory as volatile memory without modifying the target application. This sub-package contains libraries and header files for developing applications that want to specifically make use of libvmmalloc. %files -n libvmmalloc-devel %defattr(-,root,root,-) %{_libdir}/libvmmalloc.so %{_libdir}/pkgconfig/libvmmalloc.pc %{_includedir}/libvmmalloc.h %{_mandir}/man7/libvmmalloc.7.gz %license LICENSE %doc ChangeLog README.md %package -n libvmmalloc-debug Summary: Debug variant of the Dynamic-to-Persistent allocation library Group: __GROUP_DEV_LIBS__ Requires: libvmmalloc__PKG_NAME_SUFFIX__ = %{version}-%{release} %description -n libvmmalloc-debug The libvmmalloc library transparently converts all the dynamic memory allocations into persistent memory allocations. This allows the use of persistent memory as volatile memory without modifying the target application. This sub-package contains debug variant of the library, providing run-time assertions and trace points. The typical way to access the debug version is to set the environment variable LD_LIBRARY_PATH to /usr/lib64/vmem_debug. %files -n libvmmalloc-debug %defattr(-,root,root,-) %dir %{_libdir}/vmem_debug %{_libdir}/vmem_debug/libvmmalloc.so %{_libdir}/vmem_debug/libvmmalloc.so.* %license LICENSE %doc ChangeLog README.md %prep %setup -q -n %{name}-%{version} %build # For debug build default flags may be overriden to disable compiler # optimizations. 
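# One way to do that, shown here purely as an example, is to redefine the
# optflags macro on the rpmbuild command line, e.g.:
#	rpmbuild --define 'optflags -O0 -g' <other rpmbuild options>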
CFLAGS="%{optflags}" \ LDFLAGS="%{?__global_ldflags}" \ make %{?_smp_mflags} __MAKE_FLAGS__ # Override LIB_AR with empty string to skip installation of static libraries %install make install DESTDIR=%{buildroot} \ LIB_AR= \ prefix=%{_prefix} \ libdir=%{_libdir} \ includedir=%{_includedir} \ mandir=%{_mandir} \ bindir=%{_bindir} \ sysconfdir=%{_sysconfdir} \ docdir=%{_docdir} __MAKE_INSTALL_FDUPES__ %check %if 0%{?_skip_check} == 1 echo "Check skipped" %else %if %{defined _testconfig} cp %{_testconfig} src/test/testconfig.sh %else echo "TEST_DIR=/tmp" > src/test/testconfig.sh echo 'TEST_BUILD="debug nondebug"' >> src/test/testconfig.sh %endif make check %endif %post -n libvmem__PKG_NAME_SUFFIX__ -p /sbin/ldconfig %postun -n libvmem__PKG_NAME_SUFFIX__ -p /sbin/ldconfig %post -n libvmmalloc__PKG_NAME_SUFFIX__ -p /sbin/ldconfig %postun -n libvmmalloc__PKG_NAME_SUFFIX__ -p /sbin/ldconfig %if 0%{?__debug_package} == 0 %debug_package %endif %changelog