pax_global_header00006660000000000000000000000064141723105130014507gustar00rootroot0000000000000052 comment=30bed5d8aa4fcbf49dba1c67bca45edb12eaec90 charliecloud-0.26/000077500000000000000000000000001417231051300140745ustar00rootroot00000000000000charliecloud-0.26/.github/000077500000000000000000000000001417231051300154345ustar00rootroot00000000000000charliecloud-0.26/.github/PERUSEME000066400000000000000000000110241417231051300166020ustar00rootroot00000000000000[This file is not called README because files named .github/README.* get picked up by GitHub and used as the main project README.] This directory defines our GitHub Actions test suite setup. The basic strategy is to start one “job” per builder; these run in parallel. Each job then cycles through several different configurations, which vary per builder. It is configured to “fail fast”, i.e., if one of the jobs fails, the others will be immediately cancelled. For example, we only run the quick test suite on one builder, but if it fails everything will stop and you still get notified quickly. The number of concurrent jobs is not clear to me, but I’ve seen 7 and the documentation [1] implies it’s at least 20 (though I assume there is some global limit for OSS projects too). Nominally, jobs are started from the left side of the list, so anything we think is likely to fail fast (e.g., the quick scope) should be leftward; in practice it seems to be random. We could add more matrix dimensions, but then we’d have to deal with ordering more carefully, and pass the Docker cache manually (or not use it for some things). [1]: https://docs.github.com/en/free-pro-team@latest/actions/reference/usage-limits-billing-and-administration Conventions: * We install everything to start, then uninstall as needed for more bare-bones tests. * For the “extra things’ tests: * Docker is the fastest builder, so that’s where we put extra things. * We need to retain sudo for uninstalling stuff. * I could not figure out how to set a boolean variable for use in “if” conditions. (I *did* get an environment variable to work, but not using our set/unset convention, rather the strings “true” and “false”. This seemed error-prone.) Therefore the extra things tests all use the full expression. Miscellaneous notes and gotchas: * Runner specs (as of 2020-11-25), nominal: Azure Standard_DS2_v2 virtual machine: 2 vCPUs, 7 GiB RAM, 15 GiB SSD storage. The OS image is bare-bones but there is a lot of software installed in third-party locations [1]. Looking at the actual VM provisioned, the disk specs are a little different: it’s got an 84GiB root filesystem mounted, and another 9GiB mounted on /mnt. With a little deleting, maybe we can make room for a full-scope test. It does seem to boot faster than Travis; overall performance is worse; but total test time is lower (Travis took 50–55 minutes to complete a passing build). [1]: https://github.com/actions/virtual-environments/blob/ubuntu20/20201116.1/images/linux/Ubuntu2004-README.md * The default shell (Bash) does not read any init files [1], so you cannot configure it with e.g. .bashrc. [1]: https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#using-a-specific-shell * GitHub doesn’t seem to notice our setup if .github is a symlink. :( * GitHub seems to want us to encapsulate some of the steps that are now just shell scripts into “actions”. I haven’t looked into this. Issue #914. * Force-push does start a new build. * Commands in “run” blocks aren’t logged by default; you need “set -x” if you want to see them. However there seems to be a race condition, so the commands and their output aren’t always interleaved correctly. * “docker” does not require “sudo”. * There are several places where we configure, make, make install. These need to be kept in sync. Perhaps there is an opportunity for an “Action” here? But the configure output validation varies. * The .github directory doesn’t have a Makefile.am; the files are listed in the root Makefile.am. * Most variables are strings. It’s easy to get into a situation where you set a variable to “false” but it’s the string “false” so it’s true. * Viewing step output is glitchy: * While the job is in progress, sometimes the step headings are links and sometimes they aren’t. * If it does work, you can’t scroll back to the start. * The “in progress” throbber seems to often be on the wrong heading. * When it’s over, sometimes clicking on a heading opens it but the content is blank; in this case, clicking a different job and coming back seems to fix things. * Previously we listed $CH_TEST_TARDIR and $CH_TEST_IMGDIR between phases. I didn’t transfer that over. It must have been useful, so let’s pay attention to see if it needs to be re-added. charliecloud-0.26/.github/workflows/000077500000000000000000000000001417231051300174715ustar00rootroot00000000000000charliecloud-0.26/.github/workflows/main.yml000066400000000000000000000412711417231051300211450ustar00rootroot00000000000000name: test suite on: pull_request: # all pull requests push: branches: [ master ] # all commits on master schedule: - cron: '0 2 * * 0' # every Sunday at 2:00am UTC (Saturday 7:00pm MST) jobs: main: runs-on: ubuntu-20.04 timeout-minutes: 60 strategy: fail-fast: true # if any job fails, cancel the rest immediately matrix: builder: [none, docker, ch-image] pack_fmt: [squash-mount, tar-unpack, squash-unpack] keep_sudo: # if false, remove self from sudoers post-install/setup - false include: - builder: docker pack_fmt: squash-mount keep_sudo: true env: CH_BUILDER: ${{ matrix.builder }} CH_TEST_TARDIR: /mnt/tarballs CH_TEST_IMGDIR: /mnt/images CH_TEST_PACK_FMT: ${{ matrix.pack_fmt }} CH_TEST_PERMDIRS: /mnt/perms_test /run/perms_test ch_prefix: /var/tmp steps: - uses: actions/checkout@v2 - name: early setup & validation run: | [[ -n $CH_BUILDER ]] sudo timedatectl set-timezone America/Denver sudo chmod 1777 /mnt /usr/local/src echo "ch_makej=-j$(getconf _NPROCESSORS_ONLN)" >> $GITHUB_ENV # Remove sbin directories from $PATH (see issue #43). Assume none of # these are the first entry in $PATH. echo "PATH=$PATH" path_new=$PATH for i in /sbin /usr/sbin /usr/local/sbin; do path_new=${path_new/:$i/} done echo "path_new=$path_new" echo "PATH=$path_new" >> $GITHUB_ENV # Set sudo umask to something quite restrictive. The default is # 0022, but we had a "make install" bug (issue #947) that was # tickled by 0027, which is a better setting. For reasons I don't # understand, this only affects sudo, but since that's what we want, # I left it. Note there are a few dependency installs below that # have similar permissions bugs; these relax the umask on a # case-by-case basis. sudo sed -i -E 's/^UMASK\s+022/UMASK 0077/' /etc/login.defs fgrep UMASK /etc/login.defs - name: print starting environment run: | echo builder: ${{ matrix.builder }} echo pack_fmt: ${{ matrix.pack_fmt }} echo keep_sudo: ${{ matrix.keep_sudo }} uname -a lsb_release -a id pwd getconf _NPROCESSORS_ONLN free -m df -h locale -a timedatectl env | egrep '^(PATH|USER)=' env | egrep '^(ch|CH)_' [[ $PATH != */usr/local/sbin* ]] # verify sbin removal; see above printf 'umask for %s: ' $USER && umask printf 'umask under sudo: ' && sudo sh -c umask [[ $(umask) = 0022 ]] [[ $(sudo sh -c umask) = 0077 ]] - name: lines of code if: ${{ matrix.builder == 'none' }} run: | sudo apt-get install cloc misc/loc - name: install Bats # We need (1) old Bats, not bats-core and (2) prior to commit 1735a4f, # because this is what is provided in distros we need to support and # it contains a bug we work around (see issue #552). run: | cd /usr/local/src git clone --depth 1 --branch v0.4.0 https://github.com/sstephenson/bats.git cd bats sudo sh -c 'umask 0022 && ./install.sh /usr/local' command -v bats bats --version [[ $(command -v bats) == /usr/local/bin/bats ]] [[ $(bats --version) == 'Bats 0.4.0' ]] - name: install/configure dependencies, ch-image if: ${{ matrix.builder == 'ch-image' }} run: | # Install the minimum Python version we support (issue #959). Python # is pretty quick to build, so this only takes a couple of minutes. cd /usr/local/src wget -nv https://www.python.org/ftp/python/3.6.12/Python-3.6.12.tar.xz tar xf Python-3.6.12.tar.xz cd Python-3.6.12 ./configure --prefix=/usr/local make $ch_makej sudo sh -c 'umask 0022 && make install' command -v pip3 command -v python3 [[ $(command -v pip3) == /usr/local/bin/pip3 ]] [[ $(command -v python3) == /usr/local/bin/python3 ]] # Use most current packages b/c new versions sometimes break things. sudo sh -c 'umask 0022 && pip3 install lark-parser requests wheel' - name: install libsquashfuse if: ${{ matrix.pack_fmt == 'squash-mount' }} run: | set -x sudo apt-get install -y libfuse3-dev # Currently we need the master branch of SquashFUSE. cd /usr/local/src git clone https://github.com/vasi/squashfuse.git cd squashfuse ./autogen.sh ./configure --prefix=/usr/local make sudo sh -c 'umask 0022 && make install' sudo ldconfig - name: install/configure dependencies, all Buildah if: ${{ startsWith(matrix.builder, 'buildah') }} run: | command -v buildah buildah --version command -v runc runc --version # As of 2020-11-30, stock registries.conf is pretty simple; it # includes Docker Hub (docker.io) and then quay.io. Still, use ours # for stability. cat /etc/containers/registries.conf cat <<'EOF' | sudo tee /etc/containers/registries.conf [registries.search] registries = ['docker.io'] EOF - name: install/configure dependencies, privileged Buildah if: ${{ startsWith(matrix.builder, 'buildah-') }} run: | sudo usermod --add-subuids 10000-65536 $USER sudo usermod --add-subgids 10000-65536 $USER - name: install/configure dependencies, all run: | # configure doesn't tell us about these. sudo apt-get install pigz pv # configure does tell us about these. sudo apt-get install squashfs-tools # Track newest Sphinx in case it breaks things. sudo su -c 'umask 0022 && pip3 install sphinx sphinx-rtd-theme' # Use newest shellcheck. cd /usr/local/src wget -nv https://github.com/koalaman/shellcheck/releases/download/stable/shellcheck-stable.linux.x86_64.tar.xz tar xf shellcheck-stable.linux.x86_64.tar.xz mv shellcheck-stable/shellcheck /usr/local/bin which shellcheck shellcheck --version - name: disable bundled lark, ch-image if: ${{ matrix.builder == 'ch-image' }} run: | set -x # no install, disable ./autogen.sh --no-lark ./configure --disable-bundled-lark --prefix=$ch_prefix/from-git fgrep '"lark" module ... external' config.log ! test -f ./lib/lark/lark.py make $ch_makej sudo make $ch_makej install [[ $(bin/ch-image -v --dependencies) = 'lark path: /usr/local/lib/python3.6/site-packages/lark/__init__.py' ]] [[ $($ch_prefix/from-git/bin/ch-image -v --dependencies) = 'lark path: /usr/local/lib/python3.6/site-packages/lark/__init__.py' ]] make clean # install, disable ./autogen.sh ./configure --disable-bundled-lark --prefix=$ch_prefix/from-git fgrep '"lark" module ... bundled' config.log test -f ./lib/lark/lark.py make $ch_makej sudo make $ch_makej install [[ $(bin/ch-image -v --dependencies) = 'lark path: /home/runner/work/charliecloud/charliecloud/lib/charliecloud/lark/__init__.py' ]] [[ $($ch_prefix/from-git/bin/ch-image -v --dependencies) = 'lark path: /usr/local/lib/python3.6/site-packages/lark/__init__.py' ]] make clean # no install, enable: build fails, untested # install, enable: normal configuration, remainder of CI - name: build/install from Git run: | ./autogen.sh # Remove Autotools to make sure everything works without them. sudo apt-get remove autoconf automake # Configure and verify output. ./configure --prefix=$ch_prefix/from-git set -x fgrep 'documentation: yes' config.log [[ $CH_BUILDER = buildah* ]] && fgrep 'with Buildah: yes' config.log [[ $CH_BUILDER = docker ]] && fgrep 'with Docker: yes' config.log if [[ $CH_BUILDER = ch-image ]]; then fgrep 'with ch-image(1): yes' config.log fgrep '"lark" module ... bundled' config.log test -f ./lib/lark/lark.py fi fgrep 'recommended tests, tar-unpack mode: yes' config.log fgrep 'recommended tests, squash-unpack mode: yes' config.log if [[ ${{ matrix.pack_fmt }} = squash-mount ]]; then fgrep 'recommended tests, squash-mount mode: yes' config.log fgrep 'internal SquashFS mounting ... yes' config.log else fgrep 'recommended tests, squash-mount mode: no' config.log fgrep 'internal SquashFS mounting ... no' config.log fi set +x # Build and install. make $ch_makej sudo make $ch_makej install ldd bin/ch-run bin/ch-run --version ldd $ch_prefix/from-git/bin/ch-run $ch_prefix/from-git/bin/ch-run --version # Ensure bundled Lark in tarball. make $ch_makej dist tar tf charliecloud-*.tar.gz | fgrep /lib/lark/lark.py - name: late setup & validation, ch-image if: ${{ matrix.builder == 'ch-image' }} run: | set -x [[ $(bin/ch-image python-path) = /usr/local/bin/python3 ]] [[ $(bin/ch-image -v --dependencies) = "lark path: /home/runner/work/charliecloud/charliecloud/lib/charliecloud/lark/__init__.py" ]] [[ $($ch_prefix/from-git/bin/ch-image python-path) = /usr/local/bin/python3 ]] [[ $($ch_prefix/from-git/bin/ch-image -v --dependencies) = "lark path: /var/tmp/from-git/lib/charliecloud/lark/__init__.py" ]] - name: late setup & validation, all run: | bin/ch-test --is-pedantic all bin/ch-test --is-sudo all - name: make filesystem permissions fixtures run: | bin/ch-test mk-perm-dirs - name: start local registry, ch-image if: ${{ matrix.builder == 'ch-image' }} # See HOWTO in Google Docs for details. run: | set -x mkdir ~/registry-etc cp test/registry-config.yml ~/registry-etc/config.yml cd ~/registry-etc openssl req -batch -subj '/C=US/ST=NM/L=LA/O=LANL/CN=localhost' \ -newkey rsa:4096 -nodes -sha256 -x509 -days 36500 \ -keyout localhost.key -out localhost.crt #openssl x509 -text -noout -in localhost.crt sudo apt-get install apache2-utils htpasswd -Bbn charlie test > ./htpasswd diff -u <(docker run --rm registry:2 \ cat /etc/docker/registry/config.yml) \ config.yml || true docker run -d --rm -p 127.0.0.1:5000:5000 \ -v ~/registry-etc:/etc/docker/registry registry:2 docker ps -a - name: configure sudo to user root, group non-root if: ${{ matrix.keep_sudo }} run: | sudo sed -Ei 's/=\(ALL\)/=(ALL:ALL)/g' /etc/sudoers.d/runner sudo cat /etc/sudoers.d/runner - name: remove sudo if: ${{ ! matrix.keep_sudo }} run: | sudo rm /etc/sudoers.d/runner ! sudo echo hello - name: build/install from tarball if: ${{ matrix.builder == 'docker' && matrix.keep_sudo }} run: | # Create and unpack tarball. The wildcard saves us having to put the # version in a variable. This assumes there isn't already a tarball # or unpacked directory in $ch_prefix, which is true on the clean # VMs GitHub gives us. Note that cd fails if it gets more than one # argument, which helps, but this is probably kind of brittle. make $ch_makej dist mv charliecloud-*.tar.gz $ch_prefix cd $ch_prefix tar xf charliecloud-*.tar.gz rm charliecloud-*.tar.gz # else cd fails with "too many arguments" cd charliecloud-* pwd # Configure and verify output. ./configure --prefix=$ch_prefix/from-tarball set -x fgrep 'documentation: yes' config.log [[ $CH_BUILDER = buildah* ]] && fgrep 'with Buildah: yes' config.log [[ $CH_BUILDER = docker ]] && fgrep 'with Docker: yes' config.log [[ $CH_BUILDER = ch-image ]] && fgrep 'with ch-image(1): yes' config.log fgrep 'recommended tests, tar-unpack mode: yes' config.log fgrep 'recommended tests, squash-unpack mode: yes' config.log if [[ ${{ matrix.pack_fmt }} = squash-mount ]]; then fgrep 'recommended tests, squash-mount mode: yes' config.log fgrep 'internal SquashFS mounting ... yes' config.log else fgrep 'recommended tests, squash-mount mode: no' config.log fgrep 'internal SquashFS mounting ... no' config.log fi set +x # Build and install. make $ch_makej sudo make $ch_makej install bin/ch-run --version $ch_prefix/from-tarball/bin/ch-run --version - name: run test suite (Git WD, quick) if: ${{ matrix.builder == 'docker' && ! matrix.keep_sudo }} run: | bin/ch-test --scope=quick all - name: run test suite (Git WD, standard) run: | bin/ch-test all - name: run test suite (installed from Git WD, standard) if: ${{ matrix.builder == 'docker' && ! matrix.keep_sudo }} run: | $ch_prefix/from-git/bin/ch-test all - name: run test suite (installed from tarball, standard) if: ${{ matrix.builder == 'docker' && matrix.keep_sudo }} run: | $ch_prefix/from-tarball/bin/ch-test all - name: rebuild with most things --disable’d if: ${{ matrix.builder == 'docker' && ! matrix.keep_sudo }} run: | make distclean ./configure --prefix=/doesnotexist \ --disable-html --disable-man --disable-ch-image set -x fgrep 'HTML documentation ... no' config.log fgrep 'man pages ... no' config.log fgrep 'ch-image(1) ... no' config.log fgrep 'with Docker: yes' config.log fgrep 'with ch-image(1): no' config.log fgrep 'basic tests, all stages: yes' config.log fgrep 'more complete tests: no' config.log set +x # Build. make $ch_makej bin/ch-run --version - name: run test suite (Git WD, standard) if: ${{ matrix.builder == 'docker' && ! matrix.keep_sudo }} run: | bin/ch-test all - name: remove non-essential dependencies if: ${{ matrix.builder == 'docker' && matrix.keep_sudo && matrix.pack_fmt == 'tar-unpack' }} run: | set -x # This breaks lots of dependencies unrelated to our build but YOLO. sudo dpkg --remove --force-depends \ pigz \ pv \ python3-requests \ squashfs-tools \ sudo pip3 uninstall -y sphinx sphinx-rtd-theme if [[ ${{ matrix.pack_fmt }} = squash-mount ]]; then ( cd /usr/local/src/squashfuse && make uninstall ) fi ! python3 -c 'import requests' ! python3 -c 'import lark' test -e bin/ch-image bin/ch-test -f test/build/10_sanity.bats # issue #806 - name: rebuild if: ${{ matrix.builder == 'docker' && matrix.keep_sudo && matrix.pack_fmt == 'tar-unpack' }} run: | make distclean ./configure --prefix=/doesnotexist set -x fgrep 'documentation: no' config.log fgrep 'with Docker: yes' config.log fgrep 'basic tests, all stages: yes' config.log fgrep 'more complete tests: no' config.log fgrep 'recommended tests, squash-mount mode: no' config.log set +x # Build and install. make $ch_makej bin/ch-run --version - name: run test suite (Git WD, standard) if: ${{ matrix.builder == 'docker' && matrix.keep_sudo && matrix.pack_fmt == 'tar-unpack' }} run: | bin/ch-test all - name: print ending environment if: ${{ always() }} run: | free -m df -h du -sch $CH_TEST_TARDIR/* || true du -sch $CH_TEST_IMGDIR/* || true charliecloud-0.26/LICENSE000066400000000000000000000261361417231051300151110ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. charliecloud-0.26/Makefile.am000066400000000000000000000004061417231051300161300ustar00rootroot00000000000000SUBDIRS = lib bin doc examples misc packaging test # The CI stuff isn't really relevant for the tarballs, but they should # have complete source code. EXTRA_DIST = .github/PERUSEME .github/workflows/main.yml EXTRA_DIST += LICENSE README.rst VERSION autogen.sh charliecloud-0.26/README.rst000066400000000000000000000116571417231051300155750ustar00rootroot00000000000000What is Charliecloud? --------------------- Charliecloud provides user-defined software stacks (UDSS) for high-performance computing (HPC) centers. This "bring your own software stack" functionality addresses needs such as: * software dependencies that are numerous, complex, unusual, differently configured, or simply newer/older than what the center provides; * build-time requirements unavailable within the center, such as relatively unfettered internet access; * validated software stacks and configuration to meet the standards of a particular field of inquiry; * portability of environments between resources, including workstations and other test and development system not managed by the center; * consistent environments, even archivally so, that can be easily, reliabily, and verifiably reproduced in the future; and/or * usability and comprehensibility. How does it work? ----------------- Charliecloud uses Linux user namespaces to run containers with no privileged operations or daemons and minimal configuration changes on center resources. This simple approach avoids most security risks while maintaining access to the performance and functionality already on offer. Container images can be built using Docker or anything else that can generate a standard Linux filesystem tree. How do I learn more? -------------------- * Documentation: https://hpc.github.io/charliecloud * GitHub repository: https://github.com/hpc/charliecloud * Low-traffic mailing list for announcements: https://groups.io/g/charliecloud * We wrote an article for USENIX's magazine *;login:* that explains in more detail the motivation for Charliecloud and the technology upon which it is based: https://www.usenix.org/publications/login/fall2017/priedhorsky * A more technical resource is our Supercomputing 2017 paper: https://dl.acm.org/citation.cfm?id=3126925 Who is responsible? ------------------- Contributors: * Rusty Davis * Hunter Easterday * Oliver Freyermuth * Shane Goff * Michael Jennings * Christoph Junghans * Dave Love * Jordan Ogas * Kevin Pelzel * Megan Phinney * Reid Priedhorsky , co-founder and project lead * Tim Randles , co-founder * Matthew Vernon * Peter Wienemann * Lowell Wofford How can I participate? ---------------------- Use our GitHub page: https://hpc.github.io/charliecloud Bug reports and feature requests should be filed as "Issues". Questions, comments, support requests, and everything else should use our "Discussions". Don't worry if you put something in the wrong place; we'll be more than happy to help regardless. We also have a mailing list for announcements: https://groups.io/g/charliecloud Patches are much appreciated on the software itself as well as documentation. Optionally, please include in your first patch a credit for yourself in the list above. We are friendly and welcoming of diversity on all dimensions. How do I cite Charliecloud? --------------------------- If Charliecloud helped your research, or it was useful to you in any other context where bibliographic citations are appropriate, please cite the following open-access paper: Reid Priedhorsky and Tim Randles. "Charliecloud: Unprivileged containers for user-defined software stacks in HPC", 2017. In *Proc. Supercomputing*. DOI: `10.1145/3126908.3126925 `_. *Note:* This paper contains out-of-date number for the size of Charliecloud's code. Please instead use the current number in the FAQ. Copyright and license --------------------- Charliecloud is copyright © 2014–2021 Triad National Security, LLC. This software was produced under U.S. Government contract 89233218CNA000001 for Los Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC for the U.S. Department of Energy/National Nuclear Security Administration. This is open source software (LA-CC 14-096); you can redistribute it and/or modify it under the terms of the Apache License, Version 2.0. A copy is included in file LICENSE. You may not use this software except in compliance with the license. The Government is granted for itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide license in this material to reproduce, prepare derivative works, distribute copies to the public, perform publicly and display publicly, and to permit others to do so. Neither the government nor Triad National Security, LLC makes any warranty, express or implied, or assumes any liability for use of this software. If software is modified to produce derivative works, such derivative works should be clearly marked, so as not to confuse it with the version available from LANL. .. LocalWords: USENIX's charliecloud-0.26/VERSION000066400000000000000000000000051417231051300151370ustar00rootroot000000000000000.26 charliecloud-0.26/autogen.sh000077500000000000000000000055211417231051300161000ustar00rootroot00000000000000#!/bin/bash set -e lark_version=0.11.3 while [[ "$#" -gt 0 ]]; do case $1 in --clean) clean=yes ;; --no-lark) lark_no_install=yes ;; --rm-lark) lark_shovel=yes ;; *) help=yes ;; esac shift done if [[ $help ]]; then cat <&1 echo 'hint: Install "wheel" and then re-run with "--rm-lark"?' 2>&1 exit 1 fi set +x echo echo 'Done. Now you can "./configure".' fi charliecloud-0.26/bin/000077500000000000000000000000001417231051300146445ustar00rootroot00000000000000charliecloud-0.26/bin/Makefile.am000066400000000000000000000024401417231051300167000ustar00rootroot00000000000000# Bugs in this Makefile: # # 1. $(EXEEXT) not included for scripts. ## C programs bin_PROGRAMS = ch-checkns ch-run ch-ssh ch_checkns_SOURCES = ch-checkns.c ch_misc.h ch_misc.c ch_run_SOURCES = ch-run.c ch_core.h ch_core.c ch_misc.h ch_misc.c if HAVE_LIBSQUASHFUSE ch_run_SOURCES += ch_fuse.h ch_fuse.c endif ch_run_CFLAGS = $(CFLAGS) $(PTHREAD_CFLAGS) ch_run_LDADD = $(CH_RUN_LIBS) ch_ssh_SOURCES = ch-ssh.c ch_misc.h ch_misc.c ## Shell scripts - distributed as-is dist_bin_SCRIPTS = ch-build \ ch-build2dir \ ch-builder2squash \ ch-builder2tar \ ch-convert \ ch-dir2squash \ ch-fromhost \ ch-pull2dir \ ch-pull2tar \ ch-tar2dir \ ch-test ## Python scripts - need text processing bin_SCRIPTS = ch-run-oci # scripts to build EXTRA_SCRIPTS = ch-image # more scripts that *may* be built if ENABLE_CH_IMAGE bin_SCRIPTS += ch-image endif EXTRA_DIST = ch-image.py.in ch-run-oci.py.in CLEANFILES = $(bin_SCRIPTS) $(EXTRA_SCRIPTS) ch-image: ch-image.py.in ch-run-oci: ch-run-oci.py.in $(bin_SCRIPTS): %: %.py.in rm -f $@ sed -E 's|%PYTHON_SHEBANG%|@PYTHON_SHEBANG@|' < $< > $@ chmod +rx,-w $@ # respects umask charliecloud-0.26/bin/ch-build000077500000000000000000000111741417231051300162650ustar00rootroot00000000000000#!/bin/sh lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" # shellcheck disable=SC2034 usage=$(cat <&2 exit 1 ;; esac exit 0 fi echo "building with: ${CH_BUILDER}" case $CH_BUILDER in buildah*) case $CH_BUILDER in buildah) runtime=${ch_bin}/ch-run-oci ignore_chown=true ;; buildah-runc) runtime=runc ignore_chown=false ;; buildah-setuid) runtime=${ch_bin}/ch-run-oci ignore_chown=false ;; esac # Set BUILDAH_LAYERS instead of using "--layers=true" to avoid the # error "can only set one of 'layers' or 'no-cache'". export BUILDAH_LAYERS=true # If Buildah sees a terminal on stdin, it does TTY stuff that confuses # ch-run-oci, so we always have to redirect stdin. If it's already # redirected, just pass that through; otherwise, use /dev/null. # # We used to redirect from /dev/stdin in the former case. However, # this started throwing errors in CI out of the blue (issue #964): # # cannot open /dev/stdin: No such device or address # # The buildah invocation is repeated because I couldn't figure out how # to put the arguments in a variable and then get them out again # reliably without Bash arrays. if [ -t 0 ]; then buildah --storage-opt .ignore_chown_errors="$ignore_chown" \ build-using-dockerfile \ --build-arg HTTP_PROXY="$HTTP_PROXY" \ --build-arg HTTPS_PROXY="$HTTPS_PROXY" \ --build-arg NO_PROXY="$NO_PROXY" \ --build-arg http_proxy="$http_proxy" \ --build-arg https_proxy="$https_proxy" \ --build-arg no_proxy="$no_proxy" \ --isolation=rootless \ --runtime="$runtime" \ "$@" < /dev/null else buildah --storage-opt .ignore_chown_errors="$ignore_chown" \ build-using-dockerfile \ --build-arg HTTP_PROXY="$HTTP_PROXY" \ --build-arg HTTPS_PROXY="$HTTPS_PROXY" \ --build-arg NO_PROXY="$NO_PROXY" \ --build-arg http_proxy="$http_proxy" \ --build-arg https_proxy="$https_proxy" \ --build-arg no_proxy="$no_proxy" \ --isolation=rootless \ --runtime="$runtime" \ "$@" fi ;; ch-image) "${ch_bin}/ch-image" build "$@" ;; docker) # Coordinate this list with test "build.bats/proxy variables". # shellcheck disable=SC2154 docker_ build --build-arg HTTP_PROXY="$HTTP_PROXY" \ --build-arg HTTPS_PROXY="$HTTPS_PROXY" \ --build-arg NO_PROXY="$NO_PROXY" \ --build-arg http_proxy="$http_proxy" \ --build-arg https_proxy="$https_proxy" \ --build-arg no_proxy="$no_proxy" \ "$@" ;; esac charliecloud-0.26/bin/ch-build2dir000077500000000000000000000021751417231051300170470ustar00rootroot00000000000000#!/bin/bash lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" # shellcheck disable=SC2034 usage=$(cat <&2 exit 1 fi tag_fs=$(tag_to_path "$tag") set -x "${ch_bin}"/ch-build "${args[@]}" "${ch_bin}"/ch-builder2tar "$tag" "$outdir" "${ch_bin}"/ch-tar2dir "${outdir}/${tag_fs}.tar.gz" "$outdir" rm "${outdir}/${tag_fs}.tar.gz" deprecated_convert_warn charliecloud-0.26/bin/ch-builder2squash000077500000000000000000000024241417231051300201210ustar00rootroot00000000000000#!/bin/sh lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" # shellcheck disable=SC2034 usage=$(cat <&2 exit 1 fi # mktemp is used for intermediate files to avoid heavy metadata loads on # certain filesystems temp=$(mktemp -d --tmpdir ch-builder2squash.XXXXXX) # Get image as a directory "${ch_bin}/ch-builder2tar" -b "$CH_BUILDER" --nocompress "$image" "$temp" image=$(tag_to_path "$image") tar=${temp}/${image}.tar "${ch_bin}/ch-tar2dir" "${tar}" "$temp" # Create squashfs, and clean up intermediate files and folders. "${ch_bin}/ch-dir2squash" "${temp}/${image}" "$outdir" "$@" rm -rf --one-file-system "$temp" deprecated_convert_warn charliecloud-0.26/bin/ch-builder2tar000077500000000000000000000133251417231051300174050ustar00rootroot00000000000000#!/bin/sh lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" # shellcheck disable=SC2034 usage=$(cat < /dev/null EOF echo "stopping Buildah container" buildah rm "$container" > /dev/null echo "adding environment" temp=$(mktemp --tmpdir ch-builder2tar.XXXXXX) buildah inspect --format='{{range .OCIv1.Config.Env}}{{println .}}{{end}}' \ "$image" > "$temp" tar rf "$tar" -b1 -P --xform="s|${temp}|ch/environment|" "$temp" rm "$temp" ;; ch-image) storage=$("${ch_bin}/ch-image" storage-path)/img/$(tag_to_path "$image") if [ ! -d "$storage" ]; then echo "$image not found in builder storage" 1>&2 exit 1 fi echo "exporting" ( cd "$storage" && tar cf - . ) | pv_ > "$tar" ;; docker) # Export the image to tarball. echo "exporting" cid=$(docker_ create --read-only "$image") size=$(docker_ image inspect "$image" --format='{{.Size}}') docker_ export "$cid" | pv_ -s "$size" > "$tar" docker_ rm "$cid" > /dev/null echo "adding environment" temp=$(mktemp --tmpdir ch-builder2tar.XXXXXX) docker_ inspect "$image" \ --format='{{range .Config.Env}}{{println .}}{{end}}' > "$temp" tar rf "$tar" -b1 -P --xform="s|${temp}|ch/environment|" "$temp" rm "$temp" ;; none) echo 'this script does not support the above builder' 1>&2 exit 1 ;; *) # builder_choose() above should have ensured the builder is good. echo "unreachable code reached: unknown builder: $CH_BUILDER" 1>&2 exit 1 ;; esac if [ "$nocompress" ]; then ls -lh "$tar" else echo "compressing" pv_ < "$tar" | gzip_ -6 > "$tar_gzipped" rm "$tar" ls -lh "$tar_gzipped" fi deprecated_convert_warn charliecloud-0.26/bin/ch-checkns.c000066400000000000000000000137011417231051300170200ustar00rootroot00000000000000/* This example program walks through the complete namespace / pivot_root(2) dance to enter a Charliecloud container, with each step documented. If you can compile it and run it without error as a normal user, ch-run will work too (if not, that's a bug). If not, this will hopefully help you understand more clearly what went wrong. pivot_root(2) has a large number of error conditions resulting in EINVAL that are not documented in the man page [1]. The ones we ran into are: 1. The new root cannot be shared [2] outside the mount namespace. This makes sense, as we as an unprivileged user inside our namespace should not be able to change privileged things owned by other namespaces. This condition arises on systemd systems, which mount everything shared by default. 2. The new root must not have been mounted before unshare(2), and/or it must be a mount point. The man page says "new_root does not have to be a mount point", but the source code comment says "[i]t must be a mount point" [3]. (I haven't isolated which was our problem.) In either case, this is a very common situation. 3. The old root is a "rootfs" [4]. This is documented in a source code comment [3] but not the man page. This is an unusual situation for most contexts, because the rootfs is typically the initramfs overmounted during boot. However, some cluster provisioning systems, e.g. Perceus, use the original rootfs directly. Regarding overlayfs: It's very attractive to union-mount a tmpfs over the read-only image; then all programs can write to their hearts' desire, and the image does not change. This also simplifies the code. Unfortunately, overlayfs + userns is not allowed as of 4.4.23. See: https://lwn.net/Articles/671774/ [1]: http://man7.org/linux/man-pages/man2/pivot_root.2.html [2]: https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt [3]: http://lxr.free-electrons.com/source/fs/namespace.c?v=4.4#L2952 [4]: https://www.kernel.org/doc/Documentation/filesystems/ramfs-rootfs-initramfs.txt */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include #include #include "config.h" #include "ch_misc.h" const char usage[] = "\ \n\ Usage: ch-checkns\n\ \n\ Check \"ch-run\" prerequisites, e.g., namespaces and \"pivot_root(2)\".\n\ \n\ Example:\n\ \n\ $ ch-checkns\n\ ok\n"; #define TRY(x) if (x) fatal_(__FILE__, __LINE__, errno, #x) void fatal_(const char *file, int line, int errno_, const char *str) { char *url = "https://github.com/hpc/charliecloud/blob/master/bin/ch-checkns.c"; printf("error: %s: %d: %s\n", file, line, str); printf("errno: %d\nsee: %s\n", errno_, url); exit(1); } int main(int argc, char *argv[]) { unsigned long flags; if (argc >= 2 && strcmp(argv[1], "--help") == 0) { fprintf(stderr, usage); return 0; } if (argc >= 2 && strcmp(argv[1], "--version") == 0) { version(); return 0; } /* Ensure that our image directory exists. It doesn't really matter what's in it. */ if (mkdir("/tmp/newroot", 0755) && errno != EEXIST) TRY (errno); /* Enter the mount and user namespaces. Note that in some cases (e.g., RHEL 6.8), this will succeed even though the userns is not created. In that case, the following mount(2) will fail with EPERM. */ TRY (unshare(CLONE_NEWNS|CLONE_NEWUSER)); /* Claim the image for our namespace by recursively bind-mounting it over itself. This standard trick avoids conditions 1 and 2. */ TRY (mount("/tmp/newroot", "/tmp/newroot", NULL, MS_REC | MS_BIND | MS_PRIVATE, NULL)); /* The next few calls deal with condition 3. The solution is to overmount the root filesystem with literally anything else. We use the parent of the image, /tmp. This doesn't hurt if / is not a rootfs, so we always do it for simplicity. */ /* Claim /tmp for our namespace. You would think that because /tmp contains /tmp/newroot and it's a recursive bind mount, we could claim both in the same call. But, this causes pivot_root(2) to fail later with EBUSY. */ TRY (mount("/tmp", "/tmp", NULL, MS_REC | MS_BIND | MS_PRIVATE, NULL)); /* chdir to /tmp. This moves the process' special "." pointer to the soon-to-be root filesystem. Otherwise, it will keep pointing to the overmounted root. See the e-mail at the end of: https://git.busybox.net/busybox/tree/util-linux/switch_root.c?h=1_24_2 */ TRY (chdir("/tmp")); /* Move /tmp to /. (One could use this to directly enter the image, avoiding pivot_root(2) altogether. However, there are ways to remove all active references to the root filesystem. Then, the image could be unmounted, exposing the old root filesystem underneath. While Charliecloud does not claim a strong isolation boundary, we do want to make activating the UDSS irreversible.) */ TRY (mount("/tmp", "/", NULL, MS_MOVE, NULL)); /* Move the "/" special pointer to the new root filesystem, for the reasons above. (Similar reasoning applies for why we don't use chroot(2) to directly activate the UDSS.) */ TRY (chroot(".")); /* Make a place for the old (intermediate) root filesystem to land. */ if (mkdir("/newroot/oldroot", 0755) && errno != EEXIST) TRY (errno); /* Re-mount the image read-only. */ flags = path_mount_flags("/newroot") | MS_REMOUNT | MS_BIND | MS_RDONLY; TRY (mount(NULL, "/newroot", NULL, flags, NULL)); /* Finally, make our "real" newroot into the root filesystem. */ TRY (chdir("/newroot")); TRY (syscall(SYS_pivot_root, "/newroot", "/newroot/oldroot")); TRY (chroot(".")); /* Unmount the old filesystem and it's gone for good. */ TRY (umount2("/oldroot", MNT_DETACH)); /* Report success. */ printf("ok\n"); } charliecloud-0.26/bin/ch-convert000077500000000000000000000407311417231051300166470ustar00rootroot00000000000000#!/bin/sh ## preamble ################################################################## lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" PATH=${ch_bin}:$PATH # shellcheck disable=SC2034 usage=$(cat < "$2" } cv_dir_chimage () { dir_in_validate "$1" chimage_out_validate "$2" INFO 'importing ...' ch-image import "$1" "$2" # FIXME: no progress meter } cv_dir_docker () { dir_in_validate "$1" docker_out_validate "$2" dirtar=${tmpdir}/weirdal.tar.gz # One could also use "docker build" with "FROM scratch" and "COPY", # apparently saving a tar step. However, this will in fact tar the source # directory anyway to send it to the Docker daemon. cv_dir_tar "$1" "$dirtar" # FIXME: needlessly compresses cv_tar_docker "$dirtar" "$2" rm "$dirtar" } cv_dir_squash () { # FIXME: mksquashfs(1) is incredibly noisy. This can be mitigated with # -quiet, but that's not available until version 4.4 (2019). dir_in_validate "$1" squash_out_validate "$2" pflist=${tmpdir}/pseudofiles INFO 'packing ...' touch "$pflist" mount_points_ensure "$1" "$pflist" # 64kiB block size based on Shane's experiments. mksquashfs "$1" "$2" -b 65536 -noappend -all-root -pf "$pflist" # Zero the archive's internal modification time at bytes 8–11, 0-indexed # [1]. Newer SquashFS-Tools ≥4.3 have option "-fstime 0" to do this, but # CentOS 7 comes with 4.2. [1]: https://dr-emann.github.io/squashfs/ printf '\x00\x00\x00\x00' | dd of="$2" bs=1 count=4 seek=8 conv=notrunc rm "$pflist" } cv_dir_tar () { dir_in_validate "$1" tar_out_validate "$2" # Don't add essential files & directories because that will happen later # when converted to dir or squash. INFO 'packing ...' ( cd "$1" && tar czf - . ) | pv_ > "$2" } cv_docker_chimage () { docker_in_validate "$1" chimage_out_validate "$2" docker_out=${tmpdir}/weirdal.tar.gz cv_docker_tar "$1" "$docker_out" # FIXME: needlessly compresses cv_tar_chimage "$docker_out" "$2" rm "$docker_out" } cv_docker_dir () { docker_in_validate "$1" dir_out_validate "$2" docker_out=${tmpdir}/weirdal.tar.gz cv_docker_tar "$1" "$docker_out" # FIXME: needlessly compresses cv_tar_dir "$docker_out" "$2" rm "$docker_out" } cv_docker_squash () { docker_in_validate "$1" squash_out_validate "$2" docker_dir=${tmpdir}/weirdal cv_docker_dir "$1" "$docker_dir" # FIXME: needlessly compresses cv_dir_squash "$docker_dir" "$2" rm -Rf --one-file-system "$docker_dir" } cv_docker_tar () { docker_in_validate "$1" tar_out_validate "$2" tmptar=${tmpdir}/weirdal.tar tmpenv=${tmpdir}/weirdal.env INFO 'exporting ...' cid=$(docker_ create --read-only "$1" /bin/true) # cmd needed but not run size=$(docker_ image inspect "$1" --format='{{.Size}}') docker_ export "$cid" | pv_ -s "$size" > "$tmptar" docker_ rm "$cid" > /dev/null INFO 'adding environment ...' docker_ inspect "$1" \ --format='{{range .Config.Env}}{{println .}}{{end}}' > "$tmpenv" tar rf "$tmptar" -b1 -P --xform="s|${tmpenv}|ch/environment|" "$tmpenv" INFO 'compressing ...' pv_ < "$tmptar" | gzip_ -6 > "$2" rm "$tmptar" rm "$tmpenv" } cv_squash_chimage () { squash_in_validate "$1" chimage_out_validate "$2" unsquash_dir=${tmpdir}/weirdal cv_squash_dir "$1" "$unsquash_dir" cv_dir_chimage "$unsquash_dir" "$2" rm -Rf --one-file-system "$unsquash_dir" } cv_squash_dir () { squash_in_validate "$1" dir_out_validate "$2" # Note: unsquashfs(1) has no exclude filter, only include, so if the # archive includes bad files like devices, this will fail. I don't know to # what degree this will be a problem. unsquashfs -d "$2" -user-xattrs "$1" dir_fixup "$2" } cv_squash_docker () { squash_in_validate "$1" docker_out_validate "$2" unsquash_tar=${tmpdir}/weirdal.tar.gz cv_squash_tar "$1" "$unsquash_tar" cv_tar_docker "$unsquash_tar" "$2" rm "$unsquash_tar" } cv_squash_tar () { squash_in_validate "$1" tar_out_validate "$2" unsquash_dir=${tmpdir}/weirdal cv_squash_dir "$1" "$unsquash_dir" cv_dir_tar "$unsquash_dir" "$2" rm -Rf --one-file-system "$unsquash_dir" } cv_tar_chimage () { tar_in_validate "$1" chimage_out_validate "$2" INFO 'importing ...' ch-image import "$1" "$2" # FIXME: no progress meter } cv_tar_dir () { tar_in_validate "$1" # Infer decompression argument because GNU tar is unable to do so if input # is a pipe, and we want to keep pv(1). See: # https://www.gnu.org/software/tar/manual/tar.html#gzip case $1 in *.tar) decompress= ;; *.tar.gz) decompress=z ;; *.tar.xz) decompress=J ;; *.tgz) decompress=z ;; *) echo "unknown extension: ${tarball}" 1>&2 exit 1 ;; esac dir_out_validate "$2" INFO 'unpacking ...' mkdir "$2" # Use a pipe because PV ignores arguments if it's cat rather than PV. # # See FAQ on /dev exclusion. --no-wildcards-match-slash needed to prevent # * matching multiple directories; tar default differs from sh behavior. #shellcheck disable=SC2094 pv_ -s "$(stat -c%s "$1")" < "$1" \ | tar x$decompress -C "$2" -f - \ --anchored --no-wildcards-match-slash \ --exclude='dev/*' --exclude='*/dev/*' dir_fixup "$2" } cv_tar_docker () { tar_in_validate "$1" docker_out_validate "$2" INFO "importing ..." docker_ import "$1" "$2" # FIXME: no progress meter } cv_tar_squash () { tar_in_validate "$1" squash_out_validate "$2" tar_dir=${tmpdir}/weirdal cv_tar_dir "$1" "$tar_dir" cv_dir_squash "$tar_dir" "$2" rm -Rf --one-file-system "$tar_dir" } ## input/output validation functions ######################################### # Each of these checks whether $1 can be used as input/output descriptor for # that format, and also whether it already exists if --no-clobber. Exit with # error on validation failure. chimage_in_validate () { img=$(chimage_path "$1") [ -d "$img" ] || FATAL "source image not found in ch-image storage: $1" } chimage_out_validate () { img=$(chimage_path "$1") if [ -d "$img" ] && [ -n "$no_clobber" ]; then FATAL "exists in ch-image storage, not deleting per --no-clobber: ${1}" fi } # Validate that $1 can be used as an input directory. dir_in_validate () { [ -d "$1" ] || FATAL "not a directory: ${1}" } dir_out_validate () { parent_validate "$1" # $1 must not exist, unless it looks like an image, in which case remove # it (or error if --noclobber). if [ -e "$1" ]; then [ -d "$1" ] || FATAL "exists but not a directory: ${1}" if [ -d "${1}/bin" ] && [ -d "${1}/dev" ] && [ -d "${1}/usr" ]; then if [ -n "$no_clobber" ]; then FATAL "exists, not deleting per --no-clobber: ${1}" else INFO "deleting existing image: ${1}" rm -Rf --one-file-system "$1" fi else FATAL "exists but does not appear to be an image: ${1}" fi fi } docker_in_validate () { digest=$(docker_ image ls -q "$1") [ -n "$digest" ] || FATAL "source not found in Docker storage: ${1}" } docker_out_validate () { digest=$(docker_ image ls -q "$1") if [ -n "$digest" ] && [ -n "$no_clobber" ]; then FATAL "exists in Docker storage, not deleting per --no-clobber: ${1}" fi } squash_in_validate () { [ -e "$1" ] || FATAL "not found: ${1}" } squash_out_validate () { parent_validate "$1" path_noclobber "$1" } tar_in_validate () { [ -e "$1" ] || FATAL "not found: ${1}" } tar_out_validate () { case $1 in *.tar.gz|*.tgz) ;; *) FATAL "only gzipped tar output (.tar.gz or .tgz) supported" ;; esac parent_validate "$1" path_noclobber "$1" } ## supporting functions ###################################################### # Return the path to image $1 in ch-image storage. chimage_path () { echo "$(ch-image storage-path)/img/$(tag_to_path "$1")" } # Return basename of $2 (format $1) with no extension and filesystem-invalid # characters removed, i.e., suitable for a new extension to be appended. Only # extensions valid for the format $1 are considered. desc_base () { fmt=$1 dsc=$2 case $fmt in dir) basename "$dsc" ;; ch-image|docker) tag_to_path "$dsc" ;; squash) basename "$dsc" | sed -E 's/\.(sqfs|squash|squashfs|squishy)$//' ;; tar) basename "$dsc" | sed -E 's/\.(t.z|tar(\.(.|..))?)$//' ;; *) FATAL "invalid format: $fmt" ;; esac } # Ensure $1 has everything needed to be an image directory. dir_fixup () { DEBUG "fixing up: $1" # Make all directories writeable so we can delete later (hello, Red Hat). find "$1" -type d -a ! -perm /200 -exec chmod u+w {} + # If tarball had a single containing directory, move the contents up a # level and remove the containing directory. It is non-trivial in POSIX sh # to deal with hidden files; see https://unix.stackexchange.com/a/6397. files=$(ls -Aq "$1") if [ "$(echo "$files" | wc -l)" -eq 1 ]; then ( cd "${1}/${files}" || FATAL "cd failed: ${1}/${files}" for f in * .[!.]* ..?*; do if [ -e "$f" ]; then mv -- "$f" ..; fi done ) rmdir "${1}/${files}" fi # Ensure mount points are present. mount_points_ensure "$1" } # Return validated format $1: if non-empty and valid, return it; if empty, # infer format from the descriptor $2; otherwise, exit with error. fmt_validate () { fmt=$1 dsc=$2 if [ -z "$fmt" ]; then case $dsc in *.sqfs|*.squash|*.squashfs|*.squishy) fmt=squash ;; *.tar|*.t?z|*.tar.?|*.tar.??) fmt=tar ;; /*|./*) fmt=dir ;; *) if [ -n "$have_ch_image" ]; then fmt=ch-image elif [ -n "$have_docker" ]; then fmt=docker else FATAL "descriptor looks like builder storage but no builder found: ${dsc}" fi ;; esac fi case $fmt in ch-image) if [ -z "$have_ch_image" ]; then FATAL "format ch-image invalid: ch-image not found" fi ;; docker) if [ -z "$have_docker" ]; then FATAL "format docker invalid: docker not found" fi ;; dir|squash|tar) ;; *) FATAL "invalid format: ${fmt}" ;; esac echo "$fmt" } # Ensure mount points needed by ch-run exist in directory $1. Do nothing if # something already exists, without dereferencing, in case it's a symlink, # which will work for bind-mount later but won't resolve correctly now outside # the container (e.g. linuxcontainers.org images; issue #1015). # # If $2 is non-empty, append missing mount points to a list of mksquashfs(1) # "pseudo files" to that file instead of modifying $1. While pseudo files # don't conflict with actual files, they do generate a warning. # # An alternative approach is to create the mount points in a temporary # directory, then append that to the SquashFS archive. However, mksquashfs(1) # does not merge the new files. Ff an existing file or directory is given in # the appended directory, both go into the archive, with the second renamed # (to "foo_1"). This make it impossible to add mount points to a directory # that already exists; e.g., if /etc exists, /etc/resolv.conf will end up at # /etc_1/resolv.conf. # # WARNING: Keep in sync with Image.unpack_init(). mount_points_ensure () { # directories for i in bin dev etc mnt proc usr \ mnt/0 mnt/1 mnt/2 mnt/3 mnt/4 mnt/5 mnt/6 mnt/7 mnt/8 mnt/9; do if ! exist_p "${1}/${i}"; then if [ -n "$2" ]; then echo "${i} d 755 root root" >> "$2" else mkdir "${1}/${i}" fi fi done # files for i in etc/hosts etc/resolv.conf; do if ! exist_p "${1}/${i}"; then if [ -n "$2" ]; then echo "${i} f 644 root root true" >> "$2" else touch "${1}/${i}" fi fi done } # Validate the parent or enclosing directory of $1 exists. parent_validate () { parent=$(dirname "$1") [ -d "$parent" ] || "not a directory: $parent" } # Exit with error if $1 exists and --no-clobber was given. path_noclobber () { if [ -e "$1" ] && [ -n "$no_clobber" ]; then FATAL "exists, not deleting per --no-clobber: ${1}" fi } # Set $tmpdir to be a new directory with a unique and unpredictable name, as a # subdirectory of --tmp, $TMPDIR, or /var/tmp, whichever is first set. tmpdir_setup () { if [ -z "$tmpdir" ]; then if [ -n "$TMPDIR" ]; then tmpdir=$TMPDIR else tmpdir=/var/tmp fi fi case $tmpdir in /*) ;; *) FATAL "temp dir must be absolute: ${tmpdir}" ;; esac tmpdir=$(mktemp -d --tmpdir="$tmpdir" ch-convert.XXXXXX) } ## main ###################################################################### while true; do if ! parse_basic_arg "$1"; then case $1 in -i|--in-fmt) shift in_fmt=$1 ;; -i=*|--in-fmt=*) in_fmt=${1#*=} ;; -n|--dry-run) dry_run=yes ;; --no-clobber) no_clobber=yes ;; -o|--out-fmt) shift out_fmt=$1 ;; -o=*|--out-fmt=*) out_fmt=${1#*=} ;; --tmp) shift tmpdir=$1 ;; *) break ;; esac fi shift done if [ "$#" -ne 2 ]; then usage fi in_desc=$1 out_desc=$2 VERBOSE "verbose level: ${verbose}" if command -v ch-image > /dev/null 2>&1; then have_ch_image=yes VERBOSE 'ch-image: found' else VERBOSE 'ch-image: not found' fi if command -v docker > /dev/null 2>&1; then have_docker=yes VERBOSE 'docker: found' else VERBOSE 'docker: not found' fi in_fmt=$(fmt_validate "$in_fmt" "$in_desc") out_fmt=$(fmt_validate "$out_fmt" "$out_desc") tmpdir_setup VERBOSE "temp dir: ${tmpdir}" VERBOSE "noclobber: ${no_clobber:-will clobber}" INFO 'input: %-8s %s' "$in_fmt" "$in_desc" INFO 'output: %-8s %s' "$out_fmt" "$out_desc" if [ "$in_fmt" = "$out_fmt" ]; then FATAL 'input and output formats must be different' fi if [ -z "$dry_run" ]; then # Dispatch to conversion function. POSIX sh does not support hyphen in # function names, so remove it. "cv_$(echo "$in_fmt" | tr -d '-')_$(echo "$out_fmt" | tr -d '-')" \ "$in_desc" "$out_desc" fi rmdir "$tmpdir" INFO 'done' charliecloud-0.26/bin/ch-dir2squash000077500000000000000000000044311417231051300172510ustar00rootroot00000000000000#!/bin/sh set -e lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" # shellcheck disable=SC2034 usage=$(cat <&2 exit 1 fi # Check input directory is a directory if [ ! -d "$indir" ]; then echo "can't squash: ${indir} is not a directory" 1>&2 exit 1 fi # Check OUTDIR exists and is a directory if [ ! -e "$outdir" ]; then echo "can't squash: ${outdir} does not exist" 1>&2 exit 1 fi if [ ! -d "$outdir" ]; then echo "can't squash: ${outdir} is not a directory" 1>&2 exit 1 fi # Ensure mount points that ch-run needs exist. We do this by creating a # temporary directory and appending that to an existing SquashFS archive. This # raises mksquashfs(1)'s strange behavior w.r.t. duplicate filenames: if an # existing file or directory "foo" is given again during append, it also goes # into the archive, renamed to "foo_1". I was not able to find a way to get # the behavior we want (last one given wins, merge directories). # # WARNING: Keep in sync with other shell scripts & Image.unpack_init(). temp=$(mktemp -d --tmpdir ch-dir2squash.XXXXXX) # Directories. Don't follow symlinks in existence testing (issue #1015). for i in bin dev etc mnt proc usr \ mnt/0 mnt/1 mnt/2 mnt/3 mnt/4 mnt/5 mnt/6 mnt/7 mnt/8 /mnt/9; do exist_p "${indir}/${i}" || mkdir "${temp}/${i}" done # Files are tricky because even if the file doesn't exist, the enclosing # directory will, triggering the bad duplicate behavior. I'm not sure what to # do other than fail out. for i in etc/hosts etc/resolv.conf; do if ( ! exist_p "${indir}/${i}" ); then echo "can't squash: ${indir}/${i} does not exist" 1>&2 exit 1 fi done # Create SquashFS file. image=$(basename "${indir}") mksquashfs "${indir}" "${outdir}/${image}.sqfs" -noappend "$@" mksquashfs "$temp" "${outdir}/${image}.sqfs" -no-recovery rm -rf --one-file-system "$temp" ls -lh "${outdir}/${image}.sqfs" deprecated_convert_warn charliecloud-0.26/bin/ch-fromhost000077500000000000000000000316431417231051300170320ustar00rootroot00000000000000#!/bin/sh # The basic algorithm here is that we build up a list of file # source:destination pairs separated by newlines, then walk through them and # copy them into the image. We also maintain a list of directories to create # and a list of file globs to remove. # # The colon separator to avoid the difficulty of iterating through a sequence # of pairs with no arrays or structures in POSIX sh. We could avoid it by # taking action immediately upon encountering each file in the argument list, # but that would (a) yield a half-injected image for basic errors like # misspellings on the command line and (b) would require the image to be first # on the command line, which seems awkward. # # The newline separator is for the same reason and also because it's # convenient for input from --cmd and --file. # # Note on looping through the newlines in a variable: The approach in this # script is to set IFS to newline, loop, then restore. This is awkward but # seemed the least bad. Alternatives include: # # 1. Piping echo into "while read -r": This executes the while in a # subshell, so variables don't stick. # # 2. Here document used as input, e.g.: # # while IFS= read -r FILE; do # ... # done <&2 fi } ensure_nonempty () { [ "$2" ] || fatal "$1 must not be empty" } fatal () { printf 'ch-fromhost: %s\n' "$1" 1>&2 exit 1 } info () { printf 'ch-fromhost: %s\n' "$1" 1>&2 } is_bin () { case $1 in */bin*|*/sbin*) return 0 ;; *) return 1 esac } is_so () { case $1 in */lib*) return 0 ;; *.so) return 0 ;; *) return 1 esac } queue_files () { old_ifs="$IFS" IFS="$newline" d="${dest:-$2}" for f in $1; do case $f in *:*) fatal "paths can't contain colon: ${f}" ;; esac if is_so "$f"; then debug "found shared library: ${f}" lib_found=yes fi # This adds a delimiter only for the second and subsequent files. # https://chris-lamb.co.uk/posts/joining-strings-in-posix-shell # # If destination empty, we'll infer it later. inject_files="${inject_files:+$inject_files$newline}$f:$d" done IFS="$old_ifs" } queue_mkdir () { [ "$1" ] inject_mkdirs="${inject_mkdirs:+$inject_mkdirs$newline}$1" } queue_unlink () { [ "$1" ] inject_unlinks="${inject_unlinks:+$inject_unlinks$newline}$1" } parse_basic_args "$@" while [ $# -gt 0 ]; do opt=$1; shift case $opt in -c|--cmd) ensure_nonempty --cmd "$1" out=$($1) || fatal "command failed: $1" queue_files "$out" shift ;; --cray-mpi) # Can't act right away because we need the image path. cray_mpi=yes lib_found=yes ;; -d|--dest) ensure_nonempty --dest "$1" dest=$1 shift ;; -f|--file) ensure_nonempty --file "$1" out=$(cat "$1") || fatal "cannot read file: ${1}" queue_files "$out" shift ;; --lib-path) # Note: If this is specified along with one of the file # specification options, all the file gathering and checking work # will happen, but it will be discarded. lib_found=yes lib_dest_print=yes ;; --no-ldconfig) no_ldconfig=yes ;; --nvidia) out=$(nvidia-container-cli list --binaries --libraries) \ || fatal "nvidia-container-cli failed; does this host have GPUs?" queue_files "$out" ;; -p|--path) ensure_nonempty --path "$1" queue_files "$1" shift ;; -v|--verbose) verbose=yes ;; -*) info "invalid option: ${opt}" usage ;; *) ensure_nonempty "image path" "${opt}" [ -z "$image" ] || fatal "duplicate image: ${opt}" [ -d "$opt" ] || fatal "image not a directory: ${opt}" image="$opt" ;; esac done [ "$image" ] || fatal "no image specified" if [ $cray_mpi ]; then sentinel=/etc/opt/cray/release/cle-release [ -f $sentinel ] || fatal "not found: ${sentinel}: are you on a Cray?" mpi_version=$("${ch_bin}/ch-run" "$image" -- mpirun --version || true) case $mpi_version in *mpich*) cray_mpich=yes ;; *'Open MPI'*) cray_openmpi=yes ;; *) fatal "can't find MPI in image" ;; esac fi if [ $lib_found ]; then # We want to put the libraries in the first directory that ldconfig # searches, so that we can override (or overwrite) any of the same library # that may already be in the image. debug "asking ldconfig for shared library destination" # "ldconfig -Nv" gives some pointless warnings on stderr even if # successful; we don't want to show those to users. However, we don't want # to simply pipe stderr to /dev/null because this hides real errors. Thus, # use the following abomination to pipe stdout and stderr to *separate # grep commands*. See: https://stackoverflow.com/a/31151808 lib_dest=$( { "${ch_bin}/ch-run" "$image" -- /sbin/ldconfig -Nv \ 2>&1 1>&3 3>&- | grep -Ev '(^|dynamic linker, ignoring|given more than once)$' ; } \ 3>&1 1>&2 | grep -E '^/' | cut -d: -f1 | head -1 ) [ -n "$lib_dest" ] || fatal 'empty path from ldconfig' [ -z "${lib_dest%%/*}" ] || fatal "bad path from ldconfig: ${lib_dest}" debug "shared library destination: ${lib_dest}" fi if [ $lib_dest_print ]; then echo "$lib_dest" exit 0 fi if [ $cray_mpich ]; then # Remove open source libmpi.so. # # FIXME: These versions are specific to MPICH 3.2.1. I haven't figured out # how to use glob patterns here (they don't get expanded when I tried # basic things). queue_unlink "$lib_dest/libmpi.so" queue_unlink "$lib_dest/libmpi.so.12" queue_unlink "$lib_dest/libmpi.so.12.1.1" # Directory containing Cray's libmpi.so.12. # shellcheck disable=SC2016 [ "$CRAY_MPICH_DIR" ] \ || fatal '$CRAY_MPICH_DIR not set; is module cray-mpich-abi loaded?' cray_libmpi=$CRAY_MPICH_DIR/lib/libmpi.so.12 [ -f "$cray_libmpi" ] \ || fatal "not found: ${cray_libmpi}; is module cray-mpich-abi loaded?" # Note: Most or all of these filenames are symlinks, and the copy will # convert them to normal files (with the same content). In the # documentation, we say not to do that. However, it seems to work, it's # simpler than resolving them, and we apply greater abuse to libmpi.so.12 # below. # Cray libmpi.so.12. queue_files "$cray_libmpi" # Linked dependencies. queue_files "$( ldd "$cray_libmpi" \ | grep -F /opt \ | sed -E 's/^.+ => (.+) \(0x.+\)$/\1/')" # dlopen(3)'ed dependencies. I don't know how to not hard-code these. queue_files /opt/cray/alps/default/lib64/libalpsutil.so.0.0.0 queue_files /opt/cray/alps/default/lib64/libalpslli.so.0.0.0 queue_files /opt/cray/wlm_detect/default/lib64/libwlm_detect.so.0.0.0 #queue_files /opt/cray/alps/default/lib64/libalps.so.0.0.0 fi if [ $cray_openmpi ]; then # Both MPI_ROOT and MPIHOME are the base of the OpenMPI install tree. # We use MPIHOME because some users unset MPI_ROOT. # # Note also that the OpenMPI module name is not standardized. [ "$MPIHOME" ] \ || fatal "MPIHOME not set; is OpenMPI module loaded?" # Inject libmpi from the host host_libmpi=${MPIHOME}/lib/libmpi.so [ -f "$host_libmpi" ] \ || fatal "not found: ${host_libmpi}; is OpenMPI module loaded?" queue_files "$host_libmpi" queue_files "$( ldd "$host_libmpi" \ | grep -E "/usr|/opt" \ | sed -E 's/^.+ => (.+) \(0x.+\)$/\1/')" # Remove core MPI libraries from image. This works around ParaView # dlopen(3)-related errors that we don't understand. image_mpilibs=$( "${ch_bin}/ch-run" "$image" -- /sbin/ldconfig -p \ | sed -nE 's@^.* => (/.*/(libmpi|libopen-rte|libopen-pal)\.so([0-9.]+)?)$@\1@p') echo "$image_mpilibs" | grep -q "libmpi" \ || fatal "can't find libmpi.so* in image" for f in $image_mpilibs; do queue_unlink "$f" queue_unlink "$("${ch_bin}/ch-run" "$image" -- readlink -f "$f")" done fi if [ $cray_mpi ]; then # ALPS libraries require the contents of this directory to be present at # the same path as the host. Create the mount point here, then ch-run # bind-mounts it later. queue_mkdir /var/opt/cray/alps/spool # libwlm_detect.so requires this file to be present. queue_mkdir /etc/opt/cray/wlm_detect queue_files /etc/opt/cray/wlm_detect/active_wlm /etc/opt/cray/wlm_detect # uGNI needs a pile of hugetlbfs filesystems at paths that are arbitrary # but in a specific order in /proc/mounts. ch-run bind-mounts here later. queue_mkdir /var/opt/cray/hugetlbfs fi [ "$inject_files" ] || fatal "empty file list" debug "injecting into image: ${image}" old_ifs="$IFS" IFS="$newline" for u in $inject_unlinks; do debug " rm -f ${image}${u}" rm -f "${image}${u}" done for d in $inject_mkdirs; do debug " mkdir -p ${image}${d}" mkdir -p "${image}${d}" done for file in $inject_files; do f="${file%%:*}" d="${file#*:}" infer= if is_bin "$f" && [ -z "$d" ]; then d=/usr/bin infer=" (inferred)" elif is_so "$f" && [ -z "$d" ]; then d=$lib_dest infer=" (inferred)" fi debug " ${f} -> ${d}${infer}" [ "$d" ] || fatal "no destination for: ${f}" [ -z "${d%%/*}" ] || fatal "not an absolute path: ${d}" [ -d "${image}${d}" ] || fatal "not a directory: ${image}${d}" if [ ! -w "${image}${d}" ]; then # Some images unpack with unwriteable directories; fix. This seems # like a bit of a kludge to me, so I'd like to remove this special # case in the future if possible. (#323) info "${image}${d} not writeable; fixing" chmod u+w "${image}${d}" || fatal "can't chmod u+w: ${image}${d}" fi cp --dereference --preserve=all "$f" "${image}${d}" \ || fatal "cannot inject: ${f}" done IFS="$old_ifs" if [ $cray_mpich ]; then # Restore libmpi.so symlink (it's part of several chains). debug " ln -s libmpi.so.12 ${image}${lib_dest}/libmpi.so" ln -s libmpi.so.12 "${image}${lib_dest}/libmpi.so" # Patch libmpi.so.12 so its soname is "libmpi.so.12" instead of e.g. # "libmpich_gnu_51.so.3". Otherwise, the application won't link without # LD_LIBRARY_PATH, and LD_LIBRARY_PATH is to be avoided. # # Note: This currently requires our patched patchelf (issue #256). debug "fixing soname on libmpi.so.12" "${ch_bin}/ch-run" -w "$image" -- \ patchelf --set-soname libmpi.so.12 "$lib_dest/libmpi.so.12" fi if [ $lib_found ] && [ -z "$no_ldconfig" ]; then debug "running ldconfig" "${ch_bin}/ch-run" -w "$image" -- /sbin/ldconfig else debug "not running ldconfig" fi charliecloud-0.26/bin/ch-image.py.in000066400000000000000000000240141417231051300172760ustar00rootroot00000000000000#!%PYTHON_SHEBANG% import argparse import inspect import os.path import sys sys.path.insert(0, ( os.path.dirname(os.path.abspath(__file__)) + "/../lib/charliecloud")) import charliecloud as ch import build import misc import pull import push ## Constants ## # FIXME: It's currently easy to get the ch-run path from another script, but # hard from something in lib. So, we set it here for now. ch.CH_BIN = os.path.dirname(os.path.abspath( inspect.getframeinfo(inspect.currentframe()).filename)) ch.CH_RUN = ch.CH_BIN + "/ch-run" ## Main ## def main(): if (not os.path.exists(ch.CH_RUN)): ch.depfails.append(("missing", ch.CH_RUN)) ap = argparse.ArgumentParser(formatter_class=ch.HelpFormatter, description="Build and manage images; completely unprivileged.", epilog="""Storage directory is used for caching and temporary images. Location: first defined of --storage, $CH_IMAGE_STORAGE, and %s.""" % ch.Storage.root_default()) ap._optionals.title = "options" # https://stackoverflow.com/a/16981688 sps = ap.add_subparsers(title="subcommands", metavar="CMD") # Common options. # # --dependencies (and --help and --version) are options rather than # subcommands for consistency with other commands. # # These are also accepted *after* the subcommand, as it makes wrapping # ch-image easier and possibly improve the UX. There are multiple ways to # do this, though no tidy ones unfortunately. Here, we build up a # dictionary of options we want, and pass it to both main and subcommand # parsers; this works because both go into the same Namespace object. There # are two quirks to be aware of: # # 1. We omit the common options from subcommand --help for clarity and # because before the subcommand is preferred. # # 2. We suppress defaults in the subcommand [1]. Without this, the # subcommand option value wins even it it's the default. :P Currently, # if specified in both places, the subcommand value wins and the # before value is not considered at all, e.g. "ch-image -vv foo -v" # gives verbosity 1, not 3. This oddity seemed acceptable. # # Alternate approaches include: # # * Set the main parser as the "parent" of the subcommand parser [2]. # This may be the documented approach? However, it adds all the # subcommands to the subparser, which we don't want. A workaround would # be to create a *third* parser that's the parent of both the main and # subcommand parsers, but that seems like too much indirection to me. # # * A two-stage parse (parse_known_args(), then parse_args() to have the # main parser look again) works [3], but is complicated and has some # odd side effects e.g. multiple subcommands will be accepted. # # [1]: https://bugs.python.org/issue9351#msg373665 # [2]: https://docs.python.org/3/library/argparse.html#parents # [3]: https://stackoverflow.com/a/54936198 common_opts = \ [[ ["-a", "--arch"], { "metavar": "ARCH", "default": "host", "help": "architecture for image registries (default: host)"}], [ ["--dependencies"], { "action": misc.Dependencies, "help": "print any missing dependencies and exit" }], [ ["--no-cache"], { "action": "store_true", "help": "download everything needed, ignoring the cache" }], [ ["--password-many"], { "action": "store_true", "help": "re-prompt each time a registry password is needed" }], [ ["-s", "--storage"], { "metavar": "DIR", "help": "set builder internal storage directory to DIR" }], [ ["--tls-no-verify"], { "action": "store_true", "help": "don't verify registry certificates (dangerous!)" }], [ ["-v", "--verbose"], { "action": "count", "default": 0, "help": "print extra chatter (can be repeated)" } ], [ ["--version"], { "action": misc.Version, "help": "print version and exit" } ]] # Most, but not all, subcommands need to check dependencies before doing # anything (the exceptions being basic information commands like # storage-path). Similarly, only some need to initialize the storage # directory. These dictionaries map the dispatch function to a boolean # value saying whether to do those things. dependencies_check = dict() storage_init = dict() # Helper function to set up a subparser. The star forces the latter two # arguments to be called by keyword, for clarity. def add_opts(p, dispatch, *, deps_check, stog_init): assert (not stog_init or deps_check) # can't init storage w/o deps sp.set_defaults(func=dispatch) dependencies_check[dispatch] = deps_check storage_init[dispatch] = stog_init for (args, kwargs) in common_opts: p.add_argument(*args, **{ **kwargs, "default": argparse.SUPPRESS, "help": argparse.SUPPRESS }) # main parser for (args, kwargs) in common_opts: ap.add_argument(*args, **kwargs) # build help="build image from Dockerfile" sp = sps.add_parser("build", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, build.main, deps_check=True, stog_init=True) sp.add_argument("-b", "--bind", metavar="SRC[:DST]", action="append", default=[], help="mount SRC at guest DST (default: same as SRC)") sp.add_argument("--build-arg", metavar="ARG[=VAL]", action="append", default=[], help="set build-time variable ARG to VAL, or $ARG if no VAL") sp.add_argument("-f", "--file", metavar="DOCKERFILE", help="Dockerfile to use (default: CONTEXT/Dockerfile)") sp.add_argument("--force", action="store_true", help="inject unprivileged build workarounds") sp.add_argument("-n", "--dry-run", action="store_true", help="don't execute instructions") sp.add_argument("--no-force-detect", action="store_true", help="don't try to detect if --force workarounds would work") sp.add_argument("--parse-only", action="store_true", help="stop after parsing the Dockerfile") sp.add_argument("-t", "--tag", metavar="TAG", help="name (tag) of image to create (default: inferred)") sp.add_argument("context", metavar="CONTEXT", help="context directory") # delete help="delete image from internal storage" sp = sps.add_parser("delete", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, misc.delete, deps_check=True, stog_init=True) sp.add_argument("image_ref", metavar="IMAGE_REF", help="image to delete") # import help="copy external image into storage" sp = sps.add_parser("import", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, misc.import_, deps_check=True, stog_init=True) sp.add_argument("path", metavar="PATH", help="directory or tarball to import") sp.add_argument("image_ref", metavar="IMAGE_REF", help="destination image name (tag)") # list help="print information about image(s)" sp = sps.add_parser("list", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, misc.list_, deps_check=True, stog_init=True) sp.add_argument("image_ref", metavar="IMAGE_REF", nargs="?", help="print details of this image only") # pull help="pull image from remote repository to local filesystem" sp = sps.add_parser("pull", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, pull.main, deps_check=True, stog_init=True) sp.add_argument("--last-layer", metavar="N", type=int, help="stop after unpacking N layers") sp.add_argument("--parse-only", action="store_true", help="stop after parsing the image reference") sp.add_argument("image_ref", metavar="IMAGE_REF", help="image reference") sp.add_argument("image_dir", metavar="IMAGE_DIR", nargs="?", help="unpacked image path (default: opaque path in storage dir)") # push help="push image from local filesystem to remote repository" sp = sps.add_parser("push", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, push.main, deps_check=True, stog_init=True) sp.add_argument("source_ref", metavar="IMAGE_REF", help="image to push") sp.add_argument("--image", metavar="DIR", help="path to unpacked image (default: opaque path in storage dir)") sp.add_argument("dest_ref", metavar="DEST_REF", nargs="?", help="destination image reference (default: IMAGE_REF)") # python-path help="print path to python interpreter in use" sp = sps.add_parser("python-path", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, misc.python_path, deps_check=False, stog_init=False) # reset help="delete everything in ch-image builder storage" sp = sps.add_parser("reset", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, misc.reset, deps_check=True, stog_init=False) # storage-path help="print storage directory path" sp = sps.add_parser("storage-path", help=help, description=help, formatter_class=ch.HelpFormatter) add_opts(sp, misc.storage_path, deps_check=False, stog_init=False) # Parse it up! if (len(sys.argv) < 2): ap.print_help(file=sys.stderr) sys.exit(1) cli = ap.parse_args() ch.init(cli) # Dispatch. if (dependencies_check[cli.func]): ch.dependencies_check() if (storage_init[cli.func]): ch.storage.init() cli.func(cli) ## Bootstrap ## if (__name__ == "__main__"): main() charliecloud-0.26/bin/ch-pull2dir000077500000000000000000000011461417231051300167210ustar00rootroot00000000000000#!/bin/sh lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" set -e # shellcheck disable=SC2034 usage=$(cat << EOF Pull image from a Docker Hub and unpack into directory. Usage: $ $(basename "$0") IMAGE DEST You must have sufficient privilege (via sudo) to run Docker commands. ${deprecated_convert} EOF ) parse_basic_args "$@" if [ "$#" -ne 2 ]; then usage fi image=$1 dest=$2 "${ch_bin}/ch-pull2tar" "$image" "$dest" image=$(echo "$image" | sed 's/\//\./g') "${ch_bin}/ch-tar2dir" "${dest}/${image}.tar.gz" "$dest" rm -v "${dest}/${image}.tar.gz" deprecated_convert_warn charliecloud-0.26/bin/ch-pull2tar000077500000000000000000000007751417231051300167400ustar00rootroot00000000000000#!/bin/sh lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" set -e # shellcheck disable=SC2034 usage=$( cat << EOF Pull image from a Docker Hub and flatten into tarball. Usage: $ $(basename "$0") IMAGE DEST You must have sufficient privilege (via sudo) to run Docker commands. ${deprecated_convert} EOF ) parse_basic_args "$@" if [ "$#" -ne 2 ]; then usage fi image=$1 dest=$2 docker_ pull "$image" "${ch_bin}/ch-builder2tar" "$image" "$dest" deprecated_convert_warn charliecloud-0.26/bin/ch-run-oci.py.in000066400000000000000000000243421417231051300175740ustar00rootroot00000000000000#!%PYTHON_SHEBANG% import argparse import distutils.version import inspect import json import os import re import signal import subprocess import sys import time import types sys.path.insert(0, ( os.path.dirname(os.path.abspath(__file__)) + "/../lib/charliecloud")) import charliecloud as ch import misc BUNDLE_PREFIX = ["/tmp", "/var/tmp"] CH_BIN = os.path.dirname(os.path.abspath( inspect.getframeinfo(inspect.currentframe()).filename)) OCI_VERSION_MIN = "1.0.1-dev" # inclusive OCI_VERSION_MAX = "1.0.999" # inclusive args = None # CLI Namespace state = None # state object def main(): global args, state args = args_parse() ch.VERBOSE("--- starting ------------------------------------") ch.VERBOSE("args: %s" % sys.argv) ch.VERBOSE("environment: %s" % { k: v for (k, v) in os.environ.items() if k.startswith("CH_RUN_OCI_") }) ch.VERBOSE("CLI: %s" % args) if (args.op.__name__ == "op_" + os.getenv("CH_RUN_OCI_HANG", default="")): ch.VERBOSE("hanging before %s per CH_RUN_OCI_HANG" % args.op.__name__) sleep_forever() assert False, "unreachable code reached" state = state_load() args.op() ch.VERBOSE("done") def args_parse(): ap = argparse.ArgumentParser(description='OCI wrapper for "ch-run".') ap.add_argument("-v", "--verbose", action="count", default=0, help="print extra chatter (can be repeated)") ap.add_argument("--version", action=misc.Version, help="print version and exit") sps = ap.add_subparsers() sp = sps.add_parser("create") sp.set_defaults(op=op_create) sp.add_argument("--bundle", required=True, metavar="DIR") sp.add_argument("--console-socket", metavar="PATH") sp.add_argument("--pid-file", required=True, metavar="FILE") sp.add_argument("--no-new-keyring", action="store_true") sp.add_argument("cid", metavar="CONTAINER_ID") sp = sps.add_parser("delete") sp.set_defaults(op=op_delete) sp.add_argument("cid", metavar="CONTAINER_ID") sp = sps.add_parser("kill") sp.set_defaults(op=op_kill) sp.add_argument("cid", metavar="CONTAINER_ID") sp.add_argument("signal", metavar="SIGNAL") sp = sps.add_parser("start") sp.set_defaults(op=op_start) sp.add_argument("cid", metavar="CONTAINER_ID") sp = sps.add_parser("state") sp.set_defaults(op=op_state) sp.add_argument("cid", metavar="CONTAINER_ID") args_ = ap.parse_args() args_.arch = "yolo" args_.password_many = False args_.storage = None args_.tls_no_verify = False ch.init(args_) if len(sys.argv) < 2: ap.print_help(file=sys.stderr) sys.exit(1) bundle_ = bundle_from_cid(args_.cid) if ("bundle" in args_ and args_.bundle != bundle_): ch.FATAL("bundle argument \"%s\" differs from inferred bundle \"%s\"" % (args_.bundle, bundle_)) args_.bundle = bundle_ pid_file_ = pid_file_from_bundle(args_.bundle) if ("pid_file" in args_ and args_.pid_file != pid_file_): ch.FATAL("pid_file argument \"%s\" differs from inferred \"%s\"" % (args_.pid_file, pid_file_)) args_.pid_file = pid_file_ return args_ def bundle_from_cid(cid): m = re.search(r"^buildah-buildah(.+)$", cid) if (m is None): ch.FATAL("cannot parse container ID: %s" % cid) paths = [] for p in BUNDLE_PREFIX: paths.append("%s/buildah%s" % (p, m.group(1))) if (os.path.exists(paths[-1])): return paths[-1] ch.FATAL("can't infer bundle path; none of these exist: %s" % " ".join(paths)) def debug_lines(s): for line in s.splitlines(): ch.VERBOSE(line) def image_fixup(path): ch.VERBOSE("fixing up image: %s" % path) # Metadata directory. ch.mkdirs("%s/ch/bin" % path) # Mount points. ch.file_ensure_exists("%s/etc/hosts" % path) ch.file_ensure_exists("%s/etc/resolv.conf" % path) # /etc/{passwd,group} ch.file_write("%s/etc/passwd" % path, """\ root:x:0:0:root:/root:/bin/sh nobody:x:65534:65534:nobody:/:/bin/false """) ch.file_write("%s/etc/group" % path, """\ root:x:0: nogroup:x:65534: """) # Kludges to work around expectations of real root, not UID 0 in a # unprivileged user namespace. See also the default environment. # # Debian apt/dpkg/etc. want to chown(1), chgrp(1), etc. in various ways. ch.symlink("/bin/true", "%s/ch/bin/chgrp" % path) ch.symlink("/bin/true", "%s/ch/bin/dpkg-statoverride" % path) # Debian package management also wants to mess around with users. This is # causing problems with /etc/gshadow and other files. These links don't # work if they are in /ch/bin, I think because dpkg is resetting the path? # For now we'll do this, but I don't like it. fakeroot(1) also solves the # problem (see issue #472). ch.symlink("/bin/true", "%s/bin/chown" % path, clobber=True) ch.symlink("/bin/true", "%s/usr/sbin/groupadd" % path, clobber=True) ch.symlink("/bin/true", "%s/usr/sbin/useradd" % path, clobber=True) ch.symlink("/bin/true", "%s/usr/sbin/usermod" % path, clobber=True) ch.symlink("/bin/true", "%s/usr/bin/chage" % path, clobber=True) def op_create(): # Validate arguments. if (args.console_socket): ch.FATAL("--console-socket not supported") # Start dummy supervisor. if (state.pid is not None): ch.FATAL("container already created") pid = ch.ossafe(os.fork, "can't fork") if (pid == 0): # Child; the only reason to exist is so Buildah sees a process when it # looks for one. Sleep until told to exit. # # Note: I looked into changing the process title and this turns out to # be remarkably hairy unless you use a 3rd-party module. def exit_(sig, frame): ch.VERBOSE("dummy supervisor: done") sys.exit(0) signal.signal(signal.SIGTERM, exit_) ch.VERBOSE("dummy supervisor: starting") sleep_forever() else: state.pid = pid with ch.open_(args.pid_file, "wt") as fp: print("%d" % pid, file=fp) ch.VERBOSE("dummy supervisor started with pid %d" % pid) def op_delete(): ch.VERBOSE("delete operation is a no-op") def op_kill(): ch.VERBOSE("kill operation is a no-op") def op_start(): # Note: Contrary to the implication of its name, the "start" operation # blocks until the user command is done. c = state.config # Unsupported features to barf about. if (state.pid is None): ch.FATAL("can't start: not created yet") if (c["process"].get("terminal", False)): ch.FATAL("not supported: pseudoterminals") if ("annotations" in c): ch.FATAL("not supported: annotations") if ("hooks" in c): ch.FATAL("not supported: hooks") for d in c["linux"]["namespaces"]: if ("path" in d): ch.FATAL("not supported: joining existing namespaces") if ("intelRdt" in c["linux"]): ch.FATAL("not supported: Intel RDT") # Environment file. This is a list of lines, not a dict. # # GNU tar, when it thinks it's running as root, tries to chown(2) and # chgrp(2) files to whatever's in the tarball. --no-same-owner avoids this. with ch.open_(args.bundle + "/environment", "wt") as fp: for line in ( c["process"]["env"] # from Dockerfile + [ "TAR_OPTIONS=--no-same-owner" ]): # ours line = re.sub(r"^(PATH=)", "\\1/ch/bin:", line) ch.VERBOSE("env: %s" % line) print(line, file=fp) # Build command line. cmd = CH_BIN + "/ch-run" ca = [cmd, "--cd", c["process"]["cwd"], "--no-home", "--no-passwd", "--gid", str(c["process"]["user"]["gid"]), "--uid", str(c["process"]["user"]["uid"]), "--unset-env=*", "--set-env=%s/environment" % args.bundle] if (not c["root"].get("readonly", False)): ca.append("--write") ca += [c["root"]["path"], "--"] ca += c["process"]["args"] # Fix up root filesystem. image_fixup(args.bundle + "/mnt/rootfs") # Execute user command. We can't execv(2) because we have to do cleanup # after it exits. ch.file_ensure_exists(args.bundle + "/user_started") ch.VERBOSE("user command: %s" % ca) # Standard output disappears, so send stdout to stderr. cp = subprocess.run(ca, stdout=2) ch.file_ensure_exists(args.bundle + "/user_done") ch.VERBOSE("user command done") # Stop dummy supervisor. if (state.pid is None): ch.FATAL("no dummy supervisor PID found") try: os.kill(state.pid, signal.SIGTERM) state.pid = None os.unlink(args.pid_file) except OSError as x: ch.FATAL("can't kill PID %d: %s (%d)" % (state.pid, x.strerror, x.errno)) # Puke if user command failed. if (cp.returncode != 0): ch.FATAL("user command failed: %d" % cp.returncode) def op_state(): def status(): if (state.user_command_started): if (state.user_command_done): return "stopped" else: return "running" if (state.pid is None): return "creating" else: return "created" st = { "ociVersion": OCI_VERSION_MAX, "id": args.cid, "status": status(), "bundle": args.bundle } if (state.pid is not None): st["pid"] = state.pid out = json.dumps(st, indent=2) debug_lines(out) print(out) def sleep_forever(): while True: time.sleep(60) # can't provide infinity here def pid_file_from_bundle(bundle): return bundle + "/pid" def state_load(): st = types.SimpleNamespace() st.config = json.load(ch.open_(args.bundle + "/config.json", "rt")) #debug_lines(json.dumps(st.config, indent=2)) v_min = distutils.version.LooseVersion(OCI_VERSION_MIN) v_actual = distutils.version.LooseVersion(st.config["ociVersion"]) v_max = distutils.version.LooseVersion(OCI_VERSION_MAX) if (not v_min <= v_actual <= v_max): ch.FATAL("unsupported OCI version: %s" % st.config["ociVersion"]) try: fp = open(args.pid_file, "rt") st.pid = int(ch.ossafe(fp.read, "can't read: %s" % args.pid_file)) ch.VERBOSE("found supervisor pid: %d" % st.pid) except FileNotFoundError: st.pid = None ch.VERBOSE("no supervisor pid found") st.user_command_started = os.path.isfile(args.bundle + "/user_started") st.user_command_done = os.path.isfile(args.bundle + "/user_done") return st if (__name__ == "__main__"): main() charliecloud-0.26/bin/ch-run.c000066400000000000000000000353011417231051300162060ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. */ /* Note: This program does not bother to free memory allocations, since they are modest and the program is short-lived. */ #define _GNU_SOURCE #include #include #include #include #include #include "config.h" #include "ch_core.h" #include "ch_misc.h" /** Constants and macros **/ /* Environment variables used by --join parameters. */ char *JOIN_CT_ENV[] = { "OMPI_COMM_WORLD_LOCAL_SIZE", "SLURM_STEP_TASKS_PER_NODE", "SLURM_CPUS_ON_NODE", NULL }; char *JOIN_TAG_ENV[] = { "SLURM_STEP_ID", NULL }; /** Command line options **/ const char usage[] = "\ \n\ Run a command in a Charliecloud container.\n\ \v\ Example:\n\ \n\ $ ch-run /data/foo -- echo hello\n\ hello\n\ \n\ You cannot use this program to actually change your UID.\n"; const char args_doc[] = "IMAGE -- CMD [ARG...]"; const struct argp_option options[] = { { "bind", 'b', "SRC[:DST]", 0, "mount SRC at guest DST (default: same as SRC)"}, { "cd", 'c', "DIR", 0, "initial working directory in container"}, { "ch-ssh", -8, 0, 0, "bind ch-ssh into image"}, { "env-no-expand", -10, 0, 0, "don't expand $ in --set-env input"}, { "gid", 'g', "GID", 0, "run as GID within container" }, { "join", 'j', 0, 0, "use same container as peer ch-run" }, { "join-pid", -5, "PID", 0, "join a namespace using a PID" }, { "join-ct", -3, "N", 0, "number of join peers (implies --join)" }, { "join-tag", -4, "TAG", 0, "label for peer group (implies --join)" }, { "mount", 'm', "DIR", 0, "SquashFS mount point"}, { "no-home", -2, 0, 0, "don't bind-mount your home directory"}, { "no-passwd", -9, 0, 0, "don't bind-mount /etc/{passwd,group}"}, { "private-tmp", 't', 0, 0, "use container-private /tmp" }, { "set-env", -6, "ARG", OPTION_ARG_OPTIONAL, "set environment variables per ARG"}, { "uid", 'u', "UID", 0, "run as UID within container" }, { "unset-env", -7, "GLOB", 0, "unset environment variable(s)" }, { "verbose", 'v', 0, 0, "be more verbose (debug if repeated)" }, { "version", 'V', 0, 0, "print version and exit" }, { "write", 'w', 0, 0, "mount image read-write"}, { 0 } }; /** Types **/ struct args { struct container c; struct env_delta *env_deltas; char *initial_dir; }; /** Function prototypes **/ void fix_environment(struct args *args); bool get_first_env(char **array, char **name, char **value); int join_ct(int cli_ct); char *join_tag(char *cli_tag); int parse_int(char *s, bool extra_ok, char *error_tag); static error_t parse_opt(int key, char *arg, struct argp_state *state); void privs_verify_invoking(); /** Global variables **/ const struct argp argp = { options, parse_opt, args_doc, usage }; extern char **environ; // see environ(7) /** Main **/ int main(int argc, char *argv[]) { bool argp_help_fmt_set; struct args args; int arg_next; char ** c_argv; privs_verify_invoking(); #ifdef ENABLE_SYSLOG syslog(LOG_USER|LOG_INFO, "uid=%u args=%d: %s", getuid(), argc, argv_to_string(argv)); #endif verbose = 1; // in charliecloud.h args = (struct args){ .c = (struct container){ .binds = list_new(sizeof(struct bind), 0), .ch_ssh = false, .container_gid = getegid(), .container_uid = geteuid(), .env_expand = true, .img_path = NULL, .newroot = NULL, .join = false, .join_ct = 0, .join_pid = 0, .join_tag = NULL, .private_home = false, .private_passwd = false, .private_tmp = false, .old_home = getenv("HOME"), .type = IMG_NONE, .writable = false }, .env_deltas = list_new(sizeof(struct env_delta), 0), .initial_dir = NULL }; /* I couldn't find a way to set argp help defaults other than this environment variable. Kludge sets/unsets only if not already set. */ if (getenv("ARGP_HELP_FMT")) argp_help_fmt_set = true; else { argp_help_fmt_set = false; Z_ (setenv("ARGP_HELP_FMT", "opt-doc-col=25,no-dup-args-note", 0)); } Z_ (argp_parse(&argp, argc, argv, 0, &arg_next, &args)); if (!argp_help_fmt_set) Z_ (unsetenv("ARGP_HELP_FMT")); Te (arg_next < argc - 1, "NEWROOT and/or CMD not specified"); args.c.img_path = argv[arg_next++]; args.c.type = img_type_get(args.c.img_path); switch (args.c.type) { case IMG_DIRECTORY: if (args.c.newroot != NULL) // --mount was set WARNING("--mount invalid with directory image, ignoring"); args.c.newroot = realpath(args.c.img_path, NULL); Tf (args.c.newroot != NULL, "can't find image: %s", args.c.img_path); break; case IMG_SQUASH: #ifndef HAVE_LIBSQUASHFUSE FATAL("this ch-run does not support internal SquashFS mounts"); #endif break; case IMG_NONE: FATAL("unknown image type: %s", args.c.img_path); break; } if (args.c.join) { args.c.join_ct = join_ct(args.c.join_ct); args.c.join_tag = join_tag(args.c.join_tag); } if (getenv("TMPDIR") != NULL) host_tmp = getenv("TMPDIR"); else host_tmp = "/tmp"; username = getenv("USER"); Te (username != NULL, "$USER not set"); c_argv = list_new(sizeof(char *), argc - arg_next); for (int i = 0; i < argc - arg_next; i++) c_argv[i] = argv[i + arg_next]; INFO("verbosity: %d", verbose); INFO("image: %s", args.c.img_path); INFO("newroot: %s", args.c.newroot); INFO("container uid: %u", args.c.container_uid); INFO("container gid: %u", args.c.container_gid); INFO("join: %d %d %s %d", args.c.join, args.c.join_ct, args.c.join_tag, args.c.join_pid); INFO("private /tmp: %d", args.c.private_tmp); containerize(&args.c); fix_environment(&args); run_user_command(c_argv, args.initial_dir); // should never return exit(EXIT_FAILURE); } /** Supporting functions **/ /* Adjust environment variables. Call once containerized, i.e., already pivoted into new root. */ void fix_environment(struct args *args) { char *old_value, *new_value; // $HOME: Set to /home/$USER unless --no-home specified. if (!args->c.private_home) Z_ (setenv("HOME", cat("/home/", username), 1)); // $PATH: Append /bin if not already present. old_value = getenv("PATH"); if (old_value == NULL) { WARNING("$PATH not set"); } else if ( strstr(old_value, "/bin") != old_value && !strstr(old_value, ":/bin")) { T_ (1 <= asprintf(&new_value, "%s:/bin", old_value)); Z_ (setenv("PATH", new_value, 1)); INFO("new $PATH: %s", new_value); } // $TMPDIR: Unset. Z_ (unsetenv("TMPDIR")); // --set-env and --unset-env. for (size_t i = 0; args->env_deltas[i].action != ENV_END; i++) { struct env_delta ed = args->env_deltas[i]; switch (ed.action) { case ENV_END: Te (false, "unreachable code reached"); break; case ENV_SET_DEFAULT: ed.arg.vars = env_file_read("/ch/environment"); // fall through case ENV_SET_VARS: for (size_t j = 0; ed.arg.vars[j].name != NULL; j++) env_set(ed.arg.vars[j].name, ed.arg.vars[j].value, args->c.env_expand ); break; case ENV_UNSET_GLOB: env_unset(ed.arg.glob); break; } } // $CH_RUNNING is not affected by --unset-env or --set-env. Z_ (setenv("CH_RUNNING", "Weird Al Yankovic", 1)); } /* Find the first environment variable in array that is set; put its name in *name and its value in *value, and return true. If none are set, return false, and *name and *value are undefined. */ bool get_first_env(char **array, char **name, char **value) { for (int i = 0; array[i] != NULL; i++) { *name = array[i]; *value = getenv(*name); if (*value != NULL) return true; } return false; } /* Find an appropriate join count; assumes --join was specified or implied. Exit with error if no valid value is available. */ int join_ct(int cli_ct) { int j = 0; char *ev_name, *ev_value; if (cli_ct != 0) { INFO("join: peer group size from command line"); j = cli_ct; goto end; } if (get_first_env(JOIN_CT_ENV, &ev_name, &ev_value)) { INFO("join: peer group size from %s", ev_name); j = parse_int(ev_value, true, ev_name); goto end; } end: Te(j > 0, "join: no valid peer group size found"); return j; } /* Find an appropriate join tag; assumes --join was specified or implied. Exit with error if no valid value is found. */ char *join_tag(char *cli_tag) { char *tag; char *ev_name, *ev_value; if (cli_tag != NULL) { INFO("join: peer group tag from command line"); tag = cli_tag; goto end; } if (get_first_env(JOIN_TAG_ENV, &ev_name, &ev_value)) { INFO("join: peer group tag from %s", ev_name); tag = ev_value; goto end; } INFO("join: peer group tag from getppid(2)"); T_ (1 <= asprintf(&tag, "%d", getppid())); end: Te(tag[0] != '\0', "join: peer group tag cannot be empty string"); return tag; } /* Parse an integer string arg and return the result. If an error occurs, print a message prefixed by error_tag and exit. If not extra_ok, additional characters remaining after the integer are an error. */ int parse_int(char *s, bool extra_ok, char *error_tag) { char *end; long l; errno = 0; l = strtol(s, &end, 10); Tf (errno == 0, error_tag); Ze (end == s, "%s: no digits found", error_tag); if (!extra_ok) Te (*end == 0, "%s: extra characters after digits", error_tag); Te (l >= INT_MIN && l <= INT_MAX, "%s: out of range", error_tag); return (int)l; } /* Parse one command line option. Called by argp_parse(). */ static error_t parse_opt(int key, char *arg, struct argp_state *state) { struct args *args = state->input; struct env_delta ed; int i; switch (key) { case -10: // --env-no-expand args->c.env_expand = false; break; case -2: // --private-home args->c.private_home = true; break; case -3: // --join-ct args->c.join = true; args->c.join_ct = parse_int(arg, false, "--join-ct"); break; case -4: // --join-tag args->c.join = true; args->c.join_tag = arg; break; case -5: // --join-pid args->c.join_pid = parse_int(arg, false, "--join-pid"); break; case -6: // --set-env if (arg == NULL) ed.action = ENV_SET_DEFAULT; else { ed.action = ENV_SET_VARS; if (strchr(arg, '=') == NULL) ed.arg.vars = env_file_read(arg); else { ed.arg.vars = list_new(sizeof(struct env_var), 1); ed.arg.vars[0] = env_var_parse(arg, NULL, 0); } } list_append((void **)&(args->env_deltas), &ed, sizeof(ed)); break; case -7: // --unset-env Te (strlen(arg) > 0, "--unset-env: GLOB must have non-zero length"); ed.action = ENV_UNSET_GLOB; ed.arg.glob = arg; list_append((void **)&(args->env_deltas), &ed, sizeof(ed)); break;; case -8: // --ch-ssh args->c.ch_ssh = true; break; case -9: // --no-passwd args->c.private_passwd = true; break; case 'c': args->initial_dir = arg; break; case 'b': { char *src, *dst; for (i = 0; args->c.binds[i].src != NULL; i++) // count existing binds ; T_ (args->c.binds = realloc(args->c.binds, (i+2) * sizeof(struct bind))); args->c.binds[i+1].src = NULL; // terminating zero args->c.binds[i].dep = BD_MAKE_DST; // source src = strsep(&arg, ":"); T_ (src != NULL); Te (src[0] != 0, "--bind: no source provided"); args->c.binds[i].src = src; // destination dst = arg ? arg : src; Te (dst[0] != 0, "--bind: no destination provided"); Te (strcmp(dst, "/"), "--bind: destination can't be /"); Te (dst[0] == '/', "--bind: destination must be absolute"); args->c.binds[i].dst = dst; } break; case 'g': i = parse_int(arg, false, "--gid"); Te (i >= 0, "--gid: must be non-negative"); args->c.container_gid = (gid_t) i; break; case 'j': args->c.join = true; break; case 'm': Ze ((arg[0] == '\0'), "mount point can't be empty string"); args->c.newroot = arg; break; case 't': args->c.private_tmp = true; break; case 'u': i = parse_int(arg, false, "--uid"); Te (i >= 0, "--uid: must be non-negative"); args->c.container_uid = (uid_t) i; break; case 'V': version(); exit(EXIT_SUCCESS); break; case 'v': verbose++; Te(verbose <= 4, "--verbose can be specified at most thrice"); break; case 'w': args->c.writable = true; break; case ARGP_KEY_NO_ARGS: argp_state_help(state, stderr, ( ARGP_HELP_SHORT_USAGE | ARGP_HELP_PRE_DOC | ARGP_HELP_LONG | ARGP_HELP_POST_DOC)); exit(EXIT_FAILURE); default: return ARGP_ERR_UNKNOWN; }; return 0; } /* Validate that the UIDs and GIDs are appropriate for program start, and abort if not. Note: If the binary is setuid, then the real UID will be the invoking user and the effective and saved UIDs will be the owner of the binary. Otherwise, all three IDs are that of the invoking user. */ void privs_verify_invoking() { uid_t ruid, euid, suid; gid_t rgid, egid, sgid; Z_ (getresuid(&ruid, &euid, &suid)); Z_ (getresgid(&rgid, &egid, &sgid)); // Calling the program if user is really root is OK. if ( ruid == 0 && euid == 0 && suid == 0 && rgid == 0 && egid == 0 && sgid == 0) return; // Now that we know user isn't root, no GID privilege is allowed. T_ (egid != 0); // no privilege T_ (egid == rgid && egid == sgid); // no setuid or funny business // No UID privilege allowed either. T_ (euid != 0); // no privilege T_ (euid == ruid && euid == suid); // no setuid or funny business } charliecloud-0.26/bin/ch-ssh.c000066400000000000000000000037031417231051300162000ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. */ #define _GNU_SOURCE #include #include #include #include #include #include "config.h" #include "ch_misc.h" const char usage[] = "\ \n\ Usage: CH_RUN_ARGS=\"NEWROOT [ARG...]\" ch-ssh [OPTION...] HOST CMD [ARG...]\n\ \n\ Run a remote command in a Charliecloud container.\n\ \n\ Example:\n\ \n\ $ export CH_RUN_ARGS=/data/foo\n\ $ ch-ssh example.com -- echo hello\n\ hello\n\ \n\ Arguments to ch-run, including the image to activate, are specified in the\n\ CH_RUN_ARGS environment variable. Important caveat: Words in CH_RUN_ARGS are\n\ delimited by spaces only; it is not shell syntax. In particular, quotes and\n\ and backslashes are not interpreted.\n"; #define ARGS_MAX 262143 // assume 2MB buffer and length of each argument >= 7 int main(int argc, char *argv[]) { int i, j; char *ch_run_args; char *args[ARGS_MAX+1]; if (argc == 1) { fprintf(stderr, usage); exit(EXIT_FAILURE); } if (argc >= 2 && strcmp(argv[1], "--help") == 0) { fprintf(stderr, usage); return 0; } if (argc >= 2 && strcmp(argv[1], "--version") == 0) { version(); exit(EXIT_SUCCESS); } memset(args, 0, sizeof(args)); args[0] = "ssh"; // ssh option arguments for (i = 1; i < argc && i < ARGS_MAX && argv[i][0] == '-'; i++) args[i] = argv[i]; // destination host if (i < argc && i < ARGS_MAX) { args[i] = argv[i]; i++; } // insert ch-run command ch_run_args = getenv("CH_RUN_ARGS"); Te (ch_run_args != NULL, "CH_RUN_ARGS not set"); args[i] = "ch-run"; for (j = 1; i + j < ARGS_MAX; j++, ch_run_args = NULL) { args[i+j] = strtok(ch_run_args, " "); if (args[i+j] == NULL) break; } // copy remaining arguments for ( ; i < argc && i + j < ARGS_MAX; i++) args[i+j] = argv[i]; execvp("ssh", args); Tf (0, "can't execute ssh"); } charliecloud-0.26/bin/ch-tar2dir000077500000000000000000000102161417231051300165310ustar00rootroot00000000000000#!/bin/sh set -e lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud . "${lib}/base.sh" # shellcheck disable=SC2034 usage=$(cat <&2 exit 1 fi if [ ! -d "${2}" ]; then echo "can't unpack: ${2} is not a directory" 1>&2 exit 1 fi # Figure out the real tarball name. If the provided $1 already has a tar # extension, just test that name; if not, also append the plausible extensions # and try those too. for ext in '' .tar.gz .tar.xz .tgz .tar; do c=${1}${ext} if [ ! -f "$c" ] || [ ! -r "$c" ]; then echo "can't read: ${c}" 1>&2 case $1 in *.tar.*|*.tgz) break ;; *) continue ;; esac fi tarball=$c if [ -n "$ext" ]; then echo "found: ${tarball}" 1>&2 fi # Infer decompression argument because GNU tar is unable to do so if input # is a pipe, and we want to keep PV. See: # https://www.gnu.org/software/tar/manual/html_section/tar_68.html case $tarball in *.tar) newroot=${2}/$(basename "${tarball%.tar}") decompress= ;; *.tar.gz) newroot=${2}/$(basename "${tarball%.tar.gz}") decompress=z ;; *.tar.xz) newroot=${2}/$(basename "${tarball%.tar.xz}") decompress=J ;; *.tgz) newroot=${2}/$(basename "${tarball%.tgz}") decompress=z ;; *) echo "unknown extension: ${tarball}" 1>&2 exit 1 ;; esac break done if [ -z "$tarball" ]; then echo "no input found" 1>&2 exit 1 fi if [ ! -d "$newroot" ]; then echo "creating new image ${newroot}" else if [ -f "${newroot}/${sentinel}" ] \ && [ -d "${newroot}/bin" ] \ && [ -d "${newroot}/dev" ] \ && [ -d "${newroot}/usr" ]; then echo "replacing existing image ${newroot}" 1>&2 rm -Rf --one-file-system "${newroot}" else echo "${newroot} exists but does not appear to be an image" 1>&2 exit 1 fi fi mkdir "$newroot" # Use a pipe because PV ignores arguments if it's cat rather than PV. # # See FAQ on /dev exclusion. --no-wildcards-match-slash is needed to prevent * # matching multiple directories; the tar default differs from sh behavior. size=$(stat -c%s "$tarball") pv_ -s "$size" < "$tarball" \ | tar x$decompress -C "$newroot" -f - \ --anchored --no-wildcards-match-slash \ --exclude='dev/*' --exclude='*/dev/*' # Make all directories writeable so we can delete image later (hello, Red Hat). find "$newroot" -type d -a ! -perm /200 -exec chmod u+w {} + # If tarball had a single containing directory, move the contents up a level # and remove the containing directory. It is non-trivial in POSIX sh to deal # with hidden files; see https://unix.stackexchange.com/a/6397. files=$(ls -Aq "$newroot") if [ "$(echo "$files" | wc -l)" -eq 1 ]; then ( cd "${newroot}/${files}" for f in * .[!.]* ..?*; do if [ -e "$f" ]; then mv -- "$f" ..; fi done ) rmdir "${newroot}/${files}" fi # Ensure mount points that ch-run needs exist. Do nothing if something already # exists, without dereferencing, in case it's a symlink, which will work for # bind-mount later but won't resolve correctly now outside the container (e.g. # linuxcontainers.org images; issue #1015). # # WARNING: Keep in sync with other shell scripts & Image.unpack_init(). echo 'This directory is a Charliecloud image.' > "${newroot}/${sentinel}" for i in bin dev etc mnt proc usr \ mnt/0 mnt/1 mnt/2 mnt/3 mnt/4 mnt/5 mnt/6 mnt/7 mnt/8 mnt/9; do exist_p "${newroot}/${i}" || mkdir "${newroot}/${i}" done for i in etc/hosts etc/resolv.conf; do exist_p "${newroot}/${i}" || touch "${newroot}/${i}" done echo "${newroot} unpacked ok" deprecated_convert_warn charliecloud-0.26/bin/ch-test000077500000000000000000000732431417231051300161520ustar00rootroot00000000000000#!/bin/bash ### Setup we need right away set -e export LC_ALL=C # predictable sorting ### Functions we need right away fatal () { printf 'error: %s\n' "$1" 1>&2 exit 1 } warning () { printf 'warning: %s\n' "$1" 1>&2 } ### Setup ch_lib=$(cd "$(dirname "$0")" && pwd)/../lib/charliecloud if [[ ! -f ${ch_lib}/base.sh ]]; then fatal "install or build problem: not found: ${ch_lib}/base.sh" fi . "${ch_lib}/base.sh" . "${ch_lib}/contributors.bash" export ch_bin export ch_lib # shellcheck disable=SC2034 usage=$(cat < /dev/null 2>&1; then fatal 'builder: buildah: not installed.' fi bl=$(command -v buildah) bv=$(buildah --version | awk '{print $3}') min='1.11.2' ;; docker) if ! command -v docker > /dev/null 2>&1; then fatal 'builder: docker: not installed' fi bl=$(command -v docker) bv=$(docker_ --version | awk '{print $3}' | sed -e 's/,$//') ;; none) bl='none' bv= ;; *) fatal "builder: $CH_BUILDER: not supported" ;; esac printf 'found: %s %s\n\n' "$bl" "$bv" version_check 'builder' "$min" "$bv" } builder_set () { width=$1 if [[ -n $builder ]]; then export CH_BUILDER=$builder method='command line' elif [[ -n $CH_BUILDER ]]; then method='environment' else builder_choose method='default' fi printf "%-*s %s (%s)\n" "$width" 'builder:' "$CH_BUILDER" "$method" if [[ $CH_BUILDER == ch-image ]]; then vset CH_IMAGE_STORAGE '' "$CH_IMAGE_STORAGE" "/var/tmp/${USER}.ch" \ "$width" 'ch-image storage' fi } # Create CH_TEST_IMGDIR, avoiding #347. dir_img_mk () { dir_img_rm printf "creating %s\n\n" "$CH_TEST_IMGDIR" $ch_mpirun_node mkdir "$CH_TEST_IMGDIR" $ch_mpirun_node touch "$CH_TEST_IMGDIR/WEIRD_AL_YANKOVIC" } dir_img_rm () { dir_rm_safe "$CH_TEST_IMGDIR" } # Remove a filesystem permissions fixture directory. Ensure that the target # directory has exactly the two subdirectories expected first. dir_perm_rm () { if [[ $(find "${1}" -maxdepth 1 -mindepth 1 | wc -l) == 2 \ && -d "${1}/pass" && -d "${1}/nopass" ]]; then echo "removing ${1}" sudo rm -rf --one-file-system "$1" fi } # Remove directory $1 if it's either 1) empty or 2) contains a sentinel file. dir_rm_safe () { if [[ $(find "$1" -maxdepth 1 -mindepth 1 | wc -l) == 0 \ || -e ${1}/WEIRD_AL_YANKOVIC ]]; then echo "removing $1" $ch_mpirun_node rm -rf --one-file-system "$1" else fatal "non-empty and missing sentinel file; manually delete: $1" fi } # The run phase requires artifacts from a successful build phase. Thus, we # check sanity based on the minimal set of artifacts (no builder). dir_tar_check () { printf 'checking %s: ' "$CH_TEST_TARDIR" dir_tar_check_file chtest{.tar.gz,.sqfs} printf 'ok\n\n' } dir_tar_check_file () { local missing for f in "$@"; do if [[ -f ${CH_TEST_TARDIR}/${f} ]]; then return 0 else missing+=("${CH_TEST_TARDIR}/${f}") fi done fatal "phase $phase: missing packed images: ${missing[*]}" } dir_tar_mk () { dir_tar_rm printf "creating %s\n\n" "$CH_TEST_TARDIR" $ch_mpirun_node mkdir "$CH_TEST_TARDIR" $ch_mpirun_node touch "${CH_TEST_TARDIR}/WEIRD_AL_YANKOVIC" } dir_tar_rm () { dir_rm_safe "$CH_TEST_TARDIR" } dir_tmp_rm () { if [[ $TMP_ == "/tmp/ch-test.tmp.${USER}" ]]; then echo "removing $TMP_" rm -rf --one-file-system "$TMP_" fi } dirs_unpriv_rm () { dir_tar_rm dir_img_rm dir_tmp_rm } pack_fmt_set () { width=$1 if command -v mksquashfs > /dev/null 2>&1; then have_mksquashfs=yes else have_mksquashfs= fi if [[ $(ldd "${ch_bin}/ch-run") = *"libsquashfuse"* ]]; then have_libsquashfuse=yes else have_libsquashfuse= fi if [[ -n $pack_fmt ]]; then CH_TEST_PACK_FMT=$pack_fmt method='command line' elif [[ -n $CH_TEST_PACK_FMT ]]; then method='environment' elif [[ -n $have_mksquashfs && -n $have_libsquashfuse ]]; then CH_TEST_PACK_FMT=squash-mount method='default' else CH_TEST_PACK_FMT=tar-unpack method='default' fi case $CH_TEST_PACK_FMT in '🐘') # elephant emoji U+1F418 CH_TEST_PACK_FMT=squash-mount ;; '📠') # fax machine emoji U+1F4E0 CH_TEST_PACK_FMT=tar-unpack ;; '🎃') # jack-o-lantern emoji U+1F383 CH_TEST_PACK_FMT=squash-unpack ;; esac export CH_TEST_PACK_FMT printf "%-*s %s (%s)\n" \ "$width" 'packed image format:' "$CH_TEST_PACK_FMT" "$method" case $CH_TEST_PACK_FMT in squash-mount) if [[ -z $have_mksquashfs ]]; then fatal "format invalid: ${CH_TEST_PACK_FMT}: no mksquashfs" fi if [[ -z $have_libsquashfuse ]]; then fatal "format invalid: ${CH_TEST_PACK_FMT}: ch-run not linked with libsquashfuse" fi ;; tar-unpack) ;; # nothing to check (assume we have tar) squash-unpack) if [[ -z $have_mksquashfs ]]; then fatal "format invalid: ${CH_TEST_PACK_FMT}: no mksquashfs" fi ;; *) fatal "format unknown: ${CH_TEST_PACK_FMT}" ;; esac } pedantry_set () { width=$1 default=no # Default to pedantic on CI or if user is a contributor. if [[ -n $ch_contributor || -n $CI ]]; then default=yes fi vset ch_pedantic "$pedantic" '' $default "$width" 'pedantic mode' if [[ $ch_pedantic == no ]]; then ch_pedantic= # proper boolean fi # The motivation here is that in pedantic mode, we want to run all the # tests we reasonably can. So, if the user *has* sudo, then default --sudo # to yes. What is a little awkward is that "sudo -v" can generate a # password prompt in the middle of the status output. An alternative is # "sudo -nv", which doesn't; drawbacks are that you have to analyze the # output (not exit code) and it generates a failed password log message if # there is not already a sudo session going. if [[ -n $ch_pedantic ]] \ && command -v sudo > /dev/null \ && sudo -v > /dev/null 2>&1; then use_sudo_default=yes else use_sudo_default= fi } pq_missing () { if [[ $phase == all || $phase == build ]]; then local img=$1 local out=$2 local tag tag=$(test_make_auto tag "$img") printf '%s\n' "$out" >> "${CH_TEST_TARDIR}/${tag}.pq_missing" fi } require_unset () { name=$1 value=${!1} if [[ -n $value ]]; then fatal "$name: no multiple assignment (already \"$value\")" fi } scope_check () { case $1 in quick|standard|full) return 0 ;; *) fatal "invalid scope: $1" ;; esac } # Assign scope a sortable opaque integer value. This value is used to help # filter images and tests that are out of scope. scope_to_digit () { case $1 in quick) echo 1 ;; standard) echo 2 ;; full) echo 3 ;; skip*) # skip has the highest value to ensure it is always filtered out echo 4 ;; *) fatal "scope '$scope' invalid" esac } test_build () { echo 'executing build phase tests ...' if [[ ! -f ${TMP_}/build_auto.bats ]]; then fatal "${TMP_}/build_auto.bats not found" fi bats "${TMP_}/build_auto.bats" build/*.bats } test_examples () { printf '\n' if [[ $CH_TEST_SCOPE == quick ]]; then echo "no examples for $CH_TEST_SCOPE scope" fi echo 'executing example phase tests ...' if find "$TMP_" -name '*_example.bats' | grep -q .; then bats "${TMP_}"/*_example.bats fi } test_make () { local bat_file local img_pack local pack_files local tag case $phase in build) printf "finding tests compatible with %s phase settings ...\n" "$phase" for i in $images $examples; do if test_make_check_image "$i"; then echo "found: $i" build_targets+=( "$i" ) fi done printf '\n' printf 'generate build_auto.bats ...\n' test_make_auto "$phase" "${build_targets[@]}" > "${TMP_}/build_auto.bats" printf 'ok\n\n' ;; run) printf "finding tests compatible with %s phase settings ...\n" "$phase" # For each tarball or squashfs file in --pack-dir look for a # corresponding example or image that produces a matching tag. If # found, check the image for exclusion conidtions. pack_files=$(find "$CH_TEST_TARDIR" -name '*.tar.gz' \ -o -name '*.sqfs' | sort) for i in $pack_files; do img_pack=${i##*/} img_pack=${img_pack%%.*} for j in $images $examples; do if [[ $(test_make_tag_from_path "$j") == "$img_pack" ]]; then if test_make_check_image "$j"; then echo "found: $i" run_targets+=( "$j" ) fi fi done done printf '\n' printf 'generate run_auto.bats ...\n' test_make_auto "$phase" "${run_targets[@]}" > "${TMP_}/run_auto.bats" printf 'ok\n\n' ;; examples) printf "finding tests compatible with %s phase settings ...\n" "$phase" if [[ $CH_TEST_SCOPE == quick ]]; then echo 'found: none' return fi for i in $examples; do if test_make_check_image "$i"; then bat_file=$(dirname "$i")/test.bats tag=$(test_make_tag_from_path "$i") cp "$bat_file" "${TMP_}/${tag}_example.bats" # Substitute $ch_test_tag here with sed because we run all the # examples together later, but the value needs to vary between # the files. Watch the escaping here. sed -i "s/\\\$ch_test_tag/${tag}/g" \ "${TMP_}/${tag}_example.bats" echo "found: $(dirname "$i")" fi done printf '\n' printf 'generate example bat files ...\n' printf 'ok\n\n' ;; *) ;; esac } test_make_auto () { local mode mode=$1;shift if [[ $mode != tag ]]; then printf "# Do not edit this file; it's autogenerated\n\n" printf "load %s/common.bash\n\n" "$CHTEST_DIR" fi while [[ "$#" -gt 0 ]]; do path_=$1;shift basename_=$(basename "$path_") dirname_=$(dirname "$path_") tag=$(test_make_tag_from_path "$path_") if [[ $dir == "" ]];then dir='.' fi if [[ $mode == tag ]]; then echo "$tag" exit 0 fi if [[ $mode == build ]]; then case $basename_ in Build|Build.*) test_make_template_print 'build_custom.bats.in' ;; Dockerfile|Dockerfile.*) test_make_template_print 'build.bats.in' test_make_template_print 'builder_to_archive.bats.in' ;; *) fatal "test_make_auto: unknown build type" ;; esac elif [[ $mode == run ]];then test_make_template_print 'unpack.bats.in' else fatal "test_make_auto: invalid mode '$mode'" fi done } test_make_check_image () { img_ok=yes img=$1 dir=$(basename "$(dirname "$img")") arch_exclude=$(cat "$img" | grep -F 'ch-test-arch-exclude: ' \ | sed 's/.*: //' | awk '{print $1}') builder_include=$(cat "$img" | grep -F 'ch-test-builder-include: ' \ | sed 's/.*: //' | awk '{print $1}') builder_exclude=$(cat "$img" | grep -F 'ch-test-builder-exclude: ' \ | sed 's/.*: //' | awk '{print $1}') img_scope_int=$(scope_to_digit "$(cat "$img" | grep -F 'ch-test-scope' \ | sed 's/.*: //' \ | awk '{print $1}')") sudo_required=$(cat "$img" | grep -F 'ch-test-need-sudo') if [[ $phase == 'build' ]]; then # Exclude Dockerfiles if we have no builder. if [[ $CH_BUILDER == none && $img == *Dockerfile* ]]; then pq_missing "$img" 'builder required' img_ok= fi # Exclude if included builders are given and $CH_BUILDER isn't one. if [[ -n $builder_include ]]; then builder_ok= for b in $builder_include; do if [[ $b == "$CH_BUILDER" ]]; then builder_ok=yes fi done if [[ -z $builder_ok ]]; then pq_missing "$img" "builder not included: ${CH_BUILDER}" img_ok= fi fi # Exclude images that are not compatible with CH_BUILDER. for b in $builder_exclude; do if [[ $b == "$CH_BUILDER" ]]; then pq_missing "$img" "builder excluded: ${CH_BUILDER}" img_ok= fi done fi # Exclude images with a scope that is not a subset of CH_TEST_SCOPE. if [[ $scope_int -lt "$img_scope_int" ]]; then pq_missing "$img" "not in scope: ${CH_TEST_SCOPE}" img_ok= fi # Exclude images that do not work with the host architecture. for a in $arch_exclude; do if [[ $a == "$(uname -m)" ]]; then pq_missing "$img" "incompatible architecture: ${a}" img_ok= fi done # Exclude images that require sudo if CH_TEST_SUDO is empty if [[ -n $sudo_required && -z $CH_TEST_SUDO ]]; then pq_missing "$img" 'generic sudo required' img_ok= fi # In examples phase, exclude chtest and any images not in a subdirectory of # examples. if [[ $phase == examples && ( $dir == chtest || $dir == examples ) ]]; then img_ok= fi if [[ -n $img_ok ]]; then return 0 # include image else return 1 # exclude image fi } test_make_tag_from_path () { # Generate a tag from given path. # # Consider the following path: $CHTEST/examples/Dockerfile.openmpi # First break the path into four components: # 1) dir: the parent directory of the file (examples) # 2) base: the full file name (Dockerfile.openmpi) # 3) basicname: the file name (Dockerfile) # 4) extension: the file's extension (openmpi) # # $basicname must be 'Build' or 'Dockerfile'; otherwise error. # if $dir is '.', 'test', or 'examples' then tag=$extension; otherwise # tag is $extension if set or $dir-$exenstion. local base local basicname local dir local extension local tag dir=$(basename "$(dirname "$1")") # last directory only base=$(basename "$1") basicname=${base%%.*} extension=${base##*.} if [[ $extension == "$basicname" ]]; then extension='' else extension=${extension/\./} # remove dot fi case $basicname in Build|Dockerfile) case $dir in .|test|examples) # dot is directory "test" if [[ -z $extension ]]; then fatal "can't compute tag: $1" else tag=$extension fi ;; *) if [[ -z $extension ]]; then tag=$(basename "$dir") else tag=$(basename "${dir}-${extension}") fi esac ;; *) fatal "test_make_auto: invalid basic name '$basicname'" ;; esac echo "$tag" } test_make_template_print () { local template template="./make-auto.d/$1" cat "$template" | sed "s@%(basename)s@$basename_@g" \ | sed "s@%(dirname)s@$dirname_@g" \ | sed "s@%(path)s@$path_@g" \ | sed "s@%(scope)s@$CH_TEST_SCOPE@g" \ | sed "s@%(tag)s@$tag@g" printf '\n' return } test_one_file () { if [[ $one_file != *.bats ]]; then # Assume it's a Dockerfile or Build file; derive tag and test.bats. ch_test_tag=$(test_make_tag_from_path "$one_file") export ch_test_tag one_file=$(dirname "$one_file")/test.bats printf 'tag: %s\n' "$ch_test_tag" fi printf 'file: %s\n' "$one_file" bats "$one_file" } test_run () { echo 'executing run phase tests ...' if [[ ! -f ${TMP_}/run_auto.bats ]]; then fatal "${TMP_}/run_auto.bats not found" fi bats run_first.bats "${TMP_}/run_auto.bats" ./run/*.bats set -e if [[ $CH_TEST_SCOPE != quick ]]; then for guest_user in $(id -un) root nobody; do for guest_group in $(id -gn) root $(id -gn nobody); do export GUEST_USER=$guest_user export GUEST_GROUP=$guest_group echo "testing as $GUEST_USER $GUEST_GROUP" bats run/ch-run_uidgid.bats done done fi set +e } # Exit with failure if given version number is below a minimum. # # $1: human-readable descriptor # $2: minimum version # $3: actual version version_check () { desc=$1 min=$2 actual=$3 if [[ $( printf '%s\n%s\n' "$min" "$actual" \ | sort -V | head -n1) != "$min" ]]; then fatal "$desc: mininum version $min, found $actual" fi } win () { printf "\nAll tests passed.\n" } ### Body of script # Ensure ch-run has been compiled (issue #329). if ! "${ch_bin}/ch-run" --version > /dev/null 2>&1; then fatal "no working ch-run found in $ch_bin" fi # Ensure we have Bash 4.1 or higher if /bin/bash -c 'set -e; [[ 1 = 0 ]]; exit 0'; then # Bash bug: [[ ... ]] expression doesn't exit with set -e # https://github.com/sstephenson/bats/issues/49 fatal 'Bash minimum version is 4.1' fi # Is the user a contributor? email= # First, ask Git for the configured e-mail address. if command -v git > /dev/null 2>&1; then email="$(git config --get user.email || true)" fi # If that doesn't work, construct it from the environment. if [[ -z $email ]]; then email="$USER@$(hostname --domain)" fi ch_contributor= for i in "${ch_contributors[@]}"; do if [[ $i == "$email" ]]; then ch_contributor=yes fi done # Ensure Bats is installed. if command -v bats > /dev/null 2>&1; then bats=$(command -v bats) bats_version="$(bats --version | awk '{print $2}')" else fatal 'Bats not found' fi # Reject non-default registry on GitHub Actions. if [[ -n $GITHUB_ACTIONS && -n $CH_REGY_DEFAULT_HOST ]]; then fatal 'non-default registry on GitHub Actions invalid' fi # Create a directory to hold auto-generated test artifacts. TMP_=/tmp/ch-test.tmp.$USER if [[ ! -d $TMP_ ]]; then mkdir "$TMP_" chmod 700 "$TMP_" fi # Find test directories. (At one point we tried to unify the paths between the # two conditions using a deeper directory hierarchy and symlinks to . in the # source code to keep it from being even deeper, but this became too unruly.) if [[ -d ${ch_base}/share ]]; then # installed CHTEST_INSTALLED=yes CHTEST_GITWD= CHTEST_DIR=${ch_base}/libexec/charliecloud/test CHTEST_EXAMPLES_DIR=${ch_base}/share/doc/charliecloud/examples else # build dir CHTEST_INSTALLED= if [[ -d ${ch_base}/.git ]]; then CHTEST_GITWD=yes else CHTEST_GITWD= fi CHTEST_DIR=${ch_base}/test CHTEST_EXAMPLES_DIR=${ch_base}/examples fi export ch_base export CHTEST_INSTALLED export CHTEST_GITWD export CHTEST_DIR export CHTEST_EXAMPLES_DIR export TMP_ # Check for test directory. if [[ ! -d $CHTEST_DIR ]]; then fatal "test directory not found: $CHTEST_DIR" fi if [[ ! -d $CHTEST_EXAMPLES_DIR ]]; then fatal "examples not found: $CHTEST_EXAMPLES_DIR" fi # Parse arguments. if [[ $# == 0 ]]; then usage 1 fi while [[ $# -gt 0 ]]; do opt=$1; shift case $opt in all|build|clean|examples|mk-perm-dirs|rm-perm-dirs|run) require_unset phase phase=$opt ;; -b|--builder) require_unset builder builder=$1; shift ;; --builder=*) require_unset builder builder=${opt#*=} ;; --dry-run) dry=true ;; -f|--file) require_unset phase phase=one-file one_file=$1; shift ;; --file=*) require_unset phase phase=one-file one_file=${opt#*=} ;; -h|--help) usage 0 ;; --img-dir) require_unset imgdir imgdir=$1; shift ;; --img-dir=*) require_unset imgdir imgdir=${opt#*=} ;; --is-pedantic) # undocumented; for CI is_pedantic=yes ;; --is-sudo) # undocumented; for CI is_sudo=yes ;; --pack-dir) require_unset tardir tardir=$1; shift ;; --pack-dir=*) require_unset tardir tardir=${opt#*=} ;; --pack-fmt) require_unset pack_fmt pack_fmt=$1; shift ;; --pack-fmt=*) require_unset pack_fmt pack_fmt=${opt#*=} ;; --pedantic) pedantic=$1; shift ;; --pedantic=*) pedantic=${opt#*=} ;; --perm-dir) use_sudo=yes permdirs+=("$1"); shift ;; --perm-dir=*) use_sudo=yes permdirs+=("${opt#*=}") ;; -s|--scope) require_unset scope scope_check "$1" scope=$1; shift ;; --scope=*) require_unset scope scope=${opt#*=} scope_check "$scope" ;; --sudo) use_sudo=yes ;; --lustre) require_unset lustredir lustredir=$1; shift ;; --lustre=*) require_unset lustredir lustredir=${opt#*=} ;; --version) version; exit 0 ;; *) fatal "unrecognized argument: $opt" ;; esac done printf 'ch-run: %s\n' "${ch_bin}/ch-run" printf 'bats: %s (%s)\n' "$bats" "$bats_version" printf 'tests: %s\n' "$CHTEST_DIR" printf 'installed: %s\n' "${CHTEST_INSTALLED:-no}" printf 'git workdir: %s\n' "${CHTEST_GITWD:-no}" if [[ -n $ch_contributor ]]; then ch_contributor_note="yes; $email in README.rst" else ch_contributor_note="no; $email not in README.rst" fi printf 'contributor: %s\n\n' "$ch_contributor_note" if [[ $phase = one-file ]]; then if [[ $one_file == *:* ]]; then x=$one_file one_file=${x%%:*} # before first colon export ch_one_test=${x#*:} # after first colon fi if [[ ! -f $one_file ]]; then fatal "not a file: $one_file" fi one_file=$(readlink -f "$one_file") # make absolute b/c we cd later if [[ $one_file = */test.bats ]]; then fatal '--file: must specify build recipe file, not test.bats' fi fi printf "%-21s %s" 'phase:' "$phase" if [[ $phase = one-file ]]; then printf ': %s (%s)' "$one_file" "$ch_one_test" fi if [[ -z $phase ]]; then fatal 'phase: no phase specified' fi printf '\n' # variable name CLI environment default # desc. width description vset CH_TEST_SCOPE "$scope" "$CH_TEST_SCOPE" standard \ 21 'scope' builder_set 21 pedantry_set 21 vset CH_TEST_SUDO "$use_sudo" "$CH_TEST_SUDO" "$use_sudo_default" \ 21 'use generic sudo' vset CH_TEST_IMGDIR "$imgdir" "$CH_TEST_IMGDIR" /var/tmp/img \ 21 'unpacked images dir' vset CH_TEST_TARDIR "$tardir" "$CH_TEST_TARDIR" /var/tmp/tar \ 21 'packed images dir' pack_fmt_set 21 vset CH_TEST_PERMDIRS "${permdirs[*]}" "$CH_TEST_PERMDIRS" skip \ 21 'fs permissions dirs' vset CH_TEST_LUSTREDIR "$lustredir" "$CH_TEST_LUSTREDIR" skip \ 21 'Lustre test dir' printf '\n' if [[ $phase == *'perm'* ]] && [[ $CH_TEST_PERMDIRS == skip ]]; then fatal "phase $phase: CH_TEST_PERMDIRS: can't be 'skip'" fi # Check that different sources of version number are consistent. printf 'ch-test version: %s\n' "$ch_version" ch_run_version=$("${ch_bin}/ch-run" --version 2>&1) if [[ $ch_version != "$ch_run_version" ]]; then warning "inconsistent ch-run version: ${ch_run_version}" fi if [[ -z $CHTEST_INSTALLED ]]; then cf_version=$("${ch_base}/configure" --version | head -1 | cut -d' ' -f3) if [[ $ch_version != "$cf_version" ]]; then warning "inconsistent configure version: ${cf_version}" fi src_version=$("${ch_base}/misc/version") if [[ $ch_version != "$src_version" ]]; then warning "inconsistent source version: ${src_version}" fi fi printf '\n' # Ensure BATS_TMPDIR is set to /tmp (issue #278). if [[ -n $BATS_TMPDIR && $BATS_TMPDIR != '/tmp' ]]; then fatal "BATS_TMPDIR: must be /tmp; found '$BATS_TMPDIR' (issue #278)" fi # Ensure namespaces are configured properly. printf 'checking namespaces ...\n' if ! "${ch_bin}/ch-checkns"; then fatal 'namespace sanity check (ch-checkns) failed' fi printf '\n' if [[ $CH_TEST_SUDO ]]; then printf 'checking sudo ...\n' sudo echo ok printf '\n' fi if [[ -n $is_pedantic ]]; then printf 'exiting per --is-pedantic\n' if [[ -n $ch_pedantic ]]; then exit 0; else exit 1; fi fi if [[ -n $is_sudo ]]; then printf 'exiting per --is-sudo\n' if [[ -n $CH_TEST_SUDO ]]; then exit 0; else exit 1; fi fi if [[ -n $dry ]];then printf 'exiting per --dry-run\n' exit 0 fi cd "$CHTEST_DIR" export PATH=$ch_bin:$PATH # Now that CH_TEST_* variables, PATH, and BATS_TMPDIR has been set and checked, # we source CHTEST_DIR/common.bash. . "${CHTEST_DIR}/common.bash" # The distinction here is that "images" are purely for testing and have no # value as examples for the user, while "examples" are dual-purpose. We call # "find" twice for each to preserve desired sort order. images=$( find "$CHTEST_DIR" -name 'Dockerfile.*' | sort \ && find "$CHTEST_DIR" -name 'Build' \ -o -name 'Build.*' | sort) examples=$( find "$CHTEST_EXAMPLES_DIR" -name 'Dockerfile' \ -o -name 'Dockerfile.*' | sort \ && find "$CHTEST_EXAMPLES_DIR" -name 'Build' \ -o -name 'Build.*' | sort) scope_int=$(scope_to_digit "$CH_TEST_SCOPE") # Execute phase case $phase in all) phase=build dir_tar_mk builder_check test_make test_build phase=run dir_img_mk dir_tar_check test_make test_run phase=examples dir_tar_check test_make test_examples # Kobe. win ;; build) builder_check dir_tar_mk test_make test_build win ;; clean) dirs_unpriv_rm if [[ -d $TMP_ ]] && [[ -e $TMP_/build_auto.bats ]]; then echo "removing $TMP_" rm -rf --one-file-system "$TMP_" fi ;; examples) test_make test_examples win ;; mk-perm-dirs) printf 'creating filesystem permissions fixtures ...\n' for d in $CH_TEST_PERMDIRS; do if [[ -d ${d} ]]; then printf '%s already exists\n' "$d" continue else sudo "${CHTEST_DIR}/make-perms-test" "$d" "$USER" nobody fi done echo ;; one-file) test_one_file win ;; rm-perm-dirs) for d in $CH_TEST_PERMDIRS; do dir_perm_rm "$d" done ;; run) dir_img_mk test_make test_run win ;; esac charliecloud-0.26/bin/ch_core.c000066400000000000000000000407441417231051300164230ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "config.h" #include "ch_misc.h" #include "ch_core.h" #ifdef HAVE_LIBSQUASHFUSE #include "ch_fuse.h" #endif /** Macros **/ /* Timeout in seconds for waiting for join semaphore. */ #define JOIN_TIMEOUT 30 /* Maximum length of paths we're willing to deal with. (Note that system-defined PATH_MAX isn't reliable.) */ #define PATH_CHARS 4096 /** Constants **/ /* Default bind-mounts. */ struct bind BINDS_DEFAULT[] = { { "/dev", "/dev", BD_REQUIRED }, { "/proc", "/proc", BD_REQUIRED }, { "/sys", "/sys", BD_REQUIRED }, { "/etc/hosts", "/etc/hosts", BD_OPTIONAL }, { "/etc/machine-id", "/etc/machine-id", BD_OPTIONAL }, { "/etc/resolv.conf", "/etc/resolv.conf", BD_OPTIONAL }, { "/var/lib/hugetlbfs", "/var/opt/cray/hugetlbfs", BD_OPTIONAL }, { "/var/opt/cray/alps/spool", "/var/opt/cray/alps/spool", BD_OPTIONAL }, { 0 } }; /** Global variables **/ /* Variables for coordinating --join. */ struct { bool winner_p; char *sem_name; sem_t *sem; char *shm_name; struct { pid_t winner_pid; // access anytime after initialization (write-once) int proc_left_ct; // access only while serial } *shared; } join; /* Bind mounts done so far; canonical host paths. If null, there are none. */ char **bind_mount_paths = NULL; /** Function prototypes (private) **/ void bind_mount(const char *src, const char *dst, enum bind_dep, const char *newroot, unsigned long flags); void bind_mounts(const struct bind *binds, const char *newroot, unsigned long flags); void enter_udss(struct container *c); void join_begin(const char *join_tag); void join_namespace(pid_t pid, const char *ns); void join_namespaces(pid_t pid); void join_end(int join_ct); void sem_timedwait_relative(sem_t *sem, int timeout); void setup_namespaces(const struct container *c); void setup_passwd(const struct container *c); void tmpfs_mount(const char *dst, const char *newroot, const char *data); /** Functions **/ /* Bind-mount the given path into the container image. */ void bind_mount(const char *src, const char *dst, enum bind_dep dep, const char *newroot, unsigned long flags) { char *dst_fullc, *newrootc; char *dst_full = cat(newroot, dst); Te (src[0] != 0 && dst[0] != 0 && newroot[0] != 0, "empty string"); Te (dst[0] == '/' && newroot[0] == '/', "relative path"); if (!path_exists(src, NULL, true)) { Te (dep == BD_OPTIONAL, "can't bind: source not found: %s", src); return; } if (!path_exists(dst_full, NULL, true)) switch (dep) { case BD_REQUIRED: FATAL("can't bind: destination not found: %s", dst_full); break; case BD_OPTIONAL: return; case BD_MAKE_DST: mkdirs(newroot, dst, bind_mount_paths); break; } newrootc = realpath_safe(newroot); dst_fullc = realpath_safe(dst_full); Tf (path_subdir_p(newrootc, dst_fullc), "can't bind: %s not subdirectory of %s", dst_fullc, newrootc); if (strcmp(newroot, "/")) // don't record if newroot is "/" list_append((void **)&bind_mount_paths, &dst_fullc, sizeof(char *)); Zf (mount(src, dst_full, NULL, MS_REC|MS_BIND|flags, NULL), "can't bind %s to %s", src, dst_full); } /* Bind-mount a null-terminated array of struct bind objects. */ void bind_mounts(const struct bind *binds, const char *newroot, unsigned long flags) { for (int i = 0; binds[i].src != NULL; i++) bind_mount(binds[i].src, binds[i].dst, binds[i].dep, newroot, flags); } /* Set up new namespaces or join existing namespaces. */ void containerize(struct container *c) { if (c->join_pid) { join_namespaces(c->join_pid); return; } if (c->join) join_begin(c->join_tag); if (!c->join || join.winner_p) { #ifdef HAVE_LIBSQUASHFUSE if (c->type == IMG_SQUASH) sq_fork(c); #endif setup_namespaces(c); enter_udss(c); } else join_namespaces(join.shared->winner_pid); if (c->join) join_end(c->join_ct); } /* Enter the UDSS. After this, we are inside the UDSS. Note that pivot_root(2) requires a complex dance to work, i.e., to avoid multiple undocumented error conditions. This dance is explained in detail in bin/ch-checkns.c. */ void enter_udss(struct container *c) { char *newroot_parent, *newroot_base; LOG_IDS; path_split(c->newroot, &newroot_parent, &newroot_base); // Claim new root for this namespace. We do need both calls to avoid // pivot_root(2) failing with EBUSY later. bind_mount(c->newroot, c->newroot, BD_REQUIRED, "/", MS_PRIVATE); bind_mount(newroot_parent, newroot_parent, BD_REQUIRED, "/", MS_PRIVATE); // Bind-mount default files and directories. bind_mounts(BINDS_DEFAULT, c->newroot, MS_RDONLY); // /etc/passwd and /etc/group. if (!c->private_passwd) setup_passwd(c); // Container /tmp. if (c->private_tmp) { tmpfs_mount("/tmp", c->newroot, NULL); } else { bind_mount(host_tmp, "/tmp", BD_REQUIRED, c->newroot, 0); } // Container /home. if (!c->private_home) { char *newhome; // Mount tmpfs on guest /home because guest root is read-only tmpfs_mount("/home", c->newroot, "size=4m"); // Bind-mount user's home directory at /home/$USER. The main use case is // dotfiles. Tf (c->old_home != NULL, "cannot find home directory: is $HOME set?"); newhome = cat("/home/", username); Z_ (mkdir(cat(c->newroot, newhome), 0755)); bind_mount(c->old_home, newhome, BD_REQUIRED, c->newroot, 0); } // Container /usr/bin/ch-ssh. if (c->ch_ssh) { char chrun_file[PATH_CHARS]; int len = readlink("/proc/self/exe", chrun_file, PATH_CHARS); T_ (len >= 0); Te (path_exists(cat(c->newroot, "/usr/bin/ch-ssh"), NULL, true), "--ch-ssh: /usr/bin/ch-ssh not in image"); chrun_file[ lennewroot, 0); } // Re-mount new root read-only unless --write or already read-only. if (!c->writable && !(access(c->newroot, W_OK) == -1 && errno == EROFS)) { unsigned long flags = path_mount_flags(c->newroot) | MS_REMOUNT // Re-mount ... | MS_BIND // only this mount point ... | MS_RDONLY; // read-only. Zf (mount(NULL, c->newroot, NULL, flags, NULL), "can't re-mount image read-only (is it on NFS?)"); } // Bind-mount user-specified directories. bind_mounts(c->binds, c->newroot, 0); // Overmount / to avoid EINVAL if it's a rootfs. Z_ (chdir(newroot_parent)); Z_ (mount(newroot_parent, "/", NULL, MS_MOVE, NULL)); Z_ (chroot(".")); c->newroot = cat("/", newroot_base); // Pivot into the new root. Use /dev because it's available even in // extremely minimal images. Zf (chdir(c->newroot), "can't chdir into new root"); Zf (syscall(SYS_pivot_root, c->newroot, cat(c->newroot, "/dev")), "can't pivot_root(2)"); Zf (chroot("."), "can't chroot(2) into new root"); Zf (umount2("/dev", MNT_DETACH), "can't umount old root"); } /* Return image type of path, or exit with error if not a valid type. */ enum img_type img_type_get(const char *path) { struct stat read; FILE *fp; char magic[4]; // four bytes, not a string Zf (stat(path, &read), "can't stat: %s", path); if (S_ISDIR(read.st_mode)) return IMG_DIRECTORY; fp = fopen(path, "rb"); Tf (fp != NULL, "can't open: %s", path); Tf (fread(magic, sizeof(char), 4, fp) == 4, "can't read: %s", path); Zf (fclose(fp), "can't close: %s", path); INFO("image file magic expected: 6873 7173; actual: %x%x %x%x", magic[0], magic[1], magic[2], magic[3]); // SquashFS magic number is 6873 7173, i.e. "hsqs". I think "sqsh" was // intended but the superblock designers were confused about endianness. // See: https://dr-emann.github.io/squashfs/ if (memcmp(magic, "hsqs", 4) == 0) return IMG_SQUASH; FATAL("unknown image type: %s", path); return IMG_NONE; // unreachable, avoid warning; see issue #1158 } /* Begin coordinated section of namespace joining. */ void join_begin(const char *join_tag) { int fd; join.sem_name = cat("/ch-run_", join_tag); join.shm_name = cat("/ch-run_", join_tag); // Serialize. join.sem = sem_open(join.sem_name, O_CREAT, 0600, 1); T_ (join.sem != SEM_FAILED); sem_timedwait_relative(join.sem, JOIN_TIMEOUT); // Am I the winner? fd = shm_open(join.shm_name, O_CREAT|O_EXCL|O_RDWR, 0600); if (fd > 0) { INFO("join: I won"); join.winner_p = true; Z_ (ftruncate(fd, sizeof(*join.shared))); } else if (errno == EEXIST) { INFO("join: I lost"); join.winner_p = false; fd = shm_open(join.shm_name, O_RDWR, 0); T_ (fd > 0); } else { T_ (0); } join.shared = mmap(NULL, sizeof(*join.shared), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); T_ (join.shared != NULL); Z_ (close(fd)); // Winner keeps lock; losers parallelize (winner will be done by now). if (!join.winner_p) Z_ (sem_post(join.sem)); } /* End coordinated section of namespace joining. */ void join_end(int join_ct) { if (join.winner_p) { // winner still serial INFO("join: winner initializing shared data"); join.shared->winner_pid = getpid(); join.shared->proc_left_ct = join_ct; } else // losers serialize sem_timedwait_relative(join.sem, JOIN_TIMEOUT); join.shared->proc_left_ct--; INFO("join: %d peers left excluding myself", join.shared->proc_left_ct); if (join.shared->proc_left_ct <= 0) { INFO("join: cleaning up IPC resources"); Te (join.shared->proc_left_ct == 0, "expected 0 peers left but found %d", join.shared->proc_left_ct); Zf (sem_unlink(join.sem_name), "can't unlink sem: %s", join.sem_name); Zf (shm_unlink(join.shm_name), "can't unlink shm: %s", join.shm_name); } Z_ (sem_post(join.sem)); // parallelize (all) Z_ (munmap(join.shared, sizeof(*join.shared))); Z_ (sem_close(join.sem)); INFO("join: done"); } /* Join a specific namespace. */ void join_namespace(pid_t pid, const char *ns) { char *path; int fd; T_ (1 <= asprintf(&path, "/proc/%d/ns/%s", pid, ns)); fd = open(path, O_RDONLY); if (fd == -1) { if (errno == ENOENT) { Te (0, "join: no PID %d: %s not found", pid, path); } else { Tf (0, "join: can't open %s", path); } } Zf (setns(fd, 0), "can't join %s namespace of pid %d", ns, pid); } /* Join the existing namespaces created by the join winner. */ void join_namespaces(pid_t pid) { INFO("joining namespaces of pid %d", pid); join_namespace(pid, "user"); join_namespace(pid, "mnt"); } /* Replace the current process with user command and arguments. */ void run_user_command(char *argv[], const char *initial_dir) { LOG_IDS; if (initial_dir != NULL) Zf (chdir(initial_dir), "can't cd to %s", initial_dir); INFO("executing: %s", argv_to_string(argv)); Zf (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0), "can't set no_new_privs"); execvp(argv[0], argv); // only returns if error Tf (0, "can't execve(2): %s", argv[0]); } /* Wait for semaphore sem for up to timeout seconds. If timeout or an error, exit unsuccessfully. */ void sem_timedwait_relative(sem_t *sem, int timeout) { struct timespec deadline; // sem_timedwait() requires a deadline rather than a timeout. Z_ (clock_gettime(CLOCK_REALTIME, &deadline)); deadline.tv_sec += timeout; if (sem_timedwait(sem, &deadline)) { Ze (errno == ETIMEDOUT, "timeout waiting for join lock"); Tf (0, "failure waiting for join lock"); } } /* Activate the desired isolation namespaces. */ void setup_namespaces(const struct container *c) { int fd; uid_t euid = -1; gid_t egid = -1; euid = geteuid(); egid = getegid(); LOG_IDS; Zf (unshare(CLONE_NEWNS|CLONE_NEWUSER), "can't init user+mount namespaces"); LOG_IDS; /* Write UID map. What we are allowed to put here is quite limited. Because we do not have CAP_SETUID in the *parent* user namespace, we can map exactly one UID: an arbitrary container UID to our EUID in the parent namespace. This is sufficient to change our UID within the container; no setuid(2) or similar required. This is because the EUID of the process in the parent namespace is unchanged, so the kernel uses our new 1-to-1 map to convert that EUID into the container UID for most (maybe all) purposes. */ T_ (-1 != (fd = open("/proc/self/uid_map", O_WRONLY))); T_ (1 <= dprintf(fd, "%d %d 1\n", c->container_uid, euid)); Z_ (close(fd)); LOG_IDS; T_ (-1 != (fd = open("/proc/self/setgroups", O_WRONLY))); T_ (1 <= dprintf(fd, "deny\n")); Z_ (close(fd)); T_ (-1 != (fd = open("/proc/self/gid_map", O_WRONLY))); T_ (1 <= dprintf(fd, "%d %d 1\n", c->container_gid, egid)); Z_ (close(fd)); LOG_IDS; } /* Build /etc/passwd and /etc/group files and bind-mount them into newroot. /etc/passwd contains root, nobody, and an entry for the container UID, i.e., three entries, or two if the container UID is 0 or 65534. We copy the host's user data for the container UID, if that exists, and use dummy data otherwise (see issue #649). /etc/group works similarly: root, nogroup, and an entry for the container GID. We build new files to capture the relevant host username and group name mappings regardless of where they come from. We used to simply bind-mount the host's /etc/passwd and /etc/group, but this fails for LDAP at least; see issue #212. After bind-mounting, we remove the files from the host; they persist inside the container and then disappear completely when the container exits. */ void setup_passwd(const struct container *c) { int fd; char *path; struct group *g; struct passwd *p; // /etc/passwd T_ (path = cat(host_tmp, "/ch-run_passwd.XXXXXX")); T_ (-1 != (fd = mkstemp(path))); // mkstemp(3) writes path if (c->container_uid != 0) T_ (1 <= dprintf(fd, "root:x:0:0:root:/root:/bin/sh\n")); if (c->container_uid != 65534) T_ (1 <= dprintf(fd, "nobody:x:65534:65534:nobody:/:/bin/false\n")); errno = 0; p = getpwuid(c->container_uid); if (p) { T_ (1 <= dprintf(fd, "%s:x:%u:%u:%s:/home/%s:/bin/sh\n", p->pw_name, c->container_uid, c->container_gid, p->pw_gecos, username)); } else { if (errno) { Tf (0, "getpwuid(3) failed"); } else { INFO("UID %d not found; using dummy info", c->container_uid); T_ (1 <= dprintf(fd, "%s:x:%u:%u:%s:/home/%s:/bin/sh\n", "charlie", c->container_uid, c->container_gid, "Charliecloud User", "charlie")); } } Z_ (close(fd)); bind_mount(path, "/etc/passwd", BD_REQUIRED, c->newroot, 0); Z_ (unlink(path)); // /etc/group T_ (path = cat(host_tmp, "/ch-run_group.XXXXXX")); T_ (-1 != (fd = mkstemp(path))); if (c->container_gid != 0) T_ (1 <= dprintf(fd, "root:x:0:\n")); if (c->container_gid != 65534) T_ (1 <= dprintf(fd, "nogroup:x:65534:\n")); errno = 0; g = getgrgid(c->container_gid); if (g) { T_ (1 <= dprintf(fd, "%s:x:%u:\n", g->gr_name, c->container_gid)); } else { if (errno) { Tf (0, "getgrgid(3) failed"); } else { INFO("GID %d not found; using dummy info", c->container_gid); T_ (1 <= dprintf(fd, "%s:x:%u:\n", "charliegroup", c->container_gid)); } } Z_ (close(fd)); bind_mount(path, "/etc/group", BD_REQUIRED, c->newroot, 0); Z_ (unlink(path)); } /* Mount a tmpfs at the given path. */ void tmpfs_mount(const char *dst, const char *newroot, const char *data) { char *dst_full = cat(newroot, dst); Zf (mount(NULL, dst_full, "tmpfs", 0, data), "can't mount tmpfs at %s", dst_full); } charliecloud-0.26/bin/ch_core.h000066400000000000000000000034661417231051300164300ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. This interface contains Charliecloud's core containerization features. */ #define _GNU_SOURCE #include /** Types **/ enum bind_dep { BD_REQUIRED, // both source and destination must exist BD_OPTIONAL, // if either source or destination missing, do nothing BD_MAKE_DST, // source must exist, try to create destination if it doesn't }; struct bind { char *src; char *dst; enum bind_dep dep; }; enum img_type { IMG_DIRECTORY, // normal directory, perhaps an external mount of some kind IMG_SQUASH, // SquashFS archive file (not yet mounted) IMG_NONE, // image type is not set yet }; struct container { struct bind *binds; bool ch_ssh; // bind /usr/bin/ch-ssh? gid_t container_gid; // GID to use in container uid_t container_uid; // UID to use in container bool env_expand; // expand variables in --set-env char *img_path; // path to image char *newroot; // path to new root directory bool join; // is this a synchronized join? int join_ct; // number of peers in a synchronized join pid_t join_pid; // process in existing namespace to join char *join_tag; // identifier for synchronized join bool private_home; // don't bind user home directory bool private_passwd; // don't bind custom /etc/{passwd,group} bool private_tmp; // don't bind host's /tmp char *old_home; // host path to user's home directory (i.e. $HOME) enum img_type type; // directory, SquashFS, etc. bool writable; // re-mount image read-write }; /** Function prototypes **/ void containerize(struct container *c); enum img_type img_type_get(const char *path); void run_user_command(char *argv[], const char *initial_dir); charliecloud-0.26/bin/ch_fuse.c000066400000000000000000000216621417231051300164330ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. */ /* Function prefixes: fuse_ libfuse; docs: https://libfuse.github.io/doxygen/globals.html sqfs_ll_ SquashFUSE; no docs but: https://github.com/vasi/squashfuse sq_ Charliecloud */ #define _GNU_SOURCE #include #include #include #include #include // SquashFUSE has a bug [1] where ll.h includes SquashFUSE's own config.h. // This clashes with our own config.h, as well as the system headers because // it defines _POSIX_C_SOURCE. By defining SQFS_CONFIG_H, SquashFUSE's // config.h skips itself. // [1]: https://github.com/vasi/squashfuse/issues/65 #define SQFS_CONFIG_H // But then FUSE_USE_VERSION isn't defined, which makes other parts of ll.h // puke. Looking at their code, it seems the only values used are 32 (for // libfuse3) and 26 (for libfuse2), so we can just blindly define it. #define FUSE_USE_VERSION 32 // SquashFUSE redefines __le16 unless HAVE_LINUX_TYPES_LE16 is defined. We are // assuming it is defined in on your machine. #define HAVE_LINUX_TYPES_LE16 // Now we can include ll.h. #include #include "config.h" #include "ch_core.h" #include "ch_fuse.h" #include "ch_misc.h" /** Types **/ /* A SquashFUSE mount. SquashFUSE allocates ll for us but not chan; use pointers for both for consistency. */ struct squash { char *mountpt; // path to mount point sqfs_ll_chan *chan; // FUSE channel associated with SquashFUSE mount sqfs_ll *ll; // SquashFUSE low-level data structure }; /** Constants **/ /* This mapping tells libfuse what functions implement which FUSE operations. It is passed to sqfs_ll_mount(). Why it is not internal to SquashFUSE I have no idea. */ struct fuse_lowlevel_ops OPS = { .getattr = &sqfs_ll_op_getattr, .opendir = &sqfs_ll_op_opendir, .releasedir = &sqfs_ll_op_releasedir, .readdir = &sqfs_ll_op_readdir, .lookup = &sqfs_ll_op_lookup, .open = &sqfs_ll_op_open, .create = &sqfs_ll_op_create, .release = &sqfs_ll_op_release, .read = &sqfs_ll_op_read, .readlink = &sqfs_ll_op_readlink, .listxattr = &sqfs_ll_op_listxattr, .getxattr = &sqfs_ll_op_getxattr, .forget = &sqfs_ll_op_forget, .statfs = &stfs_ll_op_statfs }; /** Global variables **/ /* SquashFUSE mount. Initialized in sq_mount() and then used in most of the other functions in this file. It's a global because the signal handler needs access to it. */ struct squash sq; /* True if exit request signal handler received SIGCHLD. */ volatile bool sigchld_received; /* True if any exit request signal has been received. */ volatile bool loop_terminating = false; /** Function prototypes (private) **/ void sq_done_request(int signum); int sq_loop(); void sq_mount(const char *img_path, char *mountpt); /** Functions **/ /* Signal handler to end the FUSE loop. This simply requests FUSE to end its loop, causing fuse_session_loop() to exit. */ void sq_done_request(int signum) { if (!loop_terminating) { // only act on first signal loop_terminating = true; sigchld_received = (signum == SIGCHLD); fuse_session_exit(sq.chan->session); } } /* Mount SquashFS archive c->img_path on directory c->newroot. If the latter is NULL, then mkdir(2) the default mount point and assign its path to c->newroot. After mounting, fork; the child returns immediately while the parent runs the FUSE loop until the child exits and then exits itself, with the same exit code as the child (unless something else went wrong). */ void sq_fork(struct container *c) { pid_t pid_child; struct stat st; // Default mount point? if (c->newroot == NULL) { char *subdir; T_ (asprintf(&subdir, "/%s.ch/mnt", username) > 0); c->newroot = cat("/var/tmp", subdir); INFO("using default mount point: %s", c->newroot); mkdirs("/var/tmp", subdir, NULL); } // Verify mount point exists and is a directory. (SquashFS file path // already checked in img_type_get().) Zf (stat(c->newroot, &st), "can't stat mount point: %s", c->newroot); Te (S_ISDIR(st.st_mode), "not a directory: %s", c->newroot); // Mount SquashFS. sq_mount(c->img_path, c->newroot); // Now that the filesystem is mounted, we can fork without race condition. // The child returns to caller and runs the user command. When that exits, // the parent gets SIGCHLD. pid_child = fork(); Tf (pid_child >= 0, "can't fork"); if (pid_child > 0) // parent (child does nothing here) exit(sq_loop()); } /* Run the squash loop to completion and return the exit code of the user command. Warning: This sets up but does not restore signal handlers. */ int sq_loop(void) { struct sigaction fin, ign; int looped, exit_code, child_status; // Set up signal handlers. Avoid fuse_set_signal_handlers() because we need // to catch a different set of signals, letting some be handled by the user // command [1]. Use sigaction(2) instead of signal(2) because the latter's // man page [2] says “avoid its use” and there are reports of bad // interactions with libfuse [3]. // // [1]: https://unix.stackexchange.com/questions/176235 // [2]: https://man7.org/linux/man-pages/man2/signal.2.html // [3]: https://stackoverflow.com/a/8918597 fin.sa_handler = sq_done_request; Z_ (sigemptyset(&fin.sa_mask)); // block no other signals during handling fin.sa_flags = SA_NOCLDSTOP; // only SIGCHLD on child exit ign.sa_handler = SIG_IGN; Z_ (sigaction(SIGCHLD, &fin, NULL)); // user command exits Z_ (sigaction(SIGHUP, &ign, NULL)); // terminal/session terminated Z_ (sigaction(SIGINT, &ign, NULL)); // Control-C Z_ (sigaction(SIGPIPE, &ign, NULL)); // broken pipe; we don't use pipes Z_ (sigaction(SIGTERM, &fin, NULL)); // somebody asked us to exit // Run the FUSE loop, which services FUSE requests until sq_done_request() // is invoked by a signal and tells it to stop, or someone unmounts the // filesystem externally with e.g. fusermount(1). Because we don't use // fuse_set_signal_handlers(), the return value doesn't contain the signal // number that ended the loop, contrary to the documentation. // // FIXME: this is single-threaded; see issue #1157. looped = fuse_session_loop(sq.chan->session); if (looped < 0) { errno = -looped; // restore encoded errno so our logging finds it Tf (0, "FUSE session failed"); } INFO("FUSE loop terminated successfully"); // Clean up zombie child if exit signal was SIGCHLD. if (!sigchld_received) exit_code = 0; else { Tf (wait(&child_status) >= 0, "can't wait for child"); if (WIFEXITED(child_status)) { exit_code = WEXITSTATUS(child_status); INFO("child terminated normally with exit code %d", exit_code); } else { // We now know that the child did not exit normally; the two // remaining options are (a) killed by signal and (b) stopped [1]. // Because we didn't call waitpid(2) with WUNTRACED, we don't get // notified if the child is stopped [2], so it must have been // signaled, and we need not call WIFSIGNALED(). // // [1]: https://codereview.stackexchange.com/a/109349 // [2]: https://man7.org/linux/man-pages/man2/wait.2.html exit_code = 1; INFO("child terminated by signal %d", WTERMSIG(child_status)) } } // Clean up SquashFS mount. These functions have no error reporting. INFO("unmounting: %s", sq.mountpt); sqfs_ll_destroy(sq.ll); sqfs_ll_unmount(sq.chan, sq.mountpt); INFO("FUSE loop done"); return exit_code; } /* Mount the SquashFS img_path at mountpt. Exit on any errors. */ void sq_mount(const char *img_path, char *mountpt) { // SquashFUSE mount takes basically a command line rather than having a // standard library API. It's unclear to me where this command line is // documented, but the libfuse docs [1] suggest mount(8). // [1]: https://libfuse.github.io/doxygen/fuse-3_810_83_2include_2fuse_8h.html#ad866b0fd4d81bdbf3e737f7273ba4520 char *mount_argv[] = {"WEIRDAL", "-d"}; int mount_argc = (verbose > 3) ? 2 : 1; // include -d if high verbosity struct fuse_args mount_args = FUSE_ARGS_INIT(mount_argc, mount_argv); sq.mountpt = mountpt; T_ (sq.chan = malloc(sizeof(sqfs_ll_chan))); sq.ll = sqfs_ll_open(img_path, 0); Te (sq.ll != NULL, "can't open SquashFS: %s; try ch-run -vv?", img_path); if (sqfs_ll_mount(sq.chan, sq.mountpt, &mount_args, &OPS, sizeof(OPS), sq.ll) != SQFS_OK) { // We get back only SQFS_OK or SQFS_ERR, with no further detail. Looking // at the source code [1], the latter says either fuse_session_new() or // fuse_session_mount() failed, but we can't tell which. // [1]: https://github.com/vasi/squashfuse/blob/74f4fe8/ll.c#L399 FATAL("unknown FUSE error mounting SquashFS; try ch-run -vv?"); } } charliecloud-0.26/bin/ch_fuse.h000066400000000000000000000002231417231051300164260ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. */ #define _GNU_SOURCE /** Function prototypes **/ void sq_fork(struct container *c); charliecloud-0.26/bin/ch_misc.c000066400000000000000000000460731417231051300164270ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include #include #include #include #include #include "config.h" #include "ch_misc.h" /** Macros **/ /* Number of supplemental GIDs we can deal with. */ #define SUPP_GIDS_MAX 128 /** Constants **/ /* Names of verbosity levels. */ const char *VERBOSE_LEVELS[] = { "error", "warning", "info", "verbose", "debug" }; /** External variables **/ /* Level of chatter on stderr desired (0-3). */ int verbose; /* Path to host temporary directory. Set during command line processing. */ char *host_tmp = NULL; /* Username of invoking users. Set during command line processing. */ char *username = NULL; /** Function prototypes (private) **/ // none /** Functions **/ /* Serialize the null-terminated vector of arguments argv and return the result as a newly allocated string. The purpose is to provide a human-readable reconstruction of a command line where each argument can also be recovered byte-for-byte; see ch-run(1) for details. */ char *argv_to_string(char **argv) { char *s = NULL; for (size_t i = 0; argv[i] != NULL; i++) { char *argv_, *x; bool quote_p = false; // Max length is escape every char plus two quotes and terminating zero. T_ (argv_ = calloc(2 * strlen(argv[i]) + 3, 1)); // Copy to new string, escaping as we go. Note lots of fall-through. I'm // not sure where this list of shell meta-characters came from; I just // had it on hand already from when we were deciding on the image // reference transformation for filesystem paths. for (size_t ji = 0, jo = 0; argv[i][ji] != 0; ji++) { char c = argv[i][ji]; if (isspace(c) || !isascii(c) || !isprint(c)) quote_p = true; switch (c) { case '!': // history expansion case '"': // string delimiter case '$': // variable expansion case '\\': // escape character case '`': // output expansion argv_[jo++] = '\\'; case '#': // comment case '%': // job ID case '&': // job control case '\'': // string delimiter case '(': // subshell grouping case ')': // subshell grouping case '*': // globbing case ';': // command separator case '<': // redirect case '=': // globbing case '>': // redirect case '?': // globbing case '[': // globbing case ']': // globbing case '^': // command “quick substitution” case '{': // command grouping case '|': // pipe case '}': // command grouping case '~': // home directory expansion quote_p = true; default: argv_[jo++] = c; break; } } if (quote_p) { x = argv_; T_ (1 <= asprintf(&argv_, "\"%s\"", argv_)); free(x); } if (i != 0) { x = s; s = cat(s, " "); free(x); } x = s; s = cat(s, argv_); free(x); free(argv_); } return s; } /* Return true if buffer buf of length size is all zeros, false otherwise. */ bool buf_zero_p(void *buf, size_t size) { for (size_t i = 0; i < size; i++) if (((char *)buf)[i] != 0) return false; return true; } /* Concatenate strings a and b, then return the result. */ char *cat(const char *a, const char *b) { char *ret; if (a == NULL) a = ""; if (b == NULL) b = ""; T_ (asprintf(&ret, "%s%s", a, b) == strlen(a) + strlen(b)); return ret; } /* Read the file listing environment variables at path, and return a corresponding list of struct env_var. If there is a problem reading the file, or with any individual variable, exit with error. */ struct env_var *env_file_read(const char *path) { struct env_var *vars; FILE *fp; Tf (fp = fopen(path, "r"), "can't open: %s", path); vars = list_new(sizeof(struct env_var), 0); for (size_t line_no = 1; true; line_no++) { struct env_var var; char *line; size_t line_len = 0; // don't care but required by getline(3) errno = 0; if (-1 == getline(&line, &line_len, fp)) { if (errno == 0) // EOF break; else Tf (0, "can't read: %s", path); } if (line[strlen(line) - 1] == '\n') // rm newline if present line[strlen(line) - 1] = 0; if (line[0] == 0) // skip blank lines continue; var = env_var_parse(line, path, line_no); list_append((void **)&vars, &var, sizeof(var)); } Zf (fclose(fp), "can't close: %s", path); return vars; } /* Set environment variable name to value. If expand, then further expand variables in value marked with "$" as described in the man page. */ void env_set(const char *name, const char *value, const bool expand) { char *value_, *value_expanded; bool first_written; // Walk through value fragments separated by colon and expand variables. T_ (value_ = strdup(value)); value_expanded = ""; first_written = false; while (true) { // loop executes ≥ once char *fgmt = strsep(&value_, ":"); // NULL -> no more items if (fgmt == NULL) break; if (expand && fgmt[0] == '$' && fgmt[1] != 0) { fgmt = getenv(fgmt + 1); // NULL if unset if (fgmt != NULL && fgmt[0] == 0) fgmt = NULL; // convert empty to unset } if (fgmt != NULL) { // NULL -> omit from output if (first_written) value_expanded = cat(value_expanded, ":"); value_expanded = cat(value_expanded, fgmt); first_written = true; } } // Save results. INFO("environment: %s=%s", name, value_expanded); Z_ (setenv(name, value_expanded, 1)); } /* Remove variables matching glob from the environment. This is tricky, because there is no standard library function to iterate through the environment, and the environ global array can be re-ordered after unsetenv(3) [1]. Thus, the only safe way without additional storage is an O(n^2) search until no matches remain. Our approach is O(n): we build up a copy of environ, skipping variables that match the glob, and then assign environ to the copy. (This is a valid thing to do [2].) [1]: https://unix.stackexchange.com/a/302987 [2]: http://man7.org/linux/man-pages/man3/exec.3p.html */ void env_unset(const char *glob) { char **new_environ = list_new(sizeof(char *), 0); for (size_t i = 0; environ[i] != NULL; i++) { char *name, *value; int matchp; split(&name, &value, environ[i], '='); T_ (name != NULL); // environ entries must always have equals matchp = fnmatch(glob, name, 0); if (matchp == 0) { INFO("environment: unset %s", name); } else { T_ (matchp == FNM_NOMATCH); *(value - 1) = '='; // rejoin line list_append((void **)&new_environ, &name, sizeof(name)); } } environ = new_environ; } /* Parse the environment variable in line and return it as a struct env_var. Exit with error on syntax error; if path is non-NULL, attribute the problem to that path at line_no. Note: Trailing whitespace such as newline is *included* in the value. */ struct env_var env_var_parse(const char *line, const char *path, size_t lineno) { char *name, *value, *where; if (path == NULL) { T_ (where = strdup(line)); } else { T_ (1 <= asprintf(&where, "%s:%lu", path, lineno)); } // Split line into variable name and value. split(&name, &value, line, '='); Te (name != NULL, "can't parse variable: no delimiter: %s", where); Te (name[0] != 0, "can't parse variable: empty name: %s", where); free(where); // for Tim // Strip leading and trailing single quotes from value, if both present. if ( strlen(value) >= 2 && value[0] == '\'' && value[strlen(value) - 1] == '\'') { value[strlen(value) - 1] = 0; value++; } return (struct env_var){ name, value }; } /* Copy the buffer of size size pointed to by new into the last position in the zero-terminated array of elements with the same size on the heap pointed to by *ar, reallocating it to hold one more element and setting list to the new location. *list can be NULL to initialize a new list. Return the new array size. Note: ar must be cast, e.g. "list_append((void **)&foo, ...)". Warning: This function relies on all pointers having the same representation, which is true on most modern machines but is not guaranteed by the standard [1]. We could instead return the new value of ar rather than using an out parameter, which would avoid the double pointer and associated non-portability but make it easy for callers to create dangling pointers, i.e., after "a = list_append(b, ...)", b will dangle. That problem could in turn be avoided by returning a *copy* of the array rather than a modified array, but then the caller has to deal with the original array itself. It seemed to me the present behavior was the best trade-off. [1]: http://www.c-faq.com/ptrs/genericpp.html */ void list_append(void **ar, void *new, size_t size) { int ct; T_ (new != NULL); // count existing elements if (*ar == NULL) ct = 0; else for (ct = 0; !buf_zero_p((char *)*ar + ct*size, size); ct++) ; T_ (*ar = realloc(*ar, (ct+2)*size)); // existing + new + terminator memcpy((char *)*ar + ct*size, new, size); // append new (no overlap) memset((char *)*ar + (ct+1)*size, 0, size); // set new terminator } /* Return a pointer to a new, empty zero-terminated array containing elements of size size, with room for ct elements without re-allocation. The latter allows to pre-allocate an arbitrary number of slots in the list, which can then be filled directly without testing the list's length for each one. (The list is completely filled with zeros, so every position has a terminator after it.) */ void *list_new(size_t size, size_t ct) { void *list; T_ (list = calloc(ct+1, size)); return list; } /* If verbose, print uids and gids on stderr prefixed with where. */ void log_ids(const char *func, int line) { uid_t ruid, euid, suid; gid_t rgid, egid, sgid; gid_t supp_gids[SUPP_GIDS_MAX]; int supp_gid_ct; if (verbose >= 3) { Z_ (getresuid(&ruid, &euid, &suid)); Z_ (getresgid(&rgid, &egid, &sgid)); fprintf(stderr, "%s %d: uids=%d,%d,%d, gids=%d,%d,%d + ", func, line, ruid, euid, suid, rgid, egid, sgid); supp_gid_ct = getgroups(SUPP_GIDS_MAX, supp_gids); if (supp_gid_ct == -1) { T_ (errno == EINVAL); Te (0, "more than %d groups", SUPP_GIDS_MAX); } for (int i = 0; i < supp_gid_ct; i++) { if (i > 0) fprintf(stderr, ","); fprintf(stderr, "%d", supp_gids[i]); } fprintf(stderr, "\n"); } } /* Create directories in path under base. Exit with an error if anything goes wrong. For example, mkdirs("/foo", "/bar/baz") will create directories /foo/bar and /foo/bar/baz if they don't already exist, but /foo must exist already. Symlinks are followed. path must remain under base, i.e. you can't use symlinks or ".." to climb out. denylist is a null-terminated array of paths under which no directories may be created, or NULL if none. */ void mkdirs(const char *base, const char *path, char **denylist) { char *basec, *component, *next, *nextc, *pathw, *saveptr; char *denylist_null[] = { NULL }; struct stat sb; T_ (base[0] != 0 && path[0] != 0); // no empty paths T_ (base[0] == '/' && path[0] == '/'); // absolute paths only if (denylist == NULL) denylist = denylist_null; // literal here causes intermittent segfaults basec = realpath_safe(base); DEBUG("mkdirs: base: %s", basec); DEBUG("mkdirs: path: %s", path); for (size_t i = 0; denylist[i] != NULL; i++) DEBUG("mkdirs: deny: %s", denylist[i]); pathw = cat(path, ""); // writeable copy saveptr = NULL; // avoid warning (#1048; see also strtok_r(3)) component = strtok_r(pathw, "/", &saveptr); nextc = basec; while (component != NULL) { next = cat(nextc, "/"); next = cat(next, component); // canonical except for last component DEBUG("mkdirs: next: %s", next) component = strtok_r(NULL, "/", &saveptr); // next NULL if current last if (path_exists(next, &sb, false)) { if (S_ISLNK(sb.st_mode)) { char buf; // we only care if absolute Tf (1 == readlink(next, &buf, 1), "can't read symlink: %s", next); Tf (buf != '/', "can't mkdir: symlink not relative: %s", next); Te (path_exists(next, &sb, true), // resolve symlink "can't mkdir: broken symlink: %s", next); } Tf (S_ISDIR(sb.st_mode) || !component, // last component not dir OK "can't mkdir: exists but not a directory: %s", next); nextc = realpath_safe(next); DEBUG("mkdirs: exists, canonical: %s", nextc); } else { Te (path_subdir_p(basec, next), "can't mkdir: %s not subdirectory of %s", next, basec); for (size_t i = 0; denylist[i] != NULL; i++) Ze (path_subdir_p(denylist[i], next), "can't mkdir: %s under existing bind-mount %s", next, denylist[i]); Zf (mkdir(next, 0777), "can't mkdir: %s", next); nextc = next; // canonical b/c we just created last component as dir DEBUG("mkdirs: created: %s", nextc) } } DEBUG("mkdirs: done"); } /* Print a formatted message on stderr if the level warrants it. Levels: 0 : "error" : always print; exit unsuccessfully afterwards 1 : "warning" : always print 2 : "info" : print if verbose >= 2 3 : "verbose" : print if verbose >= 3 4 : "debug" : print if verbose >= 4 */ void msg(int level, const char *file, int line, int errno_, const char *fmt, ...) { va_list ap; if (level > verbose) return; fprintf(stderr, "%s[%d]: ", program_invocation_short_name, getpid()); if (level <= 1 && fmt != NULL) fprintf(stderr, "%s: ", VERBOSE_LEVELS[level]); if (fmt == NULL) fputs(VERBOSE_LEVELS[level], stderr); else { va_start(ap, fmt); vfprintf(stderr, fmt, ap); va_end(ap); } if (errno_) fprintf(stderr, ": %s (%s:%d %d)\n", strerror(errno_), file, line, errno_); else fprintf(stderr, " (%s:%d)\n", file, line); if (level == 0) exit(EXIT_FAILURE); } /* Return true if the given path exists, false otherwise. On error, exit. If statbuf is non-null, store the result of stat(2) there. If follow_symlink is true and the last component of path is a symlink, stat(2) the target of the symlnk; otherwise, lstat(2) the link itself. */ bool path_exists(const char *path, struct stat *statbuf, bool follow_symlink) { struct stat statbuf_; if (statbuf == NULL) statbuf = &statbuf_; if (follow_symlink) { if (stat(path, statbuf) == 0) return true; } else { if (lstat(path, statbuf) == 0) return true; } Tf (errno == ENOENT, "can't stat: %s", path); return false; } /* Return the mount flags of the file system containing path, suitable for passing to mount(2). This is messy because, the flags we get from statvfs(3) are ST_* while the flags needed by mount(2) are MS_*. My glibc has a comment in bits/statvfs.h that the ST_* "should be kept in sync with" the MS_* flags, and the values do seem to match, but there are additional undocumented flags in there. Also, the kernel contains a test "unprivileged-remount-test.c" that manually translates the flags. Thus, I wasn't comfortable simply passing the output of statvfs(3) to mount(2). */ unsigned long path_mount_flags(const char *path) { struct statvfs sv; unsigned long known_flags = ST_MANDLOCK | ST_NOATIME | ST_NODEV | ST_NODIRATIME | ST_NOEXEC | ST_NOSUID | ST_RDONLY | ST_RELATIME | ST_SYNCHRONOUS; Z_ (statvfs(path, &sv)); Ze (sv.f_flag & ~known_flags, "unknown mount flags: 0x%lx %s", sv.f_flag & ~known_flags, path); return (sv.f_flag & ST_MANDLOCK ? MS_MANDLOCK : 0) | (sv.f_flag & ST_NOATIME ? MS_NOATIME : 0) | (sv.f_flag & ST_NODEV ? MS_NODEV : 0) | (sv.f_flag & ST_NODIRATIME ? MS_NODIRATIME : 0) | (sv.f_flag & ST_NOEXEC ? MS_NOEXEC : 0) | (sv.f_flag & ST_NOSUID ? MS_NOSUID : 0) | (sv.f_flag & ST_RDONLY ? MS_RDONLY : 0) | (sv.f_flag & ST_RELATIME ? MS_RELATIME : 0) | (sv.f_flag & ST_SYNCHRONOUS ? MS_SYNCHRONOUS : 0); } /* Split path into dirname and basename. */ void path_split(const char *path, char **dir, char **base) { char *path2; T_ (path2 = strdup(path)); *dir = dirname(path2); T_ (path2 = strdup(path)); *base = basename(path2); } /* Return true if path is a subdirectory of base, false otherwise. Acts on the paths as given, with no canonicalization or other reference to the filesystem. For example: path_subdir_p("/foo", "/foo/bar") => true path_subdir_p("/foo", "/bar") => false path_subdir_p("/foo/bar", "/foo/b") => false */ bool path_subdir_p(const char *base, const char *path) { int base_len = strlen(base); if (base_len > strlen(path)) return false; if (!strcmp(base, "/")) // below logic breaks if base is root return true; return ( !strncmp(base, path, base_len) && (path[base_len] == '/' || path[base_len] == 0)); } /* Like realpath(3), but exit with error on failure. */ char *realpath_safe(const char *path) { char *pathc; pathc = realpath(path, NULL); Tf (pathc != NULL, "can't canonicalize: %s", path); return pathc; } /* Split string str at first instance of delimiter del. Set *a to the part before del, and *b to the part after. Both can be empty; if no token is present, set both to NULL. Unlike strsep(3), str is unchanged; *a and *b point into a new buffer allocated with malloc(3). This has two implications: (1) the caller must free(3) *a but not *b, and (2) the parts can be rejoined by setting *(*b-1) to del. The point here is to provide an easier wrapper for strsep(3). */ void split(char **a, char **b, const char *str, char del) { char *tmp; char delstr[2] = { del, 0 }; T_ (str != NULL); tmp = strdup(str); *b = tmp; *a = strsep(b, delstr); if (*b == NULL) *a = NULL; } /* Report the version number. */ void version(void) { fprintf(stderr, "%s\n", VERSION); } charliecloud-0.26/bin/ch_misc.h000066400000000000000000000102021417231051300164150ustar00rootroot00000000000000/* Copyright © Triad National Security, LLC, and others. This interface contains miscellaneous utility features. It is separate so that peripheral Charliecloud C programs don't have to link in the extra libraries that ch_core requires. */ #define _GNU_SOURCE #include #include #include /** Macros **/ /* Log the current UIDs. */ #define LOG_IDS log_ids(__func__, __LINE__) /* Test some value, and if it's not what we expect, exit with an error. These are macros so we have access to the file and line number. verify x is true (non-zero); otherwise print then exit: T_ (x) default error message including file, line, errno Tf (x, fmt, ...) printf-style message followed by file, line, errno Te (x, fmt, ...) same without errno verify x is zero (false); otherwise print as above & exit Z_ (x) Zf (x, fmt, ...) Ze (x, fmt, ...) errno is omitted if it's zero. Examples: Z_ (chdir("/does/not/exist")); -> ch-run: error: No such file or directory (ch-run.c:138 2) Zf (chdir("/does/not/exist"), "foo"); -> ch-run: foo: No such file or directory (ch-run.c:138 2) Ze (chdir("/does/not/exist"), "foo"); -> ch-run: foo (ch-run.c:138) errno = 0; Zf (0, "foo"); -> ch-run: foo (ch-run.c:138) Typically, Z_ and Zf are used to check system and standard library calls, while T_ and Tf are used to assert developer-specified conditions. errno is not altered by these macros unless they exit the program. FIXME: It would be nice if we could collapse these to fewer macros. However, when looking into that I ended up in preprocessor black magic (e.g. https://stackoverflow.com/a/2308651) that I didn't understand. */ #define T_(x) if (!(x)) msg(0, __FILE__, __LINE__, errno, NULL) #define Tf(x, ...) if (!(x)) msg(0, __FILE__, __LINE__, errno, __VA_ARGS__) #define Te(x, ...) if (!(x)) msg(0, __FILE__, __LINE__, 0, __VA_ARGS__) #define Z_(x) if (x) msg(0, __FILE__, __LINE__, errno, NULL) #define Zf(x, ...) if (x) msg(0, __FILE__, __LINE__, errno, __VA_ARGS__) #define Ze(x, ...) if (x) msg(0, __FILE__, __LINE__, 0, __VA_ARGS__) #define FATAL(...) msg(0, __FILE__, __LINE__, 0, __VA_ARGS__); #define WARNING(...) msg(1, __FILE__, __LINE__, 0, __VA_ARGS__); #define INFO(...) msg(2, __FILE__, __LINE__, 0, __VA_ARGS__); #define VERBOSE(...) msg(3, __FILE__, __LINE__, 0, __VA_ARGS__); #define DEBUG(...) msg(4, __FILE__, __LINE__, 0, __VA_ARGS__); /** Types **/ enum env_action { ENV_END = 0, // terminate list of environment changes ENV_SET_DEFAULT, // set by /ch/environment within image ENV_SET_VARS, // set by list of variables ENV_UNSET_GLOB }; // unset glob matches struct env_var { char *name; char *value; }; struct env_delta { enum env_action action; union { struct env_var *vars; // ENV_SET_VARS char *glob; // ENV_UNSET_GLOB } arg; }; /** External variables **/ extern int verbose; extern char *host_tmp; extern char *username; /** Function prototypes **/ char *argv_to_string(char **argv); bool buf_zero_p(void *buf, size_t size); char *cat(const char *a, const char *b); struct env_var *env_file_read(const char *path); void env_set(const char *name, const char *value, const bool expand); void env_unset(const char *glob); struct env_var env_var_parse(const char *line, const char *path, size_t lineno); void list_append(void **ar, void *new, size_t size); void *list_new(size_t size, size_t ct); void log_ids(const char *func, int line); void mkdirs(const char *base, const char *path, char **denylist); void msg(int level, const char *file, int line, int errno_, const char *fmt, ...); bool path_exists(const char *path, struct stat *statbuf, bool follow_symlink); unsigned long path_mount_flags(const char *path); void path_split(const char *path, char **dir, char **base); bool path_subdir_p(const char *base, const char *path); char *realpath_safe(const char *path); void split(char **a, char **b, const char *str, char del); void version(void); charliecloud-0.26/configure.ac000066400000000000000000000664221417231051300163740ustar00rootroot00000000000000# Gotchas: # # 1. Quadrigraphs. M4 consumes a number of important special characters, so # Autoconf uses 4-character sequences, e.g. "@%:@" is the octothorpe (#). # See: https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Quadrigraphs.html # # 2. Booleans. The convention for Autoconf variables, which we follow, is # "yes" for true and "no" for false. This differs from the Charliecloud # convention of non-empty for true and empty for false. ### Prologue ################################################################# AC_INIT([Charliecloud], [m4_esyscmd_s([misc/version])], [https://github.com/hpc/charliecloud]) AC_PREREQ([2.69]) AC_CONFIG_SRCDIR([bin/ch-run.c]) AC_CONFIG_AUX_DIR([build-aux]) AC_CONFIG_MACRO_DIRS([misc/m4]) AC_CANONICAL_BUILD AC_CANONICAL_HOST AC_CANONICAL_TARGET AS_CASE([$host_os], [linux*], [], [*], [AC_MSG_ERROR([Linux is the only supported OS; see issue @%:@42.])] ) # Turn off maintainer mode by default. This appears to be controversial; see # issue #595 for links to some discussion. # # Bottom line for me: Maintainer mode has (1) never re-built the build system # in a situation where I felt it helped, but (2) fairly regularly re-builds or # re-configures at surprising times. # # In particular, it often rebuilds before "make clean" and friends, e.g. if # you change branches and then clean. This seems wrong. In my view, clean # should remove what is currently there, not what *would have been there* had # the build used a different, not-yet-existing build system. Disabling # maintainer mode also lets us put "make maintainer-clean" in autogen.sh # without triggering spurious rebuilds. AM_MAINTAINER_MODE([disable]) # By default, Autotools honors umask for directories but not files. Thus, if # you "sudo make install" with a umask more restrictive than 0022, the result # is an installation unavailable to most users (issue #947). This appears to # be a somewhat common complaint. # # Our workaround is to set the "mkdir -p" command [1]. (Note those # instructions also mention a different variable ac_cv_path_mkdir, but I # couldn't figure out how to set it.) This needs to be before AM_INIT_AUTOMAKE # because that macro does something with the value. We use "install -d" rather # than "mkdir -m" because the latter still uses only umask for intermediate # directories [2]. # # This can still be overridden on the configure command line; for example, to # restore the previous behavior, use "./configure MKDIR_P='mkdir -p'" [3]. # # [1]: https://unix.stackexchange.com/a/436000 # [2]: http://gnu-automake.7480.n7.nabble.com/bug-12130-sudo-make-install-applies-umask-to-new-directories-tp18545p18548.html # [3]: https://lists.gnu.org/archive/html/automake/2004-01/msg00013.html MKDIR_P=${MKDIR_P:-install -d -m 0755} AM_INIT_AUTOMAKE([1.13 -Wall -Werror foreign subdir-objects]) AC_CONFIG_HEADERS([bin/config.h]) AC_CONFIG_FILES([Makefile bin/Makefile doc/Makefile examples/Makefile lib/Makefile misc/Makefile packaging/Makefile test/Makefile]) ### Options ################################################################## # Note: Variables must match option, e.g. --disable-foo-bar => enable_foo_bar. # Note: --with-sphinx-build provided by AX_WITH_PROG() below. AC_ARG_ENABLE([buggy-build], AS_HELP_STRING( [--enable-buggy-build], [omit -Werror; please see docs before use!]), [AS_CASE([$enableval], [yes], [use_werror=no], [no], [use_werror=yes], [*], [AC_MSG_ERROR([--enable-buggy-build: bad argument: $enableval])] )], [use_werror=yes]) AC_ARG_ENABLE([bundled-lark], AS_HELP_STRING([--disable-bundled-lark], [use system Lark (not recommended; see docs!)]), [], [enable_bundled_lark=yes]) AC_ARG_ENABLE([ch-image], AS_HELP_STRING([--disable-ch-image], [ch-image unprivileged builder & image manager]), [], [enable_ch_image=yes]) AC_ARG_ENABLE([html], AS_HELP_STRING([--disable-html], [HTML documentation]), [], [enable_html=yes]) AC_ARG_ENABLE([man], AS_HELP_STRING([--disable-man], [man pages]), [], [enable_man=yes]) AC_ARG_ENABLE([syslog], AS_HELP_STRING([--disable-syslog], [logging to syslog]), [], [enable_syslog=yes]) AC_ARG_ENABLE([test], AS_HELP_STRING([--disable-test], [test suite]), [], [enable_test=yes]) AC_ARG_WITH([libsquashfuse], AS_HELP_STRING([--with-libsquashfuse=foo], [foo])) AS_CASE([$with_libsquashfuse], [yes], # explicit “yes” [want_libsquashfuse=yes need_libsquashfuse=yes], [no], # explicit “no” [want_libsquashfuse=no need_libsquashfuse=no], [''], # option not specified [want_libsquashfuse=yes need_libsquashfuse=no], [*], # explicit path to libsquashfuse install [want_libsquashfuse=yes need_libsquashfuse=yes lib_libsquashfuse=$with_libsquashfuse/lib inc_libsquashfuse=$with_libsquashfuse/include]) AC_ARG_WITH([python], AS_HELP_STRING( [--with-python=SHEBANG], [Python shebang to use for scripts (default: "/usr/bin/env python3")]), [PYTHON_SHEBANG="$withval"], [PYTHON_SHEBANG='/usr/bin/env python3']) # Can't deduce shebang from Gentoo "sphinx-python"; allow override. See #629. AC_ARG_WITH([sphinx-python], AS_HELP_STRING( [--with-sphinx-python=SHEBANG], [Python shebang used by Sphinx (default: deduced from sphinx-build executable]]), [sphinx_python="$withval"], [sphinx_python='']) ### Feature test macros ###################################################### # Macro to validate executable versions. Arguments: # # $1 name of variable containing executable name or absolute path # $2 minimum version # $3 append to $1 to make shell pipeline to get actual version only # (e.g., without program name) # # This macro is not able to determine if a program exists, only whether its # version is sufficient. ${!1} (i.e, the value of the variable whose name is # stored in $1) must be either empty, an absolute path to an executable, or # the name of a program in $PATH. A prior macro such as AX_WITH_PROG can be # used to ensure this condition. # # If ${!1} is an absolute path, and that file isn't executable, error out. If # it's something other than an absolute path, assume it's the name of a # program in $PATH; if not, the behavior is undefined but not good (FIXME). # # Post-conditions: # # 1. If ${!1} is non-empty and the version reported by the program is # greater than or equal to the minimum, ${!1} is unchanged. If ${!1} is # empty or reported version is insufficient, ${!1} is the empty string. # This lets you test version sufficiency by whether ${!1} is empty. # # 2. $1_VERSION_NOTE contains a brief explanatory note. # AC_DEFUN([CH_CHECK_VERSION], [ AS_VAR_PUSHDEF([prog], [$1]) AS_IF([test -n "$prog"], [ # ${!1} is non-empty AS_CASE([$prog], # absolute path; check if executable [/*], [AC_MSG_CHECKING([if $prog is executable]) AS_IF([test -e "$prog"], [AC_MSG_RESULT([ok])], [AC_MSG_RESULT([no]) AC_MSG_ERROR([must be executable])])]) AC_MSG_CHECKING([if $prog version >= $2]) vact=$($prog $3) AX_COMPARE_VERSION([$2], [le], [$vact], [ AC_SUBST([$1_VERSION_NOTE], ["ok ($vact)"]) AC_MSG_RESULT([ok ($vact)]) ], [ AC_SUBST([$1_VERSION_NOTE], ["too old ($vact)"]) AC_MSG_RESULT([too old ($vact)]) AS_UNSET([$1]) ]) ], [ # ${!} is empty AC_SUBST([$1_VERSION_NOTE], ["not found"]) AS_UNSET([$1]) ]) AS_VAR_POPDEF([prog]) ]) ### C compiler ############################################################### # Need a C99 compiler. (See https://stackoverflow.com/a/28558338.) AC_PROG_CC # Set up CFLAGS. ch_cflags='-std=c99 -Wall' AS_IF([test -n "$lib_libsquashfuse"], [ch_cflags+=" -I$inc_libsquashfuse -L$lib_libsquashfuse" # Without this, clang fails with "error: argument unused during # compilation" on the -L. GCC ignores it. ch_cflags+=' -Wno-unused-command-line-argument']) AS_IF([test $use_werror = yes], [ch_cflags+=' -Werror']) AX_CHECK_COMPILE_FLAG([$ch_cflags], [ CFLAGS+=" $ch_cflags" ], [ AC_MSG_ERROR([no suitable C99 compiler found]) ]) AS_IF([test "$CC" = icc], [AC_MSG_ERROR([icc not supported (see PR @%:@481)])]) ### ch-run required ########################################################## # Only ch-run needs any kind of interesting library stuff; this variable holds # the library arguments we need. This also requires us to use AC_CHECK_LIB # instead of the (recommended by docs) AC_SEARCH_LIBS, because that adds # things to LIBS, which we don't want because it's applied to all executables. CH_RUN_LIBS= # asprintf(3) # # You can do this with AC_CHECK_FUNC or AC_CHECK_FUNCS, but those macros call # the function with no arguments. This causes a warning for asprintf() for # some compilers (and I have no clue why others accept it); see issue #798. # Instead, try to build a small test program that calls asprintf() correctly. AC_MSG_CHECKING([for asprintf]) AC_COMPILE_IFELSE([AC_LANG_SOURCE([[ #define _GNU_SOURCE #include #include int main(void) { char *p; if (asprintf(&p, "WEIRD AL YANKOVIC\n") >= 0) free(p); return 0; } ]])], [AC_MSG_RESULT([yes])], [AC_MSG_RESULT([no]) AC_MSG_ERROR([asprintf() not found, and we have no workaround; see config.log.])]) # pthreads; needed for "ch-run --join". AX_PTHREAD # POSIX IPC lives in librt. AC_CHECK_LIB([rt], [shm_open], [CH_RUN_LIBS="-lrt $CH_RUN_LIBS"], [ AC_MSG_ERROR([shm_open(3) not found]) ]) # User namespaces AC_MSG_CHECKING([if in chroot]) # https://unix.stackexchange.com/a/14346 AS_IF([test "$(awk '$5=="/" {print $1}' int main(void) { if (unshare(CLONE_NEWNS|CLONE_NEWUSER)) return 1; // syscall failed else return 0; // syscall succeeded } ]])], [have_userns=yes], [have_userns=no], [AC_MSG_ERROR([cross-compilation not supported])]) AC_MSG_RESULT($have_userns) ### SquashFS ################################################################# # SquashFS Tools vmin_mksquashfs=4.2 # CentOS 7 AC_CHECK_PROG([MKSQUASHFS], [mksquashfs], [mksquashfs]) CH_CHECK_VERSION([MKSQUASHFS], [$vmin_mksquashfs], [-version | head -1 | cut -d' ' -f3]) # SquashFUSE executables vmin_squashfuse=0.1.100 # Ubuntu 16.04 (Xenial). CentOS 7 has 0.1.102. AC_CHECK_PROG([SQUASHFUSE], [squashfuse], [squashfuse]) CH_CHECK_VERSION([SQUASHFUSE], [$vmin_squashfuse], [--help 2>&1 | head -1 | cut -d' ' -f2]) # libfuse3 AC_CHECK_LIB([fuse3], [fuse_set_signal_handlers], [have_libfuse3=yes], [have_libfuse3=no]) # libsquashfuse AC_CHECK_LIB([squashfuse_ll], [sqfs_ll_mount], [have_libsquashfuse_ll=yes], [have_libsquashfuse_ll=no]) AC_CHECK_HEADER([squashfuse/ll.h], [have_ll_h=yes], [have_ll_h=no], [#define SQFS_CONFIG_H #define FUSE_USE_VERSION 32 ]) # see comment in ch_fuse.c regarding these defines # Should we link with libsquashfuse? AS_IF([ test $want_libsquashfuse = yes \ && test $have_libfuse3 = yes \ && test $have_libsquashfuse_ll = yes \ && test $have_ll_h = yes], [have_libsquashfuse=yes CH_RUN_LIBS="-lsquashfuse_ll -lfuse3 -Wl,-rpath=$lib_libsquashfuse $CH_RUN_LIBS"], [have_libsquashfuse=no]) AS_IF([ test $need_libsquashfuse = yes \ && test $have_libsquashfuse = no], [AC_MSG_ERROR([libsquashfuse requested but not found])]) # Any SquashFUSE support at all? AS_IF([ test -n "$SQUASHFUSE" \ || test $have_libsquashfuse = yes], [have_any_squashfuse=yes], [have_any_squashfuse=no]) ### ch-image ################################################################# # Python vmin_python=3.6 # NOTE: Keep in sync with lib/charliecloud.py AC_MSG_CHECKING([if "$PYTHON_SHEBANG" starts with slash]) AS_CASE([$PYTHON_SHEBANG], [/*], [AC_MSG_RESULT([ok])], [*], [AC_MSG_RESULT([no]) AC_MSG_ERROR([--with-python: must start with slash])]) python="${PYTHON_SHEBANG#/usr/bin/env }" # use shell to find it AS_CASE([$python], [/*], [PYTHON="$python"], # absolute [*], [AC_CHECK_PROG([PYTHON], [$python], [$python])] # verify it's in $PATH ) CH_CHECK_VERSION([PYTHON], [$vmin_python], [--version | head -1 | cut -d' ' -f2]) # Python module "requests" vmin_requests=2.6.0 # CentOS 7; FIXME: haven't actually tested this AS_IF([test -n "$PYTHON"], [ AC_MSG_CHECKING([for requests module]) cat <&1) #AS_IF([ test -z "$sudo_out" \ # || echo "$sudo_out" | grep -Fq asswor], # [have_sudo=yes], # [have_sudo=no]) #AC_MSG_RESULT($have_sudo) # Wget vmin_wget=1.11 # 2008 AC_CHECK_PROG([WGET], [wget], [wget]) CH_CHECK_VERSION([WGET], [$vmin_wget], [--version | head -1 | cut -d' ' -f3]) ### Output variables ######################################################### # Autotools output variables are ... interesting. This is my best # understanding: # # 1. AC_SUBST(foo) does two things in Makefile.am: # # a. Replace the string "@foo@" with the value of foo anywhere it # appears. # # b. Set the Make variable foo to the same value, i.e., add "foo = @foo@" # which is then substituted as in item 1. # # So this is how you transfer a variable from configure to Make. # # 2. AC_SUBST_NOTMAKE(foo) does only (a). # # 3. AM_CONDITIONAL(foo, test) creates a variable for use in Automake # conditionals. E.g. if you say in configure.ac: # # AM_CONDITIONAL([foo], [test $foo = yes]) # # and then in Makefile.am: # # if foo # ... bar ... # else # ... baz ... # endif # # then if the configure variable $foo is "yes", lines "... bar ..." will # be placed in the Makefile; otherwise, "... baz ..." will be included. # # This is how you include and exclude portions of the Makefile.am from # the output Makefile. It *does not* create a Make variable. # # 4. AC_DEFINE(foo, value, comment) #define's the preprocessor symbol foo to # value in config.h. (Supposedly value and comment are optional but I got # warnings doing that.) So this is how you make configure values # available in C code (as macros, not variables). Typically you would # define something or not (allowing #ifdef), rather than always define to # true or false (which would require #if). # # 5. AC_DEFINE_UNQUOTES adds some extra transformations to the above. I # didn't quite follow. # # Below are all the variables we want available outside configure. AM_CONDITIONAL([ENABLE_CH_IMAGE], [test $enable_ch_image = yes]) AM_CONDITIONAL([ENABLE_HTML], [test $enable_html = yes]) AM_CONDITIONAL([ENABLE_LARK], [test $enable_bundled_lark = yes]) AM_CONDITIONAL([ENABLE_MAN], [test $enable_man = yes]) AS_IF([test $enable_syslog = yes], [AC_DEFINE([ENABLE_SYSLOG], [1], [log to syslog])]) AM_CONDITIONAL([ENABLE_TEST], [test $enable_test = yes]) AC_SUBST([CH_RUN_LIBS]) AC_SUBST([PYTHON_SHEBANG]) AC_SUBST([SPHINX]) AM_CONDITIONAL([HAVE_LIBSQUASHFUSE], [test $have_libsquashfuse = yes]) AS_IF([test $have_libsquashfuse = yes], [AC_DEFINE([HAVE_LIBSQUASHFUSE], [1], [link with libsquashfuse])]) ### Prepare report ########################################################### # FIXME: Should replace all these with macros? # ch-run (needed below) AS_IF([ test $have_userns = yes], [have_ch_run=yes], [have_ch_run=no]) # image builders AS_IF([ test -n "$BUILDAH"], [have_buildah=yes], [have_buildah=no]) AS_IF([ test $enable_ch_image = yes \ && test -n "$PYTHON" \ && test -n "$PYTHON_SHEBANG" \ && test -n "$REQUESTS" \ && test $have_ch_run = yes], [have_ch_image=yes], [have_ch_image=no]) AS_IF([ test -n "$DOCKER" \ && test -n "$MKTEMP"], [have_docker=yes], [have_docker=no]) # managing container images AS_IF([ test $have_buildah = yes \ || test $have_ch_image = yes \ || test $have_docker = yes], [have_any_builder=yes], [have_any_builder=no]) AS_IF([ test $have_any_builder = yes], [have_ch_build=yes], [have_ch_build=no]) AS_IF([ test $have_any_builder = yes], [have_builder_to_tar=yes], [have_builder_to_tar=no]) AS_IF([ test $have_any_builder = yes \ && test -n "$MKSQUASHFS"], [have_pack_squash=yes], [have_pack_squash=no]) # running containers AS_IF([ test -n "$NVIDIA_CLI" \ && test $have_nvidia_libs = yes], [have_nvidia=yes], [have_nvidia=no]) # test suite AS_IF([ test $enable_test = yes \ && test $have_ch_run = yes \ && test $have_any_builder = yes \ && test -n "$_BASH" \ && test -n "$BATS" \ && test -n "$WGET"], # assume access to Docker Hub or mirror [have_tests_basic=yes], [have_tests_basic=no]) AS_IF([ test $have_tests_basic = yes \ && test $have_docs = yes \ && test -n "$SHELLCHECK" ], # assume we do have generic sudo [have_tests_more=yes], [have_tests_more=no]) AS_IF([ test $have_tests_basic = yes \ && test $have_tests_more = yes], [have_tests_tar=yes], [have_tests_tar=no]) AS_IF([ test $have_tests_basic = yes \ && test $have_tests_more = yes \ && test $have_pack_squash = yes ], [have_tests_squashunpack=yes], [have_tests_squashunpack=no]) AS_IF([ test $have_tests_squashunpack = yes \ && test $have_libsquashfuse = yes], [have_tests_squashmount=yes], [have_tests_squashmount=no]) ### Write output files ####################################################### AC_OUTPUT ## Print report AS_IF([ test $have_userns = no \ && test $chrooted = yes], [ chroot_warning=$(cat <<'EOF' Warning: configure is running in a chroot, but user namespaces cannot be created in a chroot; see the man page unshare(2). Therefore, the above may be a false negative. However, note that like all the run-time configure tests, this is informational only and does not affect the build. EOF ) ]) AC_MSG_NOTICE([ Dependencies report =================== Below is a summary of configure's findings. Caveats ~~~~~~~ Charliecloud's run-time dependencies are lazy; features just try to use their dependencies and error if there's a problem. This report summarizes what configure found on *this system*, because that's often useful, but none of the run-time findings change what is built and installed. Listed versions are minimums. These are a bit fuzzy. Try it even if configure thinks a version is too old, and please report back to us. Building Charliecloud ~~~~~~~~~~~~~~~~~~~~~ will build and install: ch-image(1) ... ${enable_ch_image} HTML documentation ... ${enable_html} man pages ... ${enable_man} syslog ... ${enable_syslog} test suite ... ${enable_test} required: C99 compiler ... ${CC} ${CC_VERSION} ch-run(1) internal SquashFS mounting: ${have_libsquashfuse} enabled ... ${want_libsquashfuse} libfuse3 ... ${have_libfuse3} libsquashfuse_ll ... ${have_libsquashfuse_ll} ll.h header ... ${have_ll_h} documentation: ${have_docs} sphinx-build(1) ≥ $vmin_sphinx ... ${SPHINX_VERSION_NOTE} sphinx-build(1) Python ... ${sphinx_python:-n/a} "docutils" module ≥ $vmin_docutils ... ${DOCUTILS_VERSION_NOTE} "sphinx-rtd-theme" module ≥ $vmin_rtd ... ${RTD_VERSION_NOTE} Building images via our wrappers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ with Buildah: ${have_buildah} Buildah ≥ $vmin_buildah ... ${BUILDAH_VERSION_NOTE} with ch-image(1): ${have_ch_image} enabled ... ${enable_ch_image} Python shebang line ... ${PYTHON_SHEBANG:-none} Python in shebang ≥ $vmin_python ... ${PYTHON_VERSION_NOTE} "lark" module ... ${lark_status} "requests" module ≥ $vmin_requests ... ${REQUESTS_VERSION_NOTE} ch-run(1) ... ${have_ch_run} with Docker: ${have_docker} Docker ≥ $vmin_docker ... ${DOCKER_VERSION_NOTE} mktemp(1) ... ${MKTEMP:-not found} Managing container images ~~~~~~~~~~~~~~~~~~~~~~~~~ build from Dockerfile with ch-build(1): ${have_ch_build} at least one builder ... ${have_any_builder} access to an image repository ... assumed yes pack images from builder storage to tarball: ${have_builder_to_tar} at least one builder ... ${have_any_builder} pack images from builder storage to SquashFS: ${have_pack_squash} at least one builder ... ${have_any_builder} mksquashfs(1) ≥ $vmin_mksquashfs ... ${MKSQUASHFS_VERSION_NOTE} Note: Pulling/pushing images from/to a repository is currently done using the builder directly. Running containers ~~~~~~~~~~~~~~~~~~ ch-run(1): ${have_ch_run} user+mount namespaces ... ${have_userns}$chroot_warning run SquashFS images: ${have_any_squashfuse} manual mount with SquashFUSE ≥ $vmin_squashfuse ... ${SQUASHFUSE_VERSION_NOTE} internal mount with libsquashfuse ... ${have_libsquashfuse} inject nVidia GPU libraries: ${have_nvidia} nvidia-container-cli(1) ≥ $vmin_nvidia_cli ... ${NVIDIA_CLI_VERSION_NOTE} nVidia libraries & executables present ... ${have_nvidia_libs} Test suite ~~~~~~~~~~ basic tests, all stages: ${have_tests_basic} test suite enabled ... ${enable_test} ch-run(1) ... ${have_ch_run} any builder above ... ${have_any_builder} access to Docker Hub or mirror ... assumed yes Bats ≥ $vmin_bats ... ${BATS_VERSION_NOTE} Bash ≥ $vmin_bash ... ${_BASH_VERSION_NOTE} wget(1) ≥ $vmin_wget ... ${WGET_VERSION_NOTE} more complete tests: ${have_tests_more} basic tests ... ${have_tests_basic} documentation built ... ${have_docs} ShellCheck ≥ $vmin_shellcheck ... ${SHELLCHECK_VERSION_NOTE} generic sudo ... assumed yes recommended tests, tar-unpack mode: ${have_tests_tar} basic tests ... ${have_tests_basic} more tests ... ${have_tests_more} recommended tests, squash-unpack mode: ${have_tests_squashunpack} basic tests ... ${have_tests_basic} more tests ... ${have_tests_more} pack/unpack SquashFS images ... ${have_pack_squash} recommended tests, squash-mount mode: ${have_tests_squashmount} recommended, squash-unpack mode: ${have_tests_squashunpack} internal SquashFS mounting ... ${have_libsquashfuse} ]) charliecloud-0.26/doc/000077500000000000000000000000001417231051300146415ustar00rootroot00000000000000charliecloud-0.26/doc/Makefile.am000066400000000000000000000124201417231051300166740ustar00rootroot00000000000000# This Makefile started with the default Makefile produced by the Sphinx # initialization process, which we then modified over time. During the # Automake-ification, I stripped out most of the boilderplate and left only # the targets that we use. # We turn off parallel build in doc: # # 1. Sphinx handles building the whole documentation internally already, as # a unit, so we shouldn't call sphinx-build more than once for different # output files at all, let alone in parallel. # # 2. Serial build is plenty fast. # # 3. There is a race condition in Sphinx < 1.6.6 that's triggered when two # instances (e.g., for html and man targets) try to "mkdir doctrees" # simultaneously. See issue #115. # # This special target was introduced in GNU Make 3.79, in April 2000. .NOTPARALLEL: EXTRA_DIST = \ bugs.rst \ charliecloud.rst \ ch-build2dir_desc.rst \ ch-build2dir.rst \ ch-build_desc.rst \ ch-builder2squash_desc.rst \ ch-builder2squash.rst \ ch-builder2tar_desc.rst \ ch-builder2tar.rst \ ch-build.rst \ ch-checkns_desc.rst \ ch-checkns.rst \ ch-convert_desc.rst \ ch-convert.rst \ ch-dir2squash_desc.rst \ ch-dir2squash.rst \ ch-fromhost_desc.rst \ ch-fromhost.rst \ ch-image_desc.rst \ ch-image.rst \ ch-pull2dir_desc.rst \ ch-pull2dir.rst \ ch-pull2tar_desc.rst \ ch-pull2tar.rst \ ch-run_desc.rst \ ch-run-oci_desc.rst \ ch-run-oci.rst \ ch-run.rst \ ch-ssh_desc.rst \ ch-ssh.rst \ ch-tar2dir_desc.rst \ ch-tar2dir.rst \ ch-test_desc.rst \ ch-test.rst \ command-usage.rst \ conf.py \ dev.rst \ faq.rst \ favicon.ico \ index.rst \ install.rst \ loc.rst \ logo-sidebar.png \ make-deps-overview \ man/README \ py_env.rst \ rd100-winner.png \ see_also.rst \ tutorial.rst if ENABLE_MAN man_MANS = \ man/charliecloud.7 \ man/ch-build.1 \ man/ch-build2dir.1 \ man/ch-builder2squash.1 \ man/ch-builder2tar.1 \ man/ch-checkns.1 \ man/ch-convert.1 \ man/ch-dir2squash.1 \ man/ch-fromhost.1 \ man/ch-image.1 \ man/ch-pull2dir.1 \ man/ch-pull2tar.1 \ man/ch-run.1 \ man/ch-run-oci.1 \ man/ch-ssh.1 \ man/ch-tar2dir.1 \ man/ch-test.1 endif if ENABLE_HTML nobase_html_DATA = \ html/searchindex.js \ html/_images/rd100-winner.png \ html/command-usage.html \ html/dev.html \ html/faq.html \ html/index.html \ html/install.html \ html/search.html \ html/tutorial.html endif # NOTE: ./html might be a Git checkout to support "make web", so make sure not # to delete it. CLEANFILES = $(man_MANS) $(nobase_html_DATA) \ _deps.rst html/.buildinfo html/.nojekyll if ENABLE_HTML # Automake can't remove directories. clean-local: rm -Rf doctrees html/_sources html/_static html/_images endif # Automake can't install and uninstall directories. _static contains around # one hundred files in several directories, and I'm pretty sure the contents # change depending on Sphinx version and other details, so we can't just list # the files. These targets deal with it as an opaque directory. The _sources # directory is another generated directory that contains references to the # input .rst files which we need for searching to work so we give it a similar # treatment. if ENABLE_HTML install-data-hook: cp -r html/_sources $(DESTDIR)$(htmldir)/html cp -r html/_static $(DESTDIR)$(htmldir)/html find $(DESTDIR)$(htmldir)/html/_sources -xtype f -exec chmod 644 {} \; find $(DESTDIR)$(htmldir)/html/_static -xtype d -exec chmod 755 {} \; find $(DESTDIR)$(htmldir)/html/_static -xtype f -exec chmod 644 {} \; uninstall-hook: test -d $(DESTDIR)$(htmldir)/html/_sources \ && rm -Rf $(DESTDIR)$(htmldir)/html/_sources \; test -d $(DESTDIR)$(htmldir)/html/_static \ && rm -Rf $(DESTDIR)$(htmldir)/html/_static \; test -d $(DESTDIR)$(htmldir)/html/_images \ && rm -Rf $(DESTDIR)$(htmldir)/html/_images \; endif # You can set these variables from the command line. SPHINXOPTS = -W SPHINXBUILD = @SPHINX@ PAPER = BUILDDIR = . # Internal variables. ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(SPHINXOPTS) . _deps.rst: ../config.log make-deps-overview cat $< | ./make-deps-overview > $@ # Since we're not doing anything in parallel anyway, just put the HTML and the # man pages in the same target, with conditionals. Gotchas: # # 1. If we build both, the HTML needs to go first otherwise it doesn't get # curly quotes. ¯\_(ツ)_/¯ # # 2. This not a "grouped target" but rather an "independent target" [1], # because the former came in GNU Make 4.3 which is quite new. However it # does seem to get run only once. # # [1]: https://www.gnu.org/software/make/manual/html_node/Multiple-Targets.html $(nobase_html_DATA) $(man_MANS): ../lib/version.txt ../README.rst _deps.rst $(EXTRA_DIST) if ENABLE_HTML $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html # Avoid GitHub messing things up with Jekyll. touch html/.nojekyll # Some output files are copies with same timestamp as source; fix. Note # we need all the HTML output files, not just the one picked in $@. touch --no-create $(nobase_html_DATA) # remove unused files that Sphinx made rm -f $(BUILDDIR)/html/_deps.html \ $(BUILDDIR)/html/charliecloud.html \ $(BUILDDIR)/html/ch-*.html \ $(BUILDDIR)/html/bugs.html \ $(BUILDDIR)/html/loc.html \ $(BUILDDIR)/html/objects.inv \ $(BUILDDIR)/html/see_also.html endif if ENABLE_MAN $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man endif charliecloud-0.26/doc/bugs.rst000066400000000000000000000003231417231051300163310ustar00rootroot00000000000000Reporting bugs ============== If Charliecloud was obtained from your Linux distribution, use your distribution's bug reporting procedures. Otherwise, report bugs to: https://github.com/hpc/charliecloud/issues charliecloud-0.26/doc/ch-build.rst000066400000000000000000000002041417231051300170560ustar00rootroot00000000000000:orphan: ch-build man page +++++++++++++++++ .. include:: ./ch-build_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-build2dir.rst000066400000000000000000000002201417231051300176350ustar00rootroot00000000000000:orphan: ch-build2dir man page +++++++++++++++++++++ .. include:: ./ch-build2dir_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-build2dir_desc.rst000066400000000000000000000021421417231051300206400ustar00rootroot00000000000000Synopsis ======== :: $ ch-build2dir -t TAG [ARGS ...] CONTEXT OUTDIR Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Build a Docker image named :code:`TAG` described by a Dockerfile (default :code:`$CONTEXT/Dockerfile`) and unpack it into :code:`OUTDIR/TAG`. This is a wrapper for :code:`ch-build`, :code:`ch-builder2tar`, and :code:`ch-tar2dir`; see also those man pages. Arguments: :code:`ARGS` additional arguments passed to :code:`ch-build` :code:`CONTEXT` Docker context directory :code:`OUTDIR` directory in which to place image directory (named :code:`TAG`) and temporary tarball :code:`-t TAG` name (tag) of Docker image to build :code:`--help` print help and exit :code:`--version` print version and exit Examples ======== To build using :code:`./Dockerfile` and create image directory :code:`/var/tmp/foo`:: $ ch-build2dir -t foo . /var/tmp Same as above, but build with a different Dockerfile:: $ ch-build2dir -t foo -f ./Dockerfile.foo . /var/tmp charliecloud-0.26/doc/ch-build_desc.rst000066400000000000000000000060751417231051300200700ustar00rootroot00000000000000Synopsis ======== :: $ ch-build [-b BUILDER] [--builder-info] -t TAG [ARGS ...] CONTEXT Description =========== .. warning:: This script is deprecated in favor of using the desired builder directly; we have added some tips for Docker to the FAQ. It will be removed in the next release. Build an image named :code:`TAG` described by a Dockerfile. Place the result into the builder's back-end storage. Using this script is *not* required for a working Charliecloud image. You can also use any builder that can produce a Linux filesystem tree directly, whether or not it is in the list below. However, this script hides the vagaries of making the supported builders work smoothly with Charliecloud and adds some conveniences (e.g., pass HTTP proxy environment variables to the build environment if the builder doesn't do this by default). Supported builders, unprivileged: * :code:`ch-image`: Our internal builder. Supported builders, privileged: * :code:`docker`: Docker. Experimental builders (i.e., the code is there but not tested much): * :code:`buildah`: Buildah in "rootless" mode with no setuid helpers, using :code:`ch-run` (via :code:`ch-run-oci`) for :code:`RUN` instructions. This mode is fully unprivileged. * :code:`buildah-runc`: Buildah in "rootless" mode with setuid helpers, using the default :code:`runc` for :code:`RUN` instructions. * :code:`buildah-setuid`: Buildah in "rootless" mode with setuid helpers, using :code:`ch-run` (via :code:`ch-run-oci`) for :code:`RUN` instructions. Specifying the builder, in descending order of priority: :code:`-b`, :code:`--builder BUILDER` Command line option. :code:`$CH_BUILDER` Environment variable Default :code:`docker` if Docker is installed; otherwise, :code:`ch-image`. Other arguments: :code:`--builder-info` Print the builder to be used and its version, then exit. :code:`-f`, :code:`--file DOCKERFILE` Dockerfile to use (default: :code:`$CONTEXT/Dockerfile`) :code:`-t TAG` Name (tag) of Docker image to build. :code:`--help` Print help and exit. :code:`--version` Print version and exit. Additional arguments are accepted and passed unchanged to the underlying builder. Bugs ==== The tag suffix :code:`:latest` is somewhat misleading, as by default neither :code:`ch-build` nor bare builders will notice if the base :code:`FROM` image has been updated. Use :code:`--pull` to make sure you have the latest base image. Examples ======== Create an image tagged :code:`foo` and specified by the file :code:`Dockerfile` located in the context directory. Use :code:`/bar` as the Docker context directory. Use the default builder. :: $ ch-build -t foo /bar Equivalent to above:: $ ch-build -t foo --file=/bar/Dockerfile /bar Instead, use :code:`/bar/Dockerfile.baz`:: $ ch-build -t foo --file=/bar/Dockerfile.baz /bar Equivalent to the first example, but use :code:`ch-image` even if Docker is installed:: $ ch-build -b ch-image -t foo /bar Equivalent to above:: $ export CH_BUILDER=ch-image $ ch-build -t foo /bar charliecloud-0.26/doc/ch-builder2squash.rst000066400000000000000000000002371417231051300207220ustar00rootroot00000000000000:orphan: ch-builder2squash man page ++++++++++++++++++++++++++ .. include:: ./ch-builder2squash_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-builder2squash_desc.rst000066400000000000000000000021601417231051300217150ustar00rootroot00000000000000Synopsis ======== :: $ ch-builder2squash [-b BUILDER] IMAGE OUTDIR [ARGS ...] Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Flattens the builder image tagged :code:`IMAGE` into a SquashFS file in :code:`OUTDIR`. Wrapper for :code:`ch-builder2tar --nocompress` and :code:`ch-tar2sqfs`. Intermediate files and directories are removed. Sudo privileges are required to run :code:`docker export`. Optional :code:`ARGS` passed to :code:`mksquashfs` unchanged. Additional arguments: :code:`--help` print help and exit :code:`--version` print version and exit Example ======= :: $ docker image list | fgrep debian REPOSITORY TAG IMAGE ID CREATED SIZE debian stretch 2d337f242f07 3 weeks ago 101MB $ ch-builder2squash debian /var/tmp Parallel mksquashfs: Using 6 processors Creating 4.0 filesystem on /var/tmp/debian.sqfs, block size 131072. [...] squashed /var/tmp/debian.sqfs OK $ ls -lh /var/tmp/debian* -rw-r--r-- 1 charlie charlie 41M Apr 23 14:37 debian.sqfs charliecloud-0.26/doc/ch-builder2tar.rst000066400000000000000000000002261417231051300202020ustar00rootroot00000000000000:orphan: ch-builder2tar man page +++++++++++++++++++++++ .. include:: ./ch-builder2tar_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-builder2tar_desc.rst000066400000000000000000000020151417231051300211760ustar00rootroot00000000000000Synopsis ======== :: $ ch-builder2tar [-b BUILDER] [--nocompress] IMAGE OUTDIR Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Flatten the builder image tagged :code:`IMAGE` into a Charliecloud tarball in directory :code:`OUTDIR`. The builder-specified environment (e.g., :code:`ENV` statements) is placed in a file in the tarball at :code:`$IMAGE/ch/environment`, in a form suitable for :code:`ch-run --set-env`. See :code:`ch-build(1)` for details on specifying the builder. Additional arguments: :code:`-b`, :code:`--builder BUILDER` Use specified builder; if not given, use :code:`$CH_BUILDER` or default. :code:`--nocompress` Do not compress tarball. :code:`--help` Print help and exit. :code:`--version` Print version and exit. Example ======= :: $ ch-builder2tar hello /var/tmp 57M /var/tmp/hello.tar.gz $ ls -lh /var/tmp -rw-r----- 1 reidpr reidpr 57M Feb 13 16:14 hello.tar.gz charliecloud-0.26/doc/ch-checkns.rst000066400000000000000000000002121417231051300173740ustar00rootroot00000000000000:orphan: ch-checkns man page +++++++++++++++++++ .. include:: ./ch-checkns_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-checkns_desc.rst000066400000000000000000000002721417231051300204000ustar00rootroot00000000000000Synopsis ======== :: $ ch-checkns Description =========== Check :code:`ch-run` prerequisites, e.g., namespaces and :code:`pivot_root(2)`. Example ======= :: $ ch-checkns ok charliecloud-0.26/doc/ch-convert.rst000066400000000000000000000002121417231051300174360ustar00rootroot00000000000000:orphan: ch-convert man page +++++++++++++++++++ .. include:: ./ch-convert_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-convert_desc.rst000066400000000000000000000134301417231051300204420ustar00rootroot00000000000000Synopsis ======== :: $ ch-convert [-i FMT] [-o FMT] [OPTION ...] IN OUT Description =========== Copy image :code:`IN` to :code:`OUT` and convert its format. Replace :code:`OUT` if it already exists, unless :code:`--no-clobber` is specified. It is an error if :code:`IN` and :code:`OUT` have the same format; use the format's own tools for that case. :code:`ch-run` can run container images that are plain directories or (optionally) SquashFS archives. However, images can take on a variety of other formats as well. The main purpose of this tool is to make images in those other formats available to :code:`ch-run`. For best performance, :code:`ch-convert` should be invoked only once, producing the final format actually needed. :code:`IN` Descriptor for the input image. For image builders, this is an image reference; otherwise, it's a filesystem path. :code:`OUT` Descriptor for the output image. :code:`-h`, :code:`--help` Print help and exit. :code:`-i`, :code:`--in-fmt FMT` Input image format is :code:`FMT`. If omitted, inferred as described below. :code:`-n`, :code:`--dry-run` Don't read the input or write the output. Useful for testing format inference. :code:`--no-clobber` Error if :code:`OUT` already exists, rather than replacing it. :code:`-o`, :code:`--out-fmt FMT` Output image format is :code:`FMT`; inferred if omitted. :code:`--tmp DIR` A sub-directory for temporary storage is created in :code:`DIR` and removed at the end of a successful conversion. **If this script crashes or errors out, the temporary directory is left behind to assist in debugging.** Storage may be needed up to twice the uncompressed size of the image, depending on the input and output formats. Default: :code:`$TMPDIR` if specified; otherwise :code:`/var/tmp`. :code:`-v`, :code:`--verbose` Print extra chatter. Can be repeated. .. Notes: 1. It's a deliberate choice to use UNIXey options rather than the Skopeo syntax [1], e.g. "-i docker" rather than "docker:image-name". [1]: https://manpages.debian.org/unstable/golang-github-containers-image/containers-transports.5.en.html 2. There used to be an [OUT_ARG ...] that would be passed unchanged to the archiver, i.e. tar(1) or mksquashfs(1). However it wasn't clear there were real use cases, and this has lots of opportunities to mess things up. Also, it's not clear when it will be called. For example, if you convert a directory to a tarball, then passing e.g. -J to XZ-compress will work fine, but when converting from Docker, we just compress the tarball we got from Docker, so in that case -J wouldn't work. 3. I also deliberately left out an option to change the output compression algorithm, under the assumption that the default is good enough. This can be revisited later IMO if needed. Image formats ============= :code:`ch-convert` knows about these values of :code:`FMT`: :code:`ch-image` Internal storage for Charliecloud's unprivileged image builder (Dockerfile interpreter) :code:`ch-image`. :code:`dir` Ordinary filesystem directory (i.e., not a mount point) containing an unpacked image. Output directories that already exist are replaced if they look like an image; otherwise, exit with an error. :code:`docker` Internal storage for Docker. :code:`squash` SquashFS filesystem archive containing the flattened image. SquashFS archives are much like tar archives but are mountable, including by :code:`ch-run`'s internal SquashFUSE mounting. Most systems have at least the SquashFS-Tools installed which allows unpacking into a directory, just like tar. Due to this greater flexibility, SquashFS is preferred to tar. **Note:** Conversions to and from SquashFS are quite noisy due to the verbosity of the underlying :code:`mksquashfs(1)` and :code:`unsquashfs(1)` tools. :code:`tar` Tar archive containing the flattened image with no layer sub-archives; i.e., the output of :code:`docker export` works but the output of :code:`docker save` does not. Output tarballs are always gzipped and must end in :code:`.tar.gz`; input tarballs can be any compression acceptable to :code:`tar(1)`. All of these are local formats; :code:`ch-convert` does not know how to push or pull images. Format inference ================ :code:`ch-convert` tries to save typing by guessing formats when they are reasonably clear. This is done against filenames, rather than file contents, so the rules are the same for output descriptors that do not yet exist. Format inference is done for both :code:`IN` and :code:`OUT`. The first matching glob below yields the inferred format. Paths need not exist in the filesystem. 1. :code:`*.sqfs`, :code:`*.squash`, :code:`*.squashfs`: SquashFS. 2. :code:`*.tar`, :code:`*.t?z`, :code:`*.tar.?`, :code:`*.tar.??`: Tarball. 3. :code:`/*`, :code:`./*`, i.e. absolute path or relative path with explicit dot: Directory. 4. If `ch-image` is installed: :code:`ch-image` internal storage. 5. If Docker is installed: Docker internal storage. 6. Otherwise: No format inference. Examples ======== Typical build-to-run sequence for image :code:`foo/bar` using :code:`ch-run`'s internal SquashFUSE code, inferring the output format:: $ sudo docker build -t foo/bar -f Dockerfile . [...] $ ch-convert foo/bar:latest /var/tmp/foobar.sqfs input: docker foo/bar:latest output: squashfs /var/tmp/foobar.sqfs copying ... done $ ch-run /var/tmp/foobar.sqfs -- echo hello hello Same conversion, but no format inference:: $ ch-convert -i ch-image -o squash foo/bar:latest /var/tmp/foobar.sqfs input: docker foo/bar:latest output: squashfs /var/tmp/foobar.sqfs copying ... done .. LocalWords: FMT fmt charliecloud-0.26/doc/ch-dir2squash.rst000066400000000000000000000002231417231051300200450ustar00rootroot00000000000000:orphan: ch-dir2squash man page ++++++++++++++++++++++ .. include:: ./ch-dir2squash_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-dir2squash_desc.rst000066400000000000000000000014701417231051300210500ustar00rootroot00000000000000Synopsis ======== :: $ ch-dir2squash IMGDIR OUTDIR [ARGS ...] Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Create Charliecloud SquashFS file from image directory :code:`IMGDIR` under directory :code:`OUTDIR`, named as last component of :code:`IMGDIR` plus suffix :code:`.sqfs`. Optional :code:`ARGS` will passed to :code:`mksquashfs` unchanged. Additional arguments: :code:`--help` print help and exit :code:`--version` print version and exit Example ======= :: $ ch-dir2squash /var/tmp/debian /var/tmp Parallel mksquashfs: Using 6 processors Creating 4.0 filesystem on /var/tmp/debian.sqfs, block size 131072. [...] -rw-r--r-- 1 charlie charlie 41M Apr 23 14:41 /var/tmp/debian.sqfs charliecloud-0.26/doc/ch-fromhost.rst000066400000000000000000000002151417231051300176220ustar00rootroot00000000000000:orphan: ch-fromhost man page ++++++++++++++++++++ .. include:: ./ch-fromhost_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-fromhost_desc.rst000066400000000000000000000263201417231051300206250ustar00rootroot00000000000000Synopsis ======== :: $ ch-fromhost [OPTION ...] [FILE_OPTION ...] IMGDIR Description =========== .. note:: This command is experimental. Features may be incomplete and/or buggy. Please report any issues you find, so we can fix them! Inject files from the host into the Charliecloud image directory :code:`IMGDIR`. The purpose of this command is to inject files into a container image that are necessary to run the container on a specific host; e.g., GPU libraries that are tied to a specific kernel version. **It is not a general copy-to-image tool**; see further discussion on use cases below. It should be run after :code:`ch-tar2dir` and before :code:`ch-run`. After invocation, the image is no longer portable to other hosts. Injection is not atomic; if an error occurs partway through injection, the image is left in an undefined state. Injection is currently implemented using a simple file copy, but that may change in the future. By default, file paths that contain the strings :code:`/bin` or :code:`/sbin` are assumed to be executables and placed in :code:`/usr/bin` within the container. File paths that contain the strings :code:`/lib` or :code:`.so` are assumed to be shared libraries and are placed in the first-priority directory reported by :code:`ldconfig` (see :code:`--lib-path` below). Other files are placed in the directory specified by :code:`--dest`. If any shared libraries are injected, run :code:`ldconfig` inside the container (using :code:`ch-run -w`) after injection. Options ======= To specify which files to inject -------------------------------- :code:`-c`, :code:`--cmd CMD` Inject files listed in the standard output of command :code:`CMD`. :code:`-f`, :code:`--file FILE` Inject files listed in the file :code:`FILE`. :code:`-p`, :code:`--path PATH` Inject the file at :code:`PATH`. :code:`--cray-mpi` Cray-enable MPICH/OpenMPI installed inside the image. See important details below. :code:`--nvidia` Use :code:`nvidia-container-cli list` (from :code:`libnvidia-container`) to find executables and libraries to inject. These can be repeated, and at least one must be specified. To specify the destination within the image ------------------------------------------- :code:`-d`, :code:`--dest DST` Place files specified later in directory :code:`IMGDIR/DST`, overriding the inferred destination, if any. If a file's destination cannot be inferred and :code:`--dest` has not been specified, exit with an error. This can be repeated to place files in varying destinations. Additional arguments -------------------- :code:`--lib-path` Print the guest destination path for shared libraries inferred as described above. :code:`--no-ldconfig` Don't run :code:`ldconfig` even if we appear to have injected shared libraries. :code:`-h`, :code:`--help` Print help and exit. :code:`-v`, :code:`--verbose` List the injected files. :code:`--version` Print version and exit. When to use :code:`ch-fromhost` =============================== This command does a lot of heuristic magic; while it *can* copy arbitrary files into an image, this usage is discouraged and prone to error. Here are some use cases and the recommended approach: 1. *I have some files on my build host that I want to include in the image.* Use the :code:`COPY` instruction within your Dockerfile. Note that it's OK to build an image that meets your specific needs but isn't generally portable, e.g., only runs on specific micro-architectures you're using. 2. *I have an already built image and want to install a program I compiled separately into the image.* Consider whether a building a new derived image with a Dockerfile is appropriate. Another good option is to bind-mount the directory containing your program at run time. A less good option is to :code:`cp(1)` the program into your image, because this permanently alters the image in a non-reproducible way. 3. *I have some shared libraries that I need in the image for functionality or performance, and they aren't available in a place where I can use* :code:`COPY`. This is the intended use case of :code:`ch-fromhost`. You can use :code:`--cmd`, :code:`--file`, and/or :code:`--path` to put together a custom solution. But, please consider filing an issue so we can package your functionality with a tidy option like :code:`--cray-mpi` or :code:`--nvidia`. :code:`--cray-mpi` dependencies and quirks ========================================== The implementation of :code:`--cray-mpi` is messy, foul smelling, and brittle. It replaces or overrides the MPICH or OpenMPI libraries installed in the container. Users should be aware of the following. 1. Containers must have the following software installed: a. Corresponding open source MPI implementation. (`MPICH `_ and `OpenMPI `_.) b. `PatchELF with our patches `_. Use the :code:`shrink-soname` branch. (MPICH only.) c. :code:`libgfortran.so.3`, because Cray's :code:`libmpi.so.12` links to it. (MPICH only.) 2. Applications must be dynamically linked to :code:`libmpi.so.12` (not e.g. :code:`libmpich.so.12`). a. How to configure MPICH to accomplish this is not yet clear to us; :code:`test/Dockerfile.mpich` does it, while the Debian packages do not. (MPICH only.) 3. An ABI compatible module for the given MPI implementation must be loaded when :code:`ch-fromhost` is invoked. a. Load the :code:`cray-mpich-abi` module. (MPICH only.) b. We recommend loading the module of a version as close to what is installed in the image as possible. This OpenMPI install needs to be built such that libmpi contains all needed plugins (as opposed to them being standalone shared libraries). See `OpenMPI's documentation `_ for how to do this. (OpenMPI only.) 4. Tested only for C programs compiled with GCC, and it probably won't work otherwise. If you'd like to use another compiler or another programming language, please get in touch so we can implement the necessary support. Please file a bug if we missed anything above or if you know how to make the code better. Notes ===== Symbolic links are dereferenced, i.e., the files pointed to are injected, not the links themselves. As a corollary, do not include symlinks to shared libraries. These will be re-created by :code:`ldconfig`. There are two alternate approaches for nVidia GPU libraries: 1. Link :code:`libnvidia-containers` into :code:`ch-run` and call the library functions directly. However, this would mean that Charliecloud would either (a) need to be compiled differently on machines with and without nVidia GPUs or (b) have :code:`libnvidia-containers` available even on machines without nVidia GPUs. Neither of these is consistent with Charliecloud's philosophies of simplicity and minimal dependencies. 2. Use :code:`nvidia-container-cli configure` to do the injecting. This would require that containers have a half-started state, where the namespaces are active and everything is mounted but :code:`pivot_root(2)` has not been performed. This is not feasible because Charliecloud has no notion of a half-started container. Further, while these alternate approaches would simplify or eliminate this script for nVidia GPUs, they would not solve the problem for other situations. Bugs ==== File paths may not contain colons or newlines. :code:`ldconfig` tends to print :code:`stat` errors; these are typically non-fatal and occur when trying to probe common library paths. See `issue #732 `_. Examples ======== Place shared library :code:`/usr/lib64/libfoo.so` at path :code:`/usr/lib/libfoo.so` (assuming :code:`/usr/lib` is the first directory searched by the dynamic loader in the image), within the image :code:`/var/tmp/baz` and executable :code:`/bin/bar` at path :code:`/usr/bin/bar`. Then, create appropriate symlinks to :code:`libfoo` and update the :code:`ld.so` cache. :: $ cat qux.txt /bin/bar /usr/lib64/libfoo.so $ ch-fromhost --file qux.txt /var/tmp/baz Same as above:: $ ch-fromhost --cmd 'cat qux.txt' /var/tmp/baz Same as above:: $ ch-fromhost --path /bin/bar --path /usr/lib64/libfoo.so /var/tmp/baz Same as above, but place the files into :code:`/corge` instead (and the shared library will not be found by :code:`ldconfig`):: $ ch-fromhost --dest /corge --file qux.txt /var/tmp/baz Same as above, and also place file :code:`/etc/quux` at :code:`/etc/quux` within the container:: $ ch-fromhost --file qux.txt --dest /etc --path /etc/quux /var/tmp/baz Inject the executables and libraries recommended by nVidia into the image, and then run :code:`ldconfig`:: $ ch-fromhost --nvidia /var/tmp/baz asking ldconfig for shared library destination /sbin/ldconfig: Can't stat /libx32: No such file or directory /sbin/ldconfig: Can't stat /usr/libx32: No such file or directory shared library destination: /usr/lib64//bind9-export injecting into image: /var/tmp/baz /usr/bin/nvidia-smi -> /usr/bin (inferred) /usr/bin/nvidia-debugdump -> /usr/bin (inferred) /usr/bin/nvidia-persistenced -> /usr/bin (inferred) /usr/bin/nvidia-cuda-mps-control -> /usr/bin (inferred) /usr/bin/nvidia-cuda-mps-server -> /usr/bin (inferred) /usr/lib64/libnvidia-ml.so.460.32.03 -> /usr/lib64//bind9-export (inferred) /usr/lib64/libnvidia-cfg.so.460.32.03 -> /usr/lib64//bind9-export (inferred) [...] /usr/lib64/libGLESv2_nvidia.so.460.32.03 -> /usr/lib64//bind9-export (inferred) /usr/lib64/libGLESv1_CM_nvidia.so.460.32.03 -> /usr/lib64//bind9-export (inferred) running ldconfig Inject the Cray-enabled MPI libraries into the image, and then run :code:`ldconfig`:: $ ch-fromhost --cray-mpi /var/tmp/baz asking ldconfig for shared library destination /sbin/ldconfig: Can't stat /libx32: No such file or directory /sbin/ldconfig: Can't stat /usr/libx32: No such file or directory shared library destination: /usr/lib64//bind9-export found shared library: /usr/lib64/liblustreapi.so.1 found shared library: /opt/cray/xpmem/default/lib64/libxpmem.so.0 [...] injecting into image: /var/tmp/baz rm -f /var/tmp/openmpi/usr/lib64//bind9-export/libopen-rte.so.40 rm -f /var/tmp/openmpi/usr/lib64/bind9-export/libopen-rte.so.40 [...] mkdir -p /var/tmp/openmpi/var/opt/cray/alps/spool mkdir -p /var/tmp/openmpi/etc/opt/cray/wlm_detect [...] /usr/lib64/liblustreapi.so.1 -> /usr/lib64//bind9-export (inferred) /opt/cray/xpmem/default/lib64/libxpmem.so.0 -> /usr/lib64//bind9-export (inferred) [...] /etc/opt/cray/wlm_detect/active_wlm -> /etc/opt/cray/wlm_detect running ldconfig Acknowledgements ================ This command was inspired by the similar `Shifter `_ feature that allows Shifter containers to use the Cray Aries network. We particularly appreciate the help provided by Shane Canon and Doug Jacobsen during our implementation of :code:`--cray-mpi`. We appreciate the advice of Ryan Olson at nVidia on implementing :code:`--nvidia`. .. LocalWords: libmpi libmpich nvidia charliecloud-0.26/doc/ch-image.rst000066400000000000000000000002041417231051300170410ustar00rootroot00000000000000:orphan: ch-image man page +++++++++++++++++ .. include:: ./ch-image_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-image_desc.rst000066400000000000000000000733421417231051300200540ustar00rootroot00000000000000Synopsis ======== .. Note: Keep these consistent with the synopses in each subcommand. :: $ ch-image [...] build [-t TAG] [-f DOCKERFILE] [...] CONTEXT $ ch-image [...] delete IMAGE_REF $ ch-image [...] import PATH IMAGE_REF $ ch-image [...] list [IMAGE_REF] $ ch-image [...] pull [...] IMAGE_REF [IMAGE_DIR] $ ch-image [...] push [--image DIR] IMAGE_REF [DEST_REF] $ ch-image [...] reset $ ch-image [...] storage-path $ ch-image { --help | --version | --dependencies } Description =========== :code:`ch-image` is a tool for building and manipulating container images, but not running them (for that you want :code:`ch-run`). It is completely unprivileged, with no setuid/setgid/setcap helpers. The action to take is specified by a sub-command. Options that print brief information and then exit: :code:`-h`, :code:`--help` Print help and exit successfully. If specified before the sub-command, print general help and list of sub-commands; if after the sub-command, print help specific to that sub-command. :code:`--dependencies` Report dependency problems on standard output, if any, and exit. If all is well, there is no output and the exit is successful; in case of problems, the exit is unsuccessful. :code:`--version` Print version number and exit successfully. Common options placed before the sub-command: :code:`-a`, :code:`--arch ARCH` Use :code:`ARCH` for architecture-aware registry operations, currently :code:`pull` and pulls done within :code:`build`. :code:`ARCH` can be: (1) :code:`yolo`, to bypass architecture-aware code and use the registry's default architecture; (2) :code:`host`, to use the host's architecture, obtained with the equivalent of :code:`uname -m` (default if :code:`--arch` not specified); or (3) an architecture name. If the specified architecture is not available, the error message will list which ones are. **Notes:** 1. :code:`ch-image` is limited to one image per image reference in builder storage at a time, regardless of architecture. For example, if you say :code:`ch-image pull --arch=foo baz` and then :code:`ch-image pull --arch=bar baz`, builder storage will contain one image called "baz", with architecture "bar". 2. Images' default architecture is usually :code:`amd64`, so this is usually what you get with :code:`--arch=yolo`. Similarly, if a registry image is architecture-unaware, it will still be pulled with :code:`--arch=amd64` and :code:`--arch=host` on x86-64 hosts (other host architectures must specify :code:`--arch=yolo` to pull architecture-unaware images). 3. :code:`uname -m` and image registries often use different names for the same architecture. For example, what :code:`uname -m` reports as "x86_64" is known to registries as "amd64". :code:`--arch=host` should translate if needed, but it's useful to know this is happening. Directly specified architecture names are passed to the registry without translation. 4. Registries treat architecture as a pair of items, architecture and sometimes variant (e.g., "arm" and "v7"). Charliecloud treats architecture as a simple string and converts to/from the registry view transparently. :code:`--no-cache` Download everything needed, ignoring the cache. :code:`--password-many` Re-prompt the user every time a registry password is needed. :code:`-s`, :code:`--storage DIR` Set the storage directory (see below for important details). :code:`--tls-no-verify` Don't verify TLS certificates of the repository. (Do not use this option unless you understand the risks.) :code:`-v`, :code:`--verbose` Print extra chatter; can be repeated. Authentication ============== If the remote repository needs authentication, Charliecloud will prompt you for a username and password. Note that some repositories call the secret something other than "password"; e.g., GitLab calls it a "personal access token (PAT)". These values are remembered for the life of the process and silently re-offered to the registry if needed. One case when this happens is on push to a private registry: many registries will first offer a read-only token when :code:`ch-image` checks if something exists, then re-authenticate when upgrading the token to read-write for upload. If your site uses one-time passwords such as provided by a security device, you can specify :code:`--password-many` to provide a new secret each time. These values are not saved persistently, e.g. in a file. Note that we do use normal Python variables for this information, without pinning them into physical RAM with `mlock(2) `_ or any other special treatment, so we cannot guarantee they will never reach non-volatile storage. There is no separate :code:`login` subcommand like Docker. For non-interactive authentication, you can use environment variables :code:`CH_IMAGE_USERNAME` and :code:`CH_IMAGE_PASSWORD`. Only do this if you fully understand the implications for your specific use case, because it is difficult to securely store secrets in environment variables. Storage directory ================= :code:`ch-image` maintains state using normal files and directories located in its *storage directory*; contents include temporary images used for building and various caches. In descending order of priority, this directory is located at: :code:`-s`, :code:`--storage DIR` Command line option. :code:`$CH_IMAGE_STORAGE` Environment variable. :code:`/var/tmp/$USER.ch` Default. (Previously, the default was :code:`/var/tmp/$USER/ch-image`. If a valid storage directory is found at the old default path, :code:`ch-image` tries to move it to the new default path.) Unlike many container implementations, there is no notion of storage drivers, graph drivers, etc., to select and/or configure. The storage directory can reside on any filesystem. However, it contains lots of small files and metadata traffic can be intense. For example, the Charliecloud test suite uses approximately 400,000 files and directories in the storage directory as of this writing. Place it on a filesystem appropriate for this; tmpfs'es such as :code:`/var/tmp` are a good choice if you have enough RAM (:code:`/tmp` is not recommended because :code:`ch-run` bind-mounts it into containers by default). While you can currently poke around in the storage directory and find unpacked images runnable with :code:`ch-run`, this is not a supported use case. The supported workflow uses :code:`ch-convert` to obtain a packed image; see the tutorial for details. The storage directory format changes on no particular schedule. Often :code:`ch-image` is able to upgrade the directory; however, downgrading is not supported and sometimes upgrade is not possible. In these cases, :code:`ch-image` will refuse to run until you delete and re-initialize the directory with :code:`ch-image reset`. .. warning:: Network filesystems, especially Lustre, are typically bad choices for the storage directory. This is a site-specific question and your local support will likely have strong opinions. :code:`build` ============= Build an image from a Dockerfile and put it in the storage directory. Synopsis -------- :: $ ch-image [...] build [-t TAG] [-f DOCKERFILE] [...] CONTEXT Description ----------- Uses :code:`ch-run -w -u0 -g0 --no-home --no-passwd` to execute :code:`RUN` instructions. Note that :code:`FROM` implicitly pulls the base image if needed, so you may want to read about the :code:`pull` subcommand below as well. Required argument: :code:`CONTEXT` Path to context directory. This is the root of :code:`COPY` instructions in the Dockerfile. If a single hyphen (:code:`-`) is specified: (a) read the Dockerfile from standard input, (b) specifying :code:`--file` is an error, and (c) there is no context, so :code:`COPY` will fail. (See :code:`--file` for how to provide the Dockerfile on standard input while also having a context.) Options: :code:`-b`, :code:`--bind SRC[:DST]` For :code:`RUN` instructions only, bind-mount :code:`SRC` at guest :code:`DST`. The default destination if not specified is to use the same path as the host; i.e., the default is equivalent to :code:`--bind=SRC:SRC`. If :code:`DST` does not exist, try to create it as an empty directory, though images do have ten directories :code:`/mnt/[0-9]` already available as mount points. Can be repeated. **Note:** See documentation for :code:`ch-run --bind` for important caveats and gotchas. **Note:** Other instructions that modify the image filesystem, e.g. :code:`COPY`, can only access host files from the context directory, regardless of this option. :code:`--build-arg KEY[=VALUE]` Set build-time variable :code:`KEY` defined by :code:`ARG` instruction to :code:`VALUE`. If :code:`VALUE` not specified, use the value of environment variable :code:`KEY`. :code:`-f`, :code:`--file DOCKERFILE` Use :code:`DOCKERFILE` instead of :code:`CONTEXT/Dockerfile`. If a single hyphen (:code:`-`) is specified, read the Dockerfile from standard input; like :code:`docker build`, the context directory is still available in this case. :code:`--force` Inject the unprivileged build workarounds; see discussion later in this section for details on what this does and when you might need it. If a build fails and :code:`ch-image` thinks :code:`--force` would help, it will suggest it. :code:`-n`, :code:`--dry-run` Don't actually execute any Dockerfile instructions. :code:`--no-force-detect` Don't try to detect if the workarounds in :code:`--force` would help. :code:`--parse-only` Stop after parsing the Dockerfile. :code:`-t`, :code:`--tag TAG` Name of image to create. If not specified, infer the name: 1. If Dockerfile named :code:`Dockerfile` with an extension: use the extension with invalid characters stripped, e.g. :code:`Dockerfile.@FOO.bar` → :code:`foo.bar`. 2. If Dockerfile has extension :code:`dockerfile`: use the basename with the same transformation, e.g. :code:`baz.@QUX.dockerfile` -> :code:`baz.qux`. 3. If context directory is not :code:`/`: use its name, i.e. the last component of the absolute path to the context directory, with the same transformation, 4. Otherwise (context directory is :code:`/`): use :code:`root`. If no colon present in the name, append :code:`:latest`. Privilege model --------------- :code:`ch-image` is a *fully* unprivileged image builder. It does not use any setuid or setcap helper programs, and it does not use configuration files :code:`/etc/subuid` or :code:`/etc/subgid`. This contrasts with the “rootless” or “`fakeroot `_” modes of some competing builders, which do require privileged supporting code or utilities. This approach does yield some quirks. We provide built-in workarounds that should mostly work (i.e., :code:`--force`), but it can be helpful to understand what is going on. :code:`ch-image` executes all instructions as the normal user who invokes it. For :code:`RUN`, this is accomplished with :code:`ch-run -w --uid=0 --gid=0` (and some other arguments), i.e., your host EUID and EGID both mapped to zero inside the container, and only one UID (zero) and GID (zero) are available inside the container. Under this arrangement, processes running in the container for each :code:`RUN` *appear* to be running as root, but many privileged system calls will fail without the workarounds described below. **This affects any fully unprivileged container build, not just Charliecloud.** The most common time to see this is installing packages. For example, here is RPM failing to :code:`chown(2)` a file, which makes the package update fail: .. code-block:: none Updating : 1:dbus-1.10.24-13.el7_6.x86_64 2/4 Error unpacking rpm package 1:dbus-1.10.24-13.el7_6.x86_64 error: unpacking of archive failed on file /usr/libexec/dbus-1/dbus-daemon-launch-helper;5cffd726: cpio: chown Cleanup : 1:dbus-libs-1.10.24-12.el7.x86_64 3/4 error: dbus-1:1.10.24-13.el7_6.x86_64: install failed This one is (ironically) :code:`apt-get` failing to drop privileges: .. code-block:: none E: setgroups 65534 failed - setgroups (1: Operation not permitted) E: setegid 65534 failed - setegid (22: Invalid argument) E: seteuid 100 failed - seteuid (22: Invalid argument) E: setgroups 0 failed - setgroups (1: Operation not permitted) By default, nothing is done to avoid these problems, though :code:`ch-image` does try to detect if the workarounds could help. :code:`--force` activates the workarounds: :code:`ch-image` injects extra commands to intercept these system calls and fake a successful result, using :code:`fakeroot(1)`. There are three basic steps: 1. After :code:`FROM`, analyze the image to see what distribution it contains, which determines the specific workarounds. 2. Before the user command in the first :code:`RUN` instruction where the injection seems needed, install :code:`fakeroot(1)` in the image, if one is not already installed, as well as any other necessary initialization commands. For example, we turn off the :code:`apt` sandbox (for Debian Buster) and configure EPEL but leave it disabled (for CentOS/RHEL). 3. Prepend :code:`fakeroot` to :code:`RUN` instructions that seem to need it, e.g. ones that contain :code:`apt`, :code:`apt-get`, :code:`dpkg` for Debian derivatives and :code:`dnf`, :code:`rpm`, or :code:`yum` for RPM-based distributions. The details are specific to each distribution. :code:`ch-image` analyzes image content (e.g., grepping :code:`/etc/debian_version`) to select a configuration; see :code:`lib/fakeroot.py` for details. :code:`ch-image` prints exactly what it is doing. Compatibility with other Dockerfile interpreters ------------------------------------------------ :code:`ch-image` is an independent implementation and shares no code with other Dockerfile interpreters. It uses a formal Dockerfile parsing grammar developed from the `Dockerfile reference documentation `_ and miscellaneous other sources, which you can examine in the source code. We believe this independence is valuable for several reasons. First, it helps the community examine Dockerfile syntax and semantics critically, think rigorously about what is really needed, and build a more robust standard. Second, it yields disjoint sets of bugs (note that Podman, Buildah, and Docker all share the same Dockerfile parser). Third, because it is a much smaller code base, it illustrates how Dockerfiles work more clearly. Finally, it allows straightforward extensions if needed to support scientific computing. :code:`ch-image` tries hard to be compatible with Docker and other interpreters, though as an independent implementation, it is not bug-compatible. The following subsections describe differences from the Dockerfile reference that we expect to be approximately permanent. For not-yet-implemented features and bugs in this area, see `related issues `_ on GitHub. None of these are set in stone. We are very interested in feedback on our assessments and open questions. This helps us prioritize new features and revise our thinking about what is needed for HPC containers. Context directory ~~~~~~~~~~~~~~~~~ The context directory is bind-mounted into the build, rather than copied like Docker. Thus, the size of the context is immaterial, and the build reads directly from storage like any other local process would. However, you still can't access anything outside the context directory. Variable substitution ~~~~~~~~~~~~~~~~~~~~~ Variable substitution happens for *all* instructions, not just the ones listed in the Dockerfile reference. :code:`ARG` and :code:`ENV` cause cache misses upon *definition*, in contrast with Docker where these variables miss upon *use*, except for certain cache-excluded variables that never cause misses, listed below. :code:`ch-image` passes the following proxy environment variables in to the build. Changes to these variables do not cause a cache miss. They do not require an :code:`ARG` instruction, as `documented `_ in the Dockerfile reference. Unlike Docker, they are available if the same-named environment variable is defined; :code:`--build-arg` is not required. .. code-block:: sh HTTP_PROXY http_proxy HTTPS_PROXY https_proxy FTP_PROXY ftp_proxy NO_PROXY no_proxy In addition to those listed in the Dockerfile reference, these environment variables are passed through in the same way: .. code-block:: sh SSH_AUTH_SOCK USER Finally, these variables are also pre-defined but are unrelated to the host environment: .. code-block:: sh PATH=/ch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TAR_OPTIONS=--no-same-owner Note that :code:`ARG` and :code:`ENV` have different syntax despite very similar semantics. :code:`COPY` ~~~~~~~~~~~~ Especially for people used to UNIX :code:`cp(1)`, the semantics of the Dockerfile :code:`COPY` instruction can be confusing. Most notably, when a source of the copy is a directory, the *contents* of that directory, not the directory itself, are copied. This is documented, but it's a real gotcha because that's not what :code:`cp(1)` does, and it means that many things you can do in one :code:`cp(1)` command require multiple :code:`COPY` instructions. Also, the reference documentation is incomplete. In our experience, Docker also behaves as follows; :code:`ch-image` does the same in an attempt to be bug-compatible. 1. You can use absolute paths in the source; the root is the context directory. 2. Destination directories are created if they don't exist in the following situations: 1. If the destination path ends in slash. (Documented.) 2. If the number of sources is greater than 1, either by wildcard or explicitly, regardless of whether the destination ends in slash. (Not documented.) 3. If there is a single source and it is a directory. (Not documented.) 3. Symbolic links behave differently depending on how deep in the copied tree they are. (Not documented.) 1. Symlinks at the top level — i.e., named as the destination or the source, either explicitly or by wildcards — are dereferenced. They are followed, and whatever they point to is used as the destination or source, respectively. 2. Symlinks at deeper levels are not dereferenced, i.e., the symlink itself is copied. 4. If a directory appears at the same path in source and destination, and is at the 2nd level or deeper, the source directory's metadata (e.g., permissions) are copied to the destination directory. (Not documented.) 5. If an object appears in both the source and destination, and is at the 2nd level or deeper, and is of different types in the source and destination, then the source object will overwrite the destination object. (Not documented.) For example, if :code:`/tmp/foo/bar` is a regular file, and :code:`/tmp` is the context directory, then the following Dockerfile snippet will result in a *file* in the container at :code:`/foo/bar` (copied from :code:`/tmp/foo/bar`); the directory and all its contents will be lost. .. code-block:: docker RUN mkdir -p /foo/bar && touch /foo/bar/baz COPY foo /foo We expect the following differences to be permanent: * Wildcards use Python glob semantics, not the Go semantics. * :code:`COPY --chown` is ignored, because it doesn't make sense in an unprivileged build. Features we do not plan to support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Parser directives are not supported. We have not identified a need for any of them. * :code:`EXPOSE`: Charliecloud does not use the network namespace, so containerized processes can simply listen on a host port like other unprivileged processes. * :code:`HEALTHCHECK`: This instruction's main use case is monitoring server processes rather than applications. Also, implementing it requires a container supervisor daemon, which we have no plans to add. * :code:`MAINTAINER` is deprecated. * :code:`STOPSIGNAL` requires a container supervisor daemon process, which we have no plans to add. * :code:`USER` does not make sense for unprivileged builds. * :code:`VOLUME`: This instruction is not currently supported. Charliecloud has good support for bind mounts; we anticipate that it will continue to focus on that and will not introduce the volume management features that Docker has. Examples -------- Build image :code:`bar` using :code:`./foo/bar/Dockerfile` and context directory :code:`./foo/bar`:: $ ch-image build -t bar -f ./foo/bar/Dockerfile ./foo/bar [...] grown in 4 instructions: bar Same, but infer the image name and Dockerfile from the context directory path:: $ ch-image build ./foo/bar [...] grown in 4 instructions: bar Build using humongous vendor compilers you want to bind-mount instead of installing into the image:: $ ch-image build --bind /opt/bigvendor:/opt . $ cat Dockerfile FROM centos:7 RUN /opt/bin/cc hello.c #COPY /opt/lib/*.so /usr/local/lib # fail: COPY doesn't bind mount RUN cp /opt/lib/*.so /usr/local/lib # possible workaround RUN ldconfig :code:`delete` ============== :: $ ch-image [...] delete IMAGE_REF Delete the image described by the image reference :code:`IMAGE_REF` from the storage directory. :code:`list` ============ Print information about images. If no argument given, list the images in builder storage. Synopsis -------- :: $ ch-image [...] list [IMAGE_REF] Description ----------- Optional argument: :code:`IMAGE_REF` Print details of what's known about :code:`IMAGE_REF`, both locally and in the remote registry, if any. Examples -------- List images in builder storage:: $ ch-image list alpine:3.9 (amd64) alpine:latest (amd64) debian:buster (amd64) Print details about Debian Buster image:: $ ch-image list debian:buster details of image: debian:buster in local storage: no full remote ref: registry-1.docker.io:443/library/debian:buster available remotely: yes remote arch-aware: yes host architecture: amd64 archs available: 386 amd64 arm/v5 arm/v7 arm64/v8 mips64le ppc64le s390x :code:`import` ============== :: $ ch-image [...] import PATH IMAGE_REF Copy the image at :code:`PATH` into builder storage with name :code:`IMAGE_REF`. :code:`PATH` can be: * an image directory * a tarball with no top-level directory (a.k.a. a "`tarbomb `_") * a standard tarball with one top-level directory If the imported image contains Charliecloud metadata, that will be imported unchanged, i.e., images exported from :code:`ch-image` builder storage will be functionally identical when re-imported. :code:`pull` ============ Pull the image described by the image reference :code:`IMAGE_REF` from a repository to the local filesystem. Synopsis -------- :: $ ch-image [...] pull [...] IMAGE_REF [IMAGE_DIR] See the FAQ for the gory details on specifying image references. Description ----------- Destination: :code:`IMAGE_DIR` If specified, place the unpacked image at this path; it is then ready for use by :code:`ch-run` or other tools. The storage directory will not contain a copy of the image, i.e., it is only unpacked once. Options: :code:`--last-layer N` Unpack only :code:`N` layers, leaving an incomplete image. This option is intended for debugging. :code:`--parse-only` Parse :code:`IMAGE_REF`, print a parse report, and exit successfully without talking to the internet or touching the storage directory. This script does a fair amount of validation and fixing of the layer tarballs before flattening in order to support unprivileged use despite image problems we frequently see in the wild. For example, device files are ignored, and file and directory permissions are increased to a minimum of :code:`rwx------` and :code:`rw-------` respectively. Note, however, that symlinks pointing outside the image are permitted, because they are not resolved until runtime within a container. The following metadata in the pulled image is retained; all other metadata is currently ignored. (If you have a need for additional metadata, please let us know!) * Current working directory set with :code:`WORKDIR` is effective in downstream Dockerfiles. * Environment variables set with :code:`ENV` are effective in downstream Dockerfiles and also written to :code:`/ch/environment` for use in :code:`ch-run --set-env`. * Mount point directories specified with :code:`VOLUME` are created in the image if they don't exist, but no other action is taken. Note that some images (e.g., those with a "version 1 manifest") do not contain metadata. A warning is printed in this case. Examples -------- Download the Debian Buster image matching the host's architecture and place it in the storage directory:: $ uname -m aarch32 pulling image: debian:buster requesting arch: arm64/v8 manifest list: downloading manifest: downloading config: downloading layer 1/1: c54d940: downloading flattening image layer 1/1: c54d940: listing validating tarball members resolving whiteouts layer 1/1: c54d940: extracting image arch: arm64 done Same, specifying the architecture explicitly:: $ ch-image --arch=arm/v7 pull debian:buster pulling image: debian:buster requesting arch: arm/v7 manifest list: downloading manifest: downloading config: downloading layer 1/1: 8947560: downloading flattening image layer 1/1: 8947560: listing validating tarball members resolving whiteouts layer 1/1: 8947560: extracting image arch: arm (may not match host arm64/v8) Download the same image and place it in :code:`/tmp/buster`:: $ ch-image pull debian:buster /tmp/buster [...] $ ls /tmp/buster bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr :code:`push` ============ Push the image described by the image reference :code:`IMAGE_REF` from the local filesystem to a repository. Synopsis -------- :: $ ch-image [...] push [--image DIR] IMAGE_REF [DEST_REF] See the FAQ for the gory details on specifying image references. Description ----------- Destination: :code:`DEST_REF` If specified, use this as the destination image reference, rather than :code:`IMAGE_REF`. This lets you push to a repository without permanently adding a tag to the image. Options: :code:`--image DIR` Use the unpacked image located at :code:`DIR` rather than an image in the storage directory named :code:`IMAGE_REF`. Because Charliecloud is fully unprivileged, the owner and group of files in its images are not meaningful in the broader ecosystem. Thus, when pushed, everything in the image is flattened to user:group :code:`root:root`. Also, setuid/setgid bits are removed, to avoid surprises if the image is pulled by a privileged container implementation. Examples -------- Push a local image to the registry :code:`example.com:5000` at path :code:`/foo/bar` with tag :code:`latest`. Note that in this form, the local image must be named to match that remote reference. :: $ ch-image push example.com:5000/foo/bar:latest pushing image: example.com:5000/foo/bar:latest layer 1/1: gathering layer 1/1: preparing preparing metadata starting upload layer 1/1: a1664c4: checking if already in repository layer 1/1: a1664c4: not present, uploading config: 89315a2: checking if already in repository config: 89315a2: not present, uploading manifest: uploading cleaning up done Same, except use local image :code:`alpine:3.9`. In this form, the local image name does not have to match the destination reference. :: $ ch-image push alpine:3.9 example.com:5000/foo/bar:latest pushing image: alpine:3.9 destination: example.com:5000/foo/bar:latest layer 1/1: gathering layer 1/1: preparing preparing metadata starting upload layer 1/1: a1664c4: checking if already in repository layer 1/1: a1664c4: not present, uploading config: 89315a2: checking if already in repository config: 89315a2: not present, uploading manifest: uploading cleaning up done Same, except use unpacked image located at :code:`/var/tmp/image` rather than an image in :code:`ch-image` storage. (Also, the sole layer is already present in the remote registry, so we don't upload it again.) :: $ ch-image push --image /var/tmp/image example.com:5000/foo/bar:latest pushing image: example.com:5000/foo/bar:latest image path: /var/tmp/image layer 1/1: gathering layer 1/1: preparing preparing metadata starting upload layer 1/1: 892e38d: checking if already in repository layer 1/1: 892e38d: already present config: 546f447: checking if already in repository config: 546f447: not present, uploading manifest: uploading cleaning up done :code:`reset` ============= :: $ ch-image [...] reset Delete all images and cache from ch-image builder storage. :code:`storage-path` ==================== :: $ ch-image [...] storage-path Print the storage directory path and exit. Environment variables ===================== :code:`CH_IMAGE_USERNAME`, :code:`CH_IMAGE_PASSWORD` Username and password for registry authentication. **See important caveats in section "Authentication" above.** .. include:: py_env.rst .. LocalWords: tmpfs'es bigvendor AUTH Aimage charliecloud-0.26/doc/ch-pull2dir.rst000066400000000000000000000002151417231051300175160ustar00rootroot00000000000000:orphan: ch-pull2dir man page ++++++++++++++++++++ .. include:: ./ch-pull2dir_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-pull2dir_desc.rst000066400000000000000000000030361417231051300205200ustar00rootroot00000000000000Synopsis ======== :: $ ch-pull2dir IMAGE[:TAG] DIR Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Pull Docker image named :code:`IMAGE[:TAG]` from Docker Hub and extract it into a subdirectory of :code:`DIR`. A temporary tarball is stored in :code:`DIR`. Sudo privileges are required to run the :code:`docker pull` command. This runs the following command sequence: :code:`ch-pull2tar`, :code:`ch-tar2dir`. See warning in the documentation for :code:`ch-tar2dir`. Additional arguments: :code:`--help` print help and exit :code:`--version` print version and exit Examples ======== :: $ ch-pull2dir alpine /var/tmp Using default tag: latest latest: Pulling from library/alpine Digest: sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528 Status: Image is up to date for alpine:latest -rw-r--r--. 1 charlie charlie 2.1M Oct 5 19:52 /var/tmp/alpine.tar.gz creating new image /var/tmp/alpine /var/tmp/alpine unpacked ok removed '/var/tmp/alpine.tar.gz' Same as above, except optional :code:`TAG` is specified: :: $ ch-pull2dir alpine:3.6 /var/tmp 3.6: Pulling from library/alpine Digest: sha256:cc24af836d1377e092ecb4e8f0a4324c3b1aa2b5295c2239edcc7bbc86a9cbc6 Status: Image is up to date for alpine:3.6 -rw-r--r--. 1 charlie charlie 2.1M Oct 5 19:54 /var/tmp/alpine:3.6.tar.gz creating new image /var/tmp/alpine:3.6 /var/tmp/alpine:3.6 unpacked ok removed '/var/tmp/alpine:3.6.tar.gz' charliecloud-0.26/doc/ch-pull2tar.rst000066400000000000000000000002151417231051300175260ustar00rootroot00000000000000:orphan: ch-pull2tar man page ++++++++++++++++++++ .. include:: ./ch-pull2tar_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-pull2tar_desc.rst000066400000000000000000000024401417231051300205260ustar00rootroot00000000000000Synopsis ======== :: $ ch-pull2tar IMAGE[:TAG] OUTDIR Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Pull a Docker image named :code:`IMAGE[:TAG]` from Docker Hub and flatten it into a Charliecloud tarball in directory :code:`OUTDIR`. This runs the following command sequence: :code:`docker pull`, :code:`ch-builder2tar` but provides less flexibility than the individual commands. Sudo privileges are required for :code:`docker pull`. Additional arguments: :code:`--help` print help and exit :code:`--version` print version and exit Examples ======== :: $ ch-pull2tar alpine /var/tmp Using default tag: latest latest: Pulling from library/alpine Digest: sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528 Status: Image is up to date for alpine:latest -rw-r--r--. 1 charlie charlie 2.1M Oct 5 19:52 /var/tmp/alpine.tar.gz Same as above, except optional :code:`TAG` is specified: :: $ ch-pull2tar alpine:3.6 3.6: Pulling from library/alpine Digest: sha256:cc24af836d1377e092ecb4e8f0a4324c3b1aa2b5295c2239edcc7bbc86a9cbc6 Status: Image is up to date for alpine:3.6 -rw-r--r--. 1 charlie charlie 2.1M Oct 5 19:54 /var/tmp/alpine:3.6.tar.gz charliecloud-0.26/doc/ch-run-oci.rst000066400000000000000000000002121417231051300173320ustar00rootroot00000000000000:orphan: ch-run-oci man page +++++++++++++++++++ .. include:: ./ch-run-oci_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-run-oci_desc.rst000066400000000000000000000130221417231051300203330ustar00rootroot00000000000000Synopsis ======== :: $ ch-run-oci OPERATION [ARG ...] Description =========== .. note:: This command is experimental. Features may be incomplete and/or buggy. The quality of code is not yet up to the usual Charliecloud standards, and error handling is poor. Please report any issues you find, so we can fix them! Open Containers Initiative (OCI) wrapper for :code:`ch-run(1)`. You probably don't want to run this command directly; it is intended to interface with other software that expects an OCI runtime. The current goal is to support completely unprivileged image building (e.g. :code:`buildah --runtime=ch-run-oci`) rather than general OCI container running. *Support of the OCI runtime specification is only partial.* This is for two reasons. First, it's an experimental and incomplete feature. More importantly, the philosophy and goals of OCI differ significantly from those of Charliecloud. Key differences include: * OCI is designed to run services, while Charliecloud is designed to run scientific applications. * OCI containers are persistent things with a complex lifecycle, while Charliecloud containers are simply UNIX processes. * OCI expects support for a variety of namespaces, while Charliecloud supports user and mount, no more and no less. * OCI expects runtimes to maintain a supervisor process in addition to user processes; Charliecloud has no need for this. * OCI expects runtimes to maintain state throughout the container lifecycle in a location independent from the caller. For these reasons, :code:`ch-run-oci` is a bit of a kludge, and much of what it does is provide scaffolding to satisfy OCI requirements. Which OCI features are and are not supported is provided in the rest of this man page, and technical analysis and discussion are in the Contributor’s Guide. This command supports OCI version 1.0.0 only and fails with an error if other versions are offered. Operations ========== All OCI operations are accepted, but some are no-ops or merely scaffolding to satisfy the caller. For comparison, see also: * `OCI runtime and lifecycle spec `_ * The `runc man pages `_ :code:`create` -------------- :: $ ch-run-oci create --bundle DIR --pid-file FILE [--no-new-keyring] CONTAINER_ID Create a container. Charliecloud does not have separate create and start phases, so this operation only sets up OCI-related scaffolding. Arguments: :code:`--bundle DIR` Directory containing the OCI bundle. This must be :code:`/tmp/buildahYYY`, where :code:`YYY` matches :code:`CONTAINER_ID` below. :code:`--pid-file FILE` Filename to write the "container" process PID to. Note that for Charliecloud, the process given is fake; see above. This must be :code:`DIR/pid`, where :code:`DIR` is given by :code:`--bundle`. :code:`--no-new-keyring` Ignored. (Charliecloud does not implement session keyrings.) :code:`CONTAINER_ID` String to use as the container ID. This must be :code:`buildah-buildahYYY`, where :code:`YYY` matches :code:`DIR` above. Unsupported arguments: :code:`--console-socket PATH` UNIX socket to pass pseudoterminal file descriptor. Charliecloud does not support pseudoterminals; fail with an error if this argument is given. For Buildah, redirect its input from :code:`/dev/null` to prevent it from requesting a pseudoterminal. :code:`delete` -------------- :: $ ch-run-oci delete CONTAINER_ID Clean up the OCI-related scaffolding for specified container. :code:`kill` ------------ :: $ ch-run-oci kill CONTAINER_ID No-op. :code:`start` ------------- :: $ ch-run-oci start CONTAINER_ID Eexecute the user command specified at create time in a Charliecloud container. :code:`state` ------------- :: $ ch-run-oci state CONTAINER_ID Print the state of the given container on standard output as an OCI compliant JSON document. Unsupported OCI features ======================== As noted above, various OCI features are not supported by Charliecloud. We have tried to guess which features would be essential to callers; :code:`ch-run-oci` fails with an error if these are requested. Otherwise, the request is simply ignored. We are interested in hearing about scientific-computing use cases for unsupported features, so we can add support for things that are needed. Our goal is for this man page to be comprehensive: every OCI runtime feature should either work or be listed as unsupported. Unsupported features that are an error: * Pseudoterminals * Hooks (prestart, poststart, and prestop) * Annotations * Joining existing namespaces * Intel Resource Director Technology (RDT) Unsupported features that are ignored: * Mounts other than the root filesystem (we do use :code:`--no-home`) * User/group mappings beyond one user mapped to EUID and one group mapped to EGID * Disabling :code:`prctl(PR_SET_NO_NEW_PRIVS)` * Root filesystem propagation mode * :code:`sysctl` directives * masked and read-only paths (remaining unprivileged protects you) * Capabilities * rlimits * Devices (all devices are inherited from the host) * cgroups * seccomp * SELinux * AppArmor * Container hostname setting Environment variables ===================== .. include:: py_env.rst :code:`CH_RUN_OCI_HANG` If set to the name of a command (e.g., :code:`create`), sleep indefinitely when that command is invoked. The purpose here is to halt a build so it can be examined and debugged. charliecloud-0.26/doc/ch-run.rst000066400000000000000000000001761417231051300165730ustar00rootroot00000000000000:orphan: ch-run man page +++++++++++++++ .. include:: ./ch-run_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-run_desc.rst000066400000000000000000000616331417231051300175760ustar00rootroot00000000000000Synopsis ======== :: $ ch-run [OPTION...] IMAGE -- CMD [ARG...] Description =========== Run command :code:`CMD` in a fully unprivileged Charliecloud container using the image located at :code:`IMAGE`, which can be either a directory or, if the proper support is enabled, a SquashFS archive. :code:`-b`, :code:`--bind=SRC[:DST]` Bind-mount :code:`SRC` at guest :code:`DST`. The default destination if not specified is to use the same path as the host; i.e., the default is :code:`--bind=SRC:SRC`. Can be repeated. If :code:`--write` is given and :code:`DST` does not exist, it will be created as an empty directory. However, :code:`DST` must be entirely within the image itself; :code:`DST` cannot enter a previous bind mount. For example, :code:`--bind /foo:/tmp/foo` will fail because :code:`/tmp` is shared with the host via bind-mount (unless :code:`$TMPDIR` is set to something else or :code:`--private-tmp` is given). Most images do have ten directories :code:`/mnt/[0-9]` already available as mount points. Symlinks in :code:`DST` are followed, and absolute links can have surprising behavior. Bind-mounting happens after namespace setup but before pivoting into the container image, so absolute links use the host root. For example, suppose the image has a symlink :code:`/foo -> /mnt`. Then, :code:`--bind=/bar:/foo` will bind-mount on the *host's* :code:`/mnt`, which is inaccessible on the host because namespaces are already set up and *also* inaccessible in the container because of the subsequent pivot into the image. Currently, this problem is only detected when :code:`DST` needs to be created: :code:`ch-run` will refuse to follow absolute symlinks in this case, to avoid directory creation surprises. :code:`-c`, :code:`--cd=DIR` Initial working directory in container. :code:`--ch-ssh` Bind :code:`ch-ssh(1)` into container at :code:`/usr/bin/ch-ssh`. :code:`--env-no-expand` don't expand variables when using :code:`--set-env` :code:`-g`, :code:`--gid=GID` Run as group :code:`GID` within container. :code:`-j`, :code:`--join` Use the same container (namespaces) as peer :code:`ch-run` invocations. :code:`--join-pid=PID` Join the namespaces of an existing process. :code:`--join-ct=N` Number of :code:`ch-run` peers (implies :code:`--join`; default: see below). :code:`--join-tag=TAG` Label for :code:`ch-run` peer group (implies :code:`--join`; default: see below). :code:`-m`, :code:`--mount=DIR` Use :code:`DIR` for the SquashFS mount point, which must already exist. If not specified, the default is :code:`/var/tmp/$USER.ch/mnt`, which *will* be created if needed. :code:`--no-home` By default, your host home directory (i.e., :code:`$HOME`) is bind-mounted at guest :code:`/home/$USER`. This is accomplished by mounting a new :code:`tmpfs` at :code:`/home`, which hides any image content under that path. If this is specified, neither of these things happens and the image's :code:`/home` is exposed unaltered. :code:`--no-passwd` By default, temporary :code:`/etc/passwd` and :code:`/etc/group` files are created according to the UID and GID maps for the container and bind-mounted into it. If this is specified, no such temporary files are created and the image's files are exposed. :code:`-t`, :code:`--private-tmp` By default, the host's :code:`/tmp` (or :code:`$TMPDIR` if set) is bind-mounted at container :code:`/tmp`. If this is specified, a new :code:`tmpfs` is mounted on the container's :code:`/tmp` instead. :code:`--set-env`, :code:`--set-env=FILE`, :code:`--set-env=VAR=VALUE` Set environment variable(s). With: * no argument: as listed in file :code:`/ch/environment` within the image. It is an error if the file does not exist or cannot be read. (Note that with SquashFS images, it is not currently possible to use other files within the image.) * :code:`FILE` (i.e., no equals in argument): as specified in file at host path :code:`FILE`. Again, it is an error if the file cannot be read. * :code:`NAME=VALUE` (i.e., equals sign in argument): set variable :code:`NAME` to :code:`VALUE`. See below for details on how environment variables work in :code:`ch-run`. :code:`-u`, :code:`--uid=UID` Run as user :code:`UID` within container. :code:`--unset-env=GLOB` Unset environment variables whose names match :code:`GLOB`. :code:`-v`, :code:`--verbose` Be more verbose (can be repeated). :code:`-w`, :code:`--write` Mount image read-write (by default, the image is mounted read-only). :code:`-?`, :code:`--help` Print help and exit. :code:`--usage` Print a short usage message and exit. :code:`-V`, :code:`--version` Print version and exit. **Note:** Because :code:`ch-run` is fully unprivileged, it is not possible to change UIDs and GIDs within the container (the relevant system calls fail). In particular, setuid, setgid, and setcap executables do not work. As a precaution, :code:`ch-run` calls :code:`prctl(PR_SET_NO_NEW_PRIVS, 1)` to `disable these executables `_ within the container. This does not reduce functionality but is a "belt and suspenders" precaution to reduce the attack surface should bugs in these system calls or elsewhere arise. Image format ============ :code:`ch-run` supports two different image formats. The first is a simple directory that contains a Linux filesystem tree. This can be accomplished by: * :code:`ch-convert` directly from :code:`ch-image` or another builder to a directory. * Charliecloud's tarball workflow: build or pull the image, :code:`ch-convert` it to a tarball, transfer the tarball to the target system, then :code:`ch-convert` the tarball to a directory. * Manually mount a SquashFS image, e.g. with :code:`squashfuse(1)` and then un-mount it after run with :code:`fusermount -u`. * Any other workflow that produces an appropriate directory tree. The second is a SquashFS image archive mounted internally by :code:`ch-run`, available if it's linked with the optional :code:`libsquashfuse_ll`. :code:`ch-run` mounts the image filesystem, services all FUSE requests, and unmounts it, all within :code:`ch-run`. See :code:`--mount` above to set the mount point location. Prior versions of Charliecloud provided wrappers for the :code:`squashfuse` and :code:`squashfuse_ll` SquashFS mount commands and :code:`fusermount -u` unmount command. We removed these because we concluded they had minimal value-add over the standard, unwrapped commands. .. warning:: Currently, Charliecloud unmounts the SquashFS filesystem when user command :code:`CMD`'s process exits. It does not monitor any of its child processes. Therefore, if the user command spawns child processes and then exits before them (e.g., some daemons), those children will have the image unmounted from underneath them. In this case, the workaround is to mount/unmount using external tools. We expect to remove this limitation in a future version. Host files and directories available in container via bind mounts ================================================================= In addition to any directories specified by the user with :code:`--bind`, :code:`ch-run` has standard host files and directories that are bind-mounted in as well. The following host files and directories are bind-mounted at the same location in the container. These give access to the host's devices and various kernel facilities. (Recall that Charliecloud provides minimal isolation and containerized processes are mostly normal unprivileged processes.) They cannot be disabled and are required; i.e., they must exist both on host and within the image. * :code:`/dev` * :code:`/proc` * :code:`/sys` Optional; bind-mounted only if path exists on both host and within the image, without error or warning if not. * :code:`/etc/hosts` and :code:`/etc/resolv.conf`. Because Charliecloud containers share the host network namespace, they need the same hostname resolution configuration. * :code:`/etc/machine-id`. Provides a unique ID for the OS installation; matching the host works for most situations. Needed to support D-Bus, some software licensing situations, and likely other use cases. See also `issue #1050 `_. * :code:`/var/lib/hugetlbfs` at guest :code:`/var/opt/cray/hugetlbfs`, and :code:`/var/opt/cray/alps/spool`. These support Cray MPI. * :code:`$PREFIX/bin/ch-ssh` at guest :code:`/usr/bin/ch-ssh`. SSH wrapper that automatically containerizes after connecting. Additional bind mounts done by default but can be disabled; see the options above. * :code:`$HOME` at :code:`/home/$USER` (and image :code:`/home` is hidden). Makes user data and init files available. * :code:`/tmp` (or :code:`$TMPDIR` if set) at guest :code:`/tmp`. Provides a temporary directory that persists between container runs and is shared with non-containerized application components. * temporary files at :code:`/etc/passwd` and :code:`/etc/group`. Usernames and group names need to be customized for each container run. Multiple processes in the same container with :code:`--join` ============================================================= By default, different :code:`ch-run` invocations use different user and mount namespaces (i.e., different containers). While this has no impact on sharing most resources between invocations, there are a few important exceptions. These include: 1. :code:`ptrace(2)`, used by debuggers and related tools. One can attach a debugger to processes in descendant namespaces, but not sibling namespaces. The practical effect of this is that (without :code:`--join`), you can't run a command with :code:`ch-run` and then attach to it with a debugger also run with :code:`ch-run`. 2. *Cross-memory attach* (CMA) is used by cooperating processes to communicate by simply reading and writing one another's memory. This is also not permitted between sibling namespaces. This affects various MPI implementations that use CMA to pass messages between ranks on the same node, because it’s faster than traditional shared memory. :code:`--join` is designed to address this by placing related :code:`ch-run` commands (the “peer group”) in the same container. This is done by one of the peers creating the namespaces with :code:`unshare(2)` and the others joining with :code:`setns(2)`. To do so, we need to know the number of peers and a name for the group. These are specified by additional arguments that can (hopefully) be left at default values in most cases: * :code:`--join-ct` sets the number of peers. The default is the value of the first of the following environment variables that is defined: :code:`OMPI_COMM_WORLD_LOCAL_SIZE`, :code:`SLURM_STEP_TASKS_PER_NODE`, :code:`SLURM_CPUS_ON_NODE`. * :code:`--join-tag` sets the tag that names the peer group. The default is environment variable :code:`SLURM_STEP_ID`, if defined; otherwise, the PID of :code:`ch-run`'s parent. Tags can be re-used for peer groups that start at different times, i.e., once all peer :code:`ch-run` have replaced themselves with the user command, the tag can be re-used. Caveats: * One cannot currently add peers after the fact, for example, if one decides to start a debugger after the fact. (This is only required for code with bugs and is thus an unusual use case.) * :code:`ch-run` instances race. The winner of this race sets up the namespaces, and the other peers use the winner to find the namespaces to join. Therefore, if the user command of the winner exits, any remaining peers will not be able to join the namespaces, even if they are still active. There is currently no general way to specify which :code:`ch-run` should be the winner. * If :code:`--join-ct` is too high, the winning :code:`ch-run`'s user command exits before all peers join, or :code:`ch-run` itself crashes, IPC resources such as semaphores and shared memory segments will be leaked. These appear as files in :code:`/dev/shm/` and can be removed with :code:`rm(1)`. * Many of the arguments given to the race losers, such as the image path and :code:`--bind`, will be ignored in favor of what was given to the winner. Environment variables ===================== :code:`ch-run` leaves environment variables unchanged, i.e. the host environment is passed through unaltered, except: * limited tweaks to avoid significant guest breakage; * user-set variables via :code:`--set-env`; * user-unset variables via :code:`--unset-env`; and * set :code:`CH_RUNNING`. This section describes these features. The default tweaks happen first, then :code:`--set-env` and :code:`--unset-env` in the order specified on the command line, and then :code:`CH_RUNNING`. The two options can be repeated arbitrarily many times, e.g. to add/remove multiple variable sets or add only some variables in a file. Default behavior ---------------- By default, :code:`ch-run` makes the following environment variable changes: * :code:`$CH_RUNNING`: Set to :code:`Weird Al Yankovic`. While a process can figure out that it's in an unprivileged container and what namespaces are active without this hint, that can be messy, and there is no way to tell that it's a *Charliecloud* container specifically. This variable makes such a test simple and well-defined. (**Note:** This variable is unaffected by :code:`--unset-env`.) * :code:`$HOME`: If the path to your home directory is not :code:`/home/$USER` on the host, then an inherited :code:`$HOME` will be incorrect inside the guest. This confuses some software, such as Spack. Thus, we change :code:`$HOME` to :code:`/home/$USER`, unless :code:`--no-home` is specified, in which case it is left unchanged. * :code:`$PATH`: Newer Linux distributions replace some root-level directories, such as :code:`/bin`, with symlinks to their counterparts in :code:`/usr`. Some of these distributions (e.g., Fedora 24) have also dropped :code:`/bin` from the default :code:`$PATH`. This is a problem when the guest OS does *not* have a merged :code:`/usr` (e.g., Debian 8 “Jessie”). Thus, we add :code:`/bin` to :code:`$PATH` if it's not already present. Further reading: * `The case for the /usr Merge `_ * `Fedora `_ * `Debian `_ * :code:`$TMPDIR`: Unset, because this is almost certainly a host path, and that host path is made available in the guest at :code:`/tmp` unless :code:`--private-tmp` is given. Setting variables with :code:`--set-env` ---------------------------------------- The purpose of :code:`--set-env` is to set environment variables within the container. Values given replace any already in the environment (i.e., inherited from the host shell) or set by earlier :code:`--set-env`. This flag takes an optional argument with two possible forms: 1. **If the argument contains an equals sign** (:code:`=`, ASCII 61), that sets an environment variable directly. For example, to set :code:`FOO` to the string value :code:`bar`:: $ ch-run --set-env=FOO=bar ... Single straight quotes around the value (:code:`'`, ASCII 39) are stripped, though be aware that both single and double quotes are also interpreted by the shell. For example, this example is similar to the prior one; the double quotes are removed by the shell and the single quotes are removed by :code:`ch-run`:: $ ch-run --set-env="'BAZ=qux'" ... 2. **If the argument does not contain an equals sign**, it is a host path to a file containing zero or more variables using the same syntax as above (except with no prior shell processing). This file contains a sequence of assignments separated by newlines. Empty lines are ignored, and no comments are interpreted. (This syntax is designed to accept the output of :code:`printenv` and be easily produced by other simple mechanisms.) For example:: $ cat /tmp/env.txt FOO=bar BAZ='qux' $ ch-run --set-env=/tmp/env.txt ... For directory images only (because the file is read before containerizing), guest paths can be given by prepending the image path. 3. **If there is no argument**, the file :code:`/ch/environment` within the image is used. This file is commonly populated by :code:`ENV` instructions in the Dockerfile. For example, equivalently to form 2:: $ cat Dockerfile [...] ENV FOO=bar ENV BAZ=qux [...] $ ch-image build -t foo . $ ch-convert foo /var/tmp/foo.sqfs $ ch-run --set-env /var/tmp/foo.sqfs -- ... (Note the image path is interpreted correctly, not as the :code:`--set-env` argument.) At present, there is no way to use files other than :code:`/ch/environment` within SquashFS images. Environment variables are expanded for values that look like search paths, unless :code:`--env-no-expand` is given prior to :code:`--set-env`. In this case, the value is a sequence of zero or more possibly-empty items separated by colon (:code:`:`, ASCII 58). If an item begins with dollar sign (:code:`$`, ASCII 36), then the rest of the item is the name of an environment variable. If this variable is set to a non-empty value, that value is substituted for the item; otherwise (i.e., the variable is unset or the empty string), the item is deleted, including a delimiter colon. The purpose of omitting empty expansions is to avoid surprising behavior such as an empty element in :code:`$PATH` meaning `the current directory `_. For example, to set :code:`HOSTPATH` to the search path in the current shell (this is expanded by :code:`ch-run`, though letting the shell do it happens to be equivalent):: $ ch-run --set-env='HOSTPATH=$PATH' ... To prepend :code:`/opt/bin` to this current search path:: $ ch-run --set-env='PATH=/opt/bin:$PATH' ... To prepend :code:`/opt/bin` to the search path set by the Dockerfile, as retrieved from guest file :code:`/ch/environment` (here we really cannot let the shell expand :code:`$PATH`):: $ ch-run --set-env --set-env='PATH=/opt/bin:$PATH' ... Examples of valid assignment, assuming that environment variable :code:`BAR` is set to :code:`bar` and :code:`UNSET` is unset or set to the empty string: .. list-table:: :header-rows: 1 * - Assignment - Name - Value * - :code:`FOO=bar` - :code:`FOO` - :code:`bar` * - :code:`FOO=bar=baz` - :code:`FOO` - :code:`bar=baz` * - :code:`FLAGS=-march=foo -mtune=bar` - :code:`FLAGS` - :code:`-march=foo -mtune=bar` * - :code:`FLAGS='-march=foo -mtune=bar'` - :code:`FLAGS` - :code:`-march=foo -mtune=bar` * - :code:`FOO=$BAR` - :code:`FOO` - :code:`bar` * - :code:`FOO=$BAR:baz` - :code:`FOO` - :code:`bar:baz` * - :code:`FOO=` - :code:`FOO` - empty string * - :code:`FOO=$UNSET` - :code:`FOO` - empty string * - :code:`FOO=baz:$UNSET:qux` - :code:`FOO` - :code:`baz:qux` (not :code:`baz::qux`) * - :code:`FOO=:bar:baz::` - :code:`FOO` - :code:`:bar:baz::` * - :code:`FOO=''` - :code:`FOO` - empty string * - :code:`FOO=''''` - :code:`FOO` - :code:`''` (two single quotes) Example invalid assignments: .. list-table:: :header-rows: 1 * - Assignment - Problem * - :code:`FOO bar` - no equals separator * - :code:`=bar` - name cannot be empty Example valid assignments that are probably not what you want: .. Note: Plain leading space screws up ReST parser. We use ZERO WIDTH SPACE U+200B, then plain space. This will copy and paste incorrectly, but that seems unlikely. .. list-table:: :header-rows: 1 * - Assignment - Name - Value - Problem * - :code:`FOO="bar"` - :code:`FOO` - :code:`"bar"` - double quotes aren't stripped * - :code:`FOO=bar # baz` - :code:`FOO` - :code:`bar # baz` - comments not supported * - :code:`FOO=bar\tbaz` - :code:`FOO` - :code:`bar\tbaz` - backslashes are not special * - :code:`​ FOO=bar` - :code:`​ FOO` - :code:`bar` - leading space in key * - :code:`FOO= bar` - :code:`FOO` - :code:`​ bar` - leading space in value * - :code:`$FOO=bar` - :code:`$FOO` - :code:`bar` - variables not expanded in key * - :code:`FOO=$BAR baz:qux` - :code:`FOO` - :code:`qux` - variable :code:`BAR baz` not set Removing variables with :code:`--unset-env` ------------------------------------------- The purpose of :code:`--unset-env=GLOB` is to remove unwanted environment variables. The argument :code:`GLOB` is a glob pattern (`dialect `_ :code:`fnmatch(3)` with no flags); all variables with matching names are removed from the environment. .. warning:: Because the shell also interprets glob patterns, if any wildcard characters are in :code:`GLOB`, it is important to put it in single quotes to avoid surprises. :code:`GLOB` must be a non-empty string. Example 1: Remove the single environment variable :code:`FOO`:: $ export FOO=bar $ env | fgrep FOO FOO=bar $ ch-run --unset-env=FOO $CH_TEST_IMGDIR/chtest -- env | fgrep FOO $ Example 2: Hide from a container the fact that it's running in a Slurm allocation, by removing all variables beginning with :code:`SLURM`. You might want to do this to test an MPI program with one rank and no launcher:: $ salloc -N1 $ env | egrep '^SLURM' | wc 44 44 1092 $ ch-run $CH_TEST_IMGDIR/mpihello-openmpi -- /hello/hello [... long error message ...] $ ch-run --unset-env='SLURM*' $CH_TEST_IMGDIR/mpihello-openmpi -- /hello/hello 0: MPI version: Open MPI v3.1.3, package: Open MPI root@c897a83f6f92 Distribution, ident: 3.1.3, repo rev: v3.1.3, Oct 29, 2018 0: init ok cn001.localdomain, 1 ranks, userns 4026532530 0: send/receive ok 0: finalize ok Example 3: Clear the environment completely (remove all variables):: $ ch-run --unset-env='*' $CH_TEST_IMGDIR/chtest -- env $ Note that some programs, such as shells, set some environment variables even if started with no init files:: $ ch-run --unset-env='*' $CH_TEST_IMGDIR/debian9 -- bash --noprofile --norc -c env SHLVL=1 PWD=/ _=/usr/bin/env $ Examples ======== Run the command :code:`echo hello` inside a Charliecloud container using the unpacked image at :code:`/data/foo`:: $ ch-run /data/foo -- echo hello hello Run an MPI job that can use CMA to communicate:: $ srun ch-run --join /data/foo -- bar Syslog ====== By default, :code:`ch-run` logs its command line to `syslog `_. (This can be disabled by configuring with :code:`--disable-syslog`.) This includes: (1) the invoking real UID, (2) the number of command line arguments, and (3) the arguments, separated by spaces. For example:: Dec 10 18:19:08 mybox ch-run: uid=1000 args=7: ch-run -v /var/tmp/00_tiny -- echo hello "wor l}\$d" Logging is one of the first things done during program initialization, even before command line parsing. That is, almost all command lines are logged, even if erroneous, and there is no logging of program success or failure. Arguments are serialized with the following procedure. The purpose is to provide a human-readable reconstruction of the command line while also allowing each argument to be recovered byte-for-byte. .. Note: The next paragraph contains ​U+200B ZERO WIDTH SPACE after the backslash because backslash by itself won't build and two backslashes renders as two backslashes. * If an argument contains only printable ASCII bytes that are not whitespace, shell metacharacters, double quote (:code:`"`, ASCII 34 decimal), or backslash (:code:`\​`, ASCII 92), then log it unchanged. * Otherwise, (a) enclose the argument in double quotes and (b) backslash-escape double quotes, backslashes, and characters interpreted by Bash (including POSIX shells) within double quotes. The verbatim command line typed in the shell cannot be recovered, because not enough information is provided to UNIX programs. For example, :code:`echo  'foo'` is given to programs as a sequence of two arguments, :code:`echo` and :code:`foo`; the two spaces and single quotes are removed by the shell. The zero byte, ASCII NUL, cannot appear in arguments because it would terminate the string. Exit status =========== If there is an error during containerization, :code:`ch-run` exits with status non-zero. If the user command is started successfully, the exit status is that of the user command, with one exception: if the image is an internally mounted SquashFS filesystem and the user command is killed by a signal, the exit status is 1 regardless of the signal value. .. LocalWords: mtune NEWROOT hugetlbfs UsrMerge fusermount mybox IMG HOSTPATH charliecloud-0.26/doc/ch-ssh.rst000066400000000000000000000001761417231051300165640ustar00rootroot00000000000000:orphan: ch-ssh man page +++++++++++++++ .. include:: ./ch-ssh_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-ssh_desc.rst000066400000000000000000000013301417231051300175530ustar00rootroot00000000000000Synopsis ======== :: $ CH_RUN_ARGS="NEWROOT [ARG...]" $ ch-ssh [OPTION...] HOST CMD [ARG...] Description =========== Runs command :code:`CMD` in a Charliecloud container on remote host :code:`HOST`. Use the content of environment variable :code:`CH_RUN_ARGS` as the arguments to :code:`ch-run` on the remote host. .. note:: Words in :code:`CH_RUN_ARGS` are delimited by spaces only; it is not shell syntax. Example ======= On host bar.example.com, run the command :code:`echo hello` inside a Charliecloud container using the unpacked image at :code:`/data/foo` with starting directory :code:`/baz`:: $ hostname foo $ export CH_RUN_ARGS='--cd /baz /data/foo' $ ch-ssh bar.example.com -- hostname bar charliecloud-0.26/doc/ch-tar2dir.rst000066400000000000000000000002121417231051300173250ustar00rootroot00000000000000:orphan: ch-tar2dir man page +++++++++++++++++++ .. include:: ./ch-tar2dir_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-tar2dir_desc.rst000066400000000000000000000031551417231051300203340ustar00rootroot00000000000000Synopsis ======== :: $ ch-tar2dir TARBALL DIR Description =========== .. warning:: This script is deprecated in favor of :code:`ch-convert`. It will be removed in the next release. Extract the tarball :code:`TARBALL` into a subdirectory of :code:`DIR`. :code:`TARBALL` must contain a Linux filesystem image, e.g. as created by :code:`ch-builder2tar`, and be compressed with :code:`gzip` or :code:`xz`. If :code:`TARBALL` has no extension, try appending :code:`.tar.gz` and :code:`.tar.xz`. Inside :code:`DIR`, a subdirectory will be created whose name corresponds to the name of the tarball with :code:`.tar.gz` or other suffix removed. If such a directory exists already and appears to be a Charliecloud container image, it is removed and replaced. If the existing directory doesn't appear to be a container image, the script aborts with an error. Additional arguments: :code:`--help` print help and exit :code:`--version` print version and exit .. warning:: Placing :code:`DIR` on a shared file system can cause significant metadata load on the file system servers. This can result in poor performance for you and all your colleagues who use the same file system. Please consult your site admin for a suitable location. Example ======= :: $ ls -lh /var/tmp total 57M -rw-r----- 1 reidpr reidpr 57M Feb 13 16:14 hello.tar.gz $ ch-tar2dir /var/tmp/hello.tar.gz /var/tmp creating new image /var/tmp/hello /var/tmp/hello unpacked ok $ ls -lh /var/tmp total 57M drwxr-x--- 22 reidpr reidpr 4.0K Feb 13 16:29 hello -rw-r----- 1 reidpr reidpr 57M Feb 13 16:14 hello.tar.gz charliecloud-0.26/doc/ch-test.rst000066400000000000000000000002071417231051300167410ustar00rootroot00000000000000:orphan: ch-test man page ++++++++++++++++++++++ .. include:: ./ch-test_desc.rst .. include:: ./bugs.rst .. include:: ./see_also.rst charliecloud-0.26/doc/ch-test_desc.rst000066400000000000000000000223531417231051300177450ustar00rootroot00000000000000Synopsis ======== :: $ ch-test [PHASE] [--scope SCOPE] [--pack-fmt FMT] [ARGS] Description =========== Charliecloud comes with a comprehensive test suite that exercises the container workflow itself as well as a few example applications. :code:`ch-test` coordinates running the test suite. While the CLI has lots of options, the defaults are reasonable, and bare :code:`ch-test` will give useful results in a few minutes on single-node, internet-connected systems with a few GB available in :code:`/var/tmp`. The test suite requires a few GB (standard scope) or tens of GB (full scope) of storage for test fixtures: * *Builder storage* (e.g., layer cache). This goes wherever the builder puts it. * *Packed images directory*: image tarballs or SquashFS files. * *Unpacked images directory*. Images are unpacked into and then run from here. * *Filesystem permissions* directories. These are used to test that the kernel is enforcing permissions correctly. Note that this exercises the kernel, not Charliecloud, and can be omitted from routine Charliecloud testing. The first three are created when needed if they don't exist, while the filesystem permissions fixtures must be created manually, in order to accommodate configurations where sudo is not available via the same login path used for running tests. The packed and unpacked image directories specified for testing are volatile. The contents of these directories are deleted before the build and run phases, respectively. In all four cases, when creating directories, only the final path component is created. Parent directories must already exist, i.e., :code:`ch-test` uses the behavior of :code:`mkdir` rather than :code:`mkdir -p`. Some of the tests exercise parallel functionality. If :code:`ch-test` is run on a single node, multiple cores will be used; if in a Slurm allocation, multiple nodes too. The subset of tests to run mostly splits along two key dimensions. The *phase* is which parts of the workflow to run. Different parts of the workflow can be tested on different systems by copying the necessary artifacts between them, e.g. by building images on one system and running them on another. The *scope* allows trading off thoroughness versus time. :code:`PHASE` must be one of the following: :code:`build` Image building and associated functionality, with the selected builder. :code:`run` Running containers and associated functionality. This requires a packed images directory produced by a successful :code:`build` phase, which can be copied from the build system if it's not also the run system. :code:`examples` Example applications. Requires an unpacked images directory produced by a successful :code:`run` phase. :code:`all` Execute phases :code:`build`, :code:`run`, and :code:`examples`, in that order. :code:`mk-perm-dirs` Create the filesystem permissions directories. Requires :code:`--perm-dirs`. :code:`clean` Delete automatically-generated test files, and packed and unpacked image directories. :code:`rm-perm-dirs` Remove the filesystem permissions directories. Requires :code:`--perm-dirs`. :code:`-f`, :code:`--file FILE[:TEST]` Run the tests in the given file only, which can be an arbitrary :code:`.bats` file, except for :code:`test.bats` under :code:`examples`, where you must specify the corresponding Dockerfile or :code:`Build` file instead. This is somewhat brittle and typically used for development or debugging. For example, it does not check whether the pre-requisites of whatever is in the file are satisfied. Often running :code:`build` and :code:`run` first is sufficient, but this varies. If :code:`TEST` is also given, then run only tests with name containing that string, skipping the others. The separator is a literal colon. If the string contains shell metacharacters such as space, you'll need to quote the argument to protect it from the shell. Scope is specified with: :code:`-s`, :code:`--scope SCOPE` :code:`SCOPE` must be one of the following: * :code:`quick`: Most important subset of workflow. Handy for development. * :code:`standard`: All tested workflow functionality and a selection of more important examples. (Default.) * :code:`full`: All available tests, including all examples. Image format is specified with: :code:`--pack-fmt FMT` :code:`FMT` must be one of the following: * :code:`squash-mount` or 🐘: SquashFS archive, run directly from the archive using :code:`ch-run`'s internal SquashFUSE functionality. In this mode, tests that require writing to the image are skipped. * :code:`tar-unpack` or 📠: Tarball, and the images are unpacked before running. * :code:`squash-unpack` or 🎃: SquashFS, and the images are unpacked before running. Default: :code:`$CH_TEST_PACK_FMT` if set. Otherwise, if :code:`mksquashfs(1)` is available and :code:`ch-run` was built with :code:`libsquashfuse` support, then :code:`squash-mount`, else :code:`tar-unpack`. Additional arguments: :code:`-b`, :code:`--builder BUILDER` Image builder to use. See :code:`ch-build(1)` for how the default is selected. :code:`--dry-run` Print summary of what would be tested and then exit. :code:`-h`, :code:`--help` Print usage and then exit. :code:`--img-dir DIR` Set unpacked images directory to :code:`DIR`. In a multi-node allocation, this directory may not be shared between nodes. Default: :code:`$CH_TEST_IMGDIR` if set; otherwise :code:`/var/tmp/img`. :code:`--lustre DIR` Use :code:`DIR` for run-phase Lustre tests. Default: :code:`CH_TEST_LUSTREDIR` if set; otherwise skip them. The tests will create, populate, and delete a new subdirectory under :code:`DIR`, leaving everything else in :code:`DIR` untouched. :code:`--pack-dir DIR` Set packed images directory to :code:`DIR`. Default: :code:`$CH_TEST_TARDIR` if set; otherwise :code:`/var/tmp/pack`. :code:`--pedantic (yes|no)` Some tests require configurations that are very specific (e.g., being a member of at least two groups) or unusual (e.g., sudo to a non-root group). If :code:`yes`, then fail if the requirement is not met; if :code:`no`, then skip. The default is :code:`yes` for CI environments or people listed in :code:`README.md`, :code:`no` otherwise. If :code:`yes` and sudo seems to be available, implies :code:`--sudo`. :code:`--perm-dir DIR` Add :code:`DIR` to filesystem permission fixture directories; can be specified multiple times. We recommend one such directory per mounted filesystem type whose kernel module you do not trust; e.g., you probably don't need to test your :code:`tmpfs`\ es, but out-of-tree filesystems very likely need this. Implies :code:`--sudo`. Default: :code:`CH_TEST_PERMDIRS` if set; otherwise skip the filesystem permissions tests. :code:`--sudo` Enable things that require sudo, such as certain privilege escalation tests and creating/removing the filesystem permissions fixtures. Requires generic :code:`sudo` capabilities. Note that the Docker builder uses :code:`sudo docker` even without this option. Exit status =========== Zero if all tests passed; non-zero if any failed. For setup and teardown phases, zero if everything was created or deleted correctly, non-zero otherwise. Bugs ==== Bats will wait until all descendant processes finish before exiting, so if you get into a failure mode where a test sequence doesn't clean up all its processes, :code:`ch-test` will hang. Examples ======== Many systems can simply use the defaults. To run the :code:`build`, :code:`run`, and :code:`examples` phases on a single system, without the filesystem permissions tests:: $ ch-test ch-test version 0.12 ch-run: 0.12 /usr/local/bin/ch-run bats: 0.4.0 /usr/bin/bats tests: /usr/local/libexec/charliecloud/test phase: build run examples scope: standard (default) builder: docker (default) use generic sudo: no (default) unpacked images dir: /var/tmp/img (default) packed images dir: /var/tmp/tar (default) fs permissions dirs: skip (default) checking namespaces ... ok checking builder ... found: /usr/bin/docker 19.03.2 bats build.bats build_auto.bats build_post.bats ✓ documentation seems sane ✓ version number seems sane [...] All tests passed. The next example is for a more complex setup like you might find in HPC centers: * Non-default fixture directories. * Non-default scope. * Different build and run systems. * Run the filesystem permissions tests. Output has been omitted. :: (mybox)$ ssh hpc-admin (hpc-admin)$ ch-test mk-perm-dirs --perm-dir /scratch/$USER/perms \ --perm-dir /home/$USER/perms (hpc-admin)$ exit (mybox)$ ch-test build --scope full (mybox)$ scp -r /var/tmp/pack hpc:/scratch/$USER/pack (mybox)$ ssh hpc (hpc)$ salloc -N2 (cn001)$ export CH_TEST_TARDIR=/scratch/$USER/pack (cn001)$ export CH_TEST_IMGDIR=/local/tmp (cn001)$ export CH_TEST_PERMDIRS="/scratch/$USER/perms /home/$USER/perms" (cn001)$ export CH_TEST_SCOPE=full (cn001)$ ch-test run (cn001)$ ch-test examples .. LocalWords: fmt img charliecloud-0.26/doc/charliecloud.rst000066400000000000000000000010131417231051300200240ustar00rootroot00000000000000:orphan: charliecloud man page +++++++++++++++++++++ .. include:: ../README.rst .. include:: ./bugs.rst See also -------- ch-build(1), ch-build2dir(1), ch-builder2squash(1), ch-builder2tar(1), ch-checkns(1), ch-convert(1), ch-dir2squash(1), ch-fromhost(1), ch-image(1), ch-pull2dir(1), ch-pull2tar(1), ch-run(1), ch-run-oci(1), ch-ssh(1), ch-tar2dir(1), ch-test(1), Full documentation at: https://hpc.github.io/charliecloud Note ---- These man pages are for Charliecloud version |release| (Git commit |version|). charliecloud-0.26/doc/command-usage.rst000066400000000000000000000044261417231051300201210ustar00rootroot00000000000000Charliecloud command reference ****************************** This section is a comprehensive description of the usage and arguments of the Charliecloud commands. Its content is identical to the commands' man pages. .. contents:: :depth: 1 :local: .. Note the unusual heading level. This is so the man page .rst files can still use double underscores as their top-level headers. You will also find this in the man page .rst files. ch-build ++++++++ Build an image and place it in the builder's back-end storage. .. include:: ./ch-build_desc.rst ch-build2dir ++++++++++++ Build a Charliecloud image from Dockerfile and unpack it into a directory. .. include:: ./ch-build2dir_desc.rst ch-builder2squash +++++++++++++++++ Flatten a builder image into a Charliecloud SquashFS file. .. include:: ./ch-builder2squash_desc.rst ch-builder2tar ++++++++++++++ Flatten a builder image into a Charliecloud image tarball. .. include:: ./ch-builder2tar_desc.rst ch-checkns ++++++++++ Check :code:`ch-run` prerequisites, e.g., namespaces and :code:`pivot_root(2)`. .. include:: ./ch-checkns_desc.rst ch-convert ++++++++++ Convert an image from one format to another. .. include:: ./ch-convert_desc.rst ch-dir2squash +++++++++++++ Create a SquashFS file from an image directory. .. include:: ./ch-dir2squash_desc.rst ch-fromhost +++++++++++ Inject files from the host into an image directory, with various magic. .. include:: ./ch-fromhost_desc.rst ch-image ++++++++ Build and manage images; completely unprivileged. .. include:: ./ch-image_desc.rst ch-pull2dir +++++++++++ Pull image from a Docker Hub and unpack into directory. .. include:: ./ch-pull2dir_desc.rst ch-pull2tar +++++++++++ Pull image from a Docker Hub and flatten into tarball. .. include:: ./ch-pull2tar_desc.rst .. _man_ch-run: ch-run ++++++ Run a command in a Charliecloud container. .. include:: ./ch-run_desc.rst ch-run-oci ++++++++++ OCI wrapper for :code:`ch-run`. .. include:: ./ch-run-oci_desc.rst ch-ssh ++++++ Run a remote command in a Charliecloud container. .. include:: ./ch-ssh_desc.rst ch-tar2dir ++++++++++ Unpack an image tarball into a directory. .. include:: ./ch-tar2dir_desc.rst .. _ch-test: ch-test +++++++ Run some or all of the Charliecloud test suite. .. include:: ./ch-test_desc.rst charliecloud-0.26/doc/conf.py000066400000000000000000000253371417231051300161520ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # QUAC documentation build configuration file, created by # sphinx-quickstart on Wed Feb 20 12:04:35 2013. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.4.9' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.mathjax', 'sphinx.ext.todo'] todo_include_todos = True # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Charliecloud' copyright = u'2014–2021, Triad National Security, LLC' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = open("../lib/version.txt", "r").read().rstrip() # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. today_fmt = '%Y-%m-%d %H:%M %Z' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ["doctrees", "html", "man"] # FIXME: Workaround for older Sphinx that barf with: # # WARNING: document isn't included in any toctree # # on files included via ".. include::'. I believe this was fixed in 1.4.3 and # the relevant issue is: https://github.com/sphinx-doc/sphinx/issues/2603 exclude_patterns += ["*_desc.rst", "_deps.rst", "bugs.rst", "py_env.rst", "see_also.rst"] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. #pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'sphinx_rtd_theme' # FIXME: Workaround for older versions of Sphinx. This is not needed in 1.8.5, # but it is needed in 1.2.3. I don't know where the boundary is. We embed it # in try/except so that "docs-sane" can import the file too. try: import sphinx_rtd_theme html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] except ImportError: pass # error caught elsewhere highlight_language = 'console' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {'bodyfont': 'serif', # for agogo # 'pagewidth': '60em', # 'documentwidth': '43em', # 'sidebarwidth': '17em', # 'textalign':'left'} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = "logo-sidebar.png" # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = "favicon.ico" # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. html_use_smartypants = True # deprecated in 1.6.6 smartquotes = True smartquotes_action = "qBD" # quotes, en and em dashes, but not ellipses # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. html_domain_indices = False # If false, no index is generated. html_use_index = False # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. html_show_sourcelink = False # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. html_show_sphinx = False # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'charliedoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'charlie.tex', u'Charliecloud Documentation', u'Reid Priedhorsky, Tim Randles, and others', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # Put all man pages in one directory regardless of section. Default changes to # True in Sphinx 4.0, which broke our builds (#1060). man_make_section_directory = False # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ("charliecloud", "charliecloud", "Lightweight user-defined software stacks for high-performance computing", [], 7), ("ch-build", "ch-build", "Build an image and place it in the builder's back-end storage", [], 1), ("ch-build2dir", "ch-build2dir", "Build a Charliecloud image from Dockerfile and unpack it into a directory", [], 1), ("ch-builder2squash", "ch-builder2squash", "Flatten a builder image into a Charliecloud SquashFS file", [], 1), ("ch-builder2tar", "ch-builder2tar", "Flatten a builder image into a Charliecloud image tarball", [], 1), ("ch-checkns", "ch-checkns", 'Check "ch-run" prerequisites, e.g., namespaces and "pivot_root(2)"', [], 1), ("ch-convert", "ch-convert", 'Convert an image from one format to another', [], 1), ("ch-dir2squash", "ch-dir2squash", "Create a SquashFS file from an image directory", [], 1), ("ch-fromhost", "ch-fromhost", "Inject files from the host into an image directory, with various magic", [], 1), ("ch-image", "ch-image", "Build and manage images; completely unprivileged", [], 1), ("ch-pull2dir", "ch-pull2dir", "Pull image from a Docker Hub and unpack into directory", [], 1), ("ch-pull2tar", "ch-pull2tar", "Pull image from a Docker Hub and flatten into tarball", [], 1), ("ch-run", "ch-run", "Run a command in a Charliecloud container", [], 1), ("ch-run-oci", "ch-run-oci", 'OCI wrapper for "ch-run"', [], 1), ("ch-ssh", "ch-ssh", "Run a remote command in a Charliecloud container", [], 1), ("ch-tar2dir", "ch-tar2dir", "Unpack an image tarball into a directory", [], 1), ("ch-test", "ch-test", "Run some or all of the Charliecloud test suite", [], 1), ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'Charliecloud', u'Charliecloud Documentation', u'Reid Priedhorsky, Tim Randles, and others', 'Charliecloud', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' charliecloud-0.26/doc/dev.rst000066400000000000000000001340101417231051300161500ustar00rootroot00000000000000Contributor's guide ******************* This section is notes on contributing to Charliecloud development. Currently, it is messy and incomplete. Patches welcome! It documents public stuff only. If you are on the core team at LANL, also consult the internal documentation and other resources. .. contents:: :depth: 2 :local: .. note:: We're interested in and will consider all good-faith contributions. While it does make things easier and faster if you follow the guidelines here, they are not required. We'll either clean it up for you or walk you through any necessary changes. Workflow ======== We try to keep procedures and the Git branching model simple. Right now, we're pretty similar to Scott Chacon's “`GitHub Flow `_”: Master is stable; work on short-lived topic branches; use pull requests to ask for merging; keep issues organized with tags and milestones. The standard workflow is: 1. Propose a change in an issue. 2. Tag the issue with its kind (bug, enhancement, question). 3. Get consensus on what to do and how to do it, with key information recorded in the issue. 4. Submit a PR that refers to the issue. 5. Assign the issue to a milestone. 6. Review/iterate. 7. Project lead merges. Core team members may deliberate in public on GitHub or internally, whichever they are comfortable with, making sure to follow LANL policy and taking into account the probable desires of the recipient as well. Milestones ---------- We use milestones to organize what we plan to do next and what happened in a given release. There are two groups of milestones: * :code:`next` contains the issues that we plan to complete soon but have not yet landed on a specific release. Generally, we avoid putting PRs in here because of their ticking clocks. * Each release has a milestone. These are dated with the target date for that release. We put an issue in when it has actually landed in that release or we are willing to delay that release until it does. We put a PR in when we think it's reasonably likely to be merged for that release. If an issue is assigned to a person, that means they are actively leading the work on it or will do so in the near future. Typically this happens when the issue ends up in :code:`next`. Issues in a status of "I'll get to this later" should not be assigned to a person. Peer review ----------- **Issues and pull requests.** The standard workflow is to introduce a change in an issue, get consensus on what to do, and then create a *draft* `pull request `_ (PR) for the implementation. The issue, not the PR, should be tagged and milestoned so a given change shows up only once in the various views. If consensus is obtained through other means (e.g., in-person discussion), then open a PR directly. In this case, the PR should be tagged and milestoned, since there is no issue. **Address a single concern.** When possible, issues and PRs should address completely one self-contained change. If there are multiple concerns, make separate issues and/or PRs. For example, PRs should not tidy unrelated code, and non-essential complications should be split into a follow-on issue. **Documentation and tests first.** The best practice for significant changes is to draft documentation and/or tests first, get feedback on that, and then implement the code. Reviews of the form "you need a completely different approach" are no fun. **Tests must pass.** PRs will not be merged until they pass the tests. While this most saliently includes CI, the tests should also pass on your development box as well as all relevant clusters (if appropriate for the changes). **No close keywords in PRs.** While GitHub will interpret issue-closing keywords (variations on `"closes", "fixes", and "resolves" `_) in PR descriptions, don't use this feature, because often the specific issues a PR closes change over time, and we don't want to have to edit the description to deal with that. We also want this information in only one place (the commit log). Instead, use "addresses", and we'll edit the keywords into the commit message(s) at merge time if needed. **PR review procedure.** When your draft PR is ready for review — which may or may not be when you want it considered for merging! — do one or both of: * Request review from the person(s) you want to look at it. If you think it may be ready for merge, that should include the project lead. The purpose of requesting review is so the person is notified you need their help. * If you think it may be ready to merge (even if you're not sure), then also mark the PR "ready to review". The purpose of this is so the project lead can see which PRs are ready to consider for merging (green icon) and which are not (gray icon). If the project lead decides it's ready, they will merge; otherwise, they'll change it back to draft. In both cases, the person from whom you requested review now owns the branch, and you should stop work on it unless and until you get it back. Do not hesitate to pester your reviewer if you haven't heard back promptly, say within 24 hours. *Special case 1:* Often, the review consists of code changes, and the reviewer will want you to assess those changes. GitHub doesn't let you request review from the PR submitter, so this must be done with a comment, either online or offline. *Special case 2:* GitHub will not let you request review from external people, so this needs to be done with a comment too. Generally you should ask the original bug reporter to review, to make sure it solves their problem. **Use multi-comment reviews.** Review comments should all be packaged up into a single review; click *Start a review* rather than *Add single comment*. Then the PR author gets only a single notification instead of one for every comment you make, and it's clear when they branch is theirs again. Branching and merging --------------------- **Don't commit directly to master.** Even the project lead doesn't do this. While it may appear that some trivial fixes are being committed to the master directly, what's really happening is that these are prototyped on a branch and then fast-forward merged after the tests pass. **Merging to master.** Only the project lead should do this. **Branch merge procedure.** Generally, branches are merged in the GitHub web interface with the *Squash and merge* button, which is :code:`git merge --squash` under the hood. This squashes the branch into a single commit on master. Commit message example:: PR #268 from @j-ogas: remove ch-docker-run (closes #258) If the branch closes multiple issues and it's reasonable to separate those issues into independent commits, then the branch is rebased, interactively squashed, and force-pushed into a tidy history with close instructions, then merged in the web interface with *Create a merge commit*. Example history and commit messages:: * 18aa2b8 merge PR #254 from @j-ogas and me: Dockerfile.openmpi: use snapshot |\ | * 79fa89a upgrade to ibverbs 20.0-1 (closes #250) | * 385ce16 Dockerfile.debian9: use snapshot.debian.org (closes #249) |/ * 322df2f ... The reason to prefer merge via web interface is that GitHub often doesn't notice merges done on the command line. After merge, the branch is deleted via the web interface. **Branch history tidiness.** Commit frequently at semantically relevant times, and keep in mind that this history will probably be squashed per above. It is not necessary to rebase or squash to keep branch history tidy. But, don't go crazy. Commit messages like "try 2" and "fix CI again" are a bad sign; so are carefully proofread ones. Commit messages that are brief, technically relevant, and quick to write are what you want on feature branches. **Keep branches up to date.** Merge master into your branch, rather than rebasing. This lets you resolve conflicts once rather than multiple times as rebase works through a stack of commits. Note that PRs with merge conflicts will generally not be merged. Resolve conflicts before asking for merge. **Remove obsolete branches.** Keep your repo free of old branches with :code:`git branch -d` (or :code:`-D`) and :code:`git fetch --prune --all`. Miscellaneous issue and pull request notes ------------------------------------------ **Acknowledging issues.** Issues and PRs submitted from outside should be acknowledged promptly, including adding or correcting tags. **Closing issues.** We close issues when we've taken the requested action, decided not to take action, resolved the question, or actively determined an issue is obsolete. It is OK for "stale" issues to sit around indefinitely awaiting this. Unlike many projects, we do not automatically close issues just because they're old. **Closing PR.** Stale PRs, on the other hand, are to be avoided due to bit rot. We try to either merge or reject PRs in a timely manner. **Re-opening issues.** Closed issues can be re-opened if new information arises, for example a :code:`worksforme` issue with new reproduction steps. Continuous integration testing ------------------------------ **Quality of testing.** Tagged versions currently get more testing for various reasons. We are working to improve testing for normal commits on master, but full parity is probably unlikely. **Cycles budget.** The resource is there for your use, so take advantage of it, but be mindful of the various costs of this compute time. Things you can do include testing locally first, cancelling jobs you know will fail or that won't give you additional information, and not pushing every commit (CI tests only the most recent commit in a pushed group). **Iterating.** When trying to make CI happy, force-push or squash-merge. Don't submit a PR with half a dozen "fix CI" commits. **Purging Docker cache.** :code:`misc/docker-clean.sh` can be used to purge your Docker cache, either by removing all tags or deleting all containers and images. The former is generally preferred, as it lets you update only those base images that have actually changed (the ones that haven't will be re-tagged). Issue labeling -------------- We use the following labels (a.k.a. tags) to organize issues. Each issue (or stand-alone PR) should have label(s) from every category, with the exception of disposition which only applies to closed issues. Charliecloud team members should label their own issues. Members of the general public are more than welcome to label their issues if they like, but in practice this is rare, which is fine. Whoever triages the incoming issue should add or adjust labels as needed. .. note:: This scheme is designed to organize open issues only. There have been previous schemes, and we have not re-labeled closed issues. What kind of change is it? ~~~~~~~~~~~~~~~~~~~~~~~~~~ Choose *one type* from: :code:`bug` Something doesn't work; e.g., it doesn't work as intended or it was mis-designed. This includes usability and documentation problems. Steps to reproduce with expected and actual behavior are almost always very helpful. :code:`enhancement` Things work, but it would be better if something was different. For example, a new feature proposal, an improvement in how a feature works, or clarifying an error message. Steps to reproduce with desired and current behavior are often helpful. :code:`refactor` Change that will improve Charliecloud but does not materially affect user-visible behavior. Note this doesn't mean "invisible to the user"; even user-facing documentation or logging changes could feasibly be this, if they are more cleanup-oriented. How important/urgent is it? ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Choose *one priority* from: :code:`high` High priority. :code:`medium` Medium priority. :code:`low` Low priority. Note: Unfortunately, due to resource limitations, complex issues here are likely to wait a long time, perhaps forever. If that makes you particularly sad on a particular issue, please comment to say why. Maybe it's mis-prioritized. :code:`deferred` No plans to do this, but not rejected. These issues stay open, because we do not consider the deferred state resolved. Submitting PRs on these issues is risky; you probably want to argue successfully that it should be done before starting work on it. Priority is indeed required, though it can be tricky because the levels are fuzzy. Do not hesitate to ask for advice. Considerations include: is customer or development work blocked by the issue; how valuable is the issue for customers; does the issue affect key customers; how many customers are affected; how much of Charliecloud is affected; what is the workaround like, if any. Difficulty of the issue is not a factor in priority, i.e., here we are trying to express benefit, not cost/benefit ratio. Perhaps the `Debian bug severity levels `_ provide inspiration. The number of :code:`high` priority issues should be relatively low. In part because priority is quite imprecise, issues are not a priority queue, i.e., we do work on lower-priority issues while higher-priority ones are still open. Related to this, issues do often move between priority levels. In particular, if you think we picked the wrong priority level, please say so. What part of Charliecloud is affected? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Choose *one or more components* from: :code:`runtime` The container runtime itself; largely :code:`ch-run`. :code:`image` Image building and interaction with image registries; largely :code:`ch-image`. (Not to be confused with image management tasks done by glue code.) :code:`glue` The “glue” that ties the runtime and image management (:code:`ch-image` or another builder) together. Largely shell scripts in :code:`bin`. :code:`install` Charliecloud build & install system, packaging, etc. (Not to be confused with image building.) :code:`doc` Documentation. :code:`test` Test suite and examples. :code:`misc` Everything else. Do not combine with another component. Special considerations ~~~~~~~~~~~~~~~~~~~~~~ Choose *one or more extras* from: :code:`blocked` We can't do this yet because something else needs to happen first. If that something is another issue, mention it in a comment. :code:`hpc` Related specifically to HPC and HPC scaling considerations; e.g., interactions with job schedulers. :code:`uncertain` Course of action is unclear. For example: is the feature a good idea, what is a good approach to solve the bug, what additional information is needed. :code:`usability` Affects usability of any part of Charliecloud, including documentation and project organization. Why was it closed? ~~~~~~~~~~~~~~~~~~ If the issue was resolved (i.e., bug fixed or enhancement/refactoring implemented), there is no disposition tag. Otherwise, to explain why not, choose *one disposition* from: :code:`cantfix` The issue is not something we can resolve. Typically problems with other software, problems with containers in general that we can't work around, or not actionable due to clarity or other reasons. *Use caution when blaming a problem on user error. Often (or usually) there is a documentation or usability bug that caused the "user error".* :code:`discussion` Converted to a discussion. The most common use is when someone asks a question rather than making a request for some change. :code:`duplicate` Same as some other issue. In addition to this tag, duplicates should refer to the other issue in a comment to record the link. Of the duplicates, the better one should stay open (e.g., clearer reproduction steps); if they are roughly equal in quality, the older one should stay open. :code:`moot` No longer relevant. Examples: withdrawn by reporter, fixed in current version (use :code:`duplicate` instead if it applies though), obsoleted by change in plans. :code:`wontfix` We are not going to do this, and we won't merge PRs. Sometimes you'll want to tag and then wait a few days before closing, to allow for further discussion to catch mistaken tags. :code:`worksforme` We cannot reproduce a bug, and it seems unlikely this will change given available information. Typically you'll want to tag, then wait a few days for clarification before closing. Bugs closed with this tag that do gain a reproducer later should definitely be re-opened. For some bugs, it really feels like they should be reproducible but we're missing it somehow; such bugs should be left open in hopes of new insight arising. Deprecated labels ~~~~~~~~~~~~~~~~~ You might see these on old issues, but they are no longer in use. * :code:`help wanted`: This tended to get stale and wasn't generating any leads. * :code:`key issue`: Replaced by priority labels. * :code:`question`: Replaced by Discussions. (If you report a bug that seems to be a discussion, we'll be happy to convert it to you.) Test suite ========== Timing the tests ---------------- The :code:`ts` utility from :code:`moreutils` is quite handy. The following prepends each line with the elapsed time since the previous line:: $ ch-test -s quick | ts -i '%M:%.S' Note: a skipped test isn't free; I see ~0.15 seconds to do a skip. :code:`ch-test` complains about inconsistent versions ----------------------------------------------------- There are multiple ways to ask Charliecloud for its version number. These should all give the same result. If they don't, :code:`ch-test` will fail. Typically, something needs to be rebuilt. Recall that :code:`configure` contains the version number as a constant, so a common way to get into this situation is to change Git branches without rebuilding it. Charliecloud is small enough to just rebuild everything with:: $ ./autogen.sh && ./configure && make clean && make Special images -------------- For images not needed after completion of a test, tag them :code:`tmpimg`. This leaves only one extra image at the end of the test suite. Writing a test image using the standard workflow ------------------------------------------------ Summary ~~~~~~~ The Charliecloud test suite has a workflow that can build images by two methods: 1. From a Dockerfile, using :code:`ch-build`. 2. By running a custom script. To create an image that will be built and unpacked and/or mounted, create a file in :code:`examples` (if the image recipe is useful as an example) or :code:`test` (if not) called :code:`{Dockerfile,Build}.foo`. This will create an image tagged :code:`foo`. Additional tests can be added to the test suite Bats files. To create an image with its own tests, documentation, etc., create a directory in :code:`examples`. In this directory, place :code:`{Dockerfile,Build}[.foo]` to build the image and :code:`test.bats` with your tests. For example, the file :code:`examples/foo/Dockerfile` will create an image tagged :code:`foo`, and :code:`examples/foo/Dockerfile.bar` tagged :code:`foo-bar`. These images also get the build and unpack/mount tests. Additional directories can be symlinked into :code:`examples` and will be integrated into the test suite. This allows you to create a site-specific test suite. :code:`ch-test` finds tests at any directory depth; e.g. :code:`examples/foo/bar/Dockerfile.baz` will create a test image tagged :code:`bar-baz`. Image tags in the test suite must be unique. Order of processing; within each item, alphabetical order: 1. Dockerfiles in :code:`test`. 2. :code:`Build` files in :code:`test`. 3. Dockerfiles in :code:`examples`. 4. :code:`Build` files in :code:`examples`. The purpose of doing :code:`Build` second is so they can leverage what has already been built by a Dockerfile, which is often more straightforward. How to specify when to include and exclude a test image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Each of these image build files must specify its scope for building and running, which must be greater than or equal than the scope of all tests in any corresponding :code:`.bats` files. Exactly one of the following strings must appear: .. code-block:: none ch-test-scope: quick ch-test-scope: standard ch-test-scope: full Other stuff on the line (e.g., comment syntax) is ignored. Optional test modification directives are: :code:`ch-test-arch-exclude: ARCH` If the output of :code:`uname -m` matches :code:`ARCH`, skip the file. :code:`ch-test-builder-exclude: BUILDER` If using :code:`BUILDER`, skip the file. :code:`ch-test-builder-include: BUILDER` If specified, run only if using :code:`BUILDER`. Can be repeated to include multiple builders. If specified zero times, all builders are included. :code:`ch-test-need-sudo` Run only if user has sudo. How to write a :code:`Dockerfile` recipe ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It's a standard Dockerfile. How to write a :code:`Build` recipe ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is an arbitrary script or program that builds the image. It gets three command line arguments: * :code:`$1`: Absolute path to directory containing :code:`Build`. * :code:`$2`: Absolute path and name of output image, without extension. This can be either: * Tarball compressed with gzip or xz; append :code:`.tar.gz` or :code:`.tar.xz` to :code:`$2`. If :code:`ch-test --pack-fmt=squash`, then this tarball will be unpacked and repacked as a SquashFS. Therefore, only use tarball output if the image build process naturally produces it and you would have to unpack it to get a directory (e.g., :code:`docker export`). * Directory; use :code:`$2` unchanged. The contents of this directory will be packed without any enclosing directory, so if you want an enclosing directory, include one. Hidden (dot) files in :code:`$2` will be ignored. * :code:`$3`: Absolute path to temporary directory for use by the script. This can be used for whatever and need no be cleaned up; the test harness will delete it. Other requirements: * The script may write only in two directories: (a) the parent directory of :code:`$2` and (b) :code:`$3`. Specifically, it may not write to the current working directory. Everything written to the parent directory of :code:`$2` must have a name starting with :code:`$(basename $2)`. * The first entry in :code:`$PATH` will be the Charliecloud under test, i.e., bare :code:`ch-*` commands will be the right ones. * Any programming language is permitted. To be included in the Charliecloud source code, a language already in the test suite dependencies is required. * The script must test for its dependencies and fail with appropriate error message and exit code if something is missing. To be included in the Charliecloud source code, all dependencies must be something we are willing to install and test. * Exit codes: * 0: Image successfully created. * 65: One or more dependencies were not met. * 126 or 127: No interpreter available for script language (the shell takes care of this). * else: An error occurred. Building RPMs ============= We maintain :code:`.spec` files and infrastructure for building RPMs in the Charliecloud source code. This is for two purposes: 1. We maintain our own Fedora RPMs (see `packaging guidelines `_). 2. We want to be able to build an RPM of any commit. Item 2 is tested; i.e., if you break the RPM build, the test suite will fail. This section describes how to build the RPMs and the pain we've hopefully abstracted away. Dependencies ------------ * charliecloud * Python 3.6+ * Either: * the provided example :code:`centos7` or :code:`centos8` image * a RHEL/CentOS 7 or newer container image with (note there are different python version names for the listed packages in RHEL/CentOS 8): * autoconf * automake * gcc * make * python36 * python36-sphinx * python36-sphinx_rtd_theme * rpm-build * rpmlint * rsync :code:`rpmbuild` wrapper script ------------------------------- While building the Charliecloud RPMs is not too weird, we provide a script to streamline it. The purpose is to (a) make it easy to build versions not matching the working directory, (b) use an arbitrary :code:`rpmbuild` directory, and (c) build in a Charliecloud container for non-RPM-based environments. The script must be run from the root of a Charliecloud Git working directory. Usage:: $ packaging/fedora/build [OPTIONS] IMAGE VERSION Options: * :code:`--install` : Install the RPMs after building into the build environment. * :code:`--rpmbuild=DIR` : Use RPM build directory root :code:`DIR` (default: :code:`~/rpmbuild`). For example, to build a version 0.9.7 RPM from the CentOS 7 image provided with the test suite, on any system, and leave the results in :code:`~/rpmbuild/RPMS` (note the test suite would also build the necessary image directory):: $ bin/ch-image build -t centos7 -f ./examples/Dockerfile.centos7 ./examples $ bin/ch-convert centos7 $CH_TEST_IMGDIR/centos7 $ packaging/fedora/build $CH_TEST_IMGDIR/centos7 0.9.7-1 To build a pre-release RPM of Git HEAD using the CentOS 7 image:: $ bin/ch-image build -t centos7 -f ./examples/Dockerfile.centos7 ./examples $ bin/ch-convert centos7 $CH_TEST_IMGDIR/centos7 $ packaging/fedora/build ${CH_TEST_IMGDIR}/centos7 HEAD Gotchas and quirks ------------------ RPM versions and releases ~~~~~~~~~~~~~~~~~~~~~~~~~ If :code:`VERSION` is :code:`HEAD`, then the RPM version will be the content of :code:`VERSION.full` for that commit, including Git gobbledygook, and the RPM release will be :code:`0`. Note that such RPMs cannot be reliably upgraded because their version numbers are unordered. Otherwise, :code:`VERSION` should be a released Charliecloud version followed by a hyphen and the desired RPM release, e.g. :code:`0.9.7-3`. Other values of :code:`VERSION` (e.g., a branch name) may work but are not supported. Packaged source code and RPM build config come from different commits ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The spec file, :code:`build` script, :code:`.rpmlintrc`, etc. come from the working directory, but the package source is from the specified commit. This is what enables us to make additional RPM releases for a given Charliecloud release (e.g. 0.9.7-2). Corollaries of this policy are that RPM build configuration can be any or no commit, and it's not possible to create an RPM of uncommitted source code. Changelog maintenance ~~~~~~~~~~~~~~~~~~~~~ The spec file contains a manually maintained changelog. Add a new entry for each new RPM release; do not include the Charliecloud release notes. For released versions, :code:`build` verifies that the most recent changelog entry matches the given :code:`VERSION` argument. The timestamp is not automatically verified. For other Charliecloud versions, :code:`build` adds a generic changelog entry with the appropriate version stating that it's a pre-release RPM. .. _build-ova: Style hints =========== We haven't written down a comprehensive style guide. Generally, follow the style of the surrounding code, think in rectangles rather than lines of code or text, and avoid CamelCase. Note that Reid is very picky about style, so don’t feel singled out if he complains (or even updates this section based on your patch!). He tries to be nice about it. Writing English --------------- * When describing what something does (e.g., your PR or a command), use the `imperative mood `_, i.e., write the orders you are giving rather than describe what the thing does. For example, do: | Inject files from the host into an image directory. | Add :code:`--join-pid` option to :code:`ch-run`. Do not (indicative mood): | Injects files from the host into an image directory. | Adds :code:`--join-pid` option to :code:`ch-run`. * Use sentence case for titles, not title case. * If it's not a sentence, start with a lower-case character. * Use spell check. Keep your personal dictionary updated so your editor is not filled with false positives. Documentation ------------- Heading underline characters: 1. Asterisk, :code:`*`, e.g. "5. Contributor's guide" 2. Equals, :code:`=`, e.g. "5.7 OCI technical notes" 3. Hyphen, :code:`-`, e.g. "5.7.1 Gotchas" 4. Tilde, :code:`~`, e.g. "5.7.1.1 Namespaces" (try to avoid) .. _dependency-policy: Dependency policy ----------------- Specific dependencies (prerequisites) are stated elsewhere in the documentation. This section describes our policy on which dependencies are acceptable. Generally ~~~~~~~~~ All dependencies must be stated and justified in the documentation. We want Charliecloud to run on as many systems as practical, so we work hard to keep dependencies minimal. However, because Charliecloud depends on new-ish kernel features, we do depend on standards of similar vintage. Core functionality should be available even on small systems with basic Linux distributions, so dependencies for run-time and build-time are only the bare essentials. Exceptions, to be used judiciously: * Features that add convenience rather than functionality may have additional dependencies that are reasonably expected on most systems where the convenience would be used. * Features that only work if some other software is present (example: the Docker wrapper scripts) can add dependencies of that other software. The test suite is tricky, because we need a test framework and to set up complex test fixtures. We have not yet figured out how to do this at reasonable expense with dependencies as tight as run- and build-time, so there are systems that do support Charliecloud but cannot run the test suite. Building the documentation needs Sphinx features that have not made their way into common distributions (i.e., RHEL), so we use recent versions of Sphinx and provide a source distribution with pre-built documentation. Building the RPMs should work on RPM-based distributions with a kernel new enough to support Charliecloud. You might need to install additional packages (but not from third-party repositories). :code:`curl` vs. :code:`wget` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For URL downloading in shell code, including Dockerfiles, use :code:`wget -nv`. Both work fine for our purposes, and we need to use one or the other consistently. According to Debian's popularity contest, 99.88% of reporting systems have :code:`wget` installed, vs. about 44% for :code:`curl`. On the other hand, :code:`curl` is in the minimal install of CentOS 7 while :code:`wget` is not. For now, Reid just picked :code:`wget` because he likes it better. Variable conventions in shell scripts and :code:`.bats` files ------------------------------------------------------------- * Separate words with underscores. * User-configured environment variables: all uppercase, :code:`CH_TEST_` prefix. Do not use in individual :code:`.bats` files; instead, provide an intermediate variable. * Variables local to a given file: lower case, no prefix. * Bats: set in :code:`common.bash` and then used in :code:`.bats` files: lower case, :code:`ch_` prefix. * Surround lower-case variables expanded in strings with curly braces, unless they're the only thing in the string. E.g.: .. code-block:: none "${foo}/bar" # yes "$foo" # yes "$foo/bar" # no "${foo}" # no * Quote the entire string instead of just the variable when practical: .. code-block:: none "${foo}/bar" # yes "${foo}"/bar # no "$foo"/bar # no * Don't quote variable assignments or other places where not needed (e.g., case statements). E.g.: .. code-block:: none foo=${bar}/baz # yes foo="${bar}/baz" # no C code ------ :code:`const` ~~~~~~~~~~~~~ The :code:`const` keyword is used to indicate that variables are read-only. It has a variety of uses; in Charliecloud, we use it for `function pointer arguments `_ to state whether or not the object pointed to will be altered by the function. For example: .. code-block:: c void foo(const char *in, char *out) is a function that will not alter the string pointed to by :code:`in` but may alter the string pointed to by :code:`out`. (Note that :code:`char const` is equivalent to :code:`const char`, but we use the latter order because that's what appears in GCC error messages.) We do not use :code:`const` on local variables or function arguments passed by value. One could do this to be more clear about what is and isn't mutable, but it adds quite a lot of noise to the source code, and in our evaluations didn't catch any bugs. We also do not use it on double pointers (e.g., :code:`char **out` used when a function allocates a string and sets the caller's pointer to point to it), because so far those are all out-arguments and C has `confusing rules `_ about double pointers and :code:`const`. Lists ~~~~~ The general convention is to use an array of elements terminated by an element containing all zeros (i.e., every byte is zero). While this precludes zero elements within the list, it makes it easy to iterate: .. code-block:: c struct foo { int a; float b; }; struct foo *bar = ...; for (int i = 0; bar[i].a != 0; i++) do_stuff(bar[i]); Note that the conditional checks that only one field of the struct (:code:`a`) is zero; this loop leverages knowledge of this specific data structure that checking only :code:`a` is sufficient. Lists can be set either as literals: .. code-block:: c struct foo bar[] = { {1, 2.0}, {3, 4.0}, {0, 0.0} }; or built up from scratch on the heap; the contents of this list are equivalent (note the C99 trick to avoid create a :code:`struct foo` variable): .. code-block:: c struct foo baz; struct foo *qux = list_new(sizeof(struct foo), 0); baz.a = 1; baz.b = 2.0; list_append((void **)&qux, &baz, sizeof(struct foo)); list_append((void **)&qux, &((struct foo){3, 4.0}), sizeof(struct foo)); This form of list should be used unless some API requires something else. .. warning:: Taking the address of an array in C yields the address of the first element, which is the same thing. For example, consider this list of strings, i.e. pointers to :code:`char`: .. code-block:: c char foo[] = "hello"; char **list = list_new(sizeof(char *), 0) list_append((void **)list, &foo, sizeof(char *)); // error! Because :code:`foo == &foo`, this will add to the list not a pointer to :code:`foo` but the *contents* of :code:`foo`, i.e. (on a machine with 64-bit pointers) :code:`'h'`, :code:`'e'`, :code:`'l'`, :code:`'l'`, :code:`'o'`, :code:`'\0'` followed by two bytes of whatever follows :code:`foo` in memory. This would work because :code:`bar != &bar`: .. code-block:: c char foo[] = "hello"; char bar = foo; char **list = list_new(sizeof(char *), 0) list_append((void **)list, &bar, sizeof(char *)); // OK OCI technical notes =================== This section describes our analysis of the Open Container Initiative (OCI) specification and implications for our implementation in :code:`ch-run-oci`. Anything relevant for users goes in that man page; here is for technical details. The main goals are to guide Charliecloud development and provide and opportunity for peer-review of our work. Currently, :code:`ch-run-oci` is only tested with Buildah. These notes describe what we are seeing from Buildah's runtime expectations. Gotchas ------- Namespaces ~~~~~~~~~~ Buildah sets up its own user and mount namespaces before invoking the runtime, though it does not change the root directory. We do not understand why. In particular, this means that you cannot see the container root filesystem it provides without joining those namespaces. To do so: #. Export :code:`CH_RUN_OCI_LOGFILE` with some logfile path. #. Export :code:`CH_RUN_OCI_DEBUG_HANG` with the step you want to examine (e.g., :code:`create`). #. Run :code:`ch-build -b buildah`. #. Make note of the PID in the logfile. #. :code:`$ nsenter -U -m -t $PID bash` Supervisor process and maintaining state ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ OCI (and thus Buildah) expects a process that exists throughout the life of the container. This conflicts with Charliecloud's lack of a supervisor process. **FIXME** Bundle directory ---------------- * OCI documentation (very incomplete): https://github.com/opencontainers/runtime-spec/blob/master/bundle.md The bundle directory defines the container and is used to communicate between Buildah and the runtime. The root filesystem (:code:`mnt/rootfs`) is mounted within Buildah's namespaces, so you'll want to join them before examination. :code:`ch-run-oci` has restrictions on bundle directory path so it can be inferred from the container ID (see the man page). This lets us store state in the bundle directory instead of maintaining a second location for container state. Example:: # cd /tmp/buildah265508516 # ls -lR . | head -40 .: total 12 -rw------- 1 root root 3138 Apr 25 16:39 config.json d--------- 2 root root 40 Apr 25 16:39 empty -rw-r--r-- 1 root root 200 Mar 9 2015 hosts d--x------ 3 root root 60 Apr 25 16:39 mnt -rw-r--r-- 1 root root 79 Apr 19 20:23 resolv.conf ./empty: total 0 ./mnt: total 0 drwxr-x--- 19 root root 380 Apr 25 16:39 rootfs ./mnt/rootfs: total 0 drwxr-xr-x 2 root root 1680 Apr 8 14:30 bin drwxr-xr-x 2 root root 40 Apr 8 14:30 dev drwxr-xr-x 15 root root 720 Apr 8 14:30 etc drwxr-xr-x 2 root root 40 Apr 8 14:30 home [...] Observations: #. The weird permissions on :code:`empty` (000) and :code:`mnt` (100) persist within the namespaces, so you'll want to be namespace root to look around. #. :code:`hosts` and :code:`resolv.conf` are identical to the host's. #. :code:`empty` is still an empty directory with in the namespaces. What is this for? #. :code:`mnt/rootfs` contains the container root filesystem. It is a tmpfs. No other new filesystems are mounted within the namespaces. :code:`config.json` ------------------- * OCI documentation: * https://github.com/opencontainers/runtime-spec/blob/master/config.md * https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md This is the meat of the container configuration. Below is an example :code:`config.json` along with commentary and how it maps to :code:`ch-run` arguments. This was pretty-printed with :code:`jq . config.json`, and we re-ordered the keys to match the documentation. There are a number of additional keys that appear in the documentation but not in this example. These are all unsupported, either by ignoring them or throwing an error. The :code:`ch-run-oci` man page documents comprehensively what OCI features are and are not supported. .. code-block:: javascript { "ociVersion": "1.0.0", We validate that this is "1.0.0". .. code-block:: javascript "root": { "path": "/tmp/buildah115496812/mnt/rootfs" }, Path to root filesystem; maps to :code:`NEWROOT`. If key :code:`readonly` is :code:`false` or absent, add :code:`--write`. .. code-block:: javascript "mounts": [ { "destination": "/dev", "type": "tmpfs", "source": "/dev", "options": [ "private", "strictatime", "noexec", "nosuid", "mode=755", "size=65536k" ] }, { "destination": "/dev/mqueue", "type": "mqueue", "source": "mqueue", "options": [ "private", "nodev", "noexec", "nosuid" ] }, { "destination": "/dev/pts", "type": "devpts", "source": "pts", "options": [ "private", "noexec", "nosuid", "newinstance", "ptmxmode=0666", "mode=0620" ] }, { "destination": "/dev/shm", "type": "tmpfs", "source": "shm", "options": [ "private", "nodev", "noexec", "nosuid", "mode=1777", "size=65536k" ] }, { "destination": "/proc", "type": "proc", "source": "/proc", "options": [ "private", "nodev", "noexec", "nosuid" ] }, { "destination": "/sys", "type": "bind", "source": "/sys", "options": [ "rbind", "private", "nodev", "noexec", "nosuid", "ro" ] }, { "destination": "/etc/hosts", "type": "bind", "source": "/tmp/buildah115496812/hosts", "options": [ "rbind" ] }, { "destination": "/etc/resolv.conf", "type": "bind", "source": "/tmp/buildah115496812/resolv.conf", "options": [ "rbind" ] } ], This says what filesystems to mount in the container. It is a mix; it has tmpfses, bind-mounts of both files and directories, and other non-device-backed filesystems. The docs suggest a lot of flexibility, including stuff that won't work in an unprivileged user namespace (e.g., filesystems backed by a block device). The things that matter seem to be the same as Charliecloud defaults. Therefore, for now we just ignore mounts. We do add :code:`--no-home` in OCI mode. .. code-block:: javascript "process": { "terminal": true, This says that Buildah wants a pseudoterminal allocated. Charliecloud does not currently support that, so we error in this case. However, Buildah can be persuaded to set this :code:`false` if you redirect its standard input from :code:`/dev/null`, which is the current workaround. Things work fine. .. code-block:: javascript "cwd": "/", Maps to :code:`--cd`. .. code-block:: javascript "args": [ "/bin/sh", "-c", "apk add --no-cache bc" ], Maps to :code:`CMD [ARG ...]`. Note that we do not run :code:`ch-run` via the shell, so there aren't worries about shell parsing. .. code-block:: javascript "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "https_proxy=http://proxyout.lanl.gov:8080", "no_proxy=localhost,127.0.0.1,.lanl.gov", "HTTP_PROXY=http://proxyout.lanl.gov:8080", "HTTPS_PROXY=http://proxyout.lanl.gov:8080", "NO_PROXY=localhost,127.0.0.1,.lanl.gov", "http_proxy=http://proxyout.lanl.gov:8080" ], Environment for the container. The spec does not say whether this is the complete environment or whether it should be added to some default environment. We treat it as a complete environment, i.e., place the variables in a file and then :code:`--unset-env='*' --set-env=FILE`. .. code-block:: javascript "rlimits": [ { "type": "RLIMIT_NOFILE", "hard": 1048576, "soft": 1048576 } ] Process limits Buildah wants us to set with :code:`setrlimit(2)`. Ignored. .. code-block:: javascript "capabilities": { ... }, Long list of capabilities that Buildah wants. Ignored. (Charliecloud provides security by remaining an unprivileged process.) .. code-block:: javascript "user": { "uid": 0, "gid": 0 }, }, Maps to :code:`--uid=0 --gid=0`. .. code-block:: javascript "linux": { "namespaces": [ { "type": "pid" }, { "type": "ipc" }, { "type": "mount" }, { "type": "user" } ], Namespaces that Buildah wants. Ignored; Charliecloud just does user and mount. .. code-block:: javascript "uidMappings": [ { "hostID": 0, "containerID": 0, "size": 1 }, { "hostID": 1, "containerID": 1, "size": 65536 } ], "gidMappings": [ { "hostID": 0, "containerID": 0, "size": 1 }, { "hostID": 1, "containerID": 1, "size": 65536 } ], Describes the identity map between the namespace and host. Buildah wants it much larger than Charliecloud's single entry and asks for container root to be host root, which we can't do. Ignored. .. code-block:: javascript "maskedPaths": [ "/proc/acpi", "/proc/kcore", ... ], "readonlyPaths": [ "/proc/asound", "/proc/bus", ... ] Spec says to "mask over the provided paths ... so they cannot be read" and "sed the provided paths as readonly". Ignored. (Unprivileged user namespace protects us.) .. code-block:: javascript } } End of example. State ----- The OCI spec does not say how the JSON document describing state should be given to the caller. Buildah is happy to get it on the runtime's standard output. :code:`ch-run-oci` provides an OCI compliant state document. Status :code:`creating` will never be returned, because the create operation is essentially a no-op, and annotations are not supported, so the :code:`annotations` key will never be given. Additional sources ------------------ * :code:`buildah` man page: https://github.com/containers/buildah/blob/master/docs/buildah.md * :code:`buildah bud` man page: https://github.com/containers/buildah/blob/master/docs/buildah-bud.md * :code:`runc create` man page: https://raw.githubusercontent.com/opencontainers/runc/master/man/runc-create.8.md * https://github.com/opencontainers/runtime-spec/blob/master/runtime.md Miscellaneous notes =================== Updating bundled Lark parser ---------------------------- In order to change the version of the bundled lark parser you must modify multiple files. To find them, e.g. for version 0.11.3 (the regex is hairy to catch both dot notation and tuples, but not the list of filenames in :code:`lib/Makefile.am`):: $ misc/grep -E '0(\.|, )11(\.|, )3($|\s|\))' What to do in each location should either be obvious or commented. .. LocalWords: milestoned gh nv cht Chacon's scottchacon mis cantfix tmpimg charliecloud-0.26/doc/faq.rst000066400000000000000000001136571417231051300161570ustar00rootroot00000000000000Frequently asked questions (FAQ) ******************************** .. contents:: :depth: 3 :local: About the project ================= Where did the name Charliecloud come from? ------------------------------------------ *Charlie* — Charles F. McMillan was director of Los Alamos National Laboratory from June 2011 until December 2017, i.e., at the time Charliecloud was started in early 2014. He is universally referred to as “Charlie” here. *cloud* — Charliecloud provides cloud-like flexibility for HPC systems. How do you spell Charliecloud? ------------------------------ We try to be consistent with *Charliecloud* — one word, no camel case. That is, *Charlie Cloud* and *CharlieCloud* are both incorrect. How large is Charliecloud? -------------------------- .. include:: loc.rst Errors ====== How do I read the :code:`ch-run` error messages? ------------------------------------------------ :code:`ch-run` error messages look like this:: $ ch-run foo -- echo hello ch-run[25750]: can't find image: foo: No such file or directory (ch-run.c:107 2) There is a lot of information here, and it comes in this order: 1. Name of the executable; always :code:`ch-run`. 2. Process ID in square brackets; here :code:`25750`. This is useful when debugging parallel :code:`ch-run` invocations. 3. Colon. 4. Main error message; here :code:`can't find image: foo`. This should be informative as to what went wrong, and if it’s not, please file an issue, because you may have found a usability bug. Note that in some cases you may encounter the default message :code:`error`; if this happens and you’re not doing something very strange, that’s also a usability bug. 5. Colon (but note that the main error itself can contain colons too), if and only if the next item is present. 6. Operating system’s description of the the value of :code:`errno`; here :code:`No such file or directory`. Omitted if not applicable. 7. Open parenthesis. 8. Name of the source file where the error occurred; here :code:`ch-run.c`. This and the following item tell developers exactly where :code:`ch-run` became confused, which greatly improves our ability to provide help and/or debug. 9. Source line where the error occurred. 10. Value of :code:`errno` (see `C error codes in Linux `_ for the full list of possibilities). 11. Close parenthesis. *Note:* Despite the structured format, the error messages are not guaranteed to be machine-readable. Tarball build fails with “No command specified” ----------------------------------------------- The full error from :code:`ch-builder2tar` or :code:`ch-build2dir` is:: docker: Error response from daemon: No command specified. You will also see it with various plain Docker commands. This happens when there is no default command specified in the Dockerfile or any of its ancestors. Some base images specify one (e.g., Debian) and others don’t (e.g., Alpine). Docker requires this even for commands that don’t seem like they should need it, such as :code:`docker create` (which is what trips up Charliecloud). The solution is to add a default command to your Dockerfile, such as :code:`CMD ["true"]`. :code:`ch-run` fails with “can't re-mount image read-only” ---------------------------------------------------------- Normally, :code:`ch-run` re-mounts the image directory read-only within the container. This fails if the image resides on certain filesystems, such as NFS (see `issue #9 `_). There are two solutions: 1. Unpack the image into a different filesystem, such as :code:`tmpfs` or local disk. Consult your local admins for a recommendation. Note that Lustre is probably not a good idea because it can give poor performance for you and also everyone else on the system. 2. Use the :code:`-w` switch to leave the image mounted read-write. This may have an impact on reproducibility (because the application can change the image between runs) and/or stability (if there are multiple application processes and one writes a file in the image that another is reading or writing). :code:`ch-image` fails with "certificate verify failed" ------------------------------------------------------- When :code:`ch-image` interacts with a remote registry (e.g., via :code:`push` or :code:`pull` subcommands), it will verify the registry's HTTPS certificate. If this fails, :code:`ch-image` will exit with the error "certificate verify failed". This situation tends to arise with self-signed or institutionally-signed certificates, even if the OS is configured to trust them. We use the Python HTTP library Requests, which on many platforms `includes its own CA certificates bundle `_, ignoring the bundle installed by the OS. Requests can be directed to use an alternate bundle of trusted CAs by setting environment variable :code:`REQUESTS_CA_BUNDLE` to the bundle path. (See `the Requests documentation `_ for details.) For example:: $ export REQUESTS_CA_BUNDLE=/usr/local/share/ca-certificates/registry.crt $ ch-image pull registry.example.com/image:tag Alternatively, certificate verification can be disabled entirely with the :code:`--tls-no-verify` flag. However, users should enable this option only if they have other means to be confident in the registry's identity. Unexpected behavior =================== What do the version numbers mean? --------------------------------- Released versions of Charliecloud have a pretty standard version number, e.g. 0.9.7. Work leading up to a released version also has version numbers, to satisfy tools that require them and to give the executables something useful to report on :code:`--version`, but these can be quite messy. We refer to such versions informally as *pre-releases*, but Charliecloud does not have formal pre-releases such as alpha, beta, or release candidate. *Pre-release version numbers are not in order*, because this work is in a DAG rather than linear, except they precede the version we are working towards. If you're dealing with these versions, use Git. Pre-release version numbers are the version we are working towards, followed by: :code:`~pre`, the branch name if not :code:`master` with non-alphanumerics removed, the commit hash, and finally :code:`dirty` if the working directory had uncommitted changes. Examples: * :code:`0.2.0` : Version 0.2.0. Released versions don't include Git information, even if built in a Git working directory. * :code:`0.2.1~pre` : Some snapshot of work leading up to 0.2.1, built from source code where the Git information has been lost, e.g. the tarballs Github provides. This should make you wary because you don't have any provenance. It might even be uncommitted work or an abandoned branch. * :code:`0.2.1~pre+1a99f42` : Master branch commit 1a99f42, built from a clean working directory (i.e., no changes since that commit). * :code:`0.2.1~pre+foo1.0729a78` : Commit 0729a78 on branch :code:`foo-1`, :code:`foo_1`, etc. built from clean working directory. * :code:`0.2.1~pre+foo1.0729a78.dirty` : Commit 0729a78 on one of those branches, plus un-committed changes. :code:`--uid 0` lets me read files I can’t otherwise! ----------------------------------------------------- Some permission bits can give a surprising result with a container UID of 0. For example:: $ whoami reidpr $ echo surprise > ~/cantreadme $ chmod 000 ~/cantreadme $ ls -l ~/cantreadme ---------- 1 reidpr reidpr 9 Oct 3 15:03 /home/reidpr/cantreadme $ cat ~/cantreadme cat: /home/reidpr/cantreadme: Permission denied $ ch-run /var/tmp/hello cat ~/cantreadme cat: /home/reidpr/cantreadme: Permission denied $ ch-run --uid 0 /var/tmp/hello cat ~/cantreadme surprise At first glance, it seems that we’ve found an escalation -- we were able to read a file inside a container that we could not read on the host! That seems bad. However, what is really going on here is more prosaic but complicated: 1. After :code:`unshare(CLONE_NEWUSER)`, :code:`ch-run` gains all capabilities inside the namespace. (Outside, capabilities are unchanged.) 2. This include :code:`CAP_DAC_OVERRIDE`, which enables a process to read/write/execute a file or directory mostly regardless of its permission bits. (This is why root isn’t limited by permissions.) 3. Within the container, :code:`exec(2)` capability rules are followed. Normally, this basically means that all capabilities are dropped when :code:`ch-run` replaces itself with the user command. However, if EUID is 0, which it is inside the namespace given :code:`--uid 0`, then the subprocess keeps all its capabilities. (This makes sense: if root creates a new process, it stays root.) 4. :code:`CAP_DAC_OVERRIDE` within a user namespace is honored for a file or directory only if its UID and GID are both mapped. In this case, :code:`ch-run` maps :code:`reidpr` to container :code:`root` and group :code:`reidpr` to itself. 5. Thus, files and directories owned by the host EUID and EGID (here :code:`reidpr:reidpr`) are available for all access with :code:`ch-run --uid 0`. This is not an escalation. The quirk applies only to files owned by the invoking user, because :code:`ch-run` is unprivileged outside the namespace, and thus he or she could simply :code:`chmod` the file to read it. Access inside and outside the container remains equivalent. References: * http://man7.org/linux/man-pages/man7/capabilities.7.html * http://lxr.free-electrons.com/source/kernel/capability.c?v=4.2#L442 * http://lxr.free-electrons.com/source/fs/namei.c?v=4.2#L328 Why does :code:`ping` not work? ------------------------------- :code:`ping` fails with “permission denied” or similar under Charliecloud, even if you’re UID 0 inside the container:: $ ch-run $IMG -- ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8): 56 data bytes ping: permission denied (are you root?) $ ch-run --uid=0 $IMG -- ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8): 56 data bytes ping: permission denied (are you root?) This is because :code:`ping` needs a raw socket to construct the needed :code:`ICMP ECHO` packets, which requires capability :code:`CAP_NET_RAW` or root. Unprivileged users can normally use :code:`ping` because it’s a setuid or setcap binary: it raises privilege using the filesystem bits on the executable to obtain a raw socket. Under Charliecloud, there are multiple reasons :code:`ping` can’t get a raw socket. First, images are unpacked without privilege, meaning that setuid and setcap bits are lost. But even if you do get privilege in the container (e.g., with :code:`--uid=0`), this only applies in the container. Charliecloud uses the host’s network namespace, where your unprivileged host identity applies and :code:`ping` still can’t get a raw socket. The recommended alternative is to simply try the thing you want to do, without testing connectivity using :code:`ping` first. Why is MATLAB trying and failing to change the group of :code:`/dev/pts/0`? --------------------------------------------------------------------------- MATLAB and some other programs want pseudo-TTY (PTY) files to be group-owned by :code:`tty`. If it’s not, Matlab will attempt to :code:`chown(2)` the file, which fails inside a container. The scenario in more detail is this. Assume you’re user :code:`charlie` (UID=1000), your primary group is :code:`nerds` (GID=1001), :code:`/dev/pts/0` is the PTY file in question, and its ownership is :code:`charlie:tty` (:code:`1000:5`), as it should be. What happens in the container by default is: 1. MATLAB :code:`stat(2)`\ s :code:`/dev/pts/0` and checks the GID. 2. This GID is :code:`nogroup` (65534) because :code:`tty` (5) is not mapped on the host side (and cannot be, because only one’s EGID can be mapped in an unprivileged user namespace). 3. MATLAB concludes this is bad. 4. MATLAB executes :code:`chown("/dev/pts/0", 1000, 5)`. 5. This fails because GID 5 is not mapped on the guest side. 6. MATLAB pukes. The workaround is to map your EGID of 1001 to 5 inside the container (instead of the default 1001:1001), i.e. :code:`--gid=5`. Then, step 4 succeeds because the call is mapped to :code:`chown("/dev/pts/0", 1000, 1001)` and MATLAB is happy. .. _faq_docker2tar-size: :code:`ch-convert` from Docker incorrect image sizes ---------------------------------------------------- When converting from Docker, :code:`ch-convert` often finishes before the progress bar is complete. For example:: $ ch-convert -i docker foo /var/tmp/foo.tar.gz input: docker foo output: tar /var/tmp/foo.tar.gz exporting ... 373MiB 0:00:21 [============================> ] 65% [...] In this case, the :code:`.tar.gz` contains 392 MB uncompressed:: $ zcat /var/tmp/foo.tar.gz | wc 2740966 14631550 392145408 But Docker thinks the image is 597 MB:: $ sudo docker image inspect foo | fgrep -i size "Size": 596952928, "VirtualSize": 596952928, We've also seen cases where the Docker-reported size is an *under*\ estimate:: $ ch-convert -i docker bar /var/tmp/bar.tar.gz input: docker bar output: tar /var/tmp/bar.tar.gz exporting ... 423MiB 0:00:22 [============================================>] 102% [...] $ zcat /var/tmp/bar.tar.gz | wc 4181186 20317858 444212736 $ sudo docker image inspect bar | fgrep -i size "Size": 433812403, "VirtualSize": 433812403, We think that this is because Docker is computing size based on the size of the layers rather than the unpacked image. We do not currently have a fix; see `issue #165 `_. My second-level directory :code:`dev` is empty ---------------------------------------------- Some image tarballs, such as official Ubuntu Docker images, put device files in :code:`/dev`. These files prevent unpacking the tarball, because unprivileged users cannot create device files. Further, these files are not needed because :code:`ch-run` overmounts :code:`/dev` anyway. We cannot reliably prevent device files from being included in the tar, because often that is outside our control, e.g. :code:`docker export` produces a tarball. Thus, we must exclude them at unpacking time. An additional complication is that :code:`ch-convert` can read tarballs both with a single top-level directory and without, i.e. “tarbombs”. For example, best practice use of :code:`tar` on the command line produces the former, while :code:`docker export` (invoked by :code:`ch-convert` when converting from Docker) produces a tarbomb. Thus, :code:`ch-convert` uses :code:`tar --exclude` to exclude from unpacking everything under :code:`./dev` and :code:`*/dev`, i.e., directory :code:`dev` appearing at either the first or second level are forced to be empty. This yields false positives if you have a tarbomb image with a directory :code:`dev` at the second level containing stuff you care about. Hopefully this is rare, but please let us know if it is your use case. My password that contains digits doesn't work in VirtualBox console ------------------------------------------------------------------- VirtualBox has confusing Num Lock behavior. Thus, you may be typing arrows, page up/down, etc. instead of digits, without noticing because console password fields give no feedback, not even whether a character has been typed. Try using the number row instead, toggling Num Lock key, or SSHing into the virtual machine. How do I ... ============ My app needs to write to :code:`/var/log`, :code:`/run`, etc. ------------------------------------------------------------- Because the image is mounted read-only by default, log files, caches, and other stuff cannot be written anywhere in the image. You have three options: 1. Configure the application to use a different directory. :code:`/tmp` is often a good choice, because it’s shared with the host and fast. 2. Use :code:`RUN` commands in your Dockerfile to create symlinks that point somewhere writeable, e.g. :code:`/tmp`, or :code:`/mnt/0` with :code:`ch-run --bind`. 3. Run the image read-write with :code:`ch-run -w`. Be careful that multiple containers do not try to write to the same files. Which specific :code:`sudo` commands are needed? ------------------------------------------------ For running images, :code:`sudo` is not needed at all. For building images, it depends on what you would like to support. For example, do you want to let users build images with Docker? Do you want to let them run the build tests? We do not maintain specific lists, but you can search the source code and documentation for uses of :code:`sudo` and :code:`$DOCKER` and evaluate them on a case-by-case basis. (The latter includes :code:`sudo` if needed to invoke :code:`docker` in your environment.) For example:: $ find . \( -type f -executable \ -o -name Makefile \ -o -name '*.bats' \ -o -name '*.rst' \ -o -name '*.sh' \) \ -exec egrep -H '(sudo|\$DOCKER)' {} \; OpenMPI Charliecloud jobs don’t work ------------------------------------ MPI can be finicky. This section documents some of the problems we’ve seen. :code:`mpirun` can’t launch jobs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For example, you might see:: $ mpirun -np 1 ch-run /var/tmp/mpihello-openmpi -- /hello/hello App launch reported: 2 (out of 2) daemons - 0 (out of 1) procs [cn001:27101] PMIX ERROR: BAD-PARAM in file src/dstore/pmix_esh.c at line 996 We’re not yet sure why this happens — it may be a mismatch between the OpenMPI builds inside and outside the container — but in our experience launching with :code:`srun` often works when :code:`mpirun` doesn’t, so try that. .. _faq_join: Communication between ranks on the same node fails ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ OpenMPI has many ways to transfer messages between ranks. If the ranks are on the same node, it is faster to do these transfers using shared memory rather than involving the network stack. There are two ways to use shared memory. The first and older method is to use POSIX or SysV shared memory segments. This approach uses two copies: one from Rank A to shared memory, and a second from shared memory to Rank B. For example, the :code:`sm` *byte transport layer* (BTL) does this. The second and newer method is to use the :code:`process_vm_readv(2)` and/or :code:`process_vm_writev(2)`) system calls to transfer messages directly from Rank A’s virtual memory to Rank B’s. This approach is known as *cross-memory attach* (CMA). It gives significant performance improvements in `benchmarks `_, though of course the real-world impact depends on the application. For example, the :code:`vader` BTL (enabled by default in OpenMPI 2.0) and :code:`psm2` *matching transport layer* (MTL) do this. The problem in Charliecloud is that the second approach does not work by default. We can demonstrate the problem with LAMMPS molecular dynamics application:: $ srun --cpus-per-task 1 ch-run /var/tmp/lammps_mpi -- \ lmp_mpi -log none -in /lammps/examples/melt/in.melt [cn002:21512] Read -1, expected 6144, errno = 1 [cn001:23947] Read -1, expected 6144, errno = 1 [cn002:21517] Read -1, expected 9792, errno = 1 [... repeat thousands of times ...] With :code:`strace(1)`, one can isolate the problem to the system call noted above:: process_vm_readv(...) = -1 EPERM (Operation not permitted) write(33, "[cn001:27673] Read -1, expected 6"..., 48) = 48 The `man page `_ reveals that these system calls require that the process have permission to :code:`ptrace(2)` one another, but sibling user namespaces `do not `_. (You *can* :code:`ptrace(2)` into a child namespace, which is why :code:`gdb` doesn’t require anything special in Charliecloud.) This problem is not specific to containers; for example, many settings of kernels with `YAMA `_ enabled will similarly disallow this access. So what can you do? There are a few options: * We recommend simply using the :code:`--join` family of arguments to :code:`ch-run`. This puts a group of :code:`ch-run` peers in the same namespaces; then, the system calls work. See the :ref:`man_ch-run` man page for details. * You can also sometimes turn off single-copy. For example, for :code:`vader`, set the MCA variable :code:`btl_vader_single_copy_mechanism` to :code:`none`, e.g. with an environment variable:: $ export OMPI_MCA_btl_vader_single_copy_mechanism=none :code:`psm2` does not let you turn off CMA, but it does fall back to two-copy if CMA doesn’t work. However, this fallback crashed when we tried it. * The kernel module `XPMEM `_ enables a different single-copy approach. We have not yet tried this, and the module needs to be evaluated for user namespace safety, but it’s quite a bit faster than CMA on benchmarks. .. Images by URL only works in Sphinx 1.6+. Debian Stretch has 1.4.9, so remove it for now. .. image:: https://media.giphy.com/media/1mNBTj3g4jRCg/giphy.gif :alt: Darth Vader bowling a strike with the help of the Force :align: center I get a bunch of independent rank-0 processes when launching with :code:`srun` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For example, you might be seeing this:: $ srun ch-run /var/tmp/mpihello-openmpi -- /hello/hello 0: init ok cn036.localdomain, 1 ranks, userns 4026554634 0: send/receive ok 0: finalize ok 0: init ok cn035.localdomain, 1 ranks, userns 4026554634 0: send/receive ok 0: finalize ok We were expecting a two-rank MPI job, but instead we got two independent one-rank jobs that did not coordinate. MPI ranks start as normal, independent processes that must find one another somehow in order to sync up and begin the coupled parallel program; this happens in :code:`MPI_Init()`. There are lots of ways to do this coordination. Because we are launching with the host's Slurm, we need it to provide something for the containerized processes for such coordination. OpenMPI must be compiled to use what that Slurm has to offer, and Slurm must be told to offer it. What works for us is a something called "PMI2". You can see if your Slurm supports it with:: $ srun --mpi=list srun: MPI types are... srun: mpi/pmi2 srun: mpi/openmpi srun: mpi/mpich1_shmem srun: mpi/mpich1_p4 srun: mpi/lam srun: mpi/none srun: mpi/mvapich srun: mpi/mpichmx srun: mpi/mpichgm If :code:`pmi2` is not in the list, you must ask your admins to enable Slurm's PMI2 support. If it is in the list, but you're seeing this problem, that means it is not the default, and you need to tell Slurm you want it. Try:: $ export SLURM_MPI_TYPE=pmi2 $ srun ch-run /var/tmp/mpihello-openmpi -- /hello/hello 0: init ok wc035.localdomain, 2 ranks, userns 4026554634 1: init ok wc036.localdomain, 2 ranks, userns 4026554634 0: send/receive ok 0: finalize ok How do I run X11 apps? ---------------------- X11 applications should “just work”. For example, try this Dockerfile: .. code-block:: docker FROM debian:stretch RUN apt-get update \ && apt-get install -y xterm Build it and unpack it to :code:`/var/tmp`. Then:: $ ch-run /scratch/ch/xterm -- xterm should pop an xterm. If your X11 application doesn’t work, please file an issue so we can figure out why. How do I specify an image reference? ------------------------------------ You must specify an image for many use cases, including :code:`FROM` instructions, the source of an image pull (e.g. :code:`ch-image pull` or :code:`docker pull`), the destination of an image push, and adding image tags. Charliecloud calls this an *image reference*, but there appears to be no established name for this concept. The syntax of an image reference is not well documented. This FAQ represents our understanding, which is cobbled together from the `Dockerfile reference `_, the :code:`docker tag` `documentation `_, and various forum posts. It is not a precise match for how Docker implements it, but it should be close enough. We'll start with two complete examples with all the bells and whistles: 1. :code:`example.com:8080/foo/bar/hello-world:version1.0` 2. :code:`example.com:8080/foo/bar/hello-world@sha256:f6c68e2ad82a` These references parse into the following components, in this order: 1. A `valid hostname `_; we assume this matches the regular expression :code:`[A-Za-z0-9.-]+`, which is very approximate. Optional; here :code:`example.com`. 2. A colon followed by a decimal port number. If hostname is given, optional; otherwise disallowed; here :code:`8080`. 3. If hostname given, a slash. 4. A path, with one or more components separated by slash. Components match the regex :code:`[a-z0-9_.-]+`. Optional; here :code:`foo/bar`. Pedantic details: * Under the hood, the default path is :code:`library`, but this is generally not exposed to users. * Three or more underscores in a row is disallowed by Docker, but we don't check this. 5. If path given, a slash. 6. The image name (tag), which matches :code:`[a-z0-9_.-]+`. Required; here :code:`hello-world`. 7. Zero or one of: * A tag matching the regular expression :code:`[A-Za-z0-9_.-]+` and preceded by a colon. Here :code:`version1.0` (example 1). * A hexadecimal hash preceded by the string :code:`@sha256:`. Here :code:`f6c68e2ad82a` (example 2). * Note: Digest algorithms other than SHA-256 are in principle allowed, but we have not yet seen any. Detail-oriented readers may have noticed the following gotchas: * A hostname without port number is ambiguous with the leading component of a path. For example, in the reference :code:`foo/bar/baz`, it is ambiguous whether :code:`foo` is a hostname or the first (and only) component of the path :code:`foo/bar`. The `resolution rule `_ is: if the ambiguous substring contains a dot, assume it's a hostname; otherwise, assume it's a path component. * The only character than cannot go in a POSIX filename is slash. Thus, Charliecloud uses image references in filenames, replacing slash with percent (:code:`%`). Because this character cannot appear in image references, the transformation is reversible. An alternate approach would be to replicate the reference path in the filesystem, i.e., path components in the reference would correspond directly to a filesystem path. This would yield a clearer filesystem structure. However, we elected not to do it because it complicates the code to save and clean up image reference-related data, and it does not address a few related questions, e.g. should the host and port also be a directory level. Usually, most of the components are omitted. For example, you'll more commonly see image references like: * :code:`debian`, which refers to the tag :code:`latest` of image :code:`debian` from Docker Hub. * :code:`debian:stretch`, which is the same except for tag :code:`stretch`. * :code:`fedora/httpd`, which is tag :code:`latest` of :code:`fedora/httpd` from Docker Hub. See :code:`charliecloud.py` for a specific grammar that implements this. Can I build or pull images using a tool Charliecloud doesn't know about? ------------------------------------------------------------------------ Yes. Charliecloud deals in well-known UNIX formats like directories, tarballs, and SquashFS images. So, once you get your image into some format Charliecloud likes, you can enter the workflow. For example, `skopeo `_ is a tool to pull images to OCI format, and `umoci `_ can flatten an OCI image to a directory. Thus, you can use the following commands to run an Alpine 3.9 image pulled from Docker hub:: $ skopeo copy docker://alpine:3.9 oci:/tmp/oci:img [...] $ ls /tmp/oci blobs index.json oci-layout $ umoci unpack --rootless --image /tmp/oci:img /tmp/alpine:3.9 [...] $ ls /tmp/alpine:3.9 config.json rootfs sha256_2ca27acab3a0f4057852d9a8b775791ad8ff62fbedfc99529754549d33965941.mtree umoci.json $ ls /tmp/alpine:3.9/rootfs bin etc lib mnt proc run srv tmp var dev home media opt root sbin sys usr $ ch-run /tmp/alpine:3.9/rootfs -- cat /etc/alpine-release 3.9.5 How do I authenticate with SSH during :code:`ch-image` build? ------------------------------------------------------------- The simplest approach is to run the `SSH agent `_ on the host. :code:`ch-image` then leverages this with two steps: 1. pass environment variable :code:`SSH_AUTH_SOCK` into the build, with no need to put :code:`ARG` in the Dockerfile or specify :code:`--build-arg` on the command line; and 2. bind-mount host :code:`/tmp` to guest :code:`/tmp`, which is where the SSH agent's listening socket usually resides. Thus, SSH within the container will use this existing SSH agent on the host to authenticate without further intervention. For example, after making :code:`ssh-agent` available on the host, which is OS and site-specific:: $ echo $SSH_AUTH_SOCK /tmp/ssh-rHsFFqwwqh/agent.49041 $ ssh-add Enter passphrase for /home/charlie/.ssh/id_rsa: Identity added: /home/charlie/.ssh/id_rsa (/home/charlie/.ssh/id_rsa) $ ssh-add -l 4096 SHA256:aN4n2JeMah2ekwhyHnb0Ug9bYMASmY+5uGg6MrieaQ /home/charlie/.ssh/id_rsa (RSA) $ cat ./Dockerfile FROM alpine:latest RUN apk add openssh RUN echo $SSH_AUTH_SOCK RUN ssh git@github.com $ ch-image build -t foo -f ./Dockerfile . [...] 3 RUN ['/bin/sh', '-c', 'echo $SSH_AUTH_SOCK'] /tmp/ssh-rHsFFqwwqh/agent.49041 4 RUN ['/bin/sh', '-c', 'ssh git@github.com'] [...] Hi charlie! You've successfully authenticated, but GitHub does not provide shell access. Note this example is rather contrived — bare SSH sessions in a Dockerfile rarely make sense. In practice, SSH is used as a transport to fetch something, e.g. with :code:`scp(1)` or :code:`git(1)`. See the next entry for a more realistic example. SSH stops :code:`ch-image` build with interactive queries --------------------------------------------------------- This often occurs during an SSH-based Git clone. For example: .. code-block:: docker FROM alpine:latest RUN apk add git openssh RUN git clone git@github.com:hpc/charliecloud.git .. code-block:: console $ ch-image build -t foo -f ./Dockerfile . [...] 3 RUN ['/bin/sh', '-c', 'git clone git@github.com:hpc/charliecloud.git'] Cloning into 'charliecloud'... The authenticity of host 'github.com (140.82.113.3)' can't be established. RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8. Are you sure you want to continue connecting (yes/no/[fingerprint])? At this point, the build stops while SSH waits for input. This happens even if you have :code:`github.com` in your :code:`~/.ssh/known_hosts`. This file is not available to the build because :code:`ch-image` runs :code:`ch-run` with :code:`--no-home`, so :code:`RUN` instructions can't see anything in your home directory. Solutions include: 1. Change to anonymous HTTPS clone, if available. Most public repositories will support this. For example: .. code-block:: docker FROM alpine:latest RUN apk add git RUN git clone https://github.com/hpc/charliecloud.git 2. Approve the connection interactively by typing :code:`yes`. Note this will record details of the connection within the image, including IP address and the fingerprint. The build also remains interactive. 3. Edit the image's system `SSH config `_ to turn off host key checking. Note this can be rather hairy, because the SSH config language is quite flexible and the first instance of a directive is the one used. However, often the changes can be simply appended: .. code-block:: docker FROM alpine:latest RUN apk add git openssh RUN printf 'StrictHostKeyChecking=no\nUserKnownHostsFile=/dev/null\n' \ >> /etc/ssh/ssh_config RUN git clone git@github.com:hpc/charliecloud.git Check your institutional policy on whether this is permissible, though it's worth noting that users `almost never `_ verify the host fingerprints anyway. This will not record details of the connection in the image. 4. Turn off host key checking on the SSH command line. (See caveats in the previous item.) The wrapping tool should provide a way to configure this command line. For example, for Git: .. code-block:: docker FROM alpine:latest RUN apk add git openssh ARG GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" RUN git clone git@github.com:hpc/charliecloud.git 5. Add the remote host to the system known hosts file, e.g.: .. code-block:: docker FROM alpine:latest RUN apk add git openssh RUN echo 'github.com,140.82.112.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> /etc/ssh/ssh_known_hosts RUN git clone git@github.com:hpc/charliecloud.git This records connection details in both the Dockerfile and the image. Other approaches could be found with web searches such as "automate unattended SSH" or "SSH in cron jobs". .. _faq_building-with-docker: How do I use Docker to build Charliecloud images? ------------------------------------------------- The short version is to run Docker commands like :code:`docker build` and :code:`docker pull` like usual, and then use :code:`ch-convert` to copy the image from Docker storage to a SquashFS archive, tarball, or directory. If you are behind an HTTP proxy, that requires some extra setup for Docker; see below. Security implications of Docker ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Because Docker (a) makes installing random crap from the internet simple and (b) is easy to deploy insecurely, you should take care. Some of the implications are below. This list should not be considered comprehensive nor a substitute for appropriate expertise; adhere to your ethical and institutional responsibilities. * **Docker equals root.** Anyone who can run the :code:`docker` command or interact with the Docker daemon can `trivially escalate to root `_. This is considered a feature. For this reason, don't create the :code:`docker` group, as this will allow passwordless, unlogged escalation for anyone in the group. Run it with :code:`sudo docker`. Also, Docker runs container processes as root by default. In addition to being poor hygiene, this can be an escalation path, e.g. if you bind-mount host directories. * **Docker alters your network configuration.** To see what it did:: $ ifconfig # note docker0 interface $ brctl show # note docker0 bridge $ route -n * **Docker installs services.** If you don't want the Docker service starting automatically at boot, e.g.:: $ systemctl is-enabled docker enabled $ systemctl disable docker $ systemctl is-enabled docker disabled Configuring for a proxy ~~~~~~~~~~~~~~~~~~~~~~~ By default, Docker does not work if you are behind a proxy, and it fails in two different ways. The first problem is that Docker itself must be told to use a proxy. This manifests as:: $ sudo docker run hello-world Unable to find image 'hello-world:latest' locally Pulling repository hello-world Get https://index.docker.io/v1/repositories/library/hello-world/images: dial tcp 54.152.161.54:443: connection refused If you have a systemd system, the `Docker documentation `_ explains how to configure this. If you don't have a systemd system, then :code:`/etc/default/docker` might be the place to go? The second problem is that programs executed during build (:code:`RUN` instructions) need to know about the proxy as well. This manifests as images failing to build because they can't download stuff from the internet. One fix is to configure your :code:`.bashrc` or equivalent to: 1. Set the proxy environment variables: .. code-block:: sh export HTTP_PROXY=http://proxy.example.com:8088 export http_proxy=$HTTP_PROXY export HTTPS_PROXY=$HTTP_PROXY export https_proxy=$HTTP_PROXY export NO_PROXY='localhost,127.0.0.1,.example.com' export no_proxy=$NO_PROXY 2. Configure a :code:`docker build` wrapper: .. code-block:: sh # Run "docker build" with specified arguments, adding proxy variables if # set. Assumes "sudo" is needed to run "docker". function docker-build () { if [[ -z $HTTP_PROXY ]]; then sudo docker build "$@" else sudo docker build --build-arg HTTP_PROXY="$HTTP_PROXY" \ --build-arg HTTPS_PROXY="$HTTPS_PROXY" \ --build-arg NO_PROXY="$NO_PROXY" \ --build-arg http_proxy="$http_proxy" \ --build-arg https_proxy="$https_proxy" \ --build-arg no_proxy="$no_proxy" \ "$@" fi } .. LocalWords: CAs SY Gutmann AUTH rHsFFqwwqh MrieaQ Za loc mpihello .. LocalWords: VirtualSize charliecloud-0.26/doc/favicon.ico000066400000000000000000000104761417231051300167720ustar00rootroot00000000000000  (( @ #.#.[Op 6AGp[z3z [Y p(p3kA [3 O[3?(%[SG-"1= pfi'qfffQfY(q3LQ0  2=zfHf(([G \)charliecloud-0.26/doc/index.rst000066400000000000000000000006641417231051300165100ustar00rootroot00000000000000Overview ******** .. image:: rd100-winner.png :align: right :alt: R&D 100 2018 winner logo :width: 128px :target: https://www.lanl.gov/discover/news-release-archive/2018/November/1119-rd-100-awards.php .. include:: ../README.rst .. note:: This documentation is for Charliecloud version |version| and was built |today|. .. toctree:: :numbered: :hidden: install tutorial command-usage faq dev charliecloud-0.26/doc/install.rst000066400000000000000000000465421417231051300170540ustar00rootroot00000000000000Installing ********** This section describes what you need to install Charliecloud and how to do so. Note that installing and using Charliecloud can be done as a normal user with no elevated privileges, provided that user namespaces have been enabled. .. contents:: :depth: 2 :local: Build and install from source ============================= Using release tarball --------------------- We provide `tarballs `_ with a fairly standard :code:`configure` script. Thus, build and install can be as simple as:: $ ./configure $ make $ sudo make install If you don't have sudo, you can: * Run Charliecloud directly from the build directory; add :code:`$BUILD_DIR/bin` to your :code:`$PATH` and you are good to go, without :code:`make install`. * Install in a prefix you have write access to, e.g. in your home directory with :code:`./configure --prefix=~`. :code:`configure` will provide a detailed report on what will be built and installed, along with what dependencies are present and missing. From Git checkout ----------------- If you obtain the source code with Git, you must build :code:`configure` and friends yourself. To do so, you will need the following. The versions in most common distributions should be sufficient. * Automake * Autoconf * Python's :code:`pip3` package installer and its :code:`wheel` extension Create :code:`configure` with:: $ ./autogen.sh This script has a few options; see its :code:`--help`. Note that Charliecloud disables Automake's "maintainer mode" by default, so the build system (Makefiles, :code:`configure`, etc.) will never automatically be rebuilt. You must run :code:`autogen.sh` manually if you need this. You can also re-enable maintainer mode with :code:`configure` if you like, though this is not a tested configuration. :code:`configure` options ------------------------- Charliecloud's :code:`configure` has the following options in addition to the standard ones. Feature selection: :code:`--disable-FOO` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, all features that can be built will be built and installed. You can exclude some features with: ========================== ======================================================= option don't build/install ========================== ======================================================= :code:`--disable-ch-image` :code:`ch-image` unprivileged builder & image manager :code:`--disable-html` HTML documentation :code:`--disable-man` man pages :code:`--disable-syslog` logging to syslog (see individual man pages) :code:`--disable-tests` test suite ========================== ======================================================= You can also say :code:`--enable-FOO` to fail the build if :code:`FOO` can't be built. Dependency selection: :code:`--with-FOO` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Some dependencies can be specified as follows. Note only some of these support :code:`--with-FOO=no`, as listed. :code:`--with-libsquashfuse={yes,no,PATH}` Whether to link with :code:`libsquashfuse`. Options: * If not specified: Look for :code:`libsquashfuse` in standard install locations and link with it if found. Otherwise disable internal SquashFS mount, with no warning or error. * :code:`yes`: Look for :code:`libsquashfuse` in standard locations and link with it if found; otherwise, error. * :code:`no`: Disable :code:`libsquashfuse` linking and internal SquashFS mounting, even if it's installed. * Path to :code:`libsquashfuse` install prefix: Link with :code:`libsquashfuse` found there, or error if not found, and add it to :code:`ch-run`'s RPATH. (Note this argument is *not* the directory containing the shared library or header file.) **Note:** A very specific version and configuration of SquashFUSE is required. See below for details. :code:`--with-python=SHEBANG` Shebang line to use for Python scripts. Default: :code:`/usr/bin/env python3`. :code:`--with-sphinx-build=PATH` Path to :code:`sphinx-build` executable. Default: the :code:`sphinx-build` found first in :code:`$PATH`. :code:`--with-sphinx-python=PATH` Path to Python used by :code:`sphinx-build`. Default: shebang of :code:`sphinx-build`. Less strict build: :code:`--enable-buggy-build` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *Please do not use this option routinely, as that hides bugs that we cannot find otherwise.* By default, Charliecloud builds with :code:`CFLAGS` including :code:`-Wall -Werror`. The principle here is that we prefer diagnostics that are as noisy as practical, so that problems are identified early and we can fix them. We prefer :code:`-Werror` unless there is a specific reason to turn it off. For example, this approach identified a buggy :code:`configure` test (`issue #798 `_). Many others recommend the opposite. For example, Gentoo's "`Common mistakes `_" guide advises against :code:`-Werror` because it causes breakage that is "random" and "without purpose". There is a well-known `blog post `_ from Flameeyes that recommends :code:`-Werror` be off by default and used by developers and testers only. In our opinion, for Charliecloud, these warnings are most likely the result of real bugs and shouldn't be hidden (i.e., they are neither random nor without purpose). Our code should have no warnings, regardless of compiler, and any spurious warnings should be silenced individually. We do not have the resources to test with a wide variety of compilers, so enabling :code:`-Werror` only for development and testing, as recommended by others, means that we miss potentially important diagnostics — people typically do not pay attention to warnings, only errors. That said, we recognize that packagers and end users just want to build the code with a minimum of hassle. Thus, we provide the :code:`configure` flag: :code:`--enable-buggy-build` Remove :code:`-Werror` from :code:`CFLAGS` when building. Don't hesitate to use it. But if you do, we would very much appreciate if you: 1. File a bug explaining why! We'll fix it. 2. Remove it from your package or procedure once we fix that bug. Disable bundled Lark package: :code:`--disable-bundled-lark` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ *This option is minimally supported and not recommended. Use only if you really know what you are doing.* Charliecloud uses the Python package `Lark `_ for parsing Dockerfiles and image references. Because this package is developed rapidly, and recent versions have important features and bug fixes not yet available in common distributions, we bundle the package with Charliecloud. If you prefer a separately-installed Lark, either via system packages or :code:`pip`, you can use :code:`./configure --disable-bundled-lark`. This excludes the bundled Lark from being installed or placed in :code:`make dist` tarballs. It *does not* remove the bundled Lark from the source directory; if you run from the source directory (i.e., without installing), the bundled Lark will be used if present regardless of this option. Bundled Lark is included in the tarballs we distribute. You can remove it and re-build :code:`configure` with :code:`./autogen.sh --rm-lark --no-lark`. If you are starting from a Git checkout, bundled Lark is installed by default by :code:`./autogen.sh`, but you can prevent this with :code:`./autogen.sh --no-lark`. The main use case for these options is to support package maintainers. If this is you and does not meet your needs, please get in touch with us and we will help. Install with package manager ============================ Charliecloud is also available using a variety of distribution and third-party package managers. Maintained by us: * `Spack `_; install with :code:`+builder` to get :code:`ch-image`. * `Fedora/EPEL `_; check for available versions with :code:`{yum,dnf} list charliecloud`. Maintained by others: * `Debian `_ * `Gentoo `_ * `NixOS `_ * `SUSE `_ and `openSUSE `_ Note that Charliecloud development moves quickly, so double-check that packages have the version and features you need. Pull requests and other collaboration to improve the packaging situation are particularly welcome! Dependencies ============ Charliecloud's philosophy on dependencies is that they should be (1) minimal and (2) granular. For any given feature, we try to implement it with the minimum set of dependencies, and in any given environment, we try to make the maximum set of features available. This section documents Charliecloud's dependencies in detail. Do you need to read it? If you are installing Charliecloud on the same system where it will be used, probably not. :code:`configure` will issue a report saying what will and won't work. Otherwise, it may be useful to gain an understanding of what to expect when deploying Charliecloud. Note that we do not rigorously track dependency versions. We update the minimum versions stated below as we encounter problems, but they are not tight bounds and may be out of date. It is worth trying even if your version is documented to be too old. Please let us know any success or failure reports. Finally, the run-time dependencies are lazy; specific features just try to use their dependencies and fail if there's a problem, hopefully with a useful error message. In other words, there's no version checking or whatnot that will keep you from using a feature unless it truly doesn't work in your environment. User namespaces --------------- Charliecloud's fundamental principle of a workflow that is fully unprivileged end-to-end requires unprivileged `user namespaces `_. In order to enable them, you need a vaguely recent Linux kernel with the feature compiled in and active. Some distributions need configuration changes. For example: * Debian Stretch `needs sysctl `_ :code:`kernel.unprivileged_userns_clone=1`. * RHEL/CentOS 7.4 and 7.5 need both a `kernel command line option and a sysctl `_. RHEL/CentOS 7.6 and up need only the sysctl. Note that Docker does not work with user namespaces, so skip step 4 of the Red Hat instructions, i.e., don't add :code:`--userns-remap` to the Docker configuration (see `issue #97 `_). Note: User namespaces `always fail in a chroot `_ with :code:`EPERM`. If :code:`configure` detects that it's in a chroot, it will print a warning in its report. One common scenario where this comes up is packaging, where builds often happen in a chroot. However, like all the run-time :code:`configure` tests, this is informational only and does not affect the build. Supported architectures ----------------------- Charliecloud should work on any architecture supported by the Linux kernel, and we have run Charliecloud containers on x86-64, ARM, and Power. However, it is currently tested only on x86_64 and ARM. Most builders are also fairly portable; e.g., see `Docker's supported platforms `_. Details by feature ------------------ This section is a comprehensive listing of the specific dependencies and versions by feature group. It is autogenerated from the definitive source, :code:`configure.ac`. Listed versions are minimums, with the caveats above. Everything needs a POSIX shell and utilities. The next section contains notes about some of the dependencies. .. include:: _deps.rst Notes on specific dependencies ------------------------------ This section describes additional details we have learned about some of the dependencies. Note that most of these are optional. It is in alphabetical order by dependency. Bash ~~~~ When Bash is needed, it's because: * Shell scripting is a lot easier in Bash than POSIX shell, so we use it for scripts applicable in contexts where it's very likely Bash is already available. * It is required by our testing framework, Bats. Bats ~~~~ Bats ("Bash Automated Testing System") is a test framework for tests written as Bash shell scripts. `Upstream Bats `_ is unmaintained, but widely available. Both version 0.4.0, which tends to be in distributions, and upstream master branch (commit 0360811) should work. There is a maintained fork called `Bats-core `_, but the test suite currently does not pass with it; see `issue #582 `_. Patches welcome! Buildah ~~~~~~~ Charliecloud uses Buildah's "rootless" mode and :code:`ignore-chown-errors` storage configuration for a fully unprivileged workflow with no sudo and no setuid binaries. Note that in this mode, images in Buildah internal storage will have all user and group ownership flattened to UID/GID 0. If you prefer a privileged workflow, Charliecloud can also use Buildah with setuid helpers :code:`newuidmap` and :code:`newgidmap`. This will not remap ownership. To configure Buildah in rootless mode, make sure your config files are in :code:`~/.config/containers` and they are correct. Particularly if your system also has configuration in :code:`/etc/containers`, problems can be very hard to diagnose. .. For example, with different mistakes in :code:`~/.config/containers/storage.conf` and :code:`/etc/containers/storage.conf` present or absent, and all in rootless mode, we have seen various combinations of: * error messages about configuration * error messages about :code:`lchown` * using :code:`storage.conf` from :code:`/etc/containers` instead of :code:`~/.config/containers` * using default config documented for rootless * using default config documented for rootful * exiting zero * exiting non-zero * completing the build * not completing the build We assume this will be straightened out over time, but for the time being, if you encounter strange problems with Buildah, check that your config resides only in :code:`~/.config/containers` and is correct. C compiler ~~~~~~~~~~ We test with GCC. Core team members use whatever version comes with their distribution. In principle, any C99 compiler should work. Please let us know any success or failure reports. Intel :code:`icc` is not supported because it links extra shared libraries that our test suite can't deal with. See `PR #481 `_. image repository access ~~~~~~~~~~~~~~~~~~~~~~~ :code:`FROM` instructions in Dockerfiles and image pushing/pulling require access to an image repository and configuring the builder for that repository. Options include: * `Docker Hub `_, or other public repository such as `gitlab.com `_ or NVIDIA's `NCG container registry `_. * A private Docker-compatible registry, such as a private Docker Hub or GitLab instance. * Filesystem directory, for builders that support this (e.g., :code:`ch-image`). Python ~~~~~~ We use Python for scripts that would be really hard to do in Bash, when we think Python is likely to be available. ShellCheck ~~~~~~~~~~ `ShellCheck `_ is a very thorough and capable linter for shell scripts. In order to pass the full test suite, all the shell scripts need to pass ShellCheck. While it is widely available in distributions, the packaged version is usually too old. Building from source is tricky because it's a Haskell program, which isn't a widely available tool chain. Fortunately, the developers provide pre-compiled `static binaries `_ on their GitHub page. Sphinx ~~~~~~ We use Sphinx to build the documentation; the theme is `sphinx-rtd-theme `_. Minimum versions are listed above. Note that while anything greater than the minimum should yield readable documentation, we don't test quality with anything other than what we use to build the website, which is usually but not always the most recent version available on PyPI. If you're on Debian Stretch or some version of Ubuntu, installing with :code:`pip3` will silently install into :code:`~/.local`, leaving the :code:`sphinx-build` binary in :code:`~/.local/bin`, which is often not on your path. One workaround (untested) is to run :code:`pip3` as root, which violates principle of least privilege. A better workaround, assuming you can write to :code:`/usr/local`, is to add the undocumented and non-standard :code:`--system` argument to install in :code:`/usr/local` instead. (This matches previous :code:`pip` behavior.) See Debian bugs `725848 `_ and `820856 `_. SquashFS and SquashFUSE ~~~~~~~~~~~~~~~~~~~~~~~ The SquashFS workflow requires `SquashFS Tools `_ to create SquashFS archives. To mount these archives using :code:`ch-run`'s internal code, you need: * `libfuse3 `_ including development files, which is probably available in your distribution (e.g., :code:`libfuse3-dev`), and * a very recent version of `SquashFUSE `_ that has the :code:`libsquashfuse_ll` shared library. At the time of this writing (August 2021), this is probably `commit 56a24f6 `_ or newer, and there is no versioned release yet. This must be installed, though it can be a non-standard location; :code:`ch-run` can't link with :code:`libsquashfuse` in the latter's build directory. Without these, you can still use a SquashFS workflow but must mount and unmount the filesystem archives manually. You can do this using the executables that come with SquashFUSE, and the version requirement is much less stringent. .. note:: If :code:`libfuse2` development files are available but those for :code:`libfuse3` are not, SquashFUSE will still build and install, but the proper components will not be available, so Charliecloud's :code:`configure` will say it's not found. sudo, generic ~~~~~~~~~~~~~ Privilege escalation via sudo is used in the test suite to: * Prepare fixture directories for testing filesystem permissions enforcement. * Test :code:`ch-run`'s behavior under different ownership scenarios. (Note that Charliecloud also uses :code:`sudo docker`; see above.) Wget ~~~~ Wget is used to demonstrate building an image without a builder (the main test image used to exercise Charliecloud itself). .. LocalWords: Werror Flameeyes plougher charliecloud-0.26/doc/loc.rst000066400000000000000000000013601417231051300161500ustar00rootroot00000000000000:orphan: .. Do not edit this file — it’s auto-generated. We pride ourselves on keeping Charliecloud lightweight and simple. The lines of code as of version 0.26~pre is: .. list-table:: * - Program itself - 4868 * - Test suite & examples - 7983 * - Documentation - 4605 * - Build system - 1053 * - Packaging - 642 * - Miscellaneous - 403 * - Total - 19554 These include code only, excluding blank lines and comments. They were counted using `cloc `_ version 1.86. We typically quote the "Program itself" number when describing the size of Charliecloud. (Please do not quote the size in Priedhorsky and Randles 2017, as that number is very out of date.) charliecloud-0.26/doc/logo-sidebar.png000066400000000000000000000767111417231051300177320ustar00rootroot00000000000000PNG  IHDR Y pHYs.#.#x?vrPLTEUUU[[[jjjTTTTTTSSSQQQFFFOOOLLLSSSIII???DDD>>>888000&&& "tRNS --0@HPT`pw?|IDATxbJPC c"ilLamͪariTAyCyB@E*rWUUuiXUUUeYµMYOzWbŪ,tڗmsP.w2*Kfdq.!8Ur%΢(wad߳妘?dƇKNvct8kz1˕Sƛ ˵)fq *eXoK*N2QKr۵2јeuIq1 @q.;eıX]~?-EKj-oW`zx^ cqs3X`3kޝ$Db\I [#`<監28-˖{ Yl*$gT`jWp榝QtT~eޢt>^L 4ThZyc2! V`PKCKk0@d\hloilV魝92 עzDUtMWT'kk7W^e2@*ց^_ELoM*; GktY䜛J/'mY e232T2BPe2@Y(@ ;+r{g_8 @kOiF-v,Phr,m+ Jˬ-`zey0U]y0Vrwf5Y^묭 Kgdjz ȘQIJ "e^LoeYO|(IO 2!2` `2!@`zkCde2hŇ0Na8|>Py(-:|G`XQCMGۡ30b@*/XEU^9v|'ZPPFEkBh5@(^ ñ!ݔμhϙəm-R>Z B+ 0\x2 eFƄ2@L#3LndPC_bdL^{$C#>4E^ BX 1W8wXŸ,n!{ K"/$2^`-&-#3k 0mi Lf(lx $2AAvdd2DT6(+$2ph7 $2 $ !$2d8D A&Hdd2DO3ه aa(@"L  2@"L$2d̐LL*2mHf; \.n HdXӺ2HVd%^.{ @2+erbaD&֩NVd@"u:L>=D&uB|@R+L3O+uJ$2AO~RN(wb s 2 D>ȯtn >d@"`z|SP@l*v LPeCZ/ rKȡH1Lv&7v%d1dG[ɱt%Ɏ刐̗t%ɕ#BYU.zkg#CHنTk7Q% =3d B6$G`9Sl+D}t=mObcSj& Š#L ( d[}1F=-aur1R= O9# Fl$ge@,6@N?-b`rsrMeaBqqMR)L\) o&;-fir>| ]-ބ(qg5pYMufzPzPYyP5L^y09+vV:-Tܹu&; ҮW0N̙C@, =;zQԙ 8[gEWl:3B^|jĜN ²0lEWP`,bUeY~{-˲E좉PGpE$7LeY|.( !,E.]UNUU(:&,X+L^vyWe#{͋߭m+[лnEFɇm|e_p=Vȫfn" e9T&˥"6fj @NuL-7+~9E2y6[aXȩeder3_ayC"',tdc1{L<Υ('-(aE7c5$yl2SeTyL{R?ttLг\t׿dm }L%nW&dCAP&'!/` X&(T"erb>[TȥcjMb<T*ym<ɩ7B$| K29Ye ܪ"er.5*֤׳LV*֥y:XMSȞLLmL<0LiudH(@c˹zx4$ Njsd j̨lcZ dj$c5ced wVs9f9˚dž29wRCeH󭧌;"2ﯦp|~OJe2=a5W͞dW3K0w}I'9$'6N'Ĕ2}ژDFdL)3Bdة^Ӈĵcq,i\Qf"vmLF5kULF9?wkj5XYsxop,佖6Adzc4gY4 k,-|ƫڮu2ld#so e2ʄ3HmxW2eB+QW] ,dL@@-[/uwPUJ2eWz\b/_VƝ[lLŽ1P&%'^&#0/jMDVF&yQ£LfpJ:QʍLVx%eE$(2RXŷV]g_r\|Q(6.l6+tYxLR,`ʡE|Oef*?HS$Ur_R媸Z!D\Q_yJwY1;xXʿY&+^oG!Qfu{z)IgJޛkU2Y(߽Õ?ҘvP*`=LU:3nep-nD(!E2pWkaWV,sd}DO^'ȇmdɈdJ/O29D>tJ[EV25Ul3cѕUl2Y"^.A~|kqdᨀzRPoe|mddDQWL=",;V e \-D&K\|8n2Y"g;?H2:_ɦ72Y"[^ ɉ[LW{Lݕj"9[.eE2Y"_s`  jL섚 2Gkp|h+2MTHNkY!(aIeߤHNzUK5憐!^EIFn uǵCL'T=q1E22Y"bc:9K,ko\FR/D~')ybE2=;䀭G"9__/Vjj}dd/!fߢQ'/D@qM$'kثxߥLr7v--1-Lka-e]"!8:<w, kЀL5GDsDrCKӍ"6B0,cUN'="fXi}FWo#]s_e;ٞ1W=_Ft-~u1EB~S&cd'H?>T0m ˫-="9N1HfLa,{ze/QF?d*D>qwo#_Mg0 [0YkKDr}5<[hf%6iצ9? /=j:+!Yvb_~;)^I*Erxe9-O"szki1ԗHH&1]OhP֚!װ pd&^]GC"Y$H$34KZCx&ؗK%^y-iWy DH~BZ5bIӲkH~}L< 񅰒jeٵHؗY!^?*ƛH"fȆūK,NQ$9xɡ,l?I?I"MN U8mɁF[wo͏L'hr/ErS%Yux[DrF7~9 2D򏗱EyDr8Fb:y$=_lJR$O>B{GV}KHq?tB\9uuٵ"Y$Y!WO$jWDrǾO/$6\sKW $'~|9^~ usI Z$[htTˮmON@#upҲC xa&Er|B?lp^~y}_"Y$Y!Z^HΡ,þ[,E$6;FuLkL>4yN"7 T?Dr[ adD;oq~ƾ< $Qlgڮ$œH'jS{GZ:ױۓ%ErH^j(]+[\""9TO_Q&ryo{LɁDMҵԙlJ,Z'ܯFdX9ѯBP\\K2Y$Gc5 kR:p"Y$Y!eʀuТq,tx$o2kmBO1J%ww2yz]G~>6h g{cA0Q!EHZG1՝K>,|c~GM$geȳ40Z B!ҏd, ȧ1($E9%/@A;+JTIdHw9+i ]w%drV"yHƎ9.d qmWԷ_&{/B))3E^"9xi`D+UV&d<xֵg[^i-ErnGgy6s.L%$ȳl^v%qҫL= M>DV^%&eG8h/Gr`4r=gxL!DMg|6Dpµt6d,Er!aHJJu,5]_[t<%4܋o#T imdHɿ8+d_'T%.+}%?j㟨Kox`ljliiYHw\k6]$*m*D,ErH^f9DVP -k2Y$6+TcjrB㼐R ErDxIuIhR6٣3Y}*lsܳ/ynup ry3[䠥>4yonrmun@& |}ߗVc' j伐,Ergm@z;"iɋGD$O$67f˺3L6H>2Y$gV{Z[n<}ɓKؤXe`HN,>+8n }C5n;TO$h㻾EJ>4^䥷<>gȞpe |t"Y$O,wykJVk޻X&ujHN&c<+8ݺ\',]ӻBRɃF?0es=]y.]x<g|OZ? ,k|ӒC('7 o)M?2H!FtVq ʎAYM#ymƛ]Ĉь|s*c0< ,y-{欐%-y-cFo>I tS8<'xfNprc%Gᬐřc>Q!wr"7}a!#ulg DroǿrpBFfH>E?kAbvXlMs2[+^~|~mG[< up#Y'Hc9W, m'}j+ܿց=c4#V exf" ^ k oM׮&ErRA-]wMdw"/RAZ$wmNeb s8̷&Ӥۘ+^|pm|ɇrƐWխX";:U ⬐zƐl#Oa A"y6aUfqJzI|Er#oLn8 0 Ggy("MqTkfN?PvȸH,c6=<(C俍1Y$+dSA ge^X-X&6eg] ex7l%?3z`+ewA=?X\v&pʖy}7o$"c<(;Z$<29ڷ(;K$2Β뷧{*#GuNB<~"ًx8luǟnHfC/~{Wʳ}6){8gy VzMu(o>/ k<)ׯ_UErf ʶT k<ql/R9Ĩ >eE]GszИpX^JBe!4fp,o.O>^QGr^-8?ll9oϯ-xh1ߞ5 }2]CYٔ?l~@|M#;hJ,ƕ'i("YS^ǧ|=\(*?ٖ/4~<{SߞHF$`pt<Ը/.HMMFek{M*ڐw6.;_Do o&ɷ+tRHdzX[~-nO${7OϽo7.g$yVU "aլ^߻q&GO(s='eui$SCu~3d NzkrDrODn^ߊ"+g ЗnL6HSx 5>{={x|>Xcr>??>ލ3{:5,"-Ix%>[8~y~Sr8krDH6̏cG/&L6?"0u]2~}4/3MmNs|p K|'ڐ|$[`Vێ^nbLdkCD2AZ|Pd(kJL>{/7'MɘLN 囷 m"'{5Fxw;xe`7IOLh|>9mo+TH"Y$gm5j+u?c~;oWz$,67P|k0[0 /=no^G>A ےd6|tӐIR>F3c`/c4JL(FLo9oC oM@/5܉%ErFL~h߳8~oInɺQdw5RۀyP~.ۭc~?4Hə!B! G"Y$nV߷>7> {9[6ٽ?KXp-ErےwHE&[H$dq #Iƙ|:wuP,"9{M&~/_H䦙8wfH,N$af O\&vx%ԋRiLcaG:.m@4ozoU `8Z$dGj4ixkբp~/F8;K`tF"Y${sy췾vPu` VI5e_ EH/m=<u}mk(K$dd򗁣m&{Q}KÿZ7 |uu()E2=O&|06U}W+7\[gT ",=~<ܦz?euVw0 ZW$E2>'2a|<Y( t||" [=H\zL2R5N?fnWihuH,g4Ӭkp^n*P9\qQsݴS}uҍ7d7مmu|q:[LPiߦϽ^68Ӛ8($:HL&|M;Mwa?>أ~U۠oHN ^?$E2ȇԻWx3omJ+98(D$da2vںֻ)C/?-_ jz>%E2?uo/桎[ynyOj6 Dr쩋Z ~t;o_ngZC w-oJGh*Y$diӬ$韁ݷV_`\n}[/`*Y'[i~vuAB}C;j_V[xEH&knu_:#b*9gL`:M&߾Nz ^>Ҹ^w ɝL4ɤQyg~6k:~1= ȷ?K\֍}K ڹQ2<"ْZ4964{ed`=s@k:)7~2Ln s)՛f.۹sYD$O̷EHF9XIzN--k<z ,M䶃i/w-}^K坃?G-n3( ϡݍo*>{uH4Hn6uHB'NuVbk5kO`:v](>"wkZM&7}zQ@_۵#oy@d?L&4/nL~Zkv߶bnƈ\&V&٬]5M a&?w\mϿk%[#)-[7J䆙|O]&~ Zw|<Kw[#z銦ی}o]ԾyN(߶@:j$ەgk|X ~G/[}uu2$ەC|2ܗኮgՌN7⻬2 "Y$3Ɠ{o۴~uo]>u֑'ٰ|$~ӽ;;[R\+Cݗ;ɨn45*2ۮi}ַKf>x J5[O$Ii֞}:R"[9aacra|ݲû|U6-w6~fy0}$tkZwSsj#;KƩ̝$E2h8堌K㷭%}Jar*dlh۲J!ywߵn}N̖Xp=G,MԶU;^wkp^rH׮-hoω8 w%ўAYz^ot&qzpCiQ9H>׾/-&|h4'E29 Fv<άkӴ4<㦒bSrtu19M9Wy#Z_vyu[iZMdm*lHɄgҨ K~ߘnuZ[pP%۾OZUݑdaDTRgh_|l!ߘ"QUH놕&|$խ'cqWdۗ)X$ɣ| nuc˰qu8K$d>N&C^ljM<5z̬60#)ا5xi:~fȗfWal ۸ 8uO0\7RߛoMsBⳌaH;c4|(-^oL:&W%ݼZhG^T@J?I/_N/__Z;8&MA35k΃vzʸkm\Fvh3V~~|=<_[hPѕcMj9!ϸnvþ.|Xf$O1!(nd&G{mSFUs z5ko}kwU y*e­L)& Ԭ;4-?~ت^>.,VR]ʭ.4=S.uk]?~ⵡ_όt,Wt;sPLͺdt9nkǚpMίe{^}p 9wۙ y5~S_Z̪68w;|Fɹ+ԭ֞gTn#|7ӬKO:fu}RikAvLz~Ͽ.Ze2fu^ڹo{6FA2}G}'fYT ?kgr/8dҵ~\[rylr]}Riҵ5jkdR_pmu*.yq!7?y^G|iL g[G~U#+o3zyo6?(>75=zM$$sI̒T=S|nܘ탷ˈ7ر]\.*ژTݝ jMW]7]` f;So ԈK^A2g0jaS? MA}x`e{O*j6?["[5;ECR%NnrV-aD CKoϏw?@=>nb)Y2(۠^f2)Gd8%$]5f'5Obo7֏yJsuĠ,oq+79 uA5ϵj 5=.cR٘N鷸۠>?龥~_@3.#T9QwT㍺O$cL:OЗ!yZR K۠nN?ȅ拱>?ukk7~B܆2htJr+98n[3]ukC});W²H4nm߾=(zkS\gd= 1c}έ{ ow}}< zk')]wX<4l >v6۠ZRsne~̫v:lhw[7:ӄW];>Hw}Ϸќ)hxNUOEmPCai:c_vCdCȣ-:C/N'qHm?C'%8U&Ma!5؛ש~cEģut}uCkSfy{~?/֨e> 峝Oa2i{6ߥb]O|hz=%C(?D |'1Gܖnu"vo4GV7m#eG}}T-:dD4r%~n6GNvyh:T~fV#2w}-H2y-~Y+[U6drcmfYxsU~f|l4@YN|ǵ=bǧ7rXݐ{7ѷƝNn)̟|>]^jekz)$ pe(2j&Se-v@[^byFmpvlo^FYc;ɘJ;epی]o"s3BϬ%D4߼7p&걟!2W&VՄjYSTIE#nn R?EfMQ&;ܙ! 52p@CJCYM?XW%10bB.vAݎ/C|az> dL$ǨȮ9v 0|;;9G['8ytSỢ71x{x>;iLMK YMKgqŎmlR#_,lƴ}Zd*Or6Y(!Kvȴ]('ݴ;F#v[h'GsjOl/1[FxO$CT>8U: *$QהA,N"N嚔C=>oGqLֺ ukk/w_yz|5U!a*DV5T!5?[\r ]u>:\ ~[hq#6$[>Tu խ#e*`>k4e#B -TA"Tz 9;i3٨\D&;:D%Ȅ`:TS$28j3j\D&lMVL|=g65Hd`A5Y$2[il6[i fb@"3)~+m$rJ-Rc:$2,4k0`Bw/kHs #cq A$2wxY 阫b/oN"q-'K*YF/IjniBRHI[&s+EVii9 $F>cϛFaJDȺHE5*&Jv@)Arlu $4fk"PidRkm qH{"I>JBHA% bPYqͩrLarIvvYDžX+1rbWly&dT3mxu\+d@F6HNS5`xޣfG'ƒKާW8K0Y-)Z  &hAa2ApB6H6L& xJkdd|B|eB?eL,,"iδ6H6L&"x0 ^$ka]Aa2qTu$&l}2H6L&! _F6jȎB%70Lɵi)F"c`LZ2:H֠ 3ӐIHils5Ss\y4XH"NNLQ&g>(ҰV6H}F1]A mI&F cis>bԬ`JiTNm8/0&Y愚Ar+Ot (NYh: H'MNv&4ftr~u#dB%oH؁lUWlI0nqΪm4.:a2dzr;QA2LLO&8ΈO80x9}vug7SPvuԌ߀@3k 9b'~+@-v9[+M)zG͘0'̧'Λo PEeIa)z6{csrl"}k hKy)~6BPrnlIVwVDٱS?YE[/3OɭM)[jt2I(59<3([+]{ʱ[tHj:I QoV]NyQ`SfJϡv;=J׼c ּd2cꗭM C&lI ZsUhb j絝1F5 0ScUU۲,WEam4bUeY`F=75Gw+y{n6OƉ!tIKeQHZ={~-"DXER/w߳?d1[~*<K,"/]UEHg7ʭI]4AK˿j9t.PS\xpJZ˲k eb`]dYlm5A$2Xl]Hdd2V($22l V(L~2ؐl{2 w,"F q 12L OPHdd2 Hdd2DF& (2yHd/L@"#Rs' @ (5dL֚2DF&a!R:i'\c)UId @vB,1drdWDNV+mGA$r Id u`Yg:|"[ؕ _]yDiBBSyq@Nڄ2@ eBF6 @L#ۡ @FCTY[8 SJ"x 0={PVx 0=vB:v;s_9'vVZ;džD( |9]9l<ɋl6u{o?Vy^=ί-ӌVњkk߰7JY9&q9lF*d߰;$|-B{Cd([$[ lj `e a^gW@9&0PN I(|GWʆ(ǬIheB"V̵/7P6D@`?:]LM3f 1P&^y6-L):2P6D&-޳VSʩ)z{Bq"|=;!r*S&]üB";Vz8m+U/.Hqߜ9^ܼeW~őc|h\e^)jݝ/ز.,R:ߞ2XE=CYwιP7N"z@uhGTz}%^'cf! d7>;״5͑u}xk}լ陓C̪Ia)e}4˾M8<Wr)7,Ku^i<υUYVqu۹HY&2<e<+jXyQ;ުQyyyvoc:2BYܴwޘ\d>c.ٳ> eLZlHy:>f,NBY tuVoa`P&@f#SǴt<09rpN+aԇ p*Bvy<5z9nj s o>H-:MKBX YzQp-ZZ=&eR2IM*_ k_~M!~b=rC;H*z7YrvP krT0@PYrTvqȆ `dbM;}^) 3$e]eS*߹?X`\*:e`rTkz"_g>285539XQR9?G+IQR ?K\nLg*'3nqP򏳉?!ԗlJ˗idOCVY-^ XEVe#?E@RȕeqQWMi|ګ$c;Oud55)T6J]c|> I=ի8VƈyQ[Ʈk3o*Jd`u{plU>UJw~C!>ڈko*ծ,4{C::qol}]!0v*e_G%Ǣ(xrmisQ`rcdUrc`<,i)K7MKmDrVܻ"tq9|.HF$'2>)Hew\mv6(dDrҷæ=u缬NE2"9ve.HκsnV(H|s PY$948xdDr2Ӷ{/)s2&dDra5ErƝsѬ&dD@4ErƝsVdDreLBY$g97Y9"3sC(|;i;SɈȬ mW_|;iɈ,4~V$973TMHȓ}+9"9ιc嚋dDr4eRko W\$#c"o/S;6"Y$皋dD!e,EHF$'f{ CӁH"Y$#<\Y"Y$dDrJ֧K@MNt"Y$dDrBvV"Y<&k.Ɂ.H%dDr*K+Ersm|1Y$#ö2/CͳDriڸ"%TzA$9ϛk.]VC"9ι*KɈd<eqܰCP"yu \8"9ιaZZ$#M&;Fe[D2"Y"w6C;ET",md"D&;yO.HF$KydsHïəw5Ɉ@/T2Y$9z<]nHӪǸM]OH9J"dDr$U1[s~߇uE2"9'yUs*?,uγl}thHF$Mb{$EGC8\o.HF$keU^obx,wfu~xڭ)A۴C/e9EwE۲ `P,uθ䄯[YT=&H9#ch޷Qe"Y猻HO>|,O&uθZ|Z [~Y$qiuHa1ݓY'Ew?Yk%H9#fmװͷ.*"Y猻H1i߬N?qc$r7վ\OuhǪ*7E\ѺTHsr[U=_ږ"Ċ,6?Uվ,!Zw}QlOPUeY,gKe:>-^|?<_m-vC?x:Ueȍt򦵿rz[}څzETmvzq v9o:8uK>"Ʒ#;;ͪ6Aw~R$/ݦg1_T媷?]_ޱ_yx90_V?;Gt>-}.?9O%ɫ]y6luߡ\|חfl?RGrAico< "5zJN߯|=4οOUs5z_Zsi2+ծ%Dl}?n/M<-Ov^ui2eT:Hnٻj儕ty?z$P#L(bu 8g^nշ*g-vk/ GrN#]VoLPCCd"9^*RM$gmu$>M`|\.ɵnyH.~2r\,vKoNK$Dzkx"d/wenXع!fH޵ɽ\/=k?3^$]@{85B9HS:L3uh7>oɛkߌhUmٯtOqrP)W~uΑn/y7ъUżq$Ӹ}7a1#+"H^ɴbwN(=PdRQjk5ϜZ4j\>yo;E$8#yQ itqd$ogny$KEr%5Oػ0Xsxi}V 9P.5ýC^aR1Z4z띧6#F1r$_;81#42z<;T5-bheJt&\ܰ:.Mho=j$?u70#3:}fubǁ#yf*HQNe;K74t'ucFr0ףaN"yy_RZ6yF jbc7-WDfEvH}MwH%#"T0Gr1̳ڶ,:"ƶEHs-ג2!#{*z5x9ΒZDr.p[$cFZG.=t.G,or(z,FEɃBLp#4,H7Ϸđ<{~$Kg=pm旧K(ErQ&HU=-X&jK5gćD2#O>y$_'rHn`W$Oqdk=a$&vɱRa3a='jڊrH..Ӊz~߉n>_3!FrWe .[MlZhju OqJ$KEv8_Ͷul۽urGU4Hҏ7X"Eȼl>Aݰt!8VWQlmպs>xܰ.rc,.aM%"S/XqVV5qo3)HnfX$7૚}ż<896+MƊŁn}+ea/o)Y|\=>RGYaHnZk ,ˁ{(<=Oze|nV/}<~pm,y*HLݗ*H6o>@.wpC0xABX.>Sd\zy6[tuo?}o\l$&HL)#vl܆g٬ˆNF_wCI?WUlخz 1RF:H>|V7m˾_?ڝ@$89HnԐ:a7VޭϻԺu"S~n|.`$GKe?]$k4 ܆,Hzn>g}kG]ղ͓v짎W$7n d#9^*Ѹgֳ6keM#BbnH{;Zͺ"3u*iuqDrTqÿE>5oݴ\ŏUzt6/ۿӥw_]z8{=.hlgjfHnf?H.FJ&\x [̧֞c-CV6UGrT\!h#y;L9RlB?s.W1"{ 3D]Aq/+Ga-:f?չ:#DrH&k~+n>|gw$l#Dr1n&؏XUMξ٫H("9^*HEol:HnVc~Լ!\i#5Nnv}blA9J"7Eđk/f$G|RHHnXz˛^iEr9B$HtZhw}eIbcOW&;:uo28cDr4y1I"8s#kjS$>Yv R׳jv)HqGrݳ4uSDvιH;jmEq¢ݩB?a!cum/%voo?{ts<-~47N1=snWFrH:siPꣴ8?:^"nmcWrx7op{#7 X#9^J$GmF9Ϊsye =uνDr.uV=&SrznX͏Mo/%cVw_fO-o?v${I:CFo>Ggi$K#՝t}.Is@{ds/\uV#P"~ݺ@o^vř- 'H! "*}>J06#n" G^|s\ݥ"yhVE"9pb['5-Ճׯ2H[ɋc,{y>&hw֍YqrD`Urr<.@}$Rg:q#+17|/IDrWyb˿W,ݛFxկkeɆl7S|V$17CGG$՛'f^J#yiQɃɣH6\ldKV$weѥʵf$͊H$/S^d$z#P{fP}$R7GDroj٢~d(vf"9EB [ez##p"cTL}H\.:5K^$7+"GLW涺otܡ'|ɖM#BB/ƽYt9C|`!>d}^2mzlnH\._}qgF pH>gH#yȻݞV ^ɖM#|)͆2}HL$kOo258 mljUC_כdUʘl}X$X?{D˺Ou=6Cз䛁D*eL֕!0跻֏A}$=6fV/]M'klJj+\N"ZZD}SdV$A7dUyU>GrHG}3Q]l^ڕF*eMpanRCvwD,8Ra+%R' Nɦ5.&W˨!ŗcd"D$k$R?Y=B"Z<*O/Ov[dUʚC$7"{pjEЪ8D?x Kp:5!L$W%Er1ɋZ" 9A]Af"H&mEre@<|id"H&d"Udv5L$D2L$kyD2L$D2|ĺ.?=A$D2Hd3osL$Dr*x[cHvD{x7R[DrD2L$7>yRw$RD0lD2HJGr_n%qmd&Dr·,F#9 -% )CsJ$da-"fHg4ى ]n:A۽ɁWR_u==T_oJW__%UV\{H[;"9C5W+q$=>o㋿=uG*uw>("ٷDr؇{Em]{\?.M<ߙSIcL$7#ْE$}o3>ȶZGr!]]/ۈϽXl:2w{VɡQՍDr,mNm_74Gr O^mDϞPX2z'~68m<Ճ\yqUʨWU[d푼5) H]c){׀`{]U)Lj~?nξ,CwNN?J$.h雹^Z\Y"9tRWt3G eO*et%k-[{$-NJ$H`P:F]n_xɽ W er/tG$-u|ZMn&*J7nحR&W~>}39*ͬj59ՃWNi_/ג*arL3Y}$/ӈ fb6h^~e/v-_O~*ar#9|ԮރH161}[?J<=_T[z0ʮAuҝH9CŒG$K-j=M] zx#sfLZx]*a򵗜zP$Gz_DrlF+ܬiwSEL[Z}EsRL>-(+P$W c?$^jƘ_Bu=]׵tZ.K4tftU)ˆ+E ]#9j {1ƋH6ezQ뇬)4VH冘sGn:7E[RI|8)՛ DrqMuۘH:ktKbYOzT̿::'ZRugR&_@rH[7yתDr|R]?iF7jR6UQN`B#v|*>2*eۜ+ʝM"9f-oCJ>"9.ڄW"5v΍Z}ԣQØN`,Q4KbJ3Nug? w _ hnȉ/?ϔz_G%_D;T:ꡦJן|y2LB$GsDrDQu*{]@_"ڣ"Ǔ"qu90Z:u-C0}q['R~="9egrfueDr_GEVUj2b&&bVY;u-P?>ُ>GK$ǷPf&q.3އsqeDrlmOQы5R >?mJ=yp3_;ϫ&"90vqHlW"9 =%(R81 pzyO䐋do2mS|馜l#-HNi~t]铘?#";<~!"!dr?mL)g=|43O}I-o_LT0} йSIEr_n!~ō qglh^n/2n\$l$+ӑqfJy +rRCyJQ1+PRMCg1"9i~ ;Bwq|\O;U_vTg$''Oc7c"9؁W$>Ӑ(m _t M=5&πsf]%`W-~Go3=:^>jI6`J|ܭ>rS{C"9q_̏=꽤GN?~Sa6XalZr4ۿ=oDrF J{&e\ 0H~nR9\v(]Jߟ}C]+~)R7&D+k~-7ʜ:Dz~ޖ$]F?sK{-i?,U?oɃƓK$8AM跢+{C#9Nc$ ItS/+>*N$c"9uEORV#_;MԇHFk3FBW*E&+KdK.ErNF|VsqRմOmF>Wd{U>*|1uDI!"YlqRr;ja~W(wDL^ejX"s?Ջ7[{+E?[ULj~+_dUv)v2kx*FrGZk1L>cwRdV$[. 5nSjJGr{j1WSUL6!d+- 7>I eM7$mobH.Ɋ]؎H4"Yߍ3S:ʮڑܲػR&CUn> u[F72yL=fWHnwg|FU>;ULֶBc.u=BJ$+ү0j,ɭZ,i>f=R2SnHuHVuD$5tD$iSש vz?q$ډdeuOu])+UFeuH+2:<8ʹT{ҵzn5ψdUѶwٵP$>DwY`JuצGMZ W?:d,љ[Uҵf7?So댁Nt.1JH~;/gGrhdrL^^#?l۩RL^FyY_Q^Dr~?_vL\NmS~"Rw~ֽɯD ]urWƘ_0k ,0Sz_nmh;PN%rO,s:߉oDh08Ep&RwxګemsA#PgHj= Hw嚳~[KSL>\W[GeQ|QIE$7"8cʱ?vƽp$kZܥ (R79)B|*R"Y(?uuEP+/KW$YWl[xDuzug(H.b@vZ7(' G]h,8D.y3AozrލH-Xd2U+ؕe"ȕ"- T`R**%zQm>nA['%+||j3vKk$hHSO^ZQ[4R>--"Yh,,;Wd1UʥB9uN$^T˭hRO UT!%6yO$K b "YrjѦ\w%WAtVgKD="9wA$oy$'CbOd{2nH~ĽBڗV-*BCe/u%sf%/n/Bk*^O%75#YbncT\>UVVU}znԥ'HߵɓN#{rUnۨc+NY{jRLe7e \Ǫ͓:T{Rv26Ks:-L[]\{V vTyUJU=.Y7;'z +3vTߏ`:2/cGzznu~sa?U͓%GþկûF}wןK챹J׵wr~s#iZ5G߆b w*skwnZav={pQW75?Wx0T)=atn;_~q=mIENDB`charliecloud-0.26/doc/make-deps-overview000077500000000000000000000027001417231051300203000ustar00rootroot00000000000000#!/bin/sh # This script is a pipe that translates configure output to ReST markup # suitable for inclusion in the documentation. # # Note: Don't pipe configure into it, because that can alter the build in the # middle of it. E.g., I've had part of "make install" in --prefix and part in # the default /usr/local. Use config.log instead. # # I have very mixed feelings about this script. On one hand, we've defined a # new markup language. On the other hand, I feel that writing down the # dependencies really does need to be DRY, checking the run-time dependencies # too has a lot of value, translating from prose docs to a configure script # would be nearly impossible, and ReST is way too ugly to use as-is in # terminal output. # # Steps: # # 1. Remove everything before "Building Charliecloud" (inclusive) to the # next log section (exclusive). # 2. Remove "will build and install" paragraph. # 3. Remove any "Warning:" paragraphs. # 4. Remove results of tests: " ..." to EOL, ": yes", ": no". # 5. Convert indentation to bullet lists. # 6. Convert "foo(1)" to ":code:`foo`". # shellcheck disable=SC2016 awk '/^Building Charliecloud/,/^##/' | head -n-2 \ | awk -v RS='' '{gsub(/^ will build.*/, ""); print; print ""}' \ | awk -v RS='' '{gsub(/^ +Warning:.*/, ""); print; print ""}' \ | sed -r -e 's/ \.\.\..*$//' -e 's/ (yes|no)$//' \ -e 's/^ //' -e 's/(^( )+)/\1* /' -e 's/:$/:\n/' \ -e 's/([a-zA-Z0-9-]+)\(1\)/:code:`\1`/g' charliecloud-0.26/doc/man/000077500000000000000000000000001417231051300154145ustar00rootroot00000000000000charliecloud-0.26/doc/man/README000066400000000000000000000001361417231051300162740ustar00rootroot00000000000000This directory contains the compiled man pages. You can read them with: $ man -l man/foo.1 charliecloud-0.26/doc/publish000077500000000000000000000032041417231051300162340ustar00rootroot00000000000000#!/bin/bash # This script builds the documentation and then publishes it to the web. See # the internal documentation for usage and how to set it up. set -e doc_base=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) fatal () { echo "¯\_(ツ)_/¯ $1" 1>&2 exit 1 } # Parse command line. if [[ $1 == --force ]]; then clean_only= else clean_only=yes fi # Are there any uncommitted changes? echo 'checking for uncommitted changes' dirty= if ! git diff-index --quiet --cached HEAD; then dirty='+dirty' fi if ! git diff-files --quiet; then dirty='+dirty' fi if [[ $clean_only && $dirty ]]; then fatal 'uncommitted changes present' fi cd "$doc_base" # Clean up and prep. echo 'preparing to build' make clean > /dev/null # Did "make clean" work? The only files left should be .git and an empty # directory _images. leftovers=$(find html -mindepth 1 -name .git -prune \ -o -not \( -name _images \ -o -name '.git*' \) -print) if [[ -n "$leftovers" ]]; then echo "$leftovers" 1>&2 fatal 'mysterious files in doc/html after "make clean"' fi # Build. echo 'building docs' make cd html # Can we talk to GitHub? echo 'testing GitHub access' if ! git ls-remote > /dev/null; then fatal "can't talk to GitHub" fi # Publish it (note Unicode siren characters that don't appear in all editors). echo '🚨🚨🚨 publishing new docs 🚨🚨🚨' commit=$(cd .. && git rev-parse --short HEAD)${dirty} set -x git add --all git commit -a -m "docs for commit $commit" git push origin gh-pages set +x # Done. echo 'Done.' echo "Typos found: $((RANDOM%5+1))" charliecloud-0.26/doc/py_env.rst000066400000000000000000000005161417231051300166750ustar00rootroot00000000000000:code:`CH_LOG_FILE` If set, append log chatter to this file, rather than standard error. This is useful for debugging situations where standard error is consumed or lost. Also sets verbose mode if not already set (equivalent to :code:`--verbose`). :code:`CH_LOG_FESTOON` If set, prepend PID and timestamp to logged chatter. charliecloud-0.26/doc/rd100-winner.png000066400000000000000000002520621417231051300175040ustar00rootroot00000000000000PNG  IHDR'@gAMA asRGB cHRMz&u0`:pQ< pHYsԍ9bKGDLIDATx]wE~zŀb)'΄L(*a"`@Q͇ @+h@i&e/]Q#F p}}Uo *E3#Ph^\j^nnaQ%}sy(2  "22==DFF Q WVV[7X@Cdv4PA .\88888pppppp .\88888pppppp .\88888pppppt>> L&b1L 8NI3!""bӐ$rvQϽ0Vh4b x<vi1a6V鬪r\}RRRNg]]2j>|1BRvkkkwrXxߟRq[[[ss3; 䈈QYYt:Ci4[ZZpp!*++1ƄgddDEE p8v޽k׮IbccӫK7++;'><..!RXX_@|||ZZZEEE&J]9Z,Ԗ*A$I6l 'A)mkk+//߶m[YYfaHII1͕uuu#FkN=ԼHɄb{p8>\XXw߽ }Jibbb)--Zϟ\ 5$''gee_cQ&HDl;ہm} 77`}?@WV |oqa[n`5 m;k֬29ҍuϺkG2)RLSѱbsfv̉t)lذtRb%&&޽[{uyc.5s f~v]nh)DЃ#{rɝ*nii}v7x# ",˲e˚t53/[KTUU}>lڵ_|Ŏ;jjjػZ|W@\\#< hPp9NylXpfRRR0ƾ|!W^y?s%''uy2111UUU]t]we2(Cׄ;w~'[l?RRRN|8[Y{-0dȐ̐˫֒; ??_QQQ6oެ8cl6[+Z~)O?tUW@IIIh 11nFVo "F#`Ν;.\8qڷ~E sssC&0]hvo>] 裏)?Zcǎў`]NOlJf!\Xzg/++c5A0_T_>}sժU BEEc/|ʕ%%%As@8p 77[o=p0yd=db]k&M$΂UUUs΍.** Ҡ-%%eݫWVEX,ӧO ~ngE :tѢEza F-9vlbX>T}}}jj?_ZZ DCIxAG${&''36ϲ=Y 6˦9 !O?tZZv>&>>JZ.'tʣZ'''$}GgqƼy233KKKv{bTUU;VaV[ne'̡!VAHHHqp8!bkk+nFXreKK|YdA@})Acc/]IOOojjڶmRSS^B#ZxصtMW^yessfdH1=XNn_py4f*5X,Fǁ9p(vgNgFFƶm۪=~6#**uxz?v/b9p6l 'y hjj뭷KKKSqRRnwAAn$k6""?\8B B#lfY>0ԩSTLyf]FqqqJaYL{{{N!IKJJz,U1f^nnn`\8B&]c I‰'C} QY+@S쩩SLe'SUUU }3h[ZZk2gM͔ ~> hnn.\8B 5:[@XGMLTcٺsݻ4)"""XUsTTT:6hd LSN9Uch'l7;,0l' G*'#Qѫ iӦ6kcV b+ơvR%;2#oM v{bb7|S__: ,K0`4K%۷o8ppt_RsŜx≌Tlu&.uֶ6휣ŗ$-.wcvq͛7 ֬_WbboHً:Rg-.gdMMMuuuY9S#<st7vݪ \L֯_/I?2H΄uoj АpB&{bE9996m2LCm럞u*c\SS&M Go[JY9TߤA7ZPP4t w'>""b'p=p߃c/޵ 9۷p 㜜tqILL˫ݱcرctr GI&H,t:+hiiQo9YShLX &L0n:tlf]|~o̝;777ʂ8냂1sO:UEgzpojrr23ULb=#,&!v4  (sojj:T HII=zlfny襠`Μ9ޕ GȨ]dI^^jwfڵ+;;[Knju*{F[j{ڟ!.*77뮻8džYpa~~͛GYTT8 } VkYYٸq㮼Jcy`-SFǺ\/Kb3,!t:dɒ?__ZeK.]v8`Xi.!#)JiZZ%?쟝]TTk=YYY>fXcBȾ},Xp-|lxy'x{{tx < /Amnn^`Abb/Oq/|'rrrX.]$m7y>voqQ6hӦM1>Jٳx_~9**nn;O8'#F|ᇗ_~Ν;###\^88t8477O>;va]xqqqLx&MMM~ Y}@Jivvvqq?m4_x]jNscǎ}w333_~SN9߹s7z 'Vj^^ޡC^|?O>SvGx8kB7wouP$IEDK KJJ|+[n%???""!666;;{ƍ{v)4`̘1V .f»{ٺofV~ih4߿_7aE>{SLX! n"""O={{c_6lXAA!`0y7|sCCV .KKKY>.(ʦO~}DІ nּ`H%\Tf"cg4Nsл:++kOWXK/ 6SWWwС7|nkkSi!bp L_ GHJJ:3}ٸ89O0Ν;/袬`wGm)zvn@G: mmmgURRr}͝;7''*BՒC 6lٲez? /I,.'f[|yzz6ꫯlllT-W['W__t 1aZS0?A`捙1cFttv]իW+]j*m6ܹs:$ :22K/.LT駟:TS__뭷|X.ƚ02ɔ^\\iӦO?](.X'`ua](uy*%%e͵w@,f0f477222AЎ3{୒$yزh#F& CFY_|1qD-B^z{wСEEE=` 04@YYY'tRYYY^ 1 5~嗌3-{&RvdٮqpaVPP~)S؟x͛1? J=466/DEEUUUM:533SEL@ S$3g\C b!\8[ /Բ?B7С^~裏f$ 2:ujT^7oo࡟Q0sh155^YXEwKII!;;`ڵ_|.f nY. 4vs=8dP38CkWTT,\0:: 8˃***R&bzpyZ>`eʗA:vK.DW\9c ǯ 4^9s0sC%''WVVs=;۷@bbbjǦ#k֬-ȸ[bUSNٳ_;<.Gsss VZ}ӧ3ah4w},Tuɓ'O8H0oJ3Tyc;@2vggg7772Bkab^lv;<ֲr| xo\877СC~ᥗ^jheŊ3f6lXqq1[_&I(qqq<3uv6lXQQ /pǪ/ggg3o{Y,>!@v N=ԇzhtȑ%%%{#VݻwÆ 6rGP}HRѱQ LGW:*\>!3Yr%[]Vu !o;vȐ!)=!C yyyժs`.] GfÇ̙3[ZZp8fΜ ~uW)3DE$QJ׭[Ǯ[4LvL*>|xn#Fy海k~]Ɵ [ WQO&w .wedz`EFJ`/x`Ȑ!999yyyٹyyyfΜY[[,@vvvHJ!6 jY__?w\Fܹ6-)))&&&:::666%%%''';;;)) x3ܲe dee^\p1\b -(**ZduO9v#, 2|gux=0955uZ.//EIFPzz:رCuoq\k֬9餓^l@LqFQuٿO @`&!G?*++_뮻NV9XzRf鯷?˗/}s9gÆ /((hmmlFJHH`OzgRRRtMw+p뭷~iii <|8uԉ'޽{ǎ{-++s8111999cƌ7nѣY* Zd??S? ~.G9XmL8S N.ZZZ׬Y>&I'xndUEyVZZoG{{{RRw.Z- ;a„ &Bn7?t> .dͷ98p谧&տv~GcbbnvѨ˃iot9Ik͛7/77ǭW__{ l6v}Dc,wɷƁ#g`4 *hE03Fa%%%999s?’}5 T Yv 㖖{lΜ9! ]®zȐ!_}Մ 6n.wi˗|X!O?M69pؿ(f⬬{l;wH/83~)۶mCl6}3h*1)S~{as17>%C͟?SN)**2dHyy9gVT^^n2nw2C!V5(VC~qӧO>|+ukk={yŋ2$---L~ߠ ߠ dMfyF #++7XhM7tW;699YmRа~Ga=[ZZT88 y\bqWyQ?h(w齑#8\888+x(.\88888pppppp .\88888pppppp .\88888&@)tqpq;5Puܱs @1($Q`j@r:1P"(Z?E(EI#\nq9]p.[=@p꟪p)c33<0(Gb~.ηl0ì=u ~(R6{8D!QEr{N;]v&!6pUе3rbb0@/:^FqZTgBͻXy}H(CASg @H$(yD7HnD0{aO}:ulzL@ vIк?*@- #]Uv6deW-!$zZ닋(mׁ_ݿo_!!F"A1<rQS%S ($a)NF:!HqJ;' iRSj_qZ]^V)B]5 "#BV€ ,R"Ini7nn,)/(CU%%I ŀ s`$` w{ Tf#amF(q/< |uޞ)W % 2<1_(UUu{;)RHL 0Bc0B,M} "@Fv XKR'ejb@Qh;}5un^ҸmPyu,:HHT]]@ @(* CO)ѣ20 ;KJQ۞ҝKz`P ޟP/_G{ٿs{_]Mѐ]t?z"Q*u uSf;>5kljje?De8U4LT;R~|c_CiDT[H' SGi\C#T>iSPt\EA@ 9g :ꚦ~YO~QtJsJ0hd]1 8$ " 85-˧]qť--+>ZKBH:;jN?]A芍r>S]WTdM9S@<<|#\ !J)%8+H3sl{#6Wl|`ZRC#A |(PLLԩ6O+V|h}#TXQ\-??(ٟbв?hbץ}1Gǟ""8cu1;恻\ j/vc)!+Ah~.Aڿ`ٿ c, Bl6SP+KJ:aDKT?t:Nz )h__DO5MR(!* `[$v׾3痯|?M8)xf+Mp!$Ƙ$GFFؼpU)%uazwCT>=?99ʤ}PYڏ[bM*mTT1ϐOTak*aP !`vpxmOW7ֳ2"pM'Gz:&)ˉc$v|(8s^1?uwa+XT].g#0j9Qu؟ju1i=K4/}3VA@Slw:=Gy}~/ܳ}7Q&> UW;m[iT쏔 `ؖ䥧lY9t YGsP /]W TmhNƧOAZlu Ʉ;,/ /kh]y-ꙐǑ$G ˆ:f(\z1-yYWk1 $쏺{\0`c.A0D~晧g7xUgz9_E|ɗ^N)?d+L*b S*W$Rm%Zg֩A@VGuܘ/̛}GI (cG}3?/iS a.}n O^)f{@7_#'0+4HĀ3H0F!ʈ{ ~rέg-. ԟRC?x@\`VϽƵkbB>wUҔ IU\D'7TQvT'S*h)&cH"i)QO>qY+04։/%(^|Ѡ _2RO K>%>)ٟ(_t˜"͎xg.Z輄Xx@rv9>CGWB/C^ 竦>= +Mb|iu^|Ѯ / հ?hؿcO |e#68. Opbo#|ˤ^*M)~8N‚}뮝ٚܯ!||iz.} /9PnQ?հ?9Iw_O0@1/wӔ߽o@!@@ą 3bDN :_f Ycw+)/p|)>/M @&D3!|SOXĸA8`|)'$Ok dcT2)//z|ieF'y|9U"z#1mhujM;il,!2: n?{_8mI<<%)*A|_.WD&}c6`0gZJ\~a|K.$/ //%Z?*q'~8G]Wx雂)9)UNd UT975ϻ257/: ~("$Ygw /{]Eb/e@*5/m";N~P1?  Tvt# i zvW?$l^'O0 (7 a]_| .9&̚u嫺?l]J`R{VL?*|)J;bħ%EgV|Sٰ}7Aw$Wm~UAMد<.3x_G$Jc1 WWW];׊r_1aIw_B5))"H/ @ Bmlv|H@_QG3#C!<iѢՍMzg(@!P[ڏ  xggH) KlltrRBZzrFƐ!C*, wn)=c"Nywqz{˅MoR%J OOà2>u:!+5^ _2g {?e.Bc=?B~S}"G8`$䙅몪[GAp~~ƞ}99??`%QD]zY/0ʝ1Iy_-ZawhTN2( fVWdUɾF\B:kӍwRJRB$"ID(!X5䳑^%@w|nD'Y`@Eu ZZ]ϽZ=3 @fГo8R 0Bt`F( IK~wi;kW\yAbJ=$bAЋK7u μk~捠~~sU|K_>9l2z+mhCal2LfCd"QHST%"d#G$J(z x|؟vώԱ!"?oD}cG  @Hg0 t<D)]?λo7+)9U(P|2豿SogsB*/J0߮%tQl991*mH\Vf-31/;9';9#-6%1&"ڂ[`0A|Ao @K>Ad[BRn'O7M?؏b_ԑJGQ͈jks<_Xs톿$yAO.ٟRB:#f̸ui,f0 B$I"g >*b0N!? ÍÍm7c2k;ayɉ Q(:I0F(- (%*TKJdTHtc/5CQ׏N yB-7-{.--] 苂/-SJD(PA F#` $5/~u][~,SrR|||Ld cj J0EB(!(Hs潏v@|SN>kԩsMpBu Ӆc.)GbE}S. k^le_ I@J'[H?/pD5aA`6*^.'txBi„?lŔ)%!@|B(!`r-[ΒD6m2O8ɓN<1"#q:@)p@6_*(Fթ3ێ]U;vUg^={DbBv("шiӦV~'۝JA:97(6_nmp1yW_uޕM6,:NBq0_4p}ЏE-mQZJܿqe'dn[bN!myW__Jxٿi?o?7+ZyF0;+Kf:tH̄c!/J(`0̖xz.(Mv]{ c|d6EẌ́H|iTfOKZO~q9<6g\JBZ :?r(V<"DGϱG@H>kn cϸ=.RF%ID6-^p~ޣ\]8$B(|Δ-"b4m',+|voƟ`aE|@_/gcֈ S%{(GlP?g[wб,BK4ǎ4MhP %n74`t{:)J`0kK?%V=RgcNz_DFYHDv_/^̓EV3e? ,xD"v.:>(co\,{+|qGDy%mzODQ27ƛ]l{0[Zӯ{۞A& K5> ~p\YL͠ |6I cƓ[3OMU\1n/ /5e>سg/LާSD՟}Ư~Dޥٌ]ʪ b5J}bu 񢍄.cv;O}X r*+qWN$upꣂ/+|QIX@ e?}aesY;֒fo/b&D _}p7zbMAxd|u쏽r2 opA7ȷ{_]JYɋskZ[  :PBϹ| X Ӏ{/vf#0`ݝ}f5 -={!*n";]e#5` -\Ԩ_ Bn?K+]w+KG!صhƬ1:%d{r\0.}{;`DD$ISEB\c_BXiLGCYtQ7}Q.bp{"k;{eKATg:]>Z_yk(%FЙc΂/l#pkIpf:^-[~pLU&?(zͷ \/~QQT_sW Me7sT$T_ g?{IMfTp_?~;PUYH\)/Q&W~e]/T6  * րO*-&*k c_&F]<3sa?#} G,,*R.1-~1a $oTT^pdIƮX芳\[g2Jb` tk![P0u$u!IT?pw Kl -701Y[-&BI594k{߮(__@.3!c1Q\/ mNPVV!pgI`|{)RٲRF ϲZM}< Eڲuä5)Tu{yLQp|a?*go EFnf40:ڤU>n?!FyϿ3zTΛo<|hڽ}XʧƟ8of삖b6Q>] ǡŜ{65FҰ|u@Hv):$CSե@pb466u_ㅵya:otOK,f ĸ/?o|?sff/lrUtx̒ COo6cJV_GX!Дȿ\> `|_$}=<Hڥp"I?peYmnG"P".z3[DK1ʏ`tէbb(B 4*goa]nzy :Ô]@~xhG5ت .i 'h lq mM3:m{fX6ml2zA- >l„)|iZ;a SL. :ˀ`do/ &{ֆg7Se;NA˳P1p{K?7Laq8= L$]2L6\RC]Z5 vY-ꂯx` /{7ϼ1+.* ʽ@:̋ 22@O@o_|ct;7c=n77ۖر Fv?e Ko62")bCOR[ ǀ)yjFtpCz, MȡĄ8o\w7 _aHAG%+-Αy0˝ģ(H?.nW`*j` oi؟vӌ &epO;3AZZ2%P0:琔')VoAvM<'qV\"2MR? :vTGE HJ'.?5쯠PICMg.!a/e(e$Ly7_omko y&=« ea$'a`^&hx( oAx4뮫_#lYafG{SH"BF8ańp|>uuHig <3ʂ#~T? v}a:6Ӊ0VWZ W]y.3/nwzz~U[uW%F#a,%}j:QaY02ET7оI >N9"u:5=)!PPXva:-jA* P\ts[T?pTO*+M&JoKXpya@upu}P/%/C03I"_%Ղ2,֐RB0BQSNH3Г6Gw={Հ hJjЏvF.ߗ9jdN*)1:*"J_ğ!ύf3"v wૃ;D=Q)h*7pQ6,ɍ"@h[=gTI"$+#=!23"$t}`ZIQ( @%wcHٙqC|Z;󈳿* ȽᄍQ+юg;n?JHBB aF7BHT|^  P|!_ w)#%Y ls);pҳݵR%XTζ/Nz:uG;_ Bhd%:9\ r'PJ/h B :G^?cBd4;r 80,IVCh?(ӻݩ je4PDD_$ɭ仏VoHTC)5 pcB`NU~;v \x d;_v:IEV@ Lt^Z'\cRc6@ q}cqs55 &PRuV>XpFRbYn#$bo@ݕXew(R̹_A|+ߗT(<[@ v}T5-)͛yӜkECtcv I n˓{xh:ո8)FtZ|InjdN aV&Y,bGP&20#Ҵi`l_]@ݵS* %`0 ٙ0P%0iG}ԟ7OthP/s$ICFKvw0F>%DZM;cڭAͼf$:Ji>Xeޞ|A/Tٿe@hH? #y}{?<,ˎ[%#7ٛ@%r'qD) r"",*O3:yvK$M2<%vHIR|A9-҆Ds8pgQ̙۳DGn _vC')AP@z%|!z=1*&⥥֬!3N# PYǁ{M1F)'}c@BS#4 RFM mT?B%݀TUU fveΘG)i쯞P&`6] r|!CVPl\ԪzS?0o'7.TtRuOAlQ&I^/Px и4 _jװy[zq'sY'O g&/O!(\t\uu(9Lj:H-ν|>_ݺuի[]}%@i~uJ'JC헒$0<ɭbÎ|^{~#GNڹ_L, +RMB@m 8@4z4CeKDEq],E!tFW\6 LC|Ƕ/J $"3EbV[UЭ8Cn];lظ/x/e@g:)cktOFz,5AwΤK3_iQVѢȶ(=^2Q(B2-5$ςE?:"" Cd 7-o\^ߐ傀h}'t/er[ٳQ"BXZڿ'$R1t2d*[~$r:-E] i?h49O4vlEsdyoGj5dy#_{M O?[n{B6C];>n#|.YrX+H&ݶ0BMi_ y `9o3 0 Ӧ>mcv_wNuA9ogr*DRXt`݆P@8B,!z_x_exg!p P,Co 'y;$9g"`z]k!|"[rbB X_D29 E i1hːc A]0s@&h_7M3XKvXk0"BGxoV_ D0Q4]576F*ѿ!$7Ze-b2޴J ynOҖ5R4_q4wj#_v)i诅Q` h_i~k}4ħ_ _‘H]o[h.wjk\al"[vh,y7^|YXp 'p<ѳ- z1 ͇&2^ECSN:l옝 e]qԫ(!TWGKhpnW=7ru-0H7QCi49M3|Yd tqۅwnң#h4c~_),w(&RR$"H)uEC l鯕c~9^qh:ݧ=TUQ ;w,i*DȲ'_/CSe V@A"3> 2s .yf8nd~_f'?H  @xN_oe_?D >5ȇk~.eh9=1Y)| !Ӕi pDcٜCR)-diZFEiBY|ؤ[ɠ/`9]N!D;9_~cyGUU݁$37uߡc~/߿Fѡ=zhd_0- c`L^Dt';@D*<^͹v8ɕ'%v앿xk~0syO]3u;LN1flOHA H f_-eat#@_wyE ¹*xTU]NEƥ=M/Y92`(RS8y:܍JNNZLeK)缲:Tǭp YŢ˚o| W]uO^ݪg!%oxyϿIyWUĒ,6G!D.8Vcm|馧) 2* .!{п_W5}K?,z 3|/~,+06|7?u-|خKG |iƠ@B`C?TWYA/fg! % )$)IJ)I!㏓?4hBķ d?!B'K HU9zڊW#P~lz]BȄ # Hr΢iPRL\ OL1 W\S.j cWI,D@ "4VE?ۂ/"RJ}='^k6V96qb*cp?qb-a4ш!8g47[B@"r(lPX,."Bd,}7nq: m$MZX`8s1W^yP Yn_I)% l|腹*J"D2tsn(qζV" a;uVUbIOSR$dáזB9Jty!BBO7lؚwGc ۈrFϭ>q5yOA7/r4mkJ"q媍Edg}FE|Οxt))zA<+)V:s JTO3|I!%޻z_oF]pDEZXe8~P@`;h{6[p:%Qv_tr]#/Bfuga;?P]]o-ھ DXb?trA$x?~[yu&n_s -,2Il`p"= r9O+.üj O+NN[\E~觝9{-ٽ :=.#UNGmm`Ԯ^{VwD ‹ y;e%nU:Gž} "*~ػo8")p gMeqj8gueBDz #F\[ 0'G˩Y҂/Rp{wܻ]w߷O@09#"kG=Ѕp 7̣DIrlyEP f{Lܳ;*L{dKevr.k*韢 (V+#'D7x_d_`q#_B/n#W]]0Q@QX?pqx~#?$%t:_.YpU6߿xiD$Kue HC Zw᧊%TՔȤqCݑ;#`[8gu/>&jnV8̇kAK!\.e8ϻށ8z_,K/A|W((Ga,9p-[>|ea CkWdɊg};\%&XCC=^vZU<@;[IWUUIuDBpƞ~wi/ء5>Aq гT2|A/L_f!J"6l ,0=^Qٺyӡ¹`M$qϹP8kmUtf#N_zjKip o ZZg}矶kN%ј@~̷6ZXh s?C((!Xk?3AǺuPR+) Wqê}-]mkoM z7rjBCw3ù8oaquc\[Ux:7YUO h (:~(]X:||#+Yt;7W$}Q;c )%&Y诽P2i7h͡hYW_0̥ O_X+I)q}߫,RVPUq-C%@&) \n[9|^2jK:@a̳=Z=)8>Qg%{/+ |{(RW r/J7/dPm<>]tY|@RJ}(v9ɐqb]!B(S'XXqΐJ;|nvŻ"OGs ad"Kٰѿo W 7?r^{ݽRЮ ݒ =9R"suDPUr*O>raN%t7wݥ+G;s@@C 6ׂ/0߈I>-B<h!ma,`|җ_~v*5KeHҔϿvoksRr9c`O/̽sN_ҟ;19g|A&/DDF -Z@-,x ƬkW(#J?~ ~p8FE`^SG:xО5A(o7_g(9 |a3_FE*^k>쏢h@˧X\ 4Dm՚ǝ!Db,ZM?/r Gb`!4't~%Ntڌ5c<%š^>%>7k/h˔ * +K`;} _`&Ƈc唒 ҽYt8W ޑZL3F_tQFP(r '?tG:xuTrKq$1/>|:1P3_b|&y=l=(gEݰa<~>4:ѿ/|ԿD.vCՃs?B>i  rp@Z-by ZO-pqSrП,qF4h?w3 !а+x9lj-'/hKnJJ/.X2RbÂKm/ȳ;|c׭5ϼNjiuz$E=XoF`iyubHXH 𜃏8x׼ {NÆ 6=rONA 7F 2b^c \w=X (dF3Ǭk\ xI.N6Sݶ/+7FS!iNڌ*%b~ʀ:M{c qkz?^ާ*4J ,I%>ϓykkP`:Dߞxu۶U@/+[~s ,Ea(IZRAʶ?6H4ֱ}ɓ6|H@1 " `1;X*J_xWI"ǹM-4z`łܸn0 h,sp5k7A]WXR1Q#?pЪ+kRJ@um}KG>W] pF9|uXoʿ~vh0omyK]<˜U@&)oj}/[S!2 t_22I1mSW喻?*K4 &l!/83Fj,&,ؕsWʢ I奞OXs_SP@o9Jǥ P/ j^G8u s; mB8cK[W+w1u[nj̻/.;uZbȐ[P6 _k3'dE6Зn1`I,#J! DPT ϼA4 )Mʿ  "bz]ֿNOܭ' T "OH{|ؓJ m@e27hW+X댻)Q.yǞsr>,Ύ9}Ͽm.-qj%IN~ܠY{uZ`1G//+|ݟW~ڐ\ƵIm@׵?8cА< ooo٨SfKZ}cT?xc=uhmuXm@u %%t};g]0l£n~+*#)|a_IBu/ɶ%+%={|a?|x TU*-"|oRJ[W F1#}zMp9VWfÎgC.zKӭ& 3Dfn υeޛ|gkXQ.|> Bue>UU4i?okm/}}"FeI0@ Bi7íj4 ީ];*Lnj޽k ʪL+m)d @Co~}@T^g_U{Ӄ|էE"QE '}~|tT_/yfGLXӼ&!aJ_s%qu?zωǝ8Gc&k1ØvuPvy05("2Ƹț)D_7'}K!J=m'T) /1UO xh[}t LO@$v*s|b ;)0L/h-YODj?y#d8lA p[:h@}Ee݊c*#`xQd4=^.2R>- &4'mBU+0-iG6ADF΢p}H2GKyūC 4 !HM*cg]vȒWoW L8/+]+:tuݥSIrTFEc6#Rc9qz49û/ޡ 2b| Jg?㯛@) d*mWܧޚq 867@ٵKR6d|nٳ p`| xynbWޱon%5@FHݺGö$$ ]Z $W7OUš_\쫛_[v9`+9C&JB`] & Ci~|as_x1ҟuqs6(,B^п-g?["9󜐹 ;g_[zv}t98gi#FN Gc 4#wOé8b|Ϻ@3¸qO<$ i~K"n@` sЕJ_w;qܮ]<'"Qh^#4Cdqf̾?3)/|i;Y^x-m.)MH=XeKJ oCGJزBHe^<_TTT!g,'f' N=dsN#Oϫ,x] b5@0D@4 D@ Rj0 `$2 em VS+15#'/2F/ Z >DS8S8 1 _?6?w/(^9{]B6V?Ms©y*ǵ?9Vy!RGѣvj/?{v>)eB`Ov3v-)( _sɔCىtEK(sb(cq 0cگ$L 18 \+\+~{Wg^S]L6?_+0=y-{Mfv/ J) J=]-P18kf&ǟxvǓ&X PE'пyĭ(LUEz 21"D2s˫W{kZ_^pQ)Lޞ'̕$bB•2\cͩ~_Em:UAI6іJgШLb½90OdmOҿ^ U~M)%_1h4wٮmؕ7<{kB)IIHHebslλ5WߜGog||%QZ J)B?*+Jn@g㾱|c}HdL(ҥ#, Ac~P(dxS, h=/}Džpqԧ:4$lT!JK<0vw=%O8F% dD9N^sTxYؤ.݈!ڕqD9(tI,#YIMumxf={8zDu$?uҭݛx*ڕ֮:ǴJێ- // rW;y.C03LaA!䉣oֽg~c'ݿߞ;^zƍ(hTFcA\x':tڵ[_~}=27$Xnމ>ðV;\NBU@02f}pk\Jιi}0p"r#a:c$a/^u)so lk26ܤ}f/o}b[uKt71 G?r_/=OOӟws=:8 $>B-[j7׿ i 7祷?~THT"Ǵ||> rO.U2)FH' 3; !KK=+t~tKWs9|wҹ@0yKk_K∨0@#aG4tm|nPv0icJ&BD@"R%U{+\2a[S SaƵ{?"` baOu@0_ eo_ PПɻsb 1d J:*+*<觳.zuur Ec}m.(xBﳈHO.'p v[tD41+oɱ~{ Ti'?#@ $IHjWk^r#~t3A`+TߋԦm߮=w-.Ü1?C8eAa~잨}q);(B.璁 5_ht=te7~?f[y⹆dYߌ6ظ^q:R#c0 "C;9mPrׇ1UuZXe}x~|RasPSyp"}=(5,V1 /lF%E p֡Oܗc'>7nrRSaX)X*yDLUY<>KIBp$ZVιƝ׎c, /2@zPH\NC@'>33/_Ew[0B_ZvV=v.>{DJhR*)PUSQ ">y∶-=isIOjLQX ,ʨ hUJK*|kW_;#uoGa˹ ?G D)6nG{n!A$IJNppD C?7?.YOmIM14jXwO}>='\$Z;KFV\~p@7 I"v;ng] =?˯(:۞HʞKsP?lذu8<2+.9jճ.D6m]>OemhϹ 7S;jxWz c;-’sȅ+񰹂/$IZWq;UUZ彏~y%+~ޢ?Mц "Th6@c_[|+V!}}sGw]3aO}[_,WuԖy`0 GN^Ugv=p,P̽{URwDeʸrDc՟/}[~dB(FYȺ◩ӶKP<q|e/x"yqF^GLg"qD8XUMI;<|3.}vmc&ZYIѿ_5w Pvk#?x>dÆ6 gfUdR2.x| 9~V9'n{ۡ:yrC@RXU:!Pk?-{5l꿶2ҿe_\;C`9Pں﫷E_翯:O.haI\Zbb1&@|C\5k:b1ѿ9Glm;8z-WOv:XeMPH$ ݆)FDcً !;xkes&)}0,> RA5dɧH53`ćpE 3DTU b6֮vź׬Zfm SdK] c6zUn*^Y[l,X^./yK|qө( cR2Ej8 H$&}܃G>eH QyD21H0$@0zI#N9mdb|Qϵ[f:LL1ӫۥ_ݻwڥ}2or9NEL˿ESjJtպBbj8 "~:׬ߛW?+aDӦ5ku;$ILA/\Q=tM[7uޞ~^U"{.^Dk./I"!IRF2ѨbP,UUա歁M6o l d&"Q@Dc1{k`$"J)x5Op{ؐЫON:x9II$!Lz$' ra#cXsw`,)")e( XM+~^/~mm@mzdݏ. Cyy>fYӲ{G%-7ۉרiMԦVB!g*ÆgϡFypڻ]N@Hك(%;cJ^ wHWPo4JI$E_"hΡSǒn]vat,۲7.[g_wiꑨgTZ]+!xd*Sħ{xmyҥtu8Ȯ<8zS?ΐqDkP6mں-euwؔɣ'i]2jLbuA"oAS"0:nA|Y|+ih<DX,21(>>6U/~;2E(#{K\8=.>eç}ln|OT(\{=k9g"RV6)%SxYDRh{#wa{vb UUDHm42gZ/q&3NSLb~&+=c ,ŽDSE4#"dбC!:y?[/7ZC!-7BE}vchLwSS_[>^jk21EPr DXbnf%Ld(f_ҳADIH4綰C{'5uϝw9b`9K$m dQ?Ɏ["cDB-(S#|)4eo}s_uoKvy?p.v)g X?3@`0{p)t| ,~š(h;vX]TU圧qGv3AIAjvl鲩2Lv].8йcpTdL_ӟɰ62Lލ;RV#AMBkC@ԣkgyұ-{]ݏ[W{>^s:}F;|h}:\BP8EHAr41خ5s&?ykUx h{RFy,`̱XdX=Py]/vK}p$ZS`!'VON=/7X0iM>3H4 G}>w( rJ OZ7-,4o;SIO@B#I&WW;>gkxgBa2s'_zl׻g~}:վ[vZްhLTU5e"SfRm"c SVʼW]tqGM=qT4 mD4ZOv m`:Wx^L=_W\tLpmmspf ۂӟ]qg 27}7BDsDTr'5yhO,(+aݻ2F z":g 2* S(v9\.}Wi,+uV{<+RxEADI2Pe?]Ls(n9*BfcT/zqRYס}]7wܑ/>[Y4ő|$0.D+6OfJDaC?ܱw #~qXkC'Y=`M\_`GП:$㊊Lu{RΨ$I"qTU**^=Ƀ.TI0Wݾ#aC33/LעY1CD w)H$$HA@ k#qbd-bny咧^+^Ŷ.; s!0c+/>r10H?ϾD^-\6TȄ`A[fzx%j&K@EAUU5]w+o-iڕ{gqS)Ua%L^HP2mOⴊx.2MP610lOG?7`O;q& >'?l[lŠOs \~ҒXL5":mNg_[6Vt\[̞u CsDǏnq0Z_d|Mʿ~HUV?[!±.!I7^nWơvD@}wL_UƒɳqRi.3$"gj?M|@H~ v%߭ E#ިqíuzv|g]~6(tsa?~qH*a=S>-X\kEǰpƏO6"ޜ lR롿a J3$+7=Cqh/>cW;SOUusRD=jKjh(B)Âԁ0y[|Vx 3s^LUWxZ=o˔o+sg$DmPZH6eAjT8QYK2yvч*1M BПW=N f-nZ4ls?խo8'\7al^ @OD'8 OEBN_m@#wمG1z# #А ƊtuA ~JPJY[q7\k;wP zl( GӟYc}s_۞"6;.vD)6auFr^[.eNy[E`I|Sͧ>;/{G.>uƔ3mR3|=M/k>O&/=4@joM:Z < nա N~땻%ݵSvڡCUMġp#X tOL8ևd 6_&"#kOáyP4Oo˯t*R sW&)Z06-zLKL_KoKFCK]H vA$Π&|։Cf_2 }AO4pv51EƩqom!'LПG꧿ sxlh\@E᱘ Μm Ec9ͷjLMCӊ/S6$w{ 0N::EIk9.M|Q_?td\k$ƨ&|.9c0ks'^GlaXmioҘ=h#5KۂaMc9eT{޸Q۵PVDv<矿Ka(X:nSW('g1#@OҦOnNA ?PSj3g]T?Xo7m`l+4o FecM[=}v}: @C}tgT6_)sؿ|Ɂ@tl> *X! aV7WkE\;ꀽVVk~= )?`ykI ?s Fums;hجKN۷{l4dLߏ+ fan)B; }BA/×, W)Il.`CґDž pOseuD1,2ϤHGǂ/-+OT @q#&[~牅Üs+)2L*8>ەFѸ))P /П9֔4Oe2}py!e_26(^kKl tWU)/h-z+/:ڜ@m;(9~!˖.xe; $KC/hHei) ]>juuxeЂ/"$% !*UUBT !DƢC) Ҧ܀'9jhf?Z;)Hp].EFN_2f(BDH;񟒄RH)IM I/;f?CӀ:o X@Xi'~:r2zE$c0035 Kos;'&z5|d񢷮R ) W ‘JI)c1j! kP˘iG~pG>aো7}u.xv٩*cHo |ac_q!r9µ2ZQ!D$ T!՘PURj%o0'2:bw.9gpI$* O':5}®&~v񨝪jJ3_Y|I"8r:x4&k57nR Ѩ*8cnӡ}INeݺoWRsP8 `b&uH5 Wx Soq @KKѿp(:d]w:r;#w=v׾F߯¡h8*8"2C/;S/;'e%% 9`0.~o2 E_Žuݎeje~ B_>eΣF?Q!$缑/lhu;' I~hZ&EVswخKݻwݫ[=ѷO=vԞ+ TIAV xт/|?g cv<Ɂ@H,/_EBUN^ro=_.z~A6-y}G87[ײP(c6_0IL8Gmdowo-]w$Plַ/kťg6.]~<ͧLiڑcCY d trcɢ:kD)hr;Nko׮vJK}ҒR_t;xbj4B$rƌ/hˆp(<qI|A_p-z6S}s{U5Q=(c, 8`sg[.Ib (J \. )=v%?yN6_r/҂/!TJ|[{SiY&PX;v2Ta! r? gwUq@63g.6uIKPgCnK4 q}ܐ,MOc_8*۩0[t/o g5~SoӇ6u8BEa| 'Xq? bvmw^PbU3GpWD%h|||2֭ko}'M >6,k[n}x G1vHȂ+ٰ4g,T:)ϼ_W @(  jL !%!gLL\snˀP.JG<+]aPE[#_&~_*J}MkNZb5%*zL5e*!Jf>ej;$׶(#8[Ԟc:5Q'<ǽѿdyB@RRvޏ?mw?wI6F/g~f)w^u|\n&47 -FE8 pdLQf1SX#qO'YkÀ^P!6K_WT!e>̵x^vӣ.e?74)(8gnrΥ faNџ<ˈh=& e^U;7Ii?b4)BHo粋 @ف4Dn@{9A "wqp>]0 dAE|Bij6ߜqe}~YyKj/Ӭ~~Q{qs8ġns_V5,""Ƙӡq'^XbPο!imyW[ӿq/DCQ`mhʁwѯhrJ3-6nYϠNj K'p( ;I"cf k2ϫo-;uӭu̼W\QQV c2h*mq/!?g<R>וE:ۭg 2#1]Â/(9/hdomL9?󺄔|5)CBH}9 @NSɎ&9/ ͉Yt"--BZjis?J]ƤOep6NsiO9 Cp*>5S u:xi#_S{>׭ 6q_ q:-Y; /IHC朇BMm]EB6Fi 4^Y|-l ͜8)%o͜ $oq#.!S.xs&I6L E}X4Jr6ʷhYM=lOWZ r}}JƘ/g1 5xyI/̨ }eٕe2vY R20 [{?~~TK8poQ$Y5tfnP~/DkqΧzfg&$ Q_DPJ07ebY!˿W^*׍|fukvyP,/ꯙ4|^3~píIJhW>R߾lDKJ`L e `l/C"H&;~\oC[:20 "4o(k_v)+:)Go ޒmM\]|*kF=^7NOyUr3f3VK)kր/:1Ţu6E}|SӪhE^3P,i' 4}V.M\W(JD5.$>W?-?(_ȦK/=<rTIa$>!Q)\@';j&k߀{>K/ť-_$ {7XmOnܵgb岮BA"F|)ꏠXU &&.g'=>gYa&=^4_~L},B૶R\R lG?{8dCX˽^*h*z-,wtr%?0s)Y2^<0QJ!?sԩ'581"l E  qFMtF;Z ?vI0xC@Ӗ]xDRQ' ~t <%8 kw}#g0qxx{xr7;<7$>8Q`a$]ׄ.@ƲB $Lc˶/y{r9StW`q!尟vq@5=_:W}W3ٲ% "@3k'55DD عh1'w:O;vCC8 _Wby J`~vdٓS0{λ9llvǧ g8* |ꯢȤ%NX".3NIy l']ӝldQ/F>WHJ-{_:9J ?/uWQۛoI&>;~p/ XB>q/ij] -"EZ|~*evfm1 XAMdž3$Q3+/_W֦>uH"zT?r~TUl|>W XB"b-_"/, *_s7׏ouY9~I'Ks;*!v8{2?^wO4S ظy_dž>ePvj"\}̜qɢR7jK;i#S[/AΉM_AooRR"ZfY'A~J' cW-&^˿_"W>cSݺ81Y\7FrކvuŒ܁.˒8K iGK%߳ {%r2@׮Yi'4!˿WOL-Z8` :˿;N_b5I28L3{$)Dfhl2р0+k a-:W#B۵{$زu(:F/LX[q'E|i՟t]e!_&2LvXъؐ> _uʻ4/lQ*|㘕s +,_>w3vxZmێ93R_j Q?"cL bN ]"@4ՊP XbA6B{%YI&""I9R=d#s@,+_ޕd,/fJ}ǑX߽{ O6<_ށKz2$.HNhKL4C_<ʧ]b>gl_F`e v/ 32KM"+DhlҾ0G>KIi7/J!b(vxz@My38ȫ/G%,Y2Q'ԍ_ /e,jK 7"; {R5B૒yY.ز% QoIgI|\)<{`]t4h ʬX,ig|WMMK%"ts84S2u}BJ{p|dž^=/Y_ȤBTBz/ԨnҀAS5nS1NC;\[`|֟%w@Me<KԿ2p ̑RK76.S <bჇKqIQOybv鞮"TBX=Q s6:EZ7ewzU1Xir53cKapGJB=^5a.W8tx"gc_ބIБxxw`DZuȔ0.lw'X(ZKyueSR[f|)kR#1c|AK c%8kt+AJdsavr.X/}D<|d ۡã  |xoT0J) Z) ИfK=a@*R_ |Xr9\{cik,eSup2IJfBy݈@|Md\j_N3rΜZ=~v@@IPm*6!α*J 1ѱ8ɣcq6 v ty{/5|AD044xxdjz8RKFO9"lOOŏO'g?vX*hp`/ET˿wV#SqO>2VbG|U>Kgl@v>Cs2:K_w1!ę\ޚ̡F)e6;ŏO'ԟ4 |_אdw܁rWu(RP ht<) /ݜ2uD|tDĹ0כ_U%!GF!GSy,"4ZA܁V__{ԉB KK|>P[w|Q♴?al]&Jr Y@e 9}])Iʌbw K%96&>g(/tTLs! 4iU |+쉥vdQY>M ]Dt^b$e|" f|ArJ8UZА+'y@دNKe5o!],4 `|D2_auRRoO OR,_Z'BD^(Z`b|I/ Rx8!iZ)૮ZBvw{f399Yhv[jӖf ՙKd+=]f2a[c$6IM彳/%3傾?8R+ :˿qKs#[6$IX $_?˗ނhrsw钐T] R@d(1T"aU*0ws&?evbjB@N#j5 |Ax= #ͩ?Rdn@T)@I+*QXBNMO礰KG|ꯪIH̺l6a|!x_BND`eYgyj25i3 ]?8$էKoE]@C7TB W_ȲtM,rnW|% I4DLsngTx x5 2筶'@?Y5_>S])ZU!%2 տa/P_,yq#i,@u/|| D %rX8g!dEt 2a3 | 0 !jVPn5XعM h ?#KQg-P[) iw)ž :̆/ | ?Կ|((8lD |GQMzqBԪ~%!|c; ~P RZA*@`7VH/r{0HF B@Z@ZW?Ձ [^=NRʲqjZiV2?: C_j_TZ)Bxi-kkz)"R R@p_]7v"zcu/kK|0"oQ-~t@.Q_J'hXVi/1`{-a.>髺jf ҏXinX{_މrZ@v@_z'O(L|RT>ZXI)y4rWJ0 p3) d2h_ѥP_~kč@Qh!d5fD|Ae׳E;3Ҡ3{Ps)os=^uQP8zL/&wR@m0"uPͷ=aӚRMN@|A8 |~Do>tp/B,|i=.QdvO@˿k* J B([4 -_n0a˒DR[J/~K- vJqƪȁ/w½XԯcYRS42K[^‡.38G'9S!Q' I-NMv@؟H-7`I76=^@|G3l!CRY hbn!KPDf_>:y F4 LCx5S\ˋKYZneYS!K Nҍ7 h&@3",_ h ZPHQ QCzJGBɄ>(TXm' AdHXhٝP;CPUKp[P,>3OD~ 2˖Lap*:fr~Rg , s%R6t_y {:hEt:Y5˓⫑qI|b=i=^HY>p[W U*Yӧ/[$EX$ܙ F|s.!,/G%@23K.iX!ݍ]@*Y|; -ZDBA|Q/8$gLeJrTR~?(YJbALXD Rͼi }MNfV=R@YN'Ux<'nSS9hP/x4:PAU K_,!Re|=%jY$"c6hI' p~ @j!cUTAHruf|n&,R)y˿V~ 819ݙ9 ($ -^ ,rREu`wV)(5E )/z@R=9/YP&l(49o޻;vD |UߨqMLL2ES_9)emj1˿N\ c%A6&+LsVUgQL165]21/$COAh/g2E"H&mߕM+.rK33pMtf5|Cmyof:w-Dl Бc c|^NLdPCOz':g!@"J$lMy]4 GG;Կ1+ox/SM__=fkkFޞcvZ_Ac SV>_b?@T2ݝ L==2%+Ja'nIN3cϒo :2:]/ҘAA'縯'Nc3W0~4JrrxmS@ۓiENg PD@GBkTڣ%D|i_'dxhjiJ{U~ [ooBe"*Al"B%&q"|yf/92 חm՝!Rs'ʐ!@t-G|yj?Ȁ!IL"*KQgd"dkknszTw8Հ O\%TSzJ}; Z@mî4IXNjw9B8tdb=,t"@Te8ء#dgr%ӗۇN$!,_E2&$:2Y;9`HU M\FUz RF/lҪ? SGZ}`t>8µ3W^I m(:|d5I_՗)c).)ތ_@;aYP#CSc<)fҞD`NS@1<2QmL -D{8ovBȞvYNJigzdBUbcB`pИ39}Ccvs{I_Ϋk1j:PgG|y՟ቩcH@>f|ԿIHedRr&$؁ h]y glrxT`1x@sxW$ B-?&)TO tC[t@9*Qs3V56Eԥƀ/Wտ\S)Օ5cOtw% w+ | 9p&Ja}J0Rx쯵h\ƕrO3LL}(; NkڥS ,%䜩i/$$I)I6o03f/%Soصk_Nh)p/GT <|,4GT.[WʔHclQbw{^`"1 _SV0 y&)sL j_>eBd/ t<"BlK#rܫĎQ:VH1F/R՟|5=UW}˿lL-Ylq3[w+k;v5Q09 qO5 |Q拪1aCӹ"gk3ҒdQ% 3ܢAkSS;m;l? Z رXN:Q_z0ۻP$: G|j!S@{GǦ9G2Z˵VG~,KΛJǸJ@* ,1"g? Nj؎[,Z$1@Qc]Ȧ:3#Z cy_B@gQmjZ<4nXqF|yտ$,Y/_c]sҖ|im84NJ%~Wn;8+q?IJҒ˖yy? }/Y. }ӂNe8RKG ǀ(@Kr[ bf1lR8)/" k([w L:pE.qvJC6ء~㦭ѝJоQ ( [E7/-ۆ# _QO+ Ea?n뎟gE|JCy롆9k߁Q4]Sę_SD5VD- _ȵ3?`IQڴik,gvNjR,wʓ ].񅃒j__rl9vgj%:7l߲ir"oL'OX iJeŪ+_O\@D4޼y{,R'Ԑ E3|P^<4+2Q_. PL,X*< V3PML6m>|e潪3irfBWxªys+u^>tعoz m |U|&优|ECn/n9td48I=|TA@JbE, hArɢREfr=DIh&Ƿkts{%Kci3QDiɓNXyN?})]h.e6n-_?i=^j_l,@ OXf!U]7&w t'eS.K[@/eԵQ)r~P"P WJm(%{ V;%Yv> Mܳo8e3;D|EJb3N=&[85$ 8ֆǟüj_HAPoUW~-D?>ܶ?0VIvb7Ī9}H8e(U/_n%a,K>Ԯ2*Ͽ/Q5E|ݕJIf*yk!nKwRPB쩭|z3 HCx, _ඁOSy^) m$U?D|:vEv/fncͱ@B_ Uc'ܬv""yWQ}! y+Bܖ?>QY篞f/r$N{.P<&O$QKw& HFچ%ba/P*@HvhJ$5/q4?KUc SO^/;ۮ1{k6Y p'ԽB_I泛& &UN9rO8_tN?d%/`irQ૆S[Ey$J&̧t.eSM$ "X NWsObT_tgCd6@L: ,@Uպ{G7.*˽xhsb<|+ EA/%Աߑ"xM| ݴ74%, -C r&g׮>nU(V =_9SH?"#aN?1ҸNSW]˿[|5.y豝E Z૒񯪁G"Ft*K~Ⱦ[_5LR=˛C@ dܺۆ `rEy 4.aP:E^..W_qVM"= nMtPhS=0ʩȀ/+/Xg^s񅢝-K"JD|[gEy];vN&MZuj?!H&c:N=eu)sj͖PO0=>Y!S' yE`ۥ$DL|]}EsZ^~5 Y< Ѳoo)?lI%.8 ૦g+\ӳw5<WzD}ׯ,^2 @$)b) =&#_o_<s7,mN-0{DH{3 wg|yTU@KW,x[/iI^WW W^,JEX(?lcU[#rxi\'ȫP/ /PC`3&ܟJ2!g|K)1Ʀ]pD+Kүةc8ԑtۧ{~̞}# \d<&ϯ\|q)_@_|հտ_~@'4|Qi%mfս[\+M_Sr+hNw\ ǿ]%U`wW<w,=ҙ!e} M^q\qZISPpw@FB:-63ІCc'tPS9R@b<`N wSR |ճ/p̪;6JnTV7?t=Ţrm0Ex_䉔N'|bt$ *7; |a Krq6=W1G)z=^&5/Ѯ n.aN9 p s=^tK3Rtio ]-ewlbº N]nIxwL] [k!t/6Dryq}O3I))B˩S@%$>L}לp/CS;TK0 R;fNQuv6 |S2ª7C$ KH&*'BJ?a@B )HJ&U?~wC@!1[L~^ݕ,°6|yԟRIsCLW`>WtY}0 !X~2wڙk>hVX]#ywY<)#!$y]ƧsPcUjELyN7; _zkw̤lfX6óe,flʲ,*òiͰLeF6fͮLpg@udsZ `yʫ/[=6^6'JIt_?mYx-ܖ$݌/+,nWsKR%3{rypn5*gtp+jtO`) 7g/IJ@'ۣd~ eyI_`{e-ngw"&KS|г/^+KQ<||k]OJFrÆG:&`}x3i{Ytw% /nK3LL nuۛO=l|O-/f_ |_.a+g߱RPcEoDN @h _A.kV=ɫ_l<p&SN{WiwӖ{/oQH%΋T&i_oc0F͟4>"Xt𦛾L&p舀jo{~@6E5/݋2@ԇ\k6PQPԿXWgzɒŸo8id4gz_#g_Sf Og/7u`H"aV7|)#@QE+&,?|/=1[sd~yח,[PlOSqȍ I~< !U5Ep!ZӨ?W0Ho]ϼ1|z9#9C|& ﭟUw`'nDdK_;/;j1v^N<)+ Է14͑ᡯ~;U^o_er.jX7Z`ãm~ó*s?ƛ>sGr3}>;[%Q6|]7t C|CGIC-|-A_1$wO|-GAsI{'z|~rsCIi]X _I@ܺo;v=F/_XڃS•ZuW:;}W]yz_2o3j{}e2BJ/ c$ea*dR?{~Xr?"K4_(>rߌ ` ZЕҬRy?XnpsU?߹ 77#9ؙ -K _n+?[v8N%HB4j\ 8 G'yzw_saYa`O~|:,U ˿FJ)>}E~azܩ!_@ zoLHd>˿ש?<4 >9m1>O_wbo{u&J|5`33 qNx_=IJ)!˟54ſvӧOXT;Ʉ񏟸g~v_"!; ٯ;w?8rɦ\=\9G&zSzl?:w'a0 IknyͿZ(yrQ|_7_=o;㼳WHqj@|Xʷ|ͦy|~ۛ/<ՓyX$W@q!] 0RoyC~{S9ڑK׼~BE{P{A?Iƿצ /r 雮dOWJHUߒKU3v A:,15]`)j__,? doOY%NE:q;]YT# Bo e~}]Wð_7/( S@usZɿѽwݿ?;xDhuc眹՗w9+W-& u}VqdhSѿS>ckRk8 no~۫K7={׏MrY"]KϾs)}}TO397*; d=^PHa {o %&GH2%wu~߽mNjagGOhFpA2/XUyN)*u==뷂' J.b >}Y-p2HBMMؼww\uh~;Fw`Pn|˕02zSstrg 9ktt†/уWHKo߶s( ၇6纻??7  U_=ClcaCcO:cG?^xq[vuCONaLNς+/Zz/_(@)_ML0t3J s#Gm2ޭd Cd;X:&dۂY~Bu=\@HUC֭I}.U{UkHiu{8F8;l~SDwkn$s@K $@˜cU"Qe`)DLӖ0DI $ a 7BQLL19Z_"d{`ӗ~cx%SΙ?sOyA}3_z]U8 Rcs.\2_a`RDB}`9(rvt78uz`/~iI\{xbSL ?,TfetOk_vB}5pWu$We]O~vJ%xdݽ [  |^/ Ҫ?Կ| EE)"KչG6*mtv10/܇2J<GQ[3Qe&URTWUk [6|EPJLZNvuS$R dHPYtJjLi`on|$娿SO5m7M/֯-&5SDkA4w R~RyE 7v@U_ Z5 ! CJxf1;'}-I!gBJgfFz|x4X?e_Аo|n嗿}r2e ߣ4 W8˿VY AZ-J纬Dܥ_~Go0(<ӄ aFBSRI^ޯM\ Ǝu_N$4hY/ׁ%1sy~‹"E'~/:Z*;|qtMj٩iui} +߲xʷ}nnB`v ߾/HD*!,_z R>6(/y)E@VIzslxx}c_EO:Q*_gԇ]c/ |ZG{s_XHJ޲W~vĥ nG^w?Odz%f h%FW_u-4%IJAf[~~WOOet@E_zW-U?w5CdNE] ?X.(%I z_[ Ve ϟ{>}BHr׊/|廞2}B8 a oofԾ_}//h=K>RչH |YG˲D2ivwwm_x]\nfQvEDBX6c}` *5[/WguHlv/ЩG B@)IH?7kmm B}+MMs-! F| /!f׃>|ʩڏ{UAPoUW~w =P./忕/",J=3/_ݳ z?U?|׿z4;He /xI/)_x[w<"ʳڿ /h *??sy<H`w"ǟi!,R_BHswz{5XF/kkۇ %tE-KJi i[i~e /u/6oJAc]$HdҰ/ | /"B"|o;g9}K?!$P_Re Lڱ};.{_v*4Tf%X 7l?|T"$)5./"B2to_KŒ|9ŏO_;YvNr,Q9_R_P >fСßg[}ُnRVMU.B\>-!%BH'/OJe*C!&%$]Lw\>#} HWRz=.*j-e;pȁ5'(9 ++~8VYʃZSy׮7zU'~ҼT|1BVY0 %i4"n~7||k/އW.%t*ZXnÌ/OH/D!"f&xɧ[?wZ* ӈ"q It"Mԛ",߼S]~K97yPRCFb_@JitCSy'>vO,ʶk'>_럮{UmSz!)KU1ϯz+2'$ID1n&7w޹n^+%LMot:!$)ت7WTZ|ܠn lAz^Z]v{+_@,;nE5ܗ٫qRA 6p~2< Ǟyɇïd_ug,^70K,Dr8Stz8 vL`H@pxhǶy?=!X<dɛ~rM?9W+_wg׏D% }\쫿}o$D4L3Cwn実vo.WH\@./9I-&C[hZp9sqgMJ%Q,R$f1mR-.U9C4 09O7o9S;{pooj4;;;h{k-[o^E)td䥤جV rFipNpx adZpMQC(EQ(Ȑiy<5^aA_S'^ӫ`iEdfo:wZEyQNnVbJRpىs4"9ǡу݁""2`?t~k67 )O  `vLtto}և QqUUUY%yiYI)ͦ( cB!AP A hj==Vw_l?^*FH@|hz䱷n̜1|RGiA~AvVVzJJ5!N*v6wt759_!A)$&0V!݆9 9)9ٙIIiɶ$[Bfl%pj~>U=~w5swMq}ŢS532 s2󳳳ҳSSRRl6j*"9'RU|~7;ra޲$Hıf$^7y<mc{Q'˻~̯+@\B2IB,R1)@bD#T?:m^2O5$qroƤ؂G !!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHH !!!!!dHHHHHL1X(Z9! [/))%%%yc8u%$m ? h"!d)dL5ܓocߣ"C|aʠ3566ʡ$!!!a`Z@s @00`Omރh^:Y (Mi_<#=k4h Ms]ZUaZq9đy>Fc1>hL=2WFD׍QN#_"h9; 36) Q0r8E{x4c4"0DOjDpȏlQ]9~.e܁x6UhPHBdX)- AT@Ad "( @Ad   0@ԫ_2bNoW1K#B}eD  !ؾz \ЯKۊ(CO] ʡZd9D~E ?eHfy0PGE\t-T"KEnpC'rj^MU~ h8 !h, 91{qK *P֌%8L0ODuX^DR!4*:#4L' @`4aFa"an I?$}BgʁZ 5A" /.p?-Ecęb CP- ZJ($Tė  ø"}X((%|Ckh J n#t8p zpHq~I.-Z'X?# 20I/2P,w!@9 QTY9`G!r K@4&zbaQ Pd` ((@@4h?: k" Pr578![pаF=&0eF"sNɨR^JTơiHD/DV>-5B3^'Qm%p(#tT, E(B U #1JhD =T]:c@C_)qBdPP{Ģ\>FImᑔ"L$CGe h8GYi ::?pvy?G`0rxM1(l;cM(}9(h;D~?ZfUi2r`"XͦPx;IY(r^߸^VU6RVobaCZu>L%h+Ikj KN$'%0@inC0Wk?=VVT֎\ᶛ/]D֑_ y_KΨBtW(po{ve$PMӶnvrgZ6U՘b Xva?sڴi҈¡jg^xwmIL!kovݰ/fH+jcϿIII1Ǣ={.;͎}D4d6mT]0;;{D Ξ=kɑ/x]8o0@FFfee 51==F!ԲFܹUuD1GKjjJCC0S^Dv.lbx4$^ӆ3bFs<'';%%ƺCDjww_\=,35I̘M2a+[Ӓ:~jkhhi,++{{^p\#h6xݏl<ȹ :hYcd /*̻u՜lDz)]]]f485rCWWׄkʛ<|  "c,999%%%))i``^|.K,EQ"N.0n¼>,W%&}?rDއđ?bē|0Uλ_,e pyf [z5k[l }fB𬫫+++}!''HLiZKKKmmmkkkJJJyy?\{=f|2Un/9991OpYYYWŪ(1Q֮]yՍ󔮮G}4>˿=+V\mmm1Onnn>Ӧmooo/\lwh*oȬiZGGGZZ!GlޙA#gAXbkVx)6ݪrI9 PAӖiZJr2|UӘgkꚛ  /9X f_~)/ DLNN]Q$ ---ee?Mw~W"'*Ats_ó>WVV󔬬,8%iVS}^fXQQq-w߳DP#LQs47OW\|߀/ mY|i ^~sfWÓ^o@wwϨNommMMM= k{Kmo1"bvd>kcՕ;=~DΧ0"VUUFĤ!h rUTT/?)ϫ8o|:"JKKKMM &HNN?e,yjE4G4jkkeey6[>3}[dh;o[,L+= OQYeyr7{/)Zt]=/;;hq 333- Bns|s2} 555?u댲90T* OOOha'@(bcw\:`///lnn64<xO.Ykl,ecwݗvcR^DxUx~QcT|3nC188p8~Ͼm}7]`{䑧*tzzzCCÑG.5(`=s{{{qqou ^k]%R욀c'ܳW q HNN?C6߿)dREqwee D7t[$4#Ay_@}C"uwwi_rKqFU%X}|hH,e`E[q$'',|>%FF y'X,1=̂]礤(UO?Ծλ>)]]]?!'f֮][^^~8 Z6-{򀮮O?lu^HIj%J< ??fa }P)Qx2f5g0%DΙ7nn{mdHMD9x}wlm}o 5M<)ا_1bF*z,^ph]Ng/::;;gx2:ͭs.fCoox׿ 4`ƀs|4Cmq̮:缕Ͻځ%)$GCz5-$6iTE|4E0MCCT U4 Uq>..qjjjj 4 nn^-'4M35K8k=LQ("FzDP%5#1&-++{饗!FdE\F'ӯ40"൵7psnnOnn.,_~ZnN1d'ՕGkMhYK.WzEv)Ǩn+ҭ"_-N[/-*xQpy O='%%x>h!ͦ®S G!|+z{vygggii?_*|5$$$Agj766Wp : bs-|JEaG4_H[,l( !c@^fsSWWWaaAV9"qlSKO2aʭٽW\~q__uG^uM͌15D~]_)/w |^z郀TNH{}$:댍6`Ȧvpֶܲ&MyNl 6sD\7p*ͭ]C2a90(fR%%%i0)5-#<^^sI̠-??磌ƍ,?|7|S^^xb>qPaħ*iwww~~W_}u/'Llqvttv--lH1Ml˾vͬV ;0DǙ։^U=#d>k3gs8gsnX୷ט*"dӹpnoV>)rRVVcOt~!j0ճ%$$,ލvK6TT8 !!3 ,H0 N\qmmm%%%/ʪ -.) n~xFuR)m 9sfloo~Ma1FTrEEEs/((>1k|/`XDow2d;qٹo~"x6x?H!p)(pݽAۈ$Ac@fL@z]?Pu`xA.%''3sV";^IH]w[KKK$W%-- :ñ'}DOJ8---2f;K_\]exc0̸0@nvz~By T`KL6i]|YZZ/bM}{k~vAB֬r g֑%DHC4s? iL,YwݙrnmcÁɨ~a~]Pr(~/++Da~z<2Ĺ4slkkgdnEQ.fbag@zz;묛6[QDtuygV1Ԟ~eGFFFFCCQG9kV1뿌1">}nՕeƲ߯(kJf\qEQw/yFFF A,Urio`+'b"s$9ZSHΜb=`nb(?q;.'л¹s|E;s;AVkg=ۈyVOOxFD_|R]x?'P4MңO8؎!'4[Cl3*au:SSc3oxlqGDv8f9e^<}}233L%L<\dCo jliӦm@䄢(gAzPlqvN96)݇vH}}}~~I%O>]y< NA"J)ޕ\Jw%ջꛒ]Iߦc7?Vcs6Y\-dP` 0 #%#lHSZƻ^v9EEE.R]{.jULmZ +I%/zdu8^y_X{b-7Ǹ 9A @˼)&TޥxXh LW^}M$6Cv]./߸*M,#N=?$'''& "MSW?ɑab |ُBÈƘ``IƐC{|UUUuu57awN+x_;0ċbHf瑻;43rA=[rkK;'Alca<kߪ 缣~C4##HP3ܥebGxsCUe\{/,7'gmv\w,ޮkdE RUs/~f}GĄ;t I;mqII1vZ[rrWoK"R~c=|##;;e37`SeȾNZvNeeeMM ƌӳ}nNI ׇ9{YUUY]]*=¼;v1KSU 6n@kkP-)))77sjkCe'mTrcv}©w LՑie g/--]Z9)q8窪jcƖ%$t:bJ~Autt̚5kJq(Ħ̘?c(K)-[t}f{ڬ\f$\`.QHE%+jicH> HoYKDhsOd f*i}fW;<} &!f.ٻ+.q8fCUULNN>[ͳ-]f ; ̸Q\\8=xYQw3eE(w}:TN0~-U999f?SSSN~ʛ؎86CĦ}ֹW>;;dg8ũVS+"q9sfwwwi=)8i6S$" ?hYPP`FKJJ?ι2LAqʊkVUU'oAq'9۶~;)L"#CxY|~0w0 蛑ܡIjAgmI!=rͯjG<_}%%@SM?F7ƿȚwOd)0n]"D3YFn4)(`Z٧=2ׯjBFl`N"n/Zp8F" ]82# 0>pgfxg~M"nsvsLfE8%쾣o6@*}obir0I>3?ۛDG% $wܱn9 &~} YsW^rAWW[}ߝW^d~Z---G!2{f'`fQ?>` Ŵ?sԬ5hrPe`Y|?RM< AV5%%%y 8Qn9 GQ&IѿhYb$ "Ц+^^k mo4Yi@w)577\OdE4|H(S6_(5d8xEvd( L0=k[rsNADr5϶Z:lquc~ q00xs]DhQLy;9\m,>0_ PE؈mS 9W[lVrAKf"9믿@eOu ojK_t3nwqq+vu 7Isnz?u{C.+扃B(^';(;7i19|Eֱ; q ddd``^$"nM&'imvu_xXҴIrg<ԁ%<੧HO7TU|fL|W%s"j{ŗ2*LD#'0$3ON2!N09O(s7rqJJgֻ.[~M'I1˖?L3:;; ?㋯)} ބRy34;`hy uORMGӴ򢢢ꃃo|ŕצ]uґ3'69@@BD$| &-?`Xtm4>b-Pݪr_|ERR]'l9Ф95%¸@z͚~'O2{ύϘf33r9G}byW͙L3}R5c ۥ!$(O1–xᗯX"%myws涵VO6VT߿h _a&9V摝xiN( FED8d[NB`ÞdsYUUS,4QI·],t l·{K 67.[ɻwBt_q͝]vo4l44 1MCMC@ʙ1M@!G~q V*rX*,3\}{}[ly[[[aa!kkk >줚FYLn;+ C;wy"1a,X43#"wt֛0LOE9<'D -> k՗TWW̚ FEkk Ӷā-s*~ӯ}J0 ݭ{7{B 6 s{ H>i=U0i^|^x w)Nt\lSUU އTq VGzY* ӝNŇϝh5#t#qKK\z83HIIq:-l80N{&0:ę't,,45:;;餺a 2YXs VWW{)U_j!2 "w`eg•pqǸ\.3>VQQq}m,M* *̷- ))iLhdFF:}~ vjfck;cغv Ec]j%B LA)+.Z;\b+y3JIxdl/+**2L.tI^ESiϘ1v'$$a(}Zې=6uF=\t:ͤuu]vy EO9V]sygg+VUį 30Fhۧ4Q9%$+(sMMMB X( XRwOp~Ղŭ0 KNdߠȼ>:ݲh}<=$rrr抩g<fT}aU\\4oA(3';7!!16YSŐ׋˸sMI{BGz-%NYclY477Dt:KJJvضb"r;oySSIp:Ţ@ q0w;kjjdMjTPUovɟ@`ы_5cp8}W?xU&!R5Kao+^ mns}"pC ֙0edd444.]z̬%Ȥh~>I7^I ݿ.3>,Sڎ3guuuO9( LJv ZbXz{{`g`̙]tכIzlvƹL "zl?~{Cց jꊊ K8${vzzzʞ߿T{SpNJ \Dm\vn^^^WW׵.+-smf8lpsB$5k[I Q,vvZp@"ý,?HZ'X zכocYP sY2tk-BDe-﷟LKKm Šud$I?<ׇܬ޻n>^(pf&jvkE(9`}׮]>LcFzvetwwpy7xk+L-٤htfʼyꜣuq\wFI1 % 9Y9#`cLĬ.XHG(0 9ie}pkkkaaT֖4Ҳjnr"UUE$Qa(++ ) > ΢B @XZȮ[uYooPιb>#u0 UcޓO\涊4 v8h5myt:GǍrrrZ[[=w_6(%qHY m2k$ys+~xb7x%D²AvGAC f,ѵP!a%hnsnjVk[\qՍmj / wغO_:kTeGp̴y2M:D2F=LIa-Rn媿)cFf} 9y @ b~@ĉDD:32GGGǚZ4'+$"k_t5n:DjjjGGǒY5+c22WQ&j&=H^{ni N:[nxsa3㜲٩;:5foHl+VjWXW/hϼlͯNdA!{ߝEؿw&EN#ED̿ͮYإ=HMMu:srs((HN;myIIIccc̳z{{.]bեXajYl?j_{l3{mNfX;dUZZZW].o[oXEz+[4r"U̴)))119w}_n"K{IN:J9䐃^xE3 {''iu%VxEy#?R0? 9[[ŗk<~m6kyY}s[Ro;j( \.5A0#U[z^e&:zOQC Liqh.>l /12HLLt:m6[+"gYW]u577iժ+30,g͐sJ,vMtlyyy}}}o~ MKHm]~k׮5#{n`I+SX>*63&zTƃ7Ҍ Zs:#"g:ㄬ I|vv`d:D[m & j*rCggef&H/xq}rN[F[ܚRiE`iܜnsފvfQ&>(l-@rrP*\&V^raEseV\8}ԁ8eRtD`L+++SRM'[~RΑ;a6JgN\~_d wcSMut:"rV^[rC\x ռޫ.bXwf&p(-5ȏ98 [}leoooVVfM\!xRzr˪o1N^Q#>$DP5LKю=@2w>E͟ "KM/=? ;+@(ňx?N^@l9XN:7x3;;ؙ;*C l(..Fv6mT 7mW;;;!qJbf/߳Ξ={```HZZhɢ0̌>==9~Kg轧l:Ԝܜm Р04ŀeiSz&VYkjjLtdfu8wv=hgilB an2XHx'AS 742}2rjk6ަjD{I_TT425U䀦q쌙|B$~gϝ˻ݹKKK@ =iwwO]s"U%yvGz߯L>N:n<+n枙ڄ)8b>2+ d@2 |E "C@q(@Hۂ0BtDY4:F-.$r 2}ҫ3?33snJ.632]#5v#2"2Ǟxet@ZZZOO^{s%'ۉ'@,௡ܸ1 j26y"ZDv2c Buk~GLxdgL'4"D$AC`!ѐ;FBBO&9ʃ2ʷ@~J yY."+" "BV7NcnPCkXUґR "$  RaPZgj ~@/u%^/0CPYָ k-Вj%ŸhÙE3' #:u`z5)tnTeDPa~(xj?)1F mShΑaPN~u5NbAAA ͯrGQBz`ZψMDƫFD&`cا@EwjYxia+)pΐH2@뭫s[, HO.`(2ɥj(1jʐQp ~b =0-:? z! 55l!? ];H8p hG Á$(и0@}  ].MjPPL(M kd/ ۯZx 4Eeجd,vDjZ7@D IQxذtBd9HE&"" ' r T!p 3,3#!" jFLԄMFh|34)B"fF9©yDD(qH!9!qDN@VPqch|Cx[ <=HQdLqM`z. ^CyH$8F9G 4# \qGƕ cc N^S.s*Kh D@6!֐" $cCmm1G6O lBGos7HCRaG: lKŇl=CitQx ŹL<نr$w>B8K h *̃B }P7)HC ֙ &#p0$$$$$ @BBBBB2 J؝ ^:I1NԘLD@X"PBZ4*@bDEH @B@` &IĂS0W(_q^/P ;Arп3"IwN" Y`^#W DERKG!~ӣנ{ pMQV F-he538q$cp@YPhН8gaa%z^"Bj' clLwda(jU0cDʣ\ C'$#{`hO@p60z ޸^8;/, s_G4pAB 6} ES  \%p:AcH/`)n{."mCD1Ãt+b" #+ (C0("rQ;r`=W E OT ƘP~ C' Pn 8t!8E`CczO*09iláٸFF O2=D+1_BV(<Ɩyx07\0O4\ghvu)X7W2IcSNð-t2}QD1VF 3)jnњ(Fq t!:G!FA( Qē9Z1["E^"p=boSg8'E0fS(+X'"ThЈHߠ [J[ ib_| *+**Cee%ܵ.*a9@-.P?EM*W8%'b*i^'ݽ˭'y|#׈4tlj9$K >EY(da{o -&Q7Ltp `Pa0&?qUv"dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!!dHHHHHH !!!!d XdŢ$&&ʇ(!!!a(J0M<|%M4@@$ζXHHHHl00ꭖ?"r F$$$$66> UX aAj@*\x֟oߎ`]@u%~b^~ CY%tEXtdate:create2019-03-13T12:57:06-06:00'%tEXtdate:modify2019-03-13T12:57:06-06:00VM6V tEXtdc:formatapplication/postscriptTr' tEXtillustrator:StartupProfilePrintMlj$tEXtpdf:ProducerAdobe PDF library 10.01?|$tEXtps:HiResBoundingBox115x148+0+0b_tEXtps:LevelAdobe-3.0 EPSF-3.0 .@7tEXtps:SpotColor-0procset Adobe_CoolType_Utility_T42 1.0 0mtEXtstRef:originalDocumentIDuuid:5D20892493BFDB11914A8590D31508C8htEXtstRef:renditionClassproof:pdf."&(tEXtxmp:CreateDate2018-10-30T12:07:03-04:00K/tEXtxmp:CreatorToolAdobe Illustrator CS6 (Windows)y*tEXtxmp:MetadataDate2018-10-30T12:07:04-04:00_(tEXtxmp:ModifyDate2018-10-30T12:07:04-04:00:|9tEXtxmpMM:DocumentIDxmp.did:1DDA8BBB5DDCE811B208EDCE842B44C1￀`9tEXtxmpMM:InstanceIDxmp.iid:1DDA8BBB5DDCE811B208EDCE842B44C1R@>tEXtxmpMM:OriginalDocumentIDuuid:5D20892493BFDB11914A8590D31508C8&tEXtxmpMM:RenditionClassproof:pdf߉ tEXtxmpTPg:HasVisibleOverprintFalsey#tEXtxmpTPg:HasVisibleTransparencyFalse:\=tEXtxmpTPg:NPages1Ɂ۲IENDB`charliecloud-0.26/doc/see_also.rst000066400000000000000000000001401417231051300171600ustar00rootroot00000000000000See also ======== charliecloud(7) Full documentation at: charliecloud-0.26/doc/tutorial.rst000066400000000000000000001206611417231051300172440ustar00rootroot00000000000000Tutorial ******** This tutorial will teach you how to create and run Charliecloud images, using both examples included with the source code as well as new ones you create from scratch. This tutorial assumes that: (a) Charliecloud is in your path, including Charliecloud's fully unprivileged image builder :code:`ch-image` and (b) the Charliecloud source code is available at :code:`/usr/local/src/charliecloud`. Optionally, (c) :code:`ch-run` is linked with SquashFUSE to provide internal SquashFS image mounting. (If you wish to use Docker to build images, see the :ref:`FAQ `.) .. contents:: :depth: 2 :local: .. note:: Shell sessions throughout this documentation will use the prompt :code:`$` to indicate commands executed natively on the host and :code:`>` for commands executed in a container. 90 seconds to Charliecloud ========================== This section is for the impatient. It shows you how to quickly build and run a "hello world" Charliecloud container. If you like what you see, then proceed with the rest of the tutorial to understand what is happening and how to use Charliecloud for your own applications. The preferred workflow uses our internal SquashFS mounting code. Your sysadmin should be able to tell you if this is linked in. :: $ cd /usr/local/share/doc/charliecloud/examples/hello $ ch-image build --force . inferred image name: hello [...] grown in 4 instructions: hello $ ch-convert hello /var/tmp/hello.sqfs input: ch-image hello output: squash /var/tmp/hello.sqfs packing ... Parallel mksquashfs: Using 8 processors Creating 4.0 filesystem on /var/tmp/hello.sqfs, block size 65536. [=============================================|] 10411/10411 100% [...] done $ ch-run /var/tmp/hello.sqfs -- echo "I'm in a container" I'm in a container If not, you can create image in plain directory format instead:: $ cd /usr/local/share/doc/charliecloud/examples/hello $ ch-image build --force . inferred image name: hello [...] grown in 4 instructions: hello $ ch-convert hello /var/tmp/hello input: ch-image hello output: dir /var/tmp/hello exporting ... done $ ch-run /var/tmp/hello -- echo "I'm in a container" I'm in a container Getting help ============ All the executables have decent help and can tell you what version of Charliecloud you have (if not, please report a bug). For example:: $ ch-run --help Usage: ch-run [OPTION...] NEWROOT CMD [ARG...] Run a command in a Charliecloud container. [...] $ ch-run --version 0.26 A description of all commands is also collected later in this documentation; see :doc:`command-usage`. In addition, each executable has a man page. Your first user-defined software stack ====================================== In this section, we will create and run a simple "hello, world" image. This uses the :code:`hello` example in the Charliecloud source code. Start with:: $ cd examples/hello Defining your UDSS ------------------ You must first write a Dockerfile that describes the image you would like; consult the `Dockerfile documentation `_ for details on how to do this. Note that run-time functionality such as :code:`ENTRYPOINT` is not supported. We will use the following simple Dockerfile: .. literalinclude:: ../examples/hello/Dockerfile :language: docker This creates a minimal CentOS 8 image with :code:`ssh` installed. We will encounter more complex Dockerfiles later in this tutorial. Build Charliecloud image ------------------------ The three arguments here are the :code:`ch-image` subcommand :code:`build`, the option to enable unprivileged build workarounds :code:`--force`, and the context directory :code:`.`, which in this case is the current directory. :: $ ch-image build --force . inferred image name: hello 2 FROM centos:8 will use --force: rhel8: CentOS/RHEL 8+ [...] 7 COPY ['.'] -> 'hello' 9 RUN ['/bin/sh', '-c', 'touch /usr/bin/ch-ssh'] --force: init OK & modified 1 RUN instructions grown in 4 instructions: hello .. note:: :code:`ch-image` prints information about the build process while in progress. While not shown above, it uses yellow for this chatter, while build command output remains in the default color (e.g., white). This image and the :code:`centos:8` base image used to build it are now visible in Charliecloud's builder storage: :: $ ch-image list centos:8 hello Sharing images -------------- Charliecloud images in internal storage can be converted to multiple formats via :code:`ch-convert`, e.g. SquashFS (if SquashFUSE is installed):: $ ch-convert hello /var/tmp/hello.sqfs input: ch-image hello output: squash /var/tmp/hello.sqfs packing ... [...] done $ ls -l /var/tmp/hello.sqfs -rw-rw-r-- 1 heasterday heasterday 83288064 Nov 15 12:07 /var/tmp/hello.sqfs Tarball:: $ ch-convert hello /var/tmp/hello.tar.gz input: ch-image hello output: tar /var/tmp/hello.tar.gz exporting ... done $ ls -l /var/tmp/hello.tar.gz -rw-rw-r-- 1 heasterday heasterday 86122450 Nov 15 15:23 /var/tmp/hello.tar.gz Directory:: $ ch-convert hello /var/tmp/hello input: ch-image hello output: dir /var/tmp/hello exporting ... done $ ls /var/tmp/hello bin dev hello lib lost+found mnt proc run srv tmp var ch etc home lib64 media opt root sbin sys usr :code:`ch-convert` can also convert images between any two supported formats, e.g. SquashFS to tarball:: $ ch-convert hello.sqfs hello.tar.gz input: tar hello.tar.gz output: squash hello.sqfs unpacking ... [...] done Tarball to directory:: $ ch-convert /var/tmp/hello.tar.gz /var/tmp/hello input: tar /var/tmp/hello.tar.gz output: dir /var/tmp/hello unpacking ... [...] done Charliecloud also supports "pushing" images from its internal storage to a registry using :code:`ch-image push` and "pulling" images in the reverse direction with :code:`ch-image pull`. Distributing images ------------------- Thus far, the workflow has taken place on the build system. The next step is to copy the built image to the run system. This can use any appropriate method for moving files: :code:`scp`, :code:`rsync`, something integrated with the scheduler, etc. (The purpose of the tarball image format is to put images in a single file that's easy to move around with traditional UNIX commands.) If the build and run systems are the same, then no copy is needed. This is a typical use case for development and testing. If you are using the SquashFS workflow, copy the :code:`.sqfs` file you created above to the run system; otherwise, copy the :code:`.tar.gz`, then unpack it on the run system using :code:`ch-convert` as above. .. warning:: Generally, you should avoid directory-format images on shared filesystems such as NFS and Lustre, in favor of local storage such as :code:`tmpfs` and local hard disks. This will yield better performance for you and anyone else on the shared filesystem. In contrast, SquashFS images should work fine on shared filesystems. Running images -------------- We are now ready to run programs inside a Charliecloud container. This is done with the :code:`ch-run` command:: $ ch-run /var/tmp/hello.sqfs -- echo hello hello or:: $ ch-run /var/tmp/hello -- echo hello hello .. note:: You can run perfectly well out of :code:`/tmp`, but because it is bind-mounted automatically, the image root will then appear in multiple locations in the container's filesystem tree. This can cause confusion for both users and programs. Symbolic links in :code:`/proc` tell us the current namespaces, which are identified by long ID numbers:: $ ls -l /proc/self/ns total 0 lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 11:24 ipc -> ipc:[4026531839] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 11:24 mnt -> mnt:[4026531840] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 11:24 net -> net:[4026531969] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 11:24 pid -> pid:[4026531836] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 11:24 user -> user:[4026531837] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 11:24 uts -> uts:[4026531838] $ ch-run /var/tmp/hello -- ls -l /proc/self/ns total 0 lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 17:34 ipc -> ipc:[4026531839] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 17:34 mnt -> mnt:[4026532257] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 17:34 net -> net:[4026531969] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 17:34 pid -> pid:[4026531836] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 17:34 user -> user:[4026532256] lrwxrwxrwx 1 reidpr reidpr 0 Sep 28 17:34 uts -> uts:[4026531838] Notice that the container has different mount (:code:`mnt`) and user (:code:`user`) namespaces, but the rest of the namespaces are shared with the host. This highlights Charliecloud's focus on functionality (make your UDSS run), rather than isolation (protect the host from your UDSS). Normally, each invocation of :code:`ch-run` creates a new container, so if you have multiple simultaneous invocations, they will not share containers. In some cases this can cause problems with MPI programs. However, there is an option :code:`--join` that can solve them; see the :ref:`FAQ ` for details. .. note:: The :code:`--` in the :code:`ch-run` command line is a standard argument that separates options from non-option arguments. Without it, :code:`ch-run` would try (and fail) to interpret :code:`ls`’s :code:`-l` argument. These IDs are available both in the symlink target as well as its inode number:: $ stat -L --format='%i' /proc/self/ns/user 4026531837 $ ch-run /var/tmp/hello -- stat -L --format='%i' /proc/self/ns/user 4026532256 You can also run interactive commands, such as a shell:: $ ch-run /var/tmp/hello.sqfs -- /bin/bash > stat -L --format='%i' /proc/self/ns/user 4026532256 > exit Be aware that wildcards in the :code:`ch-run` command are interpreted by the host, not the container, unless protected. One workaround is to use a sub-shell. For example:: $ ls /usr/bin/oldfind ls: cannot access '/usr/bin/oldfind': No such file or directory $ ch-run /var/tmp/hello.sqfs -- ls /usr/bin/oldfind /usr/bin/oldfind $ ls /usr/bin/oldf* ls: cannot access '/usr/bin/oldf*': No such file or directory $ ch-run /var/tmp/hello.sqfs -- ls /usr/bin/oldf* ls: cannot access /usr/bin/oldf*: No such file or directory $ ch-run /var/tmp/hello.sqfs -- sh -c 'ls /usr/bin/oldf*' /usr/bin/oldfind You have now successfully run commands within a single-node Charliecloud container. Next, we explore how Charliecloud accesses host resources. Interacting with the host ========================= Charliecloud is not an isolation layer, so containers have full access to host resources, with a few quirks. This section demonstrates how this works. Filesystems ----------- Charliecloud makes host directories available inside the container using bind mounts, which is somewhat like a hard link in that it causes a file or directory to appear in multiple places in the filesystem tree, but it is a property of the running kernel rather than the filesystem. Several host directories are always bind-mounted into the container. These include system directories such as :code:`/dev`, :code:`/proc`, and :code:`/sys`; :code:`/tmp`; Charliecloud's :code:`ch-ssh` command in :code:`/usr/bin`; and the invoking user's home directory (for dotfiles), unless :code:`--no-home` is specified. Charliecloud uses recursive bind mounts, so for example if the host has a variety of sub-filesystems under :code:`/sys`, as Ubuntu does, these will be available in the container as well. In addition to the default bind mounts, arbitrary user-specified directories can be added using the :code:`--bind` or :code:`-b` switch. By default, mounts use the same path as provided from the host. In the case of directory images, which are writeable, the target mount directory will be automatically created before the container is started:: $ mkdir /var/tmp/foo0 $ echo hello > /var/tmp/foo0/bar $ mkdir /var/tmp/foo1 $ echo world > /var/tmp/foo1/bar $ ch-run -b /var/tmp/foo0 -b /var/tmp/foo1 /var/tmp/hello -- bash > cat /var/tmp/foo0/bar hello > cat /var/tmp/foo1/bar world However, as SquashFS filesystems are read-only, in this case you must provide a destination that already exists, like those created under :code:`/mnt`:: $ mkdir /var/tmp/foo0 $ echo hello > /var/tmp/foo0/bar $ mkdir /var/tmp/foo1 $ echo world > /var/tmp/foo1/bar $ ch-run -b /var/tmp/foo0 -b /var/tmp/foo1 /var/tmp/hello -- bash ch-run[1184427]: error: can't mkdir: /var/tmp/hello/var/tmp/foo0: Read-only file system (ch_misc.c:142 30) $ ch-run -b /var/tmp/foo0:/mnt/0 -b /var/tmp/foo1:/mnt/1 /var/tmp/hello -- bash > ls /mnt 0 1 2 3 4 5 6 7 8 9 > cat /mnt/0/bar hello > cat /mnt/1/bar world Network ------- Charliecloud containers share the host's network namespace, so most network things should be the same. However, SSH is not aware of Charliecloud containers. If you SSH to a node where Charliecloud is installed, you will get a shell on the host, not in a container, even if :code:`ssh` was initiated from a container:: $ stat -L --format='%i' /proc/self/ns/user 4026531837 $ ssh localhost stat -L --format='%i' /proc/self/ns/user 4026531837 $ ch-run /var/tmp/hello.sqfs -- /bin/bash > stat -L --format='%i' /proc/self/ns/user 4026532256 > ssh localhost stat -L --format='%i' /proc/self/ns/user 4026531837 There are several ways to SSH to a remote node and run commands inside a container. The simplest is to manually invoke :code:`ch-run` in the :code:`ssh` command:: $ ssh localhost ch-run /var/tmp/hello.sqfs -- stat -L --format='%i' /proc/self/ns/user 4026532256 .. note:: Recall that each :code:`ch-run` invocation creates a new container. That is, the :code:`ssh` command above has not entered an existing user namespace :code:`’2256`; rather, it has re-used the namespace ID :code:`’2256`. Another is to use the :code:`ch-ssh` wrapper program, which adds :code:`ch-run` to the :code:`ssh` command implicitly. It takes the :code:`ch-run` arguments from the environment variable :code:`CH_RUN_ARGS`, making it mostly a drop-in replacement for :code:`ssh`. For example:: $ export CH_RUN_ARGS="/var/tmp/hello.sqfs --" $ ch-ssh localhost stat -L --format='%i' /proc/self/ns/user 4026532256 $ ch-ssh -t localhost /bin/bash > stat -L --format='%i' /proc/self/ns/user 4026532256 :code:`ch-ssh` is available inside containers as well (in :code:`/usr/bin` via bind-mount):: $ export CH_RUN_ARGS="/var/tmp/hello.sqfs --" $ ch-run /var/tmp/hello.sqfs -- /bin/bash > stat -L --format='%i' /proc/self/ns/user 4026532256 > ch-ssh localhost stat -L --format='%i' /proc/self/ns/user 4026532258 This also demonstrates that :code:`ch-run` does not alter most environment variables. .. warning:: 1. :code:`CH_RUN_ARGS` is interpreted very simply; the sole delimiter is spaces. It is not shell syntax. In particular, quotes and backslashes are not interpreted. 2. Argument :code:`-t` is required for SSH to allocate a pseudo-TTY and thus convince your shell to be interactive. In the case of Bash, otherwise you'll get a shell that accepts commands but doesn't print prompts, among other other issues. (`Issue #2 `_.) A third may be to edit one's shell initialization scripts to check the command line and :code:`exec(1)` :code:`ch-run` if appropriate. This is brittle but avoids wrapping :code:`ssh` or altering its command line. User and group IDs ------------------ Unlike Docker and some other container systems, Charliecloud tries to make the container's users and groups look the same as the host's. (This is accomplished by bind-mounting a custom :code:`/etc/passwd` and :code:`/etc/group` into the container.) For example:: $ id -u 901 $ whoami reidpr $ ch-run /var/tmp/hello.sqfs -- bash > id -u 901 > whoami reidpr More specifically, the user namespace, when created without privileges as Charliecloud does, lets you map any container UID to your host UID. :code:`ch-run` implements this with the :code:`--uid` switch. So, for example, you can tell Charliecloud you want to be root, and it will tell you that you're root:: $ ch-run --uid 0 /var/tmp/hello.sqfs -- bash > id -u 0 > whoami root But, this doesn't get you anything useful, because the container UID is mapped back to your UID on the host before permission checks are applied:: > dd if=/dev/mem of=/tmp/pwned dd: failed to open '/dev/mem': Permission denied This mapping also affects how users are displayed. For example, if a file is owned by you, your host UID will be mapped to your container UID, which is then looked up in :code:`/etc/passwd` to determine the display name. In typical usage without :code:`--uid`, this mapping is a no-op, so everything looks normal:: $ ls -nd ~ drwxr-xr-x 87 901 901 4096 Sep 28 12:12 /home/reidpr $ ls -ld ~ drwxr-xr-x 87 reidpr reidpr 4096 Sep 28 12:12 /home/reidpr $ ch-run /var/tmp/hello.sqfs -- bash > ls -nd ~ drwxr-xr-x 87 901 901 4096 Sep 28 18:12 /home/reidpr > ls -ld ~ drwxr-xr-x 87 reidpr reidpr 4096 Sep 28 18:12 /home/reidpr But if :code:`--uid` is provided, things can seem odd. For example:: $ ch-run --uid 0 /var/tmp/hello.sqfs -- bash > ls -nd /home/reidpr drwxr-xr-x 87 0 901 4096 Sep 28 18:12 /home/reidpr > ls -ld /home/reidpr drwxr-xr-x 87 root reidpr 4096 Sep 28 18:12 /home/reidpr This UID mapping can contain only one pair: an arbitrary container UID to your effective UID on the host. Thus, all other users are unmapped, and they show up as :code:`nobody`:: $ ls -n /tmp/foo -rw-rw---- 1 902 902 0 Sep 28 15:40 /tmp/foo $ ls -l /tmp/foo -rw-rw---- 1 sig sig 0 Sep 28 15:40 /tmp/foo $ ch-run /var/tmp/hello.sqfs -- bash > ls -n /tmp/foo -rw-rw---- 1 65534 65534 843 Sep 28 21:40 /tmp/foo > ls -l /tmp/foo -rw-rw---- 1 nobody nogroup 843 Sep 28 21:40 /tmp/foo User namespaces have a similar mapping for GIDs, with the same limitation --- exactly one arbitrary container GID maps to your effective *primary* GID. This can lead to some strange-looking results, because only one of your GIDs can be mapped in any given container. All the rest become :code:`nogroup`:: $ id uid=901(reidpr) gid=901(reidpr) groups=901(reidpr),903(nerds),904(losers) $ ch-run /var/tmp/hello.sqfs -- id uid=901(reidpr) gid=901(reidpr) groups=901(reidpr),65534(nogroup) $ ch-run --gid 903 /var/tmp/hello.sqfs -- id uid=901(reidpr) gid=903(nerds) groups=903(nerds),65534(nogroup) However, this doesn't affect access. The container process retains the same GIDs from the host perspective, and as always, the host IDs are what control access:: $ ls -l /tmp/primary /tmp/supplemental -rw-rw---- 1 sig reidpr 0 Sep 28 15:47 /tmp/primary -rw-rw---- 1 sig nerds 0 Sep 28 15:48 /tmp/supplemental $ ch-run /var/tmp/hello.sqfs -- bash > cat /tmp/primary > /dev/null > cat /tmp/supplemental > /dev/null One area where functionality *is* reduced is that :code:`chgrp(1)` becomes useless. Using an unmapped group or :code:`nogroup` fails, and using a mapped group is a no-op because it's mapped back to the host GID:: $ ls -l /tmp/bar rw-rw---- 1 reidpr reidpr 0 Sep 28 16:12 /tmp/bar $ ch-run /var/tmp/hello.sqfs -- chgrp nerds /tmp/bar chgrp: changing group of '/tmp/bar': Invalid argument $ ch-run /var/tmp/hello.sqfs -- chgrp nogroup /tmp/bar chgrp: changing group of '/tmp/bar': Invalid argument $ ch-run --gid 903 /var/tmp/hello.sqfs -- chgrp nerds /tmp/bar $ ls -l /tmp/bar -rw-rw---- 1 reidpr reidpr 0 Sep 28 16:12 /tmp/bar Workarounds include :code:`chgrp(1)` on the host or fastidious use of setgid directories:: $ mkdir /tmp/baz $ chgrp nerds /tmp/baz $ chmod 2770 /tmp/baz $ ls -ld /tmp/baz drwxrws--- 2 reidpr nerds 40 Sep 28 16:19 /tmp/baz $ ch-run /var/tmp/hello.sqfs -- touch /tmp/baz/foo $ ls -l /tmp/baz/foo -rw-rw---- 1 reidpr nerds 0 Sep 28 16:21 /tmp/baz/foo This concludes our discussion of how a Charliecloud container interacts with its host and principal Charliecloud quirks. We next move on to installing software. Installing your own software ============================ This section covers four situations for making software available inside a Charliecloud container: 1. Third-party software installed into the image using a package manager. 2. Third-party software compiled from source into the image. 3. Your software installed into the image. 4. Your software stored on the host but compiled in the container. Many of Docker's `Best practices for writing Dockerfiles `_ apply to Charliecloud images as well, so you should be familiar with that document. .. note:: Maybe you don't have to install the software at all. Is there already a trustworthy image on Docker Hub you can use as a base? Third-party software via package manager ---------------------------------------- This approach is the simplest and fastest way to install stuff in your image. The :code:`examples/hello` Dockerfile also seen above does this to install the package :code:`openssh-client`: .. literalinclude:: ../examples/hello/Dockerfile :language: docker :lines: 3-7 You can use distribution package managers such as :code:`dnf`, as demonstrated above, or others, such as :code:`pip` for Python packages. Be aware that the software will be downloaded anew each time you build the image, unless you add an HTTP cache, which is out of scope of this tutorial. Third-party software compiled from source ----------------------------------------- Under this method, one uses :code:`RUN` commands to fetch the desired software using :code:`curl` or :code:`wget`, compile it, and install. Our example does this with two chained Dockerfiles. First, we build a basic CentOS image (:code:`examples/Dockerfile.centos8`): .. literalinclude:: ../examples/Dockerfile.centos8 :language: docker :lines: 2- Then, we add OpenMPI with :code:`examples/Dockerfile.openmpi`. This is a complex Dockerfile that compiles several dependencies in addition to OpenMPI. For the purposes of this tutorial, you can skip most of it, but we felt it would be useful to show a real example. .. literalinclude:: ../examples/Dockerfile.openmpi :language: docker :lines: 2- So what is going on here? 1. Use the latest CentOS 8 as the base image. 2. Install a basic build system using the OS package manager. 3. For a few dependencies and then OpenMPI itself: 1. Download and untar. Note the use of variables to make adjusting the URL and versions easier, as well as the explanation of why we're not using :code:`dnf`, given that several of these packages are included in CentOS. 2. Build and install OpenMPI. Note the :code:`getconf` trick to guess at an appropriate parallel build. 4. Clean up, in order to reduce the size of layers as well as the resulting Charliecloud image (:code:`rm -Rf`). .. Finally, because it's a container image, you can be less tidy than you might be on a normal system. For example, the above downloads and builds in :code:`/` rather than :code:`/usr/local/src`, and it installs MPI into :code:`/usr` rather than :code:`/usr/local`. Your software stored in the image --------------------------------- This method covers software provided by you that is included in the image. This is recommended when your software is relatively stable or is not easily available to users of your image, for example a library rather than simulation code under active development. The general approach is the same as installing third-party software from source, but you use the :code:`COPY` instruction to transfer files from the host filesystem (rather than the network via HTTP) to the image. For example, :code:`examples/mpihello/Dockerfile.openmpi` uses this approach: .. literalinclude:: ../examples/mpihello/Dockerfile.openmpi :language: docker These Dockerfile instructions: 1. Copy the host directory :code:`examples/mpihello` to the image at path :code:`/hello`. The host path is relative to the *context directory*, which is tarred up and sent to the Docker daemon. Docker builds have no access to the host filesystem outside the context directory. (Unlike the HPC custom, Docker comes from a world without network filesystems. This tar-based approach lets the Docker daemon run on a different node from the client without needing any shared filesystems.) The convention for Charliecloud tests and examples is that the context is the directory containing the Dockerfile in question, and a common pattern, used here, is to copy in the entire context. 2. :code:`cd` to :code:`/hello`. 3. Compile our example. We include :code:`make clean` to remove any leftover build files, since they would be inappropriate inside the container. Once the image is built, we can see the results. (Install the image into :code:`/var/tmp` as outlined above, if you haven't already.) :: $ ch-run /var/tmp/mpihello-openmpi.sqfs -- ls -lh /hello total 32K -rw-rw---- 1 reidpr reidpr 908 Oct 4 15:52 Dockerfile -rw-rw---- 1 reidpr reidpr 157 Aug 5 22:37 Makefile -rw-rw---- 1 reidpr reidpr 1.2K Aug 5 22:37 README -rwxr-x--- 1 reidpr reidpr 9.5K Oct 4 15:58 hello -rw-rw---- 1 reidpr reidpr 1.4K Aug 5 22:37 hello.c -rwxrwx--- 1 reidpr reidpr 441 Aug 5 22:37 test.sh We will revisit this image later. Your software stored on the host -------------------------------- This method leaves your software on the host but compiles it in the image. This is recommended when your software is volatile or each image user needs a different version, for example a simulation code under active development. The general approach is to bind-mount the appropriate directory and then run the build inside the container. We can re-use the :code:`mpihello` image to demonstrate this. :: $ cd examples/mpihello $ ls -l total 20 -rw-rw---- 1 reidpr reidpr 908 Oct 4 09:52 Dockerfile -rw-rw---- 1 reidpr reidpr 1431 Aug 5 16:37 hello.c -rw-rw---- 1 reidpr reidpr 157 Aug 5 16:37 Makefile -rw-rw---- 1 reidpr reidpr 1172 Aug 5 16:37 README $ ch-run -b .:/mnt/0 --cd /mnt/0 /var/tmp/mpihello.sqfs -- make mpicc -std=gnu11 -Wall hello.c -o hello $ ls -l total 32 -rw-rw---- 1 reidpr reidpr 908 Oct 4 09:52 Dockerfile -rwxrwx--- 1 reidpr reidpr 9632 Oct 4 10:43 hello -rw-rw---- 1 reidpr reidpr 1431 Aug 5 16:37 hello.c -rw-rw---- 1 reidpr reidpr 157 Aug 5 16:37 Makefile -rw-rw---- 1 reidpr reidpr 1172 Aug 5 16:37 README A common use case is to leave a container shell open in one terminal for building, and then run using a separate container invoked from a different terminal. Your first single-node, multi-process jobs ========================================== This is an important use case even for large-scale codes, when testing and development happens at small scale but need to use an environment comparable to large-scale runs. This tutorial covers three approaches: 1. Processes are coordinated by the host, i.e., one process per container. 2. Processes are coordinated by the container, i.e., one container with multiple processes, using configuration files from the container. 3. Processes are coordinated by the container using configuration files from the host. In order to test approach 1, you must install OpenMPI 2.1.2 on the host. In our experience, we have had success compiling from source with the same options as in the Dockerfile, but there is probably more nuance to the match than we've discovered. Processes coordinated by host ----------------------------- This approach does the forking and process coordination on the host. Each process is spawned in its own container, and because Charliecloud introduces minimal isolation, they can communicate as if they were running directly on the host. For example, using Slurm :code:`srun` and the :code:`mpihello` example above:: $ stat -L --format='%i' /proc/self/ns/user 4026531837 $ ch-run /var/tmp/mpihello-openmpi.sqfs -- mpirun --version mpirun (Open MPI) 2.1.5 $ srun -n4 ch-run /var/tmp/mpihello-openmpi.sqfs -- /hello/hello 0: init ok cn001, 4 ranks, userns 4026554650 1: init ok cn001, 4 ranks, userns 4026554652 3: init ok cn002, 4 ranks, userns 4026554652 2: init ok cn002, 4 ranks, userns 4026554650 0: send/receive ok 0: finalize ok We recommend this approach because it lets you take advantage of difficult things already done by your site admins, such as configuring Slurm. If you don't have Slurm, you can use :code:`mpirun -np 4` instead of :code:`srun -n4`. However, this requires that the host have a compatible version of OpenMPI installed on the host. Which versions are compatible seems to be a moving target, but having the same versions inside and outside the container *usually* works. Processes coordinated by container ---------------------------------- This approach starts a single container process, which then forks and coordinates the parallel work. The advantage is that this approach is completely independent of the host for dependency configuration and installation; the disadvantage is that it cannot take advantage of host things such as Slurm configuration. For example:: $ ch-run /var/tmp/mpihello-openmpi.sqfs -- mpirun -np 4 /hello/hello 0: init ok cn001, 4 ranks, userns 4026532256 1: init ok cn001, 4 ranks, userns 4026532256 2: init ok cn001, 4 ranks, userns 4026532256 3: init ok cn001, 4 ranks, userns 4026532256 0: send/receive ok 0: finalize ok Note that in this case, we use :code:`mpirun` rather than :code:`srun` because the Slurm client programs are not installed inside the container, and we don't want the host's Slurm coordinating processes anyway. Your first multi-node jobs ========================== This section assumes that you are using a Slurm cluster and some type of node-local storage. A :code:`tmpfs` will suffice, and we use :code:`/var/tmp` for this tutorial. (Using :code:`/tmp` often works but can cause confusion because it's shared by the container and host, yielding cycles in the directory tree.) We cover three cases: 1. The MPI hello world example above, run interactively, with the host coordinating. 2. Same, non-interactive. 3. An Apache Spark example, run interactively. 4. Same, non-interactive. We think that container-coordinated MPI jobs will also work, but we haven't worked out how to do this yet. (See `issue #5 `_.) .. note:: The image directory is mounted read-only by default so it can be shared by multiple Charliecloud containers in the same or different jobs. It can be mounted read-write with :code:`ch-run -w` if you are not using SquashFS. .. warning:: The image can reside on most filesystems, but be aware of metadata impact. A non-trivial Charliecloud job may overwhelm a network filesystem, earning you the ire of your sysadmins and colleagues. Interactive MPI hello world --------------------------- First, obtain an interactive allocation of nodes. This tutorial assumes an allocation of 4 nodes (but any number should work) and an interactive shell on one of those nodes. For example:: $ salloc -N4 The next step is to distribute the image to the compute nodes. For SquashFS images, this is handled by the internal mounting process; for tarballs, we run one instance of :code:`ch-convert` on each node using :code:`srun`:: $ srun ch-convert mpihello-openmpi.tar.gz /var/tmp/mpihello-openmpi input: tar mpihello-openmpi.tar.gz output: dir /var/tmp/mpihello-openmpi unpacking ... input: tar mpihello-openmpi.tar.gz output: dir /var/tmp/mpihello-openmpi unpacking ... input: tar mpihello-openmpi.tar.gz output: dir /var/tmp/mpihello-openmpi unpacking ... input: tar mpihello-openmpi.tar.gz output: dir /var/tmp/mpihello-openmpi unpacking ... done done done done We can now activate the image and run our program:: $ srun --cpus-per-task=1 ch-run /var/tmp/mpihello-openmpi.sqfs -- /hello/hello 2: init ok cn001, 64 ranks, userns 4026532567 4: init ok cn001, 64 ranks, userns 4026532571 8: init ok cn001, 64 ranks, userns 4026532579 [...] 45: init ok cn003, 64 ranks, userns 4026532589 17: init ok cn002, 64 ranks, userns 4026532565 55: init ok cn004, 64 ranks, userns 4026532577 0: send/receive ok 0: finalize ok Success! Non-interactive MPI hello world ------------------------------- Production jobs are normally run non-interactively, via submission of a job script that runs when resources are available, placing output into a file. The MPI hello world example includes such a script, :code:`examples/mpi/mpihello/slurm.sh`: .. literalinclude:: ../examples/mpihello/slurm.sh :language: bash Note that this script both unpacks the image and runs it. Submit it with something like:: $ sbatch -N4 slurm.sh ~/mpihello-openmpi.tar.gz /var/tmp 207745 When the job is complete, look at the output:: $ cat slurm-207745.out tarball: /home/reidpr/mpihello-openmpi.tar.gz image: /var/tmp/mpihello-openmpi creating new image /var/tmp/mpihello-openmpi creating new image /var/tmp/mpihello-openmpi [...] /var/tmp/mpihello-openmpi unpacked ok /var/tmp/mpihello-openmpi unpacked ok container: mpirun (Open MPI) 2.1.5 0: init ok cn001.localdomain, 144 ranks, userns 4026554766 37: init ok cn002.localdomain, 144 ranks, userns 4026554800 [...] 96: init ok cn003.localdomain, 144 ranks, userns 4026554803 86: init ok cn003.localdomain, 144 ranks, userns 4026554793 0: send/receive ok 0: finalize ok Success! Interactive Apache Spark ------------------------ This example is in :code:`examples/spark`. Build a tarball or SquashFS and upload it to your cluster. Once you have an interactive job, prepare the image. Recall that for the SquashFS workflow this is handled by the internal mounting process. tarball: :: $ srun ch-convert spark.tar.gz /var/tmp/spark input: tar spark.tar.gz output: dir /var/tmp/spark unpacking ... input: tar spark.tar.gz output: dir /var/tmp/spark unpacking ... done done We need to first create a basic configuration for Spark, as the defaults in the Dockerfile are insufficient. (For real jobs, you'll want to also configure performance parameters such as memory use; see `the documentation `_.) First:: $ mkdir -p ~/sparkconf $ chmod 700 ~/sparkconf We'll want to use the cluster's high-speed network. For this example, we'll find the Spark master's IP manually:: $ ip -o -f inet addr show | cut -d/ -f1 1: lo inet 127.0.0.1 2: eth0 inet 192.168.8.3 8: eth1 inet 10.8.8.3 Your site support can tell you which to use. In this case, we'll use 10.8.8.3. Create some configuration files. Replace :code:`[MYSECRET]` with a string only you know. Edit to match your system; in particular, use local disks instead of :code:`/tmp` if you have them:: $ cat > ~/sparkconf/spark-env.sh SPARK_LOCAL_DIRS=/tmp/spark SPARK_LOG_DIR=/tmp/spark/log SPARK_WORKER_DIR=/tmp/spark SPARK_LOCAL_IP=127.0.0.1 SPARK_MASTER_HOST=10.8.8.3 $ cat > ~/sparkconf/spark-defaults.conf spark.authenticate true spark.authenticate.secret [MYSECRET] We can now start the Spark master:: $ ch-run -b ~/sparkconf /var/tmp/spark.sqfs -- /spark/sbin/start-master.sh Look at the log in :code:`/tmp/spark/log` to see that the master started correctly:: $ tail -7 /tmp/spark/log/*master*.out 17/02/24 22:37:21 INFO Master: Starting Spark master at spark://10.8.8.3:7077 17/02/24 22:37:21 INFO Master: Running Spark version 2.0.2 17/02/24 22:37:22 INFO Utils: Successfully started service 'MasterUI' on port 8080. 17/02/24 22:37:22 INFO MasterWebUI: Bound MasterWebUI to 127.0.0.1, and started at http://127.0.0.1:8080 17/02/24 22:37:22 INFO Utils: Successfully started service on port 6066. 17/02/24 22:37:22 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066 17/02/24 22:37:22 INFO Master: I have been elected leader! New state: ALIVE If you can run a web browser on the node, browse to :code:`http://localhost:8080` for the Spark master web interface. Because this capability varies, the tutorial does not depend on it, but it can be informative. Refresh after each key step below. The Spark workers need to know how to reach the master. This is via a URL; you can get it from the log excerpt above, or consult the web interface. For example:: $ MASTER_URL=spark://10.8.8.3:7077 Next, start one worker on each compute node. In this tutorial, we start the workers using :code:`srun` in a way that prevents any subsequent :code:`srun` invocations from running until the Spark workers exit. For our purposes here, that's OK, but it's a big limitation for some jobs. (See `issue #230 `_.) Alternatives include :code:`pdsh`, which is the approach we use for the Spark tests (:code:`examples/other/spark/test.bats`), or a simple for loop of :code:`ssh` calls. Both of these are also quite clunky and do not scale well. :: $ srun sh -c " ch-run -b ~/sparkconf /var/tmp/spark.sqfs -- \ spark/sbin/start-slave.sh $MASTER_URL \ && sleep infinity" & One of the advantages of Spark is that it's resilient: if a worker becomes unavailable, the computation simply proceeds without it. However, this can mask issues as well. For example, this example will run perfectly fine with just one worker, or all four workers on the same node, which aren't what we want. Check the master log to see that the right number of workers registered:: $ fgrep worker /tmp/spark/log/*master*.out 17/02/24 22:52:24 INFO Master: Registering worker 127.0.0.1:39890 with 16 cores, 187.8 GB RAM 17/02/24 22:52:24 INFO Master: Registering worker 127.0.0.1:44735 with 16 cores, 187.8 GB RAM 17/02/24 22:52:24 INFO Master: Registering worker 127.0.0.1:22445 with 16 cores, 187.8 GB RAM 17/02/24 22:52:24 INFO Master: Registering worker 127.0.0.1:29473 with 16 cores, 187.8 GB RAM Despite the workers calling themselves 127.0.0.1, they really are running across the allocation. (The confusion happens because of our :code:`$SPARK_LOCAL_IP` setting above.) This can be verified by examining logs on each compute node. For example (note single quotes):: $ ssh 10.8.8.4 -- tail -3 '/tmp/spark/log/*worker*.out' 17/02/24 22:52:24 INFO Worker: Connecting to master 10.8.8.3:7077... 17/02/24 22:52:24 INFO TransportClientFactory: Successfully created connection to /10.8.8.3:7077 after 263 ms (216 ms spent in bootstraps) 17/02/24 22:52:24 INFO Worker: Successfully registered with master spark://10.8.8.3:7077 We can now start an interactive shell to do some Spark computing:: $ ch-run -b ~/sparkconf /var/tmp/spark.sqfs -- /spark/bin/pyspark --master $MASTER_URL Let's use this shell to estimate 𝜋 (this is adapted from one of the Spark `examples `_): .. code-block:: pycon >>> import operator >>> import random >>> >>> def sample(p): ... (x, y) = (random.random(), random.random()) ... return 1 if x*x + y*y < 1 else 0 ... >>> SAMPLE_CT = int(2e8) >>> ct = sc.parallelize(xrange(0, SAMPLE_CT)) \ ... .map(sample) \ ... .reduce(operator.add) >>> 4.0*ct/SAMPLE_CT 3.14109824 (Type Control-D to exit.) We can also submit jobs to the Spark cluster. This one runs the same example as included with the Spark source code. (The voluminous logging output is omitted.) :: $ ch-run -b ~/sparkconf /var/tmp/spark.sqfs -- \ /spark/bin/spark-submit --master $MASTER_URL \ /spark/examples/src/main/python/pi.py 1024 [...] Pi is roughly 3.141211 [...] Exit your allocation. Slurm will clean up the Spark daemons. Success! Next, we'll run a similar job non-interactively. Non-interactive Apache Spark ---------------------------- We'll re-use much of the above to run the same computation non-interactively. For brevity, the Slurm script at :code:`examples/other/spark/slurm.sh` is not reproduced here. Submit it as follows. It requires three arguments: the tarball, the image directory to unpack into, and the high-speed network interface. Again, consult your site administrators for the latter. :: $ sbatch -N4 slurm.sh spark.tar.gz /var/tmp ib0 Submitted batch job 86754 Output:: $ fgrep 'Pi is' slurm-86754.out Pi is roughly 3.141393 Success! (to four significant digits) .. LocalWords: NEWROOT rhel oldfind oldf mem drwxr xr sig drwxrws mpihello .. LocalWords: openmpi rwxr rwxrwx cn cpus sparkconf MasterWebUI MasterUI .. LocalWords: StandaloneRestServer MYSECRET TransportClientFactory sc charliecloud-0.26/examples/000077500000000000000000000000001417231051300157125ustar00rootroot00000000000000charliecloud-0.26/examples/.dockerignore000066400000000000000000000002321417231051300203630ustar00rootroot00000000000000# Exclude everything by default * # Needed for Dockerfile.openmpi !dont-init-ucx-on-intel-cray.patch # Needed for Dockerfile.exhaustive !Dockerfile.* charliecloud-0.26/examples/Dockerfile.centos7000066400000000000000000000024151417231051300212670ustar00rootroot00000000000000# ch-test-scope: standard FROM centos:7 # This image has two purposes: (1) demonstrate we can build a CentOS 7 image # and (2) provide a build environment for Charliecloud EPEL 7 RPMs. # Install our dependencies, ensuring we fail out if any are missing. RUN yum install -y epel-release \ && yum install -y --setopt=skip_missing_names_on_install=0 \ autoconf \ automake \ bats \ fakeroot \ gcc \ git \ make \ python3-devel \ python3 \ python36-lark-parser \ python36-requests \ python36-sphinx \ python36-sphinx_rtd_theme \ rpm-build \ rpmlint \ rsync \ squashfs-tools \ squashfuse \ wget \ && yum clean all # We need to install epel rpm-macros after python3-devel to get the correct # python package version for our spec file macros. # https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/K4EH7V3OUFJFVL6A72IILJUA6JFX2HZW/ RUN yum install -y epel-rpm-macros # Need wheel to install bundled Lark, and the RPM version doesn't work. RUN pip3 install wheel charliecloud-0.26/examples/Dockerfile.centos8000066400000000000000000000032361417231051300212720ustar00rootroot00000000000000# ch-test-scope: standard FROM centos:8 # This image has two purposes: (1) demonstrate we can build a CentOS 8 image # and (2) provide a build environment for Charliecloud EPEL 8 RPMs. # # Quirks: # # 1. Install the dnf ovl plugin to work around RPMDB corruption when # building images with Docker and the OverlayFS storage driver. # # 2. Enable PowerTools repo, because some packages in EPEL depend on it. # # 3. Install packages needed to build el8 rpms. # # 4. Issue #1103: Install libarchive to resolve cmake bug # RUN dnf install -y --setopt=install_weak_deps=false \ epel-release \ 'dnf-command(config-manager)' \ && dnf config-manager --enable powertools \ && dnf install -y --setopt=install_weak_deps=false \ dnf-plugin-ovl \ autoconf \ automake \ gcc \ git \ libarchive \ make \ python3 \ python3-devel \ python3-lark-parser \ python3-requests \ python3-sphinx \ python3-sphinx_rtd_theme \ rpm-build \ rpmlint \ rsync \ squashfs-tools \ squashfuse \ wget \ && dnf clean all # Need wheel to install bundled Lark, and the RPM version doesn't work. RUN pip3 install wheel # CentOS's linker doesn't search these paths by default; add them because we # will install stuff later into /usr/local. RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/usrlocal.conf \ && echo "/usr/local/lib64" >> /etc/ld.so.conf.d/usrlocal.conf \ && ldconfig charliecloud-0.26/examples/Dockerfile.debian9000066400000000000000000000003741417231051300212220ustar00rootroot00000000000000# ch-test-scope: standard # ch-test-arch-exclude: ppc64le # base image unavailable FROM debian:stretch ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get install -y --no-install-recommends apt-utils \ && rm -rf /var/lib/apt/lists/* charliecloud-0.26/examples/Dockerfile.mpich000066400000000000000000000032731417231051300210100ustar00rootroot00000000000000# The MPICH example has a smaller scope than the OpenMPI example. We want to # provide an MPICH build that works on a single node and (via ch-fromhost # trickery) on Cray Aries systems. That's it for now. # # We build MPICH rather than install the RPM to get a bare bones build. # # ch-test-scope: full FROM centos8 RUN dnf install -y --setopt=install_weak_deps=false \ automake \ file \ gcc \ gcc-c++ \ gcc-gfortran \ git \ make \ wget \ && dnf clean all WORKDIR /usr/local/src # We currently need our own patched patchelf; see issue #256. RUN git clone https://github.com/hpc/patchelf.git \ && cd patchelf \ && git checkout shrink-soname \ && ./bootstrap.sh \ && ./configure --prefix=/usr/local \ && make install \ && rm -Rf ../patchelf # Forcing ch3 instead of the default ch4 as the latter requires ucx or # libfabric to be installed. ARG MPI_VERSION=3.4.1 ARG MPI_URL=http://www.mpich.org/static/downloads/${MPI_VERSION} RUN wget -nv ${MPI_URL}/mpich-${MPI_VERSION}.tar.gz \ && tar xf mpich-${MPI_VERSION}.tar.gz \ && cd mpich-${MPI_VERSION} \ && CFLAGS=-O3 \ CXXFLAGS=-O3 \ ./configure --prefix=/usr/local \ --disable-cxx \ --disable-fortran \ --disable-threads \ --disable-rpath \ --disable-static \ --disable-wrapper-rpath \ --with-device=ch3 \ --without-ibverbs \ --without-libfabric \ --without-slurm \ && make -j$(getconf _NPROCESSORS_ONLN) install \ && rm -Rf ../mpich-${MPI_VERSION}* RUN ldconfig charliecloud-0.26/examples/Dockerfile.nvidia000066400000000000000000000043371417231051300211640ustar00rootroot00000000000000# ch-test-scope: full # ch-test-arch-exclude: aarch64 # only x86-64, ppc64le supported by nVidia # This Dockerfile demonstrates a multi-stage build. With a single-stage build # that brings along the nVidia build environment, the resulting unpacked image # is 2.9 GiB; with the multi-stage build, it's 146 MiB. # # See: https://docs.docker.com/develop/develop-images/multistage-build ## Stage 1: Install the nVidia build environment and build a sample app. FROM ubuntu:20.04 # OS packages needed ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get install -y --no-install-recommends \ ca-certificates \ gnupg \ make \ wget \ && rm -rf /var/lib/apt/lists/* # Install CUDA from nVidia. # See: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=2004&target_type=debnetwork WORKDIR /usr/local/src RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin \ && mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600 \ && wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub \ && apt-key add 7fa2af80.pub \ && echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /" >> /etc/apt/sources.list \ && apt-get update \ && apt-get install -y --no-install-recommends cuda-toolkit-11-2 \ && rm -rf /var/lib/apt/lists/* 7fa2af80.pub # Build the sample app we'll use to test. WORKDIR /usr/local/cuda-11.2/samples/0_Simple/matrixMulCUBLAS RUN make ## Stage 2: Copy the built sample app into a clean Ubuntu image. FROM ubuntu:20.04 COPY --from=0 /usr/local/cuda-11.2/samples/0_Simple/matrixMulCUBLAS / # These are the two nVidia shared libraries that the sample app needs. We could # be smarter about finding this path. However, one thing to avoid is copying in # all of /usr/local/cuda-11.2/targets/x86_64-linux/lib, because that directory # is quite large. COPY --from=0 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcublas.so.11.4.1.1043 /usr/local/lib COPY --from=0 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcublasLt.so.11.4.1.1043 /usr/local/lib RUN ldconfig charliecloud-0.26/examples/Dockerfile.openmpi000066400000000000000000000120551417231051300213550ustar00rootroot00000000000000# ch-test-scope: full FROM centos8 # A key goal of this Dockerfile is to demonstrate best practices for building # OpenMPI for use inside a container. # # This OpenMPI aspires to work close to optimally on clusters with any of the # following interconnects: # # - Ethernet (TCP/IP) # - InfiniBand (IB) # - Omni-Path (OPA) # - RDMA over Converged Ethernet (RoCE) interconnects # # with no environment variables, command line arguments, or additional # configuration files. Thus, we try to implement decisions at build time. # # This is a work in progress, and we're very interested in feedback. # # OpenMPI has numerous ways to communicate messages [1]. The ones relevant to # this build and the interconnects they support are: # # Module Eth IB OPA RoCE note decision # ------------ ---- ---- ---- ---- ---- -------- # # ob1 : tcp Y* X X X a include # ob1 : openib N Y Y Y b,c exclude # cm : psm2 N N Y* N include # : ucx Y? Y* N Y? b,d include # # Y : supported # Y*: best choice for that interconnect # X : supported but sub-optimal # # a : No RDMA, so performance will suffer. # b : Uses libibverbs. # c : Will be removed in OpenMPI 5. # d : Uses Mellanox libraries if available in preference to libibverbs. # # You can check what's available with: # # $ ch-run /var/tmp/openmpi -- ompi_info | egrep '(btl|mtl|pml)' # # The other build decisions are: # # 1. PMI/PMIx: Include these so that we can use srun or any other PMI[x] # provider, with no matching OpenMPI needed on the host. # # 2. --disable-pty-support to avoid "pipe function call failed when # setting up I/O forwarding subsystem". # # 3. --enable-mca-no-build=plm-slurm to support launching processes using # the host's srun (i.e., the container OpenMPI needs to talk to the host # Slurm's PMI) but prevent OpenMPI from invoking srun itself from within # the container, where srun is not installed (the error messages from # this are inscrutable). # # [1]: https://github.com/open-mpi/ompi/blob/master/README # OS packages needed to build this stuff. # # As of 2019-10-07, CentOS 8 provides version 22.2 of libibverbs. This is # quite new but works for our test systems. # Note that libpsm2 is x86-64 only so we skip if missing RUN dnf install -y --setopt=install_weak_deps=false \ automake \ file \ flex \ gcc \ gcc-c++ \ gcc-gfortran \ git \ ibacm \ libevent-devel \ libtool \ libibumad \ libibumad-devel \ libibverbs \ libibverbs-devel \ libibverbs-utils \ librdmacm \ librdmacm-devel \ rdma-core \ make \ numactl-devel \ wget \ && dnf install -y --setopt=install_weak_deps=false --skip-broken \ libpsm2 \ libpsm2-devel \ && dnf clean all WORKDIR /usr/local/src # UCX. There is a package for CentOS 8 but we want control over build options, # specifically multithreaded support. ARG UCX_VERSION=1.11.2 RUN git clone --branch v${UCX_VERSION} --depth 1 \ https://github.com/openucx/ucx.git \ && cd ucx \ && ./autogen.sh \ && ./contrib/configure-release-mt --prefix=/usr/local \ && make -j$(getconf _NPROCESSORS_ONLN) install \ && rm -Rf ../ucx* # PMI2. # # CentOS doesn't have a package with the Slurm PMI2 libraries we need, so # build them from Slurm's release. ARG SLURM_VERSION=19-05-3-2 RUN wget https://github.com/SchedMD/slurm/archive/slurm-${SLURM_VERSION}.tar.gz \ && tar -xf slurm-${SLURM_VERSION}.tar.gz \ && cd slurm-slurm-${SLURM_VERSION} \ && ./configure --prefix=/usr/local \ && cd contribs/pmi2 \ && make -j$(getconf _NPROCESSORS_ONLN) install \ && rm -Rf ../../../slurm* # OpenMPI. # # Patch OpenMPI to disable UCX plugin on systems with Intel or Cray HSNs. UCX # has inferior performance than PSM2/uGNI but higher priority. ARG MPI_URL=https://www.open-mpi.org/software/ompi/v3.1/downloads ARG MPI_VERSION=3.1.6 RUN wget -nv ${MPI_URL}/openmpi-${MPI_VERSION}.tar.gz \ && tar xf openmpi-${MPI_VERSION}.tar.gz COPY dont-init-ucx-on-intel-cray.patch ./openmpi-${MPI_VERSION} RUN cd openmpi-${MPI_VERSION} && git apply dont-init-ucx-on-intel-cray.patch \ && CFLAGS=-O3 \ CXXFLAGS=-O3 \ ./configure --prefix=/usr/local \ --sysconfdir=/mnt/0 \ --with-slurm \ --with-pmi=/usr/local \ --with-pmix \ --with-ucx \ --disable-pty-support \ --enable-mca-no-build=btl-openib,plm-slurm \ && make -j$(getconf _NPROCESSORS_ONLN) install \ && rm -Rf ../openmpi-${MPI_VERSION}* RUN ldconfig # OpenMPI expects this program to exist, even if it's not used. Default is # "ssh : rsh", but that's not installed. RUN echo 'plm_rsh_agent = false' >> /mnt/0/openmpi-mca-params.conf charliecloud-0.26/examples/Makefile.am000066400000000000000000000105161417231051300177510ustar00rootroot00000000000000examplesdir = $(docdir)/examples execs = \ chtest/Build \ chtest/bind_priv.py \ chtest/dev_proc_sys.py \ chtest/fs_perms.py \ chtest/printns \ chtest/signal_out.py \ hello/hello.sh \ obspy/hello.py noexecs = \ Dockerfile.centos7 \ Dockerfile.centos8 \ Dockerfile.debian9 \ Dockerfile.mpich \ Dockerfile.nvidia \ Dockerfile.openmpi \ chtest/Makefile \ chtest/chroot-escape.c \ chtest/mknods.c \ chtest/setgroups.c \ chtest/setuid.c \ copy/Dockerfile \ copy/dirA/fileAa \ copy/dirB/fileBa \ copy/dirB/fileBb \ copy/dirCa/dirCb/fileCba \ copy/dirCa/dirCb/fileCbb \ copy/dirD/fileDa \ copy/dirEa/dirEb/fileEba \ copy/dirEa/dirEb/fileEbb \ copy/dirF/dir19a3/file19b1 \ copy/dirF/file19a3 \ copy/dirF/file19a2 \ copy/dirF/dir19a2/file19b2 \ copy/dirF/dir19a2/dir19b2/file19c1 \ copy/dirF/dir19a2/dir19b3/file19c1 \ copy/dirF/dir19a2/file19b3 \ copy/dirG/diry/file_ \ copy/dirG/filey \ copy/dirG/s_dir1 \ copy/dirG/s_dir4/file_ \ copy/dirG/s_file1 \ copy/dirG/s_file4/file_ \ copy/fileA \ copy/fileB \ copy/test.bats \ dont-init-ucx-on-intel-cray.patch \ exhaustive/Dockerfile \ hello/Dockerfile \ hello/README \ lammps/Dockerfile \ lammps/melt.patch \ lammps/simple.patch \ lustre/Dockerfile \ mpibench/Dockerfile.mpich \ mpibench/Dockerfile.openmpi \ mpihello/Dockerfile.mpich \ mpihello/Dockerfile.openmpi \ mpihello/Makefile \ mpihello/hello.c \ mpihello/slurm.sh \ multistage/Dockerfile \ obspy/Dockerfile \ obspy/README \ obspy/obspy.png \ paraview/Dockerfile \ paraview/cone.2ranks.vtk \ paraview/cone.nranks.vtk \ paraview/cone.png \ paraview/cone.py \ paraview/cone.serial.vtk \ spack/Dockerfile \ spark/Dockerfile \ spark/slurm.sh batsfiles = \ exhaustive/test.bats \ hello/test.bats \ lammps/test.bats \ lustre/test.bats \ mpibench/test.bats \ mpihello/test.bats \ multistage/test.bats \ obspy/test.bats \ paraview/test.bats \ spack/test.bats \ spark/test.bats nobase_examples_SCRIPTS = $(execs) nobase_examples_DATA = $(noexecs) if ENABLE_TEST nobase_examples_DATA += $(batsfiles) endif EXTRA_DIST = $(execs) $(noexecs) $(batsfiles) # Automake is completely unable to deal with symlinks; we cannot include them # in the source code or "make dist" won't work, and we can't include them in # the files to install or "make install" won't work. These targets take care # of everything manually. # # Note: -T prevents ln(1) from dereferencing and descending into symlinks to # directories. Without this, new symlinks are created within such directories, # instead of replacing the existing symlink as we wanted. See PR #722. all-local: ln -fTs dirCb copy/dirCa/symlink-to-dirCb ln -fTs fileDa copy/dirD/symlink-to-fileDa ln -fTs dirEb copy/dirEa/symlink-to-dirEb ln -fTs filey copy/dirG/s_dir2 ln -fTs diry copy/dirG/s_dir3 ln -fTs filey copy/dirG/s_file2 ln -fTs diry copy/dirG/s_file3 ln -fTs fileA copy/symlink-to-fileA ln -fTs fileB copy/symlink-to-fileB-A ln -fTs fileB copy/symlink-to-fileB-B clean-local: rm -f copy/dirCa/symlink-to-dirCb rm -f copy/dirD/symlink-to-fileDa rm -f copy/dirEa/symlink-to-dirEb rm -f copy/dirG/s_dir2 rm -f copy/dirG/s_dir3 rm -f copy/dirG/s_file2 rm -f copy/dirG/s_file3 rm -f copy/symlink-to-fileA rm -f copy/symlink-to-fileB-A rm -f copy/symlink-to-fileB-B install-data-hook: ln -fTs dirCb $(DESTDIR)$(examplesdir)/copy/dirCa/symlink-to-dirCb ln -fTs fileDa $(DESTDIR)$(examplesdir)/copy/dirD/symlink-to-fileDa ln -fTs dirEb $(DESTDIR)$(examplesdir)/copy/dirEa/symlink-to-dirEb ln -fTs filey $(DESTDIR)$(examplesdir)/copy/dirG/s_dir2 ln -fTs diry $(DESTDIR)$(examplesdir)/copy/dirG/s_dir3 ln -fTs filey $(DESTDIR)$(examplesdir)/copy/dirG/s_file2 ln -fTs diry $(DESTDIR)$(examplesdir)/copy/dirG/s_file3 ln -fTs fileA $(DESTDIR)$(examplesdir)/copy/symlink-to-fileA ln -fTs fileB $(DESTDIR)$(examplesdir)/copy/symlink-to-fileB-A ln -fTs fileB $(DESTDIR)$(examplesdir)/copy/symlink-to-fileB-B uninstall-local: rm -f $(DESTDIR)$(examplesdir)/copy/dirCa/symlink-to-dirCb rm -f $(DESTDIR)$(examplesdir)/copy/dirD/symlink-to-fileDa rm -f $(DESTDIR)$(examplesdir)/copy/dirEa/symlink-to-dirEb rm -f $(DESTDIR)$(examplesdir)/copy/dirG/s_dir2 rm -f $(DESTDIR)$(examplesdir)/copy/dirG/s_dir3 rm -f $(DESTDIR)$(examplesdir)/copy/dirG/s_file2 rm -f $(DESTDIR)$(examplesdir)/copy/dirG/s_file3 rm -f $(DESTDIR)$(examplesdir)/copy/symlink-to-fileA rm -f $(DESTDIR)$(examplesdir)/copy/symlink-to-fileB-A rm -f $(DESTDIR)$(examplesdir)/copy/symlink-to-fileB-B charliecloud-0.26/examples/chtest/000077500000000000000000000000001417231051300172045ustar00rootroot00000000000000charliecloud-0.26/examples/chtest/Build000077500000000000000000000116361417231051300202000ustar00rootroot00000000000000#!/bin/bash # Build an Alpine Linux image roughly following the chroot(2) instructions: # https://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot # # We deliberately do not sudo. It's a little rough around the edges, because # apk expects root, but it better follows the principle of least privilege. We # could tidy by using the fakeroot utility, but AFAICT that's not particularly # common and we'd prefer not to introduce another dependency. For example, # it's a standard tool on Debian but only in EPEL for CentOS. # # FIXME: Despite the guidance in the Build script API docs, this produces a # tarball even though the process does not naturally produce one. This is # because we are also creating some rather bizarre tar edge cases. These # should be moved to a separate script. # # ch-test-scope: quick set -ex srcdir=$1 tarball_uncompressed=${2}.tar tarball=${tarball_uncompressed}.gz workdir=$3 arch=$(uname -m) mirror=http://dl-cdn.alpinelinux.org/alpine/v3.9 # Dynamically select apk-tools-static version. We would prefer to hard-code a # version (and upgrade on our schedule), but we can't because Alpine does not # keep old package versions. If we try, the build breaks every few months (for # example, see issue #242). apk_tools=$( wget -qO - "${mirror}/main/${arch}" \ | grep -F apk-tools-static \ | sed -E 's/^.*(apk-tools-static-[0-9.r-]+\.apk).*$/\1/') img=${workdir}/img cd "$workdir" # "apk add" wants to install a bunch of files root:root. Thus, if we don't map # ourselves to root:root, we get thousands of errors about "Failed to set # ownership". # # For most Build scripts, we'd simply error out with missing prerequisites, # but this is a core image that much of the test suite depends on. ch_run="ch-run -u0 -g0 -w --no-home ${img}" ## Bootstrap base Alpine Linux. # Download statically linked apk. wget "${mirror}/main/${arch}/${apk_tools}" # Bootstrap directories. mkdir img mkdir img/{dev,etc,proc,sys,tmp} touch img/etc/{group,hosts,passwd,resolv.conf} # Bootstrap static apk. (cd img && tar xf "../${apk_tools}") mkdir img/etc/apk echo ${mirror}/main > img/etc/apk/repositories # Install the base system and a dynamically linked apk. # # This will give a few errors about chown failures. However, the install does # seem to work, so we ignore the failed exit code. $ch_run -- /sbin/apk.static \ --allow-untrusted --initdb --update-cache \ add alpine-base apk-tools \ || true # Now that we've bootstrapped, we don't need apk.static any more. It wasn't # installed using apk, so it's not in the database and can just be rm'ed. rm img/sbin/apk.static.* # Install packages we need for our tests. $ch_run -- /sbin/apk add gcc make musl-dev python3 || true # Validate the install. $ch_run -- /sbin/apk audit --system $ch_run -- /sbin/apk stats # Fix permissions. # # Note that this removes setuid/setgid bits from a few files (and # directories). There is not a race condition, i.e., a window where setuid # executables could become the invoking users, which would be a security hole, # because the setuid/setgid binaries are not group- or world-readable until # after this chmod. chmod -R u+rw,ug-s img ## Install our test stuff. # Sentinel file for --no-home --bind test echo "tmpfs and host home are not overmounted" \ > img/home/overmount-me # We want ch-ssh touch img/usr/bin/ch-ssh # Test programs. cp -r "$srcdir" img/test $ch_run --cd /test -- sh -c 'make clean && make' # Fixtures for /dev cleaning. touch img/dev/deleteme mkdir -p img/mnt/dev touch img/mnt/dev/dontdeleteme # Fixture to make sure we raise hidden files in non-tarbombs. touch img/.hiddenfile1 img/..hiddenfile2 img/...hiddenfile3 # Fixtures for bind-mounting ln -s ../bind4 img/mnt/bind4 ln -s ./doesnotexist img/mnt/link-b0rken-rel ln -s /doesnotexist img/mnt/link-b0rken-abs ln -s /tmp img/mnt/link-bad-abs ln -s ../.. img/mnt/link-bad-rel # Fixture to test resolv.conf as symlink (issue #1015). mv img/etc/resolv.conf img/etc/resolv.conf.real ln -s /etc/resolv.conf.real img/etc/resolv.conf ## Tar it up. # Using pigz saves about 8 seconds. Normally we wouldn't care about that, but # this script is part of the quick scope, which we'd like developers to use # frequently, so every second matters. if command -v pigz > /dev/null 2>&1; then gzip_cmd=pigz else gzip_cmd=gzip fi # Charliecloud supports images both with a single top level directory and # without (tarbomb). The Docker images in the test suite are all tarbombs # (because that's what "docker export" gives us), so use a containing # directory for this one. tar cf "$tarball_uncompressed" -- img # Add in the /dev fixtures in a couple more places. Note that this will cause # the tarball to no longer have a single root directory, but they'll be # removed during unpacking, restoring that condition. ( cd img && tar rf "$tarball_uncompressed" ./dev/deleteme dev/deleteme ) # Finalize the tarball. $gzip_cmd -f "$tarball_uncompressed" [[ -f $tarball ]] charliecloud-0.26/examples/chtest/Makefile000066400000000000000000000002331417231051300206420ustar00rootroot00000000000000BINS := chroot-escape mknods setgroups setuid ALL := $(BINS) CFLAGS := -std=c11 -Wall -Werror .PHONY: all all: $(ALL) .PHONY: clean clean: rm -f $(ALL) charliecloud-0.26/examples/chtest/bind_priv.py000077500000000000000000000022461417231051300215410ustar00rootroot00000000000000#!/usr/bin/env python3 # This script tries to bind to a privileged port on each of the IP addresses # specified on the command line. import errno import socket import sys PORT = 7 # echo results = dict() try: for ip in sys.argv[1:]: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.bind((ip, PORT)) except OSError as x: if (x.errno in (errno.EACCES, errno.EADDRNOTAVAIL)): results[ip] = x.errno else: raise else: results[ip] = 0 except Exception as x: print('ERROR\texception: %s' % x) rc = 1 else: if (len(results) < 1): print('ERROR\tnothing to test', end='') rc = 1 elif (len(set(results.values())) != 1): print('ERROR\tmixed results: ', end='') rc = 1 else: result = next(iter(results.values())) if (result != 0): print('SAFE\t%d (%s) ' % (result, errno.errorcode[result]), end='') rc = 0 else: print('RISK\tsuccessful bind ', end='') rc = 1 explanation = ' '.join('%s=%d' % (ip, e) for (ip, e) in sorted(results.items())) print(explanation) sys.exit(rc) charliecloud-0.26/examples/chtest/chroot-escape.c000066400000000000000000000042131417231051300221040ustar00rootroot00000000000000/* This program tries to escape a chroot using well-established methods, which are not an exploit but rather take advantage of chroot(2)'s well-defined behavior. We use device and inode numbers to test whether the root directory is the same before and after the escape. References: https://filippo.io/escaping-a-chroot-jail-slash-1/ http://www.bpfh.net/simes/computing/chroot-break.html */ #define _DEFAULT_SOURCE #include #include #include #include #include #include #include #include void fatal(char * msg) { printf("ERROR\t%s: %s\n", msg, strerror(errno)); exit(EXIT_FAILURE); } int main() { struct stat before, after; int fd; int status = EXIT_FAILURE; char tmpdir_template[] = "/tmp/chtest.tmp.chroot.XXXXXX"; char * tmpdir_name; if (stat("/", &before)) fatal("stat before"); tmpdir_name = mkdtemp(tmpdir_template); if (tmpdir_name == NULL) fatal("mkdtemp"); if ((fd = open(".", O_RDONLY)) < 0) fatal("open"); if (chroot(tmpdir_name)) { if (errno == EPERM) { printf("SAFE\tchroot(2) failed with EPERM\n"); status = EXIT_SUCCESS; } else { fatal("chroot"); } } else { if (fchdir(fd)) fatal("fchdir"); if (close(fd)) fatal("close"); for (int i = 0; i < 1024; i++) if (chdir("..")) fatal("chdir"); /* If we got this far, we should be able to call chroot(2), so failure is an error. */ if (chroot(".")) fatal("chroot"); /* If root directory is the same before and after the attempted escape, then the escape failed, and we should be happy. */ if (stat("/", &after)) fatal("stat after"); if (before.st_dev == after.st_dev && before.st_ino == after.st_ino) { printf("SAFE\t"); status = EXIT_SUCCESS; } else { printf("RISK\t"); status = EXIT_FAILURE; } printf("dev/inode before %lu/%lu, after %lu/%lu\n", before.st_dev, before.st_ino, after.st_dev, after.st_ino); } if (rmdir(tmpdir_name)) fatal("rmdir"); return status; } charliecloud-0.26/examples/chtest/dev_proc_sys.py000077500000000000000000000026361417231051300222670ustar00rootroot00000000000000#!/usr/bin/env python3 import os.path import sys # Files in /dev and /sys seem to vary between Linux systems. Thus, try a few # candidates and use the first one that exists. What we want is a file with # permissions root:root -rw------- that's in a directory readable and # executable by unprivileged users, so we know we're testing permissions on # the file rather than any of its containing directories. This may help for # finding such a file in /sys: # # $ find /sys -type f -a -perm 600 -ls # sys_file = None for f in ("/sys/devices/cpu/rdpmc", "/sys/kernel/mm/page_idle/bitmap", "/sys/module/nf_conntrack_ipv4/parameters/hashsize", "/sys/kernel/slab/request_sock_TCP/red_zone"): if (os.path.exists(f)): sys_file = f break if (sys_file is None): print("ERROR\tno test candidates in /sys exist") sys.exit(1) dev_file = None for f in ("/dev/cpu_dma_latency", "/dev/mem"): if (os.path.exists(f)): dev_file = f break if (dev_file is None): print("ERROR\tno test candidates in /dev exist") sys.exit(1) problem_ct = 0 for f in (dev_file, "/proc/kcore", sys_file): try: open(f, "rb").read(1) print("RISK\t%s: read allowed" % f) problem_ct += 1 except PermissionError: print("SAFE\t%s: read not allowed" % f) except OSError as x: print("ERROR\t%s: exception: %s" % (f, x)) problem_ct += 1 sys.exit(problem_ct != 0) charliecloud-0.26/examples/chtest/fs_perms.py000077500000000000000000000067301417231051300214050ustar00rootroot00000000000000#!/usr/bin/env python3 # This script walks the directories specified in sys.argv[1:] prepared by # make-perms-test.sh and attempts to read, write, and traverse (cd) each of # the entries within. It compares the result to the expectation encoded in the # filename. # # A summary line is printed on stdout. Running chatter describing each # evaluation is printed on stderr. # # Note: This works more or less the same as an older version embodied by # `examples/sandbox.py --filesystem` but is implemented in pure Python without # shell commands. Thus, the whole script must be run as root if you want to # see what root can do. import os.path import random import re import sys EXPECTED_RE = re.compile(r'~(...)$') class Makes_No_Sense(TypeError): pass VERBOSE = False def main(): if (sys.argv[1] == '--verbose'): global VERBOSE VERBOSE = True sys.argv.pop(1) d = sys.argv[1] mismatch_ct = 0 test_ct = 0 for path in sorted(os.listdir(d)): test_ct += 1 mismatch_ct += not test('%s/%s' % (d, path)) if (test_ct <= 0 or test_ct % 2887 != 0): error("unexpected number of tests: %d" % test_ct) if (mismatch_ct == 0): print('SAFE\t', end='') else: print('RISK\t', end='') print('%d mismatches in %d tests' % (mismatch_ct, test_ct)) sys.exit(mismatch_ct != 0) # Table of test function name fragments. testvec = { (False, False, False): ('X', 'bad'), (False, False, True ): ('l', 'broken_symlink'), (False, True, False): ('f', 'file'), (False, True, True ): ('f', 'file'), (True, False, False): ('d', 'dir'), (True, False, True ): ('d', 'dir') } def error(msg): print('ERROR\t%s' % msg) sys.exit(1) def expected(path): m = EXPECTED_RE.search(path) if (m is None): return '*' else: return m.group(1) def test(path): filetype = (os.path.isdir(path), os.path.isfile(path), os.path.islink(path)) report = '%s %-24s ' % (testvec[filetype][0], path) expect = expected(path) result = '' for op in 'r', 'w', 't': # read, write, traverse f = globals()['try_%s_%s' % (op, testvec[filetype][1])] try: f(path) except (PermissionError, Makes_No_Sense): result += '-' except Exception as x: error('exception on %s: %s' % (path, x)) else: result += op report += result if (expect != '*' and result != expect): print('%s mismatch' % report) return False else: if (VERBOSE): print('%s ok' % report) return True def try_r_bad(path): error('bad file type: %s' % path) try_t_bad = try_r_bad try_w_bad = try_r_bad def try_r_broken_symlink(path): raise Makes_No_Sense() try_t_broken_symlink = try_r_broken_symlink try_w_broken_symlink = try_r_broken_symlink def try_r_dir(path): os.listdir(path) def try_t_dir(path): try_r_file(path + '/file') def try_w_dir(path): fpath = '%s/a%d' % (path, random.getrandbits(64)) try_w_file(fpath) os.unlink(fpath) def try_r_file(path): with open(path, 'rb', buffering=0) as fp: fp.read(1) def try_t_file(path): raise Makes_No_Sense() def try_w_file(path): # The file should exist, but this will create it if it doesn't. We don't # check for that error condition because we *only* want to touch the OS for # open(2) and write(2). with open(path, 'wb', buffering=0) as fp: fp.write(b'written by fs_test.py\n') if (__name__ == '__main__'): main() charliecloud-0.26/examples/chtest/mknods.c000066400000000000000000000061261417231051300206500ustar00rootroot00000000000000/* Try to make some device files, and print a message to stdout describing what happened. See: https://www.kernel.org/doc/Documentation/devices.txt */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include const unsigned char_devs[] = { 1, 3, /* /dev/null -- most innocuous */ 1, 1, /* /dev/mem -- most juicy */ 0 }; int main(int argc, char ** argv) { dev_t dev; char * dir; int i, j; unsigned maj, min; bool open_ok; char * path; for (i = 1; i < argc; i++) { dir = argv[i]; for (j = 0; char_devs[j] != 0; j += 2) { maj = char_devs[j]; min = char_devs[j + 1]; if (0 > asprintf(&path, "%s/c%d.%d", dir, maj, min)) { printf("ERROR\tasprintf() failed with errno=%d\n", errno); return 1; } fprintf(stderr, "trying to mknod %s: ", path); dev = makedev(maj, min); if (mknod(path, S_IFCHR | 0500, dev)) { // Could not create device; make sure it's an error we expected. switch (errno) { case EACCES: case EINVAL: // e.g. /sys/firmware/efi/efivars case ENOENT: // e.g. /proc case ENOTDIR: // for bind-mounted files e.g. /etc/passwd case EPERM: case EROFS: fprintf(stderr, "failed as expected with errno=%d\n", errno); break; default: fprintf(stderr, "failed with unexpected errno\n"); printf("ERROR\tmknod(2) failed on %s with errno=%d\n", path, errno); return 1; } } else { // Device created; safe if we can't open it (see issue #381). fprintf(stderr, "succeeded\n"); fprintf(stderr, "trying to open %s: ", path); if (open(path, O_RDONLY) != -1) { fprintf(stderr, "succeeded\n"); open_ok = true; } else { open_ok = false; switch (errno) { case EACCES: fprintf(stderr, "failed as expected with errno=%d\n", errno); break; default: fprintf(stderr, "failed with unexpected errno\n"); printf("ERROR\topen(2) failed on %s with errno=%d\n", path, errno); return 1; } } // Remove the device, whether or not we were able to open it. if (unlink(path)) { printf("ERROR\tunlink(2) failed on %s with errno=%d", path, errno); return 1; } if (open_ok) { printf("RISK\tmknod(2), open(2) succeeded on %s (now removed)\n", path); return 1; } } } } printf("SAFE\t%d devices in %d dirs failed\n", (i - 1) * (j / 2), i - 1); return 0; } charliecloud-0.26/examples/chtest/printns000077500000000000000000000011171417231051300206270ustar00rootroot00000000000000#!/usr/bin/env python3 # Print out my namespace IDs, to stdout or (if specified) the path in $2. # Then, if $1 is specified, wait that number of seconds before exiting. import glob import os import socket import sys import time if (len(sys.argv) > 1): pause = float(sys.argv[1]) else: pause = 0 if (len(sys.argv) > 2): out = open(sys.argv[2], "wt") else: out = sys.stdout hostname = socket.gethostname() for ns in glob.glob("/proc/self/ns/*"): stat = os.stat(ns) print("%s:%s:%d" % (ns, hostname, stat.st_ino), file=out, flush=True) if (pause): time.sleep(pause) charliecloud-0.26/examples/chtest/setgroups.c000066400000000000000000000016461417231051300214120ustar00rootroot00000000000000/* Try to drop the last supplemental group, and print a message to stdout describing what happened. */ #define _DEFAULT_SOURCE #include #include #include #include #include #define NGROUPS_MAX 128 int main() { int group_ct; gid_t groups[NGROUPS_MAX]; group_ct = getgroups(NGROUPS_MAX, groups); if (group_ct == -1) { printf("ERROR\tgetgroups(2) failed with errno=%d\n", errno); return 1; } fprintf(stderr, "found %d groups; trying to drop last group %d\n", group_ct, groups[group_ct - 1]); if (setgroups(group_ct - 1, groups)) { if (errno == EPERM) { printf("SAFE\tsetgroups(2) failed with EPERM\n"); return 0; } else { printf("ERROR\tsetgroups(2) failed with errno=%d\n", errno); return 1; } } else { printf("RISK\tsetgroups(2) succeeded\n"); return 1; } } charliecloud-0.26/examples/chtest/setuid.c000066400000000000000000000015441417231051300206510ustar00rootroot00000000000000/* Try to change effective UID. */ #define _GNU_SOURCE #include #include #include #include #define NOBODY 65534 #define NOBODY2 65533 int main(int argc, char ** argv) { // target UID is nobody, unless we're already nobody uid_t start = geteuid(); uid_t target = start != NOBODY ? NOBODY : NOBODY2; int result; fprintf(stderr, "current EUID=%u, attempting EUID=%u\n", start, target); result = seteuid(target); // setuid(2) fails with EINVAL in user namespaces and EPERM if not root. if (result == 0) { printf("RISK\tsetuid(2) succeeded for EUID=%u\n", target); return 1; } else if (errno == EINVAL) { printf("SAFE\tsetuid(2) failed as expected with EINVAL\n"); return 0; } printf("ERROR\tsetuid(2) failed unexpectedly with errno=%d\n", errno); return 1; } charliecloud-0.26/examples/chtest/signal_out.py000077500000000000000000000020611417231051300217240ustar00rootroot00000000000000#!/usr/bin/env python3 # Send a signal to a process outside the container. # # This is a little tricky. We want a process that: # # 1. is certain to exist, to avoid false negatives # 2. we shouldn't be able to signal (specifically, we can't create a process # to serve as the target) # 3. is outside the container # 4. won't crash the host too badly if killed by the signal # # We want a signal that: # # 5. will be harmless if received # 6. is not blocked # # Accordingly, this test sends SIGCONT to the youngest getty process. The # thinking is that the virtual terminals are unlikely to be in use, so losing # one will be straightforward to clean up. import os import signal import subprocess import sys try: pdata = subprocess.check_output(["pgrep", "-nl", "getty"]) except subprocess.CalledProcessError: print("ERROR\tpgrep failed") sys.exit(1) pid = int(pdata.split()[0]) try: os.kill(pid, signal.SIGCONT) except PermissionError as x: print("SAFE\tfailed as expected: %s" % x) sys.exit(0) print("RISK\tsucceeded") sys.exit(1) charliecloud-0.26/examples/copy/000077500000000000000000000000001417231051300166645ustar00rootroot00000000000000charliecloud-0.26/examples/copy/Dockerfile000066400000000000000000000210771417231051300206650ustar00rootroot00000000000000# Exercise the COPY instruction, which has rather strange semantics compared # to what we are used to in cp(1). See FAQ. # # ch-test-scope: standard # ch-test-builder-exclude: buildah # ch-test-builder-exclude: buildah-runc # ch-test-builder-exclude: buildah-setuid FROM 00_tiny # Test directory RUN mkdir /test WORKDIR /test ## Source: Regular file(s) # Source: one file # Dest: new file, relative to workdir COPY fileA file1a # Source: one file # Dest: new file, absolute path COPY fileA /test/file1b # Source: one file, absolute path (root is context directory) # Dest: new file COPY /fileB file2 # Source: one file # Dest: existing file RUN echo 'this should be overwritten' > file3 COPY fileA file3 # Source: one file # Dest: symlink to existing file, relative path RUN echo 'this should be overwritten' > file4 \ && ln -s file4 symlink-to-file4 COPY fileA symlink-to-file4 # Source: one file # Dest: symlink to existing file, absolute path RUN echo 'this should be overwritten' > file5 \ && ln -s /test/file5 symlink-to-file5 COPY fileA symlink-to-file5 # Source: one file # Dest: existing directory, no trailing slash # # Note: This behavior is inconsistent with the Dockerfile reference, which # implies that dir1a must be a file because it does not end in slash. RUN mkdir dir01a COPY fileA dir01a # Source: one file # Dest: existing directory, trailing slash RUN mkdir dir01b COPY fileA dir01b/ # Source: one file # Dest: symlink to existing directory, relative, no trailing slash RUN mkdir dir01c \ && ln -s dir01c symlink-to-dir01c COPY fileA symlink-to-dir01c # Source: one file # Dest: symlink to existing directory, absolute, no trailing slash RUN mkdir dir01d \ && ln -s /test/dir01d symlink-to-dir01d COPY fileA symlink-to-dir01d # Source: one file # Dest: symlink to existing directory, relative, trailing slash RUN mkdir dir01e \ && ln -s dir01e symlink-to-dir01e COPY fileA symlink-to-dir01e/ # Source: one file # Dest: symlink to existing directory, absolute, trailing slash RUN mkdir dir01f \ && ln -s /test/dir01f symlink-to-dir01f COPY fileA symlink-to-dir01f/ # Source: one file # Dest: symlink to existing directory, multi-level, relative, no slash RUN mkdir -p dir01g/dir \ && ln -s dir01g symlink-to-dir01g COPY fileA symlink-to-dir01g/dir # Source: one file # Dest: symlink to existing directory, multi-level, absolute, no slash RUN mkdir -p dir01h/dir \ && ln -s /test/dir01h symlink-to-dir01h COPY fileA symlink-to-dir01h/dir # Source: one file # Dest: new directory, one level of creation COPY fileA dir02/ # Source: one file # Dest: new directory, two levels of creation COPY fileA dir03a/dir03b/ # Source: two files, explicit # Dest: existing directory RUN mkdir dir04 COPY fileA fileB dir04/ # Source: two files, explicit # Dest: new directory, one level COPY fileA fileB dir05/ # Source: two files, wildcard # Dest: existing directory RUN mkdir dir06 COPY file* dir06/ ## Source: Director(y|ies) # Source: one directory # Dest: existing directory, no trailing slash # # Note: Again, the reference seems to imply this shouldn't work. RUN mkdir dir07a COPY dirA dir07a # Source: one directory # Dest: existing directory, trailing slash RUN mkdir dir07b COPY dirA dir07b/ # Source: one directory # Dest: symlink to existing directory, relative, no trailing slash RUN mkdir dir07c \ && ln -s dir07c symlink-to-dir07c COPY dirA symlink-to-dir07c # Source: one directory # Dest: symlink to existing directory, absolute, no trailing slash RUN mkdir dir07d \ && ln -s /test/dir07d symlink-to-dir07d COPY dirA symlink-to-dir07d # Source: one directory # Dest: symlink to existing directory, relative, trailing slash RUN mkdir dir07e \ && ln -s dir07e symlink-to-dir07e COPY dirA symlink-to-dir07e/ # Source: one directory # Dest: symlink to existing directory, absolute, trailing slash RUN mkdir dir07f \ && ln -s /test/dir07f symlink-to-dir07f COPY dirA symlink-to-dir07f/ # Source: one directory # Dest: new directory, one level, no trailing slash # # Note: Again, the reference seems to imply this shouldn't work. COPY dirA dir08a # Source: one directory # Dest: new directory, one level, trailing slash COPY dirA dir08b/ # Source: one directory # Dest: existing file, 2nd level # # Note: While this fails if the existing file is at the top level (which we # verify in test/build/50_dockerfile.bats), if the existing file is at the 2nd # level, it's overwritten by the directory. RUN touch dir08a/dirCb COPY dirCa dir08a # Source: two directories, explicit # Dest: existing directory RUN mkdir dir09 COPY dirA dirB dir09/ # Source: two directories, explicit # Dest: new directory, one level COPY dirA dirB dir10/ # Source: two directories, wildcard # Dest: existing directory RUN mkdir dir11 COPY dir[AB] dir11/ # Source: two directories, wildcard # Dest: new directory, one level COPY dir[AB] dir12/ ## Source: Symlink(s) # Note: Behavior for symlinks is not documented. See FAQ. # Source: one symbolic link, to file, named explicitly # Dest: existing directory COPY symlink-to-fileA ./ # Source: one symbolic link, to directory, named explicitly # Dest: existing directory RUN mkdir dir13 COPY dirCa/symlink-to-dirCb dir13/ # Source: one symbolic link, to file, in a directory # Dest: existing directory RUN mkdir dir14 COPY dirD dir14/ # Source: one symbolic link, to file, in a directory # Dest: new directory, one level COPY dirD dir15/ # Source: one symbolic link, to directory, in a directory # Dest: existing directory RUN mkdir dir16 COPY dirEa dir16/ # Source: two symbolic links, to files, named explicitly # Dest: existing directory RUN mkdir dir17 COPY fileB symlink-to-fileB-A symlink-to-fileB-B dir17/ # Source: two symbolic links, to files, wildcard # Dest: existing directory RUN mkdir dir18 COPY fileB symlink-to-fileB-* dir18/ ## Merge directory trees # Set up destination directory tree. RUN mkdir dir19 \ && mkdir dir19/dir19a1 \ && mkdir dir19/dir19a2 \ && mkdir dir19/dir19a2/dir19b1 \ && mkdir dir19/dir19a2/dir19b2 \ && echo old > dir19/file19a1 \ && echo old > dir19/file19a2 \ && echo old > dir19/dir19a1/file19b1 \ && echo old > dir19/dir19a2/file19b1 \ && echo old > dir19/dir19a2/file19b2 \ && echo old > dir19/dir19a2/dir19b2/file19c1 \ && chmod 777 dir19/dir19a2 # Copy in the new directory tree. This is supposed to merge the two trees. # Important considerations, from perspective of destination tree: # # 1. File at top level, new. # 2. File at top level, existing (should overwrite). # 3. File at 2nd level, new. # 4. File at 2nd level, existing (should overwrite). # 5. Directory at top level, new. # 6. Directory at top level, existing (permissions should overwrite). # 7. Directory at 2nd level, new. # 8. Directory at 2nd level, existing (permissions should overwrite). # # The directories should be o-rwx so we can see if the permissions were from # the old or new version. RUN test $(stat -c '%A' dir19/dir19a2 | cut -c8-) = 'rwx' \ && stat -c '%n: %A' dir19/dir19a2 COPY dirF dir19/ RUN test $(stat -c '%A' dir19/dir19a2 | cut -c8-) != 'rwx' \ && stat -c '%n: %A' dir19/dir19a2 ## Destination: Symlink, 2nd level. # Note: This behavior is DIFFERENT from the symlink at 1st level tests above # (recall we are trying to be bug-compatible with Docker). # Set up destination. RUN mkdir dir20 \ && echo new > dir20/filex \ && mkdir dir20/dirx \ && for i in $(seq 4); do \ echo file$i > dir20/file$i \ && ln -s file$i dir20/s_file$i \ && mkdir dir20/dir$i \ && echo dir$i/file_ > dir20/dir$i/file_ \ && ln -s dir$i dir20/s_dir$i; \ done \ && ls -lR dir20 # Copy in the new directory tree. In all of these cases, the source simply # overwrites the destination; symlinks are not followed. # # name source destination # ------- ------------ ------------ # 1. s_file1 file link to file # 2. s_dir1 file link to dir # 3. s_file2 link to file link to file # 4. s_dir2 link to file link to dir # 5. s_file3 link to dir link to file # 6. s_dir3 link to dir link to dir # 7. s_file4 directory link to file # 8. s_dir4 directory link to dir # COPY dirG dir20/ ## Wrap up; this output helps to build the expectations in test.bats. # Need GNU find, not BusyBox find RUN apk add --no-cache findutils # File tree with type annotation characters. RUN ls -1FR . # Regular file contents. RUN find . -type f -printf '%y: %p: ' -a -exec cat {} \; | sort # Symlink targets. RUN find . -type l -printf '%y: %p -> %l\n' | sort charliecloud-0.26/examples/copy/dirA/000077500000000000000000000000001417231051300175435ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirA/fileAa000066400000000000000000000000141417231051300206420ustar00rootroot00000000000000dirA/fileAa charliecloud-0.26/examples/copy/dirB/000077500000000000000000000000001417231051300175445ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirB/fileBa000066400000000000000000000000141417231051300206440ustar00rootroot00000000000000dirB/fileBa charliecloud-0.26/examples/copy/dirB/fileBb000066400000000000000000000000141417231051300206450ustar00rootroot00000000000000dirB/fileBb charliecloud-0.26/examples/copy/dirCa/000077500000000000000000000000001417231051300177065ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirCa/dirCb/000077500000000000000000000000001417231051300207315ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirCa/dirCb/fileCba000066400000000000000000000000241417231051300221750ustar00rootroot00000000000000dirCa/dirCb/fileCba charliecloud-0.26/examples/copy/dirCa/dirCb/fileCbb000066400000000000000000000000241417231051300221760ustar00rootroot00000000000000dirCa/dirCb/fileCbb charliecloud-0.26/examples/copy/dirD/000077500000000000000000000000001417231051300175465ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirD/fileDa000066400000000000000000000000141417231051300206500ustar00rootroot00000000000000dirD/fileDa charliecloud-0.26/examples/copy/dirEa/000077500000000000000000000000001417231051300177105ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirEa/dirEb/000077500000000000000000000000001417231051300207355ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirEa/dirEb/fileEba000066400000000000000000000000241417231051300222030ustar00rootroot00000000000000dirEa/dirEb/fileEba charliecloud-0.26/examples/copy/dirEa/dirEb/fileEbb000066400000000000000000000000241417231051300222040ustar00rootroot00000000000000dirEa/dirEb/fileEbb charliecloud-0.26/examples/copy/dirF/000077500000000000000000000000001417231051300175505ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirF/dir19a2/000077500000000000000000000000001417231051300207235ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirF/dir19a2/dir19b2/000077500000000000000000000000001417231051300220775ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirF/dir19a2/dir19b2/file19c1000066400000000000000000000000041417231051300233310ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirF/dir19a2/dir19b3/000077500000000000000000000000001417231051300221005ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirF/dir19a2/dir19b3/file19c1000066400000000000000000000000041417231051300233320ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirF/dir19a2/file19b2000066400000000000000000000000041417231051300221550ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirF/dir19a2/file19b3000066400000000000000000000000041417231051300221560ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirF/dir19a3/000077500000000000000000000000001417231051300207245ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirF/dir19a3/file19b1000066400000000000000000000000041417231051300221550ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirF/file19a2000066400000000000000000000000041417231051300210010ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirF/file19a3000066400000000000000000000000041417231051300210020ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirG/000077500000000000000000000000001417231051300175515ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirG/diry/000077500000000000000000000000001417231051300205205ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirG/diry/file_000066400000000000000000000000131417231051300215130ustar00rootroot00000000000000diry/file_ charliecloud-0.26/examples/copy/dirG/filey000066400000000000000000000000041417231051300205760ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirG/s_dir1000066400000000000000000000000041417231051300206470ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirG/s_dir4/000077500000000000000000000000001417231051300207355ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirG/s_dir4/file_000066400000000000000000000000151417231051300217320ustar00rootroot00000000000000s_dir4/file_ charliecloud-0.26/examples/copy/dirG/s_file1000066400000000000000000000000041417231051300210100ustar00rootroot00000000000000new charliecloud-0.26/examples/copy/dirG/s_file4/000077500000000000000000000000001417231051300210765ustar00rootroot00000000000000charliecloud-0.26/examples/copy/dirG/s_file4/file_000066400000000000000000000000161417231051300220740ustar00rootroot00000000000000s_file4/file_ charliecloud-0.26/examples/copy/fileA000066400000000000000000000000061417231051300176230ustar00rootroot00000000000000fileA charliecloud-0.26/examples/copy/fileB000066400000000000000000000000061417231051300176240ustar00rootroot00000000000000fileB charliecloud-0.26/examples/copy/test.bats000066400000000000000000000134721417231051300205250ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" @test "${ch_tag}/ls" { scope standard prerequisites_ok copy # "ls -F" trailing symbol list: https://unix.stackexchange.com/a/82358 diff -u - <(ch-run --cd /test "$ch_img" -- ls -1FR .) < %l\n' | sort) < dirCb l: ./dir14/symlink-to-fileDa -> fileDa l: ./dir15/symlink-to-fileDa -> fileDa l: ./dir16/symlink-to-dirEb -> dirEb l: ./dir20/s_dir2 -> filey l: ./dir20/s_dir3 -> diry l: ./dir20/s_file2 -> filey l: ./dir20/s_file3 -> diry l: ./symlink-to-dir01c -> dir01c l: ./symlink-to-dir01d -> /test/dir01d l: ./symlink-to-dir01e -> dir01e l: ./symlink-to-dir01f -> /test/dir01f l: ./symlink-to-dir01g -> dir01g l: ./symlink-to-dir01h -> /test/dir01h l: ./symlink-to-dir07c -> dir07c l: ./symlink-to-dir07d -> /test/dir07d l: ./symlink-to-dir07e -> dir07e l: ./symlink-to-dir07f -> /test/dir07f l: ./symlink-to-file4 -> file4 l: ./symlink-to-file5 -> /test/file5 EOF } charliecloud-0.26/examples/distroless/000077500000000000000000000000001417231051300201055ustar00rootroot00000000000000charliecloud-0.26/examples/distroless/Dockerfile000066400000000000000000000005721417231051300221030ustar00rootroot00000000000000# ch-test-scope: standard # ch-test-arch-exclude: aarch64 # issue #1249 # ch-test-arch-exclude: ppc64le # base image unavailable # Distroless is a Google project providing slim images that contain runtime # dependencies only. https://github.com/GoogleContainerTools/distroless # The python3 image was chosen for ease of testing. FROM gcr.io/distroless/python3 COPY hello.py / charliecloud-0.26/examples/distroless/hello.py000077500000000000000000000000521417231051300215620ustar00rootroot00000000000000#!/usr/bin/python3 print("Hello, World!") charliecloud-0.26/examples/distroless/test.bats000066400000000000000000000004561417231051300217440ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope standard prerequisites_ok distroless } @test "${ch_tag}/hello" { run ch-run "$ch_img" -- /hello.py echo "$output" [[ $status -eq 0 ]] [[ $output = 'Hello, World!' ]] } charliecloud-0.26/examples/dont-init-ucx-on-intel-cray.patch000066400000000000000000000014471417231051300241220ustar00rootroot00000000000000diff --git a/ompi/mca/pml/ucx/pml_ucx_component.c b/ompi/mca/pml/ucx/pml_ucx_component.c index ff0040f18c..e8cf903860 100644 --- a/ompi/mca/pml/ucx/pml_ucx_component.c +++ b/ompi/mca/pml/ucx/pml_ucx_component.c @@ -14,6 +14,9 @@ #include +#ifdef HAVE_UNISTD_H +#include +#endif static int mca_pml_ucx_component_register(void); static int mca_pml_ucx_component_open(void); @@ -131,6 +134,11 @@ mca_pml_ucx_component_init(int* priority, bool enable_progress_threads, { int ret; + if ((0 == access("/sys/class/ugni/", F_OK) || (0 == access("/sys/class/hfi1/", F_OK)))){ + PML_UCX_VERBOSE(1, "Cray or Intel HSN detected, removing UCX from consideration"); + return NULL; + } + if ( (ret = mca_pml_ucx_init()) != 0) { return NULL; } charliecloud-0.26/examples/exhaustive/000077500000000000000000000000001417231051300200775ustar00rootroot00000000000000charliecloud-0.26/examples/exhaustive/Dockerfile000066400000000000000000000027571417231051300221040ustar00rootroot00000000000000# This Dockerfile aims to have at least one of everything, to exercise the # comprehensiveness of Dockerfile feature support. # # FIXME: That focus is a bit out of date. I think really what is here is the # ways we want to exercise ch-image in ways we care about the resulting image. # Exercises where we don't care are in test/build/50_dockerfile.bats. But, I # don't want to do the refactoring right now. # # See: https://docs.docker.com/engine/reference/builder # # ch-test-scope: full # ch-test-builder-include: ch-image # Use a moderately complex image reference. FROM registry-1.docker.io:443/library/alpine:3.9 AS stage1 RUN pwd WORKDIR /usr/local/src RUN pwd RUN ls --color=no -lh RUN apk add --no-cache bc RUN ["echo", "hello \n${chse_2} \${chse_2} ${NOTSET}"] # should print: # a -${chse_2}- b -value2- c -c- d -d- RUN echo 'a -${chse_2}-' "b -${chse_2}-" "c -${NOTSET:-c}-" "d -${chse_2:+d}-" RUN env # WORKDIR. See test/build/50_ch-image.bats where we validate this all worked OK. # FIXME: test with variable # # filesystem root WORKDIR / RUN mkdir workdir # absolute path, no mkdir WORKDIR /workdir RUN touch file # absolute path, mkdir RUN mkdir /workdir/abs2 WORKDIR /workdir/abs2 RUN touch file # relative path, no mkdir WORKDIR rel1 RUN touch file1 # relative path, 2nd level, no mkdir WORKDIR rel2 RUN touch file # relative path, parent dir, no mkdir WORKDIR .. RUN touch file2 # results RUN ls -R /workdir # TODO: # comment with trailing backslash (line continuation does not work in comments) charliecloud-0.26/examples/exhaustive/test.bats000066400000000000000000000007461417231051300217400ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope standard prerequisites_ok exhaustive } @test "${ch_tag}/WORKDIR" { output_expected=$(cat <<'EOF' /workdir: abs2 file /workdir/abs2: file rel1 /workdir/abs2/rel1: file1 file2 rel2 /workdir/abs2/rel1/rel2: file EOF ) run ch-run "$ch_img" -- ls -R /workdir echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") } charliecloud-0.26/examples/hello/000077500000000000000000000000001417231051300170155ustar00rootroot00000000000000charliecloud-0.26/examples/hello/Dockerfile000066400000000000000000000002521417231051300210060ustar00rootroot00000000000000# ch-test-scope: standard FROM centos:8 RUN dnf install -y --setopt=install_weak_deps=false openssh-clients \ && dnf clean all COPY . hello RUN touch /usr/bin/ch-ssh charliecloud-0.26/examples/hello/README000066400000000000000000000001751417231051300177000ustar00rootroot00000000000000This example is a hello world Charliecloud container. It demonstrates running a command on the host from inside a container. charliecloud-0.26/examples/hello/hello.sh000077500000000000000000000000461417231051300204570ustar00rootroot00000000000000#!/bin/sh set -e echo 'hello world' charliecloud-0.26/examples/hello/test.bats000066400000000000000000000012771417231051300206560ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope standard prerequisites_ok hello } @test "${ch_tag}/hello" { run ch-run "$ch_img" -- /hello/hello.sh echo "$output" [[ $status -eq 0 ]] [[ $output = 'hello world' ]] } @test "${ch_tag}/distribution sanity" { # Try various simple things that should work in a basic Debian # distribution. (This does not test anything Charliecloud manipulates.) ch-run "$ch_img" -- /bin/bash -c true ch-run "$ch_img" -- /bin/true ch-run "$ch_img" -- find /etc -name 'a*' ch-run "$ch_img" -- sh -c 'echo foo | /bin/grep -E foo' ch-run "$ch_img" -- nice true } charliecloud-0.26/examples/lammps/000077500000000000000000000000001417231051300172035ustar00rootroot00000000000000charliecloud-0.26/examples/lammps/Dockerfile000066400000000000000000000034111417231051300211740ustar00rootroot00000000000000# ch-test-scope: full FROM openmpi WORKDIR /usr/local/src # Packages for building. RUN dnf install -y --setopt=install_weak_deps=false \ cmake \ patch \ python3-devel \ python3-pip \ python3-setuptools \ && dnf clean all # Building mpi4py from source to ensure it is built against our MPI build # Building numpy from source to work around issues seen on Aarch64 systems RUN pip3 install --no-binary :all: cython==0.29.24 mpi4py==3.1.1 numpy==1.19.5 #RUN ln -s /usr/bin/python3 /usr/bin/python # Build LAMMPS. ARG LAMMPS_VERSION=29Sep2021 RUN wget -nv https://github.com/lammps/lammps/archive/patch_${LAMMPS_VERSION}.tar.gz \ && tar xf patch_$LAMMPS_VERSION.tar.gz \ && mkdir lammps-${LAMMPS_VERSION}.build \ && cd lammps-${LAMMPS_VERSION}.build \ && cmake -DCMAKE_INSTALL_PREFIX=/usr/local \ -DCMAKE_BUILD_TYPE=Release \ -DBUILD_MPI=yes \ -DBUILD_LIB=on \ -DBUILD_SHARED_LIBS=on \ -DPKG_DIPOLE=yes \ -DPKG_KSPACE=yes \ -DPKG_POEMS=yes \ -DPKG_PYTHON=yes \ -DPKG_USER-REAXC=yes \ -DPKG_USER-MEAMC=yes \ -DLAMMPS_MACHINE=mpi \ ../lammps-patch_${LAMMPS_VERSION}/cmake \ && make -j $(getconf _NPROCESSORS_ONLN) install \ && ln -s /usr/local/src/lammps-patch_${LAMMPS_VERSION}/ /lammps \ && rm -f ../patch_$LAMMPS_VERSION.tar.gz RUN ldconfig # Patch in.melt to increase problem dimensions. COPY melt.patch /lammps/examples/melt RUN patch -p1 -d / < /lammps/examples/melt/melt.patch # Patch simple.py to uncomment mpi4py calls and disable file output. # Patch in.simple to increase problem dimensions. COPY simple.patch /lammps/python/examples RUN patch -p1 -d / < /lammps/python/examples/simple.patch charliecloud-0.26/examples/lammps/melt.patch000066400000000000000000000004741417231051300211720ustar00rootroot00000000000000--- a/lammps/examples/melt/in.melt 2014-01-07 14:43:31.000000000 -0700 +++ b/lammps/examples/melt/in.melt 2018-03-16 14:37:02.000000000 -0600 @@ -6,3 +6,3 @@ lattice fcc 0.8442 -region box block 0 10 0 10 0 10 +region box block 0 120 0 120 0 120 create_box 1 box @@ -32,2 +32,2 @@ thermo 50 -run 250 +run 3 charliecloud-0.26/examples/lammps/simple.patch000066400000000000000000000040141417231051300215140ustar00rootroot00000000000000--- /lammps/python/examples/simple.py 2019-09-20 09:51:15.000000000 -0600 +++ /lammps/python/examples/simple.py 2019-09-23 16:58:28.950720810 -0600 @@ -1,4 +1,4 @@ -#!/usr/bin/env python -i +#!/usr/bin/python3 # preceding line should have path for Python on your machine # simple.py @@ -28,12 +28,12 @@ me = 0 # uncomment this if running in parallel via mpi4py -#from mpi4py import MPI -#me = MPI.COMM_WORLD.Get_rank() -#nprocs = MPI.COMM_WORLD.Get_size() +from mpi4py import MPI +me = MPI.COMM_WORLD.Get_rank() +nprocs = MPI.COMM_WORLD.Get_size() from lammps import lammps -lmp = lammps() +lmp = lammps("mpi") # run infile one line at a time @@ -85,7 +85,7 @@ # test of new gather/scatter and box extract/reset methods # can try this in parallel and with/without atom_modify sort enabled -lmp.command("write_dump all custom tmp.simple id type x y z fx fy fz"); +#lmp.command("write_dump all custom tmp.simple id type x y z fx fy fz"); x = lmp.gather_atoms("x",1,3) f = lmp.gather_atoms("f",1,3) @@ -123,10 +123,10 @@ boxlo,boxhi,xy,yz,xz,periodicity,box_change = lmp.extract_box() if me == 0: print("Box info",boxlo,boxhi,xy,yz,xz,periodicity,box_change) -lmp.reset_box([0,0,0],[10,10,8],0,0,0) +#lmp.reset_box([0,0,0],[10,10,8],0,0,0) -boxlo,boxhi,xy,yz,xz,periodicity,box_change = lmp.extract_box() -if me == 0: print("Box info",boxlo,boxhi,xy,yz,xz,periodicity,box_change) +#boxlo,boxhi,xy,yz,xz,periodicity,box_change = lmp.extract_box() +#if me == 0: print("Box info",boxlo,boxhi,xy,yz,xz,periodicity,box_change) # uncomment if running in parallel via mpi4py -#print("Proc %d out of %d procs has" % (me,nprocs), lmp) +print("Proc %d out of %d procs has" % (me,nprocs), lmp) --- /lammps/python/examples/in.simple 2019-10-02 16:09:55.198770328 -0600 +++ /lammps/python/examples/in.simple 2019-10-02 16:10:21.263332834 -0600 @@ -5,7 +5,7 @@ atom_style atomic atom_modify map array lattice fcc 0.8442 -region box block 0 4 0 4 0 4 +region box block 0 120 0 120 0 120 create_box 1 box create_atoms 1 box mass 1 1.0 charliecloud-0.26/examples/lammps/test.bats000066400000000000000000000062021417231051300210350ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" # LAMMPS does have a test suite, but we do not use it, because it seems too # fiddly to get it running properly. # # 1. Running the command listed in LAMMPS' Jenkins tests [2] fails with a # strange error: # # $ python run_tests.py tests/test_commands.py tests/test_examples.py # Loading tests from tests/test_commands.py... # Traceback (most recent call last): # File "run_tests.py", line 81, in # tests += load_tests(f) # File "run_tests.py", line 22, in load_tests # for testname in list(tc): # TypeError: 'Test' object is not iterable # # Looking in run_tests.py, this sure looks like a bug (it's expecting a # list of Tests, I think, but getting a single Test). But it works in # Jenkins. Who knows. # # 2. The files test/test_*.py say that the tests can be run with # "nosetests", which they can, after setting several environment # variables. But some of the tests fail for me. I didn't diagnose. # # Instead, we simply run some of the example problems in a loop and see if # they exit with return code zero. We don't check output. # # Note that a lot of the other examples crash. I haven't diagnosed or figured # out if we care. # # We are open to patches if anyone knows how to fix this situation reliably. # # [1]: https://github.com/lammps/lammps-testing # [2]: https://ci.lammps.org/job/lammps/job/master/job/testing/lastSuccessfulBuild/console setup () { scope full prerequisites_ok "$ch_tag" multiprocess_ok } lammps_try () { # These examples cd because some (not all) of the LAMMPS tests expect to # find things based on $CWD. infiles=$(ch-run --cd "/lammps/examples/${1}" "$ch_img" -- \ bash -c "ls in.*") for i in $infiles; do printf '\n\n%s\n' "$i" # shellcheck disable=SC2086 $ch_mpirun_core ch-run --join --cd /lammps/examples/$1 "$ch_img" -- \ lmp_mpi -log none -in "$i" done } @test "${ch_tag}/crayify image" { crayify_mpi_or_skip "$ch_img" } @test "${ch_tag}/using all cores" { # shellcheck disable=SC2086 run $ch_mpirun_core ch-run --join "$ch_img" -- \ lmp_mpi -log none -in /lammps/examples/melt/in.melt echo "$output" [[ $status -eq 0 ]] ranks_found=$( echo "$output" \ | grep -F 'MPI tasks' \ | tail -1 \ | sed -r 's/^.+with ([0-9]+) MPI tasks.+$/\1/') echo "ranks expected: ${ch_cores_total}" echo "ranks found: ${ranks_found}" [[ $ranks_found -eq "$ch_cores_total" ]] } @test "${ch_tag}/crack" { lammps_try crack; } @test "${ch_tag}/dipole" { lammps_try dipole; } @test "${ch_tag}/flow" { lammps_try flow; } @test "${ch_tag}/friction" { lammps_try friction; } @test "${ch_tag}/melt" { lammps_try melt; } @test "${ch_tag}/mpi4py simple" { $ch_mpirun_core ch-run --join --cd /lammps/python/examples "$ch_img" -- \ ./simple.py in.simple } @test "${ch_tag}/revert image" { unpack_img_all_nodes "$ch_cray" } charliecloud-0.26/examples/lustre/000077500000000000000000000000001417231051300172305ustar00rootroot00000000000000charliecloud-0.26/examples/lustre/Dockerfile000066400000000000000000000016661417231051300212330ustar00rootroot00000000000000# ch-test-scope: full # ch-test-arch-exclude: aarch64 # No lustre RPMS for aarch64 FROM centos:8 # Install lustre-client dependencies RUN dnf install -y --setopt=install_weak_deps=false \ e2fsprogs-libs \ wget \ perl \ && dnf clean all ARG LUSTRE_VERSION=2.12.6 ARG LUSTRE_URL=https://downloads.whamcloud.com/public/lustre/lustre-${LUSTRE_VERSION}/el8/client/RPMS/x86_64/ # The lustre-client rpm has a dependency on the kmod-lustre-client rpm, this is # not required for our tests and frequently is incompatible with the kernel # headers in the container, using the --nodeps flag to work around this. # NOTE: The --nodeps flag ignores all dependencies not just kmod-lustre-client, # this could surpress a legitimate failure at build time and lead to odd # behavior at runtime. RUN wget ${LUSTRE_URL}/lustre-client-${LUSTRE_VERSION}-1.el8.x86_64.rpm \ && rpm -i --nodeps *.rpm \ && rm -f *.rpm charliecloud-0.26/examples/lustre/test.bats000066400000000000000000000046331417231051300210700ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope full prerequisites_ok lustre if [[ $CH_TEST_LUSTREDIR = skip ]]; then # Assume that in a Slurm allocation, even if one node, Lustre should # be available for testing. msg='no Lustre test directory to bind mount' if [[ $SLURM_JOB_ID ]]; then pedantic_fail "$msg" else skip "$msg" fi elif [[ ! -d $CH_TEST_LUSTREDIR ]]; then echo "'${CH_TEST_LUSTREDIR}' is not a directory" 1>&2 exit 1 fi } clean_dir () { rmdir "${1}/set_stripes" rmdir "${1}/test_create_dir" rm "${1}/test_write.txt" rmdir "$1" } tidy_run () { ch-run -b "$binds" "$ch_img" -- "$@" } binds=${CH_TEST_LUSTREDIR}:/mnt/0 work_dir=/mnt/0/charliecloud_test @test "${ch_tag}/start clean" { clean_dir "${CH_TEST_LUSTREDIR}/charliecloud_test" || true mkdir "${CH_TEST_LUSTREDIR}/charliecloud_test" # fail if not cleaned up } @test "${ch_tag}/create directory" { tidy_run mkdir "${work_dir}/test_create_dir" } @test "${ch_tag}/create file" { tidy_run touch "${work_dir}/test_create_file" } @test "${ch_tag}/delete file" { tidy_run rm "${work_dir}/test_create_file" } @test "${ch_tag}/write file" { # sh wrapper to get echo output to the right place. Without it, the output # from echo goes outside the container. tidy_run sh -c "echo hello > ${work_dir}/test_write.txt" } @test "${ch_tag}/read file" { output_expected=$(cat <<'EOF' hello 0+1 records in 0+1 records out EOF ) # Using dd allows us to skip the write cache and hit the disk. run tidy_run dd if="${work_dir}/test_write.txt" iflag=nocache status=noxfer diff -u <(echo "$output_expected") <(echo "$output") } @test "${ch_tag}/striping" { tidy_run mkdir "${work_dir}/set_stripes" stripe_ct_old=$(tidy_run lfs getstripe --stripe-count "${work_dir}/set_stripes/") echo "old stripe count: $stripe_ct_old" expected_new=$((stripe_ct_old * 2)) echo "expected new stripe count: $expected_new" tidy_run lfs setstripe -c "$expected_new" "${work_dir}/set_stripes" stripe_ct_new=$(tidy_run lfs getstripe --stripe-count "${work_dir}/set_stripes") echo "actual new stripe count: $stripe_ct_new" [[ $expected_new -eq $stripe_ct_new ]] } @test "${ch_tag}/clean up" { clean_dir "${CH_TEST_LUSTREDIR}/charliecloud_test" } charliecloud-0.26/examples/mpibench/000077500000000000000000000000001417231051300174775ustar00rootroot00000000000000charliecloud-0.26/examples/mpibench/Dockerfile.mpich000066400000000000000000000005641417231051300225750ustar00rootroot00000000000000# ch-test-scope: full FROM mpich RUN dnf install -y which \ && dnf clean all # Compile the Intel MPI benchmark WORKDIR /usr/local/src ARG IMB_VERSION=IMB-v2021.3 RUN git clone --branch $IMB_VERSION --depth 1 \ https://github.com/intel/mpi-benchmarks \ && cd mpi-benchmarks/src_c \ && make CC=mpicc -j$(getconf _NPROCESSORS_ONLN) -f Makefile TARGET=MPI1 charliecloud-0.26/examples/mpibench/Dockerfile.openmpi000066400000000000000000000005661417231051300231460ustar00rootroot00000000000000# ch-test-scope: full FROM openmpi RUN dnf install -y which \ && dnf clean all # Compile the Intel MPI benchmark WORKDIR /usr/local/src ARG IMB_VERSION=IMB-v2021.3 RUN git clone --branch $IMB_VERSION --depth 1 \ https://github.com/intel/mpi-benchmarks \ && cd mpi-benchmarks/src_c \ && make CC=mpicc -j$(getconf _NPROCESSORS_ONLN) -f Makefile TARGET=MPI1 charliecloud-0.26/examples/mpibench/test.bats000066400000000000000000000120351417231051300213320ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope full prerequisites_ok "$ch_tag" # One iteration on most of these tests because we just care about # correctness, not performance. (If we let the benchmark choose, there is # an overwhelming number of errors when MPI calls start failing, e.g. if # CMA isn't working, and this makes the test take really long.) # # Large -npmin because we only want to test all cores. imb_mpi1=/usr/local/src/mpi-benchmarks/src_c/IMB-MPI1 imb_args="-iter 1 -npmin 1000000000" # On the HSN performance test, we do want to run multiple iterations to # reduce variability. The benchmark will automatically scale down the # number of iterations based on the expected run time, disabling that so # we get more consistent results. Npmin is ommitted as we are only running # with two processes, one per node. imb_perf_args="-iter 100 -iter_policy off" } check_errors () { [[ ! "$1" =~ 'errno =' ]] } check_finalized () { [[ "$1" =~ 'All processes entering MPI_Finalize' ]] } check_process_ct () { ranks_expected="$1" echo "ranks expected: ${ranks_expected}" ranks_found=$( echo "$output" \ | grep -F '#processes =' \ | sed -r 's/^.+#processes = ([0-9]+)\s+$/\1/') echo "ranks found: ${ranks_found}" [[ $ranks_found -eq "$ranks_expected" ]] } # one from "Single Transfer Benchmarks" @test "${ch_tag}/pingpong (guest launch)" { # shellcheck disable=SC2086 run ch-run $ch_unslurm "$ch_img" -- \ mpirun $ch_mpirun_np "$imb_mpi1" $imb_args PingPong echo "$output" [[ $status -eq 0 ]] check_errors "$output" check_process_ct 2 "$output" check_finalized "$output" } # one from "Parallel Transfer Benchmarks" @test "${ch_tag}/sendrecv (guest launch)" { # shellcheck disable=SC2086 run ch-run $ch_unslurm "$ch_img" -- \ mpirun $ch_mpirun_np "$imb_mpi1" $imb_args Sendrecv echo "$output" [[ $status -eq 0 ]] check_errors "$output" check_process_ct "$ch_cores_node" "$output" check_finalized "$output" } # one from "Collective Benchmarks" @test "${ch_tag}/allreduce (guest launch)" { # shellcheck disable=SC2086 run ch-run $ch_unslurm "$ch_img" -- \ mpirun $ch_mpirun_np "$imb_mpi1" $imb_args Allreduce echo "$output" [[ $status -eq 0 ]] check_errors "$output" check_process_ct "$ch_cores_node" "$output" check_finalized "$output" } @test "${ch_tag}/crayify image" { crayify_mpi_or_skip "$ch_img" } # This test compares OpenMPI's point to point bandwidth with all high speed # plugins enabled against the performance just using TCP. Pass if HSN # performance is at least double TCP. @test "${ch_tag}/using the high-speed network (host launch)" { multiprocess_ok [[ $ch_multinode ]] || skip "multinode only" [[ $ch_cray ]] && skip "Cray doesn't support running on tcp" # Verify we have known HSN devices present. (Note that -d tests for # directory, not device.) [[ ! -d /dev/infiniband ]] && pedantic_fail "no high speed network detected" # shellcheck disable=SC2086 hsn_enabled_bw=$($ch_mpirun_2_2node ch-run "$ch_img" -- \ "$imb_mpi1" $imb_perf_args Sendrecv | tail -n +35 \ | sort -nrk6 | head -1 | awk '{print $6}') # Configure network transport plugins to TCP only. export OMPI_MCA_pml=ob1 export OMPI_MCA_btl=tcp,self # shellcheck disable=SC2086 hsn_disabled_bw=$($ch_mpirun_2_2node ch-run "$ch_img" -- \ "$imb_mpi1" $imb_perf_args Sendrecv | tail -n +35 \ | sort -nrk6 | head -1 | awk '{print $6}') echo "Max bandwidth with high speed network: $hsn_enabled_bw MB/s" echo "Max bandwidth without high speed network: $hsn_disabled_bw MB/s" [[ ${hsn_disabled_bw%\.*} -lt $((${hsn_enabled_bw%\.*} / 2)) ]] } @test "${ch_tag}/pingpong (host launch)" { multiprocess_ok # shellcheck disable=SC2086 run $ch_mpirun_core ch-run --join "$ch_img" -- \ "$imb_mpi1" $imb_args PingPong echo "$output" [[ $status -eq 0 ]] check_errors "$output" check_process_ct 2 "$output" check_finalized "$output" } @test "${ch_tag}/sendrecv (host launch)" { multiprocess_ok # shellcheck disable=SC2086 run $ch_mpirun_core ch-run --join "$ch_img" -- \ "$imb_mpi1" $imb_args Sendrecv echo "$output" [[ $status -eq 0 ]] check_errors "$output" check_process_ct "$ch_cores_total" "$output" check_finalized "$output" } @test "${ch_tag}/allreduce (host launch)" { multiprocess_ok # shellcheck disable=SC2086 run $ch_mpirun_core ch-run --join "$ch_img" -- \ "$imb_mpi1" $imb_args Allreduce echo "$output" [[ $status -eq 0 ]] check_errors "$output" check_process_ct "$ch_cores_total" "$output" check_finalized "$output" } @test "${ch_tag}/revert image" { unpack_img_all_nodes "$ch_cray" } charliecloud-0.26/examples/mpihello/000077500000000000000000000000001417231051300175235ustar00rootroot00000000000000charliecloud-0.26/examples/mpihello/Dockerfile.mpich000066400000000000000000000001261417231051300226130ustar00rootroot00000000000000# ch-test-scope: full FROM mpich COPY . /hello WORKDIR /hello RUN make clean && make charliecloud-0.26/examples/mpihello/Dockerfile.openmpi000066400000000000000000000001471417231051300231650ustar00rootroot00000000000000# ch-test-scope: full FROM openmpi # This example COPY . /hello WORKDIR /hello RUN make clean && make charliecloud-0.26/examples/mpihello/Makefile000066400000000000000000000002351417231051300211630ustar00rootroot00000000000000BINS := hello CFLAGS := -std=gnu11 -Wall .PHONY: all all: $(BINS) .PHONY: clean clean: rm -f $(BINS) $(BINS): Makefile %: %.c mpicc $(CFLAGS) $< -o $@ charliecloud-0.26/examples/mpihello/hello.c000066400000000000000000000042251417231051300207750ustar00rootroot00000000000000/* MPI test program. Reports user namespace and rank, then sends and receives some simple messages. Patterned after: http://en.wikipedia.org/wiki/Message_Passing_Interface#Example_program */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include #define TAG 0 #define MSG_OUT 8675309 void fatal(char * fmt, ...); int op(int rank, int i); int rank, rank_ct; int main(int argc, char ** argv) { char hostname[HOST_NAME_MAX+1]; char mpi_version[MPI_MAX_LIBRARY_VERSION_STRING]; int mpi_version_len; int msg; MPI_Status mstat; struct stat st; stat("/proc/self/ns/user", &st); MPI_Init(&argc, &argv); MPI_Comm_size(MPI_COMM_WORLD, &rank_ct); MPI_Comm_rank(MPI_COMM_WORLD, &rank); if (rank == 0) { MPI_Get_library_version(mpi_version, &mpi_version_len); printf("%d: MPI version:\n%s\n", rank, mpi_version); } gethostname(hostname, HOST_NAME_MAX+1); printf("%d: init ok %s, %d ranks, userns %lu\n", rank, hostname, rank_ct, st.st_ino); fflush(stdout); if (rank == 0) { for (int i = 1; i < rank_ct; i++) { msg = MSG_OUT; MPI_Send(&msg, 1, MPI_INT, i, TAG, MPI_COMM_WORLD); msg = 0; MPI_Recv(&msg, 1, MPI_INT, i, TAG, MPI_COMM_WORLD, &mstat); if (msg != op(i, MSG_OUT)) fatal("0: expected %d back but got %d", op(i, MSG_OUT), msg); } } else { msg = 0; MPI_Recv(&msg, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &mstat); if (msg != MSG_OUT) fatal("%d: expected %d but got %d", rank, MSG_OUT, msg); msg = op(rank, msg); MPI_Send(&msg, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD); } if (rank == 0) printf("%d: send/receive ok\n", rank); MPI_Finalize(); if (rank == 0) printf("%d: finalize ok\n", rank); return 0; } void fatal(char * fmt, ...) { va_list ap; fprintf(stderr, "rank %d:", rank); va_start(ap, fmt); vfprintf(stderr, fmt, ap); va_end(ap); fprintf(stderr, "\n"); exit(EXIT_FAILURE); } int op(int rank, int i) { return i * rank; } charliecloud-0.26/examples/mpihello/slurm.sh000077500000000000000000000013351417231051300212260ustar00rootroot00000000000000#!/bin/bash #SBATCH --time=0:10:00 # Arguments: Path to tarball, path to image parent directory. set -e tar=$1 imgdir=$2 img=${2}/$(basename "${tar%.tar.gz}") if [[ -z $tar ]]; then echo 'no tarball specified' 1>&2 exit 1 fi printf 'tarball: %s\n' "$tar" if [[ -z $imgdir ]]; then echo 'no image directory specified' 1>&2 exit 1 fi printf 'image: %s\n' "$img" # Make Charliecloud available (varies by site). module purge module load friendly-testing module load charliecloud # Unpack image. srun ch-convert -o dir "$tar" "$imgdir" # MPI version in container. printf 'container: ' ch-run "$img" -- mpirun --version | grep -E '^mpirun' # Run the app. srun --cpus-per-task=1 ch-run "$img" -- /hello/hello charliecloud-0.26/examples/mpihello/test.bats000066400000000000000000000051401417231051300213550ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope full prerequisites_ok "$ch_tag" } count_ranks () { echo "$1" \ | grep -E '^0: init ok' \ | tail -1 \ | sed -r 's/^.+ ([0-9]+) ranks.+$/\1/' } @test "${ch_tag}/guest starts ranks" { # shellcheck disable=SC2086 run ch-run $ch_unslurm "$ch_img" -- mpirun $ch_mpirun_np /hello/hello echo "$output" [[ $status -eq 0 ]] rank_ct=$(count_ranks "$output") echo "found ${rank_ct} ranks, expected ${ch_cores_node}" [[ $rank_ct -eq "$ch_cores_node" ]] [[ $output = *'0: send/receive ok'* ]] [[ $output = *'0: finalize ok'* ]] } @test "${ch_tag}/crayify image" { crayify_mpi_or_skip "$ch_img" } @test "${ch_tag}/MPI version" { # shellcheck disable=SC2086 run ch-run $ch_unslurm "$ch_img" -- /hello/hello echo "$output" [[ $status -eq 0 ]] if [[ $ch_mpi = openmpi ]]; then [[ $output = *'Open MPI'* ]] else [[ $ch_mpi = mpich ]] if [[ $ch_cray ]]; then [[ $output = *'CRAY MPICH'* ]] else [[ $output = *'MPICH Version:'* ]] fi fi } @test "${ch_tag}/empty stderr" { multiprocess_ok output=$($ch_mpirun_core ch-run --join "$ch_img" -- \ /hello/hello 2>&1 1>/dev/null) echo "$output" [[ -z "$output" ]] } @test "${ch_tag}/serial" { # This seems to start up the MPI infrastructure (daemons, etc.) within the # guest even though there's no mpirun. # shellcheck disable=SC2086 run ch-run $ch_unslurm "$ch_img" -- /hello/hello echo "$output" [[ $status -eq 0 ]] [[ $output = *' 1 ranks'* ]] [[ $output = *'0: send/receive ok'* ]] [[ $output = *'0: finalize ok'* ]] } @test "${ch_tag}/host starts ranks" { multiprocess_ok echo "starting ranks with: ${mpirun_core}" guest_mpi=$(ch-run "$ch_img" -- mpirun --version | head -1) echo "guest MPI: ${guest_mpi}" # shellcheck disable=SC2086 run $ch_mpirun_core ch-run --join "$ch_img" -- /hello/hello echo "$output" [[ $status -eq 0 ]] rank_ct=$(count_ranks "$output") echo "found ${rank_ct} ranks, expected ${ch_cores_total}" [[ $rank_ct -eq "$ch_cores_total" ]] [[ $output = *'0: send/receive ok'* ]] [[ $output = *'0: finalize ok'* ]] } @test "${ch_tag}/Cray bind mounts" { [[ $ch_cray ]] || skip 'host is not a Cray' ch-run "$ch_img" -- mount | grep -F /var/opt/cray/alps/spool ch-run "$ch_img" -- mount | grep -F /var/opt/cray/hugetlbfs } @test "${ch_tag}/revert image" { unpack_img_all_nodes "$ch_cray" } charliecloud-0.26/examples/multistage/000077500000000000000000000000001417231051300200705ustar00rootroot00000000000000charliecloud-0.26/examples/multistage/Dockerfile000066400000000000000000000022571417231051300220700ustar00rootroot00000000000000# This image tests multi-stage build using GNU Hello. In the first stage, we # install a build environment and build Hello. In the second stage, we start # fresh again with a base image and copy the Hello executables. Tests # demonstrate that Hello runs and none of the build environment is present. # # ch-test-scope: standard FROM centos:8 AS buildstage # Build environment RUN dnf install -y \ gcc \ make \ wget WORKDIR /usr/local/src # GNU Hello. Install using DESTDIR to make copying below easier. RUN wget -nv https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz RUN tar xf hello-2.10.tar.gz \ && cd hello-2.10 \ && ./configure \ && make -j $(getconf _NPROCESSORS_ONLN) \ && make install DESTDIR=/hello RUN ls -ld /hello/usr/local/*/* FROM centos:8 RUN dnf install -y man # COPY the hello install over, by both name and index, making sure not to # overwrite existing contents. Recall that COPY works different than cp(1). COPY --from=0 /hello/usr/local/bin /usr/local/bin COPY --from=buildstage /hello/usr/local/share /usr/local/share COPY --from=buildstage /hello/usr/local/share/locale /usr/local/share/locale RUN ls -ld /usr/local/*/* charliecloud-0.26/examples/multistage/test.bats000066400000000000000000000027431417231051300217300ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { prerequisites_ok multistage } @test "${ch_tag}/hello" { run ch-run "$ch_img" -- hello -g 'Hello, Charliecloud!' echo "$output" [[ $status -eq 0 ]] [[ $output = 'Hello, Charliecloud!' ]] } @test "${ch_tag}/man hello" { ch-run "$ch_img" -- man hello > /dev/null } @test "${ch_tag}/files seem OK" { [[ $CH_TEST_PACK_FMT = squash-mount ]] && skip 'need directory image' # hello executable itself. test -x "${ch_img}/usr/local/bin/hello" # Present by default. test -d "${ch_img}/usr/local/share/applications" test -d "${ch_img}/usr/local/share/info" test -d "${ch_img}/usr/local/share/man" # Copied from first stage. test -d "${ch_img}/usr/local/share/locale" # Correct file count in directories. ls -lh "${ch_img}/usr/local/bin" [[ $(find "${ch_img}/usr/local/bin" -mindepth 1 -maxdepth 1 | wc -l) -eq 1 ]] ls -lh "${ch_img}/usr/local/share" [[ $(find "${ch_img}/usr/local/share" -mindepth 1 -maxdepth 1 | wc -l) -eq 4 ]] } @test "${ch_tag}/no first-stage stuff present" { # Can't run GCC. run ch-run "$ch_img" -- gcc --version echo "$output" [[ $status -eq 1 ]] [[ $output = *'gcc: No such file or directory'* ]] # No GCC or Make. ls -lh "${ch_img}/usr/bin/gcc" || true ! test -f "${ch_img}/usr/bin/gcc" ls -lh "${ch_img}/usr/bin/make" || true ! test -f "${ch_img}/usr/bin/make" } charliecloud-0.26/examples/obspy/000077500000000000000000000000001417231051300170465ustar00rootroot00000000000000charliecloud-0.26/examples/obspy/Dockerfile000066400000000000000000000026061417231051300210440ustar00rootroot00000000000000# ch-test-scope: standard # ch-test-arch-exclude: aarch64 # no obspy Conda package # ch-test-arch-exclude: ppc64le # no obspy Conda package? FROM debian:buster RUN apt-get update \ && apt-get install -y \ bzip2 \ wget \ && rm -rf /var/lib/apt/lists/* # Install Miniconda. Notes/gotchas: # # 1. Install into /usr/local. Some of the instructions [e.g., 1] warn # against putting conda in $PATH; others don't. However it seems to work # and then we don't need to muck with the path. # # 2. Use latest version so we catch sooner if things explode. # # [1]: https://docs.anaconda.com/anaconda/user-guide/faq/ WORKDIR /usr/local/src ARG MC_VERSION=latest ARG MC_FILE=Miniconda3-$MC_VERSION-Linux-x86_64.sh RUN wget -nv https://repo.anaconda.com/miniconda/$MC_FILE RUN bash $MC_FILE -bf -p /usr/local RUN rm -Rf $MC_FILE RUN which conda && conda --version # Disable automatic conda upgrades for predictable versioning. RUN conda config --set auto_update_conda False # Install obspy, also latest. This is a container, so don't bother creating a # new environment for obspy. # See: https://github.com/obspy/obspy/wiki/Installation-via-Anaconda RUN conda config --add channels conda-forge RUN conda install --yes obspy # Hello world program and input from docs. WORKDIR / RUN wget -nv http://examples.obspy.org/RJOB_061005_072159.ehz.new COPY hello.py . RUN chmod 755 ./hello.py charliecloud-0.26/examples/obspy/README000066400000000000000000000022541417231051300177310ustar00rootroot00000000000000We'd prefer to run the ObsPy test suite, but it seems quite finicky and we weren't able to get it to work. Problems with the test suite include: 1. Requires network access even for the non-network modules. We filed an issue about this [1] that did result in likely-actionable exclusions, though we haven't followed up. ObsPy also has a PR [2] unmerged as of 2021-08-04 that could replay the network traffic offline. 2. Expects to write within the install directory (e.g., site-packages/obspy/clients/filesystem/tests/data/tsindex_data), which is an antipattern even when not containerized. 3. LOTS of warnings, e.g. hundreds of deprecation gripes from NumPy as well as ObsPy itself. 4. Various errors, e.g. "AttributeError: 'bool' object has no attribute 'lower'" from within Matplotlib. (I was able to solve this one by choosing an older version of Matplotlib than the one depended on by the ObsPy Conda package, but we don't have time to maintain that.) 5. Can't get it to pass. ;) See also issue #64. Bottom line, I would love for an ObsPy person to maintain this example with passing ObsPy tests, but we don't have time to do so. charliecloud-0.26/examples/obspy/hello.py000066400000000000000000000011471417231051300205260ustar00rootroot00000000000000#!/usr/bin/env python3 # "Reading Seismograms" example from §3 of the ObsPy tutorial, with some of # the prints commented out and taking the plot file from the command line. # # See: https://docs.obspy.org/tutorial/code_snippets/reading_seismograms.html import sys # §3.0 from obspy import read st = read('RJOB_061005_072159.ehz.new') #print(st) #print(len(st)) tr = st[0] # assign first and only trace to new variable print(tr) # §3.1 print(tr.stats) #print(tr.stats.station) #print(tr.stats.datatype) # §3.2 #print(tr.data) print(tr.data[0:3]) print(len(tr)) # §3.3 tr.plot(outfile=sys.argv[1]) charliecloud-0.26/examples/obspy/obspy.png000066400000000000000000000517531417231051300207230ustar00rootroot00000000000000PNG  IHDR Y=#9tEXtSoftwareMatplotlib version3.4.2, https://matplotlib.org/+X pHYsaa?iSXIDATxyXTe?6 "Bf方/e{iOi=V2yR4MsO3T@qEP\a`e>?qT߯ss}3DD@DDDDDdv=<0!"""""aBDDDDD6l  """""& DDDDDd3L@f0!"""""aBDDDDD6l  """""& DU`ȑ#t&,X5kք7LQꢢWWWt .]RF'''jjGnn.{9ԭ[* &YYY;fe-.Xb6l7774nΝÇ~GGGezܸq[GyܹsGi3J?l14k  ҥKK/ F#OuoIxb+ٳgΝ;YYO^u&M@e˖طoI}DDh0rHdgg+uh߾=\]]ѼysDGG+uf2SZm5c"((* 63 OOOԮ]g-kq͛ 777?ի8aooo!!!&jggO>b|7oF۶m???̞=|k׮Jڵk~xxx`Ĉ0 Ef@~˖-/r5iܸ,_\DDz%++KL":tP>|̟?DqŋСC@_nR?ydիˁCΞ=kڴiɓ'h4ʅ &m̟?_nR"oNC/l1hqqqVk~ҥr!ɑ'OJZ?(Ѿ(~^l4mTҶm[YtyDɓ'%77WO.O>RߩS'YfXJìY󒗗'?xyy)1&&F$**JҤs2c eVZɬY$++K>siР䈈̙3eر%cɒ%w^C}KJJ9sF|||dEe-cǎI@@8p@ʕ+r׬Y#:u2)+8/֭[RZ59Hٵkztux{{+ӶJ@ GԩSeܸqJݻA""rh4b0z%$_@@YOڵk3fȑ#-._\\ e˖YRRq^W_}%O84jժUS6mjk׮7q_4pM^z׿^1X[zxexzz>xx9lݺǏ7722M65) ׮]CZZY͚5Ck.!""O<٘c?FBB._nݺwEAi1ydxxx|<?/Μ97n࣏>ȑ#q碸y U_lj's Eő#G^PvÇ+e'NG}GGG bZݻwGdd$Ν;Nܓb阴Nj/ 3fW_}/Z{Vӂ{18qظq#~G|gfQ 0`QF)jZVjM ?Bj0tPm;w`zlQ!//KJnZ\...wyo~}T[z5^x_ [t)֯_uAR;%1cЫW/nZB޽Qn]|7ذaoo߾/֭[V)SVqȑbի=,"1bjժYf)喎'88>>>ppp@K/aӦMEᶊ׿PNT^'O.122O?4jժUl7oƜ9syf峲qF888gϞ.X=zIЫW/*.&M}cQѣi& {}ԩS ڴi6ly%h4ڵkpwwJ* $""""O"wvv\CyPp,G\\BCCz777!..&aaa8rH  eի.-!""""III+0,R {guGNFc2FNN=\]]2|̞=۬<))l=DDDDDjQ^=w(E2 ҥK~z8p@JVC՚̧jVVL% ɯ+ʴi0i$իF*|{@H@~̛7'jԨcٲetFF /// 66V$::!!!E NNNgll,~wh4w;ڵ+7n\Q%Rih4"''zժUî]0aڵ 7YstK/se˖>5kp)lڴ WĿ|r,YիWG`` d=MDDz-KUf#FIٞ={0{l߿Dxx<(=ϟGVrJ%˜1cqFxyyᣏ>!CJVӭ^Gرc+"//~)"##eԮ]C""""ziy4 HET7`ݨV #,[:O?4~m<UaӧO۷_w(DDZeH@*O͛7Sq֭GDD]%&&BRAV>(VXa2JBrr2 ̄]vPըQF7o*f͂f-*#\\\Rpey+ DDDDd[L@*'''t:ܽ{~)^y={{E1zhܼy111SO=^7zht:hZkxJӕg俺u놗^z DDDDpbR ވX?m4=z4\\\닕+W֭[WT2d_{gѢE8}4&z=OA0`D[l[hV#Gg1)wvvFϞ=gezj4jOTTf͚B?QƤI59 U`'z= ,X-ZwFծ]2b ]"{)==/">C wwaq>///ٙܐ/%%I#GDZZ233g5 Ǐ/qL 5k=QFa׮]2& #Νxlذ [ƍMz=mۆ.]-Rжm[4lw.QK,#G|""""" T@x7Zl2|w/F6leƍ;Xv-h[ bRzUd1|p$&&\~R]tƍ7ߠFhҤ \]]{n8;;+/_jnnn4h}zj(7ƍ,^x͞~ +QiQwUIupBDEE~t:h4 4~W^x;"*<Q*1yODDE+Q*~ !!!. KD";;C!"* H׳gO\t |MyBDW_-0 bb* Fݶm[)GS`g[^_!QUe9;v ڵFA`` VX-X5kք7Lbr qTTN:ҥK[͚5\.etHMM5yQyرcT*^ZޡQ1L2l0iiiXnܹs߰d_8u6oެ$'ĉ'x/rֱcGdeea}&òeː;w8DTnݺeu\#^x!"RRe.t ;;;4o7ٳg_~V¨Qw^j5 f͚tJ-^Ò%K/ 00־… HKK[oZjwHDT Y&v܉~ڦ1FYr,"$ Ǐʕ+1}t;v IIIhӦ {=aaa6m ..Jg11 0 ʴV-q|GF֭g$''W3!vvv0`vZQk"""7?[U&ٳ' 9s-[Zj)ȧhp}aٳ;PJG~WΑQq= oF~xb 8q>/j3Zj 6m4+(" OEDD_H@.^<쳰G&Mйsg۷U捎FHHedd >>Fc""q?!"D裏ݻ_!"8s ~w"<<K,ABBh":w N ̝;-[,ЉK3DDGH@<<<?`ƌh4޽;Ə={O>;v,Zjƍwʈ*NNNX~=-ZOOO8p+W,!""k233_lQ%UenBѣzanڴiWj 111e݇j̟?sEVкuk:&%DD_8BDD"\TT YYY`2,zL@*>& DDT"!)<ȵ3?*JlDDT<& DDT".\v ;v(TDDSVSNpB^T\BDDπ=rJi4e%s@&& DDT"e~߾}gϞ{^7UL@DN8Q߼yAFFF""gey~ex_Q*I۷ ^EDTq,""2Sn-ֽ[Xhdg@*!""3s)uY|Q$$$X,EϞ=ѣݫLM^EDT,""2S%M)))ؾ}{cǎHJJz^1!"QQQ[nzF#F}/_+^EDTUdٲehӦ 7n\:QNXj`޽P5jѣ< BD0K.ARR}-KDDSI@RSSxb̚5ˤ<..tXXN:e AAAEi0jM^DDUQJJJmԨ|{j??ap=-WX^EDTyTd ///rNFLk4t:u ?><<L.#"W%"""LMiӦYUVATDDU_Ϟ=OСCV+ā!"|%XDDT?|2Ν;W.1L@~'_J!""k޼9ڵkhԨۇ:X΀ /HHH֭[e˖R3 DDT&.]EYZWg;o1qD?Tggghxa""?L@̜=z#[o?$''?ŋ+V+W#!CkZT*h4֭[HHH@N0b =\TŸpM1{AzW_k׮.jth4DJDTTaP)7o}]_|Ǜ׫WOn݊x{{ͬ=Jgggt:իW?*XlT*[n_UVaĈf#ݼyz3ik޽T>QΈ*JTH $cƌ ٰaxyyIjjj˥ IOOADD!z_@ڷo/^Z~mi޼4h:t?Sycǎ*?RwiݺhZ;wI}s,!!A$77W^y4h,YDf͚%7nP ~6mH˖-wwwJ%g϶s玤h۷oh8`/xQ9x?^rrrz`\z\~j;w ",&eYYYwAS> ݻwZjr5Ce+CWfFm㋯K/h>OOOeˤK.ʴgggm۶_l5j:u""B”nݺիE˷~+ ,7oʶm۔y7n,2|8pKTT#y 'Z0QEP/z/rtfh,>vP^L>C BZ VkRjV6m&Md2_M6mŋԩGGG 8AAA0a>}HKKCLL2ԩn OOO}46ml<䓰Czz:Μ9MBR?@֭a4qիÇnݺ8u֬Ym۶x憄|ץ ǎÆ 憰0;gΜ1 _~dgg[6?&OիWcΜ98{, ݻwGǣM6ׯ6l؀ 6`ҥ&UHwWŐHx=̞{r&ʿWÆ ݭ.ӼysYpźX,S aÆɅ $%%Ez)_G_|!"")))˅ dܸqr-1 ;HFFrqdՒ+{%yyy!6msΉȧ~*_~$$$_|!gϞ۷o[dddԩSerU垄RFr3|S̙3j/R+Ye~ 7/^;wbĈ*0gDDh; *gpttDff&Pn]YYY%n+x,((H޶mz衬N:^''',^]tAϞ=aÆ᭷;wСC١ZjKP^=?«T*ѣx饗{q4kLĉ_1cƌmo||<AT*΀*7nH^E6l(;w,r!$"W/bbDȺڵkS[7nƍYo0ٳuVDK۶mK;m4 j@+8?~FҴqFΖ#GJBB\zUDD[ N2Y˗effNN :lHX~q/j[Ie~T&"W/OdGWf\]]M@N8!&udbcceٲeңG ׯ_c qqq/-=z7o:uRb),##dڰ=z[Z=֌FDGG[Me~K@8EDt 5k֔wW^~[<-ZPnzg}uW_}[.\b4nݺի.\ɓ'6Y6''4j۷qq<%^*LL% m,qpp';w`ڵ7FRR௿RF)2eI/"{%I&ҥKf8::իWgADeL11s.v &w DNNMfV[n pׯo2ڵk͖I71!""L@ʕ+.Qprr2|с򠠠{%XDDdn \lYY\+aÆYLĬYL+bVrhxG/iRTZ 0ӦMÔ)S:e̙&U59LM= Zj8}4MʧO'*1^u 0ÇWлwx ={amߴn^ &.\P?쳰CJJ cB+ݻwt٤lܸu߿_~%!"LTVW" )x_}r[Ė /""R裏"&&O~vJ;)| /!"֣Gl۶T*K5j*!"oZ[QR/X* Vʲwww ???h49ٶҫHI ;v_:;;c̘1Tqc>>&3gDjj*^k^ùs4i~$%%!11s-*Һ\i׮])rYw7ߔFXDDT*[={8+W̙3hЮ];k׮DFF_D˖-3f`ժU>QԵkW:}QiADTUd޽u}Y;w 99_XXN:3KHH@VV hZ ŽbBDTU$77o&/^lV`ooWWWL@)Ƥ.ܒCyիW`ѢE<"۽{w۷b{www}l' ;vZ?`۷?͛ӧ.\#GhӦ ::!!!`*mEGGApqq.''2HDTL:׮]C޽;ټysy@DD Hq"""VZ믿FΝf:u 6m_2d:wW^yAAA7oc3*:`ɒ%ʏ9ŋ |XDȺ͛pDDTv*}i2mooooo咫9s`̘1SW_QFP|'ׯZ-{9" }|@ hؿQQ*V NDT999`֧z w~uHo$Vp~[#Ye~ZG"""*Q}glDDTQp}ѱcR""2c2)B$DDTkIu""'L@ATaxȶ&SO!33 "`BDDfhn̙ JcX`"" G'""3X~qmQU3 DDdnݺ6[פIbQbBDD%V|'&U\*C "*wDDtQSЉlJ$ k֬> F-[J]VVYdوA`ȑζuDDO?!0jԨSO@_#GbٲeHOOǫA)3gDjj*^k^ùs4i~$%%!11s-M!"\\\_a(F˗wDDt*}r5Ԯ];wJBxx8\wV\3gBѠ]v߿?֮] ċ/-[3fUss*J={wDDTETYf_>vڅ>۷+7ohl͛qJQO@cǎ-yxx`ǎHOO;#ݻXL4 s*˾K޽;RSS1j(<͵6Q#wDDTT""DIT*\~>>>ESn]lڴ -ZiӐ%K~w3/^ٳgѺukܼyժU`ʕرcj@zz:4̓o )U崙 HI%&&"55U,..J}XX84jHI>iӦ8uT hZ\I@rrr0|pL<Ngtfu:u̟?ʫ^ze%DDdI޽矗wDD=޽;- ޯa`ĈUf͚jjY]~Z.r=ӦMCzzJJJ-%"e? "Pرۘ0a]m۶9Upp0bcch4h...ٳgGGG@LL &O\z= HI z0sL8p7n4K ǎCzz:͛p@FШQ#,X_}9<Q HF_̙ӧOWjP|'ׯP^=ʲضm<==7`pp(BDDDDDUV Üã2|? ?wpDDDDT/<'s8^""""H޽<%Xh4ٳFRRR=EiZԫW}Xc>؇ݻ5"`ggu!Vnʏ}X5+?a>*ꙏ|3-"""""*  䄙3g cV ʏ}Xx: π0!"""""aBDDDDD6l#Gܹ3bcc f͚Ɣ)SP(:u¥K#F jj!!!EƐ{uօJBrrI}VVY6Y VX  7ƹs*:99Q7n`֭xGgywQ̟7R?[/""͚5;tRٳ۷/jԨ5k"<}Faqƙ/GGGbI&!00hٲ%Wd<FӦMZjatJ}||<ڷoWWW4oѥ'*/[̙3Q^=h44l+V}h!?/11...xo*ZHLJN9sHRR'|"""exv4nX/_.""z^d咕%SL:(>\ϟ_rrrdr! ׯ_7siР<.lчΝNS؇Y(3 z @>}0j(XjyZ-jź?8PZ5 :m۶Ν;!!!ʍVpKj5򐙙iq=۲Wwyo~S[z5^x8::ҥK~z[*ʤ.!!ˋ}cm$sz]l{`ĈUf͚Ujժ^zaذa%ac>J6mX|yb*x\U>,ÒE?^z}}E*b"D& >>EQ-kqyyyפ\~ei{/^\*eXnWE0o$4l>>>Xp!{b;,ޛ؇H*JF!ݻwl͛7K@@\xQ_.!!!f`XBzL:du։NYvKBBB1z(YYYJo-}V+9sLXkJ[W^&MȊ+Lڰ6 ֶm۔" a_"SDdRfM6KKKMy}aMLLx{{-iiiSO|^^}U̔7#5jԐ={Xm}Kf$==xl"gΜ(׮]ݻ /ԷjJ̙#z^K!oFܹ#yyyw^h4E\q>4WS뭷ޒaÆɭ[,⽫h}"ݫRK@8;;ڷo|Rzuɓ'|>r䈄tAiJD}hCZn-vG@@0y̔!Cիe-. cƌF#% >"3:矗ӧ-o>qssS;w,&;`RWpy晌UܾYbZÇ[,ݸqCz%...ҰaCٹsRyf󓼼<傃eժU"R> NNN&u^tIҥK""wI Uԩ#F2Ү];qvvf͚bT*g}VEVKpp|J]>~\U>آ *<?!?tT"%v0DDDDDD1!"""""aBDDDDD6l  """""& DDDDDd3L@f0!"""""aBDDDDD6l  """""& DDDDDd3L@f0!"""""aBDDDDD6PfxKIENDB`charliecloud-0.26/examples/obspy/test.bats000066400000000000000000000010541417231051300207000ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope standard prerequisites_ok obspy } @test "${ch_tag}/hello" { # Remove prior test's plot to avoid using it if something else breaks. rm -f "$BATS_TMPDIR"/obspy.png ch-run -b "$BATS_TMPDIR":/mnt "$ch_img" -- /hello.py /mnt/obspy.png # Compare reference image to generated image. # FIXME: Bitwise comparison is unreliable; see #1171 and #1172. #cmp "$CHTEST_EXAMPLES_DIR"/obspy/obspy.png "$BATS_TMPDIR"/obspy.png } charliecloud-0.26/examples/paraview/000077500000000000000000000000001417231051300175305ustar00rootroot00000000000000charliecloud-0.26/examples/paraview/Dockerfile000066400000000000000000000033711417231051300215260ustar00rootroot00000000000000# ch-test-scope: full FROM openmpi WORKDIR /usr/local/src RUN dnf install -y --setopt=install_weak_deps=false \ cmake \ expat-devel \ libpng-devel \ llvm \ llvm-devel \ mesa-libGL \ mesa-libGL-devel \ mesa-libOSMesa \ mesa-libOSMesa-devel \ python3 \ python3-devel \ python3-mako \ python3-pip \ && dnf clean all RUN pip3 install --no-binary=mpi4py \ cython \ mpi4py # ParaView. Use system libpng to work around issues linking with NEON specific # symbols on ARM. ARG PARAVIEW_MAJORMINOR=5.9 ARG PARAVIEW_VERSION=5.9.1 RUN wget -nv -O ParaView-v${PARAVIEW_VERSION}.tar.xz "https://www.paraview.org/paraview-downloads/download.php?submit=Download&version=v${PARAVIEW_MAJORMINOR}&type=binary&os=Sources&downloadFile=ParaView-v${PARAVIEW_VERSION}.tar.xz" \ && tar xf ParaView-v${PARAVIEW_VERSION}.tar.xz \ && mkdir ParaView-v${PARAVIEW_VERSION}.build \ && cd ParaView-v${PARAVIEW_VERSION}.build \ && cmake -DCMAKE_INSTALL_PREFIX=/usr/local \ -DCMAKE_BUILD_TYPE=Release \ -DBUILD_TESTING=OFF \ -DBUILD_SHARED_LIBS=ON \ -DPARAVIEW_ENABLE_PYTHON=ON \ -DPARAVIEW_BUILD_QT_GUI=OFF \ -DVTK_USE_X=OFF \ -DOPENGL_INCLUDE_DIR=IGNORE \ -DOPENGL_gl_LIBRARY=IGNORE \ -DVTK_OPENGL_HAS_OSMESA=ON \ -DVTK_USE_OFFSCREEN=OFF \ -DPARAVIEW_USE_MPI=ON \ -DPYTHON_EXECUTABLE=/usr/bin/python3 \ -DVTK_USE_SYSTEM_PNG=ON \ ../ParaView-v${PARAVIEW_VERSION} \ && make -j $(getconf _NPROCESSORS_ONLN) install \ && rm -Rf ../ParaView-v${PARAVIEW_VERSION}* charliecloud-0.26/examples/paraview/cone.2ranks.vtk000066400000000000000000000006611417231051300224040ustar00rootroot00000000000000# vtk DataFile Version 5.1 vtk output ASCII DATASET POLYDATA POINTS 12 float 0.5 0 0 -0.5 0.5 0 -0.5 0.25 0.433013 -0.5 -0.25 0.433013 -0.5 -0.5 6.12323e-17 -0.5 -0.25 -0.433013 -0.5 0.25 -0.433013 0.5 0 0 -0.5 -0.5 6.12323e-17 -0.5 -0.25 -0.433013 -0.5 0.25 -0.433013 -0.5 0.5 -1.22465e-16 POLYGONS 8 24 OFFSETS vtktypeint64 0 6 9 12 15 18 21 24 CONNECTIVITY vtktypeint64 6 5 4 3 2 1 0 1 2 0 2 3 0 3 4 7 8 9 7 9 10 7 10 11 charliecloud-0.26/examples/paraview/cone.nranks.vtk000066400000000000000000000011271417231051300224760ustar00rootroot00000000000000# vtk DataFile Version 5.1 vtk output ASCII DATASET POLYDATA POINTS 22 float 0.5 0 0 -0.5 0.5 0 -0.5 0.25 0.433013 -0.5 -0.25 0.433013 -0.5 -0.5 6.12323e-17 -0.5 -0.25 -0.433013 -0.5 0.25 -0.433013 0.5 0 0 -0.5 0.25 0.433013 -0.5 -0.25 0.433013 0.5 0 0 -0.5 -0.25 0.433013 -0.5 -0.5 6.12323e-17 0.5 0 0 -0.5 -0.5 6.12323e-17 -0.5 -0.25 -0.433013 0.5 0 0 -0.5 -0.25 -0.433013 -0.5 0.25 -0.433013 0.5 0 0 -0.5 0.25 -0.433013 -0.5 0.5 -1.22465e-16 POLYGONS 8 24 OFFSETS vtktypeint64 0 6 9 12 15 18 21 24 CONNECTIVITY vtktypeint64 6 5 4 3 2 1 0 1 2 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 charliecloud-0.26/examples/paraview/cone.png000066400000000000000000000075021417231051300211660ustar00rootroot00000000000000PNG  IHDRݡ IDATx^ݏ]y`;m-F/bk' T-mJiBrDeڪRՋREBI#EJ>6xl=3{19|zz+3^ }MO= , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ˜k:'g{'Oo: $`}d}UUؿ!cؿK Bu`>F?%[أ? [u>XK 5>XK ~pp)(+Xw߁NmH` _L`Z ր탐-^A}]*(X_L}lI샵d &`d &`d UDZyjdm)"XnlVg탐-GW#-MJa%[0̃>XK`p+}l 2V`-قrV`-ق]w6Xvv.`Mq%[&`źnlA z>٢dy+W#-ʔgk0XdrdЯ_L(Any-X9~52"WA*X]BIVy*Xd }p@E\ׯF&[DI jdE,W-$X'[/`nolw0"Mm:YX.~wYW?v}{>/]z0q9LX owֿlpᘶHG."Yk&XӕC暂ux)di).X]c-Cfgg4]^5٢Տ Ef65kpIMXk$ɭ!X)0v![E'^d dWb^d)/bp%[ BU慗lџ`K؈`ERTd; VTK` /b`e%Kl/*`!Q$Xel쒭VlC,%[%,z𒭼 ;vVDlGZxVND<1vVtEl%XLV;lE$Xt$ͱKb, xVŔ/J`.d+eE>vVM1^"K!XDl@˱KKJ7i,5Q'XocluI(˄%[,zdk /ٚhk쒭I,v  dxV[ 6څlO`t#]5v /`Al K`/dkp?v  `xVSv([,HEu`{}_3yJw_'ݳޛ~J1{Ѧ3_8-l>~׶VU鷛JJXUՃ6Jb6o^m:(`]o:(`ϙ)knFDIkJH2XH1X.݁:Ik^) VZY;P#`- P#`^,FZ,FZIWL]iXY+s?̧a$'VUKzF'mvC7Bq ii֯{/^?|87RJ_~G?q^G6^ء?va' @r֏W_|}7W=W_[[:7un-+[VWonuue~}إ >֭G=O5Ámz/5Գ{u媪v-UR妇z=ܼO++ydU'6}զ瀤%7aֻ~7mv'gZYY[]ė\Buv7g{gg_RSv֍#~=p[ 6, , , , , , , , , , , , , , , , , , , , , , , ?rLBkIENDB`charliecloud-0.26/examples/paraview/cone.py000066400000000000000000000024611417231051300210310ustar00rootroot00000000000000# Draw a cone and write it out to sys.argv[1] in a few different ways. All # output files should be bit-for-bit reproducible, i.e., no embedded # timestamps, hostnames, floating point error, etc. from __future__ import print_function import os import platform import sys import mpi4py.MPI import paraview.simple as pv # Version information. print("ParaView %d.%d.%d on Python %s" % (pv.paraview.servermanager.vtkSMProxyManager.GetVersionMajor(), pv.paraview.servermanager.vtkSMProxyManager.GetVersionMinor(), pv.paraview.servermanager.vtkSMProxyManager.GetVersionPatch(), platform.python_version())) # Even if you start multiple pvbatch using MPI, this script is only # executed by rank 0. Check this assumption. assert mpi4py.MPI.COMM_WORLD.rank == 0 # Output directory provided on command line. outdir = sys.argv[1] # Render a cone. pv.Cone() pv.Show() pv.Render() print("rendered") # PNG image (serial). filename = "%s/cone.png" % outdir pv.SaveScreenshot(filename) print(filename) # Legacy VTK file (ASCII, serial). filename = "%s/cone.vtk" % outdir pv.SaveData(filename, FileType="Ascii") print(filename) # XML VTK files (parallel). filename=("%s/cone.pvtp" % outdir) writer = pv.XMLPPolyDataWriter(FileName=filename) writer.UpdatePipeline() print(filename) # Done. print("done") charliecloud-0.26/examples/paraview/cone.serial.vtk000066400000000000000000000007511417231051300224630ustar00rootroot00000000000000# vtk DataFile Version 5.1 vtk output ASCII DATASET POLYDATA POINTS 7 float 0.5 0 0 -0.5 0.5 0 -0.5 0.25 0.433013 -0.5 -0.25 0.433013 -0.5 -0.5 6.12323e-17 -0.5 -0.25 -0.433013 -0.5 0.25 -0.433013 METADATA INFORMATION 2 NAME L2_NORM_RANGE LOCATION vtkDataArray DATA 2 0.5 0.707107 NAME L2_NORM_FINITE_RANGE LOCATION vtkDataArray DATA 2 0.5 0.707107 POLYGONS 8 24 OFFSETS vtktypeint64 0 6 9 12 15 18 21 24 CONNECTIVITY vtktypeint64 6 5 4 3 2 1 0 1 2 0 2 3 0 3 4 0 4 5 0 5 6 0 6 1 charliecloud-0.26/examples/paraview/test.bats000066400000000000000000000046241417231051300213700ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup () { scope full prerequisites_ok paraview indir=${CHTEST_EXAMPLES_DIR}/paraview outdir=$BATS_TMPDIR inbind=${indir}:/mnt/0 outbind=${outdir}:/mnt/1 if [[ $ch_multinode ]]; then # Bats only creates $BATS_TMPDIR on the first node. # shellcheck disable=SC2086 $ch_mpirun_node mkdir -p "$BATS_TMPDIR" fi } # The first two tests demonstrate ParaView as an "executable" to process a # non-containerized input deck (cone.py) and produce non-containerized output. # # .png: In previous versions, PNG output is antialiased with a single rank # and not with multiple ranks depending on the execution environment. # This is no longer the case as of version 5.5.4 but may change with # a new version of Paraview. # # .vtk: The number of extra and/or duplicate points and indexing of these # points into polygons varied by rank count on my VM, but not on the # cluster. The resulting VTK file is dependent on whether an image was # rendered serially or using 2 or n processes. # # We do not check .pvtp (and its companion .vtp) output because it's a # collection of XML files containing binary data and it seems too hairy to me. @test "${ch_tag}/crayify image" { crayify_mpi_or_skip "$ch_img" } @test "${ch_tag}/cone serial" { # shellcheck disable=SC2086 ch-run $ch_unslurm -b "$inbind" -b "$outbind" "$ch_img" -- \ pvbatch /mnt/0/cone.py /mnt/1 ls -l "$outdir"/cone* diff -u "${indir}/cone.serial.vtk" "${outdir}/cone.vtk" cmp "${indir}/cone.png" "${outdir}/cone.png" } @test "${ch_tag}/cone ranks=2" { multiprocess_ok # shellcheck disable=SC2086 $ch_mpirun_2 ch-run --join -b "$inbind" -b "$outbind" "$ch_img" -- \ pvbatch /mnt/0/cone.py /mnt/1 ls -l "$outdir"/cone* diff -u "${indir}/cone.2ranks.vtk" "${outdir}/cone.vtk" cmp "${indir}/cone.png" "${outdir}/cone.png" } @test "${ch_tag}/cone ranks=N" { multiprocess_ok # shellcheck disable=SC2086 $ch_mpirun_core ch-run --join -b "$inbind" -b "$outbind" "$ch_img" -- \ pvbatch /mnt/0/cone.py /mnt/1 ls -l "$outdir"/cone* diff -u "${indir}/cone.nranks.vtk" "${outdir}/cone.vtk" cmp "${indir}/cone.png" "${outdir}/cone.png" } @test "${ch_tag}/revert image" { unpack_img_all_nodes "$ch_cray" } charliecloud-0.26/examples/spack/000077500000000000000000000000001417231051300170135ustar00rootroot00000000000000charliecloud-0.26/examples/spack/Dockerfile000066400000000000000000000062641417231051300210150ustar00rootroot00000000000000# ch-test-scope: full FROM centos:8 # Note: Spack is a bit of an odd duck testing wise. Because it's a package # manager, the key tests we want are to install stuff (this includes the Spack # test suite), and those don't make sense at run time. Thus, most of what we # care about is here in the Dockerfile, and test.bats just has a few # trivialities. # # Spack does document use in Docker [2]. We do things somewhat differently, # but worth reviewing those docs. # # [1]: https://spack.readthedocs.io/en/latest/getting_started.html # [2]: https://spack.readthedocs.io/en/latest/workflows.html#using-spack-to-create-docker-images # Packages needed to install Spack [1]. # # bzip, file, patch, unzip, and which are packages needed to install # Charliecloud with Spack. These are in Spack's Docker example [2] but are not # documented as prerequisites [1]. texinfo is an undocumented dependency of # Spack's m4, and that package is in PowerTools. RUN sed -Ei 's/enabled=0/enabled=1/' \ /etc/yum.repos.d/CentOS-Linux-PowerTools.repo RUN dnf install -y --setopt=install_weak_deps=false \ bzip2 \ gcc \ gcc-c++ \ git \ gnupg2-smime \ file \ make \ patch \ python3 \ texinfo \ unzip \ which \ && dnf clean all # Certain Spack packages (e.g., tar) puke if they detect themselves being # configured as UID 0. This is the override. See issue #540 and [2]. ARG FORCE_UNSAFE_CONFIGURE=1 # Install Spack. This follows the documented procedure to run it out of the # source directory. There apparently is no "make install" type operation to # place it at a standard path ("spack clone" simply clones another working # directory to a new path). # # Depending on what's commented below, we get either Spack’s “develop” branch # or the latest released version. Using develop catches problems earlier, but # that branch has a LOT more churn and some of the problems might not occur in # a released version. I expect the right choice will change over time. ARG SPACK_REPO=https://github.com/spack/spack #RUN git clone --depth 1 $SPACK_REPO # tip of develop; faster clone RUN git clone $SPACK_REPO && cd spack && git checkout releases/latest # slow RUN cd spack && git status && git rev-parse --short HEAD # Set up environment to use Spack. (We can't use setup-env.sh because the # Dockerfile shell is sh, not Bash.) ENV PATH /spack/bin:$PATH RUN spack compiler find --scope system # Test: Some basic commands. RUN which spack RUN spack --version RUN spack compiler find RUN spack compiler list RUN spack compiler list --scope=system RUN spack compiler list --scope=user RUN spack compilers RUN spack spec charliecloud # Test: Install Charliecloud. # Kludge: here we specify an older python sphinx rtd_theme version because # newer default version, 0.5.0, introduces a dependency on node-js which doesn't # appear to build on gcc 4.8 or gcc 8.3 # (see: https://github.com/spack/spack/issues/19310). RUN spack spec charliecloud+docs^py-sphinx-rtd-theme@0.4.3 RUN spack install charliecloud+docs^py-sphinx-rtd-theme@0.4.3 # Clean up. RUN spack clean --all charliecloud-0.26/examples/spack/test.bats000066400000000000000000000014311417231051300206440ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" setup() { scope full prerequisites_ok spack export PATH=/spack/bin:$PATH } @test "${ch_tag}/version" { ch-run "$ch_img" -- spack --version } @test "${ch_tag}/compilers" { echo "spack compiler list" ch-run "$ch_img" -- spack compiler list echo "spack compiler list --scope=system" ch-run "$ch_img" -- spack compiler list --scope=system echo "spack compiler list --scope=user" ch-run "$ch_img" -- spack compiler list --scope=user echo "spack compilers" ch-run "$ch_img" -- spack compilers } @test "${ch_tag}/find" { run ch-run "$ch_img" -- spack find charliecloud echo "$output" [[ $status -eq 0 ]] [[ $output = *'charliecloud@'* ]] } charliecloud-0.26/examples/spark/000077500000000000000000000000001417231051300170325ustar00rootroot00000000000000charliecloud-0.26/examples/spark/Dockerfile000066400000000000000000000027461417231051300210350ustar00rootroot00000000000000# ch-test-scope: full # # Use Buster because Stretch JRE install fails with: # # tempnam() is so ludicrously insecure as to defy implementation. # tempnam: Cannot allocate memory # dpkg: error processing package openjdk-8-jre-headless:amd64 (--configure): # subprocess installed post-installation script returned error exit status 1 FROM debian:buster ARG DEBIAN_FRONTEND=noninteractive # Install needed OS packages. RUN apt-get update \ && apt-get install -y --no-install-recommends \ default-jre-headless \ less \ procps \ python3 \ wget \ && rm -rf /var/lib/apt/lists/* # We want ch-ssh RUN touch /usr/bin/ch-ssh # Download and install Spark. Notes: # # 1. We aren't using SPARK_NO_DAEMONIZE to make sure can deal with daemonized # applications. # # 2. Spark is installed to /opt/spark, which is Spark's new default location. ARG URLPATH=https://archive.apache.org/dist/spark/spark-3.2.0/ ARG DIR=spark-3.2.0-bin-hadoop3.2 ARG TAR=$DIR.tgz RUN wget -nv $URLPATH/$TAR \ && tar xf $TAR \ && mv $DIR /opt/spark \ && rm $TAR # Very basic default configuration, to make it run and not do anything stupid. RUN printf '\ SPARK_LOCAL_IP=127.0.0.1\n\ SPARK_LOCAL_DIRS=/tmp\n\ SPARK_LOG_DIR=/tmp\n\ SPARK_WORKER_DIR=/tmp\n\ ' > /opt/spark/conf/spark-env.sh # Move config to /mnt/0 so we can provide a different config if we want RUN mv /opt/spark/conf /mnt/0 \ && ln -s /mnt/0 /opt/spark/conf charliecloud-0.26/examples/spark/slurm.sh000077500000000000000000000043401417231051300205340ustar00rootroot00000000000000#!/bin/bash #SBATCH --time=0:10:00 # Run an example non-interactive Spark computation. Requires three arguments: # # 1. Image tarball # 2. Directory in which to unpack tarball # 3. High-speed network interface name # # Example: # # $ sbatch slurm.sh /scratch/spark.tar.gz /var/tmp ib0 # # Spark configuration will be generated in ~/slurm-$SLURM_JOB_ID.spark; any # configuration already there will be clobbered. set -e if [[ -z $SLURM_JOB_ID ]]; then echo "not running under Slurm" 1>&2 exit 1 fi tar=$1 img=$2 img=${img}/spark dev=$3 conf=${HOME}/slurm-${SLURM_JOB_ID}.spark # Make Charliecloud available (varies by site) module purge module load friendly-testing module load charliecloud # What IP address to use for master? if [[ -z $dev ]]; then echo "no high-speed network device specified" exit 1 fi master_ip=$( ip -o -f inet addr show dev "$dev" \ | sed -r 's/^.+inet ([0-9.]+).+/\1/') master_url=spark://${master_ip}:7077 if [[ -n $master_ip ]]; then echo "Spark master IP: ${master_ip}" else echo "no IP address for ${dev} found" exit 1 fi # Unpack image srun ch-convert -o dir "$tar" "$img" # Make Spark configuration mkdir "$conf" chmod 700 "$conf" cat < "${conf}/spark-env.sh" SPARK_LOCAL_DIRS=/tmp/spark SPARK_LOG_DIR=/tmp/spark/log SPARK_WORKER_DIR=/tmp/spark SPARK_LOCAL_IP=127.0.0.1 SPARK_MASTER_HOST=${master_ip} JAVA_HOME=/usr/lib/jvm/default-java/ EOF mysecret=$(cat /dev/urandom | tr -dc '0-9a-f' | head -c 48) cat < "${conf}/spark-defaults.sh" spark.authenticate true spark.authenticate.secret $mysecret EOF chmod 600 "${conf}/spark-defaults.sh" # Start the Spark master ch-run -b "$conf" "$img" -- /spark/sbin/start-master.sh sleep 10 tail -7 /tmp/spark/log/*master*.out grep -Fq 'New state: ALIVE' /tmp/spark/log/*master*.out # Start the Spark workers srun sh -c " ch-run -b '${conf}' '${img}' -- \ /spark/sbin/start-slave.sh ${master_url} \ && sleep infinity" & sleep 10 grep -F worker /tmp/spark/log/*master*.out tail -3 /tmp/spark/log/*worker*.out # Compute pi ch-run -b "$conf" "$img" -- \ /spark/bin/spark-submit --master "$master_url" \ /spark/examples/src/main/python/pi.py 1024 # Let Slurm kill the workers and master charliecloud-0.26/examples/spark/test.bats000066400000000000000000000121371417231051300206700ustar00rootroot00000000000000true # shellcheck disable=SC2034 CH_TEST_TAG=$ch_test_tag load "${CHTEST_DIR}/common.bash" # Note: If you get output like the following (piping through cat turns off # BATS terminal magic): # # $ ./bats ../examples/spark/test.bats | cat # 1..5 # ok 1 spark/configure # ok 2 spark/start # [...]/test/bats.src/libexec/bats-exec-test: line 329: /tmp/bats.92406.src: No such file or directory # [...]/test/bats.src/libexec/bats-exec-test: line 329: /tmp/bats.92406.src: No such file or directory # [...]/test/bats.src/libexec/bats-exec-test: line 329: /tmp/bats.92406.src: No such file or directory # # that means that mpirun is starting too many processes per node (you want 1). # One solution is to export OMPI_MCA_rmaps_base_mapping_policy= (i.e., set but # empty). setup () { scope standard prerequisites_ok spark [[ $CH_TEST_PACK_FMT = *-unpack ]] || skip 'issue #1161' umask 0077 # Unset these Java variables so the container doesn't use host paths. unset JAVA_BINDIR JAVA_HOME JAVA_ROOT spark_dir=${TMP_}/spark # runs before each test, so no mktemp spark_config=$spark_dir spark_log=/tmp/sparklog confbind=${spark_config}:/mnt/0 if [[ $ch_multinode ]]; then # We use hostname to determine the interface to use for this test, # avoiding complicated logic determining which interface is the HSN. # In many environments this likely results in the tests running over # the slower management interface, which is fine for testing, but # should be avoided for large scale runs. master_host="$(hostname)" # Start Spark workers using pdsh. We would really prefer to do this # using srun, but that doesn't work; see issue #230. command -v pdsh >/dev/null 2>&1 || pedantic_fail "pdsh not in path" pernode="pdsh -R ssh -w ${SLURM_NODELIST} -- PATH='${PATH}'" else master_host=localhost pernode= fi master_url=spark://${master_host}:7077 master_log="${spark_log}/*master.Master*.out" # expand globs later } @test "${ch_tag}/configure" { # check for restrictive umask run umask -S echo "$output" [[ $status -eq 0 ]] [[ $output = 'u=rwx,g=,o=' ]] # create config $ch_mpirun_node mkdir -p "$spark_config" # We set JAVA_HOME in the spark environment file as this appears to be the # idiomatic method for ensuring spark finds the java install. tee < "${spark_config}/spark-env.sh" SPARK_LOCAL_DIRS=/tmp/spark SPARK_LOG_DIR=$spark_log SPARK_WORKER_DIR=/tmp/spark SPARK_LOCAL_IP=127.0.0.1 SPARK_MASTER_HOST=${master_host} JAVA_HOME=/usr/lib/jvm/default-java/ EOF my_secret=$(cat /dev/urandom | tr -dc '0-9a-f' | head -c 48) tee < "${spark_config}/spark-defaults.conf" spark.authenticate.true spark.authenticate.secret ${my_secret} EOF if [[ $ch_multinode ]]; then sbcast -f "${spark_config}/spark-env.sh" "${spark_config}/spark-env.sh" sbcast -f "${spark_config}/spark-defaults.conf" "${spark_config}/spark-defaults.conf" fi } @test "${ch_tag}/start" { # remove old master logs so new one has predictable name rm -Rf --one-file-system "$spark_log" # start the master ch-run -b "$confbind" "$ch_img" -- /opt/spark/sbin/start-master.sh sleep 15 # shellcheck disable=SC2086 cat $master_log # shellcheck disable=SC2086 grep -Fq 'New state: ALIVE' $master_log # start the workers # shellcheck disable=SC2086 $pernode ch-run -b "$confbind" "$ch_img" -- \ /opt/spark/sbin/start-worker.sh "$master_url" sleep 15 } @test "${ch_tag}/worker count" { # Note that in the log, each worker shows up as 127.0.0.1, which might # lead you to believe that all the workers started on the same (master) # node. However, I believe this string is self-reported by the workers and # is an artifact of SPARK_LOCAL_IP=127.0.0.1 above, which AFAICT just # tells the workers to put their web interfaces on localhost. They still # connect to the master and get work OK. [[ -z $ch_multinode ]] && SLURM_NNODES=1 # shellcheck disable=SC2086 worker_ct=$(grep -Fc 'Registering worker' $master_log || true) echo "node count: $SLURM_NNODES; worker count: ${worker_ct}" [[ $worker_ct -eq "$SLURM_NNODES" ]] } @test "${ch_tag}/pi" { run ch-run -b "$confbind" "$ch_img" -- \ /opt/spark/bin/spark-submit --master "$master_url" \ /opt/spark/examples/src/main/python/pi.py 64 echo "$output" [[ $status -eq 0 ]] # This computation converges quite slowly, so we only ask for two correct # digits of pi. [[ $output = *'Pi is roughly 3.1'* ]] } @test "${ch_tag}/stop" { $pernode ch-run -b "$confbind" "$ch_img" -- /opt/spark/sbin/stop-worker.sh ch-run -b "$confbind" "$ch_img" -- /opt/spark/sbin/stop-master.sh sleep 2 # Any Spark processes left? # (Use egrep instead of fgrep so we don't match the grep process.) # shellcheck disable=SC2086 $pernode ps aux | ( ! grep -E '[o]rg\.apache\.spark\.deploy' ) } @test "${ch_tag}/hang" { # If there are any test processes remaining, this test will hang. true } charliecloud-0.26/lib/000077500000000000000000000000001417231051300146425ustar00rootroot00000000000000charliecloud-0.26/lib/Makefile.am000066400000000000000000000060431417231051300167010ustar00rootroot00000000000000# Define an alias for pkglibdir to override Automake helpfulness: # # error: 'pkglibdir' is not a legitimate directory for 'DATA' # # See: https://www.gnu.org/software/automake/manual/html_node/Uniform.html mylibdir = $(pkglibdir) dist_mylib_DATA = base.sh \ build.py \ charliecloud.py \ fakeroot.py \ misc.py \ pull.py \ push.py noinst_DATA = charliecloud mylib_DATA = contributors.bash \ version.py \ version.sh \ version.txt # Bundled Lark (currently version 0.11.3); Automake does not support wildcards # [1], so list the files. Note it's version-specific. Hopefully if a new # version of Lark adds a file and we omit it here by mistake, the tests will # catch it. To get this list: # # $ (cd lib && find lark lark-stubs lark-*.dist-info -xtype f) | LC_ALL=C sort | sed -E 's/$/ \\/' # # Then, copy-n-paste & remove the last backslash. PROOFREAD YOUR DIFF!!! LARK = \ lark-0.11.3.dist-info/INSTALLER \ lark-0.11.3.dist-info/LICENSE \ lark-0.11.3.dist-info/METADATA \ lark-0.11.3.dist-info/RECORD \ lark-0.11.3.dist-info/WHEEL \ lark-0.11.3.dist-info/entry_points.txt \ lark-0.11.3.dist-info/top_level.txt \ lark-stubs/__init__.pyi \ lark-stubs/ast_utils.pyi \ lark-stubs/exceptions.pyi \ lark-stubs/grammar.pyi \ lark-stubs/indenter.pyi \ lark-stubs/lark.pyi \ lark-stubs/lexer.pyi \ lark-stubs/load_grammar.pyi \ lark-stubs/reconstruct.pyi \ lark-stubs/tree.pyi \ lark-stubs/visitors.pyi \ lark/__init__.py \ lark/ast_utils.py \ lark/common.py \ lark/exceptions.py \ lark/grammar.py \ lark/grammars/common.lark \ lark/grammars/lark.lark \ lark/grammars/python.lark \ lark/grammars/unicode.lark \ lark/indenter.py \ lark/lark.py \ lark/lexer.py \ lark/load_grammar.py \ lark/parse_tree_builder.py \ lark/parser_frontends.py \ lark/parsers/__init__.py \ lark/parsers/cyk.py \ lark/parsers/earley.py \ lark/parsers/earley_common.py \ lark/parsers/earley_forest.py \ lark/parsers/grammar_analysis.py \ lark/parsers/lalr_analysis.py \ lark/parsers/lalr_interactive_parser.py \ lark/parsers/lalr_parser.py \ lark/parsers/lalr_puppet.py \ lark/parsers/xearley.py \ lark/reconstruct.py \ lark/tools/__init__.py \ lark/tools/nearley.py \ lark/tools/serialize.py \ lark/tools/standalone.py \ lark/tree.py \ lark/tree_matcher.py \ lark/utils.py \ lark/visitors.py if ENABLE_LARK nobase_dist_mylib_DATA = $(LARK) endif CLEANFILES = $(mylib_DATA) $(noinst_DATA) # This symlink is so scripts can use "lib/charliecloud" whether they are # installed or not. charliecloud: ln -s . charliecloud contributors.bash: ../README.rst rm -f $@ printf '# shellcheck shell=bash\n' >> $@ printf 'declare -a ch_contributors\n' >> $@ sed -En 's/^\*.+<(.+@.+)>.*$$/ch_contributors+=('"'"'\1'"'"')/p' < $< >> $@ version.txt: ../configure printf '@PACKAGE_VERSION@\n' > $@ version.py: ../configure printf "VERSION='@PACKAGE_VERSION@'\n" > $@ version.sh: ../configure printf "# shellcheck shell=sh disable=SC2034\n" > $@ printf "ch_version='@PACKAGE_VERSION@'\n" >> $@ charliecloud-0.26/lib/base.sh000066400000000000000000000114771417231051300161220ustar00rootroot00000000000000# shellcheck shell=sh set -e # shellcheck disable=SC2034 ch_bin="$(cd "$(dirname "$0")" && pwd)" # shellcheck disable=SC2034 ch_base=${ch_bin%/*} lib="${ch_bin}/../lib/charliecloud" . "${lib}/version.sh" # Verbosity level; works the same as the Python code. verbose=0 DEBUG () { if [ "$verbose" -ge 2 ]; then # shellcheck disable=SC2059 printf "$@" 1>&2 printf '\n' 1>&2 fi } FATAL () { printf 'error: ' 1>&2 # shellcheck disable=SC2059 printf "$@" 1>&2 printf '\n' 1>&2 exit 1 } INFO () { # shellcheck disable=SC2059 printf "$@" 1>&2 printf '\n' 1>&2 } VERBOSE () { if [ "$verbose" -ge 1 ]; then # shellcheck disable=SC2059 printf "$@" 1>&2 printf '\n' 1>&2 fi } # Don't call in a subshell or the selection will be lost. builder_choose () { if [ -z "$CH_BUILDER" ]; then if command -v docker > /dev/null 2>&1; then export CH_BUILDER=docker elif "${ch_bin}/ch-image" --dependencies > /dev/null 2>&1; then export CH_BUILDER=ch-image else export CH_BUILDER=none fi fi case $CH_BUILDER in buildah|buildah-runc|buildah-setuid|ch-image|docker|none) ;; *) echo "unknown builder: $CH_BUILDER" 1>&2 exit 1 ;; esac } deprecated_convert=$(cat <&2 } # Return success if path $1 exists, without dereferencing links, failure # otherwise. ("test -e" dereferences.) exist_p () { stat "$1" > /dev/null 2>&1 } # Try to parse $1 as a common argument. If accepted, either exit (for things # like --help) or return success; otherwise, return failure (i.e., not a # common argument). parse_basic_arg () { case $1 in --_lib-path) # undocumented echo "$lib" exit 0 ;; --help) usage 0 # exits ;; -v|--verbose) verbose=$((verbose+1)) return 0 ;; --version) version # exits ;; esac return 1 # not a basic arg } parse_basic_args () { if [ "$#" -eq 0 ]; then usage 1 fi for i in "$@"; do parse_basic_arg "$i" || true done } # Convert container registry path to filesystem compatible path. # # NOTE: This is used both to name user-visible stuff like tarballs as well as # dig around in the ch-image storage directory. tag_to_path () { echo "$1" | tr '/' '%' } usage () { echo "${usage:?}" 1>&2 exit "${1:-1}" } version () { # shellcheck disable=SC2154 echo 1>&2 "$ch_version" exit 0 } # Set a variable and print its value, human readable description, and origin. # Parameters: # # $1: string: variable name # $2: string: command line argument value (1st priority) # $3: string: environment variable value (2nd priority) # $4: string: default value (3rd priority) # $5: boolean: if true, suppress chatter # $6: int: width of description (use -1 for natural width) # $7: string: human readable description for stdout # # FIXME: Shouldn't export the variable, and no Bash indirection available. # There are safe eval solution out there, but I was too lazy to deal with it. vset () { var_name=$1 cli_value=$2 env_value=$3 def_value=$4 desc_width=$5 var_desc=$6 quiet=$7 if [ "$cli_value" ]; then export "$var_name"="$cli_value" value=$cli_value method='command line' elif [ "$env_value" ]; then export "$var_name"="$env_value" value=$env_value method='environment' else export "$var_name"="$def_value" value=$def_value method='default' fi # FIXME: Kludge: Assume it's a boolean variable and the empty string means # false. Print "no" instead of the empty string. if [ -z "$value" ]; then value=no fi if [ -z "$quiet" ]; then var_desc="$var_desc:" printf "%-*s %s (%s)\n" "$desc_width" "$var_desc" "$value" "$method" fi } # Do we need sudo to run docker? if docker info > /dev/null 2>&1; then docker_ () { docker "$@" } else docker_ () { sudo docker "$@" } fi # Use parallel gzip if it's available. if command -v pigz > /dev/null 2>&1; then gzip_ () { pigz "$@" } else gzip_ () { gzip "$@" } fi # Use pv to show a progress bar, if it's available. (We also don't want a # progress bar if stdin is not a terminal, but pv takes care of that.) if command -v pv > /dev/null 2>&1; then pv_ () { pv -pteb "$@" } else pv_ () { # Arguments may be present, but we ignore them. cat } fi charliecloud-0.26/lib/build.py000066400000000000000000000744301417231051300163230ustar00rootroot00000000000000# Implementation of "ch-image build". import abc import ast import glob import os import os.path import re import shutil import sys import charliecloud as ch import fakeroot import pull ## Globals ## # Namespace from command line arguments. FIXME: be more tidy about this ... cli = None # Environment object. env = None # Fakeroot configuration (initialized during FROM). fakeroot_config = None # Images that we are building. Each stage gets its own image. In this # dictionary, an image appears exactly once or twice. All images appear with # an int key counting stages up from zero. Images with a name (e.g., "FROM ... # AS foo") have a second string key of the name. images = dict() # Current stage. Incremented by FROM instructions; the first will set it to 0. image_i = -1 image_alias = None # Number of stages. image_ct = None ## Imports not in standard library ## # See charliecloud.py for the messy import of this. lark = ch.lark ## Constants ## ARG_DEFAULTS = { "HTTP_PROXY": os.environ.get("HTTP_PROXY"), "HTTPS_PROXY": os.environ.get("HTTPS_PROXY"), "FTP_PROXY": os.environ.get("FTP_PROXY"), "NO_PROXY": os.environ.get("NO_PROXY"), "http_proxy": os.environ.get("http_proxy"), "https_proxy": os.environ.get("https_proxy"), "ftp_proxy": os.environ.get("ftp_proxy"), "no_proxy": os.environ.get("no_proxy"), "SSH_AUTH_SOCK": os.environ.get("SSH_AUTH_SOCK"), "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", # GNU tar, when it thinks it's running as root, tries to # chown(2) and chgrp(2) files to whatever's in the tarball. "TAR_OPTIONS": "--no-same-owner", "USER": ch.user() } ## Main ## def main(cli_): # CLI namespace. :P global cli cli = cli_ # Check argument validity. if (cli.force and cli.no_force_detect): ch.FATAL("--force and --no-force-detect are incompatible") # Infer input file if needed. if (cli.file is None): cli.file = cli.context + "/Dockerfile" # Infer image name if needed. if (cli.tag is None): path = os.path.basename(cli.file) if ("." in path): (base, ext_all) = str(path).split(".", maxsplit=1) (base_all, ext_last) = str(path).rsplit(".", maxsplit=1) else: base = None ext_last = None if (base == "Dockerfile"): cli.tag = ext_all ch.VERBOSE("inferring name from Dockerfile extension: %s" % cli.tag) elif (ext_last == "dockerfile"): cli.tag = base_all ch.VERBOSE("inferring name from Dockerfile basename: %s" % cli.tag) elif (os.path.abspath(cli.context) != "/"): cli.tag = os.path.basename(os.path.abspath(cli.context)) ch.VERBOSE("inferring name from context directory: %s" % cli.tag) else: assert (os.path.abspath(cli.context) == "/") cli.tag = "root" ch.VERBOSE("inferring name with root context directory: %s" % cli.tag) cli.tag = re.sub(r"[^a-z0-9_.-]", "", cli.tag.lower()) ch.INFO("inferred image name: %s" % cli.tag) # Deal with build arguments. def build_arg_get(arg): kv = arg.split("=") if (len(kv) == 2): return kv else: v = os.getenv(kv[0]) if (v is None): ch.FATAL("--build-arg: %s: no value and not in environment" % kv[0]) return (kv[0], v) cli.build_arg = dict( build_arg_get(i) for i in cli.build_arg ) ch.DEBUG(cli) # Guess whether the context is a URL, and error out if so. This can be a # typical looking URL e.g. "https://..." or also something like # "git@github.com:...". The line noise in the second line of the regex is # to match this second form. Username and host characters from # https://tools.ietf.org/html/rfc3986. if (re.search(r""" ^((git|git+ssh|http|https|ssh):// | ^[\w.~%!$&'\(\)\*\+,;=-]+@[\w.~%!$&'\(\)\*\+,;=-]+:)""", cli.context, re.VERBOSE) is not None): ch.FATAL("not yet supported: issue #773: URL context: %s" % cli.context) if (os.path.exists(cli.context + "/.dockerignore")): ch.WARNING("not yet supported, ignored: issue #777: .dockerignore file") # Set up build environment. global env env = Environment() # Read input file. if (cli.file == "-" or cli.context == "-"): text = ch.ossafe(sys.stdin.read, "can't read stdin") else: fp = ch.open_(cli.file, "rt") text = ch.ossafe(fp.read, "can't read: %s" % cli.file) fp.close() # Parse it. parser = lark.Lark("?start: dockerfile\n" + ch.GRAMMAR, parser="earley", propagate_positions=True) # Avoid Lark issue #237: lark.exceptions.UnexpectedEOF if the file does not # end in newline. text += "\n" try: tree = parser.parse(text) except lark.exceptions.UnexpectedInput as x: ch.VERBOSE(x) # noise about what was expected in the grammar ch.FATAL("can't parse: %s:%d,%d\n\n%s" % (cli.file, x.line, x.column, x.get_context(text, 39))) ch.VERBOSE(tree.pretty()) # Sometimes we exit after parsing. if (cli.parse_only): sys.exit(0) # Count the number of stages (i.e., FROM instructions) global image_ct image_ct = sum(1 for i in ch.tree_children(tree, "from_")) # Traverse the tree and do what it says. # # We don't actually care whether the tree is traversed breadth-first or # depth-first, but we *do* care that instruction nodes are visited in # order. Neither visit() nor visit_topdown() are documented as of # 2020-06-11 [1], but examining source code [2] shows that visit_topdown() # uses Tree.iter_trees_topdown(), which *is* documented to be in-order [3]. # # This change seems to have been made in 0.8.6 (see PR #761); before then, # visit() was in order. Therefore, we call that instead, if visit_topdown() # is not present, to improve compatibility (see issue #792). # # [1]: https://lark-parser.readthedocs.io/en/latest/visitors/#visitors # [2]: https://github.com/lark-parser/lark/blob/445c8d4/lark/visitors.py#L211 # [3]: https://lark-parser.readthedocs.io/en/latest/classes/#tree ml = Main_Loop() if (hasattr(ml, 'visit_topdown')): ml.visit_topdown(tree) else: ml.visit(tree) # Check that all build arguments were consumed. if (len(cli.build_arg) != 0): ch.FATAL("--build-arg: not consumed: " + " ".join(cli.build_arg.keys())) # Print summary & we're done. if (ml.instruction_ct == 0): ch.FATAL("no instructions found: %s" % cli.file) assert (image_i + 1 == image_ct) # should have errored already if not if (cli.force): if (fakeroot_config.inject_ct == 0): assert (not fakeroot_config.init_done) ch.WARNING("--force specified, but nothing to do") else: ch.INFO("--force: init OK & modified %d RUN instructions" % fakeroot_config.inject_ct) ch.INFO("grown in %d instructions: %s" % (ml.instruction_ct, images[image_i])) class Main_Loop(lark.Visitor): def __init__(self, *args, **kwargs): self.instruction_ct = 0 super().__init__(*args, **kwargs) def __default__(self, tree): class_ = "I_" + tree.data if (class_ in globals()): inst = globals()[class_](tree) inst.announce() if (self.instruction_ct == 0): if ( isinstance(inst, I_directive) or isinstance(inst, I_from_)): pass elif (isinstance(inst, Arg)): ch.WARNING("ARG before FROM not yet supported; see issue #779") else: ch.FATAL("first instruction must be ARG or FROM") inst.execute() if (image_i != -1): images[image_i].metadata_save() self.instruction_ct += inst.execute_increment ## Instruction classes ## class Instruction(abc.ABC): execute_increment = 1 def __init__(self, tree): self.lineno = tree.meta.line self.options = {} for st in ch.tree_children(tree, "option"): k = ch.tree_terminal(st, "OPTION_KEY") v = ch.tree_terminal(st, "OPTION_VALUE") if (k in self.options): ch.FATAL("%3d %s: repeated option --%s" % (self.lineno, self.str_name(), k)) self.options[k] = v # Save original options string because instructions pop() from the dict # to process them. self.options_str = " ".join("--%s=%s" % (k,v) for (k,v) in self.options.items()) self.tree = tree def __str__(self): options = self.options_str if (options != ""): options = " " + options return ("%3s %s%s %s" % (self.lineno, self.str_name(), options, self.str_())) def announce(self): ch.INFO(self) def execute(self): if (not cli.dry_run): self.execute_() @abc.abstractmethod def execute_(self): ... def options_assert_empty(self): try: k = next(iter(self.options.keys())) ch.FATAL("%s: invalid option --%s" % (self.str_name(), k)) except StopIteration: pass @abc.abstractmethod def str_(self): ... def str_name(self): return self.__class__.__name__.split("_")[1].upper() def unsupported_forever_warn(self, msg): ch.WARNING("not supported, ignored: %s %s" % (self.str_name(), msg)) def unsupported_yet_warn(self, msg, issue_no): ch.WARNING("not yet supported, ignored: issue #%d: %s %s" % (issue_no, self.str_name(), msg)) def unsupported_yet_fatal(self, msg, issue_no): ch.FATAL("not yet supported: issue #%d: %s %s" % (issue_no, self.str_name(), msg)) class Instruction_Supported_Never(Instruction): execute_increment = 0 def announce(self): self.unsupported_forever_warn("instruction") def str_(self): return "(unsupported)" def execute_(self): pass class Arg(Instruction): def __init__(self, *args): super().__init__(*args) self.key = ch.tree_terminal(self.tree, "WORD", 0) if (self.key in cli.build_arg): self.value = cli.build_arg[self.key] del cli.build_arg[self.key] else: self.value = self.value_default() if (self.value is not None): self.value = variables_sub(self.value, env.env_build) def str_(self): if (self.value is None): return self.key else: return "%s='%s'" % (self.key, self.value) def execute_(self): if (self.value is not None): env.arg[self.key] = self.value class I_arg_bare(Arg): def __init__(self, *args): super().__init__(*args) def value_default(self): return None class I_arg_equals(Arg): def __init__(self, *args): super().__init__(*args) def value_default(self): v = ch.tree_terminal(self.tree, "WORD", 1) if (v is None): v = unescape(ch.tree_terminal(self.tree, "STRING_QUOTED")) return v class I_copy(Instruction): # Note: The Dockerfile specification for COPY is complex, messy, # inexplicably different from cp(1), and incomplete. We try to be # bug-compatible with Docker but probably are not 100%. See the FAQ. # # Because of these weird semantics, none of this abstracted into a general # copy function. I don't want people calling it except from here. def __init__(self, *args): super().__init__(*args) self.from_ = self.options.pop("from", None) if (self.from_ is not None): try: self.from_ = int(self.from_) except ValueError: pass if (ch.tree_child(self.tree, "copy_shell") is not None): paths = [variables_sub(i, env.env_build) for i in ch.tree_child_terminals(self.tree, "copy_shell", "WORD")] elif (ch.tree_child(self.tree, "copy_list") is not None): paths = [variables_sub(i, env.env_build) for i in ch.tree_child_terminals(self.tree, "copy_list", "STRING_QUOTED")] for i in range(len(paths)): paths[i] = paths[i][1:-1] #strip quotes else: assert False, "unreachable code reached" self.srcs = paths[:-1] self.dst = paths[-1] def str_(self): return "%s -> %s" % (self.srcs, repr(self.dst)) def copy_src_dir(self, src, dst): """Copy the contents of directory src, named by COPY, either explicitly or with wildcards, to dst. src might be a symlink, but dst is a canonical path. Both must be at the top level of the COPY instruction; i.e., this function must not be called recursively. dst must exist already and be a directory. Unlike subdirectories, the metadata of dst will not be altered to match src.""" def onerror(x): ch.FATAL("can't scan directory: %s: %s" % (x.filename, x.strerror)) # Use Path objects in this method because the path arithmetic was # getting too hard with strings. src = ch.Path(os.path.realpath(src)) dst = ch.Path(dst) assert (os.path.isdir(src) and not os.path.islink(src)) assert (os.path.isdir(dst) and not os.path.islink(dst)) ch.DEBUG("copying named directory: %s -> %s" % (src, dst)) for (dirpath, dirnames, filenames) in os.walk(src, onerror=onerror): dirpath = ch.Path(dirpath) subdir = dirpath.relative_to(src) dst_dir = dst // subdir # dirnames can contain symlinks, which we handle as files, so we'll # rebuild it; the walk will not descend into those "directories". dirnames2 = dirnames.copy() # shallow copy dirnames[:] = list() # clear in place for d in dirnames2: d = ch.Path(d) src_path = dirpath // d dst_path = dst_dir // d ch.TRACE("dir: %s -> %s" % (src_path, dst_path)) if (os.path.islink(src_path)): filenames.append(d) # symlink, handle as file ch.TRACE("symlink to dir, will handle as file") continue else: dirnames.append(d) # directory, descend into later # If destination exists, but isn't a directory, remove it. if (os.path.exists(dst_path)): if (os.path.isdir(dst_path) and not os.path.islink(dst_path)): ch.TRACE("dst_path exists and is a directory") else: ch.TRACE("dst_path exists, not a directory, removing") ch.unlink(dst_path) # If destination directory doesn't exist, create it. if (not os.path.exists(dst_path)): ch.TRACE("mkdir dst_path") ch.ossafe(os.mkdir, "can't mkdir: %s" % dst_path, dst_path) # Copy metadata, now that we know the destination exists and is a # directory. ch.ossafe(shutil.copystat, "can't copy metadata: %s -> %s" % (src_path, dst_path), src_path, dst_path, follow_symlinks=False) for f in filenames: f = ch.Path(f) src_path = dirpath // f dst_path = dst_dir // f ch.TRACE("file or symlink via copy2: %s -> %s" % (src_path, dst_path)) if (not (os.path.isfile(src_path) or os.path.islink(src_path))): ch.FATAL("can't COPY: unknown file type: %s" % src_path) if (os.path.exists(dst_path)): ch.TRACE("destination exists, removing") if (os.path.isdir(dst_path) and not os.path.islink(dst_path)): ch.rmtree(dst_path) else: ch.unlink(dst_path) ch.copy2(src_path, dst_path, follow_symlinks=False) def copy_src_file(self, src, dst): """Copy file src, named by COPY either explicitly or with wildcards, to dst. src might be a symlink, but dst is a canonical path. Both must be at the top level of the COPY instruction; i.e., this function must not be called recursively. If dst is a directory, file should go in that directory named src (i.e., the directory creation magic has already happened).""" assert (os.path.isfile(src)) assert ( not os.path.exists(dst) or (os.path.isdir(dst) and not os.path.islink(dst)) or (os.path.isfile(dst) and not os.path.islink(dst))) ch.DEBUG("copying named file: %s -> %s" % (src, dst)) ch.copy2(src, dst, follow_symlinks=True) def dest_realpath(self, unpack_path, dst): """Return the canonicalized version of path dst within (canonical) image path unpack_path. We can't use os.path.realpath() because if dst is an absolute symlink, we need to use the *image's* root directory, not the host. Thus, we have to resolve symlinks manually.""" unpack_path = ch.Path(unpack_path) dst_canon = ch.Path(unpack_path) dst = ch.Path(dst) dst_parts = list(reversed(dst.parts)) # easier to operate on end of list iter_ct = 0 while (len(dst_parts) > 0): iter_ct += 1 if (iter_ct > 100): # arbitrary ch.FATAL("can't COPY: too many path components") ch.TRACE("current destination: %d %s" % (iter_ct, dst_canon)) #ch.TRACE("parts remaining: %s" % dst_parts) part = dst_parts.pop() if (part == "/" or part == "//"): # 3 or more slashes yields "/" ch.TRACE("skipping root") continue cand = dst_canon // part ch.TRACE("checking: %s" % cand) if (not cand.is_symlink()): ch.TRACE("not symlink") dst_canon = cand else: target = ch.Path(os.readlink(cand)) ch.TRACE("symlink to: %s" % target) assert (len(target.parts) > 0) # POSIX says no empty symlinks if (target.is_absolute()): ch.TRACE("absolute") dst_canon = ch.Path(unpack_path) else: ch.TRACE("relative") dst_parts.extend(reversed(target.parts)) return dst_canon def execute_(self): if (cli.context == "-"): ch.FATAL("can't COPY: no context because \"-\" given") if (len(self.srcs) < 1): ch.FATAL("can't COPY: must specify at least one source") # Complain about unsupported stuff. if (self.options.pop("chown", False)): self.unsupported_forever_warn("--chown") # Any remaining options are invalid. self.options_assert_empty() # Find the context directory. if (self.from_ is None): context = cli.context else: if (self.from_ == image_i or self.from_ == image_alias): ch.FATAL("COPY --from: stage %s is the current stage" % self.from_) if (not self.from_ in images): # FIXME: Would be nice to also report if a named stage is below. if (isinstance(self.from_, int) and self.from_ < image_ct): if (self.from_ < 0): ch.FATAL("COPY --from: invalid negative stage index %d" % self.from_) else: ch.FATAL("COPY --from: stage %d does not exist yet" % self.from_) else: ch.FATAL("COPY --from: stage %s does not exist" % self.from_) context = images[self.from_].unpack_path context_canon = os.path.realpath(context) ch.VERBOSE("context: %s" % context) # Expand source wildcards. srcs = list() for src in self.srcs: matches = glob.glob("%s/%s" % (context, src)) # glob can't take Path if (len(matches) == 0): ch.FATAL("can't copy: not found: %s" % src) for i in matches: srcs.append(i) ch.VERBOSE("source: %s" % i) # Validate sources are within context directory. (Can't convert to # canonical paths yet because we need the source path as given.) for src in srcs: src_canon = os.path.realpath(src) if (not os.path.commonpath([src_canon, context_canon]) .startswith(context_canon)): ch.FATAL("can't COPY from outside context: %s" % src) # Locate the destination. unpack_canon = os.path.realpath(images[image_i].unpack_path) if (self.dst.startswith("/")): dst = ch.Path(self.dst) else: dst = env.workdir // self.dst ch.VERBOSE("destination, as given: %s" % dst) dst_canon = self.dest_realpath(unpack_canon, dst) # strips trailing slash ch.VERBOSE("destination, canonical: %s" % dst_canon) if (not os.path.commonpath([dst_canon, unpack_canon]) .startswith(unpack_canon)): ch.FATAL("can't COPY: destination not in image: %s" % dst_canon) # Create the destination directory if needed. if (self.dst.endswith("/") or len(srcs) > 1 or os.path.isdir(srcs[0])): if (not os.path.exists(dst_canon)): ch.mkdirs(dst_canon) elif (not os.path.isdir(dst_canon)): # not symlink b/c realpath() ch.FATAL("can't COPY: not a directory: %s" % dst_canon) # Copy each source. for src in srcs: if (os.path.isfile(src)): self.copy_src_file(src, dst_canon) elif (os.path.isdir(src)): self.copy_src_dir(src, dst_canon) else: ch.FATAL("can't COPY: unknown file type: %s" % src) class I_directive(Instruction_Supported_Never): def __init__(self, *args): super().__init__(*args) def announce(self): ch.WARNING("not supported, ignored: parser directives") class Env(Instruction): def str_(self): return "%s='%s'" % (self.key, self.value) def execute_(self): env.env[self.key] = self.value with ch.open_(images[image_i].unpack_path // "/ch/environment", "wt") \ as fp: for (k, v) in env.env.items(): print("%s=%s" % (k, v), file=fp) class I_env_equals(Env): def __init__(self, *args): super().__init__(*args) self.key = ch.tree_terminal(self.tree, "WORD", 0) v = ch.tree_terminal(self.tree, "WORD", 1) if (v is None): v = ch.tree_terminal(self.tree, "STRING_QUOTED") self.value = variables_sub(unescape(v), env.env_build) class I_env_space(Env): def __init__(self, *args): super().__init__(*args) self.key = ch.tree_terminal(self.tree, "WORD") v = ch.tree_terminals_cat(self.tree, "LINE_CHUNK") self.value = variables_sub(unescape(v), env.env_build) class I_from_(Instruction): def __init__(self, *args): super().__init__(*args) self.base_ref = ch.Image_Ref(ch.tree_child(self.tree, "image_ref")) self.alias = ch.tree_child_terminal(self.tree, "from_alias", "IR_PATH_COMPONENT") def execute_(self): # Complain about unsupported stuff. if (self.options.pop("platform", False)): self.unsupported_yet_fatal("--platform", 778) # Any remaining options are invalid. self.options_assert_empty() # Update image globals. global image_i image_i += 1 global image_alias image_alias = self.alias if (image_i == image_ct - 1): # Last image; use tag unchanged. tag = cli.tag elif (image_i > image_ct - 1): # Too many images! ch.FATAL("expected %d stages but found at least %d" % (image_ct, image_i + 1)) else: # Not last image; append stage index to tag. tag = "%s/_stage%d" % (cli.tag, image_i) image = ch.Image(ch.Image_Ref(tag)) images[image_i] = image if (self.alias is not None): images[self.alias] = image ch.VERBOSE("image path: %s" % image.unpack_path) # Other error checking. if (str(image.ref) == str(self.base_ref)): ch.FATAL("output image ref same as FROM: %s" % self.base_ref) # Initialize image. self.base_image = ch.Image(self.base_ref) if (os.path.isdir(self.base_image.unpack_path)): ch.VERBOSE("base image found: %s" % self.base_image.unpack_path) else: ch.VERBOSE("base image not found, pulling") # a young hen, especially one less than one year old. pullet = pull.Image_Puller(self.base_image, not cli.no_cache) pullet.pull_to_unpacked() pullet.done() image.copy_unpacked(self.base_image) image.metadata_load() env.reset() # Find fakeroot configuration, if any. global fakeroot_config fakeroot_config = fakeroot.detect(image.unpack_path, cli.force, cli.no_force_detect) def str_(self): alias = " AS %s" % self.alias if self.alias else "" return "%s%s" % (self.base_ref, alias) class Run(Instruction): def execute_(self): rootfs = images[image_i].unpack_path fakeroot_config.init_maybe(rootfs, self.cmd, env.env_build) cmd = fakeroot_config.inject_run(self.cmd) exit_code = ch.ch_run_modify(rootfs, cmd, env.env_build, env.workdir, cli.bind, fail_ok=True) if (exit_code != 0): if (cli.force): if (isinstance(fakeroot_config, fakeroot.Fakeroot_Noop)): ch.ERROR("build failed: --force specified, but no suitable config found") else: pass # we did init --force OK but the build still failed elif (not cli.no_force_detect): if (fakeroot_config.init_done): ch.ERROR("build failed: --force may fix it") else: ch.ERROR("build failed: current version of --force wouldn't help") ch.FATAL("build failed: RUN command exited with %d" % exit_code) def str_(self): return str(self.cmd) class I_run_exec(Run): def __init__(self, *args): super().__init__(*args) self.cmd = [ variables_sub(unescape(i), env.env_build) for i in ch.tree_terminals(self.tree, "STRING_QUOTED")] class I_run_shell(Run): # Note re. line continuations and whitespace: Whitespace before the # backslash is passed verbatim to the shell, while the newline and any # whitespace between the newline and baskslash are deleted. def __init__(self, *args): super().__init__(*args) cmd = ch.tree_terminals_cat(self.tree, "LINE_CHUNK") self.cmd = env.shell + [cmd] class I_shell(Instruction): def __init__(self, *args): super().__init__(*args) self.shell = [variables_sub(unescape(i), env.env_build) for i in ch.tree_terminals(self.tree, "STRING_QUOTED")] def str_(self): return str(self.shell) def execute_(self): env.shell = list(self.shell) #copy class I_workdir(Instruction): def __init__(self, *args): super().__init__(*args) self.path = variables_sub(ch.tree_terminals_cat(self.tree, "LINE_CHUNK"), env.env_build) def str_(self): return self.path def execute_(self): env.chdir(self.path) ch.mkdirs(images[image_i].unpack_path // env.workdir) class I_uns_forever(Instruction_Supported_Never): def __init__(self, *args): super().__init__(*args) self.name = ch.tree_terminal(self.tree, "UNS_FOREVER") def str_name(self): return self.name class I_uns_yet(Instruction): execute_increment = 0 def __init__(self, *args): super().__init__(*args) self.name = ch.tree_terminal(self.tree, "UNS_YET") self.issue_no = { "ADD": 782, "CMD": 780, "ENTRYPOINT": 780, "LABEL": 781, "ONBUILD": 788 }[self.name] def announce(self): self.unsupported_yet_warn("instruction", self.issue_no) def str_(self): return "(unsupported)" def str_name(self): return self.name def execute_(self): pass ## Supporting classes ## class Environment: """The state we are in: environment variables, working directory, etc. Most of this is just passed through from the image metadata.""" __slots__ = ("arg",) def __init__(self): self.reset() @property def env(self): if (image_i == -1): return dict() else: return images[image_i].metadata["env"] @env.setter def env(self, x): images[image_i].metadata["env"] = x @property def env_build(self): return { **self.arg, **self.env } @property def shell(self): if (image_i == -1): return ["/bin/false"] else: return images[image_i].metadata["shell"] @shell.setter def shell(self, x): images[image_i].metadata["shell"] = x @property def workdir(self): return ch.Path(images[image_i].metadata["cwd"]) @workdir.setter def workdir(self, x): images[image_i].metadata["cwd"] = str(x) def chdir(self, path): if (path.startswith("/")): self.workdir = ch.Path(path) else: self.workdir //= path def reset(self): # This resets only things that aren't part of the image metadata. self.arg = { k: v for (k, v) in ARG_DEFAULTS.items() if v is not None } ## Supporting functions ### def variables_sub(s, variables): # FIXME: This should go in the grammar rather than being a regex kludge. # # Dockerfile spec does not say what to do if substituting a value that's # not set. We ignore those subsitutions. This is probably wrong (the shell # substitutes the empty string). for (k, v) in variables.items(): # FIXME: remove when issue #774 is fixed m = re.search(r"(?= 2 and sl[0] == '"' and sl[-1] == '"' and sl[-2:] != '\\"') return ast.literal_eval(sl) charliecloud-0.26/lib/charliecloud.py000066400000000000000000002333161417231051300176620ustar00rootroot00000000000000import argparse import atexit import collections import collections.abc import copy import datetime import getpass import hashlib import http.client import io import json import os import getpass import pathlib import platform import pprint import re import shutil import stat import subprocess import sys import tarfile import time import types import version ## Imports not in standard library ## # List of dependency problems. depfails = [] # Lark is bundled or provided by package dependencies, so assume it's always # importable. There used to be a conflicting package on PyPI called "lark", # but it's gone now [1]. However, verify the version we got. # # [1]: https://github.com/lark-parser/lark/issues/505 import lark LARK_MIN = (0, 7, 1) LARK_MAX = (0, 12, 0) lark_version = tuple(int(i) for i in lark.__version__.split(".")) if (not LARK_MIN <= lark_version <= LARK_MAX): depfails.append(("bad", 'found Python module "lark" version %d.%d.%d but need between %d.%d.%d and %d.%d.%d inclusive' % (lark_version + LARK_MIN + LARK_MAX))) # Requests is not bundled, so this noise makes the file parse and # --version/--help work even if it's not installed. try: import requests import requests.auth import requests.exceptions except ImportError: depfails.append(("missing", 'Python module "requests"')) # Mock up a requests.auth module so the rest of the file parses. requests = types.ModuleType("requests") requests.auth = types.ModuleType("requests.auth") requests.auth.AuthBase = object ## Constants ## # Architectures. This maps the "machine" field returned by uname(2), also # available as "uname -m" and platform.machine(), into architecture names that # image registries use. It is incomplete (see e.g. [1], which is itself # incomplete) but hopefully includes most architectures encountered in # practice [e.g. 2]. Registry architecture and variant are separated by a # slash. Note it is *not* 1-to-1: multiple uname(2) architectures map to the # same registry architecture. # # [1]: https://stackoverflow.com/a/45125525 # [2]: https://github.com/docker-library/bashbrew/blob/v0.1.0/vendor/github.com/docker-library/go-dockerlibrary/architecture/oci-platform.go ARCH_MAP = { "x86_64": "amd64", "armv5l": "arm/v5", "armv6l": "arm/v6", "aarch32": "arm/v7", "armv7l": "arm/v7", "aarch64": "arm64/v8", "armv8l": "arm64/v8", "i386": "386", "i686": "386", "mips64le": "mips64le", "ppc64le": "ppc64le", "s390x": "s390x" } # a.k.a. IBM Z # String to use as hint when we throw an error that suggests a bug. BUG_REPORT_PLZ = "please report this bug: https://github.com/hpc/charliecloud/issues" # This is a general grammar for all the parsing we need to do. As such, you # must prepend a start rule before use. GRAMMAR = r""" /// Image references /// // Note: Hostnames with no dot and no port get parsed as a hostname, which // is wrong; it should be the first path component. We patch this error later. // FIXME: Supposedly this can be fixed with priorities, but I couldn't get it // to work with brief trying. image_ref: ir_hostport? ir_path? ir_name ( ir_tag | ir_digest )? ir_hostport: IR_HOST ( ":" IR_PORT )? "/" ir_path: ( IR_PATH_COMPONENT "/" )+ ir_name: IR_PATH_COMPONENT ir_tag: ":" IR_TAG ir_digest: "@sha256:" HEX_STRING IR_HOST: /[A-Za-z0-9_.-]+/ IR_PORT: /[0-9]+/ IR_PATH_COMPONENT: /[a-z0-9_.-]+/ IR_TAG: /[A-Za-z0-9_.-]+/ /// Dockerfile /// // First instruction must be ARG or FROM, but that is not a syntax error. dockerfile: _NEWLINES? ( directive | comment )* ( instruction | comment )* ?instruction: _WS? ( arg | copy | env | from_ | run | shell | workdir | uns_forever | uns_yet ) directive.2: _WS? "#" _WS? DIRECTIVE_NAME "=" _line _NEWLINES DIRECTIVE_NAME: ( "escape" | "syntax" ) comment: _WS? _COMMENT_BODY _NEWLINES _COMMENT_BODY: /#[^\n]*/ copy: "COPY"i ( _WS option )* _WS ( copy_list | copy_shell ) _NEWLINES copy_list.2: _string_list copy_shell: WORD ( _WS WORD )+ arg: "ARG"i _WS ( arg_bare | arg_equals ) _NEWLINES arg_bare: WORD arg_equals: WORD "=" ( WORD | STRING_QUOTED ) env: "ENV"i _WS ( env_space | env_equalses ) _NEWLINES env_space: WORD _WS _line env_equalses: env_equals ( _WS env_equals )* env_equals: WORD "=" ( WORD | STRING_QUOTED ) from_: "FROM"i ( _WS option )* _WS image_ref [ _WS from_alias ] _NEWLINES from_alias: "AS"i _WS IR_PATH_COMPONENT // FIXME: undocumented; this is guess run: "RUN"i _WS ( run_exec | run_shell ) _NEWLINES run_exec.2: _string_list run_shell: _line shell: "SHELL"i _WS _string_list _NEWLINES workdir: "WORKDIR"i _WS _line _NEWLINES uns_forever: UNS_FOREVER _WS _line _NEWLINES UNS_FOREVER: ( "EXPOSE"i | "HEALTHCHECK"i | "MAINTAINER"i | "STOPSIGNAL"i | "USER"i | "VOLUME"i ) uns_yet: UNS_YET _WS _line _NEWLINES UNS_YET: ( "ADD"i | "CMD"i | "ENTRYPOINT"i | "LABEL"i | "ONBUILD"i ) /// Common /// option: "--" OPTION_KEY "=" OPTION_VALUE OPTION_KEY: /[a-z]+/ OPTION_VALUE: /[^ \t\n]+/ // Matching lines in the face of continuations is surprisingly hairy. Notes: // // 1. The underscore prefix means the rule is always inlined (i.e., removed // and children become children of its parent). // // 2. LINE_CHUNK must not match any characters that _LINE_CONTINUE does. // // 3. This is very sensitive to the location of repetition. Moving the plus // either to the entire regex (i.e., “/(...)+/”) or outside the regex // (i.e., ”/.../+”) gave parse errors. // _line: ( _LINE_CONTINUE | LINE_CHUNK )+ LINE_CHUNK: /[^\\\n]+|(\\(?![ \t]+\n))+/ HEX_STRING: /[0-9A-Fa-f]+/ WORD: /[^ \t\n=]/+ _string_list: "[" _WS? STRING_QUOTED ( "," _WS? STRING_QUOTED )* _WS? "]" _WSH: /[ \t]/+ // sequence of horizontal whitespace _LINE_CONTINUE: "\\" _WSH? "\n" // line continuation _WS: ( _WSH | _LINE_CONTINUE )+ // horizontal whitespace w/ line continuations _NEWLINES: ( _WS? "\n" )+ // sequence of newlines %import common.ESCAPED_STRING -> STRING_QUOTED """ # Chunk size in bytes when streaming HTTP. Progress meter is updated once per # chunk, which means the display is updated roughly every 20s at 100 Kbit/s # and every 2s at 1Mbit/s; beyond that, the once-per-second display throttling # takes over. HTTP_CHUNK_SIZE = 256 * 1024 # Minimum Python version. NOTE: Keep in sync with configure.ac. PYTHON_MIN = (3,6) # Content types for some stuff we care about. # See: https://github.com/opencontainers/image-spec/blob/main/media-types.md TYPES_MANIFEST = \ {"docker2": "application/vnd.docker.distribution.manifest.v2+json", "oci1": "application/vnd.oci.image.manifest.v1+json"} TYPES_INDEX = \ {"docker2": "application/vnd.docker.distribution.manifest.list.v2+json", "oci1": "application/vnd.oci.image.index.v1+json"} TYPE_CONFIG = "application/vnd.docker.container.image.v1+json" TYPE_LAYER = "application/vnd.docker.image.rootfs.diff.tar.gzip" # Top-level directories we create if not present. STANDARD_DIRS = { "bin", "dev", "etc", "mnt", "proc", "sys", "tmp", "usr" } # Storage directory format version. We refuse to operate on storage # directories with non-matching versions. Increment this number when the # format changes non-trivially. STORAGE_VERSION = 2 ## Globals ## # Active architecture (both using registry vocabulary) arch = None # requested by user arch_host = None # of host # FIXME: currently set in ch-image :P CH_BIN = None CH_RUN = None # Logging; set using init() below. verbose = 0 # Verbosity level. Can be 0, 1, or 2. log_festoon = False # If true, prepend pid and timestamp to chatter. log_fp = sys.stderr # File object to print logs to. # Verify TLS certificates? Passed to requests. tls_verify = True ## Exceptions ## class No_Fatman_Error(Exception): pass class Not_In_Registry_Error(Exception): pass ## Classes ## class Credentials: __slots__ = ("password", "username") def __init__(self): self.username = None self.password = None def get(self): # If stored, return those. if (self.username is not None): username = self.username password = self.password else: try: # Otherwise, use environment variables. username = os.environ["CH_IMAGE_USERNAME"] password = os.environ["CH_IMAGE_PASSWORD"] except KeyError: # Finally, prompt the user. # FIXME: This hangs in Bats despite sys.stdin.isatty() == True. try: username = input("\nUsername: ") except KeyboardInterrupt: FATAL("authentication cancelled") password = getpass.getpass("Password: ") if (not password_many): # Remember the credentials. self.username = username self.password = password return (username, password) class HelpFormatter(argparse.HelpFormatter): def __init__(self, *args, **kwargs): # max_help_position is undocumented but I don't know how else to do this. #kwargs["max_help_position"] = 26 super().__init__(max_help_position=26, *args, **kwargs) # Suppress duplicate metavar printing when option has both short and long # flavors. E.g., instead of: # # -s DIR, --storage DIR set builder internal storage directory to DIR # # print: # # -s, --storage DIR set builder internal storage directory to DIR # # From https://stackoverflow.com/a/31124505. def _format_action_invocation(self, action): if (not action.option_strings or action.nargs == 0): return super()._format_action_invocation(action) default = self._get_default_metavar_for_optional(action) args_string = self._format_args(action, default) return ', '.join(action.option_strings) + ' ' + args_string class Image: """Container image object. Constructor arguments: ref........... Image_Ref object to identify the image. unpack_path .. Directory to unpack the image in; if None, infer path in storage dir from ref.""" __slots__ = ("metadata", "ref", "unpack_path") def __init__(self, ref, unpack_path=None): assert isinstance(ref, Image_Ref) self.ref = ref if (unpack_path is not None): self.unpack_path = Path(unpack_path) else: self.unpack_path = storage.unpack(self.ref) self.metadata_init() @property def metadata_path(self): return self.unpack_path // "ch" @property def unpack_exist_p(self): return os.path.exists(self.unpack_path) def __str__(self): return str(self.ref) @staticmethod def unpacked_p(imgdir): "Return True if imgdir looks like an unpacked image, False otherwise." return ( os.path.isdir(imgdir) and os.path.isdir(imgdir // 'bin') and os.path.isdir(imgdir // 'dev') and os.path.isdir(imgdir // 'usr')) def commit(self): "Commit the current unpack directory into the layer cache." assert False, "unimplemented" def copy_unpacked(self, other): """Copy image other to my unpack directory. other can be either a path (string or Path object) or an Image object; in the latter case other.unpack_pach is used. other need not be a valid image; the essentials will be created if needed.""" if (isinstance(other, str) or isinstance(other, Path)): src_path = other else: src_path = other.unpack_path self.unpack_clear() VERBOSE("copying image: %s -> %s" % (src_path, self.unpack_path)) copytree(src_path, self.unpack_path, symlinks=True) self.unpack_init() def layers_open(self, layer_tars): """Open the layer tarballs and read some metadata (which unfortunately means reading the entirety of every file). Return an OrderedDict: keys: layer hash (full) values: namedtuple with two fields: fp: open TarFile object members: sequence of members (OrderedSet) Empty layers are skipped. Important note: TarFile.extractall() extracts the given members in the order they are specified, so we need to preserve their order from the file, as returned by getmembers(). We also need to quickly remove members we don't want from this sequence. Thus, we use the OrderedSet class defined in this module.""" TT = collections.namedtuple("TT", ["fp", "members"]) layers = collections.OrderedDict() # Schema version one (v1) allows one or more empty layers for Dockerfile # entries like CMD (https://github.com/containers/skopeo/issues/393). # Unpacking an empty layer doesn't accomplish anything so we ignore them. empty_cnt = 0 for (i, path) in enumerate(layer_tars, start=1): lh = os.path.basename(path).split(".", 1)[0] lh_short = lh[:7] INFO("layer %d/%d: %s: listing" % (i, len(layer_tars), lh_short)) try: fp = TarFile.open(path) members = OrderedSet(fp.getmembers()) # reads whole file :( except tarfile.TarError as x: FATAL("cannot open: %s: %s" % (path, x)) if (lh in layers and len(members) > 0): FATAL("duplicate non-empty layer %s" % lh) if (len(members) > 0): layers[lh] = TT(fp, members) else: empty_cnt += 1 VERBOSE("skipped %d empty layers" % empty_cnt) return layers def metadata_init(self): "Initialize empty metadata structure." # Elsewhere can assume the existence and types of everything here. self.metadata = { "arch": arch_host.split("/")[0], # no variant "cwd": "/", "env": dict(), "labels": dict(), "shell": ["/bin/sh", "-c"], "volumes": list() } # set isn't JSON-serializable def metadata_load(self): """Load metadata file, replacing the existing metadata object. If metadata doesn't exist, warn and use defaults.""" path = self.metadata_path // "metadata.json" if (not path.exists()): WARNING("no metadata to load; using defaults") self.metadata_init() return self.metadata = json_from_file(path, "metadata") def metadata_merge_from_config(self, config): """Interpret all the crap in the config data structure that is meaingful to us, and add it to self.metadata. Ignore anything we expect in config that's missing.""" def get(*keys): d = config keys = list(keys) while (len(keys) > 1): try: d = d[keys.pop(0)] except KeyError: return None assert (len(keys) == 1) return d.get(keys[0]) def set_(dst_key, *src_keys): v = get(*src_keys) if (v is not None and v != ""): self.metadata[dst_key] = v if ("config" not in config): FATAL("config missing key 'config'") # architecture set_("arch", "architecture") # $CWD set_("cwd", "config", "WorkingDir") # environment env = get("config", "Env") if (env is not None): for line in env: try: (k,v) = line.split("=", maxsplit=1) except AttributeError: FATAL("can't parse config: bad Env line: %s" % line) self.metadata["env"][k] = v # labels set_("labels", "config", "Labels") # copy reference # shell set_("shell", "config", "Shell") # Volumes. FIXME: Why is this a dict with empty dicts as values? vols = get("config", "Volumes") if (vols is not None): for k in config["config"]["Volumes"].keys(): self.metadata["volumes"].append(k) def metadata_replace(self, config_json): self.metadata_init() if (config_json is None): INFO("no config found; initializing empty metadata") else: # Copy pulled config file into the image so we still have it. path = self.metadata_path // "config.pulled.json" copy2(config_json, path) VERBOSE("pulled config path: %s" % path) self.metadata_merge_from_config(json_from_file(path, "config")) self.metadata_save() def metadata_save(self): """Dump image's metadata to disk, including the main data structure but also all auxiliary files, e.g. ch/environment.""" # Serialize. We take care to pretty-print this so it can (sometimes) be # parsed by simple things like grep and sed. out = json.dumps(self.metadata, indent=2, sort_keys=True) DEBUG("metadata:\n%s" % out) # Main metadata file. path = self.metadata_path // "metadata.json" VERBOSE("writing metadata file: %s" % path) file_write(path, out + "\n") # /ch/environment path = self.metadata_path // "environment" VERBOSE("writing environment file: %s" % path) file_write(path, ( "\n".join("%s=%s" % (k,v) for (k,v) in sorted(self.metadata["env"].items())) + "\n")) # mkdir volumes VERBOSE("ensuring volume directories exist") for path in self.metadata["volumes"]: mkdirs(self.unpack_path // path) def tarballs_write(self, tarball_dir): """Write one uncompressed tarball per layer to tarball_dir. Return a sequence of tarball basenames, with the lowest layer first.""" # FIXME: Yes, there is only one layer for now and we'll need to update # it when (if) we have multiple layers. But, I wanted the interface to # support multiple layers. base = "%s.tar" % self.ref.for_path path = tarball_dir // base try: INFO("layer 1/1: gathering") VERBOSE("writing tarball: %s" % path) fp = TarFile.open(path, "w", format=tarfile.PAX_FORMAT) unpack_path = self.unpack_path.resolve() # aliases use symlinks VERBOSE("canonicalized unpack path: %s" % unpack_path) fp.add_(unpack_path, arcname=".") fp.close() except OSError as x: FATAL("can't write tarball: %s" % x.strerror) return [base] def unpack(self, layer_tars, last_layer=None): """Unpack config_json (path to JSON config file) and layer_tars (sequence of paths to tarballs, with lowest layer first) into the unpack directory, validating layer contents and dealing with whiteouts. Empty layers are ignored. Overwrite any existing image in the unpack directory.""" if (last_layer is None): last_layer = sys.maxsize INFO("flattening image") self.unpack_clear() self.unpack_layers(layer_tars, last_layer) self.unpack_init() def unpack_clear(self): """If the unpack directory does not exist, do nothing. If the unpack directory is already an image, remove it. Otherwise, error.""" if (not os.path.exists(self.unpack_path)): VERBOSE("no image found: %s" % self.unpack_path) else: if (not os.path.isdir(self.unpack_path)): FATAL("can't flatten: %s exists but is not a directory" % self.unpack_path) if (not self.unpacked_p(self.unpack_path)): FATAL("can't flatten: %s exists but does not appear to be an image" % self.unpack_path) VERBOSE("removing existing image: %s" % self.unpack_path) rmtree(self.unpack_path) def unpack_init(self): """Initialize the unpack directory, which must exist. Any setup already present will be left unchanged. After this, self.unpack_path is a valid Charliecloud image directory.""" # Metadata directory. mkdir(self.unpack_path // "ch") file_ensure_exists(self.unpack_path // "ch/environment") # Essential directories & mount points. Do nothing if something already # exists, without dereferencing, in case it's a symlink, which will work # for bind-mount later but won't resolve correctly now outside the # container (e.g. linuxcontainers.org images; issue #1015). # # WARNING: Keep in sync with shell scripts. for d in list(STANDARD_DIRS) + ["mnt/%d" % i for i in range(10)]: d = self.unpack_path // d if (not os.path.lexists(d)): mkdirs(d) file_ensure_exists(self.unpack_path // "etc/hosts") file_ensure_exists(self.unpack_path // "etc/resolv.conf") def unpack_layers(self, layer_tars, last_layer): layers = self.layers_open(layer_tars) self.validate_members(layers) self.whiteouts_resolve(layers) mkdir(self.unpack_path) # create directory in case no layers top_dirs = set() for (i, (lh, (fp, members))) in enumerate(layers.items(), start=1): lh_short = lh[:7] if (i > last_layer): INFO("layer %d/%d: %s: skipping per --last-layer" % (i, len(layers), lh_short)) else: INFO("layer %d/%d: %s: extracting" % (i, len(layers), lh_short)) try: fp.extractall(path=self.unpack_path, members=members) except OSError as x: FATAL("can't extract layer %d: %s" % (i, x.strerror)) top_dirs.update(path_first(i.name) for i in members) # If standard tarball with enclosing directory, raise everything out of # that directory, unless it's one of the standard directories (e.g., an # image containing just "/bin/fooprog" won't be raised). This supports # "ch-image import", which may be used on manually-created tarballs # where best practice is not to do a tarbomb. top_dirs -= { None } # some tarballs contain entry for "."; ignore if (len(top_dirs) == 1): top_dir = top_dirs.pop() if ( (self.unpack_path // top_dir).is_dir() and str(top_dir) not in STANDARD_DIRS): top_dir = self.unpack_path // top_dir # make absolute INFO("layers: single enclosing directory, using its contents") for src in list(top_dir.iterdir()): dst = self.unpack_path // src.parts[-1] DEBUG("moving: %s -> %s" % (src, dst)) ossafe(src.rename, "can't move: %s -> %s" % (src, dst), dst) DEBUG("removing empty directory: %s" % top_dir) ossafe(top_dir.rmdir, "can't rmdir: %s" % top_dir) def validate_members(self, layers): INFO("validating tarball members") for (i, (lh, (fp, members))) in enumerate(layers.items(), start=1): dev_ct = 0 members2 = list(members) # copy b/c we'll alter members for m in members2: TarFile.fix_member_path(m, fp.name) if (m.isdev()): # Device or FIFO: Ignore. dev_ct += 1 members.remove(m) continue elif (m.issym()): # Symlink: Nothing to change, but accept it. pass elif (m.islnk()): # Hard link: Fail if pointing outside top level. (Note that we # let symlinks point wherever they want, because they aren't # interpreted until run time in a container.) self.validate_tar_link(fp.name, m.name, m.linkname) elif (m.isdir()): # Directory: Fix bad permissions (hello, Red Hat). m.mode |= 0o700 elif (m.isfile()): # Regular file: Fix bad permissions (HELLO RED HAT!!). m.mode |= 0o600 else: FATAL("unknown member type: %s" % m.name) TarFile.fix_member_uidgid(m) if (dev_ct > 0): INFO("layer %d/%d: %s: ignored %d devices and/or FIFOs" % (i, len(layers), lh[:7], dev_ct)) def validate_tar_link(self, filename, path, target): """Reject hard link targets outside the tar top level by aborting the program.""" if (len(target) > 0 and target[0] == "/"): FATAL("rejecting absolute hard link target: %s: %s -> %s" % (filename, path, target)) if (".." in os.path.normpath(path + "/" + target).split("/")): FATAL("rejecting too many up-levels: %s: %s -> %s" % (filename, path, target)) def whiteout_rm_prefix(self, layers, max_i, prefix): """Ignore members of all layers from 1 to max_i inclusive that have path prefix of prefix. For example, if prefix is foo/bar, then ignore foo/bar and foo/bar/baz but not foo/barbaz. Return count of members ignored.""" TRACE("finding members with prefix: %s" % prefix) prefix = os.path.normpath(prefix) # "./foo" == "foo" ignore_ct = 0 for (i, (lh, (fp, members))) in enumerate(layers.items(), start=1): if (i > max_i): break members2 = list(members) # copy b/c we'll alter members for m in members2: if (prefix_path(prefix, m.name)): ignore_ct += 1 members.remove(m) TRACE("layer %d/%d: %s: ignoring %s" % (i, len(layers), lh[:7], m.name)) return ignore_ct def whiteouts_resolve(self, layers): """Resolve whiteouts. See: https://github.com/opencontainers/image-spec/blob/master/layer.md""" INFO("resolving whiteouts") for (i, (lh, (fp, members))) in enumerate(layers.items(), start=1): wo_ct = 0 ig_ct = 0 members2 = list(members) # copy b/c we'll alter members for m in members2: dir_ = os.path.dirname(m.name) filename = os.path.basename(m.name) if (filename.startswith(".wh.")): wo_ct += 1 members.remove(m) if (filename == ".wh..wh..opq"): # "Opaque whiteout": remove contents of dir_. DEBUG("found opaque whiteout: %s" % m.name) ig_ct += self.whiteout_rm_prefix(layers, i - 1, dir_) else: # "Explicit whiteout": remove same-name file without ".wh.". DEBUG("found explicit whiteout: %s" % m.name) ig_ct += self.whiteout_rm_prefix(layers, i - 1, dir_ + "/" + filename[4:]) if (wo_ct > 0): VERBOSE("layer %d/%d: %s: %d whiteouts; %d members ignored" % (i, len(layers), lh[:7], wo_ct, ig_ct)) def unpack_create_ok(self): """Ensure the unpack directory can be created. If the unpack directory is already an image, remove it.""" if (not self.unpack_exist_p): VERBOSE("creating new image: %s" % self.unpack_path) else: if (not os.path.isdir(self.unpack_path)): FATAL("can't flatten: %s exists but is not a directory" % self.unpack_path) if (not self.unpacked_p(self.unpack_path)): FATAL("can't flatten: %s exists but does not appear to be an image" % self.unpack_path) VERBOSE("replacing existing image: %s" % self.unpack_path) rmtree(self.unpack_path) def unpack_delete(self): VERBOSE("unpack path: %s" % self.unpack_path) if (not self.unpack_exist_p): FATAL("%s image not found" % self.ref) if (self.unpacked_p(self.unpack_path)): INFO("deleting image: %s" % self.ref) rmtree(self.unpack_path) else: FATAL("storage directory seems broken: not an image: %s" % self.ref) def unpack_create(self): "Ensure the unpack directory exists, replacing or creating if needed." self.unpack_create_ok() mkdir(self.unpack_path) class Image_Ref: """Reference to an image in a remote repository. The constructor takes one argument, which is interpreted differently depending on type: None or omitted... Build an empty Image_Ref (all fields None). string ........... Parse it; see FAQ for syntax. Can be either the standard form (e.g., as in a FROM instruction) or our filename form with percents replacing slashes. Lark parse tree .. Must be same result as parsing a string. This allows the parse step to be embedded in a larger parse (e.g., a Dockerfile). Warning: References containing a hostname without a dot and no port cannot be round-tripped through a string, because the hostname will be assumed to be a path component.""" __slots__ = ("host", "port", "path", "name", "tag", "digest") # Reference parser object. Instantiating a parser took 100ms when we tested # it, which means we can't really put it in a loop. But, at parse time, # "lark" may refer to a dummy module (see above), so we can't populate the # parser here either. We use a class varible and populate it at the time of # first use. parser = None def __init__(self, src=None): self.host = None self.port = None self.path = [] self.name = None self.tag = None self.digest = None if (isinstance(src, str)): src = self.parse(src) if (isinstance(src, lark.tree.Tree)): self.from_tree(src) elif (src is not None): assert False, "unsupported initialization type" def __str__(self): out = "" if (self.host is not None): out += self.host if (self.port is not None): out += ":" + str(self.port) if (self.host is not None): out += "/" out += self.path_full if (self.tag is not None): out += ":" + self.tag if (self.digest is not None): out += "@sha256:" + self.digest return out @classmethod def parse(class_, s): if (class_.parser is None): class_.parser = lark.Lark("?start: image_ref\n" + GRAMMAR, parser="earley", propagate_positions=True) if ("%" in s): s = s.replace("%", "/") hint="https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference" try: tree = class_.parser.parse(s) except lark.exceptions.UnexpectedInput as x: if (x.column == -1): FATAL("image ref syntax, at end: %s" % s, hint); else: FATAL("image ref syntax, char %d: %s" % (x.column, s), hint) except lark.exceptions.UnexpectedEOF as x: # We get UnexpectedEOF because of Lark issue #237. This exception # doesn't have a column location. FATAL("image ref syntax, at end: %s" % s, hint) DEBUG(tree.pretty()) return tree @property def as_verbose_str(self): def fmt(x): if (x is None): return None else: return repr(x) return """\ as string: %s for filename: %s fields: host %s port %s path %s name %s tag %s digest %s\ """ % tuple( [str(self), self.for_path] + [fmt(i) for i in (self.host, self.port, self.path, self.name, self.tag, self.digest)]) @property def canonical(self): "Copy of self with all the defaults filled in." ref = self.copy() ref.defaults_add() return ref @property def for_path(self): return str(self).replace("/", "%") @property def path_full(self): out = "" if (len(self.path) > 0): out += "/".join(self.path) + "/" out += self.name return out @property def version(self): if (self.tag is not None): return self.tag if (self.digest is not None): return "sha256:" + self.digest assert False, "version invalid with no tag or digest" def copy(self): "Return an independent copy of myself." return copy.deepcopy(self) def defaults_add(self): "Set defaults for all empty fields." if (self.host is None): if ("CH_REGY_DEFAULT_HOST" not in os.environ): self.host = "registry-1.docker.io" else: self.host = os.getenv("CH_REGY_DEFAULT_HOST") self.port = int(os.getenv("CH_REGY_DEFAULT_PORT", 443)) prefix = os.getenv("CH_REGY_PATH_PREFIX") if (prefix is not None): self.path = prefix.split("/") + self.path if (self.port is None): self.port = 443 if (self.host == "registry-1.docker.io" and len(self.path) == 0): # FIXME: For Docker Hub only, images with no path need a path of # "library" substituted. Need to understand/document the rules here. self.path = ["library"] if (self.tag is None and self.digest is None): self.tag = "latest" def from_tree(self, t): self.host = tree_child_terminal(t, "ir_hostport", "IR_HOST") self.port = tree_child_terminal(t, "ir_hostport", "IR_PORT") if (self.port is not None): self.port = int(self.port) self.path = list(tree_child_terminals(t, "ir_path", "IR_PATH_COMPONENT")) self.name = tree_child_terminal(t, "ir_name", "IR_PATH_COMPONENT") self.tag = tree_child_terminal(t, "ir_tag", "IR_TAG") self.digest = tree_child_terminal(t, "ir_digest", "HEX_STRING") # Resolve grammar ambiguity for hostnames w/o dot or port. if ( self.host is not None and "." not in self.host and self.port is None): self.path.insert(0, self.host) self.host = None class OrderedSet(collections.abc.MutableSet): # Note: The superclass provides basic implementations of all the other # methods. I didn't evaluate any of these. __slots__ = ("data",) def __init__(self, others=None): self.data = collections.OrderedDict() if (others is not None): self.data.update((i, None) for i in others) def __contains__(self, item): return (item in self.data) def __iter__(self): return iter(self.data.keys()) def __len__(self): return len(self.data) def __repr__(self): return "%s(%s)" % (self.__class__.__name__, list(iter(self))) def add(self, x): self.data[x] = None def clear(self): # Superclass provides an implementation but warns it's slow (and it is). self.data.clear() def discard(self, x): self.data.pop(x, None) class Path(pathlib.PosixPath): """Stock Path objects have the very weird property that appending an *absolute* path to an existing path ignores the left operand, leaving only the absolute right operand: >>> import pathlib >>> a = pathlib.Path("/foo/bar") >>> a.joinpath("baz") PosixPath('/foo/bar/baz') >>> a.joinpath("/baz") PosixPath('/baz') This is contrary to long-standing UNIX/POSIX, where extra slashes in a path are ignored, e.g. the path "foo//bar" is equivalent to "foo/bar". It seems to be inherited from os.path.join(). Even with the relatively limited use of Path objects so far, this has caused quite a few bugs. IMO it's too difficult and error-prone to manually manage whether paths are absolute or relative. Thus, this subclass introduces a new operator "//" which does the right thing, i.e., if the right operand is absolute, that fact is ignored. E.g.: >>> a = Path("/foo/bar") >>> a.joinpath_posix("baz") Path('/foo/bar/baz') >>> a.joinpath_posix("/baz") Path('/foo/bar/baz') >>> a // "/baz" Path('/foo/bar/baz') >>> "/baz" // a Path('/baz/foo/bar') We introduce a new operator because it seemed like too subtle a change to the existing operator "/" (which we disable to avoid getting burned here in Charliecloud). An alternative was "+" like strings, but that led to silently wrong results when the paths *were* strings (components concatenated with no slash).""" def __floordiv__(self, right): return self.joinpath_posix(right) def __rfloordiv__(self, left): left = Path(left) return left.joinpath_posix(self) def __truediv__(self, right): return NotImplemented def __rtruediv__(self, left): return NotImplemented def joinpath_posix(self, *others): others2 = list() for other in others: other = Path(other) if (other.is_absolute()): other = other.relative_to("/") assert (not other.is_absolute()) others2.append(other) return self.joinpath(*others2) class Progress: """Simple progress meter for countable things that updates at most once per second. Writes first update upon creation. If length is None, then just count up (this is for registries like Red Hat that sometimes don't provide a Content-Length header for blobs). The purpose of the divisor is to allow counting things that are much more numerous than what we want to display; for example, to count bytes but report MiB, use a divisor of 1048576. By default, moves to a new line at first update, then assumes exclusive control of this line in the terminal, rewriting the line as needed. If output is not a TTY or global log_festoon is set, each update is one log entry with no overwriting.""" __slots__ = ("display_last", "divisor", "msg", "length", "unit", "overwrite_p", "precision", "progress") def __init__(self, msg, unit, divisor, length): self.msg = msg self.unit = unit self.divisor = divisor self.length = length if (not os.isatty(log_fp.fileno()) or log_festoon): self.overwrite_p = False # updates all use same line else: self.overwrite_p = True # each update on new line self.precision = 1 if self.divisor >= 1000 else 0 self.progress = 0 self.display_last = float("-inf") self.update(0) def update(self, increment, last=False): now = time.monotonic() self.progress += increment if (last or now - self.display_last > 1): if (self.length is None): line = ("%s: %.*f %s" % (self.msg, self.precision, self.progress / self.divisor, self.unit)) else: ct = "%.*f/%.*f" % (self.precision, self.progress / self.divisor, self.precision, self.length / self.divisor) pct = "%d%%" % (100 * self.progress / self.length) if (ct == "0.0/0.0"): # too small, don't print count line = "%s: %s" % (self.msg, pct) else: line = ("%s: %s %s (%s)" % (self.msg, ct, self.unit, pct)) INFO(line, end=("\r" if self.overwrite_p else "\n")) self.display_last = now def done(self): self.update(0, True) if (self.overwrite_p): INFO("") # newline to release display line class Progress_Reader: """Wrapper around a binary file object to maintain a progress meter while reading.""" __slots__ = ("fp", "msg", "progress") def __init__(self, fp, msg): self.fp = fp self.msg = msg self.progress = None def __iter__(self): return self def __next__(self): data = self.read(HTTP_CHUNK_SIZE) if (len(data) == 0): raise StopIteration return data def close(self): if (self.progress is not None): self.progress.done() ossafe(self.fp.close, "can't close: %s" % self.fp.name) def read(self, size=-1): data = ossafe(self.fp.read, "can't read: %s" % self.fp.name, size) self.progress.update(len(data)) return data def seek(self, *args): raise io.UnsupportedOperation def start(self): # Get file size. This seems awkward, but I wasn't able to find anything # better. See: https://stackoverflow.com/questions/283707 old_pos = self.fp.tell() assert (old_pos == 0) # math will be wrong if this isn't true length = self.fp.seek(0, os.SEEK_END) self.fp.seek(old_pos) self.progress = Progress(self.msg, "MiB", 2**20, length) class Progress_Writer: """Wrapper around a binary file object to maintain a progress meter while data are written.""" __slots__ = ("fp", "msg", "path", "progress") def __init__(self, path, msg): self.msg = msg self.path = path self.progress = None def close(self): if (self.progress is not None): self.progress.done() ossafe(self.fp.close, "can't close: %s" % self.path) def start(self, length): self.progress = Progress(self.msg, "MiB", 2**20, length) self.fp = open_(self.path, "wb") def write(self, data): self.progress.update(len(data)) ossafe(self.fp.write, "can't write: %s" % self.path, data) class Registry_HTTP: """Transfers image data to and from a remote image repository via HTTPS. Note that ref refers to the *remote* image. Objects of this class have no information about the local image.""" # Note that with some registries, authentication is required even for # anonymous downloads of public images. In this case, we just fetch an # authentication token anonymously. __slots__ = ("auth", "creds", "ref", "session") # https://stackoverflow.com/a/58055668 class Bearer_Auth(requests.auth.AuthBase): __slots__ = ("token",) def __init__(self, token): self.token = token def __call__(self, req): req.headers["Authorization"] = "Bearer %s" % self.token return req def __str__(self): return ("Bearer %s" % self.token[:32]) class Null_Auth(requests.auth.AuthBase): def __call__(self, req): return req def __str__(self): return "no authorization" def __init__(self, ref): # Need an image ref with all the defaults filled in. self.ref = ref.canonical self.auth = self.Null_Auth() self.creds = Credentials() self.session = None # This is commented out because it prints full request and response # bodies to standard output (not stderr), which overwhelms the terminal. # Normally, a better debugging approach if you need this is to sniff the # connection using e.g. mitmproxy. #if (verbose >= 2): # http.client.HTTPConnection.debuglevel = 1 def _url_of(self, type_, address): "Return an appropriate repository URL." url_base = "https://%s:%d/v2" % (self.ref.host, self.ref.port) return "/".join((url_base, self.ref.path_full, type_, address)) def authenticate_basic(self, res, auth_d): VERBOSE("authenticating using Basic") if ("realm" not in auth_d): FATAL("WWW-Authenticate missing realm") (username, password) = self.creds.get() self.auth = requests.auth.HTTPBasicAuth(username, password) def authenticate_bearer(self, res, auth_d): VERBOSE("authenticating using Bearer") # Registries vary in what they put in WWW-Authenticate. Specifically, # for everything except NGC, we get back realm, service, and scope. NGC # just gives service and scope. We need realm because it's the URL to # use for a token. scope also seems critical, so check we have that. # Otherwise, just give back all the keys we got. for k in ("realm", "scope"): if (k not in auth_d): FATAL("WWW-Authenticate missing key: %s" % k) params = { (k,v) for (k,v) in auth_d.items() if k != "realm" } # Request anonymous auth token first, but only for the “safe” methods. # We assume no registry will accept anonymous pushes. This is because # GitLab registries don't seem to honor the scope argument (issue #975); # e.g., for scope “repository:reidpr/foo/00_tiny:pull,push”, GitLab # 13.6.3-ee will hand out an anonymous token, but that token is rejected # with ‘error="insufficient_scope"’ when the request is re-tried. token = None if (res.request.method not in ("GET", "HEAD")): VERBOSE("won't request anonymous token for %s" % res.request.method) else: VERBOSE("requesting anonymous auth token") res = self.request_raw("GET", auth_d["realm"], {200,401,403}, params=params) if (res.status_code != 200): VERBOSE("anonymous access rejected") else: token = res.json()["token"] # If that failed or was inappropriate, try for an authenticated token. if (token is None): (username, password) = self.creds.get() auth = requests.auth.HTTPBasicAuth(username, password) res = self.request_raw("GET", auth_d["realm"], {200}, auth=auth, params=params) token = res.json()["token"] VERBOSE("received auth token: %s" % (token[:32])) self.auth = self.Bearer_Auth(token) def authorize(self, res): "Authorize using the WWW-Authenticate header in failed response res." VERBOSE("authorizing") assert (res.status_code == 401) # Get authentication instructions. if ("WWW-Authenticate" not in res.headers): FATAL("WWW-Authenticate header not found") auth_h = res.headers["WWW-Authenticate"] VERBOSE("WWW-Authenticate raw: %s" % auth_h) # Parse the WWW-Authenticate header. Apparently doing this correctly is # pretty hard. We use a non-compliant regex kludge [1,2]. Alternatives # include putting the grammar into Lark (this can be gotten by reading # the RFCs enough) or using the www-authenticate library [3]. # # [1]: https://stackoverflow.com/a/1349528 # [2]: https://stackoverflow.com/a/1547940 # [3]: https://pypi.org/project/www-authenticate auth_type = auth_h.split()[0] auth_d = dict(re.findall(r'(?:(\w+)[:=] ?"?([\w.~:/?#@!$&()*+,;=\'\[\]-]+)"?)+', auth_h)) VERBOSE("WWW-Authenticate parsed: %s %s" % (auth_type, auth_d)) # Dispatch to proper method. if (auth_type == "Bearer"): self.authenticate_bearer(res, auth_d) elif (auth_type == "Basic"): self.authenticate_basic(res, auth_d) else: FATAL("unknown auth type: %s" % auth_h) def blob_exists_p(self, digest): """Return true if a blob with digest (hex string) exists in the remote repository, false otherwise.""" # Gotchas: # # 1. HTTP 401 means both unauthorized *or* not found, I assume to avoid # information leakage about the presence of stuff one isn't allowed # to see. By the time it gets here, we should be authenticated, so # interpret it as not found. # # 2. Sometimes we get 301 Moved Permanently. It doesn't bubble up to # here because requests.request() follows redirects. However, # requests.head() does not follow redirects, and it seems like a # weird status, so I worry there is a gotcha I haven't figured out. url = self._url_of("blobs", "sha256:%s" % digest) res = self.request("HEAD", url, {200,401,404}) return (res.status_code == 200) def blob_to_file(self, digest, path, msg): "GET the blob with hash digest and save it at path." # /v2/library/hello-world/blobs/ url = self._url_of("blobs", "sha256:" + digest) sw = Progress_Writer(path, msg) self.request("GET", url, out=sw) sw.close() def blob_upload(self, digest, data, note=""): """Upload blob with hash digest to url. data is the data to upload, and can be anything requests can handle; if it's an open file, then it's wrapped in a Progress_Reader object. note is a string to prepend to the log messages; default empty string.""" INFO("%s%s: checking if already in repository" % (note, digest[:7])) # 1. Check if blob already exists. If so, stop. if (self.blob_exists_p(digest)): INFO("%s%s: already present" % (note, digest[:7])) return msg = "%s%s: not present, uploading" % (note, digest[:7]) if (isinstance(data, io.IOBase)): data = Progress_Reader(data, msg) data.start() else: INFO(msg) # 2. Get upload URL for blob. url = self._url_of("blobs", "uploads/") res = self.request("POST", url, {202}) # 3. Upload blob. We do a "monolithic" upload (i.e., send all the # content in a single PUT request) as opposed to a "chunked" upload # (i.e., send data in multiple PATCH requests followed by a PUT request # with no body). url = res.headers["Location"] res = self.request("PUT", url, {201}, data=data, params={ "digest": "sha256:%s" % digest }) if (isinstance(data, Progress_Reader)): data.close() # 4. Verify blob now exists. if (not self.blob_exists_p(digest)): FATAL("blob just uploaded does not exist: %s" % digest[:7]) def close(self): if (self.session is not None): self.session.close() def config_upload(self, config): "Upload config (sequence of bytes)." self.blob_upload(bytes_hash(config), config, "config: ") def fatman_to_file(self, path, msg): """GET the manifest for self.image and save it at path. This seems to have three possible results: 1. HTTP 200, and body is a fat manifest: image exists and is architecture-aware. 2. HTTP 200, but body is a skinny manifest: image exists but is not architecture-aware. 3. HTTP 401/404: image does not exist. This method raises Not_In_Registry_Error in case 3. The caller is responsible for distinguishing cases 1 and 2.""" url = self._url_of("manifests", self.ref.version) pw = Progress_Writer(path, msg) # Including TYPES_MANIFEST avoids the server trying to convert its v2 # manifest to a v1 manifest, which currently fails for images # Charliecloud pushes. The error in the test registry is “empty history # when trying to create schema1 manifest”. accept = "%s;q=0.5" % ",".join( list(TYPES_INDEX.values()) + list(TYPES_MANIFEST.values())) res = self.request("GET", url, out=pw, statuses={200, 401, 404}, headers={ "Accept" : accept }) pw.close() if (res.status_code != 200): DEBUG(res.content) raise Not_In_Registry_Error() def layer_from_file(self, digest, path, note=""): "Upload gzipped tarball layer at path, which must have hash digest." # NOTE: We don't verify the digest b/c that means reading the whole file. VERBOSE("layer tarball: %s" % path) fp = open_(path, "rb") # open file avoids reading it all into memory self.blob_upload(digest, fp, note) ossafe(fp.close, "can't close: %s" % path) def manifest_to_file(self, path, msg, digest=None): """GET manifest for the image and save it at path. If digest is given, use that to fetch the appropriate architecture; otherwise, fetch the default manifest using the exising image reference.""" if (digest is None): digest = self.ref.version else: digest = "sha256:" + digest url = self._url_of("manifests", digest) pw = Progress_Writer(path, msg) accept = "%s;q=0.5" % ",".join(TYPES_MANIFEST.values()) res = self.request("GET", url, out=pw, statuses={200, 401, 404}, headers={ "Accept" : accept }) pw.close() if (res.status_code != 200): DEBUG(res.content) raise Not_In_Registry_Error() def manifest_upload(self, manifest): "Upload manifest (sequence of bytes)." # Note: The manifest is *not* uploaded as a blob. We just do one PUT. INFO("manifest: uploading") url = self._url_of("manifests", self.ref.tag) self.request("PUT", url, {201}, data=manifest, headers={ "Content-Type": TYPES_MANIFEST["docker2"] }) def request(self, method, url, statuses={200}, out=None, **kwargs): """Request url using method and return the response object. If statuses is given, it is set of acceptable response status codes, defaulting to {200}; any other response is a fatal error. If out is given, response content will be streamed to this Progress_Writer object and must be non-zero length. Use current session if there is one, or start a new one if not. If authentication fails (or isn't initialized), then authenticate and re-try the request.""" self.session_init_maybe() VERBOSE("auth: %s" % self.auth) if (out is not None): kwargs["stream"] = True res = self.request_raw(method, url, statuses | {401}, **kwargs) if (res.status_code == 401): VERBOSE("HTTP 401 unauthorized") self.authorize(res) VERBOSE("retrying with auth: %s" % self.auth) res = self.request_raw(method, url, statuses, **kwargs) if (out is not None and res.status_code == 200): try: length = int(res.headers["Content-Length"]) except KeyError: length = None except ValueError: FATAL("invalid Content-Length in response") out.start(length) for chunk in res.iter_content(HTTP_CHUNK_SIZE): out.write(chunk) return res def request_raw(self, method, url, statuses, auth=None, **kwargs): """Request url using method. statuses is an iterable of acceptable response status codes; any other response is a fatal error. Return the requests.Response object. Session must already exist. If auth arg given, use it; otherwise, use object's stored authentication if initialized; otherwise, use no authentication.""" VERBOSE("%s: %s" % (method, url)) if (auth is None): auth = self.auth try: res = self.session.request(method, url, auth=auth, **kwargs) VERBOSE("response status: %d" % res.status_code) if (res.status_code not in statuses): FATAL("%s failed; expected status %s but got %d: %s" % (method, statuses, res.status_code, res.reason)) except requests.exceptions.RequestException as x: FATAL("%s failed: %s" % (method, x)) # Log the rate limit headers if present. for h in ("RateLimit-Limit", "RateLimit-Remaining"): if (h in res.headers): VERBOSE("%s: %s" % (h, res.headers[h])) return res def session_init_maybe(self): "Initialize session if it's not initialized; otherwise do nothing." if (self.session is None): VERBOSE("initializing session") self.session = requests.Session() self.session.verify = tls_verify class Storage: """Source of truth for all paths within the storage directory. Do not compute any such paths elsewhere!""" __slots__ = ("root",) def __init__(self, storage_cli): self.root = storage_cli if (self.root is None): self.root = self.root_env() if (self.root is None): self.root = self.root_default() self.root = Path(self.root) @property def download_cache(self): return self.root // "dlcache" @property def unpack_base(self): return self.root // "img" @property def upload_cache(self): return self.root // "ulcache" @property def valid_p(self): # FIXME: require version file too when appropriate (#1147) """Return True if storage present and seems valid, False otherwise.""" return (os.path.isdir(self.unpack_base) and os.path.isdir(self.download_cache)) @property def version_file(self): return self.root // "version" @staticmethod def root_default(): # FIXME: Perhaps we should use getpass.getuser() instead of the $USER # environment variable? It seems a lot more robust. But, (1) we'd have # to match it in some scripts and (2) it makes the documentation less # clear becase we have to explain the fallback behavior. return Path("/var/tmp/%s.ch" % user()) @staticmethod def root_env(): if ("CH_GROW_STORAGE" in os.environ): # Avoid surprises if user still has $CH_GROW_STORAGE set (see #906). FATAL("$CH_GROW_STORAGE no longer supported; use $CH_IMAGE_STORAGE") try: return os.environ["CH_IMAGE_STORAGE"] except KeyError: return None def init(self): """Ensure the storage directory exists, contains all the appropriate top-level directories & metadata, and is the appropriate version.""" self.init_move_old() # see issues #1160 and #1243 if (not os.path.isdir(self.root)): op = "initializing" v_found = None else: op = "upgrading" if (not self.valid_p): FATAL("storage directory seems invalid: %s" % self.root) if (os.path.isfile(self.version_file)): v_found = int(file_read_all(self.version_file)) else: v_found = 1 if (v_found == STORAGE_VERSION): VERBOSE("found storage dir v%d: %s" % (STORAGE_VERSION, self.root)) elif (v_found in {None, 1}): # initialize/upgrade INFO("%s storage directory: v%d %s" % (op, STORAGE_VERSION, self.root)) mkdir(self.root) mkdir(self.download_cache) mkdir(self.unpack_base) mkdir(self.upload_cache) file_write(self.version_file, "%d\n" % STORAGE_VERSION) else: # can't upgrade FATAL("incompatible storage directory v%d: %s" % (v_found, self.root), 'you can delete and re-initialize with "ch-image reset"') def init_move_old(self): """If appropriate, move storage directory from old default path to new. See issues #1160 and #1243.""" old = Storage("/var/tmp/%s/ch-image" % user()) moves = ( "dlcache", "img", "ulcache", "version" ) if (self.root != self.root_default()): return # do nothing silently unless using default storage dir if (not os.path.exists(old.root)): return # do nothing silently unless old default storage dir exists if (not old.valid_p): WARNING("storage dir: invalid at old default, ignoring: %s" % old.root) return INFO("storage dir: valid at old default: %s" % old.root) if (not os.path.exists(self.root)): mkdir(self.root) elif (self.valid_p): WARNING("storage dir: also valid at new default: %s" % self.root, hint="consider deleting the old one") return elif (not os.path.isdir(self.root)): return # new isn't a directory; init/upgrade code will error later elif (any(os.path.exists(self.root // i) for i in moves)): return # new is broken; init/upgrade should error later # Now we know (1) the old storage exists and is valid and (2) the new # storage exists, is a directory, and contains none of the files we'd # move. However, it *may* contain subdirectories other parts of # Charliecloud care about, e.g. "mnt" for ch-run. INFO("storage dir: moving to new default path: %s" % self.root) for i in moves: src = old.root // i dst = self.root // i if (os.path.exists(src)): DEBUG("moving: %s -> %s" % (src, dst)) try: shutil.move(src, dst) except OSError as x: FATAL("can't move: %s -> %s: %s" % (x.filename, x.filename2, x.strerror)) ossafe(os.rmdir, "can't rmdir: %s" % old.root, old.root) if (len(ossafe(os.listdir, "can't list: %s" % old.root.parent, old.root.parent)) == 0): WARNING("parent of old storage dir now empty: %s" % old.root.parent, hint="consider deleting it") def manifest_for_download(self, image_ref, digest): if (digest is None): digest = "skinny" return ( self.download_cache // ("%s%%%s.manifest.json" % (image_ref.for_path, digest))) def fatman_for_download(self, image_ref): return self.download_cache // ("%s.fat.json" % image_ref.for_path) def reset(self): if (self.valid_p): rmtree(self.root) self.init() # largely for debugging else: FATAL("%s not a builder storage" % (self.root)); def unpack(self, image_ref): return self.unpack_base // image_ref.for_path class TarFile(tarfile.TarFile): # This subclass augments tarfile.TarFile to add safety code. While the # tarfile module docs [1] say “do not use this class [TarFile] directly”, # they also say “[t]he tarfile.open() function is actually a shortcut” to # class method TarFile.open(), and the source code recommends subclassing # TarFile [2]. # # It's here because the standard library class has problems with symlinks # and replacing one file type with another; see issues #819 and #825 as # well as multiple unfixed Python bugs [e.g. 3,4,5]. We work around this # with manual deletions. # # [1]: https://docs.python.org/3/library/tarfile.html # [2]: https://github.com/python/cpython/blob/2bcd0fe7a5d1a3c3dd99e7e067239a514a780402/Lib/tarfile.py#L2159 # [3]: https://bugs.python.org/issue35483 # [4]: https://bugs.python.org/issue19974 # [5]: https://bugs.python.org/issue23228 # Need new method name because add() is called recursively and we don't # want those internal calls to get our special sauce. def add_(self, name, **kwargs): kwargs["filter"] = self.fix_member_uidgid super().add(name, **kwargs) def clobber(self, targetpath, regulars=False, symlinks=False, dirs=False): assert (regulars or symlinks or dirs) try: st = os.lstat(targetpath) except FileNotFoundError: # We could move this except clause after all the stat.S_IS* calls, # but that risks catching FileNotFoundError that came from somewhere # other than lstat(). st = None except OSError as x: FATAL("can't lstat: %s" % targetpath, targetpath) if (st is not None): if (stat.S_ISREG(st.st_mode)): if (regulars): unlink(targetpath) elif (stat.S_ISLNK(st.st_mode)): if (symlinks): unlink(targetpath) elif (stat.S_ISDIR(st.st_mode)): if (dirs): rmtree(targetpath) else: FATAL("invalid file type 0%o in previous layer; see inode(7): %s" % (stat.S_IFMT(st.st_mode), targetpath)) @staticmethod def fix_member_path(ti, tarball_path): """Repair or reject (by aborting the program) ti.name (member path) if it climbs outside the top level or has some other problem.""" old_name = ti.name if (len(ti.name) == 0): FATAL("rejecting zero-length member path: %s" % (tarball_path)) if (".." in ti.name.split("/")): FATAL("rejecting member path with up-level: %s: %s" % (filename, path)) if (ti.name[0] == "/"): ti.name = "." + old_name # add dot so we don't have to count slashes if (ti.name != old_name): WARNING("fixed member path: %s -> %s" % (old_name, ti.name)) @staticmethod def fix_member_uidgid(ti): assert (ti.name[0] != "/") # absolute paths unsafe but shouldn't happen if (not (ti.isfile() or ti.isdir() or ti.issym() or ti.islnk())): FATAL("invalid file type: %s" % ti.name) ti.uid = 0 ti.uname = "root" ti.gid = 0 ti.gname = "root" if (ti.mode & stat.S_ISUID): VERBOSE("stripping unsafe setuid bit: %s" % ti.name) ti.mode &= ~stat.S_ISUID if (ti.mode & stat.S_ISGID): VERBOSE("stripping unsafe setgid bit: %s" % ti.name) ti.mode &= ~stat.S_ISGID return ti def makedir(self, tarinfo, targetpath): # Note: This gets called a lot, e.g. once for each component in the path # of the member being extracted. TRACE("makedir: %s" % targetpath) self.clobber(targetpath, regulars=True, symlinks=True) super().makedir(tarinfo, targetpath) def makefile(self, tarinfo, targetpath): TRACE("makefile: %s" % targetpath) self.clobber(targetpath, symlinks=True, dirs=True) super().makefile(tarinfo, targetpath) def makelink(self, tarinfo, targetpath): TRACE("makelink: %s -> %s" % (targetpath, tarinfo.linkname)) self.clobber(targetpath, regulars=True, symlinks=True, dirs=True) super().makelink(tarinfo, targetpath) ## Supporting functions ## def DEBUG(*args, **kwargs): if (verbose >= 2): log(*args, color="38;5;6m", **kwargs) # dark cyan (same as 36m) def ERROR(*args, **kwargs): log(*args, color="1;31m", prefix="error: ", **kwargs) # bold red def FATAL(*args, **kwargs): ERROR(*args, **kwargs) sys.exit(1) def INFO(*args, **kwargs): "Note: Use print() for output; this function is for logging." log(*args, color="33m", **kwargs) # yellow def TRACE(*args, **kwargs): if (verbose >= 3): log(*args, color="38;5;6m", **kwargs) # dark cyan (same as 36m) def VERBOSE(*args, **kwargs): if (verbose >= 1): log(*args, color="38;5;14m", **kwargs) # light cyan (1;36m but not bold) def WARNING(*args, **kwargs): log(*args, color="31m", prefix="warning: ", **kwargs) # red def arch_host_get(): "Return the registry architecture of the host." arch_uname = platform.machine() VERBOSE("host architecture from uname: %s" % arch_uname) try: arch_registry = ARCH_MAP[arch_uname] except KeyError: FATAL("unknown host architecture: %s" % arch_uname, BUG_REPORT_PLZ) VERBOSE("host architecture for registry: %s" % arch_registry) return arch_registry def bytes_hash(data): "Return the hash of data, as a hex string with no leading algorithm tag." h = hashlib.sha256() h.update(data) return h.hexdigest() def ch_run_modify(img, args, env, workdir="/", binds=[], fail_ok=False): # Note: If you update these arguments, update the ch-image(1) man page too. args = ( [CH_BIN + "/ch-run"] + ["-w", "-u0", "-g0", "--no-home", "--no-passwd", "--cd", workdir] + sum([["-b", i] for i in binds], []) + [img, "--"] + args) return cmd(args, env, fail_ok) def cmd(args, env=None, fail_ok=False): VERBOSE("environment: %s" % env) VERBOSE("executing: %s" % args) cp = subprocess.run(args, env=env, stdin=subprocess.DEVNULL) if (not fail_ok and cp.returncode): FATAL("command failed with code %d: %s" % (cp.returncode, args[0])) return cp.returncode def color_reset(*fps): for fp in fps: color_set("0m", fp) def color_set(color, fp): if (fp.isatty()): print("\033[" + color, end="", flush=True, file=fp) def copy2(src, dst, **kwargs): "Wrapper for shutil.copy2() with error checking." ossafe(shutil.copy2, "can't copy: %s -> %s" % (src, dst), src, dst, **kwargs) def copytree(*args, **kwargs): "Wrapper for shutil.copytree() that exits the program on the first error." shutil.copytree(copy_function=copy2, *args, **kwargs) def dependencies_check(): """Check more dependencies. If any dependency problems found, here or above (e.g., lark module checked at import time), then complain and exit.""" # enforce Python minimum version vsys_py = sys.version_info[:3] # 4th element is a string if (vsys_py < PYTHON_MIN): vmin_py_str = ".".join(("%d" % i) for i in PYTHON_MIN) vsys_py_str = ".".join(("%d" % i) for i in vsys_py) depfails.append(("bad", ("need Python %s but running under %s: %s" % (vmin_py_str, vsys_py_str, sys.executable)))) # report problems & exit for (p, v) in depfails: ERROR("%s dependency: %s" % (p, v)) if (len(depfails) > 0): sys.exit(1) def digest_trim(d): """Remove the algorithm tag from digest d and return the rest. >>> digest_trim("sha256:foobar") 'foobar' Note: Does not validate the form of the rest.""" try: return d.split(":", maxsplit=1)[1] except AttributeError: FATAL("not a string: %s" % repr(d)) except IndexError: FATAL("no algorithm tag: %s" % d) def done_notify(): if (user() == "jogas"): INFO("!!! KOBE !!!") else: INFO("done") def file_ensure_exists(path): """If the final element of path exists (without dereferencing if it's a symlink), do nothing; otherwise, create it as an empty regular file.""" if (not os.path.lexists(path)): fp = open_(path, "w") fp.close() def file_gzip(path, args=[]): """Run pigz if it's available, otherwise gzip, on file at path and return the file's new name. Pass args to the gzip executable. This lets us gzip files (a) in parallel if pigz is installed and (b) without reading them into memory.""" path_c = Path(str(path) + ".gz") # On first call, remember first available of pigz and gzip using an # attribute of this function (yes, you can do that lol). if (not hasattr(file_gzip, "gzip")): if (shutil.which("pigz") is not None): file_gzip.gzip = "pigz" elif (shutil.which("gzip") is not None): file_gzip.gzip = "gzip" else: FATAL("can't find path to gzip or pigz") # Remove destination file if it already exists, because gzip --force does # several other things too. (Note: pigz sometimes confusingly reports # "Inappropriate ioctl for device" if destination already exists.) if (os.path.exists(path_c)): unlink(path_c) # Compress. cmd([file_gzip.gzip] + args + [str(path)]) # Zero out GZIP header timestamp, bytes 4–7 zero-indexed inclusive [1], to # ensure layer hash is consistent. See issue #1080. # [1]: https://datatracker.ietf.org/doc/html/rfc1952 §2.3.1 fp = open_(path_c, "r+b") ossafe(fp.seek, "can't seek: %s" % fp, 4) ossafe(fp.write, "can't write: %s" % fp, b'\x00\x00\x00\x00') ossafe(fp.close, "can't close: %s" % fp) return path_c def file_hash(path): """Return the hash of data in file at path, as a hex string with no algorithm tag. File is read in chunks and can be larger than memory.""" fp = open_(path, "rb") h = hashlib.sha256() while True: data = ossafe(fp.read, "can't read: %s" % path, 2**18) if (len(data) == 0): break # EOF h.update(data) ossafe(fp.close, "can't close: %s" % path) return h.hexdigest() def file_read_all(path, text=True): """Return the contents of file at path, or exit with error. If text, read in "rt" mode with UTF-8 encoding; otherwise, read in mode "rb".""" if (text): mode = "rt" encoding = "UTF-8" else: mode = "rb" encoding = None fp = open_(path, mode, encoding=encoding) data = ossafe(fp.read, "can't read: %s" % path) ossafe(fp.close, "can't close: %s" % path) return data def file_size(path, follow_symlinks=False): "Return the size of file at path in bytes." st = ossafe(os.stat, "can't stat: %s" % path, path, follow_symlinks=follow_symlinks) return st.st_size def file_write(path, content, mode=None): if (isinstance(content, str)): content = content.encode("UTF-8") fp = open_(path, "wb") ossafe(fp.write, "can't write: %s" % path, content) if (mode is not None): ossafe(os.chmod, "can't chmod 0%o: %s" % (mode, path)) ossafe(fp.close, "can't close: %s" % path) def grep_p(path, rx): """Return True if file at path contains a line matching regular expression rx, False if it does not.""" rx = re.compile(rx) try: with open(path, "rt") as fp: for line in fp: if (rx.search(line) is not None): return True return False except OSError as x: FATAL("error reading %s: %s" % (path, x.strerror)) def init(cli): # logging global log_festoon, log_fp, verbose assert (0 <= cli.verbose <= 3) verbose = cli.verbose if ("CH_LOG_FESTOON" in os.environ): log_festoon = True file_ = os.getenv("CH_LOG_FILE") if (file_ is not None): verbose = max(verbose_, 1) log_fp = open_(file_, "at") atexit.register(color_reset, log_fp) VERBOSE("verbose level: %d" % verbose) # storage object global storage storage = Storage(cli.storage) # architecture global arch, arch_host assert (cli.arch is not None) arch_host = arch_host_get() if (cli.arch == "host"): arch = arch_host else: arch = cli.arch # misc global password_many, tls_verify password_many = cli.password_many if (cli.tls_no_verify): tls_verify = False rpu = requests.packages.urllib3 rpu.disable_warnings(rpu.exceptions.InsecureRequestWarning) def json_from_file(path, msg): DEBUG("loading JSON: %s: %s" % (msg, path)) text = file_read_all(path) TRACE("text:\n%s" % text) try: data = json.loads(text) DEBUG("result:\n%s" % pprint.pformat(data, indent=2)) except json.JSONDecodeError as x: FATAL("can't parse JSON: %s:%d: %s" % (path, x.lineno, x.msg)) return data def log(msg, hint=None, color=None, prefix="", end="\n"): if (color is not None): color_set(color, log_fp) if (log_festoon): ts = datetime.datetime.now().isoformat(timespec="milliseconds") festoon = ("%5d %s " % (os.getpid(), ts)) else: festoon = "" print(festoon, prefix, msg, sep="", file=log_fp, end=end, flush=True) if (hint is not None): print(festoon, "hint: ", hint, sep="", file=log_fp, flush=True) if (color is not None): color_reset(log_fp) def mkdir(path): TRACE("ensuring directory: %s" % path) try: os.mkdir(path) except FileExistsError as x: if (not os.path.isdir(path)): FATAL("can't mkdir: exists and not a directory: %s" % x.filename) except OSError as x: FATAL("can't mkdir: %s: %s: %s" % (path, x.filename, x.strerror)) def mkdirs(path): TRACE("ensuring directories: %s" % path) try: os.makedirs(path, exist_ok=True) except OSError as x: FATAL("can't mkdir: %s: %s: %s" % (path, x.filename, x.strerror)) def now_utc_iso8601(): return datetime.datetime.utcnow().isoformat(timespec="seconds") + "Z" def open_(path, mode, *args, **kwargs): "Error-checking wrapper for open()." return ossafe(open, "can't open for %s: %s" % (mode, path), path, mode, *args, **kwargs) def ossafe(f, msg, *args, **kwargs): """Call f with args and kwargs. Catch OSError and other problems and fail with a nice error message.""" try: return f(*args, **kwargs) except OSError as x: FATAL("%s: %s" % (msg, x.strerror)) def path_first(path): """Return first component of path, skipping no-op dot components. If path contains *only* no-ops, return None. (Note: In my testing, parsing a string into a Path object took about 2.5µs, so this is plenty fast.)""" try: return Path(path).parts[0] except IndexError: return None def prefix_path(prefix, path): """"Return True if prefix is a parent directory of path. Assume that prefix and path are strings.""" return prefix == path or (prefix + '/' == path[:len(prefix) + 1]) def rmtree(path): if (os.path.isdir(path)): TRACE("deleting directory: %s" % path) try: shutil.rmtree(path) except OSError as x: FATAL("can't recursively delete directory %s: %s: %s" % (path, x.filename, x.strerror)) else: assert False, "unimplemented" def symlink(target, source, clobber=False): if (clobber and os.path.isfile(source)): unlink(source) try: os.symlink(target, source) except FileExistsError: if (not os.path.islink(source)): FATAL("can't symlink: source exists and isn't a symlink: %s" % source) if (os.readlink(source) != target): FATAL("can't symlink: %s exists; want target %s but existing is %s" % (source, target, os.readlink(source))) except OSError as x: FATAL("can't symlink: %s -> %s: %s" % (source, target, x.strerror)) def tree_child(tree, cname): """Locate a descendant subtree named cname using breadth-first search and return it. If no such subtree exists, return None.""" return next(tree_children(tree, cname), None) def tree_child_terminal(tree, cname, tname, i=0): """Locate a descendant subtree named cname using breadth-first search and return its first child terminal named tname. If no such subtree exists, or it doesn't have such a terminal, return None.""" st = tree_child(tree, cname) if (st is not None): return tree_terminal(st, tname, i) else: return None def tree_child_terminals(tree, cname, tname): """Locate a descendant substree named cname using breadth-first search and yield the values of its child terminals named tname. If no such subtree exists, or it has no such terminals, yield an empty sequence.""" for d in tree.iter_subtrees_topdown(): if (d.data == cname): return tree_terminals(d, tname) return [] def tree_children(tree, cname): "Yield children of tree named cname using breadth-first search." for st in tree.iter_subtrees_topdown(): if (st.data == cname): yield st def tree_terminal(tree, tname, i=0): """Return the value of the ith child terminal named tname (zero-based), or None if not found.""" for (j, t) in enumerate(tree_terminals(tree, tname)): if (j == i): return t return None def tree_terminals(tree, tname): """Yield values of all child terminals named tname, or empty list if none found.""" for j in tree.children: if (isinstance(j, lark.lexer.Token) and j.type == tname): yield j.value def tree_terminals_cat(tree, tname): """Return the concatenated values of all child terminals named tname as a string, with no delimiters. If none, return the empty string.""" return "".join(tree_terminals(tree, tname)) def unlink(path, *args, **kwargs): "Error-checking wrapper for os.unlink()." ossafe(os.unlink, "can't unlink: %s" % path, path) def user(): "Return the current username; exit with error if it can't be obtained." try: return os.environ["USER"] except KeyError: FATAL("can't get username: $USER not set") charliecloud-0.26/lib/fakeroot.py000066400000000000000000000333701417231051300170340ustar00rootroot00000000000000import os.path import charliecloud as ch ## Globals ## DEFAULT_CONFIGS = { # General notes: # # 1. Semantics of these configurations. (Character limits are to support # tidy code and message formatting.) # # a. This is a dictionary of configurations, which themselves are # dictionaries. # # b. Key is an arbitrary tag; user-visible. There's no enforced # character set but let's stick with [a-z0-9_] for now and limit to # at most 10 characters. # # c. A configuration has the following keys. # # name ... Human-readable name for the configuration. Max 46 chars. # # match .. Tuple; first item is the name of a file and the second is # a regular expression. If the regex matches any line in the # file, that configuration is used for the image. # # init ... List of tuples containing POSIX shell commands to perform # fakeroot installation and any other initialization steps. # # Item 1: Command to detect if the step is necessary. If the # command exits successfully, the step is already # complete; if unsuccessful, it is still needed. The sense # of the test is so something like "is command FOO # available?", which seems the most common command, does # not require negation. # # The test should be fairly permissive; e.g., if the image # already has a fakeroot implementation installed, but # it's a different one than we would have chosen, the # command should succeed. # # IMPORTANT: This command must have no side effects, # because it is normally run in all matching images, even # if --force is not specified. Note that talking to the # internet is a side effect! # # Item 2: Command to do the init step. # # I.e., to perform each fakeroot initialization step, # ch-image does roughly: # # if ( ! $CMD_1 ); then # $CMD_2 # fi # # For both commands, the output is visible to the user but # is not analyzed. # # cmds ... List of RUN command words that need fakeroot injection. # Each item in the list is matched against each # whitespace-separated word in the RUN instructions. For # example, suppose that each is the list "dnf", "rpm", and # "yum"; consider the following RUN instructions: # # RUN ['dnf', 'install', 'foo'] # RUN dnf install foo # # These are fairly standard forms. "dnf" matches both, the # first on the first element in the list and the second # after breaking the shell command on whitespace. # # RUN true&&dnf install foo # # This third example does *not* match (false negative) # because breaking on whitespace yields "true&&dnf", # "install", and "foo"; none of these words are "dnf". # # RUN echo dnf install foo # # This final example *does* match (false positive) becaus # the second word *is* "dnf"; the algorithm isn't smart # enough to realize that it's an argument to "echo". # # The last two illustrate that the algorithm uses simple # whitespace delimiters, not even a partial shell parser. # # each ... List of words to prepend to RUN instructions that match # cmd_each. For example, if each is ["fr", "-z"], then these # instructions: # # RUN ['dnf', 'install', 'foo'] # RUN dnf install foo # # become: # # RUN ['fr', '-z', 'dnf', 'install', 'foo'] # RUN ['fr', '-z', '/bin/sh', '-c', 'dnf install foo'] # # (Note that "/bin/sh -c" is how shell-form RUN instructions # are executed regardless of --force.) # # 2. The first match wins. However, because dictionary ordering can't be # relied on yet, since it was introduced in Python 3.6 [1], matches # should be disjoint. # # [1]: https://docs.python.org/3/library/stdtypes.html#dict # # 3. A matching configuration is considered applicable if any of the # fakeroot-able commands are present. We do nothing if the config isn't # applicable. We do not look for other matches. # # 4. There are three implementations of fakeroot that I could find: # fakeroot, fakeroot-ng, and pseudo. As of 2020-09-02: # # * fakeroot-ng and pseudo use a daemon process, while fakeroot does # not. pseudo also uses a persistent database. # # * fakeroot-ng does not support ARM; pseudo supports many architectures # including ARM. # # * “Old” fakeroot seems to have had version 1.24 on 2019-09-07 with # the most recent commit 2020-08-12. # # * fakeroot-ng is quite old: last upstream release was 0.18 in 2013, # and its source code is on Sourceforge. # # * pseudo is aslo a bit old: last upstream version was 1.9.0 on # 2018-01-20, and the last Git commit was 2019-08-02. # # Generally, we select the first one that seems to work in the order # fakeroot, pseudo, fakeroot-ng. # # 5. Why grep a specified file vs. simpler alternatives? # # * Look at image name: Misses derived images, large number of names # seems a maintenance headache, :latest changes. # # * grep the same file for each distro: No standardized file for this. # # * Ask lsb_release(1): Not always installed, requires executing ch-run. # CentOS/RHEL notes: # # 1. CentOS seems to have only fakeroot, which is in EPEL, not the standard # repos. # # 2. Unlike on CentOS, RHEL doesn't have the epel-release rpm in the # standard repos, install via rpm for both to be consistent. # # 3. Enabling EPEL can have undesirable side effects, e.g. different # version of things in the base repo that breaks other things. Thus, # when we are done with EPEL, we uninstall it. Existing EPEL # installations are left alone. # # 4. "yum repolist" has a lot of side effects, e.g. locking the RPM # database and asking configured repos for something or other. "rhel7": { "name": "CentOS/RHEL 7", "match": ("/etc/redhat-release", r"(Red Hat|CentOS).*release 7\."), "init": [ ("command -v fakeroot > /dev/null", "set -ex; " "if ! grep -Eq '\[epel\]' /etc/yum.conf /etc/yum.repos.d/*; then " "yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm; " "yum install -y fakeroot; " "yum remove -y epel-release; " "else " "yum install -y fakeroot; " "fi; ") ], "cmds": ["dnf", "rpm", "yum"], "each": ["fakeroot"] }, "rhel8": { "name": "CentOS/RHEL 8+", "match": ("/etc/redhat-release", r"(Red Hat|CentOS).*release (?![0-7]\.)"), "init": [ ("command -v fakeroot > /dev/null", "set -ex; " "if ! grep -Eq '\[epel\]' /etc/yum.conf /etc/yum.repos.d/*; then " "dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm; " "dnf install -y fakeroot; " "dnf remove -y epel-release; " "else " "dnf install -y fakeroot; " "fi; ") ], "cmds": ["dnf", "rpm", "yum"], "each": ["fakeroot"] }, # Debian/Ubuntu notes: # # 1. In recent Debian-based distributions apt(8) runs as an unprivileged # user by default. This makes *all* apt operations fail in an # unprivileged container because it can't drop privileges. There are # multiple ways to turn the “sandbox” off. AFAICT, none are documented, # but this one at least appears in Google searches a lot. # # apt also doesn't drop privileges if there is no user _apt; in my # testing, sometimes this user is present and sometimes not, for reasons # I don't understand. If not present, you get this warning: # # W: No sandbox user '_apt' on the system, can not drop privileges # # Configuring apt not to use the sandbox seemed cleaner than deleting # this user and eliminates the warning. # # 2. If we wanted to test if a fakeroot package was installed, we could say: # # dpkg-query -Wf '${Package}\n' \ # | egrep '^(fakeroot|fakeroot-ng|pseudo)$' "debderiv": { "name": "Debian 9+ or Ubuntu 14+", "match": ("/etc/os-release", r"Debian GNU/Linux (?![0-8] )|Ubuntu (?![0-9]\.|1[0-3]\.)"), "init": [ ("apt-config dump | fgrep -q 'APT::Sandbox::User \"root\"'" " || ! fgrep -q _apt /etc/passwd", "echo 'APT::Sandbox::User \"root\";'" " > /etc/apt/apt.conf.d/no-sandbox"), ("command -v fakeroot > /dev/null", # update b/c base image ships with no package indexes "apt-get update && apt-get install -y pseudo") ], "cmds": ["apt", "apt-get", "dpkg"], "each": ["fakeroot"] }, # Fedora notes: # # 1. The minimum supported version was chosen somewhat arbitrarily based on the # release versions available for testing (what was on Docker Hub). # # 2. The fakeroot package is in the base repository set so enabling EPEL is # not required. "fedora": { "name": "Fedora 24+,", "match": ("/etc/fedora-release", r"release (?!1?[0-9] |2[0-3] )"), "init": [ ("command -v fakeroot > /dev/null", "dnf install -y fakeroot") ], "cmds": ["dnf", "rpm", "yum"], "each": ["fakeroot"] }, } ## Functions ### def detect(image, force, no_force_detect): f = None if (no_force_detect): ch.VERBOSE("not detecting --force config, per --no-force-detect") else: # Try to find a real fakeroot config. for (tag, cfg) in DEFAULT_CONFIGS.items(): try: f = Fakeroot(image, tag, cfg, force) break except Config_Aint_Matched: pass # Report findings. if (f is None): msg = "--force not available (no suitable config found)" if (force): ch.WARNING(msg) else: ch.VERBOSE(msg) else: if (force): adj = "will use" else: adj = "available" ch.INFO("%s --force: %s: %s" % (adj, f.tag, f.name)) # Wrap up if (f is None): f = Fakeroot_Noop() return f ## Classes ## class Config_Aint_Matched(Exception): pass class Fakeroot_Noop(): __slots__ = ("init_done", "inject_ct") def __init__(self): self.init_done = False self.inject_ct = 0 def init_maybe(self, img_path, args, env): pass def inject_run(self, args): return args class Fakeroot(): __slots__ = ("tag", "name", "init", "cmds", "each", "init_done", "inject_ct", "inject_p") def __init__(self, image_path, tag, cfg, inject_p): ch.VERBOSE("workarounds: testing config: %s" % tag) file_path = "%s/%s" % (image_path, cfg["match"][0]) if (not ( os.path.isfile(file_path) and ch.grep_p(file_path, cfg["match"][1]))): raise Config_Aint_Matched(tag) self.tag = tag self.inject_ct = 0 self.inject_p = inject_p for i in ("name", "init", "cmds", "each"): setattr(self, i, cfg[i]) self.init_done = False def init_maybe(self, img_path, args, env): if (not self.needs_inject(args)): ch.VERBOSE("workarounds: init: instruction doesn't need injection") return if (self.init_done): ch.VERBOSE("workarounds: init: already initialized") return for (i, (test_cmd, init_cmd)) in enumerate(self.init, 1): ch.INFO("workarounds: init step %s: checking: $ %s" % (i, test_cmd)) args = ["/bin/sh", "-c", test_cmd] exit_code = ch.ch_run_modify(img_path, args, env, fail_ok=True) if (exit_code == 0): ch.INFO("workarounds: init step %d: exit code %d, step not needed" % (i, exit_code)) else: if (not self.inject_p): ch.INFO("workarounds: init step %d: no --force, skipping" % i) else: ch.INFO("workarounds: init step %d: $ %s" % (i, init_cmd)) args = ["/bin/sh", "-c", init_cmd] ch.ch_run_modify(img_path, args, env) self.init_done = True def inject_run(self, args): if (not self.needs_inject(args)): ch.VERBOSE("workarounds: RUN: instruction doesn't need injection") return args assert (self.init_done) if (not self.inject_p): ch.INFO("workarounds: RUN: available here with --force") return args args = self.each + args self.inject_ct += 1 ch.INFO("workarounds: RUN: new command: %s" % args) return args def needs_inject(self, args): """Return True if the command in args seems to need fakeroot injection, False otherwise.""" for word in self.cmds: for arg in args: if (word in arg.split()): # arg words separate by whitespace return True return False charliecloud-0.26/lib/misc.py000066400000000000000000000061731417231051300161560ustar00rootroot00000000000000# Subcommands not exciting enough for their own module. import argparse import inspect import os import os.path import sys import charliecloud as ch import pull import version ## argparse "actions" ## class Action_Exit(argparse.Action): def __init__(self, *args, **kwargs): super().__init__(nargs=0, *args, **kwargs) class Dependencies(Action_Exit): def __call__(self, ap, cli, *args, **kwargs): # ch.init() not yet called, so must get verbosity from arguments. ch.dependencies_check() if (cli.verbose >= 1): print("lark path: %s" % os.path.normpath(inspect.getfile(ch.lark))) sys.exit(0) class Version(Action_Exit): def __call__(self, *args, **kwargs): print(version.VERSION) sys.exit(0) ## Plain functions ## # Argument: command line arguments Namespace. Do not need to call sys.exit() # because caller manages that. def delete(cli): img_ref = ch.Image_Ref(cli.image_ref) img = ch.Image(img_ref) img.unpack_delete() def import_(cli): if (not os.path.exists(cli.path)): ch.FATAL("can't copy: not found: %s" % cli.path) dst = ch.Image(ch.Image_Ref(cli.image_ref)) ch.INFO("importing: %s" % cli.path) ch.INFO("destination: %s" % dst) if (os.path.isdir(cli.path)): dst.copy_unpacked(cli.path) else: # tarball, hopefully dst.unpack([cli.path]) # initialize metadata if needed dst.metadata_load() dst.metadata_save() ch.done_notify() def list_(cli): imgdir = ch.storage.unpack_base if (cli.image_ref is None): # list all images if (not os.path.isdir(ch.storage.root)): ch.FATAL("does not exist: %s" % ch.storage.root) if (not ch.storage.valid_p): ch.FATAL("not a storage directory: %s" % ch.storage.root) imgs = ch.ossafe(os.listdir, "can't list directory: %s" % ch.storage.root, imgdir) for img in sorted(imgs): print(ch.Image_Ref(img)) else: # list specified image img = ch.Image(ch.Image_Ref(cli.image_ref)) print("details of image: %s" % img.ref) # present locally? if (not img.unpack_exist_p): stored = "no" else: img.metadata_load() stored = "yes (%s)" % img.metadata["arch"] print("in local storage: %s" % stored) # present remotely? print("full remote ref: %s" % img.ref.canonical) pullet = pull.Image_Puller(img, not cli.no_cache) try: pullet.fatman_load() remote = "yes" arch_aware = "yes" arch_avail = " ".join(sorted(pullet.architectures.keys())) except ch.Not_In_Registry_Error: remote = "no" arch_aware = "n/a" arch_avail = "n/a" except ch.No_Fatman_Error: remote = "yes" arch_aware = "no" arch_avail = "unknown" pullet.done() print("available remotely: %s" % remote) print("remote arch-aware: %s" % arch_aware) print("host architecture: %s" % ch.arch_host) print("archs available: %s" % arch_avail) def python_path(cli): print(sys.executable) def reset(cli): ch.storage.reset() def storage_path(cli): print(ch.storage.root) charliecloud-0.26/lib/pull.py000066400000000000000000000267371417231051300162070ustar00rootroot00000000000000import json import os.path import sys import charliecloud as ch ## Constants ## # Internal library of manifests, e.g. for "FROM scratch" (issue #1013). manifests_internal = { "scratch": { # magic empty image "schemaVersion": 2, "config": { "digest": None }, "layers": [] } } ## Main ## def main(cli): # Set things up. ref = ch.Image_Ref(cli.image_ref) if (cli.parse_only): print(ref.as_verbose_str) sys.exit(0) image = ch.Image(ref, cli.image_dir) ch.INFO("pulling image: %s" % ref) ch.INFO("requesting arch: %s" % ch.arch) if (cli.image_dir is not None): ch.INFO("destination: %s" % image.unpack_path) else: ch.VERBOSE("destination: %s" % image.unpack_path) pullet = Image_Puller(image, not cli.no_cache) pullet.pull_to_unpacked(cli.last_layer) pullet.done() ch.done_notify() ## Classes ## class Image_Puller: __slots__ = ("architectures", # key: architecture, value: manifest digest "config_hash", "image", "layer_hashes", "registry", "use_cache") def __init__(self, image, use_cache): self.architectures = None self.image = image self.registry = ch.Registry_HTTP(image.ref) self.config_hash = None self.layer_hashes = None self.use_cache = use_cache @property def config_path(self): if (self.config_hash is None): return None else: return ch.storage.download_cache // (self.config_hash + ".json") @property def fatman_path(self): return ch.storage.fatman_for_download(self.image.ref) @property def manifest_path(self): if (str(self.image.ref) in manifests_internal): return "[internal library]" else: if (ch.arch == "yolo" or self.architectures is None): digest = None else: digest = self.architectures[ch.arch] return ch.storage.manifest_for_download(self.image.ref, digest) def done(self): self.registry.close() def download(self): "Download image metadata and layers and put them in the download cache." # Spec: https://docs.docker.com/registry/spec/manifest-v2-2/ ch.VERBOSE("downloading image: %s" % self.image) try: # fat manifest if (ch.arch != "yolo"): try: self.fatman_load() if (ch.arch not in self.architectures): ch.FATAL("requested arch unavailable: %s" % ch.arch, ("available: %s" % " ".join(sorted(self.architectures.keys())))) except ch.No_Fatman_Error: if (ch.arch == "amd64"): # We're guessing that enough arch-unaware images are amd64 to # barge ahead if requested architecture is amd64. ch.arch = "yolo" ch.WARNING("image is architecture-unaware") ch.WARNING("requested arch is amd64; using --arch=yolo") else: ch.FATAL("image is architecture-unaware", "consider --arch=yolo") # manifest self.manifest_load() except ch.Not_In_Registry_Error: ch.FATAL("not in registry: %s" % self.registry.ref) # config ch.VERBOSE("config path: %s" % self.config_path) if (self.config_path is not None): if (os.path.exists(self.config_path) and self.use_cache): ch.INFO("config: using existing file") else: self.registry.blob_to_file(self.config_hash, self.config_path, "config: downloading") # layers for (i, lh) in enumerate(self.layer_hashes, start=1): path = self.layer_path(lh) ch.VERBOSE("layer path: %s" % path) msg = "layer %d/%d: %s" % (i, len(self.layer_hashes), lh[:7]) if (os.path.exists(path) and self.use_cache): ch.INFO("%s: using existing file" % msg) else: self.registry.blob_to_file(lh, path, "%s: downloading" % msg) # done self.registry.close() def error_decode(self, data): """Decode first error message in registry error blob and return a tuple (code, message).""" try: code = data["errors"][0]["code"] msg = data["errors"][0]["message"] except (IndexError, KeyError): ch.FATAL("malformed error data", "yes, this is ironic") return (code, msg) def fatman_load(self): """Load the fat manifest JSON file, downloading it first if needed. If the image has a fat manifest, populate self.architectures; this may be an empty dictionary if no valid architectures were found. Raises: * Not_In_Registry_Error if the image does not exist. * No_Fatman_Error if the image exists but has no fat manifest, i.e., is architecture-unaware. In this case self.architectures is set to None.""" self.architectures = None if (str(self.image.ref) in manifests_internal): # cheat; internal manifest library matches every architecture self.architectures = { ch.arch_host: None } return if (os.path.exists(self.fatman_path) and self.use_cache): ch.INFO("manifest list: using existing file") else: # raises Not_In_Registry_Error if needed self.registry.fatman_to_file(self.fatman_path, "manifest list: downloading") fm = ch.json_from_file(self.fatman_path, "fat manifest") if ("layers" in fm or "fsLayers" in fm): # FIXME (issue #1101): If it's a v2 manifest we could use it instead # of re-requesting later. Maybe we could here move/copy it over to # the skinny manifest path. raise ch.No_Fatman_Error() if ("errors" in fm): # fm is an error blob. (code, msg) = self.error_decode(fm) if (code == "MANIFEST_UNKNOWN"): ch.INFO("manifest list: no such image") return else: ch.FATAL("manifest list: error: %s" % msg) self.architectures = dict() if ("manifests" not in fm): ch.FATAL("manifest list has no key 'manifests'") for m in fm["manifests"]: try: if (m["platform"]["os"] != "linux"): continue arch = m["platform"]["architecture"] if ("variant" in m["platform"]): arch = "%s/%s" % (arch, m["platform"]["variant"]) digest = m["digest"] except KeyError: ch.FATAL("manifest lists missing a required key") if (arch in self.architectures): ch.FATAL("manifest list: duplicate architecture: %s" % arch) self.architectures[arch] = ch.digest_trim(digest) if (len(self.architectures) == 0): ch.WARNING("no valid architectures found") def layer_path(self, layer_hash): "Return the path to tarball for layer layer_hash." return ch.storage.download_cache // (layer_hash + ".tar.gz") def manifest_load(self): """Download the manifest file if needed, parse it, and set self.config_hash and self.layer_hashes. If the image does not exist, exit with error.""" def bad_key(key): ch.FATAL("manifest: %s: no key: %s" % (self.manifest_path, key)) self.config_hash = None self.layer_hashes = None # obtain the manifest try: # internal manifest library, e.g. for "FROM scratch" manifest = manifests_internal[str(self.image.ref)] ch.INFO("manifest: using internal library") except KeyError: # download the file if needed, then parse it if (ch.arch == "yolo" or self.architectures is None): digest = None else: digest = self.architectures[ch.arch] ch.DEBUG("manifest digest: %s" % digest) if (os.path.exists(self.manifest_path) and self.use_cache): ch.INFO("manifest: using existing file") else: self.registry.manifest_to_file(self.manifest_path, "manifest: downloading", digest=digest) manifest = ch.json_from_file(self.manifest_path, "manifest") # validate schema version try: version = manifest['schemaVersion'] except KeyError: bad_key("schemaVersion") if (version not in {1,2}): ch.FATAL("unsupported manifest schema version: %s" % repr(version)) # load config hash # # FIXME: Manifest version 1 does not list a config blob. It does have # things (plural) that look like a config at history/v1Compatibility as # an embedded JSON string :P but I haven't dug into it. if (version == 1): ch.VERBOSE("no config; manifest schema version 1") self.config_hash = None else: # version == 2 try: self.config_hash = manifest["config"]["digest"] if (self.config_hash is not None): self.config_hash = ch.digest_trim(self.config_hash) except KeyError: bad_key("config/digest") # load layer hashes if (version == 1): key1 = "fsLayers" key2 = "blobSum" else: # version == 2 key1 = "layers" key2 = "digest" if (key1 not in manifest): bad_key(key1) self.layer_hashes = list() for i in manifest[key1]: if (key2 not in i): bad_key("%s/%s" % (key1, key2)) self.layer_hashes.append(ch.digest_trim(i[key2])) if (version == 1): self.layer_hashes.reverse() def manifest_digest_by_arch(self): "Return skinny manifest digest for target architecture." fatman = ch.json_from_file(self.fat_manifest_path) arch = None digest = None variant = None try: arch, variant = ch.arch.split("/", maxsplit=1) except ValueError: arch = ch.arch if ("manifests" not in fatman): ch.FATAL("manifest list has no manifests") for k in fatman["manifests"]: if (k.get('platform').get('os') != 'linux'): continue elif ( k.get('platform').get('architecture') == arch and ( variant is None or k.get('platform').get('variant') == variant)): digest = k.get('digest') if (digest is None): ch.FATAL("arch not found for image: %s" % arch, 'try "ch-image list IMAGE_REF"') return digest def pull_to_unpacked(self, last_layer=None): "Pull and flatten image." self.download() layer_paths = [self.layer_path(h) for h in self.layer_hashes] self.image.unpack(layer_paths, last_layer) self.image.metadata_replace(self.config_path) # Check architecture we got. This is limited because image metadata does # not store the variant. Move fast and break things, I guess. arch_image = self.image.metadata["arch"] or "unknown" arch_short = ch.arch.split("/")[0] arch_host_short = ch.arch_host.split("/")[0] if (arch_image != "unknown" and arch_image != arch_host_short): host_mismatch = " (may not match host %s)" % ch.arch_host else: host_mismatch = "" ch.INFO("image arch: %s%s" % (arch_image, host_mismatch)) if (ch.arch != "yolo" and arch_short != arch_image): ch.WARNING("image architecture does not match requested: %s ≠ %s" % (ch.arch, arch_image)) charliecloud-0.26/lib/push.py000066400000000000000000000122011417231051300161670ustar00rootroot00000000000000import json import os.path import charliecloud as ch import version ## Main ## def main(cli): src_ref = ch.Image_Ref(cli.source_ref) ch.INFO("pushing image: %s" % src_ref) image = ch.Image(src_ref, cli.image) # FIXME: validate it's an image using Megan's new function (PR #908) if (not os.path.isdir(image.unpack_path)): if (cli.image is not None): ch.FATAL("can't push: %s does not appear to be an image" % cli.image) else: ch.FATAL("can't push: no image %s" % src_ref) if (cli.image is not None): ch.INFO("image path: %s" % image.unpack_path) else: ch.VERBOSE("image path: %s" % image.unpack_path) if (cli.dest_ref is not None): dst_ref = ch.Image_Ref(cli.dest_ref) ch.INFO("destination: %s" % dst_ref) else: dst_ref = ch.Image_Ref(cli.source_ref) up = Image_Pusher(image, dst_ref) up.push() ch.done_notify() ## Classes ## class Image_Pusher: # Note; We use functions to create the blank config and manifest to to # avoid copy/deepcopy complexity from just copying a default dict. __slots__ = ("config", # sequence of bytes "dst_ref", # destination of upload "image", # Image object we are uploading "layers", # list of (digest, .tar.gz path), lowest first "manifest") # sequence of bytes def __init__(self, image, dst_ref): self.config = None self.dst_ref = dst_ref self.image = image self.layers = None self.manifest = None @classmethod def config_new(class_): "Return an empty config, ready to be filled in." # FIXME: URL of relevant docs? # FIXME: tidy blank/empty fields? return { "architecture": ch.arch_host_get(), "charliecloud_version": version.VERSION, "comment": "pushed with Charliecloud", "config": {}, "container_config": {}, "created": ch.now_utc_iso8601(), "history": [], "os": "linux", "rootfs": { "diff_ids": [], "type": "layers" }, "weirdal": "yankovic" } @classmethod def manifest_new(class_): "Return an empty manifest, ready to be filled in." return { "schemaVersion": 2, "mediaType": ch.TYPES_MANIFEST["docker2"], "config": { "mediaType": ch.TYPE_CONFIG, "size": None, "digest": None }, "layers": [], "weirdal": "yankovic" } def cleanup(self): ch.INFO("cleaning up") # Delete the tarballs since we can't yet cache them. for (_, tar_c) in self.layers: ch.VERBOSE("deleting tarball: %s" % tar_c) ch.unlink(tar_c) def prepare(self): """Prepare self.image for pushing to self.dst_ref. Return tuple: (list of gzipped layer tarball paths, config as a sequence of bytes, manifest as a sequence of bytes). There is not currently any support for re-using any previously prepared files already in the upload cache, because we don't yet have a way to know if these have changed until they are already build.""" tars_uc = self.image.tarballs_write(ch.storage.upload_cache) tars_c = list() config = self.config_new() manifest = self.manifest_new() # Prepare layers. for (i, tar_uc) in enumerate(tars_uc, start=1): ch.INFO("layer %d/%d: preparing" % (i, len(tars_uc))) path_uc = ch.storage.upload_cache // tar_uc hash_uc = ch.file_hash(path_uc) config["rootfs"]["diff_ids"].append("sha256:" + hash_uc) #size_uc = ch.file_size(path_uc) path_c = ch.file_gzip(path_uc, ["-9", "--no-name"]) tar_c = path_c.name hash_c = ch.file_hash(path_c) size_c = ch.file_size(path_c) tars_c.append((hash_c, path_c)) manifest["layers"].append({ "mediaType": ch.TYPE_LAYER, "size": size_c, "digest": "sha256:" + hash_c }) # Prepare metadata. ch.INFO("preparing metadata") config_bytes = json.dumps(config, indent=2).encode("UTF-8") config_hash = ch.bytes_hash(config_bytes) manifest["config"]["size"] = len(config_bytes) manifest["config"]["digest"] = "sha256:" + config_hash ch.DEBUG("config: %s\n%s" % (config_hash, config_bytes.decode("UTF-8"))) manifest_bytes = json.dumps(manifest, indent=2).encode("UTF-8") ch.DEBUG("manifest:\n%s" % manifest_bytes.decode("UTF-8")) # Store for the next steps. self.layers = tars_c self.config = config_bytes self.manifest = manifest_bytes def push(self): self.prepare() self.upload() self.cleanup() def upload(self): ch.INFO("starting upload") ul = ch.Registry_HTTP(self.dst_ref) for (i, (digest, tarball)) in enumerate(self.layers, start=1): ul.layer_from_file(digest, tarball, "layer %d/%d: " % (i, len(self.layers))) ul.config_upload(self.config) ul.manifest_upload(self.manifest) ul.close() charliecloud-0.26/misc/000077500000000000000000000000001417231051300150275ustar00rootroot00000000000000charliecloud-0.26/misc/Makefile.am000066400000000000000000000000521417231051300170600ustar00rootroot00000000000000EXTRA_DIST = docker-clean.sh grep version charliecloud-0.26/misc/docker-clean.sh000077500000000000000000000020751417231051300177210ustar00rootroot00000000000000#!/bin/bash # FIXME: Give up after a certain number of iterations. set -e # Remove all containers. while true; do cmd='sudo docker ps -aq' cs_ct=$($cmd | wc -l) echo "found $cs_ct containers" [[ 0 -eq $cs_ct ]] && break # shellcheck disable=SC2046 sudo docker rm $($cmd) done # Untag all images. This fails with: # # Error response from daemon: invalid reference format # # sometimes. I don't know why. if [[ $1 != --all ]]; then while true; do cmd='sudo docker images --filter dangling=false --format {{.Repository}}:{{.Tag}}' tag_ct=$($cmd | wc -l) echo "found $tag_ct tagged images" [[ 0 -eq $tag_ct ]] && break # shellcheck disable=SC2046 sudo docker rmi -f --no-prune $($cmd) done fi # If --all specified, remove all images. if [[ $1 = --all ]]; then while true; do cmd='sudo docker images -q' img_ct=$($cmd | wc -l) echo "found $img_ct images" [[ 0 -eq $img_ct ]] && break # shellcheck disable=SC2046 sudo docker rmi -f $($cmd) done fi charliecloud-0.26/misc/grep000077500000000000000000000021001417231051300157030ustar00rootroot00000000000000#!/bin/bash # The purpose of this script is to make it more convenient to search the # project source code. It's a wrapper for grep(1) to exclude directories that # yield a ton of false positives, in particular the documentation's JavaScript # index that is one line thousands of characters long. It's tedious to derive # the exclusions manually each time. # # grep(1) has an --exclude-dir option, but I can't get it to work with rooted # directories, e.g., doc-src/_build vs _build anywhere in the tree, so we use # find(1) instead. You may hear complaints about how find is too slow for # this, but the project is small enough we don't care. set -e cd "$(dirname "$0")"/.. pwd find . \( -path ./.git \ -o -path ./autom4te.cache \ -o -path ./doc/doctrees \ -o -path ./doc/html \ -o -path ./doc/man \ -o -name '*.pyc' \ -o -name configure \ -o -name 'config.*' \ -o -name Makefile \ -o -name Makefile.in \ \) -prune \ -o -type f \ -exec grep --color=auto -Hn "$@" {} \; charliecloud-0.26/misc/loc000077500000000000000000000235511417231051300155400ustar00rootroot00000000000000#!/bin/bash # Count lines of code in the project in an intelligent way. # # Notes/gotchas: # # 1. Use temporary files instead of variables to allow easier examination # after the script is run, and also because newline handling in shell # variables is rather hairy & error prone (e.g., command substitution # strips trailing newlines but echo adds a trailing newline, hiding it). # # 2. To add/update language definitions, cloc only has "merge, but mine are # lower priority" (--read-lang-def) and "use mine only, with no default # definitions" (--force-lang-def). We need "merge my definitions with # default, but with mine higher priority", so we emulate it with # --force-lang-def, providing all the language definitions we need, some # of which are altered. # # 3. cloc also does not have a way to define file prefixes (say, # Dockerfile.*). So we list all the Dockerfiles we have manually in the # language definitions. # # 4. To debug "cloc ignoring wrong number of files": just look in # /tmp/loc.ignored.out. However, you need cloc 1.84 or higher because of # cloc issue #401 [1]; otherwise the file is empty. # # [1]: https://github.com/AlDanial/cloc/issues/401 set -e -o pipefail export LC_ALL=C cd "$(dirname "$0")"/.. countem () { msg=$1 infile=$2 outfile=${2}.out catfile=${2}.cat ignore_ct=${3:-0} nolist=$4 echo echo "*** $msg ***" #cat "$infile" cloc --force-lang-def=/dev/stdin \ --categorized="$catfile" \ --list-file="$infile" \ --ignored=/tmp/loc.ignored.out \ <<'EOF' > "$outfile" Bash filter remove_matches ^\s*# filter remove_inline #.*$ extension bash extension bats extension bats.in script_exe bash 3rd_gen_scale 3.81 end_of_line_continuation \\$ C filter rm_comments_in_strings " /* */ filter rm_comments_in_strings " // filter call_regexp_common C++ extension c extension h 3rd_gen_scale 0.77 end_of_line_continuation \\$ conf-like filter remove_matches ^\s*# filter remove_inline #.*$ extension rpmlintrc extension spec filename .dockerignore filename .gitattributes filename .gitignore 3rd_gen_scale 1.00 end_of_line_continuation \\$ Dockerfile filter remove_matches ^\s*# filter remove_inline #.*$ filename Dockerfile filename Dockerfile.00_tiny filename Dockerfile.argenv filename Dockerfile.centos7 filename Dockerfile.centos8 filename Dockerfile.debian9 filename Dockerfile.file-quirks filename Dockerfile.metadata filename Dockerfile.mpich filename Dockerfile.nvidia filename Dockerfile.ocimanifest filename Dockerfile.openmpi 3rd_gen_scale 2.00 end_of_line_continuation \\$ m4 filter remove_matches ^dnl\s filter remove_matches ^\s*# filter remove_inline #.*$ extension ac extension m4 3rd_gen_scale 1.00 Make filter remove_matches ^\s*# filter remove_inline #.*$ extension Gnumakefile extension Makefile extension am extension gnumakefile extension makefile extension mk filename Gnumakefile filename Makefile filename gnumakefile filename makefile script_exe make 3rd_gen_scale 2.50 end_of_line_continuation \\$ patch filter remove_matches ^# filter remove_matches ^\-\-\- filter remove_matches ^\+\+\+ filter remove_matches ^\s filter remove_matches ^@@ extension diff extension patch 3rd_gen_scale 1.00 plain text filter remove_matches TEXT_HAS_NO_COMMENTS_BUT_A_FILTER_IS_REQUIRED extension txt filename PERUSEME filename README filename VERSION 3rd_gen_scale 1.00 POSIX sh filter remove_matches ^\s*# filter remove_inline #.*$ extension sh script_exe sh 3rd_gen_scale 3.81 end_of_line_continuation \\$ Python filter remove_matches ^\s*# filter docstring_to_C filter call_regexp_common C filter remove_inline #.*$ extension py extension py.in script_exe python script_exe python2.6 script_exe python2.7 script_exe python3 script_exe python3.3 script_exe python3.4 script_exe python3.5 3rd_gen_scale 4.20 end_of_line_continuation \\$ ReST filter remove_between_regex ^\.\. ^[^\s\.] extension rst 3rd_gen_scale 1.50 Ruby filter remove_matches ^\s*# filter remove_below_above ^=begin ^=end filter remove_inline #.*$ extension rake extension rb filename Rakefile filename rakefile filename Vagrantfile script_exe ruby 3rd_gen_scale 4.20 end_of_line_continuation \\$ VTK filter remove_matches ^\s*# extension vtk 3rd_gen_scale 1.00 end_of_line_continuation \\$ YAML filter remove_matches ^\s*# filter remove_inline #.*$ extension yaml extension yml 3rd_gen_scale 0.90 EOF if [[ -z $nolist ]]; then cat "$catfile" fi cat "$outfile" if ! grep -Eq "\b${ignore_ct} files ignored" "$outfile"; then grep -F '(unknown)' "$catfile" || true echo echo "🚨🚨🚨 cloc ignoring wrong number of files; expected ${ignore_ct} 🚨🚨🚨" exit 1 fi } if [[ -e ./Makefile.in ]]; then echo '🚨🚨🚨 Makefile.in seems to exist 🚨🚨🚨' echo 'did you "./autogen.sh --clean --rm-lark"?' exit 1 fi min_cloc=1.81 if ! command -v cloc > /dev/null \ || [[ $( printf '%s\n%s\n' "$min_cloc" "$(cloc --version)" \ | sort -V | head -1 ) != "$min_cloc" ]]; then echo "🚨🚨🚨 no cloc version ≥ $min_cloc 🚨🚨🚨" echo 'did you try a better operating system?' exit 1 fi # program (Charliecloud itself) find ./bin ./lib -type f -a \( \ -name '*.c' \ -o -name '*.h' \ -o -name '*.py' \ -o -name '*.py.in' \ -o -name '*.sh' \ -o -path './bin/ch-*' \) | grep -Fv ./bin/ch-test | sort > /tmp/loc.program countem "PROGRAM" /tmp/loc.program # test suite find ./.github ./examples ./test -type f -a \( \ -name '*.bats' \ -o -name '*.bats.in' \ -o -name '*.c' \ -o -name '*.patch' \ -o -name '*.py' \ -o -name '*.py.in' \ -o -name '*.sh' \ -o -name '*.vtk' \ -o -name '*.yml' \ -o -name 'Build.*' \ -o -name 'Dockerfile.*' \ -o -name .dockerignore \ -o -name Build \ -o -name Dockerfile \ -o -name Makefile \ -o -name README \ -o -path ./test/fixtures/README \ -o -path ./.github/PERUSEME \ -o -path ./examples/chtest/printns \ -o -path ./test/common.bash \ -o -path ./test/sotest/files_inferrable.txt \ -o -path ./test/whiteout \) | sort > /tmp/loc.test echo ./bin/ch-test >> /tmp/loc.test countem "TEST SUITE & EXAMPLES" /tmp/loc.test # documentation find . -type f -a \( \ -path './doc/*.rst' \ -o -path ./README.rst \ -o -path ./doc/conf.py \ -o -path ./doc/make-deps-overview \ -o -path ./doc/man/README \ -o -path ./doc/publish \) | sort > /tmp/loc.doc countem "DOCUMENTATION" /tmp/loc.doc # build system find . -type f -a \( \ -name Makefile.am \ -o -path ./autogen.sh \ -o -path ./configure.ac \ -o -path ./misc/version \ -o -path ./misc/m4/README \) | sort > /tmp/loc.build countem "BUILD SYSTEM" /tmp/loc.build # packaging code find ./packaging \( -path ./packaging/debian \ -o -name Makefile.am \) -prune -o -type f \ -print | sort > /tmp/loc.packaging countem "PACKAGING" /tmp/loc.packaging # misc find . -type f -a \( \ -path ./.gitattributes \ -o -path ./.gitignore \ -o -path ./VERSION \ -o -path ./misc/docker-clean.sh \ -o -path ./misc/grep \ -o -path ./misc/loc \) | sort > /tmp/loc.misc countem "MISCELLANEOUS" /tmp/loc.misc # ignored - this includes binaries and files we distribute but didn't write find . -type f -a \( \ -name '*.ico' \ -o -name '.gitmodules' \ -o -name '*.png' \ -o -name '*.tar.gz' \ -o -name 'file[A-Z0-9y]*' \ -o -name 's_dir?' \ -o -name 's_file?' \ -o -path './misc/m4/*.m4' \ -o -path './packaging/debian/*' \ -o -name 'file_' \ -o -path ./LICENSE \ -o -path ./test/fixtures/empty-file \) | sort > /tmp/loc.ignored echo echo "*** IGNORED ***" cat /tmp/loc.ignored # everything find . \( -path ./.git \ -o -name __pycache__ \) \ -prune -o -type f -print \ | sort > /tmp/loc.all cat /tmp/loc.{all,ignored} | sort | uniq -u > /tmp/loc.total countem "TOTAL" /tmp/loc.total 0 nolist # test for files we forgot cat /tmp/loc.{program,test,doc,build,packaging,misc,ignored,all} \ | sort | uniq -c | (grep -Ev '^\s*2' || true) > /tmp/loc.extra if [[ -s /tmp/loc.extra ]]; then echo echo 'UNKNOWN FILES' cat /tmp/loc.extra echo echo '🚨🚨🚨 unknown files found 🚨🚨🚨' exit 1 fi # generate loc.rst cmd='s/^SUM:.* ([0-9]+)$/\1/p' cat < doc/loc.rst :orphan: .. Do not edit this file — it’s auto-generated. We pride ourselves on keeping Charliecloud lightweight and simple. The lines of code as of version $(cat VERSION) is: .. list-table:: * - Program itself - $(sed -En "${cmd}" /tmp/loc.program.out) * - Test suite & examples - $(sed -En "${cmd}" /tmp/loc.test.out) * - Documentation - $(sed -En "${cmd}" /tmp/loc.doc.out) * - Build system - $(sed -En "${cmd}" /tmp/loc.build.out) * - Packaging - $(sed -En "${cmd}" /tmp/loc.packaging.out) * - Miscellaneous - $(sed -En "${cmd}" /tmp/loc.misc.out) * - Total - $(sed -En "${cmd}" /tmp/loc.total.out) These include code only, excluding blank lines and comments. They were counted using \`cloc \`_ version $(cloc --version). We typically quote the "Program itself" number when describing the size of Charliecloud. (Please do not quote the size in Priedhorsky and Randles 2017, as that number is very out of date.) EOF echo cat doc/loc.rst charliecloud-0.26/misc/m4/000077500000000000000000000000001417231051300153475ustar00rootroot00000000000000charliecloud-0.26/misc/m4/README000066400000000000000000000030271417231051300162310ustar00rootroot00000000000000This directory contains additional M4 macros for the build system. Currently, these are all from Autoconf Archive. While many distributions have an Autoconf Archive package, which autogen.sh can use if present, it’s a little uncommon to have installed, and we keep running into boxen where we want to run autogen.sh, but Autoconf Archive is not installed and we can’t install it promptly. There is a licensing exception for these macros that lets us redistribute them: “Every single one of those macros can be re-used without imposing any restrictions whatsoever on the licensing of the generated configure script. In particular, it is possible to use all those macros in configure scripts that are meant for non-free software.” [1] To add a macro: 1. Browse the Autoconf Archive documentation [1] and select the macro you want to use. 2. Download the macro file from the "m4" directory of the GitHub source code mirror [2] and put it in this directory. Use a release tag rather than a random Git commit. You can "wget" the URL you get with the "raw" button. (You could also use the master Git repo on Savannah [3], but GitHub is a lot easier to use.) 3. Record the macro and its last updated version in the list below. Macros in use: v2021.02.19 AX_CHECK_COMPILE_FLAG v2021.02.19 AX_COMPARE_VERSION v2021.02.19 AX_PTHREAD v2021.02.19 AX_WITH_PROG [1]: https://www.gnu.org/software/autoconf-archive/ [2]: https://github.com/autoconf-archive/autoconf-archive [3]: http://savannah.gnu.org/projects/autoconf-archive/ charliecloud-0.26/misc/m4/ax_check_compile_flag.m4000066400000000000000000000040701417231051300220600ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_check_compile_flag.html # =========================================================================== # # SYNOPSIS # # AX_CHECK_COMPILE_FLAG(FLAG, [ACTION-SUCCESS], [ACTION-FAILURE], [EXTRA-FLAGS], [INPUT]) # # DESCRIPTION # # Check whether the given FLAG works with the current language's compiler # or gives an error. (Warnings, however, are ignored) # # ACTION-SUCCESS/ACTION-FAILURE are shell commands to execute on # success/failure. # # If EXTRA-FLAGS is defined, it is added to the current language's default # flags (e.g. CFLAGS) when the check is done. The check is thus made with # the flags: "CFLAGS EXTRA-FLAGS FLAG". This can for example be used to # force the compiler to issue an error when a bad flag is given. # # INPUT gives an alternative input source to AC_COMPILE_IFELSE. # # NOTE: Implementation based on AX_CFLAGS_GCC_OPTION. Please keep this # macro in sync with AX_CHECK_{PREPROC,LINK}_FLAG. # # LICENSE # # Copyright (c) 2008 Guido U. Draheim # Copyright (c) 2011 Maarten Bosmans # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 6 AC_DEFUN([AX_CHECK_COMPILE_FLAG], [AC_PREREQ(2.64)dnl for _AC_LANG_PREFIX and AS_VAR_IF AS_VAR_PUSHDEF([CACHEVAR],[ax_cv_check_[]_AC_LANG_ABBREV[]flags_$4_$1])dnl AC_CACHE_CHECK([whether _AC_LANG compiler accepts $1], CACHEVAR, [ ax_check_save_flags=$[]_AC_LANG_PREFIX[]FLAGS _AC_LANG_PREFIX[]FLAGS="$[]_AC_LANG_PREFIX[]FLAGS $4 $1" AC_COMPILE_IFELSE([m4_default([$5],[AC_LANG_PROGRAM()])], [AS_VAR_SET(CACHEVAR,[yes])], [AS_VAR_SET(CACHEVAR,[no])]) _AC_LANG_PREFIX[]FLAGS=$ax_check_save_flags]) AS_VAR_IF(CACHEVAR,yes, [m4_default([$2], :)], [m4_default([$3], :)]) AS_VAR_POPDEF([CACHEVAR])dnl ])dnl AX_CHECK_COMPILE_FLAGS charliecloud-0.26/misc/m4/ax_compare_version.m4000066400000000000000000000146531417231051300215050ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_compare_version.html # =========================================================================== # # SYNOPSIS # # AX_COMPARE_VERSION(VERSION_A, OP, VERSION_B, [ACTION-IF-TRUE], [ACTION-IF-FALSE]) # # DESCRIPTION # # This macro compares two version strings. Due to the various number of # minor-version numbers that can exist, and the fact that string # comparisons are not compatible with numeric comparisons, this is not # necessarily trivial to do in a autoconf script. This macro makes doing # these comparisons easy. # # The six basic comparisons are available, as well as checking equality # limited to a certain number of minor-version levels. # # The operator OP determines what type of comparison to do, and can be one # of: # # eq - equal (test A == B) # ne - not equal (test A != B) # le - less than or equal (test A <= B) # ge - greater than or equal (test A >= B) # lt - less than (test A < B) # gt - greater than (test A > B) # # Additionally, the eq and ne operator can have a number after it to limit # the test to that number of minor versions. # # eq0 - equal up to the length of the shorter version # ne0 - not equal up to the length of the shorter version # eqN - equal up to N sub-version levels # neN - not equal up to N sub-version levels # # When the condition is true, shell commands ACTION-IF-TRUE are run, # otherwise shell commands ACTION-IF-FALSE are run. The environment # variable 'ax_compare_version' is always set to either 'true' or 'false' # as well. # # Examples: # # AX_COMPARE_VERSION([3.15.7],[lt],[3.15.8]) # AX_COMPARE_VERSION([3.15],[lt],[3.15.8]) # # would both be true. # # AX_COMPARE_VERSION([3.15.7],[eq],[3.15.8]) # AX_COMPARE_VERSION([3.15],[gt],[3.15.8]) # # would both be false. # # AX_COMPARE_VERSION([3.15.7],[eq2],[3.15.8]) # # would be true because it is only comparing two minor versions. # # AX_COMPARE_VERSION([3.15.7],[eq0],[3.15]) # # would be true because it is only comparing the lesser number of minor # versions of the two values. # # Note: The characters that separate the version numbers do not matter. An # empty string is the same as version 0. OP is evaluated by autoconf, not # configure, so must be a string, not a variable. # # The author would like to acknowledge Guido Draheim whose advice about # the m4_case and m4_ifvaln functions make this macro only include the # portions necessary to perform the specific comparison specified by the # OP argument in the final configure script. # # LICENSE # # Copyright (c) 2008 Tim Toolan # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 13 dnl ######################################################################### AC_DEFUN([AX_COMPARE_VERSION], [ AC_REQUIRE([AC_PROG_AWK]) # Used to indicate true or false condition ax_compare_version=false # Convert the two version strings to be compared into a format that # allows a simple string comparison. The end result is that a version # string of the form 1.12.5-r617 will be converted to the form # 0001001200050617. In other words, each number is zero padded to four # digits, and non digits are removed. AS_VAR_PUSHDEF([A],[ax_compare_version_A]) A=`echo "$1" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \ -e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \ -e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \ -e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \ -e 's/[[^0-9]]//g'` AS_VAR_PUSHDEF([B],[ax_compare_version_B]) B=`echo "$3" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \ -e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \ -e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \ -e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \ -e 's/[[^0-9]]//g'` dnl # In the case of le, ge, lt, and gt, the strings are sorted as necessary dnl # then the first line is used to determine if the condition is true. dnl # The sed right after the echo is to remove any indented white space. m4_case(m4_tolower($2), [lt],[ ax_compare_version=`echo "x$A x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/false/;s/x${B}/true/;1q"` ], [gt],[ ax_compare_version=`echo "x$A x$B" | sed 's/^ *//' | sort | sed "s/x${A}/false/;s/x${B}/true/;1q"` ], [le],[ ax_compare_version=`echo "x$A x$B" | sed 's/^ *//' | sort | sed "s/x${A}/true/;s/x${B}/false/;1q"` ], [ge],[ ax_compare_version=`echo "x$A x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/true/;s/x${B}/false/;1q"` ],[ dnl Split the operator from the subversion count if present. m4_bmatch(m4_substr($2,2), [0],[ # A count of zero means use the length of the shorter version. # Determine the number of characters in A and B. ax_compare_version_len_A=`echo "$A" | $AWK '{print(length)}'` ax_compare_version_len_B=`echo "$B" | $AWK '{print(length)}'` # Set A to no more than B's length and B to no more than A's length. A=`echo "$A" | sed "s/\(.\{$ax_compare_version_len_B\}\).*/\1/"` B=`echo "$B" | sed "s/\(.\{$ax_compare_version_len_A\}\).*/\1/"` ], [[0-9]+],[ # A count greater than zero means use only that many subversions A=`echo "$A" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"` B=`echo "$B" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"` ], [.+],[ AC_WARNING( [invalid OP numeric parameter: $2]) ],[]) # Pad zeros at end of numbers to make same length. ax_compare_version_tmp_A="$A`echo $B | sed 's/./0/g'`" B="$B`echo $A | sed 's/./0/g'`" A="$ax_compare_version_tmp_A" # Check for equality or inequality as necessary. m4_case(m4_tolower(m4_substr($2,0,2)), [eq],[ test "x$A" = "x$B" && ax_compare_version=true ], [ne],[ test "x$A" != "x$B" && ax_compare_version=true ],[ AC_WARNING([invalid OP parameter: $2]) ]) ]) AS_VAR_POPDEF([A])dnl AS_VAR_POPDEF([B])dnl dnl # Execute ACTION-IF-TRUE / ACTION-IF-FALSE. if test "$ax_compare_version" = "true" ; then m4_ifvaln([$4],[$4],[:])dnl m4_ifvaln([$5],[else $5])dnl fi ]) dnl AX_COMPARE_VERSION charliecloud-0.26/misc/m4/ax_pthread.m4000066400000000000000000000540461417231051300177410ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_pthread.html # =========================================================================== # # SYNOPSIS # # AX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) # # DESCRIPTION # # This macro figures out how to build C programs using POSIX threads. It # sets the PTHREAD_LIBS output variable to the threads library and linker # flags, and the PTHREAD_CFLAGS output variable to any special C compiler # flags that are needed. (The user can also force certain compiler # flags/libs to be tested by setting these environment variables.) # # Also sets PTHREAD_CC and PTHREAD_CXX to any special C compiler that is # needed for multi-threaded programs (defaults to the value of CC # respectively CXX otherwise). (This is necessary on e.g. AIX to use the # special cc_r/CC_r compiler alias.) # # NOTE: You are assumed to not only compile your program with these flags, # but also to link with them as well. For example, you might link with # $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS # $PTHREAD_CXX $CXXFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS # # If you are only building threaded programs, you may wish to use these # variables in your default LIBS, CFLAGS, and CC: # # LIBS="$PTHREAD_LIBS $LIBS" # CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # CXXFLAGS="$CXXFLAGS $PTHREAD_CFLAGS" # CC="$PTHREAD_CC" # CXX="$PTHREAD_CXX" # # In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute constant # has a nonstandard name, this macro defines PTHREAD_CREATE_JOINABLE to # that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX). # # Also HAVE_PTHREAD_PRIO_INHERIT is defined if pthread is found and the # PTHREAD_PRIO_INHERIT symbol is defined when compiling with # PTHREAD_CFLAGS. # # ACTION-IF-FOUND is a list of shell commands to run if a threads library # is found, and ACTION-IF-NOT-FOUND is a list of commands to run it if it # is not found. If ACTION-IF-FOUND is not specified, the default action # will define HAVE_PTHREAD. # # Please let the authors know if this macro fails on any platform, or if # you have any other suggestions or comments. This macro was based on work # by SGJ on autoconf scripts for FFTW (http://www.fftw.org/) (with help # from M. Frigo), as well as ac_pthread and hb_pthread macros posted by # Alejandro Forero Cuervo to the autoconf macro repository. We are also # grateful for the helpful feedback of numerous users. # # Updated for Autoconf 2.68 by Daniel Richard G. # # LICENSE # # Copyright (c) 2008 Steven G. Johnson # Copyright (c) 2011 Daniel Richard G. # Copyright (c) 2019 Marc Stevens # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 30 AU_ALIAS([ACX_PTHREAD], [AX_PTHREAD]) AC_DEFUN([AX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_TARGET]) AC_REQUIRE([AC_PROG_CC]) AC_REQUIRE([AC_PROG_SED]) AC_LANG_PUSH([C]) ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on Tru64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test "x$PTHREAD_CFLAGS$PTHREAD_LIBS" != "x"; then ax_pthread_save_CC="$CC" ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" AS_IF([test "x$PTHREAD_CC" != "x"], [CC="$PTHREAD_CC"]) AS_IF([test "x$PTHREAD_CXX" != "x"], [CXX="$PTHREAD_CXX"]) CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS]) AC_LINK_IFELSE([AC_LANG_CALL([], [pthread_join])], [ax_pthread_ok=yes]) AC_MSG_RESULT([$ax_pthread_ok]) if test "x$ax_pthread_ok" = "xno"; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi CC="$ax_pthread_save_CC" CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items with a "," contain both # C compiler flags (before ",") and linker flags (after ","). Other items # starting with a "-" are C compiler flags, and remaining items are # library names, except for "none" which indicates that we try without # any flags at all, and "pthread-config" which is a program returning # the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads), Tru64 # (Note: HP C rejects this with "bad form for `-t' option") # -pthreads: Solaris/gcc (Note: HP C also rejects) # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads and # -D_REENTRANT too), HP C (must be checked before -lpthread, which # is present but should not be used directly; and before -mthreads, # because the compiler interprets this as "-mt" + "-hreads") # -mthreads: Mingw32/gcc, Lynx/gcc # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case $target_os in freebsd*) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) ax_pthread_flags="-kthread lthread $ax_pthread_flags" ;; hpux*) # From the cc(1) man page: "[-mt] Sets various -D flags to enable # multi-threading and also sets -lpthread." ax_pthread_flags="-mt -pthread pthread $ax_pthread_flags" ;; openedition*) # IBM z/OS requires a feature-test macro to be defined in order to # enable POSIX threads at all, so give the user a hint if this is # not set. (We don't define these ourselves, as they can affect # other portions of the system API in unpredictable ways.) AC_EGREP_CPP([AX_PTHREAD_ZOS_MISSING], [ # if !defined(_OPEN_THREADS) && !defined(_UNIX03_THREADS) AX_PTHREAD_ZOS_MISSING # endif ], [AC_MSG_WARN([IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support.])]) ;; solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (N.B.: The stubs are missing # pthread_cleanup_push, or rather a function called by this macro, # so we could check for that, but who knows whether they'll stub # that too in a future libc.) So we'll check first for the # standard Solaris way of linking pthreads (-mt -lpthread). ax_pthread_flags="-mt,-lpthread pthread $ax_pthread_flags" ;; esac # Are we compiling with Clang? AC_CACHE_CHECK([whether $CC is Clang], [ax_cv_PTHREAD_CLANG], [ax_cv_PTHREAD_CLANG=no # Note that Autoconf sets GCC=yes for Clang as well as GCC if test "x$GCC" = "xyes"; then AC_EGREP_CPP([AX_PTHREAD_CC_IS_CLANG], [/* Note: Clang 2.7 lacks __clang_[a-z]+__ */ # if defined(__clang__) && defined(__llvm__) AX_PTHREAD_CC_IS_CLANG # endif ], [ax_cv_PTHREAD_CLANG=yes]) fi ]) ax_pthread_clang="$ax_cv_PTHREAD_CLANG" # GCC generally uses -pthread, or -pthreads on some platforms (e.g. SPARC) # Note that for GCC and Clang -pthread generally implies -lpthread, # except when -nostdlib is passed. # This is problematic using libtool to build C++ shared libraries with pthread: # [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=25460 # [2] https://bugzilla.redhat.com/show_bug.cgi?id=661333 # [3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=468555 # To solve this, first try -pthread together with -lpthread for GCC AS_IF([test "x$GCC" = "xyes"], [ax_pthread_flags="-pthread,-lpthread -pthread -pthreads $ax_pthread_flags"]) # Clang takes -pthread (never supported any other flag), but we'll try with -lpthread first AS_IF([test "x$ax_pthread_clang" = "xyes"], [ax_pthread_flags="-pthread,-lpthread -pthread"]) # The presence of a feature test macro requesting re-entrant function # definitions is, on some systems, a strong hint that pthreads support is # correctly enabled case $target_os in darwin* | hpux* | linux* | osf* | solaris*) ax_pthread_check_macro="_REENTRANT" ;; aix*) ax_pthread_check_macro="_THREAD_SAFE" ;; *) ax_pthread_check_macro="--" ;; esac AS_IF([test "x$ax_pthread_check_macro" = "x--"], [ax_pthread_check_cond=0], [ax_pthread_check_cond="!defined($ax_pthread_check_macro)"]) if test "x$ax_pthread_ok" = "xno"; then for ax_pthread_try_flag in $ax_pthread_flags; do case $ax_pthread_try_flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; *,*) PTHREAD_CFLAGS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\1/"` PTHREAD_LIBS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\2/"` AC_MSG_CHECKING([whether pthreads work with "$PTHREAD_CFLAGS" and "$PTHREAD_LIBS"]) ;; -*) AC_MSG_CHECKING([whether pthreads work with $ax_pthread_try_flag]) PTHREAD_CFLAGS="$ax_pthread_try_flag" ;; pthread-config) AC_CHECK_PROG([ax_pthread_config], [pthread-config], [yes], [no]) AS_IF([test "x$ax_pthread_config" = "xno"], [continue]) PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$ax_pthread_try_flag]) PTHREAD_LIBS="-l$ax_pthread_try_flag" ;; esac ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. AC_LINK_IFELSE([AC_LANG_PROGRAM([#include # if $ax_pthread_check_cond # error "$ax_pthread_check_macro must be defined" # endif static void *some_global = NULL; static void routine(void *a) { /* To avoid any unused-parameter or unused-but-set-parameter warning. */ some_global = a; } static void *start_routine(void *a) { return a; }], [pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */])], [ax_pthread_ok=yes], []) CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" AC_MSG_RESULT([$ax_pthread_ok]) AS_IF([test "x$ax_pthread_ok" = "xyes"], [break]) PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Clang needs special handling, because older versions handle the -pthread # option in a rather... idiosyncratic way if test "x$ax_pthread_clang" = "xyes"; then # Clang takes -pthread; it has never supported any other flag # (Note 1: This will need to be revisited if a system that Clang # supports has POSIX threads in a separate library. This tends not # to be the way of modern systems, but it's conceivable.) # (Note 2: On some systems, notably Darwin, -pthread is not needed # to get POSIX threads support; the API is always present and # active. We could reasonably leave PTHREAD_CFLAGS empty. But # -pthread does define _REENTRANT, and while the Darwin headers # ignore this macro, third-party headers might not.) # However, older versions of Clang make a point of warning the user # that, in an invocation where only linking and no compilation is # taking place, the -pthread option has no effect ("argument unused # during compilation"). They expect -pthread to be passed in only # when source code is being compiled. # # Problem is, this is at odds with the way Automake and most other # C build frameworks function, which is that the same flags used in # compilation (CFLAGS) are also used in linking. Many systems # supported by AX_PTHREAD require exactly this for POSIX threads # support, and in fact it is often not straightforward to specify a # flag that is used only in the compilation phase and not in # linking. Such a scenario is extremely rare in practice. # # Even though use of the -pthread flag in linking would only print # a warning, this can be a nuisance for well-run software projects # that build with -Werror. So if the active version of Clang has # this misfeature, we search for an option to squash it. AC_CACHE_CHECK([whether Clang needs flag to prevent "argument unused" warning when linking with -pthread], [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG], [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG=unknown # Create an alternate version of $ac_link that compiles and # links in two steps (.c -> .o, .o -> exe) instead of one # (.c -> exe), because the warning occurs only in the second # step ax_pthread_save_ac_link="$ac_link" ax_pthread_sed='s/conftest\.\$ac_ext/conftest.$ac_objext/g' ax_pthread_link_step=`AS_ECHO(["$ac_link"]) | sed "$ax_pthread_sed"` ax_pthread_2step_ac_link="($ac_compile) && (echo ==== >&5) && ($ax_pthread_link_step)" ax_pthread_save_CFLAGS="$CFLAGS" for ax_pthread_try in '' -Qunused-arguments -Wno-unused-command-line-argument unknown; do AS_IF([test "x$ax_pthread_try" = "xunknown"], [break]) CFLAGS="-Werror -Wunknown-warning-option $ax_pthread_try -pthread $ax_pthread_save_CFLAGS" ac_link="$ax_pthread_save_ac_link" AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], [ac_link="$ax_pthread_2step_ac_link" AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], [break]) ]) done ac_link="$ax_pthread_save_ac_link" CFLAGS="$ax_pthread_save_CFLAGS" AS_IF([test "x$ax_pthread_try" = "x"], [ax_pthread_try=no]) ax_cv_PTHREAD_CLANG_NO_WARN_FLAG="$ax_pthread_try" ]) case "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" in no | unknown) ;; *) PTHREAD_CFLAGS="$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG $PTHREAD_CFLAGS" ;; esac fi # $ax_pthread_clang = yes # Various other checks: if test "x$ax_pthread_ok" = "xyes"; then ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. AC_CACHE_CHECK([for joinable pthread attribute], [ax_cv_PTHREAD_JOINABLE_ATTR], [ax_cv_PTHREAD_JOINABLE_ATTR=unknown for ax_pthread_attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_LINK_IFELSE([AC_LANG_PROGRAM([#include ], [int attr = $ax_pthread_attr; return attr /* ; */])], [ax_cv_PTHREAD_JOINABLE_ATTR=$ax_pthread_attr; break], []) done ]) AS_IF([test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xunknown" && \ test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xPTHREAD_CREATE_JOINABLE" && \ test "x$ax_pthread_joinable_attr_defined" != "xyes"], [AC_DEFINE_UNQUOTED([PTHREAD_CREATE_JOINABLE], [$ax_cv_PTHREAD_JOINABLE_ATTR], [Define to necessary symbol if this constant uses a non-standard name on your system.]) ax_pthread_joinable_attr_defined=yes ]) AC_CACHE_CHECK([whether more special flags are required for pthreads], [ax_cv_PTHREAD_SPECIAL_FLAGS], [ax_cv_PTHREAD_SPECIAL_FLAGS=no case $target_os in solaris*) ax_cv_PTHREAD_SPECIAL_FLAGS="-D_POSIX_PTHREAD_SEMANTICS" ;; esac ]) AS_IF([test "x$ax_cv_PTHREAD_SPECIAL_FLAGS" != "xno" && \ test "x$ax_pthread_special_flags_added" != "xyes"], [PTHREAD_CFLAGS="$ax_cv_PTHREAD_SPECIAL_FLAGS $PTHREAD_CFLAGS" ax_pthread_special_flags_added=yes]) AC_CACHE_CHECK([for PTHREAD_PRIO_INHERIT], [ax_cv_PTHREAD_PRIO_INHERIT], [AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include ]], [[int i = PTHREAD_PRIO_INHERIT; return i;]])], [ax_cv_PTHREAD_PRIO_INHERIT=yes], [ax_cv_PTHREAD_PRIO_INHERIT=no]) ]) AS_IF([test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes" && \ test "x$ax_pthread_prio_inherit_defined" != "xyes"], [AC_DEFINE([HAVE_PTHREAD_PRIO_INHERIT], [1], [Have PTHREAD_PRIO_INHERIT.]) ax_pthread_prio_inherit_defined=yes ]) CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" # More AIX lossage: compile with *_r variant if test "x$GCC" != "xyes"; then case $target_os in aix*) AS_CASE(["x/$CC"], [x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6], [#handle absolute path differently from PATH based program lookup AS_CASE(["x$CC"], [x/*], [ AS_IF([AS_EXECUTABLE_P([${CC}_r])],[PTHREAD_CC="${CC}_r"]) AS_IF([test "x${CXX}" != "x"], [AS_IF([AS_EXECUTABLE_P([${CXX}_r])],[PTHREAD_CXX="${CXX}_r"])]) ], [ AC_CHECK_PROGS([PTHREAD_CC],[${CC}_r],[$CC]) AS_IF([test "x${CXX}" != "x"], [AC_CHECK_PROGS([PTHREAD_CXX],[${CXX}_r],[$CXX])]) ] ) ]) ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" AC_SUBST([PTHREAD_LIBS]) AC_SUBST([PTHREAD_CFLAGS]) AC_SUBST([PTHREAD_CC]) AC_SUBST([PTHREAD_CXX]) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test "x$ax_pthread_ok" = "xyes"; then ifelse([$1],,[AC_DEFINE([HAVE_PTHREAD],[1],[Define if you have POSIX threads libraries and header files.])],[$1]) : else ax_pthread_ok=no $2 fi AC_LANG_POP ])dnl AX_PTHREAD charliecloud-0.26/misc/m4/ax_with_prog.m4000066400000000000000000000047121417231051300203070ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_with_prog.html # =========================================================================== # # SYNOPSIS # # AX_WITH_PROG([VARIABLE],[program],[VALUE-IF-NOT-FOUND],[PATH]) # # DESCRIPTION # # Locates an installed program binary, placing the result in the precious # variable VARIABLE. Accepts a present VARIABLE, then --with-program, and # failing that searches for program in the given path (which defaults to # the system path). If program is found, VARIABLE is set to the full path # of the binary; if it is not found VARIABLE is set to VALUE-IF-NOT-FOUND # if provided, unchanged otherwise. # # A typical example could be the following one: # # AX_WITH_PROG(PERL,perl) # # NOTE: This macro is based upon the original AX_WITH_PYTHON macro from # Dustin J. Mitchell . # # LICENSE # # Copyright (c) 2008 Francesco Salvestrini # Copyright (c) 2008 Dustin J. Mitchell # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 17 AC_DEFUN([AX_WITH_PROG],[ AC_PREREQ([2.61]) pushdef([VARIABLE],$1) pushdef([EXECUTABLE],$2) pushdef([VALUE_IF_NOT_FOUND],$3) pushdef([PATH_PROG],$4) AC_ARG_VAR(VARIABLE,Absolute path to EXECUTABLE executable) AS_IF(test -z "$VARIABLE",[ AC_MSG_CHECKING(whether EXECUTABLE executable path has been provided) AC_ARG_WITH(EXECUTABLE,AS_HELP_STRING([--with-EXECUTABLE=[[[PATH]]]],absolute path to EXECUTABLE executable), [ AS_IF([test "$withval" != yes && test "$withval" != no],[ VARIABLE="$withval" AC_MSG_RESULT($VARIABLE) ],[ VARIABLE="" AC_MSG_RESULT([no]) AS_IF([test "$withval" != no], [ AC_PATH_PROG([]VARIABLE[],[]EXECUTABLE[],[]VALUE_IF_NOT_FOUND[],[]PATH_PROG[]) ]) ]) ],[ AC_MSG_RESULT([no]) AC_PATH_PROG([]VARIABLE[],[]EXECUTABLE[],[]VALUE_IF_NOT_FOUND[],[]PATH_PROG[]) ]) ]) popdef([PATH_PROG]) popdef([VALUE_IF_NOT_FOUND]) popdef([EXECUTABLE]) popdef([VARIABLE]) ]) charliecloud-0.26/misc/version000077500000000000000000000021571417231051300164470ustar00rootroot00000000000000#!/bin/sh # Compute and print out the full version number. See FAQ for details. # # This script should usually be run once, by Autotools, and the result # propagated using Autotools. This propagates the Git information into # tarballs, and otherwise, you can get a mismatch between different parts of # the software. set -e ch_base=$(cd "$(dirname "$0")" && pwd)/.. version_file=${ch_base}/VERSION version_simple=$(cat "$version_file") case $version_simple in *~*) prerelease=yes ;; *) prerelease= ;; esac if [ ! -d "${ch_base}/.git" ] || [ -z "$prerelease" ]; then # no Git or release version; use simple version printf "%s\n" "$version_simple" else # add Git stuff git_branch=$( git rev-parse --abbrev-ref HEAD \ | sed 's/[^A-Za-z0-9]//g' \ | sed 's/$/./g' \ | sed 's/master.//g') git_hash=$(git rev-parse --short HEAD) dirty=$(git diff-index --quiet HEAD || echo .dirty) printf '%s+%s%s%s\n' "$version_simple" \ "$git_branch" \ "$git_hash" \ "$dirty" fi charliecloud-0.26/packaging/000077500000000000000000000000001417231051300160205ustar00rootroot00000000000000charliecloud-0.26/packaging/Makefile.am000066400000000000000000000002211417231051300200470ustar00rootroot00000000000000EXTRA_DIST = \ README \ fedora/build \ fedora/charliecloud.rpmlintrc \ fedora/charliecloud.spec \ fedora/el7-pkgdir.patch \ fedora/upstream.spec charliecloud-0.26/packaging/README000066400000000000000000000014001417231051300166730ustar00rootroot00000000000000openSUSE -------- openSUSE packages are maintained and built using the openSUSE Build Service (OBS) at https://build.opensuse.org To use OBS to create your own charliecloud packages, you'll need to create a user account and then copy the packaging from the devel package at: https://build.opensuse.org/package/show/network:cluster/charliecloud This can be done from the web interface clicking on "Branch package". Then you will be able to fetch locally all the packaging files from your copy (or branch), make your changes and send them to OBS that will build packages and create package repositories for your branch. The beginner's guide on how to use OBS with the osc command-line tool can be found at https://openbuildservice.org/help/manuals/obs-user-guide/ charliecloud-0.26/packaging/fedora/000077500000000000000000000000001417231051300172605ustar00rootroot00000000000000charliecloud-0.26/packaging/fedora/build000077500000000000000000000211601417231051300203050ustar00rootroot00000000000000#!/usr/bin/env python2.7 # See contributors' guide for documentation of this script. from __future__ import print_function import argparse import errno import os import pipes import platform import pwd import re import shutil import socket import subprocess import sys import time CH_BASE = os.path.abspath(os.path.dirname(__file__) + "/../..") CH_RUN = [CH_BASE + "/bin/ch-run"] PACKAGES = ["charliecloud", "charliecloud-builder", "charliecloud-debuginfo", "charliecloud-doc", "charliecloud-test"] ARCH = platform.machine() def main(): # Parse arguments. ap = argparse.ArgumentParser() ap.add_argument("image", metavar="DIR") ap.add_argument("version"), ap.add_argument("--install", action="store_true") ap.add_argument("--rpmbuild", metavar="DIR", default="%s/rpmbuild" % os.getenv("HOME")) args = ap.parse_args() print("# Charliecloud root: %s" % CH_BASE) print("architecture: %s" % ARCH) print("""\ version: %(version)s image: %(image)s install: %(install)s rpmbuild root: %(rpmbuild)s """ % args.__dict__) # Use the generic base locale, rather than whatever the user has set, as # that's the only one guaranteed to be installed in the containers. os.environ["LC_ALL"] = "C" # What's the real Git version? if (args.version == "HEAD"): try: # If we're on a branch, we want to build on that branch so the branch # name shows up in the version name. commit = subprocess.check_output(["git", "symbolic-ref", "-q", "--short", "HEAD"])[:-1] except subprocess.CalledProcessError as x: if (x.returncode != 1): raise # Detached HEAD (e.g. CI) is also fine; use commit hash. commit = subprocess.check_output(["git", "rev-parse", "--verify", "HEAD"])[:-1] rpm_release = "0" else: m = re.search(r"([0-9.]+)-([0-9]+)", args.version) commit = "v" + m.group(1) rpm_release = m.group(2) # Create rpmbuild root rpm_sources = args.rpmbuild + '/SOURCES' rpm_specs = args.rpmbuild + '/SPECS' for d in (rpm_sources, rpm_specs): print("# mkdir -p %s" % d) try: os.makedirs(d) except OSError as x: if (x.errno != errno.EEXIST): raise # Determine distro. rpms_dist = cmd_out(CH_RUN, args.image, '--', 'rpmbuild', '--eval', '%{?dist}') rpms_dist = rpms_dist.split('.')[-1] # FIXME: el8 doesn't have bats; skip for now. if (rpms_dist == 'el8'): PACKAGES.remove('charliecloud-test') # Get a clean Git checkout of the desired version. We do this by making a # temporary clone so as not to mess up the WD. git_tmp = rpm_sources + '/charliecloud' print("# cloning into %s and checking out commit %s" % (git_tmp, commit)) cmd("git", "clone", '.', git_tmp) cmd("git", "checkout", commit, cwd=git_tmp) # Build tarball. print("# building source tarball") cmd(CH_RUN, "-b", "%s:/mnt/0" % git_tmp, "-c", "/mnt/0", args.image, "--", "./autogen.sh") cmd(CH_RUN, "-b", "%s:/mnt/0" % git_tmp, "-c", "/mnt/0", args.image, "--", "./configure") cmd(CH_RUN, "-b", "%s:/mnt/0" % git_tmp, "-c", "/mnt/0", args.image, "--", "make") cmd(CH_RUN, "-b", "%s:/mnt/0" % git_tmp, "-c", "/mnt/0", args.image, "--", "make", "dist") ch_version = open(git_tmp + "/lib/version.txt").read()[:-1] ch_tarball = "charliecloud-%s.tar.gz" % ch_version print("# Charliecloud version: %s" % ch_version) print("# source tarball: %s" % ch_tarball) os.rename("%s/%s" % (git_tmp, ch_tarball), "%s/%s" % (rpm_sources, ch_tarball)) # Copy lint configuration and patchs. # FIXME: Put version into destination sometime? shutil.copy("%s/packaging/fedora/charliecloud.rpmlintrc" % CH_BASE, "%s/charliecloud.rpmlintrc" % rpm_specs) shutil.copy("%s/packaging/fedora/el7-pkgdir.patch" % CH_BASE, "%s/el7-pkgdir.patch" % rpm_sources) # Remove temporary Git directory. print("# rm -rf %s" % git_tmp) shutil.rmtree(git_tmp) # Copy and patch spec file. rpm_vr = "%s-%s" % (ch_version, rpm_release) # Fedora requires no version in spec file. Add a version for pre-release # specs to make it hard to upload one to Fedora by mistake. if ("~pre" not in ch_version): spec = "charliecloud.spec" else: spec = "charliecloud-%s.spec" % rpm_vr with open("%s/packaging/fedora/charliecloud.spec" % CH_BASE, "rt") as in_, \ open("%s/%s" % (rpm_specs, spec), "wt") as out: print("# writing %s" % out.name) t = in_.read() t = t.replace("@VERSION@", ch_version) t = t.replace("@RELEASE@", rpm_release) if ("~pre" in ch_version): # Add dummy changelog entry. timestamp = time.strftime("%a %b %d %Y") # required format name = pwd.getpwuid(os.geteuid()).pw_gecos.split(",")[0] moniker = pwd.getpwuid(os.geteuid()).pw_name domain = re.sub(r"^[^.]+.", "", socket.getfqdn()) t = t.replace("%changelog\n", """\ %%changelog * %s %s <%s@%s> %s - Pre-release package. See Git history for what is going on. """ % (timestamp, name, moniker, domain, rpm_vr)) else: # Verify requested version matches changelog. m = re.search(r"%changelog\n.+?([0-9.-]+)\n", t) if (m.group(1) != rpm_vr): print("requested version %s != changelog %s" % (rpm_vr, m.group(1))) sys.exit(1) out.write(t) # Prepare build and rpmlint arguments. container = [] rpmbuild_args = [] rpmlint_args = [] # Use /usr/local/src because rpmbuild fails with "%{_topdir}/BUILD" # shorter than "/usr/src/debug" (yes, really!) [1,2]. # # [1]: https://access.redhat.com/solutions/1426113 # [2]: https://gbenson.net/?p=367 rpms_src = "/usr/local/src/RPMS" rpms_arch = "%s/%s" % (rpms_src, ARCH) rpms_noarch = "%s/noarch" % rpms_src rpms = {'charliecloud': [rpms_arch, '%s.%s.rpm' % (rpms_dist, ARCH)], 'charliecloud-builder': [rpms_noarch, '%s.noarch.rpm' % rpms_dist], 'charliecloud-debuginfo': [rpms_arch, '%s.%s.rpm' % (rpms_dist, ARCH)], 'charliecloud-doc': [rpms_noarch, '%s.noarch.rpm' % rpms_dist], 'charliecloud-test': [rpms_arch, '%s.%s.rpm' % (rpms_dist, ARCH)]} rpm_specs = "/usr/local/src/SPECS" rpm_sources = "/usr/local/src/SOURCES" rpmbuild_args += ["--define", "_topdir /usr/local/src"] rpmlint_args += ["--file", "%s/charliecloud.rpmlintrc" % rpm_specs] container += [CH_BASE + "/bin/ch-run", "-w", "-b", "%s:/usr/local/src" % args.rpmbuild, args.image, "--"] # Build RPMs. cmd(container, "rpmbuild", rpmbuild_args, "--version") cmd(container, "rpmbuild", rpmbuild_args, "-ba", "%s/%s" % (rpm_specs, spec)) cmd(container, "ls", "-lh", rpms_arch) cmd(container, "ls", "-lh", rpms_noarch) # Install RPMs. if (args.install): print("# uninstalling (most errors can be ignored)") cmd_ok(container, "rpm", "--erase", PACKAGES) print("# installing") for p in PACKAGES: rpm_path = rpms[p][0] rpm_ext = rpms[p][-1] file = '%s/%s-%s.%s' % (rpm_path, p, rpm_vr, rpm_ext) cmd(container, "rpm", "--install", file) cmd(container, "rpm", "-qa", "charliecloud*") # Lint RPMs and spec file. Last so problems that don't result in program # returning error are more obvious. print("# linting") cmd(container, "rpmlint", rpmlint_args, "%s/%s" % (rpm_specs, spec)) for p in PACKAGES: rpm_path = rpms[p][0] rpm_ext = rpms[p][-1] file = '%s/%s-%s.%s' % (rpm_path, p, rpm_vr, rpm_ext) cmd(container, "test", "-e", file) cmd(container, "rpmlint", rpmlint_args, file) # Success! print("# done") def cmd(*args, **kwargs): cmd_real(subprocess.check_call, *args, **kwargs) def cmd_ok(*args, **kwargs): rc = cmd_real(subprocess.call, *args, **kwargs) return (rc == 0) def cmd_out(*args, **kwargs): out = cmd_real(subprocess.check_output, *args, **kwargs) return out.rstrip() # remove trailing newline def cmd_real(runf, *args, **kwargs): # flatten any sub-lists (kludge) args2 = [] for arg in args: if (isinstance(arg, list)): args2 += arg else: args2.append(arg) # print and run print("$", end="") for arg in args2: arg = pipes.quote(arg) print(" " + arg, end="") print() return runf(args2, **kwargs) if (__name__ == "__main__"): main() charliecloud-0.26/packaging/fedora/charliecloud.rpmlintrc000066400000000000000000000055771417231051300236700ustar00rootroot00000000000000# This file is used to supress false positive errors and warnings generated by # rpmlint when used with our charliecloud packages. ### charliecloud.spec ### # The chroot tests are very fragile and have been removed upstream. See: # https://github.com/rpm-software-management/rpmlint/commit/83f915a54d23f7a912ed42b84ccb4e373bec8ad9 addFilter(r'missing-call-to-chdir-with-chroot') # The RPM build script will generate invalid source URLs for non-release # versions, e.g., '0.9.8~pre+epelpackage.41fe9fd'. addFilter(r'invalid-url') # charliecloud # We don't have architecture specific libraries, thus we can put files in /lib. # The rpm macro, _lib, expands to lib64, which is not what we want. Rather than # patch our install to an incorrect library path we ignore the lint error. addFilter(r'hardcoded-library-path') # Charliecloud uses pivot_root(2), not chroot(2), for containerization. The # calls to chroot(2) are part of the pivot_root(2) dance and not relevant to # Charliecloud's security posture. addFilter(r'missing-call-to-chdir-with-chroot') # The charliecloud example, chtest, has python scripts. addFilter(r'doc-file-dependency') # charliecloud-debuginfo # The only files under /usr/lib are those placed there by rpmbuild. addFilter(r'only-non-binary-in-usr-lib') # Ignore a false positive warning concerning pycache files and byte code. # https://bugzilla.redhat.com/show_bug.cgi?id=1286382 addFilter(r'python-bytecode-without-source') # We don't specify a version because the offending package was not out long and # we intend to remove Obsolete lines in the near future. addFilter(r'unversioned-explicit-obsoletes') ### charliecloud-test ### # Charliecloud is a container runtime. These shared objects are never used in # the host environment; rather, they are compiled by the test suite (both # running and examination of which serve as end-user documentation) and injected # into the container (guest) via utility script 'ch-fromhost'. The ldconfig # links are generated inside the container runtime environment. For more # information, see the test file: test/run/ch-fromhost.bats (line 108). addFilter(r'no-ldconfig-symlink') addFilter(r'library-without-ldconfig-postin') addFilter(r'library-without-ldconfig-postun') # The test suite has a few C files, e.g. userns.c, pivot_root.c, # chroot-escape.c, sotest.c, setgroups.c, mknods.c, setuid.c, etc., that # document -- line-by-line in some cases -- various components of the open source # runtime. These C files serve to show end users how containers work; some of # them are used explicitly during test suite runtime. addFilter(r'devel-file-in-non-devel-package') # The symlink to /usr/bin is created and does exist. addFilter(r'dangling-relative-symlink') # Funky files used as test fixtures: addFilter(r'dangling-symlink') # to /tmp addFilter(r'hidden-file-or-dir') # .dockerignore addFilter(r'zero-length') # for file copy test charliecloud-0.26/packaging/fedora/charliecloud.spec000066400000000000000000000133401417231051300225730ustar00rootroot00000000000000# Charliecloud fedora package spec file # # Contributors: # Dave Love @loveshack # Michael Jennings @mej # Jordan Ogas @jogas # Reid Priedhorksy @reidpr # Don't try to compile python3 files with /usr/bin/python. %{?el7:%global __python %__python3} Name: charliecloud Version: @VERSION@ Release: @RELEASE@%{?dist} Summary: Lightweight user-defined software stacks for high-performance computing License: ASL 2.0 URL: https://hpc.github.io/%{name}/ Source0: https://github.com/hpc/%{name}/releases/downloads/v%{version}/%{name}-%{version}.tar.gz BuildRequires: gcc rsync bash Requires: squashfuse squashfs-tools Patch1: el7-pkgdir.patch # Suggests: name-builder docker buildah Obsoletes: %{name}-runtime Obsoletes: %{name}-common %description Charliecloud uses Linux user namespaces to run containers with no privileged operations or daemons and minimal configuration changes on center resources. This simple approach avoids most security risks while maintaining access to the performance and functionality already on offer. Container images can be built using Docker or anything else that can generate a standard Linux filesystem tree. For more information: https://hpc.github.io/charliecloud %package builder Summary: Charliecloud container image building tools License: ASL 2.0 and MIT BuildArch: noarch BuildRequires: python3-devel BuildRequires: python%{python3_pkgversion}-requests Requires: %{name} Requires: python3 Requires: python%{python3_pkgversion}-requests Obsoletes: %{name}-builders Provides: bundled(python%{python3_pkgversion}-lark-parser) = 0.11.3 %description builder This package provides ch-image, Charliecloud's completely unprivileged container image manipulation tool. %package doc Summary: Charliecloud html documentation License: BSD and ASL 2.0 BuildArch: noarch Obsoletes: %{name}-doc < %{version}-%{release} BuildRequires: python%{python3_pkgversion}-sphinx BuildRequires: python%{python3_pkgversion}-sphinx_rtd_theme Requires: python%{python3_pkgversion}-sphinx_rtd_theme %description doc Html and man page documentation for %{name}. %package test Summary: Charliecloud test suite License: ASL 2.0 Requires: %{name} %{name}-builder /usr/bin/bats Obsoletes: %{name}-test < %{version}-%{release} %description test Test fixtures for %{name}. %prep %setup -q %if 0%{?el7} %patch1 -p1 %endif %build # Use old inlining behavior, see: # https://github.com/hpc/charliecloud/issues/735 CFLAGS=${CFLAGS:-%optflags -fgnu89-inline}; export CFLAGS %configure --docdir=%{_pkgdocdir} \ --libdir=%{_prefix}/lib \ --with-python=/usr/bin/python3 \ %if 0%{?el7} --with-sphinx-build=%{_bindir}/sphinx-build-3.6 %else --with-sphinx-build=%{_bindir}/sphinx-build %endif %install %make_install cat > README.EL7 </etc/sysctl.d/51-userns.conf sysctl -p /etc/sysctl.d/51-userns.conf Note for versions below RHEL7.6, you will also need to enable user namespaces: grubby --args=namespace.unpriv_enable=1 --update-kernel=ALL reboot Please visit https://hpc.github.io/charliecloud/ for more information. EOF # Remove bundled sphinx bits. %{__rm} -rf %{buildroot}%{_pkgdocdir}/html/_static/css %{__rm} -rf %{buildroot}%{_pkgdocdir}/html/_static/fonts %{__rm} -rf %{buildroot}%{_pkgdocdir}/html/_static/js # Use Fedora package sphinx bits. sphinxdir=%{python3_sitelib}/sphinx_rtd_theme/static ln -s "${sphinxdir}/css" %{buildroot}%{_pkgdocdir}/html/_static/css ln -s "${sphinxdir}/fonts" %{buildroot}%{_pkgdocdir}/html/_static/fonts ln -s "${sphinxdir}/js" %{buildroot}%{_pkgdocdir}/html/_static/js # Remove bundled license and readme (prefer license and doc macros). %{__rm} -f %{buildroot}%{_pkgdocdir}/LICENSE %{__rm} -f %{buildroot}%{_pkgdocdir}/README.rst %files %license LICENSE %doc README.rst %{?el7:README.EL7} %{_bindir}/ch-build %{_bindir}/ch-build2dir %{_bindir}/ch-builder2squash %{_bindir}/ch-builder2tar %{_bindir}/ch-checkns %{_bindir}/ch-convert %{_bindir}/ch-dir2squash %{_bindir}/ch-fromhost %{_bindir}/ch-pull2dir %{_bindir}/ch-pull2tar %{_bindir}/ch-run %{_bindir}/ch-run-oci %{_bindir}/ch-ssh %{_bindir}/ch-tar2dir %{_mandir}/man1/ch-build.1* %{_mandir}/man1/ch-build2dir.1* %{_mandir}/man1/ch-builder2squash.1* %{_mandir}/man1/ch-builder2tar.1* %{_mandir}/man1/ch-checkns.1* %{_mandir}/man1/ch-convert.1* %{_mandir}/man1/ch-dir2squash.1* %{_mandir}/man1/ch-fromhost.1* %{_mandir}/man1/ch-pull2dir.1* %{_mandir}/man1/ch-pull2tar.1* %{_mandir}/man1/ch-run.1* %{_mandir}/man1/ch-run-oci.1* %{_mandir}/man1/ch-ssh.1* %{_mandir}/man1/ch-tar2dir.1* %{_mandir}/man7/charliecloud.7* %{_prefix}/lib/%{name}/base.sh %{_prefix}/lib/%{name}/contributors.bash %{_prefix}/lib/%{name}/version.sh %{_prefix}/lib/%{name}/version.txt %files builder %{_bindir}/ch-image %{_mandir}/man1/ch-image.1* %{_prefix}/lib/%{name}/build.py %{_prefix}/lib/%{name}/charliecloud.py %{_prefix}/lib/%{name}/fakeroot.py %{_prefix}/lib/%{name}/lark %{_prefix}/lib/%{name}/lark-0.11.3.dist-info %{_prefix}/lib/%{name}/lark-stubs %{_prefix}/lib/%{name}/misc.py %{_prefix}/lib/%{name}/pull.py %{_prefix}/lib/%{name}/push.py %{_prefix}/lib/%{name}/version.py %{?el7:%{_prefix}/lib/%{name}/__pycache__} %files doc %license LICENSE %{_pkgdocdir}/examples %{_pkgdocdir}/html %{?el7:%exclude %{_pkgdocdir}/examples/*/__pycache__} %files test %{_bindir}/ch-test %{_libexecdir}/%{name}/test %{_mandir}/man1/ch-test.1* %changelog * Thu Apr 16 2020 - @VERSION@-@RELEASE@ - Add new charliecloud package. charliecloud-0.26/packaging/fedora/el7-pkgdir.patch000066400000000000000000000007771417231051300222610ustar00rootroot00000000000000diff -ru charliecloud/bin/ch-test charliecloud-lib/bin/ch-test --- charliecloud/bin/ch-test 2020-04-07 12:19:37.054609706 -0600 +++ charliecloud-lib/bin/ch-test 2020-04-15 16:36:55.128831767 -0600 @@ -662,7 +662,7 @@ CHTEST_INSTALLED=yes CHTEST_GITWD= CHTEST_DIR=${ch_base}/libexec/charliecloud/test - CHTEST_EXAMPLES_DIR=${ch_base}/share/doc/charliecloud/examples + CHTEST_EXAMPLES_DIR=${ch_base}/share/doc/charliecloud-${ch_version}/examples else # build dir CHTEST_INSTALLED= charliecloud-0.26/packaging/fedora/upstream.spec000066400000000000000000000255601417231051300220040ustar00rootroot00000000000000# Charliecloud fedora package spec file # # Contributors: # Dave Love @loveshack # Michael Jennings @mej # Jordan Ogas @jogas # Reid Priedhorksy @reidpr # Don't try to compile python3 files with /usr/bin/python. %{?el7:%global __python %__python3} Name: charliecloud Version: 0.25 Release: 1%{?dist} Summary: Lightweight user-defined software stacks for high-performance computing License: ASL 2.0 URL: https://hpc.github.io/%{name}/ Source0: https://github.com/hpc/%{name}/releases/downloads/v%{version}/%{name}-%{version}.tar.gz BuildRequires: gcc rsync bash Requires: squashfuse squashfs-tools Patch1: el7-pkgdir.patch %description Charliecloud uses Linux user namespaces to run containers with no privileged operations or daemons and minimal configuration changes on center resources. This simple approach avoids most security risks while maintaining access to the performance and functionality already on offer. Container images can be built using Docker or anything else that can generate a standard Linux filesystem tree. For more information: https://hpc.github.io/charliecloud %package builder Summary: Charliecloud container image building tools License: ASL 2.0 and MIT BuildArch: noarch BuildRequires: python3-devel BuildRequires: python%{python3_pkgversion}-lark-parser BuildRequires: python%{python3_pkgversion}-requests Requires: %{name} Requires: python3 Requires: python%{python3_pkgversion}-lark-parser Requires: python%{python3_pkgversion}-requests Provides: bundled(python%{python3_pkgversion}-lark-parser) = 0.11.3 %description builder This package provides ch-image, Charliecloud's completely unprivileged container image manipulation tool. %package doc Summary: Charliecloud html documentation License: BSD and ASL 2.0 BuildArch: noarch Obsoletes: %{name}-doc < %{version}-%{release} BuildRequires: python%{python3_pkgversion}-sphinx BuildRequires: python%{python3_pkgversion}-sphinx_rtd_theme Requires: python%{python3_pkgversion}-sphinx_rtd_theme %description doc Html and man page documentation for %{name}. %package test Summary: Charliecloud test suite License: ASL 2.0 Requires: %{name} %{name}-builder /usr/bin/bats Obsoletes: %{name}-test < %{version}-%{release} %description test Test fixtures for %{name}. %prep %setup -q %if 0%{?el7} %patch1 -p1 %endif %build # Use old inlining behavior, see: # https://github.com/hpc/charliecloud/issues/735 CFLAGS=${CFLAGS:-%optflags -fgnu89-inline}; export CFLAGS %configure --docdir=%{_pkgdocdir} \ --libdir=%{_prefix}/lib \ --with-python=/usr/bin/python3 \ %if 0%{?el7} --with-sphinx-build=%{_bindir}/sphinx-build-3.6 %else --with-sphinx-build=%{_bindir}/sphinx-build %endif %install %make_install cat > README.EL7 </etc/sysctl.d/51-userns.conf sysctl -p /etc/sysctl.d/51-userns.conf Note for versions below RHEL7.6, you will also need to enable user namespaces: grubby --args=namespace.unpriv_enable=1 --update-kernel=ALL reboot Please visit https://hpc.github.io/charliecloud/ for more information. EOF # Remove bundled sphinx bits. %{__rm} -rf %{buildroot}%{_pkgdocdir}/html/_static/css %{__rm} -rf %{buildroot}%{_pkgdocdir}/html/_static/fonts %{__rm} -rf %{buildroot}%{_pkgdocdir}/html/_static/js # Use Fedora package sphinx bits. sphinxdir=%{python3_sitelib}/sphinx_rtd_theme/static ln -s "${sphinxdir}/css" %{buildroot}%{_pkgdocdir}/html/_static/css ln -s "${sphinxdir}/fonts" %{buildroot}%{_pkgdocdir}/html/_static/fonts ln -s "${sphinxdir}/js" %{buildroot}%{_pkgdocdir}/html/_static/js # Remove bundled license and readme (prefer license and doc macros). %{__rm} -f %{buildroot}%{_pkgdocdir}/LICENSE %{__rm} -f %{buildroot}%{_pkgdocdir}/README.rst %files %license LICENSE %doc README.rst %{?el7:README.EL7} %{_bindir}/ch-build %{_bindir}/ch-build2dir %{_bindir}/ch-builder2squash %{_bindir}/ch-builder2tar %{_bindir}/ch-checkns %{_bindir}/ch-dir2squash %{_bindir}/ch-fromhost %{_bindir}/ch-mount %{_bindir}/ch-pull2dir %{_bindir}/ch-pull2tar %{_bindir}/ch-run %{_bindir}/ch-run-oci %{_bindir}/ch-ssh %{_bindir}/ch-umount %{_bindir}/ch-tar2dir %{_mandir}/man1/ch-build.1* %{_mandir}/man1/ch-build2dir.1* %{_mandir}/man1/ch-builder2squash.1* %{_mandir}/man1/ch-builder2tar.1* %{_mandir}/man1/ch-checkns.1* %{_mandir}/man1/ch-dir2squash.1* %{_mandir}/man1/ch-fromhost.1* %{_mandir}/man1/ch-mount.1* %{_mandir}/man1/ch-pull2dir.1* %{_mandir}/man1/ch-pull2tar.1* %{_mandir}/man1/ch-run.1* %{_mandir}/man1/ch-run-oci.1* %{_mandir}/man1/ch-ssh.1* %{_mandir}/man1/ch-tar2dir.1* %{_mandir}/man1/ch-umount.1* %{_mandir}/man7/charliecloud.7* %{_prefix}/lib/%{name}/base.sh %{_prefix}/lib/%{name}/contributors.bash %{_prefix}/lib/%{name}/version.sh %{_prefix}/lib/%{name}/version.txt %files builder %{_bindir}/ch-image %{_mandir}/man1/ch-image.1* %{_prefix}/lib/%{name}/build.py %{_prefix}/lib/%{name}/charliecloud.py %{_prefix}/lib/%{name}/fakeroot.py %{_prefix}/lib/%{name}/lark %{_prefix}/lib/%{name}/lark-0.11.3.dist-info %{_prefix}/lib/%{name}/lark-stubs %{_prefix}/lib/%{name}/misc.py %{_prefix}/lib/%{name}/pull.py %{_prefix}/lib/%{name}/push.py %{_prefix}/lib/%{name}/version.py %{?el7:%{_prefix}/lib/%{name}/__pycache__} %files doc %license LICENSE %{_pkgdocdir}/examples %{_pkgdocdir}/html %{?el7:%exclude %{_pkgdocdir}/examples/*/__pycache__} %files test %{_bindir}/ch-test %{_libexecdir}/%{name}/test %{_mandir}/man1/ch-test.1* %changelog * Mon Sep 20 2021 Jordan Ogas 0.24-12 - remove version numbers from Obsolete - remove Provides tag - replace package name with macro - tidy * Thu Jul 29 2021 Jordan Ogas 0.24-11 - move -builder to noarch - move examples back to -doc - add versions to obsoletes - use name macro * Wed Jul 28 2021 Jordan Ogas 0.24-10 - fix yet another typo; BuildRequires * Wed Jul 28 2021 Jordan Ogas 0.24-9 - add version to obsoletes * Wed Jul 28 2021 Jordan Ogas 0.24-8 - fix provides typo * Wed Jul 28 2021 Jordan Ogas 0.24-7 - add -common to obsoletes and provides * Wed Jul 28 2021 Jordan Ogas - 0.24-6 * revert to meta-package; separate builder to -builder * Wed Jul 21 2021 Fedora Release Engineering - 0.24-5 - Rebuilt for https://fedoraproject.org/wiki/Fedora_35_Mass_Rebuild * Mon Jul 19 2021 Jordan Ogas - 0.24-4 - fix epel7 python cache files * Mon Jul 19 2021 Jordan Ogas - 0.24-3 - Tidy, alphabatize files - Move builder exlusive python files out from -common - Move generic helper scripts to -common - Add requires runtime to -builders * Tue Jul 13 2021 Dave Love - 0.24-2 - Obsolete previous packge by -runtime, not -common * Wed Jun 30 2021 Dave Love - 0.24-1 - New version * Sun Apr 18 2021 Dave Love - 0.23-1 - New version - Split main package into runtime, builder, and common sub-packages - Require buildah and squashfs at run time - Use /lib, not /lib64 for noarch; drop lib64 patch - Don't BR squashfs-tools, squashfuse, buildah - Require squashfs-tools in -builders * Mon Mar 8 2021 Dave Love - 0.22-2 - Fix source0 path - Put man7 in base package * Tue Feb 9 2021 Dave Love - 0.22-1 - New version - update lib64.patch - add pull.py and push.py - (Build)Require python3-lark-parser, python3-requests * Wed Feb 3 2021 - 0.21-2 - Fix lib64.patch path for ch-image * Tue Jan 05 2021 - 0.21-1 - New version - Ship charlicloud.7 - Require fakeroot - Install fakeroot.py - Always ship patch1 - Get python3_sitelib defined - Move examples to -test and require sphinx_rtd_theme - Include __pycache__ on el7 - Use %%python3_pkgversion - BR python3, not /usr/bin/python3 - Fix comment capitalization and spacing * Tue Sep 22 2020 - 0.19-1 - Package build.py and misc.py - Remove unnecessary patch - New release * Mon Jul 27 2020 Fedora Release Engineering - 0.15-2 - Rebuilt for https://fedoraproject.org/wiki/Fedora_33_Mass_Rebuild * Thu Apr 16 2020 - 0.15-1 - Add test suite package - Update spec for autoconf - New release * Tue Jan 28 2020 Fedora Release Engineering - 0.10-2 - Rebuilt for https://fedoraproject.org/wiki/Fedora_32_Mass_Rebuild * Wed Sep 04 2019 - 0.10-1 - Patch doc-src/conf.py for epel - Fix doc-src/dev.rst - Fix libexec and doc install path for 0.10 changes - Tidy comments - New release * Thu Aug 22 2019 - 0.9.10-12 - Upate doc subpackage obsoletes * Mon Aug 19 2019 Dave love - 0.9.10-11 - Use canonical form for Source0 - Remove main package dependency from doc, and make it noarch * Fri Aug 02 2019 0.9.10-10 - Tidy comments; fix typ * Thu Jul 25 2019 0.9.10-9 - Use python site variable; fix doc file reference * Tue Jul 23 2019 0.9.10-8 - Remove bundled js, css, and font bits * Mon Jul 22 2019 0.9.10-6 - Temporarily remove test suite * Wed Jul 10 2019 0.9.10-5 - Revert test and example install path change - Update test readme * Wed Jul 3 2019 0.9.10-4 - Add doc package * Tue Jul 2 2019 0.9.10-3 - Tidy comments - Update source URL - Build html documentation; add rsync dependency - Add el7 conditionals for documentation - Remove libexecdir definition - Add test suite README.TEST * Wed May 15 2019 0.9.10-2 - Fix comment typo - Move test suite install path * Tue May 14 2019 0.9.10-1 - New version - Fix README.EL7 sysctl command instruction - Add pre-built html documentation - Fix python dependency - Remove temporary test-package readme - Fixed capitalization of change log messages * Tue Apr 30 2019 0.9.9-4 - Move global python declaration * Mon Apr 29 2019 0.9.9-3 - Match bin files with wildcard * Mon Apr 29 2019 0.9.9-2 - Update macro comment - Fix release tag history * Tue Apr 16 2019 0.9.9-1 - New version - Move temp readme creation to install segment - Fix spec file macro * Tue Apr 02 2019 0.9.8-2 - Remove python2 build option * Thu Mar 14 2019 0.9.8-1 - Add initial Fedora/EPEL package charliecloud-0.26/test/000077500000000000000000000000001417231051300150535ustar00rootroot00000000000000charliecloud-0.26/test/.dockerignore000066400000000000000000000000631417231051300175260ustar00rootroot00000000000000# Nothing yet; used for testing ch-image warnings. charliecloud-0.26/test/Build.centos7xz000077500000000000000000000013741417231051300200100ustar00rootroot00000000000000#!/bin/bash # Download an xz-compressed CentOS 7 tarball. These are the base images for # the official CentOS Docker images. # # https://github.com/CentOS/sig-cloud-instance-images # # This GitHub repository is arranged with CentOS version and architecture in # different branches. We download the latest for a given architecture. # # To check what version is in a tarball (on any architecture): # # $ tar xf centos-7-${arch}-docker.tar.xz --to-stdout ./etc/centos-release # # ch-test-scope: standard # ch-test-builder-exclude: none set -ex #srcdir=$1 # unused tarball=${2}.tar.xz #workdir=$3 # unused wget -nv -O "$tarball" "https://github.com/CentOS/sig-cloud-instance-images/blob/CentOS-7-$(uname -m)/docker/centos-7-$(uname -m)-docker.tar.xz?raw=true" charliecloud-0.26/test/Build.docker_pull000077500000000000000000000011331417231051300203400ustar00rootroot00000000000000#!/bin/bash # ch-test-scope: quick # ch-test-builder-include: docker # ch-test-need-sudo # # Pull a docker image directly from Dockerhub and pack it into an image tarball. set -e #srcdir=$1 # unused tarball_gz=${2}.tar.gz workdir=$3 tag=docker_pull addr=alpine:3.9 img=$tag:latest cd "$workdir" sudo docker pull "$addr" sudo docker tag "$addr" "$tag" # FIXME: do we need a ch_version_docker equivalent? sudo docker tag "$tag" "$img" hash_=$(sudo docker images -q "$img" | sort -u) if [[ -z $hash_ ]]; then echo "no such image '$img'" exit 1 fi ch-convert -i docker "$tag" "$tarball_gz" charliecloud-0.26/test/Build.missing000077500000000000000000000001411417231051300175040ustar00rootroot00000000000000#!/bin/bash # ch-test-scope: quick # This image's prerequisites can never be satisfied. exit 65 charliecloud-0.26/test/Dockerfile.00_tiny000066400000000000000000000002321417231051300203230ustar00rootroot00000000000000# Minimal image useful for later tests. # ch-test-scope: quick FROM alpine:3.9 # Base image has no default command; we need one to build. CMD ["true"] charliecloud-0.26/test/Dockerfile.argenv000066400000000000000000000011141417231051300203230ustar00rootroot00000000000000# Test how ARG and ENV variables flow around. This does not address syntax # quirks; for that see test 'Dockerfile: syntax quirks' in # build/50_dockerfile.bats. Results are checked in both test 'Dockerfile: ARG # and ENV values' in build/50_dockerfile.bats and multiple tests in # run/ch-run_misc.bats. The latter is why this is a separate Dockerfile # instead of embedded in a .bats file. # ch-test-scope: standard FROM 00_tiny ARG chse_arg1_df ARG chse_arg2_df=arg2 ARG chse_arg3_df="arg3 ${chse_arg2_df}" ENV chse_env1_df env1 ENV chse_env2_df="env2 ${chse_env1_df}" RUN env | sort charliecloud-0.26/test/Dockerfile.build2dir000077700000000000000000000000001417231051300241722Dockerfile.00_tinyustar00rootroot00000000000000charliecloud-0.26/test/Dockerfile.file-quirks000066400000000000000000000074551417231051300213120ustar00rootroot00000000000000# This Dockerfile is used to test that pull deals with quirky files, e.g. # replacement by different types (issues #819 and #825)`. Scope is “skip” # because we pull the image to test it; see test/build/50_pull.bats. # # To build and push: # # $ VERSION=$(date +%Y-%m-%d) # or other date as appropriate # $ sudo docker login # if needed # $ sudo docker build -t file-quirks -f Dockerfile.file-quirks . # $ sudo docker tag file-quirks:latest charliecloud/file-quirks:$VERSION # $ sudo docker images | fgrep file-quirks # $ sudo docker push charliecloud/file-quirks:$VERSION # # ch-test-scope: skip FROM 00_tiny WORKDIR /test ## Replace symlink with symlink. # Set up a symlink & targets. RUN echo target1 > ss_target1 \ && echo target2 > ss_target2 \ && ln -s ss_target1 ss_link # link and target should both contain "target1" RUN ls -l \ && for i in ss_*; do printf '%s : ' $i; cat $i; done # Overwrite it with a new symlink. RUN rm ss_link \ && ln -s ss_target2 ss_link # Now link should still be a symlink but contain "target2". RUN ls -l \ && for i in ss_*; do printf '%s : ' $i; cat $i; done ## Replace symlink with regular file (issue #819). # Set up a symlink. RUN echo target > sf_target \ && ln -s sf_target sf_link # Link and target should both contain "target". RUN ls -l \ && for i in sf_*; do printf '%s : ' $i; cat $i; done # Overwrite it with a regular file. RUN rm sf_link \ && echo regular > sf_link # Now link should be a regular file and contain "regular". RUN ls -l \ && for i in sf_*; do printf '%s : ' $i; cat $i; done ## Replace regular file with symlink. # Set up two regular files. RUN echo regular > fs_link \ && echo target > fs_target # Link should be a regular file and contain "regular". RUN ls -l \ && for i in fs_*; do printf '%s : ' $i; cat $i; done # Overwrite it with a symlink. RUN rm fs_link \ && ln -s fs_target fs_link # Now link should be a symlink; both should contain "target". RUN ls -l \ && for i in fs_*; do printf '%s : ' $i; cat $i; done ## Replace symlink with directory. # Set up a symlink. RUN echo target > sd_target \ && ln -s sd_target sd_link # link and target should both contain "target" RUN ls -l \ && for i in sd_*; do printf '%s : ' $i; cat $i; done # Overwrite it with a directory. RUN rm sd_link \ && mkdir sd_link # Now link should be a directory. RUN ls -l ## Replace directory with symlink. # I think this is what's in image ppc64le.neo4j/2.3.5, as reported in issue # #825, but it doesn't cause the same infinite recursion. # Set up a directory and a target. RUN mkdir ds_link \ && echo target > ds_target # It should be a directory. RUN ls -l # Overwrite it with a symlink. RUN rmdir ds_link \ && ln -s ds_target ds_link # Now link should be a symlink; both should contain "target". RUN ls -l \ && for i in ds_*; do printf '%s : ' $i; cat $i; done ## Replace regular file with directory. # Set up a file. RUN echo regular > fd_member # It should be a file. RUN ls -l \ && for i in fd_*; do printf '%s : ' $i; cat $i; done # Overwrite it with a directory. RUN rm fd_member \ && mkdir fd_member # Now it should be a directory. RUN ls -l ## Replace directory with regular file. # Set up a directory. RUN mkdir df_member # It should be a directory. RUN ls -l # Overwrite it with a file. RUN rmdir df_member \ && echo regular > df_member # Now it should be a file. RUN ls -l \ && for i in df_*; do printf '%s : ' $i; cat $i; done ## Symlink with cycle (https://bugs.python.org/file37774). # Set up a symlink pointing to itself. RUN ln -s link_self link_self # List. RUN ls -l ## Broken symlinks (https://bugs.python.org/file37774). # Set up a symlink pointing to (1) a nonexistent file and (2) a directory that # only exists in the image. RUN ln -s doesnotexist link_b0rken \ && ln -s /test link_imageonly # List. RUN ls -l charliecloud-0.26/test/Dockerfile.metadata000066400000000000000000000020711417231051300206240ustar00rootroot00000000000000# This Dockerfile is used to test metadata pulling (issue #651). It includes # all the instructions that seemed like they ought to create metadata, even if # unsupported by ch-image. # # Scope is "skip" because we pull the image to test it; see # test/build/50_pull.bats. # # To build and push: # # $ VERSION=$(date +%Y-%m-%d) # or other date as appropriate # $ sudo docker login # if needed # $ sudo docker build -t charliecloud/metadata:$VERSION \ # -f Dockerfile.metadata . # $ sudo docker images | fgrep metadata # $ sudo docker push charliecloud/metadata:$VERSION # # ch-test-scope: skip FROM 00_tiny CMD ["bar", "baz"] ENTRYPOINT ["/bin/echo","foo"] ENV ch_foo=foo-ev ch_bar=bar-ev EXPOSE 867 5309/udp HEALTHCHECK --interval=60s --timeout=5s CMD ["/bin/true"] LABEL ch_foo=foo-label ch_bar=bar-label MAINTAINER charlie@example.com ONBUILD RUN echo hello RUN echo hello RUN ["/bin/echo", "world"] SHELL ["/bin/ash", "-c"] STOPSIGNAL SIGWINCH USER charlie:chargrp WORKDIR /mnt VOLUME /mnt/foo /mnt/bar /mnt/foo charliecloud-0.26/test/Dockerfile.ocimanifest000066400000000000000000000017501417231051300213500ustar00rootroot00000000000000# This Dockerfile is used to test image with an OCI manifest (issue #1184). # # WARNING: The manifest is produced by the build tool and is rather opaque. # Specifically, re-building the image might silently produce a different # manifest that also works, negating the value of this test. Building this # image with Podman 3.0.1 did trigger the above issue; Podman 3.4.0 very # likely has the same behavior. Bottom line, be very cautious about # re-building this image. One approach would be to comment out the content # types added by #1184 and see if the updated image still triggers the bug. # # Scope is "skip" because we pull it to test; see test/build/50_pull.bats. # ch-test-scope: skip # # To build and push: # # $ VERSION=$(date +%Y-%m-%d) # $ podman build -t charliecloud/ocimanifest:$VERSION \ # -f Dockerfile.ocimanifest . # $ podman images | fgrep ocimanifest # $ podman login # $ podman push charliecloud/ocimanifest:$VERSION # FROM alpine:3.9 RUN echo hello charliecloud-0.26/test/Makefile.am000066400000000000000000000060701417231051300171120ustar00rootroot00000000000000testdir = $(pkglibexecdir)/test # These test files require no special handling. testfiles = \ .dockerignore \ Dockerfile.00_tiny \ Dockerfile.argenv \ Dockerfile.build2dir \ build/10_sanity.bats \ build/40_pull.bats \ build/50_ch-image.bats \ build/50_dockerfile.bats \ build/50_localregistry.bats \ build/50_misc.bats \ build/99_cleanup.bats \ common.bash \ fixtures/empty-file \ fixtures/README \ make-auto.d/build.bats.in \ make-auto.d/build_custom.bats.in \ make-auto.d/builder_to_archive.bats.in \ make-auto.d/unpack.bats.in \ registry-config.yml \ run/build-rpms.bats \ run/ch-fromhost.bats \ run/ch-run_escalated.bats \ run/ch-run_isolation.bats \ run/ch-run_join.bats \ run/ch-run_misc.bats \ run/ch-run_uidgid.bats \ run/ch-tar2dir.bats \ run_first.bats \ sotest/files_inferrable.txt \ sotest/libsotest.c \ sotest/sotest.c # Test files that should be executable. testfiles_exec = \ Build.centos7xz \ Build.docker_pull \ Build.missing \ docs-sane \ make-perms-test # Program and shared library used for testing shared library injection. It's # built according to the rules below. In principle, we could use libtool for # that, but I'm disinclined to add that in since it's one test program and # does not require any libtool portability. sobuilts = \ sotest/bin/sotest \ sotest/lib/libsotest.so.1.0 \ sotest/libsotest.so \ sotest/libsotest.so.1 \ sotest/libsotest.so.1.0 \ sotest/sotest CLEANFILES = $(sobuilts) \ docs-sane force-auto make-perms-test \ build/61_force-auto.bats if ENABLE_TEST nobase_test_DATA = $(testfiles) nobase_test_SCRIPTS = $(testfiles_exec) nobase_nodist_test_SCRIPTS = $(sobuilts) if ENABLE_CH_IMAGE # this means we have Python nobase_test_DATA += build/61_force-auto.bats build/61_force-auto.bats: force-auto ./$< > $@ endif # See comment about symlinks in examples/Makefile.am. all-local: ln -fTs /tmp fixtures/symlink-to-tmp clean-local: rm -f fixtures/symlink-to-tmp install-data-hook: $(MKDIR_P) $(DESTDIR)$(testdir)/fixtures ln -fTs /tmp $(DESTDIR)$(testdir)/fixtures/symlink-to-tmp uninstall-hook: rm -f $(DESTDIR)$(testdir)/fixtures/symlink-to-tmp rmdir $(DESTDIR)$(testdir)/fixtures || true endif EXTRA_DIST = $(testfiles) $(testfiles_exec) \ docs-sane.py.in force-auto.py.in make-perms-test.py.in EXTRA_SCRIPTS = $(sobuilts) ## Python scripts - need text processing docs-sane force-auto make-perms-test: %: %.py.in rm -f $@ sed -E 's|%PYTHON_SHEBANG%|@PYTHON_SHEBANG@|' < $< > $@ chmod +rx,-w $@ # respects umask sotest/sotest: sotest/sotest.c sotest/libsotest.so.1.0 sotest/libsotest.so sotest/libsotest.so.1 $(CC) -o $@ $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) -L./sotest -lsotest $^ sotest/libsotest.so.1.0: sotest/libsotest.c $(CC) -o $@ $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) -shared -fPIC -Wl,-soname,libsotest.so.1 -lc $^ sotest/libsotest.so: sotest/libsotest.so.1.0 ln -fTs ./libsotest.so.1.0 $@ sotest/libsotest.so.1: sotest/libsotest.so.1.0 ln -fTs ./libsotest.so.1.0 $@ sotest/bin/sotest: sotest/sotest mkdir -p sotest/bin cp -a $^ $@ sotest/lib/libsotest.so.1.0: sotest/libsotest.so.1.0 mkdir -p sotest/lib cp -a $^ $@ charliecloud-0.26/test/build/000077500000000000000000000000001417231051300161525ustar00rootroot00000000000000charliecloud-0.26/test/build/10_sanity.bats000066400000000000000000000130611417231051300206350ustar00rootroot00000000000000load ../common @test 'documentation seems sane' { scope standard if ( ! command -v sphinx-build > /dev/null 2>&1 ); then skip 'Sphinx is not installed' fi if [[ ! -d ../doc ]]; then skip 'documentation source code absent' fi if [[ ! -f ../doc/html/index.html || ! -f ../doc/man/ch-run.1 ]]; then skip 'documentation not built' fi (cd ../doc && make -j "$(getconf _NPROCESSORS_ONLN)") ./docs-sane } @test 'version number seems sane' { # This checks the form of the version number but not whether it's # consistent with anything, because so far that level of strictness has # yielded hundreds of false positives but zero actual bugs. scope quick echo "version: ${ch_version}" re='^0\.[0-9]+(\.[0-9]+)?(~pre\+([A-Za-z0-9]+\.)?([0-9a-f]+(\.dirty)?)?)?$' [[ $ch_version =~ $re ]] } @test 'executables seem sane' { scope quick # Assume that everything in $ch_bin is ours if it starts with "ch-" and # either (1) is executable or (2) ends in ".c". Demand satisfaction from # each. The latter is to catch cases when we haven't compiled everything; # if we have, the test makes duplicate demands, but that's low cost. while IFS= read -r -d '' path; do path=${path%.c} filename=$(basename "$path") echo echo "$path" # --version run "$path" --version echo "$output" [[ $status -eq 0 ]] # --help: returns 0, says "Usage:" somewhere. run "$path" --help echo "$output" [[ $status -eq 0 ]] [[ $output = *'sage:'* ]] # Most, but not all, executables should print usage and exit # unsuccessfully when run without arguments. case $filename in ch-checkns) ;; *) run "$path" echo "$output" [[ $status -eq 1 ]] [[ $output = *'sage:'* ]] ;; esac # not setuid or setgid ls -l "$path" [[ ! -u $path ]] [[ ! -g $path ]] done < <( find "$ch_bin" -name 'ch-*' -a \( -executable -o -name '*.c' \) \ -print0 ) } @test 'lint shell scripts' { # ShellCheck excludes used below: # # SC2002 useless use of cat # SC2164 cd exit code unchecked (Bats checks for failure) # # Additional excludes work around issue #210, and I think are required for # the Bats tests forever: # # SC1090 can't find sourced file # SC2154 variable referenced but not assigned # scope standard arch_exclude ppc64le # no ShellCheck pre-built # Only do this test in build directory; the reasoning is that we don't # alter the shell scripts during install enough to re-test, and it means # we only have to find everything in one path. if [[ $CHTEST_INSTALLED ]]; then skip 'only in build directory' fi # ShellCheck present? if ( ! command -v shellcheck >/dev/null 2>&1 ); then pedantic_fail 'no ShellCheck found' fi # ShellCheck minimum version? version=$(shellcheck --version | grep -E '^version:' | cut -d' ' -f2) needed=0.7.2 lesser=$(printf "%s\n%s\n" "$version" "$needed" | sort -V | head -1) echo "shellcheck: have ${version}, need ${needed}, lesser ${lesser}" if [[ $lesser != "$needed" ]]; then pedantic_fail 'shellcheck too old' fi # Shell scripts and libraries: appropriate extension or shebang. # For awk program, see: https://unix.stackexchange.com/a/66099 while IFS= read -r i; do echo "shellcheck: ${i}" shellcheck -x -P "$ch_lib" -e SC1090,SC2002,SC2154 "$i" done < <( find "$ch_base" \ \( -name .git \ -o -name build-aux \) -prune \ -o \( -name '*.sh' -print \) \ -o \( -name '*.bash' -print \) \ -o \( -type f -exec awk '/^#!\/bin\/(ba)?sh/ {print FILENAME} {nextfile}' {} + \) ) # Bats scripts. Use sed to do two things: # # 1. Make parseable by ShellCheck by removing "@test '...'". This does # remove the test names, but line numbers are still valid. # # 2. Remove preprocessor substitutions "%(foo)", which also confuse Bats. # while IFS= read -r i; do echo "shellcheck: ${i}" sed -r -e 's/@test (.+) \{/test_ () {/g' "$i" \ -e 's/%\(([a-zA-Z0-9_]+)\)/SUBST_\1/g' \ | shellcheck -s bash -e SC1090,SC2002,SC2154,SC2164 - done < <( find "$ch_base" -name '*.bats' -o -name '*.bats.in' ) } @test 'proxy variables' { scope quick # Proxy variables are a mess on UNIX. There are a lot them, and different # programs use them inconsistently. This test is based on the assumption # that if one of the proxy variables are set, then they all should be, in # order to prepare for diverse internet access at build time. # # Coordinate this test with bin/ch-build. # # Note: ALL_PROXY and all_proxy aren't currently included, because they # cause image builds to fail until Docker 1.13 # (https://github.com/docker/docker/pull/27412). v=' no_proxy http_proxy https_proxy' v+=$(echo "$v" | tr '[:lower:]' '[:upper:]') empty_ct=0 for i in $v; do if [[ -n ${!i} ]]; then echo "${i} is non-empty" for j in $v; do echo " $j=${!j}" if [[ -z ${!j} ]]; then (( ++empty_ct )) fi done break fi done [[ $empty_ct -eq 0 ]] } charliecloud-0.26/test/build/40_pull.bats000066400000000000000000000321221417231051300203040ustar00rootroot00000000000000load ../common setup () { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' } image_ref_parse () { # Try to parse image ref $1; expected output is provided on stdin and # expected exit code in $2. ref=$1 retcode_expected=$2 echo "--- parsing: ${ref}" set +e out=$(ch-image pull --parse-only "$ref" 2>&1) retcode=$? set -e echo "--- return code: ${retcode}" echo '--- output:' echo "$out" if [[ $retcode -ne "$retcode_expected" ]]; then echo "fail: return code differs from expected ${retcode_expected}" exit 1 fi diff -u - <(echo "$out") } @test 'image ref parsing' { # simplest cat <<'EOF' | image_ref_parse name 0 as string: name for filename: name fields: host None port None path [] name 'name' tag None digest None EOF # one-component path cat <<'EOF' | image_ref_parse path1/name 0 as string: path1/name for filename: path1%name fields: host None port None path ['path1'] name 'name' tag None digest None EOF # two-component path cat <<'EOF' | image_ref_parse path1/path2/name 0 as string: path1/path2/name for filename: path1%path2%name fields: host None port None path ['path1', 'path2'] name 'name' tag None digest None EOF # host with dot cat <<'EOF' | image_ref_parse example.com/name 0 as string: example.com/name for filename: example.com%name fields: host 'example.com' port None path [] name 'name' tag None digest None EOF # host with dot, with port cat <<'EOF' | image_ref_parse example.com:8080/name 0 as string: example.com:8080/name for filename: example.com:8080%name fields: host 'example.com' port 8080 path [] name 'name' tag None digest None EOF # host without dot, with port cat <<'EOF' | image_ref_parse examplecom:8080/name 0 as string: examplecom:8080/name for filename: examplecom:8080%name fields: host 'examplecom' port 8080 path [] name 'name' tag None digest None EOF # no path, tag cat <<'EOF' | image_ref_parse name:tag 0 as string: name:tag for filename: name:tag fields: host None port None path [] name 'name' tag 'tag' digest None EOF # no path, digest cat <<'EOF' | image_ref_parse name@sha256:feeddad 0 as string: name@sha256:feeddad for filename: name@sha256:feeddad fields: host None port None path [] name 'name' tag None digest 'feeddad' EOF # everything, tagged cat <<'EOF' | image_ref_parse example.com:8080/path1/path2/name:tag 0 as string: example.com:8080/path1/path2/name:tag for filename: example.com:8080%path1%path2%name:tag fields: host 'example.com' port 8080 path ['path1', 'path2'] name 'name' tag 'tag' digest None EOF # everything, tagged, filename component cat <<'EOF' | image_ref_parse example.com:8080%path1%path2%name:tag 0 as string: example.com:8080/path1/path2/name:tag for filename: example.com:8080%path1%path2%name:tag fields: host 'example.com' port 8080 path ['path1', 'path2'] name 'name' tag 'tag' digest None EOF # everything, digest cat <<'EOF' | image_ref_parse example.com:8080/path1/path2/name@sha256:feeddad 0 as string: example.com:8080/path1/path2/name@sha256:feeddad for filename: example.com:8080%path1%path2%name@sha256:feeddad fields: host 'example.com' port 8080 path ['path1', 'path2'] name 'name' tag None digest 'feeddad' EOF # errors # invalid character in image name cat <<'EOF' | image_ref_parse 'name*' 1 error: image ref syntax, char 5: name* hint: https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference EOF # missing port number cat <<'EOF' | image_ref_parse 'example.com:/path1/name' 1 error: image ref syntax, char 13: example.com:/path1/name hint: https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference EOF # path with leading slash cat <<'EOF' | image_ref_parse '/path1/name' 1 error: image ref syntax, char 1: /path1/name hint: https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference EOF # path but no name cat <<'EOF' | image_ref_parse 'path1/' 1 error: image ref syntax, at end: path1/ hint: https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference EOF # bad digest algorithm cat <<'EOF' | image_ref_parse 'name@sha512:feeddad' 1 error: image ref syntax, char 5: name@sha512:feeddad hint: https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference EOF # both tag and digest cat <<'EOF' | image_ref_parse 'name:tag@sha512:feeddad' 1 error: image ref syntax, char 9: name:tag@sha512:feeddad hint: https://hpc.github.io/charliecloud/faq.html#how-do-i-specify-an-image-reference EOF } @test 'pull image with quirky files' { arch_exclude aarch64 # test image not available arch_exclude ppc64le # test image not available # Validate that layers replace symlinks correctly. See # test/Dockerfile.symlink and issues #819 & #825. img=$BATS_TMPDIR/charliecloud%file-quirks ch-image pull charliecloud/file-quirks:2020-10-21 "$img" ls -lh "${img}/test" output_expected=$(cat <<'EOF' regular file 'df_member' symbolic link 'ds_link' -> 'ds_target' regular file 'ds_target' directory 'fd_member' symbolic link 'fs_link' -> 'fs_target' regular file 'fs_target' symbolic link 'link_b0rken' -> 'doesnotexist' symbolic link 'link_imageonly' -> '/test' symbolic link 'link_self' -> 'link_self' directory 'sd_link' regular file 'sd_target' regular file 'sf_link' regular file 'sf_target' symbolic link 'ss_link' -> 'ss_target2' regular file 'ss_target1' regular file 'ss_target2' EOF ) cd "${img}/test" run stat -c '%-14F %N' -- * echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") cd - } @test 'pull images with uncommon manifests' { arch_exclude aarch64 # test image not available arch_exclude ppc64le # test image not available if [[ -n $CH_REGY_DEFAULT_HOST ]]; then # Manifests seem to vary by registry; we need Docker Hub. skip 'default registry host set' fi storage="${BATS_TMPDIR}/tmp" cache=$storage/dlcache export CH_IMAGE_STORAGE=$storage # OCI manifest; see issue #1184. img=charliecloud/ocimanifest:2021-10-12 ch-image pull "$img" # Manifest schema version one (v1); see issue #814. Use debian:squeeze # because 1) it always returns a v1 manifest schema (regardless of media # type specified), and 2) it isn't very large, thus keeps test time down. img=debian:squeeze ch-image pull "$img" grep -F '"schemaVersion": 1' "${cache}/${img}%skinny.manifest.json" rm -Rf --one-file-system "$storage" } @test 'pull from public repos' { if [[ -n $CH_REGY_DEFAULT_HOST ]]; then skip 'default registry host set' # avoid Docker Hub fi if [[ -z $CI ]]; then # Verify we can reach the public internet, except on CI, where we # insist this should work. ping -c3 8.8.8.8 || skip "can't ping 8.8.8.8" fi # These images are selected to be official-ish and small. My rough goal is # to keep them under 10MiB uncompressed, but this isn't working great. It # may be worth our while to upload some small test images to these places. # Docker Hub: https://hub.docker.com/_/alpine ch-image pull registry-1.docker.io/library/alpine:latest # quay.io: https://quay.io/repository/quay/busybox ch-image pull quay.io/quay/busybox:latest # gitlab.com: https://gitlab.com/pages/hugo # FIXME: 50 MiB, try to do better; seems to be the slowest repo. ch-image pull registry.gitlab.com/pages/hugo:latest # Google Container Registry: # https://console.cloud.google.com/gcr/images/google-containers/GLOBAL # FIXME: "latest" tags do not work, but they do in Docker (issue #896) # FIXME: arch-aware pull does not work either (issue #1100) ch-image pull --arch=yolo gcr.io/google-containers/busybox:1.27 # nVidia NGC: https://ngc.nvidia.com # FIXME: 96 MiB unpacked; also kind of slow # FIXME: Can't pull this image with LC_ALL=C (issue #970). LC_ALL=en_US.utf-8 \ ch-image pull nvcr.io/hpc/foldingathome/fah-gpu:7.6.21 # Red Hat registry: https://catalog.redhat.com/software/containers/explore # FIXME: 77 MiB unpacked, should find a smaller public image ch-image pull registry.access.redhat.com/ubi7-minimal:latest # Microsoft Container Registry: # https://github.com/microsoft/containerregistry ch-image pull mcr.microsoft.com/mcr/hello-world:latest # Things not here (yet?): # # 1. Harbor (issue #899): Has a demo repo (https://demo.goharbor.io) that # you can make an account on, but I couldn't find a public repo, and # the demo repo gets reset every two days. # # 2. Docker registry container (https://hub.docker.com/_/registry): Would # need to set up an instance. # # 3. Amazon public repo (issue #901, # https://aws.amazon.com/blogs/containers/advice-for-customers-dealing-with-docker-hub-rate-limits-and-a-coming-soon-announcement/): # Does not exist yet; coming "within weeks" of 2020-11-02. # # 4. Microsoft Azure registry [1] (issue #902): I could not find any # public images. It seems that public pull is "currently a preview # feature" as of 2020-11-06 [2]. # # [1]: https://azure.microsoft.com/en-us/services/container-registry # [2]: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-faq#how-do-i-enable-anonymous-pull-access # # 5. JFrog / Artifactory (https://jfrog.com/container-registry/): Could # not find any public registry. } @test 'pull image with metadata' { arch_exclude aarch64 # test image not available arch_exclude ppc64le # test image not available tag=2021-01-15 name=charliecloud/metadata:$tag img=$CH_IMAGE_STORAGE/img/charliecloud%metadata:$tag ch-image pull "$name" # Correct files? diff -u - <(ls "${img}/ch") <<'EOF' config.pulled.json environment metadata.json EOF # Volume mount points exist? ls -lh "${img}/mnt" test -d "${img}/mnt/foo" test -d "${img}/mnt/bar" # /ch/environment contents diff -u - "${img}/ch/environment" <<'EOF' PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ch_bar=bar-ev ch_foo=foo-ev EOF # /ch/metadata.json contents diff -u - "${img}/ch/metadata.json" <<'EOF' { "arch": "amd64", "cwd": "/mnt", "env": { "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "ch_bar": "bar-ev", "ch_foo": "foo-ev" }, "labels": { "ch_bar": "bar-label", "ch_foo": "foo-label" }, "shell": [ "/bin/ash", "-c" ], "volumes": [ "/mnt/bar", "/mnt/foo" ] } EOF } @test 'pull by arch' { # Has fat manifest; requested arch exists. There's not much simple to look # for in the output, so just see if it works. ch-image --arch=yolo pull alpine:latest ch-image --arch=host pull alpine:latest ch-image --arch=amd64 pull alpine:latest ch-image --arch=arm64/v8 pull alpine:latest # Has fat manifest, but requested arch does not exist. run ch-image --arch=doesnotexist pull alpine:latest echo "$output" [[ $status -eq 1 ]] [[ $output = *'requested arch unavailable:'*'available:'* ]] # Delete it so we don't try to use a non-matching arch for other testing. ch-image delete alpine:latest # No fat manifest. ch-image --arch=yolo pull charliecloud/metadata:2021-01-15 ch-image --arch=amd64 pull charliecloud/metadata:2021-01-15 if [[ $(uname -m) == 'x86_64' ]]; then ch-image --arch=host pull charliecloud/metadata:2021-01-15 run ch-image --arch=arm64/v8 pull charliecloud/metadata:2021-01-15 echo "$output" [[ $status -eq 1 ]] [[ $output = *'image is architecture-unaware'*'consider --arch=yolo' ]] fi } @test 'pull images that do not exist' { if [[ -n $CH_REGY_DEFAULT_HOST ]]; then skip 'default registry host set' # errors are Docker Hub specific fi # name does not exist remotely, in library run ch-image pull doesnotexist:latest echo "$output" [[ $status -eq 1 ]] [[ $output = *'registry-1.docker.io:443/library/doesnotexist:latest'* ]] # tag does not exist remotely, in library run ch-image pull alpine:doesnotexist echo "$output" [[ $status -eq 1 ]] [[ $output = *'registry-1.docker.io:443/library/alpine:doesnotexist'* ]] # name does not exist remotely, not in library run ch-image pull charliecloud/doesnotexist:latest echo "$output" [[ $status -eq 1 ]] [[ $output = *'registry-1.docker.io:443/charliecloud/doesnotexist:latest'* ]] # tag does not exist remotely, not in library run ch-image pull charliecloud/metadata:doesnotexist echo "$output" [[ $status -eq 1 ]] [[ $output = *'registry-1.docker.io:443/charliecloud/metadata:doesnotexist'* ]] } charliecloud-0.26/test/build/50_ch-image.bats000066400000000000000000000452371417231051300210160ustar00rootroot00000000000000load ../common setup () { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' } @test 'ch-image common options' { # no common options run ch-image storage-path echo "$output" [[ $status -eq 0 ]] [[ $output != *'verbose level'* ]] # before only run ch-image -vv storage-path echo "$output" [[ $status -eq 0 ]] [[ $output = *'verbose level: 2'* ]] # after only run ch-image storage-path -vv echo "$output" [[ $status -eq 0 ]] [[ $output = *'verbose level: 2'* ]] # before and after; after wins run ch-image -vv storage-path -v echo "$output" [[ $status -eq 0 ]] [[ $output = *'verbose level: 1'* ]] } @test 'ch-image delete' { # verify delete/test image doesn't exist run ch-image list echo "$output" [[ $status -eq 0 ]] [[ $output != *"delete/test"* ]] # Build image. It's called called delete/test to check ref parsing with # slash present. ch-image build -t delete/test -f - . << 'EOF' FROM 00_tiny EOF run ch-image list echo "$output" [[ $status -eq 0 ]] [[ $output = *"delete/test"* ]] # delete image ch-image delete delete/test run ch-image list echo "$output" [[ $status -eq 0 ]] [[ $output != *"delete/test"* ]] } @test 'ch-image import' { # Note: We don't test importing a real image because (1) when this is run # during the build phase there aren't any unpacked images and (2) I can't # think of a way import could fail that would be real image-specific. ## Test image (not runnable) fixtures=${BATS_TMPDIR}/import rm -Rfv --one-file-system "$fixtures" mkdir "$fixtures" \ "${fixtures}/empty" \ "${fixtures}/nonempty" \ "${fixtures}/nonempty/ch" \ "${fixtures}/nonempty/bin" (cd "$fixtures" && ln -s nonempty nelink) touch "${fixtures}/nonempty/bin/foo" cat <<'EOF' > "${fixtures}/nonempty/ch/metadata.json" { "arch": "corn", "cwd": "/", "env": {}, "labels": {}, "shell": [ "/bin/sh", "-c" ], "volumes": [] } EOF ls -lhR "$fixtures" ## Tarballs # tarbomb (cd "${fixtures}/nonempty" && tar czvf ../bomb.tar.gz .) run ch-image import -v "${fixtures}/bomb.tar.gz" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/bomb.tar.gz"* ]] [[ $output != *'layers: single enclosing directory, using its contents'* ]] [[ -f "${CH_IMAGE_STORAGE}/img/imptest/bin/foo" ]] grep -F '"arch": "corn"' "${CH_IMAGE_STORAGE}/img/imptest/ch/metadata.json" ch-image delete imptest # non-tarbomb (cd "$fixtures" && tar czvf standard.tar.gz nonempty) run ch-image import -v "${fixtures}/standard.tar.gz" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/standard.tar.gz"* ]] [[ $output = *'layers: single enclosing directory, using its contents'* ]] [[ -f "${CH_IMAGE_STORAGE}/img/imptest/bin/foo" ]] grep -F '"arch": "corn"' "${CH_IMAGE_STORAGE}/img/imptest/ch/metadata.json" ch-image delete imptest # non-tarbomb, but enclosing directory is a standard dir (cd "${fixtures}/nonempty" && tar czvf ../tricky.tar.gz bin) run ch-image import -v "${fixtures}/tricky.tar.gz" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/tricky.tar.gz"* ]] [[ $output != *'layers: single enclosing directory, using its contents'* ]] [[ -f "${CH_IMAGE_STORAGE}/img/imptest/bin/foo" ]] ch-image delete imptest # empty, uncompressed tarfile (cd "${fixtures}" && tar cvf empty.tar empty) run ch-image import -v "${fixtures}/empty.tar" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/empty.tar"* ]] [[ $output = *'layers: single enclosing directory, using its contents'* ]] [[ $output = *'warning: no metadata to load; using defaults'* ]] ch-image delete imptest ## Directories # non-empty directory run ch-image import -v "${fixtures}/nonempty" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/nonempty"* ]] [[ $output = *"copying image: ${fixtures}/nonempty -> ${CH_IMAGE_STORAGE}/img/imptest"* ]] [[ -f "${CH_IMAGE_STORAGE}/img/imptest/bin/foo" ]] grep -F '"arch": "corn"' "${CH_IMAGE_STORAGE}/img/imptest/ch/metadata.json" ch-image delete imptest # empty directory run ch-image import -v "${fixtures}/empty" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/empty"* ]] [[ $output = *"copying image: ${fixtures}/empty -> ${CH_IMAGE_STORAGE}/img/imptest"* ]] [[ $output = *'warning: no metadata to load; using defaults'* ]] ch-image delete imptest # symlink to directory run ch-image import -v "${fixtures}/nelink" imptest echo "$output" [[ $status -eq 0 ]] [[ $output = *"importing: ${fixtures}/nelink"* ]] [[ $output = *"copying image: ${fixtures}/nelink -> ${CH_IMAGE_STORAGE}/img/imptest"* ]] [[ -f "${CH_IMAGE_STORAGE}/img/imptest/bin/foo" ]] grep -F '"arch": "corn"' "${CH_IMAGE_STORAGE}/img/imptest/ch/metadata.json" ch-image delete imptest ## Errors # input does not exist run ch-image import -v /doesnotexist imptest echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: can't copy: not found: /doesnotexist"* ]] # invalid destination reference run ch-image import -v "${fixtures}/empty" 'badchar*' echo "$output" [[ $status -eq 1 ]] [[ $output = *'error: image ref syntax, char 8: badchar*'* ]] # non-empty file that's not a tarball run ch-image import -v "${fixtures}/nonempty/ch/metadata.json" imptest echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: cannot open: ${fixtures}/nonempty/ch/metadata.json"* ]] ## Clean up [[ ! -e "${CH_IMAGE_STORAGE}/img/imptest" ]] rm -Rfv --one-file-system "$fixtures" } @test 'ch-image list' { # list all images run ch-image list echo "$output" [[ $status -eq 0 ]] [[ $output = *"00_tiny"* ]] # name does not exist remotely, in library run ch-image list doesnotexist:latest echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: no'* ]] [[ $output = *'available remotely: no'* ]] [[ $output = *'remote arch-aware: n/a'* ]] [[ $output = *'archs available: n/a'* ]] # tag does not exist remotely, in library run ch-image list alpine:doesnotexist echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: no'* ]] [[ $output = *'available remotely: no'* ]] [[ $output = *'remote arch-aware: n/a'* ]] [[ $output = *'archs available: n/a'* ]] # name does not exist remotely, not in library run ch-image list charliecloud/doesnotexist:latest echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: no'* ]] [[ $output = *'available remotely: no'* ]] [[ $output = *'remote arch-aware: n/a'* ]] [[ $output = *'archs available: n/a'* ]] # tag does not exist remotely, not in library run ch-image list charliecloud/metadata:doesnotexist echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: no'* ]] [[ $output = *'available remotely: no'* ]] [[ $output = *'remote arch-aware: n/a'* ]] [[ $output = *'archs available: n/a'* ]] # in storage, does not exist remotely run ch-image list 00_tiny echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: yes'* ]] [[ $output = *'available remotely: no'* ]] [[ $output = *'remote arch-aware: n/a'* ]] [[ $output = *'archs available: n/a'* ]] # not in storage, exists remotely, fat manifest exists run ch-image list debian:buster-slim echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: no'* ]] [[ $output = *'available remotely: yes'* ]] [[ $output = *'remote arch-aware: yes'* ]] [[ $output = *'archs available: 386 amd64 arm/v5 arm/v7 arm64/v8 mips64le ppc64le s390x'* ]] # in storage, exists remotely, no fat manifest run ch-image list charliecloud/metadata:2021-01-15 echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: yes'* ]] [[ $output = *'available remotely: yes'* ]] [[ $output = *'remote arch-aware: no'* ]] [[ $output = *'archs available: unknown'* ]] # exists remotely, fat manifest exists, no Linux architectures run ch-image list mcr.microsoft.com/windows:20H2 echo "$output" [[ $status -eq 0 ]] [[ $output = *'in local storage: no'* ]] [[ $output = *'available remotely: yes'* ]] [[ $output = *'remote arch-aware: yes'* ]] [[ $output = *'warning: no valid architectures found'* ]] # scratch is weird and tells lies run ch-image list scratch echo "$output" [[ $status -eq 0 ]] [[ $output = *'available remotely: yes'* ]] [[ $output = *'remote arch-aware: yes'* ]] } @test 'ch-image reset' { export CH_IMAGE_STORAGE="$BATS_TMPDIR"/sd-reset # Ensure our test storage dir doesn't exist yet. [[ -e $CH_IMAGE_STORAGE ]] && rm -Rf --one-file-system "$CH_IMAGE_STORAGE" # Put an image innit. ch-image pull alpine:3.9 ls "$CH_IMAGE_STORAGE" # List images; should be only the one we just pulled. run ch-image list echo "$output" [[ $status -eq 0 ]] [[ $output = "alpine:3.9" ]] # Reset. ch-image reset # Image storage directory should be empty now. expected=$(cat <<'EOF' .: dlcache img ulcache version ./dlcache: ./img: ./ulcache: EOF ) actual=$(cd "$CH_IMAGE_STORAGE" && ls -1R) diff -u <(echo "$expected") <(echo "$actual") # Remove storage directory. rm -Rf --one-file-system "$CH_IMAGE_STORAGE" # Reset again; should error. run ch-image reset echo "$output" [[ $status -eq 1 ]] [[ $output = *"$CH_IMAGE_STORAGE not a builder storage"* ]] } @test 'ch-image storage-path' { run ch-image storage-path echo "$output" [[ $status -eq 0 ]] [[ $output = /* ]] # absolute path [[ $CH_IMAGE_STORAGE && $output = "$CH_IMAGE_STORAGE" ]] # match what we set } @test 'ch-image build --bind' { run ch-image --no-cache build -t tmpimg -f - \ -b "${PWD}/fixtures" -b ./fixtures:/mnt/0 . < "$CH_IMAGE_STORAGE"/version cat "$CH_IMAGE_STORAGE"/version # Version mismatch; fail. run ch-image -v list echo "$output" [[ $status -eq 1 ]] [[ $output = *'error: incompatible storage directory v-1'* ]] # Reset. run ch-image reset echo "$output" [[ $status -eq 0 ]] [[ $output = *"initializing storage directory: v${v_current} ${CH_IMAGE_STORAGE}"* ]] # Version matches again; success. run ch-image -v list echo "$output" [[ $status -eq 0 ]] [[ $output = *"found storage dir v${v_current}: ${CH_IMAGE_STORAGE}"* ]] # Fake version mismatch - no file (v1). rm "$CH_IMAGE_STORAGE"/version # Version mismatch; upgrade; success. run ch-image -v list echo "$output" [[ $status -eq 0 ]] [[ $output = *"upgrading storage directory: v${v_current} ${CH_IMAGE_STORAGE}"* ]] [[ $(cat "$CH_IMAGE_STORAGE"/version) -eq "$v_current" ]] } @test 'storage directory default path move' { unset CH_IMAGE_STORAGE old=/var/tmp/${USER}/ch-image old_parent=$(dirname "$old") new=/var/tmp/${USER}.ch new_bak=/var/tmp/${USER}.ch.test-save if [[ -e $old ]]; then pedantic_fail 'old default storage dir exists' fi printf '\n*** move real storage dir out of the way\n' echo 'WARNING: If this test fails, your storage directory may be broken' if [[ -e $new ]]; then [[ ! -e $new_bak ]] mv -v "$new" "$new_bak" fi printf '\n*** old valid and needs upgrade\n' [[ -d "$old_parent" ]] || mkdir "$old_parent" mkdir "$old" mkdir "$old"/{dlcache,img,ulcache} echo 1 > "$old"/version run ch-image -vv list echo "$output" [[ $status -eq 0 ]] [[ $output = *"storage dir: valid at old default: ${old}"* ]] [[ $output = *"storage dir: moving to new default path: ${new}"* ]] [[ $output = *"moving: ${old}/dlcache -> ${new}/dlcache"* ]] [[ $output = *"moving: ${old}/img -> ${new}/img"* ]] [[ $output = *"moving: ${old}/ulcache -> ${new}/ulcache"* ]] [[ $output = *"moving: ${old}/version -> ${new}/version"* ]] [[ $output = *"warning: parent of old storage dir now empty: ${old_parent}"* ]] [[ $output = *'hint: consider deleting it'* ]] [[ $output = *"upgrading storage directory: v2 ${new}"* ]] [[ ! -e $old ]] [[ -d ${new}/dlcache ]] [[ -d ${new}/img ]] [[ -d ${new}/ulcache ]] [[ -f ${new}/version ]] printf '\n*** old and new both valid\n' mkdir "$old" mkdir "$old"/{dlcache,img,ulcache} echo 2 > "$old"/version run ch-image -vv list echo "$output" [[ $status -eq 0 ]] [[ $output = *"storage dir: valid at old default: ${old}"* ]] [[ $output = *"warning: storage dir: also valid at new default: ${new}"* ]] [[ $output = *'hint: consider deleting the old one'* ]] [[ $output = *"found storage dir v2: ${new}"* ]] [[ -d $old ]] [[ -d ${new}/dlcache ]] [[ -d ${new}/img ]] [[ -d ${new}/ulcache ]] [[ -f ${new}/version ]] rm -Rfv --one-file-system "$new" printf '\n*** old is invalid, new absent\n' rmdir "$old"/img run ch-image -vv list echo "$output" [[ $status -eq 0 ]] [[ $output = *"warning: storage dir: invalid at old default, ignoring: ${old}"* ]] [[ $output = *"initializing storage directory: v2 ${new}"* ]] [[ -d $old ]] [[ -d ${new}/dlcache ]] [[ -d ${new}/img ]] [[ -d ${new}/ulcache ]] [[ -f ${new}/version ]] rm -Rfv --one-file-system "$new" printf '\n*** old is valid, new exists but is regular file\n' mkdir "$old"/img echo weirdal > "$new" run ch-image -vv list echo "$output" [[ $status -eq 1 ]] [[ $output = *"storage dir: valid at old default: ${old}"* ]] [[ $output = *"initializing storage directory: v2 ${new}"* ]] [[ $output = *"error: can't mkdir: exists and not a directory: ${new}"* ]] [[ -d $old ]] [[ -f $new ]] rm -v "$new" printf '\n*** old is valid, new exists and is empty directory\n' run ch-image -vv list echo "$output" [[ $status -eq 0 ]] [[ $output = *"storage dir: valid at old default: ${old}"* ]] [[ $output = *"storage dir: moving to new default path: ${new}"* ]] [[ $output = *"moving: ${old}/dlcache -> ${new}/dlcache"* ]] [[ $output = *"moving: ${old}/img -> ${new}/img"* ]] [[ $output = *"moving: ${old}/ulcache -> ${new}/ulcache"* ]] [[ $output = *"moving: ${old}/version -> ${new}/version"* ]] [[ $output = *"warning: parent of old storage dir now empty: ${old_parent}"* ]] [[ $output = *'hint: consider deleting it'* ]] [[ $output = *"found storage dir v2: ${new}"* ]] [[ ! -e $old ]] [[ -d ${new}/dlcache ]] [[ -d ${new}/img ]] [[ -d ${new}/ulcache ]] [[ -f ${new}/version ]] printf '\n*** old is valid, new exists, is invalid with one moved item\n' mkdir "$old" mkdir "$old"/{dlcache,img,ulcache} echo 2 > "$old"/version rm -Rfv --one-file-system "$new"/{dlcache,ulcache,version} run ch-image -vv list echo "$output" [[ $status -eq 1 ]] [[ $output = *"storage dir: valid at old default: ${old}"* ]] [[ $output = *"error: storage directory seems invalid: ${new}"* ]] [[ -d $old ]] [[ ! -e ${new}/dlcache ]] [[ -d ${new}/img ]] [[ ! -e ${new}/ulcache ]] [[ ! -e ${new}/version ]] rm -Rfv --one-file-system "$old" rm -Rfv --one-file-system "$new" printf '\n*** put real storage dir back\n' if [[ -e $new_bak ]]; then rm -Rfv --one-file-system "$new" mv -v "$new_bak" "$new" fi } charliecloud-0.26/test/build/50_dockerfile.bats000066400000000000000000000673531417231051300214560ustar00rootroot00000000000000load ../common @test 'Dockerfile: syntax quirks' { # These should all yield an output image, but we don't actually care about # it, so re-use the same one. scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # FIXME: other builders? # No newline at end of file. printf 'FROM 00_tiny\nRUN echo hello' \ | ch-image build -t syntax-quirks -f - . # Newline before FROM. ch-image build -t syntax-quirks -f - . <<'EOF' FROM 00_tiny RUN echo hello EOF # Comment before FROM. ch-image build -t tmpimg -f - . <<'EOF' # foo FROM 00_tiny RUN echo hello EOF # Single instruction. ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny EOF # Whitespace around comment hash. run ch-image -v build -t tmpimg -f - . <<'EOF' FROM 00_tiny #no whitespace #before only # after only # both before and after # multiple before # tab before EOF echo "$output" [[ $status -eq 0 ]] [[ $(echo "$output" | grep -Fc 'comment') -eq 6 ]] # Whitespace and newlines (turn on whitespace highlighting in your editor): run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny # trailing whitespace: shell sees it verbatim RUN true # whitespace-only line: ignored # two in a row # line continuation, no whitespace: shell sees one word RUN echo test1\ a # two in a row RUN echo test1\ b\ c # whitespace before line continuation: shell sees whitespace verbatim RUN echo test2 \ a # two in a row RUN echo test2 \ b \ c # whitespace after line continuation: shell sees one word RUN echo test3\ a # two in a row RUN echo test3\ b\ c # whitespace before & after line continuation: shell sees before only RUN echo test4 \ a # two in a row RUN echo test4 \ b \ c # whitespace on continued line: shell sees continued line's whitespace RUN echo test5\ a # two in a row RUN echo test5\ b\ c # whitespace-only continued line: shell sees whitespace verbatim RUN echo test6\ \ a # two in a row RUN echo test6\ \ \ b # backslash that is not a continuation: shell sees it verbatim RUN echo test\ 7\ a # two in a row RUN echo test\ 7\ \ b EOF echo "$output" [[ $status -eq 0 ]] output_expected=$(cat <<'EOF' warning: not yet supported, ignored: issue #777: .dockerignore file 1 FROM 00_tiny 4 RUN ['/bin/sh', '-c', 'true '] 13 RUN ['/bin/sh', '-c', 'echo test1a'] test1a 16 RUN ['/bin/sh', '-c', 'echo test1bc'] test1bc 21 RUN ['/bin/sh', '-c', 'echo test2 a'] test2 a 24 RUN ['/bin/sh', '-c', 'echo test2 b c'] test2 b c 29 RUN ['/bin/sh', '-c', 'echo test3a'] test3a 32 RUN ['/bin/sh', '-c', 'echo test3bc'] test3bc 37 RUN ['/bin/sh', '-c', 'echo test4 a'] test4 a 40 RUN ['/bin/sh', '-c', 'echo test4 b c'] test4 b c 45 RUN ['/bin/sh', '-c', 'echo test5 a'] test5 a 48 RUN ['/bin/sh', '-c', 'echo test5 b c'] test5 b c 53 RUN ['/bin/sh', '-c', 'echo test6 a'] test6 a 57 RUN ['/bin/sh', '-c', 'echo test6 b'] test6 b 63 RUN ['/bin/sh', '-c', 'echo test\\ 7a'] test 7a 66 RUN ['/bin/sh', '-c', 'echo test\\ 7\\ b'] test 7 b grown in 16 instructions: tmpimg EOF ) diff -u <(echo "$output_expected") <(echo "$output") } @test 'Dockerfile: syntax errors' { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # Bad instruction. Also, -v should give interal blabber about the grammar. run ch-image -v build -t tmpimg -f - . <<'EOF' FROM 00_tiny WEIRDAL EOF echo "$output" [[ $status -eq 1 ]] # error message [[ $output = *"can't parse: -:2,1"* ]] # internal blabber (varies by version) [[ $output = *'No terminal'*"'W'"*'at line 2 col 1'* ]] # Bad long option. run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --chown= foo bar EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't parse: -:2,14"* ]] # Empty input. run ch-image build -t tmpimg -f /dev/null . echo "$output" [[ $status -eq 1 ]] [[ $output = *'no instructions found: /dev/null'* ]] # Newline only. run ch-image build -t tmpimg -f - . <<'EOF' EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'no instructions found: -'* ]] # Comment only. run ch-image build -t tmpimg -f - . <<'EOF' # foo EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'no instructions found: -'* ]] # Only newline, then comment. run ch-image build -t tmpimg -f - . <<'EOF' # foo EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'no instructions found: -'* ]] # Non-ARG instruction before FROM run ch-image build -t tmpimg -f - . <<'EOF' RUN echo uh oh FROM 00_tiny EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'first instruction must be ARG or FROM'* ]] } @test 'Dockerfile: semantic errors' { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # Repeated instruction option. run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --chown=foo --chown=bar fixtures/empty-file . EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *' 2 COPY: repeated option --chown'* ]] # COPY invalid option. run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --foo=foo fixtures/empty-file . EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'COPY: invalid option --foo'* ]] # FROM invalid option. run ch-image build -t tmpimg -f - . <<'EOF' FROM --foo=bar 00_tiny EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'FROM: invalid option --foo'* ]] } @test 'Dockerfile: not-yet-supported features' { # This test also creates images we don't care about. scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # ARG before FROM run ch-image build -t tmpimg -f - . <<'EOF' ARG foo=bar FROM 00_tiny EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'warning: ARG before FROM not yet supported; see issue #779'* ]] # FROM --platform run ch-image build -t tmpimg -f - . <<'EOF' FROM --platform=foo 00_tiny EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'error: not yet supported: issue #778: FROM --platform'* ]] # other instructions run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny ADD foo CMD foo ENTRYPOINT foo LABEL foo ONBUILD foo EOF echo "$output" [[ $status -eq 0 ]] [[ $(echo "$output" | grep -Ec 'not yet supported.+instruction') -eq 5 ]] [[ $output = *'warning: not yet supported, ignored: issue #782: ADD instruction'* ]] [[ $output = *'warning: not yet supported, ignored: issue #780: CMD instruction'* ]] [[ $output = *'warning: not yet supported, ignored: issue #780: ENTRYPOINT instruction'* ]] [[ $output = *'warning: not yet supported, ignored: issue #781: LABEL instruction'* ]] [[ $output = *'warning: not yet supported, ignored: issue #788: ONBUILD instruction'* ]] # .dockerignore files run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'warning: not yet supported, ignored: issue #777: .dockerignore file'* ]] # URL (Git repo) contexts run ch-image build -t not-yet-supported -f - \ git@github.com:hpc/charliecloud.git <<'EOF' FROM 00_tiny EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'error: not yet supported: issue #773: URL context'* ]] run ch-image build -t tmpimg -f - \ https://github.com/hpc/charliecloud.git <<'EOF' FROM 00_tiny EOF echo "$output" [[ $status -eq 1 ]] [[ $output = *'error: not yet supported: issue #773: URL context'* ]] # variable expansion modifiers run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny ARG foo=README COPY fixtures/${foo:+bar} . EOF echo "$output" [[ $status -eq 1 ]] # shellcheck disable=SC2016 [[ $output = *'error: modifiers ${foo:+bar} and ${foo:-bar} not yet supported (issue #774)'* ]] run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny ARG foo=README COPY fixtures/${foo:-bar} . EOF echo "$output" [[ $status -eq 1 ]] # shellcheck disable=SC2016 [[ $output = *'error: modifiers ${foo:+bar} and ${foo:-bar} not yet supported (issue #774)'* ]] } @test 'Dockerfile: unsupported features' { # This test also creates images we don't care about. scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # parser directives run ch-image build -t tmpimg -f - . <<'EOF' # escape=foo # syntax=foo #syntax=foo # syntax=foo #syntax=foo # foo=bar # comment FROM 00_tiny EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'warning: not supported, ignored: parser directives'* ]] [[ $(echo "$output" | grep -Fc 'parser directives') -eq 5 ]] # COPY --from run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --chown=foo fixtures/empty-file . EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'warning: not supported, ignored: COPY --chown'* ]] # Unsupported instructions run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny EXPOSE foo HEALTHCHECK foo MAINTAINER foo STOPSIGNAL foo USER foo VOLUME foo EOF echo "$output" [[ $status -eq 0 ]] [[ $(echo "$output" | grep -Fc 'not supported') -eq 6 ]] [[ $output = *'warning: not supported, ignored: EXPOSE instruction'* ]] [[ $output = *'warning: not supported, ignored: HEALTHCHECK instruction'* ]] [[ $output = *'warning: not supported, ignored: MAINTAINER instruction'* ]] [[ $output = *'warning: not supported, ignored: STOPSIGNAL instruction'* ]] [[ $output = *'warning: not supported, ignored: USER instruction'* ]] [[ $output = *'warning: not supported, ignored: VOLUME instruction'* ]] } @test 'Dockerfile: ENV parsing' { scope standard [[ $CH_BUILDER = none ]] && skip 'no builder' env_expected=$(cat <<'EOF' ('chse_0a', 'value 0a') ('chse_0b', 'value 0b') ('chse_1b', 'value 1b ') ('chse_2a', 'value2a') ('chse_2b', 'value2b') ('chse_2c', 'chse2: value2a') ('chse_2d', 'chse2: value2a') ('chse_3a', '"value3a"') ('chse_4a', 'value4a') ('chse_4b', 'value4b') ('chse_5a', 'value5a') ('chse_5b', 'value5b') ('chse_6a', 'value6a') ('chse_6b', 'value6b') EOF ) run ch-build --no-cache -t tmpimg -f - . <<'EOF' FROM centos8 # FIXME: make this more comprehensive, e.g. space-separate vs. # equals-separated for everything. # Value has internal space. ENV chse_0a value 0a ENV chse_0b="value 0b" # Value has internal space and trailing space. NOTE: Beware your editor # "helpfully" removing the trailing space. # # FIXME: Docker removes the trailing space! #ENV chse_1a value 1a ENV chse_1b="value 1b " # FIXME: currently a parse error. #ENV chse_1c=value\ 1c\ # Value surrounded by double quotes, which are not part of the value. ENV chse_2a "value2a" ENV chse_2b="value2b" # Substitute previous value, space-separated, without quotes. ENV chse_2c chse2: ${chse_2a} # Substitute a previous value, equals-separated, with quotes. ENV chse_2d="chse2: ${chse_2a}" # Backslashed quotes are included in value. ENV chse_3a \"value3a\" # FIXME: backslashes end up literal #ENV chse_3b=\"value3b\" # Multiple variables in the same instruction. ENV chse_4a=value4a chse_5a=value5a ENV chse_4b=value4b \ chse_5b=value5b # Value contains line continuation. FIXME: I think something isn't quite right # here. The backslash, newline sequence appears in the parse tree but not in # the output. That doesn't seem right. ENV chse_6a value\ 6a ENV chse_6b "value\ 6b" # FIXME: currently a parse error. #ENV chse_4=value4 chse_5="value5 foo" chse_6=value6\ foo chse_7=\"value7\" # Print output with Python to avoid ambiguity. RUN python3 -c 'import os; [print((k,v)) for (k,v) in sorted(os.environ.items()) if "chse_" in k]' EOF echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$env_expected") <(echo "$output" | grep -E "^\('chse_") } @test 'Dockerfile: SHELL' { scope standard [[ $CH_BUILDER = none ]] && skip 'no builder' [[ $CH_BUILDER = buildah* ]] && skip "Buildah doesn't support SHELL" # test that SHELL command can change executables and parameters run ch-build -t tmpimg --no-cache -f - . <<'EOF' FROM 00_tiny RUN echo default: $0 SHELL ["/bin/ash", "-c"] RUN echo ash: $0 SHELL ["/bin/sh", "-v", "-c"] RUN echo sh-v: $0 EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *"default: /bin/sh"* ]] [[ $output = *"ash: /bin/ash"* ]] [[ $output = *"sh-v: /bin/sh"* ]] # test that it fails if shell doesn't exist run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny SHELL ["/doesnotexist", "-c"] RUN print("hello") EOF echo "$output" [[ $status -eq 1 ]] if [[ $CH_BUILDER = ch-image ]]; then [[ $output = *"/doesnotexist: No such file or directory"* ]] else [[ $output = *"/doesnotexist: no such file or directory"* ]] fi # test that it fails if no paramaters run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny SHELL ["/bin/sh"] RUN true EOF echo "$output" [[ $status -ne 0 ]] # different builders use different error exit codes [[ $output = *"/bin/sh: can't open 'true': No such file or directory"* ]] # test that it works with python3 run ch-build -t tmpimg -f - . <<'EOF' FROM centos7 SHELL ["/usr/bin/python3", "-c"] RUN print ("hello") EOF echo "$output" [[ $status -eq 0 ]] if [[ $CH_BUILDER = ch-image ]]; then [[ $output = *"grown in 3 instructions: tmpimg"* ]] else [[ $output = *"Successfully built"* ]] fi } @test 'Dockerfile: ARG and ENV values' { # We use full scope for builders other than ch-image because (1) with # ch-image, we are responsible for --build-arg being implemented correctly # and (2) Docker and Buildah take a full minute for this test, vs. three # seconds for ch-image. if [[ $CH_BUILDER = ch-image ]]; then scope standard elif [[ $CH_BUILDER = none ]]; then skip 'no builder' else scope full fi prerequisites_ok argenv # Note that this test illustrates a number of behavior differences between # the builders. For most of these, but not all, Docker and Buildah have # the same behavior and ch-image differs. echo '*** default (no --build-arg)' env_expected=$(cat <<'EOF' chse_arg2_df=arg2 chse_arg3_df=arg3 arg2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) run ch-build --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** one --build-arg, has no default' env_expected=$(cat <<'EOF' chse_arg1_df=foo1 chse_arg2_df=arg2 chse_arg3_df=arg3 arg2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) run ch-build --build-arg chse_arg1_df=foo1 \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** one --build-arg, has default' env_expected=$(cat <<'EOF' chse_arg2_df=foo2 chse_arg3_df=arg3 foo2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) run ch-build --build-arg chse_arg2_df=foo2 \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** one --build-arg from environment' if [[ $CH_BUILDER == ch-image ]]; then env_expected=$(cat <<'EOF' chse_arg1_df=foo1 chse_arg2_df=arg2 chse_arg3_df=arg3 arg2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) else # Docker and Buildah do not appear to take --build-arg values from the # environment. This is contrary to the "docker build" documentation; # "buildah bud" does not mention it either way. Tested on 18.09.7 and # 1.9.1-dev, respectively. env_expected=$(cat <<'EOF' chse_arg2_df=arg2 chse_arg3_df=arg3 arg2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) fi chse_arg1_df=foo1 \ run ch-build --build-arg chse_arg1_df \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** one --build-arg set to empty string' env_expected=$(cat <<'EOF' chse_arg1_df= chse_arg2_df=arg2 chse_arg3_df=arg3 arg2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) chse_arg1_df=foo1 \ run ch-build --build-arg chse_arg1_df= \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** two --build-arg' env_expected=$(cat <<'EOF' chse_arg2_df=bar2 chse_arg3_df=bar3 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) run ch-build --build-arg chse_arg2_df=bar2 \ --build-arg chse_arg3_df=bar3 \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** repeated --build-arg' env_expected=$(cat <<'EOF' chse_arg2_df=bar2 chse_arg3_df=arg3 bar2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) run ch-build --build-arg chse_arg2_df=FOO \ --build-arg chse_arg2_df=bar2 \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** two --build-arg with substitution' if [[ $CH_BUILDER == ch-image ]]; then env_expected=$(cat <<'EOF' chse_arg2_df=bar2 chse_arg3_df=bar3 bar2 chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) else # Docker and Buildah don't substitute provided values. env_expected=$(cat <<'EOF' chse_arg2_df=bar2 chse_arg3_df=bar3 ${chse_arg2_df} chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) fi # shellcheck disable=SC2016 run ch-build --build-arg chse_arg2_df=bar2 \ --build-arg chse_arg3_df='bar3 ${chse_arg2_df}' \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" [[ $status -eq 0 ]] env_actual=$(echo "$output" | grep -E '^chse_') diff -u <(echo "$env_expected") <(echo "$env_actual") echo '*** ARG not in Dockerfile' # Note: We don't test it, but for Buildah, the variable does show up in # the build environment. run ch-build --build-arg chse_doesnotexist=foo \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" if [[ $CH_BUILDER = ch-image ]]; then [[ $status -eq 1 ]] else [[ $status -eq 0 ]] fi [[ $output = *'not consumed'* ]] [[ $output = *'chse_doesnotexist'* ]] echo '*** ARG not in environment' run ch-build --build-arg chse_arg1_df \ --no-cache -t tmpimg -f ./Dockerfile.argenv . echo "$output" if [[ $CH_BUILDER = ch-image ]]; then [[ $status -eq 1 ]] [[ $output = *'--build-arg: chse_arg1_df: no value and not in environment'* ]] else [[ $status -eq 0 ]] fi } @test 'Dockerfile: COPY list form' { scope standard [[ $CH_BUILDER == ch-image ]] || skip 'ch-image only' # single source run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY ["fixtures/empty-file", "."] EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *"COPY ['fixtures/empty-file'] -> '.'"* ]] test -f "$CH_IMAGE_STORAGE"/img/tmpimg/empty-file # multiple source run ch-image build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY ["fixtures/empty-file", "fixtures/README", "."] EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *"COPY ['fixtures/empty-file', 'fixtures/README'] -> '.'"* ]] test -f "$CH_IMAGE_STORAGE"/img/tmpimg/empty-file test -f "$CH_IMAGE_STORAGE"/img/tmpimg/README } @test 'Dockerfile: COPY errors' { scope standard [[ $CH_BUILDER = none ]] && skip 'no builder' [[ $CH_BUILDER = buildah* ]] && skip 'Buildah untested' # Dockerfile on stdin, so no context directory. if [[ $CH_BUILDER != ch-image ]]; then # ch-image doesn't support this yet run ch-build -t tmpimg - <<'EOF' FROM 00_tiny COPY doesnotexist . EOF echo "$output" [[ $status -ne 0 ]] if [[ $CH_BUILDER = docker ]]; then # This error message seems wrong. I was expecting something about # no context, so COPY not allowed. [[ $output = *'file does not exist'* ]] else false # unimplemented fi fi # SRC not inside context directory. # # Case 1: leading "..". run ch-build -t tmpimg -f - sotest <<'EOF' FROM 00_tiny COPY ../common.bash . EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'outside'*'context'* ]] # Case 2: ".." inside path. run ch-build -t tmpimg -f - sotest <<'EOF' FROM 00_tiny COPY lib/../../common.bash . EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'outside'*'context'* ]] # Case 3: symlink leading outside context directory. run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY fixtures/symlink-to-tmp . EOF echo "$output" [[ $status -ne 0 ]] if [[ $CH_BUILDER = docker ]]; then [[ $output = *'file does not exist'* ]] else [[ $output = *'outside'*'context'* ]] fi # Multiple sources and non-directory destination. run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY Build.missing common.bash /etc/fstab/ EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'not a directory'* ]] run ch-build -t foo -f - . <<'EOF' FROM 00_tiny COPY Build.missing common.bash /etc/fstab EOF echo "$output" [[ $status -ne 0 ]] if [[ $CH_BUILDER = docker ]]; then [[ $output = *'must be a directory'* ]] else [[ $output = *'not a directory'* ]] fi run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY run /etc/fstab/ EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'not a directory'* ]] run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY run /etc/fstab EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'not a directory'* ]] # No sources given. run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY . EOF echo "$output" [[ $status -ne 0 ]] if [[ $CH_BUILDER = ch-image ]]; then [[ $output = *"error: can't parse: -:2,7"* ]] else [[ $output = *'COPY requires at least two arguments'* ]] fi run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY ["."] EOF echo "$output" [[ $status -ne 0 ]] if [[ $CH_BUILDER = ch-image ]]; then [[ $output = *"error: can't COPY: must specify at least one source"* ]] else [[ $output = *'COPY requires at least two arguments'* ]] fi # No sources found. run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY doesnotexist . EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'not found'* ]] # Some sources found. run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY fixtures/README doesnotexist . EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'not found'* ]] # No context with Dockerfile on stdin by context "-" run ch-build -t tmpimg - <<'EOF' FROM 00_tiny COPY fixtures/README . EOF echo "$output" [[ $status -ne 0 ]] if [[ $CH_BUILDER = ch-image ]]; then [[ $output = *"error: can't COPY: no context because \"-\" given"* ]] else [[ $output = *'COPY failed: file not found in build context or'* ]] fi } @test 'Dockerfile: COPY --from errors' { scope standard [[ $CH_BUILDER = none ]] && skip 'no builder' [[ $CH_BUILDER = buildah* ]] && skip 'Buildah untested' # Note: Docker treats several types of erroneous --from names as another # image and tries to pull it. To avoid clashes with real, pullable images, # we use the random name "uhigtsbjmfps" (https://www.random.org/strings/). # current index run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --from=0 /etc/fstab / EOF echo "$output" [[ $status -ne 0 ]] [[ $output = *'current'*'stage'* ]] # current name run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny AS uhigtsbjmfps COPY --from=uhigtsbjmfps /etc/fstab / EOF echo "$output" [[ $status -ne 0 ]] case $CH_BUILDER in ch-image) [[ $output = *'current stage'* ]] ;; docker) [[ $output = *'pull access denied'*'repository does not exist'* ]] ;; *) false ;; esac # index does not exist run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --from=1 /etc/fstab / EOF echo "$output" [[ $status -ne 0 ]] case $CH_BUILDER in ch-image) [[ $output = *'does not exist'* ]] ;; docker) [[ $output = *'index out of bounds'* ]] ;; *) false ;; esac # name does not exist run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --from=uhigtsbjmfps /etc/fstab / EOF echo "$output" [[ $status -ne 0 ]] case $CH_BUILDER in ch-image) [[ $output = *'does not exist'* ]] ;; docker) [[ $output = *'pull access denied'*'repository does not exist'* ]] ;; *) false ;; esac # index exists, but is later run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --from=1 /etc/fstab / FROM 00_tiny EOF echo "$output" [[ $status -ne 0 ]] case $CH_BUILDER in ch-image) [[ $output = *'does not exist yet'* ]] ;; docker) [[ $output = *'index out of bounds'* ]] ;; *) false ;; esac # name is later run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --from=uhigtsbjmfps /etc/fstab / FROM 00_tiny AS uhigtsbjmfps EOF echo "$output" [[ $status -ne 0 ]] case $CH_BUILDER in ch-image) [[ $output = *'does not exist'* ]] [[ $output != *'does not exist yet'* ]] # so we review test ;; docker) [[ $output = *'pull access denied'*'repository does not exist'* ]] ;; *) false ;; esac # negative index run ch-build -t tmpimg -f - . <<'EOF' FROM 00_tiny COPY --from=-1 /etc/fstab / FROM 00_tiny EOF echo "$output" [[ $status -ne 0 ]] case $CH_BUILDER in ch-image) [[ $output = *'invalid negative stage index'* ]] ;; docker) [[ $output = *'index out of bounds'* ]] ;; *) false ;; esac } @test 'Dockerfile: FROM scratch' { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # pull it; validate special handling run ch-image pull -v scratch echo "$output" [[ $status -eq 0 ]] [[ $output = *'manifest: using internal library'* ]] [[ $output = *'no config found; initializing empty metadata'* ]] [[ $output != *'layer 1'* ]] # no layers # remove ch-image delete scratch # build 1; validate pulled with special handling run ch-image build -v -t foo -f - . <<'EOF' FROM scratch EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'base image not found, pulling'* ]] [[ $output = *'manifest: using internal library'* ]] [[ $output = *'no config found; initializing empty metadata'* ]] [[ $output != *'layer 1'* ]] # no layers ls -lha "${CH_IMAGE_STORAGE}/img/foo/usr" [[ $(find /tmp/foo -mindepth 1 "${CH_IMAGE_STORAGE}/img/foo/usr") = '' ]] # build 2; validate not pulled run ch-image build -v -t tmpimg -f - . <<'EOF' FROM scratch EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *"base image found: ${CH_IMAGE_STORAGE}/img/scratch"* ]] } charliecloud-0.26/test/build/50_localregistry.bats000066400000000000000000000076721417231051300222300ustar00rootroot00000000000000load ../common # shellcheck disable=SC2034 tag='ch-image push' # Note: These tests use a local registry listening on localhost:5000 but do # not start it. Therefore, they do not depend on whether the pushed images are # already present. setup () { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' # Skip unless GitHub Actions or there is a listener on localhost:5000. if [[ -z $GITHUB_ACTIONS ]] && ! ( command -v ss > /dev/null 2>&1 \ && ss -lnt | grep -F :5000); then skip 'no local registry' fi # WARNING: If you came here looking for a way to non-interactively # authenticate with ch-image, be aware that these environment variables # are currently undocumented and unsupported. export CH_IMAGE_USERNAME=charlie export CH_IMAGE_PASSWORD=test } @test "${tag}: without destination reference" { # FIXME: This test sets up an alias tag manually so we can use it to push. # Remove when we have real aliasing support for images. ln -vfns 00_tiny "$CH_IMAGE_STORAGE"/img/localhost:5000%00_tiny run ch-image -v --tls-no-verify push localhost:5000/00_tiny echo "$output" [[ $status -eq 0 ]] [[ $output = *'pushing image: localhost:5000/00_tiny'* ]] [[ $output = *"image path: ${CH_IMAGE_STORAGE}/img/localhost:5000%00_tiny"* ]] rm "$CH_IMAGE_STORAGE"/img/localhost:5000%00_tiny } @test "${tag}: with destination reference" { run ch-image -v --tls-no-verify push 00_tiny localhost:5000/00_tiny echo "$output" [[ $status -eq 0 ]] [[ $output = *'pushing image: 00_tiny'* ]] [[ $output = *'destination: localhost:5000/00_tiny'* ]] [[ $output = *"image path: ${CH_IMAGE_STORAGE}/img/00_tiny"* ]] re='layer 1/1: [0-9a-f]{7}: already present' [[ $output =~ $re ]] } @test "${tag}: with --image" { # NOTE: This also tests round-tripping and a more complex destination ref. img="$BATS_TMPDIR"/pushtest-up img2="$BATS_TMPDIR"/pushtest-down mkdir -p "$img" "$img"/{bin,dev,usr} # Set up setuid/setgid files and directories. touch "$img"/{setuid_file,setgid_file} chmod 4640 "$img"/setuid_file chmod 2640 "$img"/setgid_file mkdir -p "$img"/{setuid_dir,setgid_dir} chmod 4750 "$img"/setuid_dir chmod 2750 "$img"/setgid_dir ls -l "$img" [[ $(stat -c '%A' "$img"/setuid_file) = -rwSr----- ]] [[ $(stat -c '%A' "$img"/setgid_file) = -rw-r-S--- ]] [[ $(stat -c '%A' "$img"/setuid_dir) = drwsr-x--- ]] [[ $(stat -c '%A' "$img"/setgid_dir) = drwxr-s--- ]] # Push the image run ch-image -v --tls-no-verify push --image "$img" \ localhost:5000/foo/bar:weirdal echo "$output" [[ $status -eq 0 ]] [[ $output = *'pushing image: localhost:5000/foo/bar:weirdal'* ]] [[ $output = *"image path: ${img}"* ]] [[ $output = *'stripping unsafe setgid bit: ./setgid_dir'* ]] [[ $output = *'stripping unsafe setgid bit: ./setgid_file'* ]] [[ $output = *'stripping unsafe setuid bit: ./setuid_dir'* ]] [[ $output = *'stripping unsafe setuid bit: ./setuid_file'* ]] # Pull it back run ch-image -v --tls-no-verify pull localhost:5000/foo/bar:weirdal "$img2" echo "$output" [[ $status -eq 0 ]] ls -l "$img2" [[ $(stat -c '%A' "$img2"/setuid_file) = -rw-r----- ]] [[ $(stat -c '%A' "$img2"/setgid_file) = -rw-r----- ]] [[ $(stat -c '%A' "$img2"/setuid_dir) = drwxr-x--- ]] [[ $(stat -c '%A' "$img2"/setgid_dir) = drwxr-x--- ]] } @test "${tag}: consistent layer hash" { run ch-image push --tls-no-verify 00_tiny localhost:5000/00_tiny echo "$output" [[ $status -eq 0 ]] push1=$(echo "$output" | grep -E 'layer 1/1: .+: checking') run ch-image push --tls-no-verify 00_tiny localhost:5000/00_tiny echo "$output" [[ $status -eq 0 ]] push2=$(echo "$output" | grep -E 'layer 1/1: .+: checking') diff -u <(echo "$push1") <(echo "$push2") } charliecloud-0.26/test/build/50_misc.bats000066400000000000000000000003441417231051300202650ustar00rootroot00000000000000load ../common @test 'ch-build --builder-info' { scope standard ch-build --builder-info } @test 'sotest executable works' { scope quick export LD_LIBRARY_PATH=./sotest ldd sotest/sotest sotest/sotest } charliecloud-0.26/test/build/60_force.bats000066400000000000000000000033631417231051300204350ustar00rootroot00000000000000load ../common # shellcheck disable=SC2034 tag='ch-image --force' setup () { [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' } @test "${tag}: no matching distro" { scope standard # without --force run ch-image -v build -t tmpimg -f - . <<'EOF' FROM alpine:3.9 EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'--force not available (no suitable config found)'* ]] # with --force run ch-image -v build --force -t tmpimg -f - . <<'EOF' FROM alpine:3.9 EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'--force not available (no suitable config found)'* ]] } @test "${tag}: --no-force-detect" { scope standard run ch-image -v build --no-force-detect -t tmpimg -f - . <<'EOF' FROM alpine:3.9 EOF echo "$output" [[ $status -eq 0 ]] [[ $output = *'not detecting --force config, per --no-force-detect'* ]] } @test "${tag}: misc errors" { scope standard run ch-image build --force --no-force-detect . echo "$output" [[ $status -eq 1 ]] [[ $output = 'error'*'are incompatible'* ]] } @test "${tag}: multiple RUN" { scope standard # 1. List form of RUN. # 2. apt-get not at beginning. run ch-image -v build --force -t tmpimg -f - . <<'EOF' FROM debian:buster RUN true RUN true && apt-get update RUN ["apt-get", "install", "-y", "hello"] EOF echo "$output" [[ $status -eq 0 ]] [[ $(echo "$output" | grep -Fc 'init step 1: checking: $') -eq 1 ]] [[ $(echo "$output" | grep -Fc 'init step 1: $') -eq 1 ]] [[ $(echo "$output" | grep -Fc 'RUN: new command:') -eq 2 ]] [[ $output = *'init: already initialized'* ]] [[ $output = *'--force: init OK & modified 2 RUN instructions'* ]] [[ $output = *'grown in 4 instructions: tmpimg'* ]] } charliecloud-0.26/test/build/99_cleanup.bats000066400000000000000000000006671417231051300210060ustar00rootroot00000000000000load ../common @test 'nothing unexpected in tarball directory' { scope quick run find "$ch_tardir" -mindepth 1 -maxdepth 1 \ -not \( -name 'WEIRD_AL_YANKOVIC' \ -o -name '*.sqfs' \ -o -name '*.tar.gz' \ -o -name '*.tar.xz' \ -o -name '*.pq_missing' \) echo "$output" [[ $output = '' ]] } charliecloud-0.26/test/common.bash000066400000000000000000000231451417231051300172070ustar00rootroot00000000000000# shellcheck shell=bash arch_exclude () { # Skip the test if architecture (from "uname -m") matches $1. [[ $(uname -m) != "$1" ]] || skip "arch ${1}" } archive_grep () { image="$1" case $image in *.sqfs) unsquashfs -l "$image" | grep 'squashfs-root/ch/environment' ;; *) tar -tf "$image" | grep -E '^(\./)?ch/environment$' ;; esac } archive_ok () { ls -ld "$1" || true test -f "$1" test -s "$1" } builder_ok () { # FIXME: Currently we make fairly limited tagging for some builders. # Uncomment below when they can be supported by all the builders. builder_tag_p "$1" #builder_tag_p "${1}:latest" #docker_tag_p "${1}:$(ch-run --version |& tr '~+' '--')" } builder_tag_p () { printf 'image tag %s ... ' "$1" case $CH_BUILDER in buildah*) hash_=$(buildah images -q "$1" | sort -u) if [[ $hash_ ]]; then echo "$hash_" return 0 fi ;; ch-image) if [[ -d ${CH_IMAGE_STORAGE}/img/${1} ]]; then echo "ok" return 0 fi ;; docker) hash_=$(docker_ images -q "$1" | sort -u) if [[ $hash_ ]]; then echo "$hash_" return 0 fi ;; esac echo 'not found' return 1 } crayify_mpi_or_skip () { if [[ $ch_cray ]]; then # shellcheck disable=SC2086 $ch_mpirun_node ch-fromhost --cray-mpi "$1" else skip 'host is not a Cray' fi } # Do we need sudo to run docker? if docker info > /dev/null 2>&1; then docker_ () { docker "$@" } else docker_ () { sudo docker "$@" } fi env_require () { if [[ -z ${!1} ]]; then printf '$%s is empty or not set\n\n' "$1" >&2 exit 1 fi } image_ok () { ls -ld "$1" "${1}/WEIRD_AL_YANKOVIC" || true test -d "$1" ls -ld "$1" || true byte_ct=$(du -s -B1 "$1" | cut -f1) echo "$byte_ct" [[ $byte_ct -ge 3145728 ]] # image is at least 3MiB } multiprocess_ok () { [[ $ch_multiprocess ]] || skip 'no multiprocess launch tool found' # If the MPI in the container is MPICH, we only try host launch on Crays. # For the other settings (workstation, other Linux clusters), it may or # may not work; we simply haven't tried. [[ $ch_mpi = mpich && -z $ch_cray ]] \ && skip 'MPICH untested' # Exit function successfully. true } pedantic_fail () { msg=$1 if [[ -n $ch_pedantic ]]; then echo "$msg" 1>&2 return 1 else skip "$msg" fi } prerequisites_ok () { if [[ -f $CH_TEST_TARDIR/${1}.pq_missing ]]; then skip 'build prerequisites not met' fi } # Wrapper for Bats run() to work around Bats bug #89 by saving/restoring $IFS. # See issues #552 and #555 and https://stackoverflow.com/a/32425874. if type run &> /dev/null; then eval bats_"$(declare -f run)" fi run () { local ifs_old="$IFS" bats_run "$@" IFS="$ifs_old" } scope () { if [[ -n $ch_one_test ]]; then # Ignore scope if a single test is given. if [[ $BATS_TEST_DESCRIPTION != *"$ch_one_test"* ]]; then skip 'per --file' else return 0 fi fi case $1 in # $1 is the test's scope quick) ;; # always run quick-scope tests standard) if [[ $CH_TEST_SCOPE = quick ]]; then skip "${1} scope" fi ;; full) if [[ $CH_TEST_SCOPE = quick || $CH_TEST_SCOPE = standard ]]; then skip "${1} scope" fi ;; skip) skip "developer-skipped; see comments and/or issues" ;; *) exit 1 esac } unpack_img_all_nodes () { if [[ $1 ]]; then $ch_mpirun_node ch-tar2dir "${ch_tardir}/${ch_tag}.tar.gz" "$ch_imgdir" else skip 'not needed' fi } # Predictable sorting and collation export LC_ALL=C # Do we have what we need? env_require CH_TEST_TARDIR env_require CH_TEST_IMGDIR env_require CH_TEST_PERMDIRS env_require CH_BUILDER if [[ $CH_BUILDER == ch-image ]]; then env_require CH_IMAGE_STORAGE fi # User-private temporary directory in case multiple users are running the # tests simultaneously. # shellcheck disable=SC2154 btnew=$TMP_/bats.tmp mkdir -p "$btnew" chmod 700 "$btnew" export BATS_TMPDIR=$btnew [[ $(stat -c %a "$BATS_TMPDIR") = '700' ]] # shellcheck disable=SC2034 ch_runfile=$(command -v ch-run) # shellcheck disable=SC2034 ch_lib=$(ch-build --_lib-path) # Charliecloud version. ch_version=$(ch-run --version 2>&1) # shellcheck disable=SC2034 ch_version_base=$(echo "$ch_version" | sed -E 's/~.+//') # shellcheck disable=SC2034 ch_version_docker=$(echo "$ch_version" | tr '~+' '--') # Separate directories for tarballs and images. # # Canonicalize both so the have consistent paths and we can reliably use them # in tests (see issue #143). We use readlink(1) rather than realpath(2), # despite the admonition in the man page, because it's more portable [1]. # # We use "readlink -m" rather than "-e" or "-f" to account for the possibility # of some directory anywhere the path not existing [2], which has bitten us # multiple times; see issues #347 and #733. With this switch, if something is # missing, readlink(1) returns the path unchanged, and checks later convert # that to a proper error. # # [1]: https://unix.stackexchange.com/a/136527 # [2]: http://man7.org/linux/man-pages/man1/readlink.1.html ch_imgdir=$(readlink -m "$CH_TEST_IMGDIR") ch_tardir=$(readlink -m "$CH_TEST_TARDIR") # shellcheck disable=SC2034 ch_mounts="${ch_imgdir}/mounts" # Image information. # shellcheck disable=SC2034 ch_tag=${CH_TEST_TAG:-NO_TAG_SET} # set by Makefile; many tests don't need it # shellcheck disable=SC2034 ch_img=${ch_imgdir}/${ch_tag} # shellcheck disable=SC2034 ch_tar=${ch_tardir}/${ch_tag}.tar.gz # shellcheck disable=SC2034 ch_ttar=${ch_tardir}/chtest.tar.gz # shellcheck disable=SC2034 ch_timg=${ch_imgdir}/chtest # MPICH requires different handling from OpenMPI. Set a variable to enable # some kludges. if [[ $ch_tag = *'-mpich' ]]; then ch_mpi=mpich # First kludge. MPICH's internal launcher is called "Hydra". If Hydra sees # Slurm environment variables, it tries to launch even local ranks with # "srun". This of course fails within the container. You can't turn it off # by building with --without-slurm like OpenMPI, so we fall back to this # environment variable at run time. export HYDRA_LAUNCHER=fork else ch_mpi=openmpi fi # Crays are special. if [[ -f /etc/opt/cray/release/cle-release ]]; then ch_cray=yes else ch_cray= fi # Multi-node and multi-process stuff. Do not use Slurm variables in tests; use # these instead: # # ch_multiprocess can run multiple processes # ch_multinode can run on multiple nodes # ch_nodes number of nodes in job # ch_cores_node number of cores per node # ch_cores_total total cores in job ($ch_nodes × $ch_cores_node) # # ch_mpirun_node command to run one rank per node # ch_mpirun_core command to run one rank per physical core # ch_mpirun_2 command to run two ranks per job launcher default # ch_mpirun_2_1node command to run two ranks on one node # ch_mpirun_2_2node command to run two ranks on two nodes (one rank/node) # if [[ $SLURM_JOB_ID ]]; then ch_nodes=$SLURM_JOB_NUM_NODES else ch_nodes=1 fi # One rank per hyperthread can exhaust hardware contexts, resulting in # communication failure. Use one rank per core to avoid this. There are ways # to do this with Slurm, but they need Slurm configuration that seems # unreliably present. This seems to be the most portable way to do this. ch_cores_node=$(lscpu -p | tail -n +5 | sort -u -t, -k 2 | wc -l) # shellcheck disable=SC2034 ch_cores_total=$((ch_nodes * ch_cores_node)) ch_mpirun_node= ch_mpirun_np="-np ${ch_cores_node}" # shellcheck disable=SC2034 ch_unslurm= if [[ $SLURM_JOB_ID ]]; then ch_multiprocess=yes ch_mpirun_node='srun --ntasks-per-node 1' ch_mpirun_core="srun --ntasks-per-node $ch_cores_node" ch_mpirun_2='srun -n2' ch_mpirun_2_1node='srun -N1 -n2' # OpenMPI 3.1 pukes when guest-launched and Slurm environment variables # are present. Work around this by fooling OpenMPI into believing it's not # in a Slurm allocation. if [[ $ch_mpi = openmpi ]]; then # shellcheck disable=SC2034 ch_unslurm='--unset-env=SLURM*' fi if [[ $ch_nodes -eq 1 ]]; then ch_multinode= ch_mpirun_2_2node=false else ch_multinode=yes ch_mpirun_2_2node='srun -N2 -n2' fi else # shellcheck disable=SC2034 ch_multinode= # shellcheck disable=SC2034 ch_mpirun_2_2node=false if command -v mpirun > /dev/null 2>&1; then ch_multiprocess=yes ch_mpirun_node='mpirun --map-by ppr:1:node' ch_mpirun_core="mpirun ${ch_mpirun_np}" ch_mpirun_2='mpirun -np 2' ch_mpirun_2_1node='mpirun -np 2 --host localhost:2' else ch_multiprocess= ch_mpirun_node='' # shellcheck disable=SC2034 ch_mpirun_core=false # shellcheck disable=SC2034 ch_mpirun_2=false # shellcheck disable=SC2034 ch_mpirun_2_1node=false fi fi # Do we have and want sudo? if [[ $CH_TEST_SUDO ]] \ && command -v sudo >/dev/null 2>&1 \ && sudo -v > /dev/null 2>&1; then # This isn't super reliable; it returns true if we have *any* sudo # privileges, not specifically to run the commands we want to run. # shellcheck disable=SC2034 ch_have_sudo=yes fi charliecloud-0.26/test/docs-sane.py.in000066400000000000000000000111731417231051300177110ustar00rootroot00000000000000#!%PYTHON_SHEBANG% # coding: utf-8 # This script performs sanity checking on the documentation: # # 1. Man page consistency. # # a. man/charliecloud.7 exists. # # b. Every executable FOO in bin has: # # - doc/FOO.rst # - doc/FOO_desc.rst # - doc/man/FOO.1 # - a section in doc/command-usage.rst # - an entry under "See also" in charliecloud.7 # # c. There aren't the things in (b) except for the executables (modulo a # few execeptions for the other documentation source files). # # d. Summary in "FOO --help" matches the man page and command-usage.rst. from __future__ import print_function import glob import re import os import subprocess import sys CH_BASE = os.path.abspath(os.path.dirname(__file__) + "/..") if (not os.path.isfile("%s/bin/ch-run" % CH_BASE)): print("not found: %s/bin/ch-run" % CH_BASE, file=sys.stderr) sys.exit(1) win = True def main(): check_man() if (win): print("ok") sys.exit(0) else: sys.exit(1) def check_man(): os.chdir(CH_BASE + "/bin") execs = { f for f in os.listdir(".") if (os.path.isfile(f) and os.stat(f).st_mode & 0o111) } helps = { x: help_get(x) for x in execs } os.chdir(CH_BASE + "/doc") man_rsts = set(glob.glob("ch-*.rst")) man_rsts_expected = ( { i + ".rst" for i in execs } | { i + "_desc.rst" for i in execs }) lose_lots("unexpected .rst", man_rsts - man_rsts_expected) lose_lots("missing .rst", man_rsts_expected - man_rsts) sect_matches = set(re.finditer(r"\n([a-z0-9-]+)\n\++\n\n([^\n]+)\n", open("command-usage.rst").read())) sects = { m.group(1) for m in sect_matches } sects_expected = execs lose_lots("unexpected § in command-usage.rst", sects - sects_expected) lose_lots("missing § in command-usage.rst", sects_expected - sects) sect_helps = { m.group(1): m.group(2) for m in sect_matches } lose_lots("bad summary in command-usage.rst", { "%s: %s" % (p, s) for (p, s) in sect_helps.items() if ( p in helps and summary_unrest(s) != helps[p]) and "deprecated" not in s.lower() }) sees = { m.group(0) for m in re.finditer(r"ch-[a-z0-9-]+\(1\)", open("charliecloud.rst").read()) } sees_expected = { i + "(1)" for i in execs } lose_lots("unexpected see-also in charliecloud.rst", sees - sees_expected) lose_lots("missing see-also in charliecloud.rst", sees_expected - sees) conf = {} execfile("./conf.py", conf) for (docname, name, desc, authors, section) in conf["man_pages"]: if (docname != name): lose("conf.py: startdocname != name: %s != %s" % (docname, name)) if (len(authors) != 0): lose("conf.py: bad authors: %s: %s" % (name, authors)) if (name != "charliecloud"): if (section != 1): lose("conf.py: bad section: %s: %s != 1" % (name, section)) if (name not in helps): lose("conf.py: unexpected man page: %s" % name) elif (desc + "." != helps[name] and "deprecated" not in desc.lower()): lose("conf.py: bad summary: %s: %s" % (name, desc)) else: if (section != 7): lose("conf.py: bad section: %s: %s != 7" % (name, section)) os.chdir(CH_BASE + "/doc/man") mans = set(glob.glob("*.1")) mans_expected = { i + ".1" for i in execs } lose_lots("unexpected man", mans - mans_expected) lose_lots("missing man", mans_expected - mans) try: execfile # Python 2 except NameError: # Python 3; provide our own. See: https://stackoverflow.com/questions/436198 def execfile(path, globals_): with open(path, "rb") as fp: code = compile(fp.read(), path, "exec") exec(code, globals_) def help_get(prog): try: out = subprocess.check_output(["./" + prog, "--help"], universal_newlines=True, stderr=subprocess.STDOUT) except Exception as x: lose("%s --help failed: %s" % (prog, str(x))) return None m = re.search(r"^(?:[Uu]sage:[^\n]+\n| +[^\n]+\n|\n)*([^\n]+)\n", out) if (m is None): lose("%s --help: no summary found" % prog) return None else: return m.group(1) def lose(msg): print(msg) global win win = False def lose_lots(prefix, losers): for loser in losers: lose("%s: %s" % (prefix, loser)) def summary_unrest(rest): t = rest t = t.replace(r":code:`", '"') t = t.replace(r"`", '"') return t if (__name__ == "__main__"): main() charliecloud-0.26/test/fixtures/000077500000000000000000000000001417231051300167245ustar00rootroot00000000000000charliecloud-0.26/test/fixtures/README000066400000000000000000000001061417231051300176010ustar00rootroot00000000000000You can see what tests use the fixtures with "misc/grep 'fixtures/'". charliecloud-0.26/test/fixtures/empty-file000066400000000000000000000000001417231051300207100ustar00rootroot00000000000000charliecloud-0.26/test/force-auto.py.in000066400000000000000000000214411417231051300201000ustar00rootroot00000000000000#!/usr/bin/env python3 # This script generates a BATS file to exercise "ch-image build --force" # across a variety of distributions. It's used by Makefile.am. # # About each distribution, we remember: # # - base image name # - config name it should select # - scope # standard: all tests in standard scope # full: one test in standard scope, the rest in full # - any tests invalid for that distro # # For each distribution, we test these factors: # # - whether or not --force is given (2) # - whether or not preparation for --force is already done (2) # - commands that (4) # - don't need --force, and fail # - don't need --force, and succeed # - apparently need --force but in fact do not # - really do need --force # # This would appear to yield 2×2×4 = 16 tests per distribution. However: # # 1. We only try pre-prepared images for "really need" commands with --force # given, to save time, so it's at most 9 potential tests. # # 2. Some distros don't have any pre-preparation step, so that test doesn't # make sense. # # 3. Some distros don't have an "apparently need" command determined. # # Bottom line, the number of tests per distro varies. See the code below for # specific details. import abc import enum @enum.unique class Scope(enum.Enum): STANDARD = "standard" FULL = "full" @enum.unique class Run(enum.Enum): UNNEEDED_FAIL = "unneeded fail" UNNEEDED_WIN = "unneeded win" FAKE_NEEDED = "fake needed" NEEDED = "needed" class Test(abc.ABC): arch_excludes = [] base = None config = None scope = Scope.FULL prep_run = None runs = { Run.UNNEEDED_FAIL: "false", Run.UNNEEDED_WIN: "true" } def __init__(self, run, forced, preprep): self.run = run self.forced = forced self.preprep = preprep def __str__(self): preprep = "preprep" if self.preprep else "no preprep" force = "with --force" if self.forced else "w/o --force" return f"{self.base}, {self.run.value}, {force}, {preprep}" @property def build1_post_hook(self): return "" @property def build2_post_hook(self): return "" def as_grep_files(self, grep_files, image, invert=False): cmds = [] for (re, path) in grep_files: path = f"\"$CH_IMAGE_STORAGE\"/img/{image}/{path}" cmd = f"ls -lh {path}" if (invert): cmd = f"! ( {cmd} )" cmds.append(cmd) if (not invert): cmds.append(f"grep -Eq -- '{re}' {path}") return "\n".join(cmds) def as_outputs(self, outputs, invert=False): cmds = [] for out in outputs: out = f"echo \"$output\" | grep -Eq -- \"{out}\"" if (invert): out = f"! ( {out} )" cmds.append(out) return "\n".join(cmds) def as_runs(self, runs): return "\n".join("RUN %s" % run for run in runs) def test(self): # skip? if (self.preprep and not (self.forced and self.run == Run.NEEDED)): print(f"\n# skip: {self}: not needed") return if (self.preprep and self.prep_run is None): print(f"\n# skip: {self}: no preprep command") return # scope if (self.scope == Scope.STANDARD or self.run == Run.NEEDED): scope = "standard" else: scope = "full" # architecture excludes arch_excludes = "\n".join("arch_exclude %s" % i for i in self.arch_excludes) # build 1 to make prep-prepped image (e.g. install EPEL) if needed if (not self.preprep): build1 = "# skipped: no separate prep" build2_base = self.base else: build2_base = "tmpimg" build1 = f"""\ run ch-image -v build -t tmpimg -f - . << 'EOF' FROM {self.base} RUN {self.prep_run} EOF echo "$output" [[ $status -eq 0 ]] {self.build1_post_hook}""" # force force = "--force" if self.forced else "" # run command we're testing try: run = self.runs[self.run] except KeyError: print(f"\n# skip: {self}: no run command") return # status if ( self.run == Run.UNNEEDED_FAIL or ( self.run == Run.NEEDED and not self.forced )): status = 1 else: status = 0 # output outs = [] if (self.forced): outs += [f"will use --force: {self.config}"] if (self.run == Run.UNNEEDED_WIN ): outs += ["warning: --force specified, but nothing to do"] elif (self.run in { Run.NEEDED, Run.FAKE_NEEDED }): outs += ["--force: init OK & modified 1 RUN instructions"] else: outs += [f"available --force: {self.config}"] if (self.run in { Run.NEEDED, Run.FAKE_NEEDED }): outs += ["RUN: available here with --force"] if (self.run == Run.NEEDED): outs += ["build failed: --force may fix it"] elif (self.run == Run.UNNEEDED_FAIL): outs += ["build failed: current version of --force wouldn't help"] out = self.as_outputs(outs) # emit the test print(f""" @test "ch-image --force: {self}" {{ scope {scope} {arch_excludes} # build 1: intermediate image for preparatory commands {build1} # build 2: image we're testing run ch-image -v build {force} -t tmpimg2 -f - . << 'EOF' FROM {build2_base} RUN {run} EOF echo "$output" [[ $status -eq {status} ]] {out} {self.build2_post_hook} }} """, end="") class _EPEL_Mixin: # Mixin class for RPM distros where we want to pre-install EPEL. I think # this should maybe go away and just go into a _Red_Hat base class, i.e. # test all RPM distros with EPEL pre-installed, but this matches what # existed in 50_fakeroot.bats. Note the install-EPEL command is elsewhere. epel_outputs = ["(Updating|Installing).+: epel-release"] epel_greps = [("enabled=1", "/etc/yum.repos.d/epel*.repo")] @property def build1_post_hook(self): return "\n".join(["# validate EPEL installed", self.as_outputs(self.epel_outputs), self.as_grep_files(self.epel_greps, "tmpimg")]) @property def build2_post_hook(self): return "\n".join([ "# validate EPEL present if installed by build 1, gone if by --force", self.as_grep_files(self.epel_greps, "tmpimg2", not self.preprep)]) class _RHEL7(Test): config = "rhel7" runs = { **Test.runs, **{ Run.FAKE_NEEDED: "yum install -y ed", Run.NEEDED: "yum install -y openssh" } } class CentOS_7(_RHEL7, _EPEL_Mixin): scope = Scope.STANDARD base = "centos:7" prep_run = "yum install -y epel-release" class _RHEL8(Test): config = "rhel8" runs = { **Test.runs, **{ Run.FAKE_NEEDED: "dnf install -y --setopt=install_weak_deps=false ed", Run.NEEDED: "dnf install -y --setopt=install_weak_deps=false openssh" } } class CentOS_Latest(_RHEL8, _EPEL_Mixin): base = "centos:latest" prep_run = "dnf install -y epel-release" class CentOS_8_Stream(_RHEL8): # pulling from quay.io per the CentOS wiki # https://wiki.centos.org/FAQ/CentOSStream#What_artifacts_are_built.3F base = "quay.io/centos/centos:stream8" arch_excludes = ["aarch64"] # issue #1249 class RHEL_UBI_8(_RHEL8): base = "registry.access.redhat.com/ubi8/ubi" arch_excludes = ["aarch64"] # issue #1249 class _Fedora(_RHEL8): config = "fedora" class Fedora_26(_Fedora): # We would prefer to test the lowest supported --force version, 24, # but the ancient version of dnf it has doesn't fail the transaction when # a package fails so we test with 26 instead. base = "fedora:26" class Fedora_34(_Fedora): base = "fedora:34" class Fedora_Latest(_Fedora): base = "fedora:latest" class _Debian(Test): config = "debderiv" runs = { **Test.runs, **{ Run.NEEDED: "apt-get update && apt-get install -y openssh-client" } } class Debian_9(_Debian): base = "debian:9" arch_excludes = ["ppc64le"] # base image unavailable class Debian_Latest(_Debian): scope = Scope.STANDARD base = "debian:latest" class Ubuntu_16(_Debian): base = "ubuntu:16.04" arch_excludes = ["aarch64"] # no pseudo package class Ubuntu_18(_Debian): base = "ubuntu:18.04" class Ubuntu_Latest(_Debian): base = "ubuntu:latest" # main loop print("""\ # NOTE: This file is auto-generated. Do not modify. load ../common setup () { [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' } """) for test in [CentOS_7, CentOS_Latest, CentOS_8_Stream, RHEL_UBI_8, Fedora_26, Fedora_34, #Fedora_Latest, # issue #1163 Debian_9, Debian_Latest, Ubuntu_16, Ubuntu_18, Ubuntu_Latest]: for run in Run: for forced in (False, True): for preprep in (False, True): test(run, forced, preprep).test() charliecloud-0.26/test/make-auto.d/000077500000000000000000000000001417231051300171605ustar00rootroot00000000000000charliecloud-0.26/test/make-auto.d/build.bats.in000066400000000000000000000005001417231051300215320ustar00rootroot00000000000000@test 'ch-build %(tag)s' { scope %(scope)s if [[ $CH_BUILDER = ch-image ]]; then force=--force else force= fi # shellcheck disable=SC2086 ch-build $force -t %(tag)s --file="%(path)s" "%(dirname)s" #sudo docker tag %(tag)s "%(tag)s:$ch_version_docker" builder_ok %(tag)s } charliecloud-0.26/test/make-auto.d/build_custom.bats.in000066400000000000000000000027151417231051300231360ustar00rootroot00000000000000@test 'custom build %(tag)s' { scope %(scope)s out="${ch_tardir}/%(tag)s" pq="${ch_tardir}/%(tag)s.pq_missing" workdir="${ch_tardir}/%(tag)s.tmp" rm -f "$pq" mkdir "$workdir" cd "%(dirname)s" run ./%(basename)s "$PWD" "$out" "$workdir" echo "$output" if [[ $status -eq 0 ]]; then if [[ -f ${out}.tar.gz || -f ${out}.tar.xz ]]; then # tarball # Validate exactly one tarball came out. tarballs=( "$out".tar.* ) [[ ${#tarballs[@]} -eq 1 ]] tarball=${tarballs[0]} # Convert to SquashFS if needed. if [[ $CH_TEST_PACK_FMT = squash* ]]; then ch-convert "$tarball" "${tarball/tar.?z/sqfs}" rm "$tarball" fi elif [[ -d $out ]]; then # directory case $CH_TEST_PACK_FMT in squash-*) ext=sqsh ;; tar-unpack) ext=tar.gz ;; *) false # unknown format ;; esac ch-convert "$out" "${out}.${ext}" else false # unknown format fi fi rm -Rf --one-file-system "$out" "$workdir" if [[ $status -eq 65 ]]; then touch "$pq" rm -Rf --one-file-system "$out".tar.{gz,xz} skip 'prerequisites not met' else return "$status" fi } charliecloud-0.26/test/make-auto.d/builder_to_archive.bats.in000066400000000000000000000006421417231051300242730ustar00rootroot00000000000000@test 'builder to archive %(tag)s' { scope %(scope)s case $CH_TEST_PACK_FMT in squash*) ext=sqfs ;; tar-unpack) ext=tar.gz ;; *) false # unknown format ;; esac archive=${ch_tardir}/%(tag)s.${ext} ch-convert -i "$CH_BUILDER" '%(tag)s' "$archive" archive_grep "$archive" archive_ok "$archive" } charliecloud-0.26/test/make-auto.d/unpack.bats.in000066400000000000000000000012671417231051300217270ustar00rootroot00000000000000@test 'unpack %(tag)s' { scope %(scope)s prerequisites_ok %(tag)s case $CH_TEST_PACK_FMT in squash-mount) # Lots of things expect no extension here, so go with that even # though it's a file, not a directory. $ch_mpirun_node ln -s "${ch_tardir}/%(tag)s.sqfs" "${ch_imgdir}/%(tag)s" ;; squash-unpack) $ch_mpirun_node ch-convert -o dir "${ch_tardir}/%(tag)s.sqfs" "${ch_imgdir}/%(tag)s" ;; tar-unpack) $ch_mpirun_node ch-convert -o dir "${ch_tardir}/%(tag)s.tar.gz" "${ch_imgdir}/%(tag)s" ;; *) false # unknown format ;; esac } charliecloud-0.26/test/make-perms-test.py.in000066400000000000000000000144211417231051300210520ustar00rootroot00000000000000#!%PYTHON_SHEBANG% # This script sets up a test directory for testing filesystem permissions # enforcement in UDSS such as virtual machines and containers. It must be run # as root. For example: # # $ sudo ./make-perms-test /data/perms_test $USER nobody # $ ./fs_perms.py /data/perms_test/pass 2>&1 | egrep -v 'ok$' # d /data/perms_test/pass/ld.out-a~--- --- rwt mismatch # d /data/perms_test/pass/ld.out-r~--- --- rwt mismatch # f /data/perms_test/pass/lf.out-a~--- --- rw- mismatch # f /data/perms_test/pass/lf.out-r~--- --- rw- mismatch # RISK 4 mismatches in 1 directories # # In this case, there will be four mismatches because the symlinks are # expected to be invalid after the pass directory is attached to the UDSS. # # Roughly 3,000 permission settings are evaluated in order to check files and # directories against user, primary group, and supplemental group access. # # For files, we test read and write. For directories, read, write, and # traverse. Files are not tested for execute because it's a more complicated # test (new process needed) and if readable, someone could simply make their # own executable copy. # # Compatibility: As of February 2016, this needs to be compatible with Python # 2.6 because that's the highest version that comes with RHEL 6. We're also # aiming to be source-compatible with Python 3.4+, but that's untested. # # Help: http://python-future.org/compatible_idioms.html from __future__ import division, print_function, unicode_literals import grp import os import os.path import pwd import sys if (len(sys.argv) != 4): print('usage error (PEBKAC)', file=sys.stderr) sys.exit(1) FILE_PERMS = set([0, 2, 4, 6]) DIR_PERMS = set([0, 1, 2, 3, 4, 5, 6, 7]) ALL_PERMS = FILE_PERMS | DIR_PERMS FILE_CONTENT = 'gary' * 19 + '\n' testdir = os.path.abspath(sys.argv[1]) my_user = sys.argv[2] yr_user = sys.argv[3] me = pwd.getpwnam(my_user) you = pwd.getpwnam(yr_user) my_uid = me.pw_uid my_gid = me.pw_gid my_group = grp.getgrgid(my_gid).gr_name yr_uid = you.pw_uid yr_gid = you.pw_gid yr_group = grp.getgrgid(yr_gid).gr_name # find an arbitrary supplemental group for my_user my_group2 = None my_gid2 = None for g in grp.getgrall(): if (my_user in g.gr_mem and g.gr_name != my_group): my_group2 = g.gr_name my_gid2 = g.gr_gid break if (my_group2 is None): print("couldn't find supplementary group for %s" % my_user, file=sys.stderr) sys.exit(1) if (my_gid == yr_gid or my_gid == my_gid2): print('%s and %s share a group' % (my_user, yr_user), file=sys.stderr) sys.exit(1) print('''\ test directory: %(testdir)s me: %(my_user)s %(my_uid)d you: %(yr_user)s %(yr_uid)d my primary group: %(my_group)s %(my_gid)d my supp. group: %(my_group2)s %(my_gid2)d your primary group: %(yr_group)s %(yr_gid)d ''' % locals()) def set_perms(name, uid, gid, mode): os.chown(name, uid, gid) os.chmod(name, mode) def symlink(src, link_name): if (not os.path.exists(src)): print('link target does not exist: %s' % src) sys.exit(1) os.symlink(src, link_name) class Test(object): def __init__(self, uid, gid, up, gp, op, name=None): self.uid = uid self.group = grp.getgrgid(gid).gr_name self.gid = gid self.user = pwd.getpwuid(uid).pw_name self.up = up self.gp = gp self.op = op self.name_override = name self.mode = up << 6 | gp << 3 | op # Which permission bits govern? if (self.uid == my_uid): self.p = self.up elif (self.gid in (my_gid, my_gid2)): self.p = self.gp else: self.p = self.op @property def name(self): if (self.name_override is not None): return self.name_override else: return ('%s.%s-%s.%03o~%s' % (self.type_, self.user, self.group, self.mode, self.expect)) @property def valid(self): return (all(x in self.valid_perms for x in (self.up, self.gp, self.op))) def write(self): if (not self.valid): return 0 self.write_real() set_perms(self.name, self.uid, self.gid, self.mode) return 1 class Test_Directory(Test): type_ = 'd' valid_perms = DIR_PERMS @property def expect(self): return ( ('r' if (self.p & 4) else '-') + ('w' if (self.p & 3 == 3) else '-') + ('t' if (self.p & 1) else '-')) def write_real(self): os.mkdir(self.name) # Create a file R/W by me, for testing traversal. file_ = self.name + '/file' with open(file_, 'w') as fp: fp.write(FILE_CONTENT) set_perms(file_, my_uid, my_uid, 0o660) class Test_File(Test): type_ = 'f' valid_perms = FILE_PERMS @property def expect(self): return ( ('r' if (self.p & 4) else '-') + ('w' if (self.p & 2) else '-') + '-') def write_real(self): with open(self.name, 'w') as fp: fp.write(FILE_CONTENT) try: os.mkdir(testdir) except OSError as x: print("can't mkdir %s: %s" % (testdir, str(x))) sys.exit(1) set_perms(testdir, my_uid, my_gid, 0o770) os.chdir(testdir) Test_Directory(my_uid, my_gid, 7, 7, 0, 'nopass').write() os.chdir('nopass') Test_Directory(my_uid, my_gid, 7, 7, 0, 'dir').write() Test_File(my_uid, my_gid, 6, 6, 0, 'file').write() os.chdir('..') Test_Directory(my_uid, my_gid, 7, 7, 0, 'pass').write() os.chdir('pass') ct = 0 for uid in (my_uid, yr_uid): for gid in (my_gid, my_gid2, yr_gid): if (uid == my_uid and gid == my_gid): # Files owned by my_uid:my_gid are not a meaningful access control # test; check the documentation for why. continue for up in ALL_PERMS: for gp in ALL_PERMS: for op in ALL_PERMS: f = Test_File(uid, gid, up, gp, op) #print(f.name) ct += f.write() d = Test_Directory(uid, gid, up, gp, op) #print(d.name) ct += d.write() #print(ct) symlink('f.%s-%s.600~rw-' % (my_user, yr_group), 'lf.in~rw-') symlink('d.%s-%s.700~rwt' % (my_user, yr_group), 'ld.in~rwt') symlink('%s/nopass/file' % testdir, 'lf.out-a~---') symlink('%s/nopass/dir' % testdir, 'ld.out-a~---') symlink('../nopass/file', 'lf.out-r~---') symlink('../nopass/dir', 'ld.out-r~---') print("created %d files and directories" % ct) charliecloud-0.26/test/registry-config.yml000066400000000000000000000012131417231051300207060ustar00rootroot00000000000000# Sample registry configuration file. Used for CI testing. # # WARNING: Ports will be incorrect for mitmproxy. See HOWTO in Google Docs. version: 0.1 log: fields: service: registry storage: cache: blobdescriptor: inmemory filesystem: rootdirectory: /var/lib/registry auth: htpasswd: realm: i-lost-on-jeopardy path: /etc/docker/registry/htpasswd http: addr: :5000 headers: X-Content-Type-Options: [nosniff] X-Weird-Al: [Yankovic] tls: certificate: /etc/docker/registry/localhost.crt key: /etc/docker/registry/localhost.key health: storagedriver: enabled: true interval: 10s threshold: 3 charliecloud-0.26/test/run/000077500000000000000000000000001417231051300156575ustar00rootroot00000000000000charliecloud-0.26/test/run/build-rpms.bats000066400000000000000000000125061417231051300206140ustar00rootroot00000000000000load ../common setup () { scope standard [[ $CH_TEST_PACK_FMT = *-unpack ]] || skip 'need writeable image' [[ $CHTEST_GITWD ]] || skip "not in Git working directory" if ! command -v sphinx-build > /dev/null 2>&1 \ && ! command -v sphinx-build-3.6 > /dev/null 2>&1; then skip 'Sphinx is not installed' fi } @test 'build/install el7 RPMs' { prerequisites_ok centos7 img=${ch_imgdir}/centos7 image_ok "$img" rm -rf --one-file-system "${BATS_TMPDIR}/rpmbuild" # Build and install RPMs into CentOS 7 image. (cd .. && packaging/fedora/build --install "$img" \ --rpmbuild="$BATS_TMPDIR/rpmbuild" HEAD) } @test 'check el7 RPM files' { prerequisites_ok centos7 img=${ch_imgdir}/centos7 # Do installed RPMs look sane? run ch-run "$img" -- rpm -qa "charliecloud*" echo "$output" [[ $status -eq 0 ]] [[ $output = *'charliecloud-'* ]] [[ $output = *'charliecloud-builder'* ]] [[ $output = *'charliecloud-debuginfo-'* ]] [[ $output = *'charliecloud-doc'* ]] [[ $output = *'charliecloud-test-'* ]] run ch-run "$img" -- rpm -ql "charliecloud" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/bin/ch-run'* ]] [[ $output = *'/usr/lib/charliecloud/base.sh'* ]] [[ $output = *'/usr/share/man/man7/charliecloud.7.gz'* ]] run ch-run "$img" -- rpm -ql "charliecloud-builder" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/bin/ch-image'* ]] [[ $output = *'/usr/lib/charliecloud/charliecloud.py'* ]] run ch-run "$img" -- rpm -ql "charliecloud-debuginfo" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/lib/debug/usr/bin/ch-run.debug'* ]] [[ $output = *'/usr/lib/debug/usr/libexec/charliecloud/test/sotest/lib/libsotest.so.1.0.debug'* ]] run ch-run "$img" -- rpm -ql "charliecloud-test" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/bin/ch-test'* ]] [[ $output = *'/usr/libexec/charliecloud/test/Build.centos7xz'* ]] [[ $output = *'/usr/libexec/charliecloud/test/sotest/lib/libsotest.so.1.0'* ]] run ch-run "$img" -- rpm -ql "charliecloud-doc" echo "$output" [[ $output = *'/usr/share/doc/charliecloud-'*'/html'* ]] [[ $output = *'/usr/share/doc/charliecloud-'*'/examples/lammps/Dockerfile'* ]] } @test 'remove el7 RPMs' { prerequisites_ok centos7 img=${ch_imgdir}/centos7 # Uninstall to avoid interfering with the rest of the test suite. run ch-run -w "$img" -- rpm -v --erase charliecloud-test \ charliecloud-debuginfo \ charliecloud-doc \ charliecloud-builder \ charliecloud echo "$output" [[ $status -eq 0 ]] [[ $output = *'charliecloud-'* ]] [[ $output = *'charliecloud-debuginfo-'* ]] [[ $output = *'charliecloud-doc'* ]] [[ $output = *'charliecloud-test-'* ]] # All gone? run ch-run "$img" -- rpm -qa "charliecloud*" echo "$output" [[ $status -eq 0 ]] [[ $output = '' ]] } @test 'build/install el8 RPMS' { prerequisites_ok centos8 img=${ch_imgdir}/centos8 image_ok "$img" rm -rf --one-file-system "${BATS_TMPDIR}/rpmbuild" # Build and install RPMs into CentOS 8 image. (cd .. && packaging/fedora/build --install "$img" \ --rpmbuild="$BATS_TMPDIR/rpmbuild" HEAD) } @test 'check el8 RPM files' { prerequisites_ok centos8 img=${ch_imgdir}/centos8 # Do installed RPMs look sane? run ch-run "$img" -- rpm -qa "charliecloud*" echo "$output" [[ $status -eq 0 ]] [[ $output = *'charliecloud-'* ]] [[ $output = *'charliecloud-builder'* ]] [[ $output = *'charliecloud-debuginfo-'* ]] [[ $output = *'charliecloud-doc'* ]] run ch-run "$img" -- rpm -ql "charliecloud" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/bin/ch-run'* ]] [[ $output = *'/usr/lib/charliecloud/base.sh'* ]] [[ $output = *'/usr/share/man/man7/charliecloud.7.gz'* ]] run ch-run "$img" -- rpm -ql "charliecloud-builder" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/bin/ch-image'* ]] [[ $output = *'/usr/lib/charliecloud/charliecloud.py'* ]] run ch-run "$img" -- rpm -ql "charliecloud-debuginfo" echo "$output" [[ $status -eq 0 ]] [[ $output = *'/usr/lib/debug/usr/bin/ch-run'*'debug'* ]] run ch-run "$img" -- rpm -ql "charliecloud-doc" echo "$output" [[ $output = *'/usr/share/doc/charliecloud/html'* ]] [[ $output = *'/usr/share/doc/charliecloud/examples/lammps/Dockerfile'* ]] } @test 'remove el8 RPMs' { prerequisites_ok centos8 img=${ch_imgdir}/centos8 # Uninstall to avoid interfering with the rest of the test suite. run ch-run -w "$img" -- rpm -v --erase charliecloud-debuginfo \ charliecloud-doc \ charliecloud-builder \ charliecloud echo "$output" [[ $status -eq 0 ]] [[ $output = *'charliecloud-'* ]] [[ $output = *'charliecloud-debuginfo-'* ]] [[ $output = *'charliecloud-doc'* ]] # All gone? run ch-run "$img" -- rpm -qa "charliecloud*" echo "$output" [[ $status -eq 0 ]] [[ $output = '' ]] } charliecloud-0.26/test/run/ch-convert.bats000066400000000000000000000254041417231051300206070ustar00rootroot00000000000000load ../common # Testing strategy overview: # # The most efficient way to test conversion through all formats would be to # start with a directory, cycle through all the formats one at a time, with # directory being last, then compare the starting and ending directories. That # corresponds to visiting all the cells in the matrix below, starting from one # labeled "a", ending in one labeled "b", and skipping those labeled with a # dash. Also, if visit n is in column i, then the next visit n+1 must be in # row i. This approach does each conversion exactly once. # # output -> # | dir | ch-image | docker | squash | tar | # input +----------+----------+----------+----------+---------+ # | dir | — | a | a | a | a | # v ch-image | b | — | | | | # docker | b | | — | | | # squash | b | | | — | | # tar | b | | | | — | # +----------+----------+----------+----------+---------+ # # Because we start with a directory already available, this yields 5*5 - 5 - 1 # = 19 conversions. However, I was not able to figure out a traversal order # that would meet the constraints. # # Thus, we use the following algorithm. # # for every format i except dir: (4 iterations) # convert start_dir -> i # for every format j except dir: (4) # if i≠j: convert i -> j # convert j -> finish_dir # compare start_dir with finish_dir # # This yields 4 * (3*2 + 1*1) = 28 conversions, due to excess conversions to # dir. However, it can better isolate where the conversion went wrong, because # the chain is 3 conversions long rather than 19. # # The outer loop is unrolled into four separate tests to avoid having one test # that runs for two minutes. # This is a little goofy, because several of the texts need *all* the # builders. Thus, we (a) run only for builder ch-image but (b) # pedantic-require Docker to also be installed. setup () { scope standard [[ $CH_BUILDER = ch-image ]] || skip 'ch-image only' [[ $CH_TEST_PACK_FMT = *-unpack ]] || skip 'needs directory images' if ! command -v docker > /dev/null 2>&1; then pedantic_fail 'docker not found' fi } # Return success if directories $1 and $2 are recursively the same, failure # otherwise. This compares only metadata. False positives are possible if a # file's content changes but the size and all other metadata stays the same; # this seems unlikely. We could also use "diff -qr --no-dereference", which # would also compare file conttent, but diff's --exclude only accepts basename # patterns, not paths. The long list of excludes is things that don't # round-trip through the various formats; the surprising directories (e.g. # /dev) are because modification times seem to change. compare () { out=$( rsync -nv -aAX --delete "${1}/" "$2" \ | sed -E -e '/^$/d' \ -e '/^sending incremental file list/d' \ -e '/^sent [0-9,]+ bytes/d' \ -e '/^total size is/d' \ -e '\|^deleting ch/|d' \ -e '\|^deleting .dockerenv$|d' \ -e '\|^deleting dev/console$|d' \ -e '\|^deleting dev/pts/$|d' \ -e '\|^deleting dev/shm/$|d' \ -e '\|^./$|d' \ -e '\|^WEIRD_AL_YANKOVIC$|d' \ -e '\|^dev/$|d' \ -e '\|^etc/$|d' \ -e '\|^etc/hostname$|d' \ -e '\|^etc/hosts$|d' \ -e '\|^etc/resolv.conf -> /etc/resolv.conf.real$|d' \ -e '\|^mnt/dev/dontdeleteme$|d' ) echo "$out" [ -z "$out" ] } # Kludge to cook up the right input and output descriptors for ch-convert. convert () { ct=$1 in_fmt=$2 out_fmt=$3; case $in_fmt in ch-image) in_desc=tmpimg ;; dir) in_desc=$ch_timg ;; docker) in_desc=tmpimg ;; tar) in_desc=${BATS_TMPDIR}/convert.tar.gz ;; squash) in_desc=${BATS_TMPDIR}/convert.sqfs ;; *) echo "unknown input format: $in_fmt" false ;; esac case $out_fmt in ch-image) out_desc=tmpimg ;; dir) out_desc=${BATS_TMPDIR}/convert.dir ;; docker) out_desc=tmpimg ;; tar) out_desc=${BATS_TMPDIR}/convert.tar.gz ;; squash) out_desc=${BATS_TMPDIR}/convert.sqfs ;; *) echo "unknown output format: $out_fmt" false ;; esac echo "CONVERT ${ct}: ${in_desc} ($in_fmt) -> ${out_desc} (${out_fmt})" delete "$out_fmt" "$out_desc" ch-convert --no-clobber -v -i "$in_fmt" -o "$out_fmt" "$in_desc" "$out_desc" # Doing it twice doubles the time but also tests that both new conversions # and overwrite work. Hence, full scope only. if [[ $CH_TEST_SCOPE = full ]]; then ch-convert -v -i "$in_fmt" -o "$out_fmt" "$in_desc" "$out_desc" fi } delete () { fmt=$1 desc=$2 case $fmt in ch-image) ch-image delete "$desc" || true ;; dir) rm -Rf --one-file-system "$desc" ;; docker) docker_ rmi -f "$desc" ;; tar) rm -f "$desc" ;; squash) rm -f "$desc" ;; *) echo "unknown format: $fmt" false ;; esac } # Test conversions dir -> $1 -> (all) -> dir. test_from () { end=${BATS_TMPDIR}/convert.dir ct=1 convert "$ct" dir "$1" for j in ch-image docker squash tar; do if [[ $1 != "$j" ]]; then ct=$((ct+1)) convert "$ct" "$1" "$j" fi ct=$((ct+1)) convert "$ct" "$1" dir image_ok "$end" compare "$ch_timg" "$end" done } @test 'ch-convert: format inference' { # Test input only; output uses same code. Test cases match all the # criteria to validate the priority. We don't exercise every possible # descriptor pattern, only those I thought had potential for error. # SquashFS run ch-convert -n ./foo:bar.sqfs out.tar echo "$output" [[ $status -eq 0 ]] [[ $output = *'input: squash'* ]] # tar run ch-convert -n ./foo:bar.tar out.sqfs echo "$output" [[ $status -eq 0 ]] [[ $output = *'input: tar'* ]] run ch-convert -n ./foo:bar.tgz out.sqfs echo "$output" [[ $status -eq 0 ]] [[ $output = *'input: tar'* ]] run ch-convert -n ./foo:bar.tar.Z out.sqfs echo "$output" [[ $status -eq 0 ]] [[ $output = *'input: tar'* ]] run ch-convert -n ./foo:bar.tar.gz out.sqfs echo "$output" [[ $status -eq 0 ]] [[ $output = *'input: tar'* ]] # directory run ch-convert -n ./foo:bar out.tar echo "$output" [[ $status -eq 0 ]] [[ $output = *'input: dir'* ]] # builders run ch-convert -n foo out.tar echo "$output" if command -v ch-image > /dev/null 2>&1; then [[ $status -eq 0 ]] [[ $output = *'input: ch-image'* ]] elif command -v docker > /dev/null 2>&1; then [[ $status -eq 0 ]] [[ $output = *'input: docker'* ]] else [[ $status -eq 1 ]] [[ $output = *'no builder found' ]] fi } @test 'ch-convert: errors' { # same format run ch-convert -n foo.tar foo.tar.gz echo "$output" [[ $status -eq 1 ]] [[ $output = *'error: input and output formats must be different'* ]] # output directory not an image touch "${BATS_TMPDIR}/foo.tar" run ch-convert "${BATS_TMPDIR}/foo.tar" "$BATS_TMPDIR" echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: exists but does not appear to be an image: ${BATS_TMPDIR}"* ]] rm "${BATS_TMPDIR}/foo.tar" } @test 'ch-convert: --no-clobber' { # ch-image printf 'FROM alpine:3.9\n' | ch-image build -t tmpimg -f - "$BATS_TMPDIR" run ch-convert --no-clobber -o ch-image "$BATS_TMPDIR" tmpimg echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: exists in ch-image storage, not deleting per --no-clobber: tmpimg" ]] # dir ch-convert -i ch-image -o dir 00_tiny "$BATS_TMPDIR"/00_tiny run ch-convert --no-clobber -i ch-image -o dir 00_tiny "$BATS_TMPDIR"/00_tiny echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: exists, not deleting per --no-clobber: ${BATS_TMPDIR}/00_tiny" ]] rm -Rf --one-file-system "$BATS_TMPDIR"/00_tiny # docker printf 'FROM alpine:3.9\n' | docker_ build -t tmpimg - run ch-convert --no-clobber -o docker "$BATS_TMPDIR" tmpimg echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: exists in Docker storage, not deleting per --no-clobber: tmpimg" ]] # squash touch "${BATS_TMPDIR}/00_tiny.sqfs" run ch-convert --no-clobber -i ch-image -o squash 00_tiny "$BATS_TMPDIR"/00_tiny.sqfs echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: exists, not deleting per --no-clobber: ${BATS_TMPDIR}/00_tiny.sqfs" ]] rm "${BATS_TMPDIR}/00_tiny.sqfs" # tar touch "${BATS_TMPDIR}/00_tiny.tar.gz" run ch-convert --no-clobber -i ch-image -o tar 00_tiny "$BATS_TMPDIR"/00_tiny.tar.gz echo "$output" [[ $status -eq 1 ]] [[ $output = *"error: exists, not deleting per --no-clobber: ${BATS_TMPDIR}/00_tiny.tar.gz" ]] rm "${BATS_TMPDIR}/00_tiny.tar.gz" } @test 'ch-convert: pathological tarballs' { [[ $CH_PACK_FMT = tar ]] || skip 'tar mode only' out=${BATS_TMPDIR}/convert.dir # Are /dev fixtures present in tarball? (issue #157) present=$(tar tf "$ch_ttar" | grep -F deleteme) [[ $(echo "$present" | wc -l) -eq 4 ]] echo "$present" | grep -E '^img/dev/deleteme$' echo "$present" | grep -E '^./dev/deleteme$' echo "$present" | grep -E '^dev/deleteme$' echo "$present" | grep -E '^img/mnt/dev/dontdeleteme$' # Convert to dir. ch-convert "$ch_ttar" "$out" image_ok "$out" # Did we raise hidden files correctly? [[ -e ${out}/.hiddenfile1 ]] [[ -e ${out}/..hiddenfile2 ]] [[ -e ${out}/...hiddenfile3 ]] # Did we remove the right /dev stuff? [[ -e ${out}/mnt/dev/dontdeleteme ]] [[ $(ls -Aq "${out}/dev") -eq 0 ]] ch-run "$out" -- test -e /mnt/dev/dontdeleteme } @test 'ch-convert: dir -> ch-image -> X' { test_from ch-image } @test 'ch-convert: dir -> docker -> X' { test_from docker } @test 'ch-convert: dir -> squash -> X' { test_from squash } @test 'ch-convert: dir -> tar -> X' { test_from tar } charliecloud-0.26/test/run/ch-fromhost.bats000066400000000000000000000301301417231051300207600ustar00rootroot00000000000000load ../common setup () { [[ $CH_TEST_PACK_FMT = *-unpack ]] || skip 'need writeable image' } fromhost_clean () { [[ $1 ]] # We used to delete only specific paths, but this turned into an unwieldy # mess of wildcards that obscured the original specificity purpose. rm -f "${1}/ld.so.cache" find "$1" -xdev \( \ -name 'libcuda*' \ -o -name 'libnvidia*' \ -o -name libsotest.so.1 \ -o -name libsotest.so.1.0 \ -o -name sotest \ -o -name sotest.c \ \) -print -delete ch-run -w "$1" -- /sbin/ldconfig # restore default cache fromhost_clean_p "$1" } fromhost_clean_p () { ch-run "$1" -- /sbin/ldconfig -p | grep -F libsotest && return 1 run fromhost_ls "$1" echo "$output" [[ $status -eq 0 ]] [[ -z $output ]] } fromhost_ls () { find "$1" -xdev -name '*sotest*' -ls } @test 'ch-fromhost (CentOS)' { scope standard prerequisites_ok centos8 img=${ch_imgdir}/centos8 libpath=$(ch-fromhost --lib-path "$img") echo "libpath: ${libpath}" # --file fromhost_clean "$img" ch-fromhost -v --file sotest/files_inferrable.txt "$img" fromhost_ls "$img" test -f "${img}/usr/bin/sotest" test -f "${img}${libpath}/libsotest.so.1.0" test -L "${img}${libpath}/libsotest.so.1" ch-run "$img" -- /sbin/ldconfig -p | grep -F libsotest ch-run "$img" -- sotest rm "${img}/usr/bin/sotest" rm "${img}${libpath}/libsotest.so.1.0" rm "${img}${libpath}/libsotest.so.1" ch-run -w "$img" -- /sbin/ldconfig fromhost_clean_p "$img" # --cmd ch-fromhost -v --cmd 'cat sotest/files_inferrable.txt' "$img" ch-run "$img" -- sotest # --path ch-fromhost -v --path sotest/bin/sotest \ --path sotest/lib/libsotest.so.1.0 \ "$img" ch-run "$img" -- sotest fromhost_clean "$img" # --cmd and --file ch-fromhost -v --cmd 'cat sotest/files_inferrable.txt' \ --file sotest/files_inferrable.txt "$img" ch-run "$img" -- sotest fromhost_clean "$img" # --dest ch-fromhost -v --file sotest/files_inferrable.txt \ --dest /mnt "$img" \ --path sotest/sotest.c ch-run "$img" -- sotest ch-run "$img" -- test -f /mnt/sotest.c fromhost_clean "$img" # --dest overrides inference, but ldconfig still run ch-fromhost -v --dest /lib \ --file sotest/files_inferrable.txt \ "$img" ch-run "$img" -- /lib/sotest fromhost_clean "$img" # --no-ldconfig ch-fromhost -v --no-ldconfig --file sotest/files_inferrable.txt "$img" test -f "${img}/usr/bin/sotest" test -f "${img}${libpath}/libsotest.so.1.0" ! test -L "${img}${libpath}/libsotest.so.1" ! ( ch-run "$img" -- /sbin/ldconfig -p | grep -F libsotest ) run ch-run "$img" -- sotest echo "$output" [[ $status -eq 127 ]] [[ $output = *'libsotest.so.1: cannot open shared object file'* ]] fromhost_clean "$img" # no --verbose ch-fromhost --file sotest/files_inferrable.txt "$img" ch-run "$img" -- sotest fromhost_clean "$img" # destination directory not writeable (#323) chmod -v u-w "${img}/mnt" ch-fromhost --dest /mnt --path sotest/sotest.c "$img" test -w "${img}/mnt" test -f "${img}/mnt/sotest.c" fromhost_clean "$img" } @test 'ch-fromhost (Debian)' { scope full prerequisites_ok debian9 img=${ch_imgdir}/debian9 libpath=$(ch-fromhost --lib-path "$img") echo "libpath: ${libpath}" fromhost_clean "$img" ch-fromhost -v --file sotest/files_inferrable.txt "$img" fromhost_ls "$img" test -f "${img}/usr/bin/sotest" test -f "${img}/${libpath}/libsotest.so.1.0" test -L "${img}/${libpath}/libsotest.so.1" ch-run "$img" -- /sbin/ldconfig -p | grep -F libsotest ch-run "$img" -- sotest rm "${img}/usr/bin/sotest" rm "${img}/${libpath}/libsotest.so.1.0" rm "${img}/${libpath}/libsotest.so.1" rm "${img}/etc/ld.so.cache" fromhost_clean_p "$img" } @test 'ch-fromhost errors' { scope standard prerequisites_ok centos8 img=${ch_imgdir}/centos8 # no image run ch-fromhost --path sotest/sotest.c echo "$output" [[ $status -eq 1 ]] [[ $output = *'no image specified'* ]] fromhost_clean_p "$img" # image is not a directory run ch-fromhost --path sotest/sotest.c /etc/motd echo "$output" [[ $status -eq 1 ]] [[ $output = *'image not a directory: /etc/motd'* ]] fromhost_clean_p "$img" # two image arguments run ch-fromhost --path sotest/sotest.c "$img" foo echo "$output" [[ $status -eq 1 ]] [[ $output = *'duplicate image: foo'* ]] fromhost_clean_p "$img" # no files argument run ch-fromhost "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'empty file list'* ]] fromhost_clean_p "$img" # file that needs --dest but not specified run ch-fromhost -v --path sotest/sotest.c "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'no destination for: sotest/sotest.c'* ]] fromhost_clean_p "$img" # file with colon in name run ch-fromhost -v --path 'foo:bar' "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *"paths can't contain colon: foo:bar"* ]] fromhost_clean_p "$img" # file with newlines in name run ch-fromhost -v --path $'foo\nbar' "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *"no destination for: foo"* ]] fromhost_clean_p "$img" # --cmd no argument run ch-fromhost "$img" --cmd echo "$output" [[ $status -eq 1 ]] [[ $output = *'--cmd must not be empty'* ]] fromhost_clean_p "$img" # --cmd empty run ch-fromhost --cmd true "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'empty file list'* ]] fromhost_clean_p "$img" # --cmd fails run ch-fromhost --cmd false "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'command failed: false'* ]] fromhost_clean_p "$img" # --file no argument run ch-fromhost "$img" --file echo "$output" [[ $status -eq 1 ]] [[ $output = *'--file must not be empty'* ]] fromhost_clean_p "$img" # --file empty run ch-fromhost --file /dev/null "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'empty file list'* ]] fromhost_clean_p "$img" # --file does not exist run ch-fromhost --file /doesnotexist "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'/doesnotexist: No such file or directory'* ]] [[ $output = *'cannot read file: /doesnotexist'* ]] fromhost_clean_p "$img" # --path no argument run ch-fromhost "$img" --path echo "$output" [[ $status -eq 1 ]] [[ $output = *'--path must not be empty'* ]] fromhost_clean_p "$img" # --path does not exist run ch-fromhost --dest /mnt --path /doesnotexist "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'No such file or directory'* ]] [[ $output = *'cannot inject: /doesnotexist'* ]] fromhost_clean_p "$img" # --dest no argument run ch-fromhost "$img" --dest echo "$output" [[ $status -eq 1 ]] [[ $output = *'--dest must not be empty'* ]] fromhost_clean_p "$img" # --dest not an absolute path run ch-fromhost --dest relative --path sotest/sotest.c "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'not an absolute path: relative'* ]] fromhost_clean_p "$img" # --dest does not exist run ch-fromhost --dest /doesnotexist --path sotest/sotest.c "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'not a directory:'* ]] fromhost_clean_p "$img" # --dest is not a directory run ch-fromhost --dest /bin/sh --file sotest/sotest.c "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'not a directory:'* ]] fromhost_clean_p "$img" # image does not exist run ch-fromhost --file sotest/files_inferrable.txt /doesnotexist echo "$output" [[ $status -eq 1 ]] [[ $output = *'image not a directory: /doesnotexist'* ]] fromhost_clean_p "$img" # image specified twice run ch-fromhost --file sotest/files_inferrable.txt "$img" "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'duplicate image'* ]] fromhost_clean_p "$img" # ldconfig gives no shared library path (#324) # # (I don't think this is the best way to get ldconfig to fail, but I # couldn't come up with anything better. E.g., bad ld.so.conf or broken # .so's seem to produce only warnings.) mv "${img}/sbin/ldconfig" "${img}/sbin/ldconfig.foo" run ch-fromhost --lib-path "$img" mv "${img}/sbin/ldconfig.foo" "${img}/sbin/ldconfig" echo "$output" [[ $status -eq 1 ]] [[ $output = *'empty path from ldconfig'* ]] fromhost_clean_p "$img" } @test 'ch-fromhost --cray-mpi not on a Cray' { scope full [[ $ch_cray ]] && skip 'host is a Cray' run ch-fromhost --cray-mpi "$ch_timg" echo "$output" [[ $status -eq 1 ]] [[ $output = *'are you on a Cray?'* ]] } @test 'ch-fromhost --cray-mpi with no MPI installed' { scope full [[ $ch_cray ]] || skip 'host is not a Cray' run ch-fromhost --cray-mpi "$ch_timg" echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't find MPI in image"* ]] } @test 'ch-fromhost --nvidia with GPU' { scope full prerequisites_ok nvidia command -v nvidia-container-cli >/dev/null 2>&1 \ || skip 'nvidia-container-cli not in PATH' img=${ch_imgdir}/nvidia # nvidia-container-cli --version (to make sure it's linked correctly) nvidia-container-cli --version # Skip if nvidia-container-cli can't find CUDA. run nvidia-container-cli list --binaries --libraries echo "$output" if [[ $status -eq 1 ]]; then if [[ $output = *'cuda error'* ]]; then skip "nvidia-container-cli can't find CUDA" fi false fi # --nvidia ch-fromhost -v --nvidia "$img" # nvidia-smi runs in guest ch-run "$img" -- nvidia-smi -L # nvidia-smi -L matches host host=$(nvidia-smi -L) echo "host GPUs:" echo "$host" guest=$(ch-run "$img" -- nvidia-smi -L) echo "guest GPUs:" echo "$guest" cmp <(echo "$host") <(echo "$guest") # --nvidia and --cmd fromhost_clean "$img" ch-fromhost --nvidia --file sotest/files_inferrable.txt "$img" ch-run "$img" -- nvidia-smi -L ch-run "$img" -- sotest # --nvidia and --file fromhost_clean "$img" ch-fromhost --nvidia --cmd 'cat sotest/files_inferrable.txt' "$img" ch-run "$img" -- nvidia-smi -L ch-run "$img" -- sotest # CUDA sample sample=/matrixMulCUBLAS # should fail without ch-fromhost --nvidia fromhost_clean "$img" run ch-run "$img" -- $sample echo "$output" [[ $status -eq 1 ]] [[ $output = *'CUDA error at'* ]] # should succeed with it fromhost_clean_p "$img" ch-fromhost --nvidia "$img" run ch-run "$img" -- $sample echo "$output" [[ $status -eq 0 ]] [[ $output =~ 'Comparing CUBLAS Matrix Multiply with CPU results: PASS' ]] } @test 'ch-fromhost --nvidia without GPU' { scope full prerequisites_ok nvidia img=${ch_imgdir}/nvidia # --nvidia should give a proper error whether or not nvidia-container-cli # is available. if ( command -v nvidia-container-cli >/dev/null 2>&1 ); then # nvidia-container-cli in $PATH run nvidia-container-cli list --binaries --libraries echo "$output" if [[ $status -eq 0 ]]; then # found CUDA; skip skip 'nvidia-container-cli found CUDA' else [[ $status -eq 1 ]] [[ $output = *'cuda error'* ]] run ch-fromhost -v --nvidia "$img" echo "$output" [[ $status -eq 1 ]] [[ $output = *'does this host have GPUs'* ]] fi else # nvidia-container-cli not in $PATH run ch-fromhost -v --nvidia "$img" echo "$output" [[ $status -eq 1 ]] r="nvidia-container-cli: (command )?not found" [[ $output =~ $r ]] [[ $output =~ 'nvidia-container-cli failed' ]] fi } charliecloud-0.26/test/run/ch-run_escalated.bats000066400000000000000000000044741417231051300217440ustar00rootroot00000000000000load ../common @test 'ch-run refuses to run if setgid' { scope quick ch_run_tmp=$BATS_TMPDIR/ch-run.setgid gid=$(id -g) gid2=$(id -G | cut -d' ' -f2) echo "gids: ${gid} ${gid2}" [[ $gid != "$gid2" ]] cp -a "$ch_runfile" "$ch_run_tmp" ls -l "$ch_run_tmp" chgrp "$gid2" "$ch_run_tmp" chmod g+s "$ch_run_tmp" ls -l "$ch_run_tmp" [[ -g $ch_run_tmp ]] run "$ch_run_tmp" --version echo "$output" [[ $status -eq 1 ]] [[ $output = *': error ('* ]] rm "$ch_run_tmp" } @test 'ch-run refuses to run if setuid' { scope quick [[ -n $ch_have_sudo ]] || skip 'sudo not available' ch_run_tmp=$BATS_TMPDIR/ch-run.setuid cp -a "$ch_runfile" "$ch_run_tmp" ls -l "$ch_run_tmp" sudo chown root "$ch_run_tmp" sudo chmod u+s "$ch_run_tmp" ls -l "$ch_run_tmp" [[ -u $ch_run_tmp ]] run "$ch_run_tmp" --version echo "$output" [[ $status -eq 1 ]] [[ $output = *': error ('* ]] sudo rm "$ch_run_tmp" } @test 'ch-run as root: --version and --test' { scope quick [[ -n $ch_have_sudo ]] || skip 'sudo not available' sudo "$ch_runfile" --version sudo "$ch_runfile" --help } @test 'ch-run as root: run image' { scope standard # Running an image should work as root, but it doesn't, and I'm not sure # why, so skip this test. This fails in the test suite with: # # ch-run: couldn't resolve image path: No such file or directory (ch-run.c:139:2) # # but when run manually (with same arguments?) it fails differently with: # # $ sudo bin/ch-run $ch_imgdir/chtest -- true # ch-run: [...]/chtest: Permission denied (ch-run.c:195:13) # skip 'issue #76' sudo "$ch_runfile" "$ch_timg" -- true } @test 'ch-run as root: root with non-zero gid refused' { scope standard [[ -n $ch_have_sudo ]] || skip 'sudo not available' if ! (sudo -u root -g "$(id -gn)" true); then # Allowing sudo to user root but group non-root is an unusual # configuration. You need e.g. "%foo ALL=(ALL:ALL)" instead of the # more common "%foo ALL=(ALL)". See issue #485. pedantic_fail 'sudo not configured for user root and group non-root' fi run sudo -u root -g "$(id -gn)" "$ch_runfile" -v --version echo "$output" [[ $status -eq 1 ]] [[ $output = *'error ('* ]] } charliecloud-0.26/test/run/ch-run_isolation.bats000066400000000000000000000037131417231051300220130ustar00rootroot00000000000000load ../common @test 'mountns id differs' { scope quick host_ns=$(stat -Lc '%i' /proc/self/ns/mnt) echo "host: ${host_ns}" guest_ns=$(ch-run "$ch_timg" -- stat -Lc %i /proc/self/ns/mnt) echo "guest: ${guest_ns}" [[ -n $host_ns && -n $guest_ns && $host_ns -ne $guest_ns ]] } @test 'userns id differs' { scope quick host_ns=$(stat -Lc '%i' /proc/self/ns/user) echo "host: ${host_userns}" guest_ns=$(ch-run "$ch_timg" -- stat -Lc %i /proc/self/ns/user) echo "guest: ${guest_ns}" [[ -n $host_ns && -n $guest_ns && $host_ns -ne $guest_ns ]] } @test 'distro differs' { scope quick # This is a catch-all and a bit of a guess. Even if it fails, however, we # get an empty string, which is fine for the purposes of the test. host_distro=$( cat /etc/os-release /etc/*-release /etc/*_version \ | grep -Em1 '[A-Za-z] [0-9]' \ | sed -r 's/^(.*")?(.+)(")$/\2/') echo "host: ${host_distro}" guest_expected='Alpine Linux v3.9' echo "guest expected: ${guest_expected}" if [[ $host_distro = "$guest_expected" ]]; then pedantic_fail 'host matches expected guest distro' fi guest_distro=$(ch-run "$ch_timg" -- \ cat /etc/os-release \ | grep -F PRETTY_NAME \ | sed -r 's/^(.*")?(.+)(")$/\2/') echo "guest: ${guest_distro}" [[ $guest_distro = "$guest_expected" ]] [[ $guest_distro != "$host_distro" ]] } @test 'user and group match host' { scope quick host_uid=$(id -u) guest_uid=$(ch-run "$ch_timg" -- id -u) [[ $host_uid = "$guest_uid" ]] host_pgid=$(id -g) guest_pgid=$(ch-run "$ch_timg" -- id -g) [[ $host_pgid = "$guest_pgid" ]] host_username=$(id -un) guest_username=$(ch-run "$ch_timg" -- id -un) [[ $host_username = "$guest_username" ]] host_pgroup=$(id -gn) guest_pgroup=$(ch-run "$ch_timg" -- id -gn) [[ $host_pgroup = "$guest_pgroup" ]] } charliecloud-0.26/test/run/ch-run_join.bats000066400000000000000000000353351417231051300207560ustar00rootroot00000000000000load ../common setup () { scope standard } ipc_clean () { rm -v /dev/shm/*ch-run* } ipc_clean_p () { sem="$(find /dev/shm -maxdepth 1 -name '*ch-run*')" [[ -z $sem ]] } joined_ok () { # parameters proc_ct_total=$1 # total number of processes peer_ct_node=$2 # size of each peer group (peers per node) namespace_ct=$3 # number of different namespace IDs status=$4 # exit status output="$5" # output echo "$output" # exit success printf ' exit status: ' 1>&2 if [[ $status -eq 0 ]]; then printf 'ok\n' 1>&2 else printf 'fail (%d)\n' "$status" 1>&2 return 1 fi # number of processes printf ' process count; expected %d: ' "$proc_ct_total" 1>&2 proc_ct_found=$(echo "$output" | grep -Ec 'join: 1 [0-9]+ [0-9a-z]+') if [[ $proc_ct_total -eq "$proc_ct_found" ]]; then printf 'ok\n' else printf 'fail (%d)\n' "$proc_ct_found" 1>&2 return 1 fi # number of peers printf ' peer group size; expected %d: ' "$peer_ct_node" 1>&2 peer_cts=$( echo "$output" \ | sed -rn 's/^ch-run\[[0-9]+\]: join: 1 ([0-9]+) .+$/\1/p') peer_ct_found=$(echo "$peer_cts" | sort -u) peer_cts_found=$(echo "$peer_ct_found" | wc -l) if [[ $peer_cts_found -ne 1 ]]; then printf 'fail (%d different counts reported)\n' "$peer_cts_found" 1>&2 return 1 fi if [[ $peer_ct_found -eq "$peer_ct_node" ]]; then printf 'ok\n' 1>&2 else printf 'fail (%d)\n' "$peer_ct_found" 1>&2 return 1 fi # correct number of namespace IDs for i in /proc/self/ns/*; do printf ' namespace count; expected %d: %s: ' "$namespace_ct" "$i" 1>&2 namespace_ct_found=$( echo "$output" \ | grep -E "^${i}:" \ | sort -u \ | wc -l) if [[ $namespace_ct -eq "$namespace_ct_found" ]]; then printf 'ok\n' 1>&2 else printf 'fail (%d)\n' "$namespace_ct_found" 1>&2 return 1 fi done } # Unset environment variables that might be used. unset_vars () { unset OMPI_COMM_WORLD_LOCAL_SIZE unset SLURM_CPUS_ON_NODE unset SLURM_STEP_ID unset SLURM_STEP_TASKS_PER_NODE } @test 'ch-run --join: /dev/shm starts clean' { if ( ! ipc_clean_p ); then echo 'warning: /dev/shm contains leftover ch-run IPC' ipc_clean false fi } @test 'ch-run --join: one peer, direct launch' { unset_vars ipc_clean_p # --join-ct run ch-run -v --join-ct=1 "$ch_timg" -- /test/printns joined_ok 1 1 1 "$status" "$output" r='join: 1 1 [0-9]+ 0' # status from getppid(2) is all digits [[ $output =~ $r ]] [[ $output = *'join: peer group size from command line'* ]] ipc_clean_p # join count from an environment variable SLURM_CPUS_ON_NODE=1 run ch-run -v --join "$ch_timg" -- /test/printns joined_ok 1 1 1 "$status" "$output" [[ $output = *'join: peer group size from SLURM_CPUS_ON_NODE'* ]] ipc_clean_p # join count from an environment variable with extra goop SLURM_CPUS_ON_NODE=1foo ch-run --join "$ch_timg" -- /test/printns joined_ok 1 1 1 "$status" "$output" [[ $output = *'join: peer group size from SLURM_CPUS_ON_NODE'* ]] ipc_clean_p # join tag run ch-run -v --join-ct=1 --join-tag=foo "$ch_timg" -- /test/printns joined_ok 1 1 1 "$status" "$output" [[ $output = *'join: 1 1 foo 0'* ]] [[ $output = *'join: peer group tag from command line'* ]] ipc_clean_p SLURM_STEP_ID=bar run ch-run -v --join-ct=1 "$ch_timg" -- /test/printns joined_ok 1 1 1 "$status" "$output" [[ $output = *'join: 1 1 bar 0'* ]] [[ $output = *'join: peer group tag from SLURM_STEP_ID'* ]] ipc_clean_p } @test 'ch-run --join: two peers, direct launch' { unset_vars ipc_clean_p rm -f "$BATS_TMPDIR"/join.?.* # first peer (winner) ch-run -v --join-ct=2 --join-tag=foo "$ch_timg" -- \ /test/printns 5 "${BATS_TMPDIR}/join.1.ns" \ >& "${BATS_TMPDIR}/join.1.err" & sleep 1 cat "${BATS_TMPDIR}/join.1.err" cat "${BATS_TMPDIR}/join.1.ns" grep -Fq 'join: 1 2' "${BATS_TMPDIR}/join.1.err" grep -Fq 'join: I won' "${BATS_TMPDIR}/join.1.err" ! grep -Fq 'join: cleaning up IPC' "${BATS_TMPDIR}/join.1.err" # IPC resources present? test -e /dev/shm/ch-run_foo test -e /dev/shm/sem.ch-run_foo # second peer (loser) run ch-run -v --join-ct=2 --join-tag=foo "$ch_timg" -- \ /test/printns 0 "${BATS_TMPDIR}/join.2.ns" echo "$output" [[ $status -eq 0 ]] cat "${BATS_TMPDIR}/join.2.ns" echo "$output" | grep -Fq 'join: 1 2' echo "$output" | grep -Fq 'join: I lost' echo "$output" | grep -Fq 'joining namespaces of pid' echo "$output" | grep -Fq 'join: cleaning up IPC' # same namespaces? for i in /proc/self/ns/*; do [[ 1 = $( cat "$BATS_TMPDIR"/join.?.ns \ | grep -E "^${i}:" | uniq | wc -l) ]] done ipc_clean_p } @test 'ch-run --join: three peers, direct launch' { unset_vars ipc_clean_p rm -f "$BATS_TMPDIR"/join.?.* # first peer (winner) ch-run -v --join-ct=3 --join-tag=foo "$ch_timg" -- \ /test/printns 5 "${BATS_TMPDIR}/join.1.ns" \ >& "${BATS_TMPDIR}/join.1.err" & sleep 1 cat "${BATS_TMPDIR}/join.1.err" cat "${BATS_TMPDIR}/join.1.ns" grep -Fq 'join: 1 3' "${BATS_TMPDIR}/join.1.err" grep -Fq 'join: I won' "${BATS_TMPDIR}/join.1.err" grep -Fq 'join: 2 peers left' "${BATS_TMPDIR}/join.1.err" ! grep -Fq 'join: cleaning up IPC' "${BATS_TMPDIR}/join.1.err" # second peer (loser, no cleanup) ch-run -v --join-ct=3 --join-tag=foo "${ch_timg}" -- \ /test/printns 0 "${BATS_TMPDIR}/join.2.ns" \ >& "${BATS_TMPDIR}/join.2.err" & sleep 1 cat "${BATS_TMPDIR}/join.2.err" cat "${BATS_TMPDIR}/join.2.ns" grep -Fq 'join: 1 3' "${BATS_TMPDIR}/join.2.err" grep -Fq 'join: I lost' "${BATS_TMPDIR}/join.2.err" grep -Fq 'joining namespaces of pid' "${BATS_TMPDIR}/join.2.err" grep -Fq 'join: 1 peers left' "${BATS_TMPDIR}/join.2.err" ! grep -Fq 'join: cleaning up IPC' "${BATS_TMPDIR}/join.2.err" # IPC resources present? test -e /dev/shm/ch-run_foo test -e /dev/shm/sem.ch-run_foo # third peer (loser, cleanup) ch-run -v --join-ct=3 --join-tag=foo "$ch_timg" -- \ /test/printns 0 "${BATS_TMPDIR}/join.3.ns" \ >& "${BATS_TMPDIR}/join.3.err" & sleep 1 cat "${BATS_TMPDIR}/join.3.err" cat "${BATS_TMPDIR}/join.3.ns" grep -Fq 'join: 1 3' "${BATS_TMPDIR}/join.3.err" grep -Fq 'join: I lost' "${BATS_TMPDIR}/join.2.err" grep -Fq 'joining namespaces of pid' "${BATS_TMPDIR}/join.2.err" grep -Fq 'join: 0 peers left' "${BATS_TMPDIR}/join.3.err" grep -Fq 'join: cleaning up IPC' "${BATS_TMPDIR}/join.3.err" # same namespaces? for i in /proc/self/ns/*; do [[ 1 = $( cat "$BATS_TMPDIR"/join.?.ns \ | grep -E "^$i:" | uniq | wc -l) ]] done ipc_clean_p } @test 'ch-run --join: multiple peers, framework launch' { multiprocess_ok ipc_clean_p # Two peers, one node. Should be one of each of the namespaces. Make sure # everyone chdir(2)s properly. # shellcheck disable=SC2086 run $ch_mpirun_2_1node ch-run -v --join --cd /test "$ch_timg" -- ./printns 2 ipc_clean_p joined_ok 2 2 1 "$status" "$output" # One peer per core across the allocation. Should be $ch_nodes of each # of the namespaces. # shellcheck disable=SC2086 run $ch_mpirun_core ch-run -v --join "$ch_timg" -- /test/printns 4 joined_ok "$ch_cores_total" "$ch_cores_node" "$ch_nodes" \ "$status" "$output" ipc_clean_p } @test 'ch-run --join: peer group size errors' { unset_vars # --join but no join count run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ 'join: no valid peer group size found' ]] ipc_clean_p # join count no digits run ch-run --join-ct=a "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ 'join-ct: no digits found' ]] SLURM_CPUS_ON_NODE=a run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ 'SLURM_CPUS_ON_NODE: no digits found' ]] ipc_clean_p # join count empty string run ch-run --join-ct='' "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ '--join-ct: no digits found' ]] SLURM_CPUS_ON_NODE=-1 run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ 'join: no valid peer group size found' ]] ipc_clean_p # --join-ct digits followed by extra goo (OK from environment variable) run ch-run --join-ct=1a "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ '--join-ct: extra characters after digits' ]] ipc_clean_p # Regex for out-of-range error. range_re='.*: .*out of range' # join count above INT_MAX run ch-run --join-ct=2147483648 "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] SLURM_CPUS_ON_NODE=2147483648 \ run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] ipc_clean_p # join count below INT_MIN run ch-run --join-ct=-2147483649 "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] SLURM_CPUS_ON_NODE=-2147483649 \ run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] ipc_clean_p # join count above LONG_MAX run ch-run --join-ct=9223372036854775808 "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] SLURM_CPUS_ON_NODE=9223372036854775808 \ run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] ipc_clean_p # join count below LONG_MIN run ch-run --join-ct=-9223372036854775809 "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] SLURM_CPUS_ON_NODE=-9223372036854775809 \ run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ $range_re ]] ipc_clean_p } @test 'ch-run --join: peer group tag errors' { unset_vars # Use a join count of 1 throughout. export SLURM_CPUS_ON_NODE=1 # join tag empty string run ch-run --join-tag='' "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ 'join: peer group tag cannot be empty string' ]] SLURM_STEP_ID='' run ch-run --join "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output =~ 'join: peer group tag cannot be empty string' ]] ipc_clean_p } @test 'ch-run --join-pid: without prior --join' { unset_vars ipc_clean_p rm -f "$BATS_TMPDIR"/join.?.* # First ch-run creates the namespaces with no joining at all. ch-run -v "$ch_timg" -- \ /test/printns 5 "${BATS_TMPDIR}/join.1.ns" \ >& "${BATS_TMPDIR}/join.1.err" & sleep 1 cat "${BATS_TMPDIR}/join.1.err" cat "${BATS_TMPDIR}/join.1.ns" grep -Fq "join: 0 0 (null) 0" "${BATS_TMPDIR}/join.1.err" # PID of ch-run/printns above. pid=$(sed -En 's/^ch-run\[([0-9]+)\]: executing:.+$/\1/p' \ "${BATS_TMPDIR}/join.1.err") echo "found pid: ${pid}" [[ -n $pid ]] # Second ch-run joins the first's namespaces. run ch-run -v --join-pid="$pid" "$ch_timg" -- \ /test/printns 0 "${BATS_TMPDIR}/join.2.ns" echo "$output" [[ $status -eq 0 ]] cat "${BATS_TMPDIR}/join.2.ns" echo "$output" | grep -Fq "join: 0 0 (null) ${pid}" # Same namespaces? for i in /proc/self/ns/*; do [[ 1 = $( cat "$BATS_TMPDIR"/join.?.ns \ | grep -E "^${i}:" | uniq | wc -l) ]] done ipc_clean_p } @test 'ch-run --join-pid: with prior --join' { unset_vars ipc_clean_p rm -f "$BATS_TMPDIR"/join.?.* # First of two peers (winner). ch-run -v --join-ct=2 --join-tag=bar "$ch_timg" -- \ /test/printns 5 "${BATS_TMPDIR}/join.1.ns" \ >& "${BATS_TMPDIR}/join.1.err" & sleep 1 cat "${BATS_TMPDIR}/join.1.err" cat "${BATS_TMPDIR}/join.1.ns" grep -Fq 'join: 1 2' "${BATS_TMPDIR}/join.1.err" grep -Fq 'join: I won' "${BATS_TMPDIR}/join.1.err" ! grep -Fq 'join: cleaning up IPC' "${BATS_TMPDIR}/join.1.err" # PID of first peer. pid=$(sed -En 's/^ch-run\[([0-9]+)\]: join: winner initializing.+$/\1/p' \ "${BATS_TMPDIR}/join.1.err") echo "found pid: ${pid}" [[ -n $pid ]] # Second of two peers (loser). ch-run -v --join-ct=2 --join-tag=bar "${ch_timg}" -- \ /test/printns 5 "${BATS_TMPDIR}/join.2.ns" \ >& "${BATS_TMPDIR}/join.2.err" & sleep 1 cat "${BATS_TMPDIR}/join.2.err" cat "${BATS_TMPDIR}/join.2.ns" grep -Fq 'join: 1 2' "${BATS_TMPDIR}/join.2.err" grep -Fq 'join: I lost' "${BATS_TMPDIR}/join.2.err" grep -Fq "joining namespaces of pid ${pid}" "${BATS_TMPDIR}/join.2.err" grep -Fq 'join: 0 peers left' "${BATS_TMPDIR}/join.2.err" grep -Fq 'join: cleaning up IPC' "${BATS_TMPDIR}/join.2.err" # Third ch-run simulates unplanned, joins existing namespaces. run ch-run -v --join-pid="$pid" "$ch_timg" -- \ /test/printns 0 "${BATS_TMPDIR}/join.3.ns" echo "$output" [[ $status -eq 0 ]] cat "${BATS_TMPDIR}/join.3.ns" echo "$output" | grep -Fq "join: 0 0 (null) ${pid}" echo "$output" | grep -Fq "joining namespaces of pid ${pid}" ! echo "$output" | grep -Fq 'join: I won' ! echo "$output" | grep -Fq 'join: I lost' ! echo "$output" | grep -q 'join: .+ peers left' ! echo "$output" | grep -Fq 'join: cleaning up IPC' # Same namespaces? for i in /proc/self/ns/*; do [[ 1 = $( cat "$BATS_TMPDIR"/join.?.ns \ | grep -E "^${i}:" | uniq | wc -l) ]] done ipc_clean_p } @test 'ch-run --join-pid: errors' { # Can't join namespaces of processes we don't own. run ch-run -v --join-pid=1 "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output = *"join: can't open /proc/1/ns/user: Permission denied"* ]] # Can't join namespaces of processes that don't exist. pid=2147483647 run ch-run -v --join-pid="$pid" "$ch_timg" -- true echo "$output" [[ $status -eq 1 ]] [[ $output = *"join: no PID ${pid}: /proc/${pid}/ns/user not found"* ]] } @test 'ch-run --join: /dev/shm ends clean' { if ( ! ipc_clean_p ); then echo 'warning: /dev/shm contains leftover ch-run IPC' ipc_clean false fi } charliecloud-0.26/test/run/ch-run_misc.bats000066400000000000000000000741701417231051300207520ustar00rootroot00000000000000load ../common @test 'relative path to image' { # issue #6 scope quick cd "$(dirname "$ch_timg")" && ch-run "$(basename "$ch_timg")" -- /bin/true } @test 'symlink to image' { # issue #50 scope quick ln -sf "$ch_timg" "${BATS_TMPDIR}/symlink-test" ch-run "${BATS_TMPDIR}/symlink-test" -- /bin/true } @test 'mount image read-only' { scope quick run ch-run "$ch_timg" sh < write' ch-run -w "$ch_timg" rm write } @test '/usr/bin/ch-ssh' { # Note: --ch-ssh without /usr/bin/ch-ssh is in test "broken image errors". scope quick ls -l "$ch_bin/ch-ssh" ch-run --ch-ssh "$ch_timg" -- ls -l /usr/bin/ch-ssh ch-run --ch-ssh "$ch_timg" -- test -x /usr/bin/ch-ssh # Test bind-mount by comparing size rather than e.g. "ch-ssh --version" # because ch-ssh won't run on Alpine (issue #4). host_size=$(stat -c %s "${ch_bin}/ch-ssh") guest_size=$(ch-run --ch-ssh "$ch_timg" -- stat -c %s /usr/bin/ch-ssh) echo "host: ${host_size}, guest: ${guest_size}" [[ $host_size -eq "$guest_size" ]] } @test 'optional default bind mounts silently skipped' { scope standard [[ ! -e "${ch_timg}/var/opt/cray/alps/spool" ]] [[ ! -e "${ch_timg}/var/opt/cray/hugetlbfs" ]] ch-run "$ch_timg" -- mount | ( ! grep -F /var/opt/cray/alps/spool ) ch-run "$ch_timg" -- mount | ( ! grep -F /var/opt/cray/hugetlbfs ) } # shellcheck disable=SC2016 @test '$CH_RUNNING' { scope standard if [[ -v CH_RUNNING ]]; then echo "\$CH_RUNNING already set: $CH_RUNNING" false fi run ch-run "$ch_timg" -- /bin/sh -c 'env | grep -E ^CH_RUNNING' echo "$output" [[ $status -eq 0 ]] [[ $output = 'CH_RUNNING=Weird Al Yankovic' ]] } # shellcheck disable=SC2016 @test '$HOME' { scope quick echo "host: $HOME" [[ $HOME ]] [[ $USER ]] # default: set $HOME # shellcheck disable=SC2016 run ch-run "$ch_timg" -- /bin/sh -c 'echo $HOME' echo "$output" [[ $status -eq 0 ]] [[ $output = /home/$USER ]] # no change if --no-home # shellcheck disable=SC2016 run ch-run --no-home "$ch_timg" -- /bin/sh -c 'echo $HOME' echo "$output" [[ $status -eq 0 ]] [[ $output = "$HOME" ]] # puke if $HOME not set home_tmp=$HOME unset HOME # shellcheck disable=SC2016 run ch-run "$ch_timg" -- /bin/sh -c 'echo $HOME' export HOME="$home_tmp" echo "$output" [[ $status -eq 1 ]] # shellcheck disable=SC2016 [[ $output = *'cannot find home directory: is $HOME set?'* ]] # puke if $USER not set user_tmp=$USER unset USER # shellcheck disable=SC2016 run ch-run "$ch_timg" -- /bin/sh -c 'echo $HOME' export USER=$user_tmp echo "$output" [[ $status -eq 1 ]] # shellcheck disable=SC2016 [[ $output = *'$USER not set'* ]] } # shellcheck disable=SC2016 @test '$PATH: add /bin' { scope quick echo "$PATH" # if /bin is in $PATH, latter passes through unchanged PATH2="$ch_bin:/bin:/usr/bin" echo "$PATH2" # shellcheck disable=SC2016 PATH=$PATH2 run ch-run "$ch_timg" -- /bin/sh -c 'echo $PATH' echo "$output" [[ $status -eq 0 ]] [[ $output = "$PATH2" ]] PATH2="/bin:$ch_bin:/usr/bin" echo "$PATH2" # shellcheck disable=SC2016 PATH=$PATH2 run ch-run "$ch_timg" -- /bin/sh -c 'echo $PATH' echo "$output" [[ $status -eq 0 ]] [[ $output = "$PATH2" ]] # if /bin isn't in $PATH, former is added to end PATH2="$ch_bin:/usr/bin" echo "$PATH2" # shellcheck disable=SC2016 PATH=$PATH2 run ch-run "$ch_timg" -- /bin/sh -c 'echo $PATH' echo "$output" [[ $status -eq 0 ]] [[ $output = $PATH2:/bin ]] } # shellcheck disable=SC2016 @test '$PATH: unset' { scope standard old_path=$PATH unset PATH run "$ch_runfile" "$ch_timg" -- \ /usr/bin/python3 -c 'import os; print(os.getenv("PATH") is None)' PATH=$old_path echo "$output" [[ $status -eq 0 ]] # shellcheck disable=SC2016 [[ $output = *': $PATH not set'* ]] [[ $output = *'True'* ]] } # shellcheck disable=SC2016 @test '$TMPDIR' { scope standard mkdir -p "${BATS_TMPDIR}/tmpdir" touch "${BATS_TMPDIR}/tmpdir/file-in-tmpdir" TMPDIR=${BATS_TMPDIR}/tmpdir run ch-run "$ch_timg" -- ls -1 /tmp echo "$output" [[ $status -eq 0 ]] [[ $output = file-in-tmpdir ]] } @test 'ch-run --cd' { scope quick # Default initial working directory is /. run ch-run "$ch_timg" -- pwd echo "$output" [[ $status -eq 0 ]] [[ $output = '/' ]] # Specify initial working directory. run ch-run --cd /dev "$ch_timg" -- pwd echo "$output" [[ $status -eq 0 ]] [[ $output = '/dev' ]] # Error if directory does not exist. run ch-run --cd /goops "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output =~ "can't cd to /goops: No such file or directory" ]] } @test 'ch-run --bind' { scope quick [[ $CH_TEST_PACK_FMT = *-unpack ]] || skip 'needs writeable image' # set up sources mkdir -p "${ch_timg}/${ch_imgdir}/bind1" mkdir -p "${ch_timg}/${ch_imgdir}/bind2" # remove destinations that will be created rmdir "${ch_timg}/bind3" || true [[ ! -e ${ch_timg}/bind3 ]] rmdir "${ch_timg}/bind4/a" "${ch_timg}/bind4/b" "${ch_timg}/bind4" || true [[ ! -e ${ch_timg}/bind4 ]] # one bind, default destination ch-run -b "${ch_imgdir}/bind1" "$ch_timg" -- cat "${ch_imgdir}/bind1/file1" # one bind, explicit destination ch-run -b "${ch_imgdir}/bind1:/mnt/9" "$ch_timg" -- cat /mnt/9/file1 # one bind, create destination, one level ch-run -w -b "${ch_imgdir}/bind1:/bind3" "$ch_timg" -- cat /bind3/file1 # one bind, create destination, two levels ch-run -w -b "${ch_imgdir}/bind1:/bind4/a" "$ch_timg" -- cat /bind4/a/file1 # one bind, create destination, two levels via symlink [[ -L ${ch_timg}/mnt/bind4 ]] ch-run -w -b "${ch_imgdir}/bind1:/mnt/bind4/b" "$ch_timg" \ -- cat /bind4/b/file1 # two binds, default destination ch-run -b "${ch_imgdir}/bind1" -b "${ch_imgdir}/bind2" "$ch_timg" \ -- cat "${ch_imgdir}/bind1/file1" "${ch_imgdir}/bind2/file2" # two binds, explicit destinations ch-run -b "${ch_imgdir}/bind1:/mnt/8" -b "${ch_imgdir}/bind2:/mnt/9" \ "$ch_timg" \ -- cat /mnt/8/file1 /mnt/9/file2 # two binds, default/explicit ch-run -b "${ch_imgdir}/bind1" -b "${ch_imgdir}/bind2:/mnt/9" "$ch_timg" \ -- cat "${ch_imgdir}/bind1/file1" /mnt/9/file2 # two binds, explicit/default ch-run -b "${ch_imgdir}/bind1:/mnt/8" -b "${ch_imgdir}/bind2" "$ch_timg" \ -- cat /mnt/8/file1 "${ch_imgdir}/bind2/file2" # bind one source at two destinations ch-run -b "${ch_imgdir}/bind1:/mnt/8" -b "${ch_imgdir}/bind1:/mnt/9" \ "$ch_timg" \ -- diff -u /mnt/8/file1 /mnt/9/file1 # bind two sources at one destination ch-run -b "${ch_imgdir}/bind1:/mnt/9" -b "${ch_imgdir}/bind2:/mnt/9" \ "$ch_timg" \ -- sh -c '[ ! -e /mnt/9/file1 ] && cat /mnt/9/file2' # omit tmpfs at /home, which shouldn't be empty ch-run --no-home "$ch_timg" -- cat /home/overmount-me # overmount tmpfs at /home ch-run -b "${ch_imgdir}/bind1:/home" "$ch_timg" -- cat /home/file1 # bind to /home without overmount ch-run --no-home -b "${ch_imgdir}/bind1:/home" "$ch_timg" \ -- cat /home/file1 # omit default /home, with unrelated --bind ch-run --no-home -b "${ch_imgdir}/bind1" "$ch_timg" \ -- cat "${ch_imgdir}/bind1/file1" } @test 'ch-run --bind errors' { scope quick [[ $CH_TEST_PACK_FMT = *-unpack ]] || skip 'needs writeable image' # no argument to --bind run ch-run "$ch_timg" -b echo "$output" [[ $status -eq 64 ]] [[ $output = *'option requires an argument'* ]] # empty argument to --bind run ch-run -b '' "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *'--bind: no source provided'* ]] # source not provided run ch-run -b :/mnt/9 "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *'--bind: no source provided'* ]] # destination not provided run ch-run -b "${ch_imgdir}/bind1:" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *'--bind: no destination provided'* ]] # destination is / run ch-run -b "${ch_imgdir}/bind1:/" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"--bind: destination can't be /"* ]] # destination is relative run ch-run -b "${ch_imgdir}/bind1:foo" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"--bind: destination must be absolute"* ]] # destination climbs out of image, exists run ch-run -b "${ch_imgdir}/bind1:/.." "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't bind: ${ch_imgdir} not subdirectory of ${ch_timg}"* ]] # destination climbs out of image, does not exist run ch-run -b "${ch_imgdir}/bind1:/../doesnotexist/a" "$ch_timg" \ -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: ${ch_imgdir}/doesnotexist not subdirectory of ${ch_timg}"* ]] [[ ! -e ${ch_imgdir}/doesnotexist ]] # source does not exist run ch-run -b "${ch_imgdir}/hoops" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't bind: source not found: ${ch_imgdir}/hoops"* ]] # destination does not exist run ch-run -b "${ch_imgdir}/bind1:/goops" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: ${ch_timg}/goops: Read-only file system"* ]] # neither source nor destination exist run ch-run -b "${ch_imgdir}/hoops:/goops" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't bind: source not found: ${ch_imgdir}/hoops"* ]] # correct bind followed by source does not exist run ch-run -b "${ch_imgdir}/bind1:/mnt/0" -b "${ch_imgdir}/hoops" \ "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't bind: source not found: ${ch_imgdir}/hoops"* ]] # correct bind followed by destination does not exist run ch-run -b "${ch_imgdir}/bind1:/mnt/0" -b "${ch_imgdir}/bind2:/goops" \ "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: ${ch_timg}/goops: Read-only file system"* ]] # destination is broken symlink, absolute run ch-run -b "${ch_imgdir}/bind1:/mnt/link-b0rken-abs" "$ch_timg" \ -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: symlink not relative: ${ch_timg}/mnt/link-b0rken-abs"* ]] # destination is broken symlink, relative, directly run ch-run -b "${ch_imgdir}/bind1:/mnt/link-b0rken-rel" "$ch_timg" \ -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: broken symlink: ${ch_timg}/mnt/link-b0rken-rel"* ]] [[ ! -e ${ch_timg}/mnt/doesnotexist ]] # destination goes through broken symlink run ch-run -b "${ch_imgdir}/bind1:/mnt/link-b0rken-rel/a" "$ch_timg" \ -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: broken symlink: ${ch_timg}/mnt/link-b0rken-rel"* ]] [[ ! -e ${ch_timg}/mnt/doesnotexist ]] # destination is absolute symlink outside image run ch-run -b "${ch_imgdir}/bind1:/mnt/link-bad-abs" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't bind: "*" not subdirectory of ${ch_timg}"* ]] # destination relative symlink outside image run ch-run -b "${ch_imgdir}/bind1:/mnt/link-bad-rel" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't bind: "*" not subdirectory of ${ch_timg}"* ]] # mkdir(2) under existing bind-mount, default, first level run ch-run -b "${ch_imgdir}/bind1:/proc/doesnotexist" "$ch_timg" \ -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: ${ch_timg}/proc/doesnotexist under existing bind-mount ${ch_timg}/proc "* ]] # mkdir(2) under existing bind-mount, user-supplied, first level run ch-run -b "${ch_imgdir}/bind1:/mnt/0" \ -b "${ch_imgdir}/bind2:/mnt/0/foo" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: ${ch_timg}/mnt/0/foo under existing bind-mount ${ch_timg}/mnt/0 "* ]] # mkdir(2) under existing bind-mount, default, 2nd level run ch-run -b "${ch_imgdir}/bind1:/proc/sys/doesnotexist" "$ch_timg" \ -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't mkdir: ${ch_timg}/proc/sys/doesnotexist under existing bind-mount ${ch_timg}/proc "* ]] } @test 'ch-run --set-env' { scope standard # Quirk that is probably too obscure to put in the documentation: The # string containing only two straight quotes does not round-trip through # "printenv" or "env", though it does round-trip through Bash "set": # # $ export foo="''" # $ echo [$foo] # [''] # $ set | fgrep foo # foo=''\'''\''' # $ eval $(set | fgrep foo) # $ echo [$foo] # [''] # $ printenv | fgrep foo # foo='' # $ eval $(printenv | fgrep foo) # $ echo $foo # [] # Valid inputs. Use Python to print the results to avoid ambiguity. export SET=foo export SET2=boo f_in=${BATS_TMPDIR}/env.txt cat <<'EOF' > "$f_in" chse_a1=bar chse_a2=bar=baz chse_a3=bar baz chse_a4='bar' chse_a5= chse_a6='' chse_a7='''' chse_b1="bar" chse_b2=bar # baz chse_b3=bar chse_b4= bar chse_c1=foo chse_c1=bar chse_d1=foo: chse_d2=:foo chse_d3=: chse_d4=:: chse_d5=$SET chse_d6=$SET:$SET2 chse_d7=bar:$SET chse_d8=bar:baz:$SET chse_d9=$SET:bar chse_dA=$SET:bar:baz chse_dB=bar:$SET:baz chse_dC=bar:baz:$SET:bar:baz chse_e1=:$SET chse_e2=::$SET chse_e3=$SET: chse_e4=$SET:: chse_e5=bar:$ chse_e6=bar:* chse_e7=bar$SET chse_e8=bar::$SET chse_f1=$UNSET chse_f2=foo:$UNSET chse_f3=foo:$UNSET: chse_f4=$UNSET:foo chse_f5=:$UNSET:foo chse_f6=foo:$UNSET:$UNSET2 chse_f7=foo:$UNSET:$UNSET2: chse_f8=$UNSET:$UNSET2:foo chse_f9=:$UNSET:$UNSET2:foo chse_fA=foo:$UNSET:bar chse_fB=foo:$UNSET:$UNSET2:bar chse_fC=:$UNSET chse_fD=::$UNSET chse_fE=$UNSET: chse_fF=$UNSET:: EOF cat "$f_in" output_expected=$(cat <<'EOF' (' chse_b3', 'bar') ('chse_a1', 'bar') ('chse_a2', 'bar=baz') ('chse_a3', 'bar baz') ('chse_a4', 'bar') ('chse_a5', '') ('chse_a6', '') ('chse_a7', "''") ('chse_b1', '"bar"') ('chse_b2', 'bar # baz') ('chse_b4', ' bar') ('chse_c1', 'bar') ('chse_d1', 'foo:') ('chse_d2', ':foo') ('chse_d3', ':') ('chse_d4', '::') ('chse_d5', 'foo') ('chse_d6', 'foo:boo') ('chse_d7', 'bar:foo') ('chse_d8', 'bar:baz:foo') ('chse_d9', 'foo:bar') ('chse_dA', 'foo:bar:baz') ('chse_dB', 'bar:foo:baz') ('chse_dC', 'bar:baz:foo:bar:baz') ('chse_e1', ':foo') ('chse_e2', '::foo') ('chse_e3', 'foo:') ('chse_e4', 'foo::') ('chse_e5', 'bar:$') ('chse_e6', 'bar:*') ('chse_e7', 'bar$SET') ('chse_e8', 'bar::foo') ('chse_f1', '') ('chse_f2', 'foo') ('chse_f3', 'foo:') ('chse_f4', 'foo') ('chse_f5', ':foo') ('chse_f6', 'foo') ('chse_f7', 'foo:') ('chse_f8', 'foo') ('chse_f9', ':foo') ('chse_fA', 'foo:bar') ('chse_fB', 'foo:bar') ('chse_fC', '') ('chse_fD', ':') ('chse_fE', '') ('chse_fF', ':') EOF ) run ch-run --set-env="$f_in" "$ch_timg" -- python3 -c 'import os; [print((k,v)) for (k,v) in sorted(os.environ.items()) if "chse_" in k]' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") } @test 'ch-run --set-env from Dockerfile' { scope standard prerequisites_ok argenv img=${ch_imgdir}/argenv output_expected=$(cat <<'EOF' chse_env1_df=env1 chse_env2_df=env2 env1 EOF ) run ch-run --set-env "$img" -- sh -c 'env | grep -E "^chse_"' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") } @test 'ch-run --set-env errors' { scope standard f_in=${BATS_TMPDIR}/env.txt # file does not exist run ch-run --set-env=doesnotexist.txt "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't open: doesnotexist.txt: No such file or directory"* ]] # /ch/environment missing run ch-run --set-env "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't open: /ch/environment: No such file or directory"* ]] # Note: I'm not sure how to test an error during reading, i.e., getline(3) # rather than fopen(3). Hence no test for "error reading". # invalid line: missing '=' echo 'FOO bar' > "$f_in" run ch-run --set-env="$f_in" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't parse variable: no delimiter: ${f_in}:1"* ]] # invalid line: no name echo '=bar' > "$f_in" run ch-run --set-env="$f_in" "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't parse variable: empty name: ${f_in}:1"* ]] } # shellcheck disable=SC2016 @test 'ch-run --set-env command line' { scope standard # missing ''' # shellcheck disable=SC2086 run ch-run --set-env=foo='$test:app' --env-no-expand -v "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *'environment: foo=$test:app'* ]] # missing environment variable run ch-run --set-env='$PATH:foo' "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *'$PATH:foo: No such file or directory'* ]] } @test 'ch-run --unset-env' { scope standard export chue_1=foo export chue_2=bar printf '\n# Nothing\n\n' run ch-run --unset-env=doesnotmatch "$ch_timg" -- env echo "$output" | sort [[ $status -eq 0 ]] ex='^(_|CH_RUNNING|HOME|PATH|SHLVL|TMPDIR)=' # expected to change diff -u <(env | grep -Ev "$ex" | sort) \ <(echo "$output" | grep -Ev "$ex" | sort) printf '\n# Everything\n\n' run ch-run --unset-env='*' "$ch_timg" -- env echo "$output" [[ $status -eq 0 ]] [[ $output = 'CH_RUNNING=Weird Al Yankovic' ]] printf '\n# Everything, plus shell re-adds\n\n' run ch-run --unset-env='*' "$ch_timg" -- /bin/sh -c 'env | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(printf 'CH_RUNNING=Weird Al Yankovic\nPWD=/\nSHLVL=1\n') \ <(echo "$output") printf '\n# Without wildcards\n\n' run ch-run --unset-env=chue_1 "$ch_timg" -- env echo "$output" [[ $status -eq 0 ]] diff -u <(printf 'chue_2=bar\n') <(echo "$output" | grep -E '^chue_') printf '\n# With wildcards\n\n' run ch-run --unset-env='chue_*' "$ch_timg" -- env echo "$output" [[ $status -eq 0 ]] [[ $(echo "$output" | grep -E '^chue_') = '' ]] printf '\n# Empty string\n\n' run ch-run --unset-env= "$ch_timg" -- env echo "$output" [[ $status -eq 1 ]] [[ $output = *'--unset-env: GLOB must have non-zero length'* ]] } @test 'ch-run mixed --set-env and --unset-env' { scope standard # Input. export chmix_a1=z export chmix_a2=y export chmix_a3=x f1_in=${BATS_TMPDIR}/env1.txt cat <<'EOF' > "$f1_in" chmix_b1=w chmix_b2=v EOF f2_in=${BATS_TMPDIR}/env2.txt cat <<'EOF' > "$f2_in" chmix_c1=u chmix_c2=t EOF # unset, unset output_expected=$(cat <<'EOF' chmix_a3=x EOF ) run ch-run --unset-env=chmix_a1 --unset-env=chmix_a2 "$ch_timg" -- \ sh -c 'env | grep -E ^chmix_ | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") echo '# set, set' output_expected=$(cat <<'EOF' chmix_a1=z chmix_a2=y chmix_a3=x chmix_b1=w chmix_b2=v chmix_c1=u chmix_c2=t EOF ) run ch-run --set-env="$f1_in" --set-env="$f2_in" "$ch_timg" -- \ sh -c 'env | grep -E ^chmix_ | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") echo '# unset, set' output_expected=$(cat <<'EOF' chmix_a2=y chmix_a3=x chmix_b1=w chmix_b2=v EOF ) run ch-run --unset-env=chmix_a1 --set-env="$f1_in" "$ch_timg" -- \ sh -c 'env | grep -E ^chmix_ | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") echo '# set, unset' output_expected=$(cat <<'EOF' chmix_a1=z chmix_a2=y chmix_a3=x chmix_b1=w EOF ) run ch-run --set-env="$f1_in" --unset-env=chmix_b2 "$ch_timg" -- \ sh -c 'env | grep -E ^chmix_ | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") echo '# unset, set, unset' output_expected=$(cat <<'EOF' chmix_a2=y chmix_a3=x chmix_b1=w EOF ) run ch-run --unset-env=chmix_a1 \ --set-env="$f1_in" \ --unset-env=chmix_b2 \ "$ch_timg" -- sh -c 'env | grep -E ^chmix_ | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") echo '# set, unset, set' output_expected=$(cat <<'EOF' chmix_a1=z chmix_a2=y chmix_a3=x chmix_b1=w chmix_c1=u chmix_c2=t EOF ) run ch-run --set-env="$f1_in" \ --unset-env=chmix_b2 \ --set-env="$f2_in" \ "$ch_timg" -- sh -c 'env | grep -E ^chmix_ | sort' echo "$output" [[ $status -eq 0 ]] diff -u <(echo "$output_expected") <(echo "$output") } @test 'ch-run: internal SquashFUSE mounting' { scope standard [[ $CH_TEST_PACK_FMT == squash-mount ]] || skip 'squash-mount format only' ch_mnt="/var/tmp/${USER}.ch/mnt" # default mount point run ch-run -v "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *"newroot: (null)"* ]] [[ $output = *"using default mount point: ${ch_mnt}"* ]] [[ -d ${ch_mnt} ]] rmdir "${ch_mnt}" # -m option mountpt="${BATS_TMPDIR}/sqfs_tmpdir" [[ -e $mountpt ]] || mkdir "$mountpt" run ch-run -m "$mountpt" -v "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *"newroot: ${mountpt}"* ]] rmdir "$mountpt" # -m with non-sqfs img img=${BATS_TMPDIR}/dirimg ch-convert -i squash "$ch_timg" "$img" run ch-run -m /doesnotexist -v "$img" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *"warning: --mount invalid with directory image, ignoring"* ]] [[ $output = *"newroot: ${img}"* ]] rm -Rf --one-file-system "$img" } @test 'ch-run: internal SquashFUSE errors' { scope standard [[ $CH_TEST_PACK_FMT == squash-mount ]] || skip 'squash-mount format only' # mount point is empty string run ch-run --mount= "$ch_timg" -- /bin/true echo "$output" [[ $status -ne 0 ]] # exits with status of 139 [[ $output = *"mount point can't be empty string"* ]] # mount point doesn't exist run ch-run -m /doesnotexist "$ch_timg" -- /bin/true echo "$output" [[ $status -ne 0 ]] # exits with status of 139 [[ $output = *"can't stat mount point: /doesnotexist: No such file or directory"* ]] # mount point is a file run ch-run -m ./fixtures/README "$ch_timg" -- /bin/true echo "$output" [[ $status -ne 0 ]] [[ $output = *'not a directory: ./fixtures/README'* ]] # image is file but not sqfs run ch-run -vv ./fixtures/README -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *'magic expected: 6873 7173; actual: 596f 7520'* ]] [[ $output = *'unknown image type: ./fixtures/README'* ]] # image is a broken sqfs sq_tmp="$BATS_TMPDIR"/b0rken.sqfs cp "$ch_timg" "$sq_tmp" # corrupt inode count (bytes 4–7, 0-indexed) printf '\xED\x5F\x84\x00' | dd of="$sq_tmp" bs=1 count=4 seek=4 conv=notrunc ls -l "$ch_timg" "$sq_tmp" run ch-run -vv "$sq_tmp" -- ls -l / echo "$output" [[ $status -ne 0 ]] [[ $output = *'magic expected: 6873 7173; actual: 6873 7173'* ]] [[ $output = *"can't open SquashFS: ${sq_tmp}"* ]] rm "$sq_tmp" } @test 'broken image errors' { scope standard img=${BATS_TMPDIR}/broken-image tmpdir=${TMPDIR:-/tmp} # Create an image skeleton. dirs=$(echo {dev,proc,sys}) files=$(echo etc/{group,passwd}) # shellcheck disable=SC2116 files_optional=$(echo etc/{hosts,resolv.conf}) mkdir -p "$img" for d in $dirs; do mkdir -p "${img}/$d"; done mkdir -p "${img}/etc" "${img}/home" "${img}/usr/bin" "${img}/tmp" for f in $files $files_optional; do touch "${img}/${f}"; done # This should start up the container OK but fail to find the user command. run ch-run "$img" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't execve(2): /bin/true: No such file or directory"* ]] # For each required file, we want a correct error if it's missing. for f in $files; do echo "required: ${f}" rm "${img}/${f}" ls -l "${img}/${f}" || true run ch-run "$img" -- /bin/true touch "${img}/${f}" # restore before test fails for idempotency echo "$output" [[ $status -eq 1 ]] r="can't bind: destination not found: .+/${f}" echo "expected: ${r}" [[ $output =~ $r ]] done # For each optional file, we want no error if it's missing. for f in $files_optional; do echo "optional: ${f}" rm "${img}/${f}" run ch-run "$img" -- /bin/true touch "${img}/${f}" # restore before test fails for idempotency echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't execve(2): /bin/true: No such file or directory"* ]] done # For all files, we want a correct error if it's not a regular file. for f in $files $files_optional; do echo "not a regular file: ${f}" rm "${img}/${f}" mkdir "${img}/${f}" run ch-run "$img" -- /bin/true rmdir "${img}/${f}" # restore before test fails for idempotency touch "${img}/${f}" echo "$output" [[ $status -eq 1 ]] r="can't bind .+ to /.+/${f}: Not a directory" echo "expected: ${r}" [[ $output =~ $r ]] done # For each directory, we want a correct error if it's missing. for d in $dirs tmp; do echo "required: ${d}" rmdir "${img}/${d}" run ch-run "$img" -- /bin/true mkdir "${img}/${d}" # restore before test fails for idempotency echo "$output" [[ $status -eq 1 ]] r="can't bind: destination not found: .+/${d}" echo "expected: ${r}" [[ $output =~ $r ]] done # For each directory, we want a correct error if it's not a directory. for d in $dirs tmp; do echo "not a directory: ${d}" rmdir "${img}/${d}" touch "${img}/${d}" run ch-run "$img" -- /bin/true rm "${img}/${d}" # restore before test fails for idempotency mkdir "${img}/${d}" echo "$output" [[ $status -eq 1 ]] r="can't bind .+ to /.+/${d}: Not a directory" echo "expected: ${r}" [[ $output =~ $r ]] done # --private-tmp rmdir "${img}/tmp" run ch-run --private-tmp "$img" -- /bin/true mkdir "${img}/tmp" # restore before test fails for idempotency echo "$output" [[ $status -eq 1 ]] r="can't mount tmpfs at /.+/tmp: No such file or directory" echo "expected: ${r}" [[ $output =~ $r ]] # /home without --private-home # FIXME: Not sure how to make the second mount(2) fail. rmdir "${img}/home" run ch-run "$img" -- /bin/true mkdir "${img}/home" # restore before test fails for idempotency echo "$output" [[ $status -eq 1 ]] r="can't mount tmpfs at /.+/home: No such file or directory" echo "expected: ${r}" [[ $output =~ $r ]] # --no-home shouldn't care if /home is missing rmdir "${img}/home" run ch-run --no-home "$img" -- /bin/true mkdir "${img}/home" # restore before test fails for idempotency echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't execve(2): /bin/true: No such file or directory"* ]] # --ch-ssh but no /usr/bin/ch-ssh run ch-run --ch-ssh "$img" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"--ch-ssh: /usr/bin/ch-ssh not in image"* ]] # Everything should be restored and back to the original error. run ch-run "$img" -- /bin/true echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't execve(2): /bin/true: No such file or directory"* ]] # At this point, there should be exactly two each of passwd and group # temporary files. Remove them. [[ $(find -H "$tmpdir" -maxdepth 1 -name 'ch-run_passwd*' | wc -l) -eq 2 ]] [[ $(find -H "$tmpdir" -maxdepth 1 -name 'ch-run_group*' | wc -l) -eq 2 ]] rm -v "$tmpdir"/ch-run_{passwd,group}* [[ $(find -H "$tmpdir" -maxdepth 1 -name 'ch-run_passwd*' | wc -l) -eq 0 ]] [[ $(find -H "$tmpdir" -maxdepth 1 -name 'ch-run_group*' | wc -l) -eq 0 ]] } @test 'UID and/or GID invalid on host' { scope standard uid_bad=8675309 gid_bad=8675310 # UID run ch-run -v --uid=$uid_bad "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *"UID ${uid_bad} not found; using dummy info"* ]] # GID run ch-run -v --gid=$gid_bad "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *"GID ${gid_bad} not found; using dummy info"* ]] # both run ch-run -v --uid=$uid_bad --gid=$gid_bad "$ch_timg" -- /bin/true echo "$output" [[ $status -eq 0 ]] [[ $output = *"UID ${uid_bad} not found; using dummy info"* ]] [[ $output = *"GID ${gid_bad} not found; using dummy info"* ]] } @test 'syslog' { # This test depends on a fairly specific syslog configuration, so just do # it on GitHub Actions. [[ -n $GITHUB_ACTIONS ]] || skip 'GitHub Actions only' [[ -n $CH_TEST_SUDO ]] || skip 'sudo required' expected="ch-run: uid=$(id -u) args=6: ch-run ${ch_timg} -- echo foo \"b a}\\\$r\"" echo "$expected" #shellcheck disable=SC2016 ch-run "$ch_timg" -- echo foo 'b a}$r' sudo tail -n 10 /var/log/syslog | grep -F "$expected" } charliecloud-0.26/test/run/ch-run_uidgid.bats000066400000000000000000000132011417231051300212500ustar00rootroot00000000000000load ../common setup () { scope standard if [[ -n $GUEST_USER ]]; then # Specific user requested for testing. [[ -n $GUEST_GROUP ]] guest_uid=$(id -u "$GUEST_USER") guest_gid=$(getent group "$GUEST_GROUP" | cut -d: -f3) uid_args="-u ${guest_uid}" gid_args="-g ${guest_gid}" echo "ID args: ${GUEST_USER}/${guest_uid} ${GUEST_GROUP}/${guest_gid}" echo else # No specific user requested. [[ -z $GUEST_GROUP ]] GUEST_USER=$(id -un) guest_uid=$(id -u) [[ $GUEST_USER = "$USER" ]] [[ $guest_uid -ne 0 ]] GUEST_GROUP=$(id -gn) guest_gid=$(id -g) [[ $guest_gid -ne 0 ]] uid_args= gid_args= echo "no ID arguments" echo fi } @test 'user and group as specified' { # shellcheck disable=SC2086 g=$(ch-run $uid_args $gid_args "$ch_timg" -- id -un) [[ $GUEST_USER = "$g" ]] # shellcheck disable=SC2086 g=$(ch-run $uid_args $gid_args "$ch_timg" -- id -u) [[ $guest_uid = "$g" ]] # shellcheck disable=SC2086 g=$(ch-run $uid_args $gid_args "$ch_timg" -- id -gn) [[ $GUEST_GROUP = "$g" ]] # shellcheck disable=SC2086 g=$(ch-run $uid_args $gid_args "$ch_timg" -- id -g) [[ $guest_gid = "$g" ]] } @test 'chroot escape' { # Try to escape a chroot(2) using the standard approach. # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- /test/chroot-escape } @test '/dev /proc /sys' { # Read some files in /dev, /proc, and /sys that I shouldn't have access to. # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- /test/dev_proc_sys.py } @test 'filesystem permission enforcement' { [[ $CH_TEST_PERMDIRS = skip ]] && skip 'user request' for d in $CH_TEST_PERMDIRS; do d="${d}/pass" echo "verifying: ${d}" # shellcheck disable=SC2086 ch-run --no-home --private-tmp \ $uid_args $gid_args -b "$d:/mnt/0" "$ch_timg" -- \ /test/fs_perms.py /mnt/0 done } @test 'mknod(2)' { # Make some device files. If this works, we might be able to later read or # write them to do things we shouldn't. Try on all mount points. # shellcheck disable=SC2016,SC2086 ch-run $uid_args $gid_args "$ch_timg" -- \ sh -c '/test/mknods $(cat /proc/mounts | cut -d" " -f2)' } @test 'privileged IPv4 bind(2)' { # Bind to privileged ports on all host IPv4 addresses. # # Some supported distributions don't have "hostname --all-ip-addresses". # Hence the awk voodoo. addrs=$(ip -o addr | awk '/inet / {gsub(/\/.*/, " ",$4); print $4}') # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- /test/bind_priv.py $addrs } @test 'remount host root' { # Re-mount the root filesystem. Notes: # # - Because we have /dev from the host, we don't need to create a new # device node. This makes the test simpler. In particular, we can # treat network and local root the same. # # - We leave the filesystem mounted even if successful, again to make # the test simpler. The rest of the tests will ignore it or maybe # over-mount something else. # # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- \ sh -c '[ -f /bin/mount -a -x /bin/mount ]' dev=$(findmnt -n -o SOURCE -T /) type=$(findmnt -n -o FSTYPE -T /) opts=$(findmnt -n -o OPTIONS -T /) # shellcheck disable=SC2086 run ch-run $uid_args $gid_args "$ch_timg" -- \ /bin/mount -n -o "$opts" -t "$type" "$dev" /mnt/0 echo "$output" # return codes from http://man7.org/linux/man-pages/man8/mount.8.html # busybox seems to use the same list case $status in 0) # "success" printf 'RISK\tsuccessful mount\n' return 1 ;; 1) ;& # "incorrect invocation or permissions" (we care which) 111) ;& # undocumented 255) # undocumented if [[ $output = *'ermission denied'* ]]; then printf 'SAFE\tmount exit %d, permission denied\n' "$status" return 0 elif [[ $dev = 'rootfs' && $output =~ 'No such device' ]]; then printf 'SAFE\tmount exit %d, no such device' "$status" return 0 else printf 'RISK\tmount exit %d w/o known explanation\n' "$status" return 1 fi ;; 32) # "mount failed" printf 'SAFE\tmount exited with code 32\n' return 0 ;; esac printf 'ERROR\tunknown exit code: %s\n' "$status" return 1 } @test 'setgroups(2)' { # Can we change our supplemental groups? # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- /test/setgroups } @test 'seteuid(2)' { # Try to seteuid(2) to another UID we shouldn't have access to # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- /test/setuid } @test 'signal process outside container' { # Send a signal to a process we shouldn't be able to signal, in this case # getty. This requires at least one getty running, i.e., at least one # virtual console waiting for login. In the past, distributions ran gettys # on several VCs by default, but in recent years they are often started # dynamically, so there may be none running. See your distro's # documentation on how to configure this. See also e.g. issue #840. [[ $(pgrep -c getty) -eq 0 ]] && pedantic_fail 'no getty process found' # shellcheck disable=SC2086 ch-run $uid_args $gid_args "$ch_timg" -- /test/signal_out.py } charliecloud-0.26/test/run/ch-tar2dir.bats000066400000000000000000000070311417231051300204720ustar00rootroot00000000000000load ../common @test 'ch-tar2dir: unpack image' { scope standard [[ $CH_TEST_PACK_FMT = tar-unpack ]] || skip 'issue #693' if ( image_ok "$ch_timg" ); then # image exists, remove so we can test new unpack rm -Rf --one-file-system "$ch_timg" fi ch-tar2dir "$ch_ttar" "$ch_imgdir" # new unpack image_ok "$ch_timg" ch-tar2dir "$ch_ttar" "$ch_imgdir" # overwrite image_ok "$ch_timg" # Did we raise hidden files correctly? [[ -e $ch_timg/.hiddenfile1 ]] [[ -e $ch_timg/..hiddenfile2 ]] [[ -e $ch_timg/...hiddenfile3 ]] } @test 'ch-tar2dir: /dev cleaning' { # issue #157 scope standard [[ $CH_TEST_PACK_FMT = tar-unpack ]] || skip 'issue #693' # Are all fixtures present in tarball? present=$(tar tf "$ch_ttar" | grep -F deleteme) [[ $(echo "$present" | wc -l) -eq 4 ]] echo "$present" | grep -E '^img/dev/deleteme$' echo "$present" | grep -E '^./dev/deleteme$' echo "$present" | grep -E '^dev/deleteme$' echo "$present" | grep -E '^img/mnt/dev/dontdeleteme$' # Did we remove the right fixtures? [[ -e $ch_timg/mnt/dev/dontdeleteme ]] [[ $(ls -Aq "${ch_timg}/dev") -eq 0 ]] ch-run "$ch_timg" -- test -e /mnt/dev/dontdeleteme } @test 'ch-tar2dir: errors' { scope quick # destination doesn't exist run ch-tar2dir "$ch_timg" /doesnotexist echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't unpack: /doesnotexist does not exist"* ]] # destination is not a directory run ch-tar2dir "$ch_timg" /bin/false echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't unpack: /bin/false is not a directory"* ]] # tarball doesn't exist (extension provided) run ch-tar2dir does_not_exist.tar.gz "$ch_imgdir" echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't read: does_not_exist.tar.gz"* ]] ! [[ $output = *"can't read: does_not_exist.tar.gz.tar.gz"* ]] ! [[ $output = *"can't read: does_not_exist.tar.xz"* ]] [[ $output = *"no input found" ]] # tarball doesn't exist (extension inferred, doesn't contain "tar") run ch-tar2dir does_not_exist "$ch_imgdir" echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't read: does_not_exist"* ]] [[ $output = *"can't read: does_not_exist.tar.gz"* ]] [[ $output = *"can't read: does_not_exist.tar.xz"* ]] [[ $output = *"no input found"* ]] # tarball doesn't exist (bad extension containing "tar") run ch-tar2dir does_not_exist.tar.foo "$ch_imgdir" echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't read: does_not_exist.tar.foo"* ]] ! [[ $output = *"can't read: does_not_exist.tar.foo.tar.gz"* ]] ! [[ $output = *"can't read: does_not_exist.tar.foo.tar.xz"* ]] [[ $output = *"no input found"* ]] # tarball exists but isn't readable touch "${BATS_TMPDIR}/unreadable.tar.gz" chmod 000 "${BATS_TMPDIR}/unreadable.tar.gz" run ch-tar2dir "${BATS_TMPDIR}/unreadable.tar.gz" "$ch_imgdir" echo "$output" [[ $status -eq 1 ]] [[ $output = *"can't read: ${BATS_TMPDIR}/unreadable.tar.gz"* ]] [[ $output = *"no input found"* ]] # file exists but has bad extension touch "${BATS_TMPDIR}/foo.bar" run ch-tar2dir "${BATS_TMPDIR}/foo.bar" "$ch_imgdir" echo "$output" [[ $status -eq 1 ]] [[ $output = *"unknown extension: ${BATS_TMPDIR}/foo.bar"* ]] touch "${BATS_TMPDIR}/foo.tar.bar" run ch-tar2dir "${BATS_TMPDIR}/foo.tar.bar" "$ch_imgdir" echo "$output" [[ $status -eq 1 ]] [[ $output = *"unknown extension: ${BATS_TMPDIR}/foo.tar.bar"* ]] } charliecloud-0.26/test/run_first.bats000066400000000000000000000012501417231051300177370ustar00rootroot00000000000000load common @test 'prepare images directory' { scope quick mkdir -p "${ch_imgdir}/bind1" touch "${ch_imgdir}/bind1/file1" mkdir -p "${ch_imgdir}/bind2" touch "${ch_imgdir}/bind2/file2" mkdir -p "${ch_imgdir}/mounts" } @test 'permissions test directories exist' { scope standard [[ $CH_TEST_PERMDIRS = skip ]] && skip 'user request' for d in $CH_TEST_PERMDIRS; do echo "$d" test -d "${d}" test -d "${d}/pass" test -f "${d}/pass/file" test -d "${d}/nopass" test -d "${d}/nopass/dir" test -f "${d}/nopass/file" done } @test 'ch-checkns' { scope quick "${ch_bin}/ch-checkns" } charliecloud-0.26/test/sotest/000077500000000000000000000000001417231051300163745ustar00rootroot00000000000000charliecloud-0.26/test/sotest/files_inferrable.txt000066400000000000000000000000561417231051300224310ustar00rootroot00000000000000sotest/bin/sotest sotest/lib/libsotest.so.1.0 charliecloud-0.26/test/sotest/libsotest.c000066400000000000000000000001011417231051300205400ustar00rootroot00000000000000int increment(int a); int increment(int a) { return a + 1; } charliecloud-0.26/test/sotest/sotest.c000066400000000000000000000002631417231051300200620ustar00rootroot00000000000000#include #include int increment(int a); int main() { int b = 8675308; printf("libsotest says %d incremented is %d\n", b, increment(b)); exit(0); } charliecloud-0.26/test/unused/000077500000000000000000000000001417231051300163565ustar00rootroot00000000000000charliecloud-0.26/test/unused/echo-euid.c000066400000000000000000000004051417231051300203630ustar00rootroot00000000000000/* This program prints the effective user ID on stdout and exits. It is useful for testing whether the setuid bit was effective. */ #include #include #include int main(void) { printf("%u\n", geteuid()); return 0; } charliecloud-0.26/test/unused/su_wrap.py000077500000000000000000000036721417231051300204230ustar00rootroot00000000000000#!/usr/bin/env python3 # This script tries to use su to gain root privileges, assuming that # /etc/shadow has been changed such that no password is required. It uses # pexpect to emulate the terminal that su requires. # # WARNING: This does not work. For example: # # $ whoami ; echo $UID EUID # reidpr # 1001 1001 # $ /bin/su -c whoami # root # $ ./su_wrap.py 2>> /dev/null # SAFE escalation failed: empty password rejected # # That is, manual su can escalate without a password (and doesn't without the # /etc/shadow hack), but when this program tries to do apparently the same # thing, su wants a password. # # I have not been able to track down why this happens. I suspect that PAM has # some extra smarts about TTY that causes it to ask for a password under # pexpect. I'm leaving the code in the repository in case some future person # can figure it out. import sys import pexpect # Invoke su. This will do one of three things: # # 1. Print 'root'; the escalation was successful. # 2. Ask for a password; the escalation was unsuccessful. # 3. Something else; this is an error. # p = pexpect.spawn('/bin/su', ['-c', 'whoami'], timeout=5, encoding='UTF-8', logfile=sys.stderr) i = p.expect_exact(['root', 'Password:']) try: if (i == 0): # printed "root" print('RISK\tescalation successful: no password requested') elif (i == 1): # asked for password p.sendline() # try empty password i = p.expect_exact(['root', 'Authentication failure']) if (i == 0): # printed "root" print('RISK\tescalation successful: empty password accepted') elif (i == 1): # explicit failure print('SAFE\tescalation failed: empty password rejected') else: assert False else: assert False except p.EOF: print('ERROR\tsu exited unexpectedly') except p.TIMEOUT: print('ERROR\ttimed out waiting for su') except AssertionError: print('ERROR\tassertion failed') charliecloud-0.26/test/whiteout000077500000000000000000000052741417231051300166610ustar00rootroot00000000000000#!/usr/bin/env python3 # This Python script produces (on stdout) a Dockerfile that produces a large # number of whiteouts. At the end, the Dockerfile prints some output that can # be compared with the flattened image. The purpose is to test whiteout # interpretation during flattening. # # See: https://github.com/opencontainers/image-spec/blob/master/layer.md # # There are a few factors to consider: # # * files vs. directories # * white-out explicit files vs. everything in a directory # * restore the files vs. not (in the same layer as deletion) # # Currently, we don't do recursion, operating only on the specified directory. # We do this at two different levels in the directory tree. # # It's easy to bump into the 127-layer limit with this script. # # To build and push: # # $ version=2020-01-09 # use today's date # $ sudo docker login # if needed # $ ./whiteout | sudo docker build -t whiteout -f - . # $ sudo docker tag whiteout:latest charliecloud/whiteout:$version # $ sudo docker images | fgrep whiteout # $ sudo docker push charliecloud/whiteout:$version # # Then your new image will be at: # # https://hub.docker.com/repository/docker/charliecloud/whiteout import sys INF = 99 def discotheque(prefix, et): if (et == "file"): mk_cmd = "echo orig > %s" rm_cmd = "rm %s" rt_cmd = "echo rest > %s" elif (et == "dir"): mk_cmd = "mkdir -p %s/orig" rm_cmd = "rm -Rf %s" rt_cmd = "mkdir -p %s/rest" for mk_ct in [0, 1, 2]: for rm_ct in [0, 1, INF]: if ( (rm_ct == INF and mk_ct == 0) or (rm_ct != INF and rm_ct > mk_ct)): continue for rt_ct in [0, 1, 2]: if (rt_ct > rm_ct or rt_ct > mk_ct): continue base = "%s/%s_mk-%d_rm-%d_rt-%d" % (prefix, et, mk_ct, rm_ct, rt_ct) mks = ["mkdir %s" % base] rms = [] print("") for mk in range(mk_ct): mks.append(mk_cmd % ("%s/%d" % (base, mk))) if (rm_ct == INF): rms.append(rm_cmd % ("%s/*" % base)) else: for rm in range(rm_ct): rms.append(rm_cmd % ("%s/%d" % (base, rm))) for rt in range(rt_ct): rms.append(rt_cmd % ("%s/%d" % (base, rt))) if (len(mks) > 0): print("RUN " + " && ".join(mks)) if (len(rms) > 0): print("RUN " + " && ".join(rms)) print("FROM alpine:3.9") print("RUN mkdir /w /w/v") discotheque("/w", "file") discotheque("/w", "dir") discotheque("/w/v", "file") discotheque("/w/v", "dir") print("") print("RUN ls -aR /w") print("RUN find /w -type f -exec sh -c 'printf \"{} \" && cat {}' \; | sort")