mmdebstrap/.gitignore:

shared

mmdebstrap/.perltidyrc:

# mmdebstrap is a tool focused on Debian and derivatives (it relies on apt
# after all). Thus, we use a perl style used in other Debian Perl code. The
# following options are used in Lintian and devscripts
--break-before-all-operators
--noblanks-before-comments
--cuddled-else
--maximum-line-length=79
--paren-tightness=2
--square-bracket-tightness=2
--space-for-semicolon
--opening-brace-always-on-right
--stack-opening-tokens
--stack-closing-tokens
--format-skipping

mmdebstrap/CHANGELOG.md:

0.8.4 (2022-02-11)
------------------
 - tarfilter: add --strip-components option
 - don't install essential packages in run_install()
 - remove /var/lib/dbus/machine-id

0.8.3 (2022-01-08)
------------------
 - allow codenames with apt patterns (requires apt >= 2.3.14)
 - don't overwrite existing files in setup code
 - don't copy in qemu-user-static binary if it's not needed

0.8.2 (2021-12-14)
------------------
 - use apt patterns to select priority variants (requires apt >= 2.3.10)

0.8.1 (2021-10-07)
------------------
 - enforce dpkg >= 1.20.0 and apt >= 2.3.7
 - allow the working directory to not be world readable
 - do not run xz and zstd with --threads=0 since this is a bad default for
   machines with more than 100 cores
 - bit-by-bit identical chrootless mode

0.8.0 (2021-09-21)
------------------
 - allow running inside a chroot in root mode
 - allow running without /dev, /sys or /proc
 - new --format=null which gets automatically selected if the output is
   /dev/null and doesn't produce a tarball or other permanent output
 - allow ASCII-armored keyrings (requires gnupg >= 2.2.8)
 - run zstd with --threads=0
 - tarfilter: add --pax-exclude and --pax-include to strip extended attributes
 - add --skip=setup, --skip=update and --skip=cleanup
 - add --skip=cleanup/apt/lists and --skip=cleanup/apt/cache
 - pass extended attributes (excluding system) to tar2sqfs
 - use apt-get update --error-on=any (requires apt >= 2.1.16)
 - support Debian 11 Bullseye
 - use apt from outside using DPkg::Chroot-Directory (requires apt >= 2.3.7)
   * build chroots without apt (for example from buildinfo files)
   * no need to install additional packages like apt-transport-* or
     ca-certificates inside the chroot
   * no need for additional key material inside the chroot
   * possible use of file:// and copy://
 - use apt pattern to select essential set
 - write 'uninitialized' to /etc/machine-id
 - allow running in root mode without mount working, either because of
   missing CAP_SYS_ADMIN or missing /usr/bin/mount
 - make /etc/ld.so.cache under fakechroot mode bit-by-bit identical to root
   and unshare mode
 - move hooks/setup00-merged-usr.sh to hooks/merged-usr/setup00.sh
 - add gpgvnoexpkeysig script for very old snapshot.d.o timestamps with
   expired signature

0.7.5 (2021-02-06)
------------------
 - skip emulation check for extract variant
 - add new suite name trixie
 - unset TMPDIR in hooks because there is no value that works inside as well
   as outside the chroot
 - expose hook name to hooks via MMDEBSTRAP_HOOK environment variable

0.7.4 (2021-01-16)
------------------
 - Optimize mmtarfilter to handle many path exclusions
 - Set MMDEBSTRAP_APT_CONFIG, MMDEBSTRAP_MODE and MMDEBSTRAP_HOOKSOCK for
   hook scripts
 - Do not run an additional env command inside the chroot
 - Allow unshare mode as root user
 - Additional checks whether root has the necessary privileges to mount
 - Make most features work on Debian 10 Buster

0.7.3 (2020-12-02)
------------------
 - bugfix release

0.7.2 (2020-11-28)
------------------
 - check whether tools like dpkg and apt are installed at startup
 - make it possible to seed /var/cache/apt/archives with deb packages
 - if a suite name was specified, use the matching apt index to figure out
   the package set to install
 - use Debian::DistroInfo or /usr/share/distro-info/debian.csv (if available)
   to figure out the security mirror for bullseye and beyond
 - use argparse in tarfilter and taridshift for proper --help output

0.7.1 (2020-09-18)
------------------
 - bugfix release

0.7.0 (2020-08-27)
------------------
 - the hook system (setup, extract, essential, customize and hook-dir) is
   made public and is now a documented interface
 - tarball is also created if the output is a named pipe or character special
 - add --format option to control the output format independent of the output
   filename or in cases where output is directed to stdout
 - generate ext2 filesystems if output file ends with .ext2 or --format=ext2
 - add --skip option to prevent some automatic actions from being carried out
 - implement dpkg-realpath in perl so that we don't need to run tar inside
   the chroot anymore for modes other than fakechroot and proot
 - add ready-to-use hook scripts for eatmydata, merged-usr and busybox
 - add tarfilter tool
 - use distro-info-data and debootstrap to help with suite name and keyring
   discovery
 - no longer needs to install twice when --dpkgopt=path-exclude is given
 - variant=custom and hooks can be used as a debootstrap wrapper
 - use File::Find instead of "du" to avoid different results on different
   filesystems
 - many, many bugfixes and documentation enhancements

0.6.1 (2020-03-08)
------------------
 - replace /etc/machine-id with an empty file
 - fix deterministic tar with pax and xattr support
 - support deb822-style format apt sources
 - mount /sys and /proc as read-only in root mode
 - unset TMPDIR environment variable for everything running inside the chroot

0.6.0 (2020-01-16)
------------------
 - allow multiple --architecture options
 - allow multiple --include options
 - enable parallel compression with xz by default
 - add --man option
 - add --keyring option overwriting apt's default keyring
 - preserve extended attributes in tarball
 - allow running tests on non-amd64 systems
 - generate squashfs images if output file ends in .sqfs or .squashfs
 - add --dry-run/--simulate options
 - add taridshift tool

0.5.1 (2019-10-19)
------------------
 - minor bugfixes and documentation clarification
 - the --components option now takes component names as a comma or whitespace
   separated list or as multiple --components options
 - make_mirror.sh now has to be invoked manually before calling coverage.sh

0.5.0 (2019-10-05)
------------------
 - do not unconditionally read sources.list from stdin anymore
   * if mmdebstrap is used via ssh without a pseudo-terminal, it will stall
     forever
   * as this is unexpected, one now has to explicitly request reading
     sources.list from stdin in situations where it's ambiguous whether that
     is requested
   * thus, the following modes of operation don't work anymore:
     $ mmdebstrap unstable /output/dir < sources.list
     $ mmdebstrap unstable /output/dir http://mirror < sources.list
   * instead, one now has to write:
     $ mmdebstrap unstable /output/dir - < sources.list
     $ mmdebstrap unstable /output/dir http://mirror - < sources.list
 - fix binfmt_misc support on docker
 - do not use qemu for architectures that differ from the native architecture
   but can be used without it
 - do not copy /etc/resolv.conf or /etc/hostname if the host system doesn't
   have them
 - add --force-check-gpg dummy option
 - allow hooks to remove start-stop-daemon
 - add /var/lib/dpkg/arch in chrootless mode when chroot architecture differs
 - create /var/lib/dpkg/cmethopt for dselect
 - do not skip package installation in 'custom' variant
 - fix EDSP output for external solvers so that apt doesn't mark itself as
   Essential:yes
 - also re-exec under fakechroot if fakechroot is picked in 'auto' mode
 - chdir() before 'apt-get update' to accommodate apt << 1.5
 - add Dir::State::Status to apt config for apt << 1.3
 - chmod 0755 on qemu-user-static binary
 - select the right mirror for ubuntu, kali and tanglu

0.4.1 (2019-03-01)
------------------
 - re-enable fakechroot mode testing
 - disable apt sandboxing if necessary
 - keep apt and dpkg lock files

0.4.0 (2019-02-23)
------------------
 - disable merged-usr
 - add --verbose option that prints apt and dpkg output instead of progress
   bars
 - add --quiet/--silent options which print nothing on stderr
 - add --debug option for even more output than with --verbose
 - add some no-op options to make mmdebstrap a drop-in replacement for
   certain debootstrap wrappers like sbuild-createchroot
 - add --logfile option which outputs to a file what would otherwise be
   written to stderr
 - add --version option

0.3.0 (2018-11-21)
------------------
 - add chrootless mode
 - add extract and custom variants
 - make testsuite unprivileged through qemu and guestfish
 - allow empty lost+found directory in target
 - add 54 testcases and fix lots of bugs as a result

0.2.0 (2018-10-03)
------------------
 - if no MIRROR was specified but there was data on standard input, then use
   that data as the sources.list instead of falling back to the default
   mirror
 - lots of bug fixes

0.1.0 (2018-09-24)
------------------
 - initial release

mmdebstrap/README.md:

mmdebstrap
==========

An alternative to debootstrap which uses apt internally and is thus able to
use more than one mirror and resolve more complex dependencies.
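For example, extra mirrors can be passed as additional MIRROR arguments,
either as plain URIs or as complete apt `deb` lines. This is only a sketch;
the URIs, suite and components below are illustrative placeholders:

    mmdebstrap unstable unstable-chroot.tar \
        http://deb.debian.org/debian \
        "deb http://deb.debian.org/debian unstable main contrib"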
Usage
-----

Use like debootstrap:

    sudo mmdebstrap unstable ./unstable-chroot

Without superuser privileges:

    mmdebstrap unstable unstable-chroot.tar

With complex apt options:

    cat /etc/apt/sources.list | mmdebstrap > unstable-chroot.tar

For the full documentation use:

    pod2man ./mmdebstrap | man -l -

The sales pitch in comparison to debootstrap
--------------------------------------------

Summary:

 - more than one mirror possible
 - security and updates mirror included for Debian stable chroots
 - twice as fast
 - chroot with apt in 11 seconds
 - gzipped tarball with apt is only 27M
 - bit-by-bit reproducible output
 - unprivileged operation using Linux user namespaces, fakechroot or proot
 - can operate on filesystems mounted with nodev
 - foreign architecture chroots with qemu-user
 - variant installing only Essential:yes packages and dependencies
 - temporary chroots by redirecting to /dev/null
 - chroots without apt inside (for chroot from buildinfo file with debootsnap)

The author believes that a chroot of a Debian stable release should include
the latest packages including security fixes by default. This has been a
wontfix with debootstrap since 2009 (see #543819 and #762222). Since
mmdebstrap uses apt internally, support for multiple mirrors comes for free
and stable or oldstable **chroots will include security and updates mirrors**.

A side-effect of using apt is being twice as fast as debootstrap. The timings
were carried out on a laptop with an Intel Core i5-5200U, using a mirror on
localhost and a tmpfs.

| variant   | mmdebstrap | debootstrap |
| --------- | ---------- | ----------- |
| essential | 9.52 s     | n.a.        |
| apt       | 10.98 s    | n.a.        |
| minbase   | 13.54 s    | 26.37 s     |
| buildd    | 21.31 s    | 34.85 s     |
| -         | 23.01 s    | 48.83 s     |

Apt considers itself an `Essential: yes` package. This feature allows one to
create a chroot containing just the `Essential: yes` packages and apt (and
their hard dependencies) in **just 11 seconds**.

If desired, a most minimal chroot with just the `Essential: yes` packages and
their hard dependencies can be created with a gzipped tarball size of just
34M. By using dpkg's `--path-exclude` option to exclude documentation, even
smaller gzipped tarballs of 21M in size are possible. If apt is included, the
result is a **gzipped tarball of only 27M**.

These small sizes are also achieved because apt caches and other cruft is
stripped from the chroot. This also makes the result **bit-by-bit
reproducible** if the `$SOURCE_DATE_EPOCH` environment variable is set.

The author believes that it should not be necessary to have superuser
privileges to create a file (the chroot tarball) in one's home directory.
Thus, mmdebstrap provides multiple options to create a chroot tarball with
the right permissions **without superuser privileges**. This avoids a whole
class of bugs like #921815. Depending on what is available, it uses either
Linux user namespaces, fakechroot or proot. Debootstrap supports fakechroot
but will not create a tarball with the right permissions by itself. Support
for Linux user namespaces and proot is missing (see bugs #829134 and #698347,
respectively).

When creating a chroot tarball with debootstrap, the temporary chroot
directory cannot be on a filesystem that has been mounted with nodev. In
unprivileged mode, **mknod is never used**, which means that /tmp can be used
as a temporary directory location even if it is mounted with nodev as a
security measure.
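As a rough sketch of what unprivileged, reproducible operation can look like
(assuming unprivileged user namespaces are enabled and the invoking user has
entries in /etc/subuid and /etc/subgid):

    SOURCE_DATE_EPOCH=$(date +%s) \
        mmdebstrap --mode=unshare --variant=apt unstable unstable-chroot.tar

Running the same command twice against the same mirror state and with the
same `SOURCE_DATE_EPOCH` value should produce bit-by-bit identical tarballs.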
If the chroot architecture cannot be executed by the current machine, qemu-user is used to allow one to create a **foreign architecture chroot**. Limitations in comparison to debootstrap ---------------------------------------- Debootstrap supports creating a Debian chroot on non-Debian systems but mmdebstrap requires apt and is thus limited to Debian and derivatives. This means that mmdebstrap can never fully replace debootstrap and debootstrap will continue to be relevant in situations where you want to create a Debian chroot from a platform without apt and dpkg. There is no `SCRIPT` argument. The following options, don't exist: `--second-stage`, `--exclude`, `--resolve-deps`, `--force-check-gpg`, `--merged-usr` and `--no-merged-usr`. The quirks from debootstrap are needed to create chroots of Debian unstable from snapshot.d.o before timestamp 20141107T220431Z or Debian 8 (Jessie) or later. Tests ===== The script `coverage.sh` runs mmdebstrap in all kind of scenarios to execute all code paths of the script. It verifies its output in each scenario and displays the results gathered with Devel::Cover. It also compares the output of mmdebstrap with debootstrap in several scenarios. To run the testsuite, run: ./make_mirror.sh CMD=./mmdebstrap ./coverage.sh To also generate perl Devel::Cover data, omit the `CMD` environment variable. But that will also take a lot longer. The `make_mirror.sh` script will be a no-op if nothing changed in Debian unstable. You don't need to run `make_mirror.sh` before every invocation of `coverage.sh`. When you make changes to `make_mirror.sh` and want to regenerate the cache, run: touch -d yesterday shared/cache/debian/dists/unstable/Release The script `coverage.sh` does not need an active internet connection by default. An online connection is only needed by the `make_mirror.sh` script which fills a local cache with a few minimal Debian mirror copies. By default, `coverage.sh` will skip running a single test which tries creating a Ubuntu Focal chroot. To not skip that test, run `coverage.sh` with the environment variable `ONLINE=yes`. Bugs ==== mmdebstrap has bugs. Report them here: https://gitlab.mister-muffin.de/josch/mmdebstrap/issues Contributors ============ - Johannes Schauer Marin Rodrigues (main author) - Helmut Grohne - Benjamin Drung - Steve Dodd - Josh Triplett - Konstantin Demin - Trent W. Buck - Vagrant Cascadian mmdebstrap/coverage.sh000077500000000000000000003654201420155655700153720ustar00rootroot00000000000000#!/bin/sh set -eu if [ -e ./mmdebstrap -a -e ./taridshift -a -e ./tarfilter ]; then TMPFILE=$(mktemp) perltidy < ./mmdebstrap > "$TMPFILE" ret=0 diff -u ./mmdebstrap "$TMPFILE" || ret=$? if [ "$ret" -ne 0 ]; then echo "perltidy failed" >&2 rm "$TMPFILE" exit 1 fi rm "$TMPFILE" if [ $(sed -e '/^__END__$/,$d' ./mmdebstrap | wc --max-line-length) -gt 79 ]; then echo "exceeded maximum line length of 79 characters" >&2 exit 1 fi perlcritic --severity 4 --verbose 8 ./mmdebstrap black --check ./taridshift ./tarfilter fi mirrordir="./shared/cache/debian" if [ ! -e "$mirrordir" ]; then echo "run ./make_mirror.sh before running $0" >&2 exit 1 fi # we use -f because the file might not exist rm -f shared/cover_db.img : "${DEFAULT_DIST:=unstable}" : "${HAVE_QEMU:=yes}" : "${RUN_MA_SAME_TESTS:=yes}" : "${ONLINE:=no}" : "${CONTAINER:=no}" HOSTARCH=$(dpkg --print-architecture) if [ "$HAVE_QEMU" = "yes" ]; then # prepare image for cover_db guestfish -N shared/cover_db.img=disk:64M -- mkfs vfat /dev/sda if [ ! 
-e "./shared/cache/debian-$DEFAULT_DIST.qcow" ]; then echo "./shared/cache/debian-$DEFAULT_DIST.qcow does not exist" >&2 exit 1 fi fi # check if all required debootstrap tarballs exist notfound=0 for dist in oldstable stable testing unstable; do for variant in minbase buildd -; do if [ ! -e "shared/cache/debian-$dist-$variant.tar" ]; then echo "shared/cache/debian-$dist-$variant.tar does not exist" >&2 notfound=1 fi done done if [ "$notfound" -ne 0 ]; then echo "not all required debootstrap tarballs are present" >&2 exit 1 fi # only copy if necessary if [ ! -e shared/mmdebstrap ] || [ mmdebstrap -nt shared/mmdebstrap ]; then if [ -e ./mmdebstrap ]; then cp -a mmdebstrap shared else cp -a /usr/bin/mmdebstrap shared fi fi if [ ! -e shared/taridshift ] || [ taridshift -nt shared/taridshift ]; then if [ -e ./taridshift ]; then cp -a ./taridshift shared else cp -a /usr/bin/mmtaridshift shared/taridshift fi fi if [ ! -e shared/tarfilter ] || [ tarfilter -nt shared/tarfilter ]; then if [ -e ./tarfilter ]; then cp -a tarfilter shared else cp -a /usr/bin/mmtarfilter shared/tarfilter fi fi if [ ! -e shared/proxysolver ] || [ proxysolver -nt shared/proxysolver ]; then if [ -e ./proxysolver ]; then cp -a proxysolver shared else cp -a /usr/lib/apt/solvers/mmdebstrap-dump-solution shared/proxysolver fi fi if [ ! -e shared/ldconfig.fakechroot ] || [ ldconfig.fakechroot -nt shared/ldconfig.fakechroot ]; then if [ -e ./ldconfig.fakechroot ]; then cp -a ldconfig.fakechroot shared else cp -a /usr/libexec/mmdebstrap/ldconfig.fakechroot shared/ldconfig.fakechroot fi fi mkdir -p shared/hooks/merged-usr if [ ! -e shared/hooks/merged-usr/setup00.sh ] || [ hooks/merged-usr/setup00.sh -nt shared/hooks/merged-usr/setup00.sh ]; then if [ -e hooks/merged-usr/setup00.sh ]; then cp -a hooks/merged-usr/setup00.sh shared/hooks/merged-usr/ else cp -a /usr/share/mmdebstrap/hooks/merged-usr/setup00.sh shared/hooks/merged-usr/ fi fi mkdir -p shared/hooks/eatmydata if [ ! -e shared/hooks/eatmydata/extract.sh ] || [ hooks/eatmydata/extract.sh -nt shared/hooks/eatmydata/extract.sh ]; then if [ -e hooks/eatmydata/extract.sh ]; then cp -a hooks/eatmydata/extract.sh shared/hooks/eatmydata/ else cp -a /usr/share/mmdebstrap/hooks/eatmydata/extract.sh shared/hooks/eatmydata/ fi fi if [ ! 
-e shared/hooks/eatmydata/customize.sh ] || [ hooks/eatmydata/customize.sh -nt shared/hooks/eatmydata/customize.sh ]; then if [ -e hooks/eatmydata/customize.sh ]; then cp -a hooks/eatmydata/customize.sh shared/hooks/eatmydata/ else cp -a /usr/share/mmdebstrap/hooks/eatmydata/customize.sh shared/hooks/eatmydata/ fi fi starttime= total=182 skipped=0 runtests=0 i=1 print_header() { echo ------------------------------------------------------------------------------ >&2 echo "($i/$total) $1" >&2 if [ -z "$starttime" ]; then starttime=$(date +%s) else currenttime=$(date +%s) timeleft=$(((total-i+1)*(currenttime-starttime)/(i-1))) printf "time left: %02d:%02d:%02d\n" $((timeleft/3600)) $(((timeleft%3600)/60)) $((timeleft%60)) fi echo ------------------------------------------------------------------------------ >&2 i=$((i+1)) } # choose the timestamp of the unstable Release file, so that we get # reproducible results for the same mirror timestamp SOURCE_DATE_EPOCH=$(date --date="$(grep-dctrl -s Date -n '' "$mirrordir/dists/$DEFAULT_DIST/Release")" +%s) # for traditional sort order that uses native byte values export LC_ALL=C.UTF-8 : "${HAVE_UNSHARE:=yes}" : "${HAVE_PROOT:=yes}" : "${HAVE_BINFMT:=yes}" defaultmode="auto" if [ "$HAVE_UNSHARE" != "yes" ]; then defaultmode="root" fi # by default, use the mmdebstrap executable in the current directory together # with perl Devel::Cover but allow to overwrite this : "${CMD:=perl -MDevel::Cover=-silent,-nogcov ./mmdebstrap}" mirror="http://127.0.0.1/debian" for dist in oldstable stable testing unstable; do for variant in minbase buildd -; do print_header "mode=$defaultmode,variant=$variant: check against debootstrap $dist" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH # we create the apt user ourselves or otherwise its uid/gid will differ # compared to the one chosen in debootstrap because of different installation # order in comparison to the systemd users # https://bugs.debian.org/969631 # we cannot use useradd because passwd is not Essential:yes $CMD --variant=$variant --mode=$defaultmode \ --essential-hook='if [ $variant = - ]; then echo _apt:*:100:65534::/nonexistent:/usr/sbin/nologin >> "\$1"/etc/passwd; fi' \ $dist /tmp/debian-$dist-mm.tar $mirror mkdir /tmp/debian-$dist-mm tar --xattrs --xattrs-include='*' -C /tmp/debian-$dist-mm -xf /tmp/debian-$dist-mm.tar rm /tmp/debian-$dist-mm.tar mkdir /tmp/debian-$dist-debootstrap tar --xattrs --xattrs-include='*' -C /tmp/debian-$dist-debootstrap -xf "cache/debian-$dist-$variant.tar" # diff cannot compare device nodes, so we use tar to do that for us and then # delete the directory tar -C /tmp/debian-$dist-debootstrap -cf dev1.tar ./dev tar -C /tmp/debian-$dist-mm -cf dev2.tar ./dev ret=0 cmp dev1.tar dev2.tar || ret=\$? 
if [ "\$ret" -ne 0 ]; then if type diffoscope >/dev/null; then diffoscope dev1.tar dev2.tar exit 1 else echo "no diffoscope installed" >&2 fi if type base64 >/dev/null; then base64 dev1.tar base64 dev2.tar exit 1 else echo "no base64 installed" >&2 fi if type xxd >/dev/null; then xxd dev1.tar xxd dev2.tar exit 1 else echo "no xxd installed" >&2 fi exit 1 fi rm dev1.tar dev2.tar rm -r /tmp/debian-$dist-debootstrap/dev /tmp/debian-$dist-mm/dev # remove downloaded deb packages rm /tmp/debian-$dist-debootstrap/var/cache/apt/archives/*.deb # remove aux-cache rm /tmp/debian-$dist-debootstrap/var/cache/ldconfig/aux-cache # remove logs rm /tmp/debian-$dist-debootstrap/var/log/dpkg.log \ /tmp/debian-$dist-debootstrap/var/log/bootstrap.log \ /tmp/debian-$dist-debootstrap/var/log/alternatives.log # remove *-old files rm /tmp/debian-$dist-debootstrap/var/cache/debconf/config.dat-old \ /tmp/debian-$dist-mm/var/cache/debconf/config.dat-old rm /tmp/debian-$dist-debootstrap/var/cache/debconf/templates.dat-old \ /tmp/debian-$dist-mm/var/cache/debconf/templates.dat-old rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/status-old \ /tmp/debian-$dist-mm/var/lib/dpkg/status-old # remove dpkg files rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/available rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/cmethopt # since we installed packages directly from the .deb files, Priorities differ # thus we first check for equality and then remove the files chroot /tmp/debian-$dist-debootstrap dpkg --list > dpkg1 chroot /tmp/debian-$dist-mm dpkg --list > dpkg2 diff -u dpkg1 dpkg2 rm dpkg1 dpkg2 grep -v '^Priority: ' /tmp/debian-$dist-debootstrap/var/lib/dpkg/status > status1 grep -v '^Priority: ' /tmp/debian-$dist-mm/var/lib/dpkg/status > status2 diff -u status1 status2 rm status1 status2 rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/status /tmp/debian-$dist-mm/var/lib/dpkg/status # debootstrap exposes the hosts's kernel version if [ -e /tmp/debian-$dist-debootstrap/etc/apt/apt.conf.d/01autoremove-kernels ]; then rm /tmp/debian-$dist-debootstrap/etc/apt/apt.conf.d/01autoremove-kernels fi if [ -e /tmp/debian-$dist-mm/etc/apt/apt.conf.d/01autoremove-kernels ]; then rm /tmp/debian-$dist-mm/etc/apt/apt.conf.d/01autoremove-kernels fi # who creates /run/mount? 
if [ -e "/tmp/debian-$dist-debootstrap/run/mount/utab" ]; then rm "/tmp/debian-$dist-debootstrap/run/mount/utab" fi if [ -e "/tmp/debian-$dist-debootstrap/run/mount" ]; then rmdir "/tmp/debian-$dist-debootstrap/run/mount" fi # debootstrap doesn't clean apt rm /tmp/debian-$dist-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_${dist}_main_binary-${HOSTARCH}_Packages \ /tmp/debian-$dist-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_${dist}_Release \ /tmp/debian-$dist-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_${dist}_Release.gpg if [ "$variant" = "-" ]; then rm /tmp/debian-$dist-debootstrap/etc/machine-id rm /tmp/debian-$dist-mm/etc/machine-id rm /tmp/debian-$dist-debootstrap/var/lib/systemd/catalog/database rm /tmp/debian-$dist-mm/var/lib/systemd/catalog/database cap=\$(chroot /tmp/debian-$dist-debootstrap /sbin/getcap /bin/ping) expected="/bin/ping cap_net_raw=ep" if [ "$dist" = oldstable ]; then expected="/bin/ping = cap_net_raw+ep" fi if [ "\$cap" != "\$expected" ]; then echo "expected bin/ping to have capabilities \$expected" >&2 echo "but debootstrap produced: \$cap" >&2 exit 1 fi cap=\$(chroot /tmp/debian-$dist-mm /sbin/getcap /bin/ping) if [ "\$cap" != "\$expected" ]; then echo "expected bin/ping to have capabilities \$expected" >&2 echo "but mmdebstrap produced: \$cap" >&2 exit 1 fi fi rm /tmp/debian-$dist-mm/var/cache/apt/archives/lock rm /tmp/debian-$dist-mm/var/lib/apt/extended_states rm /tmp/debian-$dist-mm/var/lib/apt/lists/lock # the list of shells might be sorted wrongly for f in "/tmp/debian-$dist-debootstrap/etc/shells" "/tmp/debian-$dist-mm/etc/shells"; do sort -o "\$f" "\$f" done # workaround for https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=917773 if ! cmp /tmp/debian-$dist-debootstrap/etc/shadow /tmp/debian-$dist-mm/etc/shadow; then echo patching /etc/shadow on $dist $variant >&2 awk -v FS=: -v OFS=: -v SDE=\$SOURCE_DATE_EPOCH '{ print \$1,\$2,int(SDE/60/60/24),\$4,\$5,\$6,\$7,\$8,\$9 }' < /tmp/debian-$dist-mm/etc/shadow > /tmp/debian-$dist-mm/etc/shadow.bak cat /tmp/debian-$dist-mm/etc/shadow.bak > /tmp/debian-$dist-mm/etc/shadow rm /tmp/debian-$dist-mm/etc/shadow.bak else echo no difference for /etc/shadow on $dist $variant >&2 fi if ! cmp /tmp/debian-$dist-debootstrap/etc/shadow- /tmp/debian-$dist-mm/etc/shadow-; then echo patching /etc/shadow- on $dist $variant >&2 awk -v FS=: -v OFS=: -v SDE=\$SOURCE_DATE_EPOCH '{ print \$1,\$2,int(SDE/60/60/24),\$4,\$5,\$6,\$7,\$8,\$9 }' < /tmp/debian-$dist-mm/etc/shadow- > /tmp/debian-$dist-mm/etc/shadow-.bak cat /tmp/debian-$dist-mm/etc/shadow-.bak > /tmp/debian-$dist-mm/etc/shadow- rm /tmp/debian-$dist-mm/etc/shadow-.bak else echo no difference for /etc/shadow- on $dist $variant >&2 fi # Because of unreproducible uids (#969631) we created the _apt user ourselves # and because passwd is not Essential:yes we didn't use useradd. But passwd # since 1:4.11.1+dfsg1-1 will create empty mail files, so we create it too. 
# https://bugs.debian.org/1004710 if [ $variant = - ]; then if [ -e /tmp/debian-$dist-debootstrap/var/mail/_apt ]; then touch /tmp/debian-$dist-mm/var/mail/_apt chmod 660 /tmp/debian-$dist-mm/var/mail/_apt chown 100:8 /tmp/debian-$dist-mm/var/mail/_apt fi fi # check if the file content differs diff --unified --no-dereference --recursive /tmp/debian-$dist-debootstrap /tmp/debian-$dist-mm # check permissions, ownership, symlink targets, modification times using tar # directory mtimes will differ, thus we equalize them first find /tmp/debian-$dist-debootstrap /tmp/debian-$dist-mm -type d -print0 | xargs -0 touch --date="@$SOURCE_DATE_EPOCH" # debootstrap never ran apt -- fixing permissions for d in ./var/lib/apt/lists/partial ./var/cache/apt/archives/partial; do chroot /tmp/debian-$dist-debootstrap chmod 0700 \$d chroot /tmp/debian-$dist-debootstrap chown _apt:root \$d done tar -C /tmp/debian-$dist-debootstrap --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/root1.tar . tar -C /tmp/debian-$dist-mm --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/root2.tar . tar --full-time --verbose -tf /tmp/root1.tar > /tmp/root1.tar.list tar --full-time --verbose -tf /tmp/root2.tar > /tmp/root2.tar.list diff -u /tmp/root1.tar.list /tmp/root2.tar.list rm /tmp/root1.tar /tmp/root2.tar /tmp/root1.tar.list /tmp/root2.tar.list # check if file properties (permissions, ownership, symlink names, modification time) differ # # we cannot use this (yet) because it cannot cope with paths that have [ or @ in them #fmtree -c -p /tmp/debian-$dist-debootstrap -k flags,gid,link,mode,size,time,uid | sudo fmtree -p /tmp/debian-$dist-mm rm -r /tmp/debian-$dist-debootstrap /tmp/debian-$dist-mm END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi done done # this is a solution for https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=829134 print_header "mode=unshare,variant=custom: as debootstrap unshare wrapper" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi sysctl -w kernel.unprivileged_userns_clone=1 adduser --gecos user --disabled-password user runuser -u user -- $CMD --variant=custom --mode=unshare --setup-hook='env container=lxc debootstrap --no-merged-usr unstable "\$1" $mirror' - /tmp/debian-mm.tar $mirror mkdir /tmp/debian-mm tar --xattrs --xattrs-include='*' -C /tmp/debian-mm -xf /tmp/debian-mm.tar mkdir /tmp/debian-debootstrap tar --xattrs --xattrs-include='*' -C /tmp/debian-debootstrap -xf "cache/debian-unstable--.tar" # diff cannot compare device nodes, so we use tar to do that for us and then # delete the directory tar -C /tmp/debian-debootstrap -cf dev1.tar ./dev tar -C /tmp/debian-mm -cf dev2.tar ./dev cmp dev1.tar dev2.tar rm dev1.tar dev2.tar rm -r /tmp/debian-debootstrap/dev /tmp/debian-mm/dev # remove downloaded deb packages rm /tmp/debian-debootstrap/var/cache/apt/archives/*.deb # remove aux-cache rm /tmp/debian-debootstrap/var/cache/ldconfig/aux-cache # remove logs rm /tmp/debian-debootstrap/var/log/dpkg.log \ /tmp/debian-debootstrap/var/log/bootstrap.log \ /tmp/debian-debootstrap/var/log/alternatives.log \ /tmp/debian-mm/var/log/bootstrap.log # debootstrap doesn't clean apt rm /tmp/debian-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_unstable_main_binary-${HOSTARCH}_Packages \ /tmp/debian-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_unstable_Release \ /tmp/debian-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_unstable_Release.gpg rm /tmp/debian-debootstrap/etc/machine-id /tmp/debian-mm/etc/machine-id rm /tmp/debian-mm/var/cache/apt/archives/lock rm /tmp/debian-mm/var/lib/apt/lists/lock # check if the file content differs diff --no-dereference --recursive /tmp/debian-debootstrap /tmp/debian-mm # check permissions, ownership, symlink targets, modification times using tar # mtimes of directories created by mmdebstrap will differ, thus we equalize them first for d in etc/apt/preferences.d/ etc/apt/sources.list.d/ etc/dpkg/dpkg.cfg.d/ var/log/apt/; do touch --date="@$SOURCE_DATE_EPOCH" /tmp/debian-debootstrap/\$d /tmp/debian-mm/\$d done # debootstrap never ran apt -- fixing permissions for d in ./var/lib/apt/lists/partial ./var/cache/apt/archives/partial; do chroot /tmp/debian-debootstrap chmod 0700 \$d chroot /tmp/debian-debootstrap chown _apt:root \$d done tar -C /tmp/debian-debootstrap --numeric-owner --xattrs --xattrs-include='*' --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/root1.tar . tar -C /tmp/debian-mm --numeric-owner --xattrs --xattrs-include='*' --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/root2.tar . tar --full-time --verbose -tf /tmp/root1.tar > /tmp/root1.tar.list tar --full-time --verbose -tf /tmp/root2.tar > /tmp/root2.tar.list # despite SOURCE_DATE_EPOCH and --clamp-mtime, the timestamps in the tarball # will slightly differ from each other in the sub-second precision (last # decimals) so the tarballs will not be identical, so we use diff to compare # content and tar to compare attributes diff -u /tmp/root1.tar.list /tmp/root2.tar.list rm /tmp/root1.tar /tmp/root2.tar /tmp/root1.tar.list /tmp/root2.tar.list rm /tmp/debian-mm.tar rm -r /tmp/debian-debootstrap /tmp/debian-mm END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." 
>&2 skipped=$((skipped+1)) fi print_header "test --help" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 # we redirect to /dev/null instead of using --quiet to not cause a broken pipe # when grep exits before mmdebstrap was able to write all its output $CMD --help | grep --fixed-strings 'mmdebstrap [OPTION...] [SUITE [TARGET [MIRROR...]]]' >/dev/null END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "test --man" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 # we redirect to /dev/null instead of using --quiet to not cause a broken pipe # when grep exits before mmdebstrap was able to write all its output $CMD --man | grep --fixed-strings 'mmdebstrap [OPTION...] [*SUITE* [*TARGET* [*MIRROR*...]]]' >/dev/null END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "test --version" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 # we redirect to /dev/null instead of using --quiet to not cause a broken pipe # when grep exits before mmdebstrap was able to write all its output $CMD --version | egrep '^mmdebstrap [0-9](\.[0-9])+$' >/dev/null END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: create directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror chroot /tmp/debian-chroot dpkg-query --showformat '\${binary:Package}\n' --show > pkglist.txt tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort > tar1.txt rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=unshare,variant=apt: unshare as root user" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(whoami)" = "root" ] $CMD --mode=unshare --variant=apt \ --customize-hook='chroot "\$1" sh -c "test -e /proc/self/fd"' \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi # make sure that using codenames works https://bugs.debian.org/cgi-bin/1003191 for dist in oldstable stable testing unstable; do print_header "mode=$defaultmode,variant=apt: test $dist using codename" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 /usr/lib/apt/apt-helper download-file "$mirror/dists/$dist/Release" Release codename=\$(awk '/^Codename: / { print \$2; }' Release) rm Release $CMD --mode=$defaultmode --variant=apt \$codename /tmp/debian-chroot.tar $mirror if [ "$dist" = "$DEFAULT_DIST" ]; then tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - fi rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi done print_header "mode=unshare,variant=apt: fail without /etc/subuid" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 rm /etc/subuid ret=0 runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=unshare,variant=apt: fail without username in /etc/subuid" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 awk -F: '\$1!="user"' /etc/subuid > /etc/subuid.tmp mv /etc/subuid.tmp /etc/subuid ret=0 runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi # Before running unshare mode as root, we run "unshare --mount" but that fails # if mmdebstrap itself is executed from within a chroot: # unshare: cannot change root filesystem propagation: Invalid argument # This test tests the workaround in mmdebstrap using --propagation unchanged print_header "mode=root,variant=apt: unshare as root user inside chroot" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(whoami)" = "root" ] cat << 'SCRIPT' > script.sh #!/bin/sh set -eu rootfs="\$1" mkdir -p "\$rootfs/mnt" [ -e /usr/bin/mmdebstrap ] && cp -aT /usr/bin/mmdebstrap "\$rootfs/usr/bin/mmdebstrap" [ -e ./mmdebstrap ] && cp -aT ./mmdebstrap "\$rootfs/mnt/mmdebstrap" chroot "\$rootfs" env --chdir=/mnt \ $CMD --mode=unshare --variant=apt \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror SCRIPT chmod +x script.sh $CMD --mode=root --variant=apt --include=perl,mount \ --customize-hook=./script.sh \ --customize-hook="download /tmp/debian-chroot.tar /tmp/debian-chroot.tar" \ $DEFAULT_DIST /dev/null $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar script.sh END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi # Same as above but this time we run mmdebstrap in root mode from inside a # chroot. 
print_header "mode=root,variant=apt: root mode inside chroot" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(whoami)" = "root" ] cat << 'SCRIPT' > script.sh #!/bin/sh set -eu rootfs="\$1" mkdir -p "\$rootfs/mnt" [ -e /usr/bin/mmdebstrap ] && cp -aT /usr/bin/mmdebstrap "\$rootfs/usr/bin/mmdebstrap" [ -e ./mmdebstrap ] && cp -aT ./mmdebstrap "\$rootfs/mnt/mmdebstrap" chroot "\$rootfs" env --chdir=/mnt \ $CMD --mode=root --variant=apt \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror SCRIPT chmod +x script.sh $CMD --mode=root --variant=apt --include=perl,mount \ --customize-hook=./script.sh \ --customize-hook="download /tmp/debian-chroot.tar /tmp/debian-chroot.tar" \ $DEFAULT_DIST /dev/null $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar script.sh END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=unshare,variant=apt: root without cap_sys_admin" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(whoami)" = "root" ] capsh --drop=cap_sys_admin -- -c 'exec "\$@"' exec \ $CMD --mode=root --variant=apt \ --customize-hook='chroot "\$1" sh -c "test ! -e /proc/self/fd"' \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$CONTAINER" = "lxc" ]; then # see https://stackoverflow.com/questions/65748254/ echo "cannot run under lxc -- Skipping test..." >&2 skipped=$((skipped+1)) elif [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: mount is missing" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi for p in /bin /usr/bin /sbin /usr/sbin; do rm -f "\$p/mount" done $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi for variant in essential apt minbase buildd important standard; do for format in tar squashfs ext2; do print_header "mode=root/unshare/fakechroot,variant=$variant: check for bit-by-bit identical $format output" # pyc files and man index.db are not reproducible # See #1004557 and #1004558 if [ "$variant" = "standard" ]; then echo "skipping test because of #864082" >&2 skipped=$((skipped+1)) continue fi if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then echo "skipping test on oldstable because /var/lib/systemd/catalog/database differs" >&2 skipped=$((skipped+1)) continue fi if [ "$format" = "squashfs" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then echo "skipping test on oldstable because squashfs-tools-ng is not available" >&2 skipped=$((skipped+1)) continue fi if [ "$format" = "ext2" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then echo "skipping test on oldstable because genext2fs does not support SOURCE_DATE_EPOCH" >&2 skipped=$((skipped+1)) continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH $CMD --mode=root --variant=$variant $DEFAULT_DIST /tmp/debian-chroot-root.$format $mirror if [ "$format" = tar ]; then printf 'ustar ' | cmp --bytes=6 --ignore-initial=257:0 /tmp/debian-chroot-root.tar - elif [ "$format" = squashfs ]; then printf 'hsqs' | cmp --bytes=4 /tmp/debian-chroot-root.squashfs - elif [ "$format" = ext2 ]; then printf '\123\357' | cmp --bytes=2 --ignore-initial=1080:0 /tmp/debian-chroot-root.ext2 - else echo "unknown format: $format" >&2 fi runuser -u user -- $CMD --mode=unshare --variant=$variant $DEFAULT_DIST /tmp/debian-chroot-unshare.$format $mirror cmp /tmp/debian-chroot-root.$format /tmp/debian-chroot-unshare.$format rm /tmp/debian-chroot-unshare.$format case $variant in essential|apt|minbase|buildd) # variants important and standard differ because permissions drwxr-sr-x # and extended attributes of ./var/log/journal/ cannot be preserved # in fakechroot mode runuser -u user -- $CMD --mode=fakechroot --variant=$variant $DEFAULT_DIST /tmp/debian-chroot-fakechroot.$format $mirror cmp /tmp/debian-chroot-root.$format /tmp/debian-chroot-fakechroot.$format rm /tmp/debian-chroot-fakechroot.$format ;; esac rm /tmp/debian-chroot-root.$format END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi done done print_header "mode=unshare,variant=apt: test taridshift utility" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user echo user:100000:65536 | cmp /etc/subuid - echo user:100000:65536 | cmp /etc/subgid - sysctl -w kernel.unprivileged_userns_clone=1 # include iputils-ping so that we can verify that taridshift does not remove # extended attributes # run through tarshift no-op to create a tarball that should be bit-by-bit # identical to a round trip through "taridshift X" and "taridshift -X" runuser -u user -- $CMD --mode=unshare --variant=apt --include=iputils-ping $DEFAULT_DIST - $mirror \ | ./taridshift 0 > /tmp/debian-chroot.tar # make sure that xattrs are set in the original tarball mkdir /tmp/debian-chroot tar --xattrs --xattrs-include='*' --directory /tmp/debian-chroot -xf /tmp/debian-chroot.tar ./bin/ping echo "/tmp/debian-chroot/bin/ping cap_net_raw=ep" > /tmp/expected getcap /tmp/debian-chroot/bin/ping | diff -u /tmp/expected - rm /tmp/debian-chroot/bin/ping rmdir /tmp/debian-chroot/bin rmdir /tmp/debian-chroot # shift the uid/gid forward by 100000 and backward by 100000 ./taridshift 100000 < /tmp/debian-chroot.tar > /tmp/debian-chroot-shifted.tar ./taridshift -100000 < /tmp/debian-chroot-shifted.tar > /tmp/debian-chroot-shiftedback.tar # the tarball before and after the roundtrip through taridshift should be bit # by bit identical cmp /tmp/debian-chroot.tar /tmp/debian-chroot-shiftedback.tar # manually adjust uid/gid and compare "tar -t" output tar --numeric-owner -tvf /tmp/debian-chroot.tar \ | sed 's# 100/0 # 100100/100000 #' \ | sed 's# 100/8 # 100100/100008 #' \ | sed 's# 0/0 # 100000/100000 #' \ | sed 's# 0/5 # 100000/100005 #' \ | sed 's# 0/8 # 100000/100008 #' \ | sed 's# 0/42 # 100000/100042 #' \ | sed 's# 0/43 
# 100000/100043 #' \ | sed 's# 0/50 # 100000/100050 #' \ | sed 's/ \\+/ /g' \ > /tmp/debian-chroot.txt tar --numeric-owner -tvf /tmp/debian-chroot-shifted.tar \ | sed 's/ \\+/ /g' \ | diff -u /tmp/debian-chroot.txt - mkdir /tmp/debian-chroot tar --xattrs --xattrs-include='*' --directory /tmp/debian-chroot -xf /tmp/debian-chroot-shifted.tar echo "100000 100000" > /tmp/expected stat --format="%u %g" /tmp/debian-chroot/bin/ping | diff -u /tmp/expected - echo "/tmp/debian-chroot/bin/ping cap_net_raw=ep" > /tmp/expected getcap /tmp/debian-chroot/bin/ping | diff -u /tmp/expected - echo "0 0" > /tmp/expected runuser -u user -- $CMD --unshare-helper /usr/sbin/chroot /tmp/debian-chroot stat --format="%u %g" /bin/ping \ | diff -u /tmp/expected - echo "/bin/ping cap_net_raw=ep" > /tmp/expected runuser -u user -- $CMD --unshare-helper /usr/sbin/chroot /tmp/debian-chroot getcap /bin/ping \ | diff -u /tmp/expected - rm /tmp/debian-chroot.tar /tmp/debian-chroot-shifted.tar /tmp/debian-chroot.txt /tmp/debian-chroot-shiftedback.tar /tmp/expected rm -r /tmp/debian-chroot END if [ "$DEFAULT_DIST" = "oldstable" ]; then echo "the python3 tarfile module in oldstable does not preserve xattrs -- Skipping test..." >&2 skipped=$((skipped+1)) elif [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: test progress bars on fake tty" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 script -qfc "$CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror" /dev/null tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: test --debug output on fake tty" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 script -qfc "$CMD --mode=$defaultmode --debug --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror" /dev/null tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: existing empty directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-chroot $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: existing directory with lost+found" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-chroot mkdir /tmp/debian-chroot/lost+found $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror rmdir /tmp/debian-chroot/lost+found tar -C /tmp/debian-chroot --one-file-system -c . 
| tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: fail installing to non-empty lost+found" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-chroot mkdir /tmp/debian-chroot/lost+found touch /tmp/debian-chroot/lost+found/exists ret=0 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$? rm /tmp/debian-chroot/lost+found/exists rmdir /tmp/debian-chroot/lost+found rmdir /tmp/debian-chroot if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: fail installing to non-empty target directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-chroot mkdir /tmp/debian-chroot/lost+found touch /tmp/debian-chroot/exists ret=0 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$? rmdir /tmp/debian-chroot/lost+found rm /tmp/debian-chroot/exists rmdir /tmp/debian-chroot if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=unshare,variant=apt: missing device nodes outside the chroot" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi rm /dev/console adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=unshare,variant=custom: missing /dev, /sys, /proc inside the chroot" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 runuser -u user -- $CMD --mode=unshare --variant=custom --include=dpkg,dash,diffutils,coreutils,libc-bin,sed $DEFAULT_DIST /dev/null $mirror END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=root,variant=apt: chroot directory not accessible by _apt user" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-chroot chmod 700 /tmp/debian-chroot $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror tar -C /tmp/debian-chroot --one-file-system -c . 
| tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=unshare,variant=apt: CWD directory not accessible by unshared user" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 mkdir /tmp/debian-chroot chmod 700 /tmp/debian-chroot chown user:user /tmp/debian-chroot if [ "$CMD" = "./mmdebstrap" ]; then CMD=\$(realpath --canonicalize-existing ./mmdebstrap) elif [ "$CMD" = "perl -MDevel::Cover=-silent,-nogcov ./mmdebstrap" ]; then CMD="perl -MDevel::Cover=-silent,-nogcov \$(realpath --canonicalize-existing ./mmdebstrap)" else CMD="$CMD" fi env --chdir=/tmp/debian-chroot runuser -u user -- \$CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=unshare,variant=apt: create gzip compressed tarball" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar.gz $mirror printf '\037\213\010' | cmp --bytes=3 /tmp/debian-chroot.tar.gz - tar -tf /tmp/debian-chroot.tar.gz | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar.gz END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=unshare,variant=apt: custom TMPDIR" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 homedir=\$(runuser -u user -- sh -c 'cd && pwd') runuser -u user -- mkdir "\$homedir/tmp" runuser -u user -- env TMPDIR="\$homedir/tmp" $CMD --mode=unshare --variant=apt \ --setup-hook='case "\$1" in "'"\$homedir/tmp/mmdebstrap."'"??????????) exit 0;; *) exit 1;; esac' \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - # use rmdir as a quick check that nothing is remaining in TMPDIR runuser -u user -- rmdir "\$homedir/tmp" rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." 
>&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: test xz compressed tarball" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar.xz $mirror printf '\3757zXZ\0' | cmp --bytes=6 /tmp/debian-chroot.tar.xz - tar -tf /tmp/debian-chroot.tar.xz | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar.xz END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: directory ending in .tar" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=apt --format=directory $DEFAULT_DIST /tmp/debian-chroot.tar $mirror ftype=\$(stat -c %F /tmp/debian-chroot.tar) if [ "\$ftype" != directory ]; then echo "expected directory but got: \$ftype" >&2 exit 1 fi tar -C /tmp/debian-chroot.tar --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=auto,variant=apt: test auto-mode without unshare capabilities" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=0 runuser -u user -- $CMD --mode=auto --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar.gz $mirror tar -tf /tmp/debian-chroot.tar.gz | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar.gz END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: fail with missing lz4" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar.lz4 $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: fail with path with quotes" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/quoted\"path $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: create tarball with /tmp mounted nodev" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi mount -t tmpfs -o nodev,nosuid,size=300M tmpfs /tmp # use --customize-hook to exercise the mounting/unmounting code of block devices in root mode $CMD --mode=root --variant=apt --customize-hook='mount | grep /dev/full' --customize-hook='test "\$(echo foo | tee /dev/full 2>&1 1>/dev/null)" = "tee: /dev/full: No space left on device"' $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: read from stdin, write to stdout" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "deb $mirror $DEFAULT_DIST main" | $CMD --mode=$defaultmode --variant=apt > /tmp/debian-chroot.tar tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: supply components manually" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=apt --components="main main" --comp="main,main" $DEFAULT_DIST /tmp/debian-chroot $mirror echo "deb $mirror $DEFAULT_DIST main" | cmp /tmp/debian-chroot/etc/apt/sources.list tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: stable default mirror" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi cat << HOSTS >> /etc/hosts 127.0.0.1 deb.debian.org 127.0.0.1 security.debian.org HOSTS apt-cache policy cat /etc/apt/sources.list $CMD --mode=root --variant=apt stable /tmp/debian-chroot cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list deb http://deb.debian.org/debian stable main deb http://deb.debian.org/debian stable-updates main deb http://security.debian.org/debian-security stable-security main SOURCES rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: pass distribution but implicitly write to stdout" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi cat << HOSTS >> /etc/hosts 127.0.0.1 deb.debian.org 127.0.0.1 security.debian.org HOSTS $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST > /tmp/debian-chroot.tar tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." 
>&2
	skipped=$((skipped+1))
fi

print_header "mode=$defaultmode,variant=apt: test aspcud apt solver"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=$defaultmode --variant=custom \
	--include \$(cat pkglist.txt | tr '\n' ',') \
	--aptopt='APT::Solver "aspcud"' \
	$DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort \
	| grep -v '^./etc/apt/apt.conf.d/99mmdebstrap$' \
	| diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
elif [ "$defaultmode" = "root" ]; then
	./run_null.sh SUDO
	runtests=$((runtests+1))
else
	./run_null.sh
	runtests=$((runtests+1))
fi

print_header "mode=$defaultmode,variant=apt: mirror is -"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
echo "deb $mirror $DEFAULT_DIST main" | $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar -
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
elif [ "$defaultmode" = "root" ]; then
	./run_null.sh SUDO
	runtests=$((runtests+1))
else
	./run_null.sh
	runtests=$((runtests+1))
fi

print_header "mode=$defaultmode,variant=apt: copy:// mirror"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
	echo "this test requires the cache directory to be mounted on /mnt and should only be run inside a container" >&2
	exit 1
fi
$CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar "deb copy:///mnt/cache/debian $DEFAULT_DIST main"
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	echo "HAVE_QEMU != yes -- Skipping test..." >&2
	skipped=$((skipped+1))
fi

print_header "mode=$defaultmode,variant=apt: fail with file:// mirror"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
	echo "this test requires the cache directory to be mounted on /mnt and should only be run inside a container" >&2
	exit 1
fi
ret=0
$CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar "deb file:///mnt/cache/debian unstable main" || ret=\$?
rm /tmp/debian-chroot.tar
if [ "\$ret" = 0 ]; then
	echo expected failure but got exit \$ret >&2
	exit 1
fi
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	echo "HAVE_QEMU != yes -- Skipping test..." >&2
	skipped=$((skipped+1))
fi

print_header "mode=$defaultmode,variant=apt: mirror is deb..."
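# Note: the surrounding tests exercise the different forms in which the MIRROR
# argument(s) can be passed to mmdebstrap: a plain http:// URI, a full
# one-line-style "deb ..." entry, "-" to read sources from standard input, a
# copy:// or file:// URI, a path to an existing sources file, or no mirror at
# all (in which case it is derived from the suite name). A minimal sketch of
# the one-line-style form, assuming a local mirror URI in $mirror and a
# hypothetical output file out.tar (not executed as part of this test suite):
#
#   mmdebstrap --variant=apt unstable out.tar "deb $mirror unstable main"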
cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar "deb $mirror $DEFAULT_DIST main" tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: mirror is real file" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "deb $mirror $DEFAULT_DIST main" > /tmp/sources.list $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar /tmp/sources.list tar -tf /tmp/debian-chroot.tar \ | sed 's#^./etc/apt/sources.list.d/0000sources.list\$#./etc/apt/sources.list#' \ | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar /tmp/sources.list END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: test deb822 (1/2)" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << SOURCES > /tmp/deb822.sources Types: deb URIs: ${mirror}1 Suites: $DEFAULT_DIST Components: main SOURCES echo "deb ${mirror}2 $DEFAULT_DIST main" > /tmp/sources.list echo "deb ${mirror}3 $DEFAULT_DIST main" \ | $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST \ /tmp/debian-chroot \ /tmp/deb822.sources \ ${mirror}4 \ - \ "deb ${mirror}5 $DEFAULT_DIST main" \ ${mirror}6 \ /tmp/sources.list test ! -e /tmp/debian-chroot/etc/apt/sources.list cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list.d/0000deb822.sources - Types: deb URIs: ${mirror}1 Suites: $DEFAULT_DIST Components: main SOURCES cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list.d/0001main.list - deb ${mirror}4 $DEFAULT_DIST main deb ${mirror}3 $DEFAULT_DIST main deb ${mirror}5 $DEFAULT_DIST main deb ${mirror}6 $DEFAULT_DIST main SOURCES echo "deb ${mirror}2 $DEFAULT_DIST main" | cmp /tmp/debian-chroot/etc/apt/sources.list.d/0002sources.list - tar -C /tmp/debian-chroot --one-file-system -c . \ | { tar -t \ | grep -v "^./etc/apt/sources.list.d/0000deb822.sources$" \ | grep -v "^./etc/apt/sources.list.d/0001main.list$" \ | grep -v "^./etc/apt/sources.list.d/0002sources.list"; printf "./etc/apt/sources.list\n"; } | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot rm /tmp/sources.list /tmp/deb822.sources END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: test deb822 (2/2)" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << SOURCES > /tmp/deb822 Types: deb URIs: ${mirror}1 Suites: $DEFAULT_DIST Components: main SOURCES echo "deb ${mirror}2 $DEFAULT_DIST main" > /tmp/sources cat << SOURCES | $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST \ /tmp/debian-chroot \ /tmp/deb822 \ - \ /tmp/sources Types: deb URIs: ${mirror}3 Suites: $DEFAULT_DIST Components: main SOURCES test ! 
-e /tmp/debian-chroot/etc/apt/sources.list ls -lha /tmp/debian-chroot/etc/apt/sources.list.d/ cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list.d/0000deb822.sources - Types: deb URIs: ${mirror}1 Suites: $DEFAULT_DIST Components: main SOURCES cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list.d/0001main.sources - Types: deb URIs: ${mirror}3 Suites: $DEFAULT_DIST Components: main SOURCES echo "deb ${mirror}2 $DEFAULT_DIST main" | cmp /tmp/debian-chroot/etc/apt/sources.list.d/0002sources.list - tar -C /tmp/debian-chroot --one-file-system -c . \ | { tar -t \ | grep -v "^./etc/apt/sources.list.d/0000deb822.sources$" \ | grep -v "^./etc/apt/sources.list.d/0001main.sources$" \ | grep -v "^./etc/apt/sources.list.d/0002sources.list$"; printf "./etc/apt/sources.list\n"; } | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot rm /tmp/sources /tmp/deb822 END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: automatic mirror from suite" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi cat << HOSTS >> /etc/hosts 127.0.0.1 deb.debian.org 127.0.0.1 security.debian.org HOSTS $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: invalid mirror" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror/invalid || ret=\$? rm /tmp/debian-chroot.tar if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: fail installing to /" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=root --variant=apt $DEFAULT_DIST / $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: fail installing to existing file" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 touch /tmp/exists ret=0 $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/exists $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: test arm64 without qemu support" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi apt-get remove --yes qemu-user-static binfmt-support qemu-user ret=0 $CMD --mode=$defaultmode --variant=apt --architectures=arm64 $DEFAULT_DIST /tmp/debian-chroot.tar $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HOSTARCH" != amd64 ]; then echo "HOSTARCH != amd64 -- Skipping test..." >&2 skipped=$((skipped+1)) elif [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=apt: test i386 (which can be executed without qemu)" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi # remove qemu just to be sure apt-get remove --yes qemu-user-static binfmt-support qemu-user $CMD --mode=$defaultmode --variant=apt --architectures=i386 $DEFAULT_DIST /tmp/debian-chroot.tar $mirror # we ignore differences between architectures by ignoring some files # and renaming others { tar -tf /tmp/debian-chroot.tar \ | grep -v '^\./usr/bin/i386$' \ | grep -v '^\./lib/ld-linux\.so\.2$' \ | grep -v '^\./lib/i386-linux-gnu/ld-linux\.so\.2$' \ | grep -v '^\./usr/lib/gcc/i686-linux-gnu/$' \ | grep -v '^\./usr/lib/gcc/i686-linux-gnu/[0-9]\+/$' \ | grep -v '^\./usr/share/man/man8/i386\.8\.gz$' \ | grep -v '^\./usr/share/doc/[^/]\+/changelog\(\.Debian\)\?\.i386\.gz$' \ | sed 's/i386-linux-gnu/x86_64-linux-gnu/' \ | sed 's/i386/amd64/'; } | sort > tar2.txt { cat tar1.txt \ | grep -v '^\./usr/bin/i386$' \ | grep -v '^\./usr/bin/x86_64$' \ | grep -v '^\./lib64/$' \ | grep -v '^\./lib64/ld-linux-x86-64\.so\.2$' \ | grep -v '^\./usr/lib/gcc/x86_64-linux-gnu/$' \ | grep -v '^\./usr/lib/gcc/x86_64-linux-gnu/[0-9]\+/$' \ | grep -v '^\./lib/x86_64-linux-gnu/ld-linux-x86-64\.so\.2$' \ | grep -v '^\./lib/x86_64-linux-gnu/libmvec-2\.[0-9]\+\.so$' \ | grep -v '^\./lib/x86_64-linux-gnu/libmvec\.so\.1$' \ | grep -v '^\./usr/share/doc/[^/]\+/changelog\(\.Debian\)\?\.amd64\.gz$' \ | grep -v '^\./usr/share/man/man8/i386\.8\.gz$' \ | grep -v '^\./usr/share/man/man8/x86_64\.8\.gz$'; } | sort | diff -u - tar2.txt rm /tmp/debian-chroot.tar END # this test compares the contents of different architectures, so this might # fail if the versions do not match if [ "$RUN_MA_SAME_TESTS" = "yes" ]; then if [ "$HOSTARCH" != amd64 ]; then echo "HOSTARCH != amd64 -- Skipping test..." >&2 skipped=$((skipped+1)) elif [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi else echo "RUN_MA_SAME_TESTS != yes -- Skipping test..." 
>&2
	skipped=$((skipped+1))
fi

# to test foreign architecture package installation we choose a package which
# - is not part of the native installation set
# - does not have any dependencies
# - installs only a few files
# - doesn't change its name regularly (like gcc-*-base)
print_header "mode=root,variant=apt: test --include=libmagic-mgc:arm64"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=root --variant=apt --architectures=amd64,arm64 --include=libmagic-mgc:arm64 $DEFAULT_DIST /tmp/debian-chroot $mirror
{ echo "amd64"; echo "arm64"; } | cmp /tmp/debian-chroot/var/lib/dpkg/arch -
rm /tmp/debian-chroot/var/lib/dpkg/arch
rm /tmp/debian-chroot/var/lib/apt/extended_states
rm /tmp/debian-chroot/var/lib/dpkg/info/libmagic-mgc.list
rm /tmp/debian-chroot/var/lib/dpkg/info/libmagic-mgc.md5sums
rm /tmp/debian-chroot/usr/lib/file/magic.mgc
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/README.Debian
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/changelog.Debian.gz
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/changelog.gz
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/copyright
rm /tmp/debian-chroot/usr/share/file/magic.mgc
rm /tmp/debian-chroot/usr/share/misc/magic.mgc
rmdir /tmp/debian-chroot/usr/share/doc/libmagic-mgc/
rmdir /tmp/debian-chroot/usr/share/file/magic/
rmdir /tmp/debian-chroot/usr/share/file/
rmdir /tmp/debian-chroot/usr/lib/file/
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
END
if [ "$RUN_MA_SAME_TESTS" = "yes" ]; then
	if [ "$HOSTARCH" != amd64 ]; then
		echo "HOSTARCH != amd64 -- Skipping test..." >&2
		skipped=$((skipped+1))
	elif [ "$HAVE_QEMU" = "yes" ]; then
		./run_qemu.sh
		runtests=$((runtests+1))
	else
		./run_null.sh SUDO
		runtests=$((runtests+1))
	fi
else
	echo "RUN_MA_SAME_TESTS != yes -- Skipping test..." >&2
	skipped=$((skipped+1))
fi

print_header "mode=root,variant=apt: test --include=libmagic-mgc:arm64 with multiple --arch options"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=root --variant=apt --architectures=amd64 --architectures=arm64 --include=libmagic-mgc:arm64 $DEFAULT_DIST /tmp/debian-chroot $mirror
{ echo "amd64"; echo "arm64"; } | cmp /tmp/debian-chroot/var/lib/dpkg/arch -
rm /tmp/debian-chroot/var/lib/dpkg/arch
rm /tmp/debian-chroot/var/lib/apt/extended_states
rm /tmp/debian-chroot/var/lib/dpkg/info/libmagic-mgc.list
rm /tmp/debian-chroot/var/lib/dpkg/info/libmagic-mgc.md5sums
rm /tmp/debian-chroot/usr/lib/file/magic.mgc
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/README.Debian
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/changelog.Debian.gz
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/changelog.gz
rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/copyright
rm /tmp/debian-chroot/usr/share/file/magic.mgc
rm /tmp/debian-chroot/usr/share/misc/magic.mgc
rmdir /tmp/debian-chroot/usr/share/doc/libmagic-mgc/
rmdir /tmp/debian-chroot/usr/share/file/magic/
rmdir /tmp/debian-chroot/usr/share/file/
rmdir /tmp/debian-chroot/usr/lib/file/
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
END
if [ "$RUN_MA_SAME_TESTS" = "yes" ]; then
	if [ "$HOSTARCH" != amd64 ]; then
		echo "HOSTARCH != amd64 -- Skipping test..." >&2
		skipped=$((skipped+1))
	elif [ "$HAVE_QEMU" = "yes" ]; then
		./run_qemu.sh
		runtests=$((runtests+1))
	else
		./run_null.sh SUDO
		runtests=$((runtests+1))
	fi
else
	echo "RUN_MA_SAME_TESTS != yes -- Skipping test..."
>&2 skipped=$((skipped+1)) fi print_header "mode=root,variant=apt: test --aptopt" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo 'Acquire::Languages "none";' > /tmp/config $CMD --mode=root --variant=apt --aptopt='Acquire::Check-Valid-Until "false"' --aptopt=/tmp/config $DEFAULT_DIST /tmp/debian-chroot $mirror printf 'Acquire::Check-Valid-Until "false";\nAcquire::Languages "none";\n' | cmp /tmp/debian-chroot/etc/apt/apt.conf.d/99mmdebstrap - rm /tmp/debian-chroot/etc/apt/apt.conf.d/99mmdebstrap tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot /tmp/config END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test --keyring" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi rm /etc/apt/trusted.gpg.d/*.gpg $CMD --mode=root --variant=apt --keyring=/usr/share/keyrings/debian-archive-keyring.gpg --keyring=/usr/share/keyrings/ $DEFAULT_DIST /tmp/debian-chroot "deb $mirror $DEFAULT_DIST main" # make sure that no [signedby=...] managed to make it into the sources.list echo "deb $mirror $DEFAULT_DIST main" | cmp /tmp/debian-chroot/etc/apt/sources.list - tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=root,variant=apt: test --keyring overwrites" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir -p /tmp/emptydir touch /tmp/emptyfile # this overwrites the apt keyring options and should fail ret=0 $CMD --mode=root --variant=apt --keyring=/tmp/emptydir --keyring=/tmp/emptyfile $DEFAULT_DIST /tmp/debian-chroot "deb $mirror $DEFAULT_DIST main" || ret=\$? # make sure that no [signedby=...] managed to make it into the sources.list echo "deb $mirror $DEFAULT_DIST main" | cmp /tmp/debian-chroot/etc/apt/sources.list - rm -r /tmp/debian-chroot rmdir /tmp/emptydir rm /tmp/emptyfile if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test signed-by without host keys" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi rm /etc/apt/trusted.gpg.d/*.gpg $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror printf 'deb [signed-by="/usr/share/keyrings/debian-archive-keyring.gpg"] $mirror $DEFAULT_DIST main\n' | cmp /tmp/debian-chroot/etc/apt/sources.list - tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=root,variant=apt: test ascii armored keys" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi for f in /usr/share/keyrings/*.gpg; do name=\$(basename "\$f" .gpg) gpg --enarmor < /usr/share/keyrings/\$name.gpg \ | sed 's/ PGP ARMORED FILE/ PGP PUBLIC KEY BLOCK/;/^Comment: /d' \ > /etc/apt/trusted.gpg.d/\$name.asc done rm /etc/apt/trusted.gpg.d/*.gpg rm /usr/share/keyrings/*.gpg $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=root,variant=apt: test signed-by with host keys" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror printf 'deb $mirror $DEFAULT_DIST main\n' | cmp /tmp/debian-chroot/etc/apt/sources.list - tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test --dpkgopt" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo no-pager > /tmp/config $CMD --mode=root --variant=apt --dpkgopt="path-exclude=/usr/share/doc/*" --dpkgopt=/tmp/config --dpkgopt="path-include=/usr/share/doc/dpkg/copyright" $DEFAULT_DIST /tmp/debian-chroot $mirror printf 'path-exclude=/usr/share/doc/*\nno-pager\npath-include=/usr/share/doc/dpkg/copyright\n' | cmp /tmp/debian-chroot/etc/dpkg/dpkg.cfg.d/99mmdebstrap - rm /tmp/debian-chroot/etc/dpkg/dpkg.cfg.d/99mmdebstrap tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort > tar2.txt { grep -v '^./usr/share/doc/.' tar1.txt; echo ./usr/share/doc/dpkg/; echo ./usr/share/doc/dpkg/copyright; } | sort | diff -u - tar2.txt rm -r /tmp/debian-chroot /tmp/config END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test --include" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --include=doc-debian $DEFAULT_DIST /tmp/debian-chroot $mirror rm /tmp/debian-chroot/usr/share/doc-base/debian-* rm -r /tmp/debian-chroot/usr/share/doc/debian rm -r /tmp/debian-chroot/usr/share/doc/doc-debian rm /tmp/debian-chroot/var/lib/apt/extended_states rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.list rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.md5sums tar -C /tmp/debian-chroot --one-file-system -c . 
| tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test multiple --include" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --include=doc-debian --include=tzdata $DEFAULT_DIST /tmp/debian-chroot $mirror rm /tmp/debian-chroot/usr/share/doc-base/debian-* rm -r /tmp/debian-chroot/usr/share/doc/debian rm -r /tmp/debian-chroot/usr/share/doc/doc-debian rm /tmp/debian-chroot/etc/localtime rm /tmp/debian-chroot/etc/timezone rm /tmp/debian-chroot/usr/sbin/tzconfig rm -r /tmp/debian-chroot/usr/share/doc/tzdata rm -r /tmp/debian-chroot/usr/share/zoneinfo rm /tmp/debian-chroot/var/lib/apt/extended_states rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.list rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.md5sums rm /tmp/debian-chroot/var/lib/dpkg/info/tzdata.list rm /tmp/debian-chroot/var/lib/dpkg/info/tzdata.md5sums rm /tmp/debian-chroot/var/lib/dpkg/info/tzdata.config rm /tmp/debian-chroot/var/lib/dpkg/info/tzdata.postinst rm /tmp/debian-chroot/var/lib/dpkg/info/tzdata.postrm rm /tmp/debian-chroot/var/lib/dpkg/info/tzdata.templates tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi # This checks for https://bugs.debian.org/976166 # Since $DEFAULT_DIST varies, we hardcode stable and unstable. print_header "mode=root,variant=apt: test --include with multiple apt sources" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=minbase --include=doc-debian unstable /tmp/debian-chroot "deb $mirror unstable main" "deb $mirror stable main" chroot /tmp/debian-chroot dpkg-query --show doc-debian rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test merged-usr via --setup-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt \ --setup-hook=./hooks/merged-usr/setup00.sh \ --customize-hook='[ -L "\$1"/bin -a -L "\$1"/sbin -a -L "\$1"/lib ]' \ $DEFAULT_DIST /tmp/debian-chroot $mirror tar -C /tmp/debian-chroot --one-file-system -c . 
| tar -t | sort > tar2.txt { sed -e 's/^\.\/bin\//.\/usr\/bin\//;s/^\.\/lib\//.\/usr\/lib\//;s/^\.\/sbin\//.\/usr\/sbin\//;' tar1.txt | { case $HOSTARCH in amd64) sed -e 's/^\.\/lib32\//.\/usr\/lib32\//;s/^\.\/lib64\//.\/usr\/lib64\//;s/^\.\/libx32\//.\/usr\/libx32\//;';; ppc64el) sed -e 's/^\.\/lib64\//.\/usr\/lib64\//;';; *) cat;; esac }; echo ./bin; echo ./lib; echo ./sbin; case $HOSTARCH in amd64) echo ./lib32; echo ./lib64; echo ./libx32; echo ./usr/lib32/; echo ./usr/libx32/; ;; i386) echo ./lib64; echo ./libx32; echo ./usr/lib64/; echo ./usr/libx32/; ;; ppc64el) echo ./lib64; ;; s390x) echo ./lib32; echo ./usr/lib32/; ;; esac } | sort -u | diff -u - tar2.txt rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test --essential-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << 'SCRIPT' > /tmp/essential.sh #!/bin/sh echo tzdata tzdata/Zones/Europe select Berlin | chroot "\$1" debconf-set-selections SCRIPT chmod +x /tmp/essential.sh $CMD --mode=root --variant=apt --include=tzdata --essential-hook='echo tzdata tzdata/Areas select Europe | chroot "\$1" debconf-set-selections' --essential-hook=/tmp/essential.sh $DEFAULT_DIST /tmp/debian-chroot $mirror echo Europe/Berlin | cmp /tmp/debian-chroot/etc/timezone tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort \ | grep -v '^./etc/localtime' \ | grep -v '^./etc/timezone' \ | grep -v '^./usr/sbin/tzconfig' \ | grep -v '^./usr/share/doc/tzdata' \ | grep -v '^./usr/share/zoneinfo' \ | grep -v '^./var/lib/dpkg/info/tzdata.' \ | grep -v '^./var/lib/apt/extended_states$' \ | diff -u tar1.txt - rm /tmp/essential.sh rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test --customize-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << 'SCRIPT' > /tmp/customize.sh #!/bin/sh chroot "\$1" whoami > "\$1/output2" chroot "\$1" pwd >> "\$1/output2" SCRIPT chmod +x /tmp/customize.sh $CMD --mode=root --variant=apt --customize-hook='chroot "\$1" sh -c "whoami; pwd" > "\$1/output1"' --customize-hook=/tmp/customize.sh $DEFAULT_DIST /tmp/debian-chroot $mirror printf "root\n/\n" | cmp /tmp/debian-chroot/output1 printf "root\n/\n" | cmp /tmp/debian-chroot/output2 rm /tmp/debian-chroot/output1 rm /tmp/debian-chroot/output2 tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt - rm /tmp/customize.sh rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test failing --customize-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=root --variant=apt --customize-hook='chroot "\$1" sh -c "exit 1"' $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$? 
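# (The "|| ret=\$?" above is the expected-failure idiom used throughout these
# tests: it keeps "set -e" from aborting the script when mmdebstrap fails on
# purpose, the cleanup below still runs, and the final check turns an
# unexpected success into a test failure. A minimal sketch of the same idiom,
# with a placeholder command:
#   ret=0
#   command-that-must-fail || ret=\$?
#   [ "\$ret" -ne 0 ] || exit 1
# )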
rm -r /tmp/debian-chroot if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test sigint during --customize-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 setsid --wait $CMD --mode=root --variant=apt --customize-hook='touch done && sleep 10 && touch fail' $DEFAULT_DIST /tmp/debian-chroot $mirror & pid=\$! while sleep 1; do [ -e done ] && break; done rm done pgid=\$(echo \$(ps -p \$pid -o pgid=)) /bin/kill --signal INT -- -\$pgid ret=0 wait \$pid || ret=\$? rm -r /tmp/debian-chroot if [ -e fail ]; then echo customize hook was not interrupted >&2 rm fail exit 1 fi if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test --hook-directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 for h in hookA hookB; do mkdir /tmp/\$h for s in setup extract essential customize; do cat << SCRIPT > /tmp/\$h/\${s}00.sh #!/bin/sh echo \$h/\${s}00 >> "\\\$1/\$s" SCRIPT chmod +x /tmp/\$h/\${s}00.sh cat << SCRIPT > /tmp/\$h/\${s}01.sh echo \$h/\${s}01 >> "\\\$1/\$s" SCRIPT chmod +x /tmp/\$h/\${s}01.sh done done $CMD --mode=root --variant=apt \ --setup-hook='echo cliA/setup >> "\$1"/setup' \ --extract-hook='echo cliA/extract >> "\$1"/extract' \ --essential-hook='echo cliA/essential >> "\$1"/essential' \ --customize-hook='echo cliA/customize >> "\$1"/customize' \ --hook-dir=/tmp/hookA \ --setup-hook='echo cliB/setup >> "\$1"/setup' \ --extract-hook='echo cliB/extract >> "\$1"/extract' \ --essential-hook='echo cliB/essential >> "\$1"/essential' \ --customize-hook='echo cliB/customize >> "\$1"/customize' \ --hook-dir=/tmp/hookB \ --setup-hook='echo cliC/setup >> "\$1"/setup' \ --extract-hook='echo cliC/extract >> "\$1"/extract' \ --essential-hook='echo cliC/essential >> "\$1"/essential' \ --customize-hook='echo cliC/customize >> "\$1"/customize' \ $DEFAULT_DIST /tmp/debian-chroot $mirror printf "cliA/setup\nhookA/setup00\nhookA/setup01\ncliB/setup\nhookB/setup00\nhookB/setup01\ncliC/setup\n" | diff -u - /tmp/debian-chroot/setup printf "cliA/extract\nhookA/extract00\nhookA/extract01\ncliB/extract\nhookB/extract00\nhookB/extract01\ncliC/extract\n" | diff -u - /tmp/debian-chroot/extract printf "cliA/essential\nhookA/essential00\nhookA/essential01\ncliB/essential\nhookB/essential00\nhookB/essential01\ncliC/essential\n" | diff -u - /tmp/debian-chroot/essential printf "cliA/customize\nhookA/customize00\nhookA/customize01\ncliB/customize\nhookB/customize00\nhookB/customize01\ncliC/customize\n" | diff -u - /tmp/debian-chroot/customize for s in setup extract essential customize; do rm /tmp/debian-chroot/\$s done tar -C /tmp/debian-chroot --one-file-system -c . 
| tar -t | sort | diff -u tar1.txt -
for h in hookA hookB; do
	for s in setup extract essential customize; do
		rm /tmp/\$h/\${s}00.sh
		rm /tmp/\$h/\${s}01.sh
	done
	rmdir /tmp/\$h
done
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=root,variant=apt: test eatmydata via --hook-dir"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
cat << SCRIPT > /tmp/checkeatmydata.sh
#!/bin/sh
set -exu
cat << EOF | diff - "\\\$1"/usr/bin/dpkg
#!/bin/sh
exec /usr/bin/eatmydata /usr/bin/dpkg.distrib "\\\\\\\$@"
EOF
[ -e "\\\$1"/usr/bin/eatmydata ]
SCRIPT
chmod +x /tmp/checkeatmydata.sh
# first four bytes: magic
elfheader="\\177ELF"
# fifth byte: bits
case "\$(dpkg-architecture -qDEB_HOST_ARCH_BITS)" in
	32) elfheader="\$elfheader\\001";;
	64) elfheader="\$elfheader\\002";;
	*) echo "bits not supported"; exit 1;;
esac
# sixth byte: endian
case "\$(dpkg-architecture -qDEB_HOST_ARCH_ENDIAN)" in
	little) elfheader="\$elfheader\\001";;
	big) elfheader="\$elfheader\\002";;
	*) echo "endian not supported"; exit 1;;
esac
# seventh and eighth byte: elf version (1) and abi (unset)
elfheader="\$elfheader\\001\\000"
$CMD --mode=root --variant=apt \
	--customize-hook=/tmp/checkeatmydata.sh \
	--essential-hook=/tmp/checkeatmydata.sh \
	--extract-hook='printf "'"\$elfheader"'" | cmp --bytes=8 - "\$1"/usr/bin/dpkg' \
	--hook-dir=./hooks/eatmydata \
	--customize-hook='printf "'"\$elfheader"'" | cmp --bytes=8 - "\$1"/usr/bin/dpkg' \
	$DEFAULT_DIST /tmp/debian-chroot $mirror
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm /tmp/checkeatmydata.sh
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=root,variant=apt: test special hooks using helpers"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
mkfifo /tmp/myfifo
mkdir /tmp/root
ln -s /real /tmp/root/link
mkdir /tmp/root/real
run_testA() {
	echo content > /tmp/foo
	{ { { $CMD --hook-helper /tmp/root root setup env 1 upload /tmp/foo \$1 < /tmp/myfifo 3>&-; echo \$?
>&3; printf "\\000\\000adios"; } | $CMD --hook-listener 1 3>&- >/tmp/myfifo; echo \$?; } 3>&1; } | { read xs1; [ "\$xs1" -eq 0 ]; read xs2; [ "\$xs2" -eq 0 ]; } echo content | diff -u - /tmp/root/real/foo rm /tmp/foo rm /tmp/root/real/foo } run_testA link/foo run_testA /link/foo run_testA ///link///foo/// run_testA /././link/././foo/././ run_testA /link/../link/foo run_testA /link/../../link/foo run_testA /../../link/foo rmdir /tmp/root/real rm /tmp/root/link rmdir /tmp/root rm /tmp/myfifo END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi print_header "mode=root,variant=apt: test special hooks using helpers and env vars" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << 'SCRIPT' > /tmp/script.sh #!/bin/sh set -eu echo "MMDEBSTRAP_APT_CONFIG \$MMDEBSTRAP_APT_CONFIG" echo "\$MMDEBSTRAP_HOOK" >> /tmp/hooks [ "\$MMDEBSTRAP_MODE" = "root" ] echo test-content \$MMDEBSTRAP_HOOK > test $CMD --hook-helper "\$1" "\$MMDEBSTRAP_MODE" "\$MMDEBSTRAP_HOOK" env 1 upload test /test <&\$MMDEBSTRAP_HOOKSOCK >&\$MMDEBSTRAP_HOOKSOCK rm test echo "content inside chroot:" cat "\$1/test" [ "test-content \$MMDEBSTRAP_HOOK" = "\$(cat "\$1/test")" ] $CMD --hook-helper "\$1" "\$MMDEBSTRAP_MODE" "\$MMDEBSTRAP_HOOK" env 1 download /test test <&\$MMDEBSTRAP_HOOKSOCK >&\$MMDEBSTRAP_HOOKSOCK echo "content outside chroot:" cat test [ "test-content \$MMDEBSTRAP_HOOK" = "\$(cat test)" ] rm test SCRIPT chmod +x /tmp/script.sh $CMD --mode=root --variant=apt \ --setup-hook=/tmp/script.sh \ --extract-hook=/tmp/script.sh \ --essential-hook=/tmp/script.sh \ --customize-hook=/tmp/script.sh \ $DEFAULT_DIST /tmp/debian-chroot $mirror printf "setup\nextract\nessential\ncustomize\n" | diff -u - /tmp/hooks rm /tmp/script.sh /tmp/hooks rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi # test special hooks for mode in root unshare fakechroot proot; do print_header "mode=$mode,variant=apt: test special hooks with $mode mode" if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi if [ "$mode" = unshare ]; then if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi sysctl -w kernel.unprivileged_userns_clone=1 fi prefix= [ "\$(id -u)" -eq 0 ] && [ "$mode" != "root" ] && prefix="runuser -u user --" [ "$mode" = "fakechroot" ] && prefix="\$prefix fakechroot fakeroot" symlinktarget=/real case $mode in fakechroot|proot) symlinktarget='\$1/real';; esac echo copy-in-setup > /tmp/copy-in-setup echo copy-in-essential > /tmp/copy-in-essential echo copy-in-customize > /tmp/copy-in-customize echo tar-in-setup > /tmp/tar-in-setup echo tar-in-essential > /tmp/tar-in-essential echo tar-in-customize > /tmp/tar-in-customize tar --numeric-owner --format=pax --pax-option=exthdr.name=%d/PaxHeaders/%f,delete=atime,delete=ctime -C /tmp -cf /tmp/tar-in-setup.tar tar-in-setup tar --numeric-owner --format=pax --pax-option=exthdr.name=%d/PaxHeaders/%f,delete=atime,delete=ctime -C /tmp -cf /tmp/tar-in-essential.tar tar-in-essential tar --numeric-owner --format=pax --pax-option=exthdr.name=%d/PaxHeaders/%f,delete=atime,delete=ctime -C /tmp -cf /tmp/tar-in-customize.tar tar-in-customize rm /tmp/tar-in-setup rm /tmp/tar-in-essential rm /tmp/tar-in-customize echo upload-setup > /tmp/upload-setup echo upload-essential > /tmp/upload-essential echo upload-customize > /tmp/upload-customize mkdir /tmp/sync-in-setup mkdir /tmp/sync-in-essential mkdir /tmp/sync-in-customize echo sync-in-setup > /tmp/sync-in-setup/file echo sync-in-essential > /tmp/sync-in-essential/file echo sync-in-customize > /tmp/sync-in-customize/file \$prefix $CMD --mode=$mode --variant=apt \ --setup-hook='mkdir "\$1/real"' \ --setup-hook='copy-in /tmp/copy-in-setup /real' \ --setup-hook='echo copy-in-setup | cmp "\$1/real/copy-in-setup" -' \ --setup-hook='rm "\$1/real/copy-in-setup"' \ --setup-hook='echo copy-out-setup > "\$1/real/copy-out-setup"' \ --setup-hook='copy-out /real/copy-out-setup /tmp' \ --setup-hook='rm "\$1/real/copy-out-setup"' \ --setup-hook='tar-in /tmp/tar-in-setup.tar /real' \ --setup-hook='echo tar-in-setup | cmp "\$1/real/tar-in-setup" -' \ --setup-hook='tar-out /real/tar-in-setup /tmp/tar-out-setup.tar' \ --setup-hook='rm "\$1"/real/tar-in-setup' \ --setup-hook='upload /tmp/upload-setup /real/upload' \ --setup-hook='echo upload-setup | cmp "\$1/real/upload" -' \ --setup-hook='download /real/upload /tmp/download-setup' \ --setup-hook='rm "\$1/real/upload"' \ --setup-hook='sync-in /tmp/sync-in-setup /real' \ --setup-hook='echo sync-in-setup | cmp "\$1/real/file" -' \ --setup-hook='sync-out /real /tmp/sync-out-setup' \ --setup-hook='rm "\$1/real/file"' \ --essential-hook='ln -s "'"\$symlinktarget"'" "\$1/symlink"' \ --essential-hook='copy-in /tmp/copy-in-essential /symlink' \ --essential-hook='echo copy-in-essential | cmp "\$1/real/copy-in-essential" -' \ --essential-hook='rm "\$1/real/copy-in-essential"' \ --essential-hook='echo copy-out-essential > "\$1/real/copy-out-essential"' \ --essential-hook='copy-out /symlink/copy-out-essential /tmp' \ --essential-hook='rm "\$1/real/copy-out-essential"' \ --essential-hook='tar-in /tmp/tar-in-essential.tar /symlink' \ --essential-hook='echo tar-in-essential | cmp "\$1/real/tar-in-essential" -' \ --essential-hook='tar-out /symlink/tar-in-essential /tmp/tar-out-essential.tar' \ --essential-hook='rm "\$1"/real/tar-in-essential' \ --essential-hook='upload /tmp/upload-essential /symlink/upload' \ --essential-hook='echo upload-essential | cmp "\$1/real/upload" -' \ --essential-hook='download /symlink/upload /tmp/download-essential' \ 
--essential-hook='rm "\$1/real/upload"' \ --essential-hook='sync-in /tmp/sync-in-essential /symlink' \ --essential-hook='echo sync-in-essential | cmp "\$1/real/file" -' \ --essential-hook='sync-out /real /tmp/sync-out-essential' \ --essential-hook='rm "\$1/real/file"' \ --customize-hook='copy-in /tmp/copy-in-customize /symlink' \ --customize-hook='echo copy-in-customize | cmp "\$1/real/copy-in-customize" -' \ --customize-hook='rm "\$1/real/copy-in-customize"' \ --customize-hook='echo copy-out-customize > "\$1/real/copy-out-customize"' \ --customize-hook='copy-out /symlink/copy-out-customize /tmp' \ --customize-hook='rm "\$1/real/copy-out-customize"' \ --customize-hook='tar-in /tmp/tar-in-customize.tar /symlink' \ --customize-hook='echo tar-in-customize | cmp "\$1/real/tar-in-customize" -' \ --customize-hook='tar-out /symlink/tar-in-customize /tmp/tar-out-customize.tar' \ --customize-hook='rm "\$1"/real/tar-in-customize' \ --customize-hook='upload /tmp/upload-customize /symlink/upload' \ --customize-hook='echo upload-customize | cmp "\$1/real/upload" -' \ --customize-hook='download /symlink/upload /tmp/download-customize' \ --customize-hook='rm "\$1/real/upload"' \ --customize-hook='sync-in /tmp/sync-in-customize /symlink' \ --customize-hook='echo sync-in-customize | cmp "\$1/real/file" -' \ --customize-hook='sync-out /real /tmp/sync-out-customize' \ --customize-hook='rm "\$1/real/file"' \ --customize-hook='rmdir "\$1/real"' \ --customize-hook='rm "\$1/symlink"' \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror for n in setup essential customize; do ret=0 cmp /tmp/tar-in-\$n.tar /tmp/tar-out-\$n.tar || ret=\$? if [ "\$ret" -ne 0 ]; then if type diffoscope >/dev/null; then diffoscope /tmp/tar-in-\$n.tar /tmp/tar-out-\$n.tar exit 1 else echo "no diffoscope installed" >&2 fi if type base64 >/dev/null; then base64 /tmp/tar-in-\$n.tar base64 /tmp/tar-out-\$n.tar exit 1 else echo "no base64 installed" >&2 fi if type xxd >/dev/null; then xxd /tmp/tar-in-\$n.tar xxd /tmp/tar-out-\$n.tar exit 1 else echo "no xxd installed" >&2 fi exit 1 fi done echo copy-out-setup | cmp /tmp/copy-out-setup - echo copy-out-essential | cmp /tmp/copy-out-essential - echo copy-out-customize | cmp /tmp/copy-out-customize - echo upload-setup | cmp /tmp/download-setup - echo upload-essential | cmp /tmp/download-essential - echo upload-customize | cmp /tmp/download-customize - echo sync-in-setup | cmp /tmp/sync-out-setup/file - echo sync-in-essential | cmp /tmp/sync-out-essential/file - echo sync-in-customize | cmp /tmp/sync-out-customize/file - tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar \ /tmp/copy-in-setup /tmp/copy-in-essential /tmp/copy-in-customize \ /tmp/copy-out-setup /tmp/copy-out-essential /tmp/copy-out-customize \ /tmp/tar-in-setup.tar /tmp/tar-in-essential.tar /tmp/tar-in-customize.tar \ /tmp/tar-out-setup.tar /tmp/tar-out-essential.tar /tmp/tar-out-customize.tar \ /tmp/upload-setup /tmp/upload-essential /tmp/upload-customize \ /tmp/download-setup /tmp/download-essential /tmp/download-customize \ /tmp/sync-in-setup/file /tmp/sync-in-essential/file /tmp/sync-in-customize/file \ /tmp/sync-out-setup/file /tmp/sync-out-essential/file /tmp/sync-out-customize/file rmdir /tmp/sync-in-setup /tmp/sync-in-essential /tmp/sync-in-customize \ /tmp/sync-out-setup /tmp/sync-out-essential /tmp/sync-out-customize END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." 
>&2
		skipped=$((skipped+1))
	fi
done

print_header "mode=root,variant=apt: debootstrap no-op options"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=root --variant=apt --resolve-deps --merged-usr --no-merged-usr --force-check-gpg $DEFAULT_DIST /tmp/debian-chroot $mirror
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=root,variant=apt: --verbose"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=root --variant=apt --verbose $DEFAULT_DIST /tmp/debian-chroot $mirror
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=root,variant=apt: --debug"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=root --variant=apt --debug $DEFAULT_DIST /tmp/debian-chroot $mirror
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=root,variant=apt: --quiet"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=root --variant=apt --quiet $DEFAULT_DIST /tmp/debian-chroot $mirror
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=root,variant=apt: --logfile"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
# we check the full log to also prevent debug printfs from accidentally making it into a commit
$CMD --mode=root --variant=apt --logfile=/tmp/log $DEFAULT_DIST /tmp/debian-chroot $mirror
# omit the last line which should contain the runtime
head --lines=-1 /tmp/log > /tmp/trimmed
cat << LOG | diff -u - /tmp/trimmed
I: chroot architecture $HOSTARCH is equal to the host's architecture
I: automatically chosen format: directory
I: running apt-get update...
I: downloading packages with apt...
I: extracting archives...
I: installing essential packages...
I: cleaning package lists and apt cache...
LOG
tail --lines=1 /tmp/log | grep '^I: success in .* seconds$'
tar -C /tmp/debian-chroot --one-file-system -c . | tar -t | sort | diff -u tar1.txt -
rm -r /tmp/debian-chroot
rm /tmp/log /tmp/trimmed
END
if [ "$HAVE_QEMU" = "yes" ]; then
	./run_qemu.sh
	runtests=$((runtests+1))
else
	./run_null.sh SUDO
	runtests=$((runtests+1))
fi

print_header "mode=$defaultmode,variant=apt: without /etc/resolv.conf and /etc/hostname"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ !
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi rm /etc/resolv.conf /etc/hostname $CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror { tar -tf /tmp/debian-chroot.tar; printf "./etc/hostname\n"; printf "./etc/resolv.conf\n"; } | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=custom: preserve mode of /etc/resolv.conf and /etc/hostname" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi for f in /etc/resolv.conf /etc/hostname; do # preserve original content cat "\$f" > "\$f.bak" # in case \$f is a symlink, we replace it by a real file if [ -L "\$f" ]; then rm "\$f" cp "\$f.bak" "\$f" fi chmod 644 "\$f" [ "\$(stat --format=%A "\$f")" = "-rw-r--r--" ] done $CMD --variant=custom --mode=$defaultmode $DEFAULT_DIST /tmp/debian-chroot $mirror for f in /etc/resolv.conf /etc/hostname; do [ "\$(stat --format=%A "/tmp/debian-chroot/\$f")" = "-rw-r--r--" ] done rm /tmp/debian-chroot/dev/console rm /tmp/debian-chroot/dev/fd rm /tmp/debian-chroot/dev/full rm /tmp/debian-chroot/dev/null rm /tmp/debian-chroot/dev/ptmx rm /tmp/debian-chroot/dev/random rm /tmp/debian-chroot/dev/stderr rm /tmp/debian-chroot/dev/stdin rm /tmp/debian-chroot/dev/stdout rm /tmp/debian-chroot/dev/tty rm /tmp/debian-chroot/dev/urandom rm /tmp/debian-chroot/dev/zero rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/apt/lists/lock rm /tmp/debian-chroot/var/lib/dpkg/status # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir for f in /etc/resolv.conf /etc/hostname; do chmod 755 "\$f" [ "\$(stat --format=%A "\$f")" = "-rwxr-xr-x" ] done $CMD --variant=custom --mode=$defaultmode $DEFAULT_DIST /tmp/debian-chroot $mirror for f in /etc/resolv.conf /etc/hostname; do [ "\$(stat --format=%A "/tmp/debian-chroot/\$f")" = "-rwxr-xr-x" ] done rm /tmp/debian-chroot/dev/console rm /tmp/debian-chroot/dev/fd rm /tmp/debian-chroot/dev/full rm /tmp/debian-chroot/dev/null rm /tmp/debian-chroot/dev/ptmx rm /tmp/debian-chroot/dev/random rm /tmp/debian-chroot/dev/stderr rm /tmp/debian-chroot/dev/stdin rm /tmp/debian-chroot/dev/stdout rm /tmp/debian-chroot/dev/tty rm /tmp/debian-chroot/dev/urandom rm /tmp/debian-chroot/dev/zero rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/apt/lists/lock rm /tmp/debian-chroot/var/lib/dpkg/status # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir for f in /etc/resolv.conf /etc/hostname; do rm "\$f" ln -s "\$f.bak" "\$f" [ "\$(stat --format=%A "\$f")" = "lrwxrwxrwx" ] done $CMD --variant=custom --mode=$defaultmode $DEFAULT_DIST /tmp/debian-chroot $mirror for f in /etc/resolv.conf /etc/hostname; do [ "\$(stat --format=%A "/tmp/debian-chroot/\$f")" = "-rw-r--r--" ] done rm /tmp/debian-chroot/dev/console rm /tmp/debian-chroot/dev/fd rm /tmp/debian-chroot/dev/full rm 
/tmp/debian-chroot/dev/null rm /tmp/debian-chroot/dev/ptmx rm /tmp/debian-chroot/dev/random rm /tmp/debian-chroot/dev/stderr rm /tmp/debian-chroot/dev/stdin rm /tmp/debian-chroot/dev/stdout rm /tmp/debian-chroot/dev/tty rm /tmp/debian-chroot/dev/urandom rm /tmp/debian-chroot/dev/zero rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/apt/lists/lock rm /tmp/debian-chroot/var/lib/dpkg/status # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=$defaultmode,variant=essential: test not having to install apt in --include because a hook did it before" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=essential --include=apt \ --essential-hook='APT_CONFIG=\$MMDEBSTRAP_APT_CONFIG apt-get update' \ --essential-hook='APT_CONFIG=\$MMDEBSTRAP_APT_CONFIG apt-get --yes install -oDPkg::Chroot-Directory="\$1" apt' \ $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | grep -v ./var/lib/apt/extended_states | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=$defaultmode,variant=apt: remove start-stop-daemon and policy-rc.d in hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=apt --customize-hook='rm "\$1/usr/sbin/policy-rc.d"; rm "\$1/sbin/start-stop-daemon"' $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi # test that the user can drop archives into /var/cache/apt/archives as well as # into /var/cache/apt/archives/partial for variant in extract custom essential apt minbase buildd important standard; do print_header "mode=$defaultmode,variant=$variant: compare output with pre-seeded /var/cache/apt/archives" # pyc files and man index.db are not reproducible # See #1004557 and #1004558 if [ "$variant" = "standard" ]; then echo "skipping test because of #864082" >&2 skipped=$((skipped+1)) continue fi if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then echo "skipping test on oldstable because /var/lib/systemd/catalog/database differs" >&2 skipped=$((skipped+1)) continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test requires the cache directory to be mounted on /mnt and should only be run inside a container" >&2 exit 1 fi include="--include=doc-debian" if [ "$variant" = "custom" ]; then include="\$include,base-files,base-passwd,coreutils,dash,diffutils,dpkg,libc-bin,sed" fi $CMD \$include --mode=$defaultmode --variant=$variant \ --setup-hook='mkdir -p "\$1"/var/cache/apt/archives/partial' \ --setup-hook='touch "\$1"/var/cache/apt/archives/lock' \ --setup-hook='chmod 0640 "\$1"/var/cache/apt/archives/lock' \ $DEFAULT_DIST - $mirror > orig.tar # somehow, when trying to create a tarball from the 9p mount, tar throws the # following error: tar: ./doc-debian_6.4_all.deb: File shrank by 132942 bytes; padding with zeros # to reproduce, try: tar --directory /mnt/cache/debian/pool/main/d/doc-debian/ --create --file - . | tar --directory /tmp/ --extract --file - # this will be different: # md5sum /mnt/cache/debian/pool/main/d/doc-debian/*.deb /tmp/*.deb # another reason to copy the files into a new directory is, that we can use shell globs tmpdir=\$(mktemp -d) cp /mnt/cache/debian/pool/main/b/busybox/busybox_*"_$HOSTARCH.deb" /mnt/cache/debian/pool/main/a/apt/apt_*"_$HOSTARCH.deb" "\$tmpdir" $CMD \$include --mode=$defaultmode --variant=$variant \ --setup-hook='mkdir -p "\$1"/var/cache/apt/archives/partial' \ --setup-hook='sync-in "'"\$tmpdir"'" /var/cache/apt/archives/partial' \ $DEFAULT_DIST - $mirror > test1.tar cmp orig.tar test1.tar $CMD \$include --mode=$defaultmode --variant=$variant --skip=download/empty \ --customize-hook='touch "\$1"/var/cache/apt/archives/partial' \ --setup-hook='mkdir -p "\$1"/var/cache/apt/archives/' \ --setup-hook='sync-in "'"\$tmpdir"'" /var/cache/apt/archives/' \ --setup-hook='chmod 0755 "\$1"/var/cache/apt/archives/' \ $DEFAULT_DIST - $mirror > test2.tar cmp orig.tar test2.tar rm "\$tmpdir"/*.deb orig.tar test1.tar test2.tar rmdir "\$tmpdir" END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." 
>&2 skipped=$((skipped+1)) fi done print_header "mode=$defaultmode,variant=apt: create directory --dry-run" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --dry-run --variant=apt --setup-hook="exit 1" --essential-hook="exit 1" --customize-hook="exit 1" $DEFAULT_DIST /tmp/debian-chroot $mirror rm /tmp/debian-chroot/dev/console rm /tmp/debian-chroot/dev/fd rm /tmp/debian-chroot/dev/full rm /tmp/debian-chroot/dev/null rm /tmp/debian-chroot/dev/ptmx rm /tmp/debian-chroot/dev/random rm /tmp/debian-chroot/dev/stderr rm /tmp/debian-chroot/dev/stdin rm /tmp/debian-chroot/dev/stdout rm /tmp/debian-chroot/dev/tty rm /tmp/debian-chroot/dev/urandom rm /tmp/debian-chroot/dev/zero rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/apt/lists/lock rm /tmp/debian-chroot/var/lib/dpkg/status # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi # test all --dry-run variants # we are testing all variants here because with 0.7.5 we had a bug: # mmdebstrap sid /dev/null --simulate ==> E: cannot read /var/cache/apt/archives/ for variant in extract custom essential apt minbase buildd important standard; do for mode in root unshare fakechroot proot chrootless; do print_header "mode=$mode,variant=$variant: create tarball --dry-run" if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 prefix= include= if [ "\$(id -u)" -eq 0 ] && [ "$mode" != root ]; then # this must be qemu if ! id -u user >/dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi if [ "$mode" = unshare ]; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi sysctl -w kernel.unprivileged_userns_clone=1 fi prefix="runuser -u user --" if [ "$mode" = extract ] || [ "$mode" = custom ]; then include="--include=\$(cat pkglist.txt | tr '\n' ',')" fi fi \$prefix $CMD --mode=$mode \$include --dry-run --variant=$variant $DEFAULT_DIST /tmp/debian-chroot.tar $mirror if [ -e /tmp/debian-chroot.tar ]; then echo "/tmp/debian-chroot.tar must not be created with --dry-run" >&2 exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$mode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi done done # test extract variant also with chrootless mode for mode in root unshare fakechroot proot chrootless; do print_header "mode=$mode,variant=extract: unpack doc-debian" if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." 
>&2 skipped=$((skipped+1)) continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi if [ "$mode" = unshare ]; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi sysctl -w kernel.unprivileged_userns_clone=1 fi prefix= [ "\$(id -u)" -eq 0 ] && [ "$mode" != "root" ] && prefix="runuser -u user --" [ "$mode" = "fakechroot" ] && prefix="\$prefix fakechroot fakeroot" \$prefix $CMD --mode=$mode --variant=extract --include=doc-debian $DEFAULT_DIST /tmp/debian-chroot $mirror # delete contents of doc-debian rm /tmp/debian-chroot/usr/share/doc-base/debian-* rm -r /tmp/debian-chroot/usr/share/doc/debian rm -r /tmp/debian-chroot/usr/share/doc/doc-debian # delete real files rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/dpkg/status rm /tmp/debian-chroot/var/cache/apt/archives/lock rm /tmp/debian-chroot/var/lib/dpkg/lock rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend rm /tmp/debian-chroot/var/lib/apt/lists/lock ## delete merged usr symlinks #rm /tmp/debian-chroot/libx32 #rm /tmp/debian-chroot/lib64 #rm /tmp/debian-chroot/lib32 #rm /tmp/debian-chroot/sbin #rm /tmp/debian-chroot/bin #rm /tmp/debian-chroot/lib # delete ./dev (files might exist or not depending on the mode) rm -f /tmp/debian-chroot/dev/console rm -f /tmp/debian-chroot/dev/fd rm -f /tmp/debian-chroot/dev/full rm -f /tmp/debian-chroot/dev/null rm -f /tmp/debian-chroot/dev/ptmx rm -f /tmp/debian-chroot/dev/random rm -f /tmp/debian-chroot/dev/stderr rm -f /tmp/debian-chroot/dev/stdin rm -f /tmp/debian-chroot/dev/stdout rm -f /tmp/debian-chroot/dev/tty rm -f /tmp/debian-chroot/dev/urandom rm -f /tmp/debian-chroot/dev/zero # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else echo "HAVE_QEMU != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi done print_header "mode=chrootless,variant=custom: install doc-debian" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian $DEFAULT_DIST /tmp/debian-chroot $mirror tar -C /tmp/debian-chroot --owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/debian-chroot.tar . 
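# the tar options above (--owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime together with SOURCE_DATE_EPOCH) normalize ownership, member order and timestamps, so the listing written to doc-debian.tar.list below is reproducible and can serve as the reference for the later chrootless tests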
tar tvf /tmp/debian-chroot.tar > doc-debian.tar.list rm /tmp/debian-chroot.tar # delete contents of doc-debian rm /tmp/debian-chroot/usr/share/doc-base/debian-* rm -r /tmp/debian-chroot/usr/share/doc/debian rm -r /tmp/debian-chroot/usr/share/doc/doc-debian # delete real files rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/dpkg/status rm /tmp/debian-chroot/var/cache/apt/archives/lock rm /tmp/debian-chroot/var/lib/dpkg/lock rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend rm /tmp/debian-chroot/var/lib/apt/lists/lock ## delete merged usr symlinks #rm /tmp/debian-chroot/libx32 #rm /tmp/debian-chroot/lib64 #rm /tmp/debian-chroot/lib32 #rm /tmp/debian-chroot/sbin #rm /tmp/debian-chroot/bin #rm /tmp/debian-chroot/lib # in chrootless mode, there is more to remove rm /tmp/debian-chroot/var/lib/dpkg/triggers/Lock rm /tmp/debian-chroot/var/lib/dpkg/triggers/Unincorp rm /tmp/debian-chroot/var/lib/dpkg/status-old rm /tmp/debian-chroot/var/lib/dpkg/info/format rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.md5sums rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.list # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi # regularly check whether more packages work with chrootless: # for p in $(grep-aptavail -F Essential yes -s Package -n | sort -u); do ./mmdebstrap --mode=chrootless --variant=custom --include=bsdutils,coreutils,debianutils,diffutils,dpkg,findutils,grep,gzip,hostname,init-system-helpers,ncurses-base,ncurses-bin,perl-base,sed,sysvinit-utils,tar,$p unstable /dev/null; done # # see https://bugs.debian.org/cgi-bin/pkgreport.cgi?users=debian-dpkg@lists.debian.org;tag=dpkg-root-support # # base-files: #824594 # base-passwd: debconf # bash: depends base-files # bsdutils: ok # coreutils: ok # dash: debconf # debianutils: ok # diffutils: ok # dpkg: ok # findutils: ok # grep: ok # gzip: ok # hostname: ok # init-system-helpers: ok # libc-bin: #983412 # login: debconf # ncurses-base: ok # ncurses-bin: ok # perl-base: ok # sed: ok # sysvinit-utils: ok # tar: ok # util-linux: debconf print_header "mode=chrootless,variant=custom: install known-good from essential:yes" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=chrootless --variant=custom --include=bsdutils,coreutils,debianutils,diffutils,dpkg,findutils,grep,gzip,hostname,init-system-helpers,ncurses-base,ncurses-bin,perl-base,sed,sysvinit-utils,tar $DEFAULT_DIST /dev/null $mirror END if [ "$DEFAULT_DIST" = "oldstable" ]; then echo "chrootless doesn't work in oldstable -- Skipping test..." >&2 skipped=$((skipped+1)) elif true; then # https://salsa.debian.org/pkg-debconf/debconf/-/merge_requests/8 echo "blocked by #983425 -- Skipping test..." 
>&2 skipped=$((skipped+1)) elif [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=chrootless,variant=custom: install doc-debian and output tarball" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian $DEFAULT_DIST /tmp/debian-chroot.tar $mirror tar tvf /tmp/debian-chroot.tar | grep -v ' ./dev' | diff -u doc-debian.tar.list - rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=chrootless,variant=custom: install doc-debian and test hooks" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=chrootless --skip=cleanup/tmp --variant=custom --include=doc-debian --setup-hook='touch "\$1/tmp/setup"' --customize-hook='touch "\$1/tmp/customize"' $DEFAULT_DIST /tmp/debian-chroot $mirror rm /tmp/debian-chroot/tmp/setup rm /tmp/debian-chroot/tmp/customize tar -C /tmp/debian-chroot --owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/debian-chroot.tar . 
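# after removing the two marker files created by the setup and customize hooks, the normalized tarball listing must match the reference in doc-debian.tar.list (ignoring ./dev entries), verifying that running hooks in chrootless mode does not otherwise change the result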
tar tvf /tmp/debian-chroot.tar | grep -v ' ./dev' | diff -u doc-debian.tar.list - rm /tmp/debian-chroot.tar # delete contents of doc-debian rm /tmp/debian-chroot/usr/share/doc-base/debian-* rm -r /tmp/debian-chroot/usr/share/doc/debian rm -r /tmp/debian-chroot/usr/share/doc/doc-debian # delete real files rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/dpkg/status rm /tmp/debian-chroot/var/cache/apt/archives/lock rm /tmp/debian-chroot/var/lib/dpkg/lock rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend rm /tmp/debian-chroot/var/lib/apt/lists/lock ## delete merged usr symlinks #rm /tmp/debian-chroot/libx32 #rm /tmp/debian-chroot/lib64 #rm /tmp/debian-chroot/lib32 #rm /tmp/debian-chroot/sbin #rm /tmp/debian-chroot/bin #rm /tmp/debian-chroot/lib # in chrootless mode, there is more to remove rm /tmp/debian-chroot/var/lib/dpkg/triggers/Lock rm /tmp/debian-chroot/var/lib/dpkg/triggers/Unincorp rm /tmp/debian-chroot/var/lib/dpkg/status-old rm /tmp/debian-chroot/var/lib/dpkg/info/format rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.md5sums rm /tmp/debian-chroot/var/lib/dpkg/info/doc-debian.list # the rest should be empty directories that we can rmdir recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi print_header "mode=chrootless,variant=custom: install libmagic-mgc on arm64" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=chrootless --variant=custom --architectures=arm64 --include=libmagic-mgc $DEFAULT_DIST /tmp/debian-chroot $mirror # delete contents of libmagic-mgc rm /tmp/debian-chroot/usr/lib/file/magic.mgc rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/README.Debian rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/changelog.Debian.gz rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/changelog.gz rm /tmp/debian-chroot/usr/share/doc/libmagic-mgc/copyright rm /tmp/debian-chroot/usr/share/file/magic.mgc rm /tmp/debian-chroot/usr/share/misc/magic.mgc # delete real files rm /tmp/debian-chroot/etc/apt/sources.list rm /tmp/debian-chroot/etc/fstab rm /tmp/debian-chroot/etc/hostname rm /tmp/debian-chroot/etc/resolv.conf rm /tmp/debian-chroot/var/lib/dpkg/status rm /tmp/debian-chroot/var/cache/apt/archives/lock rm /tmp/debian-chroot/var/lib/dpkg/lock rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend rm /tmp/debian-chroot/var/lib/apt/lists/lock ## delete merged usr symlinks #rm /tmp/debian-chroot/libx32 #rm /tmp/debian-chroot/lib64 #rm /tmp/debian-chroot/lib32 #rm /tmp/debian-chroot/sbin #rm /tmp/debian-chroot/bin #rm /tmp/debian-chroot/lib # in chrootless mode, there is more to remove rm /tmp/debian-chroot/var/lib/dpkg/arch rm /tmp/debian-chroot/var/lib/dpkg/triggers/Lock rm /tmp/debian-chroot/var/lib/dpkg/triggers/Unincorp rm /tmp/debian-chroot/var/lib/dpkg/status-old rm /tmp/debian-chroot/var/lib/dpkg/info/format rm /tmp/debian-chroot/var/lib/dpkg/info/libmagic-mgc.md5sums rm /tmp/debian-chroot/var/lib/dpkg/info/libmagic-mgc.list # the rest should be empty directories that we can rmdir 
recursively find /tmp/debian-chroot -depth -print0 | xargs -0 rmdir END if [ "$HOSTARCH" != amd64 ]; then echo "HOSTARCH != amd64 -- Skipping test..." >&2 skipped=$((skipped+1)) elif [ "$HAVE_BINFMT" = "yes" ]; then if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi else echo "HAVE_BINFMT != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi print_header "mode=root,variant=custom: install busybox-based sub-essential system" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 pkgs=base-files,base-passwd,busybox,debianutils,dpkg,libc-bin,mawk,tar # busybox --install -s will install symbolic links into the rootfs, leaving # existing files untouched. It has to run after extraction (otherwise there is # no busybox binary) and before first configuration $CMD --mode=root --variant=custom \ --include=\$pkgs \ --setup-hook='mkdir -p "\$1/bin"' \ --setup-hook='echo root:x:0:0:root:/root:/bin/sh > "\$1/etc/passwd"' \ --setup-hook='printf "root:x:0:\nmail:x:8:\nutmp:x:43:\n" > "\$1/etc/group"' \ --extract-hook='chroot "\$1" busybox --install -s' \ $DEFAULT_DIST /tmp/debian-chroot $mirror echo "\$pkgs" | tr ',' '\n' > /tmp/expected chroot /tmp/debian-chroot dpkg-query -f '\${binary:Package}\n' -W \ | comm -12 - /tmp/expected \ | diff -u - /tmp/expected rm /tmp/expected for cmd in echo cat sed grep; do test -L /tmp/debian-chroot/bin/\$cmd test "\$(readlink /tmp/debian-chroot/bin/\$cmd)" = "/bin/busybox" done for cmd in sort; do test -L /tmp/debian-chroot/usr/bin/\$cmd test "\$(readlink /tmp/debian-chroot/usr/bin/\$cmd)" = "/bin/busybox" done chroot /tmp/debian-chroot echo foobar \ | chroot /tmp/debian-chroot cat \ | chroot /tmp/debian-chroot sort \ | chroot /tmp/debian-chroot sed 's/foobar/blubber/' \ | chroot /tmp/debian-chroot grep blubber >/dev/null rm -r /tmp/debian-chroot END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) else ./run_null.sh SUDO runtests=$((runtests+1)) fi # test foreign architecture with all modes # create directory in sudo mode for mode in root unshare fakechroot proot; do print_header "mode=$mode,variant=apt: create arm64 tarball" if [ "$HOSTARCH" != amd64 ]; then echo "HOSTARCH != amd64 -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi if [ "$RUN_MA_SAME_TESTS" != yes ]; then echo "RUN_MA_SAME_TESTS != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi if [ "$HAVE_BINFMT" != "yes" ]; then echo "HAVE_BINFMT != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." >&2 skipped=$((skipped+1)) continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi adduser --gecos user --disabled-password user fi if [ "$mode" = unshare ]; then if [ ! 
-e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi sysctl -w kernel.unprivileged_userns_clone=1 fi prefix= [ "\$(id -u)" -eq 0 ] && [ "$mode" != "root" ] && prefix="runuser -u user --" [ "$mode" = "fakechroot" ] && prefix="\$prefix fakechroot fakeroot" \$prefix $CMD --mode=$mode --variant=apt --architectures=arm64 $DEFAULT_DIST /tmp/debian-chroot.tar $mirror # we ignore differences between architectures by ignoring some files # and renaming others # in proot mode, some extra files are put there by proot { tar -tf /tmp/debian-chroot.tar \ | grep -v '^\./lib/ld-linux-aarch64\.so\.1$' \ | grep -v '^\./lib/aarch64-linux-gnu/ld-linux-aarch64\.so\.1$' \ | grep -v '^\./usr/share/doc/[^/]\+/changelog\(\.Debian\)\?\.arm64\.gz$' \ | sed 's/aarch64-linux-gnu/x86_64-linux-gnu/' \ | sed 's/arm64/amd64/'; } | sort > tar2.txt { cat tar1.txt \ | grep -v '^\./usr/bin/i386$' \ | grep -v '^\./usr/bin/x86_64$' \ | grep -v '^\./lib64/$' \ | grep -v '^\./lib64/ld-linux-x86-64\.so\.2$' \ | grep -v '^\./lib/x86_64-linux-gnu/ld-linux-x86-64\.so\.2$' \ | grep -v '^\./lib/x86_64-linux-gnu/libmvec-2\.[0-9]\+\.so$' \ | grep -v '^\./lib/x86_64-linux-gnu/libmvec\.so\.1$' \ | grep -v '^\./usr/share/doc/[^/]\+/changelog\(\.Debian\)\?\.amd64\.gz$' \ | grep -v '^\./usr/share/man/man8/i386\.8\.gz$' \ | grep -v '^\./usr/share/man/man8/x86_64\.8\.gz$'; [ "$mode" = "proot" ] && printf "./etc/ld.so.preload\n"; } | sort | diff -u - tar2.txt rm /tmp/debian-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$mode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi done print_header "mode=$defaultmode,variant=apt: test ubuntu focal" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 if ! /usr/lib/apt/apt-helper download-file http://archive.ubuntu.com/ubuntu/dists/focal/Release /dev/null && grep "QEMU Virtual CPU" /proc/cpuinfo; then if [ ! -e /mmdebstrap-testenv ]; then echo "this test modifies the system and should only be run inside a container" >&2 exit 1 fi ip link set dev ens3 up ip addr add 10.0.2.15/24 dev ens3 ip route add default via 10.0.2.2 dev ens3 echo "nameserver 10.0.2.3" > /etc/resolv.conf fi $CMD --mode=$defaultmode --variant=apt --customize-hook='grep UBUNTU_CODENAME=focal "\$1/etc/os-release"' focal /dev/null END if [ "$ONLINE" = "yes" ]; then if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh runtests=$((runtests+1)) elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO runtests=$((runtests+1)) else ./run_null.sh runtests=$((runtests+1)) fi else echo "ONLINE != yes -- Skipping test..." >&2 skipped=$((skipped+1)) fi if [ -e shared/cover_db.img ]; then # produce report inside the VM to make sure that the versions match or # otherwise we might get: # Can't read shared/cover_db/runs/1598213854.252.64287/cover.14 with Sereal: Sereal: Error: Bad Sereal header: Not a valid Sereal document. at offset 1 of input at srl_decoder.c line 600 at /usr/lib/x86_64-linux-gnu/perl5/5.30/Devel/Cover/DB/IO/Sereal.pm line 34, <$fh> chunk 1. 
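# the test script below runs the Devel::Cover frontend "cover" on the collected cover_db inside the testbed (so that the versions match) and copies the static HTML report files into report/ so they can be opened on the host afterwards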
cat << END > shared/test.sh cover -nogcov -report html_basic cover_db >&2 mkdir -p report for f in common.js coverage.html cover.css css.js mmdebstrap--branch.html mmdebstrap--condition.html mmdebstrap.html mmdebstrap--subroutine.html standardista-table-sorting.js; do cp -a cover_db/\$f report done cover -delete cover_db >&2 END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$mode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi echo echo open file://$(pwd)/shared/report/coverage.html in a browser echo fi if [ "$((i-1))" -ne "$total" ]; then echo "unexpected number of tests: got $((i-1)) but expected $total" >&2 exit 1 fi if [ "$skipped" -gt 0 ]; then echo "number of skipped tests: $skipped" >&2 fi if [ "$runtests" -gt 0 ]; then echo "number of executed tests: $runtests" >&2 fi if [ "$((skipped+runtests))" -ne "$total" ]; then echo "sum of skipped and executed tests is not equal to $total" >&2 exit 1 fi rm shared/test.sh shared/tar1.txt shared/tar2.txt shared/pkglist.txt shared/doc-debian.tar.list shared/mmdebstrap shared/taridshift shared/tarfilter shared/proxysolver mmdebstrap/examples/000077500000000000000000000000001420155655700150445ustar00rootroot00000000000000mmdebstrap/examples/twb/000077500000000000000000000000001420155655700156405ustar00rootroot00000000000000mmdebstrap/examples/twb/debian-11-minimal.py000066400000000000000000000123621420155655700213030ustar00rootroot00000000000000#!/usr/bin/python3 import argparse import pathlib import subprocess import tempfile import pathlib __author__ = "Trent W. Buck" __copyright__ = "Copyright © 2020 Trent W. Buck" __license__ = "expat" __doc__ = """ build the simplest Debian Live image that can boot This uses mmdebstrap to do the heavy lifting; it can run entirely without root privileges. It emits a USB key disk image that contains a bootable EFI ESP, which in turn includes a bootloader (refind), kernel, ramdisk, and filesystem.squashfs. NOTE: this is the simplest config possible. It lacks CRITICAL SECURITY AND DATA LOSS packages, such as amd64-microcode and smartd. """ parser = argparse.ArgumentParser(description=__doc__) parser.add_argument( "output_file", nargs="?", default=pathlib.Path("filesystem.img"), type=pathlib.Path ) args = parser.parse_args() filesystem_img_size = "256M" # big enough to include filesystem.squashfs + about 64M of bootloader, kernel, and ramdisk. esp_offset = 1024 * 1024 # 1MiB esp_label = "UEFI-ESP" # max 8 bytes for FAT32 live_media_path = "debian-live" with tempfile.TemporaryDirectory(prefix="debian-live-bullseye-amd64-minimal.") as td: td = pathlib.Path(td) subprocess.check_call( [ "mmdebstrap", "--mode=unshare", "--variant=apt", '--aptopt=Acquire::http::Proxy "http://apt-cacher-ng.cyber.com.au:3142"', '--aptopt=Acquire::https::Proxy "DIRECT"', "--dpkgopt=force-unsafe-io", "--include=linux-image-amd64 init initramfs-tools live-boot netbase", "--include=dbus", # https://bugs.debian.org/814758 "--include=live-config iproute2 keyboard-configuration locales sudo user-setup", "--include=ifupdown isc-dhcp-client", # live-config doesn't support systemd-networkd yet. # Do the **BARE MINIMUM** to make a USB key that can boot on X86_64 UEFI. # We use mtools so we do not ever need root privileges. # We can't use mkfs.vfat, as that needs kpartx or losetup (i.e. root). # We can't use mkfs.udf, as that needs mount (i.e. root). # We can't use "refind-install --usedefault" as that runs mount(8) (i.e. root). 
# We don't use genisoimage because # 1) ISO9660 must die; # 2) incomplete UDF 1.5+ support; # 3) resulting filesystem can't be tweaked after flashing (e.g. debian-live/site.dir/etc/systemd/network/up.network). # # We use refind because 1) I hate grub; and 2) I like refind. # If you want aarch64 or ia32 you need to install their BOOTxxx.EFI files. # If you want kernel+initrd on something other than FAT, you need refind/drivers_xxx/xxx_xxx.EFI. # # FIXME: with qemu in UEFI mode (OVMF), I get dumped into startup.nsh (UEFI REPL). # From there, I can manually type in "FS0:\EFI\BOOT\BOOTX64.EFI" to start refind, tho. # So WTF is its problem? Does it not support fallback bootloader? "--include=refind parted mtools", "--essential-hook=echo refind refind/install_to_esp boolean false | chroot $1 debconf-set-selections", "--customize-hook=echo refind refind/install_to_esp boolean true | chroot $1 debconf-set-selections", "--customize-hook=chroot $1 mkdir -p /boot/USB /boot/EFI/BOOT", "--customize-hook=chroot $1 cp /usr/share/refind/refind/refind_x64.efi /boot/EFI/BOOT/BOOTX64.EFI", f"--customize-hook=chroot $1 truncate --size={filesystem_img_size} /boot/USB/filesystem.img", f"--customize-hook=chroot $1 parted --script --align=optimal /boot/USB/filesystem.img mklabel gpt mkpart {esp_label} {esp_offset}b 100% set 1 esp on", f"--customize-hook=chroot $1 mformat -i /boot/USB/filesystem.img@@{esp_offset} -F -v {esp_label}", f"--customize-hook=chroot $1 mmd -i /boot/USB/filesystem.img@@{esp_offset} ::{live_media_path}", f"""--customize-hook=echo '"Boot with default options" "boot=live live-media-path={live_media_path}"' >$1/boot/refind_linux.conf""", # NOTE: find sidesteps the "glob expands before chroot applies" problem. f"""--customize-hook=chroot $1 find -O3 /boot/ -xdev -mindepth 1 -maxdepth 1 -regextype posix-egrep -iregex '.*/(EFI|refind_linux.conf|vmlinuz.*|initrd.img.*)' -exec mcopy -vsbpm -i /boot/USB/filesystem.img@@{esp_offset} {{}} :: ';'""", # FIXME: copy-out doesn't handle sparseness, so is REALLY slow (about 50 seconds). # Therefore instead leave it in the squashfs, and extract it later. # f'--customize-hook=copy-out /boot/USB/filesystem.img /tmp/', # f'--customize-hook=chroot $1 rm /boot/USB/filesystem.img', "bullseye", td / "filesystem.squashfs", ] ) with args.output_file.open("wb") as f: subprocess.check_call( ["rdsquashfs", "--cat=boot/USB/filesystem.img", td / "filesystem.squashfs"], stdout=f, ) subprocess.check_call( [ "mcopy", "-i", f"{args.output_file}@@{esp_offset}", td / "filesystem.squashfs", f"::{live_media_path}/filesystem.squashfs", ] ) mmdebstrap/examples/twb/debian-sid-zfs.py000066400000000000000000000334731420155655700210230ustar00rootroot00000000000000#!/usr/bin/python3 import argparse import pathlib import subprocess import tempfile import pathlib __author__ = "Trent W. Buck" __copyright__ = "Copyright © 2020 Trent W. Buck" __license__ = "expat" __doc__ = """ build a Debian Live image that can install Debian 11 on ZFS 2 This uses mmdebstrap to do the heavy lifting; it can run entirely without root privileges. It emits a USB key disk image that contains a bootable EFI ESP, which in turn includes a bootloader (refind), kernel, ramdisk, and filesystem.squashfs. """ parser = argparse.ArgumentParser(description=__doc__) parser.add_argument( "output_file", nargs="?", default=pathlib.Path("filesystem.img"), type=pathlib.Path ) parser.add_argument( "--timezone", default="Australia/Melbourne", type=lambda s: s.split("/"), help='NOTE: MUST be "Area/Zone" not e.g. 
"UTC", for now', ) parser.add_argument( "--locale", default="en_AU.UTF-8", help='NOTE: MUST end in ".UTF-8", for now' ) args = parser.parse_args() filesystem_img_size = "512M" # big enough to include filesystem.squashfs + about 64M of bootloader, kernel, and ramdisk. esp_offset = 1024 * 1024 # 1MiB esp_label = "UEFI-ESP" # max 8 bytes for FAT32 live_media_path = "debian-live" with tempfile.TemporaryDirectory(prefix="debian-sid-zfs.") as td: td = pathlib.Path(td) subprocess.check_call( [ "mmdebstrap", "--mode=unshare", "--variant=apt", '--aptopt=Acquire::http::Proxy "http://apt-cacher-ng.cyber.com.au:3142"', '--aptopt=Acquire::https::Proxy "DIRECT"', "--dpkgopt=force-unsafe-io", "--components=main contrib non-free", # needed for CPU security patches "--include=init initramfs-tools xz-utils live-boot netbase", "--include=dbus", # https://bugs.debian.org/814758 "--include=linux-image-amd64 firmware-linux", # Have ZFS 2.0 support. "--include=zfs-dkms zfsutils-linux zfs-zed build-essential linux-headers-amd64", # ZFS 2 support # Make the initrd a little smaller (41MB -> 20MB), at the expensive of significantly slower image build time. "--include=zstd", "--essential-hook=mkdir -p $1/etc/initramfs-tools/conf.d", "--essential-hook=>$1/etc/initramfs-tools/conf.d/zstd echo COMPRESS=zstd", # Be the equivalent of Debian Live GNOME # '--include=live-task-gnome', #'--include=live-task-xfce', # FIXME: enable this? It makes live-task-xfce go from 1G to 16G... so no. #'--aptopt=Apt::Install-Recommends "true"', # ...cherry-pick instead # UPDATE: debian-installer-launcher DOES NOT WORK because we don't load crap SPECIFICALLY into /live/installer, in the ESP. # UPDATE: network-manager-gnome DOES NOT WORK, nor is systemd-networkd auto-started... WTF? # end result is no networking. #'--include=live-config user-setup sudo firmware-linux haveged', #'--include=calamares-settings-debian udisks2', # 300MB weirdo Qt GUI debian installer #'--include=xfce4-terminal', # x86_64 CPUs are undocumented proprietary RISC chips that EMULATE a documented x86_64 CISC ISA. # The emulator is called "microcode", and is full of security vulnerabilities. # Make sure security patches for microcode for *ALL* CPUs are included. # By default, it tries to auto-detect the running CPU, so only patches the CPU of the build server. "--include=intel-microcode amd64-microcode iucode-tool", "--essential-hook=>$1/etc/default/intel-microcode echo IUCODE_TOOL_INITRAMFS=yes IUCODE_TOOL_SCANCPUS=no", "--essential-hook=>$1/etc/default/amd64-microcode echo AMD64UCODE_INITRAMFS=yes", "--dpkgopt=force-confold", # Work around https://bugs.debian.org/981004 # DHCP/DNS/SNTP clients... # FIXME: use live-config ? "--include=libnss-resolve libnss-myhostname systemd-timesyncd", "--customize-hook=chroot $1 cp -alf /lib/systemd/resolv.conf /etc/resolv.conf", # This probably needs to happen LAST # FIXME: fix resolv.conf to point to resolved, not "copy from the build-time OS" # FIXME: fix hostname & hosts to not exist, not "copy from the build-time OS" "--customize-hook=systemctl --root=$1 enable systemd-networkd systemd-timesyncd", # is this needed? # Run a DHCP client on *ALL* ifaces. # Consider network "up" (start sshd and local login prompt) when *ANY* (not ALL) ifaces are up. 
"--customize-hook=>$1/etc/systemd/network/up.network printf '%s\n' '[Match]' Name='en*' '[Network]' DHCP=yes", # try DHCP on all ethernet ifaces "--customize-hook=mkdir $1/etc/systemd/system/systemd-networkd-wait-online.service.d", "--customize-hook=>$1/etc/systemd/system/systemd-networkd-wait-online.service.d/any-not-all.conf printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/lib/systemd/systemd-networkd-wait-online --any'", # Hope there's a central smarthost SMTP server called "mail" in the local search domain. # FIXME: can live-config do this? "--include=msmtp-mta", "--customize-hook=>$1/etc/msmtprc printf '%s\n' 'account default' 'syslog LOG_MAIL' 'host mail' 'auto_from on'", # Hope there's a central RELP logserver called "logserv" in the local domain. # FIXME: can live-config do this? "--include=rsyslog-relp", """--customize-hook=>$1/etc/rsyslog.conf printf '%s\n' 'module(load="imuxsock")' 'module(load="imklog")' 'module(load="omrelp")' 'action(type="omrelp" target="logserv" port="2514" template="RSYSLOG_SyslogProtocol23Format")'""", # Run self-tests on all discoverable hard disks, and (try to) email if something goes wrong. "--include=smartmontools bsd-mailx", "--customize-hook=>$1/etc/smartd.conf echo 'DEVICESCAN -n standby,15 -a -o on -S on -s (S/../../7/00|L/../01/./01) -t -H -m root -M once'", # For rarely-updated, rarely-rebooted SOEs, apply what security updates we can into transient tmpfs COW. # This CANNOT apply kernel security updates (though it will download them). # This CANNOT make the upgrades persistent across reboots (they re-download each boot). # FIXME: Would it be cleaner to set Environment=NEEDRESTART_MODE=a in # apt-daily-upgrade.service and/or # unattended-upgrades.service, so # needrestart is noninteractive only when apt is noninteractive? "--include=unattended-upgrades needrestart", "--customize-hook=echo 'unattended-upgrades unattended-upgrades/enable_auto_updates boolean true' | chroot $1 debconf-set-selections", """--customize-hook=>$1/etc/needrestart/conf.d/unattended-needrestart.conf echo '$nrconf{restart} = "a";'""", # https://bugs.debian.org/894444 # Do an apt update & apt upgrade at boot time (as well as @daily). # The lack of /etc/machine-id causes these to be implicitly enabled. # FIXME: use dropin in /etc. "--customize-hook=>>$1/lib/systemd/system/apt-daily.service printf '%s\n' '[Install]' 'WantedBy=multi-user.target'", "--customize-hook=>>$1/lib/systemd/system/apt-daily-upgrade.service printf '%s\n' '[Install]' 'WantedBy=multi-user.target'", # FIXME: add support for this stuff (for the non-live final install this happens via ansible): # # unattended-upgrades # smartd # networkd (boot off ANY NIC, not EVERY NIC -- https://github.com/systemd/systemd/issues/9714) # refind (bootloader config) # misc safety nets # double-check that mmdebstrap's machine-id support works properly # Bare minimum to let me SSH in. # FIXME: make this configurable. # FIXME: trust a CA certificate instead -- see Zero Trust SSH, Jeremy Stott, LCA 2020 # WARNING: tinysshd does not support RSA, nor MaxStartups, nor sftp (unless you also install openssh-client, which is huge). # FIXME: double-check no host keys are baked into the image (openssh-server and dropbear do this). "--include=tinysshd rsync", "--essential-hook=install -dm700 $1/root/.ssh", '--essential-hook=echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIapAZ0E0353DaY6xBnasvu/DOvdWdKQ6RQURwq4l6Wu twb@cyber.com.au (Trent W. Buck)" >$1/root/.ssh/authorized_keys', # Bare minimum to let me log in locally. 
# DO NOT use this on production builds! "--essential-hook=chroot $1 passwd --delete root", # Configure language (not needed to boot). # Racism saves a **LOT** of space -- something like 2GB for Debian Live images. # FIXME: use live-config instead? "--include=locales localepurge", f"--essential-hook=echo locales locales/default_environment_locale select {args.locale} | chroot $1 debconf-set-selections", f"--essential-hook=echo locales locales/locales_to_be_generated multiselect {args.locale} UTF-8 | chroot $1 debconf-set-selections", # FIXME: https://bugs.debian.org/603700 "--customize-hook=chroot $1 sed -i /etc/locale.nopurge -e 's/^USE_DPKG/#ARGH#&/'", "--customize-hook=chroot $1 localepurge", "--customize-hook=chroot $1 sed -i /etc/locale.nopurge -e 's/^#ARGH#//'", # Removing documentation also saves a LOT of space. "--dpkgopt=path-exclude=/usr/share/doc/*", "--dpkgopt=path-exclude=/usr/share/info/*", "--dpkgopt=path-exclude=/usr/share/man/*", "--dpkgopt=path-exclude=/usr/share/omf/*", "--dpkgopt=path-exclude=/usr/share/help/*", "--dpkgopt=path-exclude=/usr/share/gnome/help/*", # Configure timezone (not needed to boot)` # FIXME: use live-config instead? "--include=tzdata", f"--essential-hook=echo tzdata tzdata/Areas select {args.timezone[0]} | chroot $1 debconf-set-selections", f"--essential-hook=echo tzdata tzdata/Zones/{args.timezone[0]} select {args.timezone[1]} | chroot $1 debconf-set-selections", # Do the **BARE MINIMUM** to make a USB key that can boot on X86_64 UEFI. # We use mtools so we do not ever need root privileges. # We can't use mkfs.vfat, as that needs kpartx or losetup (i.e. root). # We can't use mkfs.udf, as that needs mount (i.e. root). # We can't use "refind-install --usedefault" as that runs mount(8) (i.e. root). # We don't use genisoimage because # 1) ISO9660 must die; # 2) incomplete UDF 1.5+ support; # 3) resulting filesystem can't be tweaked after flashing (e.g. debian-live/site.dir/etc/systemd/network/up.network). # # We use refind because 1) I hate grub; and 2) I like refind. # If you want aarch64 or ia32 you need to install their BOOTxxx.EFI files. # If you want kernel+initrd on something other than FAT, you need refind/drivers_xxx/xxx_xxx.EFI. # # FIXME: with qemu in UEFI mode (OVMF), I get dumped into startup.nsh (UEFI REPL). # From there, I can manually type in "FS0:\EFI\BOOT\BOOTX64.EFI" to start refind, tho. # So WTF is its problem? Does it not support fallback bootloader? 
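# For orientation: the ESP-building hooks below work roughly as follows.
# truncate creates a sparse filesystem.img inside the chroot, parted writes a
# GPT with a single ESP partition starting at esp_offset, mformat creates a
# FAT filesystem at that offset via the mtools @@offset syntax, and mmd/mcopy
# populate it with refind, the kernel and the initrd -- all without needing
# mount(8) or root privileges.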
"--include=refind parted mtools", "--essential-hook=echo refind refind/install_to_esp boolean false | chroot $1 debconf-set-selections", "--customize-hook=echo refind refind/install_to_esp boolean true | chroot $1 debconf-set-selections", "--customize-hook=chroot $1 mkdir -p /boot/USB /boot/EFI/BOOT", "--customize-hook=chroot $1 cp /usr/share/refind/refind/refind_x64.efi /boot/EFI/BOOT/BOOTX64.EFI", "--customize-hook=chroot $1 cp /usr/share/refind/refind/refind.conf-sample /boot/EFI/BOOT/refind.conf", f"--customize-hook=chroot $1 truncate --size={filesystem_img_size} /boot/USB/filesystem.img", f"--customize-hook=chroot $1 parted --script --align=optimal /boot/USB/filesystem.img mklabel gpt mkpart {esp_label} {esp_offset}b 100% set 1 esp on", f"--customize-hook=chroot $1 mformat -i /boot/USB/filesystem.img@@{esp_offset} -F -v {esp_label}", f"--customize-hook=chroot $1 mmd -i /boot/USB/filesystem.img@@{esp_offset} ::{live_media_path}", f"""--customize-hook=echo '"Boot with default options" "boot=live live-media-path={live_media_path}"' >$1/boot/refind_linux.conf""", f"""--customize-hook=chroot $1 find /boot/ -xdev -mindepth 1 -maxdepth 1 -not -name filesystem.img -not -name USB -exec mcopy -vsbpm -i /boot/USB/filesystem.img@@{esp_offset} {{}} :: ';'""", # FIXME: copy-out doesn't handle sparseness, so is REALLY slow (about 50 seconds). # Therefore instead leave it in the squashfs, and extract it later. # f'--customize-hook=copy-out /boot/USB/filesystem.img /tmp/', # f'--customize-hook=chroot $1 rm /boot/USB/filesystem.img', "sid", td / "filesystem.squashfs", ] ) with args.output_file.open("wb") as f: subprocess.check_call( ["rdsquashfs", "--cat=boot/USB/filesystem.img", td / "filesystem.squashfs"], stdout=f, ) subprocess.check_call( [ "mcopy", "-i", f"{args.output_file}@@{esp_offset}", td / "filesystem.squashfs", f"::{live_media_path}/filesystem.squashfs", ] ) mmdebstrap/gpgvnoexpkeysig000077500000000000000000000035431420155655700164120ustar00rootroot00000000000000#!/bin/sh # # This script is in the public domain # # Author: Johannes Schauer Marin Rodrigues # # This is a wrapper around gpgv as invoked by apt. It turns EXPKEYSIG results # from gpgv into GOODSIG results. This is necessary for apt to access very old # timestamps from snapshot.debian.org for which the GPG key is already expired: # # Get:1 http://snapshot.debian.org/archive/debian/20150106T000000Z unstable InRelease [242 kB] # Err:1 http://snapshot.debian.org/archive/debian/20150106T000000Z unstable InRelease # The following signatures were invalid: EXPKEYSIG 8B48AD6246925553 Debian Archive Automatic Signing Key (7.0/wheezy) # Reading package lists... # W: GPG error: http://snapshot.debian.org/archive/debian/20150106T000000Z unstable InRelease: The following signatures were invalid: EXPKEYSIG 8B48AD6246925553 Debian Archive Automatic Signing Key (7.0/wheezy) # E: The repository 'http://snapshot.debian.org/archive/debian/20150106T000000Z unstable InRelease' is not signed. 
# # To use this script, call apt with # # -o Apt::Key::gpgvcommand=/usr/libexec/mmdebstrap/gpgvnoexpkeysig # # Scripts doing similar things can be found here: # # * debuerreotype as /usr/share/debuerreotype/scripts/.gpgv-ignore-expiration.sh # * derivative census: salsa.d.o/deriv-team/census/-/blob/master/bin/fakegpgv set -eu find_gpgv_status_fd() { while [ "$#" -gt 0 ]; do if [ "$1" = '--status-fd' ]; then echo "$2" return 0 fi shift done # default fd is stdout echo 1 } GPGSTATUSFD="$(find_gpgv_status_fd "$@")" case $GPGSTATUSFD in ''|*[!0-9]*) echo "invalid --status-fd argument" >&2 exit 1 ;; esac # we need eval because we cannot redirect a variable fd eval 'exec gpgv "$@" '"$GPGSTATUSFD"'>&1 | sed "s/^\[GNUPG:\] EXPKEYSIG /[GNUPG:] GOODSIG /" >&'"$GPGSTATUSFD" mmdebstrap/hooks/000077500000000000000000000000001420155655700143515ustar00rootroot00000000000000mmdebstrap/hooks/busybox/000077500000000000000000000000001420155655700160445ustar00rootroot00000000000000mmdebstrap/hooks/busybox/extract00.sh000077500000000000000000000003731420155655700202200ustar00rootroot00000000000000#!/bin/sh set -exu rootdir="$1" # Run busybox using an absolute path so that this script also works in case # /proc is not mounted. Busybox uses /proc/self/exe to figure out the path # to its executable. chroot "$rootdir" /bin/busybox --install -s mmdebstrap/hooks/busybox/setup00.sh000077500000000000000000000002731420155655700177050ustar00rootroot00000000000000#!/bin/sh set -exu rootdir="$1" mkdir -p "$rootdir/bin" echo root:x:0:0:root:/root:/bin/sh > "$rootdir/etc/passwd" cat << END > "$rootdir/etc/group" root:x:0: mail:x:8: utmp:x:43: END mmdebstrap/hooks/eatmydata/000077500000000000000000000000001420155655700163225ustar00rootroot00000000000000mmdebstrap/hooks/eatmydata/README.txt000066400000000000000000000005131420155655700200170ustar00rootroot00000000000000Adding this directory with --hook-directory will result in mmdebstrap using dpkg inside an eatmydata wrapper script. This will result in spead-ups on systems where sync() takes some time. Using --dpkgopt=force-unsafe-io will have a lesser effect compared to eatmydata. See: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=613428 mmdebstrap/hooks/eatmydata/customize.sh000077500000000000000000000012071420155655700207030ustar00rootroot00000000000000#!/bin/sh set -exu rootdir="$1" if [ -e "$rootdir/var/lib/dpkg/arch" ]; then chrootarch=$(head -1 "$rootdir/var/lib/dpkg/arch") else chrootarch=$(dpkg --print-architecture) fi libdir="/usr/lib/$(dpkg-architecture -a $chrootarch -q DEB_HOST_MULTIARCH)" # if eatmydata was actually installed properly, then we are not removing # anything here if ! chroot "$rootdir" dpkg-query --list eatmydata; then rm "$rootdir/usr/bin/eatmydata" fi if ! 
chroot "$rootdir" dpkg-query --list libeatmydata1; then rm "$rootdir$libdir"/libeatmydata.so* fi rm "$rootdir/usr/bin/dpkg" chroot "$rootdir" dpkg-divert --local --rename --remove /usr/bin/dpkg sync mmdebstrap/hooks/eatmydata/extract.sh000077500000000000000000000043321420155655700203350ustar00rootroot00000000000000#!/bin/sh set -exu rootdir="$1" if [ -e "$rootdir/var/lib/dpkg/arch" ]; then chrootarch=$(head -1 "$rootdir/var/lib/dpkg/arch") else chrootarch=$(dpkg --print-architecture) fi eval $(apt-config shell trusted Dir::Etc::trusted/f) eval $(apt-config shell trustedparts Dir::Etc::trustedparts/d) tmpfile=$(mktemp --tmpdir="$rootdir/tmp") cat << END > "$tmpfile" Apt::Architecture "$chrootarch"; Apt::Architectures "$chrootarch"; Dir "$rootdir"; Dir::Etc::Trusted "$trusted"; Dir::Etc::TrustedParts "$trustedparts"; END # we run "apt-get download --print-uris" in a temporary directory, to make sure # that the packages do not already exist in the current directory, or otherwise # nothing will be printed for them tmpdir=$(mktemp --directory --tmpdir="$rootdir/tmp") env --chdir="$tmpdir" APT_CONFIG="$tmpfile" apt-get download --print-uris eatmydata libeatmydata1 \ | sed -ne "s/^'\([^']\+\)'\s\+\([^\s]\+\)\s\+\([0-9]\+\)\s\+\(SHA256:[a-f0-9]\+\)$/\1 \2 \3 \4/p" \ | while read uri fname size hash; do echo "processing $fname" >&2 if [ -e "$tmpdir/$fname" ]; then echo "$tmpdir/$fname already exists" >&2 exit 1 fi [ -z "$hash" ] && hash="Checksum-FileSize:$size" env --chdir="$tmpdir" APT_CONFIG="$tmpfile" /usr/lib/apt/apt-helper download-file "$uri" "$fname" "$hash" case "$fname" in eatmydata_*_all.deb) mkdir -p "$rootdir/usr/bin" dpkg-deb --fsys-tarfile "$tmpdir/$fname" \ | tar --directory="$rootdir/usr/bin" --strip-components=3 --extract --verbose ./usr/bin/eatmydata ;; libeatmydata1_*_$chrootarch.deb) libdir="/usr/lib/$(dpkg-architecture -a $chrootarch -q DEB_HOST_MULTIARCH)" mkdir -p "$rootdir$libdir" dpkg-deb --fsys-tarfile "$tmpdir/$fname" \ | tar --directory="$rootdir$libdir" --strip-components=4 --extract --verbose --wildcards ".$libdir/libeatmydata.so*" ;; *) echo "unexpected filename: $fname" >&2 exit 1 ;; esac rm "$tmpdir/$fname" done rm "$tmpfile" rmdir "$tmpdir" mv "$rootdir/usr/bin/dpkg" "$rootdir/usr/bin/dpkg.distrib" cat << END > "$rootdir/usr/bin/dpkg" #!/bin/sh exec /usr/bin/eatmydata /usr/bin/dpkg.distrib "\$@" END chmod +x "$rootdir/usr/bin/dpkg" cat << END >> "$rootdir/var/lib/dpkg/diversions" /usr/bin/dpkg /usr/bin/dpkg.distrib : END mmdebstrap/hooks/merged-usr/000077500000000000000000000000001420155655700164235ustar00rootroot00000000000000mmdebstrap/hooks/merged-usr/setup00.sh000077500000000000000000000060211420155655700202610ustar00rootroot00000000000000#!/bin/sh # # mmdebstrap does have a --merged-usr option but only as a no-op for # debootstrap compatibility # # Using this hook script, you can emulate what debootstrap does to set up # merged /usr via directory symlinks, even using the exact same shell function # that debootstrap uses by running mmdebstrap with: # # --setup-hook=/usr/share/mmdebstrap/hooks/merged-usr/setup00.sh # # Alternatively, you can setup merged-/usr by installing the usrmerge package: # # --include=usrmerge # # mmdebstrap will not include this functionality via a --merged-usr option # because there are many reasons against implementing merged-/usr that way: # # https://wiki.debian.org/Teams/Dpkg/MergedUsr # https://wiki.debian.org/Teams/Dpkg/FAQ#Q:_Does_dpkg_support_merged-.2Fusr-via-aliased-dirs.3F # 
https://lists.debian.org/20190219044924.GB21901@gaara.hadrons.org # https://lists.debian.org/YAkLOMIocggdprSQ@thunder.hadrons.org # https://lists.debian.org/20181223030614.GA8788@gaara.hadrons.org # # In addition, the merged-/usr-via-aliased-dirs approach violates an important # principle of component based software engineering one of the core design # ideas/goals of mmdebstrap: All the information to create a chroot of a Debian # based distribution should be included in its packages and their metadata. # Using directory symlinks as used by debootstrap contradicts this principle. # The information whether a distribution uses this approach to merged-/usr or # not is not anymore contained in its packages but in a tool from the outside. # # Example real world problem: I'm using debbisect to bisect Debian unstable # between 2015 and today. For which snapshot.d.o timestamp should a merged-/usr # chroot be created and for which ones not? # # The problem is not the idea of merged-/usr but the problem is the way how it # got implemented in debootstrap via directory symlinks. That way of rolling # out merged-/usr is bad from the dpkg point-of-view and completely opposite of # the vision with which in mind I wrote mmdebstrap. set -exu TARGET="$1" if [ -e "$TARGET/var/lib/dpkg/arch" ]; then ARCH=$(head -1 "$TARGET/var/lib/dpkg/arch") else ARCH=$(dpkg --print-architecture) fi if [ -e /usr/share/debootstrap/functions ]; then . /usr/share/debootstrap/functions doing_variant () { [ $1 != "buildd" ]; } MERGED_USR="yes" # until https://salsa.debian.org/installer-team/debootstrap/-/merge_requests/48 gets merged link_dir="" setup_merged_usr else link_dir="" case $ARCH in hurd-*) exit 0;; amd64) link_dir="lib32 lib64 libx32" ;; i386) link_dir="lib64 libx32" ;; mips|mipsel) link_dir="lib32 lib64" ;; mips64*|mipsn32*) link_dir="lib32 lib64 libo32" ;; powerpc) link_dir="lib64" ;; ppc64) link_dir="lib32 lib64" ;; ppc64el) link_dir="lib64" ;; s390x) link_dir="lib32" ;; sparc) link_dir="lib64" ;; sparc64) link_dir="lib32 lib64" ;; x32) link_dir="lib32 lib64 libx32" ;; esac link_dir="bin sbin lib $link_dir" for dir in $link_dir; do ln -s usr/"$dir" "$TARGET/$dir" mkdir -p "$TARGET/usr/$dir" done fi mmdebstrap/ldconfig.fakechroot000077500000000000000000000075501420155655700170740ustar00rootroot00000000000000#!/usr/bin/env python3 # # This script is in the public domain # # Author: Johannes Schauer Marin Rodrigues # # This is command substitution for ldconfig under fakechroot: # # export FAKECHROOT_CMD_SUBST=/sbin/ldconfig=/path/to/ldconfig.fakechroot # # Statically linked binaries cannot work with fakechroot and thus have to be # replaced by either /bin/true or a more clever solution like this one. The # ldconfig command supports the -r option which allows passing a chroot # directory for ldconfig to work in. This can be used to run ldconfig without # fakechroot but still let it create /etc/ld.so.cache inside the chroot. # # Since absolute symlinks are broken without fakechroot to translate them, # we read /etc/ld.so.conf and turn all absolute symlink shared libraries into # relative ones. At program exit, the original state is restored. 
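# Illustration of the temporary rewrite performed below ("libfoo" is a
# hypothetical library name): a symlink
#   <chroot>/usr/lib/x86_64-linux-gnu/libfoo.so -> <chroot>/usr/lib/x86_64-linux-gnu/libfoo.so.1
# whose on-disk target is absolute and points into the chroot is turned into
# the relative link "libfoo.so -> libfoo.so.1" so that the real ldconfig,
# invoked with "-r <chroot>", can resolve it; the atexit handler restores the
# original target and timestamps afterwards.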
import os import sys import subprocess import atexit import glob from pathlib import Path symlinks = [] def restore_symlinks(): for (link, target, atime, mtime) in symlinks: link.unlink() link.symlink_to(target) os.utime(link, times=None, ns=(atime, mtime), follow_symlinks=False) atexit.register(restore_symlinks) def get_libdirs(chroot, configs): res = [] for conf in configs: for line in (Path(conf)).read_text().splitlines(): line = line.strip() if not line: continue if line.startswith("#"): continue if line.startswith("include "): assert line.startswith("include /") res.extend( get_libdirs(chroot, chroot.glob(line.removeprefix("include /"))) ) continue assert line.startswith("/"), line line = line.lstrip("/") if not (chroot / Path(line)).is_dir(): continue for f in (chroot / Path(line)).iterdir(): if not f.is_symlink(): continue linktarget = f.readlink() # make sure that the linktarget is an absolute path inside the # chroot if not str(linktarget).startswith("/"): continue if chroot not in linktarget.parents: continue # store original link so that we can restore it later symlinks.append( (f, linktarget, f.lstat().st_atime_ns, f.lstat().st_mtime_ns) ) # replace absolute symlink by relative link relative = os.path.relpath(linktarget, f.parent) f.unlink() f.symlink_to(relative) return res def main(): if "FAKECHROOT_BASE_ORIG" not in os.environ: print("FAKECHROOT_BASE_ORIG is not set", file=sys.stderr) print( "must be executed under fakechroot using FAKECHROOT_CMD_SUBST", file=sys.stderr, ) sys.exit(1) chroot = Path(os.environ["FAKECHROOT_BASE_ORIG"]) # if chrootless mode is used from within a fakechroot chroot, then # FAKECHROOT_BASE_ORIG will point at the outer chroot. We want to use # the path from DPKG_ROOT inside of that instead if os.environ.get("DPKG_ROOT", "") not in ["", "/"]: chroot /= os.environ["DPKG_ROOT"].lstrip("/") if not (chroot / "sbin" / "ldconfig").exists(): sys.exit(0) (chroot / "var" / "cache" / "ldconfig").mkdir( mode=0o700, parents=True, exist_ok=True ) for d in get_libdirs(chroot, [chroot / "etc" / "ld.so.conf"]): make_relative(d) # we add any additional arguments before "-r" such that any other "-r" # option will be overwritten by the one we set subprocess.check_call( [chroot / "sbin" / "ldconfig"] + sys.argv[1:] + ["-r", chroot] ) if __name__ == "__main__": main() mmdebstrap/make_mirror.sh000077500000000000000000000555761420155655700161160ustar00rootroot00000000000000#!/bin/sh set -eu # This script fills either cache.A or cache.B with new content and then # atomically switches the cache symlink from one to the other at the end. # This way, at no point will the cache be in an non-working state, even # when this script got canceled at any point. # Working with two directories also automatically prunes old packages in # the local repository. deletecache() { dir="$1" echo "running deletecache $dir">&2 if [ ! -e "$dir" ]; then return fi if [ ! 
-e "$dir/mmdebstrapcache" ]; then echo "$dir cannot be the mmdebstrap cache" >&2 return 1 fi # be very careful with removing the old directory for dist in oldstable stable testing unstable; do for variant in minbase buildd -; do if [ -e "$dir/debian-$dist-$variant.tar" ]; then rm "$dir/debian-$dist-$variant.tar" else echo "does not exist: $dir/debian-$dist-$variant.tar" >&2 fi done if [ -e "$dir/debian/dists/$dist" ]; then rm --one-file-system --recursive "$dir/debian/dists/$dist" else echo "does not exist: $dir/debian/dists/$dist" >&2 fi case "$dist" in oldstable|stable) if [ -e "$dir/debian/dists/$dist-updates" ]; then rm --one-file-system --recursive "$dir/debian/dists/$dist-updates" else echo "does not exist: $dir/debian/dists/$dist-updates" >&2 fi ;; esac case "$dist" in oldstable) if [ -e "$dir/debian-security/dists/$dist/updates" ]; then rm --one-file-system --recursive "$dir/debian-security/dists/$dist/updates" else echo "does not exist: $dir/debian-security/dists/$dist/updates" >&2 fi ;; stable) if [ -e "$dir/debian-security/dists/$dist-security" ]; then rm --one-file-system --recursive "$dir/debian-security/dists/$dist-security" else echo "does not exist: $dir/debian-security/dists/$dist-security" >&2 fi ;; esac done if [ -e $dir/debian-*.qcow ]; then rm --one-file-system "$dir"/debian-*.qcow else echo "does not exist: $dir/debian-*.qcow" >&2 fi if [ -e "$dir/debian/pool/main" ]; then rm --one-file-system --recursive "$dir/debian/pool/main" else echo "does not exist: $dir/debian/pool/main" >&2 fi if [ -e "$dir/debian-security/pool/updates/main" ]; then rm --one-file-system --recursive "$dir/debian-security/pool/updates/main" else echo "does not exist: $dir/debian-security/pool/updates/main" >&2 fi for i in $(seq 1 6); do if [ ! -e "$dir/debian$i" ]; then continue fi rm "$dir/debian$i" done rm "$dir/mmdebstrapcache" # remove all symlinks find "$dir" -type l -delete # now the rest should only be empty directories if [ -e "$dir" ]; then find "$dir" -depth -print0 | xargs -0 --no-run-if-empty rmdir else echo "does not exist: $dir" >&2 fi } cleanup_newcachedir() { echo "running cleanup_newcachedir" deletecache "$newcachedir" } get_oldaptnames() { if [ ! -e "$1/$2" ]; then return fi xz -dc "$1/$2" \ | grep-dctrl --no-field-names --show-field=Package,Version,Architecture,Filename '' \ | paste -sd " \n" \ | while read name ver arch fname; do if [ ! -e "$1/$fname" ]; then continue fi # apt stores deb files with the colon encoded as %3a while # mirrors do not contain the epoch at all #645895 case "$ver" in *:*) ver="${ver%%:*}%3a${ver#*:}";; esac aptname="$rootdir/var/cache/apt/archives/${name}_${ver}_${arch}.deb" # we have to cp and not mv because other # distributions might still need this file # we have to cp and not symlink because apt # doesn't recognize symlinks cp --link "$1/$fname" "$aptname" echo "$aptname" done } get_newaptnames() { if [ ! 
-e "$1/$2" ]; then return fi # skip empty files by trying to uncompress the first byte of the payload if [ "$(xz -dc "$1/$2" | head -c1 | wc -c)" -eq 0 ]; then return fi xz -dc "$1/$2" \ | grep-dctrl --no-field-names --show-field=Package,Version,Architecture,Filename,SHA256 '' \ | paste -sd " \n" \ | while read name ver arch fname hash; do # sanity check for the hash because sometimes the # archive switches the hash algorithm if [ "${#hash}" -ne 64 ]; then echo "expected hash length of 64 but got ${#hash} for: $hash" >&2 exit 1 fi dir="${fname%/*}" # apt stores deb files with the colon encoded as %3a while # mirrors do not contain the epoch at all #645895 case "$ver" in *:*) ver="${ver%%:*}%3a${ver#*:}";; esac aptname="$rootdir/var/cache/apt/archives/${name}_${ver}_${arch}.deb" if [ -e "$aptname" ]; then # make sure that we found the right file by checking its hash echo "$hash $aptname" | sha256sum --check >&2 mkdir -p "$1/$dir" # since we move hardlinks around, the same hardlink might've been # moved already into the same place by another distribution. # mv(1) refuses to copy A to B if both are hardlinks of each other. if [ "$aptname" -ef "$1/$fname" ]; then # both files are already the same so we just need to # delete the source rm "$aptname" else mv "$aptname" "$1/$fname" fi echo "$aptname" fi done } cleanupapt() { echo "running cleanupapt" >&2 if [ ! -e "$rootdir" ]; then return fi for f in \ "$rootdir/var/cache/apt/archives/"*.deb \ "$rootdir/var/cache/apt/archives/partial/"*.deb \ "$rootdir/var/cache/apt/"*.bin \ "$rootdir/var/lib/apt/lists/"* \ "$rootdir/var/lib/dpkg/status" \ "$rootdir/var/lib/dpkg/lock-frontend" \ "$rootdir/var/lib/dpkg/lock" \ "$rootdir/etc/apt/apt.conf" \ "$rootdir/etc/apt/sources.list" \ "$rootdir/oldaptnames" \ "$rootdir/newaptnames" \ "$rootdir/var/cache/apt/archives/lock"; do if [ ! 
-e "$f" ]; then echo "does not exist: $f" >&2 continue fi if [ -d "$f" ]; then rmdir "$f" else rm "$f" fi done find "$rootdir" -depth -print0 | xargs -0 --no-run-if-empty rmdir } # note: this function uses brackets instead of curly braces, so that it's run # in its own process and we can handle traps independent from the outside update_cache() ( dist="$1" nativearch="$2" # use a subdirectory of $newcachedir so that we can use # hardlinks rootdir="$newcachedir/apt" mkdir -p "$rootdir" # we only set this trap here and overwrite the previous trap, because # the update_cache function is run as part of a pipe and thus in its # own process which will EXIT after it finished trap "cleanupapt" EXIT INT TERM for p in /etc/apt/apt.conf.d /etc/apt/sources.list.d /etc/apt/preferences.d /var/cache/apt/archives /var/lib/apt/lists/partial /var/lib/dpkg; do mkdir -p "$rootdir/$p" done # read sources.list content from stdin cat > "$rootdir/etc/apt/sources.list" cat << END > "$rootdir/etc/apt/apt.conf" Apt::Architecture "$nativearch"; Apt::Architectures "$nativearch"; Dir::Etc "$rootdir/etc/apt"; Dir::State "$rootdir/var/lib/apt"; Dir::Cache "$rootdir/var/cache/apt"; Apt::Install-Recommends false; Apt::Get::Download-Only true; Acquire::Languages "none"; Dir::Etc::Trusted "/etc/apt/trusted.gpg"; Dir::Etc::TrustedParts "/etc/apt/trusted.gpg.d"; Acquire::http::Dl-Limit "1000"; Acquire::https::Dl-Limit "1000"; Acquire::Retries "5"; END > "$rootdir/var/lib/dpkg/status" APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get update # before downloading packages and before replacing the old Packages # file, copy all old *.deb packages from the mirror to # /var/cache/apt/archives so that apt will not re-download *.deb # packages that we already have { get_oldaptnames "$oldmirrordir" "dists/$dist/main/binary-$nativearch/Packages.xz" case "$dist" in oldstable|stable) get_oldaptnames "$oldmirrordir" "dists/$dist-updates/main/binary-$nativearch/Packages.xz" ;; esac case "$dist" in oldstable) get_oldaptnames "$oldcachedir/debian-security" "dists/$dist/updates/main/binary-$nativearch/Packages.xz" ;; stable) get_oldaptnames "$oldcachedir/debian-security" "dists/$dist-security/main/binary-$nativearch/Packages.xz" ;; esac } | sort -u > "$rootdir/oldaptnames" pkgs=$(APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get indextargets \ --format '$(FILENAME)' 'Created-By: Packages' "Architecture: $nativearch" \ | xargs --delimiter='\n' /usr/lib/apt/apt-helper cat-file \ | grep-dctrl --no-field-names --show-field=Package --exact-match \ \( --field=Essential yes --or --field=Priority required \ --or --field=Priority important --or --field=Priority standard \ \)) pkgs="$(echo $pkgs) build-essential busybox gpg eatmydata" APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get --yes install $pkgs # to be able to also test gpg verification, we need to create a mirror mkdir -p "$newmirrordir/dists/$dist/main/binary-$nativearch/" curl --location "$mirror/dists/$dist/Release" > "$newmirrordir/dists/$dist/Release" curl --location "$mirror/dists/$dist/Release.gpg" > "$newmirrordir/dists/$dist/Release.gpg" curl --location "$mirror/dists/$dist/main/binary-$nativearch/Packages.xz" > "$newmirrordir/dists/$dist/main/binary-$nativearch/Packages.xz" codename=$(awk '/^Codename: / { print $2; }' < "$newmirrordir/dists/$dist/Release") [ -L "$newmirrordir/dists/$codename" ] || ln -s "$dist" "$newmirrordir/dists/$codename" case "$dist" in oldstable|stable) mkdir -p "$newmirrordir/dists/$dist-updates/main/binary-$nativearch/" curl --location 
"$mirror/dists/$dist-updates/Release" > "$newmirrordir/dists/$dist-updates/Release" curl --location "$mirror/dists/$dist-updates/Release.gpg" > "$newmirrordir/dists/$dist-updates/Release.gpg" curl --location "$mirror/dists/$dist-updates/main/binary-$nativearch/Packages.xz" > "$newmirrordir/dists/$dist-updates/main/binary-$nativearch/Packages.xz" [ -L "$newmirrordir/dists/$codename-updates" ] || ln -s "$dist-updates" "$newmirrordir/dists/$codename-updates" ;; esac case "$dist" in oldstable) mkdir -p "$newcachedir/debian-security/dists/$dist/updates/main/binary-$nativearch/" curl --location "$security_mirror/dists/$dist/updates/Release" > "$newcachedir/debian-security/dists/$dist/updates/Release" curl --location "$security_mirror/dists/$dist/updates/Release.gpg" > "$newcachedir/debian-security/dists/$dist/updates/Release.gpg" curl --location "$security_mirror/dists/$dist/updates/main/binary-$nativearch/Packages.xz" > "$newcachedir/debian-security/dists/$dist/updates/main/binary-$nativearch/Packages.xz" ;; stable) mkdir -p "$newcachedir/debian-security/dists/$dist-security/main/binary-$nativearch/" curl --location "$security_mirror/dists/$dist-security/Release" > "$newcachedir/debian-security/dists/$dist-security/Release" curl --location "$security_mirror/dists/$dist-security/Release.gpg" > "$newcachedir/debian-security/dists/$dist-security/Release.gpg" curl --location "$security_mirror/dists/$dist-security/main/binary-$nativearch/Packages.xz" > "$newcachedir/debian-security/dists/$dist-security/main/binary-$nativearch/Packages.xz" [ -L "$newcachedir/debian-security/dists/$codename-security" ] || ln -s "$dist-security" "$newcachedir/debian-security/dists/$codename-security" ;; esac # the deb files downloaded by apt must be moved to their right locations in the # pool directory # # Instead of parsing the Packages file, we could also attempt to move the deb # files ourselves to the appropriate pool directories. But that approach # requires re-creating the heuristic by which the directory is chosen, requires # stripping the epoch from the filename and will break once mirrors change. # This way, it doesn't matter where the mirror ends up storing the package. { get_newaptnames "$newmirrordir" "dists/$dist/main/binary-$nativearch/Packages.xz"; case "$dist" in oldstable|stable) get_newaptnames "$newmirrordir" "dists/$dist-updates/main/binary-$nativearch/Packages.xz" ;; esac case "$dist" in oldstable) get_newaptnames "$newcachedir/debian-security" "dists/$dist/updates/main/binary-$nativearch/Packages.xz" ;; stable) get_newaptnames "$newcachedir/debian-security" "dists/$dist-security/main/binary-$nativearch/Packages.xz" ;; esac } | sort -u > "$rootdir/newaptnames" rm "$rootdir/var/cache/apt/archives/lock" rmdir "$rootdir/var/cache/apt/archives/partial" # remove all packages that were in the old Packages file but not in the # new one anymore comm -23 "$rootdir/oldaptnames" "$rootdir/newaptnames" | xargs --delimiter="\n" --no-run-if-empty rm # now the apt cache should be empty if [ ! 
-z "$(ls -1qA "$rootdir/var/cache/apt/archives/")" ]; then echo "$rootdir/var/cache/apt/archives not empty:" ls -la "$rootdir/var/cache/apt/archives/" exit 1 fi APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get --option Dir::Etc::SourceList=/dev/null update APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get clean cleanupapt # this function is run in its own process, so we unset all traps before # returning trap "-" EXIT INT TERM ) if [ -e "./shared/cache.A" ] && [ -e "./shared/cache.B" ]; then echo "both ./shared/cache.A and ./shared/cache.B exist" >&2 echo "was a former run of the script aborted?" >&2 if [ -e ./shared/cache ]; then echo "cache symlink points to $(readlink ./shared/cache)" >&2 case "$(readlink ./shared/cache)" in cache.A) echo "maybe rm -r ./shared/cache.B" >&2 ;; cache.B) echo "maybe rm -r ./shared/cache.A" >&2 ;; *) echo "unexpected" >&2 esac fi exit 1 fi if [ -e "./shared/cache.A" ]; then oldcache=cache.A newcache=cache.B else oldcache=cache.B newcache=cache.A fi oldcachedir="./shared/$oldcache" newcachedir="./shared/$newcache" oldmirrordir="$oldcachedir/debian" newmirrordir="$newcachedir/debian" mirror="http://deb.debian.org/debian" security_mirror="http://security.debian.org/debian-security" components=main : "${DEFAULT_DIST:=unstable}" : "${HAVE_QEMU:=yes}" : "${RUN_MA_SAME_TESTS:=yes}" : "${HAVE_PROOT:=yes}" # by default, use the mmdebstrap executable in the current directory : "${CMD:=./mmdebstrap}" if [ -e "$oldmirrordir/dists/$DEFAULT_DIST/Release" ]; then http_code=$(curl --output /dev/null --silent --location --head --time-cond "$oldmirrordir/dists/$DEFAULT_DIST/Release" --write-out '%{http_code}' "$mirror/dists/$DEFAULT_DIST/Release") case "$http_code" in 200) ;; # need update 304) echo up-to-date; exit 0;; *) echo "unexpected status: $http_code"; exit 1;; esac fi trap "cleanup_newcachedir" EXIT INT TERM mkdir -p "$newcachedir" touch "$newcachedir/mmdebstrapcache" HOSTARCH=$(dpkg --print-architecture) if [ "$HOSTARCH" = amd64 ]; then arches="amd64 arm64 i386" else arches="$HOSTARCH" fi for nativearch in $arches; do for dist in oldstable stable testing unstable; do # non-host architectures are only downloaded for $DEFAULT_DIST if [ $nativearch != $HOSTARCH ] && [ $DEFAULT_DIST != $dist ]; then continue fi # we need a first pass without updates and security patches # because otherwise, old package versions needed by # debootstrap will not get included echo "deb [arch=$nativearch] $mirror $dist $components" | update_cache "$dist" "$nativearch" # we need to include the base mirror again or otherwise # packages like build-essential will be missing case "$dist" in oldstable) cat << END | update_cache "$dist" "$nativearch" deb [arch=$nativearch] $mirror $dist $components deb [arch=$nativearch] $mirror $dist-updates main deb [arch=$nativearch] $security_mirror $dist/updates main END ;; stable) cat << END | update_cache "$dist" "$nativearch" deb [arch=$nativearch] $mirror $dist $components deb [arch=$nativearch] $mirror $dist-updates main deb [arch=$nativearch] $security_mirror $dist-security main END ;; esac done done # Create some symlinks so that we can trick apt into accepting multiple apt # lines that point to the same repository but look different. This is to # avoid the warning: # W: Target Packages (main/binary-all/Packages) is configured multiple times... for i in $(seq 1 6); do ln -s debian "$newcachedir/debian$i" done tmpdir="" cleanuptmpdir() { if [ -z "$tmpdir" ]; then return fi if [ ! 
-e "$tmpdir" ]; then return fi for f in "$tmpdir/extlinux.conf" \ "$tmpdir/worker.sh" \ "$tmpdir/mini-httpd" "$tmpdir/hosts" \ "$tmpdir/debian-chroot.tar" \ "$tmpdir/mmdebstrap.service" \ "$tmpdir/debian-$DEFAULT_DIST.img"; do if [ ! -e "$f" ]; then echo "does not exist: $f" >&2 continue fi rm "$f" done rmdir "$tmpdir" } export SOURCE_DATE_EPOCH=$(date --date="$(grep-dctrl -s Date -n '' "$newmirrordir/dists/$DEFAULT_DIST/Release")" +%s) if [ "$HAVE_QEMU" = "yes" ]; then case "$HOSTARCH" in amd64|i386) # okay ;; *) echo "qemu support is only available on amd64 and i386" >&2 echo "because syslinux is only available on those arches" >&2 exit 1 ;; esac # We must not use any --dpkgopt here because any dpkg options still # leak into the chroot with chrootless mode. # We do not use our own package cache here because # - it doesn't (and shouldn't) contain the extra packages # - it doesn't matter if the base system is from a different mirror timestamp # procps is needed for /sbin/sysctl tmpdir="$(mktemp -d)" trap "cleanuptmpdir; cleanup_newcachedir" EXIT INT TERM pkgs=perl-doc,systemd-sysv,perl,arch-test,fakechroot,fakeroot,mount,uidmap,qemu-user-static,binfmt-support,qemu-user,dpkg-dev,mini-httpd,libdevel-cover-perl,libtemplate-perl,debootstrap,procps,apt-cudf,aspcud,python3,libcap2-bin,gpg,debootstrap,distro-info-data,iproute2,ubuntu-keyring,apt-utils if [ "$DEFAULT_DIST" != "oldstable" ]; then pkgs="$pkgs,squashfs-tools-ng,genext2fs" fi if [ "$HAVE_PROOT" = "yes" ]; then pkgs="$pkgs,proot" fi if [ ! -e ./mmdebstrap ]; then pkgs="$pkgs,mmdebstrap" fi case "$HOSTARCH" in amd64|arm64) pkgs="$pkgs,linux-image-$HOSTARCH" ;; i386) pkgs="$pkgs,linux-image-686" ;; ppc64el) pkgs="$pkgs,linux-image-powerpc64le" ;; *) echo "no kernel image for $HOSTARCH" >&2 exit 1 ;; esac if [ "$HOSTARCH" = amd64 ] && [ "$RUN_MA_SAME_TESTS" = "yes" ]; then arches=amd64,arm64 pkgs="$pkgs,libfakechroot:arm64,libfakeroot:arm64" else arches=$HOSTARCH fi $CMD --variant=apt --architectures=$arches --include="$pkgs" \ --aptopt='Acquire::http::Dl-Limit "1000"' \ --aptopt='Acquire::https::Dl-Limit "1000"' \ --aptopt='Acquire::Retries "5"' \ $DEFAULT_DIST - "$mirror" > "$tmpdir/debian-chroot.tar" cat << END > "$tmpdir/extlinux.conf" default linux timeout 0 label linux kernel /vmlinuz append initrd=/initrd.img root=/dev/vda1 rw console=ttyS0,115200 serial 0 115200 END cat << END > "$tmpdir/mmdebstrap.service" [Unit] Description=mmdebstrap worker script [Service] Type=oneshot ExecStart=/worker.sh [Install] WantedBy=multi-user.target END # here is something crazy: # as we run mmdebstrap, the process ends up being run by different users with # different privileges (real or fake). But for being able to collect # Devel::Cover data, they must all share a single directory. The only way that # I found to make this work is to mount the database directory with a # filesystem that doesn't support ownership information at all and a umask that # gives read/write access to everybody. 
# https://github.com/pjcj/Devel--Cover/issues/223 cat << 'END' > "$tmpdir/worker.sh" #!/bin/sh echo 'root:root' | chpasswd mount -t 9p -o trans=virtio,access=any,msize=128k mmdebstrap /mnt # need to restart mini-httpd because we mounted different content into www-root systemctl restart mini-httpd handler () { while IFS= read -r line || [ -n "$line" ]; do printf "%s %s: %s\n" "$(date -u -d "0 $(date +%s.%3N) seconds - $2 seconds" +"%T.%3N")" "$1" "$line" done } ( cd /mnt; if [ -e cover_db.img ]; then mkdir -p cover_db mount -o loop,umask=000 cover_db.img cover_db fi now=$(date +%s.%3N) ret=0 { { { { { sh -x ./test.sh 2>&1 1>&4 3>&- 4>&-; echo $? >&2; } | handler E "$now" >&3; } 4>&1 | handler O "$now" >&3; } 2>&1; } | { read xs; exit $xs; }; } 3>&1 || ret=$? if [ -e cover_db.img ]; then df -h cover_db umount cover_db fi echo $ret ) > /mnt/result.txt 2>&1 umount /mnt systemctl poweroff END chmod +x "$tmpdir/worker.sh" # initially we serve from the new cache so that debootstrap can grab # the new package repository and not the old cat << END > "$tmpdir/mini-httpd" START=1 DAEMON_OPTS="-h 127.0.0.1 -p 80 -u nobody -dd /mnt/$newcache -i /var/run/mini-httpd.pid -T UTF-8" END cat << 'END' > "$tmpdir/hosts" 127.0.0.1 localhost END #libguestfs-test-tool #export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1 # # In case the rootfs was prepared in fakechroot mode, ldconfig has to # run to populate /etc/ld.so.cache or otherwise fakechroot tests will # fail to run. # # The disk size is sufficient in most cases. Sometimes, gcc will do # an upload with unstripped executables to make tracking down ICEs much # easier (see #872672, #894014). During times with unstripped gcc, the # buildd variant will not be 400MB but 1.3GB large and needs a 10G # disk. if [ -z ${DISK_SIZE+x} ]; then DISK_SIZE=3G fi guestfish -N "$tmpdir/debian-$DEFAULT_DIST.img"=disk:$DISK_SIZE -- \ part-disk /dev/sda mbr : \ mkfs ext2 /dev/sda1 : \ mount /dev/sda1 / : \ tar-in "$tmpdir/debian-chroot.tar" / : \ command /sbin/ldconfig : \ copy-in "$tmpdir/extlinux.conf" / : \ mkdir-p /etc/systemd/system/multi-user.target.wants : \ ln-s ../mmdebstrap.service /etc/systemd/system/multi-user.target.wants/mmdebstrap.service : \ copy-in "$tmpdir/mmdebstrap.service" /etc/systemd/system/ : \ copy-in "$tmpdir/worker.sh" / : \ copy-in "$tmpdir/mini-httpd" /etc/default : \ copy-in "$tmpdir/hosts" /etc/ : \ touch /mmdebstrap-testenv : \ upload /usr/lib/SYSLINUX/mbr.bin /mbr.bin : \ copy-file-to-device /mbr.bin /dev/sda size:440 : \ rm /mbr.bin : \ extlinux / : \ sync : \ umount / : \ part-set-bootable /dev/sda 1 true : \ shutdown qemu-img convert -O qcow2 "$tmpdir/debian-$DEFAULT_DIST.img" "$newcachedir/debian-$DEFAULT_DIST.qcow" cleanuptmpdir trap "cleanup_newcachedir" EXIT INT TERM fi mirror="http://127.0.0.1/debian" for dist in oldstable stable testing unstable; do for variant in minbase buildd -; do echo "running debootstrap --no-merged-usr --variant=$variant $dist \${TEMPDIR} $mirror" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH tmpdir="\$(mktemp -d)" chmod 755 "\$tmpdir" debootstrap --no-merged-usr --variant=$variant $dist "\$tmpdir" $mirror tar --sort=name --mtime=@$SOURCE_DATE_EPOCH --clamp-mtime --numeric-owner --one-file-system --xattrs -C "\$tmpdir" -c . 
> "$newcache/debian-$dist-$variant.tar" rm -r "\$tmpdir" END if [ "$HAVE_QEMU" = "yes" ]; then cachedir=$newcachedir ./run_qemu.sh else ./run_null.sh SUDO fi done done if [ "$HAVE_QEMU" = "yes" ]; then # now replace the minihttpd config with one that serves the new repository guestfish -a "$newcachedir/debian-$DEFAULT_DIST.qcow" -i < # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # The software is provided "as is", without warranty of any kind, express or # implied, including but not limited to the warranties of merchantability, # fitness for a particular purpose and noninfringement. In no event shall the # authors or copyright holders be liable for any claim, damages or other # liability, whether in an action of contract, tort or otherwise, arising # from, out of or in connection with the software or the use or other dealings # in the software. use strict; use warnings; our $VERSION = '0.8.4'; use English; use Getopt::Long; use Pod::Usage; use File::Copy; use File::Path qw(make_path); use File::Temp qw(tempfile tempdir); use File::Basename; use File::Find; use Cwd qw(abs_path getcwd); require "syscall.ph"; ## no critic (Modules::RequireBarewordIncludes) use Fcntl qw(S_IFCHR S_IFBLK FD_CLOEXEC F_GETFD F_SETFD); use List::Util qw(any none); use POSIX qw(SIGINT SIGHUP SIGPIPE SIGTERM SIG_BLOCK SIG_UNBLOCK strftime); use Carp; use Term::ANSIColor; use Socket; use Time::HiRes; use Math::BigInt; use version; ## no critic (InputOutput::RequireBriefOpen) # from sched.h # use typeglob constants because "use constant" has several drawback as # explained in the documentation for the Readonly CPAN module *CLONE_NEWNS = \0x20000; # mount namespace *CLONE_NEWUTS = \0x4000000; # utsname *CLONE_NEWIPC = \0x8000000; # ipc *CLONE_NEWUSER = \0x10000000; # user *CLONE_NEWPID = \0x20000000; # pid *CLONE_NEWNET = \0x40000000; # net *_LINUX_CAPABILITY_VERSION_3 = \0x20080522; *CAP_SYS_ADMIN = \21; *PR_CAPBSET_READ = \23; our ( $CLONE_NEWNS, $CLONE_NEWUTS, $CLONE_NEWIPC, $CLONE_NEWUSER, $CLONE_NEWPID, $CLONE_NEWNET, $_LINUX_CAPABILITY_VERSION_3, $CAP_SYS_ADMIN, $PR_CAPBSET_READ ); #<<< # type codes: # 0 -> normal file # 1 -> hardlink # 2 -> symlink # 3 -> character special # 4 -> block special # 5 -> directory my @devfiles = ( # filename mode type link target major minor ["", oct(755), 5, '', undef, undef], ["console", oct(666), 3, '', 5, 1], ["fd", oct(777), 2, '/proc/self/fd', undef, undef], ["full", oct(666), 3, '', 1, 7], ["null", oct(666), 3, '', 1, 3], ["ptmx", oct(666), 3, '', 5, 2], ["pts/", oct(755), 5, '', undef, undef], ["random", oct(666), 3, '', 1, 8], ["shm/", oct(755), 5, '', undef, undef], ["stderr", oct(777), 2, '/proc/self/fd/2', undef, undef], ["stdin", oct(777), 2, '/proc/self/fd/0', undef, undef], ["stdout", oct(777), 2, '/proc/self/fd/1', undef, undef], ["tty", oct(666), 3, '', 5, 0], ["urandom", oct(666), 3, '', 1, 9], ["zero", oct(666), 3, '', 1, 5], ); #>>> # verbosity levels: # 0 -> print nothing # 1 -> normal output and progress bars # 2 -> verbose output # 3 
-> debug output my $verbosity_level = 1; my $is_covering = 0; { # make $@ local, so we don't print "Undefined subroutine called" # in other parts where we evaluate $@ local $@ = ''; $is_covering = !!(eval { Devel::Cover::get_coverage() }); } # the reason why Perl::Critic warns about this is, that it suspects that the # programmer wants to implement a test whether the terminal is interactive or # not, in which case, complex interactions with the magic *ARGV indeed make it # advisable to use IO::Interactive. In our case, we do not want to create an # interactivity check but just want to check whether STDERR is opened to a tty, # so our use of -t is fine and not "fragile and complicated" as is written in # the description of InputOutput::ProhibitInteractiveTest. Also see # https://github.com/Perl-Critic/Perl-Critic/issues/918 sub stderr_is_tty() { ## no critic (InputOutput::ProhibitInteractiveTest) if (-t STDERR) { return 1; } else { return 0; } } sub debug { if ($verbosity_level < 3) { return; } my $msg = shift; my ($package, $filename, $line) = caller; $msg = "D: $PID $line $msg"; if (stderr_is_tty()) { $msg = colored($msg, 'clear'); } print STDERR "$msg\n"; return; } sub info { if ($verbosity_level == 0) { return; } my $msg = shift; if ($verbosity_level >= 3) { my ($package, $filename, $line) = caller; $msg = "$PID $line $msg"; } $msg = "I: $msg"; if (stderr_is_tty()) { $msg = colored($msg, 'green'); } print STDERR "$msg\n"; return; } sub warning { if ($verbosity_level == 0) { return; } my $msg = shift; $msg = "W: $msg"; if (stderr_is_tty()) { $msg = colored($msg, 'bold yellow'); } print STDERR "$msg\n"; return; } sub error { # if error() is called with the string from a previous error() that was # caught inside an eval(), then the string will have a newline which we # are stripping here chomp(my $msg = shift); $msg = "E: $msg"; if (stderr_is_tty()) { $msg = colored($msg, 'bold red'); } if ($verbosity_level == 3) { croak $msg; # produces a backtrace } else { die "$msg\n"; } } # The encoding of dev_t is MMMM Mmmm mmmM MMmm, where M is a hex digit of # the major number and m is a hex digit of the minor number. sub major { my $rdev = shift; my $right = Math::BigInt->from_hex("0x00000000000fff00")->band($rdev)->brsft(8); my $left = Math::BigInt->from_hex("0xfffff00000000000")->band($rdev)->brsft(32); return $right->bior($left); } sub minor { my $rdev = shift; my $right = Math::BigInt->from_hex("0x00000000000000ff")->band($rdev); my $left = Math::BigInt->from_hex("0x00000ffffff00000")->band($rdev)->brsft(12); return $right->bior($left); } # check whether a directory is mounted by comparing the device number of the # directory itself with its parent sub is_mountpoint { my $dir = shift; if (!-e $dir) { return 0; } my @a = stat "$dir/."; my @b = stat "$dir/.."; # if the device number is different, then the directory must be mounted if ($a[0] != $b[0]) { return 1; } # if the inode number is the same, then the directory must be mounted if ($a[1] == $b[1]) { return 1; } return 0; } # tar cannot figure out the decompression program when receiving data on # standard input, thus we do it ourselves. 
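# As an illustrative usage sketch (not normative) of get_tar_compressor()
# defined below: a plain ".tar" or "-" (stdout) yields no compressor, while
# for example
#
#     get_tar_compressor("debian-chroot.tar.xz");   # returns ['xz']
#     get_tar_compressor("debian-chroot.tgz");      # returns ['gzip']
#
# so the matching program can be run explicitly.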
This is copied from tar's # src/suffix.c sub get_tar_compressor { my $filename = shift; if ($filename eq '-') { return; } elsif ($filename =~ /\.tar$/) { return; } elsif ($filename =~ /\.(gz|tgz|taz)$/) { return ['gzip']; } elsif ($filename =~ /\.(Z|taZ)$/) { return ['compress']; } elsif ($filename =~ /\.(bz2|tbz|tbz2|tz2)$/) { return ['bzip2']; } elsif ($filename =~ /\.lz$/) { return ['lzip']; } elsif ($filename =~ /\.(lzma|tlz)$/) { return ['lzma']; } elsif ($filename =~ /\.lzo$/) { return ['lzop']; } elsif ($filename =~ /\.lz4$/) { return ['lz4']; } elsif ($filename =~ /\.(xz|txz)$/) { return ['xz']; } elsif ($filename =~ /\.zst$/) { return ['zstd']; } return; } # avoid dependency on String::ShellQuote by implementing the mechanism # from python's shlex.quote function sub shellescape { my $string = shift; if (length $string == 0) { return "''"; } # search for occurrences of characters that are not safe # the 'a' regex modifier makes sure that \w only matches ASCII if ($string !~ m/[^\w@\%+=:,.\/-]/a) { return $string; } # wrap the string in single quotes and handle existing single quotes by # putting them outside of the single-quoted string $string =~ s/'/'"'"'/g; return "'$string'"; } sub test_unshare_userns { my $verbose = shift; if ($EFFECTIVE_USER_ID == 0) { my $msg = "cannot unshare user namespace when executing as root"; if ($verbose) { warning $msg; } else { debug $msg; } return 0; } # arguments to syscalls have to be stored in their own variable or # otherwise we will get "Modification of a read-only value attempted" my $unshare_flags = $CLONE_NEWUSER; # we spawn a new per process because if unshare succeeds, we would # otherwise have unshared the mmdebstrap process itself which we don't want my $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { my $ret = syscall(&SYS_unshare, $unshare_flags); if ($ret == 0) { exit 0; } else { my $msg = "unshare syscall failed: $!"; if ($verbose) { warning $msg; } else { debug $msg; } exit 1; } } waitpid($pid, 0); if (($? >> 8) != 0) { return 0; } # if newuidmap and newgidmap exist, the exit status will be 1 when # executed without parameters system "newuidmap 2>/dev/null"; if (($? >> 8) != 1) { if (($? >> 8) == 127) { my $msg = "cannot find newuidmap"; if ($verbose) { warning $msg; } else { debug $msg; } } else { my $msg = "newuidmap returned unknown exit status: $?"; if ($verbose) { warning $msg; } else { debug $msg; } } return 0; } system "newgidmap 2>/dev/null"; if (($? >> 8) != 1) { if (($? 
>> 8) == 127) {
            my $msg = "cannot find newgidmap";
            if ($verbose) {
                warning $msg;
            } else {
                debug $msg;
            }
        } else {
            my $msg = "newgidmap returned unknown exit status: $?";
            if ($verbose) {
                warning $msg;
            } else {
                debug $msg;
            }
        }
        return 0;
    }
    return 1;
}

sub read_subuid_subgid() {
    my $username = getpwuid $REAL_USER_ID;
    my ($subid, $num_subid, $fh, $n);
    my @result = ();
    if (!-e "/etc/subuid") {
        warning "/etc/subuid doesn't exist";
        return;
    }
    if (!-r "/etc/subuid") {
        warning "/etc/subuid is not readable";
        return;
    }
    open $fh, "<", "/etc/subuid"
      or error "cannot open /etc/subuid for reading: $!";
    while (my $line = <$fh>) {
        ($n, $subid, $num_subid) = split(/:/, $line, 3);
        last if ($n eq $username);
    }
    close $fh;
    if (!length $subid) {
        warning "/etc/subuid is empty";
        return;
    }
    if ($n ne $username) {
        warning "no entry in /etc/subuid for $username";
        return;
    }
    push @result, ["u", 0, $subid, $num_subid];
    if (scalar(@result) < 1) {
        warning "/etc/subuid does not contain an entry for $username";
        return;
    }
    if (scalar(@result) > 1) {
        warning "/etc/subuid contains multiple entries for $username";
        return;
    }
    my $groupname = getgrgid $REAL_GROUP_ID;
    if (!-e "/etc/subgid") {
        warning "/etc/subgid doesn't exist";
        return;
    }
    if (!-r "/etc/subgid") {
        warning "/etc/subgid is not readable";
        return;
    }
    open $fh, "<", "/etc/subgid"
      or error "cannot open /etc/subgid for reading: $!";
    while (my $line = <$fh>) {
        ($n, $subid, $num_subid) = split(/:/, $line, 3);
        last if ($n eq $groupname);
    }
    close $fh;
    if (!length $subid) {
        warning "/etc/subgid is empty";
        return;
    }
    if ($n ne $groupname) {
        warning "no entry in /etc/subgid for $groupname";
        return;
    }
    push @result, ["g", 0, $subid, $num_subid];
    if (scalar(@result) < 2) {
        warning "/etc/subgid does not contain an entry for $groupname";
        return;
    }
    if (scalar(@result) > 2) {
        warning "/etc/subgid contains multiple entries for $groupname";
        return;
    }
    return @result;
}

# This function spawns two child processes forming the following process tree
#
#  A
#  |
#  fork()
#  | \
#  B  C
#  |  |
#  |  fork()
#  |  | \
#  |  D  E
#  |  |  |
#  |  unshare()
#  |  close()
#  |  |  |
#  |  |  read()
#  |  |  newuidmap(D)
#  |  |  newgidmap(D)
#  |  |  /
#  |  waitpid()
#  |  |
#  |  fork()
#  |  | \
#  |  F  G
#  |  |  |
#  |  |  exec()
#  |  |  /
#  |  waitpid()
#  | /
#  waitpid()
#
# To better refer to each individual part, we give each process a new
# identifier after calling fork(). Process A is the main process. After
# executing fork() we call the parent and child B and C, respectively. This
# first fork() is done because we do not want to modify A. B then remains
# waiting for its child C to finish. C calls fork() again, splitting into
# the parent D and its child E. In the parent D we call unshare() and close a
# pipe shared by D and E to signal to E that D is done with calling unshare().
# E notices this by using read() and follows up with executing the tools
# new[ug]idmap on D. E finishes and D continues with doing another fork().
# This is because when unsharing the PID namespace, we need a PID 1 to be kept
# alive or otherwise any child processes cannot fork() anymore themselves. So
# we keep F as PID 1 and finally call exec() in G.
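#
# A hedged usage sketch (illustrative only, not part of the documented
# interface): the function below takes a code reference plus the id mapping
# produced by read_subuid_subgid(), forks the tree shown above and returns
# the pid of the child for the caller to wait on:
#
#     my @idmap = read_subuid_subgid();
#     my $pid = get_unshare_cmd(sub { 0 == system('id') or die; }, \@idmap);
#     waitpid $pid, 0;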
sub get_unshare_cmd { my $cmd = shift; my $idmap = shift; # unsharing the mount namespace (NEWNS) requires CAP_SYS_ADMIN my $unshare_flags = $CLONE_NEWNS | $CLONE_NEWPID | $CLONE_NEWUTS | $CLONE_NEWIPC; # we only need to add CLONE_NEWUSER if we are not yet root if ($EFFECTIVE_USER_ID != 0) { $unshare_flags |= $CLONE_NEWUSER; } if (0) { $unshare_flags |= $CLONE_NEWNET; } # fork a new process and let the child get unshare()ed # we don't want to unshare the parent process my $gcpid = fork() // error "fork() failed: $!"; if ($gcpid == 0) { # Create a pipe for the parent process to signal the child process that # it is done with calling unshare() so that the child can go ahead # setting up uid_map and gid_map. pipe my $rfh, my $wfh; # We have to do this dance with forking a process and then modifying # the parent from the child because: # - new[ug]idmap can only be called on a process id after that process # has unshared the user namespace # - a process looses its capabilities if it performs an execve() with # nonzero user ids see the capabilities(7) man page for details. # - a process that unshared the user namespace by default does not # have the privileges to call new[ug]idmap on itself # # this also works the other way around (the child setting up a user # namespace and being modified from the parent) but that way, the # parent would have to stay around until the child exited (so a pid # would be wasted). Additionally, that variant would require an # additional pipe to let the parent signal the child that it is done # with calling new[ug]idmap. The way it is done here, this signaling # can instead be done by wait()-ing for the exit of the child. my $ppid = $$; my $cpid = fork() // error "fork() failed: $!"; if ($cpid == 0) { # child # Close the writing descriptor at our end of the pipe so that we # see EOF when parent closes its descriptor. close $wfh; # Wait for the parent process to finish its unshare() call by # waiting for an EOF. 0 == sysread $rfh, my $c, 1 or error "read() did not receive EOF"; # the process is already root, so no need for newuidmap/newgidmap if ($EFFECTIVE_USER_ID == 0) { exit 0; } # The program's new[ug]idmap have to be used because they are # setuid root. These privileges are needed to map the ids from # /etc/sub[ug]id to the user namespace set up by the parent. # Without these privileges, only the id of the user itself can be # mapped into the new namespace. # # Since new[ug]idmap is setuid root we also don't need to write # "deny" to /proc/$$/setgroups beforehand (this is otherwise # required for unprivileged processes trying to write to # /proc/$$/gid_map since kernel version 3.19 for security reasons) # and therefore the parent process keeps its ability to change its # own group here. # # Since /proc/$ppid/[ug]id_map can only be written to once, # respectively, instead of making multiple calls to new[ug]idmap, # we assemble a command line that makes one call each. 
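#
# Worked example (with illustrative values): for /etc/subuid and /etc/subgid
# entries of "user:100000:65536", read_subuid_subgid() yields
# [["u", 0, 100000, 65536], ["g", 0, 100000, 65536]] and the assembled
# commands become
#
#     newuidmap $ppid 0 100000 65536
#     newgidmap $ppid 0 100000 65536
#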
my $uidmapcmd = ""; my $gidmapcmd = ""; foreach (@{$idmap}) { my ($t, $hostid, $nsid, $range) = @{$_}; if ($t ne "u" and $t ne "g" and $t ne "b") { error "invalid idmap type: $t"; } if ($t eq "u" or $t eq "b") { $uidmapcmd .= " $hostid $nsid $range"; } if ($t eq "g" or $t eq "b") { $gidmapcmd .= " $hostid $nsid $range"; } } my $idmapcmd = ''; if ($uidmapcmd ne "") { 0 == system "newuidmap $ppid $uidmapcmd" or error "newuidmap $ppid $uidmapcmd failed: $!"; } if ($gidmapcmd ne "") { 0 == system "newgidmap $ppid $gidmapcmd" or error "newgidmap $ppid $gidmapcmd failed: $!"; } exit 0; } # parent # After fork()-ing, the parent immediately calls unshare... 0 == syscall &SYS_unshare, $unshare_flags or error "unshare() failed: $!"; # .. and then signals the child process that we are done with the # unshare() call by sending an EOF. close $wfh; # Wait for the child process to finish its setup by waiting for its # exit. $cpid == waitpid $cpid, 0 or error "waitpid() failed: $!"; my $exit = $? >> 8; if ($exit != 0) { error "child had a non-zero exit status: $exit"; } # Currently we are nobody (uid and gid are 65534). So we become root # user and group instead. # # We are using direct syscalls instead of setting $(, $), $< and $> # because then perl would do additional stuff which we don't need or # want here, like checking /proc/sys/kernel/ngroups_max (which might # not exist). It would also also call setgroups() in a way that makes # the root user be part of the group unknown. if ($EFFECTIVE_USER_ID != 0) { 0 == syscall &SYS_setgid, 0 or error "setgid failed: $!"; 0 == syscall &SYS_setuid, 0 or error "setuid failed: $!"; 0 == syscall &SYS_setgroups, 0, 0 or error "setgroups failed: $!"; } if (1) { # When the pid namespace is also unshared, then processes expect a # master pid to always be alive within the namespace. To achieve # this, we fork() here instead of exec() to always have one dummy # process running as pid 1 inside the namespace. This is also what # the unshare tool does when used with the --fork option. # # Otherwise, without a pid 1, new processes cannot be forked # anymore after pid 1 finished. my $cpid = fork() // error "fork() failed: $!"; if ($cpid != 0) { # The parent process will stay alive as pid 1 in this # namespace until the child finishes executing. This is # important because pid 1 must never die or otherwise nothing # new can be forked. $cpid == waitpid $cpid, 0 or error "waitpid() failed: $!"; exit($? >> 8); } } &{$cmd}(); exit 0; } # parent return $gcpid; } sub havemknod { my $root = shift; my $havemknod = 0; if (-e "$root/test-dev-null") { error "/test-dev-null already exists"; } TEST: { # we fork so that we can read STDERR my $pid = open my $fh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { open(STDERR, '>&', STDOUT) or error "cannot open STDERR: $!"; # we use mknod(1) instead of the system call because creating the # right dev_t argument requires makedev(3) exec 'mknod', "$root/test-dev-null", 'c', '1', '3'; } chomp( my $content = do { local $/; <$fh> } ); close $fh; { last TEST unless $? 
== 0 and $content eq ''; last TEST unless -c "$root/test-dev-null"; last TEST unless open my $fh, '>', "$root/test-dev-null"; last TEST unless print $fh 'test'; } $havemknod = 1; } if (-e "$root/test-dev-null") { unlink "$root/test-dev-null" or error "cannot unlink /test-dev-null: $!"; } return $havemknod; } sub print_progress { if ($verbosity_level != 1) { return; } my $perc = shift; if (!stderr_is_tty()) { return; } if ($perc eq "done") { # \e[2K clears everything on the current line (i.e. the progress bar) print STDERR "\e[2Kdone\n"; return; } if ($perc >= 100) { $perc = 100; } my $width = 50; my $num_x = int($perc * $width / 100); my $bar = '=' x $num_x; if ($num_x != $width) { $bar .= '>'; $bar .= ' ' x ($width - $num_x - 1); } printf STDERR "%6.2f [%s]\r", $perc, $bar; return; } sub run_progress { my ($get_exec, $line_handler, $line_has_error, $chdir) = @_; pipe my $rfh, my $wfh; my $got_signal = 0; my $ignore = sub { info "run_progress() received signal $_[0]: waiting for child..."; }; debug("run_progress: exec " . (join ' ', ($get_exec->('${FD}')))); # delay signals so that we can fork and change behaviour of the signal # handler in parent and child without getting interrupted my $sigset = POSIX::SigSet->new(SIGINT, SIGHUP, SIGPIPE, SIGTERM); POSIX::sigprocmask(SIG_BLOCK, $sigset) or error "Can't block signals: $!"; my $pid1 = open(my $pipe, '-|') // error "failed to fork(): $!"; if ($pid1 == 0) { # child: default signal handlers local $SIG{'INT'} = 'DEFAULT'; local $SIG{'HUP'} = 'DEFAULT'; local $SIG{'PIPE'} = 'DEFAULT'; local $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $rfh; # Unset the close-on-exec flag, so that the file descriptor does not # get closed when we exec my $flags = fcntl($wfh, F_GETFD, 0) or error "fcntl F_GETFD: $!"; fcntl($wfh, F_SETFD, $flags & ~FD_CLOEXEC) or error "fcntl F_SETFD: $!"; my $fd = fileno $wfh; # redirect stderr to stdout so that we can capture it open(STDERR, '>&', STDOUT) or error "cannot open STDOUT: $!"; my @execargs = $get_exec->($fd); # before apt 1.5, "apt-get update" attempted to chdir() into the # working directory. This will fail if the current working directory # is not accessible by the user (for example in unshare mode). See # Debian bug #860738 if (defined $chdir) { chdir $chdir or error "failed chdir() to $chdir: $!"; } eval { Devel::Cover::set_coverage("none") } if $is_covering; exec { $execargs[0] } @execargs or error 'cannot exec() ' . 
(join ' ', @execargs); } close $wfh; # spawn two processes: # parent will parse stdout to look for errors # child will parse $rfh for the progress meter my $pid2 = fork() // error "failed to fork(): $!"; if ($pid2 == 0) { # child: default signal handlers local $SIG{'INT'} = 'IGNORE'; local $SIG{'HUP'} = 'IGNORE'; local $SIG{'PIPE'} = 'IGNORE'; local $SIG{'TERM'} = 'IGNORE'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; my $progress = 0.0; my $status = undef; print_progress($progress); while (my $line = <$rfh>) { my ($newprogress, $newstatus) = $line_handler->($line); next unless $newprogress; # start a new line if the new progress value is less than the # previous one if ($newprogress < $progress) { print_progress("done"); } if (defined $newstatus) { $status = $newstatus; } if ( defined $status and $verbosity_level == 1 and stderr_is_tty()) { # \e[2K clears everything on the current line (i.e. the # progress bar) print STDERR "\e[2K$status: "; } print_progress($newprogress); $progress = $newprogress; } print_progress("done"); exit 0; } # parent: ignore signals # by using "local", the original is automatically restored once the # function returns local $SIG{'INT'} = $ignore; local $SIG{'HUP'} = $ignore; local $SIG{'PIPE'} = $ignore; local $SIG{'TERM'} = $ignore; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; my $output = ''; my $has_error = 0; while (my $line = <$pipe>) { $has_error = $line_has_error->($line); if ($verbosity_level >= 2) { print STDERR $line; } else { # forward captured apt output $output .= $line; } } close($pipe); my $fail = 0; if ($? != 0 or $has_error) { $fail = 1; } waitpid $pid2, 0; $? == 0 or error "progress parsing failed"; if ($got_signal) { error "run_progress() received signal: $got_signal"; } # only print failure after progress output finished or otherwise it # might interfere with the remaining output if ($fail) { if ($verbosity_level >= 1) { print STDERR $output; } error((join ' ', $get_exec->('<$fd>')) . ' failed'); } return; } sub run_dpkg_progress { my $options = shift; my @debs = @{ $options->{PKGS} // [] }; my $get_exec = sub { return @{ $options->{ARGV} }, "--status-fd=$_[0]", @debs; }; my $line_has_error = sub { return 0; }; my $num = 0; # each package has one install and one configure step, thus the total # number is twice the number of packages my $total = (scalar @debs) * 2; my $line_handler = sub { my $status = undef; if ($_[0] =~ /^processing: (install|configure): /) { if ($1 eq 'install') { $status = 'installing'; } elsif ($1 eq 'configure') { $status = 'configuring'; } else { error "unknown status: $1"; } $num += 1; } return $num / $total * 100, $status; }; run_progress $get_exec, $line_handler, $line_has_error; return; } sub run_apt_progress { my $options = shift; my @debs = @{ $options->{PKGS} // [] }; my $tmpedsp; if (exists $options->{EDSP_RES}) { (undef, $tmpedsp) = tempfile( "mmdebstrap.edsp.XXXXXXXXXXXX", OPEN => 0, TMPDIR => 1 ); } my $get_exec = sub { my @prefix = (); my @opts = (); if (exists $options->{EDSP_RES}) { push @prefix, 'env', "APT_EDSP_DUMP_FILENAME=$tmpedsp"; if (-e "./proxysolver") { # for development purposes, use the current directory if it # contains a file called proxysolver push @opts, ("-oDir::Bin::solvers=" . 
getcwd()), '--solver=proxysolver'; } else { push @opts, '--solver=mmdebstrap-dump-solution'; } } return ( @prefix, @{ $options->{ARGV} }, @opts, "-oAPT::Status-Fd=$_[0]", # prevent apt from messing up the terminal and allow dpkg to # receive SIGINT and quit immediately without waiting for # maintainer script to finish '-oDpkg::Use-Pty=false', @debs ); }; my $line_has_error = sub { return 0; }; if ($options->{FIND_APT_WARNINGS}) { $line_has_error = sub { # apt-get doesn't report a non-zero exit if the update failed. # Thus, we have to parse its output. See #778357, #776152, #696335 # and #745735 for the parsing bugs as well as #594813, #696335, # #776152, #778357 and #953726 for non-zero exit on transient # network errors. # # For example, we want to fail with the following warning: # W: Some index files failed to download. They have been ignored, # or old ones used instead. # But since this message is meant for human consumption it is not # guaranteed to be stable across different apt versions and may # change arbitrarily in the future. Thus, we error out on any W: # lines as well. The downside is, that apt also unconditionally # and by design prints a warning for unsigned repositories, even # if they were allowed with Acquire::AllowInsecureRepositories "1" # or with trusted=yes. # # A workaround was introduced by apt 2.1.16 with the --error-on=any # option to apt-get update. if ($_[0] =~ /^(W: |Err:)/) { return 1; } return 0; }; } my $line_handler = sub { if ($_[0] =~ /(pmstatus|dlstatus):[^:]+:(\d+\.\d+):.*/) { my $status = undef; if ($1 eq 'pmstatus') { $status = "installing"; } elsif ($1 eq 'dlstatus') { $status = "downloading"; } else { error "unknown status: $1"; } return $2, $status; } }; run_progress $get_exec, $line_handler, $line_has_error, $options->{CHDIR}; if (exists $options->{EDSP_RES}) { info "parsing EDSP results..."; open my $fh, '<', $tmpedsp or error "failed to open $tmpedsp for reading: $!"; my $inst = 0; my $pkg; my $ver; while (my $line = <$fh>) { chomp $line; if ($line ne "") { if ($line =~ /^Install: \d+/) { $inst = 1; } elsif ($line =~ /^Package: (.*)/) { $pkg = $1; } elsif ($line =~ /^Version: (.*)/) { $ver = $1; } next; } if ($inst == 1 && defined $pkg && defined $ver) { push @{ $options->{EDSP_RES} }, [$pkg, $ver]; } $inst = 0; undef $pkg; undef $ver; } close $fh; unlink $tmpedsp; } return; } sub run_chroot { my $cmd = shift; my $options = shift; my @cleanup_tasks = (); my $cleanup = sub { my $signal = $_[0]; while (my $task = pop @cleanup_tasks) { $task->(); } if ($signal) { warning "pid $PID cought signal: $signal"; exit 1; } }; local $SIG{INT} = $cleanup; local $SIG{HUP} = $cleanup; local $SIG{PIPE} = $cleanup; local $SIG{TERM} = $cleanup; eval { if (any { $_ eq $options->{mode} } ('root', 'unshare')) { # if more than essential should be installed, make the system look # more like a real one by creating or bind-mounting the device # nodes foreach my $file (@devfiles) { my ($fname, $mode, $type, $linkname, $devmajor, $devminor) = @{$file}; next if $fname eq ''; if ($type == 0) { # normal file error "type 0 not implemented"; } elsif ($type == 1) { # hardlink error "type 1 not implemented"; } elsif ($type == 2) { # symlink if (!$options->{havemknod}) { # If we had mknod, then the symlink was already created # in the run_setup function. if (!-d "$options->{root}/dev") { warning( "skipping creation of ./dev/$fname because the" . 
" /dev directory is missing in the target" ); next; } push @cleanup_tasks, sub { unlink "$options->{root}/dev/$fname" or warn "cannot unlink ./dev/$fname: $!"; }; symlink $linkname, "$options->{root}/dev/$fname" or error "cannot create symlink ./dev/$fname -> $linkname"; } } elsif ($type == 3 or $type == 4) { # character/block special if ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !$options->{canmount}) { warning "skipping bind-mounting ./dev/$fname"; } elsif (!$options->{havemknod}) { if (!-d "$options->{root}/dev") { warning( "skipping creation of ./dev/$fname because the" . " /dev directory is missing in the target" ); next; } if (!-e "/dev/$fname") { warning("skipping creation of ./dev/$fname because" . " /dev/$fname does not exist" . " on the outside"); next; } if (!-c "/dev/$fname") { warning("skipping creation of ./dev/$fname because" . " /dev/$fname on the outside is not a" . " character special file"); next; } open my $fh, '>', "$options->{root}/dev/$fname" or error "cannot open $options->{root}/dev/$fname: $!"; close $fh; my @umountopts = (); if ($options->{mode} eq 'unshare') { push @umountopts, '--no-mtab'; } push @cleanup_tasks, sub { 0 == system('umount', @umountopts, "$options->{root}/dev/$fname") or warn "umount ./dev/$fname failed: $?"; unlink "$options->{root}/dev/$fname" or warn "cannot unlink ./dev/$fname: $!"; }; 0 == system('mount', '-o', 'bind', "/dev/$fname", "$options->{root}/dev/$fname") or error "mount ./dev/$fname failed: $?"; } } elsif ($type == 5 && (any { $_ eq $options->{mode} } ('root', 'unshare')) && !$options->{canmount}) { warning "skipping bind-mounting ./dev/$fname"; } elsif ($type == 5) { # directory if (!-d "$options->{root}/dev") { warning( "skipping creation of ./dev/$fname because the" . " /dev directory is missing in the target"); next; } if (!-e "/dev/$fname") { warning("skipping creation of ./dev/$fname because" . " /dev/$fname does not exist" . " on the outside"); next; } if (!-d "/dev/$fname") { warning("skipping creation of ./dev/$fname because" . " /dev/$fname on the outside is not a" . " directory"); next; } if (!$options->{havemknod}) { # If had mknod, then the directory to bind-mount into # was already created in the run_setup function. push @cleanup_tasks, sub { rmdir "$options->{root}/dev/$fname" or warn "cannot rmdir ./dev/$fname: $!"; }; if (-e "$options->{root}/dev/$fname") { if (!-d "$options->{root}/dev/$fname") { error "./dev/$fname already exists but is not" . " a directory"; } } else { my $num_created = make_path "$options->{root}/dev/$fname", { error => \my $err }; if ($err && @$err) { error( join "; ", ( map { "cannot create " . 
(join ": ", %{$_}) } @$err )); } elsif ($num_created == 0) { error "cannot create $options->{root}/dev/$fname"; } } chmod $mode, "$options->{root}/dev/$fname" or error "cannot chmod ./dev/$fname: $!"; } my @umountopts = (); if ($options->{mode} eq 'unshare') { push @umountopts, '--no-mtab'; } push @cleanup_tasks, sub { 0 == system('umount', @umountopts, "$options->{root}/dev/$fname") or warn "umount ./dev/$fname failed: $?"; }; 0 == system('mount', '-o', 'bind', "/dev/$fname", "$options->{root}/dev/$fname") or error "mount ./dev/$fname failed: $?"; } else { error "unsupported type: $type"; } } } elsif ( any { $_ eq $options->{mode} } ('proot', 'fakechroot', 'chrootless') ) { # we cannot mount in fakechroot and proot mode # in proot mode we have /dev bind-mounted already through # --bind=/dev } else { error "unknown mode: $options->{mode}"; } # We can only mount /proc and /sys after extracting the essential # set because if we mount it before, then base-files will not be able # to extract those if ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !$options->{canmount}) { warning "skipping mount sysfs"; } elsif ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !-d "$options->{root}/sys") { warning("skipping mounting of sysfs because the" . " /sys directory is missing in the target"); } elsif ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !-e "/sys") { warning("skipping bind-mounting /sys because" . " /sys does not exist on the outside"); } elsif ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !-d "/sys") { warning("skipping bind-mounting /sys because" . " /sys on the outside is not a directory"); } elsif ($options->{mode} eq 'root') { push @cleanup_tasks, sub { 0 == system('umount', "$options->{root}/sys") or warn "umount /sys failed: $?"; }; 0 == system( 'mount', '-t', 'sysfs', '-o', 'ro,nosuid,nodev,noexec', 'sys', "$options->{root}/sys" ) or error "mount /sys failed: $?"; } elsif ($options->{mode} eq 'unshare') { # naturally we have to clean up after ourselves in sudo mode where # we do a real mount. But we also need to unmount in unshare mode # because otherwise, even with the --one-file-system tar option, # the permissions of the mount source will be stored and not the # mount target (the directory) push @cleanup_tasks, sub { # since we cannot write to /etc/mtab we need --no-mtab # unmounting /sys only seems to be successful with --lazy 0 == system('umount', '--no-mtab', '--lazy', "$options->{root}/sys") or warn "umount /sys failed: $?"; }; # without the network namespace unshared, we cannot mount a new # sysfs. Since we need network, we just bind-mount. # # we have to rbind because just using bind results in "wrong fs # type, bad option, bad superblock" error 0 == system('mount', '-o', 'rbind', '/sys', "$options->{root}/sys") or error "mount /sys failed: $?"; } elsif ( any { $_ eq $options->{mode} } ('proot', 'fakechroot', 'chrootless') ) { # we cannot mount in fakechroot and proot mode # in proot mode we have /proc bind-mounted already through # --bind=/proc } else { error "unknown mode: $options->{mode}"; } if ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !$options->{canmount}) { warning "skipping mount proc"; } elsif ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !-d "$options->{root}/proc") { warning("skipping mounting of proc because the" . " /proc directory is missing in the target"); } elsif ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !-e "/proc") { warning("skipping bind-mounting /proc because" . 
" /proc does not exist on the outside"); } elsif ((any { $_ eq $options->{mode} } ('root', 'unshare')) && !-d "/proc") { warning("skipping bind-mounting /proc because" . " /proc on the outside is not a directory"); } elsif ($options->{mode} eq 'root') { push @cleanup_tasks, sub { # some maintainer scripts mount additional stuff into /proc # which we need to unmount beforehand if ( is_mountpoint( $options->{root} . "/proc/sys/fs/binfmt_misc" ) ) { 0 == system('umount', "$options->{root}/proc/sys/fs/binfmt_misc") or error "umount /proc/sys/fs/binfmt_misc failed: $?"; } 0 == system('umount', "$options->{root}/proc") or error "umount /proc failed: $?"; }; 0 == system('mount', '-t', 'proc', '-o', 'ro', 'proc', "$options->{root}/proc") or error "mount /proc failed: $?"; } elsif ($options->{mode} eq 'unshare') { # naturally we have to clean up after ourselves in sudo mode where # we do a real mount. But we also need to unmount in unshare mode # because otherwise, even with the --one-file-system tar option, # the permissions of the mount source will be stored and not the # mount target (the directory) push @cleanup_tasks, sub { # since we cannot write to /etc/mtab we need --no-mtab 0 == system('umount', '--no-mtab', "$options->{root}/proc") or error "umount /proc failed: $?"; }; 0 == system('mount', '-t', 'proc', 'proc', "$options->{root}/proc") or error "mount /proc failed: $?"; } elsif ( any { $_ eq $options->{mode} } ('proot', 'fakechroot', 'chrootless') ) { # we cannot mount in fakechroot and proot mode # in proot mode we have /sys bind-mounted already through # --bind=/sys } else { error "unknown mode: $options->{mode}"; } # prevent daemons from starting # the directory might not exist in custom variant, for example # # ideally, we should use update-alternatives but we cannot rely on it # existing inside the chroot # # See #911290 for more problems of this interface if (-d "$options->{root}/usr/sbin/") { open my $fh, '>', "$options->{root}/usr/sbin/policy-rc.d" or error "cannot open policy-rc.d: $!"; print $fh "#!/bin/sh\n"; print $fh "exit 101\n"; close $fh; chmod 0755, "$options->{root}/usr/sbin/policy-rc.d" or error "cannot chmod policy-rc.d: $!"; } # the file might not exist if it was removed in a hook if (-f "$options->{root}/sbin/start-stop-daemon") { if (-e "$options->{root}/sbin/start-stop-daemon.REAL") { error "$options->{root}/sbin/start-stop-daemon.REAL already" . " exists"; } move( "$options->{root}/sbin/start-stop-daemon", "$options->{root}/sbin/start-stop-daemon.REAL" ) or error "cannot move start-stop-daemon: $!"; open my $fh, '>', "$options->{root}/sbin/start-stop-daemon" or error "cannot open start-stop-daemon: $!"; print $fh "#!/bin/sh\n"; print $fh "echo \"Warning: Fake start-stop-daemon called, doing" . 
" nothing\">&2\n"; close $fh; chmod 0755, "$options->{root}/sbin/start-stop-daemon" or error "cannot chmod start-stop-daemon: $!"; } &{$cmd}(); # cleanup if (-e "$options->{root}/sbin/start-stop-daemon.REAL") { move( "$options->{root}/sbin/start-stop-daemon.REAL", "$options->{root}/sbin/start-stop-daemon" ) or error "cannot move start-stop-daemon: $!"; } if (-f "$options->{root}/usr/sbin/policy-rc.d") { unlink "$options->{root}/usr/sbin/policy-rc.d" or error "cannot unlink policy-rc.d: $!"; } }; my $error = $@; # we use the cleanup function to do the unmounting $cleanup->(0); if ($error) { error "run_chroot failed: $error"; } return; } sub run_hooks { my $name = shift; my $options = shift; if (scalar @{ $options->{"${name}_hook"} } == 0) { return; } if ($options->{dryrun}) { info "not running ${name}-hooks because of --dry-run"; return; } my @env_opts = (); # At this point TMPDIR is set to "$options->{root}/tmp". This is to have a # writable TMPDIR even in unshare mode. But if TMPDIR is still set when # running hooks, then every hook script calling chroot, will have to wrap # that into an "env --unset=TMPDIR". To avoid this, we unset TMPDIR here. # If the hook script needs a writable TMPDIR, then it can always use /tmp # inside the chroot. This is also why we do not set a new MMDEBSTRAP_TMPDIR # environment variable. if (length $ENV{TMPDIR}) { push @env_opts, '--unset=TMPDIR'; } # The APT_CONFIG variable, if set, will confuse any manual calls to # apt-get. If you want to use the same config used by mmdebstrap, the # original value is stored in MMDEBSTRAP_APT_CONFIG. if (length $ENV{APT_CONFIG}) { push @env_opts, '--unset=APT_CONFIG'; } if (length $ENV{APT_CONFIG}) { push @env_opts, "MMDEBSTRAP_APT_CONFIG=$ENV{APT_CONFIG}"; } # Storing the mode is important for hook scripts to potentially change # their behavior depending on the mode. It's also important for when the # hook wants to use the mmdebstrap --hook-helper. push @env_opts, "MMDEBSTRAP_MODE=$options->{mode}"; # Storing the hook name is important for hook scripts to potentially change # their behavior depending on the hook. It's also important for when the # hook wants to use the mmdebstrap --hook-helper. push @env_opts, "MMDEBSTRAP_HOOK=$name"; # This is the file descriptor of the socket that the mmdebstrap # --hook-helper can write to and read from to communicate with the outside. push @env_opts, ("MMDEBSTRAP_HOOKSOCK=" . fileno($options->{hooksock})); my $runner = sub { foreach my $script (@{ $options->{"${name}_hook"} }) { if ( $script =~ /^( copy-in|copy-out |tar-in|tar-out |upload|download |sync-in|sync-out )\ /x ) { info "running special hook: $script"; if ( any { $_ eq $options->{variant} } ('extract', 'custom') and any { $_ eq $options->{mode} } ('fakechroot', 'proot') and $name ne 'setup' ) { info "the copy-in, copy-out, tar-in and tar-out commands" . " in fakechroot mode or proot mode might fail in" . " extract and custom variants because there might be" . 
" no tar inside the chroot"; } my $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { # whatever the script writes on stdout is sent to the # socket # whatever is written to the socket, send to stdin open(STDOUT, '>&', $options->{hooksock}) or error "cannot open STDOUT: $!"; open(STDIN, '<&', $options->{hooksock}) or error "cannot open STDIN: $!"; # we execute ourselves under sh to avoid having to # implement a clever parser of the quoting used in $script # for the filenames my $prefix = ""; if ($is_covering) { $prefix = "$EXECUTABLE_NAME -MDevel::Cover=-silent,-nogcov "; } exec 'sh', '-c', "$prefix$PROGRAM_NAME --hook-helper" . " \"\$1\" \"\$2\" \"\$3\" \"\$4\" \"\$5\" $script", 'exec', $options->{root}, $options->{mode}, $name, ( defined $options->{qemu} ? "qemu-$options->{qemu}" : 'env', $verbosity_level ); } waitpid($pid, 0); $? == 0 or error "special hook failed with exit code $?"; } elsif (-x $script || $script !~ m/[^\w@\%+=:,.\/-]/a) { info "running --$name-hook directly: $script $options->{root}"; # execute it directly if it's an executable file # or if it there are no shell metacharacters # (the /a regex modifier makes \w match only ASCII) 0 == system('env', @env_opts, $script, $options->{root}) or error "command failed: $script"; } else { info "running --$name-hook in shell: sh -c '$script' exec" . " $options->{root}"; # otherwise, wrap everything in sh -c 0 == system('env', @env_opts, 'sh', '-c', $script, 'exec', $options->{root}) or error "command failed: $script"; } } }; # Unset the close-on-exec flag, so that the file descriptor does not # get closed when we exec my $flags = fcntl($options->{hooksock}, F_GETFD, 0) or error "fcntl F_GETFD: $!"; fcntl($options->{hooksock}, F_SETFD, $flags & ~FD_CLOEXEC) or error "fcntl F_SETFD: $!"; if ($name eq 'setup') { # execute directly without mounting anything (the mount points do not # exist yet) &{$runner}(); } else { run_chroot(\&$runner, $options); } # Restore flags fcntl($options->{hooksock}, F_SETFD, $flags) or error "fcntl F_SETFD: $!"; return; } sub setup { my $options = shift; foreach my $key (sort keys %{$options}) { my $value = $options->{$key}; if (!defined $value) { next; } if (ref $value eq '') { debug "$key: $options->{$key}"; } elsif (ref $value eq 'ARRAY') { debug "$key: [" . (join ', ', @{$value}) . "]"; } elsif (ref $value eq 'GLOB') { debug "$key: GLOB"; } else { error "unknown type for key $key: " . 
(ref $value); } } if (-e $options->{apttrusted} && !-r $options->{apttrusted}) { warning "cannot read $options->{apttrusted}"; } if (-e $options->{apttrustedparts} && !-r $options->{apttrustedparts}) { warning "cannot read $options->{apttrustedparts}"; } if (any { $_ eq 'setup' } @{ $options->{skip} }) { info "skipping setup as requested"; } else { run_setup($options); } run_hooks('setup', $options); if (any { $_ eq 'update' } @{ $options->{skip} }) { info "skipping update as requested"; } else { run_update($options); } (my $essential_pkgs, my $cached_debs) = run_download($options); # in theory, we don't have to extract the packages in chrootless mode # but we do it anyways because otherwise directory creation timestamps # will differ compared to non-chrootless and we want to create bit-by-bit # identical tar output # # FIXME: dpkg could be changed to produce the same results run_extract($options, $essential_pkgs); run_hooks('extract', $options); if ($options->{variant} ne 'extract') { my $chrootcmd = []; if ($options->{mode} ne 'chrootless') { $chrootcmd = run_prepare($options); } run_essential($options, $essential_pkgs, $chrootcmd, $cached_debs); run_hooks('essential', $options); run_install($options, $chrootcmd); run_hooks('customize', $options); } if (any { $_ eq 'cleanup' } @{ $options->{skip} }) { info "skipping cleanup as requested"; } else { run_cleanup($options); } return; } sub run_setup() { my $options = shift; { my @directories = ( '/etc/apt/apt.conf.d', '/etc/apt/sources.list.d', '/etc/apt/preferences.d', '/var/cache/apt', '/var/lib/apt/lists/partial', '/tmp' ); # we need /var/lib/dpkg in case we need to write to /var/lib/dpkg/arch push @directories, '/var/lib/dpkg'; # since we do not know the dpkg version inside the chroot at this # point, we can only omit it in chrootless mode if ($options->{mode} ne 'chrootless' or scalar @{ $options->{dpkgopts} } > 0) { push @directories, '/etc/dpkg/dpkg.cfg.d/'; } # if dpkg and apt operate from the outside we need some more # directories because dpkg and apt might not even be installed inside # the chroot. Thus, the following block is not strictly necessary in # chrootless mode. We unconditionally add it anyways, so that the # output with and without chrootless mode is equal. { push @directories, '/var/log/apt'; # since we do not know the dpkg version inside the chroot at this # point, we can only omit it in chrootless mode if ($options->{mode} ne 'chrootless') { push @directories, '/var/lib/dpkg/triggers', '/var/lib/dpkg/info', '/var/lib/dpkg/alternatives', '/var/lib/dpkg/updates'; } } foreach my $dir (@directories) { if (-e "$options->{root}/$dir") { if (!-d "$options->{root}/$dir") { error "$dir already exists but is not a directory"; } } else { my $num_created = make_path "$options->{root}/$dir", { error => \my $err }; if ($err && @$err) { error( join "; ", (map { "cannot create " . (join ": ", %{$_}) } @$err)); } elsif ($num_created == 0) { error "cannot create $options->{root}/$dir"; } } } # make sure /tmp is not 0755 like the rest chmod 01777, "$options->{root}/tmp" or error "cannot chmod /tmp: $!"; } # The TMPDIR set by the user or even /tmp might be inaccessible by the # unshared user. Thus, we place all temporary files in /tmp inside the new # rootfs. # # This will affect calls to tempfile() as well as runs of "apt-get update" # which will create temporary clearsigned.message.XXXXXX files to verify # signatures. 
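#
# An illustrative, self-contained sketch of that effect (the rootfs path
# below is a placeholder): once TMPDIR points below the rootfs, tempfile()
# called with TMPDIR => 1 creates its files inside the chroot's /tmp.
#
#   use strict;
#   use warnings;
#   use File::Temp qw(tempfile);
#
#   my $root = '/path/to/rootfs';            # placeholder
#   $ENV{TMPDIR} = "$root/tmp";
#   # File::Temp honours $ENV{TMPDIR} when TMPDIR => 1 is given
#   my ($fh, $fname) = tempfile("clearsigned.message.XXXXXX", TMPDIR => 1);
#   print "temporary file created at $fname\n";
#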
# # Setting TMPDIR to inside the chroot is also necessary for when packages # are installed with apt from outside the chroot with # DPkg::Chroot-Directory { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{"TMPDIR"} = "$options->{root}/tmp"; } my ($conf, $tmpfile) = tempfile("mmdebstrap.apt.conf.XXXXXXXXXXXX", TMPDIR => 1) or error "cannot open apt.conf: $!"; print $conf "Apt::Architecture \"$options->{nativearch}\";\n"; # the host system might have configured additional architectures # force only the native architecture if (scalar @{ $options->{foreignarchs} } > 0) { print $conf "Apt::Architectures { \"$options->{nativearch}\"; "; foreach my $arch (@{ $options->{foreignarchs} }) { print $conf "\"$arch\"; "; } print $conf "};\n"; } else { print $conf "Apt::Architectures \"$options->{nativearch}\";\n"; } print $conf "Dir \"$options->{root}\";\n"; # not needed anymore for apt 1.3 and newer print $conf "Dir::State::Status \"$options->{root}/var/lib/dpkg/status\";\n"; # for authentication, use the keyrings from the host print $conf "Dir::Etc::Trusted \"$options->{apttrusted}\";\n"; print $conf "Dir::Etc::TrustedParts \"$options->{apttrustedparts}\";\n"; if ($options->{variant} ne 'apt') { # apt considers itself essential. Thus, when generating an EDSP # document for an external solver, it will add the Essential:yes field # to the apt package stanza. This is unnecessary for any other variant # than 'apt' because in all other variants we compile the set of # packages we consider essential ourselves and for the 'essential' # variant it would even be wrong to add apt. This workaround is only # needed when apt is used with an external solver but doesn't hurt # otherwise and we don't have a good way to figure out whether apt is # using an external solver or not short of parsing the --aptopt # options. print $conf "pkgCacheGen::ForceEssential \",\";\n"; } close $conf; # We put certain configuration items in their own configuration file # because they have to be valid for apt invocation from outside as well as # from inside the chroot. # The config filename is chosen such that any settings in it will be # overridden by what the user specified with --aptopt. if (!-e "$options->{root}/etc/apt/apt.conf.d/00mmdebstrap") { open my $fh, '>', "$options->{root}/etc/apt/apt.conf.d/00mmdebstrap" or error "cannot open /etc/apt/apt.conf.d/00mmdebstrap: $!"; print $fh "Apt::Install-Recommends false;\n"; print $fh "Acquire::Languages \"none\";\n"; close $fh; } # apt-get update requires this if (!-e "$options->{root}/var/lib/dpkg/status") { open my $fh, '>', "$options->{root}/var/lib/dpkg/status" or error "failed to open(): $!"; close $fh; } # we create /var/lib/dpkg/arch inside the chroot either if there is more # than the native architecture in the chroot or if chrootless mode is # used to create a chroot of a different architecture than the native # architecture outside the chroot. 
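#
# For illustration only, the apt configuration generated above might render
# roughly like this for an amd64 chroot with arm64 as an additional
# architecture (the rootfs path and keyring locations are placeholders for
# whatever the host provides):
#
#   Apt::Architecture "amd64";
#   Apt::Architectures { "amd64"; "arm64"; };
#   Dir "/path/to/rootfs";
#   Dir::State::Status "/path/to/rootfs/var/lib/dpkg/status";
#   Dir::Etc::Trusted "/etc/apt/trusted.gpg";
#   Dir::Etc::TrustedParts "/etc/apt/trusted.gpg.d";
#   pkgCacheGen::ForceEssential ",";
#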
chomp(my $hostarch = `dpkg --print-architecture`); if ( (!-e "$options->{root}/var/lib/dpkg/arch") and ( scalar @{ $options->{foreignarchs} } > 0 or ( $options->{mode} eq 'chrootless' and $hostarch ne $options->{nativearch})) ) { open my $fh, '>', "$options->{root}/var/lib/dpkg/arch" or error "cannot open /var/lib/dpkg/arch: $!"; print $fh "$options->{nativearch}\n"; foreach my $arch (@{ $options->{foreignarchs} }) { print $fh "$arch\n"; } close $fh; } if (scalar @{ $options->{aptopts} } > 0 and (!-e "$options->{root}/etc/apt/apt.conf.d/99mmdebstrap")) { open my $fh, '>', "$options->{root}/etc/apt/apt.conf.d/99mmdebstrap" or error "cannot open /etc/apt/apt.conf.d/99mmdebstrap: $!"; foreach my $opt (@{ $options->{aptopts} }) { if (-r $opt) { # flush handle because copy() uses syswrite() which bypasses # buffered IO $fh->flush(); copy $opt, $fh or error "cannot copy $opt: $!"; } else { print $fh $opt; if ($opt !~ /;$/) { print $fh ';'; } if ($opt !~ /\n$/) { print $fh "\n"; } } } close $fh; if ($verbosity_level >= 3) { debug "content of /etc/apt/apt.conf.d/99mmdebstrap:"; copy("$options->{root}/etc/apt/apt.conf.d/99mmdebstrap", \*STDERR); } } if (scalar @{ $options->{dpkgopts} } > 0 and (!-e "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap")) { # FIXME: in chrootless mode, dpkg will only read the configuration # from the host -- see #808203 if ($options->{mode} eq 'chrootless') { warning('dpkg is unable to read an alternative configuration in' . 'chrootless mode -- see Debian bug #808203'); } open my $fh, '>', "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap" or error "cannot open /etc/dpkg/dpkg.cfg.d/99mmdebstrap: $!"; foreach my $opt (@{ $options->{dpkgopts} }) { if (-r $opt) { # flush handle because copy() uses syswrite() which bypasses # buffered IO $fh->flush(); copy $opt, $fh or error "cannot copy $opt: $!"; } else { print $fh $opt; if ($opt !~ /\n$/) { print $fh "\n"; } } } close $fh; if ($verbosity_level >= 3) { debug "content of /etc/dpkg/dpkg.cfg.d/99mmdebstrap:"; copy("$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap", \*STDERR); } } if (!-e "$options->{root}/etc/fstab") { open my $fh, '>', "$options->{root}/etc/fstab" or error "cannot open fstab: $!"; print $fh "# UNCONFIGURED FSTAB FOR BASE SYSTEM\n"; close $fh; chmod 0644, "$options->{root}/etc/fstab" or error "cannot chmod fstab: $!"; } # write /etc/apt/sources.list and files in /etc/apt/sources.list.d/ { my $firstentry = $options->{sourceslists}->[0]; # if the first sources.list entry is of one-line type and without # explicit filename, then write out an actual /etc/apt/sources.list # otherwise everything goes into /etc/apt/sources.list.d my $fname; if ($firstentry->{type} eq 'one-line' && !defined $firstentry->{fname}) { $fname = "$options->{root}/etc/apt/sources.list"; } else { $fname = "$options->{root}/etc/apt/sources.list.d/0000"; if (defined $firstentry->{fname}) { $fname .= $firstentry->{fname}; if ( $firstentry->{fname} !~ /\.list/ && $firstentry->{fname} !~ /\.sources/) { if ($firstentry->{type} eq 'one-line') { $fname .= '.list'; } elsif ($firstentry->{type} eq 'deb822') { $fname .= '.sources'; } else { error "invalid type: $firstentry->{type}"; } } } else { # if no filename is given, then this must be a deb822 file # because if it was a one-line type file, then it would've been # written to /etc/apt/sources.list $fname .= 'main.sources'; } } if (!-e $fname) { open my $fh, '>', "$fname" or error "cannot open $fname: $!"; print $fh $firstentry->{content}; close $fh; } # everything else goes into 
/etc/apt/sources.list.d/ for (my $i = 1 ; $i < scalar @{ $options->{sourceslists} } ; $i++) { my $entry = $options->{sourceslists}->[$i]; my $fname = "$options->{root}/etc/apt/sources.list.d/" . sprintf("%04d", $i); if (defined $entry->{fname}) { $fname .= $entry->{fname}; if ( $entry->{fname} !~ /\.list/ && $entry->{fname} !~ /\.sources/) { if ($entry->{type} eq 'one-line') { $fname .= '.list'; } elsif ($entry->{type} eq 'deb822') { $fname .= '.sources'; } else { error "invalid type: $entry->{type}"; } } } else { if ($entry->{type} eq 'one-line') { $fname .= 'main.list'; } elsif ($entry->{type} eq 'deb822') { $fname .= 'main.sources'; } else { error "invalid type: $entry->{type}"; } } if (!-e $fname) { open my $fh, '>', "$fname" or error "cannot open $fname: $!"; print $fh $entry->{content}; close $fh; } } } # allow network access from within foreach my $file ("/etc/resolv.conf", "/etc/hostname") { if (-e $file && !-e "$options->{root}/$file") { # this will create a new file with 644 permissions and copy # contents only even if $file was a symlink copy($file, "$options->{root}/$file") or error "cannot copy $file: $!"; # if the source was a regular file, preserve the permissions if (-f $file) { my $mode = (stat($file))[2]; $mode &= oct(7777); # mask off bits that aren't the mode chmod $mode, "$options->{root}/$file" or error "cannot chmod $file: $!"; } } else { warning("Host system does not have a $file to copy into the" . " rootfs."); } } if ($options->{havemknod}) { foreach my $file (@devfiles) { my ($fname, $mode, $type, $linkname, $devmajor, $devminor) = @{$file}; if ($type == 0) { # normal file error "type 0 not implemented"; } elsif ($type == 1) { # hardlink error "type 1 not implemented"; } elsif ($type == 2) { # symlink if ( $options->{mode} eq 'fakechroot' and $linkname =~ /^\/proc/) { # there is no /proc in fakechroot mode next; } symlink $linkname, "$options->{root}/dev/$fname" or error "cannot create symlink ./dev/$fname"; next; # chmod cannot work on symlinks } elsif ($type == 3) { # character special 0 == system('mknod', "$options->{root}/dev/$fname", 'c', $devmajor, $devminor) or error "mknod failed: $?"; } elsif ($type == 4) { # block special 0 == system('mknod', "$options->{root}/dev/$fname", 'b', $devmajor, $devminor) or error "mknod failed: $?"; } elsif ($type == 5) { # directory if (-e "$options->{root}/dev/$fname") { if (!-d "$options->{root}/dev/$fname") { error "./dev/$fname already exists but is not a directory"; } } else { my $num_created = make_path "$options->{root}/dev/$fname", { error => \my $err }; if ($err && @$err) { error( join "; ", ( map { "cannot create " . (join ": ", %{$_}) } @$err )); } elsif ($num_created == 0) { error "cannot create $options->{root}/dev/$fname"; } } } else { error "unsupported type: $type"; } chmod $mode, "$options->{root}/dev/$fname" or error "cannot chmod ./dev/$fname: $!"; } } # we tell apt about the configuration via a config file passed via the # APT_CONFIG environment variable instead of using the --option command # line arguments because configuration settings like Dir::Etc have already # been evaluated at the time that apt takes its command line arguments # into account. 
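#
# A minimal sketch of the consequence (the file name is a placeholder):
# with APT_CONFIG exported, every apt invocation started by mmdebstrap
# reads Dir, Dir::State::Status and friends from the generated file, which
# cannot be achieved reliably with "-o" command line options because those
# are only parsed after settings like Dir::Etc have already been evaluated.
#
#   $ENV{APT_CONFIG} = '/tmp/mmdebstrap.apt.conf.XXXXXXXXXXXX';
#   0 == system('apt-get', 'indextargets')
#       or die "apt-get indextargets failed: $?";
#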
{ ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{"APT_CONFIG"} = "$tmpfile"; } # we have to make the config file world readable so that a possible # /usr/lib/apt/solvers/apt process which is run by the _apt user is also # able to read it chmod 0666, "$tmpfile" or error "cannot chmod $tmpfile: $!"; if ($verbosity_level >= 3) { 0 == system('apt-get', '--version') or error "apt-get --version failed: $?"; 0 == system('apt-config', 'dump') or error "apt-config failed: $?"; debug "content of $tmpfile:"; copy($tmpfile, \*STDERR); } if (none { $_ eq $options->{mode} } ('fakechroot', 'proot')) { # Apt dropping privileges to another user than root is not useful in # fakechroot and proot mode because all users are faked and thus there # is no real privilege difference anyways. We could set # APT::Sandbox::User "root" in fakechroot and proot mode but we don't # because if we would, then /var/cache/apt/archives/partial/ and # /var/lib/apt/lists/partial/ would not be owned by the _apt user # if mmdebstrap was run in fakechroot or proot mode. # # when apt-get update is run by the root user, then apt will attempt to # drop privileges to the _apt user. This will fail if the _apt user # does not have permissions to read the root directory. In that case, # we have to disable apt sandboxing. This can for example happen in # root mode when the path of the chroot is not in a world-readable # location. my $partial = '/var/lib/apt/lists/partial'; if ( system('/usr/lib/apt/apt-helper', 'drop-privs', '--', 'test', '-r', "$options->{root}$partial") != 0 ) { warning "Download is performed unsandboxed as root as file" . " $options->{root}$partial couldn't be accessed by user _apt"; open my $fh, '>>', $tmpfile or error "cannot open $tmpfile for appending: $!"; print $fh "APT::Sandbox::User \"root\";\n"; close $fh; } } return; } sub run_update() { my $options = shift; my $aptopts = { ARGV => ['apt-get', 'update', '--error-on=any'], CHDIR => $options->{root}, }; info "running apt-get update..."; run_apt_progress($aptopts); # check if anything was downloaded at all { open my $fh, '-|', 'apt-get', 'indextargets' // error "failed to fork(): $!"; chomp( my $indextargets = do { local $/; <$fh> } ); close $fh; if ($indextargets eq '') { if ($verbosity_level >= 1) { 0 == system('apt-cache', 'policy') or error "apt-cache failed: $?"; } error "apt-get update didn't download anything"; } } return; } sub run_download() { my $options = shift; # We use /var/cache/apt/archives/ to figure out which packages apt chooses # to install. That's why the directory must be empty if: # - /var/cache/apt/archives exists, and # - no simulation run is done, and # - the variant is not extract or custom or the number to be # installed packages not zero # # We could also unconditionally use the proxysolver and then "apt-get # download" any missing packages but using the proxysolver requires # /usr/lib/apt/solvers/apt from the apt-utils package and we want to avoid # that dependency. # # In the future we want to replace downloading packages with "apt-get # install --download-only" and installing them with dpkg by just installing # the essential packages with apt from the outside with # DPkg::Chroot-Directory. We are not doing that because then the preinst # script of base-passwd will not be called early enough and packages will # fail to install because they are missing /etc/passwd. 
my @cached_debs = (); my @dl_debs = (); if ( !$options->{dryrun} && ((none { $_ eq $options->{variant} } ('extract', 'custom')) || scalar @{ $options->{include} } != 0) && -d "$options->{root}/var/cache/apt/archives/" ) { my $apt_archives = "/var/cache/apt/archives/"; opendir my $dh, "$options->{root}/$apt_archives" or error "cannot read $apt_archives"; while (my $deb = readdir $dh) { if ($deb !~ /\.deb$/) { next; } if (!-f "$options->{root}/$apt_archives/$deb") { next; } push @cached_debs, $deb; } closedir $dh; if (scalar @cached_debs > 0) { if (any { $_ eq 'download/empty' } @{ $options->{skip} }) { info "skipping download/empty as requested"; } else { error("/var/cache/apt/archives/ inside the chroot contains: " . (join ', ', (sort @cached_debs))); } } } # To figure out the right package set for the apt variant we can use: # $ apt-get dist-upgrade -o dir::state::status=/dev/null # This is because that variants only contain essential packages and # apt and libapt treats apt as essential. If we want to install less # (essential variant) then we have to compute the package set ourselves. # Same if we want to install priority based variants. if (any { $_ eq $options->{variant} } ('extract', 'custom')) { if (scalar @{ $options->{include} } == 0) { info "nothing to download -- skipping..."; return ([], []); } my %pkgs_to_install; for my $incl (@{ $options->{include} }) { for my $pkg (split /[,\s]+/, $incl) { # strip leading and trailing whitespace $pkg =~ s/^\s+|\s+$//g; # skip if the remainder is an empty string if ($pkg eq '') { next; } $pkgs_to_install{$pkg} = (); } } my %result = (); if ($options->{dryrun}) { info "simulate downloading packages with apt..."; } else { # if there are already packages in /var/cache/apt/archives/, we # need to use our proxysolver to obtain the solution chosen by apt if (scalar @cached_debs > 0) { $result{EDSP_RES} = \@dl_debs; } info "downloading packages with apt..."; } run_apt_progress({ ARGV => [ 'apt-get', '--yes', '-oApt::Get::Download-Only=true', $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (), 'install' ], PKGS => [keys %pkgs_to_install], %result }); } elsif ($options->{variant} eq 'apt') { # if we just want to install Essential:yes packages, apt and their # dependencies then we can make use of libapt treating apt as # implicitly essential. An upgrade with the (currently) empty status # file will trigger an installation of the essential packages plus apt. # # 2018-09-02, #debian-dpkg on OFTC, times in UTC+2 # 23:39 < josch> I'll just put it in my script and if it starts # breaking some time I just say it's apt's fault. :P # 23:42 < DonKult> that is how it usually works, so yes, do that :P (<- # and please add that line next to it so you can # remind me in 5+ years that I said that after I wrote # in the bugreport: "Are you crazy?!? Nobody in his # right mind would even suggest depending on it!") my %result = (); if ($options->{dryrun}) { info "simulate downloading packages with apt..."; } else { # if there are already packages in /var/cache/apt/archives/, we # need to use our proxysolver to obtain the solution chosen by apt if (scalar @cached_debs > 0) { $result{EDSP_RES} = \@dl_debs; } info "downloading packages with apt..."; } run_apt_progress({ ARGV => [ 'apt-get', '--yes', '-oApt::Get::Download-Only=true', $options->{dryrun} ? 
'-oAPT::Get::Simulate=true' : (), 'dist-upgrade' ], %result }); } elsif ( any { $_ eq $options->{variant} } ('essential', 'standard', 'important', 'required', 'buildd') ) { # 2021-06-07, #debian-apt on OFTC, times in UTC+2 # 17:27 < DonKult> (?essential includes 'apt' through) # 17:30 < josch> DonKult: no, because pkgCacheGen::ForceEssential ","; # 17:32 < DonKult> touché my %result = (); if ($options->{dryrun}) { info "simulate downloading packages with apt..."; } else { # if there are already packages in /var/cache/apt/archives/, we # need to use our proxysolver to obtain the solution chosen by apt if (scalar @cached_debs > 0) { $result{EDSP_RES} = \@dl_debs; } info "downloading packages with apt..."; } run_apt_progress({ ARGV => [ 'apt-get', '--yes', '-oApt::Get::Download-Only=true', $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (), 'install', '?narrow(' . ( length($options->{suite}) ? '?or(?archive(^' . $options->{suite} . '$),?codename(^' . $options->{suite} . '$)),' : '' ) . '?architecture(' . $options->{nativearch} . '),?essential)' ], %result }); } else { error "unknown variant: $options->{variant}"; } my @essential_pkgs; if (scalar @cached_debs > 0 && scalar @dl_debs > 0) { my $archives = "/var/cache/apt/archives/"; # for each package in @dl_debs, check if it's in # /var/cache/apt/archives/ and add it to @essential_pkgs foreach my $p (@dl_debs) { my ($pkg, $ver_epoch) = @{$p}; # apt appends the architecture at the end of the package name ($pkg, my $arch) = split ':', $pkg, 2; # apt replaces the colon by its percent encoding %3a my $ver = $ver_epoch; $ver =~ s/:/%3a/; # the architecture returned by apt is the native architecture. # Since we don't know whether the package is architecture # independent or not, we first try with the native arch and then # with "all" and only error out if neither exists. if (-e "$options->{root}/$archives/${pkg}_${ver}_$arch.deb") { push @essential_pkgs, "$archives/${pkg}_${ver}_$arch.deb"; } elsif (-e "$options->{root}/$archives/${pkg}_${ver}_all.deb") { push @essential_pkgs, "$archives/${pkg}_${ver}_all.deb"; } else { error( "cannot find package for $pkg:$arch (= $ver_epoch) " . "in /var/cache/apt/archives/"); } } } else { # collect the .deb files that were downloaded by apt from the content # of /var/cache/apt/archives/ if (!$options->{dryrun}) { my $apt_archives = "/var/cache/apt/archives/"; opendir my $dh, "$options->{root}/$apt_archives" or error "cannot read $apt_archives"; while (my $deb = readdir $dh) { if ($deb !~ /\.deb$/) { next; } $deb = "$apt_archives/$deb"; if (!-f "$options->{root}/$deb") { next; } push @essential_pkgs, $deb; } closedir $dh; if (scalar @essential_pkgs == 0) { # check if a file:// URI was used open(my $pipe_apt, '-|', 'apt-get', 'indextargets', '--format', '$(URI)', 'Created-By: Packages') or error "cannot start apt-get indextargets: $!"; while (my $uri = <$pipe_apt>) { if ($uri =~ /^file:\/\//) { error "nothing got downloaded -- use copy:// instead of" . " file://"; } } error "nothing got downloaded"; } } } # Unpack order matters. Since we create this list using two different # methods but we want both methods to have the same result, we sort the # list before returning it. 
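#
# To make the filename reconstruction above concrete with made-up values: a
# package "foo" at version "1:2.0-1" on amd64 is stored by apt with the
# epoch colon percent-encoded.
#
#   my ($pkg, $ver_epoch, $arch) = ('foo', '1:2.0-1', 'amd64');
#   (my $ver = $ver_epoch) =~ s/:/%3a/;
#   my $deb = "/var/cache/apt/archives/${pkg}_${ver}_${arch}.deb";
#   # -> /var/cache/apt/archives/foo_1%3a2.0-1_amd64.deb
#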
@essential_pkgs = sort @essential_pkgs; return (\@essential_pkgs, \@cached_debs); } sub run_extract() { my $options = shift; my $essential_pkgs = shift; if ($options->{dryrun}) { info "skip extracting packages because of --dry-run"; return; } if (scalar @{$essential_pkgs} == 0) { info "nothing to extract -- skipping..."; return; } info "extracting archives..."; print_progress 0.0; my $counter = 0; my $total = scalar @{$essential_pkgs}; foreach my $deb (@{$essential_pkgs}) { $counter += 1; my $tarfilter; my @tarfilterargs; # if the path-excluded option was added to the dpkg config, # insert the tarfilter between dpkg-deb and tar if (-e "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap") { open(my $fh, '<', "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap") or error "cannot open /etc/dpkg/dpkg.cfg.d/99mmdebstrap: $!"; my @matches = grep { /^path-(?:exclude|include)=/ } <$fh>; close $fh; chop @matches; # remove trailing newline @tarfilterargs = map { "--" . $_ } @matches; } if (scalar @tarfilterargs > 0) { if (-x "./tarfilter") { $tarfilter = "./tarfilter"; } else { $tarfilter = "mmtarfilter"; } } my $dpkg_writer; my $tar_reader; my $filter_reader; my $filter_writer; if (scalar @tarfilterargs > 0) { pipe $filter_reader, $dpkg_writer or error "pipe failed: $!"; pipe $tar_reader, $filter_writer or error "pipe failed: $!"; } else { pipe $tar_reader, $dpkg_writer or error "pipe failed: $!"; } # not using dpkg-deb --extract as that would replace the # merged-usr symlinks with plain directories # https://bugs.debian.org/989602 # not using dpkg --unpack because that would try running preinst # maintainer scripts my $pid1 = fork() // error "fork() failed: $!"; if ($pid1 == 0) { open(STDOUT, '>&', $dpkg_writer) or error "cannot open STDOUT: $!"; close($tar_reader) or error "cannot close tar_reader: $!"; if (scalar @tarfilterargs > 0) { close($filter_reader) or error "cannot close filter_reader: $!"; close($filter_writer) or error "cannot close filter_writer: $!"; } debug("running dpkg-deb --fsys-tarfile $options->{root}/$deb"); eval { Devel::Cover::set_coverage("none") } if $is_covering; exec 'dpkg-deb', '--fsys-tarfile', "$options->{root}/$deb"; } my $pid2; if (scalar @tarfilterargs > 0) { $pid2 = fork() // error "fork() failed: $!"; if ($pid2 == 0) { open(STDIN, '<&', $filter_reader) or error "cannot open STDIN: $!"; open(STDOUT, '>&', $filter_writer) or error "cannot open STDOUT: $!"; close($dpkg_writer) or error "cannot close dpkg_writer: $!"; close($tar_reader) or error "cannot close tar_reader: $!"; debug("running $tarfilter " . (join " ", @tarfilterargs)); eval { Devel::Cover::set_coverage("none") } if $is_covering; exec $tarfilter, @tarfilterargs; } } my $pid3 = fork() // error "fork() failed: $!"; if ($pid3 == 0) { open(STDIN, '<&', $tar_reader) or error "cannot open STDIN: $!"; close($dpkg_writer) or error "cannot close dpkg_writer: $!"; if (scalar @tarfilterargs > 0) { close($filter_reader) or error "cannot close filter_reader: $!"; close($filter_writer) or error "cannot close filter_writer: $!"; } debug( "running tar -C $options->{root}" . 
" --keep-directory-symlink --extract --file -"); eval { Devel::Cover::set_coverage("none") } if $is_covering; exec 'tar', '-C', $options->{root}, '--keep-directory-symlink', '--extract', '--file', '-'; } close($dpkg_writer) or error "cannot close dpkg_writer: $!"; close($tar_reader) or error "cannot close tar_reader: $!"; if (scalar @tarfilterargs > 0) { close($filter_reader) or error "cannot close filter_reader: $!"; close($filter_writer) or error "cannot close filter_writer: $!"; } waitpid($pid1, 0); $? == 0 or error "dpkg-deb --fsys-tarfile failed: $?"; if (scalar @tarfilterargs > 0) { waitpid($pid2, 0); $? == 0 or error "tarfilter failed: $?"; } waitpid($pid3, 0); $? == 0 or error "tar --extract failed: $?"; print_progress($counter / $total * 100); } print_progress "done"; return; } sub run_prepare { my $options = shift; if ($options->{mode} eq 'fakechroot') { # this borrows from and extends # /etc/fakechroot/debootstrap.env and # /etc/fakechroot/chroot.env { my $ldconfig = getcwd() . '/ldconfig.fakechroot'; if (!-x $ldconfig) { $ldconfig = '/usr/libexec/mmdebstrap/ldconfig.fakechroot'; } my @fakechrootsubst = (); foreach my $d ('/usr/sbin', '/usr/bin', '/sbin', '/bin') { push @fakechrootsubst, "$d/chroot=/usr/sbin/chroot.fakechroot"; push @fakechrootsubst, "$d/mkfifo=/bin/true"; push @fakechrootsubst, "$d/ldconfig=$ldconfig"; push @fakechrootsubst, "$d/ldd=/usr/bin/ldd.fakechroot"; push @fakechrootsubst, "$d/ischroot=/bin/true"; } if (defined $ENV{FAKECHROOT_CMD_SUBST} && $ENV{FAKECHROOT_CMD_SUBST} ne "") { push @fakechrootsubst, split /:/, $ENV{FAKECHROOT_CMD_SUBST}; } ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{FAKECHROOT_CMD_SUBST} = join ':', @fakechrootsubst; } if (defined $ENV{FAKECHROOT_EXCLUDE_PATH} && $ENV{FAKECHROOT_EXCLUDE_PATH} ne "") { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{FAKECHROOT_EXCLUDE_PATH} = "$ENV{FAKECHROOT_EXCLUDE_PATH}:/dev:/proc:/sys"; } else { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{FAKECHROOT_EXCLUDE_PATH} = '/dev:/proc:/sys'; } # workaround for long unix socket path if FAKECHROOT_BASE # exceeds the limit of 108 bytes { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{FAKECHROOT_AF_UNIX_PATH} = "/tmp"; } { my @ldlibpath = (); if (defined $ENV{LD_LIBRARY_PATH} && $ENV{LD_LIBRARY_PATH} ne "") { push @ldlibpath, (split /:/, $ENV{LD_LIBRARY_PATH}); } # FIXME: workaround allowing installation of systemd should # live in fakechroot, see #917920 push @ldlibpath, "$options->{root}/lib/systemd"; my $parse_ld_so_conf; $parse_ld_so_conf = sub { foreach my $conf (@_) { next if !-r $conf; open my $fh, '<', "$conf" or error "can't read $conf: $!"; while (my $line = <$fh>) { chomp $line; if ($line eq "") { next; } if ($line =~ /^#/) { next; } if ($line =~ /include (.*)/) { $parse_ld_so_conf->(glob("$options->{root}/$1")); next; } if (!-d "$options->{root}/$line") { next; } push @ldlibpath, "$options->{root}/$line"; } close $fh; } }; if (-e "$options->{root}/etc/ld.so.conf") { $parse_ld_so_conf->("$options->{root}/etc/ld.so.conf"); } ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{LD_LIBRARY_PATH} = join ':', @ldlibpath; } } # make sure that APT_CONFIG and TMPDIR are not set when executing # anything inside the chroot my @chrootcmd = ('env', '--unset=APT_CONFIG', '--unset=TMPDIR'); if ($options->{mode} eq 'proot') { push @chrootcmd, ( 'proot', '--root-id', '--bind=/dev', '--bind=/proc', '--bind=/sys', "--rootfs=$options->{root}", '--cwd=/' ); } elsif ( any { $_ eq 
$options->{mode} } ('root', 'unshare', 'fakechroot') ) { push @chrootcmd, ('/usr/sbin/chroot', $options->{root}); } else { error "unknown mode: $options->{mode}"; } # copy qemu-user-static binary into chroot or setup proot with # --qemu if (defined $options->{qemu}) { if ($options->{mode} eq 'proot') { push @chrootcmd, "--qemu=qemu-$options->{qemu}"; } elsif ($options->{mode} eq 'fakechroot') { # Make sure that the fakeroot and fakechroot shared # libraries exist for the right architecture open my $fh, '-|', 'dpkg-architecture', '-a', $options->{nativearch}, '-qDEB_HOST_MULTIARCH' // error "failed to fork(): $!"; chomp( my $deb_host_multiarch = do { local $/; <$fh> } ); close $fh; if (($? != 0) or (!$deb_host_multiarch)) { error "dpkg-architecture failed: $?"; } my $fakechrootdir = "/usr/lib/$deb_host_multiarch/fakechroot"; if (!-e "$fakechrootdir/libfakechroot.so") { error "$fakechrootdir/libfakechroot.so doesn't exist." . " Install libfakechroot:$options->{nativearch}" . " outside the chroot"; } my $fakerootdir = "/usr/lib/$deb_host_multiarch/libfakeroot"; if (!-e "$fakerootdir/libfakeroot-sysv.so") { error "$fakerootdir/libfakeroot-sysv.so doesn't exist." . " Install libfakeroot:$options->{nativearch}" . " outside the chroot"; } # The rest of this block sets environment variables, so we # have to add the "no critic" statement to stop perlcritic # from complaining about setting global variables ## no critic (Variables::RequireLocalizedPunctuationVars) # fakechroot only fills LD_LIBRARY_PATH with the # directories of the host's architecture. We append the # directories of the chroot architecture. $ENV{LD_LIBRARY_PATH} = "$ENV{LD_LIBRARY_PATH}:$fakechrootdir:$fakerootdir"; # The binfmt support on the outside is used, so qemu needs # to know where it has to look for shared libraries if (defined $ENV{QEMU_LD_PREFIX} && $ENV{QEMU_LD_PREFIX} ne "") { $ENV{QEMU_LD_PREFIX} = "$ENV{QEMU_LD_PREFIX}:$options->{root}"; } else { $ENV{QEMU_LD_PREFIX} = $options->{root}; } } elsif (any { $_ eq $options->{mode} } ('root', 'unshare')) { my $require_qemu_static = 1; # make $@ local, so we don't print an eventual error # in other parts where we evaluate $@ local $@ = ''; eval { # Check for the F flag which makes the kernel open the binfmt # binary at configuration time instead of lazily at startup # time. If the flag is set, then the qemu-static binary is not # required inside the chroot. open my $fh, '<', "/proc/sys/fs/binfmt_misc/qemu-$options->{qemu}"; while (my $line = <$fh>) { chomp($line); if ($line =~ /^flags: [A-Z]*F[A-Z]*$/) { $require_qemu_static = 0; last; } } close $fh; }; if ($require_qemu_static) { # other modes require a static qemu-user binary my $qemubin = "/usr/bin/qemu-$options->{qemu}-static"; if (!-e $qemubin) { error "cannot find $qemubin"; } copy $qemubin, "$options->{root}/$qemubin" or error "cannot copy $qemubin: $!"; # File::Copy does not retain permissions but on some # platforms (like Travis CI) the binfmt interpreter must # have the executable bit set or otherwise execve will # fail with EACCES chmod 0755, "$options->{root}/$qemubin" or error "cannot chmod $qemubin: $!"; } } else { error "unknown mode: $options->{mode}"; } } # some versions of coreutils use the renameat2 system call in mv. # This breaks certain versions of fakechroot and proot. Here we do # a sanity check and warn the user in case things might break. 
if (any { $_ eq $options->{mode} } ('fakechroot', 'proot') and -e "$options->{root}/bin/mv") { mkdir "$options->{root}/000-move-me" or error "cannot create directory: $!"; my $ret = system @chrootcmd, '/bin/mv', '/000-move-me', '/001-delete-me'; if ($ret != 0) { if ($options->{mode} eq 'proot') { info "the /bin/mv binary inside the chroot doesn't" . " work under proot"; info "this is likely due to missing support for" . " renameat2 in proot"; info "see https://github.com/proot-me/PRoot/issues/147"; } else { info "the /bin/mv binary inside the chroot doesn't" . " work under fakechroot"; info "with certain versions of coreutils and glibc," . " this is due to missing support for renameat2 in" . " fakechroot"; info "see https://github.com/dex4er/fakechroot/issues/60"; } info "expect package post installation scripts not to work"; rmdir "$options->{root}/000-move-me" or error "cannot rmdir: $!"; } else { rmdir "$options->{root}/001-delete-me" or error "cannot rmdir: $!"; } } return \@chrootcmd; } sub run_essential() { my $options = shift; my $essential_pkgs = shift; my $chrootcmd = shift; my $cached_debs = shift; if (scalar @{$essential_pkgs} == 0) { info "no essential packages -- skipping..."; return; } if ($options->{mode} eq 'chrootless') { if ($options->{dryrun}) { info "simulate installing essential packages..."; } else { info "installing essential packages..."; } # FIXME: the dpkg config from the host is parsed before the command # line arguments are parsed and might break this mode # Example: if the host has --path-exclude set, then this will also # affect the chroot. See #808203 my @chrootless_opts = ( '-oDPkg::Options::=--force-not-root', '-oDPkg::Options::=--force-script-chrootless', '-oDPkg::Options::=--root=' . $options->{root}, '-oDPkg::Options::=--log=' . "$options->{root}/var/log/dpkg.log", $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (), ); if (defined $options->{qemu}) { # The binfmt support on the outside is used, so qemu needs to know # where it has to look for shared libraries if (defined $ENV{QEMU_LD_PREFIX} && $ENV{QEMU_LD_PREFIX} ne "") { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{QEMU_LD_PREFIX} = "$ENV{QEMU_LD_PREFIX}:$options->{root}"; } else { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{QEMU_LD_PREFIX} = $options->{root}; } } # we don't use apt because that will not run the base-passwd preinst # early enough #run_apt_progress({ # ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'], # PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}], #}); run_dpkg_progress({ ARGV => [ 'dpkg', '--force-not-root', '--force-script-chrootless', "--root=$options->{root}", "--log=$options->{root}/var/log/dpkg.log", '--install', '--force-depends' ], PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}] }); } elsif ( any { $_ eq $options->{mode} } ('root', 'unshare', 'fakechroot', 'proot') ) { # install the extracted packages properly # we need --force-depends because dpkg does not take Pre-Depends # into account and thus doesn't install them in the right order # And the --predep-package option is broken: #539133 # # We could use apt from outside the chroot using DPkg::Chroot-Directory # but then the preinst script of base-passwd will not be called early # enough and packages will fail to install because they are missing # /etc/passwd. Also, with plain dpkg the essential variant can finish # within 9 seconds. If we use apt instead, it becomes 12 seconds. We # prefer speed here. 
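#
# Stripped of the progress reporting and mode handling, the installation
# step sketched above boils down to something like the following (assuming
# root mode, with $root being the chroot path and @essential_pkgs holding
# .deb paths relative to the chroot root):
#
#   0 == system('/usr/sbin/chroot', $root,
#               'dpkg', '--install', '--force-depends', @essential_pkgs)
#       or die "dpkg --install failed: $?";
#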
if ($options->{dryrun}) { info "simulate installing essential packages..."; } else { info "installing essential packages..."; run_chroot( sub { run_dpkg_progress({ ARGV => [ @{$chrootcmd}, 'dpkg', '--install', '--force-depends' ], PKGS => $essential_pkgs, }); }, $options ); } } else { error "unknown mode: $options->{mode}"; } if (any { $_ eq 'essential/unlink' } @{ $options->{skip} }) { info "skipping essential/unlink as requested"; } else { foreach my $deb (@{$essential_pkgs}) { # do not unlink those packages that were in /var/cache/apt/archive # before the download phase next if any { "/var/cache/apt/archives/$_" eq $deb } @{$cached_debs}; unlink "$options->{root}/$deb" or error "cannot unlink $deb: $!"; } } return; } sub run_install() { my $options = shift; my $chrootcmd = shift; my %pkgs_to_install; for my $incl (@{ $options->{include} }) { for my $pkg (split /[,\s]+/, $incl) { # strip leading and trailing whitespace $pkg =~ s/^\s+|\s+$//g; # skip if the remainder is an empty string if ($pkg eq '') { next; } $pkgs_to_install{$pkg} = (); } } if ($options->{variant} eq 'buildd') { $pkgs_to_install{'build-essential'} = (); } if ( any { $_ eq $options->{variant} } ('required', 'important', 'standard', 'buildd') ) { # Many of the priority:required packages are also essential:yes. We # make sure not to select those here to avoid useless "xxx is already # the newest version" messages. my $priority; if (any { $_ eq $options->{variant} } ('required', 'buildd')) { $priority = '?and(?priority(required),?not(?essential))'; } elsif ($options->{variant} eq 'important') { $priority = '?and(?or(?priority(required),?priority(important)),' . '?not(?essential))'; } elsif ($options->{variant} eq 'standard') { $priority = '?and(?or(~prequired,~pimportant,~pstandard),' . '?not(?essential))'; } $pkgs_to_install{ "?narrow(" . ( length($options->{suite}) ? '?or(?archive(^' . $options->{suite} . '$),?codename(^' . $options->{suite} . '$)),' : '' ) . "?architecture($options->{nativearch})," . "$priority)" } = (); } my @pkgs_to_install = keys %pkgs_to_install; if ($options->{mode} eq 'chrootless') { if (scalar @pkgs_to_install > 0) { my @chrootless_opts = ( '-oDPkg::Options::=--force-not-root', '-oDPkg::Options::=--force-script-chrootless', '-oDPkg::Options::=--root=' . $options->{root}, '-oDPkg::Options::=--log=' . "$options->{root}/var/log/dpkg.log", $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (), ); run_apt_progress({ ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'], PKGS => [@pkgs_to_install], }); } } elsif ( any { $_ eq $options->{mode} } ('root', 'unshare', 'fakechroot', 'proot') ) { if ($options->{variant} ne 'custom' and scalar @pkgs_to_install > 0) { # Advantage of running apt on the outside instead of inside the # chroot: # # - we can build chroots without apt (for example from buildinfo # files) # # - we do not need to install additional packages like # apt-transport-* or ca-certificates inside the chroot # # - we do not not need additional key material inside the chroot # # - we can make use of file:// and copy:// # # - we can use EDSP solvers without installing apt-utils or other # solvers inside the chroot # # The DPkg::Install::Recursive::force=true workaround can be # dropped after this issue is fixed: # https://salsa.debian.org/apt-team/apt/-/merge_requests/189 # # We could also move the dpkg call to the outside and run dpkg with # --root but this would only make sense in situations where there # is no dpkg inside the chroot. 
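#
# The idea in a standalone sketch (assuming APT_CONFIG already points at
# the configuration written by run_setup, so that apt's Dir already refers
# to the chroot; the package name is a placeholder):
#
#   0 == system('apt-get', '--yes',
#               '-o', "DPkg::Chroot-Directory=$options->{root}",
#               'install', 'some-package')
#       or die "apt-get install failed: $?";
#
# apt resolves and downloads on the outside while dpkg runs chrooted inside
# the target directory.
#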
if (!$options->{dryrun}) { run_chroot( sub { info "installing remaining packages inside the" . " chroot..."; run_apt_progress({ ARGV => [ 'apt-get', '-o', 'Dir::Bin::dpkg=env', '-o', 'DPkg::Options::=--unset=TMPDIR', '-o', 'DPkg::Options::=dpkg', $options->{mode} eq 'fakechroot' ? ( '-o', 'DPkg::Install::Recursive::force=true' ) : (), '-o', "DPkg::Chroot-Directory=$options->{root}", '--yes', 'install' ], PKGS => [@pkgs_to_install], }); }, $options ); } else { info "simulate installing remaining packages inside the" . " chroot..."; run_apt_progress({ ARGV => [ 'apt-get', '--yes', '-oAPT::Get::Simulate=true', 'install' ], PKGS => [@pkgs_to_install], }); } } } else { error "unknown mode: $options->{mode}"; } return; } sub run_cleanup() { my $options = shift; if (any { $_ eq 'cleanup/apt' } @{ $options->{skip} }) { info "skipping cleanup/apt as requested"; } else { if ( none { $_ eq 'cleanup/apt/lists' } @{ $options->{skip} } and none { $_ eq 'cleanup/apt/cache' } @{ $options->{skip} }) { info "cleaning package lists and apt cache..."; } if (any { $_ eq 'cleanup/apt/lists' } @{ $options->{skip} }) { info "skipping cleanup/apt/lists as requested"; } else { if (any { $_ eq 'cleanup/apt/cache' } @{ $options->{skip} }) { info "cleaning package lists..."; } run_apt_progress({ ARGV => [ 'apt-get', '--option', 'Dir::Etc::SourceList=/dev/null', '--option', 'Dir::Etc::SourceParts=/dev/null', 'update' ], CHDIR => $options->{root}, }); } if (any { $_ eq 'cleanup/apt/cache' } @{ $options->{skip} }) { info "skipping cleanup/apt/cache as requested"; } else { if (any { $_ eq 'cleanup/apt/lists' } @{ $options->{skip} }) { info "cleaning apt cache..."; } run_apt_progress( { ARGV => ['apt-get', 'clean'], CHDIR => $options->{root} }); } # apt since 1.6 creates the auxfiles directory. If apt inside the # chroot is older than that, then it will not know how to clean it. 
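#
# Why the "update" run above empties the package lists: apt-get update
# discards index files that no longer belong to any configured source, so
# pointing both the sources.list and sources.list.d locations at /dev/null
# leaves /var/lib/apt/lists effectively empty. As a standalone sketch:
#
#   0 == system('apt-get',
#               '-o', 'Dir::Etc::SourceList=/dev/null',
#               '-o', 'Dir::Etc::SourceParts=/dev/null',
#               'update')
#       or die "apt-get update failed: $?";
#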
if (-e "$options->{root}/var/lib/apt/lists/auxfiles") { 0 == system( 'rm', '--interactive=never', '--recursive', '--preserve-root', '--one-file-system', "$options->{root}/var/lib/apt/lists/auxfiles" ) or error "rm failed: $?"; } } if (any { $_ eq 'cleanup/mmdebstrap' } @{ $options->{skip} }) { info "skipping cleanup/mmdebstrap as requested"; } else { # clean up temporary configuration file unlink "$options->{root}/etc/apt/apt.conf.d/00mmdebstrap" or error "failed to unlink /etc/apt/apt.conf.d/00mmdebstrap: $!"; if (defined $ENV{APT_CONFIG} && -e $ENV{APT_CONFIG}) { unlink $ENV{APT_CONFIG} or error "failed to unlink $ENV{APT_CONFIG}: $!"; } if (any { $_ eq 'cleanup/mmdebstrap/qemu' } @{ $options->{skip} }) { info "skipping cleanup/mmdebstrap/qume as requested"; } elsif (defined $options->{qemu} and any { $_ eq $options->{mode} } ('root', 'unshare') and -e "$options->{root}/usr/bin/qemu-$options->{qemu}-static") { unlink "$options->{root}/usr/bin/qemu-$options->{qemu}-static" or error "cannot unlink /usr/bin/qemu-$options->{qemu}-static: $!"; } } if (any { $_ eq 'cleanup/reproducible' } @{ $options->{skip} }) { info "skipping cleanup/reproducible as requested"; } else { # clean up certain files to make output reproducible foreach my $fname ( '/var/log/dpkg.log', '/var/log/apt/history.log', '/var/log/apt/term.log', '/var/log/alternatives.log', '/var/cache/ldconfig/aux-cache', '/var/log/apt/eipp.log.xz', '/var/lib/dbus/machine-id' ) { my $path = "$options->{root}$fname"; if (!-e $path) { next; } unlink $path or error "cannot unlink $path: $!"; } if (-e "$options->{root}/etc/machine-id") { # from machine-id(5): # For operating system images which are created once and used on # multiple machines, for example for containers or in the cloud, # /etc/machine-id should be an empty file in the generic file # system image. An ID will be generated during boot and saved to # this file if possible. Having an empty file in place is useful # because it allows a temporary file to be bind-mounted over the # real file, in case the image is used read-only. unlink "$options->{root}/etc/machine-id" or error "cannot unlink /etc/machine-id: $!"; open my $fh, '>', "$options->{root}/etc/machine-id" or error "failed to open(): $!"; print $fh "uninitialized\n"; close $fh; } } if (any { $_ eq 'cleanup/tmp' } @{ $options->{skip} }) { info "skipping cleanup/tmp as requested"; } else { # remove any possible leftovers in /tmp but warn about it if (-d "$options->{root}/tmp") { opendir(my $dh, "$options->{root}/tmp") or error "Can't opendir($options->{root}/tmp): $!"; while (my $entry = readdir $dh) { # skip the "." and ".." 
entries next if $entry eq "."; next if $entry eq ".."; warning "deleting files in /tmp: $entry"; 0 == system( 'rm', '--interactive=never', '--recursive', '--preserve-root', '--one-file-system', "$options->{root}/tmp/$entry" ) or error "rm failed: $?"; } closedir($dh); } } return; } # messages from process inside unshared namespace to the outside # openw -- open file for writing # untar -- extract tar into directory # write -- write data to last opened file or tar process # close -- finish file writing or tar extraction # adios -- last message and tear-down # messages from process outside unshared namespace to the inside # okthx -- success sub checkokthx { my $fh = shift; my $ret = read($fh, my $buf, 2 + 5) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } my ($len, $msg) = unpack("nA5", $buf); if ($msg ne "okthx") { error "expected okthx but got: $msg"; } if ($len != 0) { error "expected no payload but got $len bytes"; } return; } # resolve a path inside a chroot sub chrooted_realpath { my $root = shift; my $src = shift; my $result = $root; my $prefix; # relative paths are relative to the root of the chroot # remove prefixed slashes $src =~ s{^/+}{}; my $loop = 0; while (length $src) { if ($loop > 25) { error "too many levels of symbolic links"; } # Get the first directory component. ($prefix, $src) = split m{/+}, $src, 2; # Resolve the first directory component. if ($prefix eq ".") { # Ignore, stay at the same directory. } elsif ($prefix eq "..") { # Go up one directory. $result =~ s{(.*)/[^/]*}{$1}; # but not further than the root if ($result !~ m/^\Q$root\E/) { $result = $root; } } elsif (-l "$result/$prefix") { my $dst = readlink "$result/$prefix"; if ($dst =~ s{^/+}{}) { # Absolute pathname, reset result back to $root. $result = $root; } $src = length $src ? "$dst/$src" : $dst; $loop++; } else { # Otherwise append the prefix. $result = "$result/$prefix"; } } return $result; } sub hookhelper { # we put everything in an eval block because that way we can easily handle # errors without goto labels or much code duplication: the error handler # has to send an "error" message to the other side eval { my $root = $ARGV[1]; my $mode = $ARGV[2]; my $hook = $ARGV[3]; my $qemu = $ARGV[4]; $verbosity_level = $ARGV[5]; my $command = $ARGV[6]; my @cmdprefix = (); my @tarcmd = ( 'tar', '--numeric-owner', '--xattrs', '--format=pax', '--pax-option=exthdr.name=%d/PaxHeaders/%f,' . 'delete=atime,delete=ctime' ); if ($hook eq 'setup') { if ($mode eq 'proot') { # since we cannot run tar inside the chroot under proot during # the setup hook because the chroot is empty, we have to run # tar from the outside, which leads to all files being owned # by the user running mmdebstrap. To let the ownership # information not be completely off, we force all files be # owned by the root user. 
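#
# The messages exchanged over the hook socket (openw, openr, untar, mktar,
# mktac, write, close, adios, okthx) all share one simple framing: a 16-bit
# big-endian payload length, a fixed five-byte command name, then the
# payload. A small sketch of both directions (the path is a placeholder):
#
#   my $payload = '/some/path';
#   my $frame   = pack("n", length $payload) . "untar" . $payload;
#
#   # decoding on the receiving side:
#   my ($len, $cmd) = unpack("nA5", substr($frame, 0, 7));
#   my $body = substr($frame, 7, $len);
#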
push @tarcmd, '--owner=0', '--group=0'; } } elsif (any { $_ eq $hook } ('extract', 'essential', 'customize')) { if ($mode eq 'fakechroot') { # Fakechroot requires tar to run inside the chroot or # otherwise absolute symlinks will include the path to the # root directory push @cmdprefix, '/usr/sbin/chroot', $root; } elsif ($mode eq 'proot') { # proot requires tar to run inside proot or otherwise # permissions will be completely off push @cmdprefix, 'proot', '--root-id', "--rootfs=$root", '--cwd=/', "--qemu=$qemu"; } elsif (any { $_ eq $mode } ('root', 'chrootless', 'unshare')) { # not chrooting in this case } else { error "unknown mode: $mode"; } } else { error "unknown hook: $hook"; } if ( any { $_ eq $command } ('copy-in', 'tar-in', 'upload', 'sync-in') ) { if (scalar @ARGV < 9) { error "$command needs at least one path on the" . " outside and the output path inside the chroot"; } my $outpath = $ARGV[-1]; for (my $i = 7 ; $i < $#ARGV ; $i++) { # the right argument for tar's --directory argument depends on # whether tar is called from inside the chroot or from the # outside my $directory; if ($hook eq 'setup') { # tar runs outside, so acquire the correct path $directory = chrooted_realpath $root, $outpath; } elsif ( any { $_ eq $hook } ('extract', 'essential', 'customize') ) { if (any { $_ eq $mode } ('fakechroot', 'proot')) { # tar will run inside the chroot $directory = $outpath; } elsif ( any { $_ eq $mode } ('root', 'chrootless', 'unshare') ) { $directory = chrooted_realpath $root, $outpath; } else { error "unknown mode: $mode"; } } else { error "unknown hook: $hook"; } # if chrooted_realpath was used and if neither fakechroot or # proot were used (absolute symlinks will be broken) we can # check and potentially fail early if the target does not exist if (none { $_ eq $mode } ('fakechroot', 'proot')) { my $dirtocheck = $directory; if ($command eq 'upload') { # check the parent directory instead $dirtocheck =~ s/(.*)\/[^\/]*/$1/; } if (!-e $dirtocheck) { error "path does not exist: $dirtocheck"; } if (!-d $dirtocheck) { error "path is not a directory: $dirtocheck"; } } my $fh; if ($command eq 'upload') { # open the requested file for writing open $fh, '|-', @cmdprefix, 'sh', '-c', 'cat > "$1"', 'exec', $directory // error "failed to fork(): $!"; } elsif ( any { $_ eq $command } ('copy-in', 'tar-in', 'sync-in') ) { # open a tar process that extracts the tarfile that we # supply it with on stdin to the output directory inside # the chroot my @cmd = ( @cmdprefix, @tarcmd, '--xattrs-include=*', '--directory', $directory, '--extract', '--file', '-' ); debug("helper: running " . (join " ", @cmd)); open($fh, '|-', @cmd) // error "failed to fork(): $!"; } else { error "unknown command: $command"; } if ($command eq 'copy-in') { # instruct the parent process to create a tarball of the # requested path outside the chroot debug "helper: sending mktar"; print STDOUT ( pack("n", length $ARGV[$i]) . "mktar" . $ARGV[$i]); } elsif ($command eq 'sync-in') { # instruct the parent process to create a tarball of the # content of the requested path outside the chroot debug "helper: sending mktac"; print STDOUT ( pack("n", length $ARGV[$i]) . "mktac" . $ARGV[$i]); } elsif (any { $_ eq $command } ('upload', 'tar-in')) { # instruct parent process to open a tarball of the # requested path outside the chroot for reading debug "helper: sending openr"; print STDOUT ( pack("n", length $ARGV[$i]) . "openr" . 
$ARGV[$i]); } else { error "unknown command: $command"; } STDOUT->flush(); debug "helper: waiting for okthx"; checkokthx \*STDIN; # handle "write" messages from the parent process and feed # their payload into the tar process until a "close" message # is encountered while (1) { # receive the next message my $ret = read(STDIN, my $buf, 2 + 5) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } my ($len, $msg) = unpack("nA5", $buf); debug "helper: received message: $msg"; if ($msg eq "close") { # finish the loop if ($len != 0) { error "expected no payload but got $len bytes"; } debug "helper: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); last; } elsif ($msg ne "write") { error "expected write but got: $msg"; } # read the payload my $content; { my $ret = read(STDIN, $content, $len) // error "error cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # write the payload to the tar process print $fh $content or error "cannot write to tar process: $!"; debug "helper: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); } close $fh; if ($command ne 'upload' and $? != 0) { error "tar failed"; } } } elsif ( any { $_ eq $command } ('copy-out', 'tar-out', 'download', 'sync-out') ) { if (scalar @ARGV < 9) { error "$command needs at least one path inside the chroot and" . " the output path on the outside"; } my $outpath = $ARGV[-1]; for (my $i = 7 ; $i < $#ARGV ; $i++) { # the right argument for tar's --directory argument depends on # whether tar is called from inside the chroot or from the # outside my $directory; if ($hook eq 'setup') { # tar runs outside, so acquire the correct path $directory = chrooted_realpath $root, $ARGV[$i]; } elsif ( any { $_ eq $hook } ('extract', 'essential', 'customize') ) { if (any { $_ eq $mode } ('fakechroot', 'proot')) { # tar will run inside the chroot $directory = $ARGV[$i]; } elsif ( any { $_ eq $mode } ('root', 'chrootless', 'unshare') ) { $directory = chrooted_realpath $root, $ARGV[$i]; } else { error "unknown mode: $mode"; } } else { error "unknown hook: $hook"; } # if chrooted_realpath was used and if neither fakechroot or # proot were used (absolute symlinks will be broken) we can # check and potentially fail early if the source does not exist if (none { $_ eq $mode } ('fakechroot', 'proot')) { if (!-e $directory) { error "path does not exist: $directory"; } if ($command eq 'download') { if (!-f $directory) { error "path is not a file: $directory"; } } } my $fh; if ($command eq 'download') { # open the requested file for reading open $fh, '-|', @cmdprefix, 'sh', '-c', 'cat "$1"', 'exec', $directory // error "failed to fork(): $!"; } elsif ($command eq 'sync-out') { # Open a tar process that creates a tarfile of everything # inside the requested directory inside the chroot and # writes it to stdout. my @cmd = ( @cmdprefix, @tarcmd, '--directory', $directory, '--create', '--file', '-', '.' ); debug("helper: running " . (join " ", @cmd)); open($fh, '-|', @cmd) // error "failed to fork(): $!"; } elsif (any { $_ eq $command } ('copy-out', 'tar-out')) { # Open a tar process that creates a tarfile of the # requested directory inside the chroot and writes it to # stdout. To emulate the behaviour of cp, change to the # dirname of the requested path first. 
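#
# Concretely (with a made-up path): copying out /etc/apt/sources.list
# produces a tarball whose only top-level member is "sources.list", just
# like "cp" copies the file itself rather than its parent directory.
#
#   use File::Basename qw(dirname basename);
#   my $directory = '/path/to/rootfs/etc/apt/sources.list';   # placeholder
#   my @cmd = ('tar', '--directory', dirname($directory),
#              '--create', '--file', '-', basename($directory));
#   # tar is started in /path/to/rootfs/etc/apt and archives "sources.list"
#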
my @cmd = ( @cmdprefix, @tarcmd, '--directory', dirname($directory), '--create', '--file', '-', basename($directory)); debug("helper: running " . (join " ", @cmd)); open($fh, '-|', @cmd) // error "failed to fork(): $!"; } else { error "unknown command: $command"; } if (any { $_ eq $command } ('copy-out', 'sync-out')) { # instruct the parent process to extract a tarball to a # certain path outside the chroot debug "helper: sending untar"; print STDOUT ( pack("n", length $outpath) . "untar" . $outpath); } elsif (any { $_ eq $command } ('download', 'tar-out')) { # instruct parent process to open a tarball of the # requested path outside the chroot for writing debug "helper: sending openw"; print STDOUT ( pack("n", length $outpath) . "openw" . $outpath); } else { error "unknown command: $command"; } STDOUT->flush(); debug "helper: waiting for okthx"; checkokthx \*STDIN; # read from the tar process and send as payload to the parent # process while (1) { # read from tar my $ret = read($fh, my $cont, 4096) // error "cannot read from pipe: $!"; if ($ret == 0) { last; } debug "helper: sending write"; # send to parent print STDOUT pack("n", $ret) . "write" . $cont; STDOUT->flush(); debug "helper: waiting for okthx"; checkokthx \*STDIN; if ($ret < 4096) { last; } } # signal to the parent process that we are done debug "helper: sending close"; print STDOUT pack("n", 0) . "close"; STDOUT->flush(); debug "helper: waiting for okthx"; checkokthx \*STDIN; close $fh; if ($? != 0) { error "$command failed"; } } } else { error "unknown command: $command"; } }; if ($@) { # inform the other side that something went wrong print STDOUT (pack("n", 0) . "error"); STDOUT->flush(); error "hookhelper failed: $@"; } return; } sub hooklistener { # we put everything in an eval block because that way we can easily handle # errors without goto labels or much code duplication: the error handler # has to send an "error" message to the other side eval { $verbosity_level = $ARGV[1]; while (1) { # get the next message my $msg = "error"; my $len = -1; { debug "listener: reading next command"; my $ret = read(STDIN, my $buf, 2 + 5) // error "cannot read from socket: $!"; debug "listener: finished reading command"; if ($ret == 0) { error "received eof on socket"; } ($len, $msg) = unpack("nA5", $buf); } if ($msg eq "adios") { debug "listener: received message: adios"; # setup finished, so we break out of the loop if ($len != 0) { error "expected no payload but got $len bytes"; } last; } elsif ($msg eq "openr") { # handle the openr message debug "listener: received message: openr"; my $infile; { my $ret = read(STDIN, $infile, $len) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # make sure that the requested path exists outside the chroot if (!-e $infile) { error "$infile does not exist"; } debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); open my $fh, '<', $infile or error "failed to open $infile for reading: $!"; # read from the file and send as payload to the child process while (1) { # read from file my $ret = read($fh, my $cont, 4096) // error "cannot read from pipe: $!"; if ($ret == 0) { last; } debug "listener: sending write"; # send to child print STDOUT pack("n", $ret) . "write" . $cont; STDOUT->flush(); debug "listener: waiting for okthx"; checkokthx \*STDIN; if ($ret < 4096) { last; } } # signal to the child process that we are done debug "listener: sending close"; print STDOUT pack("n", 0) . 
"close"; STDOUT->flush(); debug "listener: waiting for okthx"; checkokthx \*STDIN; close $fh; } elsif ($msg eq "openw") { debug "listener: received message: openw"; # payload is the output directory my $outfile; { my $ret = read(STDIN, $outfile, $len) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # make sure that the directory exists my $outdir = dirname($outfile); if (-e $outdir) { if (!-d $outdir) { error "$outdir already exists but is not a directory"; } } else { my $num_created = make_path $outdir, { error => \my $err }; if ($err && @$err) { error( join "; ", ( map { "cannot create " . (join ": ", %{$_}) } @$err )); } elsif ($num_created == 0) { error "cannot create $outdir"; } } debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); # now we expect one or more "write" messages containing the # tarball to write open my $fh, '>', $outfile or error "failed to open $outfile for writing: $!"; # handle "write" messages from the child process and feed # their payload into the file handle until a "close" message # is encountered while (1) { # receive the next message my $ret = read(STDIN, my $buf, 2 + 5) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } my ($len, $msg) = unpack("nA5", $buf); debug "listener: received message: $msg"; if ($msg eq "close") { # finish the loop if ($len != 0) { error "expected no payload but got $len bytes"; } debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); last; } elsif ($msg ne "write") { # we should not receive this message at this point error "expected write but got: $msg"; } # read the payload my $content; { my $ret = read(STDIN, $content, $len) // error "error cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # write the payload to the file handle print $fh $content or error "cannot write to file handle: $!"; debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); } close $fh; } elsif (any { $_ eq $msg } ('mktar', 'mktac')) { # handle the mktar message debug "listener: received message: $msg"; my $indir; { my $ret = read(STDIN, $indir, $len) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # make sure that the requested path exists outside the chroot if (!-e $indir) { error "$indir does not exist"; } debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); # Open a tar process creating a tarfile of the instructed # path. To emulate the behaviour of cp, change to the # dirname of the requested path first. my @cmd = ( 'tar', '--numeric-owner', '--xattrs', '--format=pax', '--pax-option=exthdr.name=%d/PaxHeaders/%f,' . 'delete=atime,delete=ctime', '--directory', $msg eq 'mktar' ? dirname($indir) : $indir, '--create', '--file', '-', $msg eq 'mktar' ? basename($indir) : '.' ); debug("listener: running " . (join " ", @cmd)); open(my $fh, '-|', @cmd) // error "failed to fork(): $!"; # read from the tar process and send as payload to the child # process while (1) { # read from tar my $ret = read($fh, my $cont, 4096) // error "cannot read from pipe: $!"; if ($ret == 0) { last; } debug "listener: sending write ($ret bytes)"; # send to child print STDOUT pack("n", $ret) . "write" . 
$cont; STDOUT->flush(); debug "listener: waiting for okthx"; checkokthx \*STDIN; if ($ret < 4096) { last; } } # signal to the child process that we are done debug "listener: sending close"; print STDOUT pack("n", 0) . "close"; STDOUT->flush(); debug "listener: waiting for okthx"; checkokthx \*STDIN; close $fh; if ($? != 0) { error "tar failed"; } } elsif ($msg eq "untar") { debug "listener: received message: untar"; # payload is the output directory my $outdir; { my $ret = read(STDIN, $outdir, $len) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # make sure that the directory exists if (-e $outdir) { if (!-d $outdir) { error "$outdir already exists but is not a directory"; } } else { my $num_created = make_path $outdir, { error => \my $err }; if ($err && @$err) { error( join "; ", ( map { "cannot create " . (join ": ", %{$_}) } @$err )); } elsif ($num_created == 0) { error "cannot create $outdir"; } } debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); # now we expect one or more "write" messages containing the # tarball to unpack open my $fh, '|-', 'tar', '--numeric-owner', '--xattrs', '--xattrs-include=*', '--directory', $outdir, '--extract', '--file', '-' // error "failed to fork(): $!"; # handle "write" messages from the child process and feed # their payload into the tar process until a "close" message # is encountered while (1) { # receive the next message my $ret = read(STDIN, my $buf, 2 + 5) // error "cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } my ($len, $msg) = unpack("nA5", $buf); debug "listener: received message: $msg"; if ($msg eq "close") { # finish the loop if ($len != 0) { error "expected no payload but got $len bytes"; } debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); last; } elsif ($msg ne "write") { # we should not receive this message at this point error "expected write but got: $msg"; } # read the payload my $content; { my $ret = read(STDIN, $content, $len) // error "error cannot read from socket: $!"; if ($ret == 0) { error "received eof on socket"; } } # write the payload to the tar process print $fh $content or error "cannot write to tar process: $!"; debug "listener: sending okthx"; print STDOUT (pack("n", 0) . "okthx") or error "cannot write to socket: $!"; STDOUT->flush(); } close $fh; if ($? != 0) { error "tar failed"; } } else { error "unknown message: $msg"; } } }; if ($@) { debug("hooklistener errored out: $@"); # inform the other side that something went wrong print STDOUT (pack("n", 0) . 
"error") or error "cannot write to socket: $!"; STDOUT->flush(); } return; } # parse files of the format found in /usr/share/distro-info/ and return two # lists: the first contains codenames of end-of-life distros and the second # list contains codenames of currently active distros sub parse_distro_info { my $file = shift; my @eol = (); my @current = (); my $today = POSIX::strftime "%Y-%m-%d", localtime; open my $fh, '<', $file or error "cannot open $file: $!"; my $i = 0; while (my $line = <$fh>) { chomp($line); $i++; my @cells = split /,/, $line; if (scalar @cells < 4) { error "cannot parse line $i of $file"; } if ( $i == 1 and ( scalar @cells < 6 or $cells[0] ne 'version' or $cells[1] ne 'codename' or $cells[2] ne 'series' or $cells[3] ne 'created' or $cells[4] ne 'release' or $cells[5] ne 'eol') ) { error "cannot find correct header in $file"; } if ($i == 1) { next; } if (scalar @cells == 6) { if ($cells[5] !~ m/^\d\d\d\d-\d\d-\d\d$/) { error "invalid eof date format in $file:$i: $cells[5]"; } # since the date format is iso8601, we can use lexicographic string # comparison to compare dates if ($cells[5] lt $today) { push @eol, $cells[2]; } else { push @current, $cells[2]; } } else { push @current, $cells[2]; } } close $fh; return ([@eol], [@current]); } sub get_suite_by_vendor { my %suite_by_vendor = ( 'debian' => {}, 'ubuntu' => {}, 'tanglu' => {}, 'kali' => {}, ); # pre-fill with some known values foreach my $suite ( 'potato', 'woody', 'sarge', 'etch', 'lenny', 'squeeze', 'wheezy', 'jessie' ) { $suite_by_vendor{'debian'}->{$suite} = 1; } foreach my $suite ( 'unstable', 'stable', 'oldstable', 'stretch', 'buster', 'bullseye', 'bookworm', 'trixie' ) { $suite_by_vendor{'debian'}->{$suite} = 0; } foreach my $suite ('aequorea', 'bartholomea', 'chromodoris', 'dasyatis') { $suite_by_vendor{'tanglu'}->{$suite} = 0; } foreach my $suite ('kali-dev', 'kali-rolling', 'kali-bleeding-edge') { $suite_by_vendor{'kali'}->{$suite} = 0; } foreach my $suite ('trusty', 'xenial', 'zesty', 'artful', 'bionic', 'cosmic') { $suite_by_vendor{'ubuntu'}->{$suite} = 0; } # if the Debian package distro-info-data is installed, then we can use it, # to get better data about new distros or EOL distros if (-e '/usr/share/distro-info/debian.csv') { my ($eol, $current) = parse_distro_info('/usr/share/distro-info/debian.csv'); foreach my $suite (@{$eol}) { $suite_by_vendor{'debian'}->{$suite} = 1; } foreach my $suite (@{$current}) { $suite_by_vendor{'debian'}->{$suite} = 0; } } if (-e '/usr/share/distro-info/ubuntu.csv') { my ($eol, $current) = parse_distro_info('/usr/share/distro-info/ubuntu.csv'); foreach my $suite (@{$eol}, @{$current}) { $suite_by_vendor{'ubuntu'}->{$suite} = 0; } } # if debootstrap is installed we infer distro names from the symlink # targets of the scripts in /usr/share/debootstrap/scripts/ my $debootstrap_scripts = '/usr/share/debootstrap/scripts/'; if (-d $debootstrap_scripts) { opendir(my $dh, $debootstrap_scripts) or error "Can't opendir($debootstrap_scripts): $!"; while (my $suite = readdir $dh) { # this is only a heuristic -- don't overwrite anything but instead # just update anything that was missing if (!-l "$debootstrap_scripts/$suite") { next; } my $target = readlink "$debootstrap_scripts/$suite"; if ($target eq "sid" and not exists $suite_by_vendor{'debian'}->{$suite}) { $suite_by_vendor{'debian'}->{$suite} = 0; } elsif ($target eq "gutsy" and not exists $suite_by_vendor{'ubuntu'}->{$suite}) { $suite_by_vendor{'ubuntu'}->{$suite} = 0; } elsif ($target eq "aequorea" and not exists 
$suite_by_vendor{'tanglu'}->{$suite}) { $suite_by_vendor{'tanglu'}->{$suite} = 0; } elsif ($target eq "kali" and not exists $suite_by_vendor{'kali'}->{$suite}) { $suite_by_vendor{'kali'}->{$suite} = 0; } } closedir($dh); } return %suite_by_vendor; } # try to guess the right keyring path for the given suite sub get_keyring_by_suite { my $query = shift; my $suite_by_vendor = shift; my $debianvendor; my $ubuntuvendor; # make $@ local, so we don't print "Can't locate Dpkg/Vendor/Debian.pm" # in other parts where we evaluate $@ local $@ = ''; eval { require Dpkg::Vendor::Debian; require Dpkg::Vendor::Ubuntu; $debianvendor = Dpkg::Vendor::Debian->new(); $ubuntuvendor = Dpkg::Vendor::Ubuntu->new(); }; my $keyring_by_vendor = sub { my $vendor = shift; my $eol = shift; if ($vendor eq 'debian') { if ($eol) { if (defined $debianvendor) { return $debianvendor->run_hook( 'archive-keyrings-historic'); } else { return '/usr/share/keyrings/debian-archive-removed-keys.gpg'; } } else { if (defined $debianvendor) { return $debianvendor->run_hook('archive-keyrings'); } else { return '/usr/share/keyrings/debian-archive-keyring.gpg'; } } } elsif ($vendor eq 'ubuntu') { if (defined $ubuntuvendor) { return $ubuntuvendor->run_hook('archive-keyrings'); } else { return '/usr/share/keyrings/ubuntu-archive-keyring.gpg'; } } elsif ($vendor eq 'tanglu') { return '/usr/share/keyrings/tanglu-archive-keyring.gpg'; } elsif ($vendor eq 'kali') { return '/usr/share/keyrings/kali-archive-keyring.gpg'; } else { error "unknown vendor: $vendor"; } }; my %keyrings = (); foreach my $vendor (keys %{$suite_by_vendor}) { foreach my $suite (keys %{ $suite_by_vendor->{$vendor} }) { my $keyring = $keyring_by_vendor->( $vendor, $suite_by_vendor->{$vendor}->{$suite}); debug "suite $suite with keyring $keyring"; $keyrings{$suite} = $keyring; } } if (exists $keyrings{$query}) { return $keyrings{$query}; } else { return; } } sub get_sourceslist_by_suite { my $suite = shift; my $arch = shift; my $signedby = shift; my $compstr = shift; my $suite_by_vendor = shift; my @debstable = keys %{ $suite_by_vendor->{'debian'} }; my @ubuntustable = keys %{ $suite_by_vendor->{'ubuntu'} }; my @tanglustable = keys %{ $suite_by_vendor->{'tanglu'} }; my @kali = keys %{ $suite_by_vendor->{'kali'} }; my $mirror = 'http://deb.debian.org/debian'; my $secmirror = 'http://security.debian.org/debian-security'; if (any { $_ eq $suite } @ubuntustable) { if (any { $_ eq $arch } ('amd64', 'i386')) { $mirror = 'http://archive.ubuntu.com/ubuntu'; $secmirror = 'http://security.ubuntu.com/ubuntu'; } else { $mirror = 'http://ports.ubuntu.com/ubuntu-ports'; $secmirror = 'http://ports.ubuntu.com/ubuntu-ports'; } if (-e '/usr/share/debootstrap/scripts/gutsy') { # try running the debootstrap script but ignore errors my $script = 'set -eu; default_mirror() { echo $1; }; mirror_style() { :; }; download_style() { :; }; finddebs_style() { :; }; variants() { :; }; keyring() { :; }; doing_variant() { false; }; . /usr/share/debootstrap/scripts/gutsy;'; open my $fh, '-|', 'env', "ARCH=$arch", "SUITE=$suite", 'sh', '-c', $script // last; chomp( my $output = do { local $/; <$fh> } ); close $fh; if ($? 
== 0 && $output ne '') { $mirror = $output; } } } elsif (any { $_ eq $suite } @tanglustable) { $mirror = 'http://archive.tanglu.org/tanglu'; } elsif (any { $_ eq $suite } @kali) { $mirror = 'https://http.kali.org/kali'; } my $sourceslist = ''; $sourceslist .= "deb$signedby $mirror $suite $compstr\n"; if (any { $_ eq $suite } @ubuntustable) { $sourceslist .= "deb$signedby $mirror $suite-updates $compstr\n"; $sourceslist .= "deb$signedby $secmirror $suite-security $compstr\n"; } elsif (any { $_ eq $suite } @tanglustable) { $sourceslist .= "deb$signedby $secmirror $suite-updates $compstr\n"; } elsif (any { $_ eq $suite } @debstable and none { $_ eq $suite } ('testing', 'unstable', 'sid')) { $sourceslist .= "deb$signedby $mirror $suite-updates $compstr\n"; # the security mirror changes, starting with bullseye # https://lists.debian.org/87r26wqr2a.fsf@43-1.org my $bullseye_or_later = 0; if ( any { $_ eq $suite } ('stable', 'bullseye', 'bookworm', 'trixie') ) { $bullseye_or_later = 1; } my $distro_info = '/usr/share/distro-info/debian.csv'; # make $@ local, so we don't print "Can't locate Debian/DistroInfo.pm" # in other parts where we evaluate $@ local $@ = ''; eval { require Debian::DistroInfo; }; if (!$@) { debug "libdistro-info-perl is installed"; my $debinfo = DebianDistroInfo->new(); if ($debinfo->version($suite, 0) >= 11) { $bullseye_or_later = 1; } } elsif (-f $distro_info) { debug "distro-info-data is installed"; open my $fh, '<', $distro_info or error "cannot open $distro_info: $!"; my $i = 0; my $matching_version; my @releases; my $today = POSIX::strftime "%Y-%m-%d", localtime; while (my $line = <$fh>) { chomp($line); $i++; my @cells = split /,/, $line; if (scalar @cells < 4) { error "cannot parse line $i of $distro_info"; } if ( $i == 1 and ( scalar @cells < 6 or $cells[0] ne 'version' or $cells[1] ne 'codename' or $cells[2] ne 'series' or $cells[3] ne 'created' or $cells[4] ne 'release' or $cells[5] ne 'eol') ) { error "cannot find correct header in $distro_info"; } if ($i == 1) { next; } if ( scalar @cells > 4 and $cells[4] =~ m/^\d\d\d\d-\d\d-\d\d$/ and $cells[4] lt $today) { push @releases, $cells[0]; } if (lc $cells[1] eq $suite or lc $cells[2] eq $suite) { $matching_version = $cells[0]; last; } } close $fh; if (defined $matching_version and $matching_version >= 11) { $bullseye_or_later = 1; } if ($suite eq "stable" and $releases[-1] >= 11) { $bullseye_or_later = 1; } } else { debug "neither libdistro-info-perl nor distro-info-data installed"; } if ($bullseye_or_later) { # starting from bullseye use $sourceslist .= "deb$signedby $secmirror $suite-security" . " $compstr\n"; } else { $sourceslist .= "deb$signedby $secmirror $suite/updates" . " $compstr\n"; } } return $sourceslist; } sub guess_sources_format { my $content = shift; my $is_deb822 = 0; my $is_oneline = 0; for my $line (split "\n", $content) { if ($line =~ /^deb(-src)? /) { $is_oneline = 1; last; } if ($line =~ /^[^#:\s]+:/) { $is_deb822 = 1; last; } } if ($is_deb822) { return 'deb822'; } if ($is_oneline) { return 'one-line'; } return; } sub approx_disk_usage { my $directory = shift; info "approximating disk usage..."; # the "du" utility reports different results depending on the underlying # filesystem, see https://bugs.debian.org/650077 for a discussion # # we use code similar to the one used by dpkg-gencontrol instead # # Regular files are measured in number of 1024 byte blocks. All other # entries are assumed to take one block of space. 
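# For example (illustration only): a 1500 byte regular file is counted as
# int((1500 + 1024) / 1024) = 2 blocks while a symlink, directory or other
# special entry is counted as a single block.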
# # We ignore /dev because depending on the mode, the directory might be # populated or not and we want consistent disk usage results independent # of the mode. my $installed_size = 0; my $scan_installed_size = sub { if ($File::Find::name eq "$directory/dev") { # add all entries of @devfiles once $installed_size += scalar @devfiles; } elsif ($File::Find::name =~ /^$directory\/dev\//) { # ignore everything below /dev } elsif (-l $File::Find::name) { # -f follows symlinks, so we first check if we have a symlink $installed_size += 1; } elsif (-f $File::Find::name) { # add file size in 1024 byte blocks, rounded up $installed_size += int(((-s $File::Find::name) + 1024) / 1024); } else { # all other entries are assumed to only take up one block $installed_size += 1; } }; find($scan_installed_size, $directory); # because the above is only a heuristic we add 10% extra for good measure return int($installed_size * 1.1); } sub main() { my $before = Time::HiRes::time; umask 022; if (scalar @ARGV >= 7 && $ARGV[0] eq "--hook-helper") { hookhelper(); exit 0; } # this is the counterpart to --hook-helper and will receive and carry # out its instructions if (scalar @ARGV == 2 && $ARGV[0] eq "--hook-listener") { hooklistener(); exit 0; } # this is like: # lxc-usernsexec -- lxc-unshare -s 'MOUNT|PID|UTSNAME|IPC' ... # but without needing lxc if ($ARGV[0] eq "--unshare-helper") { if ($EFFECTIVE_USER_ID != 0 && !test_unshare_userns(1)) { exit 1; } my @idmap = (); if ($EFFECTIVE_USER_ID != 0) { @idmap = read_subuid_subgid; } my $pid = get_unshare_cmd( sub { 0 == system @ARGV[1 .. $#ARGV] or error "system failed: $?"; }, \@idmap ); waitpid $pid, 0; $? == 0 or error "unshared command failed"; exit 0; } my $mtime = time; if (exists $ENV{SOURCE_DATE_EPOCH}) { $mtime = $ENV{SOURCE_DATE_EPOCH} + 0; } { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{DEBIAN_FRONTEND} = 'noninteractive'; $ENV{DEBCONF_NONINTERACTIVE_SEEN} = 'true'; $ENV{LC_ALL} = 'C.UTF-8'; $ENV{LANGUAGE} = 'C.UTF-8'; $ENV{LANG} = 'C.UTF-8'; } # copy ARGV because getopt modifies it my @ARGVORIG = @ARGV; # obtain the correct defaults for the keyring locations that apt knows # about my $apttrusted = `eval \$(apt-config shell v Dir::Etc::trusted/f); printf \$v`; my $apttrustedparts = `eval \$(apt-config shell v Dir::Etc::trustedparts/d); printf \$v`; chomp(my $hostarch = `dpkg --print-architecture`); my $options = { components => ["main"], variant => "important", include => [], architectures => [$hostarch], mode => 'auto', dpkgopts => [], aptopts => [], apttrusted => $apttrusted, apttrustedparts => $apttrustedparts, noop => [], setup_hook => [], extract_hook => [], essential_hook => [], customize_hook => [], dryrun => 0, skip => [], }; my $logfile = undef; my $format = 'auto'; Getopt::Long::Configure('default', 'bundling', 'auto_abbrev', 'ignore_case_always'); GetOptions( 'h|help' => sub { pod2usage(-exitval => 0, -verbose => 1) }, 'man' => sub { pod2usage(-exitval => 0, -verbose => 2) }, 'version' => sub { print STDOUT "mmdebstrap $VERSION\n"; exit 0; }, 'components=s@' => \$options->{components}, 'variant=s' => \$options->{variant}, 'include=s' => sub { my ($opt_name, $opt_value) = @_; for my $pkg (split /[,\s]+/, $opt_value) { # strip leading and trailing whitespace $pkg =~ s/^\s+|\s+$//g; # skip if the remainder is an empty string if ($pkg eq '') { next; } push @{ $options->{include} }, $pkg; } }, 'architectures=s@' => \$options->{architectures}, 'mode=s' => \$options->{mode}, 'dpkgopt=s@' => \$options->{dpkgopts}, 'aptopt=s@' => 
\$options->{aptopts}, 'keyring=s' => sub { my ($opt_name, $opt_value) = @_; if ($opt_value =~ /"/) { error "--keyring: apt cannot handle paths with double quotes:" . " $opt_value"; } if (!-e $opt_value) { error "keyring \"$opt_value\" does not exist"; } my $abs_path = abs_path($opt_value); if (!defined $abs_path) { error "unable to get absolute path of --keyring: $opt_value"; } # since abs_path resolved all symlinks for us, we can now test # what the actual target actually is if (-d $abs_path) { $options->{apttrustedparts} = $abs_path; } else { $options->{apttrusted} = $abs_path; } }, 's|silent' => sub { $verbosity_level = 0; }, 'q|quiet' => sub { $verbosity_level = 0; }, 'v|verbose' => sub { $verbosity_level = 2; }, 'd|debug' => sub { $verbosity_level = 3; }, 'format=s' => \$format, 'logfile=s' => \$logfile, # no-op options so that mmdebstrap can be used with # sbuild-createchroot --debootstrap=mmdebstrap 'resolve-deps' => sub { push @{ $options->{noop} }, 'resolve-deps'; }, 'merged-usr' => sub { push @{ $options->{noop} }, 'merged-usr'; }, 'no-merged-usr' => sub { push @{ $options->{noop} }, 'no-merged-usr'; }, 'force-check-gpg' => sub { push @{ $options->{noop} }, 'force-check-gpg'; }, 'setup-hook=s@' => \$options->{setup_hook}, 'extract-hook=s@' => \$options->{extract_hook}, 'essential-hook=s@' => \$options->{essential_hook}, 'customize-hook=s@' => \$options->{customize_hook}, 'hook-directory=s' => sub { my ($opt_name, $opt_value) = @_; if (!-e $opt_value) { error "hook directory \"$opt_value\" does not exist"; } my $abs_path = abs_path($opt_value); if (!defined $abs_path) { error( "unable to get absolute path of " . "--hook-directory: $opt_value"); } # since abs_path resolved all symlinks for us, we can now test # what the actual target actually is if (!-d $opt_value) { error "hook directory \"$opt_value\" is not a directory"; } # gather all files starting with special prefixes into the # respective keys of a hash my %scripts; opendir(my $dh, $opt_value) or error "Can't opendir($opt_value): $!"; while (my $entry = readdir $dh) { foreach my $hook ('setup', 'extract', 'essential', 'customize') { if ($entry =~ m/^\Q$hook\E/ and -x "$opt_value/$entry") { push @{ $scripts{$hook} }, "$opt_value/$entry"; } } } closedir($dh); # add the sorted list associated with each key to the respective # list of hooks foreach my $hook (keys %scripts) { push @{ $options->{"${hook}_hook"} }, (sort @{ $scripts{$hook} }); } }, # Sometimes --simulate fails even though non-simulate succeeds because # in simulate mode, apt cannot rely on dpkg to figure out tricky # dependency situations and will give up instead when it cannot find # a solution. # # 2020-02-06, #debian-apt on OFTC, times in UTC+1 # 12:52 < DonKult> [...] It works in non-simulation because simulate is # more picky. If you wanna know why simulate complains # here prepare for long suffering in dependency hell. 'simulate' => \$options->{dryrun}, 'dry-run' => \$options->{dryrun}, 'skip=s@' => \$options->{skip}, ) or pod2usage(-exitval => 2, -verbose => 1); if (defined($logfile)) { open(STDERR, '>', $logfile) or error "cannot open $logfile: $!"; } foreach my $arg (@{ $options->{noop} }) { info "the option --$arg is a no-op. It only exists for compatibility" . 
" with some debootstrap wrappers."; } if ($options->{dryrun}) { foreach my $hook ('setup', 'extract', 'essential', 'customize') { if (scalar @{ $options->{"${hook}_hook"} } > 0) { warning "In dry-run mode, --$hook-hook options have no effect"; } } } my @valid_variants = ( 'extract', 'custom', 'essential', 'apt', 'required', 'minbase', 'buildd', 'important', 'debootstrap', '-', 'standard' ); if (none { $_ eq $options->{variant} } @valid_variants) { error "invalid variant. Choose from " . (join ', ', @valid_variants); } # debootstrap and - are an alias for important if (any { $_ eq $options->{variant} } ('-', 'debootstrap')) { $options->{variant} = 'important'; } # minbase is an alias for required if ($options->{variant} eq 'minbase') { $options->{variant} = 'required'; } # fakeroot is an alias for fakechroot if ($options->{mode} eq 'fakeroot') { $options->{mode} = 'fakechroot'; } # sudo is an alias for root if ($options->{mode} eq 'sudo') { $options->{mode} = 'root'; } my @valid_modes = ('auto', 'root', 'unshare', 'fakechroot', 'proot', 'chrootless'); if (none { $_ eq $options->{mode} } @valid_modes) { error "invalid mode. Choose from " . (join ', ', @valid_modes); } # sqfs is an alias for squashfs if ($format eq 'sqfs') { $format = 'squashfs'; } # dir is an alias for directory if ($format eq 'dir') { $format = 'directory'; } my @valid_formats = ('auto', 'directory', 'tar', 'squashfs', 'ext2', 'null'); if (none { $_ eq $format } @valid_formats) { error "invalid format. Choose from " . (join ', ', @valid_formats); } # setting PATH for chroot, ldconfig, start-stop-daemon... if (length $ENV{PATH}) { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{PATH} = "$ENV{PATH}:/usr/sbin:/usr/bin:/sbin:/bin"; } else { ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{PATH} = "/usr/sbin:/usr/bin:/sbin:/bin"; } foreach my $tool ( 'dpkg', 'dpkg-deb', 'apt-get', 'apt-cache', 'apt-config', 'tar', 'rm', 'find', 'env' ) { my $found = 0; foreach my $path (split /:/, $ENV{PATH}) { if (-f "$path/$tool" && -x _ ) { $found = 1; last; } } if (!$found) { error "cannot find $tool"; } } { my $dpkgversion = version->new(0); my $pid = open my $fh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { # redirect stderr to /dev/null to hide error messages from dpkg # versions before 1.20.0 open(STDERR, '>', '/dev/null') or error "cannot open /dev/null for writing: $!"; exec 'dpkg', '--robot', '--version'; } chomp( my $content = do { local $/; <$fh> } ); close $fh; # the --robot option was introduced in 1.20.0 but until 1.20.2 the # output contained a string after the version, separated by a # whitespace -- since then, it's only the version if ($? == 0 and $content =~ /^([0-9.]+).*$/) { # dpkg is new enough for the --robot option $dpkgversion = version->new($1); } if ($dpkgversion < "1.20.0") { error "need dpkg >= 1.20.0 but have $dpkgversion"; } } { my $aptversion = version->new(0); my $pid = open my $fh, '-|', 'apt-get', '--version' // error "failed to fork(): $!"; chomp( my $content = do { local $/; <$fh> } ); close $fh; if ( $? == 0 and $content =~ /^apt ([0-9]+\.[0-9]+\.[0-9]+) \([a-z0-9-]+\)$/m) { $aptversion = version->new($1); } if ($aptversion < "2.3.14") { error "need apt >= 2.3.14 but have $aptversion"; } } my $check_fakechroot_running = sub { # test if we are inside fakechroot already # We fork a child process because setting FAKECHROOT_DETECT seems to # be an irreversible operation for fakechroot. 
my $pid = open my $rfh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { # with the FAKECHROOT_DETECT environment variable set, any program # execution will be replaced with the output "fakeroot [version]" local $ENV{FAKECHROOT_DETECT} = 0; exec 'echo', 'If fakechroot is running, this will not be printed'; } my $content = do { local $/; <$rfh> }; waitpid $pid, 0; my $result = 0; if ($? == 0 and $content =~ /^fakechroot [0-9.]+$/) { $result = 1; } return $result; }; # figure out the mode to use or test whether the chosen mode is legal if ($options->{mode} eq 'auto') { if (&{$check_fakechroot_running}()) { # if mmdebstrap is executed inside fakechroot, then we assume the # user expects fakechroot mode $options->{mode} = 'fakechroot'; } elsif ($EFFECTIVE_USER_ID == 0) { # if mmdebstrap is executed as root, we assume the user wants root # mode $options->{mode} = 'root'; } elsif (test_unshare_userns(0)) { # if we are not root, unshare mode is our best option if # test_unshare_userns() succeeds $options->{mode} = 'unshare'; } elsif (system('fakechroot --version>/dev/null') == 0) { # the next fallback is fakechroot # exec ourselves again but within fakechroot my @prefix = (); if ($is_covering) { @prefix = ($EXECUTABLE_NAME, '-MDevel::Cover=-silent,-nogcov'); } exec 'fakechroot', 'fakeroot', @prefix, $PROGRAM_NAME, @ARGVORIG; } elsif (system('proot --version>/dev/null') == 0) { # and lastly, proot $options->{mode} = 'proot'; } else { error "unable to pick chroot mode automatically"; } info "automatically chosen mode: $options->{mode}"; } elsif ($options->{mode} eq 'root') { if ($EFFECTIVE_USER_ID != 0) { error "need to be root"; } } elsif ($options->{mode} eq 'proot') { if (system('proot --version>/dev/null') != 0) { error "need working proot binary"; } } elsif ($options->{mode} eq 'fakechroot') { if (&{$check_fakechroot_running}()) { # fakechroot is already running } elsif (system('fakechroot --version>/dev/null') != 0) { error "need working fakechroot binary"; } else { # exec ourselves again but within fakechroot my @prefix = (); if ($is_covering) { @prefix = ($EXECUTABLE_NAME, '-MDevel::Cover=-silent,-nogcov'); } exec 'fakechroot', 'fakeroot', @prefix, $PROGRAM_NAME, @ARGVORIG; } } elsif ($options->{mode} eq 'unshare') { # For unshare mode to work we either need to already be the root user # and then we do not have to unshare the user namespace anymore but we # need to be able to unshare the mount namespace... # # We need to call unshare with "--propagation unchanged" or otherwise # we get 'cannot change root filesystem propagation' when running # mmdebstrap inside a chroot for which the root of the chroot is not # its own mount point. if ($EFFECTIVE_USER_ID == 0 && 0 != system 'unshare --mount --propagation unchanged -- true') { error "unable to unshare the mount namespace"; } # ...or we are not root and then we need to be able to unshare the user # namespace. if ($EFFECTIVE_USER_ID != 0 && !test_unshare_userns(1)) { my $procfile = '/proc/sys/kernel/unprivileged_userns_clone'; open(my $fh, '<', $procfile) or error "failed to open $procfile: $!"; chomp( my $content = do { local $/; <$fh> } ); close($fh); if ($content ne "1") { info "/proc/sys/kernel/unprivileged_userns_clone is set to" . " $content"; info "Try running:"; info " sudo sysctl -w kernel.unprivileged_userns_clone=1"; info "or permanently enable unprivileged usernamespaces by" . " putting the setting into /etc/sysctl.d/"; info "THIS SETTING HAS SECURITY IMPLICATIONS!"; info "Refer to https://bugs.debian.org/cgi-bin/" . 
"bugreport.cgi?bug=898446"; } exit 1; } } elsif ($options->{mode} eq 'chrootless') { if ($EFFECTIVE_USER_ID == 0) { warning "running chrootless mode as root might damage the host " . "system"; } } else { error "unknown mode: $options->{mode}"; } $options->{canmount} = 1; if ($options->{mode} eq 'root') { # It's possible to be root but not be able to mount anything. # This is for example the case when running under docker. # Mounting needs CAP_SYS_ADMIN which might not be available. # # We test for CAP_SYS_ADMIN using the capget syscall. # We cannot use cap_get_proc from sys/capability.h because Perl. # We don't use capsh because we don't want to depend on libcap2-bin my $hdrp = pack( "Li", # __u32 followed by int $_LINUX_CAPABILITY_VERSION_3, # available since Linux 2.6.26 0 # caps of this process ); my $datap = pack("LLLLLL", 0, 0, 0, 0, 0, 0); # six __u32 0 == syscall &SYS_capget, $hdrp, $datap or error "capget failed: $!"; my ($effective, undef) = unpack "LLLLLL", $datap; if (($effective >> $CAP_SYS_ADMIN) & 1 != 1) { warning "cannot mount because CAP_SYS_ADMIN is not in the effective set"; $options->{canmount} = 0; } if (0 == syscall &SYS_prctl, $PR_CAPBSET_READ, $CAP_SYS_ADMIN) { warning "cannot mount because CAP_SYS_ADMIN is not in the bounding set"; $options->{canmount} = 0; } # To test whether we can use mount without actually trying to mount # something we try unsharing the mount namespace. If this is allowed, # then we are also allowed to mount. # # We need to call unshare with "--propagation unchanged" or otherwise # we get 'cannot change root filesystem propagation' when running # mmdebstrap inside a chroot for which the root of the chroot is not # its own mount point. if (0 != system 'unshare --mount --propagation unchanged -- true') { # if we cannot unshare the mount namespace as root, then we also # cannot mount warning "cannot mount because unshare --mount failed"; $options->{canmount} = 0; } } if (any { $_ eq $options->{mode} } ('root', 'unshare')) { if (system('mount --version>/dev/null') != 0) { warning "cannot execute mount"; $options->{canmount} = 0; } } # we can only possibly mount in root and unshare mode if (none { $_ eq $options->{mode} } ('root', 'unshare')) { $options->{canmount} = 0; } my @architectures = (); foreach my $archs (@{ $options->{architectures} }) { foreach my $arch (split /[,\s]+/, $archs) { # strip leading and trailing whitespace $arch =~ s/^\s+|\s+$//g; # skip if the remainder is an empty string if ($arch eq '') { next; } # do not append component if it's already in the list if (any { $_ eq $arch } @architectures) { next; } push @architectures, $arch; } } $options->{nativearch} = $hostarch; $options->{foreignarchs} = []; if (scalar @architectures == 0) { warning "empty architecture list: falling back to native architecture" . " $hostarch"; } elsif (scalar @architectures == 1) { $options->{nativearch} = $architectures[0]; } else { $options->{nativearch} = $architectures[0]; push @{ $options->{foreignarchs} }, @architectures[1 .. $#architectures]; } debug "Native architecture (outside): $hostarch"; debug "Native architecture (inside): $options->{nativearch}"; debug("Foreign architectures (inside): " . 
(join ', ', @{ $options->{foreignarchs} })); { # FIXME: autogenerate this list my $deb2qemu = { alpha => 'alpha', amd64 => 'x86_64', arm => 'arm', arm64 => 'aarch64', armel => 'arm', armhf => 'arm', hppa => 'hppa', i386 => 'i386', m68k => 'm68k', mips => 'mips', mips64 => 'mips64', mips64el => 'mips64el', mipsel => 'mipsel', powerpc => 'ppc', ppc64 => 'ppc64', ppc64el => 'ppc64le', riscv64 => 'riscv64', s390x => 's390x', sh4 => 'sh4', sparc => 'sparc', sparc64 => 'sparc64', }; if (any { $_ eq 'check/qemu' } @{ $options->{skip} }) { info "skipping check/qemu as requested"; } elsif ($options->{mode} eq "chrootless") { info "skipping emulation check in chrootless mode"; } elsif ($options->{variant} eq "extract") { info "skipping emulation check for extract variant"; } elsif ($hostarch ne $options->{nativearch}) { if (system('arch-test --version>/dev/null') != 0) { error "install arch-test for foreign architecture support"; } my $withemu = 0; my $noemu = 0; { my $pid = open my $fh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { { ## no critic (TestingAndDebugging::ProhibitNoWarnings) # don't print a warning if the following fails no warnings; exec 'arch-test', $options->{nativearch}; } # if exec didn't work (for example because the arch-test # program is missing) prepare for the worst and assume that # the architecture cannot be executed print "$options->{nativearch}: not supported on this" . " machine/kernel\n"; exit 1; } chomp( my $content = do { local $/; <$fh> } ); close $fh; if ($? == 0 and $content eq "$options->{nativearch}: ok") { $withemu = 1; } } { my $pid = open my $fh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { { ## no critic (TestingAndDebugging::ProhibitNoWarnings) # don't print a warning if the following fails no warnings; exec 'arch-test', '-n', $options->{nativearch}; } # if exec didn't work (for example because the arch-test # program is missing) prepare for the worst and assume that # the architecture cannot be executed print "$options->{nativearch}: not supported on this" . " machine/kernel\n"; exit 1; } chomp( my $content = do { local $/; <$fh> } ); close $fh; if ($? == 0 and $content eq "$options->{nativearch}: ok") { $noemu = 1; } } # four different outcomes, depending on whether arch-test # succeeded with or without emulation # # withemu | noemu | # --------+-------+----------------- # 0 | 0 | test why emu doesn't work and quit # 0 | 1 | should never happen # 1 | 0 | use qemu emulation # 1 | 1 | don't use qemu emulation if ($withemu == 0 and $noemu == 0) { { open my $fh, '<', '/proc/filesystems' or error "failed to open /proc/filesystems: $!"; unless (grep { /^nodev\tbinfmt_misc$/ } (<$fh>)) { warning "binfmt_misc not found in /proc/filesystems --" . " is the module loaded?"; } close $fh; } { open my $fh, '<', '/proc/mounts' or error "failed to open /proc/mounts: $!"; unless ( grep { /^binfmt_misc\s+ \/proc\/sys\/fs\/binfmt_misc\s+ binfmt_misc\s+/x } (<$fh>) ) { warning "binfmt_misc not found in /proc/mounts -- not" . " mounted?"; } close $fh; } { if (!exists $deb2qemu->{ $options->{nativearch} }) { warning "no mapping from $options->{nativearch} to" . " qemu-user binary"; } elsif ( system('/usr/sbin/update-binfmts --version>/dev/null') != 0) { warning "cannot find /usr/sbin/update-binfmts"; } else { my $binfmt_identifier = 'qemu-' . $deb2qemu->{ $options->{nativearch} }; open my $fh, '-|', '/usr/sbin/update-binfmts', '--display', $binfmt_identifier // error "failed to fork(): $!"; chomp( my $binfmts = do { local $/; <$fh> } ); close $fh; if ($? 
!= 0 || $binfmts eq '') { warning "$binfmt_identifier is not a supported" . " binfmt name"; } } } error "$options->{nativearch} can neither be executed natively" . " nor via qemu user emulation with binfmt_misc"; } elsif ($withemu == 0 and $noemu == 1) { error "arch-test succeeded without emu but not with emu"; } elsif ($withemu == 1 and $noemu == 0) { info "$options->{nativearch} cannot be executed, falling back" . " to qemu-user"; if (!exists $deb2qemu->{ $options->{nativearch} }) { error "no mapping from $options->{nativearch} to qemu-user" . " binary"; } $options->{qemu} = $deb2qemu->{ $options->{nativearch} }; if (any { $_ eq $options->{mode} } ('root', 'unshare')) { my $qemubin = "/usr/bin/qemu-$options->{qemu}-static"; if (!-e $qemubin) { error "cannot find $qemubin"; } } } elsif ($withemu == 1 and $noemu == 1) { info "$options->{nativearch} is different from $hostarch but" . " can be executed natively"; } else { error "logic error"; } } else { info "chroot architecture $options->{nativearch} is equal to the" . " host's architecture"; } } { $options->{suite} = undef; if (scalar @ARGV > 0) { $options->{suite} = shift @ARGV; if (scalar @ARGV > 0) { $options->{target} = shift @ARGV; } else { $options->{target} = '-'; } } else { info "No SUITE specified, expecting sources.list on standard input"; $options->{target} = '-'; } my $sourceslists = []; if (!defined $options->{suite}) { # If no suite was specified, then the whole sources.list has to # come from standard input info "reading sources.list from standard input..."; my $content = do { local $/; ## no critic (InputOutput::ProhibitExplicitStdin) ; }; my $type = guess_sources_format($content); if (!defined $type || ($type ne "deb822" and $type ne "one-line")) { error "cannot determine sources.list format"; } push @{$sourceslists}, { type => $type, fname => undef, content => $content, }; } else { my @components = (); foreach my $comp (@{ $options->{components} }) { my @comps = split /[,\s]+/, $comp; foreach my $c (@comps) { # strip leading and trailing whitespace $c =~ s/^\s+|\s+$//g; # skip if the remainder is an empty string if ($c eq "") { next; } # do not append component if it's already in the list if (any { $_ eq $c } @components) { next; } push @components, $c; } } my $compstr = join " ", @components; # if the currently selected apt keyrings do not contain the # necessary key material for the chosen suite, then attempt adding # a signed-by option my $signedby = ''; my %suite_by_vendor = get_suite_by_vendor(); { my $keyring = get_keyring_by_suite($options->{suite}, \%suite_by_vendor); if (!defined $keyring) { last; } # we can only check if we need the signed-by entry if we u # automatically chosen keyring exists if (!defined $keyring || !-e $keyring) { last; } # we can only check key material if gpg is installed my $gpghome = tempdir( "mmdebstrap.gpghome.XXXXXXXXXXXX", TMPDIR => 1, CLEANUP => 1 ); my @gpgcmd = ( 'gpg', '--quiet', '--ignore-time-conflict', '--no-options', '--no-default-keyring', '--homedir', $gpghome, '--no-auto-check-trustdb', ); my ($ret, $message); { my $fh; { # change warning handler to prevent message # Can't exec "gpg": No such file or directory local $SIG{__WARN__} = sub { $message = shift; }; $ret = open $fh, '-|', @gpgcmd, '--version'; } # we only want to check if the gpg command exists close $fh; } if ($? != 0 || !defined $ret || defined $message) { info "gpg --version failed: cannot determine the right" . 
" signed-by value"; last; } # initialize gpg trustdb with empty one { `@gpgcmd --update-trustdb >/dev/null 2>/dev/null`; $? == 0 or error "gpg failed to initialize trustdb: $?"; } # find all the fingerprints of the keys apt currently # knows about my @keyrings = (); opendir my $dh, "$options->{apttrustedparts}" or error "cannot read $options->{apttrustedparts}"; while (my $filename = readdir $dh) { if ($filename !~ /\.(asc|gpg)$/) { next; } $filename = "$options->{apttrustedparts}/$filename"; # skip empty keyrings -s "$filename" || next; push @keyrings, "$filename"; } closedir $dh; if (-s $options->{apttrusted}) { push @keyrings, $options->{apttrusted}; } my @aptfingerprints = (); if (scalar @keyrings == 0) { $signedby = " [signed-by=\"$keyring\"]"; last; } { open(my $fh, '-|', @gpgcmd, '--with-colons', '--show-keys', @keyrings) // error "failed to fork(): $!"; while (my $line = <$fh>) { if ($line !~ /^fpr:::::::::([^:]+):/) { next; } push @aptfingerprints, $1; } close $fh; } if ($? != 0) { error "gpg failed"; } if (scalar @aptfingerprints == 0) { $signedby = " [signed-by=\"$keyring\"]"; last; } # check if all fingerprints from the keyring that we guessed # are known by apt and only add signed-by option if that's not # the case my @suitefingerprints = (); { open(my $fh, '-|', @gpgcmd, '--with-colons', '--show-keys', $keyring) // error "failed to fork(): $!"; while (my $line = <$fh>) { if ($line !~ /^fpr:::::::::([^:]+):/) { next; } # if this fingerprint is not known by apt, then we need #to add the signed-by option if (none { $_ eq $1 } @aptfingerprints) { $signedby = " [signed-by=\"$keyring\"]"; last; } } close $fh; } if ($? != 0) { error "gpg failed"; } } if (scalar @ARGV > 0) { for my $arg (@ARGV) { if ($arg eq '-') { info 'reading sources.list from standard input...'; my $content = do { local $/; ## no critic (InputOutput::ProhibitExplicitStdin) ; }; my $type = guess_sources_format($content); if (!defined $type || ($type ne 'deb822' and $type ne 'one-line')) { error "cannot determine sources.list format"; } # if last entry is of same type and without filename, # then append if ( scalar @{$sourceslists} > 0 && $sourceslists->[-1]{type} eq $type && !defined $sourceslists->[-1]{fname}) { $sourceslists->[-1]{content} .= ($type eq 'one-line' ? "\n" : "\n\n") . $content; } else { push @{$sourceslists}, { type => $type, fname => undef, content => $content, }; } } elsif ($arg =~ /^deb(-src)? /) { my $content = "$arg\n"; # if last entry is of same type and without filename, # then append if ( scalar @{$sourceslists} > 0 && $sourceslists->[-1]{type} eq 'one-line' && !defined $sourceslists->[-1]{fname}) { $sourceslists->[-1]{content} .= "\n" . $content; } else { push @{$sourceslists}, { type => 'one-line', fname => undef, content => $content, }; } } elsif ($arg =~ /:\/\//) { my $content = join ' ', ( "deb$signedby", $arg, $options->{suite}, "$compstr\n" ); # if last entry is of same type and without filename, # then append if ( scalar @{$sourceslists} > 0 && $sourceslists->[-1]{type} eq 'one-line' && !defined $sourceslists->[-1]{fname}) { $sourceslists->[-1]{content} .= "\n" . 
$content; } else { push @{$sourceslists}, { type => 'one-line', fname => undef, content => $content, }; } } elsif (-f $arg) { my $content = ''; open my $fh, '<', $arg or error "cannot open $arg: $!"; while (my $line = <$fh>) { $content .= $line; } close $fh; my $type = undef; if ($arg =~ /\.list$/) { $type = 'one-line'; } elsif ($arg =~ /\.sources$/) { $type = 'deb822'; } else { $type = guess_sources_format($content); } if (!defined $type || ($type ne 'deb822' and $type ne 'one-line')) { error "cannot determine sources.list format"; } push @{$sourceslists}, { type => $type, fname => basename($arg), content => $content, }; } else { error "invalid mirror: $arg"; } } } else { my $sourceslist = get_sourceslist_by_suite($options->{suite}, $options->{nativearch}, $signedby, $compstr, \%suite_by_vendor); push @{$sourceslists}, { type => 'one-line', fname => undef, content => $sourceslist, }; } } if (scalar @{$sourceslists} == 0) { error "empty apt sources.list"; } debug("sources list entries:"); for my $list (@{$sourceslists}) { if (defined $list->{fname}) { debug("fname: $list->{fname}"); } debug("type: $list->{type}"); debug("content:"); for my $line (split "\n", $list->{content}) { debug(" $line"); } } $options->{sourceslists} = $sourceslists; } if ($options->{target} ne '-') { my $abs_path = abs_path($options->{target}); if (!defined $abs_path) { error "unable to get absolute path of target directory" . " $options->{target}"; } $options->{target} = $abs_path; } if ($options->{target} eq '/') { error "refusing to use the filesystem root as output directory"; } my $tar_compressor = get_tar_compressor($options->{target}); # figure out the right format if ($format eq 'auto') { # (stat(...))[6] is the device identifier which contains the major and # minor numbers for character special files # major 1 and minor 3 is /dev/null on Linux if ( $options->{target} eq '/dev/null' and $OSNAME eq 'linux' and -c '/dev/null' and major((stat("/dev/null"))[6]) == 1 and minor((stat("/dev/null"))[6]) == 3) { $format = 'null'; } elsif ($options->{target} eq '-' and $OSNAME eq 'linux' and major((stat(STDOUT))[6]) == 1 and minor((stat(STDOUT))[6]) == 3) { # by checking the major and minor number of the STDOUT fd we also # can detect redirections to /dev/null and choose the null format # accordingly $format = 'null'; } elsif ($options->{target} ne '-' and -d $options->{target}) { $format = 'directory'; } elsif ( defined $tar_compressor or $options->{target} =~ /\.tar$/ or $options->{target} eq '-' or -p $options->{target} # named pipe (fifo) or -c $options->{target} # character special like /dev/null ) { $format = 'tar'; # check if the compressor is installed if (defined $tar_compressor) { my $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { open(STDOUT, '>', '/dev/null') or error "cannot open /dev/null for writing: $!"; open(STDIN, '<', '/dev/null') or error "cannot open /dev/null for reading: $!"; exec { $tar_compressor->[0] } @{$tar_compressor} or error("cannot exec " . (join " ", @{$tar_compressor}) . ": $!"); } waitpid $pid, 0; if ($? != 0) { error("failed to start " . 
(join " ", @{$tar_compressor})); } } } elsif ($options->{target} =~ /\.(squashfs|sqfs)$/) { $format = 'squashfs'; # check if tar2sqfs is installed my $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { open(STDOUT, '>', '/dev/null') or error "cannot open /dev/null for writing: $!"; open(STDIN, '<', '/dev/null') or error "cannot open /dev/null for reading: $!"; exec('tar2sqfs', '--version') or error("cannot exec tar2sqfs --version: $!"); } waitpid $pid, 0; if ($? != 0) { error("failed to start tar2sqfs --version"); } } elsif ($options->{target} =~ /\.ext2$/) { $format = 'ext2'; # check if the installed version of genext2fs supports tarballs on # stdin (undef, my $filename) = tempfile( "mmdebstrap.ext2.XXXXXXXXXXXX", OPEN => 0, TMPDIR => 1 ); open my $fh, '|-', 'genext2fs', '-B', '1024', '-b', '8', '-N', '11', '-a', '-', $filename // error "failed to fork(): $!"; # write 10240 null-bytes to genext2fs -- this represents an empty # tar archive print $fh ("\0" x 10240) or error "cannot write to genext2fs process"; close $fh; my $exitstatus = $?; unlink $filename // die "cannot unlink $filename"; if ($exitstatus != 0) { error "genext2fs failed with exit status: $exitstatus"; } } else { $format = 'directory'; } info "automatically chosen format: $format"; } if ($options->{target} eq '-' and $format ne 'tar' and $format ne 'null') { error "the $format format is unable to write to standard output"; } if ($format eq 'null' and none { $_ eq $options->{target} } ('-', '/dev/null')) { info "ignoring target $options->{target} with null format"; } if (any { $_ eq $format } ('tar', 'squashfs', 'ext2', 'null')) { if ($format ne 'null') { if ( any { $_ eq $options->{variant} } ('extract', 'custom') and any { $_ eq $options->{mode} } ('fakechroot', 'proot')) { info "creating a tarball or squashfs image or ext2 image in" . " fakechroot mode or proot mode might fail in extract and" . " custom variants because there might be no tar inside the" . " chroot"; } # try to fail early if target tarball or squashfs image cannot be # opened for writing if ($options->{target} ne '-') { if ($options->{dryrun}) { if (-e $options->{target}) { info "not overwriting $options->{target} because in" . " dry-run mode"; } } else { open my $fh, '>', $options->{target} or error "cannot open $options->{target} for writing: $!"; close $fh; } } } # since the output is a tarball, we create the rootfs in a temporary # directory $options->{root} = tempdir('mmdebstrap.XXXXXXXXXX', TMPDIR => 1); info "using $options->{root} as tempdir"; # in unshare and root mode, other users than the current user need to # access the rootfs, most prominently, the _apt user. Thus, make the # temporary directory world readable. if ( any { $_ eq $options->{mode} } ('unshare', 'root') or ($EFFECTIVE_USER_ID == 0 and $options->{mode} eq 'chrootless') ) { chmod 0755, $options->{root} or error "cannot chmod root: $!"; } } elsif ($format eq 'directory') { # user does not seem to have specified a tarball as output, thus work # directly in the supplied directory $options->{root} = $options->{target}; if (-e $options->{root}) { if (!-d $options->{root}) { error "$options->{root} exists and is not a directory"; } if (any { $_ eq 'check/empty' } @{ $options->{skip} }) { info "skipping check/empty as requested"; } else { # check if the directory is empty or contains nothing more than # an empty lost+found directory. The latter exists on freshly # created ext3 and ext4 partitions. 
# rationale for requiring an empty directory: # https://bugs.debian.org/833525 opendir(my $dh, $options->{root}) or error "Can't opendir($options->{root}): $!"; while (my $entry = readdir $dh) { # skip the "." and ".." entries next if $entry eq "."; next if $entry eq ".."; # if the entry is a directory named "lost+found" then skip # it, if it's empty if ($entry eq "lost+found" and -d "$options->{root}/$entry") { opendir(my $dh2, "$options->{root}/$entry"); # Attempt reading the directory thrice. If the third # time succeeds, then it has more entries than just "." # and ".." and must thus not be empty. readdir $dh2; readdir $dh2; # rationale for requiring an empty directory: # https://bugs.debian.org/833525 if (readdir $dh2) { error "$options->{root} contains a non-empty" . " lost+found directory"; } closedir($dh2); } else { error "$options->{root} is not empty"; } } closedir($dh); } } else { my $num_created = make_path "$options->{root}", { error => \my $err }; if ($err && @$err) { error(join "; ", (map { "cannot create " . (join ": ", %{$_}) } @$err)); } elsif ($num_created == 0) { error "cannot create $options->{root}"; } } } else { error "unknown format: $format"; } # check for double quotes because apt doesn't allow to escape them and # thus paths with double quotes are invalid in the apt config if ($options->{root} =~ /"/) { error "apt cannot handle paths with double quotes"; } my @idmap; # for unshare mode the rootfs directory has to have appropriate # permissions if ($EFFECTIVE_USER_ID != 0 and $options->{mode} eq 'unshare') { @idmap = read_subuid_subgid; # sanity check if ( scalar(@idmap) != 2 || $idmap[0][0] ne 'u' || $idmap[1][0] ne 'g' || !length $idmap[0][2] || !length $idmap[1][2]) { error "invalid idmap"; } my $outer_gid = $REAL_GROUP_ID + 0; my $pid = get_unshare_cmd( sub { chown 1, 1, $options->{root} }, [ ['u', '0', $REAL_USER_ID, '1'], ['g', '0', $outer_gid, '1'], ['u', '1', $idmap[0][2], '1'], ['g', '1', $idmap[1][2], '1']]); waitpid $pid, 0; $? == 0 or error "chown failed"; } # figure out whether we have mknod $options->{havemknod} = 0; if ($options->{mode} eq 'unshare') { my $pid = get_unshare_cmd( sub { $options->{havemknod} = havemknod($options->{root}); }, \@idmap ); waitpid $pid, 0; $? == 0 or error "havemknod failed"; } elsif ( any { $_ eq $options->{mode} } ('root', 'fakechroot', 'proot', 'chrootless') ) { $options->{havemknod} = havemknod($options->{root}); } else { error "unknown mode: $options->{mode}"; } my $devtar = ''; # We always craft the /dev entries ourselves if a tarball is to be created if (any { $_ eq $format } ('tar', 'squashfs', 'ext2')) { foreach my $file (@devfiles) { my ($fname, $mode, $type, $linkname, $devmajor, $devminor) = @{$file}; if (length "./dev/$fname" > 100) { error "tar entry cannot exceed 100 characters"; } my $entry = pack( 'a100 a8 a8 a8 a12 a12 A8 a1 a100 a8 a32 a32 a8 a8 a155 x12', "./dev/$fname", sprintf('%07o', $mode), sprintf('%07o', 0), # uid sprintf('%07o', 0), # gid sprintf('%011o', 0), # size sprintf('%011o', $mtime), '', # checksum $type, $linkname, "ustar ", '', # username '', # groupname defined($devmajor) ? sprintf('%07o', $devmajor) : '', defined($devminor) ? 
sprintf('%07o', $devminor) : '', '', # prefix ); # compute and insert checksum substr($entry, 148, 7) = sprintf("%06o\0", unpack("%16C*", $entry)); $devtar .= $entry; } } elsif (any { $_ eq $format } ('directory', 'null')) { # nothing to do } else { error "unknown format: $format"; } my $exitstatus = 0; my @taropts = ( '--sort=name', "--mtime=\@$mtime", '--clamp-mtime', '--numeric-owner', '--one-file-system', '--format=pax', '--pax-option=exthdr.name=%d/PaxHeaders/%f,delete=atime,delete=ctime', '-c', '--exclude=./dev' ); # tar2sqfs and genext2fs do not support extended attributes if ($format eq "squashfs") { # tar2sqfs supports user.*, trusted.* and security.* but not system.* # https://bugs.debian.org/988100 # lib/sqfs/xattr/xattr.c of https://github.com/AgentD/squashfs-tools-ng # https://github.com/AgentD/squashfs-tools-ng/issues/83 # https://github.com/AgentD/squashfs-tools-ng/issues/25 warning("tar2sqfs does not support extended attributes" . " from the 'system' namespace"); push @taropts, '--xattrs', '--xattrs-exclude=system.*'; } elsif ($format eq "ext2") { warning "genext2fs does not support extended attributes"; } else { push @taropts, '--xattrs'; } # disable signals so that we can fork and change behaviour of the signal # handler in the parent and child without getting interrupted my $sigset = POSIX::SigSet->new(SIGINT, SIGHUP, SIGPIPE, SIGTERM); POSIX::sigprocmask(SIG_BLOCK, $sigset) or error "Can't block signals: $!"; my $pid; # a pipe to transfer the final tarball from the child to the parent pipe my $rfh, my $wfh; # instead of two pipe calls, creating four file handles, we use socketpair socketpair my $childsock, my $parentsock, AF_UNIX, SOCK_STREAM, PF_UNSPEC or error "socketpair failed: $!"; $options->{hooksock} = $childsock; # for communicating the required number of blocks, we don't need # bidirectional communication, so a pipe() is enough # we don't communicate this via the hook communication because # a) this would abuse the functionality exclusively for hooks # b) it puts code writing the protocol outside of the helper/listener # c) the forked listener process cannot communicate to its parent pipe my $nblkreader, my $nblkwriter or error "pipe failed: $!"; if ($options->{mode} eq 'unshare') { $pid = get_unshare_cmd( sub { # child local $SIG{'INT'} = 'DEFAULT'; local $SIG{'HUP'} = 'DEFAULT'; local $SIG{'PIPE'} = 'DEFAULT'; local $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $rfh; close $parentsock; open(STDOUT, '>&', STDERR) or error "cannot open STDOUT: $!"; setup($options); print $childsock (pack('n', 0) . 'adios'); $childsock->flush(); close $childsock; close $nblkreader; if (!$options->{dryrun} && $format eq 'ext2') { my $numblocks = approx_disk_usage($options->{root}); print $nblkwriter "$numblocks\n"; $nblkwriter->flush(); } close $nblkwriter; if ($options->{dryrun}) { info "simulate creating tarball..."; } elsif (any { $_ eq $format } ('tar', 'squashfs', 'ext2')) { info "creating tarball..."; # redirect tar output to the writing end of the pipe so # that the parent process can capture the output open(STDOUT, '>&', $wfh) or error "cannot open STDOUT: $!"; # Add ./dev as the first entries of the tar file. # We cannot add them after calling tar, because there is no # way to prevent tar from writing NULL entries at the end. 
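# This works because a tar archive is just a sequence of 512-byte header
# blocks plus file data: the hand-crafted ./dev entries printed here are
# simply followed on the same stream by the archive that tar produces, and
# only that final tar invocation emits the terminating NULL blocks.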
if (any { $_ eq 'output/dev' } @{ $options->{skip} }) { info "skipping output/dev as requested"; } else { print $devtar; } # pack everything except ./dev 0 == system('tar', @taropts, '-C', $options->{root}, '.') or error "tar failed: $?"; info "done"; } elsif (any { $_ eq $format } ('directory', 'null')) { # nothing to do } else { error "unknown format: $format"; } exit 0; }, \@idmap ); } elsif ( any { $_ eq $options->{mode} } ('root', 'fakechroot', 'proot', 'chrootless') ) { $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { local $SIG{'INT'} = 'DEFAULT'; local $SIG{'HUP'} = 'DEFAULT'; local $SIG{'PIPE'} = 'DEFAULT'; local $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $rfh; close $parentsock; open(STDOUT, '>&', STDERR) or error "cannot open STDOUT: $!"; setup($options); print $childsock (pack('n', 0) . 'adios'); $childsock->flush(); close $childsock; close $nblkreader; if (!$options->{dryrun} && $format eq 'ext2') { my $numblocks = approx_disk_usage($options->{root}); print $nblkwriter $numblocks; $nblkwriter->flush(); } close $nblkwriter; if ($options->{dryrun}) { info "simulate creating tarball..."; } elsif (any { $_ eq $format } ('tar', 'squashfs', 'ext2')) { info "creating tarball..."; # redirect tar output to the writing end of the pipe so that # the parent process can capture the output open(STDOUT, '>&', $wfh) or error "cannot open STDOUT: $!"; # Add ./dev as the first entries of the tar file. # We cannot add them after calling tar, because there is no way # to prevent tar from writing NULL entries at the end. if (any { $_ eq 'output/dev' } @{ $options->{skip} }) { info "skipping output/dev as requested"; } else { print $devtar; } if ($options->{mode} eq 'fakechroot') { # By default, FAKECHROOT_EXCLUDE_PATH includes /proc and # /sys which means that the resulting tarball will contain # the permission and ownership information of /proc and # /sys from the outside, which we want to avoid. ## no critic (Variables::RequireLocalizedPunctuationVars) $ENV{FAKECHROOT_EXCLUDE_PATH} = "/dev"; # Fakechroot requires tar to run inside the chroot or # otherwise absolute symlinks will include the path to the # root directory 0 == system('/usr/sbin/chroot', $options->{root}, 'tar', @taropts, '-C', '/', '.') or error "tar failed: $?"; } elsif ($options->{mode} eq 'proot') { # proot requires tar to run inside proot or otherwise # permissions will be completely off my @qemuopt = (); if (defined $options->{qemu}) { push @qemuopt, "--qemu=qemu-$options->{qemu}"; push @taropts, "--exclude=./host-rootfs"; } 0 == system('proot', '--root-id', "--rootfs=$options->{root}", '--cwd=/', @qemuopt, 'tar', @taropts, '-C', '/', '.') or error "tar failed: $?"; } elsif ( any { $_ eq $options->{mode} } ('root', 'chrootless') ) { # If the chroot directory is not owned by the root user, # then we assume that no measure was taken to fake root # permissions. Since the final tarball should contain # entries with root ownership, we instruct tar to do so. 
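# In that case the effective invocation is roughly (sketch):
#     tar --owner=0 --group=0 --numeric-owner ... -C <root> .
# so every entry is recorded as uid 0 / gid 0 regardless of the on-disk
# ownership.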
my @owneropts = (); if ((stat $options->{root})[4] != 0) { push @owneropts, '--owner=0', '--group=0', '--numeric-owner'; } 0 == system('tar', @taropts, @owneropts, '-C', $options->{root}, '.') or error "tar failed: $?"; } else { error "unknown mode: $options->{mode}"; } info "done"; } elsif (any { $_ eq $format } ('directory', 'null')) { # nothing to do } else { error "unknown format: $format"; } exit 0; } } else { error "unknown mode: $options->{mode}"; } # parent my $got_signal = 0; my $waiting_for = "setup"; my $ignore = sub { $got_signal = shift; info "main() received signal $got_signal: waiting for $waiting_for..."; }; local $SIG{'INT'} = $ignore; local $SIG{'HUP'} = $ignore; local $SIG{'PIPE'} = $ignore; local $SIG{'TERM'} = $ignore; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $wfh; close $childsock; debug "starting to listen for hooks"; # handle special hook commands via parentsock my $lpid = fork() // error "fork() failed: $!"; if ($lpid == 0) { # whatever the script writes on stdout is sent to the # socket # whatever is written to the socket, send to stdin open(STDOUT, '>&', $parentsock) or error "cannot open STDOUT: $!"; open(STDIN, '<&', $parentsock) or error "cannot open STDIN: $!"; my @prefix = (); if ($is_covering) { @prefix = ($EXECUTABLE_NAME, "-MDevel::Cover=-silent,-nogcov"); } exec @prefix, $PROGRAM_NAME, "--hook-listener", $verbosity_level; } waitpid($lpid, 0); if ($? != 0) { # we cannot die here because that would leave the other thread # running without a parent warning "listening on child socket failed: $@"; $exitstatus = 1; } debug "finish to listen for hooks"; close $parentsock; my $numblocks = 0; close $nblkwriter; if (!$options->{dryrun} && $format eq 'ext2') { chomp($numblocks = <$nblkreader>); } close $nblkreader; if ($options->{dryrun}) { # nothing to do } elsif (any { $_ eq $format } ('directory', 'null')) { # nothing to do } elsif (any { $_ eq $format } ('tar', 'squashfs', 'ext2')) { # we use eval() so that error() doesn't take this process down and # thus leaves the setup() process without a parent eval { if ($options->{target} eq '-') { if (!copy($rfh, *STDOUT)) { error "cannot copy to standard output: $!"; } } else { if ( $format eq 'squashfs' or $format eq 'ext2' or defined $tar_compressor) { my @argv = (); if ($format eq 'squashfs') { push @argv, 'tar2sqfs', '--quiet', '--no-skip', '--force', '--exportable', '--compressor', 'xz', '--block-size', '1048576', $options->{target}; } elsif ($format eq 'ext2') { if ($numblocks <= 0) { error "invalid number of blocks: $numblocks"; } push @argv, 'genext2fs', '-B', 1024, '-b', $numblocks, '-i', '16384', '-a', '-', $options->{target}; } elsif ($format eq 'tar') { push @argv, @{$tar_compressor}; } else { error "unknown format: $format"; } POSIX::sigprocmask(SIG_BLOCK, $sigset) or error "Can't block signals: $!"; my $cpid = fork() // error "fork() failed: $!"; if ($cpid == 0) { # child: default signal handlers local $SIG{'INT'} = 'DEFAULT'; local $SIG{'HUP'} = 'DEFAULT'; local $SIG{'PIPE'} = 'DEFAULT'; local $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle # them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; # redirect stdout to file or /dev/null if ($format eq 'squashfs' or $format eq 'ext2') { open(STDOUT, '>', '/dev/null') or error "cannot open /dev/null for writing: $!"; } elsif ($format eq 'tar') { open(STDOUT, '>', $options->{target}) or error "cannot 
open $options->{target} for writing: $!"; } else { error "unknown format: $format"; } open(STDIN, '<&', $rfh) or error "cannot open file handle for reading: $!"; eval { Devel::Cover::set_coverage("none") } if $is_covering; exec { $argv[0] } @argv or error("cannot exec " . (join " ", @argv) . ": $!"); } POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; waitpid $cpid, 0; if ($? != 0) { error("failed to run " . (join " ", @argv)); } } else { if (!copy($rfh, $options->{target})) { error "cannot copy to $options->{target}: $!"; } } } }; if ($@) { # we cannot die here because that would leave the other thread # running without a parent # We send SIGHUP to all our processes (including eventually # running tar and this process itself) to reliably tear down # all running child processes. The main process is not affected # because we are ignoring SIGHUP. warning "creating tarball failed: $@"; kill HUP => -getpgrp(); $exitstatus = 1; } } else { error "unknown format: $format"; } close($rfh); waitpid $pid, 0; if ($? != 0) { $exitstatus = 1; } # change signal handler message $waiting_for = "cleanup"; if (any { $_ eq $format } ('directory')) { # nothing to do } elsif (any { $_ eq $format } ('tar', 'squashfs', 'ext2', 'null')) { if (!-e $options->{root}) { error "$options->{root} does not exist"; } info "removing tempdir $options->{root}..."; if ($options->{mode} eq 'unshare') { # We don't have permissions to remove the directory outside # the unshared namespace, so we remove it here. # Since this is still inside the unshared namespace, there is # no risk of removing anything important. $pid = get_unshare_cmd( sub { # change CWD to chroot directory because find tries to # chdir to the current directory which might not be # accessible by the unshared user: # find: Failed to restore initial working directory 0 == system('env', "--chdir=$options->{root}", 'find', $options->{root}, '-mount', '-mindepth', '1', '-delete') or error "rm failed: $?"; # ignore failure in case the unshared user doesn't have the # required permissions -- we attempt again later if # necessary rmdir "$options->{root}"; }, \@idmap ); waitpid $pid, 0; $? == 0 or error "remove_tree failed"; # in unshare mode, the toplevel directory might've been created in # a directory that the unshared user cannot change and thus cannot # delete. We attempt its removal again outside as the normal user. if (-e $options->{root}) { rmdir "$options->{root}" or error "cannot rmdir $options->{root}: $!"; } } elsif ( any { $_ eq $options->{mode} } ('root', 'fakechroot', 'proot', 'chrootless') ) { # without unshare, we use the system's rm to recursively remove the # temporary directory just to make sure that we do not accidentally # remove more than we should by using --one-file-system. # # --interactive=never is needed when in proot mode, the # write-protected file /apt/apt.conf.d/01autoremove-kernels is to # be removed. 0 == system('rm', '--interactive=never', '--recursive', '--preserve-root', '--one-file-system', $options->{root}) or error "rm failed: $?"; } else { error "unknown mode: $options->{mode}"; } } else { error "unknown format: $format"; } if ($got_signal) { $exitstatus = 1; } if ($exitstatus == 0) { my $duration = Time::HiRes::time - $before; info "success in " . (sprintf "%.04f", $duration) . 
" seconds"; exit 0; } error "mmdebstrap failed to run"; return 1; } main(); __END__ =head1 NAME mmdebstrap - multi-mirror Debian chroot creation =head1 SYNOPSIS B [B] [I [I [I...]]] =head1 DESCRIPTION B creates a Debian chroot of I into I from one or more Is. It is meant as an alternative to the debootstrap tool (see section B). In contrast to debootstrap it uses apt to resolve dependencies and is thus able to use more than one mirror and resolve more complex dependencies. If no I option is provided, L is used. If I is a stable release name and no I is specified, then mirrors for updates and security are automatically added. If a I option starts with "deb " or "deb-src " then it is used as a one-line-style format entry for apt's sources.list inside the chroot. If a I option contains a "://" then it is interpreted as a mirror URI and the apt line inside the chroot is assembled as "deb [arch=A] B C D" where A is the host's native architecture, B is the I, C is the given I and D is the components given via B<--components> (defaults to "main"). If a I option happens to be an existing file, then its contents are pasted into the chroot's sources.list. This can be used to supply a deb822 style sources.list. If I is C<-> then standard input is pasted into the chroot's sources.list. More than one mirror can be specified and are appended to the chroot's sources.list in the given order. If you specify a https or tor I and you want the chroot to be able to update itself, don't forget to also install the ca-certificates package, the apt-transport-https package for apt versions less than 1.5 and/or the apt-transport-tor package using the B<--include> option, as necessary. The optional I argument can either be the path to a directory, the path to a tarball filename, the path to a squashfs image, the path to an ext2 image, a FIFO, a character special device, or C<->. Without the B<--format> option, I will be used to choose the format. See the section B for more information. If no I was specified or if I is C<->, an uncompressed tarball will be sent to standard output. The I may be a valid release code name (eg, sid, stretch, jessie) or a symbolic name (eg, unstable, testing, stable, oldstable). Any suite name that works with apt on the given mirror will work. If no I was specified, then a single I C<-> is added and thus the information of the desired suite has to come from standard input as part of a valid apt sources.list file. The value of the I argument will be used to determine which apt index to use for finding out the set of C packages and/or the set of packages with the right priority for the selected variant. See the section B for more information. All status output is printed to standard error unless B<--logfile> is used to redirect it to a file or B<--quiet> or B<--silent> is used to suppress any output on standard error. Help and version information will be printed to standard error with the B<--help> and B<--version> options, respectively. Otherwise, an uncompressed tarball might be sent to standard output if I is C<-> or if no I was specified. =head1 OPTIONS Options are case insensitive. Short options may be bundled. Long options require a double dash and may be abbreviated to uniqueness. =over 8 =item B<-h,--help> Print synopsis and options of this man page and exit. =item B<--man> Show the full man page as generated from Perl POD in a pager. This requires the perldoc program from the perl-doc package. 
This is the same as running:

    pod2man /usr/bin/mmdebstrap | man -l -

=item B<--version>

Print the B<mmdebstrap> version and exit.

=item B<--variant>=I<name>

Choose which package set to install. Valid variant I<name>s are B<extract>,
B<custom>, B<essential>, B<apt>, B<required>, B<minbase>, B<buildd>,
B<important>, B<debootstrap>, B<->, and B<standard>. The default variant is
B<debootstrap>. See the section B<VARIANTS> for more information.

=item B<--mode>=I<name>

Choose how to perform the chroot operation and create a filesystem with
ownership information different from the current user. Valid mode I<name>s are
B<auto>, B<sudo>, B<root>, B<unshare>, B<fakeroot>, B<fakechroot>, B<proot>
and B<chrootless>. The default mode is B<auto>. See the section B<MODES> for
more information.

=item B<--format>=I<name>

Choose the output format. Valid format I<name>s are B<auto>, B<directory>,
B<tar>, B<squashfs>, B<ext2> and B<null>. The default format is B<auto>. See
the section B<FORMATS> for more information.

=item B<--aptopt>=I<option|file>