mmdebstrap-0.4.1-6d774a3d92ed7b584a2021d3485b30280498080c/.gitignore
shared

mmdebstrap-0.4.1-6d774a3d92ed7b584a2021d3485b30280498080c/CHANGELOG.md
0.4.1 (2019-03-01)
------------------
 - re-enable fakechroot mode testing
 - disable apt sandboxing if necessary
 - keep apt and dpkg lock files

0.4.0 (2019-02-23)
------------------
 - disable merged-usr
 - add --verbose option that prints apt and dpkg output instead of progress bars
 - add --quiet/--silent options which print nothing on stderr
 - add --debug option for even more output than with --verbose
 - add some no-op options to make mmdebstrap a drop-in replacement for certain
   debootstrap wrappers like sbuild-createchroot
 - add --logfile option which outputs to a file what would otherwise be
   written to stderr
 - add --version option

0.3.0 (2018-11-21)
------------------
 - add chrootless mode
 - add extract and custom variants
 - make testsuite unprivileged through qemu and guestfish
 - allow empty lost+found directory in target
 - add 54 testcases and fix lots of bugs as a result

0.2.0 (2018-10-03)
------------------
 - if no MIRROR was specified but there was data on standard input, then use
   that data as the sources.list instead of falling back to the default mirror
 - lots of bug fixes

0.1.0 (2018-09-24)
------------------
 - initial release

mmdebstrap-0.4.1-6d774a3d92ed7b584a2021d3485b30280498080c/README.md
mmdebstrap
==========

An alternative to debootstrap which uses apt internally and is thus able to
use more than one mirror and resolve more complex dependencies.

Usage
-----

Use like debootstrap:

    sudo mmdebstrap unstable ./unstable-chroot

Without superuser privileges:

    mmdebstrap unstable unstable-chroot.tar

With complex apt options:

    cat /etc/apt/sources.list | mmdebstrap > unstable-chroot.tar

The sales pitch in comparison to debootstrap
--------------------------------------------

Summary:

 - more than one mirror possible
 - security and updates mirror included for Debian stable chroots
 - 3-6 times faster
 - chroot with apt in 11 seconds
 - gzipped tarball with apt is only 27M
 - bit-by-bit reproducible output
 - unprivileged operation using Linux user namespaces, fakechroot or proot
 - can operate on filesystems mounted with nodev
 - foreign architecture chroots with qemu-user

The author believes that a chroot of a Debian stable release should include
the latest packages, including security fixes, by default. This has been a
wontfix with debootstrap since 2009 (see #543819 and #762222). Since
mmdebstrap uses apt internally, support for multiple mirrors comes for free
and stable or oldstable **chroots will include security and updates
mirrors**.

A side-effect of using apt is being **3-6 times faster** than debootstrap.
The timings were carried out on a laptop with an Intel Core i5-5200U.

| variant | mmdebstrap | debootstrap |
| ------- | ---------- | ----------- |
| minbase | 14.18 s    | 51.47 s     |
| buildd  | 20.55 s    | 59.38 s     |
| -       | 18.98 s    | 127.18 s    |

Apt considers itself an `Essential: yes` package.
This feature allows one to create a chroot containing just the
`Essential: yes` packages and apt (and their hard dependencies) in
**just 11 seconds**.

If desired, the most minimal chroot with just the `Essential: yes` packages
and their hard dependencies can be created with a gzipped tarball size of
just 34M. By using dpkg's `--path-exclude` option to exclude documentation,
even smaller gzipped tarballs of 21M in size are possible. If apt is
included, the result is a **gzipped tarball of only 27M**.

These small sizes are also achieved because apt caches and other cruft are
stripped from the chroot. This also makes the result **bit-by-bit
reproducible** if the `$SOURCE_DATE_EPOCH` environment variable is set.

The author believes that it should not be necessary to have superuser
privileges to create a file (the chroot tarball) in one's home directory.
Thus, mmdebstrap provides multiple options to create a chroot tarball with
the right permissions **without superuser privileges**. Depending on what is
available, it uses either Linux user namespaces, fakechroot or proot.
Debootstrap supports fakechroot but will not create a tarball with the right
permissions by itself. Support for Linux user namespaces and proot is missing
from debootstrap (see bugs #829134 and #698347, respectively).

When creating a chroot tarball with debootstrap, the temporary chroot
directory cannot be on a filesystem that has been mounted with nodev. In
unprivileged mode, **mknod is never used**, which means that /tmp can be used
as a temporary directory location even if it is mounted with nodev as a
security measure.

If the chroot architecture cannot be executed by the current machine,
qemu-user is used to allow one to create a **foreign architecture chroot**.

Limitations in comparison to debootstrap
----------------------------------------

Debootstrap supports creating a Debian chroot on non-Debian systems but
mmdebstrap requires apt.

There is no `SCRIPT` argument.

There is no `--second-stage` option.

Tests
=====

The script `coverage.sh` runs mmdebstrap in all kinds of scenarios to execute
all code paths of the script. It verifies its output in each scenario and
displays the results gathered with Devel::Cover. It also compares the output
of mmdebstrap with debootstrap in several scenarios.

Bugs
====

mmdebstrap has bugs. Report them here:
https://gitlab.mister-muffin.de/josch/mmdebstrap/issues

mmdebstrap-0.4.1-6d774a3d92ed7b584a2021d3485b30280498080c/coverage.sh
#!/bin/sh

set -eu

mirrordir="./shared/cache/debian"

./make_mirror.sh

# we use -f because the file might not exist
rm -f shared/cover_db.img

: "${HAVE_QEMU:=yes}"

if [ "$HAVE_QEMU" = "yes" ]; then
    # prepare image for cover_db
    guestfish -N shared/cover_db.img=disk:200M -- mkfs vfat /dev/sda

    if [ ! -e "./shared/cache/debian-unstable.qcow" ]; then
        echo "./shared/cache/debian-unstable.qcow does not exist" >&2
        exit 1
    fi
fi

# check if all required debootstrap tarballs exist
notfound=0
for dist in stable testing unstable; do
    for variant in minbase buildd -; do
        # skip because of different userids for apt/systemd
        if [ "$dist" = 'stable' ] && [ "$variant" = '-' ]; then
            continue
        fi
        # skip because of #917386 and #917407
        if [ "$dist" = 'unstable' -o "$dist" = 'testing' ] && [ "$variant" = '-' ]; then
            continue
        fi
        if [ !
-e "shared/cache/debian-$dist-$variant.tar" ]; then echo "shared/cache/debian-$dist-$variant.tar does not exist" >&2 notfound=1 fi done done if [ "$notfound" -ne 0 ]; then echo "not all required debootstrap tarballs are present" >&2 exit 1 fi # only copy if necessary if [ ! -e shared/mmdebstrap ] || [ mmdebstrap -nt shared/mmdebstrap ]; then cp -a mmdebstrap shared fi starttime= total=88 i=1 print_header() { echo ------------------------------------------------------------------------------ echo "($i/$total) $1" if [ -z "$starttime" ]; then starttime=$(date +%s) else currenttime=$(date +%s) timeleft=$(((total-i+1)*(currenttime-starttime)/(i-1))) printf "time left: %02d:%02d:%02d\n" $((timeleft/3600)) $(((timeleft%3600)/60)) $((timeleft%60)) fi echo ------------------------------------------------------------------------------ i=$((i+1)) } nativearch=$(dpkg --print-architecture) # choose the timestamp of the unstable Release file, so that we get # reproducible results for the same mirror timestamp SOURCE_DATE_EPOCH=$(date --date="$(grep-dctrl -s Date -n '' "$mirrordir/dists/unstable/Release")" +%s) # for traditional sort order that uses native byte values export LC_ALL=C.UTF-8 : "${HAVE_UNSHARE:=yes}" : "${HAVE_PROOT:=yes}" : "${HAVE_BINFMT:=yes}" defaultmode="auto" if [ "$HAVE_UNSHARE" != "yes" ]; then defaultmode="root" fi # by default, use the mmdebstrap executable in the current directory together # with perl Devel::Cover but allow to overwrite this : "${CMD:=perl -MDevel::Cover=-silent,-nogcov ./mmdebstrap}" mirror="http://127.0.0.1/debian" for dist in stable testing unstable; do for variant in minbase buildd -; do # skip because of different userids for apt/systemd if [ "$dist" = 'stable' ] && [ "$variant" = '-' ]; then continue fi # skip because of #917386 and #917407 if [ "$dist" = 'unstable' -o "$dist" = 'testing' ] && [ "$variant" = '-' ]; then continue fi print_header "mode=root,variant=$variant: check against debootstrap $dist" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH $CMD --variant=$variant --mode=root $dist /tmp/debian-$dist-mm.tar $mirror mkdir /tmp/debian-$dist-mm tar -C /tmp/debian-$dist-mm -xf /tmp/debian-$dist-mm.tar mkdir /tmp/debian-$dist-debootstrap tar -C /tmp/debian-$dist-debootstrap -xf "cache/debian-$dist-$variant.tar" # diff cannot compare device nodes, so we use tar to do that for us and then # delete the directory tar -C /tmp/debian-$dist-debootstrap -cf dev1.tar ./dev tar -C /tmp/debian-$dist-mm -cf dev2.tar ./dev cmp dev1.tar dev2.tar rm dev1.tar dev2.tar rm -r /tmp/debian-$dist-debootstrap/dev /tmp/debian-$dist-mm/dev # remove downloaded deb packages rm /tmp/debian-$dist-debootstrap/var/cache/apt/archives/*.deb # remove aux-cache rm /tmp/debian-$dist-debootstrap/var/cache/ldconfig/aux-cache # remove logs rm /tmp/debian-$dist-debootstrap/var/log/dpkg.log \ /tmp/debian-$dist-debootstrap/var/log/bootstrap.log \ /tmp/debian-$dist-mm/var/log/apt/eipp.log.xz \ /tmp/debian-$dist-debootstrap/var/log/alternatives.log # remove *-old files rm /tmp/debian-$dist-debootstrap/var/cache/debconf/config.dat-old \ /tmp/debian-$dist-mm/var/cache/debconf/config.dat-old rm /tmp/debian-$dist-debootstrap/var/cache/debconf/templates.dat-old \ /tmp/debian-$dist-mm/var/cache/debconf/templates.dat-old rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/status-old \ /tmp/debian-$dist-mm/var/lib/dpkg/status-old # remove dpkg files rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/available \ 
/tmp/debian-$dist-debootstrap/var/lib/dpkg/cmethopt touch /tmp/debian-$dist-debootstrap/var/lib/dpkg/available # since we installed packages directly from the .deb files, Priorities differ # thus we first check for equality and then remove the files chroot /tmp/debian-$dist-debootstrap dpkg --list > dpkg1 chroot /tmp/debian-$dist-mm dpkg --list > dpkg2 diff -u dpkg1 dpkg2 rm dpkg1 dpkg2 grep -v '^Priority: ' /tmp/debian-$dist-debootstrap/var/lib/dpkg/status > status1 grep -v '^Priority: ' /tmp/debian-$dist-mm/var/lib/dpkg/status > status2 diff -u status1 status2 rm status1 status2 rm /tmp/debian-$dist-debootstrap/var/lib/dpkg/status /tmp/debian-$dist-mm/var/lib/dpkg/status # this file is only created by apt 1.6 or newer rmdir /tmp/debian-$dist-mm/var/lib/apt/lists/auxfiles # debootstrap exposes the hosts's kernel version rm /tmp/debian-$dist-debootstrap/etc/apt/apt.conf.d/01autoremove-kernels \ /tmp/debian-$dist-mm/etc/apt/apt.conf.d/01autoremove-kernels # who creates /run/mount? if [ -e "/tmp/debian-$dist-debootstrap/run/mount/utab" ]; then rm "/tmp/debian-$dist-debootstrap/run/mount/utab" fi if [ -e "/tmp/debian-$dist-debootstrap/run/mount" ]; then rmdir "/tmp/debian-$dist-debootstrap/run/mount" fi # debootstrap doesn't clean apt rm /tmp/debian-$dist-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_${dist}_main_binary-amd64_Packages \ /tmp/debian-$dist-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_${dist}_Release \ /tmp/debian-$dist-debootstrap/var/lib/apt/lists/127.0.0.1_debian_dists_${dist}_Release.gpg if [ "$variant" = "-" ]; then rm /tmp/debian-$dist-debootstrap/etc/machine-id rm /tmp/debian-$dist-mm/etc/machine-id rm /tmp/debian-$dist-debootstrap/var/lib/systemd/catalog/database rm /tmp/debian-$dist-mm/var/lib/systemd/catalog/database fi rm /tmp/debian-$dist-mm/var/cache/apt/archives/lock rm /tmp/debian-$dist-mm/var/lib/apt/extended_states rm /tmp/debian-$dist-mm/var/lib/apt/lists/lock # introduced in dpkg 1.19.1 if [ "$dist" = "stable" ]; then rm /tmp/debian-$dist-mm/var/lib/dpkg/lock-frontend fi # the list of shells might be sorted wrongly for f in "/tmp/debian-$dist-debootstrap/etc/shells" "/tmp/debian-$dist-mm/etc/shells"; do sort -o "\$f" "\$f" done # workaround for https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=917773 awk -v FS=: -v OFS=: -v SDE=\$SOURCE_DATE_EPOCH '{ print \$1,\$2,int(SDE/60/60/24),\$4,\$5,\$6,\$7,\$8,\$9 }' < /tmp/debian-$dist-mm/etc/shadow > /tmp/debian-$dist-mm/etc/shadow.bak mv /tmp/debian-$dist-mm/etc/shadow.bak /tmp/debian-$dist-mm/etc/shadow awk -v FS=: -v OFS=: -v SDE=\$SOURCE_DATE_EPOCH '{ print \$1,\$2,int(SDE/60/60/24),\$4,\$5,\$6,\$7,\$8,\$9 }' < /tmp/debian-$dist-mm/etc/shadow- > /tmp/debian-$dist-mm/etc/shadow-.bak mv /tmp/debian-$dist-mm/etc/shadow-.bak /tmp/debian-$dist-mm/etc/shadow- # check if the file content differs diff --no-dereference --recursive /tmp/debian-$dist-debootstrap /tmp/debian-$dist-mm # check if file properties (permissions, ownership, symlink names, modification time) differ # # we cannot use this (yet) because it cannot copy with paths that have [ or @ in them #fmtree -c -p /tmp/debian-$dist-debootstrap -k flags,gid,link,mode,size,time,uid | sudo fmtree -p /tmp/debian-$dist-mm rm /tmp/debian-$dist-mm.tar rm -r /tmp/debian-$dist-debootstrap /tmp/debian-$dist-mm END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi done done print_header "test --help" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --help | grep --quiet SYNOPSIS END if [ "$HAVE_QEMU" = "yes" ]; 
then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "test --version" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --version | egrep --quiet '^mmdebstrap [0-9](\.[0-9])+$' END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=root,variant=apt: create directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar1.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: fail with unshare as root user" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=unshare --variant=apt unstable /tmp/debian-unstable $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else echo "HAVE_QEMU != yes -- Skipping test..." fi print_header "mode=root,variant=apt: test progress bars on fake tty" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 script -qfc "$CMD --mode=root --variant=apt unstable /tmp/unstable-chroot.tar $mirror" /dev/null tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --debug output on fake tty" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 script -qfc "$CMD --mode=root --debug --variant=apt unstable /tmp/unstable-chroot.tar $mirror" /dev/null tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: existing empty directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-unstable $CMD --mode=root --variant=apt unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: existing directory with lost+found" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-unstable mkdir /tmp/debian-unstable/lost+found $CMD --mode=root --variant=apt unstable /tmp/debian-unstable $mirror rmdir /tmp/debian-unstable/lost+found tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: chroot directory not accessible by _apt user" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mkdir /tmp/debian-unstable chmod 700 /tmp/debian-unstable $CMD --mode=root --variant=apt unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . 
| tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=unshare,variant=apt: create gzip compressed tarball" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 adduser --gecos user --disabled-password user sysctl -w kernel.unprivileged_userns_clone=1 runuser -u user -- $CMD --mode=unshare --variant=apt unstable /tmp/unstable-chroot.tar.gz $mirror tar -tf /tmp/unstable-chroot.tar.gz | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar.gz END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else echo "HAVE_QEMU != yes -- Skipping test..." fi print_header "mode=root,variant=apt: fail with missing lz4" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=root --variant=apt unstable /tmp/unstable-chroot.tar.lz4 $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret exit 1 fi END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: create tarball with /tmp mounted nodev" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 mount -t tmpfs -o nodev,nosuid,size=300M tmpfs /tmp # use --customize-hook to exercise the mounting/unmounting code of block devices in root mode $CMD --mode=root --variant=apt --customize-hook='mount | grep /dev/full' --customize-hook='test "\$(echo foo | tee /dev/full 2>&1 1>/dev/null)" = "tee: /dev/full: No space left on device"' unstable /tmp/unstable-chroot.tar $mirror tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else echo "HAVE_QEMU != yes -- Skipping test..." fi print_header "mode=$defaultmode,variant=apt: read from stdin, write to stdout" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "deb $mirror unstable main" | $CMD --mode=$defaultmode --variant=apt > /tmp/unstable-chroot.tar tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=root,variant=apt: stable default mirror" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << HOSTS >> /etc/hosts 127.0.0.1 deb.debian.org 127.0.0.1 security.debian.org HOSTS apt-cache policy cat /etc/apt/sources.list $CMD --mode=root --variant=apt stable /tmp/debian-unstable cat << SOURCES | cmp /tmp/debian-unstable/etc/apt/sources.list deb http://deb.debian.org/debian stable main deb http://deb.debian.org/debian stable-updates main deb http://security.debian.org/debian-security stable/updates main SOURCES rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else echo "HAVE_QEMU != yes -- Skipping test..." fi print_header "mode=$defaultmode,variant=apt: pass distribution but implicitly write to stdout" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "127.0.0.1 deb.debian.org" >> /etc/hosts $CMD --mode=$defaultmode --variant=apt unstable > /tmp/unstable-chroot.tar tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else echo "HAVE_QEMU != yes -- Skipping test..." 
fi print_header "mode=$defaultmode,variant=apt: mirror is -" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "deb $mirror unstable main" | $CMD --mode=$defaultmode --variant=apt unstable /tmp/unstable-chroot.tar - tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=$defaultmode,variant=apt: mirror is deb..." cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=$defaultmode --variant=apt unstable /tmp/unstable-chroot.tar "deb $mirror unstable main" tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=$defaultmode,variant=apt: mirror is real file" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "deb $mirror unstable main" > /tmp/sources.list $CMD --mode=$defaultmode --variant=apt unstable /tmp/unstable-chroot.tar /tmp/sources.list tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar /tmp/sources.list END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=$defaultmode,variant=apt: no mirror but data on stdin" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo "deb $mirror unstable main" | $CMD --mode=$defaultmode --variant=apt unstable /tmp/unstable-chroot.tar tar -tf /tmp/unstable-chroot.tar | sort > tar2.txt diff -u tar1.txt tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=$defaultmode,variant=apt: invalid mirror" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=$defaultmode --variant=apt unstable /tmp/unstable-chroot.tar $mirror/invalid || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret exit 1 fi rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$defaultmode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi print_header "mode=root,variant=apt: test --include=libc6:armhf" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --architectures=amd64,armhf --include=gcc-8-base:armhf unstable /tmp/debian-unstable $mirror { echo "amd64"; echo "armhf"; } | cmp /tmp/debian-unstable/var/lib/dpkg/arch - rm /tmp/debian-unstable/var/lib/dpkg/arch rm /tmp/debian-unstable/var/log/apt/eipp.log.xz rm /tmp/debian-unstable/var/lib/apt/extended_states rm /tmp/debian-unstable/var/lib/dpkg/info/gcc-8-base:armhf.list rm /tmp/debian-unstable/var/lib/dpkg/info/gcc-8-base:armhf.md5sums rm /tmp/debian-unstable/usr/share/doc/gcc-8-base/README.Debian.armhf.gz rmdir /tmp/debian-unstable/usr/lib/gcc/arm-linux-gnueabihf/8/ rmdir /tmp/debian-unstable/usr/lib/gcc/arm-linux-gnueabihf/ tar -C /tmp/debian-unstable --one-file-system -c . 
| tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --aptopt" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo 'Acquire::Languages "none";' > config $CMD --mode=root --variant=apt --aptopt='Acquire::Check-Valid-Until "false"' --aptopt=config unstable /tmp/debian-unstable $mirror printf 'Acquire::Check-Valid-Until "false";\nAcquire::Languages "none";\n' | cmp /tmp/debian-unstable/etc/apt/apt.conf.d/99mmdebstrap - rm /tmp/debian-unstable/etc/apt/apt.conf.d/99mmdebstrap tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --dpkgopt" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 echo no-pager > config $CMD --mode=root --variant=apt --dpkgopt="path-exclude=/usr/share/doc/*" --dpkgopt=config unstable /tmp/debian-unstable $mirror printf 'path-exclude=/usr/share/doc/*\nno-pager\n' | cmp /tmp/debian-unstable/etc/dpkg/dpkg.cfg.d/99mmdebstrap - rm /tmp/debian-unstable/etc/dpkg/dpkg.cfg.d/99mmdebstrap tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt grep -v '^./usr/share/doc/.' tar1.txt | diff -u - tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --include" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --include=doc-debian unstable /tmp/debian-unstable $mirror rm /tmp/debian-unstable/usr/share/doc-base/debian-* rm -r /tmp/debian-unstable/usr/share/doc/debian rm -r /tmp/debian-unstable/usr/share/doc/doc-debian rm /tmp/debian-unstable/var/log/apt/eipp.log.xz rm /tmp/debian-unstable/var/lib/apt/extended_states rm /tmp/debian-unstable/var/lib/dpkg/info/doc-debian.list rm /tmp/debian-unstable/var/lib/dpkg/info/doc-debian.md5sums tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --setup-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << 'SCRIPT' > customize.sh #!/bin/sh for d in sbin lib; do ln -s usr/\$d "\$1/\$d"; mkdir -p "\$1/usr/\$d"; done SCRIPT chmod +x customize.sh $CMD --mode=root --variant=apt --setup-hook='ln -s usr/bin "\$1/bin"; mkdir -p "\$1/usr/bin"' --setup-hook=./customize.sh unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . 
| tar -t | sort > tar2.txt { sed -e 's/^\.\/bin\//.\/usr\/bin\//;s/^\.\/lib\//.\/usr\/lib\//;s/^\.\/sbin\//.\/usr\/sbin\//;' tar1.txt; echo ./bin; echo ./lib; echo ./sbin; } | sort -u | diff -u - tar2.txt rm customize.sh rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --essential-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << 'SCRIPT' > customize.sh #!/bin/sh echo tzdata tzdata/Zones/Europe select Berlin | chroot "\$1" debconf-set-selections SCRIPT chmod +x customize.sh $CMD --mode=root --variant=apt --include=tzdata --essential-hook='echo tzdata tzdata/Areas select Europe | chroot "\$1" debconf-set-selections' --essential-hook=./customize.sh unstable /tmp/debian-unstable $mirror echo Europe/Berlin | cmp /tmp/debian-unstable/etc/timezone tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort \ | grep -v '^./etc/localtime' \ | grep -v '^./etc/timezone' \ | grep -v '^./usr/sbin/tzconfig' \ | grep -v '^./usr/share/doc/tzdata' \ | grep -v '^./usr/share/zoneinfo' \ | grep -v '^./var/lib/dpkg/info/tzdata.' \ | grep -v '^./var/log/apt/eipp.log.xz$' \ | grep -v '^./var/lib/apt/extended_states$' \ > tar2.txt diff -u tar1.txt tar2.txt rm customize.sh rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test --customize-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 cat << 'SCRIPT' > customize.sh #!/bin/sh chroot "\$1" whoami > "\$1/output2" chroot "\$1" pwd >> "\$1/output2" SCRIPT chmod +x customize.sh $CMD --mode=root --variant=apt --customize-hook='chroot "\$1" sh -c "whoami; pwd" > "\$1/output1"' --customize-hook=./customize.sh unstable /tmp/debian-unstable $mirror printf "root\n/\n" | cmp /tmp/debian-unstable/output1 printf "root\n/\n" | cmp /tmp/debian-unstable/output2 rm /tmp/debian-unstable/output1 rm /tmp/debian-unstable/output2 tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm customize.sh rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test failing --customize-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 ret=0 $CMD --mode=root --variant=apt --customize-hook='chroot "\$1" sh -c "exit 1"' unstable /tmp/debian-unstable $mirror || ret=\$? if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret exit 1 fi rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: test sigint during --customize-hook" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 setsid --wait $CMD --mode=root --variant=apt --customize-hook='touch done && sleep 10 && touch fail' unstable /tmp/debian-unstable $mirror & pid=\$! while sleep 1; do [ -e done ] && break; done rm done pgid=\$(echo \$(ps -p \$pid -o pgid=)) /bin/kill --signal INT -- -\$pgid ret=0 wait \$pid || ret=\$? 
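# the hook creates "done" as soon as it starts and would create "fail" only
# after its 10 second sleep; since SIGINT was sent to the whole process group
# right after "done" appeared, "fail" must not exist and mmdebstrap itself
# must exit with a non-zero status, which is what the checks below verify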
if [ -e fail ]; then echo customize hook was not interrupted rm fail exit 1 fi if [ "\$ret" = 0 ]; then echo expected failure but got exit \$ret exit 1 fi rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: debootstrap no-op options" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --resolve-deps --merged-usr --no-merged-usr unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: --verbose" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --verbose unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: --debug" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --debug unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: --quiet" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --quiet unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi print_header "mode=root,variant=apt: --logfile" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=apt --logfile=log unstable /tmp/debian-unstable $mirror tar -C /tmp/debian-unstable --one-file-system -c . | tar -t | sort > tar2.txt grep --quiet "I: running apt-get update..." log grep --quiet "I: downloading packages with apt..." log grep --quiet "I: extracting archives..." log grep --quiet "I: installing packages..." log grep --quiet "I: cleaning package lists and apt cache..." log diff -u tar1.txt tar2.txt rm -r /tmp/debian-unstable rm log END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi # test all variants for variant in essential apt required minbase buildd important debootstrap - standard; do print_header "mode=root,variant=$variant: create directory" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 $CMD --mode=root --variant=$variant unstable /tmp/unstable-chroot.tar $mirror tar -tf /tmp/unstable-chroot.tar | sort > "$variant.txt" rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh SUDO fi # check if the other modes produce the same result in each variant for mode in unshare fakechroot proot; do # fontconfig doesn't install reproducibly because differences # in /var/cache/fontconfig/. 
See # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864082 if [ "$variant" = standard ]; then continue fi case "$mode" in proot) case "$variant" in important|debootstrap|-|standard) # the systemd postint yields: # chfn: PAM: System error # adduser: `/usr/bin/chfn -f systemd Time Synchronization systemd-timesync' returned error code 1. Exiting. # similar error with fakechroot https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745082#75 # https://github.com/proot-me/PRoot/issues/156 continue ;; esac ;; esac print_header "mode=$mode,variant=$variant: create tarball" if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1 && adduser --gecos user --disabled-password user [ "$mode" = unshare ] && sysctl -w kernel.unprivileged_userns_clone=1 prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=$mode --variant=$variant unstable /tmp/unstable-chroot.tar $mirror # in fakechroot mode, we use a fake ldconfig, so we have to # artificially add some files { tar -tf /tmp/unstable-chroot.tar; [ "$mode" = "fakechroot" ] && printf "./etc/ld.so.cache\n./var/cache/ldconfig/\n"; [ "$mode" = "fakechroot" ] && [ "$variant" != "essential" ] && printf "./etc/.pwd.lock\n"; } | sort | diff -u "./$variant.txt" - rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh fi # Devel::Cover doesn't survive mmdebstrap re-exec-ing itself # with fakechroot, thus, we do an additional run where we # explicitly run mmdebstrap with fakechroot from the start if [ "$mode" = "fakechroot" ]; then print_header "mode=$mode,variant=$variant: create tarball (ver 2)" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(id -u)" -eq 0 ] && ! 
id -u user > /dev/null 2>&1 && adduser --gecos user --disabled-password user prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix fakechroot fakeroot $CMD --mode=$mode --variant=$variant unstable /tmp/unstable-chroot.tar $mirror { tar -tf /tmp/unstable-chroot.tar; printf "./etc/ld.so.cache\n./var/cache/ldconfig/\n"; [ "$variant" != "essential" ] && printf "./etc/.pwd.lock\n"; } | sort | diff -u "./$variant.txt" - rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh fi fi done # some variants are equal and some are strict superset of the last # special case of the buildd variant: nothing is a superset of it case "$variant" in essential) ;; # nothing to compare it to apt) [ $(comm -23 shared/essential.txt shared/apt.txt | wc -l) -eq 0 ] [ $(comm -13 shared/essential.txt shared/apt.txt | wc -l) -gt 0 ] rm shared/essential.txt ;; required) [ $(comm -23 shared/apt.txt shared/required.txt | wc -l) -eq 0 ] [ $(comm -13 shared/apt.txt shared/required.txt | wc -l) -gt 0 ] rm shared/apt.txt ;; minbase) # equal to required cmp shared/required.txt shared/minbase.txt rm shared/required.txt ;; buildd) [ $(comm -23 shared/minbase.txt shared/buildd.txt | wc -l) -eq 0 ] [ $(comm -13 shared/minbase.txt shared/buildd.txt | wc -l) -gt 0 ] rm shared/buildd.txt # we need minbase.txt but not buildd.txt ;; important) [ $(comm -23 shared/minbase.txt shared/important.txt | wc -l) -eq 0 ] [ $(comm -13 shared/minbase.txt shared/important.txt | wc -l) -gt 0 ] rm shared/minbase.txt ;; debootstrap) # equal to important cmp shared/important.txt shared/debootstrap.txt rm shared/important.txt ;; -) # equal to debootstrap cmp shared/debootstrap.txt shared/-.txt rm shared/debootstrap.txt ;; standard) [ $(comm -23 shared/-.txt shared/standard.txt | wc -l) -eq 0 ] [ $(comm -13 shared/-.txt shared/standard.txt | wc -l) -gt 0 ] rm shared/-.txt shared/standard.txt ;; *) exit 1;; esac done # test extract variant also with chrootless mode for mode in root unshare fakechroot proot chrootless; do print_header "mode=$mode,variant=extract: unpack doc-debian" if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(id -u)" -eq 0 ] && ! 
id -u user > /dev/null 2>&1 && adduser --gecos user --disabled-password user [ "$mode" = unshare ] && sysctl -w kernel.unprivileged_userns_clone=1 prefix= [ "\$(id -u)" -eq 0 ] && [ "$mode" != "root" ] && prefix="runuser -u user --" [ "$mode" = "fakechroot" ] && prefix="\$prefix fakechroot fakeroot" \$prefix $CMD --mode=$mode --variant=extract --include=doc-debian unstable /tmp/debian-unstable $mirror # delete contents of doc-debian rm /tmp/debian-unstable/usr/share/doc-base/debian-* rm -r /tmp/debian-unstable/usr/share/doc/debian rm -r /tmp/debian-unstable/usr/share/doc/doc-debian # delete real files rm /tmp/debian-unstable/etc/apt/sources.list rm /tmp/debian-unstable/etc/fstab rm /tmp/debian-unstable/etc/hostname rm /tmp/debian-unstable/etc/resolv.conf rm /tmp/debian-unstable/var/lib/dpkg/status rm /tmp/debian-unstable/var/lib/dpkg/available rm /tmp/debian-unstable/var/cache/apt/archives/lock rm /tmp/debian-unstable/var/lib/dpkg/lock rm /tmp/debian-unstable/var/lib/dpkg/lock-frontend rm /tmp/debian-unstable/var/lib/apt/lists/lock ## delete merged usr symlinks #rm /tmp/debian-unstable/libx32 #rm /tmp/debian-unstable/lib64 #rm /tmp/debian-unstable/lib32 #rm /tmp/debian-unstable/sbin #rm /tmp/debian-unstable/bin #rm /tmp/debian-unstable/lib # delete ./dev (files might exist or not depending on the mode) rm -f /tmp/debian-unstable/dev/console rm -f /tmp/debian-unstable/dev/fd rm -f /tmp/debian-unstable/dev/full rm -f /tmp/debian-unstable/dev/null rm -f /tmp/debian-unstable/dev/ptmx rm -f /tmp/debian-unstable/dev/random rm -f /tmp/debian-unstable/dev/stderr rm -f /tmp/debian-unstable/dev/stdin rm -f /tmp/debian-unstable/dev/stdout rm -f /tmp/debian-unstable/dev/tty rm -f /tmp/debian-unstable/dev/urandom rm -f /tmp/debian-unstable/dev/zero # the rest should be empty directories that we can rmdir recursively find /tmp/debian-unstable -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$mode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi done print_header "mode=chrootless,variant=custom: install doc-debian" cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(id -u)" -eq 0 ] && ! 
id -u user > /dev/null 2>&1 && adduser --gecos user --disabled-password user prefix= [ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --" \$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian unstable /tmp/debian-unstable $mirror # delete contents of doc-debian rm /tmp/debian-unstable/usr/share/doc-base/debian-* rm -r /tmp/debian-unstable/usr/share/doc/debian rm -r /tmp/debian-unstable/usr/share/doc/doc-debian # delete real files rm /tmp/debian-unstable/etc/apt/sources.list rm /tmp/debian-unstable/etc/fstab rm /tmp/debian-unstable/etc/hostname rm /tmp/debian-unstable/etc/resolv.conf rm /tmp/debian-unstable/var/lib/dpkg/status rm /tmp/debian-unstable/var/lib/dpkg/available rm /tmp/debian-unstable/var/cache/apt/archives/lock rm /tmp/debian-unstable/var/lib/dpkg/lock rm /tmp/debian-unstable/var/lib/dpkg/lock-frontend rm /tmp/debian-unstable/var/lib/apt/lists/lock rm /tmp/debian-unstable/var/lib/apt/extended_states ## delete merged usr symlinks #rm /tmp/debian-unstable/libx32 #rm /tmp/debian-unstable/lib64 #rm /tmp/debian-unstable/lib32 #rm /tmp/debian-unstable/sbin #rm /tmp/debian-unstable/bin #rm /tmp/debian-unstable/lib # in chrootless mode, there is more to remove rm /tmp/debian-unstable/var/log/apt/eipp.log.xz rm /tmp/debian-unstable/var/lib/dpkg/triggers/Lock rm /tmp/debian-unstable/var/lib/dpkg/triggers/Unincorp rm /tmp/debian-unstable/var/lib/dpkg/status-old rm /tmp/debian-unstable/var/lib/dpkg/info/format rm /tmp/debian-unstable/var/lib/dpkg/info/doc-debian.md5sums rm /tmp/debian-unstable/var/lib/dpkg/info/doc-debian.list # the rest should be empty directories that we can rmdir recursively find /tmp/debian-unstable -depth -print0 | xargs -0 rmdir END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh else ./run_null.sh fi # test foreign architecture with all modes # create directory in sudo mode for mode in root unshare fakechroot proot; do print_header "mode=$mode,variant=apt: create armhf tarball" if [ "$HAVE_BINFMT" != "yes" ]; then echo "HAVE_BINFMT != yes -- Skipping test..." continue fi if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then echo "HAVE_UNSHARE != yes -- Skipping test..." continue fi if [ "$mode" = "proot" ] && [ "$HAVE_PROOT" != "yes" ]; then echo "HAVE_PROOT != yes -- Skipping test..." continue fi cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 [ "\$(id -u)" -eq 0 ] && ! 
id -u user > /dev/null 2>&1 && adduser --gecos user --disabled-password user [ "$mode" = unshare ] && sysctl -w kernel.unprivileged_userns_clone=1 prefix= [ "\$(id -u)" -eq 0 ] && [ "$mode" != "root" ] && prefix="runuser -u user --" [ "$mode" = "fakechroot" ] && prefix="\$prefix fakechroot fakeroot" \$prefix $CMD --mode=$mode --variant=apt --architectures=armhf unstable /tmp/unstable-chroot.tar $mirror # we ignore differences between architectures by ignoring some files # and renaming others # in fakechroot mode, we use a fake ldconfig, so we have to # artificially add some files # in proot mode, some extra files are put there by proot { tar -tf /tmp/unstable-chroot.tar \ | grep -v '^\./lib/ld-linux-armhf\.so\.3$' \ | grep -v '^\./lib/arm-linux-gnueabihf/ld-linux\.so\.3$' \ | grep -v '^\./lib/arm-linux-gnueabihf/ld-linux-armhf\.so\.3$' \ | sed 's/arm-linux-gnueabihf/x86_64-linux-gnu/' \ | sed 's/armhf/amd64/'; [ "$mode" = "fakechroot" ] && printf "./etc/ld.so.cache\n./var/cache/ldconfig/\n./etc/.pwd.lock\n"; } | sort > tar2.txt { cat tar1.txt \ | grep -v '^\./usr/bin/i386$' \ | grep -v '^\./usr/bin/x86_64$' \ | grep -v '^\./lib64/$' \ | grep -v '^\./lib64/ld-linux-x86-64\.so\.2$' \ | grep -v '^\./lib/x86_64-linux-gnu/ld-linux-x86-64\.so\.2$' \ | grep -v '^\./lib/x86_64-linux-gnu/libmvec-2\.[0-9]\+\.so$' \ | grep -v '^\./lib/x86_64-linux-gnu/libmvec\.so\.1$' \ | grep -v '^\./usr/share/man/man8/i386\.8\.gz$' \ | grep -v '^\./usr/share/man/man8/x86_64\.8\.gz$'; [ "$mode" = "proot" ] && printf "./etc/ld.so.preload\n"; } | sort | diff -u - tar2.txt rm /tmp/unstable-chroot.tar END if [ "$HAVE_QEMU" = "yes" ]; then ./run_qemu.sh elif [ "$mode" = "root" ]; then ./run_null.sh SUDO else ./run_null.sh fi done # TODO: test if auto mode picks the right mode if [ "$HAVE_QEMU" = "yes" ]; then guestfish add-ro shared/cover_db.img : run : mount /dev/sda / : tar-out / - \ | tar -C shared/cover_db --extract fi if [ -e shared/cover_db/runs ]; then cover -nogcov -report html_basic shared/cover_db mkdir -p report for f in common.js coverage.html cover.css css.js mmdebstrap--branch.html mmdebstrap--condition.html mmdebstrap.html mmdebstrap--subroutine.html standardista-table-sorting.js; do cp -a shared/cover_db/$f report done cover -delete shared/cover_db echo echo open file://$(pwd)/report/coverage.html in a browser echo fi rm shared/tar1.txt shared/tar2.txt mmdebstrap-0.4.1-6d774a3d92ed7b584a2021d3485b30280498080c/make_mirror.sh000077500000000000000000000376211343621701100236030ustar00rootroot00000000000000#!/bin/sh set -eu # This script fills either cache.A or cache.B with new content and then # atomically switches the cache symlink from one to the other at the end. # This way, at no point will the cache be in an non-working state, even # when this script got canceled at any point. # Working with two directories also automatically prunes old packages in # the local repository. if [ -e "./shared/cache.A" ] && [ -e "./shared/cache.B" ]; then echo "both ./shared/cache.A and ./shared/cache.B exist" >&2 echo "was a former run of the script aborted?" 
>&2 echo "cache symlink points to $(readlink ./shared/cache)" >&2 exit 1 fi if [ -e "./shared/cache.A" ]; then oldcache=cache.A newcache=cache.B else oldcache=cache.B newcache=cache.A fi oldcachedir="./shared/$oldcache" newcachedir="./shared/$newcache" oldmirrordir="$oldcachedir/debian" newmirrordir="$newcachedir/debian" mirror="http://deb.debian.org/debian" security_mirror="http://security.debian.org/debian-security" arch1=$(dpkg --print-architecture) arch2=armhf if [ "$arch1" = "$arch2" ]; then arch2=amd64 fi components=main : "${HAVE_QEMU:=yes}" if [ -e "$oldmirrordir/dists/unstable/Release" ]; then http_code=$(curl --output /dev/null --silent --location --head --time-cond "$oldmirrordir/dists/unstable/Release" --write-out '%{http_code}' "$mirror/dists/unstable/Release") case "$http_code" in 200) ;; # need update 304) echo up-to-date; exit 0;; *) echo "unexpected status: $http_code"; exit 1;; esac fi get_oldaptnames() { if [ ! -e "$1/$2" ]; then return fi gzip -dc "$1/$2" \ | grep-dctrl --no-field-names --show-field=Package,Version,Architecture,Filename '' \ | paste -sd " \n" \ | while read name ver arch fname; do if [ ! -e "$1/$fname" ]; then continue fi # apt stores deb files with the colon encoded as %3a while # mirrors do not contain the epoch at all #645895 case "$ver" in *:*) ver="${ver%%:*}%3a${ver#*:}";; esac aptname="$rootdir/var/cache/apt/archives/${name}_${ver}_${arch}.deb" # we have to cp and not mv because other # distributions might still need this file # we have to cp and not symlink because apt # doesn't recognize symlinks cp --link "$1/$fname" "$aptname" echo "$aptname" done } get_newaptnames() { if [ ! -e "$1/$2" ]; then return fi gzip -dc "$1/$2" \ | grep-dctrl --no-field-names --show-field=Package,Version,Architecture,Filename,MD5sum '' \ | paste -sd " \n" \ | while read name ver arch fname md5; do dir="${fname%/*}" # apt stores deb files with the colon encoded as %3a while # mirrors do not contain the epoch at all #645895 case "$ver" in *:*) ver="${ver%%:*}%3a${ver#*:}";; esac aptname="$rootdir/var/cache/apt/archives/${name}_${ver}_${arch}.deb" if [ -e "$aptname" ]; then # make sure that we found the right file by checking its hash echo "$md5 $aptname" | md5sum --check >&2 mkdir -p "$1/$dir" # since we move hardlinks around, the same hardlink might've been # moved already into the same place by another distribution. # mv(1) refuses to copy A to B if both are hardlinks of each other. 
if [ "$aptname" -ef "$1/$fname" ]; then # both files are already the same so we just need to # delete the source rm "$aptname" else mv "$aptname" "$1/$fname" fi echo "$aptname" fi done } update_cache() { dist="$1" nativearch="$2" # use a subdirectory of $newcachedir so that we can use # hardlinks rootdir="$newcachedir/apt" mkdir -p "$rootdir" for p in /etc/apt/apt.conf.d /etc/apt/sources.list.d /etc/apt/preferences.d /var/cache/apt/archives /var/lib/apt/lists/partial /var/lib/dpkg; do mkdir -p "$rootdir/$p" done # read sources.list content from stdin cat > "$rootdir/etc/apt/sources.list" cat << END > "$rootdir/etc/apt/apt.conf" Apt::Architecture "$nativearch"; Apt::Architectures "$nativearch"; Dir::Etc "$rootdir/etc/apt"; Dir::State "$rootdir/var/lib/apt"; Dir::Cache "$rootdir/var/cache/apt"; Apt::Install-Recommends false; Apt::Get::Download-Only true; Acquire::Languages "none"; Dir::Etc::Trusted "/etc/apt/trusted.gpg"; Dir::Etc::TrustedParts "/etc/apt/trusted.gpg.d"; END > "$rootdir/var/lib/dpkg/status" APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get update # before downloading packages and before replacing the old Packages # file, copy all old *.deb packages from the mirror to # /var/cache/apt/archives so that apt will not re-download *.deb # packages that we already have { get_oldaptnames "$oldmirrordir" "dists/$dist/main/binary-$nativearch/Packages.gz" if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then get_oldaptnames "$oldmirrordir" "dists/stable-updates/main/binary-$nativearch/Packages.gz" get_oldaptnames "$oldcachedir/debian-security" "dists/stable/updates/main/binary-$nativearch/Packages.gz" fi } | sort -u > "$rootdir/oldaptnames" pkgs=$(APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get indextargets \ --format '$(FILENAME)' 'Created-By: Packages' "Architecture: $nativearch" \ | xargs --delimiter='\n' /usr/lib/apt/apt-helper cat-file \ | grep-dctrl --no-field-names --show-field=Package --exact-match \ \( --field=Essential yes --or --field=Priority required \ --or --field=Priority important --or --field=Priority standard \ --or --field=Package build-essential \) ) pkgs="$(echo $pkgs) build-essential" APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get --yes install $pkgs # to be able to also test gpg verification, we need to create a mirror mkdir -p "$newmirrordir/dists/$dist/main/binary-$nativearch/" curl --location "$mirror/dists/$dist/Release" > "$newmirrordir/dists/$dist/Release" curl --location "$mirror/dists/$dist/Release.gpg" > "$newmirrordir/dists/$dist/Release.gpg" curl --location "$mirror/dists/$dist/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/$dist/main/binary-$nativearch/Packages.gz" if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then mkdir -p "$newmirrordir/dists/stable-updates/main/binary-$nativearch/" curl --location "$mirror/dists/stable-updates/Release" > "$newmirrordir/dists/stable-updates/Release" curl --location "$mirror/dists/stable-updates/Release.gpg" > "$newmirrordir/dists/stable-updates/Release.gpg" curl --location "$mirror/dists/stable-updates/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/stable-updates/main/binary-$nativearch/Packages.gz" mkdir -p "$newcachedir/debian-security/dists/stable/updates/main/binary-$nativearch/" curl --location "$security_mirror/dists/stable/updates/Release" > "$newcachedir/debian-security/dists/stable/updates/Release" curl --location "$security_mirror/dists/stable/updates/Release.gpg" > "$newcachedir/debian-security/dists/stable/updates/Release.gpg" curl 
--location "$security_mirror/dists/stable/updates/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/stable/updates/main/binary-$nativearch/Packages.gz" fi # the deb files downloaded by apt must be moved to their right locations in the # pool directory # # Instead of parsing the Packages file, we could also attempt to move the deb # files ourselves to the appropriate pool directories. But that approach # requires re-creating the heuristic by which the directory is chosen, requires # stripping the epoch from the filename and will break once mirrors change. # This way, it doesn't matter where the mirror ends up storing the package. { get_newaptnames "$newmirrordir" "dists/$dist/main/binary-$nativearch/Packages.gz"; if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then get_newaptnames "$newmirrordir" "dists/stable-updates/main/binary-$nativearch/Packages.gz" get_newaptnames "$newcachedir/debian-security" "dists/stable/updates/main/binary-$nativearch/Packages.gz" fi } | sort -u > "$rootdir/newaptnames" rm "$rootdir/var/cache/apt/archives/lock" rmdir "$rootdir/var/cache/apt/archives/partial" # remove all packages that were in the old Packages file but not in the # new one anymore comm -23 "$rootdir/oldaptnames" "$rootdir/newaptnames" | xargs --delimiter="\n" --no-run-if-empty rm # now the apt cache should be empty if [ ! -z "$(ls -1qA "$rootdir/var/cache/apt/archives/")" ]; then echo "/var/cache/apt/archives not empty" exit 1 fi # cleanup APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get --option Dir::Etc::SourceList=/dev/null update APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get clean rm "$rootdir/var/cache/apt/archives/lock" rm "$rootdir/var/lib/apt/lists/lock" rm "$rootdir/var/lib/dpkg/status" rm "$rootdir/var/lib/dpkg/lock-frontend" rm "$rootdir/var/lib/dpkg/lock" rm "$rootdir/etc/apt/apt.conf" rm "$rootdir/etc/apt/sources.list" rm "$rootdir/oldaptnames" rm "$rootdir/newaptnames" find "$rootdir" -depth -print0 | xargs -0 rmdir } for nativearch in "$arch1" "$arch2"; do for dist in stable testing unstable; do cat << END | update_cache "$dist" "$nativearch" deb [arch=$nativearch] $mirror $dist $components END if [ "$dist" = "stable" ]; then cat << END | update_cache "$dist" "$nativearch" deb [arch=$nativearch] $mirror $dist $components deb [arch=$nativearch] $mirror stable-updates main deb [arch=$nativearch] $security_mirror stable/updates main END fi done done if [ "$HAVE_QEMU" = "yes" ]; then # We must not use any --dpkgopt here because any dpkg options still # leak into the chroot with chrootless mode. 
# We do not use our own package cache here because # - it doesn't (and shouldn't) contain the extra packages # - it doesn't matter if the base system is from a different mirror timestamp # procps is needed for /sbin/sysctl tmpdir="$(mktemp -d)" ./mmdebstrap --variant=apt --architectures=amd64,armhf --mode=unshare \ --include=perl-doc,linux-image-amd64,systemd-sysv,perl,arch-test,fakechroot,fakeroot,mount,uidmap,proot,qemu-user-static,binfmt-support,qemu-user,dpkg-dev,mini-httpd,libdevel-cover-perl,debootstrap,libfakechroot:armhf,libfakeroot:armhf,procps \ unstable - "$mirror" > "$tmpdir/debian-unstable.tar" cat << END > "$tmpdir/extlinux.conf" default linux timeout 0 label linux kernel /vmlinuz append initrd=/initrd.img root=/dev/sda1 rw console=ttyS0,115200 serial 0 115200 END cat << END > "$tmpdir/mmdebstrap.service" [Unit] Description=mmdebstrap worker script [Service] Type=oneshot ExecStart=/worker.sh [Install] WantedBy=multi-user.target END # here is something crazy: # as we run mmdebstrap, the process ends up being run by different users with # different privileges (real or fake). But for being able to collect # Devel::Cover data, they must all share a single directory. The only way that # I found to make this work is to mount the database directory with a # filesystem that doesn't support ownership information at all and a umask that # gives read/write access to everybody. # https://github.com/pjcj/Devel--Cover/issues/223 cat << 'END' > "$tmpdir/worker.sh" #!/bin/sh echo 'root:root' | chpasswd mount -t 9p -o trans=virtio,access=any mmdebstrap /mnt # need to restart mini-httpd because we mounted different content into www-root systemctl restart mini-httpd handler () { while IFS= read -r line || [ -n "$line" ]; do printf "%s %s: %s\n" "$(date -u -d "0 $(date +%s.%3N) seconds - $2 seconds" +"%T.%3N")" "$1" "$line" done } ( cd /mnt; if [ -e cover_db.img ]; then mkdir -p cover_db mount -o loop,umask=000 cover_db.img cover_db fi now=$(date +%s.%3N) ret=0 { { { { { sh -x ./test.sh 2>&1 1>&4 3>&- 4>&-; echo $? >&2; } | handler E "$now" >&3; } 4>&1 | handler O "$now" >&3; } 2>&1; } | { read xs; exit $xs; }; } 3>&1 || ret=$? 
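# the nested groups above tag and timestamp the two output streams of the
# test separately: test.sh's stderr is piped through "handler E" and its
# stdout (rerouted via fd 4) through "handler O", both landing on fd 3 which
# is redirected back to standard output; the test's exit status is echoed on
# a separate stream and re-read at the end of the pipeline ("read xs; exit
# $xs") so that it survives the pipes and ends up in $ret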
if [ -e cover_db.img ]; then df -h cover_db umount cover_db fi echo $ret ) > /mnt/result.txt 2>&1 umount /mnt systemctl poweroff END chmod +x "$tmpdir/worker.sh" # initially we serve from the new cache so that debootstrap can grab # the new package repository and not the old cat << END > "$tmpdir/mini-httpd" START=1 DAEMON_OPTS="-h 127.0.0.1 -p 80 -u nobody -dd /mnt/$newcache -i /var/run/mini-httpd.pid -T UTF-8" END cat << 'END' > "$tmpdir/hosts" 127.0.0.1 localhost END #libguestfs-test-tool #export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1 guestfish -N "$tmpdir/debian-unstable.img"=disk:3G -- \ part-disk /dev/sda mbr : \ part-set-bootable /dev/sda 1 true : \ mkfs ext2 /dev/sda1 : \ mount /dev/sda1 / : \ tar-in "$tmpdir/debian-unstable.tar" / : \ extlinux / : \ copy-in "$tmpdir/extlinux.conf" / : \ mkdir-p /etc/systemd/system/multi-user.target.wants : \ ln-s ../mmdebstrap.service /etc/systemd/system/multi-user.target.wants/mmdebstrap.service : \ copy-in "$tmpdir/mmdebstrap.service" /etc/systemd/system/ : \ copy-in "$tmpdir/worker.sh" / : \ copy-in "$tmpdir/mini-httpd" /etc/default : \ copy-in "$tmpdir/hosts" /etc/ : rm "$tmpdir/extlinux.conf" "$tmpdir/worker.sh" "$tmpdir/mini-httpd" "$tmpdir/hosts" "$tmpdir/debian-unstable.tar" "$tmpdir/mmdebstrap.service" qemu-img convert -O qcow2 "$tmpdir/debian-unstable.img" "$newcachedir/debian-unstable.qcow" rm "$tmpdir/debian-unstable.img" rmdir "$tmpdir" fi mirror="http://127.0.0.1/debian" SOURCE_DATE_EPOCH=$(date --date="$(grep-dctrl -s Date -n '' "$newmirrordir/dists/unstable/Release")" +%s) for dist in stable testing unstable; do for variant in minbase buildd -; do # skip because of different userids for apt/systemd if [ "$dist" = 'stable' ] && [ "$variant" = '-' ]; then continue fi # skip because of #917386 and #917407 if [ "$dist" = 'unstable' ] && [ "$variant" = '-' ]; then continue fi echo running debootstrap --no-merged-usr --variant=$variant $dist /tmp/debian-$dist-debootstrap $mirror cat << END > shared/test.sh #!/bin/sh set -eu export LC_ALL=C.UTF-8 export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH debootstrap --no-merged-usr --variant=$variant $dist /tmp/debian-$dist-debootstrap $mirror tar --sort=name --mtime=@$SOURCE_DATE_EPOCH --clamp-mtime --numeric-owner --one-file-system -C /tmp/debian-$dist-debootstrap -c . 
> "$newcache/debian-$dist-$variant.tar" rm -r /tmp/debian-$dist-debootstrap END if [ "$HAVE_QEMU" = "yes" ]; then cachedir=$newcachedir ./run_qemu.sh else ./run_null.sh SUDO fi done done if [ "$HAVE_QEMU" = "yes" ]; then # now replace the minihttpd config with one that serves the new repository # create a temporary directory because "copy-in" cannot rename the file tmpdir="$(mktemp -d)" cat << END > "$tmpdir/mini-httpd" START=1 DAEMON_OPTS="-h 127.0.0.1 -p 80 -u nobody -dd /mnt/cache -i /var/run/mini-httpd.pid -T UTF-8" END guestfish -a "$newcachedir/debian-unstable.qcow" -i copy-in "$tmpdir/mini-httpd" /etc/default rm "$tmpdir/mini-httpd" rmdir "$tmpdir" fi # delete possibly leftover symlink if [ -e ./shared/cache.tmp ]; then rm ./shared/cache.tmp fi # now atomically switch the symlink to point to the other directory ln -s $newcache ./shared/cache.tmp mv --no-target-directory ./shared/cache.tmp ./shared/cache # be very careful with removing the old directory for dist in stable testing unstable; do for variant in minbase buildd -; do if [ -e "$oldcachedir/debian-$dist-$variant.tar" ]; then rm "$oldcachedir/debian-$dist-$variant.tar" fi done if [ -e "$oldcachedir/debian/dists/$dist" ]; then rm --one-file-system --recursive "$oldcachedir/debian/dists/$dist" fi if [ "$dist" = "stable" ]; then if [ -e "$oldcachedir/debian/dists/stable-updates" ]; then rm --one-file-system --recursive "$oldcachedir/debian/dists/stable-updates" fi if [ -e "$oldcachedir/debian-security/dists/stable/updates" ]; then rm --one-file-system --recursive "$oldcachedir/debian-security/dists/stable/updates" fi fi done if [ -e $oldcachedir/debian-unstable.qcow ]; then rm --one-file-system "$oldcachedir/debian-unstable.qcow" fi if [ -e "$oldcachedir/debian/pool/main" ]; then rm --one-file-system --recursive "$oldcachedir/debian/pool/main" fi if [ -e "$oldcachedir/debian-security/pool/updates/main" ]; then rm --one-file-system --recursive "$oldcachedir/debian-security/pool/updates/main" fi # now the rest should only be empty directories if [ -e "$oldcachedir" ]; then find "$oldcachedir" -depth -print0 | xargs -0 --no-run-if-empty rmdir fi mmdebstrap-0.4.1-6d774a3d92ed7b584a2021d3485b30280498080c/mmdebstrap000077500000000000000000003047701343621701100230230ustar00rootroot00000000000000#!/usr/bin/perl # # Copyright: 2018 Johannes Schauer # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. 
use strict; use warnings; our $VERSION = '0.4.1'; use English; use Getopt::Long; use Pod::Usage; use File::Copy; use File::Path qw(make_path remove_tree); use File::Temp qw(tempfile tempdir); use Cwd qw(abs_path); require "syscall.ph"; use Fcntl qw(S_IFCHR S_IFBLK FD_CLOEXEC F_GETFD F_SETFD); use List::Util qw(any none); use POSIX qw(SIGINT SIGHUP SIGPIPE SIGTERM SIG_BLOCK SIG_UNBLOCK); use Carp; use Term::ANSIColor; # from sched.h use constant { CLONE_NEWNS => 0x20000, CLONE_NEWUTS => 0x4000000, CLONE_NEWIPC => 0x8000000, CLONE_NEWUSER => 0x10000000, CLONE_NEWPID => 0x20000000, CLONE_NEWNET => 0x40000000, }; # type codes: # 0 -> normal file # 1 -> hardlink # 2 -> symlink # 3 -> character special # 4 -> block special # 5 -> directory my @devfiles = ( # filename mode type link target major minor [ "./dev/", 0755, 5, '', undef, undef ], [ "./dev/console", 0666, 3, '', 5, 1 ], [ "./dev/fd", 0777, 2, '/proc/self/fd', undef, undef ], [ "./dev/full", 0666, 3, '', 1, 7 ], [ "./dev/null", 0666, 3, '', 1, 3 ], [ "./dev/ptmx", 0666, 3, '', 5, 2 ], [ "./dev/pts/", 0755, 5, '', undef, undef ], [ "./dev/random", 0666, 3, '', 1, 8 ], [ "./dev/shm/", 0755, 5, '', undef, undef ], [ "./dev/stderr", 0777, 2, '/proc/self/fd/2', undef, undef ], [ "./dev/stdin", 0777, 2, '/proc/self/fd/0', undef, undef ], [ "./dev/stdout", 0777, 2, '/proc/self/fd/1', undef, undef ], [ "./dev/tty", 0666, 3, '', 5, 0 ], [ "./dev/urandom", 0666, 3, '', 1, 9 ], [ "./dev/zero", 0666, 3, '', 1, 5 ], ); # verbosity levels: # 0 -> print nothing # 1 -> normal output and progress bars # 2 -> verbose output # 3 -> debug output my $verbosity_level = 1; sub debug { if ($verbosity_level < 3) { return; } my $msg = shift; $msg = "D: $msg"; if ( -t STDERR ) { $msg = colored($msg, 'clear') } print STDERR "$msg\n"; } sub info { if ($verbosity_level == 0) { return; } my $msg = shift; $msg = "I: $msg"; if ( -t STDERR ) { $msg = colored($msg, 'green') } print STDERR "$msg\n"; } sub warning { if ($verbosity_level == 0) { return; } my $msg = shift; $msg = "W: $msg"; if ( -t STDERR ) { $msg = colored($msg, 'bold yellow') } print STDERR "$msg\n"; } sub error { if ($verbosity_level == 0) { return; } # if error() is called with the string from a previous error() that was # caught inside an eval(), then the string will have a newline which we # are stripping here chomp (my $msg = shift); $msg = "E: $msg"; if ( -t STDERR ) { $msg = colored($msg, 'bold red') } if ($verbosity_level == 3) { croak $msg; # produces a backtrace } else { die "$msg\n"; } } # tar cannot figure out the decompression program when receiving data on # standard input, thus we do it ourselves. 
This is copied from tar's # src/suffix.c sub get_tar_compressor($) { my $filename = shift; if ($filename eq '-') { return undef } elsif ($filename =~ /\.tar$/) { return undef } elsif ($filename =~ /\.(gz|tgz|taz)$/) { return 'gzip'; } elsif ($filename =~ /\.(Z|taZ)$/) { return 'compress'; } elsif ($filename =~ /\.(bz2|tbz|tbz2|tz2)$/) { return 'bzip2'; } elsif ($filename =~ /\.lz$/) { return 'lzip'; } elsif ($filename =~ /\.(lzma|tlz)$/) { return 'lzma'; } elsif ($filename =~ /\.lzo$/) { return 'lzop'; } elsif ($filename =~ /\.lz4$/) { return 'lz4'; } elsif ($filename =~ /\.(xz|txz)$/) { return 'xz'; } elsif ($filename =~ /\.zst$/) { return 'zstd'; } return undef } sub test_unshare($) { my $verbose = shift; if ($EFFECTIVE_USER_ID == 0) { my $msg = "cannot use unshare mode when executing as root"; if ($verbose) { warning $msg; } else { debug $msg; } return 0; } # arguments to syscalls have to be stored in their own variable or # otherwise we will get "Modification of a read-only value attempted" my $unshare_flags = CLONE_NEWUSER; # we spawn a new per process because if unshare succeeds, we would # otherwise have unshared the mmdebstrap process itself which we don't want my $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { my $ret = syscall &SYS_unshare, $unshare_flags; if ($ret == 0) { exit 0; } else { my $msg = "unshare syscall failed: $!"; if ($verbose) { warning $msg; } else { debug $msg; } exit 1; } } waitpid($pid, 0); if (($? >> 8) != 0) { return 0; } # if newuidmap and newgidmap exist, the exit status will be 1 when # executed without parameters system "newuidmap 2>/dev/null"; if (($? >> 8) != 1) { if (($? >> 8) == 127) { my $msg = "cannot find newuidmap"; if ($verbose) { warning $msg; } else { debug $msg; } } else { my $msg = "newuidmap returned unknown exit status: $?"; if ($verbose) { warning $msg; } else { debug $msg; } } return 0; } system "newgidmap 2>/dev/null"; if (($? >> 8) != 1) { if (($? >> 8) == 127) { my $msg = "cannot find newgidmap"; if ($verbose) { warning $msg; } else { debug $msg; } } else { my $msg = "newgidmap returned unknown exit status: $?"; if ($verbose) { warning $msg; } else { debug $msg; } } return 0; } return 1; } sub read_subuid_subgid() { my $username = getpwuid $<; my ($subid, $num_subid, $fh, $n); my @result = (); if (! -e "/etc/subuid") { warning "/etc/subuid doesn't exist"; return; } if (! 
-r "/etc/subuid") { warning "/etc/subuid is not readable"; return; } open $fh, "<", "/etc/subuid" or error "cannot open /etc/subuid for reading: $!"; while (my $line = <$fh>) { ($n, $subid, $num_subid) = split(/:/, $line, 3); last if ($n eq $username); } close $fh; push @result, ["u", 0, $subid, $num_subid]; if (scalar(@result) < 1) { warning "/etc/subuid does not contain an entry for $username"; return; } if (scalar(@result) > 1) { warning "/etc/subuid contains multiple entries for $username"; return; } open $fh, "<", "/etc/subgid" or error "cannot open /etc/subgid for reading: $!"; while (my $line = <$fh>) { ($n, $subid, $num_subid) = split(/:/, $line, 3); last if ($n eq $username); } close $fh; push @result, ["g", 0, $subid, $num_subid]; if (scalar(@result) < 2) { warning "/etc/subgid does not contain an entry for $username"; return; } if (scalar(@result) > 2) { warning "/etc/subgid contains multiple entries for $username"; return; } return @result; } # This function spawns two child processes forming the following process tree # # A # | # fork() # | \ # B C # | | # | fork() # | | \ # | D E # | | | # |unshare() # | close() # | | | # | | read() # | | newuidmap(D) # | | newgidmap(D) # | | / # | waitpid() # | | # | fork() # | | \ # | F G # | | | # | | exec() # | | / # | waitpid() # | / # waitpid() # # To better refer to each individual part, we give each process a new # identifier after calling fork(). Process A is the main process. After # executing fork() we call the parent and child B and C, respectively. This # first fork() is done because we do not want to modify A. B then remains # waiting for its child C to finish. C calls fork() again, splitting into # the parent D and its child E. In the parent D we call unshare() and close a # pipe shared by D and E to signal to E that D is done with calling unshare(). # E notices this by using read() and follows up with executing the tools # new[ug]idmap on D. E finishes and D continues with doing another fork(). # This is because when unsharing the PID namespace, we need a PID 1 to be kept # alive or otherwise any child processes cannot fork() anymore themselves. So # we keep F as PID 1 and finally call exec() in G. sub get_unshare_cmd(&$) { my $cmd = shift; my $idmap = shift; my $unshare_flags = CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWUTS | CLONE_NEWIPC; if (0) { $unshare_flags |= CLONE_NEWNET; } # fork a new process and let the child get unshare()ed # we don't want to unshare the parent process my $gcpid = fork() // error "fork() failed: $!"; if ($gcpid == 0) { # Create a pipe for the parent process to signal the child process that it is # done with calling unshare() so that the child can go ahead setting up # uid_map and gid_map. pipe my $rfh, my $wfh; # We have to do this dance with forking a process and then modifying the # parent from the child because: # - new[ug]idmap can only be called on a process id after that process has # unshared the user namespace # - a process looses its capabilities if it performs an execve() with nonzero # user ids see the capabilities(7) man page for details. # - a process that unshared the user namespace by default does not have the # privileges to call new[ug]idmap on itself # # this also works the other way around (the child setting up a user namespace # and being modified from the parent) but that way, the parent would have to # stay around until the child exited (so a pid would be wasted). 
Additionally, # that variant would require an additional pipe to let the parent signal the # child that it is done with calling new[ug]idmap. The way it is done here, # this signaling can instead be done by wait()-ing for the exit of the child. my $ppid = $$; my $cpid = fork() // error "fork() failed: $!"; if ($cpid == 0) { # child # Close the writing descriptor at our end of the pipe so that we # see EOF when parent closes its descriptor. close $wfh; # Wait for the parent process to finish its unshare() call by # waiting for an EOF. 0 == sysread $rfh, my $c, 1 or error "read() did not receive EOF"; # The program's new[ug]idmap have to be used because they are # setuid root. These privileges are needed to map the ids from # /etc/sub[ug]id to the user namespace set up by the parent. # Without these privileges, only the id of the user itself can be # mapped into the new namespace. # # Since new[ug]idmap is setuid root we also don't need to write # "deny" to /proc/$$/setgroups beforehand (this is otherwise # required for unprivileged processes trying to write to # /proc/$$/gid_map since kernel version 3.19 for security reasons) # and therefore the parent process keeps its ability to change its # own group here. # # Since /proc/$ppid/[ug]id_map can only be written to once, # respectively, instead of making multiple calls to new[ug]idmap, # we assemble a command line that makes one call each. my $uidmapcmd = ""; my $gidmapcmd = ""; foreach (@{$idmap}) { my ($t, $hostid, $nsid, $range) = @{$_}; if ($t ne "u" and $t ne "g" and $t ne "b") { error "invalid idmap type: $t"; } if ($t eq "u" or $t eq "b") { $uidmapcmd .= " $hostid $nsid $range"; } if ($t eq "g" or $t eq "b") { $gidmapcmd .= " $hostid $nsid $range"; } } my $idmapcmd = ''; if ($uidmapcmd ne "") { 0 == system "newuidmap $ppid $uidmapcmd" or error "newuidmap $ppid $uidmapcmd failed: $!"; } if ($gidmapcmd ne "") { 0 == system "newgidmap $ppid $gidmapcmd" or error "newgidmap $ppid $gidmapcmd failed: $!"; } exit 0; } # parent # After fork()-ing, the parent immediately calls unshare... 0 == syscall &SYS_unshare, $unshare_flags or error "unshare() failed: $!"; # .. and then signals the child process that we are done with the # unshare() call by sending an EOF. close $wfh; # Wait for the child process to finish its setup by waiting for its # exit. $cpid == waitpid $cpid, 0 or error "waitpid() failed: $!"; my $exit = $? >> 8; if ($exit != 0) { error "child had a non-zero exit status: $exit"; } # Currently we are nobody (uid and gid are 65534). So we become root # user and group instead. # # We are using direct syscalls instead of setting $(, $), $< and $> # because then perl would do additional stuff which we don't need or # want here, like checking /proc/sys/kernel/ngroups_max (which might # not exist). It would also also call setgroups() in a way that makes # the root user be part of the group unknown. 0 == syscall &SYS_setgid, 0 or error "setgid failed: $!"; 0 == syscall &SYS_setuid, 0 or error "setuid failed: $!"; 0 == syscall &SYS_setgroups, 0, 0 or error "setgroups failed: $!"; if (1) { # When the pid namespace is also unshared, then processes expect a # master pid to always be alive within the namespace. To achieve # this, we fork() here instead of exec() to always have one dummy # process running as pid 1 inside the namespace. This is also what # the unshare tool does when used with the --fork option. # # Otherwise, without a pid 1, new processes cannot be forked # anymore after pid 1 finished. 
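            # As an illustration only (not used by this script): the sequence of
            # steps performed here roughly corresponds to what the standalone
            # unshare(1) tool from util-linux does when invoked as
            #
            #   unshare --user --map-root-user --mount --pid --fork --ipc --uts sh
            #
            # except that mmdebstrap maps a whole range of subordinate ids via
            # newuidmap/newgidmap instead of mapping only the calling user to root.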
my $cpid = fork() // error "fork() failed: $!"; if ($cpid != 0) { # The parent process will stay alive as pid 1 in this # namespace until the child finishes executing. This is # important because pid 1 must never die or otherwise nothing # new can be forked. $cpid == waitpid $cpid, 0 or error "waitpid() failed: $!"; exit ($? >> 8); } } &{$cmd}(); exit 0; } # parent return $gcpid; } sub havemknod($) { my $root = shift; my $havemknod = 0; if (-e "$root/test-dev-null") { error "/test-dev-null already exists"; } TEST: { # we fork so that we can read STDERR my $pid = open my $fh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { open(STDERR, '>&', STDOUT); # we use mknod(1) instead of the system call because creating the # right dev_t argument requires makedev(3) exec 'mknod', "$root/test-dev-null", 'c', '1', '3'; } chomp (my $content = do { local $/; <$fh> }); close $fh; { last TEST unless $? == 0 and $content eq ''; last TEST unless -c "$root/test-dev-null"; last TEST unless open my $fh, '>', "$root/test-dev-null"; last TEST unless print $fh 'test'; } $havemknod = 1; } if (-e "$root/test-dev-null") { unlink "$root/test-dev-null" or error "cannot unlink /test-dev-null: $!"; } return $havemknod; } sub print_progress { if ($verbosity_level != 1) { return; } my $perc = shift; if (!-t STDERR) { return; } if ($perc eq "done") { # \e[2K clears everything on the current line (i.e. the progress bar) print STDERR "\e[2Kdone\n"; return; } if ($perc >= 100) { $perc = 100; } my $width = 50; my $num_x = int($perc*$width/100); my $bar = '=' x $num_x; if ($num_x != $width) { $bar .= '>'; $bar .= ' ' x ($width - $num_x - 1); } printf STDERR "%6.2f [%s]\r", $perc, $bar; } sub run_progress { my ($get_exec, $line_handler, $line_has_error) = @_; pipe my $rfh, my $wfh; my $got_signal = 0; my $ignore = sub { info "run_progress() received signal $_[0]: waiting for child..."; }; # delay signals so that we can fork and change behaviour of the signal # handler in parent and child without getting interrupted my $sigset = POSIX::SigSet->new(SIGINT, SIGHUP, SIGPIPE, SIGTERM); POSIX::sigprocmask(SIG_BLOCK, $sigset) or error "Can't block signals: $!"; my $pid1 = open(my $pipe, '-|') // error "failed to fork(): $!"; if ($pid1 == 0) { # child: default signal handlers $SIG{'INT'} = 'DEFAULT'; $SIG{'HUP'} = 'DEFAULT'; $SIG{'PIPE'} = 'DEFAULT'; $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $rfh; # Unset the close-on-exec flag, so that the file descriptor does not # get closed when we exec my $flags = fcntl( $wfh, F_GETFD, 0 ) or error "fcntl F_GETFD: $!"; fcntl($wfh, F_SETFD, $flags & ~FD_CLOEXEC ) or error "fcntl F_SETFD: $!"; my $fd = fileno $wfh; # redirect stderr to stdout so that we can capture it open(STDERR, '>&', STDOUT); my @execargs = $get_exec->($fd); exec { $execargs[0] } @execargs or error 'cannot exec() ' . 
(join ' ', @execargs); } close $wfh; # spawn two processes: # parent will parse stdout to look for errors # child will parse $rfh for the progress meter my $pid2 = fork() // error "failed to fork(): $!"; if ($pid2 == 0) { # child: default signal handlers $SIG{'INT'} = 'IGNORE'; $SIG{'HUP'} = 'IGNORE'; $SIG{'PIPE'} = 'IGNORE'; $SIG{'TERM'} = 'IGNORE'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; print_progress 0.0; while (my $line = <$rfh>) { my $output = $line_handler->($line); next unless $output; print_progress $output; } print_progress "done"; exit 0; } # parent: ignore signals # by using "local", the original is automatically restored once the # function returns local $SIG{'INT'} = $ignore; local $SIG{'HUP'} = $ignore; local $SIG{'PIPE'} = $ignore; local $SIG{'TERM'} = $ignore; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; my $output = ''; my $has_error = 0; while (my $line = <$pipe>) { $has_error = $line_has_error->($line); if ($verbosity_level >= 2) { print STDERR $line; } else { # forward captured apt output $output .= $line; } } close($pipe); my $fail = 0; if ($? != 0 or $has_error) { $fail = 1; } waitpid $pid2, 0; $? == 0 or error "progress parsing failed"; if ($got_signal) { error "run_progress() received signal: $got_signal"; } # only print failure after progress output finished or otherwise it # might interfere with the remaining output if ($fail) { if ($verbosity_level >= 1) { print STDERR $output; } error ((join ' ', $get_exec->('<$fd>')) . ' failed'); } } sub run_dpkg_progress { my $options = shift; my @debs = @{$options->{PKGS} // []}; my $get_exec = sub { return @{$options->{ARGV}}, "--status-fd=$_[0]", @debs; }; my $line_has_error = sub { return 0; }; my $num = 0; # each package has one install and one configure step, thus the total # number is twice the number of packages my $total = (scalar @debs) * 2; my $line_handler = sub { if ($_[0] =~ /^processing: (install|configure): /) { $num += 1; } return $num/$total*100; }; run_progress $get_exec, $line_handler, $line_has_error; } sub run_apt_progress { my $options = shift; my @debs = @{$options->{PKGS} // []}; my $get_exec = sub { return ( @{$options->{ARGV}}, "-oAPT::Status-Fd=$_[0]", # prevent apt from messing up the terminal and allow dpkg to # receive SIGINT and quit immediately without waiting for # maintainer script to finish '-oDpkg::Use-Pty=false', @debs )}; my $line_has_error = sub { # apt-get doesn't report a non-zero exit if the update failed. Thus, we # have to parse its output. 
See #778357, #776152, #696335 and #745735 if ($_[0] =~ /^(W: |Err:)/) { return 1; } return 0; }; my $line_handler = sub { if ($_[0] =~ /(pmstatus|dlstatus):[^:]+:(\d+\.\d+):.*/) { return $2; } }; run_progress $get_exec, $line_handler, $line_has_error; } sub run_chroot(&$) { my $cmd = shift; my $options = shift; my @cleanup_tasks = (); my $cleanup = sub { my $signal = $_[0]; while (my $task = pop @cleanup_tasks) { $task->(); } if ($signal) { warning "pid $PID cought signal: $signal"; exit 1; } }; local $SIG{INT} = $cleanup; local $SIG{HUP} = $cleanup; local $SIG{PIPE} = $cleanup; local $SIG{TERM} = $cleanup; eval { if (any { $_ eq $options->{mode} } ('root', 'unshare')) { # if more than essential should be installed, make the system look # more like a real one by creating or bind-mounting the device nodes foreach my $file (@devfiles) { my ($fname, $mode, $type, $linkname, $devmajor, $devminor) = @{$file}; next if $fname eq './dev/'; if ($type == 0) { # normal file error "type 0 not implemented"; } elsif ($type == 1) { # hardlink error "type 1 not implemented"; } elsif ($type == 2) { # symlink if (!$options->{havemknod}) { if ($options->{mode} eq 'fakechroot' and $linkname =~ /^\/proc/) { # there is no /proc in fakechroot mode next; } if (any { $_ eq $options->{mode} } ('root', 'unshare')) { push @cleanup_tasks, sub { unlink "$options->{root}/$fname" or warn "cannot unlink $fname: $!"; } } symlink $linkname, "$options->{root}/$fname" or error "cannot create symlink $fname"; } } elsif ($type == 3 or $type == 4) { # character/block special if (!$options->{havemknod}) { open my $fh, '>', "$options->{root}/$fname" or error "cannot open $options->{root}/$fname: $!"; close $fh; if ($options->{mode} eq 'unshare') { push @cleanup_tasks, sub { 0 == system('umount', '--no-mtab', "$options->{root}/$fname") or warn "umount $fname failed: $?"; unlink "$options->{root}/$fname" or warn "cannot unlink $fname: $!"; }; } elsif ($options->{mode} eq 'root') { push @cleanup_tasks, sub { 0 == system('umount', "$options->{root}/$fname") or warn "umount failed: $?"; unlink "$options->{root}/$fname" or warn "cannot unlink $fname: $!"; }; } else { error "unknown mode: $options->{mode}"; } 0 == system('mount', '-o', 'bind', "/$fname", "$options->{root}/$fname") or error "mount $fname failed: $?"; } } elsif ($type == 5) { # directory if (!$options->{havemknod}) { if (any { $_ eq $options->{mode} } ('root', 'unshare')) { push @cleanup_tasks, sub { rmdir "$options->{root}/$fname" or warn "cannot rmdir $fname: $!"; } } make_path "$options->{root}/$fname" or error "cannot make_path $fname"; chmod $mode, "$options->{root}/$fname" or error "cannot chmod $fname: $!"; } if ($options->{mode} eq 'unshare') { push @cleanup_tasks, sub { 0 == system('umount', '--no-mtab', "$options->{root}/$fname") or warn "umount $fname failed: $?"; }; } elsif ($options->{mode} eq 'root') { push @cleanup_tasks, sub { 0 == system('umount', "$options->{root}/$fname") or warn "umount $fname failed: $?"; }; } else { error "unknown mode: $options->{mode}"; } 0 == system('mount', '-o', 'bind', "/$fname", "$options->{root}/$fname") or error "mount $fname failed: $?"; } else { error "unsupported type: $type"; } } } elsif (any { $_ eq $options->{mode} } ('proot', 'fakechroot')) { # we cannot mount in fakechroot and proot mode # in proot mode we have /dev bind-mounted already through --bind=/dev } else { error "unknown mode: $options->{mode}"; } # We can only mount /proc and /sys after extracting the essential # set because if we mount it before, then 
base-files will not be able # to extract those if ($options->{mode} eq 'root') { push @cleanup_tasks, sub { 0 == system('umount', "$options->{root}/sys") or warn "umount /sys failed: $?"; }; 0 == system('mount', '-t', 'sysfs', '-o', 'nosuid,nodev,noexec', 'sys', "$options->{root}/sys") or error "mount /sys failed: $?"; } elsif ($options->{mode} eq 'unshare') { # naturally we have to clean up after ourselves in sudo mode where we # do a real mount. But we also need to unmount in unshare mode because # otherwise, even with the --one-file-system tar option, the # permissions of the mount source will be stored and not the mount # target (the directory) push @cleanup_tasks, sub { # since we cannot write to /etc/mtab we need --no-mtab # unmounting /sys only seems to be successful with --lazy 0 == system('umount', '--no-mtab', '--lazy', "$options->{root}/sys") or warn "umount /sys failed: $?"; }; # without the network namespace unshared, we cannot mount a new # sysfs. Since we need network, we just bind-mount. # # we have to rbind because just using bind results in "wrong fs # type, bad option, bad superblock" error 0 == system('mount', '-o', 'rbind', '/sys', "$options->{root}/sys") or error "mount /sys failed: $?"; } elsif (any { $_ eq $options->{mode} } ('proot', 'fakechroot')) { # we cannot mount in fakechroot and proot mode # in proot mode we have /proc bind-mounted already through --bind=/proc } else { error "unknown mode: $options->{mode}"; } if ($options->{mode} eq 'root') { push @cleanup_tasks, sub { 0 == system('umount', "$options->{root}/proc") or error "umount /proc failed: $?"; }; 0 == system('mount', '-t', 'proc', 'proc', "$options->{root}/proc") or error "mount /proc failed: $?"; } elsif ($options->{mode} eq 'unshare') { # naturally we have to clean up after ourselves in sudo mode where we # do a real mount. 
But we also need to unmount in unshare mode because # otherwise, even with the --one-file-system tar option, the # permissions of the mount source will be stored and not the mount # target (the directory) push @cleanup_tasks, sub { # since we cannot write to /etc/mtab we need --no-mtab 0 == system('umount', '--no-mtab', "$options->{root}/proc") or error "umount /proc failed: $?"; }; 0 == system('mount', '-t', 'proc', 'proc', "$options->{root}/proc") or error "mount /proc failed: $?"; } elsif (any { $_ eq $options->{mode} } ('proot', 'fakechroot')) { # we cannot mount in fakechroot and proot mode # in proot mode we have /sys bind-mounted already through --bind=/sys } else { error "unknown mode: $options->{mode}"; } # prevent daemons from starting { open my $fh, '>', "$options->{root}/usr/sbin/policy-rc.d" or error "cannot open policy-rc.d: $!"; print $fh "#!/bin/sh\n"; print $fh "exit 101\n"; close $fh; chmod 0755, "$options->{root}/usr/sbin/policy-rc.d" or error "cannot chmod policy-rc.d: $!"; } { move("$options->{root}/sbin/start-stop-daemon", "$options->{root}/sbin/start-stop-daemon.REAL") or error "cannot move start-stop-daemon"; open my $fh, '>', "$options->{root}/sbin/start-stop-daemon" or error "cannot open policy-rc.d: $!"; print $fh "#!/bin/sh\n"; print $fh "echo \"Warning: Fake start-stop-daemon called, doing nothing\">&2\n"; close $fh; chmod 0755, "$options->{root}/sbin/start-stop-daemon" or error "cannot chmod start-stop-daemon: $!"; } &{$cmd}(); # cleanup move("$options->{root}/sbin/start-stop-daemon.REAL", "$options->{root}/sbin/start-stop-daemon") or error "cannot move start-stop-daemon"; unlink "$options->{root}/usr/sbin/policy-rc.d" or error "cannot unlink policy-rc.d: $!"; }; my $error = $@; # we use the cleanup function to do the unmounting $cleanup->(0); if ($error) { error "run_chroot failed: $error"; } } sub run_hooks($$) { my $name = shift; my $options = shift; if (scalar @{$options->{"${name}_hook"}} == 0) { return; } my $runner = sub { foreach my $script (@{$options->{"${name}_hook"}}) { if ( -x $script || $script !~ m/[^\w@\%+=:,.\/-]/a) { info "running --$name-hook directly: $script $options->{root}"; # execute it directly if it's an executable file # or if it there are no shell metacharacters # (the /a regex modifier makes \w match only ASCII) 0 == system($script, $options->{root}) or error "command failed: $script"; } else { info "running --$name-hook in shell: sh -c '$script' exec $options->{root}"; # otherwise, wrap everything in sh -c 0 == system('sh', '-c', $script, 'exec', $options->{root}) or error "command failed: $script"; } } }; if ($name eq 'setup') { # execute directly without mounting anything (the mount points do not # exist yet) &{$runner}(); } else { run_chroot \&$runner, $options; } } sub setup { my $options = shift; foreach my $key (sort keys %{$options}) { my $value = $options->{$key}; if (!defined $value) { next; } if (ref $value eq '') { debug "$key: $options->{$key}"; } elsif (ref $value eq 'ARRAY') { debug "$key: [" . (join ', ', @{$value}) . 
"]"; } else { error "unknown type"; } } my ($conf, $tmpfile) = tempfile(UNLINK => 1) or error "cannot open apt.conf: $!"; print $conf "Apt::Architecture \"$options->{nativearch}\";\n"; # the host system might have configured additional architectures # force only the native architecture if (scalar @{$options->{foreignarchs}} > 0) { print $conf "Apt::Architectures { \"$options->{nativearch}\"; "; foreach my $arch (@{$options->{foreignarchs}}) { print $conf "\"$arch\"; "; } print $conf "};\n"; } else { print $conf "Apt::Architectures \"$options->{nativearch}\";\n"; } print $conf "Dir \"$options->{root}\";\n"; # for authentication, use the keyrings from the host print $conf "Dir::Etc::Trusted \"/etc/apt/trusted.gpg\";\n"; print $conf "Dir::Etc::TrustedParts \"/etc/apt/trusted.gpg.d\";\n"; close $conf; { my @directories = ('/etc/apt/apt.conf.d', '/etc/apt/sources.list.d', '/etc/apt/preferences.d', '/var/cache/apt', '/var/lib/apt/lists/partial', '/var/lib/dpkg', '/etc/dpkg/dpkg.cfg.d/'); # if dpkg and apt operate from the outside we need some more # directories because dpkg and apt might not even be installed inside # the chroot if ($options->{mode} eq 'chrootless') { push @directories, ('/var/log/apt', '/var/lib/dpkg/triggers', '/var/lib/dpkg/info', '/var/lib/dpkg/alternatives', '/var/lib/dpkg/updates'); } foreach my $dir (@directories) { make_path("$options->{root}/$dir") or error "failed to create $dir: $!"; } } # We put certain configuration items in their own configuration file # because they have to be valid for apt invocation from outside as well as # from inside the chroot. # The config filename is chosen such that any settings in it will be # overridden by what the user specified with --aptopt. { open my $fh, '>', "$options->{root}/etc/apt/apt.conf.d/00mmdebstrap" or error "cannot open /etc/apt/apt.conf.d/00mmdebstrap: $!"; print $fh "Apt::Install-Recommends false;\n"; print $fh "Acquire::Languages \"none\";\n"; close $fh; } { open my $fh, '>', "$options->{root}/var/lib/dpkg/status" or error "failed to open(): $!"; close $fh; } # /var/lib/dpkg/available is required to exist or otherwise package # removals will fail { open my $fh, '>', "$options->{root}/var/lib/dpkg/available" or error "failed to open(): $!"; close $fh; } if (scalar @{$options->{foreignarchs}} > 0) { open my $fh, '>', "$options->{root}/var/lib/dpkg/arch" or error "cannot open /var/lib/dpkg/arch: $!"; print $fh "$options->{nativearch}\n"; foreach my $arch (@{$options->{foreignarchs}}) { print $fh "$arch\n"; } close $fh; } if (scalar @{$options->{aptopts}} > 0) { open my $fh, '>', "$options->{root}/etc/apt/apt.conf.d/99mmdebstrap" or error "cannot open /etc/apt/apt.conf.d/99mmdebstrap: $!"; foreach my $opt (@{$options->{aptopts}}) { if (-r $opt) { # flush handle because copy() uses syswrite() which bypasses # buffered IO $fh->flush(); copy $opt, $fh or error "cannot copy $opt: $!"; } else { print $fh $opt; if ($opt !~ /;$/) { print $fh ';'; } if ($opt !~ /\n$/) { print $fh "\n"; } } } close $fh; } if (scalar @{$options->{dpkgopts}} > 0) { # FIXME: in chrootless mode, dpkg will only read the configuration # from the host open my $fh, '>', "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap" or error "cannot open /etc/dpkg/dpkg.cfg.d/99mmdebstrap: $!"; foreach my $opt (@{$options->{dpkgopts}}) { if (-r $opt) { # flush handle because copy() uses syswrite() which bypasses # buffered IO $fh->flush(); copy $opt, $fh or error "cannot copy $opt: $!"; } else { print $fh $opt; if ($opt !~ /\n$/) { print $fh "\n"; } } } close $fh; } 
## setup merged usr #my @amd64_dirs = ('lib32', 'lib64', 'libx32'); # only amd64 for now #foreach my $dir ("bin", "sbin", "lib", @amd64_dirs) { # symlink "usr/$dir", "$options->{root}/$dir" or die "cannot create symlink: $!"; # make_path("$options->{root}/usr/$dir") or die "cannot create /usr/$dir: $!"; #} { open my $fh, '>', "$options->{root}/etc/fstab" or error "cannot open fstab: $!"; print $fh "# UNCONFIGURED FSTAB FOR BASE SYSTEM\n"; close $fh; chmod 0644, "$options->{root}/etc/fstab" or error "cannot chmod fstab: $!"; } # write /etc/apt/sources.list { open my $fh, '>', "$options->{root}/etc/apt/sources.list" or error "cannot open /etc/apt/sources.list: $!"; print $fh $options->{sourceslist}; close $fh; } # allow network access from within copy("/etc/resolv.conf", "$options->{root}/etc/resolv.conf") or error "cannot copy /etc/resolv.conf: $!"; copy("/etc/hostname", "$options->{root}/etc/hostname") or error "cannot copy /etc/hostname: $!"; if ($options->{havemknod}) { foreach my $file (@devfiles) { my ($fname, $mode, $type, $linkname, $devmajor, $devminor) = @{$file}; if ($type == 0) { # normal file error "type 0 not implemented"; } elsif ($type == 1) { # hardlink error "type 1 not implemented"; } elsif ($type == 2) { # symlink if ($options->{mode} eq 'fakechroot' and $linkname =~ /^\/proc/) { # there is no /proc in fakechroot mode next; } symlink $linkname, "$options->{root}/$fname" or error "cannot create symlink $fname"; next; # chmod cannot work on symlinks } elsif ($type == 3) { # character special 0 == system('mknod', "$options->{root}/$fname", 'c', $devmajor, $devminor) or error "mknod failed: $?"; } elsif ($type == 4) { # block special 0 == system('mknod', "$options->{root}/$fname", 'b', $devmajor, $devminor) or error "mknod failed: $?"; } elsif ($type == 5) { # directory make_path "$options->{root}/$fname", { error => \my $err }; if (@$err) { error "cannot create $fname"; } } else { error "unsupported type: $type"; } chmod $mode, "$options->{root}/$fname" or error "cannot chmod $fname: $!"; } } # we tell apt about the configuration via a config file passed via the # APT_CONFIG environment variable instead of using the --option command # line arguments because configuration settings like Dir::Etc have already # been evaluated at the time that apt takes its command line arguments # into account. $ENV{"APT_CONFIG"} = "$tmpfile"; # when apt-get update is run by the root user, then apt will attempt to # drop privileges to the _apt user. This will fail if the _apt user does # not have permissions to read the root directory. In that case, we have # to disable apt sandboxing. if ($options->{mode} eq 'root') { my $partial = '/var/lib/apt/lists/partial'; if (system('/usr/lib/apt/apt-helper', 'drop-privs', '--', 'test', '-r', "$options->{root}$partial") != 0) { warning "Download is performed unsandboxed as root as file $options->{root}$partial couldn't be accessed by user _apt"; open my $fh, '>>', $tmpfile or error "cannot open $tmpfile for appending: $!"; print $fh "APT::Sandbox::User \"root\";\n"; close $fh; } } # setting PATH for chroot, ldconfig, start-stop-daemon... 
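    # (chroot(8), ldconfig(8) and start-stop-daemon(8) live in sbin
    # directories, which are typically not in the PATH of unprivileged users,
    # so the sbin directories are appended here unconditionally)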
if (defined $ENV{PATH} && $ENV{PATH} ne "") { $ENV{PATH} = "$ENV{PATH}:/usr/sbin:/usr/bin:/sbin:/bin"; } else { $ENV{PATH} = "/usr/sbin:/usr/bin:/sbin:/bin"; } # run setup hooks run_hooks('setup', $options); info "running apt-get update..."; run_apt_progress({ ARGV => ['apt-get', 'update'] }); # check if anything was downloaded at all { open my $fh, '-|', 'apt-get', 'indextargets' // error "failed to fork(): $!"; chomp (my $indextargets = do { local $/; <$fh> }); close $fh; if ($indextargets eq '') { info "content of /etc/apt/sources.list:"; if ($verbosity_level >= 1) { copy("$options->{root}/etc/apt/sources.list", *STDERR); } error "apt-get update didn't download anything"; } } my %pkgs_to_install; if (defined $options->{include}) { for my $pkg (split /,/, $options->{include}) { $pkgs_to_install{$pkg} = (); } } if ($options->{variant} eq 'buildd') { $pkgs_to_install{'build-essential'} = (); } # To figure out the right package set for the apt variant we can use: # $ apt-get dist-upgrade -o dir::state::status=/dev/null # This is because that variants only contain essential packages and # apt and libapt treats apt as essential. If we want to install less # (essential variant) then we have to compute the package set ourselves. # Same if we want to install priority based variants. if (any { $_ eq $options->{variant} } ('extract', 'custom')) { info "downloading packages with apt..."; run_apt_progress({ ARGV => ['apt-get', '--yes', '-oApt::Get::Download-Only=true', 'install'], PKGS => [keys %pkgs_to_install], }); } elsif ($options->{variant} eq 'apt') { # if we just want to install Essential:yes packages, apt and their # dependencies then we can make use of libapt treating apt as # implicitly essential. An upgrade with the (currently) empty status # file will trigger an installation of the essential packages plus apt. # # 2018-09-02, #debian-dpkg on OFTC, times in UTC+2 # 23:39 < josch> I'll just put it in my script and if it starts # breaking some time I just say it's apt's fault. :P # 23:42 < DonKult> that is how it usually works, so yes, do that :P (<- # and please add that line next to it so you can # remind me in 5+ years that I said that after I wrote # in the bugreport: "Are you crazy?!? Nobody in his # right mind would even suggest depending on it!") info "downloading packages with apt..."; run_apt_progress({ ARGV => ['apt-get', '--yes', '-oApt::Get::Download-Only=true', 'dist-upgrade'], }); } elsif (any { $_ eq $options->{variant} } ('essential', 'standard', 'important', 'required', 'buildd', 'minbase')) { my %ess_pkgs; open(my $pipe_apt, '-|', 'apt-get', 'indextargets', '--format', '$(FILENAME)', 'Created-By: Packages') or error "cannot start apt-get indextargets: $!"; while (my $fname = <$pipe_apt>) { chomp $fname; open (my $pipe_cat, '-|', '/usr/lib/apt/apt-helper', 'cat-file', $fname) or error "cannot start apt-helper cat-file: $!"; my $pkgname; my $ess = ''; my $prio = 'optional'; my $arch = ''; while (my $line = <$pipe_cat>) { chomp $line; # Dpkg::Index takes 10 seconds to parse a typical Packages # file. Thus we instead use a simple parser that just retrieve # the information we need. 
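                # For reference, these are the only fields this parser cares
                # about, as they appear in a Packages stanza (example values):
                #
                #   Package: dpkg
                #   Essential: yes
                #   Priority: required
                #   Architecture: amd64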
if ($line ne "") { if ($line =~ /^Package: (.*)/) { $pkgname = $1; } elsif ($line =~ /^Essential: yes$/) { $ess = 'yes' } elsif ($line =~ /^Priority: (.*)/) { $prio = $1; } elsif ($line =~ /^Architecture: (.*)/) { $arch = $1; } next; } # we are only interested of packages of native architecture or # Architecture:all if ($arch eq $options->{nativearch} or $arch eq 'all') { # the line is empty, thus a package stanza just finished # processing and we can handle it now if ($ess eq 'yes') { $ess_pkgs{$pkgname} = (); } elsif ($options->{variant} eq 'essential') { # for this variant we are only interested in the # essential packages } elsif (any { $_ eq $options->{variant} } ('standard', 'important', 'required', 'buildd', 'minbase')) { if ($prio eq 'optional' or $prio eq 'extra') { # always ignore packages of priority optional and extra } elsif ($prio eq 'standard') { if (none { $_ eq $options->{variant} } ('important', 'required', 'buildd', 'minbase')) { $pkgs_to_install{$pkgname} = (); } } elsif ($prio eq 'important') { if (none { $_ eq $options->{variant} } ('required', 'buildd', 'minbase')) { $pkgs_to_install{$pkgname} = (); } } elsif ($prio eq 'required') { # required packages are part of all sets except # essential and apt $pkgs_to_install{$pkgname} = (); } else { error "unknown priority: $prio"; } } else { error "unknown variant: $options->{variant}"; } } # reset values undef $pkgname; $ess = ''; $prio = 'optional'; $arch = ''; } close $pipe_cat; $? == 0 or error "apt-helper cat-file failed: $?"; } close $pipe_apt; $? == 0 or error "apt-get indextargets failed: $?"; info "downloading packages with apt..."; run_apt_progress({ ARGV => ['apt-get', '--yes', '-oApt::Get::Download-Only=true', 'install'], PKGS => [keys %ess_pkgs], }); } else { error "unknown variant: $options->{variant}"; } # extract the downloaded packages my @essential_pkgs; { my $apt_archives = "/var/cache/apt/archives/"; opendir my $dh, "$options->{root}/$apt_archives" or error "cannot read $apt_archives"; while (my $deb = readdir $dh) { if ($deb !~ /\.deb$/) { next; } $deb = "$apt_archives/$deb"; if (!-f "$options->{root}/$deb") { next; } push @essential_pkgs, $deb; } close $dh; } if (scalar @essential_pkgs == 0) { # check if a file:// URI was used open(my $pipe_apt, '-|', 'apt-get', 'indextargets', '--format', '$(URI)', 'Created-By: Packages') or error "cannot start apt-get indextargets: $!"; while (my $uri = <$pipe_apt>) { if ($uri =~ /^file:\/\//) { error "nothing got downloaded -- use copy:// instead of file://"; } } error "nothing got downloaded"; } # We have to extract the packages from @essential_pkgs either if we run in # chrootless mode and extract variant or in any other mode. # In other words, the only scenario in which the @essential_pkgs are not # extracted are in chrootless mode in any other than the extract variant. 
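    # Put differently (for illustration):
    #
    #   mode        variant    extract @essential_pkgs here?
    #   chrootless  extract    yes
    #   chrootless  other      no (apt/dpkg will unpack them later)
    #   any other   any        yes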
if ($options->{mode} eq 'chrootless' and $options->{variant} ne 'extract') { # nothing to do } else { info "extracting archives..."; print_progress 0.0; my $counter = 0; my $total = scalar @essential_pkgs; foreach my $deb (@essential_pkgs) { $counter += 1; # not using dpkg-deb --extract as that would replace the # merged-usr symlinks with plain directories pipe my $rfh, my $wfh; my $pid1 = fork() // error "fork() failed: $!"; if ($pid1 == 0) { open(STDOUT, '>&', $wfh); exec 'dpkg-deb', '--fsys-tarfile', "$options->{root}/$deb"; } my $pid2 = fork() // error "fork() failed: $!"; if ($pid2 == 0) { open(STDIN, '<&', $rfh); exec 'tar', '-C', $options->{root}, '--keep-directory-symlink', '--extract', '--file', '-'; } waitpid($pid1, 0); $? == 0 or error "dpkg-deb --fsys-tarfile failed: $?"; waitpid($pid2, 0); $? == 0 or error "tar --extract failed: $?"; print_progress ($counter/$total*100); } print_progress "done"; } if ($options->{mode} eq 'chrootless') { info "installing packages..."; # FIXME: the dpkg config from the host is parsed before the command # line arguments are parsed and might break this mode # Example: if the host has --path-exclude set, then this will also # affect the chroot. my @chrootless_opts = ( '-oDPkg::Options::=--force-not-root', '-oDPkg::Options::=--force-script-chrootless', '-oDPkg::Options::=--root=' . $options->{root}, '-oDPkg::Options::=--log=' . "$options->{root}/var/log/dpkg.log"); if ($options->{variant} eq 'extract') { # nothing to do } else { run_apt_progress({ ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'], PKGS => [map { "$options->{root}/$_" } @essential_pkgs], }); } if (any { $_ eq $options->{variant} } ('extract', 'custom')) { # nothing to do } elsif (any { $_ eq $options->{variant} } ('essential', 'apt', 'standard', 'important', 'required', 'buildd', 'minbase')) { # run essential hooks run_hooks('essential', $options); if (%pkgs_to_install) { run_apt_progress({ ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'], PKGS => [keys %pkgs_to_install], }); } } else { error "unknown variant: $options->{variant}"; } } elsif (any { $_ eq $options->{mode} } ('root', 'unshare', 'fakechroot', 'proot')) { if (any { $_ eq $options->{variant} } ('extract', 'custom')) { # nothing to do } elsif (any { $_ eq $options->{variant} } ('essential', 'apt', 'standard', 'important', 'required', 'buildd', 'minbase')) { if ($options->{mode} eq 'fakechroot') { # this borrows from and extends # /etc/fakechroot/debootstrap.env and /etc/fakechroot/chroot.env { my @fakechrootsubst = (); foreach my $dir ('/usr/sbin', '/usr/bin', '/sbin', '/bin') { push @fakechrootsubst, "$dir/chroot=/usr/sbin/chroot.fakechroot"; push @fakechrootsubst, "$dir/mkfifo=/bin/true"; push @fakechrootsubst, "$dir/ldconfig=/bin/true"; push @fakechrootsubst, "$dir/ldd=/usr/bin/ldd.fakechroot"; push @fakechrootsubst, "$dir/ischroot=/bin/true"; } if (defined $ENV{FAKECHROOT_CMD_SUBST} && $ENV{FAKECHROOT_CMD_SUBST} ne "") { push @fakechrootsubst, split /:/, $ENV{FAKECHROOT_CMD_SUBST}; } $ENV{FAKECHROOT_CMD_SUBST} = join ':', @fakechrootsubst; } if (defined $ENV{FAKECHROOT_EXCLUDE_PATH} && $ENV{FAKECHROOT_EXCLUDE_PATH} ne "") { $ENV{FAKECHROOT_EXCLUDE_PATH} = "$ENV{FAKECHROOT_EXCLUDE_PATH}:/dev:/proc:/sys"; } else { $ENV{FAKECHROOT_EXCLUDE_PATH} = '/dev:/proc:/sys'; } # workaround for long unix socket path if FAKECHROOT_BASE # exceeds the limit of 108 bytes $ENV{FAKECHROOT_AF_UNIX_PATH} = "/tmp"; { my @ldsoconf = ('/etc/ld.so.conf'); opendir(my $dh, '/etc/ld.so.conf.d') or error "Can't 
opendir(/etc/ld.so.conf.d): $!"; while (my $entry = readdir $dh) { # skip the "." and ".." entries next if $entry eq "."; next if $entry eq ".."; next if $entry !~ /\.conf$/; push @ldsoconf, "/etc/ld.so.conf.d/$entry"; } closedir($dh); my @ldlibpath = (); if (defined $ENV{LD_LIBRARY_PATH} && $ENV{LD_LIBRARY_PATH} ne "") { push @ldlibpath, (split /:/, $ENV{LD_LIBRARY_PATH}); } # FIXME: workaround allowing installation of systemd should # live in fakechroot, see #917920 push @ldlibpath, "/lib/systemd"; foreach my $fname (@ldsoconf) { open my $fh, "<", $fname or error "cannot open $fname for reading: $!"; while (my $line = <$fh>) { next if $line !~ /^\//; push @ldlibpath, $line; } close $fh; } $ENV{LD_LIBRARY_PATH} = join ':', @ldlibpath; } } # make sure that APT_CONFIG is not set when executing anything inside the # chroot my @chrootcmd = (); if ($options->{mode} eq 'proot') { push @chrootcmd, ( 'proot', '--root-id', '--bind=/dev', '--bind=/proc', '--bind=/sys', "--rootfs=$options->{root}", '--cwd=/'); } elsif (any { $_ eq $options->{mode} } ('root', 'unshare', 'fakechroot')) { push @chrootcmd, ('/usr/sbin/chroot', $options->{root}); } else { error "unknown mode: $options->{mode}"; } # copy qemu-user-static binary into chroot or setup proot with --qemu if (defined $options->{qemu}) { if ($options->{mode} eq 'proot') { push @chrootcmd, "--qemu=qemu-$options->{qemu}"; } elsif ($options->{mode} eq 'fakechroot') { # The binfmt support on the outside is used, so qemu needs to know # where it has to look for shared libraries $ENV{QEMU_LD_PREFIX} = $options->{root}; # Make sure that the fakeroot and fakechroot shared libraries # exist for the right architecture open my $fh, '-|', 'dpkg-architecture', '-a', $options->{nativearch}, '-qDEB_HOST_MULTIARCH' // error "failed to fork(): $!"; chomp (my $deb_host_multiarch = do { local $/; <$fh> }); close $fh; if ($? != 0 or !$deb_host_multiarch) { error "dpkg-architecture failed: $?"; } my $fakechrootdir = "/usr/lib/$deb_host_multiarch/fakechroot"; if (!-e "$fakechrootdir/libfakechroot.so") { error "$fakechrootdir/libfakechroot.so doesn't exist. Install libfakechroot:$options->{nativearch} outside the chroot"; } my $fakerootdir = "/usr/lib/$deb_host_multiarch/libfakeroot"; if (!-e "$fakerootdir/libfakeroot-sysv.so") { error "$fakerootdir/libfakeroot-sysv.so doesn't exist. Install libfakeroot:$options->{nativearch} outside the chroot"; } # fakechroot only fills LD_LIBRARY_PATH with the directories of # the host's architecture. We append the directories of the chroot # architecture. $ENV{LD_LIBRARY_PATH} .= ":$fakechrootdir:$fakerootdir"; } elsif (any { $_ eq $options->{mode} } ('root', 'unshare')) { # other modes require a static qemu-user binary my $qemubin = "/usr/bin/qemu-$options->{qemu}-static"; if (!-e $qemubin) { error "cannot find $qemubin"; } copy $qemubin, "$options->{root}/$qemubin" or error "cannot copy $qemubin: $!"; } else { error "unknown mode: $options->{mode}"; } } # some versions of coreutils use the renameat2 system call in mv. # This breaks certain versions of fakechroot and proot. Here we do # a sanity check and warn the user in case things might break. 
if (any { $_ eq $options->{mode} } ('fakechroot', 'proot') and -e "$options->{root}/bin/mv") { mkdir "$options->{root}/000-move-me" or error "cannot create directory: $!"; my $ret = system @chrootcmd, '/bin/mv', '/000-move-me', '/001-delete-me'; if ($ret != 0) { if ($options->{mode} eq 'proot') { info "the /bin/mv binary inside the chroot doesn't work under proot"; info "this is likely due to missing support for renameat2 in proot"; info "see https://github.com/proot-me/PRoot/issues/147"; } else { info "the /bin/mv binary inside the chroot doesn't work under fakechroot"; info "with certain versions of coreutils and glibc, this is due to missing support for renameat2 in fakechroot"; info "see https://github.com/dex4er/fakechroot/issues/60"; } info "expect package post installation scripts not to work"; rmdir "$options->{root}/000-move-me" or error "cannot rmdir: $!"; } else { rmdir "$options->{root}/001-delete-me" or error "cannot rmdir: $!"; } } # install the extracted packages properly # we need --force-depends because dpkg does not take Pre-Depends into # account and thus doesn't install them in the right order # And the --predep-package option is broken: #539133 info "installing packages..."; run_dpkg_progress({ ARGV => [@chrootcmd, 'env', '--unset=TMPDIR', 'dpkg', '--install', '--force-depends'], PKGS => \@essential_pkgs, }); # if the path-excluded option was added to the dpkg config, reinstall all # packages if (-e "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap") { open(my $fh, '<', "$options->{root}/etc/dpkg/dpkg.cfg.d/99mmdebstrap") or error "cannot open /etc/dpkg/dpkg.cfg.d/99mmdebstrap: $!"; my $num_matches = grep /^path-exclude=/, <$fh>; close $fh; if ($num_matches > 0) { # without --skip-same-version, dpkg will install the given # packages even though they are already installed info "re-installing packages because of path-exclude..."; run_dpkg_progress({ ARGV => [@chrootcmd, 'env', '--unset=TMPDIR', 'dpkg', '--install', '--force-depends'], PKGS => \@essential_pkgs, }); } } foreach my $deb (@essential_pkgs) { unlink "$options->{root}/$deb" or error "cannot unlink $deb: $!"; } # run essential hooks run_hooks('essential', $options); if (%pkgs_to_install) { # some packages have to be installed from the outside before anything # can be installed from the inside. # # we do not need to install any *-archive-keyring packages inside the # chroot prior to installing the packages, because the keyring is only # used when doing "apt-get update" and that was already done at the # beginning using key material from the outside. Since the apt cache # is already filled and we are not calling "apt-get update" again, the # keyring can be installed later during installation. But: if it's not # installed during installation, then we might end up with a fully # installed system without keyrings that are valid for its # sources.list. 
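    # In practice that means (as implemented below): apt itself unless the
    # variant is 'apt', apt-transport-https plus ca-certificates if any mirror
    # uses an https:// URI, and apt-transport-tor if any mirror uses a
    # tor+... URI.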
my %pkgs_to_install_from_outside; # install apt if necessary if ($options->{variant} ne 'apt') { $pkgs_to_install_from_outside{apt} = (); } # since apt will be run inside the chroot, make sure that # apt-transport-https and ca-certificates gets installed first if any # mirror is a https URI open(my $pipe_apt, '-|', 'apt-get', 'indextargets', '--format', '$(URI)', 'Created-By: Packages') or error "cannot start apt-get indextargets: $!"; while (my $uri = <$pipe_apt>) { if ($uri =~ /^https:\/\//) { # FIXME: support for https is part of apt >= 1.5 $pkgs_to_install_from_outside{'apt-transport-https'} = (); $pkgs_to_install_from_outside{'ca-certificates'} = (); last; } elsif ($uri =~ /^tor(\+[a-z]+)*:\/\//) { # tor URIs can be tor+http://, tor+https:// or even # tor+mirror+file:// $pkgs_to_install_from_outside{'apt-transport-tor'} = (); last; } } close $pipe_apt; $? == 0 or error "apt-get indextargets failed"; if (%pkgs_to_install_from_outside) { info 'downloading ' . (join ', ', keys %pkgs_to_install_from_outside) . "..."; run_apt_progress({ ARGV => ['apt-get', '--yes', '-oApt::Get::Download-Only=true', 'install'], PKGS => [keys %pkgs_to_install_from_outside], }); my @debs_to_install; my $apt_archives = "/var/cache/apt/archives/"; opendir my $dh, "$options->{root}/$apt_archives" or error "cannot read $apt_archives"; while (my $deb = readdir $dh) { if ($deb !~ /\.deb$/) { next; } $deb = "$apt_archives/$deb"; if (!-f "$options->{root}/$deb") { next; } push @debs_to_install, $deb; } close $dh; if (scalar @debs_to_install == 0) { error "nothing got downloaded"; } # we need --force-depends because dpkg does not take Pre-Depends # into account and thus doesn't install them in the right order info 'installing ' . (join ', ', keys %pkgs_to_install_from_outside) . "..."; run_dpkg_progress({ ARGV => [@chrootcmd, 'env', '--unset=TMPDIR', 'dpkg', '--install', '--force-depends'], PKGS => \@debs_to_install, }); foreach my $deb (@debs_to_install) { unlink "$options->{root}/$deb" or error "cannot unlink $deb: $!"; } } run_chroot { info "installing remaining packages inside the chroot..."; run_apt_progress({ ARGV => [@chrootcmd, 'env', '--unset=APT_CONFIG', '--unset=TMPDIR', 'apt-get', '--yes', 'install'], PKGS => [keys %pkgs_to_install], }); } $options; } } else { error "unknown variant: $options->{variant}"; } } else { error "unknown mode: $options->{mode}"; } run_hooks('customize', $options); # clean up temporary configuration file unlink "$options->{root}/etc/apt/apt.conf.d/00mmdebstrap" or error "failed to unlink /etc/apt/apt.conf.d/00mmdebstrap: $!"; # apt since 1.6 creates the auxfiles directory. If apt inside the chroot # is older than that, then it will not know how to clean it. 
if (-e "$options->{root}/var/lib/apt/lists/auxfiles") { rmdir "$options->{root}/var/lib/apt/lists/auxfiles" or die "cannot rmdir /var/lib/apt/lists/auxfiles: $!"; } info "cleaning package lists and apt cache..."; run_apt_progress({ ARGV => ['apt-get', '--option', 'Dir::Etc::SourceList=/dev/null', 'update'], }); run_apt_progress({ ARGV => ['apt-get', 'clean'] }); if (defined $options->{qemu} and $options->{mode} ne 'proot' and $options->{mode} ne 'fakechroot') { unlink "$options->{root}/usr/bin/qemu-$options->{qemu}-static" or error "cannot unlink /usr/bin/qemu-$options->{qemu}-static: $!"; } # clean up certain files to make output reproducible unlink "$options->{root}/var/log/dpkg.log"; unlink "$options->{root}/var/log/apt/history.log"; unlink "$options->{root}/var/log/apt/term.log"; unlink "$options->{root}/var/log/alternatives.log"; unlink "$options->{root}/var/cache/ldconfig/aux-cache"; } sub main() { umask 022; my $mtime = time; if (exists $ENV{SOURCE_DATE_EPOCH}) { $mtime = $ENV{SOURCE_DATE_EPOCH}+0; } $ENV{DEBIAN_FRONTEND} = 'noninteractive'; $ENV{DEBCONF_NONINTERACTIVE_SEEN} = 'true'; $ENV{LC_ALL} = 'C.UTF-8'; $ENV{LANGUAGE} = 'C.UTF-8'; $ENV{LANG} = 'C.UTF-8'; # copy ARGV because getopt modifies it my @ARGVORIG = @ARGV; my $options = { components => "main", variant => "important", include => undef, mode => 'auto', dpkgopts => [], aptopts => [], noop => [], setup_hook => [], essential_hook => [], customize_hook => [], }; chomp ($options->{architectures} = `dpkg --print-architecture`); my $logfile = undef; Getopt::Long::Configure ('default', 'bundling', 'auto_abbrev', 'ignore_case_always'); GetOptions( 'h|help' => sub { pod2usage(-exitval => 0, -verbose => 2) }, 'version' => sub { print STDOUT "mmdebstrap $VERSION\n"; exit 0; }, 'components=s' => \$options->{components}, 'variant=s' => \$options->{variant}, 'include=s' => \$options->{include}, 'architectures=s' => \$options->{architectures}, 'mode=s' => \$options->{mode}, 'dpkgopt=s@' => \$options->{dpkgopts}, 'aptopt=s@' => \$options->{aptopts}, 's|silent' => sub { $verbosity_level = 0; }, 'q|quiet' => sub { $verbosity_level = 0; }, 'v|verbose' => sub { $verbosity_level = 2; }, 'd|debug' => sub { $verbosity_level = 3; }, 'logfile=s' => \$logfile, # no-op options so that mmdebstrap can be used with # sbuild-createchroot --debootstrap=mmdebstrap 'resolve-deps' => sub { push @{$options->{noop}}, 'resolve-deps'; }, 'merged-usr' => sub { push @{$options->{noop}}, 'merged-usr'; }, 'no-merged-usr' => sub { push @{$options->{noop}}, 'no-merged-usr'; }, # hook options are hidden until I'm happy with them 'setup-hook=s@' => \$options->{setup_hook}, 'essential-hook=s@' => \$options->{essential_hook}, 'customize-hook=s@' => \$options->{customize_hook}, ) or pod2usage(-exitval => 2, -verbose => 1); if (defined($logfile)) { open(STDERR, '>', $logfile) or error "cannot open $logfile: $!"; } foreach my $arg (@{$options->{noop}}) { info "The option --$arg is a no-op. It only exists for compatibility with some debootstrap wrappers."; } my @valid_variants = ('extract', 'custom', 'essential', 'apt', 'required', 'minbase', 'buildd', 'important', 'debootstrap', '-', 'standard'); if (none { $_ eq $options->{variant}} @valid_variants) { error "invalid variant. Choose from " . 
(join ', ', @valid_variants);
    }

    # debootstrap and - are aliases for important
    if (any { $_ eq $options->{variant} } ('-', 'debootstrap')) {
        $options->{variant} = 'important';
    }

    if ($options->{variant} eq 'essential' and defined $options->{include}) {
        error "cannot install extra packages with variant essential because apt is missing";
    }

    # fakeroot is an alias for fakechroot
    if ($options->{mode} eq 'fakeroot') {
        $options->{mode} = 'fakechroot';
    }
    # sudo is an alias for root
    if ($options->{mode} eq 'sudo') {
        $options->{mode} = 'root';
    }
    my @valid_modes = ('auto', 'root', 'unshare', 'fakechroot', 'proot', 'chrootless');
    if (none { $_ eq $options->{mode} } @valid_modes) {
        error "invalid mode. Choose from " . (join ', ', @valid_modes);
    }

    # figure out the mode to use or test whether the chosen mode is legal
    if ($options->{mode} eq 'auto') {
        if ($EFFECTIVE_USER_ID == 0) {
            $options->{mode} = 'root';
        } elsif (test_unshare(0)) {
            $options->{mode} = 'unshare';
        } elsif (system('fakechroot --version>/dev/null') == 0) {
            $options->{mode} = 'fakechroot';
        } elsif (system('proot --version>/dev/null') == 0) {
            $options->{mode} = 'proot';
        } else {
            error "unable to pick chroot mode automatically";
        }
        info "automatically chosen mode: $options->{mode}";
    } elsif ($options->{mode} eq 'root') {
        if ($EFFECTIVE_USER_ID != 0) {
            error "need to be root";
        }
    } elsif ($options->{mode} eq 'proot') {
        if (system('proot --version>/dev/null') != 0) {
            error "need working proot binary";
        }
    } elsif ($options->{mode} eq 'fakechroot') {
        # test if we are inside fakechroot already
        # We fork a child process because setting FAKECHROOT_DETECT seems to
        # be an irreversible operation for fakechroot.
        my $pid = open my $rfh, '-|' // error "failed to fork(): $!";
        if ($pid == 0) {
            # with the FAKECHROOT_DETECT environment variable set, any program
            # execution will be replaced with the output "fakechroot [version]"
            $ENV{FAKECHROOT_DETECT} = 0;
            exec 'echo', 'If fakechroot is running, this will not be printed';
        }
        my $content = do { local $/; <$rfh> };
        waitpid $pid, 0;
        if ($?
== 0 and $content =~ /^fakechroot \d\.\d+$/) { # fakechroot is already running } elsif (system('fakechroot --version>/dev/null') != 0) { error "need working fakechroot binary"; } else { # exec ourselves again but within fakechroot exec 'fakechroot', 'fakeroot', $PROGRAM_NAME, @ARGVORIG; } } elsif ($options->{mode} eq 'unshare') { if (!test_unshare(1)) { my $procfile = '/proc/sys/kernel/unprivileged_userns_clone'; open(my $fh, '<', $procfile) or error "failed to open $procfile: $!"; chomp(my $content = do { local $/; <$fh> }); close($fh); if ($content ne "1") { info "/proc/sys/kernel/unprivileged_userns_clone is set to $content"; info "try running: sudo sysctl -w kernel.unprivileged_userns_clone=1"; info "or permanently enable unprivileged usernamespaces by putting the setting into /etc/sysctl.d/"; info "see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=898446"; } exit 1; } } elsif ($options->{mode} eq 'chrootless') { # nothing to do } else { error "unknown mode: $options->{mode}"; } my ($nativearch, @foreignarchs) = split /,/, $options->{architectures}; $options->{nativearch} = $nativearch; $options->{foreignarchs} = \@foreignarchs; { # FIXME: autogenerate this list my $deb2qemu = { alpha => 'alpha', amd64 => 'x86_64', arm => 'arm', arm64 => 'aarch64', armel => 'arm', armhf => 'arm', hppa => 'hppa', i386 => 'i386', m68k => 'm68k', mips => 'mips', mips64 => 'mips64', mips64el => 'mips64el', mipsel => 'mipsel', powerpc => 'ppc', ppc64 => 'ppc64', ppc64el => 'ppc64le', riscv64 => 'riscv64', s390x => 's390x', sh4 => 'sh4', sparc => 'sparc', sparc64 => 'sparc64', }; chomp (my $hostarch = `dpkg --print-architecture`); if ($hostarch ne $nativearch) { my $pid = open my $fh, '-|' // error "failed to fork(): $!"; if ($pid == 0) { { no warnings; # don't print a warning if the following fails exec 'arch-test', '-n', $nativearch; } # if exec didn't work (for example because the arch-test program is # missing) prepare for the worst and assume that the architecture # cannot be executed print "$nativearch: not supported on this machine/kernel\n"; exit 1; } chomp (my $content = do { local $/; <$fh> }); close $fh; if ($? 
!= 0 or $content ne "$nativearch: ok") {
            info "$nativearch cannot be executed, falling back to qemu-user";
            if (!exists $deb2qemu->{$nativearch}) {
                error "no mapping from $nativearch to qemu-user binary";
            }
            $options->{qemu} = $deb2qemu->{$nativearch};
            {
                open my $fh, '<', '/proc/filesystems' or error "failed to open /proc/filesystems: $!";
                unless (grep /^nodev\tbinfmt_misc$/, (<$fh>)) {
                    error "binfmt_misc not found in /proc/filesystems -- is the module loaded?";
                }
                close $fh;
            }
            {
                open my $fh, '<', '/proc/mounts' or error "failed to open /proc/mounts: $!";
                unless (grep /^binfmt_misc \/proc\/sys\/fs\/binfmt_misc binfmt_misc/, (<$fh>)) {
                    error "binfmt_misc not found in /proc/mounts -- not mounted?";
                }
                close $fh;
            }
            {
                open my $fh, '-|', '/usr/sbin/update-binfmts', '--display', "qemu-$options->{qemu}" // error "failed to fork(): $!";
                chomp (my $binfmts = do { local $/; <$fh> });
                close $fh;
                if ($binfmts eq '') {
                    error "qemu-$options->{qemu} is not a supported binfmt name";
                }
            }
        } else {
            info "$nativearch can be executed on this $hostarch machine";
        }
    } else {
        info "chroot architecture $nativearch is equal to the host's architecture";
    }
}

{
    my $suite;
    if (scalar @ARGV > 0) {
        $suite = shift @ARGV;
        if (scalar @ARGV > 0) {
            $options->{target} = shift @ARGV;
        } else {
            $options->{target} = '-';
        }
    } else {
        info "No SUITE specified, expecting sources.list on standard input";
        $options->{target} = '-';
    }
    my $sourceslist = '';
    my $stdindata = '';
    # make sure that we only attempt to read from STDIN if it's *not*
    # connected to the terminal (because we don't expect the user to type
    # the sources.list file)
    if (! -t STDIN) {
        info "Reading sources.list from standard input...";
        $stdindata = do { local $/; <STDIN> };
    }
    if (! defined $suite) {
        # If no suite was specified, then the whole sources.list has to
        # come from standard input
        $sourceslist .= $stdindata;
    } else {
        if (scalar @ARGV > 0) {
            for my $arg (@ARGV) {
                if ($arg eq '-') {
                    $sourceslist .= $stdindata;
                } elsif ($arg =~ /^deb(-src)?
/) { $sourceslist .= "$arg\n"; } elsif ($arg =~ /:\/\//) { $sourceslist .= "deb $arg $suite $options->{components}\n"; } elsif (-f $arg) { open my $fh, '<', $arg or error "cannot open $arg: $!"; while (my $line = <$fh>) { $sourceslist .= $line; } close $fh; } else { error "invalid mirror: $arg"; } } # if there was no explicit '-' mirror listed and something was # read on standard input, then just append it to the end if (none { $_ eq '-' } @ARGV) { # if nothing was read on standard input then nothing will # be appended $sourceslist .= $stdindata; } } elsif ($stdindata ne '') { $sourceslist .= $stdindata; } else { # FIXME: change default mirror depending on $suite # (for derivatives) $sourceslist .= "deb http://deb.debian.org/debian $suite $options->{components}\n"; if (any { $_ eq $suite } ('stable', 'oldstable', 'stretch') ) { $sourceslist .= "deb http://deb.debian.org/debian $suite-updates $options->{components}\n"; $sourceslist .= "deb http://security.debian.org/debian-security $suite/updates $options->{components}\n"; } } } if ($sourceslist eq '') { error "empty apt sources.list"; } $options->{sourceslist} = $sourceslist; } if ($options->{target} ne '-') { my $abs_path = abs_path($options->{target}); if (!defined $abs_path) { error "unable to get absolute path of target directory $options->{target}"; } $options->{target} = $abs_path; } if ($options->{target} eq '/') { error "refusing to use the filesystem root as output directory"; } my $tar_compressor = get_tar_compressor($options->{target}); # figure out whether a tarball has to be created in the end $options->{maketar} = 0; if (defined $tar_compressor or $options->{target} =~ /\.tar$/ or $options->{target} eq '-') { $options->{maketar} = 1; if (any { $_ eq $options->{variant} } ('extract', 'custom') and $options->{mode} eq 'fakechroot') { info "creating a tarball in fakechroot mode might fail in extract and custom variants because there might be no tar inside the chroot"; } # try to fail early if target tarball cannot be opened for writing if ($options->{target} ne '-') { open my $fh, '>', $options->{target} or error "cannot open $options->{target} for writing: $!"; close $fh; } # check if the compressor is installed if (defined $tar_compressor) { my $pid = fork(); if ($pid == 0) { open(STDOUT, '>', '/dev/null') or error "cannot open /dev/null for writing: $!"; open(STDIN, '<', '/dev/null') or error "cannot open /dev/null for reading: $!"; exec $tar_compressor or error "cannot exec $tar_compressor: $!"; } waitpid $pid, 0; if ($? != 0) { error "failed to start $tar_compressor"; } } } if ($options->{maketar}) { # since the output is a tarball, we create the rootfs in a temporary # directory $options->{root} = tempdir( 'mmdebstrap.XXXXXXXXXX', DIR => File::Spec->tmpdir ); info "using $options->{root} as tempdir"; # in unshare and root mode, other users than the current user need to # access the rootfs, most prominently, the _apt user. Thus, make the # temporary directory world readable. if (any { $_ eq $options->{mode} } ('unshare', 'root')) { chmod 0755, $options->{root} or error "cannot chmod root: $!"; } } else { # user does not seem to have specified a tarball as output, thus work # directly in the supplied directory $options->{root} = $options->{target}; if (-e $options->{root}) { if (!-d $options->{root}) { error "$options->{root} exists and is not a directory"; } # check if the directory is empty or contains nothing more than an # empty lost+found directory. The latter exists on freshly created # ext3 and ext4 partitions. 
# rationale for requiring an empty directory: https://bugs.debian.org/833525 opendir(my $dh, $options->{root}) or error "Can't opendir($options->{root}): $!"; while (my $entry = readdir $dh) { # skip the "." and ".." entries next if $entry eq "."; next if $entry eq ".."; # if the entry is a directory named "lost+found" then skip it # if it's empty if ($entry eq "lost+found" and -d "$options->{root}/$entry") { opendir(my $dh2, "$options->{root}/$entry"); # Attempt reading the directory thrice. If the third time # succeeds, then it has more entries than just "." and ".." # and must thus not be empty. readdir $dh2; readdir $dh2; # rationale for requiring an empty directory: # https://bugs.debian.org/833525 if (readdir $dh2) { error "$options->{root} contains a non-empty lost+found directory"; } closedir($dh2); } else { error "$options->{root} is not empty"; } } closedir($dh); } else { make_path($options->{root}) or error "cannot create root: $!"; } } # check for double quotes because apt doesn't allow to escape them and # thus paths with double quotes are invalid in the apt config if ($options->{root} =~ /"/) { error "apt cannot handle paths with double quotes"; } my @idmap; # for unshare mode the rootfs directory has to have appropriate # permissions if ($options->{mode} eq 'unshare') { @idmap = read_subuid_subgid; # sanity check if (scalar(@idmap) != 2 || $idmap[0][0] ne 'u' || $idmap[1][0] ne 'g') { error "invalid idmap"; } my $outer_gid = $REAL_GROUP_ID+0; my $pid = get_unshare_cmd { chown 1, 1, $options->{root} } [ ['u', '0', $REAL_USER_ID, '1'], ['g', '0', $outer_gid, '1'], ['u', '1', $idmap[0][2], '1'], ['g', '1', $idmap[1][2], '1']]; waitpid $pid, 0; $? == 0 or error "chown failed"; } # figure out whether we have mknod $options->{havemknod} = 0; if ($options->{mode} eq 'unshare') { my $pid = get_unshare_cmd { $options->{havemknod} = havemknod($options->{root}); } \@idmap; waitpid $pid, 0; $? == 0 or error "havemknod failed"; } elsif (any { $_ eq $options->{mode} } ('root', 'fakechroot', 'proot', 'chrootless')) { $options->{havemknod} = havemknod($options->{root}); } else { error "unknown mode: $options->{mode}"; } my $devtar = ''; # We always craft the /dev entries ourselves if a tarball is to be created if ($options->{maketar}) { foreach my $file (@devfiles) { my ($fname, $mode, $type, $linkname, $devmajor, $devminor) = @{$file}; my $entry = pack('a100 a8 a8 a8 a12 a12 A8 a1 a100 a8 a32 a32 a8 a8 a155 x12', $fname, sprintf('%07o', $mode), sprintf('%07o', 0), # uid sprintf('%07o', 0), # gid sprintf('%011o', 0), # size sprintf('%011o', $mtime), '', # checksum $type, $linkname, "ustar ", '', # username '', # groupname defined($devmajor) ? sprintf('%07o', $devmajor) : '', defined($devminor) ? 
sprintf('%07o', $devminor) : '', '', # prefix ); # compute and insert checksum substr($entry,148,7) = sprintf("%06o\0", unpack("%16C*",$entry)); $devtar .= $entry; } } my $exitstatus = 0; my @taropts = ('--sort=name', "--mtime=\@$mtime", '--clamp-mtime', '--numeric-owner', '--one-file-system', '-c', '--exclude=./dev'); # disable signals so that we can fork and change behaviour of the signal # handler in the parent and child without getting interrupted my $sigset = POSIX::SigSet->new(SIGINT, SIGHUP, SIGPIPE, SIGTERM); POSIX::sigprocmask(SIG_BLOCK, $sigset) or error "Can't block signals: $!"; my $pid; pipe my $rfh, my $wfh; if ($options->{mode} eq 'unshare') { $pid = get_unshare_cmd { # child $SIG{'INT'} = 'DEFAULT'; $SIG{'HUP'} = 'DEFAULT'; $SIG{'PIPE'} = 'DEFAULT'; $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $rfh; open(STDOUT, '>&', STDERR); setup($options); if ($options->{maketar}) { info "creating tarball..."; # redirect tar output to the writing end of the pipe so that the # parent process can capture the output open(STDOUT, '>&', $wfh); # Add ./dev as the first entries of the tar file. # We cannot add them after calling tar, because there is no way to # prevent tar from writing NULL entries at the end. print $devtar; # pack everything except ./dev 0 == system('tar', @taropts, '-C', $options->{root}, '.') or error "tar failed: $?"; info "done"; } exit 0; } \@idmap; } elsif (any { $_ eq $options->{mode} } ('root', 'fakechroot', 'proot', 'chrootless')) { $pid = fork() // error "fork() failed: $!"; if ($pid == 0) { $SIG{'INT'} = 'DEFAULT'; $SIG{'HUP'} = 'DEFAULT'; $SIG{'PIPE'} = 'DEFAULT'; $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $rfh; open(STDOUT, '>&', STDERR); setup($options); if ($options->{maketar}) { info "creating tarball..."; # redirect tar output to the writing end of the pipe so that the # parent process can capture the output open(STDOUT, '>&', $wfh); # Add ./dev as the first entries of the tar file. # We cannot add them after calling tar, because there is no way to # prevent tar from writing NULL entries at the end. 
print $devtar; if ($options->{mode} eq 'fakechroot') { # Fakechroot requires tar to run inside the chroot or # otherwise absolute symlinks will include the path to the # root directory 0 == system('/usr/sbin/chroot', $options->{root}, 'tar', @taropts, '-C', '/', '.') or error "tar failed: $?"; } elsif ($options->{mode} eq 'proot') { # proot requires tar to run inside proot or otherwise # permissions will be completely off my @qemuopt = (); if (defined $options->{qemu}) { push @qemuopt, "--qemu=qemu-$options->{qemu}"; push @taropts, "--exclude=./host-rootfs" } 0 == system('proot', '--root-id', "--rootfs=$options->{root}", '--cwd=/', @qemuopt, 'tar', @taropts, '-C', '/', '.') or error "tar failed: $?"; } elsif (any { $_ eq $options->{mode} } ('root', 'chrootless')) { 0 == system('tar', @taropts, '-C', $options->{root}, '.') or error "tar failed: $?"; } else { error "unknown mode: $options->{mode}"; } info "done"; } exit 0; } } else { error "unknown mode: $options->{mode}"; } # parent my $got_signal = 0; my $waiting_for = "setup"; my $ignore = sub { $got_signal = shift; info "main() received signal $got_signal: waiting for $waiting_for..."; }; $SIG{'INT'} = $ignore; $SIG{'HUP'} = $ignore; $SIG{'PIPE'} = $ignore; $SIG{'TERM'} = $ignore; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; close $wfh; if ($options->{maketar}) { # we use eval() so that error() doesn't take this process down and # thus leaves the setup() process without a parent eval { if ($options->{target} eq '-') { if (!copy($rfh, *STDOUT)) { error "cannot copy to standard output: $!"; } } else { if (defined $tar_compressor) { POSIX::sigprocmask(SIG_BLOCK, $sigset) or error "Can't block signals: $!"; my $cpid = fork(); if ($cpid == 0) { # child: default signal handlers $SIG{'INT'} = 'DEFAULT'; $SIG{'HUP'} = 'DEFAULT'; $SIG{'PIPE'} = 'DEFAULT'; $SIG{'TERM'} = 'DEFAULT'; # unblock all delayed signals (and possibly handle them) POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; open(STDOUT, '>', $options->{target}) or error "cannot open $options->{target} for writing: $!"; open(STDIN, '<&', $rfh) or error "cannot open file handle for reading: $!"; exec $tar_compressor or error "cannot exec $tar_compressor: $!"; } POSIX::sigprocmask(SIG_UNBLOCK, $sigset) or error "Can't unblock signals: $!"; waitpid $cpid, 0; if ($? != 0) { error "failed to start $tar_compressor"; } } else { if(!copy($rfh, $options->{target})) { error "cannot copy to $options->{target}: $!"; } } } }; if ($@) { # we cannot die here because that would leave the other thread # running without a parent warning "run_chroot failed: $@"; $exitstatus = 1; } } close($rfh); waitpid $pid, 0; if ($? != 0) { $exitstatus = 1; } # change signal handler message $waiting_for = "cleanup"; if ($options->{maketar} and -e $options->{root}) { info "removing tempdir $options->{root}..."; if ($options->{mode} eq 'unshare') { # We don't have permissions to remove the directory outside # the unshared namespace, so we remove it here. # Since this is still inside the unshared namespace, there is # no risk of removing anything important. $pid = get_unshare_cmd { remove_tree($options->{root}, {error => \my $err}); if (@$err) { for my $diag (@$err) { my ($file, $message) = %$diag; if ($file eq '') { warning "general error: $message"; } else { warning "problem unlinking $file: $message"; } } } } \@idmap; waitpid $pid, 0; $? 
== 0 or error "remove_tree failed";
        } elsif (any { $_ eq $options->{mode} } ('root', 'fakechroot', 'proot', 'chrootless')) {
            # without unshare, we use the system's rm to recursively remove the
            # temporary directory just to make sure that we do not accidentally
            # remove more than we should by using --one-file-system.
            #
            # --interactive=never is needed when in proot mode the
            # write-protected file /etc/apt/apt.conf.d/01autoremove-kernels is to
            # be removed.
            0 == system('rm', '--interactive=never', '--recursive', '--preserve-root', '--one-file-system', $options->{root}) or error "rm failed: $!";
        } else {
            error "unknown mode: $options->{mode}";
        }
    }

    if ($got_signal) {
        $exitstatus = 1;
    }

    exit $exitstatus;
}

main();

__END__

=head1 NAME

mmdebstrap - multi-mirror Debian chroot creation

=head1 SYNOPSIS

B<mmdebstrap> [B<OPTION...>] [I<SUITE> [I<TARGET> [I<MIRROR>...]]]

=head1 DESCRIPTION

B<mmdebstrap> creates a Debian chroot of I<SUITE> into I<TARGET> from one or
more I<MIRROR>s.  It is meant as an alternative to the debootstrap tool (see
section B<DEBOOTSTRAP>).  In contrast to debootstrap it uses apt to resolve
dependencies and is thus able to use more than one mirror and resolve more
complex dependencies.

If no I<MIRROR> option is provided, L<http://deb.debian.org/debian> is used,
except if data was given on standard input, in which case the lines read from
there are used as the content of the chroot's sources.list file.  If I<SUITE>
is a stable release name and no I<MIRROR> is specified, then mirrors for
updates and security are automatically added.

If a I<MIRROR> option starts with "deb " or "deb-src " then it is used as a
one-line-style format entry for apt's sources.list inside the chroot.  If a
I<MIRROR> option contains a "://" then it is interpreted as a mirror URI and
the apt line inside the chroot is assembled as "deb [arch=A] B C D" where A is
the host's native architecture, B is the I<MIRROR>, C is the given I<SUITE>
and D is the components given via B<--components> (defaults to "main").  If a
I<MIRROR> option happens to be an existing file, then its contents are pasted
into the chroot's sources.list.  This can be used to supply a deb822 style
sources.list.  If I<MIRROR> is C<-> then standard input is pasted into the
chroot's sources.list.  If there was data on standard input but no C<->
mirror was listed, the lines read from standard input will be appended to
the end of the chroot's sources.list.

More than one mirror can be specified and they are appended to the chroot's
sources.list in the given order.  If any mirror contains a https URI, then
the packages apt-transport-https and ca-certificates will be installed inside
the chroot.  If any mirror contains a tor+xxx URI, then the apt-transport-tor
package will be installed inside the chroot.

The optional I<TARGET> argument can either be the path to a directory, the
path to a tarball filename or C<->.  If I<TARGET> ends with C<.tar>, or with
any of the filename extensions listed in the section B<COMPRESSION>, then
I<TARGET> will be interpreted as a path to a tarball filename.  If I<TARGET>
is the path to a tarball filename or if I<TARGET> is C<-> or if no I<TARGET>
was specified, B<mmdebstrap> will create a temporary chroot directory in
C<$TMPDIR> or F</tmp>.  If I<TARGET> is the path to a tarball filename,
B<mmdebstrap> will create a tarball of that directory and store it as
I<TARGET>, optionally applying a compression algorithm as indicated by its
filename extension.  If I<TARGET> is C<-> or if no I<TARGET> was specified,
then an uncompressed tarball of that directory will be sent to standard
output.  If I<TARGET> does not end in C<.tar> or with any of the filename
extensions listed in the section B<COMPRESSION>, then I<TARGET> will be
interpreted as the path to a directory.  If the directory already exists, it
must either be empty or only contain an empty C<lost+found> directory.
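For example, the following invocations illustrate some of the I<TARGET> and
I<MIRROR> forms described above.  The file and directory names are only
placeholders, and compressed output such as C<.tar.gz> requires the matching
compressor to be installed (see the section B<COMPRESSION>):

    # gzip-compressed tarball of a buildd chroot, without superuser privileges
    mmdebstrap --variant=buildd unstable buildd-chroot.tar.gz

    # directory output (root mode) with two one-line-style apt entries
    sudo mmdebstrap unstable ./unstable-chroot \
        "deb http://deb.debian.org/debian unstable main" \
        "deb http://deb.debian.org/debian experimental main"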
If a directory is chosen as output in any other mode than B<root>, then its
contents will have wrong ownership information and special device files will
be missing.

The I<SUITE> may be a valid release code name (eg, sid, stretch, jessie) or a
symbolic name (eg, unstable, testing, stable, oldstable).  Any suite name that
works with apt on the given mirror will work.  If no I<SUITE> was specified,
then a single I<MIRROR> C<-> is added and thus the information of the desired
suite has to come from standard input as part of a valid apt sources.list
file.

All status output is printed to standard error unless B<--logfile> is used to
redirect it to a file or B<--quiet> or B<--silent> is used to suppress any
output on standard error.  Help and version information will be printed to
standard error with the B<--help> and B<--version> options, respectively.
Otherwise, an uncompressed tarball might be sent to standard output if
I<TARGET> is C<-> or if no I<TARGET> was specified.

=head1 OPTIONS

Options are case insensitive.  Short options may be bundled.  Long options
require a double dash and may be abbreviated to uniqueness.

=over 8

=item B<-h,--help>

Print this help text and exit.

=item B<--version>

Print the B<mmdebstrap> version and exit.

=item B<--variant>=I<name>

Choose which package set to install.  Valid variant I<name>s are B<extract>,
B<custom>, B<essential>, B<apt>, B<required>, B<minbase>, B<buildd>,
B<important>, B<debootstrap>, B<->, and B<standard>.  The default variant is
B<important>.  See the section B<VARIANTS> for more information.

=item B<--mode>=I<name>

Choose how to perform the chroot operation and create a filesystem with
ownership information different from the current user.  Valid mode I<name>s
are B<auto>, B<sudo>, B<root>, B<unshare>, B<fakechroot>, B<proot> and
B<chrootless>.  The default mode is B<auto>.  See the section B<MODES> for
more information.

=item B<--aptopt>=I<option|file>