Background: I am exploring how to copy an ordinary LVM-on-LUKS Debian 9 ("Stretch") installation from a thumb drive (the "source drive") to a ZFS-formatted drive (the "target drive"), to end up with a ZFS-on-LUKS installation. My process is based on this guide.* I believe the ZFS aspect is irrelevant to the problem I would like help with, but I mention it just in case it matters.
As part of my process, while Stretch is running from the source drive, I mount the target ZFS root (`/`) filesystem at `/mnt`. I then recursively bind:

- `/dev` to `/mnt/dev`
- `/proc` to `/mnt/proc`
- `/sys` to `/mnt/sys`

Then I `chroot` into `/mnt`.
(In the future, once chrooted into `/mnt`, I intend to run `update-initramfs`, `update-grub`, etc., to configure the contents of the `/boot` partition.)
Then I exit the chroot, and my troubles begin. I find that I can unmount `/mnt/dev` and `/mnt/proc`, but not `/mnt/sys`. The latter refuses to unmount because it contains `/mnt/sys/fs/cgroup/systemd`, which the system for some reason considers "in use". Reformatting the ZFS drive and rebooting clears the problem, but that greatly slows the iteration speed of my learn-and-document process.
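For what it's worth, one way to check whether any process still references a mountpoint is to scan every `/proc/PID/mounts`. A minimal sketch (the `pins_of` helper name is purely illustrative, not a standard tool):

```shell
#!/bin/sh
# List the PIDs of processes whose mount table still contains the given
# mountpoint, by scanning every /proc/PID/mounts.
# ("pins_of" is an illustrative name, not a standard utility.)
pins_of() {
    target="$1"
    for m in /proc/[0-9]*/mounts; do
        pid="${m#/proc/}"
        pid="${pid%/mounts}"
        # The second whitespace-separated field of each line is the mountpoint.
        if grep -qs " $target " "$m"; then
            printf '%s\n' "$pid"
        fi
    done
}

# Every live process sees /proc itself, so this prints a list of PIDs;
# a mountpoint referenced by no process prints nothing.
pins_of /proc
```

If this prints nothing for a "busy" mountpoint, the blocker is not an open file in some process but something else, such as mount propagation.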
My questions are:

- How can I unmount `/mnt/sys` after the chroot, without rebooting?
- Is the failure (`umount: /mnt/sys/fs/cgroup/systemd: target is busy`) expected? If not, against which piece of software should I file a bug report: `umount`, cgroups, systemd, the Linux kernel, or something else?
Here is (I think) a minimal working example. (If you have trouble reproducing it and suspect I have missed a step, please let me know.) First, the boilerplate:
# Activate the ZFS kernel module
/sbin/modprobe zfs
# Set variables
BOOT_POOL=bpool
ROOT_POOL=rpool
DIRS_TO_COPY=(boot bin etc home lib lib64 opt root sbin srv usr var)
FILES_TO_COPY=(initrd.img initrd.img.old vmlinuz vmlinuz.old)
VIRTUAL_FILESYSTEM_DIRS=(dev proc sys)
## Partition target drive
# 1MB BIOS boot partition
sgdisk -a2048 -n1:2048:4095 -t1:EF02 $1 -c 1:"bios_boot_partition"
wait
# 510MB partition for /boot ZFS filesystem
sgdisk -a2048 -n2:4096:1052671 -t2:BF07 $1 -c 2:"zfs_boot_partition"
wait
# Remaining drive space, except the last 510MiB in case of future need:
# partition to hold the LUKS container and the root ZFS filesystem
sgdisk -a2048 -n3:1052672:-510M -t3:8300 $1 -c 3:"luks_zfs_root_partition"
wait
# Before proceeding, ensure /dev/disk/by-id/ knows of these new partitions
partprobe
wait
# Create the /boot pool
zpool create -o ashift=12 \
             -O atime=off \
             -O canmount=off \
             -O compression=lz4 \
             -O normalization=formD \
             -O mountpoint=/boot \
             -R /mnt \
             $BOOT_POOL "$1"-part2
wait
# Create the LUKS container for the root pool
cryptsetup luksFormat "$1"-part3 \
--hash sha512 \
--cipher aes-xts-plain64 \
--key-size 512
wait
# Open LUKS container that will contain the root pool
# (assumes $DRIVE_SHORTNAME was set earlier, e.g. DRIVE_SHORTNAME=sdb)
cryptsetup luksOpen "$1"-part3 "$DRIVE_SHORTNAME"3_crypt
wait
# Create the root pool
zpool create -o ashift=12 \
             -O atime=off \
             -O canmount=off \
             -O compression=lz4 \
             -O normalization=formD \
             -O mountpoint=/ \
             -R /mnt \
             $ROOT_POOL /dev/mapper/"$DRIVE_SHORTNAME"3_crypt
wait
# Create ZFS datasets for the root ("/") and /boot filesystems
zfs create -o canmount=noauto -o mountpoint=/ "$ROOT_POOL"/debian
zfs create -o canmount=noauto -o mountpoint=/boot "$BOOT_POOL"/debian
# Mount the root ("/") and /boot ZFS datasets
zfs mount "$ROOT_POOL"/debian
zfs mount "$BOOT_POOL"/debian
# Create datasets for subdirectories
zfs create -o setuid=off "$ROOT_POOL"/home
zfs create -o mountpoint=/root "$ROOT_POOL"/home/root
zfs create -o canmount=off -o setuid=off -o exec=off "$ROOT_POOL"/var
zfs create -o com.sun:auto-snapshot=false "$ROOT_POOL"/var/cache
zfs create "$ROOT_POOL"/var/log
zfs create "$ROOT_POOL"/var/mail
zfs create "$ROOT_POOL"/var/spool
zfs create -o com.sun:auto-snapshot=false -o exec=on "$ROOT_POOL"/var/tmp
zfs create "$ROOT_POOL"/srv
zfs create -o com.sun:auto-snapshot=false -o exec=on "$ROOT_POOL"/tmp
# Set the `bootfs` property. ***TODO: IS THIS CORRECT???***
zpool set bootfs="$ROOT_POOL"/debian "$ROOT_POOL"
# Set correct permission for tmp directories
chmod 1777 /mnt/tmp
chmod 1777 /mnt/var/tmp
And here is the heart of the problem:
# Copy Debian install from source drive to target drive
for i in "${DIRS_TO_COPY[@]}"; do
    rsync --archive --quiet --delete /"$i"/ /mnt/"$i"
done
for i in "${FILES_TO_COPY[@]}"; do
    cp -a /"$i" /mnt/
done
for i in "${VIRTUAL_FILESYSTEM_DIRS[@]}"; do
    # Make mountpoints for virtual filesystems on target drive
    mkdir /mnt/"$i"
    # Recursively bind the virtual filesystems from source environment to the
    # target. N.B. This is using `--rbind`, not `--bind`.
    mount --rbind /"$i" /mnt/"$i"
done
# `chroot` into the target environment
chroot /mnt /bin/bash --login
# (Manually exit from the chroot)
# Delete copied files
for i in "${DIRS_TO_COPY[@]}" "${FILES_TO_COPY[@]}"; do
    rm -r /mnt/"$i"
done
# Remove recursively bound virtual filesystems from target
for i in "${VIRTUAL_FILESYSTEM_DIRS[@]}"; do
    # First unmount them
    umount --recursive --verbose --force /mnt/"$i" || sleep 0
    wait
    # Then delete their mountpoints
    rmdir /mnt/"$i"
    wait
done
At that last step, I get:
umount: /mnt/sys/fs/cgroup/systemd: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)
In case it helps: `findmnt` shows the entire `sys` tree mounted twice: once at `/sys`, and once at `/mnt/sys`.
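That double listing is a clue about mount propagation, which can be inspected directly. A sketch that reads the kernel's own record (field 5 of `/proc/self/mountinfo` is the mountpoint; field 7 holds optional tags such as `shared:N`); `findmnt -o TARGET,PROPAGATION /sys` shows the same information if util-linux is available:

```shell
#!/bin/sh
# Print the propagation tag of the /sys mount straight from the kernel.
# "shared:N" means new mounts created under /sys (such as the cgroup tree)
# propagate to every peer, which is why they appear under /mnt/sys as well.
# When a mount has no optional fields, field 7 is the "-" separator,
# which we render as "private".
awk '$5 == "/sys" { print $5, ($7 == "-") ? "private" : $7 }' /proc/self/mountinfo
```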
*Debian Jessie Root on ZFS, CC BY-SA 3.0, by Richard Laager and George Melikov.
Answer 1
You need to add `mount --make-rslave /mnt/"$i"` after the first mount command, to set the proper propagation flags for those mount points.
They protect the host from changes made inside the chroot environment, and they help prevent blocking situations like yours.
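Concretely, the bind-mount loop from the question would become something like the sketch below (wrapped in a function here only so it can be shown without actually running the mounts, which require root; the function name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the corrected bind-mount step, assuming the same virtual
# filesystem directories (dev, proc, sys) as in the question.
bind_virtual_filesystems() {
    local dirs=(dev proc sys)
    for i in "${dirs[@]}"; do
        mkdir -p /mnt/"$i"
        # Recursively bind the virtual filesystem into the target...
        mount --rbind /"$i" /mnt/"$i"
        # ...then mark the whole subtree as a slave: mount events still
        # flow from the host into /mnt/"$i", but nothing propagates back,
        # and new host-side mounts (e.g. cgroup mounts) no longer pin it.
        mount --make-rslave /mnt/"$i"
    done
}
```

This also protects the host: with `--make-rslave` in place, an accidental `umount` inside the chroot can no longer unmount the host's real `/sys` or `/dev`.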