ZFS datasets disappear on reboot

I installed ZFS (0.6.5) on CentOS 7 and created a zpool. Everything works fine, except that the datasets disappear on reboot.
I have been trying to debug this with the help of various online resources and blogs, but could not get the result I wanted.
After a reboot, zfs list gives me "no datasets available" and zpool list gives me "no pools available". After a lot of searching online, I found that I can import the pool manually from the cache file with zpool import -c cachefile, but I still have to run zpool set cachefile=/etc/zfs/zpool.cache pool before the reboot so that the pool can be imported later, after the reboot.
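
For reference, the manual workaround described above boils down to these two commands (pool stands in for the actual pool name):

    # before rebooting: record the pool in the cache file
    zpool set cachefile=/etc/zfs/zpool.cache pool

    # after rebooting: import every pool listed in the cache file
    zpool import -c /etc/zfs/zpool.cache -a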

Here is the output of systemctl status zfs-import-cache:

zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
   Active: inactive (dead)

cat /etc/sysconfig/zfs

# ZoL userland configuration.

# Run `zfs mount -a` during system start?
ZFS_MOUNT='yes'

# Run `zfs unmount -a` during system stop?
ZFS_UNMOUNT='yes'

# Run `zfs share -a` during system start?
# nb: The shareiscsi, sharenfs, and sharesmb dataset properties.
ZFS_SHARE='yes'

# Run `zfs unshare -a` during system stop?
ZFS_UNSHARE='yes'

# Specify specific path(s) to look for device nodes and/or links for the
# pool import(s). See zpool(8) for more information about this variable.
# It supersedes the old USE_DISK_BY_ID which indicated that it would only
# try '/dev/disk/by-id'.
# The old variable will still work in the code, but is deprecated.
#ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

# Should the datasets be mounted verbosely?
# A mount counter will be used when mounting if set to 'yes'.
VERBOSE_MOUNT='no'

# Should we allow overlay mounts?
# This is standard in Linux, but not ZFS which comes from Solaris where this
# is not allowed).
DO_OVERLAY_MOUNTS='no'

# Any additional option to the 'zfs mount' command line?
# Include '-o' for each option wanted.
MOUNT_EXTRA_OPTIONS=""

# Build kernel modules with the --enable-debug switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG='no'

# Build kernel modules with the --enable-debug-dmu-tx switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'

# Keep debugging symbols in kernel modules?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_DISABLE_STRIP='no'

# Wait for this many seconds in the initrd pre_mountroot?
# This delays startup and should be '0' on most systems.
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'

# Wait for this many seconds in the initrd mountroot?
# This delays startup and should be '0' on most systems. This might help on
# systems which have their ZFS root on a USB disk that takes just a little
# longer to be available
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_POST_MODPROBE_SLEEP='0'

# List of additional datasets to mount after the root dataset is mounted?
#
# The init script will use the mountpoint specified in the 'mountpoint'
# property value in the dataset to determine where it should be mounted.
#
# This is a space separated list, and will be mounted in the order specified,
# so if one filesystem depends on a previous mountpoint, make sure to put
# them in the right order.
#
# It is not necessary to add filesystems below the root fs here. It is
# taken care of by the initrd script automatically. These are only for
# additional filesystems needed. Such as /opt, /usr/local which is not
# located under the root fs.
# Example: If root FS is 'rpool/ROOT/rootfs', this would make sense.
#ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/usr rpool/ROOT/var"

# List of pools that should NOT be imported at boot?
# This is a space separated list.
#ZFS_POOL_EXCEPTIONS="test2"

# Optional arguments for the ZFS Event Daemon (ZED).
# See zed(8) for more information on available options.
#ZED_ARGS="-M"

I am not sure whether this is a known issue and, if so, whether there is a workaround. Perhaps there is a simple way to keep the datasets across reboots, ideally without the overhead of a cache file.

Answer 1

Make sure the zfs service (target) is enabled. It is what handles pool import/export at startup and shutdown.

zfs.target loaded active active ZFS startup target
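
If the target is not active, a quick check and enable could look like this (a sketch; on some ZoL versions the import units are static and are enabled via systemctl preset instead, as shown in Answer 3 below):

    # the target should show up as loaded/active
    systemctl list-units --type target | grep zfs

    # enable the ZFS startup target if it is not already enabled
    systemctl enable zfs.target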

You should never have to fight with this. If you get the chance, run an update on your zfs distribution, as I know the startup services have been improved over the last few releases:

[root@zfs2 ~]# rpm -qi zfs
Name        : zfs
Version     : 0.6.5.2
Release     : 1.el7.centos

Answer 2

OK, so the pool is there, which means the problem is with your zfs.cache: it is not persistent, and that is why it loses its configuration when you reboot. I would suggest running:

      zpool import zfsPool 
      zpool list 

and checking whether it is available. Reboot the server and see whether it comes back; if it does not, perform the same steps again and run:

      zpool scrub zfsPool

just to make sure your pool and everything else are OK.

Please also post the contents of:

      /etc/default/zfs.conf
      /etc/init/zpool-import.conf

Alternatively, if you are looking for a workaround, you can set it up as follows.

Change the value from 1 to 0 in:

    /etc/init/zpool-import.conf

and add the following to your /etc/rc.local:

    zfs mount -a

That did the trick.
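
Put together, a minimal /etc/rc.local along these lines should do it (a sketch; zfsPool is the example pool name used above):

    #!/bin/bash
    # Import the pool by name (scans the default device paths),
    # then mount every ZFS dataset.
    zpool import zfsPool
    zfs mount -a

Note that on CentOS 7, /etc/rc.local (really /etc/rc.d/rc.local) must be executable for rc-local.service to run it.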

Answer 3

I also ran into ZFS disappearing after a reboot, on CentOS 7.3 with ZFS 0.6.5.9. The pool only came back after re-importing it (zpool import zfspool), and only until the next reboot.

This is the command that worked for me (making the import persist across reboots):

systemctl preset zfs-import-cache zfs-import-scan zfs-mount zfs-share zfs-zed zfs.target

(Found at: https://github.com/zfsonlinux/zfs/wiki/RHEL-%26-CentOS)
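
For context, systemctl preset applies the vendor's enable/disable policy for each of the listed units; with the usual ZoL preset files this should be roughly equivalent to the following (an assumption, since preset policies can differ between distributions):

    systemctl enable zfs-import-cache zfs-mount zfs-share zfs-zed zfs.target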

Answer 4

In my case, ZFS could not import the zpool because it lived on a cloud persistent volume that was not physically attached to the machine. I guess the network volume becomes available somewhat later in the boot process than expected.

Running systemctl status zfs-import-cache.service after booting showed the following message:

● zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2021-09-07 18:37:28 UTC; 3 months 17 days ago
       Docs: man:zpool(8)
    Process: 780 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
   Main PID: 780 (code=exited, status=1/FAILURE)

Sep 07 18:37:26 ingress-zfs-2 systemd[1]: Starting Import ZFS pools by cache file...
Sep 07 18:37:28 ingress-zfs-2 zpool[780]: cannot import 'data': no such pool or dataset
Sep 07 18:37:28 ingress-zfs-2 zpool[780]:         Destroy and re-create the pool from
Sep 07 18:37:28 ingress-zfs-2 zpool[780]:         a backup source.
Sep 07 18:37:28 ingress-zfs-2 systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Sep 07 18:37:28 ingress-zfs-2 systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Sep 07 18:37:28 ingress-zfs-2 systemd[1]: Failed to start Import ZFS pools by cache file.

The solution was to patch the zfs-import-cache.service unit file to include a dependency on remote-fs.target:

[Unit]
...
After=remote-fs.target
...

On Ubuntu 20.04 this file is located at: /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service

I believe specifying After=remote-fs.target is equivalent to using the _netdev option in /etc/fstab (see: https://unix.stackexchange.com/a/226453/78327).
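
Rather than editing the vendor unit in place, the same ordering can also be added via a systemd drop-in, which survives package upgrades (a sketch using systemd's standard override mechanism):

    # opens an editor on
    # /etc/systemd/system/zfs-import-cache.service.d/override.conf
    systemctl edit zfs-import-cache.service

with the following contents, then run systemctl daemon-reload:

    [Unit]
    After=remote-fs.target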
