Replacing a disk when using FreeBSD ZFS zroot (ZFS on a partition)?

How do I replace a failed disk with a new one when using ZFS?

I have a 4-disk RAIDZ2 pool using zroot. This means ZFS runs on a separate partition rather than using the whole disk. I haven't found any documentation on how to replace a disk in this setup, or the information I did find is outdated. The pool was generated automatically by the installer.

camcontrol device list:

% doas camcontrol devlist -v
scbus0 on mpt0 bus 0:
<>                                 at scbus0 target -1 lun ffffffff ()
scbus1 on ahcich0 bus 0:
<>                                 at scbus1 target -1 lun ffffffff ()
scbus2 on ahcich1 bus 0:
<>                                 at scbus2 target -1 lun ffffffff ()
scbus3 on ahcich2 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus3 target 0 lun 0 (pass0,ada0)
<>                                 at scbus3 target -1 lun ffffffff ()
scbus4 on ahcich3 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus4 target 0 lun 0 (pass1,ada1)
<>                                 at scbus4 target -1 lun ffffffff ()
scbus5 on ahcich4 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus5 target 0 lun 0 (pass2,ada2)
<>                                 at scbus5 target -1 lun ffffffff ()
scbus6 on ahcich5 bus 0:
<SAMSUNG HD204UI 1AQ10001>         at scbus6 target 0 lun 0 (pass3,ada3)
<>                                 at scbus6 target -1 lun ffffffff ()
scbus7 on ahciem0 bus 0:
<AHCI SGPIO Enclosure 1.00 0001>   at scbus7 target 0 lun 0 (pass4,ses0)
<>                                 at scbus7 target -1 lun ffffffff ()
scbus-1 on xpt0 bus 0:
<>                                 at scbus-1 target -1 lun ffffffff (xpt0)

gpart of one of the existing disks:

% gpart show ada0
=>        40  3907029088  ada0  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

zpool status:

% zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 28K in 0h41m with 0 errors on Thu Sep 27 17:58:02 2018
config:

        NAME                      STATE     READ WRITE CKSUM
        zroot                     DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            ada0p3                ONLINE       0     0     0
            ada1p3                ONLINE       0     0     0
            ada2p3                ONLINE       0     0     0
            15120424524672854601  REMOVED      0     0     0  was /dev/ada3p3

errors: No known data errors

Offlining the failed disk:

% doas zpool offline zroot 15120424524672854601

I tried copying the first few GiB from ada0 to ada3 with dd, but after that both zpool attach and zpool replace fail with the error /dev/ada3p3 is part of active pool 'zroot', and even the force flag doesn't help. My guess is that the disk UUIDs are conflicting.

What are the steps to copy/clone the partitions (p1–p3, as on ada0–ada2) onto the new disk (ada3) and replace the failed drive? And which commands does the automated installer run in the first place to create these partitions?

Answer 1

First: remember to offline the old drive, and make sure the new drive is not mounted or in use in any way.

Copy the partition table from the old disk ada0 to the new disk ada3:

% doas gpart backup ada0 | doas gpart restore -F ada3

Now ada3 has the same three partitions as ada0:

% doas gpart show ada3
=>        40  3907029088  ada3  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)
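As for what the installer did originally: the FreeBSD installer's ZFS-on-root setup creates a per-disk layout like the one above with roughly the following gpart commands. This is a hedged sketch reconstructed from the gpart show output above, not the installer's exact invocation; the label names (gptboot3, swap3, zfs3) are assumptions in the installer's usual style.

```shell
# Approximate recreation of the installer's per-disk partitioning.
# DESTRUCTIVE: run only against the new, empty replacement disk.
DISK=ada3                                            # assumed replacement disk
doas gpart create -s gpt ${DISK}                     # fresh GPT table
doas gpart add -a 4k -s 512k -t freebsd-boot -l gptboot3 ${DISK}
doas gpart add -a 1m -s 2g   -t freebsd-swap -l swap3    ${DISK}
doas gpart add -a 1m         -t freebsd-zfs  -l zfs3     ${DISK}
doas gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DISK}
```

In practice the gpart backup | gpart restore pipe above is simpler and guarantees an identical layout, so prefer it for a replacement disk.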

Remove the old ZFS metadata (note the p3 partition):

% doas dd if=/dev/zero of=/dev/ada3p3
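Note that zeroing the entire 1.8T partition with dd takes hours. ZFS keeps its four label copies at the start and end of the partition, so clearing just those is enough; the dedicated tool for this is zpool labelclear, which is part of FreeBSD's base ZFS utilities. A quicker alternative, assuming the stale labels are the only obstacle:

```shell
# Clear stale ZFS labels on the new disk's partition
# instead of zeroing all 1.8T with dd.
doas zpool labelclear -f /dev/ada3p3
```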

Replace the drive (note the p3 partition):

% doas zpool replace -f zroot 15120424524672854601 /dev/ada3p3
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'zroot', you may need to update
boot code on newly attached disk '/dev/ada3p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

Run the suggested command to update the boot code on the new disk:

% doas gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
partcode written to ada3p1
bootcode written to ada3

The UUIDs now differ:

% gpart list ada0 | grep uuid | sort
   rawuuid: 7f842536-bcd0-11e8-b271-00259014958c
   rawuuid: 7fbe27a9-bcd0-11e8-b271-00259014958c
   rawuuid: 7fe24f3e-bcd0-11e8-b271-00259014958c
% gpart list ada3 | grep uuid | sort
   rawuuid: 9c629875-c369-11e8-a2b0-00259014958c
   rawuuid: 9c63d063-c369-11e8-a2b0-00259014958c
   rawuuid: 9c66f76e-c369-11e8-a2b0-00259014958c
% gpart list ada0 | grep efimedia | sort
   efimedia: HD(1,GPT,7f842536-bcd0-11e8-b271-00259014958c,0x28,0x400)
   efimedia: HD(2,GPT,7fbe27a9-bcd0-11e8-b271-00259014958c,0x800,0x400000)
   efimedia: HD(3,GPT,7fe24f3e-bcd0-11e8-b271-00259014958c,0x400800,0xe8a08000)
% gpart list ada3 | grep efimedia | sort
   efimedia: HD(1,GPT,9c629875-c369-11e8-a2b0-00259014958c,0x28,0x400)
   efimedia: HD(2,GPT,9c63d063-c369-11e8-a2b0-00259014958c,0x800,0x400000)
   efimedia: HD(3,GPT,9c66f76e-c369-11e8-a2b0-00259014958c,0x400800,0xe8a08000)

The drive is now resilvering:

% zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Sep 29 01:01:24 2018
        64.7G scanned out of 76.8G at 162M/s, 0h1m to go
        15.7G resilvered, 84.22% done
config:

        NAME                        STATE     READ WRITE CKSUM
        zroot                       DEGRADED     0     0     0
          raidz2-0                  DEGRADED     0     0     0
            ada0p3                  ONLINE       0     0     0
            ada1p3                  ONLINE       0     0     0
            ada2p3                  ONLINE       0     0     0
            replacing-3             OFFLINE      0     0     0
              15120424524672854601  OFFLINE      0     0     0  was /dev/ada3p3/old
              ada3p3                ONLINE       0     0     0

After the resilver:

% zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 18.6G in 0h7m with 0 errors on Sat Sep 29 01:09:22 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada2p3  ONLINE       0     0     0
            ada3p3  ONLINE       0     0     0

errors: No known data errors
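As an optional sanity check once the pool is back ONLINE, a scrub verifies every block's checksum across the rebuilt vdev (this suggestion is mine, not part of the original answer):

```shell
# Verify the rebuilt pool end to end; progress appears
# under "scan:" in zpool status until the scrub finishes.
doas zpool scrub zroot
zpool status zroot
```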
