After zpool remove, the removed disks show up as indirect-0, indirect-1, indirect-2 and so on.
- What are these?
- How can I get rid of them?
- Will they cause any problems?
Proof:
Start with two disks. One of them is removed, and then a new disk is added to the pool.
# truncate -s 1G raid{0,1}_{1..4}.img
# ls /tmp/test_zfs_remove/
raid0_1.img raid0_2.img raid0_3.img raid0_4.img raid1_1.img raid1_2.img raid1_3.img raid1_4.img
# zpool create jbod /tmp/test_zfs_remove/raid0_{1,2}.img
# zpool status -v ; zpool list -v
pool: jbod
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
jbod ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_1.img ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_2.img ONLINE 0 0 0
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
jbod 1.88G 110K 1.87G - - 0% 0% 1.00x ONLINE -
  /tmp/test_zfs_remove/raid0_1.img 960M 54K 960M - - 0% 0.00% - ONLINE
  /tmp/test_zfs_remove/raid0_2.img 960M 55.5K 960M - - 0% 0.00% - ONLINE
Now we zpool remove disk 2. Notice that indirect-1 appears.
# zpool remove jbod /tmp/test_zfs_remove/raid0_2.img
# zpool status -v ; zpool list -v
pool: jbod
state: ONLINE
remove: Removal of vdev 1 copied 49.5K in 0h0m, completed on Fri Jun 10 02:03:59 2022
        144 memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
jbod ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_1.img ONLINE 0 0 0
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
jbod 960M 148K 960M - - 0% 0% 1.00x ONLINE -
  /tmp/test_zfs_remove/raid0_1.img 960M 148K 960M - - 0% 0.01% - ONLINE
  indirect-1 - - - - - - - - ONLINE
Then we zpool add disk 3. indirect-1 does not go away.
# zpool add jbod /tmp/test_zfs_remove/raid0_3.img
# zpool status -v ; zpool list -v
pool: jbod
state: ONLINE
remove: Removal of vdev 1 copied 49.5K in 0h0m, completed on Fri Jun 10 02:03:59 2022
        144 memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
jbod ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_1.img ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_3.img ONLINE 0 0 0
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
jbod 1.88G 222K 1.87G - - 0% 0% 1.00x ONLINE -
  /tmp/test_zfs_remove/raid0_1.img 960M 222K 960M - - 0% 0.02% - ONLINE
  indirect-1 - - - - - - - - ONLINE
  /tmp/test_zfs_remove/raid0_3.img 960M 0 960M - - 0% 0.00% - ONLINE
Even if we zpool remove disk 3 and immediately zpool add disk 3 again (the very same disk), another entry, indirect-2, appears.
# zpool remove jbod /tmp/test_zfs_remove/raid0_3.img
# zpool add jbod /tmp/test_zfs_remove/raid0_3.img
# zpool status -v ; zpool list -v
pool: jbod
state: ONLINE
remove: Removal of vdev 2 copied 11.5K in 0h0m, completed on Fri Jun 10 02:09:35 2022
        240 memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
jbod ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_1.img ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_3.img ONLINE 0 0 0
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
jbod 1.88G 279K 1.87G - - 0% 0% 1.00x ONLINE -
  /tmp/test_zfs_remove/raid0_1.img 960M 260K 960M - - 0% 0.02% - ONLINE
  indirect-1 - - - - - - - - ONLINE
  indirect-2 - - - - - - - - ONLINE
  /tmp/test_zfs_remove/raid0_3.img 960M 18.5K 960M - - 0% 0.00% - ONLINE
More worryingly, if a disk is faulted (simulated with zpool offline -f), the DEGRADED state does not go away even after the disk is removed.
# zpool add jbod /tmp/test_zfs_remove/raid0_4.img
# zpool offline -f jbod /tmp/test_zfs_remove/raid0_4.img
# zpool remove jbod /tmp/test_zfs_remove/raid0_4.img
# zpool status -v ; zpool list -v
pool: jbod
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
remove: Removal of vdev 4 copied 19K in 0h0m, completed on Fri Jun 10 02:16:49 2022
        336 memory used for removed device mappings
config:
NAME STATE READ WRITE CKSUM
jbod DEGRADED 0 0 0
  /tmp/test_zfs_remove/raid0_1.img ONLINE 0 0 0
  /tmp/test_zfs_remove/raid0_3.img ONLINE 0 0 0
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
jbod 1.88G 308K 1.87G - - 0% 0% 1.00x DEGRADED -
  /tmp/test_zfs_remove/raid0_1.img 960M 154K 960M - - 0% 0.01% - ONLINE
  indirect-1 - - - - - - - - ONLINE
  indirect-2 - - - - - - - - ONLINE
  /tmp/test_zfs_remove/raid0_3.img 960M 154K 960M - - 0% 0.01% - ONLINE
  indirect-4 - - - - - - - - DEGRADED
Side note:
As long as the disk is not degraded, zpool remove does not cause data loss. Even though this is a RAID0 / JBOD setup with no redundancy, the data is copied onto the remaining disks when zpool remove is issued. If the remaining disks do not have enough free space to hold all the data, the command fails with a "cannot remove (disk): out of space" error and the disk cannot be removed. Using zpool replace (instead of zpool remove followed by zpool add) does not produce the strange indirect-X entries, but it has a drawback: the new disk must be the same size as or larger than the disk being replaced. With zpool remove followed by zpool add, by contrast, we can remove a larger disk and then add a smaller one.
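For comparison, here is a minimal sketch of the replace-based path described above. The choice of raid0_4.img as the replacement target is purely illustrative (any fresh image at least as large as the disk being replaced works the same way), and as noted above no indirect-X entry is left behind:
# zpool replace jbod /tmp/test_zfs_remove/raid0_1.img /tmp/test_zfs_remove/raid0_4.img
# zpool status -v ; zpool list -v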
One more side demonstration: zpool detach followed by zpool attach on a mirror does not produce the strange indirect-X entries.
# zpool destroy jbod
# zpool create mirr mirror /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_2.img
# zpool detach mirr /tmp/test_zfs_remove/raid1_2.img
# zpool attach mirr /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_3.img
# zpool detach mirr /tmp/test_zfs_remove/raid1_3.img
# zpool attach mirr /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_3.img
# zpool attach mirr /tmp/test_zfs_remove/raid1_1.img /tmp/test_zfs_remove/raid1_4.img
# zpool offline -f mirr /tmp/test_zfs_remove/raid1_4.img
# zpool status -v ; zpool list -v
pool: mirr
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
scan: resilvered 291K in 00:00:00 with 0 errors on Fri Jun 10 02:47:06 2022
config:
NAME STATE READ WRITE CKSUM
mirr DEGRADED 0 0 0
  mirror-0 DEGRADED 0 0 0
    /tmp/test_zfs_remove/raid1_1.img ONLINE 0 0 0
    /tmp/test_zfs_remove/raid1_3.img ONLINE 0 0 0
    /tmp/test_zfs_remove/raid1_4.img FAULTED 0 0 0 external device fault
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
mirr 960M 200K 960M - - 0% 0% 1.00x DEGRADED -
  mirror-0 960M 200K 960M - - 0% 0.02% - DEGRADED
    /tmp/test_zfs_remove/raid1_1.img - - - - - - - - ONLINE
    /tmp/test_zfs_remove/raid1_3.img - - - - - - - - ONLINE
    /tmp/test_zfs_remove/raid1_4.img - - - - - - - - FAULTED
# zpool detach mirr /tmp/test_zfs_remove/raid1_4.img
# zpool status -v ; zpool list -v
pool: mirr
state: ONLINE
scan: resilvered 291K in 00:00:00 with 0 errors on Fri Jun 10 02:47:06 2022
config:
NAME STATE READ WRITE CKSUM
mirr ONLINE 0 0 0
  mirror-0 ONLINE 0 0 0
    /tmp/test_zfs_remove/raid1_1.img ONLINE 0 0 0
    /tmp/test_zfs_remove/raid1_3.img ONLINE 0 0 0
errors: No known data errors
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
mirr 960M 196K 960M - - 0% 0% 1.00x ONLINE -
  mirror-0 960M 196K 960M - - 0% 0.01% - ONLINE
    /tmp/test_zfs_remove/raid1_1.img - - - - - - - - ONLINE
    /tmp/test_zfs_remove/raid1_3.img - - - - - - - - ONLINE