The ZFS pool was created with the following command:
$ sudo zpool create -f data raidz2 sda sdb sdc sdd
$ sudo zpool status
pool: data
state: ONLINE
scan: none requested
config:
    NAME          STATE     READ WRITE CKSUM
    data          ONLINE       0     0     0
      raidz2-0    ONLINE       0     0     0
        sda       ONLINE       0     0     0
        sdb       ONLINE       0     0     0
        sdc       ONLINE       0     0     0
        sdd       ONLINE       0     0     0
errors: No known data errors
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55M 1 loop /snap/core18/1880
loop1 7:1 0 71.3M 1 loop /snap/lxd/16099
loop2 7:2 0 29.9M 1 loop /snap/snapd/8542
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 2.7T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 2.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 2.7T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 2.7T 0 disk
├─sdd1 8:49 0 2.7T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 1 961M 0 disk
├─sde1 8:65 1 914M 0 part
└─sde2 8:66 1 3.9M 0 part
sdf 8:80 0 111.8G 0 disk
├─sdf1 8:81 0 512M 0 part /boot/efi
└─sdf2 8:82 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
nvme0n1 259:0 0 111.8G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
└─nvme0n1p2 259:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
Then, after a reboot, the /dev/sd* names were reordered and zpool status showed no pools available:
$ zpool status
no pools available
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55M 1 loop /snap/core18/1880
loop1 7:1 0 71.3M 1 loop /snap/lxd/16099
loop2 7:2 0 29.9M 1 loop /snap/snapd/8542
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
sdb 8:16 1 961M 0 disk
├─sdb1 8:17 1 914M 0 part
└─sdb2 8:18 1 3.9M 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 2.7T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 2.7T 0 disk
├─sdd1 8:49 0 2.7T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 2.7T 0 disk
├─sde1 8:65 0 2.7T 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 2.7T 0 disk
├─sdf1 8:81 0 2.7T 0 part
└─sdf9 8:89 0 8M 0 part
nvme0n1 259:0 0 111.8G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
└─nvme0n1p2 259:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
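Before recreating anything, it is worth checking whether the ZFS vdev labels survived the rename. A minimal diagnostic sketch, assuming the pool metadata on the disks was not wiped (zdb -l prints the on-disk vdev labels of a device):

# Print the ZFS vdev labels on one of the renamed member partitions;
# if labels are shown, the pool still exists on disk and can be imported
$ sudo zdb -l /dev/sdc1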
Answer 1
Creating a partition on /dev/sda (i.e. /dev/sda1; when partitioning, use 't' and select 48, Solaris /usr & Apple ZFS — not sure whether that matters here) and then using the UUID of /dev/sda1 as the ZFS member seems to solve the problem:
Disk /dev/sda: 2.75 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: Hitachi HUA72303
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E22EFAF2-BB4B-459A-A22F-D772E84C3C9E
Device Start End Sectors Size Type
/dev/sda1 2048 5860533134 5860531087 2.7T Solaris /usr & Apple ZFS
$ lsblk --ascii -o NAME,PARTUUID,LABEL,PATH,FSTYPE
NAME     PARTUUID                              LABEL  PATH       FSTYPE
sda                                                   /dev/sda
`-sda1   4cad5b3d-7348-ef4b-808e-2beace6e9a21         /dev/sda1
Repeat the partitioning for sdb, sdc, and sdd (a scripted sketch of doing this non-interactively is shown below).
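Presumably the interactive fdisk session above can be replaced with sgdisk, where BF01 is sgdisk's type code for Solaris /usr & Apple ZFS (verify the code with sgdisk -L on your system):

# WARNING: destroys any existing data on these disks
for d in sdb sdc sdd; do
    sudo sgdisk --zap-all /dev/$d           # wipe old partition tables
    sudo sgdisk -n 1:0:0 -t 1:BF01 /dev/$d  # one whole-disk partition, ZFS type
done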
This results in:
$ ls -l /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Apr 17 01:55 361fad97-fa34-604b-8733-3c08147ab32e -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr 17 01:53 4cad5b3d-7348-ef4b-808e-2beace6e9a21 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 17 01:56 88445cda-d61e-ea4c-9f73-4f151996f4a0 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Apr 17 01:55 fda16bae-fe77-4b4e-9eae-8fdecd2bfd80 -> ../../sdc1
Then create the ZFS pool using the PARTUUIDs:
$ sudo zpool create data \
    /dev/disk/by-partuuid/4cad5b3d-7348-ef4b-808e-2beace6e9a21 \
    /dev/disk/by-partuuid/361fad97-fa34-604b-8733-3c08147ab32e \
    /dev/disk/by-partuuid/fda16bae-fe77-4b4e-9eae-8fdecd2bfd80 \
    /dev/disk/by-partuuid/88445cda-d61e-ea4c-9f73-4f151996f4a0
igdvs@srv-bk-vm:~$ sudo zpool status
pool: data
state: ONLINE
scan: none requested
config:
    NAME                                      STATE     READ WRITE CKSUM
    data                                      ONLINE       0     0     0
      4cad5b3d-7348-ef4b-808e-2beace6e9a21    ONLINE       0     0     0
      361fad97-fa34-604b-8733-3c08147ab32e    ONLINE       0     0     0
      fda16bae-fe77-4b4e-9eae-8fdecd2bfd80    ONLINE       0     0     0
      88445cda-d61e-ea4c-9f73-4f151996f4a0    ONLINE       0     0     0
errors: No known data errors
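Note that, unlike the original raidz2 pool, the create command above builds a plain stripe with no redundancy. If the original layout is wanted, the same by-partuuid paths should presumably work with the raidz2 keyword (an untested sketch):

$ sudo zpool create data raidz2 \
    /dev/disk/by-partuuid/4cad5b3d-7348-ef4b-808e-2beace6e9a21 \
    /dev/disk/by-partuuid/361fad97-fa34-604b-8733-3c08147ab32e \
    /dev/disk/by-partuuid/fda16bae-fe77-4b4e-9eae-8fdecd2bfd80 \
    /dev/disk/by-partuuid/88445cda-d61e-ea4c-9f73-4f151996f4a0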
After rebooting Ubuntu, the /dev/sd* names are reordered again, but the zpool is unaffected because it sits on PARTUUIDs:
$ sudo zpool status
[sudo] password for igdvs:
pool: data
state: ONLINE
scan: none requested
config:
    NAME                                      STATE     READ WRITE CKSUM
    data                                      ONLINE       0     0     0
      4cad5b3d-7348-ef4b-808e-2beace6e9a21    ONLINE       0     0     0
      361fad97-fa34-604b-8733-3c08147ab32e    ONLINE       0     0     0
      fda16bae-fe77-4b4e-9eae-8fdecd2bfd80    ONLINE       0     0     0
      88445cda-d61e-ea4c-9f73-4f151996f4a0    ONLINE       0     0     0
errors: No known data errors
igdvs@srv-bk-vm:~$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55M 1 loop /snap/core18/1880
loop1 7:1 0 63.3M 1 loop /snap/core20/1852
loop2 7:2 0 91.9M 1 loop /snap/lxd/24061
loop3 7:3 0 49.9M 1 loop /snap/snapd/18596
loop4 7:4 0 71.3M 1 loop /snap/lxd/16099
loop5 7:5 0 55.6M 1 loop /snap/core18/2721
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
sdb 8:16 1 961M 0 disk
├─sdb1 8:17 1 951M 0 part
└─sdb9 8:25 1 8M 0 part
sdc 8:32 0 2.7T 0 disk
└─sdc1 8:33 0 2.7T 0 part
sdd 8:48 0 2.7T 0 disk
└─sdd1 8:49 0 2.7T 0 part
sde 8:64 0 2.7T 0 disk
└─sde1 8:65 0 2.7T 0 part
sdf 8:80 0 2.7T 0 disk
└─sdf1 8:81 0 2.7T 0 part
nvme0n1 259:0 0 111.8G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
└─nvme0n1p2 259:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
Answer 2
This is a better answer: use /dev/disk/by-id/<serial number of disk>.
$ ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A -> ../../sdd
lrwxrwxrwx 1 root root 9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A -> ../../sdc
lrwxrwxrwx 1 root root 9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA -> ../../sda
lrwxrwxrwx 1 root root 9 Apr 17 02:42 scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA -> ../../sdb
$ sudo zpool create data \
    /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA \
    /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA \
    /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A \
    /dev/disk/by-id/scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A
$ zpool status
pool: data
state: ONLINE
scan: none requested
config:
    NAME                                           STATE     READ WRITE CKSUM
    data                                           ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA    ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA    ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A    ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A    ONLINE       0     0     0
errors: No known data errors
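If a pool already exists under /dev/sd* names, it does not have to be destroyed and recreated. A standard OpenZFS approach (a sketch, assuming the pool is healthy and exportable) is to export it and re-import it while pointing the device search at the by-id directory, which rewrites the recorded vdev paths:

$ sudo zpool export data
$ sudo zpool import -d /dev/disk/by-id data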
After a reboot, the sd* names are reordered again, but the ZFS pool is unaffected:
$ sudo zpool status
[sudo] password for igdvs:
pool: data
state: ONLINE
scan: none requested
config:
    NAME                                           STATE     READ WRITE CKSUM
    data                                           ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2E0XA    ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK2L7NA    ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0371YHK1ME0A    ONLINE       0     0     0
      scsi-SATA_Hitachi_HUA72303_MK0331YHGZHD7A    ONLINE       0     0     0
errors: No known data errors
igdvs@srv-bk-vm:~$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55M 1 loop /snap/core18/1880
loop1 7:1 0 55.6M 1 loop /snap/core18/2721
loop2 7:2 0 71.3M 1 loop /snap/lxd/16099
loop3 7:3 0 49.9M 1 loop /snap/snapd/18596
loop4 7:4 0 63.3M 1 loop /snap/core20/1852
loop5 7:5 0 91.9M 1 loop /snap/lxd/24061
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
└─sda2 8:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 2.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 2.7T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 2.7T 0 disk
├─sdd1 8:49 0 2.7T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 2.7T 0 disk
├─sde1 8:65 0 2.7T 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 1 961M 0 disk
├─sdf1 8:81 1 951M 0 part
└─sdf9 8:89 1 8M 0 part
nvme0n1 259:0 0 111.8G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
└─nvme0n1p2 259:2 0 111.3G 0 part
└─md0 9:0 0 111.2G 0 raid1
└─md0p1 259:3 0 111.2G 0 part /
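A related knob, as an assumption based on the ZFS-on-Linux init scripts (check whether your Ubuntu release actually ships it): /etc/default/zfs has a ZPOOL_IMPORT_PATH variable that controls where devices are searched for when pools are imported at boot:

# /etc/default/zfs (sketch; the path list is colon-separated)
ZPOOL_IMPORT_PATH="/dev/disk/by-id:/dev/disk/by-partuuid"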
Answer 3
While I agree that using by-id names is best, ZFS usually handles disks moving around just fine, since the zpool is imported at boot. I'm not sure why that didn't happen this time. But the quick way to recover from this kind of problem is the zpool import command.
Depending on the exact problem, the import may need the zpool name or various levels of force options; if you run into this situation and found this answer via a search, read the man page.
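For concreteness, a sketch of the recovery commands described above (the flags are from zpool-import(8); use -f only after reading the man page):

# List importable pools without importing anything
$ sudo zpool import

# Import the pool by name, searching a stable-name directory
$ sudo zpool import -d /dev/disk/by-id data

# Last resort: force the import, e.g. after an unclean export
$ sudo zpool import -f data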