I'm running a fresh install of Ubuntu 12.04 LTS with ZFS from the PPA.
I've found that when I create a pool it mounts and works fine, but after a reboot it shows as UNAVAIL and I can't find a way to recover it.
Here is the log of a quick test to demonstrate:
root@nas1:~# zpool status
no pools available
root@nas1:~# zpool create data /dev/disk/by-id/scsi-360019b90b24d9300174d28912b1c485d /dev/disk/by-id/scsi-360019b90b24d9300174d28a610419bec
root@nas1:~# zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                                        STATE     READ WRITE CKSUM
        data                                        ONLINE       0     0     0
          scsi-360019b90b24d9300174d28912b1c485d    ONLINE       0     0     0
          scsi-360019b90b24d9300174d28a610419bec    ONLINE       0     0     0

errors: No known data errors
root@nas1:~# shutdown -r now
Broadcast message from root@nas1
(/dev/pts/0) at 10:41 ...
The system is going down for reboot NOW!
root@nas1:~#
login as: root
Server refused our key
root@nas1's password:
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-24-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Wed May 23 10:42:09 BST 2012
System load: 0.48 Users logged in: 0
Usage of /: 6.0% of 55.66GB IP address for eth0: 10.24.0.5
Memory usage: 1% IP address for eth1: 192.168.30.51
Swap usage: 0% IP address for eth2: 192.168.99.41
Processes: 142
Graph this data and manage this system at https://landscape.canonical.com/
Last login: Wed May 23 10:40:06 2012 from 192.168.100.35
root@nas1:~# zpool status
  pool: data
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid. There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from
        a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                                        STATE     READ WRITE CKSUM
        data                                        UNAVAIL      0     0     0  insufficient replicas
          scsi-360019b90b24d9300174d28912b1c485d    UNAVAIL      0     0     0
          scsi-360019b90b24d9300174d28a610419bec    UNAVAIL      0     0     0
root@nas1:~#
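(An aside not in the original log: at this point it is usually worth checking whether the pool can still be seen for import at all. Both forms below are standard zpool commands; the by-id directory is the one this box already uses.)

zpool import                      # scan /dev for pools that could be imported
zpool import -d /dev/disk/by-id   # scan the by-id symlinks instead of /dev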
EDIT
As requested, the output of ls -l /dev/disk/by-id/scsi-*:
root@nas1:~# ls -l /dev/disk/by-id/scsi-*
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28912b1c485d -> ../../sdb
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28a610419bec -> ../../sdc
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28b1031dd786 -> ../../sdd
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28baf7edd45e -> ../../sde
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28c5ea9c6198 -> ../../sdf
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28d1db783151 -> ../../sdg
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28e6c0af4c8e -> ../../sdh
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28eeb7d87669 -> ../../sdi
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28f6ad29d90a -> ../../sdj
lrwxrwxrwx 1 root root 9 May 23 12:03 /dev/disk/by-id/scsi-360019b90b24d9300174d28fca5534028 -> ../../sdk
EDIT
I've just done some further testing. Instead of the ids, I tried using sdb, sdc, etc.:
zpool create data sdb sdc sdd sde
Same result. It creates the pool, but after a reboot it shows as UNAVAIL.
EDIT
As requested, the output of zdb -l /dev/sdb:
~# zdb -l /dev/sdb
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
I ran that test right after creating a fresh pool and got the same result.
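For completeness, here is a quick way to run the same label check across both pool members (a sketch using the by-id paths listed above; on a healthy member each LABEL section would print pool and vdev metadata instead of "failed to unpack"):

for d in /dev/disk/by-id/scsi-360019b90b24d9300174d28912b1c485d \
         /dev/disk/by-id/scsi-360019b90b24d9300174d28a610419bec; do
    echo "== $d"
    zdb -l "$d"
done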
EDIT
I've just tried a fresh install of Ubuntu 11.04 (to rule out a bug in 12.04):
- added the PPA repository
- ran a dist-upgrade, then installed ubuntu-zfs
- ran "zpool create data sdb sdc"
- checked zpool status and the pool was there
- rebooted the server
- checked again, and it was still there
So it's a problem with my 12.04 instance (the rough command sequence for the 11.04 test is sketched below). Tempted to reinstall...
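For reference, the 11.04 test run boiled down to roughly this sequence (a sketch; the PPA name is an assumption and may differ from the one actually used):

add-apt-repository ppa:zfs-native/stable   # assumed PPA name
apt-get update && apt-get dist-upgrade -y
apt-get install -y ubuntu-zfs
zpool create data sdb sdc
zpool status          # pool shows ONLINE
shutdown -r now
# after the reboot:
zpool status          # pool is still ONLINE on 11.04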
Answer 1
It turned out to be a fault with the RAID controller handling the disks. With the controller replaced, everything works fine!
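(A sketch of my own, not part of the answer, for confirming the disks are back to normal after swapping the controller: the labels should now unpack and the pool should be visible for import.)

zdb -l /dev/disk/by-id/scsi-360019b90b24d9300174d28912b1c485d   # LABEL 0-3 should now decode
zpool import          # 'data' should be listed as importable
zpool import data     # bring the pool back online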
Answer 2
Don't panic, just:
cd /path_to_your_disks
zpool import -d . <name_of_your_pool>
In my case the disks live under /disks. In your case it is probably /dev/disk/by-id.
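For this particular box, that would look something like the following (a sketch; the pool name and directory are taken from the question):

zpool import -d /dev/disk/by-id data
zpool status          # should now show the pool with the scsi-* device names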