ZFS volume is not accessible

I am running FreeNAS-11.2-U4.1. The server stores VMware vSphere virtual machines. There are two zvols, Lab and Edari, and both belong to the same pool, SSD-Storage.

The problem is that vSphere cannot mount one of the zvols, Edari, so the virtual machines stored on it are inaccessible. The other one is fine and I can browse its files.

I get this alert in the FreeNAS web UI (I'm not sure it is related to the problem, since the zvol Edari does not belong to that pool):

The volume Pool-1.8SSD state is UNKNOWN: Wed, 14 Aug 2019 05:59:38 GMT

But zpool status shows nothing about that pool:

root@Storage[~]# zpool status
  pool: SSD-Storage
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:48:30 with 0 errors on Wed Aug 14 12:50:59 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    SSD-Storage                                     ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/ec475918-925c-11e9-af9b-f4ce46a6411d  ONLINE       0     0     0
        gptid/f1ed0bd1-925c-11e9-af9b-f4ce46a6411d  ONLINE       0     0     0
        gptid/f796acd9-925c-11e9-af9b-f4ce46a6411d  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Wed Aug  7 03:45:05 2019
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

Here is what happens when I try to import the pool:

root@Storage[~]# zpool import
root@Storage[~]# 
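As far as I understand, a bare `zpool import` only scans the default device nodes, so I'm considering pointing it at the gptid labels explicitly and checking for destroyed pools. I haven't run these yet; this is just what I plan to try (both options are documented in zpool(8)):

```shell
# Search for importable pools using the gptid device nodes explicitly.
# -d tells zpool which directory to scan for vdev labels; this only
# lists candidates, it does not import or modify anything.
zpool import -d /dev/gptid

# A pool that was marked destroyed would only show up with -D.
zpool import -D
```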

The pool is not even listed here. How can ZFS forget a pool so completely, as if it never existed? I searched the forums and found this can be caused by using RAID, but I don't use RAID. Here is what gpart shows:

root@Storage[~]# gpart show
=>       40  488326960  da0  GPT  (233G)
         40       1024    1  freebsd-boot  (512K)
       1064  488308736    2  freebsd-zfs  (233G)
  488309800      17200       - free -  (8.4M)

=>        40  1953459552  da1  GPT  (931G)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1949265160    2  freebsd-zfs  (929G)

=>        40  1953459552  da2  GPT  (931G)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1949265160    2  freebsd-zfs  (929G)

=>        40  1953459552  da3  GPT  (931G)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  1949265160    2  freebsd-zfs  (929G)

I found this in /var/log/debug.log:

Aug 14 12:40:19 Storage uwsgi: [storage.models:123] Exception on retrieving disks for Pool-1.8SSD: list index out of range
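If it matters: my understanding from forum posts is that the FreeNAS middleware keeps its own volume list in a SQLite config database, so the Pool-1.8SSD alert and this exception may come from a stale entry there rather than from ZFS itself. A read-only check might look like this (the database path and table name are assumptions on my part, taken from forum posts, so treat them with care):

```shell
# Inspect the middleware's volume table for leftover entries (read-only).
# /data/freenas-v1.db and the storage_volume table are assumptions,
# not something I have verified on this box.
sqlite3 /data/freenas-v1.db "SELECT id, vol_name FROM storage_volume;"
```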

Here is the output of zfs list:

root@Storage[~]# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
SSD-Storage                        1.76T  88.0G   117K  /mnt/SSD-Storage
SSD-Storage/Cload                  60.9G   149G  74.6K  -
SSD-Storage/Edari                   660G   571G   178G  -
SSD-Storage/Lab                    1.04T   923G   232G  -
SSD-Storage/iocage                  858K  88.0G   155K  /mnt/SSD-Storage/iocage
SSD-Storage/iocage/download         117K  88.0G   117K  /mnt/SSD-Storage/iocage/download
SSD-Storage/iocage/images           117K  88.0G   117K  /mnt/SSD-Storage/iocage/images
SSD-Storage/iocage/jails            117K  88.0G   117K  /mnt/SSD-Storage/iocage/jails
SSD-Storage/iocage/log              117K  88.0G   117K  /mnt/SSD-Storage/iocage/log
SSD-Storage/iocage/releases         117K  88.0G   117K  /mnt/SSD-Storage/iocage/releases
SSD-Storage/iocage/templates        117K  88.0G   117K  /mnt/SSD-Storage/iocage/templates
freenas-boot                        760M   224G    64K  none
freenas-boot/ROOT                   760M   224G    29K  none
freenas-boot/ROOT/Initial-Install     1K   224G   756M  legacy
freenas-boot/ROOT/default           760M   224G   756M  legacy

One last thing: I keep finding this line in /var/log/messages:

ctld: connect(2) failed for 172.19.20.11: Connection refused

172.19.20.11 is my FreeNAS server.
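For that ctld message, my plan is to first confirm whether the iSCSI target daemon is actually running and listening, and what targets it thinks it is serving. These are standard FreeBSD commands; I haven't yet correlated their output with the error:

```shell
# Is ctld running, and is anything listening on the iSCSI port (3260)?
service ctld status
sockstat -4 -l | grep 3260

# List the ports and LUNs as the CAM target layer sees them.
ctladm portlist
ctladm devlist
```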

Could you help me figure out what is wrong with the zvol Edari?
