ZFS checksum errors on Solaris 11 under KVM


Summary: libvirt 5.6.0, QEMU 4.1.1, Linux kernel 5.5.10-200, Fedora Server 31.

Fresh install of Solaris 11.4 (with a Solaris 10 branded zone), raw disk images on XFS (unfortunately I cannot switch to ZFS on the Linux side and pass a ZVOL through to the VM). When I copy a large gzip file onto a ZFS dataset inside the Solaris VM, the zpool picks up a few ZFS checksum errors, and when I gunzip the file, the result is corrupted.

At first the Solaris VM was hosted on a qcow2 virtual disk, and I figured CoW on CoW might not be a good idea, so I switched to raw. In practice that changed nothing.

Any ideas (I have none myself)? The Solaris 11.4 root dataset itself is not corrupted. I also run FreeBSD/ZFS successfully on a similar KVM setup (with a ZVOL there, admittedly, but still on Linux) with no checksum errors.
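One way to narrow down where the bytes change is to verify the gzip container on the guest and compare a strong checksum against the scp source. A sketch (paths are illustrative; a scratch file stands in for `lcomsys.dmp.gz` so the commands are self-contained):

```shell
# Sketch: localize the corruption. A scratch file stands in for the
# real dump so the commands run anywhere.
workdir=$(mktemp -d)
printf 'dummy dump data\n' > "$workdir/lcomsys.dmp"
gzip "$workdir/lcomsys.dmp"

# gzip stores a CRC32 of the uncompressed data; -t verifies it without
# extracting, so silently flipped bits in transit show up here.
gzip -t "$workdir/lcomsys.dmp.gz" && echo "gzip CRC OK"

# Compare a strong checksum on both ends of the copy (run the same
# sha256sum against the file on the scp source host).
sha256sum "$workdir/lcomsys.dmp.gz"

rm -r "$workdir"
```

If `gzip -t` fails on the guest but the source file is fine, the bytes were damaged somewhere between the source host and the guest's ZFS dataset.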

Pristine pool:

  pool: oracle
 state: ONLINE
  scan: scrub repaired 0 in 28s with 0 errors on Mon Mar 22 09:58:30 2021

config:

        NAME    STATE      READ WRITE CKSUM
        oracle  ONLINE        0     0     0
          c3d0  ONLINE        0     0     0

errors: No known data errors

Copying the file:

[root@s10-zone ~]# cd /opt/oracle/exchange/
[root@s10-zone exchange]# scp [email protected]:/Backup/oracle/expdp/lcomsys.dmp.gz .
Password: 
lcomsys.dmp.gz       100% |*********************************************************************| 27341 MB  2:23:09

Scrub run after the copy finished:

  pool: oracle
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://support.oracle.com/msg/ZFS-8000-8A
  scan: scrub repaired 6.50K in 5m16s with 3 errors on Tue Mar 23 09:36:34 2021

config:

        NAME    STATE      READ WRITE CKSUM
        oracle  ONLINE        0     0     3
          c3d0  ONLINE        0     0    10

errors: Permanent errors have been detected in the following files:

        /system/zones/s10-zone/root/opt/oracle/exchange/lcomsys.dmp.gz

The Solaris virtual disks are attached like this:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/vms/disks/solaris11.img'/>
      <backingStore/>
      <target dev='sda' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/vms/iso/sol-11_4-text-x86.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/vms/disks/solaris10-data.img'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/vms/disks/solaris11-data.img'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
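It is worth noting which bus each disk target sits on: the rpool disk (sda) is on SATA and stays clean, while both zpool data disks are on IDE. A quick audit sketch (with libvirt the XML would come from `virsh dumpxml <domain>`; a sample copied from the definitions above stands in so the commands are self-contained):

```shell
# Sketch: list each disk target's device name and bus. The sample
# lines mirror the <target> elements from the domain XML above.
xml=$(mktemp)
cat <<'EOF' > "$xml"
<target dev='sda' bus='sata'/>
<target dev='hdb' bus='ide'/>
<target dev='hdc' bus='ide'/>
EOF
# rpool's disk (sda) is on SATA; both zpool data disks are on IDE.
grep -o "dev='[a-z]*' bus='[a-z]*'" "$xml"
rm "$xml"
```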

Answer 1

Strange, but, given that rpool was not corrupted, I changed the VM's data disk definitions to SATA:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/vms/disks/solaris10-data.img'/>
      <backingStore/>
      <target dev='sdb' bus='sata'/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/vms/disks/solaris11-data.img'/>
      <backingStore/>
      <target dev='sdc' bus='sata'/>
      <address type='drive' controller='2' bus='0' target='0' unit='0'/>
    </disk>

and the ZFS checksum corruption magically stopped.
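For reference, the IDE-to-SATA change can also be scripted against the domain XML. A sketch (the XML would normally come from `virsh dumpxml` and go back in via `virsh define`; the embedded sample and the device renaming are illustrative):

```python
import xml.etree.ElementTree as ET

# Sample disk definition, copied from the question's domain XML.
SAMPLE = """<domain>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/vms/disks/solaris10-data.img'/>
      <target dev='hdb' bus='ide'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
  </devices>
</domain>"""

def ide_disks_to_sata(xml_text: str) -> str:
    """Flip every disk target from the IDE bus to SATA."""
    root = ET.fromstring(xml_text)
    for disk in root.iter('disk'):
        target = disk.find('target')
        if target is not None and target.get('bus') == 'ide':
            target.set('bus', 'sata')
            # Rename hdX -> sdX so the device name matches the bus.
            dev = target.get('dev', '')
            if dev.startswith('hd'):
                target.set('dev', 'sd' + dev[2:])
            # Drop the stale <address>; libvirt regenerates it on define.
            addr = disk.find('address')
            if addr is not None:
                disk.remove(addr)
    return ET.tostring(root, encoding='unicode')

print(ide_disks_to_sata(SAMPLE))
```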
