RAID array became read-only

I created a virtual machine running on KVM, purely for testing and learning purposes. During installation, RAID 1 arrays were configured across three disks for root and three disks for boot. After some playing around and testing, I decided to write zeros to one of the drives and see what would happen:

dd if=/dev/zero of=/dev/vdc2 

After that the system went into a read-only state, but mdadm reported no errors at all.
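For reference, the checks that showed no errors were the usual ones, something like:

cat /proc/mdstat
mdadm --detail /dev/md1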

Messages:

[ 2177.091939] RAID1 conf printout:
[ 2177.091947]  --- wd:2 rd:3
[ 2177.091954]  disk 0, wo:0, o:1, dev:vda2
[ 2177.091956]  disk 1, wo:0, o:1, dev:vdb2
[ 2177.091958]  disk 2, wo:1, o:1, dev:vdc2
[ 2177.095315] md: recovery of RAID array md1
[ 2177.095321] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 2177.095323] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 2177.095330] md: using 128k window, over a total of 9792512k.
[ 2217.132610] RAID1 conf printout:
[ 2217.132616]  --- wd:2 rd:3
[ 2217.132622]  disk 0, wo:0, o:1, dev:vda1
[ 2217.132625]  disk 1, wo:0, o:1, dev:vdb1
[ 2217.132626]  disk 2, wo:1, o:1, dev:vdc1
[ 2217.135129] md: delaying recovery of md0 until md1 has finished (they share one or more physical units)
[ 2225.567664] md: md1: recovery done.
[ 2225.572072] md: recovery of RAID array md0
[ 2225.572081] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 2225.572083] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 2225.572087] md: using 128k window, over a total of 682432k.
[ 2225.574833] RAID1 conf printout:
[ 2225.574836]  --- wd:3 rd:3
[ 2225.574904]  disk 0, wo:0, o:1, dev:vda2
[ 2225.574906]  disk 1, wo:0, o:1, dev:vdb2
[ 2225.574908]  disk 2, wo:0, o:1, dev:vdc2
[ 2229.036805] md: md0: recovery done.
[ 2229.042732] RAID1 conf printout:
[ 2229.042736]  --- wd:3 rd:3
[ 2229.042740]  disk 0, wo:0, o:1, dev:vda1
[ 2229.042742]  disk 1, wo:0, o:1, dev:vdb1
[ 2229.042744]  disk 2, wo:0, o:1, dev:vdc1
[ 5241.129626] md/raid1:md1: Disk failure on vdc2, disabling device.
               md/raid1:md1: Operation continuing on 2 devices.
[ 5241.131639] RAID1 conf printout:
[ 5241.131642]  --- wd:2 rd:3
[ 5241.131645]  disk 0, wo:0, o:1, dev:vda2
[ 5241.131647]  disk 1, wo:0, o:1, dev:vdb2
[ 5241.131648]  disk 2, wo:1, o:0, dev:vdc2
[ 5241.131655] RAID1 conf printout:
[ 5241.131656]  --- wd:2 rd:3
[ 5241.131658]  disk 0, wo:0, o:1, dev:vda2
[ 5241.131684]  disk 1, wo:0, o:1, dev:vdb2
[ 5326.850032] md: unbind<vdc2>
[ 5326.850050] md: export_rdev(vdc2)
[ 5395.301755] md: export_rdev(vdc2)
[ 5395.312985] md: bind<vdc2>
[ 5395.315022] RAID1 conf printout:
[ 5395.315024]  --- wd:2 rd:3
[ 5395.315027]  disk 0, wo:0, o:1, dev:vda2
[ 5395.315029]  disk 1, wo:0, o:1, dev:vdb2
[ 5395.315031]  disk 2, wo:1, o:1, dev:vdc2
[ 5395.318161] md: recovery of RAID array md1
[ 5395.318168] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 5395.318170] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 5395.318174] md: using 128k window, over a total of 9792512k.
[ 5443.707445] md: md1: recovery done.
[ 5443.712678] RAID1 conf printout:
[ 5443.712682]  --- wd:3 rd:3
[ 5443.712686]  disk 0, wo:0, o:1, dev:vda2
[ 5443.712688]  disk 1, wo:0, o:1, dev:vdb2
[ 5443.712689]  disk 2, wo:0, o:1, dev:vdc2
[ 8017.777012] EXT4-fs error (device md1): ext4_lookup:1584: inode #36: comm systemd-sysv-ge: deleted inode referenced: 135
[ 8017.782244] Aborting journal on device md1-8.
[ 8017.785487] EXT4-fs (md1): Remounting filesystem read-only
[ 8017.876415] EXT4-fs error (device md1): ext4_lookup:1584: inode #36: comm systemd: deleted inode referenced: 137

cat /proc/mdstat:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 vdb2[1] vda2[0]
      9792512 blocks super 1.2 [3/2] [UU_]

md0 : active raid1 vdc1[2] vdb1[1] vda1[0]
      682432 blocks super 1.2 [3/3] [UUU]

unused devices: <none>

I tried to remount the root filesystem read-write, but without success:

mount -o remount /

Segmentation fault (core dumped)

Then:

fsck -Af

fsck from util-linux 2.27.1
Segmentation fault (core dumped)

I was hoping that, without removing the vdc2 drive, I would be able to resync it successfully, but I was wrong. The damaged drive was removed:

mdadm --manage /dev/md1 --fail /dev/vdc2
mdadm --manage /dev/md1 --remove /dev/vdc2

and I then tried to delete and recreate the partition with fdisk or cfdisk, but I ran into the same error: Segmentation fault (core dumped).
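Once the partition exists again, the next step would normally be to re-add it to the array; a minimal sketch of what I intended to run (assuming /dev/vdc2 comes back clean):

mdadm --manage /dev/md1 --add /dev/vdc2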

Here is the status of md1 and of the drive as reported by mdadm:

mdadm -D /dev/md1

/dev/md1:
        Version : 1.2
  Creation Time : Mon Nov  7 21:22:29 2016
     Raid Level : raid1
     Array Size : 9792512 (9.34 GiB 10.03 GB)
  Used Dev Size : 9792512 (9.34 GiB 10.03 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Nov  8 02:38:26 2016
          State : clean, degraded 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : ubuntu-raid:1  (local to host ubuntu-raid)
           UUID : c846618f:d77238fe:95edac3d:dd19e295
         Events : 108

    Number   Major   Minor   RaidDevice State
       0     253        2        0      active sync   /dev/vda2
       1     253       18        1      active sync   /dev/vdb2
       4       0        0        4      removed

mdadm -E /dev/vdc2

/dev/vdc2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c846618f:d77238fe:95edac3d:dd19e295
           Name : ubuntu-raid:1  (local to host ubuntu-raid)
  Creation Time : Mon Nov  7 21:22:29 2016
     Raid Level : raid1
   Raid Devices : 3

 Avail Dev Size : 19585024 (9.34 GiB 10.03 GB)
     Array Size : 9792512 (9.34 GiB 10.03 GB)
    Data Offset : 16384 sectors
   Super Offset : 8 sectors
   Unused Space : before=16296 sectors, after=0 sectors
          State : clean
    Device UUID : 25a823f7:a301598a:91f9c66b:cc27d311

    Update Time : Tue Nov  8 02:20:34 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d6d7fc77 - correct
         Events : 101


   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

OS: Ubuntu 16.04.1, kernel: 4.4.0-31-generic, mdadm version: v3.3

So I have two questions: first, why did this happen, and what is the main reason the array cannot be mounted read-write? Second, how can this be prevented in the future? Of course this is a test environment, but I am looking for a way to fix it without rebooting or anything similar.

Answer 1

The Linux md system relies on the component drives of a RAID array either to provide good data or to provide no data. In a real-world failure situation this is a reasonable assumption: disks carry error-correcting information, and it is exceedingly unlikely that a bad sector will corrupt itself in a way that cannot be detected.

By writing zeros to the disk you bypassed this protection. The md system still believes the data is good and passes the corrupted data up to the filesystem layer, which reacts very badly. Since you are using RAID 1, md balances reads across all the drives for better performance; the crashes you are hitting happen because pieces of mount and fsck were read from the damaged drive.
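One way to catch this kind of silent mismatch on plain md is a manual scrub; a minimal sketch, assuming the array is md1:

echo check > /sys/block/md1/md/sync_action    # compare the mirror legs without rewriting anything
cat /sys/block/md1/md/mismatch_cnt            # non-zero means the copies disagree

Note that a check can only tell you that the copies disagree; with RAID 1 there is no checksum to say which copy is the good one.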

To recover, remove the failed disk from the system completely (since this is a virtual machine, do it with the VM's management tools; on a physical system you would unplug the drive). That forces the md system to realize the drive has failed and to stop reading from it; then you can perform whatever filesystem-level recovery is needed.
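On a KVM/libvirt host that could look something like the following (a sketch; the domain name and target device are assumptions that depend on your setup):

virsh detach-disk ubuntu-raid vdc --persistent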

If you want to play this kind of game with your disks, format them with ZFS or Btrfs: those filesystems do not make the "good data or no data" assumption, and they use checksums to catch bad data coming back from a disk.
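As a rough sketch of that approach (device names and mount point here are only illustrative), a Btrfs mirror that checksums data and can repair it from the good copy looks like:

mkfs.btrfs -m raid1 -d raid1 /dev/vdb2 /dev/vdc2
mount /dev/vdb2 /mnt
btrfs scrub start /mnt    # verifies checksums and rewrites bad blocks from the good mirror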
