RAID 5 with 4 disks does not work with 1 failed disk?

I found a question about mdadm spare disks that almost answers my question, but it is not clear to me what is going on.

We set up a 4-disk RAID5 - all disks are marked active/sync under normal operation:

    Update Time : Sun Sep 29 03:44:01 2013
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

...

Number   Major   Minor   RaidDevice State
   0     202       32        0      active sync   /dev/sdc
   1     202       48        1      active sync   /dev/sdd
   2     202       64        2      active sync   /dev/sde 
   4     202       80        3      active sync   /dev/sdf

But when one of the disks failed, the RAID stopped working:

    Update Time : Sun Sep 29 01:00:01 2013
          State : clean, FAILED 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

...

Number   Major   Minor   RaidDevice State
   0     202       32        0      active sync   /dev/sdc
   1     202       48        1      active sync   /dev/sdd
   2       0        0        2      removed
   3       0        0        3      removed

   2     202       64        -      faulty spare   /dev/sde
   4     202       80        -      spare   /dev/sdf

What exactly is going on here?

The fix was to reinstall the RAID - fortunately I could do that. Next time it will probably hold some important data. I need to understand this so that I can have a RAID that does not fail because of a single disk failure.

I realize I did not spell out what I expected versus what actually happened.

I expected the RAID5 with 3 good disks and 1 bad one to run in degraded mode - 3 active/sync and 1 faulty.

What happened instead was that a spare was created out of thin air and declared faulty - then another spare was created out of thin air and declared sound - after which the RAID was declared non-operational.

This is the output of blkid:

$ blkid
/dev/xvda1: LABEL="/" UUID="4797c72d-85bd-421a-9c01-52243aa28f6c" TYPE="ext4" 
/dev/xvdc: UUID="feb2c515-6003-478b-beb0-089fed71b33f" TYPE="ext3" 
/dev/xvdd: UUID="feb2c515-6003-478b-beb0-089fed71b33f" SEC_TYPE="ext2" TYPE="ext3" 
/dev/xvde: UUID="feb2c515-6003-478b-beb0-089fed71b33f" SEC_TYPE="ext2" TYPE="ext3" 
/dev/xvdf: UUID="feb2c515-6003-478b-beb0-089fed71b33f" SEC_TYPE="ext2" TYPE="ext3" 

The TYPE and SEC_TYPE values are interesting, because the RAID holds XFS, not ext3....
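To see what md itself recorded on each member device (rather than what blkid guesses from filesystem signatures), the md superblocks can be examined - a sketch using the device names above, output not reproduced here:

$ mdadm --examine /dev/xvdc    # array UUID, RAID level, and this device's role
$ mdadm --examine /dev/xvdf    # shows whether xvdf is recorded as active, spare, or faulty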

The log for a mount attempted on this disk - which, like every other mount, led to the end result listed above - has these entries:

Oct  2 15:08:51 it kernel: [1686185.573233] md/raid:md0: device xvdc operational as raid disk 0
Oct  2 15:08:51 it kernel: [1686185.580020] md/raid:md0: device xvde operational as raid disk 2
Oct  2 15:08:51 it kernel: [1686185.588307] md/raid:md0: device xvdd operational as raid disk 1
Oct  2 15:08:51 it kernel: [1686185.595745] md/raid:md0: allocated 4312kB
Oct  2 15:08:51 it kernel: [1686185.600729] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
Oct  2 15:08:51 it kernel: [1686185.608928] md0: detected capacity change from 0 to 2705221484544
Oct  2 15:08:51 it kernel: [1686185.615772] md: recovery of RAID array md0
Oct  2 15:08:51 it kernel: [1686185.621150] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Oct  2 15:08:51 it kernel: [1686185.627626] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Oct  2 15:08:51 it kernel: [1686185.634024]  md0: unknown partition table
Oct  2 15:08:51 it kernel: [1686185.645882] md: using 128k window, over a total of 880605952k.
Oct  2 15:22:25 it kernel: [1686999.697076] XFS (md0): Mounting Filesystem
Oct  2 15:22:26 it kernel: [1686999.889961] XFS (md0): Ending clean mount
Oct  2 15:24:19 it kernel: [1687112.817845] end_request: I/O error, dev xvde, sector 881423360
Oct  2 15:24:19 it kernel: [1687112.820517] raid5_end_read_request: 1 callbacks suppressed
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423360 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: Disk failure on xvde, disabling device.
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: Operation continuing on 2 devices.
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423368 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423376 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423384 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423392 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423400 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423408 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423416 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423424 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423432 on xvde).
Oct  2 15:24:19 it kernel: [1687113.432129] md: md0: recovery done.
Oct  2 15:24:19 it kernel: [1687113.685151] Buffer I/O error on device md0, logical block 96
Oct  2 15:24:19 it kernel: [1687113.691386] Buffer I/O error on device md0, logical block 96
Oct  2 15:24:19 it kernel: [1687113.697529] Buffer I/O error on device md0, logical block 64
Oct  2 15:24:20 it kernel: [1687113.703589] Buffer I/O error on device md0, logical block 64
Oct  2 15:25:51 it kernel: [1687205.682022] Buffer I/O error on device md0, logical block 96
Oct  2 15:25:51 it kernel: [1687205.688477] Buffer I/O error on device md0, logical block 96
Oct  2 15:25:51 it kernel: [1687205.694591] Buffer I/O error on device md0, logical block 64
Oct  2 15:25:52 it kernel: [1687205.700728] Buffer I/O error on device md0, logical block 64
Oct  2 15:25:52 it kernel: [1687205.748751] XFS (md0): last sector read failed

I do not see xvdf listed anywhere in there...

Answer 1

This is a fundamental problem with RAID5 - a bad block during a rebuild is a killer.

Oct  2 15:08:51 it kernel: [1686185.573233] md/raid:md0: device xvdc operational as raid disk 0
Oct  2 15:08:51 it kernel: [1686185.580020] md/raid:md0: device xvde operational as raid disk 2
Oct  2 15:08:51 it kernel: [1686185.588307] md/raid:md0: device xvdd operational as raid disk 1
Oct  2 15:08:51 it kernel: [1686185.595745] md/raid:md0: allocated 4312kB
Oct  2 15:08:51 it kernel: [1686185.600729] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
Oct  2 15:08:51 it kernel: [1686185.608928] md0: detected capacity change from 0 to 2705221484544

The array was assembled degraded. It was assembled from xvdc, xvde, and xvdd. Apparently there is a hot spare:

Oct  2 15:08:51 it kernel: [1686185.615772] md: recovery of RAID array md0
Oct  2 15:08:51 it kernel: [1686185.621150] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Oct  2 15:08:51 it kernel: [1686185.627626] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Oct  2 15:08:51 it kernel: [1686185.634024]  md0: unknown partition table
Oct  2 15:08:51 it kernel: [1686185.645882] md: using 128k window, over a total of 880605952k.

The "partition table" message is irrelevant. The other messages tell you that md is attempting a recovery, probably onto a hot spare (which may be the device that failed earlier, if you tried to remove and re-add it).
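For reference, a remove/re-add of a previously failed member - the kind of thing that triggers exactly this sort of recovery - would look roughly like this (a sketch only, with xvdf standing in for whichever device was re-added):

# mdadm /dev/md0 --remove /dev/xvdf
# mdadm /dev/md0 --add /dev/xvdf

The re-added device comes back as a spare, and md immediately starts recovering onto it, which is what the "md: recovery of RAID array md0" line above shows.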


Oct  2 15:24:19 it kernel: [1687112.817845] end_request: I/O error, dev xvde, sector 881423360
Oct  2 15:24:19 it kernel: [1687112.820517] raid5_end_read_request: 1 callbacks suppressed
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: read error not correctable (sector 881423360 on xvde).
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: Disk failure on xvde, disabling device.
Oct  2 15:24:19 it kernel: [1687112.821837] md/raid:md0: Operation continuing on 2 devices.

Here md tries to read a sector from xvde (one of the three remaining devices). That fails [probably a bad sector], and md (since the array is degraded) cannot recover it. It therefore kicks the disk out of the array, and with a double disk failure, your RAID5 is dead.

I am not sure why it is labeled as a spare - that is weird (though I guess I normally look at /proc/mdstat, so maybe that is just how mdadm labels it). Also, I thought newer kernels were much more reluctant to kick a disk out over bad blocks - perhaps you are running an older kernel?
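Two quick checks that might help here (a sketch; nothing version-specific is assumed):

$ uname -r           # which kernel is actually running
$ cat /proc/mdstat   # how md labels each member: (F) = faulty, (S) = spare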

What can you do about this?

Keep good backups. That is always an important part of any strategy for keeping data alive.

Make sure the array is scrubbed for bad blocks regularly. Your OS may already include a cron job for this. You do this by echoing either repair or check to /sys/block/md0/md/sync_action. repair will also fix any discovered parity errors (e.g., the parity bits not matching the data on the disks).

# echo repair > /sys/block/md0/md/sync_action
#

Progress can be watched with cat /proc/mdstat or with the various files in that sysfs directory. (You can find somewhat up-to-date documentation in the Linux Raid Wiki mdstat article.)
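For example, something along these lines (a sketch; file names per the md sysfs layout):

$ cat /proc/mdstat
$ cat /sys/block/md0/md/sync_action      # idle, check, repair, resync or recover
$ cat /sys/block/md0/md/sync_completed   # sectors done / total sectors
$ cat /sys/block/md0/md/mismatch_cnt     # mismatches found by the last check/repair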

NOTE: On older kernels (I am not sure of the exact version), check may not fix bad blocks.

One final option is to switch to RAID6. That requires another disk (you can run a four-disk or even three-disk RAID6, but you probably do not want to). With new enough kernels, bad blocks are fixed on the fly whenever possible. RAID6 can survive two disk failures, so when one disk has failed it can still survive a bad block - it will map out the bad block and keep rebuilding.
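If you go that route, an existing RAID5 can be reshaped to RAID6 with mdadm's --grow - roughly like this, where /dev/sdX stands for whatever fifth disk you add (a sketch only; the reshape takes a long time and you want a backup first):

# mdadm /dev/md0 --add /dev/sdX
# mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-reshape.bak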

Answer 2

I imagine you are creating your RAID5 array like this:

$ mdadm --create /dev/md0 --level=5 --raid-devices=4 \
       /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

That is not quite what you want. Instead, you need to add the disks like this:

$ mdadm --create /dev/md0 --level=5 --raid-devices=4 \
       /dev/sda1 /dev/sdb1 /dev/sdc1
$ mdadm --add /dev/md0 /dev/sdd1

Or you can use mdadm's --spare-devices option to add the spare, like this:

$ mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
       /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

The last drive in the list will become the spare.
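Either way, once the array is created you can confirm which device ended up as the spare - for example (a sketch):

$ mdadm --detail /dev/md0   # the device table shows the spare's state as "spare"
$ cat /proc/mdstat          # the spare is marked with (S)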

From the mdadm man page:

-n, --raid-devices=
      Specify the number of active devices in the array.  This, plus the 
      number of spare devices (see below) must  equal the  number  of  
      component-devices (including "missing" devices) that are listed on 
      the command line for --create. Setting a value of 1 is probably a 
      mistake and so requires that --force be specified first.  A  value 
      of  1  will then be allowed for linear, multipath, RAID0 and RAID1.  
      It is never allowed for RAID4, RAID5 or RAID6. This  number  can only 
      be changed using --grow for RAID1, RAID4, RAID5 and RAID6 arrays, and
      only on kernels which provide the necessary support.

-x, --spare-devices=
      Specify the number of spare (eXtra) devices in the initial array.  
      Spares can also be  added  and  removed  later. The  number  of component
      devices listed on the command line must equal the number of RAID devices 
      plus the number of spare devices.
