mdraid in a weird state after a failed rebuild

So the motherboard of my NAS (mdraid level 5) died. I built a new system, reassembled the raid there, and the rebuild started:

md0 : active raid5 sde4[3] sdc4[0] sdd4[2] sdb4[4]
  8634123072 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
  [>....................]  recovery =  4.2% (121889248/2878041024) finish=394.4min speed=116448K/sec
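
For context, the reassembly and rebuild were started with commands along these lines (a reconstruction rather than an exact transcript):

mdadm --assemble /dev/md0 /dev/sd[cde]4   # assemble the three in-sync members, running degraded
mdadm --add /dev/md0 /dev/sdb4            # add sdb4 back, which kicks off the recovery shown above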

A few hours later I came back to find the rebuild had failed with some IO errors. I rebooted the system and tried to reassemble again, but all the drives now come up as spares:

root@el-kisto:~# mdadm --assemble  /dev/md0 /dev/sd[b-e]4 --verbose
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdb4 is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdc4 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdd4 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde4 is identified as a member of /dev/md0, slot 3.
mdadm: added /dev/sdc4 to /dev/md0 as 0 (possibly out of date)
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sde4 to /dev/md0 as 3
mdadm: added /dev/sdb4 to /dev/md0 as 5
mdadm: added /dev/sdd4 to /dev/md0 as 2
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.

root@el-kisto:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb4[5](S) sdc4[0](S) sdd4[2](S) sde4[3](S)
      11512164096 blocks
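
(Note: between attempts the half-assembled, inactive array has to be released first, otherwise the member devices stay busy:)

mdadm --stop /dev/md0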

Examining the members with mdadm gives the following:

mdadm --examine /dev/sd[bcde]4 | egrep 'dev|Update|Role|State|Chunk Size|Events'
/dev/sdb4:
    Update Time : Mon Aug 28 02:05:59 2023
          State : clean
         Events : 2045534
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     5       8       20        5      spare   /dev/sdb4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       36        4      faulty   /dev/sdc4
/dev/sdc4:
    Update Time : Mon Aug 28 01:54:48 2023
          State : clean
         Events : 2045530
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     0       8       36        0      active sync   /dev/sdc4
   0     0       8       36        0      active sync   /dev/sdc4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       20        4      spare   /dev/sdb4
/dev/sdd4:
    Update Time : Mon Aug 28 02:05:59 2023
          State : clean
         Events : 2045534
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     2       8       52        2      active sync   /dev/sdd4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       36        4      faulty   /dev/sdc4
/dev/sde4:
    Update Time : Mon Aug 28 02:05:59 2023
          State : clean
         Events : 2045534
     Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     3       8       68        3      active sync   /dev/sde4
   2     2       8       52        2      active sync   /dev/sdd4
   3     3       8       68        3      active sync   /dev/sde4
   4     4       8       36        4      faulty   /dev/sdc4

This doesn't look too bad: the event counters on 3 of the 4 disks are identical, and the one that is behind was being rebuilt anyway. But somehow md now seems to think there should be 5 devices? The culprit appears to be sdb4, which identifies itself as a spare and as #5, when it should be active and #4. Using --force during assembly didn't help. How do I convince md to accept sd[bde]4 as a degraded 4-disk array?
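
For reference, the forced attempt looked roughly like this (a sketch, not the exact invocation), assembling only the three members whose event counters agree:

mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sdb4 /dev/sdd4 /dev/sde4

The outcome is the same as above: sdb4 still identifies itself as spare #5 and the array won't start.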
