Linux RAID10 hit a bad disk during a resync. Is it hosed?

I have a Linux software RAID10 device at md0, made up of four 1 TB disks, sd[abcd]. Yesterday SMART emailed me to say that one of the disks was going bad (rising seek errors and reallocated sectors). I put in a new drive and added it to the array, and /proc/mdstat showed it resyncing. At some point in the morning, "media error" messages started showing up for another disk in the array. I checked /var/log/messages and saw a pile of Emask 0x49 (media error) entries for a second drive in the same array. Thanks, Murphy.
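
For reference, the SMART attributes behind that warning and the resync progress can both be watched from the shell; a minimal sketch, with the device name as a placeholder for whichever disk is suspect:

    # Pull the SMART attributes that triggered the alert (seek errors, reallocated sectors)
    smartctl -A /dev/sdb | grep -Ei 'Seek_Error_Rate|Reallocated_Sector'

    # Look for the libata media errors in the system log
    grep -i 'Emask' /var/log/messages | tail

    # Watch the resync progress
    watch -n 60 cat /proc/mdstat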

I replaced the newly failed drive as well, but now I cannot get the array to start. mdadm also tells me that sdc is busy. Does anyone know why? It is the brand-new drive:

    # mdadm  -S /dev/md0
    mdadm: stopped /dev/md0

    # mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd -fv
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sda is identified as a member of /dev/md0, slot 1.
    mdadm: /dev/sdb is identified as a member of /dev/md0, slot -1.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot -1.
    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 0.
    mdadm: added /dev/sda to /dev/md0 as 1
    mdadm: no uptodate device for slot 2 of /dev/md0
    mdadm: no uptodate device for slot 3 of /dev/md0
    mdadm: added /dev/sdb to /dev/md0 as -1
    mdadm: failed to add /dev/sdc to /dev/md0: Device or resource busy
    mdadm: added /dev/sdd to /dev/md0 as 0
    mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.

    # cat /proc/mdstat
    Personalities : [raid10]
    md0 : inactive sdd[4](S) sdb[6](S) sda[5](S)
          2930287104 blocks super 1.0

    unused devices: <none>
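
When mdadm reports "Device or resource busy", something else already holds the disk, often a stale or partially auto-assembled md array. A rough sketch of what to check, assuming nothing like LVM or a mounted filesystem is involved (the md127 name below is only an example of such a stale array, not something from this system):

    # See whether any array, active or inactive, still claims sdc,
    # and whether anything else holds it
    cat /proc/mdstat
    lsblk /dev/sdc

    # Check which array sdc's own superblock says it belongs to
    mdadm --examine /dev/sdc

    # If a stale auto-assembled array is holding it, stop that array first
    mdadm --stop /dev/md127    # example name; use whatever /proc/mdstat shows

The superblock of each member, for completeness: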

    # for d in a b c d; do mdadm -E /dev/sd$d; done
    /dev/sda:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 24edfbfb:f97149e1:93e019e7:fc7b3f03
               Name : bach:0
      Creation Time : Thu Sep 30 13:50:40 2010
         Raid Level : raid10
       Raid Devices : 4

     Avail Dev Size : 1953524896 (931.51 GiB 1000.20 GB)
         Array Size : 3907049472 (1863.03 GiB 2000.41 GB)
      Used Dev Size : 1953524736 (931.51 GiB 1000.20 GB)
       Super Offset : 1953525152 sectors
              State : clean
        Device UUID : fc75bc5b:e32851bb:9725e0ce:aeaa1680

        Update Time : Thu Dec 27 09:28:13 2012
           Checksum : 3a03b8e1 - correct
             Events : 7314

             Layout : near=1, far=2
         Chunk Size : 256K

        Array Slot : 5 (failed, failed, failed, failed, 0, 1, failed)
       Array State : uU__ 5 failed


    /dev/sdb:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 24edfbfb:f97149e1:93e019e7:fc7b3f03
               Name : bach:0
      Creation Time : Thu Sep 30 13:50:40 2010
         Raid Level : raid10
       Raid Devices : 4

     Avail Dev Size : 1953524896 (931.51 GiB 1000.20 GB)
         Array Size : 3907049472 (1863.03 GiB 2000.41 GB)
      Used Dev Size : 1953524736 (931.51 GiB 1000.20 GB)
       Super Offset : 1953525152 sectors
              State : clean
        Device UUID : adbb2437:931c08fc:0e5428b8:a6d0d47d

        Update Time : Thu Dec 27 09:28:13 2012
           Checksum : 3d2946ab - correct
             Events : 7306

             Layout : near=1, far=2
         Chunk Size : 256K

        Array Slot : 6 (failed, failed, failed, failed, 0, 1)
       Array State : uu__ 4 failed


    /dev/sdc:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 24edfbfb:f97149e1:93e019e7:fc7b3f03
               Name : bach:0
      Creation Time : Thu Sep 30 13:50:40 2010
         Raid Level : raid10
       Raid Devices : 4

     Avail Dev Size : 1953524896 (931.51 GiB 1000.20 GB)
         Array Size : 3907049472 (1863.03 GiB 2000.41 GB)
      Used Dev Size : 1953524736 (931.51 GiB 1000.20 GB)
       Super Offset : 1953525152 sectors
              State : clean
        Device UUID : 5c216a06:c17d4e4f:9dc5c09b:b3f7d72f

        Update Time : Thu Dec 27 09:28:13 2012
           Checksum : f5508998 - correct
             Events : 0

             Layout : near=1, far=2
         Chunk Size : 256K

        Array Slot : 6 (failed, failed, failed, failed, 0, 1)
       Array State : uu__ 4 failed


    /dev/sdd:
              Magic : a92b4efc
            Version : 1.0
        Feature Map : 0x0
         Array UUID : 24edfbfb:f97149e1:93e019e7:fc7b3f03
               Name : bach:0
      Creation Time : Thu Sep 30 13:50:40 2010
         Raid Level : raid10
       Raid Devices : 4

     Avail Dev Size : 1953524896 (931.51 GiB 1000.20 GB)
         Array Size : 3907049472 (1863.03 GiB 2000.41 GB)
      Used Dev Size : 1953524736 (931.51 GiB 1000.20 GB)
       Super Offset : 1953525152 sectors
              State : clean
        Device UUID : 69a39c8f:0b25b888:0b4e1848:42aed006

        Update Time : Thu Dec 27 09:28:13 2012
           Checksum : 3b3d0e7c - correct
             Events : 7314

             Layout : near=1, far=2
         Chunk Size : 256K

        Array Slot : 4 (failed, failed, failed, failed, 0, 1, failed)
       Array State : Uu__ 5 failed
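
The fields that decide whether the array can still be assembled are the Events counters and the Array State lines: sda and sdd agree at 7314 events, sdb sits slightly behind at 7306, and sdc, the fresh replacement, is still at 0. A small variation of the same loop pulls them out for a quick side-by-side comparison:

    # Summarize just the event counters and member states of all four disks
    for d in a b c d; do
        echo "/dev/sd$d:"
        mdadm -E /dev/sd$d | grep -E 'Events|Array State'
    done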

I do have backups of the array, but restoring them will take a whole day. Is there any way to bring this thing back online?

Answer 1

Well, as a last-ditch effort, I tried re-creating the array with the newly failed disk and the mdadm --assume-clean option to see what it would do. The array came up, but it could not find the data. Oh well... backups are awesome.
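
For anyone who lands here later: re-creating with --assume-clean rewrites the superblocks but skips the initial sync, so it only has a chance of working when every parameter (metadata version, level, layout, chunk size and, critically, device order) matches the original array exactly. A sketch of what such a command looks like, with the geometry taken from the --examine output above; the device order is an assumption, not the command that was actually run:

    # DANGEROUS last resort: rewrites the superblocks without resyncing.
    # RAID10, 4 devices, 256K chunks, far=2 layout and 1.0 metadata come from
    # the --examine output above; the device order is a guess, and getting it
    # wrong scrambles the data.
    mdadm --create /dev/md0 --assume-clean \
          --metadata=1.0 --level=10 --raid-devices=4 \
          --chunk=256 --layout=f2 \
          /dev/sdd /dev/sda /dev/sdb missing

If the order or layout is wrong, the array assembles but the filesystem on it is not found, which is consistent with what happened here.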
