md RAID10 failed, confusing output

I am testing a RAID10 array with mdadm here. I want to see how many failed devices it can tolerate, how long rebuilds take, and so on. At one point I had it resyncing onto 5 or 6 devices, then I rebooted the host, and now the array shows as inactive. I am not sure what it is doing or how to recover it.

There is nothing important on it and I could simply recreate it, but I would rather work out what went wrong and whether it can be recovered.

```
root@netcu1257-vs-02:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdz[19] sdy[18] sdx[17] sdw[16] sdv[15] sdu[14] sds[12] sdt[13] sdr[11] sdq[10](S) sdp[21] sdn[8] sdm[7] sdo[9] sdl[6] sdj[20](R) sdk[22](S) sdi[4](S) sdh[3] sdf[1] sde[0] sdg[2]
      257812572160 blocks super 1.2

root@netcu1257-vs-02:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Oct 29 13:59:41 2021
        Raid Level : raid10
     Used Dev Size : 18446744073709551615
      Raid Devices : 20
     Total Devices : 22
       Persistence : Superblock is persistent

       Update Time : Mon Nov  8 09:59:42 2021
             State : active, FAILED, Not Started
    Active Devices : 13
   Working Devices : 22
    Failed Devices : 0
     Spare Devices : 9

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : unknown

              Name : netcu1257-vs-02:0  (local to host netcu1257-vs-02)
              UUID : c3418360:4fb5857c:eb952018:163a60c6
            Events : 85985

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
       -       0        0       13      removed
       -       0        0       14      removed
       -       0        0       15      removed
       -       0        0       16      removed
       -       0        0       17      removed
       -       0        0       18      removed
       -       0        0       19      removed

       -      65      112       17      sync set-B   /dev/sdx
       -       8       64        0      spare rebuilding   /dev/sde
       -       8      208        8      sync set-A   /dev/sdn
       -      65       80       15      sync set-B   /dev/sdv
       -       8      176        6      sync set-A   /dev/sdl
       -      65       48       13      sync set-B   /dev/sdt
       -       8      144        5      spare rebuilding   /dev/sdj
       -      65       16       11      sync set-B   /dev/sdr
       -       8      112        3      sync set-B   /dev/sdh
       -       8      240        7      spare rebuilding   /dev/sdp
       -      65      128       18      sync set-A   /dev/sdy
       -       8       80        1      sync set-B   /dev/sdf
       -       8      224        9      spare rebuilding   /dev/sdo
       -      65       96       16      sync set-A   /dev/sdw
       -       8      192       10      spare rebuilding   /dev/sdm
       -      65       64       14      sync set-A   /dev/sdu
       -       8      160        -      spare   /dev/sdk
       -      65       32       12      sync set-A   /dev/sds
       -       8      128        -      spare   /dev/sdi
       -      65        0        -      spare   /dev/sdq
       -      65      144       19      sync set-B   /dev/sdz
       -       8       96        2      spare rebuilding   /dev/sdg
```

As you can see, all of my devices (/dev/sd[e-z]) show up as part of md0, yet it also lists 20 removed devices. The array was originally created with 20 devices plus 2 spares. Although several members show as rebuilding, there is no disk activity, and /proc/mdstat shows the same picture.

Is this recoverable? And given that the array was rebuilding before the host rebooted, what can I do to make sure the rebuild continues and the array stays active across a reboot?

Edit:

I discovered that my mdadm.conf file had been misplaced in /etc/. I moved it to /etc/mdadm/ and rebooted, and now the array shows up as RAID0, still inactive:

```
root@netcu1257-vs-02:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 22
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 22

              Name : netcu1257-vs-02:0  (local to host netcu1257-vs-02)
              UUID : c3418360:4fb5857c:eb952018:163a60c6
            Events : 85985

    Number   Major   Minor   RaidDevice

       -      65      112        -        /dev/sdx
       -       8       64        -        /dev/sde
       -       8      208        -        /dev/sdn
       -      65       80        -        /dev/sdv
       -       8      176        -        /dev/sdl
       -      65       48        -        /dev/sdt
       -       8      144        -        /dev/sdj
       -      65       16        -        /dev/sdr
       -       8      112        -        /dev/sdh
       -       8      240        -        /dev/sdp
       -      65      128        -        /dev/sdy
       -       8       80        -        /dev/sdf
       -       8      224        -        /dev/sdo
       -      65       96        -        /dev/sdw
       -       8      192        -        /dev/sdm
       -      65       64        -        /dev/sdu
       -       8      160        -        /dev/sdk
       -      65       32        -        /dev/sds
       -       8      128        -        /dev/sdi
       -      65        0        -        /dev/sdq
       -      65      144        -        /dev/sdz
       -       8       96        -        /dev/sdg
```
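The "raid0" level shown here is misleading: while the array is unassembled, `mdadm -D` has no agreed-upon array metadata to report, and the authoritative record is each member's own superblock. As a diagnostic sketch (device names taken from the /proc/mdstat output above), the following builds the member list and prints one `mdadm --examine` command per device, so the list can be reviewed and then run as root:

```shell
# Build the member list (names from the /proc/mdstat output above) and
# emit one "mdadm --examine" command per device. The interesting fields
# in the real --examine output are "Raid Level", "Device Role" and
# "Events": members whose event counters lag behind the rest were still
# rebuilding when the host rebooted.
devices=""
for letter in e f g h i j k l m n o p q r s t u v w x y z; do
    devices="$devices /dev/sd$letter"
done

for dev in $devices; do
    echo "mdadm --examine $dev"
done
```

Members whose `Events` counts agree can usually be assembled together; a large spread in event counts is what makes mdadm refuse to start the array.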

Answer 1

You need to re-add all the drives.

For every member in set A, and likewise for set B:

```
mdadm --manage /dev/mdN -a /dev/sdX1
```

Before that, try a simple:

```
mdadm --assemble /dev/mdN /dev/sd? ...
```
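Putting those two suggestions together, here is a sketch only, assuming the device names from the question and intact superblocks. `DRY_RUN=1` (the default here) prints each command instead of executing it, so the plan can be reviewed before touching the array:

```shell
#!/bin/sh
# Recovery sketch: stop the half-assembled array, retry a plain
# assemble, then re-add the members that were left as bare spares.
# DRY_RUN=1 (default) only prints the commands; set DRY_RUN=0 to run.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Stop the inactive array so it can be reassembled cleanly.
run mdadm --stop /dev/md0

# 2. Try a plain assemble of all 22 members first; if mdadm refuses to
#    start the array, --force can be added, at some risk of data loss.
run mdadm --assemble /dev/md0 /dev/sd[e-z]

# 3. Re-add anything that comes back as a spare instead of rejoining
#    (sdi, sdk and sdq were listed as bare spares in the question).
for dev in /dev/sdi /dev/sdk /dev/sdq; do
    run mdadm --manage /dev/md0 --add "$dev"
done
```

Once the array is running again, `mdadm --detail --scan` written into /etc/mdadm/mdadm.conf (followed by rebuilding the initramfs on Debian/Ubuntu-style systems) should make the assembly survive reboots.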
