I'm fairly new to the Linux world, and I don't have enough experience to really consider myself someone who can be trusted with this system :P

Long story short, I decided to run a Linux RAID 5 because I figured it would be more stable than running one under Windows. The RAID recently stopped mounting, and I'm fairly sure it ran into trouble while attempting a rebuild.

Trying to assemble the array now, mdadm keeps reporting "device or resource busy" - yet as far as I can tell, nothing is mounted or using the disks. Google suggests dmraid could be the culprit - but trying to remove it shows it isn't installed.
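
For what it's worth, a few checks that can reveal what is actually holding the disks (a sketch; the device name below is only an example):

# An inactive md array already claiming the members will show up here
cat /proc/mdstat

# Check whether device-mapper / dmraid holds a mapping over the disks
sudo dmsetup ls

# Check whether any process has the partition open
sudo fuser -v /dev/sda1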

The system is a 12-drive RAID-5, but it looks like 2 of the drives don't have the correct superblock data.

I've included the output of most of the usual commands below.


cat /proc/mdstat

erwin@erwin-ubuntu:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdd1[10](S) sde1[2](S) sdf1[11](S) sdg1[6](S) sdm1[4](S) sdl1[9](S) sdk1[5](S) sdj1[7](S) sdi1[13](S) sdc1[8](S) sdb1[0](S) sda1[3](S)
     11721120064 blocks

unused devices: <none>

mdadm --detail


erwin@erwin-ubuntu:~$ sudo mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
erwin@erwin-ubuntu:~$

mdadm --examine

Notice the odd part - I don't know why, but the system drive is normally sda - and now it's suddenly sdh - and no, I did not move any physical cabling?
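
(For reference, one way to match the shuffled letters back to physical disks is via the persistent names udev creates - assuming the by-id links exist, as they do on stock Ubuntu:)

# Serial-number-based names point at whatever sdX letter each disk has now
ls -l /dev/disk/by-id/ | grep -v part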


erwin@erwin-ubuntu:~$ sudo mdadm --examine /dev/sd*1
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1bcd - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       97        3      active sync   /dev/sdg1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1bd7 - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8      113        0      active sync   /dev/sdh1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdc1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1bf7 - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     8       8      129        8      active sync   /dev/sdi1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdd1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1c0b - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    10       8      145       10      active sync   /dev/sdj1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sde1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 08:05:07 2011
          State : clean
 Active Devices : 11
Working Devices : 12
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 3597cbb - correct
         Events : 74284

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8      161        2      active sync   /dev/sdk1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       8      161        2      active sync   /dev/sdk1
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8       17       12      spare   /dev/sdb1
/dev/sdf1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1c2d - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    11       8      177       11      active sync   /dev/sdl1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdg1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1c33 - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     6       8      193        6      active sync   /dev/sdm1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
mdadm: No md superblock detected on /dev/sdh1.
/dev/sdi1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1b8b - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this    13       8       17       13      spare   /dev/sdb1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdj1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1b95 - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     7       8       33        7      active sync   /dev/sdc1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdk1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1ba1 - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     5       8       49        5      active sync   /dev/sdd1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdl1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1bb9 - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     9       8       65        9      active sync   /dev/sde1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1
/dev/sdm1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 7964c122:1ec1e9ff:efb010e8:fc8e0ce0 (local to host erwin-ubuntu)
  Creation Time : Sun Oct 10 11:54:54 2010
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 10744359296 (10246.62 GiB 11002.22 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0

    Update Time : Mon Dec  5 19:24:00 2011
          State : clean
 Active Devices : 10
Working Devices : 11
 Failed Devices : 2
  Spare Devices : 1
       Checksum : 35a1bbf - correct
         Events : 74295

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       81        4      active sync   /dev/sdf1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       8       97        3      active sync   /dev/sdg1
   4     4       8       81        4      active sync   /dev/sdf1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8      193        6      active sync   /dev/sdm1
   7     7       8       33        7      active sync   /dev/sdc1
   8     8       8      129        8      active sync   /dev/sdi1
   9     9       8       65        9      active sync   /dev/sde1
  10    10       8      145       10      active sync   /dev/sdj1
  11    11       8      177       11      active sync   /dev/sdl1
  12    12       8      161       12      faulty   /dev/sdk1

mdadm --assemble --scan --verbose - capture truncated to save characters - as per the edit - the resource-busy issue was solved by stopping the array first - yes, it was that simple (see the short sequence sketched after this output)


erwin@erwin-ubuntu:~$ sudo mdadm --assemble --scan --verbose
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/sdm1: Device or resource busy
mdadm: /dev/sdm1 has wrong uuid.
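
As noted above, stopping the inactive array is what released the member disks - a minimal sketch of the sequence:

# Stop the inactive array so mdadm lets go of the members
sudo mdadm --stop /dev/md0

# Then retry the assembly
sudo mdadm --assemble --scan --verbose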

My feeling is that I may need to zero the superblocks on the two failed drives (since one shows up as a spare and the other one's disk number doesn't match?) and then reassemble - but I have no idea how to handle the resource-busy situation.

I don't want to take unnecessary, potentially data-damaging steps - so any advice would be greatly appreciated.

Update 1

derobert suggested stopping the array and then reassembling it :D Yes, the resource-busy issue was fixed by that, but it seems two drives still aren't cooperating. I'm guessing a manual assemble/re-create is in order?

Any thoughts on what to do next are welcome.

The latest output from mdadm assemble is listed below:

erwin@erwin-ubuntu:~$ sudo mdadm --assemble --scan --verbose
mdadm: looking for devices for /dev/md0
mdadm: no RAID superblock on /dev/sdm
mdadm: /dev/sdm has wrong uuid.
mdadm: no RAID superblock on /dev/sdl
mdadm: /dev/sdl has wrong uuid.
mdadm: no RAID superblock on /dev/sdk
mdadm: /dev/sdk has wrong uuid.
mdadm: no RAID superblock on /dev/sdj
mdadm: /dev/sdj has wrong uuid.
mdadm: no RAID superblock on /dev/sdi
mdadm: /dev/sdi has wrong uuid.
mdadm: cannot open device /dev/sdh6: Device or resource busy
mdadm: /dev/sdh6 has wrong uuid.
mdadm: no RAID superblock on /dev/sdh5
mdadm: /dev/sdh5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdh4
mdadm: /dev/sdh4 has wrong uuid.
mdadm: no RAID superblock on /dev/sdh3
mdadm: /dev/sdh3 has wrong uuid.
mdadm: no RAID superblock on /dev/sdh2
mdadm: /dev/sdh2 has wrong uuid.
mdadm: no RAID superblock on /dev/sdh1
mdadm: /dev/sdh1 has wrong uuid.
mdadm: cannot open device /dev/sdh: Device or resource busy
mdadm: /dev/sdh has wrong uuid.
mdadm: no RAID superblock on /dev/sdg
mdadm: /dev/sdg has wrong uuid.
mdadm: no RAID superblock on /dev/sdf
mdadm: /dev/sdf has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: /dev/sdm1 is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdl1 is identified as a member of /dev/md0, slot 9.
mdadm: /dev/sdk1 is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdj1 is identified as a member of /dev/md0, slot 7.
mdadm: /dev/sdi1 is identified as a member of /dev/md0, slot 13.
mdadm: /dev/sdg1 is identified as a member of /dev/md0, slot 6.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 11.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 10.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 8.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 3.
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sde1 to /dev/md0 as 2
mdadm: added /dev/sda1 to /dev/md0 as 3
mdadm: added /dev/sdm1 to /dev/md0 as 4
mdadm: added /dev/sdk1 to /dev/md0 as 5
mdadm: added /dev/sdg1 to /dev/md0 as 6
mdadm: added /dev/sdj1 to /dev/md0 as 7
mdadm: added /dev/sdc1 to /dev/md0 as 8
mdadm: added /dev/sdl1 to /dev/md0 as 9
mdadm: added /dev/sdd1 to /dev/md0 as 10
mdadm: added /dev/sdf1 to /dev/md0 as 11
mdadm: added /dev/sdi1 to /dev/md0 as 13
mdadm: added /dev/sdb1 to /dev/md0 as 0
mdadm: /dev/md0 assembled from 10 drives and 1 spare - not enough to start the array.

Answer 1

First, drive re-lettering does sometimes happen, depending on how your machine is set up. Drive letters haven't been expected to stay stable across reboots for, well, a while now. So your drives moving around on you isn't a big problem.

Assuming dmraid and device-mapper aren't using your devices:

Well, mdadm --stop /dev/md0 will probably take care of your busy messages; I suspect that's why it's complaining. Then you can try your assemble line again. If that doesn't work, --stop again, then assemble with --run (without --run, --assemble --scan won't start a degraded array). You can then remove and re-add the failed disk to let it attempt a rebuild.
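
As a minimal sketch of that sequence (device names are the ones from this thread; double-check against your own system before running anything):

sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan --run --verbose

# if the array comes up degraded, remove and re-add the failed member, e.g.:
sudo mdadm /dev/md0 --remove /dev/sde1
sudo mdadm /dev/md0 --add /dev/sde1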

/dev/sde is outdated (look at the Events counter). The others look OK at first glance, so I think there's actually a good chance this will go without difficulty.
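
A quick one-liner to compare the event counters across all members (adjust the glob to your drive letters):

sudo mdadm --examine /dev/sd[a-m]1 | grep -E '^/dev|Events'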

You should not zero any superblocks yet; the risk of data loss is too high. If --run doesn't work, I think you'll want to find someone locally (or who can ssh in) who knows what he/she is doing to attempt a fix.

Response to update 1

"Not enough to start the array" is never good news to get from mdadm. It means mdadm has found 10 drives out of your 12-drive RAID5 array, and I hope you're aware that RAID5 can only survive one failure, not two.

OK, let's try to piece together what happened. First, the drive letters changed across a reboot, which is annoying for us trying to make sense of this, but mdraid doesn't care about that. Reading through the mdadm output, here is the remapping that happened (sorted by raid disk #):

00 sdh1 -> sdb1
02 sdk1 -> sde1 [OUTDATED]
03 sdg1 -> sda1
04 sdf1 -> sdm1
05 sdd1 -> sdk1
06 sdm1 -> sdg1
07 sdc1 -> sdj1
08 sdi1 -> sdc1
09 sde1 -> sdl1
10 sdj1 -> sdd1
11 sdl1 -> sdf1
13 sdb1 -> sdi1 [SPARE]

The "Events" counter on #02 is lower than the others'. That means it left the array at some point.

It would be good if you knew some of the history of this array - e.g., is "12-drive RAID5 with 1 hot spare" correct?

I'm not sure what the sequence of failures leading to this was, though. It appears that at some point, device #1 failed, and a rebuild onto device #12 started.

But I can't work out exactly what happened next. Maybe you have logs - or an admin to ask. Here is what I can't explain:

Somehow, #12 became #13. Somehow, #2 became #12.

So the rebuild onto #12 should have finished, after which #12 would have become #1. Maybe it didn't - maybe it failed to rebuild for some reason. Then maybe #2 failed - or maybe #2 failed, which is why the rebuild didn't finish - and someone tried removing and re-adding #2? That could have made it #12. Then maybe the spare was removed and re-added, making it #13.

OK. But of course, if this is what happened, you've suffered two disk failures. Which means you've lost data. What you do next depends on how important that data is (also considering how good your backups are).

If the data is very valuable (and you don't have good backups), contact data-recovery experts. Otherwise:

If the data is valuable enough, you should use dd to image all the disks involved (you can use larger disks and put a file on each one to save money - e.g., 2 or 3 TB external disks). Then make a copy of the images. Then work on recovering the copy (you can use loop devices for this).
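
A minimal sketch of the imaging step, assuming the images land on a large external disk mounted at /mnt/backup (the paths are hypothetical):

# Image each member disk to a file; noerror,sync pads unreadable sectors
sudo dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync

# Later, expose a copy of an image as a block device for the recovery work
sudo losetup -f --show /mnt/backup/copy/sda.img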

Get some more spares. You probably have one dead disk, and you have at least a few questionable ones - smartctl may be able to tell you more.
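
A sketch of the smartctl checks, assuming smartmontools is installed (run them per disk):

# Quick health verdict, then the full attribute/error dump
sudo smartctl -H /dev/sda
sudo smartctl -a /dev/sda

# Optionally start a long self-test and check the results later
sudo smartctl -t long /dev/sda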

Add --force to your --assemble line. This will make mdadm use the outdated disk anyway, which means some sectors will now hold outdated data and some won't. Add one of the new disks as a spare and let the rebuild finish. Hopefully you won't hit any bad blocks (which would make the rebuild fail; I believe the only answer there is to get the disk to map them out). Next, fsck -f the filesystem. There will most likely be errors. Once they're fixed, mount it and see what shape your data is in.
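
Sketching those steps as commands (device names are examples - /dev/sdn1 stands in for a freshly added disk; treat this as an outline, not something to paste blindly):

# Force assembly, accepting the outdated member
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan --force --verbose

# Add a fresh disk as a spare so the rebuild can run (hypothetical device)
sudo mdadm /dev/md0 --add /dev/sdn1

# After the rebuild completes, check the filesystem before mounting
sudo fsck -f /dev/md0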

Advice

Don't build 12-disk RAID5 arrays in the future; the probability of two disks failing is too high. Use RAID6 or RAID10 instead. Also, make sure you scrub the arrays for bad blocks regularly (echo check > /sys/block/md0/md/sync_action).
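
For the regular scrub, a minimal cron sketch (assumes the array is md0; the file name and schedule are just examples):

# /etc/cron.d/mdadm-scrub - scrub md0 at 03:00 on the 1st of each month
0 3 1 * * root echo check > /sys/block/md0/md/sync_action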

Answer 2

You can try booting with the following kernel parameter and then running the mdadm commands from there: init=/bin/bash
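
With GRUB 2, one way to pass that parameter for a single boot (a sketch of the usual procedure; the exact linux line varies per system):

# At the GRUB menu, press 'e' to edit the entry and append to the 'linux' line:
linux /vmlinuz-... root=... ro init=/bin/bash
# then boot with Ctrl-X and run the mdadm commands from the resulting shell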

Answer 3

Not sure this is the best way to solve the problem, but it helped me when one drive in my RAID10 went out of sync for unknown reasons:

First, I stopped all the RAID containers I could find with sudo mdadm --stop /dev/md* (be careful here in case you're running multiple RAIDs, some of which you may depend on). Then I re-created all the RAIDs using the scan command:

sudo mdadm --assemble --scan --verbose

However, this created a separate container for the out-of-sync drive, so I stopped it with sudo mdadm --stop /dev/mdX (you can find out which container X is by checking sudo mdadm --detail /dev/md*; in my case I could also see the device name of the out-of-sync drive there, /dev/sdg). Finally, I re-added this drive to the parent container, in my case md127:

sudo mdadm --manage /dev/md127 -a /dev/sdg

Now it started syncing, which I could tell by checking

sudo watch cat /proc/mdstat
[===>............................] recovery = 8.3%
