RAID 5 degraded after a power outage

I have a RAID 5 array made up of 8 disks. After a power outage, mdadm reports the array as degraded. Here is the output:

/dev/md0:
        Version : 0.90
  Creation Time : Sat Jul 10 09:08:28 2010
     Raid Level : raid5
     Array Size : 13674601472 (13041.12 GiB 14002.79 GB)
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
   Raid Devices : 8
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Aug  2 18:42:05 2012
          State : clean, degraded
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 09dd4536:a6153f6b:f1f4aaa9:53aab85a
         Events : 0.9189750

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       96        2      active sync   /dev/sdg
       3       8      112        3      active sync   /dev/sdh
       4       8       32        4      active sync   /dev/sdc
       5       8       48        5      active sync   /dev/sdd
       6       8       64        6      active sync   /dev/sde
       7       0        0        7      removed

So /dev/sdf is missing. I tried to get some information about that drive: smartctl reports no problems, and `fdisk -l` lists it as expected. Finally I ran `mdadm -E /dev/sdf` and got this output:

/dev/sdf:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 09dd4536:a6153f6b:f1f4aaa9:53aab85a
  Creation Time : Sat Jul 10 09:08:28 2010
     Raid Level : raid5
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
     Array Size : 13674601472 (13041.12 GiB 14002.79 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0

    Update Time : Wed Mar 28 17:19:58 2012
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0
       Checksum : afeeecc7 - correct
         Events : 9081618

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     7       8      112        7      active sync   /dev/sdh

   0     0       8        0        0      active sync   /dev/sda
   1     1       8       16        1      active sync   /dev/sdb
   2     2       8       32        2      active sync   /dev/sdc
   3     3       8       48        3      active sync   /dev/sdd
   4     4       8       64        4      active sync   /dev/sde
   5     5       8       80        5      active sync   /dev/sdf
   6     6       8       96        6      active sync   /dev/sdg
   7     7       8      112        7      active sync   /dev/sdh

What is wrong here? Should I replace sdf, or can this be repaired?

Update

I also looked at dmesg and found this:

[    9.289086] md: kicking non-fresh sdf from array!
[    9.289090] md: unbind<sdf>
[    9.296541] md: export_rdev(sdf)
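The "non-fresh" message is the key: the kernel kicked sdf out because its superblock's event counter is behind the rest of the array (note the Update Time on sdf is months older than the array's). As a rough sketch, you can see the mismatch by comparing the two `Events` values quoted above:

```shell
# Event counters copied from the mdadm output quoted in this question.
array_events=9189750   # from the array detail (Events : 0.9189750)
disk_events=9081618    # from `mdadm -E /dev/sdf` (Events : 9081618)

# A member whose counter is lower than the array's is "non-fresh"
# and gets kicked at assembly time.
if [ "$disk_events" -lt "$array_events" ]; then
    echo "sdf is stale: $((array_events - disk_events)) events behind"
fi
```

A small gap can sometimes be tolerated, but a difference this large means the disk's data is far out of date and must be resynced.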

Answer 1

Well, I was too quick to post this question. It turns out this is simply what happens after a power outage.

I just re-added the drive (`mdadm /dev/md0 --add /dev/sdf`), and mdadm started rebuilding the array.
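For anyone in the same situation, a minimal sketch of the recovery and how to watch the rebuild progress (these commands act on real hardware, so run them only on the affected array):

```shell
# Put the kicked disk back into the array; because its superblock
# still matches the array's UUID, mdadm will resync it.
mdadm /dev/md0 --add /dev/sdf

# Watch the rebuild progress.
cat /proc/mdstat
mdadm --detail /dev/md0
```

While the resync runs, `/proc/mdstat` shows a progress bar and an estimated finish time; the array stays usable but is still degraded (no redundancy) until the rebuild completes.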
