RAID recovery on a QNAP TS-869U-RP

My trusty QNAP NAS (a TS-869U-RP) failed overnight: two of the eight disks in its RAID5 mdadm array are being reported as failed.

/proc/mdstat

[/etc] # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md0 : inactive sdb3[1](S) sdh3[7](S) sdg3[6](S) sdf3[5](S) sde3[4](S) sdd3[3](S) sdc3[2](S) sda3[0](S)
      15615564800 blocks

md8 : active raid1 sdh2[2](S) sdf2[3](S) sde2[4](S) sdd2[5](S) sdc2[1] sdb2[0]
      530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdd4[7] sde4[6] sdf4[5] sdg4[4] sdh4[3] sdc4[2] sdb4[1]
      458880 blocks [8/8] [UUUUUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdb1[1] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2]
      530048 blocks [8/7] [_UUUUUUU]
      bitmap: 65/65 pages [260KB], 4KB chunk

Messages

[  329.450972] md: md0 stopped.
[  329.652404] md: bind<sda3>
[  329.653837] md: bind<sdc3>
[  329.655223] md: bind<sdd3>
[  329.656624] md: bind<sde3>
[  329.658021] md: bind<sdf3>
[  329.659409] md: bind<sdg3>
[  329.661720] md: bind<sdh3>
[  329.662977] md: bind<sdb3>
[  329.664083] md: kicking non-fresh sdg3 from array!
[  329.665110] md: bind<sde2>
[  329.665113] md: unbind<sdg3>
[  329.670009] md: export_rdev(sdg3)
[  329.671043] md: kicking non-fresh sda3 from array!
[  329.671981] md: unbind<sda3>
[  329.678009] md: export_rdev(sda3)
[  329.679473] md/raid:md0: not clean -- starting background reconstruction
[  329.680449] md/raid:md0: device sdb3 operational as raid disk 1
[  329.681408] md/raid:md0: device sdh3 operational as raid disk 7
[  329.682325] md/raid:md0: device sdf3 operational as raid disk 5
[  329.683219] md/raid:md0: device sde3 operational as raid disk 4
[  329.684085] md/raid:md0: device sdd3 operational as raid disk 3
[  329.684939] md/raid:md0: device sdc3 operational as raid disk 2
[  329.695598] md/raid:md0: allocated 136320kB
[  329.696599] md/raid:md0: not enough operational devices (2/8 failed)
[  329.697497] RAID conf printout:
[  329.697499]  --- level:5 rd:8 wd:6
[  329.697504]  disk 1, o:1, dev:sdb3
[  329.697507]  disk 2, o:1, dev:sdc3
[  329.697510]  disk 3, o:1, dev:sdd3
[  329.697514]  disk 4, o:1, dev:sde3
[  329.697516]  disk 5, o:1, dev:sdf3
[  329.697519]  disk 7, o:1, dev:sdh3
[  329.706658] md/raid:md0: failed to run raid set.
[  329.707554] md: pers->run() failed ...
[  330.713729] md: md0 stopped.
[  330.714629] md: unbind<sdb3>
[  330.719018] md: export_rdev(sdb3)
[  330.720035] md: unbind<sdh3>
[  330.724009] md: export_rdev(sdh3)
[  330.724860] md: unbind<sdf3>
[  330.736010] md: export_rdev(sdf3)
[  330.736851] md: unbind<sde3>
[  330.744010] md: export_rdev(sde3)
[  330.744792] md: unbind<sdd3>
[  330.752010] md: export_rdev(sdd3)
[  330.752760] md: unbind<sdc3>
[  330.760010] md: export_rdev(sdc3)
[  331.718141] md: bind<sdf2>
[  332.912023] md: md0 stopped.
[  333.932428] md: bind<sdh2>
[  336.469591] md: md0 stopped.
[  338.488805] md: md0 stopped.
[  338.713968] md: md8: recovery done.

All of the drives report themselves as clean, even the ones that were kicked from the array as unhealthy (non-fresh), e.g.:

[/etc] # mdadm -E /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : eac0dd4a:cf58c4fd:29cb9955:7f700b4b
  Creation Time : Wed Dec  1 12:09:20 2010
     Raid Level : raid5
  Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
     Array Size : 13663619200 (13030.64 GiB 13991.55 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0

    Update Time : Sun Jan 20 03:00:38 2013
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 1ecb8590 - correct
         Events : 0.40

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
   4     4       8       67        4      active sync   /dev/sde3
   5     5       8       83        5      active sync   /dev/sdf3
   6     6       8       99        6      active sync   /dev/sdg3
   7     7       8      115        7      active sync   /dev/sdh3
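For context, md kicks a member as "non-fresh" when its event counter has fallen behind the rest of the set, which is why a drive can look clean in isolation and still be refused at assembly time. A quick way to compare the counters across all members (a sketch only, using the device names from the output above):

# Print each member's device name and event counter; members whose
# "Events" value lags the majority are the ones kicked as non-fresh.
mdadm --examine /dev/sd[a-h]3 | grep -E '/dev/sd|Events'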

Any ideas what I can do here? I assumed everything was lost since two drives are apparently not working, yet they also look clean/healthy, so something else must be going on.
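One option I have not yet attempted, and am only sketching here, is a forced assembly: if the event counters on the kicked members are only slightly behind, mdadm can be told to accept their stale superblocks and start the array degraded. This should only be tried after backing up or imaging the disks where possible.

# Sketch only: stop the half-assembled array, then force-assemble md0
# from all eight members, starting it even if it comes up degraded.
mdadm --stop /dev/md0
mdadm --assemble --force --run /dev/md0 /dev/sd[a-h]3
cat /proc/mdstat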

It seems the web interface (which shows drive health and the RAID control panel) must be backed by the main array (/dev/md0), which is down, because I cannot reach anything on the QNAP server other than SSH.

Thanks

Answer 1

Disregard this question.

I finally spoke to QNAP support, who confirmed that sda was the problem drive and that the other drive had merely been flagged as not clean but is otherwise fine. For now I am running the array on 7 of the 8 drives while waiting for two replacement disks to be delivered.
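Once a replacement disk arrives and is partitioned to match the old layout, re-adding it should just be an mdadm hot-add; a minimal sketch, assuming the new data partition appears as /dev/sda3 like the old one:

# Sketch: hot-add the replacement's data partition and let md rebuild onto it.
mdadm --manage /dev/md0 --add /dev/sda3
cat /proc/mdstat    # shows rebuild/recovery progress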
