RAID-5 array degraded, apparently >1 failure; can I recover without losing data?

I just did a fresh install of Fedora 22 on a system with an existing RAID-5 array. Five drives. Overnight the kernel reported device errors, the 3 TB XFS filesystem was unmounted, and now after a reboot the array won't assemble.

This is the result of trying to assemble the array:

mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 
mdadm: /dev/md0 assembled from 2 drives and 1 spare - not enough to start the array.  

Below is the output of `mdadm --examine` for each of the five partitions. I don't understand the event counters and "Array State" well enough to interpret the differences (they are not the same on all devices).

I know not to use `--create`, but I am hesitant to try `--force` without someone looking over my shoulder.

Is this array lost? If not, what steps should I take?

/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 65f056dc:780db9b5:023c0144:77f12f74
           Name : odin.hudaceks.home:1  (local to host odin.hudaceks.home)
  Creation Time : Thu Sep 18 16:30:47 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953101824 (931.31 GiB 999.99 GB)
     Array Size : 2929651200 (2793.93 GiB 2999.96 GB)
  Used Dev Size : 1953100800 (931.31 GiB 999.99 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : 3f08354b:c076cddc:99b85968:a8928ea8

    Update Time : Sun Aug  2 22:25:33 2015
       Checksum : 9db3229f - correct
         Events : 6078

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 65f056dc:780db9b5:023c0144:77f12f74
           Name : odin.hudaceks.home:1  (local to host odin.hudaceks.home)
  Creation Time : Thu Sep 18 16:30:47 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953101824 (931.31 GiB 999.99 GB)
     Array Size : 2929651200 (2793.93 GiB 2999.96 GB)
  Used Dev Size : 1953100800 (931.31 GiB 999.99 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : addd6f2b:fb4c33a6:2a8b152e:e716eba7

    Update Time : Sun Aug  2 22:25:33 2015
       Checksum : c6c2519 - correct
         Events : 6078

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)

/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 65f056dc:780db9b5:023c0144:77f12f74
           Name : odin.hudaceks.home:1  (local to host odin.hudaceks.home)
  Creation Time : Thu Sep 18 16:30:47 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953101824 (931.31 GiB 999.99 GB)
     Array Size : 2929651200 (2793.93 GiB 2999.96 GB)
  Used Dev Size : 1953100800 (931.31 GiB 999.99 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : d92883c5:0e3ded13:75b11223:f0570e0a

    Update Time : Sun Aug  2 22:21:47 2015
       Checksum : 6b57c6ce - correct
         Events : 6073

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdf1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 65f056dc:780db9b5:023c0144:77f12f74
           Name : odin.hudaceks.home:1  (local to host odin.hudaceks.home)
  Creation Time : Thu Sep 18 16:30:47 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953101824 (931.31 GiB 999.99 GB)
     Array Size : 2929651200 (2793.93 GiB 2999.96 GB)
  Used Dev Size : 1953100800 (931.31 GiB 999.99 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : active
    Device UUID : 42eeb231:ccb57477:357d0c47:d99b159d

    Update Time : Sun Aug  2 22:21:51 2015
       Checksum : f21014a5 - correct
         Events : 6074

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdg1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 65f056dc:780db9b5:023c0144:77f12f74
           Name : odin.hudaceks.home:1  (local to host odin.hudaceks.home)
  Creation Time : Thu Sep 18 16:30:47 2014
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1953101824 (931.31 GiB 999.99 GB)
     Array Size : 2929651200 (2793.93 GiB 2999.96 GB)
  Used Dev Size : 1953100800 (931.31 GiB 999.99 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=1024 sectors
          State : clean
    Device UUID : 93469bf1:9f571d4b:dab66eb4:08c45766

    Update Time : Sun Aug  2 22:25:33 2015
       Checksum : bc477178 - correct
         Events : 6078

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : spare
   Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
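For reference, the per-device event counters and roles in the output above can be compared at a glance with a read-only one-liner over the `--examine` output (device names as in the question; adjust the glob for your system):

```shell
# Summarize event counter, role, and state per member (read-only diagnostic).
# Here sdc1/sdd1/sdg1 show Events 6078, sde1 shows 6073, sdf1 shows 6074.
mdadm --examine /dev/sd[cdefg]1 | grep -E '^/dev/|Events|Device Role|Update Time|Array State'
```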

Edit 1: adding information about the controller.

04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller (rev 11) (prog-if 01 [AHCI 1.0])
    Subsystem: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller
    Flags: bus master, fast devsel, latency 0, IRQ 30
    I/O ports at d040 [size=8]
    I/O ports at d030 [size=4]
    I/O ports at d020 [size=8]
    I/O ports at d010 [size=4]
    I/O ports at d000 [size=16]
    Memory at fe510000 (32-bit, non-prefetchable) [size=2K]
    Expansion ROM at fe500000 [disabled] [size=64K]
    Capabilities: [40] Power Management version 3
    Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit-
    Capabilities: [70] Express Legacy Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Kernel driver in use: ahci

Answer 1

You have a spare, /dev/sdg1, right? If it is only a spare, it should not contain any data, so it is useless for your recovery effort.

/dev/sde1 failed, and shortly afterwards /dev/sdf1 failed too. The big question is: why? Are these disks really bad? Have you checked SMART and run self-tests? Or was it a controller/cable/power problem that you have since fixed?
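Checking disk health before any recovery attempt can be done with smartctl (read-only apart from starting a self-test; device names taken from the question):

```shell
# Quick health verdict and full SMART attributes/error log
smartctl -H /dev/sde
smartctl -a /dev/sde

# Kick off a short self-test; it runs in the background on the drive.
# Results land in the self-test log after a couple of minutes.
smartctl -t short /dev/sde
smartctl -l selftest /dev/sde
```

Repeat for /dev/sdf. Pending/reallocated sector counts and the error log usually distinguish a genuinely failing disk from a cable or controller glitch.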

If you want to play it safe, use this method:

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file

Then `assemble --force` using /dev/sdc1 /dev/sdd1 /dev/sdf1 (the two intact disks plus the last one to fail). If it comes up, and the filesystem on the RAID was in use at the time of failure, it will look like it does after a power cut and may need fsck. That is all the more reason to do this on a copy-on-write layer, so you can undo it if something goes wrong.
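A sketch of that procedure, using device-mapper snapshot overlays as described on the linked wiki page so the real disks stay untouched (overlay sizes and paths are illustrative; repeat the overlay steps for each member):

```shell
# Create a sparse overlay file and back a dm snapshot with it,
# so all writes go to the overlay instead of the real partition.
truncate -s 4G /tmp/overlay-sdc1.img
loop=$(losetup -f --show /tmp/overlay-sdc1.img)
size=$(blockdev --getsz /dev/sdc1)
dmsetup create sdc1-cow --table "0 $size snapshot /dev/sdc1 $loop P 8"
# ...same for sdd1 and sdf1, then force-assemble from the overlays:
mdadm --assemble --force /dev/md0 \
    /dev/mapper/sdc1-cow /dev/mapper/sdd1-cow /dev/mapper/sdf1-cow

# The filesystem here is XFS, so use xfs_repair rather than fsck;
# -n is a dry run that only reports problems.
xfs_repair -n /dev/md0
```

If the forced assembly and repair look sane on the overlays, you can tear them down (`dmsetup remove`, `losetup -d`) and repeat the assembly against the real partitions.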
