Mdadm RAID6 re-(assemble|build|create) from 6x disk images of the original array

I have six 1 TB drives that are a few years old. The Seagate drives had a microcode problem, so I updated it. Unfortunately, that reset the UUIDs on two of the drives and the RAID6 array was broken. I was busy at the time, so I put the drives into storage for later. Now I'm back (years later), nothing is the same any more, and all I have are the 6 drives and 6 images taken from them. Does anyone know how I can get at the assembled array to check what data is on it?

I don't think this exact problem has been fully covered before; I tried to find a simple step-by-step guide, but couldn't find one that matched my situation.

I'm currently running Linux Mint 19.3; at the time of the failure it was probably Xubuntu 13.04.

mdadm --version
mdadm - v4.1-rc1 - 2018-03-22

I managed to extract this information from the original log files.

-----------------------------------------------------------
[    2.042156] scsi 0:0:0:0: Direct-Access     ATA      Hitachi HDS72105 JP2O PQ: 0 ANSI: 5
[    2.042742] sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/465 GiB)
-----------------------------------------------------------
[    2.043315] scsi 1:0:0:0: Direct-Access     ATA      ST3750640A       3.AA PQ: 0 ANSI: 5
[    2.044099] sd 1:0:0:0: [sdb] 1465149168 512-byte logical blocks: (750 GB/698 GiB)
-----------------------------------------------------------
[    2.353991] scsi 6:0:0:0: Direct-Access     ATA      MAXTOR STM310003 MX15 PQ: 0 ANSI: 5
[    2.354401] sd 6:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
-----------------------------------------------------------
[    2.354937] scsi 8:0:0:0: Direct-Access     ATA      ST31500541AS     CC34 PQ: 0 ANSI: 5
[    2.355790] sd 8:0:0:0: [sdd] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
-----------------------------------------------------------
[    2.355895] scsi 9:0:0:0: Direct-Access     ATA      MAXTOR STM310003 MX15 PQ: 0 ANSI: 5
[    2.356635] sd 9:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
-----------------------------------------------------------
[    2.811740] scsi 10:0:0:0: Direct-Access     ATA      ST31500341AS     CC1H PQ: 0 ANSI: 5
[    2.812103] sd 10:0:0:0: [sdf] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
-----------------------------------------------------------
[    3.210713] scsi 11:0:0:0: Direct-Access     ATA      ST31500541AS     CC34 PQ: 0 ANSI: 5
[    3.211137] sd 11:0:0:0: [sdg] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
-----------------------------------------------------------
[    3.600925] scsi 12:0:0:0: Direct-Access     ATA      WDC WD10EACS-00D 01.0 PQ: 0 ANSI: 5
[    3.601341] sd 12:0:0:0: [sdh] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
-----------------------------------------------------------
[   21.333709] md: bind<sdf1>
[   21.334160] md: bind<sdg1>
[   21.334504] md: bind<sdh1>
[   21.334817] md: bind<sdd1>
[   21.335197] md: bind<sdc1>
[   21.335474] md: bind<sde1>
[   22.902998] raid5: device sde1 operational as raid disk 0 +
[   22.903008] raid5: device sdc1 operational as raid disk 5 +
[   22.903016] raid5: device sdd1 operational as raid disk 4 *
[   22.903023] raid5: device sdh1 operational as raid disk 3 %
[   22.903031] raid5: device sdg1 operational as raid disk 2 *
[   22.903038] raid5: device sdf1 operational as raid disk 1 $
[   22.904902] raid5: allocated 6386kB for md0
[   22.905042] 0: w=1 pa=0 pr=6 m=2 a=2 r=6 op1=0 op2=0
[   22.905053] 5: w=2 pa=0 pr=6 m=2 a=2 r=6 op1=0 op2=0
[   22.905063] 4: w=3 pa=0 pr=6 m=2 a=2 r=6 op1=0 op2=0
[   22.905072] 3: w=4 pa=0 pr=6 m=2 a=2 r=6 op1=0 op2=0
[   22.905082] 2: w=5 pa=0 pr=6 m=2 a=2 r=6 op1=0 op2=0
[   22.905091] 1: w=6 pa=0 pr=6 m=2 a=2 r=6 op1=0 op2=0
[   22.905100] raid5: raid level 6 set md0 active with 6 out of 6 devices, algorithm 2
[   22.905108] RAID5 conf printout:
[   22.905113]  --- rd:6 wd:6
[   22.905122]  disk 0, o:1, dev:sde1
[   22.905130]  disk 1, o:1, dev:sdf1
[   22.905137]  disk 2, o:1, dev:sdg1
[   22.905145]  disk 3, o:1, dev:sdh1
[   22.905152]  disk 4, o:1, dev:sdd1
[   22.905159]  disk 5, o:1, dev:sdc1
[   22.905299] md0: detected capacity change from 0 to 4000808697856

Every RAID member is a partition of the same size (~1 TB), as the checksum/size listing of the image files shows:

9e43e11e04ac5d8f    1000202241024   raid_dev1.img
e1d810f9cea1cbff    1000202241024   raid_dev2.img
633e675b9b958a18    1000202241024   raid_dev3.img
8b881f07549fc7c9    1000202241024   raid_dev4.img
5727cefbc60af466    1000202241024   raid_dev5.img
1dacd8b59f896a85    1000202241024   raid_dev6.img
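
The sizes are easy to re-verify; a sketch (the hash tool behind the first column isn't shown here, but any of b2sum/xxhsum would serve the same purpose):

stat -c '%s  %n' raid_dev{1..6}.img   # all six should report 1000202241024 bytes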

Each image is attached to a loop device, e.g.:

for N in {1..6} ; do sudo losetup -f --read-only raid_dev"${N}".img ; done
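
To confirm which loop device each image ended up on (the /dev/loop12–/dev/loop17 numbering below is simply where they landed on my machine):

losetup -a | grep raid_dev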

The loop devices were then characterized with mdadm --examine:
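
Roughly this loop produced the dump below (device names assume the images landed on /dev/loop12 through /dev/loop17):

for d in /dev/loop{12..17} ; do sudo mdadm --examine "$d" ; done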

/dev/loop12:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 00000000:00000000:00000000:00000000
  Creation Time : Fri May 18 04:32:55 2012
     Raid Level : -unknown-
   Raid Devices : 0
  Total Devices : 4
Preferred Minor : 127

    Update Time : Fri May 18 05:02:45 2012
          State : active
 Active Devices : 0
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 4
       Checksum : 82bfacf0 - correct
         Events : 1


      Number   Major   Minor   RaidDevice State
this     3       8       49        3      spare   /dev/sdd1

   0     0       8      177        0      spare
   1     1       8      145        1      spare   /dev/sdj1
   2     2       8       65        2      spare
   3     3       8       49        3      spare   /dev/sdd1
/dev/loop13:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : dc7b216a:36b68a65:c395db1e:b55a1d97
  Creation Time : Mon Nov  9 06:46:41 2009
     Raid Level : raid6
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 3907039744 (3726.04 GiB 4000.81 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 0

    Update Time : Sun Apr 15 00:38:10 2012
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
       Checksum : a06dc07 - correct
         Events : 46

         Layout : left-symmetric
     Chunk Size : 4K

      Number   Major   Minor   RaidDevice State
this     1       8       33        1      active sync   /dev/sdc1

   0     0       8       17        0      active sync
   1     1       8       33        1      active sync   /dev/sdc1
   2     2       8       49        2      active sync   /dev/sdd1
   3     3       8       65        3      active sync
   4     4       8      113        4      active sync
   5     5       8       97        5      active sync   /dev/sdg1
/dev/loop14:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 00000000:00000000:00000000:00000000
  Creation Time : Fri May 18 04:32:55 2012
     Raid Level : -unknown-
   Raid Devices : 0
  Total Devices : 4
Preferred Minor : 127

    Update Time : Fri May 18 05:02:45 2012
          State : active
 Active Devices : 0
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 4
       Checksum : 82bfad4c - correct
         Events : 1


      Number   Major   Minor   RaidDevice State
this     1       8      145        1      spare   /dev/sdj1

   0     0       8      177        0      spare
   1     1       8      145        1      spare   /dev/sdj1
   2     2       8       65        2      spare
   3     3       8       49        3      spare   /dev/sdd1
/dev/loop15:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 00000000:00000000:00000000:00000000
  Creation Time : Fri May 18 04:32:55 2012
     Raid Level : -unknown-
   Raid Devices : 0
  Total Devices : 4
Preferred Minor : 127

    Update Time : Fri May 18 05:02:45 2012
          State : active
 Active Devices : 0
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 4
       Checksum : 82bfacfe - correct
         Events : 1


      Number   Major   Minor   RaidDevice State
this     2       8       65        2      spare

   0     0       8      177        0      spare
   1     1       8      145        1      spare   /dev/sdj1
   2     2       8       65        2      spare
   3     3       8       49        3      spare   /dev/sdd1
/dev/loop16:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 00000000:00000000:00000000:00000000
  Creation Time : Fri May 18 04:32:55 2012
     Raid Level : -unknown-
   Raid Devices : 0
  Total Devices : 4
Preferred Minor : 127

    Update Time : Fri May 18 05:02:45 2012
          State : active
 Active Devices : 0
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 4
       Checksum : 82bfad6a - correct
         Events : 1


      Number   Major   Minor   RaidDevice State
this     0       8      177        0      spare

   0     0       8      177        0      spare
   1     1       8      145        1      spare   /dev/sdj1
   2     2       8       65        2      spare
   3     3       8       49        3      spare   /dev/sdd1
/dev/loop17:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 00000000:00000000:00000000:00000000
  Creation Time : Wed May 16 07:23:49 2012
     Raid Level : -unknown-
   Raid Devices : 0
  Total Devices : 1
Preferred Minor : 127

    Update Time : Wed May 16 09:23:39 2012
          State : active
 Active Devices : 0
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 82baca2e - correct
         Events : 1


      Number   Major   Minor   RaidDevice State
this     0       8       17        0      spare

   0     0       8       17        0      spare

So only one of the images still has a UUID in its RAID superblock.
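
A quick per-device summary of just that field (a small sketch; same loop-device names assumed as above):

for d in /dev/loop{12..17} ; do
    printf '%s: ' "$d"
    sudo mdadm --examine "$d" | grep 'UUID'
done

Only /dev/loop13 reports the original array UUID (dc7b216a:36b68a65:c395db1e:b55a1d97); the others all report zeros.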

I don't want to damage the array contents, or to have to re-image everything all over again.
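
In case it matters for an answer: for any experiments I intend to keep the image files strictly read-only and send all writes to copy-on-write overlays, along the lines of the overlay-file technique from the linux-raid wiki. A sketch, assuming the read-only loops from above (loop12–loop17) and hypothetical raid_ovl* names:

# One sparse overlay file + one dm snapshot per image; mdadm's writes land in
# the overlay, never in the image. Deleting the overlays resets everything.
for N in {1..6} ; do
    img_loop=/dev/loop$((11 + N))                    # assumed image-to-loop mapping
    truncate -s 1G overlay"${N}".img                 # sparse; only written blocks use space
    ovl_loop=$(sudo losetup -f --show overlay"${N}".img)
    size=$(sudo blockdev --getsz "$img_loop")        # device size in 512-byte sectors
    echo "0 $size snapshot $img_loop $ovl_loop P 8" |
        sudo dmsetup create raid_ovl"${N}"           # -> /dev/mapper/raid_ovl1..6
done

Anything destructive can then be pointed at /dev/mapper/raid_ovl* instead of at the loops themselves.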

To be honest, I can't really remember what I did at the time, so I may already be in over my head. But I don't want to try any more destructive tricks without getting some advice first.

Can anyone help me re-form the RAID6 array from these images, or is this a hopeless situation?
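
For what it's worth, the one surviving superblock (loop13) plus the old dmesg suggest the original parameters: metadata 0.90, level 6, 6 devices, left-symmetric layout, 4K chunk. So my untested best guess at a re-create, to be run against the overlays only, would be something like:

# Device order below is a placeholder: the real disk 0..5 order still has to
# be found (e.g. by trying permutations until a filesystem shows up).
# --assume-clean prevents any resync from being started.
sudo mdadm --create /dev/md0 --assume-clean \
    --metadata=0.90 --level=6 --raid-devices=6 \
    --layout=left-symmetric --chunk=4 \
    /dev/mapper/raid_ovl1 /dev/mapper/raid_ovl2 /dev/mapper/raid_ovl3 \
    /dev/mapper/raid_ovl4 /dev/mapper/raid_ovl5 /dev/mapper/raid_ovl6
sudo fsck -n /dev/md0     # read-only sanity check of whatever filesystem is there

But I'd rather hear from someone who has done this before I run it.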

Thanks,
