RAID 1 to RAID 5 failure

I could really use your help with this one:

I have a NAS that can hold a RAID of four disks. For a long time I ran it with just two disks, sda and sdd, in RAID 1; they are WD30EFRX drives. I have now bought two more WD30EFRX (refurbished), and my idea was to add them and turn the whole thing into a RAID 5 array. These are the steps I took:

Did not make a backup (because I'm stupid...).

Unmounted everything:

$ sudo umount /srv/dev-disk-by-uuid-d1430a9e-6461-481b-9765-86e18e517cfc

$ sudo umount -f /dev/md0

Stopped the array:

$ sudo mdadm --stop /dev/md0

Changed the array to a RAID 5 containing only the existing disks:

$ sudo mdadm --create /dev/md0 -a yes -l 5 -n 2 /dev/sda /dev/sdd

This is where I made a mistake: I used the whole disks instead of the /dev/sd[ad]1 partitions. mdadm warned me that /dev/sdd had a partition that would be overwritten... and I pressed "y" to continue... :-( It took a long time and finished without any errors.
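In hindsight, and assuming I am reading the mdadm man page correctly (I have not tried this), the non-destructive route would have been to leave the RAID 1 assembled and change its level in place instead of re-creating it:

# Hypothetical, untested by me: convert the existing two-disk RAID 1
# (still on the original /dev/sda1 and /dev/sdd1 partitions) to RAID 5
# in place, without rewriting the data:
$ sudo mdadm --grow /dev/md0 --level=5

As far as I can tell, that would have kept the existing filesystem intact, and the --add and --grow steps below would then have worked the same way.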

Then I added the two new disks, /dev/sdb and /dev/sdc, to the array:

$ sudo mdadm --add /dev/md0 /dev/sdb
$ sudo mdadm --add /dev/md0 /dev/sdc

And grew it so that all four disks are actually used:

$ sudo mdadm --grow /dev/md0 --raid-disk=4

While this was running, /proc/mdstat showed the reshape in progress:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sdc[4] sdb[3] sdd[2] sda[0]
      2930134016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [==================>..]  reshape = 90.1% (2640502272/2930134016) finish=64.3min speed=75044K/sec
      bitmap: 0/22 pages [0KB], 65536KB chunk
$ sudo mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri Mar 11 16:10:02 2022
        Raid Level : raid5
        Array Size : 2930134016 (2794.39 GiB 3000.46 GB)
     Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Mar 12 20:20:14 2022
             State : clean, reshaping 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Reshape Status : 97% complete
     Delta Devices : 2, (2->4)

              Name : helios4:0  (local to host helios4)
              UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
            Events : 12037

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       2       8       48        1      active sync   /dev/sdd
       4       8       32        2      active sync   /dev/sdc
       3       8       16        3      active sync   /dev/sdb

When that long process had finished without problems, I ran e2fsck:

$ sudo e2fsck /dev/md0

and... it came back with this:

e2fsck 1.46.2 (28-Feb-2021)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md0
 
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
or
    e2fsck -b 32768 <device>
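(Side note: as far as I understand, the backup-superblock offsets that e2fsck is referring to can be listed without writing anything via a dry-run mkfs; the numbers only apply if the filesystem geometry matches, so this is just for orientation:)

# Dry run only: -n makes mke2fs print what it would do, including the
# "Superblock backups stored on blocks: ..." list, without touching /dev/md0.
$ sudo mke2fs -n /dev/md0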

At this point I realized I had made some mistakes along the way... I googled the problem, and judging from this post I think the order of the disks in the array somehow got "reversed": https://forum.qnap.com/viewtopic.php?t=125534

So the partitions are "gone", and when I tried to assemble the array I got the following:

$ sudo mdadm --assemble --scan -v

mdadm: /dev/sdd is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdb is identified as a member of /dev/md/0, slot 3.
mdadm: /dev/sdc is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sda is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdd to /dev/md/0 as 1
mdadm: added /dev/sdc to /dev/md/0 as 2
mdadm: added /dev/sdb to /dev/md/0 as 3
mdadm: added /dev/sda to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 4 drives.

$ dmesg

[143605.261894] md/raid:md0: device sda operational as raid disk 0
[143605.261909] md/raid:md0: device sdb operational as raid disk 3
[143605.261919] md/raid:md0: device sdc operational as raid disk 2
[143605.261927] md/raid:md0: device sdd operational as raid disk 1
[143605.267400] md/raid:md0: raid level 5 active with 4 out of 4 devices, algorithm 2
[143605.792653] md0: detected capacity change from 0 to 17580804096

$ cat /proc/mdstat 

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid5 sda[0] sdb[3] sdc[4] sdd[2]
      8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk


$ sudo mdadm -D /dev/md0 

/dev/md0:
           Version : 1.2
     Creation Time : Fri Mar 11 16:10:02 2022
        Raid Level : raid5
        Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
     Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Mar 12 21:24:59 2022
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : helios4:0  (local to host helios4)
              UUID : 8e1ac1a8:8eabc3de:c01c8976:0be5bf6c
            Events : 12124

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       2       8       48        1      active sync   /dev/sdd
       4       8       32        2      active sync   /dev/sdc
       3       8       16        3      active sync   /dev/sdb

The array assembles, but there is no superblock.
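Regarding the old RAID 1: I assume the first sensible step is to check, strictly read-only, whether anything of the original partition table or metadata is still on the two original disks. Something like the following (hypothetical commands on my part; none of them write to the disks):

$ sudo fdisk -l /dev/sda /dev/sdd          # is there still a partition table?
$ sudo mdadm --examine /dev/sda /dev/sdd   # md superblocks (now on the whole disks)
$ sudo wipefs /dev/sda                     # without -a/-o, wipefs only lists signatures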

At this stage I did a photorec run, trying to recover my precious data (mostly family photos):

$ sudo photorec /log /d ~/k/RAID_REC/ /dev/md0

It did recover quite a lot, but the rest is corrupted: during the photorec run (sector by sector) the sector counter keeps climbing, but then it "resets" to a lower value (I suspect the disks are shuffled inside the array) and it starts recovering files again (some of them identical to ones it had already recovered).

So, my questions: is there any chance of putting the array back together correctly without losing the data inside it? Is it possible to recover the "lost" partition that used to exist on the RAID 1, so that I can finally make a proper backup? Or is my only option to get the disks into the right order inside the array so that photorec can recover the files properly?
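To be clear about what I mean by "redoing the array": I am thinking of something like the sketch below, but I understand that --create rewrites the md metadata, so I would only attempt it on images or overlays of the disks, and the member order and chunk size in it are pure guesses on my part:

# HYPOTHETICAL, only on copies/overlays of the disks, never the originals.
# --assume-clean skips the initial resync, so the data blocks are not rewritten;
# the device order below is just one guess that would need to be tested.
$ sudo mdadm --create /dev/md0 --assume-clean -l 5 -n 4 -c 512 /dev/sda /dev/sdd /dev/sdc /dev/sdb
# Read-only check of whether that particular order yields a valid filesystem:
$ sudo fsck.ext4 -n /dev/md0

Is that the right direction, or would I just be digging the hole deeper?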

Thanks for any help you can give. It is much appreciated!

Best,

George
