mdadm RAID 10 filesystem corruption -- can it be recovered?

Question: The RAID 10 array I created with mdadm had a degraded disk. I removed that disk first and replaced it with a new one, but when I rebuilt, I got an error saying it could not detect a filesystem on the disk. With fdisk I realized the disk label was dos (/dev/sdc) while the partition inside it was GPT (/dev/sdc1). I had used /dev/sdc. The computer booted fine, so I decided to remove the disk and try again after wiping it and putting a GPT table on it. I did that and then re-added it to the RAID array. It worked; the array spent two days resyncing the disk. Afterwards, when I rebooted, I got an error.
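
For reference, the remove/wipe/re-add sequence was roughly the following. I'm reconstructing it from memory, so treat the exact commands as a sketch (/dev/sdc is the replacement disk):

    # pull the old member out of the array
    mdadm --manage /dev/md0 --fail /dev/sdc
    mdadm --manage /dev/md0 --remove /dev/sdc
    # wipe the old MBR/GPT signatures, then write a fresh empty GPT
    sgdisk --zap-all /dev/sdc
    sgdisk --clear /dev/sdc
    # re-add the whole disk; the resync then ran for about two days
    mdadm --manage /dev/md0 --add /dev/sdc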

I have no backup and I'm hoping to salvage at least some of the data. From an article I read, my understanding is that the metadata can end up corrupting the filesystem if the disk is read in twice. I want to salvage the array, but if there is no way to do that, maybe someone can give me a better option for saving the data.
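
Before experimenting any further, my plan is to image each member disk so that nothing I try is irreversible. Something like this per disk (the destination paths are placeholders; I don't actually have spare 10 TB disks yet):

    # copy the raw member to an image file, skipping the slow scraping phase
    ddrescue -n /dev/sda /mnt/backup/sda.img /mnt/backup/sda.map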

The error at boot:

systemd-fsck[687]: fsck.ext4: Bad magic number in super-block while trying to open /dev/md0
systemd-fsck[687]: /dev/md0:
systemd-fsck[687]: The superblock could not be read or does not describe a valid ext2/ext3/ext4
systemd-fsck[687]: filesystem. If the device is valid and it really contains an ext2/ext3/ext4
systemd-fsck[687]: filesystem (and not swap or ufs or something else), then the superblock
systemd-fsck[687]: is corrupt, and you might try running e2fsck with an alternate superblock:
systemd-fsck[687]:     e2fsck -b 8193 <device>
systemd-fsck[687]: or
systemd-fsck[687]:     e2fsck -b 32768 <device>
systemd-fsck[687]: fsck failed with exit status 8.

Uh oh. And then this:

kernel: EXT4-fs (md0): VFS: Can't find ext4 filesystem
mount[693]: mount: /media/raid10: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
systemd[1]: media-raid10.mount: Mount process exited, code=exited status=32
systemd[1]: media-raid10.mount: Failed with result 'exit-code'
systemd[1]: Failed to mount /media/raid10.

I tried several solutions from Stack Overflow, but many of them seemed too destructive. Every attempt to mount failed. I haven't tried to force anything, but mdadm seems to report the array as OK. I removed the re-added disk to attempt a recovery, but Ubuntu still refuses to boot.
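
The least destructive checks I can think of are the ones below. As far as I understand, -n makes both commands read-only, but I would appreciate confirmation before running anything else against the array:

    # dry run: prints where the backup superblocks would be, writes nothing
    mke2fs -n /dev/md0
    # read-only check against one of the reported backups (-n answers no to all repairs)
    e2fsck -n -b 32768 /dev/md0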

Output of mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Mon Mar 23 01:06:26
        Raid Level : raid10
        Array Size : 19532609536 (18627.75 GiB 20001.39 GB)
     Used Dev Size : 9766304768 (9313.87 GiB 10000.70 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent
    
     Intent Bitmap : Internal

       Update Time : Fri Sep 18 15:56:34 2020
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap
     
              Name : ubuntu-server:0
              UUID : d4c7dc04:6db4b430:66269a3b:44ee5c02
            Events : 3501

Number   Major   Minor   RaidDevice   State
  0        8       0          0       active  sync set-A      /dev/sda
  1        8      16          1       active  sync set-B      /dev/sdb
  2        8      48          2       active  sync set-A      /dev/sdd
  -        0       0          3       removed

Output of cat /proc/mdstat

Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sdd[2] sda[0] sdb[1]
      19532609536 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
      bitmap: 0/146 pages [0KB], 65536KB chunk
unused devices: <none>
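
If it helps, I can also post the per-member RAID metadata. I would collect it with a read-only examine over the surviving members (a sketch, assuming the members are the whole disks listed above):

    # dump the md superblock of each surviving member (read-only)
    for d in /dev/sda /dev/sdb /dev/sdd; do
        echo "== $d =="
        mdadm --examine "$d"
    done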

Edit: here is what I get when I run dumpe2fs -h -o superblock=8193 /dev/md0; dumpe2fs -h -o superblock=32768 /dev/md0

dumpe2fs 1.44.1 (24-Mar-2018)
dumpe2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock
dumpe2fs 1.44.1 (24-Mar-2018)
dumpe2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock 
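
Since 8193 assumes a 1K block size and 32768 a 4K one, my next idea is to sweep the usual backup locations for a 4K-block filesystem, still read-only. The block numbers below are the standard first few backups (group size 32768, backups at groups 1, 3, 5, 7, 9); the exact values for an array this size are my assumption, which is why I would confirm them with mke2fs -n first:

    # probe the usual 4K-block backup superblocks without writing anything
    for sb in 32768 98304 163840 229376 294912; do
        echo "== superblock $sb =="
        dumpe2fs -h -o superblock=$sb /dev/md0 2>&1 | head -n 3
    done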
