mdadm - accidentally ran "mdadm --create" on an existing RAID-1. The superblocks are now corrupt and I can't recover the data. Have I just destroyed my data?

I previously had /dev/sdb1 and /dev/sdc2 set up as a RAID-1 array with mdadm, but I have since reinstalled and lost the old configuration. In a moment of stupidity, I ran

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

in an attempt to reconfigure the RAID. After letting the drives sync (oops?), neither /dev/md0, /dev/sdb1, nor /dev/sdc2 will mount. For /dev/md0 it complains about a bad magic number in the superblock. For /dev/sd{b,c}1 it complains about missing inodes.

In short, the question is: have I just wiped all my data, or can the array still be recovered?

Here is the dumpe2fs output for these partitions:

brent@codpiece:~$ sudo dumpe2fs /dev/md0 
dumpe2fs 1.42 (29-Nov-2011)
dumpe2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock.
brent@codpiece:~$ sudo dumpe2fs /dev/sdb1 
dumpe2fs 1.42 (29-Nov-2011)
Filesystem volume name:   <none>
Last mounted on:          /var/media
Filesystem UUID:          1462d79f-8a10-4590-8d63-3fcc105b601d
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              61054976
Block count:              244189984
Reserved block count:     12209499
Free blocks:              59069396
Free inodes:              60960671
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      965
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Feb 10 21:04:42 2010
Last mount time:          Fri May 10 20:25:34 2013
Last write time:          Sun May 12 14:41:02 2013
Mount count:              189
Maximum mount count:      38
Last checked:             Wed Feb 10 21:04:42 2010
Check interval:           15552000 (6 months)
Next check after:         Mon Aug  9 22:04:42 2010
Lifetime writes:          250 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      7cd5ce46-b823-4453-aa66-00ddaff69952
Journal backup:           inode blocks
dumpe2fs: A block group is missing an inode table while reading journal inode

Edit:

It seems @hauke-laging is right: I created a 1.2-metadata RAID-1 on top of what used to be a 1.0-metadata array. I have re-run mdadm --create with the correct metadata version, but now my filesystem is corrupted. Do I need to mess with the partition table, or can I simply run fsck /dev/md0?

Here is the new output from fsck and dumpe2fs:

brent@codpiece:~$ sudo fsck /dev/md0 
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
The filesystem size (according to the superblock) is 244189984 blocks
The physical size of the device is 244189952 blocks
Either the superblock or the partition table is likely to be corrupt!

brent@codpiece:~$ sudo dumpe2fs /dev/md0
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          1462d79f-8a10-4590-8d63-3fcc105b601d
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              61054976
Block count:              244189984
Reserved block count:     12209499
Free blocks:              240306893
Free inodes:              61054965
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      965
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Wed Feb 10 21:04:42 2010
Last mount time:          n/a
Last write time:          Mon May 13 10:38:58 2013
Mount count:              0
Maximum mount count:      38
Last checked:             Wed Feb 10 21:04:42 2010
Check interval:           15552000 (6 months)
Next check after:         Mon Aug  9 22:04:42 2010
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      7cd5ce46-b823-4453-aa66-00ddaff69952
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00215ad3
Journal start:            0


Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
  Checksum 0x4453, unused inodes 0
  Primary superblock at 0, Group descriptors at 1-59
  Reserved GDT blocks at 60-1024
  Block bitmap at 1025 (+1025), Inode bitmap at 1041 (+1041)
  Inode table at 1057-1568 (+1057)
  23513 free blocks, 8181 free inodes, 2 directories
  Free blocks: 12576-12591, 12864-12879, <...>
  Free inodes: 
Group 1: (Blocks 32768-65535) [ITABLE_ZEROED]
  Checksum 0x348a, unused inodes 0
  Backup superblock at 32768, Group descriptors at 32769-32827
  Reserved GDT blocks at 32828-33792
  Block bitmap at 1026 (bg #0 + 1026), Inode bitmap at 1042 (bg #0 + 1042)
  Inode table at 1569-2080 (bg #0 + 1569)
  31743 free blocks, 8192 free inodes, 0 directories
  Free blocks: 43232-43239, 43264-43271, <...>
  Free inodes: 
Group 2: (Blocks 65536-98303) [ITABLE_ZEROED]
  Checksum 0x2056, unused inodes 0
  Block bitmap at 1027 (bg #0 + 1027), Inode bitmap at 1043 (bg #0 + 1043)
  Inode table at 2081-2592 (bg #0 + 2081)
  32768 free blocks, 8192 free inodes, 0 directories
  Free blocks: 66417-66432, 66445-66456, 66891, <...>
  Free inodes: 23921-24576
Group 3: (Blocks 98304-131071) [ITABLE_ZEROED]
  Checksum 0x4254, unused inodes 0
  Backup superblock at 98304, Group descriptors at 98305-98363
  Reserved GDT blocks at 98364-99328
  Block bitmap at 1028 (bg #0 + 1028), Inode bitmap at 1044 (bg #0 + 1044)
  Inode table at 2593-3104 (bg #0 + 2593)
  31743 free blocks, 8192 free inodes, 0 directories
  Free blocks: 99334-99339, 99438-99443, 99456-99459, <...>
  Free inodes: 24585-32768
Group 4: (Blocks 131072-163839) [ITABLE_ZEROED]
  Checksum 0x6a00, unused inodes 0
  Block bitmap at 1029 (bg #0 + 1029), Inode bitmap at 1045 (bg #0 + 1045)
  Inode table at 3105-3616 (bg #0 + 3105)
  32768 free blocks, 8192 free inodes, 0 directories
  Free blocks: 131074-131075, 131124-131129, <...>
  Free inodes: 32769-40960
Group 5: (Blocks 163840-196607) [ITABLE_ZEROED]
  Checksum 0x37e0, unused inodes 0
  Backup superblock at 163840, Group descriptors at 163841-163899
  Reserved GDT blocks at 163900-164864
  Block bitmap at 1030 (bg #0 + 1030), Inode bitmap at 1046 (bg #0 + 1046)
  Inode table at 3617-4128 (bg #0 + 3617)
  31743 free blocks, 8192 free inodes, 0 directories
  Free blocks: 164968-164970, 164979, <...>
  Free inodes: 40961-49152
Group 6: (Blocks 196608-229375) [ITABLE_ZEROED]
  <...>

Answer 1

Have a look at this question; I think it sounds familiar to your problem.

Re-creating and even re-syncing a RAID-1 should not destroy the data. What has apparently happened is that the MD device now starts at a different offset, so mount looks for the filesystem superblock at a position that now holds other data. This can happen in at least two ways:

  1. You (or rather: the defaults) created the new array with a different superblock format (see --metadata in man mdadm). The superblock is therefore now in a different place (or has a different size). Do you happen to know what the old metadata format was?
  2. Even with the same format, the offset can change because the default data offset differs. See mdadm --examine /dev/sdb1 (add the output to your question); a quick way to compare both members is sketched below.
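
For reference, a minimal sketch for pulling the relevant fields out of mdadm --examine on both members (assuming they are /dev/sdb1 and /dev/sdc1):

# show superblock format and offsets for both RAID members;
# "Super Offset" / "Data Offset" lines only appear for 1.x metadata
sudo mdadm --examine /dev/sdb1 | grep -E 'Version|Super Offset|Data Offset'
sudo mdadm --examine /dev/sdc1 | grep -E 'Version|Super Offset|Data Offset'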

You should look for the filesystem superblock in the first area of the disk (/dev/sdb1). Perhaps this can be done with parted or a similar tool. You may have to delete the corresponding partition first, though (not a problem, since you can easily back up and restore the partition table).

Alternatively, create loop devices / DM devices at increasing offsets (not over the whole disk, a few MiB are enough) and try dumpe2fs -h on each of them. If you want to do that but do not know how, I can offer some shell code for it.
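
A rough sketch of such a scan, assuming 4 KiB-aligned offsets and only the first 8 MiB of /dev/sdb1 need to be probed:

# attach a read-only loop device at each offset and ask dumpe2fs
# whether it sees a valid ext superblock there
for off in $(seq 0 4096 $((8 * 1024 * 1024))); do
    dev=$(sudo losetup --find --show --read-only --offset "$off" /dev/sdb1)
    if sudo dumpe2fs -h "$dev" >/dev/null 2>&1; then
        echo "possible ext superblock at offset $off"
    fi
    sudo losetup -d "$dev"
done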

The worst case is that the new MD superblock has overwritten the filesystem superblock. In that case you can search for the superblock copies (see the output of mke2fs). Running mke2fs on a virtual device of the same size should tell you where the superblock copies are located.
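
One way to get those positions without touching the real disks is a dry run of mke2fs against a same-size sparse file; a sketch (the file name is a placeholder, and the e2fsck line is only an example of how a backup superblock would then be used):

# sparse file the same size as the filesystem (244189984 blocks * 4096 B)
truncate -s $((244189984 * 4096)) /tmp/dummy.img

# -n: do not actually create a filesystem, just print what would be done,
# including the "Superblock backups stored on blocks: ..." line
mke2fs -n -F -b 4096 /tmp/dummy.img

# then, for example, check the array against the backup superblock at block 32768
# sudo e2fsck -b 32768 -B 4096 /dev/md0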

Edit 1:

Now that I have read and understood your dumpe2fs output: your old RAID-1 had its superblock at the end of the devices (metadata 0.9 or 1.0). You probably have 1.2 now, so part of your filesystem has been overwritten. I cannot judge how big the damage is; this is a case for e2fsck. But first you should reset the RAID to its old metadata type. It would help to know the old version.
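
If the old array really did use 1.0 metadata, re-creating it could look roughly like the sketch below; --assume-clean is my addition here, to keep md from starting another resync that would write over the members, and the metadata version remains an assumption until confirmed:

# stop the wrongly-created array first
sudo mdadm --stop /dev/md0

# re-create with the old superblock format (stored at the end of the devices);
# --assume-clean prevents a resync from overwriting one member with the other
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
     --metadata=1.0 --assume-clean /dev/sdb1 /dev/sdc1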

You can reduce the risk by putting DM devices over the whole of /dev/sdb1 and /dev/sdc1, creating snapshots of them (with dmsetup directly), and creating the new array on the snapshots. That way the relevant parts of the disks are never written to. From the dumpe2fs output we know that the MD device size must be 1000202174464 bytes (244189984 blocks × 4096 bytes per block); this should be checked right after the test creation.
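
A rough sketch of that snapshot setup (the COW file paths, their 10G size, and the scratch array name /dev/md1 are placeholders; all writes land in the COW files, so /dev/sdb1 and /dev/sdc1 themselves stay untouched, and --assume-clean is again added to avoid a full resync that the small COW store could not absorb):

# copy-on-write store for each member, backed by sparse files on another disk
truncate -s 10G /tmp/sdb1-cow.img /tmp/sdc1-cow.img
COW_B=$(sudo losetup --find --show /tmp/sdb1-cow.img)
COW_C=$(sudo losetup --find --show /tmp/sdc1-cow.img)

# non-persistent (N) snapshots with 8-sector chunks over the full partitions
SZ_B=$(sudo blockdev --getsz /dev/sdb1)
SZ_C=$(sudo blockdev --getsz /dev/sdc1)
sudo dmsetup create sdb1-snap --table "0 $SZ_B snapshot /dev/sdb1 $COW_B N 8"
sudo dmsetup create sdc1-snap --table "0 $SZ_C snapshot /dev/sdc1 $COW_C N 8"

# test-create the array on the snapshots, then verify its size
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 \
     --assume-clean /dev/mapper/sdb1-snap /dev/mapper/sdc1-snap
sudo blockdev --getsize64 /dev/md1   # must show 1000202174464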
