I have a 3-disk RAID5 setup. During a careless migration, one disk got kicked out of the array, and then another one followed shortly after.
As a result I cannot rebuild the array legitimately, because no two disks are in sync with each other.
I have cloned both remaining array disks onto a pair of spare (identical) disks with dd, so I can mess with them freely and start over as many times as I need.
I know that I will most likely end up with corruption if the data doesn't match, but I want to try anyway, just to learn something.
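For reference, the cloning step above can be done with a plain whole-device dd; the device names in the comment are placeholders (always double-check with lsblk first), and the runnable part below demonstrates the same byte-for-byte copy on ordinary files so it can be tried harmlessly:

```shell
# Clone a RAID member onto a spare disk. /dev/sdX (source member) and
# /dev/sdY (spare target) are placeholders -- verify them with lsblk first:
#   sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync status=progress

# Harmless demo of the same idea on ordinary files:
dd if=/dev/urandom of=member.img bs=1M count=8 2>/dev/null   # fake "disk"
dd if=member.img of=spare.img bs=1M conv=fsync 2>/dev/null   # byte-for-byte clone
cmp -s member.img spare.img && echo "clone is identical"
```

Working only on the clones, as done here, means any destructive experiment can be repeated from scratch by re-running the dd.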
If I force-assemble these two disks I can start the array, but I cannot mount it:
$ sudo mdadm /dev/md1 --assemble /dev/sde2 /dev/sdd2 -f
mdadm: /dev/md1 has been started with 2 drives (out of 3).
$ sudo mount -v /dev/md1 /media/mfloris/raidNas/
mount: /media/mfloris/raidNas: wrong fs type, bad option, bad superblock on /dev/md1, missing codepage or helper program, or other error.
Is there any way to fiddle with the metadata to convince the system that the disks are in sync?
I have already tried fsck -n /dev/md1 (it does nothing) and dumpe2fs /dev/md1 (it says the magic number in the superblock is bad).
I have also tried:
$ sudo file -skL /dev/md1
/dev/md1: BTRFS Filesystem sectorsize 4096, nodesize 16384, leafsize 16384, UUID=f0b84f7d-7247-4781-959d-1da2eea20e66, 407236403200/5999719088128 bytes used, 1 devices\012- data
$ sudo grep btrfs /proc/filesystems
btrfs
$ lsmod | grep btrfs
btrfs 1138688 0
zstd_compress 163840 1 btrfs
xor 24576 2 async_xor,btrfs
raid6_pq 114688 4 async_pq,btrfs,raid456,async_raid6_recov
$ sudo mount -t btrfs -v /dev/md1 /media/mfloris/raidNas/
mount: /media/mfloris/raidNas: wrong fs type, bad option, bad superblock on /dev/md1, missing codepage or helper program, or other error.
$ sudo btrfs check /dev/md1
parent transid verify failed on 654950400 wanted 8458 found 8456
parent transid verify failed on 654950400 wanted 8458 found 8456
parent transid verify failed on 654950400 wanted 8458 found 8460
parent transid verify failed on 654950400 wanted 8458 found 8460
Ignoring transid failure
leaf parent key incorrect 654950400
ERROR: cannot open file system
and the dangerous
$ sudo btrfsck --init-extent-tree /dev/md1
Checking filesystem on /dev/md1
UUID: f0b84f7d-7247-4781-959d-1da2eea20e66
Creating a new extent tree
ERROR: tree block bytenr 169114808628 is not aligned to sectorsize 4096
Error reading tree block
error pinning down used bytes
ERROR: attempt to start transaction over already running one
extent buffer leak: start 653361152 len 16384
How can I try to repair the superblock?
Here is the state of the devices. The event counters are close, and the update times are about 8 hours apart, but the data was most likely untouched during that interval.
$ sudo mdadm -E /dev/sd*2
/dev/sdd2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : b57aca26:65609077:9fe7889a:6241c63a
Name : NAS:1
Creation Time : Fri Aug 3 08:13:23 2018
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 5859101344 (2793.84 GiB 2999.86 GB)
Array Size : 5859100672 (5587.67 GiB 5999.72 GB)
Used Dev Size : 5859100672 (2793.84 GiB 2999.86 GB)
Super Offset : 5859101600 sectors
Unused Space : before=0 sectors, after=912 sectors
State : clean
Device UUID : bb700941:772cb7b0:db32a940:e902d0bd
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Aug 28 07:47:49 2018
Checksum : 153fd25e - correct
Events : 78660
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : .A. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : b57aca26:65609077:9fe7889a:6241c63a
Name : NAS:1
Creation Time : Fri Aug 3 08:13:23 2018
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 5859101344 (2793.84 GiB 2999.86 GB)
Array Size : 5859100672 (5587.67 GiB 5999.72 GB)
Used Dev Size : 5859100672 (2793.84 GiB 2999.86 GB)
Super Offset : 5859101600 sectors
Unused Space : before=0 sectors, after=912 sectors
State : clean
Device UUID : c844b66b:fe21447d:e74c865a:751baa07
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Aug 28 00:04:18 2018
Checksum : 703361e6 - correct
Events : 78660
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
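The fields that matter for a forced assembly in the dump above are Events and Update Time: mdadm will only accept members whose event counts are equal or very close. A minimal sketch of pulling that counter out of saved `mdadm -E` output (the variable below stands in for the real dump, so this runs without the disks attached):

```shell
# Extract the Events counter from saved `mdadm -E` output.
# $examine_output stands in for the dump shown above for one member.
examine_output='    Update Time : Tue Aug 28 07:47:49 2018
         Events : 78660'

events=$(printf '%s\n' "$examine_output" | awk -F' : ' '/Events/ {print $2}')
echo "Events: $events"
```

Comparing this value across members (here both /dev/sdd2 and /dev/sde2 report 78660) is what tells you how far apart the two halves of the array drifted before the second disk dropped.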