Mounting a Btrfs RAID 5 from a failed ReadyNAS 104 on Linux - aka how I recovered the data from my ReadyNAS


After about 7 years my ReadyNAS finally failed. I expected Netgear to provide detailed recovery instructions, but I read dozens of threads - my problem is not uncommon - and found no solution.

After some quick research I found 3-4 major vendors offering recovery programs for Windows - all of them expensive, because people value their data...

I believe I was on the latest Netgear OS 6 when it failed (2-3 weeks ago); 4 disks of 4 TB each, in RAID 5.

So I decided to try it with Linux... Ubuntu 20.04 LTS.

Install mdadm and check btrfs

apt-get update

apt-get install mdadm

modinfo btrfs | grep version
srcversion: ACBD2347FFF0DB004CA4F96
vermagic: 5.15.0-43-generic SMP mod_unload modversions

Assemble the RAID

mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 4 drives.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/data-0 has been started with 4 drives.
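Here --assemble --scan found everything on its own. Had it not, the next step would have been to inspect the md superblock on each data partition by hand. A sketch (hypothetical helper; the partition names are the ones from this box):

```shell
# Inspect the md superblock on each data partition; matching Array UUIDs
# and close Event counts mean the set can still be assembled (possibly
# with --force on members whose event count lags slightly behind).
survey_members() {
  for p in /dev/sd[abde]3; do
    echo "== $p =="
    mdadm --examine "$p" | grep -E 'Array UUID|Events|Array State'
  done
}
# On the real machine: survey_members
```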

Check the RAID

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sde3[0] sda3[3] sdb3[4] sdd3[1]
11706500352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid6 sda2[0] sde2[3] sdd2[2] sdb2[1]
1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sde1[0] sda1[3] sdb1[2] sdd1[1]
4190208 blocks super 1.2 [4/4] [UUUU]
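Before trusting the arrays, it is worth confirming that none of them came up degraded. A small helper (hypothetical, not part of mdadm) that flags any array whose member map contains a "_", assuming the /proc/mdstat layout shown above:

```shell
# Flag md arrays with a missing member: the status line ends in e.g.
# "[4/4] [UUUU]", and any "_" in the member map means a failed/absent disk.
degraded_arrays() {
  awk '/^md/ { name = $1 }
       /\[[U_]+\]$/ { if ($0 ~ /_/) print name, "is degraded" }'
}

# Example with a simulated failed member (the real check would be
# "degraded_arrays < /proc/mdstat"):
printf 'md127 : active raid5 sde3[0] sdd3[1]\n  blocks [4/3] [UUU_]\n' |
  degraded_arrays
# → md127 is degraded
```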

More details

mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Tue Nov 11 02:45:28 2014
Raid Level : raid5
Array Size : 11706500352 (10.90 TiB 11.99 TB)
Used Dev Size : 3902166784 (3.63 TiB 4.00 TB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Wed Nov 30 02:26:47 2022
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Consistency Policy : resync

Name : 0e36db36:data-0
UUID : 93bd4b5a:c6d3c47d:cd03b2e6:0f16a169
Events : 37041

Number Major Minor RaidDevice State
0 8 67 0 active sync /dev/sde3
1 8 51 1 active sync /dev/sdd3
4 8 19 2 active sync /dev/sdb3
3 8 3 3 active sync /dev/sda3

Check the btrfs volume

btrfs fi label /dev/md127
0e36db36:data

More checks

btrfs filesystem show
Label: '0e36db36:data' uuid: 40a8107f-5114-4c1c-94d6-54bc71f69a7c
Total devices 1 FS bytes used 10.10TiB
devid 1 size 10.90TiB used 10.27TiB path /dev/md127



Checking logs

dmesg

[ 3115.651958] md: md0 stopped.
[ 3115.661011] md/raid1:md0: active with 4 out of 4 mirrors
[ 3115.661023] md0: detected capacity change from 0 to 8380416
[ 3117.469235] md: md1 stopped.
[ 3117.474369] async_tx: api initialized (async)
[ 3117.499570] md/raid:md1: device sda2 operational as raid disk 0
[ 3117.499574] md/raid:md1: device sde2 operational as raid disk 3
[ 3117.499575] md/raid:md1: device sdd2 operational as raid disk 2
[ 3117.499577] md/raid:md1: device sdb2 operational as raid disk 1
[ 3117.500103] md/raid:md1: raid level 6 active with 4 out of 4 devices, algorithm 2
[ 3117.500120] md1: detected capacity change from 0 to 2093056
[ 3117.648661] md: md127 stopped.
[ 3117.660642] md/raid:md127: device sde3 operational as raid disk 0
[ 3117.660654] md/raid:md127: device sda3 operational as raid disk 3
[ 3117.660660] md/raid:md127: device sdb3 operational as raid disk 2
[ 3117.660665] md/raid:md127: device sdd3 operational as raid disk 1
[ 3117.662671] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[ 3117.662831] md127: detected capacity change from 0 to 23413000704
[ 3117.913560] BTRFS: device label 0e36db36:data devid 1 transid 3643415 /dev/md127 scanned by systemd-udevd (10428)

So at this point I believe the RAID is fine, and I can even mount md0, the ReadyNAS OS partition, and get root access to it.
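For reference, getting at the ReadyNAS OS partition is just a plain read-only mount of md0. A sketch; the mount point is an assumption:

```shell
# Mount the ReadyNAS OS array read-only to browse its config and logs;
# /mnt/readynas-os is a made-up mount point.
mount_os() {
  mkdir -p /mnt/readynas-os
  mount -o ro /dev/md0 /mnt/readynas-os
}
# On the real machine: mount_os && ls /mnt/readynas-os
```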

So now the btrfs games begin.

The command I expected to work, based on all the information I could find:

mount -t btrfs -o ro /dev/md127 /mnt

mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.



dmesg

[ 4308.157063] BTRFS info (device md127): flagging fs with big metadata feature
[ 4308.157074] BTRFS info (device md127): disk space caching is enabled
[ 4310.614947] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4310.614970] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4310.619823] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4310.619846] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4310.619891] BTRFS warning (device md127): failed to read root (objectid=2): -5
[ 4310.634017] BTRFS error (device md127): open_ctree failed

Retry with recovery

mount -t btrfs -o ro,recovery /dev/md127 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.



dmesg

[ 4498.628078] BTRFS info (device md127): flagging fs with big metadata feature
[ 4498.628095] BTRFS warning (device md127): 'recovery' is deprecated, use 'rescue=usebackuproot' instead
[ 4498.628100] BTRFS info (device md127): trying to use backup root at mount time
[ 4498.628104] BTRFS info (device md127): disk space caching is enabled
[ 4498.664705] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.664711] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4498.664877] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.664881] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4498.664886] BTRFS warning (device md127): failed to read root (objectid=2): -5
[ 4498.708647] BTRFS critical (device md127): corrupt leaf: root=1 block=28693526478848 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.708657] BTRFS error (device md127): block=28693526478848 read time tree block corruption detected
[ 4498.712357] BTRFS critical (device md127): corrupt leaf: root=1 block=28693526478848 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.712367] BTRFS error (device md127): block=28693526478848 read time tree block corruption detected
[ 4498.712392] BTRFS warning (device md127): failed to read root (objectid=2): -5
[ 4498.731972] BTRFS error (device md127): parent transid verify failed on 28693524611072 wanted 3643412 found 3643414
[ 4498.744606] BTRFS error (device md127): parent transid verify failed on 28693524611072 wanted 3643412 found 3643414
[ 4498.744637] BTRFS warning (device md127): couldn't read tree root
[ 4498.771820] BTRFS error (device md127): parent transid verify failed on 28693524971520 wanted 3643413 found 3643415
[ 4498.781483] BTRFS error (device md127): parent transid verify failed on 28693524971520 wanted 3643413 found 3643415
[ 4498.781513] BTRFS warning (device md127): couldn't read tree root
[ 4498.795150] BTRFS error (device md127): open_ctree failed
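On kernels from 5.11 on, the deprecated 'recovery' option was superseded by finer-grained rescue= mount options; rescue=all (which includes ignorebadroots) targets exactly this kind of unreadable tree root. A sketch of the escalation worth trying (options per the btrfs mount documentation; untested on this volume):

```shell
# Try progressively more permissive read-only rescue mounts (5.11+ kernels).
try_rescue_mounts() {
  mount -t btrfs -o ro,rescue=usebackuproot /dev/md127 /mnt ||
    mount -t btrfs -o ro,rescue=all /dev/md127 /mnt   # ignorebadroots and more
}
# On the real machine: try_rescue_mounts && ls /mnt
```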

Let's check the filesystem

btrfs check /dev/md127

Opening filesystem to check...
Checking filesystem on /dev/md127
UUID: 40a8107f-5114-4c1c-94d6-54bc71f69a7c
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 11110486872064 bytes used, no error found
total csum bytes: 436299240
total tree bytes: 2136965120
total fs tree bytes: 840761344
total extent tree bytes: 627933184
btree space waste bytes: 434774783
file data blocks allocated: 11185027407872
referenced 11311939104768

So the filesystem seems to be fine; therefore I suspect a difference in btrfs options/features between OS 6 and Ubuntu 20.04 LTS.

Going deeper into btrfs

btrfs inspect-internal dump-super /dev/md127
superblock: bytenr=65536, device=/dev/md127
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0xd7757ba3 [match]
bytenr 65536
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
fsid 40a8107f-5114-4c1c-94d6-54bc71f69a7c
metadata_uuid 40a8107f-5114-4c1c-94d6-54bc71f69a7c
label 0e36db36:data
generation 3643415
root 28693330231296
sys_array_size 129
chunk_root_generation 3641017
root_level 1
chunk_root 28035093004288
chunk_root_level 1
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 11987456360448
bytes_used 11110486872064
sectorsize 4096
nodesize 32768
leafsize (deprecated) 32768
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x21
( MIXED_BACKREF |
BIG_METADATA )
cache_generation 18446744073709551615
uuid_tree_generation 3643414
dev_item.uuid 45694a58-7723-45b1-be77-2e8bee500448
dev_item.fsid 40a8107f-5114-4c1c-94d6-54bc71f69a7c [match]
dev_item.type 2
dev_item.total_bytes 11987456360448
dev_item.bytes_used 11289397035008
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0

So this is where I'm stuck. I suspect it has something to do with the MIXED_BACKREF and BIG_METADATA flags being listed as incompatible.
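Those two flags can be checked by hand against the btrfs on-disk format: MIXED_BACKREF is bit 0 (0x01) and BIG_METADATA is bit 5 (0x20), and both have been supported by every mainstream kernel for many years, which suggests the incompat flags are a red herring here:

```shell
# Decode incompat_flags 0x21 (bit values from the btrfs on-disk format).
flags=$((0x21))
[ $((flags & 0x01)) -ne 0 ] && echo "MIXED_BACKREF set"
[ $((flags & 0x20)) -ne 0 ] && echo "BIG_METADATA set"
[ $((flags & ~(0x01 | 0x20))) -eq 0 ] && echo "no other incompat bits set"
# → MIXED_BACKREF set
# → BIG_METADATA set
# → no other incompat bits set
```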

Alternatively, I'm thinking of downgrading Ubuntu to 14.04 and maybe getting an older version of Btrfs that way.

Answer 1

If only I had known it could be this simple - I spent many hours trying to fix it. I still don't know how to resolve the compatibility flags, but downgrading to Ubuntu 14.04 LTS solved the problem: btrfs came in a 4.x-era version and everything just worked. (Most likely the old kernel simply lacks the read-time tree checker that newer kernels use to reject these root flags, so it mounts the volume without complaint.)

So if you're recovering a Btrfs volume from a ReadyNAS, don't use the latest version of Btrfs.
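If downgrading is not an option, btrfs-progs' restore tool can often copy files out of a volume that refuses to mount, since it reads the device directly. A sketch; the destination paths are assumptions:

```shell
# Copy files off an unmountable btrfs without mounting it.
rescue_copy() {
  btrfs restore -D /dev/md127 /tmp          # -D: dry run, list only, writes nothing
  mkdir -p /mnt/rescue
  btrfs restore -v /dev/md127 /mnt/rescue   # copy whatever is reachable
}
# On the real machine: rescue_copy
```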

PS: I hope this reply is useful to someone, because I've seen many similar questions without a single answer.
