I have an Asustor NAS running a 4-drive RAID 5. After a system update it rebooted into the initialization page of the web console. I assumed this was part of the upgrade process, so I started the initialization; a few minutes in I felt something was wrong and pulled the power. The NAS now boots into a clean OS, all settings are gone, and the RAID cannot be mounted.
After checking mdadm and fdisk in a terminal, I found that the last 2 drives had been re-initialized into a RAID 0 array (sdc4, sdd4).
I tried to assemble the original RAID, without success:
# mdadm --assemble /dev/mdx /dev/sd*4
mdadm: superblock on /dev/sdc4 doesn't match others - assembly aborted
Here is the output of mdadm --examine /dev/sd*.
The original RAID is [sda4, sdb4, sdc4, sdd4], UUID 1ba5dfd1:e861b791:eb307ef1:4ae4e4ad, 8T.
The accidentally created raid0 is [sdc4, sdd4], UUID 06b57325:241ba722:6dd303af:baaa5e4e.
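A compact way to group the partitions by array (a helper one-liner, not from the original commands):
mdadm --examine /dev/sd[abcd]4 | grep -E '^/dev/|Array UUID|Raid Level|Device Role'
This prints just the device name, array membership, level and role for each member, which makes the two competing arrays easy to tell apart in the full output below.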
/dev/sda:
MBR Magic : aa55
Partition[0] : 522240 sectors at 2048 (type 83)
Partition[3] : 2047 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sda1.
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 1c90030d:10445d9f:d39fc32a:06d4b79a
Name : AS1004T-7CBC:0 (local to host AS1004T-7CBC)
Creation Time : Sun Jun 11 10:56:28 2017
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : active
Device UUID : cca1545a:14112668:0ebd0ed3:df55018d
Update Time : Sun Oct 13 01:05:27 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 95866108 - correct
Events : 228987
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
Name : AS1004T-7CBC:126 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:50:45 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : clean
Device UUID : f3836318:4899a170:a0018b8b:1aa428ab
Update Time : Sun May 14 14:40:28 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 48f1cfbb - correct
Events : 92
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 1ba5dfd1:e861b791:eb307ef1:4ae4e4ad
Name : AS1004T-7CBC:1 (local to host AS1004T-7CBC)
Creation Time : Sun Jun 11 10:56:51 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5851357184 (2790.14 GiB 2995.89 GB)
Array Size : 8777035776 (8370.43 GiB 8987.68 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 6a18260d:f0d1b882:5608a7e4:8eeabe1f
Update Time : Sun May 14 09:31:25 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 6e46beec - correct
Events : 213501
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
MBR Magic : aa55
Partition[0] : 522240 sectors at 2048 (type 83)
Partition[3] : 2047 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sdb1.
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 1c90030d:10445d9f:d39fc32a:06d4b79a
Name : AS1004T-7CBC:0 (local to host AS1004T-7CBC)
Creation Time : Sun Jun 11 10:56:28 2017
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : active
Device UUID : 648f0d6d:967f432c:3b9e1ceb:d15959c2
Update Time : Sun Oct 13 01:05:27 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b9c2a23f - correct
Events : 228987
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
Name : AS1004T-7CBC:126 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:50:45 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : clean
Device UUID : 8adc82c0:010edc11:5702a9f6:7287da86
Update Time : Sun May 14 14:40:28 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : d91b8119 - correct
Events : 92
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 1ba5dfd1:e861b791:eb307ef1:4ae4e4ad
Name : AS1004T-7CBC:1 (local to host AS1004T-7CBC)
Creation Time : Sun Jun 11 10:56:51 2017
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5851357184 (2790.14 GiB 2995.89 GB)
Array Size : 8777035776 (8370.43 GiB 8987.68 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 15bd0bdb:b5fdcfaf:94729f61:ed9e7bea
Update Time : Sun May 14 09:31:25 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b0f8adf8 - correct
Events : 213501
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 522240 sectors at 2048 (type 83)
Partition[3] : 2047 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sdc1.
/dev/sdc2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 14d010c5:aaed7a5c:30956792:cfd0c452
Name : AS1004T-7CBC:0 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:50:35 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : clean
Device UUID : 373358f6:76ca625d:e9193081:216676cb
Update Time : Sun May 14 14:37:42 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : ba188081 - correct
Events : 880
Device Role : Active device 1
Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
Name : AS1004T-7CBC:126 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:50:45 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : clean
Device UUID : 737541e2:f5a3673d:8db35b12:2db86324
Update Time : Sun May 14 14:40:28 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : dfa191e3 - correct
Events : 92
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 06b57325:241ba722:6dd303af:baaa5e4e
Name : AS1004T-7CBC:1 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:51:00 2023
Raid Level : raid0
Raid Devices : 2
Avail Dev Size : 5851357184 (2790.14 GiB 2995.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : d73a946c:9aa8e26e:c4388d7a:566dcf90
Update Time : Sun May 14 09:51:00 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 9bd7221c - correct
Events : 0
Chunk Size : 64K
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
MBR Magic : aa55
Partition[0] : 522240 sectors at 2048 (type 83)
Partition[3] : 2047 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sdd1.
/dev/sdd2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 14d010c5:aaed7a5c:30956792:cfd0c452
Name : AS1004T-7CBC:0 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:50:35 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : clean
Device UUID : acfa8c63:b226e810:3640a42a:9f8b72b1
Update Time : Sun May 14 14:37:42 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 6a42effb - correct
Events : 880
Device Role : Active device 0
Array State : AA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8c3ca866:3e6b6804:32f2955e:1b955d76
Name : AS1004T-7CBC:126 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:50:45 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4190208 (2046.34 MiB 2145.39 MB)
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Data Offset : 4096 sectors
Super Offset : 8 sectors
Unused Space : before=4008 sectors, after=0 sectors
State : clean
Device UUID : 1dd56ce1:770fa0d6:13127388:46c0d14f
Update Time : Sun May 14 14:40:28 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 198ac3af - correct
Events : 92
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 06b57325:241ba722:6dd303af:baaa5e4e
Name : AS1004T-7CBC:1 (local to host AS1004T-7CBC)
Creation Time : Sun May 14 09:51:00 2023
Raid Level : raid0
Raid Devices : 2
Avail Dev Size : 7804860416 (3721.65 GiB 3996.09 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 1dece618:58743ad6:9f56922c:fa500120
Update Time : Sun May 14 09:51:00 2023
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 6528b89e - correct
Events : 0
Chunk Size : 64K
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
fdisk on one of the disks. In its view all disks are missing sd*2 and sd*3, but /dev/sd*[1-4] all exist and are readable.
root@AS1004T-7CBC:~ # fdisk /dev/sdd
fdisk: device has more than 2^32 sectors, can't use all of them
The number of cylinders for this disk is set to 267349.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 33 261120 83 Linux
Partition 1 does not end on cylinder boundary
/dev/sdd4 1 1 1023+ ee EFI GPT
Partition 4 does not end on cylinder boundary
Partition table entries are not in disk order
Command (m for help): q
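Side note: the Partition[3] entry of type ee is a GPT protective MBR, i.e. these disks are GPT-partitioned. The legacy fdisk on the NAS cannot parse GPT, which is why sd*2/sd*3 do not show up and the size is capped at 2199.0 GB (the 2^32-sector MBR limit it warns about). A GPT-aware tool should show the real table, for example:
parted /dev/sdd unit s print    # or: sgdisk -p /dev/sdd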
I have the following questions:
- Did the re-initialization into a RAID 0 array overwrite my data?
- Should I zero the superblock of the 3rd drive and re-assemble the first 3 drives?
- Since the first 2 drives look fine, can I restore the superblocks of the last 2 drives from the first 2?
- I want to recover the RAID 5 data.
========================
Update 2023-05-16:
With the help of qemu-nbd I was able to assemble both candidate orders, [dbca] and [cbda], at the same time as md1 and md2, but:
- Neither device can find an ext4 superblock. I tried e2fsck -b with the backup block locations from mke2fs -n (see the note just below), without success.
- A hexdump of the first 0~100MB shows "1 line of random data, 2 lines of zeros, 1 line of random data, 2 lines of zeros ... repeating".
- On both devices, hexdump finds readable text after 100MB.
- I don't know which is the correct order ([dbca] or [cbda]); I will try searching for files larger than 64KB later.
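For reference, mke2fs -n is a dry run: it prints where the superblocks would be placed without writing anything to the device, e.g.:
mke2fs -n /dev/md1
# ...
# Superblock backups stored on blocks:
#         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, ...
(the exact list depends on the filesystem size; the numbers are 4KiB block numbers).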
mdadm --create -e1.2 --chunk=64 /dev/md1 --assume-clean --raid-devices=4 --level=5 /dev/nbd{3,2,1,0}
One of the devices after re-creating with the command above; fields that differ from the original:
- Feature Map: 0x1 (instead of 0x0)
- Internal Bitmap: 8 sectors from superblock (a new line)
- Forgot to say: one hdd is 4Tb, the other is 3Tb
/dev/nbd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 1539136c:7e081bd8:e244150d:69900af6
Name : osboxes:111 (local to host osboxes)
Creation Time : Mon May 15 12:36:32 2023
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7804858368 (3721.65 GiB 3996.09 GB)
Array Size : 8777032704 (8370.43 GiB 8987.68 GB)
Used Dev Size : 5851355136 (2790.14 GiB 2995.89 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=1953503232 sectors
State : clean
Device UUID : 03f5a8c0:2c7adf91:9e3acb0f:1ef7590b
Internal Bitmap : 8 sectors from superblock
Update Time : Mon May 15 12:36:32 2023
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : fefc213c - correct
Events : 0
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
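Both differences are mdadm defaults rather than data: newer mdadm automatically adds an internal write-intent bitmap on large arrays (hence Feature Map 0x1), and here it chose a Data Offset of 264192 sectors, while the surviving 2017 superblocks record 262144 sectors. A sketch of a create command that pins both back to the old values (untested; assumes an mdadm version that accepts --bitmap=none and --data-offset at create time):
mdadm --create -e1.2 --chunk=64 --level=5 --raid-devices=4 \
      --assume-clean --bitmap=none --data-offset=128M \
      /dev/md1 /dev/nbd{3,2,1,0}
# --bitmap=none      : no internal bitmap, so Feature Map stays 0x0
# --data-offset=128M : equals 262144 sectors, matching the old superblocks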
I created a new 1-drive raid0 on the NAS for comparison:
- The data volume's filesystem is ext4, with no LVM (sda4 -> md1 -> ext4).
- Some info from the new disk:
root@AS1004T-7CBC:/volume1/home/admin # df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 250M 8.0K 250M 1% /tmp
/dev/md0 2.0G 296M 1.6G 16% /volume0
/dev/loop0 951K 12K 919K 2% /share
/dev/md1 3.6T 89M 3.6T 1% /volume1
root@AS1004T-7CBC:/volume1/home/admin # mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/md0 on /volume0 type ext4 (rw,relatime,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/loop0 on /share type ext4 (rw,relatime)
/dev/md1 on /volume1 type ext4 (rw,relatime,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/md1 on /share/Public type ext4 (rw,relatime,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/md1 on /share/home type ext4 (rw,relatime,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/md1 on /share/Web type ext4 (rw,relatime,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group)
root@AS1004T-7CBC:/volume1/home/admin # mdadm --examine /dev/sda*
/dev/sda:
MBR Magic : aa55
Partition[0] : 522240 sectors at 2048 (type 83)
Partition[3] : 2047 sectors at 1 (type ee)
mdadm: No md superblock detected on /dev/sda1.
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : f1ee1039:f9c14088:cffa3471:aa3e90ae
Name : AS1004T-7CBC:0 (local to host AS1004T-7CBC)
Creation Time : Mon May 15 23:05:28 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4188160 (2045.00 MiB 2144.34 MB)
Array Size : 2094080 (2045.00 MiB 2144.34 MB)
Data Offset : 6144 sectors
Super Offset : 8 sectors
Unused Space : before=6064 sectors, after=0 sectors
State : clean
Device UUID : 9e92e90d:da2823cc:a37fd324:f8afc5b3
Update Time : Mon May 15 23:13:59 2023
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : a6bb9ed2 - correct
Events : 170
Device Role : Active device 0
Array State : A... ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 7fbd2329:0372140e:b608f289:fc20681e
Name : AS1004T-7CBC:126 (local to host AS1004T-7CBC)
Creation Time : Mon May 15 23:05:38 2023
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 4188160 (2045.00 MiB 2144.34 MB)
Array Size : 2094080 (2045.00 MiB 2144.34 MB)
Data Offset : 6144 sectors
Super Offset : 8 sectors
Unused Space : before=6064 sectors, after=0 sectors
State : clean
Device UUID : 0f7afe79:862435fc:fb725dcf:ac326f8c
Update Time : Mon May 15 23:06:31 2023
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : fac36572 - correct
Events : 6
Device Role : Active device 0
Array State : A... ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda4:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 93f70cdf:afa455d9:e6d74e16:dc0b718f
Name : AS1004T-7CBC:1 (local to host AS1004T-7CBC)
Creation Time : Mon May 15 23:05:43 2023
Raid Level : raid1
Raid Devices : 1
Avail Dev Size : 7804858368 (3721.65 GiB 3996.09 GB)
Array Size : 3902429184 (3721.65 GiB 3996.09 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=263912 sectors, after=0 sectors
State : active
Device UUID : 8d4012c7:d65e50fc:77afded9:e64fcfe1
Update Time : Mon May 15 23:06:00 2023
Bad Block Log : 512 entries available at offset 264 sectors
Checksum : 921f67a4 - correct
Events : 3
Device Role : Active device 0
Array State : A ('A' == active, '.' == missing, 'R' == replacing)
Update 2023-05-17 (skip ahead to 05-18): trying to find an ext4 superblock in the arrays.
root@osboxes:/home/osboxes# hexdump /dev/md1 |grep 'ffff ef53'
7d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
bd00030 af1c 6460 0000 ffff ef53 0000 0001 0000
11c7b550 ef20 0012 ffff ffff ef53 0012 0000 0000
1347b550 ef20 0012 ffff ffff ef53 0012 0000 0000
^C
root@osboxes:/home/osboxes# hexdump /dev/md2 |grep 'ffff ef53'
7[ce]0030 9bf5 6460 006b ffff ef53 0000 0001 0000
bd[2]0030 af1c 6460 0000 ffff ef53 0000 0001 0000
11c7b550 ef20 0012 ffff ffff ef53 0012 0000 0000
1347b550 ef20 0012 ffff ffff ef53 0012 0000 0000
^C
Does this look like an ext4 superblock?
root@osboxes:/home/osboxes# hexdump -s 0xbd00000 -n 0x100 /dev/md1
bd00000 f000 0cb7 2b00 65bf fffb 000b f6f2 64f0
bd00010 eff5 0cb7 0000 0000 0002 0000 0002 0000
bd00020 8000 0000 8000 0000 1000 0000 0000 0000
bd00030 af1c 6460 0000 ffff ef53 0000 0001 0000
bd00040 af05 6460 0000 0000 0000 0000 0001 0000
bd00050 0000 0000 000b 0000 0100 0001 003c 0000
bd00060 02c2 0000 007b 0000 b5d9 42ac 7544 a848
bd00070 df96 1606 2700 6edf 0000 0000 0000 0000
bd00080 0000 0000 0000 0000 0000 0000 0000 0000
*
bd000c0 0000 0000 0000 0000 0000 0000 0000 0400
bd000d0 0000 0000 0000 0000 0000 0000 0000 0000
bd000e0 0008 0000 0000 0000 0000 0000 215a a8e3
bd000f0 17b0 154b e18b 37e5 99bd e3cf 0101 0040
bd00100
root@osboxes:/home/osboxes# hexdump -s 0xbd20000 -n 0x100 /dev/md2
bd20000 f000 0cb7 2b00 65bf fffb 000b f6f2 64f0
bd20010 ... data is same except address
0xbd00000/4096 = 0xbd00, i.e. 48384.
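(hexdump addresses are byte offsets, while e2fsck -b expects a filesystem block number; the conversion as a shell one-liner:)
printf '%d\n' $((0xbd00000 / 4096))    # prints 48384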
root@osboxes:/home/osboxes# e2fsck -b 48384 /dev/md1
e2fsck 1.46.2 (28-Feb-2021)
Superblock has an invalid journal (inode 8).
Clear<y>?
dumpe2fs -o superblock=48384 /dev/md1 and dumpe2fs -o superblock=$((48384+131072/4096)) /dev/md2 show the same result. Block count: 1707027200, which suggests 6.98 terabytes.
root@osboxes:/home/osboxes# dumpe2fs -o superblock=48384 /dev/md1
dumpe2fs 1.46.2 (28-Feb-2021)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: d9b5ac42-4475-48a8-96df-06160027df6e
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: unsigned_directory_hash
Default mount options: user_xattr acl
Filesystem state: not clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 213381120
Block count: 1707027200
Reserved block count: 786427
Free blocks: 1693513458
Free inodes: 213381109
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 4096
Inode blocks per group: 256
RAID stride: 16
RAID stripe width: 32
Flex block group size: 16
Filesystem created: Sun May 14 05:51:01 2023
Last mount time: n/a
Last write time: Sun May 14 05:51:24 2023
Mount count: 0
Maximum mount count: -1
Last checked: Sun May 14 05:51:01 2023
Check interval: 0 (<none>)
Lifetime writes: 145 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 5a21e3a8-b017-4b15-8be1-e537bd99cfe3
Journal backup: inode blocks
Journal superblock magic number invalid!
The superblock location is shifted relative to what mke2fs -n reports (found at 48384 instead of the expected 32768). Does the new array put the ext4 filesystem at a wrong offset?
root@osboxes:/home/osboxes# e2fsck -p -b $((48384)) /dev/md1
/dev/md1: Superblock has an invalid journal (inode 8).
CLEARED.
*** journal has been deleted ***
/dev/md1: Block bitmap for group 960 is not in group. (block 8305676774422481356)
/dev/md1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
root@osboxes:/home/osboxes# e2fsck -p -b $((48384+131072/4096)) /dev/md2
/dev/md2: Superblock has an invalid journal (inode 8).
CLEARED.
*** journal has been deleted ***
/dev/md2: Block bitmap for group 960 is not in group. (block 1961591391969192664)
/dev/md2: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
Update 2023-05-18:
Last night I was recovering with the wrong superblock. There are two sets of superblocks on the array, one created in 2017 and one created last week. After filtering for only the 2017 set, I found they sit at the same relative positions as in a normal ext4:
Superblock on md1 | + offset = | Normal superblock |
---|---|---|
not found | +0x300000 | 0000430 |
7d00030 | +0x300000 | 8000030 |
17d00030 | +0x300000 | 18000030 |
27d00030 | +0x300000 | 28000030 |
37d00030 | +0x300000 | 38000030 |
47d00030 | +0x300000 | 48000030 |
c7d00030 | +0x300000 | c8000030 |
d7d00030 | +0x300000 | d8000030 |
187d00030 | +0x300000 | 88000030 |
287d00030 | +0x300000 | e8000030 |
3e7d00030 | +0x300000 | 98000030 |
... It looks like the first 0x2ffbd0+0x430 = 0x300000 bytes = 3MB were truncated.
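The arithmetic behind that conclusion, assuming the standard backup superblock at block 32768 of a 4KiB-block ext4:
printf '%x\n' $((32768 * 4096 + 0x38))      # 8000038: where the backup magic should be
printf '%x\n' $((0x8000038 - 0x7d00038))    # 300000: the observed shift, exactly 3 MiB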
Update 2023-05-19:
After prepending 3MB in front of the md1 device, a simple e2fsck -p was all it took, and I was able to mount the filesystem.
I wrote a summary in an answer.
Answer 1
If you do not have spare storage for at least twice the original size of the array (that is, at least 3 TB * 4 drives * 2 = 24 TB of free space for the recovery operation), stop and hand the whole job over to a professional data recovery service.
Now, to the answers.
1. If it ran mdadm --create without --assume-clean, then yes, the data was overwritten with zeros.
2. No. You must not change anything on the drives. The very first step you need to perform is to dump (image) all four members of the RAID.
3. No. Those superblocks are different. Some things in them must be the same and some must differ; in particular, each superblock records which position in the array its device occupies.
4. As said in (1), most likely the data at the beginning of the array is irreversibly destroyed (as if two drives of the RAID5 were lost at once). It may be possible to recover the "tail" located past the point where the process was stopped. That does not necessarily mean you will recover the user data that lives there, because the filesystem structures describing it probably also depend on blocks in the destroyed area. But a normal filesystem keeps many copies of the superblock, some of which may happen to lie in the undamaged area, so there is still hope. You can try to expose this undamaged tail and recover from it what is possible.
First, make the necessary backups of all four devices, as said in (2) (using e.g. dd or ddrescue). This will use half of your spare space. Then you can proceed to re-create the array with
mdadm --create -e1.2 -l5 -n4 --assume-clean /dev/sd[abcd]4
Pay attention to the order of the drives: as I wrote it in the command above, it is very likely incorrect. You will have to play for a while to guess the right order; the correct one is probably [cbda] or [dbca], because the surviving devices fix the positions sda4=3 and sdb4=1 (taken from the Device Role attribute). If you guess wrong, you have to copy the dumps back onto the drives and start over; this is what the dumps are for, but see the hint below. Ideally this takes no more than 2 guesses; at worst there are 4! = 1 * 2 * 3 * 4 = 24 different orderings of 4 drives.
What you should hope for is that the data at the end of the array is intact. How to check depends on what you have there. Your array uses a 64KiB chunk size, so you have to check whether 64KiB segments of data on the array appear in the correct order. Again, see the hint below to simplify the guessing.
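One rough way to eyeball that, for regions holding text files (CHUNK is a hypothetical chunk index where readable data was spotted, e.g. with strings -t x):
CHUNK=123456    # hypothetical position
dd if=/dev/md1 bs=64K skip=$CHUNK count=4 2>/dev/null | strings | less
# with the right device order the text reads continuously across the
# 64KiB chunk boundaries; with a wrong order it jumps mid-stream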
- When the correct order is found, you can dump the image of the assembled array to the remaining spare free space. From this point on, you perform filesystem recovery operations on the array. If it is ext4, you can try running
e2fsck -b <superblock>
specifying a superblock that lies in the undamaged area; you can guess where superblocks live by running mke2fs -n, which simulates filesystem creation without actually writing anything. Essentially, whatever you get after this step is what can be recovered.
Hint: after you have taken the required full dumps, you can speed up the guessing by instantiating read-write overlays, so the data on the drives is never changed. If a guess is wrong, you only re-create the overlay images instead of copying the dumps back to the drives, which is much faster than copying 12 TB. I described this in another answer, but for your problem, instead of making an overlay for the assembled array, you make overlays for the four individual devices and then build the array out of the overlaid nbdX devices.
This also lets you skip dumping the filesystem image. You can work on all 2, or even all 24, possible orders in those dumps at the same time (this needs 8 or 96 overlay images and NBDs respectively, but the images only store changes and tend not to grow much during this kind of recovery). Then try to recover the filesystem on each of them and see which one turns out correct. Afterwards, delete all the incorrect attempts, copy the filesystem contents off to the spare free space, remove the arrays on the devices, re-create them, and copy the surviving data back.
Answer 2
Summary
This only works if what you overwrote was a different RAID configuration/filesystem (so the backup superblock positions differ and the old superblocks were not overwritten), the old filesystem was EXT4/3, and there is no physical damage. The key to recovery is that backup superblocks survive.
If only the first part of the EXT4 is damaged, this method should be able to recover its file structure.
Situation: 2 of the 4 RAID 5 drives had their headers overwritten (by re-initializing a new raid and creating a new ext4 inside it); the other 2 were left untouched.
- First, make a backup, or attach the 4 drives to a PC read-only. For me, the old raid members are /dev/sd{b,c,d,e}4.
blockdev --setro /dev/sd{b,c,d,e}4 # check the device using lsblk
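To confirm the read-only flag took effect (a quick check, not in the original recipe):
for d in /dev/sd{b,c,d,e}4; do blockdev --getro $d; done    # each line should print 1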
- Set up overlay block devices on top of them (mentioned by @Nikita Kipriyanov; it really makes the operation much more efficient):
modprobe nbd
apt update && apt install qemu-utils -y
# create tmp file to hold changes
qemu-img create -f qcow2 -b /dev/sdb4 -F raw /tmp/sdb4.qcow2
qemu-img create -f qcow2 -b /dev/sdc4 -F raw /tmp/sdc4.qcow2
qemu-img create -f qcow2 -b /dev/sdd4 -F raw /tmp/sdd4.qcow2
qemu-img create -f qcow2 -b /dev/sde4 -F raw /tmp/sde4.qcow2
# mount nbd[1-4] as an overlay device of `/dev/sd[bcde]4`
qemu-nbd -c /dev/nbd0 /tmp/sdb4.qcow2
qemu-nbd -c /dev/nbd1 /tmp/sdc4.qcow2
qemu-nbd -c /dev/nbd2 /tmp/sdd4.qcow2
qemu-nbd -c /dev/nbd3 /tmp/sde4.qcow2
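A quick sanity check that an overlay really exposes the same bytes as its backing partition (shown for the first pair only):
cmp -n 1048576 /dev/nbd0 /dev/sdb4 && echo overlay OK    # compare the first 1 MiB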
You can start over at any time by re-creating the nbd devices:
mdadm --stop /dev/md1
qemu-nbd -d /dev/nbd0
qemu-nbd -d /dev/nbd1
qemu-nbd -d /dev/nbd2
qemu-nbd -d /dev/nbd3
# run step 2 again
- Create the new mdadm array. Note that you must find the correct device order (permute the 3,2,1,0 below); I was able to find it via the Device Role field of mdadm --examine /dev/sdX4 on the 2 unaffected drives.
(Thanks to @Nikita Kipriyanov for pointing this out.)
# Note: you must use --assume-clean to prevent mdadm from destroying the data of the old array
# mdadm --create -e1.2 --chunk=64 /dev/md1 --assume-clean --raid-devices=4 --level=5 /dev/nbd{3,2,1,0}
mdadm: /dev/nbd3 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sun May 14 05:51:00 2023
mdadm: /dev/nbd2 appears to be part of a raid array:
level=raid5 devices=4 ctime=Sun Jun 11 06:56:51 2017
mdadm: /dev/nbd1 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sun May 14 05:51:00 2023
mdadm: /dev/nbd0 appears to be part of a raid array:
level=raid5 devices=4 ctime=Sun Jun 11 06:56:51 2017
mdadm: largest drive (/dev/nbd3) exceeds size (2925677568K) by more than 1%
Continue creating array? y
Devices 0 and 2 of the array were the ones accidentally re-initialized, so I force-created a new array reusing the old configuration (--chunk=64). Now I have a new /dev/md1; assume it contains no LVM and no partition table, just a single EXT4 fs inside (learned by installing a fresh NAS OS onto a new drive).
Searching for the ext4 backup superblocks:
# hexdump /dev/md1 | awk '$6 == "ef53"'
7ce0030 9bf5 6460 006b ffff ef53 0000 0001 0000
bd20030 af1c 6460 0000 ffff ef53 0000 0001 0000
11c7b550 ef20 0012 ffff ffff ef53 0012 0000 0000
1347b550 ef20 0012 ffff ffff ef53 0012 0000 0000
17d20030 9bf5 6460 006b ffff ef53 0000 0001 0000
23d20030 af1c 6460 0000 ffff ef53 0000 0001 0000
37ce0030 9bf5 6460 006b ffff ef53 0000 0001 0000
3bd20030 af1c 6460 0000 ffff ef53 0000 0001 0000
47d20030 9bf5 6460 006b ffff ef53 0000 0001 0000
53d20030 af1c 6460 0000 ffff ef53 0000 0001 0000
6bd20030 af1c 6460 0000 ffff ef53 0000 0001 0000
95395060 ef52 e732 fffd ffff ef53 e731 5f5f 5757
Note that the superblocks of the two ext4 filesystems sit at different offsets; we only want the old one. Verify with dumpe2fs:
# dumpe2fs -o superblock=$(((0xbd20030-0x30)/4096)) /dev/md1
dumpe2fs 1.46.2 (28-Feb-2021)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: d9b5ac42-4475-48a8-96df-06160027df6e
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: unsigned_directory_hash
Default mount options: user_xattr acl
Filesystem state: not clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 213381120
Block count: 1707027200
Reserved block count: 786427
Free blocks: 1693513458
Free inodes: 213381109
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 4096
Inode blocks per group: 256
RAID stride: 16
RAID stripe width: 32
Flex block group size: 16
Filesystem created: Sun May 14 05:51:01 2023
Last mount time: n/a
Last write time: Sun May 14 05:51:24 2023
Mount count: 0
Maximum mount count: -1
Last checked: Sun May 14 05:51:01 2023
Check interval: 0 (<none>)
Lifetime writes: 145 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 5a21e3a8-b017-4b15-8be1-e537bd99cfe3
Journal backup: inode blocks
Journal superblock magic number invalid!
Note the Filesystem created field: if it matches the time of your accidental format, this is not the superblock you are looking for. I found that the 2 bytes right before the magic differ: 006b in the old fs and 0000 in the new one. (That 16-bit word is ext4's mount-count field, s_mnt_count: the old filesystem had been mounted 0x6b = 107 times, while the freshly created one shows Mount count: 0 in the dumpe2fs output above.)
After filtering on the correct signature, the results make much more sense:
# hexdump /dev/md1 | awk '$6 == "ef53" && $4 == "006b"'
7d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
17d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
27d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
37d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
47d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
c7d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
d7d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
187d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
287d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
3e7d00030 9bf5 6460 006b ffff ef53 0000 0001 0000
Comparing with a normal EXT4 layout, they have the same relative positions as a normal ext4:
Superblock on md1 | + offset = | Normal EXT4 superblock |
---|---|---|
not found | +0x300000 | 0000430 |
7d00030 | +0x300000 | 8000030 |
17d00030 | +0x300000 | 18000030 |
27d00030 | +0x300000 | 28000030 |
37d00030 | +0x300000 | 38000030 |
47d00030 | +0x300000 | 48000030 |
c7d00030 | +0x300000 | c8000030 |
d7d00030 | +0x300000 | d8000030 |
187d00030 | +0x300000 | 88000030 |
287d00030 | +0x300000 | e8000030 |
3e7d00030 | +0x300000 | 98000030 |
It looks like the first 0x2ffbd0+0x430 = 0x300000 bytes = 3MB were truncated. I am not sure why exactly 3MB; my guess was 3 MB of per-member mdadm headers [3 (data members) * 1M (mdadm header)], so 3MB got swallowed by the newly created array, and the partition table on the disk might be off as well.
Could the 3MB shift be caused by a wrong data offset?
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
These superblocks also define a "data offset". This is the gap between the start of the device, and the start of the data. This means that v1.2 must always have at least 4K per device, although it's normally several megabytes. This space can be used for all sorts of things, typically the write-intent bitmap, the bad blocks log, and a buffer space when reshaping an array. It is usually calculated automatically, but can be over-ridden.
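The superblocks quoted earlier back this up: the surviving 2017 members record Data Offset : 262144 sectors, the re-created array chose 264192 sectors, and the Array Size shrank by exactly 3072 KiB (8777035776 vs 8777032704):
echo $(( (264192 - 262144) * 512 ))        # 1048576 bytes = 1 MiB extra per member
echo $(( (264192 - 262144) * 512 * 3 ))    # 3145728 bytes = 3 MiB across the 3 data members
So the 3MB is the extra data offset multiplied by the 3 data-bearing members, rather than the mdadm headers themselves.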
- Prepend 3 MB in front of the device so the original filesystem ends up at the correct position:
# dd if=/dev/zero of=/root/header bs=1M count=$((3+2)) # 1M mdadm header for each mdadm member block
# losetup /dev/loop13 /root/header
# mdadm --create /dev/md111 --level=linear --assume-clean --raid-devices=2 /dev/loop13 /dev/md1
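A quick check that the loop member really contributes the intended 3 MiB (if not, adjust the count in the dd above):
blockdev --getsize64 /dev/md111 /dev/md1    # the difference should be 3145728 bytes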
# hexdump /dev/md111 | awk '$6 == "ef53" && $5 == "ffff" && $4 == "006b"'
8000030 9bf5 6460 006b ffff ef53 0000 0001 0000
18000030 9bf5 6460 006b ffff ef53 0000 0001 0000
28000030 9bf5 6460 006b ffff ef53 0000 0001 0000
38000030 9bf5 6460 006b ffff ef53 0000 0001 0000
48000030 9bf5 6460 006b ffff ef53 0000 0001 0000
c8000030 9bf5 6460 006b ffff ef53 0000 0001 0000
d8000030 9bf5 6460 006b ffff ef53 0000 0001 0000
88000030 9bf5 6460 006b ffff ef53 0000 0001 0000
e8000030 9bf5 6460 006b ffff ef53 0000 0001 0000
Only the first superblock is missing, and that is now easy to fix:
# e2fsck -p -b $(((0x8000030-0x30)/4096)) /dev/md111
/dev/md111: Superblock needs_recovery flag is clear, but journal has data.
/dev/md111: Recovery flag not set in backup superblock, so running journal anyway.
/dev/md111: recovering journal
/dev/md111: Resize inode not valid.
/dev/md111: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
# mount -o ro /dev/md111 /mnt
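With the filesystem mounted read-only, copy the data off before attempting any further repair (destination is a placeholder):
rsync -a /mnt/ /path/to/safe/storage/    # hypothetical destination path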
References:
https://raid.wiki.kernel.org/index.php/Advanced_data_recovery
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Layout