After a power failure on a server with four disks in a RAID6 configuration, /dev/md1 will not start (but /dev/md0 does).
mdadm --assemble --scan
mdadm: /dev/md/1 assembled from 3 drives - not enough to start the array while not clean - consider --force.
mdadm: No arrays found in config file or automatically
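The "No arrays found in config file" part hints that the config may simply lack ARRAY entries for these arrays. A quick check worth doing here, sketched below (the config path varies by distro):
# Print ARRAY lines straight from the on-disk superblocks and compare them
# with the config file, to rule out a plain config problem.
mdadm --examine --scan
cat /etc/mdadm/mdadm.conf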
>: mdadm --assemble --force /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 -v
mdadm: looking for devices for /dev/md1
mdadm: cannot open device /dev/sdc2/: Not a directory
mdadm: /dev/sdc2/ has no superblock - assembly aborted
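Note the trailing slash in both error lines: opening "/dev/sdc2/" fails with "Not a directory" before mdadm reads anything, so this abort by itself says nothing about the superblock. Whether the slash was in the real command or crept in while transcribing the screenshot, the retry with clean paths would look like this (sketch):
mdadm --assemble --force --verbose /dev/md1 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2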
>: mdadm --examine /dev/sdc2
Magic : a92b4efc
Raid Level : raid6
Raid Devices : 4
...
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 24c01b72 - correct
Array State : AAA.
But when I look at the other array, md0, it looks fine:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid6 sdb1[0] sde1[3] sdd1[2] sdc1[1]
124932096 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
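For what it is worth, "active (auto-read-only)" is not a fault: the array stays read-only until the first write. It can also be flipped manually (sketch):
# Clears the auto-read-only state; harmless on a healthy array.
mdadm --readwrite /dev/md0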
When I run mdadm --examine on /dev/sdc1 it looks much the same as /dev/sdc2, but apparently undamaged. I tried restoring a backup superblock on /dev/sdc2, e.g.:
>: e2fsck /dev/sdc2
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdc2
...
you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
>: e2fsck -b 8193 /dev/sdc2
e2fsck: Bad magic number in super-block while trying to open /dev/sdc2
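In hindsight, e2fsck against the raw member partition could never work here: with an mdadm v1.2 superblock the filesystem begins at the Data Offset (262144 sectors on these members), so /dev/sdc2 itself carries no ext magic at the usual offsets, regardless of corruption. Any superblock repair has to target the assembled /dev/md1. A sketch, assuming the filesystem on md1 is ext4 and was created with default mkfs parameters:
# -n is a dry run: it prints where mkfs would place backup superblocks
# without writing anything; the locations only match a filesystem that
# was created with default parameters.
mke2fs -n /dev/md1
# Then point e2fsck at the array, not the member partition:
e2fsck -b 32768 /dev/md1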
Other than the two backup superblocks (I tried both), is there another way to fix the bad magic number? I looked at the other three members of the raid array and they say the same thing as /dev/sdc2; they can't all be corrupted, can they? How do I recover them? Here is the partition table for each of the four drives:
fdisk -l /dev/sdc
Disk /dev/sdc: 2.7 TiB, xxxxxxx
Disklabel type: gpt
Device Start End Sectors Size Type
/dev/sdc1 xxxx xxxx xxxx 59.6G Linux RAID
/dev/sdc2 xxxx xxxx xxxx 2.7T Linux RAID
Note: the above was typed in from screenshots because ssh would not come up; it is up now, so the rest is cut and paste:
The first drive in md1:
>: mdadm -E /dev/sdb
/dev/sdb:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
>: fdisk -l /dev/sdb
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D724DBDE-FC51-4BA1-AF65-01C21E6D1846
Device Start End Sectors Size Type
/dev/sdb1 2048 124999679 124997632 59.6G Linux RAID
/dev/sdb2 124999680 5860532223 5735532544 2.7T Linux RAID
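All four disks are supposed to carry the same GPT layout; a compact cross-check over all of them (sketch):
# Print just the disk header and the RAID partition rows for each drive.
for d in /dev/sd{b,c,d,e}; do
    fdisk -l "$d" | grep -E '^Disk /|Linux RAID'
done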
Partition 2, where the md1 volume lives:
>: mdadm -E /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 31731ada:b0804a8a:a69cbab4:505c2adf
Name : kvmhost4:1 (local to host kvmhost4)
Creation Time : Fri Nov 3 12:52:05 2017
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 5735270400 (2734.79 GiB 2936.46 GB)
Array Size : 5735270400 (5469.58 GiB 5872.92 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : ee3768a3:397f01ad:6086d77f:222b5e8d
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Jul 29 15:24:04 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 190180d - correct
Events : 16450527
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
Now the second drive in the raid 6, /dev/sdc:
>: mdadm -E /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
>: mdadm -E /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3e7427b8:e82071a4:8224a7b7:f4dc6d0f
Name : kvmhost4:0 (local to host kvmhost4)
Creation Time : Fri Nov 3 12:51:44 2017
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 124932096 (59.57 GiB 63.97 GB)
Array Size : 124932096 (119.14 GiB 127.93 GB)
Data Offset : 65536 sectors
Super Offset : 8 sectors
Unused Space : before=65448 sectors, after=0 sectors
State : clean
Device UUID : dd82716f:49b276d7:9d383a43:06cf9206
Update Time : Fri Jul 10 06:20:40 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c2291539 - correct
Events : 37
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
Partition 2, which holds the md1 member:
>: mdadm -E /dev/sdc2
/dev/sdc2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 31731ada:b0804a8a:a69cbab4:505c2adf
Name : kvmhost4:1 (local to host kvmhost4)
Creation Time : Fri Nov 3 12:52:05 2017
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 5735270400 (2734.79 GiB 2936.46 GB)
Array Size : 5735270400 (5469.58 GiB 5872.92 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : b5d3a4ac:e353b079:3c77d4fe:39cda55b
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Jul 29 15:24:04 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 24c01b72 - correct
Events : 16450527
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
Drive 3 of the array, /dev/sdd:
>: mdadm -E /dev/sdd2
/dev/sdd2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 31731ada:b0804a8a:a69cbab4:505c2adf
Name : kvmhost4:1 (local to host kvmhost4)
Creation Time : Fri Nov 3 12:52:05 2017
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 5735270400 (2734.79 GiB 2936.46 GB)
Array Size : 5735270400 (5469.58 GiB 5872.92 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 5399392a:dfbadbbf:f6d4148d:ff796dc2
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Jul 29 15:24:04 2020
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : dd88528c - correct
Events : 16450527
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
And drive 4 of the array, /dev/sde:
>: mdadm -E /dev/sde2
/dev/sde2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x9
Array UUID : 31731ada:b0804a8a:a69cbab4:505c2adf
Name : kvmhost4:1 (local to host kvmhost4)
Creation Time : Fri Nov 3 12:52:05 2017
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 5735270400 (2734.79 GiB 2936.46 GB)
Array Size : 5735270400 (5469.58 GiB 5872.92 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 5230e8e3:792ce23c:8431a3f1:99d5bfef
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Jul 6 13:10:59 2020
Bad Block Log : 512 entries available at offset 72 sectors - bad blocks present.
Checksum : a5d7e1fa - correct
Events : 13500044
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
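Reading the four dumps side by side, /dev/sde2 is the odd one out: its Events counter (13500044) lags the other three (16450527) and its Update Time is three weeks older, which is exactly why the three current members report "AAA." while sde2 still believes "AAAA". The member is stale, not corrupt. A one-shot comparison (sketch):
# Surface the stale member: the one whose Events count and Update Time lag.
for p in /dev/sd{b,c,d,e}2; do
    echo "== $p"
    mdadm --examine "$p" | grep -E 'Events|Update Time'
done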
Answer 1
Well, I got it working (I think) via:
mdadm --assemble --scan --force
mdadm: Marking array /dev/md/1 as 'clean'
mdadm: /dev/md/1 has been started with 3 drives (out of 4).
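What --force did here is rewrite the dirty flag ("Marking array ... as 'clean'") so the kernel would accept the three members that agree on the event count. A quick sanity check before mounting anything (sketch):
mdadm --detail /dev/md1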
>: cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid6 sdb2[0] sdd2[2] sdc2[1]
5735270400 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UUU_]
[>....................] resync = 0.0% (765440/2867635200) finish=561.7min speed=85048K/sec
bitmap: 4/22 pages [16KB], 65536KB chunk
md0 : active (auto-read-only) raid6 sdb1[0] sde1[3] sdd1[2] sdc1[1]
124932096 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
Now I will wait for it to resync and then try a reboot. In the meantime, the volume mounted and I have recovered my data!
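Follow-up sketch: once the resync finishes, protection against a further failure only comes back when the stale member rejoins. With the internal write-intent bitmap shown in the dumps above, --re-add may be a quick catch-up rather than a full rebuild; on the other hand, given the "bad blocks present" flag on sde2, replacing that disk outright may be the safer choice.
# Return the stale member to md1; if --re-add is refused, --add triggers
# a full rebuild instead.
mdadm /dev/md1 --re-add /dev/sde2
# mdadm /dev/md1 --add /dev/sde2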