I've been bad and didn't back up my RAID anywhere else. I now have a RAID10 array that won't assemble, and I'm hoping it can be salvaged. Here are the details:
I had five drives set up as RAID10 (4 + 1 spare). For reasons unknown, two drives failed and the spare died as well, and now I can't seem to reassemble the array.
Some information follows: the output of mdstat after a reboot [1], the output from stopping the array and attempting to reassemble it [2], and the details from mdadm -E [3].
The literature seems to suggest I could force-recreate the array with --create and --assume-clean, but I'm worried that would make my data problems worse.
My current plan is to recreate the array in a degraded state using the drives with the highest matching event counts (sdc1, sdd1 and sde1 below). Is there a better solution?
megatron ~ # mdadm -E /dev/sd[bcdefghijklmnop]1 | egrep 'Event|/dev'
/dev/sdb1:
Events : 494734
/dev/sdc1:
Events : 502154
/dev/sdd1:
Events : 502154
/dev/sde1:
Events : 502154
/dev/sdf1:
Events : 494756
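As a sanity check on those numbers, the same grep output can be tabulated so the freshest members stand out. A small sketch; the `examine_output` sample below is just the data pasted above, and on the live system it would be piped straight from `mdadm -E`:

```shell
# Sample data: the device / event-count pairs pasted above. On the live
# system this would come from:
#   mdadm -E /dev/sd[b-f]1 | egrep 'Event|/dev'
examine_output='/dev/sdb1:
Events : 494734
/dev/sdc1:
Events : 502154
/dev/sdd1:
Events : 502154
/dev/sde1:
Events : 502154
/dev/sdf1:
Events : 494756'

# Pair each device with its event count and sort numerically, so the
# members sharing the newest metadata end up at the bottom of the list
# (here: the three devices at event count 502154).
freshest=$(printf '%s\n' "$examine_output" |
    awk '/^\/dev\// { dev = $1; sub(":$", "", dev) }
         /Events/   { print $NF, dev }' |
    sort -n)
printf '%s\n' "$freshest"
```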
[1] After rebooting, this is the output of mdstat:
megatron ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : inactive sde1[3](S) sdf1[2](S) sdc1[1](S) sdd1[4](S) sdb1[0](S)
7814052444 blocks super 1.2
unused devices: <none>
[2] If I stop the array and reassemble it, I get the following:
megatron ~ # mdadm --assemble --scan --verbose
mdadm: looking for devices for /dev/md0
mdadm: no RAID superblock on /dev/sde2
mdadm: /dev/sde2 has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdf
mdadm: /dev/sdf has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda3: Device or resource busy
mdadm: /dev/sda3 has wrong uuid.
mdadm: cannot open device /dev/sda2: Device or resource busy
mdadm: /dev/sda2 has wrong uuid.
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot -1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot -1.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: added /dev/sdb1 to /dev/md0 as 0
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sde1 to /dev/md0 as 3
mdadm: added /dev/sdf1 to /dev/md0 as -1
mdadm: added /dev/sdc1 to /dev/md0 as -1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: /dev/md0 assembled from 2 drives and 2 spares - not enough to start the array.
[3] Finally, the output of mdadm -E /dev/sd[bcdefghijklmnop]1 is:
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 01516d30:2e2c0bc4:b743e476:12445ecf
Name : megatron:1 (local to host megatron)
Creation Time : Thu Feb 3 23:38:56 2011
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
Array Size : 5860538368 (2794.52 GiB 3000.60 GB)
Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 09ce480d:aad5f0ac:4cfbc777:7303d4a7
Update Time : Tue Jul 9 20:16:22 2013
Checksum : 8a17d4c9 - correct
Events : 494734
Layout : near=2
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 01516d30:2e2c0bc4:b743e476:12445ecf
Name : megatron:1 (local to host megatron)
Creation Time : Thu Feb 3 23:38:56 2011
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
Array Size : 5860538368 (2794.52 GiB 3000.60 GB)
Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d89673c0:5efc51a7:1589bbc2:d7002e7d
Update Time : Wed Jul 10 00:14:04 2013
Checksum : 57efa473 - correct
Events : 502154
Layout : near=2
Chunk Size : 512K
Device Role : spare
Array State : ..AA ('A' == active, '.' == missing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 01516d30:2e2c0bc4:b743e476:12445ecf
Name : megatron:1 (local to host megatron)
Creation Time : Thu Feb 3 23:38:56 2011
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 3907025072 (1863.01 GiB 2000.40 GB)
Array Size : 5860538368 (2794.52 GiB 3000.60 GB)
Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 08542b46:f4cfa093:31650e72:dd781e74
Update Time : Wed Jul 10 00:14:04 2013
Checksum : aa71a2c3 - correct
Events : 502154
Layout : near=2
Chunk Size : 512K
Device Role : Active device 2
Array State : ..AA ('A' == active, '.' == missing)
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 01516d30:2e2c0bc4:b743e476:12445ecf
Name : megatron:1 (local to host megatron)
Creation Time : Thu Feb 3 23:38:56 2011
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
Array Size : 5860538368 (2794.52 GiB 3000.60 GB)
Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8c655b9d:4630e93b:32eb61b8:f7a5f513
Update Time : Wed Jul 10 00:14:04 2013
Checksum : 55dcae05 - correct
Events : 502154
Layout : near=2
Chunk Size : 512K
Device Role : Active device 3
Array State : ..AA ('A' == active, '.' == missing)
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 01516d30:2e2c0bc4:b743e476:12445ecf
Name : megatron:1 (local to host megatron)
Creation Time : Thu Feb 3 23:38:56 2011
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 2930269954 (1397.26 GiB 1500.30 GB)
Array Size : 5860538368 (2794.52 GiB 3000.60 GB)
Used Dev Size : 2930269184 (1397.26 GiB 1500.30 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 3b54d74e:a1065dd6:d3e836ca:b340b56d
Update Time : Tue Jul 9 20:17:04 2013
Checksum : d63b6fa - correct
Events : 494756
Layout : near=2
Chunk Size : 512K
Device Role : spare
Array State : .AAA ('A' == active, '.' == missing)
Answer 1
Yes, two drive failures in a RAID10 4+1 configuration will kill the array :(
It looks like you got lucky in that three of them are in sync (at event count 502154, as you noted), so yes, you should try reassembling those three. It looks promising.
Of course, if you have 4 extra drives, you should make copies before trying anything, in case things don't go so well...
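A forced reassembly is usually the thing to try first, since --force only bumps md metadata (event counts) rather than touching the data layout. A sketch, assuming the device names are unchanged since the output above; note that sdc1's superblock reports itself as a spare, so if mdadm still refuses to start the array with only slots 2 and 3 active, the --create --assume-clean route from the question becomes the fallback:

```shell
megatron ~ # mdadm --stop /dev/md0
megatron ~ # mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1
```

If the array does start, mount it read-only and check the filesystem before writing anything to it.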
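For the copies, plain dd with conv=noerror,sync is one option (GNU ddrescue is gentler on failing disks). A minimal sketch, where the clone_members helper and the /mnt/backup destination are illustrative, not from the original post:

```shell
# clone_members DEST DEV...: dd-copy each given partition to an image
# file under DEST. conv=noerror,sync makes dd keep going past unreadable
# sectors (padding them with zeros) instead of aborting mid-copy.
clone_members() {
    dest=$1; shift
    for dev in "$@"; do
        dd if="$dev" of="$dest/$(basename "$dev").img" bs=64K conv=noerror,sync
    done
}

# Example (destination mounted on the extra drives):
#   clone_members /mnt/backup /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```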