I'm hoping someone with mdadm experience can help me.
I ran the operations below, and since then the RAID can no longer be assembled after a reboot. Is there any way to salvage it?
These are the commands I used:
# wipe any existing md metadata from the three disks being added
sudo mdadm --zero-superblock /dev/sda
sudo mdadm --zero-superblock /dev/sdb
sudo mdadm --zero-superblock /dev/sde
# add them to the existing array (they join as spares at first)
sudo mdadm --add /dev/md127 /dev/sda
sudo mdadm --add /dev/md127 /dev/sdb
sudo mdadm --add /dev/md127 /dev/sde
# convert to raid6 across 6 devices, which starts a reshape
sudo mdadm --grow /dev/md127 --level=6 --raid-devices=6 --backup-file=/tmp/grow_md127_0.bak
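In case it matters: the --backup-file sat in /tmp, which I assume was cleared by the reboot. My understanding is that an array interrupted mid-reshape would normally be reassembled with something like the following sketch (mdadm also has an --invalid-backup option for when the backup file is lost):

sudo mdadm --assemble /dev/md127 --backup-file=/tmp/grow_md127_0.bak /dev/sd[a-f]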
This is the result of running mdadm --examine afterwards:
/dev/sda:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : 226f436b:85b0b3c5:d31f2eb4:7739d6e1
Name : home-server:127 (local to host home-server)
Creation Time : Wed Mar 2 11:50:24 2022
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
Array Size : 15627544576 KiB (14.55 TiB 16.00 TB)
Used Dev Size : 7813772288 sectors (3.64 TiB 4.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : b12d75a3:f8cb94cd:8f0069b4:2950637c
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 601929728 (574.04 GiB 616.38 GB)
Delta Devices : 2 (4->6)
New Layout : left-symmetric
Update Time : Fri Mar 11 11:54:04 2022
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : d4128f77 - correct
Events : 34035
Layout : left-symmetric-6
Chunk Size : 512K
Device Role : Active device 4
Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : 226f436b:85b0b3c5:d31f2eb4:7739d6e1
Name : home-server:127 (local to host home-server)
Creation Time : Wed Mar 2 11:50:24 2022
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
Array Size : 15627544576 KiB (14.55 TiB 16.00 TB)
Used Dev Size : 7813772288 sectors (3.64 TiB 4.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : 7f8ef88a:fc2b2902:874b1e33:53481170
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 601929728 (574.04 GiB 616.38 GB)
Delta Devices : 2 (4->6)
New Layout : left-symmetric
Update Time : Fri Mar 11 11:54:04 2022
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 628d9365 - correct
Events : 34035
Layout : left-symmetric-6
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x5
Array UUID : 226f436b:85b0b3c5:d31f2eb4:7739d6e1
Name : home-server:127 (local to host home-server)
Creation Time : Wed Mar 2 11:50:24 2022
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
Array Size : 15627544576 KiB (14.55 TiB 16.00 TB)
Used Dev Size : 7813772288 sectors (3.64 TiB 4.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : 6df77bcc:f8c3f21b:d6f87baf:accbcec5
Internal Bitmap : 8 sectors from superblock
Reshape pos'n : 601929728 (574.04 GiB 616.38 GB)
Delta Devices : 2 (4->6)
New Layout : left-symmetric
Update Time : Fri Mar 11 11:54:04 2022
Bad Block Log : 512 entries available at offset 24 sectors
Checksum : 8ff5c4f9 - correct
Events : 34035
Layout : left-symmetric-6
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
/dev/sdf:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
And the output of mdadm --detail /dev/md127:
/dev/md127:
Version : 1.2
Raid Level : raid6
Total Devices : 3
Persistence : Superblock is persistent
State : inactive
Working Devices : 3
Name : home-server:127 (local to host home-server)
UUID : 96843f8b:c08ab184:b6cb7150:474ac047
Events : 6
Number   Major   Minor   RaidDevice

   -        8      32        -          /dev/sdc
   -        8      48        -          /dev/sdd
   -        8      16        -          /dev/sdb
Answer 1
I think you have a big problem here.
The mdadm --detail output clearly shows that the 3 additional disks are not part of the RAID set.
Yet the RAID level has been changed to raid6, which cannot possibly work with only 3 disks.
At the same time, your mdadm --examine output reports 6 disks in the RAID set, as expected.
Notice also that the UUID in the --detail output (96843f8b:c08ab184:b6cb7150:474ac047) does not match the Array UUID in the --examine output (226f436b:85b0b3c5:d31f2eb4:7739d6e1), and the event counters disagree (6 vs. 34035); a quick way to compare them is sketched below.
From this I conclude that the RAID set has somehow ended up in an internally inconsistent state.
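To put the mismatching identifiers side by side (adjust the device names to whatever your system uses):

sudo mdadm --examine /dev/sd[bcd] | grep -E 'Array UUID|Events|Raid Devices'
sudo mdadm --detail /dev/md127 | grep -E 'UUID|Events|Total Devices'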
I don't know how to fix this.
I hope you have a good backup of the data that was on the original raid5, because it looks like you will have to throw the RAID set away and start over, losing everything on it.
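If it does come to starting over, the teardown and rebuild would look roughly like the sketch below. This is destructive, only for after you have given up on the data, and the device names are taken from your question (they may have shifted across reboots):

sudo mdadm --stop /dev/md127                # release the half-assembled array
sudo mdadm --zero-superblock /dev/sd[bcd]   # wipe the remaining md metadata (destroys the array for good)
sudo mdadm --create /dev/md127 --level=6 --raid-devices=6 /dev/sd[a-f]

After that, restore from backup, and remember to update mdadm.conf and your initramfs so the new array assembles at boot.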