I have set up a RAID-5 array using mdadm - v3.2.5 - 18th May 2012 on Ubuntu 13.04 (kernel 3.8.0-27-generic). It appears to be up and running fine:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb3[0] sdd1[3] sdc1[1]
2929994752 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
However, on reboot the array gets split into two separate arrays, which makes no sense to me. At boot I am greeted with the prompt:
*** WARNING: Degraded RAID devices detected. ***
Press Y to start the degraded RAID or N to launch recovery shell
To this I usually answer Y, after which I am dropped into the initramfs shell, which I then exit right away. Once I am back in the system, my RAID array has been split into the following pieces:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb3[0]
1464997976 blocks super 1.2
md127 : inactive sdc[1] sdd[2]
2930275120 blocks super 1.2
I have also gotten the reverse:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb3[0]
1464997976 blocks super 1.2
md0 : inactive sdc[1] sdd[2]
2930275120 blocks super 1.2
although sdc and sdd seem to have formed a clique either way. I can reassemble the array by issuing the following commands:
$ mdadm --stop /dev/md0
$ mdadm --stop /dev/md127
$ mdadm -A /dev/md0 /dev/sdb3 /dev/sdc1 /dev/sdd1
After that, I can mount the LVM volume that lives on md0 and it is as if nothing ever happened (no rebuild or anything). What I would really like, however, is not to have to perform these steps at all. My mdadm.conf file contains the line:
ARRAY /dev/md0 metadata=1.2 UUID=e8aaf501:b564493d:ee375c76:b1242a82
which I got from this forum post. Running --detail --scan gives the following:
$ mdadm --detail --scan
mdadm: cannot open /dev/md/mimir:0: No such file or directory
ARRAY /dev/md0 metadata=1.2 name=turbopepper:0 UUID=e8aaf501:b564493d:ee375c76:b1242a82
Note the array "mimir". That is a leftover from a previous array I was experimenting with. I have no idea where it is being detected from (it is not in mdadm.conf, and there is no reference to it in fstab). It probably needs to go away, but I don't know where it is coming from (it may in fact be the culprit); one way to hunt for it is sketched below.
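(In case it helps the diagnosis: mdadm --examine prints any md superblock it finds on a given device, including the array name, so sweeping it over the whole disks as well as the partitions should reveal which device still carries "mimir". The device list below is simply every candidate on this machine.)
$ for dev in /dev/sdb /dev/sdb3 /dev/sdc /dev/sdc1 /dev/sdd /dev/sdd1; do
    echo "== $dev =="          # label each report
    mdadm --examine "$dev"     # prints a "Name :" field for any superblock found
  done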
Any help in getting the array to survive reboots without intervention would be greatly appreciated.
Just in case, here is some additional output that may or may not be useful.
$ fdisk -l /dev/sdb /dev/sdc /dev/sdd
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x7f0e98a6
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      499711      248832   fd  Linux raid autodetect
/dev/sdb2          499712   976771071   488135680   fd  Linux raid autodetect
/dev/sdb3       976771072  3907029167  1465129048   fd  Linux raid autodetect
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
81 heads, 63 sectors/track, 574226 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00052c9c
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  2930277167  1465137560   fd  Linux raid autodetect
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
81 heads, 63 sectors/track, 574226 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bd694
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  2930277167  1465137560   fd  Linux raid autodetect
I imagine the reason sdc and sdd end up together in the split is that they are identical drives.
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=e8aaf501:b564493d:ee375c76:b1242a82
# This file was auto-generated on Sun, 08 Dec 2013 00:39:01 -0500
# by mkconf $Id$
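(One Ubuntu-specific detail that may or may not matter here: boot-time assembly runs from the initramfs, which carries its own embedded copy of this file, so any edit to /etc/mdadm/mdadm.conf only takes effect at boot after regenerating the initramfs:)
$ update-initramfs -u    # re-embed the current mdadm.conf into the initramfs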
Answer 1
One of your partitions (probably sdb3) still has an old superblock on it for the "mimir" array, which mdadm picks up when it scans at boot. You should be able to fix it by issuing
mdadm --zero-superblock /dev/sdb3
and then re-adding the partition to the array.
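For completeness, a sketch of the whole sequence, assuming --examine does confirm the stray "mimir" metadata on sdb3 (if it turns up on a different device instead, zero that one; --zero-superblock irreversibly wipes the md metadata on whatever device it is given, so verify first):
$ mdadm --stop /dev/md0                         # free the member devices
$ mdadm --stop /dev/md127
$ mdadm --examine /dev/sdb3                     # the stray "mimir" name should appear here
$ mdadm --zero-superblock /dev/sdb3             # wipes md metadata only, not the other members
$ mdadm -A --run /dev/md0 /dev/sdc1 /dev/sdd1   # assemble degraded from the intact members
$ mdadm /dev/md0 --add /dev/sdb3                # re-add; the array resyncs onto sdb3
After that, regenerating the initramfs (as noted above) keeps the boot-time scan in agreement with mdadm.conf.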