I set up a new two-disk RAID1 array and it looked fine in mdstat. A few hours later, when I ran cat /proc/mdstat, I saw the following:
Personalities : [raid1]
md1 : active (auto-read-only) raid1 sda2[0] sdb2[1]
4982784 blocks super 1.2 [2/2] [UU]
resync=PENDING
md0 : active raid1 sda1[0]
483266560 blocks super 1.2 [2/1] [U_]
bitmap: 4/4 pages [16KB], 65536KB chunk
unused devices: <none>
I find it strange that md0 is shown as missing its sdb partition while md1 is not. I don't think sdb has failed, since the drive is new, so how can I fix md0?
From the system log:
$ dmesg | grep sdb
[ 3.612217] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[ 3.612219] sd 1:0:0:0: [sdb] 4096-byte physical blocks
[ 3.612290] sd 1:0:0:0: [sdb] Write Protect is off
[ 3.612294] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 3.612326] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 3.630283] sdb: sdb1 sdb2
[ 3.631320] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 3.793804] md: bind<sdb1>
[ 3.795337] md: bind<sdb2>
[ 3.846233] md: kicking non-fresh sdb1 from array!
[ 3.846240] md: unbind<sdb1>
[ 3.865721] md: export_rdev(sdb1)
Answer 1
From the many Google results for "kicking non-fresh sdb1 from array!":
This can happen after an unclean shutdown (for example, a power failure). Usually, removing and re-adding the affected device fixes the problem:
/sbin/mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
/sbin/mdadm /dev/md0 --add /dev/sdb1
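After the re-add, the kernel should start rebuilding md0 onto sdb1. As a quick sanity check (just the usual commands, nothing specific to this machine), you can watch the recovery progress and confirm sdb1 is back in the array:
watch -n 5 cat /proc/mdstat
/sbin/mdadm --detail /dev/md0
The mdstat output should show a "recovery" line with a progress percentage on md0, and once it finishes the array should read [2/2] [UU] again.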