mdadm re-added a failed drive by itself

I have 2 drives set up in an mdadm mirror (with 3 partitions on each drive). Last week the second drive (sdb) dropped out of the array due to some I/O errors, which I am still investigating. After the server rebooted, /proc/mdstat reported the array as degraded, as expected.

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md2 : active raid1 sda3[0]
      1410613056 blocks super 1.0 [2/1] [U_]
      bitmap: 11/11 pages [44KB], 65536KB chunk

md1 : active (auto-read-only) raid1 sda2[0]
      2097088 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sda1[0]
      52426624 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

I re-added the first two partitions to their respective arrays because they are small (50 GB and 2 GB), but skipped the third because I didn't want to wait for the resync (syncing 1.5 TB takes > 5 hours). A grep of /var/log/messages for the third partition looks like this:

2014-08-29T07:02:09.168903+02:00 cube kernel: [   11.460362] md: bind<sdb3>
2014-08-29T07:02:09.168908+02:00 cube kernel: [   11.681545] md: bind<sda3>
2014-08-29T07:02:09.168913+02:00 cube kernel: [   11.692178] md: kicking non-fresh sdb3 from array!
2014-08-29T07:02:09.168917+02:00 cube kernel: [   11.692209] md: unbind<sdb3>
2014-08-29T07:02:09.168922+02:00 cube kernel: [   11.700679] md: export_rdev(sdb3)
2014-08-29T07:02:09.168942+02:00 cube kernel: [   11.706311] md/raid1:md2: active with 1 out of 2 mirrors
2014-08-29T07:02:09.168949+02:00 cube kernel: [   11.829345] created bitmap (11 pages) for device md2
2014-08-29T07:02:09.168954+02:00 cube kernel: [   11.877779] md2: bitmap initialized from disk: read 1 pages, set 859 of 21525 bits
2014-08-29T07:02:09.168959+02:00 cube kernel: [   11.915548] md2: detected capacity change from 0 to 1444467769344
2014-08-28T20:17:30.195301+02:00 cube kernel: [   10.713443] RAID1 conf printout:
2014-08-28T20:17:30.195306+02:00 cube kernel: [   10.713459]  --- wd:1 rd:2
2014-08-28T20:17:30.195333+02:00 cube kernel: [   10.713469]  disk 0, wo:0, o:1, dev:sda3
2014-08-28T20:17:30.195342+02:00 cube kernel: [   10.810753]  md2: unknown partition table
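For completeness, re-adding the two small partitions came down to something like this (a sketch: the sdb1→md0, sdb2→md1 mapping is assumed from the mdstat output above, and the helper prints the commands by default instead of running them):

```shell
# Hedged sketch: DRY_RUN defaults to 1, so the mdadm commands are only
# printed. Set DRY_RUN=0 and run as root to actually issue them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run mdadm /dev/md0 --re-add /dev/sdb1   # 50 GB mirror, resyncs quickly
run mdadm /dev/md1 --re-add /dev/sdb2   # 2 GB mirror
# The 1.4 TB third partition was deliberately left out:
# run mdadm /dev/md2 --re-add /dev/sdb3
```

With the write-intent bitmap shown in /proc/mdstat, `--re-add` only resyncs the blocks marked dirty in the bitmap rather than the whole partition, which is why the small arrays finish almost immediately.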

Today I booted the server and mdadm re-added the third partition by itself. At first I thought I had done it, but the command is nowhere in my shell history. Could this be some hidden mdadm option/feature? Today's grep of /var/log/messages looks very strange:

2014-08-31T11:31:42.388226+02:00 cube kernel: [   13.240221] md: bind<sda3>
2014-08-31T11:31:42.388234+02:00 cube kernel: [   13.294663] md/raid1:md2: active with 1 out of 2 mirrors
2014-08-31T11:31:42.388238+02:00 cube kernel: [   13.382804] created bitmap (11 pages) for device md2
2014-08-31T11:31:42.388243+02:00 cube kernel: [   13.387596] md2: bitmap initialized from disk: read 1 pages, set 859 of 21525 bits
2014-08-31T11:31:42.388248+02:00 cube kernel: [   13.433121] md2: detected capacity change from 0 to 1444467769344
2014-08-31T11:31:42.388253+02:00 cube kernel: [   13.445851]  md2: unknown partition table
2014-08-31T11:31:42.388258+02:00 cube kernel: [   13.448115] md: bind<sdb3>
2014-08-31T11:31:42.388276+02:00 cube kernel: [   13.448230] RAID1 conf printout:
2014-08-31T11:31:42.388283+02:00 cube kernel: [   13.448237]  --- wd:1 rd:2
2014-08-31T11:31:42.388288+02:00 cube kernel: [   13.448245]  disk 0, wo:0, o:1, dev:sda3
2014-08-31T11:31:42.388292+02:00 cube kernel: [   13.448252]  disk 1, wo:1, o:1, dev:sdb3
2014-08-31T11:31:42.388297+02:00 cube kernel: [   13.448266] RAID1 conf printout:
2014-08-31T11:31:42.388302+02:00 cube kernel: [   13.448271]  --- wd:2 rd:2
2014-08-31T11:31:42.388306+02:00 cube kernel: [   13.448277]  disk 0, wo:0, o:1, dev:sda3
2014-08-31T11:31:42.388374+02:00 cube kernel: [   13.448284]  disk 1, wo:0, o:1, dev:sdb3
