RAID rebuild appears to have stopped

My server runs a RAID 1 array with two disks. One of the disks failed today and has been replaced.

I replicated the GPT partition table from the surviving disk (sdb) onto the new disk (sda) with:

sgdisk -R /dev/sda /dev/sdb

and randomized the GUIDs with:

sgdisk -G /dev/sda
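
If you want to sanity-check the replication before re-adding anything, printing both partition tables and comparing them is a quick way to do it (an optional step, using sgdisk's -p print flag):

sgdisk -p /dev/sda

sgdisk -p /dev/sdb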

Then I added both partitions to their RAID arrays:

mdadm /dev/md4 -a /dev/sda4

mdadm /dev/md5 -a /dev/sda5

/dev/md4 rebuilt correctly, but /dev/md5 did not.

When I ran cat /proc/mdstat shortly after those commands, it showed this:

Personalities : [raid1]
md5 : active raid1 sda5[2] sdb5[1]
      2820667711 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (2109952/2820667711) finish=423.0min speed=111050K/sec

md4 : active raid1 sda4[2] sdb4[0]
      15727544 blocks super 1.2 [2/2] [UU]

unused devices: <none>

That's as expected; it was rebuilding md5. But a few minutes later it stopped, and cat /proc/mdstat now returns:

Personalities : [raid1]
md5 : active raid1 sda5[2](S) sdb5[1]
      2820667711 blocks super 1.2 [2/1] [_U]

md4 : active raid1 sda4[2] sdb4[0]
      15727544 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Why did it stop rebuilding onto the new disk? Here is what I get when I run mdadm --detail /dev/md5:

    /dev/md5:
        Version : 1.2
  Creation Time : Sun Sep 16 15:26:58 2012
     Raid Level : raid1
     Array Size : 2820667711 (2690.00 GiB 2888.36 GB)
  Used Dev Size : 2820667711 (2690.00 GiB 2888.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Dec 27 04:01:26 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : rescue:5  (local to host rescue)
           UUID : 29868a4d:f63c6b43:ee926581:fd775604
         Events : 5237753

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       21        1      active sync   /dev/sdb5

       2       8        5        -      spare   /dev/sda5

Update: thanks to @Michael Hampton for his answer. I'm back after a night's sleep :-) I checked dmesg and got the following:

[Sat Dec 27 04:01:04 2014] md: recovery of RAID array md5
[Sat Dec 27 04:01:04 2014] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[Sat Dec 27 04:01:04 2014] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[Sat Dec 27 04:01:04 2014] md: using 128k window, over a total of 2820667711k.
[Sat Dec 27 04:01:04 2014] RAID1 conf printout:
[Sat Dec 27 04:01:04 2014]  --- wd:2 rd:2
[Sat Dec 27 04:01:04 2014]  disk 0, wo:0, o:1, dev:sdb4
[Sat Dec 27 04:01:04 2014]  disk 1, wo:0, o:1, dev:sda4
[Sat Dec 27 04:01:21 2014] ata2.00: exception Emask 0x0 SAct 0x1e000 SErr 0x0 action 0x0
[Sat Dec 27 04:01:21 2014] ata2.00: irq_stat 0x40000008
[Sat Dec 27 04:01:21 2014] ata2.00: cmd 60/80:68:00:12:51/03:00:0d:00:00/40 tag 13 ncq 458752 in
[Sat Dec 27 04:01:21 2014]          res 41/40:80:68:14:51/00:03:0d:00:00/00 Emask 0x409 (media error) <F>
[Sat Dec 27 04:01:21 2014] ata2.00: configured for UDMA/133
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb] Unhandled sense code
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:21 2014] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:21 2014] Sense Key : Medium Error [current] [descriptor]
[Sat Dec 27 04:01:21 2014] Descriptor sense data with sense descriptors (in hex):
[Sat Dec 27 04:01:21 2014]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[Sat Dec 27 04:01:21 2014]         0d 51 14 68 
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:21 2014] Add. Sense: Unrecovered read error - auto reallocate failed
[Sat Dec 27 04:01:21 2014] sd 1:0:0:0: [sdb] CDB: 
[Sat Dec 27 04:01:21 2014] Read(16): 88 00 00 00 00 00 0d 51 12 00 00 00 03 80 00 00
[Sat Dec 27 04:01:21 2014] end_request: I/O error, dev sdb, sector 223417448
[Sat Dec 27 04:01:21 2014] ata2: EH complete
[Sat Dec 27 04:01:24 2014] ata2.00: exception Emask 0x0 SAct 0x8 SErr 0x0 action 0x0
[Sat Dec 27 04:01:24 2014] ata2.00: irq_stat 0x40000008
[Sat Dec 27 04:01:24 2014] ata2.00: cmd 60/08:18:68:14:51/00:00:0d:00:00/40 tag 3 ncq 4096 in
[Sat Dec 27 04:01:24 2014]          res 41/40:08:68:14:51/00:00:0d:00:00/00 Emask 0x409 (media error) <F>
[Sat Dec 27 04:01:24 2014] ata2.00: configured for UDMA/133
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb] Unhandled sense code
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:24 2014] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:24 2014] Sense Key : Medium Error [current] [descriptor]
[Sat Dec 27 04:01:24 2014] Descriptor sense data with sense descriptors (in hex):
[Sat Dec 27 04:01:24 2014]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[Sat Dec 27 04:01:24 2014]         0d 51 14 68 
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb]  
[Sat Dec 27 04:01:24 2014] Add. Sense: Unrecovered read error - auto reallocate failed
[Sat Dec 27 04:01:24 2014] sd 1:0:0:0: [sdb] CDB: 
[Sat Dec 27 04:01:24 2014] Read(16): 88 00 00 00 00 00 0d 51 14 68 00 00 00 08 00 00
[Sat Dec 27 04:01:24 2014] end_request: I/O error, dev sdb, sector 223417448
[Sat Dec 27 04:01:24 2014] ata2: EH complete
[Sat Dec 27 04:01:24 2014] md/raid1:md5: sdb: unrecoverable I/O read error for block 4219904
[Sat Dec 27 04:01:24 2014] md: md5: recovery interrupted.
[Sat Dec 27 04:01:24 2014] RAID1 conf printout:
[Sat Dec 27 04:01:24 2014]  --- wd:1 rd:2
[Sat Dec 27 04:01:24 2014]  disk 0, wo:1, o:1, dev:sda5
[Sat Dec 27 04:01:24 2014]  disk 1, wo:0, o:1, dev:sdb5
[Sat Dec 27 04:01:24 2014] RAID1 conf printout:
[Sat Dec 27 04:01:24 2014]  --- wd:1 rd:2
[Sat Dec 27 04:01:24 2014]  disk 1, wo:0, o:1, dev:sdb5

So it does look like a read error. But the SMART data doesn't seem too bad (if I'm reading it correctly):

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   088   087   006    Pre-fail  Always       -       154455820
  3 Spin_Up_Time            0x0003   096   096   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       5
  5 Reallocated_Sector_Ct   0x0033   084   084   036    Pre-fail  Always       -       21664
  7 Seek_Error_Rate         0x000f   072   060   030    Pre-fail  Always       -       38808769144
  9 Power_On_Hours          0x0032   071   071   000    Old_age   Always       -       26073
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       5
183 Runtime_Bad_Block       0x0032   099   099   000    Old_age   Always       -       1
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   001   001   000    Old_age   Always       -       721
188 Command_Timeout         0x0032   100   099   000    Old_age   Always       -       4295032833
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   063   061   045    Old_age   Always       -       37 (Min/Max 33/37)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       3
193 Load_Cycle_Count        0x0032   095   095   000    Old_age   Always       -       10183
194 Temperature_Celsius     0x0022   037   040   000    Old_age   Always       -       37 (0 21 0 0)
197 Current_Pending_Sector  0x0012   088   088   000    Old_age   Always       -       2072
198 Offline_Uncorrectable   0x0010   088   088   000    Old_age   Offline      -       2072
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       157045479198210
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       4435703883570
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       5487937263078

SMART Error Log Version: 1
ATA Error Count: 6 (device log contains only the most recent five errors)
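
In case it helps, a long SMART self-test might give a clearer verdict on those pending sectors; a sketch, assuming smartmontools is installed (the second command reads back the results once the test finishes):

smartctl -t long /dev/sdb

smartctl -l selftest /dev/sdb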

Anyway, thanks for your answer. And yes, if I were setting this server up again, I definitely would not use multiple partitions for my RAID arrays (in this case md5 actually even has LVM on top of it).

Thanks,

Answer 1

It looks like you physically removed the failed disk without Linux being fully aware of it, so when you added the new disk, it got marked as a spare (the system is still waiting for you to put the old one back). Most likely /dev/md4 actually failed and Linux noticed, but since /dev/md5 is a separate array that never failed itself, Linux still considered it fine.

To recover from this situation, you need to tell the system to start using the spare and to forget about the removed disk.

First, grow the array's device count so that it is able to make use of the spare device:

mdadm --grow /dev/md5 --raid-devices=3

At this point it should begin syncing to the spare, which will be listed as spare rebuilding in mdadm --detail, and you should see the sync operation in /proc/mdstat.
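
One easy way to keep an eye on the progress, for instance (any polling interval will do):

watch -n 5 cat /proc/mdstat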

When the sync has finished, tell mdadm to forget the device that no longer exists (the detached keyword matches any member whose underlying device has disappeared from the system):

mdadm --remove /dev/md5 detached

Finally, set the number of devices back to 2:

mdadm --grow /dev/md5 --raid-devices=2

I can't tell for certain how your system got into this state, but one possibility is that your other disk hit a read error, which caused the resync to stop and left things in this failed state. If that's what happened, you will see log entries about it in dmesg from around the time the sync operation died. If so, you'll need some deeper magic (please update your question if this is the case) and may want to have your backups ready.
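
For what it's worth, the usual "deeper magic" in this situation is to force the drive to reallocate the pending sector by overwriting it, accepting the loss of that one sector's contents, and then retrying the rebuild. A rough sketch, assuming the failing sector really is 223417448 as your dmesg output reports (the write step is destructive to that sector, so double-check the number and have backups first):

# confirm the sector is in fact unreadable
hdparm --read-sector 223417448 /dev/sdb

# overwrite it with zeros so the drive can reallocate it (DESTRUCTIVE)
hdparm --yes-i-know-what-i-am-doing --write-sector 223417448 /dev/sdb

After that, the md layer should be able to read past that spot on the next rebuild attempt.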


You may also want to read this almost identical question on Super User, as it contains some other possible solutions.


Finally, best practice is to use either whole disks as RAID array members, or at most a single partition per disk, and then divide up the resulting RAID block device with LVM if necessary. That configuration would have avoided this problem.
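
As an illustrative sketch of that layout (the device names, volume group name, and sizes below are placeholders, not taken from your system):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

pvcreate /dev/md0

vgcreate vg0 /dev/md0

lvcreate -L 15G -n system vg0

lvcreate -l 100%FREE -n data vg0

With this arrangement there is only one array to rebuild, so a replaced disk can't end up half-synced the way separate per-partition arrays can.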
