I'm having trouble with a RAID array in my server (Ubuntu 10.04).
I have a raid5 array of 4 disks - sd[cdef] - created like this:
# partition disks
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary ext2 1 2000GB
parted /dev/sdc set 1 raid on
# create array
mdadm --create -v --level=raid5 --raid-devices=4 /dev/md2 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
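For context, an array created this way is typically also recorded in `/etc/mdadm/mdadm.conf` so that it assembles under the same name at boot; if that entry is missing or stale, Ubuntu's boot-time scan can assemble members into a stray `md_dN` device instead. A minimal sketch, assuming Ubuntu's default config path:

```shell
# Append the array definition (identified by UUID) to mdadm's config,
# then rebuild the initramfs so the boot-time assembly uses it.
mdadm --detail --scan | grep '/dev/md2' >> /etc/mdadm/mdadm.conf
update-initramfs -u
```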
It's been running fine for months.
I just applied system updates and rebooted, but the raid5 - /dev/md2
- didn't come back up at boot. When I re-assemble it with mdadm --assemble --scan
, it only comes up with 3 member drives - sdf1 is missing. Here's what I've been able to find:
(Side note: md0 and md1 are raid-1 arrays built across a couple of drives, for / and swap respectively.)
root@dwight:~# mdadm --query --detail /dev/md2
/dev/md2:
Version : 00.90
Creation Time : Sun Feb 20 23:52:28 2011
Raid Level : raid5
Array Size : 5860540224 (5589.05 GiB 6001.19 GB)
Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Fri Apr 8 22:10:38 2011
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 1bb282b6:fe549071:3bf6c10c:6278edbc (local to host dwight)
Events : 0.140
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
2 8 65 2 active sync /dev/sde1
3 0 0 3 removed
(Yes, the server is named dwight; I'm a fan of The Office :))
So it thinks one drive (well, partition) is missing, namely /dev/sdf1.
root@dwight:~# mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=c7dbadaa:7762dbf7:beb6b904:6d3aed07
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=1784e912:d84242db:3bf6c10c:6278edbc
mdadm: md device /dev/md/d2 does not appear to be active.
ARRAY /dev/md2 level=raid5 num-devices=4 metadata=00.90 UUID=1bb282b6:fe549071:3bf6c10c:6278edbc
Wait, what, /dev/md/d2? What is /dev/md/d2? I didn't create that.
root@dwight:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sdc1[0] sde1[2] sdd1[1]
5860540224 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
md_d2 : inactive sdf1[3](S)
1953513408 blocks
md1 : active raid1 sdb2[1] sda2[0]
18657728 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
469725120 blocks [2/2] [UU]
unused devices: <none>
Same thing there. md_d2? sd[cde]1 are correctly in md2, but sdf1 is missing (and it apparently thinks it should be its own array?)
root@dwight:~# mdadm -v --examine /dev/sdf1
/dev/sdf1:
Magic : a92b4efc
Version : 00.90.00
UUID : 1bb282b6:fe549071:3bf6c10c:6278edbc (local to host dwight)
Creation Time : Sun Feb 20 23:52:28 2011
Raid Level : raid5
Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
Array Size : 5860540224 (5589.05 GiB 6001.19 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Update Time : Fri Apr 8 21:40:42 2011
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 71136469 - correct
Events : 114
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 81 3 active sync /dev/sdf1
0 0 8 33 0 active sync /dev/sdc1
1 1 8 49 1 active sync /dev/sdd1
2 2 8 65 2 active sync /dev/sde1
3 3 8 81 3 active sync /dev/sdf1
...so sdf1 thinks it's part of the md2 device, right?
When I run the same command on /dev/sdc1, I get:
root@dwight:~# mdadm -v --examine /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : 1bb282b6:fe549071:3bf6c10c:6278edbc (local to host dwight)
Creation Time : Sun Feb 20 23:52:28 2011
Raid Level : raid5
Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
Array Size : 5860540224 (5589.05 GiB 6001.19 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 2
Update Time : Fri Apr 8 22:50:03 2011
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 71137458 - correct
Events : 144
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 33 0 active sync /dev/sdc1
0 0 8 33 0 active sync /dev/sdc1
1 1 8 49 1 active sync /dev/sdd1
2 2 8 65 2 active sync /dev/sde1
3 3 0 0 3 faulty removed
When I try to add sdf1 back to the /dev/md2 array, I get a busy error:
root@dwight:~# mdadm --add /dev/md2 /dev/sdf1
mdadm: Cannot open /dev/sdf1: Device or resource busy
Help! How do I get sdf1 back into the md2 array?
Thanks,
- Ben
Answer 1
Run mdadm -S /dev/md_d2
to stop the stray md_d2 array, then try adding sdf1 again.
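Spelled out, a plausible recovery sequence (device names as in the question; run as root) would look like this:

```shell
# Stop the inactive md_d2 array that is holding sdf1 busy
# (-S is shorthand for --stop)
mdadm --stop /dev/md_d2

# Re-add the partition to the real array; a rebuild starts automatically
mdadm --add /dev/md2 /dev/sdf1

# Watch the resync progress until [UUU_] becomes [UUUU]
cat /proc/mdstat
```

Once the resync finishes, making sure `/etc/mdadm/mdadm.conf` lists md2 by UUID should keep the half-assembled md_d2 from reappearing at the next boot.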