I am trying to rebuild a RAID 1 (dmraid + isw) without success. I replaced the failed disk with a new one, and the BIOS automatically added it to the RAID. Running kernel 2.6.18-194.17.4.el5.
dmraid -r
/dev/sda: isw, "isw_babcjifefe", GROUP, ok, 1953525165 sectors, data@ 0
/dev/sdb: isw, "isw_babcjifefe", GROUP, ok, 1953525165 sectors, data@ 0
dmraid -s
*** Group superset isw_babcjifefe
--> Subset
name : isw_babcjifefe_Raid0
size : 1953519616
stride : 128
type : mirror
status : nosync
subsets: 0
devs : 2
spares : 0
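To cross-check what device-mapper itself currently sees, a few read-only inspection commands can help; the set name below is taken from the dmraid -s output above:
dmsetup ls                            # list every mapped device
dmsetup status                        # mirror targets report their sync state here
dmsetup table isw_babcjifefe_Raid0    # dump the table line, if the set was ever activated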
When I try to start the RAID, I get the following errors:
dmraid -f isw -S -M /dev/sdb
ERROR: isw: SPARE disk must use all space on the disk
dmraid -tay
isw_babcjifefe_Raid0: 0 1953519616 mirror core 3 131072 sync block_on_error 2 /dev/sda 0 /dev/sdb 0
dmraid -ay
RAID set "isw_babcjifefe_Raid0" was not activated
ERROR: device "isw_babcjifefe_Raid0" could not be found
dmraid -f isw -S -M /dev/sdb
ERROR: isw: SPARE disk must use all space on the disk
dmraid -R isw_babcjifefe_Raid0 /dev/sdb
ERROR: disk /dev/sdb cannot be used to rebuilding
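One workaround sometimes suggested for this pair of errors (not verified here) is that dmraid refuses a replacement disk that still carries an old partition table or stale isw metadata, so wiping /dev/sdb first may let the rebuild start. This is destructive to /dev/sdb; only try it if that disk really is the empty replacement:
dd if=/dev/zero of=/dev/sdb bs=512 count=1    # clear the MBR/partition table on the new disk
dmraid -rE /dev/sdb                           # erase any leftover isw metadata on /dev/sdb
dmraid -R isw_babcjifefe_Raid0 /dev/sdb       # retry the rebuild against the cleaned disk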
Messages:
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
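To get more detail on which leg the mirror target fails to open, dmraid can be run with its verbose and debug switches raised (both may be repeated); a sketch:
dmraid -ay -v -v -d -d    # verbose, debug-level activation attempt
dmraid -tay               # print the table it would load without creating the device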
Disks:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
LVM:
PV /dev/sda5 VG storage lvm2 [914.64 GB / 28.64 GB free]
Total: 1 [914.64 GB] / in use: 1 [914.64 GB] / in no VG: 0 [0 ]
Reading all physical volumes. This may take a while...
Found volume group "storage" using metadata type lvm2
ACTIVE '/dev/storage/home' [68.00 GB] inherit
ACTIVE '/dev/storage/home2' [68.00 GB] inherit
ACTIVE '/dev/storage/home3' [68.00 GB] inherit
ACTIVE '/dev/storage/home4' [68.00 GB] inherit
ACTIVE '/dev/storage/home5' [68.00 GB] inherit
ACTIVE '/dev/storage/var' [15.00 GB] inherit
ACTIVE '/dev/storage/mysql' [20.00 GB] inherit
ACTIVE '/dev/storage/pgsql' [7.00 GB] inherit
ACTIVE '/dev/storage/exim' [12.00 GB] inherit
ACTIVE '/dev/storage/apache' [25.00 GB] inherit
ACTIVE '/dev/storage/tmp' [2.00 GB] inherit
ACTIVE '/dev/storage/backup' [450.00 GB] inherit
ACTIVE '/dev/storage/log' [15.00 GB] inherit
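One thing worth checking (a guess, not a confirmed diagnosis): the volume group sits directly on /dev/sda5, so the surviving disk is already held open by LVM, and the mirror target's "Device lookup failure" can occur when one of the legs cannot be opened. A read-only way to see what depends on the disk:
pvs -o pv_name,vg_name            # confirm which physical device the VG really uses
dmsetup deps                      # major:minor pairs each LV depends on; (8, 5) = /dev/sda5
grep storage /proc/mounts         # which of those LVs are currently mounted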
Answer 1
I agree with Andrew.
Hopefully you can run mount -o ro /dev/sd
Then copy the data off that drive and start over with mdadm and a complete, reliable, easy-to-use, fast software RAID.
Good luck.
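A minimal sketch of that rescue path, assuming the logical volumes listed above still activate from the surviving disk; /mnt/rescue and /backup are hypothetical mount points you would create yourself:
mkdir -p /mnt/rescue
mount -o ro /dev/storage/home /mnt/rescue    # mount one logical volume read-only
rsync -aH /mnt/rescue/ /backup/home/         # copy it off to separate storage
umount /mnt/rescue                           # repeat for the other LVs in the list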
Answer 2
I'm not sure dmraid supports rebuilding fakeraid (such as isw). I would suggest removing the isw metadata and building a pure software RAID with mdadm.
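A rough outline of that route, to be run only after every logical volume has been copied somewhere safe, since erasing the isw metadata and re-creating the array destroys the contents of both disks; the device names and the single whole-disk layout are assumptions, adjust to your partitioning:
dmraid -an                                    # deactivate any dmraid-managed sets
dmraid -rE /dev/sda /dev/sdb                  # erase the isw metadata from both disks (destructive)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --detail --scan >> /etc/mdadm.conf      # record the array so it assembles at boot
pvcreate /dev/md0                             # rebuild LVM on top of the md mirror, then restore the data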