We have a Rocks 5.5 distribution on our cluster that we want to upgrade to 6.2. The "/export" partition is mounted on an Intel software RAID 0 array built from two identical hard drives (HDs), /dev/sdb and /dev/sdc; the remaining partitions, i.e. "/", "/var", and "swap", sit on a separate (boot) HD, /dev/sda.
The first upgrade attempt failed with a warning that the installer had found RAID metadata on the HD and could not proceed. I naively got past this by running:
dmraid -r -E /dev/sda
The error did not appear again and I was able to run the upgrade. Using manual partitioning, I formatted the boot HD, left the RAID array unformatted, and mounted it again as "/export".
Once the installation finished, the boot process failed with:
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
/export1: The filesystem size (according to the superblock) is 488378000 blocks
The physical size of the device is 244190638 blocks
Either the superblock or partition table is likely to be corrupt!
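If I read the fsck numbers right (this is my own arithmetic, assuming the default 4 KiB ext2 block size), the mismatch looks consistent with fsck having been pointed at a single 1 TB member disk instead of the assembled 2 TB stripe:

```python
# Sanity check on the fsck size mismatch (my own arithmetic, not from the logs).
# Assumes the ext2 filesystem uses the default 4 KiB block size.

fs_blocks = 488378000        # size according to the superblock
dev_blocks = 244190638       # physical size of the device fsck saw
disk_bytes = 1000204886016   # size of one member disk, per fdisk

# The superblock reports almost exactly twice the device size, i.e. the
# filesystem spans the full 2-disk RAID 0 stripe...
print(fs_blocks / dev_blocks)                 # ~2.0

# ...while the device fsck actually checked is the size of a single
# 1 TB member (matches to within a few tens of KiB of metadata/rounding):
print(dev_blocks * 4096)                      # ~1.0 TB
print(disk_bytes)
```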
Booting Rocks in "rescue" mode, we were able to mount the drive again, although it reported that the RAID partition had not been cleanly unmounted.
"dmraid" shows the RAID array, but with errors:
$ dmraid -r
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
/dev/sdc: isw, "isw_eecceiche", GROUP, ok, 1953525166 sectors, data@ 0
/dev/sdb: isw, "isw_eecceiche", GROUP, ok, 1953525166 sectors, data@ 0
$ dmraid -s
ERROR: ddf1: Cannot find physical drive description on /dev/sdc!
ERROR: ddf1: setting up RAID device /dev/sdc
ERROR: ddf1: Cannot find physical drive description on /dev/sdb!
ERROR: ddf1: setting up RAID device /dev/sdb
*** Group superset isw_eecceiche
--> Active Subset
name : isw_eecceiche_Volume0
size : 3907038720
stride : 256
type : stripe
status : ok
subsets: 0
devs : 2
spares : 0
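The size that "dmraid -s" reports does at least appear self-consistent with the fdisk output further below (again my own cross-check, assuming the "size" field is in 512-byte sectors):

```python
# Cross-check (my own arithmetic): the dmraid subset size, taken as
# 512-byte sectors, should equal the byte size fdisk reports for the
# assembled mapper device isw_eecceiche_Volume0.
subset_sectors = 3907038720      # "size" from `dmraid -s`
mapper_bytes = 2000403824640     # from `fdisk` for the mapper device
print(subset_sectors * 512 == mapper_bytes)
```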
This is the "fstab":
#
# /etc/fstab
# Created by anaconda on Tue Sep 15 17:35:11 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=90471019-650c-4901-a8f1-e8cce3fbc059 / ext4 defaults 1 1
UUID=5dae925e-6e01-4442-8f5b-07bfbde7ff09 /export ext2 defaults 1 2
UUID=18303228-189f-4fa3-9661-71786323d70d /var ext4 defaults 1 2
UUID=d14c42ec-e41a-4dbd-b6b8-60afb4aa1b14 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
# The ram-backed filesystem for ganglia RRD graph databases.
tmpfs /var/lib/ganglia/rrds tmpfs size=2045589000,gid=nobody,uid=nobody,defaults 1 0
and "blkid":
/dev/loop0: TYPE="squashfs"
/dev/sda1: UUID="90471019-650c-4901-a8f1-e8cce3fbc059" TYPE="ext4"
/dev/sda2: UUID="18303228-189f-4fa3-9661-71786323d70d" TYPE="ext4"
/dev/sda3: UUID="d14c42ec-e41a-4dbd-b6b8-60afb4aa1b14" TYPE="swap"
/dev/sdb: UUID="M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?" TYPE="ddf_raid_member"
/dev/sdc: UUID="M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?M-^?" TYPE="ddf_raid_member"
/dev/sde1: LABEL="Expansion Drive" UUID="BC448C59448C1872" TYPE="ntfs"
/dev/sdd1: UUID="66F2-41D7" TYPE="vfat"
/dev/mapper/isw_eecceiche_Volume0p1: LABEL="/export1" UUID="5dae925e-6e01-4442-8f5b-07bfbde7ff09" TYPE="ext2"
and the "fdisk" output:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x44f45cd4
Device Boot Start End Blocks Id System
/dev/sda1 * 1 111403 894841856 83 Linux
/dev/sda2 111403 119562 65536000 83 Linux
/dev/sda3 119562 121602 16382976 82 Linux swap / Solaris
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00045387
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 243201 1953512001 83 Linux
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
6 heads, 1 sectors/track, 325587528 cylinders
Units = cylinders of 6 * 512 = 3072 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
This doesn't look like a partition table
Probably you selected the wrong device.
Device Boot Start End Blocks Id System
/dev/sdc1 22094 22341 743+ cf Unknown
/dev/sdc2 ? 1 1 0 0 Empty
Partition 2 does not end on cylinder boundary.
/dev/sdc3 357936035 357936283 743+ cf Unknown
/dev/sdc4 1 1 0 0 Empty
Partition 4 does not end on cylinder boundary.
Disk /dev/mapper/isw_eecceiche_Volume0: 2000.4 GB, 2000403824640 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x00045387
Device Boot Start End Blocks Id System
/dev/mapper/isw_eecceiche_Volume0p1 * 1 243201 1953512001 83 Linux
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/isw_eecceiche_Volume0p1: 2000.4 GB, 2000396289024 bytes
255 heads, 63 sectors/track, 243200 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Alignment offset: 98816 bytes
Disk identifier: 0x00000000
We can still access the "/export" partition in "rescue" mode, and its data remains available.
I would like to know whether there is a way to rebuild the RAID metadata without formatting the array or deleting and recreating it.
Any help in solving this problem would be greatly appreciated.