Recently I bit the bullet and upgraded my OS from Fedora 11 to Fedora 15, and I have been struggling to figure out why Fedora 15 cannot see the raid setup I created under Fedora 11. I figure I must be missing something, so I am turning to the collective wisdom here.
When I upgraded I used a new boot drive for Fedora 15, so I can physically swap boot drives and boot into either Fedora 11 or Fedora 15. Fedora 11 still sees the raid and everything works fine. Fedora 15 shows something very strange.
[Edited to add the output requested by @psusi]
On Fedora 11
I have a regular boot drive (/dev/sda) and an lvm built on top of a raid 5 (/dev/sdb, /dev/sdc, /dev/sdd).
Specifically, the raid disk /dev/md/127_0 is built from /dev/sdb1, /dev/sdc1 and /dev/sdd1, where each partition takes up the entire disk.
The boot drive's volume group (/dev/vg_localhost/) is irrelevant. The volume group I created on the raid disk is called /dev/lvm-tb-storage/.
Below is the configuration I pulled from the system (mdadm, pvscan, lvscan, etc.):
[root@localhost ~]# cat /etc/mdadm.conf
[root@localhost ~]# pvscan
PV /dev/md127 VG lvm-tb-storage lvm2 [1.82 TB / 0 free]
PV /dev/sda5 VG vg_localhost lvm2 [61.44 GB / 0 free]
Total: 2 [1.88 TB] / in use: 2 [1.88 TB] / in no VG: 0 [0 ]
[root@localhost ~]# lvscan
ACTIVE '/dev/lvm-tb-storage/tb' [1.82 TB] inherit
ACTIVE '/dev/vg_localhost/lv_root' [54.68 GB] inherit
ACTIVE '/dev/vg_localhost/lv_swap' [6.77 GB] inherit
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name lvm-tb-storage
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TB
PE Size 4.00 MB
Total PE 476839
Alloc PE / Size 476839 / 1.82 TB
Free PE / Size 0 / 0
VG UUID wqIXsb-KRZQ-eRnH-JvuP-VdHk-XJTG-DSWimc
--- Volume group ---
VG Name vg_localhost
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 61.44 GB
PE Size 4.00 MB
Total PE 15729
Alloc PE / Size 15729 / 61.44 GB
Free PE / Size 0 / 0
VG UUID IVIpCV-C4qg-Lii7-zwkz-P3si-MXAZ-WYUSe6
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "lvm-tb-storage" using metadata type lvm2
Found volume group "vg_localhost" using metadata type lvm2
[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba
[root@localhost ~]# ls -al /dev/md
total 0
drwxr-xr-x. 2 root root 60 2011-09-13 03:14 .
drwxr-xr-x. 19 root root 5180 2011-09-13 03:15 ..
lrwxrwxrwx. 1 root root 8 2011-09-13 03:14 127_0 -> ../md127
[root@localhost ~]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
Version : 0.90
Creation Time : Wed Nov 5 18:26:25 2008
Raid Level : raid5
Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 127
Persistence : Superblock is persistent
Update Time : Tue Sep 13 03:28:51 2011
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
Events : 0.671154
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 49 1 active sync /dev/sdd1
2 8 33 2 active sync /dev/sdc1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sdb1[0] sdc1[2] sdd1[1]
1953134208 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 0.90.00
UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
Creation Time : Wed Nov 5 18:26:25 2008
Raid Level : raid5
Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 127
Update Time : Tue Sep 13 03:29:50 2011
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : f1ddf826 - correct
Events : 671154
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 49 1 active sync /dev/sdd1
2 2 8 33 2 active sync /dev/sdc1
[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/sda: 250.0 GB, 250000000000 bytes
255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000080
Device Boot Start End Blocks Id System
/dev/sda1 63 610469 305203+ 83 Linux
/dev/sda2 610470 359004554 179197042+ 83 Linux
/dev/sda3 * 359004555 359414154 204800 83 Linux
/dev/sda4 359422245 488279609 64428682+ 5 Extended
/dev/sda5 359422308 488278371 64428032 8e Linux LVM
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xb03e1980
Device Boot Start End Blocks Id System
/dev/sdb1 63 1953134504 976567221 da Non-FS data
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7db522d5
Device Boot Start End Blocks Id System
/dev/sdc1 63 1953134504 976567221 da Non-FS data
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x20af5840
Device Boot Start End Blocks Id System
/dev/sdd1 63 1953134504 976567221 da Non-FS data
Disk /dev/dm-0: 58.7 GB, 58707673088 bytes
255 heads, 63 sectors/track, 7137 cylinders, total 114663424 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000
Disk /dev/dm-1: 7264 MB, 7264534528 bytes
255 heads, 63 sectors/track, 883 cylinders, total 14188544 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000
Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000
Disk /dev/dm-2: 2000.0 GB, 2000007725056 bytes
255 heads, 63 sectors/track, 243153 cylinders, total 3906265088 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000
The kernel boot parameters I have:
kernel /vmlinuz-2.6.30.10-105.2.23.fc11.x86_64 ro root=/dev/mapper/vg_localhost-lv_root rhgb quiet
On Fedora 15
I installed Fedora 15 on a new boot drive; the installer also created an lvm (/dev/vg_20110912a/) for me on that drive, but again this is irrelevant.
Under Fedora 15, lvm (pvscan and vgscan) sees nothing except the irrelevant boot drive. mdadm, however, shows something very strange: the original raid has been split apart into multiple raids, combined in a very puzzling way.
[root@localhost ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
[root@localhost ~]# pvscan
PV /dev/sda2 VG vg_20110912a lvm2 [59.12 GiB / 0 free]
Total: 1 [59.12 GiB] / in use: 1 [59.12 GiB] / in no VG: 0 [0 ]
[root@localhost ~]# lvscan
ACTIVE '/dev/vg_20110912a/lv_home' [24.06 GiB] inherit
ACTIVE '/dev/vg_20110912a/lv_swap' [6.84 GiB] inherit
ACTIVE '/dev/vg_20110912a/lv_root' [28.22 GiB] inherit
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name vg_20110912a
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 59.12 GiB
PE Size 32.00 MiB
Total PE 1892
Alloc PE / Size 1892 / 59.12 GiB
Free PE / Size 0 / 0
VG UUID 8VRJyx-XSQp-13mK-NbO6-iV24-rE87-IKuhHH
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg_20110912a" using metadata type lvm2
[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/0_0 metadata=0.90 UUID=153e151b:8c717565:fd59f149:d2ea02c9
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba
[root@localhost ~]# ls -l /dev/md
total 4
lrwxrwxrwx. 1 root root 8 Sep 13 02:39 0_0 -> ../md127
lrwxrwxrwx. 1 root root 10 Sep 13 02:39 0_0p1 -> ../md127p1
lrwxrwxrwx. 1 root root 8 Sep 13 02:39 127_0 -> ../md126
-rw-------. 1 root root 120 Sep 13 02:39 md-device-map
[root@localhost ~]# cat /dev/md/md-device-map
md126 0.90 bebfd467:cb6700d9:29bdc0db:c30228ba /dev/md/127_0
md127 0.90 153e151b:8c717565:fd59f149:d2ea02c9 /dev/md/0_0
[root@localhost ~]# mdadm --detail /dev/md/0_0
/dev/md/0_0:
Version : 0.90
Creation Time : Tue Nov 4 21:45:19 2008
Raid Level : raid5
Array Size : 976762496 (931.51 GiB 1000.20 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 127
Persistence : Superblock is persistent
Update Time : Wed Nov 5 09:04:28 2008
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 153e151b:8c717565:fd59f149:d2ea02c9
Events : 0.2202
Number Major Minor RaidDevice State
0 8 48 0 active sync /dev/sdd
1 8 16 1 active sync /dev/sdb
[root@localhost ~]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
Version : 0.90
Creation Time : Wed Nov 5 18:26:25 2008
Raid Level : raid5
Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 126
Persistence : Superblock is persistent
Update Time : Tue Sep 13 00:39:51 2011
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
Events : 0.671154
Number Major Minor RaidDevice State
0 259 0 0 active sync /dev/md/0_0p1
1 0 0 1 removed
2 8 33 2 active sync /dev/sdc1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid5 md127p1[0] sdc1[2]
1953134208 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
md127 : active (auto-read-only) raid5 sdb[1] sdd[0]
976762496 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]
unused devices: <none>
[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 0.90.00
UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
Creation Time : Wed Nov 5 18:26:25 2008
Raid Level : raid5
Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 127
Update Time : Tue Sep 13 00:39:51 2011
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : f1ddd04f - correct
Events : 671154
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 49 1 active sync /dev/sdd1
2 2 8 33 2 active sync /dev/sdc1
[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/mapper/vg_20110912a-lv_swap doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_root doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_home doesn't contain a valid partition table
Disk /dev/sda: 64.0 GB, 64023257088 bytes
255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001aa2f
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 125044735 62009344 8e Linux LVM
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb03e1980
Device Boot Start End Blocks Id System
/dev/sdb1 63 1953134504 976567221 da Non-FS data
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7db522d5
Device Boot Start End Blocks Id System
/dev/sdc1 63 1953134504 976567221 da Non-FS data
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x20af5840
Device Boot Start End Blocks Id System
/dev/sdd1 63 1953134504 976567221 da Non-FS data
Disk /dev/mapper/vg_20110912a-lv_swap: 7348 MB, 7348420608 bytes
255 heads, 63 sectors/track, 893 cylinders, total 14352384 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_20110912a-lv_root: 30.3 GB, 30299652096 bytes
255 heads, 63 sectors/track, 3683 cylinders, total 59179008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk identifier: 0x00000000
Disk /dev/md126: 1000.2 GB, 1000204795904 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953524992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disk identifier: 0x20af5840
Device Boot Start End Blocks Id System
/dev/md126p1 63 1953134504 976567221 da Non-FS data
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/vg_20110912a-lv_home: 25.8 GB, 25836912640 bytes
255 heads, 63 sectors/track, 3141 cylinders, total 50462720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
My kernel boot parameters:
kernel /vmlinuz-2.6.40.4-5.fc15.x86_64 ro root=/dev/mapper/vg_20110912a-lv_root rd_LVM_LV=vg_20110912a/lv_root rd_LVM_LV=vg_20110912a/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet rdblacklist=nouveau nouveau.modeset=0 nodmraid
Finally, mdadm --examine /dev/sdb1 shows exactly the same result as under Fedora 11, but I do not understand why mdadm --detail /dev/md/0_0 shows only /dev/sdb and /dev/sdd, while mdadm --detail /dev/md/127_0 shows /dev/sdc1 and /dev/md/0_0p1.
Since mdadm --examine /dev/sdb1 shows the correct result, Fedora 15 can evidently access the raid somehow, but I do not know what to do about it. Should I create/assemble a new raid /dev/md2 and hope that the lvm I created magically shows up?
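To make the nesting explicit, here is a quick parse (just a sketch over the two active-array lines quoted from /proc/mdstat above) showing that one raid has been stacked on top of the other:

```python
# Parse the two active-array lines from the Fedora 15 /proc/mdstat
# output above to show how the arrays are nested.
import re

mdstat = """\
md126 : active (auto-read-only) raid5 md127p1[0] sdc1[2]
md127 : active (auto-read-only) raid5 sdb[1] sdd[0]
"""

arrays = {}
for line in mdstat.splitlines():
    name, rest = line.split(" : ", 1)
    # member devices appear as name[slot], e.g. sdc1[2]
    arrays[name] = re.findall(r"(\S+)\[\d+\]", rest)

print(arrays)
# md126 is assembled from sdc1 plus md127p1 -- a *partition of the
# other md array* -- which is why the combination looks so puzzling.
```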
Thanks in advance.
Answer 1
It looks like you have some old, stale raid superblocks lying around. The array you are using has 3 disks, uuid bebfd467:cb6700d9:29bdc0db:c30228ba, and was created on November 5, 2008. Fedora 15 has recognized another raid array with only two disks, created the day before, which used the whole disks rather than the first partitions. Fedora 15 apparently activated that old raid array and then tried to use it as one of the components of the correct array, which led to the mess.
I think you need to destroy the old, bogus superblocks:
mdadm --zero-superblock /dev/sdb /dev/sdd
You do have current backups, right? ;)
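As a sanity check that zeroing the whole-disk superblocks cannot touch the real array's metadata: a 0.90 superblock lives in the last 64 KiB-aligned 64 KiB block of its device, so the stale superblock at the end of /dev/sdb and the live one at the end of /dev/sdb1 sit at different offsets. A rough sketch, using the sizes from the fdisk output in the question (the placement formula mirrors the kernel's 0.90 layout):

```python
# Offset of a 0.90 md superblock: the last 64 KiB-aligned 64 KiB block
# of the device (the kernel's MD_NEW_SIZE_SECTORS rule, in bytes).
RESERVED = 64 * 1024

def sb_offset(device_bytes):
    return (device_bytes & ~(RESERVED - 1)) - RESERVED

# Sizes taken from the fdisk output in the question:
disk_bytes = 1000204886016        # /dev/sdb, whole disk
part_start = 63 * 512             # /dev/sdb1 begins at sector 63
part_bytes = 976567221 * 1024     # /dev/sdb1 (fdisk "Blocks" are 1 KiB)

stale_sb = sb_offset(disk_bytes)               # old whole-disk superblock
live_sb = part_start + sb_offset(part_bytes)   # /dev/sdb1's superblock,
                                               # as an absolute disk offset

print(stale_sb)  # 1000204795904 -- exactly the /dev/md126 size fdisk
                 # reports above, confirming md126 is the stale
                 # whole-disk array
print(stale_sb - live_sb)  # the two superblocks are ~190 MiB apart,
                           # so zeroing one cannot clobber the other
```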