After upgrading to SLES11 SP1 - LVM on software RAID5 no longer works

I performed a fresh install of SLES11 SP1 on a system that had been running OpenSuSE 11.1 for some time. The system uses a software RAID5 array with an LVM set up on top of it, containing a roughly 2.5 TB partition that is mounted at /data, among other things.

The problem is that SLES11 SP1 does not recognize the software RAID, so I can no longer mount the LVM.
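
To make clear what I mean by mounting the LVM, the sequence would be something like the following (data_lv is only a placeholder, the actual LV name does not appear in the output below):

$ vgchange -ay vg001                 # activate the volume group
$ mount /dev/vg001/data_lv /data     # "data_lv" is a placeholder, not the real LV name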

Here is the output of vgdisplay and pvdisplay:

$ vgdisplay
--- Volume group ---
VG Name               vg001
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               0
Max PV                0
Cur PV                1
Act PV                1 
VG Size               2.73 TB
PE Size               4.00 MB
Total PE              715402
Alloc PE / Size       715402 / 2.73 TB
Free  PE / Size       0 / 0
VG UUID               Aryj93-QgpG-8V1S-qGV7-gvFk-GKtc-OTmuFk

$ pvdisplay
--- Physical volume ---
PV Name               /dev/md0
VG Name               vg001
PV Size               2.73 TB / not usable 896.00 KB
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              715402
Free PE               0
Allocated PE          715402
PV UUID               rIpmyi-GmB9-oybx-pwJr-50YZ-GQgQ-myGjhi

The PE information suggests that the size of the volume is recognized, but the physical volume is not accessible.
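
A few LVM scan commands could narrow down whether /dev/md0 is being picked up at all; this is just a diagnostic sketch, I have not pasted their output here:

$ pvscan                      # which block devices LVM recognizes as physical volumes
$ vgscan                      # whether vg001 is found during a scan
$ lvs -o +devices vg001       # which underlying device the logical volume maps to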

The software RAID itself appears to be working fine. It is assembled from the mdadm.conf below, followed by mdadm diagnostic output for the md0 device and for the devices it was assembled from:

$ cat /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0  auto=no level=raid5 num-devices=4 UUID=a0340426:324f0a4f:2ce7399e:ae4fabd0

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
  2930287488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

$ mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
         Events : 0.20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde


$ mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b5182e8 - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

$ mdadm --examine /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b5182fa - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       32        1      active sync   /dev/sdc

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

$ mdadm --examine /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b51830c - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       48        2      active sync   /dev/sdd

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde

$ mdadm --examine /dev/sde
/dev/sde:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
     Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Mon Feb 27 14:55:46 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2b51831e - correct
         Events : 20

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       64        3      active sync   /dev/sde

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde
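
Since all four superblocks agree, the ARRAY line in /etc/mdadm.conf can also be cross-checked against what mdadm derives directly from the disks:

$ mdadm --examine --scan      # prints ARRAY lines built from the on-disk superblocks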

The only thing I find suspicious is the /dev/md0p1 partition that gets created automatically after boot - that doesn't look right. It seems to be treated as another software RAID device, but to me it looks like the parity area of the md0 RAID device:

$ mdadm --detail /dev/md0p1
/dev/md0p1:
        Version : 0.90
  Creation Time : Tue Oct 27 13:04:40 2009
     Raid Level : raid5
     Array Size : 976752000 (931.50 GiB 1000.19 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Feb 27 15:43:00 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : a0340426:324f0a4f:2ce7399e:ae4fabd0
         Events : 0.20

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
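
If it helps, the partition table that the kernel reads from /dev/md0 (and that produces md0p1) can be inspected with something like:

$ cat /proc/partitions | grep md      # kernel's view of md0 and md0p1 sizes
$ parted /dev/md0 unit s print        # partition table on the RAID device itself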

In the partitioning tool of SLES's YaST administration, the software RAID device is listed under the regular hard disks section instead of under the RAID section. The md0p1 partition is listed in the partition table of the md0 disk.

Is the operating system failing to recognize the software RAID disk correctly, or is there a problem with the LVM configuration?
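
In case it is the LVM configuration, the device filter and MD component detection settings in /etc/lvm/lvm.conf are what I would check first; the values in the comments below are only the usual defaults, not necessarily what this system has:

$ grep -E 'filter|md_component_detection' /etc/lvm/lvm.conf
# typical defaults (assumption, not taken from this system):
#   filter = [ "a/.*/" ]
#   md_component_detection = 1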

Any ideas how I can fix this?