LVM does not "come up" after upgrading the physical disks in the underlying RAID6 mdadm array

Before (see the sketch after this list):

  • Separate SSD boot drive
  • RAID6 4 x 2TB -> /dev/md0
  • LVM PV on /dev/md0
  • LVM VG built on top of the PV
  • 2 LVM data volumes in the VG
    • mounted at /home and /mnt/data
  • I upgraded from Ubuntu 20.04 to 22.04 (everything still worked fine after the reboot)
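
For context, the LVM layering on top of the array was created years ago along roughly these lines. This is only a reconstruction: the LV names, sizes and filesystems are assumptions; the VG name VolGroupArray is the one that appears in the scans further down.

pvcreate /dev/md0
vgcreate VolGroupArray /dev/md0
lvcreate -L 2T -n home VolGroupArray         # mounted at /home
lvcreate -l 100%FREE -n data VolGroupArray   # mounted at /mnt/data
mkfs.ext4 /dev/VolGroupArray/home
mkfs.ext4 /dev/VolGroupArray/data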

The above worked fine, but I then upgraded the 2TB drives to 3TB and added a 5th drive, making it a RAID6 of 5 x 3TB.

I did not partition the new drives. I simply ran mdadm --fail / mdadm --remove on each old 2TB drive, then mdadm --add for the new 3TB drive, and let each one resync before moving on to the next (keeping a close eye on progress with cat /proc/mdstat).
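
Per old drive, the replacement sequence looked roughly like this (device names are illustrative):

mdadm /dev/md0 --fail /dev/sdX     # mark the old 2TB drive failed
mdadm /dev/md0 --remove /dev/sdX   # remove it from the array
mdadm /dev/md0 --add /dev/sdY      # add the new, unpartitioned 3TB drive
cat /proc/mdstat                   # watch the rebuild; only then move to the next drive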

mdadm --detail /dev/md0 shows everything active and all 5 drives in a good state. In short, /dev/md0 looks great, and it survives reboots just fine.

root@home:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Sep  3 11:02:41 2016
        Raid Level : raid6
        Array Size : 8790409728 (8.19 TiB 9.00 TB)
     Used Dev Size : 2930136576 (2.73 TiB 3.00 TB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Sep 29 10:53:05 2022
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : home:0  (local to host home)
              UUID : 73d52bf2:9c18d305:a554e9ae:e67b7fbc
            Events : 210995

    Number   Major   Minor   RaidDevice State
       4       8       48        0      active sync   /dev/sdd
       5       8        0        1      active sync   /dev/sda
       7       8       32        2      active sync   /dev/sdc
       6       8       16        3      active sync   /dev/sdb
       8       8       64        4      active sync   /dev/sde



root@home:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sde[8] sdd[4] sdc[7] sdb[6] sda[5]
      8790409728 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/11 pages [0KB], 131072KB chunk

unused devices: <none>

However, at boot time I now have a problem with the LVM setup. If I run the LVM commands with -vv verbosity, they do show that the LVM volume group is "seen", but for some reason they refuse to use it: WARNING: device /dev/md0 is an md component, not setting device for PV.

root@home:~# pvscan
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  No matching physical volumes found

root@home:~# pvscan -vv
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  global/locking_type not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to auto
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Locking /run/lock/lvm/P_global RB
  Device /dev/md0 metadata_version is 1.2.
  /dev/loop0: size is 138880 sectors
  /dev/sda: size is 5860533168 sectors
  /dev/md0: size is 17580819456 sectors
  /dev/loop1: size is 98272 sectors
  /dev/loop2: size is 0 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 5860533168 sectors
  /dev/sdc: size is 5860533168 sectors
  /dev/sdd: size is 5860533168 sectors
  /dev/sde: size is 5860533168 sectors
  /dev/sdf: size is 117231408 sectors
  /dev/sdf1: size is 117227520 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: No lvm label detected
  /dev/sda: using cached size 5860533168 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: lvm2 label detected at sector 1
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: No lvm label detected
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdc: using cached size 5860533168 sectors
  /dev/sdd: using cached size 5860533168 sectors
  /dev/sde: using cached size 5860533168 sectors
  /dev/sdf: using cached size 117231408 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: No lvm label detected
  /dev/loop2: using cached size 0 sectors
  /dev/loop3: using cached size 0 sectors
  /dev/loop4: using cached size 0 sectors
  /dev/loop5: using cached size 0 sectors
  /dev/loop6: using cached size 0 sectors
  /dev/loop7: using cached size 0 sectors
  Locking /run/lock/lvm/V_VolGroupArray RB
  /dev/md0: using cached size 17580819456 sectors
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  Unlocking /run/lock/lvm/V_VolGroupArray
  Reading orphan VG #orphans_lvm2.
  No matching physical volumes found
  Unlocking /run/lock/lvm/P_global

root@home:~# vgscan -vv
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  global/locking_type not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to auto
  Locking /run/lock/lvm/P_global RB
  Device /dev/md0 metadata_version is 1.2.
  /dev/loop0: size is 138880 sectors
  /dev/sda: size is 5860533168 sectors
  /dev/md0: size is 17580819456 sectors
  /dev/loop1: size is 98272 sectors
  /dev/loop2: size is 0 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 5860533168 sectors
  /dev/sdc: size is 5860533168 sectors
  /dev/sdd: size is 5860533168 sectors
  /dev/sde: size is 5860533168 sectors
  /dev/sdf: size is 117231408 sectors
  /dev/sdf1: size is 117227520 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: No lvm label detected
  /dev/sda: using cached size 5860533168 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: lvm2 label detected at sector 1
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: No lvm label detected
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdc: using cached size 5860533168 sectors
  /dev/sdd: using cached size 5860533168 sectors
  /dev/sde: using cached size 5860533168 sectors
  /dev/sdf: using cached size 117231408 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: No lvm label detected
  Obtaining the complete list of VGs to process
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Processing VG VolGroupArray hLJW0y-qc4O-x5oN-sIg0-5Yxi-JR1R-3A0FEV
  Locking /run/lock/lvm/V_VolGroupArray RB
  /dev/md0: using cached size 17580819456 sectors
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  Unlocking /run/lock/lvm/V_VolGroupArray
  Unlocking /run/lock/lvm/P_global

root@home:~# lvscan -vv
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  global/locking_type not found in config: defaulting to 1
  devices/md_component_checks not found in config: defaulting to auto
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Locking /run/lock/lvm/P_global RB
  Device /dev/md0 metadata_version is 1.2.
  /dev/loop0: size is 138880 sectors
  /dev/sda: size is 5860533168 sectors
  /dev/md0: size is 17580819456 sectors
  /dev/loop1: size is 98272 sectors
  /dev/loop2: size is 0 sectors
  /dev/loop3: size is 0 sectors
  /dev/loop4: size is 0 sectors
  /dev/loop5: size is 0 sectors
  /dev/loop6: size is 0 sectors
  /dev/loop7: size is 0 sectors
  /dev/sdb: size is 5860533168 sectors
  /dev/sdc: size is 5860533168 sectors
  /dev/sdd: size is 5860533168 sectors
  /dev/sde: size is 5860533168 sectors
  /dev/sdf: size is 117231408 sectors
  /dev/sdf1: size is 117227520 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: using cached size 138880 sectors
  /dev/loop0: No lvm label detected
  /dev/sda: using cached size 5860533168 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: using cached size 17580819456 sectors
  /dev/md0: lvm2 label detected at sector 1
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: using cached size 98272 sectors
  /dev/loop1: No lvm label detected
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdb: using cached size 5860533168 sectors
  /dev/sdc: using cached size 5860533168 sectors
  /dev/sdd: using cached size 5860533168 sectors
  /dev/sde: using cached size 5860533168 sectors
  /dev/sdf: using cached size 117231408 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: using cached size 117227520 sectors
  /dev/sdf1: No lvm label detected
  Obtaining the complete list of VGs before processing their LVs
  Processing VG VolGroupArray hLJW0y-qc4O-x5oN-sIg0-5Yxi-JR1R-3A0FEV
  Locking /run/lock/lvm/V_VolGroupArray RB
  /dev/md0: using cached size 17580819456 sectors
  WARNING: device /dev/md0 is an md component, not setting device for PV.
  Unlocking /run/lock/lvm/V_VolGroupArray
  Unlocking /run/lock/lvm/P_global

Please let me know what output I should provide to help diagnose this problem.

Edit 1:

Here is the state after I commented the LVM partitions out of /etc/fstab.
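
For reference, the commented-out entries look roughly like this (the LV paths are placeholders; only the mount points are as described above):

# /etc/fstab (LVM entries disabled)
#/dev/VolGroupArray/home  /home      ext4  defaults  0  2
#/dev/VolGroupArray/data  /mnt/data  ext4  defaults  0  2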

Output of lsblk:

root@home:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
loop0    7:0    0 67.8M  1 loop  /snap/lxd/22753
loop1    7:1    0   48M  1 loop  /snap/snapd/17029
sda      8:0    0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdb      8:16   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdc      8:32   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdd      8:48   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sde      8:64   0  2.7T  0 disk
└─md0    9:0    0  8.2T  0 raid6
sdf      8:80   0 55.9G  0 disk
└─sdf1   8:81   0 55.9G  0 part  /

Output of blkid:

root@home:~# blkid
/dev/sdf1: UUID="751e6495-f77b-49a0-b170-2be146064752" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="96cba4d1-01"
/dev/sdd: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="eaa0c0ab-08e9-bbd9-79bf-18386fcee952" LABEL="home:0" TYPE="linux_raid_member"
/dev/sdb: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="06927a3a-e179-95d0-ae6d-591faed89962" LABEL="home:0" TYPE="linux_raid_member"
/dev/md0: UUID="PjdxdD-aMVt-5qDb-ZtHJ-tzec-K0ho-l92bbW" TYPE="LVM2_member"
/dev/sde: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="34a14cf3-3ebe-c11b-c2d1-289c1b6a3527" LABEL="home:0" TYPE="linux_raid_member"
/dev/sdc: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="a24fee08-76a9-09cf-2650-963f1c2431b0" LABEL="home:0" TYPE="linux_raid_member"
/dev/sda: UUID="73d52bf2-9c18-d305-a554-e9aee67b7fbc" UUID_SUB="4dbbe2cb-5cd4-22f1-4dee-26d0079c6d1a" LABEL="home:0" TYPE="linux_raid_member"
/dev/loop1: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"

Output of mdadm --examine /dev/md0:

root@home:~# mdadm --examine /dev/md0
mdadm: No md superblock detected on /dev/md0.

Output of lvmdiskscan:

root@home:~# lvmdiskscan
  /dev/loop0 [      67.81 MiB]
  /dev/md0   [      <8.19 TiB] LVM physical volume
  /dev/loop1 [      47.98 MiB]
  /dev/sdf1  [     <55.90 GiB]
  0 disks
  3 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

Answer 1

What fixed it?

To fix this, I had to modify the LVM configuration in /etc/lvm/lvm.conf. Specifically, I had to change:

devices {
        ...

        md_component_detection = 1
}

to:

devices {
        ...

        md_component_detection = 0
}
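
After that edit, re-scanning and re-activating looks roughly like this (VolGroupArray is the VG name from the scans above; the initramfs refresh is only relevant because LVM is involved at boot, and may not be strictly required):

pvscan --cache               # rescan devices with the new configuration
vgchange -ay VolGroupArray   # activate the volume group
mount -a                     # re-mount once the fstab entries are uncommented again
update-initramfs -u          # refresh the initramfs copy of lvm.conf (Ubuntu)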

The in-file documentation for the md_component_detection setting says:

    # Configuration option devices/md_component_detection.
    # Enable detection and exclusion of MD component devices.
    # An MD component device is a block device that MD uses as part
    # of a software RAID virtual device. When an LVM PV is created
    # on an MD device, LVM must only use the top level MD device as
    # the PV, and should ignore the underlying component devices.
    # In cases where the MD superblock is located at the end of the
    # component devices, it is more difficult for LVM to consistently
    # identify an MD component, see the md_component_checks setting.

I had been running with the LVM defaults for years; I had never looked at this configuration before.

Why did this happen?

I suspect it is because I added the raw 3TB disks to the existing RAID6 array instead of creating a partition on each disk and adding that to the array (i.e. I added /dev/sda rather than /dev/sda1). Each raw disk that was added carries an MD superblock, and the placement of those superblocks apparently "confuses" LVM's md component detection.
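
A hypothetical alternative for future drive replacements would be to put the md member inside a partition instead of on the raw disk, so the superblock location is unambiguous (device names here are illustrative):

parted -s /dev/sdY mklabel gpt mkpart primary 0% 100%   # one partition spanning the disk
parted -s /dev/sdY set 1 raid on                         # mark it as a Linux RAID partition
mdadm /dev/md0 --add /dev/sdY1                           # add the partition, not the whole disk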

It would be great to get confirmation of this from someone more experienced, but the change above is what "fixed" my problem.
