Proper way to activate LVM partitions on multipath during boot

My Debian 9 system has iSCSI and multipath successfully configured:

# multipath -ll /dev/mapper/mpathb
mpathb (222c60001556480c6) dm-2 Promise,Vess R2600xi
size=10T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 12:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 13:0:0:0 sdd 8:48 active ready running

/dev/mapper/mpathb is part of the LVM volume group vg-one-100:

# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/dm-2  vg-one-100 lvm2 a--  10,00t 3,77t
# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  vg-one-100   1  17   0 wz--n- 10,00t 3,77t

The vg-one-100 group contains several volumes:

# lvs
  LV          VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-one-0-1  vg-one-100 -wi-a----- 20,00g                                                    
  lv-one-1-0  vg-one-100 -wi-a-----  2,41g                                                    
  lv-one-10-0 vg-one-100 -wi------- 20,00g                                                    
  lv-one-11-0 vg-one-100 -wi------- 30,00g                                                    
  lv-one-12-0 vg-one-100 -wi-------  2,41g                                                    
  lv-one-13-0 vg-one-100 -wi-------  2,41g                                                    
  lv-one-14-0 vg-one-100 -wi-------  2,41g                                                    
  lv-one-15-0 vg-one-100 -wi-------  2,41g                                                    
  lv-one-16-0 vg-one-100 -wi-------  2,41g                                                    
  lv-one-17-0 vg-one-100 -wi------- 30,00g                                                    
  lv-one-18-0 vg-one-100 -wi------- 30,00g                                                    
  lv-one-23-0 vg-one-100 -wi------- 20,00g                                                    
  lv-one-31-0 vg-one-100 -wi------- 20,00g                                                    
  lv-one-8-0  vg-one-100 -wi------- 30,00g                                                    
  lv-one-9-0  vg-one-100 -wi------- 20,00g                                                    
  lvm_images  vg-one-100 -wi-a-----  5,00t                                                    
  lvm_system  vg-one-100 -wi-a-----  1,00t          

My lvm.conf contains the following filters:

# grep filter /etc/lvm/lvm.conf | grep -vE '^.*#'
    filter = ["a|/dev/dm-*|", "r|.*|"]
    global_filter = ["a|/dev/dm-*|", "r|.*|"]

lvmetad is disabled:

# grep use_lvmetad /etc/lvm/lvm.conf | grep -vE '^.*#'
    use_lvmetad = 0

When lvmetad is disabled, lvm2-activation-generator is used instead.

In my case, lvm2-activation-generator generates all the required unit files, and they are executed during boot:

# ls -1 /var/run/systemd/generator/lvm2-activation*
/var/run/systemd/generator/lvm2-activation-early.service
/var/run/systemd/generator/lvm2-activation-net.service
/var/run/systemd/generator/lvm2-activation.service

# systemctl status lvm2-activation-early.service
● lvm2-activation-early.service - Activation of LVM2 logical volumes
   Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
     Docs: man:lvm2-activation-generator(8)
 Main PID: 897 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation-net.service
● lvm2-activation-net.service - Activation of LVM2 logical volumes
   Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-03-28 17:21:24 MSK; 3 weeks 4 days ago
     Docs: man:lvm2-activation-generator(8)
 Main PID: 1537 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Activation of LVM2 logical volumes...
lvm[1537]:   4 logical volume(s) in volume group "vg-one-100" now active
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation.service
● lvm2-activation.service - Activation of LVM2 logical volumes
   Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
     Docs: man:lvm2-activation-generator(8)
 Main PID: 900 (code=exited, status=0/SUCCESS)

systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.

The problem: I cannot get all LVM volumes activated automatically during boot, because lvm2-activation-net.service activates the volumes right after the LUN is attached (logged in) via iSCSI, not after the multipath device appears (journalctl fragment):

. . .
kernel: sd 11:0:0:0: [sdc] 21474836480 512-byte logical blocks: (11.0 TB/10.0 TiB)
kernel: sd 10:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 11:0:0:0: [sdc] Write Protect is off
kernel: sd 11:0:0:0: [sdc] Mode Sense: 97 00 10 08
kernel: sd 11:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 10:0:0:0: [sdb] Attached SCSI disk
kernel: sd 11:0:0:0: [sdc] Attached SCSI disk
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] (multiple)
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] (multiple)
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] successful.
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] successful.
systemd[1]: Started Login to default iSCSI targets.
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Starting Activation of LVM2 logical volumes...
multipathd[884]: sdb: add path (uevent)
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Reached target Remote File Systems (Pre).
systemd[1]: Mounting /var/lib/one/datastores/101...
systemd[1]: Mounting /var/lib/one/datastores/100...
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 1 1 service-time 0 1 1 8:16 1]
multipathd[884]: mpathb: event checker started
multipathd[884]: sdb [8:16]: path added to devmap mpathb
multipathd[884]: sdc: add path (uevent)
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 2 1 service-time 0 1 1 8:16 1 service-time 0 1 1 8:32 1]
. . .

The ordering of lvm2-activation-net.service looks correct:

# grep After /var/run/systemd/generator/lvm2-activation-net.service 
After=lvm2-activation.service iscsi.service fcoe.service

How do I correctly activate all logical volumes during boot?

Answer 1

Since you seem to have only a single physical volume, I really wonder how partial activation could happen in your case; it should be all or nothing. But in any case, here are a few things to look out for:

  • You need persistent multipath device names. I'm not sure where mpathb comes from, but for clarity I would recommend not enabling user_friendly_names. Configure aliases by hand in /etc/multipath.conf, or use the WWIDs provided by the storage (a multipath.conf sketch follows this list).
  • LVM filters are regular expressions, not shell globs, so you need to change the syntax to something like

    filter = ["a|^/dev/mapper/222c60001556480c6$|", "r|.|"]
    

    (The global_filter is optional for correct operation, but it can affect boot time.)

  • You have to delay activation until the multipath devices of all physical volumes have appeared. One possibility is to add

    [Unit]
    Requires = dev-mapper-222c60001556480c6.device
    After = dev-mapper-222c60001556480c6.device
    

    in /etc/systemd/system/lvm2-activation-net.service.d/wait_for_storage.conf. Another option is to create a dedicated activation service (a sketch of one follows this list).

  • iSCSI storage devices (and their multipath devices) can take a long time to appear. You may want to create /etc/systemd/system/dev-mapper-222c60001556480c6.device containing

    [Unit]
    JobTimeoutSec=3min
    

    so that systemd does not time out too quickly while waiting for the device. If you have several such devices, use symlinks pointing to a common file.
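
For illustration, here is a hedged sketch of what such a /etc/multipath.conf could look like, using the WWID shown by multipath -ll above; the alias name mpath_promise is only an example, not something from the question:

    # /etc/multipath.conf
    defaults {
        # name maps by WWID / explicit alias instead of mpathN
        user_friendly_names no
    }

    multipaths {
        multipath {
            # WWID as reported by `multipath -ll` for this LUN
            wwid  222c60001556480c6
            # example alias; any stable name works
            alias mpath_promise
        }
    }

With such an alias in place, the LVM filter could reference /dev/mapper/mpath_promise instead of the raw WWID; reload or restart multipathd afterwards so it picks up the change.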
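
As a minimal sketch of the "dedicated activation service" alternative, assuming the datastore mounts are network mounts ordered after remote-fs-pre.target (the unit name lvm2-activation-multipath.service and the target choices are assumptions; only the WWID and the VG name come from the question):

    # /etc/systemd/system/lvm2-activation-multipath.service (hypothetical name)
    [Unit]
    Description=Activate LVM2 volumes on top of the multipath device
    # wait for the multipath map itself, not the underlying iSCSI paths
    Requires=dev-mapper-222c60001556480c6.device
    After=dev-mapper-222c60001556480c6.device multipathd.service
    # run before remote filesystems are mounted
    Wants=remote-fs-pre.target
    Before=remote-fs-pre.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # activate only the VG that lives on the multipath PV
    ExecStart=/sbin/lvm vgchange -ay vg-one-100

    [Install]
    WantedBy=multi-user.target

After creating the unit, run systemctl daemon-reload and systemctl enable lvm2-activation-multipath.service; the device dependency then delays activation until the multipath map exists.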

Even if the above does not solve your problem right away, it should make debugging more tractable. Good luck!
