I installed Ubuntu Server on Dell hardware with two LVM2 logical volumes: one holds the operating system, the other is used as data storage. They live on two physical drives, which are organized as RAID 1. After booting, the data volume is not mounted automatically.
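For context, the data volume is supposed to be mounted via an /etc/fstab entry along these lines (the mount point /data and the ext4 filesystem type here are only placeholders for illustration):

/dev/data-vg/data-lv   /data   ext4   defaults   0   2

pvs shows both physical volumes: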
root@pluto:/# pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 ubuntu-vg lvm2 a-- 445.62g 178.78g
/dev/sdb1 data-vg lvm2 a-- <7.28t <2.19t
lvscan lists all logical volumes:
root@pluto:/# lvscan
ACTIVE Original '/dev/data-vg/data-lv' [4.00 TiB] inherit
ACTIVE Snapshot '/dev/data-vg/data-snapshot' [1.09 TiB] inherit
ACTIVE Original '/dev/ubuntu-vg/ubuntu-lv' [200.00 GiB] inherit
ACTIVE Snapshot '/dev/ubuntu-vg/ubuntu-snapshot' [<66.84 GiB] inherit
lvdisplay also shows them all:
root@pluto:/# lvdisplay
--- Logical volume ---
LV Path /dev/data-vg/data-lv
LV Name data-lv
VG Name data-vg
LV UUID AC5nN1-aGdj-lgfo-PqBP-lIkZ-D5vx-tcO6IP
LV Write Access read/write
LV Creation host, time pluto, 2020-11-10 14:19:31 +0100
LV snapshot status source of
data-snapshot [active]
LV Status available
# open 0
LV Size 4.00 TiB
Current LE 1048576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Logical volume ---
LV Path /dev/data-vg/data-snapshot
LV Name data-snapshot
VG Name data-vg
LV UUID oHsjAj-79tp-UMN3-MUb6-Efwc-Zl43-XHEGx1
LV Write Access read/write
LV Creation host, time pluto, 2020-11-10 14:19:45 +0100
LV snapshot status active destination for data-lv
LV Status available
# open 0
LV Size 4.00 TiB
Current LE 1048576
COW-table size 1.09 TiB
COW-table LE 286137
Allocated to snapshot 100.00%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Logical volume ---
LV Path /dev/ubuntu-vg/ubuntu-lv
LV Name ubuntu-lv
VG Name ubuntu-vg
LV UUID eiVtC7-Uz40-BQdS-Dsrj-vgUw-gL6v-BwLfqt
LV Write Access read/write
LV Creation host, time ubuntu-server, 2020-11-10 16:07:23 +0100
LV snapshot status source of
ubuntu-snapshot [active]
LV Status available
# open 1
LV Size 200.00 GiB
Current LE 51200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/ubuntu-vg/ubuntu-snapshot
LV Name ubuntu-snapshot
VG Name ubuntu-vg
LV UUID CL35iM-uBSY-QC5A-FvLD-UHiF-M7Cw-rH831B
LV Write Access read/write
LV Creation host, time pluto, 2020-11-10 16:26:42 +0100
LV snapshot status active destination for ubuntu-lv
LV Status available
# open 0
LV Size 200.00 GiB
Current LE 51200
COW-table size <66.84 GiB
COW-table LE 17111
Allocated to snapshot 10.33%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
root@pluto:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 97.8M 1 loop /snap/core/10185
loop1 7:1 0 55M 1 loop /snap/core18/1880
loop2 7:2 0 29.9M 1 loop /snap/snapd/8542
loop3 7:3 0 55.4M 1 loop /snap/core18/1932
loop4 7:4 0 67.8M 1 loop /snap/lxd/18150
loop5 7:5 0 71.3M 1 loop /snap/lxd/16099
loop6 7:6 0 31M 1 loop /snap/snapd/9721
sda 8:0 0 446.6G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 445.6G 0 part
├─ubuntu--vg-ubuntu--lv-real 253:0 0 200G 0 lvm
│ ├─ubuntu--vg-ubuntu--lv 253:1 0 200G 0 lvm /
│ └─ubuntu--vg-ubuntu--snapshot 253:3 0 200G 0 lvm
└─ubuntu--vg-ubuntu--snapshot-cow 253:2 0 66.9G 0 lvm
└─ubuntu--vg-ubuntu--snapshot 253:3 0 200G 0 lvm
sdb 8:16 0 7.3T 0 disk
└─sdb1 8:17 0 7.3T 0 part
└─data--vg-data--snapshot-cow 253:6 0 1.1T 0 lvm
sr0 11:0 1 1024M 0 rom
When I run vgscan --mknodes, the device nodes under /dev/data-vg are created and the following messages appear:
root@pluto:/# vgscan --mknodes
Found volume group "data-vg" using metadata type lvm2
Found volume group "ubuntu-vg" using metadata type lvm2
The link /dev/data-vg/data-lv should have been created by udev but it was not found. Falling back to direct link creation.
The link /dev/data-vg/data-snapshot should have been created by udev but it was not found. Falling back to direct link creation.
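As far as I understand, the missing links could also be recreated by re-triggering udev manually, e.g. with the two standard udev commands below, but that would only be a diagnostic step and would not fix the behaviour at boot:

udevadm trigger --subsystem-match=block
udevadm settle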
When I run vgchange -ay, you can see in the log: pluto lvm[972]: Target (null) is not snapshot. After a long time the command finishes and the LVs become available:
device-mapper: reload ioctl on (253:7) failed: Invalid argument
2 logical volume(s) in volume group "data-vg" now active
2 logical volume(s) in volume group "ubuntu-vg" now active
root@pluto:/dev# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 97.8M 1 loop /snap/core/10185
loop1 7:1 0 55M 1 loop /snap/core18/1880
loop2 7:2 0 29.9M 1 loop /snap/snapd/8542
loop3 7:3 0 55.4M 1 loop /snap/core18/1932
loop4 7:4 0 67.8M 1 loop /snap/lxd/18150
loop5 7:5 0 71.3M 1 loop /snap/lxd/16099
loop6 7:6 0 31M 1 loop /snap/snapd/9721
sda 8:0 0 446.6G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 445.6G 0 part
├─ubuntu--vg-ubuntu--lv-real 253:0 0 200G 0 lvm
│ ├─ubuntu--vg-ubuntu--lv 253:1 0 200G 0 lvm /
│ └─ubuntu--vg-ubuntu--snapshot 253:3 0 200G 0 lvm
└─ubuntu--vg-ubuntu--snapshot-cow 253:2 0 66.9G 0 lvm
└─ubuntu--vg-ubuntu--snapshot 253:3 0 200G 0 lvm
sdb 8:16 0 7.3T 0 disk
└─sdb1 8:17 0 7.3T 0 part
├─data--vg-data--lv-real 253:4 0 4T 0 lvm
│ ├─data--vg-data--lv 253:5 0 4T 0 lvm
│ └─data--vg-data--snapshot 253:7 0 4T 0 lvm
└─data--vg-data--snapshot-cow 253:6 0 1.1T 0 lvm
└─data--vg-data--snapshot 253:7 0 4T 0 lvm
sr0 11:0 1 1024M 0 rom
and I am able to mount the logical volume.
My goal is to have the volume activated and mounted automatically during boot. Could you please help me?
Best regards
Answer 1
I have the same problem: Dell hardware, 2x SSD in RAID 1 with LVM for booting (works fine) and 2x SSD in RAID 1 with LVM for data. The data LV is not activated at boot most of the time; only rarely does it come up active after a reboot.
Going into the OS and running vgchange -ay activates the LV, and then everything works fine. This appears to be a race condition that has existed for at least 11 years: https://serverfault.com/questions/199185/logical-volumes-are-inactive-at-boot-time
So far I have tried many of the suggested solutions, but none of them have worked.
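For reference, the workarounds discussed there mostly boil down to running vgchange -ay for the data VG early during boot, for example with a small systemd unit roughly like the sketch below (the unit name, ordering and paths are my own guess, not a verified fix):

# /etc/systemd/system/activate-data-vg.service (hypothetical unit)
[Unit]
Description=Activate data-vg logical volumes before local mounts
DefaultDependencies=no
Wants=systemd-udev-settle.service local-fs-pre.target
After=systemd-udev-settle.service
Before=local-fs-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/vgchange -ay data-vg

[Install]
WantedBy=local-fs.target

It would be enabled with systemctl enable activate-data-vg.service, ideally together with the nofail mount option in /etc/fstab so that the boot does not drop into emergency mode if activation still fails.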