In my server I have an NVMe SSD plus two plain 4 TB HDDs for bulk data storage. While setting up Ubuntu 20.04 LTS I liked the idea of keeping the two HDDs separate, so I did not set up LVM. Later I found that one disk was filling up much faster than the other, so I decided to set up LVM after the fact.
I wiped disk 2 and created a new GPT partition table with fdisk, then added a single partition spanning the whole disk and set its type to Linux LVM.
root@server:/# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): g
Created a new GPT disklabel (GUID: AEF8BBF3-526D-394C-A0A7-5852FF661F95).
Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-7814037134, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-7814037134, default 7814037134):
Created a new partition 1 of type 'Linux filesystem' and of size 3.7 TiB.
Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 31
Changed type of partition 'Linux filesystem' to 'Linux LVM'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
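As an aside, the same layout can also be created non-interactively, which is handy for scripting. A sketch using parted (assuming parted is available; adjust the device path to your own disk):
# GPT label, one partition spanning the disk, flagged for LVM
parted -s /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 lvm on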
Then I created a physical volume, a volume group, and a logical volume. I activated the logical volume and created a btrfs filesystem on it.
root@server:/# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
root@server:/# vgcreate storage /dev/sdb1
Volume group "storage" successfully created
root@server:/# lvcreate -n downloads -L 3.63TB storage
Rounding up size to full physical extent 3.63 TiB
Logical volume "downloads" created.
root@server:/# lvchange -ay /dev/storage/downloads
root@server:/# mkfs.btrfs /dev/storage/downloads
btrfs-progs v5.4.1
See http://btrfs.wiki.kernel.org for more information.
Label: (null)
UUID: ba46d3c4-38a2-4122-9f58-48bc0546a49d
Node size: 16384
Sector size: 4096
Filesystem size: 3.63TiB
Block group profiles:
Data: single 8.00MiB
Metadata: DUP 1.00GiB
System: DUP 8.00MiB
SSD detected: no
Incompat features: extref, skinny-metadata
Checksum: crc32c
Number of devices: 1
Devices:
ID SIZE PATH
1 3.63TiB /dev/storage/downloads
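One note on the lvcreate call above: specifying an absolute size like -L 3.63TB means guessing at the usable capacity. Allocating by extents sidesteps the guesswork; a sketch of the equivalent command:
# Allocate all remaining free extents in the VG instead of a fixed size
lvcreate -n downloads -l 100%FREE storage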
At this point, everything looked good and ready for use.
root@server:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55.5M 1 loop /snap/core18/2253
loop1 7:1 0 61.9M 1 loop /snap/core20/1242
loop2 7:2 0 73.1M 1 loop /snap/lxd/21902
sda 8:0 0 3.7T 0 disk
└─sda1 8:1 0 3.7T 0 part /mnt/data1
sdb 8:16 0 3.7T 0 disk
└─sdb1 8:17 0 3.7T 0 part
└─storage-downloads 253:0 0 3.6T 0 lvm
nvme0n1 259:0 0 477G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
└─nvme0n1p2 259:2 0 476.4G 0 part /
root@server:/# blkid -o list
device fs_type label mount point UUID
-----------------------------------------------------------------------------------------------------
/dev/nvme0n1p1 vfat /boot/efi DC44-3B2F
/dev/nvme0n1p2 btrfs (in use) d0be6aed-8d84-495e-b69e-fc46f2700254
/dev/sda1 btrfs (in use) 17316c71-d190-456f-a3de-ff40a8ca2c3c
/dev/loop0 squashfs /snap/core18/2253
/dev/loop1 squashfs /snap/core20/1242
/dev/loop2 squashfs /snap/lxd/21902
/dev/sdb1 LVM2_member (in use) 6cZbcI-egPH-butU-DN2Z-fQre-R8l0-z6CKkZ
/dev/mapper/storage-downloads
btrfs (not mounted) ba46d3c4-38a2-4122-9f58-48bc0546a49d
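The LVM stack itself can be double-checked with the standard reporting commands, which should show the PV on /dev/sdb1, the storage VG, and the downloads LV:
root@server:/# pvs
root@server:/# vgs
root@server:/# lvs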
I updated /etc/fstab to include the new filesystem and its mount point.
root@server:/# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/nvme0n1p2 during curtin installation
/dev/disk/by-uuid/d0be6aed-8d84-495e-b69e-fc46f2700254 / btrfs defaults 0 0
# /boot/efi was on /dev/nvme0n1p1 during curtin installation
/dev/disk/by-uuid/DC44-3B2F /boot/efi vfat defaults 0 0
/swap.img none swap sw 0 0
# 4TB HDDs
UUID=17316c71-d190-456f-a3de-ff40a8ca2c3c /mnt/data1 btrfs defaults 0 0
#UUID=7bba47ee-4053-45e1-bf53-03892a8eb474 /mnt/data2 btrfs defaults 0 0 # Pre-LVM partition
UUID=ba46d3c4-38a2-4122-9f58-48bc0546a49d /mnt/data2 btrfs defaults 0 0
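Recent util-linux releases (including the 2.34 seen above) can sanity-check the new entry before any mount is attempted:
root@server:/# findmnt --verify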
Next, I mounted the new logical volume:
root@server:/# mount /mnt/data2
There were no errors, so everything seemed fine, but it still shows up as not mounted.
root@server:/# blkid -o list
device fs_type label mount point UUID
-----------------------------------------------------------------------------------------------------
/dev/nvme0n1p1 vfat /boot/efi DC44-3B2F
/dev/nvme0n1p2 btrfs (in use) d0be6aed-8d84-495e-b69e-fc46f2700254
/dev/sda1 btrfs (in use) 17316c71-d190-456f-a3de-ff40a8ca2c3c
/dev/loop0 squashfs /snap/core18/2253
/dev/loop1 squashfs /snap/core20/1242
/dev/loop2 squashfs /snap/lxd/21902
/dev/sdb1 LVM2_member (in use) 6cZbcI-egPH-butU-DN2Z-fQre-R8l0-z6CKkZ
/dev/mapper/storage-downloads
btrfs (not mounted) ba46d3c4-38a2-4122-9f58-48bc0546a49d
On top of that, any data I write to it actually ends up on /dev/nvme0n1p2: /mnt lives on /, which is where the SSD is mounted, so the writes land on the root filesystem instead of the newly created logical volume.
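You can confirm where writes actually land by asking which filesystem backs the directory; if the mount is not active, findmnt reports the root filesystem on the SSD rather than the logical volume:
root@server:/# findmnt --target /mnt/data2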
What am I missing?
Edit:
With the help of Tom Yan's comment, I managed to find the error in the systemd journal.
root@server:/# journalctl -b -e
Jan 02 13:08:25 server kernel: BTRFS info (device dm-0): flagging fs with big metadata feature
Jan 02 13:08:25 server kernel: BTRFS info (device dm-0): disk space caching is enabled
Jan 02 13:08:25 server kernel: BTRFS info (device dm-0): has skinny extents
Jan 02 13:08:25 server systemd[1]: mnt-data2.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-7bba47ee\x2d4053\x2d45e1\x2dbf53\x2d03892a8eb474.device. Stopping, too.
Jan 02 13:08:25 server systemd[1]: Unmounting /mnt/data2...
Jan 02 13:08:25 server systemd[150969]: mnt-data2.mount: Succeeded.
Jan 02 13:08:25 server systemd[1]: mnt-data2.mount: Succeeded.
Jan 02 13:08:25 server systemd[1]: Unmounted /mnt/data2.
A simple systemctl daemon-reload fixed the problem: afterwards the mount works as expected and the journal stays clean.
root@server:/# journalctl -b -e
Jan 02 13:14:14 server systemd[150969]: run-docker-runtime\x2drunc-moby-1eb6b8d67787d43d173426f8c658b027cfd8667aa520c430365c20b0b383d121-runc.WYdXc6.mount: Succeeded.
Jan 02 13:14:14 server systemd[1]: run-docker-runtime\x2drunc-moby-1eb6b8d67787d43d173426f8c658b027cfd8667aa520c430365c20b0b383d121-runc.WYdXc6.mount: Succeeded.
Jan 02 13:14:44 server systemd[150969]: run-docker-runtime\x2drunc-moby-1eb6b8d67787d43d173426f8c658b027cfd8667aa520c430365c20b0b383d121-runc.WTnqMK.mount: Succeeded.
Jan 02 13:14:44 server systemd[1]: run-docker-runtime\x2drunc-moby-1eb6b8d67787d43d173426f8c658b027cfd8667aa520c430365c20b0b383d121-runc.WTnqMK.mount: Succeeded.
Jan 02 13:14:55 server kernel: BTRFS info (device dm-0): flagging fs with big metadata feature
Jan 02 13:14:55 server kernel: BTRFS info (device dm-0): disk space caching is enabled
Jan 02 13:14:55 server kernel: BTRFS info (device dm-0): has skinny extents
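For anyone hitting the same symptom, the whole recovery boiled down to:
root@server:/# systemctl daemon-reload
root@server:/# mount /mnt/data2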
Answer 1
This appears to be caused by a quirk/bug in certain versions of systemd: when you try to mount the filesystem on a newly created LV, it is immediately unmounted again. Apparently it can be resolved by running:
systemctl daemon-reload
This refreshes the state of the affected units in the running systemd instance (here, regenerating mnt-data2.mount so it is no longer bound to the device unit of the old, deleted partition's UUID) and resolves the problem.
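To see the stale binding for yourself before reloading, you can inspect the mount unit that systemd's fstab generator produced (a diagnostic sketch; the unit name follows from the /mnt/data2 mount point):
root@server:/# systemctl cat mnt-data2.mount
root@server:/# systemctl show -p BindsTo mnt-data2.mount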