Growing an LVM RAID5 with an identical disk, but not enough extents

I have an existing LVM RAID5 array on a CentOS 8 machine, built from 3x 4 TB drives. The array is starting to run out of space, so I have an identical 4 TB drive that I want to add to the array to increase the total capacity. However, when I run lvextend /dev/storage/raidarray /dev/sda, I get the following output:

Converted 100%PVS into 953861 physical extents.
Using stripesize of last segment 64.00 KiB
Archiving volume group "storage" metadata (seqno 35).
Extending logical volume storage/raidarray to <10.92 TiB
Insufficient free space: 1430790 extents needed, but only 953861 available

Here is the output of pvs:

PV         VG      Fmt  Attr PSize   PFree
/dev/sda   storage lvm2 a--   <3.64t  <3.64t
/dev/sdb3  cl      lvm2 a--  221.98g      0
/dev/sdc   storage lvm2 a--   <3.64t      0
/dev/sdd   storage lvm2 a--   <3.64t      0
/dev/sde   storage lvm2 a--   <3.64t      0
/dev/sdf           lvm2 ---  119.24g 119.24g

lvs -o +devices

LV        VG      Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
home      cl      -wi-a----- <164.11g                                                     /dev/sdb3(12800)
root      cl      -wi-ao----   50.00g                                                     /dev/sdb3(0)
swap      cl      -wi-ao----   <7.88g                                                     /dev/sdb3(54811)
raidarray storage rwi-aor---   <7.28t                                    100.00           raidarray_rimage_0(0),raidarray_rimage_1(0),raidarray_rimage_2(0)

pvdisplay

--- Physical volume ---
PV Name               /dev/sdb3
VG Name               cl
PV Size               221.98 GiB / not usable 3.00 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              56827
Free PE               0
Allocated PE          56827
PV UUID               MM6j63-1V3E-YWXl-61ro-f3bB-7ysd-c1DGQv

--- Physical volume ---
PV Name               /dev/sdc
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               rmqBBu-DD8U-d7WW-yzKW-R97b-1M4r-RYb1Qx

--- Physical volume ---
PV Name               /dev/sdd
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               TBn2He-cRTU-eybT-fuBM-REbO-YNfr-Ca86gU

--- Physical volume ---
PV Name               /dev/sde
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes (but full)
PE Size               4.00 MiB
Total PE              953861
Free PE               0
Allocated PE          953861
PV UUID               wHZOf0-KTK9-2qLW-USl9-Gkgz-6MjV-D3gWrH

--- Physical volume ---
PV Name               /dev/sdf
VG Name               storage
PV Size               119.24 GiB / not usable <4.34 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              30525
Free PE               30525
Allocated PE          0
PV UUID               MWWaUJ-UC2h-YT29-bMol-fWoQ-5Chl-uKBB4O

--- Physical volume ---
PV Name               /dev/sda
VG Name               storage
PV Size               <3.64 TiB / not usable <3.84 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              953861
Free PE               953861
Allocated PE          0
PV UUID               vzGHi9-TF42-EFx9-uLch-EioJ-DI35-RuZuJt

lsblk

NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0   3.7T  0 disk
sdb                            8:16   0 223.6G  0 disk
├─sdb1                         8:17   0   600M  0 part /boot/efi
├─sdb2                         8:18   0     1G  0 part /boot
└─sdb3                         8:19   0   222G  0 part
  ├─cl-root                  253:0    0    50G  0 lvm  /
  └─cl-swap                  253:1    0   7.9G  0 lvm  [SWAP]
sdc                            8:32   0   3.7T  0 disk
├─storage-raidarray_rmeta_0  253:7    0     4M  0 lvm
│ └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
└─storage-raidarray_rimage_0 253:8    0   3.7T  0 lvm
  └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
sdd                            8:48   0   3.7T  0 disk
├─storage-raidarray_rmeta_1  253:9    0     4M  0 lvm
│ └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
└─storage-raidarray_rimage_1 253:10   0   3.7T  0 lvm
  └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
sde                            8:64   0   3.7T  0 disk
├─storage-raidarray_rmeta_2  253:11   0     4M  0 lvm
│ └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
└─storage-raidarray_rimage_2 253:12   0   3.7T  0 lvm
  └─storage-raidarray        253:14   0   7.3T  0 lvm  /home
sdf                            8:80   0 119.2G  0 disk
sdg                            8:96   1  14.8G  0 disk
└─sdg1                         8:97   1  14.8G  0 part

I have been searching for an answer to this, but I have found very little written about LVM RAID; almost everything covers mdadm instead. Does anyone know a way I can expand the RAID array without buying an additional drive and without losing data?

Answer 1

I don't normally use LVM RAID, so forgive me if I haven't reproduced your situation perfectly; the numbers will look a bit odd as a result.

So consider the kind of RAID5 you would build from 3 disks with mdadm. In LVM terms, this is called a raid5 with 2 stripes (the parity is not counted).
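
For reference, a test volume like this can be created along these lines (a sketch; HDD is just the name of my test VG, and the 256M size is arbitrary):

# lvcreate --type raid5 --stripes 2 -L 256M -n raidtest HDD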

# lvs -o +devices HDD/raidtest
  LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                       
  raidtest HDD rwi-a-r--- 256.00m                                    100.00           raidtest_rimage_0(0),raidtest_rimage_1(0),raidtest_rimage_2(0)

Adding one more stripe looks like this:

# lvconvert --stripes 3 HDD/raidtest
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume HDD/raidtest will grow it from 4 to 6 extents!
  Run "lvresize -l4 HDD/raidtest" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV HDD/raidtest? [y/n]: y
[... this takes a while ...]
  Logical volume HDD/raidtest successfully converted.

Things to note: the warning message makes it clear that the device grows; it does not shrink.

Also, I did not specify which PV the extension should use, so LVM picked one on its own. In your case that is optional as well and should just work (since there is no other eligible PV), but feel free to specify it anyway so there are no surprises.
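
Applied to your array, which currently has 2 data stripes across 3 PVs, that would be something like this (a sketch, naming your new PV explicitly):

# lvconvert --stripes 3 storage/raidarray /dev/sda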

The result on my test volume:

# lvs -o +devices HDD/raidtest
  LV       VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                            
  raidtest HDD rwi-a-r--- 384.00m                                    100.00           raidtest_rimage_0(0),raidtest_rimage_1(0),raidtest_rimage_2(0),raidtest_rimage_3(0)

In this case the filesystem does not grow along with the LV; you can either do that separately, or use lvresize to shrink the LV back to its previous size (now simply spread across more drives). I imagine the latter is useful when running several RAID LVs side by side, rather than dedicating whole disks to a single LV as you have done.
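
Since your goal is more space, the remaining step would be to grow the filesystem once the reshape has finished (watch Cpy%Sync reach 100). A sketch, assuming /home is XFS, the CentOS 8 default (for ext4 you would use resize2fs instead):

# lvs -o name,size,sync_percent storage/raidarray
# xfs_growfs /home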
