Adding a disk to grow an LVM RAID5

I have an LVM VG with a single LV, a raid5 volume across three PVs. I want to add an additional PV to the volume group and extend the raid5 LV to use it.

Here I am practising with four 100 MB files as test disks.
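
For anyone who wants to reproduce the layout, a minimal sketch of how such a test bed can be built is below; the file names, the partitioning step and the exact LV size are my assumptions, inferred from the pvs/lvs output that follows:

# sketch only: four 100 MB loop-backed test disks, then a 3-PV raid5 LV
for i in 0 1 2 3; do
    truncate -s 100M "disk$i.img"                      # 100 MB backing file
    sudo losetup --partscan "/dev/loop$i" "disk$i.img" # attach as a loop device
    sudo parted -s "/dev/loop$i" mklabel gpt mkpart primary 1MiB 100%
    sudo partprobe "/dev/loop$i"                       # make /dev/loopNp1 appear
    sudo pvcreate "/dev/loop${i}p1"
done
sudo vgcreate testvg /dev/loop0p1 /dev/loop1p1 /dev/loop2p1 /dev/loop3p1
# raid5 across the first three PVs: two data stripes plus parity
sudo lvcreate --type raid5 --stripes 2 --size 184M --name testraid \
    testvg /dev/loop0p1 /dev/loop1p1 /dev/loop2p1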

$ sudo vgs
  WARNING: Not using lvmetad because a repair command was run.
  VG     #PV #LV #SN Attr   VSize   VFree 
  testvg   4   1   0 wz--n- 384.00m 96.00m
$ sudo pvs
  WARNING: Not using lvmetad because a repair command was run.
  PV           VG     Fmt  Attr PSize  PFree 
  /dev/loop0p1 testvg lvm2 a--  96.00m     0 
  /dev/loop1p1 testvg lvm2 a--  96.00m     0 
  /dev/loop2p1 testvg lvm2 a--  96.00m     0 
  /dev/loop3p1 testvg lvm2 a--  96.00m 96.00m
$ sudo lvs
  WARNING: Not using lvmetad because a repair command was run.
  LV       VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  testraid testvg rwi-a-r--- 184.00m                                    100.00    

If I try to change the number of stripes to bring in the additional disk, the command returns an error, yet the new PV now shows allocated sub-LVs and the LV shows the increased size. The new sub-LV, however, carries the out-of-sync attribute, and running a repair on the LV fails.

$ sudo lvconvert --stripes 3 /dev/testvg/testraid
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume testvg/testraid will grow it from 46 to 69 extents!
  Run "lvresize -l46 testvg/testraid" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV testvg/testraid? [y/n]: y
  Insufficient free space: 4 extents needed, but only 0 available
  Failed to allocate out-of-place reshape space for testvg/testraid.
  Insufficient free space: 4 extents needed, but only 0 available
  Failed to allocate out-of-place reshape space for testvg/testraid.
  Reshape request failed on LV testvg/testraid.
$ sudo pvs -a -o +pv_pe_count,pv_pe_alloc_count
  PV                    VG     Fmt  Attr PSize  PFree PE  Alloc
  /dev/loop0p1          testvg lvm2 a--  96.00m    0   24    24
  /dev/loop1p1          testvg lvm2 a--  96.00m    0   24    24
  /dev/loop2p1          testvg lvm2 a--  96.00m    0   24    24
  /dev/loop3p1          testvg lvm2 a--  96.00m    0   24    24
$ sudo lvs -a
  LV                  VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  testraid            testvg rwi-a-r--- 276.00m                                    100.00          
  [testraid_rimage_0] testvg iwi-aor---  92.00m                                                    
  [testraid_rimage_1] testvg iwi-aor---  92.00m                                                    
  [testraid_rimage_2] testvg iwi-aor---  92.00m                                                    
  [testraid_rimage_3] testvg Iwi-aor---  92.00m                                                    
  [testraid_rmeta_0]  testvg ewi-aor---   4.00m                                                    
  [testraid_rmeta_1]  testvg ewi-aor---   4.00m                                                    
  [testraid_rmeta_2]  testvg ewi-aor---   4.00m                                                    
  [testraid_rmeta_3]  testvg ewi-aor---   4.00m 
$ sudo lvconvert --repair /dev/testvg/testraid 
  WARNING: Not using lvmetad because of repair.
  Active raid has a wrong number of raid images!
  Metadata says 4, kernel says 3.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  WARNING: Disabling lvmetad cache for repair command.
  Unable to repair testvg/testraid.  Source devices failed before the RAID could synchronize.
  You should choose one of the following:
    1) deactivate testvg/testraid, revive failed device, re-activate LV, and proceed.
    2) remove the LV (all data is lost).
    3) Seek expert advice to attempt to salvage any data from remaining devices.
  Failed to replace faulty devices in testvg/testraid.

What steps should I take to extend my LV with an additional, identical disk?

Answer 1

The notes I took while working through this are here: https://wiki.archlinux.org/index.php/User:Ctag/Notes#Growing_LVM_Raid5

I ended up adding a new disk and migrating to RAID6.

If I remember correctly, the problem was that the new disk was a few sectors smaller than the others, and that the overhead of the necessary LVM/raid metadata grows slightly as disks are added (so even an identical disk would not have worked). The fix for both is to stop just short of using every sector of the disks, leaving room for the metadata and for future differences between disks.

# pvs -a -o +pv_pe_count,pv_pe_alloc_count
  PV                     VG      Fmt  Attr PSize  PFree    PE     Alloc 
  /dev/mapper/cryptslow1 cryptvg lvm2 a--  <1.82t   20.00m 476931 476931
  /dev/mapper/cryptslow2 cryptvg lvm2 a--  <1.82t   20.00m 476931 476931
  /dev/mapper/cryptslow3 cryptvg lvm2 a--  <2.73t <931.52g 715395 476931
  /dev/mapper/cryptslow4 cryptvg lvm2 a--  <1.82t   <1.82t 476927      0

See how the new disk above has only "476927" extents rather than "476931"? That is the problem. We need LVM to allocate no more than the smaller number of extents for the RAID5 so that this new disk can be used. Since the LV has two data stripes, shrinking it by 10 extents (as below) removes 5 extents from each rimage, dropping every existing PV's allocation from 476931 to 476926, just under the new disk's 476927.

# lvresize -r -l -10 /dev/cryptvg/raid
fsck from util-linux 2.34
/dev/mapper/cryptvg-raid: clean, 913995/240320512 files, 686703011/961280000 blocks
resize2fs 1.45.3 (14-Jul-2019)
Resizing the filesystem on /dev/mapper/cryptvg-raid to 976742400 (4k) blocks.
The filesystem on /dev/mapper/cryptvg-raid is now 976742400 (4k) blocks long.

  Size of logical volume cryptvg/raid changed from <3.64 TiB (953860 extents) to <3.64 TiB (953850 extents).
  Logical volume cryptvg/raid successfully resized.

# pvs -a -o +pv_pe_count,pv_pe_alloc_count
  PV                     VG      Fmt  Attr PSize  PFree    PE     Alloc 
  /dev/mapper/cryptslow1 cryptvg lvm2 a--  <1.82t   20.00m 476931 476926
  /dev/mapper/cryptslow2 cryptvg lvm2 a--  <1.82t   20.00m 476931 476926
  /dev/mapper/cryptslow3 cryptvg lvm2 a--  <2.73t <931.52g 715395 476926
  /dev/mapper/cryptslow4 cryptvg lvm2 a--  <1.82t   <1.82t 476927      0

Now we can go ahead and add the new disk, and this time it works.
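
For completeness, this is roughly what "adding the new disk" amounts to. It is a sketch only: my notes ended with a conversion to raid6 rather than a plain fourth stripe (see lvmraid(7) for the conversion paths), and the device and LV names are taken from the pvs output above:

# vgextend cryptvg /dev/mapper/cryptslow4   # only if the PV is not yet in the VG (here it already is)
# lvconvert --stripes 3 cryptvg/raid        # add the new disk as a fourth stripe and start the reshape
# lvs -a cryptvg                            # watch Cpy%Sync until the reshape completes
# resize2fs /dev/cryptvg/raid               # then grow the ext4 filesystem into the new space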
