Growing an mdadm RAID from RAID1 to RAID0 with an active LVM on top

We have rented a server that has two NVMe disks in a RAID1 configuration, with LVM on top of it.

Is it possible to change the RAID level to RAID0 without making any changes to the LVM configuration? We don't need the redundancy, but we may need more disk space soon.

I have not worked with mdadm before. I tried running mdadm --grow /dev/md4 -l 0 but got the error: mdadm: failed to remove internal bitmap.
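If I read the mdadm man page correctly, the internal write-intent bitmap would have to be dropped before the level can be changed, and presumably the rebuild that is currently running has to finish first. Something like the following is what I had in mind, but it is untested:

# wait for the recovery on md4 to complete, then:
mdadm --grow /dev/md4 --bitmap=none   # drop the internal write-intent bitmap
mdadm --grow /dev/md4 --level=0       # attempt the raid1 -> raid0 conversion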

Some additional information:

The OS is Ubuntu 18.04.
The hosting provider is IONOS.
I have access to a Debian rescue system, but no physical access to the server.

mdadm --detail /dev/md4
=======================

/dev/md4:
           Version : 1.0
     Creation Time : Wed May 12 09:52:01 2021
        Raid Level : raid1
        Array Size : 898628416 (857.00 GiB 920.20 GB)
     Used Dev Size : 898628416 (857.00 GiB 920.20 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed May 12 10:55:07 2021
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : bitmap

    Rebuild Status : 7% complete

              Name : punix:4
              UUID : 42d57123:263dd789:ef368ee1:8e9bbe3f
            Events : 991

    Number   Major   Minor   RaidDevice State
       0     259        9        0      active sync   /dev/nvme0n1p4
       2     259        4        1      spare rebuilding   /dev/nvme1n1p4


/proc/mdstat:
=======

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 nvme0n1p2[0] nvme1n1p2[2]
      29293440 blocks super 1.0 [2/1] [U_]
        resync=DELAYED
      
md4 : active raid1 nvme0n1p4[0] nvme1n1p4[2]
      898628416 blocks super 1.0 [2/1] [U_]
      [>....................]  recovery =  2.8% (25617280/898628416) finish=704.2min speed=20658K/sec
      bitmap: 1/7 pages [4KB], 65536KB chunk

unused devices: <none>


df -h:
======

Filesystem             Size  Used Avail Use% Mounted on
udev                    32G     0   32G   0% /dev
tmpfs                  6.3G   11M  6.3G   1% /run
/dev/md2                28G  823M   27G   3% /
/dev/vg00/usr          9.8G 1013M  8.3G  11% /usr
tmpfs                   32G     0   32G   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                   32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/vg00-home  9.8G   37M  9.3G   1% /home
/dev/mapper/vg00-var   9.8G  348M  9.0G   4% /var
tmpfs                  6.3G     0  6.3G   0% /run/user/0


fdisk -l:
=========

Disk /dev/nvme1n1: 894.3 GiB, 960197124096 bytes, 1875385008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3FEDFA8D-D63F-42EE-86C9-5E728FA617D2

Device            Start        End    Sectors  Size Type
/dev/nvme1n1p1     2048       6143       4096    2M BIOS boot
/dev/nvme1n1p2     6144   58593279   58587136   28G Linux RAID
/dev/nvme1n1p3 58593280   78125055   19531776  9.3G Linux swap
/dev/nvme1n1p4 78125056 1875382271 1797257216  857G Linux RAID


Disk /dev/md4: 857 GiB, 920195497984 bytes, 1797256832 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md2: 28 GiB, 29996482560 bytes, 58586880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/nvme0n1: 894.3 GiB, 960197124096 bytes, 1875385008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 948B7F9A-0758-4B01-8CD2-BDB08D0BE645

Device            Start        End    Sectors  Size Type
/dev/nvme0n1p1     2048       6143       4096    2M BIOS boot
/dev/nvme0n1p2     6144   58593279   58587136   28G Linux RAID
/dev/nvme0n1p3 58593280   78125055   19531776  9.3G Linux swap
/dev/nvme0n1p4 78125056 1875382271 1797257216  857G Linux RAID


Disk /dev/mapper/vg00-usr: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


LVM configuration:
==================

  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg00
  PV Size               <857.00 GiB / not usable 2.81 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              219391
  Free PE               211711
  Allocated PE          7680
  PV UUID               bdTpM6-vxql-momc-sTZC-0B3R-VFtZ-S72u7V
   
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <857.00 GiB
  PE Size               4.00 MiB
  Total PE              219391
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       211711 / <827.00 GiB
  VG UUID               HIO5xT-VRw3-BZN7-3h3m-MGqr-UwOS-WxOQTS
   
  --- Logical volume ---
  LV Path                /dev/vg00/usr
  LV Name                usr
  VG Name                vg00
  LV UUID                cv3qcf-8ZB4-JaIp-QYvo-x4ol-veIH-xI37Z6
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                ZtAM8T-MO4F-YrqF-hgUN-ctMC-1RSn-crup3E
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                AeIwpS-dnX1-6oGP-ieZ2-hmGs-57zd-6DnXRv
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   

Thanks

Answer 1

This may not be the approach you originally had in mind, but you could move the LVM data between the disks, so that you end up with both drives as LVM physical volumes in your volume group.

To do that, you would remove one drive from the RAID1 array, run pvcreate on the detached drive to reformat it, and then add it to your LVM volume group with vgextend. That should double the size of your LVM volume group. Then move the data off the degraded array with pvmove and remove the array from the LVM VG; this should transfer the data in a reasonably fault-tolerant way (see the NOTES section of the pvmove man page for details). Once the degraded array has been removed from the VG, you can stop the array and add its remaining drive to the LVM group the same way you added the other one. A rough sketch of the command sequence is shown below.
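A minimal sketch of that sequence, assuming the resync has already finished and that /dev/nvme1n1p4 is the member you detach first (device and VG names are taken from the question; the answer does not spell out the exact commands):

# 1. Break the mirror: fail and remove one member of md4
mdadm /dev/md4 --fail /dev/nvme1n1p4
mdadm /dev/md4 --remove /dev/nvme1n1p4
mdadm --zero-superblock /dev/nvme1n1p4

# 2. Turn the freed partition into an LVM PV and add it to vg00
pvcreate /dev/nvme1n1p4
vgextend vg00 /dev/nvme1n1p4

# 3. Move all extents off the degraded array, then drop it from the VG
pvmove /dev/md4
vgreduce vg00 /dev/md4
pvremove /dev/md4

# 4. Stop the now-empty array and reuse its remaining member
mdadm --stop /dev/md4
mdadm --zero-superblock /dev/nvme0n1p4
pvcreate /dev/nvme0n1p4
vgextend vg00 /dev/nvme0n1p4

Since only 30 GiB of the VG is currently allocated, the pvmove step should not take long. You would probably also want to remove the md4 entry from /etc/mdadm/mdadm.conf and regenerate the initramfs afterwards, so the array is no longer expected at boot.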

I recently migrated LVM-hosted data in a similar situation, though in the other direction: from a RAID10 holding two copies of the data to two RAID1 arrays with three copies each, on bigger disks, so we got the best of both worlds: more space and more reliability. I don't know what your use case is, but I should say that I personally would not be comfortable hosting data without RAID unless it could easily be regenerated from scratch. 2 TB seems like a lot of data to recreate or resync, but if nobody will be bothered by the long downtime or the network traffic, that is your call.
