How do I resize the filesystem on a RAID array?

I recently added a fifth drive to my software RAID array, and mdadm has accepted it:

$ lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1        259:0    0 894.3G  0 disk
├─nvme0n1p1    259:4    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme0n1p2    259:5    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme3n1        259:1    0 894.3G  0 disk
├─nvme3n1p1    259:6    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme3n1p2    259:7    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme2n1        259:2    0 894.3G  0 disk
├─nvme2n1p1    259:8    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme2n1p2    259:9    0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme1n1        259:3    0 894.3G  0 disk
├─nvme1n1p1    259:10   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme1n1p2    259:11   0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
nvme4n1        259:12   0 894.3G  0 disk
├─nvme4n1p1    259:15   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─nvme4n1p2    259:16   0 893.8G  0 part
  └─md1          9:1    0   3.5T  0 raid5
    ├─vg0-swap 253:0    0    32G  0 lvm   [SWAP]
    ├─vg0-tmp  253:1    0    50G  0 lvm   /tmp
    └─vg0-root 253:2    0   2.6T  0 lvm   /
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md0 : active raid1 nvme4n1p1[4] nvme1n1p1[2] nvme3n1p1[0] nvme0n1p1[3] nvme2n1p1[1]
      523264 blocks super 1.2 [5/5] [UUUUU]

md1 : active raid5 nvme4n1p2[5] nvme2n1p2[1] nvme1n1p2[2] nvme3n1p2[0] nvme0n1p2[4]
      3748134912 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 3/7 pages [12KB], 65536KB chunk

unused devices: <none>

The problem is that my filesystem still thinks only four drives are attached and hasn't grown to take advantage of the extra one.

I tried:

$ sudo e2fsck -fn /dev/md1
e2fsck 1.45.5 (07-Jan-2020)
Warning!  /dev/md1 is in use.
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/md1 contains a LVM2_member file system

$ sudo resize2fs /dev/md1
resize2fs 1.45.5 (07-Jan-2020)
resize2fs: Device or resource busy while trying to open /dev/md1
Couldn't find valid filesystem superblock.

but no luck so far:

$ df
Filesystem            1K-blocks       Used Available Use% Mounted on
udev                  131841212          0 131841212   0% /dev
tmpfs                  26374512       2328  26372184   1% /run
/dev/mapper/vg0-root 2681290296 2329377184 215641036  92% /
tmpfs                 131872540          0 131872540   0% /dev/shm
tmpfs                      5120          0      5120   0% /run/lock
tmpfs                 131872540          0 131872540   0% /sys/fs/cgroup
/dev/md0                 498532      86231    386138  19% /boot
/dev/mapper/vg0-tmp    52427196     713248  51713948   2% /tmp
tmpfs                  26374508          0  26374508   0% /run/user/1001
tmpfs                  26374508          0  26374508   0% /run/user/1002

I hope this is enough information, but I'm happy to provide more if it would help.

Answer

Since you are using LVM, you have several steps to perform:

  1. Resize the LVM physical volume: pvresize /dev/md1
  2. If you also want to enlarge /tmp: lvextend -L +1G /dev/mapper/vg0-tmp
  3. If you don't want to reserve space for /tmp or for future expansion of new volumes, give all the remaining space to the root volume: lvextend -l +100%FREE /dev/mapper/vg0-root
  4. Resize the filesystems: resize2fs /dev/mapper/vg0-root (and resize2fs /dev/mapper/vg0-tmp, if you resized that volume)
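Put together, the steps above can be sketched as a small script. The vg0 / /dev/md1 names come from the question's lsblk output; adjust them to your layout. As a safety measure this sketch defaults to a dry run that only prints each command; set DRY_RUN=0 to actually execute them.

```shell
#!/bin/sh
# Sketch of the resize sequence, assuming VG "vg0" and array /dev/md1
# as shown in the question. DRY_RUN=1 (the default) only prints each
# command; set DRY_RUN=0 to run them for real.
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = "0" ] && "$@"; return 0; }

# 1. Grow the LVM physical volume to cover the enlarged array
run sudo pvresize /dev/md1

# 2. (Optional) give /tmp some extra room
run sudo lvextend -L +1G /dev/mapper/vg0-tmp

# 3. Hand all remaining free space to the root volume
run sudo lvextend -l +100%FREE /dev/mapper/vg0-root

# 4. Grow the ext4 filesystems to fill their volumes
#    (growing ext4 works online, while the filesystems stay mounted)
run sudo resize2fs /dev/mapper/vg0-root
run sudo resize2fs /dev/mapper/vg0-tmp

# Verify the new sizes
run sudo pvs
run df -h / /tmp
```

Running it once with the default DRY_RUN=1 lets you review the exact commands before committing to them.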
