Expanding ZFS to occupy the available space

I have an Ubuntu 21.10 VirtualBox VM that was set up with a 100GB ZFS drive.

I extended the drive to 500GB because the system became a victim of its own popularity.

However, I can't work out how to grow ZFS to fill all of the available space.

Part of the problem is that I'm not sure what I should see when it's done. Will I still see "free space", with ZFS magically growing into it, or should I see the pool (rpool) resize and take up the free space?
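
From what I can tell, zpool list can answer this directly; my assumption from the zpool docs is that its EXPANDSZ column shows space the pool could claim but has not yet claimed, so I would expect to be able to verify with something like:

zpool list -o name,size,expandsize rpool   [EXPANDSZ should read '-' once the pool has grown]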

The basic steps I followed are in the snippets below.

Am I doing something wrong? Comments, corrections, etc. are welcome.

==========

### zfs list
NAME                                               USED  AVAIL     REFER  MOUNTPOINT
bpool                                              867M   924M       96K  /boot
bpool/BOOT                                         864M   924M       96K  none
bpool/BOOT/ubuntu_j4opxa                           864M   924M      159M  /boot
rpool                                             47.5G  44.5G       96K  /
rpool/ROOT                                        33.5G  44.5G       96K  none
rpool/ROOT/ubuntu_j4opxa                          33.5G  44.5G     7.17G  /
rpool/ROOT/ubuntu_j4opxa/srv                       376K  44.5G       96K  /srv
rpool/ROOT/ubuntu_j4opxa/usr                      5.74G  44.5G       96K  /usr
rpool/ROOT/ubuntu_j4opxa/usr/local                5.74G  44.5G     5.74G  /usr/local
rpool/ROOT/ubuntu_j4opxa/var                      11.2G  44.5G       96K  /var
rpool/ROOT/ubuntu_j4opxa/var/games                  96K  44.5G       96K  /var/games
rpool/ROOT/ubuntu_j4opxa/var/lib                  10.1G  44.5G     8.02G  /var/lib
rpool/ROOT/ubuntu_j4opxa/var/lib/AccountsService   828K  44.5G      340K  /var/lib/AccountsService
rpool/ROOT/ubuntu_j4opxa/var/lib/NetworkManager   1.71M  44.5G      132K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_j4opxa/var/lib/apt               186M  44.5G     80.4M  /var/lib/apt
rpool/ROOT/ubuntu_j4opxa/var/lib/dpkg              250M  44.5G     55.0M  /var/lib/dpkg
rpool/ROOT/ubuntu_j4opxa/var/log                  1.09G  44.5G      928M  /var/log
rpool/ROOT/ubuntu_j4opxa/var/mail                   96K  44.5G       96K  /var/mail
rpool/ROOT/ubuntu_j4opxa/var/snap                 11.4M  44.5G     3.96M  /var/snap
rpool/ROOT/ubuntu_j4opxa/var/spool                1.67M  44.5G      136K  /var/spool
rpool/ROOT/ubuntu_j4opxa/var/www                  1.20M  44.5G      320K  /var/www
rpool/USERDATA                                    13.9G  44.5G       96K  /
rpool/USERDATA/root_pp526g                        3.14M  44.5G      948K  /root
rpool/USERDATA/tomcat_bkjegm                       434M  44.5G      433M  /home/tomcat
rpool/USERDATA/universal_pp526g                   13.5G  44.5G     9.50G  /home/universal

### lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 138.1M  1 loop /snap/krita/63
loop1    7:1    0  65.1M  1 loop /snap/gtk-common-themes/1515
loop2    7:2    0 111.6M  1 loop /snap/mysql-workbench-community/7
loop3    7:3    0 162.9M  1 loop /snap/gnome-3-28-1804/145
loop4    7:4    0 260.7M  1 loop /snap/kde-frameworks-5-core18/32
loop5    7:5    0   219M  1 loop /snap/gnome-3-34-1804/66
loop6    7:6    0     4K  1 loop /snap/bare/5
loop7    7:7    0 386.6M  1 loop /snap/bluej/161
loop8    7:8    0 386.6M  1 loop /snap/bluej/168
loop9    7:9    0  61.8M  1 loop /snap/core20/1081
loop10   7:10   0  55.5M  1 loop /snap/core18/2246
loop11   7:11   0  55.4M  1 loop /snap/core18/2128
loop12   7:12   0 152.3M  1 loop /snap/firefox/689
loop13   7:13   0   219M  1 loop /snap/gnome-3-34-1804/72
loop14   7:14   0  42.2M  1 loop /snap/snapd/13831
loop15   7:15   0  99.4M  1 loop /snap/core/11798
loop16   7:16   0 124.7M  1 loop /snap/mysql-workbench-community/9
loop17   7:17   0  61.8M  1 loop /snap/core20/1169
loop18   7:18   0 164.8M  1 loop /snap/gnome-3-28-1804/161
loop19   7:19   0 176.9M  1 loop /snap/krita/64
loop20   7:20   0 152.3M  1 loop /snap/firefox/701
loop21   7:21   0  54.2M  1 loop /snap/snap-store/557
loop22   7:22   0 251.5M  1 loop /snap/dbeaver-ce/148
loop23   7:23   0  99.4M  1 loop /snap/core/11993
loop24   7:24   0   277M  1 loop /snap/gimp/380
loop25   7:25   0 276.7M  1 loop /snap/gimp/372
loop26   7:26   0 242.3M  1 loop /snap/gnome-3-38-2004/76
loop27   7:27   0    51M  1 loop /snap/snap-store/547
loop28   7:28   0  32.4M  1 loop /snap/snapd/13640
loop29   7:29   0 248.9M  1 loop /snap/dbeaver-ce/147
loop30   7:30   0 241.4M  1 loop /snap/gnome-3-38-2004/70
loop31   7:31   0  65.2M  1 loop /snap/gtk-common-themes/1519
sda      8:0    0   500G  0 disk 
├─sda1   8:1    0     1M  0 part 
├─sda2   8:2    0   513M  0 part /boot/efi
├─sda3   8:3    0     2G  0 part [SWAP]
├─sda4   8:4    0     2G  0 part 
└─sda5   8:5    0  95.5G  0 part 
sr0     11:0    1  1024M  0 rom  

### zpool status
  pool: bpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:11 with 0 errors on Sun Nov 14 00:24:12 2021
config:

    NAME                                    STATE     READ WRITE CKSUM
    bpool                                   ONLINE       0     0     0
      256b9001-18b9-474f-85e9-42f2a4886fa1  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:12:30 with 0 errors on Sun Nov 14 00:36:32 2021
config:

    NAME                                    STATE     READ WRITE CKSUM
    rpool                                   ONLINE       0     0     0
      723a9d83-56e9-d94f-b80a-fc040fda002a  ONLINE       0     0     0

errors: No known data errors

### zpool set autoexpand=on rpool

### sudo zpool online -e rpool 723a9d83-56e9-d94f-b80a-fc040fda002a

### Then reboot...
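
A sanity check at this point (a sketch; it assumes sda5 from the lsblk output above is the partition backing rpool):

zpool list -o name,size,expandsize rpool   [SIZE should jump to ~500G once the expansion takes]
lsblk /dev/sda                             [shows whether the sda5 partition itself actually grew]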

Then it seemed that "all I needed to do was this":

Step 1 - Run parted
parted
print       [gives the partitions, in my case 1..5]
print free  [shows up the free space, and in particular the end -> 537GB]

Step 2 - Use parted to resize
resizepart
Partition number? 5
End? 537GB

Step 3 - Reboot

Except it didn't work: despite rpool showing the full 500G as intended, data transfers still blew out at 100GB, the original size.

I'm at a loss...

Answer 1

It seemed that "all I needed to do was this":

Step 1 - Run parted
parted
print       [gives the partitions, in my case 1..5]
print free  [shows the free space, and in particular the end -> 537GB]

Step 2 - Use parted to resize
resizepart
Partition number? 5
End? 537GB

Step 3 - Reboot

Done...

Perhaps there is a zfs utility that can do the same thing. I don't know.
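
For the record, the same thing can be done without the interactive prompts. This is only a sketch, assuming partition 5 on /dev/sda backs rpool as in the question's lsblk output; newer parted versions accept resizepart on the command line, though they may ask for confirmation if the partition is in use:

sudo parted /dev/sda resizepart 5 100%     [grow the rpool partition to the end of the disk]
sudo partprobe /dev/sda                    [ask the kernel to re-read the partition table]
sudo zpool online -e rpool 723a9d83-56e9-d94f-b80a-fc040fda002a   [expand the vdev into the new space]
zpool list -o name,size,expandsize rpool   [verify: EXPANDSZ should now read '-']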

Answer 2

My situation was very similar. I was running Ubuntu 20.04 on ZFS and wanted to swap the SSD in my laptop for a bigger one.

### On cloning the disk ###

I only have one disk slot, so I connected the bigger SSD over USB and booted from an Ubuntu live USB. Then I ran sudo dd if=/dev/old-ssd of=/dev/new-ssd status=progress. Once that finished, I replaced the small SSD in the laptop with the big one.
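
A variant of the same clone command with a larger block size usually runs much faster. This is a sketch; old-ssd and new-ssd are placeholders, so double-check the device names with lsblk first, since dd will happily overwrite the wrong disk:

sudo dd if=/dev/old-ssd of=/dev/new-ssd bs=4M conv=fsync status=progress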

### On importing bpool ###

It booted correctly on the new disk, and I could see rpool with zpool status, but no bpool! In cat /etc/fstab I saw that Ubuntu expected bpool to have the UUID "8EB4-3F4B". So I got the full path of the device with find /dev -name 8EB4-3F4B, and then imported the pool with sudo zpool import -d /dev/disk/by-uuid/8EB4-3F4B bpool.
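
Put together, the recovery looks like this (a sketch; the UUID is the one from my fstab and will differ on other machines):

find /dev -name 8EB4-3F4B                                [locate the device node behind the UUID]
sudo zpool import -d /dev/disk/by-uuid/8EB4-3F4B bpool   [import the pool from that path]
zpool status bpool                                       [confirm it comes up ONLINE]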

### On expanding rpool ###

I found https://serverfault.com/questions/946055/increase-the-zfs-partition-to-use-the-entire-disk. Since I only have one disk, bpool and rpool live on different partitions of the same disk, so zpool set autoexpand=on and zpool online -e ... will not work unless you grow that partition first. Unfortunately, gparted would not let me grow the partition, so I had to do it from the command line.

In my case, zpool status returned the device "732fe207-fc96-9f42-b50b-30ca4a096e77" for rpool. ls -lah /dev/disk/*/732fe207-fc96-9f42-b50b-30ca4a096e77 showed that "/dev/disk/by-partuuid/732fe207-fc96-9f42-b50b-30ca4a096e77" points to "/dev/nvme0n1p4" (the 4th partition of /dev/nvme0n1). So I did the following (the full sequence is sketched again right after this list):

  • sudo su
  • partprobe
  • parted /dev/nvme0n1, and inside parted: print (to list the partitions) and resizepart (I chose partition 4 and entered "100%")
  • I expanded rpool with zpool online -e rpool /dev/disk/by-partuuid/732fe207-fc96-9f42-b50b-30ca4a096e77
  • I verified the new available space with zfs list
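
The same steps as one sketch (device names are from this machine and will differ elsewhere; parted may ask for confirmation if the partition is in use):

sudo partprobe                                [make the kernel re-read the partition tables]
sudo parted /dev/nvme0n1 print                [list the partitions; rpool is on partition 4 here]
sudo parted /dev/nvme0n1 resizepart 4 100%    [grow partition 4 to the end of the disk]
sudo zpool online -e rpool /dev/disk/by-partuuid/732fe207-fc96-9f42-b50b-30ca4a096e77
zfs list                                      [AVAIL should now reflect the new space]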

I hope this helps!
