My original RAID setup was a 2x2TB RAID 1 using mdadm.
I bought a third 2TB drive and want to grow the array's total capacity to 4 TB with mdadm.
I have already run the following two commands, but I am not seeing any change in capacity:
sudo mdadm --grow /dev/md0 --level=5
sudo mdadm --grow /dev/md0 --add /dev/sdd --raid-devices=3
Output of mdadm --detail:
$ sudo mdadm --detail /dev/md0
[sudo] password for userd:
/dev/md0:
Version : 1.2
Creation Time : Wed Jul 5 19:59:17 2017
Raid Level : raid5
Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed May 22 17:58:37 2019
State : clean, reshaping
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : bitmap
Reshape Status : 5% complete
Delta Devices : 1, (2->3)
Name : userd:0 (local to host userd)
UUID : 986fca95:68ef5344:5136f8af:b8d34a03
Events : 13557
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 48 2 active sync /dev/sdd
Update: with the reshape now complete, only 2 TB of the 4 TB is usable.
/dev/md0:
Version : 1.2
Creation Time : Wed Jul 5 19:59:17 2017
Raid Level : raid5
Array Size : 3906766976 (3725.78 GiB 4000.53 GB)
Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu May 23 23:40:16 2019
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : bitmap
Name : userd:0 (local to host userd)
UUID : 986fca95:68ef5344:5136f8af:b8d34a03
Events : 17502
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 48 2 active sync /dev/sdd
How do I get mdadm to use the full 4 TB instead of just 2 TB?
Answer 1
Look at the reshape status:
Update Time : Wed May 22 17:58:37 2019
State : clean, reshaping
...
Reshape Status : 5% complete
Delta Devices : 1, (2->3)
You will not get any additional storage until the reshape finishes, and the report you posted shows it is only 5% complete.
While the reshape is running, do not interrupt it or attempt another shape change.
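The progress can also be watched via /proc/mdstat, which reports the same percentage as mdadm --detail. Below is a minimal sketch of pulling that figure out; it runs against a sample mdstat snippet (the figures are illustrative, since reading the real file needs a live array), but the same pipeline works on `cat /proc/mdstat`:

```shell
# Sample /proc/mdstat content for an array mid-reshape (illustrative values;
# on a live system, substitute: mdstat=$(cat /proc/mdstat) ).
mdstat='md0 : active raid5 sdd[2] sdb[1] sda[0]
      1953383488 blocks super 1.2 level 5, 64k chunk [3/3] [UUU]
      [=>...................]  reshape =  5.0% (97669174/1953383488) finish=600.0min speed=51200K/sec'

# Extract just the completion percentage from the reshape line.
progress=$(printf '%s\n' "$mdstat" | grep -o 'reshape = *[0-9.]*%' | grep -o '[0-9.]*%')
echo "$progress"
```

On a live system, `watch cat /proc/mdstat` gives a continuously updating view of the same line.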
Answer 2
As it turned out, the answer was to run a filesystem check from GParted to take advantage of the extra space.
To fix this, I had to:
- Unmount the filesystem.
- Open GParted.
- Select the RAID device (/dev/md0 in my case).
- Run a check (Partition -> Check).
This successfully resized the filesystem on md0 to use all the available space.
The exact operation log from GParted follows:
GParted 0.33.0 --enable-libparted-dmraid --enable-online-resize
Libparted 3.2
Check and repair file system (ext4) on /dev/md0 00:03:51 ( SUCCESS )
calibrate /dev/md0 00:00:00 ( SUCCESS )
path: /dev/md0 (device)
start: 0
end: 7813533951
size: 7813533952 (3.64 TiB)
check file system on /dev/md0 for errors and (if possible) fix them 00:02:43 ( SUCCESS )
e2fsck -f -y -v -C 0 '/dev/md0' 00:02:43 ( SUCCESS )
Pass 1: Checking inodes, blocks, and sizes
Inode 30829505 extent tree (at level 1) could be shorter. Optimize? yes
Inode 84025620 extent tree (at level 1) could be narrower. Optimize? yes
Inode 84806354 extent tree (at level 2) could be narrower. Optimize? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create? yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information
StorageArray0: ***** FILE SYSTEM WAS MODIFIED *****
5007693 inodes used (4.10%, out of 122093568)
23336 non-contiguous files (0.5%)
2766 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 4942467/2090/2
458492986 blocks used (93.89%, out of 488345872)
0 bad blocks
52 large files
4328842 regular files
612231 directories
0 character device files
0 block device files
3 fifos
1396 links
66562 symbolic links (63077 fast symbolic links)
45 sockets
------------
5009079 files
e2fsck 1.45.1 (12-May-2019)
grow file system to fill the partition 00:01:08 ( SUCCESS )
resize2fs -p '/dev/md0' 00:01:08 ( SUCCESS )
Resizing the filesystem on /dev/md0 to 976691744 (4k) blocks.
The filesystem on /dev/md0 is now 976691744 (4k) blocks long.
resize2fs 1.45.1 (12-May-2019)
========================================
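The same grow can be done from the command line with e2fsck and resize2fs, which is essentially what GParted ran above (on the real array: unmount, `e2fsck -f /dev/md0`, then `resize2fs /dev/md0`). The sketch below exercises that sequence on an ordinary file instead of /dev/md0, so it runs without root or a real array; the image path and sizes are made up for the demo:

```shell
# Build a small ext4 filesystem in a file, "grow the device", then grow the
# filesystem to fill it -- the same steps GParted performed on md0.
img=/tmp/md0-demo.img
truncate -s 64M "$img"
mkfs.ext4 -q -F -b 4096 "$img"    # 64 MiB ext4 filesystem, 4 KiB blocks
truncate -s 128M "$img"           # the underlying device doubles, like md0 after the reshape
e2fsck -f -p "$img" >/dev/null    # resize2fs requires a fresh clean check first
resize2fs "$img" >/dev/null 2>&1  # no size argument: grow to fill the device
# Confirm the filesystem now spans the whole 128 MiB (32768 x 4 KiB blocks).
dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ {print $3}'
```

With no size argument, resize2fs grows the filesystem to the full size of the underlying device, which is exactly what was needed once the md reshape had made the extra 2 TB available.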