Actually, I have created a software RAID 5 array, md0, using mdadm on Linux. In this configuration I used 6 hard disks (4 TB each), and the resulting raid5 array is about 20 TB. As is well known, raid 5 follows the n-1 rule, where n is the total number of disks and one disk's worth of capacity is used for parity so that the array can survive a single disk failure. Below is my configuration:
$ cat /proc/mdstat
md0 : active raid5 sdc1[4] sdg1[6] sdd1[1] sdf1[5] sde1[3] sdb1[0]
19534428160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 0/30 pages [0KB], 65536KB chunk
[root@storageserver ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed May 17 17:28:11 2017
Raid Level : raid5
**Array Size : 19534428160 (18629.48 GiB 20003.25 GB)**
Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Jan 9 16:37:50 2019
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : localhost.localdomain:0
UUID : 988759d7:91d52c10:f4e39656:2129ab64
Events : 51388
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 49 1 active sync /dev/sdd1
3 8 65 2 active sync /dev/sde1
4 8 33 3 active sync /dev/sdc1
6 8 97 4 active sync /dev/sdg1
5 8 81 5 active sync /dev/sdf1
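The figures above are consistent with the raid5 n-1 rule. As a quick sanity check (a minimal sketch using plain shell arithmetic; the numbers are simply the 1 KiB block counts from the mdadm --detail output above), five data members times the Used Dev Size reproduce the reported Array Size:
# RAID-5 usable capacity = (n - 1) members x per-member size, here with n = 6
echo $((5 * 3906885632))                        # 19534428160, matches "Array Size" above
echo $((5 * 3906885632 / 1024 / 1024 / 1024))   # ~18 (TiB), i.e. about 20 TB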
When I mount this array at the /mnt/wsraid directory, the df -h command shows the array size as only 7.3 TB:
[root@storageserver ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 494M 260M 234M 53% /boot
/dev/mapper/centos-root 50G 43G 7.2G 86% /
/dev/mapper/centos-home 166G 79G 88G 48% /home
**/dev/md0 7.3T 7.3T 0 100% /mnt/wsraid**
Right now I simply cannot copy more than 7.3 TB of data to it, and when I searched the internet I could not find out how to mount my array at its full size (about 20 TB, i.e. roughly 18 TiB). So please help me solve this problem. Thank you.
Below is the fdisk -l command output:
[root@storageserver ~]# fdisk -l
Disk /dev/sda: 240.1 GB, 240057409536 bytes, 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x0008ff0d
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 468860927 233917440 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: A577BBCF-4DEE-47AF-9747-B03DC20700E0
# Start End Size Type Name
1 2048 7814035455 3.7T Microsoft basic
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: FBA6EDAA-1C7E-47D8-8A7C-C6CC82CB482A
# Start End Size Type Name
1 2048 7814035455 3.7T Microsoft basic
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 200CDB7A-D314-4377-B0F3-33430D804913
# Start End Size Type Name
1 2048 7814035455 3.7T Microsoft basic
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sde: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 43CF8B19-B035-49BC-A015-D9F9CDC8779A
# Start End Size Type Name
1 2048 7814035455 3.7T Microsoft basic
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdf: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 49A22211-B798-45BC-9499-0F32D2E7D1EC
# Start End Size Type Name
1 2048 7814035455 3.7T Microsoft basic
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdg: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: 449A3298-D412-4AEE-897C-B307B6868F1B
# Start End Size Type Name
1 2048 7814035455 3.7T Microsoft basic
Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/centos-swap: 8388 MB, 8388608000 bytes, 16384000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/md0: 20003.3 GB, 20003254435840 bytes, 39068856320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 2621440 bytes
Disk /dev/mapper/centos-home: 177.4 GB, 177385504768 bytes, 346456064 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Also, for debugging purposes, here is the lsblk command output:
[root@storageserver ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 223.1G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 7.8G 0 lvm [SWAP]
└─centos-home 253:2 0 165.2G 0 lvm /home
sdb 8:16 0 3.7T 0 disk
└─sdb1 8:17 0 3.7T 0 part
└─md0 9:0 0 18.2T 0 raid5 /mnt/wsraid
sdc 8:32 0 3.7T 0 disk
└─sdc1 8:33 0 3.7T 0 part
└─md0 9:0 0 18.2T 0 raid5 /mnt/wsraid
sdd 8:48 0 3.7T 0 disk
└─sdd1 8:49 0 3.7T 0 part
└─md0 9:0 0 18.2T 0 raid5 /mnt/wsraid
sde 8:64 0 3.7T 0 disk
└─sde1 8:65 0 3.7T 0 part
└─md0 9:0 0 18.2T 0 raid5 /mnt/wsraid
sdf 8:80 0 3.7T 0 disk
└─sdf1 8:81 0 3.7T 0 part
└─md0 9:0 0 18.2T 0 raid5 /mnt/wsraid
sdg 8:96 0 3.7T 0 disk
└─sdg1 8:97 0 3.7T 0 part
└─md0 9:0 0 18.2T 0 raid5 /mnt/wsraid
I am using the ext4 filesystem on the raid5 array; please see the fstab output:
[root@storageserver ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri May 19 15:00:43 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=7fb7d6be-1fbe-4567-b895-2045fb0023f6 /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
## mounted raid device
/dev/md0 /mnt/wsraid ext4 defaults 0 2
Here is the blkid command output:
[root@storageserver ~]# blkid
/dev/mapper/centos-root: UUID="40c11708-98ef-4558-9f87-0423141ee60d" TYPE="xfs"
/dev/sda2: UUID="kGDekH-73be-Evv0-rF4j-M9kE-fsvC-ezKi3e" TYPE="LVM2_member"
/dev/sda1: UUID="7fb7d6be-1fbe-4567-b895-2045fb0023f6" TYPE="xfs"
/dev/sdb1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="bd69f6f1-42c0-8072-ae15-365572fc49b0" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="9370d19b-81e1-4e0d-bd3c-b08f5974dd7f"
/dev/sdd1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="37f5bd25-537e-b49a-a768-db12eb6f5f07" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="d89df7a3-2b3e-42a2-ad28-9c6c2dbe6275"
/dev/sdc1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="7c08e4f6-78dc-dfc2-37f5-d7ea681b0dde" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="17a4d434-0883-430f-a0e9-7d8236bc9cc0"
/dev/sde1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="5687ab0f-0c0f-25b8-e527-03a28463577f" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="91a34c90-3ed3-4418-abf4-bcfefcd278ae"
/dev/sdf1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="765d94c6-d6ad-3a0a-ad95-2f36d14e5ee5" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="70091b1c-052b-49fd-9fa5-d4512c26105f"
/dev/sdg1: UUID="988759d7-91d5-2c10-f4e3-96562129ab64" UUID_SUB="07680433-8091-fca8-7daf-70218e70c329" LABEL="localhost.localdomain:0" TYPE="linux_raid_member" PARTUUID="faf0474b-0cd2-45e5-b5a5-b24b43e8cfa6"
/dev/mapper/centos-swap: UUID="80883a57-2a98-4530-a9a3-717a9573d71a" TYPE="swap"
/dev/md0: UUID="ddc6f4a7-b22a-483f-8e51-6f1280a4d4b7" TYPE="ext4"
/dev/mapper/centos-home: UUID="287a27dd-9eaf-42fa-a8f0-a4a618a90ad7" TYPE="xfs"
Answer 1
Your ext4 filesystem is actually smaller than the block device (/dev/md0) that contains it.
Normally I would just say: run resize2fs /dev/md0 to grow it to the size of the containing block device, but in your case that will not work, because an ext4 filesystem cannot be that large.
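A quick way to confirm that size mismatch, assuming the usual util-linux and e2fsprogs tools are available, is to compare what the block device reports with what the ext4 superblock reports:
# size of the md0 block device, in bytes (should be about 20 TB)
blockdev --getsize64 /dev/md0
# size the ext4 filesystem thinks it has: Block count x Block size from the superblock
tune2fs -l /dev/md0 | grep -Ei 'block count|block size'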
You will need to recreate the filesystem with a filesystem type that can actually be that large (roughly 18 TiB / 20 TB), for example xfs.
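A rough sketch of that route, to be run only after every file on /mnt/wsraid has been copied somewhere safe, since mkfs destroys all existing data (the exact commands below are an assumption, not part of the original answer):
umount /mnt/wsraid
mkfs.xfs -f /dev/md0         # DESTRUCTIVE: -f overwrites the old ext4 signature
# edit /etc/fstab and change the type of the /dev/md0 entry from ext4 to xfs
mount /mnt/wsraid
df -h /mnt/wsraid            # should now show the full ~18 TiB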
You can change the filesystem type without losing data using the fstransform utility, which is packaged in the EPEL repository. Note, however, that it needs some free space to work with; since your filesystem is already completely full, you should first run resize2fs as described above to give it some room.
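A minimal sketch of that approach on CentOS 7 (the package names and the unmount step are assumptions; read the fstransform documentation before running it against real data):
yum install epel-release     # enable the EPEL repository if it is not enabled yet
yum install fstransform
resize2fs /dev/md0           # grow the ext4 fs as far as it will go, to free some working space
umount /mnt/wsraid           # the filesystem should not be in use during the conversion
fstransform /dev/md0 xfs     # convert in place from ext4 to xfs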