I am trying to create a RAID 5 array from four disks:
Disk /dev/sdc: 8001.6 GB, 8001563222016 bytes
/dev/sdc1 2048 4294967294 2147482623+ fd Linux raid autodetect
Disk /dev/sdb: 8001.6 GB, 8001563222016 bytes
/dev/sdb1 2048 4294967294 2147482623+ fd Linux raid autodetect
Disk /dev/sdd: 24003.1 GB, 24003062267904 bytes
/dev/sdd1 2048 4294967294 2147482623+ fd Linux raid autodetect
Disk /dev/sde: 8001.6 GB, 8001563222016 bytes
/dev/sde1 2048 4294967294 2147482623+ fd Linux raid autodetect
However, after creating it I only got 6 TB of space (roughly the size of one of my disks):
/dev/md0 ext4 6.0T 184M 5.7T 1% /mnt/raid5
Here is some additional information from my creation process.
Output of mdadm -E /dev/sd[b-e]1:
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
Name : node7:0 (local to host node7)
Creation Time : Fri Sep 7 09:16:42 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 2fcb3346:9ed69eab:64c6f851:0bcc39c4
Update Time : Fri Sep 7 13:17:38 2018
Checksum : c701ff7e - correct
Events : 18
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
Name : node7:0 (local to host node7)
Creation Time : Fri Sep 7 09:16:42 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 6f13c9f0:de2d4c6f:cbac6b87:67bc483e
Update Time : Fri Sep 7 13:17:38 2018
Checksum : e4c675c2 - correct
Events : 18
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
Name : node7:0 (local to host node7)
Creation Time : Fri Sep 7 09:16:42 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 4dab38e6:94c5052b:06d6b6b0:34a41472
Update Time : Fri Sep 7 13:17:38 2018
Checksum : f306b65f - correct
Events : 18
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing)
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
Name : node7:0 (local to host node7)
Creation Time : Fri Sep 7 09:16:42 2018
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 4294703103 (2047.87 GiB 2198.89 GB)
Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
Used Dev Size : 4294702080 (2047.87 GiB 2198.89 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b04d152e:0448fe56:3b22a2d6:b2504d26
Update Time : Fri Sep 7 13:17:38 2018
Checksum : 40ffd3e7 - correct
Events : 18
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing)
Output of mdadm --detail /dev/md0:
/dev/md0:
Version : 1.2
Creation Time : Fri Sep 7 09:16:42 2018
Raid Level : raid5
Array Size : 6442053120 (6143.62 GiB 6596.66 GB)
Used Dev Size : 2147351040 (2047.87 GiB 2198.89 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Sep 7 13:17:38 2018
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : node7:0 (local to host node7)
UUID : 8953b4f1:61212c46:b0a63144:25eb4a7d
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
Output of mkfs.ext4 /dev/md0:
mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
402628608 inodes, 1610513280 blocks
80525664 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
49149 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Then mkdir /mnt/raid5 and mount /dev/md0 /mnt/raid5/.
Answer 1
Answer 2
6 TB is what you get from (4 - 1) * 2 TB, where 4 is your number of devices, minus 1 for parity, and 2 TB is the size of the partitions you appear to have.
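As a quick sanity check, that matches the numbers mdadm itself reports above (Used Dev Size and Array Size are given in KiB there):

echo $(( (4 - 1) * 2147351040 ))    # -> 6442053120 KiB, the Array Size reported by mdadm --detail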
Assuming the first output comes from the fdisk utility, those fields are probably:
partition name start end length type
/dev/sdc1 2048 4294967294 2147482623+ fd Linux raid autodetect
In 512-byte sectors, that is 2 TB from start to end. (The + at the end of the length field seems to hint that the actual length is larger than what is shown, so I ignored that field.) My fdisk utility also shows partition sizes in human-readable units, but 2 TB is the maximum length an old-style MBR partition table can hold, so check that you are not using one instead of GPT.
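One way to check which partition table type each disk carries is something like the following; parted should report "Partition Table: msdos" for MBR or "gpt", and the lsblk variant needs a reasonably recent util-linux for the PTTYPE column:

parted /dev/sdd print
lsblk -o NAME,SIZE,PTTYPE /dev/sd[bcde]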
Some older versions of fdisk may not understand GPT partition tables, so you may need to use another tool (or get a newer version). In fact, you do not even need partitions at all: you can just run mdadm directly on /dev/sd[bcde]. But note that, because of the RAID-5 layout, the smallest drive (or partition) sets the size of the array, so a single larger disk is partly wasted.
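If the array does not yet hold any data you need to keep, a rough sketch of rebuilding it on the whole disks (or on full-size GPT partitions) could look like this. It is destructive and only a sketch based on the device names in your output, so double-check everything before running it:

umount /mnt/raid5
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd[bcde]1      # wipe the old RAID metadata from the 2 TB partitions

# Option A: build the array directly on the whole disks
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

# Option B: create full-size GPT partitions first (repeat for sdc, sdd, sde)
parted /dev/sdb -- mklabel gpt mkpart primary 0% 100% set 1 raid on
# then: mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1

mkfs.ext4 /dev/md0

Also note that with three 8 TB members the usable array will be roughly 24 TB, which is beyond the 16 TiB that the old mke2fs 1.41 shown in your output can format as ext4, so you will likely need a newer e2fsprogs (or a different filesystem) for the final step.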