I bought a dedicated server from a hosting company for a forum I'm about to launch, and the drives are configured as 4 x 64GB SSDs in a software RAID 10 array.
I expected the total usable space to be about 128GB, but for some reason I have just under 110GB available.
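That expectation comes from the usual RAID 10 arithmetic, where half the raw capacity is usable:

```
echo $(( 4 * 64 / 2 ))   # 4 drives x 64 GB in RAID 10 -> 128 GB expected
```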
I asked the provider about the missing space and they said it was just a limitation of using software RAID, but after a lot of research I can't find anything that supports this.
The system is CentOS 6.6, kernel 2.6.32-042stab108.2.
Here is the data from the server:
All 4 disks are used in their entirety to create the RAID10 array on them, and the total available space may differ between software RAID and hardware RAID. You can see the disk and RAID status below.
[root@ ~]# cat /proc/mdstat
Personalities : [raid10] [raid1]
md0 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
511936 blocks super 1.0 [4/4] [UUUU]
md2 : active raid10 sda3[0] sdc3[2] sdd3[3] sdb3[1]
89231360 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md1 : active raid1 sdc1[2] sdd1[3] sdb1[1] sda1[0]
16375808 blocks super 1.1 [4/4] [UUUU]
unused devices: <none>
[root@ ~]# fdisk -l
Disk /dev/sda: 63.0 GB, 63023063040 bytes
255 heads, 63 sectors/track, 7662 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002023a
Device Boot Start End Blocks Id System
/dev/sda1 1 2040 16384000 fd Linux raid autodetect
/dev/sda2 * 2040 2104 512000 fd Linux raid autodetect
/dev/sda3 2104 7663 44648448 fd Linux raid autodetect
Disk /dev/sdc: 63.0 GB, 63023063040 bytes
255 heads, 63 sectors/track, 7662 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004f2ab
Device Boot Start End Blocks Id System
/dev/sdc1 1 2040 16384000 fd Linux raid autodetect
/dev/sdc2 * 2040 2104 512000 fd Linux raid autodetect
/dev/sdc3 2104 7663 44648448 fd Linux raid autodetect
Disk /dev/sdb: 63.0 GB, 63023063040 bytes
255 heads, 63 sectors/track, 7662 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000369e8
Device Boot Start End Blocks Id System
/dev/sdb1 1 2040 16384000 fd Linux raid autodetect
/dev/sdb2 * 2040 2104 512000 fd Linux raid autodetect
/dev/sdb3 2104 7663 44648448 fd Linux raid autodetect
Disk /dev/sdd: 63.0 GB, 63023063040 bytes
255 heads, 63 sectors/track, 7662 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00069960
Device Boot Start End Blocks Id System
/dev/sdd1 1 2040 16384000 fd Linux raid autodetect
/dev/sdd2 * 2040 2104 512000 fd Linux raid autodetect
/dev/sdd3 2104 7663 44648448 fd Linux raid autodetect
Disk /dev/md1: 16.8 GB, 16768827392 bytes
2 heads, 4 sectors/track, 4093952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md2: 91.4 GB, 91372912640 bytes
2 heads, 4 sectors/track, 22307840 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
Disk /dev/md0: 524 MB, 524222464 bytes
2 heads, 4 sectors/track, 127984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
***** New data, as requested by @frostschutz *****
[root@host ~]# mdadm --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:
Version : 1.0
Creation Time : Fri Jun 5 05:05:44 2015
Raid Level : raid1
Array Size : 511936 (500.02 MiB 524.22 MB)
Used Dev Size : 511936 (500.02 MiB 524.22 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sat Jun 6 07:19:42 2015
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Name : host.domain.com:0 (local to host host.domain.com)
UUID : f9e0a41c:2c27ffe7:146a6a71:6bc894fe
Events : 28
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2
3 8 50 3 active sync /dev/sdd2
/dev/md1:
Version : 1.1
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid1
Array Size : 16375808 (15.62 GiB 16.77 GB)
Used Dev Size : 16375808 (15.62 GiB 16.77 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sat Jun 6 07:19:42 2015
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Name : host.domain.com:1 (local to host host.domain.com)
UUID : ae2e2e67:dd9728e7:2d290b6f:78ecec69
Events : 19
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
/dev/md2:
Version : 1.1
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid10
Array Size : 89231360 (85.10 GiB 91.37 GB)
Used Dev Size : 44615680 (42.55 GiB 45.69 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jun 6 22:04:51 2015
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : host.domain.com:2 (local to host host.domain.com)
UUID : 61d27050:e5608a68:84646e94:aa6d0d0c
Events : 96
Number Major Minor RaidDevice State
0 8 3 0 active sync set-A /dev/sda3
1 8 19 1 active sync set-B /dev/sdb3
2 8 35 2 active sync set-A /dev/sdc3
3 8 51 3 active sync set-B /dev/sdd3
***** Second request from @frostschutz *****
[root@host ~]# mdadm --examine /dev/sd*
/dev/sda:
MBR Magic : aa55
Partition[0] : 32768000 sectors at 2048 (type fd)
Partition[1] : 1024000 sectors at 32770048 (type fd)
Partition[2] : 89296896 sectors at 33794048 (type fd)
/dev/sda1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : ae2e2e67:dd9728e7:2d290b6f:78ecec69
Name : host.domain.com:1 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 32751616 (15.62 GiB 16.77 GB)
Array Size : 16375808 (15.62 GiB 16.77 GB)
Data Offset : 16384 sectors
Super Offset : 0 sectors
Unused Space : before=16304 sectors, after=0 sectors
State : clean
Device UUID : 942ee69a:8ab0df94:d35c9128:8bfb7655
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 48294779 - correct
Events : 19
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : f9e0a41c:2c27ffe7:146a6a71:6bc894fe
Name : host.domain.com:0 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:44 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 1023968 (500.07 MiB 524.27 MB)
Array Size : 511936 (500.02 MiB 524.22 MB)
Used Dev Size : 1023872 (500.02 MiB 524.22 MB)
Super Offset : 1023984 sectors
Unused Space : before=0 sectors, after=104 sectors
State : clean
Device UUID : f1c6b25d:693eee98:2addb450:2aeb5c37
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 62a85b35 - correct
Events : 28
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 61d27050:e5608a68:84646e94:aa6d0d0c
Name : host.domain.com:2 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 89231360 (42.55 GiB 45.69 GB)
Array Size : 89231360 (85.10 GiB 91.37 GB)
Data Offset : 65536 sectors
Super Offset : 0 sectors
Unused Space : before=65456 sectors, after=0 sectors
State : clean
Device UUID : ba051f9d:ad1c1656:7fdc5bf1:397728fc
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Jun 6 22:25:13 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b3fae8fc - correct
Events : 96
Layout : near=2
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
MBR Magic : aa55
Partition[0] : 32768000 sectors at 2048 (type fd)
Partition[1] : 1024000 sectors at 32770048 (type fd)
Partition[2] : 89296896 sectors at 33794048 (type fd)
/dev/sdb1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : ae2e2e67:dd9728e7:2d290b6f:78ecec69
Name : host.domain.com:1 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 32751616 (15.62 GiB 16.77 GB)
Array Size : 16375808 (15.62 GiB 16.77 GB)
Data Offset : 16384 sectors
Super Offset : 0 sectors
Unused Space : before=16304 sectors, after=0 sectors
State : clean
Device UUID : 8dd698b4:c8ad9ac0:a92fa8b1:be01b26c
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 2de8c5bb - correct
Events : 19
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : f9e0a41c:2c27ffe7:146a6a71:6bc894fe
Name : host.domain.com:0 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:44 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 1023968 (500.07 MiB 524.27 MB)
Array Size : 511936 (500.02 MiB 524.22 MB)
Used Dev Size : 1023872 (500.02 MiB 524.22 MB)
Super Offset : 1023984 sectors
Unused Space : before=0 sectors, after=104 sectors
State : clean
Device UUID : fe45fa24:a9b1b2e1:357894ce:0a25ae78
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 31e5226f - correct
Events : 28
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 61d27050:e5608a68:84646e94:aa6d0d0c
Name : host.domain.com:2 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 89231360 (42.55 GiB 45.69 GB)
Array Size : 89231360 (85.10 GiB 91.37 GB)
Data Offset : 65536 sectors
Super Offset : 0 sectors
Unused Space : before=65456 sectors, after=0 sectors
State : clean
Device UUID : 3dc3c3bb:7d46da83:6bcffa25:71fb51e3
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Jun 6 22:25:13 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 1c2c4774 - correct
Events : 96
Layout : near=2
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 32768000 sectors at 2048 (type fd)
Partition[1] : 1024000 sectors at 32770048 (type fd)
Partition[2] : 89296896 sectors at 33794048 (type fd)
/dev/sdc1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : ae2e2e67:dd9728e7:2d290b6f:78ecec69
Name : host.domain.com:1 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 32751616 (15.62 GiB 16.77 GB)
Array Size : 16375808 (15.62 GiB 16.77 GB)
Data Offset : 16384 sectors
Super Offset : 0 sectors
Unused Space : before=16304 sectors, after=0 sectors
State : clean
Device UUID : 970f9254:c1498c69:ae75f67a:499e294e
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 21997d4e - correct
Events : 19
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : f9e0a41c:2c27ffe7:146a6a71:6bc894fe
Name : host.domain.com:0 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:44 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 1023968 (500.07 MiB 524.27 MB)
Array Size : 511936 (500.02 MiB 524.22 MB)
Used Dev Size : 1023872 (500.02 MiB 524.22 MB)
Super Offset : 1023984 sectors
Unused Space : before=0 sectors, after=104 sectors
State : clean
Device UUID : 1b64b17c:6b32f75a:2ea5e242:7fe886e8
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : e707b1bc - correct
Events : 28
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 61d27050:e5608a68:84646e94:aa6d0d0c
Name : host.domain.com:2 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 89231360 (42.55 GiB 45.69 GB)
Array Size : 89231360 (85.10 GiB 91.37 GB)
Data Offset : 65536 sectors
Super Offset : 0 sectors
Unused Space : before=65456 sectors, after=0 sectors
State : clean
Device UUID : 9d2d00a0:54582922:b6ff7c9f:d7a72861
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Jun 6 22:25:13 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 9610a05c - correct
Events : 96
Layout : near=2
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
MBR Magic : aa55
Partition[0] : 32768000 sectors at 2048 (type fd)
Partition[1] : 1024000 sectors at 32770048 (type fd)
Partition[2] : 89296896 sectors at 33794048 (type fd)
/dev/sdd1:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : ae2e2e67:dd9728e7:2d290b6f:78ecec69
Name : host.domain.com:1 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 32751616 (15.62 GiB 16.77 GB)
Array Size : 16375808 (15.62 GiB 16.77 GB)
Data Offset : 16384 sectors
Super Offset : 0 sectors
Unused Space : before=16304 sectors, after=0 sectors
State : clean
Device UUID : 094af00a:40bf3096:d675d62d:4f77b847
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b10b066d - correct
Events : 19
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : f9e0a41c:2c27ffe7:146a6a71:6bc894fe
Name : host.domain.com:0 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:44 2015
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 1023968 (500.07 MiB 524.27 MB)
Array Size : 511936 (500.02 MiB 524.22 MB)
Used Dev Size : 1023872 (500.02 MiB 524.22 MB)
Super Offset : 1023984 sectors
Unused Space : before=0 sectors, after=104 sectors
State : clean
Device UUID : b7c19d19:a8b7ca21:23c9259b:0ecaaa2f
Update Time : Sat Jun 6 07:19:42 2015
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : ea2e9a19 - correct
Events : 28
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x1
Array UUID : 61d27050:e5608a68:84646e94:aa6d0d0c
Name : host.domain.com:2 (local to host host.domain.com)
Creation Time : Fri Jun 5 05:05:32 2015
Raid Level : raid10
Raid Devices : 4
Avail Dev Size : 89231360 (42.55 GiB 45.69 GB)
Array Size : 89231360 (85.10 GiB 91.37 GB)
Data Offset : 65536 sectors
Super Offset : 0 sectors
Unused Space : before=65456 sectors, after=0 sectors
State : clean
Device UUID : 25151df4:f0c22a6f:fa9d3e43:d40295dd
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Jun 6 22:25:13 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 575cebc3 - correct
Events : 96
Layout : near=2
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
[root@host ~]#
***** Update after the feedback below: *****
Redoing the system (I'm fine starting over from scratch with a fresh box):
**Option 1)**
- Boot : 512 MB on 4 drives in RAID 10 : 1 GB
- Swap : 8 GB on 4 drives in RAID 10 : 16 GB
- / : ~54 GB on 4 drives in RAID 10 : ~108 GB

**Option 2)**
- Boot : 256 MB on 4 drives in RAID 10 : 512 MB
- Swap : 8 GB on 4 drives in RAID 10 : 16 GB
- / : ~55 GB on 4 drives in RAID 10 : ~110 GB
The remaining question is whether Option 2 is sufficient, or whether I should go with Option 1 just to be conservative/safe.
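For reference, the per-line totals above follow the usual RAID 10 arithmetic (usable space = number of drives × partition size / 2), e.g. for the boot partitions:

```
echo $(( 4 * 512 / 2 ))   # Option 1 boot: 1024 MB = 1 GB
echo $(( 4 * 256 / 2 ))   # Option 2 boot: 512 MB
```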
Answer 1
Your drives are not set up entirely as RAID10 devices. Each of your four drives is split into three partitions: one of roughly 16.8GB (I'm using SI GB here), one of 524MB, and one of 45.7GB.

The set of four 16.8GB partitions is assembled into `md1`, a 16.8GB RAID1 device (`md1 : active raid1`; all four partitions are mirrored, so the total capacity is that of a single partition). The set of four 524MB partitions is assembled into `md0`, a 512MB RAID1 device. Finally, the set of four 45.7GB partitions is assembled into `md2`, a 91.4GB RAID10 device.

So your total capacity is 91.4 + 0.5 + 16.8 = 108.7GB, not the 126GB (four 63GB drives in RAID10, i.e. 4 × 63 / 2) you would expect if everything were set up as RAID10 devices.
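You can verify this from the block counts in `/proc/mdstat` above (one block there is 1KiB):

```
echo $(( (511936 + 16375808 + 89231360) * 1024 ))
# -> 108665962496 bytes, i.e. about 108.7 GB (SI)
```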
It is possible to convert a RAID1 array with four devices to RAID10, and `mdadm` supports this indirectly (as frostschutz pointed out): it can "grow" a RAID1 array into a RAID0 array, and then grow the RAID0 array into a RAID10 array. (Read the "GROW MODE" section of the `mdadm(8)` man page.) If you convert the two RAID1 arrays into RAID10 arrays, you end up with a total capacity of 91.4 + 0.5×2 + 16.8×2 = 91.4 + 1 + 33.6 = 126GB.
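As a rough sketch only (the exact flags, and which mirror legs `mdadm` keeps at each step, depend on your `mdadm` version; read `mdadm(8)` and back everything up before attempting anything like this), the two-step path for `md1` would look something like:

```
# Step 1: convert the 4-way RAID1 mirror to RAID0
# (assumption: mdadm keeps one leg and drops the other mirrors).
mdadm --grow /dev/md1 --level=0

# Step 2: convert the RAID0 to a 4-device near-2 RAID10,
# re-adding the partitions freed in step 1 as the new mirror halves.
mdadm --grow /dev/md1 --level=10 --raid-devices=4 \
      --add /dev/sdb1 /dev/sdc1 /dev/sdd1
```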
Of your options I would pick a variant of Option 1:
- Boot : 512 MB on 4 drives in RAID 1 : 512 MB
- Swap : 8 GB on 4 drives in RAID 10 : 16 GB
- / : ~54 GB on 4 drives in RAID 10 : ~108 GB

simply because I don't know how well `grub` (or whatever bootloader you use) handles RAID10, and because a RAID1 partition can, in a pinch, be used directly without `md` support.
Answer 2
Two things are causing this confusion.

SSDs are usually advertised at their "raw" capacity, which includes all the over-provisioning set aside for failing sectors. So a 64GB device may only have 60GB actually usable.

Disks are specified in 10^n units, so 64GB is 64 × 10^9 bytes. Memory, and most users' expectations, are based on 2^m units, where 64GiB is 2^36 bytes.
If you look at the raw disk size reported by `fdisk`, 63023063040 bytes, that is roughly 63GB, or 58GiB.
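Those two figures are just the same byte count in different units:

```
echo $(( 63023063040 / 10**9 ))   # -> 63 (SI gigabytes)
echo $(( 63023063040 / 2**30 ))   # -> 58 (binary gibibytes)
```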
On top of that, partitions usually don't start at the very first sector of the disk, and they are usually rounded to convenient boundaries (with cylinder-based partitioning it will be a whole number of cylinders; for an SSD it is better to partition by blocks or sectors). Then there is filesystem overhead, which is why `df -h` on a full disk never quite matches the size claimed by `fdisk`.
Answer 3
Stephen Kitt has already answered your question correctly, but since you mentioned "limitations of software RAID": there is one more thing that can waste a little space, and that is the size of the RAID metadata.

In theory RAID metadata is tiny, a few kilobytes at most. In practice `mdadm` reserves quite a lot of space for it. In your case that is 16384 sectors on `/dev/sd[abcd]1` and 65536 sectors on `/dev/sd[abcd]3`, so 40MiB is lost on each disk. Current versions of `mdadm` reserve as much as 262144 sectors. With multiple RAID partitions, and many disks in a given RAID, it adds up; in my own setup, several gigabytes go to the data offsets alone.
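The 40MiB figure follows directly from the two data offsets shown above (in 512-byte sectors):

```
echo $(( (16384 + 65536) * 512 / 2**20 ))   # -> 40 (MiB reserved per disk)
```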
The only reason it uses such a large offset is to make certain grow operations more efficient, and a bit of the offset is consumed every time you grow the RAID. If you don't need this feature, you can specify `--data-offset=2048` when creating the RAID; the metadata will then use a 1MiB offset, which still keeps the data aligned.
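For illustration, a sketch of how the `md2` array above might have been created with the smaller offset (`--data-offset` takes 512-byte sectors, so 2048 sectors = 1MiB; check that your `mdadm` version supports the option before relying on this):

```
mdadm --create /dev/md2 --level=10 --layout=n2 --chunk=512 \
      --raid-devices=4 --data-offset=2048 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
```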