Assembled two arrays with identical disk characteristics using mdadm, but the final sizes differ slightly

I have a question about the size of arrays created by mdadm. First, some background: I recently upgraded from Ubuntu Server 18.04 to 20.04. I run several RAID5 arrays and one RAID10, and two of the RAID5 arrays are the subject of this question. Each has four active "devices" (actually partitions) plus one spare; they are named md5 and md6. After the upgrade one disk failed, but md5 and md6 behaved differently: md5 activated its spare and rebuilt itself, while md6 showed one device as removed and never activated its spare, so it had to be recreated by hand.

To summarize: md5 and md6 sit on the same disks and their member partitions are identical in size. Before the upgrade and the disk failure, md5 and md6 were also the same size. After the upgrade and the failure, md6 had to be recreated from scratch and ended up a slightly different size. md5 was created under Server 18.04; md6 was recreated under Server 20.04.

Here are the details.

Output of lsblk --fs:

sda
├─sda1
├─sda2  linux_raid_member (none):5       3c2fa5c0-42a9-e54c-7487-7dbc38986eaf
│ └─md5 ext4              FSROOT1        446d8ccf-08ea-4300-8d9e-72cf4606966a  221.6G     5% /
├─sda3  linux_raid_member salto-server:6 fbc8430b-64d7-8d34-b54f-6c4626505783
│ └─md6 ext4                             acb4b5ec-7ad6-476c-951e-3f12e6a2b3cf
├─sda4  linux_raid_member ubuntu:7       a6a7101e-539e-1310-1f8e-1d6af62327a6
│ └─md7 ext4              FDDATA1        9c42781d-cb8a-4061-ac43-13fbf24f40e3    1.6T    35% /mnt/r5d2
├─sda5  linux_raid_member salto-server:4 a7ef9f85-e156-fc26-1aff-66d7eec302c9
│ └─md4 swap              SWAP           09a58a47-0ca0-452d-ac72-d09c3e198c11                [SWAP]
└─sda6  linux_raid_member ubuntu:8       9f6decc2-851b-6c29-ac65-4940b7a4d4c5
  └─md8 ext4              FDDATA2        6c8dc3b0-3e23-4268-8b33-bb6d106b0d44    1.5T    64% /mnt/r5d3
sdb
├─sdb1
├─sdb2  linux_raid_member (none):5       3c2fa5c0-42a9-e54c-7487-7dbc38986eaf
│ └─md5 ext4              FSROOT1        446d8ccf-08ea-4300-8d9e-72cf4606966a  221.6G     5% /
├─sdb3  linux_raid_member salto-server:6 fbc8430b-64d7-8d34-b54f-6c4626505783
│ └─md6 ext4                             acb4b5ec-7ad6-476c-951e-3f12e6a2b3cf
├─sdb4  linux_raid_member ubuntu:7       a6a7101e-539e-1310-1f8e-1d6af62327a6
│ └─md7 ext4              FDDATA1        9c42781d-cb8a-4061-ac43-13fbf24f40e3    1.6T    35% /mnt/r5d2
├─sdb5  linux_raid_member salto-server:4 a7ef9f85-e156-fc26-1aff-66d7eec302c9
│ └─md4 swap              SWAP           09a58a47-0ca0-452d-ac72-d09c3e198c11                [SWAP]
└─sdb6  linux_raid_member ubuntu:8       9f6decc2-851b-6c29-ac65-4940b7a4d4c5
  └─md8 ext4              FDDATA2        6c8dc3b0-3e23-4268-8b33-bb6d106b0d44    1.5T    64% /mnt/r5d3
sdc
├─sdc1
├─sdc2  linux_raid_member (none):5       3c2fa5c0-42a9-e54c-7487-7dbc38986eaf
│ └─md5 ext4              FSROOT1        446d8ccf-08ea-4300-8d9e-72cf4606966a  221.6G     5% /
├─sdc3  linux_raid_member salto-server:6 fbc8430b-64d7-8d34-b54f-6c4626505783
│ └─md6 ext4                             acb4b5ec-7ad6-476c-951e-3f12e6a2b3cf
├─sdc4  linux_raid_member ubuntu:7       a6a7101e-539e-1310-1f8e-1d6af62327a6
│ └─md7 ext4              FDDATA1        9c42781d-cb8a-4061-ac43-13fbf24f40e3    1.6T    35% /mnt/r5d2
├─sdc5  linux_raid_member salto-server:4 a7ef9f85-e156-fc26-1aff-66d7eec302c9
│ └─md4 swap              SWAP           09a58a47-0ca0-452d-ac72-d09c3e198c11                [SWAP]
└─sdc6  linux_raid_member ubuntu:8       9f6decc2-851b-6c29-ac65-4940b7a4d4c5
  └─md8 ext4              FDDATA2        6c8dc3b0-3e23-4268-8b33-bb6d106b0d44    1.5T    64% /mnt/r5d3
sdd
├─sdd1
├─sdd2  linux_raid_member (none):5       3c2fa5c0-42a9-e54c-7487-7dbc38986eaf
│ └─md5 ext4              FSROOT1        446d8ccf-08ea-4300-8d9e-72cf4606966a  221.6G     5% /
├─sdd3  linux_raid_member salto-server:6 fbc8430b-64d7-8d34-b54f-6c4626505783
│ └─md6 ext4                             acb4b5ec-7ad6-476c-951e-3f12e6a2b3cf
├─sdd4  linux_raid_member ubuntu:7       a6a7101e-539e-1310-1f8e-1d6af62327a6
│ └─md7 ext4              FDDATA1        9c42781d-cb8a-4061-ac43-13fbf24f40e3    1.6T    35% /mnt/r5d2
├─sdd5  linux_raid_member salto-server:4 a7ef9f85-e156-fc26-1aff-66d7eec302c9
│ └─md4 swap              SWAP           09a58a47-0ca0-452d-ac72-d09c3e198c11                [SWAP]
└─sdd6  linux_raid_member ubuntu:8       9f6decc2-851b-6c29-ac65-4940b7a4d4c5
  └─md8 ext4              FDDATA2        6c8dc3b0-3e23-4268-8b33-bb6d106b0d44    1.5T    64% /mnt/r5d3

Here is how the arrays are put together.

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid10 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      9754624 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md6 : active raid5 sda3[0] sdd3[4] sdc3[2] sdb3[1]
      263473152 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md8 : active raid5 sdd6[6] sda6[0] sdc6[5] sdb6[4]
      5318710272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

md7 : active raid5 sdd4[6] sda4[0] sdc4[5] sdb4[4]
      2929293312 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 3/8 pages [12KB], 65536KB chunk

md5 : active raid5 sdd2[6] sda2[0] sdc2[5] sdb2[4]
      263476224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

Note that md5 and md6 already report different block counts here.
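
The block counts point at the md metadata layer rather than at the partitions. One way to narrow it down (a sketch, assuming the member names from the lsblk listing above) is to compare the 1.2 superblock of one md5 member with that of one md6 member; one common cause is that newer mdadm releases reserve a different default Data Offset at the start of each member, and that reserved space is subtracted from the usable size:

sudo mdadm --examine /dev/sda2 | grep -E 'Offset|Dev Size'   # member of md5
sudo mdadm --examine /dev/sda3 | grep -E 'Offset|Dev Size'   # member of md6

If the Data Offset reported for sda3 is larger than the one for sda2, that alone would explain the smaller array: the offset is skipped on every member before data is striped.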

Here are the sizes of the member devices:

sudo fdisk -l /dev/sd[a-d][2,3]

Disk /dev/sda2: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda3: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb2: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb3: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc2: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc3: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd2: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd3: 83.84 GiB, 90000326656 bytes, 175781888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
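
All eight member partitions report exactly the same geometry (90000326656 bytes each), so the difference cannot come from the partitioning; it has to originate inside the md layer itself. A quick scripted check of the same sizes, assuming the same device names:

for d in /dev/sd[a-d][23]; do echo "$d $(sudo blockdev --getsize64 "$d")"; done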

For completeness, here is the fdisk output on md5 and md6 as well:

fdisk -l /dev/md[5,6]

Disk /dev/md5: 251.28 GiB, 269799653376 bytes, 526952448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes


Disk /dev/md6: 251.27 GiB, 269796507648 bytes, 526946304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
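
The delta is small and suspiciously round. A quick sanity check with plain shell arithmetic:

echo $((269799653376 - 269796507648))   # 3145728 bytes = 3 MiB total
echo $((3145728 / 3))                   # 1048576 bytes = 1 MiB per member

A 4-device RAID5 stripes data across 3 data-bearing members, so a 3 MiB difference at the array level corresponds to exactly 1 MiB less usable space on each member. The mdstat figures agree: 263476224 - 263473152 = 3072 KiB-blocks, i.e. the same 3 MiB. That is precisely the signature of a larger per-member data offset (or a reserved bad-block-log area).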

Maybe I am missing something, or some option is hidden somewhere, but I am certain both arrays were created with the same command. It looks as though different versions of mdadm create arrays of slightly different sizes.

Is there a way to find out where the difference comes from, and perhaps to rebuild md6 so that it matches md5? I cannot touch md5, since it holds /, but md6 is a backup and I can do whatever I like with it.
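
If the superblock comparison does show a larger Data Offset on the md6 members, recreating md6 with a matching offset should reproduce md5's size. The sketch below is hedged: it assumes your mdadm accepts --data-offset at creation time (the option applies to 1.x metadata and appeared around mdadm 3.3), the 128M value is a placeholder, and recreating the array destroys md6's current contents:

# DANGER: wipes md6. Only acceptable because md6 is a rebuildable backup.
sudo mdadm --stop /dev/md6
sudo mdadm --create /dev/md6 --level=5 --raid-devices=4 \
     --metadata=1.2 --chunk=512 \
     --data-offset=128M \
     /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# 128M is a placeholder: substitute the Data Offset that
# 'mdadm --examine /dev/sda2' reports for an md5 member. Per mdadm(8),
# a bare value is taken as kilobytes; suffixes K/M/G/S are accepted,
# with 'S' meaning sectors.

Once the array has resynced, a fresh filesystem (mkfs.ext4 /dev/md6) completes the rebuild.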

Thanks in advance for your help.
