I have 4 drives in a btrfs RAID1 filesystem. I recently replaced a failed drive with a larger one, and the new 12 TB drive seems to be underused. I ran a balance, expecting data to migrate from the other 3 drives onto the new drive. Instead, the data appears to be moving the wrong way. Can someone explain this odd behaviour?
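For reference, I did not keep a transcript of the exact commands. As best I remember, the swap and the balance were done roughly like this (the device node, the use of device add/remove rather than btrfs replace, and the --full-balance form are my recollection, not a copy-paste):

# btrfs device add /dev/sde /av
# btrfs device remove missing /av
# btrfs balance start --full-balance /av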
# btrfs fi usage /av
Overall:
    Device size:                  29.11TiB
    Device allocated:             11.32TiB
    Device unallocated:           17.79TiB
    Device missing:                  0.00B
    Used:                         10.90TiB
    Free (estimated):              9.10TiB      (min: 9.10TiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID1: Size:5.65TiB, Used:5.44TiB (96.32%)
   /dev/sdh        5.13TiB
   /dev/sdf        3.33TiB
   /dev/sdg        2.54TiB
   /dev/sde      310.00GiB

Metadata,RAID1: Size:9.00GiB, Used:7.28GiB (80.91%)
   /dev/sdh        6.00GiB
   /dev/sdf        3.00GiB
   /dev/sdg        6.00GiB
   /dev/sde        3.00GiB

System,RAID1: Size:32.00MiB, Used:976.00KiB (2.98%)
   /dev/sdg       32.00MiB
   /dev/sde       32.00MiB

Unallocated:
   /dev/sdh        2.14TiB
   /dev/sdf        2.13TiB
   /dev/sdg        2.91TiB
   /dev/sde       10.61TiB
During the unfiltered balance I started monitoring progress, as shown below:
# btrfs fi show /av
Label: 'av'  uuid: a0e1cb85-1b4f-4657-971d-ba1d8c1bb772
        Total devices 4 FS bytes used 5.45TiB
        devid    4 size 7.28TiB used 5.13TiB path /dev/sdh
        devid    5 size 5.46TiB used 3.33TiB path /dev/sdf
        devid    6 size 5.46TiB used 2.57TiB path /dev/sdg
        devid    7 size 10.91TiB used 330.06GiB path /dev/sde

# btrfs fi show
Label: 'av'  uuid: a0e1cb85-1b4f-4657-971d-ba1d8c1bb772
        Total devices 4 FS bytes used 5.45TiB
        devid    4 size 7.28TiB used 5.13TiB path /dev/sdh
        devid    5 size 5.46TiB used 3.33TiB path /dev/sdf
        devid    6 size 5.46TiB used 2.56TiB path /dev/sdg
        devid    7 size 10.91TiB used 320.03GiB path /dev/sde

# btrfs fi show
Label: 'av'  uuid: a0e1cb85-1b4f-4657-971d-ba1d8c1bb772
        Total devices 4 FS bytes used 5.45TiB
        devid    4 size 7.28TiB used 5.13TiB path /dev/sdh
        devid    5 size 5.46TiB used 3.33TiB path /dev/sdf
        devid    6 size 5.46TiB used 2.55TiB path /dev/sdg
        devid    7 size 10.91TiB used 313.03GiB path /dev/sde
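(If it helps, I can also post btrfs balance status output; while the balance runs, a loop along these lines shows the status and the per-device usage together. This is just a sketch of how one can watch it, not an exact transcript:)

# watch -n 60 'btrfs balance status /av; btrfs fi show /av'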
Instead of gaining new data, the 12 TB drive (devid 7) is actually shrinking. What is going on here? And how can I get the data spread more evenly across all the drives?
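So far the plain, unfiltered balance is the only thing I have tried. If a filtered balance is the right tool, my guess would be something like the line below, but I do not know whether usage filters have any effect on which devices the new chunks land on:

# btrfs balance start -dusage=90 -musage=90 /av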
Additional information
# btrfs --version
btrfs-progs v5.4.1
I don't remember which version of btrfs I used to create the volume or to add the devices.