I have an existing 6x16TB BTRFS raid10 and would like to know whether I can extend it with another 2 or 4 16TB HDDs.
I know this is impossible with traditional software/fake/hardware RAID, but BTRFS's rebalance and device-add features give me hope.
- Is it possible?
- What are the commands to achieve it? And,
- Will I retain single-drive-failure tolerance after the final resize?
Answer 1
Yes. I just spun up an Ubuntu 22.04 VM with 6x1GB virtual HDDs. Added 4 of them to start with:
### Created the RAID10 with 4 drives
root@ubuntu-for-devops:~# mkfs.btrfs -L data -d raid10 -m raid10 -f /dev/sdc /dev/sdd /dev/sde /dev/sdf
### mounted the raid10
root@ubuntu-for-devops:~# mount /dev/disk/by-label/data /mnt/
### checked space and created a random 100MB file on the raid10
root@ubuntu-for-devops:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc btrfs 2.0G 104M 1.8G 6% /mnt
root@ubuntu-for-devops:~# dd if=/dev/urandom of=/mnt/somefile bs=1M count=100
### Added two more devices 1GB each
root@ubuntu-for-devops:~# btrfs device add /dev/sda /dev/sdg /mnt/
### Turned out they are not yet part of the RAID, though the reported space grew from 2GB to 3GB (not 4GB: RAID10 keeps two copies, so the 2x1GB of added raw space yields only about 1GB of usable space)
root@ubuntu-for-devops:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc btrfs 3.0G 104M 2.8G 4% /mnt
root@ubuntu-for-devops:~# btrfs device usage /mnt/
/dev/sdc, ID: 1
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/4: 102.38MiB
Metadata,RAID10/4: 64.00MiB
System,RAID10/4: 8.00MiB
Unallocated: 849.62MiB
/dev/sdd, ID: 2
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/4: 102.38MiB
Metadata,RAID10/4: 64.00MiB
System,RAID10/4: 8.00MiB
Unallocated: 849.62MiB
/dev/sde, ID: 3
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/4: 102.38MiB
Metadata,RAID10/4: 64.00MiB
System,RAID10/4: 8.00MiB
Unallocated: 849.62MiB
/dev/sdf, ID: 4
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/4: 102.38MiB
Metadata,RAID10/4: 64.00MiB
System,RAID10/4: 8.00MiB
Unallocated: 849.62MiB
/dev/sda, ID: 5
Device size: 1.00GiB
Device slack: 0.00B
Unallocated: 1.00GiB
/dev/sdg, ID: 6
Device size: 1.00GiB
Device slack: 0.00B
Unallocated: 1.00GiB
### Had to rebalance the raid10's metadata and data
root@ubuntu-for-devops:~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
Done, had to relocate 3 out of 3 chunks
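### Note: this tiny test balance finishes instantly; on a real multi-TB array it can take many
### hours, and its progress can be checked from another shell with "btrfs balance status /mnt"
### (not shown in this test).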
root@ubuntu-for-devops:~# btrfs device usage /mnt/
/dev/sdc, ID: 1
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/6: 208.00MiB
Metadata,RAID10/6: 96.00MiB
System,RAID10/6: 32.00MiB
Unallocated: 688.00MiB
/dev/sdd, ID: 2
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/6: 208.00MiB
Metadata,RAID10/6: 96.00MiB
System,RAID10/6: 32.00MiB
Unallocated: 688.00MiB
/dev/sde, ID: 3
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/6: 208.00MiB
Metadata,RAID10/6: 96.00MiB
System,RAID10/6: 32.00MiB
Unallocated: 688.00MiB
/dev/sdf, ID: 4
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/6: 208.00MiB
Metadata,RAID10/6: 96.00MiB
System,RAID10/6: 32.00MiB
Unallocated: 688.00MiB
/dev/sda, ID: 5
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/6: 208.00MiB
Metadata,RAID10/6: 96.00MiB
System,RAID10/6: 32.00MiB
Unallocated: 688.00MiB
/dev/sdg, ID: 6
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID10/6: 208.00MiB
Metadata,RAID10/6: 96.00MiB
System,RAID10/6: 32.00MiB
Unallocated: 688.00MiB
### We lost another 100+MB because of the new metadata
root@ubuntu-for-devops:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc btrfs 3.0G 104M 2.6G 4% /mnt
### Time to check whether the RAID will sustain a single-drive failure by erasing sdd completely
root@ubuntu-for-devops:~# dd if=/dev/urandom of=/dev/sdd bs=1M conv=fsync status=progress
### After simulating a write, dmesg reflected the issue:
root@ubuntu-for-devops:~# touch /mnt/123
root@ubuntu-for-devops:~# dmesg
...
[ 2198.260580] BTRFS warning (device sdc): csum failed root 5 ino 257 off 131072 csum 0x35b08b64 expected csum 0xbae6547e mirror 1
[ 2198.260626] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
[ 2198.260895] BTRFS warning (device sdc): csum failed root 5 ino 257 off 135168 csum 0x343bf823 expected csum 0xb1298672 mirror 1
[ 2198.260901] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
[ 2198.263315] BTRFS warning (device sdc): csum failed root 5 ino 257 off 139264 csum 0x654231d1 expected csum 0x6f03029b mirror 1
[ 2198.263322] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 3, gen 0
[ 2198.263547] BTRFS warning (device sdc): csum failed root 5 ino 257 off 143360 csum 0xa9f20424 expected csum 0xac791696 mirror 1
[ 2198.263553] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 4, gen 0
[ 2198.263791] BTRFS warning (device sdc): csum failed root 5 ino 257 off 147456 csum 0x224092d4 expected csum 0x7bba1416 mirror 1
[ 2198.263797] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 5, gen 0
[ 2198.264935] BTRFS warning (device sdc): csum failed root 5 ino 257 off 151552 csum 0xfc08be81 expected csum 0x221b75d8 mirror 1
[ 2198.264940] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 6, gen 0
[ 2198.265129] BTRFS warning (device sdc): csum failed root 5 ino 257 off 155648 csum 0xfa1929bb expected csum 0xedc93828 mirror 1
[ 2198.265154] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 7, gen 0
[ 2198.265993] BTRFS warning (device sdc): csum failed root 5 ino 257 off 159744 csum 0xb7c7453d expected csum 0xfef92d74 mirror 1
[ 2198.265998] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 8, gen 0
[ 2198.266201] BTRFS warning (device sdc): csum failed root 5 ino 257 off 163840 csum 0x37c8083f expected csum 0x56e50bbd mirror 1
[ 2198.266204] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 9, gen 0
[ 2198.267108] BTRFS warning (device sdc): csum failed root 5 ino 257 off 167936 csum 0x7b5bbbe0 expected csum 0x415c3cf5 mirror 1
[ 2198.267112] BTRFS error (device sdc): bdev /dev/sdd errs: wr 0, rd 0, flush 0, corrupt 10, gen 0
[ 2198.271391] BTRFS info (device sdc): read error corrected: ino 257 off 131072 (dev /dev/sdd sector 359168)
[ 2198.271855] BTRFS info (device sdc): read error corrected: ino 257 off 135168 (dev /dev/sdd sector 359176)
[ 2198.272376] BTRFS info (device sdc): read error corrected: ino 257 off 139264 (dev /dev/sdd sector 359184)
### Time for a scrub test of the data
root@ubuntu-for-devops:/mnt# btrfs scrub start /mnt/
scrub started on /mnt/, fsid 8d21c38c-9697-4998-b5a7-73858939e7dd (pid=2002)
root@ubuntu-for-devops:/mnt# WARNING: errors detected during scrubbing, corrected
root@ubuntu-for-devops:/mnt# btrfs scrub status /mnt/
UUID: 8d21c38c-9697-4998-b5a7-73858939e7dd
Scrub started: Wed Jan 18 12:30:14 2023
Status: finished
Duration: 0:00:00
Total to scrub: 200.50MiB
Rate: 0.00B/s
Error summary: csum=1
Corrected: 1
Uncorrectable: 0
Unverified: 0
...
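### The per-device error counters seen in dmesg ("errs: wr 0, rd 0, ... corrupt 10") can also be
### read with "btrfs device stats /mnt" and reset with "btrfs device stats -z /mnt" once the bad
### drive has been dealt with (not shown here).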
It looks like BTRFS can grow an existing raid10 just fine and can sustain a single drive failure, as expected.
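Applied to the original 6x16TB array, the procedure boils down to the following sketch (the device names /dev/sdX and /dev/sdY for the new drives and the mount point /mnt are placeholders, not taken from the question, and must be adjusted to your setup):

### Add the new 16TB drives to the mounted BTRFS filesystem
btrfs device add /dev/sdX /dev/sdY /mnt
### Rebalance so existing data and metadata are redistributed across all drives as RAID10
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
### Verify how the chunks are now spread over the devices
btrfs device usage /mnt

The filesystem stays mounted and usable during both steps; the balance only redistributes the already written chunks onto the new devices, as the device usage output above shows, and the array remains RAID10 (and therefore tolerant of a single drive failure) throughout.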