I have a 10TB BTRFS volume on a JBOD server, made up of 7 whole-disk devices (no partitions); each physical drive is exposed as a single-drive RAID0*. The 7-drive BTRFS filesystem was created with RAID1 for data, metadata and system, which means only 5TB of usable space.
The machine went through several power outages, and the volume is now corrupted.
I started a btrfs scrub, which ran for 10 hours; it corrected some errors, but uncorrectable errors remain. Here is the log (the commands used to drive the scrub are sketched just after it):
scrub status:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:3|data_extents_scrubbed:43337833|tree_extents_scrubbed:274036|data_bytes_scrubbed:2831212044288|tree_bytes_scrubbed:4489805824|read_errors:0|csum_errors:0|verify_errors:0|no_csum:45248|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:2908834758656|t_start:1548346756|t_resumed:0|duration:33370|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:4|data_extents_scrubbed:6079208|tree_extents_scrubbed:57260|data_bytes_scrubbed:397180661760|tree_bytes_scrubbed:938147840|read_errors:0|csum_errors:0|verify_errors:0|no_csum:5248|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:409096683520|t_start:1548346756|t_resumed:0|duration:6044|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:5|data_extents_scrubbed:13713623|tree_extents_scrubbed:63427|data_bytes_scrubbed:895829155840|tree_bytes_scrubbed:1039187968|read_errors:67549319|csum_errors:34597|verify_errors:45|no_csum:40128|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:67546631|corrected_errors:37330|last_physical:909460373504|t_start:1548346756|t_resumed:0|duration:20996|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:6|data_extents_scrubbed:44399586|tree_extents_scrubbed:267573|data_bytes_scrubbed:2890078298112|tree_bytes_scrubbed:4383916032|read_errors:0|csum_errors:0|verify_errors:0|no_csum:264000|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:2908834758656|t_start:1548346756|t_resumed:0|duration:35430|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:7|data_extents_scrubbed:13852777|tree_extents_scrubbed:0|data_bytes_scrubbed:898808254464|tree_bytes_scrubbed:0|read_errors:0|csum_errors:0|verify_errors:0|no_csum:133376|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:909460373504|t_start:1548346756|t_resumed:0|duration:20638|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:8|data_extents_scrubbed:13806820|tree_extents_scrubbed:0|data_bytes_scrubbed:896648761344|tree_bytes_scrubbed:0|read_errors:0|csum_errors:0|verify_errors:0|no_csum:63808|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:909460373504|t_start:1548346756|t_resumed:0|duration:20443|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:9|data_extents_scrubbed:5443823|tree_extents_scrubbed:0|data_bytes_scrubbed:356618694656|tree_bytes_scrubbed:0|read_errors:0|csum_errors:0|verify_errors:0|no_csum:0|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:377958170624|t_start:1548346756|t_resumed:0|duration:3199|canceled:0|finished:1
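For reference, this kind of scrub is driven with commands along these lines (a minimal sketch; /data is the mount point used throughout, and scrubs run in the background by default):

sudo btrfs scrub start /data     # start a background scrub of all member devices
sudo btrfs scrub status /data    # check progress and per-type error counters
sudo btrfs scrub resume /data    # pick up a scrub interrupted by a reboot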
I then unmounted the volume and ran btrfs check --repair, which printed the following:
Checking filesystem on /dev/sdb
UUID: 1ea7ff96-0c60-46c3-869c-ae398cd106a8
checking extents [o]
cache and super generation don't match, space cache will be invalidated
checking fs roots [o]
checking csums
checking root refs
found 4588612874240 bytes used err is 0
total csum bytes: 4474665852
total tree bytes: 5423104000
total fs tree bytes: 734445568
total extent tree bytes: 71221248
btree space waste bytes: 207577944
file data blocks allocated: 4583189770240
referenced 4583185391616
Now I can no longer mount the volume with mount -a; the output is:
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Checking dmesg, it had logged these messages during the scrub:
[37825.838303] BTRFS error (device sde): bdev /dev/sdf errs: wr 67699124, rd 67694614, flush 0, corrupt 34597, gen 45
[37826.202827] sd 1:1:0:4: rejecting I/O to offline device
Later, the mount errors show up in dmesg as follows:
[pciavald@Host-005 ~]$ sudo mount -a
[63078.778765] BTRFS info (device sde): disk space caching is enabled
[63078.778771] BTRFS info (device sde): has skinny extents
[63078.779882] BTRFS error (device sde): failed to read chunk tree: -5
[63078.790696] BTRFS: open_ctree failed
[pciavald@Host-005 ~]$ sudo mount -o recovery,ro /dev/sdb /data
[75788.205006] BTRFS warning (device sde): 'recovery' is deprecated, use 'usebackuproot' instead
[75788.205012] BTRFS info (device sde): trying to use backup root at mount time
[75788.205016] BTRFS info (device sde): disk space caching is enabled
[75788.205018] BTRFS info (device sde): has skinny extents
[75788.206382] BTRFS error (device sde): failed to read chunk tree: -5
[75788.215661] BTRFS: open_ctree failed
[pciavald@Host-005 ~]$ sudo mount -o usebackuproot,ro /dev/sdb /data
[76171.713546] BTRFS info (device sde): trying to use backup root at mount time
[76171.713552] BTRFS info (device sde): disk space caching is enabled
[76171.713556] BTRFS info (device sde): has skinny extents
[76171.714829] BTRFS error (device sde): failed to read chunk tree: -5
[76171.725735] BTRFS: open_ctree failed
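Since this is RAID1 with one member apparently offline, another avenue (not attempted above) is an explicit degraded mount, which tells btrfs to assemble the filesystem without the failed member; a sketch, assuming /dev/sdb is any healthy member:

sudo btrfs device scan                      # (re)register all btrfs member devices with the kernel
sudo mount -o degraded,ro /dev/sdb /data    # assemble read-only despite the missing member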
From the scrub log, it looks like all the uncorrectable errors sit on a single drive, devid 5. The dmesg messages tie those errors to the drive /dev/sdf, and the scrub log reports all of them on device 1ea7ff96-0c60-46c3-869c-ae398cd106a8:5.
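The devid-to-path mapping can be double-checked directly from the filesystem listing, for example:

sudo btrfs filesystem show /data    # prints each devid together with its /dev path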
*: I know that running BTRFS on volumes managed by a hardware RAID controller, rather than directly on the physical drives, is not ideal, but I had no choice. Each drive inserted into the array is configured as a single-drive RAID0, which makes it visible to the OS. These logical drives were then used as whole-disk BTRFS devices and added to the BTRFS filesystem with duplicated data and metadata.
EDIT: I went to the server, rebooted it into a newer kernel, and noticed that the fault LED of the failing drive /dev/sdf was lit. I powered down the server, power-cycled the JBOD and the server, and the LED went back to green. The volume now mounts correctly, and I restarted the scrub. After 6 minutes, the status already showed errors, with no indication yet of whether they are correctable:
scrub status for 1ea7ff96-0c60-46c3-869c-ae398cd106a8
scrub started at Fri Jan 25 11:53:28 2019, running for 00:06:31
total bytes scrubbed: 243.83GiB with 3 errors
error details: super=3
corrected errors: 0, uncorrectable errors: 0, unverified errors: 0
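To keep an eye on a running scrub without retyping the command, something like this works:

sudo watch -n 60 btrfs scrub status /data    # refresh the status every minute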
When this scrub finished after about 8 hours, the output was:
scrub status for 1ea7ff96-0c60-46c3-869c-ae398cd106a8
scrub started at Fri Jan 25 11:53:28 2019 and finished after 07:59:20
total bytes scrubbed: 8.35TiB with 67549322 errors
error details: read=67549306 super=3 csum=13
corrected errors: 2701, uncorrectable errors: 67546618, unverified errors: 0
The new log for that scrub:
1ea7ff96-0c60-46c3-869c-ae398cd106a8:3|data_extents_scrubbed:43337833|tree_extents_scrubbed:273855|data_bytes_scrubbed:2831212044288|tree_bytes_scrubbed:4486840320|read_errors:0|csum_errors:0|verify_errors:0|no_csum:45248|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:2908834758656|t_start:1548413608|t_resumed:0|duration:26986|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:4|data_extents_scrubbed:6079208|tree_extents_scrubbed:57127|data_bytes_scrubbed:397180661760|tree_bytes_scrubbed:935968768|read_errors:0|csum_errors:0|verify_errors:0|no_csum:5248|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:409096683520|t_start:1548413608|t_resumed:0|duration:6031|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:5|data_extents_scrubbed:13713623|tree_extents_scrubbed:63206|data_bytes_scrubbed:895829155840|tree_bytes_scrubbed:1035567104|read_errors:67549306|csum_errors:13|verify_errors:0|no_csum:40128|csum_discards:0|super_errors:3|malloc_errors:0|uncorrectable_errors:67546618|corrected_errors:2701|last_physical:909460373504|t_start:1548413608|t_resumed:0|duration:14690|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:6|data_extents_scrubbed:44399652|tree_extents_scrubbed:267794|data_bytes_scrubbed:2890081705984|tree_bytes_scrubbed:4387536896|read_errors:0|csum_errors:0|verify_errors:0|no_csum:264832|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:2908834758656|t_start:1548413608|t_resumed:0|duration:28760|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:7|data_extents_scrubbed:13852771|tree_extents_scrubbed:0|data_bytes_scrubbed:898807992320|tree_bytes_scrubbed:0|read_errors:0|csum_errors:0|verify_errors:0|no_csum:133312|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:909460373504|t_start:1548413608|t_resumed:0|duration:14372|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:8|data_extents_scrubbed:13806827|tree_extents_scrubbed:0|data_bytes_scrubbed:896649023488|tree_bytes_scrubbed:0|read_errors:0|csum_errors:0|verify_errors:0|no_csum:63872|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:909460373504|t_start:1548413608|t_resumed:0|duration:14059|canceled:0|finished:1
1ea7ff96-0c60-46c3-869c-ae398cd106a8:9|data_extents_scrubbed:5443823|tree_extents_scrubbed:3|data_bytes_scrubbed:356618694656|tree_bytes_scrubbed:49152|read_errors:0|csum_errors:0|verify_errors:0|no_csum:0|csum_discards:0|super_errors:0|malloc_errors:0|uncorrectable_errors:0|corrected_errors:0|last_physical:377991725056|t_start:1548413608|t_resumed:0|duration:3275|canceled:0|finished:1
The same device has uncorrectable errors again, so I tried to list the btrfs devices, but devid 5 is missing from the listing:
[pciavald@Host-001 ~]$ sudo btrfs fi show /data
Label: 'data' uuid: 1ea7ff96-0c60-46c3-869c-ae398cd106a8
Total devices 7 FS bytes used 4.17TiB
devid 3 size 2.73TiB used 2.65TiB path /dev/sdd
devid 4 size 465.73GiB used 381.00GiB path /dev/sde
devid 6 size 2.73TiB used 2.65TiB path /dev/sdb
devid 7 size 931.48GiB used 847.00GiB path /dev/sdc
devid 8 size 931.48GiB used 847.00GiB path /dev/sdg
devid 9 size 931.48GiB used 352.03GiB path /dev/sdh
*** Some devices missing
Every device except devid 5 (/dev/sdf) is listed here, so I guess the broken drive is that one. Since the data is duplicated, I should be able to delete this device and rebalance the setup, so I tried:
[pciavald@Host-001 ~]$ sudo btrfs device delete /dev/sdf /data
ERROR: error removing device '/dev/sdf': No such device or address
How can I properly remove this device?
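For the record, when a member can no longer be opened at all, btrfs accepts the literal keyword missing in place of a device path, normally after a degraded mount; that route (not the one ultimately taken here) looks like this:

sudo mount -o degraded /dev/sdb /data     # mount read-write without the dead member
sudo btrfs device delete missing /data    # drop the missing device and re-mirror its chunks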
EDIT 2: I went to IRC (freenode #btrfs) for help and did the following investigation. In the usage output we can see that, across the whole filesystem, the data is mirrored over 2 different drives (RAID1):
[pciavald@Host-001 ~]$ sudo btrfs fi usage /data
Overall:
Device size: 9.55TiB
Device allocated: 8.49TiB
Device unallocated: 1.06TiB
Device missing: 931.48GiB
Used: 8.35TiB
Free (estimated): 615.37GiB (min: 615.37GiB)
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,RAID1: Size:4.24TiB, Used:4.17TiB
/dev/sdb 2.64TiB
/dev/sdc 847.00GiB
/dev/sdd 2.64TiB
/dev/sde 380.00GiB
/dev/sdf 846.00GiB
/dev/sdg 847.00GiB
/dev/sdh 352.00GiB
Metadata,RAID1: Size:6.00GiB, Used:5.05GiB
/dev/sdb 5.00GiB
/dev/sdd 5.00GiB
/dev/sde 1.00GiB
/dev/sdf 1.00GiB
System,RAID1: Size:64.00MiB, Used:624.00KiB
/dev/sdb 64.00MiB
/dev/sdd 32.00MiB
/dev/sdh 32.00MiB
Unallocated:
/dev/sdb 85.43GiB
/dev/sdc 84.48GiB
/dev/sdd 85.46GiB
/dev/sde 84.73GiB
/dev/sdf 84.48GiB
/dev/sdg 84.48GiB
/dev/sdh 579.45GiB
With btrfs dev stats /data we can see that all the errors sit on /dev/sdf, which suggests that the scrub's uncorrectable errors were not caused by corruption in the mirrored copies of the data, but by the OS being unable to read from or write to the defective drive at all:
[/dev/sdd].write_io_errs 0
[/dev/sdd].read_io_errs 0
[/dev/sdd].flush_io_errs 0
[/dev/sdd].corruption_errs 0
[/dev/sdd].generation_errs 0
[/dev/sde].write_io_errs 0
[/dev/sde].read_io_errs 0
[/dev/sde].flush_io_errs 0
[/dev/sde].corruption_errs 0
[/dev/sde].generation_errs 0
[/dev/sdf].write_io_errs 135274911
[/dev/sdf].read_io_errs 135262641
[/dev/sdf].flush_io_errs 0
[/dev/sdf].corruption_errs 34610
[/dev/sdf].generation_errs 48
[/dev/sdb].write_io_errs 0
[/dev/sdb].read_io_errs 0
[/dev/sdb].flush_io_errs 0
[/dev/sdb].corruption_errs 0
[/dev/sdb].generation_errs 0
[/dev/sdc].write_io_errs 0
[/dev/sdc].read_io_errs 0
[/dev/sdc].flush_io_errs 0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
[/dev/sdg].write_io_errs 0
[/dev/sdg].read_io_errs 0
[/dev/sdg].flush_io_errs 0
[/dev/sdg].corruption_errs 0
[/dev/sdg].generation_errs 0
[/dev/sdh].write_io_errs 0
[/dev/sdh].read_io_errs 0
[/dev/sdh].flush_io_errs 0
[/dev/sdh].corruption_errs 0
[/dev/sdh].generation_errs 0
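These counters persist across remounts and reboots; once the faulty drive has been dealt with, they can be printed and zeroed in one go so that any new errors stand out:

sudo btrfs device stats -z /data    # print the counters, then reset them to zero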
I have ordered a new 1TB drive to replace /dev/sdf, and I will answer this question once I have managed to swap it in.
Answer 1
After inserting the new 1TB drive into the array, I formatted it as a single-drive RAID0 in the array configuration utility to make it visible to the OS. Then, without creating any partition table or partition on the drive, I issued:
sudo btrfs replace start -r -f 5 /dev/sdi /data
Let's break this down: we ask btrfs to start replacing devid 5 (we use this notation rather than /dev/sdf because the device may be in a missing state) with the freshly inserted drive /dev/sdi. -r only reads from the source device if no other zero-defect mirror exists, -f forces overwriting the target disk, and /data is our mount point.
The replace completed after 2 hours 30 minutes, with this status:
Started on 27.Jan 21:57:31, finished on 28.Jan 00:19:23, 0 write errs, 0 uncorr. read errs
Worth noting: we had yet another power outage during the replace, lasting exactly 10 seconds longer than the new UPS could sustain, so the server went down in the middle of the replace operation. After the server rebooted, the replace resumed without any command being issued: when I checked the status after boot, it was already running again.
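For completeness, the progress of an ongoing replace is queried with:

sudo btrfs replace status /data    # shows percent completed and error counters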
I then scrubbed the volume again; here is the output:
scrub status for 1ea7ff96-0c60-46c3-869c-ae398cd106a8
scrub started at Mon Jan 28 00:24:15 2019 and finished after 06:45:57
total bytes scrubbed: 8.35TiB with 212759 errors
error details: csum=212759
corrected errors: 212759, uncorrectable errors: 0, unverified errors: 0
Now that everything has been corrected, I re-ran the scrub to make sure no further errors get corrected:
scrub status for 1ea7ff96-0c60-46c3-869c-ae398cd106a8
scrub started at Mon Jan 28 10:19:24 2019 and finished after 06:33:05
total bytes scrubbed: 8.35TiB with 0 errors
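With the array healthy again, a periodic scrub helps catch silent corruption early; a minimal cron entry (the schedule and path are just an example):

# /etc/cron.d/btrfs-scrub: weekly foreground scrub of /data
0 3 * * 0  root  /usr/bin/btrfs scrub start -B /data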