My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way: by simply cutting power to the device via an external switch.
This is normally fine, since ext3's journaling keeps things in order, so other than the occasional loss of part of a log file, everything keeps chugging along fine.
However, we've recently seen units where the ext3 partition starts to develop structural problems after many hard power cycles. In particular, if we run e2fsck on the ext3 partition, it finds a number of issues like those shown in the output listing at the bottom of this question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.
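The "run e2fsck until it stops reporting errors" procedure can be sketched as a loop. This is a minimal sketch run against a small loopback image so it touches no real device; the image path, size, and the use of `e2fsck -y` are illustrative assumptions, not our actual production procedure:

```shell
# Build a small ext3 image to check (illustrative; the real target is a device
# such as /dev/sda3).
dd if=/dev/zero of=/tmp/testfs.img bs=1M count=16 2>/dev/null
mke2fs -q -F -j /tmp/testfs.img

# Re-run e2fsck, answering yes to every proposed fix, until it reports no
# remaining errors. e2fsck exit codes: 0 = clean, 1 = errors were corrected,
# >= 2 = problems remain or the check failed.
until e2fsck -y -f /tmp/testfs.img >/dev/null; rc=$?; [ "$rc" -le 1 ]; do :; done
echo "e2fsck finished with exit code $rc"
```

On a freshly created image the loop exits after the first pass; on a corrupted partition it may take two or three passes, as the transcript below shows.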
My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to many sudden/unexpected shutdowns?
My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or a hardware problem) ext3's journaling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user data is not journaled, so munged/missing/truncated user files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below.)
My colleague, on the other hand, says this is known/expected behavior, because SSD controllers sometimes re-order write commands, and that can confuse the ext3 journal. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time.
Which of us is right?
Embedded-PC-failsafe:~# ls
Embedded-PC-failsafe:~# umount /mnt/unionfs
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Invalid inode number for '.' in directory inode 46948.
Fix<y>? yes
Directory inode 46948, block 0, offset 12: directory corrupted
Salvage<y>? yes
Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes
Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes
Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes
Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes
Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes
Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes
Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes
Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes
Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes
Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? yes
Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes
Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes
Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes
Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes
Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes
Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes
Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes
Couldn't fix parent of inode 46948: Couldn't find parent directory entry
Pass 4: Checking reference counts
Unattached inode 46945
Connect to /lost+found<y>? yes
Inode 46945 ref count is 2, should be 1. Fix<y>? yes
Inode 46953 ref count is 5, should be 4. Fix<y>? yes
Pass 5: Checking group summary information
Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517
Fix<y>? yes
Free blocks count wrong for group #6 (17247, counted=17611).
Fix<y>? yes
Free blocks count wrong (161691, counted=162055).
Fix<y>? yes
Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096)
Fix<y>? yes
Free inodes count wrong for group #6 (7608, counted=7624).
Fix<y>? yes
Free inodes count wrong (61919, counted=61935).
Fix<y>? yes
embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
embeddedrootwrite: ********** WARNING: Filesystem still has errors **********
embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
Embedded-PC-failsafe:~#
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Directory entry for '.' in ... (46948) is big.
Split<y>? yes
Missing '..' in directory inode 46948.
Fix<y>? yes
Setting filetype for entry '..' in ... (46948) to 2.
Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes
Pass 4: Checking reference counts
Inode 2 ref count is 12, should be 13. Fix<y>? yes
Pass 5: Checking group summary information
embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
Embedded-PC-failsafe:~#
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
Answer 1
You're both wrong (maybe?)... ext3 is coping the best it can with having its underlying storage removed so abruptly.
Your SSD probably has some type of onboard cache. You don't mention the make/model of the SSD in use, but this sounds like a consumer-grade SSD rather than an enterprise- or industrial-grade model.
Either way, the cache is used to help coalesce writes and prolong the life of the drive. If there are writes in flight, a sudden loss of power is definitely the source of your corruption. True enterprise and industrial SSDs have supercapacitors that maintain power long enough to move data from cache to nonvolatile storage, much in the same way battery-backed and flash-backed RAID controller caches work.
If your drive doesn't have a supercap, the in-flight transactions are lost, resulting in filesystem corruption. ext3 is probably being told that everything is on stable storage, but that's just a function of the cache.
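One way to see whether the drive is acknowledging writes out of a volatile cache is to query its write-cache setting. A minimal sketch: `hdparm` and the device name `/dev/sda` are assumptions, and since the real query needs root and real hardware, the parsing below runs against a captured sample of hdparm's typical output rather than a live device:

```shell
# On the real device you would run (as root):
#   hdparm -W /dev/sda
# Typical output when the volatile write cache is on:
sample='/dev/sda:
 write-caching =  1 (on)'

# Pull out the 0/1 state; 1 means writes may be acknowledged from the
# drive's volatile cache before they reach stable storage.
state=$(printf '%s\n' "$sample" | awk '/write-caching/ {print $3}')
echo "write cache enabled: $state"
```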
Answer 2
You are right and your colleague is wrong. Barring something going wrong, the journal ensures you never have inconsistent fs metadata. You can check with hdparm whether the drive's write cache is enabled. If it is, and you have not enabled IO barriers (off by default on ext3, on by default on ext4), then that would be the cause of the problem.
Barriers are needed to force the drive's write cache to flush at the right times to maintain consistency, but some drives are badly behaved and either report that their write cache is disabled when it is not, or silently ignore the flush commands. This prevents the journal from doing its job.
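If the write cache does turn out to be enabled, the two remedies this answer implies map onto either disabling the cache or enabling barriers on the ext3 mount. A sketch of both, shown as comments because each needs root and real hardware; the device name, mount point, and fstab line are illustrative assumptions:

```
# Option A: disable the drive's volatile write cache entirely (as root):
#   hdparm -W0 /dev/sda
# Option B: keep the cache but enable IO barriers on the ext3 mount:
#   mount -o remount,barrier=1 /
# ...and make it persistent in /etc/fstab:
#   /dev/sda3  /  ext3  defaults,barrier=1  0  1
```

Option A costs write performance and flash lifespan; option B keeps the cache but depends on the drive honoring flush commands, which, as noted above, some drives do not.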