RAID not cleanly unmounted after reboot or halt

On my Ubuntu 16.04 system (Linux ebox 4.4.0-98-generic), after a reboot or halt I frequently have to run fsck on my RAID volume /dev/md0, because the volume is not unmounted cleanly.
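
For reference, one way to confirm the symptom before mounting is to read the ext filesystem's own state flag out of the superblock (this diagnostic is my addition, not part of the original report):

$ sudo dumpe2fs -h /dev/md0 | grep -i 'filesystem state'
# "clean" means the last unmount completed; "not clean" confirms the
# volume was still mounted (or dirty) when the system went down.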

Here is /etc/rc0.d:

K09umountfs -> ../init.d/umountfs
K10umountroot -> ../init.d/umountroot
K11mdadm-waitidle -> ../init.d/mdadm-waitidle
K12halt -> ../init.d/halt

And here is /etc/rc6.d:

K09umountfs -> ../init.d/umountfs
K10umountroot -> ../init.d/umountroot
K11mdadm-waitidle -> ../init.d/mdadm-waitidle
K12reboot -> ../init.d/reboot
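
Note that Ubuntu 16.04 boots with systemd, so these SysV links only take effect through the units that systemd-sysv-generator derives from them. One way to check how (and whether) the script is actually wired into shutdown, assuming the generated unit carries the script's name:

$ systemctl status mdadm-waitidle.service
$ systemctl list-dependencies --reverse mdadm-waitidle.service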

In my syslog:

Nov 17 13:24:32 ebox systemd-fsck[877]: /dev/md0: Note: if several inode or block bitmap blocks or part
Nov 17 13:24:32 ebox systemd-fsck[877]: of the inode table require relocation, you may wish to try
Nov 17 13:24:32 ebox systemd-fsck[877]: running e2fsck with the '-b 32768' option first.  The problem
Nov 17 13:24:32 ebox systemd-fsck[877]: may lie only with the primary block group descriptors, and
Nov 17 13:24:32 ebox systemd-fsck[877]: the backup block group descriptors may be OK.
Nov 17 13:24:32 ebox systemd-fsck[877]: /dev/md0: Block bitmap for group 1920 is not in group.  (block 3499016243)
Nov 17 13:24:32 ebox systemd-fsck[877]: /dev/md0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
Nov 17 13:24:32 ebox systemd-fsck[877]:         (i.e., without -a or -p options)
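
For completeness, the manual check this message asks for (the -b 32768 value comes from the log itself; whether that block really holds a backup superblock for this filesystem could be confirmed with mke2fs -n) would look like:

$ sudo umount /dev/md0       # make sure nothing has it mounted
$ sudo e2fsck -f /dev/md0    # full interactive check, without -a/-p
# If the primary block group descriptors are the problem, retry
# against the backup superblock suggested by the log:
$ sudo e2fsck -b 32768 /dev/md0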

The mdadm-waitidle script is supposed to sync the volume before the halt, but it does not seem to be doing so.
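
What the script is expected to achieve can be verified by hand before halting; a sketch using the standard md sysfs attributes (these commands are my illustration, not from the original post):

$ cat /sys/block/md0/md/array_state   # "clean" = no pending writes
$ cat /sys/block/md0/md/sync_action   # "idle" = no resync/recovery running
$ sudo mdadm --wait-clean --scan      # block until all arrays are marked clean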

$ mdadm --detail /dev/md0

/dev/md0:
        Version : 0.90
  Creation Time : Fri Feb 18 13:19:52 2011
     Raid Level : raid5
     Array Size : 3907023872 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Nov 17 13:33:27 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 44f10eaa:b8c95e94:b49a4af1:2aa0198b (local to host ebox)
         Events : 0.7936

Number   Major   Minor   RaidDevice State
   0       8        0        0      active sync   /dev/sda
   1       8       16        1      active sync   /dev/sdb
   2       8       32        2      active sync   /dev/sdc
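
Since the array itself reports clean, the interesting evidence is what happened during the previous shutdown. If persistent journaling is enabled (Storage=persistent in /etc/systemd/journald.conf, which is an assumption about this system), the previous boot's messages can be pulled up with:

$ journalctl -b -1 | grep -E 'md0|fsck|umount'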

How can this problem be explained?
