Problems (corruption) with EXT4 on LVM on software RAID 5

Here is a brief overview of the problem I am running into. I am using Lubuntu 12.04, Xubuntu 12.04, and plain Ubuntu 12.04, with the generic 3.2.0-25 and 3.2.0-26 kernels. The problem shows up on all three variants, across three different computers. I built the RAID once and then moved the drives from machine to machine for the trial tests, recreating the PV/VG/LV on each new machine in case the kernel was at fault or something else odd was going on. I have tried both EXT4 and XFS on an LVM2 logical volume on top of Linux software RAID; the configuration is below. Running fsck/xfs_check immediately after mkfs fails. I only discovered this after spending time filling the volume with test data. From here on I will refer strictly to the EXT4 results, since that is my preferred filesystem.

Hopefully someone can explain what is going wrong, whether I have made a mistake somewhere, or whether there is a bug somewhere. Ultimately this box will be the backup copy of my main NAS, which will also run LVM with multiple LVs, but obviously multiple LVs are pointless if I cannot get a single LV working on the larger set of drives.

The RAID 5 is made up of (4x) 1.5 TB USB hard drives, each partitioned with GPT, with the partitions used as the RAID member devices.
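
The per-drive partitioning boils down to commands along these lines (a sketch for /dev/sdd only; the sector values are taken from the parted listing below):

parted -s /dev/sdd mklabel gpt
parted -s /dev/sdd mkpart primary 2048s 2928175104s
parted -s /dev/sdd name 1 'raid - storage_backup - dev 0'
parted -s /dev/sdd set 1 raid on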

root@nasbackup:~# for part in d e f g; do parted /dev/sd$part unit s print; done
Model: WD 15EADS External (scsi)
Disk /dev/sdd: 2930277168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name                           Flags
1      2048s  2928175104s  2928173057s               raid - storage_backup - dev 0  raid

Model: WD 15EADS External (scsi)
Disk /dev/sde: 2930277168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name                           Flags
1      2048s  2928175104s  2928173057s               raid - storage_backup - dev 1  raid

Model: WD Ext HDD 1021 (scsi)
Disk /dev/sdf: 2930272256s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name                           Flags
1      2048s  2928175104s  2928173057s               raid - storage_backup - dev 2  raid

Model: Maxtor OneTouch (scsi)
Disk /dev/sdg: 2930277168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End          Size         File system  Name                           Flags
1      2048s  2928175104s  2928173057s               raid - storage_backup - dev 3  raid

root@nasbackup:~# mdadm -D /dev/md/storage_backup
/dev/md/storage_backup:
        Version : 1.2
  Creation Time : Fri Jul 13 18:26:34 2012
     Raid Level : raid5
     Array Size : 4392254976 (4188.78 GiB 4497.67 GB)
  Used Dev Size : 1464084992 (1396.26 GiB 1499.22 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Jul 19 18:16:40 2012
          State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nasbackup:storage_backup  (local to host nasbackup)
           UUID : 2d23d3bf:dead6278:7c7cbf74:b1c308b4
         Events : 108635

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
       2       8       81        2      active sync   /dev/sdf1
       4       8       97        3      active sync   /dev/sdg1
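
An array with this geometry (RAID 5, four member devices, 512K chunk, internal bitmap, 1.2 metadata) would typically be created with something along these lines; the exact creation command is not reproduced here:

mdadm --create /dev/md/storage_backup --metadata=1.2 --level=5 \
      --raid-devices=4 --chunk=512 --bitmap=internal \
      /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1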

The RAID itself does not seem to be the problem: I tried putting the filesystem directly on the md device, loaded roughly 1 TB of data onto it, verified it with checksums, and saw no apparent problems; it also survives a full fsck without issue. As soon as I recreate the LVM layer, the problem returns. I created the volume group with 128 MB extents rather than the default 4 MB because I had read somewhere that lvm2 has problems above a 65k extent count. I have also read elsewhere that this was never actually an issue in any released version, but just for fun I tried 4 MB, 16 MB, 32 MB, 64 MB, and 128 MB extents, all with the same result.
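
A checksum pass of that sort looks roughly like the following (illustrative mount points, not the exact commands used):

# record checksums of the test data with relative paths
cd /mnt/source && find . -type f -exec md5sum {} + > /tmp/testdata.md5
# after copying the data onto the filesystem under test, verify it in place
cd /mnt/storage_backup && md5sum -c --quiet /tmp/testdata.md5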

root@nasbackup:~# pvcreate --verbose /dev/md/storage_backup
    Set up physical volume for "/dev/md/storage_backup" with 8784509952 available sectors
    Zeroing start of device /dev/md/storage_backup
  Physical volume "/dev/md/storage_backup" successfully created
root@nasbackup:~# vgcreate --verbose -s 128M vg_storage_backup /dev/md/storage_backup
    Wiping cache of LVM-capable devices
    Wiping cache of LVM-capable devices
    Adding physical volume '/dev/md/storage_backup' to volume group 'vg_storage_backup'
    Archiving volume group "vg_storage_backup" metadata (seqno 0).
    Creating volume group backup "/etc/lvm/backup/vg_storage_backup" (seqno 1).
  Volume group "vg_storage_backup" successfully created

root@nasbackup:~# vgdisplay vg_storage_backup
  --- Volume group ---
  VG Name               vg_storage_backup
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.09 TiB
  PE Size               128.00 MiB
  Total PE              33510
  Alloc PE / Size       0 / 0
  Free  PE / Size       33510 / 4.09 TiB
  VG UUID               eDm02A-PmI0-67my-Gxfd-helQ-MJGv-r61Nb3
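
For context on the extent counts: this VG has 33510 PEs at 128 MiB each, so at the default 4 MiB extent size it would need about 32 times as many, well past the 65k figure mentioned above:

echo $(( 33510 * 128 / 4 ))    # 1072320 extents at 4 MiB per extent
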
root@nasbackup:~# lvcreate  --verbose --extents 100%FREE  --name lv_storage_backup vg_storage_backup
    Setting logging type to disk
    Finding volume group "vg_storage_backup"
    Archiving volume group "vg_storage_backup" metadata (seqno 1).
    Creating logical volume lv_storage_backup
    Creating volume group backup "/etc/lvm/backup/vg_storage_backup" (seqno 2).
    Found volume group "vg_storage_backup"
    Creating vg_storage_backup-lv_storage_backup
    Loading vg_storage_backup-lv_storage_backup table (252:0)
    Resuming vg_storage_backup-lv_storage_backup (252:0)
    Clearing start of logical volume "lv_storage_backup"
    Creating volume group backup "/etc/lvm/backup/vg_storage_backup" (seqno 2).
  Logical volume "lv_storage_backup" created
root@nasbackup:~# lvdisplay /dev/vg_storage_backup/lv_storage_backup
  --- Logical volume ---
  LV Name                /dev/vg_storage_backup/lv_storage_backup
  VG Name                vg_storage_backup
  LV UUID                hbYfbR-4hFn-rxGK-VoQ2-S4uw-d5us-alAUoz
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                4.09 TiB
  Current LE             33510
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           252:0
root@nasbackup:~# mkfs.ext4 -m 0 -L "storage_backup" -T largefile /dev/md/storage_backup
mke2fs 1.42 (29-Nov-2011)
Filesystem label=storage_backup
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
4289408 inodes, 1098063744 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
33511 block groups
32768 blocks per group, 32768 fragments per group
128 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
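
The Stride and Stripe width values mke2fs detected match the RAID geometry: a 512 KiB chunk divided by the 4 KiB block size gives a stride of 128 blocks, and with 3 data disks in a 4-disk RAID 5 the stripe width is 3 × 128 = 384 blocks. If detection ever fails through the LVM layer, the same values can be passed explicitly (a sketch, not a command from the session above):

mkfs.ext4 -m 0 -L storage_backup -T largefile \
    -E stride=128,stripe_width=384 \
    /dev/vg_storage_backup/lv_storage_backup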

So far, so good, until:

root@nasbackup:~# fsck.ext4 -vfy /dev/vg_storage_backup/lv_storage_backup
e2fsck 1.42 (29-Nov-2011)
fsck.ext4: Group descriptors look bad... trying backup blocks...
One or more block group descriptor checksums are invalid.  Fix? yes

Group descriptor 0 checksum is invalid.  FIXED.
Group descriptor 1 checksum is invalid.  FIXED.
Group descriptor 2 checksum is invalid.  FIXED.
Group descriptor 3 checksum is invalid.  FIXED.
Group descriptor 4 checksum is invalid.  FIXED.
Group descriptor 5 checksum is invalid.  FIXED.
Group descriptor 6 checksum is invalid.  FIXED.
Group descriptor 7 checksum is invalid.  FIXED.
Group descriptor 8 checksum is invalid.  FIXED.
Group descriptor 9 checksum is invalid.  FIXED.
Group descriptor 10 checksum is invalid.  FIXED.
Group descriptor 11 checksum is invalid.  FIXED.
Group descriptor 12 checksum is invalid.  FIXED.
Group descriptor 13 checksum is invalid.  FIXED.
--------SNIP----------
Group descriptor 33509 checksum is invalid.  FIXED.
Pass 1: Checking inodes, blocks, and sizes
Group 9976's inode table at 326631520 conflicts with some other fs block.
Relocate? yes

Group 9976's block bitmap at 326631432 conflicts with some other fs block.
Relocate? yes

Group 9976's inode bitmap at 326631448 conflicts with some other fs block.
Relocate? yes

Group 9977's inode table at 326631528 conflicts with some other fs block.
Relocate? yes

Group 9977's block bitmap at 326631433 conflicts with some other fs block.
Relocate? yes

Group 9977's inode bitmap at 326631449 conflicts with some other fs block.
Relocate? yes

-----SNIP------

Inode 493088 is in use, but has dtime set.  Fix? yes

Inode 493088 has imagic flag set.  Clear? yes

Inode 493088 has a extra size (65535) which is invalid
Fix? yes

Inode 493088 has compression flag set on filesystem without compression support.  Clear? yes

Inode 493088 has INDEX_FL flag set but is not a directory.
Clear HTree index? yes

Inode 493088 should not have EOFBLOCKS_FL set (size 18446744073709551615, lblk -1)
Clear? yes

Inode 493088, i_size is 18446744073709551615, should be 0.  Fix? yes

Inode 493088, i_blocks is 281474976710655, should be 0.  Fix? yes

Inode 514060 has an invalid extent node (blk 131629062, lblk 0)
Clear? yes

fsck.ext4: e2fsck_read_bitmaps: illegal bitmap block(s) for storage_backup

storage_backup: ***** FILE SYSTEM WAS MODIFIED *****
e2fsck: aborted

storage_backup: ***** FILE SYSTEM WAS MODIFIED *****

It does not even finish; it aborts on its own. If I run fsck again I get exactly the same errors, as if it never actually modified the filesystem the way it claims to have.
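
One way to take the filesystem out of the equation is to check whether raw writes through the LV read back intact (a destructive sketch that overwrites the start of the volume, so only for a throwaway filesystem):

# write a known 1 GiB pattern to the start of the LV, bypassing the page cache
dd if=/dev/urandom of=/tmp/pattern bs=1M count=1024
dd if=/tmp/pattern of=/dev/vg_storage_backup/lv_storage_backup bs=1M oflag=direct
# drop caches and read the same region back for comparison
echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/vg_storage_backup/lv_storage_backup of=/tmp/readback bs=1M count=1024 iflag=direct
cmp /tmp/pattern /tmp/readback && echo "read-back matches" || echo "read-back differs"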

And, just in case, here are the results directly on the md device:

root@nasbackup:~# mkfs.ext4 -m 0 -L "storage_backup" -T largefile /dev/md/storage_backup
mke2fs 1.42 (29-Nov-2011)
Filesystem label=storage_backup
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
4289408 inodes, 1098063744 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
33511 block groups
32768 blocks per group, 32768 fragments per group
128 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

root@nasbackup:~# fsck.ext4 -vfy /dev/md/storage_backup
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

      11 inodes used (0.00%)
       0 non-contiguous files (0.0%)
       0 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
         Extent depth histogram: 3
  390434 blocks used (0.04%)
       0 bad blocks
       1 large file

       0 regular files
       2 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
       0 symbolic links (0 fast symbolic links)
       0 sockets
--------
       2 files

Answer 1

It looks like my problem is solved. It turned out that if I put the filesystem directly on the RAID md device it also failed, just not right away. I ran badblocks with -n for a non-destructive test, found that data was not being written/read back correctly, and narrowed the problem down to one of the hard drives. After removing that drive, the whole mess started working properly.
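
The per-disk test amounts to running badblocks in non-destructive read-write mode against each member partition, with the array stopped so nothing else touches the disks (the exact invocation may have differed):

mdadm --stop /dev/md/storage_backup
for dev in sdd1 sde1 sdf1 sdg1; do
    badblocks -nsv /dev/$dev | tee /root/badblocks.$dev.log
done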
