LVM logical volumes fail to activate at boot after a live upgrade from 11.04 to 11.10

I upgraded the server from 11.04 to 11.10 (64-bit) using sudo do-release-upgrade

Now the machine stalls during boot because it cannot find some of the logical volumes mounted under /mnt. When this happens I press "M" for a root shell, and see the following (forgive any inaccuracies - I am reproducing this from memory):

$ lvs
  LV       VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  audio    vg   -wi--- 372.53g                                      
  home     vg   -wi-ao 186.26g                                      
  swap     vg   -wi-ao   3.72g                                      

The block device corresponding to "audio" is missing from /dev.

If I run:

$ vgchange -a y
$ lvs
  LV       VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  audio    vg   -wi-ao 372.53g                                      
  home     vg   -wi-ao 186.26g                                      
  swap     vg   -wi-ao   3.72g                                      

then all the LVs become active, and after exiting the root maintenance shell the system continues booting perfectly.

What is going on? How can I set up the LVs so that they are always active at boot?
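Until the root cause is found, a commonly suggested workaround for LVM activation races on this era of Ubuntu is sketched below. This is an assumption on my part, not a confirmed fix for this specific failure:

```shell
# Workaround sketch (assumes Ubuntu 11.10 with initramfs-tools; not a root-cause fix).
# Regenerate the initramfs so the current lvm2/udev activation hooks are included:
sudo update-initramfs -u

# As a blunt fallback, force-activate every volume group late in boot by adding
# the following line to /etc/rc.local, before the final 'exit 0':
#   /sbin/vgchange -a y
# Note: rc.local runs after fstab mounts, so this only helps for volumes that
# are mounted later (or marked noauto in /etc/fstab).
```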


Update to answer the questions raised:

There is one volume group:

# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     1   6   0 wz--n- 1.68t    0 
# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/md2   vg   lvm2 a-   1.68t    0 

on a RAID1 MD array made from a pair of matching SATA hard disks:

 # cat /proc/mdstat 
 Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
 md2 : active raid1 sda3[0] sdb3[1]
       1806932928 blocks [2/2] [UU]

 md1 : active raid1 sda2[0] sdb2[1]
       146484160 blocks [2/2] [UU]

 md3 : active raid1 sda4[0] sdb4[1]
       95168 blocks [2/2] [UU]

 unused devices: <none>

So:

 # mount
 /dev/md1 on / type ext4 (rw,errors=remount-ro)
 proc on /proc type proc (rw,noexec,nosuid,nodev)
 sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
 fusectl on /sys/fs/fuse/connections type fusectl (rw)
 none on /sys/kernel/debug type debugfs (rw)
 none on /sys/kernel/security type securityfs (rw)
 udev on /dev type devtmpfs (rw,mode=0755)
 devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
 tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
 none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
 none on /run/shm type tmpfs (rw,nosuid,nodev)
 /dev/md3 on /boot type ext3 (rw)
 /dev/mapper/vg-home on /home type reiserfs (rw)
 /dev/mapper/vg-audio on /mnt/audio type reiserfs (rw)
 rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
 nfsd on /proc/fs/nfsd type nfsd (rw)

Update with the full lvdisplay output. As expected, the volumes that do work appear first in the list; I can't see anything odd myself. I've included the complete listing here - these are all of my LVM partitions.

This output is from the running machine; if output from the broken state would be useful, it will take some time to obtain.

# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/vg/swap
  VG Name                vg
  LV UUID                INuOTR-gwB8-Z0RW-lGHM-qtRF-Xc7D-Bv43ah
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                3.72 GiB
  Current LE             953
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Name                /dev/vg/home
  VG Name                vg
  LV UUID                7L34YS-Neh0-V5OL-bFfd-TmO4-8CkV-GwXuRL
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                186.26 GiB
  Current LE             47683
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Name                /dev/vg/audio
  VG Name                vg
  LV UUID                AX1ZG5-vwyk-mYVl-DBHt-Rgp2-DSwg-oDZlbS
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                372.53 GiB
  Current LE             95367
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

  --- Logical volume ---
  LV Name                /dev/vg/vmware
  VG Name                vg
  LV UUID                bj0m1h-jndV-GWU8-aePm-gaoo-Q0pE-cWhWj2
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                372.53 GiB
  Current LE             95367
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3

  --- Logical volume ---
  LV Name                /dev/vg/backup
  VG Name                vg
  LV UUID                PHDnjD-8uT8-yHB2-8SBW-d7E1-1Zws-Qx0Tp8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                93.13 GiB
  Current LE             23841
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4

  --- Logical volume ---
  LV Name                /dev/vg/download
  VG Name                vg
  LV UUID                64Your-pvNG-7EvG-exns-eK9A-vMDD-eozIBM
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                695.05 GiB
  Current LE             177934
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:5

Update 2012/01/28: A reboot gave us the chance to see the machine in its failed state.

I don't know whether it's relevant, but the machine was shut down cleanly, yet the filesystems were not clean on restart.

# lvs
  LV       VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  audio    vg   -wi--- 372.53g                                      
  backup   vg   -wi---  93.13g                                      
  download vg   -wi--- 695.05g                                      
  home     vg   -wi-ao 186.26g                                      
  swap     vg   -wi-ao   3.72g                                      
  vmware   vg   -wi--- 372.53g

Though this is perhaps of interest (note download):

# lvs --segments
  LV       VG   Attr   #Str Type   SSize  
  audio    vg   -wi---    1 linear 372.53g
  backup   vg   -wi---    1 linear  93.13g
  download vg   -wi---    1 linear 508.79g
  download vg   -wi---    1 linear 186.26g
  home     vg   -wi-ao    1 linear 186.26g
  swap     vg   -wi-ao    1 linear   3.72g
  vmware   vg   -wi---    1 linear 372.53g

More:

# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/vg/swap
  VG Name                vg
  LV UUID                INuOTR-gwB8-Z0RW-lGHM-qtRF-Xc7D-Bv43ah
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                3.72 GiB
  Current LE             953
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Name                /dev/vg/home
  VG Name                vg
  LV UUID                7L34YS-Neh0-V5OL-bFfd-TmO4-8CkV-GwXuRL
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                186.26 GiB
  Current LE             47683
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Name                /dev/vg/audio
  VG Name                vg
  LV UUID                AX1ZG5-vwyk-mYVl-DBHt-Rgp2-DSwg-oDZlbS
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                372.53 GiB
  Current LE             95367
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Name                /dev/vg/vmware
  VG Name                vg
  LV UUID                bj0m1h-jndV-GWU8-aePm-gaoo-Q0pE-cWhWj2
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                372.53 GiB
  Current LE             95367
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Name                /dev/vg/backup
  VG Name                vg
  LV UUID                PHDnjD-8uT8-yHB2-8SBW-d7E1-1Zws-Qx0Tp8
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                93.13 GiB
  Current LE             23841
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Name                /dev/vg/download
  VG Name                vg
  LV UUID                64Your-pvNG-7EvG-exns-eK9A-vMDD-eozIBM
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                695.05 GiB
  Current LE             177934
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto


# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg     1   6   0 wz--n- 1.68t    0 

# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/md2   vg   lvm2 a-   1.68t    0 

From dmesg:

[    0.908322] ata3.00: ATA-8: ST32000542AS, CC34, max UDMA/133
[    0.908325] ata3.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    0.908536] ata3.01: ATA-8: ST32000542AS, CC34, max UDMA/133
[    0.908538] ata3.01: 3907029168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    0.924307] ata3.00: configured for UDMA/133
[    0.940315] ata3.01: configured for UDMA/133
[    0.940408] scsi 2:0:0:0: Direct-Access     ATA      ST32000542AS     CC34 PQ: 0 ANSI: 5
[    0.940503] sd 2:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    0.940541] sd 2:0:0:0: Attached scsi generic sg0 type 0
[    0.940544] sd 2:0:0:0: [sda] Write Protect is off
[    0.940546] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    0.940564] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.940611] scsi 2:0:1:0: Direct-Access     ATA      ST32000542AS     CC34 PQ: 0 ANSI: 5
[    0.940699] sd 2:0:1:0: Attached scsi generic sg1 type 0
[    0.940728] sd 2:0:1:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
[    0.945319] sd 2:0:1:0: [sdb] Write Protect is off
[    0.945322] sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
[    0.945660] sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.993794]  sda: sda1 sda2 sda3 sda4
[    1.023974] sd 2:0:0:0: [sda] Attached SCSI disk
[    1.024277]  sdb: sdb1 sdb2 sdb3 sdb4
[    1.024529] sd 2:0:1:0: [sdb] Attached SCSI disk


[    1.537688] md: bind<sdb3>
[    1.538922] bio: create slab <bio-1> at 1
[    1.538983] md/raid1:md2: active with 2 out of 2 mirrors
[    1.539005] md2: detected capacity change from 0 to 1850299318272
[    1.540678]  md2: unknown partition table
[    1.540851] md: bind<sdb4>
[    1.542231] md/raid1:md3: active with 2 out of 2 mirrors
[    1.542245] md3: detected capacity change from 0 to 97452032
[    1.543867] md: bind<sdb2>
[    1.544680]  md3: unknown partition table
[    1.545627] md/raid1:md1: active with 2 out of 2 mirrors
[    1.545642] md1: detected capacity change from 0 to 149999779840
[    1.556008]    generic_sse:  9824.000 MB/sec
[    1.556010] xor: using function: generic_sse (9824.000 MB/sec)
[    1.556721] md: raid6 personality registered for level 6
[    1.556723] md: raid5 personality registered for level 5
[    1.556724] md: raid4 personality registered for level 4
[    1.560491] md: raid10 personality registered for level 10
[    1.571416]  md1: unknown partition table


[    1.935835] EXT4-fs (md1): INFO: recovery required on readonly filesystem
[    1.935838] EXT4-fs (md1): write access will be enabled during recovery
[    2.901833] EXT4-fs (md1): orphan cleanup on readonly fs
[    2.901840] EXT4-fs (md1): ext4_orphan_cleanup: deleting unreferenced inode 4981215
[    2.901904] EXT4-fs (md1): ext4_orphan_cleanup: deleting unreferenced inode 8127848
[    2.901944] EXT4-fs (md1): 2 orphan inodes deleted
[    2.901946] EXT4-fs (md1): recovery complete
[    3.343830] EXT4-fs (md1): mounted filesystem with ordered data mode. Opts: (null)
[   64.851211] Adding 3903484k swap on /dev/mapper/vg-swap.  Priority:-1 extents:1 across:3903484k 

[   67.600045] EXT4-fs (md1): re-mounted. Opts: errors=remount-ro
[   68.459775] EXT3-fs: barriers not enabled
[   68.460520] kjournald starting.  Commit interval 5 seconds
[   68.461183] EXT3-fs (md3): using internal journal
[   68.461187] EXT3-fs (md3): mounted filesystem with ordered data mode
[  130.280048] REISERFS (device dm-1): found reiserfs format "3.6" with standard journal
[  130.280060] REISERFS (device dm-1): using ordered data mode
[  130.284596] REISERFS (device dm-1): journal params: device dm-1, size 8192, journal first block 18, max trans len 1024, max batch 900, max commit age 30, max trans age 30
[  130.284918] REISERFS (device dm-1): checking transaction log (dm-1)
[  130.450867] REISERFS (device dm-1): Using r5 hash to sort names

Answer 1

With each update installed, the machine got worse. This morning it rebooted and failed to come back up - /home simply would not activate properly. Then came dbus errors, and so on.

Nine hours later, I have completely reinstalled 11.10 onto the same partitions, and now they work fine. Strange, but that appears to be the solution.

Thanks ppetraki - I agree with your point about performance and will bear it in mind when the machine is replaced.

Answer 2

There are no errors/warnings in the logs. However, I notice you have created 3 MDs, all using the same backing store, with essentially 3 different filesystems on top (ext3, ext4, reiserfs (and possibly raw VMware?)). I wonder whether all those competing I/O writeback policies, plus multiple locking/queueing paths against the same backing store, could under certain conditions crowd out your attempts to use a neighboring MD.

That would show up on the filesystems as journal failures or writeback bounces, and manifest in LVM as failures to assemble or activate a mapping.
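One way to probe this contention hypothesis would be to watch per-device I/O statistics while the machine is under load. A sketch, assuming the sysstat package is installed and using the device names from this machine:

```shell
# Report extended I/O stats every 2 seconds for the underlying disks and the
# three MD arrays. High 'await' on sda/sdb while only one md is busy would
# support the theory that the arrays are starving each other.
iostat -x 2 sda sdb md1 md2 md3
```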

Ideally, your setup would look like this:

sda [sda1 {spans whole disk - 1-2%}]

sdb [sdb1 {spans whole disk - 1-2%}]

md1 (RAID1) [sda1 sdb1]

vg [md1]

You can boot directly from a root LVM backed by md; I do so myself. The reason you don't span the entire disk is that if you ever hit bad blocks and use the vendor's low-level repair tool, it can run out of spare blocks and commandeer blocks that are in use, which changes the net size of the disk and, in the process, destroys your partition table and of course the MD. See MDADM superblock recovery.
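A sketch of how the layout above might be created from scratch. The device names match this machine but are otherwise hypothetical, and these commands are destructive - they would wipe the existing data:

```shell
# One partition per disk, leaving ~2% unused at the end as a safety margin:
parted -s /dev/sda mklabel gpt mkpart primary 1MiB 98%
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 98%

# A single RAID1 across the pair:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# One PV and one VG on top of the array; carve all LVs out of this,
# so every filesystem shares a single I/O queue to the backing store:
pvcreate /dev/md1
vgcreate vg /dev/md1
```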

If you still see the problem with one MD (rather than three), then either one of your filesystems (or VMware) is misbehaving and starving its siblings, or your backing store has a genuine problem that is victimizing everything else.
