Disaster recovery: some progress with MDADM/LVM2, but stuck at the final mounting stage

We foolishly upgraded a running server using the wrong repositories, and the system is now completely unbootable.

We upgraded a SLES 11 system using openSUSE repositories, and everything went wrong. It now only boots into (repair filesystem) mode.

At boot it fails to bring up the RAID1 that holds the main LVM setup. At this point we are only interested in accessing the data so we can move it to a working server.

There are two 2 TB disks, each with a 4 GB boot (root) partition, a swap partition, and a main partition. The boot array (/dev/md1) comes up fine.

At boot the big RAID (/dev/md3) fails to assemble, so the volume group and the LVs never get activated.

After booting into (repair filesystem) mode, we tried the following steps to reassemble the array:

mdadm --assemble /dev/md3 /dev/sda3 /dev/sdb3 
(repair filesystem) # cat /proc/mdstat
Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4]
md3 : active raid1 sda3[0] sdb3[1]
  1947090880 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda1[0] sdb1[1]
  4194240 blocks [2/2] [UU]

unused devices: <none> 
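For reference, a read-only way to double-check the metadata each member carries (mdstat shows super 1.2 on md3 but nothing on md1, i.e. old 0.90 metadata) is mdadm --examine; a minimal sketch using the same member partitions as above:

# Inspect the md superblock on each member (read-only, changes nothing)
mdadm --examine /dev/sda3 | grep -E 'Version|Creation Time|Data Offset'
mdadm --examine /dev/sdb3 | grep -E 'Version|Creation Time|Data Offset'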

Then I looked at the volume group backup stored in /etc/lvm/backup/vg00 and found the UUID of the physical volume that was no longer accessible.

pvcreate --uuid 12pHfn-ibCI-pS8a-YOcc-LVNy-UMyp-lg9tG2 /dev/md3
vgcfgrestore vg00
vgchange -a y vg00
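A note on those commands: newer LVM releases expect pvcreate --uuid to be paired with --restorefile so the rewritten PV label agrees with the backed-up metadata; a hedged sketch of that safer variant, using the same UUID and backup file as above:

# Rewrite the PV label using the metadata backup as a template
pvcreate --uuid 12pHfn-ibCI-pS8a-YOcc-LVNy-UMyp-lg9tG2 \
         --restorefile /etc/lvm/backup/vg00 /dev/md3
vgcfgrestore --file /etc/lvm/backup/vg00 vg00
vgchange -ay vg00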

After these steps the volume group was back and the LVs were there... Output of the commands:

(repair filesystem) # pvdisplay
 --- Physical volume ---
 PV Name               /dev/md3
 VG Name               vg00
 PV Size               1.81 TiB / not usable 4.00 MiB
 Allocatable           yes
 PE Size               4.00 MiB
 Total PE              475395
 Free PE               192259
 Allocated PE          283136
 PV UUID               12pHfn-ibCI-pS8a-YOcc-LVNy-UMyp-lg9tG2

(repair filesystem) # vgdisplay
  --- Volume group ---
  VG Name               vg00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.81 TiB
  PE Size               4.00 MiB
  Total PE              475395
  Alloc PE / Size       283136 / 1.08 TiB
  Free  PE / Size       192259 / 751.01 GiB
  VG UUID               e51mlr-zA1U-n0Of-k3zE-Q5PP-aULU-7rTXhC


(repair filesystem) # lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg00/usr
  VG Name                vg00
  LV UUID                SxlDZT-KYf9-q4jS-i5kz-FzRl-Xttk-ilJLuP
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                302.00 GiB
  Current LE             77312
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg00/var
  VG Name                vg00
  LV UUID                lTHXSr-wUea-gqLI-n2KX-OBEE-fGRt-JLYWbk
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                200.00 GiB
  Current LE             51200
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/vg00/home
  VG Name                vg00
  LV UUID                853Lhz-J6DX-DTgc-zleK-RHIb-XDOA-tHguo9
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                600.00 GiB
  Current LE             153600
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/vg00/srv
  VG Name                vg00
  LV UUID                7KKWlv-ADsx-WeUB-i8Vm-VJhL-w0nX-5MhmP2
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:3


(repair filesystem) # cat /etc/fstab
/dev/md1        /               ext3    acl,user_xattr       1 1
/dev/sda2       none            swap    sw
/dev/sdb2       none            swap    sw
/dev/vg00/usr   /usr            xfs     defaults             1 2
/dev/vg00/var   /var            xfs     defaults             1 2
/dev/vg00/home  /home           xfs     defaults             1 2
proc            /proc                proc       defaults              0 0
sysfs           /sys                 sysfs      noauto                0 0
debugfs         /sys/kernel/debug    debugfs    noauto                0 0
usbfs           /proc/bus/usb        usbfs      noauto                0 0
devpts          /dev/pts             devpts     mode=0620,gid=5       0 0

So after that I had high hopes of recovering the data and moving it to the new box. But when I try to mount a volume... (I am using the srv LV because it is empty or unimportant)

mkdir /mnt/srvb
mount /dev/mapper/vg00-srv /mnt/srvb

mount: you must specify the filesystem type
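Since fstab (above) claims these LVs are xfs, forcing the type explicitly and mounting read-only may be worth a try; a sketch, with the type taken from fstab rather than from anything detected:

# Try the type fstab declares, read-only so nothing gets written
mount -t xfs -o ro /dev/mapper/vg00-srv /mnt/srvb
# ...and the ext3 guess the same way
mount -t ext3 -o ro /dev/mapper/vg00-srv /mnt/srvb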

(repair filesystem) # fsck /dev/vg00/srv
fsck from util-linux-ng 2.16
e2fsck 1.41.14 (22-Dec-2010)
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/dm-3

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>

e2fsck -b 32768 /dev/mapper/vg00-srv

I kept trying...

(repair filesystem) # mke2fs -n -S /dev/mapper/vg00-srv
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=1 blocks
262144 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736

And... (note that mke2fs -S stays harmless here only because of -n; without -n it would actually overwrite the superblocks)

(repair filesystem) # e2fsck -b 32768 /dev/mapper/vg00-srv
e2fsck 1.41.14 (22-Dec-2010)
e2fsck: Bad magic number in super-block while trying to open /dev/mapper/vg00-srv

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

It fails with every filesystem type. I don't know why, but I worry there is a filesystem type mismatch: I seriously doubt they are xfs; they should be ext4 or ext3 (like the main partition, which works fine).

I'm not sure, because these were created automatically at the data center. The newer servers use ext4, but this server and its OS are older, so my guess would be ext3.

Running the above commands as fsck.ext3 still gives the same result.
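One way to settle the filesystem-type question without writing to the device is to probe it read-only; a minimal sketch (xfs_repair -n is check-only and repairs nothing):

# Probe the volume for any recognizable signature, read-only
file -s /dev/mapper/vg00-srv
blkid -p /dev/mapper/vg00-srv
# If it really were xfs, a dry-run check should at least find a superblock
xfs_repair -n /dev/mapper/vg00-srv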

Can anyone point me to the next step? I'll stress it again: I only need to access the data on the /home LV and try to copy it over to the new machine.

Many thanks in advance. I hope I've explained myself clearly and that someone can help me.

--- EDIT ---

rescue:~# lsblk --fs
NAME                   FSTYPE            LABEL    MOUNTPOINT
sda
|-sda1                 linux_raid_member
| `-md1                ext3              root
|-sda2                 swap
`-sda3                 linux_raid_member rescue:3
  `-md3                LVM2_member
    |-vg00-usr (dm-0)
    |-vg00-var (dm-1)
    |-vg00-home (dm-2)
    `-vg00-srv (dm-3)
sdb
|-sdb1                 linux_raid_member
| `-md1                ext3              root
|-sdb2                 swap
`-sdb3                 linux_raid_member rescue:3
  `-md3                LVM2_member
    |-vg00-usr (dm-0)
    |-vg00-var (dm-1)
    |-vg00-home (dm-2)
    `-vg00-srv (dm-3)

The command blkid /dev/vg00/srv returns nothing.
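blkid finding no signature at all can be cross-checked by hand: XFS stores the ASCII magic "XFSB" at offset 0 of the volume, and ext2/3/4 stores the magic 0xEF53 at byte offset 1080 (0x438). A read-only sketch:

# First 4 bytes: "XFSB" here would mean an XFS superblock
dd if=/dev/mapper/vg00-srv bs=4 count=1 2>/dev/null | hexdump -C
# Bytes at offset 1080: "53 ef" (little-endian) would mean ext2/3/4
dd if=/dev/mapper/vg00-srv bs=1 skip=1080 count=2 2>/dev/null | hexdump -C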

rescue:~# blkid
/dev/sda1: UUID="a15ef723-f84f-7aaa-1f51-fb8978ee93fe" TYPE="linux_raid_member"
/dev/sda2: UUID="804e745e-8bc4-47bc-bf2e-5e95c620d9ca" TYPE="swap"
/dev/sda3: UUID="3b31972d-e311-8292-4fc6-2add1afd58fe" UUID_SUB="f6d18087-8acd-3229-523d-a0a9960c1717" LABEL="rescue:3" TYPE="linux_raid_member"
/dev/sdb1: UUID="a15ef723-f84f-7aaa-1f51-fb8978ee93fe" TYPE="linux_raid_member"
/dev/sdb2: UUID="143565ee-04ac-4b20-93c2-4c81e4eb738e" TYPE="swap"
/dev/sdb3: UUID="3b31972d-e311-8292-4fc6-2add1afd58fe" UUID_SUB="1c8aa8bc-4a43-17c5-4b94-f56190083bdb" LABEL="rescue:3" TYPE="linux_raid_member"
/dev/md1: LABEL="root" UUID="635b7b96-6f32-420d-8431-074303eeee11" SEC_TYPE="ext2" TYPE="ext3"
/dev/md3: UUID="12pHfn-ibCI-pS8a-YOcc-LVNy-UMyp-lg9tG2" TYPE="LVM2_member"


rescue:~# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Wed Mar  4 01:03:28 2015
     Raid Level : raid1
     Array Size : 1947090880 (1856.89 GiB 1993.82 GB)
  Used Dev Size : 1947090880 (1856.89 GiB 1993.82 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Mar 14 19:58:45 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:3  (local to host rescue)
           UUID : 3b31972d:e3118292:4fc62add:1afd58fe
         Events : 450

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3

I can't boot from a Live CD, but I can boot into a Debian 64 rescue system, which seems more stable; the normal boot is now broken too, thanks to bad libraries from the botched kernel update.

I still can't find a way to mount the LVs from either environment.

Booting the rescue system also shows the following:

[    5.693921] md: Waiting for all devices to be available before autodetect
[    5.707605] md: If you don't use raid, use raid=noautodetect
[    5.719376] md: Autodetecting RAID arrays.
[    5.775853] md: invalid raid superblock magic on sdb3
[    5.786069] md: sdb3 does not have a valid v0.90 superblock, not importing!
[    5.821768] md: invalid raid superblock magic on sda3
[    5.831986] md: sda3 does not have a valid v0.90 superblock, not importing!
[    5.846010] md: Scanned 4 and added 2 devices.
[    5.855010] md: autorun ...
[    5.860707] md: considering sda1 ...
[    5.867974] md:  adding sda1 ...
[    5.874524] md:  adding sdb1 ...
[    5.881491] md: created md1
[    5.887204] md: bind<sdb1>
[    5.892738] md: bind<sda1>
[    5.898262] md: running: <sda1><sdb1>

I can mount /dev/md1 without any problem, but that is just the original boot/root partition, with no VG on it.
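Those "does not have a valid v0.90 superblock" lines are expected, for what it's worth: the in-kernel autodetect only understands 0.90 metadata, so a 1.2 array like md3 always has to be assembled from userspace. A sketch of doing that from the rescue shell, with the same names as above:

# Kernel autodetect skips 1.2 superblocks; assemble from userspace instead
mdadm --assemble /dev/md3 /dev/sda3 /dev/sdb3
vgchange -ay vg00    # then activate the VG so /dev/mapper/vg00-* appear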

Answer 1

For md3 it reports "Creation Time : Wed Mar 4 01:03:28 2015", which looks quite recent, not like the real creation date. Perhaps the array was re-created?
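If it was re-created (the "Name : rescue:3" field also suggests the array was put together from a rescue host), the thing to compare is the data offset: a 1.2 array re-created by a newer mdadm may place the data at a different offset, which would shift the whole PV and make every LV look like garbage. A read-only way to check, assuming the member names from the question:

# Where does each member say its data starts? A changed offset after a
# re-create would explain the missing filesystem signatures.
mdadm --examine /dev/sda3 | grep -i offset
mdadm --examine /dev/sdb3 | grep -i offset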

In any case, since it is RAID1, you should be able to run testdisk against one of the member partitions and let it search for filesystems. Try... testdisk /dev/sda3

Note that in a data recovery situation, writing to the original drives is a mistake unless you are very sure of the result.
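In that same spirit, it is safer to work from an image than from the raw disk: GNU ddrescue can make the copy, and testdisk can then be pointed at the image file. A sketch (the destination paths are placeholders and must live on a different disk):

# Image one member first; keep a map/log file for resumable copying
ddrescue /dev/sda3 /mnt/spare/sda3.img /mnt/spare/sda3.log
# Then let testdisk hunt for filesystems inside the image
testdisk /mnt/spare/sda3.img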
