Xen: after upgrading from Debian 10 to 12 and rebooting, some logical volumes are missing

I updated my Debian Xen server from 10 to 11, then, without rebooting, to Debian 12, and then rebooted the server. Now only some of the virtual machines start.

vm06 runs fine, while vm04 does not (there are more, but I removed them from this question for readability).

In the folder /dev/vg0 only some of the volumes are now linked:

# ll /dev/vg0/
total 0
lrwxrwxrwx 1 root root 7 Aug 20 17:04 backup -> ../dm-2
lrwxrwxrwx 1 root root 7 Aug 20 17:04 root -> ../dm-0
lrwxrwxrwx 1 root root 8 Aug 20 17:04 vm06.docker-disk -> ../dm-10
# lvs
  LV                              VG  Attr       LSize    Pool Origin                 Data%  Meta%  Move Log Cpy%Sync Convert
  backup                          vg0 -wi-ao----    2,01t                                                                    
  root                            vg0 -wi-ao----   10,00g                                                                    
  vm04.matrix-disk                vg0 owi-i-s---  130,00g                         
  vm06.docker-disk                vg0 owi-a-s---  610,00g                                                                    
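
When the LVs still show up in lvs but their /dev/vg0 symlinks are gone, LVM can recreate the device nodes for active volumes. A minimal sketch (these are standard LVM commands, but untested against this exact setup, and they only help for LVs that are actually active):

```shell
# Recreate missing /dev/<vg>/* symlinks and /dev/mapper nodes for
# all known volume groups (only covers LVs with an active table).
vgscan --mknodes
# or, equivalently, for a single volume group:
vgmknodes vg0
```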

If I use lvdisplay, the missing volume vm04-matrix is still listed, as shown here:

# lvdisplay|grep Path|grep -v swap
  LV Path                /dev/vg0/root
  LV Path                /dev/vg0/backup
  LV Path                /dev/vg0/vm06.docker-disk
  LV Path                /dev/vg0/vm04.matrix-disk
# lvdisplay | awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'

indicating that the matrix disk should exist as /dev/dm-25.
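
To illustrate what this one-liner extracts, here is the same awk program run on a fabricated two-line sample of lvdisplay output (the name and device number are made up for the example):

```shell
# Feed a fabricated lvdisplay fragment through the same awk program;
# it pairs each "LV Name" with the dm-N node derived from the minor
# number in the "Block device" line.
printf '  LV Name                vm04.matrix-disk\n  Block device           253:25\n' |
awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'
# prints: dm-25 vm04.matrix-disk
```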


# cat /etc/lvm/backup/vg0

...

vg0 {
    id = "Cfe7Ii-rZBl-mEnH-tk5Z-q9WW-UyTk-3WstVn"
    seqno = 29269
    format = "lvm2"         # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192      # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0

    physical_volumes {

        pv0 {
            id = "KlmZUe-3FiK-VbBZ-R962-219A-GGAU-I3a5Nl"
            device = "/dev/md1" # Hint only

            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 15626736256  # 7,27677 Terabytes
            pe_start = 2048
            pe_count = 1907560  # 7,27676 Terabytes
        }
    }

    logical_volumes {

        ...

        vm06.docker-disk {
            id = "y3CSuy-z4gU-72Bd-678E-sYTi-Lkmi-dwAkgT"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1584654364  # 2020-03-19 22:46:04 +0100
            creation_host = "dom0-eclabs"
            segment_count = 3

            segment1 {
                start_extent = 0
                extent_count = 97280    # 380 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 263936
                ]
            }
            segment2 {
                start_extent = 97280
                extent_count = 7680 # 30 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 422912
                ]
            }
            segment3 {
                start_extent = 104960
                extent_count = 51200    # 200 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 1050889
                ]
            }
        }

        ...
        
        vm04.matrix-disk {
            id = "Tak3Zq-3dUU-SAJl-Hd5h-weTM-cHMR-qgWXeI"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1584774051  # 2020-03-21 08:00:51 +0100
            creation_host = "dom0-eclabs"
            segment_count = 2

            segment1 {
                start_extent = 0
                extent_count = 25600    # 100 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 531968
                ]
            }
            segment2 {
                start_extent = 25600
                extent_count = 7680 # 30 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 1002249
                ]
            }
        }

    }
}

dmsetup still lists the disk:

 # dmsetup info|grep matrix 
Name:              vg0-vm04.matrix--disk

But it does not seem to be linked to any /dev/dm-X:

# for i in /dev/dm-*; do echo $i; dmsetup info $i; done|grep matrix 

(this shows nothing).

I have already run pvcreate --restorefile lvm_backup_datei --uuid <uuid> <partition> in rescue mode, as explained here, and rebooted, but the problem remains the same.
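
For reference, that metadata-restore sequence usually looks like the following sketch. The UUID and device are taken from the /etc/lvm/backup/vg0 excerpt above; verify them against your own backup file before running anything, and run it from rescue mode with the VG deactivated:

```shell
# Recreate the PV label using the UUID and device recorded in the
# metadata backup (values copied from the backup file quoted above).
pvcreate --restorefile /etc/lvm/backup/vg0 \
         --uuid KlmZUe-3FiK-VbBZ-R962-219A-GGAU-I3a5Nl /dev/md1
# Restore the VG metadata from the same backup, then reactivate.
vgcfgrestore -f /etc/lvm/backup/vg0 vg0
vgchange -ay vg0
```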

I noticed that in the rescue system all LVM volumes are present, both in lsblk and under /dev/mapper/... (the rescue system uses kernel 6.4.7), but after booting the installed system, again only these few are visible:

├─sdb2                                          8:18   0   7,3T  0 part  
│ └─md1                                         9:1    0   7,3T  0 raid1 
│   ├─vg0-root                                253:0    0    10G  0 lvm   /
│   ├─vg0-swap                                253:1    0     4G  0 lvm   [SWAP]
│   ├─vg0-backup                              253:2    0     2T  0 lvm   /backup   
│   ├─vg0-vm06.docker--swap                   253:8    0     8G  0 lvm   
│   ├─vg0-vm06.docker--disk-real              253:9    0   610G  0 lvm   
│   │ ├─vg0-vm06.docker--disk                 253:10   0   610G  0 lvm   
│   │ └─vg0-snap--tmp--vm06.docker--disk      253:12   0   610G  0 lvm   
│   ├─vg0-snap--tmp--vm06.docker--disk-cow    253:11   0    16G  0 lvm   
│   │ └─vg0-snap--tmp--vm06.docker--disk      253:12   0   610G  0 lvm   

Update:

I managed to mount the missing volume in rescue mode and back up all the needed data to the backup partition, so that I could create a new volume, reinstall, and import the backed-up database.

Answer 1

If you manually run

lvchange -a y /dev/vg0

this will reactivate all volumes.
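
Afterwards you can check the activation state per LV; once active, the fifth character of the lvs attribute field should be a instead of i (device present but table inactive, as in the lvs output in the question):

```shell
# Show name and attribute string for each LV in vg0;
# attr character 5 is the state ('a' = active).
lvs -o lv_name,lv_attr vg0
```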

Also note that the kernel has been updated: /boot/vmlinuz-6.1.0-11-amd64 must be updated in the /etc/xen/x.conf file.

Additionally, the network device inside the VMs has changed from eth0 to enX0, which must be adjusted in each VM's network settings.
