Linux pvmove failed after enabling mq_deadline, is there any way to recover?

So we were running a migration with pvmove at the office, and then the following happened:

Mar  8 12:26:51 v1 kernel: [ 5798.100321] BUG: kernel NULL pointer dereference, address: 0000000000000140
Mar  8 12:26:51 v1 kernel: [ 5798.101099] #PF: supervisor read access in kernel mode
Mar  8 12:26:51 v1 kernel: [ 5798.101716] #PF: error_code(0x0000) - not-present page
Mar  8 12:26:51 v1 kernel: [ 5798.102310] PGD 0 P4D 0 
Mar  8 12:26:51 v1 kernel: [ 5798.102904] Oops: 0000 [#1] SMP NOPTI
Mar  8 12:26:51 v1 kernel: [ 5798.103465] CPU: 48 PID: 1190 Comm: kworker/48:1H Not tainted 5.5.8-050508-generic #202003051633
Mar  8 12:26:51 v1 kernel: [ 5798.104071] Hardware name: ASUSTeK COMPUTER INC. RS700A-E9-RS12/KNPP-D32 Series, BIOS 1301 06/17/2019
Mar  8 12:26:51 v1 kernel: [ 5798.104693] Workqueue: kblockd blk_mq_run_work_fn
Mar  8 12:26:51 v1 kernel: [ 5798.105315] RIP: 0010:blk_mq_get_driver_tag+0x61/0x100
Mar  8 12:26:51 v1 kernel: [ 5798.105931] Code: 00 00 48 89 45 c0 8b 47 18 48 8b 7f 10 48 c7 45 d8 00 00 00 00 89 45 d0 b8 01 00 00 00 c7 45 c8 01 00 00 00 48 89 7d e0 75 50 <48> 8b 87 40 01 00 00 8b 40 04 39 43 24 73 07 c7 45 c8 03 00 00 00
Mar  8 12:26:51 v1 kernel: [ 5798.106653] RSP: 0018:ffffa92b9c59bcc0 EFLAGS: 00010246
Mar  8 12:26:51 v1 kernel: [ 5798.107371] RAX: 0000000000000001 RBX: ffff8d9b04805a00 RCX: ffffa92b9c59bda0
Mar  8 12:26:51 v1 kernel: [ 5798.108146] RDX: 0000000000000001 RSI: ffffa92b9c59bda0 RDI: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.108881] RBP: ffffa92b9c59bd00 R08: 0000000000000000 R09: ffff8d9b04805ee8
Mar  8 12:26:51 v1 kernel: [ 5798.109613] R10: 0000000000000000 R11: 0000000000000800 R12: ffff8d9b04805a00
Mar  8 12:26:51 v1 kernel: [ 5798.110397] R13: ffffa92b9c59bda0 R14: ffff8d9b04805a48 R15: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.111167] FS:  0000000000000000(0000) GS:ffff8d9b1ef80000(0000) knlGS:0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.111938] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar  8 12:26:51 v1 kernel: [ 5798.112693] CR2: 0000000000000140 CR3: 000000a7875fc000 CR4: 00000000003406e0
Mar  8 12:26:51 v1 kernel: [ 5798.113553] Call Trace:
Mar  8 12:26:51 v1 kernel: [ 5798.114351]  blk_mq_dispatch_rq_list+0xf9/0x550
Mar  8 12:26:51 v1 kernel: [ 5798.115121]  ? deadline_remove_request+0x4e/0xb0
Mar  8 12:26:51 v1 kernel: [ 5798.115862]  ? dd_dispatch_request+0x63/0x1f0
Mar  8 12:26:51 v1 kernel: [ 5798.116637]  blk_mq_do_dispatch_sched+0x67/0x100
Mar  8 12:26:51 v1 kernel: [ 5798.117404]  blk_mq_sched_dispatch_requests+0x12d/0x180
Mar  8 12:26:51 v1 kernel: [ 5798.118178]  __blk_mq_run_hw_queue+0x5a/0x110
Mar  8 12:26:51 v1 kernel: [ 5798.118944]  blk_mq_run_work_fn+0x1b/0x20
Mar  8 12:26:51 v1 kernel: [ 5798.119741]  process_one_work+0x1eb/0x3b0
Mar  8 12:26:51 v1 kernel: [ 5798.120534]  worker_thread+0x4d/0x400
Mar  8 12:26:51 v1 kernel: [ 5798.121369]  kthread+0x104/0x140
Mar  8 12:26:51 v1 kernel: [ 5798.122156]  ? process_one_work+0x3b0/0x3b0
Mar  8 12:26:51 v1 kernel: [ 5798.122960]  ? kthread_park+0x90/0x90
Mar  8 12:26:51 v1 kernel: [ 5798.123741]  ret_from_fork+0x22/0x40
Mar  8 12:26:51 v1 kernel: [ 5798.124524] Modules linked in: act_police cls_u32 sch_ingress sch_sfq sch_htb nls_utf8 isofs uas usb_storage xt_socket nf_socket_ipv4 nf_socket_ipv6 nf_defrag_ipv6 nf_defrag_ipv4 xt_mark iptable_mangle ebt_ip6 ebt_arp ebt_ip ebtable_broute ebtable_nat ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables x_tables bpfilter binfmt_misc dm_mirror dm_region_hash dm_log dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio input_leds ipmi_ssif amd64_edac_mod edac_mce_amd i2c_piix4 k10temp ipmi_si ipmi_devintf ipmi_msghandler mac_hid kvm_amd ccp kvm ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net vhost tap bonding lp parport br_netfilter bridge stp llc autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq multipath linear ast drm_vram_helper drm_ttm_helper ttm raid1 hid_generic raid0 drm_kms_helper usbhid crct10dif_pclmul syscopyarea crc32_pclmul
Mar  8 12:26:51 v1 kernel: [ 5798.124577]  bnx2x sysfillrect ghash_clmulni_intel sysimgblt fb_sys_fops aesni_intel crypto_simd mdio cryptd igb ahci hid glue_helper nvme libcrc32c drm dca libahci nvme_core i2c_algo_bit
Mar  8 12:26:51 v1 kernel: [ 5798.130380] CR2: 0000000000000140
Mar  8 12:26:51 v1 kernel: [ 5798.131449] ---[ end trace 2451c5dc4d61723b ]---
Mar  8 12:26:51 v1 kernel: [ 5798.246646] RIP: 0010:blk_mq_get_driver_tag+0x61/0x100
Mar  8 12:26:51 v1 kernel: [ 5798.248626] Code: 00 00 48 89 45 c0 8b 47 18 48 8b 7f 10 48 c7 45 d8 00 00 00 00 89 45 d0 b8 01 00 00 00 c7 45 c8 01 00 00 00 48 89 7d e0 75 50 <48> 8b 87 40 01 00 00 8b 40 04 39 43 24 73 07 c7 45 c8 03 00 00 00
Mar  8 12:26:51 v1 kernel: [ 5798.250301] RSP: 0018:ffffa92b9c59bcc0 EFLAGS: 00010246
Mar  8 12:26:51 v1 kernel: [ 5798.251725] RAX: 0000000000000001 RBX: ffff8d9b04805a00 RCX: ffffa92b9c59bda0
Mar  8 12:26:51 v1 kernel: [ 5798.253111] RDX: 0000000000000001 RSI: ffffa92b9c59bda0 RDI: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.254411] RBP: ffffa92b9c59bd00 R08: 0000000000000000 R09: ffff8d9b04805ee8
Mar  8 12:26:51 v1 kernel: [ 5798.255695] R10: 0000000000000000 R11: 0000000000000800 R12: ffff8d9b04805a00
Mar  8 12:26:51 v1 kernel: [ 5798.256925] R13: ffffa92b9c59bda0 R14: ffff8d9b04805a48 R15: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.258145] FS:  0000000000000000(0000) GS:ffff8d9b1ef80000(0000) knlGS:0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.259333] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar  8 12:26:51 v1 kernel: [ 5798.260587] CR2: 0000000000000140 CR3: 000000a7875fc000 CR4: 00000000003406e0

At this point, neither aborting nor continuing works:

# pvmove --abort
    Failed to copy one or more poll_operation_id members.

# pvmove --atomic /dev/nvme3n1 /dev/md127
 Detected pvmove in progress for /dev/nvme3n1
 Ignoring remaining command line arguments
 ABORTING: Mirror percentage check failed.

We successfully removed "pvmove0_mimage_0" and tried to move forward, but it still does not work.

  LV                                        VG      Attr       LSize   Pool  Origin Data%  Meta%        Move Log Cpy%Sync Convert Devices          
    Intel                                     vgnvme0 twi-aotz--   3.64t              90.90  12.86                                  Intel_tdata(0)   
    [Intel_tdata]                             vgnvme0 TwI-ao----   3.64t                                                            /dev/md127(78)   
    [Intel_tdata]                             vgnvme0 TwI-ao----   3.64t                                                            pvmove0(0)       
    [Intel_tmeta]                             vgnvme0 ewI-ao---- 900.00m                                                            /dev/md127(0)    
    [Intel_tmeta]                             vgnvme0 ewI-ao---- 900.00m                                                            pvmove0(0)       
    [Intel_tmeta]                             vgnvme0 ewI-ao---- 900.00m                                                            pvmove0(0)       
    [lvol0_pmspare]                           vgnvme0 ewI-a----- 324.00m                                                            pvmove0(0)       
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(0)  
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(84) 
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(228)
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(3) 

The system is running and we can see some activity on vgnvme0-pvmove0 (most likely because it is a mirror), but how do we abort the pvmove in this situation? This really seems to be completely undocumented. We do not want to restore from a backup image, because new data has already been written in the 3 hours since the migration.

The current recovery suggestion is to create a new thin pool, migrate to it volume by volume, stopping each running VM in turn, changing the software database to point at the new location, restarting the VM... and, once the migration has fully succeeded, removing the old thin pool. I cannot seem to find a way to mirror a single thin LV; is that possible? If we could mirror individual LVs, we could migrate all the thin LVs without any problem; we have LVs like vm1, vm2, vm3, and so on...
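
For illustration, a minimal sketch of what creating the new pool and a per-VM target volume could look like (newpool, vm1_new and the sizes are placeholders for this example, not commands actually run here):

# lvcreate -L 2T -T vgnvme0/newpool /dev/md127     # new thin pool, placed on the target PV
# lvcreate -V 100G -T vgnvme0/newpool -n vm1_new   # thin LV in the new pool for one VM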

Answer 1

So what we did was use VM managedsave to safely move each volume to a new thin pool (with a sparse dd), and then removed the old thin pool from the lvm configuration.
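
Per VM, the workaround looked roughly like the following; this is a sketch with placeholder names (vm1, vm1_new) rather than the exact command history, and it assumes libvirt-managed VMs:

# virsh managedsave vm1            # suspend the VM and save its state, so its LV is quiescent
# dd if=/dev/vgnvme0/vm1 of=/dev/vgnvme0/vm1_new bs=1M conv=sparse status=progress
# virsh start vm1                  # after updating the VM definition/database to point at vm1_new
# (repeat for vm2, vm3, ...)
# lvremove vgnvme0/Intel           # once every volume is verified, drop the old thin pool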

The main issue is that the pvmove was left in a "split-brain" state: the moved data had started being used, but because of --atomic the data still resided on the previous disk. There is no easy way to continue from that state, because pvmove apparently has a bug that blocks the copy in order to avoid overwriting data that has already been copied (data which, as I mentioned above, was for some reason in use after the reboot).
