OOM keeps killing my VirtualBox session

For the past few weeks I have been hitting a serious problem: the out-of-memory killer on my Ubuntu host keeps killing my VirtualBox session (a Win10 instance). I only allocated 3 GB to Win10, and the host has 16 GB plus the same amount of swap [correction: (*)]. I don't even have to do anything in Windows for this to happen; I can just leave the login screen sitting there without logging in, and a few minutes later the VM gets reaped.

My guess is that this has nothing to do with VBox itself; it simply gets reaped because it is the biggest memory consumer.
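One way to sanity-check that guess (my own addition, not something from the original post) is to compare the kernel's OOM badness scores of the largest resident processes; whichever has the highest /proc/<pid>/oom_score is the most likely victim:

# List the five largest processes by resident memory together with their
# OOM badness score (a diagnostic sketch of my own, not from the original post)
for pid in $(ps -eo pid --sort=-rss | awk 'NR>1 && NR<=6'); do
    printf '%8s  %6s  %s\n' "$pid" "$(cat "/proc/$pid/oom_score")" "$(cat "/proc/$pid/comm")"
done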

But what is actually going on? Once physical memory usage reaches 15 GB: bang, kswapd0 churns for about 10 minutes and VBox gets killed. Swap is barely used at all (according to systemmonitor it stays below 1 GB used).
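To watch the buildup as it happens, a simple watcher (again my own sketch, not part of the original report) can log memory and swap usage every few seconds until the VM dies:

# Log memory and swap usage every 5 seconds, keeping a copy on disk
# (my own monitoring sketch, not from the original report)
while true; do
    date '+%F %T'
    free -h
    sleep 5
done | tee -a oom-watch.log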

Here is what dmesg has to say:

NetworkManager invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
CPU: 7 PID: 1415 Comm: NetworkManager Tainted: G        W  OE     5.19.0-26-generic #27-Ubuntu
Hardware name: Dell Inc. Latitude 7420/07MHG4, BIOS 1.14.1 12/18/2021
Call Trace:
<TASK>
show_stack+0x4e/0x61
dump_stack_lvl+0x4a/0x6d
dump_stack+0x10/0x18
dump_header+0x53/0x246
oom_kill_process.cold+0xb/0x10
out_of_memory+0x101/0x2f0
__alloc_pages_may_oom+0x112/0x1e0
__alloc_pages_slowpath.constprop.0+0x4ac/0x9b0
__alloc_pages+0x31d/0x350
alloc_pages+0x90/0x1c0
folio_alloc+0x1d/0x60
filemap_alloc_folio+0x8e/0xb0
__filemap_get_folio+0x1c7/0x3c0
filemap_fault+0x144/0x910
__do_fault+0x39/0x120
do_read_fault+0xf5/0x170
do_fault+0xa6/0x300
handle_pte_fault+0x117/0x240
__handle_mm_fault+0x696/0x740
handle_mm_fault+0xba/0x2a0
do_user_addr_fault+0x1c1/0x680
exc_page_fault+0x80/0x1b0
asm_exc_page_fault+0x27/0x30
RIP: 0033:0x562d5e907c45
Code: Unable to access opcode bytes at RIP 0x562d5e907c1b.
RSP: 002b:00007ffc2b269db0 EFLAGS: 00010286
RAX: 00000000ffffffff RBX: 0000562d60a9dbc0 RCX: 00000000000000ff
RDX: 0000000000143bb5 RSI: 0000562d60a9dc58 RDI: 0000562d60a9d898
RBP: 0000562d60a9d800 R08: 0000000000000187 R09: 0000000000000002
R10: 00007ffc2b269df0 R11: 48164b1643c927fa R12: 0000562d60a9d800
R13: 00007ffc2b269fb8 R14: 00007ffc2b269fc0 R15: 0000562d60a60d90
</TASK>
Mem-Info:
active_anon:488243 inactive_anon:2424656 isolated_anon:0
            active_file:1388 inactive_file:1294 isolated_file:90
            unevictable:38011 dirty:0 writeback:0
            slab_reclaimable:33372 slab_unreclaimable:72134
            mapped:1152675 shmem:677800 pagetables:28551 bounce:0
            kernel_misc_reclaimable:0
            free:66993 free_pcp:3217 free_cma:0
Node 0 active_anon:1952972kB inactive_anon:9698624kB active_file:5552kB inactive_file:5176kB unevictable:152044kB isolated(anon):0kB isolated(file):360kB mapped:4610700kB dirty:0kB writeback:0kB shmem:2711200kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:36912kB pagetables:114204kB all_unreclaimable? no
Node 0 DMA free:13312kB boost:0kB min:64kB low:80kB high:96kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1330 15590 15590 15590
Node 0 DMA32 free:62528kB boost:0kB min:5764kB low:7204kB high:8644kB reserved_highatomic:0KB active_anon:25924kB inactive_anon:614516kB active_file:380kB inactive_file:60kB unevictable:16416kB writepending:0kB present:1547020kB managed:1480888kB mlocked:2024kB bounce:0kB free_pcp:2332kB local_pcp:480kB free_cma:0kB
lowmem_reserve[]: 0 0 14259 14259 14259
Node 0 Normal free:192132kB boost:138936kB min:200688kB low:216124kB high:231560kB reserved_highatomic:2048KB active_anon:1927048kB inactive_anon:9084108kB active_file:5672kB inactive_file:5532kB unevictable:135628kB writepending:0kB present:14934016kB managed:14609576kB mlocked:19264kB bounce:0kB free_pcp:10644kB local_pcp:1396kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0 0
Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 2*2048kB (UM) 2*4096kB (M) = 13312kB
Node 0 DMA32: 208*4kB (UE) 164*8kB (UME) 98*16kB (UME) 56*32kB (UME) 52*64kB (UE) 77*128kB (UME) 129*256kB (UE) 21*512kB (ME) 0*1024kB 0*2048kB 0*4096kB = 62464kB
Node 0 Normal: 8168*4kB (UEH) 7514*8kB (UEH) 4380*16kB (UEH) 852*32kB (UEH) 3*64kB (H) 4*128kB (H) 0*256kB 2*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 191856kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
719398 total pagecache pages
37844 pages in swap cache
Swap cache stats: add 583122, delete 545193, find 135102/172662
Free swap  = 0kB
Total swap = 999420kB
4124257 pages RAM
0 pages HighMem/MovableOnly
97801 pages reserved
0 pages hwpoisoned
Tasks state (memory values in pages):
[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[    747]     0   747    76984      561   638976       42          -250 systemd-journal
[    803]     0   803     7075     1146    73728      140         -1000 systemd-udevd
...
[  97317] 10705 97317  1932577   924002  9097216        0           200 VirtualBoxVM
[  97621] 10705 97621   673626    17766  1384448        0           100 Isolated Web Co
[  97875] 10705 97875  2543969   418177  6213632        0           200 java
[  98221] 10705 98221    60478     2021   212992        0           200 ion.clangd.main
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-10705.slice/[email protected]/app.slice/app-virtualbox-f53ab59e62d342c192d9fe7f637f3855.scope,task=VirtualBoxVM,pid=97317,uid=10705
Out of memory: Killed process 97317 (VirtualBoxVM) total-vm:7730308kB, anon-rss:338076kB, file-rss:3293128kB, shmem-rss:64804kB, UID:10705 pgtables:8884kB oom_score_adj:200

(*) No, that turned out to be wrong:

$ swapon --show
NAME      TYPE      SIZE   USED PRIO
/dev/dm-2 partition 976M 963,4M   -2

Now it starts to make sense. I am simply using too much memory for other things, and I don't have nearly enough swap. And since the swap lives inside an encrypted LVM, I don't know how to grow it with gparted or anything else.
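For what it's worth (my own addition), since the swap here is an LVM logical volume rather than a plain partition, the LVM tools can show whether the volume group still has free extents to grow it into:

$ sudo vgs                       # the VFree column shows unallocated space in the volume group
$ sudo lvs -o lv_name,lv_size    # current sizes of the logical volumes, including swap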

Answer 1

I seem to have solved this by extending the swap partition. This can be done on a live system with a handful of commands (swapoff / lvresize / mkswap / swapon).
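A minimal sketch of that sequence, assuming the swap LV is the /dev/dm-2 shown above and that its LVM path is something like /dev/vgubuntu/swap_1 (both the path and the 8 GB target below are placeholders; check yours with lvs and make sure the volume group has enough free space first):

$ sudo swapoff /dev/dm-2                     # stop using the current swap LV
$ sudo lvresize -L 8G /dev/vgubuntu/swap_1   # grow the LV to the new target size
$ sudo mkswap /dev/vgubuntu/swap_1           # write a fresh swap signature on the enlarged LV
$ sudo swapon /dev/vgubuntu/swap_1           # enable it again

Note that mkswap writes a new UUID, so if /etc/fstab (or the initramfs resume configuration) refers to the swap space by UUID, that entry has to be updated afterwards.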

Here is a detailed walkthrough: https://www.thegeekdiary.com/how-to-extend-and-reduce-swap-space-on-lvm2-logical-volume/
