We have two identically configured servers; one of them shows a high %sy while the other does not. This seems to indicate that the kernel is busy doing something.
Cpu(s): 28.1%us, 66.3%sy, 0.7%ni, 4.8%id, 0.0%wa, 0.2%hi, 0.0%si, 0.0%st
My question
How can I determine which kernel subsystems are causing such a high load?
Additional information
- Both systems run Ubuntu 12.04 LTS
- Both systems are EC2 instances running on AWS (Amazon)
The affected system logs stack-trace-like output with the following message:
Soft lockup message:
[33644527.529071] BUG: soft lockup - CPU#0 stuck for 23s! [monitorcron:31103]
[33644527.529087] Modules linked in: isofs ip6table_filter ip6_tables ipt_REJECT xt_state iptable_filter xt_REDIRECT xt_comment xt_multiport iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack ip_tables x_tables intel_rapl x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel ablk_helper cryptd lrw gf128mul glue_helper aes_x86_64
Stack trace
[33644527.529116] CPU: 0 PID: 31103 Comm: moncron Tainted: G W 3.13.0-34-generic #60~precise1-Ubuntu
[33644527.529120] task: ffff8800a565afe0 ti: ffff8800c6150000 task.ti: ffff8800c6150000
[33644527.529122] RIP: e030:[<ffffffff8175f32f>] [<ffffffff8175f32f>] _raw_spin_unlock+0x1f/0x30
[33644527.529133] RSP: e02b:ffff8800c6151c58 EFLAGS: 00000286
[33644527.529135] RAX: ffff8801aed728c0 RBX: ffff8800c6151cc0 RCX: ffff8801aed728c0
[33644527.529137] RDX: ffff8801aed728c0 RSI: 00000000a10ca10a RDI: ffff8801aed72898
[33644527.529139] RBP: ffff8800c6151c58 R08: 000000000000000a R09: 0000000000000000
[33644527.529141] R10: 0000000000000131 R11: 0000000000000130 R12: ffff8801aed72840
[33644527.529142] R13: ffff8801aed728c0 R14: ffff8801aed72898 R15: ffff8801aed72840
[33644527.529149] FS: 00007f37888a8700(0000) GS:ffff8801dee00000(0000) knlGS:0000000000000000
[33644527.529152] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[33644527.529153] CR2: 0000000000dac3b8 CR3: 00000000051e5000 CR4: 0000000000002660
[33644527.529156] Stack:
[33644527.529158] ffff8800c6151ca8 ffffffff811e0a98 ffff8800c6151cb8 ffff8801aed728c0
[33644527.529161] ffff8800c6151d00 ffff8800c6151cc0 ffff8801aee64900 0000000000007980
[33644527.529164] 0000000000007980 ffff8801aee64900 ffff8800c6151ce8 ffffffff811e0c18
[33644527.529168] Call Trace:
[33644527.529177] [<ffffffff811e0a98>] shrink_dentry_list+0x28/0xe0
[33644527.529181] [<ffffffff811e0c18>] shrink_dcache_parent+0x28/0x70
[33644527.529188] [<ffffffff81232257>] proc_flush_task_mnt.isra.15+0x77/0x170
[33644527.529194] [<ffffffff81235776>] proc_flush_task+0x56/0x70
[33644527.529200] [<ffffffff8106c803>] release_task+0x33/0x130
[33644527.529204] [<ffffffff8106cdcf>] wait_task_zombie+0x4cf/0x5f0
[33644527.529209] [<ffffffff8106cffb>] wait_consider_task.part.8+0x10b/0x180
[33644527.529213] [<ffffffff8106d0d5>] wait_consider_task+0x65/0x70
[33644527.529217] [<ffffffff8106d1e1>] do_wait+0x101/0x260
[33644527.529220] [<ffffffff8106e213>] SyS_wait4+0xa3/0x100
[33644527.529225] [<ffffffff8106bc10>] ? task_stopped_code+0x50/0x50
[33644527.529231] [<ffffffff8176853f>] tracesys+0xe1/0xe6
[33644527.529232] Code: 66 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 90 55 48 89 e5 e9 0a 00 00 00 66 83 07 02 5d c3 0f 1f 40 00 8b 37 f0 66 83 07 02 <f6> 47 02 01 74 ed e8 d0 74 fe ff 5d c3 0f 1f 40 00 66 66 66 66
I assume this is related, but I have not yet determined how or why (if it is at all).
Answer 1
First, you should try to determine whether this problem is caused by a single process/application or is system-wide. There may be tools (that I am not aware of) which can do this more directly, but without them I would suggest iterating over the processes that consume a relevant share of CPU time and stopping each one for a moment (kill -STOP $PID). If the problem is caused by one or a few processes, the %sy value should drop considerably while they are stopped.
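A minimal shell sketch of that approach, assuming you already have a candidate PID from top (the PID value and the 5-second pause below are placeholders, not taken from the question):

```bash
# Pause a suspect process and check whether %sy drops while it is stopped.
PID=31103                          # placeholder: a process seen consuming CPU in top

kill -STOP "$PID"                  # suspend the process without terminating it
sleep 5                            # let top take a few fresh samples
top -b -n 1 | grep -i 'cpu(s)'     # inspect the sy value with the process paused
kill -CONT "$PID"                  # resume the process when done
```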
If you find such a process, you can attach strace -c -p $PID to it for a few seconds to see which system calls it makes and how much time they consume. This may give you a hint about which parts of the kernel are involved (especially if you compare the output with the same process on the other system).
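A sketch of that strace step under the same assumption (placeholder PID; timeout is only used here to bound the sampling window):

```bash
PID=31103                 # placeholder: the process identified in the previous step

# Attach for ~10 seconds and print a per-syscall summary (-c) when detaching.
timeout 10 strace -c -p "$PID"

# The summary lists % time, seconds, usecs/call, calls, errors and syscall name.
# Running the same command against the equivalent process on the healthy server
# and comparing which syscalls dominate the time column hints at the kernel
# subsystem involved.
```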