Why is the load from stress distributed unevenly?


When I stress-test my system with stress, the CPU usage distribution looks off: CPU2's utilization is far higher than that of the other three CPUs. Is this normal?

I watched it for about 10 minutes and the readings stayed roughly the same throughout.

The stress-test command:

stress -c 4 -m 16 -d 16
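
To see which CPU each worker is actually running on while the test is going, something like this can be run in another terminal (a sketch assuming a procps-ng ps that supports the psr output field):

# psr = the processor each stress worker last ran on
ps -o pid,psr,pcpu,comm -C stress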

Output of top:

CPU0: 45.9% usr 53.8% sys  0.0% nic  0.0% idle  0.1% io  0.0% irq  0.0% sirq
CPU1: 44.0% usr 55.5% sys  0.0% nic  0.1% idle  0.1% io  0.0% irq  0.0% sirq
CPU2: 87.6% usr 11.9% sys  0.0% nic  0.0% idle  0.0% io  0.0% irq  0.3% sirq
CPU3: 55.8% usr 44.1% sys  0.0% nic  0.0% idle  0.0% io  0.0% irq  0.0% sirq
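
To average over a longer window than a single top refresh, per-CPU utilization can also be sampled with mpstat (an assumption on my part that sysstat is installed):

# one report per second for 60 seconds, covering every CPU
mpstat -P ALL 1 60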

Output of lscpu:

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  4
  On-line CPU(s) list:   0-3
Vendor ID:               GenuineIntel
  BIOS Vendor ID:        Intel(R) Corporation
  Model name:            Intel(R) Celeron(R) J6412 @ 2.00GHz
    BIOS Model name:     Intel(R) Celeron(R) J6412 @ 2.00GHz To Be Filled By O.E.M. CPU @ 1.9GHz
    BIOS CPU family:     15
    CPU family:          6
    Model:               150
    Thread(s) per core:  1
    Core(s) per socket:  4
    Socket(s):           1
    Stepping:            1
    CPU(s) scaling MHz:  100%
    CPU max MHz:         2000.0000
    CPU min MHz:         800.0000
    BogoMIPS:            3993.60
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art
                         arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr
                         pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp
                         ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1
                         xsaves split_lock_detect dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi umip waitpkg gfni rdpid movdiri movdir64b md_clear
                         flush_l1d arch_capabilities
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   128 KiB (4 instances)
  L1i:                   128 KiB (4 instances)
  L2:                    1.5 MiB (1 instance)
  L3:                    4 MiB (1 instance)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-3
Vulnerabilities:
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Mitigation; Clear CPU buffers; SMT disabled
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Vulnerable: No microcode
  Tsx async abort:       Not affected

Adding /proc/interrupts output:

cat /proc/interrupts
            CPU0       CPU1       CPU2       CPU3
   0:        122          0          0          0  IR-IO-APIC    2-edge      timer
   8:          0          0          0          0  IR-IO-APIC    8-edge      rtc0
   9:          0          0          0          0  IR-IO-APIC    9-fasteoi   acpi
  16:          0          0       1163          0  IR-IO-APIC   16-fasteoi   i801_smbus, mmc0
 120:          0          0          0          0  DMAR-MSI    0-edge      dmar0
 121:          0          0          0          0  DMAR-MSI    1-edge      dmar1
 122:          0          0          0          0  IR-PCI-MSI-0000:00:1c.0    0-edge      PCIe PME, aerdrv, pcie-dpc
 123:          0          0          0      25032  IR-PCI-MSI-0000:00:14.0    0-edge      xhci_hcd
 124:          0          0          0          0  IR-PCI-MSI-0000:00:17.0    0-edge      ahci[0000:00:17.0]
 125:          0          1          0          0  IR-PCI-MSIX-0000:01:00.0    0-edge      eth0
 126:          0          0        650          0  IR-PCI-MSIX-0000:01:00.0    1-edge      eth0-TxRx-0
 127:          0          0          0        650  IR-PCI-MSIX-0000:01:00.0    2-edge      eth0-TxRx-1
 128:        648          0          0          0  IR-PCI-MSIX-0000:01:00.0    3-edge      eth0-TxRx-2
 129:          0        658          0          0  IR-PCI-MSIX-0000:01:00.0    4-edge      eth0-TxRx-3
 NMI:          0          0          0          0   Non-maskable interrupts
 LOC:      12551      10285      21303      11548   Local timer interrupts
 SPU:          0          0          0          0   Spurious interrupts
 PMI:          0          0          0          0   Performance monitoring interrupts
 IWI:         68         21         68         36   IRQ work interrupts
 RTR:          0          0          0          0   APIC ICR read retries
 RES:      12513       3510      14961       3320   Rescheduling interrupts
 CAL:      12861       2485       1025        889   Function call interrupts
 TLB:         31         48         28         61   TLB shootdowns
 TRM:          0          0          0          0   Thermal event interrupts
 THR:          0          0          0          0   Threshold APIC interrupts
 DFR:          0          0          0          0   Deferred Error APIC interrupts
 MCE:          0          0          0          0   Machine check exceptions
 MCP:          4          5          5          5   Machine check polls
 ERR:          0
 MIS:          0
 PIN:          0          0          0          0   Posted-interrupt notification event
 NPI:          0          0          0          0   Nested posted-interrupt event
 PIW:          0          0          0          0   Posted-interrupt wakeup event
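
The counters above are cumulative since boot; to see which of them are actually growing during the test, the relevant rows can be polled, e.g.:

# re-read the rescheduling (RES) and local timer (LOC) rows once a second
watch -n 1 'grep -E "RES|LOC" /proc/interrupts'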

Answer 1

A 4-core Celeron CPU and a single NUMA node… so NUMA is clearly not the explanation here. With those stress parameters, you are creating a complex workload.

You have 4 processes doing pure computation, while a total of 32 processes are effectively doing memory management, either in the form of malloc()s (16 processes) or of disk-space allocation management and write caching (16 processes).
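
That split is visible per worker. As a sketch, assuming sysstat's pidstat is available, something like this shows %usr versus %system for each stress process, plus the CPU it last ran on:

# 5 one-second samples of user/kernel CPU time per stress worker
pidstat -u -C stress 1 5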

It looks like the scheduler may mostly be placing the compute jobs on core #2, while the other cores mainly handle the dual user-space/kernel-space work of memory and disk-cache allocation. Perhaps concentrating the compute processes onto a single core minimizes the need for L1 cache flushes, and lets them run while touching essentially nothing beyond the L1 cache. Since all four compute workers run the same tight-loop code, they may effectively be running a single shared copy of that code, one that fits easily into the core-local L1 cache.
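
One way to probe that guess, assuming perf is installed and the hardware exposes the generic L1 cache events, is to compare cache-miss rates on core #2 against one of the memory-management-heavy cores:

# count instructions and L1 data-cache misses on CPU 2 for 10 seconds
perf stat -C 2 -e instructions,L1-dcache-loads,L1-dcache-load-misses sleep 10
# then repeat with -C 0 for comparison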

All of your CPU cores are >99% busy (usr+sys reaches at least 99% on every core; e.g. on CPU0, 45.9% + 53.8% = 99.7%). Note that cores #0 and #1 spend more than half of their time running kernel code, probably related to memory management, disk caching and filesystem space allocation. Don't ignore the sys percentages: they are not a subset of the usr figures, but a share of total time spent, just like the usr percentages.

The io, irq and sirq percentages are all negligible as well, which indicates that essentially none of this activity ever touches the actual disk: it just keeps the disk write cache very, very busy.
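
This is also easy to confirm directly while stress runs; as a sketch (again assuming sysstat), per-device statistics would show how little actually reaches the disk:

# per-device utilization and write throughput, one report per second
iostat -dx 1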
