I have a problem with my Linux machine: the system now seems to run out of RAM easily (and trigger the OOM killer), even though it normally handles a similar load just fine. Checking free -tm shows that buff/cache is eating a lot of RAM. Normally that would be fine, because I want to cache disk IO, but it now appears that the kernel cannot release this memory even when the system is running out of memory.
The system currently looks like this:
total used free shared buff/cache available
Mem: 31807 15550 1053 14361 15203 1707
Swap: 993 993 0
Total: 32801 16543 1053
But when I try to force the caches to be dropped, I get the following:
$ grep -E "^MemTotal|^Cached|^Committed_AS" /proc/meminfo
MemTotal: 32570668 kB
Cached: 15257208 kB
Committed_AS: 47130080 kB
$ time sync
real 0m0.770s
user 0m0.000s
sys 0m0.002s
$ time echo 3 | sudo tee /proc/sys/vm/drop_caches
3
real 0m3.587s
user 0m0.008s
sys 0m0.680s
$ grep -E "^MemTotal|^Cached|^Committed_AS" /proc/meminfo
MemTotal: 32570668 kB
Cached: 15086932 kB
Committed_AS: 47130052 kB
So writing all dirty pages to disk and dropping all caches was only able to free about 130 MB out of 15 GB of cache? As you can see, I'm already running pretty heavy overcommit, so I really cannot afford to waste 15 GB of RAM on a cache that does nothing.
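For reference, a minimal sketch that quantifies the overcommit from the same /proc/meminfo fields grepped above (nothing beyond standard awk is assumed):
$ awk '/^MemTotal:/ {t=$2} /^Committed_AS:/ {c=$2}
       END {printf "Committed_AS = %.1f GiB (%.2fx MemTotal)\n", c/1048576, c/t}' /proc/meminfo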
Kernel slabtop also claims that less than 600 MB is in use:
$ sudo slabtop -sc -o | head
Active / Total Objects (% used) : 1825203 / 2131873 (85.6%)
Active / Total Slabs (% used) : 57745 / 57745 (100.0%)
Active / Total Caches (% used) : 112 / 172 (65.1%)
Active / Total Size (% used) : 421975.55K / 575762.55K (73.3%)
Minimum / Average / Maximum Object : 0.01K / 0.27K / 16.69K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
247219 94755 0% 0.57K 8836 28 141376K radix_tree_node
118864 118494 0% 0.69K 5168 23 82688K xfrm_state
133112 125733 0% 0.56K 4754 28 76064K ecryptfs_key_record_cache
$ cat /proc/version_signature
Ubuntu 5.4.0-80.90~18.04.1-lowlatency 5.4.124
$ cat /proc/meminfo
MemTotal: 32570668 kB
MemFree: 1009224 kB
MemAvailable: 0 kB
Buffers: 36816 kB
Cached: 15151936 kB
SwapCached: 760 kB
Active: 13647104 kB
Inactive: 15189688 kB
Active(anon): 13472248 kB
Inactive(anon): 14889144 kB
Active(file): 174856 kB
Inactive(file): 300544 kB
Unevictable: 117868 kB
Mlocked: 26420 kB
SwapTotal: 1017824 kB
SwapFree: 696 kB
Dirty: 200 kB
Writeback: 0 kB
AnonPages: 13765260 kB
Mapped: 879960 kB
Shmem: 14707664 kB
KReclaimable: 263184 kB
Slab: 601400 kB
SReclaimable: 263184 kB
SUnreclaim: 338216 kB
KernelStack: 34200 kB
PageTables: 198116 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 17303156 kB
Committed_AS: 47106156 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 67036 kB
VmallocChunk: 0 kB
Percpu: 1840 kB
HardwareCorrupted: 0 kB
AnonHugePages: 122880 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 9838288 kB
DirectMap2M: 23394304 kB
Can you explain what is causing Cached in /proc/meminfo to take about 50% of the system RAM without it being possible to free it? I know that shared_buffers of PostgreSQL with huge pages enabled would show up as Cached, but I'm not running PostgreSQL on this machine. The Shmem field in meminfo looks suspiciously big, but how do I figure out which processes are using it?
I guess it could be some misbehaving program, but how can I query the system to figure out which process is hogging that RAM? I currently have 452 processes / 2144 threads, so investigating all of them manually would be a huge task.
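One rough way to narrow this down, assuming a kernel new enough (4.5+) to expose the RssShmem field in /proc/<pid>/status, is to rank processes by their resident shared/tmpfs memory. Note that a page mapped by several processes is counted once per process, and tmpfs files that nobody currently has mapped do not show up here at all; this is only a sketch:
$ for s in /proc/[0-9]*/status; do
    awk -v f="$s" '/^Name:/ {n=$2} /^RssShmem:/ {printf "%10d kB  %-20s %s\n", $2, n, f}' "$s"
  done 2>/dev/null | sort -rn | head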
I also checked that the cause of this RAM usage is not (only?) System V shared memory:
$ ipcs -m | awk 'BEGIN{ sum=0 } { sum += $5 } END{print sum}'
1137593612
While the total byte count reported by ipcs is big, it is still "only" 1.1 GB.
I also found a similar problem, https://askubuntu.com/questions/762717/high-shmem-memory-usage, where high Shmem usage was caused by junk in a tmpfs-mounted directory. However, that doesn't seem to be the problem on my system either, as only 221 MB is used:
$ df -h -B1M | grep tmpfs
tmpfs 3181 3 3179 1% /run
tmpfs 15904 215 15689 2% /dev/shm
tmpfs 5 1 5 1% /run/lock
tmpfs 15904 0 15904 0% /sys/fs/cgroup
tmpfs 3181 1 3181 1% /run/user/1000
tmpfs 3181 1 3181 1% /run/user/1001
I found another answer explaining that files that used to exist on a tmpfs but have been deleted while a file handle is still open do not show up in the df output, yet still take up RAM. I found out that Google Chrome wastes roughly 1.6 GB on deleted files that it has forgotten(?) to close:
$ sudo lsof -n | grep "/dev/shm" | grep deleted | grep -o 'REG.*' | awk 'BEGIN{sum=0}{sum+=$3}END{print sum}'
1667847810
(Yes, the command above does not filter for chrome, but I also tested with the filter and it's pretty much only Google Chrome wasting my RAM through deleted files with open file handles.)
UPDATE: It looks like the real culprit is Shmem: 14707664 kB. Deleted tmpfs files explain 1.6 GB, System V shared memory explains 1.1 GB, and existing files on tmpfs about 220 MB. So I'm still missing roughly 11.8 GB somewhere.
At least with Linux kernel 5.4.124 it appears that Cached includes all of Shmem, which is why echo 3 > drop_caches cannot zero the Cached field even though it does drop the caches.
So the real question is: why is Shmem taking more than 10 GB of RAM when I wasn't expecting any?
UPDATE: I checked the top fields RSan ("RES Anonymous") and RSsh ("RES Shared") and they pointed to thunderbird and Eclipse. Closing Thunderbird did not release any Cached memory, but closing Eclipse freed 3.9 GB of Cached. I'm running Eclipse with the JVM flag -Xmx4000m, so it seems that JVM memory usage may show up as Cached! I would still prefer to find a way to map memory usage to processes instead of randomly closing processes and checking whether that happens to free any memory.
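A sketch of one such mapping, assuming you already have a suspect PID ($PID below is a placeholder): classify the shared-memory-backed mappings of that process from /proc/<pid>/smaps by where they come from:
$ sudo awk '
    /^[0-9a-f]+-[0-9a-f]+ / { map = $0 }              # remember the current mapping header line
    /^Rss:/ {
        if      (map ~ /\/dev\/shm\//) rss["tmpfs file"]   += $2
        else if (map ~ /SYSV/)         rss["SysV shm"]     += $2
        else if (map ~ /memfd:/)       rss["memfd"]        += $2
        else if (map ~ / [r-][w-][x-]s /) rss["other shared"] += $2
    }
    END { for (k in rss) printf "%-14s %10.1f MiB\n", k, rss[k]/1024 }
  ' /proc/$PID/smaps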
UPDATE: Filesystems that use tmpfs behind the scenes could also be increasing Shmem. I tested it like this:
$ df --output=used,source,fstype -B1M | grep -v '/dev/sd' | grep -v ecryptfs | tail -n +2 | awk 'BEGIN{sum=0}{sum+=$1}END{print sum}'
4664
So even if I only exclude filesystems backed by real block devices (my ecryptfs mounts sit on those block devices, too), I can only explain about 4.7 GB of the lost memory. And 4.3 GB of that is explained by squashfs mounts created by snapd, which to my knowledge do not use Shmem.
UPDATE: For some people the explanation has been GEM objects held by the GPU driver. There does not seem to be any standard interface to query these, but for my Intel integrated graphics I get the following:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | perl -npe 's#([0-9]+) bytes#sprintf("%.1f", $1/1024/1024)." MB"#e'
1166 shrinkable [0 free] objects, 776.8 MB
Xorg: 114144 objects, 815.9 MB (38268928 active, 166658048 inactive, 537980928 unbound, 0 closed)
calibre-paralle: 1 objects, 0.0 MB (0 active, 0 inactive, 32768 unbound, 0 closed)
Xorg: 595 objects, 1329.9 MB (0 active, 19566592 inactive, 1360146432 unbound, 0 closed)
chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed)
chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed)
chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed)
firefox: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
GLXVsyncThread: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
chrome: 1100 objects, 635.1 MB (0 active, 0 inactive, 180224 unbound, 0 closed)
chrome: 1100 objects, 635.1 MB (0 active, 665772032 inactive, 180224 unbound, 0 closed)
chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed)
[k]contexts: 3 objects, 0.0 MB (0 active, 40960 inactive, 0 unbound, 0 closed)
Those results do not make sense to me. If each of those lines were an actual memory allocation, the total would be hundreds of gigabytes!
Even if I assume the GPU driver simply reports some lines multiple times, I get the following:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | sort | uniq | perl -npe 's#([0-9]+) bytes#sprintf("%.1f", $1/1024/1024)." MB"#e'
1218 shrinkable [0 free] objects, 797.6 MB
calibre-paralle: 1 objects, 0.0 MB (0 active, 0 inactive, 32768 unbound, 0 closed)
chrome: 1134 objects, 645.0 MB (0 active, 0 inactive, 163840 unbound, 0 closed)
chrome: 1134 objects, 645.0 MB (0 active, 676122624 inactive, 163840 unbound, 0 closed)
chrome: 174 objects, 63.2 MB (0 active, 0 inactive, 66322432 unbound, 0 closed)
chrome: 20 objects, 1.2 MB (0 active, 0 inactive, 1241088 unbound, 0 closed)
firefox: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
GLXVsyncThread: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
[k]contexts: 2 objects, 0.0 MB (0 active, 24576 inactive, 0 unbound, 0 closed)
Renderer: 4844 objects, 7994.5 MB (0 active, 0 inactive, 8382816256 unbound, 0 closed)
Xorg: 114162 objects, 826.8 MB (0 active, 216350720 inactive, 537980928 unbound, 0 closed)
Xorg: 594 objects, 1329.8 MB (14794752 active, 4739072 inactive, 1360146432 unbound, 0 closed)
That is still way over the expected total, which should be somewhere in the 4-8 GB range. (The system currently has two seats logged in, so I expect to see two Xorg processes.)
UPDATE: Looking further into the GPU debug output, I now think the unbound numbers refer to virtual blocks that do not use actual RAM. If I subtract the unbound bytes, I get much more sensible GPU memory usage numbers:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects' | perl -npe 's#^(.*?): .*?([0-9]+) bytes.*?([0-9]+) unbound.*#sprintf("%s: %.1f", $1, ($2-$3)/1024/1024)." MB"#eg' | grep -v '0.0 MB'
1292 shrinkable [0 free] objects, 848957440 bytes
Xorg: 303.1 MB
Xorg: 32.7 MB
chrome: 667.5 MB
chrome: 667.5 MB
That would explain about 1.5 GB of RAM, which seems normal for the data I'm handling. I'm still missing multiple gigabytes somewhere!
UPDATE: I currently think the problem is indeed caused by deleted files that are backed by RAM. These could be caused by broken software leaking open file handles after deleting/discarding a file. When I run
$ sudo lsof -n | grep -Ev ' /home/| /tmp/| /lib/| /usr/' | grep deleted | grep -o " REG .*" | awk 'BEGIN{sum=0}{sum+=$3}END{print sum / 1024 / 1024 " MB"}'
4560.65 MB
(The manually collected list of path prefixes is for paths that are actually backed by real block devices. Because my root is backed by a real block device, I cannot simply list all block-backed mount points here. A smarter script could list all non-mount-point directories under the root plus all block-backed mounts with paths longer than /; a sketch of such a script follows below.)
That explains nearly 4.6 GB of lost RAM. Combined with the output of ipcs, the GPU RAM (given the unbound-memory assumption above) and the existing tmpfs usage, I'm currently still missing about 4 GB of Shmem somewhere!
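As for the smarter script mentioned above, a hedged sketch (assuming the RAM-backed mount points contain no spaces or regex metacharacters) could ask findmnt for every tmpfs/ramfs mount and sum the deleted-but-still-open files under those paths, instead of hand-picking prefixes to exclude:
$ mounts=$(findmnt -rn -t tmpfs,ramfs -o TARGET | paste -sd'|' -)
$ sudo lsof -n | grep deleted | grep -E " ($mounts)/" | grep -o 'REG.*' | awk 'BEGIN{sum=0}{sum+=$3}END{print sum / 1024 / 1024 " MB"}'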
Answer 1
I'm the author of the question above, and while a complete answer hasn't been found so far, here is the best-known explanation to date:
With modern Linux kernels, the Cached value in /proc/meminfo no longer describes the amount of disk cache. However, the kernel developers consider it too late to change this behaviour at this point. In practice, to actually measure the amount of disk cache in use, you should compute Cached - Shmem as an estimate. If you take the numbers from the original question, you get 15151936 − 14707664 kiB (values from /proc/meminfo), or 444272 kiB, so the system actually had only about 433 MiB of real disk cache. In that case it's obvious that dropping all the disk cache could not free much memory: the Cached field would decrease by only about 3% even if every bit of disk cache were dropped.
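For reference, the same estimate can be computed directly with a one-liner (a minimal sketch reading both fields straight from /proc/meminfo):
$ awk '/^Cached:/ {c=$2} /^Shmem:/ {s=$2}
       END {printf "estimated disk cache: %d kiB (%.0f MiB)\n", c-s, (c-s)/1024}' /proc/meminfo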
So the best guess is that some user-mode software is using lots of shared memory (typically tmpfs or shared memory mappings), which causes Cached to show high values even though the system actually has very little disk cache, and that in turn means the system is getting close to an out-of-memory situation. I think Committed_AS being far above MemTotal supports this theory.
Here is a (shortened) copy of the conclusions of the linux-mm thread linked above, in case the link stops working in the future:
Subject: Re: Why is Shmem included in Cached in /proc/meminfo?
From: Vlastimil Babka @ 2021-08-30 16:05 UTC
On 8/30/21 12:44 AM, Mikko Rantalainen wrote:
It's not immediately obvious from fs/proc/meminfo.c function meminfo_proc_show(), but the output of the Cached: field seems to always include all of the Shmem: field as well
However, if we changed it now, we might create even larger confusion. People looking at the output for the first time on a new kernel (IIRC the 'free' command also uses it) would no longer be misled. But people working with both old and new kernels would now have to take into account that it changed at some point... not good.
From: Khalid Aziz @ 2021-08-30 19:38 UTC
On Mon, 2021-08-30 at 20:26 +0300, Mikko Rantalainen wrote:
Of course, one possible solution is to keep "Cached" as it is and introduce a new "Cache" field with the real cache semantics (that is, it would include the sum of (Cached - Shmem) and memory backed RAM). That way system administrators would at least see two different fields with distinct values and go looking for the documentation.
I would recommend adding a new field. There is likely a large number of tools/scripts out there that already interpret the data in /proc/meminfo and possibly take action based on it; those tools would break if we changed the meaning of existing data. The downside of a new field is that it expands the output even further, but it also does not break existing tools.
Answer 2
I'm currently writing tools to help diagnose memory problems, based on the information in a document from RedHat that contains some formulas.
Concerning disk cache / tmpfs, my understanding is:
Cached = disk cache - swap cache - tmpfs memory usage
tmpfs can reside in swap, so we have to compute the actual in-memory usage of tmpfs first.
The simple solution:
shmem = shared memory segments + tmpfs ram
However, shared memory segments can also be in swap, and shmem does not seem to include huge-page shared memory segments (tested on kernels 5.4 and 5.15).
The more precise solution:
shmem = "4k-page sysvipc shm rss" + tmpfs ram usage
"4k sysvipc shm rss" is the sum of the memory used by shared memory segments with the standard page size (4k), i.e. without huge pages.
You can get the RSS usage of the memory segments from /proc/sysvipc/shm.
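For example, a minimal sketch that sums the rss and swap columns (the last two columns, reported in bytes) over all segments, assuming SysV IPC is enabled on the system:
$ awk 'NR > 1 {rss += $(NF-1); swap += $NF}
       END {printf "sysvipc shm: %.1f MiB resident, %.1f MiB in swap\n", rss/1048576, swap/1048576}' /proc/sysvipc/shm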
Whether a shm segment uses 4k or 2M pages does not seem to be exposed under /proc, but that information can be obtained by attaching to the shared memory segment and scanning its physical pages (/proc/kpageflags). I use this to add the number of huge shared memory pages to the output:
sudo ./memstats groups
[...]
Scanning shm...
Shared memory segments (MiB):
key id Size RSS 4k/2M SWAP USED% SID
============================================================================================
0 3 9 9 2442/0 0 100.02
0 2 9 10 0/5 0 104.86
[...]