Breakdown of the kernel's noncache dynamic memory - Linux

I found this answer about noncache kernel memory: https://unix.stackexchange.com/questions/62066/what-is-kernel-dynamic-memory-as-reported-by-smem
I have a similar question/problem, but I don't have zram as mentioned in the question above.
My question is: how can I break down the 'noncache' memory reported by the smem utility? I know hugepages are one part of it, but what else? Even after a longer search I couldn't find how to do that, and I guess I would have to dive deep into the kernel code to get some sense of it, if I even could.
Below you can find reports from smem, free and /proc/meminfo. Noncache for kernel dynamic memory in smem's output is calculated as memtotal - userspace - free - cache - hugepages in my version. My question is: which fields from /proc/meminfo should I sum up to get the Noncache figure of 704576? Or, to put the same question differently: when you break down noncache kernel memory, which fields from /proc/meminfo contribute to it?
smem (https://www.selenic.com/smem/):
$ smem -wt
Area                           Used      Cache   Noncache  Hugepages
firmware/hardware                 0          0          0          0
kernel image                      0          0          0          0
kernel dynamic memory       2117736     706600     704576     706560
userspace memory             783384     156516     626868          0
free memory                10665504   10665504          0          0
----------------------------------------------------------------------
                           13566624   11528620    1331444     706560
Free:
$ free -h
              total        used        free      shared  buff/cache   available
Mem:            12G        1.9G         10G         63M        842M         10G
Swap:            9G        513M        9.5G
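(As a cross-check against /proc/meminfo below: in recent procps-ng versions of free, which I assume is what is installed here, buff/cache is Buffers + Cached + SReclaimable and shared is Shmem. For example:

$ awk '/^(Buffers|Cached|SReclaimable):/ {s += $2} END {print s " kB"}' /proc/meminfo

With the figures below, 158544 + 649660 + 54912 = 863116 kB ≈ the 842M buff/cache column, and Shmem 65520 kB ≈ the 63M shared column.)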
And /proc/meminfo:
MemTotal: 13566624 kB
MemFree: 10668220 kB
MemAvailable: 11048420 kB
Buffers: 158544 kB
Cached: 649660 kB
SwapCached: 229900 kB
Active: 545392 kB
Inactive: 1059268 kB
Active(anon): 74132 kB
Inactive(anon): 787872 kB
Active(file): 471260 kB
Inactive(file): 271396 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 10485756 kB
SwapFree: 9960188 kB
Dirty: 2176 kB
Writeback: 0 kB
AnonPages: 621768 kB
Mapped: 154476 kB
Shmem: 65520 kB
KReclaimable: 54912 kB
Slab: 123356 kB
SReclaimable: 54912 kB
SUnreclaim: 68444 kB
KernelStack: 11280 kB
PageTables: 8456 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 16915788 kB
Committed_AS: 2715192 kB
VmallocTotal: 135290159040 kB
VmallocUsed: 29464 kB
VmallocChunk: 0 kB
Percpu: 16512 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 65536 kB
CmaFree: 63760 kB
HugePages_Total: 345
HugePages_Free: 1
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 706560 kB
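For what it's worth, summing the kernel-side allocations that /proc/meminfo does itemize gets nowhere near the figure. A rough sketch (Percpu only exists on newer kernels, and VmallocUsed can include non-RAM ioremap mappings):

$ awk '/^(Slab|KernelStack|PageTables|VmallocUsed|Percpu):/ {s += $2} END {print s " kB"}' /proc/meminfo

That is 123356 + 11280 + 8456 + 29464 + 16512 = 189068 kB here, far short of the 704576 kB Noncache figure, so most of the difference would have to be pages the kernel allocated directly via alloc_pages(), which no /proc/meminfo counter tracks.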

Related

JVM out-of-memory issue with error: Native memory allocation (mmap) failed to map X bytes for committing reserved memory

I'm facing a Java OOM issue when the system is loaded with traffic. I have allocated -Xmx=10G and have been monitoring the memory via JConsole; it doesn't even reach 10 GB, it throws the error after 5 GB.
It fails with the error below:
Native memory allocation (mmap) failed to map 518520832 bytes for committing reserved memory
OS:DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
uname:Linux 5.0.0-29-generic #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019 x86_64
libc:glibc 2.27 NPTL 2.27
rlimit: STACK 8192k, CORE 0k, NPROC 44326, NOFILE 4096, AS infinity
load average:5,55 4,44 3,46
/proc/meminfo:
MemTotal: 11397468 kB
MemFree: 133900 kB
MemAvailable: 24040 kB
Buffers: 600 kB
Cached: 188880 kB
SwapCached: 20644 kB
Active: 9980144 kB
Inactive: 1012376 kB
Active(anon): 9965700 kB
Inactive(anon): 996552 kB
Active(file): 14444 kB
Inactive(file): 15824 kB
Unevictable: 16 kB
Mlocked: 16 kB
SwapTotal: 1003516 kB
SwapFree: 92868 kB
Dirty: 180 kB
Writeback: 596 kB
AnonPages: 10782764 kB
Mapped: 171488 kB
Shmem: 158936 kB
KReclaimable: 45080 kB
Slab: 88608 kB
SReclaimable: 45080 kB
SUnreclaim: 43528 kB
KernelStack: 16800 kB
PageTables: 76584 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6702248 kB
Committed_AS: 23938048 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 1264 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 167872 kB
DirectMap2M: 11505664 kB
This is the error:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 518520832 bytes for committing reserved memory
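One thing visible in the dump itself: CommitLimit is derived as SwapTotal + MemTotal × vm.overcommit_ratio / 100, and Committed_AS already far exceeds it. A quick check (assuming the default overcommit_ratio of 50; whether the limit is actually enforced depends on vm.overcommit_memory):

$ sysctl vm.overcommit_memory vm.overcommit_ratio
$ awk '/^(MemTotal|SwapTotal):/ {v[$1] = $2} END {print v["SwapTotal:"] + v["MemTotal:"] * 50 / 100 " kB"}' /proc/meminfo

Here 1003516 + 11397468 × 50/100 ≈ 6702248 kB, matching the CommitLimit above, while Committed_AS is 23938048 kB and MemAvailable is down to 24040 kB, so a 518520832-byte (≈494 MiB) mmap has nowhere to go.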

System memory doesn't add up after enabling huge pages

Platform: two Oracle Linux 7.5 servers running an Oracle RAC database.
Database version: 12.2.0.1
Huge pages: enabled.
Problem description:
After I enabled huge pages, I found that used + free + buff/cache + available is much more than total memory on these two servers.
Below is some memory data from one node.
As far as I know, huge page memory is unusable by other applications once the system has allocated it, even under memory pressure, so that part of memory cannot count as available.
After I turned huge pages off, the figures from the free -g command look normal.
Question: why does my server show so much available memory (35.66 GB, with 64 GB total) despite 55 GB being used? How does Linux calculate all these figures?
Memory data (huge pages enabled):
[#~]$ free -tm     <<----- node-2
              total        used        free      shared  buff/cache   available
Mem:          63819       55146        1471         944        7201       36515
Swap:         65531          15       65516
Total:       129351       55162       66988
File: node2_meminfo_19.11.06.1100.dat (Wed Nov 6 11:00:02 CST 2019)
MemTotal: 65351524 kB <<<<<<62GB in total
MemFree: 2137568 kB <<<<<<2GB free
MemAvailable: 38189832 kB
Buffers: 13832 kB
Cached: 4424852 kB
SwapCached: 0 kB
Active: 38255876 kB
Inactive: 1384672 kB
Active(anon): 4706984 kB
Inactive(anon): 94764 kB
Active(file): 33548892 kB <<<<<<32GB memory in OS filesystem cache
Inactive(file): 1289908 kB
Unevictable: 401128 kB
Mlocked: 401128 kB
SwapTotal: 67104764 kB
SwapFree: 67104764 kB
Dirty: 560 kB
Writeback: 0 kB
AnonPages: 4254508 kB
Mapped: 675800 kB
Shmem: 808328 kB
Slab: 1735924 kB
SReclaimable: 1612152 kB
SUnreclaim: 123772 kB
KernelStack: 18736 kB
PageTables: 238216 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 89421740 kB
Committed_AS: 7028572 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 518656 kB
VmallocChunk: 34358945788 kB
HardwareCorrupted: 0 kB
AnonHugePages: 1648640 kB
CmaTotal: 16384 kB
CmaFree: 10532 kB
HugePages_Total: 10116
HugePages_Free: 515
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 601952 kB
DirectMap2M: 21059584 kB
DirectMap1G: 45088768 kB
Huge pages disabled:
[# ~]$ free -g
              total        used        free      shared  buff/cache   available
Mem:             62          34           1          21          25          32
Swap:            63           0          63
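Part of the answer is arithmetic: free computes used as total - free - buff/cache (63819 - 1471 - 7201 ≈ 55146 MiB above), and available is the kernel's MemAvailable estimate of free plus reclaimable page cache, so it overlaps the free and buff/cache columns; adding all four together is guaranteed to overshoot total. A rough reconstruction from the node-2 meminfo snapshot (a sketch of what MemAvailable roughly counts, ignoring watermarks):

$ awk '/^(MemFree|Active\(file\)|Inactive\(file\)|SReclaimable):/ {s += $2} END {print s " kB"}' /proc/meminfo

2137568 + 33548892 + 1289908 + 1612152 = 38588520 kB, in the same range as the reported MemAvailable of 38189832 kB. Meanwhile the 10116 × 2048 kB ≈ 20.7 GB of hugetlb pages sit inside "used", which is how used can be 55 GB while ~36 GB is still "available".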

/proc/meminfo Inactive(file) remains high and can't be reclaimed by drop_caches

Hello everyone,
We use Wind River Linux 6.0 in our products (based on Yocto, kernel 3.10.55).
We are facing a strange problem: Inactive(file) in /proc/meminfo remains high, and even when we do "echo 3 > /proc/sys/vm/drop_caches", the memory can't be reclaimed.
sh-4.2# echo 3 > /proc/sys/vm/drop_caches
sh-4.2# free
             total       used       free     shared    buffers     cached
Mem:       8000008    7777308     222700          0        196     871328
-/+ buffers/cache:    6905784    1094224
Swap:            0          0          0
sh-4.2# cat /proc/meminfo
MemTotal: 8000008 kB
MemFree: 220988 kB
Buffers: 288 kB
Cached: 872912 kB
SwapCached: 0 kB
Active: 2145984 kB
Inactive: 4779720 kB
Active(anon): 2126188 kB
Inactive(anon): 804404 kB
Active(file): 19796 kB
Inactive(file): 3975316 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2077148 kB
Mapped: 242972 kB
Shmem: 853588 kB
Slab: 145620 kB
SReclaimable: 121040 kB
SUnreclaim: 24580 kB
KernelStack: 10704 kB
PageTables: 10624 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4000004 kB
Committed_AS: 4084848 kB
VmallocTotal: 4294967296 kB
VmallocUsed: 27604 kB
VmallocChunk: 4294931084 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
The Inactive(file) memory can be reduced by malloc() calls or by copying a file to a ramdisk (/run or /var/volatile).
But it affects attaching GDB to a running process whose memory use is larger than the free memory: the attach fails (GDB reports it cannot allocate memory, then quits without attaching to the process).
As we understand it, Inactive(file) pages are effectively free and available: memory that has not been recently used and can be reclaimed for other purposes. But here it causes GDB to fail, and there may be other effects we haven't hit yet.
What may cause this issue, and how can I reclaim this memory manually?
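Two cross-checks that may help narrow this down (a diagnostic sketch; it assumes nothing beyond /proc/vmstat and tmpfs mounts existing). Note that drop_caches can only drop clean, unreferenced page-cache pages; it cannot touch anonymous memory, dirty pages, or tmpfs/shmem data, and with SwapTotal 0 the 853588 kB of Shmem here is pinned:

sh-4.2# grep -E 'nr_(active|inactive)_(anon|file)|nr_file_pages|nr_shmem' /proc/vmstat
sh-4.2# df -k -t tmpfs

Beyond that, Inactive(file) at 3975316 kB exceeding Cached at 872912 kB is itself anomalous; it suggests pages are held on the file LRU that no longer show up as cached file data, and on a vendor 3.10 kernel one plausible cause is a driver or out-of-tree patch holding extra page references that drop_caches cannot release.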

DirectMap1G displays a weirdly huge number

I haven't set up any hugepages on the system, so why does the direct mapping show such a weird value?
DirectMap4k: 251600 kB
DirectMap2M: 5941248 kB
DirectMap1G: 130023424 kB
Look at the cmdline: no hugepages are specified. The same goes for runtime hugepages; there are only 2M hugepage entries in the sysfs directory, and none are allocated.
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.11.0-26-generic root=UUID=7e5b93c9-ace5-4a9d-8623-c6718a2d720a ro console=ttyS0,9600 console=tty0 rootdelay=90 nomodes
# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
0
# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
0
# free -k
             total       used       free     shared    buffers     cached
Mem:     131911116   43668088   88243028          0     202272    2004796
-/+ buffers/cache:   41461020   90450096
Swap:      3999740          0    3999740
# cat /proc/meminfo
MemTotal: 131911116 kB
MemFree: 87704076 kB
Buffers: 202272 kB
Cached: 2004444 kB
SwapCached: 0 kB
Active: 38864132 kB
Inactive: 1784416 kB
Active(anon): 38441104 kB
Inactive(anon): 7924 kB
Active(file): 423028 kB
Inactive(file): 1776492 kB
Unevictable: 8384 kB
Mlocked: 8384 kB
SwapTotal: 3999740 kB
SwapFree: 3999740 kB
Dirty: 120 kB
Writeback: 0 kB
AnonPages: 38450956 kB
Mapped: 29576 kB
Shmem: 760 kB
Slab: 1441772 kB
SReclaimable: 184536 kB
SUnreclaim: 1257236 kB
KernelStack: 11632 kB
PageTables: 146568 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 69955296 kB
Committed_AS: 81453204 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 721460 kB
VmallocChunk: 34291709228 kB
HardwareCorrupted: 0 kB
AnonHugePages: 5980160 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 251600 kB
DirectMap2M: 5941248 kB
DirectMap1G: 130023424 kB
The DirectMap fields count how much memory the kernel's direct (linear) mapping of physical memory covers with 4 KB pages, 2 MB/4 MB pages, and 1 GB pages (where supported and used). They are unrelated to hugetlb pages and look the same when /proc/sys/vm/nr_hugepages is 0 (i.e. the kernel was booted without a hugepages parameter):
cat /proc/sys/vm/nr_hugepages
0
See "What do the HardwareCorrupted, DirectMap4k and DirectMap2M fields in /proc/meminfo mean?" for more details.
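The three counters together should roughly cover the whole physical address space. Summing them here:

$ awk '/^DirectMap/ {s += $2} END {print s " kB"}' /proc/meminfo

251600 + 5941248 + 130023424 = 136216272 kB, slightly more than MemTotal (131911116 kB), because the direct map is built from whole 4 kB/2 MB/1 GB blocks that also cover reserved regions and holes. A large DirectMap1G value simply means the CPU supports 1 GB mappings and the kernel used them for its linear map; it says nothing about hugetlb pages.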

How do I account for all of the memory in meminfo?

I'm trying to understand how meminfo tracks memory. Here's what I'm looking at:
MemTotal: 341596 kB
MemFree: 147288 kB
Buffers: 56 kB
Cached: 46752 kB
SwapCached: 0 kB
Active: 86928 kB
Inactive: 41384 kB
Active(anon): 81532 kB
Inactive(anon): 288 kB
Active(file): 5396 kB
Inactive(file): 41096 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 81532 kB
Mapped: 87648 kB
Shmem: 316 kB
Slab: 11568 kB
SReclaimable: 2580 kB
SUnreclaim: 8988 kB
KernelStack: 3232 kB
PageTables: 5480 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 170796 kB
Committed_AS: 2692436 kB
VmallocTotal: 327680 kB
VmallocUsed: 59244 kB
VmallocChunk: 259076 kB
Here are my questions:
1) How can I account for all 341596 kB of memory? Clearly 147288 kB are free. How can I account for the remaining memory? (Short of writing a program to solve the subset-sum problem, I mean...)
2) Total inactive/active file memory is 46492 kB, but Mapped is 87648 kB. According to the manual, Mapped is:
Mapped: files which have been mmaped, such as libraries
So how could more pages be devoted to mapped files than there are file pages themselves?
Due diligence: I've seen other posts on SO related to this subject, but none that explains how to account for all the "used" memory here. I've also found the Linux kernel documentation explaining /proc/meminfo; it just doesn't seem to have the complete information I want.
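One way to make question 1 concrete is to sum the fields that don't overlap: MemFree, the LRU lists (Active + Inactive cover both anon and file pages), Slab, KernelStack, PageTables and VmallocUsed. A sketch (on a 32-bit kernel, which the small VmallocTotal suggests this is, VmallocUsed also counts ioremap() mappings of device memory, so the sum can overshoot):

$ awk '/^MemTotal:/ {t = $2}
      /^(MemFree|Active|Inactive|Slab|KernelStack|PageTables|VmallocUsed):/ {s += $2}
      END {printf "%d kB itemized of %d kB total\n", s, t}' /proc/meminfo

For the dump above that gives 147288 + 86928 + 41384 + 11568 + 3232 + 5480 + 59244 = 355124 kB against 341596 kB total, i.e. slightly over for exactly that reason; anything genuinely missing from such a sum is memory the kernel allocated directly with alloc_pages(), which meminfo never itemizes.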
