I'm looking at /proc/<pid>/smaps for a program compiled with libasan (-fsanitize=address).
I see some massive sizes and I'm trying to understand what they mean.
For example:
2008fff7000-10007fff8000 rw-p 00000000 00:00 0
Size: 15032123396 kB
Rss: 142592 kB
Pss: 142592 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 142592 kB
Referenced: 142592 kB
Anonymous: 142592 kB
AnonHugePages: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me nr dd nh
The total Size across all mappings adds up to 21,475,147,836 kB.
I'm using Amazon Linux AMI release 2018.03 with kernel 4.4.19-29.55.amzn1.x86_64
Any ideas?
ASAN works by reserving one byte (known as a shadow byte) per 8 bytes of user memory. The shadow bytes are checked on every memory access and updated on every change in allocation status.
Processes on x86-64 Linux have about 2^47 bytes of addressable space, so ASAN reserves roughly 2^47 * 1/9 ~= 15 TB of address space for these shadow bytes.
That reservation is the mapping you are seeing; only the Rss figure (142592 kB here) is actually backed by physical memory.
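As a quick sanity check, the start of that mapping can be reproduced from ASAN's x86-64 shadow formula, shadow(addr) = (addr >> 3) + 0x7fff8000, assuming the default shadow offset of 0x7fff8000 (a compile-time default that can be overridden, so treat this as an assumption about your build). The high application region on x86-64 starts at 0x10007fff8000, and its shadow address is exactly where your huge mapping begins:

$ # assumes the default ASAN x86-64 shadow offset 0x7fff8000
$ printf '0x%012x\n' $(( (0x10007fff8000 >> 3) + 0x7fff8000 ))
0x02008fff7000

The end of the mapping, 0x10007fff8000, is where the high application memory itself begins, so the region you quoted is the "high shadow" area in ASAN's default layout. Only the shadow pages that are actually touched become resident, which is why Rss stays around 140 MB.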
Related:
I found this answer about kernel 'noncache' memory: https://unix.stackexchange.com/questions/62066/what-is-kernel-dynamic-memory-as-reported-by-smem
I have a similar question/problem, but I don't have zram as mentioned in the question above.
My question is: how can I break down the 'noncache' memory reported by the smem utility? I know huge pages are one part of it, but what else? After a fairly long search I couldn't find out how to do this, and I suspect I would have to dive into the kernel code to make sense of it, if I could at all.
Below you can find reports from smem, free and /proc/meminfo. 'Noncache' for kernel dynamic memory in smem's output is calculated as MemTotal - userspace - free - cache - hugepages in my version. My question is: which fields from /proc/meminfo should I sum up to get the Noncache figure of 704576 kB? Or, put differently, when you break down noncache kernel memory, which fields from /proc/meminfo contribute to it?
smem (https://www.selenic.com/smem/):
$ smem -wt
Area Used Cache Noncache Hugepages
firmware/hardware 0 0 0 0
kernel image 0 0 0 0
kernel dynamic memory 2117736 706600 704576 706560
userspace memory 783384 156516 626868 0
free memory 10665504 10665504 0 0
---------------------------------------------------------------------
13566624 11528620 1331444 706560
Free:
free -h
total used free shared buff/cache available
Mem: 12G 1.9G 10G 63M 842M 10G
Swap: 9G 513M 9.5G
And /proc/meminfo:
MemTotal: 13566624 kB
MemFree: 10668220 kB
MemAvailable: 11048420 kB
Buffers: 158544 kB
Cached: 649660 kB
SwapCached: 229900 kB
Active: 545392 kB
Inactive: 1059268 kB
Active(anon): 74132 kB
Inactive(anon): 787872 kB
Active(file): 471260 kB
Inactive(file): 271396 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 10485756 kB
SwapFree: 9960188 kB
Dirty: 2176 kB
Writeback: 0 kB
AnonPages: 621768 kB
Mapped: 154476 kB
Shmem: 65520 kB
KReclaimable: 54912 kB
Slab: 123356 kB
SReclaimable: 54912 kB
SUnreclaim: 68444 kB
KernelStack: 11280 kB
PageTables: 8456 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 16915788 kB
Committed_AS: 2715192 kB
VmallocTotal: 135290159040 kB
VmallocUsed: 29464 kB
VmallocChunk: 0 kB
Percpu: 16512 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 65536 kB
CmaFree: 63760 kB
HugePages_Total: 345
HugePages_Free: 1
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 706560 kB
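Not a full answer, but as a starting point you can sum the kernel allocations that /proc/meminfo does itemize; the field list below is my own guess at the obvious candidates, not something smem itself uses:

$ awk '/^(Slab|KernelStack|PageTables|Percpu|VmallocUsed):/ { print; sum += $2 }
       END { printf "sum: %d kB\n", sum }' /proc/meminfo

On the numbers above that comes to roughly 190000 kB, far short of the 704576 kB Noncache figure. The remainder is typically memory that drivers and other kernel code take directly from the page allocator (alloc_pages()), which is not itemized anywhere in /proc/meminfo; smem can only report it as the residual MemTotal - userspace - free - cache - hugepages.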
I'm facing a Java OOM issue when the system is loaded with traffic. I have allocated -Xmx10G and have been monitoring memory via JConsole; the heap doesn't even reach 10 GB, the error is thrown after about 5 GB.
It fails with the error below:
Native memory allocation (mmap) failed to map 518520832 bytes for committing reserved memory
OS:DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
uname:Linux 5.0.0-29-generic #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019 x86_64
libc:glibc 2.27 NPTL 2.27
rlimit: STACK 8192k, CORE 0k, NPROC 44326, NOFILE 4096, AS infinity
load average:5,55 4,44 3,46
/proc/meminfo:
MemTotal: 11397468 kB
MemFree: 133900 kB
MemAvailable: 24040 kB
Buffers: 600 kB
Cached: 188880 kB
SwapCached: 20644 kB
Active: 9980144 kB
Inactive: 1012376 kB
Active(anon): 9965700 kB
Inactive(anon): 996552 kB
Active(file): 14444 kB
Inactive(file): 15824 kB
Unevictable: 16 kB
Mlocked: 16 kB
SwapTotal: 1003516 kB
SwapFree: 92868 kB
Dirty: 180 kB
Writeback: 596 kB
AnonPages: 10782764 kB
Mapped: 171488 kB
Shmem: 158936 kB
KReclaimable: 45080 kB
Slab: 88608 kB
SReclaimable: 45080 kB
SUnreclaim: 43528 kB
KernelStack: 16800 kB
PageTables: 76584 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6702248 kB
Committed_AS: 23938048 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 1264 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 167872 kB
DirectMap2M: 11505664 kB
This is the error:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 518520832 bytes for committing reserved memory
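Note that the message is about a native mmap of 518520832 bytes (about 494 MiB) being refused by the kernel, not about the Java heap limit being reached. Two generic things worth checking against the data above (nothing JVM-specific in these commands): MemAvailable is almost exhausted (24040 kB), and Committed_AS (~23.9 GB) is far above CommitLimit (~6.7 GB), which matters if strict overcommit is enabled:

$ sysctl vm.overcommit_memory vm.overcommit_ratio
$ grep -E 'MemTotal|MemAvailable|SwapTotal|CommitLimit|Committed_AS' /proc/meminfo
$ # with vm.overcommit_memory=2 the kernel enforces (huge pages ignored for simplicity):
$ #   CommitLimit = SwapTotal + MemTotal * vm.overcommit_ratio / 100
$ #               = 1003516 + 11397468 * 50 / 100  ~= 6702250 kB

The computed value matches the CommitLimit in the dump, so vm.overcommit_ratio is presumably at its default of 50; whether the limit is actually enforced depends on the vm.overcommit_memory setting.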
Platform: two Oracle Linux 7.5 servers running an Oracle RAC database.
Database version: 12.2.0.1
Huge pages: enabled.
Problem description:
After I enabled huge pages, I found that used mem + free mem + buff/cache + available mem is much more than total mem on both servers.
Below is some memory data from one node.
As far as I know, huge page memory is unusable by other applications once the system has allocated it, even under memory pressure, so that part of memory cannot be counted as available.
After I turn off huge pages, the figures from the free -g command look normal.
Question: why does my server show so much available memory (35.66 GB out of 64 GB total) even though 55 GB is used? How does Linux calculate all these figures?
MEM DATA (huge pages enabled)
[#~]$ free -tm <<-----node-2
total used free shared buff/cache available
Mem: 63819 55146 1471 944 7201 36515
Swap: 65531 15 65516
Total: 129351 55162 66988
File name/source: node2_meminfo_19.11.06.1100.dat (Wed Nov 6 11:00:02 CST 2019)
MemTotal: 65351524 kB <<<<<<62GB in total
MemFree: 2137568 kB <<<<<<2GB free
MemAvailable: 38189832 kB
Buffers: 13832 kB
Cached: 4424852 kB
SwapCached: 0 kB
Active: 38255876 kB
Inactive: 1384672 kB
Active(anon): 4706984 kB
Inactive(anon): 94764 kB
Active(file): 33548892 kB <<<<<<32GB memory in OS filesystem cache
Inactive(file): 1289908 kB
Unevictable: 401128 kB
Mlocked: 401128 kB
SwapTotal: 67104764 kB
SwapFree: 67104764 kB
Dirty: 560 kB
Writeback: 0 kB
AnonPages: 4254508 kB
Mapped: 675800 kB
Shmem: 808328 kB
Slab: 1735924 kB
SReclaimable: 1612152 kB
SUnreclaim: 123772 kB
KernelStack: 18736 kB
PageTables: 238216 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 89421740 kB
Committed_AS: 7028572 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 518656 kB
VmallocChunk: 34358945788 kB
HardwareCorrupted: 0 kB
AnonHugePages: 1648640 kB
CmaTotal: 16384 kB
CmaFree: 10532 kB
HugePages_Total: 10116
HugePages_Free: 515
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 601952 kB
DirectMap2M: 21059584 kB
DirectMap1G: 45088768 kB
MEM DATA (huge pages disabled)
[# ~]$ free -g
total used free shared buff/cache available
Mem: 62 34 1 21 25 32
Swap: 63 0 63
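For reference, recent procps-ng versions of free derive those columns from /proc/meminfo roughly as in the sketch below (a simplification; details vary between versions). The key point is that 'available' (MemAvailable, the kernel's estimate of how much memory could be reclaimed for new work) overlaps with 'free' and with the reclaimable part of 'buff/cache', so used + free + buff/cache + available is expected to exceed the total; only used + free + buff/cache adds up to MemTotal.

$ awk '/^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SReclaimable):/ { m[$1] = $2 }
       END {
           buffcache = m["Buffers:"] + m["Cached:"] + m["SReclaimable:"]
           used      = m["MemTotal:"] - m["MemFree:"] - buffcache
           printf "used %d, free %d, buff/cache %d, available %d (kB)\n",
                  used, m["MemFree:"], buffcache, m["MemAvailable:"]
       }' /proc/meminfo

With huge pages enabled, the roughly 20 GB HugeTLB pool (10116 pages x 2048 kB) is neither free nor cache, so it lands in 'used', which is presumably why 'used' drops so much once huge pages are disabled.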
Hello everyone,
We use Wind River Linux 6.0 in our products (Yocto-based, kernel 3.10.55).
We are facing a strange problem: Inactive(file) in /proc/meminfo stays high, and even after we run "echo 3 > /proc/sys/vm/drop_caches" it is not reclaimed.
sh-4.2# echo 3 > /proc/sys/vm/drop_caches
sh-4.2# free
total used free shared buffers cached
Mem: 8000008 7777308 222700 0 196 871328
-/+ buffers/cache: 6905784 1094224
Swap: 0 0 0
sh-4.2# cat /proc/meminfo
MemTotal: 8000008 kB
MemFree: 220988 kB
Buffers: 288 kB
Cached: 872912 kB
SwapCached: 0 kB
Active: 2145984 kB
Inactive: 4779720 kB
Active(anon): 2126188 kB
Inactive(anon): 804404 kB
Active(file): 19796 kB
Inactive(file): 3975316 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2077148 kB
Mapped: 242972 kB
Shmem: 853588 kB
Slab: 145620 kB
SReclaimable: 121040 kB
SUnreclaim: 24580 kB
KernelStack: 10704 kB
PageTables: 10624 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4000004 kB
Committed_AS: 4084848 kB
VmallocTotal: 4294967296 kB
VmallocUsed: 27604 kB
VmallocChunk: 4294931084 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
The Inactive(file) memory can be reduced by malloc() calls or by copying a file to a ramdisk (/run or /var/volatile).
But it breaks attaching GDB to a running process whose memory use is larger than the free memory: the attach fails (GDB reports it cannot allocate memory and then quits, unable to attach to the process).
As we understand it, Inactive(file) pages are essentially free and available: memory that has not been used recently and can be reclaimed for other purposes. Yet here it causes GDB to fail, and possibly has other effects we have not run into yet.
What could cause this issue, and how can I reclaim this memory manually?
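For reference, when experimenting with manual reclaim it helps to sync first (drop_caches only discards clean page cache) and to compare the counters before and after:

sh-4.2# grep -E 'Active\(file\)|Inactive\(file\)|Cached|Shmem' /proc/meminfo
sh-4.2# sync
sh-4.2# echo 3 > /proc/sys/vm/drop_caches
sh-4.2# grep -E 'Active\(file\)|Inactive\(file\)|Cached|Shmem' /proc/meminfo

Also note that tmpfs pages (the Shmem counter, about 830 MB in the dump above) are counted as cache but cannot be dropped, and on a system without swap they cannot be reclaimed at all.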
I couldn't find any information on what [vectors] means in /proc/pid/smaps.
Here is a contiguous excerpt from one of my smaps files:
76eec000-76f11000 rw-p 0025b000 00:0c 32363615 /usr/lib/libQt5Quick.so.5.0.0
Size: 148 kB
Rss: 148 kB
Pss: 97 kB
Shared_Clean: 60 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 88 kB
Referenced: 148 kB
Anonymous: 88 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me ac
76f11000-76f14000 rw-p 00000000 00:00 0 [vectors]
Size: 12 kB
Rss: 12 kB
Pss: 12 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 12 kB
Referenced: 12 kB
Anonymous: 12 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me ac
I wonder whether [vectors] is also related to the preceding library, libQt5Quick.so.5.0.0.
I need to count the memory consumed by a process and by particular libraries within it, so I need to know whether I have to include the [vectors] mapping.
[vectors] indicates a page used by the vDSO mechanism. The vDSO is a way of accelerating common system calls by eliminating the overhead of switching into the kernel: the kernel shares a small region of memory containing the results of common system calls (think gettimeofday() and the like) that your user-space process can read directly.
You should not count it as memory used by your process or by any particular library, because the same pages are shared by all processes.
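Since the goal is per-library accounting anyway, here is a small sketch that sums Pss per mapping name from smaps (Pss already divides shared pages among the processes sharing them, which is usually what you want here; <pid> is a placeholder):

$ awk '/^[0-9a-f]+-[0-9a-f]+ / { name = (NF >= 6) ? $6 : "[anon]" }
       /^Pss:/                 { pss[name] += $2 }
       END { for (n in pss) printf "%8d kB  %s\n", pss[n], n }' /proc/<pid>/smaps | sort -rn

Special mappings such as [vectors], [heap] and [stack] show up under their own labels, so they are easy to exclude from the per-library totals.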