I couldn't find any information on what [vectors] means in /proc/pid/smaps.
Here is a contiguous part of my smaps file:
76eec000-76f11000 rw-p 0025b000 00:0c 32363615 /usr/lib/libQt5Quick.so.5.0.0
Size: 148 kB
Rss: 148 kB
Pss: 97 kB
Shared_Clean: 60 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 88 kB
Referenced: 148 kB
Anonymous: 88 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me ac
76f11000-76f14000 rw-p 00000000 00:00 0 [vectors]
Size: 12 kB
Rss: 12 kB
Pss: 12 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 12 kB
Referenced: 12 kB
Anonymous: 12 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me ac
I wonder if [vectors] is also related to the preceding library, libQt5Quick.so.5.0.0.
I need to count the memory consumed by a process and by particular libraries in it, so I need to know whether I have to count the [vectors] entry.
[vectors] indicates a page used by the VDSO mechanism. VDSO is a way of accelerating common system calls by avoiding the overhead of switching into the kernel.
Basically, the kernel shares a bit of its memory containing the results of common system calls (think gettimeofday() and the like) where your user-space process can read it.
You should not count it as used memory, since the same memory is shared by all processes.
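If you want to convince yourself that this mapping comes from the kernel rather than from libQt5Quick, a quick check (it only reads /proc, and assumes an ARM system where the entry is called [vectors]; on x86_64 the equivalents are [vdso] and [vvar]) is to look for the same special entry in every process:
# The kernel-provided [vectors]/[vdso] mapping shows up in every process,
# regardless of which libraries that process has loaded:
sudo grep -E '\[(vectors|vdso)\]' /proc/[0-9]*/maps
Since every process has this entry, counting it towards any particular library would just double-count the same shared page.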
Related
I found this answer about noncache kernel memory: https://unix.stackexchange.com/questions/62066/what-is-kernel-dynamic-memory-as-reported-by-smem
I have a similar question/problem, but I don't have zram, which is mentioned in the question above.
My question is: how can I break down the 'noncache' memory reported by the smem utility? I know huge pages are one part of it, but what else? After some longer searching I couldn't find how to do that, and I guess I would have to dive deep into the kernel code to get some sense of it, if I even could.
Below you can find reports from smem, free and /proc/meminfo. In my version, Noncache for kernel dynamic memory in the smem output is calculated as memtotal - userspace - free - cache - hugepages. My question is: which fields from /proc/meminfo should I sum up to get the Noncache figure of 704576? Or, to put the same question differently: when you break down noncache kernel memory, which fields from /proc/meminfo contribute to it? (See the sketch after the /proc/meminfo dump below.)
smem (https://www.selenic.com/smem/):
$ smem -wt
Area                           Used      Cache   Noncache  Hugepages
firmware/hardware                 0          0          0          0
kernel image                      0          0          0          0
kernel dynamic memory       2117736     706600     704576     706560
userspace memory             783384     156516     626868          0
free memory                10665504   10665504          0          0
---------------------------------------------------------------------
                           13566624   11528620    1331444     706560
Free:
free -h
              total        used        free      shared  buff/cache   available
Mem:            12G        1.9G         10G         63M        842M         10G
Swap:            9G        513M        9.5G
And /proc/meminfo:
MemTotal: 13566624 kB
MemFree: 10668220 kB
MemAvailable: 11048420 kB
Buffers: 158544 kB
Cached: 649660 kB
SwapCached: 229900 kB
Active: 545392 kB
Inactive: 1059268 kB
Active(anon): 74132 kB
Inactive(anon): 787872 kB
Active(file): 471260 kB
Inactive(file): 271396 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 10485756 kB
SwapFree: 9960188 kB
Dirty: 2176 kB
Writeback: 0 kB
AnonPages: 621768 kB
Mapped: 154476 kB
Shmem: 65520 kB
KReclaimable: 54912 kB
Slab: 123356 kB
SReclaimable: 54912 kB
SUnreclaim: 68444 kB
KernelStack: 11280 kB
PageTables: 8456 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 16915788 kB
Committed_AS: 2715192 kB
VmallocTotal: 135290159040 kB
VmallocUsed: 29464 kB
VmallocChunk: 0 kB
Percpu: 16512 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 65536 kB
CmaFree: 63760 kB
HugePages_Total: 345
HugePages_Free: 1
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 706560 kB
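There is no single /proc/meminfo field that corresponds to smem's Noncache figure, but as a rough starting point (my own approximation, not smem's formula) you can add up the kernel allocations that meminfo does itemise and compare the sum with the 704576 kB above:
# Sum the kernel-side items that /proc/meminfo exposes (Slab already
# includes both SReclaimable and SUnreclaim):
awk '/^(Slab|KernelStack|PageTables|VmallocUsed|Percpu):/ { sum += $2 }
     END { print sum " kB" }' /proc/meminfo
With the values above this prints 189068 kB, well short of 704576 kB; the remainder is kernel memory (for example pages taken directly from the page allocator by drivers) that /proc/meminfo does not itemise at all.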
I'm looking at /proc/<pid>/smaps for a program compiled with libasan (-fsanitize=address).
I see some massive sizes and I'm trying to understand what they mean.
For example:
2008fff7000-10007fff8000 rw-p 00000000 00:00 0
Size: 15032123396 kB
Rss: 142592 kB
Pss: 142592 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 142592 kB
Referenced: 142592 kB
Anonymous: 142592 kB
AnonHugePages: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd wr mr mw me nr dd nh
The total Size adds up to 21,475,147,836 kB.
I'm using Amazon Linux AMI release 2018.03 with kernel 4.4.19-29.55.amzn1.x86_64
Any ideas?
ASAN works by reserving one byte (known as a shadow byte) per 8 bytes of user memory. The shadow bytes are checked on every memory access and updated on every change in allocation status.
Processes running on Linux on x86_64 have about 2^47 bytes of addressable space available, so ASAN maps around 2^47*1/9 ~= 15TB for these shadow bytes.
This is the mapping you're seeing.
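As a quick sanity check, some plain shell arithmetic (using only numbers already quoted above) shows that this estimate and the mapping's reported Size are in the same ballpark:
# One shadow byte per 8 bytes over roughly 2^47 bytes of address space,
# expressed in kB (prints roughly 15 billion kB, i.e. about 15 TB):
echo $(( (1 << 47) / 9 / 1024 )) kB
# The span of the mapping above, in kB, matches its reported Size:
echo $(( (0x10007fff8000 - 0x2008fff7000) / 1024 )) kB    # 15032123396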
Platform: two Oracle Linux 7.5 servers running an Oracle RAC database.
Database version: 12.2.0.1
Huge pages: enabled.
Problem description:
After I enabled huge pages, I found that used mem + free mem + buff/cache + available mem adds up to much more than the total mem on these two servers.
Below are some mem data from one node.
As far as I know, huge page memory is unusable by other applications once the system has allocated it, even when the system is under memory pressure, so this part of memory cannot be counted as available.
After I turned off huge pages, the figures from the free -g command look normal.
Question: why does my server show so much available mem (35.66 GB, with 64 GB total) despite 55 GB being used? How does Linux calculate all these figures? (See the arithmetic sketch after the outputs below.)
MEM DATA (huge pages enabled)
[#~]$ free -tm    <<----- node-2
          total     used     free   shared  buff/cache  available
Mem:      63819    55146     1471      944        7201      36515
Swap:     65531       15    65516
Total:   129351    55162    66988
File source: node2_meminfo_19.11.06.1100.dat, Wed Nov 6 11:00:02 CST 2019
MemTotal: 65351524 kB <<<<<<62GB in total
MemFree: 2137568 kB <<<<<<2GB free
MemAvailable: 38189832 kB
Buffers: 13832 kB
Cached: 4424852 kB
SwapCached: 0 kB
Active: 38255876 kB
Inactive: 1384672 kB
Active(anon): 4706984 kB
Inactive(anon): 94764 kB
Active(file): 33548892 kB <<<<<<32GB memory in OS filesystem cache
Inactive(file): 1289908 kB
Unevictable: 401128 kB
Mlocked: 401128 kB
SwapTotal: 67104764 kB
SwapFree: 67104764 kB
Dirty: 560 kB
Writeback: 0 kB
AnonPages: 4254508 kB
Mapped: 675800 kB
Shmem: 808328 kB
Slab: 1735924 kB
SReclaimable: 1612152 kB
SUnreclaim: 123772 kB
KernelStack: 18736 kB
PageTables: 238216 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 89421740 kB
Committed_AS: 7028572 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 518656 kB
VmallocChunk: 34358945788 kB
HardwareCorrupted: 0 kB
AnonHugePages: 1648640 kB
CmaTotal: 16384 kB
CmaFree: 10532 kB
HugePages_Total: 10116
HugePages_Free: 515
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 601952 kB
DirectMap2M: 21059584 kB
DirectMap1G: 45088768 kB
Huge pages disabled:
[# ~]$ free -g
          total     used     free   shared  buff/cache  available
Mem:         62       34        1       21          25         32
Swap:        63        0       63
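Not a full answer, but a quick arithmetic check against the node-2 outputs above (a sketch using only the numbers shown) suggests where the apparent overshoot comes from: used + free + buff/cache already adds up to almost exactly the total, and "available" is an estimate that overlaps free plus the reclaimable part of buff/cache, so adding it on top double-counts. The allocated HugeTLB pool, as far as I know, is reported as part of "used":
# free -tm columns with huge pages enabled (MB):
echo $(( 55146 + 1471 + 7201 ))   # 63818, i.e. almost exactly the 63819 total
# HugeTLB pool size from the meminfo dump (kB):
echo $(( 10116 * 2048 ))          # 20717568 kB, about 19.8 GiB counted as "used"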
Hello everyone,
We use Wind River Linux 6.0 in our products (based on Yocto; kernel 3.10.55).
We are facing a strange problem: Inactive(file) in /proc/meminfo remains high, and even when we do "echo 3 > /proc/sys/vm/drop_caches", this memory is not reclaimed.
sh-4.2# echo 3 > /proc/sys/vm/drop_caches
sh-4.2# free
             total       used       free     shared    buffers     cached
Mem:       8000008    7777308     222700          0        196     871328
-/+ buffers/cache:    6905784    1094224
Swap:            0          0          0
sh-4.2# cat /proc/meminfo
MemTotal: 8000008 kB
MemFree: 220988 kB
Buffers: 288 kB
Cached: 872912 kB
SwapCached: 0 kB
Active: 2145984 kB
Inactive: 4779720 kB
Active(anon): 2126188 kB
Inactive(anon): 804404 kB
Active(file): 19796 kB
Inactive(file): 3975316 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2077148 kB
Mapped: 242972 kB
Shmem: 853588 kB
Slab: 145620 kB
SReclaimable: 121040 kB
SUnreclaim: 24580 kB
KernelStack: 10704 kB
PageTables: 10624 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4000004 kB
Committed_AS: 4084848 kB
VmallocTotal: 4294967296 kB
VmallocUsed: 27604 kB
VmallocChunk: 4294931084 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
The Inactive(file) memory can be reduced by malloc() calls or by copying a file to a ramdisk (/run or /var/volatile).
But it affects GDB attaching to a running process that uses more memory than the free memory: the attach fails (GDB reports that it cannot allocate memory and then quits, unable to attach to the process).
As we understand it, Inactive(file) pages are effectively free and available: this is memory that has not been used recently and can be reclaimed for other purposes. But now it causes GDB to fail, and there may be other effects we have not yet run into.
What may cause this issue, and how can I reclaim this memory manually?
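One consistency check worth running against the dump above (an observation under the assumption of a 3.10-era kernel, not a definitive diagnosis): pages on the file LRU are normally ordinary page-cache pages, while tmpfs/Shmem pages sit on the anonymous LRU, so Active(file) + Inactive(file) should stay roughly within Buffers + Cached:
# File LRU vs. page cache, using the /proc/meminfo values above (kB):
echo $(( 19796 + 3975316 ))   # Active(file) + Inactive(file) = 3995112
echo $(( 288 + 872912 ))      # Buffers + Cached              =  873200
# The ~3 GB gap suggests that most of Inactive(file) is not ordinary page
# cache, which would be consistent with drop_caches being unable to shrink it.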
On CentOS 7.3.1611 (Linux kernel 3.10), my program is I/O bound and reads a disk device using buffered I/O. Normally the Linux kernel will use nearly all memory to cache disk data as buffers, but here it always leaves about 20 GB unused, and the kswapd daemon keeps reclaiming pages all the time.
top
top - 14:11:47 up 16 days, 2:42, 5 users, load average: 2.92, 3.18, 3.37
Tasks: 329 total, 2 running, 327 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.3 us, 7.7 sy, 0.0 ni, 83.3 id, 4.7 wa, 0.0 hi, 1.1 si, 0.0 st
KiB Mem : 13175558+total, 22444704 free, 26496388 used, 82814488 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 10222934+avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
181 root 20 0 0 0 0 S 13.0 0.0 1395:34 kswapd0
cat /proc/meminfo
MemTotal: 131755580 kB
MemFree: 21335956 kB
MemAvailable: 102232248 kB
Buffers: 75100216 kB
Cached: 6990768 kB
SwapCached: 0 kB
Active: 69447392 kB
Inactive: 38154608 kB
Active(anon): 26890392 kB
Inactive(anon): 709920 kB
Active(file): 42557000 kB
Inactive(file): 37444688 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 180 kB
Writeback: 0 kB
AnonPages: 25511156 kB
Mapped: 35036 kB
Shmem: 2090184 kB
Slab: 1835540 kB
SReclaimable: 1684472 kB
SUnreclaim: 151068 kB
KernelStack: 15536 kB
PageTables: 59556 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 65877788 kB
Committed_AS: 5750772 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 551220 kB
VmallocChunk: 34358888444 kB
HardwareCorrupted: 0 kB
AnonHugePages: 21006336 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 294840 kB
DirectMap2M: 11159552 kB
DirectMap1G: 124780544 kB
cat /proc/buddyinfo
Node 0, zone DMA 1 0 1 0 2 1 1 0 1 1 3
Node 0, zone DMA32 785 898 869 489 260 227 442 316 142 0 0
Node 0, zone Normal 71568 1575732 446338 39 9 0 0 0 0 0 0
sar -B -r 1
Linux 3.10.0-514.el7.x86_64 (PZ-R-01) 02/09/2018 _x86_64_ (32 CPU)
02:13:57 PM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
02:13:58 PM 139388.00 48.00 4622.00 0.00 399474.00 0.00 17844.00 17908.00 100.36
02:13:57 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
02:13:58 PM 20986792 110768788 84.07 75421044 7021576 5743480 4.36 69832168 38119340 200
It seems there are enough free pages, so why does kswapd still reclaim pages?
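kswapd is driven by per-zone watermarks rather than by the global amount of free memory: it is woken when a zone's free pages fall below the "low" watermark and keeps reclaiming until the "high" watermark is reached again. So a first, hedged check is to compare each zone's free pages with its watermarks:
# Per-zone free pages and the min/low/high watermarks (values in pages):
grep -B1 -A3 'pages free' /proc/zoneinfo
# The global knob the per-zone watermarks are derived from:
cat /proc/sys/vm/min_free_kbytes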
// Updated 2018/3/13
After upgrading the Linux kernel from 3.10 to 4.4, it can use almost all of the memory to cache disk data, so the problem must have been caused by kswapd's behaviour; Linux kernel 3.11 improved the page reclaim behaviour.
See details:
Linux_3.11#Memory_management