On CentOS 7.3.1611 (Linux kernel 3.10), my program is I/O bound and reads a disk device using buffered I/O. Normally the Linux kernel will use all otherwise-free memory to cache disk data as buffers, but here it always leaves about 20 GB unused, and the kswapd daemon keeps reclaiming pages all the time.
top
top - 14:11:47 up 16 days, 2:42, 5 users, load average: 2.92, 3.18, 3.37
Tasks: 329 total, 2 running, 327 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.3 us, 7.7 sy, 0.0 ni, 83.3 id, 4.7 wa, 0.0 hi, 1.1 si, 0.0 st
KiB Mem : 13175558+total, 22444704 free, 26496388 used, 82814488 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 10222934+avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
181 root 20 0 0 0 0 S 13.0 0.0 1395:34 kswapd0
cat /proc/meminfo
MemTotal: 131755580 kB
MemFree: 21335956 kB
MemAvailable: 102232248 kB
Buffers: 75100216 kB
Cached: 6990768 kB
SwapCached: 0 kB
Active: 69447392 kB
Inactive: 38154608 kB
Active(anon): 26890392 kB
Inactive(anon): 709920 kB
Active(file): 42557000 kB
Inactive(file): 37444688 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 180 kB
Writeback: 0 kB
AnonPages: 25511156 kB
Mapped: 35036 kB
Shmem: 2090184 kB
Slab: 1835540 kB
SReclaimable: 1684472 kB
SUnreclaim: 151068 kB
KernelStack: 15536 kB
PageTables: 59556 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 65877788 kB
Committed_AS: 5750772 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 551220 kB
VmallocChunk: 34358888444 kB
HardwareCorrupted: 0 kB
AnonHugePages: 21006336 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 294840 kB
DirectMap2M: 11159552 kB
DirectMap1G: 124780544 kB
cat /proc/buddyinfo
Node 0, zone DMA 1 0 1 0 2 1 1 0 1 1 3
Node 0, zone DMA32 785 898 869 489 260 227 442 316 142 0 0
Node 0, zone Normal 71568 1575732 446338 39 9 0 0 0 0 0 0
sar -B -r 1
Linux 3.10.0-514.el7.x86_64 (PZ-R-01) 02/09/2018 _x86_64_ (32 CPU)
02:13:57 PM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
02:13:58 PM 139388.00 48.00 4622.00 0.00 399474.00 0.00 17844.00 17908.00 100.36
02:13:57 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
02:13:58 PM 20986792 110768788 84.07 75421044 7021576 5743480 4.36 69832168 38119340 200
It seems that there are enough free pages, so why does kswapd still reclaim pages?
// 2018/3/13 updated
After upgrading the Linux kernel from 3.10 to 4.4, it can use almost all of the memory to cache disk data, so the problem must have been caused by kswapd's behaviour; page reclaim was improved in Linux 3.11.
See details:
Linux_3.11#Memory_management
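Not part of the original post, but for anyone hitting the same thing on 3.10: kswapd is woken when a zone's free pages fall below its "low" watermark and keeps reclaiming until the zone is back above "high", and it can also be kept busy by high-order allocation requests when the buddy lists are as fragmented as the buddyinfo above shows. A minimal Python sketch (assuming the usual /proc/zoneinfo layout) to compare each zone's free pages with its watermarks:

#!/usr/bin/env python3
"""Compare each zone's free pages against its min/low/high watermarks.

kswapd is woken when a zone's free pages drop below the "low" watermark
and keeps reclaiming until the zone is back above "high", so a zone
hovering around its watermarks explains a permanently busy kswapd even
when the global free counter looks large."""
import re

def read_zoneinfo(path="/proc/zoneinfo"):
    zones, cur = [], None
    with open(path) as f:
        for line in f:
            m = re.match(r"Node (\d+), zone\s+(\S+)", line)
            if m:
                cur = {"node": int(m.group(1)), "zone": m.group(2)}
                zones.append(cur)
                continue
            if cur is None:
                continue
            m = re.match(r"\s*pages free\s+(\d+)", line)
            if m:
                cur["free"] = int(m.group(1))
                continue
            m = re.match(r"\s*(min|low|high)\s+(\d+)$", line)
            if m:
                cur[m.group(1)] = int(m.group(2))
    return zones

for z in read_zoneinfo():
    if {"free", "min", "low", "high"} <= z.keys():
        state = "ok" if z["free"] >= z["high"] else "below high -> kswapd reclaiming"
        print(f"Node {z['node']} zone {z['zone']:>8}: free={z['free']} "
              f"min={z['min']} low={z['low']} high={z['high']} ({state})")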
Related
I found this answer about noncache memory for the kernel: https://unix.stackexchange.com/questions/62066/what-is-kernel-dynamic-memory-as-reported-by-smem
I have a similar question/problem, but I don't have zram, which is mentioned in the question above.
My question is: how can I break down the 'noncache' memory reported by the smem utility? I know hugepages are one part of it, but what else? Even after a longer search I couldn't find out how to do that, and I guess I would have to dive deep into the kernel code to get some sense of it, if I even could.
Below you can find reports from smem, free, and /proc/meminfo. Noncache for kernel dynamic memory in the smem output is calculated as memtotal - userspace - free - cache - hugepages in my version. My question is: which fields from /proc/meminfo should I sum up to get the Noncache figure of 704576? Or, to phrase it differently: when you break down noncache kernel memory, which /proc/meminfo fields contribute to it?
smem (https://www.selenic.com/smem/):
$ smem -wt
Area Used Cache Noncache Hugepages
firmware/hardware 0 0 0 0
kernel image 0 0 0 0
kernel dynamic memory 2117736 706600 704576 706560
userspace memory 783384 156516 626868 0
free memory 10665504 10665504 0 0
---------------------------------------------------------------------
13566624 11528620 1331444 706560
Free:
free -h
total used free shared buff/cache available
Mem: 12G 1.9G 10G 63M 842M 10G
Swap: 9G 513M 9.5G
And /proc/meminfo:
MemTotal: 13566624 kB
MemFree: 10668220 kB
MemAvailable: 11048420 kB
Buffers: 158544 kB
Cached: 649660 kB
SwapCached: 229900 kB
Active: 545392 kB
Inactive: 1059268 kB
Active(anon): 74132 kB
Inactive(anon): 787872 kB
Active(file): 471260 kB
Inactive(file): 271396 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 10485756 kB
SwapFree: 9960188 kB
Dirty: 2176 kB
Writeback: 0 kB
AnonPages: 621768 kB
Mapped: 154476 kB
Shmem: 65520 kB
KReclaimable: 54912 kB
Slab: 123356 kB
SReclaimable: 54912 kB
SUnreclaim: 68444 kB
KernelStack: 11280 kB
PageTables: 8456 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 16915788 kB
Committed_AS: 2715192 kB
VmallocTotal: 135290159040 kB
VmallocUsed: 29464 kB
VmallocChunk: 0 kB
Percpu: 16512 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 65536 kB
CmaFree: 63760 kB
HugePages_Total: 345
HugePages_Free: 1
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 706560 kB
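A rough sketch of how far /proc/meminfo can take you, assuming Python and the field names shown above: sum the kernel-side counters that meminfo does itemise and compare the result with smem's Noncache figure. Whatever is left over is kernel memory that meminfo simply does not track (for example, pages taken straight from the page allocator by drivers), which is why the figure cannot be reconstructed exactly from /proc/meminfo alone.

#!/usr/bin/env python3
"""Rough breakdown of smem's 'kernel dynamic memory / Noncache' figure
from the counters that /proc/meminfo itemises (all values in kB)."""

def meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])   # values are in kB
    return info

m = meminfo()

# Kernel allocations that meminfo accounts for explicitly.
tracked = {
    "SUnreclaim":  m.get("SUnreclaim", 0),    # unreclaimable slab
    "KernelStack": m.get("KernelStack", 0),   # kernel stacks of all tasks
    "PageTables":  m.get("PageTables", 0),    # page-table pages
    "Percpu":      m.get("Percpu", 0),        # per-CPU allocator
    "VmallocUsed": m.get("VmallocUsed", 0),   # vmalloc (may include ioremap)
    "Bounce":      m.get("Bounce", 0),        # bounce buffers
}

for name, kb in tracked.items():
    print(f"{name:>12}: {kb} kB")
print(f"{'total':>12}: {sum(tracked.values())} kB")
# Anything beyond this total is kernel memory not itemised in /proc/meminfo.

With the values above, these itemised counters add up to roughly 134 MB (134156 kB) of the 704576 kB Noncache figure, so most of it sits in the untracked remainder; reclaimable slab appears to land in smem's Cache column here rather than in Noncache.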
I'm facing a Java OOM issue when the system is loaded with traffic. I have allocated -Xmx=10G and I have been monitoring the memory via JConsole; the heap doesn't even reach 10 GB, it throws the error after about 5 GB.
It fails with the error below:
Native memory allocation (mmap) failed to map 518520832 bytes for committing reserved memory
OS:DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
uname:Linux 5.0.0-29-generic #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019 x86_64
libc:glibc 2.27 NPTL 2.27
rlimit: STACK 8192k, CORE 0k, NPROC 44326, NOFILE 4096, AS infinity
load average:5,55 4,44 3,46
/proc/meminfo:
MemTotal: 11397468 kB
MemFree: 133900 kB
MemAvailable: 24040 kB
Buffers: 600 kB
Cached: 188880 kB
SwapCached: 20644 kB
Active: 9980144 kB
Inactive: 1012376 kB
Active(anon): 9965700 kB
Inactive(anon): 996552 kB
Active(file): 14444 kB
Inactive(file): 15824 kB
Unevictable: 16 kB
Mlocked: 16 kB
SwapTotal: 1003516 kB
SwapFree: 92868 kB
Dirty: 180 kB
Writeback: 596 kB
AnonPages: 10782764 kB
Mapped: 171488 kB
Shmem: 158936 kB
KReclaimable: 45080 kB
Slab: 88608 kB
SReclaimable: 45080 kB
SUnreclaim: 43528 kB
KernelStack: 16800 kB
PageTables: 76584 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6702248 kB
Committed_AS: 23938048 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 1264 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 167872 kB
DirectMap2M: 11505664 kB
This is the error:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 518520832 bytes for committing reserved memory
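For context (my reading of the data, not from the original report): Committed_AS (23938048 kB) is far above CommitLimit (6702248 kB) and MemAvailable is down to about 24 MB, so there is nowhere to put another 518 MB commit on a box with ~11 GB of RAM and 1 GB of swap. CommitLimit itself is derived from MemTotal, SwapTotal and vm.overcommit_ratio (default 50, enforced only when vm.overcommit_memory = 2); a quick Python check of the arithmetic against the numbers above:

# CommitLimit = SwapTotal + (RAM pages * overcommit_ratio / 100), computed in pages
PAGE_KB = 4
mem_total_kb  = 11397468    # MemTotal from the /proc/meminfo above
swap_total_kb = 1003516     # SwapTotal
overcommit_ratio = 50       # default /proc/sys/vm/overcommit_ratio

ram_pages = mem_total_kb // PAGE_KB
commit_limit_kb = (ram_pages * overcommit_ratio // 100) * PAGE_KB + swap_total_kb
print(commit_limit_kb)               # 6702248 kB, matching CommitLimit above
print(23938048 > commit_limit_kb)    # True: Committed_AS already exceeds it
print(518520832 // 1024)             # 506368 kB: the commit the JVM asked for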
Platform: two Oracle Linux 7.5 servers running an Oracle RAC database.
Database version: 12.2.0.1
Huge pages: enabled.
Problem description:
After I enabled huge pages, I found that used + free + buff/cache + available memory adds up to much more than the total memory on these two servers.
Below is some memory data from one node.
As far as I know, huge page memory is unusable by other applications once the system has allocated it, even when the system is under memory pressure, so this part of memory cannot be counted as available.
After I turn off huge pages, the figures from the free -g command look normal.
Question: Why does my server show so much available memory (35.66 GB, out of 64 GB total) when 55 GB is used? How does Linux calculate all these figures?
MEM DATA ----- huge pages enabled
[#~]$ free -tm <<-----node-2
total used free shared buff/cache available
Mem: 63819 55146 1471 944 7201 36515
Swap: 65531 15 65516
Total: 129351 55162 66988
File Name or Source
file name =node2_meminfo_19.11.06.1100.dat
zzz ***Wed Nov 6 11:00:02 CST 2019
MemTotal: 65351524 kB <<<<<<62GB in total
MemFree: 2137568 kB <<<<<<2GB free
MemAvailable: 38189832 kB
Buffers: 13832 kB
Cached: 4424852 kB
SwapCached: 0 kB
Active: 38255876 kB
Inactive: 1384672 kB
Active(anon): 4706984 kB
Inactive(anon): 94764 kB
Active(file): 33548892 kB <<<<<<32GB memory in OS filesystem cache
Inactive(file): 1289908 kB
Unevictable: 401128 kB
Mlocked: 401128 kB
SwapTotal: 67104764 kB
SwapFree: 67104764 kB
Dirty: 560 kB
Writeback: 0 kB
AnonPages: 4254508 kB
Mapped: 675800 kB
Shmem: 808328 kB
Slab: 1735924 kB
SReclaimable: 1612152 kB
SUnreclaim: 123772 kB
KernelStack: 18736 kB
PageTables: 238216 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 89421740 kB
Committed_AS: 7028572 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 518656 kB
VmallocChunk: 34358945788 kB
HardwareCorrupted: 0 kB
AnonHugePages: 1648640 kB
CmaTotal: 16384 kB
CmaFree: 10532 kB
HugePages_Total: 10116
HugePages_Free: 515
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 601952 kB
DirectMap2M: 21059584 kB
DirectMap1G: 45088768 kB
Huge pages disabled:
[# ~]$ free -g
total used free shared buff/cache available
Mem: 62 34 1 21 25 32
Swap: 63 0 63
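A sketch of how free derives its columns from /proc/meminfo may make the arithmetic clearer. This is my illustration, not from the original post; it assumes a newer procps-ng where buff/cache includes SReclaimable, and it uses the node-2 meminfo snapshot above, which was captured at a slightly different moment than the free -tm output, so the figures differ a little.

# How `free` derives its main columns from /proc/meminfo (assuming a
# newer procps-ng where buff/cache = Buffers + Cached + SReclaimable),
# using the node-2 meminfo snapshot above; all values in kB.
meminfo = {
    "MemTotal":        65351524,
    "MemFree":          2137568,
    "MemAvailable":    38189832,
    "Buffers":            13832,
    "Cached":           4424852,
    "SReclaimable":     1612152,
    "HugePages_Total":    10116,   # pages, not kB
    "Hugepagesize":        2048,   # kB per huge page
}

buff_cache = meminfo["Buffers"] + meminfo["Cached"] + meminfo["SReclaimable"]
used = meminfo["MemTotal"] - meminfo["MemFree"] - buff_cache

print("used       :", used)                      # ~54.5 GiB, hugetlb pool included
print("buff/cache :", buff_cache)                # ~5.8 GiB
print("available  :", meminfo["MemAvailable"])   # ~36.4 GiB (kernel's estimate)

# used + free + buff/cache always equals total by definition, so adding
# "available" on top of them double counts free plus reclaimable cache:
print(used + meminfo["MemFree"] + buff_cache == meminfo["MemTotal"])   # True

# The hugetlb pool is neither free nor page cache, so it sits inside "used":
hugetlb_kb = meminfo["HugePages_Total"] * meminfo["Hugepagesize"]
print("hugetlb pool:", hugetlb_kb, "kB")         # 20717568 kB ~= 19.8 GiB

Because "used" is defined as total - free - buff/cache, those three columns already add up to total on their own; "available" (MemAvailable) is a separate estimate built mostly from MemFree plus the reclaimable part of buff/cache, so adding it on top double counts. And with huge pages enabled, the ~19.8 GiB hugetlb pool is neither free nor page cache, so it all lands in "used", which is why "used" looks so much larger than in the run with huge pages disabled.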
Hello everyone,
We use Wind River Linux 6.0 in our products (based on Yocto; kernel 3.10.55).
We are facing a strange problem: Inactive(file) in /proc/meminfo remains high, and even after we run "echo 3 > /proc/sys/vm/drop_caches", that memory is not reclaimed.
sh-4.2# echo 3 > /proc/sys/vm/drop_caches
sh-4.2# free
total used free shared buffers cached
Mem: 8000008 7777308 222700 0 196 871328
-/+ buffers/cache: 6905784 1094224
Swap: 0 0 0
sh-4.2# cat /proc/meminfo
MemTotal: 8000008 kB
MemFree: 220988 kB
Buffers: 288 kB
Cached: 872912 kB
SwapCached: 0 kB
Active: 2145984 kB
Inactive: 4779720 kB
Active(anon): 2126188 kB
Inactive(anon): 804404 kB
Active(file): 19796 kB
Inactive(file): 3975316 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2077148 kB
Mapped: 242972 kB
Shmem: 853588 kB
Slab: 145620 kB
SReclaimable: 121040 kB
SUnreclaim: 24580 kB
KernelStack: 10704 kB
PageTables: 10624 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4000004 kB
Committed_AS: 4084848 kB
VmallocTotal: 4294967296 kB
VmallocUsed: 27604 kB
VmallocChunk: 4294931084 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
The Inactive(file) memory can be reduced by malloc() calls or by copying a file to a ramdisk (/run or /var/volatile).
But it affects attaching GDB to a running process whose memory use is larger than the free memory: the attach fails (GDB reports that it cannot allocate memory and then quits, unable to attach to the process).
As we understand it, Inactive(file) pages are effectively free and available: memory that has not been used recently and can be reclaimed for other purposes. But here it is causing gdb to fail, and perhaps other effects we have not run into yet.
What might cause this issue? And how can I reclaim this memory manually?
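One thing worth cross-checking (my suggestion, not from the original post): on a 3.10-era kernel, shmem/tmpfs pages are swap-backed and sit on the anon LRU, so the file LRU should roughly equal Buffers + Cached - Shmem. A small Python consistency check against the meminfo above:

#!/usr/bin/env python3
"""Cross-check the LRU counters in /proc/meminfo against the page-cache
counters.  On kernels of this vintage, roughly:

    Active(anon) + Inactive(anon) ~= AnonPages + Shmem
    Active(file) + Inactive(file) ~= Buffers + Cached - Shmem

A large gap in the second relation means Inactive(file) is not backed by
droppable page cache, which is why drop_caches cannot shrink it."""

def meminfo(path="/proc/meminfo"):
    out = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            out[key] = int(rest.split()[0])       # kB
    return out

m = meminfo()

anon_lru = m["Active(anon)"] + m["Inactive(anon)"]
file_lru = m["Active(file)"] + m["Inactive(file)"]
anon_acct = m["AnonPages"] + m["Shmem"]
file_acct = m["Buffers"] + m["Cached"] - m["Shmem"]

print(f"anon LRU {anon_lru} kB vs AnonPages+Shmem       {anon_acct} kB")
print(f"file LRU {file_lru} kB vs Buffers+Cached-Shmem  {file_acct} kB")
print(f"file LRU unexplained by page cache: {file_lru - file_acct} kB")

With the numbers above, the anon side matches almost exactly (2930592 vs 2930736 kB), but Active(file)+Inactive(file) is about 3.8 GiB larger than the actual page cache, so that Inactive(file) is not droppable cache at all. That suggests the pages are pinned by something (a driver, for example) or that the counters themselves have drifted; either way, echo 3 > /proc/sys/vm/drop_caches has nothing there it is allowed to drop.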
I don't set any hugepages on the system, so why does the direct mapping have such a weird value?
DirectMap4k: 251600 kB
DirectMap2M: 5941248 kB
DirectMap1G: 130023424 kB
Looking at the cmdline, no hugepages are specified. The same goes for runtime hugepages: the sysfs directories only contain 2M hugepage entries, and none are allocated.
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.11.0-26-generic root=UUID=7e5b93c9-ace5-4a9d-8623-c6718a2d720a ro console=ttyS0,9600 console=tty0 rootdelay=90 nomodes
# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
0
# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
0
# free -k
total used free shared buffers cached
Mem: 131911116 43668088 88243028 0 202272 2004796
-/+ buffers/cache: 41461020 90450096
Swap: 3999740 0 3999740
# cat /proc/meminfo
MemTotal: 131911116 kB
MemFree: 87704076 kB
Buffers: 202272 kB
Cached: 2004444 kB
SwapCached: 0 kB
Active: 38864132 kB
Inactive: 1784416 kB
Active(anon): 38441104 kB
Inactive(anon): 7924 kB
Active(file): 423028 kB
Inactive(file): 1776492 kB
Unevictable: 8384 kB
Mlocked: 8384 kB
SwapTotal: 3999740 kB
SwapFree: 3999740 kB
Dirty: 120 kB
Writeback: 0 kB
AnonPages: 38450956 kB
Mapped: 29576 kB
Shmem: 760 kB
Slab: 1441772 kB
SReclaimable: 184536 kB
SUnreclaim: 1257236 kB
KernelStack: 11632 kB
PageTables: 146568 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 69955296 kB
Committed_AS: 81453204 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 721460 kB
VmallocChunk: 34291709228 kB
HardwareCorrupted: 0 kB
AnonHugePages: 5980160 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 251600 kB
DirectMap2M: 5941248 kB
**DirectMap1G: 130023424 kB**
The DirectMap fields count how much physical memory is covered by 4 KB, 2 MB/4 MB, and 1 GB entries in the kernel's direct (linear) mapping of RAM, where the CPU supports those page sizes. They are unrelated to hugetlb pages, so large values are normal even when /proc/sys/vm/nr_hugepages is 0 (i.e. the kernel is booted without a hugepages parameter):
cat /proc/sys/vm/nr_hugepages
0
See "What is meant by the HardwareCorrupted, DirectMap4k, and DirectMap2M fields in the /proc/meminfo file of Linux?" for more details.
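To convince yourself that the DirectMap figures just describe the kernel's linear mapping of physical memory (and have nothing to do with hugetlb), you can add them up and compare with the installed RAM; the sum is normally at or slightly above MemTotal because the direct map also covers reserved regions and holes. A quick Python check with the values above (my illustration):

# DirectMap entries describe the page sizes used for the kernel's direct
# (linear) mapping of physical memory, not hugetlb pages (all values in kB).
directmap_kb = {
    "DirectMap4k":       251600,
    "DirectMap2M":      5941248,
    "DirectMap1G":    130023424,
}
mem_total_kb = 131911116

total_mapped = sum(directmap_kb.values())
print(total_mapped, "kB mapped")      # 136216272 kB ~= 130 GiB of direct map
print(mem_total_kb, "kB MemTotal")    # ~125.8 GiB of usable RAM
# The direct map spans the whole physical address range (including holes
# and reserved areas), so its total is >= MemTotal; most of it uses 1 GiB
# entries simply because the CPU supports them, regardless of nr_hugepages.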