What is the vm.overcommit_ratio in Linux?

Here are my current settings:
vm.overcommit_ratio = 50 (default)
vm.overcommit_memory = 2
And Current Memory Usage:
[localhost~]$ free -g
             total       used       free     shared    buffers     cached
Mem:            47         46          0          0          0         45
-/+ buffers/cache:          1         45
Swap:           47          0         47
As per the documentation, what I understood is:
vm.overcommit_memory = 2 will not allow overcommitting more than 50% of RAM (as vm.overcommit_ratio is 50), yet I can see that current memory usage is 46 GB out of 47 GB.
Did I misunderstand anything?

I believe the default for vm.overcommit_memory is 0 and not 2. Is the overcommit_ratio only relevant to mode 2? I assume yes, but I'm not entirely sure.
From https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
0 - Heuristic overcommit handling. Obvious overcommits of address
space are refused. Used for a typical system. It ensures a seriously
wild allocation fails while allowing overcommit to reduce swap
usage. root is allowed to allocate slightly more memory in this
mode. This is the default.
1 - Always overcommit. Appropriate for some scientific applications.
Classic example is code using sparse arrays and just relying on the
virtual memory consisting almost entirely of zero pages.
2 - Don't overcommit. The total address space commit for the system
is not permitted to exceed swap + a configurable amount (default is
50%) of physical RAM. Depending on the amount you use, in most
situations this means a process will not be killed while accessing
pages but will receive errors on memory allocation as appropriate.
Instead of free -g, which I assume rounds down (hence the zeros), you might want to use free -m or plain free to be more precise.
This might be interesting as well:
cat /proc/meminfo | grep Commit
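For what it's worth, in mode 2 the ceiling you hit is CommitLimit, which (per the documentation quoted above) is roughly swap plus overcommit_ratio percent of physical RAM, and it caps committed address space rather than what free reports as used. A rough sketch of how to check it; the figure in the comment is only an estimate based on the numbers in the question, not actual output:
# CommitLimit ≈ swap + (vm.overcommit_ratio / 100) * RAM
# for the box above that is roughly 47 GB + 0.5 * 47 GB ≈ 70 GB of committable address space
sysctl vm.overcommit_memory vm.overcommit_ratio
grep -E 'CommitLimit|Committed_AS' /proc/meminfo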

Related

'Memory allocatable utilization' for kubernetes.io/node > 100%

I am trying to configure monitoring on a variety of Kubernetes (GKE) nodes, specifically to identify [near] out-of-memory conditions. The documentation for node/memory/allocatable_utilization states:
This value cannot exceed 1 as usage cannot exceed allocatable memory bytes.
However, it reports a non-evictable value > 1 (1.015), which contradicts that constraint. Also, it's not clear to me how this corresponds with the actual condition on the node, as shown by free -m:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:          15038       10041         184          67        4812        4606
Swap:             0           0           0
This node is designed to run memory-intensive workloads (Java) and as such this is in line with what I'd expect per our heap size planning.
Why would Stackdriver report this value with those conditions on the node?
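One thing that may be worth checking: cAdvisor-style node metrics are generally derived from the kernel's memory cgroup accounting, which is not the same view as free. A small sketch for comparing the two on the node itself, assuming cgroup v1 paths:
# usage charged to the root memory cgroup (includes page cache)
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
# a working-set style figure subtracts inactive file cache from that usage
grep total_inactive_file /sys/fs/cgroup/memory/memory.stat
# compare with userspace's view
free -b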

How do I tune node.js memory usage for Raspberry pi?

I'm running node.js on a Raspberry Pi 3 B with the following free memory:
free -m
             total       used       free     shared    buffers     cached
Mem:           973        230        742          6         14        135
-/+ buffers/cache:         80        892
Swap:           99          0         99
How can I configure node (v7) to not use all the free memory? To prolong the SD card life, I would like to prevent it from going to swap.
I am aware of --max_old_space_size:
node --v8-options | grep -A 5 max_old
--max_old_space_size (max size of the old space (in Mbytes))
type: int default: 0
I know some of the answer is application specific; however, what are some general tips to limit node.js memory consumption and prevent swapping? Any other tips to squeeze more free RAM out of the Pi would also be appreciated.
I have already set the memory split so that the GPU gets the minimum 16 MB of RAM.
The only bulletproof way to prevent swapping is to turn off swapping in the operating system (delete or comment out any swap lines in /etc/fstab for permanent settings, or use swapoff -a to turn off all swap devices for the current session). Note that the kernel is forced to kill random processes when there is no free memory available (this is true both with and without swap).
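A minimal sketch of those steps (the fstab edit is the generic route; on Raspbian the swap file is usually managed by the dphys-swapfile service instead, so that is the thing to disable there):
# turn off all swap devices for the current session
sudo swapoff -a
# make it permanent: comment out any swap lines in /etc/fstab, or on Raspbian
# disable the dphys-swapfile service that manages the swap file
sudo systemctl disable dphys-swapfile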
In node.js, what you can limit is the size of V8's managed heap, and the --max-old-space-size flag you already mentioned is the primary way for doing that. A value around 400-500 (megabytes) probably makes sense for your Raspberry. There's also --max-semi-space-size which should be small and you can probably just stick with the default, and --max-executable-size for generated code (how much you need depends on the app you run; I'd just stick with the default).
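For example (app.js here is just a placeholder for your own entry point):
# cap V8's old-generation heap at roughly 400 MB
node --max-old-space-size=400 app.js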
That said, there's no way to limit the overall memory usage of the process, because there are other memory consumers outside the managed heap (e.g. node.js itself, V8's parser and compiler) for which no limits can be set. (Besides, what would such a limit do? Crash when memory is needed but not available? The kernel will take care of that anyway.)

Memcached started evicting items even when limit_maxbytes was not reached

I was running an application that was to load about 60 million items into memcached. I had two servers added in a bucket. After about 65% of the data was loaded, I saw 1.3 million items evicted on both servers. These were the statistics at that point:
On server 1
STAT bytes_written 619117542
STAT limit_maxbytes 3145728000
On server 2
STAT bytes_written 619118863
STAT limit_maxbytes 3145728000
Here's the output of free -m at that point in time.
On server 1
             total       used       free     shared    buffers     cached
Mem:          7987       5965       2021          0        310        441
-/+ buffers/cache:       5213       2774
Swap:         4095          0       4095
On server 2
             total       used       free     shared    buffers     cached
Mem:         11980      11873        106          0        207       5860
-/+ buffers/cache:       5805       6174
Swap:         5119          0       5119
As we can see, limit_maxbytes was not reached on either server; only about 600 MB was used on each. However, on server 2, free memory dipped as low as 100 MB. Now I know that cached is 5.8 GB and that Linux could free that memory for running processes, but it looks like that didn't happen, and, seeing memory reach a critical level, memcached started evicting items.
Or is there another reason? When exactly does Linux free up cache memory? Is 100 MB of free RAM still not critical enough for Linux to free up cache? Please help me understand why such an event occurred.
The 'slabs' refer to how Memcached allocates memory. Rather than a complex exact match, it puts your data into a close-enough (slightly larger) piece of memory within the server. This means that it will frequently 'waste' memory that isn't storing your data.
You can tweak how big each potential slot is, though, when you start the memcached server, with the growth factor (-f) and the minimum chunk size (-n) options. How you set those depends on the mix of sizes you are storing in cache.
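A sketch of what that might look like; the values are only illustrative (-m 3000 matches the roughly 3 GB limit_maxbytes above, and 1.25 / 48 are memcached's defaults), so tune them for your own item-size mix:
# start memcached with a 3000 MB cap, growth factor 1.25 and 48-byte minimum chunks
memcached -d -m 3000 -f 1.25 -n 48
# then watch how the slab classes fill up and where the evictions happen
printf 'stats slabs\r\nquit\r\n' | nc localhost 11211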

On Linux we see the following: Physical, Real, Swap, Virtual Memory - which should we consider for sizing?

We use a tool (Whats Up Gold) to monitor memory usage on a Linux box.
We see memory usage graphs for:
Physical, Real, Swap, Virtual Memory and ALL Memory (which is an average of all of these).
The 'ALL Memory' graph shows low memory usage of about 10%.
But Physical memory shows as 95% used.
Swap memory shows as 2% used.
So, do I need more memory on this Linux box?
In other words, should I go by:
the ALL Memory graph (which says the memory situation is good), OR
the Physical Memory graph (which says the memory situation is bad)?
Real and Physical
Physical memory is the amount of DRAM that is currently in use. Real memory shows how much of that DRAM your applications are actually using; it is usually lower than physical memory. The difference between the two is the disk data that Linux caches: whenever there is free memory, Linux uses it for caching. Do not worry, as your applications demand memory they will get the cached space back.
Swap and Virtual
Swap is additional space on top of your actual DRAM. It is borrowed from disk, and once your applications fill up the entire DRAM, Linux moves some unused memory to swap to keep all applications alive. The total of swap and physical memory is the virtual memory.
Do you need extra memory?
In answer to your question, you need to check real memory. If your real memory is full, you need more RAM. Use the free command to check the amount of actually free memory. For example, on my system free says:
$ free
             total       used       free     shared    buffers     cached
Mem:      16324640    9314120    7010520          0     433096    8066048
-/+ buffers/cache:     814976   15509664
Swap:      2047992          0    2047992
You need to check the -/+ buffers/cache line. As shown above, there is really about 15 GB of free DRAM (second line) on my system. Check this on your system to find out whether you need more memory or not. The three lines represent physical, real, and swap memory, respectively.
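If you just want that one number from a script, here is a small sketch; the field positions depend on which version of free you have, so use whichever line matches your output:
# older free: the second value on the "-/+ buffers/cache" line is what is really free
free -m | awk '/buffers\/cache/ {print $4 " MB really free"}'
# newer free: there is an explicit "available" column on the Mem line instead
free -m | awk '/^Mem:/ {print $7 " MB available"}'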
free -m
As for analysing memory shortage in Linux with the free tool, I have an opinion that has been borne out by experiments (practice):
~# free -m
              total        used        free      shared  buff/cache   available
Mem:           2000         164         144        1605        1691         103
You should add up 'used' + 'shared' and compare that with 'total'; the other columns just confuse things and nothing more.
I would say
[ total - (used + shared) ] should always be at least 200 MB.
You can also get almost the same number if you check MemAvailable in meminfo:
# grep MemAvailable /proc/meminfo
MemAvailable: 107304 kB
MemAvailable is how much memory Linux thinks is really free right now before active swapping starts.
So at this point you can consume at most 107304 kB; if you consume more, heavy swapping starts happening.
MemAvailable also correlates well with real practice.
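If you want to script that rule of thumb, here is a rough sketch (it assumes the newer free layout shown above, where shared is the fifth column):
# the total - (used + shared) heuristic from above, in MB
free -m | awk '/^Mem:/ {print $2 - ($3 + $5) " MB left by this rule"}'
# the kernel's own estimate, in kB
grep MemAvailable /proc/meminfo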

Linux memory usage is much larger than the sum of memory used by all applications?

I am using "free -m -t " command to monitor my linux system and get
             total       used       free     shared    buffers     cached
Mem:         64334      64120        213          0        701      33216
-/+ buffers/cache:      30202      34131
Swap:          996          0        996
Total:       65330      64120       1209
It means 30 GB of physical memory is used by user processes,
but when using the top command and sorting by memory usage, only 3-4 GB of memory is used by all the application processes.
Why does this inconsistency happen?
As I understand it, the amount of memory that top shows as used includes cold memory from older processes that are not running anymore. This is because, in case such a process is restarted, the required data may still be in memory, enabling the system to start the process faster and more efficiently instead of always reloading the data from disk.
Or, in short: Linux generally frees cold data in memory as late as possible.
Hope that clears it up :)
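One way to see the gap yourself is to add up the resident set sizes of all processes and put that next to what free calls used; a rough sketch (shared pages are counted once per process, so the sum overstates things a little):
# sum of RSS across all processes, in MB
ps -eo rss= | awk '{sum += $1} END {printf "%.0f MB resident in processes\n", sum/1024}'
# compare with the used and cached columns here
free -m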
