I'm preparing for more traffic in the days to come, and I want to be sure the server can handle it.
Running sar -q, the load average of 3.5 doesn't seem like much on a 32-CPU machine:
However, I'm not sure about the memory.
Running sar -r shows 98.5% for %memused and only 13.6% for %commit:
Running htop seems OK too: 14.9G/126G. Does this mean only 14.9 GB are in use by the apps, out of the 126 GB available?
I'm more interested in the sar -r output.
%memused is 98.5% while %commit is only 13.6%.
I wonder what that means.
You see, Linux will try to cache disk blocks that are read or written in memory that is not otherwise in use. This is what you see reported by sar in the kbcached and kbbuffers columns. When a new request comes in and requires memory, it is granted from this cache or from the free list.
kbmemused = memory consumed by running processes + cache + buffers
To find out the actual memory used by your application, you should subtract kbbuffers and kbcached from kbmemused.
Monitoring %commit makes more sense: it is the memory actually committed to the currently running processes. In your case this number approximately matches the output of htop.
Another way to check the actual free memory is the free -m command.
free reports the same statistics as sar.
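If you want to do that subtraction yourself, a rough sketch using /proc/meminfo (the file both sar and free read from) could look like this; the field names are the standard ones in that file:
# approximate "memory used by applications" as total - free - buffers - cached
awk '/MemTotal/{t=$2} /MemFree/{f=$2} /^Buffers/{b=$2} /^Cached/{c=$2}
     END {printf "apps: %.1f GiB of %.1f GiB total\n", (t-f-b-c)/1048576, t/1048576}' /proc/meminfo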
To summarize
memused 98.5% shows the memory used by your applications plus the cache and buffers used by the kernel to speed up disk access.
commit 13.6% is the memory actually committed by the kernel to your application processes.
Good day. I know that Docker containers use the host's kernel (which is why containers are considered lightweight VMs); here is the source. However, after reading the Runtime options part of the Docker documentation, I came across an option called --kernel-memory. The docs say:
The maximum amount of kernel memory the container can use.
I don't understand what it does. My guess is that every container allocates some memory in the host's kernel space. If so, what is the reason? Isn't it a vulnerability for a user process to be able to allocate memory in kernel space?
The whole CPU/memory limitation machinery is implemented with cgroups.
You can find all settings made by docker run (either via arguments or by default) under /sys/fs/cgroup/memory/docker/<container ID> for memory or /sys/fs/cgroup/cpu/docker/<container ID> for CPU.
So for --kernel-memory:
Reading: cat memory.kmem.limit_in_bytes
Writing: echo 2167483648 | sudo tee memory.kmem.limit_in_bytes
There are also memory.kmem.usage_in_bytes and memory.kmem.max_usage_in_bytes, which (rather self-explanatorily) show the current usage and the highest usage seen so far.
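For example, the accounting for a running container can be read directly from its cgroup (a sketch assuming the default cgroup v1 paths shown above; $CID just picks the first running container):
CID=$(docker ps -q --no-trunc | head -n 1)                            # full ID of some running container
cat /sys/fs/cgroup/memory/docker/$CID/memory.kmem.usage_in_bytes      # current kernel memory
cat /sys/fs/cgroup/memory/docker/$CID/memory.kmem.max_usage_in_bytes  # peak kernel memory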
CGroup docs about Kernel Memory
For the functionality, I recommend reading the Kernel Docs for CGroups v1 instead of the Docker docs:
2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)
With the Kernel memory extension, the Memory Controller is able to
limit the amount of kernel memory used by the system. Kernel memory is
fundamentally different than user memory, since it can't be swapped
out, which makes it possible to DoS the system by consuming too much
of this precious resource.
[..]
The memory used is
accumulated into memory.kmem.usage_in_bytes, or in a separate counter
when it makes sense. (currently only for tcp). The main "kmem" counter
is fed into the main counter, so kmem charges will also be visible
from the user counter.
Currently no soft limit is implemented for kernel memory. It is future
work to trigger slab reclaim when those limits are reached.
and
2.7.2 Common use cases
Because the "kmem" counter is fed to the main user counter, kernel
memory can never be limited completely independently of user memory.
Say "U" is the user limit, and "K" the kernel limit. There are three
possible ways limits can be set:
U != 0, K = unlimited:
This is the standard memcg limitation mechanism already present before kmem
accounting. Kernel memory is completely ignored.
U != 0, K < U:
Kernel memory is a subset of the user memory. This setup is useful in
deployments where the total amount of memory per-cgroup is overcommited.
Overcommiting kernel memory limits is definitely not recommended, since the
box can still run out of non-reclaimable memory.
In this case, the admin could set up K so that the sum of all groups is
never greater than the total memory, and freely set U at the cost of his
QoS.
WARNING: In the current implementation, memory reclaim will NOT be
triggered for a cgroup when it hits K while staying below U, which makes
this setup impractical.
U != 0, K >= U:
Since kmem charges will also be fed to the user counter and reclaim will be
triggered for the cgroup for both kinds of memory. This setup gives the
admin a unified view of memory, and it is also useful for people who just
want to track kernel memory usage.
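Translated into Docker flags (pre-v20.10 syntax, since --kernel-memory is deprecated later on, see below; the nginx image is just a placeholder), the "U != 0, K < U" case from the quote would look roughly like:
# user memory limit U = 2 GB, kernel memory limit K = 512 MB (K < U)
docker run -d --memory=2g --kernel-memory=512m nginx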
A Clumsy Attempt at a Conclusion
Given a running container started with --memory="2g" --memory-swap="2g" --oom-kill-disable, running
cat memory.kmem.max_usage_in_bytes
10747904
That is about 10 MB of kernel memory in a normal state. It would make sense to me to limit it, say to 20 MB of kernel memory, so that the container is killed or throttled to protect the host. But given that, according to the docs, there is no way to reclaim that memory, and that the OOM killer then starts killing processes on the host even with plenty of free memory (see https://github.com/docker/for-linux/issues/1001), this is rather impractical to use in my view.
The quoted option of setting it >= memory.limit_in_bytes is not really helpful in that scenario either.
Deprecated
--kernel-memory is deprecated as of v20.10, because someone (namely the Linux kernel developers) realized all of this as well.
What can we do instead?
ULimit
The Docker API exposes HostConfig|Ulimit, which sets the same per-process resource limits that /etc/security/limits.conf describes. For docker run the syntax is --ulimit <type>=<soft>:<hard>. Use cat /etc/security/limits.conf or man setrlimit to see the available categories. You can try to protect your system from having its kernel memory filled by, e.g., spawning unlimited processes, with --ulimit nproc=500:500, but be careful: nproc counts per user, not per container, so add it up across containers.
To prevent a DoS (intentional or unintentional) I would suggest limiting at least nofile and nproc, for example as below. Maybe someone can elaborate further.
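A minimal sketch (the image and the numbers are placeholders, not recommendations):
# cap open files and the owning user's process count for this container
docker run -d --ulimit nofile=1024:2048 --ulimit nproc=500:500 nginx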
sysctl:
docker run --sysctl can change kernel variables for message queues and shared memory, and also networking. For example, docker run --sysctl net.ipv4.tcp_max_orphans= controls orphaned TCP connections, which defaults on my system to 131072; at roughly 64 kB of kernel memory each, that is 8 GB gone on a malfunction or DoS. Maybe someone can elaborate further.
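As an illustration (using net.core.somaxconn as a stand-in, since whether a particular net.* key is per-network-namespace, and therefore settable this way, depends on your kernel):
# override a namespaced network sysctl for a single container and print it back
docker run --rm --sysctl net.core.somaxconn=1024 alpine cat /proc/sys/net/core/somaxconn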
I'm running node.js on a Raspberry Pi 3 B with the following free memory:
free -m
total used free shared buffers cached
Mem: 973 230 742 6 14 135
-/+ buffers/cache: 80 892
Swap: 99 0 99
How can I configure node (v7) to not use all the free memory? To prolong the SD card life, I would like to prevent it from going to swap.
I am aware of --max_old_space_size:
node --v8-options | grep -A 5 max_old
--max_old_space_size (max size of the old space (in Mbytes))
type: int default: 0
I know some of this is application-specific, but what are some general tips for limiting node.js memory consumption to prevent swapping? Any other tips to squeeze more free RAM out of the Pi would also be appreciated.
I have already set the memory split so that the GPU has the minimum 16 megs of RAM allocated.
The only bulletproof way to prevent swapping is to turn off swapping in the operating system (delete or comment out any swap lines in /etc/fstab for permanent settings, or use swapoff -a to turn off all swap devices for the current session). Note that the kernel is forced to kill random processes when there is no free memory available (this is true both with and without swap).
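For example (a sketch; on a Raspberry Pi swap may also be managed by dphys-swapfile rather than /etc/fstab, so check both):
sudo swapoff -a                                # turn off all active swap now
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab  # comment out swap entries, keeping a .bak copy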
In node.js, what you can limit is the size of V8's managed heap, and the --max-old-space-size flag you already mentioned is the primary way for doing that. A value around 400-500 (megabytes) probably makes sense for your Raspberry. There's also --max-semi-space-size which should be small and you can probably just stick with the default, and --max-executable-size for generated code (how much you need depends on the app you run; I'd just stick with the default).
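For instance (app.js is a placeholder for your entry point, and the value is just an example in the suggested range):
# cap V8's old-generation heap at roughly 400 MB
node --max-old-space-size=400 app.js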
That said, there's no way to limit the overall memory usage of the process, because there are other memory consumers outside the managed heap (e.g. node.js itself, V8's parser and compiler), and you cannot set limits on all of them. (Because what would such a limit do? Crash when memory is needed but not available? The kernel will take care of that anyway.)
I am using "free -m -t " command to monitor my linux system and get
total used free shared buffers cached
Mem: 64334 64120 213 0 701 33216
-/+ buffers/cache: 30202 34131
Swap: 996 0 996
Total: 65330 64120 1209
This means 30 GB of physical memory is used by user processes.
But when I use top and sort by memory usage, only 3-4 GB of memory is shown as used by all the application processes.
Why does this inconsistency happen?
As I understand it, the amount of memory that top shows as used includes cold memory from older processes that are no longer running. This is because, if such a process is restarted, the required data may still be in memory, allowing the system to start it faster and more efficiently instead of always reloading the data from disk.
Or, in short, Linux generally frees cold data in memory as late as possible.
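One way to see where the gap between "used" and the sum of process RSS actually goes is /proc/meminfo; in a case like this most of it is usually page cache, buffers and kernel slab (just a diagnostic sketch):
grep -E '^(MemTotal|MemFree|Buffers|Cached|Slab|SReclaimable)' /proc/meminfo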
Hope that clears it up :)
I have a Linux hardware server with 16 GB of physical memory running some applications. The server has been up for around 365 days now, and free -m shows that memory is running low.
total used free shared buffers cached
Mem: 14966 13451 1515 0 234 237
-/+ buffers/cache: 12979 1987
Swap: 4094 367 3727
I understand that 1987 MB is the actual free memory in the system, which is less than 14%. If I add up the %MEM column from ps -A v or from top, it does not come anywhere near 100%.
I need to understand why the free memory has gone so low.
Update (29/Feb/2012):
Let me split this problem into two parts:
1) System having less free memory.
2) Identifying where the used memory has gone.
For 1), I understand that if the system is running low on free memory we may see a gradual degradation in performance. At some point paging would give additional free memory back to the system, restoring its performance. Correct me if I am wrong on this.
For 2), this is what I want to understand: where has the used memory gone? If I sum up the %MEM column in the output of ps -A v or top -n 1 -b, it comes to no more than 50%. So where should I account for the remaining 40% of untraceable memory? We have our own kernel modules on the server. If these modules leak memory, would that be accounted anywhere? Is it possible to know the amount of leakage in kernel modules?
It's not running low. Free memory is running low. But that's fine, since free memory is completely useless. (Free memory is memory that is providing no benefit. Free memory is memory that would be just as useful sitting on your shelf as in your computer.)
Free memory is bad, it serves no purpose. Low free memory is good, it means your system has found some use for most of your memory.
So what's bad? If your system is slow because it doesn't have enough memory in use.
I was able to identify and solve my issue. But it was not without the help of the information present at http://linux-mm.org/Low_On_Memory.
The slabinfo entry for dentry was around 5 GB. After issuing the sync command the dirty pages got flushed to disk, and echo 3 > /proc/sys/vm/drop_caches freed up some more memory by dropping further caches.
In addition to the literature on the above website, the kernel reclaims this memory at a rate that depends on vfs_cache_pressure (/proc/sys/vm/vfs_cache_pressure).
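For anyone hitting the same thing, this is roughly the sequence described above, plus slabtop to see which slab caches (such as dentry) are the largest; treat it as a sketch, since drop_caches is a blunt instrument:
sudo slabtop -o -s c | head -n 15           # largest slab caches first (look for dentry)
sync                                        # flush dirty pages to disk
echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes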
Thanks to all for your help.
see http://www.linuxatemyram.com/
I have written a program which works on a huge set of data. My CPU and OS (Ubuntu) are both 64-bit and I have 4 GB of RAM. Using top (the %MEM field), I saw that the process's memory consumption went up to around 87%, i.e. 3.4+ GB, and then it got killed.
I then checked how much memory a process can access using ulimit -m, which comes out as "unlimited".
Now, since both the OS and CPU are 64-bit and a swap partition also exists, the OS should have used virtual memory, i.e. [ >3.4 GB + y GB from swap space ] in total, and it should have killed the process only if it required even more memory.
So, I have the following questions:
How much memory can a process theoretically access on a 64-bit machine? My answer is 2^48 bytes.
If less than 2^48 bytes of physical memory exists, then the OS should use virtual memory, correct?
If the answer to the above question is yes, then the OS should have used the swap space as well; why did it kill the process without even using it? I don't think we have to use any specific system calls in our program to make this happen.
Please suggest.
It's not only the data size that could be the reason. For example, run ulimit -a and check the max stack size. Do you have a kill reason? Set ulimit -c 20000 to get a core file; examining it with gdb will show you the reason.
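To check whether it was the kernel's OOM killer (a common cause here), the kernel log usually records it:
dmesg | grep -iE 'out of memory|killed process' | tail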
Check with file and ldd that your executable is indeed 64 bits.
Check also the resource limits. From inside the process, you could use getrlimit system call (and setrlimit to change them, when possible). From a bash shell, try ulimit -a. From a zsh shell try limit.
Check also that your process indeed eats the memory you believe it does consume. If its pid is 1234 you could try pmap 1234. From inside the process you could read the /proc/self/maps or /proc/1234/maps (which you can read from a terminal). There is also the /proc/self/smaps or /proc/1234/smaps and /proc/self/status or /proc/1234/status and other files inside your /proc/self/ ...
Check with free that you got the memory (and the swap space) you believe. You can add some temporary swap space with swapon /tmp/someswapfile (and use mkswap to initialize it).
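For instance, a temporary 2 GB swap file could be set up like this (path and size are only examples):
sudo dd if=/dev/zero of=/tmp/someswapfile bs=1M count=2048   # create a 2 GB file
sudo chmod 600 /tmp/someswapfile
sudo mkswap /tmp/someswapfile
sudo swapon /tmp/someswapfile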
I was routinely able, a few months (and a couple of years) ago, to run a 7 GB process (a huge cc1 compilation) under GNU/Linux/Debian/Sid/AMD64 on a machine with 8 GB of RAM.
And you could try a tiny test program which, e.g., allocates several memory chunks of, say, 32 MB each with malloc. Don't forget to write some bytes inside each chunk (at least one per megabyte), as sketched below.
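A minimal sketch of such a test (the file names are throwaway placeholders); it allocates 32 MB chunks and touches every page until malloc fails or the process is killed:
cat > /tmp/eatmem.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void) {
    const size_t chunk = 32u * 1024 * 1024;          /* 32 MB per allocation */
    for (long i = 1; ; i++) {
        char *p = malloc(chunk);
        if (!p) { printf("malloc failed after %ld MB\n", (i - 1) * 32); return 0; }
        memset(p, 1, chunk);                         /* touch every page so it is really committed */
        printf("%ld MB allocated\n", i * 32);
    }
}
EOF
cc -O2 -o /tmp/eatmem /tmp/eatmem.c && /tmp/eatmem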
Standard C++ containers like std::map or std::vector are rumored to consume more memory than we usually think.
Buy more RAM if needed. It is quite cheap these days.
Literally everything has to fit into the addressable space, including your graphics adapters, OS kernel, BIOS, etc., and the amount that can be addressed can't be extended by swap either.
It is also worth noting that the process itself needs to be 64-bit. And some operating systems may become unstable, and therefore kill the process, if it uses excessive RAM.