Our system is consuming swap even though it has available memory. Is that behaviour normal?
Our system is Red Hat Enterprise Linux 8.6.
[memory usage figure]
Solution to the memory usage problem:
The Linux 2.6 kernel introduced a new kernel parameter called swappiness, which allows administrators to customize how Linux swaps.
Swappiness is a property of the Linux kernel that changes the balance between swapping out runtime memory and dropping pages from the system page cache. Swappiness can be set to values between 0 and 100, inclusive. A low value means the kernel will try to avoid swapping as much as possible, whereas a higher value makes the kernel try aggressively to use swap space.
Since Linux kernel 5.8, the swappiness value range is 0 to 200; the default value is still 60. You can change it temporarily (until your next reboot) with the following command:
echo 42 > /proc/sys/vm/swappiness
If you want to change it permanently, edit the vm.swappiness parameter in the /etc/sysctl.conf file.
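A sketch of the persistent change (the value 10 is purely illustrative; on RHEL a drop-in file under /etc/sysctl.d/ works as well):

```shell
# Requires root. 10 is an example value; choose one that suits your workload.
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
sysctl -p                      # reload settings without rebooting
cat /proc/sys/vm/swappiness    # verify the running value
```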
It should be noted that the swappiness number does not imply that 60% of memory will be moved into swap. There is a swap algorithm that determines when and how much data is put into swap.
The following formula is provided by Red Hat to determine swap tendency:
swap_tendency = mapped_ratio/2 + distress + vm_swappiness;
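A quick sketch of the formula with assumed inputs (mapped_ratio=30, distress=0, and the default swappiness of 60 are hypothetical numbers, not from the question) shows how the result is dominated by the swappiness setting; Red Hat's documentation reportedly treats a tendency of 100 as the point where mapped pages start being swapped:

```shell
mapped_ratio=30   # assumed: % of physical memory that is mapped
distress=0        # assumed: kernel's measure of reclaim difficulty
vm_swappiness=60  # the default swappiness value
echo $(( mapped_ratio / 2 + distress + vm_swappiness ))   # -> 75
```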
Assume I have a pod with 3 containers: X, Y, and Z.
Kubernetes can set a CPU limit for each container in a pod. However, if I set a 1000m CPU limit on each container, then no container can use more than 1000m CPU even if the other two are idle, which is not what I want.
I want to set a CPU quota of 3000m for the pod as a whole, rather than for each container. For example, if X and Y are idle, Z can use 3000m CPU; if X is using 1500m CPU and Y is using 1000m CPU, then Z can only use 500m CPU.
So, my question is:
How to share a CPU quota among multiple containers?
I must set limits because I pay my cloud provider for what I use.
In such a case, I would recommend using the Vertical Pod Autoscaler (VPA) along with a LimitRange.
A LimitRange provides constraints that can:
Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
And the VPA will cap its recommendations between the min and max of the LimitRange, based on current usage.
N.B.: Make sure you have metrics-server installed in your cluster to enable the VPA.
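A minimal sketch of such a LimitRange (the name, namespace, and the min/max values here are illustrative assumptions, not taken from the question):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-bounds        # hypothetical name
  namespace: my-team      # hypothetical namespace
spec:
  limits:
  - type: Container
    min:
      cpu: 100m           # assumed floor per container
    max:
      cpu: "3"            # assumed ceiling per container
```

With this in place, the VPA clamps the resources it assigns to each container into the [min, max] range.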
Just don't set a CPU limit at all, or set it to some higher value. CPU limits should only be set where they are known to be needed; in general they do more harm than good.
My current configs are:
> cat /proc/sys/vm/panic_on_oom
0
> cat /proc/sys/vm/oom_kill_allocating_task
0
> cat /proc/sys/vm/overcommit_memory
1
but when I run a task, it's killed anyway.
> ./test/mem.sh
Killed
> dmesg | tail -2
[24281.788131] Memory cgroup out of memory: Kill process 10565 (bash) score 1001 or sacrifice child
[24281.788133] Killed process 10565 (bash) total-vm:12601088kB, anon-rss:5242544kB, file-rss:64kB
Update
My tasks are used for scientific computing, which consumes a lot of memory; it seems that overcommit_memory=1 may be the best choice.
Update 2
Actually, I'm working on a data analysis project which needs more than 16 GB of memory, but I was asked to limit it to about 5 GB. It might be impossible to meet this requirement by optimizing the program itself, because the project runs many sub-commands, and most of them do not have options like Java's -Xms or -Xmx.
Update 3
My project should be an overcommitted system. Exactly as a3f says, it seems that my apps prefer to crash via xmalloc when memory allocation fails.
> cat /proc/sys/vm/overcommit_memory
2
> ./test/mem.sh
./test/mem.sh: xmalloc: .././subst.c:3542: cannot allocate 1073741825 bytes (4295237632 bytes allocated)
I don't want to give up, although so many awful tests have left me exhausted.
So please show me a way to the light ;)
The OOM killer won't go away. If there is no memory, someone's got to pay. What you can do is set a limit after which memory allocations fail.
That's exactly what setting vm.overcommit_memory to 2 achieves.
From the docs:
The Linux kernel supports the following overcommit handling modes
2 - Don't overcommit. The total address space commit for the system
is not permitted to exceed swap + a configurable amount (default is
50%) of physical RAM. Depending on the amount you use, in most
situations this means a process will not be killed while accessing
pages but will receive errors on memory allocation as appropriate.
Normally, the kernel will happily hand out virtual memory (overcommit). Only when you actually touch a page does the kernel have to map it to a real physical frame. If it can't service that request, a process needs to be killed by the OOM killer to make space.
Disabling overcommit means that e.g. malloc(3) will return NULL if the kernel couldn't commit the amount of memory requested. This makes things a bit more predictable, albeit limited (many applications allocate more than they would ever need).
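That "swap + a configurable amount of physical RAM" ceiling is exposed as CommitLimit in /proc/meminfo and is governed by vm.overcommit_ratio (default 50). A sketch with assumed sizes (2 GB swap, 16 GB RAM, and ignoring huge pages):

```shell
swap_kb=$(( 2 * 1024 * 1024 ))    # assumed: 2 GB of swap, in kB
ram_kb=$(( 16 * 1024 * 1024 ))    # assumed: 16 GB of RAM, in kB
overcommit_ratio=50               # the default vm.overcommit_ratio
# CommitLimit = swap + ram * overcommit_ratio / 100
echo $(( swap_kb + ram_kb * overcommit_ratio / 100 ))   # -> 10485760 kB
```

Compare against the live values with `grep Commit /proc/meminfo`.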
The possible values of oom_adj range from -17 to +15. The higher the score, the more likely the associated process is to be killed by the OOM killer. If oom_adj is set to -17, the process is not considered for OOM-killing.
But increasing RAM is the better choice; if increasing RAM is not possible, then add swap space.
I found in several places that Linux uses pages and a paging mechanism, but I couldn't find where this file is or how to configure it.
All the information I found is about the Linux swap file / partition. There is a difference between paging and swapping:
Paging moves individual pages (small fixed-size blocks of data, usually 4 KB, though the size varies between OSes) from main memory to backing storage, and happens continuously as a normal function of the operating system.
Swapping moves an entire process to storage, and happens when the system is under memory pressure or, on Windows 8, when an application is hibernated.
Does Linux use its swap file / partition for both cases?
If so, how can I see how many pages are currently paged out? This information is not in the vmstat, free, or swapon commands (or I fail to see it).
Or is there another file used for paging?
If so, how can I configure it (and watch its usage)?
Or perhaps Linux does not use paging at all and I was misled?
I would appreciate answers specific to Red Hat Enterprise Linux versions 6 and 7, but a general answer about all Linux systems is also welcome.
Thanks in advance.
On Linux, the swap partition(s) are used for paging.
Linux does not respond to memory pressure by swapping out whole processes. The virtual memory system does demand paging, page by page. Under extreme memory pressure, one or more processes will be killed by the OOM killer. (There are some useful links to documentation in the first NOTE in man malloc)
There is a line in the top header which shows swap partition usage, but if that is all the information you want, use
swapon -s
man swapon for more information.
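To look at paging activity rather than just occupancy, a few read-only commands help (the values shown obviously vary per system):

```shell
cat /proc/swaps                                 # per-device swap usage
grep -E '^(SwapTotal|SwapFree)' /proc/meminfo   # totals in kB
# For activity over time, the si/so columns of vmstat show pages
# swapped in/out per second:  vmstat 1 5
```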
Swap partition usage is not the same as the number of pages paged out. A page might be memory-mapped to a file using the mmap call; since that page has backing store in the file, there is no need to also write it to a swap partition, and the system won't use swap space for it. But swap partition usage is a pretty good indicator.
Also note that Linux (unlike Windows) does not allocate swap space for pages when they are allocated. Instead, it adds the new page to the virtual memory map without any backing store, and allocates swap space only when the page needs to be swapped out. The consequence (as described in the malloc manpage referenced earlier) is that a malloc call may succeed in allocating virtual memory, but a subsequent attempt to use that virtual memory may fail.
Although Linux retains the term 'swap partition' as a historical relic, it actually performs paging. So your expectation is borne out; you were just thrown by the archaic terminology.
On a Linux system, running free gives the following values:
              total        used        free      shared  buff/cache   available
Mem:       26755612      873224      389320      286944    25493068    25311948
Swap:             0           0           0
The total, used and free values don't add up. I'm expecting total = used + free.
Question:
What am I missing here?
For main memory, the total can be calculated as used + free + buffers + cache, or equivalently used + free + buff/cache, since buff/cache = buffers + cache.
As the man page of free says:
total Total installed memory (MemTotal and SwapTotal in /proc/meminfo)
used Used memory (calculated as total - free - buffers - cache)
free Unused memory (MemFree and SwapFree in /proc/meminfo)
shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo,
on kernels 2.6.32, displayed as zero if not available)
buffers Memory used by kernel buffers (Buffers in /proc/meminfo)
cache Memory used by the page cache and slabs (Cached and Slab in
/proc/meminfo)
buff / cache Sum of buffers and cache
available Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
In your case,
873224(used) + 389320(free) + 25493068(buff/cache) = 26755612(total)
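The identity can be checked directly with shell arithmetic, using the numbers from the question:

```shell
used=873224; free=389320; buffcache=25493068
echo $(( used + free + buffcache ))   # -> 26755612, matching the reported total
```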
Linux likes to cache every file that it opens. Every time you open a file for reading, Linux will cache it, but it will drop those caches if it needs the memory for something more important -- like when a process on the system wants to allocate more memory. These in-memory caches simply make Linux faster when the same files are used over and over again. Instead of going to disk every time it wants to read a file, it just gets it from memory, and memory is a lot faster than disk. That is why your system shows 25493068 used in buff/cache but also shows 25311948 available. Much of that cached data can be freed if the system needs it.
I'm a newbie to Linux.
My Linux server has 47 GB of RAM and a quad-core CPU, but it is not as fast as it should be.
The free -m command shows:
Available: ~47 GB
Used: ~45 GB
Free: ~2 GB
At the time, the server is not used by anyone else.
The top command shows the CPU is 0.1% used.
Is the used value shown in free command correct?
If the data is reliable what could make use of 45gb?
It is a 64-bit Fedora kernel and it supports PAE (physical address extension).
Please help, and let me know if this is a known question.
Yes, the used value is correct, but most of that memory is reclaimable cache, not the source of your slowdown. Take a look at your memory with free. For example:
$ free -tm
total used free shared buffers cached
Mem: 3833 3751 82 0 1056 1107
-/+ buffers/cache: 1587 2246
Swap: 2000 83 1916
Total: 5833 3834 1999
In the first line used does not mean currently in use.
Looking at the first line it says I have 3833 total and have 3751 used. Is that a problem? No. Why? When Linux uses memory, it marks the memory as used and when it is done, it releases the buffers and cached memory that is no longer needed. The memory that was used, but is now free is not returned to total and subtracted from used, rather the buffers and cache are simply returned to the system and are available for re-use by any other process that may need it.
If you look further to the right, you see I have 1056 buffers and 1107 cached. The next line shows that, of the total, only 1587 is actually used and 2246 is free. The 2246 is roughly the sum of the original 82 free plus the 1056 buffers and 1107 cached that can be released for re-use. This is the memory currently in use and currently available.
The next line shows the swap available and its use and the last line shows the rough sums of lines 1 and 3. So no need to panic, if there is a slowdown, it is most likely not because your memory has all been used.
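The same arithmetic in shell, using the free -tm output above (values in MB; free -m rounds each field, so the results can differ by 1 from the displayed 1587/2246):

```shell
used=3751; free=82; buffers=1056; cached=1107
echo $(( used - buffers - cached ))   # truly in-use      -> 1588 (~1587 shown)
echo $(( free + buffers + cached ))   # free + reclaimable -> 2245 (~2246 shown)
```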