free command: Which column is most relevant?

I am running a program on a Debian 9 cloud server with 16G of RAM. I am concerned the program may be stressing memory, so I have it run the 'free -h' command as it cycles through a loop. I got the following output toward the end of the program, when memory consumption is maximal:
total used free shared buff/cache available
Mem: 15G 6.4G 155M 10M 9.1G 9.0G
Swap: 511M 20K 511M
If you look at the 'free' column it looks like there is only 155M free, but if you look at the 'available' column it looks like 9G is available. So, depending on the column, it looks like I have very little memory available, or lots of memory. Which column should I believe?
I've consulted 'man free' but I find it inscrutable.

Memory that is free is completely unused at this point. This number will generally and ideally be very low, since the OS tries to use as much of this memory as possible for buffering and caching.
The memory that is actually available to your application is what the 'available' column reports: the free memory plus the part of buff/cache that the kernel can reclaim on demand.
If your system were running out of memory, the kernel would start moving data out to swap on disk to free up RAM for use. The fact that only 20K of swap space is in use is another indicator that your program is not running out of memory.
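If you want the loop in your program to log the number that matters, the 'available' column can be read directly; a minimal sketch, assuming the procps-ng free output shown above (where 'available' is the seventh field of the Mem: line):
$ free -m | awk '/^Mem:/ {print $7 " MiB available"}'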

Actually, it depends on the context in which you are talking about memory.
The memory that is strictly free is 155 MB.
However, the server is currently using 9.1 GB for buffer/cache, of which 9.0 GB is reported as available; the kernel reclaims that cache on demand, so it is effectively unused/free from an application's point of view.
If you are concerned about system performance only, performance will not degrade until a lot of swapping occurs.
A newly started application that needs more than 155 MB will not necessarily hit a memory error, because the buffer/cache shrinks as applications ask for more memory; the 9.0 GB in the 'available' column is the better estimate of what can actually be allocated.

Application fails when free memory is low but available memory is high

I am building data models via an app called Sisense on Linux. Lately the process fails with an out-of-memory error. Running free -h I see that the failure occurs when free memory is low, but before it actually reaches zero, and even though there is still plenty of available memory.
Here is the exception:
Failed to build custom table: Rule_pre; BE#521691 SQL error: SafeModeException:
Safe-Mode triggered due to memory pressure. Pod physical memory: 5.31 GB available, 2.87 GB
used, 8.19 GB total. Server physical memory: 4.86 GB available, 28.67 GB used,
33.54 GB total. Application total virtual memory: 2.54 GB. The server exceeded 85% capacity
(28.67/33.54). Possible ways to reduce memory pressure: increase server memory, adjust data
modelling (M2M, un-indexed string fields, etc.), reduce number of simultaneous queries
And here is the output of free -h where you can see the declining memory in the center "free" column. Once free memory got below 235 MB I saw the above exception.
The free util man page has these definitions for free and available memory:
free Unused memory (MemFree and SwapFree in /proc/meminfo)
available
Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
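Both values come straight from /proc/meminfo, so they can be watched independently of free while a build runs; for example:
$ grep -E '^(MemFree|MemAvailable):' /proc/meminfo
$ watch -n 5 "grep -E '^(MemFree|MemAvailable):' /proc/meminfo"
(The second form refreshes the two values every five seconds.)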
As I read around the internet there seems to be a casualness about low free memory, that it is not an issue. But the failure coincides with free memory getting too low. If I understand the man page, the available memory is for starting new applications. I am assuming then that available memory is not available to the existing application that fails, and that free memory is indeed what matters. But any confirmation from others, or additional explanation, would be appreciated. I'd also be curious about opinions on whether this may constitute a memory leak, or whether I should simply allocate more memory somehow, perhaps at the Linux layer.
I think I have enough understanding here now. Free memory never goes below 200 MB whether a build fails or succeeds, so it does not appear to be an indicator of the issue. A successful build will also show a drop in free memory to 200 MB.

On Linux we see the following: Physical, Real, Swap, Virtual Memory - Which should we consider for sizing?

We use a tool (Whats Up Gold) to monitor memory usage on a Linux box.
We see memory usage graphs for:
Physical, Real, Swap, Virtual Memory, and ALL Memory (which is an average of all of these).
The 'ALL Memory' graph shows low memory usage of about 10%.
But Physical memory shows as 95% used.
Swap memory shows as 2% used.
So, do I need more memory on this Linux box?
In other words, should I go by:
the ALL Memory graph (which says the memory situation is good), OR
the Physical Memory graph (which says the memory situation is bad)?
Real and Physical
Physical memory is the amount of DRAM which is currently used. Real memory shows how much of the system's DRAM your applications are using; it is typically lower than physical memory. Linux caches some disk data, and this caching is the difference between physical and real memory: whenever there is free memory, Linux uses it for caching. Do not worry; as your applications demand memory, they will get the cached space back.
Swap and Virtual
Swap is additional space beyond your actual DRAM. This space is borrowed from disk, and once your applications fill up the entire DRAM, Linux transfers some unused memory to swap to let all applications stay alive. The total of swap and physical memory is the virtual memory.
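Under that definition, the virtual memory total can be computed directly from /proc/meminfo; a rough sketch:
$ awk '/^MemTotal:/ {m=$2} /^SwapTotal:/ {s=$2} END {printf "virtual total: %.1f GiB\n", (m+s)/1048576}' /proc/meminfo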
Do you need extra memory?
In answer to your question, you need to check real memory. If your real memory is full, you need to get some RAM. Use the free command to check the amount of actually free memory. For example, on my system free says:
$ free
total used free shared buffers cached
Mem: 16324640 9314120 7010520 0 433096 8066048
-/+ buffers/cache: 814976 15509664
Swap: 2047992 0 2047992
You need to check the -/+ buffers/cache line. As shown above, there is really about 15 GB of free DRAM (second line) on my system. Check this on your system and find out whether you need more memory or not. The three lines represent physical, real, and swap memory, respectively.
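With an older free like the one above, the same figure can also be computed by hand as free + buffers + cached from the Mem: line; a sketch that assumes that old column layout:
$ free | awk '/^Mem:/ {printf "effectively free: %.1f GiB\n", ($4+$6+$7)/1048576}'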
As for analysing memory shortage on Linux with the free tool, I have an opinion backed by experiments (practice):
~# free -m
total used free shared buff/cache available
Mem: 2000 164 144 1605 1691 103
You should add up 'used' + 'shared' and compare the sum with 'total'; in my experience the other columns only confuse matters.
I would say [ total - (used + shared) ] should always stay above at least 200 MB.
You can also get almost the same number if you check MemAvailable in /proc/meminfo:
# cat /proc/meminfo
MemAvailable: 107304 kB
MemAvailable is how much memory Linux thinks is really free right now, before active swapping kicks in.
So at the moment you can consume at most 107304 kB; if you consume more, heavy swapping starts.
In practice, MemAvailable correlates well with observed behaviour.
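Both numbers are easy to pull side by side if you want to check that correlation yourself; a sketch assuming the procps-ng column layout shown above:
$ free -m | awk '/^Mem:/ {printf "total - (used + shared): %d MiB\n", $2-($3+$5)}'
$ awk '/^MemAvailable:/ {printf "MemAvailable: %d MiB\n", $2/1024}' /proc/meminfo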

Linux memory usage is much larger than the sum of memory used by all applications?

I am using "free -m -t " command to monitor my linux system and get
total used free shared buffers cached
Mem: 64334 64120 213 0 701 33216
-/+ buffers/cache: 30202 34131
Swap: 996 0 996
Total: 65330 64120 1209
It means 30 GB of physical memory is used by user processes,
but when using the top command sorted by memory usage, only 3-4 GB of memory is used by all the application processes.
Why does this inconsistency happen?
As I understand it, the memory that free and top report as used includes cold data cached from older processes that are no longer running. This is because, if such a process is restarted, the data it needs may still be in memory, enabling the system to start it faster and more efficiently instead of always reloading the data from disk.
Or, in short: Linux generally frees cold data in memory as late as possible.
Hope that clears it up :)
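A quick sanity check is to sum the resident set size of every process and compare it with the used figure from free; since shared pages are counted once per process, the sum is an upper bound rather than an exact number. A sketch:
$ ps -eo rss= | awk '{sum += $1} END {printf "total RSS: %.1f GiB\n", sum/1048576}'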

Where has my used memory gone?

I have a physical Linux server with 16 GB of memory that runs some applications. The server has been up for around 365 days now, and I am observing that "free -m" shows memory running low.
total used free shared buffers cached
Mem: 14966 13451 1515 0 234 237
-/+ buffers/cache: 12979 1987
Swap: 4094 367 3727
I understand that 1987 is the actual free memory in the system, which is less than 14%. If I add up the %MEM column in the output of "ps -A v" or of "top", it does not come to 100%.
I need to understand why the free memory has gone so low.
Update (29/Feb/2012):
Let me split this problem into two parts:
1) System having less free memory.
2) Identifying where the used memory has gone.
For 1), I understand that if the system is running low on free memory we may see a gradual degradation in performance. At some point paging would hand additional free memory back to the system, restoring its performance. Correct me if I am wrong on this.
For 2), this is what I really want to understand: where has the used memory vanished? If I sum up the %MEM column in the output of "ps -A v" or "top -n 1 -b" it comes to no more than 50%, so where do I account for the remaining 40% of untraceable memory? We have our own kernel modules on the server. If these modules leak memory, would that be accounted for anywhere? Is it possible to know the amount of leakage in kernel modules?
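Kernel-side allocations such as slab memory never show up under any process's %MEM; they are reported separately in /proc/meminfo and can be inspected with something like:
$ grep -E '^(Slab|SReclaimable|SUnreclaim|Shmem|PageTables):' /proc/meminfo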
It's not running low. Free memory is running low. But that's fine, since free memory is completely useless. (Free memory is memory that is providing no benefit. Free memory is memory that would be just as useful sitting on your shelf as in your computer.)
Free memory is bad, it serves no purpose. Low free memory is good, it means your system has found some use for most of your memory.
So what's bad? If your system is slow because it doesn't have enough memory in use.
I was able to identify and solve my issue, with the help of the information at http://linux-mm.org/Low_On_Memory.
The slab memory reported in slabinfo for dentry was around 5 GB. After issuing the "sync" command, the dirty pages were written to disk, and the command "echo 3 > /proc/sys/vm/drop_caches" freed up some more memory by dropping further caches.
In addition to what is described on that site, this memory is reclaimed by the kernel at a rate that depends on vfs_cache_pressure (/proc/sys/vm/vfs_cache_pressure).
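For reference, inspecting the slab usage and dropping the caches as described above looks roughly like this (run as root; dropping caches is harmless but temporarily gives up the performance benefit of the cache):
$ sudo slabtop -o -s c | head -n 15           # show the largest slab caches (dentry, inode_cache, ...)
$ sync                                        # write dirty pages to disk first
$ echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes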
Thanks to all for your help.
see http://www.linuxatemyram.com/

How is Linux calculating MemFree?

I am trying to understand my embedded Linux system's memory usage.
Using the top utility and /proc/meminfo I can see how much virtual memory a process is using and how much physical memory is available to the system. But it would seem that for any given process the virtual memory can be very much higher than the used physical memory. As this is an embedded system, swapping is disabled (SwapTotal = 0).
How is Linux calculating the free physical memory? It doesn't seem to account for everything allocated in the virtual memory space.
MemFree in /proc/meminfo is a count of how many pages are free in the buddy allocator. This buddy allocator is the fundamental unit of physical memory allocation in the kernel; however there are a lot of ways pages can be returned to the buddy allocator in time of need - for example, freeing empty SLABs, discarding cache/buffer RAM (even if this means invalidating PTEs in a running process), or as a last resort, swapping things out.
In fact, MemFree is generally controlled to be only 5-10% of total physical RAM, with any extra free RAM being co-opted into cache as time goes on. As such, MemFree alone is a very incomplete view of the overall memory situation.
As for the virtual memory (VSIZE) of a given process, this refers to the sum total of the sizes of all mapped memory segments in the process's address space. However, not all of these will be physically present - some may be paged in upon first access and as such will not register as memory in use until actually used. The resident size (RSIZE) is a more accurate view, as it only registers pages that are mapped in right now - although this may also not be accurate if a given page is mapped in multiple virtual addresses (which is very common when you consider multiple processes - shared libraries have the same physical RAM mapped to all processes that are using that library)
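The difference is easy to see per process; for example, listing the largest processes with both figures (ps reports vsz and rss in KiB):
$ ps -eo pid,comm,vsz,rss --sort=-rss | head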
Try using htop. You will have to install it first with sudo apt-get install htop or yum install htop, depending on your distribution.
It will show you a more accurate representation of memory usage.
Basically, it comes down to "buffers/cache".
free -m
Look at the free column in the -/+ buffers/cache row; this is a more accurate representation of what is actually available.
total used free shared buffers cached
Mem: 3770 3586 183 0 112 1498
-/+ buffers/cache: 1976 1793
Swap: 7624 750 6874
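If you want that single number in a script, it can be pulled from the -/+ buffers/cache row; this assumes the older free output format shown above (newer procps-ng versions report it as the 'available' column instead):
$ free -m | awk '/buffers\/cache/ {print $NF " MB actually available"}'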
