I'm wondering if there is an easy way to determine the amount of "utilized" memory in Linux. Specifically, memory that is actively in use by the Kernel and applications and not counting the buffers and cached memory. I'm looking for something analogous to Window's reporting of used memory found in the task manager (Where you see the percentage of memory used).
So far, the closest solution I can think of to calculate it comes from this link: Determining Free Memory on Linux
On my Ubuntu 13.04 system, I do a cat /proc/meminfo and then calculate 100 - (((MemFree + Buffers + Cached) / MemTotal) * 100), which should give the percentage of "utilized" memory.
This is the closest way I have found to get a physical memory percentage like the one in the Windows Task Manager.
Does this seem like a valid approach? And if so, are there more straightforward approaches?
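For reference, the calculation I'm using can be scripted directly against /proc/meminfo (a sketch using the field names shown above):

```shell
# Percentage of memory in use, excluding buffers and page cache,
# i.e. 100 - ((MemFree + Buffers + Cached) / MemTotal * 100).
awk '/^MemTotal:/ {t = $2}
     /^MemFree:/  {f = $2}
     /^Buffers:/  {b = $2}
     /^Cached:/   {c = $2}
     END {printf "%.1f\n", 100 - ((f + b + c) / t * 100)}' /proc/meminfo
```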
You can use awk to parse the output of the free command and get the percentage of memory that is free:
free | grep Mem | awk '{print $4/$2 * 100}'
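Note that $4 is the free column. On newer versions of free (procps-ng), the Mem: line also has an available column; a variant using it (assuming the newer seven-column layout, where column 7 is available) would be:

```shell
# Percentage of memory "available": the kernel's estimate of what can
# be allocated without swapping, counting reclaimable buffers/cache.
free | awk '/^Mem:/ {printf "%.1f\n", $7 / $2 * 100}'
```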
Linux command for percentage of memory that is free
I am a fan of free -m
             total       used       free     shared    buffers     cached
Mem:          1446       1172        273          0        225        821
-/+ buffers/cache:        126       1320
Swap:         1471          0       1471
This shows you memory stats in a more human-readable way:
sar -r 0
I am running a program on a Debian 9 cloud server with 16G of RAM. I am concerned the program may be stressing memory, so I have it run the 'free -h' command as it cycles through a loop. I got the following output toward the end of the program, when memory consumption is maximal:
              total        used        free      shared  buff/cache   available
Mem:            15G        6.4G        155M         10M        9.1G        9.0G
Swap:          511M         20K        511M
If you look at the 'free' column it looks like there is only 155M free, but if you look at the 'available' column it looks like 9G is available. So, depending on the column, it looks like I have very little memory available, or lots of memory. Which column should I believe?
I've consulted 'man free' but I find it inscrutable.
Memory that is free is completely unused at this point. This number will generally and ideally be very low, since the OS tries to use as much of this memory as possible for buffering and caching.
The memory that is freely available to your application is, in fact, what the available column reports: it includes most of the buff/cache memory, since the kernel can reclaim that on demand.
If your program ran out of memory, the kernel would try to free some by swapping data out to disk. The fact that only 20K of swap space is used is another indicator that your program is not running out of memory.
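As a quick sanity check, you can read the kernel's own estimate directly (a sketch; the MemAvailable field exists on kernels 3.14 and later, which includes Debian 9):

```shell
# MemAvailable already accounts for reclaimable buffers/cache,
# so it is the number to believe when asking "how much can I allocate?".
awk '/^MemAvailable:/ {printf "available: %d MiB\n", $2 / 1024}' /proc/meminfo
```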
Actually, it depends on what you mean by "memory".
The memory that is literally free is 155M.
However, the server has 9.1GB set aside for buffer/cache, and since the kernel can reclaim most of that on demand, 9.0GB shows up as available for applications, so it is effectively unused/free.
If you are concerned about system performance only, this will not degrade it until a lot of swapping occurs.
But note that only 155MB is immediately free; anything an application needs beyond that has to be reclaimed from buffer/cache first.
I recently looked at my AWS EC2 instance's stats in an SSH helper program (not PuTTY).
I saw the following:
[centos@ip-172-31-xx-xx ~]$ free -h
             total       used       free     shared    buffers     cached
Mem:          1.8G       1.0G       869M       144K       137M       267M
-/+ buffers/cache:       600M       1.2G
Swap:           0B         0B         0B
I understand that buffers and cached memory are reclaimable, so they don't count as real usage. But I didn't understand this line:
-/+ buffers/cache: 600M 1.2G
What does it mean?
As an alternative, look at the contents of /proc/meminfo.
For example:
grep MemAvailable /proc/meminfo
and:
cat /proc/meminfo
Notice that MemAvailable is only present in modern Linux kernels (not RHEL/CentOS 6, unless you run it with a newer kernel, like Oracle Unbreakable Linux does).
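On kernels without MemAvailable, a rough sketch of the same idea is to fall back to MemFree + Buffers + Cached (an approximation; it ignores the fact that not all cache is reclaimable):

```shell
# Print MemAvailable (in kB) if the kernel provides it, otherwise
# approximate it as MemFree + Buffers + Cached.
awk '/^MemAvailable:/ {a = $2}
     /^MemFree:/      {f = $2}
     /^Buffers:/      {b = $2}
     /^Cached:/       {c = $2}
     END {print (a ? a : f + b + c), "kB"}' /proc/meminfo
```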
For fun and education look also at: https://www.linuxatemyram.com/
For more convenient insight into your system's resource usage, you may be interested in something like atop: https://haydenjames.io/use-atop-linux-server-performance-analysis/ or one of the other top tools like these: https://haydenjames.io/alternatives-top-htop/
I'm just no big fan of free so I avoid it like the plague ;-)
According to the post Meaning of the buffers/cache line in the output of free, the first value is the used memory minus the buffers and cache, and the second is the free memory plus the buffers and cache.
You can verify the values: take the sum of buffers and cached (about 400M), subtract it from used (1.0G - 400M = 600M), and add it to free (869M + 400M ≈ 1.2G).
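The same arithmetic can be reproduced from /proc/meminfo on any system (a sketch; values printed in MiB):

```shell
# The "-/+ buffers/cache" line of old free output:
#   used = MemTotal - MemFree - Buffers - Cached
#   free = MemFree + Buffers + Cached
awk '/^MemTotal:/ {t = $2} /^MemFree:/ {f = $2}
     /^Buffers:/  {b = $2} /^Cached:/  {c = $2}
     END {printf "-/+ buffers/cache: %d %d\n",
                 (t - f - b - c) / 1024, (f + b + c) / 1024}' /proc/meminfo
```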
We have two machines with identical configuration and use (we have two balanced Siebel application servers in them).
Normally, we have a very similar RAM usage in them (around 7 Gb).
Recently, we've had a sudden increase of RAM usage in only one of them, and now that machine is at close to 14 GB of utilization.
So, for very similar boxes, one of them is using 7 GB of RAM while the other one is consuming 14 GB.
Now, using the ps aux command to determine which process is using all this additional memory, we see memory consumption is very similar in both machines. Somehow, we don't see any process that accounts for those 7 GB of additional RAM.
Let's see:
Machine 1:
             total       used       free     shared    buffers     cached
Mem:         15943      15739        204          0        221       1267
-/+ buffers/cache:      14249       1693
Swap:         8191          0       8191
So, we have 14249 Mb usage of RAM.
Machine 2:
             total       used       free     shared    buffers     cached
Mem:         15943      15636        306          0        962       6409
-/+ buffers/cache:       8264       7678
Swap:         8191          0       8191
So, we have 8264 Mb usage of RAM.
I would expect the sum of the Resident Set Size (RSS) memory reported by ps to be equal to or bigger than this value. According to this answer, RSS is how much memory is allocated to the process and is in RAM (including memory from shared libraries). We don't have any memory in swap.
However:
Machine 1:
ps aux | awk 'BEGIN {sum=0} {sum +=$6} END {print sum/1024}'
8357.08
8357.08 < 14249 -> NOK!
Machine 2:
ps aux | awk 'BEGIN {sum=0} {sum +=$6} END {print sum/1024}'
8468.63
8468.63 > 8264 -> OK
What do I get wrong? How can I find where this "missing" memory is?
Thank you in advance
If the two are virtual machines, maybe the "missing" memory is occupied by the balloon driver, especially if they are hosted on VMware ESXi.
I recently encountered a similar scenario: the sum of all process RSS was 14GB, while the free command showed 26GB used, so 12GB of memory was missing.
After searching the internet, I followed this article and executed the command vmware-toolbox-cmd stat balloon on my VM; the console showed 12xxxMB (used by balloon). Bingo!
I am using the "free -m -t" command to monitor my Linux system and get:
             total       used       free     shared    buffers     cached
Mem:         64334      64120        213          0        701      33216
-/+ buffers/cache:      30202      34131
Swap:          996          0        996
Total:       65330      64120       1209
it means about 30GB of physical memory is used by user processes.
But when using the top command and sorting by memory usage, only 3~4GB of memory is used by all the application processes.
Why does this inconsistency happen?
As I understand it, the amount of memory that top shows as used includes cold data from older processes that are no longer running. If such a process is restarted, the required data may still be in memory, enabling the system to start it faster and more efficiently instead of always reloading the data from disk.
Or, in short: Linux generally frees cold data in memory as late as possible.
Hope that clears it up :)
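You can see the gap for yourself by comparing the RSS total from ps against what free reports as used (a sketch; the difference is mostly page cache and kernel memory, which belong to no process):

```shell
# Sum the resident set sizes of all processes (ps reports RSS in KiB).
ps aux | awk 'NR > 1 {sum += $6} END {printf "RSS total: %d MiB\n", sum / 1024}'
# Compare with what free counts as "used".
free -m | awk '/^Mem:/ {print "free reports used:", $3, "MiB"}'
```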
I have a Linux hardware server with 16GB of physical memory, running some applications. The server has been up for around 365 days now, and I am observing "free -m" showing that memory is running low.
             total       used       free     shared    buffers     cached
Mem:         14966      13451       1515          0        234        237
-/+ buffers/cache:      12979       1987
Swap:         4094        367       3727
I understand 1987 is the actual free memory in the system, which is less than 14%. If I add up the %MEM column in the "ps -A v" output, or from "top", it does not add up to 100%.
I need to understand why the memory has gone so low.
Update (29/Feb/2012):
Let me split this problem into two parts:
1) System having less free memory.
2) Identifying where the used memory has gone.
For 1), I understand that if the system is running low on free memory, we may see gradual degradation in performance. At some point, paging would give additional free memory back to the system, restoring performance. Correct me if I am wrong on this.
For 2), now this is what I want to understand: where has the used memory vanished? If I sum up the %MEM in the output of "ps -A v" or "top -n 1 -b", it comes to no more than 50%. So where do I account for the remaining 40% of untraceable memory? We have our own kernel modules on the server. If these modules leak memory, would that leakage be accounted for? Is it possible to know the amount of leakage in kernel modules?
It's not running low. Free memory is running low. But that's fine, since free memory is completely useless. (Free memory is memory that is providing no benefit. Free memory is memory that would be just as useful sitting on your shelf as in your computer.)
Free memory is bad; it serves no purpose. Low free memory is good: it means your system has found some use for most of your memory.
So what's bad? If your system is slow because it doesn't have enough memory in use.
I was able to identify and solve my issue. But it was not without the help of the information present at http://linux-mm.org/Low_On_Memory.
The memory reported in slabinfo for dentry was around 5GB. After issuing the "sync" command, the dirty pages got synced to the hard drive, and the command "echo 3 > /proc/sys/vm/drop_caches" freed up some more memory by dropping further caches.
In addition to the literature on the above website, memory is reclaimed by the kernel at a rate dependent on vfs_cache_pressure (/proc/sys/vm/vfs_cache_pressure).
Thanks to all for your help.
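For anyone hitting the same issue, the steps above can be sketched as follows (drop_caches only discards clean, reclaimable caches, so it is safe, though performance dips until the caches warm up again):

```shell
# How aggressively the kernel reclaims dentry/inode caches (default 100;
# higher values make the kernel reclaim them sooner).
cat /proc/sys/vm/vfs_cache_pressure
# Flush dirty pages to disk first.
sync
# Then, as root, drop the page cache plus dentries and inodes:
#   echo 3 > /proc/sys/vm/drop_caches
```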
see http://www.linuxatemyram.com/