Making sense of the Linux memory usage summary in the console - linux

I recently checked my AWS EC2 instance's state in an SSH client (not PuTTY).
I saw the following:
[centos#ip-172-31-xx-xx ~]$ free -h
             total       used       free     shared    buffers     cached
Mem:          1.8G       1.0G       869M       144K       137M       267M
-/+ buffers/cache:       600M       1.2G
Swap:           0B         0B         0B
I understand that the buffers and cached usage is reclaimable, so it effectively counts as free. But I didn't understand this line:
-/+ buffers/cache: 600M 1.2G
What does it mean?

As an alternative, look at the contents of /proc/meminfo.
For example:
grep MemAvailable /proc/meminfo
and:
cat /proc/meminfo
Notice that MemAvailable is only present in modern Linux kernels (not RHEL/CentOS 6, unless you run it with a newer kernel, like Oracle Unbreakable Linux does).
For fun and education look also at: https://www.linuxatemyram.com/
For more convenient information on your system's resource usage you may be interested in something like atop: https://haydenjames.io/use-atop-linux-server-performance-analysis/ or one of the other top-like tools listed here: https://haydenjames.io/alternatives-top-htop/
I'm just no big fan of free so I avoid it like the plague ;-)

According to the post Meaning of the buffers/cache line in the output of free, that line shows the used memory minus the buffers and cache, and the free memory plus the buffers and cache.
You can calculate the values yourself: take the sum of buffers and cached (~400M), subtract it from used (1.0G - 400M = 600M), and add it to free (869M + 400M ≈ 1.2G).
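You can reproduce that arithmetic straight from /proc/meminfo; here is a minimal sketch (values there are in kB, so the script converts to MiB, and free's own rounding may differ slightly):
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2}
     END {
         # used minus buffers/cache, then free plus buffers/cache, as in free(1)
         printf "-/+ buffers/cache:   %d MiB   %d MiB\n", (t-f-b-c)/1024, (f+b+c)/1024
     }' /proc/meminfo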

Related

free command: Which column is most relevant?

I am running a program on a Debian 9 cloud server with 16G of RAM. I am concerned the program may be stressing memory, so I have it run the 'free -h' command as it cycles through a loop. I got the following output toward the end of the program, when memory consumption is maximal:
              total        used        free      shared  buff/cache   available
Mem:            15G        6.4G        155M         10M        9.1G        9.0G
Swap:          511M         20K        511M
If you look at the 'free' column it looks like there is only 155M free, but if you look at the 'available' column it looks like 9G is available. So, depending on the column, it looks like I have very little memory available, or lots of memory. Which column should I believe?
I've consulted 'man free' but I find it inscrutable.
Memory that is free is completely unused at this point. This number will generally and ideally be very low, since the OS tries to use as much of this memory as possible for buffering and caching.
The memory that is freely available to your application is, in fact, the memory counted in the buff/cache column: most of it can be reclaimed on demand, which is why the available column is so much larger than free.
If your program really ran out of memory, the kernel would start swapping, pushing data out to disk to free up RAM. The fact that only 20K of swap is in use is another indicator that your program is not running out of memory.
It actually depends on what you mean by memory in this context.
The memory that is strictly free is 155M.
However, the server is holding 9.1G in buffers/cache, of which 9.0G is reported as available; that memory can be handed back to applications on demand, so it is effectively unused/free.
If you are only concerned about system performance, it will not degrade until a lot of swapping occurs.
But keep in mind that only 155 MB is strictly free, so a new application that needs more than that forces the kernel to reclaim cache first.
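For the monitoring loop described in the question, it is usually more telling to log MemAvailable than the free column. A minimal sketch, assuming the Debian 9 kernel exposes MemAvailable (kernels 3.14 and later do) and using an arbitrary 5-second interval:
while sleep 5; do
    printf '%s  ' "$(date +%T)"
    # MemAvailable is reported in kB; convert to MiB for readability
    awk '/^MemAvailable:/ {printf "MemAvailable: %d MiB\n", $2 / 1024}' /proc/meminfo
done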

How do I tune node.js memory usage for Raspberry pi?

I'm running node.js on a Raspberry Pi 3 B with the following free memory:
free -m
             total       used       free     shared    buffers     cached
Mem:           973        230        742          6         14        135
-/+ buffers/cache:         80        892
Swap:           99          0         99
How can I configure node (v7) to not use all the free memory? To prolong the SD card life, I would like to prevent it from going to swap.
I am aware of --max_old_space_size:
node --v8-options | grep -A 5 max_old
--max_old_space_size (max size of the old space (in Mbytes))
type: int default: 0
I know part of the answer is application-specific; however, what are some general tips to limit node.js memory consumption and prevent swapping? Also, any other tips to squeeze more free RAM out of the Pi would be appreciated.
I have already set the memory split so that the GPU has the minimum 16 megs of RAM allocated.
The only bulletproof way to prevent swapping is to turn off swapping in the operating system (delete or comment out any swap lines in /etc/fstab for permanent settings, or use swapoff -a to turn off all swap devices for the current session). Note that the kernel is forced to kill random processes when there is no free memory available (this is true both with and without swap).
In node.js, what you can limit is the size of V8's managed heap, and the --max-old-space-size flag you already mentioned is the primary way for doing that. A value around 400-500 (megabytes) probably makes sense for your Raspberry. There's also --max-semi-space-size which should be small and you can probably just stick with the default, and --max-executable-size for generated code (how much you need depends on the app you run; I'd just stick with the default).
That said, there's no way to limit the overall memory usage of the process, because there are other memory consumers outside the managed heap (e.g. node.js itself, V8's parser and compiler). (And what would such a limit do anyway? Crash when memory is needed but not available? The kernel will take care of that case regardless.)
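Putting the two suggestions together, a minimal launch sketch; app.js is a placeholder for your entry point, and 450 MB is just the ballpark suggested above:
sudo swapoff -a                        # turn off all swap devices for this session
node --max-old-space-size=450 app.js   # cap V8's old-generation heap at ~450 MB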

Determine 'Free' Memory in Linux

I'm wondering if there is an easy way to determine the amount of "utilized" memory in Linux. Specifically, memory that is actively in use by the kernel and applications, not counting buffers and cached memory. I'm looking for something analogous to Windows' reporting of used memory in the Task Manager (where you see the percentage of memory used).
So far, the closest solution I can think of to calculate it comes from this link: Determining Free Memory on Linux
On my Ubuntu 13.04 system, after doing a cat /proc/meminfo,
I then calculate 100 - (((MemFree + Buffers + Cached) / MemTotal) * 100), which should give the percentage of "utilized" memory.
This is the closest way I have found to get a physical-memory percentage like the one in Windows' Task Manager.
Does this seem like a valid approach? And if so, are there more straight-forward approaches?
You can use AWK to parse the output of the free command and get a percentage:
free | grep Mem | awk '{print $4/$2 * 100}'
This is a Linux command for the percentage of memory that is free.
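If you want the "utilized" percentage from the question rather than the free percentage, a similar one-liner against /proc/meminfo (using the same pre-MemAvailable fields as the formula above) could be:
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2}
     END {printf "utilized: %.1f%%\n", 100 - ((f + b + c) / t * 100)}' /proc/meminfo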
I am a fan of free -m
             total       used       free     shared    buffers     cached
Mem:          1446       1172        273          0        225        821
-/+ buffers/cache:        126       1320
Swap:         1471          0       1471
This shows the memory stats in a more human-readable way. Another option is:
sar -r 0

On Linux we see the following: Physical, Real, Swap, and Virtual Memory - which should we consider for sizing?

We use a tool (Whats Up Gold) to monitor memory usage on a Linux box.
We see memory usage graphs for Physical, Real, Swap, and Virtual Memory, plus an ALL Memory graph (which is an average of all of these).
The ALL Memory graph shows low memory usage of about 10%.
But Physical memory shows as 95% used.
Swap memory shows as 2% used.
So, do I need more memory on this Linux box?
In other words, should I go by:
the ALL Memory graph (which says the memory situation is good), or
the Physical Memory graph (which says the memory situation is bad)?
Real and Physical
Physical memory is the amount of DRAM that is currently in use. Real memory shows how much DRAM your applications are actually using; it is normally lower than physical memory. The difference comes from Linux caching some disk data: this caching accounts for the gap between physical and real memory. In fact, whenever there is free memory, Linux uses it for caching. Do not worry: as your applications demand memory, they get the cached space back.
Swap and Virtual
Swap is additional space on top of your actual DRAM. It is borrowed from disk, and once your applications fill up the entire DRAM, Linux transfers some unused memory to swap to keep all applications alive. The total of swap and physical memory is the virtual memory.
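To make the "virtual = physical + swap" relationship concrete, here is a minimal sketch that sums the two totals from /proc/meminfo:
awk '/^MemTotal:/  {m=$2}
     /^SwapTotal:/ {s=$2}
     END {printf "virtual = physical + swap = %d MiB\n", (m + s) / 1024}' /proc/meminfo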
Do you need extra memory?
In answer to your question, you need to check real memory. If your real memory is full, you need to get more RAM. Use the free command to check the amount of actually free memory. For example, on my system free says:
$ free
             total       used       free     shared    buffers     cached
Mem:      16324640    9314120    7010520          0     433096    8066048
-/+ buffers/cache:     814976   15509664
Swap:      2047992          0    2047992
You need to check the buffers/cache line. As shown above, there are about 15 GB of really free DRAM (second line) on my system. Check this on your system and find out whether you need more memory or not. The lines correspond to physical, real, and swap memory, respectively.
free -m
As for analysing memory shortage in Linux with the free tool, I have an opinion backed by practical experiments:
~# free -m
              total        used        free      shared  buff/cache   available
Mem:           2000         164         144        1605        1691         103
You should add up 'used' + 'shared' and compare the sum with 'total'.
The other columns are useless; they just cause confusion.
I would say
[ total - (used + shared) ] should always be at least 200 MB.
You can also get almost the same number by checking MemAvailable in /proc/meminfo:
# cat /proc/meminfo
MemAvailable: 107304 kB
MemAvailable is how much memory Linux thinks is really free right now, before active swapping starts.
So at this moment you can consume at most 107304 kB; if you consume more, heavy swapping begins.
MemAvailable also correlates well with what happens in real practice.
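A quick way to script the rule of thumb above (column positions assume the current free layout shown earlier, where 'used' is the 3rd column and 'shared' is the 5th):
free -m | awk '/^Mem:/ {
    headroom = $2 - ($3 + $5)            # total - (used + shared)
    printf "headroom: %d MiB%s\n", headroom, (headroom < 200 ? "  <-- low!" : "")
}'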

Linux memory usage is much larger than the sum of memory used by all applications?

I am using the "free -m -t" command to monitor my Linux system and get:
             total       used       free     shared    buffers     cached
Mem:         64334      64120        213          0        701      33216
-/+ buffers/cache:      30202      34131
Swap:          996          0        996
Total:       65330      64120       1209
This means about 30 GB of physical memory is used by user processes.
But when I use the top command and sort by memory usage, only 3~4 GB of memory is used by all the application processes.
Why does this inconsistency happen?
As I understand it, the amount of memory that top shows as used includes cold memory from older processes that are no longer running. This is because, if such a process is restarted, the required data may still be in memory, allowing the system to start it faster and more efficiently instead of always reloading the data from disk.
Or, in short, Linux generally frees cold data in memory as late as possible.
Hope that clears it up :)
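If you want to see the discrepancy concretely, you can compare the sum of the resident set sizes of all processes with free's used column; a rough sketch (the RSS sum double-counts shared pages, and the used column here includes buffers and cache, which is exactly the gap described above):
ps -eo rss= | awk '{sum += $1} END {printf "sum of process RSS: %d MiB\n", sum / 1024}'
free -m | awk '/^Mem:/ {print "free reports used:  " $3 " MiB"}'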
