How to get the total memory usage of a process which forks many children in Linux, using the shell?

Take the http daemon, for example.
I use ps aux | grep httpd | grep -v grep:
USER PID RSS COMMAND
root 14347 3220 /usr/sbin/httpd
apache 14348 2400 /usr/sbin/httpd
apache 14349 2400 /usr/sbin/httpd
apache 14350 2400 /usr/sbin/httpd
I can simply accumulate the RSS fields to get the total memory usage of httpd: 3220 + 2400 + 2400 + 2400 = 10420.
But I know that the child processes share memory with their parent, so there is some double counting here. The actual total memory usage may be less than 10420.
My question is: how do I get the actual memory usage?

If you need to get the actual memory usage, you need to run it within a profiler like Valgrind.
Reference: http://kratos-wiki.cimne.upc.edu/index.php/Checking_memory_use_with_Valgrind
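If re-running the program under a profiler is an option, a minimal sketch with Valgrind's massif heap profiler might look like this (myprog is a placeholder for your binary; a daemon such as httpd would have to be started in the foreground under the profiler):
valgrind --tool=massif ./myprog
ms_print massif.out.*    # massif writes massif.out.<pid>; ms_print summarizes the heap snapshots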

Valgrind is probably your most exact choice, but it can be a bit awkward to use, and it is not reasonable for a production system because of the performance hit.
smem (homepage) (manpage) is a less complicated alternative. PSS (proportional set size) is the figure you're looking for.
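If smem is not available, roughly the same proportional figure can be approximated by summing the Pss lines from /proc/<pid>/smaps for every httpd process. A rough sketch (values are in kB; reading smaps usually requires root or the process owner):
for pid in $(pgrep httpd); do
    awk '/^Pss:/ {s += $2} END {print s+0}' "/proc/$pid/smaps"
done | awk '{total += $1} END {print total, "kB total PSS"}'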

I have used the following command, with Chrome:
ps aux | grep chrome | grep -v grep | awk '{s+=$5} END {print s}'
Note that the column number may vary depending on how ps aux formats its output: with procps, column 5 is VSZ and column 6 is RSS, and both are reported in kilobytes, not bytes. This may or may not be useful to you.
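A variant along the same lines that sums the RSS column instead and converts kilobytes to megabytes (still over-counting shared pages, as the first question above points out):
ps aux | grep '[c]hrome' | awk '{rss += $6} END {printf "%.1f MiB\n", rss/1024}'    # the [c] trick stops grep from matching its own process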

Related

Get available memory in GB using a single bash shell command

The following command returns the available memory in kilobytes:
cat /proc/meminfo | grep MemFree | awk '{ print $2 }'
Can someone suggest a single command to get the available memory in GB?
Just a slight modification to your own magical incantation:
awk '/MemFree/ { printf "%.3f \n", $2/1024/1024 }' /proc/meminfo
P.S.: Dear OP, if you find yourself invoking grep & awk in one line you're most likely doing it wrong ;} ... Same with invoking cat on a single file; that's hardly ever warranted.
The simplest option is the following:
free -h
More details, from the free man page:
DESCRIPTION
free - displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo. The displayed columns are:
total      Total installed memory (MemTotal and SwapTotal in /proc/meminfo)
used       Used memory (calculated as total - free - buffers - cache)
free       Unused memory (MemFree and SwapFree in /proc/meminfo)
shared     Memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available on kernels 2.6.32, displayed as zero if not available)
buffers    Memory used by kernel buffers (Buffers in /proc/meminfo)
cache      Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)
buff/cache Sum of buffers and cache
available  Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
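On kernels that expose MemAvailable (3.14+, per the excerpt above), that field is usually a better answer to "how much is free" than MemFree; a minimal sketch:
awk '/^MemAvailable:/ {printf "%.2f GiB\n", $2/1024/1024}' /proc/meminfo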
freemem_in_gb () {
    read -r _ freemem _ <<< "$(grep --fixed-strings 'MemFree' /proc/meminfo)"
    bc <<< "scale=3;${freemem}/1024/1024"
}
Please note that scale=3 can be changed to some other value for better precision.
So, for example one could write a function that will take a precision argument, like so:
freemem_in_gb () {
    prec=$1
    read -r _ freemem _ <<< "$(grep --fixed-strings 'MemFree' /proc/meminfo)"
    bc <<< "scale=${prec:-3};${freemem}/1024/1024"
}
This takes a precision argument (defaulting to 3) and passes it to bc's scale setting.
Usage example:
$ freemem_in_gb
5.524
$ freemem_in_gb 7
5.5115814
EDIT
Thanks to @Stephen P and @Etan Reisner for leaving comments and improving this answer.
Code edited accordingly.
grep's long option --fixed-strings is used purposely instead of -F or fgrep for explanatory reasons.
Yet another way:
expr $(sed -n '/^MemTotal:/ s/[^[:digit:]]//gp' /proc/meminfo) / 1024 / 1024
Also a bit shorter:
expr $(sed -n '/^MemTotal:/ s/[^0-9]//gp' /proc/meminfo) / 1024 / 1024
And if you like bc and precision that much:
bc <<< "scale=2; $(sed -n '/^MemTotal:/ s/[^[:digit:]]//gp' /proc/meminfo) / 1024 / 1024 "
If you have python, you can do it this way:
To get the total physical memory:
python -c "import os;print(int(round(os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES') / 1024.0**3)))"
In this example, I used round to round to the nearest GB. You can make it into a shell function like so:
get_mem(){
    MEM=$(python -c "import os;print(int(round(os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES') / 1024.0**3)))")
    echo "$MEM"
}
To get free and used memory check out psutil here.
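For example, a one-liner in the same style as above, assuming psutil is installed (pip install psutil):
python -c "import psutil; m = psutil.virtual_memory(); print('%.2f GB available, %.2f GB used' % (m.available / 1024.0**3, m.used / 1024.0**3))"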
Whilst I agree that dividing by 1024 would be more correct, I find that on my various cloud and physical servers this gives neater output:
free -m | awk '/^Mem:/{printf("%.1fGb\n",$2/1000)}'

Linux total disk I/O from already running process

I'm working on a performance tool and I'm interested in the total disk I/O a single process has done since it started. I have the process PID and I can easily get the current I/O rate with tools like iotop or sar, but not the total I/O.
Is this even logged in Linux, and is there a way to get it?
/Mpresmann
You can read the /proc/<PID>/io file for a specific process:
 $ sudo cat /proc/1/io
rchar: 144440702940
wchar: 4615239440674
syscr: 156954128
syscw: 173077623
read_bytes: 113700176646
write_bytes: 100325525146
cancelled_write_bytes: 2596581376
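To turn the two byte counters into something human-readable for an arbitrary PID, a small sketch (reading another process's io file normally requires root or the process owner):
pid=1    # placeholder PID; the example above used PID 1
awk '/^(read|write)_bytes/ {printf "%s %.1f MiB\n", $1, $2/1024/1024}' "/proc/$pid/io"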

Why is the system CPU time (% sy) high?

I am running a script that loads big files. I ran the same script on a single-core OpenSuSe server and on a quad-core PC. As expected, it is much faster on my PC than on the server. But the script slows down the server and makes it impossible to do anything else.
My script is:
for 100 iterations
    Load saved data (about 10 MB)
time myscript (on the PC)
real 0m52.564s
user 0m51.768s
sys 0m0.524s
time myscript (on the server)
real 32m32.810s
user 4m37.677s
sys 12m51.524s
I wonder why "sys" is so high when I run the code on the server. I used the top command to check the memory and CPU usage.
It seems there is still free memory, so swapping is not the reason. %sy is so high that it is probably the cause of the server's slowness, but I don't know what is making %sy so high. The process using the highest percentage of CPU (99%) is "myscript". %wa is zero in the top output, but sometimes it gets very high (50%).
When the script is running, the load average is greater than 1, but I have never seen it as high as 2.
I also checked my disk:
strt:~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 16480 MB in 2.00 seconds = 8247.94 MB/sec
Timing buffered disk reads: 20 MB in 3.44 seconds = 5.81 MB/sec
john#strt:~> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 245G 102G 131G 44% /
udev 4.0G 152K 4.0G 1% /dev
tmpfs 4.0G 76K 4.0G 1% /dev/shm
I have checked these things, but I am still not sure what the real problem on my server is or how to fix it. Can anyone identify a probable reason for the slowness? What could be the solution?
Or is there anything else I should check?
Thanks!
You're getting high sys activity because loading the data requires system calls that execute in the kernel. It might be possible to resolve your slowness problem without upgrading the server: you can modify the scheduling priority. See the man pages for nice and renice, and in particular:
Niceness values range from -20 (the highest priority, lowest niceness) to 19 (the lowest priority, highest niceness).
$ ps -lp 941
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 941 1 0 70 -10 - 1713 poll_s ? 00:00:00 sshd
$ nice -n 19 ./test.sh
My niceness value is 19
$ renice -n 10 -p 941
941 (process ID) old priority -10, new priority 10

How to see top processes sorted by actual memory usage?

I have a server with 12G of memory. A fragment of top is shown below:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12979 frank 20 0 206m 21m 12m S 11 0.2 26667:24 krfb
13 root 15 -5 0 0 0 S 1 0.0 36:25.04 ksoftirqd/3
59 root 15 -5 0 0 0 S 0 0.0 4:53.00 ata/2
2155 root 20 0 662m 37m 8364 S 0 0.3 338:10.25 Xorg
4560 frank 20 0 8672 1300 852 R 0 0.0 0:00.03 top
12981 frank 20 0 987m 27m 15m S 0 0.2 45:10.82 amarok
24908 frank 20 0 16648 708 548 S 0 0.0 2:08.84 wrapper
1 root 20 0 8072 608 572 S 0 0.0 0:47.36 init
2 root 15 -5 0 0 0 S 0 0.0 0:00.00 kthreadd
The free -m shows the following:
total used free shared buffers cached
Mem: 12038 11676 362 0 599 9745
-/+ buffers/cache: 1331 10706
Swap: 2204 257 1946
If I understand correctly, the system has only 362 MB of available memory. My question is: How can I find out which process is consuming most of the memory?
Just as background info, the system is running 64bit OpenSuse 12.
A quick tip: use the top command in Linux/Unix:
$ top
and then hit Shift+M (i.e. type a capital M).
From man top
SORTING of task window
For compatibility, this top supports most of the former top sort keys.
Since this is primarily a service to former top users, these commands do
not appear on any help screen.
command sorted-field supported
A start time (non-display) No
M %MEM Yes
N PID Yes
P %CPU Yes
T TIME+ Yes
Alternatively: hit Shift + F, then choose to order the display by memory usage by hitting the n key, then press Enter. You will see the active processes ordered by memory usage.
First, repeat this mantra for a little while: "unused memory is wasted memory". The Linux kernel keeps around huge amounts of file metadata and files that were requested, until something that looks more important pushes that data out. It's why you can run:
find /home -type f -name '*.mp3'
find /home -type f -name '*.aac'
and have the second find instance run at ridiculous speed.
Linux only leaves a little bit of memory 'free' to handle spikes in memory usage without too much effort.
Second, you want to find the processes that are eating all your memory; in top use the M command to sort by memory use. Feel free to ignore the VIRT column, that just tells you how much virtual memory has been allocated, not how much memory the process is using. RES reports how much memory is resident, or currently in ram (as opposed to swapped to disk or never actually allocated in the first place, despite being requested).
But, since RES will count e.g. /lib/libc.so.6 memory once for nearly every process, it isn't exactly an awesome measure of how much memory a process is using. The SHR column reports how much memory is shared with other processes, but there is no guarantee that another process is actually sharing -- it could be sharable, just no one else wants to share.
The smem tool is designed to help users better gauge just how much memory should really be blamed on each individual process. It does some clever work to figure out what is really unique, what is shared, and proportionally tallies the shared memory to the processes sharing it. smem may help you understand where your memory is going better than top will, but top is an excellent first tool.
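For instance, an invocation along these lines should show a per-process view sorted by PSS with human-readable sizes (options as documented in the smem manpage; treat this as a sketch rather than the definitive form):
smem -k -t -s pss                 # per-process view, sorted by PSS, human-readable sizes, with a totals row
smem -k -s pss -r | head -n 10    # the biggest PSS consumers first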
ps aux | awk '{print $2, $4, $11}' | sort -k2rn | head -n 10
(The -n flag makes sort compare the %MEM field numerically.)
First you should read an explanation on the output of free. Bottom line: you have at least 10.7 GB of memory readily usable by processes.
Then you should define what "memory usage" is for a process (it's not easy or unambiguous, trust me).
Then we might be able to help more :-)
List and Sort Processes by Memory Usage:
ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS
ps aux --sort '%mem'
from procps' ps (default on Ubuntu 12.04) generates output like:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
...
tomcat7 3658 0.1 3.3 1782792 124692 ? Sl 10:12 0:25 /usr/lib/jvm/java-7-oracle/bin/java -Djava.util.logging.config.file=/var/lib/tomcat7/conf/logging.properties -D
root 1284 1.5 3.7 452692 142796 tty7 Ssl+ 10:11 3:19 /usr/bin/X -core :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
ciro 2286 0.3 3.8 1316000 143312 ? Sl 10:11 0:49 compiz
ciro 5150 0.0 4.4 660620 168488 pts/0 Sl+ 11:01 0:08 unicorn_rails worker[1] -p 3000 -E development -c config/unicorn.rb
ciro 5147 0.0 4.5 660556 170920 pts/0 Sl+ 11:01 0:08 unicorn_rails worker[0] -p 3000 -E development -c config/unicorn.rb
ciro 5142 0.1 6.3 2581944 239408 pts/0 Sl+ 11:01 0:17 sidekiq 2.17.8 gitlab [0 of 25 busy]
ciro 2386 3.6 16.0 1752740 605372 ? Sl 10:11 7:38 /usr/lib/firefox/firefox
So here Firefox is the top consumer with 16% of my memory.
You may also be interested in:
ps aux --sort '%cpu'
Building on gaoithe's answer, I made the memory units display in megabytes, sorted by memory descending, and limited the output to 15 entries:
ps -e -orss=,args= |awk '{print $1 " " $2 }'| awk '{tot[$2]+=$1;count[$2]++} END {for (i in tot) {print tot[i],i,count[i]}}' | sort -n | tail -n 15 | sort -nr | awk '{ hr=$1/1024; printf("%13.2fM", hr); print "\t" $2 }'
588.03M /usr/sbin/apache2
275.64M /usr/sbin/mysqld
138.23M vim
97.04M -bash
40.96M ssh
34.28M tmux
17.48M /opt/digitalocean/bin/do-agent
13.42M /lib/systemd/systemd-journald
10.68M /lib/systemd/systemd
10.62M /usr/bin/redis-server
8.75M awk
7.89M sshd:
4.63M /usr/sbin/sshd
4.56M /lib/systemd/systemd-logind
4.01M /usr/sbin/rsyslogd
Here's an example alias to use it in a bash config file:
alias topmem="ps -e -orss=,args= |awk '{print \$1 \" \" \$2 }'| awk '{tot[\$2]+=\$1;count[\$2]++} END {for (i in tot) {print tot[i],i,count[i]}}' | sort -n | tail -n 15 | sort -nr | awk '{ hr=\$1/1024; printf(\"%13.2fM\", hr); print \"\t\" \$2 }'"
Then you can just type topmem on the command line.
How to total up used memory by process name:
Sometimes, even after looking at the biggest single processes, there is still a lot of used memory unaccounted for. To check whether many instances of the same smaller process are using the memory, you can use a command like the following, which uses awk to sum up the total memory used by processes of the same name:
ps -e -orss=,args= |awk '{print $1 " " $2 }'| awk '{tot[$2]+=$1;count[$2]++} END {for (i in tot) {print tot[i],i,count[i]}}' | sort -n
e.g. output
9344 docker 1
9948 nginx: 4
22500 /usr/sbin/NetworkManager 1
24704 sleep 69
26436 /usr/sbin/sshd 15
34828 -bash 19
39268 sshd: 10
58384 /bin/su 28
59876 /bin/ksh 29
73408 /usr/bin/python 2
78176 /usr/bin/dockerd 1
134396 /bin/sh 84
5407132 bin/naughty_small_proc 1432
28061916 /usr/local/jdk/bin/java 7
You can specify which column to sort by, with the following steps:
* run top
* hit Shift + F
* select a column from the list (e.g. n means sort by memory)
* press Enter
You will see the processes sorted by the chosen column.
You can see memory usage by executing this code in your terminal:
$ watch -n2 free -m
$ htop
A snapshot of this very second in time:
ps -U $(whoami) -eom pid,pmem,pcpu,comm | head -n4
Continuously updating
watch -n 1 'ps -U $(whoami) -eom pid,pmem,pcpu,comm | head -n4'
I also added a few goodies here you might appreciate (or you might ignore):
-n 1 makes watch update every second
-U $(whoami) shows only your processes; $(some command) is evaluated immediately
| head -n4 shows only the header and 3 processes at a time, because often you just need the high-usage line items
${1-4} in the function below means: use my first argument $1, defaulting to 4 if I don't provide one
If you are using a mac you may need to install watch first
brew install watch
Alternatively you might use a function
psm(){
    watch -n 1 "ps -eom pid,pmem,pcpu,comm | head -n ${1-4}"
    # EXAMPLES:
    # psm
    # psm 10
}
You have this simple command:
$ free -h

How to minimize memory allocation of mod_perl script?

I have created a simple perl script.
The only thing it does is wait for 5 seconds.
When I spawn the script on the server through mod_perl, it takes a lot of memory.
The instance takes 36 megabytes.
Why is so much memory allocated?
How can I minimize the memory taken from the system by the running script?
This is the output of the "top" utility when running 2 instances of the script.
5162 www-data 25 0 36732 8124 2868 S 1.3 3.1 0:00.05 apache2
5161 www-data 25 0 36732 8124 2868 S 0.7 3.1 0:00.04 apache2
The script.
#!/usr/bin/perl
use CGI;
my $query= new CGI;
my $content = "5 second delay...\n";
print $query->header(
    '-Content-type'   => "text/plain",
    '-Content-Length' => length($content)
);
print $content;
sleep(5);
No, it doesn't take 36 megabytes.
That's the amount of address space allocated in the process. It includes space that is mapped from executables, mmap()'d from files, and crucially space which is shared with other processes.
The vast majority of it will be shared with other processes (particularly other Apache worker processes).
To find out how much memory it's really using, get some Perl memory profiler on the job.
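If you only want a rough private-vs-shared split without a full profiler, the per-mapping counters in /proc/<pid>/smaps can be summed; a hedged sketch, using PID 5162 from the top output above (run as root or as the process owner):
pid=5162
awk '/^Private_(Clean|Dirty):/ {priv += $2}
     /^Shared_(Clean|Dirty):/  {shr  += $2}
     END {printf "private: %d kB, shared: %d kB\n", priv, shr}' "/proc/$pid/smaps"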
