Calculating correct memory utilization from invalid output of pidstat - Linux

I used the command pidstat -r -p <pid> <interval> >> log_Path/MemStat.CSV & to collect memory statistics.
After running this command I found that the RSS, VSZ and %MEM values increased continuously, which was not expected, since pidstat reports values per interval.
After searching the net I found that there was a bug in pidstat and that I needed to update the sysstat package.
(Please refer to the last few statements by the pidstat author at this link: http://sebastien.godard.pagesperso-orange.fr/tutorial.html)
Now, my question is: how do I calculate the correct %MEM utilization from the current output, since we cannot run the test again?
Sample output:
Time        PID    minflt/s  majflt/s  VSZ       RSS     %MEM
9:55:22 AM  18236  1280      0         26071488  119136  0.36
9:55:23 AM  18236  4273      0         27402768  126276  0.38
9:55:24 AM  18236  9831      0         27402800  162468  0.49
9:55:25 AM  18236  161       0         27402800  169092  0.51
9:55:26 AM  18236  51        0         27402800  175416  0.53
9:55:27 AM  18236  6859      0         27402800  198340  0.60
9:55:28 AM  18236  1440      0         27402800  203608  0.62

On the Sysstat tutorial page you refer to, it is said:
I noticed that pidstat had a memory footprint (VSZ and RSS fields)
that was constantly increasing as the time went by. I quickly found
that I had forgotten to close a file descriptor in a function of my
code and that was responsible for the memory leak...!
So, the output of pidstat has never been invalid; on the contrary, the author wrote
pidstat helped me to detect a memory leak in the pidstat command
itself.
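Since pidstat's %MEM is simply RSS divided by the machine's total physical memory, the column can be recomputed (or sanity-checked) offline from the logged RSS values. A minimal sketch in Python, assuming the RSS column is in kB and a hypothetical 32 GB machine (on the real test machine, read the MemTotal line from /proc/meminfo instead):

```python
# Recompute %MEM from logged RSS values: %MEM = RSS / MemTotal * 100.
# MEM_TOTAL_KB is an assumption (a 32 GB box); on the real machine read
# the "MemTotal:" line from /proc/meminfo instead.
MEM_TOTAL_KB = 32 * 1024 * 1024

# (time, RSS in kB) pairs taken from the sample pidstat output above.
samples = [
    ("9:55:22 AM", 119136),
    ("9:55:28 AM", 203608),
]

for ts, rss_kb in samples:
    mem_pct = rss_kb / MEM_TOTAL_KB * 100
    print(f"{ts}  RSS={rss_kb} kB  %MEM={mem_pct:.2f}")
```

With a 32 GB MemTotal, the recomputed percentages land close to the logged 0.36–0.62 range, which is one way to estimate the machine's RAM size from the log itself.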

Related

Is there an equivalent for time([some command]) for checking peak memory usage of a bash command?

I want to figure out how much memory a specific command uses, but I'm not sure how to check its peak memory usage. Is there anything like time [command], but for memory?
Basically, I'm going to have to run an interactive queue using SLURM, test a command for a program I need on a single sample, see how much memory was used, and then submit a bunch of jobs using that info.
Yes, time is the program that monitors other programs and shows the Maximum resident set size. Not to be confused with the time Bash builtin, which only shows real/user/sys times. On Arch Linux you have to install time with pacman -S time; it's a separate package.
$ command time -v echo 1
1
Command being timed: "echo 1"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1968
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 90
Voluntary context switches: 1
Involuntary context switches: 1
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Note:
$ type time
time is a shell keyword
$ time -V
bash: -V: command not found
real 0m0.002s
user 0m0.000s
sys 0m0.002s
$ command time -V
time (GNU Time) 1.9
$ /bin/time -V
time (GNU Time) 1.9
$ /usr/bin/time -V
time (GNU Time) 1.9
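If you want the same "Maximum resident set size" figure programmatically rather than from GNU time's output, POSIX getrusage(2) exposes it as ru_maxrss for terminated children. A small sketch in Python (on Linux, ru_maxrss is reported in kilobytes; the echo command is just a stand-in for the program being measured):

```python
import resource
import subprocess

# Run a child command to completion, then ask the kernel for the peak
# RSS among terminated children -- the same figure GNU time prints as
# "Maximum resident set size".
subprocess.run(["echo", "1"], check=True)
peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print(f"Peak child RSS: {peak_kb} kB")  # kilobytes on Linux
```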

OpenShift app running slow. Low memory even when app is stopped

My app is running slow after a few days. Using the Unix top command, it seems there is not a lot of free memory; see below. Even if I stop the application, about the same amount of memory shows as used. Any ideas why? Does this amount of memory usage look normal with no app running on a small gear? How can I reboot the virtual machine?
Below is the output of the top command with no app running. It shows:
7513700k total, 7327484k used, 186216k free
top - 22:06:26 up 14 days, 5:42, 0 users, load average: 1.83, 2.82, 3.21
Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
Cpu(s): 10.2%us, 26.6%sy, 1.6%ni, 57.4%id, 4.0%wa, 0.0%hi, 0.0%si, 0.2%st
Mem: 7513700k total, 7327484k used, 186216k free, 170244k buffers
Swap: 6249464k total, 4210036k used, 2039428k free, 925320k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
48736 3558 20 0 14908 1176 944 R 0.7 0.0 0:00.04 top
48374 3558 20 0 102m 2684 848 S 0.0 0.0 0:00.00 sshd
48383 3558 20 0 106m 2072 1436 S 0.0 0.0 0:00.19 bash
What type of app are you running? Also, since OpenShift uses cgroups, you'll want to see what your usage is within your cgroup (the top output shows the whole system). Try including the output of for i in $(oo-cgroup-read all); do echo "oo-cgroup-read $i" && oo-cgroup-read $i; done and pay close attention to your memory limits.
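One more thing worth checking before worrying about the "used" figure: on Linux, top counts buffers and page cache as used memory, so a box can look nearly full while a large part of that memory is reclaimable. A rough sketch using the numbers from the top output above (older kernels; newer ones expose MemAvailable in /proc/meminfo directly):

```python
# Figures (in kB) from the Mem:/Swap: lines of the top output above.
total_kb   = 7513700
used_kb    = 7327484
free_kb    = 186216
buffers_kb = 170244
cached_kb  = 925320   # the "cached" figure at the end of the Swap: line

# "used" includes buffers and cache, which the kernel can reclaim, so a
# rough estimate of memory actually available to applications is:
available_kb = free_kb + buffers_kb + cached_kb
print(f"~{available_kb} kB effectively available "
      f"({available_kb / total_kb:.0%} of total)")
```

Note this is only a rough estimate; the heavy swap usage (4210036k used) in that output still suggests real memory pressure at some point.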

Why is the system CPU time (% sy) high?

I am running a script that loads big files. I ran the same script on a single-core OpenSuSE server and on a quad-core PC. As expected, it is much faster on my PC than on the server. But the script slows the server down and makes it impossible to do anything else.
My script is
for 100 iterations
    Load saved data (about 10 MB)
time myscript (in PC)
real 0m52.564s
user 0m51.768s
sys 0m0.524s
time myscript (in server)
real 32m32.810s
user 4m37.677s
sys 12m51.524s
I wonder why "sys" is so high when I run the code on the server. I used the top command to check memory and CPU usage.
It seems there is still free memory, so swapping is not the reason. %sy is very high, and it is probably the reason for the server's slowness, but I don't know what is causing it. The process using the highest percentage of CPU (99%) is "myscript". %wa is zero in the screenshot, but sometimes it gets very high (50%).
While the script is running, the load average is greater than 1, but I have never seen it as high as 2.
I also checked my disk:
strt:~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 16480 MB in 2.00 seconds = 8247.94 MB/sec
Timing buffered disk reads: 20 MB in 3.44 seconds = 5.81 MB/sec
john#strt:~> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 245G 102G 131G 44% /
udev 4.0G 152K 4.0G 1% /dev
tmpfs 4.0G 76K 4.0G 1% /dev/shm
I have checked these things, but I am still not sure what the real problem on my server is or how to fix it. Can anyone identify a probable reason for the slowness? What could be the solution?
Or is there anything else I should check?
Thanks!
You're getting high sys activity because loading the data involves system calls that execute in the kernel. It may be possible to resolve your slowness problem without upgrading the server: you can modify the scheduling priority. See the man pages for nice and renice, and especially:
Niceness values range from -20 (the highest priority, lowest niceness) to 19 (the lowest priority, highest niceness).
$ ps -lp 941
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 941 1 0 70 -10 - 1713 poll_s ? 00:00:00 sshd
$ nice -n 19 ./test.sh
My niceness value is 19
$ renice -n 10 -p 941
941 (process ID) old priority -10, new priority 10

Getting CPU utilization information

How can I get CPU utilization, with time information, for a process on Linux? Basically, I want to let my application run overnight and at the same time monitor its CPU utilization while it runs.
I tried top | grep appName >& log, but it does not seem to write anything to the log. Could someone help me with this?
Thanks.
vmstat and iostat can both give you periodic information of this nature; I would suggest either setting the number of samples manually, or putting a single poll into a cron job and redirecting the output to a file:
vmstat 20 4320 >> cpu_log_file
This would give you a snapshot of usage every 20 seconds for 24 hours (24 × 3600 / 20 = 4320 samples).
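The sample count is just the monitoring window divided by the interval, so it is easy to recompute for a different window. A one-line sketch:

```python
# Number of vmstat samples for a given window: count = window / interval.
interval_s = 20
window_s = 24 * 3600            # 24 hours
count = window_s // interval_s  # 4320 samples
print(f"vmstat {interval_s} {count} >> cpu_log_file")
```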
Install the sysstat package and run sar:
nohup sar -o output.file 12 8 >/dev/null 2>&1 &
Use the top or watch command:
PID COMMAND %CPU TIME #TH #WQ #PORT #MREG RPRVT RSHRD RSIZE VPRVT VSIZE PGRP PPID STATE UID FAULTS COW MSGSENT MSGRECV SYSBSD SYSMACH CSW PAGEINS USER
10764 top 8.4 00:01.04 1/1 0 24 33 2000K 244K 2576K 17M 2378M 10764 10719 running 0 9908+ 54 564790+ 282365+ 3381+ 283412+ 838+ 27 root
10763 taskgated 0.0 00:00.00 2 0 25 27 432K 244K 1004K 27M 2387M 10763 1 sleeping 0 376 60 140 60 160 109 11 0 root
Write a program that invokes your process and then calls getrusage(2) and reports statistics for its children.
You can monitor the time used by your program with top while it is running.
Alternatively, you can launch your application with the time command, which will print the total amount of CPU time used by your program at the end of its execution. Just type time ./my_app instead of ./my_app.
For more info, man 1 time

Why is the SWAP listed in the detail list of the top command greater than in the summary?

The top command results:
Mem: 3991840k total, 1496328k used, 2495512k free, 156752k buffers
**Swap**: 3905528k total, **3980k** used, 3901548k free, 447860k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ **SWAP** COMMAND
28250 www-data 20 0 430m 210m 21m R 63 5.4 0:07.29 **219m** apache2
28266 www-data 20 0 256m 40m 21m S 30 1.0 0:01.94 **216m** apache2
28206 www-data 20 0 260m 44m 21m S 27 1.1 0:10.27 **215m** apache2
28259 www-data 20 0 256m 40m 21m S 26 1.0 0:02.21 **216m** apache2
The detail list shows a group of apache2 processes each using about 210m+ of swap, but the summary reports only 3980k used. The total swap in the detail list is much greater than in the summary. Do the two SWAP figures refer to the same thing?
Quoted from http://www.linuxforums.org/articles/using-top-more-efficiently_89.html :
VIRT = RES + SWAP
As explained previously, VIRT includes anything inside the task's address space, no matter whether it is in RAM, swapped out, or still not loaded from disk, while RES represents the total RAM consumed by this task. So SWAP here represents the total amount of data that is swapped out OR still not loaded from disk. Don't be fooled by the name: it doesn't just represent swapped-out data.
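This relation is easy to check against the rows in the question: converting VIRT and RES to the same unit, VIRT − RES lands on the SWAP column (give or take a megabyte, since top rounds each column independently). A quick sketch:

```python
# (PID, VIRT MB, RES MB, SWAP MB) rows from the top output above.
rows = [
    (28250, 430, 210, 219),
    (28266, 256, 40, 216),
    (28206, 260, 44, 215),
    (28259, 256, 40, 216),
]

for pid, virt, res, swap in rows:
    derived = virt - res
    # top rounds each column independently, so allow ~1 MB of slack.
    print(f"PID {pid}: VIRT-RES = {derived} MB, SWAP column = {swap} MB")
```

All four rows agree within 1 MB, which confirms that top's per-process SWAP column is derived as VIRT − RES, not read from the swap device, and hence why it dwarfs the 3980k in the summary line.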
