I am trying to capture the CPU idle time from top.
The following code captures the load average; I am trying to adapt it so that it captures the CPU idle time instead.
Any ideas welcome.
top -bn1 | grep load | awk '{printf "CPU load %: %.2f\n", $(NF-2)}'
The above code outputs: CPU load %: 0.44
I want to change it so that it outputs the CPU idle time instead, e.g.:
CPU Id %: 92.9%
Example Top output:
top - 10:35:25 up 1 day, 16:06, 5 users, load average: 0.24, 0.16, 0.15
Tasks: 210 total, 2 running, 198 sleeping, 10 stopped, 0 zombie
%Cpu(s): 2.2 us, 0.2 sy, 4.7 ni, 92.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.1 st
KiB Mem: 16433064 total, 1353396 used, 15079668 free, 180944 buffers
KiB Swap: 0 total, 0 used, 0 free. 700468 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24293 ubuntu 30 10 32828 2576 1608 S 19.3 0.0 0:25.30 fiberlamp
2173 ubuntu 20 0 51200 16496 4952 S 9.3 0.1 263:34.18 Xvnc4
12648 ubuntu 20 0 23668 1732 1180 R 0.3 0.0 0:04.25 top.....
........
grep for '%Cpu(s)' instead of 'load':
top -bn1 | grep '%Cpu(s)' | awk -F',' '{printf "CPU Id %%: %.2f%%\n", $4}'
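Depending on the top version, the %Cpu(s) line is not always evenly spaced (e.g. "ni,100.0 id" with no space after the comma), so a variant that locates the field by its "id" label rather than by position can be more robust. A minimal sketch:
top -bn1 | awk -F',' '/%Cpu/ { for (i = 1; i <= NF; i++) if ($i ~ / id$/) printf "CPU Id %%: %.1f%%\n", $i }'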
in reference to this from perlvar:
In multithreaded scripts Perl coordinates the threads so that any thread may modify its copy of the $0 and the change becomes visible to ps(1) (assuming the operating system plays along). Note that the view of $0 the other threads have will not change since they have their own copies of it.
I don't seem to be getting this behavior. Instead, $0 appears to be shared by all my threads, and in the ps output the top-level Perl interpreter's cmdline is changed to whatever value the last thread assigned.
For example, my goal is to go from this, where every thread shows the same name in the COMMAND column:
top -b -n 1 -H -p 223860
top - 17:54:56 up 73 days, 2:15, 7 users, load average: 0.23, 0.70, 0.92
Threads: 22 total, 0 running, 22 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32358832 total, 26418060 free, 1090028 used, 4850744 buff/cache
KiB Swap: 16777212 total, 16149116 free, 628096 used. 30804716 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
223860 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:45.75 tool_reader.
223863 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:03.88 tool_reader.
223864 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:04.67 tool_reader.
223865 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:00.00 tool_reader.
223867 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:34.62 tool_reader.
223868 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:03.85 tool_reader.
223869 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:04.41 tool_reader.
223870 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:00.00 tool_reader.
223872 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:40.14 tool_reader.
to something more useful under the COMMAND column, like the following, with the main thread keeping its original name:
|
|
|
v
top -b -n 1 -H -p 223860
top - 17:54:56 up 73 days, 2:15, 7 users, load average: 0.23, 0.70, 0.92
Threads: 22 total, 0 running, 22 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32358832 total, 26418060 free, 1090028 used, 4850744 buff/cache
KiB Swap: 16777212 total, 16149116 free, 628096 used. 30804716 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
223860 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:45.75 tool_reader.
223863 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:03.88 syncer
223864 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:04.67 partition1
223865 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:00.00 partition2
223867 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:34.62 partition3
223868 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:03.85 input_merger1
223869 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:04.41 input_merger2
223870 app_sy+ 20 0 3833640 166216 2500 S 0.0 0.5 0:00.00 input_merger3
Would anyone know how this can be done? I'm on a rather old Perl, version 5.16.3, in case this is a known bug.
Update 2020-10-21:
I just discovered an even better way to achieve this: the prctl(2) Linux syscall. https://man7.org/linux/man-pages/man2/prctl.2.html
Troels Liebe Bentsen has kindly contributed a module that handles this neatly.
https://metacpan.org/pod/Sys::Prctl
Far more seamless than fiddling with $0!
Original Post content continues below....
ps -T -p 126193
PID SPID TTY TIME CMD
126193 126193 pts/11 00:00:00 test2.pl
126193 126194 pts/11 00:00:00 __thr1 #<--- now unique
126193 126195 pts/11 00:00:00 __thr2 #<--- now unique
top -H -p 126193
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
126193 xxxxxxx+ 20 0 305948 7972 2244 S 0.0 0.0 0:00.01 test2.pl
126194 xxxxxxx+ 20 0 305948 7972 2244 S 0.0 0.0 0:00.00 __thr1
126195 xxxxxxx+ 20 0 305948 7972 2244 S 0.0 0.0 0:00.00 __thr2
##################
Thanks to @ikegami, I found a solution that works.
A couple of small changes were needed to keep the child threads alive and to stop the main thread from joining them back in. (Based on how it behaves, I assume that if the child threads reach the end of the sub they were spawned with, they are terminated and Linux cleans them up, even though the main thread hasn't called join on them yet.)
To anyone else reading this page in the future: I would love to know why pstree, ps and top each show a different result.
Anyhow, leaving this info and the comparisons here in case it's helpful to others.
End result:
Using ps, it does NOT appear to be possible to get the modified per-thread names. It only shows whatever string the last thread that touched $0 set it to.
Similarly, pstree (pstree -p -a -l 144741) only shows the main thread's name for each child and shows nothing about the changes the threads made.
But, very fortunately, top works: top -H -b -p 180547 clearly shows the main thread and each child thread under the name it set via $0.
Example from ps:
app_sy+ 180547 131203 180547 0 3 18:08 pts/1 00:00:00 thr2
app_sy+ 180547 131203 180548 0 3 18:08 pts/1 00:00:00 thr2
app_sy+ 180547 131203 180549 0 3 18:08 pts/1 00:00:00 thr2
Example using pstree:
test.pl,180547
|-{test.pl},180548
`-{test.pl},180549
And the winner, top -n 1 -H -b -p 180547, which successfully shows the distinct name each thread applied via $0:
top - 18:00:08 up 69 days, 8:53, 3 users, load average: 4.10, 3.95, 4.05
Threads: 3 total, 0 running, 3 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.7 us, 33.5 sy, 0.0 ni, 58.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 13144056+total, 1351640 free, 45880316 used, 84208608 buff/cache
KiB Swap: 16777212 total, 16777212 free, 0 used. 78196224 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
180547 app_+ 20 0 299572 7152 2144 S 0.0 0.0 0:00.00 test.pl
180548 app_+ 20 0 299572 7152 2144 S 0.0 0.0 0:00.02 thr1
180549 app_+ 20 0 299572 7152 2144 S 0.0 0.0 0:00.01 thr2
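For anyone comparing the three tools, it can help to read the /proc entries they are based on. A minimal sketch, using the example PID 180547 from the runs above (ps -ef/-eLf prints the process-wide cmdline, while top -H shows each thread's comm):
pid=180547                                 # example PID from the runs above
tr '\0' ' ' < /proc/$pid/cmdline; echo     # argv/cmdline, shared by the whole process
for t in /proc/$pid/task/*; do             # one comm file per thread; this is what top -H displays
    printf '%s %s\n' "${t##*/}" "$(cat "$t/comm")"
done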
Adding the modified version of ikegami's code here for future reference for others looking at this page, saved as test.pl:
#!/usr/bin/perl
use strict;
use warnings;
use feature qw( say );
use threads;
use threads::shared;

# Shared counter that steps the two threads through the demo in lock-step.
my $phase :shared = 0;
my $main_pid = $$;

sub advance {
    lock $phase;
    ++$phase;
    cond_signal($phase);
}

sub wait_for {
    lock $phase;
    cond_wait($phase) while $phase != $_[0];
}

sub advance_and_wait_for {
    lock $phase;
    ++$phase;
    cond_signal($phase);
    cond_wait($phase) while $phase != $_[0];
}

my $thr1 = async {
    my $id = 'thr1';
    wait_for(0);
    advance_and_wait_for(2);
    say "[$id] Setting \$0 to $id.";
    $0 = $id;
    say "[$id] \$0 = $0";
    print `ps -eLf | grep $main_pid` =~ s/^/[$id] /mrg;
    advance_and_wait_for(4);
    say "[$id] \$0 = $0";
    advance();
    # Keep the thread alive so its name stays visible in top/ps.
    while (1) {
        sleep 1;
    }
};

my $thr2 = async {
    my $id = 'thr2';
    wait_for(1);
    advance_and_wait_for(3);
    say "[$id] \$0 = $0";
    say "[$id] Setting \$0 to $id.";
    $0 = $id;
    say "[$id] \$0 = $0";
    print `ps -eLf | grep $main_pid` =~ s/^/[$id] /mrg;
    advance();
    # Keep the thread alive so its name stays visible in top/ps.
    while (1) {
        sleep 1;
    }
};

sleep 5;
print "Main thread pid is $main_pid - and \$0 is ($0)\n";
my $waitfor = <STDIN>;    # pause so the threads can be inspected from another terminal
$_->join for $thr1, $thr2;
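A quick way to try it out (a sketch; <PID> stands for whatever the script prints in its "Main thread pid is ..." line):
# terminal 1
perl test.pl
# terminal 2, after the script has printed its PID
top -H -b -n 1 -p <PID>    # per-thread names set via $0
ps -T -p <PID>             # compare with what ps reports for the same threads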
This is an 8-core machine.
The %Cpu(s) line shows 99.4 id, yet one java process is already using 82.7% CPU.
The top output is as follows:
top - 09:04:09 up 17:22, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 142 total, 1 running, 74 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.3 st
KiB Mem : 62876640 total, 9865752 free, 51971500 used, 1039388 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 10121552 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4859 root 20 0 50.1g 49.4g 144356 S 82.7 82.4 20:28.62 java
3847 root 20 0 6452 792 716 S 0.3 0.0 0:09.50 rngd
1 root 20 0 43724 5680 4196 S 0.0 0.0 0:02.30 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
The answer:
We have a CPU with 8 cores, and from top we have:
process cpu_usage = 82.7%
idle = 99.4% id
Let's calculate. 8 cores at full usage give 800%, so:
cpu usage % = (82.7 / 800) * 100% = 10.3% (calculated)
cpu idle % = 100 - 10.3 = 89.7% (calculated)
89.7% is somewhat different from the reported 99.4%, but it gives you a flavour of what is going on.
I suppose your main confusion is about the 82.7%. It does not mean 82.7% usage of the whole CPU; that would only be true if the CPU had a single core. For multi-core CPUs a per-process figure of 100% means that roughly one core is fully busy, not the whole CPU.
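To turn top's per-process figure into a share of the whole machine in a script, you can divide it by the core count. A minimal sketch, assuming the java PID 4859 from the output above (nproc reports the number of cores):
top -b -n 1 -p 4859 | awk -v cores="$(nproc)" '$1 == 4859 { printf "%.1f%% of one core = %.1f%% of the whole machine\n", $9, $9 / cores }'
With the numbers above this prints: 82.7% of one core = 10.3% of the whole machine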
It depends on what you are running: services or apps (Eclipse, Android Studio, a JBoss server, etc.), so check them.
Normally the kernel distributes processes across the cores to handle multitasking, so a single process can take up a large part of one core while the other cores, and the CPU as a whole, are not under heavy load.
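To see how that load is actually spread over the individual cores, pressing 1 in interactive top shows a per-core breakdown; for a script, mpstat reports the same thing (a sketch, assuming the sysstat package is installed):
mpstat -P ALL 1 1    # one 1-second sample with per-core %usr, %sys and %idle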
In Linux, I'm writing a script to log system parameters to a file.
How can I get the name of the task consuming the most CPU resources, and the percentage of CPU used by that task?
For example, using top:
$ top -bin 1
top - 19:11:05 up 2:57, 1 user, load average: 1,43, 1,47, 1,06
Tasks: 178 total, 2 running, 124 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5,8 us, 1,3 sy, 0,0 ni, 92,8 id, 0,0 wa, 0,0 hi, 0,1 si, 0,0 st
KiB Mem : 3892704 total, 1594348 free, 1282992 used, 1015364 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 2335136 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11883 root 20 0 645964 104036 87792 R 93,8 2,7 18:07.03 Xorg
12030 raf 20 0 412824 35632 14860 S 12,5 0,9 2:44.51 xfsettingsd
23468 raf 20 0 39648 3864 3332 R 6,2 0,1 0:00.02 top
From the example above, what I would like is a [sequence of [piped]] bash command[s] that outputs:
93.8 Xorg
You can try
ps -eo %cpu,comm --sort %cpu | tail -n 1
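One caveat: ps computes %cpu as an average over the entire lifetime of the process, so a process that was recently busy but is now idle can still rank first. If you want to take the figure from top's own output instead, a sketch along these lines should work, since batch-mode top sorts tasks by %CPU by default:
top -b -i -n 1 | awk 'hit && NF { printf "%s %s\n", $9, $12; exit } /^[ ]*PID[ ]+USER/ { hit = 1 }'
The decimal separator follows your locale, so with the sample output above this would print 93,8 Xorg.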
I have this script:
echo $(date +%F-%H%M ) $( top -n 1 -b -c -p $ZK_PID,$KAFKA_PID,$AGENT_PID,$ENGINE_PID | tail -n 1) >> `hostname`_top.log
which produces the following output:
top - 06:32:15 up 7 days, 21:22, 2 users, load average: 1.71, 1.66, 1.66
Tasks: 3 total, 0 running, 3 sleeping, 0 stopped, 0 zombie
%Cpu(s): 22.8 us, 15.9 sy, 0.0 ni, 61.1 id, 0.1 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 14360876 total, 191296 free, 10837496 used, 3332084 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 3066536 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
48721 equalum 20 0 12.828g 1.214g 6756 S 3.0 8.9 2176:15 /usr/lib/+
52019 equalum 20 0 5809096 1.436g 5392 S 1.3 10.5 450:51.78 java -Dna+
48411 equalum 20 0 4150868 403536 4992 S 0.0 2.8 3:56.87 /usr/lib/+
I am trying to get only the %CPU and %MEM values for those processes. How can I do that?
Try adding this after your script code:
| grep -E '%CPU|%MEM'
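A sketch of another way to pull just those numbers, reusing the PID variables from the script above (ps prints the values directly, and the trailing = signs suppress the column headers):
ps -o pid=,pcpu=,pmem= -p "$ZK_PID,$KAFKA_PID,$AGENT_PID,$ENGINE_PID"
Or, keeping the existing top call, print the PID, %CPU and %MEM columns of each task line (the task lines are the ones that start with a number):
top -b -n 1 -p "$ZK_PID,$KAFKA_PID,$AGENT_PID,$ENGINE_PID" | awk '$1 ~ /^[0-9]+$/ { print $1, $9, $10 }'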
On RedHat Linux 6.2, I'm running free -m and it shows nearly all 8 GB as used:
total used free shared buffers cached
Mem: 7989 7734 254 0 28 7128
-/+ buffers/cache: 578 7411
Swap: 4150 0 4150
But at the same time in top -M I cannot see any processes using all this memory:
top - 16:03:34 up 4:10, 2 users, load average: 0.08, 0.04, 0.01
Tasks: 169 total, 1 running, 163 sleeping, 5 stopped, 0 zombie
Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 98.6%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7989.539M total, 7721.570M used, 267.969M free, 28.633M buffers
Swap: 4150.992M total, 0.000k used, 4150.992M free, 7115.312M cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1863 sroot 20 0 398m 24m 9.8m S 0.3 0.3 3:12.87 App1
1 sroot 20 0 2864 1392 1180 S 0.0 0.0 0:00.91 init
2 sroot 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 sroot RT 0 0 0 0 S 0.0 0.0 0:00.07 migration/0
4 sroot 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
5 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
6 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
7 sroot RT 0 0 0 0 S 0.0 0.0 0:00.08 migration/1
8 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/1
I also tried this ps mem script, but it only shows about 400 MB of memory being used.
Don't look at the "Mem" line, look at the one below it.
The Linux kernel consumes as much memory as it can to provide the I/O cache (and other non-critical buffers, but the cache is going to be most of this usage). This memory is relinquished to processes when they request it. The "-/+ buffers/cache" line is showing you the adjusted values after the I/O cache is accounted for, that is, the amount of memory used by processes and the amount available to processes (in this case, 578MB used and 7411MB free).
The difference of used memory between the "Mem" and "-/+ buffers/cache" line shows you how much is in use by the kernel for the purposes of caching: 7734MB - 578MB = 7156MB in the I/O cache. If processes need this memory, the kernel will simply shrink the size of the I/O cache.
Also, as the first line shows
total used free shared buffers cached
Mem: 7989 7734 254 0 28 7128
-/+ buffers/cache: 578 7411
If we add cached (7128) + buffers (28) + free (254), we get approximately the free value (7411) from the second line:
7128 + 28 + 254 = 7410
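If you want those adjusted figures in a script, the -/+ buffers/cache line can be picked out directly. A minimal sketch, assuming the older free output format shown above (newer procps-ng versions drop this line in favour of an "available" column):
free -m | awk '/buffers\/cache/ { printf "used by processes: %s MB, available to processes: %s MB\n", $3, $4 }'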
If the cached value is small, try this command to sort processes by resident memory:
ps aux --sort -rss
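To list only the biggest consumers, add a head on the end, e.g.:
ps aux --sort -rss | head -n 11    # header line plus the ten largest resident sets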