I would like to know the CPU usage, and I tried to get it through the top command.
But the summary at the top shows about 19% CPU used, while in the process list one process shows 100% CPU.
So please let me know how to get the exact value for the CPU usage.
top - 05:14:39 up 34 days, 14:57, 1 user, load average: 0.20, 0.31, 0.30
Tasks: 231 total, 2 running, 184 sleeping, 1 stopped, 1 zombie
%Cpu(s): 19.0 us, 2.3 sy, 0.0 ni, 78.4 id, 0.1 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 16123248 total, 3329216 free, 7078736 used, 5715296 buff/cache
KiB Swap: 1048572 total, 743164 free, 305408 used. 9380980 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
27928 root 20 0 415656 10196 5212 R 100.0 0.1 0:00.17 runc:[2:INIT]
27933 karthik+ 20 0 33992 3496 2956 R 6.2 0.0 0:00.01 top
Thanks in Advance
top's summary line only shows total CPU usage, which is different from the per-process values you see in the task list. Try other commands that show more detailed information:
On Linux: try mpstat -P ALL 1, which shows the CPU load per core.
On macOS: try installing htop.
The CPU usage shown by top and ps is not exactly the same statistic.
By default, top shows the summary of CPU usage across all cores. If you press '1' in top, you will see the usage for each core.
The %CPU column of ps shows the total CPU time divided by the process's elapsed run time. A value of 100 may mean that your process kept a single core 100% busy for its whole run time; on a 4-core CPU, top's summary line would then show around 25% total CPU usage.
The 'man' page for both commands may provide some additional details as well.
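For example, to compare the two views for a single process (a minimal sketch; PID 27928 is taken from the output above, substitute your own):
top -b -d 1 -n 2 -p 27928 | tail -1     # top's %CPU from a fresh 1-second sample: 100% means one core fully busy
ps -o pid,%cpu,etime,comm -p 27928      # ps's %CPU: CPU time averaged over the process's whole lifetime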
I think I have a fairly basic question. I just discovered the GNU parallel package and I think my workflow can really benefit from it!
I am using a loop that goes through my read files and generates the desired output. The command that is executed for each read looks something like this:
STAR --runThreadN 8 --genomeDir star_index/ --readFilesIn R1.fq R2.fq
As you can see, I specified 8 threads, which is the number of threads my virtual machine has.
My question now is the following:
If I use GNU parallel with a command like this:
cat reads | parallel -j 3 STAR --runThreadN 8 --genomeDir star_index/ --readFilesIn {}_R1.fq {}_R2.fq
Can my virtual machine handle the number of threads I specified, if I execute 3 jobs in parallel?
Or do I need 24 threads (3*8 threads) to properly execute this command?
I'm sorry if this is a basic question, I am very new to the field and any help is much appreciated!
The best advice is simply: Try different values and measure.
In parallelization there are sooo many factors that can affect the results: Disk I/O, shared CPU cache, and shared RAM bandwidth just to name three.
top is your friend when measuring. If you can manage to get all CPUs to have <5% idle you are unlikely to go any faster - no matter what you do.
top - 14:49:10 up 10 days, 5:48, 123 users, load average: 2.40, 1.72, 1.67
Tasks: 751 total, 3 running, 616 sleeping, 8 stopped, 4 zombie
%Cpu(s): 17.3 us, 6.2 sy, 0.0 ni, 76.2 id, 0.3 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 31.239 total, 1.441 free, 21.717 used, 8.081 buff/cache
GiB Swap: 117.233 total, 104.146 free, 13.088 used. 4.706 avail Mem
This machine is 76.2% idle. If your processes use loads of CPU then starting more processes in parallel here may help. If they use loads of disk I/O it may or may not help. Only way to know is to test and measure.
top - 14:51:00 up 10 days, 5:50, 124 users, load average: 3.41, 2.04, 1.78
Tasks: 759 total, 8 running, 619 sleeping, 8 stopped, 4 zombie
%Cpu(s): 92.8 us, 6.9 sy, 0.0 ni, 0.1 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
GiB Mem : 31.239 total, 1.383 free, 21.772 used, 8.083 buff/cache
GiB Swap: 117.233 total, 104.146 free, 13.087 used. 4.649 avail Mem
This machine is 0.1% idle. Starting more processes is unlikely to make things go faster.
So increase the parallelization until idle time hits a minimum or until average processing time hits a minimum (--joblog my.log can be useful to see how long a job takes).
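One possible way to run such a sweep (a sketch; reads, star_index/ and the STAR options come from the question, the joblog file names are made up for illustration):
for j in 1 2 3; do
    cat reads | parallel -j $j --joblog star_j${j}.log STAR --runThreadN 8 --genomeDir star_index/ --readFilesIn {}_R1.fq {}_R2.fq
done
Then compare the JobRuntime column of each joblog and the total wall-clock time of each sweep, and keep the -j value that finishes fastest.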
And yes: GNU Parallel is likely to speed up bioinformatics (being written by a fellow bioinformatician).
Consider reading GNU Parallel 2018 (paper: http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html download: https://doi.org/10.5281/zenodo.1146014) Read at least chapter 1+2. It should take you less than 20 minutes. Your command line will love you for it.
Please help!
We use IBM MQ; this is the output of dspmqver:
Name: WebSphere MQ
Version: 8.0.0.14
Level: p800-014-200107.1
BuildType: IKAP - (Production)
Platform: WebSphere MQ for Linux (x86-64 platform)
Mode: 64-bit
O/S: Linux 4.15.3-1-generic
InstName: Installation1
InstDesc:
Primary: Yes
InstPath: /opt/mqm
DataPath: /var/mqm
MaxCmdLevel: 802
LicenseType: Production
About every 1-2 weeks we need to reboot the MQ queue manager because it eats all our RAM. top output:
top - 09:08:52 up 55 days, 22:13, 2 users, load average: 1,02, 0,66, 0,43
Tasks: 164 total, 1 running, 101 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1,8 us, 1,0 sy, 0,0 ni, 96,3 id, 0,5 wa, 0,0 hi, 0,3 si, 0,0 st
KiB Mem : 12296712 total, 151156 free, 11192404 used, 953152 buff/cache
KiB Swap: 2095100 total, 0 free, 2095100 used. 173800 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5946 mqm 20 0 5186568 1,619g 567820 S 1,0 13,8 267:47.39 amqzlaa0
19235 mqm 20 0 4663220 1,516g 394512 S 5,0 12,9 925:24.87 amqrmppa
3180 mqm 20 0 5187452 1,400g 563312 S 1,0 11,9 37:46.55 amqzlaa0
20194 mqm 20 0 4644060 1,398g 393016 S 3,3 11,9 654:11.41 amqrmppa
19569 mqm 20 0 4644188 1,317g 397828 S 1,3 11,2 644:51.19 amqrmppa
24828 mqm 20 0 5186308 1,195g 562544 S 1,7 10,2 92:26.81 amqzlaa0
12922 mqm 20 0 5187712 1,160g 563500 S 1,0 9,9 300:29.25 amqzlaa0
482 mqm 20 0 5187712 1,042g 552212 S 0,0 8,9 273:48.41 amqzlaa0
19367 mqm 20 0 5186308 773968 585808 S 0,7 6,3 143:51.83 amqzlaa0
19214 mqm 20 0 5187628 459176 454008 S 1,3 3,7 174:04.99 amqzlaa0
30683 mqm 20 0 634188 404700 380924 S 0,0 3,3 3:19.41 runmqchl
19204 mqm 20 0 547812 220656 219936 S 1,3 1,8 86:29.25 amqpcsea
11943 mq_expo+ 20 0 1401016 203476 191800 S 0,0 1,7 17:18.51 mq_prometheus
19148 mqm 20 0 2511796 192916 192196 S 0,7 1,6 181:09.56 amqzmuc0
19135 mqm 20 0 1648540 86728 86500 S 0,0 0,7 29:16.12 amqzxma0
19176 mqm 20 0 873692 20104 19764 S 0,0 0,2 0:05.29 amqrrmfa
As you can see, most of the memory usage is from amq* processes. But why?
To solve this problem we installed the latest patch (14), but it didn't help.
Maybe we have some wrong configuration, or something else, I don't know. Please help with this problem.
I think that rebooting the queue manager every 2 weeks is the wrong solution, but what else can I do?
I would be willing to bet that you have one or more applications that stay running and continually connect to the queue manager without ever disconnecting.
i.e.
Application starts
connect to queue manager - 1st
open queue
get or put messages
may or may not close the queue
connect to queue manager - 2nd
open queue
get or put messages
may or may not close the queue
connect to queue manager - 3rd
open queue
get or put messages
may or may not close the queue
etc.
Use your MQ monitoring tool to review the channel status of the queue manager to find the rogue (bad) application. Or you can use MQ Explorer or runmqsc i.e. 'DISPLAY CHSTATUS(*) CURSHCNV'.
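For example, from the command line (a sketch; QM1 stands in for your queue manager name):
echo "DISPLAY CHSTATUS(*) CURSHCNV" | runmqsc QM1
echo "DISPLAY CONN(*) TYPE(CONN) APPLTAG CHANNEL CONNAME" | runmqsc QM1
The first command shows channel status with the current shared conversations; the second lists the connections the queue manager is holding, tagged with the application and channel they came from.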
Once you find the rogue application, remind the programmer that if an application connects to something then it must disconnect from it. And if an application opens something then it must close it. I'm willing to bet that they will say "I thought it would automatically disconnect (or close) it". Your reply will be: "No, it doesn't, please fix/update your code."
If you don't have an MQ monitoring tool, there are lots of them available. Here's a list of commercially available MQ tools.
You should have an amqzlaa0 agent thread per application that connects (MQCONN).
You should have an amqrmppa thread for each active channel.
There are 3 channels which are using a lot of memory, so this looks like it is not just an application issue.
See this link
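To cross-check that against the top output above, you could simply count the MQ processes by name (a rough sketch; the process names are the ones visible in your output):
ps -ef | grep -c '[a]mqzlaa0'    # number of amqzlaa0 agent processes
ps -ef | grep -c '[a]mqrmppa'    # number of amqrmppa channel pooling processes
The bracketed first letter keeps grep from counting its own command line.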
I am currently using the top command to fetch the CPU and memory usage of a process. My query here is about understanding the values it displays.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6742 aaaa 20 0 843596 1.0g 238841 S 4.0 1.7 0:49.66 java
14355 aaaa 20 0 658704 749560 234112 S 3.3 1.2 15:45.75 java
2779 aaaa 20 0 688868 846620 160844 S 3.0 1.4 54:30.61 java
2337 aaaa 20 0 701200 1.0g 231923 S 2.3 1.7 13:18.34 java
Let's say I'm monitoring the CPU of process ID 6742: it sometimes shows 4%, sometimes 8% or 6%, and sometimes it shoots up to 200% and comes back.
When I check the number of cores the system has, it says 8.
nproc -> 8
So should I take the CPU value shown by the top command as it is, or should I calculate it based on the number of cores, i.e. since it has 8 cores, out of 800% the CPU is at 200% for that process ID?
How should I interpret this scenario?
If you sum all the values in the third line of top's output, it exceeds 100%, giving 100.1%:
%Cpu(s): 18.3 us, 21.9 sy, 0.0 ni, 59.6 id, 0.3 wa, 0.0 hi, 0.0 si, 0.0 st
18.3 + 21.9 + 59.6 + 0.3 = 100.1
Can anyone explain the meaning of the 3rd line of top's output?
Edit
The question asked above is about the net CPU consumption shown in the 3rd line of top's output.
The total of the %CPU values in the 3rd line should indeed add up to 100%. The individual elements (us, sy, ni, id, wa, etc.) are rounded off for display.
In this particular case it is just rounding that makes the displayed values sum to 100.1%.
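For example (hypothetical unrounded values, chosen only to illustrate the rounding effect):
18.26 us + 21.88 sy + 59.59 id + 0.27 wa = 100.00
but each value rounded to one decimal place gives
18.3 + 21.9 + 59.6 + 0.3 = 100.1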
The information below is about the %CPU column for individual processes.
This depends on the number of cores that you have on your system. Every core would give you a 100% value.
Therefore, if you have 4 cores, that means the total of %CPU can go up to 400%.
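So for the reading in the question (200% on a machine where nproc reports 8), the process is using 200 / 800 = 25% of the machine's total capacity. A tiny sketch of that calculation:
nproc
# -> 8, so the per-process %CPU column can range from 0 to 800
awk -v pct=200 -v cores=$(nproc) 'BEGIN { printf "%.1f%% of total capacity\n", pct / cores }'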
What do you really mean by cores?
grep processor /proc/cpuinfo | wc -l
This will give you the number of CPUs you have.
From a logical point of view (taking an Intel Core i5-3570 as an example; this can also be seen from the cpuinfo information):
[root@localhost ~]# egrep "processor|core id|physical id" /proc/cpuinfo
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 1
processor : 2
physical id : 0
core id : 2
processor : 3
physical id : 0
core id : 3
In this example there are:
Physical Processors = 1
Number of cores on physical processor = 4
Number of virtual cores per physical core = None
Therefore total CPUs = 4
If there were virtual cores (such as hyper-threads on Xeon processors), you would see more processors.
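A quicker way to get the same socket/core/thread breakdown (assuming the lscpu tool from util-linux is installed):
lscpu | egrep 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|^CPU\(s\)'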
Row three shows the CPU utilization status of the server; there you can see how much CPU is free and how much is being utilized by the system.
I have a Dell PD2950 (2x4-core) server running Ubuntu Server 12.04 LTS, and there is a VLC encoder instance running on it. Recently I updated the script (VLM) for VLC to increase quality, which also increases the CPU utilization, so I started tuning the script to avoid exceeding the maximum utilization. I use top to monitor the CPU utilization. I found that the load average is higher than 100% (I have 8 cores in total, so 8.00 is 100%), but there is still 20-35% idle, like:
top - 21:41:19 up 2 days, 17:15, 1 user, load average: 9.20, 9.65, 8.80
Tasks: 148 total, 1 running, 147 sleeping, 0 stopped, 0 zombie
Cpu(s): 32.8%us, 0.7%sy, 29.7%ni, 36.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1982680k total, 1735672k used, 247008k free, 126284k buffers
Swap: 0k total, 0k used, 0k free, 774228k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9715 wilson RT 0 2572m 649m 13m S 499 33.5 13914:44 vlc
11663 wilson 20 0 17344 1328 964 R 2 0.1 0:02.00 top
1 root 20 0 24332 2264 1332 S 0 0.1 0:01.06 init
2 root 20 0 0 0 0 S 0 0.0 0:00.09 kthreadd
3 root 20 0 0 0 0 S 0 0.0 0:27.05 ksoftirqd/0
4 root 20 0 0 0 0 S 0 0.0 0:00.00 kworker/0:0
5 root 0 -20 0 0 0 S 0 0.0 0:00.00 kworker/0:0H
To confirm my CPUs don't have Hyper-Threading, I tried:
wilson@server:/$ nproc
8
And to reduce the sampling deviation caused by the refresh time, I also tried:
wilson@server:/$ top -d 0.1
I watched the %id number for a long time; it has not gone lower than 14.
I also tried:
wilson@server:/$ uptime
21:57:20 up 2 days, 17:31, 1 user, load average: 9.03, 9.12, 9.35
The 1-minute load average often reaches 14-15. So I'm wondering: what's wrong with my system? Has anyone ever had this problem?
More information:
I'm using VLC with the x264 codec to encode a live HTTP stream (application/octet-stream). It uses ffmpeg (libavc) to decode and outputs Apple HLS (.ts segments). I found this problem after I added these arguments for x264:
level=41,ref=5,b-adapt=2,direct=auto,me=umh,subq=8,rc-lookahead=60,analyse=all
This is almost equal to preset=slower. And as you can see, my VLC is running in real time. The parameter is:
wilson@server:/$ chrt -p -f 99 vlc-wrapper
There does not appear to be anything wrong with your system. What is wrong seems to be your understanding of CPU accounting. In particular, load average has nearly nothing at all to do with CPU usage. Load average is based on the number of processes that are ready to run (not waiting on I/O, network, keyboard input, etc...), if there is an available CPU for them to be scheduled on. While it's true that, given an 8 core system, if all 8 cores are 100% busy with a single CPU-bound thread each, your load average should be around 8.00, it is entirely possible to have a load average of 200.0 with near-0% CPU utilization. All that would indicate is you have 200 processes that are ready to run, but as soon as they get scheduled, they do almost nothing before they go back to waiting for input of some sort.
Your top output shows that vlc seems to be using roughly the equivalent of 5 of your cores, but it doesn't indicate whether you have 5 cores at 100% each, or if all 8 cores are at 62.5% each. All of the other processes listed by top also contribute to your load average, as well as CPU usage. In particular, top running with a short delay like your example of 0.1 seconds, will probably increase your load average by almost 1 itself, even though, overall, it's not using a lot of CPU time.
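To see which of those two situations you are in, look at per-core utilization rather than the summary line, e.g.:
mpstat -P ALL 1        # per-core utilization in one-second samples (sysstat package)
# or press '1' inside top to expand the %Cpu(s) summary into one line per core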
Read this:
Understanding load average vs. cpu usage
If the load average is at 7, with 4 hyper-threaded processors, shouldn't that mean that the CPU is working at about 7/8 capacity?
No it just means that you have 7 running processes in the job queue on average.
But I think we can't use the load average as a reference number to determine whether the system is overloaded or not. So I wonder if there is a kernel-level CPU utilization statistics tool (kernel level, to reduce the performance cost of measuring)?
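For what it's worth, top, mpstat and similar tools all derive their numbers from the kernel's own counters in /proc/stat, so you can sample those directly; a minimal sketch in bash (the 0.5-second interval is arbitrary, and irq/softirq/steal are ignored for brevity):
read -r _ u1 n1 s1 i1 w1 rest < /proc/stat    # cumulative jiffies: user nice system idle iowait ...
sleep 0.5
read -r _ u2 n2 s2 i2 w2 rest < /proc/stat
echo "busy%: $(( 100 * ( (u2+n2+s2) - (u1+n1+s1) ) / ( (u2+n2+s2+i2+w2) - (u1+n1+s1+i1+w1) ) ))"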