I have 2 quad-core processors and I can't seem to understand what "top" is telling me.
I run a VLC transcoding application, currently transcoding 8 streams, and "top" shows me that I am using 200% of my CPU. For a Pentium 3 that would be horribly bad, but I don't understand how Linux calculates CPU usage with multi-core processors.
Does this mean that both of my processors are utilized 100%? 2 cores at 100%?
I also ran ffmpeg for the same purpose and could run 8 instances at 90% each, which seemed to me like each process was occupying one core.
VLC has a much lower CPU usage footprint, so I just want to make sure I am not killing the hardware.
Top's %CPU = 200 could mean two cores at 100% each, or all eight cores at 25% each.
Since you have 2 quad-core processors, %CPU ranges from 0 to 800%.
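If you want to see how that 200% is actually spread across cores, top can show a per-core breakdown, and mpstat prints one line per core. A quick sketch (mpstat ships with the sysstat package on most distributions):

    # In an interactive top session, press "1" to toggle the
    # per-CPU summary lines (Cpu0 ... Cpu7) on and off.
    top

    # Print utilization for every core, refreshed each second:
    mpstat -P ALL 1

If the load is evenly spread, you will see all eight cores near 25%; if it is concentrated, you will see two cores near 100%.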
Related
I'm trying to understand what the stress command actually does in Linux, in particular the -c option. My background is physics, so I'm struggling with some of the concepts.
Does stress -c 3 launch 3 processes that each consume 100% of 3 bounded CPU cores (for example cores 0, 1 and 2)? The output of htop is confusing, since I don't see 3 CPU cores at 100% all the time. Note: by bounded, I mean that these processes cannot run on any other CPU cores (in this case 3 to N).
For example, after running stress -c 3, sometimes I see this (which makes sense to me):
But most of the time I'm seeing something like this (which doesn't, because there aren't 3 CPU cores at 100%):
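For what it's worth, stress -c does not pin its workers to particular cores; the kernel scheduler is free to migrate them, which is why the three busy cores keep moving around in htop. A small experiment, assuming stress and taskset are installed:

    # Three spinning workers; the scheduler may migrate them freely:
    stress -c 3

    # The same three workers pinned ("bounded") to cores 0-2, so htop
    # should now show exactly those three cores at 100%:
    taskset -c 0-2 stress -c 3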
I'm using htop to monitor the CPU usage of my task. However, the CPU% value sometimes exceeds 100%, which really confuses me.
Some blogs explain that this is because I'm using a multi-core machine (this is true). If there are 8 (logical) cores, the maximum value of CPU% is 800%. CPU% over 100% means that my task is occupying more than one core.
But my question is: there is a column named CPU in the htop window which shows the id of the core my task is running on. So how can the usage of this single core exceed 100%?
Here is the screenshot. You can see that core 84's usage is 375%!
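The usual explanation (worth verifying on your own machine) is that CPU% is summed over all of a process's threads, while the CPU column only shows the core the main thread last ran on, so a four-thread task can report 375% "on" one core. Two ways to see the per-thread picture; 12345 below is a placeholder PID:

    # In htop, press "H" to show userland threads; each thread then
    # appears as its own row with its own CPU column.
    htop

    # Or print per-thread usage once per second with pidstat (sysstat):
    pidstat -t -p 12345 1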
I think this question isn't new, but I couldn't find a clear answer for it: I have Fortran code and an Intel Core i7 with 6 physical and 12 logical cores. At the moment my code is running on the 6 physical cores, i.e. the same code is running on each core. I saw in Intel Power Gadget that the utilization is about 50%, so I want to know whether I would get my results (nearly) twice as fast if I ran the code on the 12 logical cores. I have also heard that in certain circumstances the results come out slower when the logical cores are used, so I'm not sure what to do.
Thank you for your help!
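Hyper-threading rarely doubles throughput for compute-bound floating-point code, because two logical cores share one physical core's execution units; the only reliable answer is to measure both configurations. A minimal sketch, assuming the runs are independent processes and ./mycode is a placeholder for the compiled Fortran binary:

    # Time a batch of 6 copies (one per physical core):
    time ( for i in $(seq 6); do ./mycode & done; wait )

    # Time a batch of 12 copies (one per logical core):
    time ( for i in $(seq 12); do ./mycode & done; wait )

If the 12-copy batch finishes in less than twice the time of the 6-copy batch, the logical cores are adding throughput; if it takes about twice as long or longer, they are not helping.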
Here is my situation:
My company needs to run tests on tons of test samples, but a single process on a Windows PC can take hours, even days, so we tried splitting the test set and starting one process per slice on a multi-core Linux server.
We expected a linear performance improvement from the server solution, but in fact we only observed a 2-3x improvement when the test task was finished by 10-20 processes.
I tried several things to locate the problem:
disabled hyper-threading;
used the max-performance power policy;
used taskset to pin each process to a different core;
but no luck; the problem remains.
Why does this happen? What is the root cause: our code, the OS, or the hardware?
Here is the info on my PC and server:
PC: OS: Windows 10; CPU: i5-4570, 2 physical cores; memory: 16 GB
Server: OS: Red Hat 6.5; CPU: E5-2630 v3, 2 physical CPUs; memory: 32 GB
Edit:
About the CPU: the server has 2 processors, each with 8 physical cores. Check this link for more information.
About my test: it's handwriting-recognition related (that's why it's a CPU-sensitive task).
About IO: the performance checkpoints do not involve much IO, if logging doesn't count.
"We expected a linear performance improvement from the server solution, but in fact we only observed a 2-3x improvement when the test task was finished by 10-20 processes."
This seems logical if there really are only 2 cores on the system: starting 10-20 processes would just add overhead from task switching.
Also, I/O could be a bottleneck here, if multiple processes are reading from the disk at the same time.
Ideally, the number of running threads should not exceed 2x the number of cores.
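Given the edit (2 sockets with 8 physical cores each, i.e. 16 cores), it's worth confirming what the kernel actually sees before picking a process count. For example:

    # Show sockets, cores per socket, and threads per core:
    lscpu

    # Number of logical CPUs the scheduler can use:
    nproc

With hyper-threading disabled, nproc should report 16 here, so around 16 CPU-bound worker processes is the natural upper bound; if 10-20 processes still only give a 2-3x speedup, that points at a shared bottleneck such as I/O or a lock rather than a lack of cores.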
A program I'm working on needs to process certain objects in real time as they arrive from the network. The throughput is good, but I see occasional drops in the input queue due to unexpected delays.
My analysis shows that most probably the source of the delay is outside my program; something like another process being scheduled on my process's CPU core (I set the affinity of the process to a certain core) or a hardware interrupt arriving (perhaps a network interrupt).
My problem is I don't know the source of the delay for sure. Is there a tool or a method to find how a CPU core was used exactly during a certain period of time? (Like for example telling me that core 0 was used by process 19494 99.1 percent of the time, process 20001 0.8 percent of the time and process 8110 0.1 percent of the time.)
I use Ubuntu 14.04 Server Edition on an HP server with a Xeon CPU.
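One way to get exactly that kind of per-core accounting is perf, which can sample a single CPU system-wide and break the samples down by process. A sketch, assuming perf (from the linux-tools packages) is available and the program is pinned to core 0:

    # Sample everything that runs on CPU 0 for 10 seconds:
    sudo perf record -C 0 sleep 10

    # Break the samples down by command and PID, which approximates
    # "which process used core 0, and how much":
    sudo perf report --sort comm,pid

    # For the hardware-interrupt theory, check how the per-core
    # interrupt counters grow over time:
    cat /proc/interrupts

Sampling is statistical, so the percentages are approximate rather than an exact log of every time slice.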
It could be the CPU, disk speed, network speed, or memory.
Memory and CPU usage are easy to spot using htop (use the sort option, F6).
Disk speed could be an issue, for example if you use low-energy disks (they slow down when not in use). Do you have a database running on the same system?
Use iotop; it might give a clue.
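If disk I/O is the suspect, a couple of starting points (both flags are standard iotop options):

    # Show only processes that are actually doing I/O right now:
    sudo iotop -o

    # Accumulate totals instead of showing per-interval rates:
    sudo iotop -oa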