Max CPU usage (max allowed CPU usage) - Linux

I have a quad-core CPU (with Hyper-Threading Technology - HT).
I'm running an application which takes 270% CPU (according to the top command).
What is the total available CPU usage? (Is it 400% or 800%?)
I'm asking because according to Intel documentation, HT can raise performance by up to 30%, so 800% seems too much, no?
What is the relation between load averages and CPU usage?

1: 800%. You have 8 logical cores visible to the OS - that they are not real cores (because of how hyperthreading works) is of no concern here.
2: Ever bothered reading the documentation? Practically, there is no relation between load average and CPU usage. Load average counts "waiting processes", but on Linux that includes processes waiting for I/O, so load can be high while the CPU is not busy at all.
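As a quick check of the first point, here is a minimal sketch in standard C++ (the only assumption is that top is in its default Irix mode, where each logical CPU contributes up to 100%):

    // Prints the number of logical CPUs the OS exposes. With hyperthreading,
    // a quad-core machine reports 8, so top's per-process %CPU tops out at 800%.
    #include <iostream>
    #include <thread>

    int main() {
        unsigned n = std::thread::hardware_concurrency();
        std::cout << n << " logical CPUs -> top per-process ceiling: "
                  << n * 100 << "%\n";
    }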

Related

How to measure the context switching overhead of a very large program?

I am trying to measure the impact of the CPU scheduler on a large AI program (https://github.com/mozilla/DeepSpeech).
Using strace, I can see that it uses a lot of (~200) CPU threads.
I have tried using Linux perf to measure this, but I have only been able to find the number of context-switch events, not their overhead.
What I am trying to achieve is the total CPU core-seconds spent on context switching. Since it is a pretty large program, I would prefer non-invasive tools, to avoid having to edit its source code.
How can I do this?
Are you sure most of those 200 threads are actually waiting to run at the same time, not waiting for data from a system call? I guess you can tell from perf stat that context-switches are actually pretty high, but part of the question is whether they're high for the threads doing the critical work.
The cost of a context-switch is reflected in cache misses once a thread is running again (and in stopping out-of-order exec from finding as much ILP right at the interrupt boundary). This cost is more significant than the cost of the kernel code that saves/restores registers. So even if there were a way to measure how much time the CPUs spent in kernel context-switch code (possible with the perf record sampling profiler, as long as your perf_event_paranoid setting allows recording kernel addresses), that wouldn't be an accurate reflection of the true cost.
Even making a system call has a similar (but lower and more frequent) performance cost from serializing OoO exec, as well as disturbing caches (and TLB). There's a useful characterization of this on real modern CPUs (from 2010) in a paper by Soares & Stumm, especially the graph on the first page showing IPC (instructions per cycle) dropping after a system call returns and taking time to recover: FlexSC: Flexible System Call Scheduling with Exception-Less System Calls. (Conference presentation: https://www.usenix.org/conference/osdi10/flexsc-flexible-system-call-scheduling-exception-less-system-calls)
You might estimate context-switch cost by running the program on a system with enough cores not to need to context-switch much at all (e.g. a big many-core Xeon or Epyc), vs. on fewer cores but with the same CPUs / caches / inter-core latency and so on. So, on the same system with taskset --cpu-list 0-8 ./program to limit how many cores it can use.
Look at the total user-space CPU-seconds used: the amount by which it is higher is the extra CPU time needed because of slowdowns from context switches. The wall-clock time will of course be higher when the same work has to compete for fewer cores, but perf stat includes a "task-clock" output which tells you the total time in CPU-milliseconds that threads of your process spent on CPUs. That would be constant for the same amount of work, given perfect scaling to more threads and/or the same threads competing for more / fewer cores.
But that would tell you about context-switch overhead on that big system with big caches and higher latency between cores than on a small desktop.
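If perf isn't available, the before/after comparison above can be approximated with a small wrapper; this is a hedged sketch using only standard POSIX calls (getrusage, fork/exec), and the command you pass in is whatever program you want to measure:

    // Run a command and report its total user/system CPU time (comparable to
    // perf stat's task-clock) plus its context-switch counts. Run it once
    // unrestricted and once under `taskset --cpu-list 0-3 ...`, then compare.
    #include <cstdio>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s cmd [args...]\n", argv[0]); return 1; }
        pid_t pid = fork();
        if (pid == 0) {                    // child: exec the measured program
            execvp(argv[1], &argv[1]);
            std::perror("execvp");
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);
        struct rusage ru;
        getrusage(RUSAGE_CHILDREN, &ru);   // totals for the waited-for child
        double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
        double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        std::printf("user %.3fs  sys %.3fs  total %.3fs CPU\n", user, sys, user + sys);
        std::printf("context switches: %ld voluntary, %ld involuntary\n",
                    ru.ru_nvcsw, ru.ru_nivcsw);
        return 0;
    }

The growth in total CPU-seconds between the two runs is a rough estimate of the context-switch plus cache-pollution overhead; as explained above, it won't separate the kernel's save/restore cost from the cache-miss cost.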

C++ std::async: faster on 4 cores compared to 8 cores

I have 16000 jobs to perform.
Each job is independent. There is no shared memory, no interprocess communication, no lock or mutex.
I am on Ubuntu 16.04, C++11, with an Intel® Core™ i7-8550U CPU @ 1.80GHz × 8.
I use std::async to split the jobs between cores.
If I split the jobs into 8 batches (2000 per core), the computation time is 145.
If I split the jobs into 4 batches (4000 per core), the computation time is 60.
The output after the reduce is the same in both cases.
If I monitor the CPU during the computation (just using htop), things happen as expected (8 cores are used at 100% in the first case, only 4 cores are used at 100% in the second case).
I am very confused why 4 cores would process much faster than 8.
The i7-8550U has 4 cores and 8 threads.
What is the difference? Quoting How-To Geek:
Hyper-threading was Intel’s first attempt to bring parallel computation to consumer PCs. It debuted on desktop CPUs with the Pentium 4 HT back in 2002. The Pentium 4’s of the day featured just a single CPU core, so it could really only perform one task at a time—even if it was able to switch between tasks quickly enough that it seemed like multitasking. Hyper-threading attempted to make up for that.

A single physical CPU core with hyper-threading appears as two logical CPUs to an operating system. The CPU is still a single CPU, so it’s a little bit of a cheat. While the operating system sees two CPUs for each core, the actual CPU hardware only has a single set of execution resources for each core. The CPU pretends it has more cores than it does, and it uses its own logic to speed up program execution. In other words, the operating system is tricked into seeing two CPUs for each actual CPU core.

Hyper-threading allows the two logical CPU cores to share physical execution resources. This can speed things up somewhat—if one virtual CPU is stalled and waiting, the other virtual CPU can borrow its execution resources. Hyper-threading can help speed your system up, but it’s nowhere near as good as having actual additional cores.
By splitting the jobs across more threads than you have physical cores, you are paying a big penalty.
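As a sketch of that advice (hedged: do_job here is a hypothetical stand-in for the real per-job work), splitting with std::async and sizing the worker count to physical cores rather than logical ones could look like this:

    // Split independent jobs across nworkers std::async tasks and reduce.
    #include <cstdio>
    #include <future>
    #include <thread>
    #include <vector>

    long long do_job(int i) { return (long long)i * i; }  // placeholder work

    long long run(int njobs, int nworkers) {
        std::vector<std::future<long long>> parts;
        for (int w = 0; w < nworkers; ++w) {
            parts.push_back(std::async(std::launch::async, [=] {
                long long sum = 0;
                // worker w handles jobs w, w + nworkers, w + 2*nworkers, ...
                for (int i = w; i < njobs; i += nworkers) sum += do_job(i);
                return sum;
            }));
        }
        long long total = 0;
        for (auto& f : parts) total += f.get();  // reduce
        return total;
    }

    int main() {
        // On an SMT chip like the i7-8550U, logical/2 matches the 4 physical
        // cores; for compute-bound jobs that avoids two threads per core.
        unsigned logical = std::thread::hardware_concurrency();
        unsigned workers = logical > 1 ? logical / 2 : 1;
        std::printf("total = %lld with %u workers\n", run(16000, workers), workers);
    }

For jobs with no stalls to hide (no memory-bound phases, no I/O), the two hyperthreads of a core mostly compete for the same execution units, which is why 4 workers can beat 8 here.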

Weird EC2 CPU usage

I'm really confused. Why do the load average and %CPU below not match the process's CPU usage? It seems like the process is eating up a lot of CPU while the AWS EC2 meters say only 25% of the CPU is used.
%CPU -- CPU Usage: The percentage of your CPU that is being used by the process. By default, top displays this as a percentage of a single CPU. On multi-core systems, you can have percentages that are greater than 100%. For example, if 3 cores are at 60% use, top will show a CPU use of 180%.
You can toggle this behavior by hitting Shift+i while top is running to show the overall percentage of available CPUs in use.
load average: 22.56, 24.99, 26.51
From left to right, these numbers show you the average load over the last 1 minute, the last 5 minutes, and the last 15 minutes.
us -- User CPU time
The time the CPU has spent running users' processes that are not niced.
sy -- System CPU time
The time the CPU has spent running the kernel and its processes.
ni -- Nice CPU time
The time the CPU has spent running users' processes that have been niced.
wa -- iowait
Amount of time the CPU has been waiting for I/O to complete.
hi -- Hardware IRQ
The amount of time the CPU has been servicing hardware interrupts.
si -- Software Interrupts
The amount of time the CPU has been servicing software interrupts.
st -- Steal Time
The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).
See more details in "In Linux 'top' command what are us, sy, ni, id, wa, hi, si and st (for CPU usage)".
After you run top, you can press "1" to see individual CPU utilization; run man top for more details.
Note that the process "msqld" can use CPU on several cores, so its utilization % can easily go beyond 100% in top's display.
Maybe your app is using a single core and the other cores are free. I think your instance has 4 CPU cores and one is at 100% utilization. Can you please check the utilization of each core?
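To connect the load-average line to the per-core view, here is a minimal Linux-only sketch: it reads /proc/loadavg and normalizes by the number of online logical CPUs:

    // Read the 1/5/15-minute load averages and divide by the logical-CPU
    // count, which is roughly how an instance-level "% CPU" meter relates
    // to top's load average line.
    #include <fstream>
    #include <iostream>
    #include <unistd.h>

    int main() {
        double l1, l5, l15;
        std::ifstream("/proc/loadavg") >> l1 >> l5 >> l15;
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
        std::cout << "load averages: " << l1 << " " << l5 << " " << l15 << "\n"
                  << "logical CPUs:  " << ncpu << "\n"
                  << "1-min load per CPU: " << l1 / ncpu << "\n";
    }

For example (hypothetical CPU count, since the screenshot doesn't show it), a 1-minute load of 22.56 on a 32-vCPU instance is about 0.7 per CPU - and since load also counts processes waiting for I/O, the CPU meter can still read much lower.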

Meaning of values in the CPU tab of Resource Monitor on Windows 8.1

(Sorry for the non-English characters in the picture. The columns are thread count / CPU / average CPU.)
When I open the CPU tab in Resource Monitor on Windows 8.1, I see the above values.
What's the difference between CPU and average CPU?
At first, I thought average CPU meant the average usage per core, but I have 4 cores, so then the value should be CPU = 4 × average CPU, which it is not.
Please let me know the meaning of the CPU and average CPU values.
CPU. Current percent of CPU consumption by the process, or how much of the system's processing power is being devoted to this specific process.
Average CPU. This is average CPU consumption by the process over the past 60 seconds. This gives you a real-time look at what's happening on the system right now and for the past minute.
http://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-to-monitor-cpu-performance/

How would a multithreaded program be more energy efficient?

In its Energy-Efficient Software Guidelines, Intel suggests that programs be designed multithreaded for better energy efficiency.
I don't get it. Suppose I have a quad-core processor that can switch off unused cores, and suppose my code is perfectly parallelizable (synchronization overhead is negligible).
If I use only one core, I burn one core for one hour; if I use four cores, I burn four cores for 15 minutes - the same number of core-hours either way. Where's the saving?
I suspect it has to do with the non-linear relation between CPU utilization and power consumption. So if you can spread 100% CPU utilization over 4 CPUs, each will have 25% utilization - and, say, 12% power consumption.
This is especially true when dynamic frequency scaling is used: according to Wikipedia, the power drain of a CPU is P = C*V^2*F. When a CPU runs faster it requires higher voltages, and that 'to the power of 2' becomes crucial. Furthermore, the voltage is itself a function of F (so V can be expressed in terms of F), giving something like P = C*(F^2)*F = C*F^3. Thus by spreading the load over 4 CPUs (each running at 100% capacity, but at a lower frequency) you can mitigate the cost of the same work.
We can make F a function of L (the load on one core, in percent, as your OS would report it), so:
F = 1000 + L/100 * 500 = 1000 + 5L
p = C((1000 + 5L)^2)(1000 + 5L) = C(1000 + 5L)^3
Now that we can relate load (L) to the power consumption we can see the characteristics of the power consumption given everything on one core:
p = C(1000 + 5L)^3
p = 1000000000 + 15000000L + 75000L^2 + 125L^3
Or spread over 4 cores:
p = 4C(1000 + (5/4)L)^3
p = 4000000000 + 15000000L + 18750L^2 + 7.8125L^3
Notice the factors in front of the L^2 and L^3.
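A quick numeric check of this model (hedged: C is factored out, and the frequencies are the model's made-up 1000-1500 range):

    // Evaluate the model: one core at 1000+5L vs four cores at 1000+1.25L,
    // reporting the load-dependent increment over the idle baseline.
    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
        for (double L : {25.0, 50.0, 100.0}) {
            double one  = std::pow(1000 + 5.0 * L, 3);       // all load on one core
            double four = 4 * std::pow(1000 + 1.25 * L, 3);  // spread over four
            std::printf("L=%3.0f%%  one core: +%.3g over idle   four cores: +%.3g over idle\n",
                        L, one - std::pow(1000.0, 3), four - 4 * std::pow(1000.0, 3));
        }
    }

At L = 100 the load-dependent increment is about 2.4e9 on one core versus about 1.7e9 spread over four, which is the saving the L^2 and L^3 factors point at (the per-core idle baseline is a separate, constant cost in this model).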
During that one hour, the one core isn't the only thing you keep running.
You burn energy four times as fast with 4 cores, but you do the work four times as fast too! If, as you said, synchronization is negligible and the work is parallelizable, you'll spend a quarter of the time.
Using multiple threads can save energy when you have I/O waits: one thread can wait while other threads perform other computations, instead of leaving your application idle.
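A hedged sketch of that idea, with a sleep standing in for a real blocking I/O call:

    // One thread "waits for I/O" (simulated by a sleep) while the main
    // thread keeps computing, so the CPU isn't idle during the wait.
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        std::thread io([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(500)); // fake I/O
            std::puts("I/O finished");
        });
        long long sum = 0;
        for (long long i = 0; i < 100000000; ++i) sum += i;  // useful work meanwhile
        io.join();
        std::printf("computed %lld while waiting\n", sum);
    }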
A CPU is only one part of a computer; there are also fans, a motherboard, hard drives, a graphics card, RAM, etc. - let's call this the BASE. If you're doing scientific computing (i.e., on a compute cluster), you are powering many computers. If you are powering hundreds of BASEs anyway, why not let those BASEs carry multiple physical CPUs, so those CPUs can share the BASE's physical and logical resources?
Now Intel's marketing blurb probably also depends on the fact that these days each CPU package contains multiple cores. Powering multiple physical CPUs is different from powering a single physical CPU with multiple cores.
So if the amount of work done per unit of power is the benchmark in question, then modern CPUs performing highly parallel tasks do give you more bang for your buck compared with the previous generation of processors: not only can you get more cores per CPU, it is also common to get BASEs that can take multiple CPUs.
One may easily assert that one top-end system can now house the processing power of 8-16 single-CPU, single-core machines of the past (assuming, in this hypothetical case, that each core on the new system and the older-generation system has the same processing power).
If a program is multithreaded, that doesn't by itself mean it will use more cores. It just means that more tasks are dealt with at the same time, so the overall elapsed time is shorter.
There are 3 reasons, two of which have already been pointed out:
A longer overall runtime means that other (non-CPU) components need to run longer, even if the net computation for the CPU remains the same.
More threads mean more things are done at the same time (because stalls are used for something useful); again, the overall real time is reduced.
The CPU power consumption for running the same calculations on one core is not the same. Intel CPUs have built-in clock boosting for single-core usage (marketed as Turbo Boost). A higher clock means disproportionately more power consumption and disproportionately more heat, which in turn requires the fan to spin faster, too.
So in summary, you consume more power in the CPU and more power for cooling the CPU for a longer time, and you run the other components for a longer time, too.
As a 4th reason, one could allege (note that this is only an assumption!) that Intel CPUs are hyperthreaded, and since hyperthreaded cores share some resources, running two threads at once is more efficient than running one thread twice as long.
