I have encountered weird behavior with my algorithm/CPU, and I am wondering what could be causing it.
CPU that I am using: AMD Threadripper 2990WX (32c/64t); OS: Ubuntu 18.04 LTS with the 4.15.0-64-generic kernel.
The algorithm (Julia 1.0.3):
@sync @distributed for var in range(0.1, step=0.1, stop=10.0)
    res = do_heavy_stuff(var)  # solves a differential equation; essentially
                               # multiplies 200x200 matrices many times
    save(filename, "RES", res)
end
Function do_heavy_stuff(var) takes ~3 hours to solve on a single CPU core.
When I launch it in parallel with 10 processes (julia -p 10 my_code.jl), it takes ~4 hours per parallel loop iteration, meaning every 4 hours I get 10 files saved. The slowdown is expected, as the CPU frequency goes down from 4.1 GHz to 3.4 GHz.
If I launch 3 separate instances with 10 processes each, for a total CPU utilization of 30 cores, it still takes ~4 hours per loop cycle, meaning I get 30 runs completed and saved every 4 hours.
However, if I run 2 instances with 30 processes each at once (julia -p 30 my_code.jl; one at a nice value of 0, the other at +10), I see in htop that CPU utilization is 60+ threads, but the algorithm becomes extremely slow (after 20 hours, still zero files saved). Furthermore, the CPU temperature is abnormally low (~45C instead of the expected 65C).
From this information I can guess that using (almost) all threads of my CPU makes it do something useless that eats up CPU cycles, while no floating-point operations get done. I see no I/O to the SSD, and I am using only half of the RAM.
I ran mpstat (mpstat -A: https://pastebin.com/c19nycsT) and I can see that all of my cores are just sitting idle, which explains the low temperature. However, I still don't understand what exactly the bottleneck is. How do I troubleshoot from here? Is there any way to see (without touching hardware) whether the problem is RAM bandwidth or something else?
EDIT: It came to my attention that I was using mpstat wrong. Apparently mpstat -A gives CPU stats averaged since boot, while what I needed were short-interval results, which can be obtained with mpstat -P ALL 2. Unfortunately, I only learned this after I killed the code in question, so I have no real mpstat data. However, I am still interested: how would one troubleshoot a situation where the cores seem to be doing something but results are not showing up? How do I find the bottleneck?
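For future readers, a minimal triage pass might look like the sketch below. It assumes a Linux box with the sysstat and perf packages installed, and <pid> stands in for the actual worker process ID:

# Per-core utilization averaged over 2-second windows (what mpstat -A does not show):
mpstat -P ALL 2
# Instructions per cycle for one worker; a busy core with IPC far below 1
# is usually stalled waiting on memory rather than computing:
perf stat -e cycles,instructions -p <pid> -- sleep 10

If mpstat shows the cores nominally busy while perf reports a very low instruction rate, RAM bandwidth or lock contention is the usual suspect.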
Since you are using multiprocessing, there are two likely reasons for the observed behavior:
1. Long delays on I/O. When your processes read lots of disk data or data from the network, they naturally stall. In that case CPU utilization can be low combined with long execution times.
2. High variance in the execution time of do_heavy_stuff. This variance could arise from unstable I/O or from different model parameters leading to different execution times. Why this is a problem requires understanding how @distributed shares the workload among worker processes: each worker gets an equal slice of the for loop's range. For example, with 4 workers the first gets var in the range 0.1:0.1:2.5, the second 2.6:0.1:5.0, and so on. Now, if some var values produce heavy tasks, the first worker might get 5 hours of work while the others get 1 hour. @sync then completes only after 5 hours, with just one CPU actually working the whole time.
Looking at your post, I would strongly bet on the second reason.
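As a side note, Julia's pmap schedules work dynamically (each worker pulls the next value as soon as it finishes a task), which sidesteps this imbalance. The same idea can be sketched from the shell, assuming a hypothetical single_run.jl that processes one var value per invocation:

# Dynamic scheduling via a fixed-size job pool: xargs keeps 10 julia processes
# running and hands the next var to whichever slot frees up first, so a few
# slow parameter values cannot stall the whole batch.
seq 0.1 0.1 10.0 | xargs -P 10 -I{} julia single_run.jl {}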
Related
I am confused about the concept of millicores in Kubernetes. As far as I know, only 1 thread can run per core, so why give a limit in millicores?
For example, if I give a CPU limit of 600m to a container, can I use 400m for another pod or container? Is that possible?
I have tried installing minikube and running on it.
Will both containers or pods run different threads? If anyone can explain, please do.
It's best to see millicores as a way to express fractions: x millicores correspond to the fraction x/1000 (e.g. 250 millicores = 250/1000 = 1/4).
The value 1 represents the complete usage of 1 core (or hardware thread if hyperthreading or any other SMT is enabled).
So 100mcpu means the process is using 1/10th of a single CPU's time: 1 second out of 10, or 100ms out of each second, or 10us out of every 100us.
Just take any unit of time and divide it into ten parts: the process runs for only one of them.
Of course, if you take too short an interval (say, 1us), the overhead of the scheduler becomes non-negligible, but that's not important here.
If the value is above 1, then the process is using more than one CPU. A value of 2300mcpu means that out of, say, 10 seconds, the process is running for... 23!
This is used to mean that the process is using 2 whole CPUs and a 3/10 of a third one.
This may sound weird but it's no different to saying: "I work out 3.5 times a week" to mean that "I work out 7 days every 2 weeks".
Remember: millicores represent a fraction of CPU time not of CPU number. So 2300mcpu is 230% the time of a single CPU.
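As a worked formula: a task granted x millicores, observed over any window of length T, accumulates

\[ \text{CPU time} = \frac{x}{1000} \cdot T, \qquad \text{e.g. } \frac{2300}{1000} \cdot 10\,\text{s} = 23\,\text{s}, \]

matching the 2300mcpu example above.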
What I hate about technologies like Kubernetes and Docker is that they hide too much, confusing seasoned programmers.
The millicores unit arises, at its base, from the way the Linux scheduler works. It doesn't divide time into quanta and assign each thread the CPU for one quantum; instead, it runs a thread until it becomes unfair to keep it running. So a thread can run for a variable amount of time.
The current Linux scheduler, named CFS, works with the concept of waiting time.
Each thread has a waiting time: a counter that is incremented each nanosecond (any sufficiently fine unit of time will do) that the thread spends waiting to execute, and decremented each nanosecond it spends executing.
The threads are then ordered by their wait time divided by the total number of threads; the thread with the greatest wait time is picked and run until its wait time (which is now decreasing) falls below the wait time of another thread (which is then scheduled).
So if we have one core (without HyperThreading or any other SMT) and four threads, after, say, a second, the scheduler will have allocated 1/4 of that second (250ms) to each thread.
You can say that each thread used 250 millicores: it uses 250/1000 = 1/4 of the core's time on average. The "core time" can be any amount of time, granted it is far greater than the scheduler's granularity. So 250 millicores means 1 minute of time out of every 4, or 2 days out of every 8.
When a system has multiple CPUs/cores, the waiting time is scaled to account for that.
Now if a thread is scheduled, over the course of 1 second, on two CPUs for the whole second, we have a usage of 1/1 on the first CPU and 1/1 on the second: a total of 1/1 + 1/1 = 2, or 2000mcpu.
This way of counting CPU time, albeit weird at first, has the advantage of being absolute: 100mcpu means 1/10 of a CPU no matter how many CPUs there are. This is by design.
If we counted time in a relative manner (i.e. where the value 1 means all the CPUs), then a value like 0.5 would mean 24 CPUs in a 48-CPU system but 4 in an 8-CPU system.
It would be hard to compare timings.
The Linux scheduler doesn't actually know about millicores, as we have seen it uses the waiting time and doesn't need any other measurement unit.
That millicores unit is just a unit we make up, so far, for our convenience.
However, it will turn out this unit will arise naturally due to how containers are constrained.
As implied by its name, the Linux scheduler is fair: all threads are equal. But you don't always want that; a process in a container should not hog all the cores on a machine.
This is where cgroups come into play. They are a kernel feature used, along with namespaces and union filesystems, to implement containers.
Their main goal is to restrict processes, including their CPU bandwidth.
This is done with two parameters, a period and a quota.
The restricted thread is allowed, by the scheduler, to run for quota microseconds (us) every period us.
Here, again, a quota greater than the period means using more than one CPU. Quoting the kernel documentation:
Limit a group to 1 CPU worth of runtime.
If period is 250ms and quota is also 250ms, the group will get
1 CPU worth of runtime every 250ms.
Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.
With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
runtime every 500ms.
We see how, given x millicores, we can compute the quota and the period.
We can fix the period to 100ms and the quota to (100 * x) / 1000.
This is how Docker does it.
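As a concrete sketch (the image name and values are illustrative): 600 millicores with the fixed 100ms period gives a quota of (100ms * 600) / 1000 = 60ms, which can be handed to Docker either explicitly or through the shorthand flag:

# Explicit CFS bandwidth parameters, in microseconds:
docker run --cpu-period=100000 --cpu-quota=60000 my-image
# Equivalent shorthand; Docker derives the same quota/period pair:
docker run --cpus=0.6 my-image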
Of course, we have an infinite choice of pairs; we set the period to 100ms, but we could use any value (strictly speaking there aren't infinitely many valid values, but still).
Larger values of the period mean the thread can run for a longer time but will also pause for a longer time.
Here is where Docker hides things from the programmer: it uses an arbitrary value for the period in order to compute the quota (given the millicores, which the authors deem more "user-friendly").
Kubernetes is designed around Docker (yes, it can use other container runtimes, but they must expose an interface similar to Docker's), and the Kubernetes millicores unit matches the unit used by Docker in its --cpus parameter.
So, long story short, millicores are the fractions of time of a single CPU (not the fraction of number of CPUs).
Cgroups, and hence Docker, and hence Kubernetes, do not restrict CPU usage by assigning cores to processes (like VMs do); they restrict CPU usage by restricting the amount of time (quota per period) the process can run on each CPU (with each CPU contributing up to 1000mcpu worth of allowed time).
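If you want to see these knobs with your own eyes, here is a hedged peek for cgroup v1 (paths vary by distro and container runtime, <container-id> is a placeholder, and under cgroup v2 both values live in a single cpu.max file):

# Period and quota, in microseconds, that the kernel enforces for a container:
cat /sys/fs/cgroup/cpu/docker/<container-id>/cpu.cfs_period_us
cat /sys/fs/cgroup/cpu/docker/<container-id>/cpu.cfs_quota_us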
The scheduler of the kernel running the containers (e.g. Linux) has means to reserve time slices for a process to run concurrently with other processes on the same CPU.
You can throttle a process, giving it fewer time slices, if it uses too much CPU; this happens when a (hard) limit is hit. You can schedule a pod to a different node if its CPU requests exceed the CPU resources available on a node.
So the request is a hint for the Kubernetes scheduler on how to optimally place pods across nodes, and the limit is what the kernel scheduler enforces so that no more resources are actually used.
If you configure only requests and no limits, all pods are scheduled by the kernel's default policy, which tries to be fair and balance resources across all processes, maximizing usage while not starving any single process.
I am observing strange effects with the CPU percentage as shown in e.g. top or htop on Linux (Ubuntu 16.04) for one particular application. The application uses many threads (around 1000). Each thread has one computational task. About half of these tasks need to be computed once per "trigger", an external event received exactly every 100ms. The other threads are mostly sleeping (waiting for user interaction) and hence do not play a big role here. So to summarize: many threads wake up basically simultaneously within a short period of time, do their (relatively short) computation, and go back to sleep.
Since the machine running this application has 8 virtual CPUs (4 cores with 2 threads each; it's an i7-3612QE), only 8 threads can really run at a time, so many threads will have to wait. Also, some of these tasks have interdependencies, so they have to wait anyway, but as an approximation one can think of this application as a bunch of threads going to the runnable state at the same time every 100ms, each doing only a short computation (well below 1ms of CPU time each).
Now to the strange effect: if I look at the CPU percentage in top, it shows something like 250%. As far as I know, top looks at the CPU time (user + system) the kernel accounts for this process, so 250% would mean the process uses 3 virtual CPUs on average. So far so good. Now, if I use taskset to force the entire process onto a single virtual CPU, the CPU percentage drops to 80%. The application's internal accounting tells me that all data is still being processed. So the application is doing the same amount of work, but it seemingly uses fewer CPU resources. How can that be? Can I really trust the kernel's CPU time accounting, or is this an artifact of the measurement?
The CPU percentage also goes down if I start other processes that take a lot of CPU, even if they do nothing (while(true);) and run at low priority (nice). If I launch 8 of these CPU-eating processes, the application again reaches 80%. With fewer CPU-eaters, I get gradually higher CPU%.
Not sure if this plays a role: I have used the profiler vtune, which tells me my application is actually quite inefficient (only about 1 IPC), mostly because it's memory bound. This does not change if I restrict the process to a single virtual CPU, so I assume the effect is not caused by a huge increase in efficiency when running everything on the same core (which would be strange anyway).
My question was essentially already answered by myself in the last paragraph: the process is memory bound. Hence the limiting resource is not the CPU but memory bandwidth. Allowing such a process to run on multiple CPU cores in parallel mainly has the effect that more CPU cores wait for data to arrive from RAM. This is counted as CPU load, since the CPU is executing the thread, just very slowly. All my other observations are consistent with this.
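For anyone wanting to verify this kind of diagnosis without a full profiler, one hedged approach is to compare instructions-per-cycle and cache misses with and without affinity pinning (<pid> and the core number are placeholders):

# Sample IPC and last-level-cache misses for 10 seconds, unrestricted:
perf stat -e cycles,instructions,LLC-load-misses -p <pid> -- sleep 10
# Pin the whole process to virtual CPU 0 and sample again; if throughput per
# instruction barely changes, the bottleneck is memory, not core count:
taskset -cp 0 <pid>
perf stat -e cycles,instructions,LLC-load-misses -p <pid> -- sleep 10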
I want to understand how a certain worst case scenario of context switch happens. Say I have 10 CPU cores running a single process. Everything is CPU intensive, no thread is sleeping (waiting for I/O).
(I am mainly concerned with mainstream modern personal computer architectures and systems, typically x64 with Windows, Linux...)
Correct me if I'm wrong: running 10 CPU/RAM-intensive independent threads is most often a near-optimal situation. The amount of time spent in context switches is rather negligible. While the system may sometimes decide to re-attribute threads to different cores in a round-robin fashion, causing the caches to be repopulated, the effect is minor, and it works almost as if each thread were running on a single fixed core.
Only the main RAM bus may be a limitation since all threads share it, but it's not the point I'm interested in here. Reducing the number of threads will not increase the throughput anyway.
Now assume you still have 10 cores but run 1000 threads. The scheduler could theoretically decide to switch rarely (say every second) running 10 threads for a second, then 10 others... and the whole thing would still be close to optimal performance (throughput).
But it does not seem to be the case and it looks like threads are switched intensively causing a strongly suboptimal performance (throughput). Am I right about it? What is the main cause for this suboptimal performance? A few numbers would be nice if you have any idea of orders of magnitude of (for example): switches per second, performance loss caused by switching...
I'm going to answer my own question (after some search).
On windows, the number of context switches can be measured with performance counters: https://technet.microsoft.com/en-us/library/cc938606.aspx
I measured it on my machine (core i7/Windows 10) and the order of magnitude is around 1000/s by core when the number of running threads is more than the number of cores (and these threads are full CPU).
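On Linux, a roughly equivalent measurement can be sketched with standard tools (the 2-second interval is arbitrary):

# System-wide context switches per second, in the 'cs' column:
vmstat 2
# Per-process voluntary (cswch/s) and involuntary (nvcswch/s) switches;
# involuntary ones are those forced by scheduler preemption:
pidstat -w 2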
The time needed for a context switch varies quite a bit depending on:
what registers need to be saved
if FPU registers need to be saved
the processor model (of course)
You can read: https://www.quora.com/How-long-does-a-context-switch-take or http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html
A slightly pessimistic avg. order of magnitude seems to be 1000 ns. Thus the total time for all context switches on each core is 1ms per second, that is 0.1%.
This does not depend on the number of threads: whether you run 100 or 1000 threads, the number of switches does not change. In conclusion, the time spent in context switching is essentially negligible.
This reasoning is correct as long as the threads are pure CPU with only small memory read/write like a few local variables. I ran a test with full CPU threads and the difference between a few and 1000 threads is not noticeable.
But the situation changes when RAM is involved and switching makes the CPU (memory) caches less efficient. A worst case is when:
computation can be split into 1000 independent "data" parts
each part of the data fits just into the memory cache (say L1 or L2) of a core
each part needs to be read many times
In this situation, running 10 threads to completion, then ten others, and so on, would take full advantage of the cache, while running 1000 threads at a time would cause the cache to be useful only for about 1ms at a stretch (the length of one time slice at ~1000 switches per second).
But if the data of several threads could fit into the cache, or if the threads read common data to some degree, or if each thread reads the data just once, then it is possible that running 1000 threads vs. running 10 threads a hundred times will have similar throughput.
It is more a matter of adapting the parallelism to the memory access pattern; it depends very much on the way memory needs to be accessed.
The time spent in context switching is negligible; the time lost through poor use of the caches may sometimes be a problem and sometimes not, depending on how memory is accessed and shared.
Summary
I am trying to understand the limits of my compute resources when performing multiple simulations. My task is trivial in terms of parallelisation - I need to run a large number of simple independent simulations, i.e. each simulation program does not rely on another for information. Each simulation has roughly the same running time. For this purpose I have created an experiment that is detailed below.
Details
I have two shell scripts located in the same directory.
First script called simple:
#!/bin/bash
# Simple Script
echo "Running sleep with arg= $1 "
sleep 5s
echo "Finished sleeping with arg= $1"
Second script called runall:
#!/bin/bash
export PATH="$PATH:./"
# Fork off a new process for each program by running in background
# Run N processes at a time and wait until all of them have finished
# before executing the next batch. This is sub-optimal if the running
# time of each process varies significantly.
# Note: if the total number of processes is not divisible by the allotted pool size, something weird happens
echo "Executing runall script..."
for ARG in $(seq 600); do
    simple $ARG &
    NPROC=$(($NPROC+1))
    if [ "$NPROC" -ge 300 ]; then
        wait
        echo "New batch"
        NPROC=0
    fi
done
Here are some specs on my computer (MAC OS X):
$ ulimit -u
709
$ sysctl hw.ncpu
hw.ncpu: 8
$ sysctl hw.physicalcpu
hw.physicalcpu: 4
From this I interpret that I have 709 processes at my disposal and 8 processor cores available.
However when I execute $ ./runall I eventually end up with:
...
Running sleep with arg= 253
Running sleep with arg= 254
Running sleep with arg= 255
Running sleep with arg= 256
Running sleep with arg= 257
Running sleep with arg= 258
./runall: fork: Resource temporarily unavailable
Running sleep with arg= 259
./simple: fork: Resource temporarily unavailable
Running sleep with arg= 260
$ Running sleep with arg= 261
Finished sleeping with arg= 5
Finished sleeping with arg= 7
Finished sleeping with arg= 4
Finished sleeping with arg= 8
Finished sleeping with arg= 3
...
SO:
Question 1
Does this mean that out of the 709 processes available, only 258 can be dedicated to my runall program, the rest presumably being used by other processes on my computer?
Question 2
I substituted the simple script with something more complicated than just sleep (it reads a file and processes the data in the file to create a graph), and now I start to notice some differences. Using $ time ./runall I can get the total run time; before, when calling simple with up to 258 processes, I always got a run time of about 5s:
real 0m5.071s
user 0m0.184s
sys 0m0.263s
i.e., running many simulations in parallel gives the same runtime as a single simulation. However, now that I am calling a more complex program instead of simple, I get a total run time longer than the single-simulation time (a single simulation takes 1.5s, whereas 20 simulations in parallel take about 8.5s). How do I explain this behavior?
Question 3
I'm not sure how the number of processor cores is related to parallel performance. Since I have 8 cores at my disposal, I thought I would be able to run 8 programs in parallel in the same time it would take to run just one. I'm not sure about my reasoning on this...
If you have 8 cpu threads available, and your programs consume 100% of a single CPU, it does not make sense to run more than 8 programs at a time.
If your programs are multi-threaded, then you may want to have fewer than 8 processes running at a time. If your programs occasionally use less than 100% of a single CPU (perhaps if they're waiting for IO), then you may want to run more than 8 processes at a time.
Even if the process limit for your user is extremely high, other resources could be exhausted much sooner - for instance, RAM. If you launch 200 processes and they exhaust RAM, then the operating system will respond by satisfying requests for RAM by swapping out some other process's RAM to disk; and now the computer needlessly crawls to a halt because 200 processes are waiting on IO to get their memory back from disk, only to have it be written out again because some other process wants to run. This is called thrashing.
If your goal is to perform some batch computation, it does not make sense to load the computer any more than enough processes to keep all CPU cores at 100% utilization. Anything more is waste.
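Applied to the asker's runall script, a hedged rewrite of this principle keeps exactly as many jobs in flight as there are cores, replacing the batch-and-wait barrier (note that wait -n requires bash 4.3+, newer than the bash 3.2 that ships with macOS):

#!/bin/bash
# Keep up to NCORES jobs running; start a new one the moment any job exits,
# so the cores never idle waiting for the slowest member of a batch.
NCORES=8
for ARG in $(seq 600); do
    while [ "$(jobs -rp | wc -l)" -ge "$NCORES" ]; do
        wait -n    # block until any one background job finishes
    done
    ./simple "$ARG" &
done
wait    # let the final stragglers finish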
Edit - Clarification on terminology.
A single computer can have more than one CPU socket.
A single CPU can have more than one CPU core.
A single CPU core can support simultaneous execution of more than one stream of instructions. Hyperthreading is an example of this.
A stream of instructions is what we typically call a "thread", either in the context of the operating system, processes, or in the CPU.
So I could have a computer with 2 sockets, with each socket containing a 4-core CPU, where each of those CPUs supports hyperthreading and thus supports two threads per core.
Such a computer could execute 2 * 4 * 2 = 16 threads simultaneously.
A single process can have as many threads as it wants, until some resource is exhausted: raw RAM, internal operating system data structures, etc. Each process has at least one thread.
It's important to note that tricks like hyperthreading may not scale performance linearly. An unhyperthreaded CPU core contains enough parts to execute a single stream of instructions all by itself; aside from memory access, it shares nothing with the other cores, so performance can scale linearly.
However, each core has a lot of parts, and during some types of computation some of those parts are inactive while others are active; during other types of computation it could be the opposite. Doing a lot of floating-point math? Then the integer unit in the core might be idle. Doing a lot of integer math? Then the floating-point unit might be idle.
Hyperthreading seeks to increase performance, even if only a little, by exploiting these temporarily unused units within a core: while the floating-point unit is busy, schedule something that can use the integer unit.
...
What matters to the operating system, when it comes to scheduling, is how many threads across all processes are runnable. If I have one process with 3 runnable threads, a second process with one runnable thread, and a third process with 10 runnable threads, the OS will want to run a total of 3 + 1 + 10 = 14 threads.
If there are more runnable program threads than there are CPU execution threads, then the operating system will run as many as it can, and the others will sit there doing nothing, waiting. Meanwhile, those programs and those threads may have allocated a bunch of memory.
Let's say I have a computer with 128 GB of RAM and CPU resources such that the hardware can execute a total of 16 threads at the same time. I have a program that uses 2 GB of memory to perform a simple simulation, that program creates only one thread for its execution, and each instance needs 100s of CPU time to finish. What would happen if I were to try to run 16 instances of that program at the same time?
Together the programs would allocate 2 GB * 16 = 32 GB of RAM to hold their state, and then begin performing their calculations. Since each program creates a single thread, and there are 16 CPU execution threads available, every program can run on the CPU without competing for CPU time. The whole batch would finish in about 100s: 16 processes / 16 CPU execution threads * 100s.
Now what if I increase that to 32 programs running at the same time? Well, we'll allocate a total of 64GB of RAM, and at any one point in time, only 16 of them will be running. This is fine, nothing bad will happen because we've not exhausted RAM (and presumably any other resource), and the programs will all run efficiently and eventually finish. Runtime will be approximately twice as long at 200s.
OK, now what happens if we try to run 128 programs at the same time? We'll run out of memory: 128 * 2 = 256 GB of RAM, more than double what the hardware has. The operating system will respond by swapping memory out to disk and reading it back in as needed, but it will have to do this very frequently, and it will have to wait for the disk.
If you had enough ram, this would run in 800s (128 / 16 * 100). Since you don't, it's very possible it could take an order of magnitude longer.
Your questions are a little confusing. But here's an attempt to explain some of it:
Question 1 Does this mean that out of the 709 processes available, only 258 can be dedicated to my runall program, the rest remaining probably being used by other processes on my computer?
As the ulimit manpage explains, -u tells you how many processes you can start as a user. As you know, every process on Unix has a uid (there are some nitty-gritty details here, like euid, setuid, etc.) that refers to the user on the system who owns that process. What -u tells you is the number of processes you (since you are logged in and executing the ulimit command) can start and run simultaneously on the computer. Note that once a process with pid p exits, the OS is free to recycle that number p for some other process.
Question 2
The answer to question 2 (which seems to be your main confusion) can only be given once we understand what the time command actually reports. Understanding the output of the time command takes some experimentation. For instance, when I run your experiment (on a comparable Mac) with 100 processes (i.e. $(seq 100)), I get:
./runall.sh 0.01s user 0.02s system 39% cpu 0.087 total
This means that only 39% of the available computing power was used, over 0.087s of wall clock time. Roughly speaking, the wall clock time multiplied by the CPU utilization gives the running time (the user time your code needs plus the system time the system calls need). Your simple script is rather too simple: it does not make the CPUs do any work; it just issues the sleep system call!
Compare this with a more real-life example: finding a subset of a given set with a given sum. This (Java) program, on the same computer, produces the following times:
java SubsetSum 38.25s user 1.09s system 510% cpu 7.702 total
This means that the total wall clock time is about 7.7 seconds, but all the available cores are stressed extremely hard to execute this program. On a 4-CPU (8 logical CPU) machine, I get 510% CPU utilization! (And you can see that wall clock time (7.702) multiplied by CPU utilization (5.10), i.e. 39.28, is roughly equal to the total time (38.25 + 1.09 = 39.34).)
Question 3
Well, the way to parallelize your programs is to find the parallelizable activity in solving the problem. You have 8 cores available, and the OS decides how to allocate them to the processes that ask for them. But what if a process goes into the BLOCKED state (blocked on I/O)? Then the OS will schedule that process out and schedule something else in. A simplistic view like "8 cores => 8 programs at the same time" hardly holds once you take into account how scheduling actually works.
I have a multi-threaded application that uses a thread pool of 10 threads. Each thread takes 5 minutes to process an input. Is there a law/formula that governs the total time taken to process n inputs?
In other words, is it right to say that every 5 minutes, 10 inputs can be processed, so to process 100 inputs, it will take 50 minutes?
In addition to computing power (processors/cores) and hardware resource contention (hard disk, I/O competition, etc.), data dependencies should also be considered. For example, if processing each input includes updating data shared by all the other threads, which requires locking (a mutex), then the total throughput will be less than 10x, even on a multi-core processor with more than 10 cores. The maximum speed-up depends on the proportion of time spent in the critical section. If you need a formula, refer to the famous Amdahl's law: en.wikipedia.org/wiki/Amdahl's_law
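For reference, Amdahl's law bounds the speedup on N workers when a fraction p of the work is parallelizable:

\[ S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}. \]

With p = 0.95 (i.e. 5% of each input's processing spent in a shared critical section) and N = 10 threads, S(10) = 1/(0.05 + 0.095) ≈ 6.9x, well short of the naive 10x.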
Not really; you have to consider the total computing power required. If, for example, a thread takes 5 minutes to do the work and the processor is completely consumed during that time, then additional threads will not help you. At the other extreme, if processor utilization is near zero (all of the time is spent waiting for I/O, for example), then your proposed calculation would work. So you have to consider the actual resources used by the computation.
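To put the ideal case in a formula: with no shared bottleneck and each input fully occupying one thread for 5 minutes, the total time for n inputs on a pool of 10 threads is

\[ T(n) = \left\lceil \frac{n}{10} \right\rceil \times 5\ \text{min}, \qquad T(100) = 10 \times 5\ \text{min} = 50\ \text{min}, \]

which recovers the asker's estimate; every violated assumption (CPU saturation, I/O contention, locking) only pushes T(n) upward.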