Multiprocessing: why doesn't a single thread just use more CPU?

I'm learning about multiprocessing and it seems to be applicable in one of two scenarios:
our program is waiting for some I/O, so it makes sense to go do something else while waiting;
we break our program up so that individual parts of it can run "in parallel", in an attempt to take full advantage of the CPU.
My confusion is about the second case. I'm probably just lacking in my understanding of how CPUs really work: but if our single-threaded process is only using 1% of the CPU, and it therefore makes sense to get more threads going, then why wouldn't we just (somehow?) speed up that single process so that it uses more CPU and finishes faster?

but if our single-threaded process is only using 1% of the CPU and it therefore makes sense to get more threads going, then why wouldn't we just (somehow?) speed up that single process so that it uses more CPU and finishes faster?
We don't know how to. There seem to be fundamental limitations to how fast we can do things that we haven't quite figured out how to get around. So instead, we do more than one thing at a time.
It takes a woman 9 months to make a baby. So if you want lots of babies, you get lots of women. You don't try to get one woman to go faster.
Say you want to raise 7 to the twenty-millionth power and also raise 11 to the twenty-millionth power. Each of these operations can be reduced to some number of steps, but you will hit a limit. Say each operation takes N sequential steps (each requiring the output of the previous step as its input) and the fastest we can do a single step is Q nanoseconds. With one thread, it will take at least 2NQ nanoseconds to perform both operations. With two threads, we can do one step from each of the two operations at the same time, reducing the minimum time to NQ nanoseconds.
That's a big win.
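A minimal sketch of that idea in Java (my own illustration, not from the answer; the exponent is scaled way down so it finishes quickly): each exponentiation is a chain of dependent multiplications that can't be sped up internally, but the two chains are independent, so two threads can run them at the same time.

```java
import java.math.BigInteger;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TwoPowers {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Two independent, CPU-bound tasks. Neither can be made faster
        // internally, but they can run at the same time on two cores.
        Callable<BigInteger> sevens  = () -> BigInteger.valueOf(7).pow(200_000);
        Callable<BigInteger> elevens = () -> BigInteger.valueOf(11).pow(200_000);

        Future<BigInteger> a = pool.submit(sevens);
        Future<BigInteger> b = pool.submit(elevens);

        System.out.println("7^200000 has " + a.get().bitLength() + " bits");
        System.out.println("11^200000 has " + b.get().bitLength() + " bits");
        pool.shutdown();
    }
}
```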

I might be wrong, but when we split things into threads, we want to make use of the multi-core architecture of our CPUs.
We mostly think of the CPU as a single unit, but you must have heard that an i5 is a quad-core processor, meaning it has 4 cores, while an i3 is a dual-core processor, i.e. it only has two cores.
So the aggregate CPU utilization for a quad-core would be 100%, split into 4x25%. There's a difference between concurrency and parallelism. Parallel means each thread runs on a separate core, making full use of it. Now you have 4 people doing one job; or a better analogy: there are 4 printers in the office, and 4 people can each go ahead and get the copies they want. This is parallelism.
Using that same analogy, let's reduce it to just one copier/printer while 4 people still want to make copies. What we do is use concurrency: we print each requested copy, but only 25% of it, then we switch to the next person, then the next, and then the next; it takes 4 rounds for all the copies to get printed. Even though we utilized 100% of the copier's capability, our people still had to wait, and the waiting time depends on how long each person's document is. So we use something like pre-emption: you can only execute/print for a certain amount of time before we start printing for the next person.
Speeding up a single process by allocating it 100% of the CPU is not the problem (although we also want to run a bunch of other stuff like the GUI, music, system services etc., 85% is doable); the point is that the execution time becomes roughly a quarter when the work is distributed across 4 cores. Imagine you have to print a 400-page book and you have 4 copiers: you use each copier to print 100 pages. That will be faster, right?
I hope I made some sense. Going to sleep.
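To make the book/copier analogy concrete, here is a rough Java sketch (my own illustration; the page count and "printing" are stand-ins): 400 units of work are split into 4 chunks of 100, and each chunk is handled by its own thread, so on a 4-core machine the chunks can genuinely run in parallel.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PrintBook {
    static void printPage(int page) {
        // Stand-in for the real per-page work.
        System.out.println(Thread.currentThread().getName() + " printed page " + page);
    }

    public static void main(String[] args) throws InterruptedException {
        int totalPages = 400;
        int copiers = 4;                         // one thread per "copier"/core
        int pagesPerCopier = totalPages / copiers;

        ExecutorService pool = Executors.newFixedThreadPool(copiers);
        for (int c = 0; c < copiers; c++) {
            int start = c * pagesPerCopier;
            pool.submit(() -> {
                for (int p = start; p < start + pagesPerCopier; p++) {
                    printPage(p + 1);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```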

Related

Does multi-threading improve performance? How?

I hear everyone talking about how multi-threading can improve performance. I don't believe this, unless there is something I'm missing. Say I have an array of 100 elements and traversing it takes 6 seconds. When I divide the work between two threads, the processor still has to go through the same amount of work, and therefore the same time, except that the two threads are working simultaneously at half the speed each. Shouldn't multi-threading make it even slower, since you need additional instructions for dividing the work?
For a simple task like iterating over 100 elements, multi-threading will not provide a performance benefit.
But if you are iterating over 100 billion elements and doing processing on each element, the use of additional CPUs may well help reduce processing time. And more complicated tasks will likely incur interrupts due to I/O, for example. When one thread is sleeping waiting for a peripheral to complete I/O (e.g. a disk write, or a key press from the keyboard), other threads can continue their work.
For CPU-bound tasks where you have more than one core in your processor, you can divide your work across the cores. If you have two cores, split the work onto two threads. This way you have two threads working at full speed.
However, threads are fairly expensive to create, so you need a pretty big workload to overcome the initial cost of creating them.
You can also use threads to improve apparent performance (or responsiveness) in an interactive application. You run heavy computations on a background thread to avoid blocking UI interactions. Your computations do not complete any faster, but your application does not have those "hangs" that make it appear slow and unresponsive.
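A console-only sketch of that last point (class and loop names are mine, and a real GUI framework would use its own worker mechanism): the heavy computation runs on a background thread while the main thread stays free to react immediately.

```java
public class ResponsiveApp {
    public static void main(String[] args) throws InterruptedException {
        // Heavy computation moved off the "UI" (main) thread.
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 2_000_000_000L; i++) {
                sum += i;                        // stand-in for real work
            }
            System.out.println("Background result: " + sum);
        });
        worker.start();

        // Meanwhile the main thread keeps handling "user interaction".
        for (int tick = 0; tick < 5; tick++) {
            System.out.println("UI still responsive, tick " + tick);
            Thread.sleep(200);
        }
        worker.join();                           // collect the result at the end
    }
}
```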

How to optimize number of threads per number of cores

I'm trying to get a better idea of how many threads should run on n cores. I know this is a complicated question whose answer depends on a number of factors, such as how much shared state there is, and how much sleeping and waiting on resources each thread does.
To simplify things, let's say we have 2 cores and just one process that can divide its work into threads with no shared state. Let's say each thread just performs computation after computation with no sleeping and no waiting on resources. Would the ideal number of threads in this case be 2?
Let's complicate things a bit and say that the threads have to do some sort of disk I/O. How does this change our answer? I would think that we could use more than 2 threads in this case.
Or let's say they don't do any sleeping or waiting on resources, but instead there's some memory that they both have access to that requires synchronization. How does this change our answer? I would think that in this case we may actually prefer 1 thread over 2, depending on how much synchronization is required.
This is a hard question to answer in general; it really depends on the specifics of the case. What to remember is that context switches cost something: if you're only doing computations, it would be wasteful to have 2 threads running on one core (you wouldn't really gain anything, only lose to the context switches). On the other hand, if you are waiting for resources and can continue with other calculations at the same time, it is a good idea to have a thread wait for those resources so the entire execution doesn't lag behind.
When it comes to I/O you don't think in terms of threads per core. You think in terms of threads per physical device. Each individual device has a different optimal degree of parallelism (DOP): a magnetic disk about 1, an SSD at least 4, a network much higher.
For CPU-bound work the optimal number is 1 per core.
In mixed cases, or cases more complicated than that, no general answer can be given. The system can behave in surprising ways (like collapsing under load!). The approach here is to test different DOPs and use the best one. Generally there will be exactly one optimum, while both 1 and "infinity" perform much worse, so you only need to find that single maximum, which is quite easy.
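A rough sketch of that "test different DOPs" approach (the workload and numbers are made up): run the same fixed amount of CPU-bound work with increasing thread counts, time each run, and pick the count with the shortest wall-clock time.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DopSweep {
    // A fixed amount of CPU-bound work, divided across `threads` workers.
    static long runWorkload(int threads, int totalTasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Callable<Double>> tasks = new ArrayList<>();
        for (int t = 0; t < totalTasks; t++) {
            tasks.add(() -> {
                double x = 1.0;
                for (int i = 0; i < 2_000_000; i++) x = Math.sqrt(x + i);
                return x;
            });
        }
        long start = System.nanoTime();
        pool.invokeAll(tasks);                   // blocks until every task is done
        long elapsed = System.nanoTime() - start;
        pool.shutdown();
        return elapsed / 1_000_000;              // milliseconds
    }

    public static void main(String[] args) throws Exception {
        for (int dop : new int[] {1, 2, 4, 8, 16}) {
            System.out.println(dop + " threads: " + runWorkload(dop, 64) + " ms");
        }
    }
}
```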

How to do the same calculations faster on 4-core CPU: 4 threads or 50 threads?

Let's assume we have a fixed amount of calculation work, without blocking, sleeping, or I/O waiting. The work can be parallelized very well: it consists of 100M small and independent calculation tasks.
What is faster on a 4-core CPU: running 4 threads or, let's say, 50? Why should the second variant be slower, and by how much?
As I understand it: when you run 4 heavy threads on a 4-core CPU with no other CPU-consuming processes/threads, the scheduler is allowed not to move the threads between cores at all; it has no reason to in this situation. Core0 (the main CPU) will be responsible for executing the interrupt handler for the hardware timer 250 times per second (a basic Linux configuration) and other hardware interrupt handlers, but the other cores may not feel any of this.
What is the cost of a context switch? The time to store and restore the CPU registers for the different context? And what about caches, pipelines and various branch-prediction things inside the CPU? Can we say that each time we switch context, we hurt caches, pipelines and some instruction-decoding facilities in the CPU? So the more threads executing on a single core, the less work they can do together compared to their serial execution?
The question about caches and other hardware optimizations in a multithreaded environment is the one that interests me most right now.
As @Baile mentions in the comments, this is highly application-, system-, and environment-specific.
As such, I'm not going to take the hard-line approach of saying exactly 1 thread per core (or 2 threads per core in the case of Hyper-Threading).
As an experienced shared-memory programmer, I have seen that the optimal number of threads for a 4-core machine can range anywhere from 1 to 64+.
Now I will enumerate the situations that can cause this range:
Optimal Threads < # of Cores
In certain tasks that are very fine-grained parallel (such as small FFTs), the overhead of threading is the dominant performance factor. In some cases it's not helpful to parallelize at all. In some cases you get a speedup with 2 threads but backwards scaling at 4 threads.
Another issue is resource contention. Even if you have a highly parallelizable task that splits easily across 4 cores/threads, you may be bottlenecked by memory bandwidth and cache effects. So often you find that 2 threads are just as fast as 4 threads (as is often the case with very large FFTs).
Optimal Threads = # of Cores
This is the optimal case. No need to explain here - one thread per core. Most embarrassingly parallel applications that are not memory or I/O bound fit right here.
Optimal Threads > # of Cores
This is where it gets interesting... very interesting. Have you heard about load-imbalance? How about over-decomposition and work-stealing?
Many parallelizable applications are irregular, meaning that the tasks do not split into sub-tasks of equal size. So you may end up splitting a large task into 4 unequal pieces, assigning them to 4 threads and running them on 4 cores... the result? Poor parallel performance, because 1 thread happened to get 10x more work than the other threads.
A common solution here is to over-decompose the task into many sub-tasks. You can either create a thread for each one of them (so now you get threads >> cores), or you can use some sort of task scheduler with a fixed number of threads. Not all tasks are suited for both approaches, but quite often over-decomposing a task into 8 or 16 threads for a 4-core machine gives optimal results.
Although spawning more threads can lead to better load-balance, the overhead builds up. So there's typically an optimal point somewhere. I've seen as high as 64 threads on 4 cores. But as mentioned, it's highly application specific. And you need to experiment.
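Here is one way the over-decomposition idea can look in code (my illustration, not the answerer's; Java's ForkJoinPool is one example of a task scheduler with a fixed number of worker threads that does work-stealing): the work is split into many more sub-tasks than cores, and idle workers steal queued sub-tasks from busy ones, which smooths out load imbalance.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class OverDecompose {
    // Splits [lo, hi) into many small chunks; in real irregular workloads the
    // chunks take unequal time, which is exactly what work-stealing handles well.
    static class SumTask extends RecursiveTask<Long> {
        static final int THRESHOLD = 10_000;
        final long[] data;
        final int lo, hi;

        SumTask(long[] data, int lo, int hi) {
            this.data = data; this.lo = lo; this.hi = hi;
        }

        @Override
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i] * data[i];
                return sum;
            }
            int mid = (lo + hi) / 2;
            SumTask left = new SumTask(data, lo, mid);
            SumTask right = new SumTask(data, mid, hi);
            left.fork();                          // queue the left half for stealing
            return right.compute() + left.join(); // work on the right half ourselves
        }
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        ForkJoinPool pool = new ForkJoinPool(4);  // fixed number of worker threads
        long result = pool.invoke(new SumTask(data, 0, data.length));
        System.out.println("Sum of squares: " + result);
    }
}
```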
EDIT : Expanding answer to more directly answer the question...
What is the cost of a context switch? The time to store and restore the CPU registers for the different context?
This is very dependent on the environment, and somewhat difficult to measure directly. Short answer: very expensive. This might be a good read.
What about caches, pipelines and various branch-prediction things inside the CPU? Can we say that each time we switch context, we hurt caches, pipelines and some instruction-decoding facilities in the CPU?
Short answer: yes. When you context-switch out, you likely flush your pipeline and mess up all the predictors. Same with caches: the new thread is likely to replace the cached data with its own.
There's a catch though. In some applications where the threads share the same data, it's possible that one thread could potentially "warm" the cache for another incoming thread or another thread on a different core sharing the same cache. (Although rare, I've seen this happen before on one of my NUMA machines - superlinear speedup: 17.6x across 16 cores!?!?!)
So the more threads executing on a single core, the less work they can do together compared to their serial execution?
Depends, depends... Hyper-Threading aside, there will definitely be overhead. But I've read a paper where someone used a second thread to prefetch for the main thread... Yes, it's crazy...
Creating 50 threads will actually hurt performance, not improve it. It just doesn't make any sense.
Ideally you should create 4 threads, not more, not less. There will be some overhead because of context switching, but that is unavoidable: the OS's threads, services and other applications need to execute too. But nowadays, with such powerful and lightning-fast CPUs, this is of little concern, since those OS threads will take less than 2% of the CPU's time. Almost all of them will be in a blocked state while your program is running.
You might think that, since performance is of critical importance, you should code those small critical areas in low-level assembly language. Modern programming languages allow this.
But seriously... compilers and, in the case of Java, the JVM, will optimize those portions so well that it just isn't worth it (unless you actually want to exercise something like this). Instead of your calculations finishing in 100 seconds, they'll finish in 97 or 98. The question you must ask yourself is: is it worth all those hours of coding and debugging?
You asked about the time cost of context switching. These days it is extremely low. Look at a modern dual-core CPU running Windows 7, for example: if you start an Apache web server and a MySQL database server on that machine, you will easily go over 800 threads, and the machine doesn't even feel it. To see how low this cost is, read "How to estimate the thread context switching overhead?". To spare you the searching/reading: context switching can be done hundreds of thousands of times per second.
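If you want a ballpark number on your own machine, here is a rough, hedged sketch (my own; it measures the latency of a blocking handoff between two threads, which only approximates a full context switch and can be skewed by spinning on multi-core machines):

```java
import java.util.concurrent.SynchronousQueue;

public class SwitchCostEstimate {
    public static void main(String[] args) throws InterruptedException {
        final int handoffs = 100_000;
        SynchronousQueue<Integer> ping = new SynchronousQueue<>();
        SynchronousQueue<Integer> pong = new SynchronousQueue<>();

        Thread partner = new Thread(() -> {
            try {
                for (int i = 0; i < handoffs; i++) {
                    ping.take();           // wait for the token...
                    pong.put(i);           // ...and hand it straight back
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        partner.start();

        long start = System.nanoTime();
        for (int i = 0; i < handoffs; i++) {
            ping.put(i);
            pong.take();
        }
        long elapsed = System.nanoTime() - start;
        partner.join();

        // Each round trip involves roughly two thread handoffs.
        System.out.println("~" + elapsed / (handoffs * 2L) + " ns per handoff");
    }
}
```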
4 threads are faster if you can schedule the switching between your tasks better than the operating system does.
If you can use 4 threads, use them. There's no way 50 will go faster than 4 on a 4-core machine. All you get is more overhead.
Of course, you're describing an ideal non-real-world situation, so whatever you are actually building, you'll need to measure in order to understand how the performance is affected.
There is Hyper-Threading technology, which can handle more than one thread per core, but how much it helps depends heavily on the type of calculation you want to do. Consider using a GPU, or very low-level assembly language, to achieve maximum performance.
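Tying these answers together, here is a minimal sketch (mine, with made-up names and chunk sizes) of the usual pattern: size a fixed thread pool to the number of available cores rather than creating 50 threads, and hand the 100M small calculations to it in a few core-sized chunks.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoreSizedPool {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();   // e.g. 4
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // One task per core-sized chunk, not one thread per tiny calculation.
        int totalItems = 100_000_000;
        int chunk = totalItems / cores;
        List<Callable<Long>> tasks = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            long from = (long) c * chunk;
            long to = (c == cores - 1) ? totalItems : from + chunk;
            tasks.add(() -> {
                long acc = 0;
                for (long i = from; i < to; i++) acc += i % 7;     // tiny independent calculation
                return acc;
            });
        }

        long total = 0;
        for (Future<Long> f : pool.invokeAll(tasks)) total += f.get();
        System.out.println("Result: " + total + " using " + cores + " threads");
        pool.shutdown();
    }
}
```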

I don't understand multi-threaded programming

Can someone please explain to me how a multi-threaded application can be faster when a single-core CPU can only do a single thing at a time? If I have 10 threads, then only 1 of those threads is really 'running' at any given moment on a single-core CPU, and all the extra threads just add context-switching overhead. So if each thread has 10 instructions to process, then in the end I'm still processing 100 instructions sequentially, plus the context-switching overhead. Am I missing something here?
A Helpful Analogy About Bananas
Imagine a supermarket with 4 checkout lanes. But there is only one cashier. Should she work on a single register or work on all 4 registers, moving between them?
The obvious answer is that she should stay on one register to avoid wasting time moving between checkout lanes.
But now imagine that when you buy fruit, the scale can take up to 5 minutes to re-calibrate for each specific type of fruit.
While the scale is recalibrating and the register is tied up, suddenly it becomes more efficient overall to rotate over to the next lane and ring up some items there rather than just waiting for the scale to be ready again.
The scale calibrating is non-CPU work (such as disk I/O, network latency, etc.). Rotating to the next register is switching to another thread. And there you have it.
Yes, you are missing the fact that a process might BLOCK to wait for I/O. So, if you use only ONE THREAD in your application and it blocks to wait for I/O to finish, it will be extremely slow.
On the other hand, if you have multiple threads, your application might have a couple of them waiting for I/O to finish while the rest of them keep "executing" as the OS gives them access to the SINGLE PROCESSOR.
Do keep in mind that I/O operations are orders of magnitude slower than CPU operations.
And yes, even on a single core, a multithreaded application will probably be faster than a single-threaded one. Consider the case of a server process like Apache running on a single thread: every time a connection is waiting for I/O to finish, all the other connections halt, waiting for that I/O operation to complete. Of course there is async I/O, but the programming model needed to make a huge server like Apache run on a single thread with async I/O would be too complicated to maintain, improve, or anything else.
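A small, hedged sketch of that point (the names and timings are mine; the I/O is simulated with sleep): each request "waits for I/O" and then finishes; with several threads the waits overlap, so wall-clock time drops even though the CPU does no less work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OverlappingWaits {
    static void handleRequest(int id) {
        try {
            Thread.sleep(500);                  // simulated I/O wait (disk, network, ...)
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("Handled request " + id);
    }

    static long timeWith(int threads, int requests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            int id = i;
            pool.submit(() -> handleRequest(id));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("1 thread:  " + timeWith(1, 8) + " ms");  // roughly 8 x 500 ms
        System.out.println("8 threads: " + timeWith(8, 8) + " ms");  // roughly 500 ms
    }
}
```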
You're right, it's not faster on a single-core processor. Most programs do many things at once. Most of these operations are 'bursty' for the processor. They do something, wait for input or output to finish, then do some more. Multithreaded programming allows another operation to use the processor during the wait. Remember, all processors basically do the same thing. The difference is the speed that they can do their operations. The goal then is to keep the processor busy doing useful stuff as much as possible. Multithreaded programming is just a method that makes it easier for programmers to get to that goal.
On a single core, of course it is not faster. But it can make the system more responsive, by not appearing dead to the world while doing a long-running task.
It really depends on what the threads are doing. If there is a relatively large amount of latency, another thread can do its job while the other threads are waiting on their own I/O.

Question about app with multiple threads in a few CPU-machine

Given a machine with 1 CPU and a lot of RAM. Besides other kinds of applications (web server etc.), there are 2 server applications running on that machine doing the exact same kind of processing, although one uses 10 threads and the other uses 1 thread. Assume the processing logic for each request is 100% CPU-bound and typically takes no longer than 2 seconds to finish. The question is: which one's throughput, in terms of transactions processed per minute, might be better, and why?
Note that the above is not a real environment; I just made up the numbers to make the question clear. My current thinking is that there should be no difference, because the apps are 100% CPU-bound, and therefore if the machine can handle 30 requests per minute for the 2nd app, it will also be able to handle 3 requests per minute for each of the 10 threads of the 1st app. But I'm glad to be proven wrong, given that there are other applications running on the machine and one application might not always be given 100% of the CPU time.
There's always some overhead involved in task switching, so if the threads aren't blocking on anything, fewer threads are generally better. Also, if the threads aren't executing the same part of the code, you'll get some cache flushing each time you switch.
On the other hand, the difference might not be measurable.
Interesting question.
I wrote a sample program that does just this. It has a class that goes and does some processor-intensive work, then returns. I specify the total number of threads I want to run and the total number of times I want the work to run. The program then divides the work equally between all the threads (if there's only one thread, it just gets all of it) and starts them all up.
I ran this on a single-processor VM, since I couldn't find a real computer with only 1 processor in it anymore.
Run independently:
1 Thread 5000 Work Units - 50.4365sec
10 Threads 5000 Work Units - 49.7762sec
This seems to show that on a one-processor PC, with lots of threads doing processor-intensive work, Windows is smart enough not to rapidly switch them back and forth, and they take about the same amount of time.
Run together (or as close as I could get to pushing enter at the same time):
1 Thread 5000 Work Units - 99.5112sec
10 Threads 5000 Work Units - 56.8777sec
This is the meat of the question. When you run 10 threads + 1 thread, they all seem to be scheduled equally. The 10 threads each took 1/10th longer (because there was an 11th thread running) while the other thread took almost twice its time (really, it got 1/10th of its work done in the first 56sec, then did the other 9/10ths in the next 43sec...which is about right).
The result: Windows' scheduler is fair on a thread level, but not on a process level. If you make a lot of threads, you can leave the other processes that weren't smart enough to make lots of threads high and dry. Or just do it right and use a thread pool :-)
If you're interested in trying it for yourself, you can find my code:
http://teeks99.com/ThreadWorkTest.zip
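For anyone who doesn't want to download the zip, here is a rough sketch of the same kind of experiment (my own code under my own assumptions about what a "work unit" is, not the linked ThreadWorkTest): a fixed number of CPU-bound work units is divided over N threads and the total wall-clock time is printed.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadWorkBench {
    // One "work unit": a small chunk of pure CPU work.
    static double workUnit() {
        double x = 0;
        for (int i = 1; i <= 200_000; i++) x += Math.sin(i) * Math.cos(i);
        return x;
    }

    static long run(int threads, int totalUnits) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        int unitsPerThread = totalUnits / threads;
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int u = 0; u < unitsPerThread; u++) workUnit();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(" 1 thread : " + run(1, 5000) + " ms");
        System.out.println("10 threads: " + run(10, 5000) + " ms");
    }
}
```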
The scheduling overhead could make the app with 10 threads slower than the one with 1 thread. You won't know for sure unless you create a test.
For some background on multithreading see http://en.wikipedia.org/wiki/Thread_(computer_science)
This might very well depend on the operating system scheduler. For example, back in single-thread days the scheduler knew only about processes, and had measures like "niceness" to figure out how much to allocate.
In multithreaded code, there is probably a way in which one process that has 100 threads doesn't get 99% of the CPU time if there's another process that has a single thread. On the other hand, if you have only two processes and one of them is multithreaded I would suspect that the OS may give it more overall time. However, AFAIK nothing is really guaranteed.
Switching costs between threads in the same process may be cheaper than switching between processes (e.g., due to cache behavior).
One thing you must consider is the wait time on the other end of the transaction. Having multiple threads allows you to be waiting for a response on one thread while preparing the next transaction on another. At least that's how I understand it, so I think a few threads will turn out better than one.
On the other hand, you must consider the overhead involved in dealing with multiple threads. The details of the application are an important part of the consideration here.

Resources