Task Parallel Library mixed multithreaded and single-threaded .NET 4 - multithreading

I am using the TPL to process thousands of files in a multithreaded fashion. All good.
However, there is a part of the application where I must process those files single-threaded.
Does setting MaxDegreeOfParallelism = 1 mean one thread per core? Is this correct?
When you don't use parallelism and you have 4 cores, does it still use one thread per core?
The problem is that the TPL does a lot of the hard work for you, and not being very familiar with threading does not help either.
Bottom line: I need to make sure that MaxDegreeOfParallelism = 1 is single-threaded.
Sorry for the silly question, but I could not find a straight answer by googling.

See here.
No, it is not the case that one thread necessarily runs per core when you set `MaxDegreeOfParallelism`. It has a different meaning: it limits the number of concurrent tasks across the entire parallel operation. If you set it to one, it essentially renders your parallel approach useless.
The TPL schedules tasks on the thread pool. Once a task is scheduled, the thread pool decides how all the pending tasks are distributed among threads, cores and processors. This is based on heuristics such as the virtual address space, the number of threads currently in a blocked state, etc.
Now, if you mean that there is a part of your application in which the tasks should be done in a sequential form, there are ways to achieve that. Take a look at ContinueWith.
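For example, a minimal sketch of chaining work sequentially with ContinueWith (the step bodies are hypothetical placeholders):

using System;
using System.Threading.Tasks;

class SequentialDemo
{
    static void Main()
    {
        // Each continuation starts only after the previous task finishes,
        // so the three steps run strictly one after another.
        Task.Factory.StartNew(() => Console.WriteLine("step 1"))
            .ContinueWith(t => Console.WriteLine("step 2"))
            .ContinueWith(t => Console.WriteLine("step 3"))
            .Wait();
    }
}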

The documentation doesn't say anything about CPU cores, only about concurrent operations.
So setting it to 1 means one thread in total, though it may be a different thread than the calling thread.
A fiddle to (loosely) prove the assumption; a similar sketch follows below.
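In the same spirit, a small sketch (the file names are hypothetical) that prints the worker's thread id under MaxDegreeOfParallelism = 1:

using System;
using System.Threading;
using System.Threading.Tasks;

class SingleThreadDemo
{
    static void Main()
    {
        var options = new ParallelOptions { MaxDegreeOfParallelism = 1 };

        // With MaxDegreeOfParallelism = 1 the loop body runs one iteration
        // at a time; printing the managed thread id shows a single worker.
        Parallel.ForEach(new[] { "a.txt", "b.txt", "c.txt" }, options, file =>
            Console.WriteLine("{0} on thread {1}",
                file, Thread.CurrentThread.ManagedThreadId));
    }
}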

Related

Do Rust threads run at the same time in parallel? Documentation sounds like it does not [duplicate]

I want to know if a program can run two threads at the same time (that is basically what they are used for, correct?). But if I were to make a system call in one function running on thread A, and have some other task running in another function on thread B, would they both be able to run at the same time, or would my second function wait until the system call finishes?
Add-on to my original question: would this process still be an uninterruptible process while the system call is going on? I am talking about using any system call on UNIX/Linux.
Multi-threading and parallel processing are two completely different topics, each worthy of its own conversation, but for the sake of introduction...
Threading:
When you launch an executable, it is running in a thread within a process. When you launch another thread, call it thread 2, you now have 2 separately running execution chains (threads) within the same process. On a single core microprocessor (uP), it is possible to run multiple threads, but not in parallel. Although conceptually the threads are often said to run at the same time, they are actually running consecutively in time slices allocated and controlled by the operating system. These slices are interleaved with each other. So, the execution steps of thread 1 do not actually happen at the same time as the execution steps of thread 2. These behaviors generally extend to as many threads as you create, i.e. packets of execution chains all working within the same process and sharing time slices doled out by the operating system.
So, in your system call example, it really depends on what the system call is as to whether or not it would finish before allowing the execution steps of the other thread to proceed. Several factors play into what will happen: Is it a blocking call? Does one thread have higher priority than the other? What is the duration of the time slices?
Links relevant to threading in C:
SO Example
POSIX
ANSI C
Parallel Processing:
When multi-threaded program execution occurs on a multiple core system (multiple uP, or multiple multi-core uP) threads can run concurrently, or in parallel as different threads may be split off to separate cores to share the workload. This is one example of parallel processing.
Again, conceptually, parallel processing and threading are thought to be similar in that they allow things to be done simultaneously. But that is concept only, they are really very different, in both target application and technique. Where threading is useful as a way to identify and split out an entire task within a process (eg, a TCP/IP server may launch a worker thread when a new connection is requested, then connects, and maintains that connection as long as it remains), parallel processing is typically used to send smaller components of the same task (eg. a complex set of computations that can be performed independently in separate locations) off to separate resources (cores, or uPs) to be completed simultaneously. This is where multiple core processors really make a difference. But parallel processing also takes advantage of multiple systems, popular in areas such as genetics and MMORPG gaming.
Links relevant to parallel processing in C:
OpenMP
More OpenMP (examples)
Gribble Labs - Introduction to OpenMP
CUDA Toolkit from NVIDIA
Additional reading on the general topic of threading and architecture:
This summary of threading and architecture barely scratches the surface. There are many parts to the topic. Books to address them would fill a small library, and there are thousands of links. Not surprisingly, within the broader topic some concepts do not seem to follow reason. For example, it is not a given that simply having more cores will result in faster multithreaded programs.
Yes, they would, at least potentially, run "at the same time"; that's exactly what threads are for. Of course there are many details, for example:
If both threads run system calls that e.g. write to the same file descriptor they might temporarily block each other.
If thread synchronisation primitives like mutexes are used then the parallel execution will be blocked.
You need a processor with at least two cores in order to have two threads truly run at the same time.
It's a very large and very complex subject.
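To make the synchronisation point concrete, here is a minimal sketch (in C#, purely for illustration) where a lock forces two otherwise-parallel threads to take turns in the critical section:

using System;
using System.Threading;

class LockDemo
{
    static readonly object Gate = new object();

    static void Main()
    {
        // Both threads may run in parallel on a multi-core machine,
        // but the lock serialises access to the critical section.
        var t1 = new Thread(() => Work("A"));
        var t2 = new Thread(() => Work("B"));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
    }

    static void Work(string name)
    {
        for (int i = 0; i < 3; i++)
        {
            lock (Gate)
            {
                Console.WriteLine("{0}: {1}", name, i);
            }
        }
    }
}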
If your computer has only a single CPU, you may wonder how it can execute more than one thread at the same time.
In single-processor systems, only a single thread of execution occurs at a given instant, because single-processor systems support logical concurrency, not physical concurrency.
On multiprocessor systems, several threads do, in fact, execute at the same time, and physical concurrency is achieved.
The important feature of multithreaded programs is that they support logical concurrency, not whether physical concurrency is actually achieved.
The basics are simple, but the details get complex real quickly.
You can break a program into multiple threads (if it makes sense to do so), and each thread will run "at its own pace", such that if one must wait for, eg, some file I/O that doesn't slow down the others.
On a single processor multiple threads are accommodated by "time slicing" the processor somehow -- either on a simple clock basis or by letting one thread run until it must wait (eg, for I/O) and then "switching" to the next thread. There is a whole art/science to doing this for maximum efficiency.
On a multi-processor (such as most modern PCs which have from 2 to 8 "cores") each thread is assigned to a separate processor, and if there are not enough processors then they are shared as in the single processor case.
The whole area of assuring "atomicity" of operations by a single thread, and assuring that threads don't somehow interfere with each other, is incredibly complex. In general there is a "kernel" or "nucleus" category of system call that will not be interrupted by another thread, but that's only a small subset of all system calls, and you have to consult the OS documentation to know which category a particular system call falls into.
They will run at the same time, since one thread is independent of another, even if you perform a system call.
It's pretty easy to test, though: you can create one thread that prints something to the console output and perform a system call on another thread that you know will take some reasonable amount of time. You will notice that the messages continue to be printed by the other thread.
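A sketch of that test (with Thread.Sleep standing in for the long-running system call):

using System;
using System.Threading;

class InterleaveDemo
{
    static void Main()
    {
        // The printer thread keeps running while the main thread is
        // blocked in Thread.Sleep, which stands in for a system call.
        var printer = new Thread(() =>
        {
            for (int i = 0; i < 10; i++)
            {
                Console.WriteLine("still running: {0}", i);
                Thread.Sleep(200);
            }
        });
        printer.Start();

        Thread.Sleep(2000); // the "system call" on the main thread
        printer.Join();
    }
}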
Yes, a program can run two threads at the same time.
It is called multithreading.
would they both be able to run at the same time or would my second function wait until the system call finishes?
They are both able to run at the same time.
If you want, you can make thread B wait until thread A completes, or the reverse.
Two threads can run simultaneously only on a multi-core processor system; on a single-core processor, two threads cannot truly run at the same time. Only one thread runs at a time, and when it finishes its job or its time slice, the next thread in the queue takes over.

Thread synchronisation for very short tasks

I have a C++ application using WinAPI. Portability is not an issue; all I want is maximum performance. I have a basic understanding of multithreading and synchronization issues, but limited experience with the multitude of options, ranging from WinAPI over C++ threads to third-party libraries.
In the performance critical core of my application I identified a loop, which could be parallelized. I managed to split the loop into 4 parts which do not depend on each other. I would like to delegate the job to 4 threads running in parallel. The main thread should wait until all 4 threads have done their job, before it continues.
Sounds very simple. However, currently the loop takes only about 10 microseconds when running on one thread. I'm afraid that synchronization methods which cause a switch to the kernel (events, mutexes, etc.) would produce more overhead than the parallelization could save. SRWLocks + condition variables claim to be very lightweight, but I didn't find a way to solve my synchronization with these tools.
Of course I could test all kinds of synchronization APIs, but I'm sure this has been done before.
So my question is: Is there a reasonable way to synchronize very short tasks and if so, what are the appropriate tools?
If you simply need to wait for threads to complete you would use WaitForMultipleObjects on the thread handles. The other direct option would be to use a synchronization barrier, a primitive that allows a group of threads to halt until all members of the group have reached the barrier, but that is generally for the case where there is more work for the spawned threads to perform after being released.
Your question of whether this would actually be of benefit in your particular case is one that can only be answered through implementation and timing. And note that if you are going to perform this testing it should be done on a release build with optimizations enabled. It may well be the case that if the amount of work to perform is short enough that the time involved in thread management dwarfs any benefit.
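The question targets WinAPI/C++, but the fork/join shape itself is simple; here is a sketch in C# (ProcessQuarter is a hypothetical stand-in for one quarter of the loop):

using System;
using System.Threading.Tasks;

class ForkJoinDemo
{
    static void Main()
    {
        // Fork: one task per independent quarter of the loop.
        var tasks = new Task[4];
        for (int i = 0; i < 4; i++)
        {
            int part = i; // capture a copy for the lambda
            tasks[part] = Task.Factory.StartNew(() => ProcessQuarter(part));
        }

        // Join: block the main thread until all four parts are done,
        // analogous to WaitForMultipleObjects on the four thread handles.
        Task.WaitAll(tasks);
    }

    // Hypothetical stand-in for one quarter of the parallelized loop.
    static void ProcessQuarter(int part)
    {
        Console.WriteLine("quarter {0} done", part);
    }
}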
The update algorithm consists of two steps. Each of these steps can be applied to the knots in arbitrary order, but step 1 must be completed before step 2 can start. I can partition the whole net into four (or more) parts and delegate each part to a separate thread. My problem is: each thread has to pause after step 1 and wait until all threads have finished their job. Then each thread performs step 2, waits for completion of the other threads, and so on.
You want to break the work into a large number of small chunks and have a fixed pool of threads take chunks of work. Do not make 8 threads on an 8-core machine and split the work into 8 chunks. That algorithm will work poorly if, for one reason or another, only 7 of those cores wind up doing work for you: your algorithm will need twice as long, because for the second half of the time only one core is working.
The easy way is to have an extra dispatch thread. Just keep a "work unit" count somewhere protected by a mutex. When a thread finishes a work unit, have it decrement the "work unit" count. When it hits zero, broadcast a condition variable. That will wake the dispatch thread which will then do whatever it takes to get the worker threads going again. It can start them by setting the "work unit" count to the right level and broadcasting another condition variable that the worker threads wait for.
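As an illustration of the same counting mechanism, here is a sketch using .NET's CountdownEvent, which bundles the mutex-protected work unit count and the condition variable into one primitive (the unit bodies are placeholders):

using System;
using System.Threading;

class DispatchDemo
{
    static void Main()
    {
        const int workUnits = 8;

        // Each worker signals as its unit finishes; Wait releases the
        // dispatcher once the count reaches zero.
        using (var done = new CountdownEvent(workUnits))
        {
            for (int i = 0; i < workUnits; i++)
            {
                int unit = i;
                new Thread(() =>
                {
                    Console.WriteLine("work unit {0} finished", unit);
                    done.Signal(); // decrement the work unit count
                }).Start();
            }

            done.Wait(); // dispatcher wakes here and can start the next step
        }
    }
}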
You can also just keep a count of which node needs to be done next and the number of nodes currently doing work. That will require synchronization after each thread though (to figure out which node to do next) and it may make more sense to have each thread grab some number of nodes, iterate over them, and then synchronize to grab another few nodes.
Avoid breaking the work into large chunks early. That can lead to the problem where you have 8 cores but 2 large work units left at some point. Remember, many modern CPUs run their cores at different speeds based on temperature and power measurements.

Will a multi-threaded application be actually faster than a single-threaded application?

All this is entirely theoretical; the question just came to mind and I wasn't entirely sure what the answer is:
Assume you have an application that performs 4 independent calculations. (Totally independent: it doesn't matter in what order you do them, and you don't need one to calculate another.)
Also assume those calculations are long (minutes) and CPU-bound (not waiting for any kind of IO)
1) Now, if you have a 1-processor computer, a single-threaded application will logically be faster than (or the same as) a multithreaded application. As the computer is not able to do more than one thing at a time with one processor, it would "waste" time on context switching and the like.
So far so good?
2) If you have a 4-processor computer, 4 threads will most likely be faster for this than a single thread, right? Your computer can now do 4 operations at a time, so it's only logical to divide your application into 4 threads, and it should complete in the time the longest of the 4 calculations takes.
Still good so far?
3) And now the actual part I am confused about: why would I EVER have my application create more threads than the number of processors (well, actually, cores) available? I have programmed, and have seen, applications that create tens and hundreds of threads, but actually, isn't the perfect number about 8 for an average computer?
P.S. I already read this: Threading vs single thread
but it didn't quite answer that.
Cheers
Why would I EVER have my application create more threads than the number of processors (well actually - cores) available?
One very good reason is if you have threads that wait on events. For example you might have a producer/consumer application in which the producer is reading from some data stream, and that data arrives in bursts: a few hundred (or thousand) records in a batch, followed by nothing for a while, and then another burst. Say you have a 4-core machine. You could have a single producer thread that reads the data and places it in a queue, and three consumer threads to process the queue.
Or, you could have a single producer thread and four consumer threads. Most of the time, the producer thread is idle, giving you four consumer threads to process items from the queue. But when items are available on the data stream, one of the consumer threads gets swapped out in favor of the producer.
That's a simplified example, but substantially similar to programs that I have in production.
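A condensed sketch of that arrangement (one producer, four consumers, with BlockingCollection as the queue; the record counts are arbitrary placeholders):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    static void Main()
    {
        var queue = new BlockingCollection<int>();

        // One (mostly idle) producer plus four consumers on a 4-core box:
        // five threads for four cores, as described above.
        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 100; i++)
                queue.Add(i); // bursts of records would arrive here
            queue.CompleteAdding();
        });

        var consumers = new Task[4];
        for (int c = 0; c < 4; c++)
            consumers[c] = Task.Factory.StartNew(() =>
            {
                foreach (var item in queue.GetConsumingEnumerable())
                    Console.WriteLine("processed {0}", item);
            });

        producer.Wait();
        Task.WaitAll(consumers);
    }
}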
More generally, it doesn't make any sense to create more continuously-working (i.e. CPU bound) threads than you have processing units (CPU cores in general, although the existence of hyperthreading muddies the waters a bit). If you know that your threads won't be waiting on external events, then having n+1 threads when you only have n cores will end up wasting time with thread context switches. Note that this is strictly in the context of your program. If there are other applications and OS services running, your application's threads will get swapped out from time to time so that those other apps and services can get a timeslice. But one assumes that, if you're running a CPU-intensive program, you'll limit the other apps and services that are running at the same time.
Your best bet, of course, is to set up a test. On a 4-core machine, test your app with 1, 2, 3, 4, 5, ... threads. Time how long it takes to complete with different numbers of threads. I think you'll find that on a 4-core machine the sweet spot will be 3 or 4; most likely 4 unless there are other apps or OS services that take a lot of CPU.
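A rough sketch of such a test (the busywork function and iteration counts are arbitrary placeholders):

using System;
using System.Diagnostics;
using System.Threading;

class TimingDemo
{
    const int TotalWork = 200000000;

    static void Main()
    {
        // Run the same CPU-bound workload with 1..6 threads and look
        // for the sweet spot (likely 3 or 4 on a 4-core machine).
        for (int count = 1; count <= 6; count++)
        {
            int perThread = TotalWork / count;
            var sw = Stopwatch.StartNew();

            var workers = new Thread[count];
            for (int i = 0; i < count; i++)
            {
                workers[i] = new Thread(() => Burn(perThread));
                workers[i].Start();
            }
            foreach (var w in workers)
                w.Join();

            Console.WriteLine("{0} thread(s): {1} ms", count, sw.ElapsedMilliseconds);
        }
    }

    // Arbitrary CPU-bound busywork; any pure computation would do.
    static void Burn(int iterations)
    {
        double x = 0;
        for (int i = 0; i < iterations; i++)
            x += Math.Sqrt(i);
    }
}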
One reason I could come up with for more threads than cores would be if some threads needed to interface with other parties: waiting for a response from a server, or querying something from the database. This allows the thread to sleep until an answer is provided, so other computations don't have to wait. In the 4 cores -> 4 threads case, one thread would wait for input, possibly causing other code to have to wait too.
Adding threads to your application is not strictly about performance gains. Some times you want or need to perform more than one task at the same time because that is the most logical way to architect your program.
As an example, perhaps you are writing a game engine, if you take a multi-threaded approach, you may have one thread for physics, one thread for graphics, one thread for networking, one thread for user input, one thread for resource loading from disk etc.
Also, James Baxter's point is very true as well. Sometimes threads are waiting on a resource and cannot execute further until they can access said resource. With only the same number of threads as cores, one core would go to waste.
I think you are assuming that all programs are CPU bound - remember some of your threads will be waiting for I/O (disk/network/user traffic).

Is there an advantage of recycling threads

In C++, I want to create an algorithms with the following structure:
A sequential part
A parallel part A
A sequential part
A parallel part B
A sequential part
Using pthreads, I can think of two ways to solve the problem:
Create N threads for part A and destroy these threads after part A is finished, then allocate N new threads for part B.
Reuse the same threads for part A and part B, using the various kinds of synchronization methods available.
How much overhead does creating new threads incur in solution 1 when performance matters? Should I go for solution 1 or solution 2?
Parallel frameworks such as OpenMP recycle threads. This is called a Thread Pool, and you can find information about those on the site. Here's one related post: Thread Pool vs Thread Spawning
If you're really concerned about performance, the best way to find out what suits your application is to try both approaches and measure them.
In general, if your processing task is expensive and the code is easier to understand if you just spawn new threads, then do that.
And just to colour the argument a little, check out this post that I answered using experimentation the other day: Why are 50 threads faster than 4?
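The question is about pthreads, but the recycling idea is easy to see in a managed sketch; in C#, for example, successive Parallel.Invoke calls tend to reuse the same pool threads instead of spawning fresh ones (printing the thread ids makes the reuse visible):

using System;
using System.Threading;
using System.Threading.Tasks;

class RecycleDemo
{
    static void Main()
    {
        Console.WriteLine("sequential part");

        // Parallel part A: the runtime draws workers from its thread pool.
        Parallel.Invoke(
            () => Work("A1"), () => Work("A2"),
            () => Work("A3"), () => Work("A4"));

        Console.WriteLine("sequential part");

        // Parallel part B: the same pool threads tend to be reused rather
        // than destroyed and recreated; that reuse is the "recycling".
        Parallel.Invoke(
            () => Work("B1"), () => Work("B2"),
            () => Work("B3"), () => Work("B4"));

        Console.WriteLine("sequential part");
    }

    static void Work(string name)
    {
        Console.WriteLine("{0} on thread {1}",
            name, Thread.CurrentThread.ManagedThreadId);
    }
}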

Check number of idle cores when creating .Net 4.0 Parallel Task

My question might sound a bit naive, but I'm pretty new to multithreaded programming.
I'm writing an application which processes incoming external data. For each piece of data that arrives, a new task is created in the following way:
System.Threading.Tasks.Task.Factory.StartNew(() => methodToActivate(data));
The items of data arrive very fast (each second, half second, etc.), so many tasks are created. Handling each task might take around a minute. When testing it, I saw that the number of threads increased all the time. How can I limit the number of tasks created so that the number of actual working threads stays stable and efficient? My computer has only two cores.
Thanks!
One of your issues is that the default scheduler sees tasks that last for a minute and assumes they are blocked on other tasks that have yet to be executed. To try and unblock things, it schedules more pending tasks, hence the thread growth. There are a couple of things you can do here:
Make your tasks shorter (probably not an option).
Write a scheduler that deals with this scenario and doesn't add more threads.
Use ThreadPool.SetMaxThreads to prevent unbounded thread pool growth.
See the section on Thread Injection here:
http://msdn.microsoft.com/en-us/library/ff963549.aspx
You should look into using the producer/consumer pattern with a BlockingCollection<T> around a ConcurrentQueue<T> where you set the BoundedCapacity to something that makes sense given the characteristics of your workload. You can make your BoundedCapacity configurable and then tweak as you run through some profiling sessions to find the sweet spot.
While it's true that the TPL will take care of queueing up the tasks you create, creating too many tasks does not come without penalties. Also, what's the point in producing more work than you can consume? You want to produce enough work that the consumers will never be starved, but you don't want to get too far ahead of yourself, because that's just wasting resources and potentially stealing those very same resources from your consumers.
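A sketch of that shape for the question's dual-core machine (the capacity of 4 and the simulated producer are arbitrary placeholders to tune while profiling):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BoundedDemo
{
    // BoundedCapacity of 4: Add blocks once 4 items are pending,
    // throttling the producer instead of queueing unbounded work.
    static readonly BlockingCollection<string> Queue =
        new BlockingCollection<string>(new ConcurrentQueue<string>(), 4);

    static void Main()
    {
        // One consumer per core on the dual-core machine.
        var consumers = new Task[2];
        for (int i = 0; i < consumers.Length; i++)
            consumers[i] = Task.Factory.StartNew(() =>
            {
                foreach (var data in Queue.GetConsumingEnumerable())
                    MethodToActivate(data); // the ~1 minute handler
            });

        // Simulated producer standing in for the external data source;
        // in the real app, whatever receives the data calls Queue.Add.
        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 10; i++)
                Queue.Add("data " + i); // blocks when 4 items are pending
            Queue.CompleteAdding();     // lets the consumers drain and exit
        });

        producer.Wait();
        Task.WaitAll(consumers);
    }

    // Stand-in for the question's methodToActivate.
    static void MethodToActivate(string data)
    {
        Console.WriteLine("handling {0}", data);
    }
}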
You can create a custom TaskScheduler for the Task Parallel library and then schedule tasks on that by passing an instance of it to the TaskFactory constructor.
Here's one example of how to do that: Task Scheduler with a maximum degree of parallelism.
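Assuming the LimitedConcurrencyLevelTaskScheduler from that linked example (it is a sample class, not part of the framework itself), usage might look like this:

// LimitedConcurrencyLevelTaskScheduler comes from the linked sample.
var scheduler = new LimitedConcurrencyLevelTaskScheduler(2); // at most 2 concurrent tasks
var factory = new TaskFactory(scheduler);

// Tasks queued through this factory never run more than two at a time,
// so thread growth stays bounded on the dual-core machine.
factory.StartNew(() => methodToActivate(data));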