Use of the terms "queues", "multicore", and "threads" in Grand Central Dispatch

I am trying to get my head around the concepts of Grand Central Dispatch. I want to understand these quotes from Vandad's book on Concurrent Programming.
The real use for GCD is to dispatch tasks to multiple cores, without making you, the programmer, worry about which core is executing which task.
and
At the heart of GCD are dispatch queues. Dispatch queues are pools of threads.
and finally
You will not be working with these threads directly. You will just work with dispatch queues, dispatching tasks to these queues and asking queues to invoke your task.
The key terms here are "cores", "queues", "threads", and "tasks".
Are multiple cores the same as queues? Does a queue consist of many threads? Does each thread perform a task?

So multiple cores are the same as queues?
Not really. A queue is a programming abstraction, a core is a physical resource in your processor. There is no unique relationship between a queue and a core, although at any given point in time it can be said that a given queue is executing a given task on a given core.
A queue consists of many threads?
A queue consists of tasks. Tasks are assigned to threads by the queue-management system when it comes time to execute them. Threads are OS resources; they are allocated to cores, which effectively run them and have no notion of what a task is.
If you do not account for hardware multithreading (e.g., Hyper-Threading), at any given point in time a core is running a specific thread; when it is time to run a different thread, a context switch occurs on that core. If you do account for hardware multithreading, you can have multiple threads running on virtual cores hosted in the same physical core.
The relationship between queues and threads is opaque. A queue could manage several threads at once, several threads one at a time, or just one thread all the time. In the first case you have a concurrent queue, able to execute parallel tasks on simultaneous threads; in the second and third cases, you have a serial queue.
Each thread performs a task?
At any given point in time, a thread is performing a task. You can have threads that are spawned, execute their task, and die, or long-running threads (e.g., the main thread) that execute several tasks.
It may be pretty puzzling at first; some reading on operating systems, and perhaps high-level processor architecture, will help you fully understand this.
GCD aims at letting you reason exclusively in abstract terms, i.e., in terms of tasks and queues, and forget about threads and cores, which are seen as a sort of "implementation means": low-level details that you can leave to the system to use efficiently.
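To make that concrete, here is a minimal sketch in C using libdispatch, the C API underneath GCD (available on Apple platforms and, as libdispatch, on some other systems). The task count and the use of a dispatch group are illustrative choices, not anything prescribed by GCD:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

/* One task; GCD decides which thread, and hence which core, runs it. */
static void task(void *ctx) {
    printf("task %ld running\n", (long)ctx);
}

int main(void) {
    /* A system-provided concurrent queue: tasks submitted to it may run in parallel. */
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t g = dispatch_group_create();

    for (long i = 0; i < 8; i++)
        dispatch_group_async_f(g, q, (void *)i, task); /* dispatch tasks, not threads */

    dispatch_group_wait(g, DISPATCH_TIME_FOREVER); /* block until every task has run */
    dispatch_release(g);
    return 0;
}
```

Note that nothing in the code says how many threads to use or which core runs what; that is exactly the abstraction the answer above describes.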

Queues are just lists of tasks to execute; the number of cores depends on the processor, and you can have one or many.
Queues are configurable, and you can decide whether tasks may be executed concurrently. If you allow concurrency in your queue, tasks in the queue can be executed at the same time on different cores.

I'm not sure those quotes really do GCD justice. For example, to take each quote in turn:
GCD is more than usable (and useful) even if you have only a single core available, since multi-threading certain tasks has its place in computer science regardless of the number of physical CPU cores. Better to think of it as an alternative to managing threads explicitly: GCD will do the thread management so you don't have to; you, as the programmer, just have to think in terms of queues and whether certain related tasks must be done serially or can be done concurrently.
Dispatch queues are not "pools of threads". Dispatch queues are units of work aggregation and should be thought of that way. How that work is physically performed, by one thread or multiple threads, is not the programmer's concern; in fact, the fewer assumptions the programmer makes about that, the better, since GCD tries very hard to be efficient and use as few threads as possible while still effectively utilizing hardware resources.
The third quote is good - that is the appropriate idiom to embrace. Just submit your work (be it blocks or function/context tuples) to the appropriate queue, creating queues as necessary to associate with resources that require synchronization, and you've got the gist of GCD.
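As a sketch of that idiom, the following C snippet creates a serial queue and uses it, rather than a lock, to guard a shared counter. The queue label and iteration count are arbitrary:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

static long counter = 0; /* shared resource, only ever touched on the serial queue */

static void increment(void *ctx) {
    (void)ctx;
    counter++; /* safe without a mutex: the serial queue runs one task at a time */
}

int main(void) {
    dispatch_queue_t q = dispatch_queue_create("com.example.counter", DISPATCH_QUEUE_SERIAL);
    dispatch_group_t g = dispatch_group_create();

    for (int i = 0; i < 1000; i++)
        dispatch_group_async_f(g, q, NULL, increment);

    dispatch_group_wait(g, DISPATCH_TIME_FOREVER);
    printf("counter = %ld\n", counter); /* always 1000: no updates are lost */
    dispatch_release(g);
    dispatch_release(q);
    return 0;
}
```

The serial queue is the synchronization: anything that must not run concurrently is simply funneled through it.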

Related

Can kernel schedule user level threads of same process on different cores?

As far as I know, the kernel doesn't know whether it is executing a user thread or a user process, because to the kernel user threads are just user processes; it only schedules user processes and doesn't care which thread was running in that process.
I have one more question, Is there per core ready queue or a single ready queue for all the cores?
I was reading this paper and it is written that
In the stock Linux kernel the set of runnable threads is partitioned into mostly-private per core scheduling queues; in the common case, each core only reads, writes, and locks its own queue.
The linux kernel scheduler uses the "task" as its primary schedulable entity. This corresponds to a user-space thread. For a traditional simple Unix-style program, there is only a single thread in the process and so the distinction can be ignored. Other programs of course may have multiple threads. But in all cases, the kernel only schedules tasks (i.e. threads).
Your terminology above therefore doesn't really match the situation. The kernel doesn't really care whether the different threads it schedules are part of the same process or different processes: each thread can be scheduled independently. You can have multiple threads from the same process running on different processors/cores at the same time.
Yes, there are separate run queues for each core.
The paper you reference is, I think, slightly misleading in its phrasing. In particular, saying that the "set of runnable threads is partitioned into..." doesn't give quite the right meaning; that makes it sound like the threads are divided into multiple groups that are then assigned to different cores and can only be executed there. It would be more accurate to say that there is a separate run queue for each core containing a set of threads waiting to execute, and in common use, the scheduler doesn't need to reference the queues for other cores.
But in fact, threads can migrate from one core to another. For example, if there is a thread waiting to run on core A (hence in core A's run queue), but core A is already busy running some other thread, and there is another core that is not busy, the waiting thread may be migrated to that other core and executed there. (This is an oversimplification of course as there are other factors that go into deciding whether/when to migrate a thread.)
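You can observe this on Linux with a small pthreads sketch: each thread of the same process reports which core it ended up on via sched_getcpu() (a glibc extension). Which cores get reported, and whether a thread migrates mid-run, is entirely up to the scheduler; the busy-loop length is arbitrary. Compile with -pthread:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Each thread burns some CPU, then reports the core it is currently on. */
static void *report(void *arg) {
    long id = (long)arg;
    for (volatile long i = 0; i < 100000000; i++)
        ; /* keep the core busy so the threads are likely to spread out */
    printf("thread %ld of this process is on core %d\n", id, sched_getcpu());
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, report, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

On a multi-core machine the four threads typically report different cores, demonstrating that threads of one process are scheduled independently.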

Will a multi-threaded application be actually faster than a single-threaded application?

This is entirely theoretical; the question just came to mind and I wasn't entirely sure what the answer is:
Assume you have an application that performs 4 independent calculations. (Totally independent: it doesn't matter what order you do them in, and you don't need one to calculate another.)
Also assume those calculations are long (minutes) and CPU-bound (not waiting for any kind of IO).
1) Now, if you have a 1-processor computer, a single-threaded application will logically be faster than (or the same as) a multithreaded application. As the computer is not able to do more than one thing at a time with one processor, it would "waste" time on context switching and the like.
So far so good?
2) If you have a 4-processor computer, 4 threads will most likely be faster for this than a single thread. Right? Your computer can now do 4 operations at a time, so it's only logical to divide your application into 4 threads, and it should complete in the time the longest of the 4 calculations takes.
Still good so far?
3) And now the actual part I am confused about: why would I EVER have my application create more threads than the number of processors (well, actually, cores) available? I have written, and have seen, applications that create tens or even hundreds of threads, but by the reasoning above the perfect number would be about 8 for an average computer?
P.S. I already read this: Threading vs single thread
but it didn't quite answer that.
Cheers
Why would I EVER have my application create more threads than the number of processors (well actually - cores) available?
One very good reason is if you have threads that wait on events. For example you might have a producer/consumer application in which the producer is reading from some data stream, and that data arrives in bursts: a few hundred (or thousand) records in a batch, followed by nothing for a while, and then another burst. Say you have a 4-core machine. You could have a single producer thread that reads the data and places it in a queue, and three consumer threads to process the queue.
Or, you could have a single producer thread and four consumer threads. Most of the time, the producer thread is idle, giving you four consumer threads to process items from the queue. But when items are available on the data stream, one of the consumer threads gets swapped out in favor of the producer.
That's a simplified example, but substantially similar to programs that I have in production.
More generally, it doesn't make any sense to create more continuously-working (i.e. CPU bound) threads than you have processing units (CPU cores in general, although the existence of hyperthreading muddies the waters a bit). If you know that your threads won't be waiting on external events, then having n+1 threads when you only have n cores will end up wasting time with thread context switches. Note that this is strictly in the context of your program. If there are other applications and OS services running, your application's threads will get swapped out from time to time so that those other apps and services can get a timeslice. But one assumes that, if you're running a CPU-intensive program, you'll limit the other apps and services that are running at the same time.
Your best bet, of course, is to set up a test. On a 4-core machine, test your app with 1, 2, 3, 4, 5, ... threads. Time how long it takes to complete with different numbers of threads. I think you'll find that on a 4-core machine the sweet spot will be 3 or 4; most likely 4 unless there are other apps or OS services that take a lot of CPU.
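A rough version of such a test, in C with pthreads, might look like the sketch below: a fixed CPU-bound workload is split evenly over N threads and timed for N = 1 through 8. The workload size is arbitrary, and a real benchmark would repeat runs and control for background load. Compile with -pthread:

```c
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define TOTAL_WORK 400000000L /* arbitrary amount of CPU-bound work */

static void *burn(void *arg) {
    long iters = (long)arg;
    volatile double x = 0.0; /* volatile so the loop is not optimized away */
    for (long i = 0; i < iters; i++)
        x += 1.0;
    return NULL;
}

/* Run the whole workload split across nthreads threads; return elapsed seconds. */
static double run_with(int nthreads) {
    pthread_t t[8];
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, burn, (void *)(TOTAL_WORK / nthreads));
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);
    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void) {
    for (int n = 1; n <= 8; n++)
        printf("%d thread(s): %.2f s\n", n, run_with(n));
    return 0;
}
```

On a 4-core machine you would expect the elapsed time to drop until N reaches the core count and then flatten, or worsen slightly, as extra CPU-bound threads only add context switching.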
One reason I could come up with for having more threads than cores would be if some threads needed to interface with other parties: waiting for a response from a server, or querying something from a database. This allows the thread to sleep until an answer is provided, so other computations don't have to wait. In the 4-cores/4-threads case, one thread would wait for input, possibly forcing other code to wait too.
Adding threads to your application is not strictly about performance gains. Some times you want or need to perform more than one task at the same time because that is the most logical way to architect your program.
As an example, perhaps you are writing a game engine, if you take a multi-threaded approach, you may have one thread for physics, one thread for graphics, one thread for networking, one thread for user input, one thread for resource loading from disk etc.
Also, James Baxter's point is very true. Sometimes threads are waiting on a resource and cannot execute further until they access said resource. With only the same number of threads as cores, one core would go to waste.
I think you are assuming that all programs are CPU bound - remember some of your threads will be waiting for I/O (disk/network/user traffic).

Running two threads at the same time

I want to know if a program can run two threads at the same time (that is basically what it is used for correct?). But if I were to do a system call in one function where it runs on thread A, and have some other tasks running in another function where it runs on thread B, would they both be able to run at the same time or would my second function wait until the system call finishes?
Add-on to my original question: would this process still be an uninterruptible process while the system call is going on? I am talking about using any system call on UNIX/Linux.
Multi-threading and parallel processing are two completely different topics, each worthy of its own conversation, but for the sake of introduction...
Threading:
When you launch an executable, it is running in a thread within a process. When you launch another thread, call it thread 2, you now have 2 separately running execution chains (threads) within the same process. On a single core microprocessor (uP), it is possible to run multiple threads, but not in parallel. Although conceptually the threads are often said to run at the same time, they are actually running consecutively in time slices allocated and controlled by the operating system. These slices are interleaved with each other. So, the execution steps of thread 1 do not actually happen at the same time as the execution steps of thread 2. These behaviors generally extend to as many threads as you create, i.e. packets of execution chains all working within the same process and sharing time slices doled out by the operating system.
So, in your system call example, it really depends on what the system call is as to whether or not it would finish before allowing the execution steps of the other thread to proceed. Several factors play into what will happen: Is it a blocking call? Does one thread have higher priority than the other? What is the duration of the time slices?
Links relevant to threading in C:
SO Example
POSIX
ANSI C
Parallel Processing:
When multi-threaded program execution occurs on a multiple core system (multiple uP, or multiple multi-core uP) threads can run concurrently, or in parallel as different threads may be split off to separate cores to share the workload. This is one example of parallel processing.
Again, conceptually, parallel processing and threading are thought to be similar in that they allow things to be done simultaneously. But that is concept only, they are really very different, in both target application and technique. Where threading is useful as a way to identify and split out an entire task within a process (eg, a TCP/IP server may launch a worker thread when a new connection is requested, then connects, and maintains that connection as long as it remains), parallel processing is typically used to send smaller components of the same task (eg. a complex set of computations that can be performed independently in separate locations) off to separate resources (cores, or uPs) to be completed simultaneously. This is where multiple core processors really make a difference. But parallel processing also takes advantage of multiple systems, popular in areas such as genetics and MMORPG gaming.
Links relevant to parallel processing in C:
OpenMP
More OpenMP (examples)
Gribble Labs - Introduction to OpenMP
CUDA Toolkit from NVIDIA
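For a taste of what that looks like, here is a minimal OpenMP sketch in C that splits the iterations of one computation across the available cores; the loop bound is arbitrary (compile with a flag such as -fopenmp):

```c
#include <omp.h>
#include <stdio.h>

#define N 100000000L /* arbitrary problem size */

int main(void) {
    double sum = 0.0;

    /* OpenMP divides these iterations among the cores; the reduction
       clause gives each thread a private partial sum and combines them. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += 1.0 / (double)(i + 1);

    printf("up to %d threads, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}
```

This is the "smaller components of the same task sent to separate cores" idea in its simplest form: the programmer marks the loop, and the runtime spreads the work.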
Additional reading on the general topic of threading and architecture:
This summary of threading and architecture barely scratches the surface. There are many parts to the topic. Books addressing them would fill a small library, and there are thousands of links. Not surprisingly, within the broader topic some concepts do not seem to follow reason. For example, it is not a given that simply having more cores will result in faster multi-threaded programs.
Yes, they would, at least potentially, run "at the same time"; that's exactly what threads are for. Of course there are many details, for example:
If both threads run system calls that e.g. write to the same file descriptor they might temporarily block each other.
If thread synchronisation primitives such as mutexes are used, then parallel execution will be blocked at those points (see the sketch after this list).
You need a processor with at least two cores in order to have two threads truly run at the same time.
It's a very large and very complex subject.
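To illustrate the mutex point above, here is a minimal pthreads sketch: two threads increment a shared counter, and the mutex forces them to take turns through the critical section. The iteration counts are arbitrary (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0; /* shared state that both threads update */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* only one thread at a time past this point */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* 2000000: the mutex prevents lost updates */
    return 0;
}
```

The two threads may run on different cores, but inside the locked region their execution is serialized, which is exactly the blocking the list item describes.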
If your computer has only a single CPU, you might wonder how it can execute more than one thread at the same time.
In single-processor systems, only a single thread of execution occurs at a given instant, because single-processor systems support logical concurrency, not physical concurrency.
On multiprocessor systems, several threads do, in fact, execute at the same time, and physical concurrency is achieved.
The important feature of multithreaded programs is that they support logical concurrency, not whether physical concurrency is actually achieved.
The basics are simple, but the details get complex real quickly.
You can break a program into multiple threads (if it makes sense to do so), and each thread will run "at its own pace", such that if one must wait for, eg, some file I/O that doesn't slow down the others.
On a single processor multiple threads are accommodated by "time slicing" the processor somehow -- either on a simple clock basis or by letting one thread run until it must wait (eg, for I/O) and then "switching" to the next thread. There is a whole art/science to doing this for maximum efficiency.
On a multi-processor (such as most modern PCs which have from 2 to 8 "cores") each thread is assigned to a separate processor, and if there are not enough processors then they are shared as in the single processor case.
The whole area of assuring "atomicity" of operations by a single thread, and assuring that threads don't somehow interfere with each other, is incredibly complex. In general, there is a "kernel" or "nucleus" category of system call that will not be interrupted by another thread, but that's only a small subset of all system calls, and you have to consult the OS documentation to know which category a particular system call falls into.
They will run at the same time, for one thread is independent of another, even if you perform a system call.
It's pretty easy to test, though: create one thread that prints something to the console output, and perform a system call on another thread that you know will take some reasonable amount of time. You will notice that the messages continue to be printed by the other thread.
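That test might look like the following pthreads sketch, where the main thread blocks in a sleep() system call while a second thread keeps printing; the counts and delays are arbitrary (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Prints a message every 200 ms, ten times. */
static void *printer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        printf("printer thread still running: %d\n", i);
        usleep(200000);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, printer, NULL);
    sleep(2); /* a blocking system call on the main thread */
    printf("main thread: system call finished\n");
    pthread_join(t, NULL);
    return 0;
}
```

The printer's messages keep appearing during the two seconds the main thread spends inside sleep(), showing that the blocked system call does not stop the other thread.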
Yes, a program can run two threads at the same time; this is called multithreading.
would they both be able to run at the same time or would my second function wait until the system call finishes?
They both are able to run at the same time.
If you want, you can make thread B wait until thread A completes, or the reverse.
Two threads can run truly simultaneously only on a multi-core processor system; with a single-core processor, two threads cannot run in parallel. Only one thread runs at a time, and when it finishes its job or its time slice, the next thread in the queue gets processor time.

Why are message queues used instead of multithreading?

I have the following query which I need someone to please help me with. I'm new to message queues and have recently started looking at the Kestrel message queue.
As I understand it, both threads and message queues are used for concurrency in applications, so what is the advantage of using message queues over multithreading?
Please help
Thank you.
Message queues allow you to communicate outside your program.
This allows you to decouple your producer from your consumer. You can spread the work to be done over several processes and machines, and you can manage/upgrade/move around those programs independently of each other.
A message queue also typically consists of one or more brokers that take care of distributing your messages and making sure they are not lost in case something bad happens (e.g., your program crashes, or you upgrade one of your programs).
Message queues might also be used internally in a program, in which case it's often just a facility to exchange/queue data from a producer thread to a consumer thread to do async processing.
Actually, one facilitates the other. A message queue is a nice and simple multithreading pattern: when you have a control thread (usually, but not necessarily, an application's main thread) and a pool of (usually looping) worker threads, message queues are the easiest way to facilitate control over the thread pool.
For example, to start processing a relatively heavy task, you submit a corresponding message into the queue. If you have more messages than you can currently process, your queue grows; if fewer, it shrinks. When your message queue is empty, your threads sleep (usually by blocking on the queue's mutex or condition variable).
So, there is nothing to compare: message queues are part of multithreading and hence they're used in some more complicated cases of multithreading.
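Here is a minimal sketch of that pattern in C with pthreads: a control thread posts messages into an internal queue, and worker threads sleep on a condition variable until work arrives. Sizes are arbitrary and queue-overflow checks are omitted for brevity (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

#define QSIZE 64
#define NWORKERS 3
#define NMSGS 10

static int queue[QSIZE];
static int head = 0, tail = 0, count = 0, done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Called by the control thread to submit one message. */
static void post(int msg) {
    pthread_mutex_lock(&lock);
    queue[tail] = msg;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty); /* wake one sleeping worker */
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&nonempty, &lock); /* sleep until a message arrives */
        if (count == 0 && done) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int msg = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&lock);
        printf("worker %ld handling message %d\n", id, msg);
    }
}

int main(void) {
    pthread_t w[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&w[i], NULL, worker, (void *)i);
    for (int m = 0; m < NMSGS; m++)
        post(m); /* the control thread submits work */
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&nonempty); /* wake everyone so they can exit cleanly */
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(w[i], NULL);
    return 0;
}
```

This is also the shape of the fixed-size thread pool discussed in the next answer: a handful of long-lived threads draining a cheap queue of tasks.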
Creating threads is expensive, and every thread that is simultaneously "live" will add a certain amount of overhead, even if the thread is blocked waiting for something to happen. If program Foo has 1,000 tasks to be performed and doesn't really care in what order they get done, it might be possible to create 1,000 threads and have each thread perform one task, but such an approach would not be terribly efficient. A second alternative would be to have one thread perform all 1,000 tasks in sequence. If there were other processes in the system that could employ any CPU time that Foo didn't use, this latter approach would be efficient (and quite possibly optimal), but if there isn't enough work to keep all CPUs busy, CPUs would waste some time sitting idle. In most cases, leaving a CPU idle for a second is just as expensive as spending a second of CPU time (the main exception is when one is trying to minimize electrical energy consumption, since an idling CPU may consume far less power than a busy one).
In most cases, the best strategy is a compromise between those two approaches: have some number of threads (say 10) that start performing the first ten tasks. Each time a thread finishes a task, have it start work on another until all tasks have been completed. Using this approach, the overhead related to threading will be cut by 99%, and the only extra cost will be the queue of tasks that haven't yet been started. Since a queue entry is apt to be much cheaper than a thread (likely less than 1% of the cost, and perhaps less than 0.01%), this can represent a really huge savings.
The one major problem with using a job queue rather than threading is that if some jobs cannot complete until jobs later in the list have run, it's possible for the system to become deadlocked since the later tasks won't run until the earlier tasks have completed. If each task had been given a separate thread, that problem would not occur since the threads associated with the later tasks would eventually manage to complete and thus let the earlier ones proceed. Indeed, the more earlier tasks were blocked, the more CPU time would be available to run the later ones.
It makes more sense to contrast message queues and other concurrency primitives, such as semaphores, mutex, condition variables, etc. They can all be used in the presence of threads, though message-passing is also commonly used in non-threaded contexts, such as inter-process communication, whereas the others tend to be confined to inter-thread communication and synchronisation.
The short answer is that message-passing is easier on the brain. In detail...
Message-passing works by sending stuff from one agent to another. There is generally no need to coordinate access to the data. Once an agent receives a message it can usually assume that it has unqualified access to that data.
The "threading" style works by giving all agent open-slather access to shared data but requiring them to carefully coordinate their access via primitives. If one agent misbehaves, the process becomes corrupted and all hell breaks loose. Message passing tends to confine problems to the misbehaving agent and its cohort, and since agents are generally self-contained and often programmed in a sequential or state-machine style, they tend not to misbehave as often — or as mysteriously — as conventional threaded code.
