I am a beginner in multithreaded programming.
I've been using the concurrent classes in a multi-threaded environment, but recently I became curious about BlockingQueue.
I thought Concurrent classes like ConcurrentHashMap use locking.
Recently I happened to need a queue, so I looked into thread-safe queues. That is how I found out about BlockingQueue and ConcurrentLinkedQueue, and I studied these two queues.
BlockingQueue is made thread-safe by using locking. This is the typical approach to thread safety that I had in mind.
ConcurrentLinkedQueue is made thread-safe by using an algorithm called CAS.
However, I do not know whether the target of this CAS is the queue itself or an element belonging to the queue.
Even if multiple threads poll elements from a ConcurrentLinkedQueue at the same time, are they polling different elements? Can't there be a case where two threads poll the same element at the same time?
If so, the lock-free ConcurrentLinkedQueue looks far better than BlockingQueue... Why is BlockingQueue not deprecated and still around (apart from its take() method)?
Are there any articles you would recommend I study or reference? Thank you!
A blocking operation cannot be lock-free (and vice versa). A lock-free operation provides a progress guarantee. Here is the definition from Wikipedia:
An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress).
A thread that is blocked cannot make progress until it is "unblocked". But that usually means that some other thread must have made progress in order to unblock the blocked thread. So the blocked thread depends on some other thread's progress. In a lock-free operation this is not the case: a thread can only be prevented from making progress if some other thread is instead making progress and thereby interferes with the first thread.
That's why a lock-free queue cannot have a blocking pop operation that blocks in case the queue is empty, waiting for some other thread to push an item. Instead they usually have a tryPop operation that indicates whether it was successful or not.
A (lock-free) Compare-And-Swap (CAS) operation can usually only be performed on a single pointer. It is one of the fundamental operations required by lock-free algorithms. In case of the ConcurrentLinkedQueue (which is an implementation of the queue proposed by Michael and Scott in Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms), the head and tail pointers are updated using CAS operations.
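To make the CAS pattern concrete, here is a minimal C++ sketch (illustrative only: the Node type, head variable and push_front function are made up, and this is the generic single-pointer CAS retry loop rather than the Michael and Scott algorithm or Java's implementation). A shared pointer is read, a new value is prepared, and a compare-and-swap publishes it; if another thread changed the pointer in the meantime, the CAS fails and the loop retries.
#include <atomic>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

// Lock-free insertion at the head of a shared singly linked list.
// A failed compare_exchange_weak means another thread just updated "head"
// (and therefore made progress); "expected" is refreshed and we retry.
void push_front(int value) {
    Node* node = new Node{value, nullptr};
    Node* expected = head.load(std::memory_order_relaxed);
    do {
        node->next = expected;
    } while (!head.compare_exchange_weak(expected, node,
                                         std::memory_order_release,
                                         std::memory_order_relaxed));
}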
The topic of concurrent programming is quite vast and complex. If you are really interested, I would suggest starting with the excellent book The Art of Multiprocessor Programming.
Related
I am wondering what the advantages of lock-free programming over spin locks are. I think that when we do lock-free programming using the CAS mechanism in a thread (call it A), if another thread changes the value used in the CAS, thread A still needs to loop again. And I think that is just like using a spin lock!
I am confused about this. Although I know that CAS and spin locks are suitable when lock contention is not fierce, can someone explain in which scenarios lock-free should be used and in which a spin lock should be used?
Lock-freedom provides what is called a progress guarantee. You are right that in your example thread A has to perform a retry (i.e., loop again), but only if some other thread changed the value, which implies that that other thread was able to make progress.
In contrast, a thread (let's call it X) that holds a spin-lock prevents all other threads from making progress until the lock is released. So if thread X is preempted, execution of all threads waiting for the lock is effectively stalled until X can resume execution and finally release the lock. If X were to be stalled indefinitely, then all other threads would also be blocked indefinitely.
Such a situation is not possible with lock-free algorithms, since it is guaranteed that at any time at least one thread can make progress.
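As a rough sketch of the difference (the names locked, shared_counter and lockfree_counter are made up for illustration, and this is not a production implementation): with the spin lock, a preempted holder stalls every other thread; with the CAS loop, a failed attempt means some other thread just succeeded.
#include <atomic>

std::atomic<bool> locked{false};   // spin-lock state
int shared_counter = 0;            // protected by the spin lock

void increment_with_spinlock() {
    // If the thread that set "locked" is preempted here, all other callers
    // spin without making any progress until it runs again.
    while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ }
    ++shared_counter;                               // critical section
    locked.store(false, std::memory_order_release);
}

std::atomic<int> lockfree_counter{0};

void increment_lock_free() {
    int expected = lockfree_counter.load(std::memory_order_relaxed);
    // A failed compare_exchange means another thread's increment succeeded,
    // so the system as a whole makes progress even if this thread is preempted.
    while (!lockfree_counter.compare_exchange_weak(expected, expected + 1,
                                                   std::memory_order_relaxed)) {
        // "expected" was refreshed by the failed CAS; just retry.
    }
}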
Which should be used depends on the situation. Lock-free algorithms are inherently difficult to design, especially for more complex data structures like trees. And even if you have a lock-free algorithm, it is almost always slower than a serial one, so a serial version protected by a lock might perform better. Then again, if the data structure is heavily contended, a lock-free version will scale better than one protected by a lock. However, if your workload is mostly read-only, a read-write-lock will also provide good scalability. Unfortunately, there is no general rule here...
If you want to learn more about lock-freedom (and more) I recommend the book The Art of Multiprocessor Programming.
If you prefer free alternatives I recommend Is Parallel Programming Hard, And, If So, What Can You Do About It? by Paul McKenney or Practical lock-freedom by Keir Fraser.
Anecdotally, I've found that a lot of programmers mistakenly believe that "lock-free" simply means "concurrent programming without mutexes". Usually, there's also a correlated misunderstanding that the purpose of writing lock-free code is for better concurrent performance. Of course, the correct definition of lock-free is actually about progress guarantees. A lock-free algorithm guarantees that at least one thread is able to make forward progress regardless of what any other threads are doing.
This means a lock-free algorithm can never have code where one thread is depending on another thread in order to proceed. E.g., lock-free code can not have a situation where Thread A sets a flag, and then Thread B keeps looping while waiting for Thread A to unset the flag. Code like that is basically implementing a lock (or what I would call a mutex in disguise).
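For example, here is a contrived sketch of that flag pattern (the flag variable and the two functions are made up): no mutex object appears anywhere, yet Thread B's progress depends entirely on Thread A, so it is blocking.
#include <atomic>

std::atomic<bool> flag{false};

void thread_a() {
    flag.store(true, std::memory_order_release);
    // ... work done while the flag is "held" ...
    flag.store(false, std::memory_order_release);
}

void thread_b() {
    // B cannot proceed until A unsets the flag: a mutex in disguise.
    while (flag.load(std::memory_order_acquire)) { /* spin */ }
    // ... proceed ...
}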
However, other cases are more subtle and there are some cases where I honestly can't really tell if an algorithm qualifies as lock-free or not, because the notion of "making progress" sometimes appears subjective to me.
One such case is in the (well-regarded, afaik) concurrency library, liblfds. I was studying the implementation of a multi-producer/multi-consumer bounded queue in liblfds - the implementation is very straightforward, but I cannot really tell if it should qualify as lock-free.
The relevant algorithm is in lfds711_queue_bmm_enqueue.c. Liblfds uses custom atomics and memory barriers, but the algorithm is simple enough for me to describe in a paragraph or so.
The queue itself is a bounded contiguous array (ringbuffer). There is a shared read_index and write_index. Each slot in the queue contains a field for user-data, and a sequence_number value, which is basically like an epoch counter. (This avoids ABA issues).
The PUSH algorithm is as follows:
Atomically LOAD the write_index
Attempt to reserve a slot in the queue at write_index % queue_size using a CompareAndSwap loop that attempts to set write_index to write_index + 1.
If the CompareAndSwap is successful, copy the user data into the reserved slot.
Finally, update the sequence_number on the slot by making it equal to write_index + 1.
The actual source code uses custom atomics and memory barriers, so for clarity I've briefly translated the algorithm into (untested) standard C++ atomics, as follows:
bool mcmp_queue::enqueue(void* data)
{
    int write_index = m_write_index.load(std::memory_order_relaxed);

    for (;;)
    {
        slot& s = m_slots[write_index % m_num_slots];
        int sequence_number = s.sequence_number.load(std::memory_order_acquire);
        int difference = sequence_number - write_index;

        if (difference == 0)
        {
            // The slot is free; try to reserve it by advancing the shared write index.
            if (m_write_index.compare_exchange_weak(
                    write_index,
                    write_index + 1,
                    std::memory_order_acq_rel))
            {
                break; // write_index now holds the index of the reserved slot
            }
            // On failure, compare_exchange_weak refreshed write_index; just retry.
        }
        else if (difference < 0)
        {
            return false; // queue is full
        }
        else
        {
            // Another producer got ahead of us; re-read the shared index and retry.
            write_index = m_write_index.load(std::memory_order_relaxed);
        }
    }

    // Copy the user data into the reserved slot and publish it by
    // updating the slot's sequence number.
    slot& s = m_slots[write_index % m_num_slots];
    s.user_data = data;
    s.sequence_number.store(write_index + 1, std::memory_order_release);
    return true;
}
Now, a thread that wants to POP an element from the slot at read_index will not be able to do so until it observes that the slot's sequence_number is equal to read_index + 1.
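For completeness, here is my equally untested sketch of what the matching POP might look like, mirroring my enqueue translation above (the member names like m_read_index are my own guesses rather than the liblfds source, and it assumes slot i starts out with sequence_number == i, consistent with the enqueue's difference == 0 test). The pop can only claim a slot once the sequence number shows the push has been fully published; otherwise it reports the queue as empty.
bool mcmp_queue::dequeue(void*& data)
{
    int read_index = m_read_index.load(std::memory_order_relaxed);

    for (;;)
    {
        slot& s = m_slots[read_index % m_num_slots];
        int sequence_number = s.sequence_number.load(std::memory_order_acquire);
        int difference = sequence_number - (read_index + 1);

        if (difference == 0)
        {
            // The slot has been published; try to claim it by advancing the read index.
            if (m_read_index.compare_exchange_weak(
                    read_index,
                    read_index + 1,
                    std::memory_order_acq_rel))
            {
                break;
            }
        }
        else if (difference < 0)
        {
            return false; // queue appears empty (or the push is still in progress)
        }
        else
        {
            // Another consumer already took this slot; re-read the shared index.
            read_index = m_read_index.load(std::memory_order_relaxed);
        }
    }

    slot& s = m_slots[read_index % m_num_slots];
    data = s.user_data;
    // Free the slot for the producer that will wrap around to it one lap later.
    s.sequence_number.store(read_index + m_num_slots, std::memory_order_release);
    return true;
}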
Okay, so there are no mutexes here, and the algorithm likely performs well (it's only a single CAS for PUSH and POP), but is this lock-free? The reason it's unclear to me is because the definition of "making progress" seems murky when there is the possibility that a PUSH or POP can always just fail if the queue is observed to be full or empty.
But what's questionable to me is that the PUSH algorithm essentially reserves a slot, meaning that the slot can never be POP'd until the push thread gets around to updating the sequence number. This means that a POP thread that wants to pop a value depends on the PUSH thread having completed the operation. Otherwise, the POP thread will always return false because it thinks the queue is EMPTY. It seems debatable to me whether this actually falls within the definition of "making progress".
Generally, truly lock-free algorithms involve a phase where a pre-empted thread actually tries to ASSIST the other thread in completing an operation. So, in order to be truly lock-free, I would think that a POP thread that observes an in-progress PUSH would actually need to try and complete the PUSH, and then only after that, perform the original POP operation. If the POP thread simply returns that the queue is EMPTY when a PUSH is in progress, the POP thread is basically blocked until the PUSH thread completes the operation. If the PUSH thread dies, or goes to sleep for 1,000 years, or otherwise gets scheduled into oblivion, the POP thread can do nothing except continuously report that the queue is EMPTY.
So does this fit the definition of lock-free? From one perspective, you can argue that the POP thread can always make progress, because it can always report that the queue is EMPTY (which is at least some form of progress, I guess). But to me, this isn't really making progress, since the only reason the queue is observed as empty is because we are blocked by a concurrent PUSH operation.
So, my question is: is this algorithm truly lock-free? Or is the index reservation system basically a mutex in disguise?
This queue data structure is not strictly lock-free by what I consider the most reasonable definition. That definition is something like:
A structure is lock-free if and only if any thread can be indefinitely suspended at any point while still leaving the structure usable by the remaining threads.
Of course this implies a suitable definition of usable, but for most structures this is fairly simple: the structure should continue to obey its contracts and allow elements to be inserted and removed as expected.
In this case a thread that has succeeded in incrementing m_write_index, but hasn't yet written s.sequence_number, leaves the container in what will soon be an unusable state. If such a thread is killed, the container will eventually report both "full" and "empty" to push and pop respectively, violating the contract of a fixed-size queue.
There is a hidden mutex here (the combination of m_write_index and the associated s.sequence_number) - but it basically works like a per-element mutex. So the failure only becomes apparent to writers once you've looped around and a new writer tries to get the mutex, but in fact all subsequent writers have effectively failed to insert their element into the queue since no reader will ever see it.
Now this doesn't mean this is a bad implementation of a concurrent queue. For some uses it may behave mostly as if it was lock free. For example, this structure may have most of the useful performance properties of a truly lock-free structure, but at the same time it lacks some of the useful correctness properties. Basically the term lock-free usually implies a whole bunch of properties, only a subset of which will usually be important for any particular use. Let's look at them one by one and see how this structure does. We'll broadly categorize them into performance and functional categories.
Performance
Uncontended Performance
The uncontended or "best case" performance is important for many structures. While you need a concurrent structure for correctness, you'll usually still try to design your application so that contention is kept to a minimum, so the uncontended cost is often important. Some lock-free structures help here, by reducing the number of expensive atomic operations in the uncontended fast-path, or avoiding a syscall.
This queue implementation does a reasonable job here: there is only a single "definitely expensive" operation, the compare_exchange_weak, plus a couple of possibly expensive operations (the memory_order_acquire load and memory_order_release store) [1], and little other overhead.
This compares to something like std::mutex which would imply something like one atomic operation for lock and another for unlock, and in practice on Linux the pthread calls have non-negligible overhead as well.
So I expect this queue to perform reasonably well in the uncontended fast-path.
Contended Performance
One advantage of lock-free structures is that they often allow better scaling when a structure is heavily contended. This isn't necessarily an inherent advantage: some lock-based structures with multiple locks or read-write locks may exhibit scaling that matches or exceeds some lock-free approaches, but it is usually the case that lock-free structures exhibit better scaling than a simple one-lock-to-rule-them-all alternative.
This queue performs reasonably in this respect. The m_write_index variable is atomically updated by all producers and will be a point of contention, but the behavior should be reasonable as long as the underlying hardware CAS implementation is reasonable.
Note that a queue is generally a fairly poor concurrent structure since inserts and removals all happen at the same places (the head and the tail), so contention is inherent in the definition of the structure. Compare this to a concurrent map, where different elements have no particular ordered relationship: such a structure can offer efficient contention-free simultaneous mutation if different elements are being accessed.
Context-switch Immunity
One performance advantage of lock-free structures that is related to the core definition above (and also to the functional guarantees) is that a context switch of a thread which is mutating the structure doesn't delay all the other mutators. In a heavily loaded system (especially when runnable threads >> available cores), a thread may be switched out for hundreds of milliseconds or seconds. During this time, any concurrent mutators will block and incur additional scheduling costs (or they will spin, which may also produce poor behavior). Even though such "unlucky scheduling" may be rare, when it does occur the entire system may incur a serious latency spike.
Lock-free structures avoid this since there is no "critical region" where a thread can be context switched out and subsequently block forward progress by other threads.
This structure offers partial protection in this area — the specifics of which depend on the queue size and application behavior. Even if a thread is switched out in the critical region between the m_write_index update and the sequence number write, other threads can continue to push elements to the queue as long as they don't wrap all the way around to the in-progress element from the stalled thread. Threads can also pop elements, but only up to the in-progress element.
While the push behavior may not be a problem for high-capacity queues, the pop behavior can be a problem: if the queue throughput is high relative to the average time a thread stays switched out and to the queue's average fullness, the queue will quickly appear empty to all consumer threads, even if there are many elements added beyond the in-progress element. This isn't affected by the queue capacity, but simply by the application behavior. It means that the consumer side may completely stall when this occurs. In this respect, the queue doesn't look very lock-free at all!
Functional Aspects
Async Thread Termination
One advantage of lock-free structures is that they are safe for use by threads that may be asynchronously cancelled or may otherwise terminate exceptionally in the critical region. Cancelling a thread at any point leaves the structure in a consistent state.
This is not the case for this queue, as described above.
Queue Access from Interrupt or Signal
A related advantage is that lock-free structures can usually be examined or mutated from an interrupt or signal. This is useful in many cases where an interrupt or signal shares a structure with regular process threads.
This queue mostly supports this use case. Even if the signal or interrupt occurs when another thread is in the critical region, the asynchronous code can still push an element onto the queue (which will only be seen later by consuming threads) and can still pop an element off of the queue.
The behavior isn't as complete as a true lock-free structure: imagine a signal handler with a way to tell the remaining application threads (other than the interrupted one) to quiesce and which then drains all the remaining elements of the queue. With a true lock-free structure, this would allow the signal handler to fully drain all the elements, but this queue might fail to do that in the case a thread was interrupted or switched out in the critical region.
[1] In particular, on x86, this will only use an atomic operation for the CAS as the memory model is strong enough to avoid the need for atomics or fencing for the other operations. Recent ARM can do acquire and release fairly efficiently as well.
I am the author of liblfds.
The OP is correct in his description of this queue.
It is the single data structure in the library which is not lock-free.
This is described in the documentation for the queue:
http://www.liblfds.org/mediawiki/index.php?title=r7.1.1:Queue_%28bounded,_many_producer,_many_consumer%29#Lock-free_Specific_Behaviour
"It must be understood though that this is not actually a lock-free data structure."
This queue is an implementation of an idea from Dmitry Vyukov (1024cores.net) and I only realised it was not lock-free while I was making the test code work.
By then it was working, so I included it.
I have given some thought to removing it, since it is not lock-free.
Most of the time people say lock-free when they really mean lockless. Lockless means a data structure or algorithm that does not use locks, but with no guarantee of forward progress. Also check this question. So the queue in liblfds is lockless, but, as BeeOnRope mentioned, it is not lock-free.
A thread that calls POP before the next update in sequence is complete is NOT "effectively blocked" if the POP call returns FALSE immediately. The thread can go off and do something else. I'd say that this queue qualifies as lock-free.
However, I wouldn't say that it qualifies as a "queue" -- at least not the kind of queue that you could publish as a queue in a library or something -- because it doesn't guarantee a lot of the behaviors that you can normally expect from a queue. In particular, you can PUSH an element and then try and FAIL to POP it, because some other thread is busy pushing an earlier item.
Even so, this queue could still be useful in some lock-free solutions for various problems.
For many applications, however, I would worry about the possibility for consumer threads to be starved while a producer thread is pre-empted. Maybe liblfds does something about that?
"Lock-free" is a property of the algorithm, which implements some functionality. The property doesn't correlate with a way, how given functionality is used by a program.
Talking about the mcmp_queue::enqueue function, which returns FALSE if the underlying queue is full, its implementation (given in the question) is lock-free.
However, implementing mcmp_queue::dequeue in a lock-free manner would be difficult. E.g., this pattern is obviously not lock-free, as it spins on a variable changed by another thread:
while (s.sequence_number.load(std::memory_order_acquire) == read_index)
    ; // spin until the pushing thread publishes the slot
data = s.user_data;
...
return data;
I did formal verification on this same code using Spin a couple years ago for a course in concurrency testing and it is definitely not lock-free.
Just because there is no explicit "locking", doesn't mean it's lock-free. When it comes to reasoning about progress conditions, think of it from an individual thread's perspective:
Blocking/locking: if another thread gets descheduled and this can block my progress, then it is blocking.
Lock-free/non-blocking: if I am able to eventually make progress in the absence of contention from other threads, then it is at most lock-free.
If no other thread can block my progress indefinitely, then it is wait-free.
I am a little bit confused about the two concepts.
Definition of lock-free on Wikipedia:
A non-blocking algorithm is lock-free if there is guaranteed system-wide progress
Definition of non-blocking:
an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread
I thought a spinlock was lock-free, or at least non-blocking. But now I'm not sure, because by definition "a spinlock is not lock-free" also makes sense to me: if the thread holding the spinlock gets suspended, it will cause the suspension of the other threads spinning outside. So, by definition, a spinlock is not even non-blocking, let alone lock-free.
I'm so confused now. Can anyone explain it clearly?
Anything that can be called a lock (exclude other threads from a critical section until the current thread unlocks) is by definition not lock-free. And yes, spinlocks are a kind of lock.
If a thread sleeps while holding the lock, no other thread can acquire it and make forward progress, and spinlocks can't prevent this. The OS can de-schedule a thread whenever it wants, even if it's in the middle of a critical section.
Note that "lock-free" isn't the same thing as "wait-free", so a lock-free algorithm can still have stuff like cmpxchg retry loops, but as long as one thread succeeds every time, it's lock free.
A wait-free algorithm can't even have that, and at most has to wait for cache misses / hardware arbitration of contended atomic operations. Wikipedia's non-blocking algorithm article defines wait-free and lock-free in more detail.
I think you're mixing up two definitions of "blocking".
I think you're talking about a spin_trylock function that tries to acquire a spinlock, and returns with an error if it fails instead of spinning. So this is non-blocking in the same sense as non-blocking I/O: fail with an error instead of waiting for resource availability.
That doesn't mean any thread in the system is making forward progress on the thing protected by the spinlock. It just means your thread can go and do something else before trying again, instead of needing to use separate threads to do something in parallel with waiting to acquire a lock.
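A minimal sketch of that kind of try-lock (the spinlock class and its try_lock/unlock names are illustrative, not any particular library's API):
#include <atomic>

class spinlock {
    std::atomic<bool> locked_{false};
public:
    // Returns immediately: true if we acquired the lock, false if it is held.
    // "Non-blocking" only in the try-and-fail sense; it says nothing about
    // forward progress on the data the lock protects.
    bool try_lock() { return !locked_.exchange(true, std::memory_order_acquire); }
    void unlock()   { locked_.store(false, std::memory_order_release); }
};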
Spinning in an infinite loop counts as blocking / not-making-progress. For this definition, there's no difference between a pure spinlock and one that (with OS assistance) sleeps until another thread unlocks.
The definition of lock-free isn't concerned with wasting CPU time / power to make room for independent work to happen.
Somewhat related: acquiring an uncontended spinlock doesn't require a system call, which means it's a "light-weight" lock. Some lock implementations always use a (relatively slow) system call even in the uncontended case. See Jeff Preshing's Always Use a Lightweight Mutex article. Also read Jeff's other posts to learn more about lock-free programming, because they're excellent. So good in fact that the [lock-free] tag wiki links to them.
Do you think threadpooling design pattern is the way to go for the multi-core future?
A thread-pooling library, for instance, if used extensively, makes/forces the application writer
(1) to break the problem into separate parallel jobs, hence promoting (enforcing :) ) parallelism
(2) with the abstraction from all the low-level OS calls, synchronization, etc., it makes the programmer's life easier (especially for C programmers :) )
I have a strong belief it's the best way (or one of the "best" ways :) ) for the multi-core future...
So, my question is: am I right in thinking so, or am I under some delusion? :)
Regards,
Microkernel
Thread pooling is a technique that involves a queue and a number of threads taking jobs from the queue and processing them. This is in contrast to the technique of just starting a new thread whenever a new task arrives.
The benefits are that the maximum number of threads is limited, to avoid too much threading, and that there is less overhead involved with each new task (the thread is already running and takes the task; no thread starting needed).
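To make the pattern concrete, here is a minimal sketch of such a pool in C++ (the ThreadPool class and its submit method are made up for illustration, not a production implementation): a shared job queue plus a fixed set of worker threads that repeatedly take jobs and run them.
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t num_threads) {
        for (std::size_t i = 0; i < num_threads; ++i) {
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> lock(mutex_);
                        cv_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
                        if (stop_ && jobs_.empty()) return; // drained, shut down
                        job = std::move(jobs_.front());
                        jobs_.pop();
                    }
                    job(); // run the task outside the lock
                }
            });
        }
    }

    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one(); // wake one idle worker
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

private:
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stop_ = false;
};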
Whether this is a good design highly depends on your problem. If you have many short jobs that come to your program at a very fast rate, then this is a good idea because the lower overhead is really a benefit. If you have extremely large numbers of concurrent tasks this is a good idea to keep your scheduler from having to do too much work.
There are many areas where thread pooling is just not helpful. So you cannot generalize. Sometimes multi threading at all is not possible. Or not even desired, as multi threading adds an unpredictable element (race conditions) to your code which is extremely hard to debug.
A thread pooling library can hardly "force" you to use it. You still need to think stuff through and if you just start one thread... Won't help.
As with almost every topic in informatics, the answer is: it depends.
The pooling system is fine with embarrassingly parallel problems: http://en.wikipedia.org/wiki/Embarrassingly_parallel
For other tasks where you need more synchronization between threads, it's not that good.
For the Windows NT engine thread pools are usually much less efficient than I/O Completion Ports. These are covered extensively in numerous questions and answers here. IOCPs enable event-driven processing in that multiple threads can wait on the IOCP until an event occurs due to an IOC (read or write) on a socket or handle which is then queued to the IOCP. The IOCP in turn pairs a waiting thread with the id of the event and releases the thread for processing. After the thread has processed the event and initiated a new I/O it returns to the IOCP to wait for the next event (which may or may not be the completion of the I/O it just initiated).
IOCs may also be artificially signalled by explicit posting from a non-event.
Using IOCPs is not polling. The optimal IOCP implementation will have as many threads waiting on the IOCP as there are cores in the system. The threads may all execute the same physical code if that is deemed efficient. Since a thread processes from an IOC up until it issues an I/O it does nothing which forces it to wait for other resources except perhaps to compete for access to thread-safe areas. It is a natural choice to move away from the "one handle per thread" paradigm. IOCP-controlled threads are therefore as efficient as the programmer is able to construct them.
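As a hedged sketch of that worker loop (the worker_thread function and its shutdown convention are made up and error handling is omitted; GetQueuedCompletionStatus is the real Win32 call the loop blocks on):
#include <windows.h>

DWORD WINAPI worker_thread(LPVOID completion_port)
{
    HANDLE port = static_cast<HANDLE>(completion_port);
    for (;;)
    {
        DWORD bytes_transferred = 0;
        ULONG_PTR completion_key = 0;
        OVERLAPPED* overlapped = nullptr;

        // Block until an I/O completion (or an explicitly posted packet) is
        // queued to the port; the IOCP then pairs this thread with the event.
        BOOL ok = GetQueuedCompletionStatus(port, &bytes_transferred,
                                            &completion_key, &overlapped,
                                            INFINITE);
        if (!ok && overlapped == nullptr)
            break; // the port itself failed or was closed; shut the worker down

        // ... process the event identified by completion_key / overlapped,
        //     typically issue the next I/O, then loop back to wait ...
    }
    return 0;
}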
I like the answer by @yaankee a lot, except I would argue that a thread pool is almost always the right way to go. The reason: a thread pool can degenerate into a simple static work-partitioning model for problems like matrix-matrix multiply. OpenMP's guided scheduling is kind of along those lines.
I'm reading up on concurrency. I've gotten a bit in over my head with terms that have confusingly similar definitions. Namely:
Processes
Threads
"Green threads"
Protothreads
Fibers
Coroutines
"Goroutines" in the Go language
My impression is that the distinctions rest on (1) whether truly parallel or multiplexed; (2) whether managed at the CPU, at the OS, or in the program; and (3..5) a few other things I can't identify.
Is there a succinct and unambiguous guide to the differences between these approaches to parallelism?
OK, I'm going to do my best. There are caveats everywhere, but I'll give my understanding of these terms and references to something that approximates the definition I've given.
Process: OS-managed (possibly) truly concurrent, at least in the presence of suitable hardware support. Exist within their own address space.
Thread: OS-managed, within the same address space as the parent and all its other threads. Possibly truly concurrent, and multi-tasking is pre-emptive.
Green Thread: These are user-space projections of the same concept as threads, but are not OS-managed. Probably not truly concurrent, except in the sense that there may be multiple worker threads or processes giving them CPU time concurrently, so probably best to consider this as interleaved or multiplexed.
Protothreads: I couldn't really tease a definition out of these. I think they are interleaved and program-managed, but don't take my word for it. My sense was that they are essentially an application-specific implementation of the same kind of "green threads" model, with appropriate modification for the application domain.
Fibers: OS-managed. Exactly threads, except co-operatively multitasking, and hence not truly concurrent.
Coroutines: Exactly fibers, except not OS-managed.
Goroutines: They claim to be unlike anything else, but they seem to be exactly green threads, as in, process-managed in a single address space and multiplexed onto system threads. Perhaps somebody with more knowledge of Go can cut through the marketing material.
It's also worth noting that there are other understandings in concurrency theory of the term "process", in the process calculus sense. This definition is orthogonal to those above, but I just thought it worth mentioning so that no confusion arises should you see process used in that sense somewhere.
Also, be aware of the difference between parallel and concurrent. It's possible you were using the former in your question where I think you meant the latter.
I mostly agree with Gian's answer, but I have different interpretations of a few concurrency primitives. Note that these terms are often used inconsistently by different authors. These are my favorite definitions (hopefully not too far from the modern consensus).
Process:
OS-managed
Each has its own virtual address space
Can be interrupted (preempted) by the system to allow another process to run
Can run in parallel with other processes on different processors
The memory overhead of processes is high (includes virtual memory tables, open file handles, etc)
The time overhead for creating and context switching between processes is relatively high
Threads:
OS-managed
Each is "contained" within some particular process
All threads in the same process share the same virtual address space
Can be interrupted by the system to allow another thread to run
Can run in parallel with other threads on different processors
The memory and time overheads associated with threads are smaller than processes, but still non-trivial
(For example, typically context switching involves entering the kernel and invoking the system scheduler.)
Cooperative Threads:
May or may not be OS-managed
Each is "contained" within some particular process
In some implementations, each is "contained" within some particular OS thread
Cannot be interrupted by the system to allow a cooperative peer to run
(The containing process/thread can still be interrupted, of course)
Must invoke a special yield primitive to allow peer cooperative threads to run
Generally cannot be run in parallel with cooperative peers
(Though some people think it's possible: http://ocm.dreamhosters.com/.)
There are lots of variations on the cooperative thread theme that go by different names:
Fibers
Green threads
Protothreads
User-level threads (user-level threads can be interruptable/preemptive, but that's a relatively unusual combination)
Some implementations of cooperative threads use techniques like split/segmented stacks or even individually heap-allocating every call frame to reduce the memory overhead associated with pre-allocating a large chunk of memory for the stack
Depending on the implementation, calling a blocking syscall (like reading from the network or sleeping) will either cause a whole group of cooperative threads to block or implicitly cause the calling thread to yield
Coroutines:
Some people use "coroutine" and "cooperative thread" more or less synonymously
I do not prefer this usage
Some coroutine implementations are actually "shallow" cooperative threads; yield can only be invoked by the "coroutine entry procedure"
The shallow (or semi-coroutine) version is easier to implement than threads, because each coroutine does not need a complete stack (just one frame for the entry procedure)
Often coroutine frameworks have yield primitives that require the invoker to explicitly state which coroutine control should transfer to
Generators:
Restricted (shallow) coroutines
yield can only return control back to whichever code invoked the generator
Goroutines:
An odd hybrid of cooperative and OS threads
Cannot be interrupted (like cooperative threads)
Can run in parallel on a language runtime-managed pool of OS threads
Event handlers:
Procedures/methods that are invoked by an event dispatcher in response to some action happening
Very popular for user interface programming
Require little to no language/system support; can be implemented in a library
At most one event handler can be running at a time; the dispatcher must wait for a handler to finish (return) before starting the next
Makes synchronization relatively simple; different handler executions never overlap in time
Implementing complex tasks with event handlers tends to lead to "inverted control flow"/"stack ripping"
Tasks:
Units of work that are doled out by a manager to a pool of workers
The workers can be threads, processes or machines
Of course the kind of worker a task library uses has a significant impact on how one implements the tasks
In this list of inconsistently and confusingly used terminology, "task" takes the crown. Particularly in the embedded systems community, "task" is sometimes used to mean "process", "thread" or "event handler" (usually called an "interrupt service routine"). It is also sometimes used generically/informally to refer to any kind of unit of computation.
One pet peeve that I can't stop myself from airing: I dislike the use of the phrase "true concurrency" for "processor parallelism". It's quite common, but I think it leads to much confusion.
For most applications, I think task-based frameworks are best for parallelization. Most of the popular ones (Intel's TBB, Apple's GCD, Microsoft's TPL & PPL) use threads as workers. I wish there were some good alternatives that used processes, but I'm not aware of any.
If you're interested in concurrency (as opposed to processor parallelism), event handlers are the safest way to go. Cooperative threads are an interesting alternative, but a bit of a wild west. Please do not use threads for concurrency if you care about the reliability and robustness of your software.
Protothreads are just a switch/case implementation that acts like a state machine but makes implementation of the software a whole lot simpler. It is based around the idea of saving an int value before a case label, returning, and then getting back to the point after the case by reading back that variable and using the switch to figure out where to continue. So protothreads are a sequential implementation of a state machine.
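A stripped-down sketch of that switch/case trick (these PT_BEGIN/PT_YIELD/PT_END macros and the blink example are my own simplification, not the actual protothreads API from Contiki, which also handles waiting conditions and exit codes):
// The saved int is the line number of the last yield point.
struct pt { int lc = 0; };

#define PT_BEGIN(p)  switch ((p)->lc) { case 0:
#define PT_YIELD(p)  do { (p)->lc = __LINE__; return false; case __LINE__:; } while (0)
#define PT_END(p)    } (p)->lc = 0; return true

// Runs one step per call and resumes after the last yield; returns true when done.
// (Each PT_YIELD must be on its own source line so the case labels stay unique.)
bool blink(pt* p) {
    PT_BEGIN(p);
    /* step 1 ... */
    PT_YIELD(p);
    /* step 2 ... */
    PT_YIELD(p);
    /* step 3 ... */
    PT_END(p);
}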
Protothreads are great when implementing sequential state machines. Protothreads are not really threads at all, but rather a syntax abstraction that makes it much easier to write a switch/case state machine that has to switch states sequentially (from one to the next etc..).
I have used protothreads to implement asynchronous io: http://martinschroder.se/asynchronous-io-using-protothreads/