Understanding Condition Variables - Isn't there always spin-waiting "somewhere"? - multithreading

I am going to explain my understanding of this OS construct and would appreciate some polite correction.
Here is how I understand thread-safety, clearly and simply.
If there is some setup where
X: some condition
Y: do something
and
if X
    do Y
must be atomic, meaning that if at the exact moment in time we are
doing Y
while
not X
holds, there is some problem.
By my understanding, the lowest-level solution to this is to use shared objects (mutexes). As an example, consider the solution to the "Too Much Milk" problem:
Thread A          | Thread B
-------------------------------------
leave Note A      | leave Note B
while Note B      | if no Note A
    do nothing    |     if no milk
if no milk        |         buy milk
    buy milk      | remove Note B
remove Note A     |
Note A and Note B would be the shared objects, i.e. some piece of memory accessible by both threads A and B.
This can be generalized (beyond milk) to the 2-thread case like
Thread A          | Thread B
-------------------------------------
leave Note A      | leave Note B
while Note B      | if no Note A
    do nothing    |     if X
if X              |         do Y
    do Y          | remove Note B
remove Note A     |
and there is some way to generalize it for the N-thread case (so I'll continue referring to the 2-thread case for simplicity).
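To make the two-thread note protocol concrete, here is a minimal sketch in C11 (my illustration, not part of the original question), with the shared notes modeled as sequentially consistent atomic flags:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool note_a, note_b;             /* the shared "notes" */

void thread_a(void)
{
    atomic_store(&note_a, true);        /* leave Note A */
    while (atomic_load(&note_b))        /* while Note B */
        ;                               /*     do nothing (the busy-wait) */
    /* if X: do Y  (the critical work) */
    atomic_store(&note_a, false);       /* remove Note A */
}

void thread_b(void)
{
    atomic_store(&note_b, true);        /* leave Note B */
    if (!atomic_load(&note_a)) {        /* if no Note A */
        /* if X: do Y  (the critical work) */
    }
    atomic_store(&note_b, false);       /* remove Note B */
}

Note the asymmetry: Thread A defers to Thread B, which is what makes the protocol safe, and also what produces the busy-wait discussed next.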
Possibly incorrect assumption #1: This is the lowest-level solution known (possible?).
Now one of the deficiencies of this solution is the spinning or busy-wait
while Note B
    do nothing
because if do Y is an expensive task, the thread scheduler will keep switching to Thread A to perform this check; i.e., the thread is still "awake" and using processing power even when we "know" its only work is a check that will fail for some time.
The question then becomes: is there some way we could make Thread A "sleep", so that it isn't scheduled to run until Note B is gone, and then "wake up"?
The Condition Variable design pattern provides a solution, and it is built on top of mutexes.
Possibly incorrect assumption #2: Then, isn't there still some spinning under the hood? Is the average amount of spinning somehow reduced?
I could use a logical explanation like only S.O. can provide ;)

Isn't there still some spinning under the hood?
No. That's the whole point of condition variables: It's to avoid the need for spinning.
An operating system scheduler creates a private object to represent each thread, and it keeps these objects in containers which, for purpose of this discussion, we will call queues.
Simplistic explanation:
When a thread calls condition.await(), that invokes a system call. The scheduler handles it by removing the calling thread from whatever CPU it was running on, and by putting its proxy object into a queue. Specifically, it puts it into the queue of threads that are waiting to be notified about that particular condition.
There usually is a separate queue for every different thing that a thread could wait for. If you create a mutex, the OS creates a queue of threads that are waiting to acquire the mutex. If you create a condition variable, the OS creates a queue of threads that are waiting to be notified.
Once the thread's proxy object is in that queue, nothing will wake it up until some other thread notifies the condition variable. That notification also is a system call. The OS handles it (simplest case) by moving all of the threads that were in the condition variable's queue into the global run queue. The run queue holds all of the threads that are waiting for a CPU to run on.
On some future timer tick, the OS will pick the formerly waiting thread from the run queue and set it up on a CPU.
Extra credit:
Bad news! The first thing the thread does after being awakened, while it's still inside the condition.await() call, is try to re-lock the mutex. But there's a chance that the thread that signalled the condition still has the mutex locked. Our victim is going to go right back to sleep again, this time waiting in the queue for the mutex.
A more sophisticated system might be able to optimize the situation by moving the thread directly from the condition variable's queue to the mutex queue without ever needing to wake it up and then put it back to sleep.
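A minimal POSIX-threads sketch of the pattern this answer describes (my illustration; the comments map each step onto the queues discussed above):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static bool ready = false;          /* the "condition" */

void waiter(void)
{
    pthread_mutex_lock(&m);
    while (!ready) {
        /* System call: the scheduler takes this thread off its CPU and puts
           it in the condition variable's queue; the mutex is released
           atomically as part of the wait. */
        pthread_cond_wait(&cv, &m);
        /* On wakeup, the thread re-locks m before returning. If the
           signaling thread still holds m, this thread goes right back to
           sleep, this time in the mutex queue (the "bad news" above). */
    }
    pthread_mutex_unlock(&m);
}

void notifier(void)
{
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&cv);       /* moves a waiter toward the run queue */
    pthread_mutex_unlock(&m);
}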

Yes: at the lowest, hardware level, instructions like compare-and-set and compare-and-swap are used. They spin until the condition is met, and only then perform the set (assignment). This spin is required each time we put a thread into a queue, be it the queue for a mutex, for a condition variable, or for a processor.
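For illustration, a compare-and-swap spin of the kind this answer means might look like the following tiny latch in C11 (a sketch of the mechanism, not any particular implementation):

#include <stdatomic.h>

static atomic_int latch;            /* 0 = free, 1 = held */

void latch_acquire(void)
{
    int expected = 0;
    /* Spin: retry the compare-and-swap until we atomically change 0 -> 1. */
    while (!atomic_compare_exchange_weak(&latch, &expected, 1))
        expected = 0;               /* on failure, CAS wrote the observed value; reset it */
}

void latch_release(void)
{
    atomic_store(&latch, 0);
}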

Then, isn't there still some spinning under the hood? Is the average amount of spinning somehow reduced?
That's a decision for the implementation to make. If spinning works best on the platform, then spinning can be used. But almost no spinning is required.
Typically, there's a lock somewhere at the lowest level of the implementation that protects system state. That lock is only held by any thread for a tiny split second as it manipulates that system state. Typically, you do need to spin while waiting for that inner lock.
A block on a mutex might look like this:
1. Atomically try to acquire the mutex.
2. If that succeeds, stop, you are done. (This is the "fast path".)
3. Acquire the inner lock that no thread holds for more than a few instructions.
4. Mark yourself as waiting for that mutex to be acquired.
5. Atomically release the inner lock and set your thread as not ready-to-run.
Notice that the only place there is any spinning in here is in step 3, and that's not in the fast path. No spinning is needed after step 5: the call in step 5 does not return to this thread until the lock is conveyed to it by the thread that held it.
When a thread releases the lock, it checks the count of threads waiting for the lock. If that's greater than zero, instead of releasing the lock, it acquires the inner lock protecting system state, picks one of the threads recorded as waiting for the lock, conveys the lock to that thread, and tells the scheduler to run that thread. That thread then sees the call from step 5 return, now holding the lock.
Again, the only waiting is on that inner lock that is used just to track what thread is waiting for what.
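One concrete shape this fast-path/slow-path structure takes on Linux is a futex-based mutex. The following is a much simplified sketch (in the spirit of Ulrich Drepper's "Futexes Are Tricky", my assumption of a platform, not the answerer's exact design); the only atomic operations are on the fast path, and the slow path parks the thread in the kernel rather than spinning:

#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* state: 0 = unlocked, 1 = locked, 2 = locked with (possible) waiters */
typedef struct { atomic_int state; } my_mutex;

void my_mutex_lock(my_mutex *m)
{
    int expected = 0;
    /* Fast path: one compare-and-swap, no spinning. */
    if (atomic_compare_exchange_strong(&m->state, &expected, 1))
        return;
    /* Slow path: mark the mutex contended and sleep in the kernel.
       FUTEX_WAIT returns immediately if state is no longer 2. */
    while (atomic_exchange(&m->state, 2) != 0)
        syscall(SYS_futex, &m->state, FUTEX_WAIT, 2, NULL, NULL, 0);
}

void my_mutex_unlock(my_mutex *m)
{
    /* If the state was 2, a thread may be asleep in the kernel: wake one. */
    if (atomic_exchange(&m->state, 0) == 2)
        syscall(SYS_futex, &m->state, FUTEX_WAKE, 1, NULL, NULL, 0);
}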

Related

Kernel Programming - Mutexes

So I'm trying to use mutex_init(), mutex_lock(), mutex_unlock() for thread synchronization.
I am currently trying to schedule threads in a round-robin fashion (but more than one thread could be running at a time), and I set the current state of a thread to TASK_INTERRUPTIBLE, followed by waking up another thread whose PID I have in a list.
I need to iterate over this list for my logic.
As I understand it, I need to lock this list as I access its elements, or another thread might miss a new entry while I'm making changes to it. Also, once a thread has locked a mutex, no other thread can unlock it; only the thread that locked it can release it.
But, I'm still not sure if I'm locking it correctly. (I release the lock before I call schedule(), and re-lock after that)
I declare a mutex locally within a thread and use it to lock the list. My current thread locks it with
mutex_lock(&lock);
then iterates over the list until it finds something (or reaches the end without finding anything), then unlocks with
mutex_unlock(&lock);
I assume locking while I iterate is legal. I have never seen examples of this though.
Also, is it normal for the process to have a state of (TASK_UNINTERRUPTIBLE) while it holds a mutex lock?
EDIT : I am adding some more information based on the answer below.
It is possible my program may be run on a virtual machine with a single core. Therefore, I do not want to risk infinite polling using spin_lock().
I am trying to maintain scheduling between threads that have a certain ID. For example, suppose there are 4 threads: 2 in set 'A' and 2 in set 'B'. I allow only 1 thread to run in each set, but I switch between the threads within a given set. However, a thread in set 'A' should never switch to any thread in set 'B'.
(I know the kernel scheduler wont be perfect, so an approximate switching will do).
My Reasoning for TASK_STATE's:
1) Initial thread that gets created is running.
2) If another thread in the same set is running (and this one hasn't executed for a given time), set the other thread to TASK_INTERRUPTIBLE while calling schedule(). Note: there can be more than 2 threads in each set, but let's keep it simple by considering only 2 for now.
3) If it has executed for enough time, set this task to TASK_INTERRUPTIBLE and set the other task in the same set to TASK_RUNNING, while calling schedule().
All this logic happens while I am accessing certain data structures which are locked by a (now) Global Mutex. I unlock the mutex just before I call schedule(), and instantly re-lock afterward. After my logic part is done, I completely unlock the mutex.
Is there anything fundamentally wrong with the approach?
As I understand it, I need to lock this list as I access its elements
Yes, that is true. But if you use a mutex, you're going to be really sad, because a call to lock/unlock is a call to the scheduler; calling it from inside the scheduler is likely to result in deadlock. What you need to do depends on whether your processor is multi-core or (the mythical) single-core. (Is this a virtual system?) On a single-core processor you can disable interrupts. On a multi-core processor, disabling interrupts is not sufficient (it only disables interrupts for that one core, and another core may still be interrupted). The simplest thing to do on a multi-core is to use a spinlock. Unlike the mutex, both of these locking mechanisms can be unlocked from different threads.
I set the current state of a thread to TASK_INTERRUPTIBLE
Is the thread being taken off the CPU? If so, it's not running, so I suspect that TASK_INTERRUPTIBLE is the wrong state. It would be helpful if you could list the possible states for me or if you could describe what the state is supposed to indicate. Because to me "TASK_INTERRUPTIBLE" sounds like a running task.
I declare a mutex locally within a thread and lock the list
Local mutexes are a red flag! The resource you are locking should be protected by a mutex with the same scope. If the list is global, it should have a global mutex to protect it. Threads that want to use the list must first acquire its mutex. Of course, as I already talked about, you probably want to use a different kind of locking to protect the list of ready-to-run processes.
I assume locking while I iterate is legal
It is perfectly legal (assuming of course that your mutual exclusion scheme is bug-free). In fact, it's required. If another thread were allowed to, for example, remove a node from the list while you were reading it, you could end up dereferencing a deleted node.
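For instance, a globally scoped mutex protecting a global list during iteration might look like this in kernel code (a sketch with illustrative names, not the asker's actual code):

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/types.h>

struct task_entry {
    pid_t pid;
    struct list_head node;
};

static LIST_HEAD(task_list);             /* the shared list... */
static DEFINE_MUTEX(task_list_lock);     /* ...and its same-scope mutex */

static struct task_entry *find_entry(pid_t pid)
{
    struct task_entry *e, *found = NULL;

    mutex_lock(&task_list_lock);         /* hold the lock across the whole walk */
    list_for_each_entry(e, &task_list, node) {
        if (e->pid == pid) {
            found = e;
            break;
        }
    }
    mutex_unlock(&task_list_lock);
    return found;
}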
Also, is it normal for the process to have a state of TASK_UNINTERRUPTIBLE while it holds a mutex lock?
No, not while it holds the lock if the process is currently running on a CPU. A mutex is available to user code. If holding a mutex made the process uninterruptible, that would mean that a process could hijack the system by simply locking a mutex and never releasing it. Now, you will find that the lock and unlock functions need to be uninterruptible on a single-core processor. However, it doesn't make sense to set the state for the process because it's actually the scheduler that must not be interrupted.

Java Thread Live Lock

I have an interesting problem related to Java thread live lock. Here it goes.
There are four global locks - L1,L2,L3,L4
There are four threads - T1, T2, T3, T4
T1 requires locks L1,L2,L3
T2 requires locks L2
T3 requires locks L3, L4
T4 requires locks L1,L2
So the pattern of the problem is: any of the threads can run and acquire its locks in any order. If any of the threads detects that a lock it needs is not available, it releases all the locks it had previously acquired and waits for a fixed time before retrying. The cycle repeats, giving rise to a livelock condition.
So, to solve this problem, I have two solutions in mind
1) Let each thread wait for a random period of time before retrying.
OR,
2) Let each thread acquire all the locks in a particular order (even if a thread does not require all the locks)
I am not convinced that these are the only two options available to me. Please advise.
Have all the threads enter a single mutex-protected state-machine whenever they require and release their set of locks. The threads should expose methods that return the set of locks they require to continue and also to signal/wait for a private semaphore signal. The SM should contain a bool for each lock and a 'Waiting' queue/array/vector/list/whatever container to store waiting threads.
If a thread enters the SM mutex to get locks and can immediately get its lock set, it can reset its bool set, exit the mutex and continue on.
If a thread enters the SM mutex and cannot immediately get its lock set, it should add itself to 'Waiting', exit the mutex and wait on its private semaphore.
If a thread enters the SM mutex to release its locks, it sets the lock bools to 'return' its locks and iterates 'Waiting' in an attempt to find a thread that can now run with the set of locks available. If it finds one, it resets the bools appropriately, removes the thread it found from 'Waiting' and signals the 'found' thread semaphore. It then exits the mutex.
You can twiddle with the algorithm that you use to match up the available set lock bools with waiting threads as you wish. Maybe you should release the thread that requires the largest set of matches, or perhaps you would like to 'rotate' the 'Waiting' container elements to reduce starvation. Up to you.
A solution like this requires no polling (with its performance-sapping CPU use and latency) and no continual acquire/release of multiple locks.
It's much easier to develop such a scheme with an OO design. The methods/member functions to signal/wait the semaphore and return the set of locks needed can usually be stuffed somewhere in the thread class inheritance chain.
Unless there is a good reason (performance-wise) not to do so, I would unify all locks into one lock object. This is similar to solution 2 you suggested, only simpler in my opinion. And by the way, not only is this solution simpler and less bug-prone, the performance might also be better than solution 1.
Personally, I have never heard of Option 1, but I am by no means an expert on multithreading. After thinking about it, it sounds like it will work fine.
However, the standard way to deal with threads and resource locking is somewhat related to Option 2. To prevent deadlocks, resources need to always be acquired in the same order. For example, if you always lock the resources in the same order, you won't have any issues.
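As a sketch of the fixed-order rule (shown here in C with pthread mutexes purely for illustration; the same idea applies directly to Java locks): every thread takes the locks it needs in one agreed global order, so no cycle of "I hold what you want while you hold what I want" can form:

#include <pthread.h>

static pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t L3 = PTHREAD_MUTEX_INITIALIZER;

/* Global order: L1 before L2 before L3. Each thread takes only the locks
   it needs, but always in that order, and releases in reverse order. */

void t1_work(void)              /* T1 requires L1, L2, L3 */
{
    pthread_mutex_lock(&L1);
    pthread_mutex_lock(&L2);
    pthread_mutex_lock(&L3);
    /* ... work ... */
    pthread_mutex_unlock(&L3);
    pthread_mutex_unlock(&L2);
    pthread_mutex_unlock(&L1);
}

void t4_work(void)              /* T4 requires L1, L2 */
{
    pthread_mutex_lock(&L1);
    pthread_mutex_lock(&L2);
    /* ... work ... */
    pthread_mutex_unlock(&L2);
    pthread_mutex_unlock(&L1);
}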
Go with 2a) Let each thread acquire all of the locks that it needs (NOT all of the locks) in a particular order; if a thread encounters a lock that isn't available then it releases all of its locks
As long as threads acquire their locks in the same order you can't have deadlock; however, you can still have starvation (a thread might run into a situation where it keeps releasing all of its locks without making forward progress). To ensure that progress is made you can assign priorities to threads (0 = lowest priority, MAX_INT = highest priority) - increase a thread's priority when it has to release its locks, and reduce it to 0 when it acquires all of its locks. Put your waiting threads in a queue, and don't start a lower-priority thread if it needs the same resources as a higher-priority thread - this way you guarantee that the higher-priority threads will eventually acquire all of their locks. Don't implement this thread queue unless you're actually having problems with thread starvation, though, because it's probably less efficient than just letting all of your threads run at once.
You can also simplify things by implementing omer schleifer's condense-all-locks-to-one solution; however, unless threads other than the four you've mentioned are contending for these resources (in which case you'll still need to lock the resources from the external threads), you can more efficiently implement this by removing all locks and putting your threads in a circular queue (so your threads just keep running in the same order).

Advantages of using condition variables over mutex

I was wondering what is the performance benefit of using condition variables over mutex locks in pthreads.
What I found is : "Without condition variables, the programmer would need to have threads continually polling (possibly in a critical section), to check if the condition is met. This can be very resource consuming since the thread would be continuously busy in this activity. A condition variable is a way to achieve the same goal without polling." (https://computing.llnl.gov/tutorials/pthreads)
But it also seems that mutex calls are blocking (unlike spin-locks). Hence, if a thread (T1) fails to get a lock because some other thread (T2) holds it, T1 is put to sleep by the OS and is woken up only when T2 releases the lock and the OS gives T1 the lock. The thread T1 does not really poll to get the lock. From this description, it seems there is no performance benefit to using condition variables: in either case there is no polling involved, and the OS already seems to provide the benefit that the condition-variable paradigm would.
Can you please explain what actually happens.
A condition variable allows a thread to be signaled when something of interest to that thread occurs.
By itself, a mutex doesn't do this.
If you just need mutual exclusion, then condition variables don't do anything for you. However, if you need to know when something happens, then condition variables can help.
For example, if you have a queue of items to work on, you'll have a mutex to ensure the queue's internals are consistent when accessed by the various producer and consumer threads. However, when the queue is empty, how will a consumer thread know when something is in there for it to work on? Without something like a condition variable it would need to poll the queue, taking and releasing the mutex on each poll (otherwise a producer thread could never put something on the queue).
Using a condition variable, the consumer that finds the queue empty can just wait on the condition variable, which indicates that the queue has had something put into it. No polling - that thread does nothing until a producer puts something in the queue and then signals the condition that the queue has a new item.
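A minimal pthreads sketch of this queue scenario (my illustration of the pattern, with a toy fixed-size queue):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static int items[16];
static int count;                       /* number of items in the queue */

void produce(int item)
{
    pthread_mutex_lock(&lock);
    items[count++] = item;              /* toy queue; assume it never overflows */
    pthread_cond_signal(&not_empty);    /* wake one waiting consumer */
    pthread_mutex_unlock(&lock);
}

int consume(void)
{
    pthread_mutex_lock(&lock);
    while (count == 0)                  /* loop guards against spurious wakeups */
        pthread_cond_wait(&not_empty, &lock);   /* sleeps; lock released meanwhile */
    int item = items[--count];
    pthread_mutex_unlock(&lock);
    return item;
}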
You're looking for too much overlap in two separate but related things: a mutex and a condition variable.
A common implementation approach for a mutex is to use a flag and a queue. The flag indicates whether the mutex is held by anyone (a single-count semaphore would work too), and the queue tracks which threads are in line waiting to acquire the mutex exclusively.
A condition variable is then implemented as another queue bolted onto that mutex. Threads that got in line to wait to acquire the mutex can—usually once they have acquired it—volunteer to get out of the front of the line and get into the condition queue instead. At this point, you have two separate sets of waiters:
Those waiting to acquire the mutex exclusively
Those waiting for the condition variable to be signaled
When a thread holding the mutex exclusively signals the condition variable, for which we'll assume for now that it's a singular signal (unleashing no more than one waiting thread) and not a broadcast (unleashing all the waiting threads), the first thread in the condition variable queue gets shunted back over into the front (usually) of the mutex queue. Once the thread currently holding the mutex—usually the thread that signaled the condition variable—relinquishes the mutex, the next thread in the mutex queue can acquire it. That next thread in line will have been the one that was at the head of the condition variable queue.
There are many complicated details that come into play, but this sketch should give you a feel for the structures and operations in play.
If you are looking for performance, then start reading about "non-blocking / non-locking" thread synchronization algorithms. They are based on atomic operations, which gcc is kind enough to provide. Look up the gcc atomic operations. Our tests showed we could increment a global value with multiple threads using atomic operations orders of magnitude faster than locking with a mutex. Here is some sample code that shows how to add and remove items from a linked list from multiple threads at the same time without locking.
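The answer's original code links are gone, but the global-counter comparison it describes is easy to sketch with the GCC/Clang __atomic built-ins (my reconstruction, not the answerer's code):

#include <pthread.h>

static long counter_mutex;
static long counter_atomic;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void add_with_mutex(long n)
{
    pthread_mutex_lock(&counter_lock);  /* lock, add, unlock */
    counter_mutex += n;
    pthread_mutex_unlock(&counter_lock);
}

void add_with_atomic(long n)
{
    /* One lock-free read-modify-write instruction; no scheduler involvement. */
    __atomic_fetch_add(&counter_atomic, n, __ATOMIC_SEQ_CST);
}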
For sleeping and waking threads, signals are much faster than condition variables. You use pthread_kill to send the signal and sigwait to put the thread to sleep. We tested this too, with the same kind of performance benefits. Here is some example code.

Mutex lock: what does "blocking" mean?

I've been reading up on multithreading and shared resource access, and one of the many (for me) new concepts is the mutex lock. What I can't seem to find out is what actually happens to the thread that finds a "critical section" locked. It says in many places that the thread gets "blocked", but what does that mean? Is it suspended, and will it resume when the lock is lifted? Or will it try again in the next iteration of the "run loop"?
The reason I ask is that I want to have system-supplied events (mouse, keyboard, etc.), which (apparently) are delivered on the main thread, to be handled in a very specific part of the run loop of my secondary thread. So whatever event is delivered, I queue in my own data structure. Obviously, the data structure needs a mutex lock because it's being modified by both threads. The missing puzzle piece is: what happens when an event gets delivered in a function on the main thread, I want to queue it, but the queue is locked? Will the main thread be suspended, or will it just jump over the locked section and go out of scope (losing the event)?
Blocked means execution gets stuck there; generally, the thread is put to sleep by the system and yields the processor to another thread. When a thread is blocked trying to acquire a mutex, execution resumes when the mutex is released, though the thread might block again if another thread grabs the mutex before it can.
There is generally a try-lock operation that grabs the mutex if possible and, if not, returns an error. But you are eventually going to have to move the current event into that queue. Also, if you delay moving the events to the thread where they are handled, the application will become unresponsive regardless.
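For the event-delivery case in the question, the try-lock option looks roughly like this with POSIX threads (an illustrative sketch with a toy queue; in practice you would buffer the event or fall back to a blocking lock rather than drop it):

#include <pthread.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static int queue[64];
static int queue_len;

/* Returns 1 if the event was queued, 0 if the lock was busy. */
int try_queue_event(int ev)
{
    if (pthread_mutex_trylock(&queue_lock) != 0)
        return 0;               /* busy: caller decides to retry, buffer, or block */
    if (queue_len < 64)
        queue[queue_len++] = ev;
    pthread_mutex_unlock(&queue_lock);
    return 1;
}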
A queue is actually one case where you can get away with not using a mutex. For example, Mac OS X (and possibly also iOS) provides the OSAtomicEnqueue() and OSAtomicDequeue() functions (see man atomic or <libkern/OSAtomic.h>) that exploit processor-specific atomic operations to avoid using a lock.
But, why not just process the events on the main thread as part of the main run loop?
The simplest way to think of it is that the blocked thread is put in a wait ("sleeping") state until the mutex is released by the thread holding it. At that point the operating system will "wake up" one of the threads waiting on the mutex and let it acquire it and continue. It's as if the OS simply puts the blocked thread on a shelf until it has the thing it needs to continue. Until the OS takes the thread off the shelf, it's not doing anything. The exact implementation -- which thread gets to go next, whether they all get woken up or they're queued -- will depend on your OS and what language/framework you are using.
Too late to answer, but I may facilitate the understanding. I am speaking more from an implementation perspective than from theoretical texts.
The word "blocking" is a kind of technical homonym: people may use it for sleeping or for mere waiting. The term has to be understood in its context of usage.
Blocking means waiting - Assume on an SMP system that thread B wants to acquire a spinlock held by some other thread A. One mechanism is to disable preemption and keep spinning on the processor until B gets the lock. Another, probably more efficient, mechanism is to allow other threads to use the processor if B does not get the lock after a few attempts. So we schedule out thread B (preemption being enabled) and give the processor to some other thread C. In this case thread B just waits in the scheduler's queue and comes back when its turn arrives. Understand that B is not sleeping; it is just waiting passively rather than busy-waiting and burning processor cycles. On BSD and Solaris systems there are data structures like turnstiles to implement this situation.
Blocking means sleeping - If thread B had instead made a system call like read(), waiting for data from a network socket, it cannot proceed until it gets the data. Therefore some texts casually use the term blocking, as in "... blocked for I/O" or "... in a blocking system call". Actually, thread B is rather sleeping. There are specific data structures known as sleep queues - much like luxury waiting rooms at airports :-). The thread will be woken up when the OS detects availability of data, much like an attendant of the waiting room.
Blocking means just that. It is blocked. It will not proceed until able. You don't say which language you're using, but most languages/libraries have lock objects where you can "attempt" to take the lock and then carry on and do something different depending on whether you succeeded or not.
But in, for example, Java synchronized blocks, your thread will stall until it is able to acquire the monitor (mutex, lock). The java.util.concurrent.locks.Lock interface describes lock objects which have more flexibility in terms of lock acquisition.

Conditional Variable vs Semaphore

When to use a semaphore and when to use a conditional variable?
Locks are used for mutual exclusion. When you want to ensure that a piece of code is atomic, put a lock around it. You could theoretically use a binary semaphore to do this, but that's a special case.
Semaphores and condition variables build on top of the mutual exclusion provided by locks and are used for providing synchronized access to shared resources. They can be used for similar purposes.
A condition variable is generally used to avoid busy waiting (looping repeatedly while checking a condition) while waiting for a resource to become available. For instance, if you have a thread (or multiple threads) that can't continue onward until a queue is empty, the busy-waiting approach would be to just do something like:
//pseudocode
while (!queue.empty())
{
    sleep(1);
}
The problem with this is that you're wasting processor time by having this thread repeatedly check the condition. Why not instead have a synchronization variable that can be signaled to tell the thread that the resource is available?
//pseudocode
syncVar.lock.acquire();
while (!queue.empty())
{
    syncVar.wait();
}
// do stuff with queue
syncVar.lock.release();
Presumably, you'll have a thread somewhere else that is pulling things out of the queue. When the queue is empty, it can call syncVar.signal() to wake up a random thread that is sitting asleep on syncVar.wait() (or there's usually also a signalAll() or broadcast() method to wake up all the threads that are waiting).
I generally use synchronization variables like this when I have one or more threads waiting on a single particular condition (e.g. for the queue to be empty).
Semaphores can be used similarly, but I think they're better used when you have a shared resource that can be available and unavailable based on some integer number of available things. Semaphores are good for producer/consumer situations where producers are allocating resources and consumers are consuming them.
Think about if you had a soda vending machine. There's only one soda machine and it's a shared resource. You have one thread that's a vendor (producer) who is responsible for keeping the machine stocked and N threads that are buyers (consumers) who want to get sodas out of the machine. The number of sodas in the machine is the integer value that will drive our semaphore.
Every buyer (consumer) thread that comes to the soda machine calls the semaphore's down() method to take a soda. This will grab a soda from the machine and decrement the count of available sodas by 1. If there are sodas available, the code just keeps running past the down() statement without a problem. If no sodas are available, the thread will sleep there, waiting to be notified when soda is made available again (when there are more sodas in the machine).
The vendor (producer) thread would essentially be waiting for the soda machine to be empty. The vendor gets notified when the last soda is taken from the machine (and one or more consumers are potentially waiting to get sodas out). The vendor would restock the soda machine with the semaphore up() method, the available number of sodas would be incremented each time and thereby the waiting consumer threads would get notified that more soda is available.
The wait() and signal() methods of a synchronization variable tend to be hidden within the down() and up() operations of the semaphore.
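With POSIX semaphores, the buyer side of the soda machine is just this (a sketch; sem_wait() is the down() above and sem_post() the up(); making the vendor actually wait for an empty machine would take a second semaphore, omitted here):

#include <semaphore.h>

static sem_t sodas;                 /* counts sodas currently in the machine */

void machine_init(unsigned initial)
{
    sem_init(&sodas, 0, initial);   /* 0 = shared between threads, not processes */
}

void buyer(void)
{
    sem_wait(&sodas);               /* down(): take a soda; sleep if none left */
    /* enjoy the soda */
}

void vendor(unsigned restocked)
{
    while (restocked--)
        sem_post(&sodas);           /* up(): each post may wake one waiting buyer */
}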
Certainly there's overlap between the two choices. There are many scenarios where a semaphore or a condition variable (or set of condition variables) could both serve your purposes. Both semaphores and condition variables are associated with a lock object that they use to maintain mutual exclusion, but then they provide extra functionality on top of the lock for synchronizing thread execution. It's mostly up to you to figure out which one makes the most sense for your situation.
That's not necessarily the most technical description, but that's how it makes sense in my head.
Let's reveal what's under the hood.
A condition variable is essentially a wait-queue that supports blocking-wait and wakeup operations; i.e., you can put a thread into the wait-queue and set its state to BLOCKED, and get a thread out of it and set its state to READY.
Note that to use a condition variable, two other elements are needed:
a condition (typically implemented by checking a flag or a counter)
a mutex that protects the condition
The protocol then becomes:
acquire the mutex
check the condition
if the condition is not met, block and release the mutex (atomically); otherwise release the mutex and proceed
A semaphore is essentially a counter + a mutex + a wait queue, and it can be used as-is, without external dependencies. You can use it either as a mutex or as a condition variable.
Therefore, a semaphore can be treated as a more sophisticated structure than a condition variable, while the latter is more lightweight and flexible.
Semaphores can be used to implement exclusive access to variables; however, they are meant to be used for synchronization. Mutexes, on the other hand, have semantics strictly related to mutual exclusion: only the process that locked the resource is allowed to unlock it.
Unfortunately you cannot implement synchronization with mutexes, that's why we have condition variables. Also notice that with condition variables you can unlock all the waiting threads in the same instant by using the broadcast unlocking. This cannot be done with semaphores.
Semaphores and condition variables are very similar and are used mostly for the same purposes. However, there are minor differences that can make one preferable. For example, to implement barrier synchronization you would not be able to use a semaphore, but a condition variable is ideal.
Barrier synchronization is when you want all of your threads to wait until everyone has arrived at a certain point in the thread function. This can be implemented with a static variable that starts at the total number of threads and is decremented by each thread when it reaches the barrier; we want each thread to sleep until the last one arrives. A semaphore would do the exact opposite! With a semaphore, each thread would keep running, and the last thread (which sets the semaphore value to 0) would go to sleep.
A condition variable, on the other hand, is ideal. When each thread gets to the barrier, we check whether our static counter is zero. If not, we put the thread to sleep with the condition variable's wait function. When the last thread arrives at the barrier, the counter is decremented to zero, and this last thread calls the condition variable's broadcast (wake-all) function, which wakes up all the other threads!
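A sketch of that condition-variable barrier in pthreads (single-use, for four threads; my illustration of the description above):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t all_here = PTHREAD_COND_INITIALIZER;
static int remaining = 4;           /* total number of threads */

void barrier_wait(void)
{
    pthread_mutex_lock(&m);
    if (--remaining == 0)
        pthread_cond_broadcast(&all_here);      /* last thread wakes everyone */
    else
        while (remaining > 0)                   /* everyone else sleeps here */
            pthread_cond_wait(&all_here, &m);
    pthread_mutex_unlock(&m);
}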
I file condition variables under monitor synchronization. I've generally seen semaphores and monitors as two different synchronization styles. There are differences between the two in terms of how much state data is inherently kept and how you want to model code - but there really isn't any problem that can be solved by one but not the other.
I tend to code towards monitor form; in most languages I work in that comes down to mutexes, condition variables, and some backing state variables. But semaphores would do the job too.
A semaphore needs to know the count upfront for initialization. There is no such requirement for condition variables.
The mutex and condition variable can be seen as derived from a semaphore:
For a mutex, the semaphore uses two states: 0 and 1.
For a condition variable, the semaphore uses a counter.
They are like syntactic sugar:
conditionalVar + mutex == semaphore
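To see the "syntactic sugar" claim concretely, here is a counting semaphore built from a mutex plus a condition variable (a standard textbook construction, sketched in pthreads):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
    unsigned count;
} semaphore_t;

void semaphore_down(semaphore_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                   /* wait until a unit is available */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void semaphore_up(semaphore_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);       /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}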
