Difference between a semaphore and a condition variable - multithreading

I am implementing a conditional wait, and either a semaphore or a condition variable can be used to implement it. Is there any difference between the two, specifically from a performance point of view?
I have heard that when a thread waits on a condition variable it is not scheduled until it is signaled, which ensures that it does not consume CPU cycles. Is it true that this does not hold for a semaphore, and that a semaphore consumes CPU cycles even while it is waiting?

If all of your threads are waiting for some event, e.g., submission of a task, then you can wake them all up with a condition variable when the event occurs.
If you have a limited resource, say 10 pages of memory reserved for your threads, then you will need them to wait until a page is available, and when one becomes available you will want to let just one thread start execution. In this case you can use a semaphore to unlock as many threads as there are available pages.
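As a minimal POSIX sketch of that second case (the function names and the count of 10 are just for illustration):

#include <semaphore.h>

sem_t pages;                   /* counts free pages */

void init_pages(void) {
    sem_init(&pages, 0, 10);   /* 10 pages available initially */
}

void use_one_page(void) {
    sem_wait(&pages);          /* blocks until a page is free */
    /* ... use the page ... */
    sem_post(&pages);          /* return the page, waking one waiter if any */
}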

A semaphore has extra state - a count of units held - as well as a queue for threads waiting on it. This allows a semaphore to, say, record how many times it has been signaled even if no thread is currently waiting on it. If a thread loops around a semaphore wait() and the semaphore is signaled N times, the thread will eventually loop N times, even if the thread is sometimes busy when the semaphore is signaled - very useful for producer-consumer queues.
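To illustrate that count-keeping behaviour, here is a minimal POSIX sketch (names and loop counts are illustrative); each sem_post is remembered even when the consumer happens to be busy:

#include <pthread.h>
#include <semaphore.h>

sem_t items;                       /* counts pending signals/work items */

void *producer(void *arg) {
    for (int i = 0; i < 5; i++) {
        /* ... enqueue an item ... */
        sem_post(&items);          /* remembered in the count, waiter or not */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&items);          /* eventually loops exactly 5 times */
        /* ... dequeue and process an item ... */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&items, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&items);
    return 0;
}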
A condvar does not have this extra count state, but while waiting it can release a mutex that it is bound to until another thread signals it - also very useful for producer-consumer queues.
Sometimes I wish for a combination of the two - a condvar with a count - but this does not seem to be forthcoming from OS developers :(
A semaphore and a condvar are the same in that they are both synchronization primitives. Apart from that..

A condition variable and a binary semaphore both block a thread until a specified condition becomes true, and in that sense you can use either one; note, however, that a condition variable is always used together with a mutex. In both cases the blocked thread is scheduled again only when it is signaled; without a signal it will not be scheduled. If you need to keep track of a number of resources, use a counting semaphore instead.
Neither consumes CPU cycles while blocked.

Related

Can it be assumed that `pthread_cond_signal` will wake the signaled thread atomically with regard to the mutex bound to the condition variable?

Quoting POSIX:
The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().
"If predictable scheduling behavior is required". This could/would hint that locking the mutex bound to the condition variable right before calling pthread_cond_signal() should guarantee that the signaled thread will be woken up before any other thread manages to lock this mutex. Is this correct?
We will see if any PThreads guru has a more comprehensive answer, but as far as I can tell, at least from the Linux manpage, you do not get fully predictable behavior. What you do get is a guarantee that if two threads wait on the same condition variable, the higher-priority thread gets to go first (at least, that should be true on Linux if one thread is SCHED_OTHER and the other is real-time SCHED_FIFO). That holds if you lock the mutex before signalling (with reservations for errors after a quick read of the manpage).
See
https://linux.die.net/man/3/pthread_cond_signal
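For reference, the lock-before-signal pattern being discussed would look like this (a minimal sketch; predicate, mutex and cond are illustrative shared state):

#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int predicate = 0;               /* the condition the waiters test */

void make_ready(void) {
    pthread_mutex_lock(&mutex);
    predicate = 1;               /* make the awaited condition true */
    pthread_cond_signal(&cond);  /* signal while still holding the mutex */
    pthread_mutex_unlock(&mutex);
}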
No, there is no guarantee that the signalled thread will be woken up. Worse, if the signalling thread runs a sequence like:
while (run_again) {
    pthread_mutex_lock(&mutex);
    /* prepare data */
    pthread_mutex_unlock(&mutex);
    pthread_cond_broadcast(&cond);
}
there is a reasonable chance that control will never be passed to the other threads waiting on the mutex, because of the logic in the scheduler. You can find some examples to play with in this answer.
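For contrast, the waiter side presumably looks something like this (a sketch in the same fragment style; data_ready is a hypothetical predicate):

/* waiter: must re-acquire the mutex after every wakeup, which the
   tight signalling loop above may never let it do */
pthread_mutex_lock(&mutex);
while (!data_ready)
    pthread_cond_wait(&cond, &mutex);
/* consume the prepared data */
pthread_mutex_unlock(&mutex);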
No.
The best reference I have found regarding the predictability is this one:
https://austin-group-l.opengroup.narkive.com/lKcmfoRI/predictable-scheduling-behavior-in-pthread-cond-broadcast
Basically, people want to guard against the possibility that threads do not get a fair chance to run. Apparently this is not a problem for most producer-consumer scenarios, nor does it apply to pthread_cond_broadcast. I would say it is useful only in limited cases.
Cppreference.com actually considers that notifying while still holding the lock (i.e., unlocking only after notifying) may be a pessimization:
https://en.cppreference.com/w/cpp/thread/condition_variable/notify_all

How can any thread signal for release of a binary semaphore

I am new to the multithreading paradigm. While learning concurrency, every source I found says:
"The difference between mutex and binary semaphore is the ownership
i.e. a mutex can be signaled for release by only the thread who
created it while a semaphore can be signaled any thread"
Consider a scenario where thread A has acquired a binary semaphore lock on resource x and is processing it. If any thread can release the lock on x, doesn't this open the possibility of some other thread releasing the lock while thread A is still using x?
Isn't there scope for inconsistency here, or am I missing something?
Of course, if threads arbitrarily acquire or release a semaphore, the result will be disastrous, and the fact that implementations do not prevent this does not imply that it is a useful scenario.
However, there are real use cases in which the involved threads use another mechanism to coordinate among themselves, while using the semaphore to hold off the threads that do not participate in that coordination.
Imagine you expand a use case where one thread acquires the semaphore to perform a task into the parallel execution of that task. After acquiring the semaphore, several worker threads are spawned, each working on a different part of the data, so they naturally do not interfere with each other. Then the last worker thread releases the semaphore, which eliminates the need for further communication between the initiating thread and the worker threads. Of course, this requires each worker thread to detect whether it is the last one, but a simple atomic integer holding the number of active workers is sufficient.
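A minimal POSIX sketch of that pattern (NWORKERS and all names are illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>

#define NWORKERS 4

sem_t gate;                        /* the binary semaphore guarding the task */
atomic_int active = NWORKERS;      /* workers still running */

void *worker(void *arg) {
    /* ... work on this worker's part of the data ... */
    if (atomic_fetch_sub(&active, 1) == 1)
        sem_post(&gate);           /* the last worker out releases the semaphore */
    return NULL;
}

int main(void) {
    pthread_t t[NWORKERS];
    sem_init(&gate, 0, 1);
    sem_wait(&gate);               /* the initiating thread acquires it... */
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, NULL);  /* ...but never releases it */
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&gate);
    return 0;
}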

Why do I get a thread context switch every time I synchronize with a mutex?

I have multiple threads updating a single array in tight loops (10 threads on a dual-core processor at roughly 100,000 updates per second). Each time, the array is updated under the protection of a mutex (WaitForSingleObject / ReleaseMutex). I have noticed that no thread ever does two consecutive updates to the array, which means there must be some sort of yield related to the synchronization. That amounts to about 100,000 context switches every second, which seems sub-optimal. Why does this happen?
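The update loop being described presumably looks something like this (a reconstruction, not the asker's actual code):

#include <windows.h>

HANDLE hMutex;    /* assumed created with CreateMutex(NULL, FALSE, NULL) */
int array[1024];  /* the shared array; size is illustrative */

DWORD WINAPI updater(LPVOID arg) {
    for (;;) {
        WaitForSingleObject(hMutex, INFINITE);  /* joins the mutex wait queue */
        /* ... update the shared array ... */
        ReleaseMutex(hMutex);                   /* a queued thread is resumed */
    }
}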
The problem here is that there is an ordering among the waiting threads.
Each thread blocked in WaitForSingleObject goes into a queue and is then suspended by the scheduler so that it no longer consumes execution time. When the mutex is freed, one of the waiting threads is resumed by the scheduler. The exact order in which threads are woken from the queue is unspecified, but in many cases it will be simple first-in, first-out.
What happens now is that if the same thread releases the mutex and then does another WaitForSingleObject on the same mutex, it is re-inserted into the queue, and it is quite unlikely to be inserted at the front if other threads are already waiting. This makes sense, as allowing it to skip to the front of the queue could starve the other threads. So the scheduler will probably just suspend it and wake the thread at the front of the queue instead.
I guess this is because of the two processors.
When the first thread (running on the first processor) releases the mutex, the second thread (on the second processor) gets it; then, when the first thread tries to get the mutex again, it cannot. When the mutex is finally released by the second thread, it is taken by a third thread (on the first processor).

Java Thread Live Lock

I have an interesting problem related to Java thread live lock. Here it goes.
There are four global locks - L1, L2, L3, L4
There are four threads - T1, T2, T3, T4
T1 requires locks L1, L2, L3
T2 requires lock L2
T3 requires locks L3, L4
T4 requires locks L1, L2
So, the pattern of the problem is: any of the threads can run and acquire its locks in any order. If a thread detects that a lock it needs is not available, it releases all the locks it has previously acquired and waits for a fixed time before retrying. The cycle repeats, giving rise to a live lock condition.
So, to solve this problem, I have two solutions in mind
1) Let each thread wait for a random period of time before retrying.
OR,
2) Let each thread acquire all the locks in a particular order (even if a thread does not require all the locks)
I am not convinced that these are the only two options available to me. Please advise.
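For concreteness, the acquire-or-back-off step that produces the live lock might look like this (a POSIX sketch rather than Java; all names are illustrative):

#include <pthread.h>
#include <unistd.h>

/* try to take every needed lock; on failure, release everything taken so far */
int try_acquire_all(pthread_mutex_t *locks[], int n) {
    for (int i = 0; i < n; i++) {
        if (pthread_mutex_trylock(locks[i]) != 0) {
            while (--i >= 0)
                pthread_mutex_unlock(locks[i]);   /* back out completely */
            return 0;
        }
    }
    return 1;
}

/* each thread: retry with a fixed delay - the live-lock-prone cycle */
void run_with_locks(pthread_mutex_t *locks[], int n) {
    while (!try_acquire_all(locks, n))
        usleep(10000);   /* fixed wait before retrying; solution 1 randomizes this */
    /* ... do the work that needs the locks ... */
    for (int i = 0; i < n; i++)
        pthread_mutex_unlock(locks[i]);
}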
Have all the threads enter a single mutex-protected state machine whenever they acquire and release their set of locks. The threads should expose methods that return the set of locks they require to continue, and also methods to signal/wait on a private semaphore. The SM should contain a bool for each lock and a 'Waiting' queue/array/vector/list/whatever container to store waiting threads.
If a thread enters the SM mutex to get locks and can immediately get its lock set, it can reset its bool set, exit the mutex and continue on.
If a thread enters the SM mutex and cannot immediately get its lock set, it should add itself to 'Waiting', exit the mutex and wait on its private semaphore.
If a thread enters the SM mutex to release its locks, it sets the lock bools to 'return' its locks and iterates 'Waiting' in an attempt to find a thread that can now run with the set of locks available. If it finds one, it resets the bools appropriately, removes the thread it found from 'Waiting' and signals the 'found' thread semaphore. It then exits the mutex.
You can twiddle with the algorithm that you use to match up the available set lock bools with waiting threads as you wish. Maybe you should release the thread that requires the largest set of matches, or perhaps you would like to 'rotate' the 'Waiting' container elements to reduce starvation. Up to you.
A solution like this requires no polling (with its performance-sapping CPU use and latency), and no continual acquire/release of multiple locks.
It's much easier to develop such a scheme with an OO design. The methods/member functions to signal/wait the semaphore and return the set of locks needed can usually be stuffed somewhere in the thread class inheritance chain.
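A compact POSIX sketch of such a state machine, under some simplifying assumptions (fixed lock count, a plain array for 'Waiting' with bounds checking omitted, first-fit wakeup; all names are illustrative):

#include <pthread.h>
#include <semaphore.h>

#define NLOCKS 4
#define MAXWAIT 16

static pthread_mutex_t sm = PTHREAD_MUTEX_INITIALIZER;  /* the SM mutex */
static int lock_free[NLOCKS] = {1, 1, 1, 1};            /* a bool per lock */

struct waiter {
    int need[NLOCKS];  /* the set of locks this thread requires */
    sem_t go;          /* private semaphore, sem_init(&go, 0, 0) by its owner */
};
static struct waiter *waiting[MAXWAIT];                 /* the 'Waiting' container */
static int nwaiting = 0;

static int can_run(const int *need) {
    for (int i = 0; i < NLOCKS; i++)
        if (need[i] && !lock_free[i]) return 0;
    return 1;
}

static void take(const int *need) {
    for (int i = 0; i < NLOCKS; i++)
        if (need[i]) lock_free[i] = 0;
}

void acquire_set(struct waiter *w) {
    pthread_mutex_lock(&sm);
    if (can_run(w->need)) {            /* whole set available: take it and go */
        take(w->need);
        pthread_mutex_unlock(&sm);
        return;
    }
    waiting[nwaiting++] = w;           /* otherwise queue up... */
    pthread_mutex_unlock(&sm);
    sem_wait(&w->go);                  /* ...and sleep; a releaser grants the set */
}

void release_set(const int *need) {
    pthread_mutex_lock(&sm);
    for (int i = 0; i < NLOCKS; i++)
        if (need[i]) lock_free[i] = 1;     /* 'return' our locks */
    for (int i = 0; i < nwaiting; i++) {
        if (can_run(waiting[i]->need)) {   /* first waiter that can now run */
            struct waiter *w = waiting[i];
            take(w->need);                 /* grant its set before waking it */
            waiting[i] = waiting[--nwaiting];
            sem_post(&w->go);
            break;
        }
    }
    pthread_mutex_unlock(&sm);
}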
Unless there is a good reason (performance-wise) not to do so, I would unify all the locks into one lock object. This is similar to solution 2 you suggested, only simpler in my opinion.
And by the way, not only is this solution simpler and less bug-prone, its performance might also be better than that of solution 1.
Personally, I have never heard of Option 1, but I am by no means an expert on multithreading. After thinking about it, it sounds like it will work fine.
However, the standard way to deal with threads and resource locking is somewhat related to Option 2. To prevent deadlocks, resources need to always be acquired in the same order. For example, if you always lock the resources in the same order, you won't have any issues.
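A sketch of that convention (POSIX flavour; 'need' holds the indices of the locks a thread actually requires, pre-sorted into the global order):

#include <pthread.h>

/* the global order is simply the index into 'locks' */
void acquire_in_order(pthread_mutex_t *locks[], const int *need, int nneed) {
    for (int i = 0; i < nneed; i++)        /* ascending order: no deadlock */
        pthread_mutex_lock(locks[need[i]]);
}

void release_in_order(pthread_mutex_t *locks[], const int *need, int nneed) {
    for (int i = nneed - 1; i >= 0; i--)   /* release in reverse */
        pthread_mutex_unlock(locks[need[i]]);
}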
Go with 2a) Let each thread acquire all of the locks that it needs (NOT all of the locks) in a particular order; if a thread encounters a lock that isn't available then it releases all of its locks
As long as threads acquire their locks in the same order you can't have deadlock; however, you can still have starvation (a thread might run into a situation where it keeps releasing all of its locks without making forward progress). To ensure that progress is made you can assign priorities to threads (0 = lowest priority, MAX_INT = highest priority) - increase a thread's priority when it has to release its locks, and reduce it to 0 when it acquires all of its locks. Put your waiting threads in a queue, and don't start a lower-priority thread if it needs the same resources as a higher-priority thread - this way you guarantee that the higher-priority threads will eventually acquire all of their locks. Don't implement this thread queue unless you're actually having problems with thread starvation, though, because it's probably less efficient than just letting all of your threads run at once.
You can also simplify things by implementing omer schleifer's condense-all-locks-to-one solution; however, unless threads other than the four you've mentioned are contending for these resources (in which case you'll still need to lock the resources from the external threads), you can more efficiently implement this by removing all locks and putting your threads in a circular queue (so your threads just keep running in the same order).

Advantages of using condition variables over mutex

I was wondering what is the performance benefit of using condition variables over mutex locks in pthreads.
What I found is : "Without condition variables, the programmer would need to have threads continually polling (possibly in a critical section), to check if the condition is met. This can be very resource consuming since the thread would be continuously busy in this activity. A condition variable is a way to achieve the same goal without polling." (https://computing.llnl.gov/tutorials/pthreads)
But it also seems that mutex calls are blocking (unlike spin-locks). Hence if a thread (T1) fails to get a lock because some other thread (T2) has it, T1 is put to sleep by the OS and is woken up only when T2 releases the lock and the OS gives T1 the lock. The thread T1 does not really poll to get the lock. From this description, it seems that there is no performance benefit to using condition variables; in either case there is no polling involved, and the OS anyway provides the benefit that the condition-variable paradigm could provide.
Can you please explain what actually happens.
A condition variable allows a thread to be signaled when something of interest to that thread occurs.
By itself, a mutex doesn't do this.
If you just need mutual exclusion, then condition variables don't do anything for you. However, if you need to know when something happens, then condition variables can help.
For example, if you have a queue of items to work on, you'll have a mutex to ensure the queue's internals are consistent when accessed by the various producer and consumer threads. However, when the queue is empty, how will a consumer thread know when something is in there for it to work on? Without something like a condition variable it would need to poll the queue, taking and releasing the mutex on each poll (otherwise a producer thread could never put something on the queue).
Using a condition variable, the consumer that finds the queue empty can just wait on the condition variable that indicates the queue has had something put into it. No polling - that thread does nothing until a producer puts something in the queue and then signals the condition that the queue has a new item.
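A minimal POSIX sketch of such a queue (the ring-buffer size and names are illustrative; full-queue handling is omitted for brevity):

#include <pthread.h>

#define QSIZE 64

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qnonempty = PTHREAD_COND_INITIALIZER;
static int queue[QSIZE];
static int head = 0, tail = 0, count = 0;

void put(int item) {                        /* producer side */
    pthread_mutex_lock(&qlock);
    queue[tail] = item;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&qnonempty);        /* wake one waiting consumer */
    pthread_mutex_unlock(&qlock);
}

int get(void) {                             /* consumer side */
    pthread_mutex_lock(&qlock);
    while (count == 0)                      /* re-check: wakeups can be spurious */
        pthread_cond_wait(&qnonempty, &qlock);  /* sleeps; no polling */
    int item = queue[head];
    head = (head + 1) % QSIZE;
    count--;
    pthread_mutex_unlock(&qlock);
    return item;
}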
You're looking for too much overlap in two separate but related things: a mutex and a condition variable.
A common implementation approach for a mutex is to use a flag and a queue. The flag indicates whether the mutex is held by anyone (a single-count semaphore would work too), and the queue tracks which threads are in line waiting to acquire the mutex exclusively.
A condition variable is then implemented as another queue bolted onto that mutex. Threads that got in line to wait to acquire the mutex can—usually once they have acquired it—volunteer to get out of the front of the line and get into the condition queue instead. At this point, you have two separate sets of waiters:
Those waiting to acquire the mutex exclusively
Those waiting for the condition variable to be signaled
When a thread holding the mutex exclusively signals the condition variable, for which we'll assume for now that it's a singular signal (unleashing no more than one waiting thread) and not a broadcast (unleashing all the waiting threads), the first thread in the condition variable queue gets shunted back over into the front (usually) of the mutex queue. Once the thread currently holding the mutex—usually the thread that signaled the condition variable—relinquishes the mutex, the next thread in the mutex queue can acquire it. That next thread in line will have been the one that was at the head of the condition variable queue.
There are many complicated details that come into play, but this sketch should give you a feel for the structures and operations in play.
If you are looking for performance, then start reading about "non-blocking / non-locking" thread synchronization algorithms. They are based upon atomic operations, which gcc is kind enough to provide. Look up gcc atomic operations. Our tests showed we could increment a global value with multiple threads using atomic operations orders of magnitude faster than locking with a mutex. Here is some sample code that shows how to add items to and remove them from a linked list from multiple threads at the same time without locking.
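The linked-list sample is not reproduced here, but a minimal sketch of the atomic-increment comparison described above might look like this (using gcc's __atomic built-ins; thread and iteration counts are illustrative):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITER 1000000

long counter = 0;

void *incrementer(void *arg) {
    for (int i = 0; i < NITER; i++)
        __atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);  /* no mutex needed */
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, incrementer, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("%ld\n", counter);  /* always prints 4000000 */
    return 0;
}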
For sleeping and waking threads, signals are much faster than condition variables. You use pthread_kill to send the signal and sigwait to put the thread to sleep. We tested this too, with the same kind of performance benefits. Here is some example code.
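That example code is not reproduced here either; a minimal sketch of the pthread_kill/sigwait mechanism might look like this (the choice of SIGUSR1 and the sleep are illustrative):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sigset_t set;

void *sleeper(void *arg) {
    int sig;
    sigwait(&set, &sig);             /* sleep until SIGUSR1 is delivered */
    printf("woken by signal %d\n", sig);
    return NULL;
}

int main(void) {
    pthread_t t;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);  /* block it so sigwait can claim it */
    pthread_create(&t, NULL, sleeper, NULL); /* inherits the signal mask */
    sleep(1);                                /* crude: let the sleeper reach sigwait */
    pthread_kill(t, SIGUSR1);                /* wake the sleeping thread */
    pthread_join(t, NULL);
    return 0;
}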
