I'd like to know whether there is a way to find the number of threads waiting on a condition variable from the condition variable itself, without maintaining a separate "count" variable.
No, because in general the number of waiting threads is not reliable even without notifications occurring: right after you retrieve this number, a new thread can start waiting, or a thread waiting with a timeout can stop waiting.
pthread_cond_wait allows us to wait until a condition variable gets signaled.
However, is there any chance to wait until any of two condition variables gets signaled?
The reason I'm asking is that I have the following situation: I have 42 threads and two possible predicates under which these threads may continue their work. They may continue if ANY of the two predicates is fulfilled.
The problem is that when one of these predicates is fulfilled, only one specific thread (not just any thread) may continue working. If the other is fulfilled, then ALL threads should be resumed.
So, my idea was to have one condition variable that is broadcasted whenever the second pred is fulfilled… and 42 more condition variables, each associated with one of the threads. The appropriate one of them is to be signaled whenever the first pred is fulfilled.
But this requires the threads to wake whenever ANY of a given set of cond variables is signaled… Any chance to achieve this?
No, you can only wait on a single condition variable.
However, that is fine. When a thread is woken from a pthread_cond_wait(), it must check that the predicate it is waiting for is actually true anyway - this is because waits on condition variables are allowed to be woken at any time (known as a "spurious wakeup").
So, since your threads must check on whether they are allowed to proceed when pthread_cond_wait() returns, you can just use a single condition variable paired with pthread_cond_broadcast(). The threads themselves determine (based on examining the shared state protected by the mutex) whether they are individually allowed to proceed or should continue waiting. So, something like this:
pthread_mutex_lock(&lock);
while (!this_thread_can_proceed(state) && !all_threads_can_proceed(state))
    pthread_cond_wait(&cond, &lock);
/* ...do the work, then... */
pthread_mutex_unlock(&lock);
Alternatively, you are allowed to wait on multiple condition variables using the same mutex (the converse is not allowed), so you can have every thread wait on its own condition variable, and then the waking thread either signals a single condition variable, or loops over the set signalling them all, depending on which predicate it has fulfilled.
Which one of these solutions will perform better depends on the balance between "all threads" and "one thread" wakeups in your application.
The Delphi XE2 documentation says this about TEvent:
Sometimes, you need to wait for a thread to finish some operation rather than waiting for a particular thread to complete execution. To do this, use an event object. Event objects (System.SyncObjs.TEvent) should be created with global scope so that they can act like signals that are visible to all threads.
When a thread completes an operation that other threads depend on, it calls TEvent.SetEvent. SetEvent turns on the signal, so any other thread that checks will know that the operation has completed. To turn off the signal, use the ResetEvent method.
For example, consider a situation where you must wait for several threads to complete their execution rather than a single thread. Because you don't know which thread will finish last, you can't simply use the WaitFor method of one of the threads. Instead, you can have each thread increment a counter when it is finished, and have the last thread signal that they are all done by setting an event.
The Delphi documentation does not, however, explain how another thread can detect that TEvent.SetEvent was called. Could you please explain how to check whether TEvent.SetEvent was called?
If you want to test if an event is signaled or not, call the WaitFor method and pass a timeout value of 0. If the event is set, it will return wrSignaled. If not, it will time out immediately and return wrTimeout.
Having said that, the normal usage of an event is not to check whether it's signaled in this manner, but to synchronize by blocking the current thread until the event is signaled. You do this by passing a nonzero value to the timeout parameter, either the constant INFINITE if you're certain that it will finish and you want to wait until it does, or a smaller value if you don't want to block for an indefinite amount of time.
I am implementing a conditional wait, and either a semaphore or a condition variable can be used to implement it. Is there any difference between the two, specifically from a performance point of view?
I have heard that when a thread waits on a condition variable it is not scheduled until it is signaled, which ensures that it does not consume CPU cycles. But is this not true for a semaphore? Will a semaphore consume CPU cycles even while it is waiting?
If all of your threads are waiting for some event, e.g., submission of a task, then you can wake them all up by using a condition variable upon an event.
If you have a limited resource, say 10 pages of memory reserved for your threads, then you will need them to wait until a page is available. When this happens, you will need to let just one thread start execution. In this case you can use a semaphore to unlock as many threads as there are available pages.
A semaphore has extra state - a count of units held - as well as a queue for threads waiting on it, so allowing a sema to, say, record how many times it has been signaled even if there is no thread currently waiting on it. If a thread loops around a semaphore wait() and the semaphore is signaled N times, the thread will eventually loop N times, even if the thread is sometimes busy when the sema is signaled - very useful for producer-consumer queues.
A condvar does not have this extra count state, but waiting on it atomically releases the mutex it is used with until another thread signals it - also very useful for producer-consumer queues.
Sometimes, I wish for a combination of the two - a condvar with a count - but this does not seem to be forthcoming from OS developers :(
A semaphore and a condvar are the same in that they are both synchronization primitives. Apart from that...
A condition variable and a binary semaphore both block a thread until the condition it is waiting for is signaled as true, and in that respect you can use either one; however, a condition variable must always be used together with a mutex. In both cases the thread is scheduled only when it is signaled; without the signal it cannot be scheduled. But if you need to keep track of a number of resources, use a counting semaphore.
Neither consumes CPU while the thread is blocked.
I was wondering what is the performance benefit of using condition variables over mutex locks in pthreads.
What I found is : "Without condition variables, the programmer would need to have threads continually polling (possibly in a critical section), to check if the condition is met. This can be very resource consuming since the thread would be continuously busy in this activity. A condition variable is a way to achieve the same goal without polling." (https://computing.llnl.gov/tutorials/pthreads)
But it also seems that mutex calls are blocking (unlike spin-locks). Hence if a thread (T1) fails to get a lock because some other thread (T2) has the lock, T1 is put to sleep by the OS, and is woken up only when T2 releases the lock and the OS gives T1 the lock. The thread T1 does not really poll to get the lock. From this description, it seems that there is no performance benefit of using condition variables. In either case, there is no polling involved. The OS anyway provides the benefit that the condition-variable paradigm can provide.
Can you please explain what actually happens.
A condition variable allows a thread to be signaled when something of interest to that thread occurs.
By itself, a mutex doesn't do this.
If you just need mutual exclusion, then condition variables don't do anything for you. However, if you need to know when something happens, then condition variables can help.
For example, if you have a queue of items to work on, you'll have a mutex to ensure the queue's internals are consistent when accessed by the various producer and consumer threads. However, when the queue is empty, how will a consumer thread know when something is in there for it to work on? Without something like a condition variable it would need to poll the queue, taking and releasing the mutex on each poll (otherwise a producer thread could never put something on the queue).
Using a condition variable lets the consumer find that when the queue is empty it can just wait on the condition variable indicating that the queue has had something put into it. No polling - that thread does nothing until a producer puts something in the queue, then signals the condition that the queue has a new item.
You're looking for too much overlap in two separate but related things: a mutex and a condition variable.
A common implementation approach for a mutex is to use a flag and a queue. The flag indicates whether the mutex is held by anyone (a single-count semaphore would work too), and the queue tracks which threads are in line waiting to acquire the mutex exclusively.
A condition variable is then implemented as another queue bolted onto that mutex. Threads that got in line to wait to acquire the mutex can—usually once they have acquired it—volunteer to get out of the front of the line and get into the condition queue instead. At this point, you have two separate sets of waiters:
Those waiting to acquire the mutex exclusively
Those waiting for the condition variable to be signaled
When a thread holding the mutex exclusively signals the condition variable, for which we'll assume for now that it's a singular signal (unleashing no more than one waiting thread) and not a broadcast (unleashing all the waiting threads), the first thread in the condition variable queue gets shunted back over into the front (usually) of the mutex queue. Once the thread currently holding the mutex—usually the thread that signaled the condition variable—relinquishes the mutex, the next thread in the mutex queue can acquire it. That next thread in line will have been the one that was at the head of the condition variable queue.
There are many complicated details that come into play, but this sketch should give you a feel for the structures and operations in play.
If you are looking for performance, then start reading about "non-blocking / non-locking" thread synchronization algorithms. They are based upon atomic operations, which gcc is kind enough to provide. Look up gcc atomic operations. Our tests showed we could increment a global value with multiple threads using atomic operations orders of magnitude faster than locking with a mutex. Here is some sample code that shows how to add items to and remove items from a linked list from multiple threads at the same time without locking.
For sleeping and waking threads, signals are much faster than condition variables. You use pthread_kill to send the signal, and sigwait to put the thread to sleep. We tested this too, with the same kind of performance benefits. Here is some example code.
I need to parallelize a simple password cracker, to use it on an n-processor system. My idea is to create n threads and to feed them more and more jobs as they finish.
What is the best way to know when a thread has finished? A mutex? Isn't it expensive to check this mutex constantly while the other threads are running?
You can have a simple queue structure - use any data structure you like - and then just use a mutex when you add/remove items from it.
Provided your threads grab the work they need to do in big enough "chunks", then there will be very little contention on the mutex, so very little overhead.
For example, if each thread was to grab approximately 1 second of work at a time and work independently for 1 second, then there would be very few operations on the mutex.
The threads could exit when they had no more work; the main thread could then wait using pthread_join.
Use message queues between the threads :-
Master -> Process (saying go with this).
Process -> Master (saying I'm done - give me more, or, I've found the result!)
Using this, the thread only closes down when the system does - otherwise it's either processing data or waiting on a message queue.
This way, the MCP (I've always wanted to say that!) simply processes messages and hands jobs out to threads that are waiting for more work.
This may be more efficient than creating and destroying threads all the time.
Normally you use a "condition variable" for this kind of thing where you want to wait for an asynchronous job to finish.
Condition variables are basically mutex-protected, simple signals. Pthreads has condition variables (see e.g. the pthread_cond_init() function).