Why does std::condition_variable::wait need a mutex as one of its parameters?
Answer 1
You may look at the documentation and quote that:
wait... Atomically releases lock
But that's not a real reason. It just validates my question even more: why does it need the lock in the first place?
Answer 2
The predicate most likely queries the state of a shared resource, and that state must be guarded by the lock.
OK, fair.
Two questions here:
Is it always true that the predicate queries the state of a shared resource? I assume yes; it doesn't make sense to me to implement it otherwise.
What if I do not pass any predicate (it is optional)?
Using a predicate - the lock makes sense:
std::condition_variable cv;
std::mutex cv_m;
int i = 0;

void waits()
{
    std::unique_lock<std::mutex> lk(cv_m);
    cv.wait(lk, []{ return i == 1; });
    std::cout << i;
}
Not using a predicate - why can't we lock after the wait?
int i = 0;

void waits()
{
    cv.wait(lk);
    std::unique_lock<std::mutex> lk(cv_m);
    std::cout << i;
}
Notes
I know that there are no harmful implications to this practice. I just don't know how to explain to myself why it was designed this way.
Question
If predicate is optional and is not passed to wait, why do we need the lock?
When using a condition variable to wait for a condition, a thread performs the following sequence of steps:
It determines that the condition is not currently true.
It starts waiting for some other thread to make the condition true. This is the wait call.
For example, the condition might be that a queue has elements in it, and a thread might see that the queue is empty and wait for another thread to put things in the queue.
If another thread were to intercede between these two steps, it could make the condition true and notify on the condition variable before the first thread actually starts waiting. In this case, the waiting thread would not receive the notification, and it might never stop waiting.
The purpose of requiring the lock to be held is to prevent other threads from interceding like this. Additionally, the lock must be unlocked to allow other threads to do whatever we're waiting for, but it can't happen before the wait call because of the notify-before-wait problem, and it can't happen after the wait call because we can't do anything while we're waiting. It has to be part of the wait call, so wait has to know about the lock.
Now, you might look at the notify_* methods and notice that those methods don't require the lock to be held, so there's nothing actually stopping another thread from notifying between steps 1 and 2. However, a thread calling notify_* is supposed to hold the lock while performing whatever action it does to make the condition true, which is usually enough protection.
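To make the queue example concrete, here is a minimal hedged sketch (the names m, cv and work_queue are mine, not from the question):

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;
std::condition_variable cv;
std::queue<int> work_queue;

void consumer()
{
    std::unique_lock<std::mutex> lk(m);
    while (work_queue.empty())      // step 1: the condition is not true yet
        cv.wait(lk);                // step 2: atomically unlock and wait
    int item = work_queue.front();  // the lock is held again here
    work_queue.pop();
    // ... use item ...
}

void producer(int item)
{
    {
        std::lock_guard<std::mutex> lk(m);  // hold the lock while making the condition true
        work_queue.push(item);
    }
    cv.notify_one();                // notifying without the lock is fine
}

Because the producer changes the queue only while holding the same lock the consumer holds between its check and its wait, the notify can never fall into the gap between steps 1 and 2.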
TL;DR
If predicate is optional and is not passed to wait, why do we need the lock?
condition_variable is designed to wait for a certain condition to come true, not to wait just for a notification. So to "catch" the "moment" when the condition becomes true you need to check the condition and wait for the notification. And to avoid a race condition you need those two to be a single atomic operation.
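In fact, the standard specifies the predicate overload of wait as equivalent to exactly this check-and-wait loop:

// equivalent form of cv.wait(lock, pred) per [thread.condition.condvar]
while (!pred())
    wait(lock);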
Purpose Of condition_variable:
Enable a program to implement this: do some action when a condition C holds.
Intended Protocol:
Condition producer changes state of the world from !C to C.
Condition consumer waits for C to happen and takes the action while/after condition C holds.
Simplification:
For simplicity (to limit the number of cases to think about) let's assume that C never switches back to !C. Let's also forget about spurious wakeups. Even with these assumptions we'll see that the lock is necessary.
Naive Approach:
Let's have two threads with their essential code summarized like this:
void producer() {
    _condition = true;
    _condition_variable.notify_all();
}

void consumer() {
    if (!_condition) {
        _condition_variable.wait();
    }
    action();
}
The Problem:
The problem here is a race condition. A problematic interleaving of the threads is the following:
The consumer reads the condition, sees that it is false and decides to wait.
A thread scheduler interrupts consumer and resumes producer.
The producer updates condition to become true and invokes notify_all().
The consumer is resumed.
The consumer actually does wait(), but is never notified and woken up (a liveness hazard).
So without locking the consumer may miss the event of the condition becoming true.
Solution:
Disclaimer: this code still does not handle spurious wakeups or the possibility of the condition becoming false again.
void producer() {
    {
        std::unique_lock<std::mutex> l(_mutex);
        _condition = true;
    }
    _condition_variable.notify_all();
}

void consumer() {
    {
        std::unique_lock<std::mutex> l(_mutex);
        if (!_condition) {
            _condition_variable.wait(l);
        }
    }
    action();
}
Here we check the condition, release the lock and start waiting as a single atomic operation, preventing the race condition mentioned before.
See Also
Why Lock condition await must hold the lock
You need a std::unique_lock when using std::condition_variable for the same reason you need a std::FILE* when using std::fwrite and for the same reason a BasicLockable is necessary when using std::unique_lock itself.
The feature std::fwrite gives you, the entire reason it exists, is to write to files. So you have to give it a file. The feature std::unique_lock provides you is RAII locking and unlocking of a mutex (or another BasicLockable, like std::shared_mutex, etc.), so you have to give it something to lock and unlock.
The feature std::condition_variable provides, the entire reason it exists, is atomically waiting and unlocking a lock (and re-locking when the wait completes). So you have to give it something to lock.
Why someone would want that is a separate question that has been discussed already. For example:
When is a condition variable needed, isn't a mutex enough?
Conditional Variable vs Semaphore
Advantages of using condition variables over mutex
And so on.
As has been explained, the pred parameter is optional, but having some sort of a predicate and testing it isn't. Or, in other words, not having a predicate doesn't make any sense, in a manner similar to how having a condition variable without a lock doesn't make any sense.
The reason you have a lock is because you have shared state you need to protect from simultaneous access. Some function of that shared state is the predicate.
If you don't have a predicate and you don't have a lock you really don't need a condition variable just like if you don't have a file you really don't need fwrite.
A final point is that the second code snippet you wrote is very broken. Obviously it won't compile as you define the lock after you try to pass it as an argument to condition_variable::wait(). You probably meant something like:
std::mutex mtx_cv;
std::condition_variable cv;

...

{
    std::unique_lock<std::mutex> lk(mtx_cv);
    cv.wait(lk);
    lk.lock(); // throws std::system_error with an error code of std::errc::resource_deadlock_would_occur
}
The reason this is wrong is very simple. condition_variable::wait's effects are (from [thread.condition.condvar]):
Effects:
— Atomically calls lock.unlock() and blocks on *this.
— When unblocked, calls lock.lock() (possibly blocking on the lock), then returns.
— The function will unblock when signaled by a call to notify_one() or a call to notify_all(), or spuriously.
After the return from wait() the lock is locked, and unique_lock::lock() throws an exception if it has already locked the mutex it wraps ([thread.lock.unique.locking]).
Again, why someone would want to couple waiting and locking the way std::condition_variable does is a separate question, but given that it does, you cannot, by definition, lock a std::condition_variable's std::unique_lock after std::condition_variable::wait has returned.
It's not stated in the documentation (and could be implemented differently), but conceptually you can imagine the condition variable having another mutex, both to protect its own data and to coordinate the condition, waiting and notification with modification of the consumer data (e.g. queue.size()) affecting the test.
So when you call wait(...) the following (logically) happens.
Precondition: The consumer code holds the lock (CCL) controlling the consumer condition data (CCD).
The condition is checked; if true, execution in the consumer code continues, still holding the lock.
If false, the condition variable first acquires its own lock (CVL), adds the current thread to the waiting-thread collection, releases the consumer lock, puts the thread to sleep and releases its own lock (CVL).
That final step is tricky: it needs to put the thread to sleep and release the CVL at the same time, or in that order, or at least in a way that a thread notified just before going to wait is able to (somehow) not go to wait.
The step of acquiring the CVL before releasing the CCL is key. Any parallel thread trying to update the CCD and notify will be blocked by either the CCL or the CVL. If the CCL were released before acquiring the CVL, a parallel thread could acquire the CCL, change the data and then notify before the to-be-waiting thread is added to the waiters.
A parallel thread acquires the CCL, modifies the data to make the condition true (or at least worth testing) and then notifies. Notification acquires the CVL and identifies a blocked thread (or threads), if any, to unwait. The unwaited threads then seek to acquire the CCL and may block there, but won't leave wait and re-perform the test until they've acquired it.
Notification must acquire the CVL to make sure threads that have found the test false have been added to the waiters.
It's OK (possibly preferable for performance) to notify without holding the CCL, because the hand-off between the CCL and CVL in the wait code ensures the ordering.
It may be preferable because notifying while holding the CCL may mean all the unwaited threads just unwait to block (on the CCL) while the thread modifying the data is still holding the lock.
Notice that even if the CCD is atomic, you must modify it while holding the CCL, or the lock-CVL-then-unlock-CCL step won't ensure the total ordering required to make sure notifications aren't sent while threads are in the process of going to wait.
The standard only talks about atomicity of operations, and another implementation may have a different way of blocking notification until the 'add to waiters' step has completed following a failed test. The C++ standard is careful not to dictate an implementation.
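To make that hand-off concrete, here is a hedged toy sketch in C++20 (entirely illustrative and mine, not any real implementation; real condition variables use OS primitives such as futexes). cvl_ plays the role of the CVL and the caller's unique_lock is the CCL:

#include <deque>
#include <memory>
#include <mutex>
#include <semaphore>

class ToyCondVar {
    std::mutex cvl_;                                    // the internal CVL
    std::deque<std::shared_ptr<std::binary_semaphore>> waiters_;
public:
    void wait(std::unique_lock<std::mutex>& ccl) {      // precondition: ccl is locked
        auto self = std::make_shared<std::binary_semaphore>(0);
        {
            std::lock_guard<std::mutex> g(cvl_);        // acquire the CVL first...
            waiters_.push_back(self);                   // ...register as a waiter...
            ccl.unlock();                               // ...then release the CCL
        }   // CVL released; a notifier can now see this thread in the queue
        self->acquire();                                // block until notified
        ccl.lock();                                     // re-acquire the CCL before returning
    }
    void notify_one() {
        std::shared_ptr<std::binary_semaphore> w;
        {
            std::lock_guard<std::mutex> g(cvl_);        // notification also takes the CVL
            if (!waiters_.empty()) {
                w = waiters_.front();
                waiters_.pop_front();
            }
        }
        if (w) w->release();                            // wake the chosen waiter
    }
};

Because wait() releases the CCL only while holding the CVL, a producer that takes the CCL, changes the data and then notifies is guaranteed either to find the waiter already registered or to have had its change seen by the waiter's test.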
In all that, to answer some of the specific questions.
Must the state be shared? Sort of. There could be an external condition, like a file appearing in a directory, with the wait timed to retry after a period. You can decide for yourself whether you consider the file system or even just the wall-clock to be shared state.
Must there be any state? Not necessarily. A thread can wait on notification.
That could be tricky to coordinate because there has to be enough sequencing to stop the other thread notifying out of turn. The commonest solution is to have some boolean flag set by the notifying thread so the notified thread knows if it missed it. The normal use of void wait(std::unique_lock<std::mutex>& lk) is when the predicate is checked outside:
std::unique_lock<std::mutex> ulk(ccd_mutex);
while (!condition) {
    cv.wait(ulk);
}
Where the notifying thread uses:
{
    std::lock_guard<std::mutex> guard(ccd_mutex);
    condition = true;
}
cv.notify_one();
The reason is that sometimes the waiting thread holds m_mutex:
#include <mutex>
#include <condition_variable>

void CMyClass::MyFunc()
{
    std::unique_lock<std::mutex> guard(m_mutex);
    // do something (on the protected resource)
    m_condition.wait(guard, [this]() { return !m_bSpuriousWake; });
    // do something else (on the protected resource)
    guard.unlock();
    // do something else than else
}
and a thread should never go to sleep while holding m_mutex. One doesn't want to lock everybody out while sleeping. So, atomically: {guard is unlocked and the thread goes to sleep}. Once it is woken up by the other thread (m_condition.notify_one(), let's say), guard is locked again, and then the thread continues.
Reference (video)
Because otherwise there is a race window between the waiting thread noticing the change of the shared state and the wait() call.
Assume we have a shared state of type std::atomic<int> state_; there's still a fair chance for the waiting thread to miss a notification:
   T1 (waiting)                                 | T2 (notification)
   ----------------------------------------------+---------------------------
1) for (int i = state_; i != 0; i = state_) {    |
2)                                               |   state_ = 0;
3)                                               |   cv.notify();
4)     cv.wait();                                |
5) }                                             |
6) // go on with the satisfied condition...      |
Note that the wait() call failed to notice the latest value of state_ and may keep waiting forever.
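A hedged sketch of the fix (function names are mine; the point is just that the test and the wait must happen under one mutex, so T2's notification cannot fall between them):

#include <atomic>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
std::atomic<int> state_{1};

void t1_waiting() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return state_ == 0; });  // check + wait are now one atomic step
    // go on with the satisfied condition...
}

void t2_notification() {
    {
        std::lock_guard<std::mutex> lk(m);    // change the state under the same mutex
        state_ = 0;
    }
    cv.notify_one();
}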
I've always been told to put locks around variables that multiple threads will access. I've always assumed that this was because you want to make sure that the value you are working with doesn't change before you write it back,
i.e.
mutex.lock();
int a = sharedVar;
a = someComplexOperation(a);
sharedVar = a;
mutex.unlock();
And it makes sense that you would lock that. But in other cases I don't understand why I can't get away with not using mutexes.
Thread A:
sharedVar = someFunction()
Thread B:
localVar = sharedVar
What could possibly go wrong in this instance? Especially if I don't care that Thread B reads any particular value that Thread A assigns.
It depends a lot on the type of sharedVar, the language you're using, any framework, and the platform. In many cases, it's possible that assigning a single value to sharedVar may take more than one instruction, in which case you may read a "half-set" copy of the value.
Even when that's not the case, and the assignment is atomic, you may not see the latest value without a memory barrier in place.
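As a hedged illustration in C++ (assuming sharedVar is an int), std::atomic addresses both problems at once, ruling out torn reads and providing the needed visibility guarantees:

#include <atomic>

std::atomic<int> sharedVar{0};

void threadA() {
    sharedVar.store(42, std::memory_order_release);           // publish the new value
}

void threadB() {
    int localVar = sharedVar.load(std::memory_order_acquire); // guaranteed not torn
    (void)localVar;
}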
MSDN Magazine has a good explanation of different problems you may encounter in multithreaded code:
Forgotten Synchronization
Incorrect Granularity
Read and Write Tearing
Lock-Free Reordering
Lock Convoys
Two-Step Dance
Priority Inversion
The code in your question is particularly vulnerable to Read/Write Tearing. But your code, having neither locks nor memory barriers, is also subject to Lock-Free Reordering (which may include speculative writes in which thread B reads a value that thread A never stored) in which side-effects become visible to a second thread in a different order from how they appeared in your source code.
It goes on to describe some known design patterns which avoid these problems:
Immutability
Purity
Isolation
The article is available here
The main problem is that the assignment operator (operator= in C++) is not always guaranteed to be atomic (not even for primitive, built in types). In plain English, that means that assignment can take more than a single clock cycle to complete. If, in the middle of that, the thread gets interrupted, then the current value of the variable might be corrupted.
Let me build off of your example:
Let's say sharedVar is some object with operator= defined as this:
object& operator=(const object& other) {
    ready = false;
    doStuff(other);
    if (other.value == true) {
        value = true;
        doOtherStuff();
    } else {
        value = false;
    }
    ready = true;
    return *this;
}
If thread A from your example is interrupted in the middle of this function, ready will still be false when thread B starts to run. This could mean that the object is only partially copied over, or is in some intermediate, invalid state when thread B attempts to copy it into a local variable.
For a particularly nasty example of this, think of a data structure whose removed node has been deleted, with the thread interrupted before the pointer to it could be set to NULL.
(For some more information regarding structures that don't need a lock (aka, are atomic), here is another question that talks a bit more about that.)
This could go wrong, because threads can be suspended and resumed by the thread scheduler, so you can't be sure about the order in which these instructions are executed. It might just as well be in this order:
Thread B:
localVar = sharedVar
Thread A:
sharedVar = someFunction()
In which case localVar will be null or 0 (or some completely unexpected value in an unsafe language), probably not what you intended.
Mutexes actually won't fix this particular issue, by the way. The example you supply does not lend itself well to parallelization.
I'm sure a mutex isn't enough; that's the reason the concept of condition variables exists. But it beats me, and I'm not able to convince myself with a concrete scenario in which a condition variable is essential.
Differences between Conditional variables, Mutexes and Locks question's accepted answer says that a condition variable is a
lock with a "signaling" mechanism. It is used when threads need to
wait for a resource to become available. A thread can "wait" on a CV
and then the resource producer can "signal" the variable, in which
case the threads who wait for the CV get notified and can continue
execution
Where I get confused is that a thread can wait on a mutex too, and when it gets signalled, it simply means the variable is now available. So why would I need a condition variable?
P.S.: Also, a mutex is required to guard the condition variable anyway, which makes my vision even more skewed towards not seeing the condition variable's purpose.
Even though you can use them in the way you describe, mutexes weren't designed for use as a notification/synchronization mechanism. They are meant to provide mutually exclusive access to a shared resource. Using mutexes to signal a condition is awkward and I suppose would look something like this (where Thread1 is signaled by Thread2):
Thread1:

while (1) {
    lock(mutex);   // Blocks waiting for notification from Thread2
    ...            // do work after notification is received
    unlock(mutex); // Tells Thread2 we are done
}

Thread2:

while (1) {
    ...            // do the work that precedes notification
    unlock(mutex); // unblocks Thread1
    lock(mutex);   // lock the mutex so Thread1 will block again
}
There are several problems with this:
Thread2 cannot continue to "do the work that precedes notification" until Thread1 has finished with "work after notification". With this design, Thread2 is not even necessary, that is, why not move "work that precedes" and "work after notification" into the same thread since only one can run at a given time!
If Thread2 is not able to preempt Thread1, Thread1 will immediately re-lock the mutex when it repeats the while(1) loop and Thread1 will go about doing the "work after notification" even though there was no notification. This means you must somehow guarantee that Thread2 will lock the mutex before Thread1 does. How do you do that? Maybe force a schedule event by sleeping or by some other OS-specific means but even this is not guaranteed to work depending on timing, your OS, and the scheduling algorithm.
These two problems aren't minor, in fact, they are both major design flaws and latent bugs. The origin of both of these problems is the requirement that a mutex is locked and unlocked within the same thread. So how do you avoid the above problems? Use condition variables!
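For contrast, here is a hedged C++ sketch of the same hand-off done with a condition variable (the flag work_ready and the function names are mine, not from the answer):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool work_ready = false;

void thread1() {                            // the notified thread
    while (true) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{ return work_ready; });  // releases m while blocked
        work_ready = false;
        // ... do work after notification is received ...
    }
}

void thread2() {                            // the notifying thread
    while (true) {
        // ... do the work that precedes notification ...
        {
            std::lock_guard<std::mutex> lk(m);
            work_ready = true;
        }
        cv.notify_one();                    // wakes Thread1 without the handshake problems above
    }
}

Notice that neither thread relies on unlocking a mutex it doesn't own, and neither depends on scheduling order: the flag records whether the notification happened.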
BTW, if your synchronization needs are really simple, you could use a plain old semaphore which avoids the additional complexity of condition variables.
Mutex is for exclusive access to shared resources, while a conditional variable is about waiting for a condition to become true. They are two different types of kernel resource. Some people might think they can implement a conditional variable by themselves with a mutex; a common pattern is "flag + mutex":
lock(mutex);
while (!flag) {
    sleep(100);
}
unlock(mutex);
do_something_on_flag_set();
but it doesn't work: because you never release the mutex during the wait, no one else can set the flag in a thread-safe way. This is why we need kernel support for conditional variables, so that while you're waiting on a condition variable, the associated mutex is not held by your thread until it's signaled.
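A hedged sketch of that same pattern fixed with std::condition_variable (assuming flag is guarded by mutex, and do_something_on_flag_set is the function from the snippet above): wait() releases the mutex while blocked, so the flag can be set safely:

#include <condition_variable>
#include <mutex>

std::mutex mutex;
std::condition_variable cv;
bool flag = false;

void do_something_on_flag_set();       // defined elsewhere

void waiter() {
    std::unique_lock<std::mutex> lk(mutex);
    cv.wait(lk, []{ return flag; });   // mutex released while waiting, re-acquired on wakeup
    lk.unlock();
    do_something_on_flag_set();
}

void setter() {
    {
        std::lock_guard<std::mutex> lk(mutex);
        flag = true;                   // safe: the waiter is not holding the mutex now
    }
    cv.notify_one();
}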
I was thinking about this too, and the most important piece of information which I think was missing everywhere is that a mutex can be owned (or changed) by only one thread at a time. So if you have one producer and several consumers, the producer would have to wait on the mutex to produce. With a condition variable it can produce at any time.
You need condition variables, used together with a mutex (each condition variable belongs to a mutex), to signal changing states (conditions) from one thread to another. The idea is that a thread can wait till some condition becomes true. Such conditions are program-specific (i.e. "queue is empty", "matrix is big", "some resource is almost exhausted", "some computation step has finished" etc.). A mutex might have several related condition variables. And you need condition variables because such conditions may not always be expressed as simply as "a mutex is locked" (so you need to broadcast changes in conditions to other threads).
Read some good posix thread tutorials, e.g. this tutorial or that or that one. Better yet, read a good pthread book. See this question.
Also read Advanced Unix Programming and Advanced Linux Programming
P.S. Parallelism and threads are difficult concepts to grasp. Take time to read and experiment and read again.
The conditional var and the mutex pair can be replaced by a binary semaphore and mutex pair. The sequence of operations of a consumer thread when using the conditional var + mutex is:
1. Lock the mutex
2. Wait on the conditional var
3. Process
4. Unlock the mutex

The producer thread's sequence of operations is:

1. Lock the mutex
2. Signal the conditional var
3. Unlock the mutex

The corresponding consumer thread sequence when using the sema+mutex pair is:

1. Wait on the binary sema
2. Lock the mutex
3. Check for the expected condition
4. If the condition is true, process.
5. Unlock the mutex
6. If the condition check in step 3 was false, go back to step 1.

The sequence for the producer thread is:

1. Lock the mutex
2. Post the binary sema
3. Unlock the mutex
As you can see, the unconditional processing in step 3 when using the conditional var is replaced by the conditional processing in steps 3 and 4 when using the binary sema.
The reason is that when using sema+mutex, in a race condition, another consumer thread may sneak in between steps 1 and 2 and process/nullify the condition. This won't happen when using the conditional var: there, the condition is guaranteed to be true after step 2.
The binary semaphore can be replaced with a regular counting semaphore. This may result in the step 6 to step 1 loop executing a few more times.
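As a hedged C++20 sketch of the sema+mutex consumer described above (the names item_available and queue are mine, not from the answer):

#include <deque>
#include <mutex>
#include <semaphore>

std::counting_semaphore<> item_available{0};   // or std::binary_semaphore
std::mutex m;
std::deque<int> queue;

void consumer() {
    for (;;) {
        item_available.acquire();              // step 1: wait on the sema
        std::unique_lock<std::mutex> lk(m);    // step 2: lock the mutex
        if (!queue.empty()) {                  // step 3: check the condition
            int v = queue.front();             // step 4: condition true, process
            queue.pop_front();
            lk.unlock();                       // step 5: unlock the mutex
            // ... use v ...
        }
        // step 6: condition was false, loop back to step 1
    }
}

void producer(int v) {
    std::lock_guard<std::mutex> lk(m);         // lock the mutex
    queue.push_back(v);
    item_available.release();                  // post the sema; mutex unlocks at scope exit
}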
Slowjelj is right, but to shed some light on the problem, look at the Python code below. We have a buffer, a producer, and a consumer. Think about whether you could rewrite it with just mutexes.
import threading, time, random

cv = threading.Condition()
buffer = []
MAX = 3

def put(value):
    cv.acquire()
    while len(buffer) == MAX:
        cv.wait()
    buffer.append(value)
    print("added value ", value, "length =", len(buffer))
    cv.notify()
    cv.release()

def get():
    cv.acquire()
    while len(buffer) == 0:
        cv.wait()
    value = buffer.pop()
    print("removed value ", value, "length =", len(buffer))
    cv.notify()
    cv.release()

def producer():
    while True:
        put(0)  # it doesn't matter what the value is in our example
        time.sleep(random.random() / 10)

def consumer():
    while True:
        get()
        time.sleep(random.random() / 10)

if __name__ == '__main__':
    cs = threading.Thread(target=consumer)
    pd = threading.Thread(target=producer)
    cs.start()
    pd.start()
    cs.join()
    pd.join()
I think it is implementation-defined.
Whether the mutex is enough depends on whether you regard the mutex as a mechanism for critical sections or as something more.
As mentioned in http://en.cppreference.com/w/cpp/thread/mutex/unlock,
The mutex must be locked by the current thread of execution, otherwise, the behavior is undefined.
which means that in C++ a thread may only unlock a mutex that it locked/owns itself.
But in other programming languages, you might be able to share a mutex between processes.
So distinguishing the two concepts may just come down to performance considerations; complex ownership identification or inter-process sharing is not worthwhile for simple applications.
For example, you may fix #slowjelj's case with an additional mutex (it might be an incorrect fix):
Thread1:

lock(mutex0);
while (1) {
    lock(mutex0);   // Blocks waiting for notification from Thread2
    ...             // do work after notification is received
    unlock(mutex1); // Tells Thread2 we are done
}

Thread2:

while (1) {
    lock(mutex1);   // lock the mutex so Thread1 will block again
    ...             // do the work that precedes notification
    unlock(mutex0); // unblocks Thread1
}
But your program will complain that you have triggered an assertion left by the runtime (e.g. "unlock of unowned mutex" in Visual Studio 2015).
Let's say I'm programming in a threading framework that does not have multiple-reader/single-writer mutexes. Can I implement their functionality with the following:
Create two mutexes: a recursive (lock counting) one for readers and a binary one for the writer.
Write:
acquire lock on binary mutex
wait until recursive mutex has lock count zero
actual write
release lock on binary mutex
Read:
acquire lock on binary mutex (so I know the writer is not active)
increment count of recursive mutex
release lock on binary mutex
actual read
decrement count of recursive mutex
This is not homework. I have no formal training in concurrent programming, and am trying to grasp the issues. If someone can point out a flaw, spell out the invariants or provide a better algorithm, I'd be very pleased. A good reference, either online or on dead trees, would also be appreciated.
The following is taken directly from The Art of Multiprocessor Programming, which is a good book to learn about this stuff. There are actually two implementations presented: a simple version and a fair version. I'll go ahead and reproduce the fair version.
One of the requirements for this implementation is that you have a condition variable primitive. I'll try to figure out a way to remove it but that might take me a little while. Until then, this should still be better than nothing. Note that it's also possible to implement this primitive using only locks.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FifoReadWriteLock {
    int readAcquires = 0, readReleases = 0;
    boolean writer = false;
    ReentrantLock lock = new ReentrantLock();  // must be initialized before creating the condition
    Condition condition = lock.newCondition(); // This is the condition variable.

    void readLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)
                condition.await();
            readAcquires++;
        } finally {
            lock.unlock();
        }
    }

    void readUnlock() {
        lock.lock();
        try {
            readReleases++;
            if (readAcquires == readReleases)
                condition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    void writeLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)
                condition.await();
            writer = true;
            while (readAcquires != readReleases)
                condition.await();
        } finally {
            lock.unlock();
        }
    }

    void writeUnlock() {
        lock.lock();  // signalAll() requires holding the lock in Java
        try {
            writer = false;
            condition.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
First off, I simplified the code a little but the algorithm remains the same. There also happens to be an error in the book for this algorithm which is corrected in the errata. If you plan on reading the book, keep the errata close by or you'll end up being very confused (like me a few minutes ago when I was trying to re-understand the algorithm). Note that on the bright side, this is a good thing since it keeps you on your toes and that's a requirement when you're dealing with concurrency.
Next, while this may be a Java implementation, only use it as pseudo code. When doing the actual implementation you'll have to be careful about the memory model of the language or you'll definitely end up with a headache. As an example, I think that the readAcquires, readReleases and writer variables all have to be declared as volatile in Java or the compiler is free to optimize them out of the loops. This is because in a strictly sequential program there's no point in continuously looping on a variable that is never changed inside the loop. Note that my Java is a little rusty so I might be wrong. There's also another issue with integer overflow of the readReleases and readAcquires variables which is ignored in the algorithm.
One last note before I explain the algorithm. The condition variable is initialized using the lock. That means that when a thread calls condition.await(), it gives up its ownership of the lock. Once it's woken up by a call to condition.signalAll() the thread will resume once it has reacquired the lock.
Finally, here's how and why it works. The readReleases and readAcquires variables keep track of the number of threads that have acquired and released the read lock. When these are equal, no thread has the read lock. The writer variable indicates that a thread is trying to acquire the write lock or already has it.
The read lock part of the algorithm is fairly simple. When trying to lock, it first checks to see if a writer is holding the lock or is trying to acquire it. If so, it waits until the writer is done and then claims the lock for the readers by incrementing the readAcquires variable. When unlocking, a thread increases the readReleases variable and if there's no more readers, it notifies any writers that may be waiting.
The write lock part of the algorithm isn't much more complicated. To lock, a thread must first check whether any other writer is active. If they are, it has to wait until the other writer is done. It then indicates that it wants the lock by setting writer to true (note that it doesn't hold it yet). It then waits until there's no more readers before continuing. To unlock, it simply sets the variable writer to false and notifies any other threads that might be waiting.
This algorithm is fair because the readers can't block a writer indefinitely. Once a writer indicates that it wants to acquire the lock, no more readers can acquire the lock. After that the writer simply waits for the last remaining readers to finish up before continuing. Note that there's still the possibility of a writer indefinitely blocking another writer. That's a fairly rare case but the algorithm could be improved to take that into account.
So I re-read your question and realised that I partly (badly) answered it with the algorithm presented above. So here's my second attempt.
The algorithm you described is fairly similar to the simple version presented in the book I mentioned. The only problems are that A) it's not fair and B) I'm not sure how you would implement "wait until recursive mutex has lock count zero". For A), see above; for B), the book uses a single int to keep track of the readers and a condition variable to do the signalling.
You may want to prevent write starvation; to accomplish this you can either give preference to writes or make the mutex fair.
Java's ReadWriteLock interface documentation says writer preference is common, while the ReentrantReadWriteLock class documentation says:
This class does not impose a reader or writer preference ordering for lock access. However, it does support an optional fairness policy.
Note R..'s comment
Rather than locking and unlocking the binary mutex for reading, you
can just check the binary mutex state after incrementing the count on
the recursive mutex, and wait (spin/yield/futex_wait/whatever) if it's
locked until it becomes unlocked
Recommended reading:
Programming with POSIX Threads
Perl's RWLock
Java's ReadWriteLock documentation.
I have this POSIX thread:
void subthread(void)
{
while(!quit_thread) {
// do something
...
// don't waste cpu cycles
if(!quit_thread) usleep(500);
}
// free resources
...
// tell main thread we're done
quit_thread = FALSE;
}
Now I want to terminate subthread() from my main thread. I've tried the following:
quit_thread = TRUE;
// wait until subthread() has cleaned its resources
while(quit_thread);
But it does not work! The while() loop never exits, although my subthread clearly sets quit_thread to FALSE after having freed its resources!
If I modify my shutdown code like this:
quit_thread = TRUE;
// wait until subthread() has cleaned its resources
while(quit_thread) usleep(10);
Then everything is working fine! Could someone explain to me why the first solution does not work and why the version with usleep(10) suddenly works? I know that this is not a pretty solution. I could use semaphores/signals for this but I'd like to learn something about multithreading, so I'd like to know why my first solution doesn't work.
Thanks!
Without a memory fence, there is no guarantee that values written in one thread will appear in another. Most of the pthread primitives introduce a barrier, as do several system calls such as usleep. Using a mutex around both the read and write introduces a barrier, and more generally prevents multi-byte values being visible in partially written state.
You also need to separate the idea of asking a thread to stop executing, and reporting that it has stopped, and appear to be using the same variable for both.
What's most likely to be happening is that your compiler is not aware that quit_thread can be changed by another thread (because C doesn't know about threads, at least at the time this question was asked). Because of that, it's optimising the while loop to an infinite loop.
In other words, it looks at this code:
quit_thread = TRUE;
while(quit_thread);
and thinks to itself, "Hah, nothing in that loop can ever change quit_thread to FALSE, so the coder obviously just meant to write while (TRUE);".
When you add the call to usleep, the compiler has another think about it and assumes that the function call may change the global, so it plays it safe and doesn't optimise it.
Normally you would mark the variable as volatile to stop the compiler from optimising it but, in this case, you should use the facilities provided by pthreads and join to the thread after setting the flag to true (and don't have the sub-thread reset it, do that in the main thread after the join if it's necessary). The reason for that is that a join is likely to be more efficient than a continuous loop waiting for a variable change since the thread doing the join will most likely not be executed until the join needs to be done.
In your spinning solution, the joining thread will most likely continue to run and suck up CPU grunt.
In other words, do something like:
Main thread                     Child thread
-------------------             -------------------
fStop = false
start Child                     Initialise
Do some other stuff             while not fStop:
fStop = true                        Do what you have to do
                                Finish up and exit
join to Child
Do yet more stuff
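A hedged C++ rendering of that diagram (std::atomic<bool> standing in for the flag; the names are mine):

#include <atomic>
#include <thread>

std::atomic<bool> fStop{false};

void child() {
    // Initialise
    while (!fStop.load()) {
        // do what you have to do
    }
    // finish up and exit
}

int main() {
    std::thread t(child);      // start Child
    // do some other stuff
    fStop.store(true);
    t.join();                  // no spinning: main sleeps until Child exits
    // do yet more stuff
}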
And, as an aside, you should technically protect shared variables with mutexes but this is one of the few cases where it's okay, one-way communication where half-changed values of a variable don't matter (false/not-false).
The reason you normally mutex-protect a variable is to stop one thread seeing it in a half-changed state. Let's say you have a two-byte integer for a count of some objects, and it's set to 0x00ff (255).
Let's further say that thread A tries to increment that count but it's not an atomic operation. It changes the top byte to 0x01 but, before it gets a chance to change the bottom byte to 0x00, thread B swoops in and reads it as 0x01ff.
Now that's not going to be very good if thread B wants to do something with the last element counted by that value. It should be looking at 0x0100 but will instead try to look at 0x01ff, the effect of which will be wrong, if not catastrophic.
If the count variable were protected by a mutex, thread B wouldn't be looking at it until thread A had finished updating it, hence no problem would occur.
The reason that doesn't matter with one-way booleans is because any half state will also be considered as true or false so, if thread A was halfway between turning 0x0000 into 0x0001 (just the top byte), thread B would still see that as 0x0000 (false) and keep going (until thread A finishes its update next time around).
And if thread A was turning the boolean into 0xffff, the half state of 0xff00 would still be considered true by thread B so it would do its thing before thread A had finished updating the boolean.
Neither of those two possibilities is bad simply because, in both, thread A is in the process of changing the boolean and it will finish eventually. Whether thread B detects it a tiny bit earlier or a tiny bit later doesn't really matter.
The while(quit_thread); is using the value quit_thread was set to on the line before it. Calling a function (usleep) induces the compiler to reload the value on each test.
In any case, this is the wrong way to wait for a thread to complete. Use pthread_join instead.
You're "learning" multhithreading the wrong way. The right way is to learn to use mutexes and condition variables; any other solution will fail under some circumstances.