In some cases we need to be sure that a section of code won't be entered by one thread while another thread is inside it. In languages that support multi-threading this is easy to achieve by "locking" those sections. But when we have to "simulate" threads there is no built-in construct like the lock keyword in C# or the Lock interface in Java. The best way I've found to lock sections in these cases looks like this:
if (!locked) {
    locked = true;
    // do some stuff
    locked = false;
} else {
    // add to queue
}
What are the drawbacks of this solution? Is it worth actively using?
Two threads may both enter the if statement.
Explanation
Suppose T1 enters the if statement but is preempted before it changes the locked value. If the CPU then switches from T1 to T2, locked is still false, so T2 enters the if statement as well.
T1 -> !locked -> Stopped -> Waiting -> ... // locked is still false, so T2
T2 -> Waiting -> Woken   -> !locked -> ... // will be able to enter the if
You should take a look at semaphores and mutexes, and if you only have two processes you can use Peterson's algorithm, but it still has a busy wait to deal with!
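The underlying flaw is that the check (!locked) and the set (locked = true) are two separate steps. If the environment offers an atomic test-and-set or compare-and-swap, the gap closes. A minimal Java sketch, assuming real threads and java.util.concurrent are available (SpinLock and doGuarded are illustrative names, not a library API):

import java.util.concurrent.atomic.AtomicBoolean;

class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void doGuarded(Runnable work) {
        // compareAndSet(false, true) checks and sets in one indivisible step,
        // so two threads can never both see "unlocked" and proceed together.
        while (!locked.compareAndSet(false, true)) {
            Thread.yield(); // still a busy wait; a real queue could go here
        }
        try {
            work.run();
        } finally {
            locked.set(false);
        }
    }
}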
Take a look at double-checked locking and the synchronization primitives provided by the underlying system or OS: mutex, semaphore, monitor, etc. You may be surprised that even C++ has no lock keyword, yet everything is fine. C#'s lock is implemented using a monitor, and you can find more details here.
Related
When using monitors for most concurrency problems, you can just put the critical section inside a monitor method and then invoke the method. However, there are some multiplexing problems wherein up to n threads can run their critical sections simultaneously. So we can say that it's useful to know how to use a monitor like the following:
monitor.enter();
runCriticalSection();
monitor.exit();
What can we use inside the monitors so we can go about doing this?
Side question: Are there standard resources tackling this? Most of what I read involve only putting the critical section inside the monitor. For semaphores there is "The Little Book of Semaphores".
As far as I understand your question, any solution must satisfy this:
When fewer than n threads are in the critical section, a thread calling monitor.enter() should not block, i.e. the only thing preventing it from progressing should be the whims of the scheduler.
At most n threads are in the critical section at any point in time; implying that
When thread n+1 calls monitor.enter(), it must block until a thread calls monitor.exit().
As far as I can tell, your requirements are equivalent to this:
The "monitor" is a semaphore with an initial value of n.
monitor.enter() is semaphore.prolaag() (aka P, decrement or wait)
monitor.exit() is semaphore.verhoog() (aka V, increment or signal)
So here it is, a semaphore implemented from a monitor:
monitor Semaphore(n):
    int capacity = n

    method enter:
        while capacity == 0: wait()
        capacity -= 1

    method exit:
        capacity += 1
        signal()
Use it like this:
shared state:
    monitor = Semaphore(n)

each thread:
    monitor.enter()
    runCriticalSection()
    monitor.exit()
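If the target language were Java, that pseudocode maps almost one-to-one onto a synchronized class with wait()/notifyAll(). A minimal sketch (named MonitorSemaphore here to avoid confusion with java.util.concurrent.Semaphore):

// A counting semaphore built from a Java monitor (synchronized + wait/notify).
class MonitorSemaphore {
    private int capacity;

    MonitorSemaphore(int n) { capacity = n; }

    synchronized void enter() throws InterruptedException {
        while (capacity == 0)
            wait();      // give up the monitor until someone exits
        capacity--;
    }

    synchronized void exit() {
        capacity++;
        notifyAll();     // wake waiters so one can re-check the guard
    }
}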
The other path
I guess that you might want some kind of syntactic wrapper, let's call it Multimonitor, so you can write something like this:
Multimonitor(n):
    method critical_section_a:
        <statements>
    method critical_section_b:
        <statements>
And your run-time environment would ensure that at most n threads are active inside any of the monitor methods (in your case you just wanted one method). I know of no such feature in any programming language or runtime environment.
Perhaps in Python you can create a Multimonitor class containing all the book-keeping variables, then subclass from it and put decorators on all the methods; a metaclass-involving solution might be able to do the decorating for the user.
The third option
If you implement monitors using semaphores, you're often using a semaphore as a mutex around monitor entry and resume points. I think you could initialize such a semaphore with a value larger than one and thereby produce such a Multimonitor, complete with wait() and signal() on condition variables. But: it would do more than what you need in your stated question, and if you use semaphores, why not just use them in the basic and straightforward way?
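For what it's worth, the basic and straightforward way in Java is just java.util.concurrent.Semaphore. A sketch of the Multimonitor idea on top of it (n = 3 is an arbitrary choice for illustration, and the names are made up):

import java.util.concurrent.Semaphore;

class Multimonitor {
    private final Semaphore slots = new Semaphore(3); // at most n threads inside

    void criticalSectionA() throws InterruptedException {
        slots.acquire();            // blocks once n threads are already inside
        try {
            // <statements>
        } finally {
            slots.release();
        }
    }
}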
I've always been told to put locks around variables that multiple threads will access. I've always assumed that this was because you want to make sure that the value you are working with doesn't change before you write it back,
i.e.
mutex.lock()
int a = sharedVar
a = someComplexOperation(a)
sharedVar = a
mutex.unlock()
And that makes sense that you would lock that. But in other cases I don't understand why I can't get away with not using Mutexes.
Thread A:
sharedVar = someFunction()
Thread B:
localVar = sharedVar
What could possibly go wrong in this instance? Especially if I don't care that Thread B reads any particular value that Thread A assigns.
It depends a lot on the type of sharedVar, the language you're using, any framework, and the platform. In many cases, it's possible that assigning a single value to sharedVar may take more than one instruction, in which case you may read a "half-set" copy of the value.
Even when that's not the case, and the assignment is atomic, you may not see the latest value without a memory barrier in place.
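To make that concrete, here is a minimal Java sketch. In Java, a non-volatile long or double write is explicitly allowed to be split into two 32-bit writes, and plain fields carry no visibility guarantee; volatile addresses both:

class SharedValue {
    // A plain "long value;" could be read half-set on some JVMs/platforms,
    // and its updates might never become visible to other threads.
    // volatile makes the write atomic and acts as the needed memory barrier.
    private volatile long value;

    void set(long v) { value = v; }      // thread A
    long get()       { return value; }   // thread B sees a whole, recent value
}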
MSDN Magazine has a good explanation of different problems you may encounter in multithreaded code:
Forgotten Synchronization
Incorrect Granularity
Read and Write Tearing
Lock-Free Reordering
Lock Convoys
Two-Step Dance
Priority Inversion
The code in your question is particularly vulnerable to Read/Write Tearing. But your code, having neither locks nor memory barriers, is also subject to Lock-Free Reordering (which may include speculative writes in which thread B reads a value that thread A never stored) in which side-effects become visible to a second thread in a different order from how they appeared in your source code.
It goes on to describe some known design patterns which avoid these problems:
Immutability
Purity
Isolation
The article is available here
The main problem is that the assignment operator (operator= in C++) is not always guaranteed to be atomic (not even for primitive, built in types). In plain English, that means that assignment can take more than a single clock cycle to complete. If, in the middle of that, the thread gets interrupted, then the current value of the variable might be corrupted.
Let me build off of your example:
Lets say sharedVar is some object with operator= defined as this:
object& operator=(const object& other) {
    ready = false;
    doStuff(other);
    if (other.value == true) {
        value = true;
        doOtherStuff();
    } else {
        value = false;
    }
    ready = true;
    return *this;
}
If thread A from your example is interrupted in the middle of this function, ready will still be false when thread B starts to run. This could mean that the object is only partially copied over, or is in some intermediate, invalid state when thread B attempts to copy it into a local variable.
For a particularly nasty example of this, think of a data structure with a removed node being deleted, then interrupted before it could be set to NULL.
(For some more information regarding structures that don't need a lock (aka, are atomic), here is another question that talks a bit more about that.)
This could go wrong, because threads can be suspended and resumed by the thread scheduler, so you can't be sure about the order in which these instructions are executed. It might just as well be in this order:
Thread B:
localVar = sharedVar
Thread A:
sharedVar = someFunction()
In which case localVar will be null or 0 (or some completely unexpected value in an unsafe language), probably not what you intended.
Mutexes actually won't fix this particular issue by the way. The example you supply does not lend itself well for parallelization.
Let's say I'm programming in a threading framework that does not have multiple-reader/single-writer mutexes. Can I implement their functionality with the following:
Create two mutexes: a recursive (lock counting) one for readers and a binary one for the writer.
Write:
acquire lock on binary mutex
wait until recursive mutex has lock count zero
actual write
release lock on binary mutex
Read:
acquire lock on binary mutex (so I know the writer is not active)
increment count of recursive mutex
release lock on binary mutex
actual read
decrement count of recursive mutex
This is not homework. I have no formal training in concurrent programming, and am trying to grasp the issues. If someone can point out a flaw, spell out the invariants or provide a better algorithm, I'd be very pleased. A good reference, either online or on dead trees, would also be appreciated.
The following is taken directly from The Art of Multiprocessor Programming, which is a good book to learn about this stuff. There are actually two implementations presented: a simple version and a fair version. I'll go ahead and reproduce the fair version.
One of the requirements for this implementation is that you have a condition variable primitive. I'll try to figure out a way to remove it but that might take me a little while. Until then, this should still be better than nothing. Note that it's also possible to implement this primitive using only locks.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FifoReadWriteLock {
    int readAcquires = 0, readReleases = 0;
    boolean writer = false;
    ReentrantLock lock = new ReentrantLock();
    Condition condition = lock.newCondition(); // This is the condition variable.

    void readLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)
                condition.await();  // a writer holds or wants the lock
            readAcquires++;
        }
        finally {
            lock.unlock();
        }
    }

    void readUnlock() {
        lock.lock();
        try {
            readReleases++;
            if (readAcquires == readReleases)
                condition.signalAll(); // last reader out wakes waiting writers
        }
        finally {
            lock.unlock();
        }
    }

    void writeLock() throws InterruptedException {
        lock.lock();
        try {
            while (writer)
                condition.await();  // wait for any other writer to finish
            writer = true;          // claim the lock; no new readers after this
            while (readAcquires != readReleases)
                condition.await();  // drain the readers already inside
        }
        finally {
            lock.unlock();
        }
    }

    void writeUnlock() {
        lock.lock();                // signalAll requires holding the lock
        try {
            writer = false;
            condition.signalAll();
        }
        finally {
            lock.unlock();
        }
    }
}
First off, I simplified the code a little but the algorithm remains the same. There also happens to be an error in the book for this algorithm which is corrected in the errata. If you plan on reading the book, keep the errata close by or you'll end up being very confused (like me a few minutes ago when I was trying to re-understand the algorithm). Note that on the bright side, this is a good thing since it keeps you on your toes and that's a requirement when you're dealing with concurrency.
Next, while this may be a Java implementation, only use it as pseudo code. When doing the actual implementation you'll have to be careful about the memory model of the language or you'll definitely end up with a headache. As an example, I think that the readAcquires, readReleases and writer variables all have to be declared as volatile in Java or the compiler is free to optimize them out of the loops. This is because in a strictly sequential program there's no point in continuously looping on a variable that is never changed inside the loop. Note that my Java is a little rusty so I might be wrong. There's also another issue with integer overflow of the readReleases and readAcquires variables, which is ignored in the algorithm.
One last note before I explain the algorithm. The condition variable is initialized using the lock. That means that when a thread calls condition.await(), it gives up its ownership of the lock. Once it's woken up by a call to condition.signalAll() the thread will resume once it has reacquired the lock.
Finally, here's how and why it works. The readReleases and readAcquires variables keep track of the number of threads that have acquired and released the read lock. When these are equal, no thread has the read lock. The writer variable indicates that a thread is trying to acquire the write lock or already has it.
The read lock part of the algorithm is fairly simple. When trying to lock, it first checks to see if a writer is holding the lock or is trying to acquire it. If so, it waits until the writer is done and then claims the lock for the readers by incrementing the readAcquires variable. When unlocking, a thread increases the readReleases variable and if there's no more readers, it notifies any writers that may be waiting.
The write lock part of the algorithm isn't much more complicated. To lock, a thread must first check whether any other writer is active. If they are, it has to wait until the other writer is done. It then indicates that it wants the lock by setting writer to true (note that it doesn't hold it yet). It then waits until there's no more readers before continuing. To unlock, it simply sets the variable writer to false and notifies any other threads that might be waiting.
This algorithm is fair because the readers can't block a writer indefinitely. Once a writer indicates that it wants to acquire the lock, no more readers can acquire the lock. After that the writer simply waits for the last remaining readers to finish up before continuing. Note that there's still the possibility of a writer indefinitely blocking another writer. That's a fairly rare case but the algorithm could be improved to take that into account.
So I re-read your question and realised that I partly (badly) answered it with the algorithm presented above. So here's my second attempt.
The algorithm you described is fairly similar to the simple version presented in the book I mentioned. The only problem is that A) it's not fair and B) I'm not sure how you would implement "wait until recursive mutex has lock count zero". For A), see above, and for B), the book uses a single int to keep track of the readers and a condition variable to do the signalling.
You may want to prevent writer starvation; to accomplish this, you can either give preference to writers or make the mutex fair.
Java's ReadWriteLock interface documentation says writer preference is common, and the ReentrantReadWriteLock class documentation says:
This class does not impose a reader or writer preference ordering for lock access. However, it does support an optional fairness policy.
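With ReentrantReadWriteLock, for instance, the fairness policy is just a constructor flag. A short sketch (the class and method names are illustrative):

import java.util.concurrent.locks.ReentrantReadWriteLock;

class FairStore {
    // true = fair: a waiting writer is not starved by a steady stream of readers
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock(true);

    void read() {
        rw.readLock().lock();
        try { /* actual read */ } finally { rw.readLock().unlock(); }
    }

    void write() {
        rw.writeLock().lock();
        try { /* actual write */ } finally { rw.writeLock().unlock(); }
    }
}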
Note R..'s comment:
Rather than locking and unlocking the binary mutex for reading, you can just check the binary mutex state after incrementing the count on the recursive mutex, and wait (spin/yield/futex_wait/whatever) if it's locked until it becomes unlocked.
Recommended reading:
Programming with POSIX Threads
Perl's RWLock
Java's ReadWriteLock documentation.
What is the difference between a dead lock and a race around condition in programming terms?
Think of a race condition using the traditional example. Say you and a friend each have an ATM card for the same bank account. Now suppose the account has $100 in it. Consider what happens when you attempt to withdraw $10 and your friend attempts to withdraw $50 at exactly the same time.
Think about what has to happen. The ATM must take your input, read what is currently in your account, and then modify the amount. Note that, in programming terms, an assignment statement is a multi-step process.
So, label both of your transactions T1 (you withdraw $10), and T2 (your friend withdraws $50). Now, the numbers below, to the left, represent time steps.
     T1                       T2
---------------------    ---------------------
1.   Read Acct ($100)
2.                        Read Acct ($100)
3.   Write New Amt ($90)
4.                        Write New Amt ($50)
5.   End
6.                        End
After both transactions complete, using this timeline, which is possible if you don't use any sort of locking mechanism, the account has $50 in it. This is $10 more than it should have (your transaction is lost forever, but you still have the money).
This is called a race condition. What you want is for the transactions to be serializable, that is, no matter how you interleave the individual instruction executions, the end result will be exactly the same as some serial schedule (meaning you run them one after the other with no interleaving) of the same transactions. The solution, again, is to introduce locking; however, incorrect locking can lead to deadlock.
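In code, the usual way to make the two transactions serializable is to hold a lock across the whole read-modify-write. A Java-flavoured sketch (Account and withdraw are hypothetical names, with the same $100 balance as above):

class Account {
    private int balance = 100;

    // synchronized makes the read-modify-write one indivisible step, so the
    // T1/T2 interleaving from the timeline above can no longer happen.
    synchronized void withdraw(int amount) {
        int current = balance;      // Read Acct
        balance = current - amount; // Write New Amt
    }
}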
Deadlock occurs when there is a conflict of a shared resource. It's sort of like a Catch-22.
     T1                T2
--------------    --------------
1.   Lock(x)
2.                 Lock(y)
3.   Write x=1
4.                 Write y=19
5.   Lock(y)
6.   Write y=x+1
7.                 Lock(x)
8.                 Write x=y+2
9.   Unlock(x)
10.                Unlock(x)
11.  Unlock(y)
12.                Unlock(y)
You can see that a deadlock occurs at time 7: T2 tries to acquire a lock on x, but T1 already holds the lock on x and is itself waiting on a lock for y, which T2 holds.
This is bad. You can turn this diagram into a dependency graph and you will see that there is a cycle. The problem here is that x and y are resources that may be modified together.
One way to prevent this sort of deadlock problem with multiple lock objects (resources) is to introduce an ordering. In the previous example, T1 locked x and then y, but T2 locked y and then x. If both transactions had adhered to an ordering rule that says "x shall always be locked before y", then this problem would not occur. (You can change the previous example with this rule in mind and see that no deadlock occurs.)
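Expressed in Java, the rule just means every transaction nests its locks in the same order (lockX and lockY are illustrative names):

class OrderedLocks {
    private final Object lockX = new Object();
    private final Object lockY = new Object();

    void transaction1() {
        synchronized (lockX) {       // x is always locked first...
            synchronized (lockY) {   // ...then y
                // write x, then y
            }
        }
    }

    void transaction2() {
        synchronized (lockX) {       // same order, even though this
            synchronized (lockY) {   // transaction logically touches y first
                // write y, then x
            }
        }
    }
}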
These are trivial examples and really I've just used the examples you may have already seen if you have taken any kind of undergrad course on this. In reality, solving deadlock problems can be much harder than this because you tend to have more than a couple resources and a couple transactions interacting.
As always, use Wikipedia as a starting point for CS concepts:
http://en.wikipedia.org/wiki/Deadlock
http://en.wikipedia.org/wiki/Race_condition
A deadlock is when two (or more) threads are blocking each other. Usually this has something to do with threads trying to acquire shared resources. For example if threads T1 and T2 need to acquire both resources A and B in order to do their work. If T1 acquires resource A, then T2 acquires resource B, T1 could then be waiting for resource B while T2 was waiting for resource A. In this case, both threads will wait indefinitely for the resource held by the other thread. These threads are said to be deadlocked.
Race conditions occur when two threads interact in a negative (buggy) way depending on the exact order in which their different instructions are executed. If one thread sets a global variable, for example, then a second thread reads and modifies that global variable, and the first thread reads the variable again, the first thread may experience a bug because the variable has changed unexpectedly.
Deadlock:
This happens when two or more threads wait on each other to release a resource, for an infinite amount of time.
Here the threads are in a blocked state and not executing.
Race/Race Condition:
This happens when two or more threads run in parallel but end up giving a result which is wrong, i.e. not equivalent to the result of performing all the operations in some sequential order.
Here all the threads run and execute their operations.
In code we need to avoid both race and deadlock conditions.
I assume you mean "race conditions" and not "race around conditions" (I've heard that term...)
Basically, a deadlock is a condition where thread A is waiting for resource X while holding a lock on resource Y, and thread B is waiting for resource Y while holding a lock on resource X. The threads block waiting for each other to release their locks.
The solution to this problem is (usually) to ensure that you take locks on all resources in the same order in all threads. For example, if you always lock resource X before resource Y then my example can never result in a deadlock.
A race condition is something where you're relying on a particular sequence of events happening in a certain order, but that can be messed up if another thread is running at the same time. For example, to insert a new node into a linked list, you need to modify the list head, usually something like so:
newNode->next = listHead;
listHead = newNode;
But if two threads do that at the same time, then you might have a situation where they run like so:
Thread A                       Thread B
newNode1->next = listHead
                               newNode2->next = listHead
                               listHead = newNode2
listHead = newNode1
If this were to happen, then Thread B's modification of the list will be lost because Thread A would have overwritten it. It can be even worse, depending on the exact situation, but that's the basics of it.
The solution to this problem is usually to ensure that you include the proper locking mechanisms (for example, take out a lock any time you want to modify the linked list so that only one thread is modifying it at a time).
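For example, a minimal Java sketch of the locked insertion (Node and listHead stand in for the list internals from the example above):

class LockedList {
    static class Node { int value; Node next; }

    private Node listHead;

    synchronized void insert(Node newNode) {
        // Only one thread can update the head at a time, so neither
        // thread's insertion can be lost.
        newNode.next = listHead;
        listHead = newNode;
    }
}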
With respect to programming languages: if shared resources are accessed by multiple threads without any locking, that is called a "race condition". In the second case, if you do lock the resources but the sequence of access to shared resources is not defined properly, threads may end up waiting forever for the resources; that is a case of "deadlock".
I understand about race conditions and how with multiple threads accessing the same variable, updates made by one can be ignored and overwritten by others, but what if each thread is writing the same value (not different values) to the same variable; can even this cause problems? Could this code:
GlobalVar.property = 11;
(assuming that property will never be assigned anything other than 11), cause problems if multiple threads execute it at the same time?
The problem comes when you read that state back, and do something about it. Writing is a red herring - it is true that as long as this is a single word most environments guarantee the write will be atomic, but that doesn't mean that a larger piece of code that includes this fragment is thread-safe. Firstly, presumably your global variable contained a different value to begin with - otherwise if you know it's always the same, why is it a variable? Second, presumably you eventually read this value back again?
The issue is that presumably, you are writing to this bit of shared state for a reason - to signal that something has occurred? This is where it falls down: when you have no locking constructs, there is no implied order of memory accesses at all. It's hard to point to what's wrong here because your example doesn't actually contain the use of the variable, so here's a trivialish example in neutral C-like syntax:
int x = 0, y = 0;

// thread A does:
x = 1;
y = 2;
if (y == 2)
    print(x);

// thread B does, at the same time:
if (y == 2)
    print(x);
Thread A will always print 1, but it's completely valid for thread B to print 0. The order of operations in thread A is only required to be observable from code executing in thread A - thread B is allowed to see any combination of the state. The writes to x and y may not actually happen in order.
This can happen even on single-processor systems, where most people do not expect this kind of reordering - your compiler may reorder it for you. On SMP even if the compiler doesn't reorder things, the memory writes may be reordered between the caches of the separate processors.
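If you need thread B to observe thread A's writes in order, you have to ask for it explicitly. In Java, for example, marking y as volatile inserts the missing barrier; a minimal sketch:

class Publish {
    int x = 0;
    volatile int y = 0; // the volatile write publishes everything before it

    void threadA() {
        x = 1;
        y = 2;          // happens-before any read that observes y == 2
    }

    void threadB() {
        if (y == 2)
            System.out.println(x); // now guaranteed to print 1
    }
}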
If that doesn't seem to answer it for you, include more detail of your example in the question. Without the use of the variable it's impossible to definitively say whether such a usage is safe or not.
It depends on the work actually done by that statement. There can still be some cases where Something Bad happens - for example, if a C++ class has overloaded the = operator, and does anything nontrivial within that statement.
I have accidentally written code that did something like this with POD types (builtin primitive types), and it worked fine -- however, it's definitely not good practice, and I'm not confident that it's dependable.
Why not just lock the memory around this variable when you use it? In fact, if you somehow "know" this is the only write statement that can occur at some point in your code, why not just use the value 11 directly, instead of writing it to a shared variable?
(edit: I guess it's better to use a constant name instead of the magic number 11 directly in the code, btw.)
If you're using this to figure out when at least one thread has reached this statement, you could use a semaphore that starts at 1, and is decremented by the first thread that hits it.
I would expect the result to be undefined. That is, it would vary from compiler to compiler, language to language, OS to OS, etc. So no, it is not safe.
Why would you want to do this, though? Adding a line to obtain a mutex lock is only one or two lines of code (in most languages) and would remove any possibility of a problem. If that is going to be too expensive, then you need to find an alternative way of solving the problem.
In general, this is not considered a safe thing to do unless your system provides for atomic operations (operations that are guaranteed to be executed in a single cycle).
The reason is that while the "C" statement looks simple, often there are a number of underlying assembly operations taking place.
Depending on your OS, there are a few things you could do:
Take a mutual exclusion semaphore (mutex) to protect access
in some OSes, you can temporarily disable preemption, which guarantees your thread will not be swapped out.
Some OSes provide a reader/writer semaphore, which is more performant than a plain old mutex.
Here's my take on the question.
You have two or more threads running that write to a variable... like a status flag or something, where you only want to know whether one or more of them set it to true. Then, in another part of the code (after the threads complete), you want to check and see if at least one thread set that status... for example
bool flag = false
threadContainer tc
threadInputs inputs

check(input)
{
    ...do stuff to input
    if(success)
        flag = true
}

start multiple threads
foreach(i in inputs)
    t = startthread(check, i)
    tc.add(t) // Keep track of all the threads started

foreach(t in tc)
    t.join( ) // Wait until each thread is done

if(flag)
    print "One of the threads was successful"
else
    print "None of the threads were successful"
I believe the above code would be OK, assuming you're fine with not knowing which thread set the status to true, and you can wait for all the multi-threaded stuff to finish before reading that flag. I could be wrong though.
If the operation is atomic, you should be able to get by just fine. But I wouldn't do that in practice. It is better just to acquire a lock on the object and write the value.
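In Java, for instance, an AtomicBoolean gives you an atomic, visible write without a full lock; a sketch (FlagExample is an illustrative name):

import java.util.concurrent.atomic.AtomicBoolean;

class FlagExample {
    private final AtomicBoolean flag = new AtomicBoolean(false);

    void check(boolean success) {
        if (success)
            flag.set(true);  // atomic and visible to the joining thread
    }

    boolean anySucceeded() {
        return flag.get();   // safe to read after all workers are joined
    }
}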
Assuming that property will never be assigned anything other than 11, I don't see a reason for the assignment in the first place. Just make it a constant then.
Assignment only makes sense when you intend to change the value, unless the act of assignment itself has other side effects, as volatile writes have memory-visibility side effects in Java. And if you change state shared between multiple threads, then you need to synchronize or otherwise "handle" the problem of concurrency.
When you assign a value, without proper synchronization, to some state shared between multiple threads, then there are no guarantees for when the other threads will see that change. And no visibility guarantees means that it is possible the other threads will never see the assignment.
Compilers, JITs, CPU caches. They're all trying to make your code run as fast as possible, and if you don't make any explicit requirements for memory visibility, then they will take advantage of that. If not on your machine, then on somebody else's.