I have run into a semantic problem with guard conditions and forks in activity diagrams. Suppose a terminating action A leads to a fork, and the outputs of the fork lead to actions B and C (that is, the fork has one input and two outputs). If A has terminated successfully, and the guard condition of B is valid while the guard condition of C is not, will the activity continue to action B and wait for the guard condition of C to become true, or will neither B nor C be executed?
Update: consider the following activity example
Suppose that the first time A terminates, the guard condition of C is not valid, while B has no guard. Via the merge node, A is executed a second time. After the second termination of A, the guard condition of C becomes permanently valid, so C will be executed twice in succession, once for each termination of A. Is this correct?
Once A is finished, it will emit a token and the fork will duplicate it. One token goes straight to B, which after finishing re-triggers A indefinitely. Now, what happens to the token(s) travelling to C? They simply queue at the guard. When the guard opens after some time, it lets a single token pass (because C can hold only one). When C is finished, it will allow another token to enter (if multiple tokens have arrived in the meantime), again depending on the guard. Basically, C can be started as many times as A has completed before.
N.B. The implication in your question, "guard conditions on the outputs", is wrong. A guard is always on an incoming control flow of an action. The fork does not control the guard; the action does. Furthermore, an action can never have a guard on its output. That is controlled by the action's behavior: when it terminates, it emits a token on each of its outgoing control flows (a so-called implicit fork).
Answer to the initial question, left here as general information:
Actually when you draw it, the situation is obvious:
The token emerging on the right from the top fork will be blocked. B will start, since its token passed the guard. Because C does not start, the lower fork will hang, as it needs two tokens. So D is not reached, unless the guard on C is at some point unblocked externally.
I'm trying to understand how GHC Haskell synchronises the computation of "basic" values (i.e. not IORef, TVar, etc.) between threads. I have searched for information about this but haven't found anything clear.
Take the following example program:
import Control.Concurrent

expensiveFunction x = sum [1..x] -- Just an example

val = expensiveFunction 12345

thread1 = print val
thread2 = print val

main = do
    forkOS thread1
    forkOS thread2
I understand that the value val will initially be represented by an unevaluated closure. In order to print val, the program must first evaluate it. Once a toplevel binding has been evaluated it should not need to be evaluated again.
1. Is the representation for "val" even shared by separate threads?
2. If for some reason thread1 completes evaluation first, can it convey the final computed value to thread2 by swapping out the pointer? How would that be synchronised?
3. If thread1 is busy evaluating when thread2 wants the value, does thread2 wait for it to finish, or do they both race to evaluate it first?
In GHC-compiled programs, values go through three(-ish) phases of evaluation:
1. Thunk. This is where they start.
2. Black hole. When forced, a thunk is converted to a black hole and computation begins. Other threads that request the value of a black hole will instead add themselves to a notification list for when the black hole is updated. (Also, if the thunk itself tries to access the black hole, it will short-circuit to an exception instead of waiting forever.)
3. Evaluated. When the computation finishes, its last task is to update the black hole to a plain value (well, WHNF value, anyway).
The pointer that is getting updated during these phase transitions is shared with other threads and not protected from race conditions. This means that, very rarely, it is possible for two (or more) threads to both see a pointer in phase 1 and for both to execute the 1 -> 2 transition; in that case, both will evaluate the thunk, and the transition 2 -> 3 will also happen twice. Notably, though, the 1 -> 2 transition is typically much faster than the computation it is replacing (essentially just a memory access or two), in part exactly so that the race is difficult to trigger.
Because the language is pure, the racing threads will come to the same answer. So there is no semantic difficulty here. But in some rare cases, a little bit of work may be duplicated. It is very, very rare that the overhead of a lock on every 1 -> 2 transition would be better than this slight duplication. (If you find it is in your case, consider manually protecting the evaluation of whichever expensive thing is being shared!)
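To make those phase transitions concrete, here is a loose C++ analogy (emphatically not GHC's actual machinery, and omitting the black-hole notification list): a shared result is installed with a single atomic pointer swap, so two racing threads may occasionally both compute, but they always install the same pure value.

#include <atomic>
#include <cstdio>
#include <thread>

// Hypothetical sketch: a pure computation memoised without a lock.
long long expensive(long long x) {
    long long s = 0;
    for (long long i = 1; i <= x; ++i) s += i;
    return s;
}

std::atomic<long long*> cache{nullptr};

long long force() {
    long long* p = cache.load(std::memory_order_acquire);
    if (p == nullptr) {                                 // "thunk" phase
        long long* v = new long long(expensive(12345)); // both threads may get here
        long long* expected = nullptr;
        // Try to install our result; if another thread won the race,
        // discard ours and use theirs (it is the same value either way).
        if (cache.compare_exchange_strong(expected, v, std::memory_order_acq_rel)) {
            p = v;                                      // "evaluated" phase
        } else {
            delete v;
            p = expected;
        }
    }
    return *p;
}

int main() {
    std::thread t1([] { std::printf("%lld\n", force()); });
    std::thread t2([] { std::printf("%lld\n", force()); });
    t1.join();
    t2.join();
}

As in GHC, the worst case is a little duplicated work, never a wrong answer.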
Corollary: great care must be taken with the unsafe IO a -> a family of functions; some guarantee synchronization of the evaluation of the resulting a and some don't. If your IO a action is not as pure as you promised it is, and a race causes it to be executed twice, all manner of strange heisenbugs can occur.
While taking a look at the Go memory model document (link), I found some odd behavior. The document says that with the code below, it can happen that g prints 2 and then 0.
var a, b int

func f() {
    a = 1
    b = 2
}

func g() {
    print(b)
    print(a)
}

func main() {
    go f()
    g()
}
Is this only a goroutine issue? I am curious why the assignment to b can happen before the assignment to a. Even if the assignments to a and b happen in a different thread (not the main thread), shouldn't it be guaranteed that a is assigned before b within that thread, since the assignment to a comes first and the assignment to b comes later? Can anyone please explain this issue clearly?
Variables a and b are allocated and initialized with the zero values of their respective types (which is 0 in the case of int) before any of the functions start to execute, at this line:
var a, b int
What may change is the order new values are assigned to them in the f() function.
Quoting from that page: Happens Before:
Within a single goroutine, reads and writes must behave as if they executed in the order specified by the program. That is, compilers and processors may reorder the reads and writes executed within a single goroutine only when the reordering does not change the behavior within that goroutine as defined by the language specification. Because of this reordering, the execution order observed by one goroutine may differ from the order perceived by another. For example, if one goroutine executes a = 1; b = 2;, another might observe the updated value of b before the updated value of a.
Assignments to a and b may not happen in the order you write them if reordering them makes no difference within the same goroutine. The compiler may reorder them, for example, if changing the value of b first is more efficient (e.g. because its address is already loaded in a register). If changing the assignment order would (or might) cause an issue within the same goroutine, then obviously the compiler is not allowed to change the order. Since the goroutine running f() does nothing with the variables a and b after the assignments, the compiler is free to carry them out in whatever order it likes.
Since there is no synchronization between the 2 goroutines in the above example, the compiler makes no effort to check whether reordering would cause any issues in the other goroutine. It doesn't have to.
But if you synchronize your goroutines, the compiler will make sure that at the "synchronization point" there are no inconsistencies: you have a guarantee that both assignments have "completed" by that point. So if the synchronization point is before the print() calls, you will see the newly assigned values printed: 2 and 1.
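This is not specific to goroutines, by the way. As a hedged illustration (my own sketch, not from the Go document), the same two outcomes exist in C++: relaxed atomics permit the 2-then-0 observation, and a release/acquire pair is the kind of synchronization that rules it out.

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> a{0}, b{0};

void f() {
    a.store(1, std::memory_order_relaxed);
    // With relaxed ordering, other threads may observe this store before
    // the store to a, just like in the Go example. Upgrading it to
    // memory_order_release (and the load of b below to memory_order_acquire)
    // forbids printing "2 0".
    b.store(2, std::memory_order_relaxed);
}

void g() {
    int bv = b.load(std::memory_order_relaxed);
    int av = a.load(std::memory_order_relaxed);
    std::printf("%d %d\n", bv, av); // "2 0" is a permitted outcome here
}

int main() {
    std::thread t(f);
    g();
    t.join();
}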
Problem
Summary: apply a function F in parallel to each element of an array, where F is NOT thread safe.
I have a set of elements E to process, let's say a queue of them.
I want to process all these elements in parallel using the same function f( E ).
Now, ideally I could use a map-based parallel pattern, but the problem has the following constraints:
Each element is a pair of two objects (E = (A, B)).
Two elements may share an object (E1 = (A1, B1); E2 = (A1, B2)).
The function f cannot process two elements that share an object, so E1 and E2 cannot be processed in parallel.
What is the right way of doing this?
My thoughts are as follows:
1. Trivial thought: keep a set of active As and Bs, and start processing an element only when no other thread is already using its A or its B.
So, when you give an element to a thread, you add its A and B to the active set.
Pick the first element; if its objects are not in the active set, spawn a new thread, otherwise push it to the back of the queue of elements.
Do this until the queue is empty.
Will this cause a deadlock? Ideally, when some processing finishes, other elements should become available, right?
2. The other thought is to build a graph of these connected objects.
Each node represents an object (A or B), and each element is an edge connecting an A and a B; then somehow process the data so that we know the elements never overlap.
Questions
How can we achieve this best?
Is there a standard pattern to do this ?
Is there a problem with these approaches?
Not necessary, but if you could point out which TBB methods to use, that would be great.
The "best" approach depends on a lot of factors here:
How many elements "E" do you have and how much work is needed for f(E). --> Check if it's really worth it to work the elements in parallel (if you need a lot of locking and don't have much work to do, you'll probably slow down the process by working in parallel)
Is there any possibility to change the design that can make f(E) multi-threading safe?
How many elements "A" and "B" are there? Is there any logic to which elements "E" share specific versions of A and B? --> If you can sort the elements E into separate lists where each A and B only appears in a single list, then you can process these lists parallel without any further locking.
If there are many different A's and B's and you don't share too many of them, you may want to do a trivial approach where you just lock each "A" and "B" when entering and wait until you get the lock.
Whenever you do "lock and wait" with multiple locks it's very important that you always take the locks in the same order (e.g. always A first and B second) because otherwise you may run into deadlocks. This locking order needs to be observed everywhere (a single place in the whole application that uses a different order can cause a deadlock)
Edit: Also, if you do "try lock" you need to ensure that the order is always the same. Otherwise you can cause a livelock:
thread 1 locks A
thread 2 locks B
thread 1 tries to lock B and fails
thread 2 tries to lock A and fails
thread 1 releases lock A
thread 2 releases lock B
Goto 1 and repeat...
The chances that this actually repeats endlessly are relatively slim, but it should be avoided anyway.
Edit 2: In principle, I guess I'd just split E(Ax, Bx) into different lists based on Ax (e.g. one list for all Es that share the same A), and then process these lists in parallel while locking B (there you can still "TryLock" and continue if the required B is already in use); see the sketch below.
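A rough sketch of that idea (the Obj and Element types and the trivial f are invented for illustration): bucket the elements by their A so that no two elements sharing an A run concurrently, and guard each B with its own mutex.

#include <map>
#include <mutex>
#include <thread>
#include <vector>

struct Obj { std::mutex m; };       // stands in for an A or a B
struct Element { Obj* a; Obj* b; };

void f(Element&) { /* the real, non-thread-safe work goes here */ }

void processAll(std::vector<Element>& elems) {
    // One list per distinct A: elements sharing an A are serialized by
    // construction, so A never needs a lock.
    std::map<Obj*, std::vector<Element*>> byA;
    for (auto& e : elems) byA[e.a].push_back(&e);

    std::vector<std::thread> workers;
    for (auto& bucket : byA) {
        std::vector<Element*>& list = bucket.second;
        workers.emplace_back([&list] {
            for (Element* e : list) {
                // B may still be shared across buckets, so lock it per element.
                std::lock_guard<std::mutex> lockB(e->b->m);
                f(*e);
            }
        });
    }
    for (auto& t : workers) t.join();
}

int main() {
    Obj a1, a2, b1;
    std::vector<Element> elems{{&a1, &b1}, {&a1, &b1}, {&a2, &b1}};
    processAll(elems);
}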
I'm writing some lock-free code, and I came up with an interesting pattern, but I'm not sure if it will behave as expected under relaxed memory ordering.
The simplest way to explain it is using an example:
std::atomic<int> a, b, c;

auto a_local = a.load(std::memory_order_relaxed);
auto b_local = b.load(std::memory_order_relaxed);
if (a_local < b_local) {
    auto c_local = c.fetch_add(1, std::memory_order_relaxed);
}
Note that all operations use std::memory_order_relaxed.
Obviously, on the thread that this is executed on, the loads for a and b must be done before the if condition is evaluated.
Similarly, the read-modify-write (RMW) operation on c must be done after the condition is evaluated (because it's conditional on that... condition).
What I want to know is: does this code guarantee that the value of c_local is at least as up-to-date as the values of a_local and b_local? If so, how is this possible given the relaxed memory ordering? Is the control dependency, together with the RMW operation, acting as some sort of acquire fence? (Note that there's not even a corresponding release anywhere.)
If the above holds true, I believe this example should also work (assuming no overflow) -- am I right?
std::atomic<int> a(0), b(0);

// Thread 1
while (true) {
    auto a_local = a.fetch_add(1, std::memory_order_relaxed);
    if (a_local >= 0) { // Always true at runtime
        b.fetch_add(1, std::memory_order_relaxed);
    }
}

// Thread 2
auto b_local = b.load(std::memory_order_relaxed);
if (b_local < 777) {
    // Note that fetch_add returns the pre-incrementation value
    auto a_local = a.fetch_add(1, std::memory_order_relaxed);
    assert(b_local <= a_local); // Is this guaranteed?
}
On thread 1, there is a control dependency which I suspect guarantees that a is always incremented before b is incremented (but they each keep being incremented neck-and-neck). On thread 2, there is another control dependency which I suspect guarantees that b is loaded into b_local before a is incremented. I also think that the value returned from fetch_add will be at least as recent as any observed value in b_local, and the assert should therefore hold. But I'm not sure, since this departs significantly from the usual memory-ordering examples, and my understanding of the C++11 memory model is not perfect (I have trouble reasoning about these memory ordering effects with any degree of certainty). Any insights would be appreciated!
Update: As bames53 has helpfully pointed out in the comments, given a sufficiently smart compiler, it's possible that an if could be optimised out entirely under the right circumstances, in which case the relaxed loads could be reordered to occur after the RMW, causing their values to be more up-to-date than the fetch_add return value (the assert could fire in my second example). However, what if instead of an if, an atomic_signal_fence (not atomic_thread_fence) is inserted? That certainly can't be ignored by the compiler no matter what optimizations are done, but does it ensure that the code behaves as expected? Is the CPU allowed to do any re-ordering in such a case?
The second example then becomes:
std::atomic<int> a(0), b(0);

// Thread 1
while (true) {
    auto a_local = a.fetch_add(1, std::memory_order_relaxed);
    std::atomic_signal_fence(std::memory_order_acq_rel);
    b.fetch_add(1, std::memory_order_relaxed);
}

// Thread 2
auto b_local = b.load(std::memory_order_relaxed);
std::atomic_signal_fence(std::memory_order_acq_rel);
// Note that fetch_add returns the pre-incrementation value
auto a_local = a.fetch_add(1, std::memory_order_relaxed);
assert(b_local <= a_local); // Is this guaranteed?
Another update: After reading all the responses so far and combing through the standard myself, I don't think it can be shown that the code is correct using only the standard. So, can anyone come up with a counter-example of a theoretical system that complies with the standard and also fires the assert?
Signal fences don't provide the necessary guarantees (well, not unless 'thread 2' is a signal handler that actually runs on 'thread 1').
To guarantee correct behavior we need synchronization between threads, and the fence that does that is std::atomic_thread_fence.
Let's label the statements so we can diagram various executions (with thread fences replacing signal fences, as required):
// Thread 1
while (true) {
    auto a_local = a.fetch_add(1, std::memory_order_relaxed); // A
    std::atomic_thread_fence(std::memory_order_acq_rel);      // B
    b.fetch_add(1, std::memory_order_relaxed);                // C
}

// Thread 2
auto b_local = b.load(std::memory_order_relaxed);         // X
std::atomic_thread_fence(std::memory_order_acq_rel);      // Y
auto a_local = a.fetch_add(1, std::memory_order_relaxed); // Z
So first let's assume that X loads a value written by C. The following paragraph specifies that in that case the fences synchronize and a happens-before relationship is established.
29.8/2:
A release fence A synchronizes with an acquire fence B if there exist atomic operations X and Y, both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value written by X or a value written by any side effect in the hypothetical release sequence X would head if it were a release operation.
And here's a possible execution order where the arrows are happens-before relations.
Thread 1: A₁ → B₁ → C₁ → A₂ → B₂ → C₂ → ...
                      ↘
Thread 2:               X → Y → Z
If a side effect X on an atomic object M happens before a value computation B of M, then the evaluation B shall take its value from X or from a side effect Y that follows X in the modification order of M. — [C++11 1.10/18]
So the load at Z must take its value from A₁ or from a subsequent modification. Therefore the assert holds because the value written at A₁ and at all later modifications is greater than or equal to the value written at C₁ (and read by X).
Now let's look at the case where the fences do not synchronize. This happens when the load of b does not read a value written by thread 1, but instead reads the value b was initialized with. There is still synchronization where the thread starts, though:
30.3.1.2/5
Synchronization: The completion of the invocation of the constructor synchronizes with the beginning of the invocation of the copy of f.
This specifies the behavior of std::thread's constructor. So (assuming the thread creation is correctly sequenced after the initialization of a) the value read by Z must come from the initialization of a or from one of the subsequent modifications in thread 1, which means the assertion still holds.
This example gets at a variation of out-of-thin-air-like behavior. The relevant discussion in the spec is in section 29.3p9-11. Since the current version of the C11 standard doesn't guarantee that dependencies are respected, the memory model should allow the assertion to fire. The most likely situation is that the compiler optimizes away the check that a_local >= 0. But even if you replace that check with a signal fence, CPUs would be free to reorder those instructions.
You can test such code examples under the C/C++11 memory models using the open source CDSChecker tool.
The interesting issue with your example is that for an execution to violate the assertion, there has to be a cycle of dependences. More concretely:
The b.fetch_add in thread 1 depends on the a.fetch_add in the same loop iteration, due to the if condition. The a.fetch_add in thread 2 depends on the b.load. For an assertion violation, T2's b.load would have to read from a b.fetch_add in a later loop iteration than T2's a.fetch_add. Now consider the b.fetch_add that the b.load reads from, and call it # for future reference. We know that the b.load depends on #, as it takes its value from #.
We know that # must depend on T2's a.fetch_add, since T2's a.fetch_add atomically reads and updates a prior a.fetch_add from T1 in the same loop iteration as #. So we know that # depends on the a.fetch_add in thread 2. That gives us a cycle of dependences, which is plain weird, but allowed by the C/C++ memory model. The most likely way of actually producing that cycle is that the compiler figures out that a_local is always non-negative, breaking the dependence; it can then unroll the loop and reorder T1's fetch_adds however it wants.
"After reading all the responses so far and combing through the standard myself, I don't think it can be shown that the code is correct using only the standard."
And unless you admit that non-atomic operations are magically safer and more ordered than relaxed atomic operations (which is silly), and that there is one semantics for the parts of C++ without atomics (and try_lock and shared_ptr::count) and another semantics for those features that don't execute sequentially, you also have to admit that no program at all can be proven correct, as non-atomic operations have no "ordering", and they are needed to construct and destroy variables.
Or, you stop taking the standard text as the only word on the language and use some common sense, which is always recommended.
What is the difference between a dead lock and a race around condition in programming terms?
Think of a race condition using the traditional example. Say you and a friend each have an ATM card for the same bank account. Now suppose the account has $100 in it. Consider what happens when you attempt to withdraw $10 and your friend attempts to withdraw $50 at exactly the same time.
Think about what has to happen. The ATM must take your input, read what is currently in your account, and then modify the amount. Note that, in programming terms, an assignment statement is a multi-step process.
So, label your two transactions T1 (you withdraw $10) and T2 (your friend withdraws $50). The numbers below, on the left, represent time steps.
     T1                       T2
     ----------------         ------------------------
1.   Read Acct ($100)
2.                            Read Acct ($100)
3.   Write New Amt ($90)
4.                            Write New Amt ($50)
5.   End
6.                            End
After both transactions complete, using this timeline (which is possible if you don't use any sort of locking mechanism), the account has $50 in it. That is $10 more than it should have (your transaction is lost forever, but you still have the money).
This is called a race condition. What you want is for the transactions to be serializable: no matter how you interleave the individual instruction executions, the end result is exactly the same as some serial schedule (meaning you run them one after the other, with no interleaving) of the same transactions. The solution, again, is to introduce locking; however, incorrect locking can lead to deadlock.
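For illustration, here is a hedged C++ sketch of the locked version (the Account type and amounts are invented): holding one lock across the read and the write makes the two withdrawals serializable.

#include <cstdio>
#include <mutex>
#include <thread>

struct Account {
    std::mutex m;
    int balance = 100;
};

void withdraw(Account& acct, int amount) {
    // The read and the write form one critical section, so no update is lost.
    std::lock_guard<std::mutex> lock(acct.m);
    int current = acct.balance;      // "Read Acct"
    acct.balance = current - amount; // "Write New Amt"
}

int main() {
    Account acct;
    std::thread t1([&] { withdraw(acct, 10); }); // you
    std::thread t2([&] { withdraw(acct, 50); }); // your friend
    t1.join();
    t2.join();
    std::printf("%d\n", acct.balance); // always 40, whichever runs first
}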
Deadlock occurs when there is a conflict over a shared resource. It's sort of like a catch-22.
     T1               T2
     -------          --------
1.   Lock(x)
2.                    Lock(y)
3.   Write x=1
4.                    Write y=19
5.   Lock(y)
6.   Write y=x+1
7.                    Lock(x)
8.                    Write x=y+2
9.   Unlock(x)
10.                   Unlock(x)
11.  Unlock(y)
12.                   Unlock(y)
You can see that a deadlock occurs at time 7: T2 tries to acquire the lock on x, but T1 already holds it while waiting for the lock on y, which T2 holds.
This is bad. You can turn this diagram into a dependency graph, and you will see that there is a cycle. The problem here is that x and y are resources that may be modified together.
One way to prevent this sort of deadlock problem with multiple lock objects (resources) is to introduce an ordering. In the previous example, T1 locked x and then y, but T2 locked y and then x. If both transactions adhered to an ordering rule that says "x shall always be locked before y", then this problem would not occur. (You can rework the previous example with this rule in mind and see that no deadlock occurs.)
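In C++ specifically, std::scoped_lock (or std::lock) acquires several mutexes with a built-in deadlock-avoidance algorithm, which gets the same effect without a hand-maintained ordering convention. A minimal sketch, with the locks invented for illustration:

#include <mutex>
#include <thread>

std::mutex x_m, y_m;
int x = 0, y = 0;

void t1() {
    // Acquires both locks deadlock-free, regardless of argument order.
    std::scoped_lock lock(x_m, y_m);
    x = 1;
    y = x + 1;
}

void t2() {
    std::scoped_lock lock(y_m, x_m); // different order is still safe
    y = 19;
    x = y + 2;
}

int main() {
    std::thread a(t1), b(t2);
    a.join();
    b.join();
}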
These are trivial examples and really I've just used the examples you may have already seen if you have taken any kind of undergrad course on this. In reality, solving deadlock problems can be much harder than this because you tend to have more than a couple resources and a couple transactions interacting.
As always, use Wikipedia as a starting point for CS concepts:
http://en.wikipedia.org/wiki/Deadlock
http://en.wikipedia.org/wiki/Race_condition
A deadlock is when two (or more) threads block each other. Usually this has something to do with threads trying to acquire shared resources. For example, suppose threads T1 and T2 each need to acquire both resources A and B in order to do their work. If T1 acquires resource A and T2 then acquires resource B, T1 will wait for resource B while T2 waits for resource A. In this case, both threads wait indefinitely for the resource held by the other thread. These threads are said to be deadlocked.
Race conditions occur when two threads interact in a negative (buggy) way that depends on the exact order in which their instructions are executed. If one thread sets a global variable, for example, then a second thread reads and modifies it, and the first thread then reads the variable again, the first thread may experience a bug because the variable has changed unexpectedly.
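A minimal sketch of that kind of bug (an invented example, not from the answer above): two threads incrementing an unsynchronized global can lose updates.

#include <cstdio>
#include <thread>

int counter = 0; // shared, unsynchronized: this is deliberately buggy

void work() {
    for (int i = 0; i < 100000; ++i) {
        ++counter; // read, add, write: another thread can interleave between steps
    }
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    // Expected 200000, but the total is often less; this unsynchronized
    // access is a data race (undefined behavior in C++).
    std::printf("%d\n", counter);
}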
Deadlock:
This happens when two or more threads wait on each other to release a resource for an infinite amount of time.
Here the threads are in a blocked state and not executing.
Race/Race condition:
This happens when two or more threads run in parallel but end up giving a result that is wrong, i.e. not equivalent to the result of running the operations in some sequential order.
Here all the threads run and execute their operations.
In coding we need to avoid both race and deadlock conditions.
I assume you mean "race conditions" and not "race around conditions" (I've heard that term...)
Basically, a dead lock is a condition where thread A is waiting for resource X while holding a lock on resource Y, and thread B is waiting for resource Y while holding a lock on resource X. The threads block waiting for each other to release their locks.
The solution to this problem is (usually) to ensure that you take locks on all resources in the same order in all threads. For example, if you always lock resource X before resource Y then my example can never result in a deadlock.
A race condition is something where you're relying on a particular sequence of events happening in a certain order, but that can be messed up if another thread is running at the same time. For example, to insert a new node into a linked list, you need to modify the list head, usually something like so:
newNode->next = listHead;
listHead = newNode;
But if two threads do that at the same time, then you might have a situation where they run like so:
Thread A                       Thread B
newNode1->next = listHead
                               newNode2->next = listHead
                               listHead = newNode2
listHead = newNode1
If this were to happen, Thread B's modification of the list would be lost, because Thread A's write overwrites it. It can be even worse, depending on the exact situation, but that's the basics of it.
The solution to this problem is usually to ensure that you include the proper locking mechanisms (for example, take out a lock any time you want to modify the linked list so that only one thread is modifying it at a time).
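A minimal C++ sketch of that fix (the Node type and mutex name are invented): guarding both steps of the insert with one mutex makes them atomic with respect to other inserts.

#include <mutex>
#include <thread>

struct Node {
    int value;
    Node* next;
};

Node* listHead = nullptr;
std::mutex listMutex;

void insertFront(Node* newNode) {
    // Both steps happen under the lock, so no concurrent insert can slip
    // in between them and be lost.
    std::lock_guard<std::mutex> lock(listMutex);
    newNode->next = listHead;
    listHead = newNode;
}

int main() {
    Node n1{1, nullptr}, n2{2, nullptr};
    std::thread a([&] { insertFront(&n1); });
    std::thread b([&] { insertFront(&n2); });
    a.join();
    b.join();
    // Both nodes are now in the list, in either order.
}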
With respect to programming languages: if shared resources accessed by multiple threads are not locked, that is a "race condition". In the second case, if you lock the resources but the sequence of access to the shared resources is not defined properly, threads may end up waiting forever for the resources they need; that is a case of "deadlock".