Reusable Barrier Algorithm - multithreading

I'm looking into the Reusable Barrier algorithm from the book "The Little Book of Semaphores".
The puzzle is on page 31 (Basic Synchronization Patterns/Reusable Barrier), and I have come up with a 'solution' (or not) that differs from the solution in the book (a two-phase barrier).
This is my 'code' for each thread:
# n = 4 threads running
# semaphore: max n, initialized to 0
# mutex: unowned
start:
    mutex.wait()
    counter = counter + 1
    if counter == n:
        semaphore.signal(4) # add 4 at once
        counter = 0
    mutex.release()
    semaphore.wait()
    # critical section
    semaphore.release()
    goto start
This does seem to work. I've even inserted different sleep timers into different sections of the threads, and they still wait for all the threads to arrive before continuing on each and every loop. Am I missing something? Is there a condition under which this will fail?
I've implemented this using the Windows library Semaphore and Mutex functions.
Update:
Thank you to starblue for the answer. It turns out that if, for whatever reason, a thread is slow between mutex.release() and semaphore.wait(), then any thread that arrives at semaphore.wait() after completing a full loop will be able to go through again, since one of the N signals will still be unused.
And having put a Sleep command in thread number 3, I got a result where one can see that thread 3 missed a turn the first time through (with thread 1 having done 2 turns), and then caught up on the second turn (which was in fact its 1st turn).
Thanks again to everyone for the input.

One thread could run several times through the barrier while some other thread doesn't run at all.
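For completeness, the book's two-phase solution avoids this by using two turnstiles: the first gate releases the group once everyone has arrived, and the second keeps any thread from starting the next loop until everyone has left the first gate. Below is a minimal C sketch with POSIX semaphores, translated from the book's pseudocode; the function name and N are illustrative.

#include <pthread.h>
#include <semaphore.h>

#define N 4

static int count = 0;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static sem_t turnstile1;   /* sem_init(&turnstile1, 0, 0) in main */
static sem_t turnstile2;   /* sem_init(&turnstile2, 0, 1) in main */

void barrier_wait(void)
{
    /* Phase 1: wait until all N threads have arrived. */
    pthread_mutex_lock(&mutex);
    if (++count == N) {
        sem_wait(&turnstile2);   /* lock the second turnstile */
        sem_post(&turnstile1);   /* open the first */
    }
    pthread_mutex_unlock(&mutex);
    sem_wait(&turnstile1);       /* each thread passes... */
    sem_post(&turnstile1);       /* ...and reopens the gate for the next */

    /* Phase 2: wait until all N threads have left phase 1, so a fast
     * thread cannot loop around and slip through a leftover signal. */
    pthread_mutex_lock(&mutex);
    if (--count == 0) {
        sem_wait(&turnstile1);   /* lock the first turnstile again */
        sem_post(&turnstile2);   /* open the second */
    }
    pthread_mutex_unlock(&mutex);
    sem_wait(&turnstile2);
    sem_post(&turnstile2);
}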


Synchronize multiple pthreads in a loop

I'm discovering the pthread library (in C) and I'm having trouble understanding a few things.
First of all, I understand what a mutex is and how it works, and I also understand the concept of a condition variable, but I can't manage to use it properly (I don't really get how to combine the mutex and the cond).
This is, in pseudo-code, what I want to do:
thread:
    loop:
        // do something
    end loop
end thread
So there are n threads, but each thread uses the same function. I want the inside of the loop to be executed in parallel by all the threads, BUT each thread must be in the same iteration of the loop, meaning I don't care in what order the instructions inside the loop are executed between threads, but to start iteration 2 of a thread, all the other threads must have finished iteration 1 (and so on).
So my question is: how do you do that? Not in a specific example, but theoretically.
EDIT
I managed to do it. I don't know if it's the proper way, but it's working:
global nbOfThreads
global nbOfIterations

thread:
    lock(mutex0)
    unlock(mutex0)
    loop:
        // Do something
        lock(mutex1)
        nbOfIterations++
        if (nbOfIterations == nbOfThreads):
            nbOfIterations = 0
            broadcast(cond)
            unlock(mutex1)
            continue
        end if
        wait(cond, mutex1)
        unlock(mutex1)
    end loop
end thread

main(n):
    nbOfThreads = n
    nbOfIterations = 0
    lock(mutex0)
    do nbOfThreads times: create(thread)
    unlock(mutex0)
end main
I obviously tried to understand it myself, but there are some things I don't understand:
The main one: WHY does a cond need to be paired with a mutex?
In some examples I saw something like this:
// thread A:
while (!condition)
    wait(&cond)

// thread B:
if (condition)
    signal(&cond)
Well, I really don't get the point of this while loop. I thought wait paused the thread until the condition was true (until the other thread sends the signal). I mean, I would get it if it were an if instead of a while.
Thank you
WHY does a cond need.... because the (!condition) you reference almost certainly depends upon some bits of the object not changing while you reference them. Correspondingly, modifying the state of the object should be done in such a way as to appear atomic to any observer; thus a mutex. While you could rely on too-clever-by-half hackery like atomic types, there is also the problem of ‘what if it was modified just after you checked it’ -- a race condition. Thus the idiomatic lock(); while (!cond) { wait(); }.
The point of the while... The signal+wait is not a handoff of control; after the signal, any number of things could happen to the object before a particular thread returns from wait. Even though the condition might have been in the correct state, by the time thread A examines it, it may no longer be. At the point of exiting the while loop, thread A knows: The condition is in the state I desire, and I have exclusive access to the object.
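To make that idiom concrete, here is a minimal C sketch of the lock/while/wait pattern described above; the shared flag ready and the two helper names are illustrative, not from the question:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;   /* the shared state the condition is about */

/* Thread A: block until 'ready' is true. */
void wait_for_ready(void)
{
    pthread_mutex_lock(&lock);
    while (!ready)                        /* re-check after every wake-up */
        pthread_cond_wait(&cond, &lock);  /* atomically unlocks, sleeps, relocks */
    /* Here we know: ready == true AND we hold the lock. */
    pthread_mutex_unlock(&lock);
}

/* Thread B: make the condition true and wake any waiters. */
void set_ready(void)
{
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}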
Condition variables can have spurious wake-ups. The condition might not actually be true when the wait function returns.
Depending on your task, a different synchronization primitive, such as a barrier (see pthread_barrier_init) or a semaphore (sem_init) might be easier to use.
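For the lockstep loop in the question, a POSIX barrier does exactly this. A minimal sketch follows; the thread and iteration counts are made up for the example:

#include <pthread.h>
#include <stdio.h>

#define NB_THREADS    4
#define NB_ITERATIONS 3

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int iter = 0; iter < NB_ITERATIONS; iter++) {
        /* "Do something": runs in parallel in all threads. */
        printf("thread %ld: iteration %d\n", id, iter);
        /* Nobody starts iteration iter+1 until all have finished iter. */
        pthread_barrier_wait(&barrier);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NB_THREADS];
    pthread_barrier_init(&barrier, NULL, NB_THREADS);
    for (long i = 0; i < NB_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NB_THREADS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}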

Goroutines are cooperatively scheduled. Does that mean that goroutines that don't yield execution will cause goroutines to run one by one?

From: http://blog.nindalf.com/how-goroutines-work/
As the goroutines are scheduled cooperatively, a goroutine that loops continuously can starve other goroutines on the same thread.
Goroutines are cheap and do not cause the thread on which they are multiplexed to block if they are blocked on
network input
sleeping
channel operations or
blocking on primitives in the sync package.
So given the above, say that you have some code like this that does nothing but loop a random number of times and print the sum:
func sum(x int) {
    sum := 0
    for i := 0; i < x; i++ {
        sum += i
    }
    fmt.Println(sum)
}
if you use goroutines like
go sum(100)
go sum(200)
go sum(300)
go sum(400)
will the goroutines run one by one if you only have one thread?
A compilation and tidying of all of creker's comments.
Preemptive means that the kernel (or runtime) allows threads to run for a specific amount of time and then yields execution to other threads, without them doing or knowing anything. In OS kernels that's usually implemented using hardware interrupts, so a process can't block the entire OS. In cooperative multitasking, a thread has to explicitly yield execution to others. If it doesn't, it could block the whole process or even the whole machine. That's how Go does it: it has some very specific points where a goroutine can yield execution. But if a goroutine just executes for {}, it will lock the entire process.
However, the quote doesn't mention recent changes in the runtime: fmt.Println(sum) could cause other goroutines to be scheduled, as newer runtimes call the scheduler on function calls.
If you don't have any function calls, just some math, then yes, the goroutine will lock the thread until it exits or hits something that could yield execution to others. That's why for {} doesn't work in Go. Even worse, it can still lead to the process hanging even if GOMAXPROCS > 1, because of how the GC works, but in any case you shouldn't depend on that. It's good to understand that stuff, but don't count on it. There is even a proposal to insert scheduler calls in loops like yours.
The main thing Go's runtime does is try its best to let every goroutine execute and starve no one. How it does that is not specified in the language specification and might change in the future. If the proposal about loops is implemented, then switching could occur even without function calls. At the moment, the only thing you should remember is that in some circumstances function calls can cause a goroutine to yield execution.
To explain the switching in Akavall's answer: when fmt.Printf is called, the first thing it does is check whether it needs to grow the stack, and that calls the scheduler. It MIGHT switch to another goroutine. Whether it will switch depends on the state of the other goroutines and the exact implementation of the scheduler. Like any scheduler, it probably checks whether there are starving goroutines that should be executed instead. With many iterations, a function call has a greater chance of making a switch, because the others have been starving longer. With few iterations, the goroutine finishes before starvation happens.
For what it's worth, I can produce a simple example where it is clear that the goroutines are not run one by one:
package main

import (
    "fmt"
    "runtime"
)

func sum_up(name string, count_to int, print_every int, done chan bool) {
    my_sum := 0
    for i := 0; i < count_to; i++ {
        if i%print_every == 0 {
            fmt.Printf("%s working on: %d\n", name, i)
        }
        my_sum += 1
    }
    fmt.Printf("%s: %d\n", name, my_sum)
    done <- true
}

func main() {
    runtime.GOMAXPROCS(1)
    done := make(chan bool)
    const COUNT_TO = 10000000
    const PRINT_EVERY = 1000000
    go sum_up("Amy", COUNT_TO, PRINT_EVERY, done)
    go sum_up("Brian", COUNT_TO, PRINT_EVERY, done)
    <-done
    <-done
}
Result:
....
Amy working on: 7000000
Brian working on: 8000000
Amy working on: 8000000
Amy working on: 9000000
Brian working on: 9000000
Brian: 10000000
Amy: 10000000
Also if I add a function that just does a forever loop, that will block the entire process.
func dumb() {
    for {
    }
}
This blocks at some random point:
go dumb()
go sum_up("Amy", COUNT_TO, PRINT_EVERY, done)
go sum_up("Brian", COUNT_TO, PRINT_EVERY, done)
Well, let's say runtime.GOMAXPROCS is 1. The goroutines then run concurrently, one at a time. Go's scheduler just gives the upper hand to one of the spawned goroutines for a certain time, then to another, and so on until all are finished.
So you never know which goroutine is running at a given time; that's why you need to synchronize your variables. Given your example, it's unlikely that sum(100) will run fully, then sum(200) will run fully, and so on.
The most probable outcome is that one goroutine will do some iterations, then another will do some, then another again, etc.
So, overall, they are not sequential, even if there is only one goroutine active at a time (GOMAXPROCS=1).
So what's the advantage of using goroutines? Plenty. It means that you can run an operation in a goroutine when it is not crucial, and just continue the main program. Imagine an HTTP web server: treating each request in a goroutine is convenient because you do not have to care about queueing them and running them sequentially; you let Go's scheduler do the job.
Plus, sometimes goroutines are inactive, because you called time.Sleep or because they are waiting for an event, like receiving something from a channel. Go can see this and just executes other goroutines while some are in those idle states.
I know there are a handful of advantages I didn't present, but I don't know concurrency that much to tell you about them.
EDIT:
Related to your example code: if you push each iteration's value onto a channel, run that on one processor, and print the content of the channel, you'll see that there is no context switching between goroutines: each one runs sequentially after another one is done.
However, it is not a general rule and is not specified in the language. So, you should not rely on these results for drawing general conclusions.
@Akavall Try adding a sleep after creating the dumb goroutine; the Go runtime then never executes the sum_up goroutines.
From that, it looks like the Go runtime spawns the next goroutines immediately; it may execute the sum_up goroutines until it schedules the dumb() goroutine to run. Once dumb() is scheduled, the runtime won't schedule the sum_up goroutines to run anymore, as dumb runs for {} forever.

What is progress and bounded waiting in 'critical section algorithm'?

Consider the following code
// process i:
flag[i] = true;
turn = j;
while (flag[j] == true && turn == j);
<critical section>
flag[i] = false;
<remainder section>

// process j:
flag[j] = true;
turn = i;
while (flag[i] == true && turn == i);
<critical section>
flag[j] = false;
<remainder section>
I am certain that the above code satisfies the mutual exclusion property, but I am uncertain about the following:
What exactly does progress mean, and does the above code satisfy it? The above code requires the critical section to be executed in strict alternation. Is that considered progress?
From what I see, the above code does not maintain any information on the number of times a process has entered the critical section. Would that mean that the above code does not satisfy bounded waiting?
Progress means that the process will eventually do some work - an example of where this may not be the case is when a low-priority thread might be pre-empted and rolled back by high-priority threads. Once your processes reach their critical section they won't be pre-empted, so they'll make progress.
Bounded waiting means that the process will eventually gain control of the processor - an example of where this may not be the case is when another process has a non-terminating loop in a critical section with no possibility of the thread being interrupted. Your code has bounded waiting IF the critical sections terminate AND the remainder section will not re-invoke the process's critical section (otherwise a process might keep running its critical section without the other process ever gaining control of the processor).
Progress of the processes means that the processes don't enter a deadlock situation, and hence their execution continues independently. Actually, at any moment of time, only one of process i or process j will be executing its critical section code, and hence consistency will be maintained. So the progress of both processes is being met successfully in the given code.
Next, this particular code is for processes which are intended to run only once, and hence they won't reach their critical section code again. It is for a single execution of each process.
Bounded waiting says that a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
This particular piece of code has nothing to do with bounded waiting and is for trivial cases where processes execute for once only!
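For anyone who wants to experiment, here is a runnable C sketch of the algorithm from the question, using C11 atomics so the loads and stores are sequentially consistent (a plain-int version could be reordered by the compiler or CPU and would not be correct); the loop count is made up for the demo:

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int  turn;
static int shared = 0;                 /* protected by the algorithm */

static void *process(void *arg)
{
    int i = (int)(intptr_t)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);  /* I want to enter */
        atomic_store(&turn, j);        /* but you go first if we tie */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                          /* busy-wait */
        shared++;                      /* critical section */
        atomic_store(&flag[i], false); /* remainder section follows */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, process, (void *)(intptr_t)0);
    pthread_create(&t1, NULL, process, (void *)(intptr_t)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}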

pthreads: If I increment a global from two different threads, can there be sync issues?

Suppose I have two threads A and B that are both incrementing a global variable "count". Each thread runs a for loop like this one:
for (int i = 0; i < 1000; i++)
    count++; // alternatively, count = count + 1;
i.e. each thread increments count 1000 times, and let's say count starts at 0. Can there be sync issues in this case? Or will count correctly equal 2000 when the execution is finished? I guess since the statement "count = count + 1" may break down into TWO assembly instructions, there is potential for the other thread to be swapped in between these two instructions? Not sure. What do you think?
Yes there can be sync issues in this case. You need to either protect the count variable with a mutex, or use a (usually platform specific) atomic operation.
Example using pthread mutexes
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

for (int i = 0; i < 1000; i++) {
    pthread_mutex_lock(&mutex);
    count++;
    pthread_mutex_unlock(&mutex);
}
Using atomic ops
There is a prior discussion of platform specific atomic ops here:
UNIX Portable Atomic Operations
If you only need to support GCC, this approach is straightforward. If you're supporting other compilers, you'll probably have to make some per-platform decisions.
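As an illustration, here is the loop from the question with an atomic increment, written against C11's <stdatomic.h> (a portable alternative to the GCC-specific builtins discussed in the link above):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int count = 0;

static void *worker(void *unused)
{
    (void)unused;
    for (int i = 0; i < 1000; i++)
        atomic_fetch_add(&count, 1);   /* one indivisible read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("count = %d\n", atomic_load(&count));   /* always 2000 */
    return 0;
}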
Count clearly needs to be protected with a mutex or other synchronization mechanism.
At a fundamental level, the count++ statement breaks down to:
load count into register
increment register
store count from register
A context switch could occur before/after any of those steps, leading to situations like:
Thread 1: load count into register A (value = 0)
Thread 2: load count into register B (value = 0)
Thread 1: increment register A (value = 1)
Thread 1: store count from register A (value = 1)
Thread 2: increment register B (value = 1)
Thread 2: store count from register B (value = 1)
As you can see, both threads completed one iteration of the loop, but the net result is that count was only incremented once.
You probably would also want to make count volatile to force loads & stores to go to memory, since a good optimizer would likely keep count in a register unless otherwise told.
Also, I would suggest that if this is all the work that's going to be done in your threads, performance will dramatically drop from all the mutex locking/unlocking required to keep it consistent. Threads should have much bigger work units to perform.
Yes, there can be sync problems.
As an example of the possible issues, there is no guarantee that an increment itself is an atomic operation.
In other words, if one thread reads the value for increment then gets swapped out, the other thread could come in and change it, then the first thread will write back the wrong value:
+-----+
| 0 | Value stored in memory (0).
+-----+
| 0 | Thread 1 reads value into register (r1 = 0).
+-----+
| 0 | Thread 2 reads value into register (r2 = 0).
+-----+
| 1 | Thread 2 increments r2 and writes back.
+-----+
| 1 | Thread 1 increments r1 and writes back.
+-----+
So you can see that, even though both threads have tried to increment the value, it's only increased by one.
This is just one of the possible problems. It may also be that the write itself is not atomic and one thread may update only part of the value before being swapped out.
If you have atomic operations that are guaranteed to work in your implementation, you can use them. Otherwise, use mutexes. That's what pthreads provides for synchronisation (and guarantees to work), so it's the safest approach.
I guess since the statement "count = count + 1" may break down into TWO assembly instructions, there is potential for the other thread to be swapped in between these two instructions? Not sure. What do you think?
Don't think like this. You're writing C code and pthreads code. You don't have to ever think about assembly code to know how your code will behave.
The pthreads standard does not define the behavior when one thread accesses an object while another thread is, or might be, modifying it. So unless you're writing platform-specific code, you should assume this code can do anything -- even crash.
The obvious pthreads fix is to use mutexes. If your platform has atomic operations, you can use those.
I strongly urge you not to delve into detailed discussions about how it might fail or what the assembly code might look like. Regardless of what you might or might not think compilers or CPUs might do, the behavior of the code is undefined. And it's too easy to convince yourself you've covered every way you can think of that it might fail and then you miss one and it fails.

Interview Question on .NET Threading

Could you describe two methods of synchronizing multi-threaded write access performed on a class member?

Could anyone help me understand what this question is asking and what the right answer is?
When you change data in C#, something that looks like a single operation may be compiled into several instructions. Take the following class:
public class Number {
    private int a = 0;
    public void Add(int b) {
        a += b;
    }
}
When you build it, you get the following IL code:
IL_0000: nop
IL_0001: ldarg.0
IL_0002: dup
// Pushes the value of the private variable 'a' onto the stack
IL_0003: ldfld int32 Simple.Number::a
// Pushes the value of the argument 'b' onto the stack
IL_0008: ldarg.1
// Adds the top two values of the stack together
IL_0009: add
// Sets 'a' to the value on top of the stack
IL_000a: stfld int32 Simple.Number::a
IL_000f: ret
Now, say you have a Number object and two threads call its Add method like this:
number.Add(2); // Thread 1
number.Add(3); // Thread 2
If you want the result to be 5 (0 + 2 + 3), there's a problem. You don't know when these threads will execute their instructions. Both threads could execute IL_0003 (pushing zero onto the stack) before either executes IL_000a (actually changing the member variable) and you get this:
a = 0 + 2; // Thread 1
a = 0 + 3; // Thread 2
The last thread to finish 'wins' and at the end of the process, a is 2 or 3 instead of 5.
So you have to make sure that one complete set of instructions finishes before the other set. To do that, you can:
1) Lock access to the class member while it's being written, using one of the many .NET synchronization primitives (like lock, Mutex, ReaderWriterLockSlim, etc.) so that only one thread can work on it at a time.
2) Push write operations into a queue and process that queue with a single thread. As Thorarin points out, you still have to synchronize access to the queue if it isn't thread-safe, but it's worth it for complex write operations.
There are other techniques. Some (like Interlocked) are limited to particular data types, and there are even more (like the ones discussed in Non-blocking synchronization and Part 4 of Joseph Albahari's Threading in C#), though they are more complex: approach them with caution.
In multithreaded applications, there are many situations where simultaneous access to the same data can cause problems. In such cases synchronization is required to guarantee that only one thread has access at any one time.
I imagine they mean using the lock-statement (or SyncLock in VB.NET) vs. using a Monitor.
You might want to read this page for examples and an understanding of the concept. However, if you have no experience with multithreaded application design, your inexperience will likely become apparent quickly, should your new employer put you to the test. It's a fairly complicated subject, with many possible pitfalls, such as deadlock.
There is a decent MSDN page on the subject as well.
There may be other options, depending on the type of member variable and how it is to be changed. Incrementing an integer for example can be done with the Interlocked.Increment method.
As an exercise and demonstration of the problem, try writing an application that starts 5 simultaneous threads, each incrementing a shared counter a million times. The intended end result would be a counter at 5 million, but that is (probably) not what you will end up with :)
Edit: I made a quick implementation myself. Sample output:
Unsynchronized counter demo:
expected counter = 5000000
actual counter = 4901600
Time taken (ms) = 67
Synchronized counter demo:
expected counter = 5000000
actual counter = 5000000
Time taken (ms) = 287
There are a couple of ways, several of which are mentioned previously.
ReaderWriterLockSlim is my preferred method. This gives you a database type of locking, and allows for upgrading (although the syntax for that is incorrect in the MSDN last time I looked and is very non-obvious)
lock statements. You treat a read like a write and just prevent access to the variable
Interlocked operations. These perform an operation on a value type in an atomic step. They can be used for lock-free threading (though I really wouldn't recommend it).
Mutexes and Semaphores (haven't used these)
Monitor statements (this is essentially how the lock keyword works)
While I don't mean to denigrate other answers, I would not trust anything that does not use one of these techniques. My apologies if I have forgotten any.
