Peterson's solution with single variable - multithreading

do {
    turn = j;          // j = (1 - i), the other process's index
    while (turn == j);
    // critical section
    turn = j;          // exit section
} while (true);
Can Peterson's algorithm work with just the turn variable? Why is the flag variable required?

No, it cannot. Mutual exclusion still holds with only the turn variable, but the Progress condition is not satisfied.
Here, process j will move forward (i.e. get out of the busy wait at the while()) if and only if the other process changes the turn variable to j's own id.
Thus the progress of process j is in the hands of process i.
For example, suppose process i got busy in a non-critical section above the critical section, or was killed, or is deadlocked. Then poor process j keeps waiting forever.
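For a concrete illustration, here is a hypothetical C11/pthreads sketch of the turn-only protocol above (the thread names, the sleep() stand-in for non-critical work, and the enter_cs/exit_cs helpers are made up for the demo; C11 atomics are used only so the busy-wait actually re-reads the shared variable). Thread j asks for the critical section once while thread i stays busy elsewhere; thread j then spins forever even though nobody is inside the critical section, which is exactly the Progress violation described above.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

/* Shared turn variable of the turn-only protocol. */
static atomic_int turn = 0;

/* Entry and exit sections for the process with index `me`. */
static void enter_cs(int me) {
    int other = 1 - me;
    atomic_store(&turn, other);             /* turn = j, as in the question */
    while (atomic_load(&turn) == other)     /* while (turn == j); */
        ;
}

static void exit_cs(int me) {
    atomic_store(&turn, 1 - me);            /* turn = j (exit section) */
}

static void *thread_j(void *arg)            /* index 1: wants the critical section */
{
    (void)arg;
    puts("j: trying to enter the critical section");
    enter_cs(1);                            /* spins forever: only i can set turn to 1 */
    puts("j: inside the critical section"); /* never printed */
    exit_cs(1);
    return NULL;
}

static void *thread_i(void *arg)            /* index 0: busy outside the critical section */
{
    (void)arg;
    sleep(1);                               /* non-critical work; never touches turn */
    puts("i: finished without ever wanting the critical section");
    return NULL;
}

int main(void)
{
    pthread_t ti, tj;
    pthread_create(&tj, NULL, thread_j, NULL);
    pthread_create(&ti, NULL, thread_i, NULL);
    pthread_join(ti, NULL);
    pthread_join(tj, NULL);                 /* never returns: Progress is violated */
    return 0;
}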

Related

Why does a simplification of Peterson's Algorithm using a single 'turn' variable not provide process synchronization?

I'm reading "Operating System Concepts" and trying to make sense of Peterson's Solution (on page 208), an algorithm for ensuring two processes sharing memory do not access the shared resource at the same time (potentially creating a race condition).
In Peterson's Solution, there are two shared variables that help with synchronization: "boolean flag[2]" and "int turn". "flag[i]" indicates whether a particular process is trying to enter its "critical section," the section where it accesses shared data. "turn" contains one of the two processes' indexes (0 or 1) and indicates which process' turn it is to access the shared data.
Peterson's algorithm (for process i, where the other process is denoted by j) is below:
do {
    # set flag to say I'm trying to run
    flag[i] = true
    # let the other process have a turn
    turn = j
    while (flag[j] == true && turn == j);
    # enter critical section
    flag[i] = false
    # do non-critical remaining stuff
} while (true);
Why does the following simplification of Peterson's algorithm not also provide process synchronization? If I understand why not, I believe it will help me understand Peterson's algorithm.
# initialize turn to i or j
do {
    while (turn == j);
    # enter critical section
    turn = j;
    # do non-critical remaining stuff
} while (true);
In this simplified algorithm, it seems to me that a given process i will continue while looping until the other process j finishes its critical section, at which point j will set "turn = i", allowing i to begin. Why does this algorithm not achieve process synchronization?
Because there is a chance of starvation in the simplified version.
As you mentioned:
j finishes its critical section, at which point j will set "turn = i",
allowing i to begin.
OK, now say process i finishes its critical section and sets turn = j. If process i wants to enter the critical section again, it cannot, because turn = j. The only way for process i to enter the critical section is for process j to enter its critical section again and set turn = i.
So, as you can see, the simplified version requires the processes to enter their critical sections in strict alternation; otherwise it leads to starvation.

How does Peterson's solution solve bounded waiting?

The dinosaur book says that a solution to the critical-section problem must satisfy Mutual Exclusion, Progress, and Bounded Waiting.
This is the structure of a process as described under Peterson's solution in the book:
do {
    flag[i] = True;
    turn = j;
    while (flag[j] && turn == j);
    // critical section
    flag[i] = False;
    // remainder section
} while (True);
I don't understand how this solves the bounded-waiting problem. Bounded waiting says that there is a limit to how many times a process can be kept out of its critical section, so that no process starves. But there is no counter for that here, and the processes share just these two variables in this solution:
int turn;
boolean flag[2];
Bounded waiting says that a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Here, when both processes are contending, Peterson's solution makes them alternate, so process[0] and process[1] get access to the critical section in turn. Bounded waiting would be violated if some process could get the critical section repeatedly while starving the other, but that situation is not possible because of this alternation.
It is the 'turn' variable that ensures bounded waiting.
First of all, note that Peterson's solution is a two-process solution.
Now the answer...
Here you can see that when process i reaches the loop
while (flag[j] && turn == j);
it lets process j enter its critical section. Process i will only enter its own critical section when either turn != j or flag[j] == false.
Let's say flag[j] = true and turn = j. In this case process i has to wait and can't enter its critical section. Now, as soon as process j is done with its critical section, it executes the line
flag[j] = false;
which lets process i get out of the loop. Even if process j immediately tries to enter its critical section again, its entry section sets turn = i, so process j gets stuck in the same loop while process i executes its critical section without waiting any longer (the bound on waiting is 1).
So even if process j is fast and tries to enter its critical section as many times as it wants, process i will not starve once it is ready to execute its critical section. That is bounded waiting: there is a bound (here, 1) on the number of times another process can execute its critical section after a process has requested entry and before that request is granted.
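To see the algorithm in runnable form, here is a hypothetical C11/pthreads sketch (the worker function, the iteration count, and shared_counter are made up for the demo). It uses sequentially consistent atomics for flag and turn because plain variables would let the compiler and CPU reorder the stores and loads and break the reasoning above; with them in place, the two threads increment the plain counter without losing updates.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];                 /* "I want to enter" for each thread */
static atomic_int  turn;                    /* whose turn it is to defer          */
static int shared_counter = 0;              /* data protected by the protocol     */

static void *worker(void *arg)
{
    int i = *(int *)arg;                    /* my index: 0 or 1 */
    int j = 1 - i;                          /* the other thread's index */

    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);       /* flag[i] = true */
        atomic_store(&turn, j);             /* turn = j       */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                               /* busy-wait: at most one CS of j ahead of us */

        shared_counter++;                   /* critical section */

        atomic_store(&flag[i], false);      /* exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    printf("counter = %d (expected 200000)\n", shared_counter);
    return 0;
}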

What is progress and bounded waiting in 'critical section algorithm'?

Consider the following code
// process i:                              // process j:
flag[i] = true;                            flag[j] = true;
turn = j;                                  turn = i;
while (flag[j] == true && turn == j);      while (flag[i] == true && turn == i);
<critical section>                         <critical section>
flag[i] = false;                           flag[j] = false;
<remainder section>                        <remainder section>
I am certain that the above code satisfies the mutual exclusion property, but I am uncertain about the following:
What exactly does progress mean, and does the above code satisfy it? The above code requires the critical sections to be executed in strict alternation. Is that considered progress?
From what I see, the above code does not maintain any information on the number of times a process has entered the critical section. Does that mean the above code does not satisfy bounded waiting?
Progress means that the process will eventually do some work - an example of where this may not be the case is when a low-priority thread might be pre-empted and rolled back by high-priority threads. Once your processes reach their critical section they won't be pre-empted, so they'll make progress.
Bounded waiting means that the process will eventually gain control of the processor - an example of where this may not be the case is when another process has a non-terminating loop in a critical section with no possibility of the thread being interrupted. Your code has bounded waiting IF the critical sections terminate AND the remainder section will not re-invoke the process's critical section (otherwise a process might keep running its critical section without the other process ever gaining control of the processor).
Progress of the processes means that the processes do not end up in a deadlock and hence their execution continues. At any moment in time, only one of process i and process j will be executing its critical section code, so consistency is maintained. So progress is met successfully in the given code.
Next, this particular code is for processes that are intended to run only once, so they will not reach their critical section code again. It is for a single execution of each process.
Bounded waiting says that a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
This particular piece of code has nothing to do with bounded waiting; it covers only the trivial case where each process executes once.

Reusable Barrier Algorithm

I'm looking into the Reusable Barrier algorithm from the book "The Little Book Of Semaphores".
The puzzle is on page 31 (Basic Synchronization Patterns/Reusable Barrier), and I have come up with a 'solution' (or not) which differs from the solution from the book (a two-phase barrier).
This is my 'code' for each thread:
# n = 4 threads running
# semaphore: maximum count n, initialized to 0
# mutex: unowned
start:
    mutex.wait()
    counter = counter + 1
    if counter == n:
        semaphore.signal(4)   # add 4 at once
        counter = 0
    mutex.release()
    semaphore.wait()
    # critical section
    semaphore.release()
    goto start
This does seem to work; I've even inserted different sleep timers into different sections of the threads, and they still wait for all the threads to arrive before continuing each and every loop. Am I missing something? Is there a condition under which this will fail?
I've implemented this using the Windows library Semaphore and Mutex functions.
Update:
Thank you to starblue for the answer. It turns out that if, for whatever reason, a thread is slow between mutex.release() and semaphore.wait(), any thread that arrives at semaphore.wait() after a full loop will be able to go through again, since one of the N signals will still be unused.
And having put a Sleep command in thread number 3, I got a result where one can see that thread 3 missed a turn the first time (thread 1 having done 2 turns) and then caught up on the second turn (which was in fact its 1st turn).
Thanks again to everyone for the input.
One thread could run several times through the barrier while some other thread doesn't run at all.
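One common way to repair this, roughly following the book's two-phase idea while keeping the "signal n at once" style of the code above, is to add a second turnstile that no thread can pass until every thread has left the first one, so a fast thread can never loop around and steal a leftover signal. The sketch below is hypothetical and uses POSIX semaphores rather than the Windows Semaphore/Mutex objects from the question; the names N, barrier_wait, and worker are made up.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4                                 /* number of threads meeting at the barrier */

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static sem_t turnstile1, turnstile2;        /* both initialized to 0 */
static int count = 0;

static void barrier_wait(void)
{
    /* Phase 1: wait until all N threads have arrived. */
    pthread_mutex_lock(&mutex);
    if (++count == N)
        for (int k = 0; k < N; k++)
            sem_post(&turnstile1);          /* release all N at once */
    pthread_mutex_unlock(&mutex);
    sem_wait(&turnstile1);

    /* Phase 2: wait until all N threads have left phase 1, so nobody
       can start the next round and reuse one of this round's signals. */
    pthread_mutex_lock(&mutex);
    if (--count == 0)
        for (int k = 0; k < N; k++)
            sem_post(&turnstile2);
    pthread_mutex_unlock(&mutex);
    sem_wait(&turnstile2);
}

static void *worker(void *arg)
{
    int id = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        printf("thread %d reached round %d\n", id, round);
        barrier_wait();                     /* no thread starts the next round early */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];

    sem_init(&turnstile1, 0, 0);
    sem_init(&turnstile2, 0, 0);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, worker, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}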

Simple POSIX threads question

I have this POSIX thread:
void subthread(void)
{
    while (!quit_thread) {
        // do something
        ...
        // don't waste cpu cycles
        if (!quit_thread) usleep(500);
    }
    // free resources
    ...
    // tell main thread we're done
    quit_thread = FALSE;
}
Now I want to terminate subthread() from my main thread. I've tried the following:
quit_thread = TRUE;
// wait until subthread() has cleaned its resources
while(quit_thread);
But it does not work! The while() clause never exits, although my subthread clearly sets quit_thread to FALSE after having freed its resources!
If I modify my shutdown code like this:
quit_thread = TRUE;
// wait until subthread() has cleaned its resources
while(quit_thread) usleep(10);
Then everything is working fine! Could someone explain to me why the first solution does not work and why the version with usleep(10) suddenly works? I know that this is not a pretty solution. I could use semaphores/signals for this but I'd like to learn something about multithreading, so I'd like to know why my first solution doesn't work.
Thanks!
Without a memory fence, there is no guarantee that values written in one thread will appear in another. Most of the pthread primitives introduce a barrier, as do several system calls such as usleep. Using a mutex around both the read and write introduces a barrier, and more generally prevents multi-byte values being visible in partially written state.
You also need to separate the idea of asking a thread to stop executing from the idea of it reporting that it has stopped; you appear to be using the same variable for both.
What's most likely to be happening is that your compiler is not aware that quit_thread can be changed by another thread (because C doesn't know about threads, at least at the time this question was asked). Because of that, it's optimising the while loop to an infinite loop.
In other words, it looks at this code:
quit_thread = TRUE;
while(quit_thread);
and thinks to itself, "Hah, nothing in that loop can ever change quit_thread to FALSE, so the coder obviously just meant to write while (TRUE);".
When you add the call to usleep, the compiler has another think about it and assumes that the function call may change the global, so it plays it safe and doesn't optimise it.
Normally you would mark the variable as volatile to stop the compiler from optimising it but, in this case, you should use the facilities provided by pthreads and join to the thread after setting the flag to true (and don't have the sub-thread reset it, do that in the main thread after the join if it's necessary). The reason for that is that a join is likely to be more efficient than a continuous loop waiting for a variable change since the thread doing the join will most likely not be executed until the join needs to be done.
In your spinning solution, the joining thread will most likely continue to run and suck up CPU grunt.
In other words, do something like:
Main thread                  Child thread
-------------------          -------------------
fStop = false
start Child                  Initialise
Do some other stuff          while not fStop:
fStop = true                     Do what you have to do
                             Finish up and exit
join to Child
Do yet more stuff
And, as an aside, you should technically protect shared variables with mutexes but this is one of the few cases where it's okay, one-way communication where half-changed values of a variable don't matter (false/not-false).
The reason you normally mutex-protect a variable is to stop one thread seeing it in a half-changed state. Let's say you have a two-byte integer for a count of some objects, and it's set to 0x00ff (255).
Let's further say that thread A tries to increment that count but it's not an atomic operation. It changes the top byte to 0x01 but, before it gets a chance to change the bottom byte to 0x00, thread B swoops in and reads it as 0x01ff.
Now that's not going to be very good if thread B wants to do something with the last element counted by that value. It should be looking at 0x0100 but will instead try to look at 0x01ff, the effect of which will be wrong, if not catastrophic.
If the count variable were protected by a mutex, thread B wouldn't be looking at it until thread A had finished updating it, hence no problem would occur.
The reason that doesn't matter with one-way booleans is because any half state will also be considered as true or false so, if thread A was halfway between turning 0x0000 into 0x0001 (just the top byte), thread B would still see that as 0x0000 (false) and keep going (until thread A finishes its update next time around).
And if thread A was turning the boolean into 0xffff, the half state of 0xff00 would still be considered true by thread B so it would do its thing before thread A had finished updating the boolean.
Neither of those two possibilities is bad simply because, in both, thread A is in the process of changing the boolean and it will finish eventually. Whether thread B detects it a tiny bit earlier or a tiny bit later doesn't really matter.
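To make the earlier count example concrete, here is a hypothetical sketch of the mutex-protected counter (the names count, count_lock, increment_count, and read_count are made up); with the lock held around both the update and the read, thread B can only ever see 0x00ff or 0x0100, never the torn 0x01ff.

#include <pthread.h>

static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int count = 0;              /* shared multi-byte counter */

/* Writer: no other thread can observe a half-changed value. */
void increment_count(void)
{
    pthread_mutex_lock(&count_lock);
    count++;                                /* both bytes change while the lock is held */
    pthread_mutex_unlock(&count_lock);
}

/* Reader: always sees a fully updated value. */
unsigned int read_count(void)
{
    pthread_mutex_lock(&count_lock);
    unsigned int c = count;
    pthread_mutex_unlock(&count_lock);
    return c;
}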
The while(quit_thread); is using the value quit_thread was set to on the line before it. Calling a function (usleep) induces the compiler to reload the value on each test.
In any case, this is the wrong way to wait for a thread to complete. Use pthread_join instead.
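For reference, here is a hypothetical sketch of that join-based shutdown, with the original loop body reduced to a sleep and the stop flag made a C11 atomic so the worker is guaranteed to see the main thread's write; the flag is no longer reset by the sub-thread, because pthread_join already tells the main thread that the clean-up is done.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool quit_thread = false;     /* written by main, read by the worker */

static void *subthread(void *arg)
{
    (void)arg;
    while (!atomic_load(&quit_thread)) {
        /* do something */
        usleep(500);                        /* don't waste cpu cycles */
    }
    /* free resources here */
    return NULL;                            /* no flag reset needed: main joins us */
}

int main(void)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, subthread, NULL) != 0)
        return 1;

    sleep(1);                               /* main thread does its own work */

    atomic_store(&quit_thread, true);       /* ask the worker to stop...           */
    pthread_join(tid, NULL);                /* ...and wait until it has cleaned up */

    puts("subthread has finished and cleaned up");
    return 0;
}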
You're "learning" multhithreading the wrong way. The right way is to learn to use mutexes and condition variables; any other solution will fail under some circumstances.
