I'm writing to ask about this question from 'The Little Book of Semaphores' by Allen B. Downey.
Question from 'The Little Book of Semaphores'
Puzzle: Suppose that 100 threads run the following program concurrently (if you are not familiar with Python, the for loop runs the update 100 times):
for i in range(100):
temp = count
count = temp + 1
What is the largest possible value of count after all threads have completed? What is the smallest possible value? Hint: the first question is easy; the second is not.
My understanding is that count is a variable shared by all threads, and that its initial value is 0.
I believe that the largest possible value is 10,000, which occurs when there is no interleaving between threads.
I believe that the smallest possible value is 100. If line 2 is executed for each thread, they will each have a value of temp = 0. If line 3 is then executed for each thread, they will each set count = 1. If the same behaviour occurs in each iteration, the final value of count will be 100.
Is this correct, or is there another execution path that can result in a value smaller than 100 for count?
The worst case that I can think of will leave count equal to two. It's extremely unlikely that this would ever happen in practice, but in theory, it's possible. I'll need to talk about Thread A, Thread B, and 98 other threads:
Thread A reads count as zero, but then it is preempted before it can do anything else,
Thread B is allowed to run 99 iterations of its loop, and 98 other threads all run to completion before thread A finally is allowed to run again,
Thread A writes 1 to count before—are you ready to believe this?—it gets preempted again!
Thread B starts its 100th iteration. It gets as far as reading count as 1 (just now written by thread A) before thread A finally comes roaring back to life and runs to completion,
Thread B is last to cross the finish line after it writes 2 to count.
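For what it's worth, the milder (and far more likely) lost-update outcomes are easy to observe in practice. Here is a minimal sketch (my own, not from the book) that runs the puzzle's program in CPython; sys.setswitchinterval is used only to encourage frequent thread switches:

import sys
import threading

sys.setswitchinterval(1e-6)  # switch threads very often to expose the race

count = 0

def worker():
    global count
    for _ in range(100):
        temp = count         # read the shared variable
        count = temp + 1     # write back; another thread may have written in between

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # 10,000 only if no update was lost; often less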
I am stuck on a counting semaphore problem in my Operating Systems course.
S is a semaphore initialized to 5.
counter = 0 (shared variable)
Assume that the increment operation in line #7 is not atomic.
Now,
1. int counter = 0;
2. Semaphore S = init(5);
3. void parop(void)
4. {
5. wait(S);
6. wait(S);
7. counter++;
8. signal(S);
9. signal(S);
10. }
If five threads execute the function parop concurrently, which of the following program behavior(s) is/are possible?
A. The value of counter is 5 after all the threads successfully complete the execution of parop
B. The value of counter is 1 after all the threads successfully complete the execution of parop
C. The value of counter is 0 after all the threads successfully complete the execution of parop
D. There is a deadlock involving all the threads
What I understand so far is that A and D are correct. A is possible because the threads may execute one after another, say T1 -> T2 -> T3 -> T4 -> T5, and the final value saved will be 5 (so A is one of the correct options).
D is possible because all five threads might execute line 5 before any of them executes line 6; the semaphore then reaches 0 and every thread blocks on line 6, which is a deadlock.
Can anyone please help me understand why B is also a correct answer?
Thanks in advance; any help will be highly appreciated.
Imagine thread 1 gets to line 7 before any other thread, and line 7 is implemented as three instructions:
7_1: load counter, %r0
7_2: add $1, %r0
7_3: store %r0, counter
For some reason (e.g. an interrupt, preemption), thread 1 stops at instruction 7_2; so it has loaded the value 0 into register %r0.
Next, threads 2..5 all run through this sequence, leaving counter at, say, 4.
Thread 1 is rescheduled, increments %r0 to the value 1 and stores it into counter.
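If it helps, here is a small Python sketch (my own construction, not part of the original question; the Event objects and function names are mine) that forces exactly this interleaving, so the final value 1 can be reproduced deterministically instead of waiting for an unlucky preemption:

import threading

counter = 0
S = threading.Semaphore(5)
t1_loaded = threading.Event()    # thread 1 has performed its "load"
t1_resume = threading.Event()    # thread 1 may finish its increment

def parop_thread1():
    global counter
    S.acquire(); S.acquire()     # lines 5-6: wait(S); wait(S)
    temp = counter               # 7_1: load counter (reads 0)
    t1_loaded.set()
    t1_resume.wait()             # stand-in for being preempted before the store
    counter = temp + 1           # 7_2/7_3: add and store (writes 1)
    S.release(); S.release()     # lines 8-9: signal(S); signal(S)

def parop_other():
    global counter
    S.acquire(); S.acquire()
    counter += 1
    S.release(); S.release()

t1 = threading.Thread(target=parop_thread1)
t1.start()
t1_loaded.wait()
for _ in range(4):               # threads 2..5 run to completion meanwhile
    t = threading.Thread(target=parop_other)
    t.start()
    t.join()
t1_resume.set()
t1.join()
print(counter)                   # 1: thread 1's stale store overwrites the other increments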
I have a question regarding the actual synchronization points in the following C-like pseudocode examples. In our slides the synchronization point is shown to occur at the point indicated below.
Two process 2 way synchronization, x and y = 0 to start
Process 1
signal(x);
//Marked as sync point
wait(y);
Process 2
signal(y);
//This arrow isn't as exact but appears to be near the middle again.
wait(x);
Now, for just two-process 2-way sync this seems to make sense. However, when expanding this to 3-process 3-way sync, this logic seems to break down. There are no arrows given in the slide deck.
3 Process 3 Way Synchronization (S0, S1, S2 = 0 to start)
Process 0
signal(S0);
signal(S0);
wait(S1);
wait(S2);
Process 1
signal(S1);
signal(S1);
wait(S0);
wait(S2);
Process 2
signal(S2);
signal(S2);
wait(S0);
wait(S1);
Now I find that the sync point can't actually be between the signals and the waits. For example:
Let's say Process 0 runs first and signals S0 once, so S0 = 1. Before the second signal(S0) can run, the process is interrupted and Process 1 runs next. Say only one signal(S1) runs before that process is interrupted too, so S1 = 1. Now Process 2 runs and is not interrupted, so both of its signal(S2) calls run and S2 = 2. It continues: wait(S0) runs, which decrements S0 by 1, so S0 now equals 0; Process 2 is still allowed to continue because S0's value is not negative. Then wait(S1) runs and the same thing happens with S1.
At this point Process 2 is done running, yet Process 0 and Process 1 did not finish their signals. If the sync point is truly in between the signals and the waits, then this solution to 3-process 3-way sync is incorrect.
A similar issue can arise in the solution for 3-process 3-way synchronization that allows each process to run more than one instance of itself at a time. Attached is that slide, but I will not explain why the "middle" point in the process can't be the sync point, as I already have a huge wall of text.
Please let me know which way is correct; no amount of googling has given me an answer. I will include all relevant slides.
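In case it helps to experiment with this interleaving, here is a runnable transcription of the 3-process pattern above using Python's threading module (my own transcription; note that threading.Semaphore blocks when the count is already 0 rather than letting it go negative, but the scenario described plays out the same way). Inserting time.sleep() calls between the signals and the waits makes it easy to try the schedule described:

import threading

S0 = threading.Semaphore(0)
S1 = threading.Semaphore(0)
S2 = threading.Semaphore(0)

def process0():
    S0.release(); S0.release()   # signal(S0); signal(S0)
    S1.acquire()                 # wait(S1)
    S2.acquire()                 # wait(S2)
    print("Process 0 is past the synchronization point")

def process1():
    S1.release(); S1.release()   # signal(S1); signal(S1)
    S0.acquire()                 # wait(S0)
    S2.acquire()                 # wait(S2)
    print("Process 1 is past the synchronization point")

def process2():
    S2.release(); S2.release()   # signal(S2); signal(S2)
    S0.acquire()                 # wait(S0)
    S1.acquire()                 # wait(S1)
    print("Process 2 is past the synchronization point")

threads = [threading.Thread(target=f) for f in (process0, process1, process2)]
for t in threads:
    t.start()
for t in threads:
    t.join()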
This question came up in a practice exam, and I do not really understand how they arrived at the answers that are given as correct.
I was hoping for some help understanding the question and how to answer it.
Thank you!
I think it all comes to this:
wait (P): If the value of semaphore variable is not negative, decrements it by 1. If the semaphore variable is now negative, the process executing wait is blocked (i.e., added to the semaphore's queue) until the value is greater or equal to 1. Otherwise, the process continues execution, having used a unit of the resource.
Source and More information
For the correct answers:
1) Since the semaphore is initially equal to 0, thread 3 will block on P(S), waiting for another thread to do V(S). This only happens in thread 1, and only after A has completed. So no matter how long statement A takes to execute, thread 3 will wait until the instruction V(S) is executed. Therefore A is always executed before F.
2) The same concept applies with B and G. Before G you have to execute P(T) and this will wait for an instruction V(T). This only happens after B has been executed.
3) Since A is executed before F as demonstrated in (1), and F is always executed before G, A is always executed before G.
As for the incorrect answers:
a) A is executed before E? Maybe, but not always. Because thread 2 does not have to wait on any semaphore, it could execute B, V(T) and E before thread 1 finishes A, and in that case statement (a) is false.
b) B is executed before F? Maybe, but not always. Why? To execute F, thread 3 only depends on thread 1 (via the semaphore S), so C and A can execute very quickly and thread 3 can go on to F while B is still executing because it is very slow.
d) C is executed before D? Maybe, but not always. Again, C might take a long time to execute, and thread 1, since it does not have to wait for any semaphore, could execute all of its instructions very quickly, before C is completed.
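The exam program itself is not included above, so purely as an illustration, here is a Python sketch of the structure this answer implies (the statement letters, semaphore names and thread bodies are my reconstruction, inferred only from the dependencies described, and may not match the original exam code exactly):

import threading

S = threading.Semaphore(0)   # P = acquire, V = release
T = threading.Semaphore(0)

def thread1():
    print("A")   # statement A
    S.release()  # V(S)
    print("D")   # statement D

def thread2():
    print("B")   # statement B
    T.release()  # V(T)
    print("E")   # statement E

def thread3():
    print("C")   # statement C
    S.acquire()  # P(S): blocks until thread 1 has done A and V(S)
    print("F")   # statement F
    T.acquire()  # P(T): blocks until thread 2 has done B and V(T)
    print("G")   # statement G

threads = [threading.Thread(target=f) for f in (thread1, thread2, thread3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

With this structure, A always precedes F, B always precedes G, and F always precedes G (hence A precedes G), while A-before-E, B-before-F and C-before-D are not guaranteed.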
I am working with semaphores in Linux. I would like to know whether the semaphore value can ever be incremented beyond the value it was initialized with, and if so, when that can happen.
For example, the semaphore value is initialized to 1.
If I increment it twice in a row using up(sem), will the value of the semaphore go beyond 1?
struct semaphore sem1;

void x(void)
{
    sema_init(&sem1, 1);   /* counting semaphore, initial value 1 */
    down(&sem1);

    /* ... some code implementation ... */

    up(&sem1);   /* I understand this increments the value back to 1. */
    up(&sem1);   /* What exactly does this statement do to the semaphore?
                    Will it increment the value to 2? If so, what is the
                    meaning of this statement? */
}
Yes it will increment it to 2. The effect is that the next two semaphore down calls will run without blocking. The general use case of semaphores is to protect a pool of resources. If there is 1 resource then the max expected value of the semaphore will be 1. If there are 2 resources then max expected value is 2 and so on. So whether incrementing the semaphore to 2 is correct or not depends upon the context. If only 1 process should get past the semaphore at any given time then incrementing to 2 is a bug in the code. If 2 or more processes are allowed then incrementing to 2 is allowable.
This is a simplified explanation. For more details look up "counting semaphores". The other type of semaphore which you may be thinking of is "binary semaphores" which are either 0 or 1.
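If it is useful to see the effect concretely, here is a small user-space sketch with Python's threading.Semaphore (not the kernel API from the question, and the variable names are mine) showing that an extra release pushes the count above its initial value, so two later acquires both succeed without blocking:

import threading

sem = threading.Semaphore(1)        # initialized to 1, like sema_init(&sem1, 1)

sem.acquire()                       # like down(&sem1): value 1 -> 0
# ... some code implementation ...
sem.release()                       # like the first up(&sem1): value 0 -> 1
sem.release()                       # like the second up(&sem1): value 1 -> 2

print(sem.acquire(blocking=False))  # True: value 2 -> 1
print(sem.acquire(blocking=False))  # True: value 1 -> 0
print(sem.acquire(blocking=False))  # False: a third acquire would have to block

Python also offers threading.BoundedSemaphore, which raises an error on a release that would exceed the initial value; that is one way to catch the "bug" case described above.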
I'm looking into the Reusable Barrier algorithm from the book "The Little Book Of Semaphores" (archived here).
The puzzle is on page 31 (Basic Synchronization Patterns/Reusable Barrier), and I have come up with a 'solution' (or not) which differs from the solution from the book (a two-phase barrier).
This is my 'code' for each thread:
# n = 4; threads running
# semaphore = n max., initialized to 0
# mutex, unowned.
start:
mutex.wait()
counter = counter + 1
if counter == n:
    semaphore.signal(4)  # add 4 at once
    counter = 0
mutex.release()
semaphore.wait()
# critical section
semaphore.release()
goto start
This does seem to work: I've even inserted different sleep timers into different sections of the threads, and they still wait for all the threads to arrive before continuing each and every loop. Am I missing something? Is there a condition under which this will fail?
I've implemented this using the Windows library Semaphore and Mutex functions.
Update:
Thank you to starblue for the answer. It turns out that if, for whatever reason, a thread is slow between mutex.release() and semaphore.wait(), any thread that arrives at semaphore.wait() again after a full loop will be able to go through a second time, since one of the N signals is still left unused.
After putting a Sleep call in thread number 3, I got a result where one can see that thread 3 missed a turn the first time, thread 1 having done 2 turns by then, and then caught up on the second turn (which was in fact its 1st turn).
Thanks again to everyone for the input.
One thread could run several times through the barrier while some other thread doesn't run at all.
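For comparison, here is a sketch of the book's two-phase barrier written with Python's threading primitives (the variable names are mine and details may differ slightly from the printed solution). The second turnstile is what prevents a fast thread from looping around and slipping through the first turnstile again before everyone has left the previous round:

import threading

n = 4
count = 0
mutex = threading.Lock()
turnstile = threading.Semaphore(0)    # first turnstile, closed initially
turnstile2 = threading.Semaphore(1)   # second turnstile, open initially

def barrier():
    global count

    # Phase 1: wait until all n threads have arrived.
    with mutex:
        count += 1
        if count == n:
            turnstile2.acquire()      # lock the second turnstile
            turnstile.release()       # unlock the first turnstile
    turnstile.acquire()
    turnstile.release()               # each thread passes the first turnstile once

    # Phase 2: wait until all n threads have left before the next round begins.
    with mutex:
        count -= 1
        if count == 0:
            turnstile.acquire()       # lock the first turnstile again
            turnstile2.release()      # unlock the second turnstile
    turnstile2.acquire()
    turnstile2.release()              # each thread passes the second turnstile once

def worker(i):
    for round_no in range(3):
        print(f"thread {i} reached round {round_no}")
        barrier()                     # no thread can start the next round early

threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()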