I am trying to write a monitor solution for the sleeping barber problem with two barbers and three types of customers: those who will wait only for barber 1, those who will wait only for barber 2, and those who don't care which barber cuts their hair.
I was hoping for guidance on this problem.
My thoughts so far are that the algorithm will keep a single list of waiting customers and use procedures such as:
try_to_get_haircut()
if_not_first()
wake_up_barber()
wait_for_haircut()
Below is a one-barber solution that I hope can serve as a guide.
monitor sleeping_barber {
    condition wait_for_cust, wait_for_barber;
    int wait;   // number of customers in the waiting room

    // called by the barber before each haircut
    entry barber() {
        if (wait == 0) then cwait(wait_for_cust);  // sleep until a customer arrives
        wait = wait - 1;
        csignal(wait_for_barber);                  // call the next customer to the chair
    }

    // called by each customer (seat_num = number of waiting-room chairs)
    entry cut_customer_hair() {
        if (wait < seat_num) {
            wait = wait + 1;
            csignal(wait_for_cust);   // wake the barber if he is asleep
            cwait(wait_for_barber);   // wait to be called to the chair
            do_haircut();
        }
        // if the waiting room is full, the customer simply leaves
    }

    { wait = 0; }   // monitor initialization
}
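For what it's worth, here is a rough Go translation of that one-barber monitor using sync.Cond, in case seeing it in a real language helps. The names mirror the pseudocode; seatNum is an assumption (seat_num is never declared in the pseudocode), and the if around cwait becomes a for loop because a woken goroutine should re-check its condition.

package main

import "sync"

const seatNum = 5 // assumed number of waiting-room chairs

type shop struct {
	mu            sync.Mutex
	waitForCust   *sync.Cond // the pseudocode's wait_for_cust
	waitForBarber *sync.Cond // the pseudocode's wait_for_barber
	waiting       int        // the pseudocode's wait counter
}

func newShop() *shop {
	s := &shop{}
	s.waitForCust = sync.NewCond(&s.mu)
	s.waitForBarber = sync.NewCond(&s.mu)
	return s
}

// barber is called by the barber once per haircut.
func (s *shop) barber() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for s.waiting == 0 { // sleep until a customer arrives
		s.waitForCust.Wait()
	}
	s.waiting--
	s.waitForBarber.Signal() // call the next customer to the chair
}

// getHaircut is called by each customer; it returns false if the
// waiting room is full and the customer leaves.
func (s *shop) getHaircut() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.waiting >= seatNum {
		return false
	}
	s.waiting++
	s.waitForCust.Signal() // wake the barber if he is asleep
	s.waitForBarber.Wait() // wait to be called to the chair
	return true
}

func main() {
	s := newShop()
	go func() { // the barber works forever
		for {
			s.barber()
		}
	}()
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // a few customers
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.getHaircut()
		}()
	}
	wg.Wait()
}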
I have a stress test issue that I want to solve with simple synchronization in Go. So far I have tried to find documentation on my specific use case regarding synchronization in Go, but didn't find anything that fits.
To be a bit more specific:
I must fulfill a task where I have to start a large number of threads (in this example only illustrated with two threads) from the main routine. All of the initiated workers are supposed to perform some initialization actions by themselves, in no particular order, until they reach a small sequence of commands that I want all goroutines to execute at once, which is why I want the goroutines to synchronize with each other. It is vital for my task that the delay caused by the main routine, which instantiates all the other goroutines, does not affect the true parallelism of the workers' execution (at the label #maximum parallel in the comments). For this purpose I initialize a wait group with the number of running goroutines in the main routine and pass it to all routines so they can synchronize each other's workflow.
The code looks similar to this example:
package main

import "sync"
func worker_action(wait_group *sync.WaitGroup) {
// ...
// initialization
// ...
defer wait_group.Done()
wait_group.Wait() // #label: wait
// sequence of maximum parallel instructions // #label: maximum parallel
// ...
}
func main() {
var numThreads int = 2 // the number of threads shall be much higher for the actual stress test
var wait_group sync.WaitGroup
wait_group.Add(numThreads)
for i := 0; i < numThreads; i++ {
go worker_action(&wait_group)
}
// ...
}
Unfortunately my setup runs into a deadlock as soon as all goroutines have reached the Wait instruction (labeled #label: wait in the comments). This is true for any number of threads I start from the main routine (even two threads are caught in a deadlock in no time).
From my point of view a deadlock should not occur, because immediately before the Wait instruction each goroutine executes Done on the same wait group.
Do I have a wrong understanding of how wait groups work? Is it, for instance, not allowed to call Wait inside a goroutine other than the main routine? Or can someone give me a hint on what else I am missing?
Thank you very much in advance.
EDIT:
Thanks a lot @tkausl. It was indeed the unnecessary "defer" that caused the problem. I do not know how I could not see it myself.
There are several issues in your code. First, the form: idiomatic Go uses camelCase, and wg is a better name for the WaitGroup.
More important is where your code waits: not inside your goroutines, but in the main func:
package main

import "sync"

func workerAction(wg *sync.WaitGroup) {
// ...
// initialization
// ...
defer wg.Done()
// wg.Wait() // #label: wait
// sequence of maximum parallel instructions // #label: maximum parallel
// ...
}
func main() {
var numThreads int = 2 // the number of threads shall be much higher for the actual stress test
var wg sync.WaitGroup
wg.Add(numThreads)
for i := 0; i < numThreads; i++ {
go workerAction(&wg)
}
wg.Wait() // you need to wait here
// ...
}
Again thanks @tkausl. The issue was resolved by removing the unnecessary "defer" instruction from the line that was meant to let the worker goroutines increment the number of finished threads.
I.e. "defer wait_group.Done()" -> "wait_group.Done()"
At university I was given this canonical parallel programming problem from Gregory R. Andrews' "Foundations of Multithreaded, Parallel, and Distributed Programming" (though I have a newer, Russian edition of the book, I found an old English variant and will try to convey everything properly):
I was also given the task of solving this problem, but with m consecutively moving cars allowed, using semaphores. To solve that task my tutor told me to mimic the readers' behavior from the readers-writers problem.
The One-Lane Bridge. Cars coming from the north and the south arrive at a one-lane bridge. Cars heading in the same direction can cross the bridge at the same time, but cars heading in opposite directions cannot.
Develop a solution to this problem. Model the cars as processes, and use a monitor for synchronization. First specify the monitor invariant, then develop the body of the monitor. Ensure fairness. (Have cars take turns.)
I googled and found a solution to an analogous task (http://www.cs.cornell.edu/courses/cs4410/2008fa/homework/hw3_soln.pdf), but the lecturer said most of it is useless and incorrect. I ended up with the following solution:
monitor onelanebridge {
    int nb = 0, sb = 0;          // Invar: (nb == 0 and sb <= 1) or (sb == 0 and nb <= 1)
    cond nbfreetogo, sbfreetogo; // condition variables

    procedure enter_n() {
        if (sb != 0 and nb == 0) wait(nbfreetogo);
        nb++;
    }
    procedure enter_s() {
        if (nb != 0 and sb == 0) wait(sbfreetogo);
        sb++;
    }
    procedure leave_n() {
        nb--;
        if (nb == 0) signal(sbfreetogo);
    }
    procedure leave_s() {
        sb--;
        if (sb == 0) signal(nbfreetogo);
    }
}
I was asked the question "What ensures that no more than one car at a time can cross the bridge?"... and I am not even sure that is the case... Please help me solve the task correctly. I must use only constructions from the above-mentioned book...
An example readers-writers solution from the book:
monitor RW_Controller {
    int nr = 0, nw = 0;   # Invar: (nr == 0 or nw == 0) and nw <= 1
    cond oktoread;        # receives signal when nw == 0
    cond oktowrite;       # receives signal when nr == 0 and nw == 0

    procedure request_read() {
        while (nw > 0) wait(oktoread);
        nr = nr + 1;
    }
    procedure release_read() {
        nr = nr - 1;
        if (nr == 0) signal(oktowrite);   # run one writer process
    }
    procedure request_write() {
        while (nr > 0 || nw > 0) wait(oktowrite);
        nw = nw + 1;
    }
    procedure release_write() {
        nw = nw - 1;
        signal(oktowrite);      # run one writer process and
        signal_all(oktoread);   # all reader processes
    }
}
Of course my solution is just a first attempt. Please help me solve the task properly.
Note: A variable of "condition variable" type, according to the book, is a "wait queue" with these methods:
wait(cv)        // wait at the end of the queue
wait(cv, rank)  // wait in order of increasing value of rank
signal(cv)      // awaken the process at the front of the queue, then continue
signal_all(cv)  // awaken all processes on the queue, then continue
empty(cv)       // true if the wait queue is empty; false otherwise
minrank(cv)     // value of rank of the process at the front of the wait queue
So I should probably solve the task using some of these.
Your monitor onelanebridge is not far off the mark, but it has no notion of fairness. If there were a steady stream of northbound traffic, nothing would ever trigger a switch to southbound. You need to separate the counts of waiting and 'active' cars.
A simple form of fairness would be to alternate: limit the 'active' counter to 1, and check whether to switch direction when it becomes zero.
To avoid inciting road rage, you might instead choose a limit based on the transit time of the single-lane section.
You would now have vehicles waiting in enter_[ns] that are heading in the permitted direction but must wait because of the limit, so your if (cond) wait needs to become while (more complex cond) wait.
Concurrent programming is not natural, but with practice it can become ingrained. Try to think about the problem at hand rather than about how to employ these mechanisms.
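To make that concrete, here is one possible shape of such a monitor, sketched in Go with sync.Cond standing in for the book's condition variables. The field names and the batch limit are my assumptions, not the book's solution: a direction keeps the bridge only until it has let maxBatch cars through while the other side is waiting, and every waiter re-checks its condition in a loop, as suggested above.

package main

import (
	"sync"
	"time"
)

const (
	north = 1
	south = 2
)

type oneLaneBridge struct {
	mu       sync.Mutex
	cond     *sync.Cond
	active   int    // cars currently on the bridge
	dir      int    // direction of the cars on the bridge
	waiting  [3]int // waiting[north], waiting[south]
	crossed  int    // consecutive crossings in the current direction
	maxBatch int    // yield after this many, if the other side waits
}

func newBridge(maxBatch int) *oneLaneBridge {
	b := &oneLaneBridge{maxBatch: maxBatch}
	b.cond = sync.NewCond(&b.mu)
	return b
}

func (b *oneLaneBridge) enter(dir int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.waiting[dir]++
	other := north + south - dir
	// while, not if: every woken car must re-check the condition.
	for (b.active > 0 && b.dir != dir) ||
		(b.dir == dir && b.crossed >= b.maxBatch && b.waiting[other] > 0) {
		b.cond.Wait()
	}
	b.waiting[dir]--
	if b.dir != dir {
		b.dir = dir
		b.crossed = 0 // a fresh batch for the new direction
	}
	b.active++
	b.crossed++
}

func (b *oneLaneBridge) leave() {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.active--
	if b.active == 0 {
		b.cond.Broadcast() // let waiters re-check; direction may switch
	}
}

func main() {
	b := newBridge(3)
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // cars alternate arrival directions
		dir := north
		if i%2 == 1 {
			dir = south
		}
		wg.Add(1)
		go func(d int) {
			defer wg.Done()
			b.enter(d)
			time.Sleep(time.Millisecond) // crossing the bridge
			b.leave()
		}(dir)
	}
	wg.Wait()
}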
I want to atomically add 1 to a counter under certain conditions, but I'm not sure whether the following is correct in a threaded environment:
void UpdateCounterAndLastSessionIfMoreThan60Seconds() const {
    auto currentTime = timeProvider->GetCurrentTime();
    auto currentLastSession = lastSession.load();
    bool shouldIncrement = (currentTime - currentLastSession >= 1 * 60);
    if (shouldIncrement) {
        auto isUpdate = lastSession.compare_exchange_strong(currentLastSession, currentTime);
        if (isUpdate)
            changes.fetch_add(1);
    }
}

private:
    std::shared_ptr<Time> timeProvider;
    mutable std::atomic<time_t> lastSession;
    mutable std::atomic<uint32_t> changes;
I don't want changes incremented multiple times if two threads simultaneously evaluate shouldIncrement = true and isUpdate = true (only one should increment changes in that case).
I'm no C++ expert, but it looks to me like you've got a race condition between the evaluation of "isUpdate" and the call to "fetch_add(1)".
So I think the answer to your question "Is this thread safe?" is "No, it is not".
It is at least a bit iffy, as the following scenario shows:
First thread 1 does these:
auto currentTime = timeProvider->GetCurrentTime();
auto currentLastSession = lastSession.load();
bool shouldIncrement = (currentTime - currentLastSession >= 1 * 60);
Then thread 2 executes the same 3 statements, but with a currentTime greater than thread 1's.
Then thread 1 continues and updates lastSession with its time, which is less than thread 2's time.
Then thread 2 gets its turn, but fails to update lastSession, because thread 1 already changed the value.
So the end result is that lastSession is outdated, because thread 2 failed to update it to the latest value. This might not matter in all cases, and the situation might be fixed very soon after, but it's an ugly corner that might break some assumptions somewhere, if not with the current code then after some later changes.
Another thing to note is that lastSession and changes are not atomically in sync: other threads may occasionally see a changed lastSession while the changes counter has not yet been incremented for that change. Again, this might not matter, but it's easy to forget something like this and accidentally write code that assumes they are in sync.
I'm not immediately sure if you can make this 100% safe with just atomics. Wrap it in a mutex instead.
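To illustrate the mutex suggestion, here is a minimal sketch, written in Go to keep this post's examples in one language (in the original C++, a std::mutex with std::lock_guard plays the same role). The point is that the time check, the lastSession update, and the changes increment become one indivisible critical section:

package main

import (
	"sync"
	"time"
)

type sessionCounter struct {
	mu          sync.Mutex
	lastSession time.Time
	changes     uint32
}

func (s *sessionCounter) updateIfMoreThan60Seconds(now time.Time) {
	s.mu.Lock()
	defer s.mu.Unlock()
	// Check and update under one lock: no other thread can interleave
	// between the comparison, the store, and the increment.
	if now.Sub(s.lastSession) >= 60*time.Second {
		s.lastSession = now
		s.changes++
	}
}

func main() {
	c := &sessionCounter{}
	c.updateIfMoreThan60Seconds(time.Now())
	c.updateIfMoreThan60Seconds(time.Now()) // too soon: no increment
}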
The pseudocode for an 'inadequate implementation' of the producer-consumer problem mentioned on Wikipedia is below. This solution is said to have a race condition that could cause deadlock.
My question is: wouldn't just modifying the conditions for waking up the other thread, as below, solve the possible deadlock issue? That way there is not just one wakeup that could be lost, but multiple subsequent ones. Or am I missing something? Trying to understand here.
int itemCount = 0;

procedure producer() {
    while (true) {
        item = produceItem();
        if (itemCount == BUFFER_SIZE) {
            sleep();
        }
        putItemIntoBuffer(item);
        itemCount = itemCount + 1;
        // if (itemCount == 1)   <<<<<<<< change this to the condition below
        if (itemCount > 0) {
            wakeup(consumer);
        }
    }
}

procedure consumer() {
    while (true) {
        if (itemCount == 0) {
            sleep();
        }
        item = removeItemFromBuffer();
        itemCount = itemCount - 1;
        // if (itemCount == BUFFER_SIZE - 1)   <<<<<<< change this to the condition below
        if (itemCount < BUFFER_SIZE) {
            wakeup(producer);
        }
        consumeItem(item);
    }
}
Wouldn't just modifying the conditions for waking up the other thread, as below, solve the possible deadlock issue?
No, the race condition still exists. The problem is that there are multiple threads doing the consuming and/or producing. When a consumer (for example) is awoken and told that there are items to be processed, it might go to remove an item only to find that some other thread (or threads) got there before it.
The solution is to do the following:
lock() {
    while (itemCount == 0) {
        sleep();
    }
    item = removeItemFromBuffer();
    itemCount = itemCount - 1;
}
So even if the consumer is awoken, it immediately checks again, with a while loop, that itemCount is not 0. Even though itemCount was incremented, another thread might have removed that element and decremented itemCount before the thread that got the signal had a chance to act. That is the race.
It is the same on the producer side, although there the race is about over-filling the buffer. A producer may be awoken because there is space available, but by the time it goes to put items into the buffer, other threads have beaten it to it and re-filled the buffer. It has to test again after it is awoken to make sure there is space.
I go into line-by-line detail about this race on the page from my website entitled Producer Consumer Thread Race Conditions. There is also a little test program there that demonstrates the issue.
The important point to realize is that in most locking implementations there is a queue of threads waiting to gain access to a lock. When a signal is sent to a thread, it first has to reacquire the lock before it can continue, so a signaled thread goes to the end of the block queue. If there are additional threads that are waiting for the lock but not waiting on the condition, they can run ahead of the awoken thread and steal the items.
This is very similar to this question about while loops in similar code. Unfortunately the accepted answer there does not address this race condition. Please consider upvoting my answer to a similar question here. Spurious wakeups are an issue, but the real problem here is the race condition Wikipedia is talking about.
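As a concrete illustration of the while-loop pattern, here is a small runnable sketch in Go using sync.Cond as the equivalent of the lock()/sleep()/wakeup() pseudocode above; the names are mine:

package main

import "sync"

// The state is re-checked in a loop after every wakeup, so a thread
// that lost the race simply goes back to sleep.
type buffer struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []int
	cap   int
}

func newBuffer(capacity int) *buffer {
	b := &buffer{cap: capacity}
	b.cond = sync.NewCond(&b.mu)
	return b
}

func (b *buffer) put(item int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == b.cap { // while, not if: re-check after wakeup
		b.cond.Wait()
	}
	b.items = append(b.items, item)
	b.cond.Broadcast() // wake waiters to re-check the state
}

func (b *buffer) get() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == 0 { // while, not if
		b.cond.Wait()
	}
	item := b.items[0]
	b.items = b.items[1:]
	b.cond.Broadcast()
	return item
}

func main() {
	b := newBuffer(2)
	done := make(chan struct{})
	go func() { // consumer
		for i := 0; i < 4; i++ {
			_ = b.get()
		}
		close(done)
	}()
	for i := 0; i < 4; i++ { // producer
		b.put(i)
	}
	<-done
}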
I have a function that boils down to:
while (doWork)
{
    config = generateConfigurationForTesting();
    result = executeWork(config);
    doWork = isDone(result);
}
How can I rewrite this for efficient asynchronous execution, assuming all functions are thread safe and independent of previous iterations, and that more iterations will probably be required than the maximum number of allowable threads?
The problem here is that we don't know how many iterations are required in advance, so we can't make a dispatch group or use dispatch_apply.
This is my first attempt, but it looks a bit ugly to me because of the arbitrarily chosen values and the sleeping:
int thread_count = 0;
bool doWork = true;
int max_threads = 20; // arbitrarily chosen number

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

while (doWork)
{
    if (thread_count < max_threads)
    {
        dispatch_async(queue, ^{
            Config myconfig = generateConfigurationForTesting();
            Result myresult = executeWork(myconfig);
            dispatch_async(queue, ^{ checkResult(myresult); });
        });
        thread_count++;
    }
    else
        usleep(100); // don't consume too much CPU
}

void checkResult(Result value)
{
    if (value == good) doWork = false;
    thread_count--;
}
Based on your description, it looks like generateConfigurationForTesting is some kind of randomization technique or other generator that can produce a near-infinite number of configurations (hence your comment that you don't know ahead of time how many iterations you will need). With that as an assumption, you are basically stuck with the model you've created, since your executor needs to be limited by some reasonable assumptions about the queue and you don't want to over-generate, as that would just extend the length of the run after you have succeeded in finding value == good measurements.
I would suggest you consider using a queue (or OSAtomicIncrement* and OSAtomicDecrement*) to protect access to thread_count and doWork. As it stands, the thread_count increment and decrement happen on two different queues (the main queue for the main thread and the default queue for the background task) and thus could simultaneously increment and decrement the thread count. This could lead to an undercount (which would cause more threads to be created than you expect) or an overcount (which would cause you to never complete your task).
Another option for making this look a little nicer would be to have checkResult add new elements to the queue if value != good. That way, you load up the initial elements of the queue using dispatch_apply(20, queue, ^{ ... }) and you don't need thread_count at all. The first 20 will be added using dispatch_apply (or whatever amount dispatch_apply deems appropriate for your configuration), and then each time checkResult is called you either set doWork = false or add another operation to the queue.
dispatch_apply() works for this: just pass ncpu as the number of iterations (dispatch_apply never uses more than ncpu worker threads) and keep each instance of your worker block running as long as there is more work to do (i.e. loop back to generateConfigurationForTesting() unless !doWork).
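The same idea sketched in Go, since that is the language used for code examples elsewhere in this post: a fixed pool of one worker per CPU, each looping until any worker finds a good result. generateConfigurationForTesting, executeWork, and isGoodResult are hypothetical stand-ins for the question's functions:

package main

import (
	"runtime"
	"sync"
	"sync/atomic"
)

// Hypothetical stand-ins for the question's functions.
type config struct{}
type result struct{ good bool }

func generateConfigurationForTesting() config { return config{} }
func executeWork(c config) result             { return result{good: true} }
func isGoodResult(r result) bool              { return r.good }

func main() {
	var done atomic.Bool // replaces the shared doWork flag, safely
	var wg sync.WaitGroup

	// One worker per CPU; each keeps iterating until any worker succeeds,
	// so there is no thread counting and no polling with usleep.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for !done.Load() {
				c := generateConfigurationForTesting()
				r := executeWork(c)
				if isGoodResult(r) {
					done.Store(true) // tell every worker to stop
				}
			}
		}()
	}
	wg.Wait()
}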