How does this deadlock happen with Scala Futures?

This snippet is excerpted from the Monix documentation.
It's an example of how to run into a deadlock in Scala.
import java.util.concurrent.Executors
import scala.concurrent._
import scala.concurrent.duration.Duration

implicit val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

def addOne(x: Int) = Future(x + 1)

def multiply(x: Int, y: Int) = Future {
  val a = addOne(x)
  val b = addOne(y)
  val result = for (r1 <- a; r2 <- b) yield r1 * r2

  // This can dead-lock due to the limited size of our thread-pool!
  Await.result(result, Duration.Inf)
}
I understand what the code does, but not how it executes.
Why is it the line Await.result(result, Duration.Inf) that causes the deadlock? (Yes, I tested it.)
Isn't it that the outermost Future in the multiply function occupies the whole thread pool (the single thread) and thus deadlocks (because the addOne Futures are forever blocked waiting for a thread)?

Isn't it that the outermost Future in the multiply function occupies the whole thread pool (the single thread) and thus deadlocks (because the addOne Futures are forever blocked waiting for a thread)?
Yes, sort of.
When you call val a = addOne(x), you create a new Future that starts waiting for a thread. However, as you noted, the only thread is currently in use by the outermost Future. That wouldn't be a problem without await, since Futures are able to handle this condition. However, this line:
Await.result(result, Duration.Inf)
causes the outer Future to wait for the result Future, which can't run because the outer Future is still using the only available thread. (And, of course, it also can't run because the a and b Futures can't run, again due to the outer Future.)
Here's a simpler example that also deadlocks without creating so many Futures:
def addTwo(x: Int) = Future {
  Await.result(addOne(x + 1), Duration.Inf)
}

First of all, I would say this code simulates a deadlock; it is not guaranteed that it will always end up deadlocked.
What is happening in the above code? We have only a single thread in the thread pool. As soon as we call the multiply function, its body is a Future, so it must run on a pool thread; the single thread we have in the pool is assigned to it.
Now addOne also returns a Future, so it is submitted to the same pool, but the code does not wait for a = addOne(x) to complete before moving on to the next line, b = addOne(y). The thread assigned to multiply keeps executing, while a and b sit in the queue and can never start, because the only thread is taken. The for-comprehension is also asynchronous in Scala, so it is not evaluated either; it merely composes the incomplete Futures. Control then reaches the last line, the Await, which blocks for an infinite amount of time waiting for the previous Futures to complete.
The conditions necessary to get into a deadlock:
Mutual Exclusion Condition
Hold and Wait Condition
No-Preemption Condition
Circular Wait Condition
Here we can see that we have only one thread, so it is a resource that only one task can hold at a time (mutual exclusion).
Once the thread is executing the outer Future's block, it does not wait for the inner Futures to complete; it goes ahead and executes the next statements until it reaches the Await call, where it holds the thread while all the other incomplete Futures wait for that thread to become free (hold and wait).
Once the thread is stuck in the Await it cannot be preempted, which is why the remaining incomplete Futures can never execute (no preemption).
Circular wait is also present, because the Await is waiting for the incomplete Futures to complete, and those Futures are waiting for the thread held by the Await call to be released.
Put simply, control reaches the Await statement and starts waiting for the incomplete Futures to complete, which is never going to happen, because we have only one thread in our thread pool.

Await.result(result, Duration.Inf)
When you use Await, you are waiting for the Future to complete, and you have given it infinite time. So if the Future can never complete, the awaiting thread waits forever.

Related

Best way to wake 0-N sleeping goroutines at once

I'm writing a program where I start N (N is a command-line argument) worker threads, and at any time 0 to N-1 of them can be waiting on another to update a variable. What's the best way for the threads to wait for this event, and the best way for one of the threads to notify all the others at once of the event occurring? This event will be sent multiple times by each thread.
sync.Cond isn't appropriate because the threads don't need to lock a resource upon waking from sleep. sync.WaitGroup won't work because I don't know how many times to call wg.Done().
Solution #1: I could use a sync.Mutex and have the thread that will eventually notify the others acquire the lock and then unlock it to notify the others, but it seems really inefficient for the others to all fight over a lock when they all just need to pop out of sleep, read a variable to see if that particular worker is now the master, and then either go back to sleep or start working.
Solution #2: Create a wrapper for sync.WaitGroup that allows keeping track of the number of waiting threads so that I can call wg.Add(-numWaitingThreads) to wake them. This sounds like a headache to figure out how to code it without all sorts of race conditions.
Solution #3: Until someone comes up with a better idea, I'll be using a list of N channels and have the notifier non-blocking-send to all of the channels except its own. Is this really the best way?
More details: I give each worker some unique credits and have a central variable for "which credit is the next to be written to the output file". When a worker finishes its work for whichever credit ID it was working on, it needs to do the following:
for centralNextCreditID != creditID {
    wait_for_centralNextCreditID_to_change()
}
saveWorkToFile()
centralNextCreditID++
wake_other_threads_waiting_for_centralNextCreditID_to_change()
To me it does seem like this is an appropriate use case for sync.Cond. You can use a *RWMutex.RLocker() for Cond.L so all goroutines can acquire the read lock simultaneously once the Cond.Broadcast() is sent.
Additionally, it may be worth making sure you hold a write lock when changing this "who's master" variable to avoid race conditions, which would make sync.Cond an even better fit.
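A minimal sketch of what that could look like (the nextID variable and the worker bodies are made up for illustration; only the Cond/RLocker wiring reflects the suggestion above):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.RWMutex
    cond := sync.NewCond(mu.RLocker()) // waiters share the read lock

    nextID := 0 // hypothetical "whose turn is it" variable, written under the write lock

    var wg sync.WaitGroup
    for id := 0; id < 5; id++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            cond.L.Lock() // read lock, so all waiters can hold it at once
            for nextID != id {
                cond.Wait()
            }
            cond.L.Unlock()

            fmt.Println("worker", id, "doing its work")

            mu.Lock() // write lock while changing the shared variable
            nextID++
            mu.Unlock()
            cond.Broadcast() // wake everyone; each waiter re-checks the condition
        }(id)
    }
    wg.Wait()
}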
sync.WaitGroup won't work because I don't know how many times to call wg.Done().
wg can be used in this case. Make a wg with count 1 and pass this to the N goroutines. Make them wg.Wait(), except the one that updates the variable.
The goroutine updating the variable calls wg.Done() after a successful update, causing the N goroutines to come out of the wait and continue executing.
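A minimal sketch of that pattern, assuming a single round of the event (the question says the event repeats, so a fresh WaitGroup would be needed for each round):
package main

import (
    "fmt"
    "sync"
)

func main() {
    var event sync.WaitGroup
    event.Add(1) // count of 1: the single event every other goroutine waits for

    var done sync.WaitGroup
    for i := 0; i < 4; i++ {
        done.Add(1)
        go func(id int) {
            defer done.Done()
            event.Wait() // all waiters block here until the updater calls Done
            fmt.Println("goroutine", id, "resumed")
        }(i)
    }

    // The goroutine that updates the variable (here simply main) releases
    // all of the waiters with a single Done call.
    event.Done()
    done.Wait()
}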
The title says that you want to wake 0-N sleeping goroutines, but the body of the question indicates that you only need to wake the goroutine for the next id (if there is a goroutine waiting).
Here's how to implement the problem described in the body of the question:
// waiter sequences work according to an incrementing id.
type waiter struct {
    mu      sync.Mutex
    id      int
    waiting map[int]chan struct{}
}

func NewWaiter(firstID int) *waiter {
    return &waiter{id: firstID, waiting: make(map[int]chan struct{})}
}

// wait waits for id's turn in the sequence.
func (w *waiter) wait(id int) {
    w.mu.Lock()
    if w.id == id {
        // This id is next. Nothing to do.
        w.mu.Unlock()
        return
    }
    // Wait for our turn.
    c := make(chan struct{})
    w.waiting[id] = c
    w.mu.Unlock()
    <-c
}

// done signals that the work for the previous id is done.
func (w *waiter) done() {
    w.mu.Lock()
    w.id++
    c, ok := w.waiting[w.id]
    if ok {
        delete(w.waiting, w.id)
    }
    w.mu.Unlock()
    if ok {
        // Closing c unblocks the receive in wait.
        close(c)
    }
}
Here's how to use it:
for _, creditID := range creditIDs {
    doWorkFor(creditID)
    waiter.wait(creditID)
    saveWorkToFile()
    waiter.done()
}
WaitGroup is the best option. The reason is that it keeps its signalled state, so you are safe from a deadlock even if the main thread signals too early.
If you use Cond there is a risk that the main thread calls cond.Broadcast BEFORE the worker thread calls cond.Wait(). Since Cond doesn't remember that it was signalled, the worker thread will wait forever for an event that has already happened.
Here is an example: https://go.dev/play/p/YLfvEGO2A18
The main thread broadcasts too early, and the worker threads run into a deadlock.
The same case with sync.WaitGroup: https://go.dev/play/p/R6_-ULo2eJ2
The main thread releases the wait group too early, but there is no deadlock.
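For illustration, a minimal sketch (not taken from the linked playgrounds) of the point about the remembered state: Done is called before any goroutine reaches Wait, and the waiters still get released:
package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var event sync.WaitGroup
    event.Add(1)
    event.Done() // the "signal" happens before anyone is waiting

    var done sync.WaitGroup
    for i := 0; i < 3; i++ {
        done.Add(1)
        go func(id int) {
            defer done.Done()
            time.Sleep(10 * time.Millisecond) // start waiting well after the signal
            event.Wait()                      // returns immediately: the state was remembered
            fmt.Println("worker", id, "released")
        }(i)
    }
    done.Wait()
}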

Kotlin coroutines multithread dispatcher and thread-safety for local variables

Let's consider this simple code with coroutines
import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() {
    runBlocking {
        launch(Executors.newFixedThreadPool(10).asCoroutineDispatcher()) {
            var x = 0
            val threads = mutableSetOf<Thread>()
            for (i in 0 until 100000) {
                x++
                threads.add(Thread.currentThread())
                yield()
            }
            println("Result: $x")
            println("Threads: $threads")
        }
    }
}
As far as I understand this is quite legit coroutines code and it actually produces expected results:
Result: 100000
Threads: [Thread[pool-1-thread-1,5,main], Thread[pool-1-thread-2,5,main], Thread[pool-1-thread-3,5,main], Thread[pool-1-thread-4,5,main], Thread[pool-1-thread-5,5,main], Thread[pool-1-thread-6,5,main], Thread[pool-1-thread-7,5,main], Thread[pool-1-thread-8,5,main], Thread[pool-1-thread-9,5,main], Thread[pool-1-thread-10,5,main]]
The question is what makes these modifications of local variables thread-safe (or is it thread-safe?). I understand that this loop is actually executed sequentially, but it can change the running thread on every iteration. The changes made by the thread in the first iteration should still be visible to the thread that picks up this loop on the second iteration. Which code guarantees this visibility? I tried to decompile this code to Java and dig around the coroutines implementation with the debugger but did not find a clue.
Your question is completely analogous to the realization that the OS can suspend a thread at any point in its execution and reschedule it to another CPU core. That works not because the code in question is "multicore-safe", but because it is a guarantee of the environment that a single thread behaves according to its program-order semantics.
Kotlin's coroutine execution environment likewise guarantees the safety of your sequential code. You are supposed to program to this guarantee without any worry about how it is maintained.
If you want to descend into the details of "how" out of curiosity, the answer becomes "it depends". Every coroutine dispatcher can choose its own mechanism to achieve it.
As an instructive example, we can focus on the specific dispatcher used in your posted code: the JDK's fixed thread pool executor (Executors.newFixedThreadPool). You can submit arbitrary tasks to this executor, and it will execute each one of them on a single (arbitrary) thread, but many tasks submitted together will execute in parallel on different threads.
Furthermore, the executor service provides the guarantee that the code leading up to executor.execute(task) happens-before the code within the task, and the code within the task happens-before another thread's observing its completion (future.get(), future.isCompleted(), getting an event from the associated CompletionService).
Kotlin's coroutine dispatcher drives the coroutine through its lifecycle of suspension and resumption by relying on these primitives from the executor service, and thus you get the "sequential execution" guarantee for the entire coroutine. A single task submitted to the executor ends whenever the coroutine suspends, and the dispatcher submits a new task when the coroutine is ready to resume (when the user code calls continuation.resume(result)).

manage early return of event loop with python

I have a service running the following loop
while True:
    feedback = f1()
    if check1(feedback):
        break
    feedback = f2()
    if check2(feedback):
        break
    feedback = f3()
    if check3(feedback):
        break
    time.sleep(10)
do_cleanup(feedback)
Now I would like to run these feedback checks with different time intervals. One naive way is to move the time.sleep() into the f functions. But that causes blocking. What would be the easiest way to achieve periodic checks with different intervals? Here all the f functions are cheap to run.
The event loop in asyncio sounds like the way to go. But due to my inexperience, I don't know where the check and break logic should go for the event loop.
Or is there any other packages/code patterns to do this kind of monitoring logic?
In asyncio you might split the service into three separate tasks, each with its own loop and timing - you can think of them as three threads, except they are all scheduled in the same thread, and multi-task cooperatively by suspending at await.
For this purpose let's start with a utility function that calls a function and checks its result at a regular interval:
import asyncio

async def at_interval(f, check, seconds):
    while True:
        feedback = f()
        if check(feedback):
            return feedback
        await asyncio.sleep(seconds)
The return is the equivalent of the break in your original code.
With that in place, the service spawns three such loops and waits for any of them to finish. Whichever completes first carries the "feedback" we're waiting for, and we can dispose of the others.
async def service():
    loop = asyncio.get_event_loop()
    t1 = loop.create_task(at_interval(f1, check1, 3))
    t2 = loop.create_task(at_interval(f2, check2, 5))
    t3 = loop.create_task(at_interval(f3, check3, 7))
    done, pending = await asyncio.wait(
        [t1, t2, t3], return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    feedback = await list(done)[0]
    do_cleanup(feedback)

asyncio.get_event_loop().run_until_complete(service())
A small difference between this and your code is that here it is possible (though very unlikely) for more than one check to fail before the service picks up on it. For example, if through a stroke of bad luck two of the above tasks end up sharing the absolute time of wakeup to the microsecond, they will be scheduled in the same event loop iteration. Both will return from their corresponding at_interval coroutines, and done will contain more than one feedback. The code handles it by picking a feedback and calling do_cleanup on that one, but it could also loop over all.
If this is not acceptable, you can easily pass each at_interval a callable that cancels all tasks except itself. This is currently done in service for brevity, but it can be done in at_interval as well. One task cancelling the others would ensure that only one feedback can exist.

Goroutines are cooperatively scheduled. Does that mean that goroutines that don't yield execution will cause goroutines to run one by one?

From: http://blog.nindalf.com/how-goroutines-work/
As the goroutines are scheduled cooperatively, a goroutine that loops continuously can starve other goroutines on the same thread.
Goroutines are cheap and do not cause the thread on which they are multiplexed to block if they are blocked on
network input
sleeping
channel operations or
blocking on primitives in the sync package.
So given the above, say that you have some code like this that does nothing but loop a random number of times and print the sum:
func sum(x int) {
    sum := 0
    for i := 0; i < x; i++ {
        sum += i
    }
    fmt.Println(sum)
}
if you use goroutines like
go sum(100)
go sum(200)
go sum(300)
go sum(400)
will the goroutines run one by one if you only have one thread?
A compilation and tidying of all of creker's comments.
Preemptive means that the kernel (runtime) allows threads to run for a specific amount of time and then yields execution to other threads without them doing or knowing anything. In OS kernels that's usually implemented using hardware interrupts, so a process can't block the entire OS. In cooperative multitasking a thread has to explicitly yield execution to others; if it doesn't, it could block the whole process or even the whole machine. That's how Go does it: it has some very specific points where a goroutine can yield execution. But if a goroutine just executes for {} then it will lock the entire process.
However, the quote doesn't mention recent changes in the runtime. fmt.Println(sum) could cause other goroutines to be scheduled, as newer runtimes call the scheduler on function calls.
If you don't have any function calls, just some math, then yes, the goroutine will lock the thread until it exits or hits something that could yield execution to others. That's why for {} doesn't work in Go. Even worse, it will still lead to the process hanging even if GOMAXPROCS > 1, because of how the GC works, but in any case you shouldn't depend on that. It's good to understand this stuff, but don't count on it. There is even a proposal to insert scheduler calls in loops like yours.
The main thing Go's runtime does is try its best to let every goroutine execute and starve no one. How it does that is not specified in the language specification and might change in the future. If the proposal about loops is implemented, then switching could occur even without function calls. At the moment the only thing you should remember is that in some circumstances function calls can cause a goroutine to yield execution.
To explain the switching in Akavall's answer: when fmt.Printf is called, the first thing it does is check whether it needs to grow the stack, and that check calls into the scheduler. It MIGHT switch to another goroutine. Whether it will switch depends on the state of the other goroutines and the exact implementation of the scheduler. Like any scheduler, it probably checks whether there are starving goroutines that should be executed instead. With many iterations, a function call has a greater chance of triggering a switch because the others have been starving for longer. With few iterations, the goroutine finishes before starvation happens.
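As a small illustration of an explicit yield point (this example is mine, not from the comments above): runtime.Gosched() hands the processor to other runnable goroutines, so with GOMAXPROCS(1) the two workers below can interleave even though scheduling is cooperative:
package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    runtime.GOMAXPROCS(1)

    var wg sync.WaitGroup
    worker := func(name string) {
        defer wg.Done()
        for i := 0; i < 3; i++ {
            fmt.Println(name, "iteration", i)
            runtime.Gosched() // explicitly yield so the other goroutine can run
        }
    }

    wg.Add(2)
    go worker("A")
    go worker("B")
    wg.Wait()
}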
For what it's worth, I can produce a simple example where it is clear that the goroutines are not run one by one:
package main

import (
    "fmt"
    "runtime"
)

func sum_up(name string, count_to int, print_every int, done chan bool) {
    my_sum := 0
    for i := 0; i < count_to; i++ {
        if i%print_every == 0 {
            fmt.Printf("%s working on: %d\n", name, i)
        }
        my_sum += 1
    }
    fmt.Printf("%s: %d\n", name, my_sum)
    done <- true
}

func main() {
    runtime.GOMAXPROCS(1)
    done := make(chan bool)
    const COUNT_TO = 10000000
    const PRINT_EVERY = 1000000
    go sum_up("Amy", COUNT_TO, PRINT_EVERY, done)
    go sum_up("Brian", COUNT_TO, PRINT_EVERY, done)
    <-done
    <-done
}
Result:
....
Amy working on: 7000000
Brian working on: 8000000
Amy working on: 8000000
Amy working on: 9000000
Brian working on: 9000000
Brian: 10000000
Amy: 10000000
Also if I add a function that just does a forever loop, that will block the entire process.
func dumb() {
    for {
    }
}
This blocks at some random point:
go dumb()
go sum_up("Amy", COUNT_TO, PRINT_EVERY, done)
go sum_up("Brian", COUNT_TO, PRINT_EVERY, done)
Well, let's say runtime.GOMAXPROCS is 1. The goroutines run concurrently, but only one at a time. Go's scheduler just gives the upper hand to one of the spawned goroutines for a certain time, then to another, and so on, until all are finished.
So, you never know which goroutine is running at a given time; that's why you need to synchronize your variables. From your example, it's unlikely that sum(100) will run fully, then sum(200) will run fully, and so on.
The most probable outcome is that one goroutine will do some iterations, then another will do some, then another again, and so on.
So, overall, they are not sequential, even if there is only one goroutine active at a time (GOMAXPROCS=1).
So, what's the advantage of using goroutines? Plenty. It means that you can just run an operation in a goroutine, because it is not crucial, and continue the main program. Imagine an HTTP web server: handling each request in a goroutine is convenient because you do not have to care about queueing them and running them sequentially; you let Go's scheduler do the job.
Plus, sometimes goroutines are inactive, because you called time.Sleep, or because they are waiting for an event, like receiving something from a channel. Go can see this and just executes other goroutines while some are in those idle states.
I know there are a handful of advantages I didn't present, but I don't know concurrency well enough to tell you about them.
EDIT:
Related to your example code: if you add each iteration to the end of a channel, run that on one processor, and print the contents of the channel, you'll see that there is no context switching between goroutines: each one runs sequentially after the other one is done.
However, this is not a general rule and is not specified in the language, so you should not rely on these results to draw general conclusions.
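A minimal sketch of that experiment (names are made up; as noted above, the observed ordering depends on the runtime version and should not be relied on):
package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    runtime.GOMAXPROCS(1)

    const iterations = 5
    order := make(chan string, 2*iterations) // records which goroutine ran each iteration
    var wg sync.WaitGroup

    work := func(name string) {
        defer wg.Done()
        sum := 0
        for i := 0; i < iterations; i++ {
            sum += i
            order <- name
        }
    }

    wg.Add(2)
    go work("A")
    go work("B")
    wg.Wait()
    close(order)

    for entry := range order {
        fmt.Print(entry, " ") // shows whether A's iterations all appear before B's
    }
    fmt.Println()
}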
@Akavall Try adding a sleep after creating the dumb goroutine; the Go runtime then never executes the sum_up goroutines.
From that it looks like the Go runtime spawns the next goroutines immediately; it might execute the sum_up goroutines until it schedules the dumb() goroutine to run. Once dumb() is scheduled to run, the runtime won't schedule the sum_up goroutines anymore, as dumb runs for {} forever.

Multi-Producer Single-Consumer Lazy Task Execution

I am trying to model a system where there are multiple threads producing data, and a single thread consuming the data. The trick is that I don't want a dedicated thread to consume the data because all of the threads live in a pool. Instead, I want one of the producers to empty the queue when there is work, and yield if another producer is already clearing the queue.
The basic idea is that there is a queue of work, and a lock around the processing. Each producer pushes its payload onto the queue, and then attempts to enter the lock. The attempt is non-blocking and returns either true (the lock was acquired), or false (the lock is held by someone else).
If the lock is acquired, then that thread then processes all of the data in the queue until it is empty (including any new payloads introduced by other producers during processing). Once all of the work has been processed, the thread releases the lock and quits out.
The following is C++ code for the algorithm:
void Process(ITask *task) {
    // queue is a thread safe implementation of a regular queue
    queue.push(task);

    // crit_sec is some handle to a critical section like object.
    // try_scoped_lock uses RAII to attempt to acquire the lock in the constructor;
    // if the lock was acquired, it will release the lock in the destructor.
    try_scoped_lock lock(crit_sec);

    // See if this thread won the lottery. Prize is doing all of the dishes
    if (!lock.Acquired())
        return;

    // This thread got the lock, so it needs to do the work
    ITask *currTask;
    while (queue.try_pop(currTask)) {
        ... execute task ...
    }
}
In general this code works fine, and I have never actually witnessed the behavior I am about to describe below, but that implementation makes me feel uneasy. It stands to reason that a race condition is introduced between when the thread exits the while loop and when it releases the critical section.
The whole algorithm relies on the assumption that if the lock is being held, then a thread is servicing the queue.
I am essentially looking for enlightenment on 2 questions:
Am I correct that there is a race condition as described (bonus for other races)
Is there a standard pattern for implementing this mechanism that is performant and doesn't introduce race conditions?
Yes, there is a race condition.
Thread A adds a task, gets the lock, processes itself, then asks for a task from the queue. It is rejected.
Thread B at this point adds a task to the queue. It then attempts to get the lock, and fails, because thread A has the lock. Thread B exits.
Thread A then exits, with the queue non-empty, and nobody processing the task on it.
This will be difficult to find, because that window is relatively narrow. To make it more likely to find, after the while loop introduce a "sleep for 10 seconds". In the calling code, insert a task, wait 5 seconds, then insert a second task. After 10 more seconds, check that both insert tasks are finished, and there is still a task to be processed on the queue.
One way to fix this would be to change try_pop to try_pop_or_unlock, and pass in your lock to it. try_pop_or_unlock then atomically checks for an empty queue, and if so unlocks the lock and returns false.
Another approach is to improve the thread pool. Add a counting semaphore based "consume" task launcher to it.
semaphore_bool bTaskActive;
counting_semaphore counter;

when (counter || !bTaskActive)
    if (bTaskActive)
        return
    bTaskActive = true
    --counter
    launch_task(process_one_off_queue, when_done([&]{ bTaskActive = false; }));
When the counting semaphore is active, or when poked by the finished consume task, it launches a consume task if there is no consume task active.
But that is just off the top of my head.

Resources