import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val future = Future {
  println("hello")
  Thread.sleep(2000)
}
future.onComplete(_ => println("done"))
The code in the future above blocks.
My question is: since Future uses an ExecutionContext, does it mean that some thread in this thread pool will be blocked by this future while executing the code inside it?
What thread exactly will call the callback? Another thread from the thread pool? How will it find out that the code executed inside this future is finished?
Assuming you're calling code that blocks in a Future:
since Future uses an ExecutionContext, does it mean that some thread in this thread pool will be blocked by this future while executing the code inside it?
Essentially, yes. If you're calling blocking code, something has to be blocked. An ExecutionContext is more general than a thread pool, though: it can be a thread pool, but it can also be something that manages calls in some other way. Still, something will be blocked, and more often than not it will be a thread. scala.concurrent.ExecutionContext.Implicits.global is backed by a ForkJoinPool, for example.
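If you do have to block inside a Future running on the global ExecutionContext, a minimal sketch (the sleep below stands in for any blocking call) is to wrap the blocking section in scala.concurrent.blocking, which lets the backing ForkJoinPool compensate by spawning an extra thread:

import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

val slow = Future {
  blocking {
    Thread.sleep(2000) // the pool is told this task will block
  }
  "done sleeping"
}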
What thread exactly will call the callback?
That is mostly up to the ExecutionContext to decide. You can print Thread.currentThread in your callback code, but there is no reason (other than perhaps debugging) why you really need to know this information; your code certainly shouldn't depend on it. Typically it is the next available thread in the pool, which could be the same thread that executed the body of the Future, or a different one. The thread that executes the original Future will dispatch the callbacks, but that only schedules them for execution.
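Purely as a debugging sketch (which thread names you see depends entirely on the ExecutionContext in scope), you can log where the body and the callback actually run:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val f = Future {
  println(s"body runs on: ${Thread.currentThread().getName}")
  42
}
f.onComplete { result =>
  // May or may not be the same pool thread that ran the body.
  println(s"callback runs on: ${Thread.currentThread().getName}, result = $result")
}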
How will it find out that the code executed inside this future is finished?
Calling onComplete will either run the callback right away, if the underlying Promise of the Future is already in the completed state, or add a CallbackRunnable to a list of listeners to be executed when the promise is completed. DefaultPromise#tryComplete will call the listeners immediately after the computation has executed. See this excerpt of the current code.
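As a deliberately simplified model of that mechanism (not the real library code, which is lock-free and runs callbacks on an ExecutionContext), the idea is roughly:

class ToyPromise[T] {
  private var result: Option[T] = None
  private var listeners: List[T => Unit] = Nil

  def onComplete(callback: T => Unit): Unit = synchronized {
    result match {
      case Some(value) => callback(value)        // already completed: run it now
      case None        => listeners ::= callback // not yet: remember it for later
    }
  }

  def tryComplete(value: T): Unit = synchronized {
    if (result.isEmpty) {
      result = Some(value)
      listeners.foreach(_(value)) // completing the promise runs the registered callbacks
      listeners = Nil
    }
  }
}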
If we use a coroutine in the main function, how does execution of the coroutine resume after the delay?
In the code below, the coroutine is launched from the main function and, after a delay of 2 seconds, the code resumes. I just want to know how execution gets back to the code after the delay. I know about the state machine and how coroutines work in Android. I am asking about Kotlin with a plain main function (not an Android activity).
If you know how coroutines work in Android, I am unsure what is confusing you. Coroutines are non-blocking, so as soon as the delay is done the print will get executed.
GlobalScope is used to launch top-level coroutines which operate for the whole application lifetime and are not cancelled prematurely.
import kotlinx.coroutines.*

fun main() {
    GlobalScope.launch {
        delay(2000)
        print("World")
    }
    println("Hello")
    Thread.sleep(3000) // keeps the JVM alive long enough for "World" to be printed
}
Without going into much detail about coroutine suspend/resume execution using continuations or the state machine:
how does execution of the coroutine resume after the delay?
Because you are blocking the main thread. Thread.sleep(3000) blocks the main thread for 3 seconds. If you remove Thread.sleep you can see the difference.
In Android this isn't required, the reason being that the Android UI thread runs on the Handler/Looper concept. It is the same as any other thread; the only difference is that the Looper always keeps the Main/UI thread alive and keeps executing tasks from the message queue.
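The same keep-the-process-alive point can be seen with a Scala Future, used here only as an analogue (Scala's global pool also uses daemon threads that don't keep the JVM running):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object Main {
  def main(args: Array[String]): Unit = {
    Future {
      Thread.sleep(2000)
      println("World") // only printed if the JVM is still alive two seconds from now
    }
    println("Hello")
    Thread.sleep(3000) // remove this and the program usually exits before "World" appears
  }
}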
If I create a new thread like this:
Thread thread = new Thread(.....);
thread.Start(......);
What happens if a method inside that thread uses the await operator?
I understand that in a normal scenario this would cause .Net to save the state of the current execution and give the thread back to the thread pool so it can work on something else while we wait for the awaited method to complete, but what if this is not in a thread pool thread to begin with?
I understand that in a normal scenario this would cause .Net to save the state of the current execution and give the thread back to the thread pool so it can work on something else while we wait for the awaited method to complete, but what if this is not in a thread pool thread to begin with?
To clarify a bit, await will save the local state and then return. It doesn't give up the thread right away.
So, in this case, if the Thread's main method returns, then that thread exits. It isn't returned to the thread pool since it's not a thread pool thread; it just exits because its thread proc returned.
There are other scenarios, too:
If the thread is a UI thread, then it returns to its message loop. The thread keeps running, processing other messages.
If the thread is the main thread of a Console application, then it exits, which causes the Console app to exit.
What happens if a method inside that thread uses the await operator?
The same as any other time await is used:
await captures the "context" (SynchronizationContext.Current or TaskScheduler.Current). In this case, the context would be the thread pool context.
When the method is ready to resume, it resumes on that context.
So in this case, the await will return, causing the thread to exit. Then later, the rest of the method will run on a thread pool thread.
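A rough Scala analogue of that last point (the C# specifics about SynchronizationContext don't carry over, but the shape is the same: the thread you created exits, and the continuation later runs on a pool thread):

import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global

val promise = Promise[Int]()

// A manually created thread registers a callback and then simply exits.
val t = new Thread(() => {
  promise.future.onComplete { result =>
    // Runs later, on an ExecutionContext (pool) thread, not on the thread `t`.
    println(s"resumed on ${Thread.currentThread().getName}: $result")
  }
  println(s"${Thread.currentThread().getName} is done and will exit")
})
t.start()
t.join()

promise.success(42) // completing the promise schedules the callback on the pool
Thread.sleep(500)   // give the (daemon) pool thread time to print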
If you "create" the thread, you "manage" it.
Once the code scheduled to run on that thread finishes, the thread will be destroyed.
If you're running async/await code on a thread you created, chances are the code will leave that thread quickly (at the first await), and you will have paid the price of creating and destroying the thread for no benefit.
Thread pools are used to schedule short-lived pieces of code, and with async/await that is the norm.
If you have some long running code that blocks, then it might be better to create your own thread.
I looked at a few other questions but didn't quite find what I was looking for. I'm using Scala, but my question is very high level and so is hopefully language-agnostic.
A regular scenario:
Thread A runs a function and there is some blocking work to be done (say a DB call).
The function has some non-blocking code (e.g. an async block in Scala) to cause some sort of 'worker' Thread B (in a different pool) to pick up the I/O task.
The method in Thread A completes returning a Future which will eventually contain the result and Thread A is returned to its pool to quickly pick up another request to process.
Q1. Does some thread somewhere usually have to wait?
My understanding of non-blocking architectures is that the common approach is to still have some thread waiting/blocking on the I/O work somewhere; it's just a case of having different pools, with access to different cores, so that a small number of request-processing threads can manage a large number of concurrent requests without ever waiting on a CPU core.
Is this a correct general understanding?
Q2. How does the callback work?
In the above scenario - Thread B that is doing the I/O work will run the callback function (provided by Thread A) if/when the I/O work has completed - which completes the Future with some Result.
Thread A is now off doing something else and has no association any more with the original request. How does the Result in the Future get sent back to the client socket? I understand that different languages have different implementations of such a mechanism but at a high level my current assumption is that (regardless of the language/framework) some framework/container objects must always be doing some sort of orchestration so that when a Future task is completed the Result gets sent back to the original socket handling the request.
I have spent hours trying to find articles which will explain this, but every article seems to deal only with the really low-level details. I know I'm missing some details, but I am having difficulty asking my question because I'm not quite sure which parts I'm missing :)
My understanding of non-blocking architectures is that the common approach is to still have some Thread waiting/blocking on the I/O work somewhere
If a thread is getting blocked somewhere, it is not really a non-blocking architecture. So no, that's not really a correct understanding of it. That doesn't mean that this is necessarily bad. Sometimes you just have to deal with blocking (using JDBC, for example). It would be better to push it off into a fixed thread pool designated for blocking, rather than allowing the entire application to suffer thread starvation.
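A common way to do this in Scala (the pool size and the queryDatabase helper below are made-up placeholders) is a dedicated fixed-size ExecutionContext for blocking calls, so the main pool stays free for request handling:

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical blocking call standing in for real JDBC access.
def queryDatabase(id: Long): String = { Thread.sleep(50); s"user-$id" }

// A dedicated, fixed-size pool reserved for blocking work; 16 is an arbitrary size.
val blockingEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(16))

// Blocking work runs on blockingEc instead of the default pool.
def findUser(id: Long): Future[String] = Future(queryDatabase(id))(blockingEc)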
Thread A is now off doing something else and has no association any more with the original request. How does the Result in the Future get sent back to the client socket?
Using Futures, it really depends on the ExecutionContext. When you create a Future, where the work is done depends on the ExecutionContext.
val f: Future[_] = ???
val g: Future[_] = ???
f and g are created immediately, and the work is submitted to a task queue in the ExecutionContext. In most cases we cannot guarantee which will actually execute or complete first. What you do with the values matters as well. Obviously, if you use Await to wait for the completion of the Futures, then we block the current thread. If we map them and do something with the values, then we again need an ExecutionContext to submit the task to. This gives us a chain of tasks that are asynchronously submitted and re-submitted to the executor for execution every time we manipulate the Future.
Eventually there needs to be some onComplete at the end of that chain to pass that value along to something, whether it's writing to a stream or something else; i.e., it is probably out of the hands of the original thread.
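As a concrete sketch of such a chain (the values and the println are made up for illustration; in a real server the last step would hand the value to whatever writes the response):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

val f: Future[Int] = Future { 21 }   // task submitted to the executor
val g: Future[Int] = f.map(_ * 2)    // a new task, scheduled when f completes
g.onComplete {                       // end of the chain: hand the value off
  case Success(n)  => println(s"result: $n")
  case Failure(ex) => println(s"failed: $ex")
}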
Q1: No, at least not at the user-code level. Hopefully your async I/O ultimately comes down to an async kernel API (e.g. select()), which in turn will be using DMA to do the I/O and trigger an interrupt when it's done. So it's async at least down to the hardware level.
Q2: Thread B completes the Future. If you're using something like onComplete, then thread B will trigger that (probably by creating a new task and handing that task off to a thread pool to pick it up later) as part of the completing call. If a different thread has called Await to block on the Future, it will trigger that thread to resume. If nothing has accessed the Future yet, nothing in particular happens - the value sits there in the Future until something uses it. (See PromiseCompletingRunnable for the gritty details - it's surprisingly readable).
Is it possible for the thread that is running a section of code to change during that code block?
I am specifically thinking of code running inside ASP.net, where methods are executed against the Thread Pool.
If my code initiates an I/O operation (e.g. database) the execution is suspended pending the completion of an I/O completion port. In the meantime, that thread-pool thread can be re-used to handle another web request.
When my I/O completes, and my code is returned to a thread-pool thread - is it guaranteed to be the same thread?
E.g.
private void DoStuff()
{
DWORD threadID = GetCurrentThreadID();
//And what if this 3rd party code (e.g. ADO/ADO.net) uses completion ports?
//my thread-pool thread is given to someone else(?)
ExecuteSynchronousOperationThatWaitsOnIOCompletionPort();
//Synchronous operation has completed
DWORD threadID2 = GetCurrentThreadID();
}
Is it possible for
threadID2 <> threadID
?
I mentioned the .NET Thread Pool, but there is also the native thread pool, and I have written code for both.
Is it ever possible for my ThreadID to be ripped out from under me? Ever.
Why do I care?
The reason I care is that I'm trying to make an object thread-safe. That means that sometimes I have to know which so-called "thread" of execution called the method. Later, when they return, I know that "they" are still "them".
The only way I know to identify a "series of machine instructions that were written to be executed by one virtual processing unit" is through GetCurrentThreadID. But if GetCurrentThreadID changes, i.e. if my series of machine instructions can be moved to different "virtual processing units" (i.e. threads) during execution, then I cannot rely on GetCurrentThreadID.
The short answer is no. No synchronous function would do this.
With sufficient cleverness and evil you could create a function that did this. However, you would expect it to break things and that's why nobody actually does this.
As just the most obvious issue -- what happens if the calling thread holds a lock for the duration of the synchronous call?
I am reading 'Operating System Concepts with Java'. I am quite confused by the concepts of blocking and synchronous; what are the differences between them?
Blocking may or may not be the same as synchronous, depending on the context. When we talk about method calls, then a synchronous call can also be said to be blocking (I'll get back to this in a bit), because the thread calling the method cannot proceed forward until the method returns. The antonym in this case would be asynchronous.
In lock terminology, a lock is said to be blocking if the thread waiting to acquire it is put in a suspended mode until the lock becomes available (or until a timeout elapses). The antonym in this case is a non-blocking lock, meaning that the thread returns immediately even if it cannot acquire the lock. This can be used to implement the so called spinning lock, where you keep polling the state of the lock while keeping the thread active.
Having said this, you can extrapolate the difference between the concepts: synchronous generally means an activity that must wait for a reply before the thread can move forward. Blocking refers to the fact that the thread is placed in a wait state (generally meaning it will not be scheduled for execution until some event occurs). From here you can conclude that a synchronous call may involve blocking behavior or may not, depending on the underlying implementation (i.e. it may also be spinning, meaning that you are simulating synchronous behavior with asynchronous calls).
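For example, with java.util.concurrent.locks.ReentrantLock (shown from Scala; this is just a sketch of the two acquisition styles):

import java.util.concurrent.locks.ReentrantLock

val lock = new ReentrantLock()

// Blocking acquisition: the thread is suspended until the lock becomes available.
lock.lock()
try { /* critical section */ } finally lock.unlock()

// Non-blocking acquisition: tryLock returns immediately, so the caller can poll
// in a loop -- effectively a spin lock.
while (!lock.tryLock()) { /* do something else, or just keep polling */ }
try { /* critical section */ } finally lock.unlock()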
Blocking - An operation is said to have blocking behavior if it waits for some event to complete. For example: if a lock is not available, a thread may enter a wait state until the lock becomes available. Such an operation is said to be blocking.
Synchronous - A synchronous call is easily understood with the example of the HTTP protocol, where the client waits for the reply from the server and then proceeds. A synchronous call can be blocking or non-blocking.
Asynchronous - A method can asynchronously call another method. After the call it can continue to execute its next instruction. When the called method completes its execution, it sends a reply/callback to the caller indicating its success or failure.
Non-blocking - Non-blocking behavior is like checking the condition at that instant. For example, in the case of locks, if the lock is not available the call will not wait until it becomes available, as a blocking operation would. We also need to repeatedly check the availability of the lock, since there is no callback as with asynchronous calls.
Summary:
Blocking is always synchronous.
A synchronous call has blocking behavior if it waits for some event to complete; the caller may enter a wait state.
A synchronous call is non-blocking if it repeatedly checks for some event to occur before proceeding to the next instruction. The caller does not enter a wait state waiting for the event to complete.
An asynchronous call cannot be blocking, and it involves a callback from the called method which the caller needs to handle.
I would classify them as follows:
Blocking - The thread will wait on the action until success or failure (emphasis on 'will wait'; failure is commonly a timeout).
Synchronous - The thread will complete the action, either with success or failure, before reaching any line after it (emphasis on action completion).
Non-blocking - The thread will not wait for the action to complete; it attempts the action and returns immediately.
Asynchronous - Another thread (either logical or physical) will complete the action or signal that it is ready using a callback; the calling thread will not wait before performing the following commands.
Note: this is where the name asynchronous originates, since you can't be sure in which order the commands will execute.
Synchronous means that the work is done on the thread that calls the function, and the method does not return until it is finished.
Asynchronous methods return immediately because another thread does the work and raises a flag or fires an event when the work is done.
Blocking means that the thread executing a blocking call will wait until the event has occurred. For example, you try to read from a socket and no one sends you a message; the blocking call will not return until the message has been received from the socket.
Non-blocking means the opposite of blocking, which implies that non-blocking calls are asynchronous.
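To tie these terms back to the Future examples earlier in this thread, here is a small Scala sketch contrasting the two styles (the 1-second sleep is just a stand-in for real work):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val work: Future[Int] = Future { Thread.sleep(1000); 42 }

// Synchronous and blocking: the calling thread waits until the result is available.
val answer: Int = Await.result(work, 2.seconds)

// Asynchronous and non-blocking: register a callback and move on immediately.
work.onComplete(result => println(s"done: $result"))
println("this line does not wait for the future")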