using asyncio and threads - python-3.x

Would it make sense to use both asyncio and threading in the same Python project, so that code runs in different threads and in some of them asyncio is used to get sequential-looking code for asynchronous activities?
Or would trying to do this mean that I am missing some basic concept about the usage of either threading or asyncio?

I didn't understand what you're asking (the part about "sequential-looking code for asynchronous activities"), but since there are no answers, I'll write down some thoughts.
Let's talk about why we need asyncio/threads at all. Imagine we have the task of making two requests.
If we use plain single-threaded, non-async code, our only option is to make the request for one URL and, only after it's done, make the request for the other:
request(url1)
request(url2)
The problem here is that we work inefficiently: each function spends most of its execution time doing nothing, just waiting for network results. It would be great if we could somehow use the CPU for the second request while the first one is stuck waiting on the network and doesn't need it.
This problem can be solved (and usually is solved) by running the functions in different threads:
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=2) as e:
    e.submit(request, url1)
    e.submit(request, url2)
We would get results faster this way: while the first request is stuck on the network, the CPU can do something useful for the second request in another thread.
This is, however, not an ideal solution: switching between threads has some cost, and the execution flow is more complex than in the first example.
There should be a better way.
Using one function's idle period to start executing another function is what asyncio is, in general, about:
await asyncio.gather(
    async_request(url1),
    async_request(url2),
)
The event loop manages the execution flow: when the first coroutine reaches some I/O operation and the CPU can be used to do work elsewhere, the second coroutine starts. Later, the event loop returns to resume execution of the first coroutine.
We get "parallel" requests and clean, understandable code. Since we get concurrency in a single thread, we simply don't need another one.
Actually, even when we use asyncio, threads can still be useful. If we are ready to pay for them, they can help us convert synchronous I/O functions into asynchronous ones very quickly:
async def async_request(url):
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, request, url)
But again, this is optional, and we can usually find a module for making requests (and doing other I/O tasks) asynchronously without threads.
I haven't come across any other cases where threads are useful in asynchronous programs.
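To make the executor trick above self-contained, here is a hedged sketch in which urllib's blocking urlopen plays the role of request (the URLs are just placeholders):

import asyncio
from urllib.request import urlopen

def request(url):
    with urlopen(url) as resp:  # ordinary blocking I/O
        return resp.read()

async def async_request(url):
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, request, url)  # runs in a thread

async def main():
    pages = await asyncio.gather(
        async_request("http://example.com"),
        async_request("http://example.org"),
    )
    print([len(page) for page in pages])

asyncio.run(main())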

Sure, it may make sense.
Asynchronous code in principle runs a bunch of routines in the same thread.
This means that the moment one routine has to wait for input or output (I/O), it halts that routine temporarily and simply starts processing another routine, until it encounters a wait there, and so on.
Multi-threaded (or "parallelized") code in principle runs at the same time on different cores of your machine. (Note that in Python, parallel processing is achieved by using multiple processes, as pointed out by @Yassine Faris below.)
It may make perfect sense to use both in the same program. Use asyncio to keep processing while waiting for I/O. Use multi-threading (multiprocessing in Python) to do, for example, heavy calculations in parallel in another part of your program; a hedged sketch follows.
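In this sketch, asyncio.sleep stands in for real I/O, and heavy_calculation and the URL are made-up placeholders: the event loop keeps serving I/O while a process pool grinds through the CPU-bound work.

import asyncio
from concurrent.futures import ProcessPoolExecutor

def heavy_calculation(n):  # CPU-bound: belongs in another process
    return sum(i * i for i in range(n))

async def fetch(url):  # I/O-bound: asyncio handles the waiting
    await asyncio.sleep(1)  # stands in for a real network call
    return "data from " + url

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        data, result = await asyncio.gather(
            fetch("http://example.com"),
            loop.run_in_executor(pool, heavy_calculation, 10_000_000),
        )
        print(data, result)

if __name__ == "__main__":  # guard needed for multiprocessing
    asyncio.run(main())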

Related

Async and scheduling - how do libraries avoid blocking at the lowest level?

I've been using various concurrency constructs for a while now without much consideration for how all the magic happens, which has recently made me increasingly uneasy.
In an attempt to remedy this ... feeling, I have been reading up on how async works under the hood. When I say async, in this case I'm referring to userland / greenthread / cooperative multitasking, although I assume some of the concepts will also apply to traditional OS managed threads insofar as a scheduler and workers are involved.
I see how a worker can suspend itself and let other workers execute, but at the lowest level in non-blocking library code, how does the scheduler know when a previously suspended worker's job is done and to wake up that worker?
For example, if you fire up a worker in some sort of async block and perform an operation that would normally block (e.g. an HTTP request, a SQL query, other I/O), then even though your calling code is async, that operation (library code) had better play nice with your async framework, or you've effectively defeated the purpose of using it (the What Color Is Your Function problem): you've blocked your scheduler from calling other waiting operations while it waits for your blocking call, executed inside your non-blocking calling code, to complete.
So now we've got async code calling other async library code, and now I'm asking myself the question all over again - how does the async library code know when to suspend and resume operation?
The idea of firing off a HTTP request, moving on, and returning later to check for results is weird to think about for me - not conceptually but from an implementation standpoint.
How do you perform a partial operation, e.g. sending TCP packets and then continuing with the rest of the program execution, only to come back later and check whether results have been delivered? Delivered to what? A socket?
Now we're another layer deep and you are using socket selects to avoid creating threads and blocking, but, again...
how do those sockets start an operation, move on before completion, and then how does select know when data is available?
Are you continually checking some buffer to see if bytes have been delivered in an infinite loop and moving on if not?
Anyhow - I think you see where I'm going here....
I focused mainly on HTTP as a motivating example, but the same question applies for any normally blocking operations - how does it all work at the bottom?
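To pin down the layer I'm asking about, here's a minimal Python sketch of that select mechanism (hedged: example.com is just a placeholder, and a real event loop does much more). My question is essentially what happens inside sel.select:

import selectors
import socket

sel = selectors.DefaultSelector()

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.setblocking(False)                   # further reads return immediately
sel.register(sock, selectors.EVENT_READ)  # "wake me when this is readable"

done = False
while not done:
    # This one call is where a scheduler sleeps: the kernel watches ALL
    # registered sockets and wakes us only when one has data available.
    for key, _ in sel.select(timeout=5):
        chunk = key.fileobj.recv(4096)
        if chunk:
            print(chunk[:60])
        else:                             # empty read: the peer closed
            sel.unregister(key.fileobj)
            key.fileobj.close()
            done = True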
Here are some of the resources I found helpful while researching the topic and which informed this question:
David Beazley's excellent video Build Your Own Async, where he walks you through a simple implementation of a scheduler which fires callbacks and suspends execution by sleeping on a waiting queue. I found this video tremendously instructive, but it stops a bit short: it shows how using an async sleep frees up the scheduler to execute other workers, but doesn't really go into what happens when you call code in those workers that must itself be non-blocking so that it plays nice with the scheduler.
How does non-blocking IO work under the hood - This got me further along in my understanding, but still left me with a few uncertainties.

Dart is Single Threaded but why it uses Future Objects and perform asynchronous operations

According to the documentation, Dart is single-threaded, but to perform two operations at a time we use Future objects, which work the same as threads.
Use Future objects (futures) to perform asynchronous operations.
If Dart is single-threaded, then why does it allow us to perform asynchronous operations?
Note: Asynchronous operations are parallel operations which are called threads
You mentioned that:
Asynchronous operations are parallel operations which are called threads
First of all, asynchronous operations are not exactly parallel, or even concurrent. They simply mean that we do not want to block our flow of execution (thread) or wait for the response until certain work is done. But the way we implement asynchronous operations decides whether they are parallel or concurrent.
Parallelism vs. Concurrency?
Parallelism is actually doing lots of things simultaneously, at the
same time. E.g. you are walking and at the same time you're digesting
your food. Both tasks run completely in parallel, at exactly the
same time.
While
Concurrency is the illusion of parallelism. Tasks seem to be executed
in parallel, but they aren't. It's like handling lots of things at a time, but
only doing one task at a specific moment. E.g. you are walking and suddenly stop to tie your shoe lace. After tying your shoe lace, you start walking again.
Now, coming to Dart: Future objects, along with the async and await keywords, are used to perform asynchronous tasks. Here, asynchronous doesn't mean that tasks will be executed in parallel or concurrently with each other. Instead, in Dart even asynchronous tasks are executed on the same thread, which means that while we wait for another task to be completed, we continue executing our synchronous code. Future objects are used to represent the result of a task which will be completed at some point in the future.
If you really want to execute your tasks in parallel, then consider using Isolates (which run in a separate thread and don't share memory with the main thread, i.e. the spawning thread).
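Dart's single-threaded async model is essentially the same as Python's asyncio; as a rough, hedged analogy in Python:

import asyncio

async def slow_task():
    await asyncio.sleep(1)  # yields to the event loop instead of blocking
    return "done"

async def main():
    task = asyncio.create_task(slow_task())
    print("the same single thread keeps running synchronous code...")
    print(await task)  # ...until we actually need the result

asyncio.run(main())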
Why? Because it is a necessity. Some operations, like http requests or timers, are asynchronous in nature.
There are isolates, which allow you to execute code in a different process. The difference from threads in other programming languages is that isolates do not share memory with each other (which would lead to concurrency issues); they only communicate through messages.
To receive these messages (or, wrapped in a Future, the result of one), Dart uses an event loop.
The Event Loop and Dart
Are Futures in Dart threads?
Dart is single-threaded, but it can call native code (like C/C++) to perform asynchronous operations, which can introduce new threads.
In Flutter, the Flutter engine is implemented in C++; it provides the low-level implementation of Flutter's core API, including asynchronous tasks like file and network I/O, through new threads underneath.
Like Dart, JavaScript is also single-threaded. I found this video very helpful for understanding the "single-threaded" thing: What the heck is the event loop anyway?
Here are a few notes:
Asynchronous doesn't mean multi-threaded. It means the code is not run at the same time. Usually asynchronous just means that it is scheduled to be run on the same thread (Isolate) after other tasks have finished.
Dart isn't actually single-threaded. You can create another thread by creating another Isolate. However, within an Isolate the Dart code runs on a single thread, and separate Isolates don't share memory. They can only communicate by messages.
A Future says that a value (or an error) will be returned at some point in the future. It doesn't say which thread the work is done on. Most futures are completed on the current Isolate, but some futures (IO, for example) can be completed by work done on separate threads.
See this answer for links to more resources.
I have an article explaining this: https://medium.com/@truongsinh/flutter-dart-async-concurrency-demystify-1cc739aaae57
In short, Flutter/Dart is not technically single-threaded, even though Dart code is executed in a single thread. Dart is a concurrent language with a message-passing pattern that can take full advantage of modern multi-core architecture, without worrying about locks or mutexes. Blocking in Dart can be either I/O-bound or CPU-bound, which should be solved, respectively, by Future and by Dart's Isolate / Flutter's compute.

Goroutines vs asyncio tasks + thread pool for CPU-bound calls

Are goroutines roughly equivalent to python's asyncio tasks, with an additional feature that any CPU-bound task is routed to a ThreadPoolExecutor instead of being added to the event loop (of course, with the assumption that we use a python interpreter without GIL)?
Is there any substantial difference between the two approaches that I'm missing? Of course, apart from the efficiencies and code clarity that result from the concurrency being an integral part of Go.
I think I know part of the answer. I tried to summarize my understanding of the differences, in order of importance, between asyncio tasks and goroutines:
1) Unlike under asyncio, one rarely needs to worry that a goroutine will block for too long. OTOH, memory sharing across goroutines is akin to memory sharing across threads rather than asyncio tasks, since goroutine execution-order guarantees are much weaker (even if the hardware has only a single core).
asyncio will only switch context on an explicit await, yield, and certain event loop methods, while the Go runtime may switch on far more subtle triggers (such as certain function calls). So asyncio is perfectly cooperative, while goroutines are only mostly cooperative (and the roadmap suggests they will become even less cooperative over time). A sketch after this list illustrates the asyncio side of this.
A really tight loop (such as one doing numeric computation) could still block the Go runtime (well, the thread it's running on). If that happens, it's going to have less of an impact than in Python - unless it occurs in multiple threads.
2) Goroutines have off-the-shelf support for parallel computation, which would require a more sophisticated approach under asyncio.
The Go runtime can run threads in parallel (if multiple cores are available), so it's somewhat similar to running multiple asyncio event loops in a thread pool under a GIL-less Python runtime, with a language-aware load balancer in front.
3) The Go runtime will automatically handle blocking syscalls in a separate thread; this needs to be done explicitly under asyncio (e.g., using run_in_executor).
That said, in terms of memory cost, goroutines are very much like asyncio tasks rather than threads.
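A hedged sketch of point 1 on the asyncio side (the tight loop contains no await, so the event loop cannot switch away from it):

import asyncio

async def ticker():
    for _ in range(3):
        print("tick")
        await asyncio.sleep(0.5)

async def hog():
    total = 0
    for i in range(50_000_000):  # no await inside: asyncio can't switch here
        total += i
    return total

async def main():
    t = asyncio.create_task(ticker())
    await hog()  # no "tick" is printed while this runs
    await t

asyncio.run(main())

In Go, the same loop would at worst tie up one thread, as point 1 notes.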
I suppose you could think of it as working that way underneath, sure. It's not really accurate, but close enough.
But there is a big difference: in Go you can write straight line code, and all the I/O blocking is handled for you automatically. You can call Read, then Write, then Read, in simple straight line code. With Python asyncio, as I understand it, you need to queue up a function to handle the reads, rather than just calling Read.

How Do Callbacks work in Non-blocking Design?

I looked at a few other questions but didn't quite find what I was looking for. I'm using Scala, but my question is very high level and so is hopefully language-agnostic.
A regular scenario:
Thread A runs a function and there is some blocking work to be done (say a DB call).
The function has some non-blocking code (e.g. an Async block in Scala) to cause some sort of 'worker' Thread B (in a different pool) to pick up the I/O task.
The method in Thread A completes returning a Future which will eventually contain the result and Thread A is returned to its pool to quickly pick up another request to process.
Q1. Some thread somewhere usually has to wait?
My understanding of non-blocking architectures is that the common approach is to still have some thread waiting/blocking on the I/O work somewhere; it's just a case of having different pools which have access to different cores, so that a small number of request-processing threads can manage a large number of concurrent requests without ever waiting on a CPU core.
Is this a correct general understanding?
Q2. How does the callback work?
In the above scenario, Thread B that is doing the I/O work will run the callback function (provided by Thread A) if/when the I/O work has completed, which completes the Future with some result.
Thread A is now off doing something else and has no association anymore with the original request. How does the result in the Future get sent back to the client socket? I understand that different languages have different implementations of such a mechanism, but at a high level my current assumption is that (regardless of the language/framework) some framework/container objects must always be doing some sort of orchestration, so that when a Future task is completed the result gets sent back to the original socket handling the request.
I have spent hours trying to find articles which explain this, but every article seems to deal only with real low-level details. I know I'm missing some details, but I am having difficulty asking my question because I'm not quite sure which parts I'm missing :)
My understanding of non-blocking architectures is that the common approach is to still have some Thread waiting/blocking on the I/O work somewhere
If a thread is getting blocked somewhere, it is not really a non-blocking architecture. So no, that's not really a correct understanding of it. That doesn't mean that this is necessarily bad. Sometimes you just have to deal with blocking (using JDBC, for example). It would be better to push it off into a fixed thread pool designated for blocking, rather than allowing the entire application to suffer thread starvation.
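In Python asyncio terms, that "designated pool for blocking" pattern might look like the following hedged sketch (the names and the fake query are made up):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

# A small, fixed pool that quarantines blocking calls (a JDBC-style
# driver, say) so they can't starve the threads serving requests.
blocking_pool = ThreadPoolExecutor(max_workers=4, thread_name_prefix="blocking")

def blocking_query(sql):
    time.sleep(0.5)  # stands in for a blocking driver call
    return "rows for " + sql

async def handle_request(sql):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(blocking_pool, blocking_query, sql)

print(asyncio.run(handle_request("SELECT 1")))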
Thread A is now off doing something else and has no association any more with the original request. How does the Result in the Future get sent back to the client socket?
Using Futures, it really depends on the ExecutionContext. When you create a Future, where the work is done depends on the ExecutionContext.
val f: Future[?] = ???
val g: Future[?] = ???
f and g are created immediately, and the work is submitted to a task queue in the ExecutionContext. We cannot guarantee which will actually execute or complete first in most cases. What you do with the values matters as well. Obviously, if you use Await to wait for the completion of the Futures, then we block the current thread. If we map them and do something with the values, then we again need another ExecutionContext to submit the task to. This gives us a chain of tasks that are asynchronously getting submitted and re-submitted to the executor for execution every time we manipulate the Future.
Eventually there needs to be some onComplete at the end of that chain to pass that value along to something, whether it's writing to a stream or something else. I.e., it is probably out of the hands of the original thread.
Q1: No, at least not at the user code level. Hopefully your async I/O ultimately comes down to an async kernel API (e.g. select()). Which in turn will be using DMA to do the I/O and trigger an interrupt when it's done. So it's async at least down to the hardware level.
Q2: Thread B completes the Future. If you're using something like onComplete, then thread B will trigger that (probably by creating a new task and handing that task off to a thread pool to pick it up later) as part of the completing call. If a different thread has called Await to block on the Future, it will trigger that thread to resume. If nothing has accessed the Future yet, nothing in particular happens - the value sits there in the Future until something uses it. (See PromiseCompletingRunnable for the gritty details - it's surprisingly readable).
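A hedged Python analogy of that Q2 flow, for the curious (the worker thread completes the future and runs the registered callback; the submitting thread has long since moved on):

import threading
from concurrent.futures import ThreadPoolExecutor

def blocking_db_call():
    return "row data (I/O ran on %s)" % threading.current_thread().name

def on_complete(fut):
    # Runs when the future completes, on whichever thread completed it.
    print("callback got:", fut.result())

pool = ThreadPoolExecutor(max_workers=1, thread_name_prefix="worker-B")
future = pool.submit(blocking_db_call)  # "thread A" hands off the work...
future.add_done_callback(on_complete)   # ...registers a callback, and moves on
print("thread A is already free to handle the next request")
pool.shutdown(wait=True)  # wait so the demo's callback prints before exit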

What is the difference between single thread and non-blocking I/O operation in NodeJs?

I've been reading up and going through as much Node.js code as I can, but I'm a bit confused about this:
What exactly does Node being single-threaded mean, and what does non-blocking I/O mean? I can achieve the first one by spawning a child process and the second one by using the async library. But I wanted to be clear on what they mean and how non-blocking I/O can still slow down your app.
I'll try my best to explain.
Single-threaded means that the Node.js JavaScript runtime, at any particular point in time, is only executing one piece of code out of all the code it has loaded. In effect, it starts somewhere and works its way down through all instructions (the call stack) until it's done. While it's executing code, nothing can interrupt this process, and all I/O must wait. Thankfully, most call stacks are relatively short, and lots of things we do in Node.js are more of the "bookkeeping" type than CPU-heavy.
Being single-threaded, though, means any instruction that takes a long time is a huge problem for the responsiveness of the system. The runtime can only do one thing at a time, so everything must wait until that instruction has finished. If any "I/O" instruction (say, reading from disk) blocked execution, the system would be unnecessarily unavailable during that time.
But thankfully, we've got non-blocking I/O.
Instead of waiting for a file to be read:
console.log(readFileSync(filePath))
you write your code so that you DON'T wait for a file to be read:
readFile(filePath)
The readFile call returns almost instantly (perhaps in a few nanoseconds), so the runtime can continue executing the instructions that come next. But if the readFile call returns before the data has been read, there's no way the readFile call can return the file contents. That's where callbacks come in:
readFile(filePath, function(err, contents) { console.log(contents) })
Still, the readFile call returns almost instantly, and the runtime can continue. It will first finish the work at hand (all instructions coming after readFile). Nothing is done with the function that's passed, other than storing a reference to it.
Then, at some later point in time (perhaps 10 ms, 100 ms, or 1000 ms later), when reading the file has completed, the callback is called with the full contents of the file as its second argument. Before that time, any number of other batches of work could have been done by the runtime.
Now I will address your comments about spawning child processes and the Async library. You are wrong on both counts.
Spawning a child process is a way to let Node.js use more than one CPU core. Being single-threaded, a single Node.js process has no use for more than one core. Still, if you are on a multi-core computer, you may want to use all those cores; hence, start multiple Node.js processes.
The Async library will not give you non-blocking I/O; Node.js itself gives you that. What Node.js does not give you is an easy way to deal with data coming in from multiple callbacks. The Async library can help a great deal with that.
As I'm not an expert on Node.js internals, I welcome corrections!
Related questions:
asynchronous vs non-blocking
What's the difference between: Asynchronous, Non-Blocking, Event-Base architectures?
