How exactly do thread pools work in ASP.NET Core? - multithreading

I suppose there is a thread pool which the web server uses to serve requests, so the controllers run within one of the threads of this thread pool. Say it is the 'serving' pool.
In one of my async action methods I await an async method:
var myResult = await myObject.MyMethodAsync();
// my completion logic here
As explained in many places, we do this so as not to block the valuable serving-pool thread, and instead execute MyMethodAsync on some other background thread... then continue the completion logic on a serving-pool thread again, probably a different one, but with the HTTP context and some other minor things marshaled there correctly.
So the background thread on which MyMethodAsync runs must come from another thread pool, otherwise the whole thing makes no sense.
Question
Please confirm or correct my understanding, and in case it is correct, I still don't see why a thread in one pool would be a more valuable resource than a thread in another pool. At the end of the day, the whole thing runs on the same particular hardware with a given number of cores and CPU performance...

There is only one thread pool in a .NET application. It has both worker threads and I/O threads, which are treated differently, but there is only one pool.
I suppose there is a thread pool which the web server uses to serve requests, so the controllers run within one of the threads of this thread pool. Say it is the 'serving' pool.
ASP.NET uses the .NET thread pool to serve requests, yes.
As explained in many places, we do this so as not to block the valuable serving-pool thread, and instead execute MyMethodAsync on some other background thread... then continue the completion logic on a serving-pool thread again, probably a different one, but with the HTTP context and some other minor things marshaled there correctly.
So the background thread on which MyMethodAsync runs must come from another thread pool, otherwise the whole thing makes no sense.
This is the wrong part.
With truly asynchronous methods, there is no thread (as described on my blog). While the code within MyMethodAsync will run on some thread, there is no thread dedicated to running MyMethodAsync until it completes.
You can think about it this way: asynchronous code usually deals with I/O, so let's say for example that MyMethodAsync is posting something to an API. Once the post is sent, there's no point in having a thread just block waiting for a response. Instead, MyMethodAsync just wires up a continuation and returns. As a result, most asynchronous controller methods use zero threads while waiting for external systems to respond. There's no "other thread pool" because there's no "other thread" at all.
Which is kind of the point of asynchronous code on ASP.NET: to use fewer threads to serve more requests. Also see this article.

Related

Node JS multiple request handling

Node.js being single-threaded means it has one main thread that takes care of our operations, and for the rest of the asynchronous work, it hands the task to another thread.
So, as per my understanding, a callback results in offloading the current task to a separate thread.
app.post("/get", (req, res) => {
  res.status(200).json({ message: "Success" });
});
Does the above statement execute in another thread?
Everything in a Javascript program runs in one thread (unless you use workers, which your example does not).
Javascript runs with a main loop. The main loop reads a queue of events and processes them one by one. All sorts of things turn into events in that queue: incoming web requests, timeouts, intervals, promises awaiting resolution, messages, you name it. It's more complex than this, of course.
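That ordering can be observed directly. The following is a minimal sketch (the names and delays are illustrative): everything runs on one thread, synchronous code first, then queued events are drained by the loop, with promise reactions (microtasks) running before the next timer event.

```javascript
// Minimal sketch of event-loop ordering: one thread, several queues.
const order = [];

order.push("sync");                                    // runs immediately
setTimeout(() => order.push("timer"), 0);              // queued as a timer event
Promise.resolve().then(() => order.push("microtask")); // queued as a microtask

setTimeout(() => {
  // Microtasks drain after the current script finishes, before timers fire.
  console.log(order.join(" -> ")); // sync -> microtask -> timer
}, 10);
```

No concurrency primitives are needed to protect `order`: all three pushes happen on the same thread, just at different turns of the loop.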
It's good that it's all one thread: you don't have to worry about thread concurrency.
There's a downside: if you write a long-running method, its execution will block the main loop until it's finished. This makes web pages janky and Node.js programs slow to respond. Use workers (separate threads) for that stuff.
The method in a .post() like yours is run in response to an event on the queue announcing an incoming web request.
Read about the main loop.

Node.js: how does the libuv thread pool work?

I am learning Node.js. I understand that the heart of Node.js is the reactor pattern, which is based on the event loop.
When any event occurs, it goes to the event queue and then gets picked up by the stack once its running tasks are over. This happens if the event is a non-blocking one; but if it is a blocking request, then the event loop passes it to a thread from libuv's thread pool.
Now my doubts are:
Once the execution is over, does the libuv thread pass the request back to the event queue or the event loop? Different tutorials describe different scenarios.
The thread pool in libuv has 3 threads. Now suppose 10 users all try to log in at the same time (in some app like Facebook or so), and the threads are blocked as they want to connect to the DB, so how will only three threads handle so much load?
I am really confused and did not get a good explanation of these doubts anywhere; any help will be appreciated.
When any event occurs it goes to the event queue and then gets picked up by the stack
The event does not get picked up by the stack; it is passed to the call stack by the event loop.
if it is a blocking request then the event loop passes it to a thread from libuv's thread pool.
There are ONLY FOUR things that use the thread pool - DNS lookup, fs, crypto and zlib. Everything else executes in the main thread, irrespective of whether it is blocking or not.
So logging in is a network request, and the thread pool does not handle it. Neither libuv nor Node has any code to handle the low-level operations involved in a network request. Instead, libuv delegates making the request to the underlying operating system, and then just waits for the operating system to signal that some response has come back. The OS keeps track of connections in its network stack, but network I/O is handled by your network hardware and your ISP.
Once the execution is over, does the libuv thread pass the request back to the event queue or the event loop?
From the Node.js point of view, does it make a difference?
2) The thread pool in libuv has 3 threads. Now suppose 10 users all try to log in at the same time (in some app like Facebook or so), and the threads are blocked as they want to connect to the DB, so how will only three threads handle so much load?
libuv uses a thread pool, but not in a "naive" way. Most asynchronous requests are filesystem/TCP interactions, which are handled via select(). You have to worry about the thread pool only when you create a custom C++ module and manually dispatch a CPU/IO-blocking task.

How is an event-based model (Node.js) more efficient than a thread-based model (Apache) for serving HTTP requests?

In Apache, we have a single thread for each incoming request. Each thread consumes memory space; the memory spaces don't collide with each other, which is why each request serves its purpose.
How does this happen in Node.js, given that it has single-threaded execution? A single memory space is used by all incoming requests. Why don't the requests collide with each other? What differentiates them?
As you noticed yourself, an event-based model allows the given memory to be shared more efficiently, as the overhead of re-executing a stack again and again is minimized.
However, to make an event-based or single-threaded model non-blocking, you have to get back to threads somewhere, and this is where Node's "I/O engine", libuv, comes in.
libuv supplies an API which, underneath, manages I/O tasks in a thread pool when an I/O task is done asynchronously. Using a thread pool keeps the main process from blocking; extensive JavaScript operations, however, still can block it (this is why there is the cluster module, which allows spawning multiple worker processes).
I hope this answers your question; if not, feel free to comment!

IIS long polling AsyncController

I ran into an interesting problem in IIS and I would like to get to the bottom of it.
I have an app that long-polls; I have implemented my own long polling.
A request comes in. I block that request and write to it from my worker thread.
Then, when everything finishes, I signal, and the thread that was handling the GET request is released.
I am not talking about scalability here. It is not my concern.
Just for testing, I only make concurrent GET requests.
So there are only 2 threads running: one for the GET request and one worker thread.
I know that the request thread exits safely; I put a print right before the controller's action returns. (Is that good enough?)
What I run into is that IIS slows down after a while, even though I am exiting the GET thread.
So why is it slowing down? When I implemented it with an AsyncController, it does not slow down.
I know AsyncControllers attach and detach threads from the pool. But if I have 25 threads available in my pool, and I have one active worker thread and one thread that enters and exits for the GET, I am sort of lost. Thanks

multithreading in ASP.Net webservice - what happens after the main thread completes?

I have inherited a set of legacy web services (VB.NET, IIS-hosted ASMX) in which some of the WebMethods are using basic multithreading.
It seems like they have done this to allow the WebMethod to return a response to the client quicker, while still doing some longer-running operations that do not impact the response object itself (such as cleanup operations, logging, etc.).
My question is: what happens in this web service when the main thread (the one that created the WebMethod instance) completes? Do these other threads terminate, or do they actually block the main thread from completing if they are not complete? Or do the threads run to completion in the IIS process?
Threads are independent of each other unless one thread waits on another. Once created, there is nothing stopping the request (main) thread from completing, and any other threads simply complete on their own.
