I have inherited a set of legacy web services (VB.Net, IIS-hosted ASMX) in which some of the WebMethods use basic multithreading.
It seems they have done this to let the WebMethod return a response to the client more quickly, while still performing longer-running operations that do not affect the response object itself (such as cleanup operations, logging, etc.).
My question is: what happens in this web service when the main thread (the one that created the WebMethod instance) completes? Do these other threads terminate, or does it block the main thread from completing if the other threads are not finished? Or do the threads run to completion in the IIS process?
Threads are independent of each other unless one thread waits on another. Once created, there is nothing stopping the request (main) thread from completing, and any other threads simply complete on their own.
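For illustration, here is a minimal sketch of the pattern described in the question (written in C# rather than VB.Net for brevity; the class and method names are made up):

using System.Threading;
using System.Web.Services;

public class LegacyService : WebService
{
    [WebMethod]
    public string DoWork()
    {
        // Hand the non-essential work (cleanup, logging, etc.) to another thread.
        // Nothing waits on it, so the request thread is free to return immediately.
        var background = new Thread(() =>
        {
            // long-running cleanup / logging that does not affect the response
        });
        background.Start();

        return "response"; // the background thread keeps running in the IIS worker process until it finishes
    }
}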
Related
I suppose there is a thread pool which the web server uses to serve requests, so the controllers run on one of the threads of this pool. Say it is the 'serving' pool.
In one of my async action methods I use an async method:
var myResult = await myObject.MyMethodAsync();
// my completion logic here
As explained in many places, we do this so as not to block a valuable serving-pool thread; instead, MyMethodAsync executes on some other background thread... then the completion logic continues on a serving-pool thread again, probably a different one, but with the HTTP context and some other minor things marshaled there correctly.
So the background thread on which MyMethodAsync runs must come from another thread pool, otherwise the whole thing makes no sense.
Question
Please confirm or correct my understanding, and if it is correct, I still don't see why a thread in one pool would be a more valuable resource than a thread in another pool. At the end of the day, the whole thing runs on the same hardware, with a given number of cores and CPU performance...
There is only one thread pool in a .NET application. It has both worker threads and I/O threads, which are treated differently, but there is only one pool.
I suppose there is a thread pool which the web server uses to serve requests, so the controllers run on one of the threads of this pool. Say it is the 'serving' pool.
ASP.NET uses the .NET thread pool to serve requests, yes.
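That single pool contains both worker threads and I/O (completion port) threads; a quick console sketch, just for illustration, that asks the pool about each kind:

using System;
using System.Threading;

class PoolInfo
{
    static void Main()
    {
        // The one .NET thread pool tracks worker threads and I/O (completion port)
        // threads separately, but they belong to the same pool.
        int workerThreads, ioThreads;
        ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
        Console.WriteLine($"Available worker threads: {workerThreads}");
        Console.WriteLine($"Available I/O completion port threads: {ioThreads}");
    }
}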
As explained in many places, we do this so as not to block a valuable serving-pool thread; instead, MyMethodAsync executes on some other background thread... then the completion logic continues on a serving-pool thread again, probably a different one, but with the HTTP context and some other minor things marshaled there correctly.
So the background thread on which MyMethodAsync runs must come from another thread pool, otherwise the whole thing makes no sense.
This is the wrong part.
With truly asynchronous methods, there is no thread (as described on my blog). While the code within MyMethodAsync will run on some thread, there is no thread dedicated to running MyMethodAsync until it completes.
You can think about it this way: asynchronous code usually deals with I/O, so let's say, for example, that MyMethodAsync is posting something to an API. Once the post is sent, there's no point in having a thread just block waiting for a response. Instead, MyMethodAsync just wires up a continuation and returns. As a result, most asynchronous controller methods use zero threads while waiting for external systems to respond. There's no "other thread pool" because there's no "other thread" at all.
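A minimal sketch of such an action, just for illustration (the controller name, URL, and payload below are made up), assuming ASP.NET MVC on .NET 4.5+:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportsController : Controller
{
    private static readonly HttpClient client = new HttpClient();

    public async Task<ActionResult> Send()
    {
        // At this await, the request thread goes back to the pool; no thread is
        // blocked while the external API call is in flight.
        var response = await client.PostAsync("https://example.com/api/report",
                                               new StringContent("payload"));

        // The continuation (the rest of this method) runs later on some pool
        // thread, not necessarily the one that started the request, with
        // HttpContext and the like flowed across.
        var body = await response.Content.ReadAsStringAsync();
        return Content(body);
    }
}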
Which is kind of the point of asynchronous code on ASP.NET: to use fewer threads to serve more requests. Also see this article.
Node.js being single-threaded means it has one main thread that takes care of our operations, and for the rest of the asynchronous work it hands the task to another thread.
So, as per my understanding, a callback results in offloading the current task to a separate thread.
app.post("/get", (req, res) => {
  res.status(200).json({ message: "Success" });
});
Does the above statement execute in another thread?
Everything in a Javascript program runs in one thread (unless you use workers, which your example does not).
Javascript runs with a main loop. The main loop reads a queue of events and processes them one by one. All sorts of things turn into events in that queue: incoming web requests, timeouts, intervals, promises awaiting resolution, messages, you name it. It's more complex than this, of course.
It's good that it's all one thread: you don't have to worry about thread concurrency.
There's a downside: if you write a long-running method, its execution will block the main loop until it's finished. This makes web pages janky and gives Node.js programs slow response times. Use workers (separate threads) for that kind of work.
The handler in a .post() call like yours runs in response to an event on the queue announcing an incoming web request.
Read about the main loop.
I have a Spring controller. The request thread from the controller is passed to the @Service-annotated service class. Now I want to do some background work, and the request thread must somehow trigger the background thread, continue with its own work, and not wait for the background thread to complete.
My first question: is it safe to do this?
Second question: how to do it?
Is this safe
Not really. If you have many concurrent users, you'll spawn a thread for every one of them, and the high number of threads could bring your server to its knees. The app server uses a pool of threads precisely to avoid this problem.
How to do this
I would do this by using the asynchronous capabilities of Spring. Call a service method annotated with @Async, and the service method will be executed by another thread, from a configurable pool.
I ran into an interesting problem in IIS and I would like to get to the bottom of it.
I have an app that long polls. I have implemented my own long polling.
A request comes in. I block that request and write to it from my worker thread.
Then everything finishes, I signal, and the thread that was handling the GET request is released.
I am not talking about scalability here. It is not my concern.
Just for testing, I ONLY make concurrent GET requests.
So there are only 2 threads running: ONE for the GET request and one worker thread.
I know that the request thread exits safely. I put a print right before the controller's action returns. (Is that good enough?)
What I run into is: IIS slows down after a while even though I am exiting the GET thread.
So why is it slowing down? When I implemented it with an AsyncController, it did not slow down.
I know AsyncControllers attach and detach threads from the pool. But if I have 25 threads available in my pool, and I have one active worker thread and one thread that enters and exits for the GET, I am sort of lost. Thanks
Situation: A high-scale Azure IIS7 application, which must do this:
Receive request
Place request payload onto a queue for decoupled asynchronous processing
Maintain connection to client
Wait for a notification that the asynchronous process has completed
Respond to client
Note that these will be long-running processes (30 seconds to 5 minutes).
If we employ Monitor.Wait(...) here, waiting for the asynchronous process to call back into the same web application and invoke Monitor.Pulse(...) on the object we invoked Monitor.Wait() on, will this effectively create thread starvation in a hurry?
If so, how can this be mitigated? Is there a better pattern to employ here for awaiting the callback? For example, could we place the Response object into a thread-safe dictionary, and then somehow yield, and let the callback code lock the Response and proceed to respond to the client? If so, how?
Also, what if the asynchronous process dies, and never invokes the callback, thus never causing Monitor.Pulse() to fire? Is our thread hung now?
Given the requirements you have, I would suggest having a look at AsyncPage/AsyncController (depending on whether you use ASP.NET WebForms or ASP.NET MVC). These give you the ability to execute long-running tasks in IIS without tying up request-processing threads from the pool.
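For example, in ASP.NET MVC an AsyncController along these lines releases the request thread while the queued work runs. This is only a sketch: WorkQueue stands in for whatever queueing and completion-notification mechanism the application actually uses, and here it merely simulates the external process with a timer.

using System;
using System.Threading;
using System.Web.Mvc;

public class ProcessingController : AsyncController
{
    public void SubmitAsync(string payload)
    {
        AsyncManager.OutstandingOperations.Increment();

        // Enqueue the payload and register a callback for when the decoupled
        // process reports completion. The request thread returns to the pool
        // here instead of sitting in Monitor.Wait.
        WorkQueue.Enqueue(payload, result =>
        {
            AsyncManager.Parameters["result"] = result;
            AsyncManager.OutstandingOperations.Decrement(); // allows SubmitCompleted to run
        });
    }

    public ActionResult SubmitCompleted(string result)
    {
        // Runs on a pool thread once the callback has fired; respond to the client.
        return Content(result);
    }
}

// Stand-in for the real queue plus completion notification; a timer simulates the
// external process calling back later without holding a thread in the meantime.
public static class WorkQueue
{
    public static void Enqueue(string payload, Action<string> onCompleted)
    {
        Timer timer = null;
        timer = new Timer(_ =>
        {
            onCompleted("processed: " + payload);
            timer.Dispose();
        }, null, TimeSpan.FromSeconds(30), Timeout.InfiniteTimeSpan);
    }
}

AsyncManager also exposes a Timeout (and there is an [AsyncTimeout] attribute), so a callback that never arrives ends the request with a timeout rather than hanging it indefinitely.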