I ran into an interesting problem in IIS and I would like to get to the bottom of it.
I have an app that long polls; I have implemented my own long polling.
A request comes in. I block that request and write to it from my worker thread.
Then everything finishes, I signal, and the thread that was handling the GET request is released.
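For illustration, a hand-rolled long poll like this might look roughly as follows (a hypothetical sketch; the controller, the event, and the Publish method are invented names, not the actual code):

    using System;
    using System.Diagnostics;
    using System.Threading;
    using System.Web.Mvc;

    public class PollController : Controller
    {
        // Shared between the GET request thread and the worker thread.
        private static readonly ManualResetEventSlim DataReady =
            new ManualResetEventSlim(false);
        private static volatile string _payload;

        public ActionResult Poll()
        {
            // Blocks the request thread until the worker signals (or 30 s pass).
            DataReady.Wait(TimeSpan.FromSeconds(30));
            Debug.WriteLine("GET thread exiting");   // the print mentioned below
            return Content(_payload ?? "timeout");
        }

        // Called from the single worker thread when a result is ready.
        public static void Publish(string payload)
        {
            _payload = payload;
            DataReady.Set();   // releases the blocked GET thread
        }
    }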
I am not talking about scalability here. It is not my concern.
Just for testing, I ONLY make concurrent GET requests.
So there are only two threads running: one for the GET request and one worker thread.
I know that the request thread exits safely; I put a print right before the action of the controller returns. (Is that good enough?)
What I run into is that IIS slows down after a while, even though I am exiting the GET thread.
So why is it slowing down? When I implemented it with an AsyncController, it does not slow down.
I know AsyncControllers attach and detach threads from the pool. But if I have 25 threads available in my pool, one active worker thread, and one thread that enters and exits for the GET, I am sort of lost. Thanks
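For contrast, the asynchronous variant that does not exhibit the slowdown can be sketched like this (again hypothetical, using the task-based style rather than the older AsyncController begin/end pattern):

    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class AsyncPollController : Controller
    {
        // Completed by the worker thread when data is ready (one-shot here
        // for simplicity; a real long poll would renew it per cycle).
        private static TaskCompletionSource<string> _pending =
            new TaskCompletionSource<string>();

        public async Task<ActionResult> Poll()
        {
            // No thread is blocked here: the request thread goes back to the
            // pool, and the continuation runs when the worker signals.
            string payload = await _pending.Task;
            return Content(payload);
        }

        // Called from the worker thread when a result is ready.
        public static void Publish(string payload)
        {
            _pending.TrySetResult(payload);   // releases the awaiting request
        }
    }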
Related
I suppose there is a thread pool which the web server uses to serve requests, so the controllers are running within one of the threads of this pool. Say it is the 'serving' pool.
In one of my async action methods I use an async method:
    var myResult = await myObject.MyMethodAsync();
    // my completion logic here
As explained in many places, we do this so as not to block the valuable serving-pool thread; instead we execute MyMethodAsync on another background thread... then continue the completion logic on a serving-pool thread again, probably a different one, but with the HTTP context and some other minor things marshaled there correctly.
So the background thread on which MyMethodAsync runs must be from another thread pool, otherwise the whole thing makes no sense.
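In context, the action method looks something like the following (a hypothetical reconstruction; MyService, MyAction, and the stub body are invented names standing in for the real code):

    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class MyController : Controller
    {
        // Placeholder for whatever service myObject refers to.
        private readonly MyService myObject = new MyService();

        public async Task<ActionResult> MyAction()
        {
            // The serving-pool thread is released at this await while the
            // asynchronous operation is in flight.
            var myResult = await myObject.MyMethodAsync();

            // The completion logic resumes on some pool thread (possibly a
            // different one), with HttpContext marshaled back correctly.
            return Content(myResult);
        }
    }

    public class MyService
    {
        public async Task<string> MyMethodAsync()
        {
            await Task.Delay(100);   // stands in for real asynchronous I/O
            return "done";
        }
    }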
Question
Please confirm or correct my understanding, and in case it is correct, I still don't see why one thread in one pool would be a more valuable resource than another thread in another pool. At the end of the day, the whole thing runs on the same particular hardware with a given number of cores and CPU performance...
There is only one thread pool in a .NET application. It has both worker threads and I/O threads, which are treated differently, but there is only one pool.
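You can see both kinds of threads in that single pool directly from the API; a minimal console illustration:

    using System;
    using System.Threading;

    class PoolInfo
    {
        static void Main()
        {
            // One pool, two kinds of threads: worker and I/O (completion port).
            ThreadPool.GetMaxThreads(out int worker, out int io);
            Console.WriteLine($"Max worker threads: {worker}, max I/O threads: {io}");
        }
    }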
I suppose there is a thread pool which the web server uses to serve requests, so the controllers are running within one of the threads of this pool. Say it is the 'serving' pool.
ASP.NET uses the .NET thread pool to serve requests, yes.
As explained in many places, we do this so as not to block the valuable serving-pool thread; instead we execute MyMethodAsync on another background thread... then continue the completion logic on a serving-pool thread again, probably a different one, but with the HTTP context and some other minor things marshaled there correctly.
So the background thread on which MyMethodAsync runs must be from another thread pool, otherwise the whole thing makes no sense.
This is the wrong part.
With truly asynchronous methods, there is no thread (as described on my blog). While the code within MyMethodAsync will run on some thread, there is no thread dedicated to running MyMethodAsync until it completes.
You can think about it this way: asynchronous code usually deals with I/O, so let's say, for example, that MyMethodAsync is posting something to an API. Once the post is sent, there's no point in having a thread just block waiting for a response. Instead, MyMethodAsync just wires up a continuation and returns. As a result, most asynchronous controller methods use zero threads while waiting for external systems to respond. There's no "other thread pool" because there's no "other thread" at all.
Which is kind of the point of asynchronous code on ASP.NET: to use fewer threads to serve more requests. Also see this article.
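To make that concrete, here is a hypothetical MyMethodAsync that posts to an API; between the request being handed off and the response arriving, no thread in the process is dedicated to this call (the endpoint and payload are invented):

    using System.Net.Http;
    using System.Threading.Tasks;

    public class MyService
    {
        private static readonly HttpClient Client = new HttpClient();

        public async Task<string> MyMethodAsync()
        {
            // The request goes to the OS network stack, and the calling
            // thread returns to the pool at this await. No thread blocks
            // while we wait for the API to respond.
            var response = await Client.PostAsync(
                "https://example.com/api/items",          // hypothetical endpoint
                new StringContent("{ \"value\": 42 }"));

            // A pool thread picks up here only once the response arrives.
            return await response.Content.ReadAsStringAsync();
        }
    }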
A COM application based on the 'free' threading model subscribes to events published by another COM application that runs out of process.
The application works normally, but in some cases (or configurations?) it burns through a lot of so-called Tpp worker threads.
These threads apparently belong to a thread pool managed by Windows/COM, and they are used, at least, by COM to deliver incoming events to the application.
When the application receives events, that always happens in the context of one of these worker threads.
In the normal situation, updates are coming in from at most 2 or 3 unique worker threads.
But in the abnormal situation the application sees new & unique worker thread IDs appear every 3-8 minutes. Over the course of a week the application has seen about 1000 unique threads (!).
I highly suspect there is something wrong here. Because surely the thread pool doesn't need so many different threads, right?
What could be a reason for the thread pool behavior I'm seeing? Is it just normal that it creates different threads from time to time? Are the old threads still sticking around doing nothing? What action could trigger this while the application is running in the context of the worker thread?
Notes:
Our application is an OPC DA client (and the other application is the Siemens OPC-DA server)
The OS is Windows 10
I do not yet know whether the worker threads have exited or whether they stick around doing nothing
By way of an experiment, I have tried several bad/illegal things to see if it is possible for our application to somehow break a worker thread - which would then explain why the thread pool would have to create a new one; we might have destroyed the old one. But that seems more difficult than I had expected:
When running in the context of the worker thread, I have tried...
deliberately hanging with while (true) {}; result: the event delivery process just stalls, though no new worker thread is created for us
a deliberate uncaught C++ exception: no new worker thread is created
triggering a deliberate (read) access violation: no new thread either...
And that got me thinking: if our application can't kill that worker thread in an obvious way, what else could, or why else would the thread pool behave like this?
I'm investigating what reactive means, and because it is a rather low-level difference compared to the common non-reactive approach, I'd like to understand what is going on. Let's take Tomcat as the server (I guess it will be different for Netty).
Non-reactive
A connection from the browser is created.
For each request, a thread is taken from the thread pool, which will process it.
After the thread finishes processing, it returns the result through the connection back to the other side.
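A rough sketch of this blocking shape (Tomcat itself is Java; this C# fragment, with invented names, just shows that the serving thread is held for the full duration of the call):

    using System.Net.Http;

    class BlockingHandler
    {
        private static readonly HttpClient Client = new HttpClient();

        // One serving thread is held for the full duration of this call.
        public static string HandleRequest(string url)
        {
            // .Result blocks the thread until the whole response arrives;
            // the thread does nothing useful while it waits.
            return Client.GetStringAsync(url).Result;
        }
    }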
Reactive???
How is it done for Tomcat or Netty? I cannot find any decent article about how Tomcat supports reactive apps and how Netty does it differently (a connection-, thread-, and request-level explanation).
What bothers me is how reactive makes the web server non-blocking when you still need to wait for the response. Maybe with reactive you can get the first part of the response quicker, but is that all? I guess the main point of reactiveness is effective thread utilization, and this is what I am asking about.
Your last point, "I guess the main point of reactiveness is effective thread utilization, and this is what I am asking about", is exactly what the reactive approach was designed for.
So how is effective utilization achieved?
Well, as an example, let's say you are requesting data from a server multiple times.
In the typical non-reactive way, you will create/use multiple threads (maybe from a thread pool), one for each of your requests, and the job of one particular thread is only to serve that particular request. The thread takes the request, gives it to the server, waits until the data is fetched from the server, and then brings that data back to the client.
Now, in the reactive way, once there is a request, a thread will be allocated for it. If another request comes up, another thread won't be created; rather, it will be served by the same thread. How?
When the thread takes a request to the server, it won't wait for an immediate response from the server; rather, it will come back and serve another request.
Now, when the server has found the data and it is available, an event is raised, and the thread will then go and fetch that data. This is called the event-loop mechanism, as all the work of calling the thread back when data is available is achieved by raising an event.
Now, there is complexity involved in mapping each response to its exact request.
All of this complexity is abstracted away by Spring WebFlux (in Java).
So the whole process becomes non-blocking. And as only one thread is enough to serve all the requests, there is no thread switching; we can have one thread per CPU core, thus achieving effective utilization of threads.
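As a rough illustration of the mechanics (sketched in C# rather than Java, since the idea is the same; the URL and names are invented): many requests can be in flight at once while no thread waits on any of them.

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class EventLoopSketch
    {
        private static readonly HttpClient Client = new HttpClient();

        static async Task Main()
        {
            // Fire 100 requests. None of them holds a thread while waiting;
            // completions are dispatched as I/O events, so a handful of
            // threads (roughly one per core) can serve all of them.
            var tasks = Enumerable.Range(0, 100)
                .Select(i => FetchAsync($"https://example.com/item/{i}"));
            int[] sizes = await Task.WhenAll(tasks);
            Console.WriteLine($"Fetched {sizes.Length} responses");
        }

        static async Task<int> FetchAsync(string url)
        {
            // The calling thread is released at this await...
            string body = await Client.GetStringAsync(url);
            // ...and this continuation runs when the I/O event fires.
            return body.Length;
        }
    }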
I am learning Node.js. I understand the heart of Node.js is the reactor pattern, which is based on the event loop.
When any event occurs, it goes to the event queue and then gets picked up by the stack once its running tasks are over. This happens if the event is a non-blocking one, but if it is a blocking request, then the event loop passes it to a thread from libuv's thread pool.
Now my doubts are:
Once the execution is over, does the libuv thread pass the request back to the event queue or to the event loop? Different tutorials describe different scenarios.
libuv's thread pool has only 3 threads. Now suppose 10 users all try to log in at the same time (in some app like Facebook or so), and the threads are blocked because they want to connect to the DB. How will only three threads handle so much load?
I am really confused and have not found a good explanation of these doubts anywhere; any help will be appreciated.
When any event occurs, it goes to the event queue and then gets picked up by the stack
The event does not get picked up by the stack; it is passed to the call stack by the event loop.
if it is a blocking request, then the event loop passes it to a thread from libuv's thread pool.
There are ONLY FOUR things that use the thread pool - DNS lookup, fs, crypto, and zlib. Everything else executes in the main thread, irrespective of whether it is blocking or not.
So logging in is a network request, and the thread pool does not handle it. Neither libuv nor Node has any code to handle the low-level operations involved in a network request. Instead, libuv delegates the request to the underlying operating system and then just waits for the operating system to signal that some response has come back. The OS keeps track of connections in its network stack, but network I/O is handled by your network hardware and your ISP.
Once the execution is over, does the libuv thread pass the request back to the event queue or to the event loop?
From the Node.js point of view, does it make a difference?
2) libuv's thread pool has only 3 threads. Now suppose 10 users all try to log in at the same time (in some app like Facebook or so), and the threads are blocked because they want to connect to the DB. How will only three threads handle so much load?
libuv uses a thread pool, but not in a "naive" way. Most asynchronous requests are filesystem/TCP interactions, which are handled via select(). You have to worry about the thread pool only when you create a custom C++ module and manually dispatch a CPU- or I/O-blocking task.
I have inherited a set of legacy web services (VB.Net, IIS-hosted ASMX) in which some of the WebMethods use basic multithreading.
It seems they have done this to let the WebMethod return a response to the client more quickly while still performing some longer-running operations that do not affect the response object itself (such as cleanup operations, logging, etc.).
My question is: what happens in this web service when the main thread (the one that created the WebMethod instance) completes? Do the other threads terminate, does the main thread block until the other threads are complete, or do the threads run to completion on the IIS process?
Threads are independent of each other unless one thread waits on another. Once created, there is nothing stopping the request (main) thread from completing, and any other threads simply complete on their own.
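As a hypothetical illustration of the pattern described (sketched in C# for brevity; the names are invented and the real service will differ):

    using System.Threading;
    using System.Web.Services;

    public class LegacyService : WebService
    {
        [WebMethod]
        public string DoWork(string input)
        {
            string result = Process(input);   // work that shapes the response

            // Fire-and-forget: the response below is returned without waiting
            // for this thread, which runs to completion in the IIS process.
            new Thread(() => WriteAuditLog(input, result)).Start();

            return result;
        }

        private static string Process(string input)
        {
            return input.ToUpperInvariant();
        }

        private static void WriteAuditLog(string input, string result)
        {
            Thread.Sleep(2000);   // stands in for slow logging/cleanup
            // ... write to the log store ...
        }
    }

One caveat worth noting: such threads outlive the request, but not an IIS application-pool recycle; if the app domain is torn down, background work still in flight is lost.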