To understand how IIS processes a request, I have used the diagram here: https://blogs.iis.net/tomwoolums/iis-7-0-http-request-processing
My questions:
Are steps 1 through 5 executed for every request, or only for the very first request?
Who actually sends the request to the application pool? WAS? Or does WAS only create/map the worker process for the request, while HTTP.sys sends the actual request to the worker process? If WAS sends the actual request, why would we need to configure HTTP.sys at all, given that the request is passed to WAS, which already knows the configuration?
Am I right in saying that the application pool passes the response directly to HTTP.sys?
Steps 1-5 are, generally speaking, executed once for multiple requests. When you change certain settings in the IIS configuration, the next request will trigger steps 1-5 again.
Requests only go through http.sys and the worker processes. WAS only manages the worker process lifecycle, which is why it is called the Windows Process Activation Service.
Correct.
I know that service workers can intercept HTTP requests coming from the UI/main thread. I would like to know whether a service worker can intercept HTTP requests coming from worker threads (web workers). The reason for this is to enable retries and recovery for when the tab/browser is eventually destroyed or stopped by the user or the operating system.
Yes, something created by new Worker(workerSrcUrl) can be a client of a service worker, and a service worker's fetch event listener can respond to network requests initiated inside of the Worker.
You should note a few things:
There is an upcoming change, currently planned for Chrome/Edge 93, to resolve some non-compliant behavior around Clients.matchAll({type: 'worker'}) and how scoping is applied. More details can be found in this status entry.
A Worker can't actually outlive the tab or browser that started it, so creating a Worker will not let you run code that continues past the lifetime of the tab that created it.
I have a web application on an IIS server.
I have a POST method that takes a long time to run (around 30-40 minutes).
After a period of time the application stops running (without any exception).
I set the idle timeout to 0 and it did not help.
What can I do to solve this?
Instead of doing all the work initiated by the request before responding at all:
Receive the request
Put the information in the request in a queue (which you could manage with a database table, ZeroMQ, or whatever else you like)
Respond with a "Request recieved" message.
That way you respond within seconds, which is acceptable for HTTP.
Then have a separate process monitor the queue and process the data on it (doing the 30-40 minute long job). When the job is complete, notify the user.
You could do this through the browser with a Notification or through a WebSocket or use a completely different mechanism (such as by sending an email to the user who made the request).
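If you happen to implement this in Python, here is a minimal sketch of the pattern using Flask for the endpoint and an rq queue backed by Redis as the job store. The /jobs route and the tasks.long_running_job function are made-up names for illustration; any other queue (a database table, ZeroMQ, and so on) works the same way.

```python
# Minimal accept-then-queue sketch (assumed names: /jobs, tasks.long_running_job).
from flask import Flask, jsonify, request
from redis import Redis
from rq import Queue

from tasks import long_running_job   # hypothetical module holding the 30-40 minute job

app = Flask(__name__)
queue = Queue(connection=Redis())    # assumes a Redis instance on localhost

@app.route("/jobs", methods=["POST"])
def enqueue_job():
    payload = request.get_json(force=True)
    # Enqueue the heavy work and return immediately; a separate `rq worker`
    # process picks the job up from Redis and runs it.
    queue.enqueue(long_running_job, payload, job_timeout=3600)
    return jsonify({"status": "request received"}), 202
```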
I have a web service that accepts POST requests. A POST request specifies a specific job to be executed in the background that modifies a database used for later analysis. The sender of the request does not care about the result and only needs to receive a 202 acknowledgment from the web service.
How it has been implemented so far:
The Flask web service gets the HTTP request, adds the necessary parameters to the task queue (rq workers), and returns an acknowledgement. A separate rq worker process listens on the queue and processes the job.
We have now switched to aiohttp, and realized that the web service can now schedule the actual job in its own event loop, by using asyncio.ensure_future().
This, however, blurs the line between the web server and the task queue. On the positive side, it eliminates the need to manage the rq workers.
Is this considered a good practice?
If your tasks are not CPU-heavy, then yes, it is good practice.
But if they are, then you need to move them to a separate service or use run_in_executor(). Otherwise your aiohttp event loop will be blocked by these tasks and the server will not be able to accept new requests.
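For illustration, here is a minimal aiohttp sketch of both options. The /jobs route and the two job functions are made up, and in a real service you would pick one scheduling style per task rather than using both in one handler.

```python
import asyncio
from aiohttp import web

async def io_bound_job(payload):
    # e.g. talking to a database or another service; fine to run on the event loop
    await asyncio.sleep(5)

def cpu_bound_job(payload):
    # heavy computation; would block the event loop if run directly in the handler
    return sum(i * i for i in range(10_000_000))

async def handle_post(request: web.Request) -> web.Response:
    payload = await request.json()

    # Not CPU-heavy: schedule the coroutine on the server's own event loop.
    asyncio.ensure_future(io_bound_job(payload))

    # CPU-heavy: hand it to an executor (or a separate service) so the loop stays free.
    loop = asyncio.get_running_loop()
    loop.run_in_executor(None, cpu_bound_job, payload)

    # Acknowledge right away; the sender does not wait for the result.
    return web.json_response({"status": "accepted"}, status=202)

app = web.Application()
app.add_routes([web.post("/jobs", handle_post)])

if __name__ == "__main__":
    web.run_app(app)
```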
I wanted to get an overview of the way HTTP.sys forwards requests to worker processes in IIS 7.0 and above. For that purpose I read the post at http://www.iis.net/learn/get-started/introduction-to-iis/introduction-to-iis-architecture. However, there are two points in this post that seem to contradict each other, and this is confusing me.
Point 1: The second bullet point under the section "Hypertext Transfer Protocol Stack (HTTP.sys)" is as follows.
Kernel-mode request queuing. Requests cause less overhead in context switching because the kernel forwards requests directly to the correct worker process. If no worker process is available to accept a request, the kernel-mode request queue holds the request until a worker process picks it up.
My conclusion according to this point is as follows:
HTTP.sys forwards request "directly" to worker process bypassing the WWW service. In case no worker process is available, HTTP.sys queues the request in kernel-mode request queue while WAS service starts a new worker process. This worker process then picks up requests from the kernel-mode queue on its own.
Point 2: The "Process Management" subsection under the section "Windows Process Activation Service (WAS)" is as follows.
WAS manages application pools and worker processes for both HTTP and non-HTTP requests. When a protocol listener picks up a client request, WAS determines if a worker process is running or not. If an application pool already has a worker process that is servicing requests, the listener adapter passes the request onto the worker process for processing. If there is no worker process in the application pool, WAS will start a worker process so that the listener adapter can pass the request to it for processing.
My conclusion according to this point is as follows:
HTTP.sys forwards requests to the worker process "through the WWW service", as that is the listener adapter. In case no worker process is available, HTTP.sys queues the request in the kernel-mode request queue while the WAS service starts a new worker process. The request from the kernel-mode queue is then picked up by the WWW service and forwarded to the worker process.
Could anyone please let me know which of my above two conclusions is correct? If both are incorrect, please let me know the right flow.
I don't think either is correct. I was also trying to figure out the exact workings and finally found the HTTP Server API: https://learn.microsoft.com/en-us/windows/desktop/http/http-version-2-0-architecture.
"HTTP.sys forwards request to the worker process "through WWW service" as that is the listener adapter." From the documentation above, and here https://learn.microsoft.com/en-us/windows/desktop/http/process-isolation, you can seen the HTTP Kernel Mode(http.sys) routes requests to queues that are associated with urls. The queue would have been configured when the app pool was created in iis mgr and the urls would have been associated with the queue when you create a website in IIS mgr and tie the website to a pool. http.sys puts stuff in queues. The app pool process processes stuff from the queues. No direct interaction between http.sys and worker process.
"In case no worker process is available,..." this is not true either from the Process Isolation documentation above:
Creator or controller process: The controller process can run with, or without, administrative privileges to monitor the health and configure the service. The controller process typically creates a single server session for the service and defines the URL groups under the server session. The URL group that a particular URL is associated with determines which request queue services the namespace denoted by the particular URL. The controller process also creates the request queue and launches the worker processes that can access the request queue.
Worker Process: The worker processes, launched by the controller process, perform IO on the request queue that is associated with the URLs they service. The web application is granted access to the request queue by the controller process in the ACL when the request queue is created. Unless the web application is also the creator process, it does not manage the service or configure the request queue. The controller process communicates the name of the request queue to the worker process and the worker process opens the request queue by name. Worker processes can load third party web applications without introducing security vulnerabilities in other parts of the application.
So the controller process creates the workers. In IIS this is without a doubt WAS; exactly how it detects when to create a process is not defined, but it is definitely the "controller process" described above.
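To make that relationship concrete, here is a toy Python model of the controller/worker split described above. It is not the real HTTP Server API (the real request queues live in the kernel and are opened by name, and the controller is WAS, not your code); a multiprocessing.Queue simply stands in for the request queue and all names are made up.

```python
# Toy model only: a shared queue between a "listener" and worker processes,
# with the controller (stand-in for WAS) creating both the queue and the workers.
import multiprocessing as mp
import time

def worker(name, request_queue):
    # Worker process: performs IO on the request queue it was given access to.
    while True:
        request = request_queue.get()
        if request is None:              # sentinel from the controller: shut down
            break
        print(f"{name} handling {request}")

def kernel_listener(request_queue, requests):
    # Stand-in for http.sys: it only puts requests into the queue,
    # it never talks to a worker process directly.
    for r in requests:
        request_queue.put(r)

if __name__ == "__main__":
    # Controller process: creates the request queue and launches the workers.
    request_queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(f"w{i}", request_queue)) for i in range(2)]
    for w in workers:
        w.start()

    kernel_listener(request_queue, ["GET /Foo", "GET /Bar", "POST /Foo/data"])

    time.sleep(1)                        # let the workers drain the queue
    for _ in workers:
        request_queue.put(None)          # one sentinel per worker
    for w in workers:
        w.join()
```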
Interestingly, in ASP.NET Core you can run your app on top of http.sys via Microsoft.AspNetCore.Server.HttpSys. Internally it uses this API to configure things: https://github.com/aspnet/HttpSysServer/blob/master/src/Microsoft.AspNetCore.Server.HttpSys/NativeInterop/HttpApi.cs.
This documentation cleared up a lot of confusion for me. I hope it helps.
I read this in the IIS 7.0 Resource Kit:
HTTP.sys maintains a request queue for each worker process. It sends the HTTP requests to the request queue for the worker process that serves the application pool where the requested application is located.
For each application, HTTP.sys maintains the URI namespace routing table with one entry. The routing table data is used to determine which application pool responds to requests from what parts of the namespace. Each request queue corresponds to one application pool. And application pool corresponds to one request queues within HTTP.sys and one or more worker processes.
The bold parts confused me.
My understanding is: HTTP.sys maintains a request queue for each worker process. An application pool can have one or more worker processes. So an application pool should also correspond to one or more request queues. Why only one in the bold sentence?
And by the way, could anyone give a clearer explanation of the URI namespace routing table? Some examples would be great.
Thanks.
To discuss a paragraph in a book, you should give more info.
This paragraph comes from the "IIS 7.0 Core Components" section, and the version at Safari Books Online is different from what you pasted:
HTTP.sys maintains a request queue for each worker process. It sends the HTTP requests it receives to the request queue for the worker process that serves the application pool where the requested application is located. For each application, HTTP.sys maintains the URI namespace routing table with one entry. The routing table data is used to determine which application pool responds to requests from what parts of the namespace. Each request queue corresponds to one application pool. An application pool corresponds to one request queue within HTTP.sys and one or more worker processes.
So the last sentence should be understood as,
An application pool corresponds to one request queue within http.sys.
An application pool corresponds to one or more worker processes.
Thus, your understanding of "HTTP.sys maintains a request queue for each worker process" is not correct. The correct statement would be "HTTP.sys maintains a request queue for each application pool". So no matter how many worker processes there are for a single application pool, they all serve requests from a single request queue in http.sys.
"For each application, HTTP.sys maintains the URI namespace routing
table with one entry"
I think it should be "for each application pool, HTTP.sys maintains the URI namespace routing table with one entry". This routing table makes it easier to dispatch requests (whose URLs are known) to the pools. It is very similar to a hash table.
The table can be constructed from the <sites> tag in applicationHost.config, by combining the sites, their bindings, their applications, and the application pool associations. There is no further information from Microsoft on the exact table structure.
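As a rough illustration of what such a table could look like, here is a toy Python model. The exact structure inside http.sys is undocumented; the entries below are made up, and real entries also take the site bindings (protocol, host, port) into account, which this sketch ignores.

```python
# Toy model: URL prefixes (application paths) mapped to application pools,
# i.e. to the single request queue that serves each pool.
ROUTING_TABLE = {
    "/":       "DefaultAppPool",   # root application of the site
    "/shop/":  "ShopAppPool",      # /shop application assigned to its own pool
    "/admin/": "AdminAppPool",
}

def route(path: str) -> str:
    """Pick the application pool (request queue) by longest matching prefix."""
    matches = [prefix for prefix in ROUTING_TABLE if path.startswith(prefix)]
    if not matches:
        raise LookupError(f"no application pool registered for {path}")
    return ROUTING_TABLE[max(matches, key=len)]

print(route("/shop/cart"))   # -> ShopAppPool
print(route("/about"))       # -> DefaultAppPool
```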
I am struggling with the same question... but I think the process is as follows:
Request intercepted by HTTP.sys
HTTP.sys makes initial contact with WAS
WAS reads ApplicationHost.config and passes the configuration to the WWW service.
The WWW service configures HTTP.sys (from this point on, HTTP.sys has set up the corresponding application pool queue, I guess).
HTTP.sys checks if a worker process is available (contacting WAS via the WWW service); if not, the request is stored in the application queue.
=> If a worker process is available, the request is now forwarded to the correct worker process.
If no worker process is available, the request is stored in the application queue. HTTP.sys will now notify WAS (via the WWW service) that a new request has been added to the queue. The WWW service will ask WAS for a worker process. WAS will spawn one and let the WWW service know that a worker process has been created. Now the WWW service can pass the request to the corresponding worker process (by adding it to its queue). Then the WWW service will let HTTP.sys know that a worker process has been spawned, so HTTP.sys can forward the next request immediately...
I am not completely sure if this is technically all correct, so if anyone could correct/confirm this, that would be great!
A listener needs to receive messages. For this, it needs to open a socket (or a pipe handle, or start an MSMQ read, and so on). However, in order to receive the proper messages, it needs to obtain the necessary addressing information from WAS. This is accomplished during listener startup. The protocol's listener adapter calls a function on the WAS listener adapter interface and essentially says, "I am now listening on the net.tcp protocol; please use this set of callback functions I'm handing you to tell me what I need to know." In response, WAS will call back with any configuration it has for applications that are set up to accept messages over the protocol in question. For example, a TCP listener would be informed that there were two applications (*:7777/Foo and *:7777/Bar) configured to use TCP. WAS also assigns to each application a unique listener channel ID used for associating requests with their destination applications.
The listener process uses the configuration information provided by WAS to build up a routing table, which it will use to map incoming requests to listener channel IDs as they arrive.
An application pool can have one or more worker processes
This is not correct: 1 App Pool = 1 Worker Process.