Does IIS Request Filtering load the full request before filtering? - iis

I'm looking into IIS Request Filtering by content length. I've set the max allowed content length:
appcmd set config /section:requestfiltering /requestlimits.maxallowedcontentlength:30000000
My question is about when the filtering occurs.
Will IIS first read the ENTIRE request into memory and then throw an error, or will it reject the request as soon as the threshold is reached?

The IIS Request Filtering module is processed very early in the request pipeline. Unwanted requests are quickly discarded before proceeding to application code which is slower and has a much larger attack surface. For this reason, some have reported performance increases after implementing Request Filtering settings.
Limitations
Request Filtering Limitations include the following:
Stateless - Request Filtering has no knowledge of application or session state. Each request is processed individually regardless of whether a session has or has not been established.
Request Header Only - Request Filtering can only inspect the request header. It has no visibility into the request body or any part of the response.
Basic Logic - Regular expressions and wildcard matches are not available. Most settings consist of establishing size constraints while others perform simple string matching.
maxAllowedContentLength
Request Filtering checks the value of the Content-Length request header. If the value exceeds the configured maxAllowedContentLength, the client will receive an HTTP 404.13 response.
The IIS 8.5 STIG recommends a value of 30000000 or less.
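For reference, a minimal sketch of reading and setting this value with the WebAdministration PowerShell module instead of appcmd; the filter path and the server-level PSPath below are the standard ones, and the value is in bytes.
Import-Module WebAdministration
# Read the current server-wide limit (in bytes)
Get-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.webServer/security/requestFiltering/requestLimits' -Name 'maxAllowedContentLength'
# Set it to the STIG-recommended maximum of 30000000 bytes
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.webServer/security/requestFiltering/requestLimits' -Name 'maxAllowedContentLength' -Value 30000000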
IISRFBaseline
The above information is based on my PowerShell module IISRFBaseline. It helps establish an IIS Request Filtering baseline by leveraging Microsoft LogParser to scan a website's content directory and IIS logs.
Many of the settings have a dedicated markdown file providing more information about the setting. The one for maxAllowedContentLength can be found at the following:
https://github.com/phbits/IISRFBaseline/blob/master/IISRFBaseline-maxAllowedContentLength.md
Update - in response to johnny-5's comment
The filtering happens immediately, which makes sense because Request Filtering only has visibility into the request header. This was confirmed via the following methods (a quick reproduction sketch follows the list):
Failed Request Tracing - the Request Filtering module responded to the request with an HTTP 413 Request entity too large.
http.sys event tracing - the request is accepted and handed off to the IIS website. Shortly thereafter is an entry showing the HTTP 413 response. The time between was not nearly long enough for the upload to complete.
Packet capture - Using Microsoft Network Monitor, the HTTP conversation shows IIS immediately responded with an HTTP 413 Request entity too large.
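If you want to reproduce this yourself, here is a rough sketch that posts a file just over the 30,000,000-byte limit. The URL is a placeholder for your own site; the request should fail almost immediately with the 413 described above rather than after a full upload.
# Create a ~31 MB file, just over the 30,000,000-byte limit
$path = Join-Path $env:TEMP 'oversize.bin'
[System.IO.File]::WriteAllBytes($path, (New-Object byte[] (31MB)))
try {
    # Placeholder URL -- point this at a site behind the Request Filtering limit
    Invoke-WebRequest -Uri 'http://localhost/upload' -Method Post -InFile $path -ContentType 'application/octet-stream'
} catch {
    # Expect a 413 back from the Request Filtering module
    $_.Exception.Response.StatusCode.value__
}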
The part you're rightfully concerned with is that IIS still accepts the upload regardless of file size. I found the limiting factor to be connectionTimeout which has a default setting of 120 seconds. If the file is "completed" before the timeout then an HTTP 413 error message is displayed. When a timeout occurs, the browser shows a connection reset since the TCP connection is destroyed by IIS after sending a TCP ACK/RST.
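For context, connectionTimeout lives in the site defaults (or per site) in applicationHost.config; a sketch of reading it and raising it with the WebAdministration module, assuming an elevated prompt:
Import-Module WebAdministration
# Default is 00:02:00 (120 seconds)
Get-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.applicationHost/sites/siteDefaults/limits' -Name 'connectionTimeout'
# Raise it to 6000 seconds, as in the test described below
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.applicationHost/sites/siteDefaults/limits' -Name 'connectionTimeout' -Value ([TimeSpan]::FromSeconds(6000))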
To test this further, connectionTimeout was increased to 6000 seconds. Then a large upload was submitted and the following IIS components were stopped one at a time. After each stop, the upload was checked via Network Monitor and confirmed to be still running.
Website
Application Pool (Stop-WebAppPool -Name AppPoolName)
World Wide Web Publishing Service (Stop-Service -Name W3SVC)
With all three stopped I verified there was no IIS process still running and yet bytes were still being uploaded. This leads me to conclude that the connection is maintained by http.sys. The fact that connectionTimeout is closely tied to http.sys seems to support this. I do not know if the uploaded bytes go to a buffer or are simply discarded. The event tracing messages didn't provide anything helpful in this context.
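If you want to collect the same http.sys event tracing, here is a sketch using logman against the Microsoft-Windows-HttpService provider; the session and file names are arbitrary.
# Start an ETW session for http.sys events
logman start httpsys-trace -p Microsoft-Windows-HttpService 0xFFFFFFFF -o httpsys-trace.etl -ets
# ...reproduce the oversized upload...
# Stop the session; the .etl can be opened in Event Viewer or converted with tracerpt
logman stop httpsys-trace -ets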
Leaving out the Content-Length request header will result in an RFC protocol error (i.e., HTTP 400 Bad Request) generated by http.sys, since the size of the HTTP payload isn't being declared.

Related

AJAX request errors out with no response

I have an application with Webix on the UI and Node.js on the server side.
From the UI, if I trigger a long-running AJAX request (e.g., processing 1000 records), the request errors out after approximately 1.5 minutes, though not consistently.
The error object contains no information about the reason for the failure, but since processing a smaller set of records works fine, I suspect a timeout.
In the developer console the request appears as Stalled and the response is empty.
Currently I can't just drop a request and poll every few seconds to see whether the processing has finished; I have to wait for the request itself to complete, but I am not sure how to do that, as the Webix forum doesn't seem to have any information on this beyond setting a timeout.
If setting a timeout is the way to go, what happens tomorrow when the request grows to 2000 records? I don't want to keep increasing the timeout.
Also, if I am left with no choice, how would I implement the polling? If I drop a request onto the server, there can be other clients triggering similar requests at the same time. How would I distinguish between requests originating from different clients?
I would really appreciate some help on this.
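One way to picture the poll-for-completion pattern being asked about: the server returns a job id for each submitted request, and that id is also what keeps different clients' requests apart. A rough sketch with hypothetical /jobs endpoints and response shape (shown in PowerShell for brevity; the same loop maps directly onto an AJAX client):
# Kick off the long-running job; the server is assumed to respond with { id, status }
$job = Invoke-RestMethod -Uri 'http://example.com/jobs' -Method Post -Body (@{ records = 1000 } | ConvertTo-Json) -ContentType 'application/json'
# Poll every few seconds using the returned job id until it is no longer processing
do {
    Start-Sleep -Seconds 5
    $status = Invoke-RestMethod -Uri "http://example.com/jobs/$($job.id)"
} while ($status.status -eq 'processing')
$status.status   # 'done' or 'failed', regardless of how long processing took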

Does Azure's HTTP request timeout of 4 minutes apply to multipart/form-data file uploads?

Azure apparently has a 4-minute timeout for HTTP requests before it kills the connection. This is not configurable in App Services:
https://social.msdn.microsoft.com/Forums/en-US/32b76114-67a4-4e6b-ac45-61b0f0a0829f/changing-the-4-minute-request-time-out-for-app-services?forum=AzureAPIApps
I have seen this first-hand in my application. I have a process that allows users to view files that exist on a network drive, select a subset of those files, and upload them to a third-party service. This happens via a POST request which sends the list of file names using a JSON content type. This operation can take a while, and I receive a timeout error at almost exactly 4 minutes.
I also have another process which allows users to drag and drop files directly into the web application; these files are posted to the server using content-type multipart/form-data and forwarded to the third-party service. This request never times out, no matter how long the upload takes.
Is there something about using multipart/form-data that overrides Azure's 4-minute timeout?
It probably does not matter, but I am using Node.
The timeout is actually 3m 50s (230 seconds) and not 4 minutes.
But note that it is an idle connection timeout, meaning that it only kicks in if there is no data flowing in the request/response. So it is strange that you would hit this if you are actively uploading files. I would suggest monitoring network traffic to see if anything is being sent. If it really goes 230s with no uploaded data, then there is probably some other issue, and the timeout is just a side effect.
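One low-effort way to check this from the client side is to time the JSON request and see whether it dies at roughly 230 seconds. The URL and body below are placeholders for the App Service endpoint described in the question.
$sw = [System.Diagnostics.Stopwatch]::StartNew()
try {
    # Placeholder endpoint and payload; -TimeoutSec keeps the client from giving up first
    Invoke-RestMethod -Uri 'https://yourapp.azurewebsites.net/api/upload' -Method Post -Body '{"files":["a.txt","b.txt"]}' -ContentType 'application/json' -TimeoutSec 600
} catch {
    "Failed after $([int]$sw.Elapsed.TotalSeconds)s: $($_.Exception.Message)"
}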

Acunetix Unable to connect to server

I am using the Acunetix tool to scan my website, but it gives an error that it cannot connect to the website. I am trying to connect to my localhost, but it keeps throwing this error:
04.11 10:53.48, [Error] Server "http://192.168.24.199/" is not responsive.
04.11 10:53.48, [Error] Response body length exceeds configured limit
It's a .NET website running on IIS.
Any suggestions will be appreciated.
What is happening here is that the HTTP response body length exceeds Acunetix's maximum configured limit.
To change this, navigate to Configuration > Scan Settings > HTTP Options and change the 'HTTP response size limit in kilobytes' to a larger value.
Note -- Having said this, I would look into why your app is returning a response body larger than 5 MB (Acunetix's default limit); it's not normal to have such large responses (Acunetix automatically ignores common file types like PDFs, images, spreadsheets...).
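A quick way to see how large the response actually is (using the host from the error above) before touching the Acunetix limit:
# Fetch the page and report the body size in kilobytes
$response = Invoke-WebRequest -Uri 'http://192.168.24.199/' -UseBasicParsing
'{0:N0} KB' -f ($response.RawContentLength / 1KB)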

How to set timeout in IIS 6 when ColdFusion is unresponsive

This may be relevant to platforms other than ColdFusion.
The IIS 6 log reports a "time-taken" much longer (30 minutes) than the 120 seconds set in Connection Timeout for several requests to a ColdFusion page.
I assume that ColdFusion was unresponsive at the time. I would like IIS to stop the request rather than wait this long.
Is there an IIS setting that would force this?
Not really, because IIS is no longer handling the request once it has been passed to ColdFusion. You could try playing with the application pool timeout and see if you can get that to throw an error.
This scenario can also be considered a slow HTTP DoS attack when caused by the client. IIS doesn't provide much protection against it (at least for a slow POST body) because Microsoft considers it a protocol bug, not an IIS weakness. In this case, though, I think it is your server doing it to itself.
Things to check:
You didn't mention whether it is the request that is slow or the server's response. You could try tweaking the MinFileBytesPerSec parameter if it's the response that is slow (see the adsutil sketch at the end of this answer). By default, IIS drops the connection if the client is downloading at less than 240 bytes per second.
Remember that the 120-second IIS timeout is an idle timeout. As long as the client sends or receives a few bytes within 120 seconds, that timer keeps getting reset.
You didn't mention if this long wait is happening on all pages or always in a few specific ones. It is possible that your CF script is making another external connection, e.g. CFQUERY, which is not subject to CF timeouts but to the timeouts of the server it is connecting to. Using the timeout attribute inside CFQUERY may prevent this.
You also didn't mention what your ColdFusion settings are. Maybe the IIS timeout setting is being ignored by the ColdFusion JRUN Connector ISAPI filter, so you should check the settings in ColdFusion Administrator. Especially check whether Timeout Requests After has been changed. If it's still at the default of 60 seconds, check your code to see if it has been overridden there, e.g. <cfsetting requestTimeOut = "3600">.
Finally, there is the matter of the peculiar behavior of CF's requestTimeout, which you might have to work around by replacing some cfscript tags with CFML.
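As mentioned in the first item above, on IIS 6 MinFileBytesPerSec is a metabase property. A sketch of inspecting and tweaking it with adsutil.vbs, assuming the default AdminScripts path; the 512 is an arbitrary example, and anything above the 240-byte/second default drops slow connections sooner.
# Show the current value (bytes per second)
cscript.exe C:\Inetpub\AdminScripts\adsutil.vbs GET w3svc/MinFileBytesPerSec
# Raise it so connections transferring more slowly than this are dropped sooner
cscript.exe C:\Inetpub\AdminScripts\adsutil.vbs SET w3svc/MinFileBytesPerSec 512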

Multiple requests triggered when using a browser but not when using Java HttpClient

Here is my application cloud environment.
I have an ELB with sticky sessions -> 2 HAProxy instances -> 1 machine which hosts my application on JBoss.
I am processing a request which takes more than 1 minute. I am logging IP addresses at the start of request processing.
When I process this request through the browser, I see a duplicate request being logged after 1 minute and a few seconds. If the first request routes through HAProxy1, the other routes through HAProxy2. In the browser I get an HttpStatus=0 response after about 2.1 minutes.
My hypothesis is that the ELB is triggering this duplicate request.
Kindly help me to verify this hypothesis.
When I use Apache HttpClient for the same request, I do not see a duplicate request being triggered. I also get an exception after 1 minute and a few seconds:
org.apache.http.NoHttpResponseException: The target server failed to respond
Kindly help me to understand what is happening over here.
-Thanks
By ELB I presume you are referring to Amazon AWS's Elastic Load Balancer.
Elastic Load Balancer has a built-in request timeout of 60 seconds, which cannot be changed. The browser has smart retry logic, hence you're seeing two requests; your server processes them as two separate, unrelated requests, which actually makes matters worse. With HttpClient, the timeout surfaces as the NoHttpResponseException and no retry is attempted.
The solution is either to improve the performance of the request on the server, or to have the initial request kick off a background task and then poll for completion with a supplemental request (possibly using AJAX).
