Handling maximum request length exceeded in ASP.NET Core - asp.net-core-2.0

Is there is a way to handle maximum request size exceeded in ASP.NET Core?
I am looking for a way to override the default behavior of ASP.NET Core, which is to return HTTP 404, and instead return a custom response for calls to API methods that exceed the maximum allowed size.

Related

Azure api management not hitting azure redis cache when request have more than 10 MB

When I try to configure Azure API Management with Azure Redis Cache, my data does not always get saved: if the GET request is up to 9 MB, everything is okay and the key and its value go to the Redis cache, but if the request exceeds 9-10 MB the key is not written to the Redis cache.
I have checked that the problem is definitely on the side of the api management because if I write to the cache directly from the console application, all the data is written even for 50MB.
My policy in API management :
Inbound
<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" allow-private-response-caching="true" must-revalidate="false" downstream-caching-type="public" caching-type="external" />
Outbound
<cache-store duration="1000" cache-response="true" />
If I change downstream-caching-type to none, requests up to 9 MB stop working; only requests up to about 2 MB succeed.
As commented by silent, adding it as a community wiki answer to help community members who might face a similar issue.
As per API Management limits, the maximum cached response size is 2 MB.
You can follow these best practices for handling larger response sizes:
Optimize your application for a large number of small values, rather than a few large values.
Increase the size of your VM to get higher bandwidth capabilities.
Increase the number of connection objects your application uses.
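The first point, favoring many small values over one large value, can be sketched as a generic chunking helper. This is a hypothetical illustration in plain Node.js; the key naming scheme and the 2 MB figure are assumptions for the example, not API Management behavior:

```javascript
// Hypothetical sketch: split a large value into pieces that each fit under a
// per-entry cache size limit (e.g. the 2 MB cached-response cap), storing the
// pieces under suffixed keys plus a count key for reassembly on read.
const CHUNK_BYTES = 2 * 1024 * 1024 - 1024; // stay safely under 2 MB

function splitForCache(key, value) {
  const buf = Buffer.from(value);
  const entries = [];
  for (let i = 0, n = 0; i < buf.length; i += CHUNK_BYTES, n++) {
    entries.push({ key: `${key}:chunk:${n}`, value: buf.subarray(i, i + CHUNK_BYTES) });
  }
  // Record how many chunk entries exist so the reader knows when to stop.
  entries.push({ key: `${key}:chunks`, value: Buffer.from(String(entries.length)) });
  return entries;
}

function joinFromCache(key, lookup) {
  const count = Number(lookup(`${key}:chunks`).toString());
  const parts = [];
  for (let n = 0; n < count; n++) parts.push(lookup(`${key}:chunk:${n}`));
  return Buffer.concat(parts);
}
```

The same split/join pair works against any key-value store (Redis included); only the `lookup` callback changes.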

Does IIS Request Content Filtering Load the full request before filter

I'm looking into IIS Request filtering by content-length. I've set the max allowed content length :
appcmd set config /section:requestfiltering /requestlimits.maxallowedcontentlength:30000000
My question is about when the filter will occur.
Will IIS first read ALL the request into memory and then throw an error, or will it raise an issue as soon as it reaches the threshold?
The IIS Request Filtering module is processed very early in the request pipeline. Unwanted requests are quickly discarded before reaching application code, which is slower and has a much larger attack surface. For this reason, some have reported performance increases after implementing Request Filtering settings.
Limitations
Request Filtering Limitations include the following:
Stateless - Request Filtering has no knowledge of application or session state. Each request is processed individually regardless of whether a session has or has not been established.
Request Header Only - Request Filtering can only inspect the request header. It has no visibility into the request body or any part of the response.
Basic Logic - Regular expressions and wildcard matches are not available. Most settings consist of establishing size constraints while others perform simple string matching.
maxAllowedContentLength
Request Filtering checks the value of the Content-Length request header. If the value exceeds maxAllowedContentLength, the client receives an HTTP 404.13.
The IIS 8.5 STIG recommends a value of 30000000 or less.
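Because only the header is inspected, the gate reduces to comparing the declared Content-Length against the limit before any body bytes are read. A minimal illustrative sketch in plain JavaScript (not actual IIS code; the function name and return shape are invented for illustration):

```javascript
// Illustrative sketch of a header-only content-length gate, mirroring what
// Request Filtering does: the declared Content-Length is compared against
// maxAllowedContentLength before any of the request body is read.
const MAX_ALLOWED_CONTENT_LENGTH = 30000000; // the IIS 8.5 STIG recommended ceiling

function checkContentLength(headers, limit = MAX_ALLOWED_CONTENT_LENGTH) {
  const declared = Number(headers['content-length']);
  if (!Number.isFinite(declared)) {
    // Undeclared payload size: http.sys rejects such requests with HTTP 400.
    return { allowed: false, status: 400 };
  }
  if (declared > limit) {
    // Rejected before the body is read; IIS surfaces this to the client
    // as 404.13 (tracing shows a 413 under the hood).
    return { allowed: false, status: 413 };
  }
  return { allowed: true, status: 200 };
}
```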
IISRFBaseline
The above information is based on my PowerShell module IISRFBaseline. It helps establish an IIS Request Filtering baseline by leveraging Microsoft Logparser to scan a website's content directory and IIS logs.
Many of the settings have a dedicated markdown file providing more information about the setting. The one for maxAllowedContentLength can be found at the following:
https://github.com/phbits/IISRFBaseline/blob/master/IISRFBaseline-maxAllowedContentLength.md
Update - #johnny-5 comment
The filtering happens immediately which makes sense because Request Filtering only has visibility into the request header. This was confirmed via the following methods:
Failed Request Tracing - the Request Filtering module responded to the request with an HTTP 413 Request entity too large.
http.sys event tracing - the request is accepted and handed off to the IIS website. Shortly thereafter is an entry showing the HTTP 413 response. The time between was not nearly long enough for the upload to complete.
Packet capture - Using Microsoft Network Monitor, the HTTP conversation shows IIS immediately responded with an HTTP 413 Request entity too large.
The part you're rightfully concerned with is that IIS still accepts the upload regardless of file size. I found the limiting factor to be connectionTimeout which has a default setting of 120 seconds. If the file is "completed" before the timeout then an HTTP 413 error message is displayed. When a timeout occurs, the browser shows a connection reset since the TCP connection is destroyed by IIS after sending a TCP ACK/RST.
To test this further the timeout was increased and set to connectionTimeout=6000. Then a large upload was submitted and the following IIS components were stopped one at a time. After each stop, the upload was checked via Network Monitor and confirmed to be still running.
Website
Application Pool (Stop-WebAppPool -Name AppPoolName)
World Wide Web Publishing Service (Stop-Service -Name W3SVC)
With all three stopped I verified there was no IIS process still running and yet bytes were still being uploaded. This leads me to conclude that the connection is maintained by http.sys. The fact that connectionTimeout is closely tied to http.sys seems to support this. I do not know if the uploaded bytes go to a buffer or are simply discarded. The event tracing messages didn't provide anything helpful in this context.
Leaving out the Content-Length request header results in an RFC protocol error (i.e. HTTP 400 Bad Request) generated by http.sys, since the size of the HTTP payload isn't declared.

Error: You've exceeded your current limit of 5 requests per second for query class

Could someone help me fix this problem without having to change my Cloudant plan?
If you are using the official Cloudant nodejs library, see the retry plugin which handles 429 errors.
Note that 429 retry handling is probably only suitable for development environments or for the small fluctuations in demand that exceed your capacity. Excessive use of 429 handling will result in a build up of 'back-pressure' in your application.
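If you are not using the official library, the retry-with-backoff idea can be sketched in a few lines. This is a generic illustration, not the plugin's actual API; the request function and its { statusCode } response shape are assumptions:

```javascript
// Generic sketch of 429 handling with exponential backoff. `request` is any
// async function returning a { statusCode } object; these names are
// assumptions for the example, not the Cloudant library's API.
async function withRetry429(request, { retries = 3, baseDelayMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await request();
    if (res.statusCode !== 429 || attempt >= retries) return res;
    // Back off exponentially: 200 ms, 400 ms, 800 ms, ...
    await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}
```

Note that a bounded retry count matters here: unbounded retries are exactly the back-pressure build-up warned about above.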

Acunetix Unable to connect to server

I am using the Acunetix tool to scan my website, but it gives an error that it cannot connect to the website. I am trying to scan my localhost, and it keeps throwing this error:
04.11 10:53.48, [Error] Server "http://192.168.24.199/" is not responsive.
04.11 10:53.48, [Error] Response body length exceeds configured limit
It's a .NET website running on IIS.
Any suggestions will be appreciated.
What is happening here is that the HTTP response body length exceeds Acunetix's maximum configured limit.
To change this, navigate to Configuration > Scan Settings > HTTP Options and change the 'HTTP response size limit in kilobytes' to a larger value.
Note -- having said this, I would look into why your app is returning a response body larger than 5 MB (Acunetix's default limit); it's not normal to have such large responses (Acunetix automatically ignores common file types like PDFs, images, spreadsheets...).

Optimizing a humbly large node.js response

I am responding with a large (10 MB) payload from node.js to an Akka spray.io based actor, and I am getting a chunk size error from spray/akka: [akka://spray-actor-system/user/IO-HTTP/group-0/3] Received illegal response: HTTP chunk size exceeds the configured limit of 1048576 bytes.
My node.js code simply sends the response in one res.end command. (It does so because the response is generated in a non-streamed way, so there was no "inherent" gain in streaming it further along, at least until a spray.io client side was added.)
I am wondering what the simplest way is to chunk the response using the node.js API, in case I prefer not to handle very large HTTP responses by increasing the spray.io size limit. Also, is there any performance downside to sending responses this size from node.js, i.e. does node.js block on the res.end operation?
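One simple approach, sketched below under the assumption that each res.write maps to at most one transfer-encoding chunk on the wire, is to write the buffer in fixed-size pieces before calling res.end. The 512 KB piece size is an arbitrary choice that stays below spray's 1048576-byte default limit:

```javascript
// Sketch: stream a large in-memory payload in pieces instead of a single
// res.end(bigBuffer) call, so each piece stays under the receiver's
// per-chunk limit. The 512 KB size is an assumption, not a requirement.
const PIECE = 512 * 1024;

function sendInPieces(res, buf, done) {
  let offset = 0;
  (function writeNext() {
    while (offset < buf.length) {
      const piece = buf.subarray(offset, offset + PIECE);
      offset += piece.length;
      // res.write returns false when the socket buffer is full;
      // resume on 'drain' instead of queueing the whole payload at once.
      if (!res.write(piece)) return res.once('drain', writeNext);
    }
    res.end(done);
  })();
}
```

On the second question: res.end does not block the event loop; it queues the final write and returns, with the actual socket I/O happening asynchronously. The cost of one huge write is memory pressure and a single oversized chunk, which the back-pressure loop above avoids.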

Resources