I have an endpoint that returns the list of students in a college. When I hit that endpoint, it gives a Response Too Large message.
I know that there is a size limit for POST body data, i.e. ~29 MB.
Is there any size limit for a response?
Related
We have a lot of web applications behind one Application Gateway. We see a problem with a couple of them when "Inspect request body size" is on and configured to its default size of 128 KB.
I would like a recommendation on the best way to solve it:
Turn it off?
Increase a Max body size?
Create additional Application Gateway?
Any ideas will be appreciated.
I understand that you are having an issue with Azure App Gateway when "Inspect request body size" is turned on and set to 128 KB, and that you want to know the best way to address it.
As per Azure WAF Request size limits:
The maximum request body size field is specified in kilobytes and controls overall request size limit excluding any file uploads. This field has a minimum value of 1 KB and a maximum value of 128 KB. The default value for request body size is 128 KB.
However, for CRS 3.2 (on the WAF_v2 SKU) and newer, these limits are as follows:
2 MB request body size limit
4 GB file upload limit
WAF also offers a configurable knob to turn the request body inspection on or off. By default, the request body inspection is enabled. If the request body inspection is turned off, WAF doesn't evaluate the contents of HTTP message body. In such cases, WAF continues to enforce WAF rules on headers, cookies, and URI. If the request body inspection is turned off, then maximum request body size field isn't applicable and can't be set. Turning off the request body inspection allows for messages larger than 128 KB to be sent to WAF, but the message body isn't inspected for vulnerabilities.
To change to CRS 3.2, go to WAF Policy > Managed Rules, change the rule set to 3.2, and hit save. Once that is done, change the maximum request body size to 2 MB and hit save.
Hope this helps. If you have any further questions, please do let us know and we will be glad to assist further. Thank you!
I am trying to use NestJS to develop an API. I am getting HTTP error code 431 when the bearer token passed as an HTTP header is too long (around 2400 characters), but it works when the bearer token is around 1200 characters. Is there any setting to increase the header size limit? I am using Node.js 12.
The HTTP 431 Request Header Fields Too Large response status code indicates that the server refuses to process the request because the request’s HTTP headers are too long. The request may be resubmitted after reducing the size of the request headers.
431 can be used when the total size of request headers is too large, or when a single header field is too large. To help those running into this error, indicate which of the two is the problem in the response body — ideally, also include which headers are too large. This lets users attempt to fix the problem, such as by clearing their cookies.
Servers will often produce this status if:
The Referer URL is too long
There are too many Cookies sent in the request
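As a rough way to see which of the two cases you're hitting, you can estimate the total header size the way a server sees it. This helper and its sample values are illustrative only, not part of any framework:

```javascript
// Approximate the on-the-wire size of a header set: each header is sent as
// "Name: value\r\n", so add 4 bytes per header for the separator and CRLF.
function headerBytes(headers) {
  return Object.entries(headers)
    .reduce((sum, [name, value]) => sum + name.length + value.length + 4, 0);
}

const headers = {
  authorization: 'Bearer ' + 'a'.repeat(2400), // ~2.4 KB token, as in the question
  cookie: 'session=' + 'b'.repeat(1000),
};
console.log(headerBytes(headers) < 8192); // true: still under Node's default 8 KB cap
```

If the total creeps past the server's limit, clearing cookies or shortening the token brings it back under.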
The below solution is not specific to nest.js but to any node.js server.
On running node --help, you'll see that one of the flags is:
...
--max-http-header-size=... set the maximum size of HTTP headers (default: 8KB)
...
This Node.js CLI flag can help:
--max-http-header-size=16384
It sets the maximum HTTP header size to 16 KB; you can set the flag to whatever value you need.
See this for reference.
The documentation states the maximum allowed value for this flag, so take care not to exceed it.
Using the MEAN stack, I have a specific API call that works well locally but fails on the production web server...but only when fetching the largest documents in my MongoDB collection.
I can use curl to get a proper response, but from a browser, making API calls to my server on the web, I get ERR_EMPTY_RESPONSE. Much trial and error has narrowed it down to specific very large documents in the collection. I can artificially remove data from those specific documents -- from anywhere in those documents, making them shorter -- and the request will succeed.
That leads me to think there is a limit on the amount of data that can be in the RESPONSE of a request sent back to a web browser. Is that correct?
Google shows me how to change the limit of a REQUEST to prevent DoS by doing something like this:
const express = require('express');
const bodyParser = require('body-parser');
const exp = express();
exp.use(bodyParser.json({ limit: '2mb' }));
...but it seems I'm butting up against the limit of the data that can be returned. I really do need all of the data that can be in a RESPONSE, not a REQUEST, so I'm hoping the result limit can be raised in some setting rather than restructuring my collections and making multiple API calls.
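For what it's worth, Express and Node impose no hard cap on response size; bodyParser's limit applies only to incoming request bodies. A minimal sketch (the sizes are arbitrary):

```javascript
// bodyParser.json({ limit: '2mb' }) caps incoming request bodies only.
// Building a response payload far larger than that is fine on the server side:
const payload = { data: 'x'.repeat(5 * 1024 * 1024) }; // ~5 MB of response data
const body = JSON.stringify(payload);
console.log(body.length > 2 * 1024 * 1024); // true: no response-side limit applies
// If the browser still sees ERR_EMPTY_RESPONSE, look at whatever sits in front
// of Node (proxy buffers, timeouts), not at Express itself.
```

In other words, an ERR_EMPTY_RESPONSE on large documents typically points at a reverse proxy or timeout closing the connection mid-response rather than at an Express setting.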
I have a working URL to an mp3, and I am trying to send the audio to Telegram by this URL. It works from time to time, but most of the time it gives this error:
"A request to the Telegram API was unsuccessful. The server returned HTTP 400
Bad Request. Response body:
[b'{"ok":false,"error_code":400,"description":"Bad Request: failed to get HTTP
URL content"}']"
And I can't understand why, because I've checked the size and type and everything is fine, and I can't find any other limitations in the Telegram API documentation. Does anyone know the reason for this error?
link to mp3 - http://data3.api.xn--41a.ws/vkp/cs9-5v4.userapi.com/p15/51bdb5ec5899ed.mp3
Check file size.
Provide Telegram with an HTTP URL for the file to be sent. Telegram
will download and send the file. 5 MB max size for photos and 20 MB
max for other types of content.
Post the file using multipart/form-data in the usual way that files
are uploaded via the browser. 10 MB max size for photos, 50 MB for
other files.
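Given those limits, a quick pre-check of the file's Content-Length before handing the URL to sendAudio can rule the size limit in or out. The function name and validation details here are illustrative, not part of any Telegram library:

```javascript
// Telegram will download at most 20 MB when you pass it an HTTP URL
// for non-photo content such as audio.
const TELEGRAM_URL_LIMIT = 20 * 1024 * 1024;

// Takes the Content-Length header value from a HEAD request to the mp3 URL.
function fitsTelegramUrlLimit(contentLengthHeader) {
  const size = Number(contentLengthHeader);
  return Number.isFinite(size) && size > 0 && size <= TELEGRAM_URL_LIMIT;
}

console.log(fitsTelegramUrlLimit('5242880'));  // true  (5 MB)
console.log(fitsTelegramUrlLimit('52428800')); // false (50 MB)
```

Note that "failed to get HTTP URL content" can also occur when the host serves the file inconsistently (redirects, missing Content-Length, throttling), which would explain it working only some of the time even when the size is fine.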
I use the Azure client lib to perform batch inserts into Azure Table Storage. Everything works fine, but when I sniff the requests using Fiddler I find that every response from Azure is about 90 KB. I've changed the Prefer header to "return-no-content", but the response is still over 60 KB (while the request is 50 KB).
Is there any way to reduce the response length? Ideally to something like 100 B (HTTP 202 or similar).
Per the OData V3 protocol format that the Azure Storage Table service uses, a batch response body must structurally match the corresponding batch request body one-to-one, such that the same multipart MIME message structure defined for requests is used for responses.
Setting echoContent to false (aka "Prefer: return-no-content") ensures that the entities themselves are not returned, as you also observed, thereby reducing the size of the response body.
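At the REST level, the difference is just the Prefer header sent with each insert operation. A sketch of the headers involved (this helper is illustrative, not part of the Azure SDK):

```javascript
// With "return-no-content" the Table service answers each insert with
// 204 No Content instead of echoing the inserted entity back in the
// batch response, which is what shrinks the response body.
function insertHeaders(echoContent) {
  return {
    Accept: 'application/json;odata=minimalmetadata',
    Prefer: echoContent ? 'return-content' : 'return-no-content',
  };
}

console.log(insertHeaders(false).Prefer); // 'return-no-content'
```

The remaining 60 KB you see is largely the mandatory multipart MIME framing: one response part per operation, which the OData batch format requires regardless of whether entity content is echoed.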