Suggestion about Azure Application Gateway

We have a lot of web applications behind one Application Gateway. We've noticed a problem with a couple of them when "Inspect request body size" is on and configured to its default size, 128 KB.
I would like a recommendation on the best way to solve it:
Turn it off?
Increase the max body size?
Create additional Application Gateway?
Any ideas will be appreciated.

I understand that you are having issues with Azure Application Gateway when "Inspect request body size" is turned on and set to 128 KB, and that you want to know the best way to address this.
As per Azure WAF Request size limits:
The maximum request body size field is specified in kilobytes and controls overall request size limit excluding any file uploads. This field has a minimum value of 1 KB and a maximum value of 128 KB. The default value for request body size is 128 KB.
However, for CRS 3.2 (on the WAF_v2 SKU) and newer, the limits are higher:
2 MB request body size limit
4 GB file upload limit
WAF also offers a configurable knob to turn the request body inspection on or off. By default, the request body inspection is enabled. If the request body inspection is turned off, WAF doesn't evaluate the contents of HTTP message body. In such cases, WAF continues to enforce WAF rules on headers, cookies, and URI. If the request body inspection is turned off, then maximum request body size field isn't applicable and can't be set. Turning off the request body inspection allows for messages larger than 128 KB to be sent to WAF, but the message body isn't inspected for vulnerabilities.
To change to CRS 3.2, go to WAF Policy > Managed Rules, change the rule set to 3.2, and hit save. Once you've done that, change the maximum request body size to 2 MB and hit save again.
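For automation, the same settings live on the WAF policy resource. Below is a rough sketch of the relevant fields as a TypeScript object shaped after the ARM schema for Application Gateway WAF policies; the property names and allowed maximums are assumptions to verify against the current schema before use:

// Sketch of the WAF policy fields involved, shaped after the ARM schema.
// Names and limits are assumptions; verify against the published schema.
const wafPolicyFragment = {
  policySettings: {
    state: "Enabled",
    mode: "Prevention",
    requestBodyCheck: true,       // the "Inspect request body" toggle
    maxRequestBodySizeInKb: 2000, // roughly 2 MB, available with CRS 3.2 on WAF_v2
    fileUploadLimitInMb: 4000,    // 4 GB file upload limit with CRS 3.2
  },
  managedRules: {
    managedRuleSets: [{ ruleSetType: "OWASP", ruleSetVersion: "3.2" }],
  },
};

console.log(JSON.stringify(wafPolicyFragment, null, 2));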
Hope this helps. If you have any further questions, please do let us know and we will be glad to assist further. Thank you!

Related

413 (Payload Too Large) error in SharedMap

When I convert an XML document to Fluid DDSes using SharedMap and SharedObjectSequence and set it in the Fluid container, I get the error 413 (Payload Too Large). Error response:
{"message":"request entity too large","expected":109452,"length":109452,"limit":102400,"type":"entity.too.large"}
I am trying this in https://github.com/microsoft/FluidHelloWorld. I am using tinylicious and localhost. It works fine for small xml files. I did a quick search through the code and didn't find where this is enforced.
Is it possible to increase this limit?
You're running up against a max request size in the web server rather than a Fluid Framework issue.
If you want to enable Tinylicious to handle larger request sizes, you would clone the Fluid Framework repository and modify the configuration of the Express service.
So go to tinylicious/src/app.ts and add app.use(express.json({limit: '50mb'})); (the old express.bodyParser middleware was removed in Express 4). This will raise the request limit. To use your modified Tinylicious, you'd compile and run the service locally.
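As a minimal sketch of what that change amounts to (the real tinylicious/src/app.ts wires up more middleware than this, so treat the shape as illustrative):

// Minimal Express app with a raised body size limit. Express 4.16+
// ships express.json/express.urlencoded; the old express.bodyParser
// middleware no longer exists.
import express from "express";

const app = express();
app.use(express.json({ limit: "50mb" }));
app.use(express.urlencoded({ limit: "50mb", extended: true }));

app.listen(3000, () => console.log("listening on port 3000"));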
Alternatively, you can break up the XML and set parts of the XML into the keys of the DDSs.
Without knowing your scenario, I'd lean towards breaking up the XML, because it'll lead to lower-latency updates. For instance, as you read in the XML, set the tag pairs in the SharedObjectSequence immediately, then keep reading and continue with the child objects. A sketch of this approach follows.
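A rough sketch of the chunking idea, assuming you already have a SharedMap instance from your container (the key names and chunk size here are illustrative, not part of the Fluid APIs):

// Illustrative: split a large XML string into fixed-size chunks and set
// each chunk under its own key, so no single payload exceeds the server's
// request size limit. `map` is assumed to be a Fluid SharedMap.
const CHUNK_SIZE = 64 * 1024; // stay under the 100 KB default limit

function storeXmlInChunks(map: { set(key: string, value: string): void }, xml: string): void {
  const chunkCount = Math.ceil(xml.length / CHUNK_SIZE);
  map.set("xml/chunkCount", String(chunkCount));
  for (let i = 0; i < chunkCount; i++) {
    map.set(`xml/chunk/${i}`, xml.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE));
  }
}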
You may want to open an issue on the repo, because exceeding the max request body size should produce a clearer error. The service should also accept requests up to the max Kafka message size, which is the true limiting factor.

Azure Function created with Python: HTTP request length is limited to 100 MB

We are using an Azure Function created with Python, and we are facing a "Request body too large" error while calling an API with a multipart file upload of around 200 MB.
We went through the support link given below and noticed that the HTTP request length is limited to 100 MB.
We also tried editing web.config through SSH, but we still face the same issue; after restarting the function app service, the web.config file gets reset to the 100 MB restriction.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp
Kindly provide a workaround to resolve this "Request body too large" issue.
On the GitHub issue that originally discussed increasing the size to 100 MB, this workaround was suggested for sizes greater than the max:
paulbatum commented on Dec 14, 2018: #two2tee You're not totally off. In the case of Functions you can't configure this (that's what this issue tracks). Our general recommendation is to switch to a flow where large files are uploaded to blob storage and are then processed by your functions.
There is also this open item on allowing content bodies over 100 MB that you might want to follow.
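As a sketch of that recommended flow, here is roughly what a client-side upload to Blob Storage looks like with the @azure/storage-blob SDK. The SAS URL, container, and file names are placeholders; in practice a small authenticated endpoint of yours would hand out the SAS:

// Sketch: upload the large file straight to Blob Storage instead of
// posting it through the Function's HTTP trigger; a blob-triggered
// function can then process it. The SAS URL is a placeholder.
import { BlockBlobClient } from "@azure/storage-blob";

async function uploadLargeFile(localPath: string): Promise<void> {
  const sasUrl = "https://<account>.blob.core.windows.net/uploads/upload.bin?<sas-token>";
  const blobClient = new BlockBlobClient(sasUrl);
  await blobClient.uploadFile(localPath); // SDK splits the file into blocks
}

uploadLargeFile("./bigfile.zip").catch(console.error);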

Is there a reason why I shouldn't set the size of a blob to the maximum size of 8 TB?

I am connected to Azure Blob Storage, and we need to upload files bigger than 256 MB.
I followed the documentation, created a PageBlob, and then uploaded files through Put Page.
The problem is that I don't know in advance how big the data is going to be, so I set it to the max size of 8 TB.
Is there a reason why I shouldn't do this?
As far as I know, the max size is only the maximum possible size for the blob and shouldn't cause any issues with memory.
Correct me if I'm wrong please.
Thanks
In my experience, and according to the REST API reference for Put Page, the API requires the Content-Length header to be set to the exact number of bytes being transmitted in the request body, not an approximate or possible value.
Otherwise, Azure Storage will refuse the request and respond with an error after it checks the validity of the request. I think that's the real reason for the errors you might get; see Common REST API Error Codes and Blob Service Error Codes.
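To illustrate with the @azure/storage-blob SDK (connection string and names are placeholders): creating the page blob with an 8 TB maximum size allocates no data, but each Put Page call must carry an exact, 512-byte-aligned byte count:

// Sketch: create a sparse page blob with a large maximum size, then
// write one page range whose length is exact and a multiple of 512.
import { BlobServiceClient } from "@azure/storage-blob";

async function writePages(): Promise<void> {
  const service = BlobServiceClient.fromConnectionString("<connection-string>");
  const container = service.getContainerClient("data");
  const pageBlob = container.getPageBlobClient("big.dat");

  // 8 TB maximum size; pages are sparse, so nothing is allocated yet.
  await pageBlob.create(8 * 1024 * 1024 * 1024 * 1024);

  const page = Buffer.alloc(4 * 512); // length must be a multiple of 512
  await pageBlob.uploadPages(page, 0, page.length); // exact Content-Length
}

writePages().catch(console.error);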

Large file truncated on Azure Verizon CDN - is there a timeout or request setting?

I uploaded a 295,437 KB file to a private Azure Blob. I connected Azure Verizon Premium CDN via an App Service that streams the file from the Blob. The file returned is truncated, at varying lengths, always less than the full file length - several tens of MB shorter.
I have checked the file size on the Blob (correct) and also tested the call that retrieves it from the App Service (correct).
So it appears to be on the CDN side. Is there some timeout or request limit I can set on the CDN to alleviate this?
Here is an example of a CDN call that truncates the file:
https://holojem-prod-files-cdn.azureedge.net/artifacts/11/283/332/0008%20Watch%20This%20Video.mp4?DYiNiOt7Q_9xGaZhscklXmcn0tlpDU649hQUD2n7WzgxfirhVQyzwch2-szLjDmUjAshEfe2ZsQ6ejEDR46QvHVKf5WneWFAz1vOQppOPfcBq3KCS11mZ3LpnfFGEzR9RtnsvKyvVSadMXuFy8cLPLYiy4S2boiJ0S-YhQdODqFY7_MbeiJB
And here is the underlying API (mine) that the CDN points to:
I get the full video if I hit that. It is 295,437 KB.
http://holojem-prod-cdn-api.azurewebsites.net/artifacts/11/283/332/0008%20Watch%20This%20Video.mp4?DYiNiOt7Q_9xGaZhscklXmcn0tlpDU649hQUD2n7WzgxfirhVQyzwch2-szLjDmUjAshEfe2ZsQ6ejEDR46QvHVKf5WneWFAz1vOQppOPfcBq3KCS11mZ3LpnfFGEzR9RtnsvKyvVSadMXuFy8cLPLYiy4S2boiJ0S-YhQdODqFY7_MbeiJB
Interestingly, the results are not consistent. When I hit the origin directly a second time from Postman, I got a file of 260,276 KB.
When I downloaded from the origin in Chrome, I got 260,744 KB the first time and 262,144 KB the second time.
The origin is an ASP.NET Core Web API.
Based on your CDN URL, I found that the CDN had compressed the file when I downloaded it.
You could run Fiddler to capture the request and confirm this.
According to this article: To check whether your files are being returned compressed, you need to use a tool like Fiddler or your browser's developer tools. Check the HTTP response headers returned with your cached CDN content. If there is a header named Content-Encoding with a value of gzip, bzip2, or deflate, your content is compressed.
So I suggest you first check the compression setting in the Azure portal.
For more details, you could refer to this article.
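If Fiddler isn't handy, a quick header check from Node 18+ (which ships a global fetch) looks roughly like this; note that some runtimes decompress responses and may adjust these headers, so a proxy like Fiddler remains the more reliable check. The URL is a placeholder for your CDN endpoint:

// Quick check of the CDN response headers for compression and length.
async function checkHeaders(url: string): Promise<void> {
  const res = await fetch(url, { method: "HEAD" });
  console.log("content-encoding:", res.headers.get("content-encoding"));
  console.log("content-length:", res.headers.get("content-length"));
}

checkHeaders("https://<your-cdn-endpoint>/<path-to-file>").catch(console.error);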
Update:
Using your two URLs, I downloaded both videos. The file from the website is slightly larger than the CDN's video.
I also compared the two files using mediainfo --fullscan; only the overall bit rate differs. One is 17.7 Mbps, the other is 17.6 Mbps, and both are two minutes long.
So I suspect something is wrong with the code in your website that streams the blob; I suggest you recheck it. If you still face the same issue, please post the relevant code and the blob video URL so we can reproduce the issue.

How to change the number of fields that can be posted in a form with IIS 7.5?

We've hit a problem with some forms in the admin portion of our web app. There are a handful of forms that contain a large number of fields (anywhere from one input field into the hundreds).
We've found that as these forms grow, there is a point where the server will throw 500 errors when a form is posted.
After running a test, I was able to find that the server can handle forms with 100 fields in them; once 101 or more fields are used, we get the errors.
We run ColdFusion, and we have determined that ColdFusion is not throwing this error. We never see it logged in ColdFusion, so we assume IIS throws the error before it even sends the request to the ColdFusion server.
I'm assuming there is some setting in IIS 7.5 where we can raise this limit. I've searched the web, but all I can find is how to raise the byte-size limits on this data, not any limit on the number of fields allowed.
So, am I right in assuming that this can be changed, and if so, how can it be done?
This is an issue introduced with ColdFusion security hotfix APSB12-06. While it is a ColdFusion error, people have reported receiving it in Tomcat as well, before it supposedly reaches the CF server.
There is a setting in neo-runtime.xml that defines this post parameter limit, and it defaults to 100.
The full notes are located here, but here is the short version.
This hotfix adds a new setting in ColdFusion, Post Parameter Limit, which limits the number of parameters in a POST request. The default value is 100. If a POST request contains more parameters than specified, the server doesn't process the request and throws an exception. This protects against DoS attacks using hash collisions. This setting is different from Post Size Limit (ColdFusion Administrator > Settings > Maximum size of post data). It isn't exposed in the ColdFusion Administrator console, but you can easily change the limit in the neo-runtime.xml file. See point 5 below.
To change postParametersLimit, go to {ColdFusion-Home}/lib for a Server installation, or {ColdFusion-Home}/WEB-INF/cfusion/lib for a Multiserver or J2EE installation. Open the file neo-runtime.xml and find the line:
<var name='postSizeLimit'><number>100.0</number></var>
After it, add the line below, changing 100 to the desired number:
<var name='postParametersLimit'><number>100.0</number></var>
CF10+ exposes the setting in the CF Admin settings page as "Maximum number of POST request parameters" under Server Settings > Settings.
On our 9.0.1 server, we increased the setting to 10000 and have seen no adverse effects.
I believe you are bumping up against a security feature of ColdFusion. What ColdFusion version are you running? In ColdFusion Security Hotfix APSB12-06 they introduced a fix to protect against DoS attack using Hash Collision. From that page:
This hotfix implements a new setting in ColdFusion, Post Parameter Limit. This limits the number of parameters in a post request. The default value is 100. If a post request contains more parameters than specified, the server will not process the request and throws an exception. This is done to protect against DoS attacks using Hash Collision. This setting is different from Post Size Limit (ColdFusion Administrator > Settings > Maximum size of post data). We are not exposing this setting in the ColdFusion Administrator console, but this limit can be easily changed in the neo-runtime.xml file. See point 5 below.
Also on that page are instructions on how to increase that limit. Basically you have to make a change in your neo-runtime.xml file.
Customers who want to change postParametersLimit: go to {ColdFusion-Home}/lib for a Server installation, or {ColdFusion-Home}/WEB-INF/cfusion/lib for a Multiserver or J2EE installation. Open the file neo-runtime.xml and find the line:
<var name='postSizeLimit'><number>100.0</number></var>
Below it, add the following line; you can change 100 to the desired number:
<var name='postParametersLimit'><number>100.0</number></var>
