Azure Function created with Python: HTTP request length is limited to 100 MB - python-3.x

We are using an Azure Function created with Python, and we are facing a "Request body too large" error while calling an API with a multipart file upload of around 200 MB.
While going through the support link given below, we noticed that the HTTP request length is limited to 100 MB.
We also tried editing web.config through SSH, but we still face the same issue; after restarting the function app service, the web.config file gets reset to the 100 MB restriction.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp
Kindly provide a workaround to resolve this "Request body too large" issue.

On the ticket that originally discussed increasing the size to 100 MB, it seems this workaround was suggested for sizes greater than the max:
paulbatum commented on Dec 14, 2018:
#two2tee You're not totally off. In the case of functions you can't configure this (thats what this issue tracks). Our general recommendation is to switch to a flow where large files are uploaded to blob storage which are then processed by your functions.
There is also this open item on allowing a content body over 100 MB that you might want to follow.
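For illustration, here is a minimal sketch of that recommended flow, assuming the Azure Functions v2 Python programming model and the azure-storage-blob package: a small HTTP-triggered function hands the client a short-lived SAS URL so the large file goes straight to Blob Storage, and a blob-triggered function picks it up afterwards. The account, container, and key handling below are placeholders, not anything from the original thread.

```python
import datetime
import logging

import azure.functions as func
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

app = func.FunctionApp()

@app.route(route="upload-url", auth_level=func.AuthLevel.FUNCTION)
def get_upload_url(req: func.HttpRequest) -> func.HttpResponse:
    """Hand the client a short-lived SAS URL so the large file goes
    straight to Blob Storage instead of through this function."""
    blob_name = req.params.get("name", "upload.bin")
    sas = generate_blob_sas(
        account_name="mystorageacct",                    # placeholder account
        container_name="uploads",                        # placeholder container
        blob_name=blob_name,
        account_key="<account-key>",                     # read from app settings in practice
        permission=BlobSasPermissions(create=True, write=True),
        expiry=datetime.datetime.utcnow() + datetime.timedelta(minutes=30),
    )
    url = f"https://mystorageacct.blob.core.windows.net/uploads/{blob_name}?{sas}"
    return func.HttpResponse(url)

@app.blob_trigger(arg_name="blob", path="uploads/{name}",
                  connection="AzureWebJobsStorage")
def process_upload(blob: func.InputStream):
    """Fires once the client finishes uploading; process the blob here."""
    logging.info("Processing %s (%s bytes)", blob.name, blob.length)
```

With this shape, the function itself never receives the 200 MB payload, so the 100 MB HTTP request limit no longer applies; the function only issues the SAS and reacts to the blob trigger.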

Related

Upload 1Gb file through Logic app using sftp-ssh to Azure File share

I am using a Logic App to upload a 1 GB file as below:
Trigger - When files are added or modified (properties only)
Action 1 - Get file content
Action 2 - Create file (Azure file share)
Up to 35 MB, all triggers and actions work fine. Even after the file uploaded to SFTP crosses 40 MB, the SFTP-SSH trigger and actions still work, but when the workflow moves to the second action, 'Create file', it fails with the error 'The specified resource may be in use by an SMB client'. When I look at the Azure file share storage account, I see filename.partial.lock being created. I modified the access policy as well, but the issue persists.
Logic Apps is not designed to upload or download large amounts of data from a source/destination; it is a workflow solution that you design to meet your business need. However, you can still use the chunking functionality in a Logic App to upload or download large files.
Please refer to:
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-handle-large-messages#set-up-chunking
To upload a large file, make sure you enable Allow chunking.
From your description, I suppose it should be a SharingViolation; you could check the error codes here.
In the official doc, there are two scenarios that produce a Sharing Violation error:
Sharing Violation Due to File Access
Client A opens the file with FileAccess.Write and FileShare.Read (denies subsequent Write/Delete while open).
Client B then opens the file with FileAccess.Write with FileShare.Write (denies subsequent Read/Delete while open).
Result: Client B encounters a sharing violation since it specified a file access that is denied by the share mode specified previously by Client A.
Sharing Violation Due to Share Mode
Client A opens the file with FileAccess.Write and FileShare.Write (denies subsequent Read/Delete while open).
Client B then opens the file with FileAccess.Write with FileShare.Read (denies subsequent Write/Delete while open).
Result: Client B encounters a sharing violation since it specified a share mode that denies write access to a file that is still open for write access.
These are the scenarios you need to consider. As another option, you could try using the REST API to upload the file and enable Allow chunking in the HTTP action.
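If you drop down from the Logic App HTTP action to code against the same service, the chunked upload to an Azure file share looks roughly like the sketch below. This is only an illustration, assuming the azure-storage-file-share Python package; the connection string, share, and file paths are placeholders.

```python
import os
from azure.storage.fileshare import ShareFileClient

CHUNK = 4 * 1024 * 1024          # 4 MB per range, the Azure Files per-request limit

# Placeholder connection string, share, and path.
file_client = ShareFileClient.from_connection_string(
    "<connection-string>", share_name="myshare", file_path="incoming/big.bin")

local_path = "big.bin"
size = os.path.getsize(local_path)

# Create the file at its full size first, then fill it range by range.
file_client.create_file(size=size)

offset = 0
with open(local_path, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        file_client.upload_range(data, offset=offset, length=len(data))
        offset += len(data)
```

Each upload_range call is an independent request of at most 4 MB, which is the same effect Allow chunking gives you in the designer.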
I think one more way to resolve the issue is a redesign, which could be a more scalable solution:
Use a Logic App to get a notification when a file is added or modified in the SFTP/FTP location.
Once the file is added, read the file path for that file.
Create a Service Bus message and send the file path as the message content.
Create a Service Bus queue-triggered Azure Function that listens to those messages (created in the previous step).
The Azure Function will read the file in chunks from SFTP using the file path.
This way you can read or write a file of more than 30 GB.
This solution will be more scalable, as Azure Functions can auto-scale on demand.
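As an illustration of the function side of that redesign, here is a rough sketch assuming the Azure Functions v2 Python programming model, paramiko as the SFTP client, and azure-storage-blob as the destination. The queue name, host, credentials, and container are all placeholders, not details from the original post.

```python
import json
import logging

import azure.functions as func
import paramiko                               # assumed SFTP client library
from azure.storage.blob import BlobClient

app = func.FunctionApp()

@app.service_bus_queue_trigger(arg_name="msg", queue_name="file-paths",
                               connection="ServiceBusConnection")
def copy_sftp_file(msg: func.ServiceBusMessage):
    # The Logic App is assumed to post a JSON body like {"path": "/outbox/big.bin"}.
    path = json.loads(msg.get_body().decode("utf-8"))["path"]

    # Placeholder host and credentials; use Key Vault / app settings in practice.
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="user", password="password")
    sftp = paramiko.SFTPClient.from_transport(transport)

    blob = BlobClient.from_connection_string(
        "<storage-connection-string>", "archive", path.rsplit("/", 1)[-1])

    # upload_blob reads the SFTP file object in chunks, so the whole file
    # is never held in memory at once.
    with sftp.open(path, "rb") as remote:
        blob.upload_blob(remote, overwrite=True)

    sftp.close()
    transport.close()
    logging.info("Copied %s to blob storage", path)
```

Because the file is streamed in chunks end to end, the practical limit becomes the function timeout rather than any single request size.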
Thank you all. The error 'The specified resource may be in use by an SMB client' was caused by the file share being mounted on two Linux virtual machines. We unmounted it from the Linux VMs and did a fresh single mount, and that fixed the error.
MSFT has confirmed in our discussion that 'Create file' (Azure file share) has a limitation of 100 or 300 MB. It only worked with SFTP because the data arrives in chunks. MSFT is working further to return a proper error message when the file size is beyond 100 or 300 MB. Below is a quote from the MSFT email:
"Thanks for the details , actually the Product team confirm to me that your flow is working by luck , it should not work with this sizes an they are working to implement the limits correctly to prevent files larger than maximum size which is maybe 300 or 100 “I am not yet sure ”
And this strange behavior is only happing when we are reading the content from SFTP chucked.
"

Is there a reason why I shouldn't set the size of a blob to the maximum size of 8 TB?

I am connected to Azure Blob storage and we need to upload files bigger than 256 MB.
I followed the documentation, created a page blob, and then uploaded files through Put Page.
The problem is that I don't know in advance how big the data is going to be, so I set it to the max size of 8 TB.
Is there a reason why I shouldn't do this?
As far as I know, the max size is only the maximum possible size for the blob and shouldn't cause any issues with memory.
Correct me if I'm wrong please.
Thanks
In my experience, and according to the REST API reference for Put Page, the API requires you to set the Content-Length header to the exact number of bytes being transmitted in the request body, not an approximate or possible value.
Otherwise, Azure Storage will refuse the request and respond with an error after it checks the validity of the request in the cloud. I think that is the real reason for the errors you might get; see Common REST API Error Codes and Blob Service Error Codes.
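To make the distinction concrete, here is a sketch using the azure-storage-blob Python SDK (the connection string, container, and blob names are placeholders): the 8 TB figure is only the declared size of the page blob, while every Put Page request still carries an exact Content-Length for the bytes it transmits.

```python
from azure.storage.blob import BlobClient

PAGE = 512                        # page blobs are written in 512-byte pages
MAX_SIZE = 8 * 1024 ** 4          # declare the 8 TB ceiling up front

# Placeholder connection string, container, and blob name.
blob = BlobClient.from_connection_string(
    "<connection-string>", "container", "data.bin")

# The declared size is just the blob's capacity; no data is sent yet.
blob.create_page_blob(size=MAX_SIZE)

offset = 0
with open("data.bin", "rb") as f:
    while True:
        chunk = f.read(4 * 1024 * 1024)               # up to 4 MB per Put Page
        if not chunk:
            break
        if len(chunk) % PAGE:                         # pad the last page to a 512 boundary
            chunk += b"\x00" * (PAGE - len(chunk) % PAGE)
        # The SDK sets Content-Length to exactly len(chunk) for this request.
        blob.upload_page(chunk, offset=offset, length=len(chunk))
        offset += len(chunk)

# Optionally shrink the blob to the bytes actually written.
blob.resize_blob(size=offset)
```

So the declared maximum size itself is not what triggers an error; a Put Page whose Content-Length does not match its body, or whose range is not 512-byte aligned, is.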

How to record the total size of a request in ASP.NET Core

I'm trying to log the total size of a request sent out from an ASP.NET Core website hosted in Azure. The goal is to be able to attribute some sense of what the data out cost is for specific functionality within the application. (Specifically we are looking at some functionality that uses Azure Blob Storage and allows downloads of the blobs but I don't think that's relevant to the question.)
I think the solution is some simple middleware that logs the sizes to a storage mechanism (I'm less concerned with that part), but I'm not sure what to put inside the middleware.
Does anyone know how you would "construct" the total size of the response being sent out of the web app?
I presume it would include HttpContext.Response.Body.Length, but I'm pretty confident that doesn't include the headers. I'm also not sure whether that's the compressed size of the response body or not.
Thanks

Large file truncated on Azure Verizon CDN - is there a timeout or request limit setting?

I uploaded a 295,437 KB file to a private Azure blob. I connected Azure Verizon Premium CDN via an app service that streams it from the blob. The file returned is truncated at varying lengths, less than the full file length, several tens of MB shorter.
I have checked the file size on the Blob (correct) and also tested the call that retrieves it from the App Service (correct).
So it appears to be on the CDN side. Is there some timeout or request limit I can set on the CDN to alleviate this?
Here is an example of a CDN call that truncates the file:
https://holojem-prod-files-cdn.azureedge.net/artifacts/11/283/332/0008%20Watch%20This%20Video.mp4?DYiNiOt7Q_9xGaZhscklXmcn0tlpDU649hQUD2n7WzgxfirhVQyzwch2-szLjDmUjAshEfe2ZsQ6ejEDR46QvHVKf5WneWFAz1vOQppOPfcBq3KCS11mZ3LpnfFGEzR9RtnsvKyvVSadMXuFy8cLPLYiy4S2boiJ0S-YhQdODqFY7_MbeiJB
And here is the underlying API (mine) that the CDN points to; I get the full video (295,437 KB) if I hit it directly:
http://holojem-prod-cdn-api.azurewebsites.net/artifacts/11/283/332/0008%20Watch%20This%20Video.mp4?DYiNiOt7Q_9xGaZhscklXmcn0tlpDU649hQUD2n7WzgxfirhVQyzwch2-szLjDmUjAshEfe2ZsQ6ejEDR46QvHVKf5WneWFAz1vOQppOPfcBq3KCS11mZ3LpnfFGEzR9RtnsvKyvVSadMXuFy8cLPLYiy4S2boiJ0S-YhQdODqFY7_MbeiJB
Interestingly, the results are not consistent. When I hit the origin directly a second time from Postman, I got a file of 260,276 KB.
When I downloaded from the origin in Chrome, I got 260,744 KB the first time and 262,144 KB the second time.
The origin is an ASP.NET Core Web API.
Looking at your CDN URL, I found that the CDN had compressed the file when I downloaded it.
You could run Fiddler to capture the request and inspect the response headers.
According to this article: to check whether your files are being returned compressed, you need to use a tool like Fiddler or your browser's developer tools. Check the HTTP response headers returned with your cached CDN content. If there is a header named Content-Encoding with a value of gzip, bzip2, or deflate, your content is compressed.
So I suggest you first check the compression settings in the Azure portal.
For more details, you could refer to this article.
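As a quick alternative to Fiddler, you could compare what the CDN and the origin report for the same object with a few lines of Python. The requests package is assumed; the URL constants are placeholders for the two links quoted in the question.

```python
import requests

# Paste the two URLs from the question here (placeholders below).
CDN_URL = "https://holojem-prod-files-cdn.azureedge.net/...<sas-token>"
ORIGIN_URL = "http://holojem-prod-cdn-api.azurewebsites.net/...<sas-token>"

for label, url in [("CDN", CDN_URL), ("origin", ORIGIN_URL)]:
    # stream=True returns as soon as the headers arrive, so the body is not downloaded here.
    # Asking for 'identity' rules out gzip/deflate changing the size on the wire.
    resp = requests.get(url, headers={"Accept-Encoding": "identity"}, stream=True)
    print(label, resp.status_code,
          "Content-Length:", resp.headers.get("Content-Length"),
          "Content-Encoding:", resp.headers.get("Content-Encoding"),
          "Accept-Ranges:", resp.headers.get("Accept-Ranges"))
    resp.close()
```

If the CDN response shows a Content-Encoding of gzip or deflate, or a noticeably smaller Content-Length than the origin, compression or a cut-off transfer is the place to look.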
Update:
Using your two URLs, I downloaded both videos. I found that the file from the website is a little larger than the CDN's video.
I also compared the two files using mediainfo --fullscan; only the overall bit rate differs. One is 17.7 Mbps and the other is 17.6 Mbps, and both are two minutes long.
So I guess something may be wrong with the code in your website that gets the blob stream; I suggest you recheck it. If you still face the same issue, please post the relevant code and the blob video URL so we can reproduce the issue.

Azure Block Blob PUT fails when using HTTPS

I have written a Cygwin app that uploads block blobs (using the REST API PUT operation) to my Azure storage account, and it works well for blobs of different sizes when using HTTP. However, using SSL (i.e. PUT over HTTPS) fails for blobs greater than 5.5 MB. Blobs smaller than 5.5 MB upload correctly. For anything greater, I find that the TCP session (as seen in Wireshark) reports a dwindling window size that goes to 0 once the aforementioned number of bytes has been transferred. The failure is repeatable and consistent. As a point of reference, PUT operations against my Google/AWS/HP cloud storage accounts work fine over HTTPS for various object sizes, which suggests the problem is not my client but is specific to the HTTPS implementation on the MS Azure storage servers.
If I upload the 5.5 MB blob as two separate uploads of 4 MB and 1.5 MB followed by a Put Block List, the operation succeeds as long as the two uploads use separate HTTPS sessions. Notice the emphasis on separate. That same operation fails if I attempt to maintain one HTTPS session across both uploads.
Any ideas on why I might be seeing this odd behavior with MS Azure? The same PUT operation over HTTPS works fine with the AWS/Google/HP cloud storage servers.
Thank you for reporting this and we apologize for the inconvenience. We have managed to recreate the issue and have filed a bug. Unfortunately we cannot share a timeline for the fix at this time, but we will respond to this forum when the fix has been deployed. In the meantime, a plausible workaround (and a recommended best practice) is to break large uploads into smaller chunks (using the Put Block and Put Block List APIs), thus enabling the client to parallelize the upload.
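In Python, that workaround looks roughly like the sketch below, using azure-storage-blob's stage_block and commit_block_list (the SDK calls that sit on top of Put Block and Put Block List). The connection string, container, and file names are placeholders.

```python
from azure.storage.blob import BlobBlock, BlobClient

CHUNK = 4 * 1024 * 1024                                 # 4 MB per Put Block

# Placeholder connection string, container, and blob name.
blob = BlobClient.from_connection_string(
    "<connection-string>", "container", "big.bin")

block_list = []
with open("big.bin", "rb") as f:
    index = 0
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        block_id = f"{index:08d}"        # block ids must all be the same length
        blob.stage_block(block_id=block_id, data=data)   # Put Block: one small request
        block_list.append(BlobBlock(block_id=block_id))
        index += 1

# Put Block List: commits the staged blocks as a single blob.
blob.commit_block_list(block_list)
```

Since every staged block is its own small request, the client avoids the single large PUT that triggered the stall, and the blocks can also be staged in parallel from several threads or connections before the final commit.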
This bug has now been fixed and the operation should now complete as expected.
