I'm trying to upload a file using Web API hosted as an Azure Website, and I'm getting a 400 Bad Request error.
Failed request tracing tells me that the module ManagedPipelineHandler is returning the 400 status with a notification of 128.
Googling suggests this is down to file size limits.
The MultipartFormDataStreamProvider is successfully saving the file into a temp folder on Azure, and I know the code "works on my machine", so I suspect it's a config issue (the files are under a meg at the moment).
I've tried changing the maxRequestLength to something quite high in the config, but that hasn't resolved the issue, and I can't really see anything to change for Web API itself.
Any advice would be great!
Ta
Ross
Avoid uploading files to local storage of the Azure Website. Instead, upload the file to centralized Azure blob storage.
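If it helps, here is a minimal sketch of that approach using the classic WindowsAzure.Storage client: it copies the temp files that MultipartFormDataStreamProvider has already written into a blob container. The connection string, container name, and helper class are placeholders, not part of your existing code.

```csharp
using System.IO;
using System.Net.Http;
using Microsoft.WindowsAzure.Storage;        // "WindowsAzure.Storage" NuGet package
using Microsoft.WindowsAzure.Storage.Blob;

public static class UploadHelper
{
    // Copies the temp files written by MultipartFormDataStreamProvider into a
    // block blob container, so nothing has to stay on the website's local disk.
    public static void CopyToBlobStorage(MultipartFormDataStreamProvider provider,
                                         string storageConnectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("uploads"); // placeholder name
        container.CreateIfNotExists();

        foreach (MultipartFileData fileData in provider.FileData)
        {
            // Use the original client-side file name for the blob, not the temp name.
            string blobName = fileData.Headers.ContentDisposition.FileName.Trim('"');
            CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

            using (FileStream stream = File.OpenRead(fileData.LocalFileName))
            {
                blob.UploadFromStream(stream);
            }
        }
    }
}
```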
I have a requirement to allow users to download files through Azure Blob Storage. I am not supposed to expose the blob storage or generate a SAS for a file and expose it to the end user. So for this purpose I have used API Management, and in the inbound policy I am generating the SAS, forming the complete URL for the blob download, and setting it as the backend service.
E.g., after the backend service URL is formed it will look like this:
https://myblobstorage.blob.core.windows.net/container/file.zip?sv=2018-03-28&sr=b&sig=fceSGjsjsuswsZk1yv0Db7EYo%3D&st=2020-02-14T12%3A36%3A13Z&se=2020-03-15T12%3A41%3A13Z&sp=r
I am able to download files of 14 GB through API Management with a throughput of 10 MBPS. But I also want to download a file that is 200 GB in size. When I try to download this file, the download is initiated and I am able to download some content, but after a while it fails with the error below. During the download the maximum throughput achieved is 10 MBPS.
When I check the App Insights log for this failure, I see the following error - BackendConnectionFailure: at transfer-response, Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. It seems this error means there was a problem at blob storage, but it does not state exactly what it could be.
If I use the actual SAS URL that is generated by API Management and download the file directly, the download completes with a much higher throughput of 90 MBPS.
I have not set any bandwidth limit or download limit using a policy in APIM.
I am trying to check whether there is any default setting that is preventing this file from being downloaded, either on the blob side or in APIM, and also trying to figure out why the throughput is so low when I download the file through APIM.
Note: I am using an Azure VM with a good configuration and using curl to test my API.
I am trying to upload an application to a newly created Azure Batch account from the portal. I followed the steps below:
1) Created an Azure Batch account.
2) Created a .zip of the application exe on my local desktop.
3) Went to the application options.
4) Clicked on add applications.
5) Gave the application id, version and application package path by selecting the .zip from my local machine.
6) Clicked on submit.
I got different errors:
ajaxExtended call failed
Upload Error for ffmpeg.zip
Upload block blob to blob store failed. Details: StatusCode = 201, StatusText = Created.
This happened to me as well, and like Phil G, I noticed that a message was showing up in the F12 developer tools saying 'the auto storage account keys are invalid'. However, they were valid.
The problem was that I had turned off 'allow access from all networks' under the firewall and network configuration. Changing this back to 'allow access from all networks' worked, at the cost of some security.
If relevant, I'm using a cluster with public access disabled, and user subscription pool allocation mode.
Actually, when we upload a .zip file in this case it fails; it's better to use Azure Batch Explorer, which is a desktop application.
https://azure.github.io/BatchExplorer/
Then you can easily add a package/application to your Batch account.
I was also getting the same error when uploading a file to a blob container from the Azure portal, so I used Microsoft Azure Storage Explorer to upload and download the files.
I had a slightly different error and the message was very vague:
Upload Error for ffmpeg-3.4-win64-static.zip
File Upload encountered an unexpected error during upload.
Batch Explorer also failed to upload the file.
By looking at the network traffic in my browser I saw that the POST request received a 200 success code, but looking inside the response JSON I saw the detailed error:
HTTP Status 409 - The auto storage account keys are invalid, please sync auto storage keys.
I'd changed them a day ago, and had successfully used the new ones in a batch app, but in order for the batch account to automatically upload the application to the storage account the keys needed to be synchronized.
Quick fix was to sync the keys and all was good.
I am trying to upload a file to Azure Data Lake using the Azure Data Lake Upload File action of Logic Apps. It works fine for small files of about 20 MB, but files of 28 MB or greater fail with status code 413 - Request Entity Too Large.
I have also enabled chunking in the Upload File action. Is there any solution for this?
Thanks for the response, George.
I have got a workaround. My scenario involves getting a file from SharePoint Online and uploading it to Azure Data Lake. In the earlier setup, which had the above issue, I was using the SharePoint trigger "When a file is created or modified in a folder", which returns the file content, to get the file from SharePoint, and the Data Lake Upload File action to upload it to Azure Data Lake. This setup was failing for files larger than 27 MB (Request Entity Too Large - 413) in the Upload File action even when chunking was enabled on it.
After some troubleshooting, I found a workaround that uses another SharePoint trigger, "When a file is created or modified in a folder (properties only)", which returns metadata instead of the file content. After getting the metadata I used the Get File Content SharePoint action to get the file content and upload it to Azure Data Lake, which worked fine.
Logic Apps has limits on message size; for the Logic Apps message size limit, see Logic Apps limits and configuration.
However, actions that support chunking can access message content beyond that limit, so you just need to turn Allow chunking on.
I tested with a 40 MB blob file and it succeeded. For more information you could refer to this doc: Handle large messages with chunking in Azure Logic Apps. Hope this helps.
I am planning to host a simple node.js app on the Web Sites tier of Azure.
The application uses a .pem file to sign responses. The issue is that I'd like to deploy the application by simply pushing the git repo, but I don't want to include the .pem file in that repo because it seems that would be a big security issue.
Is there a way I can manually upload one file? What's the best way to store a .pem file on Windows Azure? What are common ways to handle situations like this?
This question is a bit open-ended, as I'm sure there are several viable ways to securely transfer a file.
Having said that, from an Azure-specific standpoint: you should be able to upload a file to your Web Site via FTP. Also, you could push a file to a specific blob and have your app check periodically for files in that blob. To upload (or download later), you'd need the storage account's access key, and as long as you're the only one with that key, you should be OK. When uploading from outside of Azure, you can connect to storage over SSL, further securing the upload.
While you could probably use a queue (either Storage or Service Bus) with the .pem file as the payload of a message that your node app would monitor, you'd need to ensure that the file fits within the message size limits (64 KB for an Azure Storage queue, 256 KB for a Service Bus queue).
I have an application that is deployed on Windows Azure. The application has a reporting part, and the reports work as follows:
1) The application generates the report as a PDF file and saves it in a certain folder in the application.
2) I have a PDF viewer in the application that takes the URL of the file and displays it.
As you know, in Windows Azure I will have several VMs handled through a load balancer, so I cannot ensure that the request in step 2 will go to the same VM as in step 1, and this causes a problem for me.
Any help is very appreciated.
I know that I can use BLOB storage, but this is not the problem.
The problem is that after creating the file on a certain VM, I give the PDF viewer the URL of the PDF file as "http://..../file.pdf". This will generate a new request that I cannot control, and I cannot know which VM will serve it, so even if I saved the file in the BLOB it would not solve my problem.
As in any farm environment, you have to consider saving files in storage that is common to all machines in the farm. In Windows Azure, that common storage is Windows Azure Blob Storage.
You have to make some changes to your application so that it saves the files to Blob storage. If these are public files, then you just mark the blob container as public and provide the full URL of the file in blob storage to the PDF viewer.
If your PDF files are private, you have to mark your container as private. The second step is to generate a Shared Access Signature URL for the PDF and provide that URL to the PDF viewer.
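For illustration, a rough sketch of both steps with the classic WindowsAzure.Storage client: upload the generated PDF to a private container and hand back a short-lived, read-only URL for the viewer. The container name, expiry window, and method names are just placeholders.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;        // "WindowsAzure.Storage" NuGet package
using Microsoft.WindowsAzure.Storage.Blob;

public static class ReportPublisher
{
    // Uploads a generated PDF to a private container and returns a time-limited,
    // read-only URL that can be handed straight to the PDF viewer.
    public static string UploadAndGetReadUrl(string storageConnectionString,
                                             string localPdfPath, string blobName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("reports"); // placeholder name
        container.CreateIfNotExists();       // containers are private unless made public

        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        using (var stream = System.IO.File.OpenRead(localPdfPath))
        {
            blob.UploadFromStream(stream);
        }

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5), // allow for clock skew
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        };

        // Blob URI + SAS query string: the viewer downloads straight from blob storage,
        // bypassing the load-balanced VMs entirely.
        return blob.Uri + blob.GetSharedAccessSignature(policy);
    }
}
```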
Furthermore, while developing you can explore your Azure storage using any of the (freely and not so freely) available tools for Windows Azure Storage. Here are some:
Azure Storage Explorer
Azure Cloud Storage Studio
There are a lot of samples showing how to upload files to Azure Storage. Just search with your favorite search engine, or check out these resources:
http://msdn.microsoft.com/en-us/library/windowsazure/ee772820.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx
http://wely-lau.net/2012/01/10/uploading-file-securely-to-windows-azure-blob-storage-with-shared-access-signature-via-rest-api/
The Windows Azure Training Kit has a great lab named "Exploring Windows Azure Storage".
Hope this helps!
UPDATE (following question update)
The problem is that after creating the file on a certain VM, I give the PDF viewer the URL of the PDF file as "http://..../file.pdf". This will generate a new request that I cannot control, and I cannot know which VM will serve it, so even if I saved the file in the BLOB it would not solve my problem.
Try changing your logic a bit, and follow my instructions. When your VM creates the PDF, upload the file to a blob. Then give the full blob URL of your PDF file to the PDF viewer. That way the request will not go to any VM, but straight to the blob. The full blob URL will be something like http://youraccount.blob.core.windows.net/public_files/file.pdf
Or am I missing something? As I understand it, your process flow is as follows:
User makes a special request which would cause PDF file generation
File is generated on the server
Full URL to the file is sent back to the client so that the client's PDF viewer can render it
If this is the flow, then with the suggested changes it will look like the following:
User makes a special request which would cause PDF file generation
File is generated on the server
File is uploaded to a BLOB storage
Full URL for the file in the BLOB is returned to the client, so that it can be rendered on the client.
What is not clear? Or what is different in your process flow? I do exactly the same for on-the-fly report generation and it works quite well. The only difference is that my app is Silverlight-based and I force file download instead of displaying it inline.
An alternative approach is not to persist the file at all.
Rather, generate it in memory, set the content type of the response to "application/pdf" and return the binary content of the report. This is particularly easy if you're using ASP.NET MVC, but you can use an HttpHandler instead. It is a technique I regularly use in similar circumstances (though lately with Excel reports rather than PDF).
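For example, in ASP.NET MVC the action can return the bytes directly. This is only a sketch; GenerateReportPdf stands in for whatever PDF library you use, and the controller/action names are made up:

```csharp
using System.Web.Mvc;

public class ReportsController : Controller
{
    // Streams the report straight back to the browser; nothing is written to disk,
    // so it does not matter which load-balanced instance handles the request.
    public ActionResult MonthlyReport(int id)
    {
        byte[] pdfBytes = GenerateReportPdf(id); // placeholder for your PDF generation

        // The third argument forces a download; omit it to let the browser display inline.
        return File(pdfBytes, "application/pdf", "report-" + id + ".pdf");
    }

    private byte[] GenerateReportPdf(int id)
    {
        // Build the PDF in memory with your reporting/PDF library of choice.
        throw new System.NotImplementedException();
    }
}
```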
The usefulness of this approach does depend on how you're generating the PDF, how big it is and what the load is on your application.
But if the report is to be served just once, persisting it merely so that the browser can make another request to retrieve it is wasteful (and you have to provide the persistence mechanism).
If the same file is to be served multiple times and it is resource-intensive to create, then it makes sense to persist it.
You want to save your PDF to storage that is centralized and persistent; a VM's hard drive is neither. Azure Blob Storage is likely the simplest and best solution. It is dirt cheap to store and access, and the API for storing and accessing files is very simple.
There are two things you could consider.
Windows Azure Blob + Queue Storage
Blob Storage is a cost-effective way of storing binary data and sharing that information between instances. You would most likely use a worker role to create the report, which would store the report to Blob Storage and drop a "completed" message on the queue.
Your web role instance could monitor the queue looking for reports that are ready to be displayed.
It would be similar to the concept used in the Windows Azure Guest Book app.
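A bare-bones sketch of that hand-off with the classic storage client; the queue name and the message format (just the blob name) are made up for illustration:

```csharp
using Microsoft.WindowsAzure.Storage;        // "WindowsAzure.Storage" NuGet package
using Microsoft.WindowsAzure.Storage.Queue;

public static class ReportQueue
{
    // Worker role side: after uploading the finished report to blob storage,
    // drop a small "it's ready" message (just the blob name) onto the queue.
    public static void AnnounceReport(CloudStorageAccount account, string blobName)
    {
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("completed-reports"); // placeholder name
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(blobName));
    }

    // Web role side: poll the queue; a non-null message means a report is ready.
    public static string TryGetCompletedReport(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("completed-reports");
        queue.CreateIfNotExists();

        CloudQueueMessage message = queue.GetMessage();
        if (message == null)
            return null;                     // nothing ready yet

        queue.DeleteMessage(message);        // remove it so it is not processed twice
        return message.AsString;             // blob name of the finished report
    }
}
```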
Windows Azure Caching Service
Similarly (and much more expensively) you could share the binary using the Caching Service. This gives you a common layer between your VMs in which to store things; however, you won't be able to provide a URL to the PDF, so you'd have to download the binary and either use an HttpHandler or change the content type of the response.
This would be much harder to implement, very expensive to run, and is not guaranteed to work in your scenario. I'd still suggest blobs over any other means.
Another option would be to implement a sticky session handler of your own. Take a look at:
http://dunnry.com/blog/2010/10/14/StickyHTTPSessionRoutingInWindowsAzure.aspx