I am running an Express website on an Azure Website instance (note I say Azure Website, not Azure Web Role).
Initially, uploading large files failed with an HTTP 500 error. After much research, I found that the fix is to manually raise the <requestLimits maxAllowedContentLength="xxxxxxx" /> parameter in the web.config file. I increased that value to 1 GB and large files started to upload successfully.
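For reference, the relevant part of the web.config ends up looking roughly like this (the value is given in bytes, so 1073741824 is 1 GB):

    <configuration>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- value is in bytes: 1073741824 bytes = 1 GB -->
            <requestLimits maxAllowedContentLength="1073741824" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>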
However, when I increase maxAllowedContentLength to something much larger (say, 5 GB or 10 GB), the website no longer starts up at all. It looks like there is a hard-coded limit on how large this parameter can be.
Does anyone have a link to documentation where Microsoft specifies the maximum value of this parameter for an Azure Website, or any pointers on how to get files of up to 10 GB uploaded?
maxAllowedContentLength is a uint, which has a maximum value of 4,294,967,295, so the effective limit is 4 GB. If you want to upload larger amounts of data, you will have to use chunked transfer encoding.
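As a rough sketch of what that can look like from a Node client (the host, endpoint path and file name below are placeholders, not anything from your app): if you stream the request body without setting a Content-Length header, Node sends it with Transfer-Encoding: chunked.

    // Minimal sketch: host, path and file name are placeholders.
    import * as fs from "fs";
    import * as http from "http";

    const req = http.request(
      {
        host: "example.azurewebsites.net", // placeholder
        path: "/upload",                   // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/octet-stream" },
      },
      (res) => console.log("status:", res.statusCode)
    );

    // No Content-Length is set, so the body goes out chunked.
    fs.createReadStream("large-file.bin").pipe(req);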
We are experimenting with using Blazor WebAssembly together with Angular. It works nicely, but Blazor requires a lot of DLLs to be loaded, so we decided to store them in Azure Blob Storage and serve them through the Microsoft CDN on Azure.
When we check the average latency as users start working, it shows values between 200-400 ms. But the maximum latency values jump to 5-6 minutes.
This happens for our usual workload of 1k-2k users over the course of 1 hour. If they don't have the Blazor files cached locally yet, that can be over 60 files per user requested from the CDN.
My question is whether this is expected behaviour, or whether we might have a bad configuration somewhere.
I mention Blazor WASM just in case; I'm not sure whether the problem is specific to the way these files are loaded, or whether it is simply the large number of fetched files.
Thanks for any advice in advance.
I did check whether the files are served from cache, and the response headers suggest they are: x-cache: TCP_HIT. The byte hit ratio on the CDN profile also looks fine: mostly 100%, and it never falls below 65%.
The platform I'm working on involves a client (ReactJS) and a server (Node.js, Express), of course.
The major feature of this platform involves users uploading images constantly.
Everything has been set up successfully using multer to receive images as form data on my API server, and now it's time to create an "image management system".
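For context, the receiving end currently looks roughly like this (the route path, field name and size limit below are illustrative placeholders, not the actual values):

    // Rough sketch of the multer setup described above; names and the size
    // limit are placeholders.
    import express from "express";
    import multer from "multer";

    const app = express();

    // memoryStorage keeps the upload in a Buffer; limits.fileSize makes multer
    // reject anything over ~5 MB instead of buffering an arbitrarily large body.
    const upload = multer({
      storage: multer.memoryStorage(),
      limits: { fileSize: 5 * 1024 * 1024 },
    });

    app.post("/images", upload.single("image"), (req, res) => {
      // req.file.buffer would be forwarded to the CDN (e.g. Cloudinary) here.
      res.json({ size: req.file?.size });
    });

    app.listen(3000);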
The main problem I'll be tackling is the unpredictable file size coming from users. The files are images, and their size depends on the user's OS, e.g. users taking photos versus users taking screenshots.
The first solution is to determine a maximum file size and transport the image to the API server using a compression algorithm. When the backend receives it successfully, the image is uploaded to a CDN (Cloudinary) and the link is stored in the database along with the other related records.
The second option, which I'm strongly leaning towards, is shifting this "upload to CDN" step to the client side: the client connects to Cloudinary directly, grabs the secure link afterwards, and inserts it into the JSON that is sent to the server.
This eliminates the problem of grappling with file sizes, which is progress, but I'd like to know whether it is good practice.
Restricting the file size is possible when using the Cloudinary Upload Widget for client-side uploads.
You can include the 'maxFileSize' parameter when calling the widget, setting its value to, for example, 500000 (the value is given in bytes).
https://cloudinary.com/documentation/upload_widget
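A minimal client-side sketch, assuming the widget script is already loaded on the page and using placeholder cloud name / upload preset values:

    // Sketch only: cloud name and upload preset are placeholders.
    declare const cloudinary: any; // provided by the upload widget script

    const widget = cloudinary.createUploadWidget(
      {
        cloudName: "my-cloud",           // placeholder
        uploadPreset: "unsigned_preset", // placeholder
        maxFileSize: 500000,             // reject files over ~500 KB client-side
      },
      (error: unknown, result: any) => {
        if (!error && result && result.event === "success") {
          // result.info.secure_url is the link to store with the record
          console.log(result.info.secure_url);
        }
      }
    );

    widget.open();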
If the client tries to upload a larger file, they will get back an error stating that the maximum file size was exceeded, and the upload will fail.
Alternatively, you can choose to limit the dimensions of the image. If they are exceeded, instead of failing the upload with an error, the image is automatically scaled down to the given dimensions while retaining its aspect ratio, and the upload request succeeds.
However, using this method doesn't guarantee that the uploaded file will be below a certain desired size (e.g. 500 KB), as every image is different: one image scaled down to the given dimensions can end up smaller than your threshold, while another may slightly exceed it.
This can be achieved using the limit cropping method as part of an incoming transformation.
https://cloudinary.com/documentation/image_transformations#limit
https://cloudinary.com/documentation/upload_images#incoming_transformations
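If you end up routing uploads through your own backend instead of the widget, a limit crop as an incoming transformation looks roughly like this with the Node SDK (the credentials and the 2000x2000 bound are placeholders):

    // Sketch using the cloudinary Node SDK; configuration values and the
    // 2000x2000 limit are placeholders.
    import { v2 as cloudinary } from "cloudinary";

    cloudinary.config({
      cloud_name: "my-cloud", // placeholder
      api_key: "xxx",         // placeholder
      api_secret: "yyy",      // placeholder
    });

    async function uploadWithLimit(localPath: string): Promise<string> {
      const result = await cloudinary.uploader.upload(localPath, {
        // Incoming transformation: scale down (never up) so neither side
        // exceeds 2000px, preserving aspect ratio.
        transformation: [{ width: 2000, height: 2000, crop: "limit" }],
      });
      return result.secure_url;
    }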
I have an Express app that allows downloads via special IDs in the URL.
Somebody can add a file, say file.pdf, and someone else can download it via http://app/files/<64 char id>.
For the files/:id route I use res.download().
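Roughly, the route looks like this (the id-to-path lookup below is a simplified placeholder for what the app actually does):

    // Sketch of the route as described; lookupPath is a hypothetical helper.
    import express from "express";
    import path from "path";

    const app = express();

    app.get("/files/:id", (req, res) => {
      const filePath = lookupPath(req.params.id); // map the 64-char id to a file on disk
      if (!filePath) {
        res.sendStatus(404);
        return;
      }
      // res.download streams the file and sets Content-Disposition: attachment
      res.download(filePath, path.basename(filePath));
    });

    function lookupPath(id: string): string | undefined {
      return undefined; // placeholder
    }

    app.listen(3000);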
The download speed, however, never exceeds 300 kbit/s and averages around 162 kbit/s (measured with wget). This is in contrast to the 200 Mbit/s bandwidth that iperf3 reports between the two systems. The files are also on an SSD, so access speed should not be an issue.
The average file size is around 10 MB.
Is there any way to speed up res.download, or can I implement the same app with different, more performant Express functions?
I am connected to Azure Blob Storage, and we need to upload files bigger than 256 MB.
I followed the documentation, created a page blob, and then uploaded the files through Put Page.
The problem is that I don't know in advance how big the data is going to be, so I set the blob size to the maximum of 8 TB.
Is there a reason why I shouldn't do this?
As far as I know, the max size is only the maximum possible size for the blob and shouldn't cause any issues with memory.
Correct me if I'm wrong please.
Thanks
In my experience, and according to the REST API reference for Put Page, the API requires the Content-Length header to be set to the exact number of bytes transmitted in the request body, not an approximate or possible value.
Otherwise, Azure Storage will refuse the request and respond with an error after checking the validity of the request in the cloud. I think that is the real reason for the errors you might see, listed under Common REST API Error Codes and Blob Service Error Codes.
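For illustration, a sketch with the @azure/storage-blob SDK (the connection string, container and blob names, and sizes are placeholders): each uploadPages call, which maps to Put Page, declares the exact number of bytes it carries, and page ranges must be 512-byte aligned.

    import { BlobServiceClient } from "@azure/storage-blob";

    async function main(): Promise<void> {
      const service = BlobServiceClient.fromConnectionString("<connection-string>"); // placeholder
      const pageBlob = service
        .getContainerClient("my-container")   // placeholder
        .getPageBlobClient("big-file.vhd");   // placeholder

      // The size passed to create() only reserves the maximum blob size
      // (must be a multiple of 512); pages are written and billed as needed.
      await pageBlob.create(1024 * 1024 * 1024); // 1 GiB for the sketch

      // Each Put Page call must state the exact byte count it carries, and the
      // range must be 512-byte aligned, so pad the last chunk if needed.
      const data = Buffer.from("hello page blob");
      const padded = Buffer.alloc(Math.ceil(data.length / 512) * 512);
      data.copy(padded);

      await pageBlob.uploadPages(padded, 0, padded.length);
    }

    main().catch(console.error);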
I'm trying to log the total size of each response sent out from an ASP.NET Core website hosted in Azure. The goal is to be able to attribute some sense of the data-out cost to specific functionality within the application. (Specifically, we are looking at functionality that uses Azure Blob Storage and allows downloads of the blobs, but I don't think that's relevant to the question.)
I think the solution is some simple middleware that logs the sizes to a storage mechanism (I'm less concerned with that part), but I'm not sure what to put inside the middleware.
Does anyone know how you would "construct" the total size of the response being sent out of the web app?
I presume it would include HttpContext.Response.Body.Length, but I'm pretty confident that doesn't include the headers. I'm also not sure whether that is the compressed size of the response body or not.
Thanks