ASP.NET Core: Higher memory use uploading files to Azure Blob Storage with SDK v12 compared to v11

I am building a service with an endpoint that images and other files will be uploaded to, and I need to stream the files directly to Blob Storage. This service will handle hundreds of images per second, so I cannot buffer the images in memory before sending them to Blob Storage.
I was following the article here and ran into this comment:
Next, using the latest version (v12) of the Azure Blob Storage libraries and a Stream upload method. Notice that it’s not much better than IFormFile! Although BlobStorageClient is the latest way to interact with blob storage, when I look at the memory snapshots of this operation it has internal buffers (at least, at the time of this writing) that cause it to not perform too well when used in this way.
But, using almost identical code and the previous library version that uses CloudBlockBlob instead of BlobClient, we can see a much better memory performance. The same file uploads result in a small increase (due to resource consumption that eventually goes back down with garbage collection), but nothing near the ~600MB consumption like above
I tried this and found that yes, v11 has considerably lower memory usage than v12! When I ran my tests with a ~10MB file, each new upload (after the initial POST) increased memory usage by about 40MB on v12, while v11 increased by only about 20MB.
I then tried a 100MB file. On v12, each request consumed about 100MB almost instantly and kept climbing slowly after that, reaching over 700MB after my second upload. Meanwhile, v11 didn't really jump in memory, though it still climbed slowly, ending at around 430MB after the second upload.
I tried experimenting with the BlobUploadOptions properties (InitialTransferSize, MaximumConcurrency, etc.), but that only seemed to make things worse.
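Roughly along these lines (an illustrative sketch, not my exact code; blobClient, requestStream and cancellationToken stand in for my BlobClient, the incoming request stream and the request's cancellation token):
// Sketch of the kind of tuning I tried; the values are just examples
// (BlobUploadOptions is in Azure.Storage.Blobs.Models, StorageTransferOptions in Azure.Storage)
var options = new BlobUploadOptions
{
    TransferOptions = new StorageTransferOptions
    {
        InitialTransferSize = 1 * 1024 * 1024,   // size of the first transfer
        MaximumConcurrency = 2,                  // parallel block uploads
        MaximumTransferSize = 4 * 1024 * 1024    // size of each subsequent block
    }
};
await blobClient.UploadAsync(requestStream, options, cancellationToken);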
It seems unlikely that v12 would be straight up worse in performance than v11, so I am wondering what I could be missing or misunderstanding.
Thanks!

Sometimes this issue can occur with the Azure Blob Storage (v12) libraries.
Try uploading large files in chunks (a technique called file chunking, which breaks the large file into smaller pieces that are uploaded one at a time) instead of uploading the whole file at once. Please refer to this link.
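For example, here is a rough sketch with the v12 SDK that stages the file in fixed-size blocks and then commits them. The container, blob name and block size are placeholders; containerClient is assumed to be a BlobContainerClient and sourceStream the incoming file stream; GetBlockBlobClient, StageBlockAsync and CommitBlockListAsync come from Azure.Storage.Blobs.Specialized.
// Rough sketch: chunked upload with the v12 SDK
// using Azure.Storage.Blobs.Specialized; using System.Text;
BlockBlobClient blockBlob = containerClient.GetBlockBlobClient("myblob");
const int blockSize = 4 * 1024 * 1024; // 4 MB per block (placeholder value)
var blockIds = new List<string>();
var buffer = new byte[blockSize];
int bytesRead, blockNumber = 0;
while ((bytesRead = await sourceStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    // Block IDs must be base64 strings of equal length
    string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockNumber++.ToString("d6")));
    using (var blockContent = new MemoryStream(buffer, 0, bytesRead))
    {
        await blockBlob.StageBlockAsync(blockId, blockContent);
    }
    blockIds.Add(blockId);
}
// Commit the staged blocks to create the final blob
await blockBlob.CommitBlockListAsync(blockIds);
This way only one block is held in memory at a time instead of handing the whole stream to a single Upload call.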
I tried reproducing the scenario in my lab:
// Requires the Azure.Storage.Blobs (v12) package
public void UploadFile()
{
    string connectionString = "connection string";
    string containerName = "fileuploaded";
    string blobName = "test";
    string filePath = "filepath";

    BlobContainerClient container = new BlobContainerClient(connectionString, containerName);
    container.CreateIfNotExists();

    // Get a reference to the blob in the container
    BlobClient blob = container.GetBlobClient(blobName);

    // Upload the local file
    blob.Upload(filePath);
}
The output after uploading the file:

Related

Process large files from S3

I am trying to get a large file (>10GB) from S3 (stored as a CSV) and send it as a CSV attachment in the response. I am doing it with the following procedure:
// s3, res and fileId come from the enclosing scope (AWS SDK client and Express response)
async getS3Object(params: any) {
  s3.getObject(params, function (err, data) {
    if (err) {
      console.log('Error Fetching File');
    } else {
      // Buffers the whole object in memory before sending it
      const csv = data.Body.toString('utf-8');
      res.setHeader('Content-disposition', `attachment; filename=${fileId}.csv`);
      res.set('Content-Type', 'text/csv');
      res.status(200).send(csv);
    }
  });
}
This is taking painfully long to process the file and send it as a CSV attachment. How can I make this faster?
You're dealing with a huge file; you could break it into chunks using range requests (see also the docs; search for "calling the getobject property"). If you need the whole file, you could split the work across workers, though at some point the limit will probably be your connection, and if you need to send the whole file as an attachment that won't help much.
A better solution would be to never download the file in the first place. You can do this by streaming from S3 (see also this, and this), or by setting up a proxy in your server so the bucket/subdirectory appears to the client to be part of your app.
If you run this on EC2, the network performance of the EC2 instances varies based on the EC2 type and size. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html
A bottleneck can happen at multiple places:
Network (bandwidth and latency)
CPU
Memory
Local Storage
One can check each of these. CloudWatch Metrics is our friend here.
CPU is the easiest to see and to scale with a bigger instance size.
Memory is a bit harder to observe, but one should have enough memory to keep the document in memory so the OS does not use swap.
Local Storage - IO can be observed. If the business logic is just to parse a CSV file and output the result to, say, another S3 bucket, and there is no need to save the file locally, storage-optimized EC2 instances with local storage can be used: https://aws.amazon.com/ec2/instance-types/ (Storage Optimized).
Network - EC2 instance size can be modified, or Network optimized instances can be used.
Network - the way one connects to S3 matters. Usually, the best approach is to use an S3 VPC endpoint: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html. The gateway option is free to use. By adopting it, one eliminates the VPC NAT gateway/NAT instance limitations, and it's even more secure.
Network - Sometimes the S3 bucket is in one region and the compute is in another. S3 supports replication: https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
Maybe some type of APM monitoring and code instrumentation can show whether the code itself can also be optimized.
Thank you.

Azure Function App copy blob from one container to another using startCopy in java

I am using Java to write an Azure Function App with an Event Grid trigger on BlobCreated events, so whenever a blob is created the function is triggered and copies the blob from one container to another. I am using the startCopy function from com.microsoft.azure.storage.blob. It was working fine, but sometimes it copies files of zero bytes even though they actually contain data at the source location, so the destination sometimes ends up with zero-byte files. I would like a little help with this so that I can understand how to handle this situation.
// Source blob in the source container
CloudBlockBlob cloudBlockBlob = container.getBlockBlobReference(blobFileName);

// Destination account, client and container
CloudStorageAccount storageAccountdest = CloudStorageAccount.parse("something");
CloudBlobClient blobClientdest = storageAccountdest.createCloudBlobClient();
CloudBlobContainer destcontainer = blobClientdest.getContainerReference("something");

// Destination blob with the same name, then start the (asynchronous) copy
CloudBlockBlob destcloudBlockBlob = destcontainer.getBlockBlobReference(blobFileName);
destcloudBlockBlob.startCopy(cloudBlockBlob);
Copying blobs across storage accounts is an async operation. When you call the startCopy method, it just signals Azure Storage to copy the file. The actual copy operation happens asynchronously and may take some time depending on how large the file you're transferring is.
I would suggest that you check the copy progress on the target blob to see how many bytes have been copied and whether the copy operation failed. You can do so by fetching the properties of the target blob. A copy operation could potentially fail if the source blob is modified after the copy has been started by Azure Storage.
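For illustration, the check looks roughly like this with the .NET v12 client (Azure.Storage.Blobs); destBlob and sourceBlob are placeholder BlobClient instances, and the legacy Java client used in the question exposes the equivalent information via downloadAttributes() and getCopyState():
// Start the copy (for cross-account copies the source URI usually needs a SAS token)
await destBlob.StartCopyFromUriAsync(sourceBlob.Uri);

// Poll the destination blob's properties until the copy finishes
BlobProperties props = await destBlob.GetPropertiesAsync();
while (props.CopyStatus == CopyStatus.Pending)
{
    await Task.Delay(500);
    props = await destBlob.GetPropertiesAsync();
}

// CopyProgress is "bytesCopied/totalBytes"; CopyStatusDescription explains a failure
Console.WriteLine($"{props.CopyStatus}: {props.CopyProgress} - {props.CopyStatusDescription}");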
I had the same problem, and later figured it out from the docs:
Event Grid isn't a data pipeline, and doesn't deliver the actual object that was updated
Event Grid will tell you that something has changed, but the event message has a size limit; as long as the data you are copying is within that limit the copy will succeed, otherwise it results in 0 bytes. I was able to copy up to 1MB, and beyond that it resulted in 0 bytes. You can check whether Azure has increased the size limit recently.
However, if you want to copy the complete data, you need to use Event Hubs or Service Bus. For mine, I went with Service Bus.

Firebase Functions - ERROR, but no Event Message in Console

I have written a function on Firebase that downloads an image (base64) from Firebase Storage and sends it as the response to the user:
const functions = require('firebase-functions');
import os from 'os';
import path from 'path';
const storage = require('firebase-admin').storage().bucket();

export default functions.https.onRequest((req, res) => {
  const name = req.query.name;
  let destination = path.join(os.tmpdir(), 'image-randomNumber');
  return storage.file('postPictures/' + name).download({
    destination
  }).then(() => {
    res.set({
      'Content-Type': 'image/jpeg'
    });
    return res.status(200).sendFile(destination);
  });
});
My client calls that function multiple times one after another (in series) to load a range of images for display, about 20, with an average size of 4KB.
After 10 or so pictures have been loaded (the number varies), all further pictures fail. The reason is that my function does not respond correctly, and the Firebase console shows me that my function threw an error:
The above image shows that:
A request to the function (called "PostPictureView") succeeds.
Afterwards, three requests to the controller fail.
In the end, after executing a new request to the "UserLogin" function, that fails as well.
The response given to the client is the default "Error: Could not handle request". After waiting a few seconds, all requests get handled again as they are supposed to be.
My best guesses:
The project is on the free tier; maybe Google is throttling something? (I did not hit any limits AFAIK.)
Is there a limit to the number of messages the Google Firebase console can handle?
Could the tmpdir of the functions app be running low on space? I haven't deleted the temporary files so far, but I would expect that Google either deletes them automatically or warns me in some other way that space is running low.
Does anyone know an alternative way to receive the error messages, or has anyone experienced similar issues? (As Firebase Functions is still in beta, it could also be an error on Google's side.)
Btw: downloading the image directly from the client (Android app, React Native) is not possible, because I will use the function to check access permissions later. The problem is reproducible for me.
In Cloud Functions, the /tmp directory is backed by memory. So, every file you download there is effectively taking up memory on the server instance that ran the function.
Cloud Functions may reuse server instances for repeated calls to the same function. This means your function is downloading another file (to that same instance) with each invocation. Since the names of the files are different each time, you are accumulating files in /tmp that each occupy memory.
At some point, this server instance is going to run out of memory with all these files in /tmp. This is bad.
It's a best practice to always clean up files after you're done with them. Better yet, if you can stream the file content from Cloud Storage to the client, you'll use even less memory (and be billed even less for the memory-hours you use).
After some more research, I've found the solution: The Firebase Console seems to not show all error information.
For detailed information about your functions, and errors that might be omitted in the Firebase Console, check the Google Cloud Functions section of the Google Cloud Console.
There I saw that memory usage (as suggested by @Doug Stevensson) never went over 80MB (with a limit of 256MB) and never shut the server down. Moreover, there is a DNS resolution limit for the free tier that my application hit.
The documentation points to a limit of 40,000 DNS resolutions per 100 seconds. In my case, this limit was never hit - Firebase counts a total of roughly 8,000 executions - but it seems there is a lower, undocumented limit for the free tier. After upgrading my account (I started the trial that GCP offers, so I'm actually not paying anything) and linking the project to the billing account, everything works perfectly.

Azure WebJob: BlobTrigger vs QueueTrigger resource usage

I created a WebJob project to back up images from Azure to Amazon that used the BlobTrigger attribute for the first parameter of the method:
public static async Task CopyImage([BlobTrigger("images/{name}")] ICloudBlob image, string name, TextWriter log)
{
    var imageStream = new MemoryStream();
    image.DownloadToStream(imageStream);
    await S3ImageBackupContext.UploadImageAsync(name, imageStream);
}
Then I read in the document How to use Azure blob storage that the BlobTrigger works on a best-effort basis, and changed it to a QueueTrigger.
Both work perfectly fine :-) so it's not a problem but a question. Since I deployed the change, the CPU and memory usage of the WebJob looks like this:
Can somebody explain the reason for the drop in memory and CPU usage? The data egress went down as well.
Very interesting.
I think you're the only one who can answer that question.
Do a remote profile for both blob and queue versions, see which method eats up that CPU time:
https://azure.microsoft.com/en-us/blog/remote-profiling-support-in-azure-app-service/
For memory consumption you probably need to grab a memory dump:
https://blogs.msdn.microsoft.com/waws/2015/07/01/create-a-memory-dump-for-your-slow-performing-web-app/

Azure WebJob not processing all Blobs

I upload gzipped files to an Azure Storage Container (input). I then have a WebJob that is supposed to pick up the Blobs, decompress them and drop them into another Container (output). Both containers use the same storage account.
My problem is that it doesn't process all Blobs. It always seems to miss 1. This morning I uploaded 11 blobs to the input Container and only 10 were processed and dumped into the output Container. If I upload 4 then 3 will be processed. The dashboard will show 10 invocations even though 11 blobs have been uploaded. It doesn't look like it gets triggered for the 11th blob. If I only upload 1 it seems to process it.
I am running the website in Standard Mode with Always On set to true.
I have tried:
Writing code like the Azure Samples (https://github.com/Azure/azure-webjobs-sdk-samples).
Writing code like the code in this article (http://azure.microsoft.com/en-us/documentation/articles/websites-dotnet-webjobs-sdk-get-started).
Using Streams for the input and output instead of CloudBlockBlobs.
Various combinations of closing the input, output and Gzip Streams.
Having the UnzipData code in the Unzip method.
This is my latest code. Am I doing something wrong?
public class Functions
{
    public static void Unzip(
        [BlobTrigger("input/{name}.gz")] CloudBlockBlob inputBlob,
        [Blob("output/{name}")] CloudBlockBlob outputBlob)
    {
        using (Stream input = inputBlob.OpenRead())
        using (Stream output = outputBlob.OpenWrite())
        {
            UnzipData(input, output);
        }
    }

    public static void UnzipData(Stream input, Stream output)
    {
        using (var gzippedStream = new GZipStream(input, CompressionMode.Decompress))
        {
            gzippedStream.CopyTo(output);
        }
    }
}
As per Victor's comment above it looks like it is a bug on Microsoft's end.
Edit: I don't get the downvote. There is a problem and Microsoft are going to fix it. That is the answer to why some of my blobs are ignored...
"There is a known issue about some Storage log events being ignored. Those events are usually generated for large files. We have a fix for it but it is not public yet. Sorry for the inconvenience. – Victor Hurdugaci Jan 9 at 12:23"
Just as a workaround, what if you don't listen to the blob directly but bring a queue in between: when you write to the input blob container, also write a message about the new blob to the queue, and let the WebJob listen to that queue. Once a message arrives, the WebJob function reads the file from the input blob container and copies it into the output blob container. A rough sketch is shown below.
Does this model work for you?
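A minimal sketch of what that could look like with the WebJobs SDK, assuming the queue message carries the blob name without the ".gz" extension (queue and container names are placeholders):
// using Microsoft.Azure.WebJobs; using System.IO; using System.IO.Compression;
public static void UnzipFromQueueMessage(
    [QueueTrigger("uploaded-blobs")] string name,
    [Blob("input/{queueTrigger}.gz", FileAccess.Read)] Stream input,
    [Blob("output/{queueTrigger}", FileAccess.Write)] Stream output)
{
    // The queue message is the blob name, so {queueTrigger} binds to it
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    {
        gzip.CopyTo(output);
    }
}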
