I have an Azure Function that I use for PGP decryption. I am using the C# library below.
https://github.com/mattosaurus/PgpCore
The code I am using is below. Even though I am using a Premium App Service plan with 32 GB of memory, my function is failing with an OutOfMemoryException while decrypting a file of around 100 MB.
I wasn't able to find any solution to avoid this problem. What would you recommend?
Thanks,
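One mitigation worth trying is to decrypt stream-to-stream instead of buffering the whole file in memory. Below is a minimal sketch, not the asker's actual code; it assumes PgpCore exposes stream-based overloads such as EncryptionKeys(Stream, string) and DecryptStreamAsync (names vary across PgpCore versions, so check the version you reference):

```csharp
// Illustrative sketch only - assumes PgpCore's stream-based decrypt overload
// (DecryptStreamAsync in the versions I have seen; verify against your package).
// Streaming input -> decrypt -> output keeps memory usage roughly constant
// instead of buffering the whole 100 MB file as a byte array.
using System.IO;
using System.Threading.Tasks;
using PgpCore;

public static class StreamingDecryption
{
    public static async Task DecryptAsync(Stream encryptedInput, Stream decryptedOutput,
                                          Stream privateKey, string passPhrase)
    {
        // EncryptionKeys and PGP are PgpCore types; constructor overloads
        // differ between versions, so treat this as an assumption.
        var keys = new EncryptionKeys(privateKey, passPhrase);
        var pgp = new PGP(keys);

        // Decrypt directly between the two streams so the function never
        // holds the full ciphertext or plaintext in memory at once.
        await pgp.DecryptStreamAsync(encryptedInput, decryptedOutput);
    }
}
```

Pairing this with the blob SDK's stream APIs (reading the source blob as a stream and writing the output as a stream) keeps the function from ever materializing the full payload in memory.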
On the Windows plan there is an option to take a memory dump:
However, this option is missing on the Linux plan:
The option which is available for App Services on Linux is not available for Azure Functions on Linux.
Is there a way to get memory dumps for Azure Functions on Linux?
After researching in my local environment, I came to the conclusion that we can't take a memory dump for Azure Functions on the Linux platform.
As per the answer from Microsoft mentioned below, that and many other documents only provide information about memory dumps on the Windows platform.
So I would suggest raising a feature request, as that would be helpful for other members with a related issue.
I am using an Azure Storage Account to deliver images to a client service, and we have been experiencing latency for a couple of days, with download times for the same blob jumping from a few milliseconds to a few seconds.
I read on MSDN here that this can happen when the same blob is read over and over, which is our case.
Has anyone experienced the same issue before? Can latency randomly jump by a factor of 1000?
What would be the easiest way to fix this problem? It is compromising the performance of the client system, and since there is only one client machine, located in the same datacenter, it doesn't really make sense (from my point of view) to set up a CDN.
A potential mitigation is to set up a cache on the client side as well (something along the lines of the sketch below), but this is a bit complex due to the amount of data being queried, and I'd like to solve this issue on the storage side directly.
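For reference, a minimal sketch of what such a client-side cache could look like, assuming the Azure.Storage.Blobs SDK and Microsoft.Extensions.Caching.Memory; the class and names below are illustrative, not our production code:

```csharp
// Illustrative sketch only: cache downloaded blob contents in memory on the
// client so repeated reads of the same blob don't hit Storage every time.
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Extensions.Caching.Memory;

public class CachingBlobReader
{
    private readonly BlobContainerClient _container;
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

    public CachingBlobReader(BlobContainerClient container) => _container = container;

    public async Task<byte[]> GetImageAsync(string blobName)
    {
        // Serve from the local cache when possible; otherwise download and
        // keep the bytes for a short sliding window.
        return await _cache.GetOrCreateAsync(blobName, async entry =>
        {
            entry.SlidingExpiration = TimeSpan.FromMinutes(10);
            var response = await _container.GetBlobClient(blobName).DownloadContentAsync();
            return response.Value.Content.ToArray();
        });
    }
}
```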
We use:
A Premium Storage Account
Locally redundant
And a BlockBlobStorage Account Kind
Thanks for your help.
I have Azure Durable Functions with Event Grid as the trigger, pointing to blob storage.
I have 8 activity functions and 1 orchestrator.
Based on the file type I receive, one of the activity functions is executed (see the sketch below for the routing pattern).
However, I keep receiving the crash message shown in the image.
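For context, here is a minimal sketch of the routing pattern described above; the orchestrator name, activity names, and input shape are hypothetical, not the actual app:

```csharp
// Hypothetical sketch of routing to an activity function by file type.
// The orchestrator, activity names and input are illustrative only.
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class FileRoutingOrchestrator
{
    [FunctionName("FileRoutingOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // The blob path is passed in by the Event Grid-triggered starter.
        string blobPath = context.GetInput<string>();
        string extension = Path.GetExtension(blobPath).ToLowerInvariant();

        // Pick one of the activity functions based on the file type.
        string activityName = extension switch
        {
            ".csv"  => "ProcessCsvActivity",
            ".json" => "ProcessJsonActivity",
            _       => "ProcessGenericActivity"
        };

        await context.CallActivityAsync(activityName, blobPath);
    }
}
```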
The error message that you have shared indicates that the function failed with "System.ExecutionEngineException".
Generally, a System.ExecutionEngineException is thrown when the CLR detects that something has gone horribly wrong.
This can happen some considerable time after the problem occurred. This is because the exception is usually a result of corruption of internal data structures - the CLR discovers that something has got into a state that makes no sense. It throws an uncatchable exception because it's not safe to proceed.
Looking at the stack trace in the screenshot, the exception points to a DurableTask.AzureStorage.TimeoutHandler+ <ExecuteWithTimeout issue.
You can use the memory dump generated by the Proactive Crash Monitoring tool to identify the function crash and the associated crashing thread's call stack.
Please create a technical support ticket by following the link, where the technical support team can help you troubleshoot the issue, or open a discussion on the Microsoft Q&A community.
Microsoft is aware of it, and it's currently as designed. It shouldn't affect your apps. https://github.com/Azure/azure-functions-durable-extension/issues/1965#issuecomment-931637193
I'm running an Azure Function under the Consumption plan, and every so often I get a bunch of errors about there being no space in the Function. Errors such as:
Exception while executing function: PdfToImages. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'req'. mscorlib: Error while copying content to a stream. mscorlib: There is not enough space on the disk.
I read that there is a limitation of 1 GB of memory, but I don't seem to be using that much, and I delete all the files I use after each call.
The storage limit is the total content size in temporary storage across all apps in the same App Service plan. Consumption plan uses Azure Files for temporary storage.
What seems even more confusing is that some calls make it through without space issues, and also that once I start seeing the issues, if I restart the Function all the calls work.
Here is a sample volume screenshot...
Can someone please enlighten me on how exactly disk space is handled...
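For what it's worth, one pattern that helps keep temporary storage from filling up is to give each invocation its own scratch folder and delete it in a finally block. This is a hypothetical sketch, not the asker's code:

```csharp
// Hypothetical sketch: isolate each invocation's scratch files in a unique
// temp subfolder and always delete it, even when the conversion throws.
using System;
using System.IO;

public static class ScratchFolder
{
    public static void WithScratch(Action<string> work)
    {
        string folder = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(folder);
        try
        {
            // 'work' writes its intermediate PDF/image files under 'folder'.
            work(folder);
        }
        finally
        {
            // Best-effort cleanup so temp storage doesn't accumulate across calls.
            try { Directory.Delete(folder, recursive: true); } catch (IOException) { }
        }
    }
}
```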
I have an App Service running with 8 instances in the service plan.
The app is written in ASP.NET Core, on an older version than is currently available.
Occasionally I have an issue where the servers start returning a high number of 5xx errors after a period of sustained load.
It appears that only one instance is having an issue - which is causing the failed request rate to climb.
I've noticed that there is a corresponding increase in the "locally written bytes" on the instance that is having problems - I am not writing any data locally so I am confused as to what this metric is actually measuring. In addition the number of open connections goes high and then stays high - rebooting the problematic instance doesn't seem to achieve anything.
The only thing I suspect is that we are copying data from a user's request straight into Azure Blob Storage using UploadFromStreamAsync on the HttpRequest.Body, with the data coming from a mobile phone app (roughly the pattern sketched below).
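Roughly the pattern being described, as a sketch with hypothetical names; it assumes the older Microsoft.Azure.Storage.Blob SDK, which is where UploadFromStreamAsync lives:

```csharp
// Sketch of streaming a request body straight into blob storage, as described
// above. Names are hypothetical; assumes the older Microsoft.Azure.Storage.Blob
// SDK, which exposes CloudBlockBlob.UploadFromStreamAsync.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Storage.Blob;

[ApiController]
[Route("api/upload")]
public class UploadController : ControllerBase
{
    private readonly CloudBlobContainer _container;

    public UploadController(CloudBlobContainer container) => _container = container;

    [HttpPost("{name}")]
    public async Task<IActionResult> Post(string name)
    {
        // The request body (from the mobile app) is piped directly to the blob
        // without being buffered to local disk first.
        CloudBlockBlob blob = _container.GetBlockBlobReference(name);
        await blob.UploadFromStreamAsync(Request.Body);
        return Ok();
    }
}
```

Note that HttpRequest.Body is not seekable by default, so the SDK cannot rewind or know the upload length up front, which is worth keeping in mind when investigating buffering behaviour.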
Microsoft support suggested we switch to using local cache as an option to reduce issues with storage; however, this has not resolved the issue.
Can anyone tell me what "locally written bytes" is actually measuring? There is little documentation on this metric that I can find on Google.