Incremental memory consumption on Azure Functions app

On an Azure Functions app running on an App Service plan, we notice that memory usage increases significantly over time (from ~100 MB to 3 GB).
The function app is written in Python and is triggered whenever a new event is received from Event Hubs.
I've tried to profile memory based on Azure's official guide, and I've noticed several odd things:
on each new event invocation, the function's memory grows by several KB/MB
for example, when variables hold data inside the Python function, the logs show the memory is not released (?)
over time these small increments add up to high memory usage.
It would be helpful if you could suggest possible solutions or any further debugging methods.
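One way to see where the growth comes from is to diff allocation snapshots across invocations with tracemalloc; a minimal sketch, assuming the v1 Python programming model and an Event Hub trigger (the handler and binding names are illustrative):

```python
import gc
import logging
import tracemalloc

import azure.functions as func

tracemalloc.start()
_previous = tracemalloc.take_snapshot()

def main(event: func.EventHubEvent) -> None:
    global _previous
    # ... your actual event processing goes here ...

    gc.collect()  # collect first so only objects that survived GC show up
    current = tracemalloc.take_snapshot()
    # Log the five call sites whose allocations grew the most since the
    # previous invocation; steady growth here points at the leak.
    for stat in current.compare_to(_previous, "lineno")[:5]:
        logging.info("alloc growth: %s", stat)
    _previous = current
```

If the top entries keep growing across invocations, that call site is holding references between executions (module-level caches and globals are the usual suspects, since the worker process is reused).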

Related

How to set a memory limitation per request on Azure Function?

I created an Azure Function App (.NET 6 Isolated) on the Consumption plan, which is responsible for converting various documents from one format to another, such as converting PDFs to PNGs. However, the processing time for certain documents may be longer due to factors such as the size of the document. I am aware that the Consumption plan has a memory limitation of 1.5 GB per function app. There are two function endpoints on the app, and I would like to set a hard limit on the memory usage per request to ensure that it does not exceed 512 MB. Is this possible?
But the MemoryFailPoint class does not guarantee that the block of code will execute within a specific amount of memory. It only ensures that a certain amount of memory is available before executing the code.
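To make that distinction concrete, the same pre-flight pattern can be sketched in Python with psutil; this is an illustrative analogue, not Azure functionality, and like MemoryFailPoint it only gates entry rather than enforcing a cap:

```python
import psutil

def run_if_memory_available(required_mb: int, work) -> None:
    # Pre-flight check, mirroring what MemoryFailPoint does in .NET:
    # verify that enough memory is available before starting, but
    # enforce nothing once the work is running.
    available_mb = psutil.virtual_memory().available / (1024 * 1024)
    if available_mb < required_mb:
        raise MemoryError(f"need {required_mb} MB, have {available_mb:.0f} MB")
    work()  # nothing prevents work() from allocating more than required_mb
```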
Setting a memory size per function app was only available in Azure Functions before 2016.
Since then there have been changes in the serverless design, especially in how Azure Functions utilizes dependent resources.
Microsoft disabled the memory setting on the Consumption plan based on feedback from many Azure users; the Consumption hosting plan now decides resource utilization, including memory and CPU, based on your functions' usage.
Refer to this MS article for more information on memory settings for function apps.

High memory consumption on Azure Function App on Linux plan

I just switched from a Windows plan to Linux on an Azure Function App, and memory usage went up five times.
I didn't change the way the package is built; it is just dotnet publish -c Release --no-build --no-restore. I wonder if I could do something here - build for a specific runtime? (See the sketch after the details below.)
Is there a way to decrease that consumption? I'm asking because my plan was to switch all functions to Linux plans since they are cheaper, but not necessarily if it ends up requiring higher plans.
A few details:
dotnet 3.1
function runtime version ~3
functions run in-process
The function is rarely used, so there is no correlation between higher memory usage and bigger traffic.
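On the runtime question: one thing worth trying (an assumption on my part, not a verified fix for the memory jump) is publishing for the specific Linux runtime identifier so only runtime-specific assets are deployed:

```
dotnet publish -c Release -r linux-x64 --self-contained false
```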
Please check if my findings are helpful:
Memory Working Set is the current amount of memory used by the Function App, in MB; it tracks how much of the application is currently loaded in physical memory.
If request volume is high, the Memory Working Set is likely to increase.
AFAIK, the initial request or cold start of an Azure Function consumes a relatively high amount of memory, roughly 60 MiB - 180 MiB, and the net memory working set depends on how much physical memory the function application uses while handling requests and responses.
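To track this metric over time rather than eyeballing the portal, one option (a sketch, assuming the azure-monitor-query package; the resource ID is a placeholder) is to pull MemoryWorkingSet programmatically:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID: substitute your function app's ID.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Web/sites/<function-app-name>"
)

# Query the working-set metric and print the averaged data points.
response = client.query_resource(resource_id, metric_names=["MemoryWorkingSet"])
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```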
According to the Azure Functions Plan Migration official documentation, direct migration to a Dedicated (App Service) plan is not currently supported, and this migration is not supported on Linux.
Also, you can check the cause and resolution on Azure Functions (Linux Plan) > Diagnose and Solve Problems > Availability & Performance >

Google Cloud Function out of memory error not making sense

Question:
Do you know a way to actually monitor how much memory is being used by a GCF (Node.js 8)?
Do you have recommendations regarding the memory profiling of Google Cloud Functions (even locally) for Node.js 8?
Context:
I deployed a Google Cloud Function (NodeJS), with 128MB of memory, that used to work pretty well.
Today, it fails saying "Error: memory limit exceeded.".
GCP tells me the function doesn't use up more than 58MiB, yet it fails with a memory error when it has 128MB.
I feel lost and misled because:
It used to work, and I didn't change a thing since then.
It seems I can't trust Google when it comes to monitoring memory consumption:
The "Details" screen of the function shows it consuming no more than 58MiB.
The Dashboard I created in Monitoring in order to monitor it shows the same values.
Yet it fails with a memory limit.
I have already seen this question: Memory profiler for Google cloud function?, but Stackdriver Profiler doesn't seem to work for GCF (per the docs).
Cloud Functions need to send a response when they're done; if they don't respond, their allocated resources won't be freed. Any unhandled exception in a Cloud Function can cause a memory limit error, so you need to handle all corner cases, exceptions, and promise rejections properly and respond immediately (see the sketch below).
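As an illustration of that rule, here is a Python sketch of the same pattern (the question targets Node.js 8, but the principle is identical across runtimes; do_work and the Functions Framework decorator are illustrative):

```python
import functions_framework
from flask import jsonify

@functions_framework.http
def handler(request):
    # Always terminate with a response: a function that never responds
    # keeps its instance, and its memory, allocated.
    try:
        result = do_work(request)  # do_work is a placeholder for real logic
        return jsonify(result), 200
    except Exception as exc:
        # Responding on failure too, instead of letting the error escape,
        # lets the platform reclaim the instance cleanly.
        return jsonify({"error": str(exc)}), 500
```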
A tutorial video series on YouTube by Doug Stevenson.
Another video about promises in Cloud Functions by Doug.
An Ask Firebase video hosted by Jen Person about Cloud Functions memory.
Set memory allocation in Cloud Functions from the Google Cloud Console.
From the documentation:
To set memory allocation and timeout in the Google Cloud Platform Console:
In the Google Cloud Platform Console, select Cloud Functions from the left menu.
Select a function by clicking on its name in the functions list.
Click the Edit icon in the top menu.
Select a memory allocation from the drop-down menu labeled Memory allocated.
Click More to display the advanced options, and enter a number of seconds in the Timeout text box.
Click Save to update the function.
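If you prefer not to click through the console, the same settings can be applied at deploy time from the CLI; a sketch with a placeholder function name:

```
gcloud functions deploy my-function --memory=256MB --timeout=120s
```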
Things to check for memory leaks (very tricky to troubleshoot):
Async-await functions.
Promises that run "in the background" (fired off with .then and never awaited).
Writing to the writeable part of the filesystem /tmp/ to store temporary files in a function instance will also consume memory provisioned for the function.
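On that last point: /tmp in Cloud Functions is an in-memory filesystem, so anything written there counts against the memory allocation until it is deleted; a minimal cleanup sketch:

```python
import os
import tempfile

def process_with_temp_file(data: bytes) -> None:
    # /tmp is backed by RAM in Cloud Functions, so every byte written
    # here counts against the function's memory allocation.
    fd, path = tempfile.mkstemp(dir="/tmp")
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
        # ... work with the file at `path` ...
    finally:
        os.remove(path)  # free the memory the file was occupying
```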
Cloud Function's Auto-scaling and Concurrency concepts
Each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance. Thus the original request can use the full amount of resources (CPU and memory) that you requested.
Cloud Functions monitoring
These are the available resources at hand to monitor your Cloud Functions:
Stackdriver Logging captures and stores Cloud Functions logs.
Stackdriver Error Reporting captures specially formatted error logs and displays them in the Error Reporting dashboard.
Stackdriver Monitoring records metrics regarding the execution of Cloud Functions.

100% Memory usage on Azure App Service Plan with two Apps - working set used 10gb+

I've got an app service plan with 14gb of memory - it should be plenty for my application's needs. There are two application services running on it, each identical - the private memory consumption of these hovers around 1gb but can spike to 4gb during periods of high usage. One app has a heavier usage pattern than the other.
Lately, during periods of high usage, I've noticed that the heavily used service can become unresponsive, and memory usage stays at 100% in the App Service Plan.
The high traffic service is using 4gb of private memory and starting to massively slow down. When I head over to the /scm.../ProcessExplorer/ page, I can see that the low traffic service has 1gb private memory used and 10gb of 'Working Set'.
As I understand it, on a single machine at least, the working set should be freed up when that memory is needed on another process. Does this happen naturally when two App Services share a single Plan?
It looks to me like the working set on the low-traffic instance is not being freed up to supply the needs of the high-traffic App Service.
If this is indeed the case, the simple fix is to move them to separate App Service Plans, each with 7gb of memory. However, this seems like it might just be shifting the problem around - has anyone else noticed similar issues with multiple apps on a single App Service Plan? As far as I understand it, these shouldn't interfere with one another to the extent that they all need to be separated. Or have I got the wrong diagnosis?
In some high memory-consumption scenarios, your app might truly require more computing resources; in that case, consider scaling to a higher service tier so the application gets all the resources it needs. Other times, a bug in the code might cause a memory leak, or a coding practice might increase memory consumption. Getting insight into what's triggering high memory consumption is a two-part process: first create a process dump, then analyze it. Crash Diagnoser from the Azure Site Extension Gallery can perform both of these steps efficiently.
For more information, refer to Capture and analyze a dump file for intermittent high memory for Web Apps.
In the end we solved this one via mitigation, rather than getting to the root cause.
We found a mitigation strategy for our previous memory issues several months ago, which was simply to restart the server each night using a PowerShell script. This seems to prevent the memory from building up over time, and only costs us a few seconds of downtime. Our system doesn't have much overnight traffic, as our users are all based in the same geographic location.
However, we recently found that the overnight restart was reporting 'success' but actually failing each night due to expired credentials, which meant that the memory issues from my original question were being exacerbated by server uptimes of several weeks. Restoring the overnight restart resolved the memory issues we were seeing, and we certainly don't see our system using 10gb+ anymore.
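For anyone copying this mitigation: the nightly restart can also be driven through the management SDK; a hedged Python sketch (our real script was PowerShell; the identifiers below are placeholders, and DefaultAzureCredential with a managed identity avoids the expired-credential failure described above):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

# Placeholders: substitute your own subscription, resource group and app.
client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.web_apps.restart("<resource-group>", "<app-name>")
```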
We'll investigate the memory issues if they rear their heads again. KetanChawda-MSFT's suggestion of using memory dumps to analyse the memory usage will be employed for this investigation when it's needed.

What would cause high KUDU usage (and eventual 502 errors) on an Azure App Service Plan?

We have a number of API apps and web apps on an Azure App Service P2v2 instance. We've been experiencing a degree of platform instability: the App Service becomes unhealthy and we get a rash of 502 errors across various apps (different ones each time), attributable to very high CPU and memory usage on the App Service. We've tried scaling all the way up to P3v2, but whatever the issue is seems eventually to consume all available resources.
Whenever we've been able to trace a culprit among the apps, it has turned out not to be the app itself but the Kudu service related to it.
A sample error message is: High physical memory usage detected on multiple occasions. The kudu process for the app [sitename]'pe-services-color' is the most common cause of high memory usage. The most common cause of high memory usage for the kudu process is web jobs. The actual app whose Kudu service is named changes quite frequently.
What could be causing the Kudu services to consume so much CPU/Memory, and what can we do to stabilise this app service?
Is it simply that we have too many apps running on one plan? This seems unlikely since all these apps ran previously on a single classic cloud service instance, but if so, what are the limits for apps and slots on a single plan?
(I have seen this question but the answer doesn't help)
Update
From Azure support, these are apparently the limits on Small - Medium - Large non-shared app services:
Worker Size    Max sites
Small          5
Medium         10
Large          20
with 'sites' comprising app services/api apps and their slots.
They seem ridiculously low, and make the larger App Service units highly uneconomic. Can anyone confirm these numbers?
(Incidentally, we found that turning off Always On across the board fixed the issue - it was only causing a problem on empty sites though - we haven't had a chance yet to see if performance is good with all the sites filled.)
High CPU and memory utilization is mostly caused by your program/code itself. Lots of CPU-intensive tasks, or heavy use of parallel programming that spawns many new threads, can contribute to high CPU and memory utilization, so review your code for such instances. As the number of parallel threads increases, CPU utilization goes up and the plan starts scaling up frequently, which adds to your cost and can sometimes cause thread starvation and unexpected results. As Azure resource costs are high, you need to plan your performance accordingly (see the sketch below).
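As an illustration of bounding that parallelism, a fixed-size pool caps the number of live threads instead of spawning one per task (the worker function and task list are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def handle(task: int) -> int:
    # Illustrative per-task work; replace with the real CPU/IO job.
    return task * task

tasks = range(1000)

# max_workers caps concurrent threads, so CPU spikes and per-thread
# stack memory stay bounded even when the task list is large.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle, tasks))
```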
You can monitor this using the Metrics option on the App Service plan blade.
