I just switched from a Windows plan to Linux on an Azure Function App, and memory usage went up five times.
I didn't change the way the package is built; it is just dotnet publish -c Release --no-build --no-restore. I wonder if I could do something here - build for a specific runtime?
Is there a way to decrease that consumption? I'm asking because my plan was to switch all functions to Linux plans, as they are cheaper - but not necessarily if it means ending up on higher plans.
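For what it's worth, a runtime-specific publish would look something like this (a sketch assuming the linux-x64 RID; --no-build is dropped here because the existing build output would have needed to be produced for the same RID):

```
dotnet publish -c Release -r linux-x64 --self-contained false
```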
A few details:
.NET Core 3.1
Functions runtime version ~3
functions run in-process
The function is rarely used, so there is no correlation between the higher memory usage and increased traffic.
Please check if my findings are helpful:
Memory Working Set is the current amount of memory used by the Function App, in MB; it tracks how much of the application is currently loaded in physical memory.
If request volume is high, the Memory Working Set is likely to increase.
AFAIK, the initial request or cold start of an Azure Function has high memory consumption, roughly in the range of 60 MiB to 180 MiB, and the net Memory Working Set depends on how much physical memory the function app is using while handling requests and responses.
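If it helps to cross-check the portal numbers, the same metric can be pulled with the Azure CLI (a sketch; the resource ID is a placeholder for your function app):

```
az monitor metrics list \
    --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app> \
    --metric "MemoryWorkingSet" \
    --interval PT1M
```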
According to the official Azure Functions plan-migration documentation, direct migration to a Dedicated (App Service) plan is currently not supported, and this migration is not supported on Linux.
Also, you can check the cause and resolution under Azure Functions (Linux plan) > Diagnose and Solve Problems > Availability & Performance.
Related
I created an Azure Function App (.NET 6, isolated) on the Consumption plan, which is responsible for converting documents from one format to another, such as PDFs to PNGs. However, the processing time for certain documents can be longer due to factors such as the size of the document. I am aware that the Consumption plan has a memory limitation of 1.5 GB per function app. There are two function endpoints on the app, and I would like to set a hard limit on the memory usage per request to ensure that it does not exceed 512 MB. Is this possible?
I looked at the MemoryFailPoint class, but it does not guarantee that a block of code will execute within a specific amount of memory; it only ensures that a certain amount of memory is available before executing the code.
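For reference, this is roughly how MemoryFailPoint is used; note that it is a gate, not a cap (a minimal sketch; the class and method names are made up, and the 512 MB figure just mirrors the scenario above):

```csharp
using System;
using System.IO;
using System.Runtime;

public static class ConversionGate
{
    public static void ConvertDocument(Stream input)
    {
        try
        {
            // Ask the CLR whether roughly 512 MB is likely to be available
            // before starting. This throws InsufficientMemoryException up
            // front instead of risking an OutOfMemoryException mid-way.
            using (new MemoryFailPoint(512))
            {
                // ... run the PDF-to-PNG conversion here ...
            }
        }
        catch (InsufficientMemoryException)
        {
            // Not enough headroom: fail fast or requeue for later.
        }
    }
}
```

Nothing here stops the conversion from allocating more than 512 MB once it starts; the runtime offers no per-request hard memory limit.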
Setting the memory consumption size for Azure Functions was only available before 2016.
There have since been a few changes to the serverless design, especially in how Azure Functions utilizes dependent resources.
Microsoft disabled the memory setting on the Consumption plan based on feedback from many Azure users, and made the change that the Consumption hosting plan now decides resource utilization, including memory and CPU, based on your usage of the functions.
Refer to this MS article for more information on memory settings for function apps.
On an Azure Function App running on an App Service plan, we notice that memory increases significantly (from ~100 MB to 3 GB).
The function app is written in Python and is triggered whenever a new event is received from the Event Hub.
I've tried to profile memory based on Azure's official guide, and there are several odd things I've noticed:
on each new event invocation, the function's memory increases by several KB / MB
when variables hold data inside the Python function, the logs show the memory is not released (?)
over time these little increments add up to a high memory usage.
It would be helpful if you could suggest possible solutions or any further debugging methods.
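Not from the official guide, but one further debugging method is Python's built-in tracemalloc, which can show which allocations survive between invocations (a sketch assuming an Event Hub-triggered function named main):

```python
import logging
import tracemalloc

import azure.functions as func

tracemalloc.start()                      # start tracking at worker startup
_baseline = tracemalloc.take_snapshot()

def main(event: func.EventHubEvent):
    global _baseline
    # ... normal event processing here ...

    # Compare allocations against the previous invocation to see which
    # lines of code keep accumulating memory that is never released.
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.compare_to(_baseline, "lineno")[:5]:
        logging.info("leak candidate: %s", stat)
    _baseline = snapshot
```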
I'm going through the list of perf improvements that can be made against Cosmos DB. My APIs are hosted in a Function app in consumption mode. Is turning on gcServer recommended for Azure Functions?
There is more information on gcServer here.
For single-processor computers, the default workstation garbage collection should be the fastest option. Either workstation or server can be used for two-processor computers. Server garbage collection should be the fastest option for more than two processors. Most commonly, multiprocessor server systems disable server GC and use workstation GC instead when many instances of a server app run on the same machine.
How many processors run in an active instance in a consumption plan?
Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU, so there is only one processor. For more details, you can refer to this article.
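Since there is only one processor, the quoted guidance suggests workstation GC is already the right default. If you want to confirm what an instance actually reports, both values can be read from inside a function (a sketch assuming the in-process model; the function name and trigger are made up):

```csharp
using System;
using System.Runtime;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GcInfoFunction
{
    // Hypothetical diagnostic endpoint: reports how many processors the
    // instance exposes and whether server GC is actually in effect.
    [FunctionName("GcInfo")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return new OkObjectResult(new
        {
            ProcessorCount = Environment.ProcessorCount,
            IsServerGC = GCSettings.IsServerGC
        });
    }
}
```

If you still want to experiment, server GC in a .NET project is typically toggled with the ServerGarbageCollection property in the csproj, but on a single-core instance it is unlikely to beat workstation GC.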
I ran into a situation where out-of-memory exceptions were generated in our Azure App Service for a .NET Core Web API, even though memory & CPU utilization peaked around 50% on the App Service plan (P2V2: 7 GB RAM).
I have looked at this SO article to check private bytes and other things, but I still don't see where the memory exhaustion comes from. I see a max of 1.5 GB on the Memory Working Set, which is well below the 7 GB.
Nothing shows up under Support + Troubleshooting -> Resource Health or App Service Advisor.
I am not sure where to look next and any help would be appreciated.
Azure App Service caps memory usage at 1.5 GB by default. But you can change this behaviour with this application setting (to be added under Configuration):
WEBSITE_MEMORY_LIMIT_MB = 3072
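For example, via the Azure CLI (a sketch; resource group and app name are placeholders):

```
az webapp config appsettings set \
    --resource-group <resource-group> \
    --name <app-name> \
    --settings WEBSITE_MEMORY_LIMIT_MB=3072
```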
See also my answer here:
Is there way to determine why Azure App Service restarted?
The Metrics view in the portal can only go down to a 1-minute granularity (the default is 5 minutes).
This means that each metric point is an average value over a 60-second interval.
Memory may be spiking up and down within those 60 seconds, so you need a more real-time view.
Try the SCM console (Advanced Tools > Go), and check the Process Explorer to see the actual memory consumption.
We have a number of API apps and web apps on an Azure App Service P2v2 instance. We've been experiencing some platform instability: the App Service becomes unhealthy and we get a rash of 502 errors across various apps (different ones each time), attributable to very high CPU and memory usage on the App Service. We've tried scaling all the way up to P3v2, but whatever the issue is seems eventually to consume all available resources.
Whenever we've been able to trace a culprit among the apps, it has turned out not to be the app itself but the Kudu service related to it.
A sample error message is: "High physical memory usage detected on multiple occasions. The kudu process for the app [sitename]'pe-services-color' is the most common cause of high memory usage. The most common cause of high memory usage for the kudu process is web jobs." The actual app whose Kudu service is named changes quite frequently.
What could be causing the Kudu services to consume so much CPU/Memory, and what can we do to stabilise this app service?
Is it simply that we have too many apps running on one plan? This seems unlikely since all these apps ran previously on a single classic cloud service instance, but if so, what are the limits for apps and slots on a single plan?
(I have seen this question but the answer doesn't help)
Update
From Azure support, these are apparently the limits on Small - Medium - Large non-shared app services:
Worker size   Max sites
Small         5
Medium        10
Large         20
with 'sites' comprising app services/api apps and their slots.
They seem ridiculously low, and make the larger App Service units highly uneconomic. Can anyone confirm these numbers?
(Incidentally, we found that turning off Always On across the board fixed the issue - it was only causing a problem on empty sites though - we haven't had a chance yet to see if performance is good with all the sites filled.)
High CPU and memory utilization is mostly caused by your program/code itself. CPU-intensive tasks, and heavy use of parallel programming that spawns many new threads, can both contribute to high CPU and memory utilization, so review your code for such instances. As the number of parallel threads increases, CPU utilization climbs and the plan starts scaling up frequently, which adds to your cost and can sometimes cause thread starvation and unexpected results. As Azure resource costs are high, you need to plan your performance accordingly.
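If the code does fan out in parallel, one mitigation (a sketch, not something the platform does for you) is to bound the degree of parallelism instead of letting the thread count grow with the workload:

```csharp
using System;
using System.Threading.Tasks;

public static class BatchProcessor
{
    public static void Process(string[] items)
    {
        var options = new ParallelOptions
        {
            // Bound concurrency to the cores actually available rather
            // than letting it scale with the number of work items.
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.ForEach(items, options, item =>
        {
            // ... CPU-bound work per item ...
        });
    }
}
```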
You can monitor this using the Metrics option on the App Service plan blade.