I created an Azure Function App (.NET 6 isolated) on the Consumption plan, which is responsible for converting documents from one format to another, such as converting PDFs to PNGs. However, the processing time for certain documents may be longer due to factors such as the size of the document. I am aware that the Consumption plan has a memory limitation of 1.5 GB per function app. There are two function endpoints on the app, and I would like to set a hard limit on the memory usage per request to ensure that it does not exceed 512 MB. Is this possible?
The MemoryFailPoint class does not guarantee that a block of code will execute within a specific amount of memory; it only checks that the requested amount of memory is available before the code runs.
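As a minimal sketch of that behaviour (my own illustration, with a hypothetical convert delegate standing in for the PDF-to-PNG conversion): the gate below asks for 512 MB to be available before the work starts, but nothing caps what the code inside actually allocates.

```csharp
using System;
using System.Runtime;

public static class ConversionGate
{
    // Minimal sketch: require 512 MB to be *available* before starting a conversion.
    // MemoryFailPoint only checks availability up front and throws
    // InsufficientMemoryException if the check fails; it does not cap what the
    // code inside the using block actually allocates.
    public static void RunWithMemoryCheck(Action convert)
    {
        try
        {
            using var gate = new MemoryFailPoint(512);
            convert(); // e.g. the PDF-to-PNG conversion (hypothetical delegate)
        }
        catch (InsufficientMemoryException)
        {
            // Fail fast (or defer/retry) instead of risking an OutOfMemoryException
            // in the middle of the conversion.
            throw;
        }
    }
}
```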
The ability to configure a memory size for a function app was only available in Azure Functions before 2016.
Since then there have been a few changes in the serverless design, especially in how Azure Functions utilizes dependent resources.
Based on feedback from many Azure users, Microsoft disabled the memory setting in the Consumption plan; the Consumption hosting plan now decides resource utilization, including memory and CPU, based on how your functions are used.
Refer to this MS article for more information on memory settings for function apps.
Related
On an Azure Functions app running on an App Service plan, we notice that memory is increasing significantly (from ~100 MB to 3 GB).
The function app is written in Python and is triggered whenever a new event is received in the Event Hub.
I've tried to profile memory based on Azure's official guide, and there are several odd things I've noticed:
on each new event invocation, the function's memory increases by several KB/MB
for example, when variables hold data inside the Python function, the logs show the memory is not released (?)
over time these small increments add up to high memory usage.
It would be helpful if you could suggest possible solutions or any further debugging methods.
I just switched from a Windows plan to Linux on an Azure Function App and memory usage went up five times.
I didn't change the way the package is built. It is just dotnet publish -c Release --no-build --no-restore. I wonder if I could do something here - build for a specific runtime?
Is there a way to decrease that consumption? I'm asking because my plan was to switch all functions to Linux plans as they are cheaper, but not necessarily if it ends up requiring higher plans.
A few details:
dotnet 3.1
function runtime version ~3
functions run in-process
The function is rarely used, so there is no correlation between higher memory usage and bigger traffic.
Please check if my findings are helpful:
Memory Working Set is the current amount of memory used by the function app, in MB; it tracks how much of the application is currently loaded in physical memory.
If requests are high, the memory working set is likely to increase.
AFAIK, the initial request (cold start) of an Azure Function consumes a high amount of memory, roughly in the range of 60 MiB - 180 MiB, and the net memory working set depends on how much physical memory the function application uses while handling requests and responses.
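If it helps to watch that trend over time, here is a minimal sketch (my own addition, not part of the original findings) that reads the function app's MemoryWorkingSet metric with the Azure.Monitor.Query SDK; the resource ID is a placeholder, and the metric values are reported in bytes.

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholder resource ID: substitute your subscription, resource group and app name.
var resourceId =
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>";

var client = new MetricsQueryClient(new DefaultAzureCredential());

// Query the MemoryWorkingSet metric over the last hour in 5-minute buckets.
MetricsQueryResult result = (await client.QueryResourceAsync(
    resourceId,
    new[] { "MemoryWorkingSet" },
    new MetricsQueryOptions
    {
        TimeRange = new QueryTimeRange(TimeSpan.FromHours(1)),
        Granularity = TimeSpan.FromMinutes(5)
    })).Value;

foreach (MetricResult metric in result.Metrics)
{
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
    {
        foreach (MetricValue point in series.Values)
        {
            // Values come back in bytes; convert to MiB for readability.
            Console.WriteLine($"{point.TimeStamp:t}  avg={point.Average / (1024 * 1024):F1} MiB");
        }
    }
}
```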
According to the Azure Functions plan migration official documentation, direct migration to a Dedicated (App Service) plan is not currently supported, and this migration is not supported on Linux.
Also, you can check the cause and resolution on Azure Functions (Linux Plan) > Diagnose and Solve Problems > Availability & Performance >
I'm trying to find the optimal cloud architecture to host a piece of software on Microsoft Azure.
The scenario is the following:
A (containerised) REST API is exposed to users, through which they can submit POST and GET requests. POST requests trigger a backend that needs a robust configuration to operate properly, and GET requests fetch the result of the backend, if any. This component of the solution is currently hosted on an Azure App Service web app, which does the job perfectly.
The (containerised) backend (triggered by POST requests) performs heavy calculations during a short amount of time (typically 5-10 minutes are allotted for the calculation). This backend needs (at least) 4 cores and 16 GB RAM, but the more the better.
The current configuration consists of the backend hosted together with the REST API on the App Service, with a plan that accommodates the backend's requirements. This is clearly not very cost-efficient, as the backend is idle ~90% of the time. On top of that, it's not really scalable despite an automatic scaling rule that spawns new instances based on CPU use: it's entirely possible that several POST requests arriving at the same time are handled by the same instance and crash it due to a lack of memory.
Azure Functions doesn't seem to be an option: the serverless (Consumption plan) solution it proposes is restricted to 1.5 GB RAM and doesn't have Docker support.
Azure Container Instances doesn't work either, because first the maximum number of CPUs is 4 (which is really few for the needs here, although acceptable) and second there are cold starts of approximately 2 minutes (I imagine due to the creation of the container group, pulling of the image, and so on). Even though the process is asynchronous from a user perspective, high latency is not allowed as the result is expected within 5-10 minutes, so cold starts are a problem.
Azure Batch, which at first glance appears to be a perfect fit (beefy configurations available, made for HPC, cost-effective, made for time-limited tasks, ...), seems to be slow too (it takes a couple of minutes to create a pool, and jobs don't run immediately when submitted).
Do you have any idea what I could use?
Thanks in advance!
Azure Functions
You could look at the Azure Functions Elastic Premium plan. EP3 has 4 cores, 14 GB of RAM and 250 GB of storage.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
Batch Considerations
When designing an application that uses Batch, you must consider the possibility of Batch not being available in a region. It's possible to encounter a rare situation where there is a problem with the region as a whole, the entire Batch service in the region, or your specific Batch account.
If the application or solution using Batch always needs to be available, then it should be designed to either failover to another region or always have the workload split between two or more regions. Both approaches require at least two Batch accounts, with each account located in a different region.
https://learn.microsoft.com/en-us/azure/batch/high-availability-disaster-recovery
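To make the two-account pattern above concrete, here is a minimal sketch (my own illustration, not from the linked article) using the Microsoft.Azure.Batch SDK; the account URLs, names, and keys are placeholders, and listing jobs is just one simple way to probe whether a region is reachable.

```csharp
using System;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

public static class BatchFailover
{
    // Sketch of the two-region pattern: try the primary Batch account first and
    // fall back to a secondary account in another region if it is unreachable.
    public static BatchClient OpenWithFailover()
    {
        var primary = new BatchSharedKeyCredentials(
            "https://primarybatch.westeurope.batch.azure.com", "primarybatch", "<key>");
        var secondary = new BatchSharedKeyCredentials(
            "https://secondarybatch.northeurope.batch.azure.com", "secondarybatch", "<key>");

        foreach (var credentials in new[] { primary, secondary })
        {
            BatchClient client = null;
            try
            {
                client = BatchClient.Open(credentials);
                // Cheap liveness probe: enumerating jobs forces a round trip to the service.
                foreach (var _ in client.JobOperations.ListJobs()) { break; }
                return client;
            }
            catch (Exception)
            {
                // Region or account unavailable; dispose and try the next account.
                client?.Dispose();
            }
        }

        throw new InvalidOperationException("No Batch account is currently reachable.");
    }
}
```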
I can't get over 14 msg/second with the Azure Service Bus Standard plan. I'm running some benchmark tests with the Azure-Sample tool that I found in this question:
The test is done with a ServiceBus resource with a single Queue and all default configurations:
If I read this correctly, you've got a maximum concurrency of one (MaxInflightReceives) with 5 receivers (ReceiverCount). Increasing concurrency and enabling prefetch on the clients will increase the overall throughput (see the sketch after the points below). But,
Testing should be done within the same Azure data centre. If you're testing from a local machine, you're introducing a substantial latency that cannot be avoided.
The receive mode used is PeekLock. It is slower than ReceiveAndDelete. Not suggesting to switch, but this needs to be taken into consideration as you're trading throughput for safety by using PeekLock.
The standard tier has a cap on the number of operations per second. In addition to that, your namespace is deployed in a shared environment with entities scattered in various deployment containers. Performance will vary and cannot be guaranteed. If you want to have a guaranteed throughput, use Premium SKU.
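As a rough illustration only (the benchmark tool's MaxInflightReceives and ReceiverCount map loosely onto these settings), here is a minimal sketch with the Azure.Messaging.ServiceBus client showing where concurrency and prefetch are raised; the connection string, queue name, and numbers are placeholders to tune, not recommendations.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Placeholder connection string and queue name.
await using var client = new ServiceBusClient("<connection-string>");

var processor = client.CreateProcessor("<queue-name>", new ServiceBusProcessorOptions
{
    MaxConcurrentCalls = 16,                      // instead of a concurrency of 1
    PrefetchCount = 100,                          // fetch messages ahead of processing
    ReceiveMode = ServiceBusReceiveMode.PeekLock  // safer but slower than ReceiveAndDelete
});

processor.ProcessMessageAsync += async args =>
{
    // ... handle the message ...
    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
await Task.Delay(TimeSpan.FromSeconds(30));   // let the benchmark run for a while
await processor.StopProcessingAsync();
```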
We're trying to test the scalability of Azure Functions (it's a bear). We came across this https://azure.microsoft.com/en-in/documentation/articles/functions-reference/#parallel-execution
If a function app is using the Dynamic Service Plan, the function app could scale out automatically up to 10 concurrent instances. Each instance of the function app, whether the app runs on the Dynamic Service Plan or a regular App Service Plan
Does this mean that the maximum scalability of a single function is just 10? We've never been able to get more than 10 units running... (The previous question was about the algorithm that decides when to add another consumption unit; this one is about the upper end of scalability.)
Thanks
UPDATE: There is no official maximum number of instances. We see customers who are able to scale out to hundreds. The number you achieve depends mostly on your workload, but partly on the region you're running in (some regions have more capacity than others). The 10 instance limit mentioned in previous versions of the docs has been removed.
You can find more information about our consumption plan and scaling here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#how-the-consumption-plan-works
Also note that each instance in Azure Functions can run multiple function executions in parallel. For example, if you have a function app which has a single function that runs quickly, you could expect to see dozens or even hundreds of concurrent executions on a single instance. This is unlike other services such as AWS Lambda which only execute a single function at a time per instance. New instances are added only when the system decides that the current number of instances is insufficient to handle the current load (more details on that in my answer to your other question).
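To see that behaviour for yourself, here is a minimal, hypothetical probe (isolated worker model, my own illustration) that reports how many executions the current instance is running in parallel together with its instance id; hit it with many concurrent requests and the in-flight count will typically climb well past 1 before a second instance appears.

```csharp
using System;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class ConcurrencyProbe
{
    private static int _inFlight;

    // Hypothetical probe: shows how many executions this single instance is
    // currently running, plus the instance id, so scale-out can be observed.
    [Function("ConcurrencyProbe")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        int current = Interlocked.Increment(ref _inFlight);
        try
        {
            await Task.Delay(TimeSpan.FromSeconds(2)); // simulate some work

            var response = req.CreateResponse(HttpStatusCode.OK);
            await response.WriteStringAsync(
                $"instance={Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID")} inFlight={current}");
            return response;
        }
        finally
        {
            Interlocked.Decrement(ref _inFlight);
        }
    }
}
```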