Is the maximum scalability of an Azure Function capped at 10?

We're trying to test the scalability of Azure Functions (it's a bear). We came across this: https://azure.microsoft.com/en-in/documentation/articles/functions-reference/#parallel-execution
If a function app is using the Dynamic Service Plan, the function app could scale out automatically up to 10 concurrent instances. Each instance of the function app, whether the app runs on the Dynamic Service Plan or a regular App Service Plan, might process concurrent function invocations in parallel.
Does this mean that the maximum scalability of a single function is just 10? We've never been able to get more than 10 units running... (A previous question asked about the algorithm that determines when another consumption unit is added; this one is about the upper end of scalability.)
Thanks

UPDATE: There is no official maximum number of instances. We see customers who are able to scale out to hundreds. The number you achieve depends mostly on your workload, but partly on the region you're running in (some regions have more capacity than others). The 10-instance limit mentioned in previous versions of the docs has been removed.
You can find more information about our consumption plan and scaling here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#how-the-consumption-plan-works
Also note that each instance in Azure Functions can run multiple function executions in parallel. For example, if you have a function app with a single function that runs quickly, you could expect to see dozens or even hundreds of concurrent executions on a single instance. This is unlike other services such as AWS Lambda, which executes only a single function at a time per instance. New instances are added only when the system decides that the current number of instances is insufficient to handle the current load (more details on that in my answer to your other question).
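To make the per-instance parallelism concrete, here is a minimal sketch (the function name, route, and delay are illustrative, not from the thread): because the body awaits I/O rather than blocking a thread, the runtime can interleave many invocations of this one function on a single instance.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction // illustrative name
{
    // While one invocation is awaiting, the instance is free to start and
    // run other invocations, so many can be in flight concurrently.
    [FunctionName("Hello")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Invocation started");
        await Task.Delay(1000); // stand-in for awaited I/O, e.g. an outbound HTTP call
        return new OkObjectResult("done");
    }
}
```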

Related

What is the optimal architecture design on Azure for an infrequently used backend that needs a robust configuration?

I'm trying to find the optimal cloud architecture to host a software on Microsoft Azure.
The scenario is the following:
A (containerised) REST API is exposed to users, through which they can submit POST and GET requests. POST requests trigger a backend that needs a robust configuration to operate properly, and GET requests fetch the backend's result, if any. This component of the solution is currently hosted on an Azure App Service web app, which does the job perfectly.
The (containerised) backend (triggered by POST requests) performs heavy calculations within a short amount of time (typically 5-10 minutes are allotted for the calculation). This backend needs (at least) 4 cores and 16 GB of RAM, but the more the better.
The current configuration consists of the backend hosted together with the REST API on the App Service, with a plan that accommodates the backend's requirements. This is clearly not very cost-efficient, as the backend is idle ~90% of the time. On top of that, it's not really scalable despite an automatic scaling rule that spawns new instances based on CPU use: if several POST requests arrive at the same time, they may be handled by the same instance and crash it due to a lack of memory.
Azure Functions doesn't seem to be an option: the serverless (Consumption plan) offering they propose is restricted to 1.5 GB of RAM and doesn't have Docker support.
Azure Container Instances doesn't fit either: first, the maximum number of CPUs is 4 (which is really few for the needs here, although acceptable), and second, there are cold starts of approximately 2 minutes (I imagine due to the creation of the container group, the image pull, and so on). Although the process is async from a user perspective, high latency is not acceptable because the result is expected within 5-10 minutes, so cold starts are a problem.
Azure Batch, which at first glance appears to be a perfect fit (beefy configurations available, made for HPC, cost-effective, made for time-limited tasks, ...), seems to be slow too (it takes a couple of minutes to create a pool, and jobs don't run immediately when submitted).
Do you have any idea what I could use?
Thanks in advance!
Azure Functions
You could look at the Azure Functions Elastic Premium plan. EP3 has 4 cores, 14 GB of RAM and 250 GB of storage.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances.
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
Batch Considerations
When designing an application that uses Batch, you must consider the possibility of Batch not being available in a region. It's possible to encounter a rare situation where there is a problem with the region as a whole, the entire Batch service in the region, or your specific Batch account.
If the application or solution using Batch always needs to be available, then it should be designed to either failover to another region or always have the workload split between two or more regions. Both approaches require at least two Batch accounts, with each account located in a different region.
https://learn.microsoft.com/en-us/azure/batch/high-availability-disaster-recovery
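If it helps to see the failover shape in code, a schematic sketch follows; the class, method, and the two delegates are hypothetical placeholders for whatever Batch SDK calls you make against each account, not a Batch API.

```csharp
using System;
using System.Threading.Tasks;

public static class BatchFailover // hypothetical helper, not part of any SDK
{
    // The delegates stand in for job submission against Batch accounts
    // located in two different regions.
    public static async Task SubmitWithFailoverAsync(
        Func<Task> submitToPrimary,
        Func<Task> submitToSecondary)
    {
        try
        {
            await submitToPrimary();
        }
        catch (Exception ex) // in real code, catch only transient/availability errors
        {
            Console.WriteLine($"Primary region unavailable ({ex.Message}); failing over.");
            await submitToSecondary();
        }
    }
}
```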

Azure Functions: Understanding Change Feed in the context of multiple apps

According to the below diagram on https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed-processor, at least 4 partition key ranges are distributed between two hosts. What I'm struggling to understand in this diagram is the distinction between a host and a consumer. In the context of Azure Functions, would it be true to say that a host is a Function app whereas a consumer is an active/warm instance?
I'd like to create a setup with N many Function apps each with 0-200 active instances (depending on workload). At the same time, I'd like to read Change Feed. If I use a CosmosDBTrigger with the same connection string and lease container in each app, is this taken care of automatically or do I need a manual implementation?
The documentation you linked is mainly for the Change Feed Processor, but the Azure Functions binding actually runs the Change Feed Processor underneath.
When just using the CFP, it's perhaps easier to understand because you are mainly in control of the instances and the distribution, but I'll try to map it to Functions.
The document mentions a deployment unit concept:
A single change feed processor deployment unit consists of one or more instances with the same processorName and lease container configuration. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
The deployment unit in Functions is the Function App. One Function App can span many instances, so each instance/host within that Function App deployment will act as an available host/consumer.
Further down, the article talks about dynamic scaling; basically it says that, within a deployment unit (Function App), the leases get evenly distributed. So if you have 20 leases and 10 Function App instances, each instance will own 2 leases and process them independently of the other instances.
One important note in that article: scaling out gives you a larger CPU pool, but not necessarily higher parallelism.
As the documentation mentions, even on a single instance, the CFP processes and reads each lease it owns on an independent Task. The problem is that all of this parallel processing shares the same CPU, so adding more instances helps if you currently see a CPU bottleneck on the instance.
Now, in your example, you want N Function Apps, each one, I assume, doing something different: basically microservice deployments that would all trigger on any change but perform a different task or fire a different business flow.
This other article covers that. Basically, you can either have each Function App use a separate lease collection (keeping the monitored collection the same), or you can share the lease collection but use a different LeaseCollectionPrefix for each Function App deployment (a sketch follows below). If the number of Function Apps sharing the lease collection is high, please check the RU usage on the lease collection, as you might need to increase it (there is a note about this in the article).
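A sketch of what that looks like with the Functions trigger, assuming illustrative names (mydb, orders, leases, AppA); each Function App points at the same monitored and lease containers but sets its own LeaseCollectionPrefix:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrdersChangeHandler
{
    // Deployed once per Function App (deployment unit); only the
    // LeaseCollectionPrefix differs between apps, so every app sees
    // every change independently.
    [FunctionName("OrdersChangeHandler")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "mydb",
            collectionName: "orders",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            LeaseCollectionPrefix = "AppA", // e.g. "AppB" in the second Function App
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"Processing {changes.Count} changed document(s)");
    }
}
```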

How do you queue multiple Azure Functions HttpTriggers to prevent memory errors?

I have a Python Azure Function app (Linux Consumption plan) that is being set up to run multiple HttpTriggers at various times throughout the day. It's possible for more than one of these triggers to execute at or around the same time. To avoid exceeding the 1.5 GB memory limit, I'd like to make sure only one function invocation is allowed to run at a time. Is there any way to achieve this?
Edit: After doing a little research, would this setting allow me to avoid concurrent executions of my HttpTriggers: https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#website_max_dynamic_application_scale_out?
If I set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1, would that mean only one invocation could run at a time and the other HttpTriggers would wait?
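For reference, the setting mentioned in the edit is a plain key/value application setting on the Function App (the JSON rendering below is just illustrative). Note that, per the docs quoted in the answer below, it caps the number of instances, not the number of concurrent executions within an instance.

```json
{
  "WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT": "1"
}
```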
Check out the full description of WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT here. It says that:
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT sets a maximum number of instances that a function app can scale to.
This limit is not yet fully supported - it does work to limit your scale out, but there are some cases where it might not be completely foolproof. We're working on improving this.
I believe this is not what you are looking for.
Logically, this can be made possible if you choose the trigger times in such a way that they never collide within a day (24 hours). Please note that this depends on the business requirements of the function app's HttpTriggers - what their required frequency is.
Another solution can be to have separate Consumption plans for different HttpTriggers. In fact, in this case you will get a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month per subscription in pay-as-you-go pricing, across all function apps in that subscription.
As mentioned above, there can be different solutions for this; you need to choose whichever suits you best.

Limit number of servers on Azure functions (consumption plan)

Is it possible to put a cap on the number of servers that Azure Functions scale out to? I have a Consumption plan, and basically I would like to set a cap on the amount of resources that Azure Functions can use.
The only solutions I found are:
Set a cap on the daily GB-s (gigabyte-seconds) quota, after which the functions are stopped until the following day. This is definitely something I do not want, because I need to use some functions for online tasks.
In the host.json, changing the http.maxConcurrentRequests and http.maxOutstandingRequests parameters, which affect the number of concurrently running functions (see the sketch of these settings just below). Is this the thing I should look into? Isn't this setting applied per server? My fear is that this won't end up capping resources, but instead will let Azure create more and more servers in order to keep up with the request load.
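For reference, a sketch of those host.json settings in the version 2.x schema (the values are illustrative); as suspected above, these throttles apply per instance, not across the whole app.

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100
    }
  }
}
```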
You can use the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting: The maximum number of instances that the function app can scale out to. Default is no limit.
Note: This setting is a preview feature - and only reliable if set to a value <= 5
Ref: https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#websitemaxdynamicapplicationscaleout
One thing to note is that timer-triggered functions are automatically singletons. In my case that was sufficient, as I can wake up such a function every minute and process a specific amount of data. Even if the function takes longer than expected, there's no risk that a second one will run concurrently (see the sketch after the link below).
More info: https://stackoverflow.com/a/53919048/4619705
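A minimal sketch of that pattern (the names and schedule are illustrative): a timer-triggered function that fires every minute and relies on the built-in singleton behavior, so runs never overlap even if one takes longer than a minute.

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DrainQueueFunction // illustrative name
{
    // Timer triggers are singletons by default: the runtime will not start
    // a second concurrent run while this one is still executing.
    [FunctionName("DrainQueue")]
    public static void Run(
        [TimerTrigger("0 * * * * *")] TimerInfo timer, // every minute
        ILogger log)
    {
        log.LogInformation($"Tick at {DateTime.UtcNow:O}; past due: {timer.IsPastDue}");
        // process a bounded batch of work here
    }
}
```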

Azure Functions - Limiting parallel execution

Is it possible to limit the maximum number of Functions that run in parallel?
I read the documentation and came across this:
When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel.
If a function app is using the Consumption hosting plan, the function app could scale out automatically. Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular App Service hosting plan, might process concurrent function invocations in parallel using multiple threads.
The maximum number of concurrent function invocations in each function app instance varies based on the type of trigger being used as well as the resources used by other functions within the function app.
https://learn.microsoft.com/en-gb/azure/azure-functions/functions-reference#parallel-execution
I am using a Function on an App Service plan with an Event Hub input binding and only have a single Function within my Function App. If I can't limit it, does anyone know what the maximum number of concurrent function invocations will be for this kind of setup?
There isn't a way to specify a maximum concurrency for Event Hubs-triggered functions, but you can control the batch size and fetching options as described here.
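For reference, a sketch of those batch and fetching options in a version 2.x host.json (the property names vary between Event Hubs extension versions, and the values here are illustrative):

```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "batchCheckpointFrequency": 1,
      "eventProcessorOptions": {
        "maxBatchSize": 64,
        "prefetchCount": 256
      }
    }
  }
}
```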
The maximum number of concurrent invocations may also vary depending on your workload and resource utilization.
If concurrency limits are needed, this is (currently) something you'd need to handle yourself, and the following posts discuss some patterns you may find useful:
Throttling Azure Storage Queue processing in Azure Function App
Limiting the number of concurrent jobs on Azure Functions queue
Just for reference, I came across this in my search for throttling. You can use the [Singleton] attribute on your function to ensure one-at-a-time execution. Maybe not really what you were looking for, and a very rigorous way of throttling, but still, it is an option.
https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#singleton-attribute
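A minimal sketch of that approach (the function and queue names are illustrative): the attribute acquires a distributed lock (backed by blob leases) before each invocation, so executions are serialized across all instances.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ThrottledFunction // illustrative name
{
    // [Singleton] serializes invocations: at most one execution of this
    // function runs at a time, app-wide.
    [Singleton]
    [FunctionName("ThrottledWorker")]
    public static void Run(
        [QueueTrigger("work-items")] string message,
        ILogger log)
    {
        log.LogInformation($"Processing: {message}");
    }
}
```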
Microsoft has added a new setting which can be used to limit the concurrency of function execution. The setting is WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, and it can be used to limit how many function instances will execute in parallel. However, according to Microsoft, it isn't fully implemented yet.
https://github.com/Azure/azure-functions-host/wiki/Configuration-Settings
For those who are still interested:
https://learn.microsoft.com/en-us/azure/azure-functions/event-driven-scaling#limit-scale-out
There's a way to limit the number of parallel executions by setting the functionAppScaleLimit parameter.
