We are currently using the old Cloud Services (classic) offering, which scales out to 100 instances.
We are interested in migrating to Azure Functions. Based on this doc, the best option in terms of timeout and maximum instances is the Premium plan, with a 30-minute timeout and a maximum of 100 instances:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#scale
But this other doc mentions a maximum of only 20 instances, so I am confused:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal#always-ready-instances
Does anyone know from experience what the maximum allowed scale-out instance count is for the Premium plan? Thanks.
For the Premium plan, the pre-warmed (always ready) instances can go up to 20, but the total (or maximum burst) can go up to 100.
When events begin to trigger the app, they are first routed to the always ready instances. As the function becomes active, additional instances will be warmed.
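If it helps to see where those two numbers live, here is a trimmed sketch of an Elastic Premium plan and function app in an ARM template. The property names (maximumElasticWorkerCount on the plan, minimumElasticInstanceCount and preWarmedInstanceCount in the site config) are from memory, so verify them against the current Microsoft.Web template reference before relying on this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      // The Elastic Premium plan; maximumElasticWorkerCount is the "maximum burst" (up to 100).
      "type": "Microsoft.Web/serverfarms",
      "apiVersion": "2022-03-01",
      "name": "my-premium-plan",
      "location": "[resourceGroup().location]",
      "sku": { "name": "EP1", "tier": "ElasticPremium" },
      "properties": { "maximumElasticWorkerCount": 100 }
    },
    {
      // The function app; the always ready and pre-warmed counts sit in siteConfig (always ready is capped at 20).
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "my-function-app",
      "kind": "functionapp",
      "location": "[resourceGroup().location]",
      "dependsOn": [ "[resourceId('Microsoft.Web/serverfarms', 'my-premium-plan')]" ],
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'my-premium-plan')]",
        "siteConfig": {
          "minimumElasticInstanceCount": 2,
          "preWarmedInstanceCount": 1
        }
      }
    }
  ]
}
```

(ARM templates accept // comments even though plain JSON does not.)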
Related
We have a solution where we use an Azure Storage queue to process messages that take approximately 6 minutes each.
I've read that the maximum batchSize of queue messages processed concurrently is 32 per VM.
If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue?tabs=in-process%2Cextensionv5%2Cextensionv3&pivots=programming-language-csharp#host-json
How does that translate to Azure Functions Premium plan?
Let's say we want to be able to process 64 messages at once using the Azure Functions Premium plan with always ready instances. If we have 2 ready instances, can they process 2 * 32 concurrent messages? Or do they, under the hood, really need to be on separate VMs, so that 2 instances won't make any difference?
In the Premium plan, you can have your app always ready on a specified number of instances. The maximum number of always ready instances is 20. When events begin to trigger the app, they are first routed to the always ready instances.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal#always-ready-instances
Yes. In the Azure Functions Premium plan, each pre-warmed instance is given a dedicated VM. So, if you have 2 VM instances running your function app, they can process 2 * (batchSize + newBatchThreshold) concurrent queue messages.
The Azure platform scales the function app onto new VMs as the existing instances get busier.
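To make the arithmetic concrete: both knobs live in host.json, and each instance processes at most batchSize + newBatchThreshold messages in parallel. A host.json sketch along these lines, with batchSize at its maximum of 32 and newBatchThreshold pinned to 0, caps each instance at 32 concurrent messages, so two instances give you the 64 you're after; left at its default (half of batchSize, possibly scaled by core count depending on extension version), the per-instance ceiling would be higher:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 32,
      "newBatchThreshold": 0
    }
  }
}
```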
I am using Azure Functions on an App Service plan. My understanding is that for every new execution, Azure Functions will create a new App Service, execute the function, and then shut the App Service down, and that nothing is shared between the multiple App Services spawned by multiple requests.
However, when I test my Function (which does video processing), a single request takes around 2-3 minutes, but with multiple simultaneous requests the time increases to 10-15 minutes. My questions are: is my understanding above correct? If not, what resource is shared amongst these App Services? And how should I decide between my scaling options (manual vs. auto)?
"My understanding is for every new execution the Azure Function will create a new App Service" Nope it will not run new instance each time. Generally if there is no load on AF it will stop all instances.
Then if first request/event comes in it will start first instance. This is why we have ColdStart in Serverless. After that scale controller will measure your instance performance memory and CPU consumption and decide if it needs to scale but it wont be instant. So if lets say you sent N amount of requests to do smth with video they could go to same first instance and increase load. Then AF will scale, because of CPU spike but it wont help with old requests since they are handled at first instance. Keep in mind For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds which means that your AF should have CPU spike for at least 30 second to add new instance https://learn.microsoft.com/en-us/azure/azure-functions/event-driven-scaling
I am not sure Azure Functions are a good option for video processing. Functions should be used for quick work, usually, I would say, no more than 30 seconds, and there are limits on execution time that depend on how you run them: https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
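For reference, the execution time limit is the functionTimeout setting in host.json, and each plan caps what you can set there (roughly 10 minutes at most on Consumption, while Premium and Dedicated allow much longer runs). A minimal sketch, assuming the default host.json layout:

```json
{
  "version": "2.0",
  "functionTimeout": "00:30:00"
}
```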
I'm not sure what type of video processing you're doing, but I would have a look at Azure Media Services.
The other option, as you mentioned, is Azure Batch with low-priority VMs (https://azure.microsoft.com/en-au/blog/announcing-public-preview-of-azure-batch-low-priority-vms/); it actually fits your use case well: media processing and transcoding, rendering, and so on.
A small addition to Vova's answer: if you're running your Function in an App Service (also known as a Dedicated plan), it will by default only scale within the limits of the App Service plan you defined. This means that, until you scale that plan out, all executions of your Function App run on the same virtual machine, which is most probably why you're seeing request times increase as requests pile up.
If you want your Functions to scale beyond the capabilities of that plan, you will need to manually scale or enable autoscaling for the App Service plan.
An App Service plan defines a set of compute resources for an app to run. These compute resources are analogous to the server farm in conventional hosting.
and
Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable autoscale, though autoscale will be slower than the elastic scale of the Premium plan. [...] You can also scale up by choosing a different App Service plan.
If you run your Function App on the Consumption plan (the true serverless hosting option, since it enables scaling to zero):
The Consumption plan scales automatically, even during periods of high load.
In case you need longer execution times than the Consumption plan allows, but the App Service plan doesn't seem to be the best hosting environment for your Functions, there's also the Premium plan.
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
More info on all the different Azure Functions hosting options.
I am following the best practices document and trying to implement autoscaling, and I would like to understand the pricing implications.
Robust-Apps-for-the-cloud
I would like to use custom autoscale to run multiple instances. I have configured the rules as shown here:
With this, I would like more information on how it will affect the pricing for my App Service plan.
Note: My App Service Plan is S2.
App Service plans are priced based on the size and number of instances you run, and they are billed on a per-second basis. In your case, on the S2 plan, a single instance costs $0.20/hour.
I see from your autoscale configuration that the minimum and default number of instances you will be running on this plan is two. With that, if the autoscale triggers are not hit, your App Service plan would cost $0.40/hour.
With the configuration you shared, this could run up to $0.80/hour if the maximum of four instances is running after the autoscale triggers are met.
As App Service plans are billed per second, the cost is prorated per second for the number of instances you run.
For example:
If you were running two instances for 40 minutes, three instances for 10 minutes, and four instances for the last 10 minutes of an hour, the total cost of the App Service plan for that hour would be roughly $0.50.
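Working that out with the $0.20/hour S2 instance rate: 2 × (40/60) × $0.20 ≈ $0.27, plus 3 × (10/60) × $0.20 = $0.10, plus 4 × (10/60) × $0.20 ≈ $0.13, which adds up to roughly $0.50 for the hour.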
If you were to scale your App Service plan tier up or down, you can see how that would affect pricing using this tool:
App Service Pricing
I am using an Azure SignalR Service instance. SignalR Service currently supports only 1,000 concurrent connections per unit of a service instance. If the number of concurrent SignalR connections exceeds 1,000, the number of units has to be increased manually, and reduced manually again as the users decrease.
I am looking for a suitable solution to auto-scale (scale up and scale down) the SignalR Service units based on demand.
If you have any ideas, please share. Thanks.
Azure SignalR service doesn't support any auto-scaling capabilities out of the box.
If you want to automatically increase or decrease the number of units based on the current number of concurrent connections, you will have to implement your own solution. You may for example try to do this using a Logic App as suggested here.
The common approach is otherwise to increase the number of units manually using the portal, the REST API or the Azure CLI.
They solved the disconnection issue when scaling, according to https://github.com/Azure/azure-signalr/issues/1096#issuecomment-878387639
As for the auto-scaling feature, they are working on it; in the meantime, here are two ways of doing it:
Using a PowerShell function: https://gist.github.com/mattbrailsford/84d23e03cd18c7b657e1ce755a36483d
Using a Logic App: https://staffordwilliams.com/blog/2019/07/13/auto-scaling-signalr-service-with-logic-apps/
Azure SignalR Service supports autoscale as of 2022 if you select the Premium pricing tier.
Go to Scale up on the SignalR Service and select the Premium pricing tier.
Go to Scale out and create a custom autoscale.
The example says you can scale out when the metric "Connection Quota Utilization" is over 70% (which would be about 700 of the 1,000 connections of your first unit). You can also scale in with a similar rule; the example scales in when the connection quota utilization is under 20%.
The 20% from the example seems a bit restrictive, but I guess it's there to avoid unneeded scaling. Client connections may be closed and reconnected while scaling in, so doing it very frequently is probably a bad idea.
https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-howto-scale-autoscale
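If you prefer to define the same rules outside the portal, they end up as rules on a Microsoft.Insights/autoscaleSettings resource targeting the SignalR instance. A rough sketch of the two rules from the example, using my recollection of the metric name (ConnectionQuotaUtilization) and the autoscale rule schema, so treat it as a starting point rather than a verified template; <signalr-resource-id> is a placeholder:

```json
[
  {
    "metricTrigger": {
      "metricName": "ConnectionQuotaUtilization",
      "metricResourceUri": "<signalr-resource-id>",
      "timeGrain": "PT1M",
      "statistic": "Average",
      "timeWindow": "PT10M",
      "timeAggregation": "Average",
      "operator": "GreaterThan",
      "threshold": 70
    },
    "scaleAction": {
      "direction": "Increase",
      "type": "ChangeCount",
      "value": "1",
      "cooldown": "PT10M"
    }
  },
  {
    "metricTrigger": {
      "metricName": "ConnectionQuotaUtilization",
      "metricResourceUri": "<signalr-resource-id>",
      "timeGrain": "PT1M",
      "statistic": "Average",
      "timeWindow": "PT10M",
      "timeAggregation": "Average",
      "operator": "LessThan",
      "threshold": 20
    },
    "scaleAction": {
      "direction": "Decrease",
      "type": "ChangeCount",
      "value": "1",
      "cooldown": "PT10M"
    }
  }
]
```

These go in the rules array of a profile on the autoscale setting; the thresholds (70% to scale out, 20% to scale in) mirror the example above.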
Can anyone please explain the minimum and maximum price implications of the settings below (for reference) when creating a Function App on the Elastic Premium plan (EP1)?
Q. How do the Plan Scale Out settings (Minimum Instances, Maximum Burst), the App Scale Out setting, and the Pre-Warmed Instances setting of a Function App affect costs?
To answer your question, I've summarized the pricing rules below for your reference:
The Azure Functions Premium plan provides the same features and scaling mechanism as the Consumption plan; the difference is that we can set pre-warmed instances.
The Premium plan can avoid cold starts by setting pre-warmed instances. For example, we can set 1 pre-warmed instance (it cannot be larger than the minimum instance count you set); then, if the function app hasn't been requested for a long time, we only pay for that warm instance. The price is shown below.
The EP1 plan has 3.5 GB of memory per instance, so we can calculate the price from that.
While the function app is handling requests, it is billed on a per-second basis according to the vCPU-seconds and GB-seconds it consumes (that is, based on the number of instances your function app is using each second).
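As a rough estimate, assuming EP1 means 1 vCPU and 3.5 GB per instance (the one/two/four core sizes match the Premium plan description above) and using the per-region vCPU-second and GB-second meter rates from the Azure Functions pricing page (not quoted here, since they vary by region):

hourly cost of one running EP1 instance ≈ 1 vCPU × 3600 s × (vCPU-second rate) + 3.5 GB × 3600 s × (GB-second rate)

Multiply that by the number of instances allocated during the hour to get the plan cost.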
Hope this helps with your question.
I got the answer here:
"You are charged for each instance allocated in the minimum instance count regardless if functions are executing or not."