Azure Function Consumption Plan limitation

I have an Azure Functions app for an IoT scenario with a predictable event load. Currently, 15 functions run under the same function app (a single DLL).
Now we are planning to create a separate function app for each function (15 DLL projects).
Why 15 function apps?
One function handles millions of events per day; we will put it in a dedicated App Service plan.
The remaining 14 functions have very limited load, so we plan to move them to the Consumption plan, where 1 million executions are included free each month.
Every function can scale independently.
Concern
I need to create 15 projects in my solution (and more will be added under this design).
Too many resources will show up in the portal: 15 function apps + 15 App Service plans + 1 storage account (shared by all functions). Multiplied by the number of environments (DEV + INT + QA + Perf + Stag + Prod), that's 186 resources in total.
This design doesn't look good to me, but it has some advantages. Working in Agile mode :P
Is there any limitation or issue with the number of resources, or any other problem with this design?

Based on this post by Fabio, you could just have one App Service plan for all your function apps on the Consumption plan. Also, if the combined load of all your functions (on the Consumption plan) would be less than 1 million executions, you could probably have them all in one app too, but do consider the limits that the Consumption plan imposes on functions.
As for the number of resources, I wouldn't think that should pose any problem directly, except for the resource group limits.
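A rough sanity check on the free grant mentioned above is to tally the expected monthly executions of the 14 low-load functions against the 1 million included executions. The per-function event rates below are made-up numbers purely for illustration:

```python
# Rough check: do the 14 low-load functions fit within the
# Consumption plan's 1 million free executions per month?
FREE_GRANT = 1_000_000  # executions included free each month

# Hypothetical events/day for each of the 14 low-load functions
events_per_day = [500, 1200, 300, 800, 2000, 150, 400,
                  900, 250, 600, 1100, 350, 700, 450]

monthly_executions = sum(events_per_day) * 31  # worst-case month length
within_free_grant = monthly_executions <= FREE_GRANT

print(monthly_executions, within_free_grant)  # → 300700 True
```

With loads of this order, all 14 functions together stay well inside the free grant, which supports grouping them on one Consumption plan.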

Related

Azure Functions scalability issue

I am using Azure Functions on the App Service plan. My understanding is that for every new execution, Azure Functions will create a new App Service, execute the function, and then shut the App Service down. There would be nothing shared between the multiple App Services spawned by multiple requests.
However, when I test my function (a video-processing one), one request takes around 2-3 minutes, but multiple simultaneous requests take 10-15 minutes. My questions: is my understanding above correct? If not, what resource is shared among these App Services? And how should I decide my scaling options (manual vs. auto)?
"My understanding is for every new execution the Azure Function will create a new App Service" — no, it will not spin up a new instance for each execution. Generally, if there is no load on the function app, it will stop all instances.
Then, when the first request/event comes in, it starts the first instance. This is why we have cold starts in serverless. After that, the scale controller measures your instances' memory and CPU consumption and decides whether to scale, but that won't be instant. So if, say, you send N requests to do something with video, they could all go to that same first instance and increase its load. The function app will then scale out because of the CPU spike, but that won't help with the old requests, since they are already being handled by the first instance. Keep in mind that for non-HTTP triggers, new instances are allocated at most once every 30 seconds, which means your function app needs a CPU spike lasting at least 30 seconds before a new instance is added: https://learn.microsoft.com/en-us/azure/azure-functions/event-driven-scaling
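The 30-second allocation rule described above can be sketched as a toy simulation. This is a deliberately simplified model (instant allocation, a single "under load" flag), not the real scale controller:

```python
# Toy model of scale-out for non-HTTP triggers: a new instance is
# allocated at most once every 30 seconds, and only while the
# existing instances are under sustained load.
ALLOCATION_INTERVAL = 30  # seconds between new-instance allocations

def instances_over_time(load_seconds, duration):
    """Return the instance count at each second of a sustained load."""
    instances = 1        # the first request warms up one instance (cold start)
    last_allocation = 0
    counts = []
    for t in range(duration):
        under_load = t < load_seconds
        if under_load and t - last_allocation >= ALLOCATION_INTERVAL:
            instances += 1
            last_allocation = t
        counts.append(instances)
    return counts

# A 65-second CPU spike yields only two extra instances, and the
# requests already queued on the first instance gain nothing from them.
print(instances_over_time(load_seconds=65, duration=70)[-1])  # → 3
```

The point of the sketch: a short burst of expensive requests lands entirely on the first instance before scale-out can react, which matches the 10-15 minute latency you observed.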
I am not sure Azure Functions is a good option for video processing. A function should be used for quick work — usually, I would say, not more than 30 seconds. There are also execution-time limits depending on how you run it: https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
I'm not sure what type of video processing you are doing, but I would have a look at Azure Media Services.
The other option, as you mentioned, is Batch jobs with low-priority VMs: https://azure.microsoft.com/en-au/blog/announcing-public-preview-of-azure-batch-low-priority-vms/ — it's actually a good fit for your use case: media processing and transcoding, rendering, and so on.
A small addition to Vova's answer: if you're running your function in an App Service plan (also known as a Dedicated plan), by default it will only scale instances within the limits of the App Service plan you defined. This means that all instances of your function app run on the same virtual machine, which is most probably why you're seeing increasing request times with more requests.
If you want your functions to scale beyond the capabilities of that plan, you will need to scale manually or enable autoscaling for the App Service plan.
An App Service plan defines a set of compute resources for an app to run. These compute resources are analogous to the server farm in conventional hosting.
and
Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable autoscale, though autoscale will be slower than the elastic scale of the Premium plan. [...] You can also scale up by choosing a different App Service plan.
If you run your Function App on the Consumption plan (the true serverless hosting option, since it enables scaling to zero):
The Consumption plan scales automatically, even during periods of high load.
In case you need longer execution times than the Consumption plan allows, but the App Service plan doesn't seem to be the best hosting environment for your functions, there's also the Premium plan.
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances.
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
More info on all the different Azure Functions hosting options.

Why is Azure ASP (App Service Plan) manual scale-out so slow?

Currently, I have an Azure ASP I1, which contains about 8 app services and 2 function apps.
When I do a manual scale-out from 1 instance to 2 instances, it takes more than 30 minutes, which I think is too slow.
My questions:
What factors affect the scale time? (The number of resources, apps?)
What can I do to reduce the manual scale time? (I mean best-practice configuration.)
If we apply auto-scale to this ASP, will it scale faster? If not, auto-scale won't bring any value, because by the time scaling finishes, the pressure on our server might already have subsided.
Any partial answer or discussion will be appreciated.
My understanding of scaling is that it is simply the sum of how long it takes to provision all the resources under the service plan. You said you have 8 app services and 2 function apps. Think back to how long it took to provision them: if each app took about a minute, it would be roughly 10 minutes. For example, if your app has a Cosmos DB, that alone could take anywhere from 3 to 10 minutes. I am speaking from my own experience.
So, now, to your questions.
What factors affect the scale time? (The number of resources, apps?)
Yes, the individual apps and the resources they depend on are a huge factor in the scale time.
What can I do to reduce the manual scale time? (I mean best-practice configuration.)
Not much; this is one of those things that is outside your control.
However, if I were you, I would consider moving some of the apps and functions out of this service plan and managing them individually.
Say I have a web app with a database service. I find that the server handles the load just fine, but it is the database that needs a bigger plan. Then, instead of keeping them on the same plan, I would move the database to a separate plan, focus the scaling efforts on the database, and leave the web app service alone.
If we apply auto-scale to this ASP, will it scale faster?
No.

Azure Function Apps vs. Azure App Services for compute-intensive work

I have two questions, the first about hosting and the second about which SDK/library to use:
I need to write a work-allocation scheduler (assigning tasks to people) that will run, say, every hour, execute compute-intensive logic in the background, and push the results to our database. The inputs might be the number of days to create the schedule for, the number of people available, and the count of tasks to be done. So it is primarily compute-intensive.
Should I host it in an App Service or in an Azure Function (TimerTrigger)? This scheduler runs as a pure background job and is never called from the UI or any backend API.
If I go the App Service route, I have the choice of either Hangfire or a WebJob. How should I decide which is right for me?
Quick execution at a lower cost is my main criterion.
One consideration for Azure Functions is how long the processing will take. Functions have a maximum execution time that depends on the hosting plan. When you create a function app in Azure, you must choose a hosting plan; three are available: the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. An overview of hosting plans and their timeout durations is here: Azure Functions scale and hosting.
Unlimited duration is available on the Premium plan or the Dedicated plan (unlimited execution duration, with 60 minutes guaranteed).
The maximum duration for the Consumption plan is 10 minutes.
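The timeout limits mentioned above can be summarized in a small lookup (default and maximum `functionTimeout` in minutes as documented at the time of writing; `None` means unbounded — verify against the current docs):

```python
# Azure Functions timeout limits per hosting plan, in minutes.
# None means no upper bound (Premium still guarantees 60 minutes).
FUNCTION_TIMEOUTS = {
    "Consumption": {"default": 5,  "maximum": 10},
    "Premium":     {"default": 30, "maximum": None},
    "Dedicated":   {"default": 30, "maximum": None},
}

def fits_in_plan(plan, minutes):
    """Can a job of the given duration run under the plan's maximum timeout?"""
    maximum = FUNCTION_TIMEOUTS[plan]["maximum"]
    return maximum is None or minutes <= maximum

print(fits_in_plan("Consumption", 15))  # → False: a 15-minute job needs Premium or Dedicated
```

So for the scheduler described in the question, the deciding factor is whether a single run can finish inside the Consumption plan's 10-minute ceiling.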

Multiple function apps on a single app plan

When you create a new function app, you can choose an App Service plan. If you choose an EP3 plan, for example, it says "estimated cost: 200 dollars". If I create 3 different function apps, all on the same plan, will it then be 200 × 3 dollars, or do you pay per plan?
When it comes to Azure Functions on an App Service plan, you don't pay for the number of executions, execution time, or memory used. Instead, you pay for the cost of the VM instances that you allocate. So, in this case, you will be charged based on the VM (it does not matter how many function apps you run on it). That said, the generally recommended practice is to stick with the Consumption plan where it fits.
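The per-plan billing above reduces to trivial arithmetic; the $200/month figure is the EP3 estimate quoted in the question, used here purely for illustration:

```python
# App Service (and Elastic Premium) billing is per plan, not per app:
# you pay for the plan's VM capacity regardless of how many function
# apps share it.
EP3_MONTHLY_ESTIMATE = 200  # USD, the figure from the question

def monthly_bill(plan_price, function_app_count):
    """The number of function apps on the plan does not change the bill."""
    return plan_price  # function_app_count intentionally plays no role

print(monthly_bill(EP3_MONTHLY_ESTIMATE, 3))  # → 200, not 600
```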

Running an Azure cloud service every n days

I have created an Azure service that is responsible for the tasks below:
(1) Access the blob containers and download the files from there.
(2) Extract some data from the downloaded files.
(3) Store the extracted data in Azure SQL Server.
I want to run this processing every 7 days. Is there a way to achieve this? Or can I use an option other than a cloud service to achieve the above goal?
I would recommend using an Azure Function, as its timer-based processing (timer trigger) feature can fulfill your requirements.
Timer triggers call functions based on a schedule, one time or recurring.
Reference: Azure Functions timer trigger, Azure Functions Pricing
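A minimal timer-trigger binding for a weekly run could look like the `function.json` sketch below. Note that NCRONTAB has no literal "every 7 days" form, so this fires every Sunday at midnight (adjust the day-of-week field to taste); the binding name is arbitrary:

```json
{
  "bindings": [
    {
      "name": "weeklyTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 0 * * 0"
    }
  ]
}
```

The six NCRONTAB fields are second, minute, hour, day, month, and day-of-week, so `0 0 0 * * 0` means 00:00:00 on Sundays.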
Another great advantage of using an Azure Function for your scenario is its pricing model.
The Azure Functions Consumption plan is billed based on resource consumption and executions.
Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.
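The free grant quoted above can be turned into a back-of-the-envelope cost check. The per-unit prices below are illustrative assumptions; verify them against the current Azure Functions pricing page:

```python
# Back-of-the-envelope Consumption plan cost estimate.
# Per-unit prices are assumptions -- check the Azure pricing page.
FREE_EXECUTIONS = 1_000_000   # free requests per month
FREE_GB_SECONDS = 400_000     # free resource consumption per month
PRICE_PER_MILLION_EXECUTIONS = 0.20  # USD, assumed
PRICE_PER_GB_SECOND = 0.000016       # USD, assumed

def monthly_cost(executions, gb_seconds):
    """Estimated monthly cost after the free grant is applied."""
    billable_exec = max(0, executions - FREE_EXECUTIONS)
    billable_gbs = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_exec / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
            + billable_gbs * PRICE_PER_GB_SECOND)

# A weekly job (4-5 runs a month, a few minutes each) stays well
# inside the free grant, so it would cost nothing.
print(monthly_cost(executions=5, gb_seconds=5 * 60 * 1.5))  # → 0.0
```

In other words, a job that runs once every 7 days is effectively free on the Consumption plan, as long as each run finishes within the plan's execution-time limit.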
Certainly not natively with the Cloud Service itself. You could obviously code it to perform some task(s) and then sleep for 7 days, but you would pay for all of that time, which makes no sense.
You can use Azure WebJobs, Functions, or Scheduler for this purpose, or you can create a PowerShell/CLI (or other) cron task/task scheduler to turn your Azure Cloud Service on, wait for it to finish processing, and turn it off again. But that seems like a lot of extra effort; I'd rather go with Scheduler or Functions.