Multiple function apps on a single App Service plan - Azure

When you create a new function app, you can choose an App Service plan. If you choose an EP3 plan, for example, it says "estimated cost 200 dollars". If I create 3 different function apps all on the same plan, will it then be 200 x 3 dollars, or do you pay per plan?

With Azure Functions on an App Service plan, you don't pay per execution, execution time, or memory used. Instead, you pay for the VM instances allocated to the plan. So in this case you are charged for the plan itself, no matter how many function apps you run on it; three function apps on one EP3 plan still cost the price of that single plan. That said, the generally recommended practice is to stick with the Consumption plan.
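To make the per-plan billing concrete, here is a minimal Azure CLI sketch that creates one EP3 plan and points three function apps at it; the resource group, plan, app, and storage account names are placeholders, not taken from the question:

```bash
# Hypothetical names throughout; the EP3 plan is created (and billed) once.
az functionapp plan create --resource-group my-rg --name shared-ep3-plan \
  --location westeurope --sku EP3

# Three function apps share the same plan, adding no extra plan cost.
for app in app-one app-two app-three; do
  az functionapp create --resource-group my-rg --plan shared-ep3-plan \
    --name "$app" --storage-account mystorageacct --runtime dotnet
done
```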

Related

Understanding Azure Functions with App Service Plan

While creating an Azure Function, you are given the option to create an App Service plan.
Let's say we select P2V2, which has 7 GB RAM and 2 cores. Here are the questions:
Let's say the function is triggered and each invocation consumes 1 GB of RAM. Does that mean a single instance can run at most ~6 invocations concurrently (leaving, say, 1 GB for the OS), with all 6 concurrent invocations sharing the same cores?
When does the App Service plan decide to scale out to multiple instances?
Yes, probably. As stated in Azure Functions hosting options - Service limits, the number of function apps per plan is unbounded, but:
The actual number of function apps that you can host depends on the activity of the apps, the size of the machine instances, and the corresponding resource utilization.
By default, an App Service plan doesn't scale out. The same article I linked to before states that for a Dedicated plan you can use manual scaling or autoscale; with autoscale, you control the rules.
For more information, see the documentation Juunas linked to in this comment.
Best practices for Autoscale
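As a rough illustration of such autoscale rules, an Azure CLI sketch could look like the following; the resource names are placeholders and the CPU thresholds are arbitrary examples:

```bash
# Create an autoscale setting targeting the App Service plan (a Microsoft.Web/serverfarms resource).
az monitor autoscale create --resource-group my-rg \
  --resource my-plan --resource-type Microsoft.Web/serverfarms \
  --name my-plan-autoscale --min-count 1 --max-count 5 --count 1

# Scale out by one instance when average CPU exceeds 70% over 10 minutes.
az monitor autoscale rule create --resource-group my-rg \
  --autoscale-name my-plan-autoscale \
  --condition "CpuPercentage > 70 avg 10m" --scale out 1

# Scale back in by one instance when average CPU drops below 30% over 10 minutes.
az monitor autoscale rule create --resource-group my-rg \
  --autoscale-name my-plan-autoscale \
  --condition "CpuPercentage < 30 avg 10m" --scale in 1
```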

Azure Functions scalability issue

I am using Azure Functions on the App Service plan. My understanding is that for every new execution, Azure Functions will create a new App Service, execute the function, and then shut down the App Service, and that nothing is shared between the multiple App Services spawned by multiple requests.
However, when I test my function (a video-processing one), a single request takes around 2-3 minutes, while multiple simultaneous requests push the time up to 10-15 minutes. My questions: is my understanding above correct? If not, what resource is shared among these App Services? And how should I decide between manual and automatic scaling?
"My understanding is for every new execution the Azure Function will create a new App Service" Nope it will not run new instance each time. Generally if there is no load on AF it will stop all instances.
Then if first request/event comes in it will start first instance. This is why we have ColdStart in Serverless. After that scale controller will measure your instance performance memory and CPU consumption and decide if it needs to scale but it wont be instant. So if lets say you sent N amount of requests to do smth with video they could go to same first instance and increase load. Then AF will scale, because of CPU spike but it wont help with old requests since they are handled at first instance. Keep in mind For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds which means that your AF should have CPU spike for at least 30 second to add new instance https://learn.microsoft.com/en-us/azure/azure-functions/event-driven-scaling
I am not sure if Azure Functions are good option for video processing. Azure function should be used for quick stuff usually I would say not more than 30 sec. But there are some limitation of execution time depends how you run it https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
Not sure what type of video processing you doing but i would have a look into Azure Media Services
The other options as you mentioned is Batch jobs with low priority https://azure.microsoft.com/en-au/blog/announcing-public-preview-of-azure-batch-low-priority-vms/ it actually a good use case you have: Media processing and transcoding, rendering and so on
A small addition to Vova's answer: if you're running your Function App in an App Service plan (also known as a Dedicated plan), by default it will only scale within the instance limits of the App Service plan you defined. This means all instances of your Function App run on the same virtual machine, which is most probably why you're seeing request times grow as the number of requests grows.
If you want your Functions to scale beyond the capabilities of that plan, you will need to manually scale or enable autoscaling for the App Service plan.
An App Service plan defines a set of compute resources for an app to run. These compute resources are analogous to the server farm in conventional hosting.
and
Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable autoscale, though autoscale will be slower than the elastic scale of the Premium plan. [...] You can also scale up by choosing a different App Service plan.
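In practice, manual scale-out and scale-up of an App Service plan can be done with the Azure CLI roughly as follows (the resource group, plan name, and SKU are placeholders):

```bash
# Scale out: run three instances of the existing plan.
az appservice plan update --resource-group my-rg --name my-plan --number-of-workers 3

# Scale up: move the plan to a larger SKU (e.g. P2V2).
az appservice plan update --resource-group my-rg --name my-plan --sku P2V2
```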
If you run your Function App on the Consumption plan (the true serverless hosting option, since it enables scaling to zero):
The Consumption plan scales automatically, even during periods of high load.
If you need longer execution times than the Consumption plan allows, but the App Service plan doesn't seem like the best hosting environment for your Functions, there is also the Premium plan.
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
More info on all the different Azure Functions hosting options.
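If the Premium plan is the route taken, the number of always-ready (pre-warmed) instances and the scale-out ceiling can be set when creating the plan; a hedged CLI sketch with placeholder names and counts:

```bash
# Elastic Premium plan with one always-ready instance and a burst limit of ten.
az functionapp plan create --resource-group my-rg --name my-premium-plan \
  --location westeurope --sku EP1 --min-instances 1 --max-burst 10
```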

Azure FunctionApps vs Azure App Services for Compute intensive work

I have 2 questions: the first is related to hosting, the second to which SDK/library to use.
I need to write a kind of work-allocation scheduler for people, which will run, say, every hour, execute compute-intensive logic in the background, and push the results into our database. The input may be the number of days to create the schedule for, the number of people available, and the count of tasks to be done. So it is primarily compute intensive.
Should I host it in an App Service or in an Azure Function (TimerTrigger)? This scheduler runs purely as a background job and is never called from the UI or any backend API.
If I go the App Service way, I have the choice of either Hangfire or a WebJob. How should I decide which is better for me?
Quick execution at lower cost is my main criterion.
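For context, a minimal sketch of what such an hourly TimerTrigger might look like in the Python v2 programming model; the function name, schedule, and body are illustrative assumptions, not taken from the question:

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# NCRONTAB expression: run at minute 0 of every hour.
@app.timer_trigger(schedule="0 0 * * * *", arg_name="timer", run_on_startup=False)
def allocate_work(timer: func.TimerRequest) -> None:
    # Placeholder for the compute-intensive scheduling logic; the results
    # would then be pushed to the database.
    logging.info("Hourly work-allocation run started (past due: %s)", timer.past_due)
```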
One consideration for Azure Functions is how long the processing will take. Azure Functions have a maximum run duration that depends on the hosting plan. When you create a function app in Azure, you must choose a hosting plan for your app. There are three hosting plans available for Azure Functions: the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. An overview of hosting plans and their timeout durations is here: Azure Functions scale and hosting.
Unlimited duration is available on the Premium plan and the Dedicated plan (unlimited execution duration, with 60 minutes guaranteed).
The maximum duration on the Consumption plan is 10 minutes.
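The timeout itself is set via functionTimeout in host.json; a minimal sketch (on the Consumption plan this value can only be raised up to the 10-minute cap noted above):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```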

Azure Functions not Running Fast Enough

I have an Azure Function that reads jobs from a storage queue. It then executes these jobs and grabs more. I have been getting more jobs for it to run lately and noticed that the queue is building up.
What can I do from an Azure Perspective to get better performance out of this? Each job runs in its own little world so adding a new instance or adding threads or attaching to a "better" machine would all work fine.
Things come to mind with the information provided:
For more raw power: host your Azure Function in a dedicated App Service plan instead of the Consumption plan. You can scale up (better hardware) or out (more instances). Be aware that in theory this could also turn out worse; I would give it a try. Or try the "premium consumption plan" mentioned by Ken.
More parallelism: if your queue builds up even though you are not using most of your resources, try tuning the configuration parameters batchSize and newBatchThreshold (see the host.json sketch after this list).
Different execution logic: depending on where most of the time is spent during function execution, Durable Functions might help. Based on your comments, you might also try caching the external data in a static field or in Azure Cache for Redis.
Look at the most common performance considerations
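As a sketch of the batchSize / newBatchThreshold tuning mentioned above, these queue settings live in host.json; the values shown are examples (32 is the documented maximum batchSize):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 32,
      "newBatchThreshold": 16
    }
  }
}
```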
Premium plan (Preview)
The Azure Functions Premium plan provides the same features and scaling mechanism as the Consumption plan (scaling based on the number of events), with enhanced performance and VNet access. The Premium plan is billed on a per-second basis, based on the number of vCPU-s and GB-s your premium function apps consume.
In order to use the Azure Functions Premium plan private preview, your subscription needs to be added to an allowlist. Please apply for access via http://aka.ms/functionspremium.
More Info:
https://github.com/Azure/Azure-Functions/blob/master/functions-premium-plan/overview.md

Azure Function Consumption Plan limitation

I have an Azure Functions setup for an IoT scenario with a predictable event load. Currently, 15 functions run under the same function app (a single DLL).
Now we are planning to create a separate function app for each function (15 DLL projects).
Why 15 function apps?
One function handles millions of events per day; we will put this function on a dedicated App Service plan.
The remaining 14 functions have a very limited load, so we are planning to move them to the Consumption plan, where 1 million executions are included free each month.
Every function app can scale independently.
Concerns
I need to create 15 projects in my solution (and more will be added under this design).
Too many resources will show up in the portal: 15 function apps + 15 App Service plans + 1 storage account (shared by all functions), multiplied by the number of environments (DEV + INT + QA + Perf + Stag + Prod), for a total of 186 resources.
This design doesn't look good to me, but it has some advantages. Working in Agile mode :P
Is there any limitation or issue with this design, resource-count-wise or otherwise?
Based on this post by Fabio, you could just have one App Service plan for all your function apps using the Consumption plan. Also, if the combined load of all your functions (on the Consumption plan) would be less than 1 million executions, you could probably put them all in one app too, but do consider the limits that the Consumption plan imposes on functions.
As for the number of resources, I wouldn't think that should pose any problem directly, apart from the resource group limits.
