Why is Azure ASP (App Service Plan) manual scale-out so slow?

Currently, I have an Azure ASP I1, which contains about 8 app services and 2 function apps.
When I do a manual scale-out from 1 instance to 2 instances, it takes more than 30 minutes, which I think is too slow.
My questions:
What factors affect the scale time? (The number of resources or apps?)
What can I do to reduce the manual scale time? (I mean, what is the best-practice configuration?)
If we apply auto-scale to this ASP, will it scale faster? If not, auto-scale will not bring any value, because by the time the scale operation finishes, the pressure on our servers might already have subsided.
Any partial answer or discussion would be appreciated.

My understanding of scaling is that it is simply the sum of how long it would take to provision all the resources that come under the service plan. You said you have 8 app services and 2 function apps. Think back to how long it took to provision them: if each app took about a minute, that would be roughly 10 minutes. For example, if your app has a Cosmos DB, that alone would take anywhere from 3 to 10 minutes. I am speaking from my own experience.
So, now, to your questions.
What factors affect the scale time? (The number of resources or apps?)
Yes, the individual apps and the resources they depend on are a huge factor in the scale time.
What can I do to reduce the manual scale time? (I mean, what is the best-practice configuration?)
Not much; this is largely outside your control.
However, if I were you, I would consider moving some of the apps and functions out of this service plan and managing them individually.
Let's say I have a web app with a database service. I find that the server handles the load just fine, but it is the database that needs a bigger plan. Then, instead of keeping them on the same plan, I would move the database to a separate plan, focus the scaling efforts on the database alone, and leave the web app service untouched.
If we apply auto-scale to this ASP, will it scale faster?
No.

Related

Azure Functions scalability issue

I am using Azure Functions on an App Service Plan. My understanding is that for every new execution, the Azure Function will create a new App Service, execute the function, and then shut down the App Service. Nothing would be shared between the multiple App Services spawned by multiple requests.
However, when I test my Function (a video-processing one), a single request takes around 2-3 minutes, while multiple simultaneous requests take 10-15 minutes. My questions: is my understanding above correct? If not, what resource is shared among these App Services? And how should I decide between manual and auto scaling?
"My understanding is for every new execution the Azure Function will create a new App Service" Nope it will not run new instance each time. Generally if there is no load on AF it will stop all instances.
Then if first request/event comes in it will start first instance. This is why we have ColdStart in Serverless. After that scale controller will measure your instance performance memory and CPU consumption and decide if it needs to scale but it wont be instant. So if lets say you sent N amount of requests to do smth with video they could go to same first instance and increase load. Then AF will scale, because of CPU spike but it wont help with old requests since they are handled at first instance. Keep in mind For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds which means that your AF should have CPU spike for at least 30 second to add new instance https://learn.microsoft.com/en-us/azure/azure-functions/event-driven-scaling
I am not sure if Azure Functions are good option for video processing. Azure function should be used for quick stuff usually I would say not more than 30 sec. But there are some limitation of execution time depends how you run it https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
Not sure what type of video processing you doing but i would have a look into Azure Media Services
The other options as you mentioned is Batch jobs with low priority https://azure.microsoft.com/en-au/blog/announcing-public-preview-of-azure-batch-low-priority-vms/ it actually a good use case you have: Media processing and transcoding, rendering and so on
A small addition to Vova's answer: if you're running your Function in an App Service (also known as a Dedicated Plan), by default it will only scale within the limits of the App Service Plan you defined. This means that all instances of your Function App run on the same virtual machine, which is most probably why you're seeing request times increase with more requests.
If you want your Functions to scale beyond the capabilities of that plan, you will need to manually scale or enable autoscaling for the App Service plan.
An App Service plan defines a set of compute resources for an app to run. These compute resources are analogous to the server farm in conventional hosting.
and
Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable autoscale, though autoscale will be slower than the elastic scale of the Premium plan. [...] You can also scale up by choosing a different App Service plan.
If you run your Function App on the Consumption Plan (the true serverless hosting option, since it enables scaling to zero):
The Consumption plan scales automatically, even during periods of high load.
If you need longer execution times than the Consumption Plan allows, but the App Service Plan doesn't seem to be the best hosting environment for your Functions, there's also the Premium Plan.
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances.
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
More info on all the different Azure Functions hosting options.

Saving on Azure billing cost with App Services?

I have a .NET Core application currently running as an Azure App Service, and I need it to do a lot of 'work' only a few times a day. To save on hourly billing, this is the solution I developed:
Using a runbook (Azure Automation): scale the App Service Plan to the 'Free' tier at 7:00 PM
Using a runbook (Azure Automation): scale the App Service Plan back up to the premium tier at 8:00 AM
Hard-code my .NET Core application to ensure it only does the heavy 'work' between 8:00 AM and 7:00 PM (see the sketch after this list)
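For reference, the gate in step 3 is roughly the following. This is a minimal sketch; the time zone ID and the exact hours are illustrative assumptions, not part of the original setup:

```csharp
using System;

public static class WorkWindow
{
    public static bool IsHeavyWorkAllowed(DateTime utcNow)
    {
        // Convert to the business time zone so the gate matches the
        // 8:00 AM - 7:00 PM runbook schedule. The zone ID is illustrative.
        var tz = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
        var local = TimeZoneInfo.ConvertTimeFromUtc(utcNow, tz);

        // Heavy work only between 08:00 and 19:00 local time.
        return local.Hour >= 8 && local.Hour < 19;
    }
}
```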
This is fine, as it saves me a significant portion of the cost: I'm only paying for the hours in which the App Service Plan is scaled up to the premium tier. However, it is definitely not ideal.
My question is: what design pattern should I implement to accomplish this? I need a lot of compute resources, but only for a few hours out of the day. I know AWS has 'spot' instances that you can configure; is there a similar mechanism in Azure?
Ideally I could implement a solution where I only pay for those heavy compute resources when I actually need them (e.g. a few times a day, while the sun is up).
Thank you for any insight and help!
EDIT: regarding the type of computation, it is essentially a few ML.NET trainers running in parallel with some moderate Elasticsearch document writing.
It is pretty tough to answer this when the entire description of your workload is a "lot" of "heavy compute".
If you can move your "compute" into Azure Functions, going serverless with a Consumption plan will probably be the nicest solution. However, individual function executions have a timeout, so you need to check whether your app fits the bill.
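If you go that route, here is a minimal sketch of the shape it could take. The queue name, function name, and the in-process WebJobs model are my assumptions, not a prescription:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class HeavyWorkFunction
{
    // One queued message triggers one execution; the Consumption plan
    // scales instances out automatically and bills per execution.
    [FunctionName("HeavyWork")]
    public static void Run(
        [QueueTrigger("work-items")] string workItem,
        ILogger log)
    {
        log.LogInformation("Processing {WorkItem}", workItem);

        // Run one bounded unit of work here (e.g. one ML.NET training job).
        // Each execution must finish within the plan's functionTimeout
        // (default 5 minutes, max 10 minutes on the Consumption plan).
    }
}
```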
As an alternative, you can package your application as an Azure Container Instance and spin it up on demand.
If you have a REALLY high workload, you can use Azure Batch. If your current workload can be handled by an App Service plan, though, Batch may be overkill.
The equivalent of AWS spot instances is Azure Spot Virtual Machines. You can also use them with Azure Batch.
Yes, you can switch to serverless: host the front end on a Storage Account and move the back end to Azure Functions (Consumption Plan).
PS: if the processing is long-running, that may not be the best solution unless you use Durable Functions.
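For illustration, a minimal Durable Functions sketch (function and activity names are placeholders): the orchestrator splits the long-running job into short activity calls, each timed as its own execution, so no single execution hits the Consumption plan timeout.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class LongRunningJob
{
    // Orchestrator: coordinates the work. It replays on each await,
    // so its code must be deterministic.
    [FunctionName("RunJob")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        foreach (var chunk in new[] { "part-1", "part-2", "part-3" })
        {
            // Each activity call is a separate, individually timed execution.
            await context.CallActivityAsync("ProcessChunk", chunk);
        }
    }

    // Activity: does one bounded piece of the actual processing.
    [FunctionName("ProcessChunk")]
    public static void ProcessChunk([ActivityTrigger] string chunk, ILogger log)
    {
        log.LogInformation("Processing {Chunk}", chunk);
    }
}
```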

How long does it take for an Azure App Service instance to be available after a scale out?

Context: I am designing the auto-scaling (scale-out) configuration for my .NET Framework 4.7 web app hosted on an Azure App Service. I am using the P3V2 pricing tier. The application is CPU-bound. The app's 30-day CPU average is 30% while running on 2 instances, according to the stats in the App Service plan. We occasionally have traffic spikes that overwhelm the 2 instances, so I want to implement auto-scaling.
I want to take the App Service provisioning + app startup time into account when designing the metric thresholds that decide when my app service scales out. I need to make my thresholds low enough to give Azure time to spin up a new instance, but not so low that I am paying unnecessarily for processing power that isn't needed. Budget is a significant factor.
Question: How long does it take for an Azure App Service instance to be available after a scale out? In other words, how long does it take for an Azure App Service to scale out?
P.S. I recognize that there is a lot more to scaling in/out that I am not addressing here. I'm trying my best to be succinct. :)
Generally, not long at all. By that I mean typically under one minute, but the time will vary depending on several factors, such as application size, time of day, and region of deployment.
You could scale out manually and inspect the run-history logs on the scale-out tab.
FYI, you can also use Azure Monitor to create autoscale policies, in case that is of any use to you.

What is the standard setup for web design agency / creative agencies on Azure Web Apps?

Background
Our company designs and hosts websites for approx. 500 clients; each client has one website. Each website is built on ASP.NET. Our current hosting infrastructure is built on hypervisors with virtual machines running Windows. We have 3 virtual machines, all running the same spec (8 cores, 24 GB RAM). The 500 client sites are split across these three web servers; there is no load balancing or fault tolerance, and each website exists in only one location.
Therefore, as we gain clients, each web server's site count increases. When we max out a server, we bring another one online and start again; once that one is full, we spin up another VM, and so on.
Goal
We would like to (eventually) move our sites to Azure. However, we do not want to replicate our current setup on Azure; instead, we would like to move each website to an Azure Web App to take advantage of scaling.
We would also like more fine-grained control over our costs when bringing additional sites online. Currently, bringing a VM online costs us X (for an empty server), and it may take us 3 months to fill it. We would like to grow our hosting capacity steadily, not in big steps.
My question
I have investigated this for many days and cannot find a tutorial or guide on what the ideal setup looks like on Azure Web Apps when hosting hundreds of websites. Almost all tutorials assume you are only ever going to have one website, so there is a 1:1 relationship between a site and the underlying resource. They never talk about how you should organise your apps into App Service Plans, etc.
I understand the concept of adding a website, choosing the appropriate pricing tier, and setting the scale settings. What I do not understand is why people online talk about scaling out Azure Web Apps: surely if an ASP.NET website consumes a certain amount of RAM on one system, bringing another VM online just consumes that same amount of RAM again on another system. So scaling out in this sense ONLY improves availability; is this correct?
If someone is able to provide some of their own experiences when dealing with a lot of websites on Azure (even better if they own a web design company who hosts on Azure) it would be very much appreciated.
Think of an App Service plan as a VM, or a pool of VMs (if you run multiple instances), that runs the same applications simultaneously and shares the same data disc. If you scale out, you add a new VM to the pool; if you scale up, you change the size of the VMs (they aren't actually VMs, but from the user's point of view it is similar).
So basically, in a case like yours where you run many (potentially smaller) applications, scaling up/down establishes the baseline: how many websites you can run and how many applications you can fit in memory. Scaling out then gives you better reliability and more CPU power to cope with high traffic.
Our company is much smaller than yours; we host dozens of websites, not hundreds. But here are some things our experience has taught us:
Use at least S2 instances, which have 2 cores; with S1 instances, a single app can easily degrade the performance of other apps in the same App Service plan
Use Traffic Manager. If the need arises (e.g. an outage of the service in your region), you can easily move to another region
Split websites across more, smaller App Service plans, and collocate applications with similar usage patterns in the same plan. That way you can run one instance when traffic is low and spin up new instances when traffic spikes
You are correct that in all pricing tiers (except Free and Shared), web apps are scaled to all machines in an App Service plan. From the perspective of a web app, this is an availability feature. Scaling an App Service plan from 1 to 2 machines (or autoscaling) essentially provisions the same web app on all the machines. This of course is no good for your situation, but all is not lost. Generally, the unit of scaling is the App Service plan. You could break the web apps down into buckets of App Service plans: say, the first 100+ web apps in AppServicePlan1, then roll over to the next 100+ in AppServicePlan2. The downside is that you will have to track which App Service plan the next web app should be placed in; a hypothetical sketch of that placement logic is below.
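For illustration, a purely hypothetical sketch of that bucketing: the per-plan cap and the naming scheme are made up, and in practice you would persist the counts somewhere durable:

```csharp
using System.Collections.Generic;
using System.Linq;

public class PlanAllocator
{
    private const int AppsPerPlan = 100; // chosen cap; tune per tier and app size
    private readonly Dictionary<string, int> _appCounts; // plan name -> app count

    public PlanAllocator(Dictionary<string, int> appCounts) => _appCounts = appCounts;

    // Returns the plan the next web app should be placed in,
    // rolling over to a new plan name once all existing plans are full.
    public string NextPlan()
    {
        var open = _appCounts.FirstOrDefault(p => p.Value < AppsPerPlan);
        if (open.Key != null)
        {
            _appCounts[open.Key]++;
            return open.Key;
        }

        var newPlan = $"AppServicePlan{_appCounts.Count + 1}";
        _appCounts[newPlan] = 1;
        return newPlan;
    }
}
```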

Windows Azure automatic scaling

Hi, I have a web app deployed as a Cloud Service on Windows Azure, and I am performing some load/stress tests against it. In the Azure Management Portal, I have configured the web role to scale automatically when CPU usage goes over 40%.
I start the tests with only one instance of this web role. As the test progresses, I have set the number of concurrent users to increase over time, up to 2000 users.
After I start the test, I connect via remote desktop to the web role instance on Azure and monitor the CPU usage. After 10 minutes or so, the CPU is constantly at 100% (and indeed my test requests take a very long time to complete), but if I check the CPU of the very same web role in the Azure management portal, it shows 1, 2, or 6 percent. There was a peak of 70%, but it sank back immediately, and the portal never shows the values I see in Task Manager when connected via remote desktop. Sometimes it does not display any value at all (on the dashboard page of my cloud service), which means the graph is no longer being updated.
Furthermore, and this is the point of my question, NO SCALING of the web role instances is performed whatsoever.
Any idea what I am missing? Feel free to ask if my explanation is incomplete.
Autoscaling on the CPU metric for a Cloud Service or Virtual Machine doesn't occur as quickly as you are expecting (~10+ minutes). In this scenario, the CPU metric is averaged across all instances of the service over a period of 1 hour, so your autoscaling actions will not be immediate.
You can read more about this and some recommendations for configuring your autoscale settings here.
If you want to tighten this up a little more, take a look at this post, where I show how to set the TimeWindow using the Monitoring Service Management Library. You may be able to get closer to what you want with this approach.
A few things to consider:
1) As Rick pointed out, CPU is averaged over an hour by default.
2) If you start with only 1 server and then autoscale up to 2, your first server will be yanked out of the load balancer during the scale operation. You should really always have a minimum of 2 servers at all times.
3) Feel free to check out AzureWatch (link in my profile); it was designed to handle fairly advanced scaling scenarios and lets you configure scaling rules without touching any APIs.
