Warming up Cloud Service Web Role deployments before VIP Swap (IIS)

What is the correct way to warm up Cloud Service instances before performing a VIP Swap?
We're running two Web Role instances in our cloud service, and have already performed the following optimizations:
IIS 8.0 Application Initialization module in a Windows Azure Web Role
Controlling Application Pool Idle Timeouts in Windows Azure
Set AutoStart = true
Despite all these changes, our site still takes around 10-20 seconds to start up after a VIP Swap has been performed.
Is doing something like using a WebRole to hit the endpoints still the best way to go?
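
For illustration, here is a minimal sketch of the endpoint-warming approach the question alludes to: poll each staging endpoint until it responds quickly, then trigger the swap. The URLs, threshold, and retry counts below are invented placeholders, not values from the question.

```python
# Minimal warm-up sketch: hit each endpoint on the staging deployment until it
# responds quickly, then perform the VIP swap. URLs and thresholds are
# illustrative assumptions only.
import time
import urllib.request

STAGING_URLS = [
    "http://myservice-staging.cloudapp.net/",           # hypothetical staging address
    "http://myservice-staging.cloudapp.net/api/health", # hypothetical health endpoint
]
WARM_THRESHOLD_SECONDS = 1.0

def warm(url, attempts=10):
    """Request the URL until it answers within the threshold, forcing IIS to
    spin up the application pool (and JIT/caches) along the way."""
    for _ in range(attempts):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=30).read()
        except OSError:
            time.sleep(5)   # instance not ready yet; retry
            continue
        if time.monotonic() - start < WARM_THRESHOLD_SECONDS:
            return True
        time.sleep(2)
    return False

if all(warm(u) for u in STAGING_URLS):
    print("Staging is warm; safe to trigger the VIP swap.")
else:
    print("Staging never warmed up; investigate before swapping.")
```

The same loop could live in a deployment script so the swap is only triggered once every endpoint has demonstrably responded at full speed.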

Related

How does an Azure App Service Plan load-balance traffic across different apps?

I am trying to understand better how Azure App Service Plan (ASP) load-balances the traffic when multiple/different App Services are deployed in it.
Let's assume my ASP is made of 2 nodes (VMs or instances) and I deploy 2 apps into it (4 app instances running in total), with the following URLs:
https://app1.azurewebsites.net
https://app2.azurewebsites.net
I know that there are ASP front-ends acting as load balancers. So, if I understand correctly, it is like a single web server hosting different websites, where the distinction is made on virtual hostnames (the URLs above). Right?
App Service is a multitenant service, so it uses the host header in the request to route the request to the correct endpoint. The default domain name of App Services, *.azurewebsites.net (say, contoso.azurewebsites.net), is different from the application gateway's domain name (say, contoso.com). ref.1
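
To make the host-header routing concrete, here is a toy sketch of the idea. This is not App Service's actual front-end code; the hostnames and worker addresses are invented.

```python
# Toy illustration of host-header routing: one front end, many sites.
# The front end picks a worker instance based solely on the Host header.
import itertools

ROUTES = {
    "app1.azurewebsites.net": ["10.0.0.4:8080", "10.0.0.5:8080"],  # app1 on both workers
    "app2.azurewebsites.net": ["10.0.0.4:8081", "10.0.0.5:8081"],  # app2 on both workers
}
_round_robin = {host: itertools.cycle(workers) for host, workers in ROUTES.items()}

def pick_backend(request_headers: dict) -> str:
    """Choose a worker instance using only the Host header of the request."""
    host = request_headers.get("Host", "").lower()
    if host not in _round_robin:
        raise LookupError(f"no site bound to host {host!r}")
    return next(_round_robin[host])

print(pick_backend({"Host": "app1.azurewebsites.net"}))  # -> 10.0.0.4:8080
print(pick_backend({"Host": "app1.azurewebsites.net"}))  # -> 10.0.0.5:8080
```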
When using App Service, you can scale your apps by scaling the App Service plan they run on. When multiple apps are run in the same App Service plan, each scaled-out instance runs all the apps in the plan.
Apps are allocated to the available App Service plan instances using a best-effort approach for an even distribution across instances. While an even distribution is not guaranteed, the platform makes sure that two instances of the same app are not hosted on the same App Service plan instance.
The platform does not rely on metrics to decide on worker allocation. Applications are rebalanced only when instances are added or removed from the App Service plan.
You can also now do Per-app scaling, which can be enabled at the App Service plan level to allow for scaling an app independently from the App Service plan that hosts it. This way, an App Service plan can be scaled to 10 instances, but an app can be set to use only five. ref.2

Azure function apps and web sockets

I can see in multiple places that web sockets are not supported in function apps. I just want to create a web socket for a few seconds and close it down again, so I have no need for a complex socket framework. I was wondering why this setting is present if it is not supported. Has Microsoft started supporting this feature?
Azure Functions are generally hosted in 2 ways:
Consumption Plan (Serverless)
App Service Plan (Dedicated)
Consumption Plan (Serverless)
In this plan, the underlying machine is de-provisioned when the app is idle. So you may lose your active WebSocket connections whenever the machine goes idle and is de-provisioned.
Also, below is the statement from the Microsoft Azure Function team:
There are some challenges here because WebSocket is really a stateful protocol (you have a long lived connection between a given client and a given server) while Azure Functions is designed to be stateless. For example, we will often deprovision a Function App from one VM and start it on a different VM (perhaps in a different scale unit) based on capacity constraints. This is safe to do for us today because of our stateless design - but if we did it when there were WebSockets connections to that VM, we'd be terminating those connections. Source: GitHub
App Service Plan (Dedicated)
If you are using a dedicated App Service Plan, then WebSockets will work for sure, because there is a machine in the background which is not serverless (it is always available).
Just make sure you have enabled Web Sockets in the configuration (as you have done already).
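
For the short-lived scenario in the question, here is a minimal client sketch using the third-party websockets package (assumes pip install websockets; the URL is a placeholder for your app's endpoint):

```python
# Minimal short-lived WebSocket client: connect, exchange one message, close.
# Assumes `pip install websockets`; the URL below is a placeholder.
import asyncio
import websockets

async def main():
    uri = "wss://myfunctionapp.azurewebsites.net/ws"  # hypothetical endpoint
    async with websockets.connect(uri) as ws:         # HTTP handshake + upgrade
        await ws.send("ping")
        reply = await asyncio.wait_for(ws.recv(), timeout=10)
        print("server said:", reply)
    # leaving the `async with` block closes the socket cleanly

asyncio.run(main())
```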
Check the WebSocket connection limits for App Service plans here:
App Service limits

What happens when an Azure App Service restarts?

What is happening behind the scenes when an App Service is restarted?
I'm trying to troubleshoot a slow restart for my app (ASP.NET and SQL, published from Visual Studio), and I feel that understanding this would help me work out what the issue is. My app starts within a few seconds on my dev machine but takes 90 seconds on Azure.
From my research, it sounds like a new service instance is provisioned, application files are copied from the shared storage to the instance and it is started. Is this correct? Is there a way to monitor the startup process to see what is slow?
Edit:
It's a tier S1 service plan. The app isn't slow, just the restart. I was hoping to understand the process so that I could tell whether the slow startup is due to my code or just the nature of the way the restart works. I've noticed that my app keeps running for about 10 seconds after the restart (refreshing the page), then I get "service unavailable" for about 20 seconds, and then the page keeps loading for about 60 seconds before it responds.
It all depends on which App Service plan you are using; different plans have different memory, network bandwidth, IO, and so on. App Service runs customer apps in a multi-tenant hosting environment: apps deployed in the Free and Shared tiers run in worker processes on shared virtual machines, while apps deployed in the Standard and Premium tiers run on virtual machine(s) dedicated specifically to the apps of a single customer.
Refer to this link for a guide on Troubleshooting slow WebApp in Azure.
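
To see which phase of the restart is slow, one simple option is an external probe that logs status code and latency once a second while you trigger the restart. A rough sketch, with a placeholder URL:

```python
# Polls the site during a restart and logs status code + latency each second,
# so you can see how long each phase (up, 503, slow first response) lasts.
# The URL below is a placeholder for your app.
import time
import urllib.request
import urllib.error

URL = "https://myapp.azurewebsites.net/"  # hypothetical

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=120) as resp:
            status = resp.status
    except urllib.error.HTTPError as e:
        status = e.code            # e.g. 503 while the worker recycles
    except OSError:
        status = "no response"     # connection refused / reset
    print(f"{time.strftime('%H:%M:%S')}  {status}  {time.monotonic() - start:.1f}s")
    time.sleep(1)
```

Comparing the log's timestamps against the moment the restart was issued shows how long the healthy window, the 503 window, and the slow first response each last.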

Are Web Apps inside an Azure App Service Plan implemented as virtual web servers in IIS? Are web gardens used?

If Azure App Service plans are virtual machines dedicated to the Web, API, Logic, and Mobile apps defined within them, does that mean that a web app in an app service plan is an instance of a virtual web server in IIS on that virtual machine?
Assuming this is the case, and that each virtual website gets its own application pool, is there an Azure scaling strategy or scenario where more than one worker process runs in that app pool, creating a web garden? My understanding of web app scale-out is that it results in additional VMs being allocated, not additional worker processes.
The scaling strategy will depend upon the pricing tier you have opted for.
Basically each Service Plan will contain a collection of Web, API, Logic, Mobile apps. These will form a web garden within the Service Plan server you choose.
If you initially choose a single B1 Basic Service Plan, you will get a single virtual machine with all of your applications running on that. As the load on that server increases, you can scale it up to larger servers, but it will still be running on a single server.
If you then choose to create a second instance (and a 3rd, 4th, 5th, ...), each additional server will be a replica of the first, with the load balanced across all of them.
While I've not seen documentation for this, I would imagine that each Web, API, etc. app runs under its own application pool / worker process, and that scale-out simply duplicates instances.
I'm not sure what a Virtual Server is, but each app runs in its own dedicated application pool and w3wp.exe process. There is only a single w3wp.exe process per application pool, so no web gardens.
Is there a specific reason you think you need these to scale your apps? In most cases, using web gardens is the wrong way to scale, as adding more processes can cause unnecessary overhead (amongst other problems - you can find some useful resources on the web). You almost always want to prefer threads over processes for improving concurrency. If you're running out of physical resources (CPU, memory, etc), then the correct way to scale is to add additional VMs.
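
If you want to check the one-w3wp-per-app-pool claim yourself (for example on a Windows box running IIS where you can see the worker processes), here is a small sketch using psutil. It relies on the fact that IIS starts w3wp.exe with an -ap "<pool name>" argument, and assumes pip install psutil:

```python
# Lists w3wp.exe worker processes and the app pool each one serves.
# Assumes `pip install psutil` on a Windows machine running IIS.
import psutil

for proc in psutil.process_iter(["name", "cmdline"]):
    if (proc.info["name"] or "").lower() == "w3wp.exe":
        cmdline = proc.info["cmdline"] or []
        # IIS passes the application pool name after the -ap switch
        pool = cmdline[cmdline.index("-ap") + 1] if "-ap" in cmdline else "?"
        print(f"pid={proc.pid}  pool={pool}")
```

One process per pool in the output means no web garden is in play.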

Azure Webapps not failover when instance fails

We deployed a Node.js Azure Web App and defined a minimum of 2 instances (for scalability and high-availability).
It seems like the LB is balancing the load between the instances, but it doesn't react to an instance error (crash) and insists on balancing the load across all the instances, including the one that crashed.
Is there a way to set a fail-over mechanism for high-availability?
The load balancer used by Azure App Service will continue to send requests to individual web servers as long as the underlying virtual machines are up and running.
To work around the issue you are running into, you can try configuring the "auto-heal" feature. If the scenario is that the app gets "stuck" in a permanently broken state, auto-heal rules can be configured to restart the app automatically.
More details on auto-heal here:
Auto-heal for Azure Web Sites
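
If auto-heal rules are not flexible enough, an external watchdog is another option: probe the app and, after repeated failures, call the ARM restart action for the site. A rough sketch follows; the resource IDs, the api-version value, and the token acquisition are placeholders you would need to fill in.

```python
# Rough watchdog sketch: probe the app; after N consecutive failures,
# restart it through the ARM "restart" action. The IDs, api-version, and
# token below are placeholders/assumptions; adapt before using.
import time
import urllib.request
import urllib.error

APP_URL = "https://myapp.azurewebsites.net/"  # hypothetical
RESTART_URL = (
    "https://management.azure.com/subscriptions/<sub-id>"
    "/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>"
    "/restart?api-version=2022-03-01"
)
TOKEN = "<bearer token from Azure AD>"  # e.g. via `az account get-access-token`
FAILURES_BEFORE_RESTART = 3

failures = 0
while True:
    try:
        urllib.request.urlopen(APP_URL, timeout=30)
        failures = 0
    except (urllib.error.URLError, OSError):
        failures += 1
        if failures >= FAILURES_BEFORE_RESTART:
            req = urllib.request.Request(
                RESTART_URL, method="POST",
                headers={"Authorization": f"Bearer {TOKEN}"}, data=b"")
            urllib.request.urlopen(req)  # trigger the site restart
            failures = 0
    time.sleep(30)
```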
