Azure App Service - Always On vs Health Check

If I have a single App Service instance, is it still recommended to enable Health Check (e.g. for monitoring purposes)?
If yes, then what about the Always On feature? Doesn't it double up requests that ultimately do the same thing? I mean keeping the application running without idling out and checking whether the server returns HTTP errors.

The Azure Web App Always On and Health Check features serve different use cases.
The Always On setting keeps the app loaded at all times, which eliminates the longer load times you would otherwise see after the app has been idle. With Always On enabled, the front-end load balancer sends a request to the application root.
The Health check setting lets you monitor the health of your site using Azure Monitor, where you can see the site's historical health status and create new alert rules.
You can disable Always On and just use Health check; that covers both use cases (a sketch of a health endpoint follows the list below):
keep application running without idle
monitor the health of your site
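For reference, the Health check feature simply probes a path you choose on the app and treats a 2xx response as healthy. A minimal sketch of such an endpoint, assuming a Python/Flask app and a hypothetical /healthz path (the actual path is whatever you configure in the portal):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/healthz")
    def healthz():
        # Azure's Health check pings this path; any 2xx response marks the
        # instance as healthy, anything else counts as a failed probe.
        return "OK", 200

    if __name__ == "__main__":
        app.run()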

Related

Azure App Service unload time with health check

Azure App Service instances can be unloaded after 20 minutes of inactivity, but does anyone know how this is affected by health checks? In particular, does enabling Health Check prevent the application from being unloaded? If so, does that lead to increased costs?
Enabling Health Check will keep the application alive, because the platform continuously pings the health check endpoint at a set interval.
However, a better way to keep the application alive is to use the Always On setting within the Azure App Service settings.
Regarding cost, an App Service plan is always billed even if it is inactive. Once provisioned, the meter keeps ticking, and whether the web app is active or inactive has no impact on costs whatsoever.

Azure App Service - Auto-Heal vs Health Check

Health Check can restart an instance. Auto-Heal can also restart an instance.
So, when should I use Health Check and when should I use Auto-Heal? Should I use them both together?
The Health Check feature is pretty basic compared to Auto-Heal. Basically, it makes a request to a predefined URL, and if it does not get a successful response it takes that instance out of the load balancer pool. If the instance remains unhealthy, it is replaced with a new one. It only takes effect when the web app is scaled out to more than one instance.
Auto-Heal is much more sophisticated: instead of pinging a URL, it can be configured to restart an instance when a certain memory or CPU usage limit is reached, or when response times degrade over a certain period.
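To make the difference concrete, here is a rough Python sketch of what the Health Check behaviour amounts to conceptually. The probe path, timeout, and failure threshold are illustrative assumptions, not the platform's exact values:

    import requests

    HEALTH_PATH = "/healthz"    # the path you configure for Health Check
    FAILURE_THRESHOLD = 10      # illustrative threshold, not the exact platform value

    def probe(instance_url: str) -> bool:
        # A single probe: any 2xx response counts as healthy.
        try:
            resp = requests.get(instance_url + HEALTH_PATH, timeout=30)
            return 200 <= resp.status_code < 300
        except requests.RequestException:
            return False

    def update_rotation(failures: dict, in_rotation: set) -> None:
        # failures maps instance URL -> consecutive failed probes.
        for url in failures:
            if probe(url):
                failures[url] = 0
                in_rotation.add(url)          # healthy again: receives traffic
            else:
                failures[url] += 1
                if failures[url] >= FAILURE_THRESHOLD:
                    in_rotation.discard(url)  # stop routing traffic to it

Auto-Heal, by contrast, is rule-driven (memory, CPU, slow requests) rather than probe-driven.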

How do I set up a health probe for a web application running on an Azure virtual machine?

State of the application:
A single virtual machine running an Apache server.
The application is exposed via the virtual machine's public IP (not behind a load balancer).
I have a health probe endpoint that needs to be probed every few seconds to check whether the app is up, and to trigger an alert if it is not.
What are my options? I want to get the health probe up and running first, before I move to a virtual machine scale set and a load balancer.
You need something like a watchdog that calls the health endpoint at a given interval. In Azure you can use an Application Insights availability test. You can then create alerts based on this availability and optionally build dashboards that show the status over a given period.
As a bonus, you might integrate an Application Insights resource into your web app to get detailed monitoring. See the docs.
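If you want something running before you wire up an availability test, a watchdog can be as small as the sketch below. The URL, interval, and the alert action (here just printing) are placeholders you would replace:

    import time

    import requests

    HEALTH_URL = "http://<your-vm-public-ip>/health"   # placeholder endpoint
    INTERVAL_SECONDS = 15                              # probe every few seconds

    def alert(message: str) -> None:
        # Placeholder: send mail, post to a webhook, page someone, etc.
        print(f"ALERT: {message}")

    def watchdog() -> None:
        while True:
            try:
                resp = requests.get(HEALTH_URL, timeout=5)
                if resp.status_code != 200:
                    alert(f"Health probe returned HTTP {resp.status_code}")
            except requests.RequestException as exc:
                alert(f"Health probe failed: {exc}")
            time.sleep(INTERVAL_SECONDS)

    if __name__ == "__main__":
        watchdog()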
Under Support + troubleshooting -> Resource health in your virtual machine's portal blade, you can set up a health alert.
You can then select under which conditions the alert should be triggered. In your case, Current resource status: Unavailable should work just fine. You can also configure a custom notification (e-mail) under Actions, or trigger an Azure Function or Logic App that performs an action when the VM is unavailable.
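As a rough illustration of that last option, an HTTP-triggered Azure Function (Python) could receive the alert webhook from an action group and react to it. This assumes the action group is configured to post the common alert schema to the function's URL; the field names below come from that schema:

    import logging

    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # The action group posts the alert payload to this function's URL.
        payload = req.get_json()
        essentials = payload.get("data", {}).get("essentials", {})
        logging.info("Alert '%s' is now %s",
                     essentials.get("alertRule"),
                     essentials.get("monitorCondition"))
        # Placeholder: restart the VM, notify a chat channel, open a ticket, etc.
        return func.HttpResponse("received", status_code=200)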
To detect whether the application running on your Apache server is working correctly, you can use a monitoring solution that checks the Apache error logs.
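A lightweight way to do that is to follow the error log and surface new entries yourself. A sketch, assuming the default Debian/Ubuntu log path (adjust for your distribution, e.g. /var/log/httpd/error_log on RHEL) and leaving the actual alerting action as a placeholder:

    import os
    import time

    ERROR_LOG = "/var/log/apache2/error.log"   # Debian/Ubuntu default; adjust per distro

    def tail_errors(path: str) -> None:
        # Follow the Apache error log and surface new entries; hook the print
        # into whatever alerting you use (mail, webhook, monitoring agent).
        with open(path, "r") as log:
            log.seek(0, os.SEEK_END)           # only watch entries from now on
            while True:
                line = log.readline()
                if not line:
                    time.sleep(1)
                    continue
                print(f"Apache error log entry: {line.strip()}")

    if __name__ == "__main__":
        tail_errors(ERROR_LOG)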

Exclude an Azure AppService instance from load balancing

Is there a way to exclude an AppService instance from the Load Balancer:
Via the portal?
Via the SDK?
Via the SDK would be ideal; then we could set the MakeVisibleToLoadBalance flag (if such a thing existed) once all initialization has completed.
If it's only available via the portal, it would be good to be able to set a delay of n seconds after an instance is loaded before it becomes visible to the load balancer.
Reason:
When we restart an instance (e.g. via advanced restart), the metrics show a significant increase in response times, every time.
I believe the cause is that the load balancer thinks the machine is available when it really hasn't completed initialization, so requests the load balancer sends to that instance are significantly delayed.
Another reason is that we may observe an instance performing poorly; it would be great if we could exclude that instance until it either recovered or was restarted.
// As per the discussion with wallismark in the comments; the helpful comments are copied into this answer.
To address the 'reason'/scenarios you have mentioned above, you could leverage the ApplicationInitialization module. Every time your application starts, whether because a new worker comes online (horizontal scaling) or simply because of a cold start caused by a new deployment, configuration change, etc., ApplicationInitialization is executed to warm up the site before that worker accepts requests.
So the Application Initialization module is a handy feature that allows you to warm your app before it receives requests, helping to avoid the cold-start or slow initial load times seen when the app is restarted. Please check out https://ruslany.net/2015/09/how-to-warm-up-azure-web-app-during-deployment-slots-swap/
- It is also invoked for all other operations in which a new worker is provisioned (such as auto scale, manual scale or Azure fabric maintenance). But you cannot exclude the instance from the load balancer.
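The module itself is configured on the web server side and issues warm-up requests before the worker goes into rotation; what those requests hit is up to you. A sketch of a warm-up path, assuming a Python/Flask app and a hypothetical /warmup route that does the expensive start-up work once:

    from flask import Flask

    app = Flask(__name__)
    _cache = {}

    def load_reference_data() -> dict:
        # Placeholder for expensive start-up work (priming caches, opening
        # connections, compiling templates, and so on).
        return {"ready": True}

    @app.route("/warmup")
    def warmup():
        # A warm-up request (from ApplicationInitialization or a slot swap)
        # hits this path so the heavy lifting happens before real traffic arrives.
        if not _cache:
            _cache.update(load_reference_data())
        return "warm", 200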
If it fits your requirement, you could leverage ARR affinity, which in a multi-instance deployment ensures that a client is routed to the same instance for the life of the session. You can set this option to Off for stateless applications.
Typically, scaling out runs multiple copies of your web app and handles the load-balancing configuration necessary to distribute incoming requests across all instances. When you have more than one instance, a request made to your web app can go to any of them; a load balancer decides which instance to route each request to based on how busy each instance is at the time.
To share more information on this feature: once a request from your browser reaches the site, the load balancer adds an ARRAffinity cookie to the response containing the specific instance id, which makes the next request from that browser go to the same instance. You can use this to send requests to a specific instance of your site. You can find the setting in the App Service's Application Settings.
When multiple apps are run in the same App Service plan, each scaled-out instance runs all the apps in the plan.
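To see that behaviour from the client side, a small sketch with the Python requests library; the site URL is a placeholder, and the pinning only applies while ARR affinity is left On for the app:

    import requests

    SITE = "https://<your-app>.azurewebsites.net"   # placeholder URL

    # A Session keeps the ARRAffinity cookie returned by the load balancer,
    # so later requests through the same session stick to the same instance.
    session = requests.Session()
    first = session.get(SITE)
    print("ARRAffinity cookie:", session.cookies.get("ARRAffinity"))

    # Subsequent requests carry the cookie and are routed to that instance.
    second = session.get(SITE)
    print("Status:", second.status_code)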

Does an Azure Web App care if its instances are healthy/unhealthy?

If I deploy a web app (formerly known as an Azure Website) to an App Hosting Plan in Azure with a couple of instances (scale = 2), will the load balancer in front of the instances care if any of the instances is unhealthy?
I'm troubleshooting an issue that sometimes causes my site to return an HTTP 503 ~50% of the time. My thinking here is that one of my two instances has failed but the load balancer hasn't noticed.
If the load balancer does care, what does it look for? I can't find any way to specify a ping URL, for instance.
Note: this question has nothing to do with Traffic Manager.
Yes, Azure Web Apps monitors the health of the workers by making internal requests to them and verifying that they're healthy.
However, we don't check status codes that the web app returns to user requests (like 500, etc.), since that could easily be an app-specific issue rather than a problem with the machine.
So the answer you're looking for is: we continuously test whether or not the instances (VMs) are healthy and take them down if they're not. However, those tests do not rely on error codes the customer's site returns.
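If you do want instance health to reflect app-level failures, the Health check feature covered at the top of this page lets you point the platform at a path of your own. A sketch of such a path that also verifies a dependency, assuming Flask and a local SQLite database as the stand-in dependency:

    import sqlite3

    from flask import Flask

    app = Flask(__name__)
    DB_PATH = "app.db"   # stand-in for a real dependency (database, cache, queue)

    @app.route("/healthz")
    def healthz():
        # Fail the probe with a non-2xx status when a dependency the platform
        # cannot see on its own (here a database) is unreachable.
        try:
            with sqlite3.connect(DB_PATH) as conn:
                conn.execute("SELECT 1")
            return "OK", 200
        except sqlite3.Error:
            return "database unreachable", 503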
