What is the difference between an Azure Availability Test and a Health Check - azure

In the Azure web app blade, select Application Insights in the left-hand panel, then View Application Insights Data, and then click Availability in the left-hand panel; there you can add new tests. Essentially, this is where you specify the health/ping endpoint for the site. You can also configure the associated alert rules here.
Azure also has a newer feature on the web app called Health Check. All you have to do is enable it and give it your health/ping endpoint; you can then configure alert rules here as well.
With both methods, the health endpoint is pinged by Azure, and if something is wrong according to the alert rules, you get an alert message.
But what is the difference between the two approaches?

The difference is that if your web app runs on multiple instances (i.e. you have scale-out rules configured), Health Check does more: if an instance fails to respond to the ping, the system marks it as unhealthy and removes it from the load balancer rotation. This increases your application's average availability and resiliency.
An availability test in Application Insights does no such thing; it only checks health and alerts.
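As a sketch, the portal steps above can also be done from the Azure CLI (the resource group and app names below are placeholders, and the `--health-check-path` flag assumes a reasonably recent CLI version):

```shell
# Point App Service Health Check at the app's health/ping endpoint
az webapp config set \
  --resource-group myResourceGroup \
  --name myWebApp \
  --health-check-path "/api/health"
```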
You can review these docs: Health Check is now Generally Available, Does App Service Health Checks logs in Application Insights?, What App Service does with Health checks.

Application Insights availability tests are narrowly focused on checking health and alerting through some channel, while Health Check was released with much broader goals. It can:
Health-check all instances every 1 minute (roughly what an availability test does)
Remove an instance from rotation if the ping fails
Restart the underlying VM
Replace the instance if needed
Help with scaling out/up to new instances
Moreover, this can be used for more scenarios, such as reporting. Please note that it is not used for premium services.
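One related knob worth knowing about: assuming the documented `WEBSITE_HEALTHCHECK_MAXPINGFAILURES` app setting, you can tune how many failed pings it takes before an instance is pulled from rotation (resource names below are placeholders):

```shell
# Evict an unhealthy instance after 5 failed pings instead of the default
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myWebApp \
  --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=5
```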

Related

How to drop front door health probes from application insights

We are using azure app service codeless implementation of application insights: https://learn.microsoft.com/en-us/azure/azure-monitor/app/azure-web-apps?tabs=net#enable-agent-based-monitoring
We are also using Front Door, so all the health probe HEAD requests end up in Application Insights, creating a lot of noise and extra cost.
I understand if you are using the application insights SDK and have an applicationinsights.config file you can filter these requests out.
But is there a way of doing this using the agent based monitoring, the doc hints that applicationinsights.config settings can be set as application settings in the app service, but does anyone have an example of how to do filtering this way?
Currently, the Telemetry processors (preview) feature for filtering out unwanted telemetry data is available only for codeless application monitoring of Java apps, via the standalone Java 3.x agent (examples here).
For other environments/languages and advanced configurations, manual instrumentation with the SDK might still be the way to go. Although it would require some management effort, this approach is much more customizable and would give you greater control over the telemetry you want to ingest.
Regardless of the approach, to reduce the volume of telemetry without affecting your statistics, you can try configuring Sampling, either via Application settings or the SDK.
From a Front Door configuration perspective, you could increase the Interval between health checks to reduce the frequency of requests. Or, if you have a single backend in your backend pool, you can choose to disable the health probes. Even if you have multiple backends in the backend pool but only one of them is in enabled state, you can disable health probes. This should help in reducing the load on your application backend, and also the telemetry traffic.
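As a sketch of the Front Door side (this assumes the classic Front Door commands from the `front-door` CLI extension; the profile and probe names are placeholders):

```shell
# Probe less often, e.g. every 120 seconds instead of the default
az network front-door probe update \
  --resource-group myResourceGroup \
  --front-door-name myFrontDoor \
  --name myHealthProbe \
  --interval 120

# Or, if only a single backend is enabled, disable the probe entirely
az network front-door probe update \
  --resource-group myResourceGroup \
  --front-door-name myFrontDoor \
  --name myHealthProbe \
  --enabled Disabled
```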

Autoscale rule for max connection and response time out in azure app service

When we do load testing of a REST API developed using .NET Core, with 10,000 users and a ramp-up of 100, we eventually end up with a 502 Bad Gateway error.
Many outbound API calls happen inside our application, which uses a singleton HttpClient instance.
The connection count crosses the 1,920 limit, and the response time also exceeds the 2-minute default timeout. Here are the App Service metrics.
We want to set an autoscale rule in App Service to balance the load and avoid the 502 Bad Gateway error, but I can't find any options related to connections or response time.
When you're adding a new metric to your autoscaling configuration, the first drop-down you are presented with allows you to set the Metric Source. By default this points to the current resource (which, when configuring autoscaling, is the App Service PLAN).
You want to set this source to "Other resource".
Choose the resource type you want and then choose the target resource. In this case, I think you want the App Service itself (or choose "App Service (Slots)" if you want to target a specific slot).
After changing the resource type, the new Metric Namespace will present different metric choices, one of which will be Response Time.
I know it's been a while since this was asked, but I hope this helps anyone new showing up here.
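The cross-resource metric source is easiest to pick in the portal as described above, but for completeness, here is a rough CLI sketch of an autoscale setting on the plan itself. All names are placeholders, and the rule uses the plan-level CpuPercentage metric for simplicity; a response-time rule would need the "Other resource" source from the portal flow:

```shell
# Create an autoscale setting targeting the App Service plan
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myPlan \
  --resource-type Microsoft.Web/serverfarms \
  --name myAutoscaleSetting \
  --min-count 1 --max-count 5 --count 1

# Add a scale-out rule: +1 instance when average CPU exceeds 70% over 5 min
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1
```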

Azure AppService auto shutdown to save cost

I have an App Service that I use from time to time (a test environment). How can I configure it to shut down automatically when I'm not using it?
An App Service always incurs cost, so that is not possible. You can create it when you need it using some sort of automation (PowerShell/CLI/ARM templates/etc.) and delete it when you no longer need it.
Another option: colocate it in the same App Service plan as some other app that you need all the time, so it just uses a small fraction of that plan's resources (and won't cost anything extra).
I would recommend the Dev/Test option if you are really worried about pricing.
Dev/Test pricing applies only when you run the resources within an
Azure subscription that is based on one of the Dev/Test offers.
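If deleting and recreating is too heavy-handed, one hedged alternative is to script the plan down to the Free tier outside working hours, assuming your test setup survives on F1 (Always On and custom domains, for example, do not). Names below are placeholders:

```shell
# Evening: drop the test plan to the Free tier
az appservice plan update --resource-group myResourceGroup --name myTestPlan --sku F1

# Morning: scale it back up to Standard
az appservice plan update --resource-group myResourceGroup --name myTestPlan --sku S1
```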
This may be a delayed answer, but I have found an elegant solution. There is an "Always On" flag on the settings page that can be used for this purpose.
Location
AppService --> Configuration --> General Settings --> Platform Settings --> Always On.
Usage
Always On: Keeps the app loaded even when there's no traffic. When Always On is not turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When Always On is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded.
Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
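For what it's worth, the flag can also be toggled from the CLI (resource names are placeholders):

```shell
# Enable Always On so the app is never unloaded between requests
az webapp config set \
  --resource-group myResourceGroup \
  --name myWebApp \
  --always-on true
```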
For dev/test there is an App Service plan that is free (it has the "Always On" setting turned off and disabled). Create a Free F1 App Service plan and then assign your app service to that plan.

How to get number of instances in time for Azure App Service

I have a website hosted on Azure and I was wondering how to get some info on the number of instances over time. I'm looking for info like this:
User XYZ changed the number of instances from 1 to 3 on 2018-09-18 10:00
This could also be very useful when autoscaling is enabled. Is this kind of info available somewhere in the Azure portal? I looked into the Activity log, and there seems to be an "Update hosting plan" operation, but I can't read the number of instances from it.
You can view the Run History to see the number of instances, via Scale-out (App Service plan) in your App Service.
Alternatively, you can also get info on a specific instance via Process Explorer for the web app. Ref: Monitoring your multiple Azure Web App instances
Update
I'm looking for a kind of audit of WHO changed the number of
instances and WHEN. Is it at least possible to retrieve this info
from some logs?
As far as I know, there is no log that matches exactly what you expect. The WHO is not a specific person here, since with Autoscale enabled the number of instances for your web app is increased or decreased automatically. You can configure Autoscale settings to be triggered based on metrics that indicate load or performance, or at a scheduled date and time. When an autoscale rule triggers, your instances scale out or in automatically. You can see the logs via "click here to see more details" in Run history.
If you have not enabled Autoscale and have not activated any scale rules, scaling will not be triggered, so there is no log of instance changes.
Ref: Understand Autoscale settings
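As a sketch, you can also pull autoscale-related events out of the Activity log with the CLI; the timestamp and caller fields answer the WHEN/WHO part as far as the platform records it (for autoscale events the caller is typically the autoscale engine, not a person). The JMESPath filter below is an assumption about how the operation names are spelled:

```shell
# List autoscale events from the last 7 days with who/when/what columns
az monitor activity-log list \
  --resource-group myResourceGroup \
  --offset 7d \
  --query "[?contains(operationName.value, 'Autoscale')].{when:eventTimestamp, who:caller, what:operationName.localizedValue}" \
  --output table
```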

Does Azure Cloud Service Load Balancer take Role Status into account?

Here's my scenario:
I have an Azure Cloud Service that runs a "hefty" .NET WCF project. The heftiness comes in with the startup tasks, as we cache a large amount of data into memory to make the project run quickly.
We have some logic that overrides the OnStart method of the RoleInstance to perform this caching, so the instance doesn't report as "Ready" until all of the caching is complete.
When we deploy our service, we have 2 instances (so they're in separate Fault/Update domains).
To that scenario I have 2 questions:
When we deploy an update or Microsoft performs maintenance against one of these managed VM's, does the Azure Load Balancer take the role state into account and not route traffic to it until it's in a "Ready" state?
For the aforementioned load balancer, do I have to configure anything for the cloud service to balance between the multiple instances? I was always under the impression that Microsoft managed that for you; this way, if you scale out to N role instances, the cloud service will take the number of instances into account and assign load accordingly.
Thanks!
It is handled for you. The load balancer probe communicates with the guest agent on each VM which only returns an HTTP 200 once the role is in the Ready state. However, if you’re using a web role and running w3wp.exe on it, the load balancer is not able to detect any failures like HTTP 500 responses that it may generate.
In that case, you’d need to insert an appropriate LoadBalancerProbe section in your .csdef file and also properly handle the OnStop event. This article describes the default load balancer behaviour in more detail, as well as how to customise it.
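For reference, a custom probe in the .csdef looks roughly like this; the service, role, endpoint, and path names are placeholders, so check the schema reference for the exact attributes:

```xml
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <LoadBalancerProbes>
    <!-- Probe an app-level health endpoint instead of relying on the guest agent -->
    <LoadBalancerProbe name="WebProbe" protocol="http" path="/health" intervalInSeconds="15" timeoutInSeconds="31" />
  </LoadBalancerProbes>
  <WebRole name="MyWebRole">
    <Endpoints>
      <!-- Associate the probe with the public endpoint -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" loadBalancerProbe="WebProbe" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```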
