We are using azure app service codeless implementation of application insights: https://learn.microsoft.com/en-us/azure/azure-monitor/app/azure-web-apps?tabs=net#enable-agent-based-monitoring
We are also using Front Door, so all the health probe HEAD requests end up in Application Insights, creating a lot of noise and extra cost.
I understand that if you are using the Application Insights SDK and have an applicationinsights.config file, you can filter these requests out.
But is there a way of doing this with agent-based monitoring? The doc hints that applicationinsights.config settings can be set as application settings on the App Service, but does anyone have an example of how to do filtering this way?
Currently, the telemetry processors (preview) feature for filtering out unwanted telemetry data is available in codeless application monitoring only for Java apps, via the standalone Java 3.x agent (examples here).
For other environments/languages and advanced configurations, manual instrumentation with the SDK might still be the way to go. Although it would require some management effort, this approach is much more customizable and would give you greater control over the telemetry you want to ingest.
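For reference, with the classic .NET SDK such a filter is an ITelemetryProcessor. A minimal sketch (the class name, namespace, and the HEAD-based match are assumptions; adjust the condition to your probe path):

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops request telemetry generated by Front Door health probes.
public class HealthProbeFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public HealthProbeFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Front Door probes arrive as HEAD requests; match on the operation name.
        if (item is RequestTelemetry request &&
            request.Name != null &&
            request.Name.StartsWith("HEAD", StringComparison.OrdinalIgnoreCase))
        {
            return; // swallow the item instead of forwarding it down the chain
        }

        _next.Process(item);
    }
}
```

With the SDK, the processor is registered either in code via the telemetry processor chain builder or in applicationinsights.config under the TelemetryProcessors element (the assembly-qualified type name is a placeholder for your own).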
Regardless of the approach, to reduce the volume of telemetry without affecting your statistics, you can try configuring Sampling, either via Application settings or the SDK.
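If you go the SDK route, adaptive sampling can be configured in a couple of lines. A sketch for the classic .NET SDK (the 5 items/second cap is just an illustrative value):

```csharp
using Microsoft.ApplicationInsights.Extensibility;

// Configure adaptive sampling on the default telemetry sink.
var builder = TelemetryConfiguration.Active
    .DefaultTelemetrySink
    .TelemetryProcessorChainBuilder;

// Cap outgoing telemetry volume; 5 items/second is an illustrative value.
builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5);
builder.Build();
```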
From a Front Door configuration perspective, you could increase the interval between health checks to reduce the frequency of requests. Alternatively, if only one backend in your backend pool is in an enabled state (whether it is the only backend or the others are disabled), you can disable the health probes entirely. This should help reduce the load on your application backend as well as the telemetry traffic.
If you go to your Azure web app and select Application Insights in the left-hand panel, then View Application Insights data, and then click Availability in the left-hand panel, you can add new tests. Here you can specify the health/ping endpoint for the site, and you can also configure the associated alert rules.
Azure also has a newer feature on the web app called Health check. All you have to do is enable it and point it at your health/ping endpoint; you can then configure rules here as well.
With both methods, the health endpoint is triggered by Azure, and if something is not right according to the alert rules, you get an alert message.
But what is the difference between the two approaches?
The difference is that if your web app runs on multiple instances (if you have configured scale-out rules), then with Health check, when an instance fails to respond to the ping, the system marks it unhealthy and removes it from the load balancer rotation. This increases your application's average availability and resiliency.
An availability test in Application Insights does no such thing; it only checks health.
You can review these docs: Health Check is now Generally Available, Does App Service Health Checks logs in Application Insights?, What App Service does with Health checks.
An availability test in Application Insights is focused on checking health and alerting through some channel, while Health check was released with a much broader scope:
Pings all instances every minute (similar to what an availability test does)
Removes the instance from rotation if the ping fails
Restarts the underlying VM
Replaces the instance if needed
Helps with scaling out/up to new instances
Moreover, this can be used for more things, like reporting. Please note that it is not used for premium services.
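Either way, both mechanisms need an endpoint to ping. A minimal ASP.NET Core health endpoint as a sketch (the /health path is an assumption; use whatever path you configure in the portal):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register the health check services.
builder.Services.AddHealthChecks();

var app = builder.Build();

// Expose the endpoint that App Service Health check (or an
// Availability test) will ping; the path is your choice.
app.MapHealthChecks("/health");

app.Run();
```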
We have this UWP app deployed at some client machines with telemetry code.
We now want no telemetry data (if not possible reduce the traffic) to flow into app insights in azure.
If we delete the App Insights resource itself, will that reduce the traffic through the client ISP, or will data still leave the app and simply no longer be monitored?
We tried ingestion sampling, but it only discards some of the telemetry that arrives from your app, at a sampling rate that you set; it doesn't reduce the telemetry traffic sent from your app.
Is there a way we could handle this without changing the code?
Even if you delete the Application Insights resource, your app can still send telemetry data to the Azure servers; the data will simply be rejected. So deleting the resource does not reduce the telemetry traffic.
Without changing code, the only way to block the traffic is a firewall rule.
And in the future, you can use code like this to dynamically start/stop telemetry: TelemetryConfiguration.Active.DisableTelemetry = true;
Reduce telemetry traffic sent from your UWP app
The better way is to remove the telemetry code from the client apps, build a new version, and publish the update through the Store. If you don't want to reduce telemetry by editing code, you could also disable the server-side API used to receive the telemetry data; the client app will then fail when posting the data.
Is there any easy way to find out which applications are using a particular application insights from azure portal?
I have checked the various options in the portals but don't find any easy to understand interface where I can find the list of applications which are sending data to that particular application insights.
The application map should provide you with a good view of various resources using the app insights resource
The application map is good. You can also go to Performance, then choose Roles. Roles is in the same tab group as Operations and Dependencies. This will give you a listing of all services that use that Application Insights instance. This has the added benefit of allowing you to expand a particular node and see the actual instances.
This same approach also works for the Failures tab. You can see the number of calls and failures rolled up per service, and also see the breakout metrics per instance.
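The same breakdown is also available from the Logs blade; a Kusto query along these lines lists every role name reporting into the instance (the 1-day window is an illustrative choice):

```kusto
// Count requests per service (cloud role) over the last day.
requests
| where timestamp > ago(1d)
| summarize requestCount = count() by cloud_RoleName
| order by requestCount desc
```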
I have multiple services running on Service Fabric. I would like to add Application Insight for logging. I'm just wondering whether I have to add an Application Insight resource for each microservice or only one is common for all. What is the best practice?
There is no such thing as the best practice for this; it really depends. Some considerations:
Pricing: depending on the tier (Basic or Enterprise) you get a certain amount of data for free / included in the base price (see the docs). So in some cases, depending on the amount of traffic, you can reduce costs by having a dedicated AI resource per service: AI resources for services that send data below the threshold of the pricing plan are then (almost) free.
Querying: if you split up services per AI resource getting an overview of the whole system is difficult since at the moment you cannot create queries spanning multiple AI resources.
Responsibility: If you have multiple teams working on multiple services it might be an option to have an AI resource per team so they have a good insight in only the parts they are responsible for.
If you do decide to use a shared AI resource there are options like custom telemetry initializers to include custom data that further identify which ASF application or service is sending the data if it is not included by default.
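A telemetry initializer for that purpose might look like this (a sketch; the class name and the role name value are placeholders for your own service identifier):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with the originating service's role name
// so items can be told apart within a shared AI resource.
public class ServiceNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            // Placeholder: use your ASF application/service name here.
            telemetry.Context.Cloud.RoleName = "MyFabricApp.MyService";
        }
    }
}
```

The initializer is then registered, for example via TelemetryConfiguration.Active.TelemetryInitializers.Add(new ServiceNameInitializer());, and the role name shows up as cloud_RoleName in queries and in the application map.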
See also Add Application Insight to a existing Azure Service Fabric cluster for more info about how to integrate AI.
Now, when it comes to bringing data together, you do have some additional options that may or may not need additional services or configuration. For example:
PowerBi: You can visualize data of AI resources using dashboards, see https://learn.microsoft.com/en-us/azure/application-insights/app-insights-export-power-bi
OMS: Operations Management Suite, see https://blogs.technet.microsoft.com/msoms/2016/09/26/application-insights-connector-in-oms/. As Jesse mentions, you can link multiple AI resources.
Custom dashboards: Using the rest api you can create your own solution that displays data for one or more AI resources.
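As a sketch of the REST API option, querying one AI resource over HTTPS looks roughly like this (the app id, API key, and query text are placeholders, taken from the resource's API Access blade):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AppInsightsQuery
{
    static async Task Main()
    {
        // Placeholders: take these from the AI resource's "API Access" blade.
        var appId = "<your-app-id>";
        var apiKey = "<your-api-key>";
        var query = Uri.EscapeDataString("requests | summarize count() by cloud_RoleName");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", apiKey);

        var url = $"https://api.applicationinsights.io/v1/apps/{appId}/query?query={query}";
        var json = await client.GetStringAsync(url);

        Console.WriteLine(json); // raw JSON result tables
    }
}
```

Repeating the call per app id is one way to build a dashboard that spans several AI resources.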
Azure API Management promises 1000 requests per second per instance. (I don't know whether this is the correct rate, but let's assume it is.) My question is: how can we scale a web service without scaling its infrastructure, just by scaling the API Management instance?
For example, if Azure API Management supports 1000 requests per second per instance, then the backend service should also support the same request-handling threshold in its infrastructure. If that is the case, what is really meant by scaling up the web service via Azure API Management?
By using Azure API management you can turn on caching easily, which can significantly reduce the traffic to your back-end. In addition, your API Management instance can be scaled up easily to have more VMs behind it. However, if the back-end cannot handle the traffic (after caching), then you might need a more scalable back-end :)
Miao is correct. However, remember that API Management caching only works with GET requests. Also, the cache size provided by API Management is only 1 GB as of today (it may increase in the future), with no monitoring as of today. So if you need monitoring of the API Management cache, use an external cache like Redis.
When you talk about scalability, it applies at all layers. The API Management consumption plan can be a good option for auto-scaling. Then think of Azure VMSS or App Service autoscale for scaling the backend APIs. And if your backend APIs talk to a database, consider something like autoscale for the database on Azure, such as Azure SQL Hyperscale.
So scalability is not only at the API Management level; think carefully about all layers.
Sample implementation of Cache in API Management is here - https://sanganakauthority.blogspot.com/2019/09/improve-azure-api-management.html
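For reference, a minimal response-caching policy in API Management looks roughly like this (the 60-second duration is an illustrative value):

```xml
<policies>
    <inbound>
        <base />
        <!-- Serve a cached response if one exists for this request. -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <!-- Store the backend response in the cache for 60 seconds. -->
        <cache-store duration="60" />
    </outbound>
</policies>
```

Requests served from the cache never reach the backend, which is the main way API Management can raise effective throughput without scaling the service behind it.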