Use separate Application Insights resources for error logging and user analytics?

We are planning on using Azure Application Insights for our web app. It has been suggested that we use two instances: one for error logging and the other for user analytics. While these are different needs, it seems like one instance can accommodate both needs. What is best practice?

[I'm from Application Insights team]
The best practice is that telemetry from one app should go to the same Application Insights resource.
There might be advanced (and rare) scenarios (for instance, one stream represents an audit log and should be retained much longer and/or has different RBAC requirements) where it makes sense to send to different Application Insights resources.
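To make the common case concrete, here is a minimal sketch of one resource receiving both streams, using the Node.js applicationinsights SDK (the connection string, event name, and properties are placeholders, not anything from the question):

```typescript
import * as appInsights from "applicationinsights";

// One Application Insights resource receives both streams.
appInsights
  .setup("InstrumentationKey=00000000-0000-0000-0000-000000000000")
  .start();

const client = appInsights.defaultClient;

// Error logging: exceptions land in the "exceptions" table.
try {
  throw new Error("payment provider timed out");
} catch (err) {
  client.trackException({ exception: err as Error });
}

// User analytics: custom events land in the "customEvents" table.
client.trackEvent({
  name: "CheckoutCompleted",
  properties: { tenant: "contoso", plan: "premium" },
});
```

Both tables live in the same resource, so you can still query, retain, and secure them as one unit.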

Related

Should Azure Log Analytics and Application Insights be used per app or per environment?

We have an Azure-based system which is growing in complexity, and we need to monitor chains of events and ensure they arrive where we expect them to.
We have an on-premises Java application which sends events to an IoT Hub. The IoT Hub routes to Service Bus queues. We have functions that update a Cosmos DB database, trigger other functions, or route to additional queues. Some functions are also callable through an API Management instance.
Our functions are already connected to Application Insights, and each Application Insights instance is named the same as its Function App (IIRC this naming was suggested through the form that created the AI resource).
The application map in Application Insights makes me lean toward one AI per environment, to have a complete map of the system. One Log Analytics workspace per environment also seems logical, to be able to correlate data if needed.
What is the correct path for Log Analytics and Application Insights, respectively?
If it is not as clear-cut as stated in my title, what factors do I need to consider when I start to use these services?
The correct number of instances is the one that works best for you, whether that exactly follows recommended practices or not.
The recommendation is to use one workspace per environment and set cloud_RoleName in App Insights to distinguish the parts of the system. Log Analytics has similar considerations.
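For example, a sketch of setting cloud_RoleName with the Node.js SDK (the connection string and role name are placeholders; other SDKs have equivalent hooks):

```typescript
import * as appInsights from "applicationinsights";

// Every service in the environment points at the same shared resource...
appInsights.setup("<shared-connection-string>").start();

const client = appInsights.defaultClient;
// ...but everything this process sends is tagged with its own role name,
// so it shows up as a separate node on the application map.
client.context.tags[client.context.keys.cloudRole] = "orders-api";
```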
Functions defaults to spinning up an App Insights instance along with the app because if you don't use App Insights you lose most of the logging ability. It's important to connect it to App Insights, but overriding the default behavior and connecting to a centralized workspace is common in larger systems.
There are certainly reasons you might want to split the workspaces, and you can union data across workspaces as needed to pull data together from both Log Analytics and App Insights instances (see the query sketch after the list of reasons below):
• Data access control or geographic location. If you need to keep a portion of the data within certain geographic boundaries or limit access to certain people, split that portion off.
• Billing. Similar to the security concern: if, for whatever reason, billing for different portions of the application needs to be split, then you would also want to split the logging portion.
• Different portions of the system rarely interact, or are maintained by different teams, and organizing the data into separate workspaces provides more benefit than the hassle of cross-workspace queries costs.
• You are going to surpass the limits of a single resource. Very few applications actually hit these limits, but they are there.
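As an illustration of pulling split data back together, here is a sketch of a cross-workspace union using the @azure/monitor-query package (the workspace IDs are placeholders, and this assumes workspace-based App Insights, where request telemetry lands in the AppRequests table):

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient } from "@azure/monitor-query";

const logsClient = new LogsQueryClient(new DefaultAzureCredential());

// Union the AppRequests table of the primary workspace with the same
// table in a second workspace, then roll requests up per role per hour.
const query = `
  union AppRequests, workspace("<other-workspace-id>").AppRequests
  | summarize count() by AppRoleName, bin(TimeGenerated, 1h)
`;

async function main(): Promise<void> {
  const result = await logsClient.queryWorkspace(
    "<primary-workspace-id>",
    query,
    { duration: "P1D" } // last 24 hours
  );
  console.log(result);
}

main();
```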

Application Insights usage

Is there an easy way to find out which applications are using a particular Application Insights resource from the Azure portal?
I have checked the various options in the portal but haven't found an easy-to-understand interface where I can see the list of applications which are sending data to that particular Application Insights resource.
The application map should provide you with a good view of the various resources using the App Insights resource.
The application map is good. You can also go to Performance, then choose Roles. Roles is in the same tab group as Operations and Dependencies. This will give you a listing of all services that use that Application Insights instance. This has the added benefit of allowing you to expand a particular node and see the actual instances.
This same approach also works for the Failures tab. You can see the number of calls and failures rolled up per service, and also see the breakout metrics per instance.

Azure Application Insights for Service Fabric

I have multiple services running on Service Fabric. I would like to add Application Insights for logging. I'm just wondering whether I have to add an Application Insights resource for each microservice, or whether one is common to all. What is the best practice?
There is no such thing as the best practice for this. It really depends. Some considerations:
Pricing: depending on the level (Basic or Enterprise) you will get an amount of data for free / included in the base price; see the docs. So in some cases, depending on the amount of traffic, you can reduce costs by having a dedicated AI resource per service. AI resources for services that send data below the threshold of the AI pricing plan are then (almost) free.
Querying: if you split up services across AI resources, getting an overview of the whole system is difficult, since at the moment you cannot create queries spanning multiple AI resources.
Responsibility: if you have multiple teams working on multiple services, it might be an option to have an AI resource per team so they have good insight into only the parts they are responsible for.
If you do decide to use a shared AI resource, there are options like custom telemetry initializers to include custom data that further identifies which ASF application or service is sending the data, if that is not included by default.
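As a sketch of that idea using the Node.js SDK (which calls these telemetry processors; the .NET SDK's telemetry initializers serve the same purpose, and the Service Fabric names here are made up):

```typescript
import * as appInsights from "applicationinsights";

appInsights.setup("<shared-connection-string>").start();

// Stamp every outgoing telemetry item with the application/service
// that produced it, so a shared AI resource stays queryable per service.
appInsights.defaultClient.addTelemetryProcessor((envelope) => {
  const baseData = envelope.data.baseData;
  if (baseData) {
    baseData.properties = {
      ...baseData.properties,
      sfApplication: "fabric:/MyApp",
      sfService: "fabric:/MyApp/OrderService",
    };
  }
  return true; // returning true keeps the item; false drops it
});
```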
See also Add Application Insight to a existing Azure Service Fabric cluster for more info about how to integrate AI.
Now, when it comes to bringing data together, you do have some additional options that may or may not need additional services or configuration. For example:
Power BI: you can visualize data from AI resources using dashboards, see https://learn.microsoft.com/en-us/azure/application-insights/app-insights-export-power-bi
OMS: Operations Management Suite, see https://blogs.technet.microsoft.com/msoms/2016/09/26/application-insights-connector-in-oms/. As Jesse mentions, you can link multiple AI resources.
Custom dashboards: using the REST API you can create your own solution that displays data for one or more AI resources.
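For instance, a minimal sketch of pulling data for a custom dashboard over the public query API (the app ID and API key are placeholders, created under the resource's API Access blade; requires Node 18+ for global fetch):

```typescript
const appId = "<application-id>";
const apiKey = "<api-key>";
// Count requests per role to see which services report to this resource.
const query = "requests | summarize count() by cloud_RoleName";

async function fetchRoleCounts(): Promise<void> {
  const url =
    `https://api.applicationinsights.io/v1/apps/${appId}/query` +
    `?query=${encodeURIComponent(query)}`;
  const response = await fetch(url, { headers: { "x-api-key": apiKey } });
  const body = await response.json();
  // Results come back as tables: [{ name, columns, rows }, ...]
  console.log(JSON.stringify(body.tables, null, 2));
}

fetchRoleCounts();
```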

Logging in Azure Web App with Enterprise Library

Is it possible to use Enterprise Library for logging errors in my Azure Web App?
Probably, but where would this data live? In log files? If you are running in multiple Azure regions, you'll have log files in multiple places. And how would you query the data when you need to troubleshoot, detect patterns, aggregate, and perform calculations?
I think you'll run into a lot of operational issues with traditional log files, especially with high-traffic sites.
Azure also provides Application Insights. I would look into that first before looking into writing to log files.
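The question is about .NET, but as a rough sketch of what that looks like (shown here with the Node.js SDK; the .NET SDK offers equivalent auto-collection, and the connection string is a placeholder), most telemetry is collected without writing log files at all:

```typescript
import * as appInsights from "applicationinsights";

appInsights
  .setup("<connection-string>")
  .setAutoCollectRequests(true)     // incoming HTTP requests
  .setAutoCollectExceptions(true)   // unhandled exceptions
  .setAutoCollectDependencies(true) // outbound HTTP/SQL calls
  .start();
```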

Logging and tracing on Azure

We are looking for a logging and tracing solution for our multi-tenant application with a distributed architecture that will be hosted on Azure.
We have already gone through these two articles: Troubleshooting Best Practices for Developing Windows Azure Applications and Enabling Diagnostics in Windows Azure. Is there any other, better solution?
We would like to know
• what are the best practices and approach for it?
o Storage strategy?
• Any third-party / open source tool that helps us with the same?
EDIT:
We are looking for two things:
Best practice for storage strategy: where should we store log data? Since it's a multi-tenant, multi-tier application, should we keep data separate for each tier per tenant, combine them, or is there a better solution? How do we store the data so that we can individually trace a single request that spanned multiple tiers?
A tool that helps us view trace data and analyse, filter, and sort it. Since the trace data will be comparatively huge, it should be able to trace the flow of a single task that spanned multiple tiers.
I have used System.Diagnostics with an XML listener in an on-premises application with multiple tiers (web app, service layer 1, service layer 2, etc.). I then used Microsoft Service Trace Viewer to view the log data. SvcTraceViewer supports many features, including combining log files from many tiers, graphical representation, tracing an individual request, etc.
So: something similar in a third-party / open source tool for Azure, one that also helps a support engineer drill down into an issue and resolve it.
I would recommend looking into an open source library like log4net. It provides a pluggable, fully configurable, and very flexible way to log messages with a lot of custom data to a lot of sources. Configuration for it can be retrieved from external sources: XML, code, config files, etc.
You can create your own appender for Table Storage or find someone else's.
HTH
