Sink configuration for Application Insights for Azure Service Fabric - azure

Where do I update the sink configuration for Azure Service Fabric?
https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/azure-diagnostics-configure-application-insights?
<SinksConfig>
<Sink name="ApplicationInsights">
<ApplicationInsights>{Insert InstrumentationKey}</ApplicationInsights>
<Channels>
<Channel logLevel="Error" name="MyTopDiagData" />
<Channel logLevel="Verbose" name="MyLogData" />
</Channels>
</Sink>
</SinksConfig>

These settings are configured as part of cluster creation. When you set up a cluster from the Azure portal, you provide the Application Insights key during setup and the sink is created for you.
If you use ARM templates, you have to configure it in the WadCfg section of your template, as sketched below.
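For orientation, here is a minimal sketch of where that SinksConfig block sits relative to DiagnosticMonitorConfiguration, based on the element names in the linked doc; the quota value is illustrative, and in an ARM template for a Service Fabric cluster the same structure appears as JSON under the WadCfg property of the IaaSDiagnostics VM extension settings:
<WadCfg>
<DiagnosticMonitorConfiguration overallQuotaInMB="4096" sinks="ApplicationInsights.MyTopDiagData">
<!-- logs, performance counters and ETW providers go here; each element can also carry its own sinks="..." attribute to route to a specific channel -->
</DiagnosticMonitorConfiguration>
<SinksConfig>
<Sink name="ApplicationInsights">
<ApplicationInsights>{Insert InstrumentationKey}</ApplicationInsights>
<Channels>
<Channel logLevel="Error" name="MyTopDiagData" />
<Channel logLevel="Verbose" name="MyLogData" />
</Channels>
</Sink>
</SinksConfig>
</WadCfg>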
Please take a look at this link to know more
Please take into account that these settings are targeted at monitoring and logging events from the cluster. If you are planning to use them to monitor your apps, I would recommend using EventFlow bundled in your application instead, because the cluster generally does not change as often as your application does.

Related

How can I use appInsights with HDInsight cluster

I have a Spark cluster on Azure HDInsight. Is there any way I can integrate Azure Application Insights with it to get all the monitoring and log analysis capabilities? This can be done through Azure Monitor Logs according to the Microsoft docs, but for some reason I need to specifically integrate my Spark app on HDInsight with Application Insights only. Couldn't find any documentation or example for this anywhere.
Unfortunately, you cannot use Application Insights to get logs of an HDInsight cluster.
Azure Monitor logs enable data generated by multiple resources, such as HDInsight clusters, to be collected and aggregated in one place to achieve a unified monitoring experience.
As a prerequisite, you'll need a Log Analytics Workspace to store the collected data. If you haven't already created one, you can follow instructions here: Create a Log Analytics Workspace.
What's the difference between Azure Monitor, Log Analytics, and Application Insights?
In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log Analytics and Application Insights have not changed, although some features have been rebranded to Azure Monitor in order to better reflect their new scope. The log data engine and query language of Log Analytics is now referred to as Azure Monitor Logs. See Azure Monitor terminology updates.
Can I use Application Insights with ...?
Web apps on an IIS server in Azure VM or Azure virtual machine scale set
Web apps on an IIS server - on-premises or in a VM
Java web apps
Node.js apps
Web apps on Azure
Cloud Services on Azure
App servers running in Docker
Single-page web apps
SharePoint
Windows desktop app
Other platforms
Found a way of making Application Insights work with an HDInsight (Spark) cluster. The application deployed on the cluster is a Spark application written in Scala (Maven based). Though Microsoft doesn't have an SDK for Scala at this point, I was able to use the applicationinsights-logging-log4j dependency to send app logs as well as Spark YARN logs to AppInsights, which was the end goal in my case.
Here's how to do it:
Add these dependencies to pom.xml
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>applicationinsights-core</artifactId>
<version>2.6.1</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>applicationinsights-logging-log4j1_2</artifactId>
<version>2.6.1</version>
</dependency>
Use the ApplicationInsightsAppender class
import org.apache.log4j.{ Logger, Level }
import com.microsoft.applicationinsights.log4j.v1_2.ApplicationInsightsAppender
object AppInsightLogger {
  // Attach the Application Insights appender to the root logger so that
  // Spark/YARN framework logs are forwarded in addition to our own messages.
  private val rootLogger = Logger.getRootLogger
  private val ai = new ApplicationInsightsAppender()
  ai.setInstrumentationKey("your-key")
  ai.activateOptions()
  rootLogger.addAppender(ai)
  // @transient + lazy so the logger is not serialized with Spark closures.
  @transient lazy val logger: Logger = Logger.getLogger(this.getClass)
  logger.setLevel(Level.INFO)
  def info(message: String): Unit = {
    logger.info(message)
  }
}
Finally, this can be used anywhere in the application such as:
AppInsightLogger.info("Streaming messages from EH")
I was able to get Spark YARN logs as well as custom logs from the application deployed on HDInsight into AppInsights without using an SDK for Scala! (The dashboards and telemetry data cannot be seen using this approach. We get the logs as "Trace" with different severity levels, and we can see exceptions as well, if any. Use the "Search" option on the portal for viewing the logs.)
(Screenshots: trace logs and exception logs as seen in AppInsights)
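If you prefer to wire the appender up declaratively rather than in code, the same ApplicationInsightsAppender can be registered through a standard log4j 1.2 XML file. This is a sketch under the assumption that the instrumentationKey param maps to the setInstrumentationKey setter used above; where you place the file on the Spark cluster depends on your deployment:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<!-- Appender class comes from the applicationinsights-logging-log4j1_2 artifact added above -->
<appender name="aiAppender" class="com.microsoft.applicationinsights.log4j.v1_2.ApplicationInsightsAppender">
<param name="instrumentationKey" value="your-key" />
</appender>
<root>
<priority value="INFO" />
<appender-ref ref="aiAppender" />
</root>
</log4j:configuration>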

Possible to integrate Azure Application Insights with an existing Service Fabric cluster in a code-less manner?

Would like to use Application Insights, especially the "Azure live metrics stream" feature, on existing PROD Azure Service Fabric workloads to do performance analysis. Does Service Fabric have built-in integration with Azure Application Insights?
Is it possible to do it in a code-less manner, like how Application Insights can be enabled via the portal for Web Apps/Azure Functions? If not, why?
Then, how to do it in a code-based manner? Any reference for the code changes would be helpful.
What is the difference between code-less and code-based monitoring? When to choose one over the other? Our requirement is to study the performance of the application (deployed on various nodes of the PROD Service Fabric cluster) under different load.
Please clarify the above list of queries.
Does Service Fabric have built-in integration with Azure Application Insights?
You could leverage Windows Azure Diagnostics (WAD) extension to sink SF cluster logs and/or Perf metrics into App Insights - Configuring Application Insights with WAD
There is no way to monitor an application running in SF via AppInsights without a small amount of coding.
Then, how to do it in a code-based manner? Any reference for the code changes would be helpful.
Here you go - Monitor and diagnose an ASP.NET Core application on Service Fabric using Application Insights.
What is the difference between code-less and code-based monitoring? When to choose one over the other? Our requirement is to study the performance of the application (deployed on various nodes of the PROD Service Fabric cluster) under different load.
As I said, to monitor your app you have to write some code, although it's super simple. Other than that, here is a general recommendation - Event analysis and visualization with Application Insights:
It is recommended to use EventFlow and WAD as aggregation solutions, because they allow for a more modular approach to diagnostics and monitoring, i.e. if you want to change your outputs from EventFlow, it requires no change to your actual instrumentation, just a simple modification to your config file. If, however, you decide to invest in using Application Insights and are not likely to change to a different platform, you should look into using Application Insights' new SDK for aggregating events and sending them to Application Insights. This means that you will no longer have to configure EventFlow to send your data to Application Insights, but instead will install the Application Insights Service Fabric NuGet package.
Here is the link to the best practices - Monitoring and diagnostics on SF platform.

Azure PaaS for Storing Centralized Log Data for Microservices

If we deploy a microservice in Azure AKS, there may be multiple pods and replicas of the same service. If a microservice wants to keep any custom log information about its flow or errors, where is it best to store such logs centrally? Any PaaS services, e.g. Azure Storage? Any real experience to share here?
I would suggest Azure Application Insights. You can enrich your telemetry with AKS details using this integration package:
... when using Microsoft Application Insights for Kubernetes, you will see Kubernetes related properties like Pod-Name, Deployment ... on all your telemetry entries. Proper values will also be set to make use of the rich features like enabling the Application Map to show the multiple micro services on the same map.
For logging within your application you can use the ILogger interface (if using .NET) and pipe it to Application Insights.
Or you can write your own logging using the SDK, available in selected languages.
For Java applications, you can pipe your logs as well. You probably have to manually add pod details to your telemetry using telemetry initializers, as the Microsoft Application Insights for Kubernetes package is .NET only. As far as I know, it uses the Kubernetes REST API to query for details.

How to add the instrumentation key in Windows Dev Center Dashboard / Usage page to Azure Portal?

I have published an app to the Windows Store and subscribed to Azure.
It seems I can only deploy/create a new Application Insights resource.
How to add/link the instrumentation key found in Windows Dev Center App Analytics / Usage page to Azure Portal / Application Insights?
If you follow the standard getting-started flow for Application Insights, use the InstrumentationKey that the AI portal provides and specify the iKey in the ApplicationInsights.config file. If the key is in the .config file, Windows Dev Center should pick up the key as part of the ingestion/publishing process.
As one of the comments indicated, you can use these instructions to add the Application Insights NuGet package to your application and insert the instrumentation key in code or in the ApplicationInsights.config file.
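For reference, a minimal sketch of what the key looks like in ApplicationInsights.config; the xmlns is the one the SDK NuGet package generates, the key value here is a placeholder, and the rest of the generated file stays untouched:
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
<!-- Paste the instrumentation key copied from the Application Insights resource in the Azure portal -->
<InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
<!-- TelemetryModules / TelemetryInitializers sections generated by the NuGet package remain as-is -->
</ApplicationInsights>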
However, please note that while you can still use Application Insights for Windows Store apps, we are recommending that you onboard to this NuGet package and start using HockeyApp for Windows Store apps:

Logging Http Service Request Queues counters in Azure Diagnostics

I was reading about how we can detect a request queueing problem in our Azure service irrespective of the level in our service at which it is queued.
http://blog.leansentry.com/2013/07/all-about-iis-asp-net-request-queues/
After reading the above-mentioned article, I feel that setting up monitoring on the Http Service Request Queues\CurrentQueueSize performance counter is what I actually want. But now the question is: how can I enable logging of this counter in Azure Diagnostics? I read around the internet and didn't get much. Any idea?
When you create your project in Visual Studio you will see that you get a <Import moduleName="Diagnostics" /> in your ServiceDefinition.csdef file. This is what enables Windows Azure Diagnostics (WAD) in your deployment.
You will also see a diagnostics.wadcfg file in your Cloud Service project if you expand your Roles. In that wadcfg you can add any perf counters you want, and you will see some examples in there that you can use as a template. For the HTTP queue you would add something like:
<PerformanceCounterConfiguration counterSpecifier="\HTTP Service Request Queues(*)\CurrentQueueSize" sampleRate="PT3M" />
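For context, a sketch of where that element sits inside a WAD 1.0 diagnostics.wadcfg; the quota, transfer period and sample rate values below are illustrative, and if wildcard instances are not supported by your WAD version you can name the specific request-queue instance instead of (*):
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
<PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
<!-- counters already present in the template stay here -->
<PerformanceCounterConfiguration counterSpecifier="\HTTP Service Request Queues(*)\CurrentQueueSize" sampleRate="PT30S" />
</PerformanceCounters>
</DiagnosticMonitorConfiguration>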
Once you do this then the perf counter will be in your storage account and you can query it using any standard storage tool, or one of the WAD tools such as Cerebrata, or you can configure Verbose monitoring from the management portal and then see the counter in the Monitor tab on the management portal.
Also note that Windows Azure Diagnostics 1.2 has just been released and is a good option if you haven't already enabled WAD 1.0 in your project. For more info see http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/.
