Is it possible to get Windows event logs into Application Insights? - azure

We have Application Insights running in our application (on premise and hosted in Azure) and we are sending telemetry without issues: different resources, regular data, pageViews, exceptions, traces, etc. Recently I was asked to increase the telemetry data by adding Windows Event logs (from Event Viewer), but, to be honest, I am quite new to Azure and Application Insights, and I find all the documentation a bit confusing: everything I find talks about Azure Monitor and Log Analytics workspace configuration, but nothing clear enough (at least to me) points me to getting this data logged into Application Insights resources specifically. Is this possible to achieve? Something like adding a NuGet package and configuring ApplicationInsights.config?
Update... I've followed your suggestions, added the NuGet package for EtwCollectorTelemetryModule, and modified the ApplicationInsights.config file.
This is how it looks now:
<Add Type="Microsoft.ApplicationInsights.EtwCollector.EtwCollectorTelemetryModule, Microsoft.ApplicationInsights.EtwCollector">
  <Sources>
    <Add ProviderName="Microsoft-Windows-Eventlog" Level="Warning" />
  </Sources>
</Add>
But I'm still not able to see any kind of logs in the traces table (if I understand correctly, logs will be sent to that table). Do I need to initialize this module in order to start tracing these logs?
Or am I doing something wrong?

I agree with @Peter Bons here. When you need to add Windows Event logs to Azure Application Insights, you need to add the ETW module in the config file.
And please note, the EtwCollectorTelemetryModule is an ASP.NET module, so it is suitable for ASP.NET applications.
You can refer to this tutorial to modify the configuration if your program is written in ASP.NET. Please note, the 'Add Application Insights Telemetry' step mentioned in the doc is the operation 'right-click the project and click Configure Application Insights'.
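If you would rather wire the module up in code (the question above also asks whether the module needs to be initialized), a minimal sketch along the following lines should work, assuming the Microsoft.ApplicationInsights.EtwCollector NuGet package is referenced; verify the type and property names against the package version you actually use:

using System.Diagnostics.Tracing;
using Microsoft.ApplicationInsights.EtwCollector;
using Microsoft.ApplicationInsights.Extensibility;

public static class EtwTelemetrySetup
{
    // Call once at application startup (e.g. from Application_Start).
    public static void Register()
    {
        var etwModule = new EtwCollectorTelemetryModule();
        etwModule.Sources.Add(new EtwListeningRequest
        {
            ProviderName = "Microsoft-Windows-Eventlog", // same provider as in ApplicationInsights.config
            Level = EventLevel.Warning
        });
        // After initialization the collected events surface as traces in Application Insights.
        etwModule.Initialize(TelemetryConfiguration.Active);
    }
}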

Related

Application Insights Reporting Duplicate Events for each Server Request

I have an API App running under Azure App Service, with Application Insights installed to track server side telemetry of API calls. When viewing Application Insights in the Azure portal, I am seeing two events for every one server call. Each event has an exact duplicate with the same timestamp, response time, telemetry, etc. I have verified that only one event is in the web server logs, so I'm not accidentally calling the same function twice from the client.
Here are a couple of screenshots to illustrate:
What could be causing this? And how can I fix it?
There is one known scenario that may lead to this data duplication:
1. The application is not onboarded to the AI SDK and is deployed as an Azure Web App.
2. The AI Extension is installed to the app -> after this step you start to receive data without needing to modify your code.
3. Later on you decide to use more powerful features of AI, let's say custom event tracking, so you onboard your application to AI from Visual Studio and re-deploy.
Now you may end up in the situation where the HTTP module is registered twice and you start to receive duplicate request data. It happens because the AI NuGet packages add an HTTP module definition in web.config, but the extension installation drops an additional assembly into your application's bin folder that registers the HTTP module dynamically during app start - Microsoft.AI.HttpModule.dll (Microsoft.ApplicationInsights.Extensibility.HttpModule.dll in previous versions). To correctly handle this case you need to remove the extension leftovers during your application deployment by choosing "Settings -> Remove additional files from destination" when deploying from VS.
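For reference, the web.config registration added by the AI NuGet packages typically looks like the snippet below (treat the exact module name and type as version-dependent); if Microsoft.AI.HttpModule.dll is also sitting in the bin folder, the same module ends up registered a second time at app start:

<system.webServer>
  <modules>
    <!-- Added by the Application Insights Web NuGet package; the extension's
         Microsoft.AI.HttpModule.dll registers an equivalent module dynamically,
         which is what produces the duplicate request telemetry. -->
    <add name="ApplicationInsightsWebTracking"
         type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web"
         preCondition="managedHandler" />
  </modules>
</system.webServer>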

Your role instances have recycled a number of times during an update or upgrade operation

I am trying to deploy a Cloud Service with 1 Web Role to Azure.
When I do so, I get this message:
Your role instances have recycled a number of times during an update or upgrade operation. This indicates that the new version of your service or the configuration settings you provided when configuring the service prevent the role instances from running. Verify your code does not throw unhandled exceptions and that your configuration settings are correct and then start another update or upgrade operation.
The project runs just fine locally, and I'm having a hard time figuring out how to start debugging this issue. Are there any common problems that cause this message or steps to figure out what is causing it?
See https://learn.microsoft.com/en-us/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data. This will walk through all of the diagnostic data available as well as how to troubleshoot the most common issues.
We also had this annoying problem, and in our case:
1. We use local storage, but it wasn't defined in the service definition (or the Worker Role's properties).
2. Our worker role project has a reference to a service project which has a reference to a data layer project, but the worker role project doesn't have a reference to the data layer project itself. As soon as we added a reference to the data layer project in the worker role project, it deployed successfully.
Problem #1 can be easily noticed if you first run the project on your local machine; an exception will be thrown.
Problem #2, however, is more difficult, mainly because it runs just fine on the local machine. After 5 days of troubleshooting, we finally found the problem. So, check all references and also add references to the projects that your other references depend on.
We had a similar problem, and it was due to some DLLs failing to load (due to a version different from the one MS has deployed to the VM).
Try setting Copy Local to "true" for all the references in the project, and re-deploy.
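For context, "Copy Local" in the Visual Studio properties window is stored as the <Private> metadata on the reference in the project file, so you can also verify it there; the assembly name below is just a placeholder:

<ItemGroup>
  <!-- Copy Local = True in Visual Studio is persisted as <Private>True</Private> -->
  <Reference Include="Some.DataLayer.Assembly">
    <Private>True</Private>
  </Reference>
</ItemGroup>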
I would either remote desktop to the cloud instance and review the Windows Event Logs for exceptions, or redeploy with IntelliTrace enabled. If you choose the latter, you can download the IntelliTrace logs from Visual Studio and debug from there.
http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx
One way to find out the actual error is to click on the "1 instance" at the top of the Dashboard after trying to deploy your web role. It will tell you the status of the role instance, and the status should include more information about the type of error that is blocking your deployment.
It depends on what your case is. For me, the status claimed that I had an unhandled SecurityException. After some investigation, it turned out that in my role's OnStart() I tried to create an event source; however, the Azure service doesn't have the permission to create an event source.
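To illustrate, here is a hedged sketch of an OnStart() that guards the event source creation (the source name is hypothetical); alternatively, running the role elevated via <Runtime executionContext="elevated" /> in ServiceDefinition.csdef grants the permission the call needs:

using System.Diagnostics;
using System.Security;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        try
        {
            // Creating (or even checking) an event source touches the registry and
            // needs admin rights, which the role only has when it runs elevated.
            if (!EventLog.SourceExists("MyRoleSource"))
            {
                EventLog.CreateEventSource("MyRoleSource", "Application");
            }
        }
        catch (SecurityException ex)
        {
            // Swallowing the exception here keeps the role from recycling endlessly.
            Trace.TraceError("Could not create event source: {0}", ex.Message);
        }
        return base.OnStart();
    }
}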
For more possible issues, check http://blogs.msdn.com/b/kwill/archive/2013/09/06/troubleshooting-scenario-3-role-stuck-in-busy.aspx
For me, the issue was with my SQL Azure DB firewall rules. My Azure SQL Database servers are not set to "Allow Access to Azure services", so I have to explicitly list IPs that are allowed.
I discovered this after wrapping my code in a try/catch that swallowed all exceptions, refactoring my OnStart() and RunAsync() methods, and setting all my references to Copy Local = True. None of that worked; then I saw that I had this line in my RunAsync() method:
log4net.Config.XmlConfigurator.Configure();
I am using the AdoNetAppender for log4net and connecting to an Azure SQL DB for logging, so that led me to check the firewall rules.
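For illustration only (server, database, and table names are placeholders), the relevant piece of a log4net AdoNetAppender configuration looks roughly like this; XmlConfigurator.Configure() initializes the appender and opens the SQL connection as the role starts, which is why the missing firewall rule surfaced there:

<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <bufferSize value="1" />
  <connectionType value="System.Data.SqlClient.SqlConnection, System.Data" />
  <!-- Placeholder Azure SQL connection string; without a firewall rule for the
       role's outbound IP, this connection attempt fails at startup. -->
  <connectionString value="Server=tcp:myserver.database.windows.net,1433;Database=Logging;User ID=loguser;Password=..." />
  <commandText value="INSERT INTO Log ([Date],[Level],[Message]) VALUES (@log_date, @log_level, @message)" />
  <!-- parameter mappings omitted for brevity -->
</appender>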
For me, I had differing versions of NuGet packages in my various projects. Once I consolidated everything to the same version(s), it worked fine.
With the release of Windows Azure SDK version 2.2 for Visual Studio 2012 and 2013, you can now remote debug cloud resources from within Visual Studio.
Once your cloud service is published and running live in the cloud, you can simply set a breakpoint in your local source code. This may help you dig out what's going wrong!

Where is Umbraco.config stored in Azure cloud services?

I've got an existing Umbraco install on Azure cloud services (not Azure web sites), and although the web.config tells me that it should be found at ~/App_Data/umbraco.config, it isn't there.
Does anyone know where this would be stored? Is there a chance it is writing to a db table perhaps?
I need to edit some nodes, as I suspect that at least one is owned by a user who no longer exists, hence no nodes at that level are visible in the admin system (JS error).
The site was set up with Umbraco Azure Accelerator, if that offers any clues.
Thanks!
It is worth noting that Umbraco hasn't needed the Umbraco Accelerator for Azure in the latest versions due to the use of Azure Web Sites. Which version of Umbraco are you running? The Accelerator projects are being deprecated and have stopped receiving updates, as you can see from the lack of recent activity here. You can read more about the reasoning behind this, and about how the Accelerator's functionality is now part of the Azure core itself, over here.
So, assuming that your site is an old one and you cannot just reinstall it as an Azure Web Site, can you please first confirm that you have the XML cache enabled through the following setting in /config/umbracoSettings.config?
<XmlCacheEnabled>True</XmlCacheEnabled>
If this is false (as is best in all hosting environments except live), then we know that Umbraco will not use the cache. Can you please also check that the following section doesn't list any other machine IP addresses in umbracoSettings.config? Note that this is only relevant if the enabled attribute is true, as in the example below.
<distributedCall enabled="true">
<user>0</user>
<servers>
...
</servers>
</distributedCall>
Next, we need to check that Umbraco is still set to use the location /App_Data/umbraco.config through the web.config file.
<appSettings>
<add key="umbracoContentXML" value="~/App_Data/umbraco.config" />
</appSettings>
We should consider the way that Umbraco works on Azure and whether it could have any effect on your site. The Umbraco Accelerator used to be necessary to synchronise the umbraco.config file between Azure instances. With each instance running a separate Umbraco website, there has to be a way that they can talk to each other. The accelerator mirrors that cache file between instances.
Assuming that your code is a default install and hasn't been worked on by someone else before you, it could be an idea to reduce your site to a single instance. Then see if it generates the cache after restarting the website. Finally, you can upgrade the site to see if it regenerates the cache.
These issues are almost always caused by some kind of configuration issue. I also remember that you can simulate Azure deployment using the Windows Azure SDK, which you can use to examine for signs of the cache. Good luck.
I'm not sure about the Umbraco Accelerator or a non-Web Site project, but we are currently running Umbraco on an Azure web site, and App_Data\umbraco.config comes and goes as it pleases while the website inexplicably keeps working. I would like to find the reason behind this if anyone has an answer.

Force update Diagnostic Configuration file under wad-control-container for Azure

I would like to update the diagnostics configuration file for the Azure roles whenever I upgrade my deployment. How can I do this automatically?
From time to time we change our diagnostics configuration (using code) and upgrade the service. But whenever we upgrade the service, it is still using the old diagnostics configuration and we do not see any of the new logs we configured in the new code.
How can I achieve this so that whenever I upgrade my deployment, it upgrades the diagnostics configuration as well?
I wonder if you have a bug in your diagnostics updating code. If each role ran code in OnStart or Run to configure diagnostics on startup, there would be no reason that your instances wouldn't be properly configured. I tend to think that imperative code that configures diagnostics is inherently a bad idea in the long run, but it should still work. If you share the code, maybe I can spot an issue.
The best** way I have found to update and enforce configuration is to use the diagnostics.wadcfg file and update it. When you upgrade your deployment, it will use those settings if you have not overridden them in code somewhere. Contrary to Microsoft's guidance at that link, it should be the preferred method as opposed to code, which must be maintained and is orthogonal to your application's purpose. Said another way: a declarative configuration file that your ops team can maintain is usually a better idea than writing code. To use this, just include it in your deployment as content, delete any existing files in wad-control-container, and remove any code that configured diagnostics. It will then configure itself from that file when you next upgrade.
** You can also use a third-party SaaS monitoring service to set and maintain your diagnostics config. I work on one such service, but I am guessing you want to know how to do it yourself. :)
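As a rough example of the declarative approach (quotas, transfer periods, and the event log channels below are illustrative, not prescriptive), a minimal diagnostics.wadcfg included in the role project as content might look like this:

<?xml version="1.0" encoding="utf-8"?>
<!-- Minimal sketch of a classic diagnostics.wadcfg; adjust quotas and periods to your needs -->
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Warning" />
  <WindowsEventLog bufferQuotaInMB="1024"
                   scheduledTransferPeriod="PT1M"
                   scheduledTransferLogLevelFilter="Warning">
    <DataSource name="Application!*" />
    <DataSource name="System!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>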

IIS app pool recycling randomly every few seconds

I need to determine WHY the application pool is recycling (it's happening for no obvious reason).
Is there any way to determine this inside of the Application_End sub in the global.asax file?
I have put some basic logging in there, so I know WHEN it's shutting down, but I cannot tell why.
(And it's nothing obvious... it just seems like every couple of requests, certain operations cause the application to end. I have turned off every normal reason for recycling such as timeouts, memory checks, etc. The same code is working fine on a different server, so I am sure it's something wrong with this setup, but what?...)
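One thing that can be captured directly in Application_End is the shutdown reason that ASP.NET itself records. Here is a minimal sketch (in C#, although the question mentions a VB sub, so adapt the syntax) using HostingEnvironment.ShutdownReason, which reports whether the recycle came from a configuration change, a bin folder change, an idle timeout, and so on:

using System;
using System.Diagnostics;
using System.Web.Hosting;

public class Global : System.Web.HttpApplication
{
    protected void Application_End(object sender, EventArgs e)
    {
        // ShutdownReason is an ApplicationShutdownReason value such as
        // ConfigurationChange, BinDirChangeOrDirectoryRename or IdleTimeout.
        var reason = HostingEnvironment.ShutdownReason;
        Trace.TraceWarning("Application_End at {0:u}, reason: {1}", DateTime.UtcNow, reason);
    }
}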
You don't have to incur the overhead of adding custom logging; ASP.NET 2.0 health monitoring does the job for you. You can add the following configuration, which will write events to the event log with information about why the application pool was restarted.
To turn ASP.NET health monitoring ON, you can edit the "master" web.config file, normally found in %systemroot%\microsoft.net\framework\v2.0.50727\config.
First, look for <healthMonitoring> in the master web.config
Inside the healthMonitoring node, find the <rules> node
Inside rules, add the following:
<add name="Application Lifetime Events Default"
eventName="Application Lifetime Events"
provider="EventLogProvider"
profile="Default"
minInstances="1"
maxLimit="Infinite"
minInterval="00:01:00"
custom="" />
Reproduce the issue and look in the Application event log for a source of ASP.NET 2.0. This should log why the application pool was recycled.
Try looking in the EventLog. When the app pool recycles, there is an entry written to the log along with the reason.
The following link describes the errors codes you'll see in the eventlog for IIS 7.5
http://technet.microsoft.com/en-us/library/dd349270(WS.10).aspx
If there are not already log entries in your troubled machine's event viewer, you can modify IIS to log all app pool recycles.
The article from Microsoft on how to do it is found below:
http://support.microsoft.com/kb/332088
