I have created a query in (the wonderful tool) Application Insights Analytics which I intended to use for monitoring in one way or another, but from what I have found, this is not that easy.
The query returns some data I would like to set up Application Insights alerts on (such as if (column1 equals '1') then alert(), or if (column2 > 10) then alert()).
Or, if that is not possible, is the Analytics service available either from .NET code or PowerShell? If so, I would be able to create the alert service myself (not ideal, though).
Are any of the above features available?
(I do not think the functionality I'm after is available in Insights. I want to compare two custom events and, based on differences between them, take action if necessary.)
There's currently no way to get Azure alerts from an Analytics query.
However, there is a request for that on UserVoice:
https://visualstudio.uservoice.com/forums/357324-application-insights/suggestions/14428134-add-alerts-based-on-results-of-analytics-queries
so go upvote and comment on that to make your voice heard.
There's also a planned service to read data from Analytics through an API:
https://visualstudio.uservoice.com/forums/357324-application-insights/suggestions/4999529-make-data-accessible-via-apis-for-custom-processin
You could write your own service against that API to do the extra work.
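As a rough illustration of what such a service might look like once the API is available, here is a minimal sketch against the query endpoint that Application Insights eventually exposed at api.applicationinsights.io; the app id, API key, and query are placeholders:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AnalyticsPoller
    {
        private static readonly HttpClient Http = new HttpClient();

        // Runs an Analytics query via the REST API and returns the raw JSON result.
        public static async Task<string> RunQueryAsync(string appId, string apiKey, string query)
        {
            var url = "https://api.applicationinsights.io/v1/apps/" + appId +
                      "/query?query=" + Uri.EscapeDataString(query);
            using var request = new HttpRequestMessage(HttpMethod.Get, url);
            request.Headers.Add("x-api-key", apiKey); // API key created in the portal
            var response = await Http.SendAsync(request);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }

A scheduled job could run the query, inspect the returned rows, and raise its own alert (e-mail, webhook, etc.) when a threshold is crossed.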
I read about the difference between log-based and pre-aggregated metrics here:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/pre-aggregated-metrics-log-metrics
Later on I came across event counters:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/eventcounters
They both seem to be used to track metrics of some kind. I see the EventCounters documentation doesn't mention any (pre-)aggregation, but besides that, what is the difference between the two, and when should I use an EventCounter rather than making a call to TelemetryClient.TrackMetric()?
TelemetryClient.TrackMetric() is specific to Application Insights; EventCounter is not.
EventCounter is a mechanism in .NET for defining custom metrics inside an application or library. You have to create a listener for them to read out the values and possibly send those values somewhere: a simple console output, a logging framework, or something else like Application Insights. It decouples the generation of metrics from the consumption of those metrics.
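To make that concrete, here is a minimal sketch of an EventCounter plus a console listener; the source name and counter name are made up for illustration, and an Application Insights module could consume the same events instead of the console:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics.Tracing;

    // An event source exposing one custom counter (names are hypothetical).
    [EventSource(Name = "MyCompany-MyApp")]
    public sealed class MyAppEventSource : EventSource
    {
        public static readonly MyAppEventSource Log = new MyAppEventSource();
        private readonly EventCounter _requestDuration;

        private MyAppEventSource() =>
            _requestDuration = new EventCounter("request-duration-ms", this);

        public void ReportRequestDuration(float milliseconds) =>
            _requestDuration.WriteMetric(milliseconds);
    }

    // A listener that subscribes to the counters and writes them to the console.
    public sealed class ConsoleCounterListener : EventListener
    {
        protected override void OnEventSourceCreated(EventSource source)
        {
            if (source.Name == "MyCompany-MyApp")
                EnableEvents(source, EventLevel.LogAlways, EventKeywords.All,
                    new Dictionary<string, string> { ["EventCounterIntervalSec"] = "1" });
        }

        protected override void OnEventWritten(EventWrittenEventArgs eventData)
        {
            // Aggregated counter values arrive once per interval as a payload dictionary.
            if (eventData.EventName == "EventCounters" && eventData.Payload?.Count > 0 &&
                eventData.Payload[0] is IDictionary<string, object> counter)
                Console.WriteLine($"{counter["Name"]}: {counter["Mean"]}");
        }
    }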
If an application or library you are using already defines metrics using EventCounters, you can publish them as metrics in Application Insights. The referenced documentation tells you how to do that.
If you write your own code and want to track custom metrics in Application Insights, you can decide for yourself. Using TrackMetric is the fastest and easiest option, but you might lose some flexibility when you want to publish the metrics somewhere else.
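For comparison, a minimal sketch of the direct approach (the metric name is a placeholder, and an instrumentation key is assumed to be configured):

    using Microsoft.ApplicationInsights;

    var client = new TelemetryClient();

    // Sends a single, unaggregated value straight to Application Insights.
    client.TrackMetric("queue-length", 42);

    // GetMetric() aggregates locally before sending, which reduces telemetry volume.
    client.GetMetric("queue-length").TrackValue(42);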
I wrote a blog post about EventCounters a while ago, if you're interested in the why and the how.
I managed to connect my bot's telemetry with Azure Application Insights. I am now trying to make it so Application Insights can show certain values from the bot (for example: a user's input). I assume this is related to custom events, but after looking at the documentation, I am still really confused and do not know how to set it up to log the values.
The bot framework itself has a way to write telemetry to an Application Insights instance. I believe this is what you've configured and have working so far. For writing custom events/metrics you would want to simply utilize the AI TelemetryClient yourself like you would in any other .NET Core application.
Once registered, you would change your IBot class to take TelemetryClient as a constructor dependency, which will then be injected for you, and then you just start recording events/metrics as you normally would.
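A minimal sketch of what that could look like, assuming a standard ASP.NET Core bot host and the usual AddApplicationInsightsTelemetry() registration; the bot class and event name are hypothetical:

    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ApplicationInsights;
    using Microsoft.Bot.Builder;
    using Microsoft.Bot.Schema;

    public class MyBot : ActivityHandler
    {
        private readonly TelemetryClient _telemetryClient;

        // TelemetryClient is registered in Startup.ConfigureServices and injected here.
        public MyBot(TelemetryClient telemetryClient) =>
            _telemetryClient = telemetryClient;

        protected override Task OnMessageActivityAsync(
            ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
        {
            // Record the user's input as a property on a custom event.
            _telemetryClient.TrackEvent("UserMessageReceived",
                new Dictionary<string, string> { ["text"] = turnContext.Activity.Text });
            return Task.CompletedTask;
        }
    }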
The real question I always like to ask is: do you really want to tightly couple yourself directly to the Application Insights APIs? Do you perhaps just want a certain level of logging that you do through the logging abstraction (e.g. ILogger<T>)? Or, if you need events, perhaps you want to use an EventSource instead. Both of these abstractions can then be captured by Application Insights by configuring the appropriate telemetry modules, but they do not tie your code directly to Application Insights itself. I believe the only thing that doesn't have a good existing abstraction is gathering metrics. You could of course still build your own abstraction for that and then a custom module that funnels the details into AI.
We have a running site using NLog for logging. We are not only logging errors; we also use it to measure things related to business logic.
Now we are moving to Azure, and that's why I'm searching for a better way to log this type of info in Azure. I'm looking for something like Graylog.
Things to keep in mind:
Is what Azure provides for logging easy to read?
Can I run queries to read the data?
Is there an API for logging?
Check out the following options, which are more or less native to Azure. You could probably also use some of the third-party offerings, like New Relic.
Log Analytics
Application Insights
Operations Management Suite
Application Insights not only has out-of-the-box monitoring but also provides capabilities to create your own queries.
P.S. Just my 2 cents: I'd go for OMS. Microsoft is pushing it hard and it is evolving rapidly; even if you are missing some capabilities now, they are likely to be there soon. In the long run, Microsoft is really unlikely to drop OMS any time soon, given how hard they have been pushing it for the last year and a half.
Our team has just recently started using Application Insights to add telemetry data to our Windows desktop application. This data is sent almost exclusively in the form of events (rather than page views, etc.). Application Insights is useful only up to a point; to answer anything other than basic questions, we are exporting to Azure storage and then using Power BI.
My question is one of data structure. We are new to analytics in general and have just been reading about star/snowflake structures for data warehousing. This looks like it might help in providing the answers we need.
My question is quite simple: Is this the right approach? Have we over complicated things? My current feeling is that a better approach will be to pull the latest data and transform it into a SQL database of facts and dimensions for Power BI to query. Does this make sense? Is this what other people are doing? We have realised that this is more work than we initially thought.
Definitely pursue Michael Milirud's answer; if your source product has suitable analytics, you might not need a data warehouse.
Traditionally, a data warehouse has three advantages: it integrates information from different data sources, both internal and external; data is cleansed and standardised across sources; and the history of change over time ensures that data is available in its historic context.
What you are describing is becoming a very common case in data warehousing, where star schemas are created for access by tools like Power BI, Qlik or Tableau. In smaller scenarios the entire warehouse might be held in the Power BI data engine, but larger data might need pass-through queries.
In your scenario, you might be interested in some tools that appear to handle at least some of the migration of Application Insights data:
https://sesitai.codeplex.com/
https://github.com/Azure/azure-content/blob/master/articles/application-insights/app-insights-code-sample-export-telemetry-sql-database.md
Our product Ajilius automates the development of star schema data warehouses, speeding the development time to days or weeks. There are a number of other products doing a similar job, we maintain a complete list of industry competitors to help you choose.
I would continue with Power BI - it actually has a very sophisticated and powerful data integration and modeling engine built in. Historically I've worked with SQL Server Integration Services and Analysis Services for these tasks - Power BI Desktop is superior in many aspects. The design approaches remain consistent - star schemas etc, but you build them in-memory within PBI. It's way more flexible and agile.
Also are you aware that AI can be connected directly to PBI Web? This connects to your AI data in minutes and gives you PBI content ready to use (dashboards, reports, datasets). You can customize these and build new reports from the datasets.
https://powerbi.microsoft.com/en-us/documentation/powerbi-content-pack-application-insights/
What we ended up doing was not sending events from our WinForms app directly to AI, but to an Azure Event Hub.
We then created a job that reads from the Event Hub and sends the data to:
AI, using the SDK
Blob storage, for later processing
Azure Table storage, to create Power BI reports
You can of course add more destinations.
So basically all events are sent to one destination and from there stored in many destinations, each for its own purpose. We definitely did not want to be restricted to 7 days of raw data, and since storage is cheap, blob storage can be fed into many Azure and Microsoft analytics solutions.
The Event Hub can be linked to Stream Analytics as well.
More information about Event Hubs can be found at https://azure.microsoft.com/en-us/documentation/articles/event-hubs-csharp-ephcs-getstarted/
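For reference, sending an event from the client app could look roughly like this; a sketch assuming the current Azure.Messaging.EventHubs SDK, with the connection string, hub name, and event payload as placeholders:

    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    // Placeholders: substitute your own connection string and hub name.
    await using var producer = new EventHubProducerClient(
        "<connection-string>", "telemetry-events");

    // Batch one JSON-encoded telemetry event and send it to the hub.
    using EventDataBatch batch = await producer.CreateBatchAsync();
    batch.TryAdd(new EventData(BinaryData.FromString(
        "{\"event\":\"ButtonClicked\",\"user\":\"abc\"}")));
    await producer.SendAsync(batch);

The job reading from the hub can then fan each event out to AI, blob storage, and table storage independently.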
You can start using the recently released Application Insights Analytics feature. In Application Insights we now let you write any query you would like so that you can get more insights out of your data. Analytics runs your queries in seconds, lets you filter/join/group by any possible property, and you can also run these queries from Power BI.
More information can be found at https://azure.microsoft.com/en-us/documentation/articles/app-insights-analytics/
So I've been reading through the Application Insights information published by Microsoft, and in particular this article: https://azure.microsoft.com/en-gb/documentation/articles/app-insights-search-diagnostic-logs/
So what I want to ask is: what's the most logical methodology for logging database calls?
In my head, I want to be able to log into Application Insights, see the most common database calls being made, and see what their average call times are. That way, I can say "wow, the lookup to the membership profile table is taking a few seconds today, what's the deal?"
So I have a database name, a stored procedure name, and an execution time. What's the best way for me to take that data and store it in AI? As a metric, an event, something else?
First of all, AI has auto-collection of dependency calls. Please read this. Secondly, SDK 1.1 is planned for release next week. As part of that release you will have the DependencyTelemetry type, which is added specifically for monitoring SQL, HTTP, blob, and other external dependencies.
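In the meantime, a minimal sketch of tracking such a call manually, assuming a recent Application Insights SDK; the database and procedure names are placeholders:

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;

    var client = new TelemetryClient();
    var timer = Stopwatch.StartNew();
    bool success = true;
    try
    {
        // ... execute the stored procedure against the database ...
    }
    catch
    {
        success = false;
        throw;
    }
    finally
    {
        timer.Stop();
        // Records the call as a SQL dependency with its duration and outcome.
        client.TrackDependency("SQL", "MembershipDb.usp_GetProfile",
            "exec usp_GetProfile", DateTimeOffset.UtcNow - timer.Elapsed,
            timer.Elapsed, success);
    }

Dependencies tracked this way show up in the portal with per-call durations, so you can chart averages per stored procedure.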