I am trying to understand how Application Insights could be used for capturing module logs and am considering it as a potential option.
I am keen on understanding how Application Insights would work given that there would be multiple devices, each running the same modules, with the modules configured to send log data to Application Insights. The type of data I want to capture is container logs, which are currently written to the stdout/stderr streams. I expect this to work on Windows devices, so the logspout project (https://github.com/veyalla/logspout-loganalytics) may not be useful here, but I want to do something similar.
I am trying to figure out a design where module logs from multiple edge devices can be captured using Application Insights. It would be immensely useful for me to know whether Application Insights is really suited to the problem I am trying to solve and how it can be used for multiple devices.
I'm not aware of a good/secure solution for Windows containers that does a continuous push of module logs to log analytics.
Since the built-in log pull via edgeAgent is experimental, we might change the API or make some modifications but we're unlikely to pull the feature entirely without an equivalent alternative.
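If it helps, here is a rough sketch of what invoking that experimental log pull from a backend service might look like, assuming the GetModuleLogs direct method on $edgeAgent and the Microsoft.Azure.Devices service SDK (the method name and payload schema may change while the feature is experimental, so verify against the current IoT Edge docs):

```csharp
using System;
using Microsoft.Azure.Devices;

// Invoke the (experimental) GetModuleLogs direct method on the edgeAgent module
// of a specific device. Requires the Microsoft.Azure.Devices NuGet package and
// an IoT Hub service connection string. Device and module ids are illustrative.
var serviceClient = ServiceClient.CreateFromConnectionString("<iot-hub-service-connection-string>");

var method = new CloudToDeviceMethod("GetModuleLogs", TimeSpan.FromSeconds(60));
method.SetPayloadJson(@"{
  ""schemaVersion"": ""1.0"",
  ""items"": [ { ""id"": ""myModule"", ""filter"": { ""tail"": 100 } } ],
  ""encoding"": ""none"",
  ""contentType"": ""text""
}");

// The same call can be issued per device in a loop to fan out across a fleet.
var result = await serviceClient.InvokeDeviceMethodAsync("myWindowsDevice", "$edgeAgent", method);
Console.WriteLine(result.GetPayloadAsJson());
```

You could run something like this on a schedule per device and forward the returned log text to Application Insights yourself (for example via TrackTrace), which avoids needing a push agent inside the Windows containers.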
Related
I am currently creating a faster test harness for our team and will be recording a baseline from our prod SDK run and our staging SDK run. I am running the tests via jest and eventually want to fire the parsed requests and their query params into a datastore of sorts, with a nice UI around it for tracking.
I thought that Prometheus and Grafana would be able to provide that, but after getting a little POC working yesterday, it seems that this combo is more suited to tracking application performance than to request log handling/manipulation/tracking.
Is this the right tool for what I am trying to achieve, and if so, might someone shed some light on where I can find more reading aligned with what I am trying to do?
Prometheus does only one thing, and it does it well. It collects metrics and stores them. It is used to monitor your infrastructure or applications for performance, availability, error rates, etc. You can write rules using PromQL expressions to create alerts based on conditions and send them to Alertmanager, which can forward them to PagerDuty, Slack, email, or any ticketing system. Even though Prometheus comes with a UI for visualising the data, it's better to use Grafana, since it is very good at this and makes it easy to analyse the data.
If you are looking for distributed tracing tools, you can check out Jaeger.
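To make that concrete, here is a minimal sketch of exposing a custom metric for Prometheus to scrape, using the prometheus-net client library (the metric name, labels, and port are purely illustrative, and official client libraries exist for other languages too):

```csharp
using System.Threading;
using Prometheus;

// Expose an HTTP /metrics endpoint for Prometheus to scrape.
var metricServer = new MetricServer(port: 9184);
metricServer.Start();

// Count replayed requests, labelled by endpoint and response status.
var requestCounter = Metrics.CreateCounter(
    "test_harness_requests_total",
    "Requests replayed by the test harness.",
    new CounterConfiguration { LabelNames = new[] { "endpoint", "status" } });

// Wherever the harness fires a request, record the outcome:
requestCounter.WithLabels("/api/orders", "200").Inc();

Thread.Sleep(Timeout.Infinite); // keep the process alive so Prometheus can scrape it
```

Note that this gives you aggregated counts and rates to graph in Grafana, not the individual request/response payloads; if you want to store and browse the requests themselves, a datastore plus UI is still the better fit.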
I managed to connect my bot's telemetry with Azure Application Insights. I am now trying to make it so that Application Insights can show certain values from the bot (for example, a user's input). I assume this is related to custom events, but after looking at the documentation, I am still really confused and do not know how to set it up to log these values.
The Bot Framework itself has a way to write telemetry to an Application Insights instance; I believe this is what you've configured and have working so far. For writing custom events/metrics, you would simply use the Application Insights TelemetryClient yourself, as you would in any other .NET Core application.
Once registered, you would change your IBot class to take TelemetryClient as a constructor dependency, which will then be injected for you, and then you just start recording events/metrics as you normally would.
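A minimal sketch of what that could look like, assuming Bot Framework v4's ActivityHandler and that TelemetryClient is registered in DI (the bot class, event name, and property name are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class MyBot : ActivityHandler
{
    private readonly TelemetryClient _telemetryClient;

    public MyBot(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    protected override Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Record a custom event carrying the user's input as a property.
        _telemetryClient.TrackEvent("UserMessageReceived",
            new Dictionary<string, string> { ["text"] = turnContext.Activity.Text });

        return Task.CompletedTask;
    }
}
```

The event then shows up under customEvents in Application Insights, with the user's input available as a custom property you can query on.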
The real question I always like to ask is: do you really want to tightly couple yourself directly to the Application Insights APIs? Do you perhaps just want a certain level of logging that you do through the logging abstraction (e.g. ILogger<T>)? Or, if you need events, perhaps you want to use an EventSource instead. Both of these abstractions can then be captured by Application Insights by configuring the appropriate telemetry modules, but they do not tie your code directly to Application Insights itself. I believe the only thing that doesn't have a good existing abstraction is gathering metrics. You could of course still build your own abstraction for that and then a custom module that funnels the details into AI.
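For comparison, a decoupled version of the same bot might look like this, a sketch assuming the standard ILogger<T> abstraction with the Application Insights logger provider configured on the host (class name and message are illustrative):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using Microsoft.Extensions.Logging;

public class MyLoggingBot : ActivityHandler
{
    private readonly ILogger<MyLoggingBot> _logger;

    public MyLoggingBot(ILogger<MyLoggingBot> logger) => _logger = logger;

    protected override Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // No Application Insights types here; whichever logging providers are
        // configured on the host (e.g. the Application Insights logger provider)
        // decide where this ends up.
        _logger.LogInformation("User said: {Text}", turnContext.Activity.Text);
        return Task.CompletedTask;
    }
}
```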
Currently I'm trying out ElasticSearch as a logging solution to pump ETW events to.
I've followed this tutorial (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostic-how-to-use-elasticsearch), and this is working great for my own custom ActorEventSource logs, but I haven't found a way to log the Actor Runtime events (ActorMethodStart, ActorMethodStop... etc) using the "in-process" trace capturing.
1) Is this possible using the in-process trace capturing? (See the sketch at the end of this post for the kind of thing I have in mind.)
I'm also considering using the "out-of-process" trace capturing, which to me seems like the preferable way of doing things in our situation, as we already have WAD setup which includes all of the Actor Runtime events already. Not to mention the potential performance impact / other side-effects of running the ElasticSearchListener inside of our Actor Services.
2) I'm not quite sure how to implement this. The https://github.com/Azure/azure-diagnostics-tools/tree/master/ES-MultiNode project doesn't seem to include Logstash, so I'm assuming I would need a template such as this one: https://github.com/Azure/azure-diagnostics-tools/tree/master/ELK-Semantic-Logging/ELK/AzureRM/elk-simple-on-ubuntu, or otherwise I would need to modify the ES-MultiNode project to install Logstash as well? Just trying to get an idea of whether I'm going down the right path with regards to this.
If there's any other suggestions in terms of logging, I'd love to hear them!
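For question 1, I'm wondering whether something like the following plain EventListener would work: it enables the Actor framework's EventSource by name and forwards the payloads into the same pipeline as my custom logs. I'm assuming the provider name is "Microsoft-ServiceFabric-Actors", as used in the WAD ETW configuration, so that would need to be verified:

```csharp
using System.Diagnostics.Tracing;

// In-process capture of the Actor Runtime events (ActorMethodStart/Stop, etc.).
// The provider name is an assumption taken from the WAD ETW configuration.
internal sealed class ActorRuntimeEventForwarder : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        if (eventSource.Name == "Microsoft-ServiceFabric-Actors")
        {
            EnableEvents(eventSource, EventLevel.Informational);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        // Hand the event off to whatever sink handles the custom ActorEventSource
        // logs (e.g. the ElasticSearchListener pipeline). Shown here as a simple
        // formatted trace line for illustration.
        var payload = eventData.Payload == null ? "" : string.Join(", ", eventData.Payload);
        System.Diagnostics.Trace.TraceInformation($"{eventData.EventName}: {payload}");
    }
}
```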
We have a running site using NLog for logging. We are not only logging errors; we use it to measure things related to business logic.
Now we are moving to Azure, and that's why I'm searching for a better way to log this type of info in Azure. I'm looking for something like Graylog.
Things to have in mind:
Is the logging info that Azure provides easy to read?
Can I run queries against the data?
Is there an API for logging?
Check out the following options, which are more or less native to Azure. You could probably also use some third parties, like New Relic.
Log Analytics
Application Insights
Operations Management Suite
Application Insights not only has out-of-the-box monitoring but also lets you write your own queries over the collected data.
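Since you are already on NLog, one possible bridge is the Application Insights NLog target, so your existing business-logic logging keeps working unchanged. A rough sketch, assuming the Microsoft.ApplicationInsights.NLogTarget package (the instrumentation key placeholder and logging rule are illustrative):

```csharp
using NLog;
using NLog.Config;
using Microsoft.ApplicationInsights.NLogTarget;

// Route existing NLog loggers to Application Insights.
// Requires the Microsoft.ApplicationInsights.NLogTarget NuGet package.
var config = new LoggingConfiguration();

var aiTarget = new ApplicationInsightsTarget
{
    InstrumentationKey = "<your-instrumentation-key>"
};

config.AddTarget("appInsights", aiTarget);
config.AddRule(LogLevel.Info, LogLevel.Fatal, aiTarget);
LogManager.Configuration = config;

// Business-logic measurements keep going through the same NLog API as before.
LogManager.GetCurrentClassLogger().Info("Order processed in {0} ms", 125);
```

The entries then land as traces in Application Insights, where you can query them and pin charts to dashboards.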
P.S. Just my 2 cents: I'd go for OMS. Microsoft is pushing it very hard and it is evolving rapidly; even if some capabilities are missing today, they are likely to arrive soon, and in the long run Microsoft is really unlikely to drop OMS anytime soon, since they only started pushing it about 1.5 years ago.
Context
I want to develop an automated script for broker (IIB 9/10) resource monitoring, capturing information about broker running status, message flows deployed, JVM usage, number of running threads, etc.
The initial thought is to have a report generated using scripts and then displayed over a browser.
Question
Can this be done entirely using only Ant scripts (I am not sure, as I have not explored iterative processing in Ant in detail), or is a combination of Ant and batch/shell scripts the better bet?
I know the web user interface in IIB 10 does most of this, but I want to add some features.
I suggest you take a look at message flow statistics and accounting:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac19100_.htm?lang=en
This is a feature of IIB by which it can emit resource statistics. The statistics are published to a topic in a well-defined XML format. I would try solving your requirement by writing an application that reads these messages and uses the data in them to generate your graphs or other reports.
There is a SupportPac, IS03, which can give you an idea of such an application.
This will not cover everything you mentioned (for example, monitoring which flows are deployed cannot be achieved this way), but it gives a comprehensive view of the load and performance of your applications:
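As a first step for the "read these messages" part, assuming you have already subscribed to the statistics topic and have the XML publication in hand as a string, a quick dump like the following lets you see which fields are worth charting (any language with an XML parser will do; a C# sketch is shown here):

```csharp
using System;
using System.Xml.Linq;

// Explore a statistics publication pulled off the topic: print every element
// and its attributes so you can decide which values to extract and report on.
static void DumpStatistics(string statisticsXml)
{
    var doc = XDocument.Parse(statisticsXml);
    foreach (var element in doc.Descendants())
    {
        foreach (var attribute in element.Attributes())
        {
            Console.WriteLine($"{element.Name.LocalName}.{attribute.Name.LocalName} = {attribute.Value}");
        }
    }
}
```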
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj10440_.htm?lang=en
And there is a resource statistics feature as well for monitoring resources used by your applications:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bj43310_.htm?lang=en
To get everything, you will need a variety of tools, I think. You can use resource statistics and accounting/statistics, as suggested by Attila, to get JVM and thread usage. The broker publishes updates to a topic, so you can create a simple subscriber to grab that info.
For deploy-related info, stop/start state and so forth, I would look at building simple Integration API or REST API applications to call from Ant.
You can find documentation for these APIs here:
http://www-01.ibm.com/support/knowledgecenter/SSMKHH_10.0.0/com.ibm.etools.mft.doc/be43410_.htm?lang=en
and here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/SSMKHH_10.0.0/com.ibm.etools.mft.restapi.doc/index.html
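To give a flavour of the REST route, which is easy to call from Ant via exec or a thin wrapper, here is a rough sketch; the port (4414 is the default web admin port) and the resource path are assumptions you should verify against the REST API documentation linked above:

```csharp
using System;
using System.Net.Http;

// Query the IIB 10 administration REST API for deployment information.
// The host, port, and resource path below are illustrative; check the REST API
// documentation for the exact URIs available on your version.
using var client = new HttpClient { BaseAddress = new Uri("http://iib-host:4414/") };

var response = await client.GetAsync("apiv1/executiongroups");
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());
```

The JSON (or XML) returned can then be parsed into whatever report format you want to render in the browser.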