What are the differences in features between AppDynamics and Zipkin, apart from pricing, since Zipkin is open source? Can either of them show the request or response in their console?
Zipkin only does tracing. APM tools like AppDynamics also do other kinds of monitoring (browser, mobile, database, server, network) and provide code-level diagnostics with automated overhead controls and limiters. Don't forget log analytics and transaction analytics; AppDynamics collects metrics as well.
There is a lot more to APM than just tracing, which is what Zipkin does. You could cover the same ground with a stack of 20 open source tools, but you'd have to deal with disjointed UIs and data models, not to mention the work of keeping them all running.
I am currently creating a faster test harness for our team and will be recording a baseline from our prod SDK run and our staging SDK run. I am running the tests via Jest and want to eventually send the parsed requests and their query params to a datastore of sorts and have a nice UI around it for tracking.
I thought that Prometheus and Grafana would be able to provide that, but after getting a little POC working yesterday it seems that this combo is geared more towards tracking application performance than towards request log handling/manipulation/tracking.
Is this the right tool to be using for what I am trying to achieve, and if so, might someone shed some light on where I could find more reading aligned with what I am trying to do?
Prometheus does only one thing, and it does it well: it collects metrics and stores them. It is used for monitoring your infrastructure or applications for performance, availability, error rates, etc. You can write rules using PromQL expressions to create alerts based on conditions and send them to Alertmanager, which can forward them to PagerDuty, Slack, email, or any ticketing system. Even though Prometheus comes with a UI for visualising the data, it's better to use Grafana, which is very good at this and makes the data easy to analyse.
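For example, a minimal alerting rule file might look something like this (the metric name and threshold are illustrative, not something specific to your setup):

    # rules.yml - referenced from rule_files: in prometheus.yml
    groups:
      - name: example-alerts
        rules:
          - alert: HighErrorRate
            # fire when more than 5% of requests over the last 5 minutes returned 5xx
            expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "More than 5% of requests are failing"

Alertmanager then decides where such alerts go (Slack, PagerDuty, email, and so on) based on its own routing configuration.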
If you are looking for distributed tracing tools, you can check out Jaeger.
I have a few IIS sites I would like to monitor using Prometheus, specifically to monitor and alert on outages. I cannot figure out how to grab a metric when a site experiences an outage. Ideally, when a site goes down, I would like to capture that information, scrape the metric into Prometheus, and then use the Prometheus Alertmanager to send it to our Slack webhook. I know there are tools built specifically for this, such as Pingdom, Uptime Robot, and StatusCake, but if I could do this using Prometheus, a tool we are already using, that would be better.
I am currently running WMI Exporter to get metrics.
I believe you're interested in blackbox-exporter (see https://github.com/prometheus/blackbox_exporter) to monitor targets via HTTP requests.
Once you've installed the exporter and configured targets, you'll be interested in alerting on the probe_success metric.
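For example, something along these lines (a sketch based on the blackbox exporter's documented http_2xx module and default port 9115; the site URLs are placeholders):

    # prometheus.yml - probe each site through the blackbox exporter
    scrape_configs:
      - job_name: 'blackbox'
        metrics_path: /probe
        params:
          module: [http_2xx]            # module defined in blackbox.yml
        static_configs:
          - targets:
              - https://site1.example.com    # your IIS site URLs
              - https://site2.example.com
        relabel_configs:
          - source_labels: [__address__]
            target_label: __param_target
          - source_labels: [__param_target]
            target_label: instance
          - target_label: __address__
            replacement: 127.0.0.1:9115      # the blackbox exporter itself

    # rules.yml - alert when a probe has been failing for 5 minutes
    groups:
      - name: uptime
        rules:
          - alert: SiteDown
            expr: probe_success == 0
            for: 5m
            annotations:
              summary: "{{ $labels.instance }} is unreachable"

Route the resulting alert to your Slack webhook via an Alertmanager receiver and you have the Pingdom-style behaviour you described.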
It can be done using windows_exporter alone: enable its service collector and add W3SVC (the IIS World Wide Web Publishing Service) as the service to monitor. Hope this helps!
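A rough sketch of what that looks like on the Prometheus side (the flag and metric names follow the windows_exporter service collector as I recall it; older WMI exporter builds expose the same data with a wmi_ prefix, so check what your exporter actually reports):

    # windows_exporter started with the service collector enabled, e.g.
    #   --collectors.enabled "cpu,logical_disk,net,os,service,system"
    #   --collector.service.services-where "Name='W3SVC'"
    #
    # rules.yml - alert if the IIS service is not running
    groups:
      - name: iis
        rules:
          - alert: IISServiceDown
            expr: windows_service_state{name="w3svc", state="running"} == 0
            for: 2m
            annotations:
              summary: "W3SVC is not running on {{ $labels.instance }}"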
I am using Node.js as my server with Express. I am logging all my requests and responses on the server. Is there any package available to read my logs and generate a graphical report, e.g. how many requests we got and how many succeeded, and what each request and response was? Is there a package which can track all these details for me?
It sounds like you're trying to get some performance metrics about your application, which is great. There are many different ways you can go with this; here are a few suggestions for you to weigh up.
Non-real-time performance metrics
If you don't care about seeing the service's metrics in real time, you might want to create something that processes the logs into a CSV and use something like Excel or Google Sheets to generate graphs from them. If you need something immediately and don't need to respond to things "in the moment" when a dip happens, then this is a good quick-and-dirty solution.
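As a quick-and-dirty illustration, assuming a hypothetical log format of one JSON object per line with method, path, status, and durationMs fields (adjust the parsing to whatever your logger actually writes):

    // summarize-logs.js - roll request logs up into a CSV for Excel/Google Sheets
    const fs = require('fs');
    const readline = require('readline');

    async function summarize(logFile) {
      const rl = readline.createInterface({ input: fs.createReadStream(logFile) });
      const buckets = new Map(); // "METHOD path" -> { total, failed, totalMs }

      for await (const line of rl) {
        let entry;
        try { entry = JSON.parse(line); } catch { continue; } // skip non-JSON lines
        const key = `${entry.method} ${entry.path}`;
        const b = buckets.get(key) || { total: 0, failed: 0, totalMs: 0 };
        b.total += 1;
        if (entry.status >= 400) b.failed += 1;
        b.totalMs += entry.durationMs || 0;
        buckets.set(key, b);
      }

      console.log('endpoint,total,failed,avg_ms');
      for (const [key, b] of buckets) {
        console.log(`${key},${b.total},${b.failed},${(b.totalMs / b.total).toFixed(1)}`);
      }
    }

    summarize(process.argv[2] || 'requests.log');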
Real-time performance metrics using SaaS software
If you want the metrics but don't want to host the systems yourself, you might want to check out services such as DataDog. They provide dashboards and graphs as a service. You can use something like statsd to get metrics into DataDog, or use their own integrations. They have a lot of integrations with cloud providers like AWS, GCP, and Azure for machine metrics (CPU, etc.). They also have packages for interacting with your application itself, such as their ExpressJS package.
Real-time performance metrics using self-hosted solutions
I've often used a self-hosted approach as I find the pricing often scales a bit better. The setup is fairly simple.
Use a statsd package for each system component (nginx, Node.js, Postgres, etc.) to publish metrics to the statsd daemon (a minimal Node.js sketch follows this list).
The statsd daemon self-hosted somewhere (maybe a proxy cluster if you're working on large applications).
Self-hosted Graphite to consume metrics from the statsd daemon. Graphite is a software package designed for aggregating metrics and has an API for producing static graph images.
Self-hosted Grafana that pulls metrics from Graphite. Grafana is real-time dashboarding software. It allows you to create multiple dashboards that hook into various data sources such as Graphite or other time series data stores.
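To give a feel for the application side, here is a minimal Express sketch using one of the Node statsd clients (hot-shots in this example; the metric names, prefix, and ports are illustrative):

    const express = require('express');
    const StatsD = require('hot-shots');

    // statsd daemon address and metric prefix are whatever you deploy with
    const statsd = new StatsD({ host: '127.0.0.1', port: 8125, prefix: 'myapp.' });
    const app = express();

    // count every request by status code and record how long it took
    app.use((req, res, next) => {
      const start = Date.now();
      res.on('finish', () => {
        statsd.increment(`requests.${res.statusCode}`);
        statsd.timing('response_time', Date.now() - start);
      });
      next();
    });

    app.get('/', (req, res) => res.send('ok'));
    app.listen(3000);

statsd forwards the aggregated counters and timers to Graphite, and your Grafana dashboards query Graphite for the graphs.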
The self-hosting route can take a day to set up, but it does mean you don't increase your costs per host. It's also easy to put behind internal networks if that's a requirement for your organisation.
Personally, I would recommend either of the real-time performance metrics approaches. If your application is small and doesn't have many hosts, then services like DataDog can be useful and cost-effective, but if you need to scale up you'll find your costs skyrocketing. At that point you might decide to move over to self-hosted infrastructure.
I have an application that is generating three kinds of log files:
Transaction log
Server log
Fatal log
and I want to analyse the performance of my server using AppDynamics. What kind of data should my logs be generating to produce analytics for server health, performance, throughput, and server utilization?
That's the beauty of APM: you don't need to deal with logging to get performance data. APM tools instrument the applications themselves regardless of what the code does (logging, generating metrics, etc.). AppDynamics can collect log data and provide capabilities similar to a log analytics tool, but that is of less value than the transaction tracing and instrumentation you get. Your application has to be built in a supported language (Java, .NET, PHP, Python, C++, Node.js), and if it's web or mobile based you can also collect client-side performance data and unify the frontend and backend views. If you have questions, just reach out and I can answer them for you. Good luck!
You basically need the AppDynamics Controller and an AppDynamics Machine Agent, which is installed on the machine you want to monitor. In the Machine Agent configuration you set the URI of the controller, and the agent starts reporting machine metrics to the controller. Then you can configure alarms, see metrics, create dashboards, etc. I can give you more information if you want, but as Jonah Kowall said, take a look at the documentation as well: AppDynamics Machine Agent Doc.
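For reference, the controller connection settings live in the machine agent's controller-info.xml; a rough sketch is below (element names as I remember them from the docs, so verify against the documentation linked above; all values are placeholders):

    <!-- <machine-agent-home>/conf/controller-info.xml -->
    <controller-info>
        <controller-host>controller.example.com</controller-host>
        <controller-port>8090</controller-port>
        <controller-ssl-enabled>false</controller-ssl-enabled>
        <account-name>customer1</account-name>
        <account-access-key>your-access-key</account-access-key>
    </controller-info>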
I'm currently using Microsoft Network Monitor to parse through debug event traces. It is not a bad tool, but not a very good one either. Do you know of any better solutions?
These are readers for exploring custom ETW traces:
SvcPerf - end-to-end ETW trace viewer for manifest-based traces
LINQPad + Tx (LINQ for Logs and Traces) driver - a simple reader that allows you to query ETW traces
PerfView - a multitool that lets you do almost everything with ETW, but not particularly user-friendly
PerfView http://www.microsoft.com/download/en/details.aspx?id=28567
If you're after graphical visualization of traces for performance analysis, you can use the following:
1. Windows Reliability and Performance Monitor, which is an MMC snap-in and is easy to use for basic analysis (locally, from the server).
2. xperf, a stand-alone tool from the Windows Performance Toolkit. xperf itself is a command-line tool for capturing and processing traces, and xperfview lets you create graphs and tables from the captured data (a short capture example follows this list). Look at this blog post for an overview.
3. The Visual Studio 2010 profiler contains a "Concurrency Visualizer", which is actually a nice tool to collect and visualize ETW traces, specifically tailored to analyzing thread contention issues (but it can also be used to analyze network traces, I think). See this blog post on using the tool; you may also use the underlying tools directly: VSPerfCmd and VSPerfReport.
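Regarding xperf (item 2, as promised above), a typical capture session looks roughly like this; DiagEasy is one of the built-in kernel provider groups:

    rem start a kernel trace with the DiagEasy provider group
    xperf -on DiagEasy

    rem ... reproduce the scenario you want to analyze ...

    rem stop the trace and merge it into trace.etl
    xperf -d trace.etl

    rem open the result in the graphical viewer
    xperfview trace.etl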
I like to use Log Parser [link] to parse through the logs for the events that I am most interested in. I love the SQL-like query structure.
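For example, something like this pulls service start/stop events out of the System event log and writes them to CSV (the query uses Log Parser 2.2's EVT input format as I remember it; swap in your own log source and fields):

    LogParser -i:EVT -o:CSV "SELECT TimeGenerated, SourceName, EventID, Message FROM System WHERE EventID = 7036"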