Is it possible to use Zipkin and Jaeger together?

I am preparing a short presentation about distributed tracing, and I wanted to demonstrate Zipkin and Jaeger working together: I send a request to my app, Sleuth adds the essential trace headers, timestamps, etc., and the trace data is sent both to the Zipkin collector and to the Jaeger collector. And here arises my question: is it possible to send data to both Zipkin and Jaeger at the same time?
I have found the Jaeger and Zipkin architecture diagrams, but I could not figure out from them whether my idea is possible.
Jaeger architecture: (diagram omitted)
Zipkin architecture: (diagram omitted)
I have also found information about using OpenTelemetry with Zipkin/Jaeger.
OpenTelemetry Zipkin/Jaeger exporter configuration (image omitted): https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#zipkin-exporter
But according to this, we have to set the "otel.traces.exporter" property to either "jaeger" or "zipkin", which would suggest that running Zipkin and Jaeger together is not possible. However, this is only my speculation.
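If the property accepted a comma-separated list of exporters (I am not sure it does; this is an assumption on my part), I would expect a configuration along these lines, with each exporter pointed at its own collector:

```properties
# Sketch, assuming otel.traces.exporter accepts a comma-separated list.
otel.traces.exporter=jaeger,zipkin
otel.exporter.jaeger.endpoint=http://localhost:14250
otel.exporter.zipkin.endpoint=http://localhost:9411/api/v2/spans
```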
What do you think about it?

Related

Issue in Koa.js for jaeger tracing with Istio

I am facing an issue with Jaeger tracing in my Koa.js microservices. I haven't changed anything regarding Jaeger at the code level; I only installed Istio on the AKS cluster, and it picks up tracing from there. But the trace only shows spans between two microservices. I need the full trace: if a response passes through four microservices, all four should show up in the trace together, but in my case only two microservices appear.
Do I need to make changes in all my repos regarding the Jaeger headers?
Currently all my microservices are written in Node.js with the Koa.js framework.
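If header propagation is the problem, I assume each service would need middleware roughly like this sketch (the header list is the B3 set Istio documents for trace propagation; "downstream-service" is a placeholder URL):

```typescript
// Sketch: forward Istio's tracing headers from each incoming request to
// every outgoing call, so Envoy can stitch all hops into one trace.
import Koa from 'koa';

const TRACE_HEADERS = [
  'x-request-id',
  'x-b3-traceid',
  'x-b3-spanid',
  'x-b3-parentspanid',
  'x-b3-sampled',
  'x-b3-flags',
  'x-ot-span-context',
];

// Collect the tracing headers present on the incoming request.
function tracingHeaders(ctx: Koa.Context): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const name of TRACE_HEADERS) {
    const value = ctx.get(name);
    if (value) headers[name] = value;
  }
  return headers;
}

const app = new Koa();
app.use(async (ctx) => {
  // Re-attach the headers on the downstream call (Node 18+ global fetch).
  const res = await fetch('http://downstream-service/api', {
    headers: tracingHeaders(ctx),
  });
  ctx.body = await res.json();
});

app.listen(3000);
```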
Thanks.

Would Prometheus and Grafana be an incorrect tool to use for request logging, tracking and analysis?

I am currently creating a faster test harness for our team and will be recording a baseline from our prod SDK run and our staging SDK run. I am running the tests via Jest and eventually want to send the parsed requests and their query params to a datastore of sorts, with a nice UI around it for tracking.
I thought that Prometheus and Grafana would be able to provide that, but after getting a little POC working yesterday, it seems this combo is more suited to tracking application performance than to request log handling/manipulation/tracking.
Is this the right tool for what I am trying to achieve, and if so, might someone shed some light on where I can find more reading aligned with what I am trying to do?
Prometheus does only one thing, and it does it well: it collects metrics and stores them. It is used for monitoring your infrastructure or applications for performance, availability, error rates, etc. You can write rules using PromQL expressions to create alerts based on conditions and send them to Alertmanager, which can forward them to PagerDuty, Slack, email, or any ticketing system. Even though Prometheus comes with a UI for visualising the data, it's better to use Grafana, since it's very good at this and makes the data easy to analyse.
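To make the alerting part concrete, a minimal rule might look like this (the metric name and thresholds are illustrative, not from your setup):

```yaml
# Sketch of a Prometheus alerting rule; http_requests_total is a
# hypothetical counter with a "status" label.
groups:
  - name: example
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```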
If you are looking for a distributed tracing tool, you can check out Jaeger.

How to use Application Insights for capturing IoT Edge device logs?

I am trying to understand the use of Application Insights for capturing module logs and am considering it as a potential option.
I am keen to understand how Application Insights would work when there are multiple devices, each running the same modules, with the modules configured to send log data to Application Insights. The data I want to capture is container logs, which are currently sent to the stderr/stdout streams. I expect this to run on Windows devices, so the logspout project (https://github.com/veyalla/logspout-loganalytics) may not be usable here, but I want to do something similar.
I am trying to figure out a design where module logs from multiple edge devices can be captured using Application Insights. It would be immensely useful to know whether Application Insights is really suited to the problem I am trying to solve and how it can be used for multiple devices.
I'm not aware of a good/secure solution for Windows containers that does a continuous push of module logs to log analytics.
Since the built-in log pull via edgeAgent is experimental, we might change the API or make some modifications, but we're unlikely to pull the feature entirely without an equivalent alternative.

How do I write the Zipkin trace to a file in a format (JSON) that is supported by the Zipkin UI?

I want to write the Zipkin trace data from the recorder to a file using Node.js, in a format the Zipkin UI supports, so that I can import the file into the UI later and do analysis.
Zipkin currently supports four types of backend storage for spans: in-memory, MySQL, Elasticsearch, and Cassandra. For production it is recommended to use Elasticsearch or Cassandra; the other two can be used for learning and experimentation. Traces stored in memory are ephemeral and won't be available after a restart.
In the Zipkin UI there is an option to view a trace and download it as JSON, which can be used to look at it again at a later point in time. If you still have further questions, drop into the Zipkin Gitter channel.
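If you want to produce such a file directly from the recorder instead, a minimal sketch with the zipkin npm package might look like this (assuming its BatchRecorder and JSON_V2 encoder; the service name and file path are placeholders). The UI's JSON upload expects an array of spans in the v2 format, which is what JSON_V2.encode emits per span:

```typescript
// Sketch: a zipkin-js recorder whose "logger" appends finished spans to a
// file instead of posting them to a collector.
import fs from 'fs';
import { BatchRecorder, Tracer, ExplicitContext, jsonEncoder } from 'zipkin';

const { JSON_V2 } = jsonEncoder;
const encodedSpans: string[] = [];

const recorder = new BatchRecorder({
  logger: {
    logSpan(span) {
      // Each finished span is encoded as a Zipkin v2 JSON object.
      encodedSpans.push(JSON_V2.encode(span));
    },
  },
});

const tracer = new Tracer({
  ctxImpl: new ExplicitContext(),
  recorder,
  localServiceName: 'my-service', // placeholder name
});

// On shutdown, write one JSON array that the UI's upload feature accepts.
process.on('beforeExit', () => {
  fs.writeFileSync('traces.json', `[${encodedSpans.join(',')}]`);
});
```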

Sending Actor Runtime ETW logs to ElasticSearch

Currently I'm trying out Elasticsearch as a logging solution to pump ETW events to.
I've followed this tutorial (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostic-how-to-use-elasticsearch), and it is working great for my own custom ActorEventSource logs, but I haven't found a way to log the Actor Runtime events (ActorMethodStart, ActorMethodStop, etc.) using the "in-process" trace capturing.
1) Is this possible using the in-process trace capturing?
I'm also considering using the "out-of-process" trace capturing, which to me seems like the preferable way of doing things in our situation, as we already have WAD set up, which already includes all of the Actor Runtime events. Not to mention the potential performance impact and other side effects of running the ElasticSearchListener inside our Actor Services.
2) I'm not quite sure how to implement this. The https://github.com/Azure/azure-diagnostics-tools/tree/master/ES-MultiNode project doesn't seem to include Logstash, so I'm assuming I would need a template such as this one: https://github.com/Azure/azure-diagnostics-tools/tree/master/ELK-Semantic-Logging/ELK/AzureRM/elk-simple-on-ubuntu, or I would need to modify the ES-MultiNode project to install Logstash as well? Just trying to get an idea of whether I'm going down the right path with regard to this.
If there's any other suggestions in terms of logging, I'd love to hear them!
