I am facing an issue with Jaeger tracing in my Koa.js microservices. I haven't changed anything related to Jaeger at the code level; I only installed Istio on the AKS cluster, and tracing is being picked up internally from there. But the traces only show spans between two microservices. I need the full trace: if a request gets responses from four microservices, all four should appear together in the trace, but in my case only two microservices show up.
Do I need to make changes in all my repos regarding the Jaeger headers?
Currently all my microservices are written in Node.js with the Koa.js framework.
Thanks.
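For context, Istio's Envoy sidecars can only stitch spans into a single trace if each service forwards the incoming trace headers on its outbound calls; below is a minimal Koa sketch of that propagation (the downstream URL is made up, and Node 18+ is assumed for the global fetch):

```js
const Koa = require('koa');

// Headers Envoy/Istio-based tracing relies on (per the Istio docs)
const TRACE_HEADERS = [
  'x-request-id',
  'x-b3-traceid',
  'x-b3-spanid',
  'x-b3-parentspanid',
  'x-b3-sampled',
  'x-b3-flags',
  'b3',
];

const app = new Koa();

// Middleware: collect whichever trace headers arrived on the request
app.use(async (ctx, next) => {
  ctx.state.traceHeaders = {};
  for (const name of TRACE_HEADERS) {
    const value = ctx.get(name);
    if (value) ctx.state.traceHeaders[name] = value;
  }
  await next();
});

// Example route that calls the next microservice, forwarding those headers
app.use(async (ctx) => {
  const res = await fetch('http://downstream-service/api/data', {
    headers: ctx.state.traceHeaders,
  });
  ctx.body = await res.json();
});

app.listen(3000);
```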
I am preparing a short presentation about distributed tracing, and I wanted to demonstrate Zipkin and Jaeger working together, i.e. I send a request to my app, Sleuth adds the essential trace headers, timestamps, etc., and the data is sent to both the Zipkin collector and the Jaeger collector. This raises my question: is it possible to send data to both Zipkin and Jaeger?
I have looked at the Jaeger and Zipkin architectures, but I could not figure out whether my idea is possible.
Jaeger architecture:
Zipkin architecture:
I have also found info about the usage of OpenTelemetry with Zipkin/Jaeger.
OpenTelemetry Zipkin/Jaeger exporter configuration: https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#zipkin-exporter
But according to this, we have to set the "otel.traces.exporter" variable to either "jaeger" or "zipkin", which would suggest that running Zipkin and Jaeger together is not possible. However, this is only my speculation.
What do you think about it?
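As a rough illustration of the general idea (here in Node.js with the OpenTelemetry JavaScript SDK, ~1.x API, rather than Sleuth/Java), one tracer provider can register both a Zipkin and a Jaeger exporter; the endpoints below are the usual local defaults, and this is only a sketch, not a verified setup:

```js
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { ZipkinExporter } = require('@opentelemetry/exporter-zipkin');
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');

const provider = new NodeTracerProvider();

// One span processor per backend: every finished span is exported to both
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new ZipkinExporter({ url: 'http://localhost:9411/api/v2/spans' })
  )
);
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new JaegerExporter({ endpoint: 'http://localhost:14268/api/traces' })
  )
);

provider.register();
```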
I have a solution in Azure that has multiple networking components, and I am trying to trace requests from component to component. I have enabled a Log Analytics workspace (LAW) that these components output to.
Application Gateway w/ WAFv2,
API Management Instance,
Application Gateway w/o WAF,
Container Instance,
AppGW-WAF-->APIM-->AppGW-->Container
Is there some common attribute/header value/query string addition, etc. that I can use in the LAW to trace a request from point to point in the sequence above?
Any advice is appreciated!
It sounds like you want to do tracing at the application/HTTP layer to get something like this?
Then you want to look at Application Insights and Correlation, probably using distributed tracing.
This also nicely integrates out of the box with APIM.
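If the Container Instance at the end of the chain runs Node.js (an assumption), wiring it into that correlation can be as small as the sketch below using the applicationinsights package; the connection string is a placeholder:

```js
const appInsights = require('applicationinsights');

appInsights
  .setup('<your-connection-string>')   // placeholder, not a real value
  .setAutoCollectRequests(true)        // incoming HTTP requests
  .setAutoCollectDependencies(true)    // outgoing HTTP/DB calls
  .setAutoDependencyCorrelation(true)  // ties them into one operation
  .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
  .start();

// After this, requests flowing through APIM (with App Insights enabled there)
// and this app share an operation id you can query across components.
```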
I need your help with the LoopBack framework.
Specifically, I want to know how we can achieve microservices-related functionality with the LoopBack framework.
Please share any links/tutorials/knowledge you have.
I have gone through the link below:
https://strongloop.com/strongblog/creating-a-multi-tenant-connector-microservice-using-loopback/
I have downloaded the related demos from the links below, but they don't work:
https://github.com/strongloop/loopback4-example-microservices
https://github.com/strongloop/loopback-example-facade
Thanks,
Basically it depends on your budget and the size of your system. You can make some robust and complex implementations using tools like Spring Cloud or KrakenD. As a matter of fact, your question is too broad. I have some microservices architecture knowledge, and I would recommend splitting your functionality into containerized solutions, probably orchestrated by Kubernetes. That way you can expose, for example, a User microservice with LoopBack, and another Authentication microservice with LoopBack and/or any other language/framework.
You could (but shouldn't) add communication between those microservices with something like gRPC, as you should expose some REST functionality instead.
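As a rough sketch of that REST-style communication (the service names and port are invented, assuming both services run behind Kubernetes Services so the hostname resolves in-cluster, and Node 18+ for the global fetch):

```js
// Hypothetical call from the User microservice to the Authentication microservice
async function verifyToken(token) {
  const res = await fetch('http://auth-service:3000/tokens/verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ token }),
  });
  if (!res.ok) throw new Error(`auth-service responded with ${res.status}`);
  return res.json(); // e.g. { valid: true, userId: '...' }
}

verifyToken('some-jwt').then(console.log).catch(console.error);
```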
The biggest cloud providers have ready-made solutions, e.g. AWS has ECS or Fargate; on GCP you have GKE (managed Kubernetes).
We have created an open-source catalog of microservices that can be used in any microservice project built with LB4. It can also give you an idea of how to create microservices using LB4: https://github.com/sourcefuse/loopback4-microservice-catalog
Background
Our backend is currently written in Grails. We would like to change the backend to NodeJs. We would like to execute the change in small iterations. We deploy everything on AWS.
Question
How to change the technology from Grails to NodeJs iteratively?
My opinion
Although we don't use Microservice Architecture (and none of us has any experience with it) I personally would:
build a Node.js server in front of our Grails server (something like an API gateway, maybe?)
at first the Node.js server would just pass requests/responses to/from Grails (see the proxy sketch after this list)
then we would move other functionality out of Grails (request logging, validations, ...) until we have moved everything needed. (Maybe we keep something on Grails, but most of the logic should end up in Node.js.)
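A minimal sketch of that pass-through step, assuming the node-http-proxy package and a made-up internal address for the Grails server:

```js
const http = require('http');
const httpProxy = require('http-proxy');

// Hypothetical address of the existing Grails backend
const GRAILS_TARGET = 'http://grails-backend:8080';

const proxy = httpProxy.createProxyServer({});

http
  .createServer((req, res) => {
    // For now, every request is forwarded untouched to Grails.
    // Later, individual routes get handled here in Node.js instead.
    proxy.web(req, res, { target: GRAILS_TARGET });
  })
  .listen(3000);
```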
We have done a successful migration from Ruby on Rails to API Gateway & Lambda based microservices written in Node.js. You can use the same architecture if you prefer a Node.js server (without microservices) or a Docker container cluster with ECS.
Set up CloudFront as a proxy that receives all the HTTP traffic to your application domain (you can map the DNS to the CloudFront CNAME).
In CloudFront, add the current Grails application as the default origin and behavior, which makes your application work the same as it does today.
Then you can set up your microservices architecture separately, with API Gateway and Lambda, a Node.js web server, or a Docker container cluster on ECS. (Note that if you use a relational database like MySQL, you also need to place the new server code in Lambda, the web server, or the containers so that it can access the database.)
Afterwards, you can write the new feature logic and override one HTTP subpath at a time in CloudFront, pointing it to the new application.
The following diagram shows the architecture at a high level.
Note: the diagram uses DynamoDB for the new microservices; during the migration phase you can also connect to the current database with proper VPC, subnet, and server placement.
In addition, you get the benefit of the CloudFront CDN caching static assets to improve application performance, and you can terminate the SSL handshake at CloudFront with free SSL certificates issued by Amazon.
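To make the "one subpath at a time" idea concrete, here is a hedged sketch of a Node.js Lambda handler behind API Gateway (proxy integration) that could serve a hypothetical /api/orders path once that path is pointed at the new origin; the path, names, and response shape are invented:

```js
// Handler for the first migrated subpath, e.g. GET /api/orders/{id}
exports.handler = async (event) => {
  const orderId = event.pathParameters && event.pathParameters.id;

  // ... look the order up in DynamoDB, or in the existing relational
  // database if the migration phase still uses it ...

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: orderId, source: 'new-nodejs-service' }),
  };
};
```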
Your approach is definitely possible. But I'd take a different one and try using microservices. You would move parts of your code one by one into cute little microservices and eventually have a microservice architecture. I like this approach because it allows very fast switching... of everything. You can build your microservices with Java, Node, Go - whatever you want. And if you suddenly discover that Node.js is not up to your expectations (for example, if you have hardcore math modules), just throw that microservice out and quickly implement it in any other language and framework. The most important part is defining the communication architecture. REST APIs are already a thing of the past for this, and you will probably want to use a message broker like RabbitMQ.
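To make the message-broker idea concrete, here is a small sketch using the amqplib package; the queue name, payload, and local broker URL are invented for illustration:

```js
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost'); // hypothetical broker
  const ch = await conn.createChannel();
  await ch.assertQueue('user.created', { durable: true });

  // One microservice publishes an event...
  ch.sendToQueue(
    'user.created',
    Buffer.from(JSON.stringify({ id: 42, email: 'a@example.com' })),
    { persistent: true }
  );

  // ...and another (here, the same process for brevity) consumes it.
  await ch.consume('user.created', (msg) => {
    console.log('received', msg.content.toString());
    ch.ack(msg);
  });
}

main().catch(console.error);
```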
I am in the planning phase of moving a C#/.NET monolithic application to Node.js. I would like to implement an event-driven microservices architecture for this app using Seneca.js and Docker, separating each microservice into its own container hosted on AWS Elastic Beanstalk. From what I have read and from the recommendations I've received, this seems the way to go so far.
Here is where I am confused: reviewing the Seneca.js docs, I am not seeing how out-of-process communication occurs.
In particular, if I want to allow multiple clients to subscribe to the same event, should I use RabbitMQ with Seneca.js, since there are times when several microservices have to perform actions for a particular event? Going this route, how would I handle the scenario where one of the subscribers fails and needs to run again? It seems the event would need to be replayed for that microservice only and not the others.
Also, using Seneca.js, how do I expose a REST API for each microservice so that clients can access its internal database and data with this approach?
Please let me know if I am incorrect in any aspects of this.
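For reference, Seneca's out-of-process communication goes through its transport layer: one service exposes patterns with seneca.listen(...) and another routes matching patterns to it with seneca.client(...); transport plugins (for example seneca-amqp-transport) can swap the default HTTP transport for RabbitMQ. A minimal sketch with the built-in HTTP transport, with ports and pattern names made up:

```js
// service-a.js -- exposes an action over the built-in HTTP transport
const senecaA = require('seneca')();

senecaA.add({ role: 'order', cmd: 'create' }, (msg, reply) => {
  // ... persist the order, then answer the caller ...
  reply(null, { ok: true, orderId: msg.orderId });
});

senecaA.listen({ type: 'http', port: 10101 });

// service-b.js -- a separate process that sends matching messages to service A
const senecaB = require('seneca')();

senecaB.client({ type: 'http', port: 10101, pin: 'role:order' });

senecaB.act({ role: 'order', cmd: 'create', orderId: 42 }, (err, result) => {
  if (err) throw err;
  console.log(result); // { ok: true, orderId: 42 }
});
```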