NestJS, gRPC server and microservices architecture - Node.js

I've started a new project with NestJS using microservices, but this is my first microservices project and I don't have enough knowledge.
While studying the documentation, I couldn't find a way to use a microservice with gRPC and HTTP at the same time.
In my architecture, I have a few microservices that have to serve a REST API for the client but also have to serve gRPC requests for "internal" purposes. Is that the right decision?

It is not quite correct to say "I can't find a way to use a microservice with gRPC and HTTP at the same time", since gRPC itself runs over HTTP (HTTP/2). gRPC is an RPC framework that serializes messages with Protocol Buffers; by exposing HTTP endpoints you can choose between different alternatives such as XML, REST/JSON, or gRPC.
Normally, following the "hexagonal architecture" philosophy (https://en.wikipedia.org/wiki/Hexagonal_architecture_(software)), you should be able to separate the logic from the adapters, so your project can implement multiple adapters for the same logic, for example one adapter for HTTP/REST and another for HTTP/gRPC. On the other hand, a way to avoid having to implement multiple ports is to always use HTTP/gRPC and put Envoy as a proxy between HTTP/REST and HTTP/gRPC (see https://grpc.io/docs/tutorials/basic/web/), but the final solution depends on many factors.
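For the NestJS part specifically, one way to serve REST and gRPC from the same service is a so-called hybrid application: the normal HTTP server stays in place and a gRPC listener is attached to it. A minimal sketch, assuming a hypothetical hero.proto, package name and ports:

```typescript
// main.ts - hybrid NestJS application: HTTP (REST) and gRPC side by side.
// The proto path, package name and ports are hypothetical placeholders.
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { join } from 'path';
import { AppModule } from './app.module';

async function bootstrap() {
  // Regular HTTP application serving the REST controllers.
  const app = await NestFactory.create(AppModule);

  // Attach a gRPC listener for the "internal" traffic.
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.GRPC,
    options: {
      package: 'hero',
      protoPath: join(__dirname, 'hero/hero.proto'),
      url: '0.0.0.0:5000',
    },
  });

  await app.startAllMicroservices();
  await app.listen(3000);
}
bootstrap();
```

With this setup, regular @Controller routes keep answering REST calls on port 3000, while @GrpcMethod handlers answer the internal gRPC calls on port 5000.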

Related

Data Aggregator/composition service in Microservices

I am developing an application where there is a dashboard for data insights.
The backend is a set of microservices written with the Node.js Express framework, backed by MySQL. The pattern used is the Database-per-Service pattern, with a message broker in between.
The problem I am facing is that this dashboard derives data from multiple backend services (different databases altogether; some are SQL, some are NoSQL and some are graph databases).
I want to avoid multiple queries between the front end and the backend for this screen. However, I want to avoid a single point of failure as well. I have come up with the following solutions.
1. Use an API gateway aggregator/composition that makes multiple calls to backend services on behalf of a single frontend request, then composes all the responses together and sends them to the client. However, scaling even one service would require scaling the gateway itself. It also makes the gateway a single point of contact.
2. Create a facade service, maybe called a dashboard service, that issues calls to multiple backend services, composes the responses together and sends a single payload back to the client. However, this creates a synchronous dependency.
I favor approach 2. However, I have a question there as well. Since the services are written in Node.js, is there a way to enforce time-bound SLAs for each service, so that if a service doesn't respond to the facade aggregator, the client is returned partial or cached data? Is there a mechanism for this?
GraphQL has been designed for this.
You start by defining a global GraphQL schema that covers all the schemas of your microservices. Then you implement the fetchers (resolvers), which "populate" the response by querying the appropriate microservices. You can start several instances so that you do not have a single point of failure. You can return partial responses if you hit a timeout (the answer will include resolver errors). GraphQL also knows how to manage caching.
Honestly, it is a bit confusing at first, but once you get it, it is really simple to extend the schema and include new microservices in it.
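A minimal sketch of that idea with Apollo Server (assuming Node 18+ for the global fetch; the internal service URLs are hypothetical): a field whose backing service misses the timeout simply resolves to null, so the client still receives a partial dashboard.

```typescript
// GraphQL gateway sketch: one schema composed over several (hypothetical) microservices.
import { ApolloServer, gql } from 'apollo-server';

const typeDefs = gql`
  type Dashboard {
    sales: String
    visits: String
  }
  type Query {
    dashboard: Dashboard
  }
`;

// Resolve to null if the backing service does not answer within `ms`.
const withTimeout = <T>(p: Promise<T>, ms: number): Promise<T | null> =>
  Promise.race([p, new Promise<null>(resolve => setTimeout(() => resolve(null), ms))]);

const resolvers = {
  Query: {
    dashboard: () => ({}), // the fields below are resolved independently
  },
  Dashboard: {
    // Hypothetical internal services; fetch is global in Node 18+.
    sales: () => withTimeout(fetch('http://sales-svc/summary').then(r => r.text()), 500),
    visits: () => withTimeout(fetch('http://analytics-svc/visits').then(r => r.text()), 500),
  },
};

new ApolloServer({ typeDefs, resolvers }).listen({ port: 4000 });
```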
I can't answer on Node's technical implementation, but indeed the second approach lets you model the calls to remote services so that the answer is expected to arrive within some time boundary.
It depends on the way you interconnect the services. The easiest approach is to spawn an HTTP request from the aggregator service to the service that actually brings the data.
This HTTP request can be set up so that it won't wait longer than X seconds for a response. So you spawn multiple HTTP requests to different services simultaneously and wait for the responses. I come from the Java world, where these settings can be configured at the level of the HTTP client making those connections; I'm sure the Node ecosystem has something similar.
If you prefer an asynchronous style of communication between the services, the situation is somewhat more complicated. In this case you can design some kind of 'transactionId' into the message protocol. The requests from the aggregator service would include such a 'transactionId' (a UUID might work) and "demand" that the answer include the same transactionId. The sender, after sending the messages, should wait for the responses for a certain amount of time and then "quit waiting" after X seconds/milliseconds. Any responses that arrive after that time are discarded, because no one is expecting to handle them on the aggregator side.
BTW, this "aggregator" approach is also good/simple from the front-end point of view, because the front end only has to deal with one request instead of many requests to the backend, as in the gateway approach. So I completely agree that the aggregator approach is better here.
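For the SLA question specifically, a rough sketch of the time-bounded fan-out in Node (assuming Node 18+ for the global fetch and AbortSignal.timeout; the service URLs and fallback shapes are made up):

```typescript
// Facade ("dashboard service") sketch: fan out to the backend services in
// parallel, bound each call by an SLA, and fall back to cached/partial data.
const SLA_MS = 800;

async function fetchOrFallback(url: string, fallback: unknown) {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(SLA_MS) });
    return await res.json();
  } catch {
    return fallback; // SLA missed or service down: return cached/partial data instead
  }
}

export async function getDashboard() {
  // Hypothetical internal services; each call is time-bounded independently.
  const [orders, users, graph] = await Promise.all([
    fetchOrFallback('http://orders-svc/summary', { cached: true, orders: [] }),
    fetchOrFallback('http://users-svc/stats', { cached: true, users: 0 }),
    fetchOrFallback('http://graph-svc/relations', { cached: true, edges: [] }),
  ]);
  return { orders, users, graph };
}
```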

How to make two NODE.js servers communicate each other over RabbitMQ?

I want to create two servers in Node.js and have full-duplex communication between them over RabbitMQ. I am new to message brokers and event-driven development. I just want one server to serve the API for the front end and the other one to be just a chat server. Is that even a good approach?
Working directly with a broker is a bad idea. Typically, a gateway is added between the clients and the broker as an abstraction layer. In this case, it will be easier for you to change the broker (for example, from RabbitMQ to Kafka, etc.), and you do not need to copy the client <-> broker logic in different languages. As an example, there is the reddwarf project, with a simple demo service and client.
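As a rough illustration of that abstraction-layer idea (not the reddwarf project itself), here is a sketch where the two servers only see a small MessageBus interface and the RabbitMQ specifics, via the amqplib package, stay behind it; the queue names and payloads are made up:

```typescript
// The application code depends on MessageBus, not on RabbitMQ directly,
// so the broker can later be swapped (e.g. for Kafka) behind this interface.
import * as amqp from 'amqplib';

interface MessageBus {
  publish(topic: string, payload: object): Promise<void>;
  subscribe(topic: string, handler: (payload: object) => void): Promise<void>;
}

class RabbitBus implements MessageBus {
  private constructor(private ch: amqp.Channel) {}

  static async connect(url: string): Promise<RabbitBus> {
    const conn = await amqp.connect(url);
    return new RabbitBus(await conn.createChannel());
  }

  async publish(topic: string, payload: object) {
    await this.ch.assertQueue(topic);
    this.ch.sendToQueue(topic, Buffer.from(JSON.stringify(payload)));
  }

  async subscribe(topic: string, handler: (payload: object) => void) {
    await this.ch.assertQueue(topic);
    await this.ch.consume(topic, msg => {
      if (msg) {
        handler(JSON.parse(msg.content.toString()));
        this.ch.ack(msg);
      }
    });
  }
}

// Hypothetical usage: the API server publishes, the chat server subscribes
// (and the reverse direction uses another queue for full-duplex traffic).
// const bus = await RabbitBus.connect('amqp://localhost');
// await bus.subscribe('chat.outgoing', m => console.log('chat says', m));
// await bus.publish('chat.incoming', { user: 'alice', text: 'hi' });
```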

Why does nestjs framework use a transport layer different than HTTP in a microservices approach?

I have been developing microservices with Spring Boot for a while, using Feign clients, RestTemplate and AMQP brokers to establish communication between the microservices.
Now I am learning NestJS and its microservice approach. I've noticed that NestJS uses TCP as the default transport layer, which is different from the way it is done with Spring Boot.
Why does NestJS prefer those transport layers (TCP, AMQP) instead of HTTP? Isn't HTTP the transport protocol for REST microservices?
From NestJs documentation:
"a microservice is fundamentally an application that uses a different transport layer than HTTP"
The main reason is that HTTP is comparatively slow. The problem with the HTTP approach is that, with HTTP, JSON adds unwanted processing time to send and translate the information.
One problem with HTTP/JSON is the serialization time of the JSON being sent. This is an expensive process; imagine the serialization cost for a large payload.
In addition to the JSON body, there are a number of HTTP headers that have to be interpreted and may simply be discarded. The only concern should be to maintain a single layer for sending and receiving messages. Therefore, using the HTTP protocol with JSON to communicate between microservices is very slow. There are some optimization techniques, but they are complex and do not add significant performance benefits.
Also, HTTP spends more time waiting than it does transferring data.
If you look at the OSI model, HTTP is part of Layer 7 (Application). TCP is Layer 4 (Transport).
When looking at Layer 4 there is no determining characteristic that makes it HTTP, AMQP, gRPC, or RTSP. Layer 4 is strictly about how data is transmitted to and received from the remote device.
Now, this is where networking and the software development worlds collide. Networking people will use "transport" meaning Layer 4, while Programming people use "transport" meaning the way a packet of data is transmitted to another component.
In this architecture, "transport" (or "transporter", as used in the docs) is used as an abstraction for how messages are shared between components.
Looking at the documentation, if you are looking for something like AMQP for your microservice you can use NATS or Redis (both transporter implementations are provided by NestJS).
https://docs.nestjs.com/microservices/basics#getting-started
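To make the "transporter" idea concrete, here is a minimal sketch of a stand-alone NestJS microservice using the default TCP transport; the port and the message pattern are arbitrary examples:

```typescript
// A NestJS microservice that listens on raw TCP instead of HTTP.
import { Controller } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { MessagePattern, MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

// Handlers respond to message patterns rather than HTTP routes.
// (MathController must be registered in AppModule's controllers array.)
@Controller()
export class MathController {
  @MessagePattern({ cmd: 'sum' })
  sum(data: number[]): number {
    return data.reduce((a, b) => a + b, 0);
  }
}

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.TCP, // could also be Transport.NATS, Transport.REDIS, ...
    options: { host: '0.0.0.0', port: 8877 },
  });
  await app.listen();
}
bootstrap();
```

Another service (or a hybrid HTTP application) then sends { cmd: 'sum' } messages to this listener via a ClientProxy instead of making an HTTP call.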

What is the best way to communicate between two servers?

I am building a web app which has two parts. One part uses a real-time connection between the server and the client, and the other part does some CPU-intensive work to provide relevant data.
I am implementing the real-time communication in Node.js and the CPU-intensive part in Python/Java. What is the best way for the Node.js server to participate in duplex communication with the other server?
For a basic solution you can use Socket.IO if you are already using it and know how it works; it will get the job done, since it allows communication between a client and a server where the client can be a different server in a different language.
If you want a more robust solution with additional options and controls or which can handle higher traffic throughput (though this shouldn't be an issue if you are ultimately just sending it through the relatively slow internet) you can look at something like ØMQ (ZeroMQ). It is a messaging queue which gives you more control and lots of different communications methods beyond just request-response.
When you set either up, I would recommend using your CPU-intensive server as the stable end (server) and your web server(s) as the client(s), assuming that you are using a single server for your CPU-intensive tasks and running several Node.js server instances to take advantage of multiple cores on your web server. This simplifies your communication, since you want a single point to connect to.
If you foresee needing multiple CPU servers, you will want to set up a routing server that can route between multiple web servers and multiple CPU servers, and in this case I would recommend the extra work of learning ØMQ.
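A minimal sketch of the basic Socket.IO option under the "CPU server is the stable end" setup above; the event names and the worker function are hypothetical, and both ends are shown in TypeScript in one snippet only for brevity (the CPU side could equally be python-socketio or a Java implementation):

```typescript
import { Server } from 'socket.io';
import { io as connectTo } from 'socket.io-client';

// --- CPU-intensive service (cpu-server.ts) ---
const io = new Server(4000);

function doExpensiveWork(payload: unknown) {
  return { ok: true, input: payload }; // placeholder for the real computation
}

io.on('connection', socket => {
  // Reply through Socket.IO acknowledgements for request/response semantics.
  socket.on('heavy-task', (payload, ack) => ack(doExpensiveWork(payload)));
});

// --- Web server (web-server.ts) ---
const cpuService = connectTo('http://localhost:4000');
cpuService.emit('heavy-task', { n: 42 }, (result: unknown) => {
  console.log('result from CPU service:', result);
});
```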
You can use Node's http.request method to make curl-like requests from within your Node code.
The http.request method is also commonly used for calling an authentication API.
You can put your callback in the success handler of the request, and when you get the response data in Node, you can send it back to the user.
Meanwhile, in the background, the Java/Python server can handle the CPU-intensive task that Node requested.
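A rough sketch of that flow, assuming the Python/Java CPU service is reachable over plain HTTP at a hypothetical host and path:

```typescript
import * as http from 'http';

// Node web server: forwards a request to the (hypothetical) Python/Java CPU
// service over HTTP and sends the result back to the original caller.
const server = http.createServer((clientReq, clientRes) => {
  const upstream = http.request(
    { hostname: 'cpu-service.internal', port: 8000, path: '/compute', method: 'GET' },
    upstreamRes => {
      let body = '';
      upstreamRes.on('data', chunk => (body += chunk));
      upstreamRes.on('end', () => {
        // Send the CPU service's answer back to the user.
        clientRes.writeHead(upstreamRes.statusCode ?? 200, { 'Content-Type': 'application/json' });
        clientRes.end(body);
      });
    },
  );

  upstream.on('error', () => {
    clientRes.writeHead(502);
    clientRes.end('CPU service unavailable');
  });
  upstream.end();
});

server.listen(3000);
```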
I maintain a node.js application that intercommunicates among 34 tasks spread across 2 servers.
In your case, for communication between the web server and the app server you might consider mqtt.
I use mqtt for this kind of communication. There are mqtt clients for most languages, including node/javascript, python and java. In my case I publish json messages using mqtt 'topics', and any task that has registered to subscribe to a 'topic' receives its data when it is published. If you google "pub sub", "mqtt" and "mosquitto" you'll find lots of references and examples. Mosquitto (now an Eclipse project) is only one of a number of mqtt brokers that are available. Another very good broker, written in Java, is called hivemq.
This is a very simple, reliable solution that scales well. In my case literally millions of messages reliably pass through mqtt every day.
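A small sketch of that pub/sub flow using the mqtt npm package against a local Mosquitto broker; the topic names and the payload are made up:

```typescript
import mqtt from 'mqtt';

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  // Web server side: subscribe to results published by the app server.
  client.subscribe('app/results');

  // Publish a JSON message on a topic; every subscriber to 'app/tasks' receives it.
  client.publish('app/tasks', JSON.stringify({ taskId: 1, action: 'crunch' }));
});

client.on('message', (topic, payload) => {
  if (topic === 'app/results') {
    console.log('result received:', JSON.parse(payload.toString()));
  }
});
```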
You must be looking for Socket.IO.
Socket.IO enables real-time bidirectional event-based communication.
It works on every platform, browser or device, focusing equally on reliability and speed.
Sockets have traditionally been the solution around which most realtime systems are architected, providing a bi-directional communication channel between a client and a server.

best way to call own api in nodejs

Is calling your own API to build the website a good practice?
Which is the best way to call your own API on the same server in a Node.js application?
- simply calling the API methods
- using socket.io with emit() and listening with .on('event', function(){})
- installing jQuery on the server and using an ajax call
- or not using the API at all and rewriting the methods
I'm just confused. I hope someone can clarify this for me.
If you need to call your own API from another process, it would be good to use a messaging protocol. ZeroMQ sounds like a perfect fit here. It allows you to create different patterns of communication between services on internal networks and to communicate in different ways. The simplest example is the Request > Response pattern, which is similar to HTTP requests, and it might be a good starting point.
Remember that if you are using the routing system within Express, the ZeroMQ solution will not utilize it; it communicates directly, not through the HTTP interface. It is also much more efficient, as HTTP has unnecessary overhead, especially for internal communication.
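A minimal sketch of that Request > Response pattern, assuming the zeromq (v6) npm package; the endpoint address and the 'getUsers' action are hypothetical:

```typescript
// ZeroMQ Request/Reply sketch; both ends shown in one file for brevity.
import { Reply, Request } from 'zeromq';

// --- API process (replier) ---
async function runServer() {
  const rep = new Reply();
  await rep.bind('tcp://127.0.0.1:5555');
  for await (const [msg] of rep) {
    const { action } = JSON.parse(msg.toString());
    // Call the same internal method your Express route would call.
    const result = action === 'getUsers' ? ['alice', 'bob'] : null;
    await rep.send(JSON.stringify(result));
  }
}

// --- Caller process (requester) ---
async function runClient() {
  const req = new Request();
  req.connect('tcp://127.0.0.1:5555');
  await req.send(JSON.stringify({ action: 'getUsers' }));
  const [reply] = await req.receive();
  console.log('users:', JSON.parse(reply.toString()));
}

runServer();
runClient();
```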
If you still want to use Express routing, then your option is to use http.request, which behaves very similarly to curl or $.ajax. This function makes HTTP requests, so you can reuse your Express routing system.
