How to probe NestJS microservice health in TCP transport?

Using the NestJS Terminus Microservice template: https://github.com/nestjs/terminus/tree/master/sample/002-microservice-app
The microservice starts successfully.
How do I call the health check controller method from outside?
I tried echo "20#{\"pattern\":\"health\"}" | nc localhost 8889 (like in https://stackoverflow.com/a/55630329/6517320) but Nest writes:
ERROR [Server] There is no matching event handler defined in the remote service. Event pattern: "health".

In case you want to do the health check with an HTTP call, you'd need a hybrid app that also listens for HTTP requests.
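For context, the error appears because Nest's TCP transport dispatches a payload without an id field as an event. Even the request/response form, e.g.

echo '39#{"pattern":"health","data":"","id":"1"}' | nc localhost 8889

(39 being the byte length of the JSON that follows) only works if the service registers a matching @MessagePattern('health') handler, which this setup apparently does not, hence the error. That is why the hybrid app is the practical probe. A minimal bootstrap sketch, assuming a module (here AppModule) that contains the Terminus health controller; ports are placeholders, and in older Nest versions startAllMicroservices() was startAllMicroservicesAsync():

const { NestFactory } = require('@nestjs/core');
const { Transport } = require('@nestjs/microservices');
const { AppModule } = require('./app.module');

async function bootstrap() {
  // Create a normal HTTP application...
  const app = await NestFactory.create(AppModule);
  // ...and attach the TCP microservice to the same Nest instance.
  app.connectMicroservice({
    transport: Transport.TCP,
    options: { port: 8889 },
  });
  await app.startAllMicroservices();
  // The health controller is now reachable over plain HTTP.
  await app.listen(3000);
}
bootstrap();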

Related

How to run multiple Telegram bots on the same port?

I have 6 bots and I want to run them on the same server, but when I run the second bot, it throws this error:
address already in use :::8443
I know I can only use 4 ports (80, 88, 443, 8443) for webhooks, but I have 6 bots.
Actually, I'm trying to run all the bots on the same port.
Is there any way to do it?
I'm using the Telegraf framework. I made a directory for every bot because I think that's the way to do it; maybe I'm wrong.
This is the code of the bots in every directory:
const Telegraf = require('telegraf');
const fs = require('fs');

const token = 'botID:botToken';
const bot = new Telegraf(token);

const tlsOptions = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: [],
};

bot.on('message', (ctx) => ctx.reply('Hey'));

bot.telegram.setWebhook('https://myAddress.com:8443/test/botID');
bot.startWebhook('/test/botID', tlsOptions, 8443);
I must admit I have not written a Telegram bot, so some of what I say next may not apply. Edit: I did some reading and have updated my answer accordingly.
Short Answer
The Telegram Bot service provides webhook access, so you can simply register a different URL per bot and then use routing to have the appropriate bot's code answer each request. You can do this all in a mono-repository or set it up as microservices. Up to you.
Detailed Answer
Option 1 - mono-repo
Basically, you would have a single repository that exposes a web service on one of the ports accepted by the Telegram Bot service for webhooks (80, 88, 443, 8443). Within this web service, each request arrives on a particular URL and is routed to the appropriate handler, which in this case is your bot code.
E.g. web service exposed on port 8443
Bot 1 URL: https://:8443/<bot 1 token>/
Bot 2 URL: https://:8443/<bot 2 token>/
and so on...
Express, Koa, etc. can all provide this type of routing, as sketched below. (Telegraf even provides an example of Koa integration in its documentation.)
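A minimal sketch of the mono-repo routing, reusing the question's Telegraf setup (tokens, paths, and certificate files are placeholders; webhookCallback() is Telegraf's Express-compatible middleware):

const fs = require('fs');
const https = require('https');
const express = require('express');
const Telegraf = require('telegraf'); // v3-style import, matching the question

const app = express();

// One Telegraf instance per bot; tokens are placeholders.
const bot1 = new Telegraf('bot1ID:bot1Token');
const bot2 = new Telegraf('bot2ID:bot2Token');

bot1.on('message', (ctx) => ctx.reply('Hey from bot 1'));
bot2.on('message', (ctx) => ctx.reply('Hey from bot 2'));

// Route each webhook path to its bot.
app.use(bot1.webhookCallback('/test/bot1ID'));
app.use(bot2.webhookCallback('/test/bot2ID'));

// Tell Telegram where each bot lives.
bot1.telegram.setWebhook('https://myAddress.com:8443/test/bot1ID');
bot2.telegram.setWebhook('https://myAddress.com:8443/test/bot2ID');

// One TLS server, one port, all bots.
const tlsOptions = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
};
https.createServer(tlsOptions, app).listen(8443);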
Option 2 - microservice
This option, believe it or not, may actually require less coding on your part BUT will require more configuration and orchestration. In this configuration you simply need to set up a reverse proxy (nginx works great) that receives all of your inbound Telegram bot requests and forwards them to your local Telegraf-based bots.
So you would build and run your bots on separate ports (e.g. 3000, 3001, 3002, etc.), then configure the reverse proxy to route inbound requests to the appropriate bot handler.
E.g. reverse proxy listening on port 8443
Bot 1 URL: https://:8443/<bot 1 token> --> nginx proxies to --> http://:3000
Bot 2 URL: https://:8443/<bot 2 token> --> nginx proxies to --> http://:3001
and so on...
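A hedged nginx sketch of that forwarding (certificate paths, webhook paths, and upstream ports are placeholders):

# nginx terminates TLS on 8443 and forwards each bot's webhook
# path to its own local process.
server {
    listen 8443 ssl;
    server_name myAddress.com;

    ssl_certificate     /etc/nginx/certs/server-cert.pem;
    ssl_certificate_key /etc/nginx/certs/server-key.pem;

    location /test/bot1ID { proxy_pass http://127.0.0.1:3000; }
    location /test/bot2ID { proxy_pass http://127.0.0.1:3001; }
}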
For the reverse proxy I mentioned nginx; you can also use services like the AWS or Azure API gateways. You could also run the Telegraf bots serverless on AWS (through Lambda) or Azure (through Functions).
Either way will work. Personally I would deploy it using the microservice method because then you can scale each bot independently as needed.
Original Answer - before edit above
A port is simply a number in a TCP or UDP header that tells the IP stack which application should receive the packet. There are 65536 ports available per IP address, per protocol (TCP or UDP; HTTPS uses TCP). So, technically, there is no limitation on which ports your application can receive its packets on. Therefore, excluding any other limitations (inbound firewalls, framework limitations, etc.), you could simply start your 6 bots on ports 8443, 8444, 8445, and so on.
To answer your question about running six bots on a single port, again I can address this with generic server techniques. Backing up a little: when packets arrive, they find the right computer first by IP address (then locally by MAC address), and finally the port gets them to the right application. If you need multiple applications to receive data on the same port, you have to perform that addressing yourself in the application protocol. So if the Telegram bot application protocol carries an indication of which bot should receive the data, you can write routing code that directs incoming data to the proper bot. In this case you would NOT have each bot start by listening on a port (in your case, the same port), because that will, as you have experienced, generate the error that the port is already in use. Instead, your routing code listens on the port you want, and routes the incoming data to the proper bot code.
If there are limitations on which ports you can receive information on, and there is no way within the Telegram bot protocol to determine which bot should receive it, then your only way to run six bots is to have more than one IP address: spread the bots across the available ports, then across additional IP addresses when you run out of ports.
E.g. ports available 80, 88, 443, 8443
Need to run 6 bots.
IP address #1: Bot 1 - 80, Bot 2 - 88, Bot 3 - 443, Bot 4 - 8443
IP address #2: Bot 5 - 80, Bot 6 - 88

Identify Internal Requests: Nginx + NodeJS Express

I have a Node.js API using the Express framework.
I use Nginx for load balancing between my Node.js instances, and PM2 to spawn the instances.
I noticed in the logs that Nginx makes some "dummy/internal" requests, probably to check whether an instance is up ("heartbeat requests" might be the right name for these).
My question is: what is the right way to identify these "dummy/internal" requests in my API?
I'm fairly certain that nginx only uses passive health checks for upstream servers. In other words, because every HTTP request is expected to produce a response, nginx in effect says: "If I send this server a bunch of requests and don't get responses for them, I'll consider the server to be unhealthy."
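For reference, those passive checks are tuned per upstream server with max_fails and fail_timeout; a hedged sketch (ports and thresholds are illustrative):

upstream node_api {
    # After 3 failed or timed-out requests, skip this server for 30s;
    # nginx sends no probe traffic of its own here.
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
}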
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.

http server on top of net server

I implemented two web servers with Express. One is the main server, the other a microservice.
They communicate through an HTTP REST API, and historically we had a socket.io server on the microservice so the main server could watch its up/down status.
              <-----HTTP----->
[main server]                 [microservice]
              <---socket.io-->
I then realized that socket.io is not the right tool for that. So I decided to trade socket.io for a raw TCP socket.
So the question is: is it possible to start the HTTP server "on top" of a raw TCP server (on the same port), allowing clients to connect via a raw TCP client AND to send HTTP requests?
I have this so far :
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
// const io = sio(server);
server.listen(config.port, config.ip, callback);
and I'm trying to integrate with this
What I'm trying to achieve, and achieved successfully with socket.io, is to start a socket server on the microservice, connect to it from the main server, keep the connection alive, and watch for events to keep a global boolean "connected" in sync with it. I use this variable to inform my frontend of the microservice's state, to pre-check whether I should even try to call the microservice when requested, and for logging purposes. I'd like to avoid manual polling, firstly for maintainability and also for real-time behavior.
Is that possible to start the http server "ON TOP" of a raw TCP server (on the same port) ?
Sort of, but not really. HTTP runs on top of TCP. So you could technically open a raw TCP server and then write your own code to parse incoming HTTP requests and send out legal HTTP responses. But now you've just written your own HTTP server, so it's no longer raw TCP.
The challenge with trying to have a single server that accepts both HTTP and some other protocol is that your server has to figure out, for any given incoming packet, what it is supposed to do with it. Is it an HTTP request, or is it your other type of custom request? It would be technically feasible to write such a thing.
Or, you could use the webSocket technique that starts out as an HTTP request, but requests an upgrade to some other protocol using the upgrade header. It is fully defined in the http spec how to do this.
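A minimal sketch of that upgrade technique with Node's built-in http server ('my-custom-protocol' is a placeholder name):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('normal HTTP response');
});

// A client that sends a "Connection: Upgrade" request lands here instead;
// after the 101 response, the socket is yours to use as raw TCP.
server.on('upgrade', (req, socket) => {
  socket.write(
    'HTTP/1.1 101 Switching Protocols\r\n' +
    'Upgrade: my-custom-protocol\r\n' +
    'Connection: Upgrade\r\n\r\n'
  );
  socket.on('data', (chunk) => socket.write(chunk)); // trivial echo from here on
});

server.listen(3000);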
But, unless you have some network restriction that you can only have one server or one open port, I'd ask why? It's a complicated way to do things. It doesn't really cost anything to just use a different port and a different listening server for the different type of communication. And, when each server is listening only for one type of traffic, things are a heck of a lot simpler. You can use a standard HTTP server for your HTTP requests and you can use your own custom TCP server for your custom TCP requests.
I can't really tell from your question what the real problem is here that you're trying to solve. If you just want to test if your HTTP server is up/down, then use some external process that just queries one of your HTTP REST API calls every once in a while and then discerns whether the server is responding as expected. There are many existing bodies of code that can be configured to do this too (it's a common task to check on the well being of a web server).
The code you link to shows a sample server that just sends back any message that it receives (called an echo server). This is just a classic test server for a client to connect to as a test. The second code block is a sample piece of client code to connect to a server, send a short message and then disconnect.
From your comments:
The underlying TCP server wouldn't even be used for messaging, it just would be used to watch connect/disconnect events
The http server already inherits from a TCP server so it has all the same events for the server itself. You can see all those events in the http server doc. I don't know exactly what you want, but there are server lifetime events such as:
listening (server now listening)
close (server now closed)
And, there are server activity events such as:
connection (new client connection established)
request (new client issues a request)
And, from the request event, you can get both the incoming request and server response objects, which allow you to monitor the lifetime of an individual connection, including even getting the actual socket object of the incoming connection.
Here's a code example for the connect event right in the http server doc.
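A short sketch of those events on a plain Node http server; note that 'connection' fires for every new TCP stream, whereas 'connect' is specifically for HTTP CONNECT requests. (The single connected flag here is simplistic; a real version would count open sockets.)

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// Server lifetime events.
server.on('listening', () => console.log('server listening'));
server.on('close', () => console.log('server closed'));

// Server activity events, driving a "connected" flag like the one described.
let connected = false;
server.on('connection', (socket) => {
  connected = true;
  socket.on('close', () => { connected = false; });
});

server.listen(3000);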

Node express from webserver

I have been developing a standalone app for an event exhibition. It comprises a Backbone.js frontend with a Node Express server backend for saving data. At the event this will run over localhost just fine, but how can I make the Express server accessible via normal HTTP, i.e. so the backend responds when the app is added to my web server for client review?
Any ideas?
G
Servers available at localhost should respond to HTTP requests if your firewall is open. If you are behind a router, you need to configure it to forward requests to your machine. NAT port forwarding is the simplest (and least safe) way.
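One more thing worth checking besides the firewall: make sure the server is not bound to the loopback interface only. A minimal sketch (port is a placeholder):

const express = require('express');
const app = express();

// Binding to 0.0.0.0 (all interfaces) rather than 127.0.0.1 lets other
// machines on the network reach the server, firewall permitting.
app.listen(3000, '0.0.0.0', () => console.log('listening on 0.0.0.0:3000'));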

HAProxy health check

My current setup has 2 HAProxies configured with keepalived for high availability; the 2 proxies serve as a reverse proxy and load balancer for virtual web services. I know that HAProxy can check the health of its backends (I've already configured this), but my question is something else.
At my company there's an F5 Big-IP load balancer which serves as the first line of defense; it redirects requests to my HAProxies when needed.
I need to know if there is a way to let the F5 Big-IP check the health of the HAProxy frontends, so that no requests are lost while the proxies are booting.
Thanks
There used to be a mode health option, but in recent versions the easiest way is to use monitor-uri on a dedicated port:

listen health_check_http_url
    bind :8888
    mode http
    monitor-uri /healthz
    option dontlognull
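The F5 can then probe that port directly. For a quick manual test (hostname is a placeholder):

curl -s -o /dev/null -w '%{http_code}\n' http://haproxy-host:8888/healthz

This prints 200 while HAProxy is up (or 503 if you add monitor fail conditions that trip).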
You can also use monitor-uri in an existing frontend and select it with an ACL, but the dedicated-port version is much clearer and more straightforward.
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-mode
https://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2-monitor-uri
From the HAProxy Reference Manual:
Health-checking mode
--------------------
This mode provides a way for external components to check the proxy's health.
It is meant to be used with intelligent load-balancers which can use send/expect
scripts to check for all of their servers' availability. This one simply accepts
the connection, returns the word 'OK' and closes it. If the 'option httpchk' is
set, then the reply will be 'HTTP/1.0 200 OK' with no data, so that it can be
tested from a tool which supports HTTP health-checks. To enable it, simply
specify 'health' as the working mode :
Example :
---------
# simple response : 'OK'
listen health_check 0.0.0.0:60000
    mode health

# HTTP response : 'HTTP/1.0 200 OK'
listen http_health_check 0.0.0.0:60001
    mode health
    option httpchk
From the HAProxy Docs
Example:
frontend www
    mode http
    acl site_dead nbsrv(dynamic) lt 2
    acl site_dead nbsrv(static) lt 2
    monitor-uri /site_alive
    monitor fail if site_dead
Check out the reference documentation.
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-monitor-uri
<uri> is the exact URI which we want to intercept to return HAProxy's
health status instead of forwarding the request.
When an HTTP request referencing <uri> will be received on a frontend,
HAProxy will not forward it nor log it, but instead will return either
"HTTP/1.0 200 OK" or "HTTP/1.0 503 Service unavailable", depending on failure
conditions defined with "monitor fail". This is normally enough for any
front-end HTTP probe to detect that the service is UP and running without
forwarding the request to a backend server. Note that the HTTP method, the
version and all headers are ignored, but the request must at least be valid
at the HTTP level. This keyword may only be used with an HTTP-mode frontend.
Monitor requests are processed very early. It is not possible to block nor
divert them using ACLs. They cannot be logged either, and it is the intended
purpose. They are only used to report HAProxy's health to an upper component,
nothing more. However, it is possible to add any number of conditions using
"monitor fail" and ACLs so that the result can be adjusted to whatever check
can be imagined (most often the number of available servers in a backend).
