How to run multiple Telegram bots on the same port? - node.js

I have 6 bots and I want to run them on the same server, but when I run the second bot, it throws this error:
address already in use :::8443
I know I can only use 4 ports (80, 88, 443, 8443) for webhooks, but I have 6 bots.
Actually, I'm trying to run all the bots on the same port.
Is there any way to do it?
I'm using the Telegraf framework. I made a directory for every bot because I think that is the way to do it; maybe I'm wrong.
This is the code of the bot in every directory:
const Telegraf = require('telegraf');
const fs = require('fs');

let token = 'botID:botToken';
const bot = new Telegraf(token);

// Self-signed certificate for the webhook's HTTPS server
const tlsOptions = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: []
};

bot.on('message', (ctx) => ctx.reply('Hey'));

// Each bot registers its webhook and starts its own HTTPS server on port 8443
bot.telegram.setWebhook('https://myAddress.com:8443/test/botID');
bot.startWebhook('/test/botID', tlsOptions, 8443);

I must admit I have not written a Telegram bot, so some of what I say next may not apply. Edit: I did some reading and have updated my answer accordingly.
Short Answer
The Telegram Bot service provides webhook access, so you can simply register a different webhook URL per bot and then use routing to have the appropriate bot code answer each request. You can do this all in a mono-repository or set it up as microservices. Up to you.
Detailed Answer
Option 1 - mono-repo
Basically, you would have a single repository that exposes one web service on one of the ports accepted by the Telegram Bot service for webhooks (80, 88, 443, 8443). Within this web service, each request arrives on a particular URL, and you route it to the appropriate handler, which in this case is your bot code.
E.g. web service exposed on port 8443
Bot 1 URL: https://<your server>:8443/<bot 1 token>/
Bot 2 URL: https://<your server>:8443/<bot 2 token>/
and so on...
Express, Koa, etc. can all provide this type of routing. (Telegraf even provides an example of Koa integration in its documentation.)
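A rough sketch of this approach, using Express together with the same Telegraf 3-style API as in the question (the domain, tokens, and certificate files are placeholders, not a definitive implementation):

const express = require('express');
const Telegraf = require('telegraf');
const https = require('https');
const fs = require('fs');

const app = express();

// One Telegraf instance per bot, each mounted on its own URL path
const tokens = ['<bot 1 token>', '<bot 2 token>' /* ... */];
tokens.forEach((token) => {
  const bot = new Telegraf(token);
  bot.on('message', (ctx) => ctx.reply('Hey'));

  // webhookCallback() returns Express-compatible middleware for this bot
  app.use(bot.webhookCallback('/' + token));
  bot.telegram.setWebhook('https://myAddress.com:8443/' + token);
});

// A single HTTPS server on one Telegram-approved port serves all the bots
https.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem')
}, app).listen(8443);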
Option 2 - microservice
This option, believe it or not, may actually require less coding on your part, but it will require more configuration and orchestration. In this setup you simply need to set up a reverse proxy (nginx works great) that receives all of your inbound Telegram bot requests and then forwards them to your local Telegraf-based bots.
So you would build and run your bots, each on its own local port (e.g. 3000, 3001, 3002, etc.), and then configure the reverse proxy to route each inbound request to the appropriate bot.
E.g. reverse proxy listening on port 8443
Bot 1 URL: https://<your server>:8443/<bot 1 token> --> nginx forwards to --> http://127.0.0.1:3000
Bot 2 URL: https://<your server>:8443/<bot 2 token> --> nginx forwards to --> http://127.0.0.1:3001
and so on...
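On the bot side, a minimal sketch of one such service might look like this, assuming the proxy terminates TLS on 8443 and forwards /<bot 1 token> to local port 3000 (the token, domain, and ports are placeholders):

const Telegraf = require('telegraf');

const bot = new Telegraf('<bot 1 token>');
bot.on('message', (ctx) => ctx.reply('Hey'));

// Tell Telegram to call the public HTTPS address handled by the reverse proxy
bot.telegram.setWebhook('https://myAddress.com:8443/<bot 1 token>');

// Listen locally over plain HTTP; TLS is terminated at the proxy
bot.startWebhook('/<bot 1 token>', null, 3000);

On the nginx side, each bot then just needs a matching location block, e.g. location /<bot 1 token> { proxy_pass http://127.0.0.1:3000; }.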
For the reverse proxy I mentioned nginx; you can also use services like AWS or Azure API gateways. You could also run the Telegraf bots serverless on AWS (through Lambda) or Azure (through Functions).
Either way will work. Personally I would deploy it using the microservice method because then you can scale each bot independently as needed.
Original Answer - before edit above
A port is simply a "number" on a TCP or UDP packet that tells the IP stack which application should receive the packet. There are 65536 ports available per IP address, per protocol (TCP or UDP; HTTPS uses TCP). So, technically, there is no limitation on which ports your application can use to receive its packets. Therefore, excluding any other limitations (inbound firewalls, framework limitations, etc.), you could simply start your 6 bots on ports 8443, 8444, 8445, and so on.
To answer your question about running six bots on a single port, again I can address this with generic server techniques. Backing up a little: when packets arrive at a computer, they find the right machine first by IP address, then locally by MAC address, and finally the port gets them to the right application. If you need multiple applications to receive data on the same port, then you need to perform that addressing yourself in the application protocol. Therefore, if the Telegram bot protocol includes an indication of which bot should receive the data, you could simply have routing code that directs incoming data to the proper bot. In this case you would NOT have each bot start by listening on a port (in your case, the same port), because that will, as you have experienced, generate the "address already in use" error. You would instead have your routing code listen on the port you want, and then route the incoming data to the proper bot code.
If there are limitations on which ports you can receive information on, and there is no way within the Telegram bot protocol to determine which bot should receive the information, then your only way to run six bots will be to have more than one IP address: spread the bots across the available ports, and then across multiple IP addresses when you run out of ports.
E.g. ports available 80, 88, 443, 8443
Need to run 6 bots.
IP address #1: Bot 1 - 80, Bot 2 - 88, Bot 3 - 443, Bot 4 - 8443
IP address #2: Bot 5 - 80, Bot 6 - 88

Related

Which is best port to use for secure Chat server to allow general firewall bypass (port 443 is already occupied)

So I developed a public chat application which will run on a Node server using secure socket.io.
That server, which only has a single IP address, already has ports 80/443 occupied.
So I need to find the next best port to use for the chat server.
I wonder, is there a recommended next-best port that most firewalls will allow through? I know, for example, that ports like 21 (FTP) will normally be blocked.
And is it best to pick one above 1024 or below?
In general, 8443 is the "alternative port for HTTPS", but you are still at risk of being filtered.
The proper solution is to run a proxy like nginx on port 443 and provide access to the various applications based on the hostname, not the port. For example, you can configure it to serve your current app when a user reaches https://example.com and the chat app when a user reaches https://chat.example.com.
Here is an example article showing how to do it: https://www.manuelkruisz.com/blog/posts/nginx-multiple-domains-one-server
The idea is that each app runs on a different internal port on the server, and the proxy running on port 443 picks which app the request should be routed to based on the hostname.
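If you wanted to do the same hostname-based routing in Node instead of nginx, a rough sketch with the http-proxy package could look like this (the hostnames, internal ports, and certificate paths are placeholders; this is only an illustration of the idea):

const https = require('https');
const fs = require('fs');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// Map each hostname to the internal port its app listens on
const targets = {
  'example.com': 'http://127.0.0.1:3000',      // existing web app
  'chat.example.com': 'http://127.0.0.1:3001'  // socket.io chat server
};

const server = https.createServer({
  key: fs.readFileSync('privkey.pem'),
  cert: fs.readFileSync('fullchain.pem')
}, (req, res) => {
  const target = targets[req.headers.host];
  if (target) proxy.web(req, res, { target });
  else { res.writeHead(404); res.end(); }
});

// Forward WebSocket upgrade requests (needed for socket.io) as well
server.on('upgrade', (req, socket, head) => {
  const target = targets[req.headers.host];
  if (target) proxy.ws(req, socket, head, { target });
  else socket.destroy();
});

server.listen(443);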

Does an MQTT subscriber need a static IP?

I want to develop a publisher -> subscriber model with 1 publisher and many subscribers in Node.js.
Currently my idea was to use a normal websocket. The problem with this is that every subscriber needs a static IP and port forwarding if it runs over the internet. This doesn't suit the requirements.
A solution to this seems to be MQTT, as it should be suited to that use case, but I saw that it also runs over websockets, which should lead to the same problem, or does MQTT handle it differently?
Essentially I need a solution where the publisher has a static IP and the subscribers can be anywhere in the world. Is this possible with MQTT, or do I need another solution?
No, only the MQTT broker needs a fixed IP address (and preferably a DNS entry) so the clients know where to find it.
All the MQTT clients (both subscribers and publishers), be they native MQTT or MQTT over websockets, connect out to the broker. This means they will work even behind a NAT router running with a dynamic IP address (they would all get disconnected whenever the IP address changed, but nearly all MQTT clients automatically reconnect).
These features make MQTT a good choice for consumer IoT devices as the situation described above is pretty much every home broadband setup.
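For example, with the MQTT.js client (the broker address and topic are placeholders), both the publisher and the subscriber simply connect out to the broker:

const mqtt = require('mqtt');

// Subscriber: can sit behind NAT with a dynamic IP; it dials out to the broker
const sub = mqtt.connect('mqtt://broker.example.com');
sub.on('connect', () => sub.subscribe('sensors/updates'));
sub.on('message', (topic, payload) => {
  console.log(topic, payload.toString());
});

// Publisher: also just connects out; only the broker's address must be known
const pub = mqtt.connect('mqtt://broker.example.com');
pub.on('connect', () => {
  pub.publish('sensors/updates', 'hello subscribers');
});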
It sounds like your subscribing devices are on local networks, and yes, you would need a static IP for the network and port forwarding inside it (not to mention a firewall exception in many setups) for a local device to serve incoming requests. Regardless of protocol, though, your subscribers do not need to be servers. It is far safer, and ultimately easier, to have them query a central server/system. Only that system needs a public IP.
WebSockets do not require port forwarding; they are often used to avoid it. The client opens a connection to a server, then keeps using it to send and receive. It no more requires port forwarding than your computer does when receiving a page from a website. If your publisher is a server or other web-exposed system, you may get the job done by configuring your subscribers to open websockets to it.
However, you may still want MQTT:
It sounds like your publisher might be something other than a webserver, and might be less suited to serving than your clients, since you asked this question. With an MQTT client, it could publish to an MQTT broker on a server, which will then pass the messages to the subscribers' clients.
Developing robust publish-subscribe functionality is extra work, and existing MQTT software will often serve better than new development.
With some extra configuration it is even possible to make MQTT subscriptions over WebSockets, but even a regular subscription works fine for avoiding static IPs, port forwarding, and inbound firewall rules.
Explore a one-way RPC approach. It will not need a public IP address or port forwarding.

How tunneling services like 'Localtunnel' works without SSH?

I want to understand how my local IP address (localhost) can be exposed to the Internet. For that I've read [here] about a method of port forwarding using SSH, which basically routes traffic from a publicly available server to our localhost using SSH.
But I wonder how a service like 'LocalTunnel' works. In the above article it's written as follows:
There services (localtunnel for example) creates a tunnel from their server back to port 3000 on your dev box. It functions pretty much exactly like an SSH tunnel, but doesn’t require that you have a server.
I've tried reading the code from its GitHub repository, and what I understood is:
These services have a server which is publicly available and which generates different URLs, and when we hit such a URL, it forwards the request to the localhost corresponding to that URL.
So basically it works like a proxy server.
Is this understanding correct? If yes, then what I don't understand is how these servers have access to some localhost running on my computer. How do they perform requests to it? What am I missing here? Here is the code which I referred to.
The Short Answer (TL;DR)
The Remote (i.e. the localtunnel software on your computer) initiates the connection to the Relay (i.e. localtunnel.me), which acts as a multiplexing proxy. When Clients (i.e. web browsers) connect, the relay multiplexes the connections between remotes and clients by appending special headers with network information.
Browser <--\ /--> Device
Browser <---- M-PROXY Service ----> Device
Browser <--/ \--> Device
Video Presentation
If you prefer a video presentation, I just gave a talk on this at UtahJS Conf 2018, in which I also talk a little about all of the other potential solutions: SSH SOCKS5 proxies (which you mentioned), VPNs, UPnP, DHT, relays, etc.:
Access Ability: Access your Devices, Share your Stuff
Slides: http://telebit.cloud/utahjs2018
How localtunnel, ngrok, and Telebit work (Long Answer)
I'm the author of Telebit, which provides a service with features similar to what ngrok, localtunnel, and libp2p provide (as well as open source code for both the remote/client and the relay/server, so you can run it yourself).
Although I don't know the exact internals of how localtunnel is implemented, I can give you an explanation of how it's generally done (and how we do it), and it's most likely nearly identical to how they do it.
The magic that you're curious about happens in two places: the remote socket and the multiplexer.
How does a remote client access the server on my localhost?
1. The Remote Socket
This is pretty simple. When you run a "remote" (telebit, ngrok, and localtunnel all work nearly the same in this regard), it's actually your computer that initiates the request.
So imagine that the relay (the localtunnel proxy in your case) uses port 7777 to receive traffic from "remotes" (like your computer) and that your socket number (a randomly chosen source port on your computer) is 1234:
Devices: [Your Computer] (tcp 1234:7777) [Proxy Server]
Software: [Remote] -----------------------> [Relay]
(auth & request 5678)
However, the clients (such as browsers, netcat, or other user agents) that connect to you actually also initiate requests with the relay.
Devices: [Proxy Service] (tcp 5678) [Client Computer]
Software: [Relay] <------------------------ [netcat]
If you're using TCP ports, then the relay service keeps an internal mapping, much like NAT:
Internal Relay "Routing Table"
Rule:
Socket remote[src:1234] --- Captures ------> ALL_INCOMING[dst:5678]
Condition:
Incoming client[dst:5678] --- MATCHES -------> ALL_INCOMING[dst:5678]
Therefore:
Incoming client[dst:5678] --- Forwards To ---> remote[src:1234]
Both connections are "incoming" connections, but the remote connection on the "south end" is authorized to receive traffic coming from another incoming source (without some form of authorized session anyone could claim use of that port or address and hijack your traffic).
[Client Computer] (tcp 5678) [Proxy Service] (tcp 1234) [Your Computer]
[netcat] --------------> <--[Relay]--> <------------ [remote]
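Conceptually (this is only an illustration, not localtunnel's actual code; the function names are hypothetical), the relay's bookkeeping could be as simple as a map from a claimed public port to the authorized remote's socket:

const net = require('net');

// publicPort -> the authenticated remote's socket
const routes = new Map();

// Called once a remote has connected on the "south end" and been authorized
function registerRemote(publicPort, remoteSocket) {
  routes.set(publicPort, remoteSocket);
}

// Accept client traffic on a public port and hand it to the matching remote
function exposePort(publicPort) {
  net.createServer((clientSocket) => {
    const remoteSocket = routes.get(publicPort);
    if (!remoteSocket) return clientSocket.destroy();
    // Naive forwarding: this only works for one client at a time (see below)
    clientSocket.pipe(remoteSocket);
    remoteSocket.pipe(clientSocket);
  }).listen(publicPort);
}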
2. The Multiplexer
You may have noticed that there's a critical flaw in the description above. If you just route the traffic as-is, your computer (the remote) could only handle one connection at a time. If another client (browser, netcat, etc) hopped on the connection, your computer wouldn't be able to tell which packets came from where.
Browser <--\ /--> Device
Browser <---- M-PROXY Service ----> Device
Browser <--/ \--> Device
Essentially, what the relay (i.e. the localtunnel proxy) and the remote (i.e. your computer) do is place a header in front of all data that is to be received by the client. It needs to be something very similar to HAProxy's PROXY protocol, but one that works for non-local traffic as well. It could look like this:
<src-address>,<src-port>,<sni>,<dst-port>,<protocol-guess>,<datalen>
For example
172.2.3.4,1234,example.com,443,https,1024
That info could be sent just before, or appended to, each data packet that's routed.
Likewise, when the remote responds to the relay, it has to include that information so that the relay knows which client the data packet is responding to.
See https://www.npmjs.com/package/proxy-packer for long details
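Purely as an illustration (this is not localtunnel's or proxy-packer's actual wire format, just the comma-separated layout shown above), framing and unframing could look roughly like this:

// Frame a chunk by prefixing it with the connection's metadata header
function frame(meta, chunk) {
  const header = [
    meta.srcAddress, meta.srcPort, meta.sni,
    meta.dstPort, meta.protocolGuess, chunk.length
  ].join(',') + '\n';
  return Buffer.concat([Buffer.from(header), chunk]);
}

// Unframe on the other side: recover the metadata plus the original bytes
function unframe(buffer) {
  const newline = buffer.indexOf('\n');
  const [srcAddress, srcPort, sni, dstPort, protocolGuess, datalen] =
    buffer.slice(0, newline).toString().split(',');
  const data = buffer.slice(newline + 1, newline + 1 + Number(datalen));
  return { srcAddress, srcPort: Number(srcPort), sni,
           dstPort: Number(dstPort), protocolGuess, data };
}

// The remote uses the metadata to keep one local connection per client,
// so many browsers can share the single remote<->relay tunnel.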
Sidenote/Rant: Ports vs TLS SNI
The initial explanation I gave uses TCP ports because it's easy to understand. However, localtunnel, ngrok, and telebit all have the ability to use the TLS Server Name Indication (SNI) instead of relying on port numbers.
[Client Computer] (https 443) [Proxy Service] (wss 443) [Your Computer]
[netcat+openssl] --------------------> <--[Relay]--> <------------ [remote]
(or web browser) (sni:xyz.somerelay.com) (sni:somerelay.com)
MITM vs p2p
There are still a few different ways you can go about this (and this is where I want to give a shameless plug for telebit because if you're into decentralization and peer-to-peer, this is where we shine!)
If you only use the TLS SNI for routing (which is how localtunnel and ngrok both work by default, last I checked), all of the traffic is decrypted at the relay.
Another way requires ACME/Let's Encrypt integration (i.e. Greenlock.js) so that the traffic remains encrypted end-to-end, routing the TLS traffic to the client without decrypting it. This method functions as a peer-to-peer channel for all practical purposes (the relay acts as just another opaque "switch" on the network of the Internet, unaware of the contents of the traffic).
Furthermore, if HTTPS is used both for the remotes (for example, via secure WebSockets) and for the clients, then the connections look just like any other HTTPS requests and won't be hindered by poorly configured firewalls or other harsh/unfavorable network conditions.
Now, solutions built on libp2p also give you a virtualized peer connection, but it's far more indirect and requires routing through untrusted parties. I'm not a fan of that because it's typically slower and I see it as riskier. I'm a big believer that network federation will win out over anonymization (like libp2p) in the long run. (For our use case we needed something that could be federated, i.e. run by independently trusted parties, which is why we built our solution the way we did.)

Expressjs app, using websockets for chat. Use different port for websocket server?

I'm making an app using Node.js' Express framework which serves both HTML content over HTTP and uses websockets for a chat feature. I'm wondering how I can accomplish both at the same time. My idea is to use a different port for websocket connections (so HTTP requests would come to port 3000 and websockets would connect on port 3001), but I don't know if that's a good solution. I'm especially worried about deployment to something like Heroku and whether I can specify different ports for my app.
I'm wondering how I can accomplish both at the same time.
The webSocket protocol is specially designed so it can run on the same port as your regular web server requests. So, you don't need a separate port in order to have both a web server and chat running using webSockets.
This works because a webSocket connection is always initiated with an http request that sets a few special headers. The receiving web server can then detect those special headers and know that this incoming http request is actually a request to initiate a webSocket connection. With a particular response, the client and server then agree to "upgrade" the connection and switch to the webSocket protocol. From that point on, that particular TCP connection uses the webSocket protocol.
Meanwhile any incoming http request that does not have the special webSocket headers on it is treated by your web server as just a regular http request. In this way, the same server and the same port can be used for both webSocket connections and regular http requests. No second port is needed.
Another advantage of this scheme is that the client can avoid the cross-origin issues that it would run into if it was trying to use a different port than the web page it was loaded from.
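For example, with Express and the ws package (the port and the broadcast logic are just placeholders for your chat feature), one HTTP server handles both kinds of traffic:

const express = require('express');
const http = require('http');
const WebSocket = require('ws');

const app = express();
app.get('/', (req, res) => res.send('regular http content'));

// One HTTP server handles normal requests via Express...
const server = http.createServer(app);

// ...and webSocket upgrade requests via ws, on the very same port
const wss = new WebSocket.Server({ server });
wss.on('connection', (ws) => {
  ws.on('message', (msg) => {
    // naive chat broadcast to every connected client
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) client.send(msg.toString());
    });
  });
});

server.listen(3000);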
I'm especially worried about deployment to something like Heroku and whether I can specify different ports for my app.
If you were to actually use two ports, then you would need to create two separate servers, one listening on each port since a given server can only listen on one port. In node.js, the two servers could both be in the same node.js app (making it easier to share data between them) or you could put them in completely separate node.js processes (your choice).
And, if you used multiple ports, you'd also have to support CORS so that the browser would be allowed to connect to the separate port (to avoid same-origin restrictions).

Multiple tcp services on the same port

I'm working on a project where some clients (embedded Linux systems) need to connect to a main server using, so far, at least two protocols: HTTPS and SSH. One of the requirements is that only one flow is allowed from each client to the server, so I have to find a way to make the two services work on the same port.
One solution I was thinking about is to use iptables marks: on the client side, mark the packets for SSH with 0x1 and the packets for HTTPS with 0x2, and then on the server side, based on the mark, redirect the packets to the right service listening on the local interface. Is that an acceptable solution? Are the marks kept by the network routers, or do they only work locally on the same machine with iptables?
And anyway, if you can advise a different solution, of course it's welcome!
More for other users finding this question in the future:
https://github.com/yrutschle/sslh has what you might need. I haven't used it (yet) but am planning to.
From the GitHub site:
sslh -- A ssl/ssh multiplexer
sslh accepts connections on specified ports, and forwards them further based on tests performed on the first data packet sent by the remote client.
Probes for HTTP, SSL, SSH, OpenVPN, tinc, XMPP are implemented, and any other protocol that can be tested using a regular expression, can be recognised. A typical use case is to allow serving several services on port 443 (e.g. to connect to SSH from inside a corporate firewall, which almost never block port 443) while still serving HTTPS on that port.
Hence sslh acts as a protocol demultiplexer, or a switchboard. Its name comes from its original function to serve SSH and HTTPS on the same port.
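Just to illustrate the idea sslh describes (this is a toy sketch in Node, not sslh itself; the backend ports are placeholders), a demultiplexer can peek at the first data a client sends and route accordingly:

const net = require('net');

// Toy protocol demultiplexer: inspect the first bytes of each connection
// and forward it to the matching local service
net.createServer((client) => {
  client.once('data', (firstChunk) => {
    let targetPort;
    if (firstChunk.slice(0, 4).toString() === 'SSH-') {
      targetPort = 22;          // SSH identification string
    } else if (firstChunk[0] === 0x16) {
      targetPort = 4443;        // TLS handshake record -> local HTTPS service
    } else {
      return client.destroy();  // unknown protocol
    }

    const backend = net.connect(targetPort, '127.0.0.1', () => {
      backend.write(firstChunk); // replay the bytes we already consumed
      client.pipe(backend);
      backend.pipe(client);
    });
    backend.on('error', () => client.destroy());
  });
}).listen(443);

Real tools like sslh also handle SSH clients that wait silently for the server's banner by falling back to SSH after a timeout.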
