Transporting Socket.IO socket to another server - node.js

How can I transport a WebSocket that connects to Server A (which is using Socket.IO) to Server B each time data is sent, and be able to send data to that socket from Server B? On top of that, the socket should not be able to send data to Server B directly; all data going into the servers must go through Server A. Would I be able to just pass the socket object from Server A to B? Note: any request Server B gets is authenticated by Server A.
I hope this isn't too confusing the way I worded it. Here's a text diagram.
Socket -> Server A -> Server B -> Socket

You cannot pass a socket from one server to another. A given TCP socket (which socket.io is based on) is between two specific endpoints only and you cannot change that once the socket has been connected.
Your question is not entirely clear, but it sounds like maybe you just want server A to act like a proxy, where the client connects to server A and then server A can decide to forward some data on to server B. You can either use a pre-existing proxy server that supports your method of authentication in place of server A, or, if you already have a server A process, you can add proxying capabilities to it by having it send the appropriate packets of information on to server B.
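A minimal sketch of that proxy shape, assuming a hypothetical /ingest endpoint on server B and made-up event names; server A is the only process the client ever talks to:

const { Server } = require('socket.io');
const axios = require('axios');

// Server A: clients connect here over socket.io; nothing connects directly to B.
const io = new Server(3001);

io.on('connection', (socket) => {
  // ...authenticate the client here before forwarding anything...
  socket.on('client-data', async (payload) => {
    // Server A relays the data to server B over plain HTTP.
    const reply = await axios.post('http://server-b.internal/ingest', payload);
    // Server B's answer travels back to the client through server A.
    socket.emit('server-reply', reply.data);
  });
});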
Or, you can have a socket connect to server A, authenticate through server A and, as part of that connection, receive a token which it could then use to directly connect to server B. Server A and B would share access to some "token store" so that server B could check the token store to see whether an incoming connection has presented a valid token or not.
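A rough sketch of that token hand-off, assuming a node-redis v4 client as the shared token store; the function names and expiry are made up for illustration:

const crypto = require('crypto');

// Server A: after authenticating a client over its socket.io connection,
// mint a short-lived token and record it in the shared store.
async function issueToken(redis, clientId) {
  const token = crypto.randomBytes(24).toString('hex');
  await redis.set(`token:${token}`, clientId, { EX: 60 }); // expires in 60 seconds
  return token; // server A sends this back to the client
}

// Server B: only accept a connection if it presents a token server A created.
async function verifyToken(redis, token) {
  const clientId = await redis.get(`token:${token}`);
  return clientId !== null;
}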

Related

How socket and express listen on same port

Hello, I have a conceptual dilemma.
I wanted to integrate socket.io (real-time communication) after the user asks for a ride (I'm making an Uber-like project).
I got an example here: Medium Article. As I can see, express and socket.io are listening on the same port. How can that be possible?
If I send an HTTP request, the Express app picks it up and does the job for me on port 80 (say). Now I emit an event. It also reaches port 80. How does it decide that this goes to socket.io and not to the app? On the server there is only one port 80. How, on the same port, are two different protocols able to tell which request to process?
Here's how I picture it:
The client opens a random port and sends a GET request to the server. HTTP is a protocol where the data has a particular format: headers, body, etc. The server picks it up and the Express app parses it for me. Now the client wants to open real-time communication. It sends a request to open a socket. All good. A request is received and a socket is opened.
Exactly here I get stuck. How? How does the socket start listening on the same port on my server and client? How does the server process the request?
I am not able to get a feel for real-time communication with the HTTP protocol or any other protocol.
For example, if I want to make my socket secure (I mean a particular socket used by a particular authenticated user), I do something like:
app.post('/secure', jwt.verify, (req, res) => { });
Now let's say this request is responsible for opening a socket connection for sharing location data, like in Uber. If I open a socket inside this handler, will it automatically be secure? Coming from a secure port? What will the socket be like?
Please, someone give me a feel for what is going on on the server side and the client side. I just don't understand it even after reading a lot on Stack Overflow.
There is a server, and the client may need to open real-time communication after some time, but how does everything fit together?
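For reference, here is a minimal sketch of the setup the question describes (assumed port and handlers, socket.io v4): Express and socket.io are attached to the same HTTP server, which is how they end up sharing one port; plain requests go to Express, and connections that upgrade to WebSocket go to socket.io.

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);  // Express handles plain HTTP requests
const io = new Server(server);          // socket.io handles WebSocket upgrades on the same server

app.get('/', (req, res) => res.send('handled by Express'));

io.on('connection', (socket) => {
  socket.emit('hello', 'handled by socket.io over the same port');
});

server.listen(80);  // one port for both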

What port is used to send requests by an Express app?

I have an Express app A that is configured to listen on port 4455.
The app also uses axios to send requests to a different server B.
Server B is configured such that it replies to the host:port from which it received the request.
In this case server A can't receive the response from B, because the port of A in the request keeps changing.
Does an Express server send and receive messages from the same port?
The port on which Express listens for incoming connections has nothing to do with the port used for requests that happen to be made from the same application.
Requests are typically made from a random(-ish) port, and it would require some effort if you want that port to be fixed (always the same). In fact, I'm not even sure if you can make axios use a specific local port from which it makes requests.
That leaves the following solution: you make a request using axios, somehow (I'm not sure how) record from which local port that request is being made, and after the request has finished, create a temporary (Express) server that listens for the response on that same local port. When server B has sent its response (or after a specific timeout), that server is stopped.
To be honest, the way that server B sends back its responses is quite uncommon, especially since requests are almost always made from a random port. I also don't understand why server B cannot send back the response over the existing connection.
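To make the first point concrete, here is a minimal sketch (hypothetical port and URL; Node's built-in http module is used instead of axios so the outgoing socket's local port is easy to read) showing that the listening port and the outgoing port are unrelated:

const express = require('express');
const http = require('http');

const app = express();
app.listen(4455, () => {
  // An outgoing request made from the same process that listens on 4455.
  http.get('http://example.com/', (res) => {
    // The OS picks an ephemeral local port for the outgoing connection;
    // it has nothing to do with 4455.
    console.log('listening on 4455, request sent from local port',
                res.socket.localPort);
    res.resume();
  });
});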

Persist websocket connection object across multiple servers

I am using a websocket library on the server for establishing socket connections.
https://github.com/websockets/ws
I have more than one server in a cluster, and I want to know how I can use the same socket connection object on another server in the cluster.
I also want to know which is the better option for a web chat implementation: a native WebSocket or socket.io.
You cannot use the same actual socket object across multiple servers. The socket object represents a socket connection between a client and one physical server process. It is possible to build a virtual socket object that knows which server its connection lives on and sends that server a message, which that server then sends out over the actual socket.
The socket.io/redis adapter is one such virtual way of doing this. You set up a node.js cluster and you use the redis adapter with socket.io. It uses a central redis-based store to keep track of which server process each physical connection is on. Then, when you want to send a message to a particular client from any of the server processes, you send that message through socket.io; it looks up in the redis database for you where that socket is connected, contacts that server, and asks it to send the message to that particular client over the socket.io connection that is currently present on that other server process. Similarly, you can broadcast to groups of sockets and it will do all the work under the covers of making sure the message gets to the clients no matter which actual server they are connected to.
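A minimal sketch of that setup, assuming socket.io v4 with the @socket.io/redis-adapter package and a local Redis instance (details vary by version):

const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const io = new Server(3000);

const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));

  io.on('connection', (socket) => {
    socket.join('chat');
    // This reaches every client in the room, no matter which server
    // process in the cluster holds that client's actual connection.
    io.to('chat').emit('message', 'hello from any process');
  });
});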
You could surely build something similar yourself for plain webSocket connections, and others have built pieces of it. I'm not familiar enough with what exists out there in the wild to offer any recommendations for a plain webSocket. There are plenty of articles on scaling webSocket servers horizontally which you can find with Google and read to get started if you want to do it with a plain webSocket.

Can a socket.io server communicate with a non-socket.io client

I am building a chat server which uses a custom protocol for communicating with clients over sockets. The client sends specific strings to the server and the server must take an appropriate action based on these non-standard messages. I can't change this protocol, nor do I have any access to the underlying client code.
My question is, can I use the node.js socket.io package to power the server socket communication if I have no idea how the client is handling its socket activity? I'm asking because, reading through the socket.io docs, each time I push anything through a socket, an 'event' is associated with each action.
Is it possible to send a very exact message to the client from the server with this 'event' bundled in? Am I better off using the websockets package?
Can a socket.io server communicate with a non-socket.io client
No. A socket.io server requires both the webSocket protocol for connection initiation and the socket.io format on top of that. So, a socket.io server can only talk to a socket.io client and vice versa.
If your chat client uses a custom protocol, then you need to implement a TCP server that also speaks that custom protocol (whatever it is).
If you can modify the client, then you can modify it to use a socket.io client and then you can send your chat messages via socket.io where your socket.io server can then receive those messages.
The client sends specific strings to the server and the server must take an appropriate action based on these non-standard messages. I can't change this protocol, nor do I have any access to the underlying client code.
Then, you have to implement a server that speaks your custom client protocol based on whatever the underlying protocol is for the client. There is no other way around it.
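As a sketch of what that looks like with Node's built-in net module (the actual protocol here is unknown, so the PING/PONG exchange is just a placeholder):

const net = require('net');

const server = net.createServer((socket) => {
  socket.setEncoding('utf8');
  socket.on('data', (chunk) => {
    // Parse the client's custom string protocol here and reply with
    // exact strings; there is no socket.io event framing involved.
    if (chunk.trim() === 'PING') {
      socket.write('PONG\n');
    }
  });
});

server.listen(5000);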
I'm asking because, reading through the socket.io docs, each time I push anything through a socket an 'event' is associated with each action.
This is how the socket.io layer works. It sends a message name and (optional) data. This can always be used to just send data: declare a generic data message and listen for that message on the other end. But this assumes you can modify both client and server to work this way. If you can't modify your client to use the socket.io protocol, then you can't use socket.io.
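A minimal sketch of that generic-message pattern, assuming socket.io v4 on both ends and a made-up 'data' event name:

const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  // One generic event name carries arbitrary payloads in both directions.
  socket.on('data', (payload) => console.log('server received', payload));
  socket.emit('data', { text: 'hello from server' });
});

// Client side: must also be a socket.io client, not a raw TCP client.
const { io: ioClient } = require('socket.io-client');
const clientSocket = ioClient('http://localhost:3000');
clientSocket.on('data', (payload) => console.log('client received', payload));
clientSocket.emit('data', { text: 'hello from client' });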

http server on top of net server

I implemented two web servers with Express. One is the main server, one is a microservice.
They communicate through an HTTP REST API, and historically we had a socket.io server started on the microservice so the main server could watch its up/down status.
[main server] -----HTTP----- [microservice]
[main server] ---socket.io--- [microservice]
I then realized that socket.io is not the right tool for that. So I decided to trade socket.io for a raw TCP socket.
So the question is: is it possible to start the HTTP server "ON TOP" of a raw TCP server (on the same port), allowing me to connect with a TCP client AND to send HTTP requests?
I have this so far:
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app); // Express app attached to a plain HTTP server
// const io = sio(server);             // the socket.io server being replaced
server.listen(config.port, config.ip, callback); // config and callback are defined elsewhere
and I'm trying to integrate with this
What I'm trying to achieve, and achieved successfully with socket.io, is starting a socket server on the microservice, connecting to it from the main server, keeping the connection alive, and watching for events so as to keep a global boolean variable "connected" in sync with it. I'm using this variable to let my frontend know the state of the microservice, to pre-check whether I should even try to request the microservice when it's needed, and also for logging purposes. I'd like to avoid manual polling, firstly for maintainability, and also for real-time purposes.
Is it possible to start the HTTP server "ON TOP" of a raw TCP server (on the same port)?
Sort of, not really. HTTP runs on top of TCP. So, you could technically open a raw TCP server and then write your own code to parse incoming HTTP requests and send out legal HTTP responses. But, now you've just written your own HTTP server so it's no longer raw TCP.
The challenge with trying to have a single server that accepts both HTTP and some other protocol is that your server has to be able to figure out, for any given incoming packet, what it is supposed to do with it. Is it an HTTP request? Or is it your other type of custom request? It would be technically feasible to write such a thing.
Or, you could use the webSocket technique that starts out as an HTTP request, but requests an upgrade to some other protocol using the upgrade header. It is fully defined in the http spec how to do this.
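A minimal sketch of that approach, assuming a made-up "my-raw-protocol" value in the Upgrade header; ordinary requests still go to Express, while upgraded connections drop down to the raw TCP socket:

const express = require('express');
const http = require('http');

const app = express();
app.get('/health', (req, res) => res.send('ok'));  // normal HTTP, handled by Express

const server = http.createServer(app);

server.on('upgrade', (req, socket) => {
  if (req.headers.upgrade !== 'my-raw-protocol') {
    socket.destroy();
    return;
  }
  // Finish the handshake, then treat the connection as a raw TCP stream.
  socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
               'Upgrade: my-raw-protocol\r\n' +
               'Connection: Upgrade\r\n\r\n');
  socket.on('data', (chunk) => socket.write(chunk));  // echo, as a placeholder
});

server.listen(3000);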
But, unless you have some network restriction that you can only have one server or one open port, I'd ask why? It's a complicated way to do things. It doesn't really cost anything to just use a different port and a different listening server for the different type of communication. And, when each server is listening only for one type of traffic, things are a heck of a lot simpler. You can use a standard HTTP server for your HTTP requests and you can use your own custom TCP server for your custom TCP requests.
I can't really tell from your question what the real problem is here that you're trying to solve. If you just want to test whether your HTTP server is up or down, then use some external process that queries one of your HTTP REST API calls every once in a while and discerns whether the server is responding as expected. There are many existing bodies of code that can be configured to do this too (it's a common task to check on the well-being of a web server).
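A minimal sketch of that external check, with a made-up URL and interval, keeping a flag like the "connected" boolean described in the question:

const axios = require('axios');

let microserviceUp = false;

// Poll an existing REST endpoint on the microservice every few seconds.
setInterval(async () => {
  try {
    await axios.get('http://microservice.internal:4000/health', { timeout: 2000 });
    microserviceUp = true;
  } catch (err) {
    microserviceUp = false;
  }
}, 5000);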
The code you link to shows a sample server that just sends back any message that it receives (called an echo server). This is just a classic test server for a client to connect to as a test. The second code block is a sample piece of client code to connect to a server, send a short message and then disconnect.
From your comments:
The underlying TCP server wouldn't even be used for messaging, it just would be used to watch connect/disconnect events
The http server already inherits from a TCP server so it has all the same events for the server itself. You can see all those events in the http server doc. I don't know exactly what you want, but there are server lifetime events such as:
listening (server now listening)
close (server now closed)
And, there are server activity events such as:
connection (new client connected)
request (new client issues a request)
And, from the request event, you can get both the request (http.IncomingMessage) and response (http.ServerResponse) objects, which allow you to monitor the lifetime of an individual connection, including even getting the actual socket object of an incoming connection.
Here's a code example for the connect event right in the http server doc.
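As a sketch of watching connections on the HTTP server the microservice already runs (assumed port; a count is kept rather than a single boolean since several clients may connect):

const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);

let openConnections = 0;

server.on('connection', (socket) => {
  openConnections++;
  console.log('client connected from', socket.remoteAddress);
  socket.on('close', () => {
    openConnections--;
    console.log('client disconnected');
  });
});

server.listen(3000);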
