I am currently building a horizontally scalable socket.io service that looks like the following:
LoadBalancer (nginx)
Proxy1 Proxy2 Proxy3 Proxy{N}
BackEnd1 BackEnd2 BackEnd3 BackEnd4 BackEnd{N}
My question is: with the socket.io-redis module, can one of the backend servers send a message to a specific socket connected to one of the proxy servers, given that they are all connected to the same Redis server? If so, how do I do that?
Since you want to scale your socket.io servers and you are using nginx as the load balancer, do not forget to set up sticky load balancing; otherwise requests from a single client will land on different servers depending on how the load balancer distributes them. So it is better to use sticky load balancing.
With the Redis socket.io adapter, you can send and receive messages across one or more socket.io servers, with the help of Redis' Pub/Sub implementation.
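For example, a minimal sketch of the adapter setup (this assumes the socket.io v2-style API; the port and Redis host are placeholders):

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// Every socket.io server process points at the same Redis instance,
// so emits on any one process are relayed to the others over Pub/Sub.
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

// This now reaches every connected client in the whole cluster,
// regardless of which server process holds the connection.
io.emit('announcement', 'hello from any backend');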
If you tell me which technology is used for the Proxy and Backend layers, I can give you more specific information.
Using the socket.io-redis module, all of your backend servers will share the same pool of connected users. You can emit from Backend1, and if a client is connected to Backend4, it will get the message.
The key to making this work with Socket.io, though, is to use sticky sessions on nginx so that once a client connects, it stays on the same machine. This is because socket.io begins a connection with several long-polling requests before upgrading to a WebSocket, and they all need to reach the same backend server to work correctly.
Instead of sticky sessions, you can change your client connection options to use WebSockets ONLY. This removes the problem of multiple requests hitting multiple servers, as there will only be one connection, the single WebSocket. The trade-off is that your app loses the ability to fall back to long polling when WebSockets are unavailable.
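In socket.io this is a one-line client option; a sketch (the URL is a placeholder):

// Client side: connect with the WebSocket transport only, skipping
// the initial HTTP long-polling requests entirely.
const socket = io('https://your-app.example.com', {
  transports: ['websocket']
});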
I am working on a Node.js app with Socket.io. I ran a test in a single process using PM2 and there were no errors. Then I moved to our production environment (we use a Google Cloud Compute instance).
I run 3 app processes, and an iOS client connects to the server.
The problem is that the iOS client doesn't keep its socket connection. It doesn't send a disconnect to the server, but it gets disconnected and reconnects to the server, over and over.
I am not sure why the server disconnects the client.
If you have any hint or answer for this, I would appreciate it.
That's probably because requests end up on a different machine rather than the one they originated from.
Straight from Socket.io Docs: Using Multiple Nodes:
If you plan to distribute the load of connections among different processes or machines, you have to make sure that requests associated with a particular session id connect to the process that originated them.
What you need to do:
Enable session affinity, a.k.a sticky sessions.
If you want to work with rooms/namespaces, you also need a centralised memory store to keep track of namespace information, such as Redis with the Redis adapter.
But I'd advise you to read the documentation piece I posted; things might have changed a bit since the last time I implemented something like this.
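For completeness, one common way to get sticky sessions on nginx is ip_hash, which pins each client IP to one backend; a sketch with placeholder ports:

upstream socketio_backend {
    ip_hash;                    # same client IP -> same backend
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    location / {
        proxy_pass http://socketio_backend;
        proxy_http_version 1.1;                  # needed for WebSocket upgrades
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}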
By default, the socket.io client "tests out" the connection to its server with a couple of HTTP requests. If you have multiple servers and those initial HTTP requests don't all go to the exact same server, then the socket.io connection will never get established properly, will never switch over to a WebSocket, and will keep attempting to use HTTP polling.
There are two ways to fix this.
You can configure your clients to just assume the WebSocket protocol will work. This initiates the connection with one and only one HTTP request, which is then immediately upgraded to the WebSocket protocol (with socket.io running on top of that). In socket.io, this is a transport option specified with the initial connection.
You can configure your server infrastructure to be sticky so that a request from a given client always goes back to the exact same server. There are lots of ways to do this depending upon your server architecture and how the load balancing is done between your servers.
If your servers keep any client state local to the server (rather than in a shared database that all servers can access), then even a dropped connection and reconnect will need to go back to the same server, and sticky connections are your only solution. You can read more about sticky sessions on the socket.io website here.
Thanks for your replies.
I finally figured out the issue. It was caused by the timeout (TTL) of the backend service in the Google Cloud Load Balancer. The default was 30 seconds, which made each socket connection disconnect and reconnect.
So I updated the value to 3600 seconds, and then the connection stayed up.
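For reference, with the gcloud CLI that timeout can be raised with something like this (the backend service name is a placeholder):

gcloud compute backend-services update my-backend-service --global --timeout=3600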
I am using a WebSocket library on the server for establishing socket connections.
https://github.com/websockets/ws
I have more than one server in a cluster, and I want to know how I can use the same socket connection object on another server in the cluster.
I also want to know which is the better option for a web chat implementation: native WebSockets or socket.io.
You cannot use the same actual socket object across multiple servers. The socket object represents a socket connection between a client and one physical server process. It is possible, however, to build a virtual socket object that knows which server its connection lives on and can send that server a message to forward over the actual socket.
The socket.io Redis adapter is one such virtual way of doing this. You set up a Node.js cluster and you use the Redis adapter with socket.io. It uses a central Redis-based store to keep track of which server process each physical connection is on. Then, when you want to send a message to a particular client from any of the server processes, you send that message through socket.io; it looks up in the Redis database where that socket is connected, contacts that server, and asks it to send the message to that client over the socket.io connection currently present on that other server process. Similarly, you can broadcast to groups of sockets, and it does all the work under the covers of making sure the message gets to the clients no matter which actual server they are connected to.
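Addressing a single client then looks like this; a sketch where someSocketId stands for a socket id you obtained elsewhere (socket.io puts every socket in a room named after its own id):

// Runs on any server process in the cluster. The adapter relays the
// emit to whichever process actually holds the connection.
io.to(someSocketId).emit('private-message', { text: 'hi' });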
You could certainly build something similar yourself for plain WebSocket connections, and others have built pieces of it. I'm not familiar enough with what exists out there in the wild to offer any recommendations for a plain WebSocket, but there are plenty of articles on scaling WebSocket servers horizontally that you can find with Google and read to get started.
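To illustrate the idea, here's a rough sketch of the usual pattern for the ws library, using Redis Pub/Sub as the cross-server channel (this assumes the callback-style redis@3 client; the channel name is made up):

const WebSocket = require('ws');
const redis = require('redis');

const wss = new WebSocket.Server({ port: 8080 });
const pub = redis.createClient();
const sub = redis.createClient();

sub.subscribe('chat');

// Each process relays everything published on the channel to the
// clients it personally holds; together the processes cover everyone.
sub.on('message', (channel, message) => {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(message);
  });
});

// Outgoing messages go through Redis so that every process sees them.
wss.on('connection', (ws) => {
  ws.on('message', (data) => pub.publish('chat', data.toString()));
});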
I'm working on a project which uses Socket.IO and should be horizontally scalable. I'm using:
A Load Balancer using HAProxy
Multiple Node Servers (2-4)
Database servers (Redis and MongoDB)
I'm able to direct incoming socket connections to the Node servers using the round-robin method. The socket connection is stable, and if I use socket.emit() I receive the data. I'm also able to emit to other socket connections on the same Node server.
I'm facing issue in the following scenario:
User A connected to Node server 1 and User B connected to Node Server 2
My intention is to store the socket data in Redis.
If User A wants to send some data to User B, how can Node server 1 tell Node server 2 to emit the data to User B?
Please let me know how I can achieve this (with references if possible).
Thanks in advance.
This scenario is a match for the Pub/Sub feature of Redis.
If you haven't already, you should try Pub/Sub.
Have a look at the socket.io Redis adapter. It should be exactly what you need.
The clients() method in particular looks promising. Keep in mind that socket.io creates a unique room for each client.
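For the User A to User B case specifically, a common pattern is to also join each socket to a room keyed by its user id; a sketch assuming the client passes userId in the handshake query (the event names are invented):

io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId; // assumed sent by the client
  socket.join('user:' + userId);

  socket.on('dm', ({ to, text }) => {
    // The Redis adapter delivers this even when the recipient's
    // socket lives on a different Node server.
    io.to('user:' + to).emit('dm', { from: userId, text });
  });
});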
I have spun up 3 Node instances using PM2. They are all running a WebSocket server on these ports: 9300, 9301, and 9302.
My main server acts as an nginx load balancer. The nginx upstream block:
upstream websocket {
    least_conn;
    server 127.0.0.1:9300;
    server 127.0.0.1:9301;
    server 127.0.0.1:9302;
}
After 10 players have connected, they are distributed in round-robin fashion. I am also using Redis Pub/Sub across all the Node instances.
I am curious: is it possible for a connected player on instance 9300 to switch to 9302 without losing their connection?
The reasoning is that my game is instance-based. I have "games", if you will, that players can create or join. If I could get all the players in a game onto the same Node instance, I would cut out the extra Pub/Sub traffic and get better latency. (Or so I think; I'm just curious whether this is possible.)
Is it possible for a connected player on instance 9300 to switch to 9302 without losing their connection?
No, it is not possible. A TCP socket is a connection between two specific endpoints, and it cannot be moved from one endpoint to another after it is established. There are very good security reasons why this is prohibited (so connections can't be hijacked).
The usual way around this problem is for the server to tell the client to reconnect, giving it instructions for how to connect to the particular server you want it on (e.g. a specific port, a specific hostname, or some other means that your load balancer might use).
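A sketch of that hand-off for plain WebSockets; the message shape and URLs are invented for illustration:

// Server: ask the client to move to a specific instance.
ws.send(JSON.stringify({ type: 'reconnect-to', url: 'ws://game.example.com:9302' }));

// Browser client: tear down and dial the suggested endpoint.
let socket = new WebSocket('ws://game.example.com:9300');
socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'reconnect-to') {
    socket.close();
    socket = new WebSocket(msg.url); // re-attach handlers as needed
  }
};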
I have a Node.js application deployed on multiple servers, with Nginx load-balancing the browser traffic to them. The servers use a push notification mechanism (the websocket module) to communicate with the browser.
In the current setup, the browser loads an application page and opens a client socket that connects to the server. The WebSocket request is sent to Nginx, which forwards it to one of the servers in the cluster. When an event happens on a server, it notifies the client browsers listening on its WebSocket connections.
The problem is that each server communicates with only a subset of the WebSocket clients, and each server is only aware of the events that take place on that server. As a result, not all clients are notified of all server events.
I can see several potential solutions:
Configure Nginx to send WebSocket requests from each browser to all the servers in the cluster. I could not figure out how to do this; load balancing does not support broadcasting.
Store the WebSocket connections in the database so that all servers have access to them. I am not sure how to serialize the WebSocket connection object to store it in MongoDB.
Set up a communication mechanism among the servers in the cluster (some kind of message bus) and, whenever an event happens, have all the servers notify the WebSocket clients they are tracking. This somewhat complicates the system and requires the nodes to know each other's addresses. Which package is most suitable for such a solution?
What is the simplest way to implement distributed push notifications in a Node.js app?
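For what it's worth, the third option is the usual pattern, and Redis Pub/Sub keeps it simple because the nodes only need to know the address of Redis, not of each other. A sketch along the lines of the one earlier in this page (channel and event names are invented; redis@3-style client assumed):

const WebSocket = require('ws');
const redis = require('redis');

const wss = new WebSocket.Server({ port: 8080 });
const pub = redis.createClient();
const sub = redis.createClient();

// Every server subscribes once at startup...
sub.subscribe('server-events');

// ...and fans each published event out to the clients it tracks locally.
sub.on('message', (channel, payload) => {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  });
});

// When an event happens on any server, publish it once:
pub.publish('server-events', JSON.stringify({ type: 'item-updated', id: 42 }));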