I have spun up 3 node instances using pm2. They are all running a WebSocket server, on ports 9300, 9301, and 9302.
My main server acts as an nginx load balancer. The nginx upstream block:
upstream websocket {
    least_conn;
    server 127.0.0.1:9300;
    server 127.0.0.1:9301;
    server 127.0.0.1:9302;
}
After 10 players have connected, they are distributed in round-robin fashion. I am also using Redis Pub/Sub across all the node instances.
I am curious: is it possible for a connected player that is on instance 9300 to switch to 9302 without losing their connection?
The reason is that my game is instance-based. I have "games", if you will, that players can create or join. If I could get the players of a given game onto the same node instance, I would cut out all the extra Pub/Sub signals and achieve better latency. (Or so I think, but I'm curious whether this is even possible.)
I am curious: is it possible for a connected player that is on instance 9300 to switch to 9302 without losing their connection?
No, it is not possible. A TCP socket is a connection between two specific endpoints and it cannot be moved from one endpoint to another after it is established. There are very good security reasons why this is prohibited (so connections can't be hijacked).
The usual way around this problem is for the server to tell the client to reconnect and give it instructions for how to connect to the particular server you want it connected to (e.g. connect to a specific port or specific hostname or some other means that your load balancer might use).
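As a rough sketch of that handshake, assuming socket.io on both ends (the "reassign" event name and the addresses are invented for illustration):

// --- server side (e.g. the instance on 9300) ---
const { Server } = require('socket.io');
const io = new Server(9300);

io.on('connection', (socket) => {
  // When your game logic decides this player belongs on another instance:
  socket.emit('reassign', { host: '127.0.0.1', port: 9302 }); // "reassign" is an invented event name
  socket.disconnect(true); // close the old connection
});

// --- client side ---
const { io: connect } = require('socket.io-client');

let socket = connect('http://game.example.com'); // initial, load-balanced connection
socket.on('reassign', ({ host, port }) => {
  socket.disconnect();
  socket = connect(`http://${host}:${port}`); // reconnect straight to the chosen instance
});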
Related
My team and I are working on a digital signage platform.
We have ~2000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to be able to scale our application horizontally across multiple servers, but we have a problem that we can't figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls the API on the server; the server then works out which sockets will be "impacted" by the API call and sends them the information.
After a lot of searching, we assume that we have to store the sockets (or their IDs) elsewhere (Redis?) to make the application stateless. Then any server can respond to an API call and look up the sockets in a central place.
Unfortunately, we can’t find any detailed example on how to do that.
Can you please help us ?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like redis: they only make sense in the context of the server where they were initiated).
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so the server can route it to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the redis cache.
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
If this must scale up to a very large number (more than a hundred or so) node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.
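As a minimal sketch of the register/heartbeat/fan-out steps above, assuming Express, the node-redis v4 client, and Node 18+ for the built-in fetch (the SELF_ADDR variable, the /notify route, and the key names are invented):

const express = require('express');
const { createClient } = require('redis');

const SELF_ADDR = process.env.SELF_ADDR; // e.g. "12.34.56.78:3000"
const redis = createClient({ url: 'redis://127.0.0.1:6379' });
const app = express();
app.use(express.json());

// Register this server and refresh the entry once a minute; the 90-second TTL
// lets dead servers drop out of the list automatically.
async function heartbeat() {
  await redis.set(`servers:${SELF_ADDR}`, '1', { EX: 90 });
}

app.post('/notify', async (req, res) => {
  // Fan the call out to every other registered server, marking the copies
  // so they are not fanned out again.
  if (!req.query.repeated) {
    const keys = await redis.keys('servers:*');
    const others = keys
      .map((k) => k.slice('servers:'.length))
      .filter((addr) => addr !== SELF_ADDR);
    await Promise.allSettled(others.map((addr) =>
      fetch(`http://${addr}/notify?repeated=yes`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(req.body),
      })));
  }
  // Each server then looks at its own connected sockets and forwards the
  // data to the Raspberry Pis it owns (application-specific).
  res.sendStatus(204);
});

(async () => {
  await redis.connect();
  await heartbeat();
  setInterval(heartbeat, 60 * 1000);
  app.listen(3000);
})();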
Every few months, when thinking through a personal project that involves sockets, I find myself asking: "How would you properly load balance sockets on a dynamically, horizontally scaling WebSocket server?"
I understand the theory behind horizontally scaling the WebSockets and using pub/sub models to get data to the right server that holds the socket connection for a specific user. I think I understand ways to effectively identify the server with the fewest current socket connections that I would want to route a new socket connection to. What I don't understand is how to effectively route new socket connections to the server you've picked with low socket count.
I don't imagine this answer would be tied to a specific server implementation, but rather could be applied to most servers. I could easily see myself implementing this with vert.x, node.js, or even Perfect.
First off, you need to define the bounds of the problem you're asking about. If you're truly talking about dynamic horizontal scaling where you spin up and down servers based on total load, then that's an even more involved problem than just figuring out where to route the latest incoming new socket connection.
To solve that problem, you have to have a way of "moving" a socket from one host to another so you can clear connections from a host that you want to spin down (I'm assuming here that true dynamic scaling goes both up and down). The usual way I've seen that done is by engaging a cooperating client where you tell the client to reconnect and when it reconnects it is load balanced onto a different server so you can clear off the one you wanted to spin down. If your client has auto-reconnect logic already (like socket.io does), you can just have the server close the connection and the client will automatically re-connect.
As for load balancing the incoming client connections, you have to decide what load metric you want to use. Ultimately, you need a score for each server process that tells you how "busy" you think it is so you can put new connections on the least busy server. A rudimentary score would just be number of current connections. If you have large numbers of connections per server process (tens of thousands) and there's no particular reason in your app that some might be lots more busy than others, then the law of large numbers probably averages out the load so you could get away with just how many connections each server has. If the use of connections is not that fair or even, then you may have to also factor in some sort of time moving average of the CPU load along with the total number of connections.
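For illustration, a tiny sketch of such a score in Node.js, combining the connection count with the 1-minute CPU load average (the weighting constant is arbitrary and would need tuning):

const os = require('os');

// Hypothetical load score: current socket count plus a weighted 1-minute
// CPU load average; tune cpuWeight for your own workload.
function loadScore(currentConnections, cpuWeight = 1000) {
  const [oneMinuteLoad] = os.loadavg();
  return currentConnections + cpuWeight * oneMinuteLoad;
}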
If you're going to load balance across multiple physical servers, then you will need a load balancer or proxy service that everyone connects to initially, and that proxy can look at the metrics for all currently running servers in the pool and assign the connection to the one with the lowest current score. That can either be done with a proxy scheme or (more scalably) via a redirect so the proxy gets out of the way after the initial assignment.
You could then also have a process that regularly examines your load score (however you decided to calculate it) on all the servers in the cluster and decides when to spin a new server up or when to spin one down or when things are too far out of balance on a given server and that server needs to be told to kick several connections off, forcing them to rebalance.
What I don't understand is how to effectively route new socket connections to the server you've picked with low socket count.
As described above, you either use a proxy scheme or a redirect scheme. At a slightly higher cost at connection time, I favor the redirect scheme because it's more scalable when running and creates fewer points of failure for an existing connection. All clients connect to your incoming connection gateway server which is responsible for knowing the current load score for each of the servers in the farm and based on that, it assigns an incoming connection to the host with the lowest score and this new connection is then redirected to reconnect to one of the specific servers in your farm.
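A bare-bones sketch of that gateway, assuming Express and that a scores map is kept current by whatever heartbeat scheme you choose (the /assign route and field names are invented):

const express = require('express');
const app = express();

// addr -> load score, kept up to date by your heartbeat/registration scheme.
const scores = new Map();

// The client asks the gateway where to connect, then opens its WebSocket
// directly against the returned address, so the gateway gets out of the way.
app.get('/assign', (req, res) => {
  let best = null;
  for (const [addr, score] of scores) {
    if (!best || score < best.score) best = { addr, score };
  }
  if (!best) return res.status(503).json({ error: 'no servers available' });
  res.json({ connectTo: best.addr });
});

app.listen(8080);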
I have also seen load balancing done purely by a custom DNS implementation. Client requests IP address for farm.somedomain.com and that custom DNS server gives them the IP address of the host it wants them assigned to. Each client that looks up the IP address for farm.somedomain.com may get a different IP address. You spin hosts up or down by adding or removing them from the custom DNS server and it is that custom DNS server that has to contain the logic for knowing the load balancing logic and the current load scores of all the running hosts.
Route the websocket requests to a load balancer that makes the decision about where to send the connections.
As an example, HAProxy has a leastconn method for long connections that picks the least recently used server with the lowest connection count.
The HAProxy backend server weightings can also be modified by external inputs; @jfriend00 detailed the technicalities of weighting in their answer.
I found this project that might be useful:
https://github.com/apundir/wsbalancer
A snippet from the description:
Websocket balancer is a stateful reverse proxy for websockets. It distributes incoming websockets across multiple available backends. In addition to load balancing, the balancer also takes care of transparently switching from one backend to another in case of mid session abnormal failure.
During this failover, the remote client connection is retained as-is thus remote client do not even see this failover. Every attempt is made to ensure none of the message is dropped during this failover.
Regarding your question: that new connection will be routed by the load balancer, if it is configured to do so.
As @Matt mentioned, for example with HAProxy using the leastconn option.
I'm working on a project where we need to connect clients to devices behind LAN networks.
Brief description: there are "devices" connected, in a home for example, on a LAN created by a router. These devices run a full web server on Linux, using Node.js as the backend implementation language. They also have access to the Internet through the router's public IP. On the other side, there are clients which can choose which device to connect to.
The goal is to connect the clients with the webServer created by any device.
Up to now, my idea is to implement something similar to how TeamViewer works. As I understand it, TeamViewer has a central server which the agents connect to. When an agent connects to the central server, the server takes hold of the TCP connection and keeps it alive. When another client wants to access the first one, the server bridges the two TCP connections. That way the server acts like a proxy, and it additionally routes the TCP connections. This also allows connecting to clients behind a LAN or firewall (because the connections are always created by the clients).
If this is correct, what I would like to implement is a central server, also in Node.js, which manages a pool of socket connections coming from the different active devices, and when a client wants to connect to one specific device, the server bridges the client's incoming TCP connection with the device's already existing connection.
What I first would like to know is whether this is possible in Node.js. My idea is to keep the device connections alive so clients can immediately connect to them, creating some sort of pool of device connections.
If I implemented this in C, I guess I could get hold of the socket descriptor, keep it alive, and hand it over to the incoming client request. But in Node.js I can't seem to find any modules that manage TCP connections at that level.
Are there any high-level npm packages which do this? Otherwise, is it possible to use lower-level modules (like net) which have this functionality?
Ideally I would like to implement it with high-level modules (Express), but if that's not possible, I could always rewrite the server using low-level modules.
Thanks in advance
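For what it's worth, the socket-pairing idea described above can be sketched with Node's built-in net module; the ports and the device-identification step below are placeholders:

const net = require('net');

// Pool of long-lived device connections, keyed by some device id.
const devices = new Map();

// Devices dial in from behind their routers and stay connected.
net.createServer((deviceSocket) => {
  const deviceId = 'device-1'; // in reality, read an id from a handshake
  devices.set(deviceId, deviceSocket);
  deviceSocket.on('close', () => devices.delete(deviceId));
}).listen(4000);

// Clients connect, and the server splices the two TCP streams together.
net.createServer((clientSocket) => {
  const deviceSocket = devices.get('device-1'); // choose the target device
  if (!deviceSocket) return clientSocket.end();
  clientSocket.pipe(deviceSocket);
  deviceSocket.pipe(clientSocket);
}).listen(5000);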
I currently am creating a horizontally scalable socket.io server which looks like the following:
LoadBalancer (nginx)
Proxy1 Proxy2 Proxy3 Proxy{N}
BackEnd1 BackEnd2 BackEnd3 BackEnd4 BackEnd{N}
My question is: with the socket.io-redis module, can I send a message to a specific socket connected to one of the proxy servers from one of the backend servers, if they are all connected to the same Redis server? If so, how do I do that?
As you want to scale the socket.io server and you are using nginx as the load balancer, do not forget to set up sticky load balancing; otherwise a single client's requests will be spread across multiple servers, depending on how the load balancer passes connections to the socket.io servers. So it is better to use sticky load balancing.
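For example, with nginx the simplest form of sticky balancing is switching the upstream to ip_hash (server addresses here are placeholders):

upstream websocket {
    ip_hash;
    server 127.0.0.1:9300;
    server 127.0.0.1:9301;
    server 127.0.0.1:9302;
}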
With the Redis socket.io adapter, you can send and receive messages across one or more socket.io servers, with the help of the Redis Pub/Sub implementation.
If you tell me which technology is used for the proxy and backend, I can give you more information on this.
Using the socket.io-redis module, all of your backend servers will share the same pool of connected users. You can emit from Backend1 and, if a client is connected to Backend4, they will get the message.
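A short sketch of that setup, using the socket.io-redis adapter API; the helper function below is just an illustration:

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// Every backend attaches the same Redis adapter, so room and socket-id
// targeted emits are relayed across all of them via Redis Pub/Sub.
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

// With the adapter in place, emitting to a socket id reaches that socket
// no matter which backend it is actually connected to.
function notifySocket(socketId, payload) {
  io.to(socketId).emit('message', payload);
}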
The key to making this work with socket.io, though, is to use sticky sessions on nginx so that once a client connects, it stays on the same machine. This is because socket.io starts with several HTTP long-polling requests before upgrading to a WebSocket, and they all need to reach the same backend server to work correctly.
Instead of sticky sessions, you can change your client connection options to use WebSockets ONLY. This removes the problem of multiple requests hitting multiple servers, because there will only be one connection: the single WebSocket. It also means your app loses the ability to fall back to long-polling instead of WebSockets.
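On the client, that option looks something like this with socket.io-client (the URL is a placeholder):

const { io } = require('socket.io-client');

// Never fall back to long-polling: the single WebSocket is the only connection,
// so sticky sessions are no longer required.
const socket = io('https://example.com', { transports: ['websocket'] });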
We have a node.js server that is primarily used with socket.io for browser inter connectivity in a web application.
We want to have a high availability solution which would theoretically consist of two node.js servers, one as a primary server and the other as a backup should the primary fail. The solution would allow that if or when the primary node.js server goes down the backup would take over to provide seamless functionality without interruption.
Is there a solution that allows socket.io to maintain the array of client connections over multiple servers without duplication of clients or of messages sent?
Is there another paradigm we should be considering for HA and node.js?
There is no way for a webSocket to fail over automatically to a new server, without any interruption, when the one it is currently connected to goes down. The webSockets that were connected to the server that went down will die. That's just how TCP sockets work.
Fortunately with socket.io, the client will quickly realize that the connection has been lost (within seconds) and the clients will try to reconnect fairly quickly. If your backup server is immediately in place (e.g. hot standby) to handle the incoming socket.io connections, then the reconnect will be fairly seamless from the client point of view. It will appear to just be a momentary network interruption from the client's point of view.
On the server, however, you need to not only have a backup, but you have to be able to restore any state that was present for each connection. If the connections are just pipes for delivering notifications and are stateless, then this is fairly easy since your backup server that receives the reconnects will immediately be in business.
If your socket.io connections are stateful on the server-side, then you will need a way to restore/access that state when the backup server takes over. One way of doing this is by keeping the state in a redis server that is separate from your web server (though you will then need a backup/high availability plan for the redis server too).
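As a rough illustration of that idea, assuming socket.io v4, the node-redis v4 client, and that the client sends some userId in its handshake auth (the key names and the state-change event are invented):

const { createClient } = require('redis');
const { Server } = require('socket.io');

const redis = createClient({ url: 'redis://127.0.0.1:6379' });
const io = new Server();

io.on('connection', async (socket) => {
  // Hypothetical: the client identifies itself in the handshake auth payload.
  const userId = socket.handshake.auth.userId;

  // Restore whatever per-connection state survived the failover.
  const saved = await redis.get(`session:${userId}`);
  let state = saved ? JSON.parse(saved) : {};

  socket.on('state-change', async (newState) => {
    // Keep the state outside the web server so a backup instance can pick it up.
    state = newState;
    await redis.set(`session:${userId}`, JSON.stringify(state));
  });
});

// Only start accepting connections once Redis is ready.
redis.connect().then(() => io.listen(3000));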
Socket.IO in both the primary and backup servers can be connected to a Redis server. This maintains the sessions from the primary server so they can be used by the backup server; the clients just have to reconnect to the new server when the primary fails.
Socket.IO-Redis
HAProxy is used for load balancing between multiple node.js instances. How useful it is will depend on how you plan to deal with failure of the primary server. If you already have a method to switch the primary server automatically, HAProxy will not be of much use; otherwise you can configure HAProxy to forward requests to the backup server when the primary server is unreachable.
Other options similar to HA-Proxy are:
node-http-proxy
Nginx