I am looking for a way to cluster WebSocket servers written in Node so that load is balanced properly and a client request can be served by any appropriate Node instance. WebSocket connections are stateful, and I believe a Node cluster could help. I want the connection/state information to be shared so that any instance can serve a request and the client does not need to keep track of a specific instance. The reason for this thought process is to ensure that Node instances can be killed and replaced by new instances without worrying about the overhead of state management.
I have a setup where we use multiple instances with load balancers in AWS ECS, deployed by CI/CD pipelines. The number of frontend and backend servers varies between 2 and 8 each depending on bursts and current deployments. If one server crashes, a new one will take its place.
We use socket.io with the Redis adapter to share the websocket state between all connected instances via the in-memory db Redis. This ensures that even if the clients are connected to different instances, they all receive the events.
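For reference, a minimal sketch of that wiring with the current @socket.io/redis-adapter package (the Redis URL and port are placeholders for your own setup):
const { createServer } = require('http');
const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/redis-adapter');
const { createClient } = require('redis');
const httpServer = createServer();
const io = new Server(httpServer);
// two Redis connections: one publishes events, one subscribes to them
const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();
Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
  httpServer.listen(3000);
  // io.emit(...) on any instance now reaches clients on every instance
});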
My current setup is running multiple node instances using PM2 to manage the instances and act as a load balancer.
I would like to implement some functionality using websockets.
The first issue that came to mind is sharing the sockets among X node instances.
My understanding is that if I boot up a websocket-server within a node env only that env will have access to the sockets connected to it.
I do not want to load up web sockets for each instance for each user as that seems like a waste of resources.
Currently I am playing around with the websocket package on npm but I am in no way tied to this if there is a better alternative.
I would like the sockets to more or less push data one-way from server to client and avoid anything coming from the client to the server.
My solution so far is to spin up another node instance that solely acts as a websocket server.
This would allow a user to make all requests as normal to the usual instances but make a websocket connection to the separate node instance dedicated to sockets.
The server could then fire off messages to the dedicated socket server any time something is updated, to send data back to the appropriate clients.
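Roughly what I have in mind, as a hypothetical sketch using the ws package (the /notify endpoint and userId query parameter are made up for illustration):
const http = require('http');
const WebSocket = require('ws');
// map of userId -> socket for clients connected to THIS instance
const clients = new Map();
// internal endpoint the regular app instances POST to when data changes
const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/notify') {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      const { userId, payload } = JSON.parse(body);
      const ws = clients.get(userId);
      if (ws && ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify(payload)); // one-way push to the client
      }
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});
// WebSocket endpoint the browsers connect to, e.g. ws://host/?userId=abc
const wss = new WebSocket.Server({ server });
wss.on('connection', (ws, req) => {
  const userId = new URL(req.url, 'http://localhost').searchParams.get('userId');
  clients.set(userId, ws);
  ws.on('close', () => clients.delete(userId));
  // no message listener: traffic is server -> client only
});
server.listen(8080);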
I am not sure this is the best option and I am trying to see if there are other recommended ways of managing websockets across multiple node instances yet still allow me to spin up/down node instances as required.
I'd recommend you avoid a complex setup and just get socket.io working across multiple nodes, thus distributing the load. If you want to avoid data coming from the client to the server, simply don't listen for incoming events on your server.
Socket.io supports multiple nodes, under the following conditions:
You have sticky sessions enabled. This ensures requests connect back to the process from which they originated.
You use the socket.io-redis adapter together with a small Redis instance as a central point of storage; it keeps track of namespaces/rooms and connected sockets across your cluster of nodes.
Here's an example:
// set up io as usual
const io = require('socket.io')(3000)
// attach the Redis adapter so rooms and emits work across all nodes
const redisAdapter = require('socket.io-redis')
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }))
From then on, it's business as usual:
io.emit('hello', 'to all clients')
You can read more here: Socket.IO - Using Multiple Nodes.
I have nodejs app running on AWS EC2.
I would like to scale it up by creating more instances of it.
I don't quite understand how to do it on the networking side.
Let's say I create another instance and it's listening on a different port.
Should I change the client side to request from two different ports? I believe this could lead to race conditions on the DB.
Am I supposed to listen on one port on the EC2 machine and direct each request to one of the instances? In that case, wouldn't the port be busy until the instance is done with the request, instead of the instances processing requests in parallel?
Does anyone have some pointers, or can you point me to some documentation on this subject?
At the basic level, you'll have multiple instances of your Node.js application, connecting to a common database backend. (Your database should be clustered as well, but that's a topic for another post.)
If you're following best practices already, one request will be one response, and it won't matter if subsequent requests land on a different application server. That is, the client could hit any of them at any time and there won't be any side effects. If not, you'll need some sort of server pinning (usually done via cookie or similar methods) to ensure clients always land on the same application server.
The simplest way to load balance is to have your clients connect to a host name and have that hostname resolve, round-robin style, to several hosts. This spreads the traffic around, but not necessarily intelligently: for instance, a problem on one host could mean it can only handle 5 requests while the other servers can handle 5,000. Most cloud providers offer managed load balancing; AWS certainly does.
Since you're already on AWS, I'd recommend you deploy your application via Elastic Beanstalk. It automates the spin-up/tear-down as-configured. You could certainly roll your own solution, but it's probably just easier to use what's already there. The load balancer is configured with Beanstalk.
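On the "one port" worry specifically: within a single machine, Node's built-in cluster module lets several worker processes accept connections on the same port, so one request being handled doesn't block the others. A minimal sketch:
const cluster = require('cluster');
const http = require('http');
const os = require('os');
if (cluster.isMaster) {
  // fork one worker per CPU core; the master distributes incoming
  // connections among them (round-robin on most platforms)
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  // replace a crashed worker with a fresh one
  cluster.on('exit', () => cluster.fork());
} else {
  // every worker listens on the same port; each connection is handed
  // to exactly one worker, so requests are processed in parallel
  http.createServer((req, res) => {
    res.end('handled by worker ' + process.pid + '\n');
  }).listen(3000);
}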
I am working on a WebRTC application where a P2P connection is established between a customer and free agents. The agents are fetched using an AJAX call in the application. I want to scale the application so that agents running on any Node server can still communicate with each other, and agent status updates (available, busy, unavailable) can still be performed.
My problem statement: the application is running on port 8040 and the agents service is running on port 8088, and the application makes AJAX calls to it to fetch the data. What is the best way to scale the agents, or the application as a whole?
I followed https://github.com/rajaraodv/redispubsub using Redis pub/sub, but my problem is not resolved, as the agents are being updated and fetched on another node using AJAX calls.
You didn't give enough info... but to scale your Node.js app you need a central place that holds all the needed information and that can itself scale. Redis scales easily; you can also try socket.io, etc.
Once you have your Redis cluster, for example, you need to make all your Node.js servers communicate with the Redis server. That way, all your Node servers have access to the same info; it is then up to you to send the right info to the right clients.
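For example, a rough sketch of sharing agent status through Redis using the redis package (the key and field names are made up for illustration):
const { createClient } = require('redis');
async function main() {
  const redis = createClient({ url: 'redis://localhost:6379' });
  await redis.connect();
  // any instance can update an agent's status...
  await redis.hSet('agents', 'agent:42', 'busy');
  // ...and every other instance reads the same value
  const status = await redis.hGet('agents', 'agent:42');
  console.log(status); // 'busy'
}
main().catch(console.error);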
Message Bus approach:
An AJAX call is sent to one of the Node.js servers. If the message doesn't find its destination on that server, it is sent to the next one, and so on. The signaling server must therefore distribute each received message to all the other nodes in the cluster by establishing a message bus.
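A rough sketch of such a bus using Redis pub/sub (the channel name and message shape are assumptions for illustration):
const { createClient } = require('redis');
// clientId -> socket for clients connected to THIS signaling server
const localClients = new Map();
async function joinBus() {
  const pub = createClient({ url: 'redis://localhost:6379' });
  const sub = pub.duplicate();
  await Promise.all([pub.connect(), sub.connect()]);
  // every node hears every message; only the node that actually holds
  // the destination client's socket delivers it
  await sub.subscribe('signaling', (raw) => {
    const { to, payload } = JSON.parse(raw);
    const ws = localClients.get(to);
    if (ws) ws.send(JSON.stringify(payload));
  });
  // publishing from any node reaches all nodes, so the message always
  // finds the node that owns the destination client
  return (to, payload) => pub.publish('signaling', JSON.stringify({ to, payload }));
}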
I'm running socket.io/node.js on my EC2 auto-scaling array of servers. As soon as I have more than 1 server, there is the obvious problem where the socket.io connections need to be shared between the servers. This is where you'd normally use the redis store plugin that comes with socket.io.
Unfortunately, I use MongoDB (not redis), and the cost of adding another database to my stack would be prohibitively high. I tried using the socket.io-mongo module, but it absolutely killed my servers (huge CPU usage).
Is there any other way to share the socket.io session between EC2 servers?
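One MongoDB-only direction worth sketching (a hypothetical sketch, not the socket.io-mongo module): a MongoDB change stream can act as the cross-server broadcast channel, at the cost of requiring a replica set. The broadcasts collection and helper names here are made up:
const { MongoClient } = require('mongodb');
async function setup(io) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const events = client.db('app').collection('broadcasts');
  // every server watches the collection and re-emits inserts to its
  // own connected sockets
  events.watch().on('change', (change) => {
    if (change.operationType === 'insert') {
      const { event, data } = change.fullDocument;
      io.emit(event, data);
    }
  });
  // any server broadcasts cluster-wide by inserting a document
  return (event, data) => events.insertOne({ event, data, at: new Date() });
}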
I have an Azure hosted application (iisnode) that accepts direct connections from multiple client services. This application streams data between the various connections. If running on a system with multiple instances of node.js, the actual TCP connections will be connecting to different instances.
Is there a way to somehow "move" or "share" the in-memory connection from one instance to another?
Sure, I could build some inter-instance communication to route data, but I don't think the application will scale, since its entire purpose is to move data around quickly. For example, I would have 4 instances, 100 connections to each, and I would spend as many resources moving the data between instances as I would spend moving the data between the client connections.
When you configure iisnode to create more than one node.exe process (using the nodeProcessCountPerApplication setting), it will dispatch incoming HTTP requests between them using a round robin logic; the application has no control over that behavior. Given your scenario there is no way to deterministically ensure that the requests ("connections") from two distinct clients will be colocated in the same node.exe process.
There is no mechanism to "move" an existing TCP connection or HTTP request between node.exe processes.
In general a better way to address such a notification scenario may be to use a subscription-based messaging infrastructure as your backend. ServiceBus in Azure provides such mechanisms. In this design, each instance of node.exe would subscribe to a particular topic when it receives a connection from the client, and be notified by ServiceBus when a matching notification arrives, possibly via a different instance of node.exe.
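A rough sketch of that topic/subscription pattern with the modern @azure/service-bus SDK (the topic name, subscription name, and message shape are placeholders; an iisnode-era app would have used the older azure package):
const { ServiceBusClient } = require('@azure/service-bus');
async function run() {
  const sb = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);
  // each node.exe process receives on its own subscription to the topic
  const receiver = sb.createReceiver('notifications', 'instance-a');
  receiver.subscribe({
    processMessage: async (msg) => {
      // forward to whichever local connection this notification targets
      console.log('notify client', msg.body.clientId, msg.body.payload);
    },
    processError: async (err) => console.error(err),
  });
  // any instance publishes to the topic and Service Bus fans it out
  const sender = sb.createSender('notifications');
  await sender.sendMessages({ body: { clientId: 'abc', payload: { hello: 1 } } });
}
run().catch(console.error);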