Socket.io-redis configuration not working - node.js

I'm trying to set up socket.io-redis for a kubernetes deployment, but something is wrong with the configuration. It is partially working, in that I can see socket.io messages in redis using redis-cli PSUBSCRIBE, but I don't have access to any of socket.io-redis' functions.
io.of('/').adapter.sockets(), io.of('/').adapter.allRooms(), and every other function that should be available after a successful socket.io-redis configuration are undefined. My configuration is below.
const app = require('express')();
const server = require('http').createServer(app);
const io = require('socket.io')(server, {transports: ['websocket']});
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({port: 6379, host: '127.0.0.1'}));
I can't find any other reports of trouble with what should be a simple configuration. I am using socket.io 2.3.0 and socket.io-redis 5.4.0, which should be compatible according to the docs.

When deploying a Socket.IO application on a Kubernetes cluster, which means multiple Socket.IO servers (Pods), there are two things to take care of:
Enabling the sticky session feature:
when a request comes from a Socket.IO client (browser) to your app, it gets associated with a particular session id, and all requests carrying that id must keep going to the same process (Pod in Kubernetes) that issued it.
Using the Redis adapter
You can learn more about this from this Medium story (source code available).
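For reference, here is a minimal sketch of how the cross-node helpers are exposed once the adapter is attached, assuming socket.io 2.x with socket.io-redis 5.x, where these helpers are callback-based rather than promise-based (method names as documented in the socket.io-redis README):
const app = require('express')();
const server = require('http').createServer(app);
const io = require('socket.io')(server, {transports: ['websocket']});
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({host: '127.0.0.1', port: 6379}));

// list every room known across all nodes
io.of('/').adapter.allRooms((err, rooms) => {
  if (err) return console.error(err);
  console.log('rooms on every node:', rooms);
});

// list every connected socket id across all nodes
io.of('/').adapter.clients((err, clients) => {
  if (err) return console.error(err);
  console.log('client ids on every node:', clients);
});
If the helpers are still undefined after the adapter call, it is worth logging io.of('/').adapter.constructor.name to confirm the namespace is actually using the Redis adapter.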

Related

WebSockets handshake not working with NestJS when bootstrapping with express (multiple servers)

Our NestJS server used to handle WebSocket handshakes perfectly over HTTP. But then we added HTTPS support, and we did it using the multiple servers technique as described here: https://docs.nestjs.com/faq/multiple-servers
After doing so, WebSocket handshakes (upgrade requests) have stopped working.
The NestJS bootstrapping that had WebSockets working looked like this:
const app = await NestFactory.create(AppModule);
const webSocketAdapter = new WebSocketAdapter(app);
app.useWebSocketAdapter(webSocketAdapter);
[...] // We initialize express sessions and passport here.
await app.init();
await app.listen(80);
To support both HTTP and HTTPS at the same time, we followed instructions as indicated on https://docs.nestjs.com/faq/multiple-servers and changed the bootstrapping as follows:
const server = express();
const app = await NestFactory.create(AppModule, new ExpressAdapter(server));
const webSocketAdapter = new WebSocketAdapter(app);
app.useWebSocketAdapter(webSocketAdapter);
[...] // We initialize express sessions and passport here.
await app.init();
http.createServer(server).listen(80);
const httpsConfiguration = [...] // Nothing meaningful here. We load PEM files and put them in there.
https.createServer(httpsConfiguration, server).listen(443);
After that change, our WebSockets stopped working. I verified whether the verifyClient() function in our WebSocketAdapter is getting called. It is not any longer. So it seems NestJS no longer processes the request as a WebSocket upgrade request.
In the browser, the console displays the following error message:
myScript.js:123 WebSocket connection to 'ws://localhost/api/v1/mysubpath' failed: Error during WebSocket handshake: Unexpected response code: 200
I have been trying to figure out what is going wrong by tracing through the NestJS code, but it's not a trivial task at all.
Does anyone have an idea as to why WebSockets are no longer working?
EDIT:
After investigating further, it seems the WsAdapter can't properly initialize the WebSocket server if we supply it the Nest application when that application was created with an Express adapter. So instead we provide the HTTP server instance to the WebSocket adapter.
However, we are then unable to serve WebSockets over HTTP and HTTPS at the same time: we can supply only one server to the WebSocket adapter, and only one WebSocket adapter can be supplied to the Nest application. So we could not figure out a way to get the adapter or the Nest application to support WebSockets for both HTTP and HTTPS simultaneously.
Our code now looks like this:
const server = express();
const app = await NestFactory.create(AppModule, new ExpressAdapter(server));
const httpServer = http.createServer(server);
const httpsConfiguration = [...]
const httpsServer = https.createServer(httpsConfiguration, server)
const webSocketAdapter = new WebSocketAdapter(httpServer); // Here we can only supply one server.
app.useWebSocketAdapter(webSocketAdapter); // Here we can only supply one adapter.
[...] // We initialize express sessions and passport here.
await app.init();
httpServer.listen(80);
httpsServer.listen(443);
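One avenue that might be worth exploring, sketched here purely at the level of the underlying ws library rather than as a documented NestJS feature: ws lets a single WebSocket.Server run in noServer mode and accept upgrade requests routed to it from any number of listeners, so both the HTTP and the HTTPS server could in principle feed the same WebSocket server.
const http = require('http');
const https = require('https');
const express = require('express');
const WebSocket = require('ws'); // assumed engine; adapt if your adapter wraps something else

const server = express();
const httpServer = http.createServer(server);
const httpsConfiguration = { /* key and cert PEMs, as in the question */ };
const httpsServer = https.createServer(httpsConfiguration, server);

// one WebSocket server, detached from any particular listener
const wss = new WebSocket.Server({ noServer: true });

// route upgrade requests from BOTH listeners to the same WebSocket server
// (path filtering omitted for brevity)
for (const listener of [httpServer, httpsServer]) {
  listener.on('upgrade', (request, socket, head) => {
    wss.handleUpgrade(request, socket, head, (ws) => {
      wss.emit('connection', ws, request);
    });
  });
}

httpServer.listen(80);
httpsServer.listen(443);
A custom Nest WebSocket adapter could wrap this pattern, but how cleanly it fits your WebSocketAdapter depends on that adapter's implementation.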

How to check whether a socket is alive (connected) in socket.io with multiple nodes and socket.io-redis

I am using socket.io with multiple nodes, socket.io-redis and nginx. I followed this guide: http://socket.io/docs/using-multiple-nodes/
What I am trying to do: in a server-side function, I want to query by socket id whether that socket is connected or disconnected.
I tried io.of('namespace').connected[socketid], but it only works for the current process (it can only check the current process).
Can anyone help me? Thanks in advance.
How can I check whether a socket is alive (connected) given its socket id? I tried
namespace.connected[socketid], but it only works for the current process.
As you said, separate processes mean that sockets are only registered on the process they first connected to. You need to use socket.io-redis to connect all your nodes together, and what you can do is broadcast an event each time a client connects or disconnects, so that each node keeps an up-to-date, real-time list of all the clients.
Check out here
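A rough sketch of that broadcast idea, using a plain Redis pub/sub channel alongside the socket.io-redis adapter so every node keeps its own in-memory list; the channel name 'presence' and the connection details are made up for illustration, and nodes started later will not know about sockets that connected before they subscribed:
// node_redis 2.x/3.x style API; a client in subscribe mode cannot publish,
// hence the two separate connections
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

const redis = require('redis');
const pub = redis.createClient(6379, 'localhost');
const sub = redis.createClient(6379, 'localhost');

const connectedIds = new Set(); // every node keeps its own copy

sub.subscribe('presence');
sub.on('message', (channel, message) => {
  const { event, id } = JSON.parse(message);
  if (event === 'connect') connectedIds.add(id);
  else connectedIds.delete(id);
});

io.on('connection', (socket) => {
  pub.publish('presence', JSON.stringify({ event: 'connect', id: socket.id }));
  socket.on('disconnect', () => {
    pub.publish('presence', JSON.stringify({ event: 'disconnect', id: socket.id }));
  });
});

// later, on any node:
// connectedIds.has(someSocketId) -> true if that socket is connected somewhere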
As mentioned above, you should use socket.io-redis to get it working across multiple nodes.
var io = require('socket.io')(3000);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
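If your socket.io-redis version is recent enough (the 5.x line documents a clientRooms helper), you can also ask the adapter directly whether a given socket id exists on any node. A sketch under that assumption, reusing the io instance from the snippet above:
// Ask every node which rooms this socket id has joined; an error (typically a
// timeout) most likely means the id is not connected anywhere in the cluster.
function isSocketAlive(socketId, callback) {
  io.of('/').adapter.clientRooms(socketId, function (err, rooms) {
    if (err) return callback(null, false); // unknown id -> treat as disconnected
    callback(null, true);                  // found on some node
  });
}

// usage (the id below is a placeholder)
isSocketAlive('AAAAB1234', function (err, alive) {
  console.log('socket alive?', alive);
});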
I had the same problem and found no ready-made solution, so I logged the client object to see the different methods and variables I could use. There is the client.conn.readyState property for the state of the connection ("open"/"closed") and the client.onclose() function to capture the closing of the connection.
const server = require('http').createServer(app);
const io = require('socket.io')(server);
let clients = [];
io.on('connection', (client) => {
  clients.push(client);
  console.log(client.conn.readyState);
  client.onclose = () => {
    // do something
    console.log(client.conn.readyState);
    clients.splice(clients.indexOf(client), 1);
  };
});
When deploying a Socket.IO application on a multi-node cluster, which means multiple Socket.IO servers, there are two things to take care of:
Using the Redis adapter, and enabling the sticky session feature: when a request comes from a Socket.IO client (browser) to your app, it gets associated with a particular session id, and all requests carrying that id must keep going to the same process (Pod in Kubernetes) that issued it.
You can learn more about this from this Medium story (source code available): https://saphidev.medium.com/socketio-redis...

Errors going to 2 dynos on Heroku with socket.io / socket.io-redis / rediscloud / node.js

I have a node.js / socket.io app running on Heroku. I am using socket.io-redis with RedisCloud to allow users who connect to different dynos to communicate, as described here.
From my app.js:
var express = require('express'),
    app = express(),
    http = require('http'),
    server = http.createServer(app),
    io = require('socket.io').listen(server),
    redis = require('redis'),
    ioredis = require('socket.io-redis'),
    url = require('url'),
    redisURL = url.parse(process.env.REDISCLOUD_URL),
And later in app.js ...
var sub1 = redis.createClient(redisURL.port, redisURL.hostname, {
  no_ready_check: true,
  return_buffers: true
});
sub1.auth(redisURL.auth.split(":")[1]);
var pub1 = redis.createClient(redisURL.port, redisURL.hostname, {
  no_ready_check: true,
  return_buffers: true
});
pub1.auth(redisURL.auth.split(":")[1]);
var redisOptions = {
  pubClient: pub1,
  subClient: sub1,
  host: redisURL.hostname,
  port: redisURL.port
};
if (io.adapter) {
  io.adapter(ioredis(redisOptions));
  console.log("mylog: io.adapter found");
}
It is kind of working -- communication is succeeding between dynos.
Three issues that happen with 2 dynos but not with 1 dyno:
1) There is a login prompt which comes up and works reliably with 1 dyno but is hit-and-miss with 2 dynos: it may not come up, and may not work if it does come up. It is (or should be) triggered by the io.sockets.on('connection') event.
2) I'm seeing a lot of disconnects in the server log.
3) Also lots of errors in the client console on Chrome, for example:
socket.io.js:5039 WebSocket connection to 'ws://example.mydomain.com/socket.io/?EIO=3&transport=websocket&sid=F8babuJrLI6AYdXZAAAI' failed: Error during WebSocket handshake: Unexpected response code: 503
socket.io.js:2739 POST http://example.mydomain.com/socket.io/?EIO=3&transport=polling&t=1419624845433-63&sid=dkFE9mUbvKfl_fiPAAAJ net::ERR_INCOMPLETE_CHUNKED_ENCODING
socket.io.js:2739 GET http://example.mydomain.com/socket.io/?EIO=3&transport=polling&t=1419624842679-54&sid=Og2ZhJtreOG0wnt8AAAQ 400 (Bad Request)
socket.io.js:3318 WebSocket connection to 'ws://example.mydomain.com/socket.io/?EIO=3&transport=websocket&sid=ITYEPePvxQgs0tcDAAAM' failed: WebSocket is closed before the connection is established.
Any thoughts or suggestions would be welcome.
Yes, like generalhenry said, the issue is that Socket.io requires sticky sessions (meaning that requests from a given user always go to the same dyno), and Heroku doesn't support that.
(It works with 1 dyno because when there's only 1 then all requests go to it.)
https://github.com/Automattic/engine.io/issues/261 has a lot more good info; apparently WebSockets don't really require sticky sessions, but long-polling does. It also mentions a couple of potential workarounds:
Roll back to socket.io version 0.9.17, which tries websockets first
Only use SSL connections, which makes WebSockets more reliable (because ISPs and corporate proxies and whatnot can't tinker with the connection as easily)
You might get the best results from combining both of those.
You could also spin up your own load balancer that adds sticky session support, but by that point, you're fighting against Heroku and might be better off on a different host.
RE: your other question about the Node.js cluster module: it wouldn't really help here. It's for using up all of the available CPU cores on a single server/dyno.

Socket.io and multiple dynos on a Heroku Node.js app: WebSocket is closed before the connection is established

I'm building an app deployed to Heroku which uses WebSockets.
The WebSocket connection works properly when I use only 1 dyno, but when I scale to more than 1, I get the following errors:
POST http://****.herokuapp.com/socket.io/?EIO=2&transport=polling&t=1412600135378-1&sid=zQzJJ8oPo5p3yiwIAAAC 400 (Bad Request) socket.io-1.0.4.js:2
WebSocket connection to 'ws://****.herokuapp.com/socket.io/?EIO=2&transport=websocket&sid=zQzJJ8oPo5p3yiwIAAAC' failed: WebSocket is closed before the connection is established. socket.io-1.0.4.js:2
I am using the Redis adapter to enable multiple web processes:
var io = socket.listen(server);
var redisAdapter = require('socket.io-redis');
var redis = require('redis');
var pub = redis.createClient(18049, '[URI]', {auth_pass:"[PASS]"});
var sub = redis.createClient(18049, '[URI]', {detect_buffers: true, auth_pass:"[PASS]"} );
io.adapter( redisAdapter({pubClient: pub, subClient: sub}) );
This works on localhost (which I run with foreman, as Heroku does, launching 2 web processes, same as on Heroku).
Before I implemented the Redis adapter I got a WebSocket handshake error, so the adapter has had some effect. It also works occasionally now, I assume when the sockets land on the same web dyno.
I have also tried to enable sticky sessions, but then it never works.
var sticky = require('sticky-session');
sticky(1, server).listen(port, function (err) {
  if (err) {
    console.error(err);
    return process.exit(1);
  }
  console.log('Worker listening on %s', port);
});
I'm the Node.js Platform Owner at Heroku.
WebSockets work on Heroku out of the box across multiple dynos; socket.io (and other realtime libs) fall back to stateless transports like XHR polling, which break without session affinity.
To scale up socket.io apps, first follow all the instructions from socket.io:
http://socket.io/docs/using-multiple-nodes/
Then, enable session affinity on your app (this is a free feature):
https://devcenter.heroku.com/articles/session-affinity
I spent a while trying to make socket.io work in a multi-server architecture, first on Heroku and then on OpenShift, as many suggest.
The only way to make it work on both PaaSes is to disable xhr-polling and set transports: ['websocket'] on both client and server.
On OpenShift, you must explicitly set the port of the server to 8000 for ws (8443 for wss) in the socket.io client initialization, using the *.rhcloud.com server, as explained in this post: http://tamas.io/deploying-a-node-jssocket-io-app-to-openshift/.
The polling strategy doesn't work on Heroku because it does not support sticky sessions (https://github.com/Automattic/engine.io/issues/261), and on OpenShift it fails because of this issue: https://github.com/Automattic/engine.io/issues/279, which will hopefully be fixed soon.
So the only solution I have found so far is to disable polling and use the websocket transport only.
To do that with socket.io > 1.0:
server-side:
var app = express();
var server = require('http').createServer(app);
var socketio = require('socket.io')(server, {
  path: '/socket.io-client'
});
socketio.set('transports', ['websocket']);
client-side:
var ioSocket = io('<your-openshift-app>.rhcloud.com:8000' || '<your-heroku-app>.herokuapp.com', {
  path: '/socket.io-client',
  transports: ['websocket']
});
Hope this will help.
It could be you need to be running RedisStore:
var session = require('express-session');
var RedisStore = require('connect-redis')(session);
app.use(session({
  store: new RedisStore(options),
  secret: 'keyboard cat'
}));
per earlier q here: Multiple dynos on Heroku + socket.io broadcasts
I know this isn't a normal answer, but I've tried to get WebSockets working on Heroku for more than a week. After many long conversations with customer support I finally tried out OpenShift. Heroku WebSockets are in beta, but OpenShift WebSockets are stable. I got my code working on OpenShift in under an hour.
http://www.openshift.com
I am not affiliated with OpenShift in any way. I'm just a satisfied (non-paying) customer.
I was having huge problems with this. There were a number of issues failing simultaneously, making it a huge nightmare. Make sure you do the following to scale socket.io on Heroku:
If you're using clusters, make sure you implement socketio-sticky-session or something similar.
The client's connect URL should not be https://example.com/socket.io/?EIO=3&transport=polling but rather https://example.com/. Notably, I'm using HTTPS because Heroku supports it.
Enable CORS in socket.io.
Specify websocket-only connections.
For you and others it could be any one of these.
If you're having trouble setting up sticky-session clusters, here's my working code:
var http = require('http');
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
var sticky = require('socketio-sticky-session');
var redis = require('socket.io-redis');
var io;

if (cluster.isMaster) {
  console.log('Inside Master');
  // create the worker processes
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // The worker code to be run is written inside
  // the sticky().
}

sticky(function () {
  // This code runs inside the workers.
  // sticky-session balances connections between workers based on the client IP,
  // so all requests from the same client go to the same worker.
  // If multiple browser windows are opened on the same client, all are
  // redirected to the same worker.
  io = require('socket.io')({ transports: ['websocket'], origins: '*:*' });
  var server = http.createServer(function (req, res) {
    res.end('socket.io');
  });
  io.listen(server);

  // The Redis server can also be used to store the socket state
  //io.adapter(redis({ host: 'localhost', port: 6379 }));

  console.log('Worker: ' + cluster.worker.id);
  // when multiple workers are spawned, the client
  // cannot connect to the cloudlet.
  StartConnect(); // this function connects my mongodb, then calls a function with
                  // io.on('connection', ... socket.on('message', ... on the io variable above
  return server;
}).listen(process.env.PORT || 4567, function () {
  console.log('Socket.io server is up');
});
More information:
Personally, it would work flawlessly from a session not using WebSockets (I'm using socket.io for a Unity game; it worked flawlessly from the editor only!). When connecting through the browser, whether Chrome or Firefox, it would show these handshaking errors, along with 503 and 400 errors.

Load Balance: Node.js - Socket.io - Redis

I have 3 servers running Node.js, and they are linked to each other with Redis (1 master, 2 slaves).
The issue I'm having is that running the system on a single server works fine, but when I scale it to 3 Node.js servers, it starts missing messages and the system gets unstable.
My load balancer does not accept sticky sessions, so every time requests from a client arrive, they can go to a different server.
I'm pointing all the Node.js servers to the Redis master.
It looks like socket.io is storing information on each server and it is not being distributed via Redis.
I'm using socket.io 0.9, and I suspect the problem is that I don't have any handshake code; could this be the reason?
My code to configure socket.io is:
var express = require('express');
var io = require('socket.io');
var redis = require('socket.io/node_modules/redis');
var RedisStore = require('socket.io/lib/stores/redis');
var pub = redis.createClient("a port", "an ip");
var sub = redis.createClient("a port", "an ip");
var client = redis.createClient("a port", "an ip");
var events = require('./modules/eventHandler');

exports.createServer = function createServer() {
  var app = express();
  var server = app.listen(80);
  var socketIO = io.listen(server);

  socketIO.configure(function () {
    socketIO.set('store', new RedisStore({
      redisPub: pub,
      redisSub: sub,
      redisClient: client
    }));
    socketIO.set('resource', '/chat/socket.io');
    socketIO.set('log level', 0);
    socketIO.set('transports', ['htmlfile', 'xhr-polling', 'jsonp-polling']);
  });

  // attach event handlers
  events.attachHandlers(socketIO);
  // return server instance
  return server;
};
Redis only syncs from the master to the slaves. It never syncs from the slaves to the master. So, if you're writing to all 3 of your machines, then the only messages that will wind up synced across all three servers will be the ones hitting the master. This is why it looks like you're missing messages.
More info here.
Read-only slave
Since Redis 2.6 slaves support a read-only mode that is enabled by default. This behavior is controlled by the slave-read-only option in the redis.conf file, and can be enabled and disabled at runtime using CONFIG SET.
Read-only slaves will reject all the write commands, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is conceived to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like DEBUG or CONFIG are still enabled. However, the security of read-only instances can be improved by disabling commands in redis.conf using the rename-command directive.
You may wonder why it is possible to revert the default and have slave instances that can be the target of write operations. The reason is that, while these writes will be discarded if the slave and the master resynchronize or if the slave is restarted, there is often ephemeral, unimportant data that can be stored on slaves. For instance, clients may store information about the reachability of the master in the slave instance to coordinate a failover strategy.
I arrived at this post:
It can be a good idea to have a "proxy" between the Node.js servers and the load balancer.
With this approach, XHR-polling can be used behind load balancers without sticky sessions.
Load balancing with node.js using http-proxy
Using node-http-proxy I can have a custom routing rule, for example by adding a parameter to the "connect URL" of socket.io.
Has anyone tried this solution before?
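For illustration, a minimal sketch of that idea with node-http-proxy might look like this; the query parameter name "server" and the backend addresses are invented for the example:
var http = require('http');
var url = require('url');
var httpProxy = require('http-proxy');

// hypothetical socket.io backends sitting behind this proxy
var targets = {
  node1: 'http://10.0.0.1:3000',
  node2: 'http://10.0.0.2:3000'
};

var proxy = httpProxy.createProxyServer({ ws: true });

// pick a backend from a parameter the client appends to its connect URL,
// e.g. io('http://proxy-host', { query: 'server=node1' })
function pickTarget(req) {
  var name = url.parse(req.url, true).query.server;
  return targets[name] || targets.node1;
}

var server = http.createServer(function (req, res) {
  proxy.web(req, res, { target: pickTarget(req) });
});

// websocket upgrades have to be proxied explicitly
server.on('upgrade', function (req, socket, head) {
  proxy.ws(req, socket, head, { target: pickTarget(req) });
});

server.listen(80);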
