Socket.io takes a long time to connect - node.js

I'm writing a Node.js Socket.io WebSocket application (Socket.io version 1.3.7), and about 75% of the time the client takes a long time to connect to the server; the other 25% of the time it connects almost instantly. I've enabled debugging on both the server and the client, and it hangs in both places at the same spot:
Server Log
Client Log (Chrome)
Eventually it will connect, and I've been able to make it connect faster by reducing the timeout from the default of 20 seconds to about 5 seconds, but I'm not sure why it's hanging in the first place. Watching the Chrome network tab, it seems that when a connect attempt is made it either works immediately or it won't work for the rest of that attempt. So dropping the timeout to 5 seconds just means more attempts are made faster, one of which eventually succeeds.
Network Log (Chrome)
In this case it took 5 connection tries, about 20 seconds, to connect.
Client Code
// client.wsPath is typically http://127.0.0.1:8080/abc, where abc is the namespace to connect to.
client.socket = io.connect(client.wsPath, {timeout: 5000, transports: ["websocket"]});
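To see where the time is going, it can help to instrument the connection lifecycle on the client. A minimal sketch, assuming the event names exposed by socket.io-client 1.x (connect_error, connect_timeout, reconnect_attempt; names differ across major versions):
// Sketch: time each stage of the connect cycle (socket.io-client 1.x events).
var start = Date.now();
client.socket = io.connect(client.wsPath, {timeout: 5000, transports: ["websocket"]});
client.socket.on("connect", function () {
  console.log("connected after " + (Date.now() - start) + " ms");
});
client.socket.on("connect_error", function (err) {
  console.log("connect_error at " + (Date.now() - start) + " ms: " + err);
});
client.socket.on("connect_timeout", function () {
  console.log("connect_timeout at " + (Date.now() - start) + " ms");
});
client.socket.on("reconnect_attempt", function (attempt) {
  console.log("reconnect attempt #" + attempt);
});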
Server Code
var express = require("express");
var http = require("http");
var io = require("socket.io");
var htmlApp = express();
var htmlServer = http.Server(htmlApp);
htmlServer.listen(DISPATCH_SERVER_LISTEN_PORT, function()
{
  log.info("HTML Server is listening on port " + DISPATCH_SERVER_LISTEN_PORT);
});
var wsServer = io(htmlServer, {transports: ["websocket"]});
var nsp = wsServer.of("/" + namespace); // namespace is defined elsewhere, e.g. "abc"
nsp.on("connection", function(socket)
{
  log.info("connect");
});
We've found that clearing the browser cookies can help, but that doesn't seem like a permanent solution. Is there something that I'm doing wrong?

We are facing a similar issue with the Socket.io SDK. It seems the SDK actually waits for an acknowledgement (until it receives the message with the SID) before it starts messaging. But in typical WS/WSS communication we can start messaging immediately after connecting. We are trying to tweak the SDK so that it can start messaging immediately after the connection is established. Please share if anyone has found a better approach.
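For reference, a common way to cope without patching the SDK is to defer application messaging until the connect event fires; socket.io-client also buffers emits made before connect and flushes them once the handshake (SID exchange) completes, so an explicit queue mainly makes that behavior visible and controllable. A minimal sketch:
// Sketch: queue outgoing messages until the Socket.io handshake completes,
// then flush; reset the flag on disconnect so queuing resumes.
var pending = [];
var connected = false;
socket.on('connect', function () {
  connected = true;
  pending.forEach(function (msg) { socket.emit('message', msg); });
  pending = [];
});
socket.on('disconnect', function () { connected = false; });
function send(msg) {
  if (connected) {
    socket.emit('message', msg);
  } else {
    pending.push(msg);
  }
}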

Is anybody else still facing this issue?
It is happening on a server I deployed; these are the versions:
"socket.io-client": "^4.5.1"
"socket.io": "^4.5.1"
Sometimes it connects instantly, but at other times it takes around 1–2 minutes.

Related

Socket disconnection issue and polling

I have developed a chat application using Socket.io and Node.js. At the start it works perfectly, but I have observed that after some interval of time a polling error is thrown and then the socket disconnects.
Error screenshot : https://drive.google.com/file/d/1RWAoqHmpRSR1RkNvE1-XsFtVIvBX24bY/view?usp=sharing
This might fix the issue:
Add this line in server.js:
io.set('transports', ['websocket']);
Add this in client.js:
var socket = io('/', {transports: ['websocket'], upgrade: false});
One more thing: try using a lower socket.io version, such as 2.0.3.
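Note that io.set() is the old 0.9-style configuration API (kept as a backwards-compatibility shim in some later versions). On Socket.io 1.x and newer, the usual way to restrict transports is to pass them as server options instead; a minimal sketch, assuming a bare http server:
// Socket.io 1.x+ equivalent of io.set('transports', ...): pass the option
// at construction time instead of using the legacy 0.9 configuration API.
var http = require('http');
var server = http.createServer();
var io = require('socket.io')(server, { transports: ['websocket'] });
server.listen(3000);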

Errors going to 2 dynos on Heroku with socket.io / socket.io-redis / rediscloud / node.js

I have a node.js / socket.io app running on Heroku. I am using socket.io-redis with RedisCloud to allow users who connect to different dynos to communicate, as described here.
From my app.js:
var express = require('express'),
app = express(),
http = require('http'),
server = http.createServer(app),
io = require('socket.io').listen(server),
redis = require('redis'),
ioredis = require('socket.io-redis'),
url = require('url'),
redisURL = url.parse(process.env.REDISCLOUD_URL);
And later in app.js ...
var sub1 = redis.createClient(redisURL.port, redisURL.hostname, {
no_ready_check: true,
return_buffers: true
});
sub1.auth(redisURL.auth.split(":")[1]);
var pub1 = redis.createClient(redisURL.port, redisURL.hostname, {
no_ready_check: true,
return_buffers: true
});
pub1.auth(redisURL.auth.split(":")[1]);
var redisOptions = {
pubClient: pub1,
subClient: sub1,
host: redisURL.hostname,
port: redisURL.port
};
if (io.adapter) {
io.adapter(ioredis(redisOptions));
console.log("mylog: io.adapter found");
}
It is kind of working -- communication is succeeding between dynos.
Three issues that happen with 2 dynos but not with 1 dyno:
1) There is a login prompt which comes up and works reliably with 1 dyno but is hit-and-miss with 2 dynos -- may not come up and may not work if it does come up. It is (or should be) triggered by the io.sockets.on('connection') event.
2) I'm seeing a lot of disconnects in the server log.
3) Also lots of errors in the client console on Chrome, for example:
socket.io.js:5039 WebSocket connection to 'ws://example.mydomain.com/socket.io/?EIO=3&transport=websocket&sid=F8babuJrLI6AYdXZAAAI' failed: Error during WebSocket handshake: Unexpected response code: 503
socket.io.js:2739 POST http://example.mydomain.com/socket.io/?EIO=3&transport=polling&t=1419624845433-63&sid=dkFE9mUbvKfl_fiPAAAJ net::ERR_INCOMPLETE_CHUNKED_ENCODING
socket.io.js:2739 GET http://example.mydomain.com/socket.io/?EIO=3&transport=polling&t=1419624842679-54&sid=Og2ZhJtreOG0wnt8AAAQ 400 (Bad Request)
socket.io.js:3318 WebSocket connection to 'ws://example.mydomain.com/socket.io/?EIO=3&transport=websocket&sid=ITYEPePvxQgs0tcDAAAM' failed: WebSocket is closed before the connection is established.
Any thoughts or suggestions would be welcome.
Yes, like generalhenry said, the issue is that Socket.io requires sticky sessions (meaning that requests from a given user always go to the same dyno), and Heroku doesn't support that.
(It works with 1 dyno because when there's only 1 then all requests go to it.)
https://github.com/Automattic/engine.io/issues/261 has a lot more good info; apparently WebSockets don't really require sticky sessions, but long-polling does. It also mentions a couple of potential work-arounds:
Roll back to socket.io version 0.9.17, which tries websockets first
Only use SSL connections, which makes websockets more reliable (because ISPs, corporate proxies, and the like can't tinker with the connection as easily).
You might get the best results from combining both of those.
You could also spin up your own load balancer that adds sticky session support, but by that point, you're fighting against Heroku and might be better off on a different host.
RE: your other question about the Node.js cluster module: it wouldn't really help here. It's for using all of the available CPU cores on a single server/dyno.
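Building on the point that WebSockets (unlike long-polling) don't need sticky sessions, one possible work-around on Socket.io 1.x and newer is to skip polling entirely on the client, so no two HTTP requests need to land on the same dyno. A minimal sketch; the trade-off is that clients behind proxies that block WebSockets won't connect at all:
// Sketch: force the WebSocket transport and disable the polling upgrade path
// so the connection no longer depends on sticky sessions (Socket.io 1.x+).
var socket = io('http://example.mydomain.com', {
  transports: ['websocket'],
  upgrade: false
});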

Node.js - Azure - Socket.io - Why am I running out of concurrent sockets?

Ok, so I have an app that works just fine locally. I deployed it to Azure the other day and I am regularly getting the error:
IIS Detailed Error - 503.0 - Number of active WebSocket requests has reached the maximum concurrent WebSocket requests allowed.
I don't understand why...I have read a lot of tutorials, guides, etc about socket.io (and I have been building with it for 4 months locally with no issue).
Here is my connection code.
var clients = []; // assumed to be declared earlier in the original app
io.sockets.on('connection', function (socket) {
  var handshake = socket.handshake;
  var session = socket.handshake.session;
  clients.push(socket);
  console.log('A socket with sessionID ' + handshake.sessionID + ' connected!');
  // set up an interval that will keep our session fresh
  var intervalID = setInterval(function () {
    session.reload(function () {
      session.touch().save();
    });
  }, 60 * 1000);
  socket.on('disconnect', function () {
    console.log('A socket with sessionID ' + handshake.sessionID + ' disconnected!');
    var i = clients.indexOf(socket);
    clients.splice(i, 1);
    // clear the socket interval to stop refreshing the session
    clearInterval(intervalID);
  });
});
The console logs when people connect and disconnect...this is working just fine.
If I reset my server my code will run for a little while. I know Azure supports 350 concurrent sockets...not sure how a single user fills that up.
I come from a .NET background so I am used to closing connections when I am done with them, but that doesn't seem to be necessary with node.js sockets.
But if I don't need to explicitly close my sockets, then why are my connections piling up?
Thanks for your help,
David
UPDATE
So, based on the answer below, I discovered that Azure limits the concurrent connections pretty severely on the free plan. I upgraded to the standard package to get the full 350 connections.
Of note, I learned that if you use this command:
io.sockets.manager.server.connections
you will get a count of the current connections. This plainly showed me that even by myself I was using 7 (which is why the free plan died). Now I just need to figure out why...
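One way to watch the pile-up in action is to log the live connection count periodically; the clients array maintained in the connection handler above already tracks it. A minimal sketch:
// Sketch: log how many sockets are open every 10 seconds, using the
// clients array from the connection handler above, to spot leaks.
setInterval(function () {
  console.log('open sockets: ' + clients.length);
}, 10 * 1000);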
The blog post states:
• Free: (5) concurrent connections per website instance
• Shared: (35) concurrent connections per website instance
• Standard: (350) concurrent connections per website instance
The 350 concurrent connections limit applies only to "Standard" Windows Azure Web Sites. Are you in fact using Standard?

How to make the client disconnect immediately on network loss in socket.io?

I am developing a chat application using socket.io for an iOS app and an Android app. The issue I am facing is that when the network connection is suddenly lost on the client, it takes 25 seconds (the default heartbeat interval) for the disconnect event on the socket.io server to be called. I remove the socket.id/username from my db when the disconnect event is called. But since the socket.id/username still exists in my db during those 25 seconds, when some other user sends a message to the user who lost the network connection, it is shown as sent even though it was never delivered. The solution would probably be to reduce the heartbeat interval to maybe 1 or 2 seconds. I have tried that, but strangely it is not taking effect; it still takes approximately 25 seconds. Below is my code:
var app = require('express')()
, server = require('http').createServer(app)
, sio = require('socket.io')
, redis = require('redis');
var client = redis.createClient();
var io = sio.listen(server, { origins: '*:*' });
io.set("store", new sio.RedisStore);
// Note: in socket.io 0.9, the related "heartbeat timeout" and "close timeout"
// settings also influence how quickly a dead client is detected.
io.set("heartbeat interval", 2);
var azure = require('azure');
server.listen(4002);
But even if I get the heartbeat interval set to 2 seconds, I think there might be a downside: if the mobile apps acknowledge the server's heartbeat every 2 seconds, that would probably drain the battery when the app is left idle for a long time. So I would appreciate any solutions that would work best in my case.
Thanks
There are many variables to consider in deciding when to disconnect a client, since 25 seconds is already way too small an interval to check, especially if your application has, say, 100,000 users.
Things you could try:
Client-side control: check whether a user is typing or even touching the device to derive a user status of idle|active|zombie.
Don't remove the user from Redis immediately after the client emits disconnect. Instead, place the user into a toDisconnect queue list, or set an additional variable; if the same user connects again you can simply move the object back instead of querying again to recreate it (see the sketch after this list).
If you're still not satisfied with the result, try binary websockets; at least it will remove some overhead from the packets, and they will be delivered a bit faster since the size is smaller.
Bottom line: don't rely on Redis. You could easily remove it from the structure and run a custom websocket server, with all user management built in Node.js.
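A minimal sketch of the toDisconnect idea from the list above: instead of deleting the user on disconnect, start a short grace timer and only mark them offline if they haven't reconnected by the time it fires. markOnline/markOffline are hypothetical app-level functions standing in for the db writes:
// Sketch: grace-period disconnect queue (socket.io 0.9-era API, matching the
// question). markOnline/markOffline are hypothetical db helpers.
var GRACE_MS = 5000;
var toDisconnect = {}; // username -> pending removal timer
io.sockets.on('connection', function (socket) {
  socket.on('login', function (username) {
    socket.username = username;
    if (toDisconnect[username]) {
      // user came back before the grace timer fired: cancel the removal
      clearTimeout(toDisconnect[username]);
      delete toDisconnect[username];
    } else {
      markOnline(username, socket.id);
    }
  });
  socket.on('disconnect', function () {
    if (!socket.username) return;
    toDisconnect[socket.username] = setTimeout(function () {
      delete toDisconnect[socket.username];
      markOffline(socket.username); // only now remove the user from the db
    }, GRACE_MS);
  });
});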

I'm receiving duplicate messages in my clustered node.js/socket.io/redis pub/sub application

I'm using Node.js, Socket.io with RedisStore, Cluster from the Socket.io guys, and Redis.
I have a pub/sub application that works well on just one Node.js node. But when it comes under heavy load it maxes out just one core of the server, since Node.js isn't written for multi-core machines.
As you can see below, I'm now using the Cluster module from Learnboost, the same people who make Socket.io.
But when I fire up 4 worker processes, each browser client that comes in and subscribes gets 4 copies of each message that is published to Redis. If there are three worker processes, there are three copies.
I'm guessing I need to move the redis pub/sub functionality to the cluster.js file somehow.
Cluster.js
var cluster = require('./node_modules/cluster');
cluster('./app')
.set('workers', 4)
.use(cluster.logger('logs'))
.use(cluster.stats())
.use(cluster.pidfiles('pids'))
.use(cluster.cli())
.use(cluster.repl(8888))
.listen(8000);
App.js
var redis = require('redis'),
sys = require('sys');
var rc = redis.createClient();
var path = require('path')
, connect = require('connect')
, app = connect.createServer(connect.static(path.join(__dirname, '../')));
// require the new redis store
var sio = require('socket.io')
, RedisStore = sio.RedisStore
, io = sio.listen(app);
io.set('store', new RedisStore);
io.sockets.on('connection', function(socket) {
sys.log('ShowControl -- Socket connected: ' + socket.id);
socket.on('channel', function(ch) {
socket.join(ch)
sys.log('ShowControl -- ' + socket.id + ' joined channel: ' + ch);
});
socket.on('disconnect', function() {
console.log('ShowControll -- Socket disconnected: ' + socket.id);
});
});
rc.psubscribe('showcontrol_*');
rc.on('pmessage', function(pat, ch, msg) {
io.sockets.in(ch).emit('show_event', msg);
sys.log('ShowControl -- Publish sent to channel: ' + ch);
});
// cluster compatibility
if (!module.parent) {
app.listen(process.argv[2] || 8081);
console.log('Listening on ', app.address());
} else {
module.exports = app;
}
client.html
<script src="http://localhost:8000/socket.io/socket.io.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js"></script>
<script>
var socket = io.connect('localhost:8000');
socket.emit('channel', 'showcontrol_106');
socket.on('show_event', function (msg) {
console.log(msg);
$("body").append('<br/>' + msg);
});
</script>
I've been battling with cluster and socket.io. Every time I use a cluster function (I use the built-in Node.js cluster, though) I get a lot of performance problems and issues with socket.io.
While trying to research this, I've been digging around the bug reports and similar on the socket.io git, and anyone using clusters or external load balancers in front of their servers seems to have problems with socket.io.
It seems to produce the error "client not handshaken client should reconnect", which you will see if you increase the verbose logging. This appears a lot whenever socket.io runs in a cluster, so I think it comes back to this: the client gets connected to a randomized instance in the socket.io cluster every time it makes a new connection (it makes several http/socket/flash connections when authorizing, and more all the time later when polling for new data).
For now I've reverted to using only 1 socket.io process at a time. This might be a bug, but it could also be a shortcoming of how socket.io is built.
Added: my way of solving this in the future will be to assign a unique port to each socket.io instance inside the cluster and then cache the port selection on the client side (see the sketch below).
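A rough sketch of that unique-port idea, using the built-in Node.js cluster module: each worker runs its own Socket.io server on a dedicated port, and the client picks one port and caches it so all of its connections hit the same process. Port numbers and worker count are illustrative:
// Sketch: one Socket.io instance per worker, each on its own port, so a given
// client always talks to the same process. Ports/counts are illustrative.
var cluster = require('cluster');
var BASE_PORT = 8001;
var WORKERS = 4;
if (cluster.isMaster) {
  for (var i = 0; i < WORKERS; i++) {
    cluster.fork({ SOCKET_PORT: BASE_PORT + i });
  }
} else {
  var port = parseInt(process.env.SOCKET_PORT, 10);
  var server = require('http').createServer();
  var io = require('socket.io').listen(server);
  io.sockets.on('connection', function (socket) {
    console.log('worker on port ' + port + ' got ' + socket.id);
  });
  server.listen(port);
}
// Client side: pick a port once (e.g. hash a user id into the range), cache
// it, then always connect with io.connect('http://host:' + cachedPort).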
Turns out this isn't a problem with Node.js/Socket.io; I was just going about it in completely the wrong way.
Not only was I publishing into the Redis server from outside the Node/Socket stack, I was also still directly subscribed to the Redis channel. On both ends of the pub/sub situation I was bypassing the "Socket.io cluster with Redis Store on the back end" goodness.
So I created a little app (with Node.js/Socket.io/Express) that took messages from my Rails app and 'announced' them into a Socket.io room using the socket.io-announce module. Now, thanks to Socket.io's routing magic, each node worker only gets and sends messages to the browsers connected to it directly. In other words, no more duplicate messages, since both the pub and the sub happen within the Node.js/Socket.io stack.
After I get my code cleaned up I'll put an example up on GitHub somewhere.
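Until that example is up, here is a hedged sketch of the shape of the fix: a small bridge app that accepts messages from outside the stack (e.g. an HTTP POST from Rails) and emits them through Socket.io exactly once, letting the RedisStore handle the cross-worker fan-out. The /announce route and body fields are made up for illustration; the socket.io-announce module mentioned above packages the same idea:
// Sketch: bridge external publishers into the Socket.io stack so each message
// is emitted once and the RedisStore fans it out across workers.
// The /announce route and payload fields are hypothetical (Express 3 era).
var express = require('express');
var app = express();
app.use(express.bodyParser());
var server = require('http').createServer(app);
var sio = require('socket.io');
var io = sio.listen(server);
io.set('store', new sio.RedisStore);
// Rails POSTs JSON like { "room": "showcontrol_106", "msg": "..." }
app.post('/announce', function (req, res) {
  io.sockets.in(req.body.room).emit('show_event', req.body.msg);
  res.send(200);
});
server.listen(8090);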
