I'm using a very simple redis pub-sub application, in which I have a redis server in AWS and a nodejs based redis client that is located inside office LAN that subscribes to some channel.
This worked great until the network changed and it seems that some device is now interfering with outgoing connections (I also started receiving socket hangups on outbound SSH connections which I mitigated with the ServerAliveInterval 60 setting in the SSH config).
After the network change, whenever the redis client application is executed, it creates a redis client, subscribes to some channel and acts upon published messages in that channel.
It works okay for several minutes, but then it stops receiving any messages.
I registered the redis client to all known connection events (including the "error" event), added a "retry_strategy" handler, and also modified the configuration to enable "socket_keepalive" and set "socket_initdelay" to 10 seconds (see code below).
Nevertheless, no event is triggered when the connection is interfered with.
When the application stops receiving the messages, I see that the connection on the redis port is still valid:
dev@server:~> sudo netstat -tlnpua | grep 6379
tcp 0 0 10.43.22.150:52052 <server_ip>:6379 ESTABLISHED 27014/node
I also captured a PCAP on port 6379 on which I don't see any resets or TCP errors, and it seems that from the connection perspective everything is valid.
I tried running another nodejs application from within the LAN in which I create a client that connects to the AWS redis server, registers to all events and only publishes messages once in a while.
After several minutes (in which the connection breaks), I try publishing another command and the error event handler is indeed triggered:
> client.publish("channel", "ANOTHER TRY")
true
> Error: Redis connection to <server_hostname>:6379 failed - read ECONNRESET
Redis connection ended
Redis reconnecting
Redis connected
Redis connection is ready
So if I try publishing via the client after the connection was interfered with, the connection event callbacks are indeed called and I can run some kind of reconnection logic.
But in the scenario in which I subscribe and wait for publishes to the channel, no connection event handler is called and the application is basically broken.
Application code:
const redis = require('redis');
const config = { "host": <hostname>, "port": 6379, "socket_keepalive": true,
"socket_initdelay": 10};
config.retry_strategy = function (options) {
console.log("retry strategy. error code: " + (options.error ?
options.error.code : "N/A"));
console.log("options.attempt", options.attempt, "options.total_retry_time",
options.total_retry_time);
return 2000;
}
const client = redis.createClient(config);
client.on('message', function (channel, message) {
    console.log("Channel", channel, ", message", message);
});

client.on("error", function (err) {
    console.log("Error " + err);
});

client.on("end", function () {
    console.log("Redis connection ended");
});

client.on("connect", function () {
    console.log("Redis connected");
});

client.on("reconnecting", function () {
    console.log("Redis reconnecting");
});

client.on("ready", function () {
    console.log("Redis connection is ready");
});
const channel = "channel";
console.log("Subscribing to channel", channel);
client.subscribe(channel);
I'm using redis@2.8.0 and node v8.11.3.
The solution for this issue is quite sad.
First, there is indeed some network device between the redis client and server, which drops inactive connections after some timeout. It seems that this timeout is really low (several minutes).
Redis has a socket_keepalive configuration which is enabled by default, and its default value is Node.js's default socket keep-alive value (which is 2 hours, if I'm not mistaken).
As can be seen above, I used a socket_initdelay configuration parameter that should have changed this default value, but unfortunately the code that uses this parameter isn't in the published redis npm package but only on the node-redis master branch.
To summarize:
There is no configuration setting to change the keep-alive timeout value in redis@2.8.0 (the latest version at the time of writing).
You can either:
Use node-redis from the master branch, which accepts the socket_initdelay setting.
Modify the timeout manually by running the following:
const client = redis.createClient();

client.on("connect", function () {
    // override Node's 2-hour default keep-alive delay
    client.stream.setKeepAlive(true, <timeout_value_in_milliseconds>);
});
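For example, here is a minimal sketch of the original subscriber with the manual workaround applied; the 10000 ms value mirrors the socket_initdelay intended in the original config, and <hostname> is a placeholder as above:

const redis = require('redis');

const client = redis.createClient({ "host": <hostname>, "port": 6379 });

client.on("connect", function () {
    // force TCP keep-alive probes every 10 seconds, so the intermediate
    // network device no longer sees the subscriber connection as idle
    client.stream.setKeepAlive(true, 10000);
});

client.on("message", function (channel, message) {
    console.log("Channel", channel, ", message", message);
});

client.on("error", function (err) {
    console.log("Error " + err);
});

client.subscribe("channel");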
Related
I have a NodeJS service hosted on Google Cloud Run that uses Socket.IO to communicate back to the browser client while the service instance is running.
However, I am noticing something weird.
The weird thing is that sometimes when the server emits a socket event to the client, the client gets the event immediately, but on other occasions the event never reaches the client. This happens so randomly that it's really hard to reproduce and pin down where the disconnection is coming from.
Below is my client code:
client_socket.js
import io from "socket.io-client";
const socketUrl = EndPoints.SOCKET_IO_BASE;
let socketOptions = { transports: ["websocket"] };
let socket;

if (!socket) {
    socket = io(socketUrl, socketOptions);

    socket.on('connect', () => {
        console.log(`Connected to Server`);
    });

    socket.on('disconnect', () => {
        console.log(`Disconnected from Server`); // This never gets called while the Cloud Run service instance is running, so I can assume a disconnect never happened.
    });
}
export default socket;
Funny enough, a disconnect event was never fired back to the client while the Cloud Run service instance was running, meaning the client was still connected to the service. So it's really weird that on some occasions it doesn't get events from the server even while being connected.
Please note that on the Google Cloud Run service side I have set the timeout of my service to 3600s, which is more than good enough to ensure the service runs long enough to keep the socket connection in place.
I can't figure out one problem I've got.
I'm using the Net module on my Node.js server to listen for client connections.
The client does connect to the server correctly, and the connection remains available to read/write data. So far, so good. But when the client unexpectedly disconnects (e.g. when the internet drops on the client side), I want to fire an event server-side.
In socket.io this would be done with the 'disconnect' event, but that event doesn't seem to exist for the Net module. How can this be done?
I've searched on Google/StackOverflow and in the Net documentation (https://nodejs.org/api/net.html) but I couldn't find anything useful. I'm sorry if I missed something.
Here is a code snippet I got:
var net = require('net');

var server = net.createServer(function (connection) {
    console.log('client connected');

    connection.wildcard = false; // Connection must be initialised with a configuration stored in the database
    connection.bidirectional = true; // When piped, this connection will be configured as bidirectional

    connection.setKeepAlive(true, 500);
    connection.setTimeout(3000);

    connection.on('close', function () {
        console.log('Socket is closed');
    });

    connection.on('error', function (err) {
        console.log('An error happened in connection' + err.stack);
    });

    connection.on('end', function () {
        console.log('Socket did disconnect');
    });

    connection.on('timeout', function () {
        console.log('Socket did timeout');
        connection.end();
    });

    connection.on('data', function (data) {
        // Handling incoming data
    });
});

server.listen(40000, function () {
    console.log('server is listening');
});
None of the events (close, end, error, timeout) fire when I disconnect the client (by pulling out the UTP cable).
Thanks in advance!
EDIT:
I did add a timeout event in the code above, but the only thing that happens is that the socket times out after 3 seconds every time the client connects again. Isn't KeepAlive enough to keep the socket from idling? How is it possible to keep the socket from idling without too much overhead? There may be more than 10,000 connections at the same time which must remain alive as long as they are connected (i.e. respond to the keepalive message).
Update:
I think KeepAlive is not related to the idle state of the socket, sort of.
Here is my test: I removed the following line from your example.
//connection.setKeepAlive(true, 500);
Then I tested the server with one client connected to it via nc localhost 40000. If no message is sent to the server within 3 seconds, the server logs the following:
Socket did timeout
Socket did disconnect
Socket is closed
The timeout event is triggered without the KeepAlive setting.
Investigating further in the Node.js source code:
function onread(nread, buffer) {
    // ...
    self._unrefTimer();
We see that the timeout is reset by the socket's onread() operation. Namely, if there is no read operation for 3 seconds, the timeout event will be emitted. To be more precise, not only onread() but also a successful write will call _unrefTimer().
In summary, as long as there are read or write operations on the socket, it is NOT idle.
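As a quick illustration, here is a hedged sketch of a client against the server from the question (assuming it listens on localhost:40000 with a 3-second timeout); because it writes every 2 seconds, the server's timeout event never fires:

var net = require('net');

// connect to the example server from the question
var client = net.connect(40000, 'localhost', function () {
    console.log('connected');
});

// writing every 2 seconds resets the server's 3-second idle timer,
// so its 'timeout' event never fires while this interval is running
setInterval(function () {
    client.write('ping\n');
}, 2000);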
Actually, the close event is used to detect whether the client connection is alive or not, as also mentioned in this SO question.
Emitted when the server closes. Note that if connections exist, this event is not emitted until all connections are ended.
However, in your case
disconnect the client (by pulling out the UTP cable).
The timeout event should be used to detect connection inactivity. It only notifies that the socket has been idle; the user must manually close the connection. Please refer to this question.
In a TCP connection, the end event fires when the client sends a 'FIN' message to the server.
If the client side does not send a 'FIN' message, that event does not fire.
For example, in your situation:
But when the client unexpectedly disconnects (e.g. when the internet drops on the client side), I want to fire an event server-side.
there may not be a 'FIN' message because the internet connection is gone.
So you should handle this situation with the timeout, without using keepAlive. If no data is coming in, you should end or destroy the socket.
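A minimal sketch of that idea, using destroy() so the socket is torn down even though no 'FIN' ever arrives (the 3-second timeout mirrors the value from the question):

var net = require('net');

var server = net.createServer(function (connection) {
    connection.setTimeout(3000);

    connection.on('timeout', function () {
        console.log('no data for 3 seconds, assuming the client is gone');
        // destroy() tears the socket down immediately; end() only sends a
        // FIN and waits for a reply that a dead peer will never send
        connection.destroy();
    });
});

server.listen(40000);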
EDIT: I did add a timeout event in the code above, but the only thing that happens is that the socket times out after 3 seconds every time the client connects again. Isn't KeepAlive enough to keep the socket from idling? How is it possible to keep the socket from idling without too much overhead? There may be more than 10,000 connections at the same time which must remain alive as long as they are connected (i.e. respond to the keepalive message).
For your edit: your devices should send the server a heartbeat message at regular intervals. That way the server knows the device is alive, and the timeout event will not fire because data keeps arriving. If there is no heartbeat message, you cannot solve this problem.
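Here is a hedged sketch of the device side of such a heartbeat, assuming the devices can be changed to send a short message every 2 seconds (comfortably inside the server's 3-second timeout); the host name is a placeholder:

var net = require('net');

// device side: send a heartbeat line every 2 seconds, so the server's
// idle timer keeps getting reset as long as the device is alive
var socket = net.connect(40000, 'server.example.com', function () {
    setInterval(function () {
        socket.write('heartbeat\n');
    }, 2000);
});

socket.on('error', function (err) {
    console.log('connection error: ' + err.message);
});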
I have a node.js script which allows a client to connect and receive some realtime data from an external script.
I have just upgraded node.js & socket.io to the current versions (from <0.9) and am trying to get to grips with what happens when a client quits, times out or disconnects from the server.
Here is my current node.js script;
var options = {
    allowUpgrades: true,
    pingTimeout: 50000,
    pingInterval: 25000,
    cookie: 'k1'
};

var io = require('socket.io')(8002, options);
var cp = require('child_process');
var tail = cp.spawn('test-scripts/k1.rb');

// On connection do the code below //
io.on('connection', function (socket) {
    console.log('************ new client connected ****************', io.engine.clientsCount);

    // Read from mongodb //
    var connection_string = '127.0.0.1:27017/k1-test';
    var mongojs = require('mongojs');
    var db = mongojs(connection_string, ['k1']);
    var k1 = db.collection('k1');

    db.k1.find({}, {'_id': 0, "data.time": 0}).forEach(function (err, doc) {
        if (err) throw err;
        if (doc) { socket.emit('k1', doc); }
    });

    // Run Ruby script & listen to STDOUT //
    tail.stdout.on('data', function (chunk) {
        var closer = chunk.toString();
        var sampArray = closer.split('\n');
        for (var i = 0; i < sampArray.length; i++) {
            try {
                var newObj = JSON.parse(sampArray[i]);
                // DO SOCKET //
                socket.emit('k1', newObj);
            } catch (err) {}
        }
    });

    socket.on('disconnect', function () {
        console.log('****************** user disconnected *******************', socket.id, io.engine.clientsCount);
        socket.disconnect();
    });
});
In the old version of socket.io, when a client exits I get the following logged in debug:
info - transport end (undefined)
debug - set close timeout for client Owb_B6I0ZEIXf6vOF_b-
debug - cleared close timeout for client Owb_B6I0ZEIXf6vOF_b-
debug - cleared heartbeat interval for client Owb_B6I0ZEIXf6vOF_b-
debug - discarding transport
then everything goes quiet and all is well.
With the new (1.3.7) version of socket.io, when a client exits I get the following logged in debug:
socket.io:client client close with reason transport close +2s
socket.io:socket closing socket - reason transport close +1ms
socket.io:client ignoring remove for -0BK2XTmK98svWTNAAAA +1ms
****************** user disconnected ******************* -0BK2XTmK98svWTNAAAA
note the line socket.io:client ignoring remove for -0BK2XTmK98svWTNAAAA
but after that, and with no other clients connected to the server, I'm still seeing it trying to write data to a client that already left (in the example below this is what I get after I've had 2 clients connected, both of which have since disconnected).
socket.io:client ignoring packet write {"type":2,"data":["k1",{"item":"switch2","datapoint":{"type":"SWITCH","state":"0"}}],"nsp":"/"} +1ms
socket.io:client ignoring packet write {"type":2,"data":["k1",{"item":"switch2","datapoint":{"type":"SWITCH","state":"0"}}],"nsp":"/"} +3ms
I'm trying to stop this apparently new behaviour so that once a client has disconnected and the server is idle, it's not still trying to send data out.
I've been playing about with socket.disconnect and delete socket["id"] but I'm still left with the same thing.
I tried io.close(), which sort of worked: it booted any clients who were actually connected and made them re-connect, but still left the server sitting there trying to send updates to the client that had left.
Am I missing something obvious, or has there been a change in the way this is done with the new version of socket.io? There is nothing in the migration doc about this. The only other result I found was this bug report from June 2014, which has been marked as closed. From my reading of it, it appears to be the same problem I'm having, but with the current version.
Update: I've done some more testing and added io.engine.clientsCount to both instances of console.log to track what it's doing. It appears that when I connect 1 client it gives me 1 (as expected), and when I close that client it changes to 0 (as expected). This leads me to believe that the client connection has been closed and engine.io knows this. So why am I still seeing all the 'ignoring packet write' lines, and more with every client who has disconnected?
Update 2: I've updated the code above to include the parser section and the DB section; this represents the full node script, as there was a thought that I may need to clean up my own clients. I have tried adding the following code to the script in the hope it would help, but alas it did not :(
In the connection event I added clients[socket.id] = socket; and in the disconnection event I added delete clients[socket.id]; but it didn't change anything (that I could see).
Update 3: Answered, thanks to @robertklep. It was an 'event handler leak' that I was actually looking for. Having found that, I also found this post.
My guess is that the newer socket.io is just showing you (by way of debug messages) a situation that was already happening in the old socket.io, where it just wasn't being logged.
I think the main issue is this setup:
var tail = cp.spawn('test-scripts/k1.rb');
io.on('connection', function(socket) {
    ...
    tail.stdout.on('data', function(chunk) { ... });
    ...
});
This adds a new handler for each incoming connection. However, these won't miraculously disappear once the socket is disconnected, so they keep on trying to push new data through the socket (whether it's disconnected or not). It's basically an event handler leak, as they aren't getting cleaned up.
To clean up the handlers, you need to keep a reference to the handler function and remove it as a listener in the disconnect event handler:
var handler = function (chunk) { ... };

tail.stdout.on('data', handler);

socket.on('disconnect', function () {
    tail.stdout.removeListener('data', handler);
});
There's also a (slight) chance that you will get ignored packet writes from your MongoDB code, if the socket is closed before the forEach() has finished, but that may be acceptable (since the amount of data is finite).
PS: eventually, you should consider moving the processing code (what handler is doing) to outside the socket code, as it's now being run for each connected socket. You can create a separate event emitter instance that will emit the processed data, and subscribe to that from each new socket connection (and unsubscribe again when they disconnect), so they only have to pass the processed data to the clients.
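A rough sketch of that refactor, reusing the io and tail variables from the question (the broadcaster name is made up for illustration):

var EventEmitter = require('events').EventEmitter;

var broadcaster = new EventEmitter();

// parse the child process output once, centrally, instead of per socket
tail.stdout.on('data', function (chunk) {
    chunk.toString().split('\n').forEach(function (line) {
        try {
            broadcaster.emit('k1', JSON.parse(line));
        } catch (err) {}
    });
});

io.on('connection', function (socket) {
    var forward = function (obj) { socket.emit('k1', obj); };
    broadcaster.on('k1', forward);

    // unsubscribe on disconnect so no handlers leak
    socket.on('disconnect', function () {
        broadcaster.removeListener('k1', forward);
    });
});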
This is most probably because your connection is established via the polling transport, which is painful to work with as a developer. The reason is that this transport uses timeouts to determine whether the client is still there.
The behavior you see occurs when the client has left but the moment to open the next polling session has not come yet, so the server still thinks the client is out there.
I have tried to "fight" this problem in many ways (like adding a custom onbeforeunload event on the client side to force a disconnect), but they simply do not work in 100% of cases when polling is used as the transport.
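For reference, the onbeforeunload workaround mentioned above looks roughly like this on the client side; as noted, it does not help in 100% of cases with the polling transport:

// force a disconnect before the page unloads, so the server does not
// have to wait for the next polling window to notice the client is gone
window.addEventListener('beforeunload', function () {
    socket.disconnect();
});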
I have a client that connects to a socket server (node.js). I seem to be leaking sockets.
Here is the flow that causes the leaked sockets.
1. Connect to the socket server.
2. Sign out (I see the logout confirmation on the server).
3. Sign in again to the socket server (I see the confirmation on the socket server).
4. Restart the socket server quickly (force a restart using the supervisor module by re-saving a file).
5. The client reconnects to the socket server. I now see 2 sockets connected on the socket server, instead of what should be just one.
If I repeat steps 2-4, I can see multiple connections from the same client.
Here is my client socket.io code:
client.js:
function start_socket(tok) {
    console.log("socket trying to connect");

    // try every second to reconnect
    socket = io.connect(sockets_host, { query: $.param({token: tok}), 'forceNew': true, 'reconnection limit': 100, 'max reconnection attempts': 'Infinity' });

    socket.on('connect', function () {
        console.log('connected to socket server');
        set_loggedin_status('true');
    });

    socket.on('disconnect', function () {
        console.log('disconnected from server');
        set_loggedin_status('false');
        close_socket(); // get leaked sockets whether I call this or not, though fewer if I do...
    });

    socket.on('error', function (err) {
        console.log('socket error: ' + err);
        attempt_login();
    });
}

function close_socket() {
    console.log("in close socket");
    socket.disconnect(true);
    set_loggedin_status('false');
}
I've tried the above without 'forceNew': true, but then I seem to have problems signing in again after the client has signed out.
If I call close_socket from within the disconnect event (and not just from elsewhere when the client chooses to sign out), I seem to get fewer leaked sockets, but I still get them.
How am I creating multiple sockets?
The solution, though not necessarily the answer to the question, was in my case to use socket.io.disconnect() instead of socket.disconnect().
However, this meant that if the socket server goes down once I've already established a connection, no reconnects are attempted. So I have to handle that situation myself when using this approach to solve the leaking sockets.
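A rough sketch of handling that manually, assuming the socket from the question's start_socket function (the 5-second retry interval is an arbitrary choice):

// after socket.io.disconnect() the built-in reconnection logic is off,
// so poll until the server is reachable again and reconnect manually
var retryTimer = setInterval(function () {
    if (socket.connected) {
        clearInterval(retryTimer);
        return;
    }
    socket.connect(); // socket.io-client 1.x: re-opens the underlying manager
}, 5000);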
I'm writing a node.js application that needs to talk to a server. It establishes an http connection with the following code:
var client = http.createClient(u.port, u.hostname, u.secure);

client.on("error", function (exception) {
    logger.error("error from client");
});

var request = client.request(method, u.path, headers);
I don't see any option in the node.js documentation for setting a timeout on the connection, and it seems to be set to 20 seconds by default. The problem I'm having is that I have users in China on what appears to be a slow or flaky network, who sometimes hit the timeout connecting to our datacenter in the US. I'd like to increase the timeout to 1 minute, to see if that fixes it for them.
Is there a way to do that in node.js?
Try
request.socket.setTimeout(60000); // 60 sec
I think you can do something like:
request.connection.setTimeout(60000)
request.connection returns the net.Stream object associated with the connection.
and net.Stream has a setTimeout method.
There is no capability in Node to increase the connect timeout, since the connect timeout (i.e. the connection-establishing timeout) is usually an OS-wide setting for all applications (e.g. 21 seconds in Windows, and from 20 to 120 seconds in Linux). See also Timeouts in the Request package.
In contrast, Node allows you to set a decreased timeout and abort connecting even when the connection has not yet been established.
Further timeouts (once the connection has been established) can be controlled according to the documentation (see request.setTimeout, socket.setTimeout).
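For instance, here is a minimal sketch using request.setTimeout for a 60-second inactivity timeout (example.com is a placeholder; note this does not extend the OS-level connect timeout):

var http = require('http');

var req = http.request({ host: 'example.com', port: 80, path: '/' }, function (res) {
    console.log('status: ' + res.statusCode);
});

// fires after 60 seconds of socket inactivity; Node does not abort the
// request by itself, so we do that in the callback
req.setTimeout(60000, function () {
    req.abort();
});

req.on('error', function (err) {
    console.log('request error: ' + err.message);
});

req.end();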
You have to wait for the client socket connection to be established first, before setting the timeout. To do this, add a callback for the 'socket' event:
req.on('socket', function (socket) {
    var myTimeout = 500; // millis
    socket.setTimeout(myTimeout);
    socket.on('timeout', function () {
        console.log("Timeout, aborting request");
        req.abort();
    });
}).on('error', function (e) {
    console.log("Got error: " + e.message);
    // error callback will receive a "socket hang up" on timeout
});
See this answer.