I wrote a TCP client and server in node.js. I can't figure out how to make the client reconnect when it disconnects for any reason whatsoever. I understand I have to handle the 'close' event, but that event gets fired only when the connection closes gracefully. I don't know how to handle network errors when the connection just ends because of network issues.
Here's the code I have so far:
function connect () {
    client = new net.Socket();
    client.connect(config.port, config.host, function () {
        console.log('connected to server');
    });
    client.on('close', function () {
        console.log('dis-connected from server');
        connect();
    });
    client.on('error', ...); // what do I do with this?
    client.on('timeout', ...); // what is this event? I don't understand
}
Can anyone explain what to do in the error and timeout cases? And how do I know that the network has disconnected so that I can reconnect?
You can probably just leave the error event handler as a no-op/empty function, unless you also want to log the error or do something else useful. The close event for TCP sockets is always emitted, even after an error, so that is where the reconnect logic belongs and why the error itself can be ignored.
Also, close event handlers are passed a boolean had_error argument that indicates whether the socket closed due to an error rather than a normal connection teardown.
The timeout event is for detecting a loss of traffic between the two endpoints (the timeout duration is set with socket.setTimeout()).
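Putting the three events together, here is a minimal reconnect sketch built on the question's connect() (the one-second retry delay and 60-second idle timeout are arbitrary choices):

const net = require('net');

let client;

function connect() {
    client = new net.Socket();
    client.connect(config.port, config.host, function () {
        console.log('connected to server');
    });

    // No traffic in either direction for 60s fires 'timeout'; the socket
    // is NOT closed automatically, so destroy it ourselves.
    client.setTimeout(60000);
    client.on('timeout', function () {
        console.log('socket idle, destroying');
        client.destroy(); // leads to 'close'
    });

    // Network errors (ECONNREFUSED, ECONNRESET, ...) land here; logging is
    // enough, because 'close' always follows an error.
    client.on('error', function (err) {
        console.log('socket error: ' + err.message);
    });

    // 'close' fires after every teardown, graceful or not, so the
    // reconnect logic lives here; had_error says which kind it was.
    client.on('close', function (had_error) {
        console.log('disconnected' + (had_error ? ' due to error' : ''));
        setTimeout(connect, 1000); // avoid a tight reconnect loop
    });
}

connect();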
I'm using a very simple redis pub-sub application, in which I have a redis server in AWS and a nodejs based redis client that is located inside office LAN that subscribes to some channel.
This worked great until the network changed and it seems that some device is now interfering with outgoing connections (I also started receiving socket hangups on outbound SSH connections which I mitigated with the ServerAliveInterval 60 setting in the SSH config).
After the network change, whenever the redis client application is executed, it creates a redis client, subscribes to some channel and acts upon published messages in that channel.
It works okay for several minutes, but then it stops receiving any messages.
I registered the redis client to all known connection events (including the "error" event), I added a "retry_strategy" handler, and I modified the configuration to enable "socket_keepalive" and set "socket_initdelay" to 10 seconds (see code below).
Nevertheless, no event is triggered when the connection is interfered with.
When the application stops receiving the messages, I see that the connection on the redis port is still valid:
dev@server:~> sudo netstat -tlnpua | grep 6379
tcp        0      0 10.43.22.150:52052      <server_ip>:6379        ESTABLISHED 27014/node
I also captured a PCAP on port 6379 on which I don't see any resets or TCP errors, and it seems that from the connection perspective everything is valid.
I tried running another nodejs application from within the LAN in which I create a client that connects to the AWS redis server, registers to all events and only publishes messages once in a while.
After several minutes (in which the connection breaks), I try publishing another command and the error event handler is indeed triggered:
> client.publish("channel", "ANOTHER TRY")
true
> Error: Redis connection to <server_hostname>:6379 failed - read ECONNRESET
Redis connection ended
Redis reconnecting
Redis connected
Redis connection is ready
So if I try publishing via the client after the connection was interfered, the connection event callbacks are indeed called and I can run some kind of reconnection logic.
But in the scenario in which I subscribe and wait for publishes to the channel, no connection event handler is called and the application is basically broken.
Application code:
const redis = require('redis');

const config = {
    "host": <hostname>,
    "port": 6379,
    "socket_keepalive": true,
    "socket_initdelay": 10
};

config.retry_strategy = function (options) {
    console.log("retry strategy. error code: " + (options.error ? options.error.code : "N/A"));
    console.log("options.attempt", options.attempt, "options.total_retry_time", options.total_retry_time);
    return 2000;
};
const client = redis.createClient(config);
client.on('message', function (channel, message) {
    console.log("Channel", channel, ", message", message);
});
client.on("error", function (err) {
    console.log("Error " + err);
});
client.on("end", function () {
    console.log("Redis connection ended");
});
client.on("connect", function () {
    console.log("Redis connected");
});
client.on("reconnecting", function () {
    console.log("Redis reconnecting");
});
client.on("ready", function () {
    console.log("Redis connection is ready");
});
const channel = "channel";
console.log("Subscribing to channel", channel);
client.subscribe(channel);
I'm using redis#2.8.0 and node v8.11.3.
The solution for this issue is quite sad.
First, there is indeed some network device between the redis client and server, which drops inactive connections after some timeout. It seems that this timeout is really low (several minutes).
Redis has a socket_keepalive configuration which is enabled by default, and its default value is Node.js's default socket keep-alive value (which is 2 hours, if I'm not mistaken).
As can be seen above, I used a socket_initdelay configuration parameter that should have changed this default value, but unfortunately the code that uses this parameter isn't in the redis npm package but rather in node-redis.
To summarize:
There is no configuration setting to change the keep alive timeout value in redis#2.8.0 (latest version when writing this post).
You can either:
Use node-redis which accepts the socket_initdelay setting.
Modify the timeout manually by running the following:
const client = redis.createClient();
client.on("connect", function () {
    // override the default keep-alive probe interval on the underlying socket
    client.stream.setKeepAlive(true, <timeout_value_in_milliseconds>);
});
EDIT: I see that I'm getting ping timeout and transport error reasons in my handler for disconnect on the server. This makes it difficult to maintain state in my server (I'm trying to keep track of which users are connected in a chat-like setup). I was reading that it may be related to background tabs in Chrome (which I'm running). Does anyone have any experience with these 'spurious' disconnect events?
I'm new to Socket.io and am having some trouble understanding the connection and disconnection process.
As I understand it, the server receives the connection event once when a client connects, and one registers all the handlers for that client in the callback of on('connection'). Is that true?
I want to maintain an array of connected users, so I add a user to that array in the connection handler.
Should I then listen for the disconnect event to know when to remove a user from that array? Can I be guaranteed that that event will only be fired once?
It's a bit confusing, because on the client side, there is the connect event, which apparently can be fired multiple times -- the documentation says
// note: you should register event handlers outside of connect,
// so they are not registered again on reconnection
which is a different paradigm than on the server side, where all the handlers are registered inside the connection handler. But if the client-side connect event can fire on re-connection, what is the reconnect event for? (The docs say this event is "Fired upon a successful reconnection.")
In general I'm confused about the process of connection, disconnection and re-connection: how it relates to events, whether it happens "randomly" due to connection issues or only under the programmer's control, and how many times one should anticipate receiving each of these events -- once only on the server, multiple times on the client?
Thanks for any help!
I'm new to Socket.io and am having some trouble understanding the
connection and disconnection process.
Welcome to the wonderful world of Node.js + Socket.io. It's super powerful!
As I understand it, the server receives the connection event once when
a client connects, and one registers all the handlers for that client
in the callback on on.('connection'). Is that true?
Correct. Take a look at this example of my code:
Server-side
var clients = {}; /* stores all connected sockets on the fly, keyed by socket.id */
io.on('connection', function (socket) {
    clients[socket.id] = socket; /* keeps a map of sockets currently connected */
    socket.on('disconnect', function (data) {
        console.log(socket.id + " disconnected");
        delete clients[socket.id];
    });
});
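That map then doubles as your list of connected users. For example, a hypothetical helper to report how many users are currently connected:

// counts the sockets still present in the clients map
function report_users() {
    console.log(Object.keys(clients).length + " users connected");
}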
Client-side
var socket_delegates = function () {
    // Socket events
    socket.on('connect', function (data) {
        /* handle on connect events */
    });
    socket.on('disconnect', function () {
        /* handle disconnect events - possibly reconnect? */
    });
    socket.on('reconnect', function () {
        /* handle reconnect events */
    });
    socket.on('reconnect_error', function () {
        /* handle reconnect error events - possible retry? */
    });
};

socket = io.connect(YOUR_SOCKET_URI, { transports: ['websocket'] });
socket_delegates();
Should I then listen for the disconnect event to know when to remove a
user from that array? Can I be guaranteed that that event will only be
fired once?
Yes. You will see in the above server code that we listen for disconnect and then do what we need to.
Nothing should be random. You should have code in place to handle connect and disconnect on the server side, and code to handle connect, disconnect and reconnect on the client side, as in the sketch below.
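On the client side, Socket.io's built-in reconnection usually covers the retry logic for you. A minimal sketch, using Socket.io 1.x client option names (adjust to your version):

var socket = io.connect(YOUR_SOCKET_URI, {
    reconnection: true,           // auto-reconnect (the default)
    reconnectionDelay: 1000,      // start with 1s between attempts
    reconnectionDelayMax: 10000,  // back off to at most 10s
    reconnectionAttempts: Infinity
});

// Register handlers once, outside any connect callback, so they are
// not registered again after a reconnection.
socket.on('connect', function () {
    // fires on the first connection AND after every successful reconnection
});
socket.on('reconnect', function (attempt) {
    // fires only after a successful reconnection, with the attempt count
});
socket.on('disconnect', function () {
    // connection lost; the client will retry on its own
});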
In case a client times out, or I want to close the client connection for another reason, I would like to close the socket connection properly. By properly I mean that:
The client knows that it shouldn't send any further information
The serverside closes the connection completely (because attackers still might send data to the server which we don't want to read)
At first I thought about using socket.destroy() which will ensure that no more I/O activity will happen. When I tried this I noticed that the client does not get informed about this. Most likely because it can't know that the connection has been closed since nothing has been sent to the client, right?
Because of that I thought about emitting socket.end() and immediately after that emitting socket.destroy(). This time the client closed properly, but it triggered the socket.end() event twice. Why is that happening? Is that the proper way of forcing a socket connection to close, or am I missing something?
Server code:
sock.on('destroy', function () {
    console.log(sock.remoteAddress + ' has been destroyed');
});
sock.on('end', function () {
    console.log(sock.remoteAddress + ' has been half closed');
});
sock.on('timeout', function () {
    console.log(sock.remoteAddress + " timed out");
    sock.emit('end');
    sock.emit('destroy');
});
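For comparison, note that sock.emit('end') and sock.emit('destroy') only invoke the listeners above; they do not actually close the socket ('destroy' is not even a built-in net.Socket event). A minimal sketch of the teardown using the real net.Socket methods, with the forced destroy deferred so the FIN can reach the client first (the 5-second grace period is an arbitrary choice):

sock.on('timeout', function () {
    console.log(sock.remoteAddress + ' timed out');
    sock.end(); // half-close: sends FIN so the client knows to stop sending

    // A misbehaving client could ignore the FIN and keep writing, so
    // force the teardown after a short grace period.
    setTimeout(function () {
        sock.destroy();
    }, 5000);
});

// 'close' (with a had_error flag) reliably marks the end of the
// socket's life, whichever path got it there.
sock.on('close', function (had_error) {
    console.log(sock.remoteAddress + ' closed' + (had_error ? ' (error)' : ''));
});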
I can't figure out one problem I have.
I'm using the Net module on my Node.JS server which is used to listen to client connections.
The client does connect to the server correctly and the connection remains available to read/write data. So far, so good. But when the client unexpectedly disconnects (e.g. when the internet falls away at the client side) I want to fire an event server side.
In socket.io it would be done with the 'disconnect' event, but this event doesn't seem to exist for the Net module. How is this possible to do?
I've searched on Google/StackOverflow and in the Net documentation (https://nodejs.org/api/net.html) but I couldn't find anything useful. I'm sorry if I missed something.
Here is a code snippet I got:
var net = require('net');

var server = net.createServer(function (connection) {
    console.log('client connected');
    connection.wildcard = false; // connection must be initialised with a configuration stored in the database
    connection.bidirectional = true; // when piped, this connection will be configured as bidirectional
    connection.setKeepAlive(true, 500);
    connection.setTimeout(3000);
    connection.on('close', function () {
        console.log('Socket is closed');
    });
    connection.on('error', function (err) {
        console.log('An error happened in connection: ' + err.stack);
    });
    connection.on('end', function () {
        console.log('Socket did disconnect');
    });
    connection.on('timeout', function () {
        console.log('Socket did timeout');
        connection.end();
    });
    connection.on('data', function (data) {
        // handle incoming data
    });
});

server.listen(40000, function () {
    console.log('server is listening');
});
None of the events (close, end, error, timeout) fire when I disconnect the client (by pulling out the UTP cable).
Thanks in advance!
EDIT:
I did add a timeout event in the code above, but the only thing that happens is that the socket times out after 3 seconds every time the client connects again. Isn't KeepAlive enough to keep the socket from being considered idle? How is it possible to keep the socket non-idle without too much overhead? There may be more than 10,000 simultaneous connections which must remain alive as long as they are connected (i.e. respond to the keepalive message).
Update:
I think the KeepAlive is not related to the idle state of the socket, sort of.
Here is my test: I removed the following code from your example.
//connection.setKeepAlive(true, 500);
Then test this server with one client connected to it via nc localhost 40000. If no message is sent to the server within 3 seconds, the server logs the following:
Socket did timeout
Socket did disconnect
Socket is closed
The timeout event is triggered without the KeepAlive setting.
Investigating further, refer to the Node.js source code:
function onread(nread, buffer) {
    //...
    self._unrefTimer();
We know the timeout event is triggered by the onread() operation of the socket. Namely, if there is no read operation within 3 seconds, the timeout event will be emitted. To be more precise, not only onread but also a successful write will call _unrefTimer().
In summary, while there are read or write operations on the socket, it is NOT idle.
Actually, the close event is used to detect whether the client connection is alive or not, as also mentioned in this SO question.
Emitted when the server closes. Note that if connections exist, this event is not emitted until all connections are ended.
However, in your case
disconnect the client(by pulling out the UTP cable).
The timeout event should be used to detect connection inactivity. It only notifies that the socket has been idle; the user must manually close the connection. Please refer to this question.
In a TCP connection, the end event fires when the client sends a 'FIN' packet to the server.
If the client side does not send a 'FIN' packet, that event never fires.
For example, in your situation,
But when the client unexpectedly disconnects (ed. when internet falls away at client side) I want to fire an event server side.
There may not be a 'FIN' packet because the internet connection is gone.
So you should handle this situation with a timeout instead of relying on keepAlive. If no data is coming in, you should end or destroy the socket.
EDIT: I did add a timeout event in the code above, but the only
thing that happens is that the socket times out after 3 seconds
every time the client connects again. Isn't KeepAlive enough to
keep the socket from being considered idle? How is it possible to
keep the socket non-idle without too much overhead? There may be
more than 10,000 simultaneous connections which must remain alive
as long as they are connected (i.e. respond to the keepalive message).
For your edit: your devices should send the server a heartbeat message at some interval, so the server understands that the device is alive, and the timeout event will not fire because data keeps arriving. Without such a heartbeat message, you cannot handle this problem.
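As an illustration of the heartbeat idea, a minimal sketch (the 10-second window, 5-second interval, 'ping' payload and hostname are arbitrary placeholders):

var net = require('net');

// Server side: treat prolonged silence as a dead connection.
var server = net.createServer(function (connection) {
    connection.setTimeout(10000); // 10s without traffic fires 'timeout'
    connection.on('timeout', function () {
        console.log('no traffic for 10s, dropping client');
        connection.destroy(); // fires 'close', so cleanup code runs
    });
    connection.on('data', function (data) {
        // any incoming data, heartbeats included, resets the idle timer
    });
    connection.on('close', function () {
        console.log('client gone');
    });
});
server.listen(40000);

// Client side: send a tiny heartbeat well inside the server's window.
var client = net.connect(40000, 'server.example.com');
setInterval(function () {
    client.write('ping\n');
}, 5000);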
I have a client that connects to a socket server (node.js). I seem to be leaking sockets.
Here is the flow that causes the leaked sockets.
1. Connect to socket server
2. Sign out (I see the logout confirmation on the server)
3. Sign in again to the socket server (see confirmation on socket server)
4. Restart socket server quickly (force restart using supervisor module by re-saving a file)
5. Client reconnects to socket server. I now see 2 sockets that have connected to the socket server, instead of what should be just one.
If I repeat steps 2-4, I can see multiple connections from the same client.
Here is my client socket.io code:
client.js:
function start_socket(tok) {
    console.log("socket trying to connect");
    // try every second to reconnect
    socket = io.connect(sockets_host, {
        query: $.param({ token: tok }),
        'forceNew': true,
        'reconnection limit': 100,
        'max reconnection attempts': 'Infinity'
    });
    socket.on('connect', function () {
        console.log('connected to socket server');
        set_loggedin_status('true');
    });
    socket.on('disconnect', function () {
        console.log('disconnected from server');
        set_loggedin_status('false');
        close_socket(); // I get leaked sockets whether I call this or not, though fewer if I do...
    });
    socket.on('error', function (err) {
        console.log('socket error: ' + err);
        attempt_login();
    });
}

function close_socket() {
    console.log("in close socket");
    socket.disconnect(true);
    set_loggedin_status('false');
}
I've tried the above without 'forceNew': true, but then I seem to have problems signing in again after the client has signed out.
If I call close_socket from within the disconnect event (and not just from elsewhere when the client chooses to sign out), I seem to get fewer leaked sockets, but I still get them.
How am I creating multiple sockets?
The solution, though not necessarily the answer to the question, was in my case to use socket.io.disconnect() instead of socket.disconnect().
However, this meant that if the socket server goes down once I've already established a connection, no reconnects are attempted. So I have to handle that situation myself when using this approach to solve the leaking sockets.
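For illustration, a minimal sketch of that change applied to the question's close_socket(), with an explicit reconnect path since the manager no longer retries on its own (reopen_socket is a hypothetical helper name):

function close_socket() {
    console.log("in close socket");
    // Close the underlying Manager (the engine.io connection), not just
    // this namespace socket, so nothing is left behind to auto-reconnect
    // and leak a second connection.
    socket.io.disconnect();
    set_loggedin_status('false');
}

// With the manager closed, automatic reconnection is off; reconnect
// explicitly when the user signs in again.
function reopen_socket(tok) {
    start_socket(tok); // from the question: creates a fresh connection
}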