How to deal with 'read ETIMEDOUT' in Node.js?

I have a pub/sub model using Node.js to transmit data from one client to another. The server also records everything it receives and sends it to new clients.
However, some data gets corrupted in transit, and I get errors like:
Error with socket!
{ [Error: write EPIPE] code: 'EPIPE', errno: 'EPIPE', syscall: 'write' }
Error with socket!
{ [Error: read ETIMEDOUT] code: 'ETIMEDOUT', errno: 'ETIMEDOUT', syscall: 'read' }
I don't know how to handle these errors properly. It looks like the client has gone down.
Since the server is only a proxy-like server, it doesn't really know what the data means. I have no idea how to validate every data packet before running into these errors.
Here is my code:
// server is an object inheriting from net.Server
server.on('listening', function() {
    var port = server.address().port;
}).on('connection', function(cli) {
    cli.socketBuf = new Buffers();
    cli.commandStarted = false;
    cli.dataSize = 0;
    cli.setKeepAlive(true, 10 * 1000);
    cli.setNoDelay(true);
    cli.on('connect', function() {
        server.clients.push(cli);
    }).on('close', function() {
        var index = server.clients.indexOf(cli);
        server.clients.splice(index, 1);
    }).on('data', function (buf) {
        server.emit('data', cli, buf);
        if (op.autoBroadcast) {
            _.each(server.clients, function(c) {
                if (c != cli) c.write(buf);
            });
        }
    }).on('error', function(err) {
        console.log('Error with socket!');
        console.log(err);
    });
}).on('error', function(err) {
    console.log('Error with server!');
    console.log(err);
});
// ...
// room.dataSocket is an instance of the server above
room.dataSocket.on('data', function(cli, d) {
    // bf is a buffered file
    bf.append(d);
    room.dataFileSize += d.length;
}).on('connection', function(con) {
    bf.readAll(function(da) {
        con.write(da);
    });
});

If you get an EPIPE, or indeed any error when writing, the peer has closed its end or the connection has been dropped, so you must close the connection at that point.
If you get a read timeout, the inference is that either you have set an unrealistically short timeout or the peer has failed to deliver in time; in the second case, once again, you should assume the connection is down and close it.
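In the code above, that boils down to two small changes; here is a minimal sketch reusing the cli and server.clients names from the question (the destroy() call is defensive, since Node destroys the socket itself after emitting 'error'):

// On any socket error, drop the broken connection instead of only logging.
// 'close' fires after 'error', so the existing 'close' handler will still
// remove the client from server.clients.
cli.on('error', function(err) {
    console.log('Error with socket!');
    console.log(err);
    cli.destroy(); // harmless if the socket is already destroyed
});

// When broadcasting, skip sockets that are no longer writable so one dead
// peer cannot trigger EPIPE while you write to the others.
_.each(server.clients, function(c) {
    if (c !== cli && c.writable) c.write(buf);
});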

Related

Error: read ECONNRESET while connection rabbitmq with nodejs

I've encountered the following error message while connecting to our external RabbitMQ from Node.js:
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:205:27) {
errno: 'ECONNRESET',
code: 'ECONNRESET',
syscall: 'read'
}
and my Node.js code is as follows:
const amqp = require('amqplib/callback_api'); // assumed; the question omits the import

const amqp_url = "amqp://un:pw@sb-mq.com:9901/my-vhost";
amqp.connect(amqp_url, function (error0, connection) {
    if (error0) {
        throw error0;
    }
    connection.createChannel(function (error1, channel) {
        if (error1) {
            throw error1;
        }
        var queue = 'hello';
        var msg = 'Hello World!';
        channel.assertQueue(queue, {
            durable: false
        });
        channel.sendToQueue(queue, Buffer.from(msg));
        console.log(" [x] Sent %s", msg);
    });
    setTimeout(function () {
        connection.close();
        process.exit(0);
    }, 500);
});
But the thing is, when I set up RabbitMQ locally with the same configuration but using the default port (like amqp://un:pw@localhost:5672/my-vhost), it worked perfectly. Please let me know how to troubleshoot this, thanks.
"ECONNRESET" means the other side of the TCP conversation abruptly closed its end of the connection.
see How do I debug error ECONNRESET in Node.js?
About RabbitMQ: check whether RabbitMQ is actually active on that port; just run:
telnet sb-mq.com 9901
from your client machine, and check the firewall configuration.
You may have another service running on 9901. ECONNRESET is a network problem; RabbitMQ can work on different ports without problems.
I found that the issue was resolved when I tried using amqps instead of amqp.
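For reference, switching to amqps only changes the scheme and usually the port; a hedged sketch using amqplib's callback API (5671 is only the conventional TLS port, so use whatever your broker actually exposes), with an 'error' listener on the connection so a later ECONNRESET is logged instead of crashing the process:

const amqp = require('amqplib/callback_api');

// amqps:// uses TLS; 5671 is the conventional TLS port, not a given.
const amqp_url = 'amqps://un:pw@sb-mq.com:5671/my-vhost';

amqp.connect(amqp_url, function (err, connection) {
    if (err) {
        console.error('AMQP connect failed:', err.message);
        return;
    }
    // Handle errors on the live connection instead of letting an
    // unhandled 'error' event take the process down.
    connection.on('error', function (connErr) {
        console.error('AMQP connection error:', connErr.message);
    });
});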

net.connect: reconnecting tcp net.connects yields connect ECONNREFUSED

I have a TCP server/client connection that I am trying out.
{ [Error: connect ECONNREFUSED]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect' }
Everything works the first time the client connects to the server's port and host.
The next time, it gives this error.
Why, and how can I reconnect/fix this?
Conceptually, inner logic aside, this is the server code:
var net = require('net');

var server = net.createServer(function serverConnection(connection) {
    connection.write('whatever I wanna write');
    connection.write("\n");
    connection.on('end', function connectionEnd() {
        console.log("Client Connection Ended");
        connection.unref();
        connection.destroy();
        server.close();
    });
    connection.on('close', function connectionClose() {
        console.log("Connection Closed");
        connection.unref();
        connection.destroy();
        server.close();
    });
    connection.on('error', function connectionErr(err) {
        console.error(err);
        server.close();
    });
});
You're closing the server after the first connection is closed. When you call server.close(), the server stops listening for new connections, so the next connection attempt is refused.
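A minimal sketch of the same server without the premature close (the listen port is hypothetical, since the question never shows that part):

var net = require('net');

var server = net.createServer(function serverConnection(connection) {
    connection.write('whatever I wanna write');
    connection.write("\n");
    connection.on('close', function connectionClose() {
        // Clean up this connection only; do NOT call server.close() here,
        // or the next client will get ECONNREFUSED.
        console.log("Connection Closed");
    });
    connection.on('error', function connectionErr(err) {
        console.error(err);
        connection.destroy();
    });
});

server.listen(9999); // hypothetical port

Call server.close() only when you actually want to stop accepting clients, e.g. on process shutdown.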

Socket IO infinite loop over 1000 connections

I need to benchmark multiple socket connections. I have a Node.js server with the following code:
var io = require('./lib/node_modules/socket.io').listen(12345)
io.sockets.on("connect", function(socket) {
    console.log("Socket " + socket.id + " connected.")
    socket.on("disconnect", function() {
        console.log("Socket " + socket.id + " disconnected.")
    })
})
and a Node.js client:
var port = 12345
  , nbSocket = 1000
  , io = require("./lib/node_modules/socket.io-client")
for (var i = 1; i <= nbSocket; i++)
{
    var socket = io.connect("http://<<my_ip>>:" + port, {forceNew: true})
}
When the client code was executed, the server correctly connected the sockets and ended normally.
But if we change nbSocket to 2000, the server never finishes connecting and disconnecting sockets.
We already tried to raise the limit with:
ulimit -n 5000
But it didn't work. Is there another limit somewhere, or something we missed?
I tested on OS X running Node v0.12.4 and socket.io v1.3.5, and it started causing me problems around nbSocket=5000.
Try appending this snippet to the end of your server script:
var util = require('util'); // needed if your script doesn't already load util

process.on('uncaughtException', function(err) {
    console.info(util.inspect(err, {colors: true}));
});
Also, I changed your code a little and added a timer that prints the number of open sockets twice a second:
var
    util = require('util'),
    io = require('socket.io').listen(12345);

var
    clientCount = 0;

function onClientDisconnect() {
    clientCount--;
}

io.on('connect', function(socket) {
    clientCount++;
    socket.on('disconnect', onClientDisconnect);
});

console.info('Listening...');

setInterval(function () {
    console.info('Number of open sockets: %d', clientCount);
}, 500);

process.on('uncaughtException', function(err) {
    console.info(util.inspect(err, {colors: true}));
});
When the number of open sockets started to get close to 5000, I started seeing these 2 messages several times:
{ [Error: accept ENFILE] code: 'ENFILE', errno: 'ENFILE', syscall: 'accept' }
{ [Error: accept EMFILE] code: 'EMFILE', errno: 'EMFILE', syscall: 'accept' }
According to the libc manual:
ENFILE: too many distinct file openings in the entire system
EMFILE: the current process has too many files open
So in fact my problem was the file descriptor limit; check whether it's also yours by appending the above snippet to your server script. If the exceptions appear, you should look into how to properly increase the open-file limit on your system.
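Independently of raising the limit, you can stop the client from opening all its sockets in one tight loop, which is what produces the accept bursts; a sketch that staggers the connects in batches (batchSize and delayMs are arbitrary values, tune them for your benchmark):

var port = 12345
  , nbSocket = 2000
  , batchSize = 100   // arbitrary: sockets opened per batch
  , delayMs = 200     // arbitrary: pause between batches
  , io = require("./lib/node_modules/socket.io-client")

var opened = 0

function openBatch() {
    for (var i = 0; i < batchSize && opened < nbSocket; i++, opened++) {
        io.connect("http://<<my_ip>>:" + port, {forceNew: true})
    }
    if (opened < nbSocket) setTimeout(openBatch, delayMs)
}

openBatch()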

Reconnect to TCP/IP socket on NodeJS

I use "net" library to create TCP connection on my nodeJs.
root.socket = net.createConnection(root.config.port, root.config.server);
I'm trying to handle the error when the remote server is down, and reconnect in a loop.
root.socket.on('error', function(error) {
    console.log('socket error ' + error);
    root.reconnectId = setInterval(function () {
        root.socket.destroy();
        try {
            console.log('trying to reconnect');
            root.socket = net.createConnection(root.config.port, root.config.server);
        } catch (err) {
            console.log('ERROR trying to reconnect', err);
        }
    }, 200);
});
The trouble is that if the remote server shuts down, I still get an error and my Node.js process stops:
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: connect ECONNREFUSED
    at errnoException (net.js:904:11)
    at Object.afterConnect [as oncomplete] (net.js:895:19)
You will need something like this:
var net = require('net');

var c = createConnection(/* port, server */);

function createConnection(port, server) {
    c = net.createConnection(port, server);
    console.log('new connection');
    c.on('error', function (error) {
        console.log('error, trying again');
        c = createConnection(port, server);
    });
    return c;
}
In your case, you are creating a new connection but not attaching any error listener; the error is raised somewhere else in the event loop and cannot be caught by the try/catch statement.
P.S. Try to avoid the try/catch statement; error handling in Node.js is done with error listeners and domains. try/catch is only useful for JSON.parse() and other functions that execute synchronously.
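Note that the sketch above reconnects immediately inside the 'error' handler, which can busy-loop while the server stays down; a variant with a fixed retry delay (the 1000 ms is an arbitrary choice):

var net = require('net');

function createConnection(port, server) {
    var c = net.createConnection(port, server);
    console.log('new connection');
    c.on('error', function (error) {
        console.log('error, retrying in 1s');
        // Wait before reconnecting so a dead server does not cause
        // a tight reconnect loop.
        setTimeout(function () {
            createConnection(port, server);
        }, 1000);
    });
    return c;
}

var c = createConnection(/* port, server */);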

Nodejs HTTPS client timeout not closing TCP connection

I need to close HTTPS connections if they take longer than 3s, so this is my code:
var https = require('https');

var options = {
    host: 'google.com',
    port: '81',
    path: ''
};

var callback = function(response) {
    var str = '';
    response.on('data', function (chunk) {
        str += chunk;
    });
    response.on('end', function () {
        console.log(str);
    });
    response.on('error', function () {
        console.log('ERROR!');
    });
};

var req = https.request(options, callback);
req.on('socket', function(socket) {
    socket.setTimeout(3000);
    socket.on('timeout', function() {
        console.log('Call timed out!');
        req.abort();
        //req.end();
        //req.destroy();
    });
});
req.on('error', function(err) {
    console.log('REQUEST ERROR');
    console.dir(err);
    req.abort();
    //req.end();
});
req.end();
This is what I get after 3s:
Call timed out!
REQUEST ERROR
{ [Error: socket hang up] code: 'ECONNRESET' }
Using a watch on lsof | grep TCP | wc -l I can see that the TCP connection remains open, even after receiving the 'timeout' event.
After an eternity, I get this and the connection is closed:
REQUEST ERROR
{ [Error: connect ETIMEDOUT] code: 'ETIMEDOUT', errno: 'ETIMEDOUT', syscall: 'connect' }
Does anyone know why this is happening? Why does calling req.abort(), req.end(), or req.destroy() not close the connection? Is it because I'm setting the timeout on the socket instead of the actual HTTP call? If so, how do I close the connection?
You need to set the timeout on the connection:
req.connection.setTimeout(3000);
This timeout will change the socket status from ESTABLISHED to FIN_WAIT1 and FIN_WAIT2.
On Ubuntu there is a default timeout of 60 seconds for the FIN_WAIT socket states, so the total time for the socket to close is 63 seconds if it doesn't receive any traffic. If the socket receives traffic, the timers start over.
If you need to close the socket within 3 seconds, I guess you have to set the connection timeout to 3000 ms and lower the kernel's TCP FIN_WAIT timeout.
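An alternative worth trying is to destroy the socket itself in the timeout handler; a sketch reusing the question's handler (destroying the socket also cancels a connect that never completed, which is the case for google.com:81 here):

req.on('socket', function(socket) {
    socket.setTimeout(3000);
    socket.on('timeout', function() {
        console.log('Call timed out!');
        // Close the socket from our side right away; for a connection
        // stuck in SYN_SENT this releases the descriptor immediately.
        socket.destroy();
    });
});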
