NodeJS crash with multiple requests - node.js

My Node.js server crashes randomly in production (and reliably under Web Stress Tool with 10+ concurrent request threads). Below is the code that I believe to be the root cause.
main.js
--------
var express = require('express');
var actions = require('./actions');

var app = express();
app.get('/image/*', actions.download);
---------
actions.js
--------
var request = require('request');

exports.download = function(req, res){
    var url = <Amazon s3 URL>;
    req.pipe(request(url)).pipe(res);
};
When the server crashes, I get the error below in the nohup output:
stream.js:94
        throw er; // Unhandled stream error in pipe.
              ^
Error: socket hang up
    at createHangUpError (http.js:1476:15)
    at Socket.socketOnEnd [as onend] (http.js:1572:23)
    at Socket.g (events.js:180:16)
    at Socket.emit (events.js:117:20)
    at _stream_readable.js:943:16
    at process._tickCallback (node.js:419:13)
Detailed log when I ran sudo NODE_DEBUG=net node main.js and subjected the server to a stress test with 10 threads:
NET: 3017 Socket._read readStart
NET: 3017 afterWrite 0 { domain: null, bytes: 335, oncomplete: [Function: afterWrite] }
NET: 3017 afterWrite call cb
NET: 3017 onread ECANCELED 164640 4092 168732
NET: 2983 got data
NET: 2983 onSocketFinish
NET: 2983 oSF: not ended, call shutdown()
NET: 2983 destroy undefined
NET: 2983 destroy
NET: 2983 close
NET: 2983 close handle
Error: read ECONNRESET
    at errnoException (net.js:904:11)
    at TCP.onread (net.js:558:19)

This is caused by libuv in src/unix/stream.c. Here we have:
if (stream->shutdown_req) {
  /* The UV_ECANCELED error code is a lie, the shutdown(2) syscall is a
   * fait accompli at this point. Maybe we should revisit this in v0.11.
   * A possible reason for leaving it unchanged is that it informs the
   * callee that the handle has been destroyed.
   */
  uv__req_unregister(stream->loop, stream->shutdown_req);
  uv__set_artificial_error(stream->loop, UV_ECANCELED);
  stream->shutdown_req->cb(stream->shutdown_req, -1);
  stream->shutdown_req = NULL;
}
I've found the reason for this problem:
stream->shutdown_req is assigned in int uv_shutdown(...). So someone called uv_shutdown. Who called uv_shutdown?
uv_shutdown is not a simple function; in node's C++ layer it is wrapped as StreamWrap::Shutdown.
StreamWrap::Shutdown is exposed in nodejs via SET_INSTANCE_METHOD("shutdown", StreamWrap::Shutdown, 0); the shutdown method is part of the pipe_wrap and tcp_wrap wrappers.
So someone called shutdown from nodejs/lib. The candidates are lib/net and lib/tls.
So shutdown is called from the function onCryptoStreamFinish or the function onSocketFinish.
So you need to find who sent the shutdown request in your case. onread ECANCELED means that a stream (for example socket1.pipe(socket2)) has been killed.
BTW, I think you can work around your issue by using the technique lib/tls uses to destroy piped sockets:
pair.encrypted.on('close', function() {
  process.nextTick(function() {
    // Encrypted should be unpiped from socket to prevent possible
    // write after destroy.
    pair.encrypted.unpipe(socket);
    socket.destroy();
  });
});
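More directly for the crash in the question: "Unhandled stream error in pipe" means one of the piped streams emitted 'error' with no listener attached, and an unhandled 'error' event kills the process. A common mitigation is to give every stream in the chain its own 'error' handler. Below is a minimal sketch of mine, not the poster's code; the S3 URL and the 502 fallback are illustrative assumptions:

var request = require('request');

exports.download = function(req, res) {
    var url = 'https://s3.amazonaws.com/...'; // stand-in for the real S3 URL
    var upstream = request(url);

    upstream.on('error', function(err) {
        // Upstream (S3) failed or reset; end the response instead of crashing.
        if (!res.headersSent) res.statusCode = 502;
        res.end();
    });
    req.on('error', function(err) {
        // The client went away mid-request; stop pulling from S3.
        upstream.abort();
    });
    res.on('error', function(err) {
        upstream.abort();
    });

    req.pipe(upstream).pipe(res);
};

With handlers in place, a client abort or an upstream reset surfaces as a handled event rather than killing the whole process.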

Related

How to fix 'events.js:167 error Error: connect ECONNREFUSED 127.0.0.1:443' in Node.js when no other app seems to be attempting to use the port?

I'm getting the error described below when running my node.js app, after performing a few API calls.
The error does not always show up in exactly the same place/line of code, but most of the time it is at the end of an API call.
events.js:167
      throw er; // Unhandled 'error' event
      ^
Error: connect ECONNREFUSED 127.0.0.1:443
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)
Emitted 'error' event at:
    at TLSSocket.socketErrorListener (_http_client.js:391:9)
    at TLSSocket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
    at process._tickCallback (internal/process/next_tick.js:63:19)
Based on similar questions here on SO, my hypothesis is that a) there is something using 127.0.0.1:443 and therefore conflicting with my app, or b) node is trying to use 127.0.0.1:443 but there is nothing there for it to use (my app is listening on localhost:3000).
Hypothesis a) doesn't seem likely, since after running netstat -ano | findstr 127.0.0.1:443 nothing shows up (while the app is running and right after it terminates).
I also killed every node.exe and mongod.exe using any port on my computer, closed the terminal, and restarted the node app, without success.
In case the error is related to hypothesis b), I'm not sure how to address it.
api.post('/parsePOpdf', wagner.invoke(function(Pdfeq, Pdfdocspec, Product, User, Order){
    return async function(req, res){
        //... some code
        pdfParser.on("pdfParser_dataError", errData => console.error(errData.parserError));
        pdfParser.on("pdfParser_dataReady", async function(pdfData) {
            fs.writeFile("./test.json", JSON.stringify(pdfData), function(err){
                console.log(err);
            });
            let pages = pdfData.formImage.Pages;
            //console.log('pages 557', pages);
            let order = {
                orderDetails : {
                    supplier : [{
                        item : []
                    }]
                }
            };
            for (const page of pages){
                let value = await getItemsInPDF(page, productKeys, pdfParsingDetails, order, Product, customer, supplierLink, User);
                //... more code
                order = value;
            }
            return res.json(order);
        });
        pdfParser.loadPDF(pdfFile);
    }
}));
I would expect the code to finish without throwing this error.
It turns out that the problem was in the API code: an http.get line fetching a remote file was generating the conflict. This makes sense, since the error was not present for other endpoints of the API.
So the lesson is: if the terminal reports no app using the suspected conflicting port (see question), the answer is probably within your own code, and you need to go line by line to identify the call causing the problem (instead of focusing on other apps trying to use the same port, as I was).
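A guard that makes this class of error easier to pin down (my addition, with a hypothetical URL; not from the original post): give the http.get/https.get request an 'error' listener, so a refused connection is logged with context instead of crashing the process as an unhandled 'error' event.

const https = require('https');

const fileUrl = 'https://example.com/remote.pdf'; // hypothetical remote file

const req = https.get(fileUrl, (res) => {
    console.log('status:', res.statusCode);
    res.resume(); // drain the body so the socket is freed
});

req.on('error', (err) => {
    // ECONNREFUSED 127.0.0.1:443 here often means the host option ended up
    // undefined, so Node fell back to localhost on the default HTTPS port.
    console.error('request to', fileUrl, 'failed:', err.message);
});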

UDP multicast failing - NodeJS / Windows 10

I am beating my brains out trying to get this to work. I read all the other answers related to NodeJS UDP on SO already, but to no avail. I am on Windows 10.
Here is the error I am getting:
Uncaught Exception: Error: write ENOTSUP
    at exports._errnoException (util.js:1022:11)
    at ChildProcess.target._send (internal/child_process.js:654:20)
    at ChildProcess.target.send (internal/child_process.js:538:19)
    at sendHelper (cluster.js:751:15)
    at send (cluster.js:534:12)
    at cluster.js:509:7
    at SharedHandle.add (cluster.js:99:3)
    at queryServer (cluster.js:501:12)
    at Worker.onmessage (cluster.js:450:7)
    at ChildProcess.<anonymous> (cluster.js:765:8)
    at emitTwo (events.js:111:20)
    at ChildProcess.emit (events.js:191:7)
    at process.nextTick (internal/child_process.js:744:12)
    at _combinedTickCallback (internal/process/next_tick.js:67:7)
    at process._tickDomainCallback [as _tickCallback] (internal/process/next_tick.js:122:9)
Here is my code:
let dgram = require('dgram'),
    server = dgram.createSocket('udp4'),
    multicastAddress = '239.255.255.250',
    multicastPort = 1900,
    myIp = '192.168.51.133';

server.bind(multicastPort, myIp, function () {
    server.setBroadcast(true);
    server.setMulticastTTL(128);
    server.setInterface.getbyname(myIp);
    server.addMembership(multicastAddress, myIp);
});

//wait for incoming messages and print ip address
server.on('message', function (data, rinfo) {
    console.log(new Date() + ' RECEIVER received from ', rinfo.address, ':');
    console.log(data.toString());
});

//Set up discovery message. Make sure to leave out any extra space in the message.
var discover_message = new Buffer('M-SEARCH * HTTP/1.1\r\nHost: 239.255.255.250:1900\r\nMan: ssdp:discover\r\nST: colortouch:ecp\r\n');
server.send(discover_message, 0, discover_message.length, 1900, multicastAddress);
Finally found an answer for this. The issue is due to being on Windows and using clusters in Node. The problem is in the server.bind call. Here is the correct, working code:
server.bind({port: 1900, exclusive: true}, function () {
    console.log('PORT BIND SUCCESS');
    server.setBroadcast(true);
    server.setMulticastTTL(128);
    server.addMembership(multicastAddress, myIp);
});
The fix was to pass in the object {port: 1900, exclusive: true}. Source: https://github.com/misterdjules/node/commit/1a87a95d3d7ccc67fd74145c6f6714186e56f571
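For context (my reading of the Node dgram docs, not part of the original answer): when exclusive is false, which is what cluster workers get by default, the workers share the underlying socket handle, and it is that handle-sharing path that fails with ENOTSUP on Windows; with exclusive: true the handle is not shared, so each worker binds its own socket and the unsupported operation is avoided.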

MongoDB: Server sockets closed after a few minutes

I am working with multiple AWS instances connected to the same mongo database (inside a Compose.io Elastic deployment), but I keep getting the error server <url>:<port> sockets closed after a few minutes. Can anyone give me a hint about what may be wrong with the connection code?
CONNECTION CODE
var url = "mongodb://<user>:<password>@<url1>:<port1>,<url2>:<port2>/<dbName>?replicaSet=<replicaSetName>";
var options = {
    server : { "socketOptions.keepAlive": 1 },
    replSet : { "replicaSet": <replicaSetName>, "socketOptions.keepAlive": 1 }
};
MongoClient.connect(url, options, function(err, db) { ... });
ERROR MESSAGE
Potentially unhandled rejection [2] MongoError: server <url>:<port> sockets closed
    at null.<anonymous> (/var/app/current/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:328:47)
    at g (events.js:199:16)
    at emit (events.js:110:17)
    at null.<anonymous> (/var/app/current/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:101:12)
    at g (events.js:199:16)
    at emit (events.js:110:17)
    at Socket.<anonymous> (/var/app/current/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:142:12)
    at Socket.g (events.js:199:16)
    at Socket.emit (events.js:107:17)
    at TCP.close (net.js:485:12)
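One thing that stands out to me (an observation of mine, not an answer from this thread): the 2.x driver documents socket options as a nested socketOptions object rather than a dotted key, along with explicit timeouts. A sketch of that shape:

var MongoClient = require('mongodb').MongoClient;

// Assumed option shape from the 2.x driver docs; the timeout values are
// illustrative, not taken from the original question.
var options = {
    server : {
        socketOptions : { keepAlive: 1, connectTimeoutMS: 30000, socketTimeoutMS: 30000 }
    },
    replSet : {
        replicaSet : '<replicaSetName>',
        socketOptions : { keepAlive: 1, connectTimeoutMS: 30000, socketTimeoutMS: 30000 }
    }
};

// url is the same connection string as above
MongoClient.connect(url, options, function(err, db) { /* ... */ });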

node.js - handling TCP socket error ECONNREFUSED

I'm using node.js with socket.io to give my web page access to character data served by a TCP socket. I'm quite new to node.js.
User ----> Web Page <--(socket.io)--> node.js <--(TCP)--> TCP Server
The code is mercifully brief:
io.on('connection', function (webSocket) {
    tcpConnection = net.connect(5558, 'localhost', function() {});
    tcpConnection.on('error', function(error) {
        webSocket.emit('error', error);
        tcpConnection.close();
    });
    tcpConnection.on('data', function(tcpData) {
        webSocket.emit('data', { data: String.fromCharCode.apply(null, new Uint8Array(tcpData)) });
    });
});
It all works just fine in the normal case, but I can't guarantee that the TCP server will be there all the time. When it isn't, the TCP stack returns ECONNREFUSED to node.js - this is entirely expected and I need to handle it gracefully. Currently, I see:
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: connect ECONNREFUSED
    at errnoException (net.js:904:11)
    at Object.afterConnect [as oncomplete] (net.js:895:19)
... and the whole process ends.
I've done a lot of searching for solutions to this; most hits seem to be from programmers asking why ECONNREFUSED was received in the first place, and the advice is simply to make sure that the TCP server is available. There is no discussion of handling the failure case.
This post - Node.js connectListener still called on socket error - suggests adding a handler for the 'error' event, as I've done in the code above. This is exactly how I would like it to work ... except it doesn't (for me): my program does not trap ECONNREFUSED.
I've tried to RTFM, and the node.js docs at http://nodejs.org/api/net.html#net_event_error_1 suggest that there is indeed an 'error' event - but give little clue how to use it.
Answers to other similar SO posts (such as Node.js Error: connect ECONNREFUSED) advise a global uncaught-exception handler, but this seems like a poor solution to me. This is not my program throwing an exception due to bad code; it's working fine, and it's supposed to handle external failures by design.
So
Am I approaching this in the right way? (happy to admit this is a newbie error)
Is it possible to do what I want to do, and if so, how?
Oh, and:
$ node -v
v0.10.31
I ran the following code:
var net = require('net');

var client = net.connect(5558, 'localhost', function() {
    console.log("bla");
});

client.on('error', function(ex) {
    console.log("handled error");
    console.log(ex);
});
As I do not have 5558 open, the output was:
$ node test.js
handled error
{ [Error: connect ECONNREFUSED]
  code: 'ECONNREFUSED',
  errno: 'ECONNREFUSED',
  syscall: 'connect' }
This proves that the error gets handled just fine... suggesting that the error is actually happening elsewhere.
As discussed in another answer, the problem is actually this line:
webSocket.emit('error', error);
The 'error' event is special and needs to be handled somewhere (if it isn't, the process ends).
Simply renaming the event to 'problem' or 'warning' results in the whole error object being transmitted back through the socket.io socket up to the web page:
webSocket.emit('warning', error);
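On the web-page side, the renamed event then arrives like any ordinary socket.io event. A small sketch of mine (the 'warning' name matches the line above; the logging is illustrative, not from the original answer):

var socket = io();

socket.on('warning', function (err) {
    // err is the serialized error object emitted by the server, e.g.
    // { code: 'ECONNREFUSED', errno: 'ECONNREFUSED', syscall: 'connect' }
    console.warn('TCP backend unavailable:', err.code);
});

socket.on('data', function (msg) {
    console.log('received:', msg.data);
});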
The only way I found to fix this is wrapping the net stuff in a domain:
const domain = require('domain');
const net = require('net');

const d = domain.create();

d.on('error', (domainErr) => {
    console.log(domainErr.message);
});

d.run(() => {
    const client = net.createConnection(options, () => {
        client.on('error', (err) => {
            throw err;
        });
        client.write(...);
        client.on('data', (data) => {
            ...
        });
    });
});
The domain's error handler captures error conditions which arise before the net client has been created, such as an invalid host.
See also: https://nodejs.org/api/domain.html

Node.js, dgram.setBroadcast(flag) fails due to "EBADF"

I'm using Node.js 0.6.9, and am trying to send a broadcast datagram packet. Code:
var sys = require('util');
var net = require('net');
var dgram = require('dgram');

var message = new Buffer('message');
var client = dgram.createSocket("udp4");

client.setBroadcast(true);
client.send(message, 0, message.length, 8282, "192.168.1.255", function(err, bytes) {
    client.close();
});
Running the code:
$ node test.js
node.js:201
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
Error: setBroadcast EBADF
    at errnoException (dgram.js:352:11)
    at Socket.setBroadcast (dgram.js:227:11)
    at Object.<anonymous> (/home/letharion/tmp/collision/hello.js:25:8)
    at Module._compile (module.js:444:26)
    at Object..js (module.js:462:10)
    at Module.load (module.js:351:32)
    at Function._load (module.js:310:12)
    at Array.0 (module.js:482:10)
    at EventEmitter._tickCallback (node.js:192:41)
Some googling reveals that "EBADF" means "The socket argument is not a valid file descriptor". But I don't understand enough about the problem for that to be helpful.
First of all, you seem to be having trouble reading the format of the stacktrace, so let's clarify it before we get to the actual error being thrown here.
Format of a node.js Stacktrace
node.js:201
        throw e; // process.nextTick error, or 'error' event on first tick
This part is just the location where the internal NodeJS logic choked up and emitted the error below.
The actual error stacktrace follows. It shows the deepest location in the call stack first, so going down the stack trace brings you up in the call hierarchy, eventually leading you to the point in your code where everything began.
Error: setBroadcast EBADF
    at errnoException (dgram.js:352:11)
    at Socket.setBroadcast (dgram.js:227:11)
    at Object.<anonymous> (/home/letharion/tmp/collision/hello.js:25:8)
    at Module._compile (module.js:444:26)
    at Object..js (module.js:462:10)
    at Module.load (module.js:351:32)
    at Function._load (module.js:310:12)
    at Array.0 (module.js:482:10)
    at EventEmitter._tickCallback (node.js:192:41)
First it fails in dgram.js on line 352; dgram.js is an internal node.js module abstracting the "low level" code, and line 352 is in a function containing generic logic for throwing errors.
That function was called from dgram.js line 227, after a failed if check wrapping the call to the native UDP socket's setBroadcast method.
Going up one more layer, we end up in your hello.js file at line 25, with the client.setBroadcast(true); call.
The rest is more node.js code resulting from the initial load of the hello.js file.
The actual Error
The error thrown by the native code that node.js wraps here is EBADF. Looking this up in conjunction with UDP gives us:
EBADF
The socket argument is not a valid file descriptor.
Going further down the node.js rabbit hole, we end up in the udp wrapper, which wraps the libuv wrapper around the actual C implementation; in the uv wrapper we find:
/*
 * Set broadcast on or off
 *
 * Arguments:
 *  handle   UDP handle. Should have been initialized with `uv_udp_init`.
 *  on       1 for on, 0 for off
 *
 * Returns:
 *  0 on success, -1 on error.
 */
Leading us to the conclusion that your socket has not been initialized yet.
In the end, binding the socket via client.bind(8000) fixed the missing initialization and made the program run.
The setBroadcast method should be called in a 'listening' event handler, or passed as the callback to the bind method:
var socket = dgram.createSocket('udp4');

socket.on('listening', function(){
    socket.setBroadcast(true);
});

socket.bind(8000);

OR:

var socket = dgram.createSocket('udp4');

socket.bind(8000, undefined, function() {
    socket.setBroadcast(true);
});
It seems the file descriptor is created only on bind or on send, and it must exist before setBroadcast is called. You can call client.bind() with no parameters to bind to a random port before setting broadcast. Don't worry about using a random port: binding happens "lazily" when you use client.send anyway.
var sys = require('util');
var net = require('net');
var dgram = require('dgram');

var message = new Buffer('message');
var client = dgram.createSocket("udp4");

client.bind();
client.setBroadcast(true);

client.send(message, 0, message.length, 8282, "192.168.1.255", function(err, bytes) {
    client.close();
});
