Node handling Primus socket disconnect during substream write

If I have a (synchronous) process writing to a substream on a socket which disconnects during code execution, what's the best way to keep the write from throwing an exception?
(I'm listening for the socket's close/end/etc. events to remove the block from the flow, but those handlers won't fire until after the synchronous code has finished.)
Should I do this:
if (clientStream.stream) {
  clientStream.write(bufferData);
} else {
  console.log('*** Client FAILURE *****');
}
or use a try/catch?
try {
  clientStream.write(bufferData);
} catch (err) {
  console.log('*** Client FAILURE *****');
}
I know try/catches are expensive, but I haven't found any info on checking the clientStream.stream object to verify it exists. Maybe it's been deprecated like stream.readyState?
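A third option, sketched below, is to treat the substream as an ordinary Node writable: attach an 'error' listener so a failed write is reported through the event instead of throwing, and check the standard writable flag before writing. This is only an illustration; whether a Primus substream exposes writable like a core stream is an assumption.

// Sketch: guard writes with an 'error' listener plus the standard writable flag.
// Assumes clientStream behaves like a regular Node.js writable stream.
clientStream.on('error', function (err) {
  // With a listener attached, write failures surface here instead of throwing.
  console.log('*** Client FAILURE *****', err.message);
});

function safeWrite(bufferData) {
  if (!clientStream.writable) {
    console.log('*** Client FAILURE: stream no longer writable ***');
    return false;
  }
  return clientStream.write(bufferData);
}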

Related

How to log stack trace on node.js process error event

My node process is dying and I can't seem to log to a file when the process exits. It is a long-running process invoked directly with node index.js:
// index.js
const fs = require('fs');

exports.getAllCars = (process => {
  if (require.main === module) {
    console.log(`Running process: ${process.getgid()}.`);
    let out = fs.createWriteStream(`${__dirname}/process.log`);
    // trying to handle process events here:
    process.on('exit', code => out.write(`Exit: ${code}`));
    return require('./lib/cars').getAllCars();
  } else {
    return require('./lib/cars').getAllCars;
  }
})(process);
I also tried creating event handlers for error and uncaughtException. Nothing works when killing my process manually (with kill {pid}). The file process.log is created but nothing is in it. Do writable streams require stream.end() to be called on completion?
According to Node.js documentation:
The 'exit' event is emitted when the Node.js process is about to exit
as a result of either:
The process.exit() method being called explicitly.
The Node.js event loop no longer having any additional work to perform.
So, if you start a process that should never end, it will never trigger.
Also, writable streams do not need to be closed explicitly:
If autoClose (an option of createWriteStream) is set to true (the default behavior), the file descriptor will be closed automatically on error or end.
However, the createWriteStream function opens the file with the 'w' flag by default, which means the file is truncated every time it is opened (this may be why you always see it empty). I suggest using
fs.appendFileSync(file, data)
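If you would rather keep a stream, it can be opened in append mode with the standard 'a' flag (a small sketch); note that inside an 'exit' handler only synchronous operations are guaranteed to finish, which is why appendFileSync is the safer choice there:

// Append mode: earlier log entries survive each run of the process.
const out = fs.createWriteStream(`${__dirname}/process.log`, { flags: 'a' });
out.write('process started\n');

// By the time 'exit' fires the event loop is gone, so only synchronous
// calls are safe here:
process.on('exit', code => fs.appendFileSync(`${__dirname}/process.log`, `Exit: ${code}\n`));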
Here are the events you want to listen for:
// catches the ctrl+c event
// NOTE: if SIGINT has a listener installed, its default behavior
// will be removed (Node.js will no longer exit on its own).
process.on('SIGINT', () => {
  fs.appendFileSync(`${__dirname}/process.log`, `Received SIGINT\n`);
  process.exit();
});

// emitted when an uncaught JavaScript exception bubbles up
process.on('uncaughtException', (err) => {
  fs.appendFileSync(`${__dirname}/process.log`, `Caught exception: ${err}\n`);
});

// emitted whenever a Promise is rejected and no error handler is attached to it
process.on('unhandledRejection', (reason, p) => {
  fs.appendFileSync(`${__dirname}/process.log`, `Unhandled Rejection at: ${p}, reason: ${reason}\n`);
});
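One more thing: since you are stopping the process with kill {pid}, note that kill sends SIGTERM by default, and without a handler the default action terminates the process without ever emitting 'exit'. A handler in the same style:

// kill {pid} sends SIGTERM by default; handle it so the exit is logged.
process.on('SIGTERM', () => {
  fs.appendFileSync(`${__dirname}/process.log`, `Received SIGTERM\n`);
  process.exit();
});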
I suggest you put the code in a try/catch block to find out whether it's your code or some external cause that results in program termination, and then check the log after the event:
try {
  // your code
} catch (e) {
  console.log(e.stack);
}

NodeJS sockets initialized as unpaused?

A net.Socket object in NodeJS is a Readable Stream; however, one note in the docs got me concerned:
For the net.Socket 'data' event, the docs say:
Note that the data will be lost if there is no listener when a Socket emits a 'data' event.
That seems to imply a Socket is handed to the calling script in flowing mode, already unpaused. However, for a generic Readable Stream, the documentation for the 'data' event says:
If you attach a data event listener, then it will switch the stream into flowing mode, and data will be passed to your handler as soon as it is available.
That "if" seems to imply that if you wait a bit before binding to the 'data' event, the stream will wait for you; and if you intentionally want to miss the 'data' events, the example for the resume() method seems to indicate you must call resume() to start the flow of data.
My concern is this: when working with a net.Server and you receive a net.Socket as part of a 'connection' event, is it imperative to start handling 'data' events right away, since the socket is already open? In other words, if I do the following:
var s = new net.Server();
s.on('connection', function(socket) {
  // Do some lengthy setup process here, blocking execution for a few seconds...
  socket.on('data', function(d) { console.log(d); });
});
s.listen(8080);
Meaning, if I don't bind to the 'data' event right away, could I lose data? And is the following a more robust way to handle incoming connections if each one requires a lengthy setup?
var s = new net.Server();
s.on('connection', function(socket) {
  socket.pause(); // Not ready for you yet!
  // Do some lengthy setup process here, blocking execution for a few seconds...
  socket.on('data', function(d) { console.log(d); });
  socket.resume(); // Okay, go!
});
s.listen(8080);
Does anyone have experience with listening on raw socket streams who knows whether this data loss is an issue?
I'm hoping this is an instance where the net.Socket documentation wasn't updated after v0.10, since the stream documentation has a section mentioning that 'data' events started emitting right away in versions prior to 0.10. Were TCP sockets properly updated to not start emitting 'data' packets right away, with the documentation not updated to match?
Yes, this is a flaw in the docs. Here is an example:
var net = require('net');

var server = net.createServer(onConnection);

function onConnection (socket) {
  console.log('onConnection');
  setTimeout(startReading, 1000);

  function startReading () {
    socket.on('data', read);
    socket.on('end', stopReading);
  }

  function stopReading () {
    socket.removeListener('data', read);
    socket.removeListener('end', stopReading);
  }
}

function read (data) {
  console.log('Received: ' + data.toString('utf8'));
}

server.listen(1234, onListening);

function onListening () {
  console.log('onListening');
  net.connect(1234, onConnect);
}

function onConnect () {
  console.log('onConnect');
  this.write('1');
  this.write('2');
  this.write('3');
  this.write('4');
  this.write('5');
  this.write('6');
}
All the data is received. If you explicitly resume() the socket (without a 'data' listener attached), you will lose it.
Also, if you do your "lengthy" setup in a blocking manner (which you shouldn't), you can't lose any IO, as it has no chance to be processed, so no events will be emitted.

Nodeunit Execution Order?

I am trying to test my web server using nodeunit:
test.js
exports.basic = testCase({
  setUp: function (callback) {
    this.ws = new WrappedServer();
    this.ws.run(PORT);
    callback();
  },
  tearDown: function (callback) {
    delete this.ws;
    callback();
  },
  testFoo: function (test) {
    var socket = ioClient.connect(URL);
    console.log('before client emit');
    socket.emit('INIT', 1, 1);
    console.log('after client emit');
  }
});
and this is my very simple Node.js server:
WrappedServer.prototype.run = function(port) {
  this.server = io.listen(port, {'log level': 2});
  this.attachCallbacks();
};

WrappedServer.prototype.attachCallbacks = function() {
  var ws = this;
  ws.server.sockets.on('connection', function(socket) {
    ws.attachDebugToSocket(socket);
    console.log('socket attaching INIT');
    socket.on('INIT', function(userId, roomId) {
      // do something here
    });
    console.log('socket finished attaching INIT');
  });
};
Basically I am getting this output:
[...cts/lolol/nodejs/testing](testingServer)$ nodeunit ws.js
info - socket.io started
before client emit
after client emit
info - handshake authorized 1013616781193777373
The "sys" module is now called "util". It should have a similar interface.
socket before attaching INIT
socket finished attaching INIT
info - transport end
Somehow, the socket emits INIT BEFORE the server attaches callbacks for sockets.
Why is this happening? In addition, what's the right way to do this?
I'm assuming you were expecting the order to be this?
socket before attaching INIT
socket finished attaching INIT
before client emit
after client emit
From the small amount of code given, the issue probably comes down to two things.
First, and probably the main issue: ioClient.connect will not connect immediately. You need to pass some kind of callback to it, emit INIT there, and then execute the test's callback function once it has actually connected.
Second, you should probably do the same thing with your run command. listen will not start listening immediately, so you're going to get inconsistent results occasionally if it hasn't started listening by the time your test executes. You should also pass setUp's callback to io.listen.
Update
To be clear about listen: just like most things in Node, socket.io's listen method is asynchronous. Calling the method tells the server to start listening, but some time passes in the background while the networking is set up. Just like Node's core listen (http://nodejs.org/docs/latest/api/net.html#server.listen), socket.io's version takes a callback argument that is called once the server is up and listening.
io.listen(port, {'log level': 2}, callback);
Unless socket.io starts giving you errors about failing to connect, this probably is not an issue, but it is something to keep in mind. Treating asynchronous actions as if they were instantaneous is an easy way to make bugs that only come up occasionally. Since your run wraps listen, I think in general, not just for testing, passing a callback to run would be a very good idea.
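Putting both suggestions together, here is a sketch of what the fixed test might look like. It assumes run is changed to forward a callback to io.listen, and it relies on the socket.io-client 'connect' event; WrappedServer, PORT, URL, and ioClient are the names from the question.

// run now reports when the server is actually listening.
WrappedServer.prototype.run = function(port, callback) {
  this.server = io.listen(port, {'log level': 2}, callback);
  this.attachCallbacks();
};

exports.basic = testCase({
  setUp: function (callback) {
    this.ws = new WrappedServer();
    this.ws.run(PORT, callback); // setUp completes once the server listens
  },
  testFoo: function (test) {
    var socket = ioClient.connect(URL);
    socket.on('connect', function () {
      socket.emit('INIT', 1, 1); // emitted only after the connection exists
      test.done();
    });
  }
});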

NodeJS socket.io-client doesn't fire 'disconnect' or 'close' events when the server is killed

I've written up a minimal example of this. The code is posted here: https://gist.github.com/1524725
I start my server, start my client, verify that the connection between the two is successful, and finally kill the server with CTRL+C. When the server dies, the client immediately runs to completion and closes without printing the message in either on_client_close or on_client_disconnect. There is no perceptible delay.
From the reading I've done, because the client process is terminating normally, an unflushed STDOUT buffer shouldn't be the explanation.
It may also be worth noting that when I kill the client instead of the server, the server responds as expected, firing the on_ws_disconnect function and removing the client connection from its list of active clients.
32-bit Ubuntu 11.10
Socket.io v0.8.7
Socket.io-client v0.8.7
NodeJS v0.6.0
Thanks!
--- EDIT ---
Please note that both the client and the server are Node.js processes rather than the conventional web browser client and node.js server.
NEW ANSWER
Definitely a bug in io-client. :(
I was able to fix this by modifying socket.io-client/libs/socket.js. Around line 433, I simply moved the this.publish('disconnect', reason); above if (wasConnected) {.
Socket.prototype.onDisconnect = function (reason) {
  var wasConnected = this.connected;
  this.publish('disconnect', reason); // moved above the wasConnected check
  this.connected = false;
  this.connecting = false;
  this.open = false;
  if (wasConnected) {
    this.transport.close();
    this.transport.clearTimeouts();
    // ... rest of the original function unchanged
After pressing ctrl+c, the disconnect message fires in roughly ten seconds.
OLD DISCUSSION
To notify clients of shutdown events, you would add something like this to demo_server.js:
var logger = io.log;

process.on('uncaughtException', function (err) {
  if (io && io.socket) {
    io.socket.broadcast.send({type: 'error', msg: err.toString(), stack: err.stack});
  }
  logger.error(err);
  logger.error(err.stack);
  // todo: should we have some default resetting (restart server?)
  app.close();
  process.exit(-1);
});

process.on('SIGHUP', function () {
  logger.error('Got SIGHUP signal.');
  if (io && io.socket) {
    io.socket.broadcast.send({type: 'error', msg: 'server disconnected with SIGHUP'});
  }
  // todo: what happens on a SIGHUP?
  // todo: if you're using upstart, just call restart node demo_server.js
});

process.on('SIGTERM', function () {
  logger.error('Shutting down.');
  if (io && io.socket) {
    io.socket.broadcast.send({type: 'error', msg: 'server disconnected with SIGTERM'});
  }
  app.close();
  process.exit(-1);
});
Of course, what you send in the broadcast.send(...) (or even which command you use there) depends on your preference and client structure.
For the client side, you can tell if the server connection is lost using on('disconnect', ...), which you have in your example:
client.on('disconnect', function(data) {
  alert('disconnected from server; reconnecting...');
  // and so on...
});

Make node.js not exit on error

I am working on a websocket-oriented Node.js server using Socket.IO. I noticed a bug where certain browsers aren't following the correct connect procedure to the server, and the code isn't written to gracefully handle it; in short, it calls a method on an object that was never set up, thus killing the server due to an error.
My concern isn't with the bug in particular, but the fact that when such errors occur, the entire server goes down. Is there anything I can do on a global level in node to make it so if an error occurs it will simply log a message, perhaps kill the event, but the server process will keep on running?
I don't want other users' connections to go down due to one clever user exploiting an uncaught error in a large included codebase.
You can attach a listener to the uncaughtException event of the process object.
Code taken from the actual Node.js API reference (it's the second item under "process"):
process.on('uncaughtException', function (err) {
  console.log('Caught exception: ', err);
});

setTimeout(function () {
  console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
All you've got to do now is log it or otherwise act on it. If you know under what circumstances the bug occurs, you should file an issue on Socket.IO's GitHub page:
https://github.com/LearnBoost/Socket.IO-node/issues
Using uncaughtException is a very bad idea.
The best alternative is to use domains in Node.js 0.8. If you're on an earlier version of Node.js, use forever to restart your processes, or, even better, use the cluster module to spawn multiple worker processes and restart a worker on an uncaughtException (a sketch follows the quoted warning below).
From: http://nodejs.org/api/process.html#process_event_uncaughtexception
Warning: Using 'uncaughtException' correctly
Note that 'uncaughtException' is a crude mechanism for exception handling intended to be used only as a last resort. The event should not be used as an equivalent to On Error Resume Next. Unhandled exceptions inherently mean that an application is in an undefined state. Attempting to resume application code without properly recovering from the exception can cause additional unforeseen and unpredictable issues.
Exceptions thrown from within the event handler will not be caught. Instead the process will exit with a non-zero exit code and the stack trace will be printed. This is to avoid infinite recursion.
Attempting to resume normally after an uncaught exception can be similar to pulling out of the power cord when upgrading a computer -- nine out of ten times nothing happens - but the 10th time, the system becomes corrupted.
The correct use of 'uncaughtException' is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc) before shutting down the process. It is not safe to resume normal operation after 'uncaughtException'.
To restart a crashed application in a more reliable way, whether uncaughtException is emitted or not, an external monitor should be employed in a separate process to detect application failures and recover or restart as needed.
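A minimal sketch of the cluster approach suggested above (my own illustration against the cluster API as of Node 0.8, not code from the docs): the master forks one worker per CPU and re-forks whenever one dies, so an uncaught exception takes down a single worker rather than the whole server.

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // When a worker dies (e.g. from an uncaught exception), replace it.
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, restarting');
    cluster.fork();
  });
} else {
  // Each worker runs its own server; a crash here kills only this worker.
  http.createServer(function (req, res) {
    res.end('hello\n');
  }).listen(8000);
}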
I just did a bunch of research on this (see here, here, here, and here) and the answer to your question is that Node will not allow you to write one error handler that will catch every error scenario that could possibly occur in your system.
Some frameworks like express will allow you to catch certain types of errors (when an async method returns an error object), but there are other conditions that you cannot catch with a global error handler. This is a limitation (in my opinion) of Node and possibly inherent to async programming in general.
For example, say you have the following express handler:
app.get("/test", function(req, res, next) {
require("fs").readFile("/some/file", function(err, data) {
if(err)
next(err);
else
res.send("yay");
});
});
Let's say that the file "some/file" does not actually exist. In this case fs.readFile will return an error as the first argument to the callback method. If you check for that and do next(err) when it happens, the default express error handler will take over and do whatever you make it do (e.g. return a 500 to the user). That's a graceful way to handle an error. Of course, if you forget to call next(err), it doesn't work.
So that's the error condition that a global handler can deal with, however consider another case:
app.get("/test", function(req, res, next) {
require("fs").readFile("/some/file", function(err, data) {
if(err)
next(err);
else {
nullObject.someMethod(); //throws a null reference exception
res.send("yay");
}
});
});
In this case, there is a bug in your code that results in calling a method on a null object. Here an exception will be thrown, it will not be caught by the global error handler, and your node app will terminate. All clients currently executing requests on that service will be suddenly disconnected with no explanation as to why. Ungraceful.
There is currently no global error handler functionality in Node to handle this case. You cannot put a giant try/catch around all your express handlers because by the time your async callback executes, those try/catch blocks are no longer in scope. That's just the nature of async code: it breaks the try/catch error handling paradigm.
AFAIK, your only recourse here is to put try/catch blocks around the synchronous parts of your code inside each one of your async callbacks, something like this:
app.get("/test", function(req, res, next) {
require("fs").readFile("/some/file", function(err, data) {
if(err) {
next(err);
}
else {
try {
nullObject.someMethod(); //throws a null reference exception
res.send("yay");
}
catch(e) {
res.send(500);
}
}
});
});
That's going to make for some nasty code, especially once you start getting into nested async calls.
Some people think that what Node does in these cases (that is, die) is the proper thing to do because your system is in an inconsistent state and you have no other option. I disagree with that reasoning but I won't get into a philosophical debate about it. The point is that with Node, your options are lots of little try/catch blocks or hope that your test coverage is good enough so that this doesn't happen. You can put something like upstart or supervisor in place to restart your app when it goes down but that's simply mitigation of the problem, not a solution.
Node.js has a currently unstable feature called domains that appears to address this issue, though I don't know much about it.
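For reference, a minimal sketch of the domains API (as it existed around Node 0.8; the module was later deprecated): errors thrown from async callbacks started inside d.run() are routed to the domain's 'error' handler instead of crashing the process.

var domain = require('domain');

var d = domain.create();
d.on('error', function (err) {
  // Errors from async code started inside d.run() land here
  // instead of becoming an uncaughtException.
  console.log('Domain caught:', err.message);
});

d.run(function () {
  setTimeout(function () {
    nullObject.someMethod(); // would normally crash the process
  }, 100);
});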
I've just put together a class which listens for unhandled exceptions, and when it sees one it:
prints the stack trace to the console
logs it in its own logfile
emails you the stack trace
restarts the server (or kills it, up to you)
It will require a little tweaking for your application, as I haven't made it generic yet, but it's only a few lines and it might be what you're looking for!
Check it out!
(Note: this is over 4 years old at this point, unfinished, and there may now be a better way - I don't know!)
process.on('uncaughtException', function (err) {
  var stack = err.stack;
  var timeout = 1;

  // print note to logger
  logger.log("SERVER CRASHED!");
  // logger.printLastLogs();
  logger.log(err, stack);

  // save log to timestamped logfile
  // var filename = "crash_" + _2.formatDate(new Date()) + ".log";
  // logger.log("LOGGING ERROR TO " + filename);
  // var fs = require('fs');
  // fs.writeFile('logs/' + filename, log);

  // email log to developer
  if (helper.Config.get('email_on_error') == 'true') {
    logger.log("EMAILING ERROR");
    require('./Mailer'); // this is a simple wrapper around nodemailer http://documentup.com/andris9/nodemailer/
    helper.Mailer.sendMail("GAMEHUB NODE SERVER CRASHED", stack);
    timeout = 10;
  }

  // Send signal to clients
  // logger.log("EMITTING SERVER DOWN CODE");
  // helper.IO.emit(SIGNALS.SERVER.DOWN, "The server has crashed unexpectedly. Restarting in 10s..");

  // If we exit straight away, the write-log and send-email operations won't have time to run
  setTimeout(function () {
    logger.log("KILLING PROCESS");
    process.exit();
  },
  // timeout * 1000
  timeout * 100000); // extra time. pm2 auto-restarts on crash...
});
Had a similar problem. Ivo's answer is good. But how can you catch an error in a loop and continue?
var folder = '/anyFolder';

fs.readdir(folder, function(err, files) {
  for (var i = 0; i < files.length; i++) {
    var stats = fs.statSync(folder + '/' + files[i]);
  }
});
Here, fs.statSync throws an error (on a hidden file under Windows; I don't know why it barfs). The error can be caught by the process.on(...) trick, but the loop stops.
I tried adding a handler directly:
var stats = fs.statSync(folder + '/' + files[i]).on('error', function (err) { console.log(err); });
This did not work either.
Adding a try/catch around the questionable fs.statSync() was the best solution for me:
var stats;
try {
  stats = fs.statSync(path);
} catch (err) {
  console.log(err);
}
This then led to the code fix (making a clean path var from folder and file).
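For illustration, one way to build that clean path var is with the standard path module (a small sketch; the variable names follow the snippet above):

var path = require('path');

fs.readdir(folder, function (err, files) {
  if (err) return console.log(err);
  for (var i = 0; i < files.length; i++) {
    var fullPath = path.join(folder, files[i]); // platform-safe join
    try {
      var stats = fs.statSync(fullPath);
    } catch (e) {
      console.log(e); // log the bad entry and keep looping
    }
  }
});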
I found PM2 to be the best solution for handling Node servers, with single and multiple instances.
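For example (a sketch of PM2's standard CLI; the commands shown are part of its documented interface):

# Start the app; PM2 restarts it automatically if it crashes.
pm2 start index.js

# Or run one instance per CPU core in cluster mode.
pm2 start index.js -i max

# Inspect status and logs.
pm2 list
pm2 logs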
One way of doing this would be to spawn a child process and communicate with the parent process via the 'message' event.
In the child process where the error occurs, catch it with 'uncaughtException' to avoid crashing the application. Mind that exceptions thrown from within that event handler will not be caught. Once the error is caught safely, send a message like {finish: false}.
The parent process listens for the 'message' event and sends a message back to the child process to re-run the function.
Child Process:
// In child.js

// function causing an exception
const errorComputation = function () {
  for (let i = 0; i < 50; i++) {
    console.log('i is.......', i);
    if (i === 25) {
      throw new Error('i = 25');
    }
  }
  process.send({finish: true});
};

// Note: exceptions thrown from within this handler are not caught; the
// process would instead exit with a non-zero exit code and print the
// stack trace, to avoid infinite recursion.
process.on('uncaughtException', err => {
  console.log('uncaught exception..', err.message);
  process.send({finish: false});
});

// listen to the parent process and run errorComputation again
process.on('message', () => {
  console.log('starting process ...');
  errorComputation();
});
Parent Process:
// In parent.js
const { fork } = require('child_process');
const compute = fork('child.js');

// listen to the child process
compute.on('message', (data) => {
  if (!data.finish) {
    compute.send('start'); // re-run after a caught error
  } else {
    console.log('Child process finished successfully!');
  }
});

// send the initial message to start the child process
compute.send('start');
