How do I prevent node.js from crashing? try-catch doesn't work - node.js

From my experience, a PHP server would write the exception to a log or return it to the client, but Node.js simply crashes the whole process. Surrounding my code with a try-catch doesn't work either, since everything is done asynchronously. I would like to know what everyone else does in their production servers.

PM2
First of all, I would highly recommend installing PM2 for Node.js. PM2 is really good at handling crashes, monitoring Node apps, and load balancing. PM2 restarts the Node app immediately whenever it crashes, stops for any reason, or even when the server restarts. So if the app ever crashes despite our best efforts in code, PM2 can restart it immediately. For more info, see Installing and Running PM2.
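For reference, a typical PM2 setup is only a few commands (assuming your entry point is app.js; check the PM2 docs for the exact flags your version supports):
npm install pm2 -g
pm2 start app.js -i max    # cluster mode: one instance per core, auto-restarted on crash
pm2 save                   # remember the current process list
pm2 startup                # generate a boot script so apps come back after a server restart
pm2 monit                  # live monitoring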
The other answers here are really risky, as you can read in Node's own documentation at http://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
If you are using any of the other suggested answers, read the Node docs first:
Note that uncaughtException is a very crude mechanism for exception handling and may be removed in the future
Now, coming back to our solution for preventing the app itself from crashing.
After going through the docs, I finally came up with what the Node documentation itself suggests:
Don't use uncaughtException, use domains with cluster instead. If you do use uncaughtException, restart your application after every unhandled exception!
DOMAIN with Cluster
What we actually do is send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module, since the master process can fork a new worker when a worker encounters an error. See the code below to understand what I mean.
By using Domain, and the resilience of separating our program into multiple worker processes using Cluster, we can react more appropriately, and handle errors with much greater safety.
var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
  // Master: fork workers and replace any worker that disconnects.
  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function (worker) {
    console.error('disconnect!');
    cluster.fork();
  });
} else {
  // Worker: wrap every request in a domain so one bad request can be
  // answered with a 500 while the worker shuts down cleanly.
  var domain = require('domain');

  var server = require('http').createServer(function (req, res) {
    var d = domain.create();

    d.on('error', function (er) {
      // Something unexpected occurred.
      console.error('error', er.stack);
      try {
        // Make sure we close down within 30 seconds.
        var killtimer = setTimeout(function () {
          process.exit(1);
        }, 30000);
        // But don't keep the process open just for that!
        killtimer.unref();

        // Stop taking new requests.
        server.close();

        // Let the master know we're dead. This will trigger a
        // 'disconnect' in the cluster master, and then it will fork
        // a new worker.
        cluster.worker.disconnect();

        // Send an error to the request that triggered the problem.
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      } catch (er2) {
        // Oh well, not much we can do at this point.
        console.error('Error sending 500!', er2.stack);
      }
    });

    // Because req and res were created before this domain existed,
    // we need to explicitly add them.
    d.add(req);
    d.add(res);

    // Now run the handler function in the domain.
    d.run(function () {
      // You'd put your fancy application logic here.
      handleRequest(req, res);
    });
  });

  server.listen(PORT);
}

// Minimal stub so the example runs as-is; replace with your real routing.
function handleRequest(req, res) {
  res.end('ok\n');
}
Note that the domain module is pending deprecation and will be removed once a replacement is available, as stated in Node's documentation:
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.
But until that replacement is introduced, Domain with Cluster is the only good solution the Node documentation suggests.
For an in-depth understanding of Domain and Cluster, read:
https://nodejs.org/api/domain.html#domain_domain (Stability: 0 - Deprecated)
https://nodejs.org/api/cluster.html
Thanks to @Stanley Luo for sharing this wonderful in-depth explanation of Cluster and Domains:
Cluster & Domains

I put this code right under my require statements and global declarations:
process.on('uncaughtException', function (err) {
  console.error(err);
  console.log("Node NOT Exiting...");
});
Works for me. The only thing I don't like about it is that I don't get as much info as I would if I just let the thing crash.

As mentioned here, error.stack provides a more complete error message, including the line number that caused the error:
process.on('uncaughtException', function (error) {
  console.log(error.stack);
});

Try supervisor
npm install -g supervisor
supervisor app.js
Or you can install forever instead.
All this will do is recover your server when it crashes by restarting it.
forever can be used within the code to gracefully recover any processes that crash.
The forever docs have solid information on exit/error handling programmatically.
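For reference, a rough sketch of both approaches: the forever CLI, and the programmatic monitor, which lives in the separate forever-monitor package (the option names below follow its README and may differ between versions; app.js is just a placeholder for your entry file):
npm install -g forever
forever start app.js    # keep restarting app.js whenever it crashes
forever list
forever stop app.js
And programmatically:
var forever = require('forever-monitor');

var child = new (forever.Monitor)('app.js', {
  max: 10,      // give up after 10 restarts
  silent: true
});

child.on('exit', function () {
  console.log('app.js has exited after 10 restarts');
});

child.start();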

Using try-catch may handle some errors, but it won't do the job in more complex situations, such as catching errors thrown inside asynchronous callbacks. Remember that in Node, any async call can contain an operation that crashes the app.
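For example, this try/catch does nothing, because the callback runs on a later tick, long after the try block has already returned:
try {
  setTimeout(function () {
    throw new Error('boom'); // still crashes the process
  }, 100);
} catch (e) {
  console.error('never reached', e);
}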
Using uncaughtException is a workaround, but it is recognized as inefficient and is likely to be removed in future versions of Node, so don't count on it.
The ideal solution is to use domains: http://nodejs.org/api/domain.html
To make sure your app stays up and running even after a crash, follow these steps:
Use the Node cluster module to fork multiple processes (e.g. one per core), so that if one process dies, another is booted up automatically; a minimal sketch follows this list. Check out: http://nodejs.org/api/cluster.html
Use domain to catch errors from async operations instead of relying on try-catch or uncaughtException alone (not that try-catch or uncaughtException is bad, though!).
Use forever/supervisor to monitor your services.
Run your Node app as a daemon, e.g. with Upstart: http://upstart.ubuntu.com
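A minimal sketch of the first step, assuming the HTTP server lives in the same file (adapt the port and handler to your app):
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // One worker per CPU core.
  os.cpus().forEach(function () {
    cluster.fork();
  });
  // If a worker dies, boot another one up automatically.
  cluster.on('exit', function (worker) {
    console.error('worker ' + worker.process.pid + ' died, forking a new one');
    cluster.fork();
  });
} else {
  require('http').createServer(function (req, res) {
    res.end('ok\n');
  }).listen(process.env.PORT || 3000);
}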
Hope this helps!

Give the pm2 node module a try; it is very consistent and has great documentation. It is a production process manager for Node.js apps with a built-in load balancer. Please avoid uncaughtException for this problem.
https://github.com/Unitech/pm2

Works great on restify:
server.on('uncaughtException', function (req, res, route, err) {
  log.info('******* Begin Error *******\n%s\n*******\n%s\n******* End Error *******', route, err.stack);
  if (!res.headersSent) {
    return res.send(500, { ok: false });
  }
  res.write('\n');
  res.end();
});

By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting with code 1, overriding any previously set process.exitCode.
know more
process.on('uncaughtException', (err, origin) => {
  // Log the error and its origin ('uncaughtException' or 'unhandledRejection').
  console.log(err, origin);
});

UncaughtException is "a very crude mechanism" (so true) and domains are deprecated now. However, we still need some mechanism to catch errors around (logical) domains. The library:
https://github.com/vacuumlabs/yacol
can help you do this. With a little extra writing you can have nice domain semantics all around your code!

Related

Updating Node.JS app and restarting server automatically

I want to be able to submit a new version of my app via the browser, then update the source, install/update all npm packages, and restart the server.
Right now I do it via a POST request. My app saves the archive with the new version to a local directory and then runs a bash script that stops the server and performs the update.
The problem is that the server stops before it sends the response. I use forever to run my Node app.
The question: is there any standard way to update the app? Is it possible to do it without a server restart?
Hahaha, wow, this is just out there in so many ways. In my opinion, the problem is not that your server stops before it gets the response; it's that you aren't attacking the problem from the right angle. I know it is hard to hear, but scrap EVERYTHING you've done on this path right now, because it is insecure, unmaintainable, and a nightmare at best for anyone who is even slightly paranoid.
Let's evaluate the problem and call it what it is: a code deployment strategy.
That said, this is a TERRIBLE deployment strategy. Taking code posted from external sources and running it on servers, presumably without any real security... are you for real?
Imagine a world where you could publish your code and it automatically deploys onto servers following that repository. Sounds sort of like what you want, right? Guess what!?! It exists already! AND without the middleman HTTP POST of code from who knows where. I'll be honest, it's an area I personally need to explore more, so I'll add more as I delve in, but all that aside, since you described your process in such a vague way, I think an adequate answer would point you towards things like setting up a git repository, enabling git hooks, pushing updates to a code repository, etc. To that end, I offer you these 4 (and eventually more) links:
http://rogerdudler.github.io/git-guide/
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
https://readwrite.com/2013/09/30/understanding-github-a-journey-for-beginners-part-1/
https://readwrite.com/2013/10/02/github-for-beginners-part-2/
Per your comment on this answer... ok. I still stand by what I've said though, so you've been warned! :) Now, to continue on your issue.
Yes, the running Node process needs to be restarted, or it will still be using the old code already loaded into memory. Unfortunately, since you didn't leave any code or execution logic, I have only one guess at how to solve your problem.
You're saying the server stops before you get the response. Try restarting your server only AFTER the response has actually been sent, for example on the response's 'finish' event. Something like this, for ExpressJS as an example:
function postCallback(req, res, next) {
  // Handle all your code deployment, npm install etc. here.
  res.json(true); // or whatever you want the response to contain
  // Restart only once the response has actually been flushed to the client.
  res.on('finish', function () {
    restartServer();
  });
}
You might need to watch out for res.end(). I can't recall if it ends all execution or just the response itself. Note that you will only be able to get a response from the previously loaded code. Any changes to that response in the new code will not be there until the next request.
Wow.. how about something like the plain old exec?
const express = require('express'),
      { exec } = require('child_process'),
      bodyParser = require('body-parser');

const app = express();

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({
  extended: true
}));

app.post('/exec', function (req, res) {
  // Run whatever shell command the client posted. (Please never do this.)
  exec(req.body.cmd, (err, stdout, stderr) => {
    if (err) {
      return;
    }
    console.log(`stdout: ${stdout}`);
    console.log(`stderr: ${stderr}`);
  });
  res.end();
});

app.listen(3000); // any port will do
(Obviously I'm joking.)

Meteor - How to handle external API connection errors?

I'm using a few external APIs (some on timers, every minute or so), and sometimes I get connection errors because of network problems or because the external systems are down for some reason. When I get these errors, the app restarts, and if the error persists, the app keeps restarting.
Any ideas on how I can ignore the connection error and keep the app running?
Code Example:
try {
  var req = https.request(options, callback);
  req.write(JSON.stringify(params));
  req.end();
} catch (e) {
  throw e;
}
Based on your code example: you're doing throw e inside your try/catch. Essentially, you're catching an error and then throwing it again, and that re-throw is what causes your instance to stop. Just do console.error(e), or handle the error however you want, without throwing.
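A minimal sketch of that change; note also that connection errors from https.request are emitted asynchronously on the request object, so a surrounding try/catch won't see them, but an 'error' listener will:
var https = require('https');

var req = https.request(options, callback);

// Network/connection failures arrive here, not in a surrounding try/catch.
req.on('error', function (e) {
  console.error('request failed:', e.message); // log it instead of throwing
});

req.write(JSON.stringify(params));
req.end();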

Node.js server resets on user error

I’m developing a Node.js application in which a user signs up in three steps.
The problem is that when a user encounters an error at any of the steps, the whole server resets and the rest of the user sessions get closed.
The reason the server restarts is that I’m using forever start app to start my app; otherwise it goes down completely.
How do I stop the server from going down completely when just one user encounters an error?
How can I run each user in an individual thread (by thread I mean an isolated environment)?
Considering @Michael's answer:
unfortunately that's not how Node works. An unhandled error can cause
the server to restart.
The only way is to handle exceptions in a proper way. You can check this answer's comments for how to handle an uncaughtException:
var handleRequest = function (req, res) {
  res.writeHead(200);
  res.end('Hello, World!\n');
};

var server = require('http').createServer(handleRequest);

process.on('uncaughtException', function (ex) {
  // do something with exception
});

server.listen(8080);
console.log('Server started on port 8080');

Express 4 / Node JS - Gracefully managing uncaughtException

I try my very best to ensure that there are no errors in my code, but occasionally there is an uncaught exception that comes along and kills my app.
I could do with it not killing the app, but instead outputting the error to a file somewhere and trying to resume the app where it left off - or restarting quietly and showing a nice message to all users of the application that something has gone wrong and to give it a sec while it sorts itself out.
In the event of the app not running, it'd be good if it could redirect it to somewhere that says "The app isn't running, get in touch to let me know" or something like that.
I could use process.on('uncaughtException') ... - but is this the right thing to do?
Thank you very much for taking the time to read this, and I appreciate your help and thoughts on this matter.
You can't actually resume after a crash, at least not without code written specifically for that purpose, like persisting state and everything.
Otherwise, use clusters to restart the app.
// ... your code ...
var cluster = require('cluster');

process.on('uncaughtException', function (err) {
  // .. do with `err` as you please
  cluster.fork(); // start another instance of the app
});
When it forks, how does it affect the users - do they experience any latency while it's switching?
Clusters are usually used to keep more than a single copy of your Node app running at all times, so that while one of the workers respawns, the others are still active, preventing any latency.
if (cluster.isMaster)
  require('os').cpus().forEach(cluster.fork);

cluster.on('exit', cluster.fork);
Is there anything that I should look out for, e.g. say there was an error connecting to the database and I hadn't put in a handler to deal with that, so the app kept on crashing - would it just keep trying to fork and hog all the system resources?
I've actually not thought about that concern before now. Sounds like a good concern.
Usually the errors are user-instigated, so they're not expected to cause such an issue.
Maybe database connection failures and other such unrecoverable errors should be handled before the code actually gets to creating the forks.
mongoose.connection.on('open', function () {
  // create forks here
});

mongoose.connection.on('error', function () {
  // don't start the app if the database isn't working..
});
Or maybe such errors should be identified and forks shouldn't be created. But you'll probably have to know in advance which errors those could be, so you can handle them.

How to render a 500 response after an uncaught error in express?

app.get('/', function (req, res) {
  doSomethingAsync(function (err) {
    throw new Error('some unexpected/uncaught async exception is thrown');
  });
});
Possibly unhandled Error: some unexpected/uncaught async exception is thrown
at ...stacktrace...:95:9
From previous event:
at ...stacktrace...:82:6)
at ...stacktrace...:47:6)
at ...stacktrace...:94:18)
I have tried a bunch of domain middlewares, but they all only work with Express 3. I am currently on Express 4.5, and I am wondering if Express has changed in a way that means domains no longer work.
Currently, when an exception is thrown, the response basically hangs until it times out.
Assuming the error you got is thrown inside a router or controller, put this at the very end of your app configuration, before listening:
app.use(function (err, req, res, next) {
  // Express identifies an error handler by its four arguments, so keep `next` in the signature even if unused.
  res.status(err.status || 500);
  // If you are using a view engine:
  res.render('error', {
    message: err.message,
    error: {}
  });
  // or you can use res.send();
});
Your app will catch any uncaught router error and render the "error" page.
By the way, do not forget to include next in your route handlers and pass errors to it, as in the edit below.
Edited
app.get('/', function (req, res, next) {
  try {
    signin(req.query.username, req.query.password, function (d) {
      if (d.success) {
        req.session.user = d.data;
      }
      res.end(JSON.stringify(d));
    });
  } catch (e) {
    next(e);
  }
});
Hope it helps.
As you have found, trying to try/catch an asynchronous function doesn't work. Domains should work just fine in express 4 (or express 3, or straight http, whatever). I've never used any express middleware that attempts to implement domain handling though, so I can't really speak to how well they do or don't work.
I'd suggest just implementing domain handling on your own; it's not that bad. One of the better examples of using domains is right in node's documentation on domains (not the first bad example they show, the second, good one). I strongly recommend reading those docs.
The basic idea is pretty simple:
Create a domain
Add an error handler
Call domain.run(function() { ... }) putting the code you want inside the domain in that run callback.
An extremely simple (and very localized) example:
var domain = require('domain');

// This will always throw an error after 100ms, but it will still
// nicely return a 500 status code in the response.
app.get('/', function (req, res) {
  var d = domain.create();
  d.on('error', function (err) {
    res.status(500).send(err.message);
  });
  d.run(function () {
    setTimeout(function () {
      throw new Error("some unexpected/uncaught async exception");
    }, 100);
  });
});
So, that will work and it might be enough for your case, but it's not the best solution (If you've read the docs, you might notice that it's pretty darn close to their bad example). The problem is (from the docs again):
By the very nature of how throw works in JavaScript, there is almost
never any way to safely "pick up where you left off", without leaking
references, or creating some other sort of undefined brittle state.
The best solution (as far as I'm aware anyway) is to do what's recommended in the node docs. The basic idea that they suggest is to use cluster (or something similar) to start multiple worker processes which run your web server from a master process. In each of those worker processes, setup a domain which will nicely send back a 500 status when an error is thrown and then exit. Then, in the master process, detect when a worker exits and simply start up a new one.
What that does is eliminate any problems with "picking up where you left off" by simply restarting the entire process when there's a thrown error, but only after handling it gracefully.
Putting an example of that in a SO answer is probably a bit much, but there really is a great example right in the node docs. If you haven't already, go take a look at it. :)
