Periodically restart node express server - node.js

I have an express application that predictably spikes CPU after running for a while under load. I'd like to proactively restart it every N minutes in order to avoid the spikes. It's currently running under forever, but I could use pm2 or some other process manager. Is there any process manager that can do periodic restarts? How can I go about accomplishing this with the minimum of added structure?

You can do it programmatically with PM2 using the following code:
var pm2 = require('pm2');

// Restart interval in milliseconds, e.g. every 30 minutes (adjust as needed).
var NUM_MILLI_SECONDS = 30 * 60 * 1000;

pm2.connect(function (err) {
  if (err) throw err;
  setTimeout(function worker() {
    console.log("Restarting app...");
    pm2.restart('app', function () {});
    setTimeout(worker, NUM_MILLI_SECONDS);
  }, NUM_MILLI_SECONDS);
});
This will restart the app every NUM_MILLI_SECONDS milliseconds. You could also drive the restart from a cron-style scheduler instead of chained setTimeout calls.
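For the cron approach, a minimal sketch using the node-cron package (the schedule call follows its README; 'app' must match your PM2 process name):
var cron = require('node-cron');
var pm2 = require('pm2');

pm2.connect(function (err) {
  if (err) throw err;
  // Restart the 'app' process every day at 03:00; adjust the cron expression to taste.
  cron.schedule('0 3 * * *', function () {
    pm2.restart('app', function (restartErr) {
      if (restartErr) console.error(restartErr);
    });
  });
});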

pm2 Read Metrics Programmatically

I am using pm2 to supervise a node app, and occasionally the app hangs. I need to detect when this has happened so that pm2 can restart the app. The easiest way seems to be to have the app create a metric with a timestamp and update it every few seconds; the supervisor then checks the value of the metric to see whether it is still being updated.
The problem I am having is that I am not finding any documentation on how to read the pm2/io metrics programmatically from the pm2 supervision code.
How do I read the metrics programmatically from the pm2 supervision code?
Or is there a better way to do what I am trying to do?
It's late, but here's the code:
const pm2 = require('pm2')

pm2.connect(function (err) {
  if (err) {
    console.error(err)
    process.exit(2)
  }
  pm2.list((err2, processList) => {
    if (err2) {
      console.error(err2)
      process.exit(2)
    }
    // Custom metrics registered via pm2/io show up under pm2_env.axm_monitor
    for (const eachProcess of processList) {
      console.log(eachProcess.pm2_env.axm_monitor)
    }
  })
});
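For the heartbeat idea from the question, here is a sketch of the app-side metric using the @pm2/io package (the metric name 'heartbeat' and the 5-second interval are just examples); the supervision code can then read its value from pm2_env.axm_monitor as shown above:
const io = require('@pm2/io')

// Update a timestamp metric every few seconds; if it stops advancing, the app is hung.
const heartbeat = io.metric({ name: 'heartbeat' })
setInterval(() => heartbeat.set(Date.now()), 5000)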

Write a polling service in node.js with the sails.js framework

I have a project in Sails.js and I want to write a polling service that checks some records at an interval and then sends an email. My example code is:
// config/bootstrap.js
module.exports.bootstrap = function (cb) {
  cb();

  var refresh = function () {
    setTimeout(doWork, someInterval); // someInterval: polling period in milliseconds
  };

  var doWork = function () {
    if (someCondition) { // check some record here
      sendEmail();
    }
    refresh();
  };

  doWork();
};
I use the pm2 library and start my project in cluster mode. For example:
pm2 start app.js -i 4
This command runs app.js in cluster mode with 4 processes.
The problem is that my polling service runs in every process, because it is started from the config/bootstrap.js file, and this is very bad.
My question is: how can I run my service in only one of the processes?
You can check whether the process is the cluster master and only run the script in that case.
var cluster = require('cluster');
if (cluster.isMaster) {
  // rest of your service...
}
But to me this is strange logic... You should queue your tasks in a shared DB, and when a task is polled, remove it from the queue, etc.
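Note that when pm2 itself spawns the workers (pm2 start app.js -i 4), the cluster master lives inside pm2's daemon, so cluster.isMaster may not behave the way you expect in your app processes. An alternative sketch, assuming pm2's documented NODE_APP_INSTANCE variable (a 0-based instance index) and a hypothetical startPollingService() wrapping the bootstrap logic above:
// Run the polling service only in the first pm2 cluster instance.
if (process.env.NODE_APP_INSTANCE === undefined || process.env.NODE_APP_INSTANCE === '0') {
  startPollingService(); // hypothetical function containing the setTimeout polling loop
}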

How to restart my node.js app when CPU usage reaches 100% on Amazon EC2 and the server stops

Currently I am using forever to handle crashes etc. on EC2, but I want a way to restart the app when the CPU usage on the server reaches 100%.
The way it works now is that when CPU usage reaches 100% the app stops running, and if I don't notice the alarms Amazon sends to my mail, the app stays down until I restart it manually with forever.
What I want is a way to restart the app when CPU usage reaches 90% or higher. Should I use a module other than forever, and if so, are there any suggestions?
I'd recommend reducing your CPU usage, but I use a similar trick to restart when memory usage is too high (due to a very small memory leak).
You need the "usage" module:
var usage = require('usage');
then:
var CHECK_CPU_USAGE_INTERVAL = 1000 * 60; // every minute
var HIGH_CPU_USAGE_LIMIT = 90;            // percentage

var autoRestart = setInterval(function () {
  usage.lookup(process.pid, function (err, result) {
    if (!err) {
      if (result.cpu > HIGH_CPU_USAGE_LIMIT) {
        // log
        console.log('restart due to high cpu usage');
        // restart because forever will respawn your process
        process.exit();
      }
    }
  });
}, CHECK_CPU_USAGE_INTERVAL);
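If the usage module doesn't build on your Node version, the same check can be sketched with the pidusage package instead (callback signature assumed from its README; cpu is reported as a percentage):
var pidusage = require('pidusage');

setInterval(function () {
  pidusage(process.pid, function (err, stats) {
    if (!err && stats.cpu > HIGH_CPU_USAGE_LIMIT) {
      console.log('restart due to high cpu usage');
      process.exit(); // forever will respawn the process
    }
  });
}, CHECK_CPU_USAGE_INTERVAL);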
Check out Forever:
https://github.com/nodejitsu/forever
You can use it for the exact situation described. Though I'd also try to figure out why you're hitting max CPU in the first place.

Efficient HTTP shutdown with keepalives?

This Node.js server will shut down cleanly on Ctrl+C once all connections are closed.
var http = require('http');

var app = http.createServer(function (req, res) {
  res.end('Hello');
});

process.on('SIGINT', function () {
  console.log('Closing...');
  app.close(function () {
    console.log('Closed.');
    process.exit();
  });
});

app.listen(3000);
The problem with this is that it includes keepalive connections. If you open a tab to this app in Chrome and then try to Ctrl+C it, it won't shut down for about 2 minutes, when Chrome finally releases the connection.
Is there a clean way of detecting when there are no more HTTP requests, even if some connections are still open?
By default there's no socket timeout, which means connections stay open until the client closes them. If you want to set a timeout, use socket.setTimeout.
If you try to close the server you simply can't, because there are active connections, so a graceful shutdown will hang. The only way is to set a timeout and, when it expires, kill the app.
If you have workers it's not as simple as killing the app with process.exit(), so I made a module that does exactly what you're asking: grace.
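A minimal sketch of the timeout idea, applied to the server from the question (30 seconds is an arbitrary example value):
// Destroy sockets that sit idle for 30 seconds so app.close() can complete.
app.on('connection', function (socket) {
  socket.setTimeout(30 * 1000);
  socket.on('timeout', function () {
    socket.destroy();
  });
});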
You can hack some request tracking with the finish event on response:
var reqCount = 0;

var app = http.createServer(function (req, res) {
  reqCount++;
  res.on('finish', function () { reqCount--; });
  res.end('Hello');
});
Allowing you to check whether reqCount is zero when you come to close the server.
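Building on that, here is a hedged sketch that also tracks sockets and marks them idle when their current response finishes, so the SIGINT handler can destroy idle keepalive connections immediately (sockets, isIdle and shuttingDown are illustrative names, not from the original answer):
var http = require('http');
var sockets = new Set();

var app = http.createServer(function (req, res) {
  req.socket.isIdle = false;                  // busy while a request is in flight
  res.on('finish', function () {
    req.socket.isIdle = true;                 // done; the socket may linger on keepalive
    if (app.shuttingDown) req.socket.destroy();
  });
  res.end('Hello');
});

app.on('connection', function (socket) {
  socket.isIdle = true;
  sockets.add(socket);
  socket.on('close', function () { sockets.delete(socket); });
});

process.on('SIGINT', function () {
  app.shuttingDown = true;
  app.close(function () {                     // fires once all connections are gone
    console.log('Closed.');
    process.exit();
  });
  sockets.forEach(function (socket) {
    if (socket.isIdle) socket.destroy();      // drop idle keepalive connections now
  });
});

app.listen(3000);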
The correct thing to do, though, is probably to not care about the old server and just start a new one. Usually the restart is to get new code, so you can start a fresh process without waiting for the old one to end, optionally using the child_process module to have a toplevel script managing the whole thing. Or even use the cluster module, allowing you to start the new process before you've even shut down the old one (since cluster manages balancing traffic between its child instances).
One thing I haven't actually tested very far is whether it's guaranteed safe to start a new server as soon as server.close() returns. If not, then the new server could potentially fail to bind. There's an example in the server.listen() docs about how to handle such an EADDRINUSE error.
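For reference, the retry pattern in those docs looks roughly like this (paraphrased; check the current server.listen() documentation for the authoritative version):
server.on('error', function (e) {
  if (e.code === 'EADDRINUSE') {
    console.log('Address in use, retrying...');
    setTimeout(function () {
      server.close();
      server.listen(PORT); // PORT is whatever port you are binding
    }, 1000);
  }
});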

How do I prevent node.js from crashing? try-catch doesn't work

In my experience, a PHP server would write an exception to the log or return it to the client, but Node.js just crashes. Surrounding my code with try-catch doesn't work either, since everything is done asynchronously. I would like to know what everyone else does in their production servers.
PM2
First of all, I would highly recommend installing PM2 for Node.js. PM2 is really great at handling crashes and monitoring Node apps, as well as load balancing. PM2 immediately restarts the Node app whenever it crashes, stops for any reason, or even when the server restarts. So, if someday, even after managing our code, the app crashes, PM2 can restart it immediately. For more info, see Installing and Running PM2.
Other answers here are really risky, as you can read in Node's own documentation at http://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
If you are going to use the other stated answers, read the Node docs first:
Note that uncaughtException is a very crude mechanism for exception handling and may be removed in the future
Now, coming back to our solution for preventing the app itself from crashing.
After going through the docs, I finally came up with what the Node documentation itself suggests:
Don't use uncaughtException, use domains with cluster instead. If you do use uncaughtException, restart your application after every unhandled exception!
DOMAIN with Cluster
What we actually do is send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module, since the master process can fork a new worker when a worker encounters an error. See the code below to understand what I mean.
By using Domain, and the resilience of separating our program into multiple worker processes using Cluster, we can react more appropriately, and handle errors with much greater safety.
var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster)
{
  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function (worker)
  {
    console.error('disconnect!');
    cluster.fork();
  });
}
else
{
  var domain = require('domain');
  var server = require('http').createServer(function (req, res)
  {
    var d = domain.create();
    d.on('error', function (er)
    {
      // something unexpected occurred
      console.error('error', er.stack);
      try
      {
        // make sure we close down within 30 seconds
        var killtimer = setTimeout(function ()
        {
          process.exit(1);
        }, 30000);
        // but don't keep the process open just for that!
        killtimer.unref();
        // stop taking new requests
        server.close();
        // Let the master know we're dead. This will trigger a
        // 'disconnect' in the cluster master, and then it will fork
        // a new worker.
        cluster.worker.disconnect();
        // send an error to the request that triggered the problem
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      }
      catch (er2)
      {
        // oh well, not much we can do at this point
        console.error('Error sending 500!', er2.stack);
      }
    });
    // Because req and res were created before this domain existed,
    // we need to explicitly add them.
    d.add(req);
    d.add(res);
    // Now run the handler function in the domain.
    d.run(function ()
    {
      // You'd put your fancy application logic here.
      handleRequest(req, res);
    });
  });
  server.listen(PORT);
}
Note that Domain is pending deprecation and will be removed once a replacement arrives, as stated in Node's documentation:
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.
But until that replacement is introduced, Domain with Cluster is the only good solution the Node documentation suggests.
For an in-depth understanding of Domain and Cluster, read:
https://nodejs.org/api/domain.html#domain_domain (Stability: 0 - Deprecated)
https://nodejs.org/api/cluster.html
Thanks to @Stanley Luo for sharing this wonderful in-depth explanation of Cluster and Domains:
Cluster & Domains
I put this code right under my require statements and global declarations:
process.on('uncaughtException', function (err) {
  console.error(err);
  console.log("Node NOT Exiting...");
});
It works for me. The only thing I don't like about it is that I don't get as much info as I would if I just let the thing crash.
As mentioned here, error.stack provides a more complete error message, such as the line number that caused the error:
process.on('uncaughtException', function (error) {
  console.log(error.stack);
});
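If you do keep an uncaughtException handler, the safer pattern, per the docs quoted earlier, is to log and then exit so your process manager (forever, pm2, supervisor) can respawn a clean process. A sketch:
process.on('uncaughtException', function (error) {
  console.error(error.stack);
  // Exit with a failure code; the process manager restarts a clean instance.
  process.exit(1);
});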
Try supervisor
npm install supervisor
supervisor app.js
Or you can install forever instead.
All this will do is recover your server when it crashes by restarting it.
forever can be used within the code to gracefully recover any processes that crash.
The forever docs have solid information on exit/error handling programmatically.
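For the programmatic route, a minimal sketch using the forever-monitor package (the API shown follows its README; verify it against the version you install):
var forever = require('forever-monitor');

// Restart app.js automatically, up to 10 times.
var child = new (forever.Monitor)('app.js', { max: 10, silent: false });

child.on('exit', function () {
  console.log('app.js has exited after 10 restarts');
});

child.start();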
Using try-catch may handle some errors, but in more complex situations it won't do the job, such as catching errors thrown inside async functions. Remember that in Node, any async function call can contain an operation that could crash the app.
Using uncaughtException is a workaround, but it is recognized as inefficient and is likely to be removed in future versions of Node, so don't count on it.
The ideal solution is to use domain: http://nodejs.org/api/domain.html
To make sure your app stays up and running even if your server crashes, use the following steps:
Use node cluster to fork multiple processes per core, so if one process dies, another is automatically booted up. Check out: http://nodejs.org/api/cluster.html
Use domain to catch errors in async operations instead of using try-catch or uncaughtException. I'm not saying that try-catch or uncaughtException is bad, though!
Use forever/supervisor to monitor your services.
Add a daemon to run your node app, e.g. upstart: http://upstart.ubuntu.com
Hope this helps!
Give the pm2 node module a try; it is very consistent and has great documentation. It is a production process manager for Node.js apps with a built-in load balancer. Please avoid uncaughtException for this problem.
https://github.com/Unitech/pm2
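A minimal sketch of a PM2 ecosystem file (field names follow the PM2 docs; the values are examples to adjust for your app):
// ecosystem.config.js (start it with: pm2 start ecosystem.config.js)
module.exports = {
  apps: [{
    name: 'app',
    script: './app.js',
    instances: 4,               // or 'max' to use all CPU cores
    exec_mode: 'cluster',
    max_memory_restart: '300M'  // restart a worker if it exceeds this memory
  }]
};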
Works great on restify:
server.on('uncaughtException', function (req, res, route, err) {
  log.info('******* Begin Error *******\n%s\n*******\n%s\n******* End Error *******', route, err.stack);
  if (!res.headersSent) {
    return res.send(500, { ok: false });
  }
  res.write('\n');
  res.end();
});
By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting with code 1, overriding any previously set process.exitCode.
Read more in the Node docs; adding a handler for the uncaughtException event overrides this default behavior:
process.on('uncaughtException', (err, origin) => {
  console.log(err);
});
UncaughtException is "a very crude mechanism" (so true) and domains are deprecated now. However, we still need some mechanism to catch errors around (logical) domains. The library:
https://github.com/vacuumlabs/yacol
can help you do this. With a little extra writing you can have nice domain semantics all around your code!
