pm2 Read Metrics Programmatically - node.js

I am using pm2 to supervise a Node app, and occasionally the app hangs. I need to detect that this has happened so that pm2 can restart the app. The easiest way seems to be to create a metric containing a timestamp and update it from the app every few seconds; the supervisor then checks the value of the metric to see whether it is still being updated.
The problem I am having is that I am not finding any documentation on how to read the pm2/io metrics programmatically from the pm2 supervision code.
How do I read the metrics programmatically from the pm2 supervision code?
Or is there a better way to do what I am trying to do?

It's late, but here's the code:
const pm2 = require('pm2')

pm2.connect(function (err) {
  if (err) {
    console.error(err)
    process.exit(2)
  }
  pm2.list((err2, processList) => {
    if (err2) {
      console.error(err2)
      process.exit(2)
    }
    for (const eachProcess of processList) {
      // Custom metrics registered via @pm2/io appear under axm_monitor.
      console.log(eachProcess.pm2_env.axm_monitor)
    }
  })
})
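On the app side, the timestamp metric described in the question can be published with @pm2/io. A minimal sketch, assuming the @pm2/io package; the metric name 'heartbeat' and the 5-second interval are made-up examples:

const io = require('@pm2/io')

// Hypothetical heartbeat metric; the supervisor reads it from
// pm2_env.axm_monitor as shown above and can restart the app if it goes stale.
const heartbeat = io.metric({
  name: 'heartbeat'
})

setInterval(function () {
  heartbeat.set(Date.now())
}, 5000)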

Related

PWA with Node.js, Registering Service Worker, Navigator is not defined

I am trying to turn my existing web app into a PWA. I am using Node.js; the structure of my project was generated with Express. The server is currently an HTTPS server, but the certificate is not trusted yet. I tried to register a service worker in my app.js, but when I run npm start I get the following error: navigator is not defined.
Here is the code I am using to register the service worker:
registerSW();

function registerSW() {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js')
      .then((reg) => {
        console.log('Service worker registered -->', reg);
      }, (err) => {
        console.error('Service worker not registered -->', err);
      });
  }
}
I am unsure which information is needed to solve the error. If you need more context I will gladly provide it.
Thank you for your answers!
Carina

Node.js Cluster Shared Cache

I'm using node-cache to create a local cache. However, when running the application with PM2, which creates an application cluster, the cache is created multiple times, once for each process. This isn't too much of a problem, as the cached data is small, so memory isn't the issue.
The real problem is that I have an API call to my application to flush the cache, but when calling this API it will only flush the cache for the particular process that handles the call.
Is there a way to signal all workers to perform a function?
I did think about using Redis to cache instead, as that would make it simpler to have only the one cache. The problem I have with Redis is that I'm not sure of the best way to scale it: I currently have 50 applications and wouldn't want to set up a new Redis database for each one. The alternative was to use ioredis and its transparent key prefixing for each application, but this could cause security vulnerabilities if one application were to accidentally read data from another client's application. And I don't believe there is a way to delete all keys just for a particular prefix (i.e. one app/client), as FLUSHALL will remove all keys (though see the sketch below this question).
What are best practices for sharing a cache across clustered Node instances when there are also many instances of the application? Think SaaS application.
Currently, my workaround for this issue is using node-cron to clear the cache every 15 minutes. However, there are items in the cache that don't really ever change, and there are other items which should be updated as soon as an external tool signals the application to flush the cache via an API call.
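As an aside on the Redis concern above: prefix-scoped deletion is possible without FLUSHALL by scanning for matching keys and deleting them. A minimal sketch with ioredis, where the 'app1:' prefix and the flushPrefix helper are made-up examples:

const Redis = require('ioredis');
const redis = new Redis();

// Delete only one application's keys by scanning for its prefix,
// instead of FLUSHALL, which would wipe every application's data.
function flushPrefix(prefix, done) {
  const stream = redis.scanStream({ match: prefix + '*' });
  stream.on('data', function (keys) {
    if (keys.length) redis.del(...keys);
  });
  stream.on('end', done);
}

// e.g. flushPrefix('app1:', () => console.log('app1 keys flushed'));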
For anyone looking at this, for my use case, the best method was to use IPC.
I implemented an IPC messenger to pass messages to all processes. I read the process name from the pm2 config file (app.json) to ensure the message is sent to the correct application.
// Sender
// The sender can run inside or outside of pm2
var pm2 = require('pm2');
var cfg = require('../app.json');

exports.IPCSend = function (topic, message) {
  pm2.connect(function () {
    // Find the IDs of who you want to send to
    pm2.list(function (err, processes) {
      for (var i in processes) {
        if (processes[i].name == cfg.apps[0].name) {
          console.log('Sending Message To Id:', processes[i].pm_id, 'Name:', processes[i].name);
          pm2.sendDataToProcessId(processes[i].pm_id, {
            data: {
              message: message
            },
            topic: topic
          }, function (err, res) {
            console.log(err, res);
          });
        }
      }
    });
  });
};

// Receiver
// No need to require('pm2'); however, the receiver must be running inside of pm2
process.on('message', function (packet) {
  console.log(packet);
});
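To tie this back to the cache problem in the question, the receiver can switch on the topic and flush its local cache. A minimal sketch, assuming node-cache; the 'cache:flush' topic name is a made-up example:

var NodeCache = require('node-cache');
var cache = new NodeCache();

// Each worker flushes its own local cache when the IPC message arrives.
process.on('message', function (packet) {
  if (packet.topic === 'cache:flush') {
    cache.flushAll();
    console.log('Local cache flushed in worker', process.pid);
  }
});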

Periodically restart node express server

I have an express application that predictably spikes CPU after running for a while under load. I'd like to proactively restart it every N minutes in order to avoid the spikes. It's currently running under forever, but I could use pm2 or some other process manager. Is there any process manager that can do periodic restarts? How can I go about accomplishing this with the minimum of added structure?
You can do it programmatically with PM2 using the following code.
var pm2 = require('pm2');

pm2.connect(function (err) {
  if (err) throw err;
  // NUM_MILLI_SECONDS is a placeholder for your restart interval.
  setTimeout(function worker() {
    console.log("Restarting app...");
    pm2.restart('app', function () {});
    setTimeout(worker, NUM_MILLI_SECONDS);
  }, NUM_MILLI_SECONDS);
});
This will restart the app every given number of milliseconds. There is also a cron library you can use; a sketch of that approach follows.
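For the cron route, a minimal sketch assuming the node-cron package and an app registered with pm2 under the name 'app'; the schedule is a made-up example that restarts at the top of every hour:

var cron = require('node-cron');
var pm2 = require('pm2');

pm2.connect(function (err) {
  if (err) throw err;
  // '0 * * * *' fires at minute 0 of every hour.
  cron.schedule('0 * * * *', function () {
    console.log('Scheduled restart...');
    pm2.restart('app', function (restartErr) {
      if (restartErr) console.error(restartErr);
    });
  });
});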

Node.js on Azure: server not starting before the first request

I have a weird problem: when the server starts listening, I run the following function:
app.listen(process.env.PORT || 3000, async function () {
  await db.init().catch(err => {
    console.error(err);
    console.error("Shutting down because there was an error setting up the database.");
    process.exit(1);
  });
  await recordsDb.init().catch(err => {
    console.error(err);
    console.error("Shutting down because there was an error setting up the records database.");
    process.exit(1);
  });
  // this db object will be used by controllers..
  app.set('db', db);
});
The problem is that the function doesn't run before the first request arrives (and so the first request always fails). Am I missing something?
Thank you!
You will need to first scale up your Web App Service plan, then you can enable Always On. By default, apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time. If your app runs continuous WebJobs or runs WebJobs triggered using a CRON expression, you should enable Always On, or the web jobs may not run reliably. Free Azure web apps do not support Always On.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-configure
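If you manage the app from the Azure CLI, Always On can also be enabled there. A sketch with placeholder resource names (verify the flag against your CLI version):

az webapp config set --resource-group <resource-group> --name <app-name> --always-on true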

How do I prevent node.js from crashing? try-catch doesn't work

From my experience, a PHP server would log the exception or report it to the client, but Node.js simply crashes. Surrounding my code with try-catch doesn't work either, since everything is done asynchronously. I would like to know what everyone else does on their production servers.
PM2
First of all, I would highly recommend installing PM2 for Node.js. PM2 is really great at handling crashes and monitoring Node apps, as well as load balancing. PM2 immediately restarts the Node app whenever it crashes, stops for any reason, or even when the server restarts. So if, someday, even after managing our code, the app crashes, PM2 can restart it immediately. For more info, see Installing and Running PM2.
Other answers are really insane, as you can read in Node's own documentation at http://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
If you are using the other stated answers, read the Node docs:
Note that uncaughtException is a very crude mechanism for exception handling and may be removed in the future
Now, coming back to our solution for preventing the app itself from crashing.
So after going through the documentation, I finally came up with what the Node documentation itself suggests:
Don't use uncaughtException, use domains with cluster instead. If you do use uncaughtException, restart your application after every unhandled exception!
DOMAIN with Cluster
What we actually do is send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module, since the master process can fork a new worker when a worker encounters an error. See the code below to understand what I mean.
By using Domain, and the resilience of separating our program into multiple worker processes using Cluster, we can react more appropriately, and handle errors with much greater safety.
var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster)
{
  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function (worker)
  {
    console.error('disconnect!');
    cluster.fork();
  });
}
else
{
  var domain = require('domain');

  var server = require('http').createServer(function (req, res)
  {
    var d = domain.create();
    d.on('error', function (er)
    {
      // something unexpected occurred
      console.error('error', er.stack);
      try
      {
        // make sure we close down within 30 seconds
        var killtimer = setTimeout(function ()
        {
          process.exit(1);
        }, 30000);
        // But don't keep the process open just for that!
        killtimer.unref();
        // stop taking new requests
        server.close();
        // Let the master know we're dead. This will trigger a
        // 'disconnect' in the cluster master, and then it will fork
        // a new worker.
        cluster.worker.disconnect();
        // send an error to the request that triggered the problem
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      }
      catch (er2)
      {
        // oh well, not much we can do at this point
        console.error('Error sending 500!', er2.stack);
      }
    });
    // Because req and res were created before this domain existed,
    // we need to explicitly add them.
    d.add(req);
    d.add(res);
    // Now run the handler function in the domain.
    d.run(function ()
    {
      // You'd put your fancy application logic here.
      handleRequest(req, res);
    });
  });
  server.listen(PORT);
}
Note that Domain is pending deprecation and will be removed once a replacement arrives, as stated in Node's documentation:
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.
But until a replacement is introduced, Domain with Cluster is the only good solution the Node documentation suggests.
For an in-depth understanding of Domain and Cluster, read:
https://nodejs.org/api/domain.html#domain_domain (Stability: 0 - Deprecated)
https://nodejs.org/api/cluster.html
Thanks to @Stanley Luo for sharing this wonderful in-depth explanation of Cluster and Domains:
Cluster & Domains
I put this code right under my require statements and global declarations:
process.on('uncaughtException', function (err) {
  console.error(err);
  console.log("Node NOT Exiting...");
});
Works for me. The only thing I don't like about it is that I don't get as much info as I would if I just let the thing crash.
As mentioned here, error.stack provides a more complete error message, such as the line number that caused the error:
process.on('uncaughtException', function (error) {
  console.log(error.stack);
});
Try supervisor
npm install supervisor
supervisor app.js
Or you can install forever instead.
All this will do is recover your server when it crashes by restarting it.
forever can be used within the code to gracefully recover any processes that crash.
The forever docs have solid information on exit/error handling programmatically.
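For the programmatic route, a minimal sketch assuming the forever-monitor package; the max value of 3 restarts is a made-up example:

var forever = require('forever-monitor');

// Restart app.js whenever it crashes, giving up after 3 attempts.
var child = new forever.Monitor('app.js', {
  max: 3,
  silent: false
});

child.on('exit', function () {
  console.log('app.js has exited after 3 restarts');
});

child.start();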
Using try-catch may solve some uncaught errors, but in complex situations it won't do the job, for example catching errors thrown from async functions. Remember that in Node, any async function call can contain a potentially app-crashing operation.
Using uncaughtException is a workaround, but it is recognized as inefficient and is likely to be removed in future versions of Node, so don't count on it.
The ideal solution is to use domain: http://nodejs.org/api/domain.html
To make sure your app is up and running even if it crashes, use the following steps:
Use node cluster to fork multiple processes per core, so if one process dies, another will boot up automatically. Check out: http://nodejs.org/api/cluster.html
Use domain to catch async operations instead of using try-catch or uncaughtException. I'm not saying that try-catch or uncaughtException is bad, though!
Use forever/supervisor to monitor your services.
Add a daemon to run your node app: http://upstart.ubuntu.com
Hope this helps!
Give the pm2 node module a try; it is very consistent and has great documentation. It is a production process manager for Node.js apps with a built-in load balancer. Please avoid uncaughtException for this problem.
https://github.com/Unitech/pm2
Works great on restify:
server.on('uncaughtException', function (req, res, route, err) {
  log.info('******* Begin Error *******\n%s\n*******\n%s\n******* End Error *******', route, err.stack);
  if (!res.headersSent) {
    return res.send(500, { ok: false });
  }
  res.write('\n');
  res.end();
});
By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting with code 1, overriding any previously set process.exitCode.
Know more in the Node.js process documentation.
process.on('uncaughtException', (err, origin) => {
  console.log(err);
});
uncaughtException is "a very crude mechanism" (so true) and domains are deprecated now. However, we still need some mechanism to catch errors around (logical) domains. The library
https://github.com/vacuumlabs/yacol
can help you do this. With a little extra writing you can have nice domain semantics all around your code!
