Is it possible to automatically restart a Node CLI script once it has finished executing?
I have this CLI script that will run until a variable reaches a limit:
const maxExecution = 200;
let i = 0;

let task = setInterval(() => {
    i++;
    if (i >= maxExecution) {
        clearInterval(task);
    }
    // code here...
}, 5000);
This code works fine and stops the task when the i variable reaches the set limit. I'm reading this question about how to manage process.exit. Is there any event I can listen to in order to know when the script execution has reached the end?
I had this code (TypeScript, not vanilla Node.js) when I was working with Node <= 10 (possibly between 6 and 10, not quite sure); it can restart itself on demand:
import { spawn } from "child_process";

let need_restart: boolean = false;

process.on("exit", function () {
    if (need_restart) {
        // process.argv[0] is the node binary; shift() removes it and
        // leaves the script path plus its original arguments.
        spawn(process.argv.shift(), process.argv, {
            "cwd": process.cwd(),
            "detached": true,
            "stdio": "inherit"
        });
    }
});
It's part of an HTTP/WebSocket server; when all servers have closed (no more pending events), the process exits automatically. By setting need_restart to true, it will restart itself when this happens.
I haven't used Node for quite some time, so I'm not sure if this still works on later versions, but I think it's worth mentioning here.
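Related to the question about events: Node also emits a 'beforeExit' event when the event loop drains. Unlike 'exit', work scheduled from a 'beforeExit' handler keeps the process alive, so it can be another hook for detecting (or deferring) the end of execution. A minimal sketch:

// Minimal sketch: 'beforeExit' fires when the event loop is empty.
// Scheduling async work here keeps the process alive, unlike 'exit',
// where only synchronous work is possible.
process.on("beforeExit", (code) => {
    console.log("event loop drained, would exit with code", code);
});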
I am very new to Node.js and trying to develop an application which acts as a scheduler that fetches data from one ELK instance and sends the processed data to another. I am able to achieve the expected behaviour, but after completing all the processing, the scheduler job does not exit and instead waits for another scheduler job to come up.
Note: This scheduler runs every 3 minutes.
job.js
const self = module.exports = {
    async schedule() {
        if (process.env.SCHEDULER == "MinuteFrequency") {
            var timenow = moment().seconds(0).milliseconds(0).valueOf();
            var endtime = timenow - 60000;
            var starttime = endtime - 60000 * 3;
            // sendData is an async method
            reports.sendData(starttime, endtime, "SCHEDULER");
        }
    }
}
I tried various solutions such as Promise.allSettled(....., Promise.resolve(true), etc., but was not able to fix this.
As per my requirement, I want the scheduler to complete its processing and exit, so that I can save some resources, as I am planning to deploy the application using Kubernetes cron jobs.
When all your work is done, you can call process.exit() to cause your application to exit.
In this particular code, you may need to know when reports.sendData() is actually done before exiting. Just because it's an async function doesn't mean it's written properly to return a promise that resolves when its work is complete. If you want further help, show us the code for sendData() and any code that it calls too.
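For illustration, a minimal sketch assuming sendData() does return a promise that resolves once all of its work is finished (moment and reports come from the question's code):

// Minimal sketch, assuming reports.sendData() returns a promise that
// resolves once all of its work is complete.
async function schedule() {
    if (process.env.SCHEDULER == "MinuteFrequency") {
        const timenow = moment().seconds(0).milliseconds(0).valueOf();
        const endtime = timenow - 60000;
        const starttime = endtime - 60000 * 3;
        await reports.sendData(starttime, endtime, "SCHEDULER");
    }
}

schedule()
    .then(() => process.exit(0))   // all work done, exit cleanly
    .catch((err) => {
        console.error(err);
        process.exit(1);           // signal failure to the cron runner
    });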
I have made a Node.js script which checks for new entries in a MySQL database and uses socket.io to send data to the client's web browser. The script is meant to check for new entries approximately every 2 seconds. I am using Forever to keep the script running as this is hosted on a VPS.
I believe what's happening is that the for loop is looping infinitely (more on why I think that's the issue below). There are no error messages in the Forever-generated log file, and the script is "running" even after it has started to hang up. Specifically, the part of the script that hangs is that it stops accepting browser requests on port 8888 and doesn't serve the client-side socket.io JS files. I've done some troubleshooting and identified a few key components that may be causing this issue, but at the end of the day, I'm not sure why it's happening and can't seem to find a workaround.
Here is the relevant part of the code:
http.listen(8888, function() {
    console.log("Listening on 8888");
});

function checkEntry() {
    pool.getConnection(function(err, connection) {
        connection.query("SELECT * FROM `data_alert` WHERE processtime > " + (Math.floor(new Date() / 1000) - 172800) + " AND pushed IS NULL", function(err, rows) {
            connection.release();
            if (!err) {
                if (Object.keys(rows).length > 0) {
                    var x;
                    for (x = 0; x < Object.keys(rows).length; x++) {
                        connection.query("UPDATE `data_alert` SET pushed = 1 WHERE id = " + rows[x]['id'], function() {
                            connection.release();
                            io.emit('refresh feed', 'refresh');
                        });
                    }
                }
            }
        });
    });
    setTimeout(function() {
        checkEntry();
        var d = new Date();
        console.log(d.getTime());
    }, 1000);
}
checkEntry();
Just a few interesting things I've discovered while troubleshooting...
This only happens when I run the script under Forever. It works completely fine if I use a shell and just leave my terminal open.
It starts to happen after 5-30 minutes of running the script; it does not immediately hang up on the first execution of the checkEntry function.
I originally tried this with setInterval instead of setTimeout; the issue remained exactly the same.
If I remove the setInterval/setTimeout call and run the checkEntry function only once, it does not hang up.
If I take out the JavaScript for loop in the checkEntry function, the hang-ups stop (but obviously, that for loop controls necessary functionality, so I at least have to find another way of using it).
I've also tried using a for-in loop for the rows object and the performance is exactly the same.
Any ideas would be immensely helpful at this point. I started working with Node.js just recently, so there may be a glaringly obvious reason that I'm missing here.
Thank you.
So I just wanted to come back to this and address what the issue was. It took me quite some time to figure out, and it can only be explained by my own inexperience. There is a section of my script where my code contained the following:
app.get("/", (request, response) => {
// Some code to log things to the console here.
});
The issue was that I was not sending a response. The new code looks as follows and has resolved my hang-up issues:
app.get("/", (request, response) => {
// Some code to log things to the console here.
response.send("OK");
});
The issue had nothing to do with the part of the code I presented in the initial question.
I am using PM2 to manage our Node.js-based microservices platform. We wanted a dashboard where we can see the microservices' status, e.g. whether any service is taking too much CPU or memory. For that I used PM2's API and wrote the following piece of code.
function getMicroService() {
    pm2.connect(function(err) {
        if (!err) {
            // Get all processes running
            logger.info('core_module', 'Connecting to PM2 Daemon for Micro Services List');
            var dataArr = {};
            var microServices = [];
            var counter = 0;
            var curDateTime = helperLib.getDateTimeISO();
            pm2.list(function(err, process_list) {
                if (process_list.length > 0) {
                    process_list.forEach(function(process) {
                        delete process.pm2_env;
                        process.lastChecked = curDateTime;
                        microServices.push(process);
                        counter++;
                    })
                }
                if (counter == process_list.length) {
                    dataArr.event = 'microServices';
                    dataArr.data = microServices;
                    publishStats(dataArr);
                }
            });
        } else {
            logger.error('core_module', 'on Line 245: ' + err)
        }
    })
}
The above function is called every 15 seconds and displays data on the dashboard. But I noticed that this service started taking over 100% CPU, and the whole PM2 daemon went offline and stopped responding; I couldn't issue any command, e.g. pm2 stop all. I had to manually kill the processes and then start the service again. The error I extracted from the log file is:
{"message":"core_module Threw Exception: ","stack":"Error: EMFILE: too many open files, open '/root/.pm2/pm2.log'\n at Object.fs.openSync (fs.js:584:18)\n at module.exports.Client.launchDaemon (/etc/node/node_modules/pm2/lib/Client.js:207:14)\n at /etc/node/node_modules/pm2/lib/Client.js:102:10\n at /etc/node/node_modules/pm2/lib/Client.js:294:14\n at _combinedTickCallback (internal/process/next_tick.js:73:7)\n at process._tickDomainCallback (internal/process/next_tick.js:128:9)","errno":-24,"code":"EMFILE","syscall":"open","path":"/root/.pm2/pm2.log","__error_callsites":[{},{},{},{},{},{}],"level":"error","timestamp":"2017-10-20T00:49:26.826Z"}
Could anyone please advise whether the above code is right? Is calling it every 15 seconds a good approach, or how can I optimize it? Should I call pm2.disconnect() at the end of the function?
Please advise.
Regards
Habib
You need to call pm2.disconnect() at the end, otherwise you'll end up leaving all the created connections open. The pm2 API documentation says:
If your script does not exit by itself, make sure you called pm2.disconnect() at the end.
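A minimal sketch of that fix applied to the function above (logger, helperLib and publishStats come from the question's code and are elided here):

const pm2 = require('pm2');

function getMicroService() {
    pm2.connect(function(err) {
        if (err) return console.error(err);
        pm2.list(function(err, process_list) {
            // ... build microServices and call publishStats() as before ...
            pm2.disconnect(); // release the daemon connection on every call
        });
    });
}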
Node.js server with a MongoDB - one feature will generate a report JSON file from the DB, which can take a while (60 seconds and up - it has to process hundreds of thousands of entries).
We want to run this as a background task. We need to be able to start a report build process, monitor it, and abort it if the user decides to change the params and rebuild it.
What is the simplest approach with Node? I don't really want to get into the realms of separate worker servers processing jobs, message queues, etc. - we need to keep this on the same box with a fairly simple implementation.
1) Start the build as an async method, and return to the user, with socket.io reporting progress?
2) Spin off a child process for the build script?
3) Use something like https://www.npmjs.com/package/webworker-threads?
With the few approaches I've looked at, I get stuck on the same two areas:
1) How to monitor progress?
2) How to abort an existing build process if the user re-submits data?
Any pointers would be greatly appreciated...
The best would be to separate this task from your main application. That said, it'd be easy to run it in the background.
To run it in the background and monitor it without a message queue etc., the easiest would be a child_process.
You can launch a spawn job on an endpoint (or URL) called by the user.
Next, set up a socket to return live monitoring of the child process.
Add another endpoint to stop the job, with a unique id returned by step 1 (or not, depending on your concurrency needs).
Some coding ideas:
var express = require('express')
var spawn = require('child_process').spawn

var app = express()
var job = null // keeping the job in memory so we can kill it later

app.get('/save', function(req, res) {
    if (job && job.pid)
        return res.status(500).send('Job is already running').end()

    job = spawn('node', ['/path/to/save/job.js'], {
        detached: false, // if not detached and your main process dies, the child will be killed too
        stdio: [process.stdin, process.stdout, process.stderr] // those can be file streams for logs or whatever
    })

    job.on('close', function(code) {
        job = null
        // send socket information about the job ending
    })

    return res.status(201).end() // created
})

app.get('/stop', function(req, res) {
    if (!job || !job.pid)
        return res.status(404).end()

    job.kill('SIGTERM')
    // or process.kill(job.pid, 'SIGTERM')
    job = null
    return res.status(200).end()
})

app.get('/isAlive', function(req, res) {
    try {
        job.kill(0) // signal 0 tests for existence without killing
        return res.status(200).end()
    } catch (e) {
        return res.status(500).send(e).end()
    }
})
To monitor the child process you could use pidusage; we use it in PM2, for example. Add a route to monitor a job and call it every second. Don't forget to release memory when the job ends.
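A minimal sketch of such a monitoring route, reusing the app and job variables from the example above (the /monitor path is just an illustrative name):

var pidusage = require('pidusage')

// Hypothetical monitoring route; poll it every second from the client.
app.get('/monitor', function(req, res) {
    if (!job || !job.pid)
        return res.status(404).end()

    pidusage(job.pid, function(err, stats) {
        if (err)
            return res.status(500).send(err.message)
        // stats.cpu is a percentage, stats.memory is in bytes
        return res.status(200).json({ cpu: stats.cpu, memory: stats.memory })
    })
})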
You might want to check out this library, which will help you manage multiprocessing across microservices.
I need to write a script that takes an array of values and, in a multithreaded way (forks?), runs another script with a value from the array as a parameter, with a maximum number of running forks set, so it would wait for a script to finish if there are more than n running already. How do I do that?
There is a module named child_process, but I'm not sure how to get it done, as it always waits for child termination.
Basically, in PHP it would be something like this (wrote it from memory; it may contain some syntax errors):
<php
declare(ticks = 1);
$data = file('data.txt');
$max=20;
$child=0;
function sig_handler($signo) {
global $child;
switch ($signo) {
case SIGCHLD:
$child -= 1;
}
}
pcntl_signal(SIGCHLD, "sig_handler");
foreach($data as $dataline){
$dataline = trim($dataline);
while($child >= $max){
sleep(1);
}
$child++;
$pid=pcntl_fork();
if($pid){
// SOMETHING WENT WRONG? NEVER HAPPENS!
}else{
exec("php processdata.php \"$dataline\"");
exit;
}//fork
}
while($child != 0){
sleep(1);
}
?>
After the conversation in the comments, here's how to have Node execute your PHP script.
Since you're calling an external command, there's no need to create a new thread. The Node.js event loop understands that calls to external commands are async operations, and it can execute all of them at the same time.
You can see different ways of executing an external process in this SO question (the linked answer may be the best in your case).
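For illustration, a minimal sketch of the limited-concurrency pattern in Node, assuming the data.txt and processdata.php names from the question's PHP code:

const { execFile } = require('child_process')
const fs = require('fs')

const lines = fs.readFileSync('data.txt', 'utf8').split('\n').filter(Boolean)
const max = 20   // maximum number of concurrently running children
let running = 0

function next() {
    // Fill all free slots; each finished child frees one and calls next() again.
    while (running < max && lines.length > 0) {
        const value = lines.shift()
        running++
        execFile('php', ['processdata.php', value], (err, stdout) => {
            if (err) console.error(err)
            running--
            next()
        })
    }
}

next()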
However, since you're already moving everything to Node, you may even consider rewriting your "process.php" script in Node.js. Since, as you explained, that script connects to remote servers and databases and uses nslookup (which you may not really need with Node.js), you won't need any separate threads: these are all async operations that Node.js excels at performing.