Is there a way of ensuring that a child process has been killed?
I currently have the following code:
let p = child_process.spawn(app, args);
...
let timeout = false; // declared here for completeness; set by the timer below
try {
  p.kill('SIGKILL');
} catch (e) {
  console.error("Killing process exception:", e);
}
const job = setInterval(() => {
  if (p.killed || timeout === true) {
    clearInterval(job);
    callback();
  }
}, 100);
setTimeout(() => {
  console.log("Killing process timeout!");
  timeout = true;
}, 1000);
I check periodically (every 100 ms) whether the kill signal has been sent to the process and, at that moment, I assume the process has been killed; but, to make sure the process is not locked, I also set a timeout of 1 second.
The timeout often fires, whether I wait 1 second or 10 seconds.
The code above is executed on Linux; when running in WSL, everything seems to work properly.
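For reference: p.killed only records that kill() successfully sent the signal, not that the process has actually exited. A minimal sketch of a more direct check, using the standard 'exit' event instead of polling, might look like the following; app, args and callback are the same placeholders as in the snippet above.
const child_process = require('child_process');

const p = child_process.spawn(app, args); // app/args as in the snippet above

// 'exit' fires once the process itself has terminated; 'close' would
// additionally wait for the child's stdio streams to end.
p.once('exit', (code, signal) => {
  console.log(`process ended (code=${code}, signal=${signal})`);
  callback();
});

p.kill('SIGKILL');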
I have an Electron application that opens an external program (in my case Office) and has to wait for the program to be closed.
The code I wrote works well, but sometimes the child_process 'close' event fires 10 or 20 seconds after the program has closed. The code is:
const cp = require("child_process");
const child = cp.spawn('path/to/Office.exe' + ' "' + 'path/to/myFile.pptx' + '"', { shell: true });
child.on('close', function (code) {
  // do something
});
Most of the time it reacts after 1 or 2 seconds, which is fine, but sometimes it takes up to 20 seconds until I receive the 'close' event. The program itself closes quickly (according to the task manager), but Node seems to wait for something.
I also tried child.on('exit'), calling the program with cp.exec(), and passing stdio: 'ignore' to spawn, since I thought Node might be waiting for some stream from the child. None of that made a difference.
Does anybody know a safe way to speed that process up?
I have tried your code and the 'close' event triggers with a 0.5-2 s delay, which is bearable, I would say.
The 20 s delay did not occur for me, but if the problem persists on your end, you can try the approach below, which polls the spawned process's PID.
const pidExists = (pid) => {
  let pidOk = true;
  try {
    // Signal 0 sends no actual signal, but still checks whether the PID exists.
    process.kill(pid, 0);
  } catch (e) {
    pidOk = false;
  }
  return pidOk;
};
const cp = require("child_process");
// I added the detached option because we won't need the process handle
// anymore, since we're following the PID.
const child = cp.spawn('path/to/Office.exe' + ' "' + 'path/to/myFile.pptx' + '"', { shell: true, detached: true });
const officePID = child.pid; // this is the spawned process's PID
setInterval(() => {
  if (pidExists(officePID)) {
    console.log('file is still open', new Date().getTime());
  } else {
    console.log('file was closed', new Date().getTime());
    process.exit(0);
  }
}, 500);
This is a better approach, since you said the task manager shows that the program was closed.
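One extra detail worth noting (my addition, not part of the original answer): when a child is spawned with detached: true, it is common to also call unref() on it, so the parent's event loop is not kept alive just by the child handle; here the polling interval keeps the parent running either way.
// Let the parent exit independently of the child; we only track its PID.
child.unref();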
I have created a small example application in Node.js with unit tests and acceptance tests here.
Both unit and acceptance tests run inside the mocha process. The acceptance tests fork the process and basically run the server in the before() hook; the after() hook stops the process.
before((initialized) => {
  console.log('before script');
  serverProcess = child_process.fork('server.js');
  serverProcess.on('close', function (code) {
    console.log('child process exited with code ' + code);
  });
  setTimeout(() => {
    console.log('1s elapsed');
    initialized();
  }, 1000);
});
Without any delay the code works on my local gitlab-runner; on the server, however, it does not always work, so I added a delay to wait a while until the server has started.
Empirically I have found that 1 s is enough and 0.5 s is not.
However, I would like to know what I should do to make sure that the server is actually up.
Are there any solutions to start the server, execute the tests, and shut down the server that work on Linux, Windows, inside Docker and outside of it?
There is good documentation on how to communicate between forked processes.
The idea is to have the child send a message to its parent saying "I am ready!"; the parent then continues its work.
Example:
before((initialized) => {
  serverProcess = child_process.fork('server.js');
  serverProcess.on('close', function (code) {
    console.log('child process exited with code ' + code);
  });
  // We add a backup plan: if it takes too long to launch, fail with an error
  const timeout = setTimeout(() => {
    initialized(new Error('timeout'));
  }, 30000);
  // Wait for the child to send a message to us
  serverProcess.on('message', function (str) {
    if (str === 'init done') {
      clearTimeout(timeout);
      // server.js got successfully initialized
      initialized();
    }
  });
});
// Add this inside server.js, once listen() has succeeded
if (process.send) {
  process.send("init done");
}
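For completeness, here is a minimal sketch of what the server.js side might look like; the plain http server and port 3000 are hypothetical stand-ins for whatever the real server does. The point is to send the message only once the port is actually bound:
// Hypothetical server.js
const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));

server.listen(3000, () => {
  // Report readiness to the parent only after the port is bound.
  if (process.send) {
    process.send('init done');
  }
});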
I have a job that is executed once per day. My app is running on Heroku, and the dyno is restarted once a day.
So what can happen is that Heroku starts restarting the dyno during job execution.
That by itself is not a problem, as I can run the job twice per day; the problem is stopping the job in the middle of a task, when it is not in a stable state.
I would now like to pass this signal on to the job function, so I can break out of any loops and stop execution in a safe way.
I know how to catch the signal:
process
  .on('SIGTERM', shutdown('SIGTERM'))
  .on('SIGINT', shutdown('SIGINT'))
  .on('uncaughtException', shutdown('uncaughtException'));

function shutdown(signal) {
  console.log(`${signal}...`);
  return (err) => {
    if (err) console.error(err.stack || err);
    setTimeout(() => {
      console.log('...waited 5s, exiting.');
      process.exit(err ? 1 : 0);
    }, 5000).unref();
  };
}
But how do I send this signal to my job function so that I can break out of it safely?
Thank you.
So the best solution I came up with is the following.
// Manage signals
let shutDownSignal = false;

process
  .on('SIGTERM', shutdown('SIGTERM'))
  .on('SIGINT', shutdown('SIGINT'))
  .on('uncaughtException', shutdown('uncaughtException'));

function shutdown(signal) {
  return (err) => {
    shutDownSignal = true;
    console.log(`Received signal: ${signal}...`);
    if (err) console.error(err.stack || err);
    setTimeout(() => {
      console.log('...waited 15s, exiting.');
      process.exit(err ? 1 : 0);
    }, 15000).unref();
  };
}

module.exports.getShutDownSignal = function () { return shutDownSignal; };
Then, anywhere in the code, I can call getShutDownSignal() to check whether a shutdown has been initiated.
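For illustration, a minimal sketch of a job loop consulting this flag between units of work; runJob and processItem are hypothetical names, and ./shutdown stands for the module containing the snippet above:
// Hypothetical job that stops cleanly between work items.
const { getShutDownSignal } = require('./shutdown');

async function runJob(items) {
  for (const item of items) {
    if (getShutDownSignal()) {
      console.log('Shutdown requested, stopping job at a safe point.');
      return; // remaining items will be picked up by the next run
    }
    await processItem(item); // hypothetical unit of work
  }
}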
One more thing: it is necessary to put a Procfile in the app root containing
web: node index.js
(or app.js, depending on what you are using).
This is necessary so that the SIGTERM and SIGINT signals are delivered correctly to node (if you start the app through npm, for example, npm will not forward these signals correctly). More about this in the Heroku docs.
Maybe this will be useful for someone.
What I want to do is run a command line script in a separate process, without waiting for the result, when an endpoint in my Express app is hit.
Right now I am using the child_process’s spawn function and it is working, but if the Node server were to quit, the child script would quit as well. I need to have the child script run to completion even if the server quits.
I don’t need access to stdout or anything from the child script. I just need a way to basically “fire and forget”
Is there any way to do this with spawn that I may be missing? Or is there another way I should be going about this?
Thanks in advance for any guidance!
What you want here is the detached option of spawn. Setting this option allows the sub-process to continue running even after the main process that called spawn has terminated.
Quoting the documentation:
On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled for a child process, it cannot be disabled.
On non-Windows platforms, if options.detached is set to true, the child process will be made the leader of a new process group and session. Note that child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.
Basically this means that whatever you "launch" keeps running until it actually terminates itself. As it is 'detached', there is nothing that "ties" the sub-process to the execution of the parent from which it was spawned.
Example:
listing of sub.js:
(async function() {
  try {
    await new Promise((resolve, reject) => {
      let i = 0;
      let ival = setInterval(() => {
        i++;
        console.log('Run ', i);
        if (i === 5) {
          clearInterval(ival);
          resolve();
        }
      }, 2000);
    });
  } catch (e) {
    console.error(e);
  } finally {
    process.exit();
  }
})();
listing of main.js:
const fs = require('fs');
const { spawn } = require('child_process');

(async function() {
  try {
    const out = fs.openSync('./out.log', 'a');
    const err = fs.openSync('./out.log', 'a');

    console.log('spawn sub');
    const sub = spawn(process.argv[0], ['sub.js'], {
      detached: true, // this removes ties to the parent
      stdio: [ 'ignore', out, err ]
    });
    sub.unref();

    console.log('waiting..');
    await new Promise((resolve, reject) =>
      setTimeout(() => resolve(), 3000)
    );
    console.log('exiting main..');
  } catch (e) {
    console.error(e);
  } finally {
    process.exit();
  }
})();
The basics there are that the sub.js listing is going to output every 2 seconds for 5 iterations. The main.js is going to "spawn" this process as detached, then wait for 3 seconds and terminate itself.
Though it's not really needed, for demonstration purposes we are setting up the spawned sub-process to redirect its output (both stdout and stderr) to a file named out.log in the same directory.
What you see here is that the main listing does its job and spawns the new process, then terminates after 3 seconds. At this time the sub-process will only have output 1 line, but it will continue to run and produce output to the redirected file for another 7 seconds, despite the main process having been terminated.
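Condensed down, the whole fire-and-forget recipe is three things: detached: true, an stdio setting that does not pipe back to the parent, and unref(). A minimal sketch (script.js is a placeholder for your actual script):
const { spawn } = require('child_process');

const sub = spawn(process.argv[0], ['script.js'], { // placeholder script
  detached: true,  // new process group: survives the parent's exit
  stdio: 'ignore'  // no shared pipes tying the two processes together
});
sub.unref();       // the parent's event loop no longer waits on the child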
I'm starting to learn and use Node and I like it, but I'm not really sure how certain features work. Maybe you can help me resolve one such issue:
I want to spawn local scripts and programs from my Node server upon REST commands. Looking at the child_process library I saw the example below of how to spawn a child process and add some pipes/event handlers to it.
var spawn = require('child_process').spawn,
    ps = spawn('ps', ['ax']),
    grep = spawn('grep', ['ssh']);

ps.stdout.on('data', function (data) {
  grep.stdin.write(data);
});

ps.stderr.on('data', function (data) {
  console.log('ps stderr: ' + data);
});

ps.on('close', function (code) {
  if (code !== 0) {
    console.log('ps process exited with code ' + code);
  }
  grep.stdin.end();
});

grep.stdout.on('data', function (data) {
  console.log('' + data);
});

grep.stderr.on('data', function (data) {
  console.log('grep stderr: ' + data);
});

grep.on('close', function (code) {
  if (code !== 0) {
    console.log('grep process exited with code ' + code);
  }
});
What's weird to me is that I don't understand how I can be guaranteed that the event handler code will be registered before the program starts to run. It's not like there's a 'resume' function that you call to start up the child. Isn't this a race condition? Granted, the window would be minuscule and would almost never be hit, because it's such a short snippet of code afterwards; but still, if it is a race, I'd rather not code it this way, out of good habits.
So:
1) If it's not a race condition, why not?
2) If it is a race condition, how could I write it the right way?
Thanks for your time!
Given the slight conflict and ambiguity in the accepted answer's comments, the sample and output below tell me two things:
The child process (referring to the node object returned by spawn) emits no events even though the real underlying process is live / executing.
The pipes for the IPC are set up before the child process is executed.
Both are obvious. The conflict is with respect to the interpretation of the OP's question:
Actually 'yes', this is the epitome of a data race if one considers the real child process's side effects. But 'no', there is no data race as far as the IPC pipe plumbing is concerned: the data is written to a buffer and retrieved as a (bigger) blob as and when (as already well described) the current context completes, allowing the event loop to continue.
The first data event seen below pushes not 1 but 5 chunks written to stdout by the child process while we were blocking, thus nothing is lost.
sample:
const { spawn } = require('child_process');

let t = () => (new Date()).toTimeString().split(' ')[0];

let p = new Promise(function (resolve, reject) {
  console.log(`[${t()}|info] spawning`);
  let cp = spawn('bash', ['-c', 'for x in `seq 1 1 10`; do printf "$x\n"; sleep 1; done']);
  let resolved = false;
  if (cp === undefined)
    reject();
  cp.on('error', (err) => {
    console.log(`error: ${err}`);
    reject(err);
  });
  cp.stdout.on('data', (data) => {
    if (!resolved) {
      console.log(`[${t()}|info] spawn succeeded`);
      resolved = true;
      resolve();
    }
    process.stdout.write(`[${t()}|data] ${data}`);
  });
  let ts = parseInt(Date.now() / 1000);
  while (parseInt(Date.now() / 1000) - ts < 5) {
    // waste some cycles in the current context
    ts--; ts++;
  }
  console.log(`[${t()}|info] synchronous time wasted`);
});

Promise.resolve(p);
output:
[18:54:18|info] spawning
[18:54:23|info] synchronous time wasted
[18:54:23|info] spawn succeeded
[18:54:23|data] 1
2
3
4
5
[18:54:23|data] 6
[18:54:24|data] 7
[18:54:25|data] 8
[18:54:26|data] 9
[18:54:27|data] 10
It is not a race condition. Node.js is single-threaded and handles events on a first-come, first-served basis: new events are put at the end of the event loop's queue. Node will execute your code synchronously, part of which involves setting up event emitters. When these event emitters emit events, those events are put at the end of the queue, and will not be handled until Node finishes executing whatever piece of code it is currently working on, which happens to be the same code that registers the listener. Therefore, the listener will always be registered before the event is handled.
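To make that concrete, here is a minimal sketch of the guarantee (my own illustration, assuming a POSIX system where echo is available): even if the child finishes before the listener line runs, the 'data' event is only dispatched on a later turn of the event loop, so a synchronously registered listener cannot miss it.
const { spawn } = require('child_process');

const child = spawn('echo', ['hello']);

// Even if `echo` has already written its output and exited by now, the
// chunk sits in a buffer until the current synchronous code finishes;
// only then does the event loop dispatch 'data' to this listener.
child.stdout.on('data', (d) => console.log('got:', d.toString()));
child.on('close', (code) => console.log('closed with code', code));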