I need to verify that a child_process has successfully been killed because I cannot execute the next action if that process is still alive.
var proc = require('child_process');
var prog = proc.spawn('myprog', ['--option', 'value']);
prog.stdout.on('data', function(data) {
  // Do something
});
Somewhere else in the code, when a certain event occurs and a certain condition is met, I need to kill prog:
prog.kill('SIGHUP');
// Only when the process has successfully been killed execute next
// Code...
Since kill is presumably asynchronous, I am using q. I would like to wrap kill with q, but kill does not take a callback that is executed once the signal has been processed.
How can I do this?
Possible idea
What if I send a message to prog, and prog kills itself when it receives the message? How can I tell a process to kill itself?
Wouldn't child_process.exec() with the killSignal option and a callback fit your needs?
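For what it's worth, here is one way to do it with q (a sketch of mine, not from the question; killAndWait is just an illustrative helper name): the child emits an 'exit' event once it has actually terminated, so you can resolve a promise from that event.

var Q = require('q');

function killAndWait(child, signal) {
  var deferred = Q.defer();
  // 'exit' fires once the process has actually terminated.
  child.once('exit', function(code, sig) {
    deferred.resolve({ code: code, signal: sig });
  });
  child.kill(signal);
  return deferred.promise;
}

killAndWait(prog, 'SIGHUP').then(function() {
  // Only runs after prog has exited
  // Code...
});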
Related
In the documentation for Node's Child Process spawn() function, and in examples I've seen elsewhere, the pattern is to call the spawn() function, and then to set up a bunch of handlers on the returned ChildProcess object. For instance, here is the first example of spawn() given on that documentation page:
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});
ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
The spawn() function itself is called on the second line. My understanding is that spawn() starts a child process asynchronously. From the documentation:
The child_process.spawn() method spawns a new process using the given
command, with command line arguments in args.
However, the following lines of the script above go on to set up various handlers for the process, so it's assuming that the process hasn't actually started (and potentially finished) between the time spawn() is called on line 2 and the other stuff happens on the subsequent lines. I know JavaScript/Node is single threaded. However, the operating system is not single threaded, and naively one would read that spawn() call to be telling the operating system to spawn the process right now (at which point, with unfortunate timing, the OS could suspend the parent Node process and run/complete the child process before the next line of the Node code is executed).
But it must be that the process doesn't actually get spawned until the current JavaScript function completes (or more generally the current JavaScript event handler that called the current function completes), right?
That seems like a pretty important thing to say. Why doesn't it say that in the Child Process documentation page? Is there some overriding Node principle that makes it unnecessary to say that explicitly?
The spawning of the new process starts immediately (it's handed over to the OS to actually fire up the process and get it going). Starting the new process with .spawn() is asynchronous and non-blocking. So, it will initiate the operation with the OS and immediately return. You might think that that's why it's OK to set up event handlers after it returns (because the process hasn't yet finished starting). Well, yes and no. It likely hasn't yet finished starting the new process, but that isn't the main reason why it's OK.
It's OK, because node.js runs all its events through a single threaded event queue. Thus no events from the newly spawned process can be processed until after your code finishes executing and returns control back to the system. Only then can it process the next event in the event queue and trigger one of the events you are registering handlers for.
Or, said another way, none of the events from the other process are pre-emptive. They won't/can't interrupt your existing Javascript code. So, since you're still running your Javascript code, those events can't get run yet. Instead, they sit in the event queue until your Javascript code finishes and then the interpreter can go get the next event from the event queue and run the callback associated with it. Likewise, that callback runs until it returns back to the interpreter and then the interpreter can get the next event and run its callback and so on...
That's why node.js is called an event-driven system.
As such, it's perfectly fine to do this type of structure:
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});
ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
None of those data or close events can execute their callbacks until after your code is done and returns control back to the system. So, it's perfectly safe to set up those event handlers like you are. Even if the newly spawned process was running and generating events right away, those events will just sit in the event queue until your Javascript finishes what it is doing (which includes setting up your event handlers).
Now, if you delayed setting up the event handlers until some future tick of the event loop (as shown below) with something like a setTimeout(), then you could miss some events:
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);
setTimeout(() => {
  ls.stdout.on('data', (data) => {
    console.log(`stdout: ${data}`);
  });
  ls.stderr.on('data', (data) => {
    console.error(`stderr: ${data}`);
  });
  ls.on('close', (code) => {
    console.log(`child process exited with code ${code}`);
  });
}, 10);
Here you are not setting up the event handlers immediately as part of the same tick of the event loop, but after a short delay. Therefore some events could get processed from the event loop before you install your event handlers and you could miss some of these events. Obviously, you would never do it this way (on purpose), but I just wanted to show that code running on the same tick of the event loop does not have a problem, but code running on some future tick of the event loop could have a problem missing events.
This is to follow up on jfriend00's answer, to explain what it helped me understand, in case it helps someone else. I knew about the event-driven nature of JavaScript/Node. What jfriend00's explanation made clear to me is the idea that an event can happen and Node can be aware that it happened, but it doesn't actually decide which handlers to tell about that event until the next tick. For instance, if the spawn() call fails outright (e.g., command does not exist), Node obviously knows that immediately. My thought was that it would then immediately queue the appropriate handlers to run on the next tick. But what I now understand is that it puts the "raw event" (i.e., the fact that the spawn failed, with whatever details about that) in its queue, and then on the next tick it determines and calls the appropriate handlers. And the same is true for other events like receiving output from the process, etc. The event is saved but the appropriate handlers for the event are only determined when the next tick runs, so handlers assigned on the previous tick, after spawn(), will get called.
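To make that concrete, here is a small illustration (mine, not from either answer): even when the command does not exist, an 'error' handler registered after spawn() returns still catches the failure, because the event is only delivered on a later tick of the event loop.

const { spawn } = require('child_process');

const bogus = spawn('some-command-that-does-not-exist');

// Registered after spawn() has returned, yet it still sees the failure,
// because the ENOENT error is only emitted on a later tick.
bogus.on('error', (err) => {
  console.error(`spawn failed: ${err.code}`); // typically 'ENOENT'
});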
Is there a way to kill the child_process.exec() started by a previous request from a new request?
I have code like this:
var proc = require('child_process').exec('ffmpeg -i a.mp4 b.avi');
The scenario is like this:
request comes in -> check whether exec() is running -> if it is running, kill it -> run a new exec() -> return!
Is it possible to kill this running process from a new HTTP request?
Is there a way to set an app status in Node.js, set a flag, and then check the flag to stop the process?
In my opinion, you can kill a child process in two ways.
Solution 1:
Use the child process object's kill() function:
proc.kill();
Solution 2:
Get the PID from the child process object and kill it with the Node.js process module anywhere in your application (for example, inside an HTTP request handler):
// Get process's pid and save it somewhere or global variable
let pid = proc.pid;
When the next HTTP request comes in, you can kill it with:
// Use node process to kill it anywhere by pid
process.kill(pid);
For your information, if you want to check whether a child process is running, have a look at my question here: Nodejs how to check independently a process is running by PID?
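Putting the two together, a rough sketch (mine, not from the original answer) of the request flow described in the question could look like this, keeping the current child in a module-level variable:

const { exec } = require('child_process');
const http = require('http');

let current = null; // the child started by the previous request, if any

http.createServer((req, res) => {
  if (current) {
    current.kill(); // safe even if it has already exited
  }
  const child = exec('ffmpeg -i a.mp4 b.avi', (err) => {
    // Runs when this conversion finishes or is killed.
    if (current === child) current = null;
  });
  current = child;
  res.end('restarted conversion\n');
}).listen(3000);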
I tried to spawn a child process - vvp (https://linux.die.net/man/1/vvp). At a certain time, I need to send CTRL+C to that process.
I am expecting the simulation to be interrupted and to get an interactive prompt. After that, I can continue the simulation by sending a command to the child process.
So, I tried something like this:
var child = require('child_process');
var fs = require('fs');
var vcdGen = child.spawn('vvp', ['qqq'], {});
vcdGen.stdout.on('data', function(data) {
  console.log(data.toString());
});

setTimeout(function() {
  vcdGen.kill('SIGINT');
}, 400);
In that case, the child process was stopped.
I also tried vcdGen.stdin.write('\x03') instead of vcdGen.kill('SIGINT'), but it doesn't work.
Maybe it's because of Windows?
Is there any way to achieve the same behaviour as I got in cmd?
kill only really supports a rude process kill on Windows - the application signal model in Windows and *nix isn't compatible. You can't pass Ctrl+C through standard input, because it never comes through standard input - it's a function of the console subsystem (and thus you can only use it if the process has an attached console). It creates a new thread in the child process to do its work.
There's no supported way to do this programmatically. It's a feature for the user, not the applications. The only way to do this would be to do the same thing the console subsystem does - create a new thread in the target application and let it do the signalling. But the best way would be to simply use coöperative signalling instead - though that of course requires you to change the target application to understand the signal.
If you want to go the entirely unsupported route, have a look at https://stackoverflow.com/a/1179124/3032289.
If you want to find a middle ground, there's a way to send a signal to yourself, of course. Which also means that you can send Ctrl+C to a process if your consoles are attached. Needless to say, this is very tricky - you'd probably want to create a native host process that does nothing but create a console and run the actual program you want to run. Your host process would then listen for an event, and when the event is signalled, call GenerateConsoleCtrlEvent.
I have the following node.js code:
var spawn = require('child_process').spawn;

var testProcess = spawn(item.testCommand, [], {
  cwd: process.cwd(),
  stdio: ['ignore', process.stdout, process.stderr]
});

testProcess.on('close', function(code) {
  console.log('test');
});
waitpid(testProcess.pid);
testProcess.kill();
However, the close handler never gets called.
The end result I am looking for is that I spawn a process and the script waits for that child process to finish (which waitpid() is doing correctly). I want the output/err of the child process to be displayed on the screen (which the stdio config is doing correctly). I also want to run some code when the child process closes, which I was going to do in the close event (I also tried exit), but it does not fire.
Why is the event not firing?
http://nodejs.org/api/process.html
Note that even though the name of this function is process.kill, it is really just a signal sender, like the kill system call. The signal sent may do something other than kill the target process.
You can specify the signal when calling kill().
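For illustration (not in the original answer): kill() sends SIGTERM by default, and a different signal name can be passed explicitly.

testProcess.kill();          // sends SIGTERM by default
testProcess.kill('SIGKILL'); // force-kill; cannot be caught or ignored by the child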
Looking at waitpid(), I found out that it returns an object with the exit code. I changed my code so that I perform certain actions based on the value of that exit code.
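For comparison, here is a non-blocking sketch (mine, not from the original post, reusing item.testCommand from the question) that lets the event loop deliver the exit instead of blocking on waitpid(), so the 'close' handler does fire:

var spawn = require('child_process').spawn;

var testProcess = spawn(item.testCommand, [], {
  cwd: process.cwd(),
  stdio: ['ignore', process.stdout, process.stderr]
});

testProcess.on('close', function(code) {
  // Fires once the child has exited and its stdio streams have closed.
  console.log('child exited with code ' + code);
  // ...perform the follow-up work here instead of after waitpid()...
});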
I will have a parent process that is used to handle webserver restarts. It will signal the child to stop listening for new requests, the child will signal the parent that it has stopped listening, then the parent will signal the new child that it can start listening. In this way, we can accomplish less than 100ms down time for a restart of that level (I have a zero-downtime grandchild restart also, but that is not always enough of a restart).
The service manager will kill the parent when it is time for shutdown. How can the child detect that the parent has ended?
The signals are sent using stdin and stdout of the child process. Perhaps I can detect the end of the stdin stream? I am hoping to avoid a polling interval. Also, I would like the detection to be really quick if possible.
A simpler solution could be to register for 'disconnect' in the child process:
process.on('disconnect', function() {
  console.log('parent exited');
  process.exit();
});
This answer just provides an example of the node-ffi solution that entropo proposed above (as mentioned, it will work on Linux):
This is the parent process; it spawns the child and then exits after 5 seconds:
var spawn = require('child_process').spawn;
var node = spawn('node', [__dirname + '/child.js']);
setTimeout(function(){process.exit(0)}, 5000);
This is the child process (located in child.js):
var FFI = require('node-ffi');
var current = new FFI.Library(null, {"prctl": ["int32", ["int32", "uint32"]]})
//1: PR_SET_PDEATHSIG, 15: SIGTERM
var returned = current.prctl(1,15);
process.on('SIGTERM', function() {
  // do something interesting
  process.exit(1);
});

var doNotExit = function() {
  return true;
};

setInterval(doNotExit, 500);
Without the current.prctl(1, 15) call, the child would run forever even if the parent dies. With it, the child is signaled with SIGTERM, which it handles gracefully.
Could you just put an exit listener in the parent process that signals the children?
Edit:
You can also use node-ffi (Node Foreign Function Interface) to call ...
prctl(PR_SET_PDEATHSIG, SIGHUP);
... in Linux. ( man 2 prctl )
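For the first suggestion, a minimal sketch (mine; the file name and signal choice are illustrative) of an exit listener in the parent that signals the child:

var spawn = require('child_process').spawn;

var child = spawn('node', [__dirname + '/child.js']);

// Runs on normal parent exit; handlers here must be synchronous.
process.on('exit', function() {
  child.kill('SIGTERM');
});

// Also forward termination signals so the child is cleaned up then, too.
process.on('SIGTERM', function() {
  child.kill('SIGTERM');
  process.exit(0);
});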
I start Node.js from within a native OS X application as a background worker. To make Node.js exit when the parent process, which consumes Node.js's stdout, dies/exits, I do the following:
// Watch parent exit when it dies
process.stdout.resume();
process.stdout.on('end', function() {
  process.exit();
});
As easy as that, but I'm not exactly sure it's what you've been asking for ;-)