I have a few Node.js child processes that depend on a master. Each one runs a program with some asynchronous logic, and I need the process to terminate when that work is done. But the process does not terminate by itself, because there are still listeners attached to it. Example:
const cluster = require('cluster');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    let worker = cluster.fork();
    worker.send(i);
  }
} else {
  process.once('message', msg => {
    // here some logic
    // and after this is done, the process has to terminate
    console.log(msg);
  });
}
But the process keeps running even though I use "once". I tried removing all of the process listeners, but it still keeps running. How can I terminate it?
Use a module like terminate, which describes itself as:

A minimalist yet reliable (tested) way to terminate a Node.js process (and all child processes) based on the process ID.
var terminate = require('terminate');

terminate(process.pid, function(err, done) {
  if (err) { // you will get an error if you did not supply a valid process.pid
    console.log("Oopsy: " + err); // handle errors in your preferred way.
  } else {
    console.log(done); // do what you do best!
  }
});
or
We can start child processes with the {detached: true} option so those processes are not attached to the main process but go into a new process group. Then, using process.kill(-pid) from the main process, we can kill all processes that are in the same group as a child process with that pid. In my case, I only have one process in this group.
var spawn = require('child_process').spawn;
var child = spawn('my-command', {detached: true});
process.kill(-child.pid);
For cluster worker processes, you can call process.disconnect() to close the IPC channel to the master process. An open IPC channel will keep the worker process alive.
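For example, a rough sketch based on the question's code (the actual async work is just represented by the console.log here):

const cluster = require('cluster');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork().send(i);
  }
} else {
  process.once('message', msg => {
    console.log(msg);
    // ...async work finishes here, then:
    process.disconnect(); // close the IPC channel so nothing keeps the worker alive
  });
}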
I start a spawned child process this way:
const { spawn } = require('child_process')

let child = spawn(apiPath, {
  detached: true
})
child.unref()

child.stdout.on('data', data => { /* do something */ })
When I start the process I need to keep it attached because I want to read its output. But just before closing my Node process (the parent) I want to detach all unfinished child processes so they keep running in the background. However, as the documentation says:
When using the detached option to start a long-running process, the process will not stay running in the background after the parent exits unless it is provided with a stdio configuration that is not connected to the parent.
But with the stdio: 'ignore' option I can't read the stdout, which is a problem.
I tried to manually close the pipes before closing the parent process, but it was unsuccessful:
// Triggered just before the main process ends
child.stdin.end()
child.stderr.unpipe()
child.stdout.unpipe()
After many tests I found at least one way to solve this problem: destroying all the pipes before leaving the main process.
One tricky point is that the child process has to handle the pipe destruction correctly; if it doesn't, it could get an error and close anyway. In this example the Node child process seems to have no problem with this, but it could be different in other scenarios.
main.js
const { spawn } = require('child_process')

console.log('Start Main')

let child = spawn('node', ['child.js'], { detached: true })
child.unref() // With this the main process ends after the child is fully disconnected

child.stdout.on('data', data => {
  console.log(`Got data : ${data}`)
})

// In a real case this should be triggered just before the end of the main process
setTimeout(() => {
  console.log('Disconnect the child')
  child.stderr.unpipe()
  child.stderr.destroy()
  child.stdout.unpipe()
  child.stdout.destroy()
  child.stdin.end()
  child.stdin.destroy()
}, 5000)
child.js
console.log('Start Child')

setInterval(function() {
  process.stdout.write('hello from child')
}, 1000)
output
Start Main
Got data : Start Child
Got data : hello from child
Got data : hello from child
Got data : hello from child
Got data : hello from child
Disconnect the child
I'd like to fork a long-running Express request in Node and send the Express response from the child, allowing the parent to serve other requests. I'm already using cluster, but I'd like to fork another process in addition to the cluster for specific long-running requests. What I'd like to prevent is all the processes in the cluster being consumed by a specific type of long-running request while most of the other requests are fast.
Thanks
var express = require('express');
var webserver = express();

webserver.get("/test", function(request, response) {
  // long running HTTP request
  response.send(...);
});
What I'm thinking of is something like the following, although I'm not sure it works:
var cp = require('child_process');
var express = require('express');
var webserver = express();

webserver.get("/test", function(request, response) {
  var child = cp.fork('do_nothing.js');
  child.on("message", function(message) {
    if (message == "start") {
      response.send(...);
      process.exit();
    }
  });
  child.send("start");
});
Let me know if anyone knows how to do this.
Edit: So, the idea is that the child could take a long time. There is a limited number of processes in the cluster serving Express responses, and I don't want to consume them all on a specific long-running request type. In the code below, the entire cluster would be consumed by the long-running Express requests.
while (1) {
  if (rand() % 100 == 0) {
    if (fork() == 0) {
      sleep(hour(1));
      exit(0);
    }
  } else {
    sleep(second(1));
  }
  waitpid(WAIT_ANY, &status, WNOHANG);
}
Edit: I am going to mark the self-answer as solved. I'm sure there's a way to pass a socket to a child but it's not really necessary because the cluster master can manage all child processes. Thanks for your help.
Your second code block is confusing because it appears that you're killing the parent process with process.exit() rather than the child.
In any case, if we assume the problem is this:
You have a cluster of "regular processes".
Occasionally, you want to take an incoming request that was assigned to one of the cluster processes and pass it off to a long-running child that will eventually send the response.
After sending the response, the long-running child process should exit.
You have a couple options.
You can have the clustered process that was assigned the request, start up a child, send it some initial data and listen for a message back from the child. When it gets the message back from the child, it can send the response and kill the child. This appears to be what you're attempting to do in your second code block.
You can have the clustered process that was assigned the request, start up a child and reassign the request socket to the child process and the child can then own that socket from then on. When it finally sends the response, it can then exit itself.
The first is simpler because no socket reassignment from one process to another is required. To implement the second, you'd have to write or find the code to do the socket reassignment and then reconstitute it as an Express request within the child. The cluster module does something like this, so the code is out there to be found and learned from, but I'm not aware of a trivial way to do it.
Personally, I don't see any particular downside to the first. I suppose if the clustered process were to die for some reason, you'd lose the long-running request socket, but hopefully you can just code your clustered processes not to die unnecessarily.
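A rough sketch of the first option (the worker script name long-task.js and the message shapes here are just placeholders):

var cp = require('child_process');
var express = require('express');
var webserver = express();

webserver.get("/test", function(request, response) {
  // long-task.js would do the slow work and call process.send(result) when done
  var child = cp.fork('long-task.js');
  child.once("message", function(result) {
    response.send(result); // respond once the child reports back
    child.kill();          // then clean up the child
  });
  child.send({ start: true, query: request.query }); // hand the work to the child
});

webserver.listen(8080);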
You can read this article on sending a socket to a new node.js process:
Sending a socket to a forked process
And, this node.js doc on sending a socket:
Example: sending a socket object
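For reference, the socket hand-off in those docs looks roughly like this (a sketch of the plain net-server pattern, not adapted to an in-flight Express request; file names are placeholders):

// parent.js
var fork = require('child_process').fork;
var net = require('net');
var child = fork('socket-child.js');

var server = net.createServer({ pauseOnConnect: true }, function (socket) {
  child.send('socket', socket); // hand the raw socket to the child
});
server.listen(1337);

// socket-child.js
process.on('message', function (m, socket) {
  if (m === 'socket' && socket) {
    socket.end('Handled by child ' + process.pid + '\n');
  }
});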
So, I've verified that this is not necessary for my use case, but I was able to get it working using the code below. It's not exactly what the OP asks for, but it works.
What it's doing is sending an instruction to the cluster master, which forks the additional process upon receipt of the slow express request.
Since the express request doesn't need to know the status of the newly forked cluster worker, it just handles the slow request as normal and then exits.
The instruction to the cluster master informs the master not to replace the dying slow express request process, so the number of workers reverts to the original number after the slow request finishes.
The pool will increase in size when there are slow requests, but revert to normal afterwards. This prevents, say, 20 simultaneous slow requests from bringing down the cluster.
var cluster = require('cluster');

var numberOfWorkers = 10;
var workerCount = 0;
var slowRequestPids = { };

if (cluster.isMaster) {
  for (var i = 0; i < numberOfWorkers; i++) {
    workerCount++;
    cluster.fork();
  }

  cluster.on('exit', function(worker) {
    workerCount--;
    var pidString = String(worker.process.pid);
    if (pidString in slowRequestPids) {
      delete slowRequestPids[pidString];
      if (workerCount >= numberOfWorkers) {
        logger.info('not forking replacement for slow process');
        return;
      }
    }
    logger.info('forking replacement for a process that died unexpectedly');
    workerCount++;
    cluster.fork();
  });

  cluster.on("message", function(msg) {
    if (typeof msg.fork != "undefined" && workerCount < 100) {
      logger.info("forking additional process upon slow request");
      slowRequestPids[msg.fork] = 1;
      workerCount++;
      cluster.fork();
    }
  });

  return;
}
webserver.use("/slow", function(req, res) {
process.send({fork: String(process.pid) });
sleep.sleep(300);
res.send({ response_from: "virtual child" });
res.on("finish", function() {
logger.info('process exits, restoring cluster to original size');
process.exit();
});
});
I have an Electron app that uses child_process.exec to run long-running tasks.
I am struggling to manage what happens when the user exits the app during those tasks.
If they exit my app or hit close, the child processes continue to run until they finish, even though the Electron app window has already closed and exited.
Is there a way to notify the user that there are processes still running, and close the app window only once they have finished?
All I have in my main.js is the standard code:
// Quit when all windows are closed.
app.on('window-all-closed', function() {
  // On OS X it is common for applications and their menu bar
  // to stay active until the user quits explicitly with Cmd + Q
  if (process.platform != 'darwin') {
    app.quit();
  }
});
Should I be adding a check somewhere?
Thanks for your help
EDITED
I cannot seem to get the PID of the child_process until it has finished. This is my child_process code:
var loader = child_process.exec(cmd, function(error, stdout, stderr) {
  console.log(loader.pid)
  if (error) {
    console.log(error.message);
  }
  console.log('Loaded: ', value);
});
Should I be trying to get it in a different way?
So after everyone's great comments I was able to update my code with a number of additions to get it to work, so I am posting my updates for everyone else.
1) Change from child_process.exec to child_process.spawn
var loader = child_process.spawn('program', options, { detached: true })
2) Use the Electron ipcRenderer to communicate from my module to the main.js script. This allows me to send the PIDs to main.js
ipcRenderer.send('pid-message', loader.pid);
ipcMain.on('pid-message', function(event, arg) {
  console.log('Main:', arg);
  pids.push(arg);
});
3) Add those PIDs to an array
4) In my main.js I added the following code to kill any PIDs that exist in the array before exiting the app.
// App close handler
app.on('before-quit', function() {
  pids.forEach(function(pid) {
    // A simple pid lookup
    ps.kill(pid, function(err) {
      if (err) {
        throw new Error(err);
      } else {
        console.log('Process %s has been killed!', pid);
      }
    });
  });
});
Thanks for everyone's help.
ChildProcess emits an exit event when the process has finished. If you keep track of the current processes in an array and have them remove themselves after the exit event fires, you should be able to just iterate over the remaining ones, calling ChildProcess.kill(), when you exit your app.
This may not be 100% working code/not the best way of doing things, as I'm not in a position to test it right now, but it should be enough to set you down the right path.
var processes = [];

// Adding a process
var newProcess = child_process.exec("mycommand");
processes.push(newProcess);
newProcess.on("exit", function () {
  processes.splice(processes.indexOf(newProcess), 1);
});

// App close handler
app.on('window-all-closed', function() {
  if (process.platform != 'darwin') {
    processes.forEach(function(proc) {
      proc.kill();
    });
    app.quit();
  }
});
EDIT: As shreik mentioned in a comment, you could also just store the PIDs in the array instead of the ChildProcess objects, then use process.kill(pid) to kill them. Might be a little more efficient!
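An untested sketch of that variant, adapting the block above:

var pids = [];

// Adding a process
var newProcess = child_process.exec("mycommand");
pids.push(newProcess.pid);
newProcess.on("exit", function () {
  pids.splice(pids.indexOf(newProcess.pid), 1);
});

// App close handler
app.on('window-all-closed', function() {
  if (process.platform != 'darwin') {
    pids.forEach(function(pid) {
      process.kill(pid); // defaults to SIGTERM
    });
    app.quit();
  }
});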
Another solution, if you want to keep using exec():
In order to kill a child process started by exec(), take a look at the ps-tree module. They explain what is happening:
In UNIX, a process may terminate by using the exit call, and its parent process may wait for that event by using the wait system call. The wait system call returns the process identifier of a terminated child, so that the parent can tell which of its possibly many children has terminated. If the parent terminates, however, all its children are assigned the init process as their new parent. Thus, the children still have a parent to collect their status and execution statistics. (from "Operating System Concepts")
SOLUTION: use ps-tree to get all processes that a child_process may have started, so that they can all be killed.
exec() actually works like this:
function exec (cmd, cb) {
  spawn('sh', ['-c', cmd]);
  ...
}
So check the example and adapt it to your needs:
var cp = require('child_process'),
    psTree = require('ps-tree');

var child = cp.exec("node -e 'while (true);'", function () { /*...*/ });

psTree(child.pid, function (err, children) {
  cp.spawn('kill', ['-9'].concat(children.map(function (p) { return p.PID; })));
});
I'm trying to fork a Node child process with
child_process.fork("child.js")
and have it stay alive after the parent exits. I've tried using the detached option like so:
child_process.fork("child.js", [], {detached:true});
This works when using spawn, but when detached is true with fork it just fails silently, not even executing child.js.
I've also tried
var p = child_process.fork("child.js")
p.disconnect();
p.unref();
But the child still dies when the parent does.
Any help or insight would be greatly appreciated.
EDIT:
Node Version: v5.3.0
Platform: Windows 8.1
Code:
// Parent
var child_process = require("child_process");
var p;

try {
  console.log(1)
  p = child_process.fork("./child.js")
  console.log(2)
} catch (e) {
  console.log(e)
}

p.on('error', console.log.bind(console))
p.disconnect();
p.unref();

// To keep the process alive
setTimeout(() => {
  console.log(1);
}, 100000);
--
// Child
var fs = require("fs");

console.log(3);
fs.writeFileSync("test.txt", new Date().toString());

setTimeout(() => {
  console.log(1);
}, 100000);
I'm assuming you're executing your parent file from the command line, which is probably why it "appears" that the forked child is not executing. In reality, when the parent process exits, the terminal stops waiting and prints a new line, waiting for your next command. This makes it seem like the child isn't executing, but trust me, it is. Also, there is no "detached" option for child_process.fork.
Add some console.log() statements to your child process and you should see output printing in your terminal even after the parent has exited. If you don't, it's because your child is prematurely exiting due to an error. Run your child process directly to debug it before calling it from the parent.
Check out this quick example:
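Something like the following minimal sketch (the file names parent.js and child.js are just placeholders):

// parent.js - forks the child, then exits almost immediately
var child_process = require("child_process");
var p = child_process.fork("./child.js");
p.disconnect(); // close the IPC channel
p.unref();      // don't let the child handle keep the parent's event loop alive
console.log("parent exiting");

// child.js - keeps running and logging after the parent has exited
setInterval(function () {
  console.log("child still alive, pid " + process.pid);
}, 1000);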
Hope this helps.
Is it possible to get the parent process-id using Node.JS? I would like to detect if the parent is killed or fails in such a way that it cannot notify the child. If this happens, the parent process id of the child should become 1.
This would be preferable to requiring the parent to periodically send a keep-alive signal and also preferable to running the ps command.
You can use a pid file, something like this:
var util = require('util'),
    fs = require('fs'),
    pidfile = '/var/run/nodemaster.pid';

try {
  var pid = fs.readFileSync(pidfile);
  // REPLACE with your signal or use another method to check process existence :)
  process.kill(pid, 'SIGUSR2');
  util.puts('Master already running');
  process.exit(1);
} catch (e) {
  fs.writeFileSync(pidfile, process.pid.toString(), 'ascii');
}

// run your children here
Also, you can send the pid as an argument in the spawn() call.
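For example (a rough sketch; the child reads the parent's pid from its arguments):

// parent.js
var spawn = require('child_process').spawn;
spawn('node', ['child.js', String(process.pid)], { stdio: 'inherit' });

// child.js
var parentPid = Number(process.argv[2]);
console.log('parent pid is', parentPid);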
I start Node.js from within a native OS X application as a background worker. To make Node.js exit when the parent process, which consumes Node.js's stdout, dies or exits, I do the following:
// Watch the parent and exit when it dies
process.stdout.resume();
process.stdout.on('end', function() {
  process.exit();
});
As easy as that, but I'm not exactly sure if it's what you've been asking for ;-)