Prevent sending data to stdin if spawn fails - node.js

In my Node.js (v0.10.9) code I'm trying to detect two cases:
- the external tool (dot) is installed - in that case I want to send some data to the stdin of the created process
- the external tool is not installed - in that case I want to display a warning and not send anything to the process' stdin
My problem is that I don't know how to send data to the child's stdin if and only if the process was spawned successfully (i.e. stdin is ready for writing).
The following code works fine if dot is installed, but otherwise it tries to send data to the child even though the child was never spawned.
var childProcess = require('child_process');
var child = childProcess.spawn('dot');
child.on('error', function (err) {
    console.error('Failed to start child process: ' + err.message);
});
child.stdin.on('error', function (err) {
    console.error('Working with child.stdin failed: ' + err.message);
});
// I want to execute following lines only if child process was spawned correctly
child.stdin.write('data');
child.stdin.end();
I'd need something like this:
child.on('successful_spawn', function () {
    child.stdin.write('data');
    child.stdin.end();
});

From the node.js docs: http://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options
Example of checking for failed exec:
var spawn = require('child_process').spawn,
    child = spawn('bad_command');
child.stderr.setEncoding('utf8');
child.stderr.on('data', function (data) {
    if (/^execvp\(\)/.test(data)) {
        console.log('Failed to start child process.');
    }
});
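Note that the execvp() check above reflects older Node behaviour. On current Node versions a failed spawn is reported through the child's 'error' event (as in the question's own code), and since Node 15.1 the child also emits a 'spawn' event once the process has started successfully, which is essentially the 'successful_spawn' hook the question asks for. A minimal sketch, assuming a Node version with that event:
const { spawn } = require('child_process');
const child = spawn('dot');

child.on('error', (err) => {
    // Fires instead of 'spawn' if the executable could not be started.
    console.error('Failed to start child process: ' + err.message);
});

child.on('spawn', () => {
    // The process now exists, so stdin is safe to write to.
    child.stdin.write('data');
    child.stdin.end();
});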

Have a look at core-worker:
https://www.npmjs.com/package/core-worker
This package makes it a lot easier to handle processes.
I think what you want to do is something like this (from the docs):
import { process } from "core-worker";

const simpleChat = process("node chat.js", "Chat ready");
setTimeout(() => simpleChat.kill(), 360000); // close the chat after 6 minutes
simpleChat.ready(500)
    .then(console.log.bind(console, "You are now able to send messages."))
    .then(::simpleChat.death)
    .then(console.log.bind(console, "Chat closed"))
    .catch(() => { /* handle err */ });
So if the process is not started correctly, none of the .then statements are executed, which is exactly what you want, right?

Related

Can I “listen” for a specific output with child_process?

So far I have gotten my script to execute a Windows .bat file with child_process. My issue is that it runs in the background with no way to "connect" to it to see what happens and debug. Is there a way to "listen" for a certain output? For example, if the .bat outputs "Done!" in the shell at some point, is there a way to make my Node.js script detect that keyword and run further commands when it does?
Thanks!
Some clarification: the .bat outputs "Done!" and stays running; it doesn't stop. All I want to do is detect that "Done!" so that I can send a message to the user that the server has started successfully.
My current code:
exec('D:\\servers\\game_server_1\\start.bat', {shell: true, cwd: 'D:\\servers\\game_server_1'});
Well, if you're trying to do a one-and-done type of NodeJS script, you can just spawn a process that launches with the given command and exits when all commands have completed. This gives you a one-and-done streaming interface that you can monitor. The stdout event returns a data buffer containing the output of the command you ran; if the command is something like START that launches a program, it returns null. You could just issue a KILL command after the START your_program.exe:
const spawn = require('child_process').spawn;
const child = spawn('cmd.exe', ['/c', 'commands.bat']);

let DONE = 0;
const done = () => {
    console.log("log it");
    DONE++;
};

child.stdout.on('data', function (data) {
    console.log('stdout: ' + data);
    // it's important to add some type of counter to
    // prevent any logic from running twice, since
    // this will run twice for any given command
    if (data.toString().includes("DONE") && DONE === 0) {
        done();
    }
});
child.stderr.on('data', function (data) {
    console.log('stderr: ' + data);
});
child.on('exit', function (code) {
    console.log('child process exited with code ' + code);
});
Keep in mind, when you run a command to launch a program and the program launches, the data buffer will be null in the stdout event listener. The error event will only fire if there was an issue with launching the program.
YOUR .BAT:
ECHO starting batch script
REM example launching of program
START "" https://localhost:3000
REM issue a command after your program launch
ECHO DONE
EXIT
You could also issue an ECHO DONE command right after the command where you launched the program and listen for that, and try and parse out that command from stdout.
You could use a regular expression:
const { spawn } = require('child_process');
const child = spawn(...);
child.stdout.on('data', function (data) {
    console.log('stdout: ' + data);
    // Now use a regular expression to detect a done event
    // For example
    if (/Done!/.test(data.toString())) {
        // "Done!" was seen, run further commands here
    }
});
// Error handling etc. here
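One caveat: 'data' events deliver arbitrary chunks of output, so a marker like "Done!" could in principle arrive split across two chunks. A small sketch that buffers stdout before matching (the variable names are just illustrative):
const { spawn } = require('child_process');
const child = spawn('cmd.exe', ['/c', 'commands.bat']);

let buffered = '';           // accumulate chunks so a split "Done!" is still found
let alreadyNotified = false; // make sure the follow-up logic only runs once

child.stdout.on('data', function (data) {
    buffered += data.toString();
    if (!alreadyNotified && buffered.includes('Done!')) {
        alreadyNotified = true;
        console.log('Server reported "Done!" - notify the user here');
    }
});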

Execute script from Node in a separate process

What I want to do is when an endpoint in my Express app is hit, I want to run a command line script - without waiting for the result - in a separate process.
Right now I am using the child_process’s spawn function and it is working, but if the Node server were to quit, the child script would quit as well. I need to have the child script run to completion even if the server quits.
I don't need access to stdout or anything from the child script. I just need a way to basically "fire and forget".
Is there any way to do this with spawn that I may be missing? Or is there another way I should be going about this?
Thanks in advance for any guidance!
What you want here is options.detached of spawn. Setting this option will allow the sub-process to continue even after the main process calling spawn has terminated.
Quoting the documentation:
On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled for a child process, it cannot be disabled.
On non-Windows platforms, if options.detached is set to true, the child process will be made the leader of a new process group and session. Note that child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.
Basically this means that what you "launch" keeps running until it terminates itself. Since it is 'detached', nothing "ties" the sub-process to the execution of the parent from which it was spawned.
Example:
listing of sub.js:
(async function() {
    try {
        await new Promise((resolve, reject) => {
            let i = 0;
            let ival = setInterval(() => {
                i++;
                console.log('Run ', i);
                if (i === 5) {
                    clearInterval(ival);
                    resolve();
                }
            }, 2000);
        });
    } catch (e) {
        console.error(e);
    } finally {
        process.exit();
    }
})();
listing of main.js:
const fs = require('fs');
const { spawn } = require('child_process');

(async function() {
    try {
        const out = fs.openSync('./out.log', 'a');
        const err = fs.openSync('./out.log', 'a');

        console.log('spawn sub');
        const sub = spawn(process.argv[0], ['sub.js'], {
            detached: true, // this removes ties to the parent
            stdio: ['ignore', out, err]
        });
        sub.unref();

        console.log('waiting..');
        await new Promise((resolve, reject) =>
            setTimeout(() => resolve(), 3000)
        );
        console.log('exiting main..');
    } catch (e) {
        console.error(e);
    } finally {
        process.exit();
    }
})();
The basic idea is that the sub.js listing outputs a line every 2 seconds for 5 iterations, while main.js "spawns" that process as detached, waits 3 seconds, and terminates itself.
Though it's not strictly needed, for demonstration purposes we set up the spawned sub-process to redirect its output (both stdout and stderr) to a file named out.log in the same directory.
What you see is that the main listing does its job, spawns the new process, and terminates after 3 seconds. At that point the sub-process will only have output 1 line, but it keeps running and producing output to the redirected file for another 7 seconds, despite the main process having terminated.
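Stripped down to the fire-and-forget case the question asks about, the essential pieces are detached: true, a stdio setting that keeps no pipes open to the parent, and unref(). A minimal sketch, with a placeholder script name:
const { spawn } = require('child_process');

// 'long-task.js' is a placeholder for the command line script to run.
const child = spawn(process.argv[0], ['long-task.js'], {
    detached: true,  // new process group, not tied to the parent's lifetime
    stdio: 'ignore'  // no pipes back to the parent that would keep it alive
});
child.unref();       // let the parent's event loop exit without waiting for the child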

NodeJS and RabbitMQ, how to be sure my message is processed

I am building a kind of micro service application and using RabbitMQ to communicate between my services.
I have a nodeJS app that is supposed to receive messages from RabbitMQ and execute commands when a particular message comes in. So here is what the following code does:
- Connects to RabbitMQ
- Listens to the symfony_messages queue
- If a message identified by product.created comes in, the script executes a particular command using spawn from child_process.
My question is: sometimes I am going to "restart" my script. How can I be sure that, at the moment of restarting, the script is not in the middle of processing an event? How can I be sure that the process is not going to consume a message and stop before spawning the child process?
The possible solution that came to my mind is:
Send a signal to the nodeJS process to tell it "process the current message and stop". But how can I send such a signal?
And here is the code (you do not need to read it if you already get the question):
const amqp = require('amqplib/callback_api')
const path = require('path')
const { spawn } = require('child_process')

amqp.connect('amqp://guest:guest@127.0.0.1:5672', (err, conn) => {
  if (err) {
    console.log(err)
    return
  }
  conn.createChannel((err, channel) => {
    let q = 'symfony_messages'
    channel.assertQueue(q, {
      durable: false
    })
    console.log(" [*] Waiting for messages in %s. To exit press CTRL+C", q);
    channel.consume(q, (msg) => {
      let event = JSON.parse(msg.content.toString())
      if (event.name === 'product.created') {
        console.log('Indexing order...')
        let cp = spawn('php', [path.join(__dirname, '..', '..', 'bin', 'console'), 'elastic:index:orders', event.payload.product_id])
        cp.stdout.on('data', (data) => {
          console.log(`stdout: ${data}`);
        })
        cp.stderr.on('data', (data) => {
          console.log(`stderr: ${data}`);
        })
        cp.on('close', (code) => {
          console.log(`child process exited with code ${code}`);
        })
      }
    }, { noAck: true });
  })
})
Wouldn't it be a good pattern to use the channel.ack(message) function once the message has been processed successfully? You've set the noAck option to true, but you can use the ACK mechanism to ensure messages are only taken off the queue once they have been successfully processed.
Likewise, you can use the nack function to deliberately tell RabbitMQ that the message was not processed; I normally do this in the processing function's error handler (or promise catch).
I use a similar mechanism in a service that writes messages to a database. I only ACK the message once the message is written to the db. It's also useful to set up a dead letter exchange / queue within RabbitMQ so that any message that is nacked ends up there. You can then inspect these messages and see why they couldn't be processed (or automatically attempt to re-process them once the error condition that caused the problem is resolved).
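As a sketch of how that could look with amqplib's callback API, reusing conn, path and spawn from the question's code (the prefetch setting and exit-code check are illustrative choices):
conn.createChannel((err, channel) => {
  const q = 'symfony_messages'
  channel.assertQueue(q, { durable: false })
  channel.prefetch(1) // hand this consumer one unacked message at a time

  channel.consume(q, (msg) => {
    const event = JSON.parse(msg.content.toString())
    const cp = spawn('php', [path.join(__dirname, '..', '..', 'bin', 'console'), 'elastic:index:orders', event.payload.product_id])
    cp.on('close', (code) => {
      if (code === 0) {
        channel.ack(msg)                // processed successfully, remove from the queue
      } else {
        channel.nack(msg, false, false) // reject; lands in a dead letter queue if one is configured
      }
    })
  }, { noAck: false })
})
With noAck: false, a message that was delivered but never acked (for example because the script was restarted mid-processing) is requeued by RabbitMQ and redelivered later, which also addresses the restart concern in the question.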

Creating NodeJS child processes asynchronously

I am creating child processes in NodeJS in a function called "pythonGraphTools" in a for loop after generating some variables that need to be passed in. This for loop may run 50 times.
Then I am writing to the stdin of the spawned process. However, sometimes I am getting an "Error: EPIPE writing to closed socket" error for this line: py.stdin.write(JSON.stringify(dotfilepath));
I suspect it is because the child process has not yet completed spawning and I am attempting to write to it when it is not ready. I have looked at the asynchronous spawn API at https://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options, but it only seems to have async events for the flow of data/messages from child to parent.
Any insight into how I might make sure that the child is fully spawned before I call py.stdin.write()?
function pythonGraphTools(dotfilepath, allGraphsPerTrans, graphtools_color, graphtools_label, transHashArray) {
    var spawn = require('child_process').spawn,
        py = spawn('python', ['python_module.py']);
    //write file to disk temporarily.
    console.log("dotfilepath is " + dotfilepath)
    fs.writeFile(dotfilepath, allGraphsPerTrans, function (err) { //must create a file first //2nd param was res_str_dot_no_lbl
        if (err) {
            console.log("there was an error writing to file" + err);
        }
        //now send this dot file path to the python module which will make the graph
        console.log("now writing to python module!" + py.pid)
        console.log("nodejs colorarray length for debugging " + graphtools_color.length)
        py.stdin.write(JSON.stringify(dotfilepath)); //sending data to the python process!
        py.stdin.write("\n")
        py.stdin.write(JSON.stringify(graphtools_color)); // sending colours
        py.stdin.write("\n")
        py.stdin.write(JSON.stringify(graphtools_label)); //sending opcodes
        py.stdin.write("\n");
        py.stdin.write(JSON.stringify(transHashArray)); //sending transHashArray
        py.stdin.write("\n");
        py.stdin.end();
    });
    var dataString = ""; //variable to store return from python module
    py.stdout.on('data', function (data) { // listen for data coming back from python!
        dataString += data.toString();
    });
    py.stdout.on('end', function () { //python's stdout has finished - now do stuff
        console.log(dataString); // print out everything collected from python stdout
        //now delete temp dot file (with all dot files in it)
        fs.stat(dotfilepath, function (err, stats) { //check first if there is a dot file
            console.log(stats); //here we got all information of file in stats variable
            if (err) {
                return console.error(err);
            }
            fs.unlink(dotfilepath, function (err) { //actually deleting, comment this function out to not delete
                if (err) return console.log(err);
                console.log('file deleted successfully');
            }); //end unlink
        }); //end file stat
        py.stdout.end();
    }); // on python 'finish'
    py.on('exit', function (code, signal) { //which process? add pid?
        console.log('child process ' + py.pid + ' exited with ' +
            `code ${code} and signal ${signal}`);
    });
}
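An EPIPE on py.stdin.write() usually means the child's stdin pipe has already closed, most often because the python process exited early (or never started), rather than because the spawn has not yet completed; writes issued before the process is up are generally buffered by Node. A small sketch of one way to guard the writes, assuming a Node version (15.1+) that emits the 'spawn' event; the payload here is just a placeholder for the question's dotfilepath:
const { spawn } = require('child_process');
const py = spawn('python', ['python_module.py']);

py.on('error', (err) => {
    // spawning itself failed (e.g. python is not on the PATH)
    console.error('failed to start python: ' + err.message);
});

py.stdin.on('error', (err) => {
    // the child exited or closed stdin before/while we were writing
    console.error('write to python stdin failed: ' + err.message);
});

py.on('spawn', () => {
    // requires Node 15.1+; the process exists, so stdin can be written safely
    py.stdin.write(JSON.stringify('some-dot-file-path') + '\n'); // placeholder payload
    py.stdin.end();
});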

Electron kill child_process.exec

I have an electron app that uses child_process.exec to run long running tasks.
I am struggling to manage when the user exits the app during those tasks.
If they exit my app or hit close, the child processes continue to run until they finish, but the Electron app window has already closed and exited.
Is there a way to notify the user that there are process still running and when they have finished then close the app window?
All I have in my main.js is the standard code:
// Quit when all windows are closed.
app.on('window-all-closed', function() {
    // On OS X it is common for applications and their menu bar
    // to stay active until the user quits explicitly with Cmd + Q
    if (process.platform != 'darwin') {
        app.quit();
    }
});
Should I be adding a check somewhere?
Thanks for your help
EDITED
I cannot seem to get the PID of the child_process until it has finished. This is my child_process code:
var loader = child_process.exec(cmd, function(error, stdout, stderr) {
    console.log(loader.pid)
    if (error) {
        console.log(error.message);
    }
    console.log('Loaded: ', value);
});
Should I be trying to get it in a different way?
So after everyone's great comments I was able to update my code with a number of additions to get it to work, so I am posting my updates for everyone else.
1) Change from child_process.exec to child_process.spawn
var loader = child_process.spawn('program', options, { detached: true })
2) Use the Electron ipcRenderer to communicate from my module to the main.js script. This allows me to send the PIDs to main.js
ipcRenderer.send('pid-message', loader.pid);

ipcMain.on('pid-message', function(event, arg) {
    console.log('Main:', arg);
    pids.push(arg);
});
3) Add those PIDs to an array
4) In my main.js I added the following code to kill any PIDs that exist in the array before exiting the app.
// App close handler
app.on('before-quit', function() {
    pids.forEach(function(pid) {
        // A simple pid lookup
        ps.kill(pid, function(err) {
            if (err) {
                throw new Error(err);
            }
            else {
                console.log('Process %s has been killed!', pid);
            }
        });
    });
});
Thanks for everyone's help.
ChildProcess emits an exit event when the process has finished - if you keep track of the current processes in an array, and have them remove themselves after the exit event fires, you should be able to just loop over the remaining ones, calling ChildProcess.kill() on each, when you exit your app.
This may not be 100% working code/not the best way of doing things, as I'm not in a position to test it right now, but it should be enough to set you down the right path.
var processes = [];

// Adding a process
var newProcess = child_process.exec("mycommand");
processes.push(newProcess);
newProcess.on("exit", function () {
    processes.splice(processes.indexOf(newProcess), 1);
});

// App close handler
app.on('window-all-closed', function() {
    if (process.platform != 'darwin') {
        processes.forEach(function(proc) {
            proc.kill();
        });
        app.quit();
    }
});
EDIT: As shreik mentioned in a comment, you could also just store the PIDs in the array instead of the ChildProcess objects, then use process.kill(pid) to kill them. Might be a little more efficient!
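That variant might look something like this (a sketch of the same idea, just tracking PIDs instead of ChildProcess objects):
var pids = [];

var newProcess = child_process.exec("mycommand");
pids.push(newProcess.pid);
newProcess.on("exit", function () {
    pids.splice(pids.indexOf(newProcess.pid), 1);
});

// On shutdown:
pids.forEach(function (pid) {
    process.kill(pid); // sends SIGTERM by default
});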
Another solution, if you want to keep using exec(): in order to kill the child processes started by exec(), take a look at the module ps-tree. Its docs explain what is happening.
in UNIX, a process may terminate by using the exit call, and its parent process may wait for that event by using the wait system call. The wait system call returns the process identifier of a terminated child, so that the parent can tell which of the possibly many children has terminated. If the parent terminates, however, all its children have assigned as their new parent the init process. Thus, the children still have a parent to collect their status and execution statistics.
(from "operating system concepts")
SOLUTION: use ps-tree to get all processes that a child_process may have started, so that they can all be killed.
exec() actually works like this:
function exec (cmd, cb) {
    spawn('sh', ['-c', cmd]);
    ...
}
So check the example below and adapt it to your needs:
var cp = require('child_process'),
    psTree = require('ps-tree');

var child = cp.exec("node -e 'while (true);'", function () { /*...*/ });

psTree(child.pid, function (err, children) {
    cp.spawn('kill', ['-9'].concat(children.map(function (p) { return p.PID })));
});
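On non-Windows platforms there is also an alternative that avoids ps-tree: spawning the child with detached: true makes it the leader of its own process group, and signalling the negative PID then targets the whole group. A sketch, assuming a POSIX platform and a placeholder command:
const { spawn } = require('child_process');

// detached: true puts the child (and anything it starts) in its own process group
const child = spawn('sh', ['-c', 'long_running_command'], { detached: true });

// later, on shutdown: a negative PID signals the entire process group
process.kill(-child.pid, 'SIGTERM');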
