Reading stdout of child process unbuffered - node.js

I'm trying to read the output of a Python script launched by Node.js as it arrives. However, I only get access to the data once the process has finished.
var spawn = require('child_process').spawn;

var proc, args;
args = [
  './bin/build_map.py',
  '--min_lon',
  opts.sw.lng,
  '--max_lon',
  opts.ne.lng,
  '--min_lat',
  opts.sw.lat,
  '--max_lat',
  opts.ne.lat,
  '--city',
  opts.city
];
proc = spawn('python', args);
proc.stdout.on('data', function (buf) {
  console.log(buf.toString());
  socket.emit('map-creation-response', buf.toString());
});
If I launch the process with { stdio: 'inherit' } I can see the output as it happens, directly in the console. But listening with proc.stdout.on('data', ...) as above will not give me the output until the process finishes.
How do I make sure I can read the output from the child process as it arrives and direct it somewhere else?

The buffering is being done by Python itself: it detects that its standard output has been redirected to a pipe rather than a terminal, and switches to block buffering. You can easily tell Python not to do this: just run "python -u" instead of "python". It should be as easy as that.
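As a minimal sketch, the only change to the spawn call from the question is prepending -u to the argument list (the opts object and socket handling are as in the question; the trailing arguments are elided here):

const { spawn } = require('child_process');

// Prepending '-u' makes the Python interpreter write stdout unbuffered,
// so 'data' events fire as the script produces output.
const args = ['-u', './bin/build_map.py', '--city', opts.city];
const proc = spawn('python', args);

proc.stdout.on('data', (buf) => {
  console.log(buf.toString());
});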

When a process is spawned by child_process.spawn(), the streams connected to the child process's standard output and standard error are actually unbuffered on the Node.js side. To illustrate this, consider the following program:
const spawn = require('child_process').spawn;
var proc = spawn('bash', [
  '-c',
  'for i in $(seq 1 80); do echo -n .; sleep 1; done'
]);
proc.stdout
  .on('data', function (b) {
    process.stdout.write(b);
  })
  .on('close', function () {
    process.stdout.write("\n");
  });
This program runs bash and has it emit a . character every second for 80 seconds, while consuming the child process's standard output via data events. You should notice that the dots are emitted by the Node.js program every second, confirming that buffering does not occur on the Node.js side.
Also, as explained in the Node.js documentation on child_process:
By default, pipes for stdin, stdout and stderr are established between
the parent Node.js process and the spawned child. It is possible to
stream data through these pipes in a non-blocking way. Note, however,
that some programs use line-buffered I/O internally. While that does
not affect Node.js, it can mean that data sent to the child process
may not be immediately consumed.
You may want to confirm that your Python program does not buffer its output. If you are emitting data from your Python program as separate, distinct writes to standard output, consider calling sys.stdout.flush() after each write to suggest that Python should actually write the data instead of trying to buffer it.
Update: In this commit that passage was removed from the Node.js documentation for the following reason:
doc: remove confusing note about child process stdio
It’s not obvious what the paragraph is supposed to say. In particular,
whether and what kind of buffering mechanism a process uses for its
stdio streams does not affect that, in general, no guarantees can be
made about when it consumes data that was sent to it.
This suggests that there could be buffering at play before the Node.js process receives data. In spite of this, care should be taken to ensure that processes under your control upstream of Node.js are not buffering their output.
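For upstream programs whose source you cannot change, one possible workaround on Linux is coreutils' stdbuf, which can disable stdio buffering in programs that use C stdio. A minimal sketch (the program name and flag passed to it below are illustrative):

const { spawn } = require('child_process');

// 'stdbuf -o0' runs the given program with unbuffered standard output.
// Note: this only affects programs that buffer via C stdio.
const proc = spawn('stdbuf', ['-o0', 'some-program', '--some-flag']);

proc.stdout.on('data', (chunk) => {
  process.stdout.write(chunk);
});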

Related

Spawn child process, then later send argument to that process

I want to spawn a child process and then, at a later time, send it an argument that it will then execute. How can I do that? (NodeJS, on Mac)
For example, I have a command to execute a script file:
osascript script-test.scpt
This command works in the terminal, and it also works using exec, like so:
const { exec } = require('child_process')
var script = 'osascript script-test.scpt'
exec(script)
But how do I get it to work in an already running child process?
I've tried the following, but nothing happens (no errors, and no activity):
const { spawn } = require('child_process')
var process = spawn('osascript')

// ... at some later point (after the spawned process has been created):
process.stdin.write('script-test.scpt')
In all current operating systems, a process is spawned with a given set of arguments (also called argv, for "argument values") and keeps this set until execution ends. This means that you cannot change a process's arguments on the fly.
For a program to support multiple job submissions after spawning, it needs to implement this explicitly using some form of communication - this is known as IPC, or Inter-Process Communication. A program that supports IPC will usually allow another program to control its behavior to some extent - for example, submit jobs for processing and report back on their completion.
Popular methods of implementing IPC include:
Network communication
Local calls via a "message bus" such as D-Bus
Pipes (direct communication over stdin/stdout)
Inspect the documentation for the program you're trying to call and find out whether it supports any of the forms of control listed above. If it does, you may be able to integrate with it (in a program-specific way). If not, you will need to spawn a new instance every time you need to process a new job, as sketched below.
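A minimal sketch of the new-instance-per-job approach, reusing the osascript invocation from the question (runScript is a hypothetical helper name):

const { spawn } = require('child_process');

// Hypothetical helper: run one script per invocation, settle when it exits.
function runScript(file) {
  return new Promise((resolve, reject) => {
    const proc = spawn('osascript', [file]);
    proc.on('error', reject);
    proc.on('close', (code) => {
      code === 0 ? resolve() : reject(new Error('osascript exited with ' + code));
    });
  });
}

// At some later point, whenever a new job comes in:
runScript('script-test.scpt').then(() => console.log('done'));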

Node detect child_process waiting to read from stdin

I've found some questions about writing to child_process standard input, such as Nodejs Child Process: write to stdin from an already initialised process, however, I'm wondering if it is possible to recognize when a process spawned using Node's child_process attempts to read from its standard input and take action on that (perhaps according to what it has written to its standard output up until then).
I see that the stdio streams are implemented using Stream in Node. Stream has an event called data for when it is being written into; however, I see no event for detecting that the stream is being read from.
Is the way to go here to subclass Stream and override its read method with custom implementation or is there a simpler way?
I've played around with Node standard I/O and streams for a bit until I eventually arrived at a solution. You can find it here: https://github.com/TomasHubelbauer/node-stdio
The gist of it is that we need to create a Readable stream and pipe it to the process' standard input. Then we need to listen for the process' standard output and parse it, detect the chunks of interest (prompts to the user) and each time we get one of those, make our Readable output our "reaction" to the prompt to the process' standard input.
Start the process:
const child_process = require('child_process');
const cp = child_process.exec('node test');
Prepare a Readable and pipe it to the process' standard input:
const stream = require('stream');
new stream.Readable({ read }).pipe(cp.stdin);
Provide the read implementation which will be called when the process asks for input:
/** @this {stream.Readable} */
async function read(/** @type {number} */ size) {
  this.push(await promise + '\n');
}
Here the promise is used to wait until we have an answer to the question the process asked through its standard output. this.push will add the answer to an internal queue of the Readable, and it will eventually be sent to the standard input of the process.
An example of how to parse the input for a program prompt, derive an answer from the question, wait for the answer to be provided and then send it to the process is in the linked repository.
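For illustration, here is a hedged sketch of that loop assembled in one place. The 'node test' command is the one from above; the prompt format (lines ending in a question mark) and the fixed answer are assumptions for illustration:

const child_process = require('child_process');
const stream = require('stream');

const cp = child_process.exec('node test');

let respond; // resolves the promise that read() awaits
let promise = new Promise((resolve) => { respond = resolve; });

new stream.Readable({
  // Called when the child's stdin is ready for more data.
  async read(size) {
    this.push(await promise + '\n');
    // Re-arm for the next prompt/answer round.
    promise = new Promise((resolve) => { respond = resolve; });
  }
}).pipe(cp.stdin);

cp.stdout.on('data', (chunk) => {
  const text = chunk.toString();
  // Assumed prompt format: output ending in '?' is a question to answer.
  if (text.trim().endsWith('?')) {
    respond('my answer'); // derive a real answer from `text` here
  }
});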

child_process.execFile() without buffering

I'm using Node's child_process.execFile() to start and communicate with a process that puts all of its output into its standard output and error streams. The process runs for an indeterminate amount of time and may theoretically generate any amount of output, i.e.:
const { execFile } = require('child_process');

const process = execFile('path/to/executable', [], { encoding: 'buffer' });
process.stdout.on('data', (chunk) => {
  doSomethingWith(chunk);
});
process.stderr.on('data', (chunk) => {
  renderLogMessage(chunk);
});
Notice that I'm not using the last argument to execFile() because I never need an aggregated view of all the data that ever came out of either of those streams. Despite this omission, Node appears to be buffering the output anyway, and I can reliably make the process die with a SIGTERM signal just by giving it enough input to generate a large amount of output. That is problematic because the process is stateful and cannot simply be restarted periodically.
How can I alter or work around this behavior?
You don't want to use execFile, which will wait for the child process to exit before "returning" (by calling the callback that you're not passing).
The documentation for execFile also describes why your child process is being terminated:
maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. (Default: 200*1024) If exceeded, the child process is terminated.
For long-running processes for which you want to incrementally read stdout/stderr, use child_process.spawn().
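A minimal sketch of the switch, reusing the handlers from the question (doSomethingWith and renderLogMessage are the question's own placeholders):

const { spawn } = require('child_process');

// spawn() streams stdout/stderr directly to these handlers;
// there is no maxBuffer limit to exceed.
const proc = spawn('path/to/executable', []);

proc.stdout.on('data', (chunk) => {
  doSomethingWith(chunk); // chunks are Buffers by default
});
proc.stderr.on('data', (chunk) => {
  renderLogMessage(chunk);
});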

Are the extra stdio streams in node.js child_process.spawn blocking?

When creating a child process using spawn() you can pass options to create multiple streams via the options.stdio argument. After the standard three (stdin, stdout, stderr) you can pass extra streams and pipes, which will become file descriptors in the child process. You can then use fs.createReadStream/fs.createWriteStream to access those.
See http://nodejs.org/api/child_process.html#child_process_child_process_spawn_command_args_options
var opts = {
  stdio: [process.stdin, process.stdout, process.stderr, 'pipe']
};
var child = child_process.spawn('node', ['./child.js'], opts);
But the docs are not really clear on whether these pipes are blocking. I know stdin/stdout/stderr are blocking, but what about the extra 'pipe' entries?
In one part they say:
"Please note that the send() method on both the parent and child are
synchronous - sending large chunks of data is not advised (pipes can
be used instead, see child_process.spawn"
But elsewhere they say:
process.stderr and process.stdout are unlike other streams in Node in
that writes to them are usually blocking.
They are blocking in the case that they refer to regular files or TTY file descriptors.
In the case they refer to pipes:
They are blocking in Linux/Unix.
They are non-blocking like other streams in Windows.
Can anybody clarify this? Are pipes blocking on Linux?
I need to transfer large amounts of data without blocking my worker processes.
Related:
How to send huge amounts of data from child process to parent process in a non-blocking way in Node.js?
How to transfer/stream big data from/to child processes in node.js without using the blocking stdio?

How to transfer/stream big data from/to child processes in node.js without using the blocking stdio?

I have a bunch of (child)processes in node.js that need to transfer large amounts of data.
When I read the manual it says that the stdio and IPC interfaces between them are blocking, so that won't do.
I'm looking into using file descriptors but I cannot find a way to stream from them (see my other more specific question How to stream to/from a file descriptor in node?)
I think I might use a net socket, but I fear that has unwanted overhead.
I also saw this question, but it's not the same (and has no answers): How to send huge amounts of data from child process to parent process in a non-blocking way in Node.js?
I found a solution that seems to work: when spawning the child process you can pass options for stdio and set up a pipe to stream data.
The trick is to add an additional element and set it to 'pipe'.
In the parent process, stream to child.stdio[3].
var child_process = require('child_process');

var opts = {
  stdio: [process.stdin, process.stdout, process.stderr, 'pipe']
};
var child = child_process.spawn('node', ['./child.js'], opts);

// send data
mySource.pipe(child.stdio[3]);

// read data
child.stdio[3].pipe(myHandler);
In the child, open a stream for file descriptor 3.
var fs = require('fs');

// read from it
var readable = fs.createReadStream(null, {fd: 3});

// write to it
var writable = fs.createWriteStream(null, {fd: 3});
Note that not every stream you get from npm works correctly: I tried JSONStream.stringify() and it produced errors, but it worked after I piped it via through2 (no idea why that is).
Edit: some observations: it seems the pipe is not always a Duplex stream, so you might need two pipes. And there is something weird going on where, in one case, it only works if I also have an IPC channel, so six entries total: [stdin, stdout, stderr, pipe, pipe, ipc]. A sketch of the two-pipe variant is below.
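A hedged sketch of that two-pipe setup, assuming fd 3 carries parent-to-child data and fd 4 carries child-to-parent data (the direction assignment and file names are illustrative choices, and both ends must agree on them):

var child_process = require('child_process');
var fs = require('fs');

var child = child_process.spawn('node', ['./child.js'], {
  // fd 3: parent -> child, fd 4: child -> parent
  stdio: ['inherit', 'inherit', 'inherit', 'pipe', 'pipe']
});

// Parent side: write data into fd 3, read results from fd 4.
fs.createReadStream('input.dat').pipe(child.stdio[3]);
child.stdio[4].pipe(fs.createWriteStream('output.dat'));

// Child side (child.js) would be the mirror image:
// var input = fs.createReadStream(null, { fd: 3 });
// var output = fs.createWriteStream(null, { fd: 4 });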
