I am trying to create a Node app that you can pipe JSON data into, which it then displays as a question to the user.
I learned how to bind to process.stdin here http://dailyjs.com/2012/03/08/unix-node-pipes/
Here is an example. The following is my UI for asking questions in the terminal:
var inquirer = require("inquirer");

function runInquirer(choices) {
  inquirer.prompt([
    {
      type: "checkbox",
      message: "Select toppings",
      name: "toppings",
      choices: choices
    }
  ], function (answers) {
    console.log(JSON.stringify(answers, null, " "));
  });
}
If I invoke the handler directly, it works
runInquirer([
  { name: 'Cheese' },
  { name: 'Tomato' }
]);
But when I try to pipe the data into it
process.stdin.resume();
process.stdin.setEncoding('utf8');

process.stdin.on('data', function (data) {
  runInquirer(JSON.parse(data));
});
It displays the questions and then exits immediately. So it is not waiting for my user input.
Example invoking from the command line (note the pipe | )
➜ test git:(master) ✗ cat questions.json | node test.js
? Select toppings (Press <space> to select)
The usual:
❯◯ Peperonni
◉ Cheese
I have also tried this test with another node user-interface library, https://github.com/yaronn/blessed-contrib, and got the same result.
You appear to be misunderstanding a bit about how processes handle stdin. inquirer.prompt sets up a prompt by using stdin and stdout to accept user input and write output to the terminal. process.stdin.on('data', ...) sets up a handler to read data as it arrives on stdin. When you run
cat questions.json | node test.js
you are instructing your shell to stop using the terminal as stdin for node and to use the output of the cat command instead.
That means that both of your methods of input are in fact trying to use the same channel. With your current structure, the answers to the questions would also have to come from your questions.json file. It is exiting immediately because the file has run out of content and cannot possibly have any answers for your inquirer.
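You can verify this yourself: Node exposes whether stdin is a real terminal via process.stdin.isTTY. A quick sketch (check.js is just a hypothetical name for this test file):

// check.js: report whether stdin is attached to a terminal.
// isTTY is true on a terminal and undefined when stdin is piped or redirected.
console.log('stdin is a TTY:', Boolean(process.stdin.isTTY));

Running node check.js prints true; running cat questions.json | node check.js prints false.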
What you probably want is for your test.js file to accept a JSON file as the first argument, and read that file instead of reading from process.stdin.
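A minimal sketch of that approach, reusing the runInquirer function from above (the questions.json name is just an example):

var fs = require('fs');

// Usage: node test.js questions.json
var file = process.argv[2];
fs.readFile(file, 'utf8', function (err, contents) {
  if (err) throw err;
  runInquirer(JSON.parse(contents));
});

This leaves stdin and stdout free for inquirer to talk to the terminal.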
Related
I have to run a python script from my node project that converts a .csv file to a .txt file.
The command for running the python script on the terminal is
python3 csv_to_rdf.py "pathToFile" "date" > "nameOfNewFile.txt"
eg. python3 csv_to_rdf.py modifiedCSV.csv 08122022 > test.txt
I am using the child_process.spawn() method in Node.js to run the script, but I can't figure out the args.
When I use the following code snippet:
I get the following error message:
My question is how do I send in "> test.txt" in my spawn() method?
This is incredible. I am a semantic developer, and I had an extremely similar file (a Python script that parsed a .csv and then converted it to .txt and then to .TriG) that I needed to run in my Node.js backend.
Short answer
A very simple solution I thought of after writing everything below is:
Python is better at writing to files than bash. Let Python write to the file itself, don't pass a redirect into your spawn(), and you'll be good. Escaped characters are difficult to handle in a child_process because JavaScript and Python both try to process them at the same time. If Python doesn't have write access, the long answer below may serve you better.
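On the Node side that leaves a plain spawn with no redirect at all. A sketch, under the assumption that csv_to_rdf.py has been changed to open and write test.txt itself:

const { spawn } = require('node:child_process');

// Assumes csv_to_rdf.py now writes its own output file.
const childPython = spawn('python3', ['csv_to_rdf.py', 'modifiedCSV.csv', '08122022']);

childPython.on('close', (code) => {
  console.log(`csv_to_rdf.py exited with code ${code}`);
});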
Long answer
I moved away from child processes because they can cause security issues: any child_process that receives user input can be exploited. If you can move away from a child process and rewrite your Python file in JS/TypeScript, that would be best. If you are confident in your use of spawn, then here is a solution:
Firstly, you cannot add a redirect to your spawn() arguments, because redirection is a shell feature and spawn does not run a shell by default. The command is python3 and, as the error suggests, python3 takes the redirect as an unrecognized argument: it is passed through literally, so it ends up in the script's sys.argv. Without seeing your Python code I couldn't say how the extra arguments are being handled, but obviously an error is being thrown.
Remember that child_process.spawn() runs asynchronously, so stdout and stderr can be read as the data arrives (see the child_process.spawn() docs). So remove the redirect from the spawn argument array and either run another child process or use fs to write the output to a file as stdout data occurs:
const { spawn } = require('node:child_process');
const fs = require('node:fs');

// The script name itself must be the first argument to python3.
const childPython = spawn('python3', ['csv_to_rdf.py', 'modifiedCSV.csv', '08122022']);
const outFile = fs.createWriteStream('test.txt');

childPython.stdout.on('data', (data) => {
  outFile.write(data); // write the output to test.txt ourselves instead of redirecting
});
Or you could use exec instead; exec runs the command through a shell, which is why the > redirect works there:
const { exec } = require('node:child_process');

const path = "modifiedCSV.csv";
const date = "08122022";

exec(`python3 csv_to_rdf.py ${path} ${date} > test.txt`, (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.error(`stderr: ${stderr}`);
});
The main issue is: if the path to the modified CSV file is in any way connected to a user's input, you have a security risk. The child process runs with the same privileges as your Node process, and because exec passes the whole string through a shell, a user could inject commands in place of the file path and rewrite or encrypt every file the process can touch.
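If the path ever does come from user input, the spawn form with an argument array is the safer pattern, since no shell is involved. A sketch with a hypothetical userSuppliedPath:

const { spawn } = require('node:child_process');

// userSuppliedPath is hypothetical; spawn passes it as a single argv entry,
// so no shell ever interprets it and command injection via the path is not possible.
function runConverter(userSuppliedPath, date) {
  return spawn('python3', ['csv_to_rdf.py', userSuppliedPath, date]);
}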
I'm trying to create a simple node API that spawns a shell script and feeds user input from a POST call to that spawned process. I created a controller called testController.js that runs a script called test.sh located in the same project.
I was having a few problems writing the user input but thankfully this solution saved me.
So this really simple controller function ended up being:
testController.js:
exports.create_task = function (req, res) {
  var spawn = require("child_process").spawn;
  var spawned = spawn("sh", ["/var/www/html/node_test_proj/test.sh"]);
  spawned.stdout.on("data", function (data) {
    console.log("In stdout");
    spawned.stdin.write(req.body.name + "\n");
    spawned.stdin.write(req.body.number + "\n");
  });
  res.send("posted");
};
My shell script would basically just take a name and number and export those details into a file:
test.sh:
#!/bin/bash
echo "Please input your name"
read name
echo "Please input your number"
read number
echo "Your name is $name and number $number" > knowingthis.txt;
Simple enough; does what it's supposed to and (given name abc and number 123) prints out:
your name is abc and number 123
However, to simplify things further, I decided to replace the unnecessary echo statements with something simpler, i.e. read -p. Now my modified script becomes:
#!/bin/bash
read -p "Please input your name: " name;
read -p "Please input your number: " number;
echo "Your name is $name and number $number" > knowingthis.txt;
Lo and behold! Now when I spawn the script it no longer works; it doesn't even log "In stdout", which means execution never enters that handler. It simply exports the file with the variables empty, leaving the output as:
your name is and number
I thought maybe there's something wrong with the script, so I ran it directly, but it was working fine. Why is it working with read and not read -p? Is there something I need to change in my function? Is it not a normal stdout stream but something else?
The man page (or info page, or website page) for bash builtin commands says, under read's -p option (emphasis added):
Display prompt, without a trailing newline, before attempting to read any input. The prompt is displayed only if input is coming from a terminal.
'coming from' means directly, i.e. only if file descriptor #0 (stdin) of the shell process is an open file which is a terminal and not a (redirected) disk file or pipe or socket. When nodejs spawns a child process, the child's stdin is a pipe from nodejs, and the child's stdout and stderr are pipes to nodejs; they (all) are not terminals.
OTOH echo writes to stdout unconditionally (regardless of what type of file it is).
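Since read -p prints nothing when stdin is a pipe, there is no stdout event to wait for; but read still consumes whatever arrives on the pipe. So a sketch of a fix, assuming the script only needs those two lines on stdin: write both answers up front and close stdin instead of waiting for output.

exports.create_task = function (req, res) {
  var spawn = require("child_process").spawn;
  var spawned = spawn("sh", ["/var/www/html/node_test_proj/test.sh"]);
  // No prompt will appear on stdout, so don't wait for a 'data' event;
  // the pipe buffers these lines until each read in the script consumes them.
  spawned.stdin.write(req.body.name + "\n");
  spawned.stdin.write(req.body.number + "\n");
  spawned.stdin.end();
  res.send("posted");
};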
My node process gets a PDF file via an HTTP request, then uses the request's onData event to pass the incoming data on to a properly configured lpr, spawned via child_process.exec. I write to the child's stdin using stdin.write(...), followed by stdin.end() when done. This allows me to print those files immediately.
Now I have a situation where I don't want the data to be piped to lpr, but to some bash script. The script uses cat to process its stdin.
myscript.sh < somefile.pdf works as expected, as does cat somefile.pdf | myscript.sh.
However, when I spawn /path/to/script.sh from node (by simply replacing lpr with the script path in the source), the process exits with
events.js:183
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at WriteWrap.afterWrite [as oncomplete] (net.js:868:14)
Subsequently, the whole node process crashes, the error sneaking around all try...catch blocks. Logging at the beginning of the bash script shows it does not even get started.
When I target anything that's not a shell script but some compiled executable, like cat or echo, everything works just fine.
Adding the epipebomb module did not change anything.
I also tried piping to process.exec("bash", ["-c cat | myscript.sh"]), with the same errors.
An example bash script, just to test for execution:
#!/usr/bin/env bash
date > logfile.txt
cat > /dev/null
EDIT:
I think I may need to somehow signal that the stdin stream should be kept open.
The process-spawning part of the script, leaving promisification and output processing aside:
const child_process = require("child_process");

// inputObservable being an rxjs Observable
function execstuff(inputObservable) {
  const task = child_process.spawn("/path/to/script.sh");
  inputObservable.subscribe(
    chunk => task.stdin.write(chunk),
    error => console.error(error),
    () => task.stdin.end()
  );
}
There is an example in the child_process.spawn docs showing how to write ps ax | grep ssh as a Node.js script; maybe it will be helpful for you:
const { spawn } = require('child_process');

const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);

ps.stdout.on('data', (data) => {
  grep.stdin.write(data);
});
ps.stderr.on('data', (data) => {
  console.log(`ps stderr: ${data}`);
});
ps.on('close', () => {
  grep.stdin.end(); // close grep's stdin once ps has finished
});
grep.stdout.on('data', (data) => {
  console.log(`${data}`);
});
The first impression is that you are doing the same thing. The problem may be in the chunk data: maybe one of the chunks is null and it closes the stream before you get to close it yourself with task.stdin.end().
The other thing you can try is to run the Node.js script with NODE_DEBUG=stream node script.js. That will log the Node.js internals of how the stream behaves, which may also be helpful for you.
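Independently of the root cause, attaching an 'error' handler to the child's stdin keeps an EPIPE from crashing the whole node process. A sketch, reusing the task handle from the snippet in the question:

task.stdin.on('error', (err) => {
  // EPIPE arrives asynchronously, which is why try...catch never sees it;
  // handling the 'error' event stops it from taking down the process.
  if (err.code === 'EPIPE') {
    console.error('child closed its stdin early:', err.message);
  } else {
    throw err;
  }
});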
When you type an unterminated command in the mongo shell, it prints three dots indicating that more input is needed to complete the command, like below:
> db.test.find(
... {
...
I am using nodejs child_process.spawn to create a mongo shell process and listen to its output. I can get the standard and error output from the mongo shell, but I can't get the ... output. Below is my nodejs code:
const shell = spawn('mongo', params);

shell.stdout.on('data', (data) => {
  winston.debug('get output ' + data);
});

shell.stderr.on('data', (data) => {
  const output = data + '';
  winston.error('get error output ' + output);
});
I run the code below to send a command to the shell:
shell.stdin.write('db.test.find(');
I wonder why I can't get the ... output with the above method. Is it a special kind of output?
EDIT1
I tried node-pty and pty.js. They can get the ... output, but they mix the input and output data together, and it is not possible to separate them.
I also tried stdbuf and unbuffer to disable buffering, but it still doesn't work.
It seems that nodejs child_process doesn't work well with interactive commands.
The first snippet of your code doesn't include anything that writes to the stdin of your child process, so I would be surprised if you got the ellipsis that indicates an incomplete command when in fact you don't send any command at all - incomplete or otherwise.
That having been said, many command line utilities behave differently when they discover a real terminal connected to their stdin/stdout. E.g. git log will page the results when you run it directly, but not when you pipe the results to some other command like git log | cat, so this may also be the case here.
This can also have to do with buffering: if your stream is line-buffered, then you won't see any line that is not ended with a newline right away - and the ... prompt is exactly such a line.
The real question is: do you see the > prompt? Do you send any command to the mongo shell?
Scripting interactive CLI tools can be tricky. E.g. see what I had to do to test a very simple interactive program here:
https://github.com/rsp/rsp-pjc-c01/blob/master/test-z05.sh#L8-L16
I had to create two named pipes, make sure that stdin, stderr and stdout are not buffered, and then use some other tricks to make it work. It is a shell script but it's just to show you an example.
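For what it's worth, a pseudo-terminal is the only way to make the mongo shell believe it is talking to a terminal, which is why node-pty showed the ... prompt. A minimal node-pty sketch; the echo of your own input is mixed into the output precisely because a PTY echoes what you type, which matches what EDIT1 describes:

const pty = require('node-pty');

const shell = pty.spawn('mongo', [], {
  name: 'xterm-color',
  cols: 80,
  rows: 30,
  env: process.env
});

// Output and the echo of our own input arrive on the same single data channel,
// so they cannot be cleanly separated here.
shell.onData((data) => {
  console.log('pty output: ' + JSON.stringify(data));
});

shell.write('db.test.find(\r');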
I'm trying to wrap the lftp program in a node.js application, using child_process. The problem is that lftp doesn't write its output to stdout, so I cannot catch its output in node.js. Sample code:
var proc = require('child_process').spawn('lftp', ['-p', port, '-u', username + ',' + password, host]);

proc.stdout.on('data', function (data) {
  console.log('stdout:', data.toString('utf-8'));
});

proc.on('exit', function (code) {
  console.log('process exited with code ' + code);
});

proc.stdin.write('ls');
// proc.stdin.end();
If I uncomment the line that calls stdin.end() for the lftp child process, the output from the ls command appears in my terminal as it should. If I don't, the process simply hangs and nothing gets output.
I've also tried using unbuffer, but it doesn't seem to allow me to write to lftp's stdin anymore. It outputs the usual "[Resolving host address...]" stuff, but not the output from the ls command.
My question is: what do I have to do to be able to interact with lftp using node.js' child_process?
Well, this was dumb. I forgot to write a newline after the ls command to stdin. It seems to work without the need for unbuffer.
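For completeness, the fixed line from the snippet above:

proc.stdin.write('ls\n'); // the trailing newline is what makes lftp execute the command

With the newline in place, stdin can stay open for further commands instead of being end()ed just to flush the first one.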