child_process gives 'Command failed' executing grep - node.js

I want to search for existence of a file using child_process in node:
const { exec } = require('child_process');
exec('ls | grep "filename"', (err, result) => {...})
When the filename exists, the exec result is fine. But when the filename doesn't exist, I get an error:
Command failed: ls | grep "filename"
In this case, how can I tell if it's an error executing the command, or just because no result is found?
EDIT
Thanks for the advice on not searching for a file this way. The code above is not my actual code, just a demo illustrating my problem with grep. In my real case I'm searching for keywords in the output of task spooler, which is why I used exec and tsp -l | grep ...

You can determine this by looking at the value of err.code in the callback. It is documented in more detail in the Node.js docs.
Since the last command in the pipe is grep, consult the grep manpage for a complete list of status codes to branch your logic appropriately. In your case, grep will return a status code of 1 if no lines were selected (i.e. "there are no results"), so if err.code === 1, then you know no files were matched.
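For a quick sense of the distinction, here is roughly what the exit statuses look like from a shell; err.code in the callback carries the same value, since the shell's exit status for a pipeline is that of its last command (the second example assumes GNU grep):
$ ls | grep "filename"; echo "exit status: $?"
exit status: 1
$ grep "filename" /no/such/path; echo "exit status: $?"
grep: /no/such/path: No such file or directory
exit status: 2
Status 1 means nothing matched; a status greater than 1 means grep itself failed, which is the case you want to treat as a real error.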
Note: as mentioned by @CharlesDuffy, you should prefer to do this kind of file check through the Node.js File System API. Use this answer as an alternative for situations where exec is explicitly needed.

Related

How to use set -x without showing stdout?

Within CI, I am running a bash script that calls many bash scripts.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
This does not disable the stdout produced by the script.
The GitLab CI runner stops logging once the job log exceeds its size limit; the job reports Job's log exceeded limit of 10240000 bytes.
I know the log can only grow.
How can I optimize the output log size?
I don't need all of stdout. I could keep only stderr, but then it would be a long-running script that prints no progress information.
Is there a way to display the command that is currently running, like set -x does?
Edit
Reading the answers, I was not able to solve my issue. I should add that I am using Node.js to run the bash script that runs the long bash script.
This is how I call my node script within .gitlab-ci.yml:
script:
- node my_script.js
Within my_script.js, I have:
const { spawn } = require('child_process');
const path = require('path');

exports.handler = () => {
  const ls = spawn('bash', [path.join(__dirname, 'release.sh')], { stdio: 'inherit' });
  ls.on('close', (code) => {
    if (code !== 0) {
      console.log(`release.sh exited with code ${code}`);
      process.exitCode = code;
    }
  });
};
Within release.sh, I have:
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
You can selectively redirect file handles with exec.
exec >stdout 2>stderr
This however loses the connection to the terminal, so there is no simple way to output anything to the terminal after this point.
You can instead duplicate a file descriptor with m>&n (or m<&n for the input direction), where n is the descriptor you want to duplicate and m is the new descriptor that becomes a copy of it (choose a big number like 99 for m so you don't accidentally clobber an existing descriptor).
exec 98<&1 # stdout
exec 99<&2 # stderr
exec >/dev/null 2>&1
:   # ... commands whose output should stay hidden run here ...
To re-enable output,
exec 1<&98 2<&99
If you redirected to a temporary file instead of /dev/null you could obviously now show the tail of those files to the caller.
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
(On a shared server, probably use mktemp to create a unique temporary directory at the beginning of your script; static hard-coded file names make it impossible to run two builds at the same time.)
As you usually can't predict where the next error will happen, probably put all of this in a wrapper script which performs the redirection, runs the build, and finally displays the tail end of the temporary log files. Some build servers probably want to see some signs of life in the log file every few minutes, so perhaps tail a few lines every once in a while in a loop, too.
On the other hand, if there is just a single build command, the whole build job's stdout and stderr can simply be redirected to a log file, and you don't need to exec things back and forth. If you need to enable output selectively for portions of the script, use exec as above; but for wholesale redirection, just redirect the one command.
In summary, maybe your build script would look something like this.
#!/bin/sh
t=$(mktemp -t -d cibuild.XXXXXXXX) || exit
trap 'kill $buildpid; wait $buildpid; tail -n 500 "$t"/*; rm -rf "$t"' 0 1 2 3 5 15
# Your original commands here
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}">"$t"/stdout 2>"$t"/stderr &
buildpid=$!
while kill -0 $buildpid; do
sleep 180
date
tail -n 1 "$t"/*
done
wait
A flaw with this approach is that you lose timing information. A proper solution would let you see when each line was produced, and display standard output and standard error intermixed in the order the messages were printed, perhaps with visible time stamps, and even with coloring hints (red time stamps for stderr?).
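A rough way to approximate that, building on the script above (this is only a sketch: it reuses $t and ${initial_process_wd} from the snippets above, requires bash for the process substitutions rather than plain sh, and assumes GNU awk for strftime):
# timestamp and tag each line; ordering across the two streams is only approximate
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" \
  > >(gawk '{ print strftime("[%H:%M:%S]"), "OUT", $0; fflush() }' >> "$t"/combined.log) \
  2> >(gawk '{ print strftime("[%H:%M:%S]"), "ERR", $0; fflush() }' >> "$t"/combined.log)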
Option 1
If your script writes its error messages to stderr, you can ignore everything on stdout with command > /dev/null, where /dev/null is a device that simply discards anything written to it.
Option 2
If there is a pattern to your error messages, you can use grep to keep only the lines that match it.
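A sketch of what that could look like in the CI step (the pattern here is only a guess; adjust it to whatever your scripts actually print on failure):
# keep only lines that look like problems; note that the step's exit status
# becomes grep's unless you also set -o pipefail
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" 2>&1 | grep -Ei 'error|warn|fail'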
Edit 1:
To show the command that is running, you can pass the -x option to bash; your command then becomes
bash -x ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
bash will print each command it executes to stderr.
Edit 2:
If you want to reduce the size of the output file, you can pipe it through gzip: ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" | gzip > logfile.
To read the content of the logfile, you can use zcat logfile.

Prevent script running with same arguments twice

We are looking into building a logcheck script that will tail a given log file and send an email when the given arguments are found. I am having trouble accurately determining whether another instance of this script is running with at least one of the same arguments against the same file. The script can take the following:
logcheck -i <filename(s)> <searchCriterion> <optionalEmailAddresses>
I have tried to use ps aux with a series of grep, sed, and cut, but it always ends up being more code than the script itself and seldom works very efficiently. Is there an efficient way to tell if another version of this script is running with the same filename and search criteria? A few examples of input:
EX1 ./logcheck -i file1,file2,file3 "foo string 0123" email@address.com
EX2 ./logcheck -s file1 Hello,World,Foo
EX3 ./logcheck -i file3 foo email@address1.com,email@address2.com
In this case EX3 should not run, because EX1 is already running with the parameters file3 and foo.
There are many possible solutions to your problem; I would recommend creating a lock file with the following format:
arg1Ex1 PID#(Ex1)
arg2Ex1 PID#(Ex1)
arg3Ex1 PID#(Ex1)
arg4Ex1 PID#(Ex1)
arg1Ex2 PID#(Ex2)
arg2Ex2 PID#(Ex2)
arg3Ex2 PID#(Ex2)
arg4Ex2 PID#(Ex2)
When your script starts:
It searches the file for all the arguments it has received (with awk or grep)
If one of the arguments is present in the list, it fetches the associated PID (awk '{print $2}', for example) and checks whether that process is still running (ps). This double check matters because, if a process previously ended abnormally, stale entries may remain in the file
If the PID is still alive, the script does not run
Otherwise, it appends its arguments together with the current PID to the lock file and runs
At the end of the execution, remove the lines containing the arguments the script used, or remove all lines carrying its PID. A rough sketch of this approach follows.
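(The file name and messages below are made up; the PID is stored first on each line, unlike the format sketched above, so that arguments containing spaces stay intact; a real implementation should wrap the lock-file updates in flock(1) to avoid races between concurrent starts.)
#!/bin/bash
lockfile=/var/tmp/logcheck.lock   # each line: "<pid> <argument>"
touch "$lockfile"
# refuse to start if a live process has already registered one of our arguments
for arg in "$@"; do
    pid=$(awk -v a="$arg" '{ p = $1; sub(/^[^ ]+ /, ""); if ($0 == a) { print p; exit } }' "$lockfile")
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "logcheck already running with '$arg' (pid $pid)" >&2
        exit 1
    fi
done
# register our own arguments, and clean them up again when we exit
for arg in "$@"; do
    printf '%s %s\n' $$ "$arg" >> "$lockfile"
done
trap 'grep -v "^$$ " "$lockfile" > "$lockfile.$$"; mv "$lockfile.$$" "$lockfile"' EXIT
# ... the actual logcheck logic (tail, grep, mail) goes here ...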

How to get mongo shell output (three dots) for an unterminated command

When you type an unterminated command in a mongo shell, it returns three dots indicating that more input is needed to complete the command, like below:
> db.test.find(
... {
...
I am using nodejs child_process.spawn to create a mongo shell process and listen on its output. I can get the standard and error output from the mongo shell but I can't get the ... output. Below is my nodejs code:
const { spawn } = require('child_process');
const winston = require('winston');

const shell = spawn('mongo', params);
shell.stdout.on('data', (data) => {
  winston.debug('get output ' + data);
});
shell.stderr.on('data', (data) => {
  const output = data + '';
  winston.error('get error output ' + output);
});
I run the code below to send a command to the shell:
shell.stdin.write('db.test.find(');
I wonder why I can't get the ... output with the above method. Is it some kind of special output?
EDIT1
I tried node-pty and pty.js. They can capture the ... output, but they mix the input and output data together, and it is not possible to separate them.
I also tried stdbuf and unbuffer to disable buffering, but it still doesn't work.
It seems that the Node.js child_process module doesn't work well with interactive commands.
Your code doesn't include anything that writes to the stdin of your child process, so I would be surprised if you got the ellipsis that indicates an incomplete command when in fact you don't send any command at all, incomplete or otherwise.
That having been said, many command-line utilities behave differently when they discover a real terminal connected to their stdin/stdout. E.g. git log will page the results when you run it directly but not when you pipe them to some other command like git log | cat, so this may also be the case here.
This can also have to do with the buffering - if your stream is line-buffered then you won't see any line that is not ended with a newline right away.
The real question is: do you see the > prompt? Do you send any command to the mongo shell?
Scripting interactive CLI tools can be tricky. E.g. see what I had to do to test a very simple interactive program here:
https://github.com/rsp/rsp-pjc-c01/blob/master/test-z05.sh#L8-L16
I had to create two named pipes, make sure that stdin, stderr and stdout are not buffered, and then use some other tricks to make it work. It is a shell script but it's just to show you an example.

Internal Variable PIPESTATUS

I am new to Linux and bash scripting and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit statuses of the individual commands in a pipe.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put the code in a bash script it shows only one exit status. My default shell on the command line is bash as well.
Can somebody help me understand why this behaviour changes, and what I should do to make it work in a script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e " find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]} > $cmdfile
The problem is that you aren't running the same code in the script as you ran on the command line; the script adds eval. If you were to wrap your command-line test in eval, you would see it fail in the same way.
The reason the eval version fails (gives you only one value in PIPESTATUS) is that you aren't executing a pipeline anymore; you are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line': the thing actually run by the current shell is a single command, so it has a single exit status.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop putting the command in a string (see Bash FAQ 050 for more on why that is a bad idea); a reworked version is sketched after this list.
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
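Roughly, option 1 applied to the script from the question looks like this (same pipeline, no eval, no command strings):
#!/bin/bash
backfile=/var/tmp/backup$$
find_fun() {
    find /home 2>/dev/null
}
# the pipeline is executed directly by the current shell,
# so PIPESTATUS gets one entry per stage
find_fun | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz"
echo -e "find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}"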
Instead of ${PIPESTATUS[0]} use ${PIPESTATUS[@]}.
As with any array in bash, PIPESTATUS[0] contains only the exit status of the first command. If you want all of them, you have to use ${PIPESTATUS[@]}, which expands to the entire contents of the array.
I'm not sure why it worked for you when you tried it on the command line. I tested it and I didn't get the same result as you.

cannot save the results of find command in a file - file is empty

The program is supposed to find a file and return whether it exists on the system or not. I have found that the find command should be used, but since this command will be initiated by the code (using System), I need to save the results in a file. Trying it on the terminal, I cannot get it to work; the result doesn't appear in the file. I am trying:
find / -name 'test2abc' 2>/dev/null -> res
The file res is empty. How to do it right?
Also, is there a better way to do it? I am supposed to print the details of the file using the stat command if it exists. Is there a way to use just the stat command to search for the file in the subfolders as well?
The -> res part should be > res only.
If you try the command like this on the commandline:
find / -name 'test2abc' -> res
it will print an error:
find: paths must precede expression: -
The - is not part of any valid redirection and is therefore passed on to find, which cannot interpret it either.
It may be wise not to suppress error messages. A simple approach is to redirect both stderr and stdout to the file, like this:
find / -name 'test2abc' > res 2>&1
Then the error about the - would have been in the file right from the start, and you would have known what's wrong very quickly.
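As for the second part of the question: stat itself does not search subdirectories, but one option (a sketch, not the only way) is to let find run stat on every match:
find / -name 'test2abc' -exec stat {} \; > res 2>/dev/null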
