The piece of script below is not behaving as expected:
if docker pull docker.pkg.github.com/private-repo/centos7 | grep -q 'Error response from daemon: unauthorized'; then
echo "matched"
else
echo "unmatched"
fi
output:
Error response from daemon: unauthorized
unmatched
expected output:
matched
I have followed this post
What I have tried:
I replaced docker pull docker.pkg.github.com/private-repo/centos7 with echo "Error response from daemon: unauthorized" and it gives the expected output, matched.
So what I understand is that the output of the command "docker pull docker.pkg.github.com/private-repo/centos7" is not being captured by grep, but I don't understand why.
I've also tried this but same result:
docker pull docker.pkg.github.com/private-repo/centos7 | grep 'Error response from daemon: unauthorized' &> /dev/null
if [ $? == 0 ]; then
echo "matched"
else
echo "unmatched"
fi
Working solution suggested by @Gordon Davisson:
docker pull docker.pkg.github.com/private-repo/centos7 2>&1 | grep 'Error response from daemon: unauthorized' &> /dev/null
if [ $? == 0 ]; then
echo "matched"
else
echo "unmatched"
fi
output:
matched
It's just as @Gordon Davisson said, and please give him the answer credit if he chooses to claim it. I'm just making the answer more visible.
This is an oversimplification, but it will get the point across. All output is sent to the terminal through two streams, stdout and stderr.
When you use the basic pipe syntax (|), the only thing actually processed by the pipe is stdout; stderr is still printed straight to the terminal. In your case this is undesirable behavior.
The fix is to merge stderr into stdout BEFORE the pipe; the syntax for this is 2>&1 (or, in Bash, |&). This works around the pipe's limitation of only carrying stdout, and it also prevents the stderr leak to the terminal.
if docker pull… 2>&1 | grep -q…
<SNIPPED>
OR IN BASH
if docker pull… |& grep -q…
<SNIPPED>
The reason your 2nd attempted solution didn't work is that pipes and redirections are processed in order from left to right.
if docker pull… | grep… &> /dev/null
# ^ LEAK HAPPENS HERE, FIX COMES TOO LATE
<SNIPPED>
Meaning that the stderr leak to the terminal had already happened BEFORE you redirected grep's output, and that the error output wasn't coming from grep in the first place.
You might also have some luck searching for just Error instead of the whole string, in case something in the way you typed the full string is slightly off.
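Here is a minimal way to see the leak for yourself, using ls on a missing path as a stand-in for docker pull (both write their error message to stderr):
ls /nonexistent | grep -q 'No such file' && echo matched || echo unmatched
# prints the ls error to the terminal, then "unmatched": stderr bypassed the pipe
ls /nonexistent 2>&1 | grep -q 'No such file' && echo matched || echo unmatched
# prints "matched": stderr was merged into stdout before the pipe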
Related
I have a bash script I use to check if certain forever processes are running. The script is basically:
#!/bin/bash
processIsRunning=$(forever list | grep -q 'process/index.js')
if [ -n $processIsRunning]; then
echo 'Processes are running'
else
echo 'Processes are not running'
fi
I get this error though:
events.js:72
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at errnoException (net.js:904:11)
at Object.afterWrite (net.js:720:19)
If I remove the '-q' flag from my grep command in line 3, then I do not get a pipe error, but instead I get an error about bash trying to run the grep result as a command instead of just checking that the length of the output is greater than 0.
Does anyone know why the -q parameter would cause an EPIPE error?
UPDATE BASED ON COMMENTS:
My mistake: I'm pretty new to bash and was trying to learn how to use if statements. I originally had the grep directly in the if statement but pulled it out into a variable because it wasn't working (it turns out it was failing because of my lack of spaces; I didn't realize they are a requirement in bash), and I clearly didn't port it out properly. I'm currently using just grep without -q and then checking the length of the output, and that is working well.
Try something like this:
forever list | grep -q 'process/index.js'
if [ $? -eq 0 ]; then
echo 'Processes are running'
else
echo 'Processes are not running'
fi
grep -q tells grep not to write anything to standard output and to exit as soon as it finds a match. That early exit closes the pipe, so a writer that keeps producing output (here, forever) gets an EPIPE when it next writes, which is where your unhandled 'error' event comes from. Without -q, grep reads its input to the end, so the pipe stays open.
$? is used to find the return value of the last executed command.
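You can see the same effect without forever: grep -q exits on the first match and the writer is killed by SIGPIPE (reported as status 141 in bash):
yes | grep -q y
echo "${PIPESTATUS[@]}"
# typically prints "141 0": yes died of SIGPIPE, grep matched and exited 0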
I'm creating a script which checks whether a VPS has the TUN driver enabled.
The check command is:
cat /dev/net/tun
If it returns:
cat: /dev/net/tun: File descriptor in bad state
the module is enabled; otherwise the script should report ERROR.
Here is my script:
tunstring="File descriptor in bad state"
if cat /dev/net/tun | grep -q "$tunstring"; then
echo "GOOOOOD"
else
echo "ERROR"
fi
I get the ERROR message.
I tried the same script with a text file and it worked...
Since that output is written to stderr, you can use:
tunstring="File descriptor in bad state"
if cat /dev/net/tun |& grep -q "$tunstring"; then
echo "GOOOOOD"
else
echo "ERROR"
fi
|& pipes the previous command's stdout and stderr to the next command in the pipeline.
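If your shell doesn't support the bash |& shortcut, the equivalent is to merge stderr into stdout explicitly before the pipe:
tunstring="File descriptor in bad state"
if cat /dev/net/tun 2>&1 | grep -q "$tunstring"; then
    echo "GOOOOOD"
else
    echo "ERROR"
fi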
It also looks like the path /dev/net/tun isn't valid anymore on your VPS and the cat command is failing to read it.
How to disable stdout or stderr in bash scripts temporarily?
Of course the most common way is to redirect stdout or stderr to /dev/null.
But on some systems /dev/null may be unwritable for normal users.
I am writing scripts that aim to be portable, so I would prefer not to rely on /dev/null.
Some blogs/posts say that >&- can close stdout, but when I tried echo 123 >&- in a bash terminal, it just failed with the message "bash: echo: write error: Bad file descriptor"
Surely I can do it by redirecting stdout or stderr to a tmp file like this:
some_command > /tmp/null
But what I want is a more "elegant" way.
I think perhaps I can achieve this by using a pipe, like this:
some_command | :
But in this way, it may "pollute" the exit code of the original command.
Here is a possible way to do what you want:
( my_cmd 3>&1 1>&2 2>&3- ) | :
This temporarily duplicates stdout to a new file descriptor, 3, sends stdout to stderr, and then redirects stderr to descriptor 3 (the saved stdout) so that it is my_cmd's stderr, not its stdout, that gets piped into the command (in this case, :). The net effect is that the stdout of my_cmd is not piped into :. The - in 2>&3- closes the descriptor after it's used.
To check the exit status of my_cmd after the above, examine ${PIPESTATUS[0]}. PIPESTATUS is a bash array variable that holds the exit status of each command in the most recently executed pipeline.
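For example (my_cmd is the same placeholder as above):
( my_cmd 3>&1 1>&2 2>&3- ) | :
status=${PIPESTATUS[0]}   # exit status of the subshell running my_cmd
echo "my_cmd exited with $status"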
I think the really correct answer is to investigate why /dev/null isn't world-writable. Having it otherwise is a non-standard system configuration and may cause system problems. The above work-around is a little messy by comparison.
Based on what I wrote earlier and @nos's comment above, here's an example:
(assuming you have no file called 'zzz' in your current directory, and that '.' is readable)
#!/bin/bash
set -o pipefail
ls . 2>&1 |:
echo $?
ls zzz 2>&1 |:
echo $?
Both pipelines run silently, whether they succeed or fail, and the exit code is preserved. Note that you can probably still construct a pipeline where this would not produce the desired results; I haven't come up with one yet, but that doesn't mean it's not out there. The best answer, as many have noted already, is to fix the system so that /dev/null is world-writable.
EDIT: Changed /bin/sh to /bin/bash, although this probably isn't necessary. But since I haven't tested this against a true Bourne Shell, I decided to err on the side of caution.
EDIT: Another script, showing several different redirections, and using the |& shortcut for 2>&1 |. If you run this, you'll notice that some of the ls failures return a 141 exit status rather than the expected 2. This is a broken pipe exit status, but still represents a failure.
#!/bin/bash
set -o pipefail
# start with commands that should succeed
# redirect everything to ':'
echo "ls . |& :"
ls . |& :
echo $?
# redirect only stdout to ':'
echo "ls . | :"
ls . | :
echo $?
# redirect only stderr to ':'
echo "((ls . 1>&3) |& : ) 3>&1"
((ls . 1>&3) |& : ) 3>&1
echo $?
# now move to failures
# redirect everything to ':'
echo "ls zzz |& :"
ls zzz |& :
echo $?
# redirect only stdout to ':'
echo "ls zzz |:"
ls zzz |:
echo $?
# redirect only stderr to ':'
echo "((ls zzz 1>&3) |& : ) 3>&1"
((ls zzz 1>&3) |& : ) 3>&1
echo $?
I use two subshells when I'm attempting to destroy stdout but keep stderr. You could do it without the outer one; in fact, that might be better: instead of getting a broken pipe error, you get an exit status of 1.
I'm writing a bash script that is supposed to be "transparent" to the user. It reads commands from the user and intercepts them, allowing only some of them to be executed by bash, depending on some criteria. It (basically) works like this:
while true; do
    read COMMAND
    can_be_done $COMMAND
    if [ $? == 0 ]; then
        eval $COMMAND
        if [ $? != 0 ]; then
            echo "Error: command not found"
        fi
    fi
done
The problem is, when the command fails, you also get stuff printed to the console. BUT, if I keep the result in a variable and only print it when it doesn't fail, like so:
RESULT=$(eval $COMMAND)
Then there's another problem: the special formatting gets lost (for example, "ls --color" doesn't show colors anymore).
My question is: Is there a way to have the command print to STDOUT if successful, but to /dev/null if it fails?
Do you really need the second part, replacing the output of the command with an error message? Linux commands print their own error messages, which aren't necessarily "command not found". You'd be hiding the true error (permission denied, file not found, out of memory, segfault, etc.) with an oftentimes incorrect error message (command not found).
If you remove that check, you could simplify the loop to something like this:
while true; do
    read -e COMMAND
    if can_be_done "$COMMAND"; then
        eval "$COMMAND"
    fi
done
read -e uses readline to obtain the command, making the prompt a lot more shell-like (↑ and ↓ for history, for instance).
command; if [ $? == 0 ]; then is more idiomatically written as if <command>; then.
Quoting makes sure special characters and whitespace are handled properly.
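For instance (using grep on /etc/passwd purely as an illustration), the explicit $? check and the direct if test are equivalent, with the latter being the idiomatic form:
grep -q root /etc/passwd
if [ $? -eq 0 ]; then echo "found"; fi

if grep -q root /etc/passwd; then echo "found"; fi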
I would argue strongly that you should not do this. If you do not want to see output, redirect it to /dev/null. If you do want to see errors, do not redirect stderr. If you are using a program that prints its error messages on stdout instead of stderr, FIX THE PROGRAM! Error messages belong on stderr. Note that this means your program is broken, as it ought to read:
echo "Error: command not found" >&2
I'm not sure if it is rule number 1, but it certainly belongs in the top 10, and it may be the most often violated rule: Error messages belong on stderr. A program which prints error messages on stdout is broken.
if false > /dev/null;then echo 1; else echo 2; fi 2> /dev/null
Will output 2
if true > /dev/null;then echo 1; else echo 2; fi 2> /dev/null
Will output 1
Remove the > /dev/null to print the command's output to stdout as well.
For example:
if echo 123;then echo 1; else echo 2; fi 2> /dev/null
Will output:
123
1
Assuming that the command is not very expensive to run you can do this:
test `ls /mooo 2>/dev/null` || echo moo not found
test returns true only if its argument is a non-empty string; here ls prints the directory contents on success and nothing on failure (its error message goes to /dev/null), so the test succeeds only when ls produced some output. You could have put this in an if statement too, like so:
if [ `ls /moo 2>/dev/null` ];then
echo moo is a folder
fi
I have a bash script that checks some log files created by a cron job that have time stamps in the filename (down to the second). It uses the following code:
CRON_LOG=$(ls -1 $LOGS_DIR/fetch_cron_{true,false}_$CRON_DATE*.log 2> /dev/null | sed 's/^[^0-9][^0-9]*\([0-9][0-9]*\).*/\1 &/' | sort -n | cut -d ' ' -f2- | tail -1 )
if [ -f "$CRON_LOG" ]; then
printf "Checking $CRON_LOG for errors\n"
else
printf "\n${txtred}Error: cron log for $CRON_NOW does not exist.${txtrst}\n"
printf "Either the specified date is too old for the log to still be around or there is a problem.\n"
exit 1
fi
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code")
if [ -z "$CRIT_ERRS" ]; then
printf "%74s[${txtgrn}PASS${txtrst}]\n"
else
printf "%74s[${txtred}FAIL${txtrst}]\n"
printf "Critical errors detected! Outputting to console...\n"
echo $CRIT_ERRS
fi
So this bit of code works fine, but I'm trying to clean up my scripts now and implement set -e at the top of all of them. When I do it to this script, it exits with error code 1. Note that I have errors from the first statement dumping to /dev/null. This is because some days the file has the word "true" and other days "false" in it. Anyway, I don't think this is my problem, because the script outputs "Checking xxxxx.log for errors." before exiting when I add set -e to the top.
Note: the $CRON_DATE variable is derived from user input. I can run the exact same statement from the command line ("$ ./checkcron.sh 01/06/2010") and it works fine without the set -e statement at the top of the script.
UPDATE: I added "set -x" to my script and narrowed the problem down. The last bit of output is:
Checking /map/etl/tektronix/logs/fetch_cron_false_010710054501.log for errors
++ cat /map/etl/tektronix/logs/fetch_cron_false_010710054501.log
++ grep ERROR
++ grep -v 'Duplicate tracking code'
+ CRIT_ERRS=
[1]+ Exit 1 ./checkLoad.sh...
So it looks like the problem is occurring on this line:
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code")
Any help is appreciated. :)
Thanks,
Ryan
Adding set -x, which prints a trace of the script's execution, may help you diagnose the source of the error.
Edit:
Your grep is returning an exit code of 1 since it's not finding the "ERROR" string.
Edit 2:
My apologies regarding the colon. I didn't test it.
However, the following works (I tested this one before spouting off) and avoids calling the external cat. Because you're setting a variable using the results of a subshell and set -e looks at the subshell as a whole, you can do this:
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code"; true)
bash -c 'f=`false`; echo $?'
1
bash -c 'f=`true`; echo $?'
0
bash -e -c 'f=`false`; echo $?'
bash -e -c 'f=`true`; echo $?'
0
Note that backticks (and $()) "return" the error code of the last command they run. Solution:
CRIT_ERRS=$(cat $CRON_LOG | grep "ERROR" | grep -v "Duplicate tracking code" | cat)
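A small sketch of how both work-arounds behave under set -e (plain bash, no pipefail; grepping /dev/null so the match always fails):
#!/bin/bash
set -e
# grep exits 1 (no match), but the last command in each substitution exits 0,
# so neither assignment trips set -e
v1=$(grep "ERROR" /dev/null; true)
v2=$(grep "ERROR" /dev/null | cat)
echo "still running"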
Redirecting error messages to /dev/null does nothing about the exit status returned by the command. The reason your ls command isn't causing the error is that it's part of a pipeline, and the exit status of a pipeline is the return value of the last command in it (unless pipefail is enabled).
Given your update, it looks like the command that's failing is the last grep in the pipeline. grep only returns 0 if it finds a match; otherwise it returns 1, and if it encounters an error it returns 2. This is a danger of set -e: things can fail even when you don't expect them to, because commands like grep return a non-zero status even when there hasn't been an error. set -e also fails to exit on errors earlier in a pipeline, and so may miss some errors.
The solutions given by geocar or ephemient (piping through cat or using || : to ensure that the last command in the pipe returns successfully) should help you get around this, if you really want to use set -e.
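Two quick illustrations of those points (run without set -e, just to show the exit codes):
false | true; echo $?
# 0: the pipeline's status is the last command's (no pipefail)
echo "all good" | grep "ERROR"; echo $?
# 1: grep found no match, which set -e would treat as a failure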
Asking for set -e makes the script exit as soon as a simple command exits with a non-zero exit status. This combines perniciously with your ls command, which exits with a non-zero status when asked to list a non-existent file, which is always the case for you because the true and false variants don't co-exist.