if command "cat /dev/net/tun" result $string then - linux

I'm creating a script which checks whether a VPS has the TUN driver enabled.
The check command is:
cat /dev/net/tun
If it returns:
cat: /dev/net/tun: File descriptor in bad state
the module is enabled; otherwise the script should report ERROR.
Here is my script:
tunstring="File descriptor in bad state"
if cat /dev/net/tun | grep -q "$tunstring"; then
    echo "GOOOOOD"
else
    echo "ERROR"
fi
I get the ERROR message.
I tried the same script with a text file, and it worked...

Since that output is being written to stderr, you can use:
tunstring="File descriptor in bad state"
if cat /dev/net/tun |& grep -q "$tunstring"; then
    echo "GOOOOOD"
else
    echo "ERROR"
fi
|& pipes the previous command's stdout and stderr to the next command in the pipeline; it is shorthand for 2>&1 |.
It also looks like the path /dev/net/tun may no longer be valid on your VPS, and the cat command is failing to read it.
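Note that |& requires Bash 4 or newer. If the script may run under an older Bash or plain sh, a portable sketch of the same check uses the explicit 2>&1 form:
tunstring="File descriptor in bad state"
if cat /dev/net/tun 2>&1 | grep -q "$tunstring"; then
    echo "GOOOOOD"
else
    echo "ERROR"
fi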


Unable to grep output of a command in bash script

The script below is not behaving as expected:
if docker pull docker.pkg.github.com/private-repo/centos7 | grep -q 'Error response from daemon: unauthorized'; then
    echo "matched"
else
    echo "unmatched"
fi
Output:
Error response from daemon: unauthorized
unmatched
Expected output:
matched
I have followed this post.
What I have tried:
I replaced "docker pull docker.pkg.github.com/private-repo/centos7" with echo "Error response from daemon: unauthorized" and it gives the expected output, matched.
So what I understand is that the output of "docker pull docker.pkg.github.com/private-repo/centos7" is not captured by grep, but I don't understand why.
I've also tried this, but with the same result:
docker pull docker.pkg.github.com/private-repo/centos7 | grep 'Error response from daemon: unauthorized' &> /dev/null
if [ $? == 0 ]; then
    echo "matched"
else
    echo "unmatched"
fi
Working solution suggested by @Gordon Davisson:
docker pull docker.pkg.github.com/private-repo/centos7 2>&1 | grep 'Error response from daemon: unauthorized' &> /dev/null
if [ $? == 0 ]; then
    echo "matched"
else
    echo "unmatched"
fi
output:
matched
It’s just as @Gordon Davisson said, and please give him the answer credit if he chooses to claim it. I’m just making the answer more visible.
This is an oversimplification, but it will get the point across. All “outputs” are sent to the terminal through stdout and stderr.
When you use the basic pipe syntax (|), the only thing actually processed by the pipe is stdout. The stderr is still printed to the terminal. In your case this is undesirable behavior.
The fix is to merge stderr into stdout BEFORE the pipe; the syntax for this is 2>&1 (or, in Bash, |&). This works around the pipe’s limitation of only being able to process stdout, and it also prevents the stderr leak to the terminal.
if docker pull… 2>&1 | grep -q…
<SNIPPED>
OR IN BASH
if docker pull… |& grep -q…
<SNIPPED>
The reason your second attempted solution didn’t work is that pipes and redirections are processed in order, from left to right.
if docker pull… | grep… &> /dev/null
# ^ LEAK HAPPENS HERE, FIX COMES TOO LATE
<SNIPPED>
Meaning the stderr leak into the terminal had already happened BEFORE you redirected grep’s output, and the failure wasn’t coming from grep in the first place.
Separately, you might have better luck searching for just Error instead of the whole string, in case something is off in the way you typed it out.
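If it helps to see the leak in isolation, here is a minimal demonstration; err is a throwaway function, not part of the original question:
err() { echo "oops" >&2; }                                   # writes only to stderr
err | grep -q oops && echo matched || echo unmatched         # prints "unmatched": stderr bypasses the pipe
err 2>&1 | grep -q oops && echo matched || echo unmatched    # prints "matched": stderr merged before the pipe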

How can I echo an error message and send it to a log file?

I'm trying to echo the error message as well as write it to a log file at the same time, but I'm not sure how to do it. I've used 1>&2, but it just sends the message to the log file and doesn't echo it. Here's my code:
while read -r username password; do
egrep "^$username" /etc/passwd >/dev/null
if [ $? -eq 0 ]; then
echo "ERROR BLABLAH $DATE" 1>&2 >> /var/log/error.log
Try:
echo "ERROR BLABLAH $DATE" | tee -a /var/log/error.log 1>&2
Description:
tee # repeats its standard input on standard output
-a /var/log/error.log # appends a copy to the error.log file
1>&2 # sends tee's standard output to stderr, so the message is echoed as an error
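Put back into the loop from the question, a sketch might look like this (users.txt is an assumed input source, not named in the question):
while read -r username password; do
    if egrep -q "^$username" /etc/passwd; then
        echo "ERROR BLABLAH $DATE" | tee -a /var/log/error.log 1>&2
    fi
done < users.txt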
You want to use the 'tee' command:
NAME
tee - read from standard input and write to standard output
and files
SYNOPSIS
tee [OPTION]... [FILE]...
e.g.
$echo "Hello world!" | tee test.txt
Hello world!
$cat test.txt
Hello world!

Bash scripting: permanent pipe

Here is a script I tried to write:
#!/bin/bash
cat <&3 & # runs in background, takes input from file desc 3
echo "To Terminal"
...
echo "To cat" 1>&3
echo "to cat again" 1>&3
Essentially, I want my script to spawn a program (in this case, cat) and be able to send input to it through a file descriptor.
This doesn't work ("bad file descriptor"), I think because a file descriptor must be associated with a real file. What I need, then, is a way to create a permanent pipe with an associated descriptor (such as 3) that I can use to write to cat throughout the program. How can I do it?
Try:
#!/bin/bash
exec 3> >(cat)
echo "To Terminal"
echo "To cat" 1>&3
echo "To cat again" 1>&3
exec 3>&-
cat, of course, does nothing interesting. For an example that is still simple but slightly more interesting output, replace cat with awk:
exec 3> >(awk '{print NR,length($0),$0}')
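If process substitution is not available (for example, in plain POSIX sh), a named pipe gives the same effect. A rough sketch, with cleanup simplified:
#!/bin/sh
fifo="${TMPDIR:-/tmp}/pipe.$$"
mkfifo "$fifo"
cat < "$fifo" &        # background reader, as before
exec 3> "$fifo"        # fd 3 now feeds the pipe
echo "To cat" >&3
echo "To cat again" >&3
exec 3>&-              # closing fd 3 lets cat see EOF and exit
wait
rm -f "$fifo"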

How to redirect stdout/stderr when /dev/null is not writable for normal users

How to disable stdout or stderr in bash scripts temporarily?
Of course the most common way is to redirect stdout or stderr to /dev/null.
But on some systems /dev/null may be unwritable for normal users.
I am writing scripts that aim to be portable, so I would prefer not to rely on /dev/null.
Some blogs/posts say that >&- can close stdout, but when I tried echo 123 >&- in a bash terminal, it just failed with the message "bash: echo: write error: Bad file descriptor"
Surely I can do it by redirecting stdout or stderr to a tmp file like this:
some_command > /tmp/null
But what I want is a more "elegant" way
I think perhaps I can achieve this using a pipe, like this:
some_command | :
But this way the pipe may "pollute" the exit code of the original command.
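To make the "pollution" concrete:
$ false | :
$ echo $?
0        # : succeeded, so the pipeline reports success
$ set -o pipefail
$ false | :
$ echo $?
1        # with pipefail, false's failure is surfaced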
Here is a possible way to do what you want:
( my_cmd 3>&1 1>&2 2>&3- ) | :
This temporarily sends stdout to a new file descriptor, 3, and redirects stderr to stdout, so that stderr is what gets piped into the command (in this case, :). Descriptor 3 is then routed back to stdout, which avoids piping my_cmd's own stdout into :. The trailing - closes the descriptor once it has been duplicated.
To check the exit status of my_cmd after the above, examine ${PIPESTATUS[0]}. PIPESTATUS is a Bash array variable that holds the exit status of each command in the most recently executed pipeline.
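A short sketch of reading that status right after the pipeline (my_cmd is a placeholder):
( my_cmd 3>&1 1>&2 2>&3- ) | :
status=${PIPESTATUS[0]}    # exit status of the subshell running my_cmd
echo "my_cmd exited with $status"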
I think the really correct answer is to investigate why /dev/null isn't world writable. Having it not so is an off-standard system configuration and may cause system problems. The above work-around is a little messy by comparison.
Based on what I wrote earlier and @nos's comment above, here's an example:
(assuming you have no file called 'zzz' in your current directory, and that '.' is readable)
#!/bin/bash
set -o pipefail
ls . 2>&1 |:
echo $?
ls zzz 2>&1 |:
echo $?
The pipelines succeed and fail silently while maintaining the exit code. Note that you could probably still construct a pipeline where this would not produce the desired results; I haven't come up with one, but that doesn't mean it's not out there. The best answer, as many have noted already, is to fix the system so that /dev/null is world-writable.
EDIT: Changed /bin/sh to /bin/bash, although this probably isn't necessary. But since I haven't tested this against a true Bourne Shell, I decided to err on the side of caution.
EDIT: Another script, showing several different redirections, and using the |& shortcut for 2>&1 |. If you run this, you'll notice that some of the ls failures return a 141 exit status rather than the expected 2. This is a broken pipe exit status, but still represents a failure.
#!/bin/bash
set -o pipefail
# start with commands that should succeed
# redirect everything to ':'
echo "ls . |& :"
ls . |& :
echo $?
# redirect only stdout to ':'
echo "ls . | :"
ls . | :
echo $?
# redirect only stderr to ':'
echo "((ls . 1>&3) |& : ) 3>&1"
((ls . 1>&3) |& : ) 3>&1
echo $?
# now move to failures
# redirect everything to ':'
echo "ls zzz |& :"
ls zzz |& :
echo $?
# redirect only stdout to ':'
echo "ls zzz |:"
ls zzz |:
echo $?
# redirect only stderr to ':'
echo "((ls zzz 1>&3) |& : ) 3>&1"
((ls zzz 1>&3) |& : ) 3>&1
echo $?
I use two subshells when I'm attempting to destroy stdout but keep stderr. You could do it without the outer one. In fact, that might be better. Instead of getting a broken pipe error, you get a 1 exit status.

How do I write standard error to a file while using "tee" with a pipe?

I know how to use tee to write the output (standard output) of aaa.sh to bbb.out, while still displaying it in the terminal:
./aaa.sh | tee bbb.out
How would I now also write standard error to a file named ccc.out, while still having it displayed?
I'm assuming you want to still see standard error and standard output on the terminal. You could go for Josh Kelley's answer, but I find keeping a tail running in the background just to output your log file very hackish and kludgy. Notice how you need to keep an extra file descriptor around and clean up afterward by killing it, and technically you should be doing that in a trap '...' EXIT.
There is a better way to do this, and you've already discovered it: tee.
Only, instead of just using it for your standard output, have a tee for standard output and one for standard error. How will you accomplish this? Process substitution and file redirection:
command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)
Let's split it up and explain:
> >(..)
>(...) (process substitution) creates a FIFO and lets tee listen on it. Then, it uses > (file redirection) to redirect the standard output of command to the FIFO that your first tee is listening on.
The same thing for the second:
2> >(tee -a stderr.log >&2)
We use process substitution again to make a tee process that reads from standard input and dumps it into stderr.log. tee outputs its input back on standard output, but since its input is our standard error, we want to redirect tee's standard output to our standard error again. Then we use file redirection to redirect command's standard error to the FIFO's input (tee's standard input).
See Input And Output
Process substitution is one of those really lovely things you get as a bonus of choosing Bash as your shell as opposed to sh (POSIX or Bourne).
In sh, you'd have to do things manually:
out="${TMPDIR:-/tmp}/out.$$" err="${TMPDIR:-/tmp}/err.$$"
mkfifo "$out" "$err"
trap 'rm "$out" "$err"' EXIT
tee -a stdout.log < "$out" &
tee -a stderr.log < "$err" >&2 &
command >"$out" 2>"$err"
Simply:
./aaa.sh 2>&1 | tee -a log
This simply redirects standard error to standard output, so tee echoes both to log and to the screen. Maybe I'm missing something, because some of the other solutions seem really complicated.
Note: Since Bash version 4 you may use |& as an abbreviation for 2>&1 |:
./aaa.sh |& tee -a log
This may be useful for people finding this via Google. Simply uncomment the example you want to try out. Of course, feel free to rename the output files.
#!/bin/bash
STATUSFILE=x.out
LOGFILE=x.log
### All output to screen
### Do nothing, this is the default
### All Output to one file, nothing to the screen
#exec > ${LOGFILE} 2>&1
### All output to one file and all output to the screen
#exec > >(tee ${LOGFILE}) 2>&1
### All output to one file, STDOUT to the screen
#exec > >(tee -a ${LOGFILE}) 2> >(tee -a ${LOGFILE} >/dev/null)
### All output to one file, STDERR to the screen
### Note you need both of these lines for this to work
#exec 3>&1
#exec > >(tee -a ${LOGFILE} >/dev/null) 2> >(tee -a ${LOGFILE} >&3)
### STDOUT to STATUSFILE, stderr to LOGFILE, nothing to the screen
#exec > ${STATUSFILE} 2>${LOGFILE}
### STDOUT to STATUSFILE, stderr to LOGFILE and all output to the screen
#exec > >(tee ${STATUSFILE}) 2> >(tee ${LOGFILE} >&2)
### STDOUT to STATUSFILE and screen, STDERR to LOGFILE
#exec > >(tee ${STATUSFILE}) 2>${LOGFILE}
### STDOUT to STATUSFILE, STDERR to LOGFILE and screen
#exec > ${STATUSFILE} 2> >(tee ${LOGFILE} >&2)
echo "This is a test"
ls -l sdgshgswogswghthb_this_file_will_not_exist_so_we_get_output_to_stderr_aronkjegralhfaff
ls -l ${0}
In other words, you want to pipe stdout into one filter (tee bbb.out) and stderr into another filter (tee ccc.out). There is no standard way to pipe anything other than stdout into another command, but you can work around that by juggling file descriptors.
{ { ./aaa.sh | tee bbb.out; } 2>&1 1>&3 | tee ccc.out; } 3>&1 1>&2
See also How to grep standard error stream (stderr)? and When would you use an additional file descriptor?
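As a comment-annotated restatement of what each redirection in that one-liner does (same command, just spread out):
{
    {
        ./aaa.sh |     # aaa.sh's stdout flows into the first tee
        tee bbb.out    # ...which copies it to bbb.out and to its own stdout
    } 2>&1 1>&3 |      # swap: stderr goes into the pipe, stdout is parked on fd 3
    tee ccc.out        # the second tee copies the stderr stream to ccc.out
} 3>&1 1>&2            # restore: fd 3 back to real stdout, second tee's output to stderr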
In bash (and ksh and zsh), but not in other POSIX shells such as dash, you can use process substitution:
./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out)
Beware that in bash, this command returns as soon as ./aaa.sh finishes, even if the tee commands are still executed (ksh and zsh do wait for the subprocesses). This may be a problem if you do something like ./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out); process_logs bbb.out ccc.out. In that case, use file descriptor juggling or ksh/zsh instead.
To redirect standard error to a file, display standard output to the screen, and also save standard output to a file:
./aaa.sh 2>ccc.out | tee ./bbb.out
To display both standard error and standard output to screen and also save both to a file, you can use Bash's I/O redirection:
#!/bin/bash
# Create a new file descriptor 4, pointed at the file
# which will receive standard error.
exec 4<>ccc.out
# Also print the contents of this file to screen.
tail -f ccc.out &
# Run the command; tee standard output as normal, and send standard error
# to our file descriptor 4.
./aaa.sh 2>&4 | tee bbb.out
# Clean up: Close file descriptor 4 and kill tail -f.
exec 4>&-
kill %1
If using Bash:
# Redirect standard out and standard error separately
% cmd >stdout-redirect 2>stderr-redirect
# Redirect standard error and out together
% cmd >stdout-redirect 2>&1
# Merge standard error with standard out and pipe
% cmd 2>&1 |cmd2
Credit (not answering from the top of my head) goes here: Re: bash : stderr & more (pipe for stderr)
If you're using Z shell (zsh), you can use multiple redirections, so you don't even need tee:
./cmd 1>&1 2>&2 1>out_file 2>err_file
Here you're simply redirecting each stream to itself and the target file.
Full example
% (echo "out"; echo "err">/dev/stderr) 1>&1 2>&2 1>/tmp/out_file 2>/tmp/err_file
out
err
% cat /tmp/out_file
out
% cat /tmp/err_file
err
Note that this requires the MULTIOS option to be set (which is the default).
MULTIOS
Perform implicit tees or cats when multiple redirections are attempted (see Redirection).
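A tiny illustration of the implicit tee (zsh only; a.txt and b.txt are throwaway names):
% echo "hello" >a.txt >b.txt
% cat a.txt b.txt
hello
hello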
As in the accepted answer, well explained by lhunath, you can use
command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)
Beware that if you use Bash you could run into an issue.
Let me take the matthew-wilcoxson example.
And for those who "seeing is believing", a quick test:
(echo "Test Out";>&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)
Personally, when I try, I have this result:
user@computer:~$ (echo "Test Out";>&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)
user@computer:~$ Test Out
Test Err
Both messages do not appear at the same level. Why does Test Out appear as if it belonged to my previous command?
The prompt sits on a blank line, making me think the process is not finished, and when I press Enter, this fixes it.
When I check the content of the files, it is ok, and redirection works.
Let’s take another test.
function outerr() {
    echo "out" # stdout
    echo >&2 "err" # stderr
}
user@computer:~$ outerr
out
err
user@computer:~$ outerr >/dev/null
err
user@computer:~$ outerr 2>/dev/null
out
Trying the redirection again, but with this function:
function test_redirect() {
    fout="stdout.log"
    ferr="stderr.log"
    echo "$ outerr"
    (outerr) > >(tee "$fout") 2> >(tee "$ferr" >&2)
    echo "# $fout content: "
    cat "$fout"
    echo "# $ferr content: "
    cat "$ferr"
}
Personally, I have this result:
user@computer:~$ test_redirect
$ outerr
# stdout.log content:
out
out
err
# stderr.log content:
err
user@computer:~$
No prompt on a blank line this time, but I don't see the normal output. The stdout.log content seems to be wrong, and only stderr.log seems to be OK.
If I relaunch it, the output can be different...
So, why?
Because, like explained here:
Beware that in bash, this command returns as soon as [first command] finishes, even if the tee commands are still executed (ksh and zsh do wait for the subprocesses)
So, if you use Bash, prefer the better example given in this other answer:
{ { outerr | tee "$fout"; } 2>&1 1>&3 | tee "$ferr"; } 3>&1 1>&2
It will fix the previous issues.
Now, the question is: how do you retrieve the exit status code?
$? does not work.
I have found no better solution than switching on pipefail with set -o pipefail (set +o pipefail to switch it off) and using ${PIPESTATUS[0]}, like this:
function outerr() {
    echo "out"
    echo >&2 "err"
    return 11
}
function test_outerr() {
    local - # To preserve set option
    ! [[ -o pipefail ]] && set -o pipefail; # Or use second part directly
    local fout="stdout.log"
    local ferr="stderr.log"
    echo "$ outerr"
    { { outerr | tee "$fout"; } 2>&1 1>&3 | tee "$ferr"; } 3>&1 1>&2
    # First save the status or it will be lost
    local status="${PIPESTATUS[0]}" # Save first; the second is 0, perhaps tee's status code.
    echo "==="
    echo "# $fout content :"
    echo "<==="
    cat "$fout"
    echo "===>"
    echo "# $ferr content :"
    echo "<==="
    cat "$ferr"
    echo "===>"
    if (( status > 0 )); then
        echo "Fail $status > 0"
        return "$status" # or whatever
    fi
}
user@computer:~$ test_outerr
$ outerr
err
out
===
# stdout.log content:
<===
out
===>
# stderr.log content:
<===
err
===>
Fail 11 > 0
In my case, a script was running command while redirecting both stdout and stderr to a file, something like:
cmd > log 2>&1
I needed to update it such that when there is a failure, it takes some action based on the error messages. I could of course remove the 2>&1 dup and capture stderr from the script, but then the error messages wouldn't go into the log file for reference. While the accepted answer from lhunath is supposed to do the same, it redirects stdout and stderr to different files, which is not what I want, but it helped me come up with the exact solution I needed:
(cmd 2> >(tee /dev/stderr)) > log
With the above, log will have a copy of both stdout and stderr and I can capture stderr from my script without having to worry about stdout.
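A sketch of how a script might then react to the captured stderr; cmd and the 'fatal' pattern are placeholders:
err_copy=$( { (cmd 2> >(tee /dev/stderr)) > log; } 2>&1 )
if printf '%s' "$err_copy" | grep -q 'fatal'; then
    echo "cmd reported a fatal error, taking action" >&2
fi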
The following will work for KornShell (ksh) where the process substitution is not available,
# create a combined (standard output and standard error) collector
exec 3<>combined.log
# stream standard error instead of standard output to tee, while draining all standard output to the collector
./aaa.sh 2>&1 1>&3 | tee -a stderr.log 1>&3
# cleanup collector
exec 3>&-
The real trick here is the sequence 2>&1 1>&3, which in our case redirects standard error to standard output and then redirects standard output to file descriptor 3. At this point, standard error and standard output are not yet combined.
In effect, standard error (as tee's standard input) is passed to tee, where it is logged to stderr.log and also redirected to file descriptor 3.
And file descriptor 3 is logging to combined.log the whole time, so combined.log contains both standard output and standard error.
Thanks to lhunath for the answer in POSIX.
Here's a more complex situation I needed to handle in POSIX, with the proper fix:
# Start script main() function
# - We redirect standard output to file_out AND terminal
# - We redirect standard error to file_err, file_out AND terminal
# - Terminal and file_out have both standard output and standard error, while file_err only holds standard error
main() {
    # my main function
}
log_path="/my_temp_dir"
pfout_fifo="${log_path:-/tmp}/pfout_fifo.$$"
pferr_fifo="${log_path:-/tmp}/pferr_fifo.$$"
mkfifo "$pfout_fifo" "$pferr_fifo"
trap 'rm "$pfout_fifo" "$pferr_fifo"' EXIT
tee -a "file_out" < "$pfout_fifo" &
tee -a "file_err" < "$pferr_fifo" >>"$pfout_fifo" &
main "$#" >"$pfout_fifo" 2>"$pferr_fifo"; exit
Compilation errors, which are sent to standard error (stderr), can be redirected or saved to a file by:
Bash:
gcc temp.c &> error.log
C shell (csh):
% gcc temp.c |& tee error.log
See: How can I redirect compilation/build error to a file?
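Note that &> is a Bash extension; the portable POSIX sh spelling of the same redirection is:
gcc temp.c > error.log 2>&1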
