How does: `if ls /etc/*release 1>/dev/null 2>&1` work? An explanation please - linux

Could someone help me understand the condition ls /etc/*release 1>/dev/null 2>&1 that's contained in the code:
if ls /etc/*release 1>/dev/null 2>&1; then
    echo "<h2>System release info</h2>"
    echo "<pre>"
    for i in /etc/*release; do
        # Since we can't be sure of the
        # length of the file, only
        # display the first line.
        head -n 1 $i
    done
    uname -orp
    echo "</pre>"
fi
I pretty much don't understand any of that line, but specifically what I wanted to know was:
Why does it not have to use the 'test' syntax, i.e. [ expression ]?
The spacing in the condition also confuses me; is 1>/dev/null a variable in the ls statement?
What is 2>&1?
I understand the purpose of this statement, which is: if there exists a file with release in its name under the /etc/ directory, the statement will continue. I just don't understand how this achieves that.
Thanks for your help.

[ isn't a special character, it's a command (/bin/[ or /usr/bin/[, usually a link to test). That means
if [ ...
if test ...
are the same. For this to work, test ignores ] as its last argument if it's called as [.
if simply responds to the exit code of the command it invokes. An exit code of 0 means success or "true".
1>/dev/null 2>&1 redirects stdout (file descriptor 1) to the device /dev/null and then stderr (file descriptor 2) to stdout, which means the command can't display any output or errors on the terminal.
Since the target of 2>&1 is another file descriptor rather than a file, you have to use the >& form for that redirection.
At first glance, one would think that if [ -e /etc/*release ] would be a better solution, but test -e doesn't work with patterns: if the glob matches several files, test receives several arguments and reports an error.
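For illustration, here is a minimal sketch of the same idea (the pattern /etc/host* is only an example): if reacts to the exit status of ls, while the two redirections keep its output and errors off the terminal.
# if only looks at the exit status of ls; its output and errors are suppressed
if ls /etc/host* 1>/dev/null 2>&1; then
    echo "at least one file matched the pattern"
else
    echo "no file matched the pattern"
fi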

The test program just evaluates its arguments and returns an exit code of 0 or 1 to tell whether the expression was true or not.
But you can use any shell command or function with if. It will run the then part if the return code ($?) was 0.
So, here, we check whether ls returns 0 (a file matched) or not.
So, in the end, it's roughly equivalent to writing if [ -e /etc/*release ]; then, which looks more shell-like (although, as noted above, test -e misbehaves when the pattern matches more than one file).
The last two parts, 1>/dev/null and 2>&1, are just there to avoid displaying the output of the ls:
1>/dev/null redirects stdout to /dev/null, so the standard output is not shown.
2>&1 redirects stderr to stdout. Here, stdout is already redirected to /dev/null, so everything is redirected to /dev/null.
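Note that the order of the two redirections matters. A small sketch, using a deliberately nonexistent path as the example:
# stdout goes to /dev/null first, then stderr is pointed at the same place,
# so both streams are silenced:
ls /nonexistent 1>/dev/null 2>&1
# the other way around, stderr is first pointed at the terminal (where stdout
# currently goes) and only then is stdout discarded, so the error still appears:
ls /nonexistent 2>&1 1>/dev/null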

Related

Confused about use of return status code shell script?

In a book I'm reading, the line
ls "$1" 2>/dev/null | grep "$1" 2>/dev/null 1>&2
is written in a script, and the book says: "The command is executed to check whether the file passed as the command line argument exists. The standard error is redirected to /dev/null (the unix black hole), and standard output is redirected to standard error by using 1>&2. Thus, the command does not produce any output or error message; its only purpose is to set the command's return status value $?."
But running the code:
if [ $? -eq 0 ]
would I not know the status anyway? I have tried the script with the ls line at the beginning and without it, and it makes no difference to the results. I'm sure the author wrote it for some purpose; I just can't figure out what.
This looks like a very bad book, giving code that no one sane would ever write, to poorly illustrate concepts that are generally used in completely different ways in shell scripts.
The line:
ls "$1" 2>/dev/null | grep "$1" 2>/dev/null 1>&2
is as described -- it has no visible effect other than setting the return code. Is your question about what this does in detail to get a return code or something else?
The line:
if [ $? -eq 0 ]
is an incomplete fragment that checks the return code of the previous command. It's incomplete as there is no then or fi, without which the shell will reject it as a syntax error and not do anything (if you type the above at a prompt, you'll get the secondary prompt, telling you the shell is waiting for more input to get a complete command). So without more code there's no apparent effect. Something more complete like:
if [ $? -eq 0 ]; then echo YES; else echo NO; fi
would output YES or NO based on that return code.
A more sensible way of doing the 6 lines starting with the ls would be:
if [ ! -e "$1" ]; then
    echo "$1: not found"
    exit 1
fi
As to what the ls line actually does, it runs ls (list files) with the name in $1 as an argument, then uses grep to search that listing for the same filename.
So if the file does not exist, ls gives an error and outputs nothing, so the grep fails (setting $? to 1). If the filename exists and is not a directory, the grep will succeed (setting $? to 0). Finally, if the filename exists and is a directory, it will search the contents of that directory, looking for any file or subdirectory with the same name as a substring -- which is probably just a bug. In addition, if $1 is a string beginning with -, it will do something fairly useless and unpredictable.
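To see those three cases concretely, here is a quick sketch (the names somefile and somedir are hypothetical):
# 1. the name does not exist: ls fails and prints nothing, grep matches nothing, $? is 1
ls "no_such_file" 2>/dev/null | grep "no_such_file" 2>/dev/null 1>&2
echo $?   # 1
# 2. the name is an ordinary file: grep finds it in the listing, $? is 0
touch somefile
ls "somefile" 2>/dev/null | grep "somefile" 2>/dev/null 1>&2
echo $?   # 0
# 3. the name is a directory: grep searches the directory's contents for the
#    name as a substring, which may or may not match anything
mkdir -p somedir
ls "somedir" 2>/dev/null | grep "somedir" 2>/dev/null 1>&2
echo $?   # 0 only if some entry inside somedir contains "somedir" in its name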
Overall, a prime example of a shell script that should never be written -- any student that turned in such a monstrosity should get an immediate F.

How to redirect stdout/stderr when /dev/null is not writable for normal users

How to disable stdout or stderr in bash scripts temporarily?
Of course the most common way is to redirect stdout or stderr to /dev/null.
But on some systems /dev/null may be unwritable for normal users.
I am writing some scripts that are meant to be portable, so I would prefer not to rely on /dev/null.
Some blogs/posts say that >&- can close stdout, but when I tried echo 123 >&- in a bash terminal, it just failed with the message "bash: echo: write error: Bad file descriptor"
Surely I can do it by redirecting stdout or stderr to a tmp file like this:
some_command > /tmp/null
But what I want is a more "elegant" way
I think perhaps I can achieve this by using a pipe, like this:
some_command | :
But this way, it may pollute the exit code of the original command.
Here is a possible way to do what you want:
( my_cmd 3>&1 1>&2 2>&3- ) | :
This first duplicates stdout (the pipe) onto a new file descriptor, 3, then points stdout at stderr, and finally points stderr at descriptor 3, so it is stderr that gets piped into the command (in this case, :). This avoids piping the stdout of my_cmd into :. The - closes descriptor 3 once it has been duplicated.
To check the exit status of my_cmd after the above, examine ${PIPESTATUS[0]}. PIPESTATUS is a bash array variable that holds the exit status of each command in the last pipeline that was run.
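A small sketch of that check (my_cmd stands in for whatever command you are running):
( my_cmd 3>&1 1>&2 2>&3- ) | :
status=${PIPESTATUS[0]}   # exit status of the subshell running my_cmd
if [ "$status" -ne 0 ]; then
    echo "my_cmd failed with status $status" >&2
fi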
I think the really correct answer is to investigate why /dev/null isn't world writable. Having it not so is an off-standard system configuration and may cause system problems. The above work-around is a little messy by comparison.
Based on what I wrote earlier and #nos's comment above, here's an example:
(assuming you have no file called 'zzz' in your current directory, and that '.' is readable)
#!/bin/bash
set -o pipefail
ls . 2>&1 |:
echo $?
ls zzz 2>&1 |:
echo $?
The pipelines succeed and fail silently and maintain the exit code. Note that you can probably still make a pipeline example where this would not produce the desired results. I haven't come up with one in my head yet, but that doesn't mean it's not out there. The best answer, as many have noted already, is to fix the system so that /dev/null is world writable.
EDIT: Changed /bin/sh to /bin/bash, although this probably isn't necessary. But since I haven't tested this against a true Bourne Shell, I decided to err on the side of caution.
EDIT: Another script, showing several different redirections, and using the |& shortcut for 2>&1 |. If you run this, you'll notice that some of the ls failures return a 141 exit status rather than the expected 2. This is a broken pipe exit status, but still represents a failure.
#!/bin/bash
set -o pipefail
# start with commands that should succeed
# redirect everything to ':'
echo "ls . |& :"
ls . |& :
echo $?
# redirect only stdout to ':'
echo "ls . | :"
ls . | :
echo $?
# redirect only stderr to ':'
echo "((ls . 1>&3) |& : ) 3>&1"
((ls . 1>&3) |& : ) 3>&1
echo $?
# now move to failures
# redirect everything to ':'
echo "ls zzz |& :"
ls zzz |& :
echo $?
# redirect only stdout to ':'
echo "ls zzz |:"
ls zzz |:
echo $?
# redirect only stderr to ':'
echo "((ls zzz 1>&3) |& : ) 3>&1"
((ls zzz 1>&3) |& : ) 3>&1
echo $?
I use two subshells when I'm attempting to destroy stdout but keep stderr. You could do it without the outer one. In fact, that might be better. Instead of getting a broken pipe error, you get a 1 exit status.

Redirecting stdout only if command failed?

I'm writing a bash script that is supposed to be "transparent" to the user. It reads commands from the user and intercepts them, allowing only some of them to be executed by bash, depending on some criteria. It (basically) works like this:
while true; do
    read COMMAND
    can_be_done $COMMAND
    if [ $? == 0 ]; then
        eval $COMMAND
        if [ $? != 0 ]; then
            echo "Error: command not found"
        fi
    fi
done
The problem is, when the command fails, you also get stuff printed to the console. BUT, if I keep the result in a variable and only print it when it doesn't fail, like so:
RESULT=$(eval $COMMAND)
Then there's another problem: The special formatting gets lost (for example, "ls --color" doesn't show colors anymore)
My question is: Is there a way to have the command print to STDOUT if successful, but to /dev/null if it fails?
Do you really need the second part, replacing the output of the command with an error message? Linux commands print their own error messages, which aren't necessarily "command not found". You'd be hiding the true error (permission denied, file not found, out of memory, segfault, etc.) with an oftentimes incorrect error message (command not found).
If you remove that check, you could simplify the loop to something like this:
while true; do
    read -e COMMAND
    if can_be_done "$COMMAND"; then
        eval "$COMMAND"
    fi
done
read -e uses readline to obtain the command, making the prompt a lot more shell-like (↑ and ↓ for history, for instance).
command; if [ $? == 0 ]; then is more idiomatically written as if <command>; then.
Quoting makes sure special characters and whitespace are handled properly.
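For example, the second point means an explicit $? check can be dropped entirely (a sketch with a placeholder some_command):
# instead of this:
some_command
if [ $? -eq 0 ]; then
    echo "it worked"
fi
# write this:
if some_command; then
    echo "it worked"
fi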
I would argue strongly that you should not do this. If you do not want to see output, redirect it to /dev/null. If you do want to see errors, do not redirect stderr. If you are using a program that prints its error messages on stdout instead of stderr, FIX THE PROGRAM! Error messages belong on stderr. Note that this means your program is broken, as it ought to read:
echo "Error: command not found" >&2
I'm not sure if it is rule number 1, but it certainly belongs in the top 10, and it may be the most often violated rule: Error messages belong on stderr. A program which prints error messages on stdout is broken.
if false > /dev/null;then echo 1; else echo 2; fi 2> /dev/null
Will output 2
if true > /dev/null;then echo 1; else echo 2; fi 2> /dev/null
Will output 1
remove the > /dev/null to print the command also to stdout
for example
if echo 123;then echo 1; else echo 2; fi 2> /dev/null
Will output
123
1
Assuming that the command is not very expensive to run, you can do this:
test `ls /moo 2>/dev/null` || echo moo not found
test returns true only when it is given a non-empty argument, which here is the case when ls (the command in the backticks) succeeds and prints something. You could have put this in an if statement too, like so:
if [ `ls /moo 2>/dev/null` ]; then
    echo moo is a folder
fi
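Note that this relies on ls producing output rather than on its exit status. If the exit status is all you care about, a sketch in the style of the question at the top of this page avoids the command substitution entirely:
if ls /moo >/dev/null 2>&1; then
    echo moo exists
else
    echo moo not found
fi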

Need to redirect an output to /dev/null.... works fine in command line but not in shell

I need to write and execute a command in a bash script and ignore the errors.
Example
pvs --noheadings -o pv_name,vg_name,vg_size 2> /dev/null
The above command works great on the command line, but when I write the same thing in a shell script, it gives me errors
like
Failed to read physical volume "2>"
Failed to read physical volume "/dev/null"
I guess it treats 2> /dev/null as part of the whole command. Can you please give me some suggestions on how to rectify it?
Thanks in advance.
FULLCODE
#------------------------------
main() {
    pv_cmd='pvs'
    nh='--noheadings'
    sp=' '
    op='-o'
    vgn='vg_name'
    pvn='pv_name'
    pvz='pv_size'
    cm=','
    tonull=' 2 > /dev/null '
    pipe='|'
    #cmd=$pv_cmd$sp$nh$sp$op$sp$vgn$cm$pvn$cm$pvz$sp$pipe$tonull #line A
    cmd='pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null' #line B
    echo -n "Cmd="
    echo $cmd
    $cmd
}
main
#-----------------------------------------------------
If you look at lines A and B, both versions are there, although one is commented out.
You can't include the 2> /dev/null inside the quoted string. Redirection operators are recognized when the command line is parsed, before $cmd is expanded, and the results of that expansion are not re-scanned for redirections, so 2> and /dev/null end up as ordinary arguments. You'll have to do
cmd='pvs --noheadings -o vg_name,pv_name,pv_size'
$cmd 2> /dev/null
for redirection to work properly.
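If you need to build the command up in pieces, a bash array is a safer container than a flat string. A sketch reusing the option names from the question:
cmd=(pvs --noheadings -o vg_name,pv_name,pv_size)
echo -n "Cmd="
echo "${cmd[@]}"           # show the command for debugging
"${cmd[@]}" 2> /dev/null   # run it, with the redirection written as real shell syntax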
The way you did it, 2> and /dev/null are parsed as arguments. But you want 2> /dev/null to be bash code, not a program argument, so
instead of
$cmd
you should
eval $cmd
That is how things work.
Or if the echo thing is for debugging, you can just set -o xtrace before the command and set +o xtrace after it. And do it the normal way instead of stuffing a string.
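A minimal sketch of that debugging approach, assuming the same pvs invocation as in the question:
set -o xtrace    # bash now prints each command before running it
pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null
set +o xtrace    # turn tracing back off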
I think what's going on is that there is some character inside the line that is either not visible to us, or the > is a different character than it appears to be. After all, the shell should swallow the redirection before the command gets to see it, yet the command sees 2> and /dev/null as [PhysicalVolume [PhysicalVolume...]]. Alternatively, the redirection could have been passed quoted (so it loses its special meaning to the shell and gets passed on); see chepner's answer.
tonull=' 2 > /dev/null '
is the issue. Exactly as chepner guessed.
Eliminate the space between 2 and >:
pvs --noheadings -o pv_name,vg_name,vg_size 2>/dev/null
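To see why the space matters when the redirection is typed directly at the shell, a small sketch:
echo hello 2> err.txt    # redirects stderr; "hello" is still printed
echo hello 2 > out.txt   # redirects stdout; nothing is printed, and out.txt contains "hello 2"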

What does "who | grep $1" command do in the shell script?

I am learning shell programming from the very basics using the book called Beginning Linux Programming (4th Edition). I am confused by this script with an until-clause:
#!/bin/bash
until who | grep "$1" > /dev/null
do
    sleep 60
done
# Now ring the bell and announce the unexpected user.
echo -e '\a'
echo "***** $1 has just logged in *****"
exit 0
My question is: what is who | grep "$1" > /dev/null used for here? Why redirect the grep output to /dev/null?
The until loop tests a condition, as you mentioned, and runs the do...done block until that condition becomes true. In other words, it only executes the block while the condition is FALSE, and stops once it becomes true. The script is useful for catching when a user whose name you pass as a parameter logs in (hence the grep "$1", $1 being a positional parameter). It sleeps for a minute (sleep 60) until that user logs in to the system, and then it exits the loop and does all the '$1 has just logged in' stuff. The redirection of the grep output to /dev/null is used so that the output of the grep command is not displayed (you could have used grep -q "$1" to achieve the same effect, as sketched below).
I hope this clarifies your doubts.
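As mentioned, grep -q suppresses the output itself, so the redirection becomes unnecessary. A sketch of the loop rewritten that way:
until who | grep -q "$1"
do
    sleep 60
done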
while and until (and, admittedly, if) look at the exit code of the test, not at any text that may or may not be generated on stdout (or stderr).
I suspect the redirection to /dev/null has been used because the command only generates output when there is a match; most of the time there is (admittedly) none, and when there is, you're not interested in seeing the result anyway.
