Need to redirect output to /dev/null... works fine on the command line but not in a shell script - linux

I need to write and execute some command in a bash file and ignore its error output.
Example
pvs --noheadings -o pv_name,vg_name,vg_size 2> /dev/null
The above command works great on the command line, but when I write the same thing in a shell script, it gives me an error like
Failed to read physical volume "2>"
Failed to read physical volume "/dev/null"
I guess it treats the redirection as part of the whole command. Can you please give me some suggestions on how to rectify it?
Thanks in advance.
FULL CODE:
#------------------------------
main() {
  pv_cmd='pvs'
  nh='--noheadings'
  sp=' '
  op='-o'
  vgn='vg_name'
  pvn='pv_name'
  pvz='pv_size'
  cm=','
  tonull=' 2 > /dev/null '
  pipe='|'
  #cmd=$pv_cmd$sp$nh$sp$op$sp$vgn$cm$pvn$cm$pvz$sp$pipe$tonull #line A
  cmd='pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null' #line B
  echo -n "Cmd="
  echo $cmd
  $cmd
}
main
#-----------------------------------------------------
If you look at lines A and B, both versions are there, although one is commented out.

You can't include the 2> /dev/null inside the quoted string: the shell recognizes redirection operators when it first parses the line, before $cmd is expanded, so the 2> and /dev/null that come out of the expansion are treated as ordinary arguments. You'll have to do
cmd='pvs --noheadings -o vg_name,pv_name,pv_size'
$cmd 2> /dev/null
for redirection to work properly.
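If you are building the command up from pieces anyway, a bash array is a more robust container than a flat string, since each element stays a separate word. A minimal sketch of that approach (the variable name is illustrative):
#!/bin/bash
# Build the command as an array; no word-splitting surprises.
cmd=(pvs --noheadings -o vg_name,pv_name,pv_size)
# Apply the redirection at the call site, where the shell still parses it.
"${cmd[@]}" 2> /dev/null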

The way you did it, 2> and /dev/null will be parsed as arguments. But you want 2> /dev/null to be bash code, not a program argument, so
instead of
$cmd
you should
eval $cmd
That is how things work.
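For example, a small sketch of the difference, using the command from the question:
#!/bin/bash
cmd='pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null'
$cmd          # broken: 2> and /dev/null are passed to pvs as arguments
eval "$cmd"   # works: the string is re-parsed, so the redirection applies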
Or if the echo thing is for debugging, you can just set -o xtrace before the command and set +o xtrace after it, and do it the normal way instead of stuffing the command into a string.
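Something like this, as a sketch:
#!/bin/bash
set -o xtrace   # bash now prints each command before running it
pvs --noheadings -o vg_name,pv_name,pv_size 2> /dev/null
set +o xtrace   # tracing off again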

I think what's going on is that there is some character inside the line that is either not visible to us, or the > is a different character than it appears to be. After all, the shell should swallow the redirection before the command gets to see it, but the command sees 2> and /dev/null as [PhysicalVolume [PhysicalVolume...]]. Alternatively, the redirection could be passed quoted (so it loses its special meaning to the shell and gets passed on); see chepner's answer.
tonull=' 2 > /dev/null '
is the issue. Exactly as chepner guessed.

Eliminate the space between 2 and >:
pvs --noheadings -o pv_name,vg_name,vg_size 2>/dev/null

Related

How does: `if ls /etc/*release 1>/dev/null 2>&1` work? An explanation please

Could someone help me understand the condition ls /etc/*release 1>/dev/null 2>&1 that's contained in the code:
if ls /etc/*release 1>/dev/null 2>&1; then
  echo "<h2>System release info</h2>"
  echo "<pre>"
  for i in /etc/*release; do
    # Since we can't be sure of the
    # length of the file, only
    # display the first line.
    head -n 1 $i
  done
  uname -orp
  echo "</pre>"
fi
I pretty much don't understand any of that line, but specifically what I wanted to know was:
Why does it not have to use the 'test' syntax, i.e. [ expression ]?
The spacing in the condition also confuses me; is 1>/dev/null a variable in the ls statement?
What is 2>&1?
I understand the purpose of this statement, which is: if there exists a file with release in its name under the /etc/ directory, the statement will continue. I just don't understand how this achieves that.
Thanks for your help
[ isn't a special character, it's a command (/bin/[ or /usr/bin/[, usually a link to test). That means
if [ ...
if test ...
are the same. For this to work, test requires (and ignores) a closing ] as its last argument when it's invoked as [.
if simply responds to the exit code of the command it invokes. An exit code of 0 means success or "true".
1>/dev/null 2>&1 redirects stdout (1) to the device /dev/null and then stderr (2) to stdout, which means the command can't display any output or errors on the terminal.
Since the target here is another file descriptor rather than a file, you have to use >& for the redirection.
At first glance, one would think that if [ -e /etc/*release ] would be a better solution, but test -e doesn't work with patterns: the glob may expand to several words, and test then sees too many arguments.
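To illustrate the pattern problem (a sketch; it assumes two files match, and the demo file names are made up):
touch /tmp/demo-1release /tmp/demo-2release
# The glob expands to two words, so test gets too many arguments and
# fails with a "binary operator expected" style error:
[ -e /tmp/demo-*release ] && echo found
# Letting a command such as ls consume the glob sidesteps the problem:
ls /tmp/demo-*release > /dev/null 2>&1 && echo found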
The test program just evaluates its arguments and returns an exit code of 0 or 1 to tell whether the expression was true or not.
But you can use any shell command or function with an if; it will run the then part if the return code ($?) was 0.
So, here, we check whether ls returned 0 (a file matched) or not.
So, in the end, it's roughly equivalent to writing if [ -e /etc/*release ]; then, which looks more "shell-like" (but see the caveat about patterns above).
The last two redirections, 1>/dev/null and 2>&1, are just there to avoid displaying the output of the ls:
1>/dev/null redirects stdout to /dev/null, so the standard output is not shown.
2>&1 redirects stderr to stdout; since stdout is already going to /dev/null, everything ends up in /dev/null.
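One detail worth adding: the order of the two redirections matters, because they are applied left to right. A quick sketch:
# Silent: stdout goes to /dev/null first, then stderr follows it.
ls /nonexistent > /dev/null 2>&1
# Not silent: stderr is duplicated onto the original stdout (the
# terminal) before stdout itself is redirected, so errors still show.
ls /nonexistent 2>&1 > /dev/null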

command works in terminal and not in script

The below commands work in the terminal and print the value 1:
count_XXX=`ll -d /usr/Systems/XXX* 2> /dev/null | grep ^d | wc -l`
echo "$count_XXX"
There is one directory and two soft links in /usr/Systems whose names match XXX*.
When I put the same two lines in a shell script, it prints the value 0.
This works fine on Unix (both terminal and script), but when I run it on a Linux server the issue happens (in the script).
Do I need to change something for Linux?
Thanks in advance.
Using ll, then grep, then wc is a bit much for counting directories, and it's also error-prone because of possible whitespace/newlines in directory names. (Incidentally, ll is typically an interactive alias for ls -l, and aliases aren't expanded in non-interactive scripts, which is the usual reason such a line behaves differently in a script than in a terminal.)
In BASH use this simple snippet:
shopt -s nullglob
arr=( /usr/Systems/XXX*/ )
echo ${#arr[@]}
2
The / at the end of the glob pattern makes sure it matches only directories.
shopt -s nullglob makes sure the pattern expands to nothing, instead of to itself, when it doesn't match anything.
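To see what nullglob changes, compare (a sketch; it assumes nothing on your machine actually matches the pattern):
shopt -u nullglob
arr=( /usr/Systems/XXX*/ )
echo ${#arr[@]}   # prints 1: the unmatched pattern is kept as-is
shopt -s nullglob
arr=( /usr/Systems/XXX*/ )
echo ${#arr[@]}   # prints 0: the unmatched pattern vanishes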

How to redirect stdout/stderr when /dev/null is not writable for normal users

How to disable stdout or stderr in bash scripts temporarily?
Of course the most common way is to redirect stdout or stderr to /dev/null.
But on some systems /dev/null may be unwritable for normal users.
I am writing some scripts that aim to be portable, so I would rather not rely on /dev/null.
Some blogs/posts say that >&- can close stdout, but when I tried echo 123 >&- in a bash terminal, it just failed with the message "bash: echo: write error: Bad file descriptor"
Surely I can do it by redirecting stdout or stderr to a tmp file like this:
some_command > /tmp/null
But what I want is a more "elegant" way.
I think perhaps I can achieve this by using a pipe, like this:
some_command | :
But this way it may "pollute" the exit code of the original command.
Here is a possible way to do what you want:
( my_cmd 3>&1 1>&2 2>&3- ) | :
This temporarily saves stdout in a new file descriptor, 3, points stdout at stderr, and then points stderr at the saved descriptor 3, so the two streams are effectively swapped: it is stderr that flows into the pipe (and is discarded by :), while the command's normal output survives. This avoids piping the real stdout of my_cmd into :. The - in 2>&3- closes descriptor 3 after it's been duplicated.
To check the exit status of my_cmd after the above, examine ${PIPESTATUS[0]}. PIPESTATUS is a bash array variable (not an environment variable) that holds the exit status of each command in the most recently executed pipeline.
I think the really correct answer is to investigate why /dev/null isn't world-writable. Having it not be so is an off-standard system configuration and may cause other system problems. The above workaround is a little messy by comparison.
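A short sketch of reading it (ls here is just a stand-in failing command):
#!/bin/bash
( ls /nonexistent 3>&1 1>&2 2>&3- ) | :
# Status of the first pipeline element, not of ':'. Usually 2 for a
# failed GNU ls, though a broken pipe can turn it into 141 (see below).
echo "ls exited with ${PIPESTATUS[0]}"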
Based on what I wrote earlier and @nos's comment above, here's an example:
(assuming you have no file called 'zzz' in your current directory, and that '.' is readable)
#!/bin/bash
set -o pipefail
ls . 2>&1 |:
echo $?
ls zzz 2>&1 |:
echo $?
The pipelines succeed and fail silently and maintain the exit code. Note that you can probably still make a pipeline example where this would not produce the desired results. I haven't come up with one in my head yet, but that doesn't mean it's not out there. The best answer, as many have noted already, is to fix the system so that /dev/null is world writable.
EDIT: Changed /bin/sh to /bin/bash, although this probably isn't necessary. But since I haven't tested this against a true Bourne Shell, I decided to err on the side of caution.
EDIT: Another script, showing several different redirections, and using the |& shortcut for 2>&1 |. If you run this, you'll notice that some of the ls failures return a 141 exit status rather than the expected 2. This is a broken pipe exit status, but still represents a failure.
#!/bin/bash
set -o pipefail
# start with commands that should succeed
# redirect everything to ':'
echo "ls . |& :"
ls . |& :
echo $?
# redirect only stdout to ':'
echo "ls . | :"
ls . | :
echo $?
# redirect only stderr to ':'
echo "((ls . 1>&3) |& : ) 3>&1"
((ls . 1>&3) |& : ) 3>&1
echo $?
# now move to failures
# redirect everything to ':'
echo "ls zzz |& :"
ls zzz |& :
echo $?
# redirect only stdout to ':'
echo "ls zzz |:"
ls zzz |:
echo $?
# redirect only stderr to ':'
echo "((ls zzz 1>&3) |& : ) 3>&1"
((ls zzz 1>&3) |& : ) 3>&1
echo $?
I use two subshells when I'm attempting to discard stdout but keep stderr. You could do it without the outer one; in fact, that might be better: instead of getting a broken-pipe error, you get a 1 exit status.

Triple nested quotations in shell script

I'm trying to write a shell script that calls another script that then executes an rsync command.
The second script should run in its own terminal, so I use a gnome-terminal -e "..." command. One of the parameters of this script is a string containing the parameters that should be given to rsync. I put those into single quotes.
Up until here everything worked fine, but then one of the rsync parameters was a directory path that contained a space. I tried numerous combinations of ', ", \", and \', but the script either doesn't run at all or only the first part of the path is taken.
Here's a slightly modified version of the code I'm using
gnome-terminal -t 'Rsync scheduled backup' -e "nice -10 /Scripts/BackupScript/Backup.sh 0 0 '/Scripts/BackupScript/Stamp' '/Scripts/BackupScript/test' '--dry-run -g -o -p -t -R -u --inplace --delete -r -l '\''/media/MyAndroid/Internal storage'\''' "
Within Backup.sh this command is run
rsync $5 "$path"
where the destination $path is calculated from text in Stamp.
How can I achieve these three levels of nested quotations?
These are some questions I looked at just now (I've tried other sources earlier as well):
https://unix.stackexchange.com/questions/23347/wrapping-a-command-that-includes-single-and-double-quotes-for-another-command
how to make nested double quotes survive the bash interpreter?
Using multiple layers of quotes in bash
Nested quotes bash
I was unsuccessful in applying the solutions to my problem.
Here is an example. caller.sh uses gnome-terminal to execute foo.sh, which in turn prints all the arguments and then calls rsync with the first argument.
caller.sh:
#!/bin/bash
gnome-terminal -t "TEST" -e "./foo.sh 'long path' arg2 arg3"
foo.sh:
#!/bin/bash
echo $# arguments
for i; do   # same as: for i in "$@"; do
  echo "$i"
done
rsync "$1" "some other path"
Edit: If $1 contains several parameters to rsync, some of which are long paths, the above won't work, since bash either passes "$1" as one parameter, or $1 as multiple parameters, splitting it without regard to contained quotes.
There is (at least) one workaround, you can trick bash as follows:
caller2.sh:
#!/bin/bash
gnome-terminal -t "TEST" -e "./foo.sh '--option1 --option2 \"long path\"' arg2 arg3"
foo2.sh:
#!/bin/bash
rsync_command="rsync $1"
eval "$rsync_command"
This will do the equivalent of typing rsync --option1 --option2 "long path" on the command line.
WARNING: This hack introduces a security vulnerability, $1 can be crafted to execute multiple commands if the user has any influence whatsoever over the string content (e.g. '--option1 --option2 \"long path\"; echo YOU HAVE BEEN OWNED' will run rsync and then execute the echo command).
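Where you control both scripts, a safer pattern is to skip eval entirely and pass each rsync option as its own argument; "$@" then preserves every argument, embedded spaces included. A hedged sketch (the names caller3.sh and foo3.sh are made up to match the examples above):
caller3.sh:
#!/bin/bash
# Each rsync option is a separate argument to the inner script.
gnome-terminal -t "TEST" -e "./foo3.sh --option1 --option2 'long path'"
foo3.sh:
#!/bin/bash
# "$@" expands to all arguments, each one kept intact.
rsync "$@" "some other path"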
Did you try escaping the space in the path with "\ " (no quotes)?
gnome-terminal -t 'Rsync scheduled backup' -e "nice -10 /Scripts/BackupScript/Backup.sh 0 0 '/Scripts/BackupScript/Stamp' '/Scripts/BackupScript/test' '--dry-run -g -o -p -t -R -u --inplace --delete -r -l ''/media/MyAndroid/Internal\ storage''' "

Linux: start a script after another has finished

I read the answer for this issue from this link on Stackoverflow.com, but I am so new to writing shell scripts that I did something wrong. The following are my scripts:
testscript:
#!/bin/csh -f
pid=$(ps -opid= -C csh testscript1)
while [ -d /proc/$pid ] ; do
  sleep 1
done && csh testscript2
exit
testscript1:
#!/bin/csh -f
/usr/bin/firefox
exit
testscript2:
#!/bin/csh -f
echo Done
exit
The purpose is for testscript to call testscript1 first; once testscript1 has finished (which means the firefox started by testscript1 has been closed), testscript will call testscript2. However, I got this result after running testscript:
$ csh testscript
Illegal variable name.
Please help me with this issue. Thanks in advance.
I believe this line is not CSH:
pid=$(ps -opid= -C csh testscript1)
In general in csh you define variables like this:
set pid=...
I am not sure what the $() syntax is; perhaps backticks would work as a replacement:
set pid=`ps -opid= -C csh testscript1`
Perhaps you didn't notice that the scripts you found were written for bash, not csh, but
you're trying to process them with the csh interpreter.
It looks like you've misunderstood what the original code was trying to do -- it was
intended to monitor an already-existing process, by looking up its process id using the process name.
You seem to be trying to start the first process from inside the ps command. But
in that case, there's no need for you to do anything so complicated -- all you need
is:
#!/bin/csh
csh testscript1
csh testscript2
Unless you go out of your way to run one of the scripts in the background,
the second script will not run until the first script is finished.
Although this has nothing to do with your problem, csh is more oriented toward
interactive use; for script writing, it's considered a poor choice, so you might be
better off learning bash instead.
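In bash, for example, the sequencing you want is just this (a minimal sketch):
#!/bin/bash
./testscript1    # blocks until testscript1 (and thus firefox) exits
./testscript2
And if you ever do start the first script in the background, wait does the monitoring for you:
#!/bin/bash
./testscript1 &
wait $!          # wait for the background job to finish
./testscript2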
Try this: the script below checks for testscript1's pid; if it is not found, it executes testscript2.
sp=$(ps -ef | grep testscript1 | grep -v grep | awk '{print $2}')
/bin/ls -l /proc/ | grep $sp > /dev/null 2>&1 && sleep 0 || /bin/csh testscript2
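A less fragile variant of the same idea, assuming pgrep (from procps) is available:
# Run testscript2 only if no process matching testscript1 is found.
pgrep -f testscript1 > /dev/null || /bin/csh testscript2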
