How to include stdout/stderr redirection with command substitution when the command is passed as a variable? [duplicate] - linux

I have the following script to detect when my network comes back on after restarting my router:
#!/bin/bash
pingCommand="ping 192.168.1.1 -c 3 &>/dev/null"
while [[ ! $($pingCommand) ]]; do
    sleep 3s;
done
However, when run in the terminal, it prints:
ping: unknown host &>/dev/null
I ran the script with the -x option (to enable debugging) and found that
ping 192.168.1.1 -c 3 &>/dev/null
was being executed in the subshell as
ping 192.168.1.1 -c 3 '&>/dev/null'
How do I change my command substitution call so that bash does not put single quotes around the output redirection?

Don't store commands in variables. Use functions. They handle redirections and pipes without the quoting issues that plague variables.
It also doesn't make sense to try to capture ping's output when you're redirecting all of that output to /dev/null. If you just want to know if it worked or not, check its exit code.
pingCommand() {
    ping 192.168.1.1 -c 3 &>/dev/null
}

while ! pingCommand; do
    sleep 3s;
done

Use eval:
#!/bin/bash
pingCommand="ping 192.168.1.1 -c 3 &>/dev/null"
# set -x # uncomment to see what's going on
while ! eval $pingCommand ; do
    sleep 3s;
done
And you do not need the [[ ]] (expression evaluation) or the $() (output capture).
Of course, as John Kugelman suggested in another answer, using a function avoids all the potential pitfalls associated with eval.
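For instance, if the target host needs to vary, pass it as a function argument instead of baking it into a string. A minimal sketch (the function name and parameter are illustrative, not from the original script):
waitForHost() {
    ping "$1" -c 3 &>/dev/null   # $1 is the host to ping
}
until waitForHost 192.168.1.1; do
    sleep 3s;
done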

Related

How do I properly use SSH heredoc?

This question is somewhat related to the question I asked here, but it has not been adequately answered. What interests me here is the following:
When I run the command type -t test on a remote computer, I get the answer 'function', because 'test' is an existing function defined inside the .bashrc file on the remote computer.
However, when I run this SSH command on the local computer,
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t test)
echo $VAR
EOSSH
)"
echo $s
I don't get anything printed. The first question would be how do I make this work?
The second question builds on the previous one. That is, my ultimate goal is to define on the local computer which function I want to check on the remote computer, and get an adequate answer, i.e.:
a="test"
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -t $a)
echo $VAR
EOSSH
)"
echo $s
So, I would like the variable s to be equal to 'function'. How to do it?
how do I make this work?
Either load .bashrc (. .bashrc) or start an interactive session (bash -i).
Because your session is non-interactive, if you want .bashrc loaded and it has no guard against non-interactive use, just source it. If it does have such a guard, consider moving your function somewhere else that you can source. Failing that, be prepared for an interactive session to print /etc/motd, /etc/issue, and other interactive output.
Remove -T - you do not need a tty for non-interactive work.
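Putting that together, a minimal sketch for the first question (assuming the function is defined in ~/.bashrc and that file has no guard against non-interactive shells):
s="$(
ssh "$HOST" <<'EOSSH'
. ~/.bashrc        # load the remote function definitions explicitly
type -t test       # prints "function" if test is defined as one
EOSSH
)"
echo "$s"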
I would like the variable s to be equal to 'function'. How to do it?
I recommend using declare to transfer all the work and context that you need, which is flexible, works generically, preserves STDIN, and doesn't require you to deal with the intricacies of escaping inside a here document. Specifically, request a bash shell on the remote side and use printf "%q" to properly escape all the data.
functions_to_check=(a b c)

fn_exists() { [[ "$(LC_ALL=C type -t -- "$1" 2>/dev/null)" = function ]]; }

work() {
    for f in "${functions_to_check[@]}"; do
        if fn_exists "$f"; then
            echo "Great - function $f exists!"
        else
            echo "Och nuu - no function $f!"
        fi
    done
}

ssh "$host" "$(printf "%q " bash -c "
    $(declare -p functions_to_check)   # transfer variables
    $(declare -f fn_exists work)       # transfer functions
    work                               # run the work to do
")"

Loop ends prematurely when executing a command via SSH in a Bash function [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. However, it seems to work only with the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
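For instance, inside do_work.sh, give any ssh call an stdin it can drain harmlessly (the host and command here are placeholders):
ssh -n user@remotehost 'some_command'    # -n: read stdin from /dev/null
# or, for any command that reads stdin:
some_command < /dev/null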
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: adding the -u argument to read, and attaching a file descriptor number to the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
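For example, a nested pair of loops might look like this (the file names are illustrative):
while read -u 9 outer; do
    while read -u 8 inner; do
        echo "$outer / $inner"
    done 8< inner.txt
done 9< outer.txt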
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
    ((count++))
    echo "$count $line"
    sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when you feed it commands via a heredoc while piping its output to another program, so using /dev/null as its stdin is preferred in that case.
#!/bin/bash
while read ONELINE; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    rc=${PIPESTATUS[0]}    # capture before the next command overwrites it
    if [ $rc -ne 0 ]; then
        echo "aborting loop"
        exit $rc
    fi
done < input_list.txt
This was happening to me because I had set -e, and a grep in the loop was finding no matches; grep returns a non-zero exit code when it finds nothing, which aborted the script.
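A minimal illustration (a reconstruction, not the original script):
set -e
while read -r line; do
    # grep exits 1 on no match; '|| true' stops set -e from killing the loop
    echo "$line" | grep "pattern" || true
done < file.txt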

Execute a find command with expression from a shell script [duplicate]

I'm trying to write a database call from within a bash script and I'm having problems with a sub-shell stripping my quotes away.
This is the bones of what I am doing.
#---------------------------------------------
#! /bin/bash
export COMMAND='psql ${DB_NAME} -F , -t --no-align -c "${SQL}" -o ${EXPORT_FILE} 2>&1'
PSQL_RETURN=`${COMMAND}`
#---------------------------------------------
If I use an 'echo' to print out the ${COMMAND} variable, the output looks fine:
echo ${COMMAND}
screen output:-
#---------------
psql drupal7 -F , -t --no-align -c "SELECT DISTINCT hostname FROM accesslog;" -o /DRUPAL/INTERFACES/EXPORTS/ip_list.dat 2>&1
#---------------
Also if I cut and paste this screen output it executes just fine.
However, when I try to execute the command as a variable within a sub-shell call, it gives an error message.
The error is from the psql client to the effect that the quotes have been removed from around the ${SQL} string.
The error suggests psql is trying to interpret the terms in the sql string as parameters.
So it seems the string and quotes are composed correctly but the quotes around the ${SQL} variable/string are being interpreted by the sub-shell during the execution call from the main script.
I've tried to escape them using various methods: \", \\", \\\", "", \"" '"', \'"\', ... ...
As you can see from my 'try it all' approach I am no expert and it's driving me mad.
Any help would be greatly appreciated.
Charlie101
Instead of storing the command in a string variable, it is better to use a Bash array here:
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$( "${cmd[@]}" 2>&1 )
Rather than evaluating the contents of a string, why not use a function?
call_psql() {
    # optional, if variables are already defined in global scope
    DB_NAME="$1"
    SQL="$2"
    EXPORT_FILE="$3"
    psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE" 2>&1
}
then you can just call your function like:
PSQL_RETURN=$(call_psql "$DB_NAME" "$SQL" "$EXPORT_FILE")
It's entirely up to you how elaborate you make the function. You might like to check for the correct number of arguments (using something like (( $# == 3 ))) before calling the psql command.
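A sketch of such a check might look like this:
call_psql() {
    # refuse to run unless exactly three arguments were supplied
    (( $# == 3 )) || { echo "usage: call_psql DB_NAME SQL EXPORT_FILE" >&2; return 1; }
    psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1
}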
Alternatively, perhaps you'd prefer just to make it as short as possible:
call_psql() { psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1; }
In order to capture the command that is being executed for debugging purposes, you can use set -x in your script. This will print the contents of the function, including the expanded variables, when the function (or any other command) is called. You can switch this behaviour off using set +x, or, if you want it on for the whole duration of the script, you can change the shebang to #!/bin/bash -x. This saves you explicitly echoing throughout your script to find out what commands are being run; instead, you can just turn on set -x for a section.
A very simple example script using the shebang method:
#!/bin/bash -x
ec() {
    echo "$1"
}
var=$(ec 2)
Running this script, either directly after making it executable or calling it with bash -x, gives:
++ ec 2
++ echo 2
+ var=2
Removing the -x from the shebang or the invocation results in the script running silently.

Bash command substitution on remote host [duplicate]

I'm trying to run a bash script that ssh's onto a remote host and stops the single docker container that is running.
#!/usr/bin/env bash
set -e
ssh <machine> <<EOF
container=$(docker ps | awk 'NR==2' | awk '{print $1;}')
docker stop $container
EOF
However, I get the following error:
stop.sh: line 4: docker: command not found
When I do this manually (ssh to the machine, run the commands) all is fine, but when trying to do so by means of a script I get the error. I guess that my command substitution syntax is incorrect and I've searched and tried all kinds of quotes etc but to no avail.
Can anyone point me to where I'm going wrong?
Use <<'EOF' (or <<\EOF -- quoting only the first character will have the same effect) when starting your heredoc to prevent its expansions from being evaluated locally.
BTW, personally, I'd write this a bit differently:
#!/bin/sh -e
ssh "$1" bash <<'EOF'
{ read; read container _; } < <(docker ps)
docker stop "$container"
EOF
The first read consumes the header line of the docker ps output; the second reads only the first column (the container ID) -- using bash builtins only.
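As an aside, docker ps -q prints just the container IDs, so container=$(docker ps -q | head -n 1) would avoid the header-skipping entirely.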

How to turn off echo while executing a shell script Linux [duplicate]

This question already has answers here:
How to silence output in a Bash script?
(9 answers)
Closed 6 years ago.
Here is a simple thing I was working on:
echo "please enter a command"
read x
$x
checkexitstatus()
{...}
checkexitstatus is an if block defined somewhere else, just to check the exit status.
What I want to know is: is there any way that, when I run $x, its output won't be displayed on the screen? I want to know if it is possible without redirecting the output to a file.
No, it isn't possible without redirection, but redirecting to /dev/null discards the output instead of storing it in a file:
$x &> /dev/null
You could use Bash redirection:
command 1> /.../path_to_file => redirects stdout into path_to_file.
command > /.../path_to_file => a shortcut for the previous command.
command 2> /.../path_to_file => redirects stderr into path_to_file.
To do both at the same time to the same destination: command > /.../path_to_file 2>&1.
2>&1 means redirect 2 (stderr) to 1 (stdout, which became path_to_file).
You can replace path_to_file with /dev/null if you don't want to keep the output of your command.
Otherwise, you could also store the output of a command :
$ var=$(command) # modern syntax; POSIX compliant and easier to nest
$ var=`command`  # legacy backtick syntax; also POSIX, but harder to nest
In this example, the output of command will be stored in $var.
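Applied to the original question, a minimal sketch that runs the user's command silently and keeps only its exit status (keeping the question's $x invocation as-is):
read -r x
$x > /dev/null 2>&1
status=$?
echo "exit status: $status"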
If you want to silence only the echo command, and not other commands that write to stdout, as the title suggests, you can possibly (it may break the code) create an alias for echo:
alias echo=':'
Now echo is an alias for ':', the no-op command. You can undo it with unalias echo.
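Note that in a non-interactive script, alias expansion is off by default, so this trick needs shopt -s expand_aliases first:
shopt -s expand_aliases   # aliases are off by default in scripts
alias echo=':'
echo "this prints nothing"
unalias echo              # restore normal echo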
