Using a for loop inside an ssh statement replaces the variable with an empty string - linux

I have the following statement in a bash script:
ssh $host "cd /directory; for i in *$date.gz; do echo $i; done; exit"
I expect it to print the name of each file in the directory that ends with the date and is a gzipped file. By ssh-ing to the host on the command line and searching the directory, I find that there should be 5 such files. However, this script returns 5 blank lines. I checked whether the $date variable was properly defined inside the quotes (it was). When I replaced $i with 'adf', the script printed
adf
adf
adf
adf
adf
So it is correctly matching those 5 files, but it is just not printing their names; the $i in the statement is being replaced with nothing (so that the line is just a bare echo). Why is it doing this, and how can I make it print the filenames? The same thing happens when I run this line directly on the command line.

Because you double-quote your command, variable expansion occurs locally, before the ssh call.
So when you call this command line:
ssh $host "cd /directory; for i in *$date.gz; do echo $i; done; exit"
It calls the ssh command with two arguments: $host and "cd /directory; for i in *$date.gz; do echo $i; done; exit"
The second argument picks up the contents of the date variable and the i variable when the string is built. But at that time, i does not yet have a value, so $i expands to an empty string.
I think that escaping $i as \$i, so that the remote shell expands it instead, should solve your issue:
ssh $host "cd /directory; for i in *$date.gz; do echo \$i; done; exit"

Related

bash script loop breaks [duplicate]

I have the following shell script. The purpose is to loop through each line of the target file (whose path is the input parameter to the script) and do work against each line. However, it seems to work only with the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands, and by default ssh reads from stdin, which here is your input file. As a result, you only see the first line processed: the ssh command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
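For example, a minimal sketch of the same loop with ssh's stdin detached (the host and the remote do_work command are hypothetical):
while read LINE; do
ssh -n user@remotehost "do_work '$LINE'"   # -n: ssh reads stdin from /dev/null
done < "$FILENAME"
With -n, ssh cannot swallow the remaining lines of $FILENAME.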
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and a numbered file descriptor on the < $FILENAME redirection.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
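For instance, a sketch of two nested read loops, each reading from its own descriptor (the file names hosts.txt and jobs.txt are hypothetical):
while read -u 9 host; do
while read -u 8 job; do
echo "$host: $job"
done 8< jobs.txt
done 9< hosts.txt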
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
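To see what -r changes, compare how a backslash in the input is handled (a quick demonstration):
$ printf 'a\\tb\n' | while read line; do echo "$line"; done
atb
$ printf 'a\\tb\n' | while read -r line; do echo "$line"; done
a\tb
Without -r, read treats the backslash as an escape character and drops it; with -r, the line comes through verbatim.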
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when you feed it commands from a here document while piping the output to another program (since -n redirects stdin away from the heredoc). So using /dev/null as stdin is preferred:
#!/bin/bash
while read ONELINE ; do
ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
if [ ${PIPESTATUS[0]} -ne 0 ] ; then  # exit status of ssh, the first command in the pipeline
echo "aborting loop"
exit ${PIPESTATUS[0]}
fi
done < input_list.txt
This was happening to me because I had set -e, and a grep in the loop was matching nothing; grep exits with a non-zero status when it produces no output, which aborted the loop.
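A common guard if you want to keep set -e but tolerate a non-matching grep (a sketch; the pattern and file name are placeholders):
set -e
count=$(grep -c 'pattern' input.txt) || true  # grep exits 1 on no match; '|| true' keeps set -e from aborting
echo "found $count matches"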

Changing shell inside a shell script

In the default shell, the for loop below
for ((i=$llimit; i<=$ulimit; i++));
do
echo $i
done;
throws the error "'((' is not expected", but when switching to the bash shell the for loop works fine. Is there a way to change the shell inside a shell script, or some other solution, since this for loop is inside a shell script?
EDIT:
This is the shell script:
#!/bin/bash
nav_var=`sqlplus -s tcs384160/tcs#1234 <<\EOF
set pagesize 0 feedback off verify off heading off echo off
select max(sequence#) from v$archived_log where applied='YES' and thread#=2 and dest_id=2;
exit;
EOF`
echo $nav_var;
ulimit=`expr $nav_var - 30`;
llimit=`expr $ulimit - 200`;
for ((i=$llimit; i<=$ulimit; i++));
do ls -l arch_aceprod_2_${i}_743034701.arc;
done;
The C-style for loop you've used is a bashism.
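You can see the difference directly (a demonstration; here /bin/sh is dash, and the exact error text varies by shell):
$ sh -c 'for ((i=0; i<3; i++)); do echo $i; done'
sh: 1: Syntax error: Bad for loop variable
$ bash -c 'for ((i=0; i<3; i++)); do echo $i; done'
0
1
2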
Change the line
for ((i=$llimit; i<=$ulimit; i++));
to
for i in $(seq $llimit $ulimit);
and it would work well with both sh and bash.
EDIT: If you don't have seq, you could change the loop as follows (note that let is also a bashism, so POSIX arithmetic expansion is used instead):
i=$llimit
while [ $i -le $ulimit ]; do
echo "Do something here"
i=$((i+1))  # POSIX arithmetic expansion; 'let i=i+1' would be a bashism
done
By "default shell" I assume you mean /bin/sh? Is there a line starting "#!" at the top of the script?
Bash is pretty much backwards compatible with sh. If you put "#!/bin/bash" (without the quotes) as the first line this should get the whole thing to run under bash.
Try another for loop syntax:
for counter in {1..10}
do
your logic
done
Note, however, that brace expansion happens before variable expansion in bash, so {$llimit..$ulimit} will not work with variable limits; in that case use seq or, under bash, the C-style loop.
Or adding #!/bin/bash as the first line will also work in your case.

How to append a variable in an scp command in a shell script?

Below is my shell script, in which I am trying to append $element to the file name in the scp call inside the if block.
for element in ${x[$key]}; do # no quotes here
printf "%s\t%s\n" "$key" "$element"
if [ $key -eq 0 ]
then
scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
fi
done
But whenever I run the above shell script, I always get this -
scp david@machineB:/data/be_t1_snapshot/20131215/t1_.data No such file or directory
Taking a close look at the error message, the scp statement is not right; it should be -
scp david@machineB:/data/be_t1_snapshot/20131215/t1_0_5.data /data01/primary/.
The value of $element should get substituted as 0, but somehow my appending logic is not working. Is there anything wrong with the way I am appending $element in my scp command?
try t1_${element}_5.data
scp david@machineB:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/.
When you use t1_$element_5.data, bash tries to expand a variable named $element_5.
Since $element_5 is not defined, it expands to an empty string, so you get
t1_.data No such file or directory
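A quick way to see the difference (the value of element here is illustrative):
$ element=0
$ echo "t1_$element_5.data"
t1_.data
$ echo "t1_${element}_5.data"
t1_0_5.data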

Parameter list with double quotes does not pass through properly in Bash

I have a Bash script that calls another Bash script. The called script does some modification and checking on a few things, shifts, and then passes the rest of the caller's command line through.
In the called script, I have verified that I have everything managed and ready to call. Here's some debug-style code I've put in:
echo $SVN $command $@ > /tmp/shimcmd
bash /tmp/shimcmd
$SVN $command $@
Now, in /tmp/shimcmd you'll see:
svn commit --username=myuser --password=mypass --non-interactive --trust-server-cert -m "Auto Update autocommit Wed Apr 11 17:33:37 CDT 2012"
That is, the built command, all on one line, perfectly fine, including a -m "my string with spaces" portion.
It's perfect. And the "bash /tmp/shimcmd" execution of it works perfectly as well.
But of course I don't want this silly tmp file and such (only used it to debug). The problem is that calling the command directly, instead of via the shim file:
$SVN $command $@
results in the svn command itself NOT receiving the quoted string with spaces--it garbles the '-m "my string with spaces"' parameter and shanks the command as if it was passed as '-m my string with spaces'.
I have tried all manner of crazy escape methods, to no avail. I can't believe it's dogging me this badly. Again, echoing the very same thing ($SVN $command $@) to a file and then executing that file works FINE. But calling it directly garbles the quoted string. That element alone breaks.
Any ideas?
Dan
Did you try:
eval "$SVN $command $#"
?
Here's a way to demonstrate the problem:
$ args='-m "foo bar"'
$ printf '<%s> ' $args
<-m> <"foo> <bar">
And here's a way to avoid it:
$ args=( -m "foo bar" )
$ printf '<%s> ' "${args[#]}"
<-m> <foo bar>
In this latter case, args is an array, not a quoted string.
Note, by the way, that it has to be "$@", not $@, to get this behavior (in which string-splitting is avoided in favor of respecting the array entries' boundaries).
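Applied to the original problem, a sketch of building the whole command as an array (using the names from the question; $SVN is assumed to hold a single word):
cmd=( "$SVN" "$command" "$@" )
"${cmd[@]}"   # each array element is passed as exactly one argument
Each element, including the -m message with its spaces, survives as a single argument.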
Or try this:
echo -n -e $SVN \"$command\" > /tmp/shimcmd
for x in "$@"
do
a=$a" "\"$x\"
done
echo -e " " $a >> /tmp/shimcmd
bash /tmp/shimcmd
or simply
$SVN "$command" "$#"
