How to append a variable to an scp command in a shell script? - linux

Below is my shell script, in which I am trying to append $element to the scp call inside the if block.
for element in ${x[$key]}; do   # no quotes here
    printf "%s\t%s\n" "$key" "$element"
    if [ $key -eq 0 ]
    then
        scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
    fi
done
But whenever I run the above shell script, I always get this:
scp david@machineB:/data/be_t1_snapshot/20131215/t1_.data No such file or directory
Looking closely at the error message, the scp statement is not right; it should be:
scp david@machineB:/data/be_t1_snapshot/20131215/t1_0_5.data /data01/primary/.
The value of $element should be 0, but somehow my appending logic is not working. Is there anything wrong with the way I am appending $element in my scp command?

try t1_${element}_5.data
scp david@machineB:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/.
When you use t1_$element_5.data, bash treats $element_5 as the variable name, because underscores are legal in variable names. Since you don't have $element_5 defined, it expands to an empty string and you get:
t1_.data No such file or directory
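You can see the difference by testing both expansions directly in a shell (a minimal sketch using the element variable from the question):

element=0
echo "t1_$element_5.data"      # bash looks up a variable named element_5: prints t1_.data
echo "t1_${element}_5.data"    # braces delimit the name: prints t1_0_5.data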

Related

Using a for loop inside an ssh statement replaces variable with empty string

I have the following statement in a bash script:
ssh $host "cd /directory; for i in *$date.gz; do echo $i; done; exit"
I expect it to print the name of each file in the directory that ends with the date and is a gzipped file. By ssh-ing to the host on the command line and searching the directory, I find that there should be 5 such files. However, this script returns 5 blank lines. I checked whether the $date variable was properly defined inside the quotes (it was). When I replaced $i with 'adf', the script printed
adf
adf
adf
adf
adf
So it is correctly matching those 5 files, but it is just not printing their names; it is replacing the $i in the statement with nothing (so that the line is just echo). Why is it doing this, and how can I make it print the filenames? The same thing happens when I run this line on the command line.
Because you double-quote your command, variable expansion occurs locally, before the ssh call.
So when you call this command line:
ssh $host "cd /directory; for i in *$date.gz; do echo $i; done; exit"
It calls the ssh command with two arguments: $host and "cd /directory; for i in *$date.gz; do echo $i; done; exit"
The second argument picks up the contents of the date variable and the i variable when the string is built. But at that point, i does not have the correct value yet.
I think that escaping $i into \$i should solve your issue:
ssh $host "cd /directory; for i in *$date.gz; do echo \$i; done; exit"

bash script loop breaks [duplicate]

I have the following shell script. The purpose is to loop thru each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to only work with the very first line of the target file and stops after that line is processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands, and by default ssh reads from stdin, which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
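For the script above, that would mean adding -n to the ssh calls inside do_work.sh (a sketch; the question does not show the contents of do_work.sh, and user@somehost is a placeholder):

# -n redirects ssh's stdin to /dev/null, so the remote command
# cannot swallow the lines the outer while loop is reading
ssh -n user@somehost "some_remote_command $1"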
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: adding the -u argument to read, and changing the redirection < $FILENAME to 9< $FILENAME.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
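For example, a nested pair might look like this (a minimal sketch with hypothetical input files hosts.txt and files.txt):

# The outer loop reads hosts.txt on fd 9 and the inner loop reads
# files.txt on fd 8, so neither read competes for the other's input.
while read -u 9 host; do
    while read -u 8 file; do
        echo "$host: $file"
    done 8< files.txt
done 9< hosts.txt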
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
    ((count++))
    echo "$count $line"
    sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when using a heredoc while piping the output to another program, so using /dev/null as stdin is preferred.
#!/bin/bash
while read ONELINE ; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    if [ ${PIPESTATUS[0]} -ne 0 ] ; then
        echo "aborting loop"
        exit ${PIPESTATUS[0]}
    fi
done < input_list.txt
This was happening to me because I had set -e, and a grep in the loop returned no matches (which yields a non-zero exit code).
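If that is your situation, you can keep set -e and still tolerate an empty grep result (a minimal sketch with a hypothetical pattern and file):

set -e
# grep exits non-zero when nothing matches; '|| true' keeps set -e
# from aborting the script on an empty result
matches=$(grep 'pattern' somefile.txt || true)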

Check the status code of an scp command and, if it failed, call scp on another machine

Below is a snippet of my shell script, in which I am executing an scp command to copy files from machineB to machineA.
for element in ${x[$key]}; do   # no quotes here
    printf "%s\t%s\n" "$key" "$element"
    if [ $key -eq 0 ]
    then
        scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
    fi
done
I have a very simple question:
If the above scp command in my shell script fails for whatever reason with the error No such file or directory,
then I need to try the scp from machineC instead. That scp command will look like this; only the machine is different, everything else is the same:
scp david@machineC:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
So my question is: how do I check the result of the above scp command in my shell script and decide whether I need to call scp against machineC? Is there some kind of status I can check so that, if the copy failed for whatever reason, I can run scp against machineC?
Is this possible to do in a shell script?
Here you go:
for element in ${x[$key]}; do   # no quotes here
    printf "%s\t%s\n" "$key" "$element"
    if [ $key -eq 0 ]
    then
        scp david@machineB:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/. ||
            scp david@machineC:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/.
    fi
done
Well-behaved commands exit with "success" (exit code 0) if the operation was successful, and otherwise with an exit code != 0. You can chain commands together like this:
cmd && echo successful || echo failed
cmd && keep going || do something else
The exit code is also stored in the $? variable, so this is roughly equivalent:
cmd; if [ $? -eq 0 ]; then echo successful; else echo failed; fi
Not only is this possible, the status code of commands is extremely important in shell scripting. Consider these two examples:
./configure && make && make install
./configure; make; make install
The first one executes each command in the chain only if the previous ones succeeded. The second executes all of them unconditionally, even if an earlier command failed.
scp returns 0 only when it succeeds,
so you can write it like this:
scp machineB:/path/to/your/file .
if [ $? -ne 0 ]
then
    scp machineC:/path/to/your/file .
fi
A shorter way is:
scp machineB:/path/to/your/file .
[ $? -eq 0 ] || scp machineC:/path/to/your/file .
or:
scp machineB:/path/to/your/file .
[ $? -ne 0 ] && scp machineC:/path/to/your/file .
Personally, I prefer the even shorter way, since the scp output is of no use in a script:
scp -q machineB:/path/to/your/file . || scp -q machineC:/path/to/your/file .
And remember to use ${element} instead of $element.
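Putting both fixes together with the loop from the original question, the block would look something like this (a sketch using the paths and hostnames given in the question):

for element in ${x[$key]}; do
    printf "%s\t%s\n" "$key" "$element"
    if [ $key -eq 0 ]
    then
        # ${element} delimits the variable name; || falls back to machineC
        scp -q david@machineB:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/. ||
            scp -q david@machineC:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/.
    fi
done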