Check the status code of a scp command and if it is failed, then call scp on another machine - linux

Below is my snippet of shell script in which I am executing scp command to copy the files from machineB to machineA.
for element in ${x[$key]}; do # no quotes here
printf "%s\t%s\n" "$key" "$element"
if [ $key -eq 0 ]
then
scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
fi
done
I have a very simple question which is mentioned below -
If the above scp command in my shell script gives me an error for whatever reason - No such file or directory -
then I need to try doing scp from machineC, and for that the scp command will be like this - only the machine will be different, everything else will be the same -
scp david@machineC:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
So my question is how to check the output of the above scp command in my shell script and then decide whether I need to call scp command from machineC or not? Is there any status kind of thing which I can use to check and if it got failed for whatever reason, then I can call scp command on machineC?
Is this possible to do in shell script?

Here you go:
for element in ${x[$key]}; do # no quotes here
printf "%s\t%s\n" "$key" "$element"
if [ $key -eq 0 ]
then
scp david@machineB:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/. || scp david@machineC:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/.
fi
done
Well-behaving commands exit with "success" (exit code = 0) if the operation was successful, or otherwise with an exit code != 0. You can chain commands together like this:
cmd && echo successful || echo failed
cmd && keep going || do something else
The exit code is also stored in the $? variable, so this is equivalent:
cmd; if [ $? -eq 0 ]; then echo successful; else echo failed; fi
Not only is this possible, the status code of commands is extremely important in shell scripting. Consider these two examples:
./configure && make && make install
./configure; make; make install
The first one will execute the chain of commands if all are successful. The second will execute all of them always, even if an earlier command failed.
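You can see this in action without any real commands: `true` and `false` are the standard stand-ins, exiting with 0 and 1 respectively.

```shell
# true exits 0 (success), so the && branch runs
true && echo "first branch: succeeded"
# false exits 1 (failure), so the || branch runs
false || echo "second branch: failed"
# the last exit code is always available in $?
false
echo "exit code of false was $?"
```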

scp returns 0 only when it succeeds, so you can write it like this:
scp machineB:/path/to/your/file .
if [ $? -ne 0 ]
then
scp machineC:/path/to/your/file .
fi
A shorter way is:
scp machineB:/path/to/your/file .
[ $? -eq 0 ] || scp machineC:/path/to/your/file .
or
scp machineB:/path/to/your/file .
[ $? -ne 0 ] && scp machineC:/path/to/your/file .
Personally I prefer the even shorter way, since the scp output is of no use in a script:
scp -q machineB:/path/to/your/file . || scp -q machineC:/path/to/your/file .
And remember to use ${element} instead of $element.

Related

SSH Remote command exit code

I know there are lots of discussions about it, but I need your help with ssh remote command exit codes. I have this code:
(scan is a script which scans for viruses in the given file)
for i in $FILES
do
RET_CODE=$(ssh $SSH_OPT $HOST "scan $i; echo $?")
if [ $? -eq 0 ]; then
SOME_CODE
The scan works and it returns either 0, 1 for errors, or 2 if a virus is found. But somehow my return code is always 0, even if I scan a virus.
Here is set -x output:
++ ssh -i /home/USER/.ssh/id host 'scan Downloads/eicar.com; echo 0'
+ RET_CODE='File Downloads/eicar.com: VIRUS: Virus found.
code of the Eicar-Test-Signature virus
0'
Here is the output if I run those commands on the "remote" machine without ssh:
[user@ws ~]$ scan eicar.com; echo $?
File eicar.com: VIRUS: Virus found.
code of the Eicar-Test-Signature virus
2
I just want to have the return code, I don't need all the other output of scan.
UPDATE:
It seems like echo is the problem.
The reason your ssh is always returning 0 is because the final echo command is always succeeding! If you want to get the return code from scan, either remove the echo or assign it to a variable and use exit. On my system:
$ ssh host 'false'
$ echo $?
1
$ ssh host 'false; echo $?'
1
$ echo $?
0
$ ssh host 'false; ret=$?; echo $ret; exit $ret'
1
$ echo $?
1
ssh returns the exit status of the entire pipeline that it runs - in this case, that's the exit status of echo $?.
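If you don't have a remote host handy, the same effect can be reproduced locally with sh -c, which, like ssh, returns the exit status of the last command it runs:

```shell
# the exit status of sh -c is the status of the last command inside it
sh -c 'false'
echo "status: $?"
# appending "echo $?" prints 1, but makes sh -c itself exit 0,
# because the last command (echo) succeeded - same trap as with ssh
sh -c 'false; echo $?'
echo "status: $?"
```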
What you want to do is simply use the ssh result directly (since you say that you don't want any of the output):
for i in $FILES
do
if ssh $SSH_OPT $HOST "scan $i >/dev/null 2>&1"
then
SOME_CODE
If you really feel you must print the return code, you can do that without affecting the overall result by using an EXIT trap:
for i in $FILES
do
if ssh $SSH_OPT $HOST "trap 'echo \$?' EXIT; scan $i >/dev/null 2>&1"
then
SOME_CODE
Demo:
$ ssh $host "trap 'echo \$?' EXIT; true"; echo $?
0
0
$ ssh $host "trap 'echo \$?' EXIT; false"; echo $?
1
1
BTW, I recommend you avoid uppercase variable names in your scripts - those are normally used for environment variables that change the behaviour of programs.

how to pick first machine random out of three in shell?

I have three remote machines (machineA, machineB, machineC) from where I can copy files. If for whatever reason I can't copy from machineA, then I should copy from machineB and if for whatever reason I can't copy from machineB then start copying from machineC.
Below is the single shell command I have. I need to run it on many machines, but as written it will copy from machineA only on all of them.
(ssh goldy@machineA 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineA:{} /data/files/') || (ssh goldy@machineB 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineB:{} /data/files/') || (ssh goldy@machineC 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineC:{} /data/files/')
Now is there any way by which I can pick the first machine randomly (out of those three) instead of always keeping machineA first, and keep the other two as backups in case the first machine is down? Is this possible to do?
Update:
I have something like this:
machines=(machineA machineB machineC)
for machine in $(shuf -e ${machines[@]}); do
ssh -o StrictHostKeyChecking=no david@$machine 'ls -1 /process/snap/{{ folder }}/*' | parallel -j{{ threads }} 'scp -o StrictHostKeyChecking=no david@${machine}:{} /data/files/'
[ $? -eq 0 ] && break
done
How about keeping the machine names in a file and using shuf to shuffle them? Then you could create a script like this:
while read machine; do
ssh goldy@$machine 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@$machine:{} /data/files/"
if [ $? == 0 ]; then
break
fi
done
And the machine file like this:
machineA
machineB
machineC
And call the script like this:
shuf machines | ./script.sh
Here's a test version that doesn't do anything but shows how the logic works:
while read machine; do
echo ssh goldy@$machine 'ls -1 /process/snap/20180418/*'
echo parallel -j5 "scp goldy@$machine:{} /data/files/"
executenonexistingcommand
if [ $? == 0 ]; then
break
fi
done
Addressing your comment to use arrays instead and put everything on a single line:
shuf -e ${machines[@]} shuffles an array, and to read it back into the array you need to feed the output into readarray. Turning scripts into a single line is just a matter of putting semicolons where we had newlines before.
machines=( machineA machineB machineC ); for machine in $(shuf -e ${machines[@]}); do ssh goldy@$machine 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@${machine}:{} /data/files/"; if [ $? == 0 ]; then break; fi; done
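The shuffle step on its own can be sketched like this (it needs bash 4+ for readarray and GNU coreutils for shuf; the machine names are just the placeholders from the question):

```shell
#!/bin/bash
machines=(machineA machineB machineC)
# shuf -e prints its arguments in random order, one per line;
# readarray -t reads them back into an array, stripping newlines
readarray -t shuffled < <(shuf -e "${machines[@]}")
printf 'order: %s\n' "${shuffled[@]}"
echo "still ${#shuffled[@]} machines"
```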
Here is a little example of how you might do it - it is largely comments, to show what I am thinking, but you can remove them to make it concise.
#!/bin/bash
# Machine names, number of machines, random starting index
machines=("machineA" "machineB" "machineC")
num=${#machines[@]}
idx=$((RANDOM%num))
# Make one try per machine, break on success
for ((try=0;try<num;try++)) ; do
this=${machines[$idx]}
echo $this
((idx=(idx+1)%num))
done
So, you would put your command where I have echo $this, and follow it with:
[ $? -eq 0 ] && break
Sample Output
./go
machineB
machineC
machineA
If you have shuf you can do the same thing more succinctly like this:
#!/bin/bash
# Machine names, in random order
machines=("machineA" "machineB" "machineC")
machines=( $(shuf -e "${machines[@]}") )
# Make one try per machine, break on success
for i in "${machines[@]}"; do
echo $i
... your command
[ $? -eq 0 ] && break
done

ssh to different nodes using shell scripting

I am using the below code to ssh to different nodes and find whether a user exists or not. If the user doesn't exist, it will create it.
The script works fine if I don't do ssh but it fails if I do ssh.
How can I go through different nodes using this script?
for node in `nodes.txt`
usr=root
ssh $usr#$node
do
if [ $(id -u) -eq 0 ]; then
read -p "Enter username : " username
read -s -p "Enter password : " password
egrep "^$username" /etc/passwd >/dev/null
if [ $? -eq 0 ]; then
echo "$username exists!"
exit 1
else
pass=$(perl -e 'print crypt($ARGV[0], "password")' $password)
useradd -m -p $pass $username
[ $? -eq 0 ] && echo "User has been added to system!" || echo "Failed to add a user!"
fi
else
echo "Only root may add a user to the system"
exit 2
fi
done
Your script has grave syntax errors. I guess the for loop at the beginning is what you attempted to add but you totally broke the script in the process.
The syntax for looping over lines in a file is
while read -r line; do
.... # loop over "$line"
done <nodes.txt
(or marginally for line in $(cat nodes.txt); do ... but this has multiple issues; see http://mywiki.wooledge.org/DontReadLinesWithFor for details).
If the intent is to actually run the remainder of the script in the ssh you need to pass it to the ssh command. Something like this:
while read -r node; do
read -p "Enter user name: " username
read -s -p "Enter password: " password
ssh root@"$node" "
# Note addition of -q option and trailing :
egrep -q '^$username:' /etc/passwd ||
useradd -m -p \"\$(perl -e 'print crypt(\$ARGV[0], \"password\")' \"$password\")\" '$username'" </dev/null
done <nodes.txt
Granted, the command you pass to ssh can be arbitrarily complex, but you will want to avoid doing interactive I/O inside a root-privileged remote script, and generally make sure the remote command is as quiet and robust as possible.
The anti-pattern command; if [ $? -eq 0 ]; then ... is clumsy but very common. The purpose of if is to run a command and examine its result code, so this is better and more idiomatically written as if command; then ... (which can be written even more succinctly as command && ... or ! command || ... if you only need the then or the else part, respectively, of the full if/then/else structure).
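A self-contained sketch of the idiom, with grep -q standing in for any command whose status matters (the sample passwd line is made up for the demo):

```shell
line="root:x:0:0:root:/root:/bin/bash"

# idiomatic: if tests the command's exit status directly,
# no need to inspect $? afterwards
if printf '%s\n' "$line" | grep -q '^root:'; then
    echo "root exists"
else
    echo "root missing"
fi

# short forms when only one branch is needed
printf '%s\n' "$line" | grep -q '^root:' && echo "found"
printf '%s\n' "$line" | grep -q '^nobody:' || echo "not found"
```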
Maybe you should only do the remote tasks via ssh. All the rest runs local.
ssh $user@$node egrep "^$username" /etc/passwd >/dev/null
and
ssh $user@$node useradd -m -p $pass $username
It might also be better to ask for username and password outside of the loop if you want to create the same user on all nodes.

How to append a variable in SCP command in shell script?

Below is my shell script in which I am trying to append $element in my below scp call in the if statement block.
for element in ${x[$key]}; do # no quotes here
printf "%s\t%s\n" "$key" "$element"
if [ $key -eq 0 ]
then
scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
fi
done
But whenever I run my above shell script, I always get this -
scp david@machineB:/data/be_t1_snapshot/20131215/t1_.data No such file or directory
When I take a close look at the above error message, the scp statement is not right, as it should be -
scp david@machineB:/data/be_t1_snapshot/20131215/t1_0_5.data /data01/primary/.
The value of $element should get replaced with 0, but somehow my appending logic is not working. Is there anything wrong in the way I am appending $element in my above scp command?
Try t1_${element}_5.data:
scp david@machineB:/data/be_t1_snapshot/20131215/t1_${element}_5.data /data01/primary/.
When you use t1_$element_5.data, bash tries to replace $element_5 with its value. You don't have $element_5 defined, so you are getting
t1_.data No such file or directory
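A quick runnable demonstration of the parsing difference (assuming $element_5 is unset, as in the question):

```shell
element=0
# without braces the shell reads the longest valid variable name,
# element_5, which is unset and expands to nothing
echo "t1_$element_5.data"
# braces delimit the variable name explicitly
echo "t1_${element}_5.data"
```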

shell to find a file , execute it - exit if 'error' and continue if ' no error'

I have to write a shell script and I don't know how to go about it.
Basically I have to write a script where I'd find a file (it could possibly be named differently). If either file exists then it must be executed; if it returns 0 (no error), the build should continue, and if it returns non-zero (an error), the script should exit. If neither file is found, it should continue the build.
The file I have to find could be either file.1 or file.2, so it could be named either (file.1) or (file.2).
Some of the conditions, to make it more clear:
1) If either file exists, it should be executed - if it has any errors it should exit, if no errors it should continue.
2) Neither could exist; in that case it should continue the build.
3) Both files will not be present at the same time (additional info).
I have tried to write a script, but I doubt it's even close to what I am looking for.
if [-f /home/(file.1) or (file.2)]
then
-exec /home/(file.1) or (file.2)
if [ $! -eq 0]; then
echo "no errors continuing build"
fi
else
if [ $! -ne 0] ; then
exit ;
fi
else
echo "/home/(file.1) or (file.2) not found, continuing build"
fi
any help is much appreciated.
Thanks in advance
DOIT=""
for f in file1.sh file2.sh; do
if [ -x /home/$f ]; then DOIT="/home/$f"; break; fi
done
if [ -z "$DOIT" ]; then echo "Files not found, continuing build"; fi
if [ -n "$DOIT" ]; then $DOIT && echo "No Errors" || exit 1; fi
For those confused about my syntax, try running this:
true && echo "is true" || echo "is false"
false && echo "is true" || echo "is false"
Just putting the line
file.sh
in your script should work, if you set up your script to exit on errors.
For example, if your script was
#!/bin/bash -e
echo one
./file.sh
echo two
Then if file.sh exists and is executable it would run and your whole script would run. If not, the script would fail when it tried to execute the non-existing file.
If you want to execute one file or the other, extend the idea to the following:
#!/bin/bash -e
echo one
./file1.sh || ./file2.sh
echo two
This means if file1.sh does not exist, it will try file2.sh and if that is there it will run and your whole script will run.
This gives preference to file1, of course, meaning if they both exist, only file1 will run.
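The fallback behaviour can be tried out with throwaway scripts; the file names here are stand-ins created on the fly just for the demo:

```shell
#!/bin/sh
dir=$(mktemp -d)
# only file2.sh exists in this demo
printf '#!/bin/sh\necho "file2 ran"\n' > "$dir/file2.sh"
chmod +x "$dir/file2.sh"
# file1.sh is missing, so the || falls through to file2.sh
"$dir/file1.sh" 2>/dev/null || "$dir/file2.sh"
rm -rf "$dir"
```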

Resources