How to pick the first machine randomly out of three in shell? - linux

I have three remote machines (machineA, machineB, machineC) from which I can copy files. If for whatever reason I can't copy from machineA, then I should copy from machineB, and if for whatever reason I can't copy from machineB, then start copying from machineC.
Below is the single shell command I have. I need to run it on many machines, but that means all of those machines will copy from machineA only.
(ssh goldy@machineA 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineA:{} /data/files/') || (ssh goldy@machineB 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineB:{} /data/files/') || (ssh goldy@machineC 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineC:{} /data/files/')
Now, is there any way I can pick the first machine randomly (out of those three) instead of always keeping machineA first? So: pick the first machine at random and keep the other two as backups in case the first machine is down. Is this possible to do?
Update:
I have something like this:
machines=(machineA machineB machineC)
for machine in $(shuf -e ${machines[@]}); do
ssh -o StrictHostKeyChecking=no david@$machine 'ls -1 /process/snap/{{ folder }}/*' | parallel -j{{ threads }} 'scp -o StrictHostKeyChecking=no david@${machine}:{} /data/files/'
[ $? -eq 0 ] && break
done

How about keeping the machine names in a file and using shuf to shuffle them? Then you could create a script like this:
while read machine; do
ssh goldy@$machine 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@$machine:{} /data/files/"
if [ $? == 0 ]; then
break
fi
done
And the machine file like this:
machineA
machineB
machineC
And call the script like this:
shuf machines | ./script.sh
Here's a test version that doesn't do anything but shows how the logic works:
while read machine; do
echo ssh goldy@$machine 'ls -1 /process/snap/20180418/*'
echo parallel -j5 "scp goldy@$machine:{} /data/files/"
executenonexistingcommand
if [ $? == 0 ]; then
break
fi
done
Addressing your comment to use arrays instead and put everything on a single line:
shuf -e ${machines[@]} shuffles an array, and to read the result back into an array you feed the output into readarray. Turning scripts into a single line is just a matter of putting semicolons where we had newlines before.
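For example, a minimal readarray sketch (assumes bash 4+ for the readarray builtin; -t strips the trailing newlines):
readarray -t machines < <(shuf -e "${machines[@]}")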
machines=( machineA machineB machineC ); for machine in $(shuf -e ${machines[@]}); do ssh goldy@$machine 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@${machine}:{} /data/files/"; if [ $? == 0 ]; then break; fi; done

Here is a little example of how you might do it - it is largely comments, to show what I am thinking, but you can remove them to make it concise.
#!/bin/bash
# Machine names, number of machines, random starting index
machines=("machineA" "machineB" "machineC")
num=${#machines[@]}
idx=$((RANDOM%num))
# Make one try per machine, break on success
for ((try=0;try<num;try++)) ; do
this=${machines[$idx]}
echo $this
((idx=(idx+1)%num))
done
So, you would put your command where I have echo $this, and follow it with:
[ $? -eq 0 ] && break
Sample Output
./go
machineB
machineC
machineA
If you have shuf you can do the same thing more succinctly like this:
#!/bin/bash
# Machine names, in random order
machines=("machineA" "machineB" "machineC")
machines=( $(shuf -e "${machines[@]}") )
# Make one try per machine, break on success
for i in "${machines[@]}"; do
echo $i
... your command
[ $? -eq 0 ] && break
done
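Putting the command from the question in place of the placeholder, it might look like this (just a sketch, reusing the goldy user and paths from the original post):
#!/bin/bash
machines=( machineA machineB machineC )
machines=( $(shuf -e "${machines[@]}") )
for m in "${machines[@]}"; do
ssh goldy@"$m" 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@$m:{} /data/files/"
[ $? -eq 0 ] && break
done
One caveat: if ssh itself fails, parallel receives empty input and still exits 0, so you may want to capture the ssh output first and test it before piping it to parallel.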

Related

create an array from original array but based on where code is running?

I have three machines (each in a different datacenter) in a machines array.
If my shell script is running in the abc datacenter, then I want to scp files from machineA.abc.host.com, which will be my local box. I will use the other two boxes as remote servers to copy files from in case the local box is down.
If my shell script is running in the def datacenter, then I want to scp files from machineB.def.host.com, which will be my local box. I will use the other two boxes as remote servers to copy files from in case the local box is down.
If my shell script is running in the pqr datacenter, then I want to scp files from machineC.pqr.host.com, which will be my local box. I will use the other two boxes as remote servers to copy files from in case the local box is down.
Below is my script, but I believe it can be done in a much better way instead of using three different variables and then having three scp statements separated by ||:
machines=(machineA.abc.host.com machineB.def.host.com machineC.pqr.host.com)
case $(hostname -f) in
*abc.host.com)
local_server=("${machines[0]}")
primary_remote=("${machines[1]}")
secondary_remote=("${machines[2]}")
;;
*def.host.com)
local_server=("${machines[1]}")
primary_remote=("${machines[2]}")
secondary_remote=("${machines[0]}")
;;
*pqr.host.com)
local_server=("${machines[2]}")
primary_remote=("${machines[0]}")
secondary_remote=("${machines[1]}")
;;
*) echo "unknown host: $(hostname -f), exiting." >&2 && exit 1 ;;
esac
export local="$local_server"
export remote1="$primary_remote"
export remote2="$secondary_remote"
copyFiles() {
el=$1
primsec=$2
# can we just iterate with a for loop instead of writing three scp statements?
(scp -C -o StrictHostKeyChecking=no goldy@"$local":/proc/data/abc_187_"$el"_111_8.data "$primsec"/.) || (scp -C -o StrictHostKeyChecking=no goldy@"$remote1":/proc/data/abc_187_"$el"_111_8.data "$primsec"/.) || (scp -C -o StrictHostKeyChecking=no goldy@"$remote2":/proc/data/abc_187_"$el"_111_8.data "$primsec"/.)
}
export -f copyFiles
# using gnu parallel here to call the above function in parallel
Now as you can see, I have three scp statements: one for the local box and one each for remote1 and remote2. What I am thinking is that maybe we can get rid of these three scp statements and instead store the hostnames in an array (in a particular order: the first index can be the local box and the other two can be the remotes), then iterate over that array in a for loop and write just one scp statement?
for p in "${machines[@]}"; do scp -C -o StrictHostKeyChecking=no goldy@"$p":/proc/data/abc_187_"$el"_111_8.data "$primsec"/. && break; done > /dev/null 2>&1
If this is possible, then how can I reshuffle the machines array accordingly, or maybe create a different array with the right machines at the proper indexes?
Update:
Somehow my for loop inside that function is not running at all:
copyFiles() {
local el=$1
local primsec=$2
local remote_file="/proc/data/abc_187_${el}_111_8.data"
for host in "${hosts[@]}"; do
echo "$host"
echo "scp -C -o StrictHostKeyChecking=no "goldy#$host:$remote_file" "$primsec"/." && break
done
}
export hosts
export -f copyFiles
parallel -j 5 copyFiles {} $proc ::: ${pro[@]} &
parallel -j 5 copyFiles {} $data ::: ${seco[@]} &
wait
echo "everything copied"
How about this: it uses
an associative array to hold the "local" machine names
an array to hold the sequence of hosts for scp
a for loop to iterate over the possible hosts, and break after the first successful scp
#!/bin/bash
declare -A machines=(
[abc]=machineA.abc.host.com
[def]=machineB.def.host.com
[pqr]=machineC.pqr.host.com
)
IFS=. read -a host_parts < <(hostname -f)
case "${host_parts[1]}" in
abc) hosts=( "${machines[abc]}" "${machines[def]}" "${machines[pqr]}" ) ;;
def) hosts=( "${machines[def]}" "${machines[pqr]}" "${machines[abc]}" ) ;;
pqr) hosts=( "${machines[pqr]}" "${machines[abc]}" "${machines[def]}" ) ;;
*) echo "unknown host: $(hostname -f), exiting." >&2; exit 1 ;;
esac
copyFiles() {
local el=$1
local primsec=$2
local remote_file="/proc/data/abc_187_${el}_111_8.data"
for host in "${hosts[@]}"; do
scp -C -o StrictHostKeyChecking=no "goldy@$host:$remote_file" "$primsec"/. && break
done
}
export hosts
export -f copyFiles
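One caveat worth flagging (my note, not part of the original answer): bash cannot export arrays, so export hosts will not actually make the hosts array visible inside the shells that GNU parallel spawns. A sketch of one workaround, under that assumption, is to export a space-joined string and rebuild the array inside the function:
# arrays cannot be exported, but plain strings can
hosts_str="${hosts[*]}"
export hosts_str
copyFiles() {
local el=$1
local primsec=$2
local remote_file="/proc/data/abc_187_${el}_111_8.data"
# rebuild the array from the exported string
local -a hosts
read -r -a hosts <<< "$hosts_str"
for host in "${hosts[@]}"; do
scp -C -o StrictHostKeyChecking=no "goldy@$host:$remote_file" "$primsec"/. && break
done
}
export -f copyFiles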

bash - wget -N if else value check

I'm working on a bash script that pulls a file from an FTP site only if the timestamp on the remote is different from the local one. After it pulls the file, it copies it over to 3 other computers via Samba (smbclient).
Everything works, but the copy happens even when wget -N ftp://insertsitehere.com reports that the file on the remote was not newer. What would be the best way to check the output of the script so that the copy only happens if a new version was pulled from FTP?
Ideally, I'd like the copy to the computers to preserve the timestamp just like the wget -N command does, too.
Here is an example of what I have:
#!/bin/bash
OUTDIR=/cats/dogs
cd $OUTDIR
wget -N ftp://user:password@sitegoeshere.com/filename
if [ $? -eq 0 ]; then
HOSTS="server1 server2 server3"
for i in $HOSTS; do
echo "Uploading to $i..."
smbclient -A /root/.smbclient.authfile //$i/path -c "lcd /cats/dogs; put filename.txt"
if [ $? -eq 0 ]; then
echo "Upload to $i successful..."
else
echo "There was an issue uploading to host $i..."
fi
done
else
echo "There was an issue with the FTP Download...."
exit 1
fi
The return value of wget is different from 0 only if there is an error. If -N is in use and the remote file is older than the local file, wget will still return 0, so you cannot use the exit status to check whether the file was modified.
You could check the mtime of the file to see if it changed, or the content. For example, you could use something like:
md5_old=$( md5sum filename.txt 2>/dev/null )
wget -N ftp://user:password@sitegoeshere.com/filename.txt
md5_new=$( md5sum filename.txt )
if [ "$md5_old" != "$md5_new" ]; then
# Copy filename.txt to SMB servers
fi
Regarding smbclient, unfortunately there is no way to preserve timestamps in either the get or put commands. If you need that, you must use a different tool (scp -p, rsync -t...).
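For instance, a hedged sketch with rsync (this assumes SSH access to the target machine rather than SMB; the destination path is illustrative):
# -t preserves modification times across the copy
rsync -t /cats/dogs/filename.txt user@server1:/some/path/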
touch -r foo.txt foo.old
wget -N example.com/foo.txt
if [ foo.txt -nt foo.old ]
then
echo 'Uploading to server1...'
fi
"Save" the current timestamp into a new empty file
Use wget --timestamping to only download the file if it is newer
If file is newer than the "save" file, do stuff
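Folded back into the script from the question, that might look like this (a sketch; it keeps the original hosts and paths, and relies on [ file1 -nt file2 ] being true when the reference file does not exist yet):
#!/bin/bash
OUTDIR=/cats/dogs
cd "$OUTDIR" || exit 1
# snapshot the current mtime, if the file exists at all
touch -r filename.txt filename.old 2>/dev/null
wget -N ftp://user:password@sitegoeshere.com/filename.txt
if [ filename.txt -nt filename.old ]; then
for i in server1 server2 server3; do
echo "Uploading to $i..."
smbclient -A /root/.smbclient.authfile //$i/path -c "lcd /cats/dogs; put filename.txt"
done
fi
rm -f filename.old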

scp: how to find out that copying was finished

I'm using the scp command to copy a file from one Linux host to another.
I run the scp command on host1 and copy the file from host1 to host2. The file is quite big and it takes some time to copy.
On host2 the file appears immediately, as soon as copying has started. I can do everything with this file even while copying is still in progress.
Is there any reliable way to find out on host2 whether copying has finished or not?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
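A minimal sketch of the ssh-agent route (assumes key-based authentication; the key path is illustrative):
# start the agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
eval "$(ssh-agent)"
# unlock the key once; later scp calls reuse the agent
ssh-add ~/.ssh/id_rsa
scp bigfile user@host:
scp tinyfile user@host: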
On the sending side (host1), use a script like this:
#!/bin/bash
echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT = 0 ]; then
echo 'transfer successful'
touch successful
scp successful USER@DST_SERVER:DST_PATH
else
echo 'transfer failed'
fi
On the receiving side (host2), make a script like this:
#!/bin/bash
SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
((CNT++))
sleep "$SLEEP_TIME"
done;
if [[ -e successful ]]; then
echo 'successful'
rm successful
# do something with FILE
fi
With CNT and MAX_CNT you avoid an endless loop (in case the file successful is never transferred).
The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time. In my example the expected transfer time is less than 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you if they're identical.
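With SSH access to host2, a minimal sketch of that comparison (hostnames and filenames are illustrative):
local_sum=$(sha256sum bigfile | cut -d' ' -f1)
remote_sum=$(ssh user@host2 "sha256sum bigfile" | cut -d' ' -f1)
[ "$local_sum" = "$remote_sum" ] && echo "transfer complete and intact"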
For the situation where you don't have SSH access to the remote system - like an FTP server - you can download the file after it's uploaded and compare the checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum $LOCALFILE | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
-o $MD5TESTFILE \
-sS \
-u $FTPUSER:$FTPPASS \
ftp://$FTPHOST/$REMOTEFILE
MD5DST=$(md5sum $MD5TESTFILE | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]
then
echo "+Local and Remote files match!"
else
echo "-Local and Remote files don't match"
fi
If you use inotify-tools, then the solution will look like this:
while ! inotifywait -e close $(dirname ${bigfile_fullname}) 2>/dev/null | \
grep -Eo "CLOSE $(basename ${bigfile_fullname})$">/dev/null
do true
done
echo "File ${bigfile_fullname} closed"
After some investigation and discussion of the problem on other forums, I have found one more solution. Maybe it can help somebody.
There is a command, lsof. It lists open files. During copying, the file will be open, so the command
lsof | grep filename
will return a non-empty result.
So you might want to make a while loop that waits until lsof returns nothing, and then proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=`lsof | grep $f | wc -l`
while [ $lsofresult != 0 ]; do
echo still copying file $f...
sleep 5
lsofresult=`lsof | grep $f | wc -l`
done; echo copying file $f is finished: `ls $f`
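A lighter variant of the same idea (lsof also accepts a file path directly and exits non-zero when no process has the file open, which avoids grepping the full listing):
f=/path/to/your/file
while lsof "$f" >/dev/null 2>&1; do
echo "still copying file $f..."
sleep 5
done
echo "copying file $f is finished"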
For the duplicate question, "How to check if file has been scp 100% to the remote location", which was about an expect script: to know whether a file has been transferred completely, we can add expect 100%, i.e. something like this:
expect -c "
set timeout 1
spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
expect yes/no { send yes\r ; exp_continue }
expect password: { send $SCP_PASSWORD\r }
expect 100%
sleep 1
exit
"
if [ -f "/home/my.file" ]; then
echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.

Check the status code of an scp command, and if it failed, then call scp on another machine

Below is a snippet of my shell script in which I am executing an scp command to copy files from machineB to machineA.
for element in ${x[$key]}; do # no quotes here
printf "%s\t%s\n" "$key" "$element"
if [ $key -eq 0 ]
then
scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
fi
done
I have a very simple question which is mentioned below -
If the above scp command in my shell script gives me this error for whatever reason - No such file or directory
then I need to try doing scp from machineC, and that scp command will look like this; only the machine is different and everything else is the same -
scp david@machineC:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
So my question is: how do I check the result of the above scp command in my shell script and then decide whether I need to call scp against machineC? Is there any status I can check, so that if the copy failed for whatever reason, I can run the scp command on machineC?
Is this possible to do in shell script?
Here you go:
for element in ${x[$key]}; do # no quotes here
printf "%s\t%s\n" "$key" "$element"
if [ $key -eq 0 ]
then
scp david@machineB:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/. || scp david@machineC:/data/be_t1_snapshot/20131215/t1_$element_5.data /data01/primary/.
fi
done
Well-behaving commands exit with "success" (exit code = 0) if the operation was successful, or otherwise with an exit code != 0. You can chain commands together like this:
cmd && echo successful || echo failed
cmd && keep going || do something else
The exit code is also stored in the $? variable, so this is equivalent:
cmd; if [ $? -eq 0 ]; then echo successful; else echo failed; fi
Not only is this possible, the status code of commands is extremely important in shell scripting. Consider these two examples:
./configure && make && make install
./configure; make; make install
The first one will execute the chain of commands if all are successful. The second will execute all of them always, even if an earlier command failed.
scp returns 0 only when it succeeds, so you can write it like this:
scp machineB:/path/to/your/file .
if [ $? -ne 0 ]
then
scp machineC:/path/to/your/file .
fi
a shorter way is:
scp machineB:/path/to/your/file .
[ $? -eq 0 ] || scp machineC:/path/to/your/file .
or
scp machineB:/path/to/your/file .
[ $? -ne 0 ] && scp machineC:/path/to/your/file .
Personally I prefer the even shorter way; the scp output is of no use in a script:
scp -q machineB:/path/to/your/file . || scp -q machineC:/path/to/your/file .
and remember to use ${element} instead of $element; otherwise the shell parses t1_$element_5.data as a lookup of a variable named element_5.
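To see why that matters (illustrative):
element=20
echo "t1_$element_5.data"   # the shell looks up a variable named element_5 -> prints t1_.data
echo "t1_${element}_5.data" # -> prints t1_20_5.data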

Bash Script to allow Nagios to report ping between two other Linux machines

I'm looking for alternatives to working out the ping between two machines (mA and mB) and reporting this back to Nagios (on mC).
My current thought is to write a bash script that will ping the machines in a cron job and output the data to a file, then have another bash script that Nagios can use to read that file. This doesn't feel like the best/right way to do this, though.
Here's the script I plan to run in the cron job:
#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ] || [ -z "$4" ]
then
echo "$0: usage: $0 file? ip? pingcount? deadline?"
exit 126
else
FILE=$1
IP=$2
PCOUNT=$3
DLINE=$4
while read line
do
if [[ $line == rtt* ]]
then
#replace forward slash with underscore
line=${line////_}
#replace spaces with underscore
line=${line// /_}
#get the 8th item when splitting the string on underscore
#echo $line| cut -d'_' -f 8 >> $FILE #Append
#echo $line| cut -d'_' -f 8 > $FILE #Overwrite
echo $line| cut -d'_' -f 8
fi
done < <(ping $IP -c $PCOUNT -q -w $DLINE) #-q output summary / -w deadline / -c ping count
fi
I thought about using traceroute, but I think this would produce a slower ping. Is there another way to achieve what I want?
Note: I know Nagios can directly ping a machine, but this isn't what I want to do and won't tell me what I want. Also, this is my second script ever, so it's probably rubbish. And what alternative would I have if ICMP were blocked?
Have you looked at NRPE and check_ping? This would allow the nagios machine (mC) to ask mA to ping mB and then mA would report the results to mC. You would need to install and configure NRPE and the nagios-plugins on mA for this to work.
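A hedged sketch of that wiring (paths and thresholds are illustrative; check your distribution's plugin directory):
# On mA, in nrpe.cfg: define a command that pings mB
command[check_ping_mb]=/usr/lib/nagios/plugins/check_ping -H mB -w 100.0,20% -c 500.0,60%
# On mC (the Nagios server), query mA over NRPE:
/usr/lib/nagios/plugins/check_nrpe -H mA -c check_ping_mb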
