create an array from an original array based on where the code is running? - linux

I have three machines (each in a different datacenter) in a machines array.
If my shell script is running in the abc datacenter then I want to scp files from machineA.abc.host.com, which will be my local box. I will pick the other two boxes as remote servers to copy files from in case the local box is down.
If my shell script is running in the def datacenter then I want to scp files from machineB.def.host.com, which will be my local box. I will pick the other two boxes as remote servers to copy files from in case the local box is down.
If my shell script is running in the pqr datacenter then I want to scp files from machineC.pqr.host.com, which will be my local box. I will pick the other two boxes as remote servers to copy files from in case the local box is down.
Below is my script, but I believe it can be done in a much better way instead of using three different variables and then having three scp statements separated by ||:
machines=(machineA.abc.host.com machineB.def.host.com machineC.pqr.host.com)

case $(hostname -f) in
  *abc.host.com)
    local_server="${machines[0]}"
    primary_remote="${machines[1]}"
    secondary_remote="${machines[2]}"
    ;;
  *def.host.com)
    local_server="${machines[1]}"
    primary_remote="${machines[2]}"
    secondary_remote="${machines[0]}"
    ;;
  *pqr.host.com)
    local_server="${machines[2]}"
    primary_remote="${machines[0]}"
    secondary_remote="${machines[1]}"
    ;;
  *) echo "unknown host: $(hostname -f), exiting." >&2 && exit 1 ;;
esac
export local="$local_server"
export remote1="$primary_remote"
export remote2="$secondary_remote"
copyFiles() {
  el=$1
  primsec=$2
  # can we just iterate with a for loop instead of writing three scp statements?
  (scp -C -o StrictHostKeyChecking=no goldy@"$local":/proc/data/abc_187_"$el"_111_8.data "$primsec"/.) ||
  (scp -C -o StrictHostKeyChecking=no goldy@"$remote1":/proc/data/abc_187_"$el"_111_8.data "$primsec"/.) ||
  (scp -C -o StrictHostKeyChecking=no goldy@"$remote2":/proc/data/abc_187_"$el"_111_8.data "$primsec"/.)
}
export -f copyFiles
# using GNU Parallel here to call the above function in parallel
Now as you can see I have three scp statements: one for the local box, and one each for remote1 and remote2. What I am thinking is that maybe we can get rid of these three scp statements, store the hostnames in an array (in a particular order: the first index can be the local box and the other two can be the remotes), and then iterate over that array with a for loop and write just one scp statement:
for p in "${machines[@]}"; do scp -C -o StrictHostKeyChecking=no goldy@"$p":/proc/data/abc_187_"$el"_111_8.data "$primsec"/. && break; done > /dev/null 2>&1
If this is possible, then how can I reshuffle the machines array accordingly, or maybe create a different array with the right machine at each index?
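For what it's worth, the reshuffling itself can be done by rotating the array with modular arithmetic; a minimal sketch, assuming the index of the local box (the hypothetical start variable below) has already been determined from the hostname:
machines=(machineA.abc.host.com machineB.def.host.com machineC.pqr.host.com)
start=1   # hypothetical: index of the local box, e.g. derived from hostname -f
n=${#machines[@]}
hosts=()
for ((i = 0; i < n; i++)); do
  hosts+=( "${machines[(start + i) % n]}" )   # local box first, the rest in ring order
done
# with start=1: hosts=(machineB.def.host.com machineC.pqr.host.com machineA.abc.host.com)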
Update:
Somehow my for loop inside that function is not running at all:
copyFiles() {
  local el=$1
  local primsec=$2
  local remote_file="/proc/data/abc_187_${el}_111_8.data"
  for host in "${hosts[@]}"; do
    echo "$host"
    echo "scp -C -o StrictHostKeyChecking=no goldy@$host:$remote_file $primsec/." && break
  done
}
export hosts
export -f copyFiles
parallel -j 5 copyFiles {} "$proc" ::: "${pro[@]}" &
parallel -j 5 copyFiles {} "$data" ::: "${seco[@]}" &
wait
echo "everything copied"

How about this: it uses
an associative array to hold the "local" machine names
an array to hold the sequence of hosts for scp
a for loop to iterate over the possible hosts, and break after the first successful scp
#!/bin/bash

declare -A machines=(
  [abc]=machineA.abc.host.com
  [def]=machineB.def.host.com
  [pqr]=machineC.pqr.host.com
)

IFS=. read -r -a host_parts < <(hostname -f)
case "${host_parts[1]}" in
  abc) hosts=( "${machines[abc]}" "${machines[def]}" "${machines[pqr]}" ) ;;
  def) hosts=( "${machines[def]}" "${machines[pqr]}" "${machines[abc]}" ) ;;
  pqr) hosts=( "${machines[pqr]}" "${machines[abc]}" "${machines[def]}" ) ;;
  *) echo "unknown host: $(hostname -f), exiting." >&2; exit 1 ;;
esac
copyFiles() {
  local el=$1
  local primsec=$2
  local remote_file="/proc/data/abc_187_${el}_111_8.data"
  for host in "${hosts[@]}"; do
    scp -C -o StrictHostKeyChecking=no "goldy@$host:$remote_file" "$primsec"/. && break
  done
}
export hosts
export -f copyFiles
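One caveat: bash cannot export arrays to child processes, so export hosts will not make the array visible inside copyFiles when GNU Parallel runs it in a fresh shell; this is very likely why the for loop in the update above never iterates. A minimal workaround sketch, serializing the array into a plain string (safe here because the hostnames contain no spaces):
hosts_str="${hosts[*]}"              # space-separated serialization of the array
export hosts_str

copyFiles() {
  local el=$1
  local primsec=$2
  local remote_file="/proc/data/abc_187_${el}_111_8.data"
  local -a hosts
  read -r -a hosts <<< "$hosts_str"  # rebuild the array inside the child shell
  for host in "${hosts[@]}"; do
    scp -C -o StrictHostKeyChecking=no "goldy@$host:$remote_file" "$primsec"/. && break
  done
}
export -f copyFiles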

Related

Ping multiple IPs without exiting from SSH using shell script

I have a list of IPs which have to be pinged from each other. Once I SSH to IP-1, I should ping all IPs in a loop before I come out of the loop.
I have tried the below:
for name in "${ip[@]}"; do
  status=$(ssh -n -o LogLevel=QUIET -t -t -o StrictHostKeyChecking=no ubuntu@$node "ping -W 2 -q -c 5 $name")
  if [ "$?" -eq "2" ]; then
    echo -e "$(tput setab 7) $(tput setaf 1)$(date) $node unable to ping $name $(tput sgr0)"
  fi
done
This code is working. However, it requires a new SSH connection every time, which has a performance impact as I have more than 100 IPs in the list.
Can I get any help on this?
You could just make this list part of the command that you run on your target host, something like this:
ips=( "10.0.0.1" "10.0.0.2")
ssh serverName 'for i in '${ips[@]}'; do ping ${i} -c1; done'
Note the breaking single-quote to pass the array.
Edit:
Just to have it mentioned here: the tool "fping" is quite right for the job. It would give you just the list you asked for:
ips=( "10.0.0.1" "10.0.0.2")
ssh serverName 'fping -a '${ips[@]}' 2>/dev/null'
Cupcake is right about the problems that can arise when passing the list as suggested if entries contain whitespace. In this special case, however, no whitespace is to be expected.
This should give you the list of IPs without fping:
ips=( "10.0.0.1" "10.0.0.2")
ssh serverName 'for host in '${ips[@]}'; do if ping -c1 -w1 ${host} >/dev/null 2>&1; then echo ${host}; fi; done'
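If the entries could ever contain whitespace or shell metacharacters, a hedged safer variant is to quote each element with printf %q before splicing it into the remote command:
ips=( "10.0.0.1" "10.0.0.2" )
# %q quotes each element so the remote shell sees exactly one word per entry
ssh serverName "for i in $(printf '%q ' "${ips[@]}"); do ping -c1 \$i; done"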

how to pick the first machine randomly out of three in shell?

I have three remote machines (machineA, machineB, machineC) from where I can copy files. If for whatever reason I can't copy from machineA, then I should copy from machineB and if for whatever reason I can't copy from machineB then start copying from machineC.
Below is the single shell command I have and I need to run it on many machines but then it means on all those machines, it will copy from machineA only.
(ssh goldy@machineA 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineA:{} /data/files/') || (ssh goldy@machineB 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineB:{} /data/files/') || (ssh goldy@machineC 'ls -1 /process/snap/20180418/*' | parallel -j5 'scp goldy@machineC:{} /data/files/')
Now, is there any way to pick the first machine randomly (out of those three) instead of always keeping machineA first, and keep the other two as backups in case the first machine is down? Is this possible to do?
Update:
I have something like this:
machines=(machineA machineB machineC)
for machine in $(shuf -e "${machines[@]}"); do
  ssh -o StrictHostKeyChecking=no david@$machine 'ls -1 /process/snap/{{ folder }}/*' | parallel -j{{ threads }} "scp -o StrictHostKeyChecking=no david@$machine:{} /data/files/"
  [ $? -eq 0 ] && break
done
How about keeping the machine names in a file and using shuf to shuffle them? Then you could create a script like this:
while read machine; do
  ssh goldy@$machine 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@$machine:{} /data/files/"
  if [ $? == 0 ]; then
    break
  fi
done
And the machine file like this:
machineA
machineB
machineC
And call the script like this:
shuf machines | ./script.sh
Here's a test version that doesn't do anything but shows how the logic works:
while read machine; do
  echo ssh goldy@$machine 'ls -1 /process/snap/20180418/*'
  echo parallel -j5 "scp goldy@$machine:{} /data/files/"
  executenonexistingcommand
  if [ $? == 0 ]; then
    break
  fi
done
Addressing your comment to use arrays instead and put everything on a single line:
shuf -e "${machines[@]}" shuffles an array. To read it back into the array, you need to feed the output into readarray, as sketched after the one-liner below. Turning scripts into a single line is just a matter of putting semicolons where we had newlines before.
machines=( machineA machineB machineC ); for machine in $(shuf -e "${machines[@]}"); do ssh goldy@$machine 'ls -1 /process/snap/20180418/*' | parallel -j5 "scp goldy@${machine}:{} /data/files/"; if [ $? == 0 ]; then break; fi; done
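For completeness, a short sketch of the readarray route mentioned above, which puts the shuffled order back into the array itself:
machines=( machineA machineB machineC )
# shuf -e prints one element per line; readarray -t reads them back, stripping newlines
readarray -t machines < <(shuf -e "${machines[@]}")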
Here is a little example of how you might do it - it is largely comments, to show what I am thinking, but you can remove them to make it concise.
#!/bin/bash

# Machine names, number of machines, random starting index
machines=("machineA" "machineB" "machineC")
num=${#machines[@]}
idx=$((RANDOM%num))

# Make one try per machine, break on success
for ((try=0; try<num; try++)); do
  this=${machines[$idx]}
  echo $this
  ((idx=(idx+1)%num))
done
So, you would put your command where I have echo $this, and follow it with:
[ $? -eq 0 ] && break
Sample Output
./go
machineB
machineC
machineA
If you have shuf you can do the same thing more succinctly like this:
#!/bin/bash

# Machine names, in random order
machines=("machineA" "machineB" "machineC")
machines=( $(shuf -e "${machines[@]}") )

# Make one try per machine, break on success
for i in "${machines[@]}"; do
  echo $i
  ... your command
  [ $? -eq 0 ] && break
done

bash - wget -N if else value check

I'm working on a bash script that pulls a file from an FTP site only if the remote timestamp differs from the local one. After it pulls the file, it copies the file over to three other computers via samba (smbclient).
Everything works, but the file is copied even when wget -N ftp://insertsitehere.com reports that the remote file was not newer. What would be the best way to check the output of the command so that the copy only happens when a new version was pulled from FTP?
Ideally, I'd like the copy to the other computers to preserve the timestamp, just like the wget -N command does.
Here is an example of what I have:
#!/bin/bash
OUTDIR=/cats/dogs
cd $OUTDIR
wget -N ftp://user:password@sitegoeshere.com/filename
if [ $? -eq 0 ]; then
  HOSTS="server1 server2 server3"
  for i in $HOSTS; do
    echo "Uploading to $i..."
    smbclient -A /root/.smbclient.authfile //$i/path -c "lcd /cats/dogs; put filename.txt"
    if [ $? -eq 0 ]; then
      echo "Upload to $i successful..."
    else
      echo "There was an issue uploading to host $i..."
    fi
  done
else
  echo "There was an issue with the FTP Download...."
  exit 1
fi
The return value of wget is non-zero only if there is an error. If -N is in use and the remote file is not newer than the local file, wget will still return 0, so you cannot use the exit status to check whether the file was updated.
You could check the file's mtime or its content to see if it changed. For example, you could use something like:
md5_old=$( md5sum filename.txt 2>/dev/null )
wget -N ftp://user:password@sitegoeshere.com/filename.txt
md5_new=$( md5sum filename.txt )
if [ "$md5_old" != "$md5_new" ]; then
  : # Copy filename.txt to the SMB servers here
fi
Regarding smbclient, unfortunately there is no way to preserve timestamps in either the get or put commands. If you need that, you must use a different tool (scp -p, rsync -t, ...).
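For example, a one-line sketch with rsync (host and destination path are placeholders) that keeps the modification time while copying:
# -t preserves the file's modification time on the destination
rsync -t /cats/dogs/filename.txt user@server1:/destination/path/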
touch -r foo.txt foo.old
wget -N example.com/foo.txt
if [ foo.txt -nt foo.old ]; then
  echo 'Uploading to server1...'
fi
"Save" the current timestamp into a new empty file
Use wget --timestamping to only download the file if it is newer
If the file is newer than the "save" file, do stuff
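Putting the pieces together, a hedged end-to-end sketch combining the touch -r marker with the asker's smbclient loop (file and host names are illustrative, taken from the question):
#!/bin/bash
OUTDIR=/cats/dogs
cd "$OUTDIR" || exit 1
# snapshot the current mtime (a no-op on the first run, when the file does not exist yet)
touch -r filename.txt filename.old 2>/dev/null
wget -N ftp://user:password@sitegoeshere.com/filename.txt || exit 1
# -nt is also true when filename.old does not exist, so the first run still uploads
if [ filename.txt -nt filename.old ]; then
  for i in server1 server2 server3; do
    echo "Uploading to $i..."
    smbclient -A /root/.smbclient.authfile //$i/path -c "lcd /cats/dogs; put filename.txt"
  done
fi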

scp: how to find out that copying was finished

I'm using the scp command to copy a file from one Linux host to another.
I run scp on host1 to copy a file from host1 to host2. The file is quite big and it takes some time to copy.
On host2 the file appears immediately, as soon as copying starts. I can do everything with this file even while copying is still in progress.
Is there any reliable way to find out on host2 whether copying has finished?
Off the top of my head, you could do something like:
touch tinyfile
scp bigfile tinyfile user@host:
Then when tinyfile appears you know that the transfer of bigfile is complete.
As pointed out in the comments, this assumes that scp will copy the files one by one, in the order specified. If you don't trust it, you could do them one by one explicitly:
scp bigfile user@host:
scp tinyfile user@host:
The disadvantage of this approach is that you would potentially have to authenticate twice. If this were an issue you could use something like ssh-agent.
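Another option, sketched here since details depend on your OpenSSH setup, is connection multiplexing: the first scp opens a master connection that the second one reuses, so only one authentication happens (the ControlPath location below is illustrative):
# first copy opens a master connection and keeps it alive for 60 s after exiting
scp -o ControlMaster=auto -o ControlPath=~/.ssh/cm-%r@%h:%p -o ControlPersist=60 bigfile user@host:
# second copy rides the existing master connection, with no second authentication
scp -o ControlPath=~/.ssh/cm-%r@%h:%p tinyfile user@host: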
On the sending side (host1), use a script like this:
#!/bin/bash
echo 'starting transfer'
scp FILE USER@DST_SERVER:DST_PATH
OUT=$?
if [ $OUT = 0 ]; then
  echo 'transfer successful'
  touch successful
  scp successful USER@DST_SERVER:DST_PATH
else
  echo 'transfer failed'
fi
On the receiving side (host2), make a script like this:
#!/bin/bash
SLEEP_TIME=30
MAX_CNT=10
CNT=0
while [[ ! -e successful && $CNT -lt $MAX_CNT ]]; do
  ((CNT++))
  sleep "$SLEEP_TIME"
done
if [[ -e successful ]]; then
  echo 'successful'
  rm successful
  # do something with FILE
fi
With CNT and MAX_CNT you avoid an endless loop (in case the file successful is never transferred).
The product of MAX_CNT and SLEEP_TIME should be equal to or greater than the expected transfer time. In my example the expected transfer time is less than 300 seconds.
A checksum (md5sum, sha256sum, sha512sum) of the local and remote files would tell you if they're identical.
For situations where you don't have SSH access to the remote system - like an FTP server - you can download the file after it's uploaded and compare the checksums. I do this for files I send from production scripts at work. Below is a snippet from the script in which I do this.
MD5SRC=$(md5sum $LOCALFILE | cut -c 1-32)
MD5TESTFILE=$(mktemp -p /ramdisk)
curl \
  -o $MD5TESTFILE \
  -sS \
  -u $FTPUSER:$FTPPASS \
  ftp://$FTPHOST/$REMOTEFILE
MD5DST=$(md5sum $MD5TESTFILE | cut -c 1-32)
if [ "$MD5SRC" == "$MD5DST" ]; then
  echo "+Local and Remote files match!"
else
  echo "-Local and Remote files don't match"
fi
If you use inotify-tools, then the solution will look like this:
while ! inotifywait -e close $(dirname ${bigfile_fullname}) 2>/dev/null | \
grep -Eo "CLOSE $(basename ${bigfile_fullname})$">/dev/null
do true
done
echo "File ${bigfile_fullname} closed"
After some investigation, and discussion of the problem on other forums, I have found one more solution. Maybe it can help somebody.
There is a command lsof. It lists open files. During copying, the file will be open, so the command
lsof | grep filename
will return a non-empty result.
So you might want to run a while loop that waits until lsof returns nothing, then proceed with your task.
Example:
# provide your file name here
f=<nameOfYourFile>
lsofresult=$(lsof | grep "$f" | wc -l)
while [ $lsofresult != 0 ]; do
  echo "still copying file $f..."
  sleep 5
  lsofresult=$(lsof | grep "$f" | wc -l)
done
echo "copying file $f is finished: $(ls "$f")"
For the duplicate question How to check if file has been scp 100% to the remote location, which was about an expect script: to know whether a file has transferred completely, we can add expect 100%, i.e. something like this:
expect -c "
  set timeout 1
  spawn scp user@$REMOTE_IP:/tmp/my.file user@$HOST_IP:/home/.
  expect yes/no { send yes\r ; exp_continue }
  expect password: { send $SCP_PASSWORD\r }
  expect 100%
  sleep 1
  exit
"
if [ -f "/home/my.file" ]; then
  echo "Success"
fi
If avoiding a second SSH handshake is important, you can use something like the following:
ssh host cat \> bigfile \&\& touch complete < bigfile
Then wait for the "complete" file to get created on the remote end.
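A minimal sketch of that wait on the receiving end (marker name as in the command above):
# on the remote host: block until the marker file shows up
until [ -e complete ]; do
  sleep 5
done
echo "bigfile transfer finished"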

Bash: Based on user input run all commands in a function on local OR on remote machine

I have a bash function which takes an array as an argument and executes multiple commands.
Based on user input, I want to run all the commands in this function either locally or on a remote machine. The commands contain many quotes, so echoing them with escaped double quotes would get ugly.
This is how I am invoking the function right now:
run_tool_commands "${ARGS[@]}"
function run_tool_commands {
  ARGS=("$@")
  # .. Loads of commands here
}

if [ case 1 ]; then   # pseudocode: decide based on user input
  # run locally
else
  # run remotely
fi
This seems helpful, but it is only possible if I have the function text piped in as a here document.
If
all the commands that are to be executed under run_tool_commands are present on the remote system as well,
all commands are executables, not aliases or functions,
all these executables are in default paths (no need to source .bashrc or any other file on the remote),
then perhaps this code may work (not tested):
{ declare -f run_tool_commands; echo run_tool_commands "${ARGS[@]}"; } | ssh -t user@host
OR
{ declare -f run_tool_commands;
  echo -n run_tool_commands;
  for arg in "${ARGS[@]}"; do
    echo -ne " \"$arg\"";
  done; } | ssh -t user@host
Using a for loop to preserve quotes around the arguments (may or may not be required; not tested).
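If the arguments can contain arbitrary characters, a more robust hedged variant is to let printf %q do the quoting instead of hand-rolling it; -t is dropped here because stdin is a pipe, which prevents pseudo-terminal allocation:
{ declare -f run_tool_commands
  printf 'run_tool_commands '
  printf '%q ' "${ARGS[@]}"   # %q quotes each argument so the remote shell sees it as one word
  printf '\n'
} | ssh user@host bash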
