Rerun Rsync After Unsuccessful Connection in Cron - linux

I have a bash script in my cron that uses a passwordless (key-based SSH) rsync command to pass files from my local system to a web server.
Bash Script Code:
rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/
Over the past few days, I have noticed that the connection is sometimes randomly refused. I have correctly set up openssh-client and openssh-server and have set up passwordless SSH login successfully, so I'm not sure what is causing the connection to be refused intermittently.
Now, I am looking for some code to force the rsync command to rerun until the files are successfully passed to the web server.
Rsync Code:
RC=1
while [[ $RC -ne 0 ]]
do
rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/
RC=$?
done
Is this the best method to try and circumvent the issue?

You don't need to store the return code of rsync; just use a loop like:
#!/bin/bash
until rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/; do
sleep 5 # wait 5 seconds before retrying the command
done
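If you are worried about the loop spinning forever (for example, a cron job that never exits), a capped-retry variant is one option. This is a minimal sketch reusing the same rsync invocation, with a hypothetical MAX_TRIES limit:
#!/bin/bash
MAX_TRIES=10 # hypothetical cap; tune to taste
try=0
until rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/; do
try=$((try + 1))
if [ "$try" -ge "$MAX_TRIES" ]; then
echo "rsync still failing after $MAX_TRIES attempts, giving up" >&2
exit 1
fi
sleep 5 # wait before retrying
done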

Related

Can't get remote ssh stdout output in cron, but in my terminal it works

I ran into an issue last week that is driving me crazy. I wrote a BASH script which makes a remote ssh connection to Akamai and then performs a simple 'ls'. I want to redirect the 'ls' stdout output to a given file.
While the script itself works like a charm when run manually, it does not when it runs via cron. The cronjob runs as root and each command works as expected except the ssh command. My system is Gentoo Linux and cron is the old but gold vixie-cron.
To reduce the 200 LOC, I put the basics here, which alone (as a single script) are enough to demonstrate the problem.
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>') > '/root/listing.txt'
Even in ssh's -vvv debug mode I can see that everything works... except that I get no stdout output.
Then I tried something else that I found in another posting on the internet:
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -T -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>' </dev/zero) > '/root/listing.txt'
The drawback here is that I start an ssh session that I can't close, and I guess it's due to /dev/zero.
Another approach was to tee-pipe the sub-shell of the ssh command... this worked for a short time (and why not anymore?!).
Now I'm clueless and need help. Cron has its PATH, uses BASH, etc. Curiously, my boss did this successfully with Java (and he hates BASH...).
Any explanation and helpful tips are greatly welcome.
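One thing worth trying is to redirect stdin from /dev/null instead of /dev/zero; ssh's -n flag does exactly that, and it avoids the endless input stream that keeps the session open. A minimal sketch of the second script with that change (whether it also cures the missing stdout under cron is not guaranteed):
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
# -n redirects stdin from /dev/null, so ssh gets a clean end-of-input
# instead of the endless stream from /dev/zero.
ssh -n -T -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>' > '/root/listing.txt'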
I have the same issue. I made a script for cron that gets output from a remote SSH host.
If I run the script manually it works as it should, but when cron runs it I get just a part of the remote output.
I can't figure out why it's happening.
#!/bin/sh
pass=123
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
filestring=$(echo "$filelist" | grep -Po "(\S+\s\S+\s+\d+\s\d{2}:\d{2}:\d{2}\s\d{4})\slist0\.lst")
filedate=${filestring% list0.lst}
echo $filedate
filestamp=$(date -d "$filedate" +"%s")
echo $filestamp
When I capture the echos in a file via cron, the date is 0:00:00 and the date field (echo $filedate) is empty. But when I run it manually, I get a normal date with time...
It really bothers me.
Help?
I found the solution: add "-tt" to the ssh command and all the output goes into the variable.
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")

bash script: ssh create file; sleep 3m; rm file;

Trying to create a script that will ssh into a server, back up some files, sleep for 3 minutes, then remove the files.
While it's sleeping, the same script comes back to the local machine and rsyncs the file. Then, when the 3 minutes are up, the file is removed.
Just trying this so as not to connect twice with ssh.
ssh $site "
tar -zcf $domain-$date.tar.gz $path;
{ sleep 3m && rm -f $domain-$date.tar.gz };
"
rsync -az $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date;
I tried command grouping with (), to create a sub-command, but I think the variables would not be read. Not sure.
Your ssh command will sleep for 3 minutes and remove the files, then your script proceeds to try to rsync the files that got removed. There is no easy workaround for having your first ssh command sleep while your own script proceeds to run rsync.
Do either:
ssh into the server twice: after rsync completes, ssh into the server again and remove the files.
Tell rsync to remove the files after it has synced them: add the --remove-source-files option to rsync, as in the sketch below.
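A minimal sketch of the second option, reusing the question's $site, $domain, $date, and $path variables:
#!/bin/bash
# Create the archive remotely; no sleep/rm dance needed on the remote side.
ssh "$site" "tar -zcf $domain-$date.tar.gz $path"
# Pull the archive down; --remove-source-files makes rsync delete the
# remote copy once the transfer has completed successfully.
rsync -az --remove-source-files "$site:$domain-$date.tar.gz" ~/WebSites/"$domain"/BackUp/"$date"/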

Wifi disconnected before init.d script is run

I've set up a simple init.d script, "S3logrotate", to run on shutdown. The "S3logrotate" script works fine when run manually from the command line, but it does not function correctly on shutdown.
The script uploads logs from my PC to an Amazon S3 bucket and requires wifi to run correctly.
Debugging proved that the script is actually run but the upload process fails.
I found that the problem seems to be that the script runs after wifi is terminated.
These are the blocks I used to test my internet connection in the script.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
echo "IPv4 is up" >> x.txt
else
echo "IPv4 is down" >> x.txt
fi
if ping -q -c 1 -W 1 google.com >/dev/null; then
echo "The network is up" >> x.txt
else
echo "The network is down" >> x.txt
fi
The output for this block is:
IPv4 is down
The network is down
Is there any way to set the priority of an init.d script? That is, can I make my script run before the network connection is terminated? If not, is there any alternative to init.d?
I use Ubuntu 16.04 and have dual booted with Windows 10 if that's significant.
Thanks,
sganesan7
You should place your script in:
/etc/NetworkManager/dispatcher.d/pre-down.d
and change its group and owner to root:
chown root:root S3logrotate
and it should work. If you need to do this for a specific interface, instead create a script inside
/etc/NetworkManager/dispatcher.d/
and name it (for example):
wlan0-down
and that should work too.
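As a minimal sketch of such a dispatcher script (the upload helper path is hypothetical; NetworkManager runs these scripts as root, passing the interface name as $1 and the action as $2):
#!/bin/bash
# Hypothetical /etc/NetworkManager/dispatcher.d/pre-down.d/S3logrotate
IFACE="$1"  # interface going down
ACTION="$2" # "pre-down" for scripts under pre-down.d
# The connection is still up at this point, so the upload can run here.
/usr/local/bin/s3-log-upload.sh >> /var/log/S3logrotate.log 2>&1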

commands in bash script don't work properly

I have this script :
#!/bin/bash
./process-list $1
det=$?
echo $det
if [ $det -eq 1 ]
then
echo "!!!"
ssh -n -f 192.0.2.1 "/usr/local/bin/sshfs -r 192.0.2.2:/home/sth/rootcheck_redhat /home/ossl7/r"
rk=$(ssh -n -f 192.0.2.1 'cd /home/s/r/rootcheck-2.4; ./ossec-rootcheck >&2; echo $?' 2>res)
if [ $rk -eq 0 ]
then
echo "not!"
fi
fi
exit;
I ssh to system 192.0.2.1 and run the sshfs command on it. Actually, I want to mount a directory of system 192.0.2.2 on system 192.0.2.1 and then run a program (which is located in that directory) on system 192.0.2.1. All these ssh and sshfs commands work properly when I run them manually, and the output of ossec-rootcheck is written to the file res. But when I run this script, the mount is done but no output is written to res. I guess ossec-rootcheck runs, but I don't know why the output isn't written!
This script used to work properly; I don't know what happened suddenly!
As far as I understand the program, the remote machine redirects stdout to stderr (>&2), but how do you get that back to the local machine where ssh is being evaluated?
Because of where the closing ' sits on the rk= line, the 2>res happens locally (and there is no error from ssh itself; the remote error, if any, is lost when ssh completes successfully). You could try >res instead; it will capture whatever ssh prints out, unfortunately including non-errors.
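One way to sidestep the redirection juggling entirely is to rely on the fact that ssh exits with the remote command's exit status. A minimal sketch of the relevant lines under that approach:
# Remote stdout and stderr both land in res via ssh's own streams;
# ssh's exit status is the exit status of the remote command.
ssh -n 192.0.2.1 'cd /home/s/r/rootcheck-2.4 && ./ossec-rootcheck' >res 2>&1
rk=$?
if [ "$rk" -eq 0 ]; then
echo "not!"
fi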

Linux FTP put success output

I have a bash script that creates backups incrementally (daily) and full (on Mondays). Every 7 days the script combines the week of backups (full and incremental) and sends them off to an FTP server. The problem I am having is that I want to delete the files from my backup directory after the FTP upload is finished, but I can't do that until I know the file was successfully uploaded. I need to figure out how to capture the '226 Transfer complete' message so I can use it in an 'if' statement to delete the backup files. Any help is greatly appreciated. Also, here is the FTP portion of the script:
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST << EOF
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
I could use whatever means I needed, I suppose; FTP was something already set up for another backup function, thanks.
2nd EDIT: Ahmed, the rsync works great in a test from the command line; it's a lot faster than FTP. The server is on the local network so SSH is not that big a deal, but it's nice to have for the added security. I will finish implementing it in my script tomorrow, thanks again.
FTP OPTION
The simple solution would be to do something like this:
ftp -inv $HOST >ftp-results-$YEAR-$MONTH-$DAY.out 2>&1 <<-EOF
user $USER $PASS
cd /baks
bin
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
Also note the change to the here-document syntax: I added a - after << because your here-document body and its delimiter (EOF) are indented inside the if / fi block, and <<- strips leading tab characters so the indented delimiter is still recognized (it strips tabs only, not spaces).
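A tiny self-contained illustration of the <<- behavior; the one catch is that the indentation inside the block must be real tab characters:
if true; then
	cat <<-EOF
	these tab-indented lines print flush-left
	EOF
fi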
Now when you do this, you can parse the output file to look for a successful put of the file. For example:
if grep -qi '226 transfer complete' ftp-results-$YEAR-$MONTH-$DAY.out; then # -qi: servers typically capitalize '226 Transfer complete'
echo "It seems that FTP transfer completed fine, we can schedule a delete"
echo "rm -f $PWD/weekending-$YEAR-$MONTH-$DAY.tar.gz" >> scheduled_cleanup.sh
fi
and just run scheduled_cleanup.sh using cron at a given time; this way you will have some margin before the files are cleaned up
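For example, a hypothetical crontab entry (paths assumed) that runs the cleanup at 03:30 and then truncates the list so the same deletions are not repeated:
30 3 * * * /bin/bash /root/scheduled_cleanup.sh && : > /root/scheduled_cleanup.sh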
If your remote FTP server has good SITE or PROXY options you may be able to get the remote FTP to run a checksum on the uploaded file after successful upload and return the result.
SCP / RSYNC OPTION
Using FTP is clunky and dangerous; you should really try and see if you can get scp or ssh access to the remote system.
If you can, generate an ssh key (if you don't have one) using ssh-keygen:
ssh-keygen -N "" -t rsa -f ftp-rsa
Append the contents of ftp-rsa.pub to ~/.ssh/authorized_keys for $USER on $HOST.
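If ssh-copy-id is available, it can do that step for you; you will be prompted for the password one last time:
ssh-copy-id -i ftp-rsa.pub $USER@$HOST
You then have a much nicer method for uploading files: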
if scp -B -C weekending-$YEAR-$MONTH-$DAY.tar.gz $USER@$HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
Or better yet using rsync:
if rsync --progress -a weekending-$YEAR-$MONTH-$DAY.tar.gz $HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
et voilà, you are done; since rsync works over ssh, you are happy and secure.
Try the following:
#!/bin/bash
runifok() { echo "will run this when the transfer is OK"; }
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST <<EOF | grep -qi '226 transfer complete' && runifok
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
Test it, and when it runs OK, replace the echo in the runifok function with the commands you want to execute after a successful upload.
