How do I run 1 instance of rsync / rclone script using flock as a cron job? - linux

I'm trying to run only one instance of my backup script as a cron job.
I know I can do it with a check that sees whether the process is running:
if pgrep -x rclone >/dev/null
then
    # rclone is still running; the backup is not done yet, so exit.
    exit
else
    # rclone is not running; start the backup.
    /path/to/rclone/script.sh
fi
But after some research, I found out that flock should be used instead of checking for the process ID. So in crontab -e I added the following, which in this case runs the script every 30 minutes:
*/30 * * * * /usr/bin/flock -n /var/lock/myjob.lock /path/to/rclone/script.sh
Running the command above requires sudo, so the script asks for a sudo password and never runs. (I ran the command above manually; that's how I found out.)
How exactly do I use flock? Do I put a variable in my script that injects the sudo password when flock asks for it? (I know that's not secure, so there must be a different way to do this.)
I tried to research this subject but couldn't find any good answers that explain how to use it properly.
Thank you.
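For what it's worth, flock itself doesn't need root; the sudo prompt most likely comes from /var/lock not being writable by your user. A minimal sketch, assuming a per-user crontab and a lock path your user can write to (the lock file name is illustrative):
*/30 * * * * /usr/bin/flock -n /tmp/rclone-backup.lock /path/to/rclone/script.sh
Alternatively, the script can take the lock itself, so every caller is serialized no matter how the script is started:
#!/bin/bash
# Take an exclusive lock on fd 9; if another instance holds it, exit quietly.
exec 9>/tmp/rclone-backup.lock
flock -n 9 || exit 0
# ... rclone commands go here ...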

Related

How to automate rsync in linux?

I want to run rsync (on a remote Linux system) automatically every minute, so that whatever changes are made on one system (in test.txt, as mentioned below) are reflected on the other system within the same minute.
For this purpose, I edited sudo crontab -e and added
*/1 * * * * /home/john/rsync.sh
rsync.sh contains two commands:
sudo rsync -av /home/john1/test.txt remote@remote_ip:
sudo rsync -av --update /home/john1/test.txt remote@remote_ip:
When I run rsync.sh manually, all the changes are applied successfully.
If you added this in the root crontab, you don't need to start the rsync commands with sudo.
Things that run from crontab will probably not have the same environment variables. Add the absolute path to rsync if you're unsure, for example /usr/bin/rsync. Also check the other environment variables, for example by running set.
When you run the script manually, you're already in a specific shell that is probably able to run it. But when cron runs it, it doesn't know which interpreter to use. Always start your scripts with #!/usr/bin/bash (or whatever your favorite shell is). And/or call your cron job specifying which shell to use, for example:
*/1 * * * * /bin/bash /home/john/rsync.sh
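Putting these points together, rsync.sh might look like the sketch below (paths and host reused from the question; sudo dropped because the root crontab already runs as root):
#!/bin/bash
# Absolute path to rsync, since cron's PATH is minimal.
/usr/bin/rsync -av /home/john1/test.txt remote@remote_ip:
/usr/bin/rsync -av --update /home/john1/test.txt remote@remote_ip: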
I hope this helps. ;)

Cronfile did not execute sudo -u line?

I have made the following cronjob sh file :
vi RestartServices.sh
/etc/init.d/b1s stop
sleep 10
/etc/init.d/sapb1servertools stop
sleep 10
sudo -u ndbadm /usr/sap/NDB/HDB00/HDB stop
sleep 20
sudo -u ndbadm /usr/sap/NDB/HDB00/HDB start
sleep 10
/etc/init.d/sapb1servertools start
sleep 10
/etc/init.d/b1s start
When I run this file manually the job runs correctly.
When scheduled in crontab (root user)
Crontab content:
# srvmagtCron: restarts daemons that died
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/sh -c "[ -x /etc/srvmagt/srvmagtCron ] && /etc/srvmagt/srvmagtCron"
0 2 * * * /hanamnt/shared/NDB/HDB00/backup/scripts/VGRbackup.sh
#RESTARTS SERVICE LAYER , SAPB1ServerTools service , HDB
0 3 * * * /hanamnt/shared/NDB/HDB00/backup/scripts/RestartServices.sh
It does get started at the requested time, but I think it fails to execute the sudo lines, as the HDB service is not restarted.
I'm trying to find out why.
Is it because sudo cannot be executed in a cronjob?
(service needs to start using user ndbadm)
path:
/opt/sap/sapjvm_6//bin:/opt/fujitsu/bwai/bin:/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
You have a non-standard $PATH, and crond(8) runs your crontab(5) entries with a shorter $PATH. See also environ(7), credentials(7), and execvp(3) with execve(2).
My recommendation would be to write a complete shell script and put only that in the crontab. So don't use sh -c in crontab entries, and set PATH explicitly (preferably in the shell script your crontab entry fires, or else in your crontab file).
You could for example have
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /hanamnt/shared/srvmagt.sh
in your crontab, and have an executable /hanamnt/shared/srvmagt.sh file starting with
#!/bin/bash
export PATH=/opt/sap/sapjvm_6//bin:/opt/fujitsu/bwai/bin:/sbin:\
/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:\
/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:\
/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
# log a starting message
logger start of $0
Notice the use of logger(1); use it to get appropriate log messages under /var/log.
BTW, your PATH is ridiculously long. Such a long PATH is messy (and might slow down your shells) and could be a security risk; my recommendation would be a much shorter one (perhaps as short as $HOME/bin:/usr/local/bin:/bin:/usr/bin), with appropriate symlinks or scripts in e.g. $HOME/bin/ or /usr/local/bin/ that use explicit program paths.
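For example, rather than keeping /opt/sap/sapjvm_6/bin on PATH, a symlink in a short, trusted directory achieves the same thing (java here is just an illustrative binary from that directory):
mkdir -p $HOME/bin
ln -s /opt/sap/sapjvm_6/bin/java $HOME/bin/java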
Notice that sudo can be used in a crontab job (but that is often unwise), and it should then probably be configured in /etc/sudoers; perhaps you should prefer /bin/su (see su(1)) in a shell script.
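For instance, since the cron job already runs as root, the sudo -u lines in the script could become su calls (a sketch reusing the question's commands):
su - ndbadm -c "/usr/sap/NDB/HDB00/HDB stop"
sleep 20
su - ndbadm -c "/usr/sap/NDB/HDB00/HDB start"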
Read also about setuid. Sometimes it is wiser to write a small setuid wrapper program in C (using setreuid(2)), but be careful: you could open huge security holes by mistake.
Read also Advanced Linux Programming (freely downloadable, a bit old) and then syscalls(2) to understand better how Linux works internally. You need a clearer picture of your system in your head.

Getting Crontab to work on Nitrous

I am quite new to Nitrous and to programming in general. However, I wanted to see why my crontab job isn't doing anything on Nitrous.io.
I am using a virtualenv, and my understanding is that you can run one from crontab. This is my cron line:
10 6,19 * * * /home/action/susteq/bin/activate /home/action/susteq/start.py 2>&1 >> /home/action/susteq/log/start.log
Crontab should work on Nitrous.IO as long as you are actively logged into the box (or using tmux) and the box doesn't shut down from inactivity. Paid boxes will stay running indefinitely.
Looking at this command, you may want to ensure it runs as expected outside of crontab. Try running the process first:
$ /home/action/susteq/bin/activate /home/action/susteq/start.py 2>&1 >> /home/action/susteq/log/start.log
If not, then you may want to try placing 2>&1 at the end of the line (as explained in this redirection tutorial). The following command may be what you are looking for:
$ /home/action/susteq/bin/activate /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1
If that works, try adding it to your crontab:
10 6,19 * * * /home/action/susteq/bin/activate /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1
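Note also that bin/activate is meant to be sourced, not executed as a command, so it may do nothing here. Calling the virtualenv's interpreter directly sidesteps that issue (a sketch, assuming the virtualenv lives at /home/action/susteq and start.py is a Python script):
10 6,19 * * * /home/action/susteq/bin/python /home/action/susteq/start.py >> /home/action/susteq/log/start.log 2>&1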
Update: To run cron on Nitrous Pro you need to enable privileged mode on your container, which requires that you enable advanced container management. More details can be found here:
https://community.nitrous.io/docs/running-cron-on-nitrous

Simple bash script runs asynchronously when run as a cron job

I have a backup script written that will do the following in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database via SSH rsync to the backup server
Now when I run this script from the command line in RHEL, it works perfectly.
BUT when I set this script to run via a cron job, the script does run, but from what I can tell it somehow runs the above three commands simultaneously. Because of that, things get done out of order (my local database finishes dumping and is transferred before the #1 zip job actually completes).
Has anyone run across such a strange scenario? As the simplest fix, is there a way to force a script to run synchronously? Maybe some kind of command that waits for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first, so while the SSH command has been issued, it has not completed before my second line triggers and an SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
mysqldump --opt -Q $THEDBNAME > mySampleDb
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly using backgrounding (&), everything should run one by one, each command waiting until the prior one finishes.
Perhaps you are actually seeing overlapping executions started by earlier cron runs? If so, you can prevent multiple simultaneous executions by calling your script with flock.
For example, change the midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
If you want to run commands in sequential order, you can use the ; operator.
; – semicolon operator
This operator runs multiple commands in one go, in sequential order. Given three commands separated by semicolons, the second command runs after the first completes, and the third runs only after the second completes. Note that the second command runs regardless of the first command's exit status.
Execute the ls, pwd, and whoami commands in one line, sequentially one after the other:
ls;pwd;whoami
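By contrast, if each step should run only when the previous one succeeded, use the && operator, which chains on exit status:
ls && pwd && whoami
Here pwd runs only if ls exits with status 0, and whoami runs only if pwd does.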
Please correct me if I am not understanding your question correctly.

Shell script to log server checks runs manually, but not from cron

I'm using a basic shell script to log the results of top, netstat, ps and free every minute.
This is the script:
/scripts/logtop:
TERM=vt100
export TERM
time=$(date)
min=${time:14:2}
top -b -n 1 > /var/log/systemCheckLogs/$min
netstat -an >> /var/log/systemCheckLogs/$min
ps aux >> /var/log/systemCheckLogs/$min
free >> /var/log/systemCheckLogs/$min
echo "Message Content: $min" | mail -s "Ran System Check script" email#domain.com
exit 0
When I run this script directly, it works fine. It creates the files, puts them in /var/log/systemCheckLogs/, and then sends me an email.
I can't, however, get it to work when trying to get cron to do it every minute.
I tried putting it in /var/spool/cron/root like so:
* * * * * /scripts/logtop > /dev/null 2>&1
and it never executes
I also tried putting it in /var/spool/cron/myservername, like so:
* * * * * /scripts/logtop > /dev/null 2>&1
It'll run every minute, but nothing gets created in systemCheckLogs.
Is there a reason it works when I run it but not when cron runs it?
Also, here's what the permissions look like:
-rwxrwxrwx 1 root root 326 Jul 21 01:53 logtop
drwxr-xr-x 2 root root 4096 Jul 21 01:51 systemCheckLogs
Normally crontabs are kept in /var/spool/cron/crontabs/. Also, you should normally update them with the crontab command, as this HUPs crond when you're done and makes sure the file ends up in the correct place.
Are you using the crontab command to create the cron entry? Use crontab <file> to import a file directly, or crontab -e to edit the current crontab with $EDITOR.
All jobs run by cron need the interpreter listed at the top, so cron knows how to run them.
I can't tell if you just omitted that line or if it is not in your script.
For example,
#!/bin/bash
echo "Test cron jon"
When running from /var/spool/cron/root, it may be failing because cron is not configured to run for root. On Linux, root cron jobs are typically run from /etc/crontab rather than from /var/spool/cron.
When running from /var/spool/cron/myservername, you probably have a permissions problem. Don't redirect the errors to /dev/null; capture and examine them.
Something else to be aware of: cron doesn't initialize the full run environment, which can mean a script runs just fine from a fully logged-in shell but doesn't behave the same from cron.
In the case above, you don't have a "#!/bin/shell" line at the top of your script. If root is configured to use something like a plain Bourne shell or csh, the syntax you use to populate your variables will not work. This would explain why the script runs but doesn't populate your files. So if you need ksh, use "#!/bin/ksh". It's generally best not to trust the environment to keep these things sane. If you need your profile, run ". ~/.profile" up front as well. A quick and dirty way to get a relatively full environment is to run the job through su, as in:
* * * * * su - root -c "/path/to/script" > /dev/null 2>&1
Just some things I've picked up over the years. Your syntax definitely expects ksh (or bash), so you might want to be sure that's what cron is using.
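One more option: Vixie cron lets you set variables such as SHELL and PATH at the top of a crontab, so every entry runs with a known shell and search path (a sketch):
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /scripts/logtop > /dev/null 2>&1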
Thanks for the tips... I used a little bit of each answer to get to the bottom of this.
I did have the interpreter at the top (it wasn't shown here), but it may have been wrong.
I am using #!/bin/bash now and that works.
I also had to tinker with the permissions of the directory the log files are dumped into to get things working.
