How to automate rsync in Linux? - cron

I want to run rsync (to a remote Linux system) automatically every minute, so that whatever changes are made on one system (in test.txt, as mentioned below) are reflected on the other system within the same minute interval.
For this purpose, I edited the root crontab with sudo crontab -e and added:
*/1 * * * * /home/john/rsync.sh
rsync.sh contains two commands:
sudo rsync -av /home/john1/test.txt remote@remote_ip:
sudo rsync -av --update /home/john1/test.txt remote@remote_ip:
When I run rsync.sh manually, all the changes are applied successfully.

If you added this in the root crontab, you don't need to start the rsync commands with sudo.
Things that run from crontab will probably not have the same environment variables. You can add the absolute path to rsync if you're unsure, for example /usr/bin/rsync. Also check other environment variables, for example by running set.
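For example, to see exactly which environment cron gives your jobs, you can temporarily add a line like this (the output path is just an example) and compare the result with env in your interactive shell:
* * * * * env > /tmp/cron_env.txt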
When you run it manually, you're already in a specific shell which is probably able to run it, but when cron runs it, it doesn't know which interpreter to use. Always start your scripts with #!/usr/bin/bash (or whatever your favorite shell is). And/or call your cron job specifying which shell to use, for example:
*/1 * * * * /bin/bash /home/john/rsync.sh
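Putting those pieces together, rsync.sh might look like this (a sketch: it assumes rsync lives at /usr/bin/rsync, which you can confirm with command -v rsync, and that SSH key authentication to the remote host is already set up, since cron cannot type a password):
#!/usr/bin/bash
# absolute paths, because cron's PATH is minimal
/usr/bin/rsync -av --update /home/john1/test.txt remote@remote_ip: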
I hope this helps. ;)

Related

How do I run 1 instance of rsync / rclone script using flock as a cron job?

I'm trying to run only one instance of my backup script as a cron job.
I know I can do it with a function that checks if the process is running:
if pgrep -x rclone >/dev/null
then
    # rclone is still running, backup is not done yet, exit.
    exit
else
    # rclone is not running.
    # start backup.
    /path/to/rclone/script.sh
fi
But after some research, I found out that flock should be used instead of checking for the process ID. In crontab -e, in this case, to run the script every 30 minutes:
*/30 * * * * /usr/bin/flock -n /var/lock/myjob.lock /path/to/rclone/script.sh
Running the command above requires sudo, so the script asks for the sudo password and never runs. (I ran the command above manually; that's how I found out.)
How exactly do I use flock? Do I put a variable in my script that injects the sudo password when flock asks for it? (I know that's not secure, so there must be a different way to do this.)
I tried to research this subject but couldn't find any good answers that explain how to use it properly.
Thank you.
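One commonly suggested approach (a sketch, assuming the sudo prompt comes from creating the lock file in a root-owned directory such as /var/lock) is to point flock at a path your own user can write, for example under /tmp:
*/30 * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/rclone/script.sh
flock creates the lock file itself if it does not exist, so no special permissions are needed as long as the directory is writable by the user who owns the crontab.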

crontab does not execute a simple shell script to turn swap off and turn swap back on

When I execute swapoff -a && swapon -a it works like a charm, and when I create a script file swap.sh and run that, it works great too.
chmod 755 swap.sh
But when I make a crontab entry that should execute the script, nothing happens.
crontab -e
0 2 * * * /scripts/swap.sh
Am I missing something here?
Since you confirmed the script runs as superuser (sudo) and the file has execute permissions, the problem is something else (cron, in this case):
Try using the full path
/sbin/swapoff -a && /sbin/swapon -a
You have to use full absolute pathnames in crontab commands because when cron runs a script, it does not perform initial login activity (which is where the path(s) are set).
All credits go to: https://ubuntuforums.org/archive/index.php/t-1766875.html
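For completeness, a minimal swap.sh along those lines (a sketch; verify where the binaries live on your system with command -v swapoff):
#!/bin/sh
# absolute paths, because cron does not perform the login activity that sets PATH
/sbin/swapoff -a && /sbin/swapon -a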

Cronfile did not execute sudo -u line?

I have made the following cron job sh file:
vi RestartServices.sh
/etc/init.d/b1s stop
sleep 10
/etc/init.d/sapb1servertools stop
sleep 10
sudo -u ndbadm /usr/sap/NDB/HDB00/HDB stop
sleep 20
sudo -u ndbadm /usr/sap/NDB/HDB00/HDB start
sleep 10
/etc/init.d/sapb1servertools start
sleep 10
/etc/init.d/b1s start
When I run this file manually the job runs correctly.
When scheduled in crontab (root user)
Crontab content:
# srvmagtCron: restarts daemons that died
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/sh -c "[ -x /etc/srvmagt/srvmagtCron ] && /etc/srvmagt/srvmagtCron"
0 2 * * * /hanamnt/shared/NDB/HDB00/backup/scripts/VGRbackup.sh
#RESTARTS SERVICE LAYER , SAPB1ServerTools service , HDB
0 3 * * * /hanamnt/shared/NDB/HDB00/backup/scripts/RestartServices.sh
It does get started at the requested time, but I think it failed to execute the sudo lines, as the HDB service has not been restarted.
I'm trying to find out why.
Is it because sudo cannot be executed in a cronjob?
(service needs to start using user ndbadm)
path:
/opt/sap/sapjvm_6//bin:/opt/fujitsu/bwai/bin:/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
You have a non-standard $PATH, and crond(8) is running your crontab(5) entries with a shorter $PATH. See also environ(7), credentials(7), and execvp(3) with execve(2).
My recommendation would be to write a complete shell script and put only that in crontab. So don't use sh -c in crontab entries, and set the PATH explicitly (preferably in the shell script your crontab entry is firing, or maybe in your crontab file).
You could for example have
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /hanamnt/shared/srvmagt.sh
in your crontab, and have an executable /hanamnt/shared/srvmagt.sh file starting with
#!/bin/bash
export PATH=/opt/sap/sapjvm_6//bin:/opt/fujitsu/bwai/bin:/sbin:\
/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:\
/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:\
/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin
# log a starting message
logger start of $0
Notice the use of logger(1) - you should use it to get appropriate log messages under /var/log.
BTW, your PATH is ridiculously long. Such a long PATH is messy (it might slow down your shells) and could be a security risk; my recommendation would be to use a much shorter one (perhaps as short as $HOME/bin:/usr/local/bin:/bin:/usr/bin) and to add appropriate symlinks or scripts in e.g. $HOME/bin/ or /usr/local/bin/ that use explicit program paths.
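For instance, instead of keeping the whole /opt/sap/sapjvm_6/bin directory on the PATH, a single symlink in $HOME/bin would do (a sketch; that java is the only binary you need from that directory is an assumption):
mkdir -p $HOME/bin
ln -s /opt/sap/sapjvm_6/bin/java $HOME/bin/java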
Notice that sudo can be used in a crontab job (but that is often unwise) and then should probably be configured in /etc/sudoers; perhaps you should prefer /bin/su (see su(1)...) in some shell script.
Read also more about setuid. Sometimes it is wiser to write a setuid wrapper program in C (using setreuid(2)), but be careful: you could open huge security holes by mistake.
Read also Advanced Linux Programming (freely downloadable, a bit old) then syscalls(2) to understand better how Linux works internally. You need to have a better and more clear picture of your system in your head.
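Applied to the script in the question, that advice might look roughly like this (a sketch: it reuses the paths from the question, trims the PATH as suggested above, and swaps sudo -u for su, which does not prompt for a password when invoked by root):
#!/bin/bash
export PATH=/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin
logger start of $0
/etc/init.d/b1s stop
sleep 10
/etc/init.d/sapb1servertools stop
sleep 10
# run the HDB commands as ndbadm via su instead of sudo -u
su - ndbadm -c '/usr/sap/NDB/HDB00/HDB stop'
sleep 20
su - ndbadm -c '/usr/sap/NDB/HDB00/HDB start'
sleep 10
/etc/init.d/sapb1servertools start
sleep 10
/etc/init.d/b1s start
logger end of $0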

sh file not running from cron on Ubuntu

I am trying to run a shell script from crontab on Ubuntu. I have tried Googling and following other links, but nothing has helped so far.
This is my crontab:
*/2 * * * * sudo bash /data/html/mysite/site_cleanup.sh
This is the content of my sh file:
#!/bin/sh
# How many days of retention do we want?
DAYS=0
# getting the present day
now=$(date +"%m_%d_%Y")
# Where is the base directory
BASEDIR=/data/html/mysite
# Where is the backup directory
BKPDIR=/data/html/backup
# Where is the log file
LOGFILE=$BKPDIR/log/mysite.log
# add to tar
tar -cvzf $now.tar.gz $BASEDIR
mv $now.tar.gz $BKPDIR
# REMOVE OLD FILES
echo `date` Purge Started >> $LOGFILE
find $BASEDIR -mtime +$DAYS | xargs rm
echo `date` Purge Completed >> $LOGFILE
The same script runs from a terminal and gives the desired result.
Generic troubleshooting for noninteractive shell scripts
Put set -x; exec 2>/path/to/logfile at the top of your script to log all subsequent commands to a file as they're run. If this doesn't work, you'll know that your script isn't being run at all; if it does, you'll know where it fails and how.
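For instance, the first lines of site_cleanup.sh could look like this (the trace path is just an example; use any location the cron user can write to):
#!/bin/sh
# log every subsequent command, with its expansions, as it runs
set -x; exec 2>/var/tmp/site_cleanup.trace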
If this is a personal crontab
If you're running crontab -e as a user (without sudo), then the crontab being modified is one for commands run with that user's permissions. Check that file permissions allow that user to modify the content in question (which, if these files are in a cgi-bin directory, may require being run by the same user as the web server).
If your intent is to have commands run as root rather than as your own user, be sure to use sudo when editing the crontab so you edit the system crontab instead (but please take care as to your script's correctness in this case -- carelessness such as missing quotes or a lack of appropriate precautions in xargs usage can cause a script to delete the wrong files if malicious filenames are created; see the safer find/xargs sketch after the crontab examples below):
sudo crontab -e ## to edit the system (root) crontab
...or, if you're cleaning up files owned by the apache user (for example; check which account is correct for your own operating system and web server):
sudo -u apache crontab -e ## to edit the apache user's crontab
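On the xargs point above: the find | xargs rm pipeline in the script breaks on filenames containing spaces or newlines. A NUL-delimited variant (a sketch using the script's own variables; -print0, -0, and -r are GNU find/xargs options available on Ubuntu) avoids that:
find "$BASEDIR" -mtime +"$DAYS" -print0 | xargs -0 -r rm --
The NUL delimiter makes whitespace in names harmless, -r skips running rm when nothing matches, and -- stops rm from treating a leading-dash filename as an option.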
Troubleshooting for a system crontab
Do not attempt to put a sudo command within the commands run by cron; with sudo's default configuration, it requires a TTY (a keyboard and screen) to be attached to a session in order to run. Thus, your crontab line should not contain sudo, but instead should look like the following:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh
Your issue is likely coming from the sudo call in your user-level cron. Unless you've configured sudoers to allow that script to run without a password, it'll hang up every time.
So you can look up how to run a script with no password by modifying the sudoers file, remove the sudo call if you aren't doing something in your script that calls for superuser permissions, or, as a last-ditch, extremely bad idea, you can call your script from root's cron by doing sudo crontab -e (or sudo env EDITOR=nano crontab -e if you prefer nano as your editor).
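If you do go the passwordless-sudo route, the rule belongs in sudoers (a sketch; edit with visudo, and the username here is a placeholder):
youruser ALL=(root) NOPASSWD: /data/html/mysite/site_cleanup.sh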
Try adding this line to the crontab of the root user, without the sudo.
Like this:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh

Handling cron jobs in Docker?

How are people generally handling cron jobs with Docker? The most common case I've seen is a sidekick image running just crond and the code base; however, when using cronie I'm not able to read any environment variables that are passed in on the docker command line.
Specifically I'll do this:
docker run -d --name cron -e VAR1=val1 -e VAR2=val2 cron_image start
Inside the image we'll have this:
[root@dae7207bf10e /]# yum info cronie
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.ash.fastserv.com
* epel: mirror.cs.pitt.edu
* extras: mirror.vcu.edu
* updates: mirror.us.leaseweb.net
Installed Packages
Name : cronie
Arch : x86_64
Version : 1.4.11
Release : 13.el7
Size : 211 k
Repo : installed
Summary : Cron daemon for executing programs at set times
URL : https://fedorahosted.org/cronie
License : MIT and BSD and ISC and GPLv2+
Description : Cronie contains the standard UNIX daemon crond that runs specified programs at
: scheduled times and related tools. It is a fork of the original vixie-cron and
: has security and configuration enhancements like the ability to use pam and
: SELinux.
[root@dae7207bf10e /]# cat /usr/local/bin/start
#!/bin/bash
/usr/bin/env > /var/tmp/docker_env
/usr/sbin/crond -n
And my crontabs will look like this:
SHELL=/bin/bash
5 16 * * * source /var/tmp/docker_env; /usr/local/bin/randomchallenge &> /var/log/randomchallenge.log
Originally I didn't have the source bits at all and tried to use the variables directly; however, it doesn't look like cronie presents them to the jobs it runs (which does make sense in the vast majority of use cases). I've tried pulling in this env file a variety of ways without luck; my program can never read the variables. Even wrapping the whole thing in a shell script that pulls in the env doesn't do the job.
How are people handling this kind of thing? Hard coding values is not an option. I suppose I could make the start script generate the crontab on the fly but that seems really ugly.
Sourcing the env file does not work, and I'm not sure why (originally I was chmod +xing the env file; I removed that for this answer, so it isn't that). I ended up finding this wonky env kludge to do it. env can take NAME=VALUE pairs as arguments, so we just cat our env file, splice its contents into the command line for env -, and then run the actual job in that environment.
[root@b7886c463928 /]# cat /usr/local/bin/start
#!/bin/bash
env > /var/tmp/docker_env
/usr/sbin/crond -n
[root@b7886c463928 /]# crontab -l
*/1 * * * * env - `cat /var/tmp/docker_env` env > /tmp/cron.check
You'll need to add this bit before every job
env - `cat /var/tmp/docker_env`
I'm going to write a lightweight crond clone that can handle standard job formats but passes the environment along and outputs to stdout.
Cron in the Docker world seems (for whatever reason) to have received less love than other facilities in a standard Linux environment. I found it not very obvious how to do it correctly.
Here is my take on the problem and a solution for it. Have a look at docker-vixie-cron and its docker image redmatter/cron to see if it helps your scenario. It did take a bit of trial and error to arrive at the current solution, but please feel free to air your thoughts.
It is quite different from what you have done with cronie; here is how. In your project you add a Dockerfile with the lines below and a crontab.txt with your cron definition.
Dockerfile
FROM redmatter/cron
ADD randomchallenge /usr/local/bin/
crontab.txt
*/1 * * * * /usr/local/bin/randomchallenge >>/var/log/randomchallenge.log 2>&1
If you want to use a user other than root (say, because you have another container sharing the cron container), you can additionally define RUN_USER=another.user and then add the user using a built-in script called cron-user add, as in the below version of the Dockerfile.
Dockerfile with another.user
FROM redmatter/cron
ENV RUN_USER=another.user
RUN cron-user add -u another.user
ADD randomchallenge /usr/local/bin/
Run
In both cases you can run the container using the command below.
docker run -d --name cron \
-e PRESERVE_ENV_VARS="VAR1 VAR2" \
-e VAR1=val1 -e VAR2=val2 \
cron_image start
Here it is important to specify PRESERVE_ENV_VARS="VAR1 VAR2" so that VAR1 and VAR2 are preserved for randomchallenge to see.
