I'm having some difficulty with my crontab. I've run through this list: https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work which was helpful. I have the following crontab entry:
35 02 * * * /root/scripts/backup
Which runs:
#!/bin/bash
PATH=/home:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
echo "Time: $(date)" >> /home/me/SystemSync.log
rsync -ar -e ssh /home/ root@IPADDRESSOFNAS:/volume1/backup/server/SystemSSD/
echo "Time: $(date)" >> /SecondHD/RsyncDebug.log
sudo rsync -arv -e ssh /SecondHD/ root@IPADDRESSOFNAS:/volume1/backup/server/SecondHD/
The .log files are written on the server, with the intention that they will be synced to the NAS when the corresponding rsync succeeds.
I know the following:
The SystemSSD portion of the script works, as do both of the log portions of the script. On the NAS, I see the SystemSync.log file with the newest entry. I find the RsyncDebug.log on the server with the updated time stamp, but not on the NAS.
The script runs in its entirety when I run it from the command line, just not in crontab.
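One way to narrow this down is to capture the script's output and errors when cron runs it, for example (the log path here is arbitrary):
35 02 * * * /root/scripts/backup >> /var/log/backup-cron.log 2>&1
Comparing that log with a manual run usually shows which command is failing under cron.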
Potentially pertinent information:
I'm running CentOS 6 on the server.
The system drive is a 1TB SSD and the Second drive is a 4 TB Raid1 hard drive with ca. 1 TB of space remaining.
The NAS volume has 5TB drives, with about 1 TB of space remaining.
Thanks in advance. Someday I hope to teach as much as I learn from this community.
I am trying to keep 3 large directories (9G, 400G, 800G) in sync between our home site and another in a land far, far away across a network link that is a bit dodgy (slow and drops occasionally). Data was copied onto disks prior to installation so the rsync only needs to send updates.
The problem I'm having is the rsync hangs for hours on the client side.
The smaller 9G job completed, the 400G job has been in limbo for 15 hours - no output to the log file in that time, but has not timed out.
What I've done to set this up (after reading many forum articles about rsync, the rsync server and --partial, since I am not really a system admin):
I set up an rsync server (/etc/rsyncd.conf) on our home system, entered it into xinetd, and wrote a script to run rsync on the distant server; the script loops if rsync fails, in an attempt to deal with the dodgy network. The rsync command in the script looks like this:
rsync -avzAXP --append root@homesys01::tools /disk1/tools
Note the "-P" option is equivalent to "--progress --partial"
I can see in the log file that rsync did fail at one point and the loop restarted it; data was transferred after that, based on entries in the log file, but the last update to the log file was 15 hours ago, and the rsync process on the client is still running. The loop around the rsync call looks like this:
CNT=0
while [ 1 ]
do
    rsync -avzAXP --append root@homesys01::tools /disk1/tools
    STATUS=$?
    if [ $STATUS -eq 0 ] ; then
        echo "Successful completion of tools rsync."
        exit 0
    else
        CNT=`expr ${CNT} + 1`
        echo " Rsync of tools failure. Status returned: ${STATUS}"
        echo " Backing off and retrying(${CNT})..."
        sleep 180
    fi
done
So I expected these jobs to take a long time; I expected to see the occasional failure message in the log files (which I have) and to see rsync restart (which it has). I was not expecting rsync to just hang for 15 hours or more with no progress and no timeout error.
Is there a way to tell if rsync on the client is hung versus dealing with the dodgy network?
I set no timeout in the /etc/rsyncd.conf file. Should I, and how do I determine a reasonable timeout setting?
I set rsync up to be available through xinetd, but I don't always see the "rsync --daemon" process running. It restarts if I run rsync from the remote system. But shouldn't it always be running?
Any guidance or suggestions would be appreciated.
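Regarding the xinetd point: when rsync is run via xinetd, xinetd itself listens on port 873 and only spawns rsync --daemon for each incoming connection, so it is normal not to see a persistent rsync process. A typical /etc/xinetd.d/rsync entry looks roughly like this sketch (the server path may differ on your system):
service rsync
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}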
To tell whether the rsync client is working, run it with the verbose option and keep a log file.
Change this line:
rsync -avzAXP --append root@homesys01::tools /disk1/tools
to
rsync -avzAXP --append root@homesys01::tools /disk1/tools >>/tmp/rsync.log.`date +%F`
This will produce one log file per day under the /tmp directory.
You can then use the tail -f command to follow the most recent log file; if it keeps rolling, the transfer is still working.
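For example, to follow today's log (the name matches the redirection above):
tail -f /tmp/rsync.log.$(date +%F)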
See also "rsync - what means the f+++++++++ on rsync logs?" to understand the log output in more detail.
I thought I would post my final solution, in case it can help anyone else. I added --timeout 300 and --append-verify. The timeout eliminates the case of rsync hanging indefinitely; the loop will restart it after the timeout. The --append-verify is needed so that rsync verifies any partial file it previously appended to.
Note the following code is in a shell script and the output is redirected to a log file.
CNT=0
while [ 1 ]
do
    rsync -avzAXP --append-verify --timeout 300 root@homesys01::tools /disk1/tools
    STATUS=$?
    if [ $STATUS -eq 0 ] ; then
        echo "Successful completion of tools rsync."
        exit 0
    else
        CNT=`expr ${CNT} + 1`
        echo " Rsync of tools failure. Status returned: ${STATUS}"
        echo " Backing off and retrying(${CNT})..."
        sleep 180
    fi
done
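To address the earlier question about a timeout in /etc/rsyncd.conf: the daemon side also supports a timeout parameter (globally or per module), so dead connections get dropped on the server as well. A minimal sketch, assuming the module is named tools and with a placeholder path:
# /etc/rsyncd.conf (excerpt)
timeout = 300

[tools]
        path = /export/tools
        read only = no
A value in the same range as the client-side --timeout (a few minutes) is a reasonable starting point.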
I have a simple cron job which prints the current date to a log file. For testing purposes, I've set this cron job to run every minute.
crontab -u user01 -e
* * * * * echo "Date is $(date)" >> /home/user01/date.log
It used to work before I created a logical volume, formatted it as ext4, and mounted it to /home/user01. After the mount operation, it doesn't do anything.
After this, I created a crontab with just crontab -e, i.e. without giving the username, and that crontab started to work again. But I want to know why my first crontab stopped working after the mount.
Also, I know the /home/date.log will be deleted after the mount operation, but the crontab should write output to date.log every minute.
For the record, there isn't any problem with the mounting. I checked /etc/fstab and df -hT; the /home/user01 directory is mounted.
Also, I have tried the exact same cron job for another user (user02) in another directory, and it worked, so there isn't any syntax or privilege issue.
Also, when I check /var/log/cron, the output below appears every minute:
(user01) CMD (echo "Today is $(date)" >> /home/user01/date.log)
(user02) CMD (echo "Today is $(date)" >> /home/user02/date.log)
This output appears in the log file every minute, so I guess the cron job is running but not producing the output for user01 for some reason.
Thank you for your help
Can you log in as user01 and run echo "Date is $(date)" >> /home/user01/date.log successfully?
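One thing worth checking along those lines (a sketch; the chown assumes the usual cause): a freshly created ext4 filesystem mounted on /home/user01 is owned by root and contains only lost+found, so user01 may simply not be able to write there.
ls -ld /home/user01                          # who owns the mount point now?
su - user01 -c 'touch /home/user01/probe'    # can user01 create a file there?
chown user01:user01 /home/user01             # if not, hand the new filesystem's root directory back to user01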
I back up files to tar files once a day, grab them from our Ubuntu servers using a backup shell script, and put them in a share. We only have 5TB shares, but we can have several.
At the moment we need more space, as we keep 30 days' worth of tar files.
I need a method where the first 10 days go to share one, the next ten to share two, and the next 11 to share three.
Currently each server VM runs the following script to back up and tar folders and place them in another folder, ready to be grabbed by the backup server:
#!/bin/bash
appname=myapp.com
dbname=mydb
dbuser=myDBuser
dbpass=MyDBpass
datestamp=`date +%d%m%y`
rm -f /var/mybackupTars/* > /dev/null 2>&1
mysqldump -u$dbuser -p$dbpass $dbname > /var/mybackups/$dbname-$datestamp.sql && gzip /var/mybackups/$dbname-$datestamp.sql
tar -zcf /var/mybackups/myapp-$datestamp.tar.gz /var/www/myapp > /dev/null 2>&1
tar -zcf /var/mydirectory/myapp-$datestamp.tar.gz /var/www/html/myapp > /dev/null 2>&1
I then grab the backups using a script on the backup server and put them in a share
#!/bin/bash
#
# Generate a list of myapps to grab
df|grep myappbackups|awk -F/ '{ print $NF }'>/tmp/myapplistlistsmb
# Get each app in turn
for APPNAME in `cat /tmp/myapplistlistsmb`
do
    cd /srv/myappbackups/$APPNAME
    scp $APPNAME:* .
done
I know this is a tough one, but I really need 3 shares with ten days' worth in each share.
I do not anticipate changing the backup script that each server VM runs to back itself up.
Probably only the grabber script that puts the dated backups into the share on the backup server would need to change.
Or am I wrong??
Any help would be great
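In case it helps as a starting point, here is a rough sketch of how the grabber script could pick a destination share from the day of the month, so that roughly ten days' worth of tar files lands in each share; /srv/share1, /srv/share2 and /srv/share3 are placeholders for the real share mount points:
#!/bin/bash
# choose a share based on today's day of the month: 1-10, 11-20, 21-31
day=$((10#$(date +%d)))   # force base 10 so days 08 and 09 are not treated as octal
if   [ "$day" -le 10 ]; then share=/srv/share1
elif [ "$day" -le 20 ]; then share=/srv/share2
else                         share=/srv/share3
fi
# grab each app's backups into the selected share, as before
df | grep myappbackups | awk -F/ '{ print $NF }' > /tmp/myapplistlistsmb
for APPNAME in `cat /tmp/myapplistlistsmb`
do
    mkdir -p $share/$APPNAME
    cd $share/$APPNAME
    scp $APPNAME:* .
done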
I'm pretty new to bash scripting, but I have constructed something that works. It copies new/changed files from a folder on my web server to another directory. This directory is then compressed and the compressed folder is uploaded to my Dropbox account.
This works perfectly when I run it manually with:
sudo run-parts /path/to/bash/scripts
I wanted to automate this, so I edited my crontab file using sudo crontab -e to include the following:
0 2 * * * sudo run-parts /path/to/bash/scripts
This works, but with one issue. It spikes my CPU usage to 60%, and it doesn't drop until I open htop and kill the final process (the script that does the uploading). When it runs the next day, CPU usage spikes to 100% and stays there, because the previous run is still going. This issue doesn't occur when I run the scripts manually.
Thoughts?
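One thing that would at least stop runs from piling up on each other is wrapping the job in flock from util-linux, so a new run exits immediately if the previous one is still holding the lock (a sketch; the lock file path is arbitrary):
0 2 * * * flock -n /var/lock/nightly-scripts.lock run-parts /path/to/bash/scripts
This does not explain why the upload script never finishes, but it prevents the stuck run from doubling the load the next night.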
I am very new to shell scripting and Bacula. I want to create a script that schedules backups using Bacula.
How do I do that?
Any lead is appreciated.
Thanks.
If you are going to administer your own Linux system, learn bash. The man page is really quite detailed and useful. Do man bash.
If you are really new to Linux and command lines, administering bacula is not for newbies. It is a fairly comprehensive backup system, for multiple systems, with a central database, which means that it is also complex.
There are much simpler tools available on Linux to perform simple system backups, which are just as reliable. If you just want to back up your home directory, tar or zip are excellent tools. In particular, tar can do both full backups and incremental backups.
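For example, GNU tar's --listed-incremental option keeps a snapshot file and only archives what changed since the previous run (paths here are placeholders; the script further below uses a simpler timestamp-based approach instead):
# level-0 (full) backup: remove any previous snapshot file first
rm -f /mnt/backups/home.snar
tar czf /mnt/backups/home-full.tgz --listed-incremental=/mnt/backups/home.snar /home/me
# later incremental runs reuse the snapshot file and only archive changes
tar czf /mnt/backups/home-incr-$(date +%F).tgz --listed-incremental=/mnt/backups/home.snar /home/me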
Even assuming that you really do want to use bacula and have enough information to write a couple of simple scripts, the original request is ambiguous.
Do you mean schedule a periodic cron job to accomplish backups unattended? Or do you mean to schedule a single invocation of bacula at a determined time and date?
In either case, it's a good idea to create two simple scripts: one to perform a full backup, and one to perform an incremental backup. The full backup should be run, say, once a week or once a month, and the incremental backup should be run every day, or once a week -- depending on how often your system data changes.
Most modest sites undergoing daily usage would have a daily incremental backup with a full backup on the weekends (say, Sunday). This way, if the system crashed on, say, Friday, you would need to recover with the most recent full backup (on the previous Sunday), and then recover with each day's incremental backup (Mon, Tue, Wed, Thu). You would probably lose data changes that had occurred on the day of the crash.
If the rate of data change was hourly, and recovery at an hourly rate was important, then the incrementals should be scheduled for each hour, with full backups each night.
An important consideration is knowing what, exactly, is to be backed up. Most home users want their home directory to be recoverable. The OS root and application partitions are often easily recoverable without backups. Alternatively, these are backed up on a very infrequent schedule (say once a month or so), since they change much less frequently than the user's home directory.
Another important consideration is where to put the backups. Bacula supports external storage devices, such as tapes, which are not mounted filesystems. tar also supports tape archives. Most home users have some kind of USB or network-attached storage that is used to store backups.
Let's assume that the backups are to be stored on /mnt/backups/, and let's assume that the user's home directory (and subdirectories) are all to be backed up and made recoverable.
% cat <<'EOF' >/usr/local/bin/full-backup
#!/bin/bash
# full-backup SRCDIRS [--options]
# incr-backup SRCDIRS [--options]
#
# set destdir to the path at which the backups will be stored
# each backup will be stored in a directory of the date of the
# archive, grouped by month. The directories will be:
#
# /mnt/backups/2014/01
# /mnt/backups/2014/02
# ...
# the full and incremental files will be named this way:
#
# /mnt/backups/2014/01/DIR-full-2014-01-24.192832.tgz
# /mnt/backups/2014/01/DIR-incr-2014-01-25.192531.tgz
# ...
# where DIR is the name of the source directory.
#
# There is also a file named ``lastrun``, whose last modification
# time is used to select files changed since the last backup.
PROG=${0##*/} # prog name: full-backup or incr-backup
destdir=/mnt/backups
now=`date +"%F-%H%M%S"`
monthdir=`date +%Y/%m`
dest=$destdir/$monthdir/
mkdir -p $dest
set -- "$@"
while (( $# > 0 )) ; do
dir="$1" ; shift ;
options='' # collect options
while [[ $# -gt 0 && "x$1" =~ x--* ]]; do # any options?
options="$options $1"
shift
done
basedir=`basename $dir`
fullfile=$dest/$basedir-full-$now.tgz
incrfile=$dest/$basedir-incr-$now.tgz
lastrun=$destdir/lastrun
case "$PROG" in
full*) archive="$fullfile" newer= kind=Full ;;
incr*) archive="$incrfile" newer="--newer $lastrun" kind=Incremental ;;
esac
cmd="tar cfz $archive $newer $options $dir"
echo "$kind backup starting at `date`"
echo ">> $cmd"
eval "$cmd"
echo "$kind backup done at `date`"
touch $lastrun # mark the end of the backup date/time
done
exit
EOF
(cd /usr/local/bin ; ln -s full-backup incr-backup )
chmod +x /usr/local/bin/full-backup
Once this script is configured and available, it can be scheduled with cron. See man cron. Use crontab -e to create and edit a crontab entry to invoke full-backup once a week (say), and another crontab entry to invoke incr-backup once a day. The following are four sample crontab entries (see man 5 crontab for details on syntax) for performing incremental and full backups, as well as removing old archives.
# run incremental backups on all user home dirs at 3:15 every day
15 3 * * * /usr/local/bin/incr-backup /Users
# run full backups every sunday, at 3:15
15 3 * * 7 /usr/local/bin/full-backup /Users
# run full backups on the entire system (but not the home dirs) on the 1st of every month
30 4 1 * * /usr/local/bin/full-backup / --exclude=/Users --exclude=/tmp --exclude=/var
# delete old backup files (more than 60 days old) on the 1st of every month
15 3 1 * * find /mnt/backups -type f -mtime +60 -delete
Recovering from these backups is an exercise left for later.
Good luck.
I don't think it makes sense to have a cron-scheduled script trigger Bacula.
The standard way to schedule backups using Bacula is:
1) Install the Bacula file daemon on the machine you want to back up, and then
2) Configure your Bacula Director to schedule the backup
ad 1)
If your machine to backup is Debian or Ubuntu, you can install the Bacula file daemon from the shell like this:
shell> apt-get install bacula-fd (bacula-fd stands for Bacula File Daemon)
If your machine to back up is Windows, then you need to download the Bacula file daemon and install it. You can download it here: http://sourceforge.net/projects/bacula/files/Win32_64/ (select the version that matches your Bacula server version)
ad 2)
You need to find the bacula-dir.conf file on your Bacula server (if you installed the Bacula Director on an Ubuntu machine, the path is /etc/bacula/bacula-dir.conf)
The bacula-dir.conf Schedule section is very flexible and therefore also somewhat complicated; here is an example:
Schedule {
Name = "MonthlyCycle"
Run = Level=Full on 1 at 2:05 # full backup on the 1st of every month at 2:05
Run = Level=Incremental on 2-31 at 2:05 # incremental backup on all other days
}
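For context, the Schedule only takes effect once a Job refers to it; a simplified sketch of the related resources in bacula-dir.conf might look like the following, where all names and paths are placeholders (matching the typical defaults shipped with Bacula):
Job {
  Name = "BackupMyClient"
  Type = Backup
  Client = myclient-fd
  FileSet = "HomeDirs"
  Schedule = "MonthlyCycle"
  Storage = File
  Pool = Default
  Messages = Standard
}
FileSet {
  Name = "HomeDirs"
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = /home
  }
}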
Note that a lot more configuration is necessary to run Bacula; here is a full tutorial on how to install, configure, back up and restore with Bacula: http://webmodelling.com/webbits/miscellaneous/bacula.aspx (disclaimer: I wrote the Bacula tutorial myself)