RSnapshot reports errors using rsnapreport.pl: "NO STATS DATA" - linux

I configured RSnapshot on a WD My Book Live (2TB) and it's working (at least that's what the logs say). I used the reporting tool rsnapreport.pl from /usr/share/doc/rsnapshot/examples/utils/rsnapreport.pl.gz to get human-readable mail reports about the crontab-triggered backup jobs.
While the backup jobs seem to work, the reports are obviously missing information, as you can see in this snippet:
SOURCE TOTAL FILES FILES TRANS TOTAL MB MB TRANS LIST GEN TIME FILE XFER TIME
--------------------------------------------------------------------------------------------------------------------
rsync://server:/vmail 13950 137 3687.81 20.31 0.052 seconds 0.000 seconds
ERRORS
/shares/rsnapshot/daily.0/ NO STATS DATA
The question now is:
Besides the error at the bottom, which is my first and major problem, the FILE XFER TIME is also 0 for all backup jobs (I guess the two issues are related).
I followed all instructions (see below) - what am I missing?
What I have done so far:
*) The NAS runs Debian Squeeze (incl. squeeze-backports), Kernel Version is 2.6.32, PPC Architecture.
*) rsync version 3.0.3-2 (preinstalled), with /etc/rsyncd.conf:
pid file=/var/run/rsyncd.pid
lock file=/var/run/rsync.lock
log file=/var/log/rsync.log
[rsync]
path=/shares/rsync
uid=root
gid=share
read only=no
list=yes
auth users=root
*) Installed rsnapshot 1.3.1-1 with /etc/rsnapshot.conf:
config_version 1.2
snapshot_root /shares/rsnapshot/
cmd_rm /bin/rm
cmd_rsync /usr/bin/rsync
cmd_logger /usr/bin/logger
interval daily 7
interval weekly 4
interval monthly 3
verbose 3
loglevel 3
logfile /var/log/rsnapshot.log
lockfile /var/run/rsnapshot.pid
rsync_long_args --delete --numeric-ids --relative --delete-excluded --stats
backup rsync://server:/vmail/ backupOfServer/vmail/
backup ...
backup ...
backup ...
*) unpacked the report script and followed instructions in the script (most of which you can see in the config above):
# this script prints a pretty report from rsnapshot output
# in the rsnapshot.conf you must set
# verbose >= 3
# and add --stats to rsync_long_args
# then setup crontab 'rsnapshot daily 2>&1 | rsnapreport.pl | mail -s"SUBJECT" backupadm@adm.com'
# don't forget the 2>&1 or your errors will be lost to stderr
*) and set up cron.d/rsnapshot:
MAILTO="user1@foo,user2@foo"
30 3 * * * root /usr/bin/rsnapshot daily 2>&1 | /root/rsnapreport.pl
0 3 * * 1 root /usr/bin/rsnapshot weekly 2>&1 | /root/rsnapreport.pl
30 2 1 * * root /usr/bin/rsnapshot monthly 2>&1 | /root/rsnapreport.pl
If you need any detailed or additional information, don't hesitate to ask. We are happy to have daily reports of the backup at all; just the errors at the bottom are making us nervous.
Best Regards and thanks in advance,
Peter

The reason for this error was that I did not uncomment the cmd_cp parameter. RSnapshot therefore used its built-in copy mechanism, which uses rsync.
This call to rsync was echoed to the output. The report script scans the output for calls to rsync and looks for their transfer statistics, but this initial "copy" command does not produce such stats - hence the "NO STATS" error for the source /daily.0.
The solution is to read the configuration file and follow the instructions:
# LINUX USERS: Be sure to uncomment "cmd_cp". This gives you extra features.
# EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
#
# See the README file or the man page for more details.
#
#cmd_cp /bin/cp
Uncommenting the last line fixes the error... RTFM ;)

The "NO STATS DATA" error is also reported if you miss out on the:
rsync_long_args --stats

The "NO STATS DATA" error is also reported if you backing up something containing "rsync" in its path like /etc/default/rsync.
For example in this case the command rsnapshot daily 2>&1 | /bu/script/rsnapreport.pl | mail -s "[BU Report]date" me#example.com will return the following errors:
Use of uninitialized value $source in hash element at /bu/script/rsnapreport.pl line 95, <> line 3991.
Use of uninitialized value $source in hash element at /bu/script/rsnapreport.pl line 96, <> line 3991.
...
This is due to the rsnapreport.pl script, which parses stats from the rsync output by matching the string "rsync" in it.
To resolve this problem, add to your /etc/rsnapshot.conf an exclude line for the problematic path found in the rsync output.
For example, if you don't need to back up etc/default/rsync:
exclude etc/default/rsync
If you need to back up files whose path contains "rsync", you have to modify the rsnapreport.pl script instead.
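If you are not sure which paths are triggering the problem, a quick sketch to list candidates (assuming the verbose rsync output ends up in the logfile configured above, /var/log/rsnapshot.log):
# list unique strings containing "rsync" from the last run; each is a candidate for an exclude line
grep -o '[^ ]*rsync[^ ]*' /var/log/rsnapshot.log | sort -u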

Related

Crontab not launching script

I'm trying to run the following script through crontab every day at 12:
#!/bin/sh
mount -t nfs 10.1.25.7:gadal /mnt/NAS_DFG
echo >> ~/Documents/Crontab_logs/logs.txt
date >> ~/Documents/Crontab_logs/logs.txt
rsync -ar /home /mnt/NAS_DFG/ >> ~/Documents/Crontab_logs/logs.txt 2>&1
umount /mnt/NAS_DFG
As it needs to run as root, I added the following line via 'sudo crontab' such that I have:
someone@something:~$ sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 12 * * * ~/Documents/Crontab_logs/Making_save.sh
But it does not run. I mention that just executing the script through:
sudo ~/Documents/Crontab_logs/Making_save.sh
works well, except that no output of the rsync command is written in the log file.
Any ideas what's going wrong? I think I checked the main sources of mistakes, i.e. the shell being used, leaving an empty line at the end, etc.
sudo crontab creates a job which runs out of the crontab of root (if you manage to configure it correctly; note that system-wide crontabs such as /etc/crontab and /etc/cron.d use a slightly different syntax with an extra user field). When cron runs the job, $HOME (and ~ if you use a shell or scripting language with tilde expansion) will refer to the home of root etc.
You should probably simply add
0 12 * * * sudo ./Documents/Crontab_logs/Making_save.sh
to your own crontab instead.
Notice that crontab does not have tilde expansion at all (but we can rely on the fact that cron will always run out of your home directory).
... Though this will still have issues, because if the script runs under sudo and it creates new files, those files will be owned by root, and cannot be changed by your regular user account. A better solution still is to only run the actual mount and umount commands with sudo, and minimize the amount of code which runs on the privileged account, i.e. remove the sudo from your crontab and instead add it within the script to the individual commands which require it.
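A minimal sketch of that approach, keeping the paths from the question and assuming sudo is allowed without a password for mount and umount:
#!/bin/sh
# Only the privileged mount/umount run via sudo; logging and rsync run as the regular user.
# Add sudo to rsync as well if it has to read other users' home directories.
LOG="$HOME/Documents/Crontab_logs/logs.txt"
sudo mount -t nfs 10.1.25.7:gadal /mnt/NAS_DFG
echo >> "$LOG"
date >> "$LOG"
rsync -ar /home /mnt/NAS_DFG/ >> "$LOG" 2>&1
sudo umount /mnt/NAS_DFG
The matching crontab entry then needs no sudo and should use the full path, since crontab does not expand ~ (the username here is assumed):
0 12 * * * /home/someone/Documents/Crontab_logs/Making_save.sh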

Logrotate daily+maxsize is not rotating

I have CentOS 7.4 with logrotate 3.8.6 installed. I have a custom logrotate file under /etc/logrotate.d/ to rotate some logs of a Tomcat (e.g., catalina.out) which is installed on the same machine.
/opt/test/apache-tomcat-8.5.15-client/logs/catalina.out {
copytruncate
daily
rotate 30
olddir /opt/test/apache-tomcat-8.5.15-client/logs/backup
compress
missingok
maxsize 50M
dateext
dateformat .%Y-%m-%d
}
I want the log to be rotated daily or if the size reaches 50MB. When this happens log files are compressed and copied into a backup folder and are kept for 30 days before being deleted.
I already ran logrotate manually in debug mode with the following command and no errors were displayed (and the expected zipped log files were created):
/usr/sbin/logrotate -d /etc/logrotate.d/openncp-tomcat-backoffice 2> /tmp/logrotate.debug
In /var/lib/logrotate/logrotate.status there are no issues; the files are shown as rotated, but in fact they are not:
"/var/log/yum.log" 2017-11-27-19:0:0
"/opt/test/apache-tomcat-8.5.15-server/logs/catalina.out" 2017-12-15-3:41:1
"/var/log/boot.log" 2017-12-15-3:41:1
"/var/log/up2date" 2017-11-27-19:0:0
I've the default /etc/logrotate.conf:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp and btmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
minsize 1M
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0600 root utmp
rotate 1
}
# system-specific logs may be also be configured here.
I also have the default /etc/cron.daily/logrotate:
#!/bin/sh
/usr/sbin/logrotate -s /var/lib/logrotate/logrotate.status /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0
I ask for your guidance on configuring this appropriately.
The problem was related to the SELinux file type of the log files, which were located in a directory other than /var/log, meaning that the logrotate process didn't have permission to perform its tasks. I found another SO thread as well as a Red Hat page that helped to solve the issue. I found the Red Hat documentation very helpful, so I provide two links here:
5.9.3. CHECKING THE DEFAULT SELINUX CONTEXT
5.6.2. PERSISTENT CHANGES: SEMANAGE FCONTEXT
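As an illustration of the persistent fix (a sketch assuming the Tomcat log directory from the question and that the logs should carry the var_log_t type):
# inspect the current SELinux context of the log directory
ls -Zd /opt/test/apache-tomcat-8.5.15-client/logs
# persistently map the directory and everything below it to var_log_t
semanage fcontext -a -t var_log_t "/opt/test/apache-tomcat-8.5.15-client/logs(/.*)?"
# apply the new context to the existing files
restorecon -Rv /opt/test/apache-tomcat-8.5.15-client/logs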
To answer your question (as in the title) about having daily and maxsize, note that by default logrotate runs once a day anyway so that means maxsize is hardly useful in that situation. The file will be rotated anyway (assuming you don't have SELinux in the way, of course).
maxsize is useful with weekly and monthly, of course, because logrotate still checks the files daily.
Note that the fact that logrotate runs daily is just because on many systems it is installed that way by default.
$ ls -l /etc/cron.daily
...
-rwxr-xr-x 1 root root 372 Aug 21 2017 logrotate
...
Moving that file to /etc/cron.hourly is safe, and with logrotate running hourly it becomes useful to combine the daily option with maxsize:
$ sudo mv -i /etc/cron.daily/logrotate /etc/cron.hourly/logrotate
WARNING: a system like Ubuntu is likely to re-install the daily file on the next update of the logrotate package, which is rather infrequent, but can happen. I do not know of a clean way to avoid that issue. The ugly way is to create an empty file of the same name which will prevent the packager from adding the file from the package.
$ sudo touch /etc/cron.daily/logrotate
Or edit and put a comment such as:
# placeholder to prevent installer from changing this file

How do I activate a cron command once within a specific time frame?

Basic information about my system: I have a music system where people can schedule songs to start and end at a specific time.
OS: Arch linux
It sets two cron jobs at the moment: one, let's say, at 1:50 (the start time, with a command like "play etc") and another at 3:20 (the end time, with a command like "end etc").
My setup works perfectly and I can end and delete schedules etc., but I now noticed an issue! If I set the above times and turn the system off (my system is a Raspberry Pi) and turn it back on at, let's say, 2:00, having missed the 1:50 deadline, the music doesn't start (obviously). I want to make it so that, no matter what time I turn it on within the range 1:50 - 3:20, it will run the play command. But it should run the command only once!
I looked around and the commands I found were like:
0 1.50-3.20/2 * * * your_command.sh
But that's to run every 2 hours. I want it to run only once between these times.
Thanks!
You could add an additional cron job which starts a script on every reboot. For instance, you could add a line like this to your crontab:
@reboot /home/pi/startplayback.sh
Your startplayback.sh script should check if the current time is within the desired period and run the desired command if it is. For example, the code below will print PLAY! if the script is run between 1:50 and 3:20. You could replace echo 'PLAY!' with whatever command you need to run.
#!/bin/bash
# current time as HHMM, e.g. 0205
current=$(date '+%H%M')
# force base 10 so a leading zero is not treated as octal
(( current=(10#$current) ))
# between 01:50 and 03:20 -> play
((current > 150 && current < 320 )) && echo 'PLAY!'
P.S. Don't forget to make your script executable: sudo chmod +x startplayback.sh
You might want to look at the at command and its utilities.
SYNOPSIS
at [-q queue] [-f file] [-mldbv] time
at [-q queue] [-f file] [-mldbv] -t [[CC]YY]MMDDhhmm[.SS]
at -c job [job ...]
at -l [job ...]
at -l -q queue
at -r job [job ...]
atq [-q queue] [-v]
atrm job [job ...]
batch [-q queue] [-f file] [-mv] [time]
at is good for scheduling one time jobs to be run at some point in the future. It maintains a queue of these jobs, so you can use it to schedule things with a great variety of different time specifications.
Cron is in my opinion a scheduler for jobs that are to be repeated over and over.
So a quick and dirty example for you:
echo 'ls -lathF' | at now + 1 minute
As expected you will see a job to be run in one minute. Try atq to see the list of jobs.
When the job is done, output will be mailed to your user by default.
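Applied to the question, a rough sketch (assuming a script /home/pi/play.sh that starts playback) would be to queue a one-off job for the start time, e.g. at boot:
echo '/home/pi/play.sh' | at 01:50
The job runs exactly once and is then removed from the queue, so it will not repeat the next day unless it is scheduled again.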
I solved the issue by creating a PHP file that is loaded on reboot; it does its work and then redirects back to such and such.

How to write a script for backup using bacula?

I am very new to shell scripting and Bacula. I want to create a script that schedules backups using Bacula.
How do I do that?
Any lead is appreciated.
Thanks.
If you are going to administer your own Linux system, learn bash. The man page is really quite detailed and useful. Do man bash.
If you are really new to Linux and the command line, administering Bacula is not for newbies. It is a fairly comprehensive backup system, for multiple systems, with a central database, which means that it is also complex.
There are much simpler tools available on Linux to perform simple system backups, which are just as reliable. If you just want to backup you home directory, tar or zip are excellent tools. In particular, tar can do both full backups and incremental backups.
Assuming that you really want to use bacula and have enough information to write a couple of simple scripts, then even so, the original request is ambiguous.
Do you mean scheduling a periodic cron job to accomplish backups unattended? Or do you mean scheduling a single invocation of Bacula at a determined time and date?
In either case, it's a good idea to create two simple scripts: one to perform a full backup, and one to perform an incremental backup. The full backup should be run, say, once a week or once a month, and the incremental backup should be run every day, or once a week -- depending on how often your system data changes.
Most modest sites undergoing daily usage would have a daily incremental backup with a full backup on the weekends (say, Sunday). This way, if the system crashed on, say, Friday, you would need to recover with the most recent full backup (on the previous Sunday), and then recover with each day's incremental backup (Mon, Tue, Wed, Thu). You would probably lose data changes that had occurred on the day of the crash.
If the rate of data change was hourly, and recovery at an hourly rate was important, then the incrementals should be scheduled for each hour, with full backups each night.
An important consideration is knowing what, exactly, is to be backed up. Most home users want their home directory to be recoverable. The OS root and application partitions are often easily recoverable without backups. Alternatively, these are backed up on a very infrequent schedule (say once a month or so), since they change much less frequently than the user's home directory.
Another important consideration is where to put the backups. Bacula supports external storage devices, such as tapes, which are not mounted filesystems. tar also supports tape archives. Most home users have some kind of USB or network-attached storage that is used to store backups.
Let's assume that the backups are to be stored on /mnt/backups/, and let's assume that the user's home directory (and subdirectories) are all to be backed up and made recoverable.
% cat <<'EOF' >/usr/local/bin/full-backup
#!/bin/bash
# full-backup SRCDIRS [--options]
# incr-backup SRCDIRS [--options]
#
# set destdir to the path at which the backups will be stored
# each backup will be stored in a directory of the date of the
# archive, grouped by month. The directories will be:
#
# /mnt/backups/2014/01
# /mnt/backups/2014/02
# ...
# the full and incremental files will be named this way:
#
# /mnt/backups/2014/01/DIR-full-2014-01-24.192832.tgz
# /mnt/backups/2014/01/DIR-incr-2014-01-25.192531.tgz
# ...
# where DIR is the name of the source directory.
#
# There is also a file named ``lastrun`` which is used for
# its last mod-time which is used to select files changed
# since the last backup.
PROG=${0##*/} # prog name: full-backup or incr-backup
destdir=/mnt/backups
now=`date +"%F-%H%M%S"`
monthdir=`date +%Y-%m`
dest=$destdir/$monthdir/
mkdir -p $dest # make sure the month directory exists
set -- "$#"
while (( $# > 0 )) ; do
dir="$1" ; shift ;
options='' # collect options
while [[ $# -gt 0 && "x$1" =~ x--* ]]; do # any options?
options="$options $1"
shift
done
basedir=`basename $dir`
fullfile=$dest/$basedir-full-$now.tgz
incrfile=$dest/$basedir-incr-$now.tgz
lastrun=$destdir/lastrun
case "$PROG" in
full*) archive="$fullfile" newer= kind=Full ;;
incr*) archive="$incrfile" newer="--newer $lastrun" kind=Incremental ;;
esac
cmd="tar cfz $archive $newer $options $dir"
echo "$kind backup starting at `date`"
echo ">> $cmd"
eval "$cmd"
echo "$kind backup done at `date`"
touch $lastrun # mark the end of the backup date/time
done
exit
EOF
(cd /usr/local/bin ; ln -s full-backup incr-backup )
chmod +x /usr/local/bin/full-backup
Once this script is configured and available, it can be scheduled with cron. See man cron. Use crontab -e to create and edit a crontab entry to invoke full-backup once a week (say), and another crontab entry to invoke incr-backup once a day. The following are four sample crontab entries (see man 5 crontab for details on syntax) for performing incremental and full backups, as well as removing old archives.
# run incremental backups on all user home dirs at 3:15 every day
15 3 * * * /usr/local/bin/incr-backup /Users
# run full backups every sunday, at 3:15
15 3 * * 7 /usr/local/bin/full-backup /Users
# run full backups on the entire system (but not the home dirs) every month
30 4 1 * * /usr/local/bin/full-backup / --exclude=/Users --exclude=/tmp --exclude=/var
# delete old backup files (more than 60 days old) once a month
15 3 1 * * find /mnt/backups -type f -mtime +60 -delete
Recovering from these backups is an exercise left for later.
Good luck.
I don't think it makes sense to have a cron-scheduled script to activate Bacula.
The standard way to schedule backups using Bacula is:
1) Install the Bacula file daemon on the machine you want to back up, and then
2) Configure your Bacula Director to schedule the backup
ad 1)
If the machine to back up runs Debian or Ubuntu, you can install the Bacula file daemon from the shell like this:
shell> apt-get install bacula-fd (bacula-fd stands for Bacula File Daemon)
If the machine to back up runs Windows, you need to download the Bacula file daemon and install it. You can download it here: http://sourceforge.net/projects/bacula/files/Win32_64/ (select the version that matches your Bacula server version)
ad 2)
You need to find the bacula-dir.conf file on your Bacula server (if you installed the Bacula Director on an Ubuntu machine, the path is /etc/bacula/bacula-dir.conf).
The schedule section of bacula-dir.conf is very flexible and therefore also somewhat complicated; here is an example:
Schedule {
Name = "MonthlyCycle"
Run = Level=Full on 1 at 2:05 # full backup on the 1st of every month at 2:05
Run = Level=Incremental on 2-31 at 2:05 # incremental backup on all other days
}
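Such a schedule only takes effect once it is referenced from a Job resource in the same bacula-dir.conf. A minimal sketch (the Client, FileSet, Storage and Pool names are assumptions and must match resources defined elsewhere in your configuration):
Job {
Name = "BackupMyClient"
Type = Backup
Client = myclient-fd
FileSet = "Full Set"
Schedule = "MonthlyCycle"
Storage = File
Pool = Default
Messages = Standard
}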
Note that a lot more configuration is necessary to run Bacula; here is a full tutorial on how to install, configure, back up and restore with Bacula: http://webmodelling.com/webbits/miscellaneous/bacula.aspx (disclaimer: I wrote the Bacula tutorial myself)

Notification when Cron job is NOT normal

I have a few cron jobs that run daily from my cPanel, which means that 99% of the time I receive daily emails saying that everything is OK - backup is OK, XML import is OK, etc.
But what I need is a notification when the result of the cron job is NOT OK - i.e. out of the ordinary. I have tried making rules in Outlook, but I can't make it work.
This is for example one of my crons:
30 2 * * * /usr/local/bin/php /home/xx/public_html/administrator/xx/xx/backup.php -profile=1 -description="Backup"
I understand that Bash has the means for this, but not how to use it in practice.
Anyone have good ideas?
Input appreciated.
You can write a wrapper around the job, and call the wrapper from cron.
Example:
#!/usr/bin/env ksh
#
# Wrapper around a backup script.
#
# Redirecting all output to a log file
# In this case to /home/xx/public_html/administrator/xx/xx/backup_job.log
#
# Lets log the start date of the job in a log file
#
echo "Starting backup job at `/bin/date`" >> /home/xx/public_html/administrator/xx/xx/backup_job.log
#
# start the backup
#
/usr/local/bin/php /home/xx/public_html/administrator/xx/xx/backup.php -profile=1 -description="Backup" >> /home/xx/public_html/administrator/xx/xx/backup_job.log 2>&1
#
# The backup job should have set a return value (0 or non-zero).
#
# If it is non-zero then there was an error. Lets send a warning mail about it
#
#
if [[ $? -ne 0 ]]; then
#
# There was an error: send a warning mail with the job log as the body
#
/bin/mail -s "Backup job FAILED" admin < /home/xx/public_html/administrator/xx/xx/backup_job.log
fi
#
# Exit with a 0 value. That should indicate that this job (the wrapper) worked fine.
# Usually this means cron does not have to send a mail. (But then, we already took care
# of warning mails on our own if there was an error.)
#
exit 0
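The cron entry then calls the wrapper instead of the backup script directly, for example (the wrapper path is an assumption; use wherever you saved it):
30 2 * * * /home/xx/public_html/administrator/xx/xx/backup_wrapper.sh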
