Notification when cron job is NOT normal

I have a few cron jobs that run daily from my cPanel, which means that 99% of the time I receive daily emails saying that everything is OK: backup is OK, XML import is OK, etc.
But what I need is a notification when the result of the cron is NOT OK - i.e. out of the ordinary. I have tried making rules in Outlook, but I can't make it work.
This is, for example, one of my cron jobs:
30 2 * * * /usr/local/bin/php /home/xx/public_html/administrator/xx/xx/backup.php -profile=1 -description="Backup"
I understand that Bash makes this possible, but not how to use it in practice.
Anyone have good ideas?
Input appreciated.

You can write a wrapper around the job, and call the wrapper from cron.
Example:
#!/usr/bin/env ksh
#
# Wrapper around a backup script.
#
# Redirect all output to a log file,
# in this case to /home/xx/public_html/administrator/xx/xx/backup_job.log
#
# Let's log the start date of the job in the log file:
#
echo "Starting backup job at `/bin/date`" >> /home/xx/public_html/administrator/xx/xx/backup_job.log
#
# Start the backup:
#
/home/xx/public_html/administrator/xx/xx/backup.php >> /home/xx/public_html/administrator/xx/xx/backup_job.log 2>&1
#
# The backup job should have set a return value (0 or non-zero).
# If it is non-zero, there was an error. Let's send a warning mail about it,
# feeding the log file to the mail body:
#
if [[ $? -ne 0 ]]; then
    /bin/mail -s "Backup job failed" admin < /home/xx/public_html/administrator/xx/xx/backup_job.log
fi
#
# Exit with a 0 value. That indicates that this job (the wrapper) worked fine.
# Usually this means cron does not have to send a mail. (But then, we already took
# care of warning mails on our own if there was an error.)
#
exit 0
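With the wrapper saved and made executable, the crontab entry calls it instead of backup.php directly (the wrapper path below is a placeholder following the question's layout):
30 2 * * * /home/xx/public_html/administrator/xx/xx/backup_wrapper.sh
Alternatively, if you can install moreutils on the host, its chronic(1) utility does the same in one word: it suppresses a command's output unless the command exits non-zero, so cron only sends mail on failure. This assumes backup.php actually returns a non-zero exit status when something goes wrong:
30 2 * * * chronic /usr/local/bin/php /home/xx/public_html/administrator/xx/xx/backup.php -profile=1 -description="Backup"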

Related

Is there a way to get the duration time for a running cronjob?

I want to monitor a long-running cron job.
You could do:
#!/bin/bash
echo "Starting job blablabla" >>/tmp/job.log
date >>/tmp/duration.log
# THE REST OF THE CODE FOR SCRIPT blablabla
echo "End of job blablabla" >>/tmp/duration.log
date >>/tmp/duration.log
Every execution of the script will add entries with start and end time
Or directly in the cron:
* * * * * /usr/bin/time -o /tmp/timing.txt blablabla.sh >/tmp/blablabla.out 2>/tmp/blablabla.out2
/usr/bin/time (the standalone GNU time program, not the shell built-in, which has no -o option) records the duration and resource usage of the command.
The -o option specifies which file to send the results into.
The > and 2> redirections are to keep logs of the output and errors. But that is not related to time itself, just a good idea all around with cron jobs.
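If you would rather measure the duration inside the script itself, here is a minimal sketch using date +%s (blablabla stands in for the job, as in the question):
#!/bin/bash
start=$(date +%s)   # seconds since the epoch at job start

# THE REST OF THE CODE FOR SCRIPT blablabla

end=$(date +%s)
echo "$(date): blablabla took $((end - start)) seconds" >> /tmp/duration.log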

Getting bad minute error from crontab despite seemingly accurate syntax

Not sure what's going on, as this cron job is pretty straightforward. I want my cron job to run every 5 minutes. Below is the script (memory.sh) in its entirety.
Below that is the schedule it's requested to run on via the crontab.
I've replicated the crontab and run crontab memory.sh both under my username and as root, but each time I get the same error:
"/opt/memory.sh":4: bad minute
errors in crontab file, can't install.
memory.sh
#!/bin/bash
now=`date +%Y-%m-%d.%H:%M:%S`
echo "$now" >>/opt/memory.out
ps aux --sort -rss >> /opt/memory.out
echo "$now" >> /opt/memory_free.out
free -m >> /opt/memory_free.out
crontab
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
*/5 * * * * /opt/memory.sh
crontab memory.sh installs a new crontab file, but memory.sh is not a crontab; it is a script referenced from the crontab file. You need to pass the actual crontab file to the crontab command, or edit the installed crontab with crontab -e to add the line you indicated.
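For example (cronfile.txt is just a hypothetical name for a file holding your schedule lines):
# put the schedule line in a file of its own...
echo '*/5 * * * * /opt/memory.sh' > cronfile.txt
# ...and install that file as your crontab
# (note: this replaces the entire installed crontab):
crontab cronfile.txt
# or, equivalently, edit the installed crontab in place:
crontab -e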

Managing log files created by cron jobs

I have a cron job that copies its log file daily to my home folder.
Every day it overwrites the existing file in the destination folder, which is expected. I want to preserve the logs from previous dates so that the next time it copies the file to the destination folder, it keeps the files from previous dates.
How do I do that?
The best way to manage cron logs is to have a wrapper around each job. The wrapper could do these things, at the minimum:
initialize environment
redirect stdout and stderr to log
run the job
perform checks to see if job succeeded or not
send notifications if necessary
clean up logs
Here is a bare bones version of a cron wrapper:
#!/bin/bash
log_dir=/tmp/cron_logs/$(date +'%Y%m%d')
mkdir -p "$log_dir" || { echo "Can't create log directory '$log_dir'"; exit 1; }
#
# we write to the same log each time
# this can be enhanced as per needs: one log per execution, one log per job per execution etc.
#
log_file=$log_dir/cron.log
#
# from here on, both stdout and stderr end up in the log file
# (stdout must be redirected first, then stderr duplicated onto it)
#
exec 1>>"$log_file" 2>&1
#
# Run the environment setup that is shared across all jobs.
# This can set up things like PATH etc.
#
# Note: it is not a good practice to source in .profile or .bashrc here
#
source /path/to/setup_env.sh
#
# run the job
#
echo "$(date): starting cron, command=[$*]"
"$#"
echo "$(date): cron ended, exit code is $?"
Your cron command line would look like:
/path/to/cron_wrapper command ...
Once this is in place, we can have another job called cron_log_cleaner which can remove older logs. It's not a bad idea to call the log cleaner from the cron wrapper itself, at the end.
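A minimal sketch of such a cleaner, assuming the dated directory layout above and a hypothetical retention of seven days:
#!/bin/bash
# cron_log_cleaner: delete dated log directories older than 7 days
# (the retention period is an assumption; adjust as needed)
find /tmp/cron_logs -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -r {} +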
And here is an example of running the wrapper itself:
# run the cron job from command line
cron_wrapper bash -c 'echo step 1; sleep 5; echo step 2; sleep 10'
# inspect the log
cat /tmp/cron_logs/20170120/cron.log
The log would contain this after running the wrapped cron job:
Fri Jan 20 04:35:10 UTC 2017: starting cron, command=[bash -c echo step 1; sleep 5; echo step 2; sleep 10]
step 1
step 2
Fri Jan 20 04:35:25 UTC 2017: cron ended, exit code is 0
Insert
`date +%F`
into your cp command, like this:
cp /path/src_file /path/dst_file_`date +%F`
so it will copy src_file to dst_file_2017-01-20
Update:
As @tripleee noticed, the % character must be escaped in crontab entries, so your cron job will look like this:
0 3 * * * cp /path/src_file /path/dst_file_`date +\%F`

RSnapshot reports errors using rsnapreport.pl: "NO STATS DATA"

I configured RSnapshot on a WD My Book Live (2TB) and it's working (at least that's what the logs say). I used the reporting tool rsnapreport.pl from /usr/share/doc/rsnapshot/examples/utils/rsnapreport.pl.gz to get human-readable mail reports about the crontab-triggered backup jobs.
While the backup jobs seem to work, the reports are obviously missing information, as you can see in this snippet:
SOURCE TOTAL FILES FILES TRANS TOTAL MB MB TRANS LIST GEN TIME FILE XFER TIME
--------------------------------------------------------------------------------------------------------------------
rsync://server:/vmail 13950 137 3687.81 20.31 0.052 seconds 0.000 seconds
ERRORS
/shares/rsnapshot/daily.0/ NO STATS DATA
The question now is:
Besides the error at the bottom, which is my first and major problem, the FILE XFER TIME is also 0 for all backup jobs (I guess the issues are correlated).
I followed all instructions (see below) - what am I missing?
So what did I do so far:
*) The NAS runs Debian Squeeze (incl. squeeze-backports), Kernel Version is 2.6.32, PPC Architecture.
*) rsync version 3.0.3-2 (preinstalled), with /etc/rsyncd.conf:
pid file=/var/run/rsyncd.pid
lock file=/var/run/rsync.lock
log file=/var/log/rsync.log
[rsync]
path=/shares/rsync
uid=root
gid=share
read only=no
list=yes
auth users=root
*) Installed rsnapshot 1.3.1-1 with /etc/rsnapshot.conf:
config_version 1.2
snapshot_root /shares/rsnapshot/
cmd_rm /bin/rm
cmd_rsync /usr/bin/rsync
cmd_logger /usr/bin/logger
interval daily 7
interval weekly 4
interval monthly 3
verbose 3
loglevel 3
logfile /var/log/rsnapshot.log
lockfile /var/run/rsnapshot.pid
rsync_long_args --delete --numeric-ids --relative --delete-excluded --stats
backup rsync://server:/vmail/ backupOfServer/vmail/
backup ...
backup ...
backup ...
*) unpacked the report script and followed instructions in the script (most of which you can see in the config above):
# this script prints a pretty report from rsnapshot output
# in the rsnapshot.conf you must set
# verbose >= 3
# and add --stats to rsync_long_args
# then set up crontab 'rsnapshot daily 2>&1 | rsnapreport.pl | mail -s "SUBJECT" backupadm@adm.com'
# don't forget the 2>&1 or your errors will be lost to stderr
*) and set up cron.d/rsnapshot:
MAILTO="user1#foo,user2#foo"
30 3 * * * root /usr/bin/rsnapshot daily 2>&1 | /root/rsnapreport.pl
0 3 * * 1 root /usr/bin/rsnapshot weekly 2>&1 | /root/rsnapreport.pl
30 2 1 * * root /usr/bin/rsnapshot monthly 2>&1 | /root/rsnapreport.pl
If you need any detailed or additional information, don't hesitate. We are happy to have daily reports of the backup at all, just the errors at the bottom are making us nervous.
Best Regards and thanks in advance,
Peter
The reason for this error was that I had not uncommented the cmd_cp parameter. RSnapshot therefore used its built-in copy mechanism, which uses rsync.
This call of rsync was echoed to the output. The report script scans the output for calls to rsync and looks for transfer statistics, but the initial "copy" command does not produce such stats - hence the error saying "NO STATS" for the source /daily.0.
The solution is to read the configuration file and follow the instructions:
# LINUX USERS: Be sure to uncomment "cmd_cp". This gives you extra features.
# EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
#
# See the README file or the man page for more details.
#
#cmd_cp /bin/cp
Uncommenting the last line fixes the error... RTFM ;)
The "NO STATS DATA" error is also reported if you miss out on the:
rsync_long_args --stats
The "NO STATS DATA" error is also reported if you backing up something containing "rsync" in its path like /etc/default/rsync.
For example in this case the command rsnapshot daily 2>&1 | /bu/script/rsnapreport.pl | mail -s "[BU Report]date" me#example.com will return the following errors:
Use of uninitialized value $source in hash element at
/bu/script/rsnapreport.pl line 95, <> line 3991. Use of uninitialized
value $source in hash element at /bu/script/rsnapreport.pl line 96, <>
line 3991. ...
This is due to the rsnapreport.pl script, which parses stats info from the rsync output by looking for the string "rsync" in it.
To resolve this problem, simply add to your /etc/rsnapshot.conf a line excluding the problematic path found in the rsync output.
For example, if you don't need to back up /etc/default/rsync:
exclude etc/default/rsync
If you need to back up files whose path contains "rsync", you have to modify the rsnapreport.pl script.

Test a weekly cron job [closed]

I have a #!/bin/bash script in the /etc/cron.weekly directory.
Is there a way to test that it works? I can't wait a week.
I am on Debian 6 with root access.
Just do what cron does, run the following as root:
run-parts -v /etc/cron.weekly
... or the next one if you receive the "Not a directory: -v" error:
run-parts /etc/cron.weekly -v
Option -v prints the script names before they are run.
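If you only want to see which scripts would be run, without executing anything, run-parts also has a --test flag. And to approximate cron's sparse environment when invoking a single script by hand, you can empty the environment with env -i (a rough approximation, not an exact copy of cron's environment; your-script is a placeholder):
# list what would be run, without running it:
run-parts --test /etc/cron.weekly

# run one script with an almost-empty, cron-like environment:
env -i /bin/sh -c /etc/cron.weekly/your-script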
A wee bit beyond the scope of your question... but here's what I do.
The "how do I test a cron job?" question is closely connected to "how do I test scripts that run in non-interactive contexts launched by other programs?" In cron, the trigger is some time condition, but lots of other *nix facilities launch scripts or script fragments in non-interactive ways, and often the conditions in which those scripts run contain something unexpected and cause breakage until the bugs are sorted out. (See also: https://stackoverflow.com/a/17805088/237059 )
A general approach to this problem is helpful to have.
One of my favorite techniques is to use a script I wrote called 'crontest'. It launches the target command inside a GNU screen session from within cron, so that you can attach with a separate terminal to see what's going on, interact with the script, even use a debugger.
To set this up, you would use "all stars" in your crontab entry, and specify crontest as the first command on the command line, e.g.:
* * * * * crontest /command/to/be/tested --param1 --param2
So now cron will run your command every minute, but crontest will ensure that only one instance runs at a time. If the command takes time to run, you can do a "screen -x" to attach and watch it run. If the command is a script, you can put a "read" command at the top to make it stop and wait for the screen attachment to complete (hit Enter after attaching).
If your command is a bash script, you can do this instead:
* * * * * crontest --bashdb /command/to/be/tested --param1 --param2
Now, if you attach with "screen -x", you'll be facing an interactive bashdb session, and you can step through the code, examine variables, etc.
#!/bin/bash
# crontest
# See https://github.com/Stabledog/crontest for canonical source.

# Test wrapper for cron tasks. The suggested use is:
#
#   1. When adding your cron job, use all 5 stars to make it run every minute
#   2. Wrap the command in crontest
#
# Example:
#
#   $ crontab -e
#   * * * * * /usr/local/bin/crontest $HOME/bin/my-new-script --myparams
#
# Now, cron will run your job every minute, but crontest will only allow one
# instance to run at a time.
#
# crontest always wraps the command in "screen -d -m" if possible, so you can
# use "screen -x" to attach and interact with the job.
#
# If --bashdb is used, the command line will be passed to bashdb. Thus you
# can attach with "screen -x" and debug the remaining command in context.
#
# NOTES:
#   - crontest can be used in other contexts, it doesn't have to be a cron job.
#     Any place where commands are invoked without an interactive terminal and
#     may need to be debugged.
#
#   - crontest writes its own stuff to /tmp/crontest.log
#
#   - If GNU screen isn't available, neither is --bashdb
#

crontestLog=/tmp/crontest.log
lockfile=$( if [[ -d /var/lock ]]; then echo /var/lock/crontest.lock; else echo /tmp/crontest.lock; fi )
useBashdb=false
useScreen=$( if which screen &>/dev/null; then echo true; else echo false; fi )
innerArgs="$@"
screenBin=$(which screen 2>/dev/null)

function errExit {
    echo "[-err-] $@" | tee -a $crontestLog >&2
    exit 1
}

function log {
    echo "[-stat-] $@" >> $crontestLog
}

function parseArgs {
    while [[ ! -z $1 ]]; do
        case $1 in
            --bashdb)
                if ! $useScreen; then
                    errExit "--bashdb invalid in crontest because GNU screen not installed"
                fi
                if ! which bashdb &>/dev/null; then
                    errExit "--bashdb invalid in crontest: no bashdb on the PATH"
                fi
                useBashdb=true
                ;;
            --)
                shift
                innerArgs="$@"
                return 0
                ;;
            *)
                innerArgs="$@"
                return 0
                ;;
        esac
        shift
    done
}

if [[ -z $sourceMe ]]; then
    # Lock the lockfile (no, we do not wish to follow the standard
    # advice of wrapping this in a subshell!)
    exec 9>$lockfile
    flock -n 9 || exit 1

    # Zap any old log data:
    [[ -f $crontestLog ]] && rm -f $crontestLog

    parseArgs "$@"

    log "crontest starting at $(date)"
    log "Raw command line: $@"
    log "Inner args: $innerArgs"
    log "screenBin: $screenBin"
    log "useBashdb: $( if $useBashdb; then echo YES; else echo no; fi )"
    log "useScreen: $( if $useScreen; then echo YES; else echo no; fi )"

    # We're building a command line.
    cmdline=""

    # If screen is available, put the task inside a pseudo-terminal
    # owned by screen. That allows the developer to do a "screen -x" to
    # interact with the running command:
    if $useScreen; then
        cmdline="$screenBin -D -m "
    fi

    # If bashdb is installed and --bashdb is specified on the command line,
    # pass the command to bashdb. This allows the developer to do a "screen -x" to
    # interactively debug a bash shell script:
    if $useBashdb; then
        cmdline="$cmdline $(which bashdb) "
    fi

    # Finally, append the target command and params:
    cmdline="$cmdline $innerArgs"
    log "cmdline: $cmdline"

    # And run the whole schlock:
    $cmdline

    res=$?
    log "Command result: $res"
    echo "[-result-] $(if [[ $res -eq 0 ]]; then echo ok; else echo fail; fi)" >> $crontestLog

    # Release the lock:
    exec 9>&-
fi
After messing about with some stuff in cron which wasn't instantly compatible I found that the following approach was nice for debugging:
crontab -e
* * * * * /path/to/prog var1 var2 >>/tmp/cron_debug_log.log 2>&1
This will run the task once a minute and you can simply look in the /tmp/cron_debug_log.log file to figure out what is going on.
It is not exactly the "fire job" you might be looking for, but this helped me a lot when debugging a script that didn't work in cron at first.
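Along the same lines, many "works in my shell, fails in cron" problems come down to the environment, so it can help to capture what cron actually provides and compare it with your login shell (a one-off debugging entry, not something to leave installed):
* * * * * env > /tmp/cron_env.txt
Diff /tmp/cron_env.txt against the output of env in your terminal; PATH is the usual culprit.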
I'd use a lock file and then set the cron job to run every minute. (use crontab -e and * * * * * /path/to/job) That way you can just keep editing the files and each minute they'll be tested out. Additionally, you can stop the cronjob by just touching the lock file.
#!/bin/sh
if [ -e /tmp/cronlock ]
then
echo "cronjob locked"
exit 1
fi
touch /tmp/cronlock
<...do your regular cron here ....>
rm -f /tmp/cronlock
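As a side note, if your system has flock(1) from util-linux, it can replace the hand-rolled lock file above and will not leave a stale lock behind if the job crashes before the rm runs:
* * * * * flock -n /tmp/cronlock /path/to/job
With -n, flock exits immediately instead of waiting if another instance still holds the lock.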
What about putting it into cron.hourly, waiting until the next run of hourly cron jobs, then removing it? That would run it once within an hour, and in the cron environment. You can also run ./your_script, but that won't have the same environment as under cron.
Aside from that you can also use:
http://pypi.python.org/pypi/cronwrap
to wrap up your cron to send you an email upon success or failure.
None of these answers fit my specific situation, which was that I wanted to run one specific cron job, just once, and run it immediately.
I'm on an Ubuntu server, and I use cPanel to set up my cron jobs.
I simply wrote down my current settings, and then edited them to be one minute from now. When I fixed another bug, I just edited it again to one minute from now. And when I was all done, I just reset the settings back to how they were before.
Example: It's 4:34pm right now, so I put 35 16 * * *, for it to run at 16:35.
It worked like a charm, and the most I ever had to wait was a little less than one minute.
I thought this was a better option than some of the other answers because I didn't want to run all of my weekly crons, and I didn't want the job to run every minute. It takes me a few minutes to fix whatever the issues were before I'm ready to test it again. Hopefully this helps someone.
The solution I am using is as follows:
Edit the crontab (crontab -e) to run the job as frequently as needed (every 1 minute or 5 minutes).
Modify the shell script that cron executes so that it prints output into some file (e.g. echo "Working fine" >> output.txt).
Check the output.txt file using tail -f output.txt, which prints the latest additions to the file; this way you can track the execution of the script.
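Put together, a minimal sketch of that loop (paths are placeholders):
# crontab -e: run the script every minute while testing
* * * * * /path/to/script.sh

# inside script.sh, leave a trace of each run:
echo "Working fine at $(date)" >> /tmp/output.txt

# in a second terminal, watch the runs arrive:
tail -f /tmp/output.txt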
I normally test by running the job i created like this:
It is easier to use two terminals to do this.
Run the job in the first terminal:
# ./jobname.sh
In the second terminal, follow the cron log:
# tailf /var/log/cron
This allows me to see the cron log entries arrive in real time. You can also review the log after the run, but I prefer watching in real time.
Here is an example of a simple cron job. Running a yum update...
#!/bin/bash
YUM=/usr/bin/yum
$YUM -y -R 120 -d 0 -e 0 update yum
$YUM -y -R 10 -e 0 -d 0 update
Here is the breakdown:
The first command updates yum itself; the second applies system updates.
-R 120 : Sets the maximum amount of time yum will wait before performing a command
-e 0 : Sets the error level to 0 (range 0 - 10). 0 means print only critical errors about which you must be told.
-d 0 : Sets the debugging level to 0 - turns up or down the amount of things that are printed. (range: 0 - 10).
-y : Assume yes; assume that the answer to any question which would be asked is yes
After I built the cron job, I ran the command below to make my job executable:
#chmod +x /etc/cron.daily/jobname.sh
Hope this helps,
Dorlack
sudo run-parts --test /var/spool/cron/crontabs/
Files in that crontabs/ directory need to be executable by their owner (octal 700).
Source: man cron and NNRooth's answer.
I'm using Webmin because it's a productivity gem for someone who finds command-line administration a bit daunting and impenetrable.
There is a "Save and Run Now" button in the "System > Scheduled Cron Jobs > Edit Cron Job" web interface.
It displays the output of the command and is exactly what I needed.
