Managing log files created by cron jobs - linux

I have a cron job that copies its log file daily to my home folder.
Every day it overwrites the existing file in the destination folder, which is expected. I want to preserve the logs from previous dates, so that the next time it copies the file to the destination folder, the files from previous dates are kept.
How do I do that?

The best way to manage cron logs is to have a wrapper around each job. At a minimum, the wrapper should:
initialize environment
redirect stdout and stderr to log
run the job
perform checks to see if job succeeded or not
send notifications if necessary
clean up logs
Here is a bare-bones version of a cron wrapper:
#!/bin/bash
log_dir=/tmp/cron_logs/$(date +'%Y%m%d')
mkdir -p "$log_dir" || { echo "Can't create log directory '$log_dir'"; exit 1; }
#
# we write to the same log each time
# this can be enhanced as per needs: one log per execution, one log per job per execution etc.
#
log_file=$log_dir/cron.log
#
# from here on, both stdout and stderr end up in the log file
#
exec 1>>"$log_file" 2>&1
#
# Run the environment setup that is shared across all jobs.
# This can set up things like PATH etc.
#
# Note: it is not a good practice to source in .profile or .bashrc here
#
source /path/to/setup_env.sh
#
# run the job
#
echo "$(date): starting cron, command=[$*]"
"$@"
rc=$?
echo "$(date): cron ended, exit code is $rc"
Your cron command line would look like:
/path/to/cron_wrapper command ...
Once this is in place, we can have another job called cron_log_cleaner which removes older logs. It's not a bad idea to call the log cleaner from the cron wrapper itself, at the end; a sketch of such a cleaner follows the example below.
An example:
# run the wrapped job from the command line
# (bash -c is needed so the shell snippet is executed as a single command)
cron_wrapper bash -c 'echo step 1; sleep 5; echo step 2; sleep 10'
# inspect the log
cat /tmp/cron_logs/20170120/cron.log
The log would contain this after running the wrapped cron job:
Fri Jan 20 04:35:10 UTC 2017: starting cron, command=[bash -c echo step 1; sleep 5; echo step 2; sleep 10]
step 1
step 2
Fri Jan 20 04:35:25 UTC 2017: cron ended, exit code is 0
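As for the cron_log_cleaner, a minimal version could just be a find over the log tree; a sketch, assuming the /tmp/cron_logs layout above and a 7-day retention (both adjustable):
#!/bin/bash
# cron_log_cleaner: remove day-stamped log directories older than 7 days
log_root=/tmp/cron_logs
find "$log_root" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -r {} +
Called from the end of cron_wrapper, this keeps the log area bounded without needing a separate schedule.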

Insert
`date +%F`
into your cp command, like this:
cp /path/src_file /path/dst_file_`date +%F`
so it will copy src_file to dst_file_2017-01-20.
Update:
As @tripleee noticed, the % character must be escaped in crontab entries, so your cron job will look like this:
0 3 * * * cp /path/src_file /path/dst_file_`date +\%F`
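If the destination folder should not fill up indefinitely, the dated copies can be pruned the same way; a sketch, assuming the dst_file_ naming above and a 30-day retention:
# hypothetical companion entry: delete dated copies older than 30 days
0 4 * * * find /path -name 'dst_file_*' -mtime +30 -delete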

Related

How to check last running time of any script in linux instance

I want to check whether my scripts ran last night (or find their last run timestamp) on a Linux instance, based on the scripts' crontab schedule.
So how do I get a script's last run time on a Linux instance?
I would suggest recording the start time at the beginning of the script and the end time at the end of the script:
# Start Time Entry
echo "Start : $(date +%T)" > exec.log
start=$(date +%s)
# CALL YOUR SCRIPT HERE
# End Time Entry
end=$(date +%s)
echo "End : $(date +%T)" >> exec.log
# Get the Runtime in seconds
runtime=$((end - start))
echo "Runtime: $runtime seconds" >> exec.log
If there is any better way, I am also curious to see and implement too.
Alternatively, grep for cron in your "messages" or "syslog" file:
grep -i cron /var/log/messages
Or create a separate log file for cron via rsyslog: edit /etc/rsyslog.conf and uncomment the cron line (change #cron to cron). You will then find the logs in /var/log/cron.
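On Debian/Ubuntu, the line to uncomment typically looks like this (the file may be /etc/rsyslog.conf or /etc/rsyslog.d/50-default.conf depending on the distribution):
cron.*                          /var/log/cron.log
Restart rsyslog afterwards, e.g. with sudo systemctl restart rsyslog, for the change to take effect.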

Crontab not launching script

I'm trying to run the following script through crontab every day at 12:
#!/bin/sh
mount -t nfs 10.1.25.7:gadal /mnt/NAS_DFG
echo >> ~/Documents/Crontab_logs/logs.txt
date >> ~/Documents/Crontab_logs/logs.txt
rsync -ar /home /mnt/NAS_DFG/ >> ~/Documents/Crontab_logs/logs.txt 2>&1
umount /mnt/NAS_DFG
As it needs to run with sudo, I added the following line to 'sudo crontab' such that I have:
someone#something:~$ sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 12 * * * ~/Documents/Crontab_logs/Making_save.sh
But it does not run. I mention that just executing the script through:
sudo ~/Documents/Crontab_logs/Making_save.sh
works well, except that no output of the rsync command is written to the log file.
Any ideas what's going wrong? I think I checked the main sources of mistakes, i.e. using a shell, leaving an empty line at the end, etc.
sudo crontab creates a job which runs out of the crontab of root (if you manage to configure it correctly; note that the system-wide /etc/crontab uses a different syntax, with an extra user field). When cron runs the job, $HOME (and ~, if you use a shell or scripting language with tilde expansion) will refer to the home of root, etc.
You should probably simply add
0 12 * * * sudo ./Documents/Crontab_logs/Making_save.sh
to your own crontab instead.
Notice that crontab does not have tilde expansion at all (but we can rely on the fact that cron will always run out of your home directory).
Though this will still have issues: if the script runs under sudo and creates new files, those files will be owned by root and cannot be changed by your regular user account. A better solution still is to run only the actual mount and umount commands with sudo and minimize the amount of code that runs on the privileged account, i.e. remove the sudo from your crontab and instead add it within the script, on the individual commands that require it.
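Note that sudo inside cron has no terminal on which to ask for a password, so this usually also requires a passwordless sudoers rule for the commands involved. A hypothetical example, to be edited with visudo (user name and command paths are assumptions):
# /etc/sudoers.d/nas_backup (hypothetical)
someone ALL=(root) NOPASSWD: /bin/mount, /bin/umount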

Getting bad minute error from crontab despite seemingly accurate syntax

Not sure what's going on, as this cron job is pretty straightforward. I want my cron job to run every 5 minutes. Below is the script (memory.sh) in its entirety.
Below that is the schedule it's requested to run on via the crontab.
I've replicated the crontab and run crontab memory.sh, both under my username and as root, but each time I get the same error:
"/opt/memory.sh":4: bad minute
errors in crontab file, can't install.
memory.sh
#!/bin/bash
now=$(date +%Y-%m-%d.%H:%M:%S)
echo "$now" >> /opt/memory.out
ps aux --sort -rss >> /opt/memory.out
echo "$now" >> /opt/memory_free.out
free -m >> /opt/memory_free.out
crontab
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
*/5 * * * * /opt/memory.sh
crontab memory.sh installs a new crontab file. memory.sh is not a crontab; it is a script referenced from the crontab file. You need to pass the crontab file itself to the crontab command, or edit the existing crontab with crontab -e to add the line you indicated.
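For example, assuming the schedule line is kept in a file of its own:
# put only crontab lines in the file, then install it as your crontab
echo '*/5 * * * * /opt/memory.sh' > /opt/memory.cron
crontab /opt/memory.cron
crontab -l    # verify the installed entries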

Linux bash shell script output is different from cronjob vs manually running the script

I wrote a Linux bash shell script which works fine, except that its output when run manually differs from its output when run from a cron job.
The particular command is lftp:
lftp -e "lcd $outgoingpathlocal;mput -O $incomingpathremote *.CSV;exit" -u $FTPUSERNAME,$FTPPASSWORD $FTPSERVER >> ${SCRIPTLOGFILE} 2>&1
When I run the script manually, ${SCRIPTLOGFILE} contains a lot of info, such as how many files/bytes/etc. were transferred. But when I run the same script from a cron job, there is no output unless there was an error (such as could not connect). I have tried various terminal output configurations, but none work for this lftp command. Suggestions?
It's worth reading this:
crontab PATH and USER
In particular, cron won't set the same environment variables you're used to in an interactive shell.
You might want to wrap your entire cron job in a script; then you can, for example, temporarily add some code like export >> scriptenvironment.txt and see what the difference is between the cron-invoked script and the interactively invoked script.
Try man 5 crontab for details.
Once you know which environment variables you need for your script to run, you can set them in the crontab as necessary, or source a setup file at the start of your own script.
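A throwaway wrapper for that comparison might look like this (script names and paths are assumptions):
#!/bin/sh
# dump the environment cron provides, for comparison with an interactive shell
export > /tmp/cron_environment.txt
/path/to/your_lftp_script.sh
Run it once from cron, rename the dump, run it again by hand, and diff the two files.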
EXAMPLE CRON FILE
# use /bin/sh to run commands, overriding the default set by cron
SHELL=/bin/sh
# mail any output to `paul', no matter whose crontab this is
MAILTO=paul
#
# run five minutes after midnight, every day
5 0 * * * $HOME/bin/daily.job >> $HOME/tmp/out 2>&1
# run at 2:15pm on the first of every month -- output mailed to paul
15 14 1 * * $HOME/bin/monthly
# run at 10 pm on weekdays, annoy Joe
0 22 * * 1-5 mail -s "It's 10pm" joe%Joe,%%Where are your kids?%
23 0-23/2 * * * echo "run 23 minutes after midn, 2am, 4am ..., everyday"
5 4 * * sun echo "run at 5 after 4 every sunday"
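For the lftp case specifically, the usual fix is then to set the variables the script needs at the top of your own crontab; the values here are assumptions:
PATH=/usr/local/bin:/usr/bin:/bin
0 3 * * * /path/to/your_lftp_script.sh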

Notification when Cron job is NOT normal

I have a few cron jobs that run daily from my cPanel, which means that 99% of the time I receive daily emails saying that everything is OK: backup is OK, XML import is OK, etc.
But what I need is a notification when the result of a cron job is NOT OK, i.e. out of the ordinary. I have tried making rules in Outlook, but I can't make it work.
This is for example one of my crons:
30 2 * * * /usr/local/bin/php /home/xx/public_html/administrator/xx/xx/backup.php -profile=1 -description="Backup"
I understand that Bash makes this possible, but not how to use it in practice.
Anyone have good ideas?
Input appreciated.
You can write a wrapper around the job, and call the wrapper from cron.
Example:
#!/usr/bin/env ksh
#
# Wrapper around a backup script.
#
# Redirecting all output to a log file
# In this case to /home/xx/public_html/administrator/xx/xx/backup_job.log
#
# Let's log the start date of the job in a log file
#
echo "Starting backup job at `/bin/date`" >> /home/xx/public_html/administrator/xx/xx/backup_job.log
#
# start the backup
#
/usr/local/bin/php /home/xx/public_html/administrator/xx/xx/backup.php >> /home/xx/public_html/administrator/xx/xx/backup_job.log 2>&1
#
# The backup job should have set a return value (0 or non-zero).
#
# If it is non-zero then there was an error. Let's send a warning mail about it.
#
if [[ $? -ne 0 ]]; then
    /bin/mail -s "Backup job failed" admin < /home/xx/public_html/administrator/xx/xx/backup_job.log
fi
#
# Exit with a 0 value. That should indicate that this job (the wrapper) worked fine.
# Usually this means cron does not have to send a mail. (But then, we already took care
# of warning mails on our own if there was an error.)
#
exit 0
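With the wrapper saved as, say, backup_wrapper.sh (name assumed) and made executable, the crontab entry becomes:
30 2 * * * /home/xx/public_html/administrator/xx/xx/backup_wrapper.sh
Because the wrapper logs to a file and exits 0, cron stays quiet on routine runs, while the wrapper mails you only when the backup fails.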
