Redirect stderr with date to log file from Cron - linux

A bash script is run from cron, stderr is redirected to a logfile, this all works fine.
The code is:
*/10 5-22 * * * /opt/scripts/sql_fetch 2>> /opt/scripts/logfile.txt
I want to prepend the date to every line in the log file, this does not work, the code is:
*/10 5-22 * * * /opt/scripts/sql_fetch 2>> ( /opt/scripts/predate.sh >> /opt/scripts/logfile.txt )
The predate.sh script looks as follows:
#!/bin/bash
while read line ; do
echo "$(date): ${line}"
done
So the second bit of code doesn't work, could someone shed some light?
Thanks.

I have a small script cronlog.sh to do this. The script code
#!/bin/sh
echo "[`date`] Start executing $1"
"$@" 2>&1 | sed -e "s/\(.*\)/[`date`] \1/"
echo "[`date`] End executing $1"
Then you could do
cronlog.sh /opt/scripts/sql_fetch >> your_log_file
Example result
cronlog.sh echo 'hello world!'
[Mon Aug 22 04:46:03 CDT 2011] Start executing echo
[Mon Aug 22 04:46:03 CDT 2011] hello world!
[Mon Aug 22 04:46:03 CDT 2011] End executing echo
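In the crontab from the question, that would look something like this (assuming cronlog.sh is saved alongside the other scripts and made executable):
*/10 5-22 * * * /opt/scripts/cronlog.sh /opt/scripts/sql_fetch >> /opt/scripts/logfile.txt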

*/10 5-22 * * * /opt/scripts/sql_fetch 2>&1 | /opt/scripts/predate.sh >> /opt/scripts/logfile.txt
should do exactly what you want.
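If only stderr should end up in the log (to match the original 2>> redirection), stdout can be discarded before the pipe; a sketch using the same paths:
*/10 5-22 * * * /opt/scripts/sql_fetch 2>&1 >/dev/null | /opt/scripts/predate.sh >> /opt/scripts/logfile.txt
Here 2>&1 duplicates stderr onto the pipe first, and >/dev/null then silences only stdout.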

Related

Creating script to report system suspend or awake is not running?

Here is the code that should output to a file when the system goes into SUSPEND or AWAKES:
(this code is in /etc/pm/sleep.d)
(I also had to make the file executable: sudo chmod +x sleep_mode)
(when running it from the command line, "suspend script" is written to the file,
but when I suspend or awaken the computer, nothing is written to the file)
(Ubuntu 16.04 LTS)
#!/bin/bash
# general entry
echo "suspend script"
echo "%suspend script" >> /tmp/suspend_time.txt
date +%s >> /tmp/suspend_time.txt
case "$1" in
    suspend)
        # executed on suspend
        echo "%system_suspend" >> /tmp/suspend_time.txt
        date +%s >> /tmp/suspend_time.txt
        ;;
    resume)
        # executed on resume
        echo "%system_resume" >> /tmp/suspend_time.txt
        date +%s >> /tmp/suspend_time.txt
        ;;
    *)
        ;;
esac
You don't say what distribution you are running, but if you are running the systemd daemon, try putting it in /lib/systemd/system-sleep instead (note that the arguments are different).
Script:
[eje@irenaeus ~]$ sudo cat /lib/systemd/system-sleep/95test
[sudo] password for eje:
#!/bin/bash
echo $(date) "$@" >> /tmp/args
Output:
Sun Dec 20 02:45:51 PM EST 2020 pre suspend
Sun Dec 20 02:45:59 PM EST 2020 post suspend
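For reference, systemd calls scripts in /lib/systemd/system-sleep with two arguments: $1 is "pre" or "post", and $2 names the operation (suspend, hibernate, ...). A sketch of the original logger adapted to that convention (the file name suspend_time is just an example) could be:
#!/bin/bash
# $1 is "pre" (going to sleep) or "post" (waking up); $2 is e.g. "suspend"
case "$1" in
    pre)
        echo "%system_suspend" >> /tmp/suspend_time.txt
        date +%s >> /tmp/suspend_time.txt
        ;;
    post)
        echo "%system_resume" >> /tmp/suspend_time.txt
        date +%s >> /tmp/suspend_time.txt
        ;;
esac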

script to log the completion of an LSF (bsub) job

I have a script called by cron to run an LSF job.
I would like to be told when that job is submitted and when it completes. The-Powers-That-Be have decided to disable email notifications. So I am writing this script to output the relevant information to a logfile.
It almost works. Here is some code:
crontab:
00 12 * * * my_script.sh my_job.bsub
my_job.bsub:
#!/bin/bash
#BSUB -q my_queue
#BSUB -W 06:00
echo "I am a long process"
sleep 60
my_script.sh:
#!/bin/sh
BSUB_SCRIPT=$1
# run bsub_script (and get the LSF job id while you're at it)...
JOB_ID=`bsub < $BSUB_SCRIPT | awk -F[\<,\>] '{print $2}'`
# log that job was submitted...
echo "`date '+%Y-%m-%d %T'` submitted '$BSUB_SCRIPT' [$JOB_ID]" >> $HOME/my_logfile.txt
# and log when job completes...
bsub -w "ended($JOB_ID)" << EOF
#!/bin/bash
#BSUB -q my_queue
#BSUB -W 00:30
echo "`date '+%Y-%m-%d %T'` completed '$BSUB_SCRIPT' [$JOB_ID]" >> $HOME/my_logfile.txt
EOF
(I found this answer helpful in figuring out how to submit a job that waits until an earlier one has completed.)
The issue is that the 2nd call to date, in the heredoc, gets evaluated immediately, so I wind up with a logfile that looks like this:
my_logfile.txt:
2018-01-30 13:15:14 submitted 'my_job.bsub' [1234567]
2018-01-30 13:15:14 completed 'my_job.bsub' [1234567]
Notice how the times are exactly the same.
How can I ensure that evaluation of the content of the heredoc is deferred until the LSF job runs?
The date command in the heredoc is being expanded before being passed to bsub. You need to quote the EOF in your heredoc expression or escape the date command. See the answer to this question:
How does "cat << EOF" work in bash?
In particular:
The format of here-documents is:
<<[-]word
here-document
delimiter
...
If word is unquoted, all lines of the here-document are
subjected to parameter expansion, command substitution, and arithmetic
expansion.
So, for example when I run
$ cat << EOF
> echo `date`
> EOF
The output is
echo Tue Jan 30 11:57:32 EST 2018
Note that the date command is expanded, which is what's happening in your script. However, if I quote the delimiter in the heredoc:
$ cat << "EOF"
> echo `date`
> EOF
You get the unexpanded output you want:
echo `date`
Similarly, escaping date would preserve the other variables you want to expand:
$ cat << EOF
> echo \$(date)
> EOF
Output:
echo $(date)
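Applied to the script in the question, escaping only the date substitution inside the heredoc defers the timestamp until the follow-up job runs, while $BSUB_SCRIPT, $JOB_ID and $HOME are still expanded at submission time. A sketch of just the final bsub call:
# log completion; \$(...) is expanded when the LSF job runs, not at submission
bsub -w "ended($JOB_ID)" << EOF
#!/bin/bash
#BSUB -q my_queue
#BSUB -W 00:30
echo "\$(date '+%Y-%m-%d %T') completed '$BSUB_SCRIPT' [$JOB_ID]" >> $HOME/my_logfile.txt
EOF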

How to add a crontab job to crontab using a bash script?

I tried the below command and crontab stopped running any jobs:
echo "@reboot /bin/echo 'test' > /home/user/test.sh" | crontab -
What is the correct way to script adding a job to crontab in linux?
I suggest you read Cron and Crontab usage and examples.
And you can run this:
➜ ( printf -- '0 4 8-14 * * test $(date +\%u) -eq 7 && echo "2nd Sunday"' ) | crontab
➜ crontab -l
0 4 8-14 * * test $(date +\%u) -eq 7 && echo "2nd Sunday"
Or
#!/bin/bash
cronjob="* * * * * /path/to/command"
(crontab -u userhere -l; echo "$cronjob" ) | crontab -u userhere -
Hope this helps.
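If the script may run more than once, a slightly more defensive variant (a sketch, using the same userhere placeholder) avoids installing the same line twice:
#!/bin/bash
cronjob="* * * * * /path/to/command"
# drop any identical line from the current crontab, then re-add exactly one copy
( crontab -u userhere -l 2>/dev/null | grep -F -v "$cronjob"; echo "$cronjob" ) | crontab -u userhere -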
Late answer, but on CentOS I create a new cronjob (for root, change user as needed) from a bash script using:
echo "@reboot command..." >> /var/spool/cron/root
>> appends to the existing cron file, or creates a new cron file and adds the job if it doesn't exist yet.
I'm not sure about this, but try this one:
echo "* * * * * whatever" > /etc/crontabs/root
Then check crontab -e; you will see your command there.
For those using the Alpine distribution, do not forget to start "crond" so your cron jobs actually run.

bash script not running as expected from cron vs. shell.

I have a script I got from this website and modified to suit my needs. Original post: Linux Script to check if process is running & act on the result
When running the script from cron, it always creates a new process. If I run it from a shell, it runs normally. Can anyone help me debug the issue?
Script
[root@server2 ~]# cat /root/full-migrate.sh
#!/bin/bash
case "$(pidof perl | wc -w)" in
    0) echo "Restarting iu-maildirtoimap: $(date)" >> /var/log/iu-maildirtoimap.txt
       /usr/local/bin/iu-maildirtoimap -i currentuser.txt -D imap.gmail.com:993 -d -n7 -I&
       ;;
    1) echo "Everything seems okay: $(date)" >> /var/log/iu-maildirtoimap.txt
       ;;
    *) echo "Removed double iu-maildirtoimap: $(date)" >> /var/log/iu-maildirtoimap.txt
       kill -9 $(pidof perl | awk '{print $1}')
       ;;
esac
crontab job
[root@server2 ~]# crontab -l
*/1 * * * * /bin/bash /root/full-migrate.sh
From the logfile:
Removed double iu-maildirtoimap: Tue Dec 30 02:32:37 GMT 2014
Removed double iu-maildirtoimap: Tue Dec 30 02:32:38 GMT 2014
Removed double iu-maildirtoimap: Tue Dec 30 02:32:39 GMT 2014
Everything seems okay: Tue Dec 30 02:32:39 GMT 2014
Restarting iu-maildirtoimap: Tue Dec 30 02:33:01 GMT 2014
Restarting iu-maildirtoimap: Tue Dec 30 02:34:01 GMT 2014
Restarting iu-maildirtoimap: Tue Dec 30 02:35:01 GMT 2014
The first 4 entries are me manually running "/bin/bash /root/full-migrate.sh"
The last 3 are from the crontab.
Any suggestions on how to debug this issue ?
At the time of writing:
[root@server2 ~]# $(pidof perl | wc -w)
bash: 13: command not found
[root@server2 ~]# $(pidof perl | awk '{print $1}')
bash: 26370: command not found
Your test from the command line is not valid, because $( ... ) takes the output (the process count or the process id) and tries to execute it as a command, which is why you get "command not found".
From the command line you will need to test this way:
$ pidof perl | wc -w
without the $()
The issue you are most likely having is that cron cannot find pidof in the path. So you will need to figure out where pidof resides on your system:
$ which pidof
and then put that full path in your script (cron runs with a much more limited PATH than your login shell) and it should work.
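For example, a sketch of the script with the path hard-coded (assuming which pidof prints /sbin/pidof; substitute whatever it actually reports on your system):
#!/bin/bash
# use the full path so the script also works under cron's minimal PATH
PIDOF=/sbin/pidof
case "$($PIDOF perl | wc -w)" in
    0) echo "Restarting iu-maildirtoimap: $(date)" >> /var/log/iu-maildirtoimap.txt
       /usr/local/bin/iu-maildirtoimap -i currentuser.txt -D imap.gmail.com:993 -d -n7 -I&
       ;;
    1) echo "Everything seems okay: $(date)" >> /var/log/iu-maildirtoimap.txt
       ;;
    *) echo "Removed double iu-maildirtoimap: $(date)" >> /var/log/iu-maildirtoimap.txt
       kill -9 $($PIDOF perl | awk '{print $1}')
       ;;
esac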

Init.d script hanging

I have an init.d script that looks like:
#!/bin/bash
# chkconfig: 345 85 60
# description: startup script for swapi
# processname: swapi
LDIR=/var/www/html/private/daemon
EXEC=swapi.php
PIDF=/var/run/swapi.pid
IEXE=/etc/init.d/swapi
### BEGIN INIT INFO
# Provides: swapi
# Required-Start: $local_fs
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: startup script for swapi
# Description: startup script for swapi.php which processes actionq into switch
### END INIT INFO
if [ ! -f $LDIR/$EXEC ]
then
    echo "swapi was not found at $LDIR/$EXEC"
    exit
fi
case "$1" in
    start)
        if [ -f $PIDF ]
        then
            echo "swapi is currently running. Killing running process..."
            $IEXE stop
        fi
        $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
        echo $MYPID > $PIDF
        echo "swapi is now running."
        ;;
    stop)
        if [ -f $PIDF ]
        then
            echo "Stopping swapi."
            PID_2=`cat $PIDF`
            if [ ! -z "`ps -f -p $PID_2 | grep -v grep | grep 'swapi'`" ]
            then
                kill -9 $PID_2
            fi
            rm -f $PIDF
        else
            echo "swapi is not running, cannot stop it. Aborting now..."
        fi
        ;;
    force-reload|restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Use: /etc/init.d/swapi {start|stop|restart|force-reload}"
        exit 1
esac
And then I have a keepalive cronjob that calls this if the pid goes down. The problem is that the keepalive script hangs whenever it is run as a cron job (i.e. via run-parts /var/www/html/private/fivemin; the keepalive script lives in /var/www/html/private/fivemin).
Is there something funky in my init.d script that I am missing?
I have been racking my brain on this problem for hours now! I am on CentOS 4, by the way.
Thanks for any help.
-Eric
EDIT:
The keepalive/cronjob script was simplified for testing to a simple:
#!/usr/bin/php
<?
exec("/etc/init.d/swapi start");
?>
The strange thing is that the error output from swapi.php ends up in /var/spool/mail like normal cron output, even though the init.d script is supposed to dump all of the output into swapi.log.
When I run keepalive.php from the cli (as root from /) it operates exactly as I would expect it to.
When keepalive runs, ps aux | grep php looks like:
root 4525 0.0 0.0 5416 584 ? S 15:10 0:00 awk -v progname=/var/www/html/private/fivemin/keepalive.php progname { print progname ":\n"; progname=""; } { print; }
root 4527 0.7 1.4 65184 14264 ? S 15:10 0:00 /usr/bin/php /var/www/html/private/daemon/swapi.php
And if I do a:
/etc/init.d/swapi stop
from the cli, then both programs are no longer listed.
ls -l on swapi.php looks like:
-rwxr-xr-x 1 5500 5500 33148 Aug 29 15:07 swapi.php
Here is what the crontab looks like:
*/5 * * * * root run-parts /var/www/html/private/fivemin
Here is the first bit of swapi.php
#!/usr/bin/php
<?
chdir(dirname( __FILE__ ));
include("../../config/db.php");
include("../../config/sql.php");
include("../../config/config.php");
include("config_local.php");
include("../../config/msg.php");
include("../../include/functions.php");
set_time_limit(0);
echo "starting # ".date("Ymd.Gi")."...\n";
$actionstr = "";
while(TRUE){
I modified the init.d script and put the INIT INFO block above the variable declarations; it did not make a difference.
The answer was that bash was staying open because my init.d script was not redirecting the stderr output. I have now changed it to
$LDIR/$EXEC &> $LDIR/swapi.log & MYPID=$!
And it now functions perfectly.
Thanks for the help everyone!
When you run a command from cron, the environment is not the same as when it is run from the bash command line after logging in. I would suspect in this case that the sh is not able to understand swapi.php as a PHP command.
Do a
which php
to see where your php binary is and add this to your init.d script
PHP=/usr/bin/php
...
$PHP $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
Probably not that important but you may want to redirect the output from the cron line
0 * * * * /path/to/script >> /dev/null 2>&1
for instance.
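Combining that suggestion with the OP's own fix, the start line of the init.d script might end up looking like this (a sketch; use whatever path which php reports):
PHP=/usr/bin/php
# send stdout and stderr to the log so nothing keeps cron's output pipe open
$PHP $LDIR/$EXEC >> $LDIR/swapi.log 2>&1 & MYPID=$!
echo $MYPID > $PIDF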
Make sure your script has the correct execution permissions, the right owner, and the first lines should look like this:
#!/usr/bin/php
<?php
