Cron job generates duplicate processes - Linux

My crontab entry looks like this:
$ crontab -l
0,15,30,45 * * * * /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
$ more /vas/app/check_cron/cronjob.sh
#!/bin/sh
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
$ ls -l /usr/local/bin/rsync
-rwxr-xr-x 1 bin bin 411494 Oct 5 2011 /usr/local/bin/rsync
$ ls -l /vas/app/check_cron/cronjob.sh
-rwxr-xr-x 1 vas vas 153 May 14 12:28 /vas/app/check_cron/cronjob.sh
If I run it manually, the script runs fine.
$ /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
When run from crontab, duplicate processes pile up (more than 30 in 24 hours) until I kill them manually.
$ ps -ef | grep cron | grep -v root | grep -v grep
vas 24157 24149 0 14:30:00 ? 0:00 /bin/sh /vas/app/check_cron/cronjob.sh
vas 24149 8579 0 14:30:00 ? 0:00 sh -c /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; ec
vas 24178 24166 0 14:30:00 ? 0:00 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
vas 24166 24157 0 14:30:00 ? 0:01 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
Please advise how to make this run cleanly, with no stale processes left in the system and all processes stopping properly.
BR,
Noel

The output you provide looks normal: the first two processes are just /bin/sh running your cron script, and the latter two are the rsync processes.
It might be a permission issue if the crontab runs as a different user than the one you use for testing, causing the script to take longer when run from cron. You can add -v, -vv, or even -vvv to the rsync command for more verbose output and then inspect the cron email after each run.
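For example, the rsync line in cronjob.sh would then read:
/usr/local/bin/rsync -vv -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/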
One method to prevent multiple running instances of scripts is to use a lock file of some sort; I find mkdir easy to use for this purpose, since it is atomic: it either creates the directory and succeeds, or fails because the directory already exists.
#!/bin/sh
# basename: $0 can contain slashes when the script is invoked by full path
LOCK="/tmp/$(basename "$0").lock"
# If mkdir fails then the lock already exists
mkdir "$LOCK" > /dev/null 2>&1
[ $? -ne 0 ] && exit 0
# Clean up the lock when the script exits for any reason
trap '{ rmdir "$LOCK" ; exit 0 ; }' EXIT
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
Just make sure you have some kind of cleanup when the OS starts in case it doesn't clean up /tmp by itself. The lock might be left there if the script crashes, is killed or is running when the OS is rebooted.
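As a minimal sketch of such a cleanup (assuming the lock path derived above, /tmp/cronjob.sh.lock), an @reboot crontab entry can clear a stale lock at boot:
@reboot rmdir /tmp/cronjob.sh.lock 2>/dev/null
On Linux, flock(1) from util-linux is an alternative that avoids stale locks entirely, since its lock is released automatically when the script's file descriptor closes.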

Why do you worry? Is something not working? From the parent process IDs I can deduce that the shell (PID 24157) forks an rsync (24166), and that rsync forks another rsync (24178). Looks like that's just how rsync operates...
It's certainly not cron starting two rsync processes.
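You can verify the parent/child relationship yourself from the PPID column (the third column of ps -ef):
ps -ef | grep rsync | grep -v grep
In the output quoted above, process 24178 lists 24166 as its parent, so it was rsync, not cron, that forked the second copy.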

Instead of cron, you might want to have a look at the Fat Controller.
It works similarly to cron but has various built-in strategies for handling cases where instances of the script you want to run would overlap.
For example, you could specify that the currently running instance is killed and a new one started, or you could specify a grace period in which the currently running instance has to finish before then terminating it and starting a new one. Alternatively, you can specify to wait indefinitely.
There are more examples and full documentation on the website:
http://fat-controller.sourceforge.net/

Related

Debian: Restart process when killed automatically in PuTTY

I would like to know if there is a simple script to automatically restart a screened background process.
The process gets killed, and I haven't managed to create a working restart script :(.
Thanks in advance! <3
I believe that the safest (but not the easiest) way to do this is to create a cron job that checks whether the process is running and, if it is not, restarts it. This method is "safer" because if you use a loop like the one ivanivan suggested and that script crashes, the program will not be restarted again; with cron, the check script is called every minute regardless.
For example, your cron could be:
* * * * * env DISPLAY=:0 /folder/testscript >/dev/null 2>&1
The env DISPLAY=:0 may or may not be needed, depending on your script; run echo $DISPLAY in your session to find the right value for your case.
For example, your testscript could be:
#!/bin/bash
testvar="$(ps aux | grep -s "mainscript" | grep -sv "grep -s mainscript")"
if [ -z "$testvar" ]; then nohup /folder/mainscript & fi
#sleep and run second test
sleep 30
testvar="$(ps aux | grep -s "mainscript" | grep -sv "grep -s mainscript")"
if [ -z "$testvar" ]; then nohup /folder/mainscript & fi
exit 0
In the example above, the testscript checks whether the mainscript is running (and restarts it if necessary) twice every minute.

How do I start application if it stopped using Cron?

Debian 8.6. No root.
I can use cron.
I need to check whether an application (php ./somescript &) running in the background has stopped, and restart it. How can I check this using bash?
Of course, there is ps aux | grep ....., but how do I automate it?
I suggest taking a look at the @reboot keyword in man 5 crontab to start a job once at server startup.
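For example (a sketch; the script path is the hypothetical one used elsewhere in this thread):
@reboot php /folder/somescript >/dev/null 2>&1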
One way to go about it would be:
Cron:
* * * * * env DISPLAY=:0 /folder/myscript >/dev/null 2>&1
The env DISPLAY=:0 may or may not be needed, depending on your script; run echo $DISPLAY in your session to find the right value for your case.
Script:
#!/bin/bash
testvar="$(ps aux | grep -s "somescript" | grep -sv "grep")"
if [ -z "$testvar" ]; then nohup /folder/somescript & fi
exit 0
This all could and should be fine tuned to your needs, but I believe this example could serve you well.
Edit: I fixed a small oversight in the code (I added | grep -sv "grep" to exclude grep's own process from the testvar results).
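As an aside, a common alternative is to bracket one character of the pattern so grep's own command line no longer matches it, which removes the need for the second grep (a sketch using the same hypothetical script name):
testvar="$(ps aux | grep "[s]omescript")"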

When launching script in background I get two processes running

I came across a weird scenario that is stumping me.
I have a script that I launch in the background with an &
example:
root## some_script.sh &
After running it, I do a ps -ef | grep some_script and I see TWO processes running, where the second process keeps getting a different PID but its parent is the process that I started (as if the parent process were spawning children that die off, but this was never written in the code).
example:
root## ps -ef | grep some_script.sh
root 4696 17882 0 13:30 pts/2 00:00:00 /bin/bash ./some_script.sh
root 4778 4696 0 13:30 pts/2 00:00:00 /bin/bash ./some_script.sh
root## ps -ef | grep some_script.sh
root 4696 17882 0 13:30 pts/2 00:00:00 /bin/bash ./some_script.sh
root 4989 4696 0 13:30 pts/2 00:00:00 /bin/bash ./some_script.sh
What gives here? It seems to be messing up the output and functionality of the script too, and basically makes it a never-ending process (even though I have a defined start and stop in the script).
the script:
#!/bin/bash
# Set Global Variables
LOGDIR="/srv/script_logs"
OUTDIR="/srv/audits"
BUCKET_LS=$OUTDIR"/LSOUT_"$i"_"$(date +%d%b%Y)".TXT"
MYCMD1="aws s3api list-objects --bucket viddler-flvs"
MYCMD2="--starting-token"
MAX_ITEMS="--max-items 10000"
MYSTARTING_TOKEN='""'
rm tokenlog.txt flv_out.txt
while [[ $MYSTARTING_TOKEN != "null" ]]
do
    # First - Get the token for the next batch
    CMD_PRE="$MYCMD1 $MAX_ITEMS $MYCMD2 $MYSTARTING_TOKEN"
    MYSTARTING_TOKEN=($($CMD_PRE | jq -r .NextToken))
    echo $MYSTARTING_TOKEN >> tokenlog.txt
    # Now - get the values of the files for the existing batch
    # First - re-run the batch and get the file values we want
    MYOUT2=$($CMD_PRE | (jq ".Contents[] | {Key, Size, LastModified,StorageClass }"))
    echo $MYOUT2 | sed 's/[{},"]//g;s/ /\n/g;s/StorageClass://g;s/LastModified://g;s/Size://g;s/Key://g;s/^ *//g;s/ *$//g' >> flv_out.txt
    #echo $STARTING_TOKEN
done
I guess you have
(
some shell instructions
)
inside of your .sh
This syntax executes the commands in a new process (a subshell), but the command line shown by ps stays the same, which is why you see two identical entries.
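A minimal demonstration of the effect (hypothetical demo.sh):
#!/bin/bash
# While the parenthesized group runs, ps -ef shows two copies of
# "/bin/bash ./demo.sh": the parent and the forked subshell.
(
    sleep 60
)
In your script, the command substitutions $( ... ) and the parentheses around jq in the pipeline fork subshells the same way, which is why the second PID keeps changing.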

Shell Script For Process Monitoring

This
#!/bin/bash
if [ `ps -ef | grep "91.34.124.35" | grep -v grep | wc -l` -eq 0 ]; then sh home/asfd.sh; fi
or this?
ps -ef | grep "91\.34\.124\.35" | grep -v grep > /dev/null
if [ "$?" -ne "0" ]
then
sh home/asfd.sh
else
echo "Process is running fine"
fi
Hello, how can I write a shell script that looks at the running processes and, if there is no process name CONTAINING 91.34.124.35, executes a file in a certain place? I want to make this run every 30 seconds in a continuous loop; I think there is a sleep command for that.
You can't use cron alone, since in the implementations I know the smallest unit is one minute. You can use sleep, but then your process will always be running (with cron it is started fresh each time).
To use sleep, just:
while true ; do
    if ! pgrep -f '91\.34\.124\.35' > /dev/null ; then
        sh /home/asfd.sh
    fi
    sleep 30
done
If your pgrep has the option -q to suppress output (as on BSD) you can also use pgrep -q without redirecting the output to /dev/null
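In that case the whole check collapses to a one-liner (a sketch):
if ! pgrep -qf '91\.34\.124\.35'; then sh /home/asfd.sh; fi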
First of all, you should be able to reduce your script to simply
if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
To run this every 30 seconds via cron (because cron only runs every minute) you need two entries: one to run the command, and another that sleeps for 30 seconds before running the same command again. For example:
* * * * * root if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
* * * * * root sleep 30; if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
To make this cleaner, you might be able to first store the command in a variable and use it for both entries. (I haven't tested this).
CHECK_COMMAND="if ! pgrep '91\.34\.124\.35' > /dev/null; then ./your_script.sh; fi"
* * * * * root eval "$CHECK_COMMAND"
* * * * * root sleep 30; eval "$CHECK_COMMAND"
p.s. The above assumes you're adding that to /etc/crontab. To use it in a user's crontab (crontab -e) simply leave out the username (root) before the command.
I would suggest using watch:
watch -n 30 launch_my_script_if_process_is_dead.sh
Either way is fine; you can save it in a .sh file and add it to the crontab (keeping in mind that cron's granularity is one minute, so a 30-second interval needs the sleep trick shown above). Let me know if you want to know how to use crontab.
Try this:
if ! ps -ef | grep "91\.34\.124\.35" | grep -v grep > /dev/null
then
sh home/asfd.sh
else
echo "Process is running fine"
fi
No need to use test. if itself will examine the exit code.
You can save your script in a file, myscript.sh, and then run it through cron. Note that */30 in the minutes field runs every 30 minutes; cron cannot schedule at sub-minute intervals:
*/30 * * * * /full/path/for/myscript.sh
or you can use a while loop:
# cat script1.sh
#!/bin/bash
while true; do /bin/sh /full/path/for/myscript.sh ; sleep 30; done &
# ./script1.sh
Thanks.
I have found daemonizing critical scripts very effective.
http://cr.yp.to/daemontools.html
You can use monit for this task. See the documentation. It is available on most Linux distributions and has a straightforward config. Find some examples in this post.
For your app it will look something like
check process myprocessname
matching "91\.34\.124\.35"
start program = "/home/asfd.sh"
stop program = "/home/dfsa.sh"
If monit is not available on your platform you can use supervisord.
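For supervisord, a minimal program section might look like this (a sketch; the program name is an assumption):
[program:asfd]
command=/home/asfd.sh
autorestart=true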
I also found this very similar question: Repeat command automatically in Linux. It suggests using watch.
Use cron for the "loop every 30 seconds" part, with the two-entry sleep 30 trick shown above, since cron's granularity is one minute.

Init.d script hanging

I have an init.d script that looks like:
#!/bin/bash
# chkconfig: 345 85 60
# description: startup script for swapi
# processname: swapi
LDIR=/var/www/html/private/daemon
EXEC=swapi.php
PIDF=/var/run/swapi.pid
IEXE=/etc/init.d/swapi
### BEGIN INIT INFO
# Provides: swapi
# Required-Start: $local_fs
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: startup script for swapi
# Description: startup script for swapi.php which processes actionq into switch
### END INIT INFO
if [ ! -f $LDIR/$EXEC ]
then
echo "swapi was not found at $LDIR/$EXEC"
exit
fi
case "$1" in
start)
if [ -f $PIDF ]
then
echo "swapi is currently running. Killing running process..."
$IEXE stop
fi
$LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
echo $MYPID > $PIDF
echo "swapi is now running."
;;
stop)
if [ -f $PIDF ]
then
echo "Stopping swapi."
PID_2=`cat $PIDF`
if [ ! -z "`ps -f -p $PID_2 | grep -v grep | grep 'swapi'`" ]
then
kill -9 $PID_2
fi
rm -f $PIDF
else
echo "swapi is not running, cannot stop it. Aborting now..."
fi
;;
force-reload|restart)
$0 stop
$0 start
;;
*)
echo "Use: /etc/init.d/swapi {start|stop|restart|force-reload}"
exit 1
esac
And then I have a keepalive cron job that calls this if the PID goes down. The problem is that the keepalive script hangs whenever I run it as a cron job (i.e. run-parts /var/www/html/private/fivemin), the keepalive script being in /var/www/html/private/fivemin.
Is there something funky in my init.d script that I am missing?
I have been racking my brain on this problem for hours now! I am on CentOS 4, btw.
Thanks for any help.
-Eric
EDIT:
The keepalive/cronjob script was simplified for testing to a simple:
#!/usr/bin/php
<?
exec("/etc/init.d/swapi start");
?>
The strange thing is that the error output from swapi.php ends up in /var/spool/mail like normal cron output, even though I redirect all the output to swapi.log in the init.d script.
When I run keepalive.php from the cli (as root from /) it operates exactly as I would expect it to.
When keepalive runs ps aux | grep php looks like:
root 4525 0.0 0.0 5416 584 ? S 15:10 0:00 awk -v progname=/var/www/html/private/fivemin/keepalive.php progname { print progname ":\n"; progname=""; } { print; }
root 4527 0.7 1.4 65184 14264 ? S 15:10 0:00 /usr/bin/php /var/www/html/private/daemon/swapi.php
And If I do a:
/etc/init.d/swapi stop
from the cli then both programs are no longer listed.
Swapi ls -l looks like:
-rwxr-xr-x 1 5500 5500 33148 Aug 29 15:07 swapi.php
Here is what the crontab looks like:
*/5 * * * * root run-parts /var/www/html/private/fivemin
Here is the first bit of swapi.php
#!/usr/bin/php
<?
chdir(dirname( __FILE__ ));
include("../../config/db.php");
include("../../config/sql.php");
include("../../config/config.php");
include("config_local.php");
include("../../config/msg.php");
include("../../include/functions.php");
set_time_limit(0);
echo "starting # ".date("Ymd.Gi")."...\n";
$actionstr = "";
while(TRUE){
I modified the init.d script and put the INIT INFO block above the variable declarations; it did not make a difference.
The answer was that bash was staying open because my init.d script was not redirecting the stderr output. I have now changed it to
$LDIR/$EXEC &> $LDIR/swapi.log & MYPID=$!
And it now functions perfectly.
Thanks for the help everyone!
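One portability note on that fix: &> is a bash extension. Under a plain POSIX /bin/sh, the same redirection is spelled out explicitly:
$LDIR/$EXEC > $LDIR/swapi.log 2>&1 & MYPID=$!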
When you run a command from cron, the environment is not the same as when it is run from the bash command line after logging in. I would suspect in this case that sh is not able to run swapi.php as a PHP command.
Do a
which php
to see where your php binary is and add this to your init.d script
PHP=/usr/bin/php
...
$PHP $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
Probably not that important, but you may want to redirect the output from the cron line. Note that the order of redirections matters: 2>&1 must come after the file redirection, or stderr will not go to the file:
0 * * * * /path/to/script >> /dev/null 2>&1
for instance.
Make sure your script has the correct execution permissions, the right owner, and the first lines should look like this:
#!/usr/bin/php
<?php
