I have an init.d script that looks like:
#!/bin/bash
# chkconfig: 345 85 60
# description: startup script for swapi
# processname: swapi
LDIR=/var/www/html/private/daemon
EXEC=swapi.php
PIDF=/var/run/swapi.pid
IEXE=/etc/init.d/swapi
### BEGIN INIT INFO
# Provides: swapi
# Required-Start: $local_fs
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: startup script for swapi
# Description: startup script for swapi.php which processes actionq into switch
### END INIT INFO
if [ ! -f $LDIR/$EXEC ]
then
    echo "swapi was not found at $LDIR/$EXEC"
    exit
fi
case "$1" in
    start)
        if [ -f $PIDF ]
        then
            echo "swapi is currently running. Killing running process..."
            $IEXE stop
        fi
        $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
        echo $MYPID > $PIDF
        echo "swapi is now running."
        ;;
    stop)
        if [ -f $PIDF ]
        then
            echo "Stopping swapi."
            PID_2=`cat $PIDF`
            if [ ! -z "`ps -f -p $PID_2 | grep -v grep | grep 'swapi'`" ]
            then
                kill -9 $PID_2
            fi
            rm -f $PIDF
        else
            echo "swapi is not running, cannot stop it. Aborting now..."
        fi
        ;;
    force-reload|restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Use: /etc/init.d/swapi {start|stop|restart|force-reload}"
        exit 1
esac
And then I have a keepalive cron job that calls this script if the PID goes down. The problem is that the keepalive script hangs whenever I run it as a cron job (i.e. run-parts /var/www/html/private/fivemin), the keepalive script being in /var/www/html/private/fivemin.
Is there something funky in my init.d script that I am missing?
I have been racking my brain on this problem for hours now! I am on CentOS 4, by the way.
Thanks for any help.
-Eric
EDIT:
The keepalive/cronjob script was simplified for testing to a simple:
#!/usr/bin/php
<?
exec("/etc/init.d/swapi start");
?>
The strange thing is that the error output from swapi.php ends up in /var/spool/mail like normal cron output, even though I have all the output being dumped into swapi.log in the init.d script.
When I run keepalive.php from the CLI (as root from /) it operates exactly as I would expect it to.
When keepalive runs, ps aux | grep php looks like:
root 4525 0.0 0.0 5416 584 ? S 15:10 0:00 awk -v progname=/var/www/html/private/fivemin/keepalive.php progname {????? print progname ":\n"????? progname="";???? }???? { print; }
root 4527 0.7 1.4 65184 14264 ? S 15:10 0:00 /usr/bin/php /var/www/html/private/daemon/swapi.php
And if I do a:
/etc/init.d/swapi stop
from the CLI, then both programs are no longer listed.
ls -l for swapi.php looks like:
-rwxr-xr-x 1 5500 5500 33148 Aug 29 15:07 swapi.php
Here is what the crontab looks like:
*/5 * * * * root run-parts /var/www/html/private/fivemin
Here is the first bit of swapi.php
#!/usr/bin/php
<?
chdir(dirname( __FILE__ ));
include("../../config/db.php");
include("../../config/sql.php");
include("../../config/config.php");
include("config_local.php");
include("../../config/msg.php");
include("../../include/functions.php");
set_time_limit(0);
echo "starting # ".date("Ymd.Gi")."...\n";
$actionstr = "";
while(TRUE){
I modified the init.d script and put the INIT INFO block above the variable declarations; it did not make a difference.
The answer was that bash was staying open because my init.d script was not redirecting the stderr output. I have now changed it to
$LDIR/$EXEC &> $LDIR/swapi.log & MYPID=$!
And it now functions perfectly.
Thanks for the help everyone!
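For reference, &> is a bash-specific shorthand that also truncates the log on each start; the portable form below appends instead and works under plain sh as well. A minimal sketch of the equivalent start line (same variables as in the script above):

# Append both stdout and stderr to the log, background the process,
# and record its PID.
$LDIR/$EXEC >> $LDIR/swapi.log 2>&1 &
MYPID=$!
echo $MYPID > $PIDF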
When you run a command from cron, the environment is not the same as when it is run from the bash command line after logging in. I would suspect in this case that sh is not able to interpret swapi.php as a PHP command.
Do a
which php
to see where your php binary is and add this to your init.d script
PHP=/usr/bin/php
...
$PHP $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
Probably not that important, but you may want to redirect the output from the cron line:
0 * * * * /path/to/script >> /dev/null 2>&1
for instance.
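If you want to see exactly how the cron environment differs from your login shell, one quick way (the dump file names here are just examples) is to capture both environments and compare them:

# From an interactive shell:
env | sort > /tmp/env.shell
# From a temporary cron entry, e.g.:
# * * * * * env | sort > /tmp/env.cron
# Then compare the two:
diff /tmp/env.shell /tmp/env.cron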
Make sure your script has the correct execution permissions, the right owner, and the first lines should look like this:
#!/usr/bin/php
<?php
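For example (the ownership below is only a placeholder; the ls -l output above shows a numeric owner of 5500, so make sure that UID maps to a real account, or chown the file to one that does):

chmod 755 /var/www/html/private/daemon/swapi.php
chown root:root /var/www/html/private/daemon/swapi.php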
Related
I've written a small bash script to start a program every 3 seconds. This script is executed on startup and it saves its PID into a pidfile:
#!/bin/bash
echo $$ > /var/run/start_gps-read.pid
while [ true ] ; do
    if [ "$1" == "stop" ] ;
    then
        echo "Stopping GPS read script ..."
        sudo pkill -F /var/run/start_gps-read.pid
        exit
    fi
    sudo /home/dh/gps_read.exe /dev/ttyACM0 /home/dh/gps_files/gpsMaus_1.xml
    sleep 3
done
The problem is, I can't terminate the shell script by calling start_gps-read.sh stop. It should read the pidfile and stop the initial process (from startup).
But when I call stop, the script still runs:
dh@Raspi_DataHarvest:~$ sudo /etc/init.d/start_gps-read.sh stop
Stopping GPS read script ...
dh@Raspi_DataHarvest:~$ ps aux | grep start
root 488 0.0 0.3 5080 2892 ? Ss 13:30 0:00 /bin/bash /etc/init.d/start_gps-read.sh start
dh 1125 0.0 0.2 4296 2016 pts/0 S+ 13:34 0:00 grep start
Note: The script is always executed as sudo.
Does anyone know how to stop my shell script?
The "stop" check needs to come before you overwrite the pid file, and certainly doesn't need to be inside the loop.
if [ "$1" = stop ]; then
    echo "Stopping ..."
    sudo pkill -F /var/run/start_gps-read.pid
    exit
fi
echo "$$" > /var/run/start_gps-read.pid
while true; do
    sudo /home/dh/gps_read.exe ...
    sleep 3
done
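A quick way to check that stop now works (the brackets in the grep pattern keep grep from matching itself):

sudo /etc/init.d/start_gps-read.sh stop
ps aux | grep [s]tart_gps-read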
My crontab looks like this:
$ crontab -l
0,15,30,45 * * * * /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
$ more /vas/app/check_cron/cronjob.sh
#!/bin/sh
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
$ ls -l /usr/local/bin/rsync
-rwxr-xr-x 1 bin bin 411494 Oct 5 2011 /usr/local/bin/rsync
$ ls -l /vas/app/check_cron/cronjob.sh
-rwxr-xr-x 1 vas vas 153 May 14 12:28 /vas/app/check_cron/cronjob.sh
If I run it manually, the script runs fine:
$ /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
If it is run from crontab, cron generates duplicate processes (more than 30 in 24 hours) until I kill them manually.
$ ps -ef | grep cron | grep -v root | grep -v grep
vas 24157 24149 0 14:30:00 ? 0:00 /bin/sh /vas/app/check_cron/cronjob.sh
vas 24149 8579 0 14:30:00 ? 0:00 sh -c /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; ec
vas 24178 24166 0 14:30:00 ? 0:00 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
vas 24166 24157 0 14:30:00 ? 0:01 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
Please give me advice on how to make this run properly, so that no stray processes are left in the system and the processes stop cleanly.
BR,
Noel
The output you provide seems normal; the first two processes are just /bin/sh running your cron script and the latter two are the rsync processes.
It might be a permission issue if the crontab does not run as the same user as the one you use for testing, causing the script to take longer when run from cron. You can add -v, -vv, or even -vvv to the rsync command for increased output and then observe the cron email after each run.
One method to prevent multiple running instances of scripts is to use lock files of some sort; I find it easy to use mkdir for this purpose.
#!/bin/sh
# Use only the script's base name, since $0 may contain slashes
LOCK="/tmp/$(basename "$0").lock"
# If mkdir fails then the lock already exists
mkdir $LOCK > /dev/null 2>&1
[ $? -ne 0 ] && exit 0
# We clean up the lock when the script exits for any reason
trap "{ rmdir $LOCK ; exit 0 ; }" EXIT
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
Just make sure you have some kind of cleanup when the OS starts in case it doesn't clean up /tmp by itself. The lock might be left there if the script crashes, is killed or is running when the OS is rebooted.
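If your system has flock(1) from util-linux (it may not be available on non-Linux platforms), an alternative to the mkdir lock, sketched here under that assumption, is to let the kernel drop the lock automatically when the script exits, so no boot-time cleanup is needed:

#!/bin/sh
# Open a lock file on descriptor 9 and try to take an exclusive lock;
# if another instance already holds it, exit immediately.
exec 9> /tmp/cronjob.lock
flock -n 9 || exit 0
# ... the rsync commands go here; the lock is released when the script exits.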
Why do you worry? Is something not working? From the parent process IDs I can deduce that the shell (PID 24157) forks an rsync (24166), and that rsync forks another rsync (24178). Looks like that's just how rsync operates...
It's certainly not cron starting two rsync processes.
Instead of CRON, you might want to have a look at the Fat Controller
It works similarly to CRON but has various built-in strategies for managing cases where instances of the script you want to run would overlap.
For example, you could specify that the currently running instance is killed and a new one started, or you could specify a grace period in which the currently running instance has to finish before it is terminated and a new one started. Alternatively, you can specify to wait indefinitely.
There are more examples and full documentation on the website:
http://fat-controller.sourceforge.net/
MAILTO=""
*/10 * * * * /bin/bash /var/www/sym_monitor/sym_start.sh > /var/www/migrate/root_start.txt 2>&1
*/10 * * * * /bin/bash /var/www/sym_monitor/stop.sh > /var/www/migrate/root_stop.txt 2>&1
Both of these are jobs inside cron running at a 10-minute interval; for example at 17:30 the second one starts and at 17:35 the first one starts, to avoid the first job being killed by the second before it has actually started.
First script consist of the following code
#!/bin/bash
value=$(</var/www/sym_monitor/man.txt)
if [ "$value" == "true" ]; then
    ps -ef | grep sym | grep -v grep | awk '{ print $2 }' | sudo xargs kill -9;
fi
Second script consists of the following code.
#!/bin/bash
value=$(</var/www/sym_monitor/man.txt)
if [ "$value" == "true" ]; then
    sleep 30;
    cd /var/www/symmetric-ds-3.1.6/bin;
    (sudo ./sym --port 8082 --server);
fi
The problem is that when I run both scripts, sym_start.sh unfortunately does not execute. But when I remove stop.sh and run the stop script manually, the one script left in cron executes properly. Why does this happen? Any idea?
Can you try changing
(sudo ./sym --port 8082 --server);
to its absolute path
(sudo /var/www/symmetric-ds-3.1.6/bin/sym --port 8082 --server);
I think the working directory is not getting changed in the shell.
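To confirm whether it is really the working directory or PATH that differs when cron runs the script that starts sym, you could temporarily log both from inside that script; the log file name here is only an example:

# Hypothetical debug lines, added near the top of the script run by cron:
{
    echo "run at: $(date)"
    echo "pwd:    $(pwd)"
    echo "PATH:   $PATH"
} >> /tmp/sym_debug.log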
I have a Perl script that I want to daemonize. Basically this Perl script will read a directory every 30 seconds, read the files that it finds and then process the data. To keep it simple here, consider the following Perl script (called synpipe_server; there is a symbolic link to this script in /usr/sbin/):
#!/usr/bin/perl
use strict;
use warnings;
my $continue = 1;
$SIG{'TERM'} = sub { $continue = 0; print "Caught TERM signal\n"; };
$SIG{'INT'} = sub { $continue = 0; print "Caught INT signal\n"; };
my $i = 0;
while ($continue) {
    # do stuff
    print "Hello, I am running " . ++$i . "\n";
    sleep 3;
}
So this script basically prints something every 3 seconds.
Then, as I want to daemonize this script, I've also put this bash script (also called synpipe_server) in /etc/init.d/ :
#!/bin/bash
# synpipe_server : This starts and stops synpipe_server
#
# chkconfig: 12345 12 88
# description: Monitors all production pipelines
# processname: synpipe_server
# pidfile: /var/run/synpipe_server.pid
# Source function library.
. /etc/rc.d/init.d/functions
pname="synpipe_server"
exe="/usr/sbin/synpipe_server"
pidfile="/var/run/${pname}.pid"
lockfile="/var/lock/subsys/${pname}"
[ -x $exe ] || exit 0
RETVAL=0
start() {
    echo -n "Starting $pname : "
    daemon ${exe}
    RETVAL=$?
    PID=$!
    echo
    [ $RETVAL -eq 0 ] && touch ${lockfile}
    echo $PID > ${pidfile}
}

stop() {
    echo -n "Shutting down $pname : "
    killproc ${exe}
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        rm -f ${lockfile}
        rm -f ${pidfile}
    fi
}

restart() {
    echo -n "Restarting $pname : "
    stop
    sleep 2
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status ${pname}
        ;;
    restart)
        restart
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        ;;
esac

exit 0
So (if I have understood the documentation for daemon correctly), the Perl script should run in the background and the output should be redirected to /dev/null if I execute:
service synpipe_server start
But here is what I get instead :
[root@master init.d]# service synpipe_server start
Starting synpipe_server : Hello, I am running 1
Hello, I am running 2
Hello, I am running 3
Hello, I am running 4
Caught INT signal
[ OK ]
[root@master init.d]#
So it starts the Perl script but runs it without detaching it from the current terminal session, and I can see the output printed in my console ... which is not really what I was expecting. Moreover, the PID file is empty (or with a line feed only, no pid returned by daemon).
Does anyone have any idea of what I am doing wrong ?
EDIT : maybe I should say that I am on a Red Hat machine.
Scientific Linux SL release 5.4 (Boron)
Thanks,
Tony
I finally re-wrote the start function in the bash init script, and I am not using daemon anymore.
start() {
    echo -n "Starting $pname : "
    #daemon ${exe} # Not working ...
    if [ -s ${pidfile} ]; then
        RETVAL=1
        echo -n "Already running !" && warning
        echo
    else
        nohup ${exe} >/dev/null 2>&1 &
        RETVAL=$?
        PID=$!
        [ $RETVAL -eq 0 ] && touch ${lockfile} && success || failure
        echo
        echo $PID > ${pidfile}
    fi
}
I check that the pid file does not already exist (if it does, just print a warning). If not, I use
nohup ${exe} >/dev/null 2>&1 &
to start the script.
I don't know if it is safe this way (?) but it works.
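One thing that might make this a little more robust (just a sketch, using the same variables as above): a pid file can be left behind after a crash, so besides checking that the file is non-empty you can check whether the recorded process is still alive with kill -0, and treat a stale file as "not running":

if [ -s ${pidfile} ] && kill -0 "$(cat ${pidfile})" 2>/dev/null; then
    RETVAL=1
    echo -n "Already running !" && warning
    echo
else
    # Pid file missing or stale: start the daemon as before.
    nohup ${exe} >/dev/null 2>&1 &
    RETVAL=$?
    PID=$!
    [ $RETVAL -eq 0 ] && touch ${lockfile} && success || failure
    echo
    echo $PID > ${pidfile}
fi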
The proper way to daemonize a process is to have it detach from the terminal by itself. This is how most larger software suites do it, for instance, Apache.
The rationale behind daemon not doing what you would expect from its name, and how to make a Unix process detach into the background, can be found here in section 1.7, How do I get my program to act like a daemon?
Simply invoking a program in the background isn't really adequate for
these long-running programs; that does not correctly detach the
process from the terminal session that started it. Also, the
conventional way of starting daemons is simply to issue the command
manually or from an rc script; the daemon is expected to put itself
into the background.
For further reading on this topic: What's the difference between nohup and a daemon?
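If changing the Perl program itself is not an option, one workaround is to have the init script start it in a new session, detached from the controlling terminal. A sketch, assuming setsid(1) from util-linux is installed (it normally is on Red Hat systems); note that if setsid has to fork, $! will be the wrapper's PID rather than the daemon's:

# Start the program in its own session with all standard streams detached,
# then record the PID of the backgrounded process.
setsid /usr/sbin/synpipe_server < /dev/null > /dev/null 2>&1 &
echo $! > /var/run/synpipe_server.pid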
According to man daemon, the correct syntax is
daemon [options] -- [command] [command args]
Your init script startup should run something like:
daemon --pidfile ${pidfile} -- ${exe}
As said here, it seems that the process needs to be sent to the background using &.
daemon doesn't do it for you.
This
#!/bin/bash
if [ `ps -ef | grep "91.34.124.35" | grep -v grep | wc -l` -eq 0 ]; then sh home/asfd.sh; fi
or this?
ps -ef | grep "91\.34\.124\.35" | grep -v grep > /dev/null
if [ "$?" -ne "0" ]
then
sh home/asfd.sh
else
echo "Process is running fine"
fi
Hello, how can I write a shell script that looks at the running processes and, if there isn't a process whose name contains 91.34.124.35, executes a file in a certain place? I want to make this run every 30 seconds in a continuous loop; I think there was a sleep command.
You can't use cron for this, since on the implementations I know the smallest unit is one minute. You can use sleep, but then your process will always be running (with cron it gets started each time).
To use sleep, just:
while true ; do
    if ! pgrep -f '91\.34\.124\.35' > /dev/null ; then
        sh /home/asfd.sh
    fi
    sleep 30
done
If your pgrep has the option -q to suppress output (as on BSD) you can also use pgrep -q without redirecting the output to /dev/null
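Putting that together, the whole keep-alive loop with a -q capable pgrep collapses to the following (assuming your pgrep supports both -q and -f):

while true; do
    # Restart the helper script if no process matches the address.
    pgrep -qf '91\.34\.124\.35' || sh /home/asfd.sh
    sleep 30
done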
First of all, you should be able to reduce your script to simply
if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
To run this every 30 seconds via cron (because cron only runs every minute) you need 2 entries - one to run the command, another to delay for 30 seconds before running the same command again. For example:
* * * * * root if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
* * * * * root sleep 30; if ! pgrep "91\.34\.124\.35" > /dev/null; then ./your_script.sh; fi
To make this cleaner, you might be able to first store the command in a variable and use it for both entries. (I haven't tested this).
CHECK_COMMAND="if ! pgrep '91\.34\.124\.35' > /dev/null; then ./your_script.sh; fi"
* * * * * root eval "$CHECK_COMMAND"
* * * * * root sleep 30; eval "$CHECK_COMMAND"
p.s. The above assumes you're adding that to /etc/crontab. To use it in a user's crontab (crontab -e) simply leave out the username (root) before the command.
I would suggest using watch:
watch -n 30 launch_my_script_if_process_is_dead.sh
Either way is fine; you can save it in a .sh file and run it from cron (keeping in mind that cron's smallest interval is one minute, so a true 30-second cycle needs the sleep trick shown above). Let me know if you want to know how to use crontab.
Try this:
if ps -ef | grep "91\.34\.124\.35" | grep -v grep > /dev/null
then
    sh home/asfd.sh
else
    echo "Process is running fine"
fi
No need to use test; if itself will examine the exit code.
You can save your script in a file, e.g. myscript.sh, and then run it through cron (note that this entry runs every 30 minutes, since cron's resolution is one minute):
*/30 * * * * /full/path/for/myscript.sh
or you can use while
# cat script1.sh
#!/bin/bash
while true; do /bin/sh /full/path/for/myscript.sh ; sleep 30; done &
# ./script1.sh
Thanks.
I have found daemonizing critical scripts very effective.
http://cr.yp.to/daemontools.html
You can use monit for this task. See the documentation. It is available on most Linux distributions and has a straightforward config. Find some examples in this post.
For your app it will look something like
check process myprocessname
matching "91\.34\.124\.35"
start program = "/home/asfd.sh"
stop program = "/home/dfsa.sh"
If monit is not available on your platform you can use supervisord.
I also found this very similar question: Repeat command automatically in Linux. It suggests using watch.
Use cron for the "loop every 30 seconds" part.