How to stop a shell script correctly? - linux

I've written a small bash script to start a program every 3 seconds. This script is executed on startup and it saves its PID into a pidfile:
#!/bin/bash
echo $$ > /var/run/start_gps-read.pid
while [ true ] ; do
    if [ "$1" == "stop" ] ;
    then
        echo "Stopping GPS read script ..."
        sudo pkill -F /var/run/start_gps-read.pid
        exit
    fi
    sudo /home/dh/gps_read.exe /dev/ttyACM0 /home/dh/gps_files/gpsMaus_1.xml
    sleep 3
done
The problem is, I can't terminate the shell script by calling start_gps-read.sh stop. It should read the pidfile and stop the initial process (the one started at boot).
But when I call stop, the script still runs:
dh@Raspi_DataHarvest:~$ sudo /etc/init.d/start_gps-read.sh stop
Stopping GPS read script ...
dh@Raspi_DataHarvest:~$ ps aux | grep start
root 488 0.0 0.3 5080 2892 ? Ss 13:30 0:00 /bin/bash /etc/init.d/start_gps-read.sh start
dh 1125 0.0 0.2 4296 2016 pts/0 S+ 13:34 0:00 grep start
Note: The script is always executed as sudo.
Does anyone know how to stop my shell script?

The "stop" check needs to come before you overwrite the pid file, and certainly doesn't need to be inside the loop.
if [ "$1" = stop ]; then
echo "Stopping ..."
sudo pkill -F /var/run/start_gps-read.pid
exit
fi
echo "$$" > /var/run/start_gps-read.pid
while true; do
sudo /home/dh/gps_read.exe ...
sleep 3
done
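With that ordering, a quick sanity check could look something like this (a sketch; the pidfile path matches the script above, and the ps invocation is just one way to verify):
sudo /etc/init.d/start_gps-read.sh stop
# the PID recorded at boot should no longer be running:
ps -p "$(sudo cat /var/run/start_gps-read.pid)" > /dev/null 2>&1 || echo "GPS read script stopped"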

Related

Execute loop while command is running

I want to create a loop (blinking LED) while a command (in this case ping) is running.
I am using the Raspberry Pi (Raspbian)
while [ `nmap -p 80 example.com` ] # something like this
do
    echo "1">/sys/class/gpio/...
    sleep 0.2
    echo "0">/sys/class/gpio/...
    sleep 0.2
done
What I would do:
any_command & _pid=$!
while kill -0 "$_pid" &>/dev/null; do
    echo "1">/sys/class/gpio/...
    sleep 0.2
    echo "0">/sys/class/gpio/...
    sleep 0.2
done
kill -0 just tests whether the PID exists =)
the command any_command is launched in the background
& puts the command in the background
$! is the PID of the most recently backgrounded job
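Applied to the original ping use case it would look roughly like this (a sketch; the GPIO pin number and value path are placeholders that depend on how the pin was exported):
ping -c 20 example.com > /dev/null & _pid=$!
while kill -0 "$_pid" &>/dev/null; do
    echo "1" > /sys/class/gpio/gpio17/value   # placeholder pin
    sleep 0.2
    echo "0" > /sys/class/gpio/gpio17/value
    sleep 0.2
done
echo "0" > /sys/class/gpio/gpio17/value       # leave the LED off once ping finishes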

How to prevent multiple instances of a shell script?

Using CentOS 5.
Below is my shell script. I want to prevent multiple instances of it, but the locking doesn't work if I kill the script with "kill -9", and I doubt it will survive a reboot.
Is there any way to apply this logic so that it also handles kill -9, a reboot, or any other signal that causes the script to exit?
[root@manage aaa]# cat script.sh
#!/bin/sh
set -e
scriptname=$(basename $0)
pidfile="/var/run/${scriptname}"
# lock it
exec 200>$pidfile
flock -n 200 || exit 1
pid=$$
echo $pid 1>&200
#### SCRIPT CODE
Try using the flock command. From the man page:
(
    flock -n 9 || exit 1
    # ... commands executed under lock ...
) 9>/var/lock/mylockfile
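Because the lock is held on an open file descriptor, the kernel releases it as soon as the process dies for any reason, including kill -9 and reboot, so no manual cleanup is needed. A minimal sketch applying that to the script above (the descriptor number and lock path are arbitrary choices):
#!/bin/sh
set -e
scriptname=$(basename "$0")
lockfile="/var/run/${scriptname}.lock"
# take an exclusive, non-blocking lock on fd 9
exec 9>"$lockfile"
flock -n 9 || { echo "Another instance is already running"; exit 1; }
# optionally record the PID for inspection; the lock itself does the real work
echo $$ 1>&9
#### SCRIPT CODE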
You can solve this problem with the following logic:
Create the lock file.
Check whether it is already locked: if not, continue; otherwise stop.
My script:
#!/bin/bash
LOCK=/var/run/redundancia.lock
LOG=/var/log/redundancia.log
#----------------------------------------------------------------
control_c () {
    echo -e "\nScript stopped: `date +%d/%m/%Y%t%T`" >> $LOG
    rm $LOCK &>/dev/null
    exit 0
}
trap control_c INT HUP TERM
if [ ! -f $LOCK ]
then
    touch $LOCK
    # Your code....
else
    echo "This script is running"
    exit 0
fi
Sorry for my bad English ;/

Cron job generate duplicate processess

My cron is like below:
$ crontab -l
0,15,30,45 * * * * /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
$ more /vas/app/check_cron/cronjob.sh
#!/bin/sh
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
$ ls -l /usr/local/bin/rsync
-rwxr-xr-x 1 bin bin 411494 Oct 5 2011 /usr/local/bin/rsync
$ ls -l /vas/app/check_cron/cronjob.sh
-rwxr-xr-x 1 vas vas 153 May 14 12:28 /vas/app/check_cron/cronjob.sh
If I run it manually, the script runs fine.
$ /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; echo "Exit code: $?" >> /vas/app/check_cron/cronjob.log
When it is run from crontab, cron generates duplicate processes, more than 30 in 24 hours, until I kill them manually.
$ ps -ef | grep cron | grep -v root | grep -v grep
vas 24157 24149 0 14:30:00 ? 0:00 /bin/sh /vas/app/check_cron/cronjob.sh
vas 24149 8579 0 14:30:00 ? 0:00 sh -c /vas/app/check_cron/cronjob.sh 2>&1 > /vas/app/check_cron/cronjob.log; ec
vas 24178 24166 0 14:30:00 ? 0:00 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
vas 24166 24157 0 14:30:00 ? 0:01 /usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
Please advise how to make this run cleanly, with no stray processes left in the system and with processes stopping properly.
BR,
Noel
The output you provide seems normal; the first two processes are just /bin/sh running your cron script and the latter two are the rsync processes.
It might be a permission issue if the crontab user is not the same as the one you use for testing, which could make the script take longer when run from cron. You can add -v, -vv, or even -vvv to the rsync command for more verbose output and then check the cron email after each run.
One method to prevent multiple running instances of scripts is to use lock files of some sort, I find it easy to use mkdir for this purpose.
#!/bin/sh
LOCK="/tmp/$0.lock"
# If mkdir fails then the lock already exists
mkdir $LOCK > /dev/null 2>&1
[ $? -ne 0 ] && exit 0
# We clean up the lock when the script exits for any reason
trap "{ rmdir $LOCK ; exit 0 ; }" EXIT
echo "starting script";
/usr/local/bin/rsync -r /vas/app/check_cron/cron1/ /vas/app/check_cron/cron2/
echo "completed running the script";
Just make sure you have some kind of cleanup when the OS starts in case it doesn't clean up /tmp by itself. The lock might be left there if the script crashes, is killed or is running when the OS is rebooted.
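One way to handle that cleanup at boot (a sketch; it assumes the lock name derived from the script above and a cron daemon that supports @reboot entries):
@reboot rm -rf /tmp/cronjob.sh.lock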
Why do you worry? Is something not working? From the parent process IDs I can deduce that the shell (PID 24157) forks an rsync (24166), and that rsync forks another rsync (24178). Looks like that's just how rsync operates...
It's certainly not cron starting two rsync processes.
Instead of CRON, you might want to have a look at the Fat Controller
It works similarly to CRON but has various built-in strategies for managing cases where instances of the script you want to run would overlap.
For example, you could specify that the currently running instance is killed and a new one started, or you could specify a grace period in which the currently running instance has to finish before it is terminated and a new one is started. Alternatively, you can specify to wait indefinitely.
There are more examples and full documentation on the website:
http://fat-controller.sourceforge.net/

About shell and subshell

I'm new to shell scripting. I just learned that (command) creates a new subshell and executes the command, so I tried to print the PID of the parent shell and of the subshell:
#!/bin/bash
echo $$
echo "`echo $$`"
sleep 4
var=$(echo $$;sleep 4)
echo $var
But the output is:
$./test.sh
9098
9098
9098
My questions are:
Why are only three PIDs printed? There are four echos in my code.
Why are all three PIDs the same? The subshell's PID should obviously differ from its parent's.
Thanks a lot for answers :)
First, the assignment captures standard output of the child and puts it into var, rather than printing it:
var=$(echo $$;sleep 4)
This can be seen with:
$ xyzzy=$(echo hello)
$ echo $xyzzy
hello
Secondly, all those $$ variables are evaluated in the current shell which means they're turned into the current PID before any children start. The children see the PID that has already been generated. In other words, the children are executing echo 9098 rather than echo $$.
If you want the PID of the child, you have to prevent translation in the parent, such as by using single quotes:
bash -c 'echo $$'
echo "one.sh $$"
echo `eval echo '$$'`
I am expecting the above to print different PIDs, but it doesn't, even though it does create a child process (verified by adding a sleep inside the backticks).
echo "one.sh $$"
echo `eval "echo '$$'";sleep 10`
Executing the above from a script and running ps shows two one.sh processes (the name of the script) plus the sleep.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
test 12685 0.0 0.0 8720 1012 pts/15 S+ 13:50 0:00 \_ bash one.sh
test 12686 0.0 0.0 8720 604 pts/15 S+ 13:50 0:00 \_ bash one.sh
test 12687 0.0 0.0 3804 452 pts/15 S+ 13:50 0:00 \_ sleep 10
This is the output produced
one.sh 12685
12685
Not sure what I am missing.
The solution is $!. As in:
#!/bin/bash
echo "parent" $$
yes > /dev/null &
echo "child" $!
Output:
$ ./prueba.sh
parent 30207
child 30209
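If the goal is really to see the subshell's own PID, one option (a sketch, assuming bash 4.0 or newer, which introduced BASHPID) is:
#!/bin/bash
# $$ keeps the parent's PID even inside $( ... ) or ( ... ),
# while BASHPID reports the PID of the shell actually executing the command.
echo "parent \$\$:       $$"
echo "subshell \$\$:      $(echo $$)"       # same as the parent
echo "subshell BASHPID:  $(echo $BASHPID)" # differs from the parent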

Init.d script hanging

I have an init.d script that looks like:
#!/bin/bash
# chkconfig 345 85 60
# description: startup script for swapi
# processname: swapi
LDIR=/var/www/html/private/daemon
EXEC=swapi.php
PIDF=/var/run/swapi.pid
IEXE=/etc/init.d/swapi
### BEGIN INIT INFO
# Provides: swapi
# Required-Start: $local_fs
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: startup script for swapi
# Description: startup script for swapi.php which processes actionq into switch
### END INIT INFO
if [ ! -f $LDIR/$EXEC ]
then
    echo "swapi was not found at $LDIR/$EXEC"
    exit
fi
case "$1" in
start)
    if [ -f $PIDF ]
    then
        echo "swapi is currently running. Killing running process..."
        $IEXE stop
    fi
    $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
    echo $MYPID > $PIDF
    echo "swapi is now running."
    ;;
stop)
    if [ -f $PIDF ]
    then
        echo "Stopping swapi."
        PID_2=`cat $PIDF`
        if [ ! -z "`ps -f -p $PID_2 | grep -v grep | grep 'swapi'`" ]
        then
            kill -9 $PID_2
        fi
        rm -f $PIDF
    else
        echo "swapi is not running, cannot stop it. Aborting now..."
    fi
    ;;
force-reload|restart)
    $0 stop
    $0 start
    ;;
*)
    echo "Use: /etc/init.d/swapi {start|stop|restart|force-reload}"
    exit 1
esac
And then I have a keepalive cronjob that calls this if the pid goes down. The problem is that the keepalive script hangs whenever I run it as a cron job (i.e. run-parts /var/www/html/private/fivemin), the keepalive script being in /var/www/html/private/fivemin.
Is there something funky in my init.d script that I am missing?
I have been racking my brain on this problem for hours now! I am on CentOS 4, btw.
Thanks for any help.
-Eric
EDIT:
The keepalive/cronjob script was simplified for testing to a simple:
#!/usr/bin/php
<?
exec("/etc/init.d/swapi start");
?>
The strange thing is that the error output from swapi.php ends up in /var/spool/mail like normal cron output, even though I have all the output being dumped into swapi.log in the init.d script.
When I run keepalive.php from the cli (as root from /) it operates exactly as I would expect it to.
When keepalive runs, ps aux | grep php looks like:
root 4525 0.0 0.0 5416 584 ? S 15:10 0:00 awk -v progname=/var/www/html/private/fivemin/keepalive.php progname { print progname ":\n" progname=""; } { print; }
root 4527 0.7 1.4 65184 14264 ? S 15:10 0:00 /usr/bin/php /var/www/html/private/daemon/swapi.php
And If I do a:
/etc/init.d/swapi stop
from the cli then both programs are no longer listed.
Swapi ls -l looks like:
-rwxr-xr-x 1 5500 5500 33148 Aug 29 15:07 swapi.php
Here is what the crontab looks like:
*/5 * * * * root run-parts /var/www/html/private/fivemin
Here is the first bit of swapi.php
#!/usr/bin/php
<?
chdir(dirname( __FILE__ ));
include("../../config/db.php");
include("../../config/sql.php");
include("../../config/config.php");
include("config_local.php");
include("../../config/msg.php");
include("../../include/functions.php");
set_time_limit(0);
echo "starting # ".date("Ymd.Gi")."...\n";
$actionstr = "";
while(TRUE){
I modified the init.d script and put the init info above the variable declarations, but it did not make a difference.
The answer was that bash was staying open because my init.d script was not redirecting the stderr output. I have now changed it to
$LDIR/$EXEC &> $LDIR/swapi.log & MYPID=$!
And it now functions perfectly.
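For what it's worth, &> is a bash-specific shorthand; the portable Bourne-shell equivalent of that line would be something like:
$LDIR/$EXEC > $LDIR/swapi.log 2>&1 & MYPID=$!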
Thanks for the help everyone!
When you run a command from cron, the environment is not the same as when it is run from the bash command line after logging in. I would suspect in this case that the sh is not able to understand swapi.php as a PHP command.
Do a
which php
to see where your php binary is and add this to your init.d script
PHP=/usr/bin/php
...
$PHP $LDIR/$EXEC >> $LDIR/swapi.log & MYPID=$!
Probably not that important, but you may want to redirect the output on the cron line itself, for instance:
0 * * * * /path/to/script >> /dev/null 2>&1
Make sure your script has the correct execution permissions, the right owner, and the first lines should look like this:
#!/usr/bin/php
<?php
