Not able to stop init script using start-stop-daemon - linux

I want to use start-stop-daemon to stop the scripts it has started, but currently the scripts are not killed, and so I have resorted to hacking around it:
#!/bin/sh
case $1 in
start)
    start-stop-daemon --start --background -c myapp --exec /home/myapp/dev-myapp.sh
    ;;
stop)
    # couldn't get this to work - hacking around it
    #start-stop-daemon --stop -c myapp --exec /home/myapp/dev-myapp.sh
    # hack
    killall dev-myapp.sh
    sleep 3
    killall -9 dev-myapp.sh
    ;;
restart)
    $0 stop
    $0 start
    ;;
*)
    echo "No such command."
    echo "Usage: $0 start|stop|restart"
esac
exit 0
How can I get the script to kill the bash scripts it has started using start-stop-daemon?
edit: I assume the failure to stop the processes has to do with this section from the man page:
-x, --exec executable
Check for processes that are instances of this executable. The executable argument should be an absolute pathname. Note: this
might not work as intended with interpreted scripts, as the executable will point to the interpreter. Take into account processes
running from inside a chroot will also be matched, so other match restrictions might be needed.
So I might be forced to rely on name detection instead, but I don't know what the process name is ... Is this the whole absolute filename, the filename alone, or something else?

Regarding your comment, if you're willing to extend the shell script you are running, you can use a pidfile. In practice, maybe you want to make it an option to your script, but as an example, this line would be sufficient:
echo $$ >/var/run/dev-myapp.sh.pid
Then, use these matching parameters for start-stop-daemon, if necessary replacing /bin/bash with whatever shell executes your script:
-p /var/run/dev-myapp.sh.pid -x /bin/bash
(To clarify: The process name of the script is that of the script interpreter, in your case, the shell)
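Putting both pieces together, here is a minimal sketch of the stop branch (assuming dev-myapp.sh writes the pidfile as above, and that /bin/bash really is the interpreter named in its shebang line; adjust -x otherwise. --retry replaces the manual sleep and kill -9):
#!/bin/sh
PIDFILE=/var/run/dev-myapp.sh.pid
case $1 in
start)
    start-stop-daemon --start --background -c myapp --exec /home/myapp/dev-myapp.sh
    ;;
stop)
    # match on the pidfile plus the interpreter, not on the script path
    start-stop-daemon --stop --retry TERM/3/KILL/5 -p "$PIDFILE" -x /bin/bash
    rm -f "$PIDFILE"
    ;;
esac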

Related

start-stop-daemon starting multiple processes

I am trying to use start-stop-daemon to start a process that runs in the background. To my knowledge, start-stop-daemon is supposed to prevent a second process from being started if one is already running. The script I am running is rather simple for now:
#!/bin/sh
while true; do
    date > /home/pi/test/test.txt
    sleep 10
done
I am starting the script using start-stop-daemon --start -v -b -m --pidfile /var/run/test.pid --exec /home/pi/test/test.sh
I am able to successfully stop the script using start-stop-daemon --stop -v --pidfile /var/run/test.pid
However, if I run the start command twice, it will start two processes, instead of just one that I was expecting. Does the start command check the pid file before starting the process, or is there something else that needs to be done for that to happen?
The man page of start-stop-daemon contains a special warning on the usage of the --exec option with scripts.
-x, --exec executable
Check for processes that are instances of this executable. The executable
argument should be an absolute pathname. Note: this might not work as
intended with interpreted scripts, as the executable will point to the
interpreter.
When you run a script, the process that is actually launched is the interpreter named in the shebang line of the script, so the --exec check matches the interpreter rather than your script. That is why start-stop-daemon does not see an already-running instance and happily starts a second one.
BTW, you can use the -t option to debug this kind of issue with start-stop-daemon.
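As a hedged workaround (the exact matching behaviour varies between start-stop-daemon versions, so check with -t first): drop --exec, name the script with --startas, and let the pidfile alone do the matching, e.g.
# dry run: -t/--test prints what would be done without actually starting anything
start-stop-daemon --start -t -b -m --pidfile /var/run/test.pid --startas /home/pi/test/test.sh

# real start; a second invocation should now find the pid recorded in the pidfile
# still running and refuse to start another copy
start-stop-daemon --start -v -b -m --pidfile /var/run/test.pid --startas /home/pi/test/test.sh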

Launching a bash shell from a sudo-ed environment

Apologies for the confusing question title. I am trying to launch an interactive bash shell from a shell script (say shel2.sh) which has been launched by a parent script (shel1.sh) in a sudo-ed environment. (I am creating a guided deployment script for my software which needs to be installed as super-user, hence the sudo, but may need the user to access the shell.)
Here's shel1.sh
#!/bin/bash
set -x
sudo bash << EOF
echo $?
./shel2.sh
EOF
echo shel1 done
And here's shel2.sh
#!/bin/bash
set -x
bash --norc --verbose --noprofile -i
echo $?
echo done
I expected this to launch an interactive bash shell which waits for my input before returning to shel1.sh. This is what I see:
+ ./shel1.sh
+ sudo bash
0
+ bash --norc --verbose --noprofile -i
bash-4.3# exit
+ echo 0
0
+ echo done
done
+ echo shel1 done
shel1 done
The bash-4.3# prompt receives an exit automatically and quits. Interestingly, if I invoke the bash shell with -l (or --login) the automatic entry is logout!
Can someone explain what is happening here?
When you use a here document, you are tying the standard input of the shell, and of its spawned child processes, to the here document.
You can avoid using a here document in many situations. For example, replace the here document with a single-quoted string.
#!/bin/bash
set -x
sudo bash -c '
# Aside: How is this actually useful?
echo $?
# Spawned script inherits the stdin of "sudo bash"
./shel2.sh'
echo shel1 done
Without more details, it's hard to see where exactly you want to go with this, but most modern Linux platforms have package managers which allow all kinds of hooks for installation, so that you would typically not need to do this sort of thing. Have you looked into that?
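If the here document has to stay for other reasons, another sketch (assuming shel1.sh is run from a real terminal) is to point the interactive child's stdin back at the controlling terminal explicitly:
#!/bin/bash
set -x
sudo bash << EOF
# stdin of this shell is the here document, so give the child the terminal back
./shel2.sh < /dev/tty
EOF
echo shel1 done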

autolaunch of application in ubuntu

I created the script file -
#!/bin/sh
echo "my application is here"
./helloworld # helloworld is our application
After creating the script file I copied it into /etc/init.d.
I gave the command chmod +x /etc/init.d/vcc_app (vcc_app is the name of the script which I have created).
Then I gave the command ln -s /etc/init.d/vcc_app /etc/rc.d/vcc_app (rc.d is the run level directory).
But when I reboot the board my application is not executed automatically. Can anyone help me out?
Scripts that are in /etc/init.d need to be LSB-compliant.
If you simply want to automatically run commands at the end of the boot process, try placing them in /etc/rc.local instead.
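For example, an /etc/rc.local along these lines runs your program once at the end of boot (the path below is only a placeholder; keep the file executable and keep the final exit 0):
#!/bin/sh -e
echo "my application is here"
/home/user/helloworld &   # hypothetical absolute path to your application
exit 0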
Not all Linux systems use the same init daemon (Ubuntu uses Upstart: http://upstart.ubuntu.com/getting-started.html), but they all use start and stop functions in the script. Other common functions are status and restart, but again, there is no true across-the-board standard. E.g.:
#!/bin/sh
start () {
    echo "application started"
    ./helloworld # you should use an absolute path here instead of ./
}
stop () {
    : # add the commands that stop your application here
}
case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
*)
    echo "Usage: $0 start|stop"
esac
exit $?
The last bit is a switch based on the first command line arg, since init will invoke the script as myrcscript start.
In order to use stop() (and the also often useful restart()) you need to keep, or be able to get, the pid of the process launched by start(); sometimes this is done with a little "pid file" in /tmp (a text file containing the pid, e.g. /tmp/myscript.pid, created in start()).
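A hedged sketch of that pid-file pattern (the paths are placeholders only):
start () {
    echo "application started"
    /path/to/helloworld &             # placeholder for your application
    echo $! > /tmp/myscript.pid       # remember the child's pid
}

stop () {
    if [ -f /tmp/myscript.pid ]; then
        kill "$(cat /tmp/myscript.pid)"
        rm -f /tmp/myscript.pid
    fi
}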
The "upstart" init daemon used on Ubuntu has its own specific features, but unless you need to use them, just keep it stop/start minimal and it will (probably) work anywhere.

Kill bash script foreground children when a signal comes

I am wrapping a fastcgi app in a bash script like this:
#!/bin/bash
# stuff
./fastcgi_bin
# stuff
As bash only executes traps for signals after the foreground command finishes, I can't just kill -TERM the script's pid, because the fastcgi app will be kept alive.
I've tried sending the binary to the background:
#!/bin/bash
# stuff
./fastcgi_bin &
PID=$!
trap "kill $PID" TERM
# stuff
But if I do it like this, apparently stdin and stdout aren't properly redirected because it does not connect with lighttpd's mod_fastcgi; the foreground version does work.
EDIT: I've been looking at the problem and this happens because bash redirects /dev/null to stdin when a program is launched in the background, so any way of avoiding this should solve my problem as well.
Any hint on how to solve this?
There are some options that come to my mind:
When a process is launched from a shell script, both belong to the same process group. Killing the parent process leaves the children alive, so the whole process group has to be killed. This can be achieved by passing the negated PGID (process group ID) to kill; for a process group leader, the PGID equals its PID. E.g.: kill -TERM -$PARENT_PID
Do not execute the binary as a child, but replace the script process with it using exec. You lose the ability to execute stuff afterwards though, because exec completely replaces the parent process.
Do not kill the shell script process, but the FastCGI binary. Then, in the script, examine the return code and act accordingly. E.g.: ./fastcgi_bin || exit -1
Depending on how mod_fastcgi handles worker processes, only the second option might be viable.
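A minimal sketch of the first option, where wrapper.sh stands in for the question's script (this assumes the wrapper ends up as its own process group leader, which is what happens when a job-control shell launches it as a job; process managers may group things differently, so verify with ps -o pid,pgid,cmd):
# start the wrapper as a background job; it becomes the leader of a new process group
./wrapper.sh &
WRAPPER_PID=$!

# later: the leading minus sends TERM to the whole group, wrapper and fastcgi_bin alike
kill -TERM -- -"$WRAPPER_PID"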
I have no idea if this is an option for you or not, but since you have a bounty I am assuming you might go for ideas that are outside the box.
Could you rewrite the bash script in Perl? Perl has several methods of managing child processes. You can read perldoc perlipc and more specifics in the core modules IPC::Open2 and IPC::Open3.
I don't know how this will interface with lighttpd etc or if there is more functionality in this approach, but at least it gives you some more flexibility and some more to read in your hunt.
I'm not sure I fully get your point, but here's what I tried and the process seems to be able to manage the trap (call it trap.sh):
#!/bin/bash
trap "echo trap activated" TERM INT
echo begin
time sleep 60
echo end
Start it:
./trap.sh &
And play with it (only one of those commands at once):
kill -9 %1
kill -15 %1
Or start in foreground:
./trap.sh
And interrupt with control-C.
Seems to work for me.
What exactly does not work for you?
I wrote this script just minutes ago to kill a bash script and all of its children...
#!/bin/bash
# This script will kill all the child process ids for a given pid
# based on http://www.unix.com/unix-dummies-questions-answers/5245-script-kill-all-child-process-given-pid.html
# note: it kills the descendants of the given pid; kill the pid itself separately if needed
ppid=$1
if [ -z "$ppid" ] ; then
    echo "This script kills all the children of the process identified by pid"
    echo "Usage: $0 pid"
    exit
fi
for i in `ps j | awk '$3 == '$ppid' { print $2 }'`
do
    $0 $i       # recurse into each child's own subtree
    kill -9 $i
done
Make sure the script is executable, or you will get an error on the $0 $i
You can override the implicit </dev/null for a background process by redirecting stdin yourself, for example:
sh -c 'exec 3<&0; { read x; echo "[$x]"; } <&3 3<&- & exec 3<&-; wait'
Try keeping the original stdin using ./fastcgi_bin 0<&0 &:
#!/bin/bash
# stuff
./fastcgi_bin 0<&0 &
PID=$!
trap "kill $PID" TERM
# stuff
# test
#sh -c 'sleep 10 & lsof -p ${!}'
#sh -c 'sleep 10 0<&0 & lsof -p ${!}'
You can do that with a coprocess.
Edit: well, coprocesses are background processes that can have stdin and stdout open (because bash prepares fifos for them). But you still need to read/write to those fifos, and the only useful primitive for that is bash's read (possibly with a timeout or a file descriptor); nothing robust enough for a CGI. So on second thought, my advice would be not to do this thing in bash. Doing the extra work in the fastcgi, or in an HTTP wrapper like WSGI, would be more convenient.

How do I translate init.d scripts from Ubuntu/Debian Linux to Solaris?

I have several init.d scripts that I'm using to start some daemons. Most of these scripts I've found on the internet and they all use start-stop-daemon. My understanding is that "start-stop-daemon" is a command that is specific to Linux or BSD distros and is not available on Solaris.
What is the best way to translate my init.d scripts from Linux to Solaris? Is there a command roughly equivalent to start-stop-daemon that I can use?
Since I'm not much of a Solaris user, I'm willing to admit upfront that I don't even know if my question is inherently invalid or not.
start-stop-daemon is a Linux thing, and not used that much on Solaris. I guess you can port the command though, if you want to reuse your init scripts.
Otherwise it depends on what version of Solaris you are using. Starting with Solaris 10 (and also OpenSolaris) there is a new startup framework, the Solaris Service Management Facility (SMF), which you configure with the commands svcs, svccfg and svcadm.
See http://www.oracle.com/technetwork/server-storage/solaris/overview/servicemgmthowto-jsp-135655.html for more information.
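For orientation, day-to-day SMF management looks roughly like this (network/ssh is only an example service name; svcs -a lists what exists on your machine):
svcs -a                       # list all services and their current states
svcs -l network/ssh           # detailed status of one service
svcadm enable network/ssh     # start now and on every boot
svcadm disable network/ssh    # stop and keep it disabled
svcadm restart network/ssh    # restart a running service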
For older Solaris releases most init scripts are written in pure shell without any helper commands like start-stop-daemon.
On Solaris 10 or later using SMF is recommended, but on an earlier release you'd create an init script in /etc/init.d and link to it from the rcX.d directories. Here's a bare-bones example of an init script for launching an rsync daemon:
#!/sbin/sh
startcmd () {
    /usr/local/bin/rsync --daemon # REPLACE WITH YOUR COMMANDS
}
stopcmd () {
    pkill -f "/usr/local/bin/rsync --daemon" # REPLACE WITH YOUR COMMANDS
}
case "$1" in
'start')
    startcmd
    ;;
'stop')
    stopcmd
    ;;
'restart')
    stopcmd
    sleep 1
    startcmd
    ;;
*)
    echo "Usage: $0 { start | stop | restart }"
    exit 1
    ;;
esac
Create a link to the script from each rcX.d directory (following the "S"/"K" convention).
ln rsync /etc/rc3.d/S91rsync
for i in `ls -1d /etc/rc*.d | grep -v 3`; do ln rsync $i/K02rsync; done
See the README in each rcX.d directory and check the man page for init.d. Here's a bit of the man page:
File names in rc?.d directories are of the form [SK]nn, where S means start this job, K means kill this job, and nn is the relative sequence number for killing or starting the job.
When entering a state (init S,0,2,3,etc.) the rc[S0-6] script executes those scripts in /etc/rc[S0-6].d that are prefixed with K followed by those scripts prefixed with S. When executing each script in one of the /etc/rc[S0-6] directories, the /sbin/rc[S0-6] script passes a single argument. It passes the argument 'stop' for scripts prefixed with K and the argument 'start' for scripts prefixed with S. There is no harm in applying the same sequence number to multiple scripts.
