How to use "kill -STOP" in shell script? - linux

I want to use "kill -STOP" in a script to wait for the process to end
I wrote this script, but "kill -STOP" command doesn't pause the process
#!/bin/bash
echo $$ > ~/screen_finished
screen -S app -dm bash -c "sleep 5 ; kill -CONT `cat ~/screen_finished`"
kill -STOP $$
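One possible workaround, sketched here under the assumption that the goal is simply to block until the screen session finishes (the FIFO path below is illustrative), is to wait on a named pipe instead of relying on SIGSTOP/SIGCONT:
#!/bin/bash
# Sketch: block on a FIFO instead of stopping the shell with SIGSTOP.
fifo=$(mktemp -u)
mkfifo "$fifo"
screen -S app -dm bash -c "sleep 5 ; echo done > '$fifo'"
read -r _ < "$fifo"   # blocks here until the screen session writes to the pipe
rm -f "$fifo"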

Related

Kill -2 or Kill -INT, how does it kill a process

I have two shell scripts:
a.sh is:
#!/bin/bash
read -p "Enter name": name
echo $name
b.sh is:
#!/bin/bash
for ((i=0; i<100; i++))
do
echo "$i"
sleep 1s
done
Then I open another shell and run pkill -2 a.sh and pkill -2 b.sh.
The first one gets killed, but the second one does not.
What does pkill -2 do?
kill -2 sends an interrupt (2 is the signal number for SIGINT). This wakes up the current sleep call, but the loop then continues. If you send a 15 (SIGTERM) instead, the process should terminate.
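As an illustrative sketch (an addition, not part of the answer above), trapping SIGINT inside b.sh makes the loop stop explicitly when it receives the interrupt from pkill -2:
#!/bin/bash
# Sketch: handle SIGINT explicitly so that pkill -2 b.sh ends the loop.
# bash runs the trap once the current foreground sleep returns, then exits with 130.
trap 'echo "caught SIGINT, exiting"; exit 130' INT
for ((i=0; i<100; i++))
do
echo "$i"
sleep 1s
done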

finding the process group id created through setsid

In a shell script, I see that setsid can be used to create a new process group, but I am not able to find a reliable way to get the group id after creation. My requirement is simple: launch a process and, after it is done, clean up any descendants. I don't want to kill the main process, so I have to wait for it to end; after that I can kill the leftover child processes with kill -- -pgid, provided I somehow got the group id. The missing piece is: how do I get the group id?
This is the script I finally came up with. Hope it helps someone.
$! gives the pid, and ps is then used to find its process group id.
ps prints the pgid with a leading space, so the next line's variable expansion strips it.
Finally, after waiting for the main process, it kills the group.
#!/bin/sh -x
setsid "$#" &
pid=$!
gidspace=$(ps -o pgid= $pid)
gid="${gidspace## }"
echo "gid $gid"
echo "waiting"
wait $pid
ps -s $gid -o pid,ppid,pgid,command
kill -- -$gid
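For example, assuming the script above is saved as run-group.sh (a name chosen here just for illustration), it is invoked with the command and its arguments:
./run-group.sh some_long_running_command arg1 arg2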
I managed to do it with a coproc, plus a sleep to make sure we have enough time to read back the pgid. This is bash-specific of course, and the only way to avoid the hackish sleep inside the coproc is to write to a temp file and wait for the command to terminate (no need for coproc then).
Using a coproc
Note that I open filehandle 3 to write the pgid back to the parent shell and close it before executing the command.
#!/bin/bash -x
coproc setsid bash -c 'ps -o pgid= $BASHPID >&3; exec 3>&-; exec "$@" & sleep 1' -- "$@" 3>&1
read -u ${COPROC[0]} gid
echo "gid $gid"
ps -s $gid -o pid,ppid,pgid,command
kill -- -$gid
Using a temp file
To avoid having to pass the temp file name to the subshell (and the risk that the parent dies and removes it before the child writes to it), I again open fd 3 so the child can write its pgid to it.
#!/bin/bash -x
t=$(mktemp)
trap 'rm -f "$t"' EXIT
exec {fh}>"$t"
setsid bash -c 'ps -o pgid= $BASHPID >&3; exec 3>&-; exec "$@" &' -- "$@" 3>&${fh}
read gid <$t
echo "gid $gid"
ps -s $gid -o pid,ppid,pgid,command
kill -- -$gid

Bash: Killing all processes in subprocess

In bash I can get the process ID (pid) of the last subprocess through the $! variable. I can then kill this subprocess before it finishes:
(sleep 5) & pid=$!
kill -9 $pid
This works as advertised. If I now extend the subprocess with more commands after the sleep, the sleep command continues after the subprocess is killed, even though the other commands never get executed.
As an example, consider the following, which spins up a subprocess and monitors its assassination using ps:
# Start subprocess and get its pid
(sleep 5; echo done) & pid=$!
# grep for subprocess
echo "grep before kill:"
ps aux | grep "$pid\|sleep 5"
# Kill the subprocess
echo
echo "Killing process $pid"
kill -9 $pid
# grep for subprocess
echo
echo "grep after kill:"
ps aux | grep "$pid\|sleep 5"
# Wait for sleep to finish
sleep 6
# grep for subprocess
echo
echo "grep after sleep is finished:"
ps aux | grep "$pid\|sleep 5"
If I save this to a file named filename and run it, I get this printout:
grep before kill:
username 7464 <...> bash filename
username 7466 <...> sleep 5
username 7467 <...> grep 7464\|sleep 5
Killing process 7464
grep after kill:
username 7466 <...> sleep 5
username 7469 <...> grep 7464\|sleep 5
grep after sleep is finished:
username 7472 <...> grep 7464\|sleep 5
where unimportant information from the ps command is replaced with <...>. It looks like the kill has killed the overall bash execution of filename, while leaving sleep running.
How can I correctly kill the entire subprocess?
You can set a trap in the subshell to kill any active jobs before exiting:
(trap 'kill $(jobs -p)' EXIT; sleep 5; echo done ) & pid=$!
I don't know exactly why that sleep process gets orphaned, but instead of kill you can use pkill with the -P flag to also kill all child processes:
pkill -TERM -P $pid
EDIT:
That means that in order to kill a process and all its children you should instead use:
CPIDS=`pgrep -P $pid` # gets pids of child processes
kill -9 $pid
for cpid in $CPIDS ; do kill -9 $cpid ; done
You can have a look at rkill, which seems to meet your requirements:
http://www.unix.com/man-page/debian/1/rkill/
rkill [-SIG] pid/name...
When invoked as rkill, this utility does not display information about the processes, but
sends them all a signal instead. If not specified on the command line, a terminate
(SIGTERM) signal is sent.
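For example, assuming rkill is installed on the system, a typical invocation might look like:
rkill -9 $pid    # send SIGKILL to the process and all of its descendants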

Bash script iterate over PID's and kill items

I am trying to kill all occurrences of a process, but what actually happens is that the iteration stops after the first item. What's wrong here?
#!/usr/bin/env bash
SUPERVISORCLS=($(pidof supervisorctl))
for i in "${SUPERVISORCLS[#]}"
do
echo $i
exec sudo kill -9 ${i}
done
Before that I tried something like this as a restart script, but it also never ran in full; only one of the if blocks was ever executed:
ERROR0=$(sudo supervisord -c /etc/supervisor/supervisord.conf 2>&1)
if [ "$ERROR0" ];then
exec sudo pkill supervisord
exec sudo supervisord -c /etc/supervisor/supervisord.conf
echo restarted supervisord
fi
ERROR1=$(sudo supervisord -c /etc/supervisor/supervisord.conf 2>&1)
if [ "$ERROR1" ];then
exec sudo pkill -9 supervisorctl
exec sudo supervisorctl -c /etc/supervisor/supervisord.conf
echo restarted supervisorctl
fi
exec replaces your process with the executable given as its argument, so nothing in your script after an exec will ever run; your original process no longer exists. In the first example your process is no longer your script but kill, and in the second it is pkill.
To fix it, just remove exec from all those lines; it's not needed. When executing a script the shell already runs the command on every line, you don't have to tell it to do so. A fixed version of the first loop is sketched below.
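This keeps the sudo kill -9 from the question and only drops exec, so every PID returned by pidof is processed:
#!/usr/bin/env bash
# Same loop as in the question, with exec removed so the loop runs to completion.
SUPERVISORCLS=($(pidof supervisorctl))
for i in "${SUPERVISORCLS[@]}"
do
echo "$i"
sudo kill -9 "${i}"
done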

Cleanup after the background process finished its work on Linux

I have a script-launcher (bash) that executes Python scripts in the background, so I can start it and then close terminal/ssh connection, leaving the script working.
It accepts the name of the script to run and optional arguments to pass there. Then it starts the Python script (detached) and creates a file with PID (of the Python script) in the same directory, so I can later reconnect to the server and kill this background process by using the PID from this file.
Also this PID file is used to prevent the same script from being started if it's already running (singleton).
The problem is that I can't figure out how to delete this PID file after the Python script has finished its work. I need this implemented in the bash script, not in Python (since I want it to work for all scripts) and not with the screen tool. The supervisor (whatever deletes the PID file after the script finishes) should itself also run in the background (!), so I can do the same thing: close the terminal session.
What I've tried so far:
#!/bin/bash
PIDFILE=$1.pid
if [ -f $PIDFILE ]; then
echo "Process is already running, PID: $(< $PIDFILE)"
exit 1
else
nohup python $1 "${#:2}" > /dev/null 2>&1 &
PID=$!
echo $PID > $PIDFILE
# supervisor
nohup sh -c "wait $PID; rm -f $PIDFILE" > /dev/null 2>&1 &
fi
In this example the PID file is deleted immediately, because the wait command returns immediately (I think this is because the new process isn't a child of the shell doing the waiting, so wait doesn't behave here the way I expect).
Do you have any thoughts about how it can be implemented?
Basically, I need something to replace this line
nohup sh -c "wait $PID; rm -f $PIDFILE" > /dev/null 2>&1 &
that will wait until the previously started script (the Python one in this case) finishes its work and then delete the PID file.
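That wait limitation is easy to confirm; the snippet below is only an illustrative check, not part of the launcher:
sleep 30 &                      # child of the current shell
pid=$!
( wait "$pid" )                 # a subshell is a different process, so it cannot wait for $pid
echo "wait in a subshell returned $?"   # bash reports the pid is not a child of this shell; status is 127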
UPD: OK, the problem was indeed with the wait command: it can't wait for non-child processes. The working solution is to replace it with a while loop that polls /proc:
#!/bin/bash
function cleanup {
while [ -e /proc/$1 ]; do
sleep 1;
done
rm -f $PIDFILE
}
PIDFILE=$1.pid
if [ -f $PIDFILE ]; then
echo "Process is already running, PID: $(< $PIDFILE)"
exit 1
else
python $1 "${#:2}" > /dev/null 2>&1 &
PID=$!
echo $PID > $PIDFILE
cleanup $PID > /dev/null 2>&1 &
disown
fi
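Assuming the launcher above is saved as launch.sh (an illustrative name) and is used to run a hypothetical worker.py, a session would look like:
./launch.sh worker.py --interval 10
cat worker.py.pid    # PID of the detached Python process; the file disappears once worker.py exits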
For shell scripts, use traps:
#!/bin/bash
function finish {
wait $PID
rm $PIDFILE > /dev/null 2>&1 &
}
trap finish EXIT
trap "finish; exit 2" SIGINT
PIDFILE=$1.pid
if [ -f $PIDFILE ]; then
echo "Process is already running, PID: $(< $PIDFILE)"
exit 1
else
nohup python $1 "${#:2}" > /dev/null 2>&1 &
PID=$!
echo $PID > $PIDFILE
fi
Traps allow you to catch signals (and bash's EXIT pseudo-signal) and respond to them, so in the code above the EXIT trap (normal completion) will execute finish, which waits for the Python process and then removes $PIDFILE. On SIGINT (user-requested exit with Ctrl-C), the script removes $PIDFILE and exits with status 2.
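A minimal, self-contained sketch of how these two traps interact (the temp file here is only for illustration):
#!/bin/bash
# Minimal trap demo: the EXIT trap fires on normal exit, and also after the INT trap calls exit.
tmp=$(mktemp)
trap 'rm -f "$tmp"; echo "cleaned up $tmp"' EXIT
trap 'echo "interrupted"; exit 2' INT
echo "working, press Ctrl-C or wait"
sleep 10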
Directly in Python: if you want to handle it manually, take a look at atexit, which lets you register cleanup functions that run when the interpreter exits:
import atexit
import os
def cleanup():
    os.unlink(pidfile)

atexit.register(cleanup)
Or, to automate pidfile handling, check out pid, which will prevent simultaneous execution all on its own:
from pid import PidFile
with PidFile():
    do_something()
or better yet
from pid.decorator import pidfile

@pidfile()
def main():
    pass

if __name__ == "__main__":
    main()
