How to send signal to a bash script from another shell - linux

I start the following script in a bash shell (let's say shell1) in the foreground, and from another shell (shell2) I send kill -SIGUSR1 "$(pidof test_trap.sh)". Nothing happens. What am I doing wrong? I tried other signals (SIGQUIT etc.) but the result is the same.
test_trap.sh
function iAmDone { echo "Trapped Signal"; exit 0; }
trap iAmDone SIGUSR1
echo "Running... "
tail -f /dev/null # Do nothing
In shell1
./test_trap.sh
In shell2
kill -SIGUSR1 $(ps aux | grep '[t]est_trap' | awk '{print $2}')

The signal is delivered, but bash does not run the trap until the current foreground command (tail) finishes - and tail -f never finishes. Try:
tail -f /dev/null &
wait
The wait builtin returns as soon as a trapped signal arrives, so the trap executes without waiting for tail to complete. But if you then exit, the backgrounded tail will be left running, so you'll probably want a kill $! in the trap.
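Putting the pieces together, a corrected test_trap.sh might look like the sketch below. The self-signalling subshell is there only so the sketch can demonstrate itself; normally you would send SIGUSR1 from shell2.

```shell
#!/bin/bash
# test_trap.sh - exits cleanly when SIGUSR1 arrives
iAmDone() {
    echo "Trapped Signal"
    kill "$TAIL_PID" 2>/dev/null   # don't leave the tail behind
    exit 0
}
trap iAmDone SIGUSR1

echo "Running... "
tail -f /dev/null &                # the "work", now in the background
TAIL_PID=$!

# Demo only: stands in for "kill -SIGUSR1 <pid>" issued from shell2.
( sleep 1; kill -SIGUSR1 $$ ) &

wait "$TAIL_PID"                   # returns as soon as the trapped signal arrives
```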

Related

linux program does not start after sh gets killed

I need to start a program that uses a serial port from within a bash script. The catch is that, prior to doing that, I need to kill the "-sh" process in order to release the serial port it occupies (I use a serial console, and this is the only way to communicate with Linux). When I kill "-sh", my program doesn't start, although the bash script continues to execute. If I don't kill "-sh", my program starts normally. See the code below for details:
#!/bin/bash
SH_PID=`ps -o comm,pid | egrep -e '^sh' | awk -F " " '{print $2}'`
kill -9 $SH_PID
myprog #start my program
while true
do
sleep 10
echo "script is running..." > /dev/ttyS0
done
Any thoughts?
What if you kill your shell after running your program in the background:
#!/bin/bash
SH_PID=`ps -o comm,pid | egrep -e '^sh' | awk -F " " '{print $2}'`
nohup myprog & #start my program in background
kill -HUP $SH_PID
while true
do
sleep 10
echo "script is running..." > /dev/ttyS0
done
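A related option, if killing the console shell proves fragile, is to detach the program from the session entirely with setsid. In this hedged sketch, `sleep 30` stands in for myprog so it is runnable anywhere; the point is that the detached process ends up in its own session and process group, out of reach of signals aimed at the console shell's group.

```shell
#!/bin/bash
# Hedged sketch: `sleep 30` stands in for myprog. setsid starts the
# program in a new session, so it no longer shares a process group (or
# controlling terminal) with the console shell being killed.
setsid sleep 30 </dev/null >/dev/null 2>&1 &
CHILD=$!
sleep 1
MY_PG=$(ps -o pgid= -p $$)
CHILD_PG=$(ps -o pgid= -p "$CHILD")
[ "$MY_PG" != "$CHILD_PG" ] && echo "detached from our process group"
kill "$CHILD" 2>/dev/null   # clean up the stand-in
exit 0
```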

How can I kill piped background processes?

Example session:
- cat myscript.sh
#!/bin/bash
tail -f example.log | grep "foobar" &
echo "code goes here"
# here is where I want tail and grep to die
echo "more code here"
- ./myscript.sh
- ps
PID TTY TIME CMD
15707 pts/8 00:00:00 bash
20700 pts/8 00:00:00 tail
20701 pts/8 00:00:00 grep
21307 pts/8 00:00:00 ps
As you can see, tail and grep are still running.
Something like the following would be great
#!/bin/bash
tail -f example.log | grep "foobar" &
PID=$!
echo "code goes here"
kill $PID
echo "more code here"
But that only kills grep, not tail.
Although the entire pipeline is executed in the background, only the PID of the grep process is stored in $!. You want to tell kill to kill the entire job instead. You can use %1, which will kill the first job started by the current shell.
#!/bin/bash
tail -f example.log | grep "foobar" &
echo "code goes here"
kill %1
echo "more code here"
Even if you kill only the grep process, the tail process should exit the next time it tries to write to its standard output, since the read end of the pipe was closed when grep exited. Depending on how often example.log is updated, that could be almost immediate, or it could take a while.
You could also just add kill %1 at the end of your script. That kills the first background job created, so there is no need to hunt for PIDs.
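A self-contained sketch of the jobspec approach (the log file name is arbitrary). After kill %1, neither process of the pipeline should survive:

```shell
#!/bin/bash
# Demo: kill %1 takes down the whole pipeline, not just grep.
: > /tmp/demo.log
tail -f /tmp/demo.log | grep "foobar" &
GREP_PID=$!              # $! holds only grep's PID
sleep 1
kill %1                  # kill job 1: tail AND grep
wait 2>/dev/null         # reap them so the checks below are accurate
kill -0 "$GREP_PID" 2>/dev/null || echo "grep gone"
pgrep -f "tail -f /tmp/demo.log" >/dev/null || echo "tail gone"
```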

Close the foreground process when a background process finishes

For instance, how would I kill tail when wget finishes?
#!/bin/bash
wget http://en.wikipedia.org/wiki/File:Example.jpg &
tail -f example.log
Perhaps this is better - I haven't tested it, though:
#!/bin/bash
LOGFILE=example.log
> $LOGFILE # truncate log file so tail begins reading at the beginning
tail -f $LOGFILE &
# launch tail and background it
PID=$!
# record the pid of the last command - in this case tail
wget --output-file=$LOGFILE http://en.wikipedia.org/wiki/File:Example.jpg
kill $PID
#launch wget and when finished kill program (tail) with PID
This relies on the fact that tail, although in the background, will still show its output on the console. It won't be as easy to redirect, though.
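The same pattern can be tried without a network: in this sketch, `sleep 2` stands in for the wget download, and the check at the end confirms the backgrounded tail is gone once the foreground job finishes.

```shell
#!/bin/bash
# Demo of "kill the background tail when the foreground job finishes";
# `sleep 2` stands in for wget.
: > /tmp/demo2.log
tail -f /tmp/demo2.log &
PID=$!                   # record the PID of tail
sleep 2                  # the foreground "download"
kill "$PID"              # download done: stop tail
wait 2>/dev/null         # reap the killed tail
kill -0 "$PID" 2>/dev/null || echo "tail stopped"
```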

terminate infinite loop initiated in remote server when exiting bash script

A script which executes commands in an infinite loop in the background:
<SOMETHING ELSE AT START OF SCRIPT>
cmd='while true;
do
ps aux | head;
sleep 1;
done > $FILE'
ssh root@$SERVER $cmd &
...
...
<SOME OTHER TASKS>
...
...
(At the end of this script, how do I kill the above snippet executing on the remote server?)
[Kindly note I don't want to wait, as the while loop is infinite.]
I have read and tried some posts from Stack Overflow, but could not find an exact solution for this problem.
Rather than an infinite loop, use a sentinel file:
cmd='while [ -r /tmp/somefile ];
do
# stuff
done > $FILE'
ssh root@$SERVER touch /tmp/somefile
ssh root@$SERVER $cmd &
# do other stuff
ssh root@$SERVER rm -f /tmp/somefile
This follows your current practice of putting the remote command in a variable, but the arguments against that cited elsewhere should be considered.
If you want to kill the ssh process running in background at the end of your script, just do:
kill $!
I assume this is the only (or the last) process you started in background.
Try following sequence
CTRL+Z
fg
CTRL+C
or
jobs
kill %jobspec
To kill everything belonging to the logged-in user, you could try:
whois=`w|grep $user|awk '{print $2}'`;user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'"
This lists all the processes owned by the user you just logged in as - to kill them, append |xargs kill -9:
whois=`w|grep $user|awk '{print $2}'`;user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'|xargs kill -9"
or have awk build the kill commands and feed them to a shell:
whois=`w|grep $user|awk '{print $2}'`;user=root; ssh $user@server -C "ps auwx|grep $whois|awk '{print \$2}'|awk '{print \"kill -9 \" \$1}'|/bin/sh"

Monitoring bash script won't terminate

I have a bash script whose job is to monitor a log file for the occurrence of a certain line. When the line is found, the script should send out an email warning and then terminate itself. For some reason, it keeps on running. How can I make sure the script below terminates?
#!/bin/sh
tail -n 0 -f output.err | grep --line-buffered "Exception" | while read line
do
echo "An exception has been detected!" | mail -s "ALERT" monitor#company.com
exit 0
done
The while read runs in a subshell (the last stage of the pipeline), and it is that subshell that exits, not the main script.
Try before entering the while loop:
SHELLPID=$$
And then in the loop:
kill $SHELLPID
exit 0
Or change your loop to not use a subshell.
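The pitfall is easy to demonstrate in isolation: an exit inside a pipeline's while loop only ends the subshell running that loop, and the script carries on afterwards.

```shell
#!/bin/bash
# The exit below runs in the pipeline's subshell, not the main script.
echo "one line" | while read -r line; do
    exit 0               # ends only the subshell
done
echo "still running"     # reached despite the exit above
```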
Since the parent script will always be sitting in the tail -f, which never ends, I think you have no choice but to kill it from the inner subshell.
Try something like this:
tail -n 0 -f output.err | grep --line-buffered "Exception" | while read line
do
echo "An exception has been detected!" | mail -s "ALERT" monitor#company.com
kill -term `ps ax | grep tail | grep output.err | awk '{print $1}'`
done
This should work, provided only one tail is watching this particular file.
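A hedged alternative to the ps | grep | awk chain is pkill -f, which matches processes by their full command line (it still assumes only one matching tail). This sketch uses a temp file so it is runnable as-is:

```shell
#!/bin/bash
# Demo: pkill -f terminates a process matched by its command line.
: > /tmp/pk_demo.log
tail -n 0 -f /tmp/pk_demo.log &
PID=$!
sleep 1
pkill -TERM -f "tail -n 0 -f /tmp/pk_demo.log"
wait 2>/dev/null         # reap the killed tail
kill -0 "$PID" 2>/dev/null || echo "tail terminated"
```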