How to trap signals in a shell script? (linux)

How can we trap signals in a shell script, and where should we set the traps?
Also, can someone explain this line?
# trap commands signals

You can write a shell script:
ctl_c() {
    # signal handling logic goes here
}
trap ctl_c INT    # trap <handler_function> <signal_to_handle>
Now, whenever SIGINT is delivered (e.g. when you press CTRL + C), this function gets called instead of the default action.
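A complete, runnable sketch of the same idea, with illustrative names; it delivers the signal to itself so the effect is visible without a keyboard:

```shell
#!/bin/sh
# Install a handler for SIGINT, then deliver the signal to ourselves
# to show the handler runs in place of the default action.
on_int() {
    echo "caught SIGINT, cleaning up"
}
trap on_int INT    # trap <handler> <signal>

kill -INT $$       # same effect as pressing CTRL + C in a terminal
echo "still running after the signal"
```

Both lines print; without the trap, the SIGINT would terminate the script before the final echo.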

Related

Recover after "kill 0"

I have a script that invokes kill 0. I want to invoke that script from another script, and have the outer script continue to execute. (kill 0 sends a signal, defaulting to SIGTERM, to every process in the process group of the calling process; see man 2 kill.)
kill0.sh:
#!/bin/sh
kill 0
caller.sh:
#!/bin/sh
echo BEFORE
./kill0.sh
echo AFTER
The current behavior is:
$ ./caller.sh
BEFORE
Terminated
$
How can I modify caller.sh so it prints AFTER after invoking kill0.sh?
Modifying kill0.sh is not an option. Assume that kill0.sh might read from stdin and write to stdout and/or stderr before invoking kill 0, and I don't want to interfere with that. I still want the kill 0 command to kill the kill0.sh process itself; I just don't want it to kill the caller as well.
I'm using Ubuntu 16.10 x86_64, and /bin/sh is a symlink to dash. That shouldn't matter, and I prefer answers that don't depend on that.
This is of course a simplified version of a larger set of scripts, so I'm at some risk of having an XY problem, but I think that a solution to the problem as stated here should let me solve the actual problem. (I have a wrapper script that invokes a specified command, capturing and displaying its output, with some other bells and whistles.)
One solution
You need to trap the signal in the parent, but enable it in the child. So a script like run-kill0.sh could be:
#!/bin/sh
echo BEFORE
trap '' TERM
(trap 15; exec ./kill0.sh)
echo AFTER
The first trap disables the TERM signal. The second trap in the sub-shell re-enables the signal (using the signal number instead of the name — see below) before running the kill0.sh script. Using exec is a minor optimization — you can omit it and it will work the same.
Digression on obscure syntactic details
Why 15 instead of TERM in the sub-shell? Because when I tested it with TERM instead of 15, I got:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' TERM
+ trap TERM
trap: usage: trap [-lp] [arg signal_spec ...]
+ echo AFTER
AFTER
$
When I used 15 in place of TERM (twice), I got:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' 15
+ trap 15
+ exec ./kill0.sh
Terminated: 15
+ echo AFTER
AFTER
$
Using TERM in place of the first 15 would also work.
Bash documentation on trap
Studying the Bash manual for trap shows:
trap [-lp] [arg] [sigspec …]
The commands in arg are to be read and executed when the shell receives signal sigspec. If arg is absent (and there is a single sigspec) or equal to ‘-’, each specified signal’s disposition is reset to the value it had when the shell was started.
A second solution
The second sentence is the key: trap - TERM should (and empirically does) work.
#!/bin/sh
echo BEFORE
trap '' TERM
(trap - TERM; exec ./kill0.sh)
echo AFTER
Running that yields:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' TERM
+ trap - TERM
+ exec ./kill0.sh
Terminated: 15
+ echo AFTER
AFTER
$
I've just re-remembered why I use numbers and not names (but my excuse is that the shell — it wasn't Bash in those days — didn't recognize signal names when I learned it).
POSIX documentation for trap
However, in Bash's defense, the POSIX spec for trap says:
If the first operand is an unsigned decimal integer, the shell shall treat all operands as conditions, and shall reset each condition to the default value. Otherwise, if there are operands, the first is treated as an action and the remaining as conditions.
If action is '-', the shell shall reset each condition to the default value. If action is null ( "" ), the shell shall ignore each specified condition if it arises.
This is clearer than the Bash documentation, IMO. It states why trap 15 works. There's also a minor glitch in the presentation. The synopsis says (on one line):
trap n [condition...]trap [action condition...]
It should say (on two lines):
trap n [condition...]
trap [action condition...]

Terminate a process started by a bash script with CTRL-C

I am having an issue with terminating the execution of a process inside a bash script.
Basically my script does the following actions:
Issue some starting commands
Start a program that waits for CTRL+C to stop
Do some post-processing on data retrieved by the program
My problem is that when I hit CTRL+C, the whole script terminates, not just the "inner" program.
I have seen scripts that do this, which is why I think it's possible.
You can set up a signal handler using trap:
trap 'myFunction arg1 arg2 ...' SIGINT;
I suggest keeping your script abortable overall, which you can do by using a simple boolean:
#!/bin/bash
# define signal handler and its variable
allowAbort=true;
myInterruptHandler()
{
    if $allowAbort; then
        exit 1;
    fi;
}
# register signal handler
trap myInterruptHandler SIGINT;
# some commands...
# before calling the inner program,
# disable the abortability of the script
allowAbort=false;
# now call your program
./my-inner-program
# and now make the script abortable again
allowAbort=true;
# some more commands...
To reduce the likelihood of messing up allowAbort, or just to keep things a bit cleaner, you can define a wrapper function to do the job for you:
#!/bin/bash
# define signal handler and its variable
allowAbort=true;
myInterruptHandler()
{
    if $allowAbort; then
        exit 1;
    fi;
}
# register signal handler
trap myInterruptHandler SIGINT;
# wrapper
wrapInterruptable()
{
    # disable the abortability of the script
    allowAbort=false;
    # run the passed arguments 1:1
    "$@";
    # save the returned value
    local ret=$?;
    # make the script abortable again
    allowAbort=true;
    # and return
    return "$ret";
}
# call your program
wrapInterruptable ./my-inner-program

How to kill a running bash function from terminal?

...... (some awesome script).............
echo "I just wanna kill the function, not this"
myFunction()
{
    while true
    do
        echo "this is looping forever"
    done
}
myFunction
...... (some awesome script)...............
How can I kill a running function from the terminal without killing the script itself?
First, you cannot "kill" a function; "killing" refers to processes.
However, you can install special signal handling inside your function that makes it react the way you want.
For this, in bash, you use trap to define a signal handler for the signal you want to catch.
The function used as the signal handler here also clears the trap, since traps are global and the handler would otherwise be called on any subsequent SIGUSR1.
echo "I just wanna kill the function, not this"
trap_myFunction()
{
    trap - SIGUSR1
    return
}
myFunction()
{
    trap trap_myFunction SIGUSR1
    while true
    do
        echo "this is looping forever"
        sleep 1
    done
}
myFunction
echo "Continuing processing .."
Now, if you start this script and signal the process, using:
kill -SIGUSR1 pid_of_process
it will enter the installed signal handler, which simply returns, and execution continues with the echo command after myFunction.
If you kill it by using any other signal, the trap will not be triggered and the process will be terminated completely.
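A flag-based variant of the same idea avoids depending on how return behaves inside a trap handler: the handler just sets a flag that the loop checks. A minimal sketch with illustrative names; it backgrounds the function and signals it itself, purely for demonstration:

```shell
#!/bin/sh
# The handler sets a flag that the loop checks, so SIGUSR1
# ends only the function, not the script.
stop=0

myFunction() {
    trap 'stop=1' USR1
    while [ "$stop" -eq 0 ]
    do
        sleep 1    # stand-in for "this is looping forever"
    done
    trap - USR1    # restore the default disposition
}

myFunction &
pid=$!
sleep 2
kill -USR1 "$pid"   # ends the loop inside myFunction
wait "$pid"
echo "Continuing processing .."
```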

How to restart a group of processes when it is triggered from one of them in C code

I have a few processes, *.rt, written in C.
I want to restart all of them (*.rt) from within the process foo.rt (one of the *.rt), i.e. from its built-in C code.
Normally I have two bash scripts, stop.sh and start.sh, which are invoked from the shell.
Here is what the scripts do:
stop.sh --> sends kill -9 to all ".rt" processes.
start.sh --> starts the processes named ".rt".
My problem is how to restart all the rt's from C code. Is there any idea for restarting all "*.rt" processes, triggered from foo.rt?
I tried the code below in foo.rt, but it doesn't work, because stop.sh kills all the .rt processes, including the child that was forked to execute the start.sh script.
...
case 708: /* There is a trigger signal here*/
{
result = APP_RES_PRG_OK;
if (fork() == 0) { /* child */
execl("/bin/sh", "sh", "-c", "/sbin/stop.sh", NULL);
execl("/bin/sh", "sh", "-c", "/sbin/start.sh", NULL); /* never reached: the first execl replaced the process image */
}
}
I've solved the problem with the "at" daemon in Linux.
I invoke two system() calls, stop & start.
My first attempt was faulty, as explained above: a successful execl replaces the process image and never returns, so the later execl is never reached.
Here is my solution
case 708: /*There is a trigger signal here*/
{
system("echo '/sbin/start.sh' | at now + 2 min");
system("echo '/sbin/stop.sh' | at now + 1 min");
}
You could use process groups, at least if all your related processes originate from the same process...
So you could write a glue program in C which sets up a new process group using setpgrp(2) and stores its pid (or keeps running, waiting for some IPC).
Then you would stop that process group by using killpg(2).
See also the notion of session, and setsid(2).
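The process-group idea can also be sketched from the shell: with job control enabled (set -m), each background pipeline gets its own process group, and kill accepts a negative PID, the shell-level equivalent of killpg(2). The worker command below is just a stand-in for the *.rt processes:

```shell
#!/bin/bash
# Emulate killpg(2): start related processes in one process group,
# then signal the entire group with a negative PID.
set -m                                   # enable job control in a script
bash -c 'sleep 100 & sleep 100 & wait' & # stand-in for the *.rt processes
pgid=$!                                  # leader's PID doubles as the PGID
sleep 1
kill -TERM -- "-$pgid"                   # like killpg(pgid, SIGTERM)
wait "$pgid" 2>/dev/null
echo "process group $pgid terminated"
```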

What is the Perl equivalent of PHP's proc_open(), proc_close(), etc.?

Using PHP's proc_open(), I can start a process, read from STDOUT and STDERR (separately) an arbitrary number of bytes at a time using fread() while the process is running, detect when the process is done using feof() on the STDOUT and STDERR pipes, and then use proc_close() to get the exit code of the process. I've done all of this in PHP. It works well, and gives me a lot of control.
Is there a way to do all of these things in Perl? To summarize, I need to be able to do the following:
start an external process
read STDOUT and STDERR separately
read STDOUT and STDERR an arbitrary number of bytes at a time while the process is running (i.e. without having to wait for the process to finish)
detect when the process is finished
get the exit code of the process
Thanks in advance for your answers.
You could roll your own solution using Perl's system call interface, but it's easier to use the built-in module IPC::Open3. As for your list:
Start an external process:
use IPC::Open3;
use IO::Handle;
use strict;
my $stdout = IO::Handle->new;
my $stderr = IO::Handle->new;
my $pid = open3(undef, $stdout, $stderr, 'my-command', 'arg1', 'arg2');
Read STDOUT and STDERR separately, an arbitrary number of bytes at a time:
my $line = <$stdout>;
# Or
sysread $stderr, my $buffer, 1024;
Detect when the process is finished:
use POSIX ":sys_wait_h";
waitpid $pid, 0;       # Blocks until the process terminates
waitpid $pid, WNOHANG; # Checks whether the process has terminated, without blocking
Get the exit code of the process:
my $status = $?; # After waitpid indicates the process has exited
Be sure to read the IPC::Open3 documentation; as it warns, it's easy to deadlock when you have separate stdout and stderr pipes if you're not careful: if the child blocks because it has filled one pipe while the parent blocks reading the other, neither can make progress.
You want this module: IPC::Open3
You want IPC::Run, it captures the IO and returns the exit value
