How to kill a running bash function from terminal?

...... (some awesome script).............
echo "I just wanna kill the function, not this"
myFunction()
{
while true
do
echo "this is looping forever"
done
}
myFunction
...... (some awesome script)...............
How to kill a running function from the terminal, without killing the script itself?

First, you cannot "kill" a function; "killing" refers to processes.
However, you can install special signal handling inside your function that makes it react the way you want.
In bash you do this with trap, which defines a signal handler for the signal you want to catch.
The function used as the signal handler here also clears the trap, because traps are global and the handler would otherwise be called on any subsequent SIGUSR1 that occurs.
echo "I just wanna kill the function, not this"
trap_myFunction()
{
trap - SIGUSR1
return
}
myFunction()
{
trap trap_myFunction SIGUSR1
while true
do
echo "this is looping forever"
sleep 1
done
}
myFunction
echo "Continuing processing .."
Now, if you start this script and signal it from another terminal, using:
kill -SIGUSR1 pid_of_process
it will enter the installed signal handler, which simply returns, and execution continues with the echo command after myFunction.
If you send any other signal, the trap will not be triggered and the process will be terminated completely.
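To send the signal, you first need the PID of the running script; a minimal sketch, assuming the script above was saved and started as myscript.sh (the filename is only an example):
# pgrep -f matches against the full command line of running processes
pid=$(pgrep -f myscript.sh)
# SIGUSR1 triggers the trap, so only the function's loop is ended
kill -SIGUSR1 "$pid"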

Related

bash how to kill parent process, or exit from parent process from a function in bash module script

I have a few scripts:
functions.sh - has many functions defined including work() and abort()
it looks like this:
#!/bin/bash
abort()
{
message=$1
echo "Error: $message ..Aborting" > log
exit 7
}
work()
{
cp ./testfile ./test1 # ./testfile doesn't exist, so the copy returns a non-zero status
if [ $? -eq 0 ]; then
echo "variable_value"
else
abort "Can not copy"
fi
}
parent.sh - parent script is the main script, it looks like this:
#!/bin/sh
. ./functions.sh
value=$(work)
echo "why is this still getting printed"
Basically, I have many functions in the functions.sh file, and I am sourcing that file in parent.sh to make all the functions available. The parent can call any function, and any function in functions.sh can call abort, at which point execution of the parent should stop. But that is not happening: parent.sh runs on to the next step. Is there a way to get around this problem?
I realise that this happens due to the assignment step value=$(work). But is there a way to abort execution right at the abort function call in this case?
I'm not convinced that this behavior should be described as a "problem". As you noted, when you invoke work in a subshell, aborting from work just aborts the subshell. This is correct, expected behavior. Fortunately, the value returned by work in the subshell is available to the parent, so you can simply respond to it and write:
value=$(work) || exit
If work returns a non-zero value, then the script will exit with that same non-zero value.
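Applied to parent.sh from the question, the fix is a one-line change:
#!/bin/sh
. ./functions.sh
# if work calls abort in the subshell, exit 7 becomes the subshell's exit status,
# and || exit propagates that status to the parent script
value=$(work) || exit
echo "this line is only reached if work succeeded"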

How to get PID of perl daemon in init script?

I have the following perl script:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;
Proc::Daemon::Init;
my $continue = 1;
$SIG{TERM} = sub { $continue = 0 };
while ($continue) {
# stuff
}
I have the following in my init script:
DAEMON='/path/to/perl/script.pl'
start() {
PID=`$DAEMON > /dev/null 2>&1 & echo $!`
echo $PID > /var/run/mem-monitor.pid
}
The problem is, this returns the wrong PID! This returns the PID of the parent process which is started when the daemon is run, but that process is immediately killed off. I need to get the PID of the child process!
The Proc::Daemon documentation says
Proc::Daemon does the following:
...
9. The first child transfers the PID of the second child (daemon) to the parent. Additionally the PID of the daemon process can be written into a file if 'pid_file' is defined. Then the first child exits.
and then later, under new ( %ARGS )
pid_file
Defines the path to a file (owned by the parent user) where the PID of the daemon process will be stored. Defaults to undef (= write no file).
Also look at the Init() method description. This all implies that you may want to use new first.
The point is that it is the grand-child process that is the daemon. However, the child passes the pid along, and it is available to the parent. If pid_file => $file_name is set in the constructor, the daemon's pid is written to that file.
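If you go that route, the shell side becomes simple; a minimal sketch, assuming the Perl script is changed so the constructor gets pid_file => '/var/run/mem-monitor.pid' (the path is just the one already used in the question):
DAEMON='/path/to/perl/script.pl'
PIDFILE='/var/run/mem-monitor.pid'
start() {
$DAEMON > /dev/null 2>&1
# Proc::Daemon has written the daemon's PID to this file by the time the parent returns
PID=$(cat "$PIDFILE")
}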
A comment asks not to have the shell script rely on a file written by another script.
I can see two ways to do that.
Print the pid, returned by $daemon->Init(), from the parent and pick it up in the shell. This is defeated by the redirects in the question, but I don't know why they are needed. The parent and child exit as soon as everything is set up, while the daemon is detached from everything.
The shell script can start the Perl script with the desired pid-file name as an argument, letting it write the daemon pid to that file by the process above. The file is still written by Perl, but what matters about it is decided by the shell script.
I'd like to include a statement from my comment below. I consider these superior to two other things that come to mind: picking the filename from a config-style file kept by the shell is more complicated, while parsing the process table may be unreliable.
I've seen this before and had to resort to using STDERR to send back the child's PID to the calling shell script. I've always assumed it was due to the mentioned unreliability of exit codes, but the details were not clear in the documentation. Please try something like this:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;
if( my $pid = Proc::Daemon::Init() ) {
print STDERR $pid;
exit;
}
my $continue = 1;
$SIG{TERM} = sub { $continue = 0 };
while ($continue) {
sleep(20);
exit; # note: in this demo the daemon quits after a single 20-second sleep
}
With a calling script like this:
#!/bin/bash
DAEMON='./script.pl'
start() {
PID=$($DAEMON 2>&1 >/dev/null)
echo $PID > ./mem-monitor.pid
}
start;
When the bash script is run, it will capture the STDERR output (containing the correct PID) and store it in the file. Any STDOUT the Perl script produces is sent to /dev/null, though this is unlikely, as the first-level Perl script (in this case) exits fairly early on.
Thank you to zdim and Hakon for their suggestions. They are certainly workable, and got me on the right track, but ultimately I went a different route. Rather than relying on $!, I used ps and awk to get the PID, as follows:
DAEMON='/path/to/perl/script.pl'
start() {
$DAEMON > /dev/null 2>&1
PID=`ps aux | grep -v 'grep' | grep "$DAEMON" | awk '{print $2}'`
echo $PID > /var/run/mem-monitor.pid
}
This works and satisfies my OCD! Note the double quotes around "$DAEMON" in grep "$DAEMON".
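If pgrep is available, the same process-table lookup can be done in one step (an alternative sketch, not the exact pipeline above):
DAEMON='/path/to/perl/script.pl'
start() {
$DAEMON > /dev/null 2>&1
# pgrep -f matches against the full command line, like grep "$DAEMON" above
PID=$(pgrep -f "$DAEMON")
echo $PID > /var/run/mem-monitor.pid
}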

Terminate a process started by a bash script with CTRL-C

I am having an issue with terminating the execution of a process inside a bash script.
Basically my script does the following actions:
Issue some starting commands
Start a program who waits for CTRL+C to stop
Do some post-processing on data retrieved by the program
My problem is that when I hit CTRL+C the whole script terminates, not just the "inner" program.
I have seen some scripts that do this, which is why I think it's possible.
You can set up a signal handler using trap:
trap 'myFunction arg1 arg2 ...' SIGINT;
I suggest keeping your script abortable overall, which you can do by using a simple boolean:
#!/bin/bash
# define signal handler and its variable
allowAbort=true;
myInterruptHandler()
{
if $allowAbort; then
exit 1;
fi;
}
# register signal handler
trap myInterruptHandler SIGINT;
# some commands...
# before calling the inner program,
# disable the abortability of the script
allowAbort=false;
# now call your program
./my-inner-program
# and now make the script abortable again
allowAbort=true;
# some more commands...
In order to reduce the likelihood of messing up with allowAbort, or just to keep it a bit cleaner, you can define a wrapper function to do the job for you:
#!/bin/bash
# define signal handler and its variable
allowAbort=true;
myInterruptHandler()
{
if $allowAbort; then
exit 1;
fi;
}
# register signal handler
trap myInterruptHandler SIGINT;
# wrapper
wrapInterruptable()
{
# disable the abortability of the script
allowAbort=false;
# run the passed command 1:1
"$@";
# save the returned value
local ret=$?;
# make the script abortable again
allowAbort=true;
# and return
return "$ret";
}
# call your program
wrapInterruptable ./my-inner-program
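A quick way to see the effect (a hypothetical test, not part of the original answer): Ctrl-C during the wrapped command is absorbed by the handler, while Ctrl-C afterwards aborts the script:
# Ctrl-C here runs the handler, which does nothing because allowAbort is false
wrapInterruptable sleep 10
# Ctrl-C here exits the script with status 1, since allowAbort is true again
sleep 10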

setting global variable in bash

I have a function that I expect to hang sometimes. So I set a global variable and then read it; if it does not come up after a few seconds, I give up. Below is not the complete code, but it is not working: I am not getting $START set to the value 5.
START=0
ineer()
{
sleep 5
START=5
echo "done $START" ==> I am seeing here it return 5
return $START
}
echo "Starting"
ineer &
while true
do
if [ $START -eq 0 ]
then
echo "Not null $START" ==> But $START here is always 0
else
echo "else $START"
break;
fi
sleep 1;
done
You run the inner function call in the background, which means START will be assigned in a subshell started by the current shell. In that subshell, the START value will be 5.
However, in your current shell, which echoes the START value, it is still 0, since the update of START only happens in the subshell.
Each time you start a shell in the background, it is just like forking a new process: it makes a copy of the whole current shell environment, including variable values, and the new process is completely isolated from your current shell.
Since the subshell is forked as a new process, there is no way to directly update the parent shell's START value. Alternative ways include passing a signal when the subshell that runs the inner function exits.
Common errors:
export
export can only be used to make a variable available to subshells forked from the current shell. However, once the subshell has been forked, it has its own copy of the variable and its value; any changes to the exported variable in the parent shell will not affect the subshell.
The following code demonstrates this:
#!/bin/bash
export START=0
ineer()
{
sleep 3
export START=5
echo "done $START" # ==> I am seeing here it return 5
sleep 1
echo "new value $START"
return $START
}
echo "Starting"
ineer &
while true
do
if [ $START -eq 0 ]
then
echo "Not null $START" # ==> But $START here is always 0
export START=10
echo "update value to $START"
sleep 3
else
echo "else $START"
break;
fi
sleep 1;
done
The problem is that ineer & runs the function in a subshell, which has its own variable scope. Changes made in a subshell do not apply to the parent shell. I recommend looking into kill and signal catching.
Save the pid of ineer & with:
pid=$!
and use kill -0 $pid (that is a zero!) to detect whether your process is still alive.
But it is better to redesign ineer to use a lock file; that is a safer check!
UPDATE: From the kill(2) man page:
#include <sys/types.h>
#include <signal.h>
int kill(pid_t pid, int sig);
If sig is 0, then no signal is sent, but error checking is still
performed; this can be used to check for the existence
of a process ID or process group ID.
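Put together, the polling loop from the question can check the background job instead of a shared variable; a minimal sketch based on the suggestion above:
ineer &
pid=$!
# kill -0 sends no signal; it only checks that the process still exists
while kill -0 "$pid" 2>/dev/null
do
echo "ineer still running"
sleep 1
done
echo "ineer finished"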
The answer is: in this case you can use export.
This instruction allows all subprocesses to use this variable.
So when you call the ineer function, it forks a process that copies the entire environment, including the START variable taken from the parent process.
You have to change the first line from:
START=0
to:
export START=0
You may also want to read this thread: Defining a variable with or without export

How to trap signals in shell script?

How can we trap signals in a shell script, and where can we trap them?
Also, can someone explain
# trap commands signals
You can write a shell script:
trap ctl_c INT # trap <name_of_function_to_be_called> <signal_to_be_handled>
function ctl_c(){
# signal handling logic goes here
}
Now whenever you send SIGINT (key press CTRL + C), this function gets called instead of the default behaviour.
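To make the general # trap commands signals form concrete, here is a minimal runnable sketch (the messages are just examples):
#!/bin/bash
# trap <commands> <signals>: run the given commands when one of the listed signals arrives
trap 'echo "caught SIGINT, cleaning up"; exit 1' INT
echo "press CTRL+C within 30 seconds..."
sleep 30
echo "no signal received"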
