Setting variables in a KSH spawned process - multithreading

I have a lengthy menu script that relies on a few command outputs for its variables. These commands each take several seconds to run, and I would like to spawn new processes to set these variables. It would look something like this:
VAR1=`somecommand` &
VAR2=`somecommand` &
...
wait
echo $VAR1 $VAR2
The problem is that the spawned processes die, taking with them the variables they set. I realize I could do this by writing the output to a file and then reading it back, but I would like to do it without a temp file. Any ideas?

You can capture the whole process's output using command substitution, like:
VAR1=$(somecommand &)
VAR2=$(somecommand &)
...
wait
echo $VAR1 $VAR2

This is rather clunky, but works for me. I have three scripts.
cmd.sh is your "somecommand", it is a test script only:
#!/bin/ksh
sleep 10
echo "End of job $1"
Below is wrapper.sh, which runs a single command, captures the output, signals the parent when done, then writes the result to stdout:
#!/bin/ksh
sig=$1
shift
var=$("$@")
kill -$sig $PPID
echo $var
and here is the parent script:
#!/bin/ksh
trap "read -u3 out1" SIGUSR1
trap "read -p out2" SIGUSR2
./wrapper.sh SIGUSR1 ./cmd.sh one |&
exec 3<&p
exec 4>&p
./wrapper.sh SIGUSR2 ./cmd.sh two |&
wait
wait
echo "out1: $out1, out2: $out2"
echo "Ended"
There are two waits because the first one will be interrupted by the first signal.
In the parent script I am running the wrapper twice, once for each job, passing in the command to be run and any arguments. The |& means "pipe to background": run it as a co-process.
The two exec commands copy the first co-process's pipe file descriptors to fds 3 and 4. When the jobs finish, the wrappers signal the main process to read the pipes. The traps catch the signals, read the pipe for the appropriate child process, and gather the resulting data.
Rather convoluted and clunky, but it appears to work.
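With the test cmd.sh above, the whole run should take about ten seconds (the two sleeps overlap) and print something like:
out1: End of job one, out2: End of job two
Ended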

Related

How can I send a timeout signal to a wrapped command in sbatch?

I have a program that, when it receives a SIGUSR1, writes some output and quits. I'm trying to get sbatch to notify this program before timing out.
I enqueue the program using:
sbatch -t 06:00:00 --signal=USR1 ... --wrap my_program
but my_program never receives the signal. I've tried sending signals while the program is running, with: scancel -s USR1 <JOBID>, but without any success. I also tried scancel --full, but it kills the wrapper and my_program is not notified.
One option is to write a bash file that wraps my_program and traps the signal, forwarding it to my_program (similar to this example), but I don't need this cumbersome bash file for anything else. Also, sbatch --signal documentation very clearly says that, when you want to notify the enveloping bash file, you need to specify signal=B:, so I believe that the bash wrapper is not really necessary.
So, is there a way to send a SIGUSR1 signal to a program enqueued using sbatch --wrap?
Your command is sending the USR1 to the shell created by the --wrap. However, if you want the signal to be caught and processed, you're going to need to write the shell functions to handle the signal and that's probably too much for a --wrap command.
These folks are doing it but you can't see into their setup.sh script to see what they are defining. https://docs.nersc.gov/jobs/examples/#annotated-example-automated-variable-time-jobs
Note they use "." to run the code in setup.sh in the same process instead of spawning a sub-shell. You need that.
These folks describe a nice method of creating the functions you need: Is it possible to detect *which* trap signal in bash?
The only thing they don't show there is the function that would actually take action on receiving the signal. Here's what I wrote that does it - put this in a file that can be included from any user's sbatch submit script and show them how to use it and the --signal option:
trap_with_arg() {
func="$1" ; shift
for sig ; do
echo "setting trap for $sig"
trap "$func $sig" "$sig"
done
}
func_trap () {
echo "called with sig $1"
case $1 in
USR1)
echo "caught SIGUSR1, making ABORT file"
date
cd $WORKDIR
touch ABORT
ls -l ABORT
;;
*) echo "something else" ;;
esac
}
trap_with_arg func_trap USR1 USR2
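For example, an sbatch submit script could source that file and request the signal; a minimal sketch, assuming the functions above are saved as trap_functions.sh (that filename and the 60-second lead time are just illustrative):
#!/bin/bash
#SBATCH -t 06:00:00
#SBATCH --signal=B:USR1@60   # ask Slurm to signal the batch shell 60s before the time limit
WORKDIR=$PWD                 # func_trap cd's here to create the ABORT file
. ./trap_functions.sh        # sourced with "." so the traps are set in this shell
trap_with_arg func_trap USR1 USR2
my_program &                 # run in the background so the shell is free to handle the trap
wait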

Linux : launch a specific action when another process is terminated

I have a script foo.sh that launches 5 processes of bfoo.sh in the background, like this:
for i in {1..5}
do
./bfoo.sh &
done
wait
echo ok
and I use it like this :
./foo.sh
In foo.sh, after the for loop, I want to do something like the following for each bfoo.sh process that terminates:
echo $PID_Terminated
To achieve this, you need to store the PID of each background bfoo.sh process. $! contains the process id of the command most recently put in the background by the shell. We append the PIDs to an array one at a time and iterate over it later.
Remember this waits on your background processes one after the other, since you wait on each process id separately.
#!/usr/bin/env bash
pidArray=()
for i in {1..5}; do
./bfoo.sh &
pidArray+=( "$!" )
done
Now wait on each of the processes in a loop:
for pid in "${pidArray[@]}"; do
wait "$pid"
printf 'process-id: %d finished with code %d\n' "$pid" "$?"
done
I have additionally printed the exit code of the background process ($?) when it finishes, so that any abnormal exit can be debugged.
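If you want to report jobs in the order they actually finish rather than launch order, newer bash can name the finished job; a minimal sketch, assuming bash >= 5.1 for wait -n -p:
#!/usr/bin/env bash
for i in {1..5}; do
./bfoo.sh &
done
for ((n = 0; n < 5; n++)); do
wait -n -p finished_pid      # returns as soon as any remaining background job exits
printf 'process-id: %d finished with code %d\n' "$finished_pid" "$?"
done
echo ok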

PID of all child processes of a command

In a bash script, I want to launch a process in the foreground, then print a list of all the process names and PIDs that were started as children of that process. For example, suppose I have the following scripts, but I can only modify the first one:
A.sh:
#!/bin/bash
B.sh
B.sh:
#!/bin/bash
C.sh
C.sh:
#!/bin/bash
echo "Running C.sh"
Without modifying B.sh, C.sh or the echo command, and without starting any of the child processes in the background, I would like A.sh to print the following:
B.sh 1208
C.sh 1210
echo 1211
Can A.sh fork a process that records this information while the child processes are running in the foreground of A.sh?
Update: In the comments below my answer it turned out that:
I need something that observes the creation of all child processes during a span of time. Given that, filtering to isolate my subtree will not be difficult.
... was the intention behind the question and it was for debugging purposes.
In that case I'd recommend to use strace like this:
strace -f command
-f will track child processes recursively. Since forking and exec-ing require system calls, strace will list any child creation plus the pids.
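For example, you can narrow the trace to process creation and keep it in a file (standard strace flags; ./A.sh stands in for your command):
strace -f -e trace=fork,vfork,clone,execve -o trace.log ./A.sh
grep execve trace.log   # with -f and -o, each line starts with the pid of the process making the call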
Original answer:
You can use pgrep for that:
run_process &
pid=${!}
pgrep --parent "${pid}"
wait # wait for run_process to finish
Btw, you may want to use the pstree command; it is nice to use:
run_process &
pid=${!}
pstree -p "${pid}"
wait # wait for run_process to finish
Anyhow, you'll need to install pstree.
You can try doing this with A.sh
#!/usr/bin/env bash
./B.sh &
b_PID=$!
./C.sh &
c_PID=$!
echo "B.sh $b_PID"
echo "C.sh $c_PID"
The output will look something like this
B.sh 22802
C.sh 22803
Running C.sh

Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command ($!).
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the one given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very beginning of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here, in brackets) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or evaluate the return code.
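One way to get the pid back is to echo it from the subshell; a minimal sketch, where some_long_cmd and the log path are placeholders (the command's output must be redirected, otherwise the command substitution waits for it to finish):
pid=$( set +m; some_long_cmd >/tmp/some_long_cmd.log 2>&1 & echo $! )
# the process is no longer a child of this shell, so 'wait $pid' won't work; poll it instead
while kill -0 "$pid" 2>/dev/null; do sleep 1; done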
Sending SIGINT also works with killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
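For reference, a minimal sketch of the disown variant (some_cmd is a placeholder):
some_cmd &
pid=$!
disown "$pid"   # remove the job from the shell's job table
kill "$pid"     # no Terminated notification, because the shell no longer tracks the job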
Had success with adding 'jobs 2>&1 >/dev/null' to the script, not certain if it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal, e.g.:
{ kill -9 $PID; } 2>/dev/null

Kill bash script foreground children when a signal comes

I am wrapping a fastcgi app in a bash script like this:
#!/bin/bash
# stuff
./fastcgi_bin
# stuff
As bash only executes signal traps once the current foreground command ends, I can't just kill -TERM the script's pid, because the fastcgi app will be kept alive.
I've tried sending the binary to the background:
#!/bin/bash
# stuff
./fastcgi_bin &
PID=$!
trap "kill $PID" TERM
# stuff
But if I do it like this, apparently stdin and stdout aren't properly redirected, because it does not connect with lighttpd's mod_fastcgi; the foreground version does work.
EDIT: I've been looking at the problem and this happens because bash redirects /dev/null to stdin when a program is launched in the background, so any way of avoiding this should solve my problem as well.
Any hint on how to solve this?
There are some options that come to my mind:
When a process is launched from a shell script, both belong to the same process group. Killing the parent process leaves the children alive, so the whole process group should be killed. This can be achieved by passing the negated PGID (Process Group ID) to kill; if the script is the process group leader, the PGID is the same as the parent's PID, e.g. kill -TERM -$PARENT_PID (see the sketch after these options).
Do not execute the binary as a child, but replace the script process with exec. You lose the ability to execute stuff afterwards though, because exec completely replaces the parent process.
Do not kill the shell script process, but the FastCGI binary. Then, in the script, examine the return code and act accordingly, e.g.: ./fastcgi_bin || exit -1
Depending on how mod_fastcgi handles worker processes, only the second option might be viable.
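A minimal sketch of the first option, assuming the wrapper script runs as its own process group leader (e.g. started via setsid) so that its PID equals the PGID; the script name is a placeholder:
PARENT_PID=$(pgrep -f my_fastcgi_wrapper.sh)   # pid of the wrapper script
kill -TERM -- -"$PARENT_PID"                   # negative pid signals the whole group; '--' keeps it from being parsed as an option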
I have no idea if this is an option for you or not, but since you have a bounty I am assuming you might go for ideas that are outside the box.
Could you rewrite the bash script in Perl? Perl has several methods of managing child processes. You can read perldoc perlipc and more specifics in the core modules IPC::Open2 and IPC::Open3.
I don't know how this will interface with lighttpd etc or if there is more functionality in this approach, but at least it gives you some more flexibility and some more to read in your hunt.
I'm not sure I fully get your point, but here's what I tried and the process seems to be able to manage the trap (call it trap.sh):
#!/bin/bash
trap "echo trap activated" TERM INT
echo begin
time sleep 60
echo end
Start it:
./trap.sh &
And play with it (only one of those commands at once):
kill -9 %1
kill -15 %1
Or start in foreground:
./trap.sh
And interrupt with control-C.
Seems to work for me.
What exactly does not work for you?
I wrote this script just minutes ago to kill a bash script and all of its children...
#!/bin/bash
# This script will kill all the child process id for a given pid
# based on http://www.unix.com/unix-dummies-questions-answers/5245-script-kill-all-child-process-given-pid.html
ppid=$1
if [ -z "$ppid" ] ; then
echo "This script kills the process identified by pid, and all of its kids";
echo "Usage: $0 pid";
exit;
fi
for i in `ps j | awk '$3 == '$ppid' { print $2 }'`
do
$0 $i
kill -9 $i
done
Make sure the script is executable, or you will get an error on the $0 $i
You can override the implicit </dev/null for a background process by redirecting stdin yourself, for example:
sh -c 'exec 3<&0; { read x; echo "[$x]"; } <&3 3<&- & exec 3<&-; wait'
Try keeping the original stdin using ./fastcgi_bin 0<&0 &:
#!/bin/bash
# stuff
./fastcgi_bin 0<&0 &
PID=$!
trap "kill $PID" TERM
# stuff
# test
#sh -c 'sleep 10 & lsof -p ${!}'
#sh -c 'sleep 10 0<&0 & lsof -p ${!}'
You can do that with a coprocess.
Edit: well, coprocesses are background processes that can have stdin and stdout open (because bash prepares fifos for them). But you still need to read/write to those fifos, and the only useful primitive for that is bash's read (possibly with a timeout or a file descriptor); nothing robust enough for a cgi. So on second thought, my advice would be not to do this thing in bash. Doing the extra work in the fastcgi, or in an http wrapper like WSGI, would be more convenient.
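For reference, a minimal sketch of the mechanics with a bash 4+ coprocess (the request/reply lines are purely illustrative):
coproc FCGI { ./fastcgi_bin; }   # bash sets up a pipe pair for the coprocess
echo "request" >&"${FCGI[1]}"    # write to the coprocess's stdin
read -r reply <&"${FCGI[0]}"     # read one line from its stdout
kill "$FCGI_PID"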
