Bash trap not killing children, causes unexpected ctrl-c behavior - linux

edit
For future readers. The root of this problem really came down to running the function in an interactive shell vs. putting it in a separate script.
Also, there are many things that could be improved in the code I originally posted. Please see comments for things that could/should have been done better.
/edit
I have a bash function intended to rerun a process in the background when files in a directory change (think like Grunt, but for general purposes). The script functions as desired while running:
The subprocess is correctly started (including any children)
On file change, the sub is killed (including children) and started again
However, on exit (ctrl-c) none of the processes are killed. Additionally, pressing ctrl-c a second time will kill the current terminal session. I'm assuming this is a problem with my trap, but have been unable to identify a reason for the issue.
Here is the code of rerun.sh
#!/bin/bash
# rerun.sh

_kill_children() {
    isTop=$1
    curPid=$2

    # Get pids of children
    children=`ps -o pid --no-headers --ppid ${curPid}`
    for child in $children
    do
        # Call this function to get grandchildren as well
        _kill_children 0 $child
    done

    # Parent calls this with 1, all others with 0, so only children are killed
    if [[ $isTop -eq 0 ]]; then
        kill -9 $curPid 2> /dev/null
    fi
}

rerun() {
    trap " _kill_children 1 $$; exit 0" SIGINT SIGTERM
    FORMAT=$(echo -e "\033[1;33m%w%f\033[0m written")

    # Command that should be repeatedly run is passed as args
    args=$@
    $args &

    # When a file changes in the directory, rerun the process
    while inotifywait -qre close_write --format "$FORMAT" .
    do
        # Kill the current bg proc and its children
        _kill_children 1 $$
        $args & # Rerun the proc
    done
}
#This is sourced in my bash profile so I can run it any time
To test this, create a pair of executable files parent.sh and child.sh as follows:
#!/bin/bash
#parent.sh
./child.sh
#!/bin/bash
#child.sh
sleep 86400
Then source the rerun.sh file and run rerun ./parent.sh. In another terminal window I watch "ps -ef | grep pts/4" to see all processes for the rerun (in this example on pts/4). Touching a file in the directory triggers a restart of parent.sh and children. [ctrl-c] exits, but leaves the pids running. [ctrl-c] again kills bash and all other processes on pts/4.
Desired behavior: on [ctrl-c], kill children and exit to shell normally. Help?
--
Code sources:
Inotify idea from: https://exyr.org/2011/inotify-run/
Kill children from: http://riccomini.name/posts/linux/2012-09-25-kill-subprocesses-linux-bash/

This isn't a good practice to follow in the first place. Track your children explicitly:
children=( )
foo & children+=( "$!" )
...then, you can kill or wait for them explicitly, referring to "${children[@]}" for the list (a short sketch of this appears after the rerun function below). If you want to get grandchildren as well, this is a good use case for fuser -k and a lockfile:
lockfile_name="$(mktemp /tmp/lockfile.XXXXXX)" # change appropriately
trap 'rm -f "$lockfile_name"' 0
exec 3>"$lockfile_name" # open lockfile on FD 3

kill_children() {
    # close our own handle on the lockfile
    exec 3>&-
    # kill everything that still has it open (our children and their children)
    fuser -k "$lockfile_name" >/dev/null
    # ...then open it again.
    exec 3>"$lockfile_name"
}

rerun() {
    trap 'kill_children; exit 0' SIGINT SIGTERM
    printf -v format '%b' "\033[1;33m%w%f\033[0m written"
    "$@" &
    # When a file changes in the directory, rerun the process
    while inotifywait -qre close_write --format "$format" .; do
        kill_children
        "$@" &
    done
}
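For the simpler case described at the top of this answer (tracking only direct children in an array), a minimal sketch might look like the following; the sleep commands are just placeholders for real background workers:
children=( )
sleep 100 & children+=( "$!" )   # placeholder background jobs
sleep 200 & children+=( "$!" )
# On shutdown, kill everything we started and reap it quietly.
trap 'kill "${children[@]}" 2>/dev/null; wait "${children[@]}" 2>/dev/null; exit 0' SIGINT SIGTERM
wait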


Don't show the output of kill command in a Linux bash script [duplicate]

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. ($! expands to the PID of the most recent background command.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error by redirecting a new file descriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script if the script itself alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
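For example (a minimal sketch; once a job is disowned, bash drops it from its job table and no longer reports its termination):
sleep 30 &
disown
kill "$!" 2>/dev/null   # $! still holds the pid after disown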
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated or evaluate its return code.
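A minimal sketch of getting that pid back, assuming the job's own output is not needed on stdout (otherwise the command substitution would wait for it):
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo "$!" )
# ... later, from the current shell, with no job-control message:
kill -0 "$pid" 2>/dev/null && kill "$pid"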
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
Had success with adding 'jobs 2>&1 >/dev/null' to the script, not certain if it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
ex:
{ kill -9 $PID; } 2>/dev/null

How to properly sigint a bash script that is run from another bash script?

I have two scripts, one of which calls the other and needs to kill it after some time. A very basic, working example is given below.
main_script.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT $PID
#wait $PID
echo "Finished"
record.sh:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
RECORD_PIDS=1
printf "WallTimeStart: %f\n\n" $(date +%s.%N) >> test.txt
top -b -p $RECORD_PIDS -d 1.00 >> test.txt
printf "WallTimeEnd: %f\n\n" $(date +%s.%N) >> test.txt
Now, if I run main_script.sh, it will not nicely close record.sh on finish: the top command will keep on running in the background (test.txt will grow until you manually kill the top process), even though the main_script is finished and the record script is killed using SIGINT.
If I ctrl+c the main_script.sh, everything shuts down properly. If I run record.sh on its own and ctrl+c it, everything shuts down properly as well.
If I uncomment wait, the script will hang and I will need to ctrl+z it.
I have already tried all kinds of things, including using 'trap' to launch some cleanup script when receiving a SIGINT, EXIT, and/or SIGTERM, but nothing worked. I also tried bringing record.sh back to the foreground using fg, but that did not help either. I have been searching for nearly a day now, with no luck unfortunately. I have made an ugly workaround which uses pidof to find the top process and kill it manually (from main_script.sh), and then I have to write the "WallTimeEnd" statement manually to it as well from main_script.sh. Not very satisfactory to me...
Looking forward to any tips!
Cheers,
Koen
Your issue is that the SIGINT is delivered to bash rather than to top. One option would be to use a new session and send the signal to the process group instead, like:
#!/bin/bash
cd "${0%/*}" #make current working directory the folder of this script
setsid ./record.sh &
PID=$!
# perform some other commands
sleep 5
kill -s SIGINT -$PID
wait $PID
echo "Finished"
This starts the sub-script in a new process group and the -pid tells kill to signal every process in that group, which will include top.

Bash not trapping interrupts during rsync/subshell exec statements

Context:
I have a bash script that contains a subshell and a trap for the EXIT pseudosignal, and it's not properly trapping interrupts during an rsync. Here's an example:
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT
(
#main script logic, including the following lines:
(exec sleep 10;);
(exec rsync --progress -av --delete $directory1 /var/tmp/$directory2;);
) | 2>&1 tee -a $logfile
trap - EXIT #just in case cleanup isn't called for some reason
The idea of the script is this: most of the important logic runs in a subshell which is piped through tee and to a logfile, so I don't have to tee every single line of the main logic to get it all logged. Whenever the subshell ends, or the script is stopped for any reason (the EXIT pseudosignal should capture all of these cases), the trap will intercept it and run the cleanup() function, and then remove the trap. The rsync and sleep commands (the sleep is just an example) are run through exec to prevent the creation of zombie processes if I kill the parent script while they're running, and each potentially-long-running command is wrapped in its own subshell so that when exec finishes, it won't terminate the whole script.
The problem:
If I interrupt the script (via kill or CTRL+C) during the exec/subshell wrapped sleep command, the trap works properly, and I see "Cleaning up!" echoed and logged. If I interrupt the script during the rsync command, I see rsync end, and write rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6] to the screen, and then the script just dies; no cleanup, no trapping. Why doesn't an interrupting/killing of rsync trigger the trap?
I've tried using the --no-detach switch with rsync, but it didn't change anything.
I have bash 4.1.2, rsync 3.0.6, centOS 6.2.
How about just having all the output from point X be redirected to tee without having to repeat it everywhere and mess with all the sub-shells and execs ... (hope I didn't miss something)
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
exec > >(exec tee -a $logfile) 2>&1
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap cleanup EXIT
sleep 10
rsync --progress -av --delete $directory1 /var/tmp/$directory2
In addition to set -e, I think you want set -E:
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
Alternatively, instead of wrapping your commands in subshells use curly braces which will still give you the ability to redirect command outputs but will execute them in the current shell.
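A minimal illustration of that last point, reusing the question's variables: both forms accept redirections, but only the brace group executes in the current shell environment:
( rsync --progress -av --delete $directory1 /var/tmp/$directory2 ) >>"$logfile" 2>&1   # subshell: separate execution environment
{ rsync --progress -av --delete $directory1 /var/tmp/$directory2; } >>"$logfile" 2>&1  # brace group: runs in the current shell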
The interrupt will be properly caught if you add INT to the trap:
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT INT
Bash is trapping interrupts correctly. This does not answer the question of why the EXIT trap fires when sleep is interrupted but not when rsync is; however, it does make the script work as it is supposed to. Hope this helps.
Your shell might be configured to exit on error:
bash # enter subshell
set -e
trap "echo woah" EXIT
sleep 4
If you interrupt sleep (^C) then the subshell will exit due to set -e and print woah in the process.
Also, slightly unrelated: your trap - EXIT is in a subshell (explicitly), so it won't have an effect after the cleanup function returns
It's pretty clear from experimentation that rsync behaves like other tools such as ping and does not inherit signals from the calling Bash parent.
So you have to get a little creative with this and do something like the following:
$ cat rsync.bash
#!/bin/sh
set -m
trap '' SIGINT SIGTERM EXIT
rsync -avz LargeTestFile.500M root@host.mydom.com:/tmp/. &
wait
echo FIN
Now when I run it:
$ ./rsync.bash
X11 forwarding request failed
building file list ... done
LargeTestFile.500M
^C^C^C^C^C^C^C^C^C^C
sent 509984 bytes received 42 bytes 92732.00 bytes/sec
total size is 524288000 speedup is 1027.96
FIN
And we can see the file did fully transfer:
$ ll -h | grep Large
-rw-------. 1 501 games 500M Jul 9 21:44 LargeTestFile.500M
How it works
The trick here is that we're telling Bash via set -m to enable job control in the script, so the backgrounded rsync runs in its own process group and the terminal's interrupt (^C) is not delivered to it. We then background the rsync and run a wait command, which waits on the last run command, rsync, until it's complete.
We then guard the entire script with the trap '' SIGINT SIGTERM EXIT.
References
https://access.redhat.com/solutions/360713
https://access.redhat.com/solutions/1539283

How to kill a child process after a given timeout in Bash?

I have a bash script that launches a child process that crashes (actually, hangs) from time to time and for no apparent reason (it's closed source, so there isn't much I can do about it). As a result, I would like to be able to launch this process for a given amount of time, and kill it if it did not return successfully after that amount of time.
Is there a simple and robust way to achieve that using bash?
P.S.: tell me if this question is better suited to serverfault or superuser.
(As seen in:
BASH FAQ entry #68: "How do I run a command, and have it abort (timeout) after N seconds?")
If you don't mind installing something, use timeout (most systems already have it as part of coreutils; otherwise use sudo apt-get install coreutils) and use it like:
timeout 10 ping www.goooooogle.com
If you don't want to download something, do what timeout does internally:
( cmdpid=$BASHPID; (sleep 10; kill $cmdpid) & exec ping www.goooooogle.com )
If you want a timeout for a longer piece of bash code, use the second option like this:
( cmdpid=$BASHPID;
(sleep 10; kill $cmdpid) \
& while ! ping -w 1 www.goooooogle.com
do
echo crap;
done )
# Spawn a child process:
(dosmth) & pid=$!
# in the background, sleep for 10 secs then kill that process
(sleep 10 && kill -9 $pid) &
or to get the exit codes as well:
# Spawn a child process:
(dosmth) & pid=$!
# in the background, sleep for 10 secs then kill that process
(sleep 10 && kill -9 $pid) & waiter=$!
# wait on our worker process and return the exitcode
exitcode=$(wait $pid && echo $?)
# kill the waiter subshell, if it still runs
kill -9 $waiter 2>/dev/null
# 0 if we killed the waiter, cause that means the process finished before the waiter
finished_gracefully=$?
sleep 999&
t=$!
sleep 10
kill $t
I also had this question and found two more things very useful:
The SECONDS variable in bash.
The command "pgrep".
So I use something like this on the command line (OSX 10.9):
ping www.goooooogle.com & PING_PID=$(pgrep 'ping'); SECONDS=0; while pgrep -q 'ping'; do sleep 0.2; if [ $SECONDS = 10 ]; then kill $PING_PID; fi; done
As this is a loop I included a "sleep 0.2" to keep the CPU cool. ;-)
(BTW: ping is a bad example anyway; you would just use its built-in "-t" (timeout) option.)
Assuming you have (or can easily make) a pid file for tracking the child's pid, you could then create a script that checks the modtime of the pid file and kills/respawns the process as needed. Then just put the script in crontab to run at approximately the period you need.
Let me know if you need more details. If that doesn't sound like it'd suit your needs, what about upstart?
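A minimal sketch of that watchdog, assuming a hypothetical pid file at /tmp/myapp.pid that the child touches periodically while healthy (myapp itself is a placeholder):
#!/bin/bash
# watchdog.sh -- run from cron; respawn ./myapp if its pid file goes stale
pidfile=/tmp/myapp.pid
max_age=60   # seconds without a touch before we assume the child hung
if [ -f "$pidfile" ]; then
    age=$(( $(date +%s) - $(stat -c %Y "$pidfile") ))   # GNU stat
    if [ "$age" -gt "$max_age" ]; then
        kill "$(cat "$pidfile")" 2>/dev/null
        rm -f "$pidfile"
    fi
fi
if [ ! -f "$pidfile" ]; then
    ./myapp &
    echo $! > "$pidfile"
fi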
One way is to run the program in a subshell, and communicate with the subshell through a named pipe with the read command. This way you can check the exit status of the process being run and communicate this back through the pipe.
Here's an example of timing out the yes command after 3 seconds. It gets the PID of the process using pgrep (possibly only works on Linux). There is also some problem with using a pipe, in that a process opening a pipe for read will hang until it is also opened for write, and vice versa. So to prevent the read command hanging, I've "wedged" the pipe open with a background subshell that holds it open for writing. (Another way to prevent a freeze is to open the pipe read-write, i.e. read -t 5 <>finished.pipe -- however, that also may not work except with Linux.)
rm -f finished.pipe
mkfifo finished.pipe
{ yes >/dev/null; echo finished >finished.pipe ; } &
SUBSHELL=$!
# Get command PID
while : ; do
    PID=$( pgrep -P $SUBSHELL yes )
    test "$PID" = "" || break
    sleep 1
done
# Open pipe for writing
{ exec 4>finished.pipe ; while : ; do sleep 1000; done ; } &
read -t 3 FINISHED <finished.pipe
if [ "$FINISHED" = finished ] ; then
    echo 'Subprocess finished'
else
    echo 'Subprocess timed out'
    kill $PID
fi
rm finished.pipe
Here's an attempt which tries to avoid killing a process after it has already exited, which reduces the chance of killing another process with the same process ID (although it's probably impossible to avoid this kind of error completely).
run_with_timeout ()
{
    t=$1
    shift
    echo "running \"$*\" with timeout $t"
    (
        # first, run process in background
        (exec sh -c "$*") &
        pid=$!
        echo $pid
        # the timeout shell
        (sleep $t ; echo timeout) &
        waiter=$!
        echo $waiter
        # finally, allow process to end naturally
        wait $pid
        echo $?
    ) \
    | (read pid
       read waiter
       if test $waiter != timeout ; then
           read status
       else
           status=timeout
       fi
       # if we timed out, kill the process
       if test $status = timeout ; then
           kill $pid
           exit 99
       else
           # if the program exited normally, kill the waiting shell
           kill $waiter
           exit $status
       fi
    )
}
Use like run_with_timeout 3 sleep 10000, which runs sleep 10000 but ends it after 3 seconds.
This is like other answers which use a background timeout process to kill the child process after a delay. I think this is almost the same as Dan's extended answer (https://stackoverflow.com/a/5161274/1351983), except the timeout shell will not be killed if it has already ended.
After this program has ended, there will still be a few lingering "sleep" processes running, but they should be harmless.
This may be a better solution than my other answer because it does not use the non-portable shell feature read -t and does not use pgrep.
Here's the third answer I've submitted here. This one handles signal interrupts and cleans up background processes when SIGINT is received. It uses the $BASHPID and exec trick used in the top answer to get the PID of a process (in this case $$ in a sh invocation). It uses a FIFO to communicate with a subshell that is responsible for killing and cleanup. (This is like the pipe in my second answer, but having a named pipe means that the signal handler can write into it too.)
run_with_timeout ()
{
    t=$1 ; shift
    trap cleanup 2
    F=$$.fifo ; rm -f $F ; mkfifo $F
    # first, run main process in background
    "$@" & pid=$!
    # sleeper process to time out
    ( sh -c "echo \$\$ >$F ; exec sleep $t" ; echo timeout >$F ) &
    read sleeper <$F
    # control shell. read from fifo.
    # final input is "finished". after that
    # we clean up. we can get a timeout or a
    # signal first.
    ( exec 0<$F
      while : ; do
          read input
          case $input in
              finished)
                  test $sleeper != 0 && kill $sleeper
                  rm -f $F
                  exit 0
                  ;;
              timeout)
                  test $pid != 0 && kill $pid
                  sleeper=0
                  ;;
              signal)
                  test $pid != 0 && kill $pid
                  ;;
          esac
      done
    ) &
    # wait for process to end
    wait $pid
    status=$?
    echo finished >$F
    return $status
}

cleanup ()
{
    echo signal >$$.fifo
}
I've tried to avoid race conditions as far as I can. However, one source of error I couldn't remove is when the process ends near the same time as the timeout. For example, run_with_timeout 2 sleep 2 or run_with_timeout 0 sleep 0. For me, the latter gives an error:
timeout.sh: line 250: kill: (23248) - No such process
as it is trying to kill a process that has already exited by itself.
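A common way to avoid that "No such process" error (though it cannot rule out pid reuse) is to check that the pid is still alive before signalling it:
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
fi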
#Kill command after 10 seconds
timeout 10 command
#If you don't have timeout installed, this is almost the same:
sh -c '(sleep 10; kill "$$") & command'
#The same as above, with muted duplicate messages:
sh -c '(sleep 10; kill "$$" 2>/dev/null) & command'

Kill bash script foreground children when a signal comes

I am wrapping a fastcgi app in a bash script like this:
#!/bin/bash
# stuff
./fastcgi_bin
# stuff
As bash only executes traps for signals when the foreground command ends, I can't just kill -TERM scriptpid because the fastcgi app will be kept alive.
I've tried sending the binary to the background:
#!/bin/bash
# stuff
./fastcgi_bin &
PID=$!
trap "kill $PID" TERM
# stuff
But if I do it like this, apparently stdin and stdout aren't properly redirected, because it does not connect with lighttpd's mod_fastcgi; the foreground version does work.
EDIT: I've been looking at the problem and this happens because bash redirects /dev/null to stdin when a program is launched in the background, so any way of avoiding this should solve my problem as well.
Any hint on how to solve this?
There are some options that come to my mind:
When a process is launched from a shell script, both belong to the same process group. Killing the parent process leaves the children alive, so the whole process group should be killed. This can be achieved by passing the negated PGID (Process Group ID) to kill, which is the same as the parent's PID, e.g.: kill -TERM -$PARENT_PID
Do not execute the binary as a child, but replace the script process with exec. You lose the ability to execute stuff afterwards though, because exec completely replaces the parent process.
Do not kill the shell script process, but the FastCGI binary. Then, in the script, examine the return code and act accordingly, e.g.: ./fastcgi_bin || exit -1
Depending on how mod_fastcgi handles worker processes, only the second option might be viable.
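A minimal sketch of the second option, reusing the question's names:
#!/bin/bash
# stuff that must happen before the app starts goes here
exec ./fastcgi_bin   # replaces the shell process, so signals reach the binary directly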
I have no idea if this is an option for you or not, but since you have a bounty I am assuming you might go for ideas that are outside the box.
Could you rewrite the bash script in Perl? Perl has several methods of managing child processes. You can read perldoc perlipc and more specifics in the core modules IPC::Open2 and IPC::Open3.
I don't know how this will interface with lighttpd etc or if there is more functionality in this approach, but at least it gives you some more flexibility and some more to read in your hunt.
I'm not sure I fully get your point, but here's what I tried and the process seems to be able to manage the trap (call it trap.sh):
#!/bin/bash
trap "echo trap activated" TERM INT
echo begin
time sleep 60
echo end
Start it:
./trap.sh &
And play with it (only one of those commands at once):
kill -9 %1
kill -15 %1
Or start in foreground:
./trap.sh
And interrupt with control-C.
Seems to work for me.
What exactly does not work for you?
I wrote this script just minutes ago to kill a bash script and all of its children...
#!/bin/bash
# This script will kill all the child process id for a given pid
# based on http://www.unix.com/unix-dummies-questions-answers/5245-script-kill-all-child-process-given-pid.html
ppid=$1
if [ -z $ppid ] ; then
    echo "This script kills the process identified by pid, and all of its kids";
    echo "Usage: $0 pid";
    exit;
fi
for i in `ps j | awk '$3 == '$ppid' { print $2 }'`
do
    $0 $i
    kill -9 $i
done
Make sure the script is executable, or you will get an error on the $0 $i
You can override the implicit </dev/null for a background process by redirecting stdin yourself, for example:
sh -c 'exec 3<&0; { read x; echo "[$x]"; } <&3 3<&- & exec 3<&-; wait'
Try keeping the original stdin using ./fastcgi_bin 0<&0 &:
#!/bin/bash
# stuff
./fastcgi_bin 0<&0 &
PID=$!
trap "kill $PID" TERM
# stuff
# test
#sh -c 'sleep 10 & lsof -p ${!}'
#sh -c 'sleep 10 0<&0 & lsof -p ${!}'
You can do that with a coprocess.
Edit: well, coprocesses are background processes that can have stdin and stdout open (because bash prepares fifos for them). But you still need to read/write to those fifos, and the only useful primitive for that is bash's read (possibly with a timeout or a file descriptor); nothing robust enough for a cgi. So on second thought, my advice would be not to do this thing in bash. Doing the extra work in the fastcgi, or in an http wrapper like WSGI, would be more convenient.
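For completeness, a minimal coproc illustration (bash 4+, not tied to the fastcgi case): the background process keeps stdin and stdout connected to the parent through the file descriptors in the COPROC array.
#!/bin/bash
coproc cat                           # background coprocess
echo "hello" >&"${COPROC[1]}"        # write to its stdin
IFS= read -r reply <&"${COPROC[0]}"  # read one line from its stdout
echo "got: $reply"
kill "$COPROC_PID"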
