If I gracefully kill (without -9) a script which is running another script, which in turn is running java, will the java process receive the kill signal in cascade?
I have seen java not shut down properly in this case and become owned by init (pid 1). I have fixed this in the past by recording the PID of the java process after it has launched, and then sending it a kill -15 from a signal handler inside the bash script.
jpid=

trap_intr()
{
    # Forward the signal to the java process, if it has already started.
    [ -n "$jpid" ] && kill "$jpid"
}

trap trap_intr INT TERM

java -cp ... foo &
jpid=$!
wait
UPDATE: I forgot to put the java process in the background, and have the bash script wait on $!
I have a bash script:
node web/dist/web/src/app.js & node api/dist/api/src/app.js &
$SHELL
It successfully starts both my node servers. However:
I do not receive any output (from console.log etc) in my terminal window
If I cancel with Ctrl+C, the processes are not exited, so I then annoyingly have to do a manual taskkill /F /PID etc. afterwards.
Is there any way around this?
The reason you can't stop your background jobs with Ctrl+C is because signals (SIGINT in this case) are received only by the foreground process.
When your foreground process (the non-interactive main script) exits, its child processes become orphans, which are immediately adopted by the init process. To kill them, you need their PIDs. (When you run a background process in an interactive shell, it will receive SIGHUP, and probably exit, when the shell exits.)
The solution in your case is to make your script wait for its children, using the shell built-in wait command. wait will ensure your script receives the SIGINT, which you can then handle (with trap) and kill the background jobs (with kill 0):
#!/bin/bash
trap 'kill 0' EXIT
node app1.js &
node app2.js &
wait
By setting a trap on EXIT (a special pseudo-signal in bash), you ensure the background processes are terminated whenever your main script exits, whether normally, by Ctrl+C/SIGINT, or by another catchable signal such as SIGTERM or SIGHUP. (No trap can run on SIGKILL, since it cannot be caught.) The kill 0 command kills all processes in the current process group.
Regarding the output -- on Linux, background processes will inherit the standard output/error from shell (if not redirected), and continue to write to your TTY/terminal. If that's not working on Windows, I'm not sure why not.
However, even if your background processes somehow lost their way to your TTY, you can, as a workaround, append to a log file:
node app1.js >>/path/to/file.log 2>&1 &
node app2.js >>/path/to/file.log 2>&1 &
and then tail -f that log file, either in this, or some other terminal:
tail -f /path/to/file.log
I am running a script in Scala and Play using:
val pb = Process(s"bash $path/script.sh")
pb.run
The script starts a background process that is supposed to keep running even when sbt is killed. Here is the script:
#!/bin/bash
nohup liquidsoap liquidsoap.ls >/dev/null 2>&1 &
echo $! > liquidsoap.pid
The problem is that, even with nohup and the output redirected, when I kill sbt, the background process started by the script is killed too.
Thank you
Try adding this to your sbt file:
fork in run := true
I found a solution. The problem was that killing sbt sent a SIGINT signal to all processes. To keep the created processes from being killed, I needed to put them in a different process group, which is done with the setsid command.
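As a sketch, the script could detach the daemon into its own session and process group with setsid(1) from util-linux (hypothetical demo: sleep 305 stands in for the real liquidsoap liquidsoap.ls command):

```shell
#!/bin/bash
# Sketch: run the daemon in a new session via setsid, so signals sent
# to sbt's process group on shutdown never reach it.
# "sleep 305" is a stand-in for "liquidsoap liquidsoap.ls".
setsid nohup sleep 305 >/dev/null 2>&1 &
echo $! > liquidsoap.pid
```

One caveat: if the backgrounded child happens to already be a process group leader, setsid forks instead of just calling setsid() and exec'ing, in which case $! may not be the daemon's PID; in a plain non-interactive script this normally doesn't happen, but it is worth verifying on your system. The daemon can later be stopped with kill "$(cat liquidsoap.pid)".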
I run a shell script inside php (using shell_exec). I want to kill it and all processes it created when it is in operation. I know the shell script pid. Do you have any suggestion?
I am using ubuntu, php as apache module.
Thank you.
Example:
#!/bin/bash
echo hello
sleep 20
When I kill my script (shell_exec("sudo kill -9 $pid")), the sleep process is not killed which is not desired.
Use

pkill -TERM -P pid

to send SIGTERM to all direct child processes of the given pid. (Note that this reaches only direct children, not grandchildren.) See this answer.
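A minimal demonstration of the idea (hypothetical stand-ins: an inner bash -c plays the role of the script, and sleep 307 its long-running child):

```shell
#!/bin/bash
# Spawn a "script" that itself starts a long-running child, then
# SIGTERM the script's direct children with pkill -P.
bash -c 'sleep 307; echo "parent survived its child"' &
parent=$!
sleep 1                     # give the child time to start
pkill -TERM -P "$parent"    # signals the children, not $parent itself
wait "$parent"
```

Only the sleep child is terminated; the inner shell itself keeps running, which illustrates that pkill -P targets children rather than the process whose PID you pass.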
Use this kill command instead:

kill -- -$pid

to kill the running script and all its spawned children. The negative PID addresses the whole process group, which works only if $pid is a process group leader (starting the script with setsid ensures that).
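A sketch of the whole approach (hypothetical demo: sleep 311 stands in for the script's long-running work, and setsid makes the worker a process group leader so the negative-PID kill has a group to target):

```shell
#!/bin/bash
# Launch the worker in its own process group, then kill the entire
# group, parent and children alike, with a negated PID.
setsid bash -c 'echo hello; sleep 311' &
pid=$!
sleep 1                  # let the worker and its child start
kill -TERM -- "-$pid"    # the negative PID targets the whole group
```

From PHP, the same kill could be issued through shell_exec(). kill -9 on the group would also work, but TERM gives the processes a chance to clean up first.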
OK, just like in this thread, How to get PID of background process?, I know how to get the PID of a background process. However, what I need to do contains more than one operation.
{
sleep 300;
echo "Still running after 5 min, killing process manually.";
COMMAND COMMAND COMMAND
echo "Shutdown complete"
}&
PID_CHECK_STOP=$!
some stuff...
kill -9 $PID_CHECK_STOP
But it doesn't work. It seems I either get a bad PID or I just can't kill it. When I run ps | grep sleep, the PID it shows is always right next to the one I get in PID_CHECK_STOP. Is there a way to make it work? Can I wrap those commands another way so I can kill them all when I need to?
Thx guys!
kill -9 kills the process before it can do anything else, including signalling its children to exit. Use a gentler signal (kill by itself, which sends a TERM, should be sufficient). You do need to have the process signal its children to exit (if any) explicitly, though, via a trap command.
I'm assuming sleep is a placeholder for the real command. sleep is tricky, however, as it ignores any signals until it returns (i.e., it is non-interruptible). To make your example work, put sleep itself in the background and immediately wait on it. When you kill the "outer" background process, it will interrupt the wait call, which will allow sleep to be killed as well.
{
trap 'kill $(jobs -p)' EXIT
sleep 300 & wait
echo "Still running after 5 min, killing process manually.";
COMMAND COMMAND COMMAND
echo "Shutdown complete"
}&
PID_CHECK_STOP=$!
some stuff...
kill $PID_CHECK_STOP
UPDATE: COMMAND COMMAND COMMAND includes a command that runs via sudo. To kill that process, kill must also be run via sudo. Keep in mind that doing so will run the external kill program, not the shell built-in (there is little difference between the two; the built-in exists to allow you to kill a process when your process quota has been reached).
You can have another script containing those commands and kill that script. If you are dynamically generating code for the block, just write out a script, execute it and kill when you are done.
The { ... } block followed by & runs the grouped statements in a subshell, and the PID you get afterwards is that subshell's PID. sleep and the other commands within the block get separate PIDs of their own.
To illustrate, look for your process in ps afux | less - the parent shell process (above the sleep) has the PID you were just given.
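A short demonstration (hypothetical; sleep 303 is just a stand-in) that prints both PIDs so you can see they differ:

```shell
#!/bin/bash
# $! after the block gives the subshell's PID; the sleep inside the
# block has its own PID, reported from within the subshell.
{
    sleep 303 &
    echo "sleep pid:    $!"
    kill $!                # tidy up the helper sleep again
    wait
} &
echo "subshell pid: $!"
wait
```

The two printed PIDs are different processes, which is why killing the value stored from $! does not touch the sleep directly.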
Using bash on Linux, is it possible to spawn parallel processes in the foreground? For example, the following:
top.sh
#!/bin/bash
./myscript1.sh &
./myscript2.sh &
will spawn two processes in parallel as background jobs. However, is it possible to spawn these as foreground processes? The aim is to kill myscript1.sh and myscript2.sh automatically when top.sh is killed. Thanks
You can only have one job in the foreground. You can make your script react to any signal that reaches it and forward the signal to other jobs. You need to make sure your script sticks around, if you want to have a central way of killing the subprocesses: call wait so that your script will not exit until all the jobs have died or the script itself is killed.
#!/bin/bash
jobs=
trap 'kill -HUP $jobs' INT TERM HUP
myscript1.sh & jobs="$jobs $!"
myscript2.sh & jobs="$jobs $!"
wait
You can still kill only the wrapper script by sending it a signal that it doesn't catch, such as SIGQUIT (which I purposefully left out) or SIGKILL (which can't be caught).
There's a way to have all the processes in the foreground: connect them through pipes. Ignore SIGPIPE so that the death of a process doesn't kill the previous one. Save and restore stdin and stdout through other file descriptors if you need them. This way the script and the background tasks will be in the same process group, so pressing Ctrl+C will kill both the wrapper script and the subprocesses. If you kill the wrapper script directly, that won't affect the subprocesses; you can kill the process group instead by passing the negative of the PID of the wrapper script (e.g. kill -TERM -1234).
trap '' PIPE
{
myscript1.sh <&3 >&4 |
myscript2.sh <&3 >&4
} 3<&0 4>&1
Using GNU Parallel your script would be:
#!/bin/bash
parallel ::: ./myscript1.sh ./myscript2.sh
Or even:
#!/usr/bin/parallel --shebang -r
./myscript1.sh
./myscript2.sh
Watch the intro videos to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1