Bash handle exiting multiple processes - linux

My goal is to run multiple processes using bash, then wait for user input (for instance, a command such as 'exit') and exit upon that command.
I have the bits and pieces, I think, but am having a hard time putting them together.
From what I saw, I can run multiple processes by sending them to the background, like so:
./process1 &
./process2 &
I also saw that $! returns the PID of the most recently started background process. Does this, then, make sense:
./process1 &
pidA = $!
./process2 &
pidB = $!
From there, I am trying to do the following:
echo "command:"
read userInput
if["$userInput" == "exit"]; then
kill $pidA
kill $pidB
fi
Does this make sense, or am I just not getting it?

That looks good, although you'll probably need a loop around the user-input part.
Note that you need to be careful with shell syntax. "pidA = $!" is not what you think; you want "pidA=$!" with no spaces. The former tries to run a command named pidA with the arguments "=" and the PID of the last started background command (and will typically fail with "pidA: command not found"). Likewise, "[" is itself a command, so your if needs spaces: if [ "$userInput" == "exit" ]; then.
Also note that you could use the "trap" command to issue the kill commands on termination of the shell script. Like this:
trap "kill $!" EXIT

The final code ended up as:
#!/bin/bash
pushd "${0%/*}"
cd ../
mongoPort=12345
mongoPath="db/data"
echo "Starting mongo and node server"
db/bin/mongod --dbpath="$mongoPath" --port="$mongoPort" &
MONGOPID=$!
node server.js &
NODEPID=$!
while [ "$input" != "exit" ]; do
    read input
    if [ "$input" == "exit" ]; then
        echo "exiting..."
        kill $MONGOPID
        kill $NODEPID
        exit
    fi
done

Related

how can I make bash block on getting a line of stdout from a job that I have spawned

I need to launch a process within a shell script. (It is a special logging process.) It needs to live for most of the shell script, while some other processes will run, and then at the end we will kill it.
A problem that I am having is that I need to launch this process, and wait for it to "warm up", before proceeding to launch more processes.
I know that I can wait for a line of input from a pipe using read, and I know that I can spawn a child process using &. But when I use them together, it doesn't work like I expect.
As a mockup:
When I run this (sequential):
(sleep 1 && echo "foo") > read
my whole shell blocks for 1 second, and the echo of foo is consumed by read, as I expect.
I want to do something very similar, except that I run the "foo" job in parallel:
(sleep 1 && echo "foo" &) > read
But when I run this, my shell doesn't block at all; it returns instantly. Why doesn't the read wait for a line to be printed on the pipe?
Is there some easy way to combine "spawning of a job" (&) with capturing the stdout pipe within the original shell?
An example that is very close to what I actually need, and that I need to rephrase somehow, is:
(sleep 1 && echo "foo" && sleep 20 &) > read; echo "bar"
and I need for it to print "bar" after exactly one second, and not immediately, or 21 seconds later.
Here's an example using named pipes, pretty close to what I used in the end. Thanks to Luis for his comments suggesting named pipes.
#!/bin/sh
# Set up a temporary fifo
FIFO=/tmp/test_fifo
rm -f "$FIFO"
mkfifo "$FIFO"
# Spawn a second job that writes to the FIFO after some time;
# the grouping is needed so the redirection applies to the echo,
# not just to the final sleep
(sleep 1 && echo "foo" && sleep 20) > "$FIFO" &
# Block the main job on getting a line from the FIFO
read line < "$FIFO"
# So that we can see when the main job unblocks
echo "$line"
Thanks also to commenter Emily E.; the example I posted that was misbehaving was indeed writing to a file called read instead of using the shell builtin command read.
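If bash rather than plain sh is acceptable, process substitution offers a shorter alternative to the named pipe; this is my sketch, not part of the original answer:
#!/bin/bash
# read is fed from a pipe connected to the background job's stdout,
# so it returns as soon as the first line arrives (~1 second here)
# while the job itself keeps running.
read line < <(sleep 1 && echo "foo" && sleep 20)
echo "bar"   # printed after about one second, not 21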

bash: daemonizing by forking process as a new child

I have a bash script which should daemonize itself after being run. My solution looks as follows:
#!/bin/sh -xe
child() {
echo child
}
child & # fork child
echo parent
kill $$ # kill parent
However, putting the whole script itself inside the function child does not seem like the correct thing to do. Unfortunately, exec & won't fork off the whole process into a backgrounded child.
How can I achieve the desired effect?
I usually do something like this:
#!/bin/bash
if [ -z "$_IS_DAEMON" ]; then
_IS_DAEMON=1 /bin/bash $0 "$#" &
exit
fi
echo "I'm a deamon!"
The script effectively restarts itself in the background, while exiting the script started by user.
To recognize the daemonization status, it uses an environment variable (the $_IS_DAEMON in the example above): if not set, assume started by user; if set, assume started as part of daemonization.
To restart itself, the script simply invokes $0 "$@": the $0 is the name of the script as started by the user, and the "$@" expands to the arguments passed to the script, preserved with whitespace and all (unlike $*). I also typically invoke the needed shell explicitly, to avoid confusion between /bin/bash and /bin/sh, which on most *nix systems are not the same.
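For a fuller daemonization you can also detach the restarted copy from the terminal's session and standard streams; the setsid call and the redirections in this sketch are my additions, not part of the answer above:
#!/bin/bash
if [ -z "$_IS_DAEMON" ]; then
    # New session (no controlling terminal), all standard streams detached.
    _IS_DAEMON=1 setsid /bin/bash "$0" "$@" </dev/null >/dev/null 2>&1 &
    exit
fi
echo "I'm a daemon!"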

Printing the pid of just started process before its output

I am controlling a remote Linux machine via SSH. I need to be able to know the PID of a process while it is running, and its exit status after the run.
My attempt has been to issue this command via SSH
my_cmd & echo $!; wait $!; echo $?;
The output is thus the following, exactly what I need:
pid
...stdout...
exit_status
Now it sometimes happens that the command is apparently too fast, so I get something like:
...stdout...
pid
exit_status
Is there a way to prevent this behavior?
When you run a background program it is an independent process, and some synchronization is necessary if output in a defined order is required. But this particular case can be solved easily via exec and an additional shell script:
First script, let's say start:
#!/bin/bash
start2 &
wait $!
echo $?
The second script, start2:
#!/bin/bash
echo $$
exec my_cmd
Now the first script starts the second one and waits for the result. The second script prints its own PID and then execs the program, which therefore runs with the same PID as the second script.
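If a second script feels heavy, the same trick fits into one bash script using a subshell; this sketch is my addition ($BASHPID, unlike $$, reports the subshell's own PID):
#!/bin/bash
# Print the subshell's PID, then exec my_cmd so it keeps that PID.
( echo "$BASHPID"; exec my_cmd ) &
wait $!
echo $?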
Yes, when you use & you are invoking a background process with its output to stdout as you exemplified... And that's the point. Your process execution and stdout printing is happening faster than pid query and printing and so on.... So what I recommend you is to redirect stdout to a /tmp/tempfile and then print it exactly when you need the output to be printed.... Example:
$ o=/tmp/_o; your_cmd >"$o" 2>&1 & echo "pid is $!"; wait $!; r=$?; cat "$o"; echo "return is $r"; rm -f "$o"
first we set the output variable 'o' to the temporary file '/tmp/_o'
then we run your_cmd, redirecting stdout to $o and sending stderr to the same place (2>&1)
then we show the pid
then we wait for the pid
then we store the return code in var 'r' (so we can show it after the output)
then we cat the temp file, showing the output exactly when you want it
then we show the return code
then we remove the temp file 'o'
Hope this works for your case and doesn't feel too complex; hopefully someone can suggest a simpler workaround for you.

Linux shell script not executing completely as desktop shortcut

I'm looking to create a shell script to open a set of applications whenever I start my workday. I found a couple of posts like this which seem to be what I'm looking for. The problem is, the script doesn't work when I double-click on it.
If I start the script from a terminal, it executes completely, but I don't want to always have to call it from a terminal; I want to double-click a shortcut. If I add a "sleep 1" to the end, it works most of the time, but the problem is that 1 second is not always enough time to execute everything. It also feels very imprecise. Sure, I could say "sleep 10" and be done with it, but as a developer this feels like a hack.
Here is my current script, I intend to add my applications to this list over time, but this will be sufficient for now:
#!/bin/bash
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
So the question is: how can I ensure everything starts but not leave the temporary terminal window open longer than it needs to be?
In case it matters, to create this script I simply saved a .sh file to the desktop and checked "Allow executing file as program" in the file properties.
Try preceding each command with nohup:
#!/bin/bash
nohup skype &
nohup /opt/google/chrome/google-chrome &
nohup geany &
nohup mysql-workbench &
Better yet, use a loop:
#!/bin/bash
apps="skype /opt/google/chrome/google-chrome geany mysql-workbench"
for app in $apps; do
    nohup $app &
done
If any errors occur, check nohup.out for messages.
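If any of the commands ever needs arguments or a path containing spaces, a bash array is a safer shape for the same loop; a sketch (the /dev/null redirections are my addition and merely stop nohup.out files from appearing):
#!/bin/bash
apps=(skype /opt/google/chrome/google-chrome geany mysql-workbench)
for app in "${apps[@]}"; do
    nohup "$app" >/dev/null 2>&1 &
done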
I think the reason for this problem is that the I/O files (most likely the ttys) are closed too early. You can try redirecting all I/O (stdin, stdout, stderr), for example:
skype < /dev/null > /dev/null 2> /dev/null &
Something like this should also work:
#!/bin/sh
{
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
} < /dev/null > /dev/null 2 > /dev/null &
EDIT:
I can reproduce it on Ubuntu 12.04. It seems the terminal program, when closing, kills all processes in its pgroup. Tried with:
/usr/bin/gnome-terminal -x /bin/sh -c ./test.sh
xterm -e ./test.sh
and the result is the same: without the sleep, the programs don't show up. It seems the terminal, when the script finishes, sends SIGHUP to the pgroup of the shell script. You can see it by running any of the above programs via strace -f. At the end of the listing there should be a kill(PID, SIGHUP) with a very big PID number as the argument - actually it is a negative number, so the SIGHUP is sent to all processes in the pgroup.
I would assume many X11 programs ignore SIGHUP. The problem is that the SIGHUP is sent/received before they change the default behaviour. With the sleep you are giving them some time to change the SIGHUP handling.
I've tried disown (a bash builtin), but it didn't help (the SIGHUP to the pgroup is sent by the terminal, not the shell).
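Since an "ignored" signal disposition survives fork and exec, another angle consistent with this analysis is to have the script itself ignore SIGHUP before launching the programs, so every child starts with SIGHUP already ignored; this sketch is my addition, equivalent in spirit to prefixing each command with nohup:
#!/bin/bash
trap '' HUP    # children started below inherit "SIGHUP ignored"
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &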
EDIT:
One possible solution would be to make a script.desktop file (you can use some existing .desktop file as a template; on Ubuntu these are located in /usr/share/applications) and start your script from this file. It seems even X11 programs which don't ignore SIGHUP (xclock) are normally started this way.
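A minimal sketch of such a file (the Name and the Exec path are placeholders you'd adapt):
[Desktop Entry]
Type=Application
Name=Workday apps
Exec=/home/user/bin/test.sh
Terminal=false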
Firstly, you seem to have a TRAILING ampersand (&)... this might be causing some issues.
Secondly, you could do something like the following to ensure that you only exit the shell (i.e. end execution) upon success:
#!/bin/bash
skype & /opt/google/chrome/google-chrome & geany & mysql-workbench
if [ $? -eq 0 ]
then
    echo "Successfully completed operation (launched files, etc...)"
    ## 'exit 0' will exit the TERMINAL, and therefore the SCRIPT AS WELL,
    ## indicating to the shell that there was NO ERROR (success). Use it
    ## if you don't want to see anything/be notified on success.
    exit 0
    ## 'return 0' would instead allow the "successful" message to be
    ## written to the command line and keep the terminal open, if you
    ## want confirmation of success; the SCRIPT then exits and control
    ## returns to the terminal, but it is not forced closed.
    #return 0
else
    echo "Operation failed!" >&2
    ## 'exit 1' will exit the TERMINAL, and therefore the SCRIPT AS
    ## WELL, indicating an ERROR to the shell
    #exit 1
    ## 'return 1' will exit the SCRIPT only (but not the terminal) and
    ## will indicate an ERROR to the shell
    return 1
fi
** UPDATE **
(notice I added an ampersand & to the end of my answer below)
You could do a one-liner. The following will run all the commands sequentially, one at a time, each one running only if/when the previous one ends. The command-line statement terminates if and when any of the individual commands between the && fail.
(skype && /opt/google/chrome/google-chrome && geany && mysql-workbench) && echo "Success!" || echo "fail" &

Confusion behaviour of nohup

On running nohup with & on command line, it is returning the process id,
while the same command I am running in perl script within backticks and trying to read output is not returning any output.
Can anyone please guide?
nohup rm -rf ragh &
[1] 10029
The job number and PID are printed by the shell when starting a background process in a terminal; nohup is irrelevant here. If you don't start the job from a terminal (i.e. you use backticks in Perl, or a plain subshell) the information isn't shown. Why do you need it, anyway? See perlipc - Perl interprocess communication for details.
If you need the process ID of the background job then use the $! variable, for example:
nohup start_long_running_job &
echo $! > jobid.txt
And then if you need to kill the job:
kill $(cat jobid.txt)
It applies equally with or without nohup.
nohup makes the spawned process ignore the SIGHUP signal, so if your command takes longer than the starting script, it will survive the closing of your shell. If you need the output, you should redirect it somewhere else:
nohup rm -rf ragh > log.txt &
choroba correctly stated when the PID isn't shown ("If you don't start the job from a terminal").
Richard RP correctly stated that $! can be used. But when running inside a Perl script within backticks, we additionally need to close the command's standard output, otherwise the backtick invocation would return only after the process has finished, because Perl waits for EOF on the output.
$pid = `nohup rm -rf ragh >&- & echo \$!`
gets us rm's PID in $pid.
