Executing bash shell script accepting argument and running in background - linux

I have written a shell script run.sh to trigger a few tasks based on the user's input
#!/bin/bash
echo "Please choose mode [1-3]: "
read MODE
case $MODE in
    1)
        echo -n "Enter iteration: "
        read TIME
        echo "Start Updating ..."
        task 1 && task 2 && task 3
        ;;
    2)
        echo -n "Enter Seed Value: "
        read SEED
        echo "Start Updating ..."
        task 4
        task 5
        ;;
    3)
        echo -n "Enter regression minimum value: "
        read MIN
        echo "Start Updating ..."
        task 6
        ;;
    *)
        echo -n "Unknown option - Exit"
        ;;
esac
The tasks 1, 2 ... 6 are PHP scripts that are run like /usr/bin/php task1.php $TIME, with $TIME passed as an argument to the PHP script, etc.
The script runs fine when I type bash run.sh, but since tasks 1-6 take a long time to complete, I would like an option to run the script in the background while I disconnect from the terminal. However, if I run the script using bash run.sh &, I encounter an error like this:
Please choose mode [1-3]: 2
-bash: 2: command not found
[5]+ Stopped bash run.sh
It seems bash never delivered my input 2 to read MODE; instead, my interactive shell picked it up and tried to run it as a command, which caused the error. I cannot change the script so that tasks 1-6 run in the background like task 1 & task 2 & etc., because task 2 can only start after task 1 has completed.
How can I accomplish what I want to do?

You could run all of those tasks sequentially in a background subshell:
( task 1; task 2; task 3 ) &
Try this out with:
( echo "One"; sleep 1; echo "Two"; sleep 2; echo "Three"; sleep 3; echo "Done" ) &
You could also make this more script-looking:
(
echo "One"
sleep 1
echo "Two"
sleep 2
echo "Three"
sleep 3
echo "Done"
) &
Feel free to make use of the handy shell variable $BASH_SUBSHELL:
( echo $BASH_SUBSHELL; ( echo $BASH_SUBSHELL ) )
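Run interactively, that line should print 1 and then 2: $BASH_SUBSHELL is 0 in the top-level shell and increases by one for each level of ( ) nesting, which makes it handy for confirming that your tasks really are running in the background subshell.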

Short answer: running interactive programs (or scripts) in the background doesn't really work; it'll work even less well if you disconnect from the terminal. You should rewrite the script so it doesn't need user input as it runs.
Long answer: when you run the script in the background, something like this happens. Note that the exact order of events may vary, as both the background script and foreground interactive shell are running at the same time, "racing" each other to get things done.
You start the script in the background with bash run.sh &
Your interactive shell is immediately ready for your next command, so it prints your usual command prompt to the terminal.
Your script prints its prompt ("Please choose mode [1-3]: ") to the terminal.
Your interactive shell reads its next command from the terminal. Because the script's prompt printed second, it looks like you're sending input to it, but there's actually no connection between the most recent prompt and which program is receiving your input.
Your interactive shell attempts to run your input ("2") as a command, and it fails.
Your shell script finally gets its chance to read from the terminal... but it's in the background, so it's not allowed to. Instead, it is suspended, and a "Stopped" message is printed. From the bash man page, "Job Control" section:
Background processes which attempt to read from (write to when stty tostop is in effect) the terminal are sent a SIGTTIN (SIGTTOU) signal by the kernel's terminal driver, which, unless caught, suspends the process.
At this point, if you want to continue the script and tell it what to do, you'd need to move it to the foreground (e.g. with the fg command), which would rather negate the point here. Also, its prompt ("Please choose mode [1-3]: ") will not be repeated, as that echo command already finished successfully while the script was in the background.
The solution: basically, write the script to run the tasks non-interactively, in the necessary order. @Lenna has given examples; follow those recommendations.
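For instance, here is a sketch of such a rewrite (assumptions on my part: the PHP scripts are named task1.php ... task6.php, extrapolating from the question's /usr/bin/php task1.php $TIME example, and each takes the mode's value as its argument):
#!/bin/bash
# Usage: nohup bash run.sh MODE VALUE &
# e.g.:  nohup bash run.sh 1 42 &   (mode 1 with 42 iterations)
MODE=$1
VALUE=$2
case $MODE in
    1)
        /usr/bin/php task1.php "$VALUE" && /usr/bin/php task2.php "$VALUE" && /usr/bin/php task3.php "$VALUE"
        ;;
    2)
        /usr/bin/php task4.php "$VALUE"
        /usr/bin/php task5.php "$VALUE"
        ;;
    3)
        /usr/bin/php task6.php "$VALUE"
        ;;
    *)
        echo "Usage: $0 MODE VALUE (MODE is 1-3)" >&2
        exit 1
        ;;
esac
Because it reads nothing from the terminal, it can sit in the background (and survive a disconnect, via nohup) without ever being stopped by SIGTTIN.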

Related

Csh script wait for multiple pid

Does the wait command work in a csh script to wait for more than one PID to finish, i.e. where the wait command waits for all the PIDs listed to complete before moving on to the next line?
e.g.
wait $job1_pid $job2_pid $job3_pid
nextline
The online documentation I usually see only shows the wait command with a single PID, although I have read of using wait with multiple PIDs, like here:
http://www2.phys.canterbury.ac.nz/dept/docs/manuals/unix/DEC_4.0e_Docs/HTML/MAN/MAN1/0522____.HTM
which says: "If one or more pid operands are specified that represent known process IDs, the wait utility waits until all of them have terminated".
No, the builtin wait command in csh can only wait for all jobs to finish. The command in the documentation that you're referencing is a separate executable that is probably located at /usr/bin/wait or similar. This executable cannot be used for what you want to use it for.
I recommend using bash and its more powerful wait builtin, which does allow you to wait for specific jobs or process ids.
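For example (a quick sketch, with sleeps standing in for real jobs), bash's wait accepts several PIDs at once:
#!/bin/bash
sleep 4 & pid1=$!
sleep 7 & pid2=$!
wait "$pid1" "$pid2"    # blocks until both have terminated
echo "both jobs finished"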
From the tcsh man page, wait waits for all background jobs. tcsh is compatible with csh, which is what the university's documentation you linked is referring to.
wait    The shell waits for all background jobs. If the shell is interactive, an interrupt will disrupt the wait and cause the shell to print the names and job numbers of all outstanding jobs.
You can find this exact text on the csh documentation here.
The wait executable described in the documentation is actually a separate command that waits for a list of process ids.
However, the wait executable is not actually capable of waiting for the child processes of the running shell script and has no chance of doing the right thing in a shell script.
For instance, on OS X, /usr/bin/wait is this shell script.
#!/bin/sh
# $FreeBSD: src/usr.bin/alias/generic.sh,v 1.2 2005/10/24 22:32:19 cperciva Exp $
# This file is in the public domain.
builtin `echo ${0##*/} | tr \[:upper:] \[:lower:]` ${1+"$@"}
Anyway, I can't get the /usr/bin/wait executable to work reliably in a Csh script ... because the background jobs are not child processes of the /usr/bin/wait process itself.
#!/bin/csh -f
setenv PIDDIR "`mktemp -d`"
sleep 4 &
# Scrape the background job's PID out of ps, since capturing it directly is awkward in csh
ps ax | grep 'slee[p]' | awk '{ print $1 }' > $PIDDIR/job
# This returns immediately anyway: the sleep is not a child of /usr/bin/wait
/usr/bin/wait `cat $PIDDIR/job`
I would highly recommend writing this script in bash or similar where the builtin wait does allow you to wait for pids and capturing pids from background jobs is easier.
#!/bin/bash
sleep 4 &
pid_sleep_4="$!"
sleep 7 &
pid_sleep_7="$!"
wait "$pid_sleep_4"
echo "waited for sleep 4"
wait "$pid_sleep_7"
echo "waited for sleep 7"
If you don't want to rewrite the entire csh script you're working on, you can call out to bash from inside a csh script like so.
#!/bin/csh -f
bash <<'EOF'
sleep 4 &
pid_sleep_4="$!"
sleep 7 &
pid_sleep_7="$!"
wait "$pid_sleep_4"
echo "waited for sleep 4"
wait "$pid_sleep_7"
echo "waited for sleep 7"
'EOF'
Note that you must end that heredoc with 'EOF' including the single quotes.
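The single quotes also matter for the body of the heredoc: quoting the delimiter prevents csh from attempting its own substitutions on the text, so things like $! and $pid_sleep_4 are passed through to bash untouched.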

how can I make bash block on getting a line of stdout from a job that I have spawned

I need to launch a process within a shell script. (It is a special logging process.) It needs to live for most of the shell script, while some other processes will run, and then at the end we will kill it.
A problem that I am having is that I need to launch this process, and wait for it to "warm up", before proceeding to launch more processes.
I know that I can wait for a line of input from a pipe using read, and I know that I can spawn a child process using &. But when I use them together, it doesn't work like I expect.
As a mockup:
When I run this (sequential):
(sleep 1 && echo "foo") > read
my whole shell blocks for 1 second, and the echo of foo is consumed by read, as I expect.
I want to do something very similar, except that I run the "foo" job in parallel:
(sleep 1 && echo "foo" &) > read
But when I run this, my shell doesn't block at all, it returns instantly -- I don't know why the read doesn't wait for a line to be printed on the pipe?
Is there some easy way to combine "spawning of a job" (&) with capturing the stdout pipe within the original shell?
An example that is very close to what I actually need, and which I need to rephrase somehow, is:
(sleep 1 && echo "foo" && sleep 20 &) > read; echo "bar"
and I need it to print "bar" after exactly one second, not immediately and not 21 seconds later.
Here's an example using named pipes, pretty close to what I used in the end. Thanks to Luis for his comments suggesting named pipes.
#!/bin/sh
# Set up temporary fifo
FIFO=/tmp/test_fifo
rm -f "$FIFO"
mkfifo "$FIFO"
# Spawn a second job whose stdout goes to the FIFO; it prints a line after ~1 second.
# Note the parentheses: the redirection must apply to the whole job, not just the last sleep.
(sleep 1 && echo "foo" && sleep 20) >"$FIFO" &
# Block the main job on getting a line from the FIFO
read line <"$FIFO"
# Show the line we got; the main job reaches here after ~1 second, while the background job keeps running
echo $line
Thanks also to commenter Emily E., the example that I posted that was misbehaving was indeed writing to a file called read instead of using the shell-builtin command read.
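As a further option, if your bash is 4.0 or newer, the coproc builtin sets up the pipe for you; a minimal sketch of the same idea (the sleeps stand in for the logger's warm-up time and lifetime):
#!/bin/bash
# Run the "logger" as a coprocess; ${LOGGER[0]} is the read end of its stdout
coproc LOGGER { sleep 1 && echo "foo" && sleep 20; }
# Block until the coprocess emits its first line
read -r line <&"${LOGGER[0]}"
echo "bar"    # reached after ~1 second, while the coprocess keeps running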

Bash script to schedule multiple jobs

I wonder if it is possible to write a bash script that would do the following:
make firstprogram
which compiles and executes the first program. Then it would wait until this program is done and then execute:
make secondprogram
How can I write the bash script so that it is run in the terminal?
Is this what you intend? It will finish the first command before running the next. If you only want to run the second if the first runs successfully (exits with exit code 0), use && instead of ;
#!/bin/bash
make firstprogram; make secondprogram
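For example, assuming both targets exist in your Makefile:
make firstprogram && make secondprogram && echo "both builds succeeded"
The second make (and the echo) runs only if everything before it exited with status 0.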
You need to utilize the wait command:
#!/bin/bash
make firstprogram
firstprogram &
wait
echo "First program done!"
make secondprogram
secondprogram &
wait
echo "Second program done!"
exit 0

Bash Switch Statement and Executing Multiple Programs

I'm trying to use a Bash switch statement to execute a set of programs. The programs are run through the terminal via a script. The simple idea is:
In the terminal : ./shell.sh
Program asks : "What number?"
I input : 1
Program processes as:
prog="1"
case $prog in
    1) exec gimp && exec mirage ;;
esac
I've tried it several ways yet nothing will run the second program and free the terminal. The first program runs fine and frees the terminal after closing. What am I to put after executing the first program that will allow the second to run in tandem with the first and also free the terminal?
To run two commands in the background, use & after each of them:
case $prog in
    1)
        gimp &
        mirage &
        ;;
esac
exec basically means "replace the shell with this program instead of continuing with this script", so gimp takes over the shell's process and the && mirage part is never reached.
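You can see this in a throwaway shell:
bash -c 'exec true && echo "never printed"'
Nothing is printed: exec replaces the process before the rest of the && list can run.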

Linux shell script not executing completely as desktop shortcut

I'm looking to create a shell script to open a set of applications whenever I start my workday. I found a couple posts like this which seem to be what I'm looking for. The problem is, the script doesn't work when I double-click on it.
If I start the script from Terminal, it executes completely, but I don't want to always have to call this from Terminal; I want to double-click a shortcut. If I add a "sleep 1" to the end, it works most of the time, but the problem here is that 1 second is not always enough time to execute everything. Also, it just feels very imprecise. Sure, I could say "sleep 10" and be done with it, but, as a developer, this feels like a hack solution.
Here is my current script, I intend to add my applications to this list over time, but this will be sufficient for now:
#!/bin/bash
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
So the question is: how can I ensure everything starts but not leave the temporary terminal window open longer than it needs to be?
In case it matters, to create this script I simply saved a .sh file to the desktop and checked "Allow executing file as program" in the file properties.
Try preceding each command with nohup:
#!/bin/bash
nohup skype &
nohup /opt/google/chrome/google-chrome &
nohup geany &
nohup mysql-workbench &
Better yet, use a loop:
#!/bin/bash
apps="skype /opt/google/chrome/google-chrome geany mysql-workbench"
for app in $apps; do
nohup $app &
done
If any errors occur, check nohup.out for messages.
I think the cause of this problem is that the I/O files (most likely ttys) are closed too early. You can try redirecting all I/O (stdin, stdout, stderr), for example:
skype </dev/null >/dev/null 2>/dev/null &
Something like this should also work:
#!/bin/sh
{
    skype &
    /opt/google/chrome/google-chrome &
    geany &
    mysql-workbench &
} </dev/null >/dev/null 2>/dev/null &
EDIT:
I can reproduce it on Ubuntu 12.04. It seems the terminal program, when closing, kills all processes in its pgroup. Tried with:
/usr/bin/gnome-terminal -x /bin/sh -c ./test.sh
xterm -e ./test.sh
The result is the same: without sleep, the programs don't show up. It seems the terminal, when the script finishes, sends SIGHUP to the pgroup of the shell script. You can see it by running any of the above via strace -f. At the end of the listing there should be a kill(PID, SIGHUP) with a very big PID number as the argument; it is actually a negative number, so SIGHUP is sent to every process in the pgroup.
I would assume many X11 programs ignore SIGHUP. The problem is that SIGHUP is sent/received before they change the default behaviour. With sleep you give them some time to change their SIGHUP handling.
I've tried disown (a bash builtin), but it didn't help (the SIGHUP to the pgroup is sent by the terminal, not the shell).
EDIT:
One possible solution would be to make a script.desktop file (you can use some existing .desktop file as a template; on Ubuntu these are located at /usr/share/applications) and start your script from this file. It seems even X11 programs which don't ignore SIGHUP (e.g. xclock) are normally started this way.
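Along the same lines, another workaround (my addition, not from the original answer; it assumes the setsid utility from util-linux is available) is to start each application in its own session, so it leaves the terminal's process group entirely and the SIGHUP never reaches it:
#!/bin/bash
# Each app gets its own session, outside the terminal's pgroup
setsid skype >/dev/null 2>&1 &
setsid /opt/google/chrome/google-chrome >/dev/null 2>&1 &
setsid geany >/dev/null 2>&1 &
setsid mysql-workbench >/dev/null 2>&1 &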
Firstly, you seem to have a TRAILING ampersand (&) ... this might be causing some issues.
Secondly, you could do something like below to ensure that you only exit the shell (i.e. execution) upon success:
#!/bin/bash
skype & /opt/google/chrome/google-chrome & geany & mysql-workbench
if [ $? -eq 0 ]
then
    echo "Successfully completed operation (launched files, etc...)"
    ## 'exit 0' will exit the TERMINAL, therefore the SCRIPT AS WELL,
    ## indicating to the shell that there was NO ERROR (success).
    ## Use it if you don't want to see anything/be notified on success.
    #exit 0
    ## 'return 0' will allow the "successful" message to be written
    ## to the command-line and then keep the terminal open, if you
    ## want confirmation of success. The SCRIPT then exits and
    ## control returns to the terminal, but it will not be forced closed.
    ## (Caveat: 'return' is only valid at the top level when the script
    ## is sourced; in a normally executed script, bash reports an error.)
    return 0
else
    echo "Operation failed!" >&2
    ## 'exit 1' will exit the TERMINAL, and therefore the SCRIPT AS
    ## WELL, indicating an ERROR to the shell
    #exit 1
    ## 'return 1' will exit the SCRIPT only (but not the terminal) and
    ## will indicate an ERROR to the shell
    return 1
fi
** UPDATE **
(notice I added an ampersand & to the end of my answer below)
You could do a one-liner. The following will run all commands sequentially, one at a time; each one runs only if/when the previous one succeeds. The chain stops if AND WHEN any of the individual commands joined by && fails.
(skype && /opt/google/chrome/google-chrome && geany && mysql-workbench) && echo "Success!" || echo "fail" &
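If you also want that chain to survive the terminal closing, it could be combined with the nohup approach from the earlier answer, e.g. (a sketch, using the same commands as above):
nohup sh -c 'skype && /opt/google/chrome/google-chrome && geany && mysql-workbench' >/dev/null 2>&1 &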
