Bash Switch Statement and Executing Multiple Programs - linux

I'm trying to use a Bash switch statement to execute a set of programs. The programs are run through the terminal via a script. The simple idea is:
In the terminal : ./shell.sh
Program asks : "What number?"
I input : 1
Program processes as:
prog="1"
case $prog in
1) exec gimp && exec mirage ;;
esac
I've tried it several ways, yet nothing will run the second program and free the terminal. The first program runs fine and frees the terminal after closing. What should I put after the first program so that the second runs in tandem with it and the terminal is freed?

To run two commands in the background, use & after each of them:
case $prog in
1)
gimp &
mirage &
;;
esac
exec basically means "start running this program instead of continuing with this script", which is why the second program never gets a chance to run.
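Putting that together, here is a minimal sketch of shell.sh as described in the question (the prompt text and the variable name prog follow the question; adjust to taste):
#!/bin/bash
# Ask for the number, as described in the question.
read -p "What number? " prog

case $prog in
1)
    # Start both programs in the background; the script then exits
    # immediately, so the terminal is freed while both keep running.
    gimp &
    mirage &
    ;;
esac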

Related

Calling `ksh` from `ksh` script stops execution

I am executing a ksh script from another ksh script. The called script ends by executing ksh which simply stops the caller script from continuing.
MRE (minimal reproducible example):
#!/bin/ksh
# Caller Script
. ~/called
# Does not continue to next echo.
echo "DONE!"
#!/bin/ksh
#Called script
# Some exports..
ENV=calledcalled ksh
Output with set -x
++ ksh
++ ENV=calledcalled
.kshrc executed
If I run calledcalled directly in my caller it works fine (i.e. it continues with the next commands). Why does this happen? I checked $? and it is 0. I tried ./called || true. Please let me know if more information is needed.
Note: Called script is outside my control.
This is completely normal and expected. Remember, when you run cmd1; cmd2, cmd2 doesn't run until cmd1 exits.
When your script runs ksh (and is invoked from a terminal or some other context where reading from stdin doesn't hit an immediate EOF), nothing makes that new copy of ksh exit: it waits for code to be given to it on stdin, as normal. Your script is therefore just sitting around waiting for that copy of ksh to exit before it does anything else.
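A quick way to see this behaviour in isolation (a small sketch to run from an interactive terminal, assuming ksh is installed):
# The inner ksh inherits the terminal as stdin and sits there reading
# commands; "after" is only printed once you type exit or press Ctrl-D.
ksh -c 'echo before; ksh; echo after'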
There are plenty of ways you can work around this. A few easy ones:
Ensure that stdin is empty so the child interpreter can't wait for input
. ~/called </dev/null
Define a function named ksh that doesn't do anything at all.
ksh() { echo "Not actually running ksh" >&2; }
. ~/called
Set ENV (a variable which, when defined, tells any shell to run the code in that file before doing anything else) to the filename of a script that, when run, causes any interactive shell to exit immediately.
exit_script=$(mktemp -t exit_script.XXXXXX)
printf '%s\n' 'case $- in *i*) exit 0;; esac' >"$exit_script"
ENV=$exit_script . ~/called
rm -f -- "$exit_script"
The above are just a few approaches; you can surely imagine many more with just a little thought and experimentation.

Executing bash shell script accepting argument and running in background

I have written a shell script run.sh to trigger a few tasks based on the user's input
#!/bin/bash
echo "Please choose mode [1-3]: "
read MODE
case $MODE in
1)
echo -n "Enter iteration: "
read TIME
echo "Start Updating ..."
task 1 && task 2 && task 3
;;
2)
echo -n "Enter Seed Value: "
read SEED
echo "Start Updating ..."
task 4
task 5
;;
3)
echo -n "Enter regression minimum value: "
read MIN
echo "Start Updating ..."
task 6
;;
*)
echo -n "Unknown option - Exit"
;;
esac
Tasks 1 through 6 are PHP scripts that are run like /usr/bin/php task1.php $TIME, with $TIME as an argument for the PHP script, etc.
The script runs fine when I type bash run.sh, but since tasks 1-6 take a long time to complete I would like an option to run the script in the background while I disconnect from the terminal. However, if I run the script using bash run.sh & I encounter an error like this:
Please choose mode [1-3]: 2
-bash: 2: command not found
[5]+ Stopped bash run.sh
It seems that bash did not pass my input 2 to read MODE in the script, but instead treated it as its own command (as if I had typed 2 at the prompt), which causes the error. I cannot change the script so that tasks 1-6 are run in the background like task 1 & task 2 & etc., because task 2 can only start after task 1 has completed.
How can I accomplish what I want to do?
You could run all of those tasks sequentially in a background subshell
( task 1; task 2; task 3 ) &
Try this out with:
( echo "One"; sleep 1; echo "Two"; sleep 2; echo "Three"; sleep 3; echo "Done" ) &
You could also make this more script-looking:
(
echo "One"
sleep 1
echo "Two"
sleep 2
echo "Three"
sleep 3
echo "Done"
) &
Feel free to make use of the useful $BASH_SUBSHELL variable:
( echo $BASH_SUBSHELL; ( echo $BASH_SUBSHELL ) )
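Applied to mode 1 of the question's script, that branch could look something like this (a sketch: the /usr/bin/php task1.php "$TIME" invocation comes from the question, the other script names are assumed):
1)
    echo -n "Enter iteration: "
    read TIME
    echo "Start Updating ..."
    # Run the three tasks one after another, but inside a background
    # subshell so the rest of the script is not blocked by them.
    ( /usr/bin/php task1.php "$TIME" && /usr/bin/php task2.php "$TIME" && /usr/bin/php task3.php "$TIME" ) &
    ;;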
Short answer: running interactive programs (/scripts) in the background doesn't really work; it'll work even less well if you disconnect from the terminal. You should rewrite the script so it doesn't need user input as it runs.
Long answer: when you run the script in the background, something like this happens. Note that the exact order of events may vary, as both the background script and foreground interactive shell are running at the same time, "racing" each other to get things done.
You start the script in the background with bash run.sh &
Your interactive shell is immediately ready for your next command, so it prints your usual command prompt to the terminal.
Your script prints its prompt ("Please choose mode [1-3]: ") to the terminal.
Your interactive shell reads its next command from the terminal. Because the script's prompt printed second, it looks like you're sending input to it, but there's actually no connection between the most recent prompt and which program is receiving your input.
Your interactive shell attempts to run your input ("2") as a command, and it fails.
Your shell script finally gets its chance to read from the terminal... but it's in the background, so it's not allowed to. Instead, it is suspended, and a "Stopped" message is printed. From the bash man page, "Job Control" section:
Background processes which attempt to read from (write to when stty tostop is in effect) the terminal are sent a SIGTTIN (SIGTTOU) signal by the kernel's terminal driver, which, unless caught, suspends the process.
At this point, if you want to continue the script and tell it what to do, you'd need to move it to the foreground (e.g. with the fg command), which would rather defeat the point here. Also, its prompt ("Please choose mode [1-3]: ") will not be repeated, as that echo command successfully finished while the script was in the background.
The solution: basically, write the script to run the tasks non-interactively, in the necessary order. @Lenna has given examples; follow her recommendations.
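For example, here is a minimal sketch of a non-interactive run.sh that takes the mode and its value as command-line arguments, so it can safely be started with nohup bash run.sh MODE VALUE & (the PHP script names other than task1.php are assumptions):
#!/bin/bash
# Usage: nohup bash run.sh MODE VALUE &   -- nothing is read from the terminal.
MODE=$1
VALUE=$2

case $MODE in
1)
    echo "Start Updating ..."
    /usr/bin/php task1.php "$VALUE" && /usr/bin/php task2.php "$VALUE" && /usr/bin/php task3.php "$VALUE"
    ;;
2)
    echo "Start Updating ..."
    /usr/bin/php task4.php "$VALUE"
    /usr/bin/php task5.php "$VALUE"
    ;;
3)
    echo "Start Updating ..."
    /usr/bin/php task6.php "$VALUE"
    ;;
*)
    echo "Unknown option - Exit" >&2
    exit 1
    ;;
esac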

Bash script to schedule multiple jobs

I wonder if it is possible to write a bash script that would do the following:
make firstprogram
which compiles and executes the first program. Then it would wait until this program is done and then execute:
make secondprogram
How can I write the bash script so that it is run in the terminal?
Is this what you intend? It will finish the first command before running the next. If you only want to run the second if the first runs successfully (exits with exit code 0), use && instead of ;
#!/bin/bash
make firstprogram; make secondprogram
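For example, to run the second build only if the first one succeeds:
# The second make runs only if the first exits with status 0.
make firstprogram && make secondprogram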
You need to utilize the wait command:
#!/bin/bash
make firstprogram
./firstprogram &
wait
echo "First program done!"
make secondprogram
./secondprogram &
wait
echo "Second program done!"
exit 0

Preventing a bash script from being started with ./ (dot slash)

I have written a lot of bash scripts that should work with the current bash session, because I often use fg, jobs, etc.
I always start my scripts with . script.sh, but one of my friends started one with ./script.sh and got an error that fg "couldn't be executed".
Is it possible to force . script.sh, or is there anything else I can do to prevent errors, such as cancelling the whole script and printing an error with echo?
Edit:
I think bash traps have problems when the script is sourced; is there any way to use fg, jobs, and bash traps in one script?
Looks like you're trying to determine if a script is being run interactively or not. The bash manual says that you can determine this with the following test:
#! /bin/bash
case "$-" in
*i*) echo interactive ;;
*) echo non-interactive ;;
esac
sleep 2 &
fg
If you run this with ./foo.sh, you'll see "non-interactive" printed and an error for the fg built-in. If you source it with . foo.sh or source foo.sh you won't get that error (assuming you're running those from an interactive shell, obviously).
For your use-case, you can exit with an error message in the non-interactive mode.
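An alternative check, not used in the answer above, detects sourcing directly rather than interactivity by comparing bash's BASH_SOURCE array with $0; a sketch, assuming the script always runs under bash:
#!/bin/bash
# When executed (./script.sh), ${BASH_SOURCE[0]} and $0 are identical;
# when sourced (. script.sh), they differ.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    echo "Please run this script with: . ${BASH_SOURCE[0]}" >&2
    exit 1
fi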
If job control is all you need, you can make it work both ways with #!/bin/bash -i:
#!/bin/bash -i
sleep 1 &
fg
This script works the same whether you . myscript or ./myscript.
PS: You should really adopt your friend's way of executing scripts. It's more robust and most people write their scripts to work that way (e.g. assuming exit will just exit the script).
There are a couple of simple tricks to remind people to use source (or .) to run your script: First, remove execute permission from it (chmod -x script.sh), so running it with ./script.sh will give a permission error. Second, replace the normal shebang (first line) with something like this:
#!/bin/echo please run this with the command: source
This will make the script print something like "please run this with the command: source ./script.sh" (and not run the actual script) if someone does manage to execute it.
Note that neither of these helps if someone runs the script with bash script.sh.

Linux shell script not executing completely as desktop shortcut

I'm looking to create a shell script to open a set of applications whenever I start my workday. I found a couple of posts like this which seem to be what I'm looking for. The problem is, the script doesn't work when I double-click on it.
If I start the script from the Terminal, it executes completely, but I don't always want to have to call it from the Terminal; I want to double-click a shortcut. If I add a "sleep 1" to the end, it works most of the time, but the problem is that 1 second is not always enough time to execute everything. Also, it just feels very imprecise. Sure, I could say "sleep 10" and be done with it, but, as a developer, this feels like a hack.
Here is my current script, I intend to add my applications to this list over time, but this will be sufficient for now:
#!/bin/bash
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
So the question is: how can I ensure everything starts but not leave the temporary terminal window open longer than it needs to be?
In case it matters, to create this script I simply saved a .sh file to the desktop and checked "Allow executing file as program" in the file properties.
Try preceding each command with nohup:
#!/bin/bash
nohup skype &
nohup /opt/google/chrome/google-chrome &
nohup geany &
nohup mysql-workbench &
Better yet, use a loop:
#!/bin/bash
apps="skype /opt/google/chrome/google-chrome geany mysql-workbench"
for app in $apps; do
nohup $app &
done
If any errors occur, check nohup.out for messages.
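If any of those application paths ever contain spaces, a bash array is a safer way to hold the list; a sketch of the same loop:
#!/bin/bash
# An array keeps each command as a single word, even if it contains spaces.
apps=(skype /opt/google/chrome/google-chrome geany mysql-workbench)
for app in "${apps[@]}"; do
    nohup "$app" &
done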
I think the reason for this problem is that the I/O files (ttys, most likely) are closed too early. You can try redirecting all I/O (stdin, stdout, stderr), for example:
skype < /dev/null > /dev/null 2> /dev/null &
Something like this should also work:
#!/bin/sh
{
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
} < /dev/null > /dev/null 2> /dev/null &
EDIT:
I can reproduce it on Ubuntu 12.04. It seems the terminal program, when closing, kills all processes in its pgroup. Tried with:
/usr/bin/gnome-terminal -x /bin/sh -c ./test.sh
xterm -e ./test.sh
The result is the same: without sleep the programs don't show up. It seems the terminal, when the script finishes, sends SIGHUP to the pgroup of the shell script. You can see it by running either of the above commands via strace -f. At the end of the listing there should be a kill(PID, SIGHUP) with a very big PID number as argument; actually it is a negative number, so SIGHUP is sent to all processes in the pgroup.
I would assume many X11 programs ignore SIGHUP. The problem is that SIGHUP is sent/received before they change the default behaviour. With sleep you are giving them some time to change the SIGHUP handling.
I've tried disown (a bash builtin), but it didn't help (the SIGHUP to the pgroup is sent by the terminal, not the shell).
EDIT:
One possible solution would be to make a script.desktop file (you can use an existing .desktop file as a template; on Ubuntu these are located in /usr/share/applications) and start your script from this file. It seems that even X11 programs which don't ignore SIGHUP (e.g. xclock) are normally started this way.
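A minimal sketch of such a launcher file (the Name and the Exec path are placeholders for your own script):
[Desktop Entry]
Type=Application
Name=Start workday apps
Exec=/home/youruser/bin/start-workday.sh
Terminal=false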
Firstly, you seem to have a TRAILING ampersand (&) ... this might be causing some issues.
Secondly, you could do something like below to ensure that you only exit the shell (i.e. execution) upon success:
#!/bin/bash
skype & /opt/google/chrome/google-chrome & geany & mysql-workbench
if [ $? -eq 0 ]
then
    echo "Successfully completed operation (launched files, etc...)"
    ## 'exit 0' will exit the TERMINAL, and therefore the SCRIPT AS WELL,
    ## indicating to the shell that there was NO ERROR (success).
    ## Use it if you don't want to see anything/be notified on success.
    #exit 0
    ## 'return 0' will allow the "successful" message to be written
    ## to the command-line and then keep the terminal open, if you
    ## want confirmation of success. The SCRIPT then exits and
    ## control returns to the terminal, but it will not be forced closed.
    return 0
else
    echo "Operation failed!" >&2
    ## 'exit 1' will exit the TERMINAL, and therefore the SCRIPT AS
    ## WELL, indicating an ERROR to the shell.
    #exit 1
    ## 'return 1' will exit the SCRIPT only (but not the terminal) and
    ## will indicate an ERROR to the shell.
    return 1
fi
** UPDATE **
(notice I added an ampersand & to the end of my answer below)
You could do a one-liner. The following will run all commands sequentially, one at a time, each one running only if/when the previous one succeeds. The command-line statement stops if AND WHEN any of the individual commands between the && fail.
(skype && /opt/google/chrome/google-chrome && geany && mysql-workbench) && echo "Success!" || echo "fail" &
