Linux shell script not executing completely as desktop shortcut - linux

I'm looking to create a shell script to open a set of applications whenever I start my workday. I found a couple of posts like this one that seem to be what I'm looking for. The problem is, the script doesn't work when I double-click on it.
If I start the script from a terminal, it executes completely, but I don't want to always have to call it from a terminal; I want to double-click a shortcut. If I add a "sleep 1" to the end, it works most of the time, but 1 second is not always enough time to execute everything. It also feels very imprecise. Sure, I could say "sleep 10" and be done with it, but, as a developer, this feels like a hack.
Here is my current script, I intend to add my applications to this list over time, but this will be sufficient for now:
#!/bin/bash
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
So the question is: how can I ensure everything starts but not leave the temporary terminal window open longer than it needs to be?
In case it matters, to create this script I simply saved a .sh file to the desktop and checked "Allow executing file as program" in the file properties.

Try preceding each command with nohup:
#!/bin/bash
nohup skype &
nohup /opt/google/chrome/google-chrome &
nohup geany &
nohup mysql-workbench &
Better yet, use a loop:
#!/bin/bash
apps="skype /opt/google/chrome/google-chrome geany mysql-workbench"
for app in $apps; do
    nohup $app &
done
If any errors occur, check nohup.out for messages.
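Note that the unquoted $apps relies on word splitting, which breaks if a path contains spaces. A sketch of a bash-array variant (the app list is the one from the question) avoids that:

```shell
#!/bin/bash
# Same loop with a bash array; quoting "${apps[@]}" keeps each
# entry intact even if a path contains spaces.
apps=(skype /opt/google/chrome/google-chrome geany mysql-workbench)
for app in "${apps[@]}"; do
    nohup "$app" >/dev/null 2>&1 &
done
```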

I think the cause of this problem is that the I/O files (the ttys, most likely) are closed too early. You can try redirecting all I/O (stdin, stdout, stderr), for example:
skype < /dev/null > /dev/null 2> /dev/null &
Something like this should also work:
#!/bin/sh
{
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
} < /dev/null > /dev/null 2> /dev/null &
EDIT:
I can reproduce it on Ubuntu 12.04. It seems the terminal program, when closing, kills all processes in its pgroup. Tried with:
/usr/bin/gnome-terminal -x /bin/sh -c ./test.sh
xterm -e ./test.sh
The result is the same: without the sleep, the programs don't show up. It seems the terminal, when the script finishes, sends SIGHUP to the shell script's pgroup. You can see this by running any of the above programs via strace -f. At the end of the listing there should be a kill(PID, SIGHUP) with a very large PID number as the argument; it is actually a negative number, so the SIGHUP is sent to all processes in the pgroup.
I would assume many X11 programs ignore SIGHUP. The problem is that the SIGHUP is sent/received before they change the default behaviour. With sleep you are giving them some time to change their SIGHUP handling.
I've tried disown (a bash builtin), but it didn't help (the SIGHUP to the pgroup is sent by the terminal, not the shell).
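Since the SIGHUP is delivered to the whole process group, one workaround is to start each application in its own session with setsid, so the terminal's signal can't reach it. A sketch, using the app names from the question:

```shell
#!/bin/bash
# setsid puts each child in a new session (and thus a new process
# group), so the SIGHUP the terminal sends to the script's pgroup
# never reaches the applications.
for app in skype /opt/google/chrome/google-chrome geany mysql-workbench; do
    setsid "$app" >/dev/null 2>&1 &
done
```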
EDIT:
One possible solution would be to make a script.desktop file (you can use an existing .desktop file as a template; on Ubuntu these are located in /usr/share/applications) and start your script from this file. It seems even X11 programs which don't ignore SIGHUP (xclock) are normally started this way.
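A minimal .desktop file for this could look as follows (the Exec path and Name are assumptions; point Exec at wherever the script actually lives, and make sure the script is executable):

```ini
[Desktop Entry]
Type=Application
Name=Start workday apps
Comment=Launch Skype, Chrome, Geany and MySQL Workbench
Exec=/home/user/bin/startday.sh
Terminal=false
```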

Firstly, you seem to have a trailing ampersand (&) on the last command, which might be causing some issues.
Secondly, you could do something like below to ensure that you only exit the shell (i.e. execution) upon success:
#!/bin/bash
skype & /opt/google/chrome/google-chrome & geany & mysql-workbench
if [ $? -eq 0 ]
then
    echo "Successfully completed operation (launched files, etc...)"
    ## 'exit 0' will exit the TERMINAL, therefore the SCRIPT AS WELL,
    ## indicating to the shell that there was NO ERROR (success);
    ## use it if you don't want to see anything/be notified on success
    #exit 0
    ## 'return 0' will allow the "successful" message to be written
    ## to the command line and then keep the terminal open, if you
    ## want confirmation of success. The SCRIPT then exits and
    ## control returns to the terminal, but it is not forced closed.
    ## (Note: 'return' only works when the script is sourced, e.g.
    ## with '. ./script.sh'; in a directly executed script use 'exit'.)
    return 0
else
    echo "Operation failed!" >&2
    ## 'exit 1' will exit the TERMINAL, and therefore the SCRIPT AS
    ## WELL, indicating an ERROR to the shell
    #exit 1
    ## 'return 1' will exit the SCRIPT only (but not the terminal) and
    ## will indicate an ERROR to the shell
    return 1
fi

** UPDATE **
(notice I added an ampersand & to the end of my answer below)
You could do a one-liner. The following will run all the commands sequentially, one at a time, each one running only if/when the previous one ends. The chain terminates if and when any of the individual commands between the && operators fails.
(skype && /opt/google/chrome/google-chrome && geany && mysql-workbench) && echo "Success!" || echo "fail" &
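The difference between & and && matters here: && is conditional sequencing, while & is backgrounding. A small sketch with true and false as stand-ins for the real applications:

```shell
# '&&' runs the right-hand side only if the left side succeeded;
# '||' runs it only on failure.
( true  && echo "ran after success" )   # prints the message
( false && echo "skipped" ) || true     # prints nothing
```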

Related

Bash handle exiting multiple processes

My goal is to run multiple processes using bash, then wait for user input (for instance, issuing a command of 'exit') and exiting out upon that command.
I have the bits and pieces, I think, but am having a hard time putting them together.
From what I saw, I can run multiple processes by pushing them to the back, like so:
./process1 &
./process2 &
I also saw that $! returns the PID of the most recently started background process. Does this, then, make sense:
./process1 &
pidA = $!
./process2 &
pidB = $!
From there, I am trying to do the following:
echo "command:"
read userInput
if [ "$userInput" == "exit" ]; then
kill $pidA
kill $pidB
fi
does this make sense or am I not appearing to be getting it?
That looks good, although you'll probably need a loop on the user input part.
Note that you need to be careful with shell syntax. "pidA = $!" is not what you think; that's "pidA=$!". The former will try to run a program or command named pidA with arguments "=" and the PID of the last started background command.
Also note that you could use the "trap" command to issue the kill commands on termination of the shell script. Like this:
trap "kill $!" EXIT
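trap can cover several background PIDs at once. A sketch, with sleep standing in for the real processes:

```shell
#!/bin/bash
# Record each background PID, then kill them all when the script
# exits for any reason (normal exit, Ctrl+C, etc.).
sleep 100 & pidA=$!
sleep 100 & pidB=$!
trap 'kill "$pidA" "$pidB" 2>/dev/null' EXIT
```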
The final code resulted in:
#!/bin/bash
pushd ${0%/*}
cd ../
mongoPort=12345
mongoPath="db/data"
echo "Starting mongo and node server"
db/bin/mongod --dbpath=$mongoPath --port=$mongoPort &
MONGOPID=$!
node server.js &
NODEPID=$!
while [ "$input" != "exit" ]; do
    read input
    if [ "$input" == "exit" ]; then
        echo "exiting..."
        kill $MONGOPID
        kill $NODEPID
        exit
    fi
done

How to hide "Created new window in existing browser session" in bash

Hello, I'm working on a project for university and I'm a newbie at bash scripting. Part of my code is the following (it opens tabs in Chromium):
chromium-browser& &>/dev/null
while read line
do
chromium-browser "$line"& &>/dev/null
sleep 5
done < url.in
However, every time a tab opens I get an annoying message in the shell: "Created new window in existing browser session". It doesn't stop the execution or anything, but I don't want it there, because I want the output afterwards to be clearer. Any ideas on how to make it disappear, since the redirecting didn't work?
You can use nohup:
nohup command >/dev/null 2>&1 &
Redirect both stdout and stderr to /dev/null, like this:
command > /dev/null 2>&1 &
or
command &> /dev/null &
More on redirection: http://www.tldp.org/LDP/abs/html/io-redirection.html
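The two forms above are equivalent in bash. A quick sketch with a helper that writes to both streams (emit is a made-up function for illustration):

```shell
# emit writes one line to stdout and one to stderr.
emit() { echo "to stdout"; echo "to stderr" >&2; }

emit > /dev/null 2>&1   # portable form: silence both streams
emit &> /dev/null       # bash shorthand, same effect
```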

Linux Bash - redirect errors to file

My objective is to run a command in the background and only create a log if something goes wrong.
Can someone tell me if this command is OK for that?
bash:
./command > app/tmp/logs/model/123.log 2>&1 & echo $! >> /dev/null &
The command itself is unimportant (just a random PHP script).
And/or can you explain how to route the results of my command to a file only if there is an error?
Also, I can't understand what "echo $!" does (I copied this from elsewhere)...
Thanks in advance!
If I understand correctly, your goal is to run command in the background and to leave a log file only if an error occurred. In that case:
{ ./command >123.log 2>&1 && rm -f 123.log; } &
How it works:
{...} &
This runs whatever is in braces in the background. The braces are not strictly needed here for this exact command but including them causes no harm and might save you from an unexpected problem later.
./command >123.log 2>&1
This runs command and saves all output to 123.log.
&&
This runs the command that follows only if command succeeded (in other words, if command set its exit code to zero).
rm -f 123.log
This removes the log file. Since this command follows the &&, it is only run if command succeeded.
Discussion
You asked about:
echo $! >> /dev/null
echo $! displays the process ID of the previous command that was run in the background. In this case that would be ./command. This display, however, is sent to /dev/null which is, effectively, a trash can.
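To see the pattern in action, here is a sketch with true and false standing in for a succeeding and a failing command (the file names are illustrative):

```shell
cd "$(mktemp -d)"
# A successful command: the log is created, then removed.
{ true > ok.log 2>&1 && rm -f ok.log; }
# A failing command: the && short-circuits, so the log survives.
{ false > bad.log 2>&1 && rm -f bad.log; }
ls   # only bad.log remains
```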

Command line script run in background goes in stopped state

I have a short php utility script, I run it from cli simply with:
php myscript.php
The script is always running, periodically performing some tasks (not relevant for the question). It doesn't need any input from the user.
After running it, I usually press CTRL+z and then run bg to put the process in background, and everything is fine.
If I run it as:
php myscript.php &
the script is put on background on start, but it is also put in a stopped state. Example:
[1] 11513
[1]+ Stopped php myscript.php
even running bg at this point doesn't help, I have to run fg, then CTRL+z and bg again to make it work.
This is the php script:
<?php
while(true){
echo 'hi '.time()."\n";
sleep(30);
}
?>
My problem is that I cannot run it directly in background, because the system stops it, and I don't understand why. How can I fix this?
update:
I made a bash version of the same script, and it can be run and put in background (running and not stopped) just by launching it with the & in the end (script.sh &)
script.sh:
#!/bin/bash
while true; do
echo `date`
sleep 30
done
Why is the PHP script stopped after launching it in the background, while the bash script isn't? What could cause this different behaviour?
I found what is causing the issue. In PHP, if the readline module is enabled, any command line script will expect an input, even if the script is written to NOT wait for user input.
To check if you have readline support enabled, just run:
php --info |grep "Readline Support"
and check the output. If you get Readline Support => enabled then you have readline enabled and you may experience the problem described in the original question.
The proper way to use the CLI is then to explicitly specify that php should not use the terminal for input:
php myscript.php < /dev/null &
Further info: http://php.net/manual/en/book.readline.php
Alternatives:
./test.php >/dev/null &
or (more creative):
nohup php test.php > /dev/null 2>&1 &
(p.s.: I still believe this question belongs to ServerFault, btw.. problem solved!)
Usually, a process that you send to the background with & and that then waits for input from the terminal goes into the stopped state.
E.g. have a bash script valecho:
#!/bin/sh
read val
echo $val
Running it as:
./valecho &
the script will stop.
When you run it as
echo hello | ./valecho &
will correctly run and finish.
So, check your PHP script - it probably wants some input from stdin.
Edit - based on a comment:
I'm not a PHP developer, but I just tried the following script (p.php):
<?php
while(true){
   echo 'hi '.time()."\n";
   sleep(3);
}
?>
with the command:
php -f p.php &
and it ran nicely... so sorry for the confusion...
From http://php.net/manual/en/book.readline.php :
When readline is enabled, php switches the terminal mode to accept line-buffered input. This means that the proper way to use the cli when you pipe to an interactive command is to explicitly specify that php is not using the terminal for input:
php myscript.php < /dev/null &
source

bash "&" without printing "[1]+ Done "

I call a script in my .bashrc to print number of new messages I have when I open the terminal, I want the call to be non blocking as it accesses the network and sometimes takes a few seconds which means I can't use the terminal until it completes.
However if I put:
mailcheck &
in my .bashrc, it works fine, but then it prints an empty line, and when I press enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy is there a way around this?
That message isn't coming from mailcheck, it's from bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want it managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
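The subshell tricks above can be wrapped in a small helper (quiet_bg is a made-up name, and any command works in place of mailcheck):

```shell
# Launch a command in the background without job-control chatter:
# because the job is started inside a subshell, it never enters the
# interactive shell's job table, so no "[1]+ Done" line is printed.
quiet_bg() { ( "$@" >/dev/null 2>&1 & ); }

quiet_bg mailcheck
```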
Try redirecting stderr to /dev/null:
mailcheck 2>/dev/null &
Thinking about it for a few minutes, another way might be to use write.
Pipe the output of the background task to yourself; that way it can complete at any time, and you can bin any additional output from stderr.
mailcheck 2>/dev/null | write $(whoami) &
This was the last page I checked before I fixed an issue I was having, so I figured I would leave my finished script, which had the same issue as the OP's:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup and sends the stdout to a unique file, the stderr to /dev/null, and the rest to console (the PID) for scraping. The stdout file is monitored by another process looking for the CommandFinished or whatever unique string.
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown to the end stopped bash jobs from printing that to console.
