Hello, I'm working on a project for university and I'm a newbie at bash scripting. Part of my code is the following (it opens tabs in Chromium):
chromium-browser& &>/dev/null
while read line
do
chromium-browser "$line"& &>/dev/null
sleep 5
done < url.in
However, every time a tab opens I get an annoying message on the shell: "Created new window in existing browser session". It doesn't stop the execution or anything, but I don't want it there, because I want the output afterwards to be clearer. Any ideas how to make it disappear, since the redirection didn't work?
You can use nohup:
nohup command >/dev/null 2>&1 &
Redirect both stdout and stderr to /dev/null, like this:
command > /dev/null 2>&1 &
or
command &> /dev/null &
More on the topic: http://www.tldp.org/LDP/abs/html/io-redirection.html
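Applied to the loop from the question, note that the redirection has to come before the trailing &; in something like cmd & &>/dev/null the redirection is parsed as a separate, empty command, so the chromium output still reaches the terminal. A minimal sketch:
chromium-browser >/dev/null 2>&1 &
while read -r line
do
    # redirections now apply to the chromium command itself
    chromium-browser "$line" >/dev/null 2>&1 &
    sleep 5
done < url.in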
Related
I'm trying to make a function in my bashrc that would allow me to launch any command and automatically disown it.
e.g. launch ./myprogram or launch xdg-open myfolder
I've done this manually many times with command, then Ctrl+Z, bg, disown, and would like to simply create a shortcut for these steps.
However, I don't know how to embed the action of Ctrl+Z in a bash script. I've seen that its action is SIGTSTP, but I'm really lost as to how to incorporate that in a bash function.
You can run the command in background directly instead of stopping it and then running it in the background. Use the &:
$ cat > launch
#! /bin/bash
"$#" & disown
Ctrl + d
$ chmod u+x ./launch
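The wrapper can then be used with the examples from the question, for instance:
./launch ./myprogram
./launch xdg-open myfolder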
For posterity and other people passing by, here is the bash function I made:
launch()
{
"$#" > /dev/null 2>&1 & disown
}
"$#" takes every arguments given in the prompt as one
> /dev/null 2>&1 redirects every output (stout and stderr) to dev/null which effectively delete them automatically, so that it doesn't appear on the shell
& runs the command in background, meaning it will let you input other commands in the shell
disown , as the name implies will lake it so that the process is no longer bound to the shell and you cans safely close the shell without it closing the process at the same time.
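Once the function is in ~/.bashrc, re-source the file and use it the same way as the wrapper script, for example:
source ~/.bashrc
launch ./myprogram
launch xdg-open myfolder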
I'm looking to create a shell script to open a set of applications whenever I start my workday. I found a couple posts like this which seem to be what I'm looking for. The problem is, the script doesn't work when I double-click on it.
If I start the script from Terminal, it executes completely, but I don't want to always have to call this from Terminal; I want to double-click a shortcut. If I add a "sleep 1" to the end, it works most of the time, but the problem here is that 1 second is not always enough time to execute everything. Also, it just feels very imprecise. Sure, I could say "sleep 10" and be done with it, but, as a developer, this feels like a hack solution.
Here is my current script, I intend to add my applications to this list over time, but this will be sufficient for now:
#!/bin/bash
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
So the question is: how can I ensure everything starts but not leave the temporary terminal window open longer than it needs to be?
In case it matters, to create this script I simply saved a .sh file to the desktop and checked "Allow executing file as program" in the file properties.
Try preceding each command with nohup:
#!/bin/bash
nohup skype &
nohup /opt/google/chrome/google-chrome &
nohup geany &
nohup mysql-workbench &
Better yet, use a loop:
#!/bin/bash
apps="skype /opt/google/chrome/google-chrome geany mysql-workbench"
for app in $apps; do
nohup $app &
done
If any errors occur, check nohup.out for messages.
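If any of the application paths ever contain spaces, a bash array is a safer variant of the same loop (just a sketch, not part of the original answer):
#!/bin/bash
# run each application detached, discarding its output
apps=(skype /opt/google/chrome/google-chrome geany mysql-workbench)
for app in "${apps[@]}"; do
    nohup "$app" >/dev/null 2>&1 &
done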
I think the reason for this problem is that the I/O files (ttys, most likely) are closed too early. You can try redirecting all I/O (stdin, stdout, stderr), for example:
skype < /dev/null > /dev/null 2> /dev/null &
Something like this should also work:
#!/bin/sh
{
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
} < /dev/null > /dev/null 2> /dev/null &
EDIT:
I can reproduce it on Ubuntu 12.04. It seems the terminal program, when closing, kills all processes in its pgroup. Tried with:
/usr/bin/gnome-terminal -x /bin/sh -c ./test.sh
xterm -e ./test.sh
The result is the same - without sleep the programs don't show up. It seems the terminal, when the script finishes, sends SIGHUP to the pgroup of the shell script. You can see it by running either of the above commands via strace -f. At the end of the listing there should be a kill(PID, SIGHUP) with a very big PID number as the argument - actually it is a negative number, so SIGHUP is sent to all processes in the pgroup.
I would assume many X11 programs ignore SIGHUP. The problem is that SIGHUP is sent/received before they change the default behaviour. With sleep you are giving them some time to change the SIGHUP handling.
I've tried disown (a bash builtin), but it didn't help (the SIGHUP to the pgroup is sent by the terminal, not the shell).
EDIT:
One possible solution would be to make a script.desktop file (you can use some existing .desktop file as a template; on Ubuntu these are located in /usr/share/applications) and start your script from this file. It seems even X11 programs which don't ignore SIGHUP (e.g. xclock) are normally started this way.
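As a rough sketch (the launcher name and script path are placeholders, not from the original answer), such a launcher could be created like this:
cat > ~/Desktop/start-workday.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Start workday
Exec=/home/yourname/bin/start-workday.sh
Terminal=false
EOF
chmod +x ~/Desktop/start-workday.desktop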
Firstly, you seem to have a TRAILING ampersand (&) ... this might be causing some issues.
Secondly, you could do something like below to ensure that you only exit the shell (i.e. execution) upon success:
#!/bin/bash
skype & /opt/google/chrome/google-chrome & geany & mysql-workbench
if [ $? -eq 0 ]
then
echo "Successfully completed operation (launched files, etc...)"
#use if you don't want to see anything/be notified if successful
## 'exit 0' will exit TERMINAL, therefore the SCRIPT AS WELL
## indicating to the shell that there was NO ERROR (success)
#exit 0
## 'return 0' will allow the "successful" message to be written
## to the command-line and then keep the terminal open, if you
## want confirmation of success. The SCRIPT then exits and
## control returns to terminal, but it will not be forced close.
return 0
else
echo "Operation failed!" >&2
## 'exit 1' will exit the TERMINAL, and therefore the SCRIPT AS
## WELL, indicating an ERROR to the shell
#exit 1
## 'return 1' will exit the SCRIPT only (but not the terminal) and
## will indicate an ERROR to the shell
return 1
fi
** UPDATE **
(notice I added an ampersand & to the end of my answer below)
You could do a one-liner. The following will run all commands sequentially, one at a time; each one runs only if/when the previous one ends successfully. The command-line statement stops if AND WHEN any of the individual commands joined by && fails.
(skype && /opt/google/chrome/google-chrome && geany && mysql-workbench) && echo "Success!" || echo "fail" &
I have a short php utility script, I run it from cli simply with:
php myscript.php
The script is always running, periodically performing some tasks (not relevant for the question). It doesn't need any input from the user.
After running it, I usually press CTRL+z and then run bg to put the process in background, and everything is fine.
If I run it as:
php myscript.php &
the script is put in the background on start, but it is also put in a stopped state. Example:
[1] 11513
[1]+ Stopped php myscript.php
Even running bg at this point doesn't help; I have to run fg, then CTRL+z and bg again to make it work.
This is the php script:
<?
while(true){
echo 'hi '.time()."\n";
sleep(30);
}
?>
My problem is that I cannot run it directly in background, because the system stops it, and I don't understand why. How can I fix this?
update:
I made a bash version of the same script, and it can be run and put in the background (running and not stopped) just by launching it with an & at the end (script.sh &)
script.sh:
#!/bin/bash
while true; do
echo `date`
sleep 30
done
Why is the PHP script being stopped after launching it in the background, while the bash script isn't?
What could cause this different behaviour?
I found what is causing the issue. In PHP, if the readline module is enabled, any command line script will expect input, even if the script is written to NOT wait for user input.
To check if you have readline support enabled, just run:
php --info |grep "Readline Support"
and check the output. If you get Readline Support => enabled then you have readline enabled and you may experience the problem described in the original question.
The proper way to use the CLI is then to explicitly specify that php is not using the terminal for input:
php myscript.php < /dev/null &
Further info: http://php.net/manual/en/book.readline.php
Alternatives:
./test.php >/dev/null &
or (more creative):
nohup php test.php > /dev/null 2>&1 &
(p.s.: I still believe this question belongs to ServerFault, btw.. problem solved!)
Usually, a process that you send to the background with & goes into the stopped state if it is waiting for input from the terminal.
E.g., take this bash script, valecho:
#!/bin/sh
read val
echo $val
Running it as:
./valecho &
the script will stop.
When run it as
echo hello | ./valecho &
will correctly run and finish.
So, check your php script - it probably wants some input from stdin.
Edit - based on comment:
I'm not a PHP developer - but I just tried the following script (p.php)
<?php
while(true){
echo 'hi '.time()."\n";
sleep(3);
}
?>
with the command:
php -f p.php &
and it runs nicely... so... sorry for the confusion...
From http://php.net/manual/en/book.readline.php :
When readline is enabled, php switches the terminal mode to accept line-buffered input. This means that the proper way to use the cli when you pipe to an interactive command is to explicitly specify that php is not using the terminal for input:
php myscript.php < /dev/null &
source
I call a script in my .bashrc to print the number of new messages I have when I open the terminal. I want the call to be non-blocking, as it accesses the network and sometimes takes a few seconds, which means I can't use the terminal until it completes.
However if I put:
mailcheck &
in my .bashrc, it works fine, but it then prints an empty line, and when I press Enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy. Is there a way around this?
That message isn't coming from mailcheck, it's from bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want it managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
Try redirecting stderr to /dev/null:
mailcheck 2>/dev/null &
Thinking about it for a few mins, another way might be to use write.
Pipe the output of the background task to yourself, that way it can complete at any time and you can bin any additional output from stdout as well as stderr.
mailcheck | write $(whoami) > /dev/null &
This was the last page I checked before I fixed an issue I was having so I figured I would leave my finished script that had the same issue as OP:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup and sends the stdout to a unique file, the stderr to /dev/null, and the rest to console (the PID) for scraping. The stdout file is monitored by another process looking for the CommandFinished or whatever unique string.
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown to the end stopped bash jobs from printing that to console.
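For illustration, the monitoring side could look something like this (the file name here is a made-up example of the generated pattern):
# block until the marker line appears in the output file
tail -f output202301011200.out | grep -q "CommandFinished"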
I have a shell command like this
sudo nohup coffee -c -w *.coffee &
disown $!
wait
but when I run the shell script it says nohup: appending output to 'nohup.out' and makes me hit Enter.
How do I get around having to hit enter?
An 8-year-old thread, but I found that none of these answers really solve the issue in the question.
The message nohup: ignoring input and appending output to 'nohup.out' is written to stderr (AFAIK), so in order to silence that message, all you have to do is redirect stderr to /dev/null, like so:
nohup mycommand 2> /dev/null
However, if you additionally want to run this process in the background with &, you will find that (for bash at least), there will be a single line output of the job number and PID (e.g. [1] 27184). To avoid this, run the entire command in a subshell, like so:
(nohup mycommand 2> /dev/null &)
But if you're using this in a script, the former solution is sufficient.
As far as I understand, you don't have to. The message is output to the console, but not added to your input buffer. Therefore you can just continue typing your commands as if there were no message from nohup; the message will not interfere with your input.
Admittedly, having to type away from the exact prompt position may not be aesthetically pleasing.
You could also redirect the log manually:
sudo nohup coffee -c -w *.coffee > /tmp/coffee.log &
That way the message won't show up at all.
Ubuntu Linux 20.04: none of the answers above solved the problem for me; the script blocks in any case, waiting for input.
My Solution
[sudo] nohup command > nohup.log < enter.txt &
where enter.txt is a text file containing a single line separator.
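For reference, a sketch of creating such a file (containing just a single newline) in the current directory:
printf '\n' > enter.txt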