Command line script run in background goes in stopped state - linux

I have a short PHP utility script that I run from the CLI simply with:
php myscript.php
The script is always running, periodically performing some tasks (not relevant for the question). It doesn't need any input from the user.
After running it, I usually press CTRL+z and then run bg to put the process in background, and everything is fine.
If I run it as:
php myscript.php &
the script is put in the background on start, but it is also put in a stopped state. Example:
[1] 11513
[1]+ Stopped php myscript.php
Even running bg at this point doesn't help; I have to run fg, then CTRL+z and bg again to make it work.
This is the php script:
<?
while (true) {
    echo 'hi '.time()."\n";
    sleep(30);
}
?>
My problem is that I cannot run it directly in the background, because the system stops it, and I don't understand why. How can I fix this?
update:
I made a bash version of the same script, and it can be run and put in the background (running and not stopped) just by launching it with the & at the end (script.sh &).
script.sh:
#!/bin/bash
while true; do
    echo `date`
    sleep 30
done
Why is the PHP script stopped after launching it in the background, while the bash script isn't?
What could cause this different behaviour?

I found what is causing the issue. In PHP, if the readline module is enabled, any command line script will expect input from the terminal, even if the script is written to NOT wait for user input.
To check if you have readline support enabled, just run:
php --info | grep "Readline Support"
and check the output. If you get Readline Support => enabled then you have readline enabled and you may experience the problem described in the original question.
The proper way to use the CLI is then to explicitly specify that php is not using the terminal for input:
php myscript.php < /dev/null &
Further info: http://php.net/manual/en/book.readline.php
Alternatives:
./test.php >/dev/null &
or (more creative):
nohup php test.php > /dev/null 2>&1 &
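If you want to confirm that it really is terminal input that stops the job, a quick check is to look at the process state right after launching it (a minimal sketch; $! assumes a bash-like shell and the exact STAT output varies between systems):
php myscript.php &
sleep 1
ps -o pid,stat,cmd -p $!   # a "T" in the STAT column means the process is stopped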
(P.S.: I still believe this question belongs on ServerFault, btw. Problem solved!)

Usually, when a process that you send to the background with & is waiting for input from the terminal, it goes into the stopped state.
E.g. take a bash script valecho:
#!/bin/sh
read val
echo $val
Running it as:
./valecho &
the script will stop.
When you run it as
echo hello | ./valecho &
it will correctly run and finish.
So check your PHP script; it probably wants some input from stdin.
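As in the question above, redirecting stdin also avoids the stop here, because read then gets EOF immediately instead of waiting on the terminal (a minimal sketch):
./valecho < /dev/null &   # prints an empty line and exits instead of stopping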
Edit, based on a comment:
I'm not a PHP developer, but I just tried the following script (p.php):
<?php
while(true){
   echo 'hi '.time()."\n";
   sleep(3);
}
?>
with the command:
php -f p.php &
and it runs nicely, so sorry for the confusion.

From http://php.net/manual/en/book.readline.php :
When readline is enabled, php switches the terminal mode to accept line-buffered input. This means that the proper way to use the cli when you pipe to an interactive command is to explicitly specify that php is not using the terminal for input:
php myscript.php < /dev/null &

Related

Start Bash Script from Bash Script to Launch GUI Application

I am trying to launch a GUI application (rhythmbox) on Ubuntu. In the following I try to explain the chain of executed files.
# Window manager executes first
~/i3wm_cmd_wrapper.sh Window_Name ~/mount_enc.sh
This wrapper uses gnome-terminal to execute stuff. This enables opening a terminal at startup where users can enter information.
# mount_enc.sh launches the following command in the end
bash ~/launch_in_bg.sh rhythmbox
mount_enc.sh does exactly what it is supposed to do when started from a normal terminal. But I'd like to start it automatically at startup, and rhythmbox should stay open after the script is done.
# launch_in_bg.sh is just doing what it's supposed to
($PRGRM > /dev/null 2>&1) &
I cannot get gnome-terminal to open rhythmbox for me. Also, I think my approach is wrong if I want rhythmbox to keep running after gnome-terminal finishes executing the mount_enc.sh script. Can anybody think of a better solution?
If you open a program from the console (even in the background), its process is a child of the console process and will terminate when the console process terminates.
To keep the program's process running, it has to be detached from the console process. Detaching can be done in multiple ways. Some examples:
nohup rhythmbox &
or
rhythmbox & disown
To suppress output, use redirection as in your script.
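Putting that together, a version of launch_in_bg.sh along these lines should keep rhythmbox alive after the terminal closes (a sketch only; it assumes the program name is passed as the first argument and ends up in the $PRGRM variable used in the question):
#!/bin/bash
PRGRM="$1"
# Detach from the terminal and silence output so the child survives
# the gnome-terminal process exiting.
nohup "$PRGRM" > /dev/null 2>&1 &
disown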

Linux shell script not executing completely as desktop shortcut

I'm looking to create a shell script to open a set of applications whenever I start my workday. I found a couple of posts like this which seem to be what I'm looking for. The problem is, the script doesn't work when I double-click on it.
If I start the script from Terminal, it executes completely, but I don't want to always have to call it from Terminal; I want to double-click a shortcut. If I add a "sleep 1" to the end, it works most of the time, but the problem here is that 1 second is not always enough time to execute everything. Also, it just feels very imprecise. Sure, I could say "sleep 10" and be done with it, but, as a developer, this feels like a hack solution.
Here is my current script; I intend to add my applications to this list over time, but this will be sufficient for now:
#!/bin/bash
skype &
/opt/google/chrome/google-chrome &
geany &
mysql-workbench &
So the question is: how can I ensure everything starts, without leaving the temporary terminal window open longer than it needs to be?
In case it matters, to create this script I simply saved a .sh file to the desktop and checked "Allow executing file as program" in the file properties.
Try preceding each command with nohup:
#!/bin/bash
nohup skype &
nohup /opt/google/chrome/google-chrome &
nohup geany &
nohup mysql-workbench &
Better yet, use a loop:
#!/bin/bash
apps="skype /opt/google/chrome/google-chrome geany mysql-workbench"
for app in $apps; do
    nohup $app &
done
If any errors occur, check nohup.out for messages.
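If any of the application paths ever contain spaces, a bash array is a safer variant of the same loop (a sketch):
#!/bin/bash
apps=(skype /opt/google/chrome/google-chrome geany mysql-workbench)
for app in "${apps[@]}"; do
    nohup "$app" &
done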
I think the reason for this problem is that the I/O files (most likely the ttys) are closed too early. You can try redirecting all I/O (stdin, stdout, stderr), for example:
skype < /dev/null > /dev/null 2> /dev/null &
Something like this should also work:
#!/bin/sh
{
    skype &
    /opt/google/chrome/google-chrome &
    geany &
    mysql-workbench &
} < /dev/null > /dev/null 2> /dev/null &
EDIT:
I can reproduce it on Ubuntu 12.04. It seems the terminal program, when closing, kills all processes in its pgroup. Tried with:
/usr/bin/gnome-terminal -x /bin/sh -c ./test.sh
xterm -e ./test.sh
The result is the same: without sleep the programs don't show up. It seems the terminal, when the script finishes, sends SIGHUP to the pgroup of the shell script. You can see it by running any of the above programs via strace -f. At the end of the listing there should be a kill(PID,SIGHUP) with a very big PID number as argument; actually it is a negative number, so SIGHUP is sent to all processes in the pgroup.
I would assume many X11 programs ignore SIGHUP. The problem is that SIGHUP is sent/received before they change the default behaviour. With sleep you are giving them some time to change the SIGHUP handling.
I've tried disown (a bash builtin), but it didn't help (SIGHUP to the pgroup is sent by the terminal, not the shell).
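The strace check mentioned above could look something like this (a sketch; the negative PID in the kill() call will differ on every run):
strace -f -e trace=signal xterm -e ./test.sh 2>&1 | grep SIGHUP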
EDIT:
One possible solution would be to make a script.desktop file (you can use some existing .desktop file as a template; on Ubuntu these are located at /usr/share/applications) and start your script from this file. It seems even X11 programs which don't ignore SIGHUP (like xclock) are normally started this way.
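A minimal .desktop file for this could look roughly like the sketch below, written as a here-document so it can be created from a shell; the file name, Name and Exec path are only placeholders:
cat > ~/.local/share/applications/test-script.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Test script
Exec=/home/user/test.sh
Terminal=false
EOF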
Firstly, you seem to have a TRAILING ampersand (&) ... this might be causing some issues.
Secondly, you could do something like below to ensure that you only exit the shell (i.e. execution) upon success:
#!/bin/bash
skype & /opt/google/chrome/google-chrome & geany & mysql-workbench
if [ $? -eq 0 ]
then
    echo "Successfully completed operation (launched files, etc...)"
    #use if you don't want to see anything/be notified if successful
    ## 'exit 0' will exit TERMINAL, therefore the SCRIPT AS WELL
    ## indicating to the shell that there was NO ERROR (success)
    #exit 1
    exit 0
    ## 'return 0' will allow the "successful" message to be written
    ## to the command-line and then keep the terminal open, if you
    ## want confirmation of success. The SCRIPT then exits and
    ## control returns to terminal, but it will not be forced close.
    return 0
else
    echo "Operation failed!" >&2
    ## 'exit 1' will exit the TERMINAL, and therefore the SCRIPT AS
    ## WELL, indicating an ERROR to the shell
    #exit 1
    ## 'return 1' will exit the SCRIPT only (but not the terminal) and
    ## will indicate an ERROR to the shell
    return 1
fi
** UPDATE **
(notice I added an ampersand & to the end of my answer below)
You could do a one-liner. The following will run all commands sequentially, one at a time; each one runs only if/when the previous one ends. The command-line statement terminates if AND WHEN any of the individual commands joined by && fails.
(skype && /opt/google/chrome/google-chrome && geany && mysql-workbench) && echo "Success!" || echo "fail" &

Running a script in the background before logging in

I have a python script that I want to run prior to any user logging in. This is for a home automation server and I want it always to be up and running as soon as the system allows.
I already have it in the rc.local file including an ampersand. This works.
But I can't see the screen output that it produces.
When I log into the unit (it's a Raspberry Pi running Raspbian) via SSH, I can start it using screen, which works best: when I log out and back in, it's still there, AND I can see the output from the script.
But when I try running screen from the rc.local file, and subsequently log in to check, the script isn't there (i.e. ps aux | grep script.py confirms it).
Edit: I've taken on Nirk's solution below about using tail. From the command line it works fine, but starting it from within /etc/rc.local doesn't. I have touched the file and everyone has write access to it.
This is what's in my rc.local file:
python /home/pi/gateway.py &> /x10.log &
UPDATE
This is how I did it in the end:
Although the question was just about how to run in the background prior to login, there was more to it. The script is a work in progress, and because of the way a particular serial device acts with it, it is/was prone to crashing (I've almost got all the bugs out of it). I needed to be able to restart it as well. I tried nohup, but for some reason it wouldn't keep it alive, so in the end the top answer from this page got it all sorted.
In my /etc/rc.local I included a shell script to run:
nohup /home/pi/alwaysrun.sh > /home/pi/mha.log 2>&1 &
alwaysrun.sh contains:
#!/bin/bash
until python /home/pi/gateway.py; do
    echo "'gateway.py' exited with exit code $?. Restarting..." >&2
    sleep 1
done
nohup will keep the alwaysrun.sh script alive, and that in turn keeps my gateway.py script running. The redirect of stdout and stderr means I can set up a tail on the log (and/or go back and check it).
Instead of using screen, if you just want to see the output, you should redirect the output of the command to a log file and then tail the file.
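In practice that could look like the lines below; /home/pi/gateway.py is the script from the question, while the log file name is only an example:
python /home/pi/gateway.py > /home/pi/gateway.log 2>&1 &   # in /etc/rc.local
tail -f /home/pi/gateway.log                               # later, over SSH
Note that the > file 2>&1 form also works under plain sh, which is what usually runs /etc/rc.local, whereas the &> shortcut used in the question is bash-specific.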

when linux system calls scripts some commands don't work ( cron / if-up.d )

Hi, I'm trying to run a script that calls xclip in order to have a string ready to paste when I connect to the internet.
I have a script /etc/network/if-up.d/script that does execute when connecting (I make it write a date to a file successfully), but the xclip instruction seems not to work; there's nothing to paste. If I call this script manually by typing /etc/network/if-up.d/script in a console, it works perfectly.
If I try to launch a zenity message, it also doesn't appear when connecting. Again, if I do it by hand it appears.
Then I have an expect script that calls matlab (console mode); if I execute it manually it works, but if I call it from cron it freezes when calling the script.
It's driving me crazy, since it seems that only certain commands in a script can be executed when the system calls them automatically.
I've tried to call the instructions with nohup and &, but it still fails.
This is working as designed. If you search around you will see complicated ways to resolve this issue, or you can use xmessage as I describe here: Using Zenity in a root incron job to display message to currently logged in user
Easy option 1: xmessage (in the script)
MSSG="/tmp/mssg-file-${RANDOM}"
echo -e " MESSAGE \n ==========\n Done with task, YEY. " > ${MSSG}
xmessage -center -file ${MSSG} -display :0.0
[[ -s ${MSSG} ]] && rm -f ${MSSG}
Easy option 2: set the DISPLAY (then it should work)
export DISPLAY=:0 && /usr/bin/somedirectory/somecommand
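For the xclip case from the question, the start of /etc/network/if-up.d/script could look roughly like this (a sketch; the display number :0 and the user name are assumptions, and xclip also needs access to that user's X authority file because if-up.d scripts run as root):
#!/bin/sh
# "youruser" is a placeholder for the desktop user who owns the X session.
export DISPLAY=:0
export XAUTHORITY=/home/youruser/.Xauthority
printf '%s' "string to paste" | xclip -selection clipboard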
The question is answered here for cron:
http://ubuntuforums.org/archive/index.php/t-105250.html
and here for the if-up network case:
Bash script not working properly when run automatically

Tomcat script not working when run from Hudson

I'm trying to run a script which stops and starts Tomcat on Linux.
When I run it from the command line it works fine, but it does not seem to work when I run the same script from the "Execute Shell" build step in a Jenkins/Hudson job. Jenkins doesn't report any errors, but if I try going to the Tomcat page I get a page-not-found error.
So Jenkins seems able to stop the server, but not to bring it back up.
I'd be grateful for any help.
Try unsetting the BUILD_ID in your 'Execute Shell' block. You might not even need to use nohup in this case:
BUILD_ID=
./your_hudson_script_that_starts_tomcat.sh
Without seeing your script it is difficult to give an exact answer. However you could try adding the following to the start of your script (assuming it is a bash script):
# Trace executed commands.
set -x
# Save stdout / stderr in files
exec >/tmp/my_script.stdout
exec 2>/tmp/my_script.stderr
You could also try adding
set -e
to make the shell exit immediately if a command returns an error status.
If it looks as though Hudson is killing off Tomcat then you might want to run it within nohup (if you're not already doing that):
nohup bin/startup.sh >/dev/null 2>&1 &
