I have a Python script that I want to run before any user logs in. This is for a home automation server, and I want it to be up and running as soon as the system allows.
I already have it in the rc.local file, with a trailing ampersand to run it in the background. This works.
But I can't see the screen output that it produces.
When I log into the unit (a Raspberry Pi running Raspbian) via SSH, I can start it using screen, which works best: when I log out and back in, it's still there, and I can see the output from the script.
But when I try running screen from the rc.local file and subsequently log in to check, the script isn't there (ps aux | grep script.py confirms this).
Edit: I've taken up Nirk's solution below about using tail. From the command line, it works fine, but starting it from within /etc/rc.local doesn't. I have touched the file, and everyone has write access to it.
This is what's in my rc.local file:
python /home/pi/gateway.py &> /x10.log &
UPDATE
This is how I did it in the end:
Although the question was just about how to run the script in the background before login, there was more to it. The script is a work in progress, and because of the way a particular serial device behaves with it, it is/was prone to crashing (I've almost got all the bugs out of it), so I also needed to be able to restart it. I tried nohup, but for some reason it wouldn't keep the script alive, so in the end the top answer from this page got it all sorted.
In my /etc/rc.local I included a shell script to run:
nohup /home/pi/alwaysrun.sh > /home/pi/mha.log 2>&1 &
alwaysrun.sh contains:
#!/bin/bash
until python /home/pi/gateway.py; do
echo "'gateway.py' exited with exit code $?. Restarting..." >&2
sleep 1
done
nohup will keep the alwaysrun.sh script alive, and that in turn keeps my gateway.py script running. The redirection of stdout and stderr means I can set up a tail on the log (and/or go back and check it later).
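One detail worth checking: rc.local launches alwaysrun.sh directly, so the script must have its execute bit set:
chmod +x /home/pi/alwaysrun.sh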
Instead of using screen, if you just want to see the output, you should redirect the output of the command to a log file and then tail the file.
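For example, a minimal sketch using the script from the question (the log location is arbitrary):
# In /etc/rc.local: run in the background, capturing both stdout and stderr
python /home/pi/gateway.py > /tmp/gateway.log 2>&1 &
# Later, from an SSH session, follow the log as it grows
tail -f /tmp/gateway.log
One caveat: the &> shortcut in the question's rc.local line is a bash extension, and rc.local is typically executed by /bin/sh, where the portable form > file 2>&1 should be used instead.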
Related
I have a process that depends on the internet, which dies randomly due to a spotty connection.
I am writing a cron script, so that it checks every minute if the process is running, and restarts it...
When the process dies, it doesn't kill the terminal window it's in.
I don't want to kill the terminal and then spawn a new one.
I want the shell script I'm writing to execute in the window that's already open...
I'm using i3-sensible-terminal right now, but any terminal would do.
if ! ps -a | grep x123abc > /dev/null ; then
    $CMD
fi
I have not yet found the information I need to make that run in a specific terminal.
Changing CMD to include a terminal seems only to open up a new window...
I suggest a different design that separates running your script from observing its output.
Write the script named worker ("that depends on the internet, which dies randomly due to a spotty connection") so that it appends ALL its output to the log file /home/$USER/worker.log.
Or just redirect ALL output from the worker script to the log file /home/$USER/worker.log:
worker > /home/$USER/worker.log 2>&1
Run the worker script as a restartable service with a systemd unit.
Here is a good article explaining this practice: https://dev.to/setevoy/linux-systemd-unit-files-edit-restart-on-failure-and-email-notifications-5h3k
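As a minimal sketch, such a unit might look like the following (the unit name, user, and script path are assumptions; adapt them to your setup):
# /etc/systemd/system/worker.service
[Unit]
Description=worker script, restarted automatically on failure
Wants=network-online.target
After=network-online.target

[Service]
# The script itself redirects its output to the log file, as shown above
ExecStart=/home/user/worker
User=user
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Then reload systemd and enable the service so it starts on boot:
sudo systemctl daemon-reload
sudo systemctl enable --now worker.service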
Continue to observe the log file /home/$USER/worker.log using the tailf command (or tail -f, since tailf has been removed from recent versions of util-linux):
tailf /home/$USER/worker.log
I have a server that runs an Express app and a React app, and I needed to start both apps on boot.
So I added two lines to rc.local, but it seems like only the first line runs and the second one doesn't. Why is that, and how can I solve it?
Just as in any other script, the second command will only be executed after the first one has finished. That's probably not what you want when the first command is supposed to keep running pretty much forever.
If you want the second command to execute before the first has finished, and if you want the script to exit before the second command has finished, then you must arrange for the commands to run in the background.
So, at a minimum, instead of
my-first-command
my-second-command
you want:
my-first-command &
my-second-command &
However, it's better to do something a little more complex: in addition to putting the command into the background, also set the command's working directory to the root of the filesystem, disconnect its input from the console, deliver its standard output and standard error streams to the syslog service (which will typically append that data to /var/log/syslog), and protect it from unintended signals. Like this:
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
and similarly for the second command.
The extra redirections at the end of the line are there to keep nohup from emitting unwanted informational messages and from creating an unused nohup.out file. You might want to leave the final 2>&1 out until you are sure that the rest of the command is correct and behaving the way you want. Once the only message shown is "nohup: redirecting stderr to stdout", you can restore the 2>&1 to get rid of it.
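For that debugging phase, the same line with the final 2>&1 removed looks like this:
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null )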
I am trying to schedule the execution of a shell script with the Linux tool "at".
The shell script (video.sh) looks like this:
#!/bin/sh
/usr/bin/vlc /home/x/video.mkv
The "at" command:
at -f /home/x/video.sh -t 201411052225
When the time arrives, nothing happens.
I can execute the shell script just fine from the console, or by right-clicking it and choosing Execute; VLC starts like it is supposed to. If I change the script to something simple like
#!/bin/sh
touch something.txt
it works just fine.
Any ideas, why "at" will not properly execute a script that starts a graphical program? How can I make it work?
You're trying to run an X command (a graphical program) at a scheduled time. This will be extremely difficult, and quite fragile, because the script won't have access to the X server.
At the very least, you will need to set DISPLAY to the right value, but even then, I suspect you will have issues with authorisation to use the X screen.
Try setting it to :0.0 and see if that works. But if you're logged out, or the screensaver is on, or any number of other things, it may still fail.
(Also, redirect vlc's stdout and stderr to a file so that you can see what went wrong.)
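For example (the log path is arbitrary):
DISPLAY=:0.0 /usr/bin/vlc /home/x/video.mkv >/tmp/vlc.log 2>&1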
Your best bet might be to try something like xuserrun.
I suspect that atd is not running. You have to start the atd daemon first (and set the DISPLAY variable, as chiastic-security said) ;)
You can test if atd is running with
pidof atd &>/dev/null && echo 'ATD started' || echo >&2 'ATD not started'
Your vlc command should be:
DISPLAY=:0 /usr/bin/vlc /home/x/video.mkv
(:0 is the default display.)
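Putting both suggestions together, a sketch of the whole sequence might look like this (the service command varies by distro; on systemd machines use systemctl start atd):
# Make sure the at daemon is running
sudo service atd start
# Schedule the command with DISPLAY pointing at the default X display
echo 'DISPLAY=:0 /usr/bin/vlc /home/x/video.mkv' | at -t 201411052225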
I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I log into some faraway computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I also want to write the shell output into a text file output.txt (since I can't be logged into the faraway computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't finished its run yet. The above command works fine on my local machine. I first tried plain '>' redirection, but the problem there was that nothing was written into the text file until the whole program had finished (maybe related to the pipe buffer?), so I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command, and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit the 'nohup' and '&', I don't even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility (http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen) instead of nohup. That way I can put my program into a detached state (^A ^D), reconnect to the host, retrieve my screen session (screen -r), and monitor my output as if I had never logged out.
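A minimal sketch of that workflow (the session name sim is arbitrary):
# On the remote machine, start a named screen session
screen -S sim
# Inside the session, run the program with tee as before
./program.exe | tee output.txt
# Detach with Ctrl-A d, log out; later, after reconnecting via ssh:
screen -r sim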
I'm trying to run a script which stops and starts Tomcat on Linux.
When I run it from the command line it works fine, but it does not seem to work when I run the same script from the "Execute Shell" build step in a Jenkins/Hudson job. Jenkins doesn't report any errors, but if I go to the Tomcat page I get a page-not-found error.
So Jenkins seems to be able to stop the server, but not to bring it back up.
I'd be grateful for any help.
Try unsetting the BUILD_ID in your "Execute Shell" block. Jenkins' ProcessTreeKiller identifies processes spawned by a build through the BUILD_ID environment variable and kills them when the build ends, so changing it keeps Tomcat alive. You might not even need nohup in this case:
BUILD_ID=
./your_hudson_script_that_starts_tomcat.sh
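A common variant sets BUILD_ID to a dummy value instead of clearing it; either way the value no longer matches the running build, so the ProcessTreeKiller leaves the process alone:
BUILD_ID=dontKillMe ./your_hudson_script_that_starts_tomcat.sh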
Without seeing your script it is difficult to give an exact answer. However you could try adding the following to the start of your script (assuming it is a bash script):
# Trace executed commands.
set -x
# Save stdout / stderr in files
exec >/tmp/my_script.stdout
exec 2>/tmp/my_script.stderr
You could also try adding
set -e
to make the shell exit immediately if a command returns an error status.
If it looks as though Hudson is killing off Tomcat then you might want to run it within nohup (if you're not already doing that):
nohup bin/startup.sh >/dev/null 2>&1 &