init script hangs the complete system up - linux

I wrote an init script, executed last, that starts a Python script. The Python script runs forever and never terminates, so my little Linux box (getty terminal on a tty) just shows the script's output and never reaches the login prompt. I also made the mistake of not assigning a fixed IP, so I basically had to start over (re-download the initial build onto the flash). Now I'm wondering what my options are: is it enough to launch the script from my init script with an & at the end, or do I need nohup? What's the best way to resolve this?
Thank you!
Ron

I got this resolved by redirecting the output of the script to /dev/null and adding an ampersand at the end, like so:
myscript.sh > /dev/null &
This returns control to the shell and keeps the script running in the background without writing its output to stdout.
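If the script also writes to stderr, or it needs to keep running after the shell that launched it goes away, a slightly fuller variant is (a sketch only; the path is a placeholder):
nohup /path/to/myscript.sh > /dev/null 2>&1 &
Here 2>&1 sends stderr to the same place as stdout, and nohup keeps the process alive even if the controlling terminal disappears.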

Yes, you need to make sure your init startup script returns. It's good practice to follow the LSB documentation and write a script that accepts the standard actions (start, stop, status) and returns proper exit codes.
Launching the Python script in the background with an & at the end of the command should be sufficient. You may also want to look at the start-stop-daemon command to start your process.
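For illustration, a rough skeleton of such an init script on a Debian-style system (the script name, paths, and pidfile location are placeholders, not from the question):
#!/bin/sh
# /etc/init.d/myscript -- hypothetical wrapper around the long-running Python script
PIDFILE=/var/run/myscript.pid
case "$1" in
  start)
    # --background detaches the process so the init script itself returns promptly
    start-stop-daemon --start --background --make-pidfile --pidfile "$PIDFILE" \
      --exec /usr/bin/python -- /home/ron/myscript.py
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
  status)
    # crude check: is the recorded pid still alive?
    [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
    ;;
  *)
    echo "Usage: $0 {start|stop|status}" >&2
    exit 1
    ;;
esac
exit 0
The key point is that start returns immediately while the Python process keeps running in the background, so getty can go on to present the login prompt.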

Related

How to get to the shell that runs my rc.local script?

I have modified my rc.local script to run some python script at startup.
The Python script seems to start successfully.
Since the script runs forever (intentionally) and I want to see what it does, my question is:
Is there a way to access the shell that runs this script?
Yes, I could log to some file to see what is going on, but what if the script needs to get input from the user via the console?
Thanks for your help!
You will not be able to interact with the script run by rc.local. But you can see what it does by logging its output into dedicated files:
python myscript.py > /home/myhome/log/myscript.log 2> /home/myhome/log/myscript.err
where error messages go into a separate log file.
Note that your script will be executed by root, so it runs with root's permissions and any files it creates are owned by root.
Here's a link to an earlier answer about this with a method to log all output of rc.local.
In the log file you can then see whether execution stops because the script is waiting for input or has actually crashed, and fix the script accordingly.
If you don't want to mess with rc.local for testing, you could also first run the script through crontab on your account or root's (scheduled execution by user; see man crontab). This might be easier for debugging, and you can start it through rc.local once it works the way you want.
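A minimal sketch of that crontab route (edit with crontab -e as the relevant user; the schedule and paths are placeholders):
# run the script once a minute while testing, logging stdout and stderr
* * * * * /usr/bin/python /home/myhome/myscript.py >> /home/myhome/log/myscript.log 2>&1
Once it behaves as expected, remove the crontab entry and move the command (with the same redirections) into rc.local.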

no job control in this shell

Quick question regarding this error in Bash. I have some bash scripts that I run at work on some servers. We have been trying to implement Rundeck as a way to automate. When I call these scripts through Rundeck I get the error bash: no job control in this shell. I am new to Linux and shell scripting, but as far as I can tell this is due to the interactivity of the shell. The scripts I am calling call other scripts (Perl and shell) as part of their operation, but since I am not logged in at a terminal they can't do this and fail. That is the best understanding I have.
I have tried adding -l to the #!/bin/bash line in the script I send through Rundeck. I have also tried Rundeck's ad-hoc command option and running the jobs individually. Still no luck. I thought perhaps it was a pathing issue, so I tried setting PATH=$PATH, but no change.
Basically I need to allow these scripts to start their subprocesses. So the question is: how do I modify how I call these scripts so they have full control? Is that possible? Sorry if this lacks info; I just don't know how to describe it properly. Let me know what other info is needed. Thanks
Edit: some code snippets showing where the message comes in:
system "pntadm -R $Net >/dev/null 2>&1";
system "pntadm -C $Net $BUILD{$Net}{netmask} $MYip";

Running a script in the background before logging in

I have a Python script that I want to run before any user logs in. This is for a home automation server, and I want it up and running as soon as the system allows.
I already have it in the rc.local file including an ampersand. This works.
But I can't see the screen output that it produces.
When I log into the unit (it's a Raspberry Pi running Raspbian) via SSH, I can start it using screen, which works best: when I log out and back in it's still there, AND I can see the output from the script.
But when I try running screen from the rc.local file and subsequently log in to check, the script isn't there (ps aux | grep script.py confirms this).
Edit: I've taken on Nirk's solution below about using tail. From the command line it works fine, but starting it from within /etc/rc.local doesn't. I have touched the file and everyone has write access to it.
This is what's in my rc.local file:
python /home/pi/gateway.py &> /x10.log &
UPDATE
This is how I did it in the end:
Although the question was just about how to run in the background prior to login, there was more to it. The script is a work in progress, and because of the way a particular serial device interacts with it, it was prone to crashing (I've almost got all the bugs out of it), so I also needed to be able to restart it. I tried nohup, but for some reason it wouldn't keep it alive, so in the end I found the top answer from this page got it all sorted.
In my /etc/rc.local I start a shell script:
nohup /home/pi/alwaysrun.sh > /home/pi/mha.log 2>&1 &
alwaysrun.sh contains:
#!/bin/bash
until python /home/pi/gateway.py; do
    echo "'gateway.py' exited with exit code $?. Restarting..." >&2
    sleep 1
done
nohup keeps the alwaysrun.sh script alive, and that in turn keeps my gateway.py script running. Redirecting stdout and stderr means I can set up a tail on the log (and/or go back and check it later).
Instead of using screen, if you just want to see the output you should redirect the output of the command to a log file and then tail the file.
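For example (a sketch; the log path is a placeholder, and gateway.py stands in for whatever your script is):
# in /etc/rc.local, before the final exit 0
python /home/pi/gateway.py > /home/pi/gateway.log 2>&1 &
# later, from an SSH session, follow the log as it grows
tail -f /home/pi/gateway.log
tail -f keeps printing new lines as they are appended, which gives roughly the same live view you would get under screen. (If output seems to lag, note that Python block-buffers stdout when it isn't a terminal; running the script with python -u disables that buffering.)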

Tomcat script not working when run from Hudson

I'm trying to run a script which stops and starts Tomcat on Linux.
When I run it from the command line it works fine, but it does not seem to work when I run the same script from the "Execute Shell" build step in a Jenkins/Hudson job. Jenkins doesn't report any errors, but if I try going to the Tomcat page I get a page-not-found error.
So Jenkins seems able to stop the server, but not to bring it back up.
I'd be grateful for any help.
Try unsetting BUILD_ID in your 'Execute Shell' block: Hudson/Jenkins uses the BUILD_ID environment variable to find and kill any processes a build has spawned once the build finishes. You might not even need nohup in this case:
BUILD_ID=
./your_hudson_script_that_starts_tomcat.sh
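A variant you will often see suggested (an assumption on my part, not something from this answer) sets BUILD_ID to a dummy value instead of clearing it, so the process-tree killer no longer matches the Tomcat process, and backgrounds the startup with nohup as well:
# BUILD_ID is already exported by Jenkins/Hudson, so reassigning it is enough
BUILD_ID=dontKillMe
nohup ./your_hudson_script_that_starts_tomcat.sh > /dev/null 2>&1 &
Either way, the point is that the Tomcat process must not carry the BUILD_ID of the build that started it.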
Without seeing your script it is difficult to give an exact answer. However you could try adding the following to the start of your script (assuming it is a bash script):
# Trace executed commands.
set -x
# Save stdout / stderr in files
exec >/tmp/my_script.stdout
exec 2>/tmp/my_script.stderr
You could also try adding
set -e
to make the shell exit immediately if a command returns an error status.
If it looks as though Hudson is killing off Tomcat then you might want to run it within nohup (if you're not already doing that):
nohup bin/startup.sh >/dev/null 2>&1 &

Help debugging a cron job which has the correct script path and works when manually triggered

I'm struggling to debug a cron job which isn't working correctly. The cron job calls a shell script which should unrar a RAR file. This works correctly when I run the script manually, but for some reason it's not working via cron. I am using the absolute file path and have verified that the path is correct. Has anyone got any ideas why this could be happening?
Well, you already said that you have used absolute paths, so the number one problem is dealt with.
Next to check are permissions. Which user is the cron job run as? Does it have all the permissions necessary?
Then, a little trick: when a shell script fails and isn't run in a terminal, I like to redirect its output to a file. Right at the start of the script, add:
exec &>/tmp/my.log
This will redirect STDOUT and STDERR to /tmp/my.log. It might also be a good idea to add the line:
set -x
This will make bash print which command it's about to execute, and at what nesting level.
Happy debugging!
The first thing to check when a cron job fails is whether the full environment is available to the script you are trying to execute. A job executed via cron runs as a detached process, meaning it is not associated with a login environment. So whenever you debug a cron job that works when executed manually, make sure the same environment is available to the cron job as is available to you when you run it by hand. This includes PATH and any other environment variables the script may depend on.
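One way to see what cron actually provides (a sketch; the schedule, paths, and script name are placeholders) is to dump cron's environment with a temporary entry, then set whatever is missing, such as PATH, in the crontab itself:
# temporary entry: capture the environment cron gives your jobs
* * * * * env > /tmp/cron_env.txt 2>&1
# give the real job the PATH it needs and log its output
PATH=/usr/local/bin:/usr/bin:/bin
0 3 * * * /home/myuser/unrar_job.sh >> /tmp/unrar_job.log 2>&1
Comparing /tmp/cron_env.txt with the output of env in your login shell usually shows which variables the script is missing under cron.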
For me, the problem was a different shell interpreter in crontab.
