Cron Job Killing and Restarting a Python Script - Linux

I set up a cron job on a Linux server to kill and restart a Python script (run.py) every other day. I set the job to run as root, but I find that sometimes it doesn't kill the process properly (and ends up with two copies of the script running at once).
Is there a better way to do this?
My cron job parameters:
0 8 * * 1,4,7 cd /home/myUser && ./start.sh
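For reference, the five time fields of that entry break down as follows (standard cron syntax; day-of-week 7 is an alias for Sunday on most implementations):
# minute hour day-of-month month day-of-week
# 0      8    *            *     1,4,7      -> 08:00 on Mon, Thu and Sun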
start.sh:
#!/bin/bash
echo "Running..."
sudo pkill -f run.py
sudo python run.py &

The process runs as python, not run.py, so a plain pkill run.py (which matches on process name) won't find anything. pkill -f does match against the full command line, but such a loose pattern is fragile and can hit unrelated processes.
You should echo the PID of the process to a file and use that value to kill the previous process if it's still running. Just add echo $! >/path/to/pid.file as the last line in your start.sh script to do so.
Read more:
https://serverfault.com/questions/205498/how-to-get-pid-of-just-started-process
How to read a file into a variable in shell?
http://www.cyberciti.biz/faq/kill-process-in-linux-or-terminate-a-process-in-unix-or-linux-systems/
Example to get you started:
#!/bin/bash
echo "Running..."
# Kill the previous instance, if a PID file from the last run exists
[ -f /path/to/pid.pid ] && sudo pkill -F /path/to/pid.pid
sudo python /path/to/run.py &
# $! is the PID of sudo here; sudo forwards the signal on to the python child
echo $! > /path/to/pid.pid

Another alternative is to make the Python script run under Upstart, if you are on a system that supports it. Then you can just do sudo /sbin/stop job_name and sudo /sbin/start job_name at the beginning of your script, and Upstart manages the PIDs for you (a minimal sketch follows the links below).
Python upstart script
Upstart python script
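For concreteness, a minimal job file sketch (hypothetical job name runpy and paths; adapt to your layout), saved as /etc/init/runpy.conf:
description "run.py worker"
start on runlevel [2345]
stop on runlevel [016]
# Upstart tracks the PID itself; respawn restarts the script if it dies
respawn
exec /usr/bin/python /home/myUser/run.py
With that in place, sudo /sbin/start runpy and sudo /sbin/stop runpy replace the pkill dance entirely.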

Related

Kill background process started from the same bash script [duplicate]

I have a script that looks like this:
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
cd web/code/protected/tests/
phpunit functional/
popd
The selenium server needs to be running for the tests; however, after the phpunit command finishes I'd like to kill the selenium-server that was running.
How can I do this?
You can probably save the PID of the process in a variable, then use the kill command to kill it.
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
serverPID=$!
cd web/code/protected/tests/
phpunit functional/
kill $serverPID
popd
I haven't tested it myself; I would have liked to write this as a comment, but I don't have enough reputation yet :)
When the script is executed, a new shell instance is created, which means that jobs inside the script will not list any jobs running in the parent shell.
Since selenium-server is the only background process created inside the script, it can be killed using
#The first job
kill %1
Or
#The previous job (with only a single job, this refers to the same one)
kill %-
As long as you don't launch any other process in the background - which you don't - you can use $! directly:
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
cd web/code/protected/tests/
phpunit functional/
kill $!
popd
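If you want the server killed even when phpunit fails or the script is interrupted, a trap on EXIT is a slightly more defensive variant (untested sketch):
pushd .
nohup java -jar test/selenium-server.jar > /dev/null 2>&1 &
serverPID=$!
# Runs on any exit from the script, normal or not; ignore errors if the server is already gone
trap 'kill "$serverPID" 2>/dev/null' EXIT
cd web/code/protected/tests/
phpunit functional/
popd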

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though execution has completed; the hang occurs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Red Hat
Dest OS: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1 - and also redirect its input with < /dev/null.
Or you can use the nohup command.
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
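You can reproduce the hang with nothing but sleep (a sketch, assuming any host you can ssh to):
# Hangs for 60 seconds: the background sleep inherits and holds open the session's stdout
ssh somehost 'sleep 60 & echo hi'
# Returns immediately: the child is fully detached from the session
ssh somehost 'sleep 60 </dev/null >/dev/null 2>&1 & echo hi'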

Cannot return to shell session after script

I cannot get a script to return to bash.
The script is kicked off via the following Docker directives:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["set -e && /config/startup/init.sh"]
The init script looks like this:
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
for f in /etc/postfix/init.d/*.sh; do
[ -f "$f" ] && . "$f"
done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash
And this is the command I use to kick off the image into a container:
docker run -it --env-file ENV_LOCAL mailrelay
The init script runs as expected (I see output from the scripts within the /etc/postfix/init.d/ directory, and supervisord kicks off Postfix).
The problem is getting the script to return to the parent process (bash) instead of needing to start a new one. After it hits supervisord, the session just sits there, requiring a Ctrl+C to get back into a bash prompt.
If I leave off the call to bash at the end of the init.sh script, Ctrl+D exits the script AND the container, returning me to the host OS (osx). If I replace the bash call with exit, it returns to the host OS as well.
Is supervisord behaving the way it's supposed to, by running in the foreground this way? I'd like to be able to easily get back into the container shell session to check to see if things are running. Am I left with needing to Ctrl+D (into the secondary bash session) in order to do this?
UPDATE
Marc B
Take out the bash line, so you don't start a new shell. And if
supervisord doesn't go into the background automatically, you could
try running it with & to force it into the background, or maybe
there's an extra CLI option to force it to go into daemon mode.
I've tried removing the last call to bash, but as I've mentioned it just sits there still, and Ctrl+D takes me to the host OS (exits the container).
I just tried /usr/bin/supervisord -c /etc/supervisord.conf & (and left off the call to bash at the end) and it just immediately returns to host OS, exiting the container. I assume because the container had nothing left to "do", and so stopped.
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
for f in /etc/postfix/init.d/*.sh; do
[ -f "$f" ] && . "$f"
done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash # You are spawning a new bash shell here. Remove this statement
At the end you're stuck in a child bash shell :(
Now if you're not returning to the parent shell, the last command that you have run is the culprit.
/usr/bin/supervisord -c /etc/supervisord.conf
You can either force the command to run in the background with
/usr/bin/supervisord -c /etc/supervisord.conf & # the trailing & runs it in the background
A workaround for keeping the container open is mentioned here
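A common pattern here (an editorial sketch, not from the thread) is to keep supervisord itself in the foreground as the container's main process, and get shells on demand with docker exec:
# in /etc/supervisord.conf: don't daemonize, so the container's PID 1 stays alive
[supervisord]
nodaemon=true
Then, from the host, open a shell in the running container whenever you want to poke around:
docker exec -it <container> /bin/bash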

UPSTART script non root not working

I'm trying to run a nodejs application using Upstart as a non-root user, but somehow parts of the script will not run. For instance:
If I run it as root (example below), NODE_ENV never gets set.
The only way to stop it is with "sudo initctl stop pdcapp"; sudo stop pdcapp (or start) would not work.
When calling sudo initctl stop pdcapp, the pre-stop script will not echo to the log file.
If I try to run it as a non-root user, it will not even start.
Isn't there a cleaner, easier way of doing this (systemd)? I've looked at various tutorials and apparently this is how it's done, so what am I missing here?
This is the .conf file under /etc/init/
env FULL_PATH="/srv/pd/sept011100/dev"
env NODE_PATH="/usr/local/nodeJS/bin/node"
env NODE_ENV=production
start on filesystem or runlevel [2345]
stop on [!2345]
script
export NODE_ENV #this variable is never set
echo $$ > /var/run/PD.pid
cd $FULL_PATH
# the command below will not work
#exec sudo -u nginx "$NODE_PATH server.js >> /var/log/PD/pdapp.log 2>&1"
exec $NODE_PATH server.js >> /var/log/PD/pdapp.log 2>&1
end script
pre-start script
echo "[`date`] (sys) Starting" >> /var/log/PD/pdapp.log
end script
pre-stop script
rm /var/run/pdapp.pid
echo "[`date`] (sys) Stopping" >> /var/log/PDC/pdapp.log
end script
In /var/log/messages I get this when I stop the application; otherwise I get nothing in the log file:
Sep 2 18:23:14 547610-redhat-dev2 init: pdcapp pre-stop process (6903) terminated with status 1
Sep 2 18:23:14 547610-redhat-dev2 init: pdcapp main process (6899) terminated with status 143
Any ideas why this is not working? I'm running Red Hat 6.5.
Red Hat ships a super old version of Upstart that is probably full of bugs, because they never contributed to Upstart despite using it (Fedora switched to systemd right after RHEL 6 was released, before Upstart ever got much real exercise there).
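To check which version you are actually dealing with (a quick sketch):
# prints something like "initctl (upstart 0.6.5)" on RHEL 6
initctl --version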

Upstart: Error when using command substitution in post-start script stanza during startup sequence

I'm seeing an issue in upstart where using command substitution inside a post-start script stanza causes an error (syslog reports "terminated with status 1"), but only during the initial system startup.
I've tried using just about every startup event hook under the sun. local-filesystems and net-device-up worked without error about 1/100 tries, so it looks like a race condition. It works just fine on manual start/stop. The command substitutions I've seen trigger the error are a simple cat or date, and I've tried using both the $() way and the backtick way. I've also tried using sleep in pre-start to beat the race condition but that did nothing.
I'm running Ubuntu 11.10 on VMware with a Win7 host. I've spent too many hours troubleshooting this already... Anyone got any ideas?
Here is my .conf file for reference:
start on runlevel [2345]
stop on runlevel [016]
env NODE_ENV=production
env MYAPP_PIDFILE=/var/run/myapp.pid
respawn
exec start-stop-daemon --start --make-pidfile --pidfile $MYAPP_PIDFILE --chuid node-svc --exec /usr/local/n/versions/0.6.14/bin/node /opt/myapp/live/app.js >> /var/log/myapp/audit.node.log 2>&1
post-start script
MYAPP_PID=`cat $MYAPP_PIDFILE`
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] + Started $UPSTART_JOB [$MYAPP_PID]: PROCESS=$PROCESS UPSTART_EVENTS=$UPSTART_EVENTS" >> /var/log/myapp/audit.upstart.log
end script
post-stop script
MYAPP_PID=`cat $MYAPP_PIDFILE`
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] - Stopped $UPSTART_JOB [$MYAPP_PID]: PROCESS=$PROCESS UPSTART_STOP_EVENTS=$UPSTART_STOP_EVENTS EXIT_SIGNAL=$EXIT_SIGNAL EXIT_STATUS=$EXIT_STATUS" >> /var/log/myapp/audit.upstart.log
end script
The most likely scenario I can think of is that $MYAPP_PIDFILE has not been created yet.
Because you have not specified an 'expect' stanza, the post-start is run as soon as the main process has forked and execed. So, as you suspected, there is probably a race between start-stop-daemon running node and writing that pidfile and /bin/sh forking, execing, and forking again to exec cat $MYAPP_PIDFILE.
The right way to handle this is to rewrite your post-start like this:
post-start script
    for i in 1 2 3 4 5; do
        if [ -f $MYAPP_PIDFILE ]; then
            echo ...
            exit 0
        fi
        sleep 1
    done
    echo "timed out waiting for pidfile"
    exit 1
end script
It's worth noting that Upstart 1.4 (first included in Ubuntu 12.04) added the ability to log job output, so there's no need to redirect output into a special log file. All console output defaults to /var/log/upstart/$UPSTART_JOB.log (which is rotated by logrotate), so those echoes could just be bare echoes.
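Under that scheme the exec line from the question could drop its redirection entirely (a sketch for Upstart 1.4+; note the added --, which start-stop-daemon needs in order to pass the script path through to node):
# stdout/stderr land in /var/log/upstart/<job>.log automatically
exec start-stop-daemon --start --make-pidfile --pidfile $MYAPP_PIDFILE --chuid node-svc --exec /usr/local/n/versions/0.6.14/bin/node -- /opt/myapp/live/app.js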
