Upstart resulting in two processes - why?

I am using upstart to start a NodeJS process which is using NVM (node version manager).
The Upstart job configuration is like this:
description "Service to start node app"
author "Barry Steyn"
setuid devuser
setgid devuser
env DIR=/home/devuser/nodejs/authentication
script
    chdir $DIR
    exec bash -c 'source /home/devuser/nvm/nvm.sh && node app'
end script
respawn
This starts node fine, but when I do a ps wax | grep node, I get these two processes:
4284 ? Ss 0:00 bash -c source /home/devuser/nvm/nvm.sh && node app
4316 ? Sl 1:09 node app
Why do I get two processes? Is this in any way less efficient?

The first process is the instance of bash that started node.js. The second process is the actual node.js process.
The bash process is using a few resources (mostly memory), but is just sitting around waiting on node.js to exit.
I believe you can get rid of the extra bash by doing this:
Replace:
exec bash -c 'source /home/devuser/nvm/nvm.sh && node app'
With:
exec bash -c 'source /home/devuser/nvm/nvm.sh && exec node app'
This makes the bash process exec node.js directly instead of forking a child first: bash is replaced by node, so no extra process is left waiting around for node.js to exit.
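You can see the same effect in isolation with any long-running command (sleep stands in for node app here):

# Without exec: bash forks a child and waits, so ps shows
# both the "bash -c" wrapper and the sleep process.
bash -c 'sleep 100' &

# With exec: bash replaces itself with the command, so ps
# shows only the sleep process.
bash -c 'exec sleep 100' &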

Related

Cannot return to shell session after script

I cannot get a script to return to bash.
The script is kicked off via the following Docker directives:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["set -e && /config/startup/init.sh"]
The init script looks like this:
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
    for f in /etc/postfix/init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash
And this is the command I use to kick off the image into a container:
docker run -it --env-file ENV_LOCAL mailrelay
The init script runs as expected: I see output from the scripts within the /etc/postfix/init.d/ directory, and supervisord kicks off Postfix.
The problem is getting the script to return to the parent process (bash) instead of needing to start a new one. After it reaches supervisord, the session just sits there, and it takes a Ctrl+C to get back to a bash prompt.
If I leave off the call to bash at the end of the init.sh script, Ctrl+D exits the script AND the container, returning me to the host OS (osx). If I replace the bash call with exit, it returns to the host OS as well.
Is supervisord behaving the way it's supposed to, by running in the foreground this way? I'd like to be able to easily get back into the container's shell session to check whether things are running. Am I left needing that secondary bash session in order to do this?
UPDATE
Marc B commented:
"Take out the bash line so you don't start a new shell. If supervisord doesn't go into the background automatically, you could try running it with & to force it into the background, or maybe there's an extra CLI option to make it go into daemon mode."
I've tried removing the last call to bash, but as I've mentioned it just sits there still, and Ctrl+D takes me to the host OS (exits the container).
I just tried /usr/bin/supervisord -c /etc/supervisord.conf & (and left off the call to bash at the end) and it just immediately returns to host OS, exiting the container. I assume because the container had nothing left to "do", and so stopped.
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
    for f in /etc/postfix/init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash # You are spawning a new bash shell here. Remove this statement
At the end you're stuck in a child bash shell :(
Now if you're not returning to the parent shell, the last command that you have run is the culprit.
/usr/bin/supervisord -c /etc/supervisord.conf
You can force the command to run in the background:
/usr/bin/supervisord -c /etc/supervisord.conf &   # the & tells the shell to run it in the background
A workaround for keeping the container open is mentioned here
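If the goal is simply to keep the container alive with supervisord as its main process, a common pattern is to run supervisord in the foreground and make it the container's final command. A minimal sketch, assuming a standard supervisord setup (paths are illustrative):

# /etc/supervisord.conf (excerpt): keep supervisord in the foreground
[supervisord]
nodaemon=true

# Dockerfile: make supervisord the container's main process
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]

You can then open a shell in the running container with docker exec -it <container> bash instead of making an interactive bash the main process.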

Ubuntu upstart gets incorrect PID from Play 1.3

The Upstart script using start-stop-daemon that we've been using with Play 1.2.7 is unable to stop/restart Play since upgrading to Play 1.3, because Upstart tracks an incorrect PID.
Framework version: 1.3.0 on Ubuntu 12.04.5 LTS
Reproduction steps:
1. Set up an Upstart script (playframework.conf) for a Play application.
2. The Play application starts successfully on server reboot.
3. Running 'sudo status playframework' returns "playframework start/running, process 28912" - at this point process 28912 doesn't exist.
4. vi {playapplicationfolder}/server.pid shows 28927.
5. 'stop playframework' then fails due to unknown PID 28912; 'status playframework' results in "playframework stop/killed, process 28912".
The only way to restart Play after this point is either to find the actual process, kill it, and then start Play manually with the usual 'play start' command, or to restart the server.
This has broken our deployment scripts, since we used to install the new version of our app and then do a play restart before reconnecting to the load balancer.
Upstart Script:
#Upstart script for a play application that binds to an unprivileged user.
# put this into a file like /etc/init/playframework
# you can then start/stop it using either initctl or start/stop/restart
# e.g.
# start playframework
description "PlayApp"
author "-----"
version "1.0"
env PLAY_BINARY=/opt/play/play
env JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
env HOME=/opt/myapp/latest
env USER=ubuntu
env GROUP=admin
env PROFILE=prod
start on (filesystem and net-device-up IFACE=lo) or runlevel [2345]
stop on runlevel [!2345]
limit nofile 65536 65536
respawn
respawn limit 10 5
umask 022
expect fork
pre-start script
    test -x $PLAY_BINARY || { stop; exit 0; }
    test -c /dev/null || { stop; exit 0; }
    chdir ${HOME}
    rm ${HOME}/server.pid || true
    /opt/configurer.sh
end script
pre-stop script
    exec $PLAY_BINARY stop $HOME
end script
post-stop script
    rm ${HOME}/server.pid || true
end script
script
    exec start-stop-daemon --start --exec $PLAY_BINARY --chuid $USER:$GROUP --chdir $HOME -- start $HOME -javaagent:/opt/newrelic/newrelic.jar --%$PROFILE -Dprecompiled=true --http.port=8080 --https.port=4443
end script
We've tried specifying the PID file in the start-stop-daemon call as per http://man.he.net/man8/start-stop-daemon, but this didn't seem to have any effect either.
I have found some threads on similar issues (https://askubuntu.com/questions/319199/upstart-tracking-wrong-pid-of-process-not-respawning) but have been unable to find a way around this so far. I have tried changing 'expect fork' to 'expect daemon', but the same issue remains. I also can't see what changed between Play 1.2.7 and 1.3 to cause this.
Another SO post has also asked a similar question but not had an answer as yet: https://stackoverflow.com/questions/23117345/upstart-gets-wrong-pid-after-launching-celery-with-start-stop-daemon
This is because getJavaVersion() spawns a subprocess, which bumps the fork count and breaks Upstart's PID tracking: Upstart expects the daemon to fork exactly zero, one, or two times, depending on which expect stanza you use.
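For reference, Upstart decides which PID to track by counting forks according to the expect stanza:

# (no expect)    - the exec'd process must not fork at all
# expect fork    - the process forks exactly once
# expect daemon  - the process forks exactly twice (classic daemonization)
# An unexpected extra fork (here, the one from getJavaVersion()) makes
# Upstart latch onto a PID that no longer exists once that subprocess exits.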
I've fixed this in a pull request.

Cron Job Killing and Restarting Python Script

I set up a cron job on a linux server to kill and restart a python script (run.py) every few days. I set the job to run as root, but I find that sometimes it doesn't kill the process properly (and ends up with two copies of the script running at once).
Is there a better way to do this?
My cron job parameters:
0 8 * * 1,4,7 cd /home/myUser && ./start.sh
start.sh:
#!/bin/bash
echo "Running..."
sudo pkill -f run.py
sudo python run.py &
I guess run.py runs as python, not as run.py, so you won't find anything to match with pkill -f run.py.
You should echo the PID of the process to a file and use that value to kill the previous process if it's still running. Just add echo $! >/path/to/pid.file as the last line in your start.sh script to do so.
Read more:
https://serverfault.com/questions/205498/how-to-get-pid-of-just-started-process
How to read a file into a variable in shell?
http://www.cyberciti.biz/faq/kill-process-in-linux-or-terminate-a-process-in-unix-or-linux-systems/
Example to get you started:
#!/bin/bash
echo "Running..."
# -F reads the PID to signal from the given file
sudo pkill -F /path/to/pid.pid
sudo python /path/to/run.py &
# $! is the PID of the most recently backgrounded process; record it for next time
echo $! > /path/to/pid.pid
Another alternative is to run the Python script under Upstart, if you are on a system that supports it. Then you can just do sudo /sbin/start job_name to start it and sudo /sbin/stop job_name to stop it; this makes Upstart manage the PIDs for you.
Python upstart script
Upstart python script
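A minimal sketch of such a job, reusing the paths from the question (the job name run-py is hypothetical):

# /etc/init/run-py.conf
description "run.py under Upstart"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /home/myUser
exec /usr/bin/python /home/myUser/run.py

With this in place, sudo start run-py and sudo stop run-py replace the pkill/PID-file dance, and respawn restarts the script if it dies.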

Upstart | Ubuntu | Nodejs | Can not run multiple exec inside script block

I'm trying to set up an Upstart conf for my Node.js app. I have to run two scripts, script_1.js and script_2.js. Here is the conf:
start on startup
stop on shutdown
respawn
console log
env PROJ=/project/path
script
    cd $PROJ
    exec node script_1.js 2>&1 >> $PROJ/logs/script_1.log
    exec node script_2.js 2>&1 >> $PROJ/logs/script_2.log
end script
The problem is that only script_1.js runs. If I move the exec node script_2.js line right before cd $PROJ, then only script_2.js runs.
How can I make this Upstart conf run both scripts?
Thank you!
An exec replaces the shell that is running the script block, so the line after the first exec is never reached. Create a separate job for each script, as sketched below.
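A minimal sketch reusing the settings from the question (the file names /etc/init/script1.conf and /etc/init/script2.conf are hypothetical); note the redirection is written as >> file 2>&1 so stderr actually follows stdout into the log:

# /etc/init/script1.conf
start on startup
stop on shutdown
respawn
env PROJ=/project/path
script
    cd $PROJ
    exec node script_1.js >> $PROJ/logs/script_1.log 2>&1
end script

# /etc/init/script2.conf is identical, except it runs script_2.js
# and logs to script_2.log.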

How to run Node.js as a background process and never die?

I connect to the linux server via putty SSH. I tried to run it as a background process like this:
$ node server.js &
However, after 2.5 hrs the terminal becomes inactive and the process dies. Is there any way I can keep the process alive even with the terminal disconnected?
Edit 1
Actually, I tried nohup, but as soon as I close the Putty SSH terminal or unplug my internet, the server process stops right away.
Is there anything I have to do in Putty?
Edit 2 (on Feb, 2012)
There is a node.js module, forever. It will run node.js server as daemon service.
nohup node server.js > /dev/null 2>&1 &
nohup means: do not terminate this process even when the controlling terminal is closed.
> /dev/null means: stdout goes to /dev/null (a dummy device that does not record any output).
2>&1 means: stderr also goes to stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g.: 2>/tmp/myLog
& at the end means: run this command as a background task.
Simple solution (if you are not interested in coming back to the process, just want it to keep running):
nohup node server.js &
There's also the jobs command, which shows an indexed list of those backgrounded processes. You can kill a backgrounded process by running kill %1 or kill %2, the number being the index of the process.
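An illustrative session (job and process numbers will vary):

$ nohup node server.js &
[1] 25132
$ jobs
[1]+  Running                 nohup node server.js &
$ kill %1   # terminate background job number 1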
Powerful solution (allows you to reconnect to the process if it is interactive):
screen
You can then detach by pressing Ctrl+a, then d, and reattach by running screen -r
Also consider the newer alternative to screen, tmux.
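The tmux flow is analogous (the session name is arbitrary):

tmux new -s node-server       # start a named session, then run: node server.js
# detach with Ctrl+b, then d; the session keeps running
tmux attach -t node-server    # reattach later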
You really should try to use screen. It is a bit more complicated than just doing nohup long_running &, but once you understand screen you will never go back.
Start your screen session at first:
user#host:~$ screen
Run anything you want:
wget http://mirror.yandex.ru/centos/4.6/isos/i386/CentOS-4.6-i386-binDVD.iso
Press ctrl+A and then d. Done. Your session keeps going on in background.
You can list all sessions with screen -ls, and attach to one with a command like screen -r 20673.pts-0.srv, where 20673.pts-0.srv is an entry from that list.
This is an old question, but it ranks high on Google. I almost can't believe the highest-voted answers, because running a node.js process inside a screen session, with &, or even with nohup -- all of them -- are just workarounds.
Especially the screen/tmux solution, which should really be considered an amateur solution. Screen and tmux are not meant to keep processes running, but to multiplex terminal sessions. That's fine when you are running a script on your server and want to disconnect, but for a node.js server you don't want your process attached to a terminal session. This is too fragile. To keep things running you need to daemonize the process!
There are plenty of good tools to do that.
PM2: http://pm2.keymetrics.io/
# basic usage
$ npm install pm2 -g
$ pm2 start server.js
# you can even define how many processes you want in cluster mode:
$ pm2 start server.js -i 4
# you can start various processes, with complex startup settings
# using an ecosystem.json file (with env variables, custom args, etc):
$ pm2 start ecosystem.json
One big advantage I see in favor of PM2 is that it can generate the system startup script to make the process persist between restarts:
$ pm2 startup [platform]
Where platform can be ubuntu|centos|redhat|gentoo|systemd|darwin|amazon.
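A typical flow to make the process survive reboots (pm2 save snapshots the running process list so the generated startup script can resurrect it):

$ pm2 startup ubuntu    # generate and install the init script
$ pm2 start server.js
$ pm2 save              # save the process list to restore at boot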
forever.js: https://github.com/foreverjs/forever
# basic usage
$ npm install forever -g
$ forever start app.js
# you can run from a json configuration as well, for
# more complex environments or multi-apps
$ forever start development.json
Init scripts:
I won't go into detail about how to write an init script, because I'm not an expert on the subject and it would be too long for this answer, but basically they are simple shell scripts triggered by OS events. You can read more about this here.
Docker:
Just run your server in a Docker container with the -d option and, voilà, you have a daemonized node.js server!
Here is a sample Dockerfile (from node.js official guide):
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
Then build your image and run your container:
$ docker build -t <your username>/node-web-app .
$ docker run -p 49160:8080 -d <your username>/node-web-app
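To verify the daemonized container afterwards:

$ docker ps                    # confirm the container is running
$ docker logs <container id>   # inspect the app's output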
Always use the proper tool for the job. It'll save you a lot of headaches and many extra hours!
Another solution: disown the job.
$ nohup node server.js &
[1] 1711
$ disown -h %1
nohup will allow the program to continue even after the terminal dies. I have actually had situations where nohup prevents the SSH session from terminating correctly, so you should redirect input as well:
$ nohup node server.js </dev/null &
Depending on how nohup is configured, you may also need to redirect standard output and standard error to files.
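Putting those together, a fully redirected invocation might look like this (the log path is illustrative):

$ nohup node server.js </dev/null >/var/log/node-server.log 2>&1 &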
Nohup and screen offer great lightweight solutions for running Node.js in the background. The Node.js process manager (PM2) is a handy tool for deployment. Install it globally with npm:
npm install pm2 -g
to run a Node.js app as a daemon:
pm2 start app.js
You can optionally link it to Keymetrics.io, a monitoring SaaS made by Unitech.
$ node server.js & disown
disown removes the job from the shell's active job list, so the shell won't send it SIGHUP on exit and the command keeps running in the background.
I have this function in my shell rc file, based on @Yoichi's answer:
nohup-template () {
    [[ "$1" = "" ]] && echo -e "Example usage:\nnohup-template urxvtd" && return 0
    nohup "$1" > /dev/null 2>&1 &
}
You can use it this way:
nohup-template "command you would execute here"
Have you read about the nohup command?
To run a command as a system service on Debian with sysv init:
Copy the skeleton script and adapt it for your needs; probably all you have to do is set some variables. Your script will inherit sane defaults from /lib/init/init-d-script; if something does not fit your needs, override it in your script. If something goes wrong, you can see the details in the source of /lib/init/init-d-script. The mandatory vars are DAEMON and NAME. The script will use start-stop-daemon to run your command; in START_ARGS you can define additional start-stop-daemon parameters to use.
cp /etc/init.d/skeleton /etc/init.d/myservice
chmod +x /etc/init.d/myservice
nano /etc/init.d/myservice
/etc/init.d/myservice start
/etc/init.d/myservice stop
This is how I run some Python stuff for my MediaWiki wiki:
...
DESC="mediawiki articles converter"
DAEMON='/home/mss/pp/bin/nslave'
DAEMON_ARGS='--cachedir /home/mss/cache/'
NAME='nslave'
PIDFILE='/var/run/nslave.pid'
START_ARGS='--background --make-pidfile --remove-pidfile --chuid mss --chdir /home/mss/pp/bin'
export PATH="/home/mss/pp/bin:$PATH"
do_stop_cmd() {
    start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 \
        $STOP_ARGS \
        ${PIDFILE:+--pidfile ${PIDFILE}} --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    rm -f $PIDFILE
    return $RETVAL
}
Besides setting the vars, I had to override do_stop_cmd because Python substitutes the executable name, so the service did not stop properly.
Apart from the cool solutions above, I'd also mention the supervisord and monit tools, which can start a process, monitor its presence, and restart it if it dies. With monit you can also run active checks, such as checking whether the process responds to an HTTP request.
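A sketch of such an active check in monit's config syntax (the service name, pidfile, and port are hypothetical):

check process node-server with pidfile /var/run/node-server.pid
    start program = "/etc/init.d/node-server start"
    stop program = "/etc/init.d/node-server stop"
    if failed port 8080 protocol http then restart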
For Ubuntu I use this:
(exec PROG_SH &> /dev/null &)
Try this for a simple solution:
cmd & exit
