I'm trying to run a Node.js application using Upstart as a non-root user.
But somehow parts of the script will not run. For instance:
If I run it as the root user (example below), NODE_ENV never gets set.
The only way to stop it is with "sudo initctl stop pdcapp";
sudo nameofApp start|stop does not work.
When sudo initctl stop nameofApp is called, the pre-stop script does not echo to the log file.
If I try to run it as a non-root user it does not even start.
Isn't there a cleaner, easier way of doing this (systemd)? I've looked at various tutorials and apparently this is how they've done it, so what am I missing here?
This is the .conf file under /etc/init/
env FULL_PATH="/srv/pd/sept011100/dev"
env NODE_PATH="/usr/local/nodeJS/bin/node"
env NODE_ENV=production
start on filesystem or runlevel [2345]
stop on [!2345]
script
export NODE_ENV #this variable is never set
echo $$ > /var/run/PD.pid
cd $FULL_PATH
# the command below will not work
#exec sudo -u nginx "$NODE_PATH server.js >> /var/log/PD/pdapp.log 2>&1"
exec $NODE_PATH server.js >> /var/log/PD/pdapp.log 2>&1
end script
pre-start script
echo "[`date`] (sys) Starting" >> /var/log/PD/pdapp.log
end script
pre-stop script
rm /var/run/pdapp.pid
echo "[`date`] (sys) Stopping" >> /var/log/PDC/pdapp.log
end script
In /var/log/messages I get this when I stop the application; otherwise I get nothing in the log file:
Sep 2 18:23:14 547610-redhat-dev2 init: pdcapp pre-stop process (6903) terminated with status 1
Sep 2 18:23:14 547610-redhat-dev2 init: pdcapp main process (6899) terminated with status 143
Any ideas why this is not working? I'm running Red Hat 6.5.
Red Hat ships a very old version of Upstart that is probably full of bugs, because they never contributed to Upstart despite using it (Fedora switched to systemd right after RHEL 6 was released, before Upstart had really been exercised).
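For what it's worth, Upstart 0.6.x (which is what RHEL 6 ships) has no setuid stanza, so dropping privileges usually has to happen inside the script stanza, typically with su. A rough sketch using the paths and the nginx user from the question (untested; depending on whether su forks on your system you may also need an expect fork stanza so Upstart tracks the right PID):
script
    export NODE_ENV
    cd $FULL_PATH
    # run server.js as the nginx user; the log file must be writable by nginx
    exec su -s /bin/sh -c "exec $NODE_PATH server.js >> /var/log/PD/pdapp.log 2>&1" nginx
end script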
I am getting a Django 1.6 setup started on a Linux (Debian Wheezy) server on Google Compute Engine. I've got Celery 3.1 running in the background to help with some processes. When I start a new instance (using a snapshot I've created), I always need to start Celery manually. I am looking for a way to start Celery automatically on server boot. This is particularly helpful if the server decides to restart, as they seem to do now and then. To achieve this, I've edited the rc.local file:
$ sudo nano /etc/rc.local
It used to contain the following:
exit 0
[ -x /sbin/initctl ] && initctl emit --no-wait google-rc-local-has-run || true
I've edited the file such that it now reads:
cd /home/user/gce_app celery -A myapp.tasks --concurrency=1 --loglevel=info worker > output.log 2> errors.log &
exit 0
[ -x /sbin/initctl ] && initctl emit --no-wait google-rc-local-has-run || true
The directory:
/home/user/gce_app
is where my Django project resides and the directory I need to be in to start Celery. However, after restarting the instance, when I type in:
$ celery status
Error: No nodes replied within time constraint.
Opening the errors.log file, I see:
/etc/rc.local: 14: /etc/rc.local: celery: not found
Surely the cd at the start of that line should address this? Is there a way (within the Django project itself) to start Celery when the project starts, to make the code more platform-independent and immune to inevitable OS updates?
I think you're missing a semicolon between your 'cd' and celery invocations. Also, I suspect rc.local may not be searching your path, so you may need to give an absolute path to celery. e.g.
cd /home/user/gce_app; /usr/bin/celery ...
Alternatively, you might look at using a startup script from the GCE metadata to avoid needing to modify rc.local.
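If you stick with rc.local, the full line (placed before the exit 0) might look something like the sketch below, using && so celery only runs if the cd succeeded. The /usr/bin/celery path is a guess; check the real location with which celery:
cd /home/user/gce_app && /usr/bin/celery -A myapp.tasks --concurrency=1 \
    --loglevel=info worker > output.log 2> errors.log &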
Since you seem to be using upstart this might help you:
description "runs celery"
start on runlevel [2345]
stop on runlevel [!2345]
console log
env VENV='/srv/myvirtualenv'
env PROJECT='/srv/run/mydjangoproject'
exec su -s /bin/sh -c 'exec "$0" "$@"' www-data -- /usr/bin/env PATH=$VENV:$PATH $VENV/python $PROJECT/manage.py celeryd
respawn
respawn limit 10 5
My script (located in /etc/init.d) is creating a pid file ($PIDFILE), but there is no process running. My daemon script includes:
start-stop-daemon --start --quiet --pidfile $PIDFILE -m -b --startas $DAEMON --test > /dev/null || return 1
The script works fine when executing it manually.
You need to create startup links.
sudo update-rc.d SCRIPT_NAME defaults
then reboot. SCRIPT_NAME is the name of the script in /etc/init.d (without the path).
I was able to get it working, but I tried so many things that I don't know exactly what fixed it (probably an error in the script or config). However, I learned a lot and wanted to share, since I can't find much of the same in the internet abyss.
It seems Ubuntu (and many other distros based on Ubuntu, including Mint) has migrated to Upstart for job and service management. Upstart includes SysVinit compatibility (using /etc/init.d daemons) and can still use update-rc.d to manage daemons, so if you are familiar with that usage you can keep on using it. The Upstart method is to use a single .conf file in the /etc/init folder. My SCRIPT.conf file is very simple (I'm using a Python script):
start on filesystem or runlevel [2345]
stop on runlevel [016]
exec python /usr/share/python-support/SCRIPT/SCRIPT.py
This simple file completely replaces the standard script in /etc/init.d (with its case statement providing the [start|stop|restart|reload] functions and the pointer to /usr/bin/SCRIPT). You can see that it includes the runlevel control that would normally be found in the /etc/rc*.d files (thus eliminating several files).
I tried update-rc.d to create the necessary /etc/rc*.d/ files for my daemon. My daemon bash script is located in /etc/init.d and includes the start-stop-daemon command as in my original question. (That command also works fine from terminal.)
I had the /etc/rc*.d/ files, the bash script in /etc/init.d, and the /etc/init/SCRIPT.conf file in place during boot, and it seems that Upstart first looks at the .conf file for its direction: the SysVinit command service SCRIPT [start|stop|restart|reload] returns "Unknown instance", yet you can see the process running with ps -elf | grep SCRIPT_FILE.
One interesting thing to note is the forking of your daemon when using a .conf file. The script as written above only spawns one fork of the daemon. However, total independence from the original script is possible by using expect fork or expect daemon together with respawn (see the Upstart Cookbook for reference). Using these ensures that your daemon is restarted if it dies, even if it is killed with the kill command.
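For example, a respawning version of the simple .conf above might look like this (a sketch only; add expect fork or expect daemon only if your script really forks, otherwise Upstart will track the wrong PID):
start on filesystem or runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 10 5
exec python /usr/share/python-support/SCRIPT/SCRIPT.py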
I continued to test both my daemon and the boot process by using the sudo initctl reload-configuration command. This reloads the .conf files, after which you can exercise your daemon with sudo [start|stop|restart] SCRIPT. The results of the start, restart and stop commands are:
$ sudo start SCRIPT
SCRIPT start/running, process xxxx
$ sudo restart SCRIPT
SCRIPT start/running, process xxxx
$ sudo stop SCRIPT
SCRIPT stop/waiting
Also, there is a nice log in /var/log/upstart/SCRIPT.log that gives you useful information for your daemon during boot. Mine still has a very annoying bug that prevents root from displaying osd messages with notify-send from my daemon. My log file includes a gtk warning (I will open another question to solicit help).
Hope this helps others in developing their daemons.
I have the following in a .conf file in the /etc/init/ directory of my CentOS server:
#!upstart
description "shortnr server for fmc.io"
author "Felix Milea-Ciobanu"
start on startup
stop on shutdown
respawn
respawn limit 10 30
script
export HOME="/root"
exec /usr/local/bin/node /var/www/fmc.io/nodejs/app.js >> /var/www/fmc.io/logs/shortnr.upstart.log 2>&1
end script
pre-start script
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/www/fmc.io/logs/shortnr.upstart.log
end script
pre-stop script
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/www/fmc.io/logs/shortnr.upstart.log
end script
It's a pretty simple and straightforward Upstart script. I named this service shortnr, after the Node.js software that the script starts up.
At the command line, if I type start shortnr, I get something along the lines of shortnr start/running, process 28350.
However, I can't seem to access the Node.js server; if I do ps aux | grep shortnr at the command shell, nothing comes up.
If I do stop shortnr after running start, I get stop: Unknown instance:, meaning that the original service never started up.
The log file that I set up in the Upstart script looks something like this:
[2012-10-05T17:00:17.174Z] (sys) Starting
[2012-10-05T17:00:17.181Z] (sys) Starting
[2012-10-05T17:00:17.190Z] (sys) Starting
[2012-10-05T17:00:17.197Z] (sys) Starting
[2012-10-05T17:00:17.204Z] (sys) Starting
Basically the script was trying to start multiple times a second when I issued the start command, meaning that the service must be crashing on start and trying to respawn?
However, if I copy the command after exec and paste it in the shell prompt, the nodejs script starts up and runs properly.
So that means something must be wrong with my Upstart script.
If I try start/stop the service with the initctl command, I get the same results.
I'm running CentOS 6.3 and Upstart 0.6.5
Anyone have any idea what could be causing this or how to fix my script?
While I couldn't figure out the answer to my problem, I just ended up using forever instead: https://github.com/nodejitsu/forever-monitor
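For anyone landing here, forever's basic usage is only a couple of commands (shown with the app.js path from my .conf; adjust to your own layout):
$ npm install -g forever
$ forever start /var/www/fmc.io/nodejs/app.js
$ forever list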
I am also running into similar problems on CentOS 6.3, Upstart 0.6.5 and Node.js 0.10.5. I specifically upgraded Node so I could use the daemon module and be able to put the daemonized Node app under Upstart control.
Here's my /etc/init/job-worker.conf:
description "job-worker under Upstart/init control"
start on job-worker-start-event
stop on job-worker-stop-event
expect daemon
script
#setuid myuser
exec /root/BasicJobWorker/bin/basic-job-worker
#sleep 5
end script
respawn
respawn limit 10 5
And here's my basic-job-worker script:
#!/usr/bin/env node
// this code is run twice
// see implementation notes below
console.log(process.pid);
// after this point, we are a daemon
require('daemon')();
// different pid because we are now forked
// original parent has exited
console.log(process.pid);
var BasicJobWorker = require('../lib/basic-job-worker.js');
new BasicJobWorker().boot();
I have tried using "expect fork", "expect daemon" as well as no expect at all. In all cases the job is respawned too fast and it is eventually stopped.
I connect to the Linux server via PuTTY SSH. I tried to run my Node.js server as a background process like this:
$ node server.js &
However, after 2.5 hrs the terminal becomes inactive and the process dies. Is there any way I can keep the process alive even with the terminal disconnected?
Edit 1
Actually, I tried nohup, but as soon as I close the PuTTY SSH terminal or unplug my internet, the server process stops right away.
Is there anything I have to do in PuTTY?
Edit 2 (Feb 2012)
There is a Node.js module, forever. It will run a Node.js server as a daemon service.
nohup node server.js > /dev/null 2>&1 &
nohup means: do not terminate this process even when the stty is cut off.
> /dev/null means: stdout goes to /dev/null (which is a dummy device that does not record any output).
2>&1 means: stderr also goes to the stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g.: 2>/tmp/myLog
& at the end means: run this command as a background task.
Simple solution (if you are not interested in coming back to the process, just want it to keep running):
nohup node server.js &
There's also the jobs command to see an indexed list of those backgrounded processes. And you can kill a backgrounded process by running kill %1 or kill %2 with the number being the index of the process.
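For instance (job numbers and the exact output will vary):
$ nohup node server.js &
$ jobs
[1]+  Running    nohup node server.js &
$ kill %1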
Powerful solution (allows you to reconnect to the process if it is interactive):
screen
You can then detach by pressing Ctrl+a then d, and attach back by running screen -r.
Also consider the newer alternative to screen, tmux.
You really should try to use screen. It is a bit more complicated than just doing nohup long_running &, but once you understand screen you will never go back.
Start your screen session at first:
user#host:~$ screen
Run anything you want:
wget http://mirror.yandex.ru/centos/4.6/isos/i386/CentOS-4.6-i386-binDVD.iso
Press ctrl+A and then d. Done. Your session keeps going on in background.
You can list all sessions with screen -ls, and attach to one with the screen -r 20673.pts-0.srv command, where 20673.pts-0.srv is an entry from that list.
This is an old question, but it ranks high on Google. I almost can't believe the highest-voted answers, because running a node.js process inside a screen session, with &, or even with nohup -- all of them -- are just workarounds.
Especially the screen/tmux solution, which should really be considered an amateur solution. Screen and Tmux are not meant to keep processes running, but to multiplex terminal sessions. It's fine when you are running a script on your server and want to disconnect. But for a node.js server you don't want your process to be attached to a terminal session. This is too fragile. To keep things running you need to daemonize the process!
There are plenty of good tools to do that.
PM2: http://pm2.keymetrics.io/
# basic usage
$ npm install pm2 -g
$ pm2 start server.js
# you can even define how many processes you want in cluster mode:
$ pm2 start server.js -i 4
# you can start various processes, with complex startup settings
# using an ecosystem.json file (with env variables, custom args, etc):
$ pm2 start ecosystem.json
One big advantage I see in favor of PM2 is that it can generate the system startup script to make the process persist between restarts:
$ pm2 startup [platform]
Where platform can be ubuntu|centos|redhat|gentoo|systemd|darwin|amazon.
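For example, on a systemd-based machine the typical follow-up is to run the startup hook and then persist the current process list (the exact command pm2 prints back for you to run with sudo varies by version and system):
$ pm2 startup systemd
# run the command pm2 prints (with sudo), then save the process list:
$ pm2 save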
forever.js: https://github.com/foreverjs/forever
# basic usage
$ npm install forever -g
$ forever start app.js
# you can run from a json configuration as well, for
# more complex environments or multi-apps
$ forever start development.json
Init scripts:
I'm not going to go into detail about how to write an init script, because I'm not an expert in this subject and it'd be too long for this answer, but basically they are simple shell scripts triggered by OS events. You can read more about this here.
Docker:
Just run your server in a Docker container with the -d option and, voilà, you have a daemonized node.js server!
Here is a sample Dockerfile (from the official node.js guide):
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
Then build your image and run your container:
$ docker build -t <your username>/node-web-app .
$ docker run -p 49160:8080 -d <your username>/node-web-app
Always use the proper tool for the job. It'll save you a lot of headaches and extra hours!
Another solution is to disown the job:
$ nohup node server.js &
[1] 1711
$ disown -h %1
nohup will allow the program to continue even after the terminal dies. I have actually had situations where nohup prevents the SSH session from terminating correctly, so you should redirect input as well:
$ nohup node server.js </dev/null &
Depending on how nohup is configured, you may also need to redirect standard output and standard error to files.
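For example, something along these lines (the log file names are just placeholders):
$ nohup node server.js </dev/null >>server.log 2>>server.err &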
Nohup and screen offer great lightweight solutions for running Node.js in the background. The Node.js process manager (PM2) is a handy tool for deployment. Install it globally with npm on your system:
npm install pm2 -g
To run a Node.js app as a daemon:
pm2 start app.js
You can optionally link it to Keymetrics.io, a monitoring SaaS made by Unitech.
$ node server.js &
$ disown
disown removes the job from the shell's job table, so the shell will not send it SIGHUP when the terminal closes and the command keeps running in the background.
I have this function in my shell rc file, based on @Yoichi's answer:
nohup-template () {
[[ "$1" = "" ]] && echo "Example usage:\nnohup-template urxvtd" && return 0
nohup "$1" > /dev/null 2>&1 &
}
You can use it this way:
nohup-template "command you would execute here"
Have you read about the nohup command?
To run a command as a system service on Debian with SysV init:
Copy the skeleton script and adapt it to your needs; probably all you have to do is set some variables. Your script will inherit sensible defaults from /lib/init/init-d-script; if something does not fit your needs, override it in your script. If something goes wrong, you can see the details in the source of /lib/init/init-d-script. The mandatory variables are DAEMON and NAME. The script uses start-stop-daemon to run your command; in START_ARGS you can define additional start-stop-daemon parameters.
cp /etc/init.d/skeleton /etc/init.d/myservice
chmod +x /etc/init.d/myservice
nano /etc/init.d/myservice
/etc/init.d/myservice start
/etc/init.d/myservice stop
This is how I run some Python stuff for my MediaWiki wiki:
...
DESC="mediawiki articles converter"
DAEMON='/home/mss/pp/bin/nslave'
DAEMON_ARGS='--cachedir /home/mss/cache/'
NAME='nslave'
PIDFILE='/var/run/nslave.pid'
START_ARGS='--background --make-pidfile --remove-pidfile --chuid mss --chdir /home/mss/pp/bin'
export PATH="/home/mss/pp/bin:$PATH"
do_stop_cmd() {
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 \
$STOP_ARGS \
${PIDFILE:+--pidfile ${PIDFILE}} --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
rm -f $PIDFILE
return $RETVAL
}
Besides setting the variables, I had to override do_stop_cmd because Python replaces the executable, so the service did not stop properly.
Apart from the cool solutions above, I'd also mention the supervisord and monit tools, which let you start a process, monitor its presence, and restart it if it dies. With monit you can also run active checks, for example checking whether the process responds to an HTTP request.
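For example, a minimal supervisord program section might look like the sketch below (the paths, user and log file names are assumptions; it would go under /etc/supervisor/conf.d/ or wherever your supervisord include path points):
; minimal sketch: supervise a node server and restart it if it dies
[program:node-server]
command=node /srv/app/server.js
directory=/srv/app
user=node
autostart=true
autorestart=true
stdout_logfile=/var/log/node-server.out.log
stderr_logfile=/var/log/node-server.err.log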
For Ubuntu I use this:
(exec PROG_SH &> /dev/null &)
Try this for a simple solution:
cmd & exit