I'm following a PluralSight tutorial on Vagrant and Hubot Slack setup.
The only difference is that I'm using hubot-slack.
If I start Hubot by invoking the hubot script from the terminal, everything works fine: the bot connects and responds to commands.
Unfortunately, when Hubot is started as a service by Upstart, I get this logged to /var/log/upstart/myhubot.log: `Cannot load adapter slack - Error: Cannot find module 'hubot-slack'`
My /bin/hubot file looks like this (this works just fine when executed from the CLI):
#!/bin/sh
set -e
npm install
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:$PATH"
export HUBOT_SLACK_TOKEN={}
exec node_modules/.bin/hubot --name "hubot" --adapter slack "$@"
My .conf file that's executed as a service looks like this (this is the one that can't find the module):
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
setuid vagrant
env HOME="/home/vagrant"
chdir /vagrant/my-awesome-hubot
console log
script
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
export HUBOT_SLACK_TOKEN={}
echo "DEBUG: `set`" >> /tmp/myhubot.log
exec node_modules/.bin/hubot --name "hubot" --adapter slack
end script
respawn
Keep in mind that the slack token is excluded from these scripts.
Debugging reveals that chdir does the correct thing: the pwd is exactly the same as when I execute the script manually.
I've tried removing the entire Node.js project and regenerating it with Yeoman from scratch, and also tried installing hubot-slack both globally and locally, but to no avail.
In the case of the .conf file there is no npm install, but in the provision.sh file I cd (as the vagrant user) into the root directory and run npm install, and only then restart the service. I also make sure to clean everything up between rounds of testing before I run vagrant provision:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart
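For completeness, a few more debug lines in the same vein as the set dump above can confirm what the service environment actually resolves (a sketch reusing /tmp/myhubot.log; these would go in the script stanza of the .conf):
# temporary debug lines for the script stanza
echo "DEBUG pwd: $(pwd)" >> /tmp/myhubot.log
echo "DEBUG node: $(command -v node)" >> /tmp/myhubot.log
echo "DEBUG hubot-slack: $(ls node_modules | grep hubot-slack)" >> /tmp/myhubot.log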
Do you have any suggestions?
I've just spent the day working through the same issue as this unanswered question, so I thought I would update with my solution.
The current Hubot-generated app is started from the CLI with HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack, run from the folder where Hubot was generated; it therefore utilises the default bin/hubot script.
Your conf file needs to pick this up, and should therefore run the following:
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
script
chdir /vagrant/my-awesome-hubot
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack --name "hubot" >> /tmp/myhubot.log
end script
respawn
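After copying the conf into /etc/init, something like the following should pick it up and show the output (a sketch; the service name is assumed to match the conf filename):
sudo initctl reload-configuration
sudo service myhubot restart
tail -f /tmp/myhubot.log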
Related
I'm trying to write a very basic upstart script that runs a node.js script when I enter 'service myscript start'.
Upstart:
description "my script"
author "barbra"
setuid ubuntu
env NODE=/usr/bin/nodejs
env NODE_PATH=/root/
script
cd $NODE_PATH
$NODE app.js
end script
However, upstart isn't recognizing this as a service and generates this error:
"Unit myscript.service not found"
Am I missing anything in my upstart script?
UPDATE:
I've tried to check my upstart version using 'initctl version' and it replies:
"initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused"
I've installed an npm package/script in a jail on FreeNAS 9.10 (FreeBSD-based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to auto-start when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically all I need to do is run "npm start" in the correct directory on startup. How do I do that?
Thanks.
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
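A minimal sketch of the rc.local route (the app path and npm location are placeholders for wherever things live inside your jail):
#!/bin/sh
# /etc/rc.local inside the jail: rc(8) runs this at the end of boot
cd /usr/local/www/myapp && /usr/local/bin/npm start &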
I don't know about npm start, but for node.js I made this rc script:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME, and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
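Once the placeholders are filled in, installing and starting it looks roughly like this (as root, assuming the script was saved as SERVICENAME in the current directory):
install -m 555 SERVICENAME /usr/local/etc/rc.d/SERVICENAME
sysrc SERVICENAME_enable="YES"
service SERVICENAME start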
Working on Ubuntu Server 14.04.
I have an upstart .conf file in /etc/init for starting my node server. I am using forever. Here is what my script looks like:
start on filesystem or runlevel [2345]
expect fork
setuid myUserId
env HOME=/home/myUserId/
env NODE_BIN_DIR=/usr/bin
env NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascript
script
PATH=$NODE_BIN_DIR:$NODE_PATH:$PATH
echo $PATH
exec forever start -o /home/myUserId/nodeServ/lServer/logs/out.log /home/myUserId/nodeServ/lServer/server.js 1337
end script
But I keep getting this error
Error: SQLITE_CANTOPEN: unable to open database file
error: Forever detected script exited with code: 8
If I run the script from the command line exactly as it is in the conf file, it works just fine, no problems. So I think it is a permissions issue. I have set read/write/execute permissions on the database directory and the database itself, and still I am unable to read from the file.
I have tried so many different things and I cannot figure out why this is happening.
UPDATE: This problem appears not to be isolated to upstart. I tried starting forever from a shell script as well and got the same errors.
I resolved my issue via a workaround: not using forever, and starting node directly from the upstart file (allowing respawn). No issues. This appears to be either a sqlite3 issue or a forever issue.
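A sketch of that direct-node conf, assuming Upstart 1.4+ for the setuid/chdir stanzas and reusing the paths from the question:
description "lServer without forever"
start on filesystem or runlevel [2345]
stop on runlevel [016]
setuid myUserId
chdir /home/myUserId/nodeServ/lServer
respawn
# node writes straight to the old forever log; upstart respawns it on crash
exec /usr/bin/node server.js 1337 >> logs/out.log 2>&1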
Run sudo chown www-data. . (i.e., give ownership to the www-data user and its default group) in the database file's directory.
Another solution is to check whether the file exists or not, e.g.:
var fs = require("fs");
var exists = fs.existsSync(dbfilepath);
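Before changing ownership, it can also help to confirm the permissions theory by probing the directory as the service user (the user and path here are reused from the question as placeholders; note that sqlite needs write access to the directory itself in order to create its journal file):
sudo -u myUserId ls -ld /home/myUserId/nodeServ/lServer
sudo -u myUserId touch /home/myUserId/nodeServ/lServer/probe && echo "directory writable"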
I am using an Amazon EC2 instance with Ubuntu to host my node.js application. I have already made all the configuration, and it works well when I type:
nodemon ./bin/www
./bin/www is the file that creates the server.
Now I am trying to set up upstart. I followed a tutorial; this is my configuration file:
Path: /etc/init/photogrid.conf
Contents:
description "Photogrid"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
env NODE_ENV=production
exec node /home/ubuntu/photogrid/bin/www >> /var/log/photogrid.log 2>&1
But when I try to access the site, it shows:
Cannot GET /
I followed a tutorial, and the only difference between my configuration file and the tutorial's is this part:
Original:
exec node /home/ubuntu/photogrid/app.js >> /var/log/photogrid.log 2>&1
My one:
exec node /home/ubuntu/photogrid/bin/www >> /var/log/photogrid.log 2>&1
Start with upstart: (screenshot omitted; the site shows the error)
Start with nodemon bin/www: (screenshot omitted; the site works)
In my logs I see the following when I try to access the home '/':
GET / 404 12.036 ms - 13
It seems that you need to switch to the correct directory before launching exec. Maybe this will resolve your error:
description "Photogrid"
start on filesystem and started networking
stop on shutdown
respawn
respawn limit 99 5
env NODE_ENV=production
script
export HOME="/home/ubuntu/photogrid"
cd $HOME
exec node /home/ubuntu/photogrid/bin/www >> /var/log/photogrid.log 2>&1
end script
Try adding chdir /home/ubuntu/photogrid to your upstart config. Also, interactively in a terminal try: NODE_ENV=production nodemon ./bin/www. Perhaps you are using app.configure where you shouldn't be?
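Putting both suggestions together, a sketch of the conf using the chdir stanza (available in Upstart 1.4+, which Ubuntu 14.04 ships):
description "Photogrid"
start on filesystem and started networking
stop on shutdown
respawn
respawn limit 99 5
env NODE_ENV=production
chdir /home/ubuntu/photogrid
exec node bin/www >> /var/log/photogrid.log 2>&1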
After updating StrongLoop to v2.10, slc stopped writing logs.
I also couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
I assume you were writing the pid to a file so that you could use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit to this approach is Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
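A quick way to find those full paths before hard-coding them into the job (the output paths here are just examples):
command -v slc    # e.g. /usr/local/bin/slc
command -v node   # e.g. /usr/local/bin/node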
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run.
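The -- follows the usual CLI convention: it marks the end of sl-svc-install's own options, and everything after it is taken as the command the service should run (here, slc run --cluster=CPUs .).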