Nodejs/Strongloop: working upstart config example - node.js

After updating StrongLoop to v2.10, slc stopped writing logs.
I also couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?

Were you writing the pid to a file so that you can send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file, so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit of this approach is that Upstart can log errors that slc run reports even if slc run crashes while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run.
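If the installer creates an Upstart job named app (an assumption based on the --name app flag above), you would then manage it with the usual Upstart commands:
$ sudo start app
$ sudo status app
$ sudo stop app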

Related

npm adds whitespace when setting env variable in package.json

I have a pre-written package.json file for an app which I need to modify. More specifically, I want to change the NODE_PORT environment variable through the package.json file and I'm working on a Windows machine.
In the package.json I have several scripts that I run through npm when I'd like to spin up an instance of the app.
For example:
set NODE_PORT=80&& set NODE_ENV=test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/access.log -e ./logs/err.log --time --name Test
This script for example works fine.
However, when I'm trying to set the NODE_PORT variable to 8080 (that's the port I need) like so:
set NODE_PORT=8080&& set NODE_ENV=parallel_test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
a whitespace character gets appended to the end of the variable.
I verified this by printing out the number of chars of process.env.NODE_PORT in the log file, which prints 5. Moreover, the Google login for the app crashes because the app's redirect link doesn't match the one in the Google Cloud Platform. That is:
app: http://localhost:8080 /auth/check-google vs. Google Cloud Platform: http://localhost:8080/auth/check-google
Any idea why this is happening?
I faced a similar issue recently and handled it with .trimEnd() while adding variables with dotenv. But I think using cross-env can solve your problem.
Most Windows command prompts will choke when you set environment variables with NODE_ENV=production like that. (The exception is Bash on Windows, which uses native Bash.) Similarly, there's a difference in how Windows and POSIX commands utilize environment variables. With POSIX, you use $ENV_VAR; on Windows you use %ENV_VAR%.
Add this inside your script: "cross-env NODE_PORT=8080 ..."
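A minimal sketch of that change in package.json, based on the script from the question (the script name is illustrative, and the pm2 install pm2-logrotate step is left out since only the pm2 start command needs the variables):
{
  "scripts": {
    "start:parallel": "cross-env NODE_PORT=8080 NODE_ENV=parallel_test pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test"
  }
}
cross-env needs to be installed first (npm install --save-dev cross-env). Because cross-env parses the assignments itself rather than delegating to cmd's set, no trailing whitespace sneaks into the values, and the same script also works on POSIX shells.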

How to use pm2 with a nodejs app that uses readline for taking command line input?

I have a Node.js app that uses Node's native readline module to take command-line input.
When launching the app with pm2, the command-line input is unavailable.
Any ideas how to solve this issue? Other than using systemd and creating an init script myself?
Use pm2 to attach to your process and you will see readline, clearline and cursorTo working as expected.
First get your process id with:
$ pm2 id {your-process-name}
[ 7 ]
Let's say it's 7:
$ pm2 attach 7
If you check the pm2 website, they clearly describe it as: "Advanced, production process manager for Node.js." So using it in this context is unnecessary; all pm2 does is start your node process and let you manage it. The simple way is to use command-line args when starting the process.
For example, I use commander for this purpose. It manages all my command-line arguments (see its usage docs), and with pm2 I use it like the following:
pm2 start server.js --name production -- --env dev -p 3458
Notice the -- before --env; it is used to separate pm2's arguments from the arguments you want to supply to your process.
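As a minimal sketch, server.js might consume those flags with commander like this (the flag names are taken from the pm2 command above; the defaults are assumptions):
// server.js (sketch)
const { program } = require('commander');

program
  .option('--env <name>', 'environment to run in', 'dev')
  .option('-p, --port <number>', 'port to listen on', '3458');

program.parse(process.argv);
const opts = program.opts();

console.log(`starting in ${opts.env} mode on port ${opts.port}`);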
P.S.
PM2 has more complex usage than this in terms of process management; I use it myself for production-level deployment. But if you want to take input from the user every time they start your app, then you should stick with using the node command only.

hubot-slack "Cannot find module" - from upstart only

I'm following a tutorial on PluralSight regarding vagrant and hubot slack setup.
The only difference is that I'm using hubot-slack.
If I start the hubot by invoking hubot script from terminal - everything works fine - the bot connects and responds to commands.
Unfortunately, when hubot is started as a service by upstart, I get this logged to /var/log/upstart/myhubot.log: `Cannot load adapter slack - Error: Cannot find module 'hubot-slack'`
my /bin/hubot file looks like this (this works just fine when executed from cli):
#!/bin/sh
set -e
npm install
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:$PATH"
export HUBOT_SLACK_TOKEN={}
exec node_modules/.bin/hubot --name "hubot" --adapter slack "$@"
my .conf file that's executed as a service looks like this (can't find module):
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
setuid vagrant
env HOME="/home/vagrant"
chdir /vagrant/my-awesome-hubot
console log
script
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
export HUBOT_SLACK_TOKEN={}
echo "DEBUG: `set`" >> /tmp/myhubot.log
exec node_modules/.bin/hubot --name "hubot" --adapter slack
end script
respawn
Keep in mind that the slack token is excluded from these scripts.
Debug reveals that chdir does the correct thing and the pwd is exactly the same as when I execute the script manually.
I've tried removing the entire nodejs project and regenerating it from scratch with yeoman, and also tried installing hubot-slack both globally and locally, but to no avail.
The .conf file does no npm install, but in provision.sh I cd (as the vagrant user) into the project directory, run npm install, and only then restart the service. I also make sure to clean everything up before another round of testing with vagrant provision:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart
Do you have any suggestions?
I've just spent the day working through the same issue as this unanswered question so thought I would update with my solution.
The currently generated hubot app is started from the cli with HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack, run from the folder where hubot was generated, so it utilises the default bin/hubot script.
Your conf file needs to pick this up and should therefore run the following:
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
script
chdir /vagrant/my-awesome-hubot
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack --name "hubot" >> /tmp/myhubot.log
end script
respawn
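With the file in place at /etc/init/myhubot.conf, reload Upstart's configuration, start the job, and watch the log (standard Upstart tooling):
$ sudo initctl reload-configuration
$ sudo start myhubot
$ tail -f /tmp/myhubot.log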

How to set up upstart correctly for a node.js app

I am using an Amazon EC2 instance with Ubuntu to host my node.js application. I already made all the configurations, and it works fine when I type:
nodemon ./bin/www
./bin/www is the file that creates the server.
Now I am trying to set up upstart. I followed a tutorial; this is my configuration file:
The path is /etc/init/photogrid.conf, and inside it:
description "Photogrid"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
env NODE_ENV=production
exec node /home/ubuntu/photogrid/bin/www >> /var/log/photogrid.log 2>&1
But when I try to access the site, it shows:
Cannot GET /
I followed a tutorial, and the only difference between my configuration file and the tutorial's is this part:
Original:
exec node /home/ubuntu/photogrid/app.js >> /var/log/photogrid.log 2>&1
Mine:
exec node /home/ubuntu/photogrid/bin/www >> /var/log/photogrid.log 2>&1
(Screenshots comparing a start with upstart and a start with nodemon bin/www are omitted.)
In my logs I see the following when I try to access the home '/':
GET / 404 12.036 ms - 13
It seems that you need to switch to the correct directory before launching exec. Maybe this will resolve your error:
description "Photogrid"
start on filesystem and started networking
stop on shutdown
respawn
respawn limit 99 5
env NODE_ENV=production
script
export HOME="/home/ubuntu/photogrid"
cd $HOME
exec node /home/ubuntu/photogrid/bin/www >> /var/log/photogrid.log 2>&1
end script
Try adding chdir /home/ubuntu/photogrid to your upstart config. Also, interactively in a terminal try: NODE_ENV=production nodemon ./bin/www. Perhaps you are using app.configure where you shouldn't be?
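For reference, a variant of the job using the chdir stanza might look like this (a sketch, assuming Upstart 1.4+, where chdir is available):
description "Photogrid"
start on filesystem and started networking
stop on shutdown
respawn
respawn limit 99 5
env NODE_ENV=production
chdir /home/ubuntu/photogrid
exec node bin/www >> /var/log/photogrid.log 2>&1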

Cannot run nodejs app and mongo within a docker container

I'm setting up a container with the following Dockerfile
# Start with project/baseline (an image with mongo / nodejs / sailsjs)
# (Dockerfile comments must sit on their own line, not after an instruction)
FROM project/baseline
# Create the folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
# src contains a list of folders, each one being a sails app
ADD src/ /var/project/
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs.
It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project

# Start mongodb
function start_mongo {
  mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running as a daemon
}

# Start services
function start {
  for service in $(ls "$PROJECT_PATH"); do
    cd "$PROJECT_PATH/$service"
    npm start # runs sails lift on each service
  done
}

# start mongodb
start_mongo
# start the web applications defined in /var/project
start
Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I get the following output:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started while the container stays in a running state?
UPDATE
I now use this supervisord.conf file:
[supervisord]
nodaemon=false
[program:mongodb]
command=/usr/bin/mongod
[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"
[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
It is invoked in the Dockerfile like this:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, and supervisord does not wait that long, the services are not started. Any idea how to solve that?
By the way, is it actually good practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container runs. I know this is not clean (let's say it's temporary until I fix the problem I have with supervisor), but I'm doing the following:
run nohup mongod &
wait 60 sec
run my node (forever) processes
The thing is, the container exits right after the forever processes are started... how can it be kept active?
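A sketch of that temporary service.sh (assuming the forever CLI), with a blocking command at the end so the container's main process does not exit:
#!/bin/bash
# start mongo in the background
nohup mongod --logpath /var/log/mongodb.log &
# crude: give mongo time to come up
sleep 60
# start each service with forever (forever daemonizes the node processes)
for service in /var/project/*/; do
  cd "$service" && forever start app.js
done
# forever returns immediately, so block here to keep the container running
tail -f /dev/null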
If you want to cleanly start multiple services inside a container, one option is to use a process supervisor. One such approach is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
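For completeness, a supervisord variant of the questioner's config that keeps the container alive might look like this (a sketch; nodaemon=true is the key change, and the priority/startsecs values are assumptions):
[supervisord]
; run in the foreground so the container's main process stays alive
nodaemon=true

[program:mongodb]
command=/usr/bin/mongod
priority=1

[program:service1]
directory=/var/project/service1
command=node app.js
priority=2
autorestart=true
startsecs=5
Note that priority only orders the start sequence; supervisord has no real dependency handling, so autorestart is what lets the node processes retry until mongod is accepting connections.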
