How to simultaneously deploy Node.js web app on multiple servers with Jenkins? - node.js

I'm going to deploy a Node.js mobile web application on two remote servers (Linux OS).
I'm using an SVN server to manage my project's source code.
To keep managing the app simple and clear, I decided to use Jenkins.
I'm new to Jenkins, so installing and configuring it was quite a difficult task.
But I couldn't find out how to set up Jenkins to deploy to both remote servers simultaneously.
Could you help me?

You should look into supervisor. It's language- and application-type agnostic; it just takes care of (re)starting applications.
So in your Jenkins build:
You update your code from SVN
You run your unit tests (definitely a good idea)
You either run an svn update on each host, or copy the current checkout to them (I'd recommend the copy, because there are many ways for SVN to fail, and it lets you embed SVN_REVISION in some .js file, for instance)
You execute fuser -k -n tcp $DAEMON_PORT on each host; this kills the currently running application listening on port $DAEMON_PORT (the one your Node.js app uses), and supervisor then restarts it (see the sketch after this list)
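A minimal sketch of such a Jenkins "Execute shell" step, assuming two hypothetical hosts (web1 and web2), a deploy user whose SSH key Jenkins already has, and supervisor managing the app on each host:
# Hypothetical host names, user, port and path; adjust to your environment.
DAEMON_PORT=3000
for HOST in web1 web2; do
  # copy the already-checked-out, tested workspace to the host
  rsync -az --delete ./ deploy@$HOST:/usr/local/share/dir_app/
  # kill whatever is listening on the app's port; supervisor restarts it
  ssh deploy@$HOST "fuser -k -n tcp $DAEMON_PORT || true"
done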
And best of all, it will automatically start your Node.js app at system startup (provided supervisor is correctly installed; apt-get install supervisor on Debian) and restart it in case of failure.
A supervisord subconfig for a Node.js app looks like this:
# /etc/supervisor/conf.d/my-node-app.conf
[program:my-node-app]
user = running-user
environment = NODE_ENV=production
directory = /usr/local/share/dir_app
command = node app.js
stderr_logfile = /var/log/supervisor/my-node-app-stderr.log
stdout_logfile = /var/log/supervisor/my-node-app-stdout.log
There are many more configuration parameters.
Note: there is also a Node.js package called supervisor; that's not the one I'm talking about, and I haven't tested it.
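After dropping that file into /etc/supervisor/conf.d/, the usual way to load it and check the result is (standard supervisorctl commands):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status my-node-app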

Since each host runs Linux, you need to SSH into your hosts and run commands to get the application updated:
Work out the application-update workflow in a shell script. In particular, you need to daemonize your Node app so that it isn't killed when the Jenkins job execution completes and its shell exits. Here's a nice article on how to do this: Running node.js Apps With Upstart; or you can look at pure Node.js tools like forever. Assume you end up with a script under /etc/init.d/myNodeApp.
SSH into the Linux host from Jenkins. You need to make sure the SSH private key file has been copied to /var/lib/jenkins/.ssh/id_rsa and is owned by the jenkins user.
Here's an example shell step in the Jenkins job configuration:
ssh <your application ip> "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart"
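To update both servers simultaneously from the same Jenkins shell step, you could run that command against each host in the background and wait for both to finish. A rough sketch, with placeholder host names:
for HOST in server1 server2; do
  ssh $HOST "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart" &
done
wait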

Related

automatically start node server on instance start in aws autoscaling by providing user data

I have a demo project in AWS, and I created an AMI for it so that I can use it for auto-scaling. Now I am looking for something I can put in the user data of my launch configuration that will start the server without my having to SSH in. I am trying the script below; let me know where my mistake is.
#!/bin/bash
cd demo
node server.js
When I launch a new instance with my AMI and just run those commands over SSH, it works absolutely fine; however, I want to start the server without going through SSH.
These are common issues one can face when running a Node application without a process manager on a remote server.
Suppose you use the above script: what if the Node application encounters an error? The application will simply stop. It's better to use a process manager, which will take care of such things, so you will not need to SSH in.
You can use pm2, which also has Slack integration, another interesting feature that helps with monitoring the process.
You can also set up a startup script.
Restarting PM2 with the processes you manage on server boot/reboot is critical. To solve this, just run this command to generate an active startup script:
Run these commands in the AMI, and pm2 will take care of the process on all instances (a user-data sketch follows below):
pm2 startup
#And to freeze a process list for automatic respawn:
pm2 save
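If you prefer to launch via user data instead of relying on pm2's saved process list, a minimal sketch of user data that starts the app under pm2 rather than bare node (the application path is an assumption; adjust it to wherever the code lives in your AMI):
#!/bin/bash
# pm2 daemonizes the app, so this script can exit safely,
# and pm2 restarts the app if it crashes.
cd /home/ec2-user/demo
pm2 start server.js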

How do I automatically start a node/express app (with pm2), having node installed using software collections (scl), on CentOS7

1. Summarize the problem
I would like for a node/express app.js to listen on a port 3000, on container startup.
I created a CentOS 7 Docker container, installed the software collections (SCL) repo, and then installed node.
I can now enable node with:
scl enable rh-nodejs10 bash, and so I did; I then installed express (globally) and pm2 (globally), and can successfully run a minimal Express app listening on port 3000 with commands run at the command line.
I put scl enable rh-nodejs10 bash in the .bash_profile of a user I created named www (because I do not want root running the web server).
In fact, I will be building a rootless container (buildah) next, so there will be no 'root' user at all, for security reasons.
Now on container startup I want to have the web server start automatically, and be able to get a response from: http://localhost:3000 (hello world).
The problem is that on container startup, node is not enabled for any user until a shell is invoked to enable it.
2. Provide background including what you've already tried
I have searched the web for a solution of using node, express, pm2 in conjunction with CentOS 7 software collections and have found no solution.
Please only reply if you have actually tried the solution you recommend and have it working; otherwise it most likely will not work.
systemd needs to:
1. enable node
2. run pm2 start app
I tried putting both in a shell script, but when you enable node, you are put into a sub-shell and cannot script any additional commands.
3. show some code
scl enable rh-nodejs10 bash
4. Describe expected and actual results including any error messages
I expect the node/express server to listen on port 3000 on container startup.
I have Node running on reboot on RHEL 7 by using the scl-utils/scl_source technique found here:
$ cat /etc/profile.d/enablenodejs.sh
#!/bin/bash
source scl_source enable rh-nodejs10
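Since /etc/profile.d only applies to login shells, one option for the container-startup case is an entrypoint script that sources scl_source itself and then keeps the app in the foreground under pm2. A rough sketch, assuming the app lives at /home/www/app and pm2 was installed globally inside the rh-nodejs10 collection:
#!/bin/bash
# Hypothetical entrypoint; paths are assumptions.
source scl_source enable rh-nodejs10
cd /home/www/app
exec pm2-runtime start app.js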

Executing node.js script remotely

I am setting up continuous integration using a Jenkins server for my Node.js application. For deployment I am using a PowerShell script, and for that I have installed the PowerShell plug-in. The script will need to perform the following tasks, in order.
# Step1
# Stop all the currently running services on web server. For that I am trying to
# execute maintenance.js remotely from the build server under node
node \\SharedWebServerFolder\Utilities\maintenance.js stopServices
# Step2
# Copies all the resources to server at \\SharedServerWebFolder
# Step3
# Start the services
node \\SharedWebServerFolder\Utilities\maintenance.js startServices
I have no problem executing Step2. My question is about Step1 and Step3.
Should I execute maintenance.js remotely from the build server? Is this even possible? (Assume I have installed Node.js on the build server.)
Should I have one more PowerShell script on the web server which executes maintenance.js locally? So basically the deployment script (from the build server) would execute a remote PowerShell script (on the web server), and that remote PowerShell script would execute maintenance.js locally. In this scenario I don't have to install Node.js on the build server.
What is recommended?
People generally use ssh to execute commands remotely. I think you want to execute a command from your build server on your production / staging server.
see linux execute command remotely
This is simple where the remote box is a Linux box.
If your remote machine is a Windows box, you can still run an SSH server on it, but it's more painful to set up. You might try Bitvise SSH Client.
Once your SSH setup is working correctly, you should be able to execute remote commands from your build server.
I have used the Jenkins SSH Plugin for deployment to my remote staging/production servers. I would recommend executing your maintenance.js on the web server machine instead of on your build server. You can just trigger the maintenance.js execution on the staging/production servers from the build server over SSH. I believe executing maintenance.js remotely from the build server is hard to achieve.
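For illustration, a rough sketch of such a Jenkins shell step, assuming the web server is reachable over SSH (the host name, user, and paths below are made up; the question uses Windows UNC paths, so this presumes an SSH server is available on the web server):
ssh deploy@webserver "node /opt/app/Utilities/maintenance.js stopServices"
# copy the new resources, e.g. with rsync or scp
rsync -az ./build/ deploy@webserver:/opt/app/
ssh deploy@webserver "node /opt/app/Utilities/maintenance.js startServices"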

Where to place node.js files on server?

I have just gotten a VPS to bring my first Node.js project online, but I am wondering: where do I place the Node files like app.js if I want the app to be accessible at http://www.mywebsite.com:3000?
Right now, to host a website, I am using WHM to create a cPanel account, which creates /home/cpanelusername, and my HTML/PHP files all go into /home/cpanelusername/public_html. Where do the Node.js files go? Or did I get this step wrong as well?
On my Mac where I developed the node app, I simply cd into the directory containing the node file and run node app.js
You have to execute the app.js file using the node binary, just like you do in local development. That means you should probably run it as a service, the details of which depend on your Linux distro. If it's not a service, then executing it over SSH will mean the app stops working once you log out of SSH.
For example, on Ubuntu Server (which I use) I have an Upstart script which automatically runs my Node.js app on system start and logs to /var/log. An example of the file, named /etc/init/myapp.js.conf, is:
description "myapp server"
author "Me"
# used to be: start on startup
# until we found some mounts weren't ready yet while booting:
start on started mountall
stop on shutdown
script
# We found $HOME is needed. Without it we ran into problems
export HOME="/root"
exec node /home/me/myapp/myapp.js >> /var/log/myapp.log 2>&1
end script
Replace names, etc. as necessary.
Edit to add: You can then start and stop your service by running:
sudo start myapp.js or sudo stop myapp.js

How do I run a Node.js application as its own process?

What is the best way to deploy Node.js?
I have a Dreamhost VPS (that's what they call a VM), and I have been able to install Node.js and set up a proxy. This works great as long as I keep the SSH connection that I started node with open.
2016 answer: nearly every Linux distribution comes with systemd, which means forever, monit, PM2, etc. are no longer necessary - your OS already handles these tasks.
Make a myapp.service file (replacing 'myapp' with your app's name, obviously):
[Unit]
Description=My app
[Service]
ExecStart=/var/www/myapp/app.js
Restart=always
User=nobody
# Note Debian/Ubuntu uses 'nogroup', RHEL/Fedora uses 'nobody'
Group=nogroup
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/var/www/myapp
[Install]
WantedBy=multi-user.target
Note if you're new to Unix: /var/www/myapp/app.js should have #!/usr/bin/env node on the very first line and have the executable mode turned on: chmod +x app.js.
Copy your service file into the /etc/systemd/system folder.
Tell systemd about the new service with systemctl daemon-reload.
Start it with systemctl start myapp.
Enable it to run on boot with systemctl enable myapp.
See logs with journalctl -u myapp
This is taken from How we deploy node apps on Linux, 2018 edition, which also includes commands to generate an AWS/DigitalOcean/Azure CloudConfig to build Linux/node servers (including the .service file).
Use Forever. It runs Node.js programs in separate processes and restarts them if any dies.
Usage:
forever start example.js to start a process.
forever list to see list of all processes started by forever
forever stop example.js to stop the process, or forever stop 0 to stop the process with index 0 (as shown by forever list).
I've written about my deployment method here: Deploying node.js apps
In short:
Use git post-receive hook
Jake for the build tool
Upstart as a service wrapper for node
Monit to monitor and restart applications if they go down
nginx to route requests to different applications on the same server
pm2 does the trick.
Features include monitoring, hot code reload, a built-in load balancer, automatic startup scripts, and process resurrect/dump.
You can use monit, forever, upstart or systemd to start your server.
You can use Varnish or HAProxy instead of Nginx (Nginx versions older than 1.3.13 do not support WebSockets).
As a quick and dirty solution you can use nohup node your_app.js & to prevent your app from terminating when your SSH session ends, but forever, monit and the other proposed solutions are better.
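If you do go the nohup route, a slightly more useful form also sends output to a log file (the log file name is arbitrary):
nohup node your_app.js >> /var/log/your_app.log 2>&1 &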
I made an Upstart script currently used for my apps:
description "YOUR APP NAME"
author "Capy - http://ecapy.com"
env LOG_FILE=/var/log/node/miapp.log
env APP_DIR=/var/node/miapp
env APP=app.js
env PID_NAME=miapp.pid
env USER=www-data
env GROUP=www-data
env POST_START_MESSAGE_TO_LOG="miapp HAS BEEN STARTED."
env NODE_BIN=/usr/local/bin/node
env PID_PATH=/var/opt/node/run
env SERVER_ENV="production"
######################################################
start on runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 99 5
pre-start script
mkdir -p $PID_PATH
mkdir -p /var/log/node
end script
script
export NODE_ENV=$SERVER_ENV
exec start-stop-daemon --start --chuid $USER:$GROUP --make-pidfile --pidfile $PID_PATH/$PID_NAME --chdir $APP_DIR --exec $NODE_BIN -- $APP >> $LOG_FILE 2>&1
end script
post-start script
echo $POST_START_MESSAGE_TO_LOG >> $LOG_FILE
end script
Customize everything before the ######### line, create a file at /etc/init/your-service.conf, and paste the contents there.
Then you can:
start your-service
stop your-service
restart your-service
status your-service
I've written a pretty comprehensive guide to deploying Node.js, with example files:
Tutorial: How to Deploy Node.js Applications, With Examples
It covers things like http-proxy, SSL and Socket.IO.
Here's a longer article on solving this problem with systemd: http://savanne.be/articles/deploying-node-js-with-systemd/
Some things to keep in mind:
Who will start your process monitoring? Forever is a great tool, but it needs a monitoring tool to keep itself running. That's a bit silly; why not just use your init system?
Can you adequately monitor your processes?
Are you running multiple backends? If so, do you have provisions in place to prevent any of them from bringing down the others in terms of resource usage?
Will the service be needed all the time? If not, consider socket activation (see the article).
All of these things are easily done with systemd.
If you have root access, you'd be better off setting up a daemon so that it runs safe and sound in the background. You can read how to do that for Debian and Ubuntu in the blog post Run Node.js as a Service on Ubuntu.
Forever will do the trick.
@Kevin: You should be able to kill processes fine. I would double-check the documentation a bit. If you can reproduce the error, it would be great to post it as an issue on GitHub.
Try this: http://www.technology-ebay.de/the-teams/mobile-de/blog/deploying-node-applications-with-capistrano-github-nginx-and-upstart.html
A great and detailed guide for deploying Node.js apps with Capistrano, Upstart and Nginx
As Box9 said, Forever is a good choice for production code. But it is also possible to keep a process going even if the SSH connection is closed from the client.
While not necessarily a good idea for production, this is very handy when in the middle of long debug sessions, or to follow the console output of lengthy processes, or whenever is useful to disconnect your SSH connection, but keep the terminal alive in the server to reconnect later (like starting the Node.js application at home and reconnecting to the console later at work to check how things are going).
Assuming that your server is a *nix box, you can use the screen command from the shell to keep the process running even if the client SSH connection is closed. You can download/install screen from the web if it is not already installed (look for a package for your distribution if Linux, or use MacPorts if OS X).
It works as follows:
When you first open the SSH connection, type 'screen' - this will start your screen session.
Start working as normal (i.e. start your Node.js application)
When you are done, close your terminal. Your server process(es) will continue running.
To reconnect to your console, ssh back to the server, login, and enter 'screen -r' to reconnect. Your old console context will pop back ready for you to resume using it.
To exit screen, while connected to the server, type 'exit' on the console prompt - that will drop you onto the regular shell.
You can have multiple screen sessions running concurrently like this if you need, and you can connect to any of it from any client. Read the documentation online for all the options.
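For example, using a named session (the name nodeapp is arbitrary):
screen -S nodeapp    # start a named screen session
node app.js          # run the app inside it
# press Ctrl-a then d to detach, leaving the app running
screen -r nodeapp    # reattach later from any SSH login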
Forever is a good option for keeping apps running (and it's npm installable as a module which is nice).
But for more serious 'deployment' -- things like remote management of deploying, restarting, running commands etc -- I would use capistrano with the node extension.
https://github.com/loopj/capistrano-node-deploy
https://paastor.com is a relatively new service that does the deploy for you, to a VPS or other server. There is a CLI to push code. Paastor has a free tier, at least it did at the time of posting this.
In your case you may use the upstart daemon. For a complete deployment solution, I may suggest capistrano. Two useful guides are How to setup Node.js env and How to deploy via capistrano + upstart.
Try node-deploy-server. It is a complex toolset for deploying an application onto your private servers. It is written in Node.js and uses npm for installation.
