Update my node.js code in multiple instances - node.js

I have an Elastic Load Balancer in AWS. My Node.js code is deployed on 3 instances and I'm using pm2 to update my code, but I have to do it manually this way:
Connect by SSH to each machine
Git pull on each machine
pm2 reload all on each machine
How can I update the code on ALL machines when I push a new commit to master or another branch (like a production branch)?
Thanks.

You can just write a script, for example in bash, to solve this:
# This will run your local script update.sh on the remote
ssh serverIp1 "bash -s" < ./update.sh
Then in your local update.sh you can add the code to git pull and reload:
# This code will run on the remote host
git pull
# Reload the app with pm2, as in the manual steps
pm2 reload all
# Other commands to run on the remote host
You can also have a script that does all of this for all your machines:
ssh serverIp1 "bash -s" < ./update.sh
ssh serverIp2 "bash -s" < ./update.sh
ssh serverIp3 "bash -s" < ./update.sh
or even better:
for ip in serverIp1 serverIp2 serverIp3; do
  ssh "$ip" "bash -s" < ./update.sh
done

An alternative is Elastic Beanstalk, especially if you are using a "pure" Node solution (not a lot of extra services on the instances). With Beanstalk, you supply a git ref or ZIP file of your project, and it handles the deployment (starting up new instances, health checks, getting them on the load balancer, removing old instances, etc.). In some ways it is an automated-deployment version of what you have now, because you will still have EC2 instances, a load balancer, etc.
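If you go that route, deployments can be driven from the EB CLI. A minimal sketch, assuming the EB CLI is installed and you have already created the application and environment (the names my-node-app and my-node-env are placeholders, not from the question):
# One-time setup in the project directory
eb init my-node-app --platform node.js --region us-east-1
# Package the current commit and roll it out to the environment
eb deploy my-node-env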

Use a tool like Jenkins (self-hosted) or Travis CI to run your builds and deployments. Many alternatives are available; Jenkins and Travis are just two of the most popular.
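For example, the deploy step of a Jenkins or Travis job could simply run the SSH loop from the answer above once the build and tests pass (the server IPs and update.sh are the same placeholders as before):
#!/bin/bash
# Hypothetical CI deploy step: fan the update out to every instance behind the ELB
set -e
for ip in serverIp1 serverIp2 serverIp3; do
  ssh "$ip" "bash -s" < ./update.sh   # git pull + pm2 reload all on the remote host
done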

OK, thanks for your answers, but I think the best option for me is AWS CodeDeploy.
I don't know why I did not find this before asking the question...
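For reference, once a CodeDeploy application and deployment group exist, a new revision can be kicked off from the AWS CLI. A rough sketch only; the application, group, bucket and key names below are placeholders:
# Create a deployment from a revision already uploaded to S3
aws deploy create-deployment \
  --application-name my-node-app \
  --deployment-group-name production \
  --s3-location bucket=my-deploy-bucket,key=my-node-app.zip,bundleType=zip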

Related

Running SSH script command from Jenkins using SSH

To deploy an application on a Linux Ubuntu server I have a bunch of SSH commands that I currently run using PuTTY. The server has a local account, serviceaccount1. In PuTTY I connect to the server using serviceaccount1 and execute the following commands one by one:
cd /home/serviceaccount1/cr-ml
script /dev/null
screen -S data_and_status
cd cr-ml/notebooks
source activate crml
unset XDG_RUNTIME_DIR
jupyter kernelgateway --api='kernel_gateway.notebook_http' --seed_uri='data_and_status_api.ipynb' --port 8894 --ip 0.0.0.0
...
...
and so on
Now I want to automate this using Jenkins. I installed the SSH plugin and configured a credential using SSH username serviceaccount1 with a private key.
Then I created a new Jenkins project, added a build step "Execute shell scripts on remote host using ssh", and added all the above commands.
When I build the Jenkins project, it gets stuck executing the 2nd command, script /dev/null; the console output just hangs at that point.
To me, it seems the culprit is the screen -S data_and_status command. Once you start a screen, I don't think you would be able to execute the subsequent commands over the SSH connection.
Alternatively, you can try using a tool like Ansible to run a bunch of commands against a remote server.
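If you still want the notebook gateway to keep running inside screen, one possible workaround is to start the session detached with -dm so the SSH step can return immediately. This is an untested sketch: the host name is a placeholder, while the paths, session name and command come from the question:
# Start a detached screen session on the remote host and return control to Jenkins
ssh serviceaccount1@yourserver "cd /home/serviceaccount1/cr-ml && \
  screen -dmS data_and_status bash -c 'cd cr-ml/notebooks && source activate crml && \
  unset XDG_RUNTIME_DIR && jupyter kernelgateway --api=kernel_gateway.notebook_http \
  --seed_uri=data_and_status_api.ipynb --port 8894 --ip 0.0.0.0'"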

How to wait until services are ready

I have been setting up a Jenkins pipeline using Docker images. Now I need to run various services like MySQL, Redis, Memcached, Beanstalkd and Elasticsearch. To make the job wait until MySQL is ready, I am using the following command:
sh "while ! mysqladmin ping -u root -h mysqlhost ; do sleep 1; done"
sh 'echo MySQL server is up and running'
Here mysqlhost is the hostname I have given the container. Similarly, I need to check and wait for Redis, Memcached, Beanstalkd and Elasticsearch, but pinging these services does not work the way it does for MySQL. How can I implement this?
The Docker docs mention this script to manage container readiness checks: https://github.com/vishnubob/wait-for-it
I also use this one which is compatible with Alpine:
https://github.com/eficode/wait-for
You can do a curl to these services to check whether they are alive or not.
For Redis you can also use the PING command: https://redis.io/commands/ping
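Putting those suggestions together, the other services can be polled the same way as MySQL. A hedged sketch only: the host names are placeholders, the ports are the usual defaults, and the nc checks for Memcached and Beanstalkd are my assumption rather than something from the answers above:
sh 'while ! redis-cli -h redishost ping | grep -q PONG; do sleep 1; done'        // Redis
sh 'while ! curl -sf http://elastichost:9200/_cluster/health; do sleep 1; done'  // Elasticsearch
sh 'while ! nc -z memcachedhost 11211; do sleep 1; done'                         // Memcached (default port)
sh 'while ! nc -z beanstalkdhost 11300; do sleep 1; done'                        // Beanstalkd (default port)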

Download version files from app engine

Is there any way to download a file from a Google managed VM's Docker container?
We lost one that is in the production version and I want to download it to my computer, but I can't find the app path.
It should be possible.
First, determine the GCE instance that runs your version. The name of the version should be part of the instance name. If your version has multiple instances, you may have to try all of them (or if your file was part of the application, any of them may work).
From the Cloud console, you can switch it from "Google managed" to self-managed.
Next, use gcloud compute ssh <instance name> to ssh to the instance.
Next, run docker ps to find the container running your application code. You should see a few side-car containers like nginx, but if you look through the names of the containers you should see one for your application.
Finally, you can run docker exec -it <container id> bash to get a shell inside the container. Or instead of bash, perhaps run a cat command or whatever else you need to do to recover your file.
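If you just need the file itself, docker cp plus gcloud compute scp may be simpler than working inside a shell. A rough sketch: the instance name, container id and file path are placeholders, and gcloud compute scp assumes a reasonably recent gcloud SDK (older versions used gcloud compute copy-files):
# On the instance: copy the file out of the container onto the VM's filesystem
gcloud compute ssh my-version-instance
docker ps                                             # note your app container's id
docker cp <container id>:/app/path/to/lost-file.js /tmp/lost-file.js
exit
# Back on your workstation: pull the file down from the VM
gcloud compute scp my-version-instance:/tmp/lost-file.js .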

Running commands upon creation of instances using AWS Elastic Beanstalk

I've been looking at various methods to run commands upon creation of EC2 instances using Elastic Beanstalk on AWS. I've been given different methods to do this through AWS Tech Support, including life cycle hooks, custom AMIs, and .ebextensions. I've been having issues getting the first 2 methods (life cycle hooks and custom AMIs) to work with EB.
I'm currently using .ebextensions to run commands upon deploy, but not sure if there's a way to run a set of commands upon creation only instead of every time I deploy code. For instance, I have a file .ebextensions/03-commands.config that contains the following code:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
However, I only want this code to run upon instance creation, not every time I deploy, as it currently does. Does anyone know a way to accomplish this?
Thanks in advance!
I would recommend creating an idempotent script in your application that leaves a marker file on the instance in some location, say /var/myapp-state/marker, using something like mkdir -p /var/myapp-state/; touch /var/myapp-state/marker on successful execution. That way your script can check whether the marker file exists and turn itself into a no-op.
Then you can call the script from container_commands: the first successful execution creates the marker file, and every subsequent execution is a no-op.
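A minimal sketch of such a script, using the marker path from this answer and the npm command from the question (the script name first-boot.sh is just a placeholder):
#!/bin/bash
# first-boot.sh - one-time setup guarded by a marker file (idempotent)
set -e
MARKER=/var/myapp-state/marker
if [ -f "$MARKER" ]; then
  exit 0                          # already ran on this instance: do nothing
fi
npm install -g -f npm@latest      # the one-time work from the question
mkdir -p "$(dirname "$MARKER")"
touch "$MARKER"                   # future deploys will skip the work above
The container command would then call this script instead of running npm directly.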
Create a custom AMI. This way you can set up your instances however you want, and they will launch faster.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html
As I can see from your question, you're using container_commands, which means you are using Elastic Beanstalk with Docker, right? In this case I would recommend reading: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html.
The idea is that you can create your own Dockerfile, where you specify all the commands you need to build a Docker container, for example to install all dependencies.
I would recommend using .ebextensions for Elastic Beanstalk customization and configuration, for example to specify the ELB or RDS configuration. In the Dockerfile it makes sense to specify all the commands you need to build a container for your application, including setup of the web server, dependencies, etc.
With this approach, each time you deploy, Elastic Beanstalk builds the container and runs a Docker instance with the deployed source code.
There is a simple option, leader_only: true, that you can use with current AWS Elastic Beanstalk configurations. You simply need to add it under the command, like this:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
    leader_only: true
Here is the relevant AWS documentation:
AWS Elastic Beanstalk container command option

How to simultaneously deploy Node.js web app on multiple servers with Jenkins?

I'm going to deploy a Node.js mobile web application on two remote servers (Linux OS).
I'm using SVN server to manage my project source code.
To simply and clearly manage the app, I decided to use Jenkins.
I'm new to Jenkins, so installing and configuring it was quite a difficult task.
But I couldn't find out how to set up Jenkins to deploy to both remote servers simultaneously.
Could you help me?
You should look into supervisor. It's language and application-type agnostic; it just takes care of (re)starting applications.
So in your Jenkins build:
You update your code from SVN
You run your unit tests (definitely a good idea)
You either launch an svn update on each host or copy the current content to them (I'd recommend copying, because there are many ways for SVN to fail, and it lets you include SVN_REVISION in some .js file, for instance)
On each host you execute fuser -k -n tcp $DAEMON_PORT; this kills the application currently listening on port $DAEMON_PORT (the one your node.js app uses), and supervisor brings it back up (see the sketch after this list)
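A hedged sketch of that shell step; the host names and port are placeholders, while the user and directory match the supervisor config shown below:
#!/bin/bash
# Hypothetical Jenkins shell step: copy the tested build and bounce the app on each host
set -e
DAEMON_PORT=3000   # placeholder: the port your node.js app listens on
for host in host1 host2; do
  rsync -a --delete ./ "running-user@$host:/usr/local/share/dir_app/"
  ssh "running-user@$host" "fuser -k -n tcp $DAEMON_PORT || true"   # supervisor restarts the app
done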
And the best part is obviously that it will automatically start your node.js app at system startup (provided supervisor is correctly installed: apt-get install supervisor on Debian) and restart it in case of failure.
A supervisord sub-config for a node.js app looks like this:
# /etc/supervisor/conf.d/my-node-app.conf
[program:my-node-app]
user = running-user
environment = NODE_ENV=production
directory = /usr/local/share/dir_app
command = node app.js
stderr_logfile = /var/log/supervisor/my-node-app-stderr.log
stdout_logfile = /var/log/supervisor/my-node-app-stdout.log
There are many configuration parameters.
Note: there is also a Node.js package called supervisor; that is not the one I'm talking about, and I haven't tested it.
Since the servers run Linux, you need to SSH to your hosts and run commands to get the application updated:
Work out the application-update workflow in a shell script. In particular, you need to daemonize your Node app so that it is not killed when the Jenkins job execution exits. Here's a nice article on how to do this: Running node.js Apps With Upstart, or you can use a pure Node.js tool like forever. Assume you have worked out a script under /etc/init.d/myNodeApp.
SSH to your Linux hosts from Jenkins. You need to make sure the SSH private key file has been copied to /var/lib/jenkins/.ssh/id_rsa and is owned by the jenkins user.
Here's an example shell step in the Jenkins job configuration:
ssh <your application ip> "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart"
