So I have a Node.js app I would like to deploy to EC2.
I'm planning on creating multiple instances of it and putting them behind Nginx for load balancing.
I know I can use AWS Elastic Beanstalk, but I think it over-provisions stuff I don't need.
My question is about the app update process. I thought of two options.
The first one is to create a bare git repository on the EC2 instance; every time I push changes, a post-receive hook will create new instances of the app and update Nginx to switch to the new instances.
Another option is to work with Amazon ECR and containers. Every time I update my app image in ECR, it would send an event to the EC2 machine (I'm not sure that is even possible) to create new instances of the app and, again, tell Nginx to switch.
Which one do you think is preferred?
Here is the deployment method we used:
1) Created a bare git repo on the EC2 server, tracked by the production branch.
2) In the post-receive hook:
#!/bin/sh
# check out the pushed code into the web root, then install dependencies and restart the app
git --work-tree=/var/www/domain.com --git-dir=/var/repo/site.git checkout -f
cd /var/www/domain.com && npm install && forever restart app.js
3) In the nginx configuration:
location / {
    proxy_pass http://localhost:3000;
}
Note:
You can customise the post-receive hook to check whether this is the first deployment: if so, run npm install; otherwise run npm update.
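For example, a minimal sketch of such a hook, using the same paths as above (checking for a node_modules directory is just one possible way to detect a first deployment):
#!/bin/sh
# check out the pushed code into the web root
git --work-tree=/var/www/domain.com --git-dir=/var/repo/site.git checkout -f
cd /var/www/domain.com || exit 1
if [ ! -d node_modules ]; then
    npm install    # first deployment
else
    npm update     # subsequent deployments
fi
forever restart app.js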
I hope this helps to solve your issue.
Currently, I'm working on a project which is hosted on Microsoft Azure as a resource. The project runs on a virtual machine and is operated using Azure CLI commands.
Now I've been asked to create a web app for it using Node.js and React.js. I'm totally lost on how to connect the Node.js API to the virtual machine. Is there any way to trigger those Azure CLI commands through a Node.js app? Any help would be appreciated!
EDIT:
Managed to solve the issue. I used the npm package 'ssh-exec', which lets you execute commands on a virtual machine remotely after connecting with an IP address, username, and password. Very simple to use.
Link to package - https://www.npmjs.com/package/ssh-exec
There are a few steps to connect a Node.js API to the VM.
Firstly, we need to clone the project that we will be deploying to the Azure VM. This project is a basic Node.js API with a single endpoint for returning an array of todo objects. Go to the location where you want to store the project and clone it:
git clone --single-branch --branch base-project https://github.com/coderonfleek/node-azure-vm.git
Once the project has been cloned, go to the root of the project and install dependencies:
cd node-azure-vm
npm install
Run the application using the npm run dev command. The application will start up at the address http://localhost:5000. When the application is up and running, enter http://localhost:5000/todos in your browser to see the list of todos.
Now, go to the package.json file of your project and add these scripts in the scripts section:
"scripts": {
.....,
"stop": "pm2 kill",
"start": "pm2 start server.js"
}
The start and stop scripts will use the pm2 process manager to start and stop the Node.js application on the VM. pm2 itself will be installed globally on the VM when it is set up.
At the root of the project, run the rm -rf .git command to remove any .git history. Then push the project to GitHub. Make sure that this is the GitHub account connected to your CircleCI account.
Then, set up a virtual machine on Azure to run Node.js.
Next, create a new VM on Azure and set its environment up for hosting the Node.js application. These are the steps:
Create a new VM instance
Install nginx
Configure nginx to act as a reverse proxy, routing all traffic arriving on port 80 of your VM to the Node.js application running on port 5000
Install Node.js on the VM and clone the app from the GitHub repo into a folder in the VM
Install pm2 globally
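Roughly, the manual equivalent of these setup steps looks like the shell commands below (the Ubuntu package names and the repository URL are assumptions; the cloud-init file described next automates the same thing):
# on the newly created VM: install nginx and Node.js
sudo apt-get update && sudo apt-get install -y nginx nodejs npm
# configure nginx as a reverse proxy: in the default server block,
# add `location / { proxy_pass http://localhost:5000; }`, then reload
sudo systemctl reload nginx
# clone the app from your GitHub repo (placeholder URL)
git clone https://github.com/your-user/node-azure-vm.git
# install pm2 globally
sudo npm install -g pm2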
Do not be intimidated by the complexity of these steps! You can complete all five with one command. At the root of your project, create a new file named cloud-init-github.txt; it is a cloud-init file. Cloud-init is an industry-standard method for cloud instance initialization.
(Refer to the link below for the complete cloud-init file and the full details.)
https://circleci.com/blog/cd-azure-vm/
I'm pretty new to DevOps and I'm trying to set up my Node.js app on an AWS server instance. Steps I've taken:
Set up Elastic IP
Launched EC2 instance with Ubuntu server
Connected IP to instance
Allowed incoming connections on port 3000
SSH'd into the server with a .pem file
Now I'm at the point where I need to get my files uploaded to the server. I've used FileZilla (and like it) in the past to upload files, but in that case the initial setup had already been done. When I set up the site in FileZilla, there is no /var/www folder on the remote site.
I don't know how to connect these dots.
Also, I'm not sure what I need to run once I successfully upload the files. I imagine npm install once I'm SSH'd into the server? Most of the tutorials out there only go through the basic instance setup.
Thanks!
You don't need to have /var/www. Also, it's better to use version control with a remote repository like GitHub, then SSH into your EC2 instance and clone your repository there.
Then cd into your repo, run npm install, and start your app.
Then check that it works.
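For example, once you are SSH'd into the instance, the whole flow is roughly this (the repository URL and entry point are placeholders):
git clone https://github.com/your-user/your-app.git
cd your-app
npm install
node app.js   # or run it under a process manager such as pm2/forever (see the next answer)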
Once you connect to the EC2 instance, clone your code there. It is not mandatory for it to be in /var/www/html, but it's best practice to keep it there. After cloning, run npm install in the project's home directory so all the required packages get installed. Then, to run your Node application in production, you should run it under a process manager such as pm2, supervisor, forever, or passenger. You can use any of these services, configured appropriately to run your application on the desired port. For pm2, you can follow its install guide; then you can run the app with a command appropriate to your environment. For example, to run my application on port 5555 for production:
$ PORT=5555 pm2 start app.js --name API --env production -f
Check the status using pm2 list. Now your application is running at http://server-ip:5555/. But you won't want to type the port number every time, so you should configure a web server such as Apache or nginx in front of your application to forward all requests to the port your application runs on. You can find the best guides on their home pages. Then your application is available at http://server-ip/. You can follow a similar approach with a single web server configuration for multiple Node apps.
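As a minimal sketch of the nginx piece, assuming the app listens on port 5555 as above and a Debian/Ubuntu-style nginx layout (paths and file names may differ on your system):
# write a minimal reverse-proxy server block for the app
sudo tee /etc/nginx/sites-available/myapp >/dev/null <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://localhost:5555;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
sudo nginx -t && sudo systemctl reload nginx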
Hope this helps.
I have built a Node.js app and now I want to deploy it to OpenShift.
I don't want to use GitHub because I would need to create a private repository, which I cannot do.
Also, I cannot use 'rhc' since I am a new user.
Is there any way to do that?
I cannot find any tutorial about that.
For OpenShift 3, you can use a binary input source build.
First create a binary input build.
oc new-build --name myapp --strategy=source --binary --image-stream=nodejs:latest
Now start a new build and upload source code from the current directory.
oc start-build myapp --from-dir=.
Once the build has completed, deploy the image created by the build.
oc new-app myapp
You can then expose the service.
oc expose svc/myapp
Every time you want to make a change, you will need to run the same oc start-build command in the directory where your source code is.
Is there any other code repo you are using? SVN? If SVN, you can use pipelines with Jenkins.
If not, put the Node.js app in a Docker container and push it to Docker Hub.
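A rough sketch of that flow, assuming you already have a Dockerfile for the app and a Docker Hub account (the image name is a placeholder):
docker build -t myuser/my-node-app:latest .
docker login
docker push myuser/my-node-app:latest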
I don't see anybody suggesting this, so I will: you can equally well deploy code from GitLab, Pagure, Bitbucket, or any other git hosting service.
In fact you can even run your own git server inside OpenShift.
oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/gitserver/gitserver-persistent.yaml
oc env dc/git -p ALLOW_ANON_GIT_PULL=false
oc policy add-role-to-user edit -z git
oc get route # to see your git server URL
Now you should be able to push/pull from that server using your OpenShift username and token (also any other users you add to the project). From buildconfigs and other pods you can also use simply git as the hostname of your git server, because this should resolve to the IP of the service with the same name (again only within the same OpenShift project).
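As a hedged sketch, pushing from your local clone could look something like this, assuming the route hostname is git.example.com (use the one printed by oc get route), the repository name myapp is just an example, and the server creates the repository on first push (otherwise create it first):
# oc whoami prints your username, oc whoami -t prints your current token
git remote add openshift https://$(oc whoami):$(oc whoami -t)@git.example.com/myapp.git
git push openshift master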
Read the template YAML (the URL after oc create) for more options you can use like REQUIRE_GIT_AUTH.
Of course it is good to keep a git mirror/backup somewhere else as with any other git service.
HTH
P.S. Forgot to say, you need to install an OpenShift v3 cluster by yourself or subscribe to OpenShift Online (which unfortunately may take a while at the moment).
I've been looking at various methods to run commands upon creation of EC2 instances using Elastic Beanstalk on AWS. I've been given different methods to do this through AWS Tech Support, including lifecycle hooks, custom AMIs, and .ebextensions. I've been having issues getting the first two methods (lifecycle hooks and custom AMIs) to work with EB.
I'm currently using .ebextensions to run commands upon deploy, but not sure if there's a way to run a set of commands upon creation only instead of every time I deploy code. For instance, I have a file .ebextensions/03-commands.config that contains the following code:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
However, I only want this code to run upon instance creation, not every time I deploy, as it currently does. Does anyone know a way to accomplish this?
Thanks in advance!
I would recommend creating an idempotent script in your application that leaves a marker file on the instance in some location, say /var/myapp-state/marker, using something like mkdir -p /var/myapp-state/; touch /var/myapp-state/marker on successful execution. That way your script can check whether the marker file exists and, if it does, do nothing.
Then you can call your script from container_commands; the real work runs only once, because the first successful execution creates the marker file and every subsequent execution is a no-op.
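A minimal sketch of what such a script could look like (the paths and the npm command are just examples):
#!/bin/sh
MARKER=/var/myapp-state/marker
# if the marker exists, a previous deployment already did the one-time setup
if [ -f "$MARKER" ]; then
    exit 0
fi
npm install -g -f npm@latest                  # creation-only work goes here
mkdir -p /var/myapp-state && touch "$MARKER"  # mark success so later deploys skip this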
Create a custom AMI. This way you can set up your instances however you want, and they will launch faster.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html
As I can see from your question, you're using container_commands, which means you are using Elastic Beanstalk with Docker, right? In that case I would recommend reading: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html.
The idea is that you can create your own Dockerfile, where you specify all the commands needed to build a Docker container, for example to install all dependencies.
I would recommend using .ebextensions for Elastic Beanstalk customization and configuration, for example to specify the ELB or RDS configuration. In the Dockerfile it makes sense to specify all the commands you need to build a container for your application, including setting up the web server, dependencies, etc.
With this approach, each time you deploy, Elastic Beanstalk builds the container and runs a Docker instance with the deployed source code.
There is a simple option, leader_only: true, that you can use with current AWS Elastic Beanstalk configurations. You simply need to add it under your container command:
container_commands:
  01_npm_install:
    command: "npm install -g -f npm@latest"
    leader_only: true
This is the relevant link from AWS:
AWS Elastic Beanstalk container command options
I am a Node.js developer. I have used Heroku and Joyent's no.de platform before.
For both of these platforms, deployment used to be simple:
git push heroku master ( Heroku )
git push joyent master ( Joyent's node)
The above commands used to do the magic. They enabled me to push code from my local machine to the cloud server, deploy it, and automatically restart the server.
Now I am planning to use Amazon AWS as it's more configurable to my needs. How do I set up a similar thing on Amazon EC2 for continuous deployment?
I am using an Ubuntu AMI.
Is there any tool that help me achieve this ?
If there are any resources/tutorials that might help me - please let me know.
Thanks!
That auto-deploy mechanism is implemented with Git Hooks. The most likely hook used is post-update.
It's a simple bash script that is executed on a git push; put one in a git repository on your EC2 server, including the code to re-run npm install (if needed) and restart your app.
That should do it. :)
Use roco, a deployment solution inspired by Capistrano that works great with express/railwayjs + git + upstart. If you have another environment, feel free to customize it using Roco.coffee.
It can also be configured with a post-update hook to work exactly as on Heroku and Joyent.
Here is a tiny tutorial for this tool: http://node-js.ru/4-deploy-with-roco
Check out AWS Elastic Beanstalk
It lets you deploy your application to an Amazon EC2 instance by running:
git aws.push --environment testing
# or
git aws.push --environment production
The documentation page contains a lot of quality information to get you started!