How to run nginx and Node.js at server startup?
With my current setup, when Amazon EC2 instances are launched by AWS Auto Scaling I have to connect to each instance directly to start nginx and Node.js.
Can this be automated for Auto Scaling?
I want nginx and the Node.js application to start during EC2 startup on instances launched by Auto Scaling.
The Auto Scaling group launches EC2 instances from an AMI, and I want the Node.js application and nginx to come up together with the EC2 server at boot.
For nginx I can register the service with chkconfig, but the Node.js application runs under pm2, using the start script written in package.json.
How can I start nginx and Node.js at EC2 startup so that a new instance launched by Auto Scaling responds properly?
Comment reply:
I don't want to run Node.js with a plain "node app.js" command.
I want to start Node.js through the package.json script, e.g.
"start": "NODE_ENV=production PORT=3000 pm2 start server.js -i -1"
How can I do this?
Your suggestions use a Linux server init script file.
But I want to set NODE_ENV and PORT and use the pm2 command.
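For context, a minimal sketch of a boot hook that simply runs that package.json script (the path and user below are assumptions, not from the post); the NODE_ENV, PORT and pm2 settings then come from the "start" script itself:
#!/bin/bash
# Boot hook (e.g. called from an init script or systemd unit): run the package.json
# "start" script as the application user, so NODE_ENV, PORT and pm2 are used as written.
su - ec2-user -c 'cd /home/ec2-user/app && npm run start'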
Solution
I solved the problem.
When Linux booted, I originally tried to use a script file to run Node.js automatically.
I created the script file and had the shell script run automatically after Linux booted, but it did not turn out to be a good approach.
Instead, pm2 startup together with an ecosystem.config.js file solves the problem more flexibly; a sketch follows below.
Thank you for your reply.
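For reference, a rough sketch of that approach, run once while preparing the instance or AMI (the application path, user name and file contents below are assumptions, not taken from the original post):
# Install the pm2 boot script so pm2 resurrects processes after a reboot
# (run as root; otherwise pm2 prints the exact sudo command to copy)
pm2 startup systemd -u ec2-user --hp /home/ec2-user
# Describe the app, environment and cluster mode in ecosystem.config.js
cat > /home/ec2-user/app/ecosystem.config.js <<'EOF'
module.exports = {
  apps: [{
    name: 'server',
    script: 'server.js',
    instances: -1,           // all CPU cores minus one, like "-i -1" above
    exec_mode: 'cluster',
    env: { NODE_ENV: 'production', PORT: 3000 }
  }]
};
EOF
# Start the app from the ecosystem file and persist the process list
cd /home/ec2-user/app
pm2 start ecosystem.config.js
pm2 save   # restored automatically by the pm2 boot script on the next boot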
This has nothing to do with autoscaling. It most often has to do with the EC2 AMI (Amazon Machine Image) that the autoscaler is launching your EC2 instances with, and possibly also with the "user data" that you pass to the instance when it launches. These are the only two things that affect what an EC2 instance does when and after it starts up, up until it starts communicating with the outside world.
So what you need to do is create an AMI that is set up so that the right things launch when an EC2 instance is launched from that AMI. What you'd do is take the AMI you want to use as a starting point, launch that AMI into an instance, make the necessary changes and installations you want, and then save off a new AMI. Then change your autoscaling group to launch new instances with that new AMI.
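As a rough sketch of that bake-and-update cycle with the AWS CLI (the instance ID, AMI IDs and launch template name below are placeholders, not values from the question):
# Save the configured instance off as a new AMI
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "myapp-nginx-node-$(date +%Y%m%d)" \
  --description "nginx + Node.js app preinstalled"
# Point the Auto Scaling launch template at the new AMI ID returned above,
# basing the new version on an existing one and overriding only the image
aws ec2 create-launch-template-version \
  --launch-template-name myapp-template \
  --source-version 1 \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0"}'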
You could alternately send a script in your "user data" that launches things, but this is rarely what you want to do. Most of the time, you want your AMI to do the right thing.
It's also possible that you are using some sort of post-boot provisioner, like Chef, Ansible or Chef Habitat. If you are, that's where you'd set all of this up: you'd want that system to do the work you're describing. But if you're doing that, what I said earlier still applies. For this to work, you'd often also have built a custom AMI with parts of the provisioning system already baked into it, so that the system can connect to the instance and provision it. It's possible for these systems to start from a generic AMI as well; it depends on the system.
Related
I built a simple Node.js application and ran it on an EC2 instance.
Everything works fine. I decided to create an AMI (Amazon Linux based) and a launch template to be used by an ASG.
The problem is, I cannot get the application to start automatically.
I tried adding the following command through the user_data field, but it doesn't work:
node main.js
Any ideas on how to automatically start this application once launched by the ASG?
Typically you would add this to the startup script baked into the AMI, so that once the instance has started it runs the script on boot; a sketch follows below.
You may want to look at PM2 as well; it's a great tool for things like this and also makes it easy to set up each Node instance in cluster mode (assuming you have an EC2 instance with more than one core).
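A minimal sketch of such a boot script (whether baked into the AMI or passed as user_data), assuming the application already lives on the instance at /home/ec2-user/app and Node.js/npm are installed; the path and user are placeholders:
#!/bin/bash
# Install pm2 globally (skip if it is already baked into the AMI)
npm install -g pm2
# Start the app in cluster mode across all cores, as the application user
su - ec2-user -c 'cd /home/ec2-user/app && pm2 start main.js -i max'
# Optional: "pm2 startup" plus "pm2 save" make the process list survive reboots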
Some other ways of doing this, albeit not AWS auto scaling: DigitalOcean offers a PaaS product called 'Apps' that basically pushes your app into a container from a git repo and deploys it; you can then spin out new instances as needed. The downside is that the bandwidth allowance is a bit small, but a CDN etc. can help with that.
I must first preface by saying I am relatively new to AWS but finding it immensely useful. Let me describe my scenario...
What I have currently
An Auto scaling group (ASG)
An Elastic Load Balancer (ELB)
A CI/CD pipeline using CodeDeploy and Bitbucket
Node/Express app serving a custom API on EC2 instances
VPC and subnets are working well
An AMI with my app code
My question
When the ASG scales out and launches a new EC2 instance using my launch template and AMI, that instance will use the application code baked into the AMI. But if I deploy to master at some point, the AMI will not be updated, while the existing instances within the ASG will be. What is the best method for ensuring that new instances spawned by the ASG run the latest code version (master)?
My initial thoughts
I am thinking of including a bash script in the launch config that pulls the latest code from Bitbucket and runs the remaining steps to get my application running (such as "npm install", "npm run start", etc.). I'm sure someone has a more elegant solution, and I'd love to hear some suggestions.
To anyone coming to this, I did solve my problem. I was correct initially. The "user data" field within the Launch Template was a good place to bootstrap my app once the instance is up and running. It basically clones from the remote repo and performs any necessary steps to launch the app after that.
For example, in the EC2 launch configuration:
#cloud-boothook
#!/bin/bash
# Fetch the latest application code from the remote repository (placeholder URL)
git clone myremoterepo.git
cd myremoterepo
# Install dependencies and start the app via the package.json "start" script
npm install
npm run start
Also, if you're running a classic load balancer, CodeDeploy will attempt to start a deployment, based on your latest code repo in S3, when your ASG scales up the EC2 instances. So the above solution will be redundant.
I have a website (Node.js) on an EC2 instance (Ubuntu) that is accessible and runs as a pm2 process.
The problem is that after closing the IDE (Cloud9), the website stops working after some time. As soon as I log back in to Cloud9, the EC2 instance restarts and pm2 kick-starts the app automatically.
The pm2 setup seems to be working fine that way (I did run pm2 startup and pm2 save).
From the logs:
PM2 log: App [app:0] exited with code [1] via signal [SIGINT]
PM2 error: Error: kill EPERM
As mentioned in https://aws.amazon.com/cloud9/faqs/:
AWS Cloud9 EC2 environment – Enables you to launch a new Amazon EC2 instance that Cloud9 connects to. By default, these instances stop 30 minutes after you close the IDE and start automatically when you open the IDE.
Does this mean that in production we can only keep an SSH session open so the instance does not stop and the website remains live?
Or how do I ensure the instance does not stop?
You develop websites on Cloud9 machines that sit on an underlying EC2 instance. They turn off after 30 minutes because they are not meant for website hosting.
You should set up a separate EC2 instance with a UserData script via CloudFormation to host the site.
Or spin one up using the CLI or the SDK for your favourite programming language; a CLI sketch follows below.
Or you may want to look into Elastic Beanstalk if you have no desire to learn AWS.
If you want to host the site, you can't do that off a Cloud9 machine: deploy to an EC2 instance or consider using Beanstalk.
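A rough sketch of the CLI route, assuming you have a user-data script like the ones above saved as user-data.sh; the AMI, key pair and security group IDs are placeholders:
# Launch an instance that installs/starts the site on first boot
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0 \
  --user-data file://user-data.sh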
First, find the corresponding EC2 instance and enable termination protection.
Secondly, in Cloud9, go to cloud9 > Project Settings > EC2 instance and set "Stop my environment" to Never.
I recently downloaded OrientDB and managed to start the server/database on an AWS EC2 Linux 14.04 (I think that's the name) server for an application I want to set up. I started OrientDB "as usual" by running ./server.sh in the terminal over an SSH connection to the EC2 server. Everything works fine and I can query the database while I am at the computer. But as soon as I leave my computer and the SSH connection is broken (for example when closing the computer), so is the database, i.e. it stops.
Is there a way around this, or do I have to set up the database in some other way?
OrientDB is provided as an AWS AMI. Take a look at
http://orientdb.com/orientdb-amazon-web-services/
If you want to DIY, follow the instructions provided on
http://orientdb.com/docs/last/Unix-Service.html
Update: new link to doc:
https://orientdb.com/docs/last/admin/Unix-Service.html
Hope this helps
You can try putting the full path to server.sh into /etc/rc.local, before the exit 0 line, and rebooting the instance; a sketch follows below.
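A minimal sketch of that rc.local approach; the install path and user below are assumptions:
#!/bin/sh -e
# /etc/rc.local -- runs at the end of every multi-user boot
# Start OrientDB in the background as a non-root user
su - ubuntu -c 'cd /opt/orientdb/bin && nohup ./server.sh > /var/tmp/orientdb.log 2>&1 &'
exit 0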
Before running the server, run the command:
screen
This will create a persistent environment which will allow your process to keep running after you disconnect.
When you reconnect, you can use this command to reconnect to that environment:
screen -r
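Equivalently, you can start the server directly in a named, detached session and reattach to it by name later (the session name orientdb is just an example):
screen -dmS orientdb ./server.sh
screen -r orientdb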
I set up an AWS Ubuntu instance running an HTTP server using Node.js.
I was wondering if it's possible to log out of my remote server while keeping the http-server running.
This is a pretty good tutorial on keeping a Node.js server running, which, amongst other things, covers running it in the background.
http://blog.nodejitsu.com/keep-a-nodejs-server-up-with-forever/
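Along the lines of that tutorial, basic forever usage looks roughly like this (app.js is a placeholder for your entry file):
npm install -g forever
forever start app.js    # daemonizes the server and restarts it if it crashes
forever list            # show the managed processes
forever stop app.js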
Forever is a nice option (as suggested above).
Though, I recommend using AWS Elastic Beanstalk over EC2 (that's the service you are using now, if I got it right): it provides an easy interface to deploy your web server with no SSH interference, keeps it alive at all times after deployment, and also gives you load balancing and auto scaling features with minimum effort.
You could also use pm2 for this. Besides keeping your http-server online, it also gives you the ability to do load balancing and other tasks.
Run
npm install pm2 -g
on your server and start your app with
pm2 start app.js
As marekful points out, logging out of your Ubuntu server will not have any effect on your http-server.
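pm2 itself daemonizes the app, so it keeps running after you log out; if you also want it to survive a reboot, the usual pattern (as in the pm2 answers further up) is roughly:
pm2 startup   # prints/installs a boot script for your init system
pm2 save      # remember the current process list so it is restored at boot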