I should preface this by saying I'm relatively new to AWS, but I'm finding it immensely useful. Let me describe my scenario...
What I have currently
An Auto scaling group (ASG)
An Elastic Load Balancer (ELB)
A CI/CD pipeline using CodeDeploy and Bitbucket
Node/Express app serving a custom API on EC2 instances
VPC and subnets are working well
An AMI with my app code
My question
When the ASG decides to launch a new EC2 instance using my launch template and AMI, that instance will run the application code baked into the AMI. But if I deploy to master at some point, the AMI will not be updated, even though the existing instances in the ASG will be. What is the best method for ensuring that new instances spawned by the ASG run the latest code version (master)?
My initial thoughts
I am thinking of including a bash script in the launch configuration that pulls the latest code from Bitbucket and runs whatever steps are needed to get my application running (such as "npm install", "npm run start", etc.). I'm sure someone has a more elegant solution, and I'd love to hear suggestions.
To anyone coming to this later: I did solve my problem, and my initial thinking was on the right track. The "user data" field within the Launch Template is a good place to bootstrap the app once the instance is up and running. It clones from the remote repo and performs any necessary steps to launch the app after that.
For example, in the launch configuration for EC2:
#cloud-boothook
#!/bin/bash
git clone myremoterepo.git
cd myremoterepo
npm install
npm run start
Also, if you're running a Classic Load Balancer, CodeDeploy will attempt to start a deployment (based on your latest code revision in S3) when your ASG scales out the EC2 instances, which makes the above solution redundant.
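For reference, here is a slightly fuller sketch of a user-data script like the one above, shown here written out to a file. The repo URL, app directory, and the use of pm2 are my assumptions, not part of the original setup; the point of pm2 is that "npm run start" would tie the app to the bootstrap shell, while pm2 daemonizes it so it keeps running after cloud-init finishes.

```shell
# Sketch only: writes a bootstrap user-data script to user-data.sh.
# The repo URL, paths, and pm2 usage below are assumptions.
cat > user-data.sh <<'EOF'
#cloud-boothook
#!/bin/bash
cd /home/ec2-user
git clone myremoterepo.git app   # hypothetical repo, as in the snippet above
cd app
npm install --production
# pm2 daemonizes the process, so the app survives after this script exits
npm install -g pm2
pm2 start server.js --name api
EOF
```

The generated file is what you would paste into the launch template's user data field.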
Related
I built a simple Node.js application and ran it on an EC2 instance.
Everything worked fine. I decided to create an AMI (Amazon Linux based) and a launch template to be used by an ASG.
The problem is, I cannot get the application to start automatically.
I tried adding the following command through the user_data field, but it doesn't work:
node main.js
Any ideas on how to automatically start this application once launched by the ASG?
Typically you would add this to the startup script of the AMI, so that the instance runs the script on boot.
You may want to look at PM2 as well; it's a great tool for things like this and also makes it easy to set up each Node instance in cluster mode (assuming you have an EC2 instance with more than one core).
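Concretely, a bare "node main.js" in the user_data field tends to fail for mundane reasons: user data needs a shebang to be run as a shell script, it runs as root, and the working directory is not your app directory. A minimal corrected sketch (the app path is an assumption):

```shell
# Sketch: writes a corrected user-data script to user-data.sh.
# The path /home/ec2-user/app is an assumption about where the app lives.
cat > user-data.sh <<'EOF'
#!/bin/bash
# user data runs as root; the shebang is required for it to run as a script
cd /home/ec2-user/app
# run as the app's owner, detached so boot is not blocked, with logs captured
sudo -u ec2-user nohup node main.js > /var/log/app.log 2>&1 &
EOF
```

PM2 (as suggested above) is still the more robust option, since it also restarts the app if it crashes.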
Some other ways of doing this, albeit without auto scaling: DigitalOcean offers a PaaS product called 'Apps' that basically pushes your app into a container from a Git repo and deploys it; you can then spin up new instances as needed. The downside is that the bandwidth allowance is a bit small, but a CDN etc. can help with that.
Trying to understand how Node is supposed to be installed for multiple developers on AWS EC2 as an administrator. (I am also one of the devs.)
I have an EC2 server with nginx running on port 80. Should I now go to the webroot and install nvm/node/npm as ec2-user? Or as my own user, and then as all the other users after? (No one can actually use the ec2-user account except server admins.)
How about the other developers who need to use Node? I was hoping to install nvm/node/npm in advance for everyone who needs it, so that they could use it immediately after getting access to the server; or maybe everyone should install nvm/node/npm themselves?
Or it would be nice if there were a way to install it as ec2-user and then share it with all users properly and securely. What's the right way to set this up?
(When I ran through this myself as my own user and installed nvm for the first time on the Amazon Linux 2 AMI, I noticed that when I switched to another user or root, the "node -v" command didn't work for those accounts. Basically, I'm trying to do an install that covers all the users.)
In fact, on AWS EC2 you need only one user and one Node.js runtime. I would suggest the set-up below for development and deployment.
Let all developers set up their dev environments on their local machines.
Let developers check their code in to GitHub or a similar repository.
Using a CI/CD pipeline, integrate the code, build it, and deploy it to EC2.
Instead of plain EC2, I would recommend using AWS Elastic Beanstalk.
If this makes sense to you, we can elaborate it into a solution and implement it.
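As a sketch of the build-and-deploy step, a CI job against Elastic Beanstalk could be as small as the script below. The environment name is made up, and this assumes the eb CLI (from the awsebcli package) has already been configured with "eb init".

```shell
# Sketch: a minimal deploy step a CI job might run, written to deploy.sh.
# "my-api-env" is a hypothetical Elastic Beanstalk environment name.
cat > deploy.sh <<'EOF'
#!/bin/bash
set -e                 # abort the pipeline on the first failure
npm ci                 # reproducible install from package-lock.json
npm test               # fail the pipeline if tests fail
eb deploy my-api-env   # awsebcli: bundles the app and deploys it
EOF
```

The appeal of this set-up is that no developer ever needs Node installed on the EC2 box; the platform manages the runtime.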
How to run nginx and Node.js at server startup?
Currently, when Amazon EC2 instances are started by AWS Auto Scaling, I have to connect to each instance directly to run nginx and Node.js.
Can this be done automatically for Auto Scaling?
In Amazon EC2, I want nginx and the Node.js app to run during EC2 startup when instances are launched by Auto Scaling.
EC2 is set up as an Auto Scaling group using images. I want the Node.js application and nginx, started by Auto Scaling, to run together with the EC2 server startup.
For nginx, I can register the executable with chkconfig, but the Node.js application runs under pm2, using the script written in package.json.
How can I run nginx and Node.js at EC2 startup so that new instances started by Auto Scaling respond properly?
Comment reply:
I don't want to run Node.js using the "node app.js" command.
I want to run Node.js via the package.json script,
e.g.
"start": "NODE_ENV=production PORT=3000 pm2 start server.js -i -1"
How can I do this?
Your suggestions use a Linux server init script file.
But I want to set NODE_ENV and PORT and use the pm2 command.
Solution
I solved the problem.
Initially I tried to use a script file to run Node.js automatically when Linux booted: I created the script and made it run after boot, but it did not seem like a good approach.
Instead, pm2 startup and an ecosystem.config.js can be used to solve the problem flexibly.
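For anyone landing here, a sketch of what that looks like. The app name and script file are assumptions; the env values mirror the package.json script above ("NODE_ENV=production PORT=3000 pm2 start server.js -i -1").

```shell
# Sketch: generates an ecosystem.config.js equivalent to the
# package.json start script quoted above. Names are hypothetical.
cat > ecosystem.config.js <<'EOF'
module.exports = {
  apps: [{
    name: 'server',          // hypothetical app name
    script: 'server.js',
    exec_mode: 'cluster',
    instances: -1,           // all CPU cores except one, like "-i -1"
    env_production: { NODE_ENV: 'production', PORT: 3000 }
  }]
};
EOF
# Then, on the instance (or baked into the AMI):
#   pm2 start ecosystem.config.js --env production
#   pm2 startup   # prints the init-system command that enables pm2 at boot
#   pm2 save      # remembers the process list so pm2 resurrects it on boot
```

This is exactly the "flexible" part: NODE_ENV and PORT live in the config file, and pm2's own startup hook replaces the hand-rolled init script.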
Thank you for your reply.
This has nothing to do with autoscaling. It most often has to do with the EC2 AMI (Amazon Machine Image) that the autoscaler launches your EC2 instances from, and possibly also with the "user data" that you pass to the instance when it launches. These are the only two things that affect what an EC2 instance does when and after it starts up, up until it starts communicating with the outside world.
So what you need to do is create an AMI that is set up so that the right things launch when an EC2 instance is started from it. Take the AMI you want to use as a starting point, launch it into an instance, make the changes and installations you want, and then save off a new AMI. Then change your Auto Scaling group to launch new instances from that new AMI.
You could alternatively send a script in your "user data" that launches things, but this is rarely what you want to do. Most of the time, you want the AMI itself to do the right thing.
It's also possible that you are using some sort of post-boot provisioner, like Chef, Ansible, or Chef Habitat. If so, that's where you'd set all of this up: you'd want that system to do the work you're describing. But what I said earlier still applies. For this to work, you'd often also have built a custom AMI with parts of the provisioning system baked in, so that the system can connect to the instance and provision it. Some of these systems can start from a generic AMI as well; it depends on the system.
I have an AWS Windows Server 2016 VM. This VM has a bunch of libraries/software installed (dependencies).
I'd like to, using python3, launch and deploy multiple clones of this instance. I want to do this so that I can use them almost like batch compute nodes in Azure.
I am not very familiar with AWS, but I did find this tutorial.
Unfortunately, it shows how to launch an instance from the store, not an existing configured one.
How would I do what I want to achieve? Should I create an AMI from my configured VM and then just launch that?
Any up-to-date links and/or advice would be appreciated.
Yes, you can create an AMI from the running instance, then launch N instances from that AMI. You can do both from the AWS console, or you can call boto3's create_image() and run_instances(). Alternatively, look at Packer for creating AMIs.
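A sketch of those two steps with the AWS CLI, written out to a file rather than executed, since the instance ID and AMI ID below are placeholders (boto3's create_image() and run_instances() take equivalent parameters):

```shell
# Sketch: the two CLI calls for clone-an-instance, saved to clone.sh.
# Both IDs are placeholders; substitute your own.
cat > clone.sh <<'EOF'
#!/bin/bash
# 1. Create an AMI from the configured instance
#    (--no-reboot avoids stopping it, at some risk to filesystem consistency)
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "configured-base" --no-reboot
# 2. Launch N clones from the resulting AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.large --count 5
EOF
```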
You don't strictly need to create an AMI. You could simply bootstrap each instance as it launches, via a user data script or some form of configuration management like Ansible.
I have a Node.js application that runs inside Docker on AWS Fargate.
It started to consume high CPU, and I wonder if I can profile it.
I couldn't find a way to connect via SSH, and I am not sure whether it would help to run it with the --prof flag.
I am a newbie in AWS myself, so please double-check everything I say. Fargate provisions the underlying instances for you, and you are not allowed to interact with them directly (no SSH). But I think you can use CloudWatch Logs, which captures every console.log of your app in the specified log group. There are some configurations for this when you create your task definition or container definition (at least in CloudFormation, which I highly recommend using). You can console.log the number of users or function calls and use this info to debug what is happening.