Why do instances launched by Auto Scaling go unhealthy behind the load balancer at first? - autoscaling

I am facing an issue: whenever AWS Auto Scaling kicks in and creates new instances, those instances are initially marked Unhealthy for a few minutes, and during that time the load balancer does not distribute traffic to them. Please help me with this.
I have tried creating an instance manually from the same AMI, and the server works fine. But when Auto Scaling launches it, the instance is initially unhealthy and only becomes healthy after a few minutes. Why, and what is the solution?

Related

AWS Elastic Beanstalk instance keeps deleting installed binaries after it restarts

I have a Node.js application deployed on Elastic Beanstalk (configured using CodePipeline). In that application, I have several cron jobs scheduled (using the npm module node-cron) at different times of the day to serve different purposes as needed.
Using logs, I observed that one of the cron jobs was memory intensive and was taking up most of the instance's resources, which caused the EB instance to restart on its own. To solve this, I moved that service to a Lambda function, which reduced the load by a great margin. Things have been running smoothly most days, but the issue of the instance restarting automatically still persists.
During a restart, it wipes all the binaries I had installed manually, such as mongodump (used for backing up the production database and uploading the dump to AWS S3). I have to install those binaries again from scratch every single time.
I've been stuck on this problem for months now and have wasted a lot of time fixing it manually. Any help will be greatly appreciated. Thanks in advance!

Node.js backend APIs, when dockerized, are taking more time to connect to MongoDB

I moved my Node.js backend to Docker (previously it was deployed directly on an EC2 instance).
My MongoDB is deployed on another EC2 instance, which I didn't move to Docker (and want to keep it that way).
After dockerizing my backend and deploying it on ECS, the APIs are taking longer on DB queries. I don't understand what went wrong, or is it supposed to be like this? Any suggestions?
So, I found the issue here.
The containers were running in a different availability zone from the database; the extra network hop was adding the extra time.
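A quick way to confirm that diagnosis is to compare the availability zone of the backend host with that of the MongoDB instance. Here is a minimal Node.js/TypeScript sketch that queries the EC2 instance metadata service; it assumes the containers run on EC2 hosts where the metadata endpoint is reachable and IMDSv1 is allowed (Fargate exposes a different task metadata endpoint):

```typescript
// Print the availability zone this host is running in, via the EC2
// instance metadata service. Run it on the backend host and on the
// MongoDB instance and compare the two values.
import http from "node:http";

http
  .get(
    "http://169.254.169.254/latest/meta-data/placement/availability-zone",
    (res) => {
      let body = "";
      res.on("data", (chunk) => (body += chunk));
      res.on("end", () => console.log("availability zone:", body));
    }
  )
  .on("error", (err) => console.error("metadata not reachable:", err.message));
```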

Build an extensible system for scraping websites

Currently, I have a server running. Whenever I receive a request, I want some mechanism to start the scraping process on some other resource (preferably dynamically created), as I don't want to perform scraping on my main instance. Further, I don't want the other instance to keep running, and charging me, when I am not scraping data.
So, preferably, I want a system that I can ask to start scraping a site and that shuts down when it finishes.
I have looked into Google Cloud Functions, but they are capped at 9 minutes per invocation, so they won't fit my requirement: scraping can take much longer than that. I have also looked into the AWS SDK; it lets us create VMs at runtime and terminate them, but I can't figure out how to push my API script onto a newly created AWS instance.
Further, the system should be extensible. I have many different scripts that scrape different websites, so a robust solution would be ideal.
I am open to using any technology. Any help would be greatly appreciated. Thanks
I can't figure out how to push my API script onto a newly created AWS instance.
This is achieved by using UserData:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts.
So basically, you would construct your UserData to install your scripts and all their dependencies, and then run them. This is executed when new instances are launched.
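A minimal sketch of that approach with the AWS SDK for JavaScript v3; the AMI ID, region, and repository URL are hypothetical placeholders, and the shutdown behavior is set so the instance terminates itself when the script finishes, which also covers the "close when it finishes" requirement:

```typescript
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

// Hypothetical UserData: install dependencies, fetch the scraper, run it,
// then shut the machine down when the job is done.
const userData = `#!/bin/bash
yum install -y nodejs git
git clone https://github.com/example/scraper.git /opt/scraper
cd /opt/scraper && npm install && node scrape.js
shutdown -h now`;

const ec2 = new EC2Client({ region: "us-east-1" });
await ec2.send(
  new RunInstancesCommand({
    ImageId: "ami-0123456789abcdef0", // hypothetical AMI with your tooling
    InstanceType: "t3.micro",
    MinCount: 1,
    MaxCount: 1,
    // With "terminate", the shutdown at the end of UserData destroys the
    // instance entirely, so it stops billing once the scrape is finished.
    InstanceInitiatedShutdownBehavior: "terminate",
    // The EC2 API expects user data as a base64-encoded string.
    UserData: Buffer.from(userData).toString("base64"),
  })
);
```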
If you want the system to be scalable, you can launch your instances in an Auto Scaling group and scale it up or down as you require.
The other option is running your scripts as Docker containers, for example using AWS Fargate.
By the way, AWS Lambda has a limit of 15 minutes, so not much more than Google Cloud Functions.

How to ensure that the ASG (Auto Scaling group) replaces existing instances with every change in the launch configuration

The infrastructure is provisioned using Terraform code.
In our AWS environment, a new AMI is created for every commit made to the repository. Now, we want autoscaling configured for the web servers behind an ALB using this new AMI.
How can we make sure that the ASG replaces existing instances with every change in the launch configuration? I believe that once you change the LC, only the instances created by scaling in/out are launched from the new AMI; the existing ones are not replaced.
Also, do you have any idea how we can programmatically (via Terraform) get how many servers are running at any point in time, in the case of auto scaling?
Any help is highly appreciated here.
Thanks!
For the most part this is pretty straightforward, and there are already a dozen implementations around the web.
The tricky part is to set create_before_destroy in the lifecycle of both the LC and the ASG. You should also reference the LC from your ASG resource. That way, once your LC is changed, a workflow is triggered that creates a new ASG to replace your current one.
Very Good Documented Example
Also, do you have any idea how we can programmatically (via Terraform) get how many servers are running at any point in time, in the case of auto scaling?
This depends on the context. If you have a static number, it's easy: you could define it in your module and stick with it. If you need the value from the previous ASG, the way to do it is again described in the guide above :) You would need to write a custom external handler that reports how many instances are running against your target groups at a given moment. There might of course be an AWS API that lets you query the health check state of all your target groups and sum the healthy targets (I'm not aware of one). On top of that, you could then add custom rules for your scaling policies.
External Handler
Side note: in the example, the deployment happens with an ELB.
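As a rough sketch of what such an external handler could look like (the region and ASG name are hypothetical; Terraform's external data source expects a flat JSON object of string values on stdout):

```typescript
// Count the InService instances of an Auto Scaling group and emit JSON,
// suitable for wiring into Terraform's external data source.
import {
  AutoScalingClient,
  DescribeAutoScalingGroupsCommand,
} from "@aws-sdk/client-auto-scaling";

const client = new AutoScalingClient({ region: "us-east-1" });
const resp = await client.send(
  new DescribeAutoScalingGroupsCommand({
    AutoScalingGroupNames: ["web-asg"], // hypothetical ASG name
  })
);
const instances = resp.AutoScalingGroups?.[0]?.Instances ?? [];
const inService = instances.filter((i) => i.LifecycleState === "InService");
// Terraform's external data source requires all values to be strings.
console.log(JSON.stringify({ running: String(inService.length) }));
```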

Setting up ELB on AWS for Node.js

I have a very basic question but I have been searching the internet for days without finding what I am looking for.
I currently run one instance on AWS.
That instance has my node server and my database on it.
I would like to make use of ELB by separating the one machine that hosts both the server and the database:
One machine that is never terminated, which hosts the database
One machine that runs the basic node server, which as well is never terminated
A policy to deploy (and subsequently terminate) additional EC2 instances that run the server when traffic demands it.
First of all I would like to know if this setup makes sense.
Secondly, I am very confused about the way this should work in practice:
Do all deployed instances use the same volume, or is a snapshot of the volume used?
In general, how do I set up such a system? Again, I searched the web, and all of the tutorials and documentation are so generalized that I cannot figure out exactly what to do in my case.
Any tips? Links? Articles? Videos?
Thank you!
You would have an Auto Scaling group with a minimum size of 1 that is configured to use an AMI based on your Node.js server. The Auto Scaling group adds and removes instances from the ELB as instances are created and deleted.
EBS volumes cannot be attached to more than one instance at a time. If you need a shared disk volume, you would need to look into the EFS service.
Yes, you need to move your database onto a separate server that is not a member of the Auto Scaling group.
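To make the moving parts concrete, here is a sketch of creating that Auto Scaling group with the AWS SDK for JavaScript v3; the group name, launch template, subnets, and target group ARN are all hypothetical, and it assumes a load balancer target group already exists:

```typescript
import {
  AutoScalingClient,
  CreateAutoScalingGroupCommand,
} from "@aws-sdk/client-auto-scaling";

const client = new AutoScalingClient({ region: "us-east-1" });
await client.send(
  new CreateAutoScalingGroupCommand({
    AutoScalingGroupName: "node-servers",
    // Launch template built from the AMI of your Node.js server.
    LaunchTemplate: { LaunchTemplateName: "node-server-template" },
    MinSize: 1, // the one server that is never terminated
    MaxSize: 4, // room to grow when traffic demands it
    DesiredCapacity: 1,
    VPCZoneIdentifier: "subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs: [
      "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/node/abc123",
    ],
    HealthCheckType: "ELB", // replace instances the load balancer marks unhealthy
    HealthCheckGracePeriod: 120, // seconds for the Node server to boot first
  })
);
```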
