Failed to launch AWS Elastic Beanstalk in tutorial - Linux

I've been trying to work my way through this AWS Elastic Beanstalk tutorial. Following it to the letter, I'm getting a consistent error message at step #3.
Creating Auto Scaling group named: [xxx] failed. Reason: You have requested more instances (1) than your current instance limit of 0 allows for the specified instance type. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit. Launching EC2 instance failed.
The error message seems clear enough: I need to request an increase of my EC2 quota. However, I've done that; my quota is now at 10 EC2 instances, and I've also been approved for 40 Auto Scaling groups...
Any idea on what I'm missing? Full output attached.

I suspect it is still failing because you requested the increase for a different instance type.
First, go to your AWS console > EC2 > Limits page, where you will see something like the following:
Running On-Demand EC2 instances         10   Request limit increase
Running On-Demand c1.medium instances    0   Request limit increase
Running On-Demand c1.xlarge instances    0   Request limit increase
Running On-Demand m3.large instances     5   Request limit increase
You can see my overall limit is 10 instances, but the limits for the c1.medium and c1.xlarge instance types are 0, and only m3.large has a limit of 5. So you must ask AWS to increase the limit for exactly the instance type you want to use.
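If you prefer the command line, the same per-type limits are exposed through Service Quotas. A minimal sketch (the quota code below is the one I believe maps to the standard On-Demand instance family; verify it in your own account before requesting):

# List EC2 On-Demand instance quotas (names and values vary by account)
aws service-quotas list-service-quotas --service-code ec2 \
    --query "Quotas[?contains(QuotaName, 'On-Demand')].[QuotaName,Value]" --output table

# Request an increase for the Running On-Demand Standard instances quota
aws service-quotas request-service-quota-increase \
    --service-code ec2 --quota-code L-1216C47A --desired-value 10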

Related

Azure App Service - WEBSITE_HEALTHCHECK_MAXPINGFAILURES and time of Load Balancing

Description
The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to 2, your instances will be removed after 2 failed pings. (Default value is 10)
Here is the description for WEBSITE_HEALTHCHECK_MAXPINGFAILURES. What is the difference between WEBSITE_HEALTHCHECK_MAXPINGFAILURES and the Load Balancing in the picture below?
I found that when I change Load Balancing to 5, the value of WEBSITE_HEALTHCHECK_MAXPINGFAILURES also changes to 5.
Test
Before enabling Health Check, there are no such requests.
After enabling Health Check, every instance receives two requests from localhost per minute.
There is no difference. The portal simply offers a better UI for setting WEBSITE_HEALTHCHECK_MAXPINGFAILURES. Both represent the number of minutes of consecutive failed pings before App Service determines an instance is unhealthy and removes it, because:
Health check pings this path on all instances of your App Service app at 1-minute intervals.
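Since the portal slider and the app setting are the same knob, you can also set the value directly. A minimal sketch with the Azure CLI, using placeholder resource group and app names:

# Set the maximum failed pings before an instance is removed (placeholder names)
az webapp config appsettings set \
    --resource-group my-rg --name my-app \
    --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=5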

AWS EBS runs into "504 Gateway Time-out"

I'm new to using AWS EBS and ECS, so please bear with me if I ask questions that might be obvious to others. To the issue:
I've got a single-container Node/Express application that runs on EBS. The local Docker container works as expected. On EBS, I can access one endpoint of the API and get the expected output. For the second endpoint, which takes longer (around 10-15 seconds), I get no response and after 60 seconds run into a timeout: "504 Gateway Time-out".
I wonder how I should approach debugging this, as I can't connect to the container directly. There isn't any debugging functionality included in the code either, as I'm not sure what the best Node approach for an EBS container is - any recommendations are highly appreciated.
Thank you in advance!
You can see the EC2 instances running your EBS environment in the AWS console, and you can choose to give them IP addresses in your EBS options. That will let you SSH directly into them if you need to.
Otherwise, check the keepAliveTimeout field on your server (the value returned by app.listen() if you're using Express).
I got a decent number of 504s when my Node server's timeout was shorter than my load balancer's timeout.
Your application takes longer than expected (> 60 seconds) to respond, so either nginx or the Load Balancer terminates your request.
See my answer here
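If the load balancer is the component cutting the connection, you can inspect and raise its idle timeout from the CLI. A minimal sketch for a classic ELB, assuming Elastic Beanstalk created one whose real name you substitute for the placeholder awseb-my-env:

# Show the current idle timeout of the environment's load balancer
aws elb describe-load-balancer-attributes --load-balancer-name awseb-my-env

# Raise it above your slowest endpoint's response time (e.g. 120 seconds)
aws elb modify-load-balancer-attributes --load-balancer-name awseb-my-env \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":120}}'

If your environment uses an Application Load Balancer instead, the equivalent knob is the idle_timeout.timeout_seconds attribute under aws elbv2 modify-load-balancer-attributes.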

How can I fix the AWS EC2 "StatusCheckFailed" problem?

My EC2 information
OS: Ubuntu
Instance type: t2.micro (free tier)
[Screenshots: instance status checks on 09/03, 09/07, and 09/08]
What I did / what I know
It happens at almost the same time.
I checked NetworkIn, NetworkOut, and CPUUtilization.
I use Docker (only for Oracle 11g XE).
I use Tomcat.
I edited the inbound rules to allow all ports from any source (0.0.0.0/0).
[Screenshots: CPU utilization on 09/03, 09/07, and 09/08]
How can I fix it?
AWS provides a detailed guide on what to do when an instance fails a status check:
Troubleshooting instances with failed status checks
Depending on the likely cause, you have a number of options for dealing with it, as listed in the guide, ranging from rebooting the instance to stopping and restarting it or migrating to a new instance type.
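As a first pass from the CLI, you can see which of the two checks (system vs. instance) is failing and pull the console log before deciding what to do. A minimal sketch with a placeholder instance ID:

# Show system vs. instance status checks (placeholder instance ID)
aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0 \
    --include-all-instances

# Grab the console output to look for kernel or out-of-memory errors
# around the time of the failure
aws ec2 get-console-output --instance-id i-0123456789abcdef0

# If the guide points to a transient issue, a reboot is often the quickest fix
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0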

AWS AutoScaling Not Scaling Up

I've set up an AWS Auto Scaling group with 2 alarms: increase the number of servers if the average load is above 65%, and decrease it if it's below 35%. Not sure what the final numbers will be, but this is what I initially used. I ran a yes >& /dev/null command on the Linux server and the load very quickly went up to 100% (as reported by the Linux top command), but no new instances were being launched, because I think the alarms were not triggering. How exactly is the CPU load average computed/retrieved by the auto scaler?
I also, as an experiment, stopped responding to the AWS health-check pings from the server, so it was deemed unhealthy by AWS. The server was terminated and a new one was started up. So I know that launching/terminating servers for "health" reasons is working in the Auto Scaling group.
What else should I look at to diagnose the problem?
Is my way of stressing the server not the "right" way as far as the Auto Scaler is concerned?
Is it using a different benchmark?
[This is a comment not an answer]
You can use set-alarm-state in aws cli to trigger your alarms
aws cloudwatch set-alarm-state --alarm-name "myalarm" --state-value ALARM --state-reason "testing purposes"
This way you can easily test them out. If you still have problems then maybe you can post the output of
aws cloudwatch describe-alarms --alarm-names "myalarm"
NOTE: The alarm is evaluated on the average CPU across all instances in the group, so the average over both of your instances must cross 65% before the Auto Scaling group launches a new instance.
You can use tools such as Bees with Machine Guns, LoadRunner, and other load-testing tools to push your servers' load above 65%.
Suggestion: Check your server load in CloudWatch metrics rather than from inside the server (using top). CloudWatch's CPUUtilization is a percentage measured at the hypervisor level, not the Unix load average that top reports, so this gives you a clear picture of exactly what AWS sees when evaluating your instances.
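For example, you can pull the same CPUUtilization datapoints the alarm is evaluated on. A minimal sketch, assuming the group is named my-asg and substituting the time window of your own test:

# Average CPU across the Auto Scaling group, as CloudWatch (and the alarm) sees it
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --statistics Average --period 300 \
    --start-time 2024-01-01T00:00:00Z --end-time 2024-01-01T01:00:00Z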

Amazon Elastic MapReduce: Failed to create a job flow with a large number of instances

Every time I attempt to create a job flow with more than 20 instances, the creation fails.
It works for me most of the time with fewer than 20 instances.
Is there any limitation on the number of instances allowed for a job flow?
By the way, I use the EMR CLI:
ruby elastic-mapreduce --create --alive --key-pair key --num-instances 30 --name MyJobFlow
Then I see:
j-3KRJLHMHWR1ZC FAILED MyJobFlow
Yes, Amazon limits accounts to 20 instances by default. You can go to http://aws.amazon.com/contact-us/ec2-request/ to request a higher limit.
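For reference, the legacy Ruby client has since been deprecated; the same request with the current AWS CLI would look something like this sketch, with the release label, instance type, and key name as placeholder values:

# Create a long-running ("alive") cluster with 30 instances (placeholder values)
aws emr create-cluster --name MyJobFlow \
    --release-label emr-6.15.0 \
    --instance-type m5.xlarge --instance-count 30 \
    --use-default-roles --ec2-attributes KeyName=key \
    --no-auto-terminate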
