Is there an option or a setting somewhere to control the timeout for an aws ec2 wait command?
Or the number of attempts or waiting period between attempts?
I want to be able to aws ec2 wait instance-terminated for some instances I'm quickly spinning up to perform a few tasks and then terminating. It times out on some longer-running tasks with "Waiter InstanceTerminated failed: Max attempts exceeded".
I can't seem to find any info anywhere. I've grepped the CLI source code, but my knowledge of Python is too limited for me to understand what's going on. I see there might be something in this test using maxAttempts and delay, but I can't figure out how to leverage that from the CLI.
So far my suboptimal solution is to sleep first, then start the wait.
There is no timeout option in the AWS CLI waiters themselves, but you can use the native timeout command from coreutils to do what you want.
timeout 10 aws ec2 wait instance-terminated
will abort if the command does not return within 10 seconds. On a timeout it returns exit code 124; otherwise it returns the exit code of the wrapped command.
There's an open GitHub issue about adding configurable parameters: https://github.com/aws/aws-cli/issues/1295
You can also find some environment variables you can define here:
https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
One of them is AWS_MAX_ATTEMPTS, the total number of requests to make.
But for my use case (restoring a DynamoDB table from a snapshot) it does not seem to work.
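For reference, the maxAttempts and delay values the question mentions are exposed through the AWS SDKs even though the CLI doesn't surface them. A minimal Node.js sketch, assuming the AWS SDK for JavaScript v2 (the region and instance ID below are placeholders):

    // Drive the same InstanceTerminated waiter from code, overriding the
    // polling interval and attempt count via the $waiter parameters.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' }); // placeholder region

    ec2.waitFor('instanceTerminated', {
      InstanceIds: ['i-0123456789abcdef0'],            // placeholder instance ID
      $waiter: {
        delay: 30,        // seconds between polls
        maxAttempts: 120, // give up (and error) after this many polls
      },
    }, (err, data) => {
      if (err) console.error('Waiter gave up:', err);
      else console.log('Instance terminated');
    });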
I'm using Cloud Run to run my deployment test suite. It takes about 3 minutes to run the suite and my instance timeout is set to 5 minutes.
I've set up a Cloud Run project that will accept an HTTP request (from my CI provider) triggering the tests to run, and then report back pass/fail.
Even though the containers are set to only handle 1 concurrent request, they are accepting a second request after the first test run completes. As the first run took up 3 of the available 5 minutes, the second request times out at 2 minutes.
So, does anyone know of a way to either self-terminate a given instance (preferably from within) or to set the total number of requests an instance will accept before closing itself?
Thank you very much for reading my question. Any help would be greatly appreciated!
There is no instance timeout in Cloud Run. The timeout applies to request processing: you set the maximum duration allowed to process a single request (up to 3600 seconds). So in your case you shouldn't have this timeout issue, or else I have misunderstood your configuration and current issue.
The other part of your question is "how to stop an instance". Simply exit the process! The method differs by language; in Python, for example, exit(0).
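If you do want an instance to shut itself down after a fixed number of requests, here is a minimal Node.js sketch of that idea, assuming an Express-style server (the route, port handling, and MAX_REQUESTS limit are illustrative, not from the question):

    const express = require('express'); // assumed web framework
    const app = express();

    const MAX_REQUESTS = 1; // illustrative limit on requests per instance
    let handled = 0;

    app.post('/run-tests', (req, res) => {
      handled += 1;
      // ... run the test suite here and report pass/fail ...
      res.json({ passed: true });

      if (handled >= MAX_REQUESTS) {
        // Exit only after the response has been fully sent; exiting the
        // process ends this container instance, and Cloud Run will start a
        // fresh one for the next request.
        res.on('finish', () => process.exit(0));
      }
    });

    app.listen(process.env.PORT || 8080);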
I am working on AWS Lambda functions (NodeJS) that connect to a MongoDB server running on an EC2 instance.
The Lambda function is placed in VPC-1 and the MongoDB server (EC2 instance) is in VPC-2.
We have set up VPC peering between VPC-1 and VPC-2.
The Lambda function is intermittently throwing timeout errors. It works 50% of the time, and the other 50% of the time it throws a timeout error.
Note: The MongoDB server runs on an EC2 instance set up specifically for the development of this project. It does not get any additional traffic.
Also, another NodeJS component of this project, running on another EC2 instance, can communicate with the MongoDB server without any timeout issues.
Could someone help me in understanding the possible cause of the timeout issues?
Thanks in advance.
Hope the article linked below might solve your problem:
To fix: Increase the timeout setting/memory on the configuration page of your Lambda function
For Node.js async-related issues, please refer to the link below:
AWS Lambda: Task timed out
Lambda timeouts can best be described as
The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
Within the console you can increase this timeout to a higher value.
When you click on the Lambda function there will be a Monitoring tab. From here you should be able to see the execution time of your Lambda functions. You might find that it's always close to the limit.
I'd recommend increasing the timeout a bit higher than you anticipate it needs, then reviewing these metrics. Once you have a baseline, adjust the timeout value again.
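If you want to see from inside the function how close an invocation gets to that limit, the Node.js handler context exposes the remaining time. A small sketch (doWork is a placeholder for the real logic):

    // Log how much of the configured timeout is left, so the CloudWatch logs
    // show how close each invocation came to the limit.
    exports.handler = async (event, context) => {
      console.log('Remaining at start (ms):', context.getRemainingTimeInMillis());

      const result = await doWork(event); // placeholder for the real work

      console.log('Remaining at end (ms):', context.getRemainingTimeInMillis());
      return result;
    };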
I wrote a simple Mongo test, trying to access a Mongo server in a VPC.
For every run I get: "errorMessage": "*** Task timed out after 3.00 seconds"
I have written more handlers in the Lambda just to check it.
There is no problem connecting to the VPC; another handler (same file) that connects to a different server runs fine.
There is no problem with other modules. I added another module (make-random-string) and it runs every time.
I get no error messages and no exceptions from Mongo. It just times out every time.
Increasing memory to 1024 MB and execution time to 15 s didn't help; the results are the same.
The Mongo driver does not require any C++ builds unless you use Kerberos, which I'm not.
A test file mimicking the Lambda runs fine.
The sample code is here: http://pastebin.com/R2e3jwwa where the db information is removed.
Thanks.
As weird as it may sound, we finally solved the problem just by changing the callback(null, response) to context.done(null, response). This nonsense took us more time than we would have liked to spend here.
You can find more info about the issue here https://github.com/serverless/serverless/issues/1036
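For context (this explanation is not in the thread, so treat it as an assumption): with callback(), Lambda by default waits for the Node.js event loop to drain, and an open MongoDB connection keeps it alive until the timeout; context.done() ends the invocation immediately, and setting context.callbackWaitsForEmptyEventLoop = false achieves the same thing while keeping callback(). A sketch:

    // Sketch of the two workarounds, assuming the handler holds an open
    // MongoDB connection that would otherwise keep the event loop busy.
    exports.handler = (event, context, callback) => {
      // Option 1: don't wait for the event loop (the open Mongo socket) to drain.
      context.callbackWaitsForEmptyEventLoop = false;

      queryMongo(event) // placeholder for the real query
        .then((response) => callback(null, response))
        .catch((err) => callback(err));

      // Option 2 (the fix described above): call context.done(null, response)
      // instead of callback(null, response) to end the invocation immediately.
    };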
I had the same issue. The solution was to move the database connection object outside the handler method and cache/reuse it.
Here I added more details about it:
https://stackoverflow.com/a/67530789/10664035
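A minimal sketch of that pattern, assuming the official mongodb Node.js driver (the URI, database, and query are placeholders):

    // The client lives outside the handler, so a warm container reuses the
    // existing connection instead of reconnecting on every invocation.
    const { MongoClient } = require('mongodb');

    let cachedClient = null;

    async function getClient() {
      if (!cachedClient) {
        cachedClient = await MongoClient.connect(process.env.MONGO_URI); // placeholder URI
      }
      return cachedClient;
    }

    exports.handler = async (event, context) => {
      // Don't keep the invocation alive just because the socket stays open.
      context.callbackWaitsForEmptyEventLoop = false;

      const client = await getClient();
      const doc = await client.db('mydb').collection('items').findOne({}); // placeholder query
      return { statusCode: 200, body: JSON.stringify(doc) };
    };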
I'm currently building web API using AWS Lambda with Serverless Framework.
In my Lambda functions, each of them connects to Redis (ElastiCache) and an RDB (Aurora, RDS) or DynamoDB to retrieve data or write new data.
And all my lambda functions are running in my VPC.
Everything works fine except that when a Lambda function is first executed, or executed a while after the last execution, it takes quite a long time (1-3 seconds) to run, or sometimes it even responds with a gateway timeout error (around 30 seconds), even though my Lambda functions are configured with a 60-second timeout.
As stated here, I assume the 1-3 seconds is for initializing a new container. However, I wonder if there is a way to reduce this time, because 1-3 seconds or a gateway timeout is not really ideal for production use.
You've got two issues:
The 1-3 second delay. This is expected and well-documented when using Lambda. As #Nick mentioned in the comments, the only way to prevent your container from going to sleep is using it. You can use Lambda Scheduled Events to execute your function as often as every minute using a rate expression rate(1 minute). If you add some parameters to your function to help you distinguish between a real request and one of these ping requests you can immediately return on the ping requests and then you've worked around your problem. It will cost you more, but we're probably talking pennies per month if anything. Lambda has a generous free tier.
The 30 second delay is unusual. I would definitely check your CloudWatch logs. If you see logs from when your function is working normally but no logs from when you see the 30 second timeout, then I would assume the problem is with API Gateway and not with Lambda. If you do see logs then maybe they can help you troubleshoot. Another place to check is the AWS Status Page. I've sometimes seen Lambda functions time out and respond only intermittently, and I pull my hair out only to realize that there's a problem on Amazon's end and they're working on it.
Here's a blog post with additional information on Lambda Container Reuse that, while a little old, still has some good information.
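A minimal sketch of the ping idea from the first point above, assuming the scheduled event is given a distinguishing field (the warmup field name is an assumption, set on the Scheduled Event's input):

    // Short-circuit scheduled "keep warm" pings so they return immediately,
    // while real API Gateway requests fall through to the normal logic.
    exports.handler = async (event, context) => {
      if (event.warmup) {
        return { statusCode: 200, body: 'warm' };
      }

      // ... normal request handling (Redis / Aurora / DynamoDB work) goes here ...
      return { statusCode: 200, body: JSON.stringify({ ok: true }) };
    };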
I've set up an AWS Auto Scaling group. I have 2 alarms to increase the number of servers if the average load is above 65% and to decrease it if it's less than 35%. I'm not sure what the final numbers will be, but this is what I initially used. I ran a yes >& /dev/null command on the Linux server and the load very quickly went up to 100% (as reported by the Linux top command), but no new instances were being launched, because I think the alarms were not triggering. How exactly is the CPU load average computed/retrieved by the Auto Scaler?
I also, as an experiment, stopped responding to the AWS health-check pings from the server, and thus it was deemed unhealthy by AWS. The server was terminated and a new one was started up. So I know that launching/terminating of servers works in the Auto Scaler for "health" reasons.
What else should I look at to diagnose the problem?
Is my way of stressing the server not the "right" way as far as the Auto Scaler is concerned?
Is it using a different benchmark?
[This is a comment not an answer]
You can use set-alarm-state in the AWS CLI to trigger your alarms:
aws cloudwatch set-alarm-state --alarm-name "myalarm" --state-value ALARM --state-reason "testing purposes"
This way you can easily test them out. If you still have problems then maybe you can post the output of:
aws cloudwatch describe-alarms --alarm-names "myalarm"
NOTE: The average load across both instances must cross 65%; only then is a new instance launched. So, in your case, the combined average load on both instances has to exceed 65% before the Auto Scaling group launches a new instance.
You can use tools such as Bees with Machine Guns, LoadRunner, and other load-testing tools to push the load on your server above 65%.
Suggestion: Check your server load via CloudWatch metrics rather than from inside the server (using top). This will give you a clear picture of how AWS is calculating your instance load.
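As a rough sketch of that suggestion, assuming the AWS SDK for JavaScript v2 (the region and instance ID are placeholders), you can pull the same CPUUtilization datapoints the alarm evaluates:

    // Read the CloudWatch CPUUtilization metric the alarm is based on,
    // rather than trusting `top` inside the instance.
    const AWS = require('aws-sdk');
    const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' }); // placeholder region

    cloudwatch.getMetricStatistics({
      Namespace: 'AWS/EC2',
      MetricName: 'CPUUtilization',
      Dimensions: [{ Name: 'InstanceId', Value: 'i-0123456789abcdef0' }], // placeholder ID
      StartTime: new Date(Date.now() - 15 * 60 * 1000), // last 15 minutes
      EndTime: new Date(),
      Period: 300,            // 5-minute datapoints
      Statistics: ['Average'],
    }, (err, data) => {
      if (err) console.error(err);
      else console.log(data.Datapoints);
    });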