Connection timeout between AWS Lambda function and MongoDB - node.js

I am working on AWS Lambda functions (Node.js) that connect to a MongoDB server running on an EC2 instance.
The Lambda function is placed in VPC-1 and the MongoDB server (EC2 instance) is in VPC-2.
We have set up VPC peering between VPC-1 and VPC-2.
The Lambda function intermittently throws a timeout error: it works about 50% of the time and times out the other 50%.
Note: the MongoDB server runs on an EC2 instance set up specifically for the development of this project. It does not receive any additional traffic.
Also, another Node.js component of this project, running on a different EC2 instance, can communicate with the MongoDB server without any timeout issues.
Could someone help me understand the possible cause of the timeouts?
Thanks in advance.

I hope the advice below solves your problem:
To fix: increase the timeout setting and/or memory on the configuration page of your Lambda function.
For Node.js async-related issues, please refer to the link below:
AWS Lambda: Task timed out
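Beyond raising the timeout, a common Node.js pattern for Lambda-to-MongoDB timeouts is to cache the client outside the handler so warm invocations reuse the connection. A minimal sketch, assuming the official mongodb driver and a MONGODB_URI environment variable (the database and collection names are placeholders):

// Cache the client in module scope so warm containers reuse it.
const { MongoClient } = require('mongodb');

let cachedClient = null;

exports.handler = async (event, context) => {
  // Don't keep the function alive waiting on the cached socket.
  context.callbackWaitsForEmptyEventLoop = false;

  if (!cachedClient) {
    cachedClient = await MongoClient.connect(process.env.MONGODB_URI, {
      serverSelectionTimeoutMS: 5000, // fail fast instead of hanging until the Lambda timeout
    });
  }

  const db = cachedClient.db('mydb'); // placeholder database name
  return db.collection('items').findOne({}); // placeholder query
};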

Lambda timeouts can best be described as
The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
Within the console you can increase this timeout to a greater number.
When you click on the Lambda function there will be a Monitoring tab. From there you should be able to see the execution time of your Lambda function. You might find that it's always close to the limit.
I'd recommend increasing the timeout a bit higher than you anticipate it needs, then reviewing these metrics. Once you have a baseline, adjust the timeout value again.
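If you'd rather not click through the console, the same change can be made with the AWS CLI (the function name is a placeholder):

aws lambda update-function-configuration --function-name my-function --timeout 30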

Related

AWS EBS runs into "504 Gateway Time-out"

I'm new to using AWS EBS and ECS, so please bear with me if I ask questions that might be obvious to others. To the issue:
I've got a single-container Node/Express application that runs on EBS. The local Docker container works as expected. On EBS, I can access one endpoint of the API and get the expected output. For the second endpoint, which runs longer (around 10-15 seconds), I get no response and after 60 seconds run into a timeout: "504 Gateway Time-out".
I wonder how I would approach debugging this, as I can't connect to the container directly. Currently there isn't any debugging functionality included in the code either, as I'm not sure what the best Node approach for an EBS container is - any recommendations are highly appreciated.
Thank you in advance!
You can see the EC2 instances running on EBS in your AWS console, and you can choose to give them IP addresses in your EBS options. That will let you SSH directly into them if you need to.
Otherwise, check the keepAliveTimeout field on your server (the value returned by app.listen() if you're using Express).
I got a decent number of 504s when my Node server's timeout was less than my load balancer's timeout.
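A minimal Express sketch of that fix, assuming the default 60-second ALB idle timeout (the 65/66-second values are a common rule of thumb, not an official requirement):

const express = require('express');
const app = express();
// ... routes ...

const server = app.listen(3000);
// Node's default keepAliveTimeout is 5 seconds; raising it above the load
// balancer's idle timeout prevents the LB from reusing a connection that
// Node has already closed, which surfaces as intermittent 502/504s.
server.keepAliveTimeout = 65 * 1000;
server.headersTimeout = 66 * 1000; // must be longer than keepAliveTimeout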
Your application takes longer than expected (> 60 seconds) to respond, so either nginx or the Load Balancer terminates your request.
See my answer here

AWS EC2 boots via scheduled Lambda, how to alert of errors?

My EC2 instance boots daily for 5 minutes before shutting down.
On bootup, a NodeJS script is executed. Usually this script will complete long before the 5 minutes are up, but I'd like to be notified (SMS/email) whenever it doesn't.
What is the correct approach? I can try to send a notification within my NodeJS code after 5 minutes if execution wasn't finished, but Lambda could shut down the instance before this occurs.
I'm quite new to AWS so I apologize if this is rather basic, I haven't had luck on Google with this issue.
Can you check whether whatever the Node script does while the EC2 instance is up could be replicated with one or more Lambda functions?
Think about serverless and microservices architecture. Theoretically, any workflow which needs servers could be achieved via AWS Lambda functions and various triggers. In your case I can think of the following (see the sketch after this list):
SES to send out email messages
API Gateway to expose your Lambda function for triggering
CloudWatch Events to trigger the Lambda function like a cron job.
I would be surprised to learn that serverless won't work here. Please do share the case so that I can brainstorm more and share a solution.
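As a starting point, a minimal sketch of the SES piece, assuming aws-sdk v2 and SES-verified addresses (all names, addresses, and the completion check are placeholders):

const AWS = require('aws-sdk');
const ses = new AWS.SES();

// Placeholder completion check; replace with your own signal
// (e.g. a flag the boot script writes to S3 or DynamoDB).
const didFinish = async () => false;

// Invoked by a CloudWatch Events schedule; emails an alert if the work
// didn't finish within the expected window.
exports.handler = async () => {
  if (await didFinish()) return; // script completed in time, no alert needed

  await ses.sendEmail({
    Source: 'alerts@example.com', // must be verified in SES
    Destination: { ToAddresses: ['you@example.com'] },
    Message: {
      Subject: { Data: 'EC2 boot script did not finish in time' },
      Body: { Text: { Data: 'The scheduled Node.js script overran its 5-minute window.' } },
    },
  }).promise();
};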

Lambda starts timing out randomly when communicating with DynamoDB

I have a Node.js Lambda code base that talks to a tiny dataset in DynamoDB (less than 400 bytes each). Every now and then the function will time out after 5 minutes whilst doing a get() request to DynamoDB (via new AWS.DynamoDB.DocumentClient()).
The problem is that it's completely random when this issue occurs, but when it works it takes ~2 seconds from a cold start, so taking over 5 minutes to run makes no sense, and it fails at random points.
It's a dev environment, so I'm the only user, and I'm making maybe 10 requests a day.
context.callbackWaitsForEmptyEventLoop = false; has been set
Memory allocation never exceeds 45MB (128MB set)
I'm testing directly in Lambda
The code is deployed via Serverless
When testing locally using Serverless it works, whilst the deployed Lambda fails
I've inherited this project but have a good understanding of the architecture around it, and it's fairly simple; however, I've not done much work with Lambda before.
Any ideas what I should look for, or any known issues, would be a massive help.
It sounds like one (or more) of the VPC subnets the Lambda function is configured to run in doesn't have a route to a NAT Gateway (or an AWS PrivateLink configuration). So whenever that subnet is used by the Lambda function it is unable to access the AWS API.
If the Lambda function doesn't actually need to access any resources in the VPC then it is much better to not configure it to use the VPC at all.
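While you track down the routing, you can also tighten the SDK's HTTP timeouts so a dead subnet surfaces as a quick error instead of a 5-minute hang. A minimal sketch, assuming aws-sdk v2 (the table name, key, and timeout values are placeholders):

const AWS = require('aws-sdk');

// Short connect/request timeouts make a subnet with no route to DynamoDB
// fail in seconds rather than minutes.
const docClient = new AWS.DynamoDB.DocumentClient({
  httpOptions: {
    connectTimeout: 1000, // ms to establish the connection
    timeout: 5000,        // ms for the request itself
  },
  maxRetries: 2,
});

exports.handler = async (event, context) => {
  context.callbackWaitsForEmptyEventLoop = false;
  return docClient
    .get({ TableName: 'my-table', Key: { id: event.id } })
    .promise();
};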

How to optimize AWS Lambda?

I'm currently building web API using AWS Lambda with Serverless Framework.
In my lambda functions, each of them connects to Redis (elasticache) and RDB (Aurora, RDS) or DynamoDB to retrieve data or write new data.
And all my lambda functions are running in my VPC.
Everything works fine except that when a Lambda function is first executed, or executed a while after the last execution, it takes quite a long time (1-3 seconds) to run, or sometimes it even responds with a gateway timeout error (around 30 seconds), even though my Lambda functions are configured with a 60-second timeout.
As stated here, I assume the 1-3 seconds is for initializing a new container. However, I wonder if there is a way to reduce this time, because 1-3 seconds or a gateway timeout is not really ideal for production use.
You've got two issues:
The 1-3 second delay. This is expected and well-documented when using Lambda. As @Nick mentioned in the comments, the only way to prevent your container from going to sleep is to keep using it. You can use Lambda Scheduled Events to execute your function as often as every minute using the rate expression rate(1 minute). If you add a parameter to your function to help you distinguish between a real request and one of these ping requests, you can return immediately on the ping requests and you've worked around the problem (a minimal handler sketch follows this answer). It will cost you more, but we're probably talking pennies per month, if anything. Lambda has a generous free tier.
The 30 second delay is unusual. I would definitely check your CloudWatch logs. If you see logs from when your function is working normally but no logs from when you see the 30 second timeout, then I would assume the problem is with API Gateway and not with Lambda. If you do see logs then maybe they can help you troubleshoot. Another place to check is the AWS Status Page. I've seen cases where Lambda functions time out and respond intermittently, and I pull my hair out only to realize that there's a problem on Amazon's end and they're working on it.
Here's a blog post with additional information on Lambda Container Reuse that, while a little old, still has some good information.
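A minimal sketch of the ping-detection idea from the first point, assuming the scheduled event carries a marker field (the warmup field name is an invented convention, not anything Lambda defines):

// Return immediately on scheduled warm-up pings; do real work otherwise.
exports.handler = async (event) => {
  if (event.warmup) {
    return 'pong'; // keeps the container warm without running the real logic
  }
  // ... normal request handling (Redis/RDS/DynamoDB calls) ...
  return { statusCode: 200, body: 'ok' };
};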

configure aws ec2 wait timeout option

Is there an option or a setting somewhere to control the timeout for an aws ec2 wait command?
Or the number of attempts or waiting period between attempts?
I want to be able to aws ec2 wait instance-terminated for some instances I'm quickly spinning up to perform a few tasks and then terminating. It times out on some longer-running tasks with "Waiter InstanceTerminated failed: Max attempts exceeded".
I can't seem to find any info anywhere. I've grepped the CLI source code, but my knowledge of Python is too limited for me to understand what's going on. I see there might be something in this test using maxAttempts and delay, but I can't figure out how to leverage that from the CLI.
So far my suboptimal solution is to sleep first, then start the wait.
There is not a timeout option in the AWS CLI, but you can just use the native timeout command from coreutils to do what you want.
timeout 10 aws ec2 wait instance-terminated
will abort if the command does not return within 10 seconds. On a timeout, timeout exits with code 124; otherwise it passes through the exit code of the command.
There's an open GitHub issue about adding configurable parameters: https://github.com/aws/aws-cli/issues/1295
You can also find some environment variables you can define here:
https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
One of them is AWS_MAX_ATTEMPTS (the number of total requests).
But for my use case (restoring a DynamoDB table from a snapshot) it does not seem to be working.
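If you can use an AWS SDK instead of the CLI, waiters expose the polling interval and attempt count directly. A minimal Node.js sketch using aws-sdk v2's $waiter override, per the v2 SDK's waiter documentation (the instance ID is a placeholder):

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

ec2.waitFor('instanceTerminated', {
  InstanceIds: ['i-0123456789abcdef0'],
  $waiter: { delay: 15, maxAttempts: 120 }, // poll every 15s for up to 30 minutes
}).promise()
  .then(() => console.log('terminated'))
  .catch((err) => console.error('waiter gave up:', err.message));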
