Serverless deploy failing due to timeout (Windows) - node.js

This is the error I'm getting after executing serverless deploy --verbose:
Uploading service api.zip file to S3 (20.81 MB)
Recoverable error occurred (Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.), sleeping for ~4 seconds. Try 1 of 4
I tried set AWS_CLIENT_TIMEOUT=300000, but unfortunately it doesn't change a thing.
Any idea how to go about this? It started happening after I installed the aws-sdk package and created a function that's supposed to be triggered by an S3 upload, but I don't think that's the problem; more likely it's the large file size combined with my slow internet connection.
Thanks in advance.
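A likely contributor to the 20.81 MB zip is that node_modules (including the installed aws-sdk) is being packaged. On the Node.js runtimes that bundle it (up to nodejs16.x; newer runtimes ship the v3 SDK instead), the aws-sdk can usually be required from the Lambda environment rather than uploaded, and the Serverless Framework's packaging options in serverless.yml can keep node_modules/aws-sdk out of the artifact, which makes the slow upload far less likely to hit the socket timeout. A minimal sketch of an S3-triggered handler along those lines; the export name and logging are assumptions, not your code:

    // handler.js -- hypothetical S3-triggered function; names are assumptions.
    // The aws-sdk is required from the Lambda runtime here, so it does not need
    // to be bundled into the deployment zip (which keeps the upload small).
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    module.exports.onUpload = async (event) => {
      // Each record describes one uploaded object.
      for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
        const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
        console.log(`New object ${key} in ${bucket}: ${head.ContentLength} bytes`);
      }
    };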

Related

Handle timeouts of Node Function Apps

I created an Azure Function App with a Node runtime, which works properly in local and manually created cloud environments.
But when it is deployed via Azure Pipelines, it writes a message via context.log and appears to be working, but eventually raises a timeout error.
Timeout value of 00:05:00 exceeded by function 'Functions.<...>' (Id: '<...>'). Initiating cancellation.
I suspect there is some blocking Node expression caused by a misconfiguration, but Application Insights logs no further context.
Is there a way to handle the cancellation event within the Function App to provide some Node runtime information (e.g. via SIGINT callbacks)?
I've tried to reproduce this issue but couldn't. However, I found a similar question and noticed that you can set the functionTimeout value in host.json; it may be worth trying that.
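Besides raising functionTimeout in host.json, another option is to race the function body against your own slightly shorter timer, so you can emit Node-side diagnostics via context.log before the host cancels the invocation. This is only a sketch under assumptions (classic context-based Node programming model; doWork stands in for the real logic):

    // index.js -- hypothetical diagnostic wrapper; names are assumptions.
    // Races the real work against a timer slightly below the host's
    // functionTimeout so we can log Node-side state before the host cancels us.

    const SOFT_TIMEOUT_MS = 4.5 * 60 * 1000; // just under the 00:05:00 host timeout

    async function doWork(context) {
      // ... the actual (possibly blocking) logic under investigation ...
    }

    module.exports = async function (context, myTrigger) {
      let timer;
      const watchdog = new Promise((resolve, reject) => {
        timer = setTimeout(() => {
          context.log.warn('Soft timeout hit; work still pending before host cancellation.');
          reject(new Error('soft timeout'));
        }, SOFT_TIMEOUT_MS);
      });
      try {
        return await Promise.race([doWork(context), watchdog]);
      } finally {
        clearTimeout(timer);
      }
    };

Note that if something blocks the event loop synchronously, the watchdog timer will not fire either, which is itself a useful signal that the hang is CPU-bound rather than an unresolved async call.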

Connection timeout between AWS Lambda function and MongoDB

I am working on AWS Lambda functions (Node.js) that connect to a MongoDB server running on an EC2 instance.
The Lambda function is placed in VPC-1 and the MongoDB server (EC2 instance) is in VPC-2.
We have set up VPC peering between VPC-1 and VPC-2.
The Lambda function intermittently throws a timeout error: it works about 50% of the time and times out the other 50%.
Note: the MongoDB server runs on an EC2 instance set up specifically for the development of this project; it does not get any additional traffic.
Also, another Node.js component of this project, running on a different EC2 instance, can communicate with the MongoDB server without any timeout issues.
Could someone help me understand the possible cause of the timeout issues?
Thanks in advance.
Hope the article below might solve your problem:
To fix: increase the timeout setting/memory on the configuration page of your Lambda function.
For Node.js async-related issues, please refer to the link below:
AWS Lambda: Task timed out
Lambda timeouts can best be described as
The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
Within the console you can increase this timeout to a greater number.
When you click on the Lambda function there will be a Monitoring tab. From there you should be able to see the execution time of the function. You might find that it's always close to the limit.
I'd recommend increasing the timeout a bit higher than you anticipate it needs, then reviewing these metrics. Once you have a baseline, adjust this timeout value again.
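One common cause of a roughly 50/50 failure pattern is that only some of the subnets attached to the Lambda function have a route to the peered VPC, so it is worth checking each subnet's route table and the security groups on both sides. Independently of that, you can make the MongoDB driver fail fast with a descriptive error instead of silently hitting the Lambda timeout. A minimal sketch, assuming a current mongodb Node driver; the URI, database, collection, and timeout values are assumptions:

    // handler.js -- hypothetical sketch; URI, db, collection and timeouts are assumptions.
    const { MongoClient } = require('mongodb');

    exports.handler = async () => {
      // Fail fast with a real driver error instead of hitting the Lambda timeout,
      // so CloudWatch shows *why* the connection across the VPC peering failed.
      const client = new MongoClient(process.env.MONGO_URI, {
        connectTimeoutMS: 3000,
        serverSelectionTimeoutMS: 3000,
      });
      try {
        await client.connect();
        const doc = await client.db('appdb').collection('items').findOne({});
        return { ok: true, doc };
      } finally {
        await client.close();
      }
    };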

lambda timed out when trying to access mongo

I wrote a simple Mongo test, trying to access a Mongo server in a VPC.
for every run I get : "errorMessage": "*** Task timed out after 3.00 seconds"
I have written more handlers in the Lambda just to check it.
There is no problem connecting to the VPC: another handler (in the same file) that connects to a different server runs fine.
There is no problem with other modules: I added another module (make-random-string) and it runs every time.
I get no error messages and no exceptions from Mongo; it just times out every time.
Increasing the memory to 1024 MB and the execution time to 15 s didn't help; the results are the same.
The Mongo driver does not require any C++ builds unless you use Kerberos, which I don't.
A test file mimicking the Lambda runs fine.
The sample code is here: http://pastebin.com/R2e3jwwa where the db information is removed.
Thanks.
As weird as it may sound, we finally solved the problem just by changing the callback(null, response) to context.done(null, response). This nonsense took us more time than we would have liked to spend here.
You can find more info about the issue here https://github.com/serverless/serverless/issues/1036
I had the same issue. The solution was to move the database connection object outside the handler method and cache/reuse it.
Here I added more details about it:
https://stackoverflow.com/a/67530789/10664035
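Both answers point at the same underlying behavior: with callback, Lambda by default waits for the Node event loop to empty, and an open MongoDB connection keeps it busy until the task times out, whereas context.done returns immediately. A minimal sketch combining the two suggestions (cache the connection outside the handler and don't wait for the event loop); the URI, database, and collection names are assumptions:

    // Hypothetical sketch combining both answers; URI, db and collection are assumptions.
    const { MongoClient } = require('mongodb');

    let cachedClient = null; // survives across invocations while the container is warm

    async function getClient() {
      if (!cachedClient) {
        cachedClient = new MongoClient(process.env.MONGO_URI);
        await cachedClient.connect();
      }
      return cachedClient;
    }

    exports.handler = async (event, context) => {
      // Don't wait for the event loop to drain -- the open socket would keep it
      // busy until the 3-second task timeout. This is what context.done sidesteps.
      context.callbackWaitsForEmptyEventLoop = false;

      const client = await getClient();
      const doc = await client.db('test').collection('items').findOne({});
      return { statusCode: 200, body: JSON.stringify(doc) };
    };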

Couchbase client-side timeout after idle period

NodeJs: v0.12.4
Couchbase: 2.0.8
Service deployed with PM2
The bucket instance is created once per service rather than once per call, based on a recommendation from Couchbase support, since instantiating and connecting a bucket is expensive.
Under load everything seems to be in order, with a near-zero failure rate.
After a couple of days of the service being barely (if at all) in use, the client fails to connect to the bucket with the following error:
{"message":"Client-Side timeout exceeded for operation. Inspect network conditions or increase the timeout","code":23}
Recycling the node.js process using 'pm2 restart' resolves the issue.
Any ideas/suggestions short of re-creating the bucket instance and re-connecting to it?
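There is no accepted answer here, but a common workaround for idle-connection drops is to keep the long-lived bucket connection warm with a periodic lightweight operation, so firewalls/NAT (or the cluster) never see it as idle. A minimal sketch, assuming the callback-style 2.x Node SDK; the connection string, key name, and interval are assumptions:

    // Hypothetical keep-alive sketch for the Couchbase Node SDK 2.x; the connection
    // string, key name and interval are assumptions, not from the original post.
    var couchbase = require('couchbase');

    var cluster = new couchbase.Cluster('couchbase://127.0.0.1');
    var bucket = cluster.openBucket('default');

    // Seed a dedicated key once, then touch it periodically so the connection is
    // never idle long enough to be silently dropped.
    bucket.upsert('keepalive::ping', { ts: Date.now() }, function (err) {
      if (err) { console.error('Keep-alive seed failed:', err); }
    });

    setInterval(function () {
      bucket.get('keepalive::ping', function (err) {
        // Error code 23 here would be the same client-side timeout from the question.
        if (err) { console.error('Couchbase keep-alive failed:', err); }
      });
    }, 60 * 1000).unref();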

How to have fast redirects with async db writes?

I am currently working on a Node.js API deployed on AWS with Elastic Beanstalk.
The API accepts a URL with query parameters, saves the parameters to a DB (in my case AWS RDS), and redirects to a new URL without waiting for the DB response.
The main priorities by far for the API are redirection speed and the ability to handle a lot of requests; the aim of this question is to get your suggestions on how to achieve that.
I ran the API through a service called blitz.io to see what load it could handle, and this is the report I got from them: https://www.dropbox.com/s/15wsa8ksj3lz99e/Blitz.pdf?dl=0
The instance and the database are running on t2.micro and db.t2.micro respectively.
The API can handle the load if no write is performed on the DB, but crashes under a certain load when it writes to the DB (I shared the report for the latter case), even without waiting for the DB responses.
I checked the logs and found the following error in /var/log/nginx/error.log:
*1254 socket() failed (24: Too many open files) while connecting to upstream
I am not familiar with how nginx works, but I imagine that every DB connection is seen as an open file. Hence, the error implies that we hit the open-files limit before being able to close the connections. Is that a correct interpretation? Why am I getting the error?
I increased the limit in the way suggested here: https://forums.aws.amazon.com/thread.jspa?messageID=613983#613983 but it did not solve the problem.
At this point I am not sure what to do. Can I close the connections before getting a response from the DB? Is it a hardware limitation? The writes to the DB are tiny.
Thank you in advance for your help! :)
If you just modified ulimit, it might not be enough. You should also look at fs.file-max for the system-wide limit on the number of file descriptors:
sysctl -w fs.file-max=100000
as explained here:
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
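Raising the descriptor limits treats the symptom; on the Node side it also helps to keep the number of open sockets bounded by reusing a fixed-size connection pool and sending the redirect before the write completes. A minimal sketch, assuming Express and the mysql driver against RDS; the route, table, and column names are assumptions:

    // Hypothetical sketch; module choice (express/mysql), table and columns are assumptions.
    const express = require('express');
    const mysql = require('mysql');

    // A fixed-size pool keeps the number of open sockets to RDS bounded,
    // instead of opening (and leaking) one connection per incoming request.
    const pool = mysql.createPool({
      host: process.env.RDS_HOST,
      user: process.env.RDS_USER,
      password: process.env.RDS_PASSWORD,
      database: process.env.RDS_DB,
      connectionLimit: 10,
    });

    const app = express();

    app.get('/r', (req, res) => {
      // Redirect immediately -- the client never waits on the database.
      res.redirect(302, req.query.target || 'https://example.com');

      // Fire-and-forget write; the pool queues it if all connections are busy.
      pool.query('INSERT INTO clicks SET ?', { params: JSON.stringify(req.query) },
        (err) => { if (err) console.error('DB write failed:', err); });
    });

    app.listen(process.env.PORT || 3000);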
