AWS lambda with mongoose to Atlas - MongoNetworkError - node.js

I am trying to connect to MongoDB Atlas from AWS Lambda using Mongoose, but I get a MongoNetworkError.
AWS Lambda
Mongoose
MongoDB Atlas
The same code was tested with serverless-offline and works perfectly; the problem only appears when I deploy it to AWS Lambda.
This is the code snippet:
'use strict';
const mongoose = require('mongoose');
const MongoClient = require('mongodb').MongoClient;

let dbuser = process.env.DB_USER;
let dbpass = process.env.DB_PASSWORD;

let opts = {
  bufferCommands: false,
  bufferMaxEntries: 0,
  socketTimeoutMS: 2000000,
  keepAlive: true,
  reconnectTries: 30,
  reconnectInterval: 500,
  poolSize: 10,
  ssl: true,
};

const uri = `mongodb+srv://${dbuser}:${dbpass}@carpoolingcluster0-bw91o.mongodb.net/awsmongotest?retryWrites=true&w=majority`;

// simple hello test
module.exports.hello = async (event, context, callback) => {
  const response = {
    body: JSON.stringify({ message: 'AWS Testing :: ' + `${dbuser} and ${dbpass}` }),
  };
  return response;
};

// connect using mongoose
module.exports.cn1 = async (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  let conn = await mongoose.createConnection(uri, opts);
  const M = conn.models.Test || conn.model('Test', new mongoose.Schema({ name: String }));
  const doc = await M.find();
  const response = {
    body: JSON.stringify({ data: doc }),
  };
  return response;
};

// connect using mongodb
module.exports.cn2 = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  console.log('Connecting to mongo using MongoClient');
  MongoClient.connect(uri).then(client => {
    console.log('Successfully connected to mongo DB::::');
    client.db('awsmongotest').collection('tests').find({}).toArray()
      .then((result) => {
        let response = {
          body: JSON.stringify({ data: result }),
        };
        callback(null, response);
      });
  }).catch(err => {
    console.log('=> an error occurred: ', err);
    callback(err);
  });
};
In the CloudWatch logs I see this error:
{
"errorType": "MongoNetworkError",
"errorMessage": "failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
"stack": [
"MongoNetworkError: failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
" at Pool.<anonymous> (/var/task/node_modules/mongodb-core/lib/topologies/server.js:431:11)",
" at Pool.emit (events.js:189:13)",
" at connect (/var/task/node_modules/mongodb-core/lib/connection/pool.js:557:14)",
" at callback (/var/task/node_modules/mongodb-core/lib/connection/connect.js:109:5)",
" at runCommand (/var/task/node_modules/mongodb-core/lib/connection/connect.js:129:7)",
" at Connection.errorHandler (/var/task/node_modules/mongodb-core/lib/connection/connect.js:321:5)",
" at Object.onceWrapper (events.js:277:13)",
" at Connection.emit (events.js:189:13)",
" at TLSSocket.<anonymous> (/var/task/node_modules/mongodb-core/lib/connection/connection.js:350:12)",
" at Object.onceWrapper (events.js:277:13)",
" at TLSSocket.emit (events.js:189:13)",
" at _handle.close (net.js:597:12)",
" at TCP.done (_tls_wrap.js:388:7)"
],
"name": "MongoNetworkError",
"errorLabels": [
"TransientTransactionError"
]
}
Here is an example on GitHub to reproduce the error:
https://github.com/rollrodrig/error-aws-mongo-atlas
Just clone it, run npm install, add your Mongo Atlas user and password, and deploy it to AWS.
Thanks.

Some extra steps are required to let a Lambda function reach an external endpoint:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Atlas should also whitelist the IP addresses of the servers from which the Lambda function will connect.
Another option to consider is VPC peering between your Lambda VPC and Atlas.

I have some questions concerning your configuration:
Did you whitelist the AWS Lambda function's IP address in Atlas?
Several posts on SO indicate that users get a MongoNetworkError like this if the IP is not whitelisted. [1][4]
Did you read the best-practices guide by Atlas, which states that MongoDB connections should be initiated outside the Lambda handler? [2][3] (A minimal sketch of that pattern follows this list.)
Do you use a public Lambda function or a Lambda function inside a VPC? There is a substantial difference between them, and the latter is more error-prone since the VPC configuration (e.g. NAT) must be taken into account.
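Here is a minimal sketch of that pattern, adapted from the cn1 handler in the question (the MONGO_URI environment variable and the option values are illustrative, not from the original post): the connection is created once per container on a cold start and reused across warm invocations.
'use strict';
const mongoose = require('mongoose');

let conn = null; // cached in module scope, survives warm invocations

module.exports.cn1 = async (event, context) => {
  // Return as soon as the handler resolves, even though the DB socket stays open.
  context.callbackWaitsForEmptyEventLoop = false;

  if (conn == null) {
    // This block only runs on a cold start, not on every invocation.
    conn = await mongoose.createConnection(process.env.MONGO_URI, {
      bufferCommands: false,
    });
    conn.model('Test', new mongoose.Schema({ name: String }));
  }

  const docs = await conn.model('Test').find();
  return { body: JSON.stringify({ data: docs }) };
};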
I was able to ping the instances in the Atlas cluster and was able to establish a connection on port 27017. However, when connecting via the mongo shell, I get the following error:
Unable to reach primary for set CarpoolingCluster0-shard-0.
Cannot reach any nodes for set CarpoolingCluster0-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
When I use your GitHub sample from AWS lambda I get the exact same error message as described in the question.
As the error messages are not authentication-related but network-related, I assume that something is blocking the connection... Please double-check the three config questions above.
[1] What is a TransientTransactionError in Mongoose (or MongoDB)?
[2] https://docs.atlas.mongodb.com/best-practices-connecting-to-aws-lambda/
[3] https://blog.cloudboost.io/i-wish-i-knew-how-to-use-mongodb-connection-in-aws-lambda-f91cd2694ae5
[4] https://github.com/Automattic/mongoose/issues/5237

Well, thanks everyone. I finally found the solution with the help of MongoDB support.
Here is the solution for anyone who needs it.
When you create a MongoDB Atlas cluster, you are asked to add your local IP, and it is automatically added to the whitelist. You can see it under
Your cluster > Network Access > IP Whitelist; your IP appears in that list. This means that only machines on YOUR network can connect to your MongoDB Atlas cluster. AWS Lambda is NOT on your network, so AWS Lambda will never be able to connect to your Mongo Atlas. That is why I get the MongoNetworkError.
Fix
You need to add the AWS Lambda IP to the Mongo Atlas IP whitelist:
go to Your cluster > Network Access > IP Whitelist
click the ADD IP ADDRESS button
click ALLOW ACCESS FROM ANYWHERE; it will add the IP 0.0.0.0/0 to the list; click CONFIRM
Test your call from AWS Lambda and it will work.
FINALLY!
What you did is tell Mongo Atlas that ANYONE from ANYWHERE can connect to your Mongo Atlas.
Of course this is not good practice. What you really want is to add only the AWS Lambda IP, and this is where a VPC comes into play.
Creating a VPC is a little complex and has many steps; there are good tutorials in the other answers.
But this small guide at least tackles the MongoNetworkError.

Related

Keep on getting MongoNetworkError connection 6 to xx.x.xx.xx:xxxxx closed

I keep getting the error below in an AWS Lambda function with Node.js 16 and MongoDB driver v4. This usually happens for a Lambda that has high traffic; other Lambdas seem fine with the current setup.
MongoNetworkError: connection 6 to xx.x.xx.xx:xxxxx closed at Connection.onClose (/var/task/node_modules/mongodb/lib/cmap/connection.js:135:19)
MongoDB connection inside the lambda:
const MongoClient = require('mongodb').MongoClient;
const logger = require('');
const log = logger(__filename);

const getDbClient = async (uri) => {
  try {
    log.info('Connecting to Mongo client...');
    const dbClient = await MongoClient.connect(uri);
    log.info('Connected to Mongo client');
    return dbClient;
  } catch (err) {
    log.error('Error encountered connecting to database: ', err);
    throw err;
  }
};

module.exports = {
  getDbClient
};
The MongoDB URI includes the option maxPoolSize=10, because I recently upgraded from MongoDB driver v3 to v4, and v4 defaults maxPoolSize to 100 whereas v3 defaulted it to 10. https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/CHANGES_4.0.0.md#connection-pool-options
MongoDB hardware:
3 x M4.XLarge(4Core/16GB RAM)
This issue started happening after I upgraded the MongoDB driver from v3 to v4 and stopped checking inside the Lambda whether there is an existing connection I can reuse, because in v4 this is apparently handled automatically.
I used to use MongoClient.isConnected() from MongoDB driver v3.
Do you guys have any idea what could be the cause of this?
I don't have enough reputation to comment, so I'll just post an answer :D
I think your Lambda function is under extremely high traffic now, so it's exceeding MongoDB's connection pool size.
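For reference, a minimal sketch (an illustration, not the poster's code) of reusing a single client across warm invocations with the v4 driver; the function and variable names mirror the question, and the driver's internal pool then handles concurrency up to maxPoolSize.
const MongoClient = require('mongodb').MongoClient;

let client = null; // module scope: reused while the Lambda container stays warm

const getDbClient = async (uri) => {
  if (client == null) {
    // Only a cold start pays the connection cost; v4 manages the pool internally.
    client = await MongoClient.connect(uri, { maxPoolSize: 10 });
  }
  return client;
};

module.exports = { getDbClient };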

Redis client via Amazon Elasticache is ignoring host parameter from function and using default IP address

I have a Redis cluster set up that I need to connect to from a Lambda function. I'm using the following code to create the client and connect to Redis:
const getClient = async (clientargs) => {
  var params = {
    no_ready_check: true,
    connect_timeout: clientargs.timeout == null ? 30000 : clientargs.timeout,
    enable_offline_queue: false,
    db: clientargs.index,
    retry_strategy: undefined
  };
  logger.debug('db getClient: Connecting to redis client.');
  logger.debug('db getClient params', params);
  const client = redis.createClient(clientargs.host, clientargs.port, params);
  await client.connect();
  // do stuff once it's connected
}
clientargs.host stores the Elasticache Redis endpoint and I'm using the default port 6379. I've checked that the host and port are being passed in correctly to the getClient function, and that they are being passed into redis.createClient properly as well. I was originally debugging in a local environment and noticed that it was trying to connect to 127.0.0.1 instead of the Elasticache endpoint, so I ran a few tests within the AWS Console but it's still trying on 127.0.0.1:6379.
I have also tried passing the host and port arguments into the client.connect() function, but had no luck with that either.
The Lambda function and the Elasticache Redis cluster are in the same VPC instance and security group, so I don't think that is the issue, but I am fairly new to AWS as a whole so any help/thoughts would be appreciated!
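One thing worth checking (an assumption on my part, since await client.connect() only exists in node-redis v4): v4's createClient() takes a single options object, with host and port nested under socket. The v3-style positional (host, port, options) arguments are not part of the v4 signature, which would explain the client falling back to 127.0.0.1:6379. A minimal sketch:
const redis = require('redis');

const getClient = async (clientargs) => {
  const client = redis.createClient({
    socket: {
      host: clientargs.host,   // Elasticache endpoint
      port: clientargs.port,   // 6379
      connectTimeout: clientargs.timeout == null ? 30000 : clientargs.timeout,
    },
    database: clientargs.index,
    disableOfflineQueue: true,
  });
  client.on('error', (err) => console.error('redis client error', err));
  await client.connect();
  return client;
};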

Redis sentinel connection is timing out from nodeJS

I am trying to connect to a Redis Sentinel instance from Node.js using ioredis, but I am not able to connect despite trying multiple available options. We have not configured a Sentinel password. However, I am able to connect to the same Redis Sentinel instance from .NET Core using StackExchange.Redis. Please find the Node.js code below:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import IORedis from 'ioredis';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const ioredis = new IORedis({
    sentinels: [
      { host: 'sentinel-host-1' },
      { host: 'sentinel-host-2' },
      { host: 'sentinel-host-3' },
    ],
    name: 'mastername',
    password: 'password',
    showFriendlyErrorStack: true,
  });
  try {
    await ioredis.set('foo', 'bar');
  } catch (exception) {
    console.log(exception);
  }
  await app.listen(3000);
}
bootstrap();
The error we get is:
[ioredis] Unhandled error event: Error: connect ETIMEDOUT
node_modules\ioredis\built\redis\index.js:317:37)
at Object.onceWrapper (node:events:475:28)
at Socket.emit (node:events:369:20)
at Socket._onTimeout (node:net:481:8)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
The connection string used from .NET Core is below:
Redis_Configuration = "host-1,host-2,host-3,serviceName=mastername,password=password,abortConnect=False,connectTimeout=1000,responseTimeout=1000";
Answering this for the benefit of others: everything was fine, but this Node.js package resolves the Redis instances to private IPs, which I cannot access from my local machine. So I had to put it behind the subnet group to make it work. FYI, the .NET Core package does not resolve to private IPs, which is why I was able to access the instances directly from my local machine.
"The arguments passed to the constructor are different from the ones you use to connect to a single node"
Try to replace password with sentinelPassword.
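For clarity, a minimal sketch of the Sentinel options involved (hosts and names are the placeholders from the question; 26379 is the default Sentinel port): password authenticates against the resolved master, while sentinelPassword authenticates against the Sentinel processes themselves, so it can be omitted here since no Sentinel password is configured. ioredis also has a natMap option that can remap private IPs returned by Sentinel when they are not directly reachable, which relates to the accepted fix above.
import IORedis from 'ioredis';

const ioredis = new IORedis({
  sentinels: [
    { host: 'sentinel-host-1', port: 26379 },
    { host: 'sentinel-host-2', port: 26379 },
    { host: 'sentinel-host-3', port: 26379 },
  ],
  name: 'mastername',           // master name (serviceName in the .NET connection string)
  password: 'password',         // auth for the master/replica nodes
  // sentinelPassword: '...',   // only if the Sentinels themselves require auth
  // natMap: { '10.0.0.1:6379': { host: 'public-host', port: 6379 } }, // remap private IPs if needed
  enableOfflineQueue: false,
});

ioredis.set('foo', 'bar').catch((err) => console.log(err));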

Connect to MySQL database from Lambda function (Node)

I have been unable to connect to a MySQL database using Node from a Lambda function. The error I receive is: Task timed out after 4.00 seconds.
Does anyone have any solutions?
Here is an overview of my state:
The AWS RDS database is a MySQL database. It is not confined to the VPC (I am able to connect using host/user/password from MySQLWorkbench).
The execution role of my Lambda function is set to have Lambda as a trusted entity and given AdministratorAccess.
On my local machine, I installed the mysql module, zipped my index.js and node_modules folder, and uploaded to my Lambda function.
I have tried putting the createConnection and connect function inside the handler. I have tried putting my query inside the callback function of the connection function. I have tried increasing the timeout time to 10 seconds.
My code:
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'amazon-string.rds.amazonaws.com',
  user     : 'myusername',
  password : 'mypassword'
});

connection.connect();

exports.handler = (event, context, callback) => {
  connection.query("SELECT * FROM table", function(err, rows, fields) {
    console.log("rows: " + rows);
    callback(null);
  });
};
Increase the timeout to one minute. It could be due to the cold start of the Lambda function.
Only your first call should take time; consecutive calls should be very fast, since you are reusing the same connection.
Also, having a higher timeout does not mean you will be charged for that timeout; you are only charged for the time the Lambda actually runs.
To speed up the cold start you can also webpack your scripts:
http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/webpack.html
There is one more issue I noticed:
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'amazon-string.rds.amazonaws.com',
  user     : 'myusername',
  password : 'mypassword'
});

connection.connect();

exports.handler = (event, context) => {
  connection.query("SELECT * FROM table", function(err, rows, fields) {
    console.log("rows: " + rows);
    context.succeed('Success');
  });
};
Hope it helps.
Since you're using RDS, check its security group configuration. By default, the RDS security group allows inbound connections from your own IP and from the default security group of your default VPC. However, Lambda by default runs outside any VPC and thus cannot establish a connection to RDS.
Either change your RDS instance to allow all IP addresses, or run your Lambda function inside a VPC that your RDS instance can reach, and allow access in the security group.

Connection to PSQL RDS instance via a lambda function?

I'm using AWS and trying to connect to my PSQL RDS instance when the lambda function runs. I'm using the pg npm module and this is my code :
exports.handler = (event, context, callback) => {
  "use strict"
  const pg = require('pg');
  const connectionStr = "dbstr";
  var client = new pg.Client(connectionStr);
  client.connect(function(err) {
    if (err) {
      return callback(err);
    }
    callback(null, 'Connection established');
  });
};
I've been researching for ages how to do it, but I can't really find anything specific. I've added an IAM role that allows VPC access for my lambda, like what it says in the aws tutorial and I've even set all traffic in my VPC security group, but I still keep getting timeout errors like this:
"errorMessage": "2017-01-22T16:11:21.969Z 544e7fc4-e0bd-11e6-87e6-071c13fc2fc8 Task timed out after 30.00 seconds"
I've tested my function locally and it connects to the DB and does what I want just fine, but the Lambda doesn't, and I'm not too sure why.
Any ideas would be greatly appreciated!
Never mind, I've just solved it. Adding:
context.callbackWaitsForEmptyEventLoop = false;
to your Lambda function fixed it for me.
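For anyone landing here, a minimal sketch of where that flag goes, reusing the pg client from the question (the connection string is still the "dbstr" placeholder): it must be set inside the handler, so Lambda stops waiting for the still-open pg socket to drain the event loop before returning.
const pg = require('pg');

exports.handler = (event, context, callback) => {
  // Tell Lambda to return as soon as callback() fires, even though the
  // pg connection keeps the Node event loop non-empty.
  context.callbackWaitsForEmptyEventLoop = false;

  const client = new pg.Client("dbstr");
  client.connect(function (err) {
    if (err) {
      return callback(err);
    }
    callback(null, 'Connection established');
  });
};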
