Why can't my AWS Lambda Node.js app access my MongoDB Atlas cluster? - node.js

Context:
I have just created a Node.js application and deployed it with the Serverless Framework on AWS Lambda.
Problem:
I would like that application to be able to access my (free tier) MongoDB Atlas cluster. For this I am using mongoose.
Setup:
I have an IAM user with AdministratorAccess rights. This user has been authorized on my MongoDB cluster.
I am using authMechanism=MONGODB-AWS, therefore using the token and secret of that IAM user. The password has been correctly URL-encoded.
This is the piece of code used to create the connection:
const uri = "mongodb+srv://myIAMtoken:myIAMsecret#cluster0.tfws6.mongodb.net/DBNAME?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
mongoose.connect(uri, {useNewUrlParser: true, useUnifiedTopology: true })
When I run this code on my laptop, the connection is made and I can retrieve the data needed.
However, when I deploy this exact same code on AWS Lambda (through Serverless), I get this response:
{"message": "Internal server error"}
The trace on CloudWatch looks like this:
{
"errorType": "Runtime.UnhandledPromiseRejection",
"errorMessage": "MongoError: bad auth : aws sts call has response 403",
"reason": {
"errorType": "MongoError",
"errorMessage": "bad auth : aws sts call has response 403",
"code": 8000,
"ok": 0,
"codeName": "AtlasError",
"name": "MongoError",
"stack": [
"MongoError: bad auth : aws sts call has response 403",
" at MessageStream.messageHandler (/var/task/node_modules/mongodb/lib/cmap/connection.js:268:20)",
" at MessageStream.emit (events.js:314:20)",
" at processIncomingData (/var/task/node_modules/mongodb/lib/cmap/message_stream.js:144:12)",
" at MessageStream._write (/var/task/node_modules/mongodb/lib/cmap/message_stream.js:42:5)",
" at doWrite (_stream_writable.js:403:12)",
" at writeOrBuffer (_stream_writable.js:387:5)",
" at MessageStream.Writable.write (_stream_writable.js:318:11)",
" at TLSSocket.ondata (_stream_readable.js:718:22)",
" at TLSSocket.emit (events.js:314:20)",
" at addChunk (_stream_readable.js:297:12)",
" at readableAddChunk (_stream_readable.js:272:9)",
" at TLSSocket.Readable.push (_stream_readable.js:213:10)",
" at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23)"
]
},
"promise": {},
"stack": [
"Runtime.UnhandledPromiseRejection: MongoError: bad auth : aws sts call has response 403",
" at process.<anonymous> (/var/runtime/index.js:35:15)",
" at process.emit (events.js:314:20)",
" at processPromiseRejections (internal/process/promises.js:209:33)",
" at processTicksAndRejections (internal/process/task_queues.js:98:32)"
]
}
I thought it was a network access issue from AWS, so I tried fetching "http://google.com": no problem, my Node app could access the page and return the response. So my app has internet access but cannot reach my MongoDB cloud instance, even though my MongoDB cluster is accessible from any IP address.
This is reaching the limits of my knowledge :-)

If you are using an IAM-type MongoDB user, then you don't need the username + password in the connection string:
const uri = "mongodb+srv://cluster0.tfws6.mongodb.net/DBNAME?authSource=$external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority"
When you invoke your Lambda and it connects to the MongoDB cluster, the IAM role it will be using is the execution role of the Lambda:
"arn:aws:iam::ACCOUNT_ID:role/SLS_SERVICE_NAME-ENVIRONMENT-AWS_REGION-lambdaRole"
e.g. "arn:aws:iam::123456789012:role/awesome-service-dev-us-east-1-lambdaRole"
Check the default IAM section of the Serverless Framework docs:
https://www.serverless.com/framework/docs/providers/aws/guide/iam/#the-default-iam-role
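For reference, here is a minimal sketch (not from the answer above) of what the Lambda side could look like under that setup, assuming the Lambda execution role's ARN has been added in Atlas as a database user of AWS IAM type. With MONGODB-AWS and no credentials in the URI, the driver falls back to the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN environment variables, which Lambda populates from the execution role.
const mongoose = require("mongoose");
// Same cluster/DB as in the question; no token or secret in the URI.
const uri = "mongodb+srv://cluster0.tfws6.mongodb.net/DBNAME?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority";
// Cache the connection promise so warm invocations reuse it.
let connecting = null;
module.exports.handler = async () => {
  if (!connecting) {
    connecting = mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true });
  }
  await connecting;
  // ... run queries with your mongoose models here ...
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};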

Related

Unable to connect to AWS Aurora Serverless with SSL

I'm working with AWS Aurora Serverless V1 (with the Postgres engine) and I'm trying to connect using node-postgres. I was able to connect without SSL perfectly, but when I add the SSL config to the pg client I keep receiving this error:
{
"message": "unable to get local issuer certificate",
"stack": "Error: unable to get local issuer certificate\n at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)\n at TLSSocket.emit (events.js:315:20)\n at TLSSocket.EventEmitter.emit (domain.js:467:12)\n at TLSSocket._finishInit (_tls_wrap.js:932:8)\n at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)"
}
This is how the SSL config object is declared:
const ca = `-----BEGIN CERTIFICATE-----
MIIEBjCCAu6gAwIBAgIJAMc0ZzaSUK51MA0GCSqGSIb3DQEBCwUAMIGPMQswCQYD
VQQGEwJVUzEQMA4GA1UEBwwHU2VhdHRsZTETMBEGA1UECAwKV2FzaGluZ3RvbjEi
MCAGA1UECgwZQW1hem9uIFdlYiBTZXJ2aWNlcywgSW5jLjETMBEGA1UECwwKQW1h
em9uIFJEUzEgMB4GA1UEAwwXQW1hem9uIFJEUyBSb290IDIwMTkgQ0EwHhcNMTkw
ODIyMTcwODUwWhcNMjQwODIyMTcwODUwWjCBjzELMAkGA1UEBhMCVVMxEDAOBgNV
BAcMB1NlYXR0bGUxEzARBgNVBAgMCldhc2hpbmd0b24xIjAgBgNVBAoMGUFtYXpv
biBXZWIgU2VydmljZXMsIEluYy4xEzARBgNVBAsMCkFtYXpvbiBSRFMxIDAeBgNV
BAMMF0FtYXpvbiBSRFMgUm9vdCAyMDE5IENBMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEArXnF/E6/Qh+ku3hQTSKPMhQQlCpoWvnIthzX6MK3p5a0eXKZ
oWIjYcNNG6UwJjp4fUXl6glp53Jobn+tWNX88dNH2n8DVbppSwScVE2LpuL+94vY
0EYE/XxN7svKea8YvlrqkUBKyxLxTjh+U/KrGOaHxz9v0l6ZNlDbuaZw3qIWdD/I
6aNbGeRUVtpM6P+bWIoxVl/caQylQS6CEYUk+CpVyJSkopwJlzXT07tMoDL5WgX9
O08KVgDNz9qP/IGtAcRduRcNioH3E9v981QO1zt/Gpb2f8NqAjUUCUZzOnij6mx9
McZ+9cWX88CRzR0vQODWuZscgI08NvM69Fn2SQIDAQABo2MwYTAOBgNVHQ8BAf8E
BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUc19g2LzLA5j0Kxc0LjZa
pmD/vB8wHwYDVR0jBBgwFoAUc19g2LzLA5j0Kxc0LjZapmD/vB8wDQYJKoZIhvcN
AQELBQADggEBAHAG7WTmyjzPRIM85rVj+fWHsLIvqpw6DObIjMWokpliCeMINZFV
ynfgBKsf1ExwbvJNzYFXW6dihnguDG9VMPpi2up/ctQTN8tm9nDKOy08uNZoofMc
NUZxKCEkVKZv+IL4oHoeayt8egtv3ujJM6V14AstMQ6SwvwvA93EP/Ug2e4WAXHu
cbI1NAbUgVDqp+DRdfvZkgYKryjTWd/0+1fS8X1bBZVWzl7eirNVnHbSH2ZDpNuY
0SBd8dj5F6ld3t58ydZbrTHze7JJOd8ijySAp4/kiu9UfZWuTPABzDa/DSdz9Dk/
zPW4CXXvhLmE02TA9/HeCw3KEHIwicNuEfw=
-----END CERTIFICATE-----`
const tls = require('tls')

module.exports = {
  rejectUnauthorized: true,
  ca,
  // Run the default hostname check, but suppress the mismatch as long as the
  // server presents an *.rds.amazonaws.com certificate (a common workaround for
  // Aurora Serverless, whose endpoint name differs from the certificate CN).
  checkServerIdentity: (host, cert) => {
    const error = tls.checkServerIdentity(host, cert)
    if (
      error &&
      !cert.subject.CN.endsWith('.rds.amazonaws.com')
    ) {
      return error
    }
  }
}
I am downloading the certificate directly from Amazon https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
Aurora Serverless docs say that SSL is enforced by default, so I think I'm doing all the necessary things to make the connection work, but it isn't working. Maybe there is something about "serverless connections" that works differently from classic ones and I'm not understanding it.
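For reference, a minimal sketch (not from the question; host, user and database are placeholders, and "./ssl-config" is a hypothetical path to the module shown above) of how such an ssl object is typically passed to node-postgres:
const { Client } = require("pg");
const ssl = require("./ssl-config"); // hypothetical path to the module.exports shown above

const client = new Client({
  host: "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com", // placeholder endpoint
  port: 5432,
  user: "postgres",
  password: process.env.DB_PASSWORD,
  database: "mydb",
  ssl, // { rejectUnauthorized: true, ca, checkServerIdentity }
});

async function main() {
  await client.connect();
  const { rows } = await client.query("SELECT now()");
  console.log(rows);
  await client.end();
}

main().catch(console.error);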

Node AWS Lambda fetch request failing

I am using node-fetch to perform a request to an API (hosted on AWS Lambda/API Gateway with Serverless Framework) from a lambda. The lambda is failing with the below invocation error:
{
"errorType": "FetchError",
"errorMessage": "request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
"code": "ETIMEDOUT",
"message": "request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
"type": "system",
"errno": "ETIMEDOUT",
"stack": [
"FetchError: request to https://[API].us-east-2.amazonaws.com/[ENDPOINT] failed, reason: connect ETIMEDOUT [IP]:443",
" at ClientRequest.<anonymous> (/var/task/node_modules/node-fetch/lib/index.js:1461:11)",
" at ClientRequest.emit (events.js:315:20)",
" at TLSSocket.socketErrorListener (_http_client.js:426:9)",
" at TLSSocket.emit (events.js:315:20)",
" at emitErrorNT (internal/streams/destroy.js:92:8)",
" at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)",
" at processTicksAndRejections (internal/process/task_queues.js:84:21)"
]
}
Here is the lambda in question with extraneous code removed:
"use strict";
import { PrismaClient } from "@prisma/client";
import fetch from "node-fetch";
const prisma = new PrismaClient();
module.exports.handler = async (event, context, callback) => {
const users = await prisma.user.findMany();
for (const user of users) {
await fetch(...); // this is where the error occurs
}
};
The code works fine locally (the code in the lambda itself, as well as manually making the request). Because of that, I thought this might be fixed by setting up a NAT for the Lambda / configuring the VPC to have external internet access, though I'm not sure how to do that with the Serverless Framework if that is indeed the issue. The lambda attempting to perform the fetch request is in the same VPC as the API. Any help or ideas are greatly appreciated!
I solved this by adding a VPC endpoint for the lambda function. I believe an alternative solution (though possibly more expensive) is to set up a NAT gateway for the Lambda.
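For illustration only (this is not the poster's configuration, and a VPC endpoint is normally declared in serverless.yml resources or the console rather than created at runtime), a sketch of what creating an execute-api interface endpoint could look like with the AWS SDK for JavaScript; every ID below is a placeholder:
const AWS = require("aws-sdk");
const ec2 = new AWS.EC2({ region: "us-east-2" });

// Interface endpoint for API Gateway, so a Lambda in private subnets can reach
// the API without a NAT gateway.
async function createApiGatewayEndpoint() {
  const result = await ec2
    .createVpcEndpoint({
      VpcId: "vpc-0123456789abcdef0",             // the VPC the Lambda runs in (placeholder)
      ServiceName: "com.amazonaws.us-east-2.execute-api",
      VpcEndpointType: "Interface",
      SubnetIds: ["subnet-0123456789abcdef0"],    // the Lambda's private subnets (placeholder)
      SecurityGroupIds: ["sg-0123456789abcdef0"], // must allow HTTPS from the Lambda (placeholder)
      PrivateDnsEnabled: true,                    // regional API hostname resolves to the endpoint
    })
    .promise();
  return result.VpcEndpoint.VpcEndpointId;
}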

TypeError : i is not a function (AWS Lambda wrapped with epsagon)

This error popped up on a Lambda when I upgraded the got npm module from 9.6.0 to 11.5.x.
I'm using the Serverless Framework to develop and deploy microservices, and the Epsagon wrapper for better monitoring.
I've been struggling with this error for the past 3 days. Any help would be well appreciated.
AWS Lambda runtime: Node 10.x
Following are a few of the npm packages:
"serverless-webpack": "^5.3.3",
"terser-webpack-plugin": "^4.1.0",
"webpack": "^4.44.1"
"epsagon": "^1.82.0",
"got": "^11.6.0", (with got 9.0.6, there is no issue)
While I can't paste the entire code, the following is a snippet. If I use the same code in a simple index.js file and run it, I can't reproduce this issue.
const { body } = await got(path, {
  headers: { 'custom-key': customKey },
  responseType: 'json',
  method: 'POST',
  json: { ts: 1599227703000 },
});
The following log snippet is from CloudWatch.
{
"errorType": "TypeError",
"errorMessage": "i is not a function",
"stack": [
"TypeError: i is not a function",
" at ClientRequest.patchedCallback (/opt/nodejs/node_modules/epsagon/dist/bundle.js:1:60626)",
" at Object.onceWrapper (events.js:286:20)",
" at ClientRequest.emit (events.js:203:15)",
" at ClientRequest.EventEmitter.emit (domain.js:448:20)",
" at ClientRequest.e.emit.f [as emit] (/var/task/epsagon_handlers/abcdNameChanged-epsagon.js:2:1080726)",
" at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:565:21)",
" at HTTPParser.parserOnHeadersComplete (_http_common.js:111:17)",
" at TLSSocket.socketOnData (_http_client.js:451:20)",
" at TLSSocket.emit (events.js:198:13)",
" at TLSSocket.EventEmitter.emit (domain.js:448:20)"
]
}
Finally I could crack it.
If we enable Epsagon auto-tracing through the Epsagon web app, it actually adds a layer called epsagon-node-layer to that Lambda.
Disabling auto-tracing helped avoid this error.
For more details, refer to: https://github.com/epsagon/epsagon-node/issues/295
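For reference, a minimal sketch (assuming the documented epsagon npm API; the token and app name are placeholders) of keeping tracing by wrapping the handler manually in code instead of relying on the auto-tracing layer, so only one instrumentation patches the HTTP modules:
const epsagon = require("epsagon");

epsagon.init({
  token: process.env.EPSAGON_TOKEN, // placeholder
  appName: "my-service",            // placeholder
  metadataOnly: false,
});

module.exports.handler = epsagon.lambdaWrapper(async (event) => {
  // ... existing handler logic that uses got ...
  return { statusCode: 200 };
});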

Getting error while connecting to Redshift from AWS Lambda function

I am trying to connect to Redshift from AWS Lambda Python code using the psycopg2 lib. When running the same code from an EC2 instance I do not get any error. I am getting the error response below.
{
"errorMessage": "FATAL: no pg_hba.conf entry for host \"::xxxxx\", user \"xxxx\", database \"xxxx\", SSL off\n",
"errorType": "OperationalError",
"stackTrace": [
[
"/var/task/aws_unload_to_s3_audit.py",
86,
"lambda_handler",
"mainly()"
],
[
"/var/task/aws_unload_to_s3_audit.py",
74,
"mainly",
"con = psycopg2.connect(conn_string)"
],
[
"/var/task/psycopg2/__init__.py",
130,
"connect",
"conn = _connect(dsn, connection_factory=connection_factory, **kwasync)"
]
]
}
My suggestion on this would be to check the network configuration for Redshift; chances are that the connection is being refused.
Places to check:
Redshift security group
VPC configuration, if the Lambda resides in a private subnet

LoopbackJS version 3.x `Connection fails: MongoError: Authentication failed.`

I am trying to follow the tutorial: I created a new app with lb, added the CoffeeShop model, then added a datasource, mongodb.
My MongoDB instance is on my local Mac, and authorization is turned off.
I just run it with the mongod command, no extra params, and there are no additional configs.
This is my datasources.json:
{
  "corp1": {
    "host": "127.0.0.1",
    "port": 27017,
    "url": "",
    "database": "devdb",
    "password": "devpassword",
    "name": "corp1",
    "user": "devuser",
    "connector": "mongodb"
  }
}
I even created the devdb database, and gave the dev user admin rights on all databases.
And I am still getting this error:
Connection fails: MongoError: Authentication failed.
It will be retried for the next request.
/Users/hazimdikenli/learn/loopback-getting-started/node_modules/mongodb/lib/mongo_client.js:462
throw err
^
MongoError: Authentication failed.
at Function.MongoError.create (/Users/hazimdikenli/learn/loopback-getting-started/node_modules/mongodb-core/lib/error.js:31:11)
at /Users/hazimdikenli/learn/loopback-getting-started/node_modules/mongodb-core/lib/connection/pool.js:497:72
at authenticateStragglers (/Users/hazimdikenli/learn/loopback-getting-started/node_modules/mongodb-core/lib/connection/pool.js:443:16)
at Connection.messageHandler (/Users/hazimdikenli/learn/loopback-getting-started/node_modules/mongodb-core/lib/connection/pool.js:477:5)
at Socket.<anonymous> (/Users/hazimdikenli/learn/loopback-getting-started/node_modules/mongodb-core/lib/connection/connection.js:333:22)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at Socket.Readable.push (_stream_readable.js:208:10)
at TCP.onread (net.js:597:20)
I am thinking that this is a newbie issue but I cannot find the problem, so please help me resolve it.
Although I am not running mongod with the --auth option, it still turned out to be an authentication issue; it worked after I executed the following:
db.createUser({
  user: "devuser",
  pwd: "devpassword",
  roles: [
    { role: "dbOwner", db: "devdb" },
    { role: "readWrite", db: "devdb" }
  ]
})
Before this, I had granted roles on "admin" with this script, but it looks like that was not enough:
role: "userAdminAnyDatabase", db: "admin"
