Connection timed out while connecting to AWS DocumentDB outside the VPC - node.js

I'm trying to create a very simple Node app that can use DocumentDB. I'm not using Cloud9 or Lambda; I'm coding locally. I was following this link https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html and this link https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-ec2.html
I created a poorly secured EC2 instance with the following inbound rule:
port range | protocol | source    | security group
22         | TCP      | 0.0.0.0/0 | demoEC2
This demoEC2 security group has the following inbound rule:
type | protocol | port range | source
SSH  | TCP      | 22         | 0.0.0.0/0
Then I created a DocumentDB cluster with one instance available, belonging to a security group with the following inbound rule:
type       | protocol | port range | source
Custom TCP | TCP      | 27017      | demoEC2
After that, I opened my terminal and created a tunnel:
ssh -i "mykeypair.pem" -L 27017:<CLUSTER ENDPOINT>:27017 ec2-user@<EC2 PUBLIC IPV4 DNS> -N
And, to test that the tunnel is working, I connected using the mongo shell:
> mongo "mongodb://<MASTER USERNAME>:<MASTER PASSWORD>@localhost:27017/<DATABASE>" --tls --tlsAllowInvalidHostnames --tlsCAFile rds-combined-ca-bundle.pem
MongoDB shell version v4.2.13
connecting to: mongodb://localhost:27017/<DATABASE>?compressors=disabled&gssapiServiceName=mongodb
2021-07-29T10:10:59.309+0200 W NETWORK [js] The server certificate does not match the host name. Hostname: localhost does not match docdb-2021-07-27-10-32-49.ctuxybn342pe.eu-central-1.docdb.amazonaws.com docdb-2021-07-27-10-32-49.cluster-ctuxybn342pe.eu-central-1.docdb.amazonaws.com docdb-2021-07-27-10-32-49.cluster-ro-ctuxybn342pe.eu-central-1.docdb.amazonaws.com , Subject Name: C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=docdb-2021-07-27-10-32-49.ctuxybn342pe.eu-central-1.docdb.amazonaws.com
Implicit session: session { "id" : UUID("63340995-54ad-471b-aa8d-85763f3c7281") }
MongoDB server version: 4.0.0
WARNING: shell and server versions do not match
Warning: Non-Genuine MongoDB Detected
This server or service appears to be an emulation of MongoDB rather than an official MongoDB product.
Some documented MongoDB features may work differently, be entirely missing or incomplete, or have unexpected performance characteristics.
To learn more please visit: https://dochub.mongodb.org/core/non-genuine-mongodb-server-warning.
rs0:PRIMARY>
However, when I try to connect in my node app:
const mongoose = require('mongoose');
const fs = require('fs');
const path = require('path');
const username = ...
const password = ...
const database = ...
const connstring = `mongodb://${username}:${password}@localhost:27017/${database}?tls=true&replicaSet=rs0&readPreference=secondaryPreferred`;
const certFile = path.resolve(__dirname, './rds-combined-ca-bundle.pem');
const certFileBuf = fs.readFileSync(certFile); //I tried this one in tlsCAFile option as well
mongoose.connect(connstring,
{
tlsCAFile: certFile,
useNewUrlParser: true,
tlsAllowInvalidHostnames: true,
}
).then(() => console.log('Connection to DB successful'))
.catch((err) => console.error(err, 'Error'));
I get a connection timeout error after a while:
> node .\index.js
(node:12388) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
MongoNetworkError: failed to connect to server [<CLUSTER ENDPOINT WITHOUT HAVING .cluster->:27017] on first connect [MongoNetworkTimeoutError: connection timed out
at connectionFailureError (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:345:14)
at TLSSocket.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:313:16)
at Object.onceWrapper (events.js:421:28)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Socket._onTimeout (net.js:481:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)]
at Pool.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\topologies\server.js:441:11)
at Pool.emit (events.js:315:20)
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\pool.js:564:14
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\pool.js:1013:9
at D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:32:7
at callback (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:283:5)
at TLSSocket.<anonymous> (D:\projects\documentdb-connect\node_modules\mongoose\node_modules\mongodb\lib\core\connection\connect.js:313:7)
at Object.onceWrapper (events.js:421:28)
at TLSSocket.emit (events.js:315:20)
at TLSSocket.Socket._onTimeout (net.js:481:8)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7) Error
Since I could connect using the mongo shell, I think the tunnel is working (I can even do some inserts through it), so why can't Mongoose connect? I also tried the MongoClient (const MongoClient = require('mongodb').MongoClient with the same connection string and options), but it didn't work either; I still get the same timeout error.

It turns out all I needed to do was pass the username and password through the options, not in the connection string. Note that the working connection string below also drops replicaSet=rs0; with that option set, the driver tries to connect directly to the member hostnames it discovers from the cluster, which are not reachable through the tunnel. That is likely why the timeout error names the instance endpoint rather than localhost.
const connstring = `mongodb://localhost:27017/${database}`;
const certFile = path.resolve(__dirname, './rds-combined-ca-bundle.pem');
const certFileBuf = fs.readFileSync(certFile);
mongoose.connect(connstring,
{
tls: true,
tlsCAFile: certFile,
useNewUrlParser: true,
tlsAllowInvalidHostnames: true,
auth: {
username,
password
}
}
)
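If you would rather keep the credentials in the connection string, URL-encoding them is an alternative worth trying: reserved characters in a password are a common reason URI-based auth fails while option-based auth works. The credentials below are hypothetical placeholders, a sketch rather than the poster's actual values:

```javascript
// Placeholder credentials, for illustration only.
const username = 'docdbuser';
const password = 'p@ss:w/rd'; // contains URI-reserved characters
const database = 'mydb';

// encodeURIComponent escapes the reserved characters so the driver's
// URI parser does not mis-read the host or database segments.
const connstring =
  'mongodb://' + encodeURIComponent(username) +
  ':' + encodeURIComponent(password) +
  '@localhost:27017/' + database + '?tls=true';
```

The same string can then be handed to mongoose.connect with the TLS options shown above.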

Related

Redis sentinel connection is timing out from nodeJS

I am trying to connect to a Redis sentinel instance from Node.js using ioredis, but I am not able to, despite trying multiple available options. We have not configured a sentinel password. I am, however, able to connect to the same Redis sentinel instance from .NET Core using StackExchange.Redis. Please find the Node.js code below:
import { AppModule } from './app.module';
import IORedis from 'ioredis';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
const ioredis = new IORedis({
sentinels: [
{ host: 'sentinel-host-1' },
{ host: 'sentinel-host-2' },
{ host: 'sentinel-host-3' },
],
name: 'mastername',
password: 'password',
showFriendlyErrorStack: true,
});
try {
ioredis.set('foo', 'bar');
} catch (exception) {
console.log(exception);
}
await app.listen(3000);
}
bootstrap();
The error we got is:
[ioredis] Unhandled error event: Error: connect ETIMEDOUT
node_modules\ioredis\built\redis\index.js:317:37)
at Object.onceWrapper (node:events:475:28)
at Socket.emit (node:events:369:20)
at Socket._onTimeout (node:net:481:8)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
The connection string used from .NET Core is below:
Redis_Configuration = "host-1,host-2,host-3,serviceName=mastername,password=password,abortConnect=False,connectTimeout=1000,responseTimeout=1000";
Answering this for the benefit of others. Everything is fine, but this Node.js package resolves the Redis instances to private IPs, which I cannot access from my local machine, so I had to put it over a subnet group to make it work. FYI: the .NET Core package does not resolve to private IPs, hence I was able to access the instances from my local machine directly.
"The arguments passed to the constructor are different from the ones you use to connect to a single node"
Try to replace password with sentinelPassword.
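For reference, a minimal sketch of where each credential goes in the ioredis constructor options (hostnames and passwords here are placeholders): password authenticates against the master/replica data nodes, while sentinelPassword authenticates against the sentinel processes themselves.

```javascript
// Placeholder hosts and credentials, for illustration only.
const sentinelOptions = {
  sentinels: [
    { host: 'sentinel-host-1', port: 26379 },
    { host: 'sentinel-host-2', port: 26379 },
  ],
  name: 'mastername',        // master name as registered with the sentinels
  password: 'data-password', // auth for the master/replica data nodes
  sentinelPassword: 'sentinel-password', // auth for the sentinels; omit if none
};
```

Passing this object to new IORedis(sentinelOptions) is then the sentinel-mode equivalent of the single-node constructor.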

Managed DigitalOcean Redis instance giving Redis AbortError

I set up managed Redis and managed Postgres on DigitalOcean. DigitalOcean gave me a .crt file; I don't know what to do with it, so I didn't do anything with it. Could this be the root of the problem below?
Or do I have to allow the Docker container to reach outside the container over the rediss protocol?
I dockerized a Node app and put the container onto my droplet. My droplet, managed Redis, and managed Postgres are all in the same region (SFO2). The app connects to Redis using this URL:
url: 'rediss://default:REMOVED_THIS_PASSWORD@my-new-app-sfo2-do-user-5053627-0.db.ondigitalocean.com:25061/0',
I then ran my Docker container with docker run.
It then gives me this error:
node_redis: WARNING: You passed "rediss" as protocol instead of the "redis" protocol!
events.js:186
throw er; // Unhandled 'error' event
^
AbortError: Connection forcefully ended and command aborted. It might have been processed.
at RedisClient.flush_and_error (/opt/apps/mynewapp/node_modules/redis/index.js:362:23)
at RedisClient.end (/opt/apps/mynewapp/node_modules/redis/lib/extendedApi.js:52:14)
at RedisClient.onPreConnectionEnd (/opt/apps/mynewapp/node_modules/machinepack-redis/machines/get-connection.js:157:14)
at RedisClient.emit (events.js:209:13)
at RedisClient.connection_gone (/opt/apps/mynewapp/node_modules/redis/index.js:590:14)
at Socket.<anonymous> (/opt/apps/mynewapp/node_modules/redis/index.js:293:14)
at Object.onceWrapper (events.js:298:28)
at Socket.emit (events.js:214:15)
at endReadableNT (_stream_readable.js:1178:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
Emitted 'error' event on RedisClient instance at:
at /opt/apps/mynewapp/node_modules/redis/index.js:310:22
at Object.callbackOrEmit [as callback_or_emit] (/opt/apps/mynewapp/node_modules/redis/lib/utils.js:89:9)
at Command.callback (/opt/apps/mynewapp/node_modules/redis/lib/individualCommands.js:199:15)
at RedisClient.flush_and_error (/opt/apps/mynewapp/node_modules/redis/index.js:374:29)
at RedisClient.end (/opt/apps/mynewapp/node_modules/redis/lib/extendedApi.js:52:14)
[... lines matching original stack trace ...]
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
code: 'NR_CLOSED',
command: 'AUTH',
args: [ 'REMOVED_I_DONT_KNOW_IF_THIS_IS_SENSITIVE' ]
The redis protocol is different from rediss: the latter uses a TLS connection. DigitalOcean Managed Redis requires connections to be made over TLS, so you have to use rediss. However, I couldn't find any info about the TLS certificate provided by DigitalOcean for connecting to the Managed Redis service.
Based on your error message, I presume you're using the redis package. If that's the case, you can pass an empty TLS object in the options alongside the connection string, like so:
const Redis = require('redis')
const host = 'db-redis.db.ondigitalocean.com'
const port = '25061'
const username = 'user'
const password = 'secret'
const url = `${username}:${password}@${host}:${port}`
const client = Redis.createClient(url, {tls: {}})
Further reading/source:
SSL connections arrive for Redis on Compose
Connecting to IBM Cloud Databases for Redis from Node.js
I solved this. Below are snippets from config/env/production.js
Sockets
For sockets, to enable rediss you have to pass in all options through adapterOptions like this:
sockets: {
onlyAllowOrigins: ['https://my-website.com'],
// pass in as adapterOptions so it gets through to redis-adapter.
// I need rediss, but a rediss:// URL is not supported here and raises an
// error, so I pass an empty tls object instead. You can see how the options
// get moved into adapterOptions here:
// https://github.com/balderdashy/sails-hook-sockets/blob/master/lib/configure.js#L128
adapterOptions: {
user: 'username',
pass: 'password',
host: 'host',
port: 9999,
db: 2, // pick a number
tls: {},
},
adapter: '#sailshq/socket.io-redis',
},
Session
For session, pass an empty tls: {} object to the config:
session: {
pass: 'password',
host: 'host',
port: 9999,
db: 1, // pick a number not used by sockets
tls: {},
cookie: {
secure: true,
maxAge: 24 * 60 * 60 * 1000, // 24 hours
},
},

Unable to retrieve data from PG DB (in Azure) using Sequelize

I am unable to retrieve data from a PG DB resource hosted in Azure. I am using Sequelize and Node.
I am able to connect to the DB hosted in Azure using the terminal and a GUI, I can create a new DB with a table and some prepopulated fields to do a proof of concept.
However, when I try to connect from my local environment and fetch the data, I get an empty array response ([ ]). If I hit the same endpoint in production, I get a 502 (after a while) with the following message displayed on the client:
Server Error.
There was an unexpected error in the request processing.
Some code below (it works with my local db configured the same way):
This is my DB config:
'use strict';
var Sequelize = require('sequelize');
var cfg = require('../config');
var sequelize = new Sequelize(cfg.db, cfg.username, cfg.password, {
define: {
timestamps: false
},
host: cfg.host,
dialect: 'postgres',
port: 5432
});
And this is my router code:
'use strict';
const express = require('express');
const router = express.Router();
var User = require('../../models/users-model');
router.get('/', (req, res) => {
User.findAll().then(user => {
res.json(user);
});
});
module.exports = router;
Both in local and prod I expect to get the JSON response with an array of User objects.
In my local, as explained, I get an empty array.
In production, as mentioned as well, it seems to timeout and finally I get a 502 err response.
Any help is much appreciated!
Update!: I managed to activate the app logs on Azure (it took me a while to find them, as I'm quite new to the platform!) and now get this when I hit the endpoint in prod:
2019-08-12T12:52:06.355595892Z Unhandled rejection SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:5432
2019-08-12T12:52:06.355632393Z at connection.connect.err (/usr/src/app/server/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:170:24)
2019-08-12T12:52:06.355637793Z at Connection.connectingErrorHandler (/usr/src/app/server/node_modules/pg/lib/client.js:191:14)
2019-08-12T12:52:06.355641493Z at emitOne (events.js:116:13)
2019-08-12T12:52:06.355645293Z at Connection.emit (events.js:211:7)
2019-08-12T12:52:06.355648693Z at Socket.reportStreamError (/usr/src/app/server/node_modules/pg/lib/connection.js:72:10)
2019-08-12T12:52:06.355652093Z at emitOne (events.js:116:13)
2019-08-12T12:52:06.355655393Z at Socket.emit (events.js:211:7)
2019-08-12T12:52:06.355658393Z at emitErrorNT (internal/streams/destroy.js:64:8)
2019-08-12T12:52:06.355661493Z at _combinedTickCallback (internal/process/next_tick.js:138:11)
2019-08-12T12:52:06.355664693Z at process._tickCallback (internal/process/next_tick.js:180:9)
After hours and hours, I hardcoded the data rather than getting it dynamically from my config files; I probably did not set up my Dockerfile properly and was not setting the ENV variable correctly.
Now I hit the PROD DB from my local machine and it seems to work! I would really appreciate it if someone could confirm that my problem lies at the configuration level, with the NODE_ENV environment variable.
Dockerfile
# Node server serving Angular App
FROM node:8.11-alpine as node-server
WORKDIR /usr/src/app
COPY /server /usr/src/app/server
WORKDIR /usr/src/app/server
ENV NODE_ENV=prod
RUN npm install --production --silent
EXPOSE 80 443
CMD ["node", "index.js"]
Then in /config/index.js I have:
var env = process.env.NODE_ENV || 'global'
, cfg = require('./config.' + env);
module.exports = cfg;
So I understand that by setting NODE_ENV to prod in Docker, the Node app started in Azure should pick up the config.prod.js file rather than the config.global.js file, right?
You can see how I implement this in the db.js file in the question.
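The config lookup in config/index.js above can be sketched as a small pure function (module names follow the question's config.global.js / config.prod.js layout):

```javascript
// Mirrors `process.env.NODE_ENV || 'global'` from config/index.js:
// resolve which config module should be required for a given env value.
function resolveConfigName(env) {
  return 'config.' + (env || 'global');
}

// With NODE_ENV=prod set in the Dockerfile, config.prod.js is picked up;
// with NODE_ENV unset, the fallback is config.global.js.
```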

AWS lambda with mongoose to Atlas - MongoNetworkError

I am trying to connect to MongoDB Atlas with mongoose and AWS Lambda, but I get a MongoNetworkError.
AWS Lambda
Mongoose
MongoDB Atlas
The same code was tested with serverless-offline and works perfectly; the problem appears when I deploy it to AWS Lambda.
This is the code snippet:
'use strict';
const mongoose = require('mongoose');
const MongoClient = require('mongodb').MongoClient;
let dbuser = process.env.DB_USER;
let dbpass = process.env.DB_PASSWORD;
let opts = {
bufferCommands: false,
bufferMaxEntries: 0,
socketTimeoutMS: 2000000,
keepAlive: true,
reconnectTries: 30,
reconnectInterval: 500,
poolSize: 10,
ssl: true,
};
const uri = `mongodb+srv://${dbuser}:${dbpass}@carpoolingcluster0-bw91o.mongodb.net/awsmongotest?retryWrites=true&w=majority`;
// simple hello test
module.exports.hello = async (event, context, callback) => {
const response = {
body: JSON.stringify({message:'AWS Testing :: '+ `${dbuser} and ${dbpass}`}),
};
return response;
};
// connect using mongoose
module.exports.cn1 = async (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = false;
let conn = await mongoose.createConnection(uri, opts);
const M = conn.models.Test || conn.model('Test', new mongoose.Schema({ name: String }));
const doc = await M.find();
const response = {
body: JSON.stringify({data:doc}),
};
return response;
};
// connect using mongodb
module.exports.cn2 = (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = false;
console.log("Connec to mongo using connectmongo ");
MongoClient.connect(uri).then(client => {
console.log("Success connect to mongo DB::::");
client.db('awsmongotest').collection('tests').find({}).toArray()
.then((result)=>{
let response = {
body: JSON.stringify({data:result}),
}
callback(null, response)
})
}).catch(err => {
console.log('=> an error occurred: ', err);
callback(err);
});
};
In the CloudWatch logs I see this error:
{
"errorType": "MongoNetworkError",
"errorMessage": "failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
"stack": [
"MongoNetworkError: failed to connect to server [carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017] on first connect [MongoNetworkError: connection 5 to carpoolingcluster0-shard-00-02-bw91o.mongodb.net:27017 closed]",
" at Pool.<anonymous> (/var/task/node_modules/mongodb-core/lib/topologies/server.js:431:11)",
" at Pool.emit (events.js:189:13)",
" at connect (/var/task/node_modules/mongodb-core/lib/connection/pool.js:557:14)",
" at callback (/var/task/node_modules/mongodb-core/lib/connection/connect.js:109:5)",
" at runCommand (/var/task/node_modules/mongodb-core/lib/connection/connect.js:129:7)",
" at Connection.errorHandler (/var/task/node_modules/mongodb-core/lib/connection/connect.js:321:5)",
" at Object.onceWrapper (events.js:277:13)",
" at Connection.emit (events.js:189:13)",
" at TLSSocket.<anonymous> (/var/task/node_modules/mongodb-core/lib/connection/connection.js:350:12)",
" at Object.onceWrapper (events.js:277:13)",
" at TLSSocket.emit (events.js:189:13)",
" at _handle.close (net.js:597:12)",
" at TCP.done (_tls_wrap.js:388:7)"
],
"name": "MongoNetworkError",
"errorLabels": [
"TransientTransactionError"
]
}
Here is an example on GitHub to reproduce the error:
https://github.com/rollrodrig/error-aws-mongo-atlas
Just clone it, run npm install, add your Mongo Atlas user and password, and push it to AWS.
Thanks.
Some extra steps are required to let Lambda call an external endpoint:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
Your Atlas cluster should also whitelist the IP addresses of the servers from which Lambda will connect.
Another option to consider: VPC peering between your Lambda VPC and Atlas.
I have some questions concerning your configuration:
Did you whitelist the AWS Lambda function's IP address in Atlas?
Several posts on SO indicate that users get a MongoNetworkError like this if the IP is not whitelisted. [1][4]
Did you read the best-practices guide by Atlas which states that mongodb connections should be initiated outside the lambda handler? [2][3]
Do you use a public lambda function or a lambda function inside a VPC? There is a substantial difference between them and the latter one is more error-prone since the VPC configuration (e.g. NAT) must be taken into account.
I was able to ping the instances in the Atlas cluster and to establish a connection on port 27017. However, when connecting via the mongo shell, I get the following error:
Unable to reach primary for set CarpoolingCluster0-shard-0.
Cannot reach any nodes for set CarpoolingCluster0-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
When I use your GitHub sample from AWS lambda I get the exact same error message as described in the question.
As the error messages are not authentication-related but network-related, I assume that something is blocking the connection... Please double-check the three config questions above.
[1] What is a TransientTransactionError in Mongoose (or MongoDB)?
[2] https://docs.atlas.mongodb.com/best-practices-connecting-to-aws-lambda/
[3] https://blog.cloudboost.io/i-wish-i-knew-how-to-use-mongodb-connection-in-aws-lambda-f91cd2694ae5
[4] https://github.com/Automattic/mongoose/issues/5237
Well, thanks everyone. I finally found the solution, with the help of Mongo support.
Here is the solution for anyone who needs it.
When you create a Mongo Atlas cluster, you are asked to add your local IP, and it is automatically added to the whitelist. You can see it under Your cluster > Network Access > IP Whitelist; your IP will be there in the list. It means that only people on YOUR network can connect to your Mongo Atlas. AWS Lambda is NOT on your network, so AWS Lambda will never connect to your Mongo Atlas. That is why I got the MongoNetworkError.
Fix
You need to add the AWS Lambda IP to the Mongo Atlas IP whitelist:
go to Your cluster > Network Access > IP Whitelist
click the ADD IP ADDRESS button
click ALLOW ACCESS FROM ANYWHERE; it will add the IP 0.0.0.0/0 to the list; click confirm
Test your call from AWS Lambda and it will work.
FINALLY!
What you have done is tell Mongo Atlas that ANYONE from ANYWHERE can connect to your Mongo Atlas.
Of course, this is not good practice. What you need is to add only the AWS Lambda IP, and this is where a VPC comes into the picture.
Creating a VPC is a little complex and has many steps; there are good tutorials in the other comments.
But for sure this small guide tackles the MongoNetworkError.

MongoDB Auth Fails to find username on Bitnami MEAN Stack Image

I'm trying to run a web app (MEAN stack) on an Amazon EC2 instance but am encountering the following problem. Can anyone help me with this?
node app.js The Server has started on 9091
/opt/bitnami/apps/YelpCamp/node_modules/mongodb-core/lib/auth/scram.js:128
username = username.replace('=', "=3D").replace(',', '=2C');
^
TypeError: Cannot read property 'replace' of undefined
at executeScram (/opt/bitnami/apps/SomeApp/node_modules/mongodb-core/lib/auth/scram.js:128:24)
at /opt/bitnami/apps/SomeApp/node_modules/mongodb-core/lib/auth/scram.js:277:7
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
Mongoose can do auth in two ways:
1. Connection string:
mongoose.connect('mongodb://username:password@host:port/db')
where username and password are the credentials for that specific db, host is the host where your db is hosted (localhost or some domain/IP), port is the port mongo listens on (usually 27017), and db is the name of the db you want to connect to.
2. Using options. From the docs:
var options = {
useMongoClient: true,
auth: {authdb: 'admin'},
user: 'myUsername',
pass: 'myPassword',
}
mongoose.connect(uri, options);
I also faced the 'username undefined' error with the first approach, but I succeeded with the second approach.
[Reference] https://github.com/Automattic/mongoose/issues/4891
