connection error while connecting to AWS DocumentDB - node.js

I am getting the following error while connecting to AWS DocumentDB from Node.js:
connection error: { [MongoNetworkError: connection 1 to docdb-2019-01-28-06-57-37.cluster-cqy6h2ypc0dj.us-east-1.docdb.amazonaws.com:27017 timed out] name: 'MongoNetworkError', errorLabels: [ 'TransientTransactionError' ] }
Here is my Node.js file, app.js:
var mongoose = require('mongoose');
mongoose.connect('mongodb://abhishek:abhishek@docdb-2019-01-28-06-57-37.cluster-cqy6h2ypc0dj.us-east-1.docdb.amazonaws.com:27017/?ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0', {
  useNewUrlParser: true
});
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
  console.log("connected...");
});

By default, AWS DocumentDB only accepts connections from within the same VPC. So to connect a Node.js application from an EC2 instance in the same VPC, you need the CA bundle (.pem file), since SSL is enabled by default when the DB instance is created.
Step 1: Download the certificate bundle into the required directory:
$ wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Step 2: Change the mongoose connection options to point to the pem file:
const fs = require('fs');
mongoose.connect(database.url, {
  useNewUrlParser: true,
  ssl: true,
  sslValidate: false,
  sslCA: fs.readFileSync('./rds-combined-ca-bundle.pem')
})
  .then(() => console.log('Connection to DB successful'))
  .catch((err) => console.error(err, 'Error'));
Here I am using mongoose 5.4.0.
To connect from outside the VPC, please follow the doc below from AWS:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
Personally, I only tried connecting from within the VPC, and it worked fine.
Update:
To connect from Robo 3T outside the VPC, please follow the link -
AWS DocumentDB with Robo 3T (Robomongo)

Using AWS DocumentDB from outside its VPC (for example, from a development EC2 server or from your local machine) will produce a connection error unless you use SSH tunneling or port forwarding.
Tunneling is simple. Run this command on your local machine:
ssh -i "ec2Access.pem" -L 27017:sample-cluster.node.us-east-1.docdb.amazonaws.com:27017 ubuntu@EC2-Host -N
In the application configuration, use:
{
  uri: 'mongodb://<user>:<password>@127.0.0.1:27017/Db',
  useNewUrlParser: true,
  useUnifiedTopology: true,
  directConnection: true
}
Just make sure the tunneling EC2 instance itself can connect to the database.
If you decide to use port forwarding instead, the steps are:
0- In the EC2 security group, add an inbound rule for Custom TCP on port 27017 (source: all traffic).
1- Go to your EC2 instance and install HAProxy:
$ sudo apt install haproxy
2- Edit the HAProxy configuration:
$ sudo nano /etc/haproxy/haproxy.cfg
3- At the end of the file, add:
listen mongo
    bind 0.0.0.0:27017
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    mode tcp
    server AWSmongo <database-host-url>:27017
4- Now restart HAProxy:
$ sudo service haproxy restart
5- Now you can access your database using:
{uri: 'mongodb://<database-user>:<database-pass>@<EC2-IP>:27017/<db>'}
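A small sketch of building that URI on the client side once HAProxy is forwarding the port. The user, password, and IP below are hypothetical placeholders; note that special characters in credentials must be URL-encoded before they go into the connection string:

```javascript
// Hypothetical credentials and EC2 IP -- replace with your own values.
const user = encodeURIComponent('myuser');
const pass = encodeURIComponent('p@ss/word'); // '@' and '/' must be percent-encoded
const ec2Ip = '203.0.113.10';

// The URI that would be handed to mongoose.connect() in the application.
const uri = `mongodb://${user}:${pass}@${ec2Ip}:27017/mydb`;
console.log(uri); // mongodb://myuser:p%40ss%2Fword@203.0.113.10:27017/mydb
```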


Node.js on Google Cloud Run with Cloud SQL - Error: connect ENOENT /cloudsql/<instancename...>

I'm not able to connect my Cloud Run service to Cloud SQL.
I am using Sequelize, and here is my connection section:
let sequelize = new (<any> Sequelize)(
  DATABASE_DATABASE,
  DATABASE_USERNAME,
  DATABASE_PASSWORD,
  {
    dialect: 'postgres',
    dialectOptions: {
      socketPath: `/cloudsql/${DATABASE_HOST}`,
      supportBigNumbers: true,
      bigNumberStrings: true
    },
    host: `/cloudsql/${DATABASE_HOST}`,
    port: DATABASE_PORT,
    logging: false,
  },
);
PS: Apparently everything is configured correctly: the Cloud SQL instance is connected to the specific Cloud Run service, they are in the same region, and the APIs are enabled...
My unsuccessful attempts were:
- The code snippet above returns: Error: connect ENOENT /cloudsql/<instancename...>
- If I change the host to 127.0.0.1, I get: connection refused 127.0.0.1:5432
- Setting the native property in the connection to true gives: Error: Connection not found
- A direct connection attempt with pg gives: Error: connect ENOENT /cloudsql/<instancename...>
- I created another project with other permissions, and the error continues
- I changed the zone, and the error continues
Another attempt: I tried to create a serverless VPC connector with private IP enabled on Cloud SQL, but I get a timeout in Cloud Run.
What was left was to open the Cloud SQL public network to 0.0.0.0/0, and with that the application works fine.
I'm out of ideas and need help connecting using /cloudsql/.
Cloud Run connects to Cloud SQL over Unix sockets.
You'll want to use the instance connection name in the socket path instead of the host IP address. It follows the format project-name:region-name:instance-name. You might also need to append the suffix .s.PGSQL.5432, so the full socket path looks like "/cloudsql/project-name:region-name:instance-name.s.PGSQL.5432". If you're using the socket path, you can remove the host and port connection arguments.
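For the Sequelize config in the question, that means dropping the IP-based host/port and passing the socket directory instead. A minimal sketch with a hypothetical instance connection name (whether the .s.PGSQL.5432 suffix is needed depends on the driver version):

```javascript
// Hypothetical instance connection name: project:region:instance.
const INSTANCE_CONNECTION_NAME = 'my-project:us-east-1:my-instance';

// Cloud Run mounts the Cloud SQL socket under /cloudsql/<connection-name>.
const socketPath = `/cloudsql/${INSTANCE_CONNECTION_NAME}`;

// Options object as it would be passed to `new Sequelize(db, user, pass, opts)`.
// Note: no host/port entries -- the socket path replaces them.
const opts = {
  dialect: 'postgres',
  dialectOptions: {
    socketPath, // append '.s.PGSQL.5432' here if your driver requires it
  },
  logging: false,
};
console.log(opts.dialectOptions.socketPath);
```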

connect to mongodb in docker from node app in AWS SAM

I am getting errors connecting to MongoDB running in a Docker container from my Node.js app running in AWS SAM (the question originally said "on my host").
I run mongodb like:
$ docker run --name mongo-myapp --rm -d -p 27018:27017 mongo
and I see the results:
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                      NAMES
ab8248d17d2d   mongo   "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes   0.0.0.0:27018->27017/tcp   mongo-myapp
~~I can successfully connect and insert data using clients running on my host like MongoDB Compass and Robo 3T Community Edition, specifying port 27018.~~
When I attempt to connect with this code running on my host (not in a Docker container):
const { MongoClient } = require('mongodb');
const mongoConnection = 'mongodb://127.0.0.1:27018';
new Promise((resolve, reject) => {
  MongoClient.connect(mongoConnection, (err, db) => {
    if (err) {
      console.error("Failed to connect to mongodb server", err);
      return reject(err);
    }
    console.log("Connected successfully to mongodb server");
    resolve(db);
  });
});
I always see the error:
MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27018
I get the same error when using a different port throughout, such as 27017.
UPDATE
It turns out my code was not running on the host. It was running in another Docker container. I did not realize AWS SAM would put my code into a Docker container, so I did not mention it in the original question.
Now I run my code with a mocha test to make sure it runs on my host, and it connects to the mongo database with no problems.
When I launched a local server using AWS SAM's start-api, I had problems. Perhaps the solution will be to specify a network when starting the mongo container as well as the SAM environment.
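One way to realize that idea, sketched with hypothetical network and container names: sam local start-api accepts a --docker-network flag, and on a shared user-defined network the Lambda container can reach the mongo container by its name.

```shell
# Hypothetical names -- a sketch assuming Docker and the AWS SAM CLI are installed.
docker network create sam-local                                    # shared bridge network
docker run --name mongo-myapp --network sam-local -d -p 27018:27017 mongo
sam local start-api --docker-network sam-local
# Inside the Lambda container the connection string would then be:
#   mongodb://mongo-myapp:27017
```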
Now that we know the Node.js code was running inside a Docker container created by AWS SAM, we can help it connect to MongoDB running in a separate Docker container with a port exposed on the host. At least one solution:
Change the connection string to mongodb://host.docker.internal:27018, which lets code inside the container reach a service running on the host.
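A sketch of picking the host at runtime so the same code works both in a mocha run on the host and inside the SAM container. It assumes the AWS_SAM_LOCAL environment variable, which sam local sets to "true" in the Lambda container; host.docker.internal resolves to the host machine from inside a Docker Desktop container:

```javascript
// Choose the MongoDB host based on where the code is running.
const insideSam = process.env.AWS_SAM_LOCAL === 'true'; // set by `sam local`
const host = insideSam ? 'host.docker.internal' : '127.0.0.1';
const uri = `mongodb://${host}:27018`;
console.log(uri);
```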
Install the necessary mongodb dependency in Node.js (https://www.npmjs.com/package/mongodb):
const MongoClient = require('mongodb').MongoClient;
const url = "mongodb://localhost:27018/testdb";
MongoClient.connect(url, function(err, db) {
  if (err) throw err;
  console.log("Database created!");
  db.close();
});
Since you are able to connect from other clients on the same host, I assume the container is bound to a specific external-facing IP of the host.
Can you try connecting to the IP of the host instead of 127.0.0.1?
E.g.: mongodb://external-ip:27018
Though the mapping -p 27018:27017 should bind the port on all IPs, you can enforce this with -p 0.0.0.0:27018:27017.

AWS RDS / EC2: TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full

I'm attempting to retrieve a User model from a Node.js 8.12.0 API, using the knex and bookshelf ORMs. The database is Postgres 10.4.
The API works fine locally, but hosted on Elastic Beanstalk EC2 and RDS, I get the error:
Unhandled rejection TimeoutError: Knex: Timeout acquiring a
connection. The pool is probably full. Are you missing a
.transacting(trx) call?
I'm able to connect and make queries to the RDS instance separately via the connection string (it prompts for the password after I enter this):
psql -h myinstance.zmsnsdbakdha.us-east-1.rds.amazonaws.com -d mydb -U myuser
Security Groups:
The EC2 security group (set up by EB) is sg-0fa31004bd2b763ce, and RDS has an inbound security rule for PostgreSQL / TCP / port 5432 with the matching source (sg-0fa31004bd2b763ce), so it doesn't seem like the security group is the problem.
RDS was created in a VPC, but the VPC's security rules are open too:
- security groups attached (multiple)
- name: mysgname
- group ID: sg-05d003b66fe1a4a94
- Inbound rules:
- All Traffic (0.0.0.0/0)
- HTTP (80) for TCP (0.0.0.0/0)
- SSH (22) for TCP (0.0.0.0/0)
- PostgreSQL (5432) for TCP (0.0.0.0/0)
Publicly accessible: Yes
users controller:
router.get('/users', function(req, res) {
  new User.User({'id': 1})
    .fetch({withRelated: ['addresses']})
    .then((user) => {
      res.send(user);
    });
});
Knexfile:
production: {
  client: 'pg',
  version: '7.2',
  connection: {
    host: process.env.PG_HOST || 'localhost',
    port: process.env.PG_PORT || '5432',
    user: process.env.PG_USER || 'myuser',
    password: process.env.PG_PASSWORD || '',
    database: process.env.PG_DB || 'mydb',
    charset: 'utf8',
  },
  pool: {
    min: 2,
    max: 20
  },
},
Firstly, why is this happening only in the AWS-hosted environment and not locally? Secondly, how can I fix this issue? Should I increase max for the pool?
You need to check your Network Access Control List (NACL) in your VPC and make sure your INBOUND and OUTBOUND are configured correctly. Security Groups are at the Instance level of security and the NACL is security at the Subnet level.
Most of the time when you are experiencing a Timeout error connecting to something in a custom VPC it will be a configuration problem with a Security Group or a NACL or Both.
I had working code running on a Heroku instance. I migrated to EBS and got stuck on this error for hours.
Heroku sets NODE_ENV=production by default, and I had corresponding configurations in my Node app.
But EBS does not set NODE_ENV=production by default, so my code was breaking.
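The variable can be set on the EB side (for example `eb setenv NODE_ENV=production` with the EB CLI, or via the console under Configuration -> Software -> Environment properties). The app can also fall back defensively when it is absent; a sketch, with pickConfig as an illustrative helper name:

```javascript
// Fall back to 'development' when the platform does not set NODE_ENV
// (Heroku sets it to 'production' by default; Elastic Beanstalk does not).
function pickConfig(nodeEnv) {
  const env = nodeEnv || 'development';
  return { env, isProduction: env === 'production' };
}

console.log(pickConfig(process.env.NODE_ENV).env);
```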

How to connect via mongoose and a ssh tunnel

I have set up my mongod.conf as follows so it only allows localhost connections:
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 127.0.0.1
I then want my site to SSH into the MongoDB server, so the remote port has to be forwarded to localhost.
How can I integrate this with mongoose's connect function?
mongoose.connect(configDB.url, function(err) {
  if (err) {
    console.log('Error connecting to mongodb: ' + err);
  }
});
I have found the following command, but I am not sure if this is what I need:
ssh -L 4321:localhost:27017 -i ~/.ssh/ssh_key user@ip-address
This should forward local port 4321 to port 27017 on the remote host, right? So I think I need something like this in the Node.js mongoose connect function. I've tried reading up on this in the MongoDB security tutorials, but I cannot relate their instructions to Node.js at all. Does anyone have experience with this?
You're nearly there. Set up the tunnel independent of node:
ssh -Nf -p [db_server_ssh_port] [mongo_user]@[mongo_domain] -L \
[local_db_port]:localhost:[remote_db_port]
And then within node, connect to mongo using [local_db_port]:
mongoose.connect(
  "mongodb://localhost:[local_db_port]/[db_name]",
  {"pass": "[db_pwd]"}
)
All the traffic sent to [local_db_port] on the web server will be sent through the tunnel to port [remote_db_port] on [mongo_domain]. The following post gives more info. It's connecting to a MySQL database, but the principle is the same.
Connect to MySQL using SSH Tunneling in node-mysql
Set up the tunnel independent of node:
ssh -L [your given port]:localhost:27017 [username of ssh]@[ip address of ssh machine] -f -N
After that, use your given port for the mongo database. In Node.js, set up the mongoose connection like this:
'mongodb://localhost:[your given port number]/[database name]'
Enjoy!

Sequelize connection over ssh

I am not able to connect to a remote MySQL server using Sequelize, but I am able to SSH into the machine and connect to MySQL.
How do I make Sequelize connect to the MySQL server over SSH rather than directly in Node?
You set up an SSH tunnel to your server, which listens on (for example) port 33060:
ssh -NL 33060:localhost:3306 yourserver
In another window/terminal, you can use the MySQL client (assuming that it's installed locally) to connect to the remote MySQL server over the SSH tunnel:
mysql --port 33060 --host 127.0.0.1
If that works, it's just a matter of changing the port number in Sequelize:
var sequelize = new Sequelize('database', 'username', 'password', {
  host: "127.0.0.1",
  port: 33060
});
If it doesn't work, it might be that the remote MySQL server isn't configured to accept connections over TCP. If that's the case, you should configure your MySQL server to get it to accept TCP connections.
