AWS RDS / EC2: TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full - node.js

I'm attempting to retrieve a User model from a Node.js 8.12.0 API, using knex and the bookshelf ORM. The database is Postgres 10.4.
The API works fine locally, but when hosted on Elastic Beanstalk (EC2 and RDS), I get this error:
Unhandled rejection TimeoutError: Knex: Timeout acquiring a
connection. The pool is probably full. Are you missing a
.transacting(trx) call?
I'm able to connect and run queries against the RDS instance separately via connection string and password (it prompts for the password after I enter this):
psql -h myinstance.zmsnsdbakdha.us-east-1.rds.amazonaws.com -d mydb -U myuser
Security Groups:
The EC2 security group (set up by EB) is sg-0fa31004bd2b763ce, and RDS has an inbound rule allowing PostgreSQL / TCP / port 5432 from that same source (sg-0fa31004bd2b763ce), so the security group doesn't seem to be the problem.
RDS was created in a VPC, but the VPC's security rules are open too:
- security groups attached (multiple)
- name: mysgname
- group ID: sg-05d003b66fe1a4a94
- Inbound rules:
- All Traffic (0.0.0.0/0)
- HTTP (80) for TCP (0.0.0.0/0)
- SSH (22) for TCP (0.0.0.0/0)
- PostgreSQL (5432) for TCP (0.0.0.0/0)
Publicly accessible: Yes
users controller:
router.get('/users', function(req, res) {
  new User.User({'id': 1})
    .fetch({withRelated: ['addresses']})
    .then((user) => {
      res.send(user);
    });
});
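(As an aside, the "Unhandled rejection" prefix just means this chain has no .catch. Adding one, roughly as below, surfaces the error cleanly but doesn't prevent the timeout itself:)
router.get('/users', function(req, res) {
  new User.User({'id': 1})
    .fetch({withRelated: ['addresses']})
    .then((user) => res.send(user))
    // the TimeoutError still fires; it just stops being "unhandled"
    .catch((err) => res.status(500).send(err.message));
});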
Knexfile:
production: {
  client: 'pg',
  version: '7.2',
  connection: {
    host: process.env.PG_HOST || 'localhost',
    port: process.env.PG_PORT || '5432',
    user: process.env.PG_USER || 'myuser',
    password: process.env.PG_PASSWORD || '',
    database: process.env.PG_DB || 'mydb',
    charset: 'utf8',
  },
  pool: {
    min: 2,
    max: 20
  },
},
Firstly, why is this happening only in the AWS-hosted environment and not locally? Secondly, how can I fix this issue? Should I increase the pool's max?

You need to check the Network Access Control List (NACL) in your VPC and make sure your INBOUND and OUTBOUND rules are configured correctly. Security Groups provide security at the instance level, while the NACL provides security at the subnet level.
Most of the time, when you experience a timeout connecting to something in a custom VPC, it is a configuration problem with a Security Group, a NACL, or both.
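For example, you can list the NACLs attached to your VPC from the AWS CLI and inspect their entries (the VPC ID here is a placeholder):
aws ec2 describe-network-acls --filters Name=vpc-id,Values=vpc-0123456789abcdef0
Each entry shows a rule number, protocol, port range, CIDR, and ALLOW/DENY. Keep in mind that NACLs, unlike Security Groups, are stateless: return traffic on ephemeral ports has to be allowed in the OUTBOUND rules too.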

I had working code running on a Heroku instance. I migrated to EBS and got stuck on this error for hours.
Heroku sets NODE_ENV=production by default, and I had corresponding configuration in my Node app.
But EBS does not set NODE_ENV=production by default, so my code was breaking.
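For anyone hitting the same thing: if your knexfile keys its configuration by environment (as in the question), the usual selection code silently falls back to development settings when NODE_ENV is unset. A minimal sketch (the require path is assumed):
// with NODE_ENV unset, this falls back to the 'development' block,
// which typically points at localhost instead of RDS
const environment = process.env.NODE_ENV || 'development';
const config = require('./knexfile')[environment];
const knex = require('knex')(config);
Setting NODE_ENV=production in the Elastic Beanstalk environment properties (or eb setenv NODE_ENV=production from the EB CLI) restores the expected behavior.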

Related

Node.js on Google Cloud Run with Cloud SQL: Error: connect ENOENT /cloudsql/<instancename...>

I'm not able to connect the Cloud Run service to Cloud SQL.
I am using Sequelize, and here is my connection section:
let sequelize = new (<any> Sequelize)(
  DATABASE_DATABASE,
  DATABASE_USERNAME,
  DATABASE_PASSWORD,
  {
    dialect: 'postgres',
    dialectOptions: {
      socketPath: `/cloudsql/${DATABASE_HOST}`,
      supportBigNumbers: true,
      bigNumberStrings: true
    },
    host: `/cloudsql/${DATABASE_HOST}`,
    port: DATABASE_PORT,
    logging: false,
  },
);
PS: Apparently everything is configured correctly, that is:
the Cloud SQL instance is connected to the specific Cloud Run service,
they are in the same region, and the necessary APIs are enabled...
My unsuccessful attempts were:
- The code snippet above returns: Error: connect ENOENT /cloudsql/<instancename...>
- If I change the host to 127.0.0.1, I get: connection refused 127.0.0.1:5432
- Setting the native property to true in the connection gives: Error: Connection not found
- A direct connection attempt with pg gives: Error: connect ENOENT /cloudsql/<instancename...>
- I created another project with other permissions, and the error continues
- I changed the zone, and the error continues
Another attempt was:
I tried to create a serverless VPC connector for the private IP connection enabled on Cloud SQL, but I get a timeout in Cloud Run.
What was left was to open the Cloud SQL public network to 0.0.0.0/0, and with that the application works fine.
I'm out of ideas and need help connecting using /cloudsql/.
Cloud Run connects to Cloud SQL over Unix sockets.
You'll want to use the instance connection name in the socket path instead of the host IP address. This follows the format project-name:region-name:instance-name. You might also need to append the suffix .s.PGSQL.5432, so the full socket path looks like "/cloudsql/project-name:region-name:instance-name.s.PGSQL.5432". If you're using the socket path, you can remove the host and port connection arguments.
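A corrected config might look something like this (a sketch: the instance connection name and env variable are placeholders, and note that the pg driver treats a host beginning with "/" as a Unix socket directory and appends .s.PGSQL.<port> itself):
const Sequelize = require('sequelize');

// assumed to hold e.g. 'my-project:us-central1:my-instance'
const instance = process.env.INSTANCE_CONNECTION_NAME;

const sequelize = new Sequelize(DATABASE_DATABASE, DATABASE_USERNAME, DATABASE_PASSWORD, {
  dialect: 'postgres',
  // socket directory mounted into the Cloud Run container; no host IP or port needed
  host: `/cloudsql/${instance}`,
  logging: false,
});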

How to fix the ECONNRESET ioredis error in Adonisjs during deployment

When I try to deploy AdonisJS to DigitalOcean or Azure, I get this error:
[ioredis] Unhandled error event: Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
My Adonis app requires Redis to run. I'm using a Redis instance from Digital Ocean. Here's my production config for Redis.
prod: {
  host: Env.get("REDIS_HOST"),
  port: Env.get("REDIS_PORT"),
  password: Env.get("REDIS_PASSWORD"),
  db: 0,
  keyPrefix: ""
},
If you are connecting your AdonisJS app to a Transport Layer Security (TLS) protected Redis instance, you need to add the tls host to your config.
So, your prod config should look like this
prod: {
  host: Env.get("REDIS_HOST"),
  port: Env.get("REDIS_PORT"),
  password: Env.get("REDIS_PASSWORD"),
  db: 0,
  keyPrefix: "",
  tls: {
    host: Env.get("REDIS_HOST"),
  },
},
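The error prefix ([ioredis]) gives away that Adonis's Redis provider is built on ioredis, and this config object is handed through to it. The equivalent standalone connection would look something like this (a sketch; the env variable names are assumed):
const Redis = require('ioredis');

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASSWORD,
  // the tls object is passed through to Node's tls.connect(); its presence
  // is what switches the connection from plain TCP to TLS
  tls: { host: process.env.REDIS_HOST },
});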
As a followup to my comment - my docker environment degraded to the point where I couldn't even connect via redis-cli to the vanilla dockerhub Redis image. I wound up cleaning out my docker environment by removing all containers, images, volumes, networks, etc., and then rebooting my mac. After rebuilding them this problem went away for me.
I hate not knowing the "root cause" but have a theory. I had been playing with a few different Redis images including the vanilla standalone image from dockerhub and a cluster image from https://github.com/Grokzen/docker-redis-cluster. I was tweaking the build of the latter to add authentication. The theory is that there were residual processes fighting over the port from repeated builds and tear downs. I may have been impatient and hard-stopped the containers I was working on multiple times while debugging the dockerfile and docker-entrypoint.sh files. :)
I know this answer isn't directly related to hosting on DO or Azure but since the symptom is the same, perhaps there is a networking conflict somewhere.

connection refused connecting to remote mongodb server

So we've accumulated enough applications in our network that use MongoDB to justify building a dedicated server specifically for MongoDB. Unfortunately, I'm pretty new to MongoDB (coming from SQL/MySQL derivatives). I have followed several guides on installing and configuring MongoDB for my environment. None are perfect, but I think I'm close... I've managed to get to a point where I can connect to the db server from the local server using the following command:
mongo -u user 127.0.0.1/admin
However, I'm NOT able to connect to the server from either the local OR a remote computer using its network address, i.e.:
mongo -u user 192.168.24.102/admin
I've tried both with authentication enabled and disabled, and I've tried setting the bindIP to 192.168.24.102 and 0.0.0.0 with no love. Thinking it was a Firewall issue, I disabled the firewall entirely... same. no love...
So what's the secret sauce? How do I connect to a MongoDB server remotely?
Some notes to know: this server is on a local network only. There will be some NAT shenanigans at some point directing public traffic to it from remote application servers, but only on specific ports (we will NOT be using 27017 when that happens), and it will sit behind a pretty robust firewall appliance, so I'm not as worried about securing the server as I am about securing MongoDB itself.
This answer assumes a setup where a Linux server is completely remote and has MongoDB already installed.
Steps:
1. Connect to your remote server over SSH.
ssh <userName>@<server-IP-address>
2. Start Mongo shell and add users to MongoDB.
Add the admin:
use admin
db.createUser(
  {
    user: "AdminSammy",
    pwd: "AdminSammy'sSecurePassword",
    roles: [
      "userAdminAnyDatabase",
      "dbAdminAnyDatabase",
      "readWriteAnyDatabase"
    ]
  }
)
Then add general user/users. Users are added to specific databases.
use some_db
db.createUser({
  user: 'userName',
  pwd: 'secretPassword',
  roles: [{ role: 'readWrite', db: 'some_db' }]
})
3. Edit your MongoDB config file, mongod.conf, found in the /etc directory.
sudo vim /etc/mongod.conf
Scroll down to the #security: section and add the following line. Make sure to un-comment the security: line.
security:
  authorization: 'enabled'
After authorization has been enabled, only users authenticated with a password can access the database. In this case, these are the ones added in step 2 above.
Note: Visual Studio Code can also be used over SSH to edit the mongod.conf file.
4. Add remote server's IP address to mongod.conf file.
Look for the net line and add the IP address of the server that is hosting this MongoDB installation, for example 178.45.55.88:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,178.45.55.88
5. Open port 27017 on your server instance.
This allows access to your MongoDB server from anywhere in the world to anyone who knows your remote server's IP address, which is one reason to have authenticated users. More robust ways of handling security are really important; consult the MongoDB manual for that.
Check firewall status using ufw.
sudo ufw status
If it's not active, activate it.
sudo ufw enable
Then,
sudo ufw allow 27017
Important: You also need to allow port 22 for SSH communication with your remote server; otherwise you will be locked out. The assumption here is that SSH uses its default port, 22.
sudo ufw allow 22
6. Restart Mongo daemon (mongod)
sudo systemctl restart mongod
7. Connect to remote Mongo server using Mongo shell
You can now connect to the remote MongoDB server using the following command.
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port>
You can also connect to the remote MongoDB server with authentication:
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port> --authenticationDatabase <auth-db-name>
You can also connect to a specific remote MongoDB database with authentication:
mongo -u <user-name> -p <user-password> <remote-server-IP-address>:<mongo-server-port>/<db-name> --authenticationDatabase <auth-db-name>
At this point you can read and write the some_db database from your local computer without SSH.
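From an application, the same credentials map onto a connection URI, where --authenticationDatabase becomes the authSource query parameter. A sketch using mongoose (the host, names, and password are the placeholders from the steps above):
// equivalent of:
// mongo -u userName -p secretPassword 178.45.55.88:27017/some_db --authenticationDatabase some_db
const mongoose = require('mongoose');

mongoose.connect('mongodb://userName:secretPassword@178.45.55.88:27017/some_db?authSource=some_db')
  .then(() => console.log('connected'))
  .catch((err) => console.error(err));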
Important: take the standard security measures for any database into consideration. Local security practices should guide each of the above steps.

connection error while connecting to AWS DocumentDB

I'm getting the following error while connecting to AWS DocumentDB from Node.js:
connection error: { [MongoNetworkError: connection 1 to
docdb-2019-01-28-06-57-37.cluster-cqy6h2ypc0dj.us-east-1.docdb.amazonaws.com:27017
timed out] name: 'MongoNetworkError', errorLabels: [
'TransientTransactionError' ] }
Here is my Node.js file, app.js:
var mongoose = require('mongoose');
mongoose.connect('mongodb://abhishek:abhishek@docdb-2019-01-28-06-57-37.cluster-cqy6h2ypc0dj.us-east-1.docdb.amazonaws.com:27017/?ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0', {
  useNewUrlParser: true
});
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
  console.log("connected...");
});
By default, AWS DocumentDB only accepts connections from within the same VPC.
So, to connect a Node.js application from an EC2 instance in the same VPC, you need the PEM file, because SSL is enabled by default when the DB instance is created.
Step 1: download the certificate bundle into the required directory:
$ wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Step 2: change the mongoose connection with options pointing to the PEM file:
// fs is needed to read the CA bundle downloaded in step 1
const fs = require('fs');

mongoose.connect(database.url, {
  useNewUrlParser: true,
  ssl: true,
  sslValidate: false,
  sslCA: fs.readFileSync('./rds-combined-ca-bundle.pem')
})
  .then(() => console.log('Connection to DB successful'))
  .catch((err) => console.error(err, 'Error'));
Here I am using mongoose 5.4.0.
To connect from outside the VPC, please try to follow the doc below from AWS:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
Personally, I only tried connecting from inside the VPC, and it worked fine.
Update:
To connect from Robo 3T outside the VPC, please follow the link -
AWS DocumentDB with Robo 3T (Robomongo)
Using AWS DocumentDB outside its VPC (for example, from a development EC2 server or from your local machine) will give a connection error unless you use SSH tunneling or port forwarding.
Tunneling is simple. Use this command on your local machine:
ssh -i "ec2Access.pem" -L 27017:sample-cluster.node.us-east-1.docdb.amazonaws.com:27017 ubuntu@EC2-Host -N
In your application configuration, use:
{
  uri: 'mongodb://<user>:<password>@127.0.0.1:27017/Db',
  useNewUrlParser: true,
  useUnifiedTopology: true,
  directConnection: true
}
Just make sure this tunneling EC2 instance can reach the database.
If you decide to use port forwarding instead, the steps are:
0- In the EC2 security group, add an inbound rule for custom TCP, port 27017, all traffic.
1- Go to your EC2 instance and install HAProxy:
$ sudo apt install haproxy
2- Edit the HAProxy configuration:
$ sudo nano /etc/haproxy/haproxy.cfg
3- At the end of the file, add:
listen mongo
    bind 0.0.0.0:27017
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    mode tcp
    server AWSmongo <database-host-url>:27017
4- Now restart HAProxy:
$ sudo service haproxy restart
5- Now you can access your database using:
{uri: 'mongodb://<database-user>:<database-pass>@<EC2-IP>:27017/<db>'}

Using Navicat to login to Postgresql via SSH - what are the correct settings?

I am trying to log into PostgreSQL on my EC2 server via SSH using Navicat.
I get the following error message:
"80070007: SSH Tunnel: Socket error on connecting. WSAGetLastError return 10061($274D)"
On the server, the "role" postgres already exists, and there is already a database called postgres. I have assigned a password to postgres (using the ALTER ROLE command via PuTTY).
The SSH settings I am using in Navicat are:
Port: 5432
User Name: [admin user name]
Authentication Method: Public Key
The Connection settings are:
Host Name: localhost
Port: 3306
Initial Database: postgres
User Name: postgres
Password: [password]
When I connect to the MySQL server on the same machine, the settings are exactly the same except for:
SSH to Port 22
User Name (for connection): root (with corresponding password)
I have tried the SSH to port 22, in which case the error message is:
"could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" and accepting TCP/IP connections on port 60122?
received invalid response to SSL negotiation:4"
Any ideas on what settings I need to change to get this to work?
Your config seems to be very wrong.
The SSH port should not be 5432 but 22 (the SSH default).
The PostgreSQL port should not be 3306 (that is actually MySQL) but 5432 (the Postgres default).
To verify your setup, try ssh-ing into your EC2 instance manually.
After you ssh in, check if you can execute "telnet localhost 5432".
If you see an error immediately, that means the Postgres server is not running.
If you see nothing, that is a good sign: Postgres is running.
You can quit from this by Ctrl-], q, Enter.
Note that EC2 instances may require SSH public key authentication (not a password). In this case, you will have to find the option in Navicat to provide such a key.
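You can also reproduce by hand what Navicat's SSH tunnel does, which helps isolate where it fails (a sketch; the key path, user names, and local port are placeholders):
# forward local port 5433 to Postgres on the EC2 box, authenticating with the key
ssh -i ~/.ssh/my-ec2-key.pem -L 5433:localhost:5432 admin-user@your-ec2-host -N

# in a second terminal, connect through the tunnel
psql -h localhost -p 5433 -U postgres postgres
If this works, the same key, user, and ports should work in Navicat's SSH and Connection tabs.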
