Can't access MongoDB cluster in AWS VPC Elastic Beanstalk - node.js

I have a Node app running on Elastic Beanstalk.
I also have a CloudFormation MongoDB cluster with 1 replica. I can connect to it directly using the EC2 private IP 111.22.3.44; that instance is named PrimaryReplicaNode0.
However, I keep getting MongoDB master/slave errors, so I don't think I'm supposed to connect to it directly. Which address am I supposed to use from within Elastic Beanstalk to connect?
Do I connect directly to the EC2 replica address, or do I use a subnet of some sort?
Both the MongoDB cluster and Elastic Beanstalk servers are in the same VPC.
Connected to mongodb named app1 at 172.00.1.XX
Express https server listening on port 8081 in development mode
{ [MongoError: connect ETIMEDOUT 172.00.1.XX:27017]
name: 'MongoError',
message: 'connect ETIMEDOUT 172.00.1.XX:27017' }
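
The usual approach with a replica set is to connect with a connection string that lists the members and names the set, rather than a single node's private IP; the ETIMEDOUT in the log above may also point at a security group that doesn't allow port 27017 from the Beanstalk instances. A minimal sketch with the Node.js native driver, where the member IPs and the set name rs0 are placeholders for your own values:

const { MongoClient } = require('mongodb');

// List every replica set member, not just the primary's private IP;
// the driver discovers the topology and routes writes to the primary.
const uri = 'mongodb://172.0.1.10:27017,172.0.1.11:27017/app1?replicaSet=rs0';

MongoClient.connect(uri)
  .then(() => console.log('connected to the replica set'))
  .catch(err => console.error('connection failed:', err));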

Related

When running a Node.js application, getting a whitelist IP error from the Mongo cluster

I have implemented a Node.js application that uses MongoDB as its database, with nginx in front to run the project.
Everything works fine locally, but when the code is deployed on AWS Linux 2 I get this error:
body-parser deprecated undefined extended: provide extended option server.js:39:17
(node:30529) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
I have already allowed all IPs in the cluster's Network Access list: 0.0.0.0/0 (includes your current IP address)
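
For what it's worth, handling the rejection from mongoose.connect surfaces the underlying server selection error instead of an UnhandledPromiseRejectionWarning. A minimal sketch, where the MONGODB_URI environment variable is an assumed name for the Atlas SRV string:

const mongoose = require('mongoose');

// MONGODB_URI is a placeholder env var holding the Atlas connection string.
mongoose.connect(process.env.MONGODB_URI)
  .then(() => console.log('connected to MongoDB'))
  .catch(err => {
    // Logs the MongooseServerSelectionError instead of an unhandled rejection.
    console.error('Mongo connection failed:', err.message);
    process.exit(1);
  });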

Changed from AWS-self-hosted MongoDB to Atlas causes pymongo.errors.ServerSelectionTimeoutError:

I have an app running in ECS that connected to a self-hosted MongoDB EC2 instance in the same VPC. The connection URL was the private IP of the EC2 instance. Below is the connection code:
# MongoDB connection settings from system environment variables
import os
from pymongo import MongoClient

MONGODB_URL = os.environ.get('MONGODB_URL', 'localhost')
MONGODB_PORT = int(os.environ.get('MONGODB_PORT', '27017'))
client = MongoClient(MONGODB_URL, MONGODB_PORT)
I moved the DB to Atlas, established a VPC peering connection, whitelisted the AWS VPC network, and created a new DB user just for the app.
Then I changed the connection string in the ECS task to
MONGODB_URL: mongodb+srv://<username>:<password>@mongodb-prod.zzl18.mongodb.net/nameofDB?retryWrites=true&w=majority
Now when I run the program I get:
pymongo.errors.ServerSelectionTimeoutError: mongodb-prod-shard-00-00.zzl18.mongodb.net:27017: timed out,mongodb-prod-shard-00-02.zzl18.mongodb.net:27017: timed out,mongodb-prod-shard-00-01.zzl18.mongodb.net:27017: timed out
To see whether it was a network issue, I opened the DB's network access up to 0.0.0.0/0.
I still receive the same error.
Maybe the issue is the connection string, but I'm not sure. Any help you can provide would be greatly appreciated.
The ECS App subnet needed to be added to the VPC route table.
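
For reference, a minimal sketch of the corrected pymongo setup after the move, assuming the SRV string is passed via the same MONGODB_URL variable (note that mongodb+srv URIs require the dnspython package, and no separate port is passed with an SRV string):

import os
from pymongo import MongoClient

# The full SRV string (with '@', not '#') comes from the environment.
MONGODB_URL = os.environ.get(
    'MONGODB_URL',
    'mongodb+srv://<username>:<password>@mongodb-prod.zzl18.mongodb.net/nameofDB?retryWrites=true&w=majority'
)
client = MongoClient(MONGODB_URL)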

Access ElastiCache redis from a private EKS Cluster

I have an EKS cluster and am trying to connect an application pod to the ElastiCache Redis endpoint. Both are in the same VPC, and I have allowed communication between EKS and ElastiCache Redis.
When I telnet from a pod to the ElastiCache Redis endpoint, it connects. But unfortunately, connecting from my Node.js application won't work.
Can somebody help me resolve this?
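
Two things worth checking: that the client is pointed at the ElastiCache endpoint rather than defaulting to localhost, and that a rediss:// URL is used if in-transit encryption is enabled on the cluster. A minimal sketch with node-redis v4, where the endpoint is a placeholder:

const { createClient } = require('redis');

// Placeholder primary endpoint; use rediss:// if in-transit encryption is on.
const client = createClient({ url: 'redis://my-cluster.xxxxxx.0001.use1.cache.amazonaws.com:6379' });

client.on('error', err => console.error('Redis error:', err));

client.connect()
  .then(() => client.ping())
  .then(pong => console.log('PING ->', pong))
  .catch(err => console.error('connect failed:', err));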

Kubernetes node.js container cannot connect to MongoDB Atlas

So I've been struggling with this all afternoon. I can't get my Node.js application running on Kubernetes to connect to my MongoDB Atlas database at all.
In my application I've tried running
mongoose.connect('mongodb+srv://admin:<password>@<project>.gcp.mongodb.net/project_prod?retryWrites=true&w=majority', { useNewUrlParser: true })
but I simply get the error
UnhandledPromiseRejectionWarning: Error: querySrv ETIMEOUT _mongodb._tcp.<project>.gcp.mongodb.net
at QueryReqWrap.onresolve [as oncomplete] (dns.js:196:19)
(node:32) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 17)
I've tried setting up an ExternalName service too, but using either the URL or the ExternalName results in me not being able to connect to the database.
I've whitelisted my IP on MongoDB Atlas, so I know that isn't the issue.
It also seems to work on my local machine, but not in the kubernetes pod. What am I doing wrong?
I figured out the issue: my pod DNS was not configured to resolve external names, so I set dnsPolicy: Default in my YAML, because oddly enough Default is not actually the default value.
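
For reference, a minimal sketch of where that field lives in a deployment manifest (all names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      dnsPolicy: Default   # inherit the node's DNS configuration instead of cluster DNS
      containers:
        - name: my-app
          image: my-app:latest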
I use MongoDB Atlas from Kubernetes, but on AWS. Although for testing purposes you can enable all IP addresses and test, here is the approach for a production setup:
MongoDB Atlas supports network peering.
Go to Network Access > New Peering Connection.
In the case of AWS, the VPC ID, CIDR, and region have to be specified. For GCP it is the standard procedure used for VPC peering.
Firstly, pods are deployed on nodes in a cluster, not on your Service (so Mongo won't recognise your Service endpoints, e.g. a load balancer IP). Based on this, there are two solutions:
Solution A
Add the endpoint of the Kubernetes cluster to the MongoDB Network Access IP whitelist.
Under the pod spec of your k8s pod (or deployment) manifest, add a dnsPolicy field set to Default, as in the manifest sketch above. Your pods (your containers, basically) will then connect to Mongo through the name-resolution configuration of the node they run on.
Solution B
Add all node endpoints in your k8s cluster to the MongoDB Network Access IP whitelist.
Under the pod spec of your k8s pod (or deployment) manifest, add a dnsPolicy field set to ClusterFirstWithHostNet and run the pod with hostNetwork: true, which is what that policy is for; see the sketch below. Since the pods run with the host network, they egress as the node itself and can reach services listening on the node's localhost.
Kubernetes documentation
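
A minimal sketch of the pod spec for Solution B (ClusterFirstWithHostNet only changes behaviour when the pod also runs with hostNetwork: true):

spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # cluster DNS first, while on the host network
  containers:
    - name: my-app
      image: my-app:latest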

Connect to MongoDB hosted on AWS ec2 from Elastic Beanstalk

I'm trying to host my web app on AWS.
I'm hosting my Node.js app on Elastic Beanstalk (scalable).
I have created an EC2 instance to host my MongoDB.
In test, the MongoDB EC2 instance accepts connections on port 27017 from anywhere.
And my website works great.
The problem is that I want to restrict access to the MongoDB EC2 instance so that it only allows connections from my Elastic Beanstalk app.
I changed the rule of my EC2 instance's security group to only accept TCP port 27017 connections from the security group the Elastic Beanstalk app is assigned to.
This breaks the communication to MongoDB from my app immediately.
I have also tried allowing all traffic from the Beanstalk security group, with no luck.
Have I got anything wrong? Please help!
I needed to edit the /etc/mongod.conf file and set bind_ip = 0.0.0.0 in order to allow external connections; see the snippet below.
I also had to try different subnet masks before it worked: xxx.xxx.0.0/16 worked for me, but xxx.xxx.0.0/24 and xxx.xxx.0.0/32 didn't.
Also, AWS recommends using the private IP if you are in the same zone (it keeps costs down), but the public IP otherwise.
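
For reference, the relevant lines in the older INI-style /etc/mongod.conf (YAML-style configs express the same setting as net.bindIp):

# /etc/mongod.conf
# Listen on all interfaces so other hosts in the VPC can connect;
# the security group rule above is what restricts who can actually reach it.
bind_ip = 0.0.0.0
port = 27017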
