Kubernetes Node.js container cannot connect to MongoDB Atlas

So I've been struggling with this all afternoon: I can't get my Node.js application running on Kubernetes to connect to my MongoDB Atlas database.
In my application I've tried running
mongoose.connect('mongodb+srv://admin:<password>@<project>.gcp.mongodb.net/project_prod?retryWrites=true&w=majority', { useNewUrlParser: true })
but I simply get the error
UnhandledPromiseRejectionWarning: Error: querySrv ETIMEOUT _mongodb._tcp.<project>.gcp.mongodb.net
at QueryReqWrap.onresolve [as oncomplete] (dns.js:196:19)
(node:32) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 17)
I've tried setting up an ExternalName service too, but using either the URL or the ExternalName still leaves me unable to connect to the database.
I've whitelisted my IP on MongoDB Atlas, so I know that isn't the issue.
It also seems to work on my local machine, just not in the Kubernetes pod. What am I doing wrong?

I figured out the issue: my pod's DNS was not configured to allow external lookups, so I set dnsPolicy: Default in my YAML. Oddly enough, Default is not actually the default value (ClusterFirst is).
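For anyone who hits the same thing, here is a minimal sketch of where that field goes (the Deployment and image names are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      dnsPolicy: Default       # inherit the node's resolver, so external SRV lookups work
      containers:
        - name: app
          image: myregistry/my-node-app:latest   # hypothetical image
          ports:
            - containerPort: 3000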

I use MongoDB Atlas from Kubernetes too, but on AWS. Although for testing purposes you can allow all IP addresses and test, here is the approach for a production setup:
MongoDB Atlas supports Network Peering
Under Network Access > New Peering Connection
In the case of AWS, VPC ID, CIDR and Region have to be specified. For GCP it should be the standard procedure used for VPC peering.

Firstly, Pods are deployed on nodes in a cluster, not on your Service, so Mongo won't recognise your Service endpoints (e.g. a load balancer IP). Based on this, there are two solutions:
Solution A
Add the endpoint of the Kubernetes cluster to the MongoDB network access IP whitelist.
Under the pod spec of your k8s Pod (or Deployment) manifest, add a dnsPolicy field with the value set to Default. Your Pods (your containers, basically) will then resolve names through the DNS configuration of the node they run on.
Solution B
Add all node endpoints in your k8s cluster to the MongoDB network access IP whitelist.
Under the pod spec of your k8s Pod (or Deployment) manifest, add a dnsPolicy field with the value set to ClusterFirstWithHostNet and run the Pods with hostNetwork: true, which this policy is designed for. Since the Pods then run on your host's network, they gain access to services listening on the node's localhost, as in the sketch below.
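A minimal sketch of solution B's manifest (names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-node-app                    # hypothetical name
spec:
  hostNetwork: true                    # the Pod shares the node's network stack
  dnsPolicy: ClusterFirstWithHostNet   # keeps cluster DNS usable despite hostNetwork
  containers:
    - name: app
      image: myregistry/my-node-app:latest   # hypothetical image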
Kubernetes documentation

Related

Whitelist IP error from Mongo cluster when running a Node.js application

I have implemented a Node.js application with MongoDB as the database, using nginx to serve the project.
Everything works fine locally, but when the code is deployed on AWS Linux 2 I get this error:
body-parser deprecated undefined extended: provide extended option server.js:39:17
(node:30529) UnhandledPromiseRejectionWarning: MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/
I have already allowed all IPs in the cluster's Network Access settings: 0.0.0.0/0 (includes your current IP address)
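As an aside, handling the rejection surfaces the underlying cause instead of the generic UnhandledPromiseRejectionWarning. A minimal sketch, assuming Mongoose (the connection string is a placeholder):

// Surface the real connection error instead of an unhandled rejection.
const mongoose = require('mongoose');

mongoose
  .connect('mongodb+srv://admin:<password>@<cluster>.mongodb.net/mydb', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    serverSelectionTimeoutMS: 5000, // fail fast instead of hanging
  })
  .then(() => console.log('connected'))
  .catch((err) => {
    console.error('Mongo connection failed:', err.reason || err);
    process.exit(1);
  });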

Pool connection timeout - connecting to AWS RDS from EC2

I am trying to connect to an Amazon RDS (Postgres) instance from an EC2 server via a Node.js application using the pg npm package. This is the error I am receiving (note: I'm hitting my Node backend via a React app):
OPTIONS /users/login 200 0.424 ms - 2
Error fetching client from pool Error: Connection terminated due to connection timeout
I have tested the app locally and everything works perfectly (including connecting to RDS), but as soon as I run the app on the server I can't connect.
To simplify the problem, I have typed my credentials explicitly into the Node.js route file, so I know there are no issues with environment variables etc. I then pushed my code to the server, pulled it as-is, and ran it. No luck. From a connection perspective, I just create a pool (requiring Pool from pg) and then use pool.connect and client.query to make the request.
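For reference, the pattern described above looks roughly like this (host and credentials are placeholders); connectionTimeoutMillis is the timeout behind the error quoted earlier:

// Minimal sketch of the pg pool pattern described above.
const { Pool } = require('pg');

const pool = new Pool({
  host: 'mydb.xxxxxxxx.us-east-1.rds.amazonaws.com', // placeholder RDS endpoint
  port: 5432,
  user: 'postgres',
  password: 'secret',
  database: 'mydb',
  connectionTimeoutMillis: 5000, // when this expires, pg reports the connection timeout
});

async function getUsers() {
  const client = await pool.connect(); // this is the call that times out
  try {
    const res = await client.query('SELECT * FROM users');
    return res.rows;
  } finally {
    client.release();
  }
}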
Given that it works locally, I feel like the issue is an AWS one with my networking/security groups, but I have tried everything:
Ensured the db is set to public
Created a fresh security group and added it to EC2 and to RDS
Completely opened the ports (inbound and outbound)
Created a VPC and added to both EC2 and RDS
Checked the inbound/outbound are open on the VPC subnet NACL
Any help would be much appreciated. I am going insane
Connect to your server and try to debug the connection with telnet or a PostgreSQL client.
The most common mistakes for this error are:
The RDS security group does not allow incoming connections from your VPC range, or from the public EC2 server IP (in the case of a public database).
The RDS subnet's NACL does not allow outgoing connections. Keep in mind that NACLs are stateless: only the inbound connection arrives on the port you define in RDS, while the return traffic uses ephemeral ports, which must be allowed outbound. But I think this is not your case, since you said you could connect locally.
RDS Route Table doesn't allow connections from outside the VPC. But, again, I think that's not your case.
EC2 Security Group does not allow outgoing connections to the RDS. This case is a little trickier but it can happen if you don't set the SG properly.
The last case is that your EC2 server's subnets do not allow connections to the internet. You said that you can connect locally, so I imagine your RDS is properly set to allow public connections; however, it may be that you didn't attach an Internet Gateway or NAT Gateway to your EC2 server's route table, or didn't configure the NACL to allow inbound/outbound connections to the internet.
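Concretely, running something like this from the EC2 instance quickly narrows it down (the endpoint is a placeholder):

# Does the RDS endpoint accept TCP on 5432 at all?
telnet mydb.xxxxxxxx.us-east-1.rds.amazonaws.com 5432

# Or go one step further with the PostgreSQL client:
psql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U postgres -d mydb

If the TCP connection hangs, the problem is security groups, NACLs, or routing; if it opens but psql fails, the network path is fine and the problem is credentials or database configuration.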

Allow AWS RDS connection from an Azure K8S pods

We are currently migrating from AWS ECS to Azure Kubernetes Service. Our first step is to migrate the application code and, for now, leave the database in AWS RDS. Our RDS instance is protected by a security group which only allows connections from a set of IP addresses.
When connecting to the RDS instance, what IP address does the database see? How can I configure RDS to allow connection from a kubernetes pod?
If you have an Azure Load Balancer (i.e. any Kubernetes service with type LoadBalancer) attached to the worker nodes, they will use the first IP attached to the Load Balancer. If not, they will use the public IP attached to the VM they run on. If the VM doesn't have a public IP (the default for AKS), they will use an ephemeral IP that might change at any time and that you have no control over.
So just create a Service of type LoadBalancer in AKS, find its external IP address, and whitelist that.
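A minimal sketch of such a Service (name, label, and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb        # hypothetical name
spec:
  type: LoadBalancer     # provisions an Azure Load Balancer with a public IP
  selector:
    app: my-app          # hypothetical Pod label
  ports:
    - port: 80
      targetPort: 3000

Once Azure has provisioned it, kubectl get service my-app-lb shows the EXTERNAL-IP to add to the RDS security group.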

Unable to access DB pod external IP from application

I've created two pods on top of an Azure Kubernetes cluster:
1) Application
2) MS SQL server
Both pods are exposed via an Azure load balancer and both have external IPs. I am unable to use the DB's external IP in my application config file, even though I can connect to that SQL Server from anywhere else. For some reason I am unable to telnet to the DB IP from the application container;
the connection times out. I can, however, ping/telnet the DB's cluster IP, so I tried using the cluster IP in my config file to check whether the connection succeeds, but no luck.
Could someone help me with this ?
As Suresh said, we should not use a public IP address to connect them.
We can refer to this article to create an application and a database, then connect a front end to a back end using a service.
This issue was fixed in another way. But running the application and DB as separate services is still a nightmare in Azure Container Service (Kubernetes).
1) I combined the app and DB in the same container and set the DB connection string to "localhost" or "localhost,1433" in my application config file.
2) Created Docker image with above setup
3) Created pod
4) Exposed the pod on two listening ports: kubectl expose pods xxx --port=80,1433 --type=LoadBalancer
5) I can access the DB on port 1433.
In the above setup, we plan to keep the container in an auto-scaled environment with persistent volume storage.
We are also planning scheduled backups of the container, since we do not want to lose the DB data.
Does anybody have other thoughts on the major issues we need to consider with this setup?
This issue was fixed!
I created two pods, one for the application and one for the DB. Earlier, when I provided the DB cluster IP in the application config file, it didn't work, even though I was able to telnet to port 1433.
I then created another K8s cluster in Azure and tried the same setup (providing the cluster IP). This time it worked like a charm.
Thanks to @Suresh Vishnoi

Accessing Mongo replicas in kubernetes cluster from AWS lambdas

Some of my data is in Mongo replicas hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda that runs in the same VPC and subnet as the Kubernetes minions hosting the Mongo containers; the Lambda and the minions run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that provide access to the corresponding replicas. When I try to connect using this URL, it fails to resolve the replica hostnames (e.g. mongo-rs-2-svc). The same URL works fine for my web service, which runs in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}
I tried replacing the mongo-rs-x-svc names with their internal IP addresses in the URL. The name resolution error then disappeared, but I got another error instead:
{"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I could use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways, and that would increase the cost. Is there a way to access the web service using an internal endpoint instead of the public URL? Maybe that is another way to get at the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
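A sketch of what one of those Services might look like as type: NodePort (the label and the pinned nodePort are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-1-svc
spec:
  type: NodePort
  selector:
    app: mongo-rs-1      # hypothetical Pod label
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 32017    # pin the port so the Lambda connection string stays stable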
Coreyphobrien's answer is correct. Subsequently you were asking how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this, use the --vpc-config parameter when creating or updating the Lambdas. This will create a virtual network interface in the VPC that gives the Lambda access. For details, see this.
After that you should be able to set the AWS security group for your instances so that the NodePort is only accessible from the security group attached to your Lambdas' network interface.
This blog discusses an example in more detail.
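For example, attaching an existing function to the VPC looks roughly like this (the function name and IDs are placeholders):

aws lambda update-function-configuration \
  --function-name my-mongo-reader \
  --vpc-config SubnetIds=subnet-0abc1234,SecurityGroupIds=sg-0def5678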
