AWS EKS node to access RDS

I have AWS EKS nodes accessing RDS, where I have whitelisted the EKS nodes' public IPs in the RDS security group. But this is not a viable solution, because EKS nodes can get replaced and their public IPs change with them.
How can I make the EKS nodes' connection to RDS more stable?

Last year we introduced a new feature to assign security groups to Kubernetes pods directly, to overcome having to assign them at the node level (both to avoid the ephemerality problems you call out and to create a more secure environment where only the pod that needs to talk to RDS can do so, rather than the ENTIRE node). You can follow this tutorial to configure this feature, or refer to the official documentation.
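With that feature enabled (it requires the AWS VPC CNI plugin and supported instance types), security groups are attached via a SecurityGroupPolicy custom resource. A minimal sketch, where the namespace, label, and security group ID are hypothetical placeholders:

```yaml
# Pods matching the selector get this security group on their own ENI,
# and that group is what you then allow in the RDS security group.
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: rds-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: needs-rds            # only pods with this label get the SG
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0    # placeholder; SG allowed by the RDS SG
```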

If your EKS cluster is in the same VPC as the RDS instance, you can simply whitelist your VPC's private IP address (CIDR) range in the RDS security group. If they are in different VPCs, connect the two VPCs with VPC peering and whitelist the EKS VPC's CIDR range in the RDS security group. Don't use public IPs, as that traffic goes outside the AWS network; always use private connections wherever possible, as they are faster, more reliable, and more secure. If you don't want to whitelist the complete CIDR, you can also create a NAT gateway for your EKS cluster, route traffic leaving the EKS cluster through that NAT gateway, and then whitelist the NAT gateway's IP in the RDS security group.
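For the CIDR approach, the rule you add to the RDS security group corresponds to the `IpPermissions` structure below. This is a sketch with hypothetical IDs and ranges; the actual boto3 call is shown commented out since it needs AWS credentials:

```python
# Sketch: build the IpPermissions entry for a CIDR-based ingress rule.
# The CIDR and security-group ID below are hypothetical placeholders.

def cidr_ingress_rule(cidr: str, port: int = 5432) -> dict:
    """Ingress rule allowing TCP on `port` from the given CIDR range."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr, "Description": "EKS VPC private range"}],
    }

rule = cidr_ingress_rule("10.0.0.0/16")

# Applying it (requires boto3 and AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",  # the RDS security group (placeholder)
#     IpPermissions=[rule],
# )
```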

Related

EKS node unable to connect to RDS

I have an EKS cluster where I have a Keycloak service that is trying to connect to RDS within the same VPC.
I have also added an inbound rule to the RDS security group which allows PostgreSQL from source eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX.
When the application tries to connect to RDS, I get the following message:
timeout reached before the port went into state "inuse"
I ended up replacing the inbound rule on the RDS Security Group from the eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX with an inbound rule allowing access from the EKS VPC CIDR address instead.
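When debugging this kind of timeout, it can help to first confirm whether the failure is at the network level (security group or routing) rather than in the application. A small sketch that tests raw TCP reachability to the database endpoint, which you could run from inside a pod (the endpoint in the comment is a placeholder):

```python
import socket

def can_reach(host: str, port: int = 5432, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is a placeholder):
# can_reach("mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com")
```

If this returns False while the credentials are known-good, the problem is almost certainly the security group rule or routing, not Keycloak.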

How to connect/communicate Pod to DocumentDb which is outside eks cluster but within same VPC

I want to deploy my full-stack application using AWS EKS, with the backend pod connected to the database (MongoDB-compatible DocumentDB, an AWS managed service) outside of the cluster. If the EKS cluster and the database are in the same VPC, how should I configure the pod to connect to the external database within the same VPC?
We're going to need a bit more detail, but see if this blog gives you an idea about how to accomplish this.
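One common pattern for this situation (an assumption here, not necessarily what the linked blog describes) is to expose the DocumentDB endpoint inside the cluster as an ExternalName Service, so pods use a stable in-cluster DNS name while the actual endpoint stays configurable in one place. A sketch with a placeholder endpoint:

```yaml
# Pods can now connect to "docdb.default.svc.cluster.local"; kube-dns
# resolves it as a CNAME to the real DocumentDB endpoint below.
apiVersion: v1
kind: Service
metadata:
  name: docdb
  namespace: default
spec:
  type: ExternalName
  externalName: my-cluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com  # placeholder
```

Since both are in the same VPC, you still need the DocumentDB security group to allow inbound traffic from the EKS nodes (or pod security groups).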

Allow AWS RDS connection from an Azure K8S pods

We are currently migrating from AWS ECS to Azure Kubernetes Service. Our first step is to first migrate the application code and just leave the database in AWS RDS, for now. Our RDS instance is protected by a security group which only allows connection from a set of IP addresses.
When connecting to the RDS instance, what IP address does the database see? How can I configure RDS to allow connection from a kubernetes pod?
If you have an Azure Load Balancer (i.e. any Kubernetes service with type LoadBalancer) attached to the worker nodes, they will use the first IP attached to the Load Balancer. If not, they will use the public IP attached to the VM they run on. If the VM doesn't have a public IP (the default for AKS), they will use an ephemeral IP that might change at any time, and you have no control over that.
So just create a service with the type of LoadBalancer in AKS, find its external IP address and whitelist that.
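A sketch of such a service (the name and selector are hypothetical); once Azure provisions it, `kubectl get svc egress-anchor` shows the EXTERNAL-IP to whitelist in the RDS security group:

```yaml
# The port here doesn't matter for egress; the point is that the node pool
# gets a stable Load Balancer IP that outbound traffic is SNATed through.
apiVersion: v1
kind: Service
metadata:
  name: egress-anchor       # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app             # hypothetical label on the pods that need RDS
  ports:
    - port: 5432
      targetPort: 5432
```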

Not able to connect to PostgreSQL from AWS ECS Fargate containers

I am setting up an infrastructure using Fargate and RDS for my web application.
Here are the basic details of infrastructure.
Fargate and RDS are using same VPC and same Subnet
We have an Application Load Balancer infront of Fargate
Able to access container applications using LB url
Now the problem is, Fargate container application is not able to connect to RDS
Can somebody suggest how to configure security groups or other perimeters to allow containers to connect RDS.
If I change the RDS SG configuration to allow the RDS port from 0.0.0.0/0 (anywhere), the container application is able to connect to RDS. But we will not be able to do this in UAT / PROD.
Find the security group ID of your Fargate service. It will look like sg-ab3123b1252, but with different values after sg-.
In your RDS security group rules, instead of putting a CIDR in your source field, put the Fargate service's security group ID, with port 5432 (assuming you are using the standard PostgreSQL port).
By adding the Fargate security group to your RDS security group rule, you're saying "allow TCP traffic on port 5432 from any resource that uses the source security group specified".
Check the default VPC group in the docs. That page is required reading anyway, but the section linked has an example of what I'm describing specifically.
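The steps above correspond to an ingress rule whose source is a security group rather than a CIDR, i.e. a `UserIdGroupPairs` entry instead of `IpRanges`. A sketch, reusing the example group ID from above:

```python
# Sketch: ingress rule referencing a source security group instead of a CIDR.
# The group ID is the example placeholder from the answer above.

def sg_source_ingress_rule(source_sg_id: str, port: int = 5432) -> dict:
    """Ingress rule allowing TCP on `port` from resources using source_sg_id."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [
            {"GroupId": source_sg_id, "Description": "Fargate service SG"}
        ],
    }

rule = sg_source_ingress_rule("sg-ab3123b1252")

# Applying it (requires boto3 and AWS credentials):
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",  # the RDS security group (placeholder)
#     IpPermissions=[rule],
# )
```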
You may want to try adding your VPC CIDR with the RDS port to the RDS SG.
In addition to allowing security group access, we also had to grant IAM permissions to the role used by our ECS tasks.

How to make RDS::DbInstance accessible from EC2::Instance?

I'm currently using Cloud Formation to deploy a stack where I deploy, among other things:
A VPC
A Subnet inside the created VPC
An EC2 Instance inside the created Subnet
An RDS Postgres database
At first I couldn't connect to the DBInstance because it didn't have a properly configured SecurityGroup.
When I tried to create the SecurityGroup, the deploy failed because the DBInstance and the SecurityGroup were being created on different VPCs.
But I can't find a property on any RDS-related resource in CloudFormation to adjust in which VPC my database is going to be created. Searching around, I've found the alternative of creating a DBSubnetGroup.
But in order to use a DBSubnetGroup, I need to have at least two subnets (because it needs to cover at least 2 Availability Zones). I wish to avoid creating an empty subnet on another AZ just to make this work.
Is there a better alternative? What's the easiest way to give my EC2 instances access to my DBInstance using only Cloud Formation?
If you don't want to go the DBSubnetGroup way, the only possibility for creating an RDS instance is to use the default VPC: if you do not specify a DBSubnetGroup, your RDS instance will be created in the default VPC.
Now there are two ways for your EC2 instance to access the RDS Instance.
Make your RDS instance publicly accessible. Ensure that you have tight security group configurations to reduce the possibility of attacks. The EC2 instance should then be able to access the database instance.
Mark publicly accessible as false, and connect the default VPC with the VPC you have created using a VPC peering connection. I recommend this way, as your RDS instance will not be publicly accessible and you still get the job done.
On top of this, you have mentioned
But in order to use a DBSubnetGroup, I need to have at least two subnets (because it needs to cover at least 2 Availability Zones). I wish to avoid creating an empty subnet on another AZ just to make this work.
RDS doesn't work that way. When you specify MultiAZ = true and have a DBSubnetGroup in your RDS template, a standby replica of your DBInstance is maintained in another subnet in a different AZ. When your primary node goes down, this replica is promoted and acts as the master. Keeping this in mind, I would strongly recommend using a DBSubnetGroup when creating an RDS instance.
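A sketch of the recommended setup in CloudFormation (logical names, subnet references, and the password parameter are all placeholders):

```yaml
# DBSubnetGroup spanning two AZs, referenced by the RDS instance.
Resources:
  MyDBSubnetGroup:
    Type: AWS::RDS::DBSubnetGroup
    Properties:
      DBSubnetGroupDescription: Subnets for the RDS instance
      SubnetIds:
        - !Ref SubnetA          # subnet in one AZ
        - !Ref SubnetB          # subnet in a second AZ
  MyDBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t3.micro
      AllocatedStorage: "20"
      MasterUsername: dbadmin
      MasterUserPassword: !Ref DBPassword   # parameter, placeholder
      DBSubnetGroupName: !Ref MyDBSubnetGroup
      VPCSecurityGroups:
        - !Ref DBSecurityGroup              # SG in the same VPC
      MultiAZ: true
```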
More reading available here
Hope this helps.
