EKS node unable to connect to RDS

I have an EKS cluster where I have a Keycloak service that is trying to connect to RDS within the same VPC.
I have also added an inbound rule to the RDS security group which allows PostgreSQL traffic from the source eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX.
When the application tries to connect to RDS I get the following message:
timeout reached before the port went into state "inuse"

I ended up replacing the inbound rule on the RDS security group that referenced eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX with a rule allowing access from the EKS VPC CIDR range instead.
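For reference, a rule like that can also be added programmatically; here is a minimal boto3 sketch (the security group ID, CIDR range, and port are placeholders, not values from the question):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow PostgreSQL (5432) from the EKS VPC CIDR into the RDS security group.
# The group ID and CIDR below are placeholders; substitute your RDS security
# group ID and your EKS VPC CIDR range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "EKS VPC CIDR"}],
    }],
)
```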

Related

AWS EKS node to access RDS

I have AWS EKS nodes accessing RDS, where I have whitelisted the EKS nodes' public IPs in RDS's security group. But this is not a viable solution because EKS nodes can get replaced and their public IPs change with them.
How can I make the EKS nodes' connection to RDS more stable?
Last year we introduced a new feature to assign security groups to Kubernetes pods directly, to overcome having to assign them at the node level (this avoids the ephemerality problems you call out and creates a more secure environment where only the pod that needs to talk to RDS can do so, rather than the entire node). You can follow this tutorial to configure this feature or refer to the official documentation.
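As a rough illustration of what configuring that feature can look like, here is a sketch that applies a SecurityGroupPolicy with the Kubernetes Python client; the namespace, pod labels, and security group ID are placeholders, and the linked tutorial and documentation remain the authoritative reference:

```python
from kubernetes import client, config

config.load_kube_config()

# SecurityGroupPolicy is the CRD used by the "security groups for pods" feature.
# Pods in the "keycloak" namespace matching the label below get the listed
# security group attached to their ENI instead of inheriting the node's group.
policy = {
    "apiVersion": "vpcresources.k8s.aws/v1beta1",
    "kind": "SecurityGroupPolicy",
    "metadata": {"name": "keycloak-rds-access", "namespace": "keycloak"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "keycloak"}},
        "securityGroups": {"groupIds": ["sg-0123456789abcdef0"]},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="vpcresources.k8s.aws",
    version="v1beta1",
    namespace="keycloak",
    plural="securitygrouppolicies",
    body=policy,
)
```

The RDS security group then only needs an inbound rule whose source is that pod-level security group, rather than the node group or a whole CIDR range.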
If your EKS cluster is in the same VPC as the RDS instance, you can simply whitelist the VPC's private CIDR range in the RDS security group. If they are in different VPCs, connect the two VPCs with VPC peering and whitelist the EKS VPC's CIDR range in the RDS security group. Don't use public IPs, as that traffic leaves the AWS network; always prefer private connections wherever possible, as they are faster, more reliable, and more secure. If you don't want to whitelist the entire CIDR range, you can instead create a NAT gateway for your EKS cluster, route traffic leaving the cluster through that NAT gateway, and whitelist the NAT gateway's IP in the RDS security group.
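For the cross-VPC case, a condensed boto3 sketch of the peering approach (all VPC, route table, and security group IDs and the CIDR ranges are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Peer the EKS VPC with the RDS VPC (placeholder IDs; assumes the same account
# and region, otherwise the peer account has to accept the request).
peering_id = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111", PeerVpcId="vpc-0bbb2222"
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Route each VPC's traffic destined for the other VPC's CIDR over the peering link.
ec2.create_route(RouteTableId="rtb-0eks1111", DestinationCidrBlock="172.31.0.0/16",
                 VpcPeeringConnectionId=peering_id)
ec2.create_route(RouteTableId="rtb-0rds2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=peering_id)

# Finally, whitelist the EKS VPC's CIDR on the database port in the RDS security
# group, as in the earlier sketch.
```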

Connect Redshift and AWS Lambda located in different regions

I am trying to connect to my Redshift database (located in the N. Virginia region) from a Lambda function (located in the Ireland region). But on trying to establish a connection, I am getting a timeout error stating:
"errorMessage": "2019-10-20T13:34:04.938Z 5ca40421-08a8-4c97-b730-7babde3278af Task timed out after 60.05 seconds"
I have closely followed the solution provided in AWS Lambda times out connecting to RedShift, but the main issue is that it applies to services located in the same VPC (and hence, the same region).
On researching further, I came across Inter-region VPC Peering and followed the guidelines provided in the AWS Docs. But even after configuring VPC peering, I am unable to connect to Redshift.
Here are some of the details that I think can be useful for understanding the situation:
Redshift cluster is publicly accessible, running on port 8192, and has a VPC configured (say VPC1)
Lambda function is located in another VPC (say VPC2)
There is a VPC Peering connection between VPC1 and VPC2
CIDR IPv4 blocks of both VPCs are different and have been added to each other's Route tables (VPC1 has 172.31.0.0/16 range and VPC2 has 10.0.0.0/16 range)
The IAM execution role for the Lambda function has full access to the Redshift service
In VPC1, I have a security group (SG1) which has an inbound rule of type: Redshift, protocol: TCP, port: 5439 and source: 10.0.0.0/16
In VPC2, I am using the default security group, which has an outbound rule allowing 0.0.0.0/0
In Lambda, I am providing the private IP of Redshift (172.31.x.x) as the hostname and 5439 as the port (not 8192!)
The Lambda function runs on NodeJS 8.10 and I am using the node-redshift package to connect to Redshift
After all this, I have tried accessing Redshift with both its public IP and its DNS name (with port 8192)
Kindly help me out in establishing a connection between these services.
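One way to debug a setup like this is to run a minimal connectivity probe from a host inside VPC2; below is a sketch using Python and psycopg2 (host, credentials, and port are placeholders, and the Lambda in the question itself uses NodeJS with node-redshift):

```python
import psycopg2

# Placeholder connection details: use the Redshift cluster's private IP (or its
# DNS name if it resolves to the private address inside the VPC) and whichever
# port the cluster is actually configured to listen on.
conn = psycopg2.connect(
    host="172.31.0.10",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",
    connect_timeout=10,  # fail fast if the route or security group is wrong
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
```

If this times out, the problem is in the peering route tables or the security group rules rather than in the Lambda code.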

Allow AWS RDS connection from Azure K8S pods

We are currently migrating from AWS ECS to Azure Kubernetes Service. Our first step is to migrate the application code and, for now, leave the database in AWS RDS. Our RDS instance is protected by a security group which only allows connections from a set of IP addresses.
When connecting to the RDS instance, what IP address does the database see? How can I configure RDS to allow connections from a Kubernetes pod?
If you have an Azure Load Balancer (i.e., any Kubernetes service with type LoadBalancer) attached to the worker nodes, they will use the first IP attached to the Load Balancer. If not, they will use the public IP attached to the VM they run on. If the VM doesn't have a public IP (the default for AKS), they will use an ephemeral IP that might change at any time and that you have no control over.
So just create a service of type LoadBalancer in AKS, find its external IP address, and whitelist that.
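A sketch of that flow using the Kubernetes and boto3 Python clients (the service name, namespace, and security group ID are placeholders):

```python
import boto3
from kubernetes import client, config

config.load_kube_config()

# Read the external IP that Azure assigned to the LoadBalancer service.
svc = client.CoreV1Api().read_namespaced_service("my-app", "default")
lb_ip = svc.status.load_balancer.ingress[0].ip

# Whitelist just that IP (/32) on the PostgreSQL port in the RDS security group.
boto3.client("ec2").authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": f"{lb_ip}/32", "Description": "AKS load balancer IP"}],
    }],
)
```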

Not able to connect to PostgreSQL from AWS ECS Fargate containers

I am setting up infrastructure using Fargate and RDS for my web application.
Here are the basic details of the infrastructure.
Fargate and RDS are using the same VPC and the same subnet
We have an Application Load Balancer in front of Fargate
Able to access container applications using the LB URL
Now the problem is, the Fargate container application is not able to connect to RDS
Can somebody suggest how to configure security groups or other parameters to allow the containers to connect to RDS?
If I change the RDS SG configuration to allow the RDS port from 0.0.0.0/0
(anywhere), the container application is able to connect to RDS. But we
will not be able to do this in UAT / PROD.
Find the security group ID of your Fargate service. It will look like sg-ab3123b1252, but with different values after sg-.
In your RDS security group rules, instead of putting a CIDR in your source field, put the Fargate service security group ID, with port 5432 (assuming you are using the standard PostgreSQL port).
By adding the Fargate security group to your RDS security group rule, you're saying "allow TCP traffic on port 5432 from any resource that uses the Source security group specified".
Check the default VPC group in the docs. That page is required reading anyway, but the section linked has an example of what I'm describing specifically.
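A boto3 sketch of such an SG-sourced rule (the RDS security group ID is a placeholder; sg-ab3123b1252 stands in for the Fargate service security group, as above):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow TCP/5432 into the RDS security group from anything that carries the
# Fargate service's security group, rather than from a CIDR range.
ec2.authorize_security_group_ingress(
    GroupId="sg-rds000000000000",  # RDS security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-ab3123b1252"}],  # Fargate service SG
    }],
)
```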
You may want to try adding your VPC CIDR with the RDS port to the RDS SG.
In addition to allowing security group access, we also had to grant IAM permissions to the role used by our ECS tasks.

How do I set up a connection to a Redshift cluster inside a VPC?

I created a Redshift cluster inside a public subnet in a VPC. The VPC is attached to an Internet Gateway.
Configured IPs for inbound/outbound traffic in the cluster security group.
Configured IPs for inbound/outbound traffic in the VPC security group.
Configured IPs for inbound/outbound traffic in the Network ACL.
Configured IPs for inbound/outbound traffic in the main route table.
Still, when I try to connect using a client, I get "connection refused". Am I missing a step here?
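One thing that can help when debugging is to inspect the cluster's endpoint, port, public-accessibility flag, and attached security groups; here is a boto3 sketch (the cluster identifier is a placeholder):

```python
import boto3

redshift = boto3.client("redshift")
ec2 = boto3.client("ec2")

# "my-cluster" is a placeholder cluster identifier.
cluster = redshift.describe_clusters(ClusterIdentifier="my-cluster")["Clusters"][0]
print("Endpoint:", cluster["Endpoint"]["Address"], cluster["Endpoint"]["Port"])
print("PubliclyAccessible:", cluster["PubliclyAccessible"])

# List the inbound rules of the VPC security groups attached to the cluster.
sg_ids = [sg["VpcSecurityGroupId"] for sg in cluster["VpcSecurityGroups"]]
for sg in ec2.describe_security_groups(GroupIds=sg_ids)["SecurityGroups"]:
    print(sg["GroupId"], sg["IpPermissions"])
```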
