Connect Redshift and AWS Lambda located in different regions - node.js

I am trying to connect to my Redshift database (located in the N. Virginia region) from a Lambda function (located in the Ireland region). But when trying to establish a connection, I get a timeout error stating:
"errorMessage": "2019-10-20T13:34:04.938Z 5ca40421-08a8-4c97-b730-7babde3278af Task timed out after 60.05 seconds"
I have closely followed the solution provided for AWS Lambda times out connecting to RedShift, but that solution only applies to services located in the same VPC (and hence the same region).
On researching further, I came across inter-region VPC peering and followed the guidelines provided in the AWS docs. But even after configuring VPC peering, I am still unable to connect to Redshift.
Here are some of the details that I think can be useful for understanding the situation:
The Redshift cluster is publicly accessible, runs on port 8192, and has a VPC configured (say VPC1)
Lambda function is located in another VPC (say VPC2)
There is a VPC Peering connection between VPC1 and VPC2
CIDR IPv4 blocks of both VPCs are different and have been added to each other's Route tables (VPC1 has 172.31.0.0/16 range and VPC2 has 10.0.0.0/16 range)
The IAM execution role for the Lambda function has full access to the Redshift service
In VPC1, I have a security group (SG1) which has an inbound rule of type: Redshift, protocol: TCP, port: 5439 and source: 10.0.0.0/16
In VPC2, I am using default security group which has outbound rule of 0.0.0.0/0
In Lambda, I am providing the private IP of Redshift (172.31.x.x) as the hostname and 5439 as the port (not 8192!)
The Lambda function runs on Node.js 8.10, and I am using the node-redshift package to connect to Redshift
After all this, I have also tried accessing Redshift via its public IP as well as its DNS name (with port 8192)
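To rule out a typo in the ranges above, I sanity-checked the CIDR membership with a small Node snippet (pure stdlib; the addresses below are just examples, not my real ENI IPs):

```javascript
// Check whether an IPv4 address falls inside a CIDR block, e.g. to confirm
// that a Lambda ENI's private IP is covered by SG1's 10.0.0.0/16 rule.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, oct) => (acc << 8) + parseInt(oct, 10), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const prefix = parseInt(bits, 10);
  // Special-case /0: JS shifts are mod 32, so `~0 << 32` would not be 0.
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(inCidr('10.0.4.17', '10.0.0.0/16'));  // true: covered by SG1's source rule
console.log(inCidr('172.31.5.9', '10.0.0.0/16')); // false: that is VPC1's range, not VPC2's
```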
Kindly help me out in establishing a connection between these services.

Related

EKS node unable to connect to RDS

I have an EKS cluster where I have a Keycloak service that is trying to connect to RDS within the same VPC.
I have also added an inbound rule to the RDS security group which allows PostgreSQL traffic from the source eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX
When the application tries to connect to RDS, I get the following message:
timeout reached before the port went into state "inuse"
I ended up replacing that inbound rule on the RDS security group (the one referencing eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX) with an inbound rule allowing access from the EKS VPC CIDR range instead.

GCP serverless VPC connection make 408 timeout

My Cloud Functions server needs to access an external service, and its IP address must be registered with that system before access is allowed. Since Cloud Functions is serverless, it does not have a static IP address. I searched for a way to give Cloud Functions a static IP address, and the answer was Serverless VPC Access in the GCP VPC network.
The images below show my VPC creation and the Cloud Functions VPC connection settings.
Cloud Functions worked fine before the VPC connection was set up. After setting up the VPC connection, the error below occurs.
Do I need any other setting for the VPC connection, or do I need to change something in my server code to get rid of the 408 timeout error?
If you know anything about this problem, please share your knowledge. Thank you.

VPC peering problems from GCP App Engine (nodejs, standard environment) to Mongodb Atlas

EDIT - SOLVED
As mentioned in the comments, here is a link to follow the IP whitelisting process and check which IPs work with the VPC:
MongoDB and Google Cloud Functions VPC Peering?
There are two things I had to fix additionally:
Don't forget to change the connection string to the private one, as indicated here (https://docs.atlas.mongodb.com/reference/faq/connection-changes#std-label-connstring-private). There is no mention of this in the VPC peering configuration section of the Atlas docs.
In GCP, in the VPC peering configuration, disable the option that is checked by default: Export subnet routes with public IP. After that, the IP to whitelist in MongoDB is in the range of the serverless VPC connector.
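For reference, per the Atlas FAQ linked above, the private connection string is the regular SRV string with -pri appended to the cluster name. A sketch (the cluster name, credentials, and database are made up):

```javascript
// Public SRV connection string (placeholder credentials and cluster):
const publicUri = 'mongodb+srv://user:pass@cluster0.ab1cd.mongodb.net/mydb';

// Private connection string for use over VPC peering: same shape,
// but with `-pri` appended to the cluster name.
const privateUri = 'mongodb+srv://user:pass@cluster0-pri.ab1cd.mongodb.net/mydb';

console.log(privateUri.includes('-pri.')); // true
```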
ORIGINAL QUESTION
I tried to create a VPC peering connection from my App Engine app on GCP to MongoDB Atlas. The App Engine app is a Node/React app that works fine with 0.0.0.0/0 whitelisted in MongoDB. Here are the steps I took according to the documentation:
I have added peering connection in Atlas and it is visible as available.
I have added VPC peering connection in GCP and status is active
I have added IP ranges from my GCP project network to IP whitelist in Atlas
I have created a serverless VPC connector to use with my App Engine app (standard environment); here is the relevant section from app.yaml:
vpc_access_connector:
name: projects/project-id/locations/location/connectors/connector-name
I have experimented with different IP ranges added to the whitelist in Atlas. Both clusters are in the same region, and I have included the regional range from here:
https://cloud.google.com/vpc/docs/vpc#ip-ranges
The problem is that the connection can't be made and times out (502 Bad Gateway error in my API service). When I whitelist 0.0.0.0/0 (the whole internet) in Atlas, everything works fine.
I was wondering if there are any possible changes that can be made in GCP:
firewall setup
exchanging custom routes in the VPC peering setup
exchanging subnet routes with public IP in the VPC peering setup

How to configure the default VPC in AWS?

I want to send mail using Nodemailer in Node.js from a Lambda function deployed in the default VPC, because I also have to access RDS from the Lambda function.
With the function deployed in the default VPC, data is inserted into RDS successfully, but I am unable to send the success mail. What changes do I need to make so I can send it?
If I choose No VPC, I am then unable to write data to the database.
From https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html,
When you connect a function to a VPC in your account, it does not have access to the internet unless your VPC provides access.
I take this to mean that if you wish to access both RDS and the internet from Lambda within your VPC, you need a NAT gateway (or to spin up your own NAT instance). In other words, Lambda does not support internet access with a public IP through an Internet Gateway, which is the usual mechanism of internet access within your VPC.
If you don't mind the cost (about 4.5 cents per hour plus data transfer, last I checked), the simplest solution is probably:
Add another subnet to your VPC.
Add a NAT Gateway to your VPC.
Add a route table to the subnet that routes 0.0.0.0/0 through the NAT Gateway.
Put your Lambda in that subnet.
This essentially creates a connection to the internet from that VPC without your Lambda holding a public IP address.

Connection timed out exception with spark-redshift on EMR

I am using the spark-redshift library provided by Databricks to read data from a Redshift table in Spark. Link: https://github.com/databricks/spark-redshift.
Note: the AWS accounts for the Redshift cluster and the EMR cluster are different in my case.
I am able to connect to Redshift using spark-redshift in Spark local mode, but the same code fails on EMR with the following exception: java.sql.SQLException: Error setting/closing connection: Connection timed out.
I have tried adding a Redshift rule to the inbound rules of the EC2 security group of my EMR cluster, but it didn't help. I had used My IP as the source when doing this.
I found the solution to this using VPC peering: http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
We connected the Redshift and EMR VPCs using VPC peering and updated the route tables of each VPC to accept traffic from the IPv4 CIDR of the other VPC. VPC peering can be done across AWS accounts too; refer to the link above for more details.
Once this is done, go to the VPC peering connection in both accounts and enable DNS resolution from the peer VPC: select the VPC peering connection, open the Actions menu at the top, choose Edit DNS settings, and select Allow DNS resolution from peer VPC.
I was in a similar situation. Rather than adding Redshift to the inbound rules of the EMR cluster's EC2 security group, add the public IP of the EMR cluster to Redshift's security group; this worked for me. Hope this helps!
