How do I set up a connection to a Redshift cluster inside a VPC?

I created a Redshift cluster in a public subnet inside a VPC. The VPC is attached to an Internet Gateway.
Configured IPs for inbound/outbound traffic in the cluster security group.
Configured IPs for inbound/outbound traffic in the VPC security group.
Configured IPs for inbound/outbound traffic in the network ACL.
Configured IPs for inbound/outbound traffic in the main route table.
Still, when I try to connect using a client, I get "connection refused". Am I missing a step here?
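A quick way to narrow down where the refusal happens is to check the cluster's public accessibility and then test raw TCP reachability. This is a sketch; the cluster identifier and endpoint below are placeholders, and it assumes the AWS CLI is configured:

```shell
# Hypothetical identifiers -- replace with your own values.
CLUSTER_ID="my-redshift-cluster"
REDSHIFT_PORT=5439

# Confirm the cluster is flagged as publicly accessible and get its endpoint.
aws redshift describe-clusters --cluster-identifier "$CLUSTER_ID" \
  --query 'Clusters[0].[PubliclyAccessible,Endpoint.Address]' --output text

# Then test raw TCP reachability from the client machine. A timeout here
# usually points at security groups / routing; an immediate refusal suggests
# the wrong endpoint/port or a cluster that is not publicly accessible.
ENDPOINT="my-redshift-cluster.abc123xy.us-east-1.redshift.amazonaws.com"
nc -zv -w 2 "$ENDPOINT" "$REDSHIFT_PORT"
```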

Related

How to redirect traffic from one tunnel to another on the same Azure Network Gateway?

I am using an Azure Network Gateway to connect to a customer over IPsec VPN. I have a virtual network connected to the gateway, in which I created a subnet, and IPsec is configured with the customer. Everything works perfectly. But now I need a separate tunnel to connect my secure environment. I created a separate subnet on the same virtual network and a separate connection to the same Azure Network Gateway, but traffic does not flow between the subnets, and I cannot get from one tunnel to the other. What should I do? Should I create a separate Azure Network Gateway and virtual network and set up peering?

EKS node unable to connect to RDS

I have an EKS cluster running a Keycloak service that is trying to connect to RDS within the same VPC.
I also added an inbound rule to the RDS security group allowing PostgreSQL traffic from the source eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX.
When the application tries to connect to RDS, I get the following message:
timeout reached before the port went into state "inuse"
I ended up replacing the inbound rule on the RDS security group that referenced eksctl-prod-cluster-ClusterSharedNodeSecurityGroup-XXXXXXXXX with a rule allowing access from the EKS VPC CIDR range instead.
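The replacement rule described above can be sketched with the AWS CLI. The security-group ID, CIDR, and port are placeholders for this example:

```shell
# Hypothetical IDs -- substitute your RDS security group and EKS VPC CIDR.
RDS_SG_ID="sg-0123456789abcdef0"
VPC_CIDR="10.0.0.0/16"
PG_PORT=5432

# Allow PostgreSQL from the whole EKS VPC CIDR instead of referencing
# the shared node security group.
aws ec2 authorize-security-group-ingress \
  --group-id "$RDS_SG_ID" \
  --protocol tcp \
  --port "$PG_PORT" \
  --cidr "$VPC_CIDR"
```

Allowing the whole VPC CIDR is broader than a security-group reference, so it trades precision for reliability when the node group's security-group membership is the suspect.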

Access kubernetes services behind IKEv2 VPN (strongswan) on AKS

I am trying to establish an IKEv2 VPN between a VM (subnet 20.25.0.0/16) and an AKS cluster (subnet 10.0.0.0/16, Azure CNI) using a strongSwan gateway. I need to access some Kubernetes services behind this AKS cluster. With Azure CNI, each pod is assigned an IP address from the pod subnet specified at cluster creation; this subnet is attached to interface eth0 on each node. Kubernetes services of type ClusterIP, on the other hand, get an IP from the service CIDR range specified at cluster creation, but those IPs are only known inside the cluster: they are not attached to any node interface the way the pod subnet is.
To run strongSwan on K8s it is necessary to mount the kernel modules (/lib/modules) and to enable the NET_ADMIN capability. The VPN tunnel is established using one of the networks attached to the host (node) interfaces, so I cannot establish a VPN using the service CIDR range: those IPs are known only within the cluster, via custom routes, and are not attached to any host interface. If I try to configure the VPN with a subnet matching the service CIDR range specified at cluster creation, I get an error stating that the subnet was not found on any of the interfaces.
To get around this, I found that I can configure the tunnel with a subnet of a larger range, as long as there is a subnet attached to my interface that falls within that wider range. For example, I can configure the VPN with the subnet 10.0.0.0/16 while my subnet for pods and nodes (attached to eth0) is 10.0.0.0/17 and the service CIDR range is 10.0.128.0/17; this way all 10.0.0.0/16 traffic is routed through the VPN tunnel. So, as a workaround, I define my service CIDR as a network adjacent to the pod/node network and configure the VPN using a network that covers both.
All 10.0.0.0/16 traffic from the VM side of the VPN is correctly routed into the tunnel. If I access a pod directly, using any IP from the pod subnet (10.0.0.0/17), everything works fine. The problem is that if I access a Kubernetes service using an IP from the service CIDR (10.0.128.0/17), the traffic is not routed through to the service. I can see the request in tcpdump on the AKS side, but it never arrives at the service. So my question is: how can I configure strongSwan so that I can reach the services in the AKS cluster?
Below is the current strongSwan configuration:

PEER-1 (VM):

conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-1
    auto=add
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=20.25.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=10.0.0.0/16
    rightfirewall=yes
    rightauth=psk
PEER-2 (AKS):

conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-2
    auto=start
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=10.0.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=20.25.0.0/16
    rightfirewall=yes
    rightauth=psk
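Before changing the configuration, it can help to verify what the kernel actually negotiated and how a service IP would be routed on arrival. These are diagnostic commands run on the AKS side (inside the strongSwan pod); the service IP used is a placeholder from the example ranges above:

```shell
SERVICE_CIDR="10.0.128.0/17"   # service CIDR from the example above

ipsec statusall            # established SAs and their traffic selectors
ip xfrm policy             # kernel IPsec policies: check 10.0.0.0/16 <-> 20.25.0.0/16
ip route get 10.0.128.10   # how the node would route a (placeholder) service IP
```

If the xfrm policies cover 10.0.0.0/16 but `ip route get` for a service IP does not lead into the node's kube-proxy handling, the gap is in local routing rather than in the tunnel itself.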

Azure vnet peering with remote gateway

I'm trying to access resources in a peered network via a remote gateway and not having much luck.
Network 1 - has the gateway (P2S VPN)
Network 2 - has the target resource (VM1)
If I connect to the gateway in network 1 from my laptop, I can't access resources in network 2. I've allowed traffic to be forwarded and enabled the use of remote gateways.
If I add an intermediate VM (VM2) in network 1, I can connect over the P2S VPN to VM2 and then from VM2 to VM1, i.e. everything is open and accepting connections; it just doesn't work straight through. As a test I opened up the NSG with an any/any rule on the port, and still no luck.
Has anyone got this working?
Add a local route for the remote subnet
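With the Azure CLI, adding that route could look roughly like the following. The resource names and the point-to-site client address pool are placeholders:

```shell
# Hypothetical names -- adjust for your environment.
RG="my-rg"
P2S_POOL="172.16.201.0/24"   # the point-to-site client address pool

# Route table in network 2 that sends P2S client traffic back via the gateway.
az network route-table create -g "$RG" -n nw2-routes
az network route-table route create -g "$RG" --route-table-name nw2-routes \
  -n p2s-clients --address-prefix "$P2S_POOL" \
  --next-hop-type VirtualNetworkGateway

# Associate the route table with the subnet holding VM1.
az network vnet subnet update -g "$RG" --vnet-name network2 -n default \
  --route-table nw2-routes
```

Without such a route, return traffic from VM1 to the P2S clients has no path back through the peering to the gateway, which matches the "works via an intermediate VM, fails straight through" symptom.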

Why does Azure Application Gateway require an empty subnet?

When I try to execute New-AzureRmApplicationGatewayIPConfiguration to create an application gateway, I get an exception:
Subnet xxx cannot be used for application gateway yyy since subnet is not empty.
I encountered this error when I tried to add the application gateway to the same subnet as the backend servers.
Why is this not an option? Does each gateway require a separate subnet? What is the recommended configuration?
Related questions:
The documentation says backend servers can be added when they belong to the virtual network subnet. How can a backend server belong to the virtual network subnet of the application gateway if the application gateway must be in a separate subnet?
How can the application gateway be configured without requiring a public IP address on the backend servers?
The application gateway must be in a subnet by itself, as explained in the documentation, hence why sharing a subnet with the backend servers is not an option. Create a smaller address space for your application gateway subnet (e.g. CIDR x.x.x.x/29) so you're not wasting IP addresses unnecessarily.
It's good practice to strive for a multi-tier network topology using subnets. This lets you define routes and network security groups (e.g. allow port 80 ingress, deny port 80 egress, deny RDP) to control traffic flow for the resources in each subnet. The routing and security group requirements for a gateway are generally different from those of the other resources in the virtual network.
I had the same issue: my virtual network was 10.0.0.0/24, which did not leave room to create a separate subnet. I solved it by adding another address space to the Azure virtual network (e.g. 10.10.0.0/24) and then creating a new subnet there, after which the application gateway was happy to work with the backend servers.
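The "add another address space" fix can be sketched with the Azure CLI. The names and prefixes are placeholders; note that `--address-prefixes` on `az network vnet update` replaces the full list, so the existing space must be repeated:

```shell
RG="my-rg"
VNET="my-vnet"

# Keep the existing 10.0.0.0/24 space and add a second one for the gateway.
az network vnet update -g "$RG" -n "$VNET" \
  --address-prefixes 10.0.0.0/24 10.10.0.0/24

# Carve a small dedicated subnet for the application gateway.
az network vnet subnet create -g "$RG" --vnet-name "$VNET" \
  -n appgw-subnet --address-prefixes 10.10.0.0/29
```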
