How to restrict access with Cloud NAT?

I have a private GKE cluster on GCP, and a container registry outside GCP (IBM Cloud Container Registry).
To pull images from ICR into GKE, I set up Cloud NAT so that GKE can reach the internet.
But with this setup, GKE can reach any internet IP address.
Is there a way to restrict Cloud NAT so that it can reach only specified internet IP addresses?
I know that setting firewall rules on GKE is one way; is there any other solution?
Edit:
Custom routes seem to be a nice solution:
https://cloud.google.com/vpc/docs/using-routes?hl=ja#addingroute
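A minimal sketch of the routes approach (the network name, route names, and REGISTRY_RANGE are placeholders; you would need ICR's actual published IP ranges):

    # Find the name of the default 0.0.0.0/0 route on the VPC
    gcloud compute routes list --filter="network=my-vpc"

    # Delete the default route so nothing can reach the internet by default
    gcloud compute routes delete my-default-internet-route

    # Add a narrower route that only covers the registry's IP range
    gcloud compute routes create allow-icr-egress \
        --network=my-vpc \
        --destination-range=REGISTRY_RANGE \
        --next-hop-gateway=default-internet-gateway

Cloud NAT can then only translate traffic whose destination actually has a route to the internet gateway.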

Related

Resolving On-Premise DNS and Google Cloud Internal DNS Together

I have a question about Cloud DNS or Cloud VPN; I don't know which one my issue relates to exactly. I have an on-premise network with an internal DNS domain for it, example.int. I've connected this on-premise network to a VPC in my Google Cloud account via Cloud VPN.
Both sides can reach each other correctly, but the VMs in my Google Cloud VPC do not resolve names from the DNS servers in my on-premise network. For example, I can reach my on-premise server by its IP address from a Google Cloud VM, but I cannot reach it via its on-premise-vm-1.example.int domain name.
If I put my on-premise DNS nameservers in resolv.conf I can reach the on-premise server, but in that case the *.c.<project-id>.internal DNS addresses no longer work in my VPC. I want to use both of them.
What do you think I should do? I could not find any working documentation for this. I want to resolve both my on-premise and Google Cloud internal DNS zones from my gcloud VMs. Is there a way to do it without changing the resolv.conf file on all of my servers?
Thanks in advance
I tried changing Cloud DNS server policies, but when I set alternate DNS servers there, I lose access to the .internal names because queries no longer go to the metadata server. Even then, I still cannot resolve my example.int names.
I also tried adding example.int to the Cloud VPC as a private DNS zone. That did not work either.
In this case I would recommend using Cloud DNS private forwarding and pointing your desired on-prem internal DNS name to your on-prem DNS server.
Be aware that the forwarded requests will come from 35.199.192.0/19, so your VPN must allow this range to reach your on-prem DNS server from your GCP project.
A workaround might be to manually create a private Cloud DNS zone in your GCP project and maintain your DNS records there as well; the downside is that any change you make has to be made on both sides.
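A minimal sketch of the forwarding-zone setup (the VPC name and the on-prem DNS server IP are placeholders):

    # Create a private forwarding zone that sends example.int queries on-prem
    gcloud dns managed-zones create onprem-forwarding \
        --description="Forward example.int to on-prem DNS" \
        --dns-name="example.int." \
        --visibility=private \
        --networks=my-vpc \
        --forwarding-targets=10.0.0.53

With this in place, the VMs keep using the metadata server (so .internal names still resolve), and only example.int queries are forwarded on-prem.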

Allow AWS RDS connection from an Azure K8S pods

We are currently migrating from AWS ECS to Azure Kubernetes Service. Our first step is to migrate the application code and leave the database in AWS RDS for now. Our RDS instance is protected by a security group which only allows connections from a set of IP addresses.
When connecting to the RDS instance, what IP address does the database see? How can I configure RDS to allow connections from a Kubernetes pod?
If you have an Azure Load Balancer (i.e. any Kubernetes service of type LoadBalancer) attached to the worker nodes, they will use the first IP attached to the Load Balancer. If not, they will use the public IP attached to the VM they run on. If the VM doesn't have a public IP (the default for AKS), they will use an ephemeral IP that might change at any time and that you have no control over.
So just create a service of type LoadBalancer in AKS, find its external IP address, and whitelist that.
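A minimal sketch of such a service (the name, selector, and ports are placeholders; the selector is assumed to match an existing deployment):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: outbound-ip-anchor
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    EOF

    # The EXTERNAL-IP column shows the address to whitelist in the RDS security group
    kubectl get service outbound-ip-anchor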

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres, and Azure Cosmos DB (MongoDB API) services.
I am currently working on moving this application to Azure AKS and intend to access all of the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube, with all the services running on my local machine, and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume you mean that all the services should access each other and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can try following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). Good luck!
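A minimal sketch of such a service, using the internal-load-balancer annotation from that document (the name and selector are placeholders):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: internal-app
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: internal-app
      ports:
      - port: 80
    EOF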
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connections from all pods in your cluster will come through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP field in the service spec.
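A sketch of pinning the service to a pre-created static IP (the resource group, names, and IP are placeholders; on AKS the IP generally has to live in a resource group the cluster identity can use):

    # Pre-create a static public IP
    az network public-ip create --resource-group my-rg --name aks-egress-ip \
        --sku Standard --allocation-method Static

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: egress-anchor
    spec:
      type: LoadBalancer
      loadBalancerIP: 1.2.3.4   # the address of aks-egress-ip created above
      selector:
        app: my-app
      ports:
      - port: 80
    EOF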
On a side note: given the sensitivity of the connection strings, I'd suggest you create a Secret rather than a ConfigMap and pass it down to your Deployment, to be mounted or exported as environment variables.
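For example, a sketch with placeholder names and values:

    # Store the connection parameters in a Secret instead of a ConfigMap
    kubectl create secret generic app-conn \
        --from-literal=REDIS_HOST=myredis.redis.cache.windows.net \
        --from-literal=REDIS_KEY=placeholder-key

    # In the Deployment's container spec, export the whole Secret as env vars:
    #   envFrom:
    #   - secretRef:
    #       name: app-conn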

Azure Kubernetes Service nodes cannot access internet

I'm trying to reach web services on the internet from my existing service on an AKS managed cluster on Azure. I configured the NSG ports in the portal to let outbound traffic out and restarted the VM several times, but my node cannot ping anything on the internet. I'm not trying to ping anything by FQDN; I'm trying by IP address. How can I reach a service on the internet from my cluster?
How did you create the service and pod? By default, a LoadBalancer service will create all the rules for you, and you don't need to create the rules yourself.
Could you share your pod details?

Expose containers to private network

I am looking for a way to create a Docker cluster (probably Kubernetes) on Azure and expose the containers to my datacenter only via a VNet.
Is such a setup possible?
That is, the container services can only be accessed via the VPN that is created, so that the containers can use private resources (mainly a database) not available in the Azure cloud,
and so that I can access the resources in the cloud only from my DC.
Yes, that is perfectly possible. Depending on your setup, you either deploy a regular Kubernetes cluster and use a site-to-site VPN to connect the networks, or use ACS engine to deploy Kubernetes into an existing VNet/subnet.
You would also need to tweak your network security group rules to allow traffic to flow, if you have them (see the sketch after the links below).
https://github.com/Azure/acs-engine/tree/master/examples/vnet
https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough
https://blogs.technet.microsoft.com/canitpro/2017/06/28/step-by-step-configuring-a-site-to-site-vpn-gateway-between-azure-and-on-premise/
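For the NSG part, a minimal sketch (the resource group, NSG name, and on-prem range are placeholders):

    # Allow traffic from the on-prem address range into the cluster subnet
    az network nsg rule create --resource-group my-rg --nsg-name k8s-nsg \
        --name allow-onprem --priority 200 --direction Inbound \
        --source-address-prefixes 10.10.0.0/16 \
        --destination-port-ranges '*' --access Allow --protocol '*'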
I am looking for a way to create a Docker cluster (probably Kubernetes) on Azure and expose the containers to my datacenter only via a VNet.
Yes: just create the k8s pods and don't expose them to the internet. Then create a site-to-site VPN connecting the Azure VNet to your DC; this way, your DC's VMs can connect to the Azure k8s pods via Azure private IP addresses.
Update:
If you want to connect to your k8s pods via the VPN, you can create an Azure route table to achieve that.
For more information about creating a route table, please refer to my other answer.
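A sketch of such a route table (all names and ranges are placeholders; the idea is to route a pod CIDR to the IP of the node that hosts it):

    # Create a route table and a route pointing the pod CIDR at a node's IP
    az network route-table create --resource-group my-rg --name pod-routes
    az network route-table route create --resource-group my-rg \
        --route-table-name pod-routes --name to-pods \
        --address-prefix 10.244.0.0/24 \
        --next-hop-type VirtualAppliance --next-hop-ip-address 10.240.0.4

    # Associate the route table with the relevant subnet
    az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
        --name my-subnet --route-table pod-routes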
