EKS DNS accessible inside private subnet

The question is: how do I expose DNS names pointing to services in the EKS cluster? The DNS should only be available inside our subnets and accessible over our VPN connection (which essentially means the DNS records should point to addresses inside our VPC).
I have an EKS cluster running in a 10.0.0.0/16 VPC. Nodes are located in private subnets, and services are exposed externally with an ELB and an Ingress Controller.
Since some of the services inside the VPC should only be accessible from within our company, we decided to run an OpenVPN server configured to route 10.0.0.0/16 through the VPN while the rest of the traffic goes directly to the Internet. Currently, public DNS records in Route 53 point to our private addresses (e.g. an A record for privateservice.example.com -> 10.0.1.1). This is not ideal (the existence of privateservice shouldn't be visible in public DNS), but it has worked so far.
To get the private services out of public DNS, I thought about running a BIND DNS server and configuring our OpenVPN server to push it to clients (I couldn't get that to work on some client machines, but I assume it will).
However, I have no idea how to expose the private services running inside the EKS cluster. As mentioned, some services in the cluster are available publicly through an ELB, but the same cluster also hosts a subset of services that should only be reachable from inside our subnets.
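(For reference, the OpenVPN side of this setup usually comes down to the two directives sketched below; a minimal sketch, with the 10.0.0.2 resolver address as a placeholder for whatever internal DNS server is used.)

# Append split-tunnel routing and a pushed DNS server to the OpenVPN server config.
echo 'push "route 10.0.0.0 255.255.0.0"' >> /etc/openvpn/server.conf
echo 'push "dhcp-option DNS 10.0.0.2"'   >> /etc/openvpn/server.conf
# Note: some Linux clients only honor pushed DNS options with a hook such as
# update-resolv-conf or a systemd-resolved script, which may explain why it
# fails on some client machines.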

Use a Route 53 private hosted zone for private DNS resolution, and expose the private services in EKS through an internal ELB.
Use Route 53 Resolver to resolve the AWS-managed DNS names from on-premises.
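A minimal sketch of the first two steps, assuming the in-tree AWS cloud provider handles Service objects (with the AWS Load Balancer Controller the annotation is service.beta.kubernetes.io/aws-load-balancer-scheme: internal); every name, ID, and the zone corp.example.internal below is a placeholder:

# 1. Expose the service on an *internal* ELB instead of a public one.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: privateservice
  annotations:
    # older in-tree provider versions used the value 0.0.0.0/0 instead of "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: privateservice
  ports:
    - port: 80
      targetPort: 8080
EOF

# 2. Create a Route 53 private hosted zone associated with the VPC.
aws route53 create-hosted-zone \
  --name corp.example.internal \
  --caller-reference "$(date +%s)" \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0123456789abcdef0

# 3. Point a record in the private zone at the internal ELB's DNS name.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLEZONEID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "privateservice.corp.example.internal",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "internal-abc123.eu-west-1.elb.amazonaws.com"}]
      }
    }]
  }'

Records in a private hosted zone resolve only from inside the associated VPC, so VPN clients (whose DNS queries are routed into the VPC) can see them while the public Internet cannot.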

Related

Azure Virtual Machine cannot resolve DNS entry of Application Gateway

I have the following situation:
When I deploy an application (Deployment, Service, and Ingress) in my Kubernetes cluster, the Ingress is automatically added to my application gateway (I am using the Azure Application Gateway Ingress Controller; https://azure.github.io/application-gateway-kubernetes-ingress/annotations/). So far so good.
That means my application can be reached through the application gateway at https://my-app-gateway-public-ip/myAppPath/. I also have an additional private DNS zone which makes the app accessible via https://dns-name/myAppPath.
Additionally, we have AADDS in combination with a Bastion service. We deployed some virtual machines, and they use the DNS resolver of the AADDS (for authentication against the AAD).
The problem is: from outside the cloud I can nslookup the DNS name and access the site via the IP, but I cannot do either from my virtual machines. The DNS server (within the AADDS) is unable to resolve the name, and the VMs cannot reach the IP. I am wondering what the issue is.
The Bastion and the AADDS are in different subscriptions and therefore different virtual networks. I have already established a peering between those virtual networks (otherwise the authentication between the AADDS and the VMs wouldn't work).
The Kubernetes cluster and the application gateway are also in a different subscription, but no peering has been set up so far.
Are there any hints as to what I could be missing?
Kind regards
• Since you are using a Bastion gateway server to connect to the VMs hosted in your subscription, the Bastion host has a public IP address, while the registered underlying VMs are reached via private links created in the private DNS zones associated with a particular virtual network, each with an assigned private IP address and FQDN. So if you want to access the application website hosted behind the application gateway, you will have to create a conditional forwarder in the AADDS DNS zone to redirect the internal requests from the VMs to the public IP address of the website hosted behind the application gateway.
• A conditional forwarder forwards DNS resolution requests for a resource hosted on the public internet when no matching host record exists in the DNS zone that normally serves the internal environment. As a result, when a VM registered in the internal DNS zone queries the FQDN of the application's website, the request is forwarded to the public internet through the conditional forwarder, and the application's webpage loads in the VM's browser. The VMs themselves don't need internet access, but the DNS server must be able to reach the internet, directly or through a proxy server. A quick verification sketch follows the link below.
For more information on creating a conditional forwarder in AADDS, refer to:
https://learn.microsoft.com/en-us/azure/active-directory-domain-services/manage-dns#create-conditional-forwarders
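A quick way to verify the forwarder from one of the VMs (a hedged sketch; 10.0.0.4/10.0.0.5 stand in for the AADDS DNS server IPs and my-app.example.com for the application's hostname):

# Query the AADDS DNS servers directly. Before the conditional forwarder
# exists these typically fail; afterwards they should return the
# Application Gateway's public IP.
nslookup my-app.example.com 10.0.0.4
nslookup my-app.example.com 10.0.0.5

# Compare against a public resolver to confirm the record itself is fine.
nslookup my-app.example.com 8.8.8.8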

Azure Postgres Flexible Server - VNet integration DNS not resolving

I provisioned the resources according to the documentation:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-networking
I did the provisioning using Bicep.
The server name is my-dev-db, and I created a private DNS zone:
my-dev.postgres.database.azure.com
Now what I see is that from my local computer, i.e. over the public internet, I can ping both
my-dev.postgres.database.azure.com
and
my-dev-db.postgres.database.azure.com
I created a VM in the same VNet and managed to connect via a Postgres client, but only to my-dev-db.postgres.database.azure.com (the server name automatically created by Azure), not to the private DNS name my-dev.postgres.database.azure.com. When I try to connect with the private DNS name, it doesn't resolve.
So my questions:
Why can I ping both DNS names from outside Azure?
Why doesn't the private DNS name resolve in the VM?
I really can't make sense of this behavior.
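A diagnostic sketch that usually pinpoints this kind of problem (the resource group name is a placeholder; the zone name is taken from the question): check whether the private zone is actually linked to the VM's VNet and what records it contains.

RG=my-dev-rg                                  # placeholder resource group
ZONE=my-dev.postgres.database.azure.com

# Is the zone linked to the VNet the VM lives in? (No link => no resolution.)
az network private-dns link vnet list -g "$RG" -z "$ZONE" -o table

# Which A records does the zone actually contain?
az network private-dns record-set a list -g "$RG" -z "$ZONE" -o table

# From the VM: check what each name resolves to through Azure DNS.
nslookup my-dev.postgres.database.azure.com
nslookup my-dev-db.postgres.database.azure.com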

AKS Assign a DNS name to private ClusterIP address

I don't know how to solve this problem:
I have a private AKS cluster with 4 microservices (.NET 5) and a frontend. The 4 microservices talk to each other via HTTP requests using their public IP addresses (not good, because I want them to have only a private endpoint, like microservicename.api.my-namespace.svc.cluster-domain.example).
The frontend (which has a public DNS name and IP) should then be able to call the main API on that private endpoint.
I need a solution for this. I also feel that microservices communicating via HTTP requests to fixed endpoints is not a good design pattern, so I'd welcome suggestions on that aspect as well.
Many thanks
• You can use the Azure Private Link service in this case to establish communication between the private AKS cluster and the frontend through a private endpoint exposed in the subnet of the private AKS cluster.
• The Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported. To use a custom DNS server, add the Azure DNS IP 168.63.129.16 as the upstream DNS server in the custom DNS server.
• You can create a private AKS cluster for the four microservices with a system-managed or a custom private DNS zone using one of the commands below (placeholders in angle brackets; 'system' is the default, 'none' falls back to public DNS):

az aks create -n <cluster-name> -g <resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <identity-resource-id> --private-dns-zone [system|none]

az aks create -n <cluster-name> -g <resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <identity-resource-id> --private-dns-zone <custom-private-dns-zone-resource-id> --fqdn-subdomain <subdomain>
• If the private DNS zone is in a different subscription than the AKS cluster, you need to register Microsoft.ContainerService in both subscriptions. Additionally, you will need a user-assigned identity or service principal with at least the Private DNS Zone Contributor and VNet Contributor roles. --fqdn-subdomain can only be used together with a custom private DNS zone resource ID, to provide subdomain capabilities for privatelink.<region>.azmk8s.io.
• Next, you need a VM with access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster, such as creating a VM in the same VNet as the AKS cluster, or using a VM in a separate network and setting up virtual network peering. You can then create a private link with a private endpoint for the four microservices in the private AKS cluster and connect the frontend API to it (for the service-to-service traffic inside the cluster, see the sketch after the link below).
Please refer to the link below for more details:
https://learn.microsoft.com/en-us/azure/aks/private-clusters
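For the service-to-service traffic specifically, plain ClusterIP Services plus the cluster's built-in DNS are usually enough; a minimal sketch (all names are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: orders-api          # placeholder microservice name
  namespace: my-namespace
spec:
  type: ClusterIP           # cluster-internal only, no public IP at all
  selector:
    app: orders-api
  ports:
    - port: 80
      targetPort: 8080
EOF

# Any pod in the cluster can now call it by its internal DNS name, e.g.:
#   curl http://orders-api.my-namespace.svc.cluster.local/health

This also addresses the design concern: the microservices address each other by stable in-cluster names instead of public IPs, and only the frontend-facing API needs an endpoint outside the cluster.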

Cannot access Private AKS cluster from Local Machine (on home network) connected to Azure VPN

I have a private AKS cluster deployed in a VNet on Azure. When I deployed it, a private endpoint and a private DNS zone were created by default, making the cluster accessible from VMs in the same VNet. (I have a VM deployed in the same VNet as the AKS cluster, and "kubectl" commands work on it.)
My requirement is to run "kubectl" commands from my local machine (connected to my home network) while connected to the VPN that connects to the VNet.
My machine can talk to resources within the VNET but cannot seem to resolve the FQDN of the private cluster.
I read somewhere that having a DNS forwarder setup in the same VNET can help resolve the DNS queries made from the local machine which can then be resolved by Azure DNS. Is this the way to go about this? Or is there a better way to solve this problem?
It would really help if someone could give me an action plan to follow to solve this problem.
A better way to run "kubectl" commands from your local machine against your private AKS cluster is AKS Run Command (preview). This feature lets you remotely invoke commands in an AKS cluster through the AKS API, for example just-in-time commands from a remote laptop against a private cluster. Before using it, you need to enable the RunCommandPreview feature flag on your subscription and install the aks-preview extension locally; usage looks like the sketch below. One limitation: AKS Run Command does not work on clusters with AKS-managed AAD and Private Link enabled.
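(A sketch; the resource group and cluster names are placeholders.)

# One-time setup: preview feature flag plus the aks-preview extension.
az feature register --namespace Microsoft.ContainerService --name RunCommandPreview
az extension add --name aks-preview

# Invoke kubectl remotely through the AKS API; no VPN or DNS setup needed.
az aks command invoke \
  --resource-group my-rg \
  --name my-private-aks \
  --command "kubectl get pods -A"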
Alternatively, if you want to resolve the FQDN of the private cluster from your on-premises network, you can either use the local hosts file (suitable for testing) or use a DNS forwarder to override DNS resolution for the private link resource.
The DNS forwarder is responsible for all DNS queries, via a server-level forwarder, to the Azure-provided DNS at 168.63.129.16. You can provision an IaaS Windows VM with the DNS role, or a Linux VM with BIND configured as a DNS forwarder; there are quickstart templates showing how to create such a DNS server for both.
If there is an internal DNS server in your on-premises network, it needs to forward DNS traffic to Azure DNS via a conditional forwarder for the relevant public DNS zones (e.g. {region}.azmk8s.io). The conditional forwarder references the DNS forwarder deployed in Azure; a minimal BIND sketch follows.
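For the Linux/BIND variant, the forwarder configuration is essentially a few lines (a sketch, assuming a Debian/Ubuntu VM with the bind9 package installed):

# Forward every query to the Azure-provided DNS at 168.63.129.16.
sudo tee /etc/bind/named.conf.options >/dev/null <<'EOF'
options {
    directory "/var/cache/bind";
    recursion yes;
    forwarders { 168.63.129.16; };
    forward only;
    dnssec-validation no;    # Azure private zones are not DNSSEC-signed
};
EOF
sudo systemctl restart bind9

# On-premises, a conditional forwarder for e.g. {region}.azmk8s.io then
# points at this VM's private IP.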

Azure Container Group behind application gateway with public IP

I have an application gateway with a frontend public IP address, connected to a VNet via its subnet, using a single backend pool that points to a container group in the same VNet but a different subnet.
The backend pool points to the IP address of the container group. That works!
But I don't want to rely on an IP address that could change any time the container restarts. I already use a private DNS zone linked to the VNet; the container group is reachable as "mycontainer.my-azure.com" from within the VNet thanks to an A record in my private DNS zone.
But putting "mycontainer.my-azure.com" as the FQDN of the backend pool does not work. It works with the IP address "172.22.44.5", but "mycontainer.my-azure.com" does not resolve, and backend health shows "Unknown". I tried restarting the App GW from the Azure CLI, to no avail.
Does anyone know how to make APP GW use the VNET's private DNS zone in its backend pool?
If the application gateway backend pool contains an internally resolvable FQDN or a private IP address, the application gateway routes the request to the backend server by using its instance private IP addresses. Make sure the FQDN in the backend pool can be resolved internally.
You can verify the following configuration; it works on my side. I am using a Standard v2 SKU application gateway. The application gateway and the container group were deployed into the same VNet but different subnets, with no firewall rules. I used this example for deploying ACI.
(Screenshots omitted: backend pool, HTTP setting, listener, health probe, and private DNS zone configuration.)
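A hedged sketch of the FQDN-based backend pool via the Azure CLI (resource names are placeholders; the FQDN comes from the question):

# Point the backend pool at the private-zone FQDN instead of the IP.
az network application-gateway address-pool update \
  --gateway-name my-appgw -g my-rg -n aci-pool \
  --servers mycontainer.my-azure.com

# Then re-check backend health; "Healthy" means the private zone resolved.
az network application-gateway show-backend-health \
  --name my-appgw -g my-rg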
