Access Kubernetes services behind an IKEv2 VPN (strongSwan) on AKS - Azure

I am trying to establish an IKEv2 VPN between a VM (subnet: 20.25.0.0/16) and an AKS cluster (subnet: 10.0.0.0/16, Azure CNI) using a strongSwan gateway, because I need to access some Kubernetes services behind this AKS cluster. With Azure CNI, each pod is assigned an IP address from the pod subnet specified at cluster creation, and that subnet is attached to interface eth0 on each node. Kubernetes services of type ClusterIP, on the other hand, get an IP from the service CIDR range specified at cluster creation, but that IP is only reachable inside the cluster: unlike the pod subnet, it is not attached to any interface of the nodes.
To run strongSwan on Kubernetes it is necessary to mount the kernel modules (/lib/modules) and to enable the NET_ADMIN capability. The VPN tunnel can only be established using networks attached to a host (node) interface, so I cannot establish a VPN using the service CIDR range specified at cluster creation: those IPs are known only within the cluster, through custom routes, and are not attached to any host interface. If I try to configure the tunnel with a subnet matching the service CIDR range given at cluster creation, I get an error stating that the subnet was not found on any of the interfaces.
To get around this, I realized that I can configure the tunnel with a wider subnet, as long as there is a subnet attached to my interface that falls within that wider range. For example, I can configure the VPN with the subnet 10.0.0.0/16 while my subnet for pods and nodes (attached to eth0) is 10.0.0.0/17 and the CIDR range for services is 10.0.128.0/17; this way all 10.0.0.0/16 traffic is routed through the VPN tunnel. As a workaround, I therefore define my service CIDR as a network adjacent to the pod/node network and configure the VPN using a network that covers both.
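For illustration, here is a minimal sketch of how a cluster with that layout could be created; the resource group, cluster name and subnet ID are placeholders, not values from my environment:

    # Placeholder names/IDs. Pod and node IPs come from the VNet subnet (10.0.0.0/17),
    # services from 10.0.128.0/17, so the VPN traffic selector 10.0.0.0/16 covers both.
    az aks create \
      --resource-group my-rg \
      --name my-aks \
      --network-plugin azure \
      --vnet-subnet-id /subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aks-subnet \
      --service-cidr 10.0.128.0/17 \
      --dns-service-ip 10.0.128.10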
All 10.0.0.0/16 traffic from the VM side of the VPN is correctly routed into the tunnel. If I access a pod directly, using any IP from the pod subnet (10.0.0.0/17), everything works fine. The issue is that if I try to access a Kubernetes service using an IP from the service CIDR (10.0.128.0/17), the traffic is not routed all the way to the service. I can see the request with tcpdump on the AKS side, but it never reaches the service. So my question is: what configuration do I need on strongSwan so that I can reach the services in the AKS cluster?
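For reference, this is the kind of node-level check that observation is based on; the service IP 10.0.128.42 is a made-up example, and kube-proxy is assumed to run in iptables mode:

    # Does the decrypted traffic for the (hypothetical) service IP reach the node at all?
    sudo tcpdump -ni eth0 host 10.0.128.42
    # Does kube-proxy have a DNAT entry for that ClusterIP? (iptables mode assumed)
    sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.0.128.42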
Below is the current strongSwan configuration:
PEER-1 (VM)

conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-1
    auto=add
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=20.25.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=10.0.0.0/16
    rightfirewall=yes
    rightauth=psk
PEER-2 (AKS)

conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-2
    auto=start
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=10.0.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=20.25.0.0/16
    rightfirewall=yes
    rightauth=psk
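For completeness, a sketch of commands that can be run on the strongSwan gateway to verify which traffic selectors were actually installed (assuming the stroke-based ipsec tool matching the ipsec.conf format above, and the NET_ADMIN capability mentioned earlier):

    # Show negotiated IKE/IPsec SAs and their traffic selectors
    ipsec statusall
    # Show the kernel XFRM policies installed for 10.0.0.0/16 <-> 20.25.0.0/16
    ip xfrm policy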

Related

Setup internal and public workload on a single AKS cluster

I have a single AKS cluster.
I would like some workloads to be exposed over the internet while others must remain reachable only from specific public IPs.
I can think of two solutions that may work, but both of them seem a bit tricky to me:
Usage of Azure CNI and set NSG at pod's CNI level ?
Create an internal and a public node pool and setting NSG at vmss's NIC level ?
Is there a better option I could investigate? Is one of those two options better than the other?
If you need complete workload isolation, you may consider the option of using Azure CNI and setting NSGs at the pod's CNI level, but you should be aware that:
The subnet assigned to the AKS node pool cannot be a delegated subnet.
AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range.
To use Azure Network Policy, you must define your own virtual network and subnets, which can be done only when you create an AKS cluster.
Microsoft also recommends, as an option, adding a node pool with a unique subnet to achieve isolation (see the sketch after the limitations list below).
This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
Limitations:
All subnets assigned to node pools must belong to the same virtual network.
System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
If you expand your VNet after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR.
In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged.
Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation).
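A rough sketch of adding such a node pool with its own dedicated subnet; all resource names and the subnet ID below are placeholders:

    # Placeholder names/IDs; the extra node pool gets its own subnet for isolation
    az aks nodepool add \
      --resource-group my-rg \
      --cluster-name my-aks \
      --name internalpool \
      --node-count 3 \
      --vnet-subnet-id /subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/internal-subnet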
Another option is to use Calico Network Policies, an open-source network and network security solution.
If you just need some workloads exposed over the internet while others remain reachable only from specific public IPs, you can run multiple NGINX ingress controllers in AKS and whitelist source IPs with the annotation nginx.ingress.kubernetes.io/whitelist-source-range (see the example below):
https://blog.cpolydorou.net/2022/05/running-multiple-nginx-ingress.html
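As an illustration, a minimal sketch of such a whitelisted Ingress; the ingress class, host, service name and IP ranges are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-app
      annotations:
        # Only these source IPs may reach this workload (placeholder ranges)
        nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.10/32,198.51.100.0/24"
    spec:
      ingressClassName: nginx-internal   # the second, restricted controller
      rules:
      - host: internal.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80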

Kubernetes pod failed to connect to external service

I have an Azure Kubernetes cluster running with Azure CNI (virtual network) as the network. The cluster is running on one subnet of the network.
On another subnet, I have a Virtual Machine running with a private IP of 10.1.0.4.
Now I have a pod in the K8S cluster which is trying to connect to the Virtual Machine, but it is not able to do so.
Also, ping 10.1.0.4 from inside the pod times out.
Please help me figure out what I am doing wrong, so that I can connect the pod to the VM.
• You cannot directly create communication between an AKS cluster pod and a Virtual Machine, because the IP assigned to a pod/node in an AKS cluster is a subset of the larger CIDR address range assigned when the cluster is deployed. Communication between nodes within the cluster is uninterrupted and readily possible, but communication with resources outside AKS is restricted, as it is governed by the Azure CNI framework policy, which directs traffic outbound of the cluster in a regulated and conditional way.
• Thus, the above can only be achieved by placing an intermediate service, such as an internal load balancer, between AKS and the VMs, since the CIDR of the VM and that of AKS are different. So leveraging the Azure plugin to deploy an internal load balancer as a service through AKS is the way to achieve communication between an AKS pod and a VM deployed in Azure.
To deploy the internal load balancer through YAML files in AKS for communication with external VMs, refer to the link below for details:
https://fabriciosanchez-en.azurewebsites.net/implementing-virtual-machine-to-pod-communication-in-azure-kubernetes-service-aks/
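For illustration, a minimal sketch of such an internal load balancer Service; the app selector and ports are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: internal-app
      annotations:
        # Request an internal (VNet) load balancer instead of a public one
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: internal-app
      ports:
      - port: 80
        targetPort: 8080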

Azure Kubernetes Cluster - Accessing and interacting with Service(s) from on-premise

Currently we have the following scenario:
We have established our connection (network-wise) from on-premises to the Azure Kubernetes cluster (private cluster!) without any problems.
Ports which are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and are testing different configurations.
For setting up our AKS cluster, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0/21
Service CIDR: 10.2.8.0/23
So our pods get IPs from our virtual network subnet, and the services get their IPs from the Service CIDR. So far so good. A route table for the virtual network (the subnet has been associated with the route table) forwards all traffic to our firewall and vice versa: interacting with the virtual network works without any issue. The network team (which is also new to Azure cloud) has said that the connection and access to the Service CIDR should be working.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which makes it possible to reach the Kubernetes dashboard on port 443, cannot be accessed, for example via https://10.2.8.42/.
My workaround so far is that the Kubernetes dashboard Service (type: ClusterIP) has an external IP set from the virtual network (see the sketch below). This works, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
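For clarity, this is roughly what that workaround looks like as a manifest; the external IP, selector and ports are placeholders based on my example ranges, not the exact dashboard manifest:

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: ClusterIP
      selector:
        k8s-app: kubernetes-dashboard
      ports:
      - port: 443
        targetPort: 8443
      externalIPs:
      - 10.2.1.50   # an address from the routable virtual network, not from the Service CIDR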
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.

networking in azure kubernetes services

Here in Azure AKS networking using kubenet, it is mentioned that the IP address ranges for --dns-service-ip, --service-cidr and --docker-bridge-address should be an address space that isn't in use elsewhere in your network environment. I have also created a VNet, and this AKS cluster should be in that VNet.
Does this mean that, for the DNS, Service and Docker bridge, the IP address ranges should be different from the VNet IP range?
Pod CIDR: can we have it different from the VNet range? As I am using kubenet, pod IPs will not come from the VNet subnet.
Yes, they should not overlap.
The pod CIDR is a virtual pod IP address space, not the one the pods would get from your VNet (if you were using Azure CNI); these are internal-only Kubernetes IP addresses. With kubenet, traffic to them is routed to the appropriate node with a UDR, and the node then forwards it to the appropriate pod.
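To make this concrete, a hedged sketch of a kubenet cluster where none of these ranges overlap the VNet; all names and ranges are placeholders:

    # Placeholder names/IDs. The VNet subnet (e.g. 10.240.0.0/16) holds the nodes;
    # pod, service, DNS and Docker bridge ranges are chosen outside any space already in use.
    az aks create \
      --resource-group my-rg \
      --name my-aks \
      --network-plugin kubenet \
      --vnet-subnet-id /subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/nodes \
      --pod-cidr 10.244.0.0/16 \
      --service-cidr 10.0.0.0/16 \
      --dns-service-ip 10.0.0.10 \
      --docker-bridge-address 172.17.0.1/16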

Not able to associate more than 1 subnet to Kubernetes cluster via AKS

When I create a Kubernetes cluster from the Kubernetes service of Azure, in the networking tab I have the option to create my own virtual network.
Let's say I created a virtual network with 3 subnets; still, in the networking tab options, I can only associate 1 of these subnets with my cluster.
Is it a restriction in AKS?
If so, why does it allow creating more than 1 subnet in the virtual network?
Not sure, but you can only specify one subnet when you create an AKS cluster on the Azure portal. It seems to be a restriction in AKS. Read the prerequisites: at a minimum, you need one subnet and one AKS cluster. Also, AKS supports a single node pool for now.
Don't create more than one AKS cluster in the same subnet.
With advanced networking in AKS, you can deploy an AKS cluster in an existing virtual network and define the subnet names and IP address ranges. IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the virtual network, so you need to plan the IP addressing for your cluster. You should take upgrade and scaling operations into account when determining the number of IP addresses.
The reason it allows creating more than 1 subnet in the virtual network is that, essentially, you can create many subnets with valid CIDR blocks in a VNet. You can create VMs in other subnets, or create a dedicated GatewaySubnet used for a VPN gateway in the existing VNet. With advanced networking, this existing virtual network often provides connectivity to an on-premises network using Azure ExpressRoute or a Site-to-Site VPN. A sketch of deploying into an existing subnet follows below.
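A minimal sketch of deploying a cluster into an existing subnet with advanced networking; the names and IDs are placeholders:

    # Placeholder names/IDs; pod and node IPs are assigned from the existing VNet subnet
    az aks create \
      --resource-group my-rg \
      --name my-aks \
      --network-plugin azure \
      --vnet-subnet-id /subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/existing-vnet/subnets/aks-subnet \
      --service-cidr 10.2.0.0/24 \
      --dns-service-ip 10.2.0.10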
Also, you are welcome to give feedback on Azure AKS to help improve this feature. Hope this helps.
