I have a single AKS cluster.
I would like some workloads to be exposed over the internet, while others must remain reachable only from specific public IPs.
I can think of two solutions that may work, but both of them appear a bit tricky to me:
Use Azure CNI and set NSGs at the pod/CNI level?
Create an internal and a public node pool and set NSGs at the VMSS NIC level?
Is there a better option I could investigate? Is one of those two options better than the other?
If you need complete workload isolation, you may consider the option of using Azure CNI and setting NSGs at the pod/CNI level, but you should be aware that:
The subnet assigned to the AKS node pool cannot be a delegated subnet.
AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range.
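For example, a rule like the one sketched below keeps intra-cluster traffic allowed when you bring your own NSG; the resource group, NSG name, and the 10.240.0.0/16 node CIDR are placeholders, not values from your cluster.

# Hypothetical rule: allow all traffic within the node CIDR range.
az network nsg rule create \
  --resource-group my-aks-network-rg \
  --nsg-name aks-subnet-nsg \
  --name allow-node-cidr \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.240.0.0/16 \
  --destination-address-prefixes 10.240.0.0/16 \
  --destination-port-ranges '*'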
Use Azure Network Policy and define your own virtual network and subnets; this can be done only when you create an AKS cluster.
Microsoft also recommends adding a node pool with a unique subnet to achieve isolation (a sketch of the command follows the limitations below).
This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
Limitations:
All subnets assigned to node pools must belong to the same virtual network.
System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
If you expand your VNet after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR.
In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged.
Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation).
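As mentioned above, a node pool with its own subnet can be added to an existing cluster; a sketch of the command is below, where all resource names and the subnet ID are placeholders.

# Hypothetical sketch: add a node pool bound to a dedicated subnet
# in the same VNet as the existing node pools.
az aks nodepool add \
  --resource-group my-aks-rg \
  --cluster-name my-aks \
  --name internalnp \
  --node-count 2 \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/my-aks-network-rg/providers/Microsoft.Network/virtualNetworks/aks-vnet/subnets/internal-subnet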
Another option is to use Calico Network Policies, an open-source network and network security solution.
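As a minimal sketch, a policy restricting ingress to a set of pods could look like the following; the namespace, labels, port, and allowed CIDR are placeholders, and the policy only takes effect if the cluster has Azure Network Policy or Calico enabled and the original client IP is preserved when traffic reaches the pods.

# Hypothetical policy: only allow ingress to pods labelled app=internal-api
# from 203.0.113.0/24; all other ingress to those pods is denied.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-internal-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: internal-api            # placeholder pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder allowed source range
      ports:
        - protocol: TCP
          port: 8080               # placeholder application port
EOF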
If you just need some workloads to be exposed over the internet while others must remain reachable only from specific public IPs, you can run multiple NGINX ingress controllers in AKS and restrict access to the private ones by source IP with the nginx.ingress.kubernetes.io/whitelist-source-range annotation.
https://blog.cpolydorou.net/2022/05/running-multiple-nginx-ingress.html
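For illustration only, an Ingress served by a dedicated controller and restricted to specific public IPs might look like this; the ingress class, hostname, service name, and allowed IP range below are placeholders.

# Hypothetical Ingress: only clients from 198.51.100.0/24 are allowed,
# all other source IPs receive HTTP 403 from the NGINX ingress controller.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: restricted-app
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "198.51.100.0/24"
spec:
  ingressClassName: nginx-internal   # placeholder class of the second controller
  rules:
    - host: restricted.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: restricted-app   # placeholder backend service
                port:
                  number: 80
EOF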
Related
While migrating a cluster, we moved the VNet used by AKS from one resource group (the one with the node pool created by AKS) to a different RG, the one where we created the AKS cluster. This, however, resulted in an unexpected state: the original VNet in the node pool resource group stayed as is, while a copy of the VNet with the same ID was created in the AKS RG. So now we have two VNets with the same name in two different resource groups. Afterwards, when we tried to create a new node pool, we received the following error:
Code="VMScaleSetMustBelongToSameVnetAsLB" Message="VM scale set
references virtual network
/subscriptions/12345/resourceGroups/project-test-k8s-mc-rg/providers/Microsoft.Network/virtualNetworks/AKS-VNET-931
which is different than load balancer virtual network
/subscriptions/12345/resourceGroups/project-test-k8s-rg/providers/Microsoft.Network/virtualNetworks/AKS-VNET-931. VM scale set and load balancer must belong to same virtual network."
The cluster was created with a managed VNet.
We tried searching for ways to change the load balancer created by AKS to use a different VNet, but we do not see any options. We cannot afford to recreate the cluster at this stage. Do we have any other options to fix this issue?
There is no direct option to change the load balancer created by AKS to use a different VNet. If the load balancer uses an IP address in a different subnet, ensure the AKS cluster identity also has read access to that subnet. The VM scale set and load balancer must always belong to the same virtual network.
Only the address space and subnets can be modified. There is a blog by "Ajay Kumar"; refer to that tutorial for more information.
I have an Azure Kubernetes cluster running with Azure CNI (virtual network) as the network. The cluster is running on one subnet of the network.
On another subnet, I have a virtual machine running with a private IP of 10.1.0.4.
Now I have a pod in the K8s cluster that is trying to connect to the virtual machine, but it's not able to do so.
Also, ping 10.1.0.4 from inside the pod times out.
Please help me figure out what I am doing wrong so that I can connect the pod to the VM.
• You cannot directly create communication between an AKS cluster pod and a virtual machine, as the IP assigned to a pod/node in an AKS cluster is a subset of the larger CIDR address range assigned while deploying the cluster. Communication between the nodes within the cluster is readily possible, but communication with resources outside AKS is restricted, since it is governed by the Azure CNI framework policy, which directs the Kubernetes cluster to send traffic out of the cluster in a regulated and conditional way.
• Thus, the above can only be achieved by introducing an intermediate service, such as an internal load balancer, between AKS and the VMs, since the CIDR ranges of the VM and of AKS are different. Leveraging the Azure plugin to deploy an internal load balancer as a service through AKS is the only way to achieve communication between an AKS pod and a VM deployed in Azure.
To deploy the internal load balancer through YAML files in AKS for communication with VMs outside the cluster, refer to the link below for details:
https://fabriciosanchez-en.azurewebsites.net/implementing-virtual-machine-to-pod-communication-in-azure-kubernetes-service-aks/
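For reference, a minimal internal load balancer Service could look like the sketch below; the service.beta.kubernetes.io/azure-load-balancer-internal annotation is the documented Azure one, while the names, labels, and ports are placeholders.

# Hypothetical Service exposing pods labelled app=my-api on a private IP
# taken from the cluster's VNet, so VMs in the same VNet can reach them.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-api-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-api          # placeholder pod label
  ports:
    - protocol: TCP
      port: 80           # port exposed on the internal load balancer
      targetPort: 8080   # placeholder container port
EOF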
I am trying to establish an IKEv2 VPN between one VM (subnet: 20.25.0.0/16) and one AKS cluster (subnet: 10.0.0.0/16, Azure CNI) using a strongSwan gateway. I need to access some Kubernetes services behind this AKS cluster. With Azure CNI, each pod is assigned an IP address from the pod subnet specified at cluster creation; this subnet is attached to interface eth0 on each node. Kubernetes services of type ClusterIP, on the other hand, get an IP from the service CIDR range specified at cluster creation, but this IP is only available inside the cluster and is not attached to any interface of the nodes, unlike the pod subnet.
To run strongSwan on K8s, it is necessary to mount the kernel modules (/lib/modules) and to enable the NET_ADMIN capability. The VPN tunnel is established using one of the networks attached to the host (node) interfaces, so I can't establish a VPN using the service CIDR range specified at cluster creation, since these IPs are known only within the cluster, through custom routes, and are not attached to any host interface. If I try to configure the VPN with the service CIDR range specified at cluster creation, I get an error stating that the subnet was not found on any of the interfaces.
To get around this, I realized that I can configure the tunnel with a subnet of a larger range, as long as there is a subnet attached to my interface that falls within that wider range. For example, I can configure the VPN with the subnet 10.0.0.0/16 while my subnet for pods and nodes (attached to eth0) is 10.0.0.0/17 and the CIDR range for services is 10.0.128.0/17; this way all 10.0.0.0/16 traffic is routed through the VPN tunnel. As a workaround, I therefore define my service CIDR as a network adjacent to the pod and node network and configure the VPN using a network that covers both.
All 10.0.0.0/16 traffic from the VM side of the VPN is correctly routed into the tunnel. If I try to access a pod directly, using any IP from the pod subnet (10.0.0.0/17), everything works fine. The issue is that if I try to access a Kubernetes service using an IP from the service CIDR (10.0.128.0/17), the traffic is not routed all the way to the K8s service. I can see the request with tcpdump in AKS, but it doesn't arrive at the service. So my question is: how can I configure strongSwan so that I can access the services on the AKS Kubernetes cluster?
Below is the current strongSwan configuration:
PEER-1(VM)
conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-1
    auto=add
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=20.25.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=10.0.0.0/16
    rightfirewall=yes
    rightauth=psk
PEER-2(AKS)
conn %default
    authby="secret"
    closeaction="restart"
    dpdaction="restart"
    dpddelay="5"
    dpdtimeout="10"
    esp="aes256-sha1-modp1536"
    ike="aes256-sha1-modp1024"
    ikelifetime="1440m"
    keyexchange="ikev2"
    keyingtries="1"
    keylife="60m"
    mobike="no"

conn PEER-2
    auto=start
    leftid=<LEFT-PHASE-1-IP>
    left=%any
    leftsubnet=10.0.0.0/16
    leftfirewall=yes
    leftauth=psk
    rightid=<RIGHT-PHASE-1-IP>
    right=<RIGHT-PHASE-1-IP>
    rightsubnet=20.25.0.0/16
    rightfirewall=yes
    rightauth=psk
When I create a Kubernetes cluster from Azure Kubernetes Service, in the networking tab I have the option to create my own virtual network.
Let's say I created a virtual network with 3 subnets; still, in the networking tab options, I can only associate one of these subnets with my cluster.
Is this a restriction in AKS?
If so, why does it allow creating more than one subnet in the virtual network?
Not sure, but you can only specify one subnet when you create an AKS cluster on the Azure portal. It seems to be a restriction in AKS. Read the prerequisites: at a minimum, you need one subnet and one AKS cluster. Also, AKS supports a single node pool for now.
Don't create more than one AKS cluster in the same subnet.
With advanced networking in AKS, you can deploy an AKS cluster into an existing virtual network and define the subnet names and IP address ranges. IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the virtual network, so you need to plan IP addressing for your cluster. You should account for upgrade and scaling operations when you determine the number of IP addresses.
The reason it allows creating more than one subnet in the virtual network is that, essentially, you can create many subnets with valid CIDR blocks in a VNet. You can create VMs in other subnets or create a dedicated GatewaySubnet used for a VPN gateway in the existing VNet. With advanced networking, this existing virtual network often provides connectivity to an on-premises network using Azure ExpressRoute or Site-to-Site VPN.
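For completeness, creating a cluster with advanced networking into one subnet of an existing VNet looks roughly like the sketch below; all names, the subnet ID, and the CIDRs are placeholders.

# Hypothetical sketch: deploy an AKS cluster with Azure CNI into an
# existing subnet; the service CIDR must not overlap the VNet ranges.
az aks create \
  --resource-group my-aks-rg \
  --name my-aks \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/my-network-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aks-subnet \
  --service-cidr 10.2.0.0/24 \
  --dns-service-ip 10.2.0.10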
Also, feedback on Azure AKS to improve this feature is welcome. Hope this helps.
When I try to execute New-AzureRmApplicationGatewayIPConfiguration to create an application gateway, I get an exception:
Subnet xxx cannot be used for application gateway yyy since subnet is not empty.
I encountered this error when I tried to add the application gateway to the same subnet as the backend servers.
Why is this not an option? Does each gateway require a separate subnet? What is the recommended configuration?
Related questions:
The documentation says backend servers can be added when they belong to the virtual network subnet. How can a back-end server belong to the virtual network subnet of the application gateway if the application gateway must be in a separate subnet?
How can the application gateway be configured without requiring a public IP address on the backend servers?
The application gateway must be in a subnet by itself, as explained in the documentation, which is why this is not an option. Create a smaller address space for your application gateway subnet (CIDR x.x.x.x/29) so you're not wasting IP addresses unnecessarily.
It's a good practice to strive for a multi-tier network topology using subnets. This enables you to define routes and network security groups (e.g. allow port 80 ingress, deny port 80 egress, deny RDP) to control traffic flow for the resources in the subnet. The routing and security group requirements for a gateway are generally going to be different from those of other resources in the virtual network.
I had the same issue: my virtual network was 10.0.0.0/24, which did not leave room to create a separate subnet. I solved it by adding another address space to the Azure virtual network (e.g. 10.10.0.0/24) and then creating a new subnet there, so that the application gateway was happy to work with the backend servers. A sketch of the steps is below.
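The following Azure CLI commands sketch that approach; the resource names and address ranges are placeholders, and the update replaces the full address space list, so the existing prefix is repeated.

# Hypothetical sketch: add a second address space to the VNet, then carve
# out a small dedicated subnet for the application gateway.
az network vnet update \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefixes 10.0.0.0/24 10.10.0.0/24

az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name appgw-subnet \
  --address-prefixes 10.10.0.0/29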