AKS: How to get rid of load balancer? - azure

When you create an Azure Kubernetes Service (AKS) cluster, it creates a load balancer by default, along with the networking needed to access it, in a separate resource group.
We are, however, not interested in this load balancer at all, as we are going to use our own load balancer/Ingress configured within Kubernetes itself.
Question: How can we prevent this Azure load balancer from being generated every time we create the cluster?

This was discussed in the AKS Creation Without Public Load Balancer GitHub issue, with the following explanation:
It is possible the SLB IP is just for egress. Let me try to clarify.

AKS with Basic Load Balancer

- You can create it by passing the load-balancer SKU parameter.
- Has implicit egress (you won't see a public IP, although it is there on the Azure infrastructure).
- You can create private services accessible only through private IPs using the internal annotation: https://learn.microsoft.com/en-us/azure/aks/internal-lb

AKS with Standard Load Balancer

- Used by default on recent clients, or explicitly by using the same parameter as above.
- Has only explicit egress, which means that if there is no egress IP, the cluster won't have egress and will be broken. This IP is like the gateway to the internet. You can control, pre-create, or change this IP (or have more than one).
- You can create private services accessible through private IPs using the internal annotation: https://learn.microsoft.com/en-us/azure/aks/internal-lb

In some cases, enterprises might have egress defined via UDRs through a firewall, etc. In that case the egress IP will not be used and is effectively not needed. But as of now it will be needed at create time, as we don't know the egress path defined. We are now working on a UDR outbound type for SLB that will allow users to confirm they have egress through UDRs; in that case the SLB won't be created with a public IP for egress.
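The UDR outbound type described above has since shipped as `--outbound-type userDefinedRouting`. As a sketch of how both options map to `az aks create` (resource and subnet names here are placeholders, not from the original discussion):

```shell
# Basic SLB: implicit egress, no visible public IP
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --load-balancer-sku basic

# Standard SLB with egress via user-defined routes: no public IP is
# created for egress. Requires a custom VNet subnet whose route table
# already provides an egress path (e.g. through a firewall).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --load-balancer-sku standard \
  --outbound-type userDefinedRouting \
  --vnet-subnet-id "$SUBNET_ID"
```

With `userDefinedRouting`, cluster creation fails if the subnet has no egress route, so set up the route table first.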
As #Sajeetharan asked: what is the use case?
Also, is it mandatory for you to use AKS? Perhaps you could solve this more easily with a regular kubeadm cluster:
Deploying a Kubernetes cluster in Azure using kubeadm

Related

Azure: How to create Standard Load Balancer without public IP address?

I want to run my application on an AKS cluster (version 1.18.14) that depends on a standard load balancer in order to create multiple node pools. But the standard load balancer creates a public IP address, which is not suitable for my application, because my application is private, not public.
Is there any way to create a standard load balancer without a public IP address in Azure?
Thanks.
Actually, when you create the AKS cluster, it creates a public IP as the outbound IP address for the load balancer, and that IP is for egress. So it does not matter whether your application is private or public. What you need to focus on instead is inbound: if your application is private, you just need to use the internal load balancer, which I think is what you are looking for.
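For the inbound side, a minimal internal load balancer Service could look like the sketch below (the app name and ports are illustrative). The annotation is what makes AKS provision the load balancer with a private frontend IP instead of a public one:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-private-app          # illustrative name
  annotations:
    # Tells the Azure cloud provider to create an *internal* LB
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-private-app
EOF
```

The Service then receives a private IP from the cluster's VNet subnet instead of a public address.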

Is there any equivalent to aws eip in azure? Apart from load balancer

We have an active-passive server setup, so we want to allocate a public IP to the active server. We are able to do this in AWS using an EIP. Is there any feature we can use in Azure, just like an EIP in AWS?
You could use static public IPs in Azure. You can associate a public IP with a VM's NIC, then change the IP address assignment to static. Also, Azure DNS allows you to reach this IP via a public custom DNS name.
We do support static public IPs in Azure today, equivalent to Elastic IP in AWS. Static public IPs can be mapped to a VM's NIC (Elastic IP equivalent) or to a load balancer's front-end IP.
More details from the Azure feedback.
Apart from the Azure Load Balancer, you may be interested in Azure Traffic Manager, which is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.
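A sketch of the EIP-equivalent workflow from the CLI (resource and NIC names are placeholders): create a static public IP and attach it to the active VM's NIC ip-configuration:

```shell
# Create a static public IP (the Elastic IP equivalent)
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStaticIP \
  --allocation-method Static

# Attach it to the NIC of the currently active server
az network nic ip-config update \
  --resource-group myResourceGroup \
  --nic-name myActiveVMNic \
  --name ipconfig1 \
  --public-ip-address myStaticIP
```

On failover, run the same `ip-config update` against the passive server's NIC (after detaching the IP from the old one), which is the manual equivalent of re-associating an EIP.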

Azure AKS Network Analytics- where are these requests are coming to Kubernetes Cluster?

I am a little puzzled by Azure Network Analytics! Can someone help resolve this mystery?
My Kubernetes cluster in Azure is private. It is joined to a VNet, and there is no public IP exposed anywhere. The service is configured with an internal load balancer. The application gateway calls the internal load balancer. The NSG blocks all inbound traffic from the internet to the app gateway; only trusted NAT IPs are allowed at the NSG.
The question is: I am seeing a lot of internet traffic coming to AKS on the VNet. It is denied, of course! I don't have this public IP 40.117.133.149 anywhere in the subscription. So how are these requests reaching AKS?
You can try calling the app gateway from the internet, and you will not get any response: http://23.100.30.223/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
You would get a successful response if you call the Azure Function: https://afa-aspnet4you.azurewebsites.net/api/aks/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
It's possible because of the following NSG rules!
Thank you for taking the time to answer my query.
In response to #CharlesXu, I am sharing a little more on the AKS networking. The AKS network is made up of a few address spaces.
Also, there is no public IP assigned to either of the two nodes in the cluster; only a private IP is assigned to each VM node. Here is an example from node-0.
I don't understand why I am seeing inbound requests to 40.117.133.149 within my cluster!
After searching all the settings and activity logs, I finally found the answer to the mystery IP! A load balancer with an external IP was auto-created as part of the nginx ingress service when I restarted the VMs. The NSG was updated automatically to allow internet traffic to ports 80/443. I manually deleted the public load balancer along with the IP, but the bad actors were still calling the IP on different ports, which are denied by the default inbound NSG rule.
To reproduce, I removed the public load balancer again, along with the public IP. Azure AKS recreated it once I restarted the VMs in the cluster! It's like a cat-and-mouse game!
I think we can update the ingress service annotation to specify service.beta.kubernetes.io/azure-load-balancer-internal: "true". I don't know why Microsoft decided to auto-provision a public load balancer in the cluster. It's a risk, and Microsoft should correct the behavior by creating an internal load balancer.
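If the auto-created public load balancer comes from the ingress controller's Service, adding the internal annotation to that Service keeps the frontend IP private. A sketch, assuming a stock ingress-nginx install (the namespace and Service name may differ in your cluster):

```shell
# Add the internal-LB annotation to the ingress controller Service so the
# Azure cloud provider recreates the frontend as an internal load balancer.
kubectl -n ingress-nginx patch service ingress-nginx-controller \
  --patch '{"metadata":{"annotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"}}}'
```

Because the annotation lives on the Service object, it survives VM restarts, unlike manually deleting the load balancer, which the cloud provider reconciles back into existence.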

can't see availability set in the backend pool in azure internal load balancer with "standard" SKU

Hoping someone can help here: is there any specific option I need to be aware of that will make the Azure standard load balancer pick up (show) an availability set in the backend pool configuration?
Basically, I have created an availability set (AS), and it has one VM (for now). I then created the Azure internal load balancer with the Standard SKU, but when I try to create a backend pool, it only offers a virtual network in the drop-down list for the association.
I tried to create the load balancer inside the same resource group as the availability set's RG, because on this site I read someone mentioning this as a possible solution.
I have no problem picking up the same AS when I create the ILB using the Basic SKU. So I'm wondering what is needed to make this work for the Standard SKU?
Any help much appreciated.
For the backend pool of a load balancer, you can directly associate an availability set, a scale set, or a VM for a Basic SKU LB. A Standard LB, by contrast, is fully integrated with the scope of a virtual network, and all virtual network concepts apply. So you only need to select one virtual network, and the VMs inside that VNet will show up in the drop-down list.
Note: only VMs in the same region with a Standard SKU public IP, or no public IP, can be attached to this load balancer.
If you do not see the VMs in the drop-down list, you can remove the VMs' public IP addresses or attach a Standard SKU public IP address to your VMs, then try to add the backend pool to your Standard LB again.
Ref: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-overview
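The VNet-scoped model also shows up in the CLI: for a Standard LB you attach NIC ip-configurations of VMs in the VNet, not an availability set. A sketch with hypothetical resource names:

```shell
# Attach a VM's NIC ip-configuration to a Standard ILB backend pool.
# The VM must be in the same VNet/region and must not have a Basic SKU
# public IP (no public IP, or a Standard SKU one, is fine).
az network nic ip-config address-pool add \
  --resource-group myResourceGroup \
  --nic-name myVMNic \
  --ip-config-name ipconfig1 \
  --lb-name myStandardILB \
  --address-pool myBackendPool
```

Repeat the command for each VM (each NIC) you want behind the pool.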

Azure Container Services (AKS) - Exposing containers to other VNET resources

I am using Azure Container Services (AKS, not ACS) to stand up some APIs, some of which are for public consumption and some of which are not.
For the public access route, everything is as you might expect: a load-balancer Service bound to a public IP is created, the DNS zone contains our A record forwarding to the public IP, and traffic is routed through to an NGINX controller and then onwards to the correct internal service endpoints.
Currently the preview version assigns a new VNet to place the AKS resource group within; moving forwards, I will place the AKS instance inside an already-existing VNet which houses other components (App Services, on an App Service Environment).
My question is how to grant access to the private APIs to other components inside the same VNet, as well as to components in other VNets.
I believe AKS supports an ILB-type load balancer, which I think might be what is required for routing traffic from other VNets. But what about components that already reside inside the same VNet?
Thank you in advance!
If you need to access these services from other services outside the AKS cluster, you still need an ILB to load-balance across your service on the different nodes in your cluster. You can use the ILB created via the annotation in your Service. The alternative is using NodePort and then stringing up your own way to spread the traffic across all the nodes that host the endpoints.
I would use an ILB instead of trying to make your own using NodePort service types. The only other option would perhaps be some type of API gateway VM inside your VNet where you can define the backend pool; that may be a solution if you are hosting APIs through a third-party API gateway hosted on an Azure VM in the same VNet.
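For contrast, the NodePort alternative mentioned above would look roughly like this (names and ports are illustrative); note that you would still need your own mechanism to spread traffic across the node IPs:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-private-api          # illustrative name
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080             # reachable on every node's private IP
  selector:
    app: my-private-api
EOF
```

Every node then listens on port 30080 on its private IP, which is why an ILB (or other frontend) is still the cleaner way to present a single address to VNet consumers.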
Eddie Villalba
MCSD: Azure Solutions Architect | CKA: Certified Kubernetes Administrator
