Load balancing ACIs inside a VNet - Azure

I have Azure Container Instances inside a VNet and I want to implement load balancing, but cannot think of a workable solution. For context, it will be a set of VMs contacting the load balancing resource, which would direct each request to one of the ACIs.
Things I have tried thus far are Azure Load Balancer (does not work with ACI) and Azure Traffic Manager (cannot be placed inside a VNet). I don't think an application gateway is a feasible solution either. I want to know if anyone has faced this scenario before and how they overcame it, or if someone has a potential solution that I can test out.

Well, to access the ACI inside a VNet through a Load Balancer, you just need to create a Load Balancer and add a backend pool containing the private IP address of the ACI.
Then create a health probe and a load-balancing rule for the port you need. When everything is in place, you can access the ACI inside the VNet through the Load Balancer's public IP address.
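If it helps, here is a rough Azure CLI sketch of those steps. The resource group, VNet name, ACI private IP (10.0.1.4) and port 80 are placeholders for illustration, not values from the question:

    # Public Standard Load Balancer with an IP-based backend pool for the ACI's private IP
    az network lb create \
      --resource-group rg-demo \
      --name lb-aci \
      --sku Standard \
      --public-ip-address lb-aci-pip \
      --frontend-ip-name fe-public \
      --backend-pool-name be-aci

    # Add the container group's private IP to the backend pool
    az network lb address-pool address add \
      --resource-group rg-demo \
      --lb-name lb-aci \
      --pool-name be-aci \
      --name aci-backend-1 \
      --vnet vnet-demo \
      --ip-address 10.0.1.4

    # Health probe and load-balancing rule for the port the container listens on
    az network lb probe create \
      --resource-group rg-demo \
      --lb-name lb-aci \
      --name probe-80 \
      --protocol Tcp \
      --port 80

    az network lb rule create \
      --resource-group rg-demo \
      --lb-name lb-aci \
      --name rule-80 \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --frontend-ip-name fe-public \
      --backend-pool-name be-aci \
      --probe-name probe-80

If the only consumers are the VMs inside the VNet, the same layout should also work with an internal frontend (a private frontend IP in the VNet) instead of a public IP.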

Related

Logical firewall between load balancer and VMs

I am currently trying to learn Azure; I have worked with AWS before, so I may be trying to carry over some concepts here.
I need to know how to configure a logical firewall to allow traffic from an Azure load balancer to VMs (scale sets or backend pools).
I was able to do this between different VMs by assigning the VMs to different application security groups and allowing the respective traffic from those groups in the network security group. I found the service tag 'AzureLoadBalancer' as an option in NSG rules, but it seems that is only for allowing traffic from health probes and not from the actual load balancer (there is also no option to select a specific load balancer). In the end I had to allow traffic from the public IP of the load balancer to the VNet to get the load balancer to work.
I hope there is a logical way to do this; if there is, I am not sure what I am missing here. I would appreciate anyone who could help.
Normally you wouldn't want to firewall traffic from the Azure Load Balancer: it's a load balancer, so it needs to be able to reach your endpoints. I'm not quite sure what you are trying to achieve here. You might be able to simply micro-segment your endpoints on different subnets and apply different NSGs (with different allow/deny rules) at the subnet level. Otherwise an actual firewall would be required between your Azure Load Balancer and endpoints, if you need L7 inspection for example.
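If micro-segmentation with NSGs is the route you take, a rough sketch of the rule set might look like the following; the resource names, port 5050 and the client range are illustrative placeholders. The key point is that Azure Load Balancer is a pass-through device, so the backend VM sees the original client IP: the AzureLoadBalancer service tag only needs to be allowed for health probes, while the data-path rule targets the actual clients.

    # Allow platform health probes (this is what the AzureLoadBalancer tag is for)
    az network nsg rule create \
      --resource-group rg-demo \
      --nsg-name nsg-app \
      --name allow-lb-healthprobes \
      --priority 100 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes AzureLoadBalancer \
      --destination-port-ranges 5050

    # Allow the real client traffic on the application port; inbound traffic through the LB
    # is not SNATed, so the source seen by the VM is the client, not the load balancer
    az network nsg rule create \
      --resource-group rg-demo \
      --nsg-name nsg-app \
      --name allow-clients-5050 \
      --priority 110 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes 203.0.113.0/24 \
      --destination-port-ranges 5050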

Azure - no access to VMs behind an internet-facing standard Load Balancer

SETUP:
I have 2 Ubuntu VMs sitting behind an internet-facing standard load balancer. The LB is zone-redundant; the 2 VMs are set up as HA in zones 1 and 2.
The VMs are spun up with a Virtual Machine Scale Set, and the entire infrastructure is deployed with Terraform.
Applications running in containers on the VMs are exposed on port 5050.
Inbound rules are set to allow traffic on ports 80 and 5050.
The VMs are in the LB backend pool.
PROBLEM:
When the VMs are up and running and I access the console, the VMs are unable to connect to the Ubuntu repo or download any external package.
Deleting and scaling out VMs - same issue.
However, when I delete the LB rules and LB probe and recreate them, I am immediately able to download packages from the Ubuntu repo or any other external link.
I also deleted one VM and scaled out a new VM (after recreating the LB rules and probe), and the Ubuntu and Docker packages install successfully.
This is driving me crazy, has anyone come across this?
I cannot reproduce this issue in the same scenario when I deploy the entire infrastructure via the Azure portal.
According to Control outbound connectivity for Standard Load Balancer:
If you want to establish outbound connectivity to a destination outside of your virtual network, you have two options:
assign a Standard SKU public IP address as an Instance-Level Public IP address to the virtual machine resource, or
place the virtual machine resource in the backend pool of a public Standard Load Balancer.
Both will allow outbound connectivity from the virtual network to outside of the virtual network.
So, this issue may happen because the load balancer rules had not taken effect yet at the initial time, were not configured correctly, or the public-facing load-balancing frontend IP had not been provisioned. You may also check whether there is any firewall or restriction on outbound traffic from your VMSS instances.
When I provisioned these resources, I had to associate an NSG that whitelists the allowed traffic with the subnet of the VMSS instances. This triggers the Standard LB to begin receiving incoming traffic. I also changed the upgrade policy to automatic.
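For reference, those two steps roughly correspond to the following CLI calls; the resource and subnet names are placeholders:

    # Associate an NSG (containing the allow rules) with the VMSS subnet
    az network vnet subnet update \
      --resource-group rg-demo \
      --vnet-name vnet-demo \
      --name subnet-vmss \
      --network-security-group nsg-vmss

    # Switch the scale set's upgrade policy to Automatic
    az vmss update \
      --resource-group rg-demo \
      --name vmss-app \
      --set upgradePolicy.mode=Automatic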
Hope this information could help you.
I had the same issue. Once I added a load balancing rule, my VMs had internet access.
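That observation lines up with the quoted documentation: with a Standard SKU public load balancer, backend instances only get outbound SNAT once a load-balancing rule (or an explicit outbound rule) exists. A rough CLI sketch, assuming placeholder names lb-vmss, fe-public and be-vmss:

    # Option A: a health probe plus a load-balancing rule; the rule also enables
    # implicit outbound SNAT for the backend pool
    az network lb probe create \
      --resource-group rg-demo \
      --lb-name lb-vmss \
      --name probe-5050 \
      --protocol Tcp \
      --port 5050

    az network lb rule create \
      --resource-group rg-demo \
      --lb-name lb-vmss \
      --name rule-5050 \
      --protocol Tcp \
      --frontend-port 5050 \
      --backend-port 5050 \
      --frontend-ip-name fe-public \
      --backend-pool-name be-vmss \
      --probe-name probe-5050

    # Option B: an explicit outbound rule, which keeps outbound connectivity
    # independent of the inbound rules
    az network lb outbound-rule create \
      --resource-group rg-demo \
      --lb-name lb-vmss \
      --name outbound-all \
      --protocol All \
      --frontend-ip-configs fe-public \
      --address-pool be-vmss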

Azure AKS Network Analytics - where are these requests to the Kubernetes cluster coming from?

I am a little bit puzzled by Azure Network Analytics! Can someone help resolve this mystery?
My Kubernetes cluster in Azure is private. It's joined to a VNet and there is no public IP exposed anywhere. The service is configured with an internal load balancer. The application gateway calls the internal load balancer. An NSG blocks all inbound traffic from the internet to the app gateway. Only trusted NAT IPs are allowed at the NSG.
The question is: I am seeing a lot of internet traffic coming to AKS on the VNet. It is denied, of course! I don't have this public IP 40.117.133.149 anywhere in the subscription. So, how are these requests reaching AKS?
You can try calling the app gateway from the internet and you will not get any response: http://23.100.30.223/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
You would get a successful response if you call the Azure Function: https://afa-aspnet4you.azurewebsites.net/api/aks/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
It's possible because of the following NSG rules!
Thank you for taking time to answer my query.
In response to @CharlesXu, I am sharing a little more on the AKS networking. The AKS network is made up of a few address spaces.
Also, there is no public IP assigned to either of the two nodes in the cluster; only a private IP is assigned to each VM node. Here is an example for node-0:
I don't understand why I am seeing inbound requests to 40.117.133.149 within my cluster!
After searching all the settings and activity logs, I finally found the answer to the mystery IP! A load balancer with an external IP was auto-created as part of the nginx ingress service when I restarted the VMs. The NSG was updated automatically to allow internet traffic to ports 80/443. I manually deleted the public load balancer along with the IP, but the bad actors were still calling the IP on different ports, which are denied by the default inbound NSG rule.
To reproduce, I removed the public load balancer again along with the public IP. Azure AKS recreated them once I restarted the VMs in the cluster! It's like a cat and mouse game!
I think we can update the ingress service annotation to specify service.beta.kubernetes.io/azure-load-balancer-internal: "true". I don't know why Microsoft decided to auto-provision a public load balancer in the cluster. It's a risk, and Microsoft should correct the behavior by creating an internal load balancer instead.
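If it helps, a minimal sketch of that change, assuming an ingress-nginx controller Service named ingress-nginx-controller in the ingress-nginx namespace (names vary by chart and installation):

    # Mark the ingress controller's Service as internal; the Azure cloud provider then
    # reconciles it onto an internal (VNet-only) load balancer instead of a public one
    kubectl -n ingress-nginx annotate service ingress-nginx-controller \
      service.beta.kubernetes.io/azure-load-balancer-internal="true" --overwrite

Putting the annotation in the Service manifest (or Helm values) is the more durable fix, so the internal setting survives if the Service is ever recreated.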

Azure - Can't create a load balancer for the Scale Set

I created a Scale Set (using a template) with an existing virtual network.
This existing virtual network has already a Load Balancer (with a public IP) with specific VMs.
Now I can't connect to the VMs in the scale set. There's no option to add the scale set to the Load Balancer or to add the scale set's VMs to the Load Balancer. Creating a new Load Balancer doesn't help.
It seems that the only option for adding a backend pool is using an availability set or a single VM (which is not in the Scale Set).
Is there any way to solve this? to somehow add the Scale Set to the Load Balancer or to connect to it?
The goal was to create the scale set to be in the existing Load Balancer (in the network with the other VMs), but unfortunately it didn't work.
It is not possible to add VMs in different availability sets to the same LB. A VMSS has its own availability set (by design), so this is not possible.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/ccf69a9c-0a6a-47bc-afca-561cf66cdebd/multiple-availability-sets-on-single-load-balancer?forum=WAVirtualMachinesVirtualNetwork
You can work around this by creating a VM in the network that will act as a load balancer, but that's obviously not a PaaS solution.
The goal was to create the scale set to be in the existing Load Balancer (in the network with the other VMs), but unfortunately it didn't work.
It is not possible, and there is no need. Please refer to this official document: Azure VMSS instances are behind a load balancer. Also, a VMSS instance cannot be added to an existing load balancer.
Now, I can't connect to the VMs in the scale set.
Did you create inbound NAT rules for your instances? Also, you could create a jump VM in the same VNet to log in to one instance. See this question.
If you cannot log in to your VM from a jump VM, it is not a VMSS issue; you should check your instance. If you haven't made any changes to your instances, you could create a ticket with Azure to solve this issue.
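On the inbound NAT point, a rough sketch of a NAT pool that exposes SSH for each instance on the load balancer's public frontend; the resource names are placeholders, and the pool then has to be referenced from the scale set's IP configuration (for example in the template):

    # Frontend ports 50000-50119 map to port 22 on individual VMSS instances
    az network lb inbound-nat-pool create \
      --resource-group rg-demo \
      --lb-name lb-vmss \
      --name natpool-ssh \
      --protocol Tcp \
      --frontend-port-range-start 50000 \
      --frontend-port-range-end 50119 \
      --backend-port 22 \
      --frontend-ip-name fe-public

    # After wiring the NAT pool into the VMSS network profile, roll it out to existing instances
    az vmss update-instances \
      --resource-group rg-demo \
      --name vmss-app \
      --instance-ids '*'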

Create a load balancer inside a VNet with Azure

I want to create a load balancer for all my agents.
In the official docs I found a guide for an external load balancer, but I want to connect it with API Management, so it has to be visible only inside the VNet.
This post works if you only have one agent (you enter the private IP of the agent in your API route), but it does not handle a second agent.
Is it possible to use Azure API Management and Azure ACS (kubernetes) as frontend and backend?
So in my case I need to create a load balancer that handles all agents for the service and has a private IP in the VNet that the API Management service is also in.
Well, nothing prevents you from connecting API Management to an external endpoint, so there's that.
And if you really want an internal endpoint, I doubt that it is possible, since a NIC can only be attached to a single load balancer. Maybe if you detach the agent NICs from the external load balancer and attach them to an internal load balancer... that might work, but it looks like a solid hack.
Another way around this might be using ACS Engine to generate a template for you and altering the template to deploy an internal load balancer.
As 4c74356b41 said, we can't add a VM to two backend pools (if your Kubernetes cluster was created via the Azure portal, the agents are in a VMSS).
In your scenario, I think we can create a VM in the ACS resource group, install load-balancing software on it, and make this VM work as a load balancer.
For example, we can use HAProxy to load balance the network traffic to the agents.
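A minimal sketch of that approach, assuming an Ubuntu VM in the same VNet and two agents at 10.0.0.4 and 10.0.0.5 listening on port 80 (all placeholder values); API Management would then point at this VM's private IP:

    # Install HAProxy on the VM that will act as the internal load balancer
    sudo apt-get update && sudo apt-get install -y haproxy

    # Append a simple round-robin frontend/backend pair for the two agents
    printf '%s\n' \
        '' \
        'frontend agents_fe' \
        '    bind *:80' \
        '    default_backend agents_be' \
        '' \
        'backend agents_be' \
        '    balance roundrobin' \
        '    server agent1 10.0.0.4:80 check' \
        '    server agent2 10.0.0.5:80 check' \
        | sudo tee -a /etc/haproxy/haproxy.cfg > /dev/null

    sudo systemctl restart haproxy

The obvious trade-off, as noted above, is that this VM is now a single point of failure you have to manage yourself rather than a PaaS load balancer.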
