Azure - no access to VMs behind an internet-facing standard Load Balancer

SETUP:
I have 2 Ubuntu VMs sitting behind an internet-facing Standard Load Balancer. The LB is zone-redundant, and the 2 VMs are set up for HA in zones 1 and 2.
The VMs are spun up with a Virtual Machine Scale Set, and the entire infrastructure is deployed with Terraform.
Applications running in containers on the VMs are exposed on port 5050.
Inbound rules allow traffic on ports 80 and 5050.
The VMs are in the LB backend pool.
PROBLEM:
When the VMs are up and running and I access the console, the VMs are unable to connect to the Ubuntu repos or download any external packages.
Deleting and scaling out VMs gives the same issue.
(Screenshots omitted: load balancer rules, load balancer health probe.)
However, when I delete the LB rules and LB probe and recreate them, I am immediately able to download packages from the Ubuntu repos or any other external source.
I also deleted one VM and scaled out a new VM (after recreating the LB rules and probe), and Ubuntu and Docker packages installed successfully.
This is driving me crazy; has anyone come across this?

I cannot reproduce this issue in the same scenario when I deploy the entire infrastructure via the Azure portal.
According to Control outbound connectivity for Standard Load Balancer:
If you want to establish outbound connectivity to a destination
outside of your virtual network, you have two options:
assign a Standard SKU public IP address as an Instance-Level Public IP address to the virtual machine resource or
place the virtual machine resource in the backend pool of a public Standard Load Balancer.
Both will allow outbound connectivity from the virtual network to outside of the virtual network.
So, this issue may happen because the load balancer rules had not taken effect at the initial deployment or were not configured correctly, or because the public-facing load-balancing frontend IP had not been provisioned yet. You may also check whether there is any firewall or other restriction on outbound traffic from your VMSS instances.
When I provisioned these resources, I had to associate an NSG that whitelists the allowed traffic with the subnet of the VMSS instances. This triggers the Standard LB to begin receiving the incoming traffic. I also changed the upgrade policy to Automatic.
Hope this information helps.
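For anyone hitting the same thing from Terraform, here is a minimal sketch of the load-balancing rule plus health probe that restores outbound connectivity for the backend pool. The resource names, the frontend IP configuration name, and the azurerm 3.x argument names are assumptions, not taken from the question; adjust for your provider version.

```hcl
# Hypothetical names; assumes an existing azurerm_lb.lb, a backend pool
# azurerm_lb_backend_address_pool.pool, and a frontend called "public-frontend".
resource "azurerm_lb_probe" "app" {
  name            = "probe-5050"
  loadbalancer_id = azurerm_lb.lb.id
  protocol        = "Tcp"
  port            = 5050
}

resource "azurerm_lb_rule" "app" {
  name                           = "lbrule-5050"
  loadbalancer_id                = azurerm_lb.lb.id
  protocol                       = "Tcp"
  frontend_port                  = 5050
  backend_port                   = 5050
  frontend_ip_configuration_name = "public-frontend"
  backend_address_pool_ids       = [azurerm_lb_backend_address_pool.pool.id]
  probe_id                       = azurerm_lb_probe.app.id
  # Leaving disable_outbound_snat at its default (false) keeps implicit
  # outbound SNAT for the pool; an explicit azurerm_lb_outbound_rule is the
  # more controlled alternative.
}
```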

I had the same issue. Once I added a load balancing rule, my VMs had internet access.

Related

Azure AKS Network Analytics - where are these requests to the Kubernetes cluster coming from?

I am a little puzzled by Azure Network Analytics! Can someone help resolve this mystery?
My Kubernetes cluster in Azure is private. It's joined to a vNET and there is no public IP exposed anywhere. The service is configured with an internal load balancer, and the Application Gateway calls that internal load balancer. An NSG blocks all inbound traffic from the internet to the app gateway; only trusted NAT IPs are allowed at the NSG.
The question is: I am seeing a lot of internet traffic coming to AKS on the vNET. It is denied, of course! I don't have the public IP 40.117.133.149 anywhere in the subscription, so how are these requests reaching AKS?
You can try calling the app gateway from the internet and you would not get any response: http://23.100.30.223/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
You would get a successful response if you call the Azure Function: https://afa-aspnet4you.azurewebsites.net/api/aks/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
It's possible because of the following NSG rules!
Thank you for taking the time to answer my query.
In response to @CharlesXu, I am sharing a little more on the AKS networking. The AKS network is made up of a few address spaces.
Also, there is no public IP assigned to either of the two nodes in the cluster; only a private IP is assigned to each VM node (node-0, for example).
I don't understand why I am seeing inbound requests to 40.117.133.149 within my cluster!
After searching all the settings and activity logs, I finally found the answer to the mystery IP! A load balancer with an external IP was auto-created as part of the nginx ingress service when I restarted the VMs. The NSG was updated automatically to allow internet traffic to ports 80/443. I manually deleted the public load balancer along with the IP, but the bad actors were still calling the IP on different ports, which are denied by the default inbound NSG rule.
To reproduce, I removed the public load balancer again along with the public IP. Azure AKS recreated it once I restarted the VMs in the cluster! It's like a cat-and-mouse game!
I think we can update the ingress service annotation to specify service.beta.kubernetes.io/azure-load-balancer-internal: "true" (see the sketch below). I don't know why Microsoft decided to auto-provision a public load balancer in the cluster. It's a risk, and Microsoft should correct the behavior by creating an internal load balancer instead.
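For reference, one way to set that annotation from Terraform, kept in HCL for consistency with the rest of this page. This is purely a sketch: it assumes the hashicorp/kubernetes provider (a version that includes kubernetes_annotations) and that the nginx ingress controller Service is named ingress-nginx-controller in the ingress-nginx namespace, which will differ per install; normally you would set this through the controller's Helm values or service manifest instead.

```hcl
# Patches the existing ingress controller Service so Azure provisions an
# internal load balancer instead of a public one. Names are assumptions.
resource "kubernetes_annotations" "internal_lb" {
  api_version = "v1"
  kind        = "Service"
  metadata {
    name      = "ingress-nginx-controller"
    namespace = "ingress-nginx"
  }
  annotations = {
    "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
  }
}
```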

Cannot access Azure VM Scale Set IP address externally

I have created a Virtual Machine Scale Set in Azure.
This scale set is made up of 5 VMs.
There is a public IP.
When I ping my public IP I get no response, nor do I get a response with the full name, e.g.
myapp.uksouth.cloudapp.azure.com
Is there something I have missed?
I am wondering if I have to add my machine's IP somewhere?
I am trying to remote into the machines within the scale set eventually!
This scale set will be used for Azure Service Fabric.
Paul
If you deploy your scale set with "public IP per VM", then each VM gets its own public IP: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking#public-ipv4-per-virtual-machine. However, this is not the default in the portal. In the portal, the default is to create a load balancer in front of the scale set with a single public IP on the LB (today, at least; no guarantee it will stay this way). It also comes with NAT rules configured to allow RDP/SSH on ports 50000 and above. They won't necessarily be contiguous, though (at least in the default configuration), so you will need to examine the NAT rules on the load balancer to see which ports are relevant. Once you do, you should be able to do ssh -p <port-from-nat-rule> <public-ip> to ssh in (or similar in your RDP client for Windows).
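To stay consistent with the Terraform used elsewhere on this page, here is a rough sketch of the kind of inbound NAT pool the portal creates; the names, port range, and frontend IP configuration name are assumptions, and the argument names follow the azurerm 3.x provider.

```hcl
# SSH NAT pool for the scale set. The frontend ports are not necessarily
# contiguous per instance, so examine the LB's NAT rules to see which port
# maps to which instance.
resource "azurerm_lb_nat_pool" "ssh" {
  name                           = "ssh-nat-pool"
  resource_group_name            = azurerm_resource_group.rg.name
  loadbalancer_id                = azurerm_lb.lb.id
  protocol                       = "Tcp"
  frontend_port_start            = 50000
  frontend_port_end              = 50119
  backend_port                   = 22
  frontend_ip_configuration_name = "public-frontend"
}
```

The scale set's NIC ip_configuration then references this pool, and you connect with something like `ssh -p <port-from-nat-rule> azureuser@<public-ip>`.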
When I do a ping on my public ip I get no response
Azure does not support ping to the load balancer's public IP address.
As a test, you can RDP/SSH to the public IP address on the different NAT ports to check the connection.
Did you create the VMSS from the Azure Marketplace? If yes, the Azure LB will be configured automatically.
If you created the load balancer yourself, please check the LB probes, the backend pools (all VMs should be in the backend pool), the load balancing rules, and the NAT rules.
You can also configure Log Analytics for the Azure load balancer to monitor it.

Azure gateway with a virtual network

I've got multiple questions on the setup of a gateway and VMs, so here is what I actually have.
I've got an Application Gateway and two Ubuntu VMs, everything hosted on Azure. They are all on the same virtual network. Both VMs have only a private IP (10.1.0.4 and 10.1.0.5), and the gateway has a private IP (10.1.1.4) and a public IP. Because only the gateway has a public IP, I guess that everything has to go through it, and this is what I want.
The goals I am trying to achieve:
Set up load balancing on port 1680, redirected to port 1680.
Redirect SSH so I can connect to a specific VM, because at the moment the VMs have no public IP. Is it possible to do this with a path-based rule, like www.example.com/VM1 to connect by SSH to the first VM? If not, what can be used to differentiate the SSH connections for VM1 and VM2?
Redirect port 80 of the gateway to port 8080 of a specific VM. As in my previous example, www.example.com/adminPanelVM1 would connect to the first VM on port 80 (redirected to port 8080 on the VM).
I already managed to create the redirection of port 1680 on the gateway with an HTTP setting, a listener, and a rule.
Azure Application Gateway
The Azure Application Gateway operates at layer 7 of the OSI model, on the HTTP/HTTPS/WebSocket protocols; because of that, it cannot route any other protocol (like SSH).
You have a few options, though.
You can use a Network Security Group (NSG) for access control to your virtual machines. In the NSG you define where the traffic that is allowed to reach the VMs can come from.
An NSG behaves like an access control list, filtering traffic based on source and destination information and evaluating rules in order of priority. See this page for more information about NSGs.
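As a concrete illustration, here is a minimal Terraform sketch of such a rule, matching the tooling used in the first question on this page; every name and the source range are placeholders, and the schema follows the azurerm 3.x provider.

```hcl
# Allow SSH to the VMs only from a trusted address range; everything else is
# caught by the lower-priority default deny rules.
resource "azurerm_network_security_rule" "allow_ssh_from_office" {
  name                        = "allow-ssh-from-office"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "203.0.113.0/24" # example trusted range
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.vm_nsg.name
}
```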
Another option is to use a load balancer.
Azure Load Balancer
If you need to do port mapping, like you describe in your question, then a simple load balancer might be a better solution for you. An Azure Load Balancer works at a lower level in the OSI model, namely layer 4 (the transport layer), handling TCP/UDP traffic.
So, if you are using a load balancer, then you can set up NAT rules to forward your traffic to specific machines, in other words, if you want to do:
LB port 1234 redirects to VM1 port 22 and
LB port 4312 redirects to VM2 port 22
you can do that using PowerShell as described in the Creating a public load balancer in Resource Manager by using PowerShell article.
There are quite a few steps but it walks you through the whole process of creating NAT rules, NICs and associated virtual machines.
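In Terraform terms (rather than the PowerShell the article uses), the two NAT rules above would look roughly like this; the resource names, frontend configuration name, and NIC references are assumptions.

```hcl
resource "azurerm_lb_nat_rule" "ssh_vm1" {
  name                           = "ssh-vm1"
  resource_group_name            = azurerm_resource_group.rg.name
  loadbalancer_id                = azurerm_lb.lb.id
  protocol                       = "Tcp"
  frontend_port                  = 1234
  backend_port                   = 22
  frontend_ip_configuration_name = "public-frontend"
}

resource "azurerm_lb_nat_rule" "ssh_vm2" {
  name                           = "ssh-vm2"
  resource_group_name            = azurerm_resource_group.rg.name
  loadbalancer_id                = azurerm_lb.lb.id
  protocol                       = "Tcp"
  frontend_port                  = 4312
  backend_port                   = 22
  frontend_ip_configuration_name = "public-frontend"
}

# Each rule is then bound to the target VM's NIC, e.g. for VM1:
resource "azurerm_network_interface_nat_rule_association" "vm1_ssh" {
  network_interface_id  = azurerm_network_interface.vm1.id
  ip_configuration_name = "internal"
  nat_rule_id           = azurerm_lb_nat_rule.ssh_vm1.id
}
```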
Azure Application Gateway vs Azure Load Balancer?
These two services are distinctly different and are trying to solve different problems, although those problems might look similar :)
The primary uses of an Application Gateway are:
SSL termination
cookie-based session affinity
round robin for load balancing traffic
Whereas the Azure Load Balancer service works at the TCP/UDP level and supports, for example, port mapping.
Cost-wise, the load balancer service is free, while the application gateway is billed per hour.
There are many great articles on this topic and on when to pick which service. See, for example, the links below for more details:
When to use Azure Load Balancer or Application Gateway
Frequently asked questions for Application Gateway

Azure load balancer: NAT redirect RDP to VM, and load balance HTTP to availability set?

It looks like you can't NAT as well as load balance unless it's to the same destination. Once I created the NAT rule (so I can RDP to the load balancer over a custom port, and then that's redirected to my management VM), I cannot create the backend pool to use for HTTP load balancing. I go to backend pools and click create, and it already fills in "associated with " and I cannot change that to my web VMs' availability set.
I've also tried creating the backend pool first, for which I select the web VM availability set, but then when I create a NAT rule I cannot point to the management VM, only to the availability set/specific VM in that set.
What am I missing? Is there a solution besides recreating the management VM and putting it in the web VM availability set?
I've also tried creating the backend pool first, for which I select the web VM availability set, but then when I create a NAT rule I cannot point to the management VM, only to the availability set/specific VM in that set.
All of this is by-design behavior. The LB only works with a single availability set or a single VM.
Is there a solution besides recreating the management VM and putting it in the web VM availability set?
No. If you want to use the LB to connect to the management VM, you should recreate it and add it to that availability set.
If you just want this VM to be able to connect to the VMs behind the LB, you can create the management VM in that VNet, log in to it via its public IP address, and then use the private IP addresses to connect to those VMs.
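If you do recreate the management VM inside the web availability set, the RDP NAT rule can then be pointed at its NIC. A minimal sketch, assuming the azurerm Terraform provider and hypothetical resource names:

```hcl
# Bind the existing RDP inbound NAT rule to the management VM's NIC once the
# VM lives in the same availability set as the rest of the backend pool.
resource "azurerm_network_interface_nat_rule_association" "mgmt_rdp" {
  network_interface_id  = azurerm_network_interface.mgmt.id
  ip_configuration_name = "internal"
  nat_rule_id           = azurerm_lb_nat_rule.rdp_mgmt.id
}
```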

Multiple VMSS behind a single Azure Load Balancer

We have multiple background-worker VMSS that do not need a public IP to work.
I want to be able to connect to an arbitrary VM (e.g., to troubleshoot via RDP, or to collect some snapshots using a remote profiler, etc.).
When there is only one VMSS per load balancer, everything works like a charm. I've set up NAT pools for each port used on the VMs and it all works fine.
Now, if I try to add one more VMSS to the same load balancer (using its own NAT/backend pools), the deployment fails with the following message:
Virtual Machine /subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/|providers|Microsoft.Compute|virtualMachineScaleSets|...|virtualMachines|0 is using different Availability Set than other Virtual Machines connected to the Load Balancer(s) ...
As far as I know there is no way to set the availability set for a VMSS. Are there any options besides keeping a separate load balancer/public IP for each VMSS?
UPDATE: I've found a similar scheme for a VM + availability set setup (see the ILB endpoint section).
Is something like this possible for VMSS?
You are right, we can't change the availability set for a VMSS.
if I'm trying to add one more vmss to the same load balancer
As we know, we can't add different availability sets to a single load balancer, so we can't add a second VMSS to the same load balancer.
Are there any options but keeping own load balancer/public ip for each VMSS?
We have multiple background worker vmss that do not need a public IP to work.
Are those VMSS in the same VNet? If yes, we can deploy a new VM in the same VNet, connect to this VM, and then use it to connect to the VMSS instances with their internal IP addresses; in this way, the new VM works as a jumpbox. We can use this jumpbox to troubleshoot.
Update:
Is it possible then to have multiple VMSS in the same VNet and assign a separate public IP/load balancer to each of them?
Yes. We can create a new Azure VM with a public IP, install HAProxy on it, and make this VM work as a load balancer by adding all the VMSS instances in the same VNet as HAProxy backends; in this way, we can connect to a VMSS instance via this VM's public IP address plus the mapped port. A rough sketch follows below.
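Here is a rough sketch of what the HAProxy side of that could look like, kept as a cloud-init fragment inside Terraform for consistency with the rest of this page. The instance IPs, ports, and names are all hypothetical, and the VM, NIC, and public IP resources that would consume this fragment are omitted.

```hcl
# haproxy.cfg fragment to be delivered to the proxy VM (for example via
# custom_data / cloud-init); each frontend port maps to one VMSS instance's
# RDP port.
locals {
  haproxy_cfg = <<-EOF
    frontend rdp_worker_a_0
        bind *:50001
        mode tcp
        default_backend worker_a_0

    backend worker_a_0
        mode tcp
        server worker-a-0 10.0.1.4:3389 check

    frontend rdp_worker_b_0
        bind *:50002
        mode tcp
        default_backend worker_b_0

    backend worker_b_0
        mode tcp
        server worker-b-0 10.0.2.4:3389 check
  EOF
}
```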
