Access own Azure internal load balancer from backend server pool - azure

I have set up an internal load balancer (lb1) with one server (srv1) as the only VM in the backend pool. When I try to access the load balancer lb1 from this server (srv1), I get a timeout. I have a second load balancer set up in the same subnet (lb2 and srv2). All traffic is on port 443. Inbound and outbound rules allow traffic on all ports to/from the subnet.
srv1 can access lb2 but not lb1
srv2 can access lb1 but not lb2
Is this by design, or have I missed a configuration option?

Is this by design, or have I missed a configuration option?
This is by-design behavior. The load balancer redistributes requests to the VMs in the backend pool, and when a VM in the pool connects to the frontend of its own internal load balancer and the flow is mapped back to that same VM, the flow fails: the VM receives its own request and the connection never completes. Since srv1 is the only member of lb1's pool, every connection from srv1 to lb1 maps back to srv1 and times out. srv1 can reach lb2 because that flow lands on srv2, and vice versa.
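A quick way to observe this from inside the VMs is a plain TCP connect test. The sketch below uses hypothetical frontend IPs for lb1 and lb2; run it from srv1:

    import socket

    def can_connect(ip: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Return True if a TCP handshake to ip:port completes within timeout."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical frontend IPs. From srv1, lb1 should time out (the flow
    # hairpins back to srv1), while lb2 should succeed (the flow lands on srv2).
    for name, ip in [("lb1", "10.0.0.10"), ("lb2", "10.0.0.11")]:
        print(name, "reachable" if can_connect(ip) else "timeout/refused")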

Related

Azure Public Load Balancer cannot access to backend pool VMs

I'm facing an Azure Public Load Balancer issue: I cannot access the load balancer's public IP on port 80, but I can access the backend pool VMs' port 80 directly.
My Azure Public Load Balancer settings all follow the Microsoft Azure documentation.
Port 80 on the backend pool VMs is also reachable via the VMs' public IPs.
Is there any troubleshooting for this situation?
Thanks.
If you are unable to connect to your VMs via the load balancer front end IP / port, it is usually an issue with your LB configuration, backend health, or a firewall / NSG blocking the connection.
Please refer to the troubleshooting doc below to go through your configuration:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot-backend-traffic
Your LB configuration is set up using the Load Balancing Rules of your load balancer.
Azure Load Balancer will not route traffic to backends that are not reporting healthy via the Health Probes, so make sure your backend is healthy.
A firewall or NSG can also block the connection. Make sure there are no firewalls in your environment (including OS firewalls) blocking the traffic, and check that the Network Security Groups (NSGs) on the subnet and on the VM's NICs are not blocking the load balancer probes (the AllowAzureLoadBalancerInBound rule).
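As a quick isolation step (not from the official docs), you can compare a request straight to a backend VM with one through the frontend. The addresses below are hypothetical:

    from urllib.error import URLError
    from urllib.request import urlopen

    # Hypothetical addresses: a backend VM and the LB public frontend, both on 80.
    # Backend OK + frontend failing points at the LB rule, probe health, or an NSG;
    # both failing points at the application or the OS firewall on the VM.
    for label, url in [
        ("backend VM ", "http://10.0.0.4:80/"),
        ("LB frontend", "http://20.120.1.100:80/"),
    ]:
        try:
            with urlopen(url, timeout=5) as resp:
                print(label, "HTTP", resp.status)
        except (URLError, OSError) as exc:
            print(label, "failed:", exc)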
You can also run the automated troubleshooter for specific issues on your load balancer from the "Diagnose and solve problems" tab of your load balancer resource in the Azure portal.

Health probe says the endpoints (Azure Container Instances) are degraded

I have set up a health probe to ping Azure Containers from my load balancer so that requests are forwarded only to healthy nodes. However, I am getting a "Degraded" status despite the containers being up and running. I am aware this has got to do with the response the health probe gets from the IP address, but I cannot figure out what changes I need to make to my container settings to ensure that it works as expected.
For Azure load balancer backend pool management, there are two ways of configuring a backend pool:
Network Interface Card (NIC)
Combination of IP address and Virtual Network (VNET) Resource ID
You could deploy your ACI into a VNet and add its IP address to the backend pool, then create load balancer rules and a health probe for your backend ports. Configure the appropriate health probe type for your backends; read Load Balancer health probes.
For example, I have an application that exposes port 8000 within the container in a VNet.
(Screenshots: load balancer rule and health probe configuration.)
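For an HTTP probe, the backend is only reported healthy if it answers with HTTP 200 within the probe interval; anything else (or a timeout) shows up as degraded. Below is a minimal sketch of such a responder on port 8000, matching the example above; the handler and root path are assumptions, and the path must match whatever you configure on the probe.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ProbeHandler(BaseHTTPRequestHandler):
        """Answers every GET with HTTP 200 so an HTTP health probe reports healthy."""

        def do_GET(self):
            body = b"healthy"
            self.send_response(200)  # anything other than 200 marks the backend down
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Port 8000 matches the container example above; bind on all interfaces so
    # the probe (and the LB rule) can reach it from outside the container.
    HTTPServer(("0.0.0.0", 8000), ProbeHandler).serve_forever()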

Azure internal load-balanced network with VNet Gateway with P2S VPN

So as the title suggests, I need to make a load-balanced internal gateway with a VPN. I'm a developer, so networking is not my forte.
I have two identical VMs (VM1 in Availability Zone 1 and VM2 in Availability Zone 2) and I need to share VPN traffic between them. My client has provided a range of 5 addresses that will be configured on their firewall, so I will pick one for them to use and they then need to be oblivious to the internal routing.
My ultimate goal is to allow the client to connect through a VPN to one IP address (in the range they have allocated) and let Azure direct the traffic to VM1 primarily, but failover to VM2 if Availability Zone 1 goes down. The client must be oblivious to which VM they ultimately connect to.
My problem is that I cannot create a configuration where the Load Balancer's static IP is in the address range of the Gateway's VPN P2S address pool. Azure requires the P2S address pool to be outside of the VNet's address space, and the Load Balancer needs to use the VNet's Subnet (which obviously is INSIDE the VNet's address space), so I'm stuck.
I can create the GW -> VNet -> subnet -> VM1/VM2 setup no problem using the client's specified IP range for the P2S VPN, but without a Load Balancer, how do I then direct the traffic between the VMs?
e.g. (IPs are hypothetical)
The VNet address range is 172.10.0.0/16
The Gateway subnet is 172.10.10.0/24
The Gateway's P2S address pool is 172.5.5.5/29
VM1's IP is 172.10.10.4
VM2's IP is 172.10.10.5
I can create a Load Balancer to use the VNet (and the VMs in a Backend Pool), but then its static IP has to fall in the VNet's subnet and thus outside the P2S address pool. So how do I achieve this?
I thought of creating a second VNet and corresponding Gateway and linking the Gateways, but I seemed to end up in the same boat.
UPDATE: Here is an image of my VNet diagram. I have only added one of the VMs (NSPHiAvail1) for now, but VM2 will be in the same LB backend pool.
NSP_Address_Range is a subnet of the VNet and is the range dictated by the client. The load balancer has a frontend IP in this range.
Firstly, the Azure load balancer distributes new incoming TCP connections across the healthy backends using a hash-based algorithm; you cannot use it to route traffic to VM1 primarily with VM2 only as a failover.
My problem is that I cannot create a configuration where the Load Balancer's static IP is in the address range of the Gateway's VPN P2S address pool.
You do not need to add the Load Balancer frontend IP to the P2S address pool; the address pool is used to assign IPs to the clients connecting to your Azure VNet.
Generally, you could configure the P2S VPN gateway, create the GatewaySubnet and a vmsubnet, and create an internal Standard SKU load balancer in the vmsubnet. Then add the VMs in the vmsubnet to the backend pool as the backend targets of the load balancer, and configure the health probe and load balancer rule for load balancing traffic. If so, you can access the backend VMs from P2S clients via the load balancer's frontend private IP.
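For reference, here is a rough sketch of that internal Standard SKU load balancer using the azure-mgmt-network Python SDK (pip install azure-identity azure-mgmt-network); the resource names, region, and port are assumptions, and the portal or CLI flow described above is equivalent.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    # All names below are assumptions; substitute your own environment.
    SUB, RG, LB = "<subscription-id>", "myResourceGroup", "lb-internal"
    LB_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
             f"/providers/Microsoft.Network/loadBalancers/{LB}")
    subnet_id = (f"/subscriptions/{SUB}/resourceGroups/{RG}/providers/"
                 "Microsoft.Network/virtualNetworks/vnet1/subnets/vmsubnet")

    client = NetworkManagementClient(DefaultAzureCredential(), SUB)
    client.load_balancers.begin_create_or_update(RG, LB, {
        "location": "westeurope",
        "sku": {"name": "Standard"},
        # Frontend gets a private IP in vmsubnet -- it is NOT part of the P2S pool.
        "frontend_ip_configurations": [{"name": "fe", "subnet": {"id": subnet_id}}],
        "backend_address_pools": [{"name": "bepool"}],
        "probes": [{"name": "hp", "protocol": "Tcp", "port": 443,
                    "interval_in_seconds": 15, "number_of_probes": 2}],
        "load_balancing_rules": [{
            "name": "rule443", "protocol": "Tcp",
            "frontend_port": 443, "backend_port": 443,
            "frontend_ip_configuration": {"id": f"{LB_ID}/frontendIPConfigurations/fe"},
            "backend_address_pool": {"id": f"{LB_ID}/backendAddressPools/bepool"},
            "probe": {"id": f"{LB_ID}/probes/hp"},
        }],
    }).result()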
Moreover, you should be aware of some limitations of the internal load balancer.
My problem was the Load Balancer Rules, or lack thereof. Once I had added a rule for port 1433 (SQL Server), I was able to query the DB from my local instance of SSMS.
There is another solution that is a LOT simpler than the solution I was trying to implement, BUT it does not allow for an internal load balancer.
Azure Virtual Machine Scale Sets deploy as many VMs as I specify and will automatically switch to another zone if one goes down. I have no need for the scalability aspect, so I disabled it and I'm only using the load-balancing aspect.
NB: This setup only exposes a PUBLIC IP, and you cannot assign an internal load balancer in conjunction with the default public load balancer.
Here's some info:
Quickstart: Create a virtual machine scale set in the Azure portal
Create a virtual machine scale set that uses Availability Zones
Networking for Azure virtual machine scale sets
Virtual Machine Scale Sets
The cost is exactly what you'd pay for individual VMs, but the load balancing is included. So it's cheaper than the solution I described in my question. Bonus!

Azure load balancer inside a subnet

I have a VNet with a subnet; there are 3 VMs in the subnet, and the VNet is connected via a VPN connection to an on-premises server. The on-premises server will send requests to an internal IP of the subnet.
What I'd like to do is host a load balancer with no public IP, but has an IP in the subnet range. The on-premises app would then talk to the single load balancer, which would in turn forward the request on to any of the servers hosting my app in the subnet.
Can anyone tell me if this is possible, or suggest an alternative if not?
I believe you are looking for an internal load balancer.
You can find documentation for that here: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-ilb-arm-portal.
Azure Internal Load Balancer (ILB) provides network load balancing between virtual machines that reside inside a cloud service or a virtual network with a regional scope.
Create a Load Balancer as usual, but specify Type: Internal.
Probably best to make it use a static IP address as well so it won't change.
Then you'll need to configure its back-end pool and health probe so it knows where to route traffic.
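If you script it, "Type: Internal" just means the frontend references a subnet instead of a public IP. Here is a minimal sketch of such a frontend configuration, which would go into the load balancer's frontend_ip_configurations (e.g. in a NetworkManagementClient.load_balancers.begin_create_or_update payload); the subnet ID and address are hypothetical.

    # Hypothetical subnet resource ID; use your own VNet/subnet names.
    subnet_id = ("/subscriptions/<sub-id>/resourceGroups/myRG/providers/"
                 "Microsoft.Network/virtualNetworks/vnet1/subnets/appsubnet")

    frontend_ip_configuration = {
        "name": "fe-internal",
        "subnet": {"id": subnet_id},               # internal: subnet, not a public IP
        "private_ip_allocation_method": "Static",  # static so the IP never changes
        "private_ip_address": "10.0.1.100",        # hypothetical free address in range
    }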

Azure secured connection for Load balancer?

We are creating virtual machines in the Resource Manager portal with an internet-facing load balancer. On the virtual machines we have more than 10 web applications running on different ports. We have port mappings in the load balancer to access the applications publicly. Now our requirement is: how can we make secured connections for all applications? Can we make "https" for the load balancer public IP?
Please let me know if there is any possibility to make a secure connection for the load balancer.
Thanks,
Selva
You can go with Azure Application Gateway to do the external SSL termination, then set up an internal load balancer to do the routing.
https://azure.microsoft.com/en-in/documentation/articles/application-gateway-ssl-arm/
https://azure.microsoft.com/en-in/documentation/articles/application-gateway-ilb/
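Once the Application Gateway listener is up, you can confirm that TLS terminates at the gateway rather than at the backends with a quick handshake check; the hostname below is hypothetical.

    import socket
    import ssl

    host = "myapp.example.com"  # hypothetical Application Gateway frontend
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The certificate presented here is the gateway's; the backends
            # behind the internal load balancer keep serving plain HTTP.
            print("negotiated", tls.version())
            print("certificate subject:", tls.getpeercert().get("subject"))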
