Azure Load Balancer - wrong outgoing IP address

I have two Azure VMs behind a load balancer. The VMs don't have any public IP; only the LB has one static public IP address.
Sometimes a VM's outgoing traffic uses the public IP 13.93.5.128, which is not the right one. When I restart one VM it gets the right IP, but then the second VM gets the bad IP. It even changes without a restart.
According to this - https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections - I think I'm in the "Load-balanced VM (no instance-level public IP address on the VM)" scenario (SNAT).
I'm checking the outgoing IP with dig +short myip.opendns.com @resolver1.opendns.com.
How can I make the outgoing IP for all VMs behind the load balancer always the same (the load balancer's)?

This may be more elaborate than your requirements call for, but if your VMs are hosted using ARM (as opposed to classic) then you can reserve a static public IP address for the LOAD BALANCER. If you are unhappy with the allocated address for whatever reason, reserve and allocate a new one.
Example
Create a resource group
Create a virtual network inside the resource group
Create a subnet inside the virtual network
Create a public IP
Create a load balancer under the resource group
Create Front-end IP pool inside the load balancer and assign the newly created public IP to it.
Create a Backend IP pool inside the load balancer
Create rules for the load balancer
Create Inbound NAT rules inside the load balancer
Create probes for the load balancer
Create a NIC under the resource group. The NIC must be in the created resource group, VNet, and subnet.
It must also be attached to the backend pool and the inbound NAT rules of the load balancer.
Create a new VM and attach the newly created NIC
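The steps above can be sketched with the Azure CLI. Every name below (myRG, myVnet, myLB, and so on) is a hypothetical placeholder, and the region, SKUs, ports, and image are assumptions, not values from the original answer:

```shell
#!/usr/bin/env bash
# Sketch only - all resource names and values are placeholders.
set -euo pipefail

# Resource group, VNet, and subnet
az group create --name myRG --location westeurope
az network vnet create --resource-group myRG --name myVnet \
  --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24

# Static (reserved) public IP for the load balancer
az network public-ip create --resource-group myRG --name myPublicIP \
  --sku Standard --allocation-method Static

# Load balancer with frontend pool (holding the public IP) and backend pool
az network lb create --resource-group myRG --name myLB --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool

# Probe and load-balancing rule
az network lb probe create --resource-group myRG --lb-name myLB \
  --name myProbe --protocol tcp --port 80
az network lb rule create --resource-group myRG --lb-name myLB \
  --name myHTTPRule --protocol tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool \
  --probe-name myProbe

# Inbound NAT rule (e.g. SSH to one instance)
az network lb inbound-nat-rule create --resource-group myRG --lb-name myLB \
  --name mySSHNat --protocol tcp --frontend-port 50001 --backend-port 22 \
  --frontend-ip-name myFrontEnd

# NIC in the same RG/VNet/subnet, attached to the backend pool and NAT rule
az network nic create --resource-group myRG --name myNic \
  --vnet-name myVnet --subnet mySubnet \
  --lb-name myLB --lb-address-pools myBackEndPool \
  --lb-inbound-nat-rules mySSHNat

# VM using the prepared NIC
az vm create --resource-group myRG --name myVM --nics myNic \
  --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
```

Because the public IP is created with a static allocation, it survives restarts and can be moved to a new frontend later if you want a different address.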
Reference
These are worth reading:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-get-started-internet-arm-ps
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-arm

Related

Azure internal load-balanced network with VNet Gateway with P2S VPN

So as the title suggests, I need to make a load-balanced internal gateway with a VPN. I'm a developer, so networking is not my forte.
I have two identical VMs (VM1 in Availability Zone 1 and VM2 in Availability Zone 2) and I need to share VPN traffic between them. My client has provided a range of 5 addresses that will be configured on their firewall, so I will pick one for them to use and they then need to be oblivious to the internal routing.
My ultimate goal is to allow the client to connect through a VPN to one IP address (in the range they have allocated) and let Azure direct the traffic to VM1 primarily, but failover to VM2 if Availability Zone 1 goes down. The client must be oblivious to which VM they ultimately connect to.
My problem is that I cannot create a configuration where the Load Balancer's static IP is in the address range of the Gateway's VPN P2S address pool. Azure requires the P2S address pool to be outside of the VNet's address space, and the Load Balancer needs to use the VNet's subnet (which is obviously INSIDE the VNet's address space), so I'm stuck.
I can create the GW -> Vnet -> subnet -> VM1/VM2 set up no problem using the client's specified IP range for the P2S VPN, but without a Load Balancer, how do I then direct the traffic between the VMs?
e.g. (IPs are hypothetical)
The Vnet address range is 172.10.0.0/16
The Gateway subnet is 172.10.10.0/24
The Gateway's P2S address pool is 172.5.5.0/29
VM1's IP is 172.10.10.4
VM2's IP is 172.10.10.5
I can create a Load Balancer to use the VNet (and the VMs in a backend pool), but then its static IP has to fall in the VNet's subnet and thus outside the P2S address pool. So how do I achieve this?
I thought of creating a second VNet and a corresponding gateway and linking the gateways, but I seemed to end up in the same boat.
UPDATE: here is an image of my VNet diagram. I have only added one of the VMs (NSPHiAvail1) for now, but VM2 will be in the same LB backend pool.
NSP_Address_Range is a subnet of the VNet and is the range dictated by the client. The load balancer has a frontend IP in this range.
Firstly, the Azure load balancer distributes new incoming TCP connections using a hash-based algorithm (roughly round-robin); you cannot use it for active/passive failover.
My problem is that I cannot create a configuration where the Load Balancer's static IP is in the address range of the Gateway's VPN P2S address pool.
You do not need the load balancer frontend IP to be in the P2S address pool; the address pool is only used to hand out addresses to clients connecting to your Azure VNet.
Generally, you would configure the P2S VPN gateway, create a gateway subnet and a VM subnet, and create an internal Standard SKU load balancer in the VM subnet. Then add the VMs in the VM subnet to the backend pool as the backend targets of the load balancer, and configure the health probe and load-balancing rule for the traffic. With that in place, clients can reach the backend VMs via the load balancer's frontend private IP.
Also, be aware of some limitations of the internal load balancer.
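A rough Azure CLI sketch of that internal Standard load balancer (all resource names are placeholders; the resource group, VNet, GatewaySubnet, VM subnet, and VM NICs are assumed to exist already):

```shell
# Sketch only - myRG, myVnet, vmsubnet, myInternalLB, vm1Nic are placeholders.

# Internal Standard LB: no public IP; the frontend gets a private IP
# in the VM subnet.
az network lb create --resource-group myRG --name myInternalLB --sku Standard \
  --vnet-name myVnet --subnet vmsubnet \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool

# Health probe and load-balancing rule (port 1433 as in the follow-up below
# is just an example; use whatever port your service listens on).
az network lb probe create --resource-group myRG --lb-name myInternalLB \
  --name sqlProbe --protocol tcp --port 1433
az network lb rule create --resource-group myRG --lb-name myInternalLB \
  --name sqlRule --protocol tcp --frontend-port 1433 --backend-port 1433 \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool \
  --probe-name sqlProbe

# Add each VM's NIC IP configuration to the backend pool.
az network nic ip-config address-pool add --resource-group myRG \
  --nic-name vm1Nic --ip-config-name ipconfig1 \
  --lb-name myInternalLB --address-pool myBackEndPool
```

P2S clients then connect to the load balancer's frontend private IP, not to an address inside the P2S pool.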
My problem was the load balancer rules - or lack thereof. Once I had added a rule for port 1433 (SQL Server), I was able to query the DB from my local instance of SSMS.
There is another solution that is a LOT simpler than the one I was trying to implement, BUT it does not allow for an internal load balancer.
Azure Virtual Machine Scale Sets deploy as many VMs as I specify and will automatically switch to another zone if one goes down. I have no need for the scalability aspect, so I disabled it and I'm only using the load-balancing aspect.
NB: this setup only exposes a PUBLIC IP, and you cannot attach an internal load balancer alongside the default public load balancer.
Here's some info:
Quickstart: Create a virtual machine scale set in the Azure portal
Create a virtual machine scale set that uses Availability Zones
Networking for Azure virtual machine scale sets
Virtual Machine Scale Sets
The cost is exactly what you'd pay for the individual VMs, but the load balancing is included. So it's cheaper than the solution I described in my question. Bonus!

Load Balancer IP Configuration

We have configured a load balancer with 2 Linux virtual machines. We plan to remove one server from the load balancer, and then the load balancer itself. Is there any way to assign the load balancer's IP address to the remaining server?
E.g.
We have VM1 and VM2, both behind the load balancer. Is there a way to assign the load balancer's IP to VM1?
Short answer: yes. Public IPs are resources of their own in ARM.
Confirm the existing frontend IP address is static
Assign the LB an additional frontend IP
Remove the existing frontend IP
Associate the IP with VM1's NIC
Done
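Those steps might look like this with the Azure CLI (all names are hypothetical placeholders; any LB rules or NAT rules referencing the old frontend must be deleted before removing it):

```shell
# Sketch only - myRG, myLB, myVnet, mySubnet, myPublicIP, vm1Nic are placeholders.

# 1. Make sure the public IP is static so it survives being reassociated.
az network public-ip update --resource-group myRG --name myPublicIP \
  --allocation-method Static

# 2. Give the LB a replacement frontend first (a load balancer must keep
#    at least one frontend IP configuration).
az network lb frontend-ip create --resource-group myRG --lb-name myLB \
  --name tempFrontEnd --vnet-name myVnet --subnet mySubnet

# 3. Remove the frontend that holds the public IP you want to move.
az network lb frontend-ip delete --resource-group myRG --lb-name myLB \
  --name myFrontEnd

# 4. Associate the freed public IP with VM1's NIC.
az network nic ip-config update --resource-group myRG \
  --nic-name vm1Nic --name ipconfig1 --public-ip-address myPublicIP
```

After step 4 the old load balancer can be deleted entirely, and clients keep connecting to the same public IP, now answered directly by VM1.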

Cannot access Azure VM Scaleset ip address externally

I have created a virtual machine scale set in Azure.
This scale set is made up of 5 VMs.
There is a public IP.
When I do a ping on my public ip I get no response, nor do I get a response with the full name, e.g.
myapp.uksouth.cloudapp.azure.com
Is there something I have missed?
I am wondering if I have to add my machine's IP somewhere?
I am trying to remote into the machines within the scale set eventually!
This scale set will be used for Azure Service Fabric.
If you deploy your scale set with "public IP per VM", then each VM gets its own public IP: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking#public-ipv4-per-virtual-machine. However, this is not the default in the portal. In the portal, the default is to create a load balancer in front of the scale set with a single public IP on the LB (today, at least; no guarantee it will stay this way). It also comes with NAT rules configured to allow RDP/SSH on ports 50000 and above. They won't necessarily be contiguous, though (at least in the default configuration), so you will need to examine the NAT rules on the load balancer to see which ports are relevant. Once you do, you should be able to do ssh -p <port-from-nat-rule> <public-ip> to ssh in (or similar in your RDP client for Windows).
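A quick way to inspect those NAT mappings from a shell (myRG and myScaleSetLB are placeholder names for your resource group and the load balancer the scale set created):

```shell
# Portal-created scale sets typically use an inbound NAT pool; list it to
# see the frontend port range that maps onto backend port 22 (or 3389).
az network lb inbound-nat-pool list --resource-group myRG \
  --lb-name myScaleSetLB --output table

# Individually created NAT rules show up here instead:
az network lb inbound-nat-rule list --resource-group myRG \
  --lb-name myScaleSetLB --output table

# Then connect through the LB's public IP on the instance's port, e.g.:
ssh -p 50001 azureuser@<lb-public-ip>
```

The exact port-to-instance mapping comes from the NAT pool/rule output, so check it rather than assuming the ports are contiguous.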
When I do a ping on my public ip I get no response
The Azure load balancer frontend does not respond to ICMP ping.
To test, use RDP/SSH to the public IP address on the different NAT ports to verify the connection.
Did you create the VMSS from the Azure Marketplace? If so, the Azure LB will be configured automatically.
If you created the load balancer yourself, check the LB probes, the backend pools (all VMs should be in the backend pool), the load-balancing rules, and the NAT rules.
You can also configure Log Analytics for the Azure load balancer to monitor it.

Azure internal load balancer outbound connectivity

How can virtual machines behind an Azure internal load balancer access the internet? Is there an AWS NAT Gateway equivalent in Azure?
A virtual machine in the backend pool of a Standard (not Basic) internal load balancer cannot make outbound connections to the internet.
To enable outbound connections, create a second load balancer with a public IP, attach the same backend pool, and add a dummy rule with a dummy probe. Creating the rule triggers the programming of outbound SNAT.
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#defaultsnat
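A sketch of that workaround with the Azure CLI (all names are placeholders, and the port number is arbitrary; on current Standard load balancers an explicit outbound rule, or the Azure NAT Gateway, is the cleaner alternative):

```shell
# Sketch only - myRG, outboundPIP, myPublicLB, myBackEndPool are placeholders.
# The backend VMs are assumed to already be in myBackEndPool.

az network public-ip create --resource-group myRG --name outboundPIP \
  --sku Standard --allocation-method Static

# Second, public Standard LB whose only job is to provide outbound SNAT.
az network lb create --resource-group myRG --name myPublicLB --sku Standard \
  --public-ip-address outboundPIP \
  --frontend-ip-name outboundFrontEnd --backend-pool-name myBackEndPool

# Dummy probe and dummy rule: creating the rule is what triggers the
# outbound SNAT programming for the pool members.
az network lb probe create --resource-group myRG --lb-name myPublicLB \
  --name dummyProbe --protocol tcp --port 65000
az network lb rule create --resource-group myRG --lb-name myPublicLB \
  --name dummyRule --protocol tcp --frontend-port 65000 --backend-port 65000 \
  --frontend-ip-name outboundFrontEnd --backend-pool-name myBackEndPool \
  --probe-name dummyProbe
```

The pool members then SNAT their outbound traffic through outboundPIP while continuing to receive inbound traffic only via the internal load balancer.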
By default, an Azure VM behind a Basic internal load balancer can access the internet, but you can't reach it from the internet.
If you want to access it, you can create another VM with a public IP address in the same VNet and use it as a jumpbox, or assign a public IP address to the VM itself and connect via that address.

Azure internal load balancer IP

I have a very simple Azure VM setup. One VM is behind an internal load balancer, and its private IP address is 10.0.1.10.
A web service is running on that VM. I can access the website using http://localhost, but per my software requirements I have to use the load balancer's private IP address instead of localhost. However, I cannot browse to the internal load balancer's IP address (http://10.0.1.10 does not work from that VM).
Is it by design that I can't access the internal load balancer by its private IP address? Or do I need to do something special to make it work?
There's a difference between public and internal Azure Load Balancer configurations.
When Azure Load Balancer is used in a public load balancer configuration, SNAT is used for outbound requests. This means a VM behind a public load balancer can reach the public IP address of the load balancer and the flow will be load balanced accordingly. This consumes an ephemeral port for each connection to the VIP.
Internal load balancer configurations do not offer SNAT today. As a result, an internal load balancer configuration does not allow a pool member to access the IP address of the internal load balancer.
We are looking at addressing this in a future release by allowing an option to enable SNAT for internal load balancers as well. Mandatory SNAT can impose constraints for those who don't need to access the IP address of the load balancer, so this needs to be an option rather than the default.
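A quick way to observe the behavior described above from a shell, using the 10.0.1.10 address from the question (this assumes a working health probe and a load-balancing rule for port 80):

```shell
# From a VM that IS in the backend pool: the request to the internal LB
# frontend fails, because no SNAT is applied and the flow can be delivered
# straight back to the same VM.
curl --max-time 5 http://10.0.1.10/

# From a VM in the same VNet that is NOT in the backend pool, the identical
# request succeeds and is load balanced across the pool.
curl --max-time 5 http://10.0.1.10/
```

The commands are identical; only where you run them from changes the outcome, which is exactly the hairpin limitation being described.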
To restate your description: the VM is in the load balancer's backend pool, the web service runs on that VM, and you want to browse the internal load balancer's IP from that same VM, but it doesn't work.
I tested this in my lab and hit the same error; the load balancer doesn't work that way.
Here is my network capture result:
You could create a new VM outside of the load balancer's backend pool and browse the load balancer IP from there. Once a network interface is added to a load balancer's backend pool, the load balancer sends it load-balanced traffic based on the load-balancing rules that are created.
