Azure external load balancer - Source IP security

I have two VMs being load balanced by an external-facing load balancer. I am able to successfully connect to those VMs from the internet through that LB rule.
However, I want to restrict access to that load balancer's public IP address (or more precisely, to the VMs behind it) to a specific source network, so that rather than the entire internet being able to access it, only specific public subnets could use it.
Looking at the TCP connection tables on the VMs, it looks like the Azure LB is NATing the source IP coming through it, so my NSGs on the VM guests cannot filter on "SourceIP = Desired Source".
Is there any way to do this in the Resource Manager version of Azure?

The source port and address range are from the originating computer, not the load balancer.
From https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/#design-considerations (look under "Load Balancers")
Similar to public facing load balancers, when you create NSGs to filter traffic coming through an internal load balancer (ILB), you need to understand that the source port and address range applied are the ones from the computer originating the call, not the load balancer. And the destination port and address range are related to the computer receiving the traffic, not the load balancer.
I'd guess what you're seeing in the connection tables is the destination NAT the load balancer performs, not a rewritten source. The LB is a layer-4 pass-through rather than a proxy, so NSGs still evaluate the original client source IP (X-Forwarded-For is an HTTP-level header and isn't involved here).
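For illustration, a minimal sketch of such a source-restricted NSG rule using the azure-mgmt-network Python SDK; the subscription ID, resource names, the 203.0.113.0/24 source range and port 443 are placeholders, not values from the question:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Hypothetical names - substitute your own subscription, resource group and NSG.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow only the desired public source network; the LB preserves the client
# source IP, so the NSG evaluates this prefix against the original caller.
client.security_rules.begin_create_or_update(
    "my-rg", "vm-nsg", "Allow-Partner-Network",
    SecurityRule(
        protocol="Tcp",
        direction="Inbound",
        access="Allow",
        priority=100,
        source_address_prefix="203.0.113.0/24",   # the permitted source network
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range="443",
    ),
).result()
```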

Related

Restrict inbound traffic to only come through Azure Load Balancer

Please can someone advise how to restrict access on port 80/443 to some Azure VMs, so that they can only be accessed via the public IP address that is associated with an Azure Load Balancer.
Our current setup has load balancing rules passing traffic from the public IP through on 80=>80 and 443=>443 to a backend pool of 2 VMs. We have a health probe set up on port 80. Session persistence is set to client IP and floating IP is disabled.
I thought the answer was to deny access (via the Network Security Group) to the Internet service tag on 80/443, then add a rule to allow the AzureLoadBalancer service tag on the same ports. But that didn't seem to have any effect. Having read up a little more on this, it seems the AzureLoadBalancer tag only allows the health probe through, not the actual inbound traffic arriving via that load balancer.
I have also tried adding rules to allow the public IP address of the load balancer, but again no effect.
I was wondering if I need to start looking into Azure Firewall and somehow restrict inbound traffic through that?
The only way I can get the VMs to respond on those ports is to add rules allowing 80/443 from Any to Any...
After reading your question, my understanding is that you have a public load balancer and the backend VMs also have instance-level public IPs associated with them, so direct inbound access to the VMs is possible, but you would like to make sure that inbound access to the VMs is restricted to the load balancer only.
The simplest way to achieve this is to disassociate the instance-level public IPs of the VMs; this makes the LB's public IP the only point of contact for your VMs.
Keep in mind that the LB is not a proxy; it is just a layer-4 resource that forwards traffic. Your backend VMs will therefore still see the clients' source IPs, not the LB's IP, so you will still need to allow the traffic at the NSG level with "Any" (or Internet) as the source.
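As a rough sketch of that disassociation step with the azure-mgmt-network Python SDK (the resource group and NIC names are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the VM's NIC, clear the instance-level public IP and push the update.
# After this, the load balancer's public IP is the only inbound entry point.
nic = client.network_interfaces.get("my-rg", "vm1-nic")
nic.ip_configurations[0].public_ip_address = None
client.network_interfaces.begin_create_or_update("my-rg", "vm1-nic", nic).result()
```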
However, if your requirement is to enable outbound connectivity from the Azure VMs while avoiding SNAT exhaustion, I would advise you to create a NAT Gateway, to which you can assign multiple public IP addresses for SNAT, and remove the public IPs from the VMs. This setup makes sure that inbound access is provided only by the public load balancer and outbound access is provided by the NAT Gateway.
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#nat-and-vm-with-standard-public-load-balancer
https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal
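A hedged sketch of the NAT Gateway part with the same SDK, assuming an existing Standard public IP resource; all names and the region are placeholders, and the linked docs remain the authoritative steps:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NatGateway, NatGatewaySku, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
pip_id = ("/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/"
          "Microsoft.Network/publicIPAddresses/natgw-pip")  # existing Standard public IP

# Create the NAT gateway that will own outbound SNAT for the subnet.
nat_gw = client.nat_gateways.begin_create_or_update(
    "my-rg", "outbound-natgw",
    NatGateway(
        location="westeurope",
        sku=NatGatewaySku(name="Standard"),
        public_ip_addresses=[SubResource(id=pip_id)],
    ),
).result()

# Attach it to the backend subnet: inbound stays on the public LB,
# outbound flows are SNATed through the NAT gateway's IP(s).
subnet = client.subnets.get("my-rg", "my-vnet", "backend-subnet")
subnet.nat_gateway = SubResource(id=nat_gw.id)
client.subnets.begin_create_or_update("my-rg", "my-vnet", "backend-subnet", subnet).result()
```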
You could also configure port forwarding in Azure Load Balancer for the RDP/SSH connections to individual instances.
Refer : https://learn.microsoft.com/en-us/azure/load-balancer/manage#-add-an-inbound-nat-rule
https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal
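For example, a sketch of a single inbound NAT rule (SSH to one instance) via the SDK; the frontend configuration name and the port numbers are assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import InboundNatRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

fe_id = ("/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/"
         "Microsoft.Network/loadBalancers/public-lb/frontendIPConfigurations/fe-public")

# Forward <LB public IP>:50001 to port 22 on one specific backend VM.
client.inbound_nat_rules.begin_create_or_update(
    "my-rg", "public-lb", "ssh-vm1",
    InboundNatRule(
        frontend_ip_configuration=SubResource(id=fe_id),
        protocol="Tcp",
        frontend_port=50001,
        backend_port=22,
    ),
).result()

# Note: the rule only takes effect once it is referenced from the target VM's
# NIC (the ip configuration's load_balancer_inbound_nat_rules list).
```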

How to whitelist source IPs on Azure VMs fronted by Azure Load Balancer

I have a public-facing, Standard SKU Azure Load Balancer that forwards incoming requests on a certain port to a virtual machine, using load balancing rules. This virtual machine has an NSG defined at the subnet level that allows incoming traffic on that port, with the source set to 'Internet'.
Presently, this setup works, but I need to implement whitelisting - to allow only a certain set of IP addresses to be able to connect to this virtual machine, through the load balancer. However, if I remove the 'Internet' source type in my NSG rule, the VM is no longer accessible through the Load Balancer.
Has anyone else faced a similar use case, and what is the best way to set up IP whitelisting on VMs that are accessible through a load balancer? Thanks!
Edit: to provide more details
Screenshot of NSGs
These are the top level NSGs defined at the subnet.
We have a public load balancer that fronts the virtual machine where above NSGs are applied. This virtual machine doesn’t have a specific public IP and relies on the Load Balancer’s public IP.
The public load balancer forwards all traffic on ports 8443 and 8543 to this virtual machine, without session persistence and with outbound and inbound using the same IP.
Below are the observations I have made so far:
Unless I specify the source for NSG rule Port_8443 (in above table) as ‘Internet’, this virtual machine is not accessible on this port, via the load balancer’s public IP.
When I retain the NSG rule Port_8543, which whitelists only specific IP addresses, this virtual machine is not accessible on this port, via the load balancer’s public IP – even when one of those whitelisted clients try to connect to this port.
I tried adding the NSG rule Custom_AllowAzureLoadBalancerInBound at a higher priority than Port_8543, but it still didn't open up this access.
I also tried adding the Azure Load Balancer VIP (168.63.129.16) to the Port_8543 NSG rule, but that too didn't open up access to port 8543 on the load balancer's public IP.
I have played with Load Balancing rules options too, but nothing seems to achieve what I am looking for – which is:
Goal 1: to open up the virtual machine's access on ports 8443 and 8543 to only the whitelisted client IPs, AND
Goal 2: allow whitelisted client IPs to be able to connect to these ports on this virtual machine, using the load balancer’s public IP
I am only able to achieve one of the above goals, but not both of them.
I have also tried the same whitelisting with a dedicated public IP assigned to the virtual machine, and that too loses connectivity on the ports where I don't use the 'Internet' source tag.
Azure has default rules in each network security group; they allow inbound traffic from Azure Load Balancer resources (the health probes).
If you want to restrict which clients can access your VMs, you just need to add a new inbound port rule with the public IP addresses of your clients as the Source, and specify the Destination port ranges and Protocol in your specific inbound rules. You can check a client's public IPv4 address by opening a what-is-my-IP page from that client's machine.
Just wanted to add a note for anyone else stumbling here:
If you are looking to whitelist an Azure VM (available publicly or privately) for a few specific client IPs, these are the steps to perform (see the sketch below):
Create an NSG for the VM (or subnet), if one is not already available.
Add NSG rules to allow inbound traffic from the specific client IPs on the specific ports.
Add an NSG rule to deny inbound traffic from all other sources (optional, but it helps tighten the setup).
Also, make sure you look at all the public IPs your client machines will actually connect from. Especially while testing, use the clients' public IPs and not the VPN gateway address ranges - which is what we used, and we ended up with a false negative on our whitelisting test.
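A minimal sketch of steps 2 and 3 above with the azure-mgmt-network Python SDK; the NSG name, port 8443 and the whitelisted prefixes are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Step 2: allow the whitelisted client IPs on the published port.
client.security_rules.begin_create_or_update(
    "my-rg", "subnet-nsg", "Allow-Whitelist-8443",
    SecurityRule(
        protocol="Tcp", direction="Inbound", access="Allow", priority=200,
        source_address_prefixes=["198.51.100.10/32", "203.0.113.0/24"],
        source_port_range="*",
        destination_address_prefix="*", destination_port_range="8443",
    ),
).result()

# Step 3 (optional): explicitly deny everyone else on the same port.
client.security_rules.begin_create_or_update(
    "my-rg", "subnet-nsg", "Deny-Other-8443",
    SecurityRule(
        protocol="Tcp", direction="Inbound", access="Deny", priority=300,
        source_address_prefix="*", source_port_range="*",
        destination_address_prefix="*", destination_port_range="8443",
    ),
).result()
```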

Azure internal load-balanced network with VNet Gateway with P2S VPN

So as the title suggests, I need to make a load-balanced internal gateway with a VPN. I'm a developer, so networking is not my forte.
I have two identical VMs (VM1 in Availability Zone 1 and VM2 in Availability Zone 2) and I need to share VPN traffic between them. My client has provided a range of 5 addresses that will be configured on their firewall, so I will pick one for them to use and they then need to be oblivious to the internal routing.
My ultimate goal is to allow the client to connect through a VPN to one IP address (in the range they have allocated) and let Azure direct the traffic to VM1 primarily, but failover to VM2 if Availability Zone 1 goes down. The client must be oblivious to which VM they ultimately connect to.
My problem is that I cannot create a configuration where the Load Balancer's static IP is in the address range of the Gateway's P2S VPN address pool. Azure requires the P2S address pool to be outside of the VNet's address space, and the Load Balancer needs to use the VNet's subnet (which obviously is INSIDE the VNet's address space), so I'm stuck.
I can create the GW -> VNet -> subnet -> VM1/VM2 setup no problem using the client's specified IP range for the P2S VPN, but without a Load Balancer, how do I then direct the traffic between the VMs?
e.g. (IPs are hypothetical)
The Vnet address range is 172.10.0.0/16
The Gateway subnet is 172.10.10.0/24
The Gateway's P2S address pool is 172.5.5.5/29
VM1's IP is 172.10.10.4
VM2's IP is 172.10.10.5
I can create a Load Balancer to use the VNet (and the VMs in a backend pool), but then its static IP has to fall in the VNet's subnet and thus outside the P2S address pool. So how do I achieve this?
I thought of creating a second VNet and corresponding Gateway and linking the Gateways, but I seemed to end up in the same boat.
UPDATE: here is an image of my VNet diagram. I have only added one of the VMs (NSPHiAvail1) for now, but VM2 will be in the same LB backend pool
NSP_Address_Range is a subnet of the VNet and is the range dictated by the client. The load balancer has a frontend IP in this range.
Firstly, the Azure load balancer distributes new incoming TCP connections across the healthy backend instances (a hash-based distribution); you cannot use it as an active/passive failover mechanism.
My problem is that I cannot create a configuration where the Load Balancer's static IP is in the address range of the Gateway's VPN P2S address pool.
You do not need to add the load balancer frontend IP to the P2S address pool; the address pool is used for the clients connecting to your Azure VNet.
Generally, you could configure the P2S VPN gateway, create the gateway subnet and a vmsubnet, and create an internal Standard SKU load balancer in the vmsubnet. You could then add the VMs in the vmsubnet to the backend pool as the backend target of the load balancer and configure the health probe and a load balancer rule for load-balancing the traffic. With that in place, you could access the backend VMs from the clients via the load balancer's frontend private IP.
Also, be aware of some limitations of the internal load balancer.
My problem was the Load Balancer rules - or the lack thereof. Once I had added a rule for port 1433 (SQL Server), I was able to query the DB from my local instance of SSMS.
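Putting the answer above and this fix together, a hedged sketch of such an internal Standard SKU load balancer with a health probe and a TCP 1433 rule, using the azure-mgmt-network Python SDK; every name, the region and the frontend IP are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    LoadBalancer, LoadBalancerSku, FrontendIPConfiguration, BackendAddressPool,
    Probe, LoadBalancingRule, SubResource,
)

SUB, RG, LB = "<subscription-id>", "my-rg", "internal-lb"
client = NetworkManagementClient(DefaultAzureCredential(), SUB)
lb_id = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
         f"/providers/Microsoft.Network/loadBalancers/{LB}")

# The frontend gets a private IP inside the VM subnet (not the P2S address pool).
vm_subnet = client.subnets.get(RG, "my-vnet", "vmsubnet")

lb = LoadBalancer(
    location="westeurope",
    sku=LoadBalancerSku(name="Standard"),
    frontend_ip_configurations=[FrontendIPConfiguration(
        name="fe-internal",
        subnet=vm_subnet,
        private_ip_allocation_method="Static",
        private_ip_address="172.10.10.100",  # hypothetical, inside vmsubnet
    )],
    backend_address_pools=[BackendAddressPool(name="vm-pool")],
    probes=[Probe(name="sql-probe", protocol="Tcp", port=1433,
                  interval_in_seconds=15, number_of_probes=2)],
    load_balancing_rules=[LoadBalancingRule(
        name="sql-1433", protocol="Tcp", frontend_port=1433, backend_port=1433,
        frontend_ip_configuration=SubResource(
            id=f"{lb_id}/frontendIPConfigurations/fe-internal"),
        backend_address_pool=SubResource(id=f"{lb_id}/backendAddressPools/vm-pool"),
        probe=SubResource(id=f"{lb_id}/probes/sql-probe"),
    )],
)
client.load_balancers.begin_create_or_update(RG, LB, lb).result()
# The VM NICs still need to be added to "vm-pool" (via each NIC ip configuration's
# load_balancer_backend_address_pools) before traffic is distributed.
```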
There is another solution that is a LOT simpler than the one I was trying to implement, BUT it does not allow for an internal load balancer.
Azure Virtual Machine Scale Sets deploy as many VMs as I specify and will automatically fail over to another zone if one goes down. I have no need for the scaling aspect, so I disabled it and am only using the load-balancing aspect.
NB: this setup only exposes a PUBLIC IP, and you cannot assign an internal load balancer in conjunction with the default public load balancer.
Here's some info:
Quickstart: Create a virtual machine scale set in the Azure portal
Create a virtual machine scale set that uses Availability Zones
Networking for Azure virtual machine scale sets
Virtual Machine Scale Sets
The cost is exactly what you'd pay for the individual VMs, but the load balancing is included. So it's cheaper than the solution I described in my question. Bonus!

Azure outbound traffic is being blocked

I have set up a few VMs and a load balancer so that we can have one outgoing IP. Right now I am having issues connecting to the internet from inside my VMs. If I open Internet Explorer and try to access a website, it shows waiting for a reply and then "This page can't be displayed".
Each VM is connected to the same subnet.
The subnet has an NSG attached to it and each VM is part of the subnet.
There is then a load balancer to allow incoming RDP, using different ports for the different VMs.
I think I am missing the SNAT but I have no idea where to configure that. From what I have read, I am using scenario 2, "Public Load Balancer associated with a VM (no instance-level public IP address on the instance)": multiple VMs on a subnet and one load balancer sharing one IP address.
Where do i actually go to set up the SNAT? Or is there another issue i am missing here?
You could probably add load balancing rules for TCP ports 80 and 443 instead of inbound NAT rules; inbound NAT rules are used for port forwarding. Moreover, you do not need to add NAT rules for DNS. This works on my side.
A load balancer rule defines how traffic is distributed to the VMs. The rule defines the front-end IP configuration for incoming traffic, the back-end IP pool to receive the traffic, and the required source and destination ports.
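To illustrate, a sketch that appends such a load-balancing rule for TCP 80 to an existing basic public load balancer with the azure-mgmt-network Python SDK, assuming the LB already has a frontend, a backend pool and a health probe (all names are placeholders). With no instance-level public IPs, membership in a load-balanced backend pool is also what gives the VMs outbound SNAT through the LB frontend IP:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Read the existing LB and reuse its first frontend, backend pool and probe.
lb = client.load_balancers.get("my-rg", "public-lb")
rule = LoadBalancingRule(
    name="http-80", protocol="Tcp", frontend_port=80, backend_port=80,
    frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
    backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    probe=SubResource(id=lb.probes[0].id),
)
lb.load_balancing_rules = (lb.load_balancing_rules or []) + [rule]
client.load_balancers.begin_create_or_update("my-rg", "public-lb", lb).result()
```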

Azure load balancer with IPv6 and IPv4 frontend support

Currently my LB has an IPv4 frontend address and one backend pool with 5 VMs with IPv4 private addresses.
We would like to add IPv6 support to our Service Fabric cluster. I found this article: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview and I see a lot of "Currently not supported" texts.
The IPv6 address is assigned to the LB, but I cannot make rules:
Failed to save load balancer rule 'rulename'. Error: Frontend ipConfiguration '/subscriptions/...' referring to PublicIp with PublicIpAddressVersion 'IPv6' does not match with PrivateIpAddressVersion 'IPv4' referenced by backend ipConfiguration '/subscriptions/...' for the load balancer rule '/subscriptions/...'.
When I try to add a new backend pool, I get this message:
One basic SKU load balancer can only be associated with one virtual machine scale set at any point of time
Questions:
When can we expect the feature to allow multiple LBs in front of one VMSS?
Is it possible to add IPv6 frontend without adding IPv6 to the backend (NAT64?)?
Is it possible to add IPv6 addresses to an existing VM scale set without recreating it?
I'm not sure I understand you exactly; it seems some of these limitations are covered in that article.
For your questions:
I guess you mean mapping multiple LB frontends to one backend pool. If so, the same frontend protocol and port can be reused across multiple frontends, since each rule must produce a flow with a unique combination of destination IP address and destination port. You can get more details about multiple frontend configurations with the LB.
It is not possible. The IP version of the frontend IP address must match the IP version of the target network IP configuration.
NAT64 (translation of IPv6 to IPv4) is not supported.
It is not possible. A VM Scale Set is essentially a group of load-balanced VMs. There are a few differences between a VM and a VMSS; you can refer to this. Also, if a network interface has a private IPv6 address assigned to it, you must add (attach) it to the VM when you create the VM. Read the network interface constraints.
You may not upgrade existing VMs to use IPv6 addresses. You must deploy new VMs.
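If the backend NICs do get IPv6 configurations on newly deployed instances, the rules then have to be paired per IP family. A rough sketch of what the two rules could look like with the azure-mgmt-network Python SDK; every name and resource ID here is hypothetical:

```python
from azure.mgmt.network.models import LoadBalancingRule, SubResource

LB_ID = ("/subscriptions/<subscription-id>/resourceGroups/my-rg"
         "/providers/Microsoft.Network/loadBalancers/dual-stack-lb")

def rule(name, frontend_name, pool_name):
    # A rule may only pair a frontend and a backend pool of the same IP version.
    return LoadBalancingRule(
        name=name, protocol="Tcp", frontend_port=80, backend_port=80,
        frontend_ip_configuration=SubResource(
            id=f"{LB_ID}/frontendIPConfigurations/{frontend_name}"),
        backend_address_pool=SubResource(
            id=f"{LB_ID}/backendAddressPools/{pool_name}"),
    )

ipv4_rule = rule("web-v4", "fe-ipv4", "pool-ipv4")  # IPv4 frontend -> IPv4 configs
ipv6_rule = rule("web-v6", "fe-ipv6", "pool-ipv6")  # IPv6 frontend -> IPv6 configs
# Both rules are then set on the load balancer's load_balancing_rules and
# pushed with load_balancers.begin_create_or_update(...).
```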

Resources