Restrict inbound traffic to only come through Azure Load Balancer - azure

Please can someone advise how to restrict access on ports 80/443 to some Azure VMs, so that they can only be accessed via the public IP address associated with an Azure Load Balancer?
Our current setup has load balancing rules passing traffic from the public IP through on 80=>80 and 443=>443 to a backend pool of 2 VMs. We have a health probe set up on port 80. Session persistence is set to client IP and floating IP is disabled.
I thought the answer was to deny access (via a Network Security Group) to the Internet service tag on 80/443, then add a rule to allow the AzureLoadBalancer service tag on the same ports. But that didn't seem to have any effect. Having read up a little more on this, it seems the AzureLoadBalancer tag only allows the health probe through and does not cover inbound traffic forwarded by that load balancer.
I have also tried adding rules to allow the public IP address of the load balancer, but again with no effect.
I was wondering if I need to start looking into Azure Firewall and somehow restrict access to inbound traffic that comes through that?
The only way I can get the VMs to respond on those ports is to add rules allowing 80/443 from Any to Any.

After reading your question, my understanding is that you have a public load balancer and the backend VMs also have instance-level public IPs associated with them, so direct inbound access to the VMs is possible. But you would like to make sure that direct inbound access to the VMs is restricted to the load balancer only.
The simple solution is to disassociate the instance-level public IPs of the VMs; this makes the LB public IP the only point of contact for your VMs.
Keep in mind that the LB is not a proxy; it is just a layer 4 resource that forwards traffic. Your backend VMs will therefore still see the source IP of the clients, not the LB IP, so you will still need to allow that traffic at the NSG level using "Any" as the source.
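If it helps, here is a minimal az CLI sketch of those two steps (disassociate the instance-level public IP, then allow 80/443 with source Any); the resource group, NIC, ipconfig and NSG names are placeholders, not values from your environment.
# Disassociate the instance-level public IP from the VM's NIC
az network nic ip-config update \
  --resource-group myResourceGroup \
  --nic-name myVmNic \
  --name ipconfig1 \
  --remove publicIpAddress
# Since the LB preserves the client source IP, allow 80/443 from Any at the NSG
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmNsg \
  --name Allow-Web-In \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes '*' \
  --destination-port-ranges 80 443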
However, if your requirement is to enable outbound connectivity from the Azure VMs while avoiding SNAT exhaustion, I would advise creating a NAT gateway, to which you can assign multiple public IP addresses for SNAT, and removing the public IPs from the VMs. This setup makes sure that inbound access is provided only by the public load balancer and outbound access is provided by the NAT gateway, as described in the links below:
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#nat-and-vm-with-standard-public-load-balancer
https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal
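As a rough sketch of that setup (all resource names below are placeholders), a NAT gateway for the backend subnet could be created like this:
# Create a Standard public IP for the NAT gateway
az network public-ip create \
  --resource-group myResourceGroup \
  --name natPublicIp \
  --sku Standard
# Create the NAT gateway and attach the public IP
az network nat gateway create \
  --resource-group myResourceGroup \
  --name myNatGateway \
  --public-ip-addresses natPublicIp \
  --idle-timeout 4
# Associate the NAT gateway with the backend subnet
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name myBackendSubnet \
  --nat-gateway myNatGateway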
You could also configure port forwarding in Azure Load Balancer for RDP/SSH connections to individual instances.
Refer : https://learn.microsoft.com/en-us/azure/load-balancer/manage#-add-an-inbound-nat-rule
https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal
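For illustration, a sketch of an inbound NAT rule forwarding RDP to one backend VM might look like the following; the names and the front-end port are examples only.
# Forward front-end port 50001 on the LB to RDP (3389) on one backend VM
az network lb inbound-nat-rule create \
  --resource-group myResourceGroup \
  --lb-name myPublicLB \
  --name RDP-VM1 \
  --protocol Tcp \
  --frontend-port 50001 \
  --backend-port 3389 \
  --frontend-ip-name LoadBalancerFrontEnd
# Bind the NAT rule to the VM's NIC ip configuration
az network nic ip-config inbound-nat-rule add \
  --resource-group myResourceGroup \
  --nic-name myVm1Nic \
  --ip-config-name ipconfig1 \
  --lb-name myPublicLB \
  --inbound-nat-rule RDP-VM1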

Related

How to configure Azure ContainerApps with a Static Outbound IP?

In the documentation for Azure Container Apps, the Ports and IP Addresses section indicates that the outbound public IP is:
Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment isn't supported. Outbound IPs aren't guaranteed and may change over time.
The inbound IP for a ContainerApps Environment is fixed. Azure Container Instances (not ContainerApps) on the other hand seem to have documented capability to configure a static outbound IP via NAT Gateway.
Is there a way to configure a static outbound IP for Azure ContainerApps as well?
If not, which alternative deployment models for a long-running background service are recommended? The requirement is that an external service can count on a fixed outbound IP (or a very small range, not the entire datacenter IP ranges) for whitelisting.
** EDIT - It seems that a NAT gateway on the VNet is not yet supported on ACA - https://github.com/microsoft/azure-container-apps/issues/522
"Is there a way to configure a static outbound IP for Azure ContainerApps as well?"
No, we can't configure the outbound public IP for Container Apps; that is stated in the official documentation itself.
You could try routing the outbound traffic through Azure Firewall instead. Create an outbound application rule on the firewall using the command below:
az network firewall application-rule create
It will create an outbound rule on the firewall. This rule allows access from the Azure Container Instances subnet, so HTTP access to the target site then goes out through the firewall's egress IP address.
I have also found a blog on this; refer to that for a full walkthrough.
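In case it is useful, here is a minimal sketch of that command with example values filled in; the resource group, firewall name, collection, source subnet and target FQDN are all placeholders, and the command comes from the azure-firewall CLI extension.
# Allow outbound HTTP/HTTPS from the container subnet through Azure Firewall
az network firewall application-rule create \
  --resource-group myResourceGroup \
  --firewall-name myFirewall \
  --collection-name AllowWebOutbound \
  --name AllowTargetSite \
  --priority 200 \
  --action Allow \
  --protocols Http=80 Https=443 \
  --source-addresses 10.0.2.0/24 \
  --target-fqdns www.example.com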

How to allow outbound traffic on internal load balancer

I have several machines in a backend pool associated with an internal load balancer, but they currently do not have outbound access. The documentation seems to indicate that I should be able to create a public load balancer and attach the same backend pool to it so that those machines get outbound access. However, when I create a public load balancer I don't get the option of associating it with the existing pool, and when I try to create a new backend pool for the public LB I can't associate those machines with it. Neither machine has a public IP address. The dashboard only shows a view where all the interesting info is cut off. What am I missing?
Even VMs in the backend pool of an ILB should have a default outbound IP. If you don't have outbound access, have you checked the network security group rules to make sure outbound traffic is allowed?
I'm afraid you can't do this on the same LB for both inbound and outbound traffic.
If you happen to use the Basic SKU, VMs behind the LB have internet access, as outbound connections are NAT'ed by Azure. But all the VMs have to be in the same AZ. This wasn't a great fit for us, so we declined it.
If you use a Standard SKU, outbound connections to the internet are not possible. We learned this after many failed and painful attempts. More details here.
As discussed in the link above, attaching a public IP to each VM NIC isn't a good idea either.
What worked for us was to create another load balancer specifically for outbound connections, attach a public IP to that LB, and configure outbound rules. More details here.
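For anyone looking for the CLI equivalent, here is a rough sketch of that setup; every resource name is a placeholder and the port allocation is just an example.
# A second Standard LB dedicated to outbound, with an outbound rule
az network public-ip create \
  --resource-group myResourceGroup \
  --name outboundPip \
  --sku Standard
az network lb create \
  --resource-group myResourceGroup \
  --name myOutboundLB \
  --sku Standard \
  --public-ip-address outboundPip \
  --frontend-ip-name outboundFrontEnd \
  --backend-pool-name outboundPool
az network lb outbound-rule create \
  --resource-group myResourceGroup \
  --lb-name myOutboundLB \
  --name outboundRule \
  --frontend-ip-configs outboundFrontEnd \
  --address-pool outboundPool \
  --protocol All \
  --outbound-ports 10000
# Add each VM's NIC ip configuration to the outbound backend pool
az network nic ip-config address-pool add \
  --resource-group myResourceGroup \
  --nic-name myVmNic \
  --ip-config-name ipconfig1 \
  --lb-name myOutboundLB \
  --address-pool outboundPool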

How to whitelist source IPs on Azure VMs fronted by Azure Load Balancer

I have a public-facing, Standard SKU Azure Load Balancer that forwards incoming requests for a certain port to a virtual machine, using load balancing rules. This virtual machine has an NSG defined at the subnet level that allows incoming traffic on that port, with the source set to 'Internet'.
Presently this setup works, but I need to implement whitelisting - to allow only a certain set of IP addresses to connect to this virtual machine through the load balancer. However, if I remove the 'Internet' source type in my NSG rule, the VM is no longer accessible through the load balancer.
Has anyone else faced a similar use case, and what is the best way to set up IP whitelisting on VMs that are accessed through a load balancer? Thanks!
Edit: to provide more details
Screenshot of the NSG rules: these are the top-level NSG rules defined at the subnet.
We have a public load balancer that fronts the virtual machine where the above NSG rules are applied. This virtual machine doesn't have its own public IP and relies on the load balancer's public IP.
The public load balancer forwards all traffic on ports 8443 and 8543 to this virtual machine, without session persistence and with outbound and inbound using the same IP.
Below are the observations I have made so far:
Unless I specify the source for the NSG rule Port_8443 (in the table above) as 'Internet', this virtual machine is not accessible on this port via the load balancer's public IP.
When I retain the NSG rule Port_8543, which whitelists only specific IP addresses, this virtual machine is not accessible on this port via the load balancer's public IP - even when one of those whitelisted clients tries to connect.
I tried adding the NSG rule Custom_AllowAzureLoadBalancerInBound at a higher priority than Port_8543, but it still didn't open up this access.
I also tried adding the Azure Load Balancer VIP (168.63.129.16) to the Port_8543 rule, but that too didn't open up access to port 8543 on the load balancer's public IP.
I have played with the load balancing rule options too, but nothing seems to achieve what I am looking for, which is:
Goal 1: to open-up the virtual machine’s access on port 8443 and port 8543 to only the whitelisted client IPs, AND
Goal 2: allow whitelisted client IPs to be able to connect to these ports on this virtual machine, using the load balancer’s public IP
I am only able to achieve one of the above goals, but not both of them.
I have also tried the same whitelisting with a dedicated public IP assigned to the virtual machine; that too loses connectivity on the ports where I don't use the 'Internet' source tag.
Azure has default rules in each network security group that allow inbound traffic from the Azure Load Balancer resources.
If you want to restrict which clients can access your VM, you just need to add a new inbound rule with the public IP addresses of your clients as the Source, and specify the destination port ranges and protocol. You can check a client's public IPv4 address by opening the linked URL on that client's machine.
Just wanted to add a note for anyone else stumbling here:
If you are looking to whitelist an Azure VM (reachable publicly or privately) for a few specific client IPs, these are the steps to perform:
Create an NSG for the VM (or subnet), if one is not already available.
Add NSG rules to allow inbound traffic from the specific client IPs on the specific ports.
Add an NSG rule to deny inbound traffic from all other sources. (This is optional, but it helps ensure the security of your setup.)
Also, make sure you look at all the public IPs that your client machines will actually connect from. Especially while testing, whitelist the clients' public IPs and not the VPN gateway address ranges - which is what we did, and we ended up with a false negative in our whitelisting test.
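For reference, a minimal az CLI sketch of the steps above could look like this; the IP addresses, ports and resource names are examples only.
# Allow two whitelisted client IPs on 8443/8543
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name mySubnetNsg \
  --name Allow-Whitelisted-Clients \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.10 198.51.100.25 \
  --destination-port-ranges 8443 8543
# Optional explicit deny for everything else on those ports
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name mySubnetNsg \
  --name Deny-Other-Clients \
  --priority 300 \
  --direction Inbound \
  --access Deny \
  --protocol Tcp \
  --source-address-prefixes '*' \
  --destination-port-ranges 8443 8543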

Azure outbound traffic is being blocked

I have set up a few VMs and a load balancer so that we can have one outgoing IP. Right now I am having issues connecting to the internet from inside my VMs. If I open Internet Explorer and try to access a website, it shows waiting for a reply and then "This page can't be displayed".
Each VM is connected to the same subnet, and the subnet has an NSG attached to it.
There is then a load balancer to allow incoming RDP, but on different ports for the different VMs.
I think I am missing the SNAT, but I have no idea where to configure that. From what I have read, I am using scenario 2, "Public Load Balancer associated with a VM (no instance-level public IP address on the instance)": multiple VMs on a subnet and one load balancer to share one IP address.
Where do I actually go to set up SNAT? Or is there another issue I am missing here?
You could probably add load balancing rules for TCP ports 80 and 443 instead of inbound NAT rules; NAT rules are only used for port forwarding. Moreover, you do not need to add NAT rules for DNS. This works on my side.
A load balancer rule defines how traffic is distributed to the VMs. The rule defines the front-end IP configuration for incoming traffic, the back-end IP pool to receive the traffic, and the required source and destination ports.
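As a sketch of that suggestion (names are placeholders, and an existing front end, backend pool and health probe are assumed), a load-balancing rule for TCP 80 could be created like this, and repeated for 443:
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myPublicLB \
  --name HTTP-Rule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name LoadBalancerFrontEnd \
  --backend-pool-name myBackendPool \
  --probe-name myHealthProbe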

Azure Load Balancer + NSG Rules - Remove Access Directly

I've got a networking question for one of my customers servers in the cloud.
We are using just a standard 2012 R2 VM with a few endpoints set up through the NSG firewall, and we have a load balancer in front of the network with a few ports forwarded to the same virtual network.
The reason we are using a load balancer with port forwarding is that I'm finding countless records of bots trying to hit 3389 and 21 with attempts to break in.
So I have tried changing the source setting in the NSG rule to AzureLoadBalancer, in the hope that it will only allow traffic that has come via the load balancer on the external ports.
But for some reason this is not the case.
Is there a proper procedure for restricting traffic to a VM, via the NSG, to only what comes from a load balancer?
Any help with this is greatly appreciated.
Thanks
An NSG can't be associated with the load balancer; NSGs can only be associated with subnets or with individual VM instances within a subnet, so we can't use an NSG on the load balancer itself to block inbound IP addresses from the internet.
To protect the VM (with a public IP), we could deploy a Linux VM and use iptables as a firewall. You can also look for third-party firewall products in the Azure Marketplace.
Update:
To protect your VM, you can use an NSG to allow only a specific source IP address range to access it: NSG -> Add inbound security rule -> Advanced -> Source IP address range.
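As a rough CLI equivalent of that portal flow (the address range, names and port are placeholders), restricting RDP to a known admin range might look like this:
# Allow RDP only from a known admin address range
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmNsg \
  --name Allow-RDP-From-Office \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 3389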
Looking at the LB troubleshooting doc:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot
You have:
"Also, check if a Deny All network security group rule on the NIC of the VM or the subnet has a higher priority than the default rule that allows LB probes and traffic (network security groups must allow the Load Balancer IP of 168.63.129.16)."
If you create your NSG rule and only allow from 168.63.129.16 you should be set. The Azure load balancer will always come from that address no matter what your frontend IP is.
