Azure NSG not working as expected

I have an Azure external load balancer with a backend pool that contains one Kubernetes master server and a load balancing rule on port 443.
I added a rule with priority 500 to deny all traffic coming from the Internet on port 443 to the Kubernetes master server. That works fine.
I then added a rule with priority 400 to allow traffic coming from a specific public IP, because I only want to be able to connect from that IP. I expected to be able to connect, but I can't.
If I change the source of the allow rule from that specific IP to Internet, everything works fine.
What am I missing?
Kind regards

"I added a rule with priority 400 to accept traffic coming from a
certain public ip because I only want to be able to connect from that
ip. I expected that I should be able to connect but I can't.
If I change the rule that accepts traffic from the source ip to
internet then it works fine. What am I missing?"
Things that you might have missed:
Make sure you are not specifying a source port in the rule. The source port is taken from a pool of ephemeral ports on the client that initiates the connection, so it cannot be predicted.
Your deny rule overrides the default rule that allows traffic from the Azure Load Balancer. Load balancer health probes originate from the IP address 168.63.129.16 and must not be blocked, otherwise the probes cannot mark your instance as up. Review the probe source IP address documentation for details.
Create a separate rule to allow this IP, placed before the deny-all rule (priority lower than 500). Since this is a Microsoft-owned IP, you should have no issues allowing it.
That should fix your issue.
Diagnosis & RCA:
This is happening because the Azure Load Balancer probe IP is being blocked, so the backend server is marked as unhealthy by the load balancer.
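For reference, here is a minimal sketch of the fix described above, using the azure-mgmt-network Python SDK. The subscription ID, resource group, NSG name, and the priority value 450 are placeholders/assumptions; only the probe IP and the "priority below 500" requirement come from the answer.

# pip install azure-identity azure-mgmt-network
# Sketch: allow the load balancer health-probe source at a priority
# *lower than* the existing deny rule (priority 500), so the probes
# reach the Kubernetes master and it stays marked healthy.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # placeholder
NSG_NAME = "k8s-master-nsg"             # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

probe_allow = SecurityRule(
    name="Allow-AzureLB-Probe",
    priority=450,                        # must be lower than 500 (the deny rule)
    direction="Inbound",
    access="Allow",
    protocol="*",
    source_address_prefix="168.63.129.16",  # the AzureLoadBalancer service tag also covers this
    source_port_range="*",               # never pin the source port; clients use ephemeral ports
    destination_address_prefix="*",
    destination_port_range="443",
)

client.security_rules.begin_create_or_update(
    RESOURCE_GROUP, NSG_NAME, probe_allow.name, probe_allow
).result()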

Related

How to open port 22 on Azure Kubernetes Service for the loopback IP 127.0.0.1

How should we open port 22 on the AKS loopback IP?
We are trying to telnet to the loopback IP on port 22. This works fine on any Linux VM, but on AKS we get the error "Connection closed".
Note that AKS clusters have unrestricted outbound (egress) internet access by default. This level of network access allows the nodes and services you run to reach external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must remain accessible to keep cluster maintenance tasks healthy. The simplest way to secure outbound addresses is a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow the required ports and addresses.
Thus, you can configure an inbound rule and an outbound rule to allow traffic on port 22 (SSH) with the destination IP address 127.0.0.1 (the loopback address). To do so, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic#adding-firewall-rules
According to that link, you must deploy an Azure Firewall, create a user-defined route (UDR) with a next hop to the firewall, and associate it with the AKS subnet. If you configure Azure Firewall with the AKS cluster in this way, you will be able to control both ingress and egress traffic.
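As a rough illustration of the UDR part only (not an official sample; the subscription ID, resource names, region, and the firewall's private IP are placeholders), a route table whose default route hops to the firewall could be created like this with the azure-mgmt-network Python SDK; the table then still has to be associated with the AKS node subnet:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "aks-rg"               # placeholder
FIREWALL_PRIVATE_IP = "10.0.1.4"        # placeholder: Azure Firewall's private IP

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Route table with a single default route pointing at the firewall
# (next hop type "VirtualAppliance"), so all egress traverses the firewall.
route_table = RouteTable(
    location="westeurope",              # placeholder region
    routes=[
        Route(
            name="default-via-firewall",
            address_prefix="0.0.0.0/0",
            next_hop_type="VirtualAppliance",
            next_hop_ip_address=FIREWALL_PRIVATE_IP,
        )
    ],
)

client.route_tables.begin_create_or_update(
    RESOURCE_GROUP, "aks-egress-udr", route_table
).result()
# Finally, associate "aks-egress-udr" with the AKS node subnet
# (Subnet.route_table) so the UDR takes effect.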

Restrict inbound traffic to only come through Azure Load Balancer

Can someone please advise how to restrict access on ports 80/443 to some Azure VMs, so that they can only be accessed via the public IP address that is associated with an Azure Load Balancer?
Our current setup has load balancing rules passing traffic from the public IP through on 80=>80 and 443=>443, to a backend pool of 2 VMs. We have a health probe set up on port 80. Session persistence is set to client IP and floating IP is disabled.
I thought the answer was to deny access (via a network security group) to the Internet service tag on 80/443, then add a rule to allow the AzureLoadBalancer service tag on the same ports. But that didn't seem to have any effect. Having read up a little more on this, it seems the AzureLoadBalancer tag only allows the health probes in, not inbound traffic forwarded by the load balancer.
I have also tried adding rules to allow the public IP address of the load balancer, but again no effect.
I was wondering if I need to start looking into Azure Firewall, and somehow restrict access to inbound traffic that comes through that?
The only way I can get the VMs to respond on those ports is to add rules allowing 80/443 from any to any.
After reading your question, my understanding is that you have a public load balancer, that the backend VMs also have instance-level public IPs associated with them (hence direct inbound access to the VMs is possible), and that you would like to restrict inbound access to the VMs to the load balancer only.
The simple way to achieve this is to disassociate the instance-level public IPs from the VMs; this makes the load balancer's public IP the only point of contact for your VMs.
Keep in mind that the load balancer is not a proxy; it is just a layer 4 resource that forwards traffic. Your backend VMs will therefore still see the source IP of the clients, not the load balancer's IP, so at the NSG level you still need to allow the traffic using "Any" as the source.
However, if your requirement is to enable outbound connectivity from the Azure VMs while avoiding SNAT exhaustion, I would advise you to create a NAT Gateway, assign one or more public IP addresses to it for SNAT, and remove the public IPs from the VMs. This setup ensures that inbound access is provided only by the public load balancer and outbound access is provided only by the NAT gateway, as sketched after the links below:
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#nat-and-vm-with-standard-public-load-balancer
https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal
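For what it's worth, here is a minimal sketch of creating such a NAT gateway with the azure-mgmt-network Python SDK. All names, the region, and the public IP resource ID are placeholders, and the gateway still has to be associated with the backend subnet afterwards:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NatGateway, NatGatewaySku, SubResource

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # placeholder
PUBLIC_IP_ID = (                        # placeholder: an existing Standard public IP
    "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/"
    "Microsoft.Network/publicIPAddresses/nat-pip"
)

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

nat_gateway = NatGateway(
    location="westeurope",                        # placeholder region
    sku=NatGatewaySku(name="Standard"),
    public_ip_addresses=[SubResource(id=PUBLIC_IP_ID)],
    idle_timeout_in_minutes=4,
)

client.nat_gateways.begin_create_or_update(
    RESOURCE_GROUP, "outbound-natgw", nat_gateway
).result()
# Then set Subnet.nat_gateway on the backend subnet so outbound SNAT
# flows through the gateway instead of the load balancer.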
You could also configure port forwarding (inbound NAT rules) in Azure Load Balancer for RDP/SSH connections to individual instances; see the sketch after the links below.
Refer : https://learn.microsoft.com/en-us/azure/load-balancer/manage#-add-an-inbound-nat-rule
https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal
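And a sketch of the port-forwarding option via the same SDK. The names, ports, and the assumption that the first frontend IP configuration is used are placeholders, and the rule still has to be referenced from the target VM NIC's IP configuration to take effect:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import InboundNatRule, SubResource

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-rg"                # placeholder
LB_NAME = "my-public-lb"                # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)

# Forward TCP 50001 on the LB's public IP to port 22 on one backend VM.
nat_rule = InboundNatRule(
    protocol="Tcp",
    frontend_port=50001,
    backend_port=22,
    frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
)

client.inbound_nat_rules.begin_create_or_update(
    RESOURCE_GROUP, LB_NAME, "ssh-to-vm1", nat_rule
).result()
# To finish, add this NAT rule to the target NIC's ip_configuration
# (load_balancer_inbound_nat_rules) so traffic actually reaches that VM.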

How to whitelist source IPs on Azure VMs fronted by Azure Load Balancer

I have a public-facing, Standard SKU Azure Load Balancer that forwards incoming requests for a certain port to a virtual machine, using load balancing rules. This virtual machine has an NSG defined at the subnet level that allows incoming traffic on that port, with the source set to 'Internet'.
Presently this setup works, but I need to implement whitelisting: only a certain set of IP addresses should be able to connect to this virtual machine through the load balancer. However, if I remove the 'Internet' source in my NSG rule, the VM is no longer accessible through the load balancer.
Has anyone else faced a similar use case, and what is the best way to set up IP whitelisting on VMs that are accessed through a load balancer? Thanks!
Edit: to provide more details
Screenshot of NSGs
These are the top-level NSG rules defined at the subnet.
We have a public load balancer that fronts the virtual machine to which the above NSG rules apply. This virtual machine doesn't have a public IP of its own and relies on the load balancer's public IP.
The public load balancer forwards all traffic on ports 8443 and 8543 to this virtual machine, without session persistence and with outbound and inbound using the same IP.
Below are the observations I have made so far:
Unless I specify the source for the NSG rule Port_8443 (in the table above) as 'Internet', this virtual machine is not accessible on that port via the load balancer's public IP.
When I keep the NSG rule Port_8543, which whitelists only specific IP addresses, this virtual machine is not accessible on that port via the load balancer's public IP, even when one of the whitelisted clients tries to connect.
I tried adding the NSG rule Custom_AllowAzureLoadBalancerInBound at a higher priority than Port_8543, but it still didn't open up access.
I also tried adding the Azure Load Balancer VIP (168.63.129.16) to the Port_8543 NSG rule, but that too didn't open up access to port 8543 on the load balancer's public IP.
I have played with the load balancing rule options too, but nothing seems to achieve what I am looking for, which is:
Goal 1: open up the virtual machine's access on ports 8443 and 8543 to only the whitelisted client IPs, AND
Goal 2: allow the whitelisted client IPs to connect to these ports on this virtual machine using the load balancer's public IP.
I am only able to achieve one of these goals, but not both.
I have also tried the same whitelisting with a dedicated public IP assigned to the virtual machine; that too loses connectivity on any port where I don't use the 'Internet' source tag.
Azure has default rules in each network security group that allow inbound traffic from the Azure Load Balancer resources (the health probes).
If you want to restrict which clients can access your VMs, you just need to add a new inbound rule with the public IP addresses of your clients as the Source, and specify the destination port ranges and protocol in that rule. You can check a client's public IPv4 address by opening an IP-lookup page on that client's machine.
Just wanted to add a note for anyone else stumbling on this:
If you are looking to whitelist an Azure VM (reachable publicly or privately) for a few specific client IPs, these are the steps you must perform (a minimal sketch follows after this answer):
Create an NSG for the VM (or subnet), if one is not already available.
Add NSG rules to allow inbound traffic from the specific client IPs on the specific ports.
Add an NSG rule to deny inbound traffic from all other sources. (This is optional, but helps ensure the security of your setup.)
Also, make sure you look at all the public IPs your client machines will connect from. Especially while testing, use the clients' public IPs and not the VPN gateway address ranges; we used the latter and ended up with a false negative on our whitelisting test.
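A minimal sketch of those steps with the azure-mgmt-network Python SDK. The NSG name, ports, priorities, and the whitelist are placeholders (203.0.113.x is documentation address space), and the subscription/resource group are assumptions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "my-rg"                   # placeholder
NSG_NAME = "vm-subnet-nsg"                 # placeholder
WHITELIST = ["203.0.113.10/32", "203.0.113.20/32"]  # placeholder client IPs

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

rules = [
    # Step 2: allow the whitelisted client IPs on the published ports.
    SecurityRule(
        name="Allow-Whitelisted-Clients",
        priority=100,
        direction="Inbound",
        access="Allow",
        protocol="Tcp",
        source_address_prefixes=WHITELIST,
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_ranges=["8443", "8543"],
    ),
    # Step 3 (optional): deny everything else on those ports.
    SecurityRule(
        name="Deny-Other-Inbound",
        priority=200,
        direction="Inbound",
        access="Deny",
        protocol="Tcp",
        source_address_prefix="*",
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_ranges=["8443", "8543"],
    ),
]

for rule in rules:
    client.security_rules.begin_create_or_update(
        RESOURCE_GROUP, NSG_NAME, rule.name, rule
    ).result()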

Azure Advisory: Web ports should be restricted on NSG associated to your VM

What can I do to fix this advisory message?
The VM this relates to is a web server which sits behind an Azure Load Balancer. The NSG rule that is causing this (the only non-default rule) is:
Type: Allow
Source: Service Tag - Internet, source port range = *
Destination: ASG for this VM, destination ports 80 and 443, protocol TCP
If I remove this rule the message disappears (after some hours), but then internet web traffic can no longer reach the VM.
Should I ignore the Azure Advisory message, or am I overlooking something? I was looking forward to getting this nice and tidy AND having a 'satisfied' advisory state.
You can run your web server on the VMs on ports other than 80 and 443. The load balancer can translate between ports 80/443 on your public IP and whatever port you choose inside the VMs (see the sketch after this answer). Since Load Balancer is a fairly simple service, this is probably your only option.
As an alternative, you could try Application Gateway instead of your load balancer. It acts as the reverse proxy you need. Be aware that it is a bit more costly than the load balancer, but it also has a lot more features.
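If you go the port-translation route, a rough sketch with the azure-mgmt-network Python SDK looks like this. Load balancing rules live inside the load balancer resource, so the whole LB is fetched and updated; the names and the backend ports 8080/8443 are placeholders, and it assumes the existing rules forward 80 and 443:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "web-rg"               # placeholder
LB_NAME = "web-lb"                      # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)

# Keep 80/443 on the public frontend, but send the traffic to
# non-standard ports inside the VMs (8080/8443 here are arbitrary).
translation = {80: 8080, 443: 8443}
for rule in lb.load_balancing_rules:
    if rule.frontend_port in translation:
        rule.backend_port = translation[rule.frontend_port]

client.load_balancers.begin_create_or_update(RESOURCE_GROUP, LB_NAME, lb).result()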
I see that your VM is behind an Azure Load Balancer, so the network flow is: internet clients -> load balancer public IP -> backend VM.
In that case, your web server should not be public to the internet; it should only be accessible through the load balancer. You can set the source service tag to AzureLoadBalancer. For more information about service tags, check the official documentation: Service tags.
Update:
On further research, the AzureLoadBalancer service tag in an NSG rule is only used to allow Azure health probes; there is already a default rule allowing the load balancer to probe the endpoints.
So, the suggestions are:
Do not assign public IPs to the individual instances. That way, your backends can only be reached via private IPs; in other words, clients can only reach your web service via the load balancer.
Add NSG inbound rules for ports 80 and 443 for the web service, and port 22 or 3389 for remote management.
In this case, your servers should be secure. If there are still warnings, I think you can ignore them; the Azure system may simply see that you have opened ports 80 and 443, but your instances do not have public IPs.
Hope the above is helpful to you.

Azure outbound traffic is being blocked

I have set up a few VMs and a load balancer so that we can have one outgoing IP. Right now I am having issues connecting to the internet from inside my VMs: if I open Internet Explorer and try to access a website, it shows "waiting for reply" and then "This page can't be displayed".
Each VM is connected to the same subnet.
The subnet has an NSG attached to it and each VM is part of the subnet.
Screenshot: NSG attached to the subnet.
There is then a load balancer to allow incoming RDP, but on different ports for the different VMs.
I think I am missing the SNAT configuration, but I have no idea where to set that up. From what I have read, I am in scenario 2, "Public Load Balancer associated with a VM (no instance-level public IP address on the instance)": multiple VMs on a subnet and one load balancer sharing one IP address.
Where do I actually go to set up the SNAT? Or is there another issue I am missing here?
You could probably add load balancing rules for TCP port 80 or 443 instead of inbound NAT rules; NAT rules are only used for port forwarding. Moreover, you do not need to add NAT rules for DNS. This works on my side.
A load balancing rule defines how traffic is distributed to the VMs: it specifies the frontend IP configuration for incoming traffic, the backend pool to receive the traffic, and the required source and destination ports.
