Configuring Azure load balancer and NAT rules

I'm trying to build a simple two-tier WordPress environment on CentOS 7.2 in Azure.
I've defined a virtual network, have connected it to my home-lab via IPsec VPN, and I've defined several subnets in Azure (for Web tier, SQL tier, and utility tier role segregation using Network Security Groups).
I have two web-tier VMs, both members of the same Availability Set, and both on the web-tier subnet. They have outbound internet access, I can SSH to them from my home-lab, and they seem fine operationally to me - httpd is listening on 80/tcp, and I can hit the web pages from my home-lab network by visiting each web server directly on its 192.168.x address.
I should mention my web servers DO NOT have public IPs assigned, but I can't see this being an issue; they're intended to sit behind the load balancer.
So, I've created a Load Balancer, and:
assigned a public IP to the LB
added a backend pool (selected my availability set, and chose my two web servers)
added a probe (http probing the two web servers)
added a load balancer rule
Notice I did NOT add an inbound NAT rule. I can't figure out what that's for, or if I need it.
On my web tier, I tcpdump port 80 and see the probes. In httpd logs, I see 200 success messages for the probes. I go to a web browser, hit the external VIP I assigned to the LB, and nothing. It just times out. I cannot connect to the LB VIP.
What am I missing? What are the NAT rules about?
Any help would be appreciated. All I can find online are examples doing this in PowerShell etc., and I'm using the Azure web interface.
Thanks!
Edit: Found the issue - the NSG needed to allow not just the AzureLoadBalancer service tag, but "Internet", to hit port 80/tcp. Should have thought of that sooner.
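For anyone doing the same from the CLI instead of the portal, a minimal sketch of that NSG rule (the resource group and NSG names here are placeholders) could look like this:

    # Allow inbound HTTP from the internet; the default NSG rules only cover AzureLoadBalancer probe traffic
    az network nsg rule create -g my-rg --nsg-name web-nsg -n allow-http-inbound \
      --priority 100 --direction Inbound --access Allow --protocol Tcp \
      --source-address-prefixes Internet --destination-port-ranges 80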


Related

Azure Advisory: Web ports should be restricted on NSG associated to your VM

What can I do to fix this Advisory message?
The VM this relates to is a web server, which sits behind an Azure Load Balancer. The NSG rule that is causing this (the only non-default rule) is:
Type: Allow
Source: Service Tag - Internet, source port range = *
Destination: ASG for this VM, destination ports 80, 443, protocol TCP
If I remove this rule, the message disappears (after some hours), but then internet web traffic can no longer reach the VM.
Should I ignore the Azure Advisory message? Or am I overlooking something? I was looking forward to getting this nice and tidy AND having a 'satisfied' advisory state.
You can run your webserver on the VMs on different ports than 80 and 443. The load balancer can translate between port 80/443 on your public IP and whatever port you choose inside the VMs. Since Load Balancers are a fairly simple service, this is probably your only option.
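For illustration, a load-balancing rule that translates the public port to a different backend port might look roughly like this in the Azure CLI (all names here are placeholders, not taken from the question):

    # Frontend port 80 on the LB's public IP -> port 8080 inside the VMs
    az network lb rule create -g my-rg --lb-name my-lb -n http \
      --protocol Tcp --frontend-port 80 --backend-port 8080 \
      --backend-pool-name my-backend-pool --probe-name my-http-probe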
As an alternative, you could try Application Gateway instead of your load balancer. It should act as the reverse proxy you need. Be aware that it is a bit more costly than the load balancer, but it also has a lot more features.
I see that your VM is behind an Azure Load Balancer, so the network flow might be similar to this: internet -> load balancer -> your VM.
Then, your web server should not be public to the internet. It should only be accessible from the load balancer. You can set the source service tag to AzureLoadBalancer. For more information about service tags, you may check the official documentation: Service tags
Update:
After further research: the AzureLoadBalancer service tag in an NSG rule is used to allow Azure health probes. In fact, there is already a default rule that allows the load balancer to probe the endpoints.
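If you want to verify this yourself, the default rules (including AllowAzureLoadBalancerInBound) can be listed with the Azure CLI; the NSG and resource group names below are placeholders:

    # Lists the NSG's rules together with the built-in default rules
    az network nsg rule list -g my-rg --nsg-name web-nsg --include-default -o table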
So, the suggestions are:
You should not assign public IPs to the individual instances. That way, your backends can only be reached on their private IPs; in other words, clients can only access your web service via the load balancer.
Add NSG inbound rules for ports 80 and 443 for the web service, and port 22 or 3389 for remote management (sketched below).
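A minimal Azure CLI sketch of those inbound rules, assuming a hypothetical NSG named web-nsg and an example admin range of 203.0.113.0/24:

    # Web traffic from the internet to the backend pool
    az network nsg rule create -g my-rg --nsg-name web-nsg -n allow-web \
      --priority 200 --direction Inbound --access Allow --protocol Tcp \
      --source-address-prefixes Internet --destination-port-ranges 80 443

    # Remote management (SSH), restricted to a known admin range
    az network nsg rule create -g my-rg --nsg-name web-nsg -n allow-ssh \
      --priority 210 --direction Inbound --access Allow --protocol Tcp \
      --source-address-prefixes 203.0.113.0/24 --destination-port-ranges 22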
In this case, your servers should be secure. If there are still any warnings, I think you may ignore them; Azure may just see that you opened ports 80 and 443 to the public, even though your instances do not have public IPs.
Hope the above is helpful to you.

Azure Internal ASE with Firewall

I am running a Linux container as a web app in an internal ASE.
The ASE is deployed to a VNet (secondary VNet) which is peered to another VNet (primary VNet) where an Azure Firewall exists.
1. I have enabled service endpoints for SQL, Storage, and Event Hub on the ASE subnet.
2. From the Azure Firewall UI > Rules > Application rule collection, I set the App Service Environment FQDN tag and the Windows Update tag.
3. From the Azure Firewall UI > Rules > Network rule collection, I set a rule for port 123, and created another rule the same way for port 12000 to help triage any system issues.
4. I created a route table with the management addresses from the App Service Environment management addresses document with a next hop of Internet, and set 0.0.0.0/0 directed to the network appliance (the firewall's internal IP address); a CLI sketch of this step is included below.
5. I created application rules to allow HTTP/HTTPS traffic (note: the address is the IP of the ILB of the internal ASE, since I can't find an IP for the web app itself).
I don't seem to be able to reach the web app. Any guidance will be appreciated. Is the problem that I created an internal ASE?
I am trying to isolate the ASE and control external access to it via a firewall.
MS Docs I referenced: https://learn.microsoft.com/en-us/azure/app-service/environment/firewall-integration
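For reference, a rough Azure CLI sketch of the route-table step above; the resource names, the firewall IP 10.0.1.4, and the 203.0.113.10 management address are placeholders (the real management ranges come from the App Service Environment management addresses document):

    # Route table for the ASE subnet
    az network route-table create -g my-rg -n ase-rt

    # Replies to ASE management traffic go straight out to the internet, not via the firewall
    az network route-table route create -g my-rg --route-table-name ase-rt -n ase-mgmt \
      --address-prefix 203.0.113.10/32 --next-hop-type Internet

    # Everything else is forced through the firewall's internal IP
    az network route-table route create -g my-rg --route-table-name ase-rt -n default-via-fw \
      --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

    # Associate the route table with the ASE subnet
    az network vnet subnet update -g my-rg --vnet-name ase-vnet -n ase-subnet --route-table ase-rt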
Yes, I think the problem is with the internal ASE. Also, the document you referenced is intended to lock down all egress from the ASE VNet. Inbound management traffic for an ASE cannot be sent through a firewall device.
There are a number of inbound dependencies that an ASE has. The inbound management traffic cannot be sent through a firewall device. The source addresses for this traffic are known and are published in the App Service Environment management addresses document. You can create Network Security Group rules with that information to secure inbound traffic.
In addition, since it's an internal ASE, it is deployed in your VNet behind an ILB. You cannot directly access its backend web app over the internet; you need at least a public-facing IP address (an external VIP) or another public-facing service (a public Azure Application Gateway) in front of it.

Azure gateway with a virtual network

I've got multiple questions on the setup of a gateway and VMs, so here is what I actually have.
I've got an Application Gateway and two Ubuntu VMs, everything hosted on Azure. They are all on the same virtual network. Both VMs have only a private IP (10.1.0.4 and 10.1.0.5), and the gateway has a private IP (10.1.1.4) and a public IP. Because only the gateway has a public IP, I guess that everything has to go through it, and this is what I want.
The goals I try to achieve :
Load-balance port 1680, redirected to port 1680 on the VMs.
Redirect SSH so I can connect to a specific VM, because at the moment they have no public IP. Is it possible to do this with a path-based rule? Like www.example.com/VM1 to connect by SSH to the first VM? If not, what can be used to differentiate the SSH connections of VM1 and VM2?
Redirect port 80 of the gateway to port 8080 of a specific VM. As in my previous example, www.example.com/adminPanelVM1 to connect to the first VM on port 80 (redirected to port 8080 on the VM).
I already managed to create the redirection of port 1680 on the gateway with an HTTP parameter, a listener and a rule.
Azure Application Gateway
The Azure Application Gateway operates at layer 7 of the OSI model, on the HTTP/HTTPS/WebSocket protocols; because of that, it cannot route any other protocol (like SSH).
You have a few options, though.
You can use a Network Security Group, or NSG, for access control to your virtual machines. In the NSG you define where the traffic can come from that is allowed access to the VMs.
An NSG behaves like an access control list, filtering traffic based on source and destination information and evaluating rules in order of priority. See this page for more information about NSGs.
Another option is to use a load balancer.
Azure Load Balancer
If you need to do port mapping, like you describe in your question, then a simple load balancer might be a better solution for you. An Azure Load Balancer works at a lower level in the OSI model, namely layer 4 (the transport layer), handling TCP/UDP traffic.
So, if you are using a load balancer, then you can set up NAT rules to forward your traffic to specific machines, in other words, if you want to do:
LB port 1234 redirects to VM1 port 22 and
LB port 4312 redirects to VM2 port 22
you can do that using PowerShell as described in the Creating a public load balancer in Resource Manager by using PowerShell article.
There are quite a few steps but it walks you through the whole process of creating NAT rules, NICs and associated virtual machines.
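As a rough Azure CLI illustration of those two NAT rules (the load balancer, NIC and ipconfig names are placeholders):

    # LB port 1234 -> VM1 port 22, LB port 4312 -> VM2 port 22
    az network lb inbound-nat-rule create -g my-rg --lb-name my-lb -n ssh-vm1 \
      --protocol Tcp --frontend-port 1234 --backend-port 22
    az network lb inbound-nat-rule create -g my-rg --lb-name my-lb -n ssh-vm2 \
      --protocol Tcp --frontend-port 4312 --backend-port 22

    # Bind each NAT rule to the matching VM's NIC ip-configuration
    az network nic ip-config inbound-nat-rule add -g my-rg --nic-name vm1-nic \
      --ip-config-name ipconfig1 --lb-name my-lb --inbound-nat-rule ssh-vm1
    az network nic ip-config inbound-nat-rule add -g my-rg --nic-name vm2-nic \
      --ip-config-name ipconfig1 --lb-name my-lb --inbound-nat-rule ssh-vm2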
Azure Application Gateway vs Azure Load Balancer?
These are two distinctly different services trying to solve different problems, although those problems might look similar :)
The primary uses of an Application Gateway are:
SSL termination
cookie-based session affinity
round robin for load balancing traffic
Whereas the Azure Load Balancer service works at the TCP/UDP level and supports e.g. port mapping.
Cost-wise, the load balancer service is free while the application gateway is billed per hour.
There are many great articles on when to pick which service. See for example these links for more details:
When to use Azure Load Balancer or Application Gateway
Frequently asked questions for Application Gateway

Azure Load balancing to Multiple Sites with Disaster Recovery

I am trying to configure applications on two different Azure sites, each with its own local load balancing. I can use Traffic Manager to distribute the traffic and use weighted routing to force everything to my primary site.
But I want this to occur automatically, where I can map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running to decide where to forward the traffic. That way I don't have to manually reconfigure Traffic Manager in case of a disaster.
Note: the services are hosted on IIS on IaaS VMs. ILB1 and ILB2 are the respective load balancers for Site1 and Site2.
Any help is appreciated!
Thanks
As far as I know, we can't add internal load balancers as Traffic Manager endpoints.
But I want this to occur automatically, where I can map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running to decide where to forward the traffic.
By default, we can set up multiple sites around the world with Traffic Manager; Traffic Manager will probe the health of all sites and forward network traffic to the right site.
We can use a Traffic Manager profile to manage network traffic; Traffic Manager profiles use traffic-routing methods to control the distribution of traffic to your cloud services or website endpoints.
For example, we create website 1 on site 1 (the primary site) and website 2 on site 2. If we use the weighted method, network traffic will go to site 1. When site 1 goes down, Traffic Manager will detect that and route network traffic to site 2.
Traffic Manager works as a DNS-level load balancer; it will route traffic to an available site by default.
Traffic Manager probe settings can be modified via the Azure portal.
By the way, if you want to use Traffic Manager, you can add a public IP address as a Traffic Manager endpoint.
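A minimal Azure CLI sketch of a weighted profile with two Azure endpoints; the DNS name, resource IDs and weights are placeholders (note that a public IP used this way needs a DNS name label assigned):

    # Weighted Traffic Manager profile with an HTTP health probe
    az network traffic-manager profile create -g my-rg -n tm-profile \
      --routing-method Weighted --unique-dns-name myapp-tm \
      --protocol HTTP --port 80 --path /

    # Primary site gets the higher weight, secondary the lower one
    az network traffic-manager endpoint create -g my-rg --profile-name tm-profile \
      -n site1 --type azureEndpoints --weight 100 \
      --target-resource-id /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/publicIPAddresses/site1-pip
    az network traffic-manager endpoint create -g my-rg --profile-name tm-profile \
      -n site2 --type azureEndpoints --weight 1 \
      --target-resource-id /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/publicIPAddresses/site2-pip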
Update:
As a workaround, we can deploy an S2S VPN between the two locations and use HAProxy as the load balancer, then add the two VMs to a public load balancer.
We can use HAProxy to set a primary website; for more information about HAProxy, please refer to this link.

Can't get Azure Virtual Machine to serve websites

I've just set up a Windows Azure VM and installed IIS on it.
When I remote desktop onto the box I can see the default IIS website fine but I can't get this to serve on the web from the IP address of the box.
I've opened up port 80 on windows firewall and also added an endpoint for port 80.
I've tried to access it with the firewall completely turned off also but to no avail...
I can't work out if there is anything else I need to do to get this working.
Add endpoints for port 80 (http) and port 443 (https) to the VM in the Azure portal (tip: this can be automated with PowerShell or the Azure CLI).
Remote desktop to the machine. Open the Windows Firewall control panel and allow traffic to port 80 (http) and port 443 (https), or just turn it off ... the firewall is ON by default (tip: this can also be scripted through the VM agent / PowerShell).
Go to the Azure portal and find the cloudapp.net subdomain of the cloud service your VM is running under. Try accessing the site with that domain. If that doesn't work, try browsing to http://localhost on the server (remote desktop) to make sure IIS works and troubleshoot from there.
Modify the DNS records of your custom domain to use a CNAME to the .cloudapp.net domain. If you need A records make sure to use the public IP of the cloud service (just ping the .cloudapp.net domain to find it or look in the Azure portal).
You might want to look into Azure Websites or Azure Cloud Services (web roles). Those are a lot easier to manage and a lot cheaper. They still offer most of the functionality.
What fixed the issue for me was to go into the Azure Portal, browse to 'Network Security Groups', select the VM and then create an inbound rule to allow traffic to port 80.
Note: Also ensure that the inbound rule to port 80 is added and enabled on the actual VM.
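On a Resource Manager VM, a quick Azure CLI sketch of that same NSG fix (the resource group and VM names are placeholders):

    # Creates an NSG rule on the VM's NSG that allows inbound traffic to port 80
    az vm open-port -g my-rg -n my-vm --port 80 --priority 900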
Well, I deleted the existing VM and Cloud service and started again - all worked fine out of the box this time.
How annoying! The only thing I did notice was that before, my cloud service had the same name as my VM - this time they had different names, so that might have been what was causing the issue.
Cheers
For newer VMs and pre-configured setups (2015+), it's possible your setup is using an Azure resource called "Public IP". If so, you can set a custom DNS name label on it, under "Configuration". Note that this name will include the region used when creating the VM (e.g. my-site.brazilsouth.cloudapp.azure.com).
It's good to remember that for testing purposes, it still suffices to use the public IP address that is randomly assigned to you.
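A small Azure CLI sketch of setting that DNS name label on an existing Public IP resource (names are placeholders):

    # Gives the VM e.g. my-site.<region>.cloudapp.azure.com
    az network public-ip update -g my-rg -n my-vm-pip --dns-name my-site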
The VMs are actually accessed via a Cloud Service (well, they are for me). Azure created a Cloud Service automatically to be the scaling engine/load balancer in front of the VM. I have to connect to the web site via that cloud service, not the VM directly.
It's possible you were using the internal IP rather than the external IP.
The sites have to use the internal IP address in the bindings section of IIS. However, in your DNS you will need to use the external IP. This is presumably because the 'internal IP' is just a virtual one that Azure uses to map traffic from the external network to the VMs inside Azure.
You should find both the internal and external IPs are visible on the VM's desktop.
Switch off TLS 1.3 in the Registry Editor.
This is what worked for me as of writing this in Mar 2021.
