Azure ACL on virtual machines in a load-balanced set

I am trying to restrict access to the default Elasticsearch port (9200) on two Ubuntu VMs I created in Azure, and I can't seem to get it working.
My virtual machines are part of the same cloud service, and their endpoints are set up to use the same load-balanced set, so any request to the cloud service on port 9200 is load balanced between my 2 VMs. That all seems to be working as expected.
I want to set these up so that only my Azure Websites can access them directly, so I figured I need to manage the ACL for the VMs. To test it out, I tried setting the ACL to deny my specific IP address for the port 9200 endpoint on both servers, but when I do that I can still access them over that port.
I tested denying my IP address on the SSH endpoint and was successfully blocked from getting onto the servers over SSH. So my only guess is that the load-balanced set for these endpoints is causing the ACL to not work properly.
Is there a better way to handle this, maybe using Traffic Manager instead of the load-balanced set for the VMs on the same cloud service? My backup plan would be to use iptables on each VM to set the restrictions, but ideally I'd be able to handle this in the Azure portal if possible.
Thanks.

Related

Unable to access Flask server hosted on Azure VM

I have a Flask server hosted on my Azure VM.
from flask import Flask
app = Flask(__name__)
if __name__ == '__main__':
    app.run(debug=True, host="127.0.0.1", port=4400)
On the VM, I can access the server via the address 127.0.0.1:4400.
Now I want to be able to access this server from outside the VM, i.e. from my local computer.
I have already added an inbound security rule for port 4400 in the network security group.
I have also added the same inbound rule to the VM's firewall for port 4400.
Still, I am unable to access the Flask server via publicIP:4400 (where publicIP is the public IP of my VM as displayed in the Azure portal).
What could be the issue?
For your issue, there are two possible reasons.
You must listen on 0.0.0.0 so that the app can be accessed from the Internet; 127.0.0.1 is just the loopback IP for testing on the localhost itself. So 4c74356b41 is right about this.
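For example, a minimal sketch of the same run configuration with the host changed to 0.0.0.0 (the placeholder route is only there so there is something to hit when testing):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Placeholder route for testing.
    return 'hello from the VM'

if __name__ == '__main__':
    # 0.0.0.0 binds to all interfaces, so the app is reachable from outside the VM.
    app.run(debug=True, host="0.0.0.0", port=4400)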
If you still cannot access the app from the Internet after changing the IP to 0.0.0.0, then it must be a rules issue. Check your VM's networking to see whether the same NSG is applied to both the subnet and the NIC; if not, add a rule allowing port 4400 in both NSGs. Then check whether the public IP is associated directly with your VM, or whether it is associated with a load balancer and your VM is just in the load balancer's backend pool. If it's a load balancer, you still need to add a load-balancing rule to allow port 4400.
Check both of these possible reasons.
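If you would rather script the NSG change than use the portal, a rough sketch with the azure-mgmt-network Python SDK could look like the following. The subscription, resource group, NSG name and priority are placeholders, and the exact method names can differ slightly between SDK versions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- substitute your own subscription, resource group and NSG names.
subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Allow inbound TCP 4400 from anywhere; repeat for the second NSG if the
# subnet and the NIC use different NSGs.
rule = client.security_rules.begin_create_or_update(
    resource_group_name="my-rg",
    network_security_group_name="my-nsg",
    security_rule_name="allow-flask-4400",
    security_rule_parameters={
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 310,
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "4400",
    },
).result()
print(rule.name, rule.provisioning_state)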

Azure gateway with a virtual network

I've got multiple questions on the setup of a gateway and VMs, so here is what I actually have.
I've got an Application Gateway and two Ubuntu VMs, everything hosted on Azure. They are all on the same virtual network. Both VMs have only a private IP (10.1.0.4 and 10.1.0.5), and the gateway has a private IP (10.1.1.4) and a public IP. Because only the gateway has a public IP, I assume everything has to go through it, and this is what I want.
The goals I am trying to achieve:
Make a load balancer on port 1680, redirected to port 1680.
Redirect SSH so that I can connect specifically to one of the VMs, because at the moment they have no public IP. Is it possible to do this with a path-based rule, like www.example.com/VM1 to connect by SSH to the first VM? If not, what can be used to differentiate the SSH connections of VM1 and VM2?
Redirect port 80 of the gateway to port 8080 of a specific VM. As in my previous example, www.example.com/adminPanelVM1 would connect to the first VM on port 80 (redirected to port 8080 on the VM).
I already managed to create the redirection of port 1680 on the gateway with an HTTP setting, a listener and a rule.
Azure Application Gateway
The Azure Application Gateway operates at layer 7 of the OSI model, on the HTTP/HTTPS/WebSocket protocols; because of that, any other protocol (like SSH) cannot be routed through it.
You have a few options, though.
You can use a Network Security Group (NSG) for access control to your virtual machines. In the NSG you define where the traffic that is allowed to reach the VMs can come from.
An NSG behaves like an access control list, filtering traffic based on source and destination information and evaluating rules in order of priority. See this page for more information about NSGs.
Another option is to use a load balancer.
Azure Load Balancer
If you need to do port mapping, like you describe in your question, then a simple load balancer might be a better solution for you. An Azure Load Balancer works at a lower level in the OSI model, namely layer 4 (the transport layer), handling TCP/UDP traffic.
So, if you are using a load balancer, you can set up NAT rules to forward your traffic to specific machines. In other words, if you want to do:
LB port 1234 redirects to VM1 port 22 and
LB port 4312 redirects to VM2 port 22
you can do that using PowerShell as described in the Creating a public load balancer in Resource Manager by using PowerShell article.
There are quite a few steps but it walks you through the whole process of creating NAT rules, NICs and associated virtual machines.
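If you prefer Python over PowerShell, a rough sketch of the first mapping above (LB port 1234 to port 22 on VM1) using the azure-mgmt-network SDK might look like this. The names are placeholders, the load balancer and its frontend IP configuration are assumed to exist already, and VM1's NIC still has to reference the NAT rule in its IP configuration afterwards:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-rg"                # placeholder
lb_name = "my-lb"                       # placeholder, an existing load balancer

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Reuse the load balancer's existing frontend IP configuration (the public VIP).
lb = client.load_balancers.get(resource_group, lb_name)
frontend_id = lb.frontend_ip_configurations[0].id

# NAT rule: LB port 1234 -> backend port 22 (SSH on VM1).
nat_rule = client.inbound_nat_rules.begin_create_or_update(
    resource_group_name=resource_group,
    load_balancer_name=lb_name,
    inbound_nat_rule_name="ssh-vm1",
    inbound_nat_rule_parameters={
        "protocol": "Tcp",
        "frontend_port": 1234,
        "backend_port": 22,
        "frontend_ip_configuration": {"id": frontend_id},
    },
).result()

# VM1's NIC IP configuration must then list this rule in its
# load_balancer_inbound_nat_rules, which is a separate NIC update (not shown).
print(nat_rule.name)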
Azure Application Gateway vs Azure Load Balancer?
These are two distinctly different services trying to solve different problems, although those problems might look similar :)
The primary uses of an Application Gateway are:
SSL termination
cookie-based session affinity
round robin for load balancing traffic
Whereas the Azure Load Balancer service works at the TCP/UDP level and supports e.g. port mapping.
Cost-wise, the load balancer service is free, while the application gateway is billed per hour.
There are many great articles on this topic and on when to pick which service. See for example these links for more details:
When to use Azure Load Balancer or Application Gateway
Frequently asked questions for Application Gateway

Load balancers, Public-Ips and Availability sets in Microsoft Azure

I have a quick question regarding deploying a configuration in ARM mode.
I want to have two app servers behind a load balancer, with a database server on the same subnet.
Creating the load-balancer and rules for this seems to be working fine, but I have an issue with trying to access my database server via SSH.
I originally wanted to set up SSH access to my database server by setting up an inbound NAT rule to forward a port from my database server to the load balancer. This would allow me SSH access to my database via my DNS name and a specific port.
However, it seems you cannot forward a port through the load balancer to a machine outside of the availability set the load balancer is associated with.
I don't want to have my database server in the same availability set as my app server as you should have an availability set per tier.
But I don't particularly want to give my database server a full public IP address and DNS name either, as it shouldn't really be accessible outside its own subnet.
If I have an availability set per tier, does that mean I also must have a public IP address per tier to allow for SSH access to each machine?
What is the recommended way to set up a configuration like this, with SSH access to each machine spread across availability sets?

Open port on Azure while logged onto Azure VM

I want to open a port on Azure. I am logged onto the Azure VM. After that, how do I open the port?
I tried opening the firewall port, but that did not help. I also tried to do it through azure-cli, but it needs a web login.
Can I not open a port while logged in onto that Azure VM?
For VMs in Azure Service Management (classic) mode:
To open a particular port, say 8080, on your VM, you have to add an endpoint in the Azure portal, PowerShell or xplat-cli. Once this is done, you have created connectivity from the external load balancer (i.e. the VIP of the VM) to the actual VM (with its internal IP address). If the VM is Linux, you can start using the endpoint (VIP and port) by default, unless you restrict ports specifically.
For a Windows VM, for non-standard ports, you have to add Windows Firewall inbound allow rules (say for 8080) inside your VM so that it can accept traffic forwarded from the VIP.
For VMs in Azure Resource Manager mode:
You first have to create a load balancer with a VIP, then add NAT rules to forward traffic from the VIP to the VM (use load-balancing rules if the same VIP port forwards traffic to multiple backend VMs).
For a Windows VM, again, Windows Firewall inbound rules need to be added.
Securing ports:
The above will work by default, but if you want to secure your ports, you should follow one of the two approaches below, not both.
Use an Access Control List (ACL): this works at the VIP endpoint level. If you want to restrict VIP port 8080 to only a few IPs and deny all others, you can use an ACL to add those IPs. This can be done in the portal endpoint section, PowerShell or xplat-cli.
Use a Network Security Group (NSG): this works at the periphery of the VM. You have greater control here to restrict multiple VM ports, port ranges, etc., but you have to manage those rules. The port to secure in an NSG is the VM's internal port, whereas in an ACL it is the VIP port.
Hope this clarifies things.
You also need to open the port in the Endpoint settings within the Azure Portal.
Go to Azure Portal -> Your VM -> Settings -> Endpoints and add your Port.
To open a port, you have to do it from the Azure portal and not in the VM. You can use the NSG (Network Security Group) attached to the VM and add a rule in the "Inbound security rules" section.

Configuring Azure load balancer and NAT rules

I'm trying to build a simple two-tier WordPress environment on CentOS 7.2 in Azure.
I've defined a virtual network, have connected it to my home-lab via IPsec VPN, and I've defined several subnets in Azure (for Web tier, SQL tier, and utility tier role segregation using Network Security Groups).
I have two web-tier VMs, both members of the same availability set, and both on the web-tier subnet. They have (outbound) internet access, I can SSH to them from my home-lab, and they seem fine operationally to me - httpd is listening on 80/tcp, and I can hit the web pages from my home-lab network by visiting each web server directly on its 192.168.x address.
I should mention my web servers DO NOT have public IPs assigned, but I can't see this being an issue... they're intended to be behind the load balancer.
So, I've created a Load Balancer, and:
assigned a public IP to the LB
added a backend pool (selected my availability set, and chose my two web servers)
added a probe (http probing the two web servers)
added a load balancer rule
Notice I did NOT add an inbound NAT rule. I can't figure out what that's for, or if I need it.
On my web tier, I run tcpdump on port 80 and see the probes. In the httpd logs, I see 200 success messages for the probes. I go to a web browser, hit the external VIP I assigned to the LB, and nothing. It just times out. I cannot connect to the LB VIP.
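To rule out a browser or DNS problem, a raw TCP check against the VIP can be used; below is a quick sketch, with a placeholder documentation IP standing in for the real VIP:

import socket

# Placeholder -- replace with the load balancer's public VIP.
LB_VIP = "203.0.113.10"

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((LB_VIP, 80))
    print("TCP connect to port 80 succeeded")
except OSError as exc:
    # socket.timeout is a subclass of OSError, so timeouts land here too.
    print("TCP connect failed:", exc)
finally:
    sock.close()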
What am I missing? What are the NAT rules about?
Any help would be appreciated. All I can find online are examples doing this in powershell etc.. and I'm using the Azure web interface.
Thanks!
Edit: Found the issue - I needed the NSG to allow not just AzureLoadBalancer but also "Internet" to hit port 80/tcp. Should have thought of that sooner.
