Configure Azure (Kubernetes) Network Security Group to support multiple source IP addresses for the same destination port

We are using Kubernetes with Azure as the cloud provider. The relevant part of the setup is that we have one load balancer and one network security group attached to all worker VMs. So basically every time I create a service, it adds a record to the load balancer's frontend IP configuration and a rule to the network security group with the specified destination port and source IP addresses (which restricts which source IPs can reach the VMs on which port).
The problem with this setup is that if I have a service using port 5000 that is open to the public internet, and another service that also uses port 5000 but should be open only to specific IPs, both services are effectively open to the public, because NSG rules are additive. Note that port 5000 here is not the actual VM node port (although that is what Azure thinks), because kube-proxy on each machine takes care of sending the traffic to the correct VM and node port. That is why it makes sense to have two services using the same port with different ingress rules.
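For illustration, the restricted service might look like the sketch below (the service name, selector and CIDR are hypothetical); spec.loadBalancerSourceRanges is what the Azure cloud provider turns into the NSG rule's source IP restriction:

# Hypothetical example: a LoadBalancer Service on port 5000 restricted to one CIDR.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: restricted-svc          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: restricted-app         # hypothetical selector
  ports:
  - port: 5000
    targetPort: 5000
  loadBalancerSourceRanges:
  - 203.0.113.0/24              # only this range ends up allowed in the generated NSG rule
EOF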
Is there any way to mitigate this problem? I can't think of an architecture that lets me apply different ingress rules to multiple services that share the same destination port.
Thank you

Related

Azure Advisory: Web ports should be restricted on NSG associated to your VM

What can I do to fix this Advisory message?
The VM this relates to is a web server which sits behind an Azure Load Balancer. The NSG rule that is causing this (the only non-default rule) is:
Type: Allow
Source: Service Tag - Internet, source port range = *
Destination: ASG for this VM, destination port 80,443, protocol tcp
If I remove this rule, the message disappears (after some hours), but then internet web traffic can no longer reach the VM.
Should I ignore the Azure Advisory message, or am I overlooking something? I was looking forward to getting this nice and tidy AND having a 'satisfied' advisory state.
You can run your web server on the VMs on ports other than 80 and 443. The load balancer can translate between port 80/443 on your public IP and whatever port you choose inside the VMs. Since Load Balancers are a fairly simple service, this is probably your only option.
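If you go that route, a minimal Azure CLI sketch of such a port translation could look like this (the resource group, load balancer, frontend and backend pool names are placeholders):

# Hypothetical sketch: forward public port 80 to port 8080 on the backend VMs.
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name http-80-to-8080 \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 8080 \
  --frontend-ip-name myFrontendIp \
  --backend-pool-name myBackendPool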
As an alternative, you could try Application Gateway instead of your load balancer. It should act as the reverse proxy you need. Be aware that it is a bit more costly than the load balancer, but it also has a lot more features.
I see that your VM is behind an Azure Load Balancer, so the network flow is internet -> load balancer -> VM.
Then your web server should not be public to the internet; it should only be accessible from the load balancer. You can set the source service tag to AzureLoadBalancer. For more information about service tags, you may check the official documentation: Service tags
Update:
After further research: the AzureLoadBalancer service tag in an NSG rule is only used to allow Azure health probes. In fact, there is already a default rule allowing the load balancer to probe the endpoints.
So, the suggestions are:
Do not assign public IPs to the individual instances. That way the backends can only be reached over their private IPs; in other words, clients can only reach your web service through the load balancer.
Add NSG inbound rules for ports 80 and 443 for the web service, and port 22 or 3389 for remote management (see the sketch after this list).
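A minimal Azure CLI sketch of such an inbound rule (the resource group and NSG names are placeholders):

# Hypothetical sketch: allow inbound web traffic on 80/443 through the NSG.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name allow-web-inbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80 443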
With that in place, your servers should be secure. If there are still warnings, I think you can ignore them; Azure may simply see that you have opened ports 80 and 443 to the public, but your instances have no public IPs.
Hope the above is helpful to you.

Azure gateway with a virtual network

I have several questions about the setup of a gateway and VMs, so here is what I currently have.
I have an Application Gateway and two Ubuntu VMs, everything hosted on Azure and all on the same virtual network. Both VMs have only a private IP (10.1.0.4 and 10.1.0.5), and the gateway has a private IP (10.1.1.4) and a public IP. Because only the gateway has a public IP, I assume everything has to go through it, which is what I want.
The goals I am trying to achieve:
Make a load balancer on port 1680, redirected to port 1680.
Redirect SSH to each VM so I can connect to a specific one, since at the moment they have no public IP. Is it possible to do this with a path-based rule, like www.example.com/VM1 to connect by SSH to the first VM? If not, what can be used to differentiate the SSH connections to VM1 and VM2?
Redirect port 80 of the gateway to port 8080 of a specific VM. As in my previous example, www.example.com/adminPanelVM1 would connect to the first VM on port 80 (redirected to port 8080 on the VM).
I have already managed to create the port 1680 redirection on the gateway with an HTTP setting, a listener and a rule.
Azure Application Gateway
The Azure Application Gateway operates at layer 7 of the OSI model and only handles the HTTP/HTTPS/WebSocket protocols; because of that, it cannot route any other protocol (such as SSH).
You have a few options, though.
You can use a Network Security Group (NSG) for access control to your virtual machines. In the NSG you define where the traffic that is allowed to reach the VMs may come from.
An NSG behaves like an access control list, filtering traffic based on source and destination information and evaluating rules in order of priority. See this page for more information about NSGs.
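As an illustration, a source-restricted NSG rule created with the Azure CLI might look like the sketch below (the resource group, NSG name and CIDR are placeholders):

# Hypothetical sketch: allow SSH only from one trusted address range.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name allow-ssh-from-office \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 22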
Another option is to use a load balancer.
Azure Load Balancer
If you need to do port mapping, as you describe in your question, then a simple load balancer might be a better solution for you. An Azure Load Balancer works at a lower level in the OSI model, namely layer 4 (the transport layer), handling TCP/UDP traffic.
So, if you are using a load balancer, then you can set up NAT rules to forward your traffic to specific machines, in other words, if you want to do:
LB port 1234 redirects to VM1 port 22 and
LB port 4312 redirects to VM2 port 22
you can do that using PowerShell as described in the article Creating a public load balancer in Resource Manager by using PowerShell.
There are quite a few steps, but it walks you through the whole process of creating the NAT rules, NICs and associated virtual machines.
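If you prefer the Azure CLI over PowerShell, a minimal sketch of the two NAT rules could look like this (the resource group, load balancer and frontend names are placeholders, and each rule still has to be associated with the target VM's NIC IP configuration):

# Hypothetical sketch: one inbound NAT rule per VM for SSH.
az network lb inbound-nat-rule create \
  --resource-group myResourceGroup --lb-name myLoadBalancer \
  --name ssh-vm1 --protocol Tcp --frontend-port 1234 --backend-port 22 \
  --frontend-ip-name myFrontendIp

az network lb inbound-nat-rule create \
  --resource-group myResourceGroup --lb-name myLoadBalancer \
  --name ssh-vm2 --protocol Tcp --frontend-port 4312 --backend-port 22 \
  --frontend-ip-name myFrontendIp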
Azure Application Gateway vs Azure Load Balancer?
These are two distinctly different services that are trying to solve different problems, although those problems might look similar :)
The primary uses of an Application Gateway are:
SSL termination
cookie-based session affinity
round robin for load balancing traffic
The Azure Load Balancer service, on the other hand, works at the TCP/UDP level and supports e.g. port mapping.
Cost-wise, the Load Balancer service is free, while the Application Gateway is billed per hour.
There are many good articles on when to pick which service; see for example the links below for more details:
When to use Azure Load Balancer or Application Gateway
Frequently asked questions for Application Gateway

How to port forward Google Compute Engine Instance?

I've set up a VPS using the Google Compute Engine platform. In the instance, I've established a MongoDB database hosted locally on the default port 27017. I've also set up a REST API based NodeJS server with express listening for connections on port 8080.
Right now, I can only access the NodeJS site internally. How do I expose port 8080 on the VPS to the external IP address so that I can access the API from anywhere?
I tried following along with an answer to this post: Enable Access Google Compute Engine Instance Via HTTP Port.
But that did not solve my issue.
Default Firewall rules
The Google Compute Engine firewall blocks all ingress traffic (i.e. incoming network traffic) to your virtual machines by default. If your VM is created on the default network, a few ports like 22 (SSH) and 3389 (RDP) are allowed.
The default firewall rules are documented here.
Opening ports for ingress
The ingress firewall rules are described here.
The recommended approach is to create a firewall rule which allows port 8080 to VMs carrying a specific tag of your choice, then add that tag to the VMs on which you want to allow ingress on port 8080.
If you use gcloud, you can do that using the following steps:
# Create a new firewall rule that allows INGRESS tcp:8080 with VMs containing tag 'allow-tcp-8080'
gcloud compute firewall-rules create rule-allow-tcp-8080 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-8080 --allow tcp:8080
# Add the 'allow-tcp-8080' tag to a VM named VM_NAME
gcloud compute instances add-tags VM_NAME --tags allow-tcp-8080
# If you want to list all the GCE firewall rules
gcloud compute firewall-rules list
Here is another Stack Overflow answer which walks you through allowing ingress traffic on specific ports to your VM using the Cloud Console web UI (in addition to gcloud).
Static IP addresses
The answer you linked only describes how to allocate a static IP address and assign it to your VM. That step is independent of the firewall rules, so it can be used in combination if you would like a static IP address.
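If you also want a static external IP, a minimal gcloud sketch for reserving one could look like this (the address name and region are placeholders; the reserved address must then be assigned to the VM's network interface):

# Hypothetical sketch: reserve a regional static external IP and inspect it.
gcloud compute addresses create my-static-ip --region us-central1
gcloud compute addresses describe my-static-ip --region us-central1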

open port on azure while logged onto azure vm

I want to open a port on Azure. I am logged onto the Azure VM; after that, how do I open the port?
I tried opening the firewall port, but that did not help. I also tried to do it through the Azure CLI, but it needs a web login.
Can I not open a port while logged in on that Azure VM?
For VMs in Azure Service Management (classic) mode:
To open a particular port, say 8080, on your VM, you have to add an endpoint in the Azure portal, in PowerShell or using the xplat-cli. Once this is done, you have created connectivity from the external load balancer (i.e. the VIP of the VM) to the actual VM (with its internal IP address). If the VM is Linux, you can start using the endpoint (VIP and port) right away unless you have specifically restricted ports.
For Windows VMs, for non-standard ports you have to add a Windows Firewall inbound allow rule (say, for 8080) inside the VM so that it can accept traffic forwarded from the VIP.
For VMs in Azure Resource Manager mode:
You first have to create a load balancer with a VIP, then add NAT rules to forward traffic from the VIP to the VM (use load balancing rules if the same VIP port forwards traffic to multiple backend VMs).
For Windows VMs, Windows Firewall inbound rules again need to be added.
Securing ports:
The above will work by default, but if you want to secure your ports you have to follow one of the two options below, not both.
Use an Access Control List (ACL): this works at the VIP endpoint level. If you want to restrict VIP port 8080 to only a few IPs and deny all others, you can use an ACL to add those IPs. This can be done in the portal's endpoint section, in PowerShell or with the xplat-cli.
Use a Network Security Group (NSG): this works at the periphery of the VM. You have greater control here and can restrict multiple VM ports, port ranges, etc., but you have to manage those rules yourself. The port to secure in an NSG is the VM's internal port, whereas in an ACL it is the VIP port.
Hope this clarifies things.
You also need to open the port in the Endpoint settings within the Azure portal.
Go to Azure portal -> your VM -> Settings -> Endpoints and add your port.
To open a port, you have to do it from the Azure portal and not inside the VM. You can use the NSG (Network Security Group) attached to the VM and add a rule under "Inbound security rules".
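If you prefer the CLI to the portal, a minimal sketch (the resource group and VM names are placeholders); az vm open-port creates the corresponding NSG inbound rule for you:

# Hypothetical sketch: open port 8080 on the NSG attached to the VM's NIC or subnet.
az vm open-port --resource-group myResourceGroup --name myVM --port 8080 --priority 900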

Azure multiple VM with same Virtual Public IP and Host name mapped to Virtual Public IP

I am running following two VM's on Azure within same cloud service.
HOST NAME: First, PUBLIC VIRTUAL IP (VIP) ADDRESS: 104.xx.xx.26
HOST NAME: Second, PUBLIC VIRTUAL IP (VIP) ADDRESS: 104.xx.xx.26
On First, nginx is running on port 80; on Second, no service is running on port 80.
Now the question is:
I have mapped my host name to the above public IP (104.xx.xx.26).
How does Azure decide which VM to route a request to?
Will Azure route requests to the Second VM, where no service is running on port 80?
Update:
This question is not related to load balancing!
I just want all my HTTP requests to be directed to the First VM, and that is the way it is working now.
My concern is how Azure routes the requests. Could it sometimes route a request to the second server? In that case the response would not be served, since nginx is not running on the second server.
I will be using the second server for different services.
E.g.
The first server will have PHP and nginx installed.
The second server will have MySQL installed.
I want all requests on port 80 to be directed to the first server.
In order to route traffic between the two VMs of the same availability set you will have to set up a load-balanced set of endpoints. You can find all the gory details of how to do it here:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-load-balance/
Availability sets are not responsible for load balancing; they ensure that at least one VM within the set remains available.
There are a few different approaches to distributing load in Azure. For public-facing services running on VMs within the same cloud service, the most accessible way is to configure a load-balanced set for the service. In your case, a load-balanced set on port 80 for both VMs. Azure will then distribute traffic across both VMs using round-robin, provided they both run a service on port 80.
