I am currently trying to learn Azure cloud. I have worked with AWS before, so I may be trying to carry over some concepts here.
I need to know how to configure a logical firewall to allow traffic from an Azure load balancer to the VMs (scale sets or backend pools).
I was able to do this between different VMs by assigning the VMs to different application security groups and allowing the respective traffic between those groups in the network security group. I found the service tag 'AzureLoadBalancer' as an option in NSG rules, but it seems that it only allows traffic from health probes and not from the actual load balancer (there is also no option to select a specific load balancer). In the end I had to allow traffic from the public IP of the load balancer to the VNet to get the load balancer to work.
I hope there is a logical way to do this, and if there is, I am not sure what I am missing here. I would appreciate anyone who could help.
Normally you wouldn't want to firewall traffic from the Azure Load Balancer: it's a load balancer, so it needs to be able to reach your endpoints. I'm not quite sure what you are trying to achieve here. You might be able to simply micro-segment your endpoints into different subnets and apply different NSGs (with different allow/deny rules) at the subnet level, as sketched below. Otherwise an actual firewall would be required between your Azure Load Balancer and the endpoints, for example if you need L7 inspection.
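For example, a minimal Azure CLI sketch of that subnet-level approach (the resource group, VNet, subnet, and NSG names here are placeholders, not from the question):

```
# Placeholder names: myRG, myVnet, web-subnet, web-nsg
# Create an NSG for the web tier and allow HTTP from anywhere
az network nsg create --resource-group myRG --name web-nsg
az network nsg rule create --resource-group myRG --nsg-name web-nsg \
  --name allow-http --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes '*' --destination-port-ranges 80

# Attach the NSG to the subnet that backs the load balancer's backend pool
az network vnet subnet update --resource-group myRG --vnet-name myVnet \
  --name web-subnet --network-security-group web-nsg
```

A different subnet with a stricter NSG can then hold endpoints that should not be reachable through the load balancer at all.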
I have Azure Container Instances inside a VNet and I want to implement load balancing, but I cannot think of a workable solution. For context, a set of VMs will contact the load-balancing resource, which would direct the request to one of the ACIs.
Things I have tried so far are Azure Load Balancer (does not work with ACI) and Azure Traffic Manager (cannot be inside a VNet). I don't think an application gateway is a feasible solution either. I want to know if anyone has faced this scenario before and how they overcame it, or if someone has a potential solution that I can test out.
Well, to access the ACI inside a VNet through a Load Balancer, you just need to create a Load Balancer and add a backend pool containing the private IP address of the ACI.
Then create a health probe and a load balancing rule for the port you need. When everything is in place, you can access the ACI inside the VNet through the load balancer's public IP address.
(Screenshots of the result, the ACI, and the load balancer configuration omitted.)
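For reference, roughly the same setup with the Azure CLI, assuming a Standard SKU load balancer named aci-lb, a frontend named LoadBalancerFrontEnd, and an ACI with private IP 10.0.1.4 in myVnet (all of these names and addresses are placeholders):

```
# IP-based backend pool that points at the ACI's private address (placeholder IP)
az network lb address-pool create --resource-group myRG --lb-name aci-lb \
  --name aci-pool --vnet myVnet \
  --backend-address name=aci1 ip-address=10.0.1.4

# Health probe and load balancing rule on the port the container listens on
az network lb probe create --resource-group myRG --lb-name aci-lb \
  --name http-probe --protocol Tcp --port 80
az network lb rule create --resource-group myRG --lb-name aci-lb \
  --name http-rule --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name LoadBalancerFrontEnd --backend-pool-name aci-pool \
  --probe-name http-probe
```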
Could you help me understand the pricing for the load balancer in Azure? Here is what I've found in the manual: https://azure.microsoft.com/en-us/pricing/details/load-balancer/
Am I right that if I add only several frontend IP configurations, backend pools, and inbound NAT rules, without any load balancing rules, I'll be charged only for the amount of data processed? The reason I am asking is that I can't find what "outbound rules" are; there is no such item in the settings.
And, in general, my aim is just to redirect ports from a public IP to a VM.
Yes, you are right. If you are creating only inbound NAT rules, you will be charged only for the amount of data processed, plus the charge for the public IP address resource that is attached to the LB.
Outbound rules are not visible in the portal. You can configure them via the CLI or PowerShell. They are used in scenarios where you have VMs without a public IP (for example, behind an internal load balancer) that need to talk to the internet.
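As a rough CLI sketch of both rule types (the load balancer, frontend, and pool names are placeholders, and outbound rules need the Standard SKU):

```
# Placeholder names: my-lb, LoadBalancerFrontEnd, my-backend-pool
# Port redirect: forward TCP 50022 on the public frontend to port 22 on a VM
az network lb inbound-nat-rule create --resource-group myRG --lb-name my-lb \
  --name ssh-to-vm1 --protocol Tcp --frontend-port 50022 --backend-port 22 \
  --frontend-ip-name LoadBalancerFrontEnd

# Outbound rule: SNAT for the backend pool through the public frontend
az network lb outbound-rule create --resource-group myRG --lb-name my-lb \
  --name outbound-all --protocol All --frontend-ip-configs LoadBalancerFrontEnd \
  --address-pool my-backend-pool --allocated-outbound-ports 10000
```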
Please forgive my ignorance.
Question:
How can I control network traffic to a public IP resource and send it to multiple different resources based on destination port?
Background:
I have set up some VMs that are configured with only private IPs in different subnets. All belong to the same virtual network. These VMs all run different services, and I do not want HA, as I do not need it and it costs money.
I just want all the services on these VMs to communicate outbound using the same single public IP, and I want to split incoming traffic to that same public IP between my resources based on destination port.
Seems like a straightforward requirement, right?
At first I thought "this must be a task for the Load Balancer service", since it operates at L4, and tried to set it up, but I was not able to split inbound traffic on different ports to more than a single VM or a single availability set. I do not understand why you can only use the Load Balancer's NAT rules with a single VM or availability set.
I can probably delete and re-create all the VMs (thank you Microsoft..) into a single availability set that has only 1 fault domain and 1 update domain, but does this make any sense?
It just seems to me like a dirty workaround, using availability sets in a way they are not meant to be used, in order to solve a very basic thing.
Thanks!
Basically, you could create a public-facing Azure Load Balancer and then target the VMs or availability sets as the backend pools of this load balancer. What you need to do is configure the load balancing rules, some health probes, or inbound NAT rules for the backend services with port forwarding.
Refer to this SO answer:
You would use a NAT rule when you have one backend server, or you know which backend server to get to, and load balancing rules when you want to load-balance across multiple backend servers.
A NAT rule must be explicitly attached to a VM (or network interface) to complete the path to the target, whereas a load balancing rule need not be. In the latter case, a VM is selected (from the backend address pool of VMs) to complete the path to the target.
Additionally, Azure Load Balancer supports two SKUs, Basic and Standard, and the SKUs support different backend pool endpoints. Read more details in the Load Balancer SKU comparison.
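To illustrate the NAT-rule part, here is a hedged Azure CLI sketch that splits two public ports across two separate VMs; the rule, NIC, and frontend names are made up for the example:

```
# Placeholder names: my-lb, LoadBalancerFrontEnd, vm1-nic, vm2-nic
# Forward public port 8080 to VM1:80 and public port 8443 to VM2:443
az network lb inbound-nat-rule create --resource-group myRG --lb-name my-lb \
  --name web-to-vm1 --protocol Tcp --frontend-port 8080 --backend-port 80 \
  --frontend-ip-name LoadBalancerFrontEnd
az network lb inbound-nat-rule create --resource-group myRG --lb-name my-lb \
  --name tls-to-vm2 --protocol Tcp --frontend-port 8443 --backend-port 443 \
  --frontend-ip-name LoadBalancerFrontEnd

# Attach each NAT rule to the target VM's NIC ip-configuration
az network nic ip-config inbound-nat-rule add --resource-group myRG \
  --nic-name vm1-nic --ip-config-name ipconfig1 --lb-name my-lb \
  --inbound-nat-rule web-to-vm1
az network nic ip-config inbound-nat-rule add --resource-group myRG \
  --nic-name vm2-nic --ip-config-name ipconfig1 --lb-name my-lb \
  --inbound-nat-rule tls-to-vm2
```

Attaching each NAT rule to a different NIC is what lets a single public IP fan out to multiple standalone VMs by port, without forcing them into one availability set.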
I've got a networking question about one of my customers' servers in the cloud.
We are using just a standard 2012 R2 VM with a few endpoints set up through the NSG firewall, and we have a load balancer in front of the network with a few ports forwarded to the same VPC.
The reason we are using a load balancer with port forwarding is that I'm finding countless records of bots trying to hit ports 3389 and 21 in attempts to break in.
So I have tried changing the source setting in the NSG rule to AzureLoadBalancer, in the hope that it will only allow traffic that has come via the load balancer on the external ports.
But for some reason this does not work.
Is there a proper procedure for restricting traffic to a VM via the NSG so that it only comes from a load balancer?
Any help with this is greatly appreciated.
Thanks
An NSG can't be associated with a load balancer; NSGs can be associated with either subnets or individual VM instances (NICs) within a subnet, so we can't attach an NSG to the load balancer itself to block inbound IP addresses from the internet.
To protect the VM (with a public IP), we could deploy a Linux VM and use iptables as a firewall. You can also search for third-party firewall products in the Azure Marketplace.
Update:
To protect your VM, you can use an NSG to allow only a specific source IP address range to access your VM: NSG -> Add inbound security rule -> Advanced -> Source IP address range.
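The CLI equivalent of that portal step, as a sketch (the NSG name, source range, and port are placeholders):

```
# Placeholder names/values: vm-nsg, 203.0.113.0/24, port 3389
# Allow RDP only from a known address range; anything not matching an allow
# rule falls through to the default DenyAllInbound rule
az network nsg rule create --resource-group myRG --nsg-name vm-nsg \
  --name allow-rdp-from-office --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 3389
```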
Looking at the LB troubleshooting doc:
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot
it says:
Also, check whether a Deny All network security group rule on the NIC of the VM or on the subnet has a higher priority than the default rule that allows LB probes and traffic (network security groups must allow the Load Balancer IP of 168.63.129.16).
If you create your NSG rule and only allow from 168.63.129.16 you should be set. The Azure load balancer will always come from that address no matter what your frontend IP is.
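For example, a sketch of such a rule using the AzureLoadBalancer service tag (which covers 168.63.129.16); the NSG name and priority are placeholders:

```
# Placeholder names: vm-nsg, priority 110
# Higher-priority allow rule for the load balancer's probe/source address
az network nsg rule create --resource-group myRG --nsg-name vm-nsg \
  --name allow-azure-lb --priority 110 --direction Inbound --access Allow \
  --protocol '*' --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges '*'
```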
I have 2 AWS EC2 instances living inside 2 different subnets of my VPC.
I would like to allow the Ruby app running on the first instance (say App#1) to call the endpoints of the app (say App#2) running on the 2nd instance.
I would also like my users to directly call the endpoints of App#2 from their browser.
Here is what I have tried (and mostly failed at):
[Success!] I added the known IP addresses of my users to the inbound rules of the Load Balancer Security Group of App#2 and have confirmed that they can access App#2's endpoints from their browsers.
[Fail!] I added the Load Balancer Security Group ID of App#1 to the inbound rules of the Load Balancer Security Group of App#2. But my logs tell me App#1 cannot access the endpoints of App#2.
[Fail!] I added the VPC Security Group ID of App#1 to the inbound rules of the Load Balancer Security Group of App#2 - nope, still doesn't work.
(Somehow, when I launched the instance for App#1, AWS automatically created 2 security groups for this instance - one for the VPC and one for the load balancer... I have no idea why/how this happened...)
[Fail!] I added the CIDR for the subnet App#1 was in to the inbound rules of the Load Balancer Security Group of App#2. Still no joy.
[Success...Sort Of] I assigned an Elastic IP to the instance running App#1 and added that to the inbound rules of the Load Balancer Security Group of App#2. This works, but I would rather not use this method, since I would like to elastically scale App#1 in the future and I do not know how to automatically assign more Elastic IPs to new instances when they spin up, add them to the inbound rules, and then somehow remove them when they shut down.
I feel like there has got to be a really clean solution to this problem and I am probably missing something painfully obvious. Can someone please give me a hint?
Any help would be appreciated!
It sounds like you might be using the public IP address of your load balancer, so the traffic looks like it is coming from the outside. Try using the private IP/DNS if there is one, or set up a second, internal-facing load balancer.
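As a rough sketch of the internal load balancer route with the AWS CLI (the names, subnet IDs, and security group IDs are placeholders):

```
# Placeholder identifiers: app2-internal-lb, subnet-aaa111, subnet-bbb222,
# sg-0app2lb (LB security group), sg-0app1instances (App#1 instance SG)
# Internal ALB for App#2, reachable only from inside the VPC
aws elbv2 create-load-balancer --name app2-internal-lb --scheme internal \
  --type application --subnets subnet-aaa111 subnet-bbb222 \
  --security-groups sg-0app2lb

# Allow App#1's instance security group to reach the internal LB on port 80
aws ec2 authorize-security-group-ingress --group-id sg-0app2lb \
  --protocol tcp --port 80 --source-group sg-0app1instances
```

Because the traffic stays on private addresses, the security-group-to-security-group rule works, and it keeps working as App#1 scales out with no Elastic IP bookkeeping.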
So App#2 is in a public subnet and App#1 is in a private subnet. For example, the diagram would be something like:
Internet => LB#2 => App#2:80 (in public subnet) => LB#1 => App#1:4567 (in private subnet)
Let's open all inbound rules on all instances and load balancers and check whether you can access it via the internet;
then apply security groups on each layer, one layer at a time; don't change all of them at the same time.
Let me know which layer has the issue.