Azure load balancing rule based on source network

Is it possible with the Azure load balancer to create a load balancing rule based on source network? For example:
Connections from 10.1.0.0/16 to port 9000 go to backend port 9100
Connections from 10.2.0.0/16 to port 9000 go to backend port 9200

No, it's not possible, because the frontend IP configuration consists only of an IP address, protocol, and port; the source network is not part of the match. Refer here. Also, if you intend to configure an internal LB, the frontends and backends must be inside a virtual network.
A Load Balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic, along with the required source and destination port.
and
The frontend (aka VIP) is defined by a 3-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule.
For more information, you can read Multiple frontends for Azure Load Balancer.
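To illustrate the point, here is a minimal sketch using the azure-mgmt-network Python SDK (the resource group, load balancer name, and subscription ID are hypothetical): the LoadBalancingRule model carries only a frontend IP configuration, a transport protocol, and ports. There is no property for a source prefix, so per-source-network routing cannot be expressed in a rule.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

# Hypothetical subscription/resource names for illustration only.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("myRG", "myLB")

lb.load_balancing_rules.append(
    LoadBalancingRule(
        name="port-9000-rule",
        protocol="Tcp",            # transport protocol of the 3-tuple
        frontend_port=9000,        # frontend port of the 3-tuple
        backend_port=9100,         # backend port on the pool VMs
        # The rule references a frontend and a pool; note there is no
        # field anywhere for a source network such as 10.1.0.0/16.
        frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
        backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    )
)
client.load_balancers.begin_create_or_update("myRG", "myLB", lb).result()
```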

Related

TCP vs UDP in load balancing rule with Azure Load Balancer

What is the difference between TCP and UDP in a load balancing rule?
What happens if UDP (in the red box in the image) is selected?
The image is from https://learn.microsoft.com/en-us/azure/load-balancer/manage-rules-how-to
The protocol setting selects which transport protocol the rule matches and balances: a TCP rule applies only to TCP flows, and a UDP rule only to UDP flows. Beyond that, the rule can be used for port swapping, or for port forwarding when you need to keep your incoming sessions consistent, and so on.
See the following page:
Load-balancing rules are used to specify a pool of backend resources to route traffic to, balancing the load across each instance. For example, a load balancer rule can route TCP packets on port 80 of the load balancer across a pool of web servers.
https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-faqs
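As a sketch of that protocol choice (using the azure-mgmt-network Python SDK; all resource names and the subscription ID are hypothetical), the following creates two otherwise identical rules, one matching TCP flows and one matching UDP flows:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("myRG", "myLB")
frontend = SubResource(id=lb.frontend_ip_configurations[0].id)
pool = SubResource(id=lb.backend_address_pools[0].id)

# Identical rules except for the transport protocol they match
# (the "red box" choice in the portal).
for proto, name, port in (("Tcp", "web-tcp", 80), ("Udp", "dns-udp", 53)):
    lb.load_balancing_rules.append(
        LoadBalancingRule(
            name=name,
            protocol=proto,        # "Tcp" or "Udp"
            frontend_port=port,
            backend_port=port,
            frontend_ip_configuration=frontend,
            backend_address_pool=pool,
        )
    )
client.load_balancers.begin_create_or_update("myRG", "myLB", lb).result()
```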

How to open port 22 on Azure Kubernetes Service for the loopback IP 127.0.0.1

How should we open port 22 on the AKS loopback IP?
We are trying to telnet to the loopback IP on port 22. This works fine on any Linux VM, but on AKS we get the error "Connection closed".
• Note that AKS clusters have unrestricted outbound (egress) internet access. This level of network access allows nodes and services you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks. The simplest solution for securing outbound addresses is a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
Thus, you can configure an inbound rule and an outbound rule to allow traffic on port 22 (SSH) for the destination IP address 127.0.0.1 (the loopback address). To do so, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic#adding-firewall-rules
According to that link, you must deploy a firewall, create a user-defined route (UDR) with Azure Firewall as the next hop, and associate it with the AKS subnet. If you configure Azure Firewall with the AKS cluster in this way, you will be able to control the ingress and egress port traffic.
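As a concrete illustration of "an inbound rule ... to allow traffic on port 22", here is a sketch with the azure-mgmt-network Python SDK; the NSG name, resource group, and subscription ID are hypothetical, and the destination prefix simply mirrors the loopback address the answer mentions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow inbound TCP 22; a matching outbound rule would mirror this
# with direction="Outbound".
client.security_rules.begin_create_or_update(
    "myRG",
    "myAksNsg",                            # NSG attached to the AKS subnet
    "allow-ssh-22",
    SecurityRule(
        protocol="Tcp",
        access="Allow",
        direction="Inbound",
        priority=200,                      # must be unique within the NSG
        source_address_prefix="*",
        source_port_range="*",
        destination_address_prefix="127.0.0.1",
        destination_port_range="22",
    ),
).result()
```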

Azure VM Port forwarding to localhost port

I have an Azure VM, and a web application listening internally on port 32001. The VM is publicly accessible on a static IP address. I'm trying to route all traffic the VM receives on port 443 to its localhost port 32001. I am trying to set this up in the screen below, and my first idea was to edit the HTTPS rule. But no matter what I try, I can't get a connection to my web app. What am I supposed to do?
You can't do this with just the VM. What you actually need is a load balancer in front of your Azure VM, which takes care of the port forwarding. Examples can be found here and here. In short, what you need to do is:
expose port 32001 on your VM
create a Load Balancer
add the VM to the Load Balancer's backend pool
configure port forwarding on the balancer (see the sketch below)
In inbound and outbound rules you can only configure which traffic is allowed; you can't configure port forwarding there.
You can also check this topic.
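On Azure Load Balancer, the port-forwarding piece of that list is an inbound NAT rule. A minimal sketch with the azure-mgmt-network Python SDK (resource names and subscription ID are hypothetical) forwarding frontend port 443 to backend port 32001:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import InboundNatRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("myRG", "myLB")

lb.inbound_nat_rules.append(
    InboundNatRule(
        name="https-to-32001",
        protocol="Tcp",
        frontend_port=443,        # public port on the LB frontend IP
        backend_port=32001,       # port the web app listens on in the VM
        frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
    )
)
client.load_balancers.begin_create_or_update("myRG", "myLB", lb).result()
# The NAT rule must then be associated with the VM NIC's IP
# configuration, which is a separate update to the network interface.
```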

Azure outbound traffic is being blocked

I have set up a few VMs and a load balancer so that we can have one outgoing IP. Right now I am having issues connecting to the internet from inside my VM. If I open Internet Explorer and try to access a website, it shows waiting for a reply and then "This page can't be displayed".
Each VM is connected to the same subnet.
The subnet has an NSG attached to it, and each VM is part of the subnet.
There is then a load balancer to allow incoming RDP, but with different ports for the different VMs.
I think I am missing the SNAT, but I have no idea where to configure that. From what I have read, I am using scenario 2, "Public Load Balancer associated with a VM (no instance-level public IP address on the instance)": multiple VMs on a subnet and one load balancer sharing one IP address.
Where do I actually go to set up the SNAT? Or is there another issue I am missing here?
You could probably add load-balancing rules for TCP port 80 or 443 instead of inbound NAT rules; NAT rules are only used for port forwarding. Moreover, you do not need to add NAT rules for DNS. This works on my side.
A load balancer rule defines how traffic is distributed to the VMs. The rule defines the front-end IP configuration for incoming traffic, the back-end IP pool to receive the traffic, and the required source and destination ports.
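As a sketch of that suggestion (azure-mgmt-network Python SDK; resource names and subscription ID are hypothetical), the following adds plain load-balancing rules for ports 80 and 443 rather than inbound NAT rules. With a public load balancer of this era, backend VMs covered by a load-balancing rule also get SNAT for their outbound traffic through the frontend public IP:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("myRG", "myLB")
frontend = SubResource(id=lb.frontend_ip_configurations[0].id)
pool = SubResource(id=lb.backend_address_pools[0].id)

# Load-balancing rules (not NAT rules) for web traffic; these also
# provide outbound SNAT for the pool through the LB's public IP.
for port in (80, 443):
    lb.load_balancing_rules.append(
        LoadBalancingRule(
            name=f"lb-rule-{port}",
            protocol="Tcp",
            frontend_port=port,
            backend_port=port,
            frontend_ip_configuration=frontend,
            backend_address_pool=pool,
        )
    )
client.load_balancers.begin_create_or_update("myRG", "myLB", lb).result()
```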

Azure load balancer with IPv6 and IPv4 frontend support

Currently my LB has an IPv4 frontend address and one backend pool with 5 VMs with IPv4 private addresses.
We would like to add IPv6 support to our Service Fabric cluster. I found this article: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview and I see a lot of "Currently not supported" texts.
The IPv6 address is assigned to the LB, but I cannot make rules:
Failed to save load balancer rule 'rulename'. Error: Frontend ipConfiguration '/subscriptions/...' referring to PublicIp with PublicIpAddressVersion 'IPv6' does not match with PrivateIpAddressVersion 'IPv4' referenced by backend ipConfiguration '/subscriptions/...' for the load balancer rule '/subscriptions/...'.
When I try to add a new backend pool, I get this message:
One basic SKU load balancer can only be associated with one virtual machine scale set at any point of time
Questions:
When can we expect the feature allowing multiple LBs in front of one VMSS?
Is it possible to add IPv6 frontend without adding IPv6 to the backend (NAT64?)?
Is it possible to add IPv6 addresses to an existing VM scale set without recreating it?
I'm not sure I understand you exactly; it seems those limitations are listed in that article.
For your questions:
I guess you mean mapping multiple LB frontends to one backend pool. If so, the same frontend protocol and port can be reused across multiple frontends, since each rule must produce a flow with a unique combination of destination IP address and destination port. You can get more details about multiple frontend configurations with LB.
It is not possible. The IP version of the frontend IP address must match the IP version of the target network IP configuration.
NAT64 (translation of IPv6 to IPv4) is not supported.
It is not possible. A VM scale set is essentially a group of load-balanced VMs; there are a few differences between a VM and a VMSS, which you can refer to here. Also, if a network interface has a private IPv6 address assigned to it, you must add (attach) it to a VM when you create the VM. Read the network interface constraints:
You may not upgrade existing VMs to use IPv6 addresses. You must deploy new VMs.
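For reference, a sketch of the version-matching constraint behind that error (azure-mgmt-network Python SDK; the public IP, LB, and subscription names are hypothetical): the frontend must reference an IPv6 public IP, and any rule using it is only valid against backend NIC configurations whose private address version is also IPv6.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import FrontendIPConfiguration

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("myRG", "myLB")

ipv6_pip = client.public_ip_addresses.get("myRG", "myIpv6Pip")
assert ipv6_pip.public_ip_address_version == "IPv6"

# An IPv6 frontend; a rule referencing it must point at a backend pool
# whose NIC ip configurations use private_ip_address_version == "IPv6",
# otherwise the API rejects it with the version-mismatch error above.
lb.frontend_ip_configurations.append(
    FrontendIPConfiguration(
        name="ipv6-frontend",
        public_ip_address=ipv6_pip,
    )
)
client.load_balancers.begin_create_or_update("myRG", "myLB", lb).result()
```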
