Azure AKS Network Analytics - where are these requests to the Kubernetes cluster coming from?

I am a little puzzled by Azure Network Analytics! Can someone help resolve this mystery?
My Kubernetes cluster in Azure is private. It's joined to a vNET and there is no public IP exposed anywhere. The service is configured with an internal load balancer, and the application gateway calls the internal load balancer. An NSG blocks all inbound traffic from the internet to the app gateway; only trusted NAT IPs are allowed at the NSG.
The question is: I am seeing a lot of internet traffic coming to AKS on the vNET. The requests are denied, of course! I don't have the public IP 40.117.133.149 anywhere in the subscription. So, how are these requests reaching AKS?
You can try calling the app gateway from the internet and you will not get any response! http://23.100.30.223/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
You would get a successful response if you call the Azure Function: https://afa-aspnet4you.azurewebsites.net/api/aks/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
It's possible because of the following NSG rules!
Thank you for taking the time to answer my query.
In response to @CharlesXu, I am sharing a little more on the AKS networking. The AKS network is made up of a few address spaces:
Also, there is no public IP assigned to either of the two nodes in the cluster. Only a private IP is assigned to each VM node. Here is an example for node-0:
I don't understand why I am seeing inbound requests to 40.117.133.149 within my cluster!

After searching all the settings and activity logs, I finally found the answer to the mystery IP! A load balancer with an external IP was auto-created as part of the nginx ingress service when I restarted the VMs, and the NSG was updated automatically to allow internet traffic to ports 80/443. I manually deleted the public load balancer along with the IP, but the bad actors were still calling the IP on different ports, which are denied by the default inbound NSG rule.
To reproduce, I removed the public load balancer again along with the public IP. Azure AKS recreated them once I restarted the VMs in the cluster! It's like a cat and mouse game!
I think we can update the ingress service annotation to specify service.beta.kubernetes.io/azure-load-balancer-internal: "true". I don't know why Microsoft decided to auto-provision a public load balancer in the cluster. It's a risk, and Microsoft should correct the behavior by creating an internal load balancer instead.
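For reference, here is a rough sketch of what the ingress controller Service could look like with that annotation. The name, namespace, and selector below are placeholders, not my exact objects:

    # Hypothetical ingress controller Service; names and selector are placeholders.
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
      annotations:
        # Tells the Azure cloud provider to create an internal (vNET-only)
        # load balancer instead of a public one.
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443

With this annotation in place, the Azure cloud provider should provision the load balancer on a private IP inside the vNET rather than on a public one.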

Related

How to allow outbound traffic on internal load balancer

I have several machines in a backend pool associated with an internal load balancer. However, they currently do not have outbound access. The documentation seems to indicate that I should be able to create a public load balancer and attach the same backend pool to it so that I can have outbound access from those machines. However, when I create a public load balancer, I don't have the option of associating it with an existing pool, and when I try to create a new backend pool for the public LB, I can't associate those machines with it. Neither machine has a public IP address. From the dashboard it shows:
where all the interesting info is cut off. What am I missing?
Even VMs in the backend pool of an ILB should have a default outbound IP. If you don't have outbound access, have you checked the security group rules to make sure outbound traffic is allowed?
I'm afraid you can't do this on the same LB for both inbound & outbound traffic.
If you happen to use the Basic SKU, VMs behind the LB have internet access, as outbound connections are NAT'ed by Azure. But all VMs have to be in the same availability set. This wasn't a great idea and we declined it.
If you use a Standard SKU, outbound connections to the internet are not possible. We learned this after many failed and painful attempts. More details here
As discussed in the above link, attaching a public IP to each VM NIC isn't a good idea either.
What worked for us is to create another load balancer specifically for outbound connections, attach a public IP to that LB, and configure outbound rules. More details here
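A rough sketch of that setup with the Azure CLI; all resource names and the NIC details below are placeholders, so adjust them to your environment:

    # Public IP + a separate Standard LB used only for outbound (egress) traffic.
    az network public-ip create -g my-rg -n egress-pip --sku Standard --allocation-method Static

    az network lb create -g my-rg -n egress-lb --sku Standard \
      --public-ip-address egress-pip \
      --frontend-ip-name egress-frontend \
      --backend-pool-name egress-backend

    # Outbound rule that SNATs the backend pool through the public frontend IP.
    az network lb outbound-rule create -g my-rg --lb-name egress-lb -n egress-rule \
      --frontend-ip-configs egress-frontend \
      --address-pool egress-backend \
      --protocol All --outbound-ports 10000

    # Add each VM NIC to the egress backend pool (repeat per NIC).
    az network nic ip-config address-pool add -g my-rg \
      --nic-name vm1-nic --ip-config-name ipconfig1 \
      --lb-name egress-lb --address-pool egress-backend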

Azure Kubernetes Cluster - Accessing and interacting with Service(s) from on-premise

Currently we have the following scenario:
We have established our connection (network wise) from on-premise to the Azure Kubernetes Cluster (private cluster!) without any problems.
Ports which are being routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and test different configurations.
For setting up our AKS, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0 /21
Service CIDR: 10.2.8.0 /23
So, our pods get IPs from our virtual network subnet, and the services get their IPs from the Service CIDR. So far so good. A route table for the virtual network (the subnet has been associated with the route table) forwards all traffic to our firewall and vice versa; interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) has said that the connection and access to the Service CIDR should be working.
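For context, a rough sketch of how a cluster with those ranges might be created via the Azure CLI; the names and the subnet ID are placeholders, and our actual deployment may differ in the details:

    # Hypothetical private AKS cluster with Azure CNI; the subnet sits in the
    # 10.2.0.0/21 virtual network, and services get IPs from 10.2.8.0/23.
    az aks create \
      --resource-group my-rg \
      --name my-private-aks \
      --enable-private-cluster \
      --network-plugin azure \
      --vnet-subnet-id /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aks-subnet \
      --service-cidr 10.2.8.0/23 \
      --dns-service-ip 10.2.8.10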
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which makes it possible to reach the Kubernetes dashboard on port 443, cannot be accessed, for example at https://10.2.8.42/.
My workaround so far is that the Kubernetes dashboard (as a Service, type: ClusterIP) has an external IP set from the virtual network, as sketched below. This all sounds great, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
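Here is a minimal sketch of that workaround; the namespace, selector, and address are placeholders, and the externalIPs entry is an address taken from our virtual network:

    # Hypothetical dashboard Service with an externalIPs entry from the vNET,
    # instead of relying only on the Service CIDR address.
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: ClusterIP
      externalIPs:
        - 10.2.1.42        # placeholder address from the 10.2.0.0/21 virtual network
      selector:
        k8s-app: kubernetes-dashboard
      ports:
        - port: 443
          targetPort: 8443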
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.

Azure - no access to VMs behind an internet-facing standard Load Balancer

SETUP:
I have 2 Ubuntu VMs sitting behind an internet-facing standard load balancer. The LB is zone redundant, and the 2 VMs are set up as HA in zones 1 and 2.
The VMs are spun up with a Virtual Machine Scale Set, and the entire infrastructure is deployed with Terraform.
Applications running in containers on the VMs are exposed on port 5050.
Inbound rules are set to allow traffic on ports 80 and 5050.
The VMs are in the LB backend pool.
PROBLEM:
When the VMs are up and running and I access the console, the VMs are unable to connect to the Ubuntu repo or download any external packages.
Deleting and scaling out VMs - same issue.
Load balancer rules
Load balancer health probe
However, when I delete the LB rules and LB probe and recreate them, I am immediately able to download packages from the Ubuntu repo or any other external link.
I also deleted one VM and scaled out a new VM (after recreating the LB rules and probe), and the Ubuntu and Docker packages install successfully.
This is driving me crazy; has anyone come across this?
I cannot reproduce this issue in the same scenario when I deploy the entire infrastructure via the Azure portal.
According to control outbound connectivity for Standard Load Balancer:
If you want to establish outbound connectivity to a destination
outside of your virtual network, you have two options:
assign a Standard SKU public IP address as an Instance-Level Public IP address to the virtual machine resource or
place the virtual machine resource in the backend pool of a public Standard Load Balancer.
Both will allow outbound connectivity from the virtual network to outside of the virtual network.
So, this issue may happen because the load balancer rules had not taken effect initially or were not configured correctly, or because the public-facing load-balancing frontend IP had not been provisioned. You may also check whether there is any firewall or restriction on outbound traffic from your VMSS instances.
When I provisioned these resources, I had to associate an NSG that whitelists the allowed traffic to the subnet of the VMSS instances. This triggers the Standard LB to begin receiving the incoming traffic. Also, I changed the upgrade policy to automatic.
Hope this information could help you.
I had the same issue. Once I added a load balancing rule, my VMs had internet access.
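For reference, a rough sketch of recreating the probe and rule with the Azure CLI; the names are placeholders and must match your LB's frontend and backend pool:

    # Health probe against the app port.
    az network lb probe create -g my-rg --lb-name my-std-lb -n app-probe \
      --protocol Tcp --port 5050

    # Load-balancing rule for port 80 -> 5050. With a Standard SKU public LB,
    # default SNAT to the internet is only provided once the backend VMs sit
    # behind a load-balancing rule (or an explicit outbound rule).
    az network lb rule create -g my-rg --lb-name my-std-lb -n app-rule \
      --protocol Tcp --frontend-port 80 --backend-port 5050 \
      --frontend-ip-name my-frontend --backend-pool-name my-backend \
      --probe-name app-probe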

Understanding Test-AzureRmPrivateIPAddressAvailability: Application Gateway private IPs are listed as available

I'm working on a script to figure out which IPs are available for an Application Gateway if there are already Gateways in the subnet.
When I use Test-AzureRmPrivateIPAddressAvailability and test an IP address that's being used by the frontend of an Application Gateway, it still outputs Available. Should it be unavailable?
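For example, a minimal sketch of the check I am describing (resource names and the address are placeholders):

    # Hypothetical check: test an IP that is already used as an Application
    # Gateway frontend private IP; as described above, it still reports Available.
    $vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
    Test-AzureRmPrivateIPAddressAvailability -VirtualNetwork $vnet -IPAddress "10.0.1.10"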
Not sure, but it seems that is a bug. Private IPs can be associated with a VM, a Load Balancer, or an Application Gateway, and the availability shown should not differ between them. You can get more details about Private IP Addresses.
I did a test: when a private IP address is associated with a VM or a Load Balancer, the availability of the IP shows False, except in the case of the Application Gateway.
But don't worry, it does not affect the function of the Application Gateway or the virtual network. When a private IP is associated with an Application Gateway, Azure will disallow other Application Gateways from using it (when an Application Gateway is created in a subnet, that subnet can only contain Application Gateways; see this). Maybe this issue will be fixed in the future.
Hope this will help you.

Cannot access Azure VM Scale Set IP address externally

I have created a Virtual Machine Scaleset in Azure
This scaleset is made up of 5 VMs
There is a public IP.
When I ping my public IP I get no response, nor do I get a response with the full name, e.g.
myapp.uksouth.cloudapp.azure.com
Is there something I have missed?
I am wondering if I have to add my machine's IP somewhere?
I am trying to remote into the machines within the scaleset eventually!
This scale set will be used for Azure Service Fabric.
Paul
If you deploy your scale set with "public IP per VM", then each VM gets its own public IP: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking#public-ipv4-per-virtual-machine. However, this is not the default in the portal. In the portal, the default is to create a load balancer in front of the scale set with a single public IP on the LB (today, at least; no guarantee it will stay this way). It also comes with NAT rules configured to allow RDP/SSH on ports 50000 and above. They won't necessarily be contiguous, though (at least in the default configuration), so you will need to examine the NAT rules on the load balancer to see which ports are relevant. Once you do, you should be able to do ssh -p <port-from-nat-rule> <public-ip> to ssh in (or similar in your RDP client for Windows).
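A quick sketch of how you might look those up and connect (resource and user names are placeholders):

    # List the inbound NAT rules on the scale set's load balancer to find the
    # per-instance SSH/RDP port (typically 50000 and above).
    az network lb inbound-nat-rule list -g my-rg --lb-name my-lb -o table

    # Or let the CLI map instances to their public endpoint directly.
    az vmss list-instance-connection-info -g my-rg -n my-vmss

    # Then connect using the port from the relevant NAT rule.
    ssh -p 50001 azureuser@<public-ip>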
When I do a ping on my public ip I get no response
Azure does not support ping.
For a test, you can use RDP/SSH to the public IP address with the different ports to test the connection.
Did you create the VMSS from the Azure Marketplace? If yes, the Azure LB will be configured.
If the load balancer was created by yourself, please check the LB probes, the backend pools (all VMs should be in the backend pool), the load balancer rules, and the NAT rules.
Also, you can configure Log Analytics for the Azure load balancer to monitor it.
