Azure Kubernetes Cluster - Accessing and interacting with Service(s) from on-premises

Currently we have the following scenario:
We have established our connection (network-wise) from on-premises to the Azure Kubernetes cluster (a private cluster!) without any problems.
The following ports are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and are testing different configurations.
To set up AKS, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0 /21
Service CIDR: 10.2.8.0 /23
So our pods get IPs from the virtual network subnet and the services get theirs from the Service CIDR. So far so good. A route table (associated with the subnet of the virtual network) forwards all traffic to our firewall and vice versa: interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) said that connecting to and accessing the Service CIDR should work.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which makes it possible to reach the Kubernetes dashboard on port 443, cannot be accessed, e.g. https://10.2.8.42/.
My workaround so far is to give the Kubernetes dashboard Service (type: ClusterIP) an external IP from the virtual network. This sounds great, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
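For reference, this is roughly what that workaround looks like as a manifest. It's only a sketch: the namespace, selector and ports follow the standard dashboard deployment, and 10.2.0.50 is just an assumed free address from the 10.2.0.0/21 virtual network range.

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: ClusterIP
      # Workaround: expose the dashboard on an address from the virtual network
      # (reachable through the route table / firewall) instead of the Service CIDR.
      externalIPs:
        - 10.2.0.50   # assumed free IP from the 10.2.0.0/21 VNet range
      ports:
        - port: 443
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard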
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.

Related

Azure AKS Network Analytics - where are these requests to the Kubernetes cluster coming from?

I am a little puzzled by Azure Network Analytics! Can someone help resolve this mystery?
My Kubernetes cluster in Azure is private. It's joined to a VNet and there is no public IP exposed anywhere. The service is configured with an internal load balancer. The application gateway calls the internal load balancer. The NSG blocks all inbound traffic from the internet to the app gateway. Only trusted NAT IPs are allowed at the NSG.
The question is: I am seeing a lot of internet traffic coming to AKS on the VNet. The requests are denied, of course! I don't have this public IP 40.117.133.149 anywhere in the subscription. So, how are these requests reaching AKS?
You can try calling the app gateway from the internet and you will not get any response: http://23.100.30.223/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
You would get a successful response if you call it via the Azure Function: https://afa-aspnet4you.azurewebsites.net/api/aks/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
It's possible because of the following NSG rules!
Thank you for taking the time to answer my query.
In response to @CharlesXu, I am sharing a little more on the AKS networking. The AKS network is made up of a few address spaces.
Also, there is no public IP assigned to either of the two nodes in the cluster; only a private IP is assigned to each VM node. Here is an example from node-0.
I don't understand why I am seeing inbound requests to 40.117.133.149 within my cluster!
After searching all the settings and activity logs, I finally found the answer to the mystery IP! A load balancer with an external IP was auto-created as part of the nginx ingress service when I restarted the VMs. The NSG was updated automatically to allow internet traffic to ports 80/443. I manually deleted the public load balancer along with the IP, but the bad actors were still calling the IP on different ports, which are denied by the default inbound NSG rule.
To reproduce, I removed the public load balancer again along with the public IP. Azure AKS recreated it once I restarted the VMs in the cluster! It's like a cat-and-mouse game!
I think we can update the ingress service annotation to specify service.beta.kubernetes.io/azure-load-balancer-internal: "true". I don't know why Microsoft decided to auto-provision a public load balancer in the cluster. It's a risk, and Microsoft should correct the behavior by creating an internal load balancer instead.
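For example, a minimal sketch of an ingress controller Service carrying that annotation; the name, selector and ports here are illustrative, not taken from the cluster above:

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller
      annotations:
        # Ask AKS for an internal (VNet-only) load balancer instead of a public one.
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: nginx-ingress
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443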

Azure App Service doesn't see Worker Roles in VNET

I need to configure an Orleans cluster to connect to an Azure App Service. The issue is that networking is my weakest point ;).
I have configured an Orleans silo using an Azure Worker Role (4 instances), listening on the default ports:
.ConfigureEndpoints(siloPort: 11111, gatewayPort: 30000)
I've assigned the Worker Role to an Azure VNET (Classic) with these settings:
Address Range 10.0.0.0/24
Subnet-1 10.0.0.0/27 (the Worker Role is assigned here as part of a network security group)
Point to Site range 10.0.1.0/24
GatewaySubnet 10.0.0.32/29 (added to the same network security group)
I see that the 4 instances take proper IPs in the Subnet-1: 10.0.0.4 to 10.0.0.7.
The App Service is assigned to this VPN ("Certificates in sync") and reports:
IP ADDRESSES ROUTED TO VNET
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
I see that the app service tries to connect to 10.0.0.7:30000
I tested, both by checking application diagnostics and by using tcpping, that 10.0.0.7:30000 is not accessible by the application (Could not connect to 10.0.0.7:30000: AccessDenied).
I am definitely missing something elementary here, I haven't configured IPs in a decade!
(This is similar to Vnet between Virtual Machine and App Service in Azure but in this case I do want to configure the VNet, and I have a specific practical issue)
For the networking, I suggest verifying the following things:
You have integrated your app into a Classic VNET and enabled Point to Site in the Classic VNet, as described in this doc.
Confirm that the desired ports on the Orleans cluster are listening. You can go through this website to troubleshoot the Orleans cluster side.
Firewall (VM or host level) and NSG rules allow the desired ports. Get more details from this.
For more reference, see Create a VNET and access an Azure VM hosted within it from an App Services Web App.
After checking in detail all the documents Nancy provided, I ended up connecting to one of the cloud service VMs (the silo). I needed to allow it through the NSG. I checked with netstat -aon that the service was listening on the expected ports. I could ping the other instances of the service.
Then I downloaded tcping and tried to connect to the expected ports from that instance to the others. It was blocked. As I was within the same silo, I could now pinpoint the problem to "Firewall (VM or host level)" (one of the possible issues Nancy mentioned).
The solution was to configure the endpoints in the Cloud Service definition (csdef); without that, the VM firewall was blocking access to these ports. I naively thought it was enough to configure them at the SiloBuilder level, but SiloBuilder is the application layer; it doesn't update the VM it's running on.
The result is that netstat -aon now showed the service connections to 11111 as "established", not just "listening", and the VM's firewall showed the new rules. The worker role instances could connect to each other.
Still, the app service (web app) couldn't connect to the host:port of any of the worker roles. I tried removing the NSG, but that caused the instances to stop seeing each other again, so I reassigned the NSG to Subnet-1 and the GatewaySubnet.
The final thing I tried was to disconnect the App Service from the VNET and reconnect it. I ran into other (unrelated) errors at that step; I will update the post when I sort them out.

Azure Kubernetes Service nodes cannot access internet

I'm trying to expose web services from my existing service on an AKS managed cluster on Azure. I did the NSG port configuration from the portal to let outbound traffic go out and restarted the VM several times, but my node cannot ping anything on the internet. I'm not trying to ping something by its FQDN; I'm trying it with its IP address. How can I reach a service in my cluster from the internet?
How did you create the service and pod? By default, a LoadBalancer service will create all the rules for you and you don't need to create the rules yourself (see the sketch below).
Can you share your pod details?
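For illustration, a minimal sketch of a pod plus a LoadBalancer Service (the app name, image and port are placeholders); on AKS this normally provisions the public IP and the load balancer rules automatically:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer   # AKS creates the Azure load balancer and its rules for you
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 80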

Access Azure Service Fabric application from internet

I think I'm missing something that is on the surface.
I have created an SF cluster in Azure. It has a load balancer and a network with 3 VMs (nodes) which have IP addresses in 10.0.0.0/16.
When I ask the load balancer for an application endpoint, it responds with a node IP address (I captured packets with Wireshark), but I can't access it because the network is private.
A bit more info about my case: 3 x A0 instances, net.tcp:20001 endpoints, firewall allows connections, ports opened and listening, I have a public IP address assigned to the balancer, and a probe for the service port.
On your load balancer you will need to assign a public IP address. You can find some really good detailed guides in the documentation for this.
OK, here it is:
When you want to communicate with the service from outside the cluster, just use the load balancer IP; you don't need the naming service communication. The load balancer has probes that check the ports on each node in the cluster and forward your request to a random instance that hosts the service you are asking for.
When you want to communicate from one microservice to another within the cluster, you have two options:
Ask the naming service through the load balancer and then communicate with the service directly.
If you know for sure that the service runs on every node in your cluster, you can just communicate with localhost directly.
When you want to communicate from a separate VM to a microservice in the cluster from within the cluster's virtual network (you can connect a Web App to the cluster using VPN), you can ask the naming service through the load balancer, but using the Service Fabric HTTP API, because you will not be able to use Service Fabric classes on a VM which doesn't have the Service Fabric SDK installed. Here is an example of service resolving: https://github.com/lAnubisl/ServiceFabricHttpApiClient

How do I expose Kubernetes service to the internet?

I am running a kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with the Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the kubernetes guest book frontend service with a hack. What I did was check all the listening ports on the master and try each port with the public IP. I was able to hit the kubernetes service and get a response. Based on my understanding, this goes directly to the pod that is running on the master (which is also a node) rather than through the service IP (which would have load balanced across any of the pods).
My question is how do I map the external IP to the service IP? I know kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or if you create the service from a JSON or YAML file, use the spec.externalIPs array.
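For example, a minimal sketch of such a Service using spec.externalIPs; the labels are placeholders and 52.0.0.10 stands in for whatever externally reachable IP is routed to the node:

    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      selector:
        app: guestbook
        tier: frontend
      ports:
        - port: 80
          targetPort: 80
      externalIPs:
        - 52.0.0.10   # placeholder for the node's externally reachable address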
