I'm trying to expose web services from my existing service on an AKS managed cluster on Azure. I configured the NSG ports from the portal to let outbound traffic out and restarted the VM several times, but my node cannot ping anything on the internet. I'm not trying to ping anywhere by FQDN; I'm trying it with the IP address. How can I reach a service in my cluster from the internet?
How did you create the service and pod? By default, a LoadBalancer service will create all the rules for you and you don't need to create the rules yourself.
Could you share your pod details?
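For reference, a minimal LoadBalancer service looks like the sketch below (the name, selector and ports are placeholders); AKS provisions the Azure load balancer and its rules from it automatically:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                # placeholder name
    spec:
      type: LoadBalancer          # AKS provisions the Azure load balancer and its rules from this
      selector:
        app: my-app               # placeholder pod label
      ports:
        - port: 80                # port exposed on the load balancer
          targetPort: 8080        # port the pods listen on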
Currently we have the following scenario:
We have established our connection (network-wise) from on-premises to the Azure Kubernetes cluster (a private cluster!) without any problems.
Ports which are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and are testing different configurations.
For setting up our AKS, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0 /21
Service CIDR: 10.2.8.0 /23
So, our pods get their IPs from our virtual network subnet, and the services get theirs from the Service CIDR. So far so good. A route table (the subnet has been associated with it) forwards all traffic from the virtual network to our firewall and vice versa; interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) has said that the connection and access to the Service CIDR should be working.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which should make the dashboard reachable via port 443 (for example https://10.2.8.42/), cannot be accessed.
My workaround so far is to give the Kubernetes dashboard Service (type: ClusterIP) an external IP from the virtual network. This sounds all great, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
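For illustration, the workaround corresponds roughly to a manifest like this (the address is a placeholder from the virtual network range, and the namespace/selector depend on the dashboard version):

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system        # namespace depends on the dashboard version
    spec:
      type: ClusterIP
      externalIPs:
        - 10.2.4.10                 # placeholder address from the virtual network, not the Service CIDR
      ports:
        - port: 443
          targetPort: 8443          # the dashboard container's HTTPS port
      selector:
        k8s-app: kubernetes-dashboard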
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.
I am a little bit puzzled by Azure Network Analytics! Can someone help resolve this mystery?
My Kubernetes cluster in Azure is private. It's joined to a vNET and there is no public IP exposed anywhere. The service is configured with an internal load balancer, and an Application Gateway calls the internal load balancer. An NSG blocks all inbound traffic from the internet to the app gateway; only trusted NAT IPs are allowed at the NSG.
The question is: I am seeing a lot of internet traffic coming to AKS on the vNET. These requests are denied, of course! I don't have the public IP 40.117.133.149 anywhere in the subscription. So, how are these requests coming to AKS?
You can try calling the app gateway from the internet and you will not get any response: http://23.100.30.223/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
You would get a successful response if you call the Azure Function: https://afa-aspnet4you.azurewebsites.net/api/aks/api/customer/FindAllCountryProvinceCities?country=United%20States&state=Washington
It's possible because of the following NSG rules!
Thank you for taking the time to answer my query.
In response to @CharlesXu, I am sharing a little more on the AKS networking. The AKS network is made up of a few address spaces.
Also, there is no public IP assigned to either of the two nodes in the cluster; only a private IP is assigned to each VM node. Here is an example of node-0:
I don't understand why I am seeing inbound requests to 40.117.133.149 within my cluster!
After searching all the settings and activity logs, I finally found the answer to the mystery IP! A load balancer with an external IP was auto-created as part of the nginx ingress service when I restarted the VMs. The NSG was updated automatically to allow internet traffic to ports 80/443. I manually deleted the public load balancer along with the IP, but the bad actors kept calling the IP on different ports, which are denied by the default inbound NSG rule.
To reproduce, I removed the public load balancer again along with the public IP. Azure AKS recreated them once I restarted the VMs in the cluster! It's like a cat and mouse game!
I think we can update the ingress service annotation to specify service.beta.kubernetes.io/azure-load-balancer-internal: "true". I don't know why Microsoft decided to auto-provision a public load balancer in the cluster. It's a risk, and Microsoft should correct the behavior by creating an internal load balancer.
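For anyone hitting the same behavior, the annotation goes on the ingress controller's Service, roughly like this (a sketch; the name and selector depend on how the controller was installed):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress-controller     # name depends on how the controller was installed
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # provision an internal LB instead of a public one
    spec:
      type: LoadBalancer
      selector:
        app: nginx-ingress               # selector depends on the install
      ports:
        - name: http
          port: 80
        - name: https
          port: 443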
We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres and Azure CosmosDB (MongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and aks should be configured so that pods inside the cluster can access the above services or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource (see the sketch after this list)
Hardcoded the connection params of all the services as env vars inside the container image
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs
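The ConfigMap attempt looked roughly like this (a reconstruction with hypothetical names and placeholder values, not the actual manifest):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-conn                                  # hypothetical name
    data:
      REDIS_HOST: myredis.redis.cache.windows.net     # placeholder public addresses
      REDIS_PORT: "6380"
      POSTGRES_HOST: mypg.postgres.database.azure.com
    ---
    # Referenced from the Deployment's container spec:
    #   envFrom:
    #     - configMapRef:
    #         name: app-conn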
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume that you would like all the services to access each other and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can give it a try by following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). In the end, good luck to you!
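The document boils down to a single annotation on a LoadBalancer-type Service; a minimal sketch along those lines (the name and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: internal-app          # placeholder
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # internal LB in the cluster's vNET
    spec:
      type: LoadBalancer          # still type LoadBalancer; the annotation makes it internal
      ports:
        - port: 80
      selector:
        app: internal-app         # placeholder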
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connections from all pods in your cluster will go through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a Public IP and use it, as stated in this article, using the loadBalancerIP spec.
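A sketch of that second option, assuming the Public IP has already been created (the service name and the address below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: egress-anchor         # hypothetical service used only to pin the outbound SNAT IP
    spec:
      type: LoadBalancer
      loadBalancerIP: 52.0.0.10   # placeholder: the pre-created static public IP
      ports:
        - port: 80                # at least one port is required on a Service
      selector:
        app: egress-anchor        # placeholder selector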
On a side note, rather than a ConfigMap, due to the sensitivity of the connection string, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as an environment variable.
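A minimal sketch with hypothetical names and values; the Deployment then references it with a secretRef instead of the configMapRef shown in the question:

    apiVersion: v1
    kind: Secret
    metadata:
      name: app-conn-secret               # hypothetical name
    type: Opaque
    stringData:                           # stringData avoids manual base64 encoding
      REDIS_KEY: not-a-real-key           # placeholder credentials
      POSTGRES_PASSWORD: not-a-real-password
    ---
    # In the Deployment's container spec, swap configMapRef for secretRef:
    #   envFrom:
    #     - secretRef:
    #         name: app-conn-secret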
I think I'm missing something that is right on the surface.
I have created an SF cluster in Azure. It has a load balancer and a network with 3 VMs (nodes) which have IP addresses in 10.0.0.0/16.
When I ask the load balancer for the application endpoint, it responds with a node IP address (I captured packets with Wireshark), but I can't access it because the network is private.
A bit more info about my case: 3 x A0 instances, net.tcp:20001 endpoints, the firewall allows connections, the ports are opened and listening, I have a public IP address assigned to the balancer, and a probe for the service port.
On your load balancer you will need to assign a public IP address. You can find some really good, detailed guides for this in the documentation.
OK, here it is:
When you want to communicate with the service from outside the cluster, just use the load balancer IP; you don't need the naming service communication. The load balancer has probes that check ports on each node in the cluster and forward your request to a random instance that hosts the service you are asking for.
When you want one microservice to communicate with another within the cluster, you have 2 options:
ask the naming service through the load balancer and then communicate with the service directly.
if you know for sure that the service should be on every node in your cluster, you can just communicate with localhost directly.
When you want to communicate from a separate VM to a microservice in the cluster from within the cluster's virtual network (you can connect a WebApp to the cluster using VPN), you can ask the naming service through the load balancer, but using the Service Fabric HTTP API, because you will not be able to use the Service Fabric classes on a VM which doesn't have the Service Fabric SDK installed. Here is an example of service resolving: https://github.com/lAnubisl/ServiceFabricHttpApiClient
I am trying to restrict access to my 2 Ubuntu VMs that I have created in Azure for the default Elasticsearch port of 9200 and can't seem to get it working.
My virtual machines are part of the same cloud service, and the endpoints are set up to use the same load-balanced set so that any request to the cloud service on port 9200 is load balanced between my 2 VMs. That all seems to be working as expected.
I want to set these up so that only my Azure Websites can access them directly, so I figured I need to manage the ACL for the VMs. To test it out, I tried setting the ACL to deny my specific IP address for the port 9200 endpoint on both servers, but when I do that I can still access them over that port.
I tested denying my IP address on the SSH endpoint and was successfully blocked from getting onto the servers over SSH. So my only guess is that the load-balanced set for these endpoints is causing the ACL to not work properly.
Is there a better way to handle this, maybe using Traffic Manager instead of the load-balanced set for the VMs on the same cloud service? My backup plan would be to use iptables on each VM to set the restrictions, but ideally I'd be able to handle this in the Azure portal if possible.
Thanks.