I am new to Kubernetes and looking for a better understanding of the difference between Kube-DNS and CoreDNS.
As I understand it, the recommendation is to use the newer CoreDNS rather than the older Kube-DNS.
I have set up a small cluster using kubeadm, and now I am a little confused about the difference between CoreDNS and Kube-DNS.
Using kubectl get pods --all-namespaces I can see that I have two CoreDNS pods running.
However using kubectl get svc --all-namespaces I also see that I have a service named kube-dns running in the kube-system namespace. When I inspect that with kubectl describe svc/kube-dns -n kube-system I can see that the kube-dns service links to coredns.
I am now wondering if I am actually running both kube-dns and coredns. If not, why is that service called kube-dns and not coredns?
I have K8S 1.12. Do a describe of the DNS pod:
kubectl describe pod coredns-576cbf47c7-hhjrs --namespace=kube-system | grep -i "image:"
Image: k8s.gcr.io/coredns:1.2.2
Looks like coredns is running. According to the documentation, CoreDNS is the default from K8S 1.11; for previous installations it's kube-dns.
The image is what's important; the rest is metadata (names, labels, etc.).
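If you want to confirm this on your own cluster, a quick check (assuming the standard kubeadm labels; adjust if your setup differs) is to compare the Service's selector with the labels on the CoreDNS pods. The Service keeps the legacy kube-dns name for compatibility, but it selects the coredns pods:

kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.selector}'
kubectl get pods -n kube-system -l k8s-app=kube-dns -o name

The first command should print a selector along the lines of k8s-app=kube-dns, and the second should list the coredns-* pods, i.e. the kube-dns Service is simply fronting CoreDNS.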
According to the K8S blog here.
In Kubernetes 1.11, CoreDNS has reached General Availability (GA) for DNS-based service discovery, as an alternative to the kube-dns addon. This means that CoreDNS will be offered as an option in upcoming versions of the various installation tools. In fact, the kubeadm team chose to make it the default option starting with Kubernetes 1.11.
Also, see this link for more info.
Related
Before, I could use kubectl logs devops2-pdf-xxx to check the logs of the pods.
But after I upgraded the kubectl version, I can no longer do that, so it seems the service is not running well.
But when I run kubectl describe node, the resource allocation is less than 100%.
kubectl logs xxx:
Error from server: Get "https://aks-agentpool-123456-1:10250/containerLogs/default/devops2-deployment-123456-456/devops2-pdf": dial tcp 10.240.0.5:10250: i/o timeout
There are several options to solve this problem. It is probably related to a closed port:
First, check that port 10250 is open. A similar problem is described here.
You are using AKS, so check the solution described here:
Make sure that the default network security group isn't modified and that both port 22 and 9000 are open for connection to the API server. Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command. If it isn't, force deletion of the pod and it will restart.
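A minimal sketch of those two checks (the node IP is taken from the error above, and the tunnelfront pod name is a placeholder):

# From another node or a debug pod inside the cluster, check that the kubelet port is reachable:
nc -vz 10.240.0.5 10250

# Find the tunnelfront pod and, if it is stuck, force-delete it so it gets recreated:
kubectl get pods -n kube-system | grep tunnelfront
kubectl delete pod <tunnelfront-pod-name> -n kube-system --grace-period=0 --force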
You can also check the official Microsoft help page:
These timeouts may be related to internal traffic between nodes being blocked. Verify that this traffic is not being blocked, such as by network security groups on the subnet for your cluster's nodes.
or this one.
I have a Kubernetes (K8s) cluster on Azure (AKS). I configured an ingress controller (Nginx) to make my service publicly available. AKS relies on an Azure DNS Zone to create an "A" record that maps to a public IP address generated by Azure. The issue is that after a couple of days the IP address disappears and the Ingress stops working.
My workaround is currently to delete the following pods each time it happens:
kubectl delete pod addon-http-application-routing-external-dns-XXXX -n kube-system
kubectl delete pod addon-http-application-routing-nginx-ingress-controller-xxxxxx -n kube-system
Does anyone know why the IP gets lost each time? Is there a permanent fix to that?
Thanks!
I am trying to set up horizontal pod autoscaling using custom metrics. For custom-metrics support in Kubernetes 1.8.1, I need to enable the aggregation layer by setting the following flags on kube-apiserver (a sketch of where these flags would normally live follows the list):
--requestheader-client-ca-file=<path to aggregator CA cert>
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=<path to aggregator proxy cert>
--proxy-client-key-file=<path to aggregator proxy key>
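For reference, on a self-managed control plane (e.g. a kubeadm cluster) these flags would go into the kube-apiserver static pod manifest. A minimal sketch, assuming kubeadm's default front-proxy certificate paths (your file names and the allowed name differ if you generated your own aggregator certs):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, illustrative)
spec:
  containers:
  - command:
    - kube-apiserver
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key

On AKS the control plane is managed, so this manifest is not exposed, which explains the situation described below.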
The Kubernetes documentation does not contain any information on how to set these flags on the API server and controller manager. I am using Azure Kubernetes Service (AKS).
I am not sure, but I think one possible way to set these flags could be to edit the YAML of the kube-apiserver-xxx pod; however, when I run:
kubectl get po -n kube-system
I get no pod for kube-apiserver, nor for the kube-controller-manager.
What is the possible way to set these flags in AKS?
I also deployed the Prometheus adapter for custom metrics, but the pod logs showed me the following error:
panic: cluster doesn't provide requestheader-client-ca-file
You can see the exact requirements in the configuration section of this link.
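The adapter reads that value from the extension-apiserver-authentication ConfigMap in the kube-system namespace, so a quick way to check whether the cluster publishes it (a sketch) is:

kubectl get configmap extension-apiserver-authentication -n kube-system -o yaml | grep requestheader

If nothing is printed, the request-header CA for the aggregation layer is not configured on the control plane, which matches the panic above.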
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
AKS now supports the aggregated API; you can find specific scaling details in the following GitHub comment: https://github.com/Azure/AKS/issues/77#issuecomment-352926551. Run "az aks upgrade", even to the same Kubernetes version, and AKS will update the control plane with the required certificates on the backend.
Support for the aggregation layer was added a couple of weeks ago, so no configuration should be necessary for a new cluster. Please see the details here: https://github.com/Azure/AKS/issues/54
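Once the cluster has been upgraded, a hedged way to verify that the aggregation layer is serving your custom metrics API (the exact API group depends on which adapter you deploy; custom.metrics.k8s.io is the usual one for the Prometheus adapter) is:

kubectl get apiservices | grep metrics
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | head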
I use Google's managed Kubernetes (GKE) with preemptible instances.
I ran into a problem: when Google preempts the node that is serving the kube-dns pod, I get 5-7 minutes of failures in all the other pods, with a "Cannot resolve" error.
I tried running a second kube-dns pod, but sometimes both DNS pods end up on the same node and I get the failures again. I tried defining a nodeSelector for the kube-dns pod but got this error:
Pod "kube-dns-2185667875-8b42l" is invalid: spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Is there a way to run the DNS pods redundantly on different nodes? Is there any best practice for this?
You cannot modify a Pod like this; you need to modify your Deployment. You might also want to look into pod anti-affinity to spread the pods of the same Deployment so that they are never scheduled on the same node. Alternatively, you can switch from a Deployment to a DaemonSet to get exactly one pod running per node in the cluster.
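As a minimal sketch (assuming the DNS pods carry the usual k8s-app: kube-dns label; adjust the names to your cluster), the anti-affinity stanza in the Deployment would look roughly like this:

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # Never schedule two DNS pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: kube-dns
            topologyKey: kubernetes.io/hostname

With preemptible nodes you may prefer the preferredDuringSchedulingIgnoredDuringExecution variant, so that a replacement pod can still be scheduled while only one node is available.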
First of all, I am not a K8s expert; I understand some of the concepts and have already gotten my hands dirty with the configuration.
I correctly set up the cluster configured by my company, but I have this issue:
I am working on a cluster with 2 pods, ingress rules are correctly configured for www.my-app.com and dashboard.my-app.com.
Both pods run on the same VM.
If I enter the dashboard pod (kubectl exec -it $POD bash) and try to curl http://www.my-app.com, I land on the dashboard pod again (the same happens the other way around, from www to dashboard).
I have to use http://www-svc.default.svc.cluster.local and http://dashboard-svc.default.svc.cluster.local to reach the correct pods, but this is a problem (links generated by the other app will contain the internal k8s host instead of the "public" URL).
Is there a way to configure routing so I can access pods with their "public" hostnames, from the pods themselves?
So what should happen when you curl is: the external DNS record (www.my-app.com in this case) resolves to your external IP address, usually a load balancer, which then sends traffic to a Kubernetes Service. That Service should then send traffic to the appropriate pod. It would seem that you have a misconfigured Service. Make sure your Service has an external IP that is different between dashboard and www. To see this, a simple kubectl get svc should suffice. My guess is that the external IP is wrong, or the Service is pointing to the wrong pod, which you can see with kubectl describe svc <name of service>.
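A hedged way to compare the two (using the service names from the question) would be:

kubectl get svc www-svc dashboard-svc -o wide
kubectl describe svc www-svc | grep -E 'Selector|Endpoints|LoadBalancer Ingress'
kubectl describe svc dashboard-svc | grep -E 'Selector|Endpoints|LoadBalancer Ingress'

If both Services report the same external IP, or the Endpoints of one Service list the pods of the other, that points at the misrouting described in the question.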