Adding nameservers to Kubernetes DNS

I'm using Kubernetes v1.0.6 on AWS that has been deployed using kube-up.sh.
Cluster is using kube-dns.
$ kubectl get svc kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
Which works fine.
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal
Name: kubernetes.default
Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal
This is the resolv.conf of a pod.
$ kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.0.0.10
nameserver 172.20.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
Is it possible to have the containers use an additional nameserver?
I have a secondary DNS-based service discovery (on, let's say, 192.168.0.1) that I would like my Kubernetes containers to be able to use for DNS resolution.
ps. A kubernetes 1.1 solution would also be acceptable :)
Thank you very much in advance,
George

The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
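For illustration, a minimal sketch of the --resolv-conf approach, assuming a node-local file at /etc/kubernetes/resolv.conf (a hypothetical path, pick your own) and reusing the 172.20.0.2 and 192.168.0.1 addresses from the question:
# /etc/kubernetes/resolv.conf on the node (hypothetical path)
nameserver 172.20.0.2    # the node's existing upstream resolver
nameserver 192.168.0.1   # the additional service-discovery DNS

# start the kubelet pointing at that file (other flags unchanged)
kubelet --resolv-conf=/etc/kubernetes/resolv.conf ...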

In Kubernetes (probably) 1.2 we'll be moving to a model where nameservers are assumed to be fungible. There are too many resolvers that break when different nameservers serve different subsets of DNS, and there is no real specification here that we can point to.
In other words, we'll start dropping the host's nameserver records from the container's merged resolv.conf and making our own DNS server the only nameserver line. Our DNS will be able to forward requests to upstream nameservers.

I eventually managed to solve this pretty easily by configuring SkyDNS to add an additional nameserver. You can just add the SKYDNS_NAMESERVERS environment variable, as defined in the SkyDNS docs, to your SkyDNS replication controller. It has minimal impact and does not depend on node changes, etc.
env:
  - name: SKYDNS_NAMESERVERS
    value: 10.0.0.254:53,10.0.64.254:53
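For context, this is roughly where the variable sits in the SkyDNS replication controller spec; the container name and image are placeholders, so match them to whatever your skydns addon manifest actually uses:
spec:
  containers:
    - name: skydns                  # placeholder container name
      image: <your-skydns-image>    # use the image from your addon manifest
      env:
        - name: SKYDNS_NAMESERVERS
          value: 10.0.0.254:53,10.0.64.254:53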

For those using Kubernetes kube-dns, neither the -nameservers flag nor the SKYDNS_NAMESERVERS environment variable is available any longer.
Usage of /kube-dns:
--alsologtostderr log to standard error as well as files
--config-map string config-map name. If empty, then the config-map will not used. Cannot be used in conjunction with federations flag. config-map contains dynamically adjustable configuration.
--config-map-namespace string namespace for the config-map (default "kube-system")
--dns-bind-address string address on which to serve DNS requests. (default "0.0.0.0")
--dns-port int port on which to serve DNS requests. (default 53)
--domain string domain under which to create names (default "cluster.local.")
--healthz-port int port on which to serve a kube-dns HTTP readiness probe. (default 8081)
--kube-master-url string URL to reach kubernetes master. Env variables in this flag will be expanded.
--kubecfg-file string Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--version version[=true] Print version information and quit
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Now, either you put your nameservers in the host's resolv.conf, so DNS is inherited from the node, or you use a custom resolv.conf and pass it to the kubelet with the --resolv-conf flag, as explained here.
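For the first option, a minimal sketch to run on each node (192.168.0.1 is the example address from the question; note that on nodes managed by systemd-resolved or DHCP this file may be overwritten, so make the change persistent in whatever manages it):
echo "nameserver 192.168.0.1" | sudo tee -a /etc/resolv.conf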

You need to know the IP of your CoreDNS service to set it as a secondary DNS.
Run this command to get the CoreDNS IP:
kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 43d
metrics-server ClusterIP 172.20.232.147 <none> 443/TCP 43d
This is how I set up DNS in my deployment YAML.
I used the Google DNS IP (for clarity) and my CoreDNS IP, but you should use your VPC DNS and your CoreDNS server.
containers:
  - name: nginx
    image: nginx
    ports:
      - containerPort: 8080
dnsPolicy: None
dnsConfig:
  nameservers:
    - 8.8.8.8
    - 172.20.0.10
  searches:
    - 1b.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    - ec2.internal
  options:
    - name: ndots
      value: "5"

Related

nginx Ingress controller on Kubernetes not able to access

I installed the nginx ingress controller on an AKS cluster, but I am not able to access the ingress endpoints exposed by our app. From initial analysis, we see that the ingress endpoints have been assigned the external IP of one of the nodes, whereas the ingress controller service has a different IP.
What am I doing wrong?
$kubectl get pods --all-namespaces | grep ingress
kube-system ingress-nginx-58ftggg-4xc56 1/1 Running
$kubectl get svc
kubernetes ClusterIP 172.16.0.1 <none> 443/TCP
$kubectl get ingress
vault-ingress-documentation 10.145.13.456
$kubectl describe ingress vault-ingress-documentation
Name: vault-ingress-documentation
Namespace: corebanking
Address: 10.145.13.456
Default backend: default-http-backend:80 (<error: default-http-backend:80 not found>)
$kubectl get services -n kube-system | grep ingress
ingress-nginx Loadbalancer 172.16.160.33 10.145.13.456 80:30389/TCP,443:31812/TCP
I tried to reproduce the same in my environment and got below results:
I created a deployment, which creates the replica set and pods on the nodes, and we can see the pods are up and running:
kubectl create -f deployment.yaml
I then created the service from the service file, which is reachable inside the cluster via its cluster IP:
kubectl create -f service.yaml
To expose the application externally, I created ingress rules using an ingress.yaml file and added the ingress class annotation to it:
annotations:
kubernetes.io/ingress.class: nginx
At this point the ingress rule is created with an empty address, and when I try to access the application I cannot. To access the application, add the load balancer IP with the domain name to /etc/hosts.
Now I am able to connect to the application with the service IP, but I am still not able to expose it on an external IP.
To expose it externally, I added these annotations:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
After that I applied the changes to the ingress rules file:
kubectl replace --force -f ingress_rules.yaml
OR
kubectl create -f ingress_rules.yaml
Now I am able to see the address on the ingress, and using it I can access the application.
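For reference, a minimal ingress_rules.yaml along the lines described above; the host, service name, and port are placeholders I'm assuming, not values from the original question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.example.com        # the name you add to /etc/hosts with the load balancer IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service     # placeholder service created from service.yaml
                port:
                  number: 80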

Pods can't resolve external DNS

I have a problem with k8s hosted on my own bare-metal infrastructure.
The k8s was installed via kubeadm init without special configuration, and then I applied a CNI plugin.
Everything works perfectly except external DNS resolution from Pods to the outside world (internet).
For example:
I have a Pod named foo. If I run curl google.com in it, I receive the error
curl: (6) Could not resolve host: google.com
but if I run the same command in the same pod a second time, I receive the HTML properly:
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
If I repeat the command again, I may get the DNS resolution error or the HTML, and so on.
The behavior is random: sometimes I have to try 10 times and get errors, and on the 11th attempt I get the HTML.
I also tried to debug this error with this guide, but it did not help.
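To quantify how often it fails, a simple loop like this can help (assuming the pod really is named foo and has curl in its image):
for i in $(seq 1 20); do
  kubectl exec foo -- curl -s -o /dev/null -w "%{http_code}\n" google.com || echo "DNS failure"
done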
Additional information:
CoreDNS is up and running and has the default config:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
and the pod's /etc/resolv.conf looks fine:
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
The problem exists on CentOS 8 (master, kubeadm init) and on Debian 10 (node, kubeadm join).
SELinux is set to permissive and swap is disabled.
It looks like the problem appears even on the host machine after installing k8s and Weave Net.
I'm not certain where the problem comes from, k8s or Linux.
It started after I installed k8s.
What have I missed?
I can suggest using a different CNI plugin and setting it up from scratch. Remember that when using kubeadm, you apply the CNI plugin after you run kubeadm init, then add worker nodes. Here you can find the supported CNI plugins. If the problem still exists, it's probably within your OS.
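As a rough sketch of that order of operations (Calico is just one example of a supported plugin; the manifest URL and pod CIDR come from Calico's docs and may need adjusting for your setup):
# on the master
kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# only after the CNI plugin is applied, join the workers
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>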
Check /etc/resolv.conf on the node; you can set its nameserver to 8.8.8.8 in that file.

How to access hosts in my network from microk8s deployment pods

I am trying to access a host that sits in another server (but on my network) from inside the pod of deployment and I am using microk8s.
The thing is that on the server where I have microk8s installed I can easily ping it with ping my-network-host.qa.local. But when I go inside a pod with microk8s kubectl exec -it pod_name -- /bin/bash and do ping my-network-host.qa.local, it says: Name or service not known.
And when I connect to a VPN on my computer to be on that network and deploy it locally using docker-desktop Kubernetes, I can ping that host from within the pod. So I think the problem lies in microk8s, which is not letting my pod use my network's DNS.
Is there any way to tell microk8s to use my hosts from my network?
P.S. I can ping the IP of that server from the pod, but I am not able to ping the host by name from the pod.
Based on another answer that I found on Stack Overflow, I managed to fix it.
Two changes were needed to make it work:
Update the kubelet configuration to use resolv-conf:
echo "--resolv-conf=/run/systemd/resolve/resolv.conf" | sudo tee -a /var/snap/microk8s/current/args/kubelet
Restart kubelet service:
sudo service snap.microk8s.daemon-kubelet restart
Then change the CoreDNS forward directive to point to your nameserver.
First open the coredns ConfigMap so you can edit it:
sudo microk8s.kubectl edit configmap coredns -n kube-system
and update the forward line:
forward . 8.8.8.8 8.8.4.4 #REMOVE THIS LINE
forward . xxx.xxx.xxx.xxx #ADD THIS WITH YOUR IP
You can get the eth0 DNS address with:
nmcli dev show 2>/dev/null | grep DNS | sed 's/^.*:\s*//'
In my case I already had that IP as the nameserver in /run/systemd/resolve/resolv.conf.
Now just save the changes and go inside your pods; you will be able to reach those hosts.
There is a comment in another post that suggests adding
forward . /etc/resolv.conf
but that didn't work in my case.
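Once the ConfigMap is saved (CoreDNS normally picks up the change via its reload plugin; otherwise restart the CoreDNS pod), you can verify from inside a pod, using the hostname from the question and assuming nslookup is available in the image:
microk8s kubectl exec -it pod_name -- nslookup my-network-host.qa.local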

Kubernetes pods get wrong DNS nameserver IP address on minikube

I have faced an issue with resolving service hostnames with Kubernetes on minikube.
From inside a pod I cannot wget web-server:8081/endpoint, but I can access the same server directly by IP address, e.g. wget 10.0.0.81:8081/endpoint.
After troubleshooting the issue, I found that inside the pod's /etc/resolv.conf the nameserver is set to 10.96.0.10. Here is how it looks:
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
while the cluster IP of the kube-dns service is 10.0.0.10.
After manually changing the nameserver in the pod to 10.0.0.10, I can do wget web-server:8081/endpoint.
Why is it set to the wrong IP address, and how do I fix it?
The problem was that I updated minikube without running minikube delete.
After minikube delete and minikube start, the DNS service gets the IP address 10.96.0.10, the same as is set in the pod's /etc/resolv.conf.
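In other words, the fix was a full reset (note this wipes the existing cluster state, so recreate your workloads afterwards):
minikube delete
minikube start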

How can I root-cause DNS lookup failures in a (local) Kubernetes instance (where SkyDNS is healthy)?

It appears that local-up-cluster in Kubernetes, on Ubuntu, isn't able to resolve DNS queries when relying on cluster DNS.
setup
I'm running an Ubuntu box, with environment variables for DNS set in local-up-cluster:
# env | grep KUBE
KUBE_ENABLE_CLUSTER_DNS=true
KUBE_DNS_SERVER_IP=172.17.0.1
running information
SkyDNS seems happy:
I0615 00:04:13.563037 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0615 00:04:13.563051 1 dns.go:147] Starting endpointsController
I0615 00:04:13.563054 1 dns.go:150] Starting serviceController
I0615 00:04:13.563125 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0615 00:04:13.563141 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0615 00:04:13.589840 1 dns.go:264] New service: kubernetes
I0615 00:04:13.589971 1 dns.go:462] Added SRV record &{Host:kubernetes.default.svc.cluster.local. Port:443 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:}
I0615 00:04:14.063246 1 dns.go:171] Initialized services and endpoints from apiserver
I0615 00:04:14.063267 1 server.go:129] Setting up Healthz Handler (/readiness)
I0615 00:04:14.063274 1 server.go:134] Setting up cache handler (/cache)
I0615 00:04:14.063288 1 server.go:120] Status HTTP port 8081
kube-proxy seems happy:
I0615 00:03:53.448369 5706 proxier.go:864] Setting endpoints for "default/kubernetes:https" to [172.31.44.133:6443]
I0615 00:03:53.545124 5706 controller_utils.go:1001] Caches are synced for service config controller
I0615 00:03:53.545146 5706 config.go:210] Calling handler.OnServiceSynced()
I0615 00:03:53.545208 5706 proxier.go:979] Not syncing iptables until Services and Endpoints have been received from master
I0615 00:03:53.545125 5706 controller_utils.go:1001] Caches are synced for endpoints config controller
I0615 00:03:53.545224 5706 config.go:110] Calling handler.OnEndpointsSynced()
I0615 00:03:53.545274 5706 proxier.go:309] Adding new service port "default/kubernetes:https" at 10.0.0.1:443/TCP
I0615 00:03:53.545329 5706 proxier.go:991] Syncing iptables rules
I0615 00:03:53.993514 5706 proxier.go:991] Syncing iptables rules
I0615 00:03:54.008738 5706 bounded_frequency_runner.go:221] sync-runner: ran, next possible in 0s, periodic in 30s
I0615 00:04:24.008904 5706 proxier.go:991] Syncing iptables rules
I0615 00:04:24.023057 5706 bounded_frequency_runner.go:221] sync-runner: ran, next possible in 0s, periodic in 30s
result
However, I don't seem to be able to resolve anything inside the cluster; I get the same result with docker exec or kubectl exec:
➜ kubernetes git:(master) kc exec --namespace=kube-system kube-dns-2673147055-4j6wm -- nslookup kubernetes.default.svc.cluster.local
Defaulting container name to kubedns.
Use 'kubectl describe pod/kube-dns-2673147055-4j6wm' to see all of the
containers in this pod.
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'kubernetes.default.svc.cluster.local': Name does not resolve
question
What's the simplest way to further debug a system created using local-up-cluster, where the DNS pods are running but kubernetes.default.svc.cluster.local is not resolved? Note that all other aspects of this cluster appear to be working perfectly.
System info : Linux ip-172-31-44-133 4.4.0-1018-aws #27-Ubuntu SMP Fri May 19 17:20:58 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux.
Example of resolv.conf that is being placed in my containers...
/etc/cfssl # cat /etc/resolv.conf
nameserver 172.17.0.1
search default.svc.cluster.local svc.cluster.local cluster.local dc1.lan
options ndots:5
I can't comment on your post, so I'll attempt to answer this.
First of all, certain Alpine images have trouble resolving using nslookup. DNS might in fact be working normally in your cluster.
To validate this, read the logs of the pods (e.g. traefik, heapster, calico) that communicate with kube-apiserver. If no errors are observed, what you have is probably a non-problem.
If you want to be doubly sure, deploy a non-Alpine pod and try nslookup.
If it really is a DNS issue, I will debug in this sequence.
kubectl exec into the kube-dns pod. Run nslookup kubernetes.default.svc.cluster.local localhost. If this works, DNS is in fact running. If it doesn't, kube-dns should have entered a CrashLoopBackOff state by now.
kubectl exec into a deployed pod. Run nslookup kubernetes.default.svc.cluster.local <cluster-ip>. If this works, you're good to go. If it doesn't, something is up with the pod network. Without details, I can't recommend further steps.
Bonne chance!
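Put concretely, the two checks above look something like this (the pod names and the cluster DNS IP are placeholders; substitute your own):
# 1) inside the kube-dns pod, query the DNS server locally
kubectl exec -it --namespace=kube-system <kube-dns-pod> -- nslookup kubernetes.default.svc.cluster.local localhost

# 2) inside an ordinary application pod, query the cluster DNS service IP directly
kubectl exec -it <app-pod> -- nslookup kubernetes.default.svc.cluster.local <cluster-dns-ip>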
I figured I'd post a systematic answer that usually works for me. I was hoping for something more elegant, and this isn't ideal, but I think it's the best place to start.
1) Make sure your DNS nanny and your SkyDNS are running. The nanny and SkyDNS should both show in their docker logs that they've bound to a port.
2) When you create new services, make sure that SkyDNS is writing them to the logs and showing the creation of SRV records and so on.
3) Look in /etc/resolv.conf in your docker containers. Make sure the nameserver looks like something on your internal docker IP addresses (i.e. 10.x.x.x in a regular docker0 config on Fedora).
There are specific env variables you need to export correctly: API_HOST=true and KUBE_ENABLE_CLUSTER_DNS=true.
There are a lot of deeper tools you can use, like route -n and so on, to debug container networking even more, but local-up-cluster should generally 'just work', and if the above steps surface something suspicious, it's worth mentioning in the Kubernetes community as a possible bug.
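For the first three steps, the kind of commands I use look roughly like this (container names are whatever docker ps shows for the DNS nanny and SkyDNS on your node; the test deployment name is a placeholder):
# 1) confirm the nanny and SkyDNS containers are up and bound to their ports
docker ps | grep -i dns
docker logs <kube-dns-container> 2>&1 | grep -i "ready for queries"

# 2) create a throwaway service and watch for the corresponding record in the logs
kubectl expose deployment <some-deployment> --port=80 --name=dns-smoke-test
docker logs <kube-dns-container> 2>&1 | grep dns-smoke-test

# 3) check the nameserver a container actually received
docker exec <any-app-container> cat /etc/resolv.conf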
