502 Bad Gateway error on AWS Kubernetes cluster with Node.js application

Getting a 502 Bad Gateway error while doing performance testing. It occurs randomly, on random APIs, only 2-3 times in a 30-minute run.
The ingress logs show this error:
11.150.71.00 - - [22/Dec/2022:10:44:17 +0000] "POST /api/quotes/test HTTP/1.1" 502 150 "-" "AmazonAPIGateway_hodcqxftdw" 1466 0.030 [cpq-cpq-server-service-9001] [] 11.150.71.177:9001 0 0.028 502 1322918b6f840892739d4dcf194e2226
I have added a keep-alive time for the ingress, but still no success.
CPU and memory are both stable; there is no spike.
Ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testIngress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-tls-secret: "test/test-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "off"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testService
            port:
              number: 9001
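Since keep-alive tuning is only mentioned in passing: with a Node.js backend, intermittent 502s under load are frequently caused by the backend closing idle keep-alive connections before the proxy does (Node's default keepAliveTimeout is 5 seconds). Below is a minimal sketch of the ingress-nginx controller ConfigMap keys involved, assuming a stock ingress-nginx install; the name, namespace, and values are illustrative, not taken from the question.

# Sketch only: keep the proxy's idle upstream connections shorter-lived
# than the Node.js server's keepAliveTimeout, so nginx does not reuse a
# connection the backend has already closed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name/namespace; depends on the install
  namespace: ingress-nginx
data:
  upstream-keepalive-timeout: "4"        # lower than the Node.js keepAliveTimeout (default 5s)
  upstream-keepalive-requests: "100"
  upstream-keepalive-connections: "32"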

Related

nginx-ingress controller for Azure Kubernetes Service 502 Bad Gateway

I'm having trouble getting an nginx-ingress controller to work on an Azure Kubernetes Service; it currently returns 502 Bad Gateway each time I try to hit some Web APIs exposed as Services.
Because I must use an existing certificate, I followed https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls to set up the controller, and followed https://www.markbrilman.nl/2011/08/howto-convert-a-pfx-to-a-seperate-key-crt-file/ to generate a cert and key from a PFX (which is how the certificate was exported from an Azure Key Vault). I created the secret "aks-ingress-tls" using the certificate (including the intermediate and root certificates) and the decrypted key file.
I have a YAML file that creates a deployment, a service to expose it, and an ingress to route to it. After applying this YAML I can access the services over HTTP via their IP addresses, but using HTTPS against the Ingress Controller's EXTERNAL_IP always gives the 502 error.
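For reference, the TLS secret that tutorial produces has roughly this shape (a sketch only; the file names in the comment are placeholders, not taken from the question):

# Sketch: a kubernetes.io/tls secret, e.g. created with
#   kubectl create secret tls aks-ingress-tls --cert=full-chain.crt --key=decrypted.key
apiVersion: v1
kind: Secret
metadata:
  name: aks-ingress-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded decrypted private key>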
My YAML File (redacted):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-api
  replicas: 3
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: [REDACTED]/my-api:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
      imagePullSecrets:
      - name: data-creds
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: my-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 443
I added a record to my hosts file (I'm on Windows, so I can't use curl's --resolve) to map [REDACTED].co.uk to the ingress controller's EXTERNAL_IP so I can try accessing it. That's when I get the errors.
A curl -v https://[REDACTED].co.uk gives this:
VERBOSE: GET https://[REDACTED].co.uk/ with 0-byte payload
curl : The request was aborted: Could not create SSL/TLS secure channel.
At line:1 char:1
+ curl -v https://[REDACTED].co.uk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
Looking at logs for one of the ingress controller's pods:
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET / HTTP/2.0" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 10 0.001 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.000 502, 502, 502 e44e21c8a2f61f5137c9afdfc64c6584
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.254:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.3:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.4:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET /favicon.ico HTTP/2.0" 502 559 "https://[REDACTED].co.uk/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 26 0.000 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.004 502, 502, 502 63b6ed4414bf32694de3d136f7f277aa
Can anyone point me to what I need to look at or do to get this working?
For your issue: the ingress uses the HTTPS protocol on port 443, so you do not need to expose port 443 from your container. Just expose the port that your application actually listens on.
For you, this means exposing only port 80 on the container and the service. You also need to remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" and change the servicePort value to 80.
Note: adding the DNS name to the certificate is also important.
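Applied to the manifests above, the change looks roughly like this (a sketch of just the affected Service and Ingress, keeping the extensions/v1beta1 API from the question; the host is a placeholder for the redacted one):

# Sketch: the container and Service expose only port 80; the Ingress
# terminates TLS and forwards plain HTTP to the backend.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: my-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - example.co.uk          # placeholder for the redacted host
    secretName: aks-ingress-tls
  rules:
  - host: example.co.uk      # placeholder for the redacted host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80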

Services with azure kubernetes not reachable

I am trying to configure an Azure Kubernetes cluster and created one in the portal. I dockerized a .NET Core Web API project and also published the image to Azure Container Registry. After applying the manifest file, I get the message that the service was created, and also the external IP. However, when I run get pods, I get the status "Pending" all the time:
NAME READY STATUS RESTARTS AGE
kubdemo1api-6c67bf759f-6slh2 0/1 Pending 0 6h
Here is my YAML manifest file. Can someone suggest what is wrong here?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubdemo1api
labels:
name: kubdemo1api
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
minReadySeconds: 30
selector:
matchLabels:
app: kubdemo1api
template:
metadata:
labels:
app: kubdemo1api
version: "1.0"
tier: backend
spec:
containers:
- name: kubdemo1api
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
timeoutSeconds: 10
image: my container registry image address
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: azkubdemoapi1
spec:
ports:
-
port: 80
selector:
app: kubdemo1api
type: LoadBalancer
EDIT:
The output of kubectl describe pods is this:
Normal Scheduled 2m default-scheduler Successfully assigned default/kubdemo1api-697d5655c-64fnj to aks-agentpool-87689508-0
Normal Pulling 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 pulling image "myacrurl/azkubdemo:v2"
Warning Failed 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 Failed to pull image "my acr url": [rpc error: code = Unknown desc = Error response from daemon: Get https://myacrurl/v2/azkubdemo/manifests/v2: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://myacrurl/v2/azkubdemo/manifests/v2: unauthorized: authentication required]
Warning Failed 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 Error: ErrImagePull
Normal BackOff 23s (x6 over 2m) kubelet, aks-agentpool-87689508-0 Back-off pulling image "myacrlurl/azkubdemo:v2"
Warning Failed 11s (x7 over 2m) kubelet, aks-agentpool-87689508-0 Error: ImagePullBackOff
The error that you provided shows that you have to authenticate to pull the image from the Azure Container Registry.
Actually, you just need permission to pull the image, and the acrpull role is enough. There are two ways to achieve this.
One is to simply grant the AKS cluster access to the Azure Container Registry. It's the simplest approach: just create the role assignment for the service principal that AKS uses. See Grant AKS access to ACR for the full steps.
The other is to use a Kubernetes secret. It's a little more complex than the first option. You need to create a new service principal (different from the one AKS uses), grant it access, and then create the Kubernetes secret with that service principal. See Access with Kubernetes secret for the full steps.
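For the second option, the pull secret is then referenced from the pod spec. A minimal sketch, assuming a docker-registry secret named acr-auth has already been created from the service principal's credentials (the secret name is an assumption, not from the question):

# Sketch: reference a previously created docker-registry secret, e.g.
#   kubectl create secret docker-registry acr-auth \
#     --docker-server=<acr-login-server> --docker-username=<sp-app-id> --docker-password=<sp-password>
# from the Deployment's pod template.
spec:
  template:
    spec:
      containers:
      - name: kubdemo1api
        image: myacrurl/azkubdemo:v2
      imagePullSecrets:
      - name: acr-auth   # assumed secret name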
This YAML is also wrong; the indentation is incorrect. Try the YAML below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubdemo1api
  labels:
    name: kubdemo1api
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  minReadySeconds: 30
  selector:
    matchLabels:
      app: kubdemo1api
  template:
    metadata:
      labels:
        app: kubdemo1api
        version: "1.0"
        tier: backend
    spec:
      containers:
      - name: kubdemo1api
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: azkubdemoapi1
spec:
  ports:
  - port: 80
  selector:
    app: kubdemo1api
  type: LoadBalancer

Unable to setup service DNS in Kubernetes cluster

Kubernetes version --> 1.5.2
I am setting up DNS for Kubernetes services for the first time and I came across SkyDNS.
So, following the documentation, my skydns-svc.yaml file is:
apiVersion: v1
kind: Service
spec:
  clusterIP: 10.100.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
And my skydns-rc.yaml file is:
apiVersion: v1
kind: ReplicationController
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v18
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
version: v18
spec:
containers:
- args:
- --domain=kube.local
- --dns-port=10053
image: gcr.io/google_containers/kubedns-amd64:1.6
imagePullPolicy: IfNotPresent
name: kubedns
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
- args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
imagePullPolicy: IfNotPresent
name: dnsmasq
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
- args:
- -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null &&
nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
- -port=8080
- -quiet
image: gcr.io/google_containers/exechealthz-amd64:1.0
imagePullPolicy: IfNotPresent
name: healthz
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
Also, on my minions I updated the /etc/systemd/system/multi-user.target.wants/kubelet.service file and added the following under the ExecStart section:
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
--cluster-dns=10.100.0.100 \
--cluster-domain=kubernetes \
Having done all of this, and having successfully brought up the RC and SVC:
[root@kubernetes-master DNS]# kubectl get po | grep dns
kube-dns-v18-hl8z6 3/3 Running 0 6s
[root@kubernetes-master DNS]# kubectl get svc | grep dns
kube-dns 10.100.0.100 <none> 53/UDP,53/TCP 20m
This is all that I have from a config standpoint. Now, in order to test my setup, I downloaded busybox and tested an nslookup:
[root@kubernetes-master DNS]# kubectl get svc | grep kubernetes
kubernetes 10.100.0.1 <none> 443/TCP
[root@kubernetes-master DNS]# kubectl exec busybox -- nslookup kubernetes
nslookup: can't resolve 'kubernetes'
Server: 10.100.0.100
Address 1: 10.100.0.100
Is there something that I have missed?
EDIT:
Going through the logs, I see something that might explain why this is not working:
kubectl logs $(kubectl get pods -l k8s-app=kube-dns -o name) -c kubedns
.
.
.
E1220 17:44:48.403976 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
E1220 17:44:48.487169 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:48.487716 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.410311 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:49.492338 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.493429 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
.
.
.
It looks like kubedns is unable to authorize against the K8s master node. I even tried a manual call:
curl -k https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0
Unauthorized
It looks like the kube-dns pod is not able to authenticate with the Kubernetes API server. I don't see any secret or serviceaccount in the YAML file for the kube-dns pod.
I suggest doing the following:
Create a k8s secret using kubectl create secret for the kube-dns pod with the right certificate file ca.crt and token:
$ kubectl get secrets -n=kube-system | grep dns
kube-dns-token-66tfx kubernetes.io/service-account-token 3 1d
Create a k8s serviceaccount using kubectl create serviceaccount for the kube-dns pod:
$ kubectl get serviceaccounts -n=kube-system | grep dns
kube-dns 1 1d
Mount the secret at /var/run/secrets/kubernetes.io/serviceaccount inside the kube-dns container in the YAML file:
...
kind: Pod
...
spec:
  ...
  containers:
  ...
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-dns-token-66tfx
      readOnly: true
  ...
  volumes:
  - name: kube-dns-token-66tfx
    secret:
      defaultMode: 420
      secretName: kube-dns-token-66tfx
Here are the links about creating serviceaccounts for pods:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/admin/service-accounts-admin/
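As those docs describe, an alternative to mounting the token volume by hand is to reference the service account from the pod spec and let Kubernetes mount its token automatically. A minimal sketch, assuming the kube-dns service account created above:

# Sketch: Kubernetes mounts the service account's token at
# /var/run/secrets/kubernetes.io/serviceaccount automatically.
apiVersion: v1
kind: Pod
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  serviceAccountName: kube-dns
  containers:
  - name: kubedns
    image: gcr.io/google_containers/kubedns-amd64:1.6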

kubernetes skydns failure to forward request

I am creating a Kubernetes cluster of 1 master and 2 nodes. I am trying to create SkyDNS based on the following:
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
# resources:
# # TODO: Set memory limits when we've profiled the container for large
# # clusters, then set request = limit to keep this container in
# # guaranteed class. Currently, this container falls into the
# # "burstable" category so the kubelet doesn't backoff from restarting it.
# limits:
# cpu: 100m
# memory: 500Mi
# requests:
# cpu: 100m
# memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.14
# resources:
# # TODO: Set memory limits when we've profiled the container for large
# # clusters, then set request = limit to keep this container in
# # guaranteed class. Currently, this container falls into the
# # "burstable" category so the kubelet doesn't backoff from restarting it.
# limits:
# cpu: 100m
# # Kube2sky watches all pods.
# memory: 200Mi
# requests:
# cpu: 100m
# memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
# successThreshold: 1
# failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain=cluster.local
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain=cluster.local
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
# resources:
# # keep request = limit to keep this container in guaranteed class
# limits:
# cpu: 10m
# memory: 20Mi
# requests:
# cpu: 10m
# memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
However, skydns is spitting out the following:
$ kubectl logs kube-dns-v11-k07j9 --namespace=kube-system skydns
2016/04/18 12:47:05 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [1]
2016/04/18 12:47:05 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0]
2016/04/18 12:47:05 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0]
2016/04/18 12:47:11 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:15 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:19 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:23 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:27 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:31 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:35 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:39 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:43 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:47 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:51 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:55 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:47:59 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
2016/04/18 12:48:03 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
After looking further, I realized: what is 192.168.122.1? It is the virtual switch on KVM. Why is skydns trying to hit the virtual switch / DNS server of my virtual machine host?
SkyDNS defaults its forwarding nameservers to the ones listed in /etc/resolv.conf. Since SkyDNS runs inside the kube-dns pod as a cluster add-on, it inherits its /etc/resolv.conf from its host, as described in the kube-dns doc.
From your question, it looks like your host's /etc/resolv.conf is configured to use 192.168.122.1 as its nameserver, and hence that becomes the forwarding server in your SkyDNS config. I believe 192.168.122.1 is not routable from your Kubernetes cluster, and that's why you are seeing "failure to forward request" errors in the kube-dns logs.
The simplest solution to this problem is to supply a reachable DNS server as a flag to SkyDNS in your RC config. Here is an example (it is just your RC config with the -nameservers flag added to the SkyDNS container spec):
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
# resources:
# # TODO: Set memory limits when we've profiled the container for large
# # clusters, then set request = limit to keep this container in
# # guaranteed class. Currently, this container falls into the
# # "burstable" category so the kubelet doesn't backoff from restarting it.
# limits:
# cpu: 100m
# memory: 500Mi
# requests:
# cpu: 100m
# memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.14
# resources:
# # TODO: Set memory limits when we've profiled the container for large
# # clusters, then set request = limit to keep this container in
# # guaranteed class. Currently, this container falls into the
# # "burstable" category so the kubelet doesn't backoff from restarting it.
# limits:
# cpu: 100m
# # Kube2sky watches all pods.
# memory: 200Mi
# requests:
# cpu: 100m
# memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
# successThreshold: 1
# failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain=cluster.local
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain=cluster.local
- -nameservers=8.8.8.8:53,8.8.4.4:53 # Adding this flag. Dont use double quotes.
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
# resources:
# # keep request = limit to keep this container in guaranteed class
# limits:
# cpu: 10m
# memory: 20Mi
# requests:
# cpu: 10m
# memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.

Kubernetes DNS fails in Kubernetes 1.2

I'm attempting to set up DNS support in Kubernetes 1.2 on CentOS 7. According to the documentation, there are two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables:
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I don't have a supported kubernetes cluster setup (what's that?), or something else is required to set up its environment.
The second approach requires more manual setup. It adds two flags to the kubelets, which I set by updating /etc/kubernetes/kubelet to include:
KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"
and restarting the kubelet with systemctl restart kubelet. Then it's necessary to start a replication controller and a service. The doc page cited above provides a couple of template files for this that require some editing, both for local changes (my Kubernetes API server listens on the actual IP address of the hostname rather than 127.0.0.1, making it necessary to add a --kube-master-url setting) and to remove some Salt dependencies. When I do this, the replication controller starts four containers successfully, but the kube2sky container gets terminated about a minute after completing initialization:
[david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
I0325 20:58:18.516905 1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
I0325 20:58:19.518337 1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
I0325 20:58:19.518364 1 kube2sky.go:530] Using kubernetes API v1
I0325 20:58:19.518468 1 kube2sky.go:598] Waiting for service: default/kubernetes
I0325 20:58:19.533597 1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
F0325 20:59:25.698507 1 kube2sky.go:625] Received signal terminated
I've determined that the termination is done by the healthz container after it reports:
2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
Aside from this, all other logs look normal. However, there is one anomaly: it was necessary to specify --validate=false when creating the replication controller, as the command otherwise fails with the message:
error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false
Could this be related? These arguments come directly from the Kubernetes documentation. If not, what's needed to get this running?
Here is the skydns-rc.yaml I used:
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.14
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
# Kube2sky watches all pods.
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain="cluster.local"
- --kube-master-url=http://192.168.87.159:8080
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain="cluster.local"
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
and skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: "10.0.0.10"
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
I just commented out the lines that contain the successThreshold and failureThreshold values in skydns-rc.yaml, then re-ran the kubectl commands:
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
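The relevant part of skydns-rc.yaml then looks roughly like this (a sketch of just the kube2sky liveness probe, with the two fields that v1.Probe validation rejects on Kubernetes 1.2 commented out):

# Sketch: kube2sky livenessProbe with the fields that fail validation
# on Kubernetes 1.2 commented out.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 60
  timeoutSeconds: 5
  # successThreshold: 1
  # failureThreshold: 5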
