Create an internal mixed-protocol load balancer in Azure AKS

I'm trying to create an internal mixed-protocol load balancer in Azure AKS (tried 1.15.5, 1.15.7 and 1.16.4) using this YAML:
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-mixed-protocols: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: consullb
spec:
  ports:
  - port: 8500
    targetPort: 8500
    name: http
    protocol: TCP
  - port: 8400
    targetPort: 8400
    name: rpc
    protocol: TCP
  - port: 8301
    targetPort: 8301
    name: serflan-tcp
    protocol: TCP
  - port: 8302
    targetPort: 8302
    name: serfwan-tcp
    protocol: TCP
  - port: 8300
    targetPort: 8300
    name: server
    protocol: TCP
  - port: 8600
    targetPort: 8600
    name: consuldns-tcp
    protocol: TCP
  - port: 8301
    targetPort: 8301
    name: serflan-udp
    protocol: UDP
  - port: 8302
    targetPort: 8302
    name: serfwan-udp
    protocol: UDP
  - port: 8600
    targetPort: 8600
    name: consuldns-udp
    protocol: UDP
  selector:
    component: consul-1582621245-consul
  type: LoadBalancer
I get the following error:
cannot create an external load balancer with mix protocols
I tested two different clusters, one with the Standard SKU and one with the Basic SKU.
Is there anything I'm missing here? Or could someone point me to other aspects to try/troubleshoot?
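In case it helps with troubleshooting: the message comes from the cloud-provider validation of the Service, so kubectl describe svc consullb should show the same error under Events. Purely as an illustrative sketch (not something from the question above), a workaround sometimes used is to move the UDP ports into a second internal Service, optionally pinned to a static IP with loadBalancerIP; the address below is an assumed placeholder:
apiVersion: v1
kind: Service
metadata:
  name: consullb-udp
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25   # assumed placeholder; must be a free IP in the cluster subnet
  selector:
    component: consul-1582621245-consul
  ports:
  - port: 8301
    targetPort: 8301
    name: serflan-udp
    protocol: UDP
  - port: 8302
    targetPort: 8302
    name: serfwan-udp
    protocol: UDP
  - port: 8600
    targetPort: 8600
    name: consuldns-udp
    protocol: UDP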

Related

nginx-ingress controller for Azure Kubernetes Service 502 Bad Gateway

I'm having trouble getting an nginx-ingress controller to work on an Azure Kubernetes Service; it's currently returning 502 Bad Gateway each time I try to hit some Web APIs exposed as Services.
Because I must use an existing certificate, I followed https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls to set up the controller and followed https://www.markbrilman.nl/2011/08/howto-convert-a-pfx-to-a-seperate-key-crt-file/ to generate a cert and key from a PFX (which is how the certificate was exported from Azure Key Vault). I created the secret "aks-ingress-tls" using the certificate, including the intermediate and root certificates, and the decrypted key file.
I have a YAML file to create a deployment, a service to expose it, and an ingress to route to it. Applying this YAML, I can access the services via their IP addresses over HTTP, but using HTTPS to the ingress controller's EXTERNAL_IP always gives the 502 error.
My YAML File (redacted):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-api
  replicas: 3
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: [REDACTED]/my-api:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
      imagePullSecrets:
      - name: data-creds
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: my-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 443
I added a record to my hosts file (I'm on Windows so can't use curl's --resolve) to map [REDACTED].co.uk to the ingress controller's EXTERNAL_IP so I can try accessing it. That's when I get the errors.
A curl -v https://[REDACTED].co.uk gives this:
VERBOSE: GET https://[REDACTED].co.uk/ with 0-byte payload
curl : The request was aborted: Could not create SSL/TLS secure channel.
At line:1 char:1
+ curl -v https://[REDACTED].co.uk
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
Looking at logs for one of the ingress controller's pods:
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET / HTTP/2.0" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 10 0.001 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.000 502, 502, 502 e44e21c8a2f61f5137c9afdfc64c6584
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.254:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.3:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
2019/04/25 13:39:20 [error] 1622#1622: *1127096 connect() failed (111: Connection refused) while connecting to upstream, client: 10.244.1.1, server: [REDACTED].co.uk, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.244.1.4:443/favicon.ico", host: "[REDACTED].co.uk", referrer: "https://[REDACTED].co.uk/"
10.244.1.1 - [10.244.1.1] - - [25/Apr/2019:13:39:20 +0000] "GET /favicon.ico HTTP/2.0" 502 559 "https://[REDACTED].co.uk/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36" 26 0.000 [default-sub360-auth-service-443] 10.244.1.254:443, 10.244.1.3:443, 10.244.1.4:443 0, 0, 0 0.000, 0.000, 0.004 502, 502, 502 63b6ed4414bf32694de3d136f7f277aa
Can anyone point me to what I need to look at or do to get this working now?
For your issue: the ingress handles the HTTPS protocol on port 443, so you do not need to expose port 443 from your container. Just expose the port that your application listens on.
In your case, that means exposing only port 80 on the container and the service. You also need to remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" and change the servicePort value to 80.
Note: adding the DNS name to the certificate is also important.
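Put together, that advice would change the manifests above roughly as follows (a sketch keeping the names from the question; the Service and the container expose only port 80, the backend-protocol annotation is gone, and servicePort is 80):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: my-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - [REDACTED].co.uk
    secretName: aks-ingress-tls
  rules:
  - host: [REDACTED].co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
TLS is still terminated by the ingress controller using the aks-ingress-tls secret; only the hop from the controller to the pods becomes plain HTTP.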

Services with azure kubernetes not reachable

I am trying to configure an Azure Kubernetes cluster and created one in the portal. I dockerized a .NET Core Web API project and published the image to Azure Container Registry. After applying the manifest file, I get the message that the service was created, along with the external IP. However, when I do get pods I get the status "Pending" all the time:
NAME READY STATUS RESTARTS AGE
kubdemo1api-6c67bf759f-6slh2 0/1 Pending 0 6h
Here is my YAML manifest file; can someone suggest what is wrong here?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubdemo1api
  labels:
    name: kubdemo1api
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  minReadySeconds: 30
  selector:
    matchLabels:
      app: kubdemo1api
  template:
    metadata:
      labels:
        app: kubdemo1api
        version: "1.0"
        tier: backend
    spec:
      containers:
      - name: kubdemo1api
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
        image: my container registry image address
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: azkubdemoapi1
spec:
  ports:
  - port: 80
  selector:
    app: kubdemo1api
  type: LoadBalancer
EDIT:
The output of kubectl describe pods is this:
Normal Scheduled 2m default-scheduler Successfully assigned default/kubdemo1api-697d5655c-64fnj to aks-agentpool-87689508-0
Normal Pulling 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 pulling image "myacrurl/azkubdemo:v2"
Warning Failed 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 Failed to pull image "my acr url": [rpc error: code = Unknown desc = Error response from daemon: Get https://myacrurl/v2/azkubdemo/manifests/v2: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://myacrurl/v2/azkubdemo/manifests/v2: unauthorized: authentication required]
Warning Failed 37s (x4 over 2m) kubelet, aks-agentpool-87689508-0 Error: ErrImagePull
Normal BackOff 23s (x6 over 2m) kubelet, aks-agentpool-87689508-0 Back-off pulling image "myacrlurl/azkubdemo:v2"
Warning Failed 11s (x7 over 2m) kubelet, aks-agentpool-87689508-0 Error: ImagePullBackOff
The error that you provided shows that you have to authenticate to pull the image from the Azure Container Registry.
You just need permission to pull the image, and the acrpull role is enough for that. There are two ways to achieve it.
One is to grant AKS access to the Azure Container Registry; this is the simplest. You just need to create a role assignment for the service principal that AKS uses. See Grant AKS access to ACR for the full steps.
The other is to use a Kubernetes secret. It's a little more complex: you create a new service principal, different from the one AKS uses, grant it access, and then create the Kubernetes secret with that service principal. See Access with Kubernetes secret for the full steps.
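As a rough sketch of both options (the IDs, credentials and registry URL below are placeholders, not values from the question):
# Option 1: role assignment so the cluster's service principal can pull (acrpull role)
az role assignment create \
  --assignee <aks-service-principal-client-id> \
  --role acrpull \
  --scope <acr-resource-id>
# Option 2: an image pull secret backed by a separate service principal
kubectl create secret docker-registry acr-auth \
  --docker-server=<myacrurl> \
  --docker-username=<sp-app-id> \
  --docker-password=<sp-password>
With option 2, the pod spec then references the secret:
    spec:
      containers:
      - name: kubdemo1api
        image: <myacrurl>/azkubdemo:v2
      imagePullSecrets:
      - name: acr-auth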
This YAML is wrong; the indenting is wrong. Try the YAML below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubdemo1api
  labels:
    name: kubdemo1api
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  minReadySeconds: 30
  selector:
    matchLabels:
      app: kubdemo1api
  template:
    metadata:
      labels:
        app: kubdemo1api
        version: "1.0"
        tier: backend
    spec:
      containers:
      - name: kubdemo1api
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: azkubdemoapi1
spec:
  ports:
  - port: 80
  selector:
    app: kubdemo1api
  type: LoadBalancer

Cannot access Web API deployed in Azure ACS Kubernetes Cluster

Please help. I am trying to deploy a web API to an Azure ACS Kubernetes cluster. It is a simple web API created in VSTS, and the result should look like this: { "value1", "value2" }.
I plan to make the type ClusterIP, but I want to test and access it first, which is why this is LoadBalancer for now. The pods are running with no restarts (I think that's good).
The guide I'm following is: Running Web API using Docker and Kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3d
sampleapi-service LoadBalancer 10.0.238.155 102.51.223.6 80:31676/TCP 1h
When I tried to browse the IP 102.51.223.6/api/values it says:
"This site can’t be reached"
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: sampleapi-service
  labels:
    name: sampleapi
    app: sampleapi
spec:
  selector:
    name: sampleapi
  ports:
  - protocol: "TCP"
    # Port accessible inside the cluster
    port: 80
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside the cluster
    #nodePort: 80
  type: LoadBalancer
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sampleapi-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: sampleapi
    spec:
      containers:
      - name: sampleapi
        image: mycontainerregistry.azurecr.io/sampleapi:latest
        ports:
        - containerPort: 80
POD
Name:           sampleapi-deployment-498305766-zzs2z
Namespace:      default
Node:           c103facs9001/10.240.0.4
Start Time:     Fri, 27 Jul 2018 00:20:06 +0000
Labels:         app=sampleapi
                pod-template-hash=498305766
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"sampleapi-deployme
                -498305766","uid":"d064a8e0-9132-11e8-b58d-0...
Status:         Running
IP:             10.244.2.223
Controlled By:  ReplicaSet/sampleapi-deployment-498305766
Containers:
  sampleapi:
    Container ID:   docker://19d414c87ebafe1cc99d101ac60f1113533e44c24552c75af4ec197d3d3c9c53
    Image:          mycontainerregistry.azurecr.io/sampleapi:latest
    Image ID:       docker-pullable://mycontainerregistry.azurecr.io/sampleapi@sha256:9635a9df168ef76a6a27cd46cb15620d762657e9b57a5ac2514ba0b9a8f47a8d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 27 Jul 2018 00:20:48 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mj5m1 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-mj5m1:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mj5m1
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type    Reason                 Age  From                   Message
  ----    ------                 ---- ----                   -------
  Normal  Scheduled              50m  default-scheduler      Successfully assigned sampleapi-deployment-498305766-zzs2z to c103facs9001
  Normal  SuccessfulMountVolume  50m  kubelet, c103facs9001  MountVolume.SetUp succeeded for volume "default-token-mj5m1"
  Normal  Pulling                49m  kubelet, c103facs9001  pulling image "mycontainerregistry.azurecr.io/sampleapi:latest"
  Normal  Pulled                 49m  kubelet, c103facs9001  Successfully pulled image "mycontainerregistry.azurecr.io/sampleapi:latest"
  Normal  Created                49m  kubelet, c103facs9001  Created container
  Normal  Started                49m  kubelet, c103facs9001  Started container
It seems to me that your service isn't pointed at a port on the container. You have your targetPort commented out. So the service is reachable on port 80, but it doesn't know which port to target on the pod.
You will need the service to expose the internal port on an external IP:port that you can use in your browser to access it. Try this after deploying your deployment and service YAML files:
kubectl get service sampleapi-service
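For reference, a Service that explicitly targets the container port could look like this (a sketch based on the manifests above; note that the deployment labels its pods app: sampleapi, so the selector here uses app rather than name):
kind: Service
apiVersion: v1
metadata:
  name: sampleapi-service
spec:
  selector:
    app: sampleapi
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer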

Unable to setup service DNS in Kubernetes cluster

Kubernetes version --> 1.5.2
I am setting up DNS for Kubernetes services for the first time and I came across SkyDNS.
So, following the documentation, my skydns-svc.yaml file is:
apiVersion: v1
kind: Service
spec:
  clusterIP: 10.100.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
And my skydns-rc.yaml file is:
apiVersion: v1
kind: ReplicationController
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        version: v18
    spec:
      containers:
      - args:
        - --domain=kube.local
        - --dns-port=10053
        image: gcr.io/google_containers/kubedns-amd64:1.6
        imagePullPolicy: IfNotPresent
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        terminationMessagePath: /dev/termination-log
      - args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        imagePullPolicy: IfNotPresent
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      - args:
        - -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null &&
            nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        imagePullPolicy: IfNotPresent
        name: healthz
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
Also, on my minions, I updated the /etc/systemd/system/multi-user.target.wants/kubelet.service file and added the following under the ExecStart section:
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
--cluster-dns=10.100.0.100 \
--cluster-domain=kubernetes \
Having done all of this and successfully brought up the rc & svc:
[root@kubernetes-master DNS]# kubectl get po | grep dns
kube-dns-v18-hl8z6 3/3 Running 0 6s
[root@kubernetes-master DNS]# kubectl get svc | grep dns
kube-dns 10.100.0.100 <none> 53/UDP,53/TCP 20m
This is all that I got from a config standpoint. Now, in order to test my setup, I downloaded busybox and tested an nslookup:
[root@kubernetes-master DNS]# kubectl get svc | grep kubernetes
kubernetes 10.100.0.1 <none> 443/TCP
[root@kubernetes-master DNS]# kubectl exec busybox -- nslookup kubernetes
nslookup: can't resolve 'kubernetes'
Server: 10.100.0.100
Address 1: 10.100.0.100
Is there something that I have missed?
EDIT:
Going through the logs, I see something that might explain why this is not working:
kubectl logs $(kubectl get pods -l k8s-app=kube-dns -o name) -c kubedns
.
.
.
E1220 17:44:48.403976 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
E1220 17:44:48.487169 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:48.487716 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.410311 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:49.492338 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.493429 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
.
.
.
It looks like kubedns is unable to authorize against the K8S master node. I even tried to do a manual call:
curl -k https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0
Unauthorized
It looks like the kube-dns pod is not able to authenticate with the Kubernetes API server. I don't see any secret or serviceaccount in the YAML file for the kube-dns pod.
I suggest doing the following:
Create a k8s secret using kubectl create secret for the kube-dns pod with the right certificate file ca.crt and token:
$ kubectl get secrets -n=kube-system | grep dns
kube-dns-token-66tfx kubernetes.io/service-account-token 3 1d
Create a k8s serviceaccount using kubectl create serviceaccount for the kube-dns pod:
$ kubectl get serviceaccounts -n=kube-system | grep dns
kube-dns 1 1d
Mount the secret at /var/run/secrets/kubernetes.io/serviceaccount inside the kube-dns container in the YAML file:
...
kind: Pod
...
spec:
  ...
  containers:
    ...
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-dns-token-66tfx
      readOnly: true
  ...
  volumes:
  - name: kube-dns-token-66tfx
    secret:
      defaultMode: 420
      secretName: kube-dns-token-66tfx
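For steps 1 and 2, the commands could look roughly like this (the file paths are assumed placeholders; in a cluster with service-account token signing configured, creating the serviceaccount alone is enough, because its token secret is generated automatically):
kubectl create serviceaccount kube-dns -n kube-system
kubectl create secret generic kube-dns-token-66tfx -n kube-system \
  --from-file=ca.crt=/path/to/ca.crt \
  --from-file=token=/path/to/token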
Here are the links about creating serviceaccounts for pods:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/admin/service-accounts-admin/

Kubernetes DNS fails in Kubernetes 1.2

I'm attempting to set up DNS support in Kubernetes 1.2 on CentOS 7. According to the documentation, there are two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables:
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I don't have a supported kubernetes cluster setup (what's that?), or there's something else required to set its environment.
The second approach requires more manual setup. It adds two flags to kubelets, which I set by updating /etc/kubernetes/kubelet to include:
KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"
and restarting the kubelet with systemctl restart kubelet. Then it's necessary to start a replication controller and a service. The doc page cited above provides a couple of template files for this that require some editing, both for local changes (my Kubernetes API server listens to the actual IP address of the hostname rather than 127.0.0.1, making it necessary to add a --kube-master-url setting) and to remove some Salt dependencies. When I do this, the replication controller starts four containers successfully, but the kube2sky container gets terminated about a minute after completing initialization:
[david#centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
I0325 20:58:18.516905 1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
I0325 20:58:19.518337 1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
I0325 20:58:19.518364 1 kube2sky.go:530] Using kubernetes API v1
I0325 20:58:19.518468 1 kube2sky.go:598] Waiting for service: default/kubernetes
I0325 20:58:19.533597 1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
F0325 20:59:25.698507 1 kube2sky.go:625] Received signal terminated
I've determined that the termination is done by the healthz container after reporting:
2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
Aside from this, all other logs look normal. However, there is one anomaly: it was necessary to specify --validate=false when creating the replication controller, as the command otherwise gets the message:
error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false
Could this be related? These arguments come directly from the Kubernetes documentation. If not, what's needed to get this running?
Here is the skydns-rc.yaml I used:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain="cluster.local"
        - --kube-master-url=http://192.168.87.159:8080
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain="cluster.local"
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
and skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: "10.0.0.10"
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
I just commented out the lines that contain the successThreshold and failureThreshold values in skydns-rc.yaml, then re-ran the kubectl commands:
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
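Concretely, the kube2sky livenessProbe in skydns-rc.yaml then looks like this, which avoids the two fields the --validate error complained about:
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          # successThreshold: 1
          # failureThreshold: 5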
