Not able to see Kubernetes UI Dashboard - Azure

I have set up a two-node cluster: one master and one worker node, each on a separate Azure Ubuntu VM. For networking I used the Canal CNI plugin.
$ kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
ubuntu-aniket1   Ready     master    57m       v1.10.0
ubutu-aniket     Ready     <none>    56m       v1.10.0
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   canal-jztfd                              3/3       Running   0          57m
kube-system   canal-mdbbp                              3/3       Running   0          57m
kube-system   etcd-ubuntu-aniket1                      1/1       Running   0          58m
kube-system   kube-apiserver-ubuntu-aniket1            1/1       Running   0          58m
kube-system   kube-controller-manager-ubuntu-aniket1   1/1       Running   0          58m
kube-system   kube-dns-86f4d74b45-8zqqr                3/3       Running   0          58m
kube-system   kube-proxy-k5ggz                         1/1       Running   0          58m
kube-system   kube-proxy-vx9sq                         1/1       Running   0          57m
kube-system   kube-scheduler-ubuntu-aniket1            1/1       Running   0          58m
kube-system   kubernetes-dashboard-54865c6fb9-kg5zt    1/1       Running   0          26m
I created the Kubernetes dashboard with
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
and started the proxy with
$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001
When I hit the URL http://<master IP>:8001 in a browser, it shows the following output:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1",
"/apis/apps/v1beta1",
"/apis/apps/v1beta2",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2beta1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/crd.projectcalico.org",
"/apis/crd.projectcalico.org/v1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/ca-registration",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-informers",
"/logs",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
But when I try to hit http://<master IP>:8001/ui, I am not able to see the Kubernetes dashboard. Instead I see the following output:
{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
Could you please help me resolve the dashboard issue?
Thanks in advance.

Try going to:
http://<master IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
As mentioned here: https://github.com/kubernetes/dashboard
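If you want to rule out remote-access problems first, one way (a rough sketch, run on the master while the proxy is up) is to request that same path locally with curl and check the status code:
```
# ask the proxy for the dashboard page from the master itself;
# "200" means the dashboard service is reachable through the proxy
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
```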

As mentioned in kubernetes/dashboard issue 1803:
since the changes in Kubernetes 1.6, users that want to enable RBAC should configure it first to allow the dashboard access to the API server.
Make sure you have defined a service account, as described here, to be able to access the dashboard.
See "Service Account Permissions":
Default RBAC policies grant scoped permissions to control-plane components, nodes, and controllers, but grant no permissions to service accounts outside the “kube-system” namespace (beyond discovery permissions given to all authenticated users).
This allows you to grant particular roles to particular service accounts as needed.
Fine-grained role bindings provide greater security, but require more effort to administrate.
Broader grants can give unnecessary (and potentially escalating) API access to service accounts, but are easier to administrate.
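If you just need a token the dashboard login screen will accept, a minimal sketch with plain kubectl commands could look like this; the dashboard-admin name is only an example, and binding cluster-admin is the broad-but-easy option the quote above warns about:
```
# create a service account for dashboard access (name is an example)
kubectl -n kube-system create serviceaccount dashboard-admin

# bind it to cluster-admin: easy to administer, but very broad access
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin

# print the token of the automatically created secret and use it to log in
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')
```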

I faced the same issue when I was creating my self-hosted Kubernetes cluster on AWS EC2 machines. I troubleshooted it in the following way and fixed it:
$ ssh -i ~/.ssh/id_rsa admin@api.example.com   (log in to the master from the machine where kops is installed)
$ kubectl proxy --address=0.0.0.0 --port=8001 &
$ ssh -i pemfile username@<IP-address>   (on the machine where you installed kops)
$ cat ~/.kube/config   (to get the user name and password)
$ kubectl -n kube-system describe secret admin-user-token-id
To get to the dashboard, open:
http://MasterIP_address:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
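If you would rather not expose the proxy on 0.0.0.0, an alternative sketch (key file, user, and host are placeholders) is to keep the proxy on localhost and reach it through an SSH tunnel:
```
# on your workstation: forward local port 8001 to localhost:8001 on the master
ssh -i pemfile -L 8001:localhost:8001 username@<master IP>

# on the master: the proxy binds to 127.0.0.1 by default
kubectl proxy --port=8001

# then open in a local browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```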

Related

Is there a way to add a suffix to a pod's name when using the kubectl scale command

I'm running a command like this:
# add an executor pod
kubectl scale deployments executor --replicas 1
# show new pod
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# executor-8cb7dc8689-w58ls 1/1 Running 0 11m
This is typically done to run some command via kubectl exec.
We have multiple people on the team occasionally doing this and sometimes forgetting to scale back down, leaving these resources up.
Is there a way I can dynamically add a suffix just to the new pod's name when scaling so I can have some indication of ownership? For example, something like this:
echo $USER
# myusername
kubectl scale deployments executor --replicas 1 --name-suffix $USER
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# executor-8cb7dc8689-w58ls-myusername 1/1 Running 0 11m
No, that's not possible. Pod names are generated by the ReplicaSet and always take the form {replicasetname}-{random}.
I think the best option to achieve something similar is to create one deployment per user and then use RBAC to restrict access to each deployment so that only user1 can scale deployment1, user2 can scale deployment2, etc.
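For the RBAC part, a rough sketch with imperative kubectl commands (the namespace, deployment, and user names are placeholders) could limit each user to scaling only their own deployment via the deployments/scale subresource:
```
# role that only allows scaling the deployment named "executor-user1" in "default"
kubectl -n default create role scale-executor-user1 \
  --verb=get,update,patch \
  --resource=deployments/scale \
  --resource-name=executor-user1

# bind that role to the user "user1"
kubectl -n default create rolebinding scale-executor-user1 \
  --role=scale-executor-user1 \
  --user=user1
```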

exec user process caused "exec format error" during setup

I'm trying to install haproxy-ingress under Kubernetes v1.18 (hosted on a Raspberry Pi).
The master node has been correctly labeled with role=ingress-controller.
The kubectl create also works fine:
# kubectl create -f https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
namespace/ingress-controller created
serviceaccount/ingress-controller created
clusterrole.rbac.authorization.k8s.io/ingress-controller created
role.rbac.authorization.k8s.io/ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
rolebinding.rbac.authorization.k8s.io/ingress-controller created
configmap/haproxy-ingress created
daemonset.apps/haproxy-ingress created
But then, the pod is in crash loop:
# kubectl get pods -n ingress-controller -o wide
NAME                    READY   STATUS             RESTARTS   AGE   IP              NODE              NOMINATED NODE   READINESS GATES
haproxy-ingress-dpcvc   0/1     CrashLoopBackOff   1          30s   192.168.1.101   purple.cloudlet   <none>           <none>
And the logs show this error:
# kubectl logs haproxy-ingress-dpcvc -n ingress-controller
standard_init_linux.go:211: exec user process caused "exec format error"
Has anyone experienced something similar? Could this be related to the ARM (32-bit) architecture of the Raspbian image that I'm using?
Raspberry Pis run ARM architectures, which unfortunately are not supported by haproxy-ingress.
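A quick way to confirm a mismatch like this (just a sketch; the exact output will differ) is to compare the architecture Kubernetes reports for the node with the platforms the image is actually published for:
```
# show the CPU architecture of each node (arm, arm64, amd64, ...)
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture

# find out which image the DaemonSet runs ...
kubectl -n ingress-controller get daemonset haproxy-ingress \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# ... and list the platforms that image provides (needs `docker manifest` support)
docker manifest inspect <image-from-previous-command> | grep architecture
```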

kube-dns addon enabled but no kube-dns pods available

I discovered through some troubleshooting that kube-dns is not working as intended in my minikube cluster. I can see the kube-dns addon enabled when I run the minikube addons list command, and there is also a kube-dns service running, but there are no kube-dns pods running.
$ kubectl get all -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
po/kube-addon-manager-minikube   1/1       Running   0          15m
po/kubernetes-dashboard-bltvf    1/1       Running   0          14m

NAME                      DESIRED   CURRENT   READY     AGE
rc/kubernetes-dashboard   1         1         1         14m

NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   14m
svc/kubernetes-dashboard   10.0.0.192   <nodes>       80:30000/TCP    14m
$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS   AGE
kube-dns   <none>      19m
I've tried using the kube-dns-controller.yaml file to create/deploy it manually, but I also get errors validating that file:
error: error validating "kube-dns-controller.yaml": error validating data: [found invalid field optional for v1.ConfigMapVolumeSource, found invalid field tolerations for v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Any ideas on what else I should look at to resolve the issue? Thanks!
Note: I am using minikube version v0.19.1 and kubernetes v1.5.2.
Looks like the issue was with the Kubernetes version. Once I upgraded to v1.6.0, kube-dns was working fine again.
EDIT: To fix the issue with v1.5.2 I used the workaround seen here.
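If recreating the local cluster is acceptable, a minimal sketch of moving minikube to the newer Kubernetes version is:
```
# throw away the old local cluster and start one on Kubernetes v1.6.0
minikube delete
minikube start --kubernetes-version=v1.6.0

# kube-dns pods should now appear in kube-system
kubectl get pods -n kube-system
```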

Error: container "dnsmasq" is unhealthy, it will be killed and re-created while running local cluster in kubernetes

I am running a local Kubernetes cluster using the ./hack/local-up-cluster.sh script. When my firewall is off, all the containers in kube-dns are running:
```
# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-dns-73328275-87g4d 3/3 Running 0 45s
```
But when the firewall is on, I can see only 2 of the 3 containers running:
```
# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-dns-806549836-49v7d 2/3 Running 0 45s
```
After investigating in detail, it turns out the pod is failing because the dnsmasq container is not running:
```
7m 7m 1 kubelet, 127.0.0.1 spec.containers{dnsmasq} Normal Killing Killing container with id docker://41ef024a0610463e04607665276bb64e07f589e79924e3521708ca73de33142c:pod "kube-dns-806549836-49v7d_kube-system(d5729c5c-24da-11e7-b166-52540083b23a)" container "dnsmasq" is unhealthy, it will be killed and re-created.
```
Can you help me figure out how to run the dnsmasq container with the firewall on, and what exactly I would need to change? TIA.
Turns out my kube-dns service has no endpoints; any idea why that is?
You can flush the iptables rules (iptables -F) before starting your cluster; that can solve your problem.
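If you want to see what is being blocked before flushing everything, a rough sketch (run as root on the host) is:
```
# inspect the current filter and NAT rules that may be dropping the health-check traffic
iptables -L -n -v
iptables -t nat -L -n -v

# flush the rules (this removes ALL rules, so do it before starting the cluster)
iptables -F
iptables -t nat -F
```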

Cannot get kube-dns to start on Kubernetes

Hoping someone can help.
I have a 3-node CoreOS cluster running Kubernetes. The nodes are as follows:
192.168.1.201 - Controller
192.168.1.202 - Worker Node
192.168.1.203 - Worker Node
The cluster is up and running, and I can run the following commands:
> kubectl get nodes
NAME            STATUS                     AGE
192.168.1.201   Ready,SchedulingDisabled   1d
192.168.1.202   Ready                      21h
192.168.1.203   Ready                      21h
> kubectl get pods --namespace=kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
kube-apiserver-192.168.1.201            1/1       Running            2          1d
kube-controller-manager-192.168.1.201   1/1       Running            4          1d
kube-dns-v20-h4w7m                      2/3       CrashLoopBackOff   15         23m
kube-proxy-192.168.1.201                1/1       Running            2          1d
kube-proxy-192.168.1.202                1/1       Running            1          21h
kube-proxy-192.168.1.203                1/1       Running            1          21h
kube-scheduler-192.168.1.201            1/1       Running            4          1d
As you can see, the kube-dns pod is not running correctly. It keeps restarting and I am struggling to understand why. Any help in debugging this would be greatly appreciated (or pointers to where to read about debugging this). Running kubectl logs does not bring anything back... not sure if the addons function differently to standard pods.
Running a kubectl describe pods, I can see the containers are killed due to being unhealthy:
16m 16m 1 {kubelet 192.168.1.203} spec.containers{kubedns} Normal Created Created container with docker id 189afaa1eb0d; Security:[seccomp=unconfined]
16m 16m 1 {kubelet 192.168.1.203} spec.containers{kubedns} Normal Started Started container with docker id 189afaa1eb0d
14m 14m 1 {kubelet 192.168.1.203} spec.containers{kubedns} Normal Killing Killing container with docker id 189afaa1eb0d: pod "kube-dns-v20-h4w7m_kube-system(3a545c95-ea19-11e6-aa7c-52540021bfab)" container "kubedns" is unhealthy, it will be killed and re-created
Please find a full output of this command as a github gist here: https://gist.github.com/mehstg/0b8016f5398a8781c3ade8cf49c02680
Thanks in advance!
If you installed your cluster with kubeadm, you should add a pod network after installing.
If you choose flannel as your pod network, you should have this argument in your init command: kubeadm init --pod-network-cidr 10.244.0.0/16.
The flannel YAML file can be found in the CoreOS flannel repo.
All you need to do if your cluster was initialized properly (read above), is to run kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once this is up and running (it will create pods on every node), your kube-dns pod should come up.
If you need to reset your installation (for example to add the argument to kubeadm init), you can use kubeadm reset on all nodes.
Normally, you would run the init command on the master, then add a pod network, and then add your other nodes.
This is all described in more detail in the Getting started guide, step 3/4 regarding the pod network.
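Put together, a minimal sketch of that sequence (the join command is whatever kubeadm init printed for your cluster) looks like:
```
# on every node, if you need a clean slate first
kubeadm reset

# on the master: init with the pod-network CIDR flannel expects
kubeadm init --pod-network-cidr 10.244.0.0/16

# still on the master: install the flannel pod network
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# on each worker: run the `kubeadm join ...` command printed by kubeadm init

# kube-dns should leave CrashLoopBackOff once the flannel pods are Running
kubectl get pods --namespace=kube-system
```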
As your gist says, your pod network seems to be broken. You are using a custom pod network with 10.10.10.X. You should communicate these IPs to all components.
Please check that there is no collision with other existing networks.
I recommend setting up with Calico, as that was the solution that got my CoreOS Kubernetes cluster working.
After following the steps in the official kubeadm doc with flannel networking, I ran into a similar issue:
http://janetkuo.github.io/docs/getting-started-guides/kubeadm/
It appears the networking pods get stuck in error states:
kube-dns-xxxxxxxx-xxxvn (rpc error)
kube-flannel-ds-xxxxx (CrashLoopBackOff)
kube-flannel-ds-xxxxx (CrashLoopBackOff)
kube-flannel-ds-xxxxx (CrashLoopBackOff)
In my case it was related to RBAC permission errors and was resolved by running:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
Afterwards, all kube-system pods went into the Running state. The upstream issue is discussed on GitHub: https://github.com/kubernetes/kubernetes/issues/44029
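To check that RBAC denial is really what you are hitting (rather than a different network problem), a quick sketch before and after applying the RBAC manifest might be:
```
# pod and container names are examples; look for "forbidden" / permission errors
kubectl -n kube-system logs kube-flannel-ds-xxxxx -c kube-flannel

# apply the RBAC manifest and watch the kube-system pods recover
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl -n kube-system get pods -w
```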
