I ran this command to create my pod:
kubectl run my-ngnix --image nginx
Now, I'm trying to delete the pod/deployment with the following command:
kubectl delete deployment my-nginx
The problem is that my terminal tells me this is not possible, since it can't find the resource:
Error from server (NotFound): deployments.apps "my-nginx" not found
If I ask for all resources, this is what I see:
kubectl get all
NAME           READY   STATUS    RESTARTS   AGE
pod/my-ngnix   1/1     Running   0          27m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   159m
root@aharo003:~# kubectl stop pods,services -l pod/m-ngnix
Does someone know what else I should do?
kubectl get all shows you the resources you created;
in this case the output starts with the kind followed by the resource name.
You can simply type kubectl delete pod/my-ngnix to delete the pod. Your command kubectl run my-ngnix --image nginx created just the pod, without a deployment.
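A minimal sketch of both paths (the pod name keeps the "ngnix" typo from the original command; the deployment commands are only relevant if you actually want a Deployment):
```
# Delete the bare pod that `kubectl run` created
kubectl delete pod my-ngnix

# If you want something you can later remove with `kubectl delete deployment`,
# create a Deployment explicitly instead
kubectl create deployment my-nginx --image=nginx
kubectl delete deployment my-nginx
```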
I have set up a cluster with 2 nodes: one master and one worker, each on a different Azure Ubuntu VM. For networking I used the Canal tool.
$ kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
ubuntu-aniket1   Ready     master    57m       v1.10.0
ubutu-aniket     Ready     <none>    56m       v1.10.0
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   canal-jztfd                               3/3       Running   0          57m
kube-system   canal-mdbbp                               3/3       Running   0          57m
kube-system   etcd-ubuntu-aniket1                       1/1       Running   0          58m
kube-system   kube-apiserver-ubuntu-aniket1             1/1       Running   0          58m
kube-system   kube-controller-manager-ubuntu-aniket1    1/1       Running   0          58m
kube-system   kube-dns-86f4d74b45-8zqqr                 3/3       Running   0          58m
kube-system   kube-proxy-k5ggz                          1/1       Running   0          58m
kube-system   kube-proxy-vx9sq                          1/1       Running   0          57m
kube-system   kube-scheduler-ubuntu-aniket1             1/1       Running   0          58m
kube-system   kubernetes-dashboard-54865c6fb9-kg5zt     1/1       Running   0          26m
When I tried to create the Kubernetes Dashboard with
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
and started the proxy with
$ kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
Starting to serve on [::]:8001
When I hit the URL http://<master IP>:8001 in a browser, it shows the following output:
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1",
"/apis/apps/v1beta1",
"/apis/apps/v1beta2",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2beta1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/crd.projectcalico.org",
"/apis/crd.projectcalico.org/v1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/ca-registration",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-informers",
"/logs",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
But when I try to hit http://<master IP>:8001/ui, I am not able to see the Kubernetes dashboard. Instead I see the following output:
{
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
Could you please help me resolve the dashboard issue?
Thanks in advance.
Try going to:
http://<master IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
As mentioned here: https://github.com/kubernetes/dashboard
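If you only want to confirm that the proxy path is reachable before opening it in a browser, a quick check (using the same <master IP> placeholder as above) might look like:
```
curl -s "http://<master IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" | head
```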
As mentioned in kubernetes/dashboard issue 1803:
with the RBAC changes in Kubernetes 1.6, users that want to enable RBAC should configure it first to allow the dashboard access to the API server.
Make sure you have defined a service account, as described here, to be able to access the dashboard.
See "Service Account Permissions":
Default RBAC policies grant scoped permissions to control-plane components, nodes, and controllers, but grant no permissions to service accounts outside the “kube-system” namespace (beyond discovery permissions given to all authenticated users).
This allows you to grant particular roles to particular service accounts as needed.
Fine-grained role bindings provide greater security, but require more effort to administrate.
Broader grants can give unnecessary (and potentially escalating) API access to service accounts, but are easier to administrate.
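A minimal sketch of that setup, using illustrative names (dashboard-admin) and the broad cluster-admin binding described above as the easier, less secure option:
```
# Create a service account for the dashboard in kube-system
kubectl -n kube-system create serviceaccount dashboard-admin

# Bind it to the broad cluster-admin role (fine-grained roles are safer but more work)
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin

# Read the account's token to log in at the dashboard's token prompt
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
```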
I faced the same issue when I was creating my self-hosted Kubernetes cluster on AWS EC2 machines. I troubleshooted it in the following way and fixed it:
$ ssh -i ~/.ssh/id_rsa admin@api.example.com (enter the master machine from the machine where kops is installed)
$ kubectl proxy --address=0.0.0.0 --port=8001 &
$ ssh -i pemfile username@IP-address (on the machine where you installed kops)
$ cat ~/.kube/config (to get the user name and password)
$ kubectl -n kube-system describe secret admin-user-token-id
To get the Dashboard, open:
http://MasterIP_address:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I discovered through some troubleshooting that kube-dns is not working as intended in my minikube cluster. I can see the kube-dns addon enabled when I run the minikube addons list command, and there is also a kube-dns service running, but there are no kube-dns pods running.
$ kubectl get all -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
po/kube-addon-manager-minikube   1/1       Running   0          15m
po/kubernetes-dashboard-bltvf    1/1       Running   0          14m

NAME                      DESIRED   CURRENT   READY     AGE
rc/kubernetes-dashboard   1         1         1         14m

NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   14m
svc/kubernetes-dashboard   10.0.0.192   <nodes>       80:30000/TCP    14m
$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS   AGE
kube-dns   <none>      19m
I've tried using the kube-dns-controller.yaml file to create/deploy it manually, but I also get errors validating that file:
error: error validating "kube-dns-controller.yaml": error validating data: [found invalid field optional for v1.ConfigMapVolumeSource, found invalid field tolerations for v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Any ideas on what else I should look at to resolve the issue? Thanks!
Note: I am using minikube version v0.19.1 and kubernetes v1.5.2.
Looks like the issue was with the Kubernetes version. Once I upgraded to v1.6.0, kube-dns was working fine again.
EDIT: To fix the issue with v1.5.2, I used the workaround seen here.
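For reference, a sketch of recreating the minikube cluster against a newer Kubernetes version (the exact version string is just an example):
```
# Delete the old cluster and start a fresh one pinned to Kubernetes v1.6.0
minikube delete
minikube start --kubernetes-version v1.6.0
```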
I am running a local Kubernetes cluster using the ./hack/local-up-cluster.sh script. Now, when my firewall is off, all the containers in kube-dns are running:
```
# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE     NAME                      READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-73328275-87g4d   3/3       Running   0          45s
```
But when the firewall is on, I can see that only 2 containers are running:
```
# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE     NAME                       READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-806549836-49v7d   2/3       Running   0          45s
```
After investigating in detail, it turns out the pod is failing because the dnsmasq container is not running:
```
7m 7m 1 kubelet, 127.0.0.1 spec.containers{dnsmasq} Normal Killing Killing container with id docker://41ef024a0610463e04607665276bb64e07f589e79924e3521708ca73de33142c:pod "kube-dns-806549836-49v7d_kube-system(d5729c5c-24da-11e7-b166-52540083b23a)" container "dnsmasq" is unhealthy, it will be killed and re-created.
```
Can you help me figure out how to run the dnsmasq container with the firewall on, and what exactly I would need to change? TIA.
Turns out my kube-dns service has no endpoints. Any idea why that is?
You can turn off iptables (iptables -F) before starting your cluster; it can solve your problem.
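A minimal sketch of that workaround, assuming a disposable dev machine (flushing removes all existing firewall rules):
```
# Flush all iptables filter rules, then bring the local cluster up again
sudo iptables -F
./hack/local-up-cluster.sh
```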
I have a simple container on Google Container Engine that has been running for months with no issues. Suddenly, I cannot resolve ANY external domain. In troubleshooting I have re-created the container many times and upgraded the cluster version to 1.4.7 in an attempt to resolve it, with no change.
To rule out the app code as much as possible: even basic Node.js code cannot resolve an external domain:
const dns = require('dns');
dns.lookup('nodejs.org', function(err, addresses, family) {
  console.log('addresses:', addresses);
});
/* logs 'undefined' */
The same code run on a local machine or in a local Docker container works as expected.
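For comparison, a sketch of that local check (the node:6 image tag and the inline one-liner are illustrative, not from the original setup):
```
# Run the same lookup in a throwaway local container
docker run --rm node:6 node -e \
  "require('dns').lookup('nodejs.org', function (err, addr) { console.log('addresses:', addr); })"
```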
This kubectl call fails as well:
# kubectl exec -ti busybox -- nslookup kubernetes.default
nslookup: can't resolve 'kubernetes.default'
Two pods show up when getting the kube-dns pods (admittedly, not sure if that is expected):
# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                 READY     STATUS    RESTARTS   AGE
kube-dns-v20-v8pd6   3/3       Running   0          1h
kube-dns-v20-vtz4o   3/3       Running   0          1h
Both say this when trying to check for errors in the DNS pod:
# kubectl logs --namespace=kube-system pod/kube-dns-v20-v8pd6 -c kube-dns
Error from server: container kube-dns is not valid for pod kube-dns-v20-v8pd6
I suspect the internally created kube-dns is not properly resolving external DNS, or some other linkage has disappeared.
I'll accept almost any workaround if one exists, as this is a production app - perhaps it is possible to manually set nameservers in the Kubernetes controller YAML file or elsewhere. Setting the contents of /etc/resolv.conf in the Dockerfile does not seem to work.
Just checked, and in our own clusters we usually have 3 kube-dns pods, so something seems off there.
What does this say: kubectl describe rc kube-dns-v20 --namespace=kube-system
What happens when you kill the kube-dns pods? (the rc should automatically restart them)
What happens when you do an nslookup with a specific nameserver? nslookup nodejs.org 8.8.8.8
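A sketch of those checks as concrete commands, assuming the rc and pod names shown above:
```
# Does the replication controller think all replicas are healthy?
kubectl describe rc kube-dns-v20 --namespace=kube-system

# Kill the kube-dns pods; the rc should recreate them
kubectl delete pods --namespace=kube-system -l k8s-app=kube-dns

# Bypass kube-dns entirely by querying a public nameserver from the busybox pod
kubectl exec -ti busybox -- nslookup nodejs.org 8.8.8.8
```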