I created a cluster in gcloud with three nodes. So far so good. Thereafter I tried to run a pod and it is giving an error. I found out that kubectl is not configured correctly. I get the following error when I try to run the pod. I would appreciate any help in this regard.
error: could not read an encoded object from nodejs.yaml: unable to connect to a server to handle "pods": couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
Thanks.
If your kubectl configuration is incorrect after creating a cluster, you can always run gcloud container clusters get-credentials NAME (see configuring kubectl) to restore a working kubeconfig file.
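A minimal sketch (NAME and ZONE are placeholders for your own cluster name and zone; regional clusters take --region instead of --zone):
# rewrite ~/.kube/config with credentials for the GKE cluster
gcloud container clusters get-credentials NAME --zone ZONE
# verify kubectl now talks to the cluster API server instead of localhost:8080
kubectl cluster-info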
I'm using the following command to add the Weave Net addon to my newly configured Kubernetes cluster (which resides in a restricted network); I used proxy URLs during the Kubernetes installation. I'm getting the following error when executing the command below:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I tried using wget to download the .yaml file and applying it locally with the same command, yet I get the same error. Can someone suggest a workaround for this?
The issue was that I was exporting proxy URLs before executing the above pod-network command, which made the request go through the proxy (as I believe; the proxies are already configured within Docker to reach the internet). I opened a new terminal without exporting the proxy variables and it did the work.
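Equivalently, a minimal sketch that clears the proxy variables in the current shell instead of opening a new terminal (assuming the proxy was set via the usual environment variables and the manifest was already downloaded with wget as weave-daemonset-k8s.yaml):
# drop the proxy settings for this shell so kubectl talks to the API server directly
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
# re-apply the locally downloaded Weave Net manifest
kubectl apply -f weave-daemonset-k8s.yaml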
While following the tutorial steps at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
I've managed to create a single-node Elasticsearch cluster.
But when running the following command to add a second Elasticsearch node to the existing cluster:
docker run -e ENROLLMENT_TOKEN="<token>" --name es02 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.3.2
I get the following error:
Unable to communicate with the node on https://172.18.0.92:9200/_security/enroll/node. Error was Connection timed out.
ERROR: Aborting enrolling to cluster. Could not communicate with the node on any of the addresses from the enrollment token. All of [172.18.0.92:9200] were attempted.
I would greatly appreciate hearing whether others are getting the same error, or if you know how to fix this issue. Thanks.
While trying to deploy an application, I got an error as below:
Error: UPGRADE FAILED: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
The output of kubectl api-resources contains some resources, along with the same error at the end.
Environment: Azure Cloud, AKS Service
Solution:
The steps I followed are:
kubectl get apiservices: if the metrics-server service is down with the error CrashLoopBackOff, follow step 2; otherwise just try to restart the metrics-server service using kubectl delete apiservice/"service_name". For me it was v1beta1.metrics.k8s.io.
kubectl get pods -n kube-system: I found out that pods like metrics-server and kubernetes-dashboard were down because the main CoreDNS pod was down.
For me it was:
NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
Use kubectl describe pod/"pod_name" to check the error in the CoreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then we need to use forward instead of proxy in the YAML file where the CoreDNS config lives, because the CoreDNS 1.5.x version used by the image no longer supports the proxy keyword (see the sketch below).
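A minimal sketch of that edit (assuming CoreDNS runs as the coredns deployment in kube-system and forwards to the default /etc/resolv.conf; your Corefile may differ):
# open the CoreDNS config for editing
kubectl -n kube-system edit configmap coredns
# inside the Corefile, replace the deprecated directive
#   proxy . /etc/resolv.conf
# with
#   forward . /etc/resolv.conf
# then restart CoreDNS so it picks up the new config
kubectl -n kube-system rollout restart deployment coredns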
This error commonly happens when the metrics-server pod is not reachable by the master node. Possible reasons are:
The metrics-server pod is not running. This is the first thing you should check. Then look at the logs of the metrics-server pod to check whether it has permission issues when trying to get metrics.
Try to confirm communication between the master and worker nodes.
Try running kubectl top nodes and kubectl top pods -A to see if metrics-server is running OK (see the sketch below).
From these points you can proceed further.
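A minimal sketch of those checks (the k8s-app=metrics-server label and the deployment name are the usual defaults, but may differ in your cluster):
# is the metrics-server pod running?
kubectl -n kube-system get pods -l k8s-app=metrics-server
# look at its logs for permission or connectivity errors
kubectl -n kube-system logs deploy/metrics-server
# does the metrics API answer?
kubectl top nodes
kubectl top pods -A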
For some reason my master node can no longer connect to my cluster after upgrading from kubernetes 1.11.9 to 1.12.9 via kops (version 1.13.0). In the manifest I'm upgrading kubernetesVersion from 1.11.9 -> 1.12.9. This is the only change I'm making. However when I run kops rolling-update cluster --yes I get the following error:
Cluster did not pass validation, will try again in "30s" until duration "5m0s" expires: machine "i-01234567" has not yet joined cluster.
Cluster did not validate within 5m0s
After that if I run a kubectl get nodes I no longer see that master node in my cluster.
Doing a little bit of debugging by SSHing into the disconnected master node instance, I found the following error in my api-server log by running sudo cat /var/log/kube-apiserver.log:
controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused
I suspect the issue might be related to etcd, because when I run sudo netstat -nap | grep LISTEN | grep etcd there is no output.
Anyone have any idea how I can get my master node back in the cluster or have advice on things to try?
I have done some research and have a few ideas for you:
If there is no output for the etcd grep, it means that your etcd server is down. Check the logs of the 'Exited' etcd container (for example docker ps -a | grep Exited | grep etcd, then docker logs <etcd-container-id>).
Try these instructions I found:
1 - I removed the old master from the etcd cluster using etcdctl. You will need to connect to the etcd-server container to do this.
2 - On the new master node I stopped the kubelet and protokube services.
3 - Empty the etcd data dir (data and data-events).
4 - Edit /etc/kubernetes/manifests/etcd.manifests and etcd-events.manifest, changing ETCD_INITIAL_CLUSTER_STATE from new to existing.
5 - Get the name and PeerURLs from the new master and use etcdctl to add the new master to the cluster (etcdctl member add "name" "PeerURL"; see the sketch below). You will need to connect to the etcd-server container to do this.
6 - Start the kubelet and protokube services on the new master.
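A minimal sketch of steps 1 and 5 with etcdctl (v2 syntax; etcd v3 uses --peer-urls instead. Member names, IDs and peer URLs are placeholders and must be taken from your own cluster):
# inside the etcd-server container: list members and note the old master's ID
etcdctl member list
# step 1: remove the old master from the cluster
etcdctl member remove <old-member-id>
# step 5: add the new master with its name and peer URL
etcdctl member add <new-master-name> http://<new-master-ip>:2380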
If that is not the case, then you might have a problem with the certs. They are provisioned during the creation of the cluster and some of them include the allowed master's endpoints. If that is the case you'd need to create new certs and roll them out for the API server/etcd clusters.
Please let me know if that helped.
I am trying to run the Kubernetes User Interface. I am getting an error:
[root@ts_kubernetes_setup gcp-live-k8s-visualizer]# kubectl proxy
Error in configuration: context was not found for specified context: cluster51
I followed this http://kubecloud.io/guide-setting-up-visualizer-for-kubernetes/
Then I tried to delete this cluster using
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
kubectl config unset users.my-cluster-admin
After performing the last step, when I try to run kubectl proxy I get the error. Please suggest a clean way to get the UI working.
When you did kubectl config delete-context cluster51, it deleted the context from your ~/.kube/config. Hence the error:
Error in configuration: context was not found for specified context: cluster51
You can view the contents of the ~/.kube/config file, or use the kubectl config view command, to help troubleshoot this error.
It seems there is something (config set-credentials?) missing in these steps (see the sketch after them):
$ kubectl config set-cluster cluster51 --server=http://192.168.1.51:8080
$ kubectl config set-context cluster51 --cluster=cluster51
$ kubectl config use-context cluster51
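A hedged sketch of the possibly missing credentials step (the user name and token are placeholders, not values from the tutorial):
# register a user entry and attach it to the context
kubectl config set-credentials cluster51-admin --token=<token>
kubectl config set-context cluster51 --cluster=cluster51 --user=cluster51-admin
kubectl config use-context cluster51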
If you're not running an RPi cluster and just want to play with the Kubernetes visualizer, may I suggest using kubernetes/minikube instead?
This might help a beginner who is stuck here and getting the below message in the Kubernetes CLI.
kubectl config delete-cluster my-cluster doesn't delete your cluster, it only removes the entry from your kubectl configuration. The error you are getting suggests that you need to configure kubectl correctly in order to use it with your cluster. I suggest you read the kubectl documentation.