I am trying to run the Kubernetes user interface and am getting this error:
[root@ts_kubernetes_setup gcp-live-k8s-visualizer]# kubectl proxy
Error in configuration: context was not found for specified context: cluster51
I followed this guide: http://kubecloud.io/guide-setting-up-visualizer-for-kubernetes/
Then I tried to delete this cluster using:
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
kubectl config unset users.my-cluster-admin
After performing the last step, when I try to run kubectl proxy I get the error above. Please suggest a clean way to get the UI working.
When you ran kubectl config delete-context cluster51, it deleted the context from your ~/.kube/config, hence the error:
Error in configuration: context was not found for specified context: cluster51
You can view the contents of the ~/.kube/config file, or use the kubectl config view command, to help troubleshoot this error.
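For example, a quick way to see what kubectl actually has configured (a sketch; kubectl config view omits certificate data by default, so it is safe to inspect):

# Show the merged kubeconfig currently in effect
kubectl config view
# List the contexts that really exist and mark the current one
kubectl config get-contexts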
It seems something (kubectl config set-credentials?) is missing from these steps:
$ kubectl config set-cluster cluster51 --server=http://192.168.1.51:8080
$ kubectl config set-context cluster51 --cluster=cluster51
$ kubectl config use-context cluster51
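If credentials are the missing piece, a minimal sketch of that step might look like the following; the user name and token are placeholders, not values from the guide:

# Hypothetical credentials entry; substitute your real auth details
kubectl config set-credentials cluster51-admin --token=<your-token>
# Re-create the context so it references both the cluster and the user
kubectl config set-context cluster51 --cluster=cluster51 --user=cluster51-admin
kubectl config use-context cluster51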
If you're not running an RPi cluster and just want to play with the Kubernetes visualizer, may I suggest using kubernetes/minikube instead?
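A minimal sketch, assuming minikube is installed:

# Start a local single-node cluster
minikube start
# Open the bundled dashboard UI in a browser
minikube dashboard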
This might help a beginner who is stuck here and getting the message below in the Kubernetes CLI.
kubectl config delete-cluster my-cluster doesn't delete your cluster; it only removes the entry from your kubectl configuration. The error you are getting suggests that you need to configure kubectl correctly to use it with your cluster. I suggest you read the kubectl documentation.
Related
I am getting the following error:
Starting: Bash
==============================================================================
Task : Bash
Description : Run a Bash script on macOS, Linux, or Windows
Version : 3.201.1
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
Generating script.
Formatted command: exec bash '/home/vsts/work/1/s/Orchestration/dev/deploy.sh'
========================== Starting Command Output ===========================
/bin/bash /home/vsts/work/_temp/bdf3cbe7-1e0a-4e19-a974-71118a3adb33.sh
error: no context exists with the name: "default"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
##[error]Bash exited with code '1'.
Finishing: Bash
while I run this task from the Azure pipeline:
- task: Bash@3
  inputs:
    filePath: '$(Build.SourcesDirectory)/Orchestration/dev/deploy.sh'
These are the contents of deploy.sh:
kubectl config use-context default
kubectl apply -f Orchestration/dev/configmap.yaml
kubectl apply -f Orchestration/dev/secrets.yaml
kubectl apply -f Orchestration/dev/deployment.yaml
kubectl apply -f Orchestration/dev/service.yaml
I am not sure why there is an error; every Azure AKS cluster has a default namespace, and mine does too.
Agreed with @Thomas. The name you passed to kubectl config use-context, default, is a namespace name; instead, you need to provide the name of a context (for AKS this is usually the cluster name).
For example, to set the current context:
$ kubectl config use-context Context_name
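To find the real context name, list the contexts kubectl knows about; for AKS, the context is normally created by pulling credentials first. A sketch (myResourceGroup and myAKSCluster are placeholders):

# Pull credentials for the cluster; this creates/updates the kubeconfig context
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
# The NAME column here is what use-context expects
kubectl config get-contexts
# Switch to it (the context name defaults to the cluster name)
kubectl config use-context myAKSCluster

With that in place, the first line of deploy.sh above would use that context name rather than default.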
Also, based on the Kubernetes docs:

A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster.

For more information, please refer to the Kubernetes docs | Configure Access to Multiple Clusters,
and this similar SO thread: error: no context exists with the name: "Aks_cluster_name" when tried from WSL.
I am trying to deploy resources in a Kubernetes cluster using a GitLab CI/CD pipeline, following https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_tunnel.html.
I am able to successfully deploy the resources if both the agent configuration and the manifests are placed in the same project.
kubectl config get-contexts
CURRENT   NAME                          CLUSTER   AUTHINFO      NAMESPACE
          testgroup/agentk:myk8sagent   gitlab    agent:12755
$ kubectl config use-context testgroup/agentk:myk8sagent
Switched to context "testgroup/agentk:myk8sagent".
$ kubectl get pods
No resources found in default namespace.
But when the manifests are in a different project (under the same group), it is not able to identify the context.
kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
$ kubectl config use-context testgroup/agentk:myk8sagent
error: no context exists with the name: "testgroup/agentk:myk8sagent"
What am I missing here?
I also had this issue; for now it just doesn't work across different groups, only within a single group hierarchy. This is on v14.9.2.
There is an open issue: https://gitlab.com/gitlab-org/gitlab/-/issues/346636
As a workaround, we are using an agent per project group.
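For reference, sharing an agent with other projects is configured in the agent's config.yaml inside the agent project; a sketch, assuming the agent lives in testgroup/agentk and is named myk8sagent (the manifest project id is hypothetical):

# .gitlab/agents/myk8sagent/config.yaml
ci_access:
  projects:
    - id: testgroup/other-project   # hypothetical manifest project
  groups:
    - id: testgroup                 # group-wide access; the part affected by the issue above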
While trying to deploy an application, I got the error below:
Error: UPGRADE FAILED: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
The output of kubectl api-resources lists some resources, along with the same error at the end.
Environment: Azure Cloud, AKS Service
Solution:
The steps I followed are:
kubectl get apiservices: if the metrics-server service is down with the error CrashLoopBackOff, follow step 2; otherwise just restart the metrics-server service using kubectl delete apiservice/"service_name". For me it was v1beta1.metrics.k8s.io.
kubectl get pods -n kube-system: I found out that pods like metrics-server and kubernetes-dashboard were down because the main CoreDNS pod was down.
For me it was:
NAME                          READY   STATUS             RESTARTS   AGE
pod/coredns-85577b65b-zj2x2   0/1     CrashLoopBackOff   7          13m
Use kubectl describe pod/"pod_name" to check the error in the CoreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then we need to use forward instead of proxy in the YAML file holding the CoreDNS config, because CoreDNS version 1.5.x used by the image no longer supports the proxy keyword.
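For illustration, the change in the CoreDNS ConfigMap (kubectl -n kube-system edit configmap coredns) looks roughly like this; the surrounding plugins are a sketch and your Corefile may differ:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    # proxy . /etc/resolv.conf   <- no longer parsed by CoreDNS 1.5+
    forward . /etc/resolv.conf
    cache 30
}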
This error commonly happens when the metrics-server pod is not reachable from the master node. Possible reasons:
The metrics-server pod is not running. This is the first thing you should check. Then look at the logs of the metrics-server pod to see whether it has permission issues when trying to fetch metrics.
Confirm communication between the master and worker nodes.
Run kubectl top nodes and kubectl top pods -A to see if metrics-server responds.
From these points you can proceed further; a few starting commands are sketched below.
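A sketch of those checks, assuming the standard k8s-app=metrics-server label on the deployment:

# Is the metrics-server pod actually running?
kubectl get pods -n kube-system -l k8s-app=metrics-server
# Look for permission or connectivity errors in its logs
kubectl logs -n kube-system -l k8s-app=metrics-server
# If these respond, the metrics API is healthy again
kubectl top nodes
kubectl top pods -A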
I pulled my Azure ACS credentials using the command below, and I can communicate with the Kubernetes cluster on Azure from my local machine:
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
But now I want to disconnect this connection so that my kubectl can connect to another cluster; it can be local or any other machine (I am trying to connect to a local one).
But every time I run a kubectl command, it communicates with Azure ACS.
For your scenario, we can use kubectl config use-context CONTEXT_NAME to switch the default cluster; in this way we can switch to another k8s cluster.
We can use this command to list the k8s contexts:
root@shui:~# kubectl config get-contexts
CURRENT   NAME               CLUSTER            AUTHINFO                 NAMESPACE
          jasontest321mgmt   jasontest321mgmt   jasontest321mgmt-admin
*         jasonk8s321mgmt    jasonk8s321mgmt    jasonk8s321mgmt-admin
To pick a k8s cluster by name, we can use the command kubectl config use-context CONTEXT_NAME:
root@shui:~# kubectl config use-context -h
Sets the current-context in a kubeconfig file
Examples:
# Use the context for the minikube cluster
kubectl config use-context minikube
Usage:
kubectl config use-context CONTEXT_NAME [options]
For example:
root@shui:~# kubectl config use-context jasontest321mgmt
Switched to context "jasontest321mgmt".
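If the goal is to disconnect entirely rather than switch, you can also clear or remove the entries; a sketch using the names from the output above:

# Stop kubectl from pointing at any cluster by default
kubectl config unset current-context
# Or remove the ACS entries from the kubeconfig altogether
kubectl config delete-context jasonk8s321mgmt
kubectl config delete-cluster jasonk8s321mgmt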
I created a cluster in gcloud with three nodes. So far so good. Then I tried to run a pod, and it gave an error; I found out that kubectl is not configured correctly. I get the following error when I try to run the pod. I'd appreciate any help in this regard.
error: could not read an encoded object from nodejs.yaml: unable to connect to a server to handle "pods": couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
Thanks.
If your kubectl configuration is incorrect after creating a cluster, you can always run gcloud container clusters get-credentials NAME (see Configuring kubectl) to restore a working kubeconfig file.
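A sketch with placeholder values for the cluster name and zone:

# Recreate kubeconfig entries for the GKE cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Verify kubectl now points at it
kubectl config current-context
kubectl get nodes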