I am getting the following error:
Starting: Bash
==============================================================================
Task : Bash
Description : Run a Bash script on macOS, Linux, or Windows
Version : 3.201.1
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
Generating script.
Formatted command: exec bash '/home/vsts/work/1/s/Orchestration/dev/deploy.sh'
========================== Starting Command Output ===========================
/bin/bash /home/vsts/work/_temp/bdf3cbe7-1e0a-4e19-a974-71118a3adb33.sh
error: no context exists with the name: "default"
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
##[error]Bash exited with code '1'.
Finishing: Bash
when I run the task from the Azure pipeline:
- task: Bash@3
  inputs:
    filePath: '$(Build.SourcesDirectory)/Orchestration/dev/deploy.sh'
These are the contents of deploy.sh
kubectl config use-context default
kubectl apply -f Orchestration/dev/configmap.yaml
kubectl apply -f Orchestration/dev/secrets.yaml
kubectl apply -f Orchestration/dev/deployment.yaml
kubectl apply -f Orchestration/dev/service.yaml
I am not sure why there is an error; every Azure AKS cluster has a default namespace, and I already have a default namespace.
Agreed with @Thomas. The name you passed in kubectl config use-context, default, is a namespace name; instead, you need to provide the name of the context (for AKS this is usually the cluster name).
For example, to set the current context:
$ kubectl config use-context Context_name
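If you are unsure of the name, you can list the contexts in your kubeconfig first; the context name in the last command below is a placeholder:
# list all contexts defined in the kubeconfig and see which one is current
kubectl config get-contexts
kubectl config current-context
# then switch to the one you want (placeholder name)
kubectl config use-context my-aks-cluster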
Also, based on the Kubernetes docs:
A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster.
For more information, please refer to the Kubernetes docs: Configure Access to Multiple Clusters,
and this similar SO thread: error: no context exists with the name: "Aks_cluster_name" when tried from WSL.
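On a hosted Azure DevOps agent there is typically no kubeconfig at all, which is why kubectl falls back to localhost:8080. A minimal sketch of deploy.sh, assuming a cluster named my-aks-cluster in resource group my-rg (placeholder names) and an agent session that is already logged in to Azure, would fetch the credentials before switching context:
# fetch AKS credentials into the agent's kubeconfig (placeholder names)
az aks get-credentials --resource-group my-rg --name my-aks-cluster
# the merged context is named after the cluster by default
kubectl config use-context my-aks-cluster
kubectl apply -f Orchestration/dev/configmap.yaml
kubectl apply -f Orchestration/dev/secrets.yaml
kubectl apply -f Orchestration/dev/deployment.yaml
kubectl apply -f Orchestration/dev/service.yaml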
Related
I am getting the following error
error: stat C:\Users\icnpuser\.kube\config: no such file or directory
error: stat C:\Users\icnpuser\.kube\config: no such file or directory
when I execute the commands below:
kubectl --kubeconfig=C:\\Users\\icnpuser\\.kube\\config apply -f ./Orchestration/dev/deployment.yaml
kubectl --kubeconfig=C:\\Users\\icnpuser\\.kube\\config apply -f ./Orchestration/dev/service.yaml
The above commands are written in a deploy.sh file in the Azure repo.
The error occurs because these commands look for the kubeconfig on the build agent (in the Azure repo checkout) and not on my Windows VM.
My Windows VM, where I have configured kubectl, already has a kubeconfig, and I am able to list the contexts with the kubectl command.
PS: My node pool is running on two Linux VMs.
Also, I am triggering this deploy.sh file from the pipeline.
Please help me with how to apply this deployment.yaml and service.yaml to create pods in my AKS cluster.
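As a small diagnostic sketch, you can check which kubeconfig kubectl actually resolves on the machine where deploy.sh runs (the explicit path below is just the default location):
# show which kubeconfig kubectl is using (honours the KUBECONFIG variable)
echo "$KUBECONFIG"
kubectl config view --minify
# or point at an explicit file and list its contexts
kubectl --kubeconfig="$HOME/.kube/config" config get-contexts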
While trying to deploy an application, I got the error below:
Error: UPGRADE FAILED: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
The output of kubectl api-resources lists some resources, along with the same error at the end.
Environment: Azure Cloud, AKS Service
Solution:
The steps I followed are:
1. kubectl get apiservices: if the metrics-server service is down with the error CrashLoopBackOff, follow step 2; otherwise just try to restart the metrics-server service using kubectl delete apiservice/"service_name". For me it was v1beta1.metrics.k8s.io.
2. kubectl get pods -n kube-system: I found out that pods like metrics-server and kubernetes-dashboard were down because the main CoreDNS pod was down.
For me it was:
NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
3. Use kubectl describe pod/"pod_name" to check the error in the CoreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then you need to use forward instead of proxy in the YAML that holds the CoreDNS config, because CoreDNS version 1.5.x used by the image no longer supports the proxy keyword (see the sketch below).
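A minimal sketch of step 3, assuming the standard ConfigMap name coredns and the usual pod label k8s-app=kube-dns (verify both in your cluster):
# inspect the CoreDNS config
kubectl -n kube-system get configmap coredns -o yaml
# edit it and replace the proxy line with a forward line, e.g. "forward . /etc/resolv.conf"
kubectl -n kube-system edit configmap coredns
# restart CoreDNS so it picks up the change
kubectl -n kube-system delete pod -l k8s-app=kube-dns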
This error commonly happens when your metrics-server pod is not reachable by the master node. Possible reasons are:
The metrics-server pod is not running. This is the first thing you should check. Then look at the logs of the metrics-server pod to check whether it has permission issues when trying to get metrics.
Try to confirm communication between the master and worker nodes.
Try running kubectl top nodes and kubectl top pods -A to see if metric-server runs ok.
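A short sketch of those checks, assuming the usual metrics-server deployment name and pod label in kube-system (adjust to your cluster):
# is the APIService healthy?
kubectl get apiservice v1beta1.metrics.k8s.io
# is the pod running, and what do its logs say?
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl -n kube-system logs deploy/metrics-server
# do the metrics endpoints answer?
kubectl top nodes
kubectl top pods -A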
From these points you can proceed further.
I am getting issues when trying to get information about the nodes created using AKS (Azure Kubernetes Service), after creating the cluster and getting the credentials.
I am using the azure-cli on an Ubuntu Linux machine.
Followed the Url for creation of clusters: https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
I get the following error when using the command kubectl get nodes
after connecting to the cluster using:
az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>
Error:
kubectl get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)
I get the same error when I use:
kubectl get pods -n kube-system -o=wide
When I connect back as another user with the following commands, i.e.:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
I am able to retrieve the nodes, i.e.:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
<host-name> Ready master 20m v1.10.0
~$ kubectl get pods -n kube-system -o=wide
NAME READY STATUS RESTARTS AGE
etcd-actaz-prod-nb1 1/1 Running 0
kube-apiserver-actaz-prod-nb1 1/1 Running 0
kube-controller-manager-actaz-prod-nb1 1/1 Running 0
kube-dns-86f4d74b45-4qshc 3/3 Running 0
kube-flannel-ds-bld76 1/1 Running 0
kube-proxy-5s65r 1/1 Running 0
kube-scheduler-actaz-prod-nb1 1/1 Running 0
But this actually overwrites the newly created cluster information in the file $HOME/.kube/config.
Am I missing something when connecting to the AKS cluster with the get-credentials command that is leading me to the error
*Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)*
After you run
az aks get-credentials -n cluster-name -g resource-group
it should have merged the credentials into your local configuration:
/home/user-name/.kube/config
Can you check your config with
kubectl config view
and check whether it is pointing to the right cluster?
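A minimal sketch of that check; --overwrite-existing re-merges the AKS entry if the local config was clobbered (cluster-name and resource-group are placeholders):
kubectl config view
kubectl config get-contexts
kubectl config current-context
# re-fetch and merge the AKS credentials if the entry is missing or stale
az aks get-credentials -n cluster-name -g resource-group --overwrite-existing
kubectl config use-context cluster-name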
Assuming you have chosen the default configuration while deploying AKS, you need to create an SSH key pair to log in to an AKS node.
Push the public key created above to the AKS node using "az vm user update" (check the command's help to see which switches you need to pass; it is quite simple). A sketch of both steps follows.
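A rough sketch of both steps; the node resource group, node VM name, and key path are placeholders (AKS node VMs usually live in the MC_* node resource group):
# 1. create the key pair
ssh-keygen -t rsa -b 4096 -f ~/.ssh/aks_node_key
# 2. push the public key to the node VM (placeholder names)
az vm user update \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-nodepool1-12345678-0 \
  --username azureuser \
  --ssh-key-value ~/.ssh/aks_node_key.pub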
To create an SSH connection to an AKS node, you run a helper pod in your AKS cluster. This helper pod provides you with SSH access into the cluster and then additional SSH node access.
To create and use this helper pod, complete the following steps:
- Run a Debian (or any other container image, like centos7) container and attach a terminal session to it. This container can be used to create an SSH session with any node in the AKS cluster:
kubectl run -it --rm aks-ssh --image=debian
The base Debian image doesn't include SSH components, so install them:
apt-get update && apt-get install openssh-client -y
Copy the private key (the one you created at the beginning) to the pod using the kubectl cp command. The kubectl toolkit must be present on the machine from which you created the SSH key pair.
kubectl cp :/
Now you will see the private key file in the container; change the private key's permissions to 600, and you will be able to SSH into your AKS node.
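Putting the helper-pod steps together, a hedged sketch; the key path, node IP, and azureuser account are assumptions to adjust:
# copy the private key into the helper pod (key path and pod name are placeholders)
kubectl cp ~/.ssh/aks_node_key aks-ssh:/id_rsa
# inside the aks-ssh pod: tighten permissions and SSH to a node's internal IP
chmod 0600 /id_rsa
ssh -i /id_rsa azureuser@10.240.0.4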
Hope this helps.
I am trying to run the Kubernetes user interface and I am getting this error:
[root@ts_kubernetes_setup gcp-live-k8s-visualizer]# kubectl proxy
Error in configuration: context was not found for specified context: cluster51
I followed this http://kubecloud.io/guide-setting-up-visualizer-for-kubernetes/
Then I tried to delete this cluster using
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
kubectl config unset users.my-cluster-admin
After performing the last step, when I try to run kubectl proxy I get the error. Please suggest a clean way to get the UI.
When you did kubectl config delete-context cluster51, this deleted the context from your ~/.kube/config. Hence the error:
Error in configuration: context was not found for specified context: cluster51
You can view the contents of the ~/.kube/config file, or use the kubectl config view command, to help troubleshoot this error.
It seems something (config set-credentials?) is missing from these steps:
$ kubectl config set-cluster cluster51 --server=http://192.168.1.51:8080
$ kubectl config set-context cluster51 --cluster=cluster51
$ kubectl config use-context cluster51
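If the cluster requires authentication, the missing piece would look roughly like this (the token is a placeholder; username/password or client certificates work as well):
$ kubectl config set-credentials cluster51-admin --token=PLACEHOLDER_TOKEN
$ kubectl config set-context cluster51 --cluster=cluster51 --user=cluster51-admin
$ kubectl config use-context cluster51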
If you're not running an RPi cluster and just want to play with the Kubernetes visualizer, may I suggest using kubernetes/minikube instead?
This might help a beginner who is stuck here and getting the message below in the Kubernetes CLI.
kubectl config delete-cluster my-cluster doesn't delete your cluster; it only removes the entry from your kubectl configuration. The error you are getting suggests that you need to configure kubectl correctly in order to use it with your cluster. I suggest you read the kubectl documentation.
I followed the guide to getting Kubernetes running in Azure here:
http://kubernetes.io/docs/getting-started-guides/coreos/azure/
In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there:
ssh -F ./output/kube_randomid_ssh_conf kube-00
Once in you can run the following:
kubectl get nodes
kubectl create -f ~/guestbook-example/
Is it possible to run these kubectl commands without logging in to the master node? For example, how can I set up kubectl to connect to the cluster hosted in Azure from my development machine instead of SSHing into the node this way?
I tried creating a context, user, and cluster in the config, but the values I tried did not work.
Edit
For some more background, the tutorial creates the Azure cluster using a script that runs the Azure CLI. It ends up looking like this:
Resource Group: kube-randomid
- Cloud Service: kube-randomid
- VM: etcd-00
- VM: etcd-01
- VM: etcd-02
- VM: kube-00
- VM: kube-01
- VM: kube-02
It creates a virtual network that all of these VMs live in. As far as I can tell, all of the machines in the cloud service share a single virtual IP.
The kubectl command-line tool is just a wrapper that executes remote HTTPS REST API calls against the Kubernetes cluster. If you want to be able to do that from your own machine, you need to open the correct port (443) on your master node and pass some parameters to the kubectl tool, as specified in this tutorial:
https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
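Along the lines of that tutorial, a rough sketch of pointing a local kubectl at the remote master; the host, certificate paths, and names are placeholders:
$ kubectl config set-cluster azure-cluster --server=https://<master-dns-or-ip>:443 --certificate-authority=./ca.pem
$ kubectl config set-credentials azure-admin --client-certificate=./admin.pem --client-key=./admin-key.pem
$ kubectl config set-context azure --cluster=azure-cluster --user=azure-admin
$ kubectl config use-context azure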