I am trying to deploy resources in a Kubernetes cluster using a GitLab CI/CD pipeline, following https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_tunnel.html.
I am able to successfully deploy the resources if both the agent configuration and the manifests are placed in the same project.
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
testgroup/agentk:myk8sagent gitlab agent:12755
$ kubectl config use-context testgroup/agentk:myk8sagent
Switched to context "testgroup/agentk:myk8sagent".
$ kubectl get pods
No resources found in default namespace.
But when the manifests are in a different project (under the same group), the pipeline is not able to find the context.
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
$ kubectl config use-context testgroup/agentk:myk8sagent
error: no context exists with the name: "testgroup/agentk:myk8sagent"
What am I missing here?
I also had this issue; for now it just doesn't work across different groups, only within a single group hierarchy. This is on v14.9.2.
There is an issue opened: https://gitlab.com/gitlab-org/gitlab/-/issues/346636
As a workaround, we are using an agent per project group.
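For reference, on recent GitLab versions the agent can explicitly authorize other projects or groups in its configuration file, which is what makes its context appear in those projects' pipelines. A minimal sketch, assuming the agent is registered at .gitlab/agents/myk8sagent/config.yaml (the project and group paths below are placeholders):

# .gitlab/agents/myk8sagent/config.yaml
ci_access:
  # allow a specific project's pipelines to use this agent's context
  projects:
    - id: testgroup/other-project
  # or allow every project under a group
  groups:
    - id: testgroup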
Related
I have a GitLab CI pipeline in which I am creating an Azure Kubernetes cluster using Terraform scripts.
After creating the Kubernetes cluster, I need the kube config so that I can connect to the cluster.
I am running the following command to create the kube config:
$ az aks get-credentials --resource-group test1 --name k8stest --overwrite-existing
Merged "k8stest" as current context in C:\Windows\system32\config\systemprofile\.kube\config
Cleaning up file based variables
Instead of the user's home folder, it is creating the kube config in the systemprofile folder.
How can I force GitLab to create the kube config in the user's home folder only?
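One approach that may work (a sketch, assuming a shell-based runner; use the equivalent env-var syntax on a PowerShell runner, and note the kubeconfig path here is arbitrary) is to tell az exactly where to write the file via --file, then point KUBECONFIG at it, so the service account's profile folder never comes into play:

# write the kubeconfig into the job's working directory instead of the profile folder
az aks get-credentials --resource-group test1 --name k8stest --overwrite-existing --file "$CI_PROJECT_DIR/.kubeconfig"
# tell kubectl (and helm, etc.) to use that file
export KUBECONFIG="$CI_PROJECT_DIR/.kubeconfig"
kubectl get nodes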
We have a Jenkins virtual machine on GCE which deals with deployments, including the ones we do to GKE. We've tried to deploy a project which we have not touched for some time. The deployment failed when calling
kubectl set image deployment my-deployment my-deployment=gcr.io/my-project/my-project:version-tag
getting this error:
Error from server (Forbidden): deployments.extensions "my-deployment" is forbidden: User "client" cannot get resource "deployments" in API group "extensions" in the namespace "default"
The weird thing is, if I log in to the machine with my Linux user + my gcloud user, I can deploy fine. But when I switch to the jenkins user using su - jenkins and then authorize gcloud with my user, I get the same error that our deploy account gets.
Please advise on how to fix this.
It seems related to cluster RBAC configuration. Did you enable RBAC for Google Groups? In that case you should follow the instructions in the documentation, or disable it.
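If it does turn out to be plain RBAC rather than the Google Groups integration, a sketch of granting the failing user rights would look like this; the binding name is arbitrary, and "client" is taken from the error message:

# grant the "client" user edit rights (create/get/list deployments, etc.) cluster-wide
kubectl create clusterrolebinding jenkins-deployer --clusterrole=edit --user=client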
Otherwise, as Raman Sailopal stated, you can try this:
with your regular user run kubectl config get-contexts to retrieve your current context
copy from /home/Linux user/.kube/config to /home/jenkins/.kube/config
change user to jenkins and be sure you're using the same context by running kubectl config get-contexts and kubectl config set-context ...
try your rights with:
# Check to see if I can create deployments in any namespace
kubectl auth can-i create deployments
# Check to see if I can list deployments in my current namespace
kubectl auth can-i list deployments.extensions
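Putting those steps together, a minimal sketch (usernames and the context name are placeholders):

# as your regular Linux user: copy the working kubeconfig over to jenkins
sudo mkdir -p /home/jenkins/.kube
sudo cp ~/.kube/config /home/jenkins/.kube/config
sudo chown -R jenkins:jenkins /home/jenkins/.kube

# as jenkins: confirm the same context is active, then test permissions
su - jenkins
kubectl config get-contexts
kubectl config use-context <your-context-name>
kubectl auth can-i create deployments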
While trying to deploy an application, I got the error below:
Error: UPGRADE FAILED: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
The output of kubectl api-resources lists some resources, along with the same error at the end.
Environment: Azure Cloud, AKS Service
Solution:
The steps I followed are:
1. kubectl get apiservices: If the metrics-server service is down with the error CrashLoopBackOff, try step 2; otherwise just restart the metrics-server service using kubectl delete apiservice/"service_name". For me it was v1beta1.metrics.k8s.io.
2. kubectl get pods -n kube-system: I found that pods like metrics-server and kubernetes-dashboard were down because the main coreDNS pod was down.
For me it was:
NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
3. Use kubectl describe pod/"pod_name" to check the error in the coreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then use forward instead of proxy in the YAML file holding the coreDNS config, because the CoreDNS 1.5.x version used by the image no longer supports the proxy keyword.
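Concretely, that edit lives in the coredns ConfigMap; a sketch of the procedure (the pod label below is the standard kube-dns one, so adjust if your cluster labels coreDNS differently):

# open the CoreDNS config for editing
kubectl -n kube-system edit configmap coredns
# inside the Corefile, replace the line
#   proxy . /etc/resolv.conf
# with
#   forward . /etc/resolv.conf
# then recreate the coreDNS pods so they pick up the new config
kubectl -n kube-system delete pod -l k8s-app=kube-dns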
This error commonly happens when your metrics-server pod is not reachable by the master node. Possible reasons are:
The metrics-server pod is not running. This is the first thing you should check. Then look at the logs of the metrics-server pod to check whether it has permission issues when trying to get metrics (see the sketch after this list).
Try to confirm communication between the master and worker nodes.
Try running kubectl top nodes and kubectl top pods -A to see if metrics-server is running OK.
From these points you can proceed further.
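For the first two checks, something like this may help (the label selector is taken from the standard metrics-server manifests, so it is an assumption):

# is the metrics-server pod actually running?
kubectl -n kube-system get pods -l k8s-app=metrics-server
# inspect its logs for permission or connectivity errors
kubectl -n kube-system logs -l k8s-app=metrics-server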
I am trying to run the Kubernetes user interface and am getting this error:
[root@ts_kubernetes_setup gcp-live-k8s-visualizer]# kubectl proxy
Error in configuration: context was not found for specified context: cluster51
I followed this http://kubecloud.io/guide-setting-up-visualizer-for-kubernetes/
Then I tried to delete this cluster using
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
kubectl config unset users.my-cluster-admin
After performing the last step, when I try to run kubectl proxy I get the error. Please suggest a clean way to get the UI.
When you did kubectl config delete-context cluster51, this deleted the context from your ~/.kube/config. Hence the error:
Error in configuration: context was not found for specified context: cluster51
You can view the contents of the ~/.kube/config file, or use the kubectl config view command, to help troubleshoot this error.
It seems something (config set-credentials?) is missing from these steps:
$ kubectl config set-cluster cluster51 --server=http://192.168.1.51:8080
$ kubectl config set-context cluster51 --cluster=cluster51
$ kubectl config use-context cluster51
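For completeness, the missing credentials step would look roughly like this; the user name and auth flag are placeholders for whatever your cluster actually expects (token, client certificates, or basic auth):

$ kubectl config set-credentials cluster51-admin --token=<bearer-token>
$ kubectl config set-context cluster51 --cluster=cluster51 --user=cluster51-admin
$ kubectl config use-context cluster51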
If you're not running an RPi cluster and just want to play with the Kubernetes visualizer, may I suggest using kubernetes/minikube instead?
This might help a beginner who is stuck here and getting the message below in the Kubernetes CLI.
kubectl config delete-cluster my-cluster doesn't delete your cluster; it only removes the entry from your kubectl configuration. The error you are getting suggests that you need to configure kubectl correctly in order to use it with your cluster. I suggest you read the kubectl documentation.
I followed the guide to getting Kubernetes running in Azure here:
http://kubernetes.io/docs/getting-started-guides/coreos/azure/
In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there:
ssh -F ./output/kube_randomid_ssh_conf kube-00
Once in you can run the following:
kubectl get nodes
kubectl create -f ~/guestbook-example/
Is it possible to run these kubectl commands without logging in to the master node? I.e., how can I set up kubectl to connect to the cluster hosted in Azure from my development machine, instead of ssh'ing into the node this way?
I tried creating a context, user and cluster in the config but the values I tried using did not work.
Edit
For some more background: the tutorial creates the Azure cluster with a script that uses the Azure CLI. It ends up looking like this:
Resource Group: kube-randomid
- Cloud Service: kube-randomid
- VM: etcd-00
- VM: etcd-01
- VM: etcd-02
- VM: kube-00
- VM: kube-01
- VM: kube-02
It creates a virtual network that all of these VMs live in. As far as I can tell, all of the machines in the cloud service share a single virtual IP.
The kubectl command-line tool is just a wrapper that executes remote HTTPS REST API calls against the Kubernetes cluster. If you want to be able to do so from your own machine, you need to open the correct port (443) on your master node and pass some parameters to the kubectl tool, as specified in this tutorial:
https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
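In outline, the tutorial has you register the remote cluster and its certificates in your local kubeconfig; a sketch, with the master IP and certificate file names as placeholders for whatever your setup generated:

$ kubectl config set-cluster azure-kube --server=https://<master-ip>:443 --certificate-authority=ca.pem
$ kubectl config set-credentials azure-admin --client-certificate=admin.pem --client-key=admin-key.pem
$ kubectl config set-context azure --cluster=azure-kube --user=azure-admin
$ kubectl config use-context azure
$ kubectl get nodes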