We have a Jenkins virtual machine on GCE which handles deployments, including those to GKE. We recently tried to deploy a project we had not touched for some time. The deployment failed when calling
kubectl set image deployment my-deployment my-deployment=gcr.io/my-project/my-project:version-tag
getting this error:
Error from server (Forbidden): deployments.extensions "my-deployment" is forbidden: User "client" cannot get resource "deployments" in API group "extensions" in the namespace "default"
The weird thing is, if I log in to the machine with my Linux user and my gcloud account, I can deploy fine. But when I switch to the jenkins user with su - jenkins and then authorize gcloud with my account, I get the same error that our deploy account gets.
Please advise how to fix.
It seems related to the cluster's RBAC configuration. Did you enable RBAC for Google Groups? If so, you should follow the instructions in the documentation or disable it.
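If RBAC is the cause, one way forward is to grant the deploy identity rights in the target namespace. A minimal sketch, assuming the user name client from the error message above and the built-in edit ClusterRole (adjust the subject to your actual deploy account):

```yaml
# Hypothetical RoleBinding; the subject name "client" comes from the error
# message, and "edit" is a built-in ClusterRole that allows managing deployments.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deploy
  namespace: default
subjects:
- kind: User
  name: client
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

Apply it with kubectl apply -f and re-run the kubectl auth can-i checks below to confirm.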
Otherwise, as Raman Sailopal stated, you can try this:
with your regular user, run kubectl config get-contexts to retrieve your current context
copy /home/<your Linux user>/.kube/config to /home/jenkins/.kube/config
switch to the jenkins user and make sure you're using the same context by running kubectl config get-contexts and kubectl config use-context ...
check your rights with:
# Check to see if I can create deployments in my current namespace
kubectl auth can-i create deployments
# Check to see if I can list deployments in my current namespace
kubectl auth can-i list deployments.extensions
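The copy step above can be sketched with temp directories standing in for the two home directories (on the real VM the source is /home/<your user>/.kube/config, the destination is /home/jenkins/.kube/config, and you would also chown the file to jenkins):

```shell
# Demo with placeholder paths; substitute the real home directories on the VM.
SRC=/tmp/demo-myuser/.kube
DST=/tmp/demo-jenkins/.kube
mkdir -p "$SRC" "$DST"
printf 'apiVersion: v1\nkind: Config\n' > "$SRC/config"   # stands in for your kubeconfig
cp "$SRC/config" "$DST/config"
ls "$DST"
# prints: config
```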
Related
I have a problem executing kubectl commands; they error out with an x509 certificate error:
Unable to connect to the server: x509: certificate signed by unknown authority
I can log in with az login; after that I connect to my AKS cluster using the command below:
az aks get-credentials --resource-group sitecore10.x-dev-k8s --name sitecore102-Dev-AKS-v1 --overwrite-existing
After that, executing kubectl get pods or kubectl get services doesn't work.
I already tried adding environment variables.
I opened the .kube config file and opened the same URL in a browser, which displayed an error.
Resolved
I got a chance to resolve this issue.
The actual issue is that the AKS URL (https://AKSInstance.hcp.westus.azmk8s.io:443) is blocked by our company's internet monitoring software (Netskope).
I raised a support ticket to whitelist the URL https://*.azmk8s.io.
How to check the issue:
Open the config file under C:\Users\[YourUserName]\.kube to identify the AKS URL, then try opening that URL directly in the browser. If you get a 401 authentication error, you're good; if you get an error related to certificates, your internet monitoring software is probably blocking the calls to the AKS URL.
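To locate the AKS URL without reading the whole file, you can grep the kubeconfig for its server: entry. A sketch using a sample file so it runs anywhere; on your machine, point it at your real kubeconfig (e.g. %USERPROFILE%\.kube\config):

```shell
# Create a sample kubeconfig; the server line is what you would paste
# into the browser for the check described above.
cat > /tmp/sample-kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://AKSInstance.hcp.westus.azmk8s.io:443
  name: demo-aks
EOF
grep 'server:' /tmp/sample-kubeconfig
# prints:     server: https://AKSInstance.hcp.westus.azmk8s.io:443
```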
I tried to reproduce the same in my environment, connecting to the AKS cluster from a Windows machine:
I created an AKS cluster:
Go to Azure Portal > Kubernetes Services > Create.
Download the kubectl tool here and install it on the Windows machine.
Open PowerShell, navigate to the download folder, and run kubectl.exe.
Connect to your cluster using Cloud Shell to fetch the kubeconfig file.
Once connected to the cluster, download the config file to the local Windows machine.
Create a folder named .kube in your user folder and place the downloaded config file in it.
Path: C:\Users\yourusername
Now run kubectl commands to get the AKS cluster details from the Windows machine.
Reference: kubectl unable to connect to server: x509: certificate signed by unknown authority by bherto39.
I am trying to deploy resources in Kubernetes cluster using gitlab ci/cd pipeline following https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_tunnel.html.
I am able to successfully deploy the resources if both the agent configuration and the manifests are placed in the same project.
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER   AUTHINFO      NAMESPACE
          testgroup/agentk:myk8sagent   gitlab    agent:12755
$ kubectl config use-context testgroup/agentk:myk8sagent
Switched to context "testgroup/agentk:myk8sagent".
$ kubectl get pods
No resources found in default namespace.
But when the manifests are in different project (but under the same group), it is not able to identify the context.
$ kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
$ kubectl config use-context testgroup/agentk:myk8sagent
error: no context exists with the name: "testgroup/agentk:myk8sagent"
What am I missing here?
I also had this issue; for now it just doesn't work across different groups, only within a single group hierarchy. This is on v14.9.2.
There is an open issue: https://gitlab.com/gitlab-org/gitlab/-/issues/346636
As a workaround, we are using an agent per project group.
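Within a single group, the usual way to let other projects use the agent is the ci_access section of the agent's config file. A minimal sketch, assuming a hypothetical project path (the file lives at .gitlab/agents/myk8sagent/config.yaml in the agent's configuration project):

```yaml
# Hypothetical authorization; replace the id with the project that holds
# your manifests. Group-level sharing uses "groups:" instead of "projects:".
ci_access:
  projects:
    - id: testgroup/manifests-project
```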
I have a cluster configured on Azure Kubernetes Service, and the services are working fine.
Following this article:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-dashboard
I am trying to view the dashboard using the command below, but I get the following error:
az aks browse --resource-group DemoRG --name aksdemo2
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
Error: unknown flag: --address
My cluster does not have RBAC enabled; I am unsure whether this is related to a network issue or something different.
Eventually the issue was resolved by the author of this post by following existing GitHub issue #8642:
I had two copies of kubectl and the one from docker was overriding the one from azure. Found this by firing "where kubectl" from command prompt, and deleting the docker copy.
Just run kubectl proxy, then go to the following URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview?namespace=default
I used kubectl proxy to access the dashboard
I am working on setting up an environment for deploying microservices.
I have gotten as far as building my code and deploying to a registry, but I am having problems running it in Azure Container Service.
I am following this guide to connect to ACS: https://learn.microsoft.com/en-us/azure/container-service/container-service-connect
But I fail at the step: Download Cluster Credentials.
Using the given command
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
Of course, I changed the resource group and cluster name to the correct names from my portal. I get an error:
[WinError 10049] The requested address is not valid in its context
(If I change the resource group or cluster name to something else I get other errors, so it seems it can find those at least.)
When I try to search for the error it seems to be some IP address problem, but I can't figure out what to do. I tried running the same command from another network (from home) to make sure the work firewall is not blocking something, but I get the same error.
Any help appreciated!
This command copies the cluster credentials to your machine. In the background it SSHes to your cluster's master VM and copies the credentials.
So, you should ensure you can SSH to the master VM manually. If you cannot SSH to the master VM manually, the az command cannot do it either. You can get your master DNS name from the Azure Portal.
ssh -i id_rsa <user>@<master-dns-name>
Note: if the az command does not work but you can SSH to the master VM, you can download the credentials to your machine yourself; they are the same. You can check your link about this.
You also need to check your Azure CLI version. You can use the following command:
az --version
My version is 2.02. It works for me.
I followed the guide to getting Kubernetes running in Azure here:
http://kubernetes.io/docs/getting-started-guides/coreos/azure/
In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there:
ssh -F ./output/kube_randomid_ssh_conf kube-00
Once in, you can run the following:
kubectl get nodes
kubectl create -f ~/guestbook-example/
Is it possible to run these kubectl commands without logging in to the master node? That is, how can I set up kubectl to connect to the cluster hosted in Azure from my development machine instead of SSHing into the node this way?
I tried creating a context, user and cluster in the config but the values I tried using did not work.
Edit
For some more background, the tutorial creates the Azure cluster with a script that uses the Azure CLI. It ends up looking like this:
Resource Group: kube-randomid
- Cloud Service: kube-randomid
- VM: etcd-00
- VM: etcd-01
- VM: etcd-02
- VM: kube-00
- VM: kube-01
- VM: kube-02
It creates a Virtual Network that all of these VMs live in. As far as I can tell, all of the machines in the cloud service share a single virtual IP.
The kubectl command-line tool is just a wrapper that executes remote HTTPS REST API calls against the Kubernetes cluster. If you want to do so from your own machine, you need to open the correct port (443) on your master node and pass some parameters to kubectl, as specified in this tutorial:
https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
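The result of that configuration is a kubeconfig on your development machine. A minimal sketch, with placeholder names and certificate paths (the actual files come from your cluster's CA and client credentials as described in the linked guide):

```yaml
# Hypothetical kubeconfig; server address, names, and file paths are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: azure-kube
  cluster:
    server: https://<master-public-ip>:443
    certificate-authority: ca.pem
users:
- name: kube-admin
  user:
    client-certificate: admin.pem
    client-key: admin-key.pem
contexts:
- name: azure
  context:
    cluster: azure-kube
    user: kube-admin
current-context: azure
```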