I'm using Azure for my continuous deployment. My secret name is "cisecret", created using:
kubectl create secret docker-registry cisecret --docker-username=XXXXX --docker-password=XXXXXXX --docker-email=SomeOne@outlook.com --docker-server=XXXXXXXXXX.azurecr.io
In my Visual Studio Online release task (kubectl run), under the Secrets section:
Type of secret: dockerRegistry
Container Registry type: Azure Container Registry
Secret name: cisecret
My release completes successfully, but when I proxy into Kubernetes I get:
Failed to pull image xxxxxxx unauthorized: authentication required.
Could this be due to your container name? I had an issue where I wasn't prepending the ACR domain to the image name in my Kubernetes YAML, which meant I wasn't pointing at the container registry / image, and therefore my secret (which was working) appeared to be broken.
Can you post your YAML? Maybe something simple is amiss, since it seems you are on the right track from the secrets perspective.
I need to grant AKS access to ACR.
Please refer to the link here
How to pass image pull secret while using 'kubectl run' command?
This should help: you need to override the kubectl run command so that the generated pod spec includes "imagePullSecrets": "cisecret".
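A minimal sketch of that override, assuming the secret from the question; the pod name and image path are placeholders (--overrides takes an inline JSON patch of the pod spec):

kubectl run mypod --image=XXXXXXXXXX.azurecr.io/myimage:latest --overrides='{"apiVersion": "v1", "spec": {"imagePullSecrets": [{"name": "cisecret"}]}}'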
Alternatively, add the following to the pod spec in your YAML file:
imagePullSecrets:
- name: acr-auth
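For placement, a minimal pod spec sketch; the image path and the secret name acr-auth are only illustrative, and note that imagePullSecrets sits at the pod spec level, alongside containers:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: myregistry.azurecr.io/example:latest
  imagePullSecrets:
  - name: acr-auth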
I created the AKS cluster with an Azure service principal ID and gave it the Contributor role scoped to the subscription and resource group.
Every time I execute the pipeline it asks for an interactive sign-in, and only after I authenticate does it get the data.
Also, the "kubectl get" task takes more than 30 minutes and ends with "Kubectl Server Version: Could not find kubectl server version".
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CRA2XssWEXUUA to authenticate
Thanks in advance
What is the version of the created cluster?
I'm assuming from your screenshot that you are using az in order to get credentials for it.
The old azure auth plugin is deprecated in v1.22+. If you are using v1.22 or above you should use kubelogin in order to authenticate.
You will also need to update your kube config accordingly:
kubelogin convert-kubeconfig
and specifically if you're logging via az:
kubelogin convert-kubeconfig -l azurecli
Note that the -l azurecli flag is important here: the default value is "devicecode", which will not treat your az session as a login method, and you will still be asked to authenticate in a browser.
Alternatively, you can set environment variable:
AAD_LOGIN_METHOD=azurecli
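Inside a pipeline script step this could look roughly like the following; the resource group and cluster names are placeholders:

az aks get-credentials --resource-group myResourceGroup --name myAksCluster
kubelogin convert-kubeconfig -l azurecli   # or: export AAD_LOGIN_METHOD=azurecli before converting
kubectl get nodes                          # should no longer prompt for a device code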
Because you are getting a sign-in request and not the deprecation warning for the auth plugin, I suspect that you already have kubelogin installed on your agent and you just need to update the kubeconfig file.
What task are you using? There is an official kubectl task: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops
It requires the service connection.
If you still want to execute kubectl directly, you should run the following before the kubectl inside the AzureCLI task:
az aks get-credentials --resource-group "$(resourceGroup)" --name "$(k8sName)" --overwrite-existing
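As a rough sketch, the AzureCLI task in the pipeline YAML could look like this; the service connection name and the two variables are assumptions to adjust to your setup:

- task: AzureCLI@2
  displayName: Run kubectl against AKS
  inputs:
    azureSubscription: 'my-azure-service-connection'   # assumed service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az aks get-credentials --resource-group "$(resourceGroup)" --name "$(k8sName)" --overwrite-existing
      kubectl get pods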
Please use self-hosted agents for executing your commands. It looks like you have private endpoints for your AKS cluster and requests are only allowed from trusted devices.
I ran into the same issue and for me the fix was to change the Connection Type in the stage definition from Azure Resource Manager to Kubernetes Service Connection.
Then you should also be able to specify the connection type in each of the tasks where you run kubectl or helm commands. For example, in a kubectl task, under Kubernetes Cluster --> Service connection type, use the Kubernetes Service Connection.
As mentioned by @DevOpsEngg, the problem could be related to private endpoints, but I wouldn't say it is about self-hosted agents, because I'm using those. As an extra comment: this started happening when I added more than one user to the cluster, so you might want to check user permissions and authentication. Unfortunately, I'm still getting used to K8s, so I don't have more info about that.
I have been following this guide to deploy an application on Azure using AKS.
Everything was fine until I deployed; one pod is in a not-ready state with ImagePullBackOff status.
kubectl describe pod output
Running the command below succeeds, so I am sure authentication is happening:
az acr login --name siddacr
and this command lists the image which was uploaded:
az acr repository list --name <acrName> --output table
I figured it out.
The error was in the name of the image in the deployment.yml file.
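For illustration, the image reference in the deployment has to match the registry login server, repository and tag exactly; the repository name below is made up, only the registry name siddacr comes from the question:

# wrong: no registry prefix, so Kubernetes tries to pull from Docker Hub
#   image: myapp:v1
# right: fully qualified ACR image
spec:
  containers:
  - name: myapp
    image: siddacr.azurecr.io/myapp:v1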
ImagePullBackOff might be caused by one of the following reasons:
The image or tag doesn’t exist
You’ve made a typo in the image name or tag
The image registry requires authentication
You’ve exceeded a rate or download limit on the registry
I'm using Minikube for development and I need to build a k8s app that pulls all of its images from ACR; all the images are already stored in ACR.
To pull images from Azure, what I need to do is create a secret with the username and password of the Azure account and pass this secret to every image that I want to pull using imagePullSecrets (documentation here).
Is there a way to add this registry as a global setting for the namespace, or the project?
I don't understand why every image needs to get the secret explicitly in the spec.
Edit:
Thanks for the comments, I'll check them later; for now I resolved this problem at the Minikube level. There is a way to set a private registry in Minikube (doc here).
In my version this bug exists, and this answer resolved the problem.
As far as I know, if you do not use Kubernetes in Azure, I mean the Azure Kubernetes Service, then there are two ways I know of to pull images from ACR. One is the way you already know: using secrets. The other is to use a service account, but you also need to configure it in each deployment or pod, the same way as the secrets.
If you use the Azure Kubernetes Service, then you just need to assign the AcrPull role to the service principal of the AKS cluster, and then you don't need to set anything per image.
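A sketch of that role assignment with the Azure CLI; the registry and cluster names are placeholders, and on newer CLI versions az aks update --attach-acr does the same thing in one step:

# look up the ACR resource ID
ACR_ID=$(az acr show --name myregistry --query id -o tsv)
# grant the AKS service principal pull rights on the registry
az role assignment create --assignee <service-principal-id> --role AcrPull --scope $ACR_ID
# or, in one step on recent CLI versions:
az aks update --resource-group myResourceGroup --name myAksCluster --attach-acr myregistry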
You can add imagePullSecrets to a service account (e.g. to the default serviceaccount).
It will automatically add imagePullSecrets to the pod spec of any pod that uses this specific (e.g. default) serviceaccount, so you don't have to do it explicitly.
You can do it running:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
You can verify it with:
$ kubectl run nginx --image=nginx --restart=Never
$ kubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{"\n"}'
myregistrykey
Also checkout the k8s docs add-image-pull-secret-to-service-account.
In my case, I had a local Minikube installed in order to test my charts and my code locally. I tried most of the solutions suggested here and in other Stack Overflow posts, and these are the options I found:
Move the image from the local Docker registry to Minikube's registry and set the pullPolicy to Never or IfNotPresent in your chart.
docker build . -t my-docker-image:v1
minikube image load my-docker-image:v1
$ minikube image list
rscoreacr.azurecr.io/decibel:0.0.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
...
##Now edit your chart and change the `pullPolicy`.
helm install my_name chart/ ## should work.
I think the main disadvantage of this option is that you need to change your chart and remember to change the values back to their previous value afterwards.
Create a secret that holds the credentials to the ACR.
First log in to the ACR via:
az acr login --name my-registry.azurecr.io --expose-token
The output of the command should show you a user and an access token.
Now you should create a Kubernetes secret (make sure that you are on the right Kubernetes context - Minikube) :
kubectl create secret docker-registry my-azure-secret --docker-server=my-registry.azurecr.io --docker-username=<my-user> --docker-password=<access-token>
Now, if your chart uses the default service account (When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace) you should edit the service account via the following command :
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-azure-secret"}]}'
I didn't like this option because if I have a different secret provider for every helm chart I need to overwrite the yaml with the imagePullSecrets.
Another alternative you have is using Minikube's registry creds
Personally, the solution I went for is the first one with a tweak: instead of adding the pullPolicy in the YAML itself, I override it when I install the chart:
$ helm install --set image.pullPolicy=IfNotPresent <name> charts/
I'm new to Azure AKS and Docker. I followed the steps in this article.
Finally, I completed all the steps and got this status.
But the external IP is not returning the expected output. I checked in the Azure portal and the container status is waiting. Am I missing anything here?
Authentication will be needed to pull the images from ACR. We have to create a docker-registry secret for authentication. To do this, open Cloud Shell on the Azure Portal and run the command below.
> kubectl create secret docker-registry mysecretname --docker-server=myacrname.azurecr.io --docker-username=myacrname --docker-password=myacrpwd --docker-email=myportalemail
Don’t forget to change your password and email address.
To access your password, go to your Azure Container Registry:
https://portal.azure.com/ » Your Container registry » Access keys
Finally, make sure that the Docker image URL in your Kubernetes YAML file is right:
https://github.com/husseinsa/kubernetes-multi-container-app/blob/master/k8/frontend.yaml
https://github.com/husseinsa/kubernetes-multi-container-app/blob/master/k8/backend.yaml
spec:
  containers:
  - name: backend
    image: mywebregistry.azurecr.io/backend:v1
    ports:
    - containerPort: 80
Put the image URL from your Azure Container Registry here.
Our previous GitLab-based CI/CD utilized an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service; if you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.
That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab-CI was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers etc.)?
While I believe we can build a container (to be used by our GitLab-CI runner) that contains kubectl and the Azure CLI, I know that Kubernetes also has a similar (to Docker Cloud) REST API that can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); specifically, the section that talks about connecting WITHOUT kubectl appears to be relevant (as does the piece about the HTTP REST API).
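For reference, a minimal sketch of such a call against the HTTP REST API, assuming you already have a bearer token for a service account that is allowed to patch deployments; the deployment name, image and namespace are placeholders:

# trigger a rollout by patching the deployment's image via the REST API
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  "https://$APISERVER/apis/apps/v1/namespaces/default/deployments/ghost-blog" \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"ghost-blog","image":"myacr.azurecr.io/ghost-blog:new-tag"}]}}}}'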
My Question to anyone who is connecting to an Azure (or potentially other managed Kubernetes service):
How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button on the top of the page
You will have to fill in some form fields. To get the content that you have to put into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the content of the fields that you have to fill in the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
These are the fields:
Kubernetes cluster name: It's the name of your cluster on Azure, it's in the .kube/config file too.
API URL: It's the URL in the field server of the .kube/config file.
CA Certificate: It's the certificate-authority-data field of the .kube/config file, but you will have to base64-decode it (see the decoding sketch after these steps).
After you decode it, it must be something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: It's the string of hexadecimal chars in the token field of the .kube/config file (it might also need to be base64-decoded?). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it for authenticating and installing stuff on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen here under Create a gitlab service account in the default namespace; a sketch also follows after these steps) and apply it to your cluster by means of kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I left it empty; I don't know yet what or where this namespace is used for.
Click in "Save" and it's done. Your GitLab project must be connected to your Kubernetes cluster now.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command, here is a list of all the variables available:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected in your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster into your GitLab project (menu "Operations" > "Kubernetes", following the steps described above)
Your job must be a "deployment job": in GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in menu "Operations" > "Environments".
Here is an example of a .gitlab-ci.yml with three stages (a sketch follows this list):
Build: builds a Docker image and pushes it to the GitLab private registry
Test: doesn't do anything yet, just runs exit 0 so it can be changed later
Deploy: downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I didn't finish writing the deploy script to actually execute a deploy, but this kubectl cluster-info command is executing fine.
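A rough sketch of such a .gitlab-ci.yml, assuming the Kubernetes integration above is injecting the deployment variables (in particular KUBECONFIG); the kubectl version, image tags and environment name are placeholders:

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - exit 0   # placeholder until real tests exist

deploy:
  stage: deploy
  image: alpine:latest
  environment:
    name: production   # must match a name under "Operations" > "Environments"
  script:
    - apk add --no-cache curl
    - curl -LO https://dl.k8s.io/release/v1.27.0/bin/linux/amd64/kubectl
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    - kubectl cluster-info   # KUBECONFIG is injected by the GitLab Kubernetes integration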
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view; GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button — along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (contributions are also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html