I am working on a shared AKS cluster, where multiple teams work on the same cluster and each team has its own ACR.
I want to find a way to allow pulling from a given ACR only from a specified namespace.
The only approach I have thought of so far is an expensive one:
using the ACR Premium tier to enable the scope map feature, and creating a token for authentication in the pull secret.
Alternatively, does anyone know how to pull an image using a service principal with the AcrPull role?
Please let me know.
Thank you.
I found a solution without changing the ACR pricing tier, using only a service principal to access the target ACR.
Solution
Create a service principal and assign it the AcrPull role.
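For example, a rough sketch of this step (the registry and service principal names are placeholders):

# Get the registry's resource ID
ACR_ID=$(az acr show --name <container-registry-name> --query id --output tsv)

# Create a service principal with the AcrPull role scoped to that registry only;
# the output contains the appId and password used below as
# <service-principal-ID> and <service-principal-password>
az ad sp create-for-rbac --name <service-principal-name> --role acrpull --scopes $ACR_ID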
After that, create a Kubernetes secret in your namespace and reference it from your pods via imagePullSecrets:
kubectl create secret docker-registry <secret-name> \
--namespace <namespace> \
--docker-server=<container-registry-name>.azurecr.io \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
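Then reference the secret from workloads in that namespace, for instance by patching the namespace's default service account (a sketch; the secret name is the one created above):

kubectl patch serviceaccount default \
  --namespace <namespace> \
  --patch '{"imagePullSecrets": [{"name": "<secret-name>"}]}'

Only namespaces that hold the secret can pull with the service principal's credentials, which matches the per-team restriction described above.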
My scenario: I have a shared container registry in one subscription, say subscription A, and I need to pull images from that ACR into ACA through DevOps pipelines. There is an ACA per environment (dev, test, UAT, etc.), and these live in another subscription, say subscription B. I am using the 'az containerapp up' command in Azure DevOps pipelines to pull an image from the shared ACR, and I am getting the error 'The resource is not found in the subscription B'. What might be an alternative solution? We need to reduce the cost of having a container registry for each environment.
I am using service connections to pull the image, and the service connections are separate for the separate subscriptions.
I know that they are in different subscriptions, but I have searched online for ways to connect two different subscriptions.
Is there a possibility that I can connect two different service connections in Azure DevOps and use one service connection to pull that image?
Before integrating the Azure CLI command az containerapp up with Azure Pipelines, please first confirm that you are able to pull the ACR image from Sub B to deploy the container app in Sub A via Cloud Shell or local PowerShell.
I tested creating an ARM service connection scoped to the Tenant Root Management Group, whose referenced service principal had access to both subscriptions; the issue still existed.
In local PowerShell, I ran az login with my user account and could still reproduce the issue.
az containerapp up `
--name XXcontainerapp `
--image XXacrsubB.azurecr.io/azurecontainerappdemo:XX `
--resource-group rg-containerapp `
--environment TestEnv `
--registry-username XXacrsubB `
--registry-password XXXXXX
It seems to be a limitation of the az containerapp up command. You may consider reporting the issue to the Azure CLI team.
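One possible workaround to try (untested here, and all names and values are placeholders) is to set the CLI context to the subscription that hosts the container app and use az containerapp create / update instead of az containerapp up, passing the shared registry's credentials explicitly:

# Target the subscription that hosts the container app environment
az account set --subscription <subscription-B-id>

# Create (or later update) the app, pointing it at the registry in the other subscription
az containerapp create \
  --name XXcontainerapp \
  --resource-group rg-containerapp \
  --environment TestEnv \
  --image XXacrsubB.azurecr.io/azurecontainerappdemo:XX \
  --registry-server XXacrsubB.azurecr.io \
  --registry-username XXacrsubB \
  --registry-password <registry-password>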
A few weeks ago, I was able to use the Azure CLI to create my Container Registry (ACR) and Kubernetes (AKS) cluster. I could push images to my ACR and have AKS pull images successfully - everything worked great. Every now and then, I would have to refresh my login with az acr login --name <acrName>, but not a big deal.
Today, I found that when I go to deploy an updated image to my AKS cluster, I got a status of ImagePullBackOff:
Failed to pull image "MY_ACR.azurecr.io/MY_IMAGE:v1": rpc error: code = Unknown desc = Error response from daemon: Get https://MY_ACR.azurecr.io/v2/MY_IMAGE/manifests/v1: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I couldn't remember what I needed to do to make this work, so I went through my original steps and created an entirely new resource group, ACR, AKS cluster, and service principal connecting them. I pushed images to my ACR and was able to apply my Kubernetes manifest, and everything worked again.
A couple hours later, when I applied an updated manifest, I again got the same error message. As part of my setup, I created a service principal:
az ad sp create-for-rbac --skip-assignment
az role assignment create --assignee <principal's appId> --scope <my ACR's id> --role Reader
I also used --role acrpull. It seems like the authentication has timed out, and the documentation for Authenticate with an Azure container registry says that individual AD identities will time out after 3 hours, but even after running az acr login --name <acrName>, I'm not able to fix the issue.
What are the required steps to get my AKS cluster to be able to authenticate again to my ACR?
I'll note that I also attached the ACR according to the documentation at Authenticate with Azure Container Registry from Azure Kubernetes Service by running:
az aks update -n cluster_name -g resource_group --attach-acr acr_name
I also tried using the ACR ID instead of the name. After a minute or so, the command completed, but even half an hour or more later, I still got the same permissions issue.
The easiest way to integrate AKS with ACR is to leverage the --attach-acr option during cluster creation. This has AKS manage the service principal for you and handle the token refreshes.
https://learn.microsoft.com/en-us/azure/aks/cluster-container-registry-integration#create-a-new-aks-cluster-with-acr-integration
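A minimal sketch of creating a cluster this way (resource, cluster, and registry names are placeholders):

# Create the cluster and grant its identity AcrPull on the registry in one step
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --generate-ssh-keys \
  --attach-acr <acr-name>

For an existing cluster, the az aks update --attach-acr command from the question is the equivalent.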
Assuming I have access to an Azure subscription with a fully configured Azure Kubernetes Service, then via
az login
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
az aks browse --resource-group somegroup --name somecluster
I can get access to the Kubernetes Dashboard.
Is there a way to give temporary access to the Kubernetes Dashboard to someone who does not have access to the Azure subscription the AKS cluster is associated with?
Yes, just create an appropriate kubeconfig for the cluster (so the user can port-forward the dashboard pod), and the user will then be able to connect to the dashboard.
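A rough sketch of one way to do that, assuming the dashboard runs as the kubernetes-dashboard service in kube-system (names, ports, and file paths are placeholders; adjust to your setup):

# Someone with subscription access exports a kubeconfig to hand over
az aks get-credentials --resource-group somegroup --name somecluster --file ./dashboard-kubeconfig

# The recipient uses that file to port-forward the dashboard service and browse to it locally
kubectl --kubeconfig ./dashboard-kubeconfig port-forward --namespace kube-system service/kubernetes-dashboard 8443:443
# then open https://localhost:8443 in a browser

Bear in mind that a kubeconfig produced this way may carry broad cluster permissions, so treat it as strictly temporary.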
I have an issue with AKS: Kubernetes cannot pull the image from the ACR. It shows the message "unauthorized: authentication required". I already set permissions on the ACR for the AKS service principal. It had worked fine until today, when I proceeded to update the pod with a new container from the ACR.
According to the message you provided, the possible reason I can think of is authorization expiry. You can check whether your service principal's credentials have expired.
Other than this, I recommend you also check that everything else in the AKS-to-ACR authentication setup here is OK. This can help you avoid taking the wrong action.
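A quick way to check that from the CLI (the appId is a placeholder, and the exact parameter and output field names can vary slightly between Azure CLI versions):

# List the service principal's credentials and look at their expiry dates
az ad sp credential list --id <service-principal-appId>

# If the password has expired, generate a new one and update it wherever it is used
az ad sp credential reset --id <service-principal-appId>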
The SP already has authorization to pull images from the ACR.
I followed the post here, and now AKS is able to pull images from the ACR. When I created the AKS cluster its SP had no secrets or certificates set, yet it had been working fine for the last 12 months; suddenly AKS now needs a secret on its SP to authenticate against the ACR.
Thanks...
Using this workaround did the job:
az role assignment create --assignee <servicePrincipalID> --scope <registryID> --role acrpull
I'm trying to follow this guide to set up a K8s cluster with external-dns' Azure DNS provider.
The guide states that:
When your Kubernetes cluster is created by ACS, a file named /etc/kubernetes/azure.json is created to store the Azure credentials for API access. Kubernetes uses this file for the Azure cloud provider.
When I create a cluster using aks (e.g. az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys) this file doesn't exist.
Where do the API credentials get stored when using AKS?
Essentially I'm trying to work out where to point this command:
kubectl create secret generic azure-config-file --from-file=/etc/kubernetes/azure.json
From what I can see, when using AKS the /etc/kubernetes/azure.json file doesn't get created. As an alternative, I followed the instructions for use with non-Azure-hosted sites and created a service principal (https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md#optional-create-service-principal).
Creating the service principal produces some JSON that contains most of the details. This can be used to manually create the azure.json file, and the secret can then be created from it.
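A sketch of what that can look like (all values are placeholders; check the external-dns Azure tutorial for the exact fields your external-dns version expects):

# Build azure.json manually from the service principal output
cat > azure.json <<'EOF'
{
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "resourceGroup": "<dns-zone-resource-group>",
  "aadClientId": "<service-principal-appId>",
  "aadClientSecret": "<service-principal-password>"
}
EOF

# Create the secret external-dns expects from that file
kubectl create secret generic azure-config-file --from-file=azure.json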
Use this command to get credentials:
az aks get-credentials --resource-group myResourceGroup --name myK8sCluster
Source:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
Did you try this command?
cat ~/.kube/config
It provided everything I needed for my CI to connect to the Kubernetes cluster and use the API.