I want to use an Azure Container Registry from a different account in a build pipeline in Azure DevOps.
When I try to add a Docker Registry service connection and select Azure Container Registry, it only lets me choose from the subscriptions of the current account.
When I try to add an Azure Resource Manager service connection for the subscription of the different account that contains the Azure Container Registry, it says it cannot find any Azure Container Registry.
So how do I accomplish this?
I have the same situation, in which the Azure Container Registry (ACR) is in a different Azure AD tenant from the Azure DevOps build pipelines.
I'll outline the steps I used; a rough CLI equivalent follows the list:
Create an app registration in the Azure AD tenant where the ACR exists.
Give it a name like myregistry-app.
Go to the myregistry-app Certificates & secrets page and create a new secret. Copy the value, as you cannot retrieve it later.
Also copy the myregistry-app application (client) ID; you can find it on the overview screen.
Now go to the ACR Access Control (IAM) screen for your container registry.
Add a role assignment and assign the myregistry-app identity the Contributor role.
Back in your build pipeline, create a Docker task and click the New button under the Container Registry section.
In the Add a Docker Registry service connection popup dialog, choose the Others radio button.
Put in the URL of your ACR, which you can find on the container registry overview page.
Use the application (client) ID of myregistry-app as the Docker ID.
Use the myregistry-app secret for the password.
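For reference, a rough az CLI equivalent of the portal steps above. This is only a sketch: the names myregistry-app and myregistry are placeholders, and you need to be logged in to the tenant that owns the registry.
# Create the app registration, its service principal, and a secret in one step.
# The appId in the output is your Docker ID; the password is the secret value.
az ad sp create-for-rbac --name myregistry-app
# Grant the new identity the Contributor role on the registry:
ACR_ID=$(az acr show --name myregistry --query id --output tsv)
az role assignment create --assignee <appId-from-output> --scope "$ACR_ID" --role Contributor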
This is currently working for me; there may be a better way.
I have created an Azure Container Registry.
I have an Azure DevOps project.
I have created an Azure DevOps pipeline using the wizard, so it uses the standard template to build and push a Docker image.
When validating the pipeline, the following error is thrown:
Failed to set Azure permission 'RoleAssignmentId: ****' for the service principal '****' on subscription ID '****': error code: Forbidden, inner error code: AuthorizationFailed, inner error message The client '****' with object id '****' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope '/subscriptions/****/resourceGroups/****/providers/Microsoft.ContainerRegistry/registries/****/providers/Microsoft.Authorization/roleAssignments/****' or the scope is invalid. If access was recently granted, please refresh your credentials. Ensure that the user has 'Owner' or 'User Access Administrator' permissions on the Subscription.
What configuration could I be missing? The documentation for this is all very sparse and written as though it should all just work.
Thanks
You need to add the AcrPull permission to the service principal you used here. Please go to your ACR and add it.
The ID of your service principal is right there in the error:
Failed to set Azure permission 'RoleAssignmentId: ' for the service principal ''
Also check the networking settings on the ACR in case they are blocking you.
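If you prefer the CLI, a minimal sketch (the registry name is a placeholder; use the client ID from the error message as the assignee):
# Look up the registry's resource ID, then grant the pull role:
ACR_ID=$(az acr show --name myregistry --query id --output tsv)
az role assignment create --assignee <client-id-from-error> --scope "$ACR_ID" --role AcrPull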
To build and push your Docker image in an Azure pipeline, follow the steps below:
Navigate to Project Settings > Service connections to create a Docker Registry service connection that can connect to your ACR.
In the pipeline, add the Docker task to build and push your Docker image to ACR.
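For reference, the task is roughly equivalent to the following manual flow; a sketch with placeholder registry and image names:
# Authenticate, build, and push by hand:
az acr login --name myregistry
docker build -t myregistry.azurecr.io/myapp:latest .
docker push myregistry.azurecr.io/myapp:latest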
The service principal on your service connection is missing permissions; see the documentation at https://learn.microsoft.com/en-us/azure/container-registry/container-registry-roles?tabs=azure-cli. You can use a custom role, or a built-in one like Contributor at the resource group level. An example of an allowed action for a custom role is "Microsoft.ContainerRegistry/registries/push/write".
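For example, such a custom role could be created via the CLI; a sketch in which the role name and subscription ID are placeholders:
# Define a custom role limited to pulling and pushing images:
az role definition create --role-definition '{
  "Name": "Custom ACR Pusher",
  "Description": "Pull and push container images",
  "Actions": [
    "Microsoft.ContainerRegistry/registries/pull/read",
    "Microsoft.ContainerRegistry/registries/push/write"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'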
I'm attempting to build out my DevOps pipeline to deploy a Data Factory, Databricks notebooks, and an Azure Data Warehouse.
I have my resource subscriptions set up for both Dev and Prod; deploying to Prod is trickier than it seems.
My key vault has Get/List permissions for both secrets and keys for the target Data Factory.
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
I have used the guide above to set up my target Data Factory in Prod, and it is stood up correctly with all the connection strings and key vault permissions set.
But I am stuck on this portion:
Grant permissions to the Azure Pipelines agent. The Azure Key Vault task may fail with an Access Denied error if the proper permissions aren't present. Download the logs for the release, and locate the .ps1 file with the command to give permissions to the Azure Pipelines agent. You can run the command directly, or you can copy the principal ID from the file and add the access policy manually in the Azure portal. Get and List are the minimum permissions required.
When I deploy my release, I get the following error on the Key Vault task:
The specified Azure service connection needs to have Get, List secret management permissions on the selected key vault. To set these permissions, download the ProvisionKeyVaultPermissions.ps1 script from build/release logs and execute it, or set them from the Azure portal
I've added this PowerShell script, ProvisionKeyVaultPermissions.ps1, to my repo and added it to my task, but it just runs forever. I'm unsure if I'm missing something here.
I hope this is clear; please ask for any additional info.
I wonder if it's the DevOps service connection that's missing the permissions.
You can check the access policies for the vault in the portal. You should see your service connection as an APPLICATION; it needs the Get and List privileges, as the document you're following says. My understanding is that these are privileges for the account that's deploying your code, rather than the account that will run your code.
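If you would rather set them from the CLI than the portal, a sketch, assuming a placeholder vault name and the service connection's client ID:
# Grant the service principal Get and List on secrets:
az keyvault set-policy --name my-vault --spn <client-id> --secret-permissions get list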
I have created a Build Docker Image task inside a DevOps build pipeline.
In this task, after selecting the Azure subscription, the Azure Container Registry list shows a "No results found" message.
Although I have created the ACR in that subscription, I'm unable to connect to it via the DevOps build pipeline.
Is this issue related to permissions, or something else?
Did you create a service principal for the Azure DevOps pipeline? Once created, you can give it the proper role for what you are trying to do:
# Look up the registry's resource ID and the service principal's app ID
# ("myregistry" and "my-service-principal" are placeholder names):
ACR_REGISTRY_ID=$(az acr show --name myregistry --query id --output tsv)
SERVICE_PRINCIPAL_ID=$(az ad sp list --display-name my-service-principal --query "[0].appId" --output tsv)
# Assign the desired role to the service principal. Modify the '--role' argument
# value as desired:
# acrpull: pull only
# acrpush: push and pull
# owner: push, pull, and assign roles
az role assignment create --assignee $SERVICE_PRINCIPAL_ID --scope $ACR_REGISTRY_ID --role acrpull
In your build pipeline, you need to add a login task that uses an existing service connection before your Docker build and/or push tasks. The service connection holds the subscription and the name of the Azure Container Registry to which you will be uploading your Docker images.
We have two different Azure subscriptions and tenant IDs: one for the development environment and the other for the production environment. As part of our CI/CD pipeline, we build Docker images and push them to the ACR in the dev subscription.
We want to reuse the Docker images available in the dev subscription's ACR when running the k8s cluster in the prod environment.
As per my understanding, we cannot reuse an ACR across different subscriptions and tenant IDs; the only possible solution is to have at least the same tenant ID.
Do we have any way to reuse these Docker images?
Why not? You just need to authenticate to the ACR, and then you can pull images from it. You won't be able to use an Azure connection for that, but you can use a Docker connection (in both Kubernetes and Azure DevOps); see the sketch below.
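For the Kubernetes side, a minimal sketch, assuming placeholder registry and service principal values; reference the secret from imagePullSecrets in your pod spec:
# Create an image pull secret from the service principal credentials:
kubectl create secret docker-registry acr-pull-secret \
  --docker-server=myregistry.azurecr.io \
  --docker-username=<service-principal-app-id> \
  --docker-password=<service-principal-password>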
From the ACR point of view, it's supported. Say you create a service principal SP1 in tenant1/sub1 and assign it the AcrPull role for registry1 in tenant1/sub1; SP1 can now access registry1. You can then assign the same SP1 the AcrPull role for registry2 in another tenant, tenant2/sub2 (this essentially makes SP1 a guest service principal in tenant2); now SP1 can also pull from registry2.
As long as an SP is given permission to pull from a registry, you can use the SP as the username/password to access the registry from anywhere. Can you elaborate on what is not working?
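For example, a quick way to verify the SP from any machine; all values here are placeholders:
# Log in with the SP credentials and try pulling an image:
docker login myregistry.azurecr.io --username <sp-app-id> --password <sp-password>
docker pull myregistry.azurecr.io/myapp:latest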
Currently I'm trying to create dynamic environments via Azure DevOps.
One of the steps to achieve this is to take a copy of our production databases and place them in a temp resource group (production subscription), and then move the SQL server and associated databases to our non-production subscription. From there we create the web apps and deploy the code.
When I run this via the Az CLI, I am able to move the resources with the following:
# Get the resource ID of the SQL server, then move it (and its databases)
# to the destination resource group in the sandbox subscription:
SQLSERVERID=$(az resource show -g $RSGNAMETEMP -n $SQLSERVERNAME --resource-type "Microsoft.Sql/servers" --query id --output tsv)
az resource move --destination-group $RSGNAME --ids $SQLSERVERID --destination-subscription-id $SANDBOXSUBSCRIPTIONID
However, when I run this via Azure DevOps, I get the following error:
ERROR: The client (...) with object id (...) has permission to perform action on scope however, it does not have permission to perform action (...) on the linked scope(s).
I believe this problem happens because, when you configure the Az CLI step in Azure DevOps, you select the subscription from a drop-down list. The account / service principal only has access rights to that specific subscription and not to multiple. Is it possible to configure a service principal (that can be used in Azure DevOps) that can connect to multiple subscriptions?
Yes. Just go to the Azure portal, navigate to the desired subscription blade, go to Access control (IAM), press the + sign at the top, and add your principal as a Contributor to the subscription.
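The CLI equivalent would be a sketch along these lines (IDs are placeholders):
# Grant the existing service principal Contributor on the second subscription:
az role assignment create --assignee <service-principal-app-id> \
  --role Contributor \
  --scope "/subscriptions/<second-subscription-id>"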
To find the service principal name, use this:
Click the Manage link in the Azure Subscription field of your VSTS job; it will navigate you to a new blade. Click Manage Service Principal there. It will take you to the application page in Azure AD. After that, you can copy the name under the Managed application in local directory field and use that name to grant it Key Vault permissions.
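If you prefer the CLI, a sketch to look the principal up by the application (client) ID shown on that page (placeholder value):
az ad sp show --id <application-client-id> --query displayName --output tsv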