Accessing a private storage account from Azure DevOps Hosted Agents

I'm trying to access an Azure Blob Storage account to write some files from my pipelines running on Azure DevOps hosted agents.
We can't yet use the Azure DevOps service tag with Azure DevOps hosted agents,
and I was wondering if there is a smart solution to access my blob storage from my hosted agents without opening it to the whole internet.
Thank you in advance

Based on your requirement, you need to access a private storage account from a Microsoft-hosted agent.
As far as I know, service tags are not currently supported by the Azure Storage account firewall.
To meet your requirement, you can use a script to get the current Microsoft-hosted agent IP and add it to the storage account firewall whitelist with Azure CLI or Azure PowerShell.
For example:
steps:
- task: AzurePowerShell@5
  displayName: 'Azure PowerShell script: Set Rule'
  inputs:
    azureSubscription: kevin0215
    ScriptType: InlineScript
    Inline: |
      $IP = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
      $IP
      Add-AzStorageAccountNetworkRule -ResourceGroupName "ResourceGroup" -AccountName "kevin0204" -IPAddressOrRange "$IP"
    preferredAzurePowerShellVersion: '3.1.0'
- task: AzureFileCopy@4
  displayName: 'AzureBlob File Copy'
  inputs:
    SourcePath: test
    azureSubscription: kevin0322
    Destination: AzureBlob
    storage: test
    ContainerName: 1
- task: AzurePowerShell@5
  displayName: 'Azure PowerShell script: Remove Rule'
  inputs:
    azureSubscription: kevin0215
    ScriptType: InlineScript
    Inline: |
      $IP = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
      $IP
      Remove-AzStorageAccountNetworkRule -ResourceGroupName "ResourceGroup" -AccountName "kevin0204" -IPAddressOrRange "$IP"
    preferredAzurePowerShellVersion: '3.1.0'
Explanation:
You add the agent IP to the firewall whitelist before uploading the file, and remove it again after uploading.
Note: Azure Storage accounts have a known limitation here. Refer to this doc: Limitations of Azure Storage Account IP network rules.
When your Azure DevOps Services organization and the Azure Storage account are in the same region, traffic flows over private IP addresses, so the IP network rules do not apply and you may see intermittent access issues.
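The answer above mentions Azure CLI as an alternative to Az PowerShell. A minimal sketch of the "Set Rule" step with az, reusing the answer's resource names (all of them placeholders):

```yaml
- task: AzureCLI@2
  displayName: 'Azure CLI: Set Rule'
  inputs:
    azureSubscription: kevin0215
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: |
      # Get the current hosted agent's public IP
      $IP = (Invoke-RestMethod http://ipinfo.io/json).ip
      # Add it to the storage account firewall whitelist
      az storage account network-rule add --resource-group "ResourceGroup" --account-name "kevin0204" --ip-address $IP
```

After the copy step, `az storage account network-rule remove` with the same arguments removes the rule again.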

Related

Why does the DevOps service principal not have the Synapse Administrator role when the Synapse workspace was created using Bicep code?

I have created an Azure Synapse workspace using Bicep code:
resource synapse 'Microsoft.Synapse/workspaces@2021-06-01' = {
  name: synapseName
  location: location
  tags: tags
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    defaultDataLakeStorage: {
      accountUrl: 'https://${synapseDefaultStorageAccountName}.dfs.${environment().suffixes.storage}'
      filesystem: synapseDefaultStorageAccountFileSystemName
    }
    managedResourceGroupName: synapseName
    managedVirtualNetwork: 'default'
    managedVirtualNetworkSettings: {
      allowedAadTenantIdsForLinking: []
      linkedAccessCheckOnTargetResource: true
      preventDataExfiltration: true
    }
    publicNetworkAccess: 'Disabled'
    purviewConfiguration: empty(XXX) ? {} : {
      purviewResourceId: XXXX
    }
    sqlAdministratorLogin: administratorUsername
    sqlAdministratorLoginPassword: administratorPassword
    virtualNetworkProfile: {
      computeSubnetId: subnetId
    }
  }
}
Synapse is created, but in the Access control section I only see the Synapse workspace itself with the Synapse Administrator role. I expected the DevOps service principal to be there too.
Need: I need to assign this Synapse Administrator role to a certain group within the same pipeline where I am deploying Synapse. I don't want to add the DevOps service connection SPN manually.
After the deployment, when it reaches the PowerShell step, it throws an error:
- task: AzureResourceManagerTemplateDeployment@3
  name: Ingest_Application_Infrastructure_deployment
  displayName: Ingest Application Infrastructure deployment
  continueOnError: false
  inputs:
    deploymentScope: "Subscription"
    azureResourceManagerConnection: ${{ parameters.AZURE_RESOURCE_MANAGER_CONNECTION_NAME }}
    subscriptionId: $(AZURE_SUBSCRIPTION_ID)
    location: ${{ parameters.AZURE_LOCATION }}
    templateLocation: "Linked artifact"
    csmFile: "$(System.DefaultWorkingDirectory)/infra/main.bicep"
    csmParametersFile: "$(System.DefaultWorkingDirectory)/${{ parameters.PARAMETER_FILE_PATH }}"
    deploymentMode: "Incremental"
    deploymentOutputs: ingestDeploymentOutputs
    overrideParameters: >
      -environment "${{ parameters.ENVIRONMENT }}"

# Add Synapse Administrator Role to Synapse
- task: AzureCLI@2
  displayName: "Add Synapse Administrator Role"
  enabled: true
  inputs:
    azureSubscription: ${{ parameters.AZURE_RESOURCE_MANAGER_CONNECTION_NAME }}
    scriptType: "pscore"
    scriptLocation: "scriptPath"
    scriptPath: "$(System.DefaultWorkingDirectory)/code/MYSCRIPT.ps1"
    scriptArguments: >
      -DeploymentOutputs '$(ingestDeploymentOutputs)'
Error: ERROR: (Unauthorized) The principal 'XXXXXXXXXX' does not have the required Synapse RBAC permission to perform this action. Required permission: Action: Microsoft.Synapse/workspaces/read, Scope: workspaces/MYSYNAPSEWORKSPACENAME/*.
Code: Unauthorized
What could be wrong here? Thanks in advance
I had the same problem. Something has clearly changed in the Synapse resource provider, as previously the workspace creator would automatically be granted the Synapse Administrator role. You can use the Microsoft.Synapse/workspaces/administrators resource to grant the DevOps service principal both the workspace AD admin role (shown in the Portal under the Synapse resource's Azure Active Directory blade) and the Synapse Administrator role.
In my case I wanted the Synapse Administrator role for both the DevOps service principal and an AD group. As there can only be one workspace AD admin, I needed an extra step for adding the Synapse Administrator role to the AD group. So first I granted the DevOps SP the workspace AD admin role in the Bicep deployment. Then I added an Azure CLI task to my deployment to add the Synapse Administrator role to the AD group (az synapse role assignment create).
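A sketch of the first part of that approach, assuming the `synapse` parent resource from the question; the parameter names are placeholders and the API version is an assumption:

```bicep
// Grant the deploying (DevOps) service principal the workspace AD admin role.
// 'sid' is the service principal's object ID; both parameters are placeholders.
resource synapseAdmin 'Microsoft.Synapse/workspaces/administrators@2021-06-01' = {
  name: 'activeDirectory'
  parent: synapse
  properties: {
    administratorType: 'ActiveDirectory'
    login: devOpsServicePrincipalName
    sid: devOpsServicePrincipalObjectId
    tenantId: tenant().tenantId
  }
}
```

The follow-up Azure CLI task can then grant the group its role, e.g. `az synapse role assignment create --workspace-name <workspace> --role "Synapse Administrator" --assignee <group object ID>`.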

Need to provide link of a blob in azure storage account in an email

I need to provide the link of an Azure Storage blob in an email so that it can be downloaded from that link. This email is triggered by Azure Devops Pipeline. Now the problem is that we do not know the name of the blob in advance (we only know the container name for that blob) so we cannot hardcode it in the email body for the Email Step in Azure Devops Pipeline. Is there any alternative way by which I can provide the downloadable link in the email step?
As Gaurav Mantri suggested, you do need to know the name of the blob in order to generate a link to it.
You can execute the az storage blob download command in an Azure CLI task to download files from Azure Blob Storage:
steps:
- task: AzureCLI@1
  displayName: 'Azure CLI'
  inputs:
    azureSubscription: {service connection}
    scriptLocation: inlineScript
    inlineScript: |
      mkdir $(Build.SourcesDirectory)\BlobFile
      az storage blob download --container-name $(containername) --file $(Build.SourcesDirectory)\BlobFile --name "{file name}" --account-key $(accountkey) --account-name $(accountname)
Reference : How can I download a file from blob storage inside an Azure pipeline?
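Since the blob name isn't known in advance, one option is to list the container, pick the newest blob, and generate a read-only SAS URL for the email body. A sketch; the container/account variables and the expiry date are placeholders, and the JMESPath query assumes you want the most recently modified blob:

```yaml
- task: AzureCLI@2
  displayName: 'Build download link for newest blob'
  inputs:
    azureSubscription: {service connection}
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Find the most recently modified blob in the container
      BLOB=$(az storage blob list --container-name $(containername) \
        --account-name $(accountname) --account-key $(accountkey) \
        --query "max_by([], &properties.lastModified).name" -o tsv)
      # Generate a time-limited read-only SAS URL for it
      URL=$(az storage blob generate-sas --container-name $(containername) \
        --name "$BLOB" --permissions r --expiry 2030-01-01T00:00Z \
        --account-name $(accountname) --account-key $(accountkey) \
        --full-uri -o tsv)
      # Expose the link to later steps (e.g. the email step)
      echo "##vso[task.setvariable variable=BlobUrl]$URL"
```

A later email step could then reference `$(BlobUrl)` in its body.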

Interact with Terraform resources behind firewall on Azure

This challenge is regarding Azure and Azure DevOps, but I would imagine it happens on similar platforms (AWS, GCP, GitHub, GitLab, etc.).
I am currently using Azure DevOps Pipelines, but I am facing a problem with interacting with resources behind firewalls (either IP-restricted or virtual-network-restricted). As Azure Pipelines spins up a new VM, it requires me to whitelist that machine's public IP on each run. Accommodating this whitelisting is very janky, as I build Azure Pipelines as submodules for reproducibility, extending templates from one project and using them in many. Terraform state needs to access configuration on restricted resources, hence the access-denied messages.
I have looked into the following to solve the challenge and my thoughts about them:
Implementing Azure Virtual Machine scale-set agents instead of Microsoft Hosted agent (https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops).
This would require a long discussion with our security team, as they are not fans of using virtual machines in the cloud.
Deploy Atlantis on AKS with a static public IP from the Load Balancer and whitelist this one IP (https://learn.microsoft.com/en-us/azure/aks/static-ip).
This would require some initial setup from my side as I could not find that much documentation on deploying this using Azure (AKS) and Azure DevOps. (found this article: https://engineering.plex.com/posts/deploying-infrastructure-azure/ and this video using Github https://www.youtube.com/watch?v=33j49_n8Zvc&ab_channel=JJAsghar. Terraform Module for deploying Atlantis on AWS: https://github.com/terraform-aws-modules/terraform-aws-atlantis)
Use Terraform Enterprise for infrastructure deployment and whitelist Terraform Enterprise IP range (https://www.terraform.io/docs/cloud/api/ip-ranges.html).
This would require introducing a new platform for provisioning infrastructure.
Make a huge whitelist stage that whitelists each resource from my Terraform Module Registry depending on the module used in the pipeline (very cumbersome to maintain).
This would require too much maintenance and does not seem like the best solution.
What are your thoughts on solving this challenge?
You can use a script to get the IP of the cloud agent, and dynamically whitelist that IP address on your Azure storage account using Azure PowerShell or Azure CLI. See the example below:
1. Add an Azure PowerShell task before the Terraform task in your Azure DevOps pipeline to get the agent's IP address and whitelist it on the Azure storage account.
- task: AzurePowerShell@5
  displayName: 'Azure PowerShell script: InlineScript copy'
  inputs:
    azureSubscription: 'Microsoft-Azure'
    ScriptType: InlineScript
    Inline: |
      $ip = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip #get agent ip
      #add ip to whitelist
      Add-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount" -IPAddressOrRange $ip
    azurePowerShellVersion: LatestVersion
2. Add another Azure PowerShell task at the end of your pipeline to remove the whitelist entry.
- task: AzurePowerShell@5
  displayName: 'Azure PowerShell script: InlineScript copy'
  inputs:
    azureSubscription: 'Microsoft-Azure'
    ScriptType: InlineScript
    Inline: |
      $ip = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
      Remove-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount" -IPAddressOrRange $ip
    azurePowerShellVersion: LatestVersion
Check the document here for more information.
The IP ranges for cloud agents change weekly. You can also check the weekly file and update the whitelisted IP addresses manually. Check here for more information.
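If you would rather query the published ranges than download the weekly file, `az network list-service-tags` returns the same data. A sketch; the tag name and region here are assumptions:

```yaml
- task: AzureCLI@2
  displayName: 'List service-tag IP ranges'
  inputs:
    azureSubscription: 'Microsoft-Azure'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Print the address prefixes for one service tag in one region
      az network list-service-tags --location westeurope \
        --query "values[?name=='AzureDevOps'].properties.addressPrefixes | [0]" -o tsv
```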

Terraform: How to hook up azurerm_api_management_authorization_server to azurerm_api_management_api

Hi and thanks for taking the time to look at my question.
I'm using Terraform to create an API Management object in Azure. I've mastered the setup of APIs, Products, Policies, Subscriptions and Users.
My devs set up an authorisation server on the API Management object in Azure, which I automated using azurerm_api_management_authorization_server; that works fine.
However, I can't see how to configure the API settings in Terraform to use that authorisation server.
API User Authorisation Settings
It looks as if this capability is currently not available via Terraform, as shown by the open issue against the azurerm provider:
https://github.com/terraform-providers/terraform-provider-azurerm/issues/3341
To let you continue building out your infrastructure within Terraform, I would recommend utilising the azurerm_template_deployment resource.
This lets you use the available ARM APIs from an ARM deployment:
https://learn.microsoft.com/en-us/azure/templates/microsoft.apimanagement/2019-01-01/service/apis
Sadly, until that issue is closed, that capability won't be in the azurerm provider.
Note on the answer provided by Lachie White:
As I'm running this from a YAML pipeline in Azure DevOps, I got around this limitation by using an Azure PowerShell task.
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'Subscription (111111111-1111-1111-1111-111111111111)'
    ScriptType: 'InlineScript'
    Inline: |
      $ApiMgmtContext = New-AzApiManagementContext -ResourceGroupName "api-mgmt-rg" -ServiceName "api-mgmt"
      $ApiAuthorizationServer = Get-AzApiManagementAuthorizationServer -Context $ApiMgmtContext
      $ApiAuthorizationServerName = $ApiAuthorizationServer.Name
      Set-AzApiManagementApi -Context $ApiMgmtContext -ApiId "api-name" -AuthorizationServerId "$ApiAuthorizationServerName"
    azurePowerShellVersion: 'LatestVersion'

Azure DevOps: Fill 'Azure subscription' field using variables

I have a question regarding Azure DevOps: it seems it is not possible to set a pipeline job's Azure subscription field using variables. On attempting to do this I get the error message:
The pipeline is not valid.
I tried using the following in the variable:
Subscription ID
Service Connection Name
Service Connection ID
But with no luck - the same issue again and again. Since I need to change subscriptions for the pipeline very often, this is a critical problem for me. Could you please elaborate on the issue?
If you use YAML pipelines, you can configure your subscription as a variable as follows:
- task: AzureCLI@2
  inputs:
    azureSubscription: '$(SubscriptionId)'
    scriptType: 'ps'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az --version
      az account show
Variable settings
And during every build you can override your Azure subscription.
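For completeness, a sketch of the variable itself; the service connection name is a placeholder. Declaring the variable at pipeline level lets you override it at queue time:

```yaml
variables:
  SubscriptionId: 'MyServiceConnection' # name of an Azure service connection (placeholder)

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: '$(SubscriptionId)'
    scriptType: 'ps'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az account show
```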
