Publish Multiple Bicep Templates to a Container Registry

We are in the process of transitioning our infrastructure from ARM templates to Bicep templates. We have a dedicated repository with all of our template files, which we wish to publish to a central location to be used by other repos in our organization.
Previously, with ARM templates, we published the folder containing all of our templates to an Azure Storage account, which other repos could then reference via the template blob URL with a SAS token. We are looking to do something similar with Bicep templates so we do not need to publish each one individually. Currently the Azure CLI and PowerShell commands can only publish one file at a time using the --file argument:
az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1
The only possibility I see is a foreach statement in PowerShell that iterates through each file in the folder and publishes them individually:
foreach ($file in Get-ChildItem -Filter *.bicep)
{
    az bicep publish --file $file.Name --target br:exampleregistry.azurecr.io/bicep/modules/$($file.BaseName):$version
}
Question:
Has anyone come up with a more streamlined way to publish multiple Bicep templates in a single operation?

AFAIK, iterating through each Bicep file as you did is the way to publish multiple Bicep templates to ACR.
You can also check Automate Maintaining a Private Bicep Module Registry with Azure Pipelines, where the already published modules are compared to the Bicep files in the folder and only the ones not yet in the registry are published each time.
There, ACR is used to create a private Bicep registry for sharing modules, and a build pipeline publishes modules into the ACR whenever new modules are added or existing ones are modified.
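A minimal sketch of that compare-then-publish idea, assuming modules are tagged with a version and skipped when the target tag already exists (az acr repository show-tags is an existing command; the bicep/modules naming convention mirrors the question):
$registryName = 'exampleregistry'
$version = 'v1'
foreach ($module in Get-ChildItem -Path ./Modules -Recurse -Filter *.bicep) {
    $moduleName = $module.BaseName.ToLower()
    $repository = "bicep/modules/$moduleName"
    # Lists the tags already published for this module; empty if it was never published
    $existingTags = az acr repository show-tags --name $registryName --repository $repository --output tsv 2>$null
    if ($existingTags -contains $version) {
        Write-Host "Skipping ${moduleName}:${version}, already in the registry"
        continue
    }
    az bicep publish --file $module.FullName --target "br:${registryName}.azurecr.io/${repository}:${version}"
}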

I had some trouble making it work with the foreach loop. The following code worked for me:
azure-pipelines.yml:
jobs:
- job: modules
  displayName: 'Publish Bicep Modules'
  pool:
    name: 'myBuildingPoolName'
  steps:
  - task: AzureCLI@2
    displayName: 'Publish/Update Modules to Registry'
    inputs:
      azureSubscription: $(ServiceConnectionName) # Pipeline parameter
      scriptType: 'pscore'
      scriptLocation: inlineScript
      inlineScript: |
        az bicep install
        $registryName = '$(RegistryName)' # Pipeline parameter
        $version = '$(Version)' # Pipeline parameter
        # Bicep files are in the Modules folder
        $modules = Get-ChildItem -Path ./Modules -Recurse -Filter *.bicep
        foreach ($module in $modules) {
          $moduleName = $module.BaseName.ToLower()
          Write-Host "Adding new module ${moduleName} with version ${version}"
          az bicep publish --file $module.FullName --target br:${registryName}.azurecr.io/bicep/modules/${moduleName}:${version}
        }
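The $(RegistryName) and $(Version) macros above resolve from pipeline variables; a minimal sketch of defining them (all values are placeholders):
variables:
  ServiceConnectionName: 'my-azure-service-connection'
  RegistryName: 'exampleregistry'
  Version: 'v1'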
Also make sure you have the Azure CLI and PowerShell installed in case you use a self-hosted Docker agent pool:
Dockerfile:
#Install Azure-CLI
RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
&& rm -rf /var/lib/apt/lists/*
#Install Powershell
RUN wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
RUN dpkg -i packages-microsoft-prod.deb
RUN apt-get update
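# Note: add-apt-repository is provided by software-properties-common;
# install that package first if your base image does not include it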
RUN add-apt-repository universe
RUN apt-get install -y powershell

Bicep templates are text files, so one alternative to an ACR is to simply store the templates in a code repository.
Advantages include:
storing the templates is very easy, with as much control as you would like using branches and pull requests.
controlling access to the templates is easy; you allow your consumers read-access to the repository
using the templates is very easy; your consumers can just check out the code repository and reference the template they need by a relative file path
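As a sketch of the consumer side (the repository name, alias, and paths are placeholders), a pipeline can check out the shared template repository next to its own code and reference templates by relative path:
resources:
  repositories:
  - repository: templates            # alias used by the checkout step below
    type: git
    name: MyProject/bicep-templates  # assumed Azure Repos <project>/<repo> name

steps:
- checkout: self
- checkout: templates
# With multiple checkouts, each repo is cloned into its own subfolder of
# $(Build.SourcesDirectory), so a main.bicep in the consuming repo can use a
# module reference like '../bicep-templates/storage.bicep'.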

Related

Is there a handy way to use Bicep modules in an Azure DevOps pipeline?

I am setting up my Azure DevOps pipeline (an Azure CLI task) with the intention of deploying a resource group and several resources within it. So far I have been able to deploy and validate from my local PC with no issues; however, when I configure my pipeline in DevOps I get the following error message:
C:\devops_work\11\s\main_v2.bicep(55,29) : Error BCP091: An error occurred reading file. Could not find a part of the path 'C:\devops_work\11\isv-bicep\storage_account.bicep'.
For context, 'main_v2.bicep' is my "main file" where the modules are called, in this case, "storage_account.bicep"
The same error occurs for all other modules. A couple of details regarding my pipeline:
I am using my own agent pool
My code sits in an Azure Repository
I have tried checking 'Checkout submodules' (Any nested submodules within)
The files all sit at the root level of the repository
My pipeline is not a YAML pipeline
Any help or insight into this is duly appreciated
You need to specify the working directory and ensure that your repository is being cloned.
steps:
- checkout: self
- powershell: |
    az deployment group create `
      -f "your-bicep-file.bicep" `
      -g "your-resource-group-name"
  workingDirectory: $(Build.SourcesDirectory)
If the pipeline is not in the same repository as your Bicep files, replace self in the checkout step with the repository alias, and append the path to the working directory (by default each repo is cloned into $(Build.SourcesDirectory), but if you check out more than one repo an extra directory per repo is added).
steps:
- checkout: <your repo alias>
- powershell: |
    az deployment group create `
      -f "your-bicep-file.bicep" `
      -g "your-resource-group-name"
  workingDirectory: $(Build.SourcesDirectory)/<your repo alias>
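Note that an alias other than self must also be declared as a repository resource before it can be checked out; a minimal sketch (the names are placeholders):
resources:
  repositories:
  - repository: <your repo alias>
    type: git
    name: <project>/<repository>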

Publish button on Azure Synapse using code

Hello guys, I'm currently working with Azure Synapse Studio. My situation can be described this way:
I have 3 environments: Dev, Test and Prod. Each of them has an Azure Synapse workspace, but I can access only the Dev one. I need to make changes from Dev for the other 2 environments as well (SQL scripts, pipelines, etc.) and then publish them to the other environments without touching those environments directly.
So I think Azure DevOps could be the solution.
From the Dev Synapse Studio workspace I created 3 branches, 1 per environment, all of them linked to an Azure DevOps repo. Test and Prod are also linked to the same repo.
The problem is that the code on the Test and Prod workspaces could be different from the code on Dev, so I can't use the same ARM template (generated by publishing on the publish branch of the workspace) for all 3 environments. A good way could be to find a way to hit the Publish button on the other environments without using the portal, for example via a REST API. Is that possible?
For now I have only set up the 3-branch solution so I can manage the 3 environments directly from the Dev environment, but I don't think this will be the right solution. Are changes applied on the other environments? Can I run SQL scripts or pipelines manually from the other environments?
This is my current situation; on the other environments I asked to set the collaboration and publish branch with the same value as the environment branch name (test-test-test and prod-prod-prod).
With the new version (V2) of the Synapse workspace deployment task (in preview as of June 2022), it is now possible to deploy from any branch using Azure DevOps, so there is no need for a workspace_publish branch or the Publish button.
Just make the object JSON files available as artifacts to the release pipeline, and select "Validate and deploy" as the Operation Type.
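A hedged sketch of that in YAML, assuming the V2 task exposes the "Validate and deploy" operation as operation: 'validateDeploy' (input names follow the validate example later in this thread; check the task docs for your version):
- task: Synapse workspace deployment@2
  inputs:
    operation: 'validateDeploy'                      # the "Validate and deploy" Operation Type
    ArtifactsFolder: '$(Pipeline.Workspace)/synapse' # the object json files published as artifacts
    TargetWorkspaceName: 'my-target-workspace'       # placeholder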
I am working with Microsoft directly, building a Synapse warehouse myself for a large corporation. We have the same issue, in that the Publish button must be pressed manually for the ARM templates to be generated. Microsoft have confirmed that there is no automatic method for this available right now; we had hoped to receive a Preview AzDevOps deployment task this month, but it turns out that it simply allows us to validate the JSON assets - it still deploys using the ARM template.
We have also looked at using Azure Data Factory tools to deploy from the JSON component files, but we run into issues with the dedicated pool stored procedure tasks being unsupported. :(
The only standard option to achieve this is to create a GitHub repository, set up continuous integration, and use either a self-hosted Azure DevOps VM agent or an Azure DevOps hosted agent.
Then you can set up release pipelines in Azure DevOps to work with the different environments. But you still need to commit the changes to the GitHub repository for each environment; there is no Publish-button equivalent available.
Refer Continuous integration and delivery for an Azure Synapse Analytics workspace for more details.
This was bothering me as well, so I put together the following to run once any PR is approved to merge into the Synapse collaboration branch, in our case "main".
For your case, you can modify it to target the relevant workspaces.
See the Azure DevOps pipeline code below.
What it does:
It runs the Synapse workspace validation task, which also generates the workspace template JSONs as an artifact that needs to be published to the workspace_publish branch.
It then checks out your publish branch and commits and pushes the templates generated by the previous task.
Finally, so that the workspace UI does not think there are any unpublished changes when you click the "Publish" button, it updates the workspace configuration to reflect the latest commit ID from the workspace COLLABORATION branch (main in this example) that was used to generate what was pushed to the PUBLISH branch in the previous step.
Any suggestions/improvements welcome. Hope this helps.
name: $(TeamProject)_$(Build.DefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r) # sets Build.BuildNumber
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - synapse/*
resources:
  repositories:
  - repository: 'Synapse-Publish'
    type: git
    name: Synapse # update to the name of your repo
    ref: workspace_publish # update to the name of your synapse PUBLISH branch
variables:
  repoName: $(Build.Repository.Name)
  azureSubscription: your_subscription
  azureTenantId: your_tenant_guid
  adoOrg: your_azure_devops_org_name
  adoProject: your_azure_devops_project_name
  SourceWorkspaceName: your_synapse_workspace_name
  workspacePublishBranch: workspace_publish # should be the same for you but update if not
stages:
- stage: build_stage
  displayName: Build, Run Validations, Publish NonProd if merged to main
  jobs:
  # other jobs excluded from this snippet
  - job: publish_workspace_artifacts_job
    displayName: Publish for $(SourceWorkspaceName) $(workspacePublishBranch)
    # only kick off workspace publish job for non-PR builds
    condition: and(not(or(failed(), canceled())), ne(variables['Build.Reason'], 'PullRequest'))
    pool:
      name: 'linux-vmss' # update this for whatever you need
    steps:
    - checkout: self # main
      clean: true
      persistCredentials: true
    - task: Synapse workspace deployment@2
      displayName: Generate workspace artifact templates
      condition: true
      continueOnError: false
      inputs:
        operation: 'validate' # despite this name, it also generates the templates
        ArtifactsFolder: '$(Build.SourcesDirectory)/$(repoName)/synapse'
        TargetWorkspaceName: $(SourceWorkspaceName)
    - checkout: 'Synapse-Publish' # workspace_publish
      clean: true
      persistCredentials: true
    - task: CmdLine@2
      displayName: 'Set git user'
      inputs:
        workingDirectory: '$(System.DefaultWorkingDirectory)'
        failOnStderr: true
        script: |
          git config --global user.email "whatever.you.want@your_org.com"
          git config --global user.name "Whatever You Want"
    - task: AzurePowerShell@5
      displayName: Publish to $(SourceWorkspaceName) $(workspacePublishBranch)
      condition: true
      inputs:
        azureSubscription: '$(azureSubscription)'
        ScriptType: InlineScript
        Inline: |
          # the output from the workspace validate step above is saved here, and also published as an artifact named after the synapse workspace
          # Get-ChildItem $(Build.SourcesDirectory)/ExportedArtifacts -Name
          cd $(Build.SourcesDirectory)/$(repoName)
          git pull origin $(workspacePublishBranch)
          git switch $(workspacePublishBranch)
          Move-Item -Path $(Build.SourcesDirectory)/ExportedArtifacts/*.json -Destination $(Build.SourcesDirectory)/$(repoName)/$(SourceWorkspaceName) -Force -Verbose
          git add $(Build.SourcesDirectory)/$(repoName)/$(SourceWorkspaceName)/*.json
          $diff = git diff --cached
          $status = git status
          if (!($status.ToLower() -like "*nothing to commit*"))
          {
            echo "##[section]git push changes to repo";
            git commit -m "Update $(workspacePublishBranch) for source workspace $(SourceWorkspaceName) [skip ci]";
            git pull --rebase;
            git push origin $(workspacePublishBranch);
          }
          else
          {
            echo "##[warning]No new changes to push for source workspace $(SourceWorkspaceName) templates";
            git reset --hard origin/$(workspacePublishBranch)
            git clean -fxd
          }
      azurePowerShellVersion: 'LatestVersion'
    - task: AzurePowerShell@5
      displayName: Update $(SourceWorkspaceName) Git Config # required so the workspace doesn't think there are unpublished changes when you click "Publish"
      inputs:
        azureSubscription: '$(azureSubscription)'
        ScriptType: InlineScript
        Inline: |
          # get the latest version of this module, which has the LastCommitId parameter that we need
          Install-Module -Name Az.Synapse -Confirm:$false -RequiredVersion 1.5.0 -Force
          Import-Module -Name Az.Synapse -MinimumVersion 1.5.0
          cd $(Build.SourcesDirectory)/$(repoName)
          [String] $latestCommitHash = git log -n 1 origin/main --pretty=format:"%H" # only the hash value of the latest commit
          $config = New-AzSynapseGitRepositoryConfig `
            -RepositoryType AzureDevOpsGit `
            -TenantId $(azureTenantId) `
            -AccountName $(adoOrg) `
            -ProjectName $(adoProject) `
            -RepositoryName $(repoName) `
            -CollaborationBranch main `
            -RootFolder "/synapse" `
            -LastCommitId $latestCommitHash
          echo "##[section] Updating $(SourceWorkspaceName) git configuration to point to the latest main branch commit ID"
          # see https://learn.microsoft.com/en-us/powershell/module/az.synapse/update-azsynapseworkspace?view=azps-8.0.0
          Update-AzSynapseWorkspace -Name $(SourceWorkspaceName) -GitRepository $config
      azurePowerShellVersion: 'LatestVersion'
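Two details this flow relies on: persistCredentials: true on the checkout steps keeps the OAuth token available so the inline script can git push, and the [skip ci] marker in the commit message stops the push to workspace_publish from retriggering this same pipeline.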

Azure Devops nuget artifact feed and docker

Is there a good way to create an authentication mechanism to DevOps to be able to access the artifact NuGet feed? I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our DevOps NuGet feed. Ideally people wouldn't have to have the same stock Dockerfile code in every single project grabbing a PAT from their host build system. This would also let us do CI/CD a little more nicely.
My current solution
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
ARG IT_PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\": \"https://pkgs.dev.azure.com/MNPIT/_packaging/MNP/nuget/v3/index.json\",\"username\": \"build\",\"password\": \"${IT_PAT}\"}]}"
RUN mkdir -p $HOME/.nuget/plugins
WORKDIR /deps
# Downloads and installs the NuGet credential plugin so we can login to the private NuGet feed
RUN curl https://github.com/microsoft/artifacts-credprovider/releases/download/v0.1.24/Microsoft.NetCore2.NuGet.CredentialProvider.tar.gz -L -o creds.tar.gz -s
RUN tar -xzf creds.tar.gz
RUN cp -r plugins/netcore/ ~/.nuget/plugins
Stock code in every build file
Each user configuring their environment variables with a PAT
Passing the PAT on every build
Does not work with an automated build system
YAML
Run the NuGetAuthenticate task to add VSS_NUGET_ACCESSTOKEN to the environment variables
Pass token to Docker task as an argument
- task: NuGetAuthenticate@0
- task: Docker@2
  displayName: 'build docker image'
  inputs:
    command: build
    containerRegistry: 'happycodeacr'
    repository: 'hc-app-sample-api-dev'
    buildContext: '$(Pipeline.Workspace)/app'
    Dockerfile: '$(Pipeline.Workspace)/app/src/HappyCode.Api/Dockerfile'
    arguments: '--build-arg FEED_ACCESSTOKEN=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      latest
      $(Build.BuildId)
Dockerfile
Download and install the artifacts credential provider
Receive token
Set VSS_NUGET_EXTERNAL_FEED_ENDPOINTS environment variable with feed url and token for nuget restore process
Copy NuGet.config file
Run dotnet restore
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /work
RUN curl -L https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | sh
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS \
"{\"endpointCredentials\": [{\"endpoint\":\"https://happycode.pkgs.visualstudio.com/_packaging/hc-nuget-feed/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
COPY ["NuGet.config", "./"]
COPY ["src/*/*.csproj", "./"]
RUN for projectFile in $(ls *.csproj); \
do \
mkdir -p ${projectFile%.*}/ && mv $projectFile ${projectFile%.*}/; \
done
RUN dotnet restore /work/HappyCode.Api/HappyCode.Api.csproj
# further instructions
I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our devops nuget feed.
You can include the credentials inside your image to achieve this, but for security reasons you'd better add some extra steps or code to pass the credentials in from outside the image.
Based on your current solution, you can use the predefined variable $(System.AccessToken) to get the security token in the Azure DevOps CI/CD pipeline. Then in the Docker build task, pass the access token to the ARG IT_PAT as a build argument:
--build-arg IT_PAT=$(System.AccessToken)
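In the pipeline YAML that might look like the following, mirroring the Docker@2 task shown earlier (the registry connection and repository names are placeholders):
- task: Docker@2
  inputs:
    command: build
    containerRegistry: 'my-acr-connection'
    repository: 'my-app'
    arguments: '--build-arg IT_PAT=$(System.AccessToken)'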
Besides using the NuGet credential plugin, you can also use the dotnet CLI to add credentials to the NuGet source, and then pass $(System.AccessToken) in the build arguments. See below:
ARG PAT
COPY . .
RUN dotnet nuget add source "your-source-url" --name "source-name" --username "useless" --password "$PAT" --store-password-in-clear-text
RUN dotnet restore
Another workaround is to include the nuget.config in the build context. But you need to include a nuget.config file without the credentials first, then add an extra NuGet task to add the credentials to the config file, and then copy the nuget.config in your Dockerfile. See below:
Add a NuGet task that runs the custom command below to add the credentials to the nuget.config file.
sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText
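As a sketch, that custom command can be wired up with the NuGetCommand@2 task (command: 'custom' passes the arguments straight through to nuget.exe; the feed URL and config path are taken from the command above):
- task: NuGetCommand@2
  displayName: 'Add feed credentials to Nuget.config'
  inputs:
    command: 'custom'
    arguments: 'sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText'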
Copy the nuget.config in the Dockerfile. Do not forget to delete the nuget.config file when the restore is complete:
COPY *.csproj .
COPY ./nuget.config .
RUN dotnet restore
RUN rm nuget.config
If you are using a YAML-based pipeline, you can also check out container jobs. You use your private container by setting up a container endpoint, and then you can directly use the restore task in your pipeline. In the example below, the NuGet restore task runs inside your private container and can access your Azure feed directly by setting the vstsFeed attribute to your NuGet feed:
When you specify a container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
container:
  image: myprivate/registry:ubuntu1604
  endpoint: private_dockerhub_connection
steps:
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-azure-nuget-feed'
    restoreSolution: '**/*.sln'
For more information you can check out this thread.
Adding to the other replies: you can avoid modifying nuget.config at runtime and use an environment variable instead.
During authentication, NuGet checks environment variables of the format NuGetPackageSourceCredentials_<name>, where <name> is the key of the feed in the nuget.config file.
For example:
NuGetPackageSourceCredentials_MyPackages="Username=unused_but_required_by_nuget;Password=$(System.AccessToken)"
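On a pipeline step this can be supplied through the env mapping, which is also needed to expose the secret $(System.AccessToken) to the script; the restore command itself is just an assumed example:
- script: dotnet restore
  env:
    NuGetPackageSourceCredentials_MyPackages: Username=unused_but_required_by_nuget;Password=$(System.AccessToken)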

Azure Container Instance creation issue from Azure DevOps YAML pipeline

I'm in the process of migrating my solution from classic pipelines to YAML pipelines in Azure DevOps.
One of the steps in the pipeline is the creation of an ACI container from the image I build and push in the previous steps.
When I run this step using the YAML pipeline it fails with the message:
"The image 'registry.azurecr.io/performancerunner:1.0' in container group 'performance-testing-container-group' is not accessible. Please check the image and registry credential."
When I run the exact same ACI container creation command from the classic pipeline, it works.
I'm using the AzureCLI task, which looks like this:
- task: AzureCLI@1
  displayName: 'Run performance tests'
  inputs:
    azureSubscription: $(AZURE_SUBSCRIPTION)
    scriptType: 'bash'
    scriptLocation: 'scriptPath'
    scriptPath: 'LoadTesting/deployment/scripts/run_tests.sh'
The content of run_tests.sh looks like this:
az container create -g $PERFORMANCE_TESTING_RG_NAME --registry-login-server "$PERFORMANCE_TESTING_REGISTRY_NAME.azurecr.io" --registry-username $PERFORMANCE_TESTING_REGISTRY_NAME \
--registry-password $REGISTRY_PASSWORD --image $IMAGE_NAME \
-n $PERFORMANCE_TESTING_CONTAINER_NAME --cpu 1 --memory 8 --restart-policy Never \
--command-line "dotnet LoadTests.dll -n testApp -c 1000"
When I echo this command, copy it with variables substituted from the logs and run it locally it works fine.
For your issue, the problem is just what the error message shows. You will get this error in two situations: either the image with that tag is not actually present in the registry you reference, or the credential for the registry is not right.
From the messages, it seems your image is fine, so you need to focus on the other of the two reasons. You set the credential through variables, so a good way to debug is to output the variables and check that they are right.
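One way to rule out a stale credential variable, assuming the registry's admin user is enabled, is to fetch the password from ACR at runtime instead of from a pipeline secret (az acr credential show is an existing command; the variable names follow run_tests.sh):
# Look up the admin password directly from ACR so a stale
# $REGISTRY_PASSWORD pipeline variable can be ruled out
REGISTRY_PASSWORD=$(az acr credential show \
  --name "$PERFORMANCE_TESTING_REGISTRY_NAME" \
  --query "passwords[0].value" --output tsv)
echo "Using registry ${PERFORMANCE_TESTING_REGISTRY_NAME}.azurecr.io"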

How can one deploy build artifacts from GitLab CI to an Azure app service?

I have a repository hosted on gitlab.com with several build jobs associated with it. I would like the ability to deploy the compiled artifacts of any given build (generally in the form of compiled HTML/CSS/JavaScript files) to Azure.
All of the guides/docs/tutorials I've seen so far (1, 2, 3, to name a few) focus on deploying files directly from a git repository, which I can see being useful in some cases, but isn't what I need here, as I want the compilation targets, not the source.
Solutions welcome; we've been bashing our heads over this for several days now.
Alternatives to GitLab in which this is possible (in case it isn't in GitLab) will also be welcome.
Add a deploy stage that depends on one or more build jobs and thus downloads the artifacts of those jobs; see the .gitlab-ci.yml below:
stages:
  - build
  - ...
  - deploy

buildjob:1:
  stage: build
  script:
    - build_to_web_dir.sh
  artifacts:
    paths:
      - web

buildjob:2:
  stage: build
  script:
    - build_to_web_dir.sh
  artifacts:
    paths:
      - web

deploy:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  image: microsoft/azure-cli
  dependencies:
    - buildjob:1
    - buildjob:2
  script:
    - export containerName=mynewcontainername
    - export storageAccount=mystorageaccount
    - az storage blob delete-batch --source ${containerName} --account-name ${storageAccount} --output table
    - az storage blob upload-batch --source ./web --destination ${containerName} --account-name ${storageAccount} --output table --no-progress
In the deploy job, only one directory, ./web, will be in CI_PROJECT_DIR, containing all the files the build jobs produced.
Check out the Azure storage quickstart for creating and setting up the storage container, account details, etc.
For the deploy stage we can use the microsoft/azure-cli Docker image, so we can call the az command from our script; see storage-quickstart-blobs-cli for a more detailed explanation.
az storage blob upload-batch --source ./web --destination ${containerName} --account-name ${storageAccount} --output table --no-progress
will copy the contents of ./web to the storage container.
For security reasons we should not export the credentials in the .gitlab-ci.yml:
export AZURE_STORAGE_ACCOUNT="mystorageaccountname"
export AZURE_STORAGE_ACCESS_KEY="myStorageAccountKey"
Instead they should be defined in the project's (or group's) Settings > CI/CD variables, so they'll be present in the script environment.
