Deploy selected artifact in Azure DevOps as Docker image to ECR

Environment
Azure DevOps (code repo and pipeline trigger)
AWS ECR/ECS (target deployment platform)
Docker
.NET Core Web application (v5.0)
Current Situation
We currently build the application using dotnet build (via a PowerShell script) and push the zip file to Azure DevOps Artifacts using azurepipeline.yml. This works out fine. I have added another task for ECR push, which builds a Docker image from the Dockerfile in the source code and pushes it to ECR.
Business Problem
We want to be able to choose a specific build (e.g. 0.1.24) from Azure Artifacts (using a variable to provide the version number) and generate a Docker build using the corresponding binaries and the Dockerfile. I am unable to find a way to do so. The specific tasks are as follows:
Deploy user updates the variable "versionNoToDeploy" with the artifact id or name
Deploy user runs a specific pipeline
Pipeline finds the artifact (assuming it's valid, else reports an error) and unzips the package at a temp location (need help on this)
Pipeline runs the Dockerfile to build the image (known & working)
Pipeline pushes this image to ECR (known & working)
The purpose is to keep building the branch until we get a stable build. Each build is deployed to a test server manually and tested. Once a build gets certified, it needs to be pushed to the production ECR/ECS instances.
Our pipeline (specific code only)
- pwsh: ./build.ps1 --target Clean Protolint Compile --runtime $(runtime)
  displayName: ⚙️ Compile
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    repository: appRepo
    tags: |
      $(Build.BuildId)
      deploy
    addPipelineData: true
    Dockerfile: src\DockerfileNew
- task: ECRPushImage@1
  inputs:
    awsCredentials: 'AWS ECR Connection'
    regionName: 'ap-south-1'
    imageSource: 'imagename'
    sourceImageName: 'myApplication'
    sourceImageTag: 'deploy'
    repositoryName: 'devops-demo'
    outputVariable: 'imageTagOutputVar'
- pwsh: ./build.ps1 --target Test Coverage --skip
  displayName: 🚦 Test
- pwsh: ./build.ps1 --target BuildImage Pack --runtime $(runtime) --skip
  displayName: 📦 Pack
- pwsh: ./build.ps1 --target Publish --runtime $(runtime) --skip
  displayName: 🚚 Publish
Artifact details
Any specific details needed can be provided.

Finally, after playing a lot with the pipeline and custom-tweaking the individual steps, I came up with the following (excerpted YAML).
This involves storing the build version in a variable, which is referenced in each step of the pipeline.
The admin has to decide whether they want a general build, producing an artifact, or to deploy a specific build to AWS. The variable holding the build id is evaluated in a condition on each step, and based on that, the steps are executed or bypassed.
- pwsh: ./build.ps1 --target Clean Protolint Compile --runtime $(runtime)
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: ⚙️ Compile
- task: DownloadBuildArtifacts@0
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    buildType: 'specific'
    project: 'NitinProj'
    pipeline: 'NitinProj'
    buildVersionToDownload: specific
    buildId: $(artifactVersionToPush)
    downloadType: 'single'
    artifactName: 'app'
    downloadPath: '$(System.ArtifactsDirectory)' # this needs to be set explicitly, as the default is the Build directory
- task: ExtractFiles@1
  displayName: Extract Artifact to temp location
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    archiveFilePatterns: '$(System.ArtifactsDirectory)/app/*.zip' # path needs updating
    cleanDestinationFolder: false
    overwriteExistingFiles: true
    destinationFolder: src
- task: Docker@2
  displayName: Docker Build image with compiled code in artifact
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    command: build
    repository: myApp
    tags: |
      $(Build.BuildId)
      deploy
    addPipelineData: true
    Dockerfile: src\DockerfileNew
- task: ECRPushImage@1
  displayName: Push built image to AWS ECR
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    awsCredentials: 'AWS ECR Connection'
    regionName: 'ap-south-1'
    imageSource: 'imagename'
    sourceImageName: 'myApp'
    sourceImageTag: 'deploy'
    pushTag: '$(Build.BuildId)'
    repositoryName: 'devops-demo'
    outputVariable: 'imageTagOutputVar'
- pwsh: ./build.ps1 --target Test Coverage --skip
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: 🚦 Test
- pwsh: ./build.ps1 --target BuildImage Pack --runtime $(runtime) --skip
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: 📦 Pack
- pwsh: ./build.ps1 --target Publish --runtime $(runtime) --skip
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: 🚚 Publish
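One note on how the deploy user supplies the build id: a variable declared in YAML can't be overridden at queue time, so artifactVersionToPush either has to be defined in the pipeline's UI variables (marked settable at queue time) or modeled as a runtime parameter. A minimal sketch of the parameter form, mapped onto the same variable name the conditions above use (the parameter itself is my addition, not part of the pipeline above):
parameters:
- name: artifactVersionToPush
  type: string
  default: ''
variables:
  artifactVersionToPush: ${{ parameters.artifactVersionToPush }}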
I will update this YAML to organize the steps into jobs, but that's an optimization story.. :)

Since manual intervention is involved here, you may consider splitting the workflow into several jobs like this:
jobs:
- job: BuildAndDeployToTest
  steps:
  - bash: echo "A"
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
- job: DeployToProd
  steps:
  - bash: echo "B"
This is not exactly what you want in terms of involving variables, but it will let you achieve your goal: wait for validation, and deploy to prod only validated builds.
It relies on the ManualValidation task.
Another approach could be using a deployment job and approvals:
jobs:
- job: BuildAndDeployToTest
  steps:
  - bash: echo "A"
# Track deployments on the environment.
- deployment: DeployToProd
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment: 'PROD'
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
For this you need to define an environment and configure approvals on it.
Either way, you will get a clear picture of what was delivered to prod, and of who approved the PROD deployment.

Related

Azure DevOps pipeline error: No builds currently exist in the pipeline definition supplied

I am trying to grab a schema from another artifact build and then build and push the artifact. The error I am getting is "no builds currently exist in the pipeline definition supplied". I know that usually comes from not yet having run a pipeline on that branch, but I have run the pipeline and pushed the artifact to the repo on that branch, so there is already an image there. The schema was published from the other build. I am not sure why it is not grabbing the artifact. Any help would be appreciated.
jobs:
- job: Build_Push_Artifact
  steps:
  - task: Docker@2
    inputs:
      containerRegistry: $(azureContainerRegistry)
      command: "login"
  - task: DownloadPipelineArtifact@2
    inputs:
      source: specific
      project: $(projectId)
      pipeline: $(schemaPublishPipelineId)
      runVersion: latestFromBranch
      runBranch: $(Build.SourceBranch)
      artifact: $(schemaArtifact)
      patterns: $(schemaArtifactPattern)
      path: $(Build.SourcesDirectory)/src
A possible situation that I can reproduce on my side: if the build pipeline failed at some step, then even if the artifact was published successfully, nothing will be fetched.
And please try this simple structure:
trigger:
- none
pool:
  vmImage: ubuntu-latest
steps:
- task: DownloadPipelineArtifact@2
  inputs:
    buildType: 'specific'
    project: '<Project Name>'
    definition: '<Pipeline Name>'
    buildVersionToDownload: 'latest'
    targetPath: '$(Pipeline.Workspace)'
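If the run you want to pull from did not fully succeed, DownloadPipelineArtifact@2 also has opt-in inputs for that case. A sketch; to my knowledge these inputs exist on the current task version, but verify them against the task reference:
- task: DownloadPipelineArtifact@2
  inputs:
    buildType: 'specific'
    project: '<Project Name>'
    definition: '<Pipeline Name>'
    buildVersionToDownload: 'latest'
    # opt in to artifacts from runs that did not fully succeed
    allowPartiallySucceededBuilds: true
    allowFailedBuilds: true
    targetPath: '$(Pipeline.Workspace)'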

Rolling back a deployment to a Devops "Environment"

I'm used to using the classic Devops "Release" pipelines for deploying code changes to Kubernetes clusters. Recently I've been looking into switching to using Azure Pipelines "deployment" jobs together with "Environments". It seems to work really well and I like a lot of the features, like being able to inspect the Kubernetes entities associated with your deployments, and track the deployment history.
Something that I'm accustomed to from the classic Release pipelines is rolling back to an old deployment if it is discovered that a bug has been released (to production for example). Since Release pipelines are based on build artifacts, you simply run the deployment on the old artifact in the Releases UI.
Now using deployments under the Environments tab, I'm not sure how to run a rollback, short of actually making a code change to revert back to the old state (and run through CI builds again needlessly). Another option is, since the deployment is done relative to the code (or commit) rather than an artifact, one could manually run a new pipeline and target the given commit - but this is quite cumbersome to achieve in the Devops UI, and seems prone to errors. In my opinion rolling back should be really easy to achieve, and not prone to errors.
Any ideas how to do this? Here is a sample of my YAML file:
trigger:
  batch: true
  branches:
    include:
    - master
pr:
  branches:
    include:
    - master
variables:
  azureContainerRegistry: <registryUrl>
  azureContainerRegistryServiceConnection: <serviceConnection>
  kubernetesConfigPath: kubernetes
  kubernetesNamespace: <my-namespace>
  major: 0
  buildNumber: $(major).$(Build.BuildId)
  imageName: "$(azureContainerRegistry)/<my-app>:$(buildNumber)"
stages:
- stage: Bake
  displayName: "Build and Push image"
  jobs:
  - job: Validate
    displayName: "Build image"
    pool:
      name: "Docker"
    steps:
    - script: docker build -t $(imageName) .
      displayName: Build App
  - job: Publish
    displayName: "Push image"
    dependsOn: Validate
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    pool:
      name: "Docker"
    steps:
    - task: Docker@2
      displayName: Login to Container Registry
      inputs:
        command: login
        containerRegistry: $(azureContainerRegistryServiceConnection)
    - script: docker push $(imageName)
      displayName: PUSH $(imageName)
- stage: DeployTest
  displayName: "Deploy TEST"
  dependsOn: Bake
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment: Deploy
    environment: <my-test-env>.$(kubernetesNamespace)
    pool:
      name: "Docker"
    strategy:
      runOnce:
        deploy:
          steps:
          - task: qetza.replacetokens.replacetokens-task.replacetokens@3
            displayName: "Replace tokens"
            inputs:
              targetFiles: $(kubernetesConfigPath)/base/*.yaml
              escapeType: none
              tokenPrefix: "{"
              tokenSuffix: "}"
          - task: Kubernetes@1
            displayName: "kubectl apply"
            inputs:
              namespace: $(kubernetesNamespace)
              command: apply
              arguments: -k $(kubernetesConfigPath)/test
              versionSpec: 1.7.0
              checkLatest: true
          - task: Kubernetes@1
            displayName: "kubectl rollout status"
            inputs:
              namespace: $(kubernetesNamespace)
              command: rollout
              arguments: "status deployments/<my-app>"
              versionSpec: 1.7.0
              checkLatest: true
- stage: DeployProd
  displayName: "Deploy PROD"
  dependsOn: DeployTest
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment: Deploy
    environment: <my-prod-env>.$(kubernetesNamespace)
    pool:
      name: "Docker"
    strategy:
      runOnce:
        deploy:
          steps:
          - task: qetza.replacetokens.replacetokens-task.replacetokens@3
            displayName: "Replace tokens"
            inputs:
              targetFiles: $(kubernetesConfigPath)/base/*.yaml
              escapeType: none
              tokenPrefix: "{"
              tokenSuffix: "}"
          - task: Kubernetes@1
            displayName: "kubectl apply"
            inputs:
              namespace: $(kubernetesNamespace)
              command: apply
              arguments: -k $(kubernetesConfigPath)/prod
              versionSpec: 1.7.0
              checkLatest: true
          - task: Kubernetes@1
            displayName: "kubectl rollout status"
            inputs:
              namespace: $(kubernetesNamespace)
              command: rollout
              arguments: "status deployments/<my-app>"
              versionSpec: 1.7.0
              checkLatest: true
Redeploying from the older version of the code is the way to do it.

  one could manually run a new pipeline and target the given commit - but this is quite cumbersome to achieve in the Devops UI, and seems prone to errors

This is where you need a well-organised source-control branching and tagging policy. If you previously deployed from branch "releases/release-20212710.01" and then deployed from branch "releases/release-20212710.02", you don't need to make any code changes. Rolling back just means selecting the older branch (which still exists, with the same code as before) and deploying.
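One way to make that repeatable is a small, manually triggered rollback pipeline that takes the previously deployed version as a runtime parameter and points the existing Kubernetes deployment back at that (already pushed) image tag. A rough sketch reusing the names from the pipeline above; the parameter and the kubectl set image step are my own additions, not part of the original setup:
# Hypothetical rollback pipeline: run manually, never triggered by CI
trigger: none
parameters:
- name: rollbackBuildNumber # an earlier buildNumber, e.g. 0.1234
  type: string
variables:
  azureContainerRegistry: <registryUrl>
  kubernetesNamespace: <my-namespace>
jobs:
- deployment: Rollback
  environment: <my-prod-env>.$(kubernetesNamespace)
  pool:
    name: "Docker"
  strategy:
    runOnce:
      deploy:
        steps:
        # Re-point the running deployment at the older image; no rebuild needed
        - task: Kubernetes@1
          inputs:
            namespace: $(kubernetesNamespace)
            command: set
            arguments: image deployments/<my-app> <my-app>=$(azureContainerRegistry)/<my-app>:${{ parameters.rollbackBuildNumber }}
Since the older image is still in the registry, nothing is rebuilt, and the deployment history on the Environment still records the rollback as a new deployment.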

How to publish docker image as an artifact in azure devops

I am building a Docker image in my Azure pipeline. Now I want to push this image to multiple AWS accounts (dev, stage, prod) depending on configuration parameters. The problem is that the image is not available as a published artifact. I came across this and this article during my research. I am confused about the solution regarding saving the Docker image so it can be available as a published artifact. I have two specific questions:
How will I use the docker save command in an Azure pipeline task after docker build? The available Docker task doesn't have this command.
Is there any better way of doing this, apart from saving an image?
  How will I use the docker save command in an Azure pipeline task after docker build? The available Docker task doesn't have this command.

This is related to the task version.
Steps: add the Docker task, switch the task version to 0, and select the option "Run a Docker command"; then we can run the docker save command.
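In YAML, that same version-0 task looks roughly like this (a sketch; the image name and output path are placeholders of mine, so double-check the inputs against the task reference):
- task: Docker@0
  displayName: docker save
  inputs:
    action: 'Run a Docker command'
    customCommand: 'save MY_REPO/IMAGE:$(Build.BuildId) -o $(Build.ArtifactStagingDirectory)/image.tar'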
  Is there any better way of doing this, apart from saving an image?

We recommend this approach: save the image and upload the resulting tar file as an artifact.
Nowadays the preferred way of writing pipelines is YAML.
Please see my code for how to do it in a multi-stage pipeline with an Artifactory Docker registry.
If we are on master, the MY_ADDITIONAL_TAG will also be pushed.
trigger:
- master
resources:
- repo: self
variables:
  tag: '$(Build.BuildId)'
stages:
- stage: Build
  displayName: Build DevOps base image
  jobs:
  - job: Build_and_Push
    steps:
    - task: Docker@2
      displayName: Build
      inputs:
        containerRegistry: ''
        repository: 'MY_REPO/IMAGE'
        command: 'build'
        Dockerfile: 'PATH_TO_MY_DOCKERFILE'
        tags: $(tag)
    - task: Bash@3
      displayName: Save Docker Image
      inputs:
        targetType: 'inline'
        script: |
          docker save MY_DOCKER_REPO_IMAGE_NAME:$(tag) -o $(Pipeline.Workspace)/MY_IMAGE_FILE.tar
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Pipeline.Workspace)/MY_IMAGE_FILE.tar'
        artifact: 'MY_ARTIFACT'
        publishLocation: 'pipeline'
- stage: Push
  displayName: Push DevOps base image
  jobs:
  - job: Push
    steps:
    - task: DownloadPipelineArtifact@2
      inputs:
        buildType: 'current'
        artifactName: 'MY_ARTIFACT'
        targetPath: '$(Pipeline.Workspace)'
    - task: Bash@3
      displayName: Load Docker Image
      inputs:
        targetType: 'inline'
        script: |
          docker load --input $(Pipeline.Workspace)/MY_IMAGE_FILE.tar
          docker tag MY_DOCKER_REPO_IMAGE_NAME:$(tag) MY_DOCKER_REPO_IMAGE_NAME:MY_ADDITIONAL_TAG
    - task: Docker@2
      displayName: push development tags
      inputs:
        containerRegistry: 'MY_DOCKER_REGISTRY'
        repository: 'MY_REPO/IMAGE'
        command: 'push'
        tags: |
          $(tag)
    - task: Docker@2
      condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')
      displayName: push 0-devops tag
      inputs:
        containerRegistry: 'MY_ARTIFACT'
        repository: 'MY_REPO/IMAGE'
        command: 'push'
        tags: |
          MY_ADDITIONAL_TAG
After a lot of research I found this article, which resolved my issue: https://dev.to/n3wt0n/container-image-promotion-across-environments-build-artifact-5887
Note: we can manually add commands that are not present in the Docker task snippet available in Azure DevOps. The article also mentions this, along with the steps to do it.

Project files not available in deploy job in Azure DevOps pipelines

The files in the related repo are available when running the Build stage of the Azure DevOps pipeline, but not when running the Deploy stage. Any ideas as to why this would be the case?
Here is a simplified version of the YAML file:
# Deploy to Azure Kubernetes Service
# Build and push image to Azure Container Registry; Deploy to Azure Kubernetes Service
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
- master
resources:
- repo: self
variables:
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
  # Name of the new namespace being created to deploy the PR changes.
  k8sNamespaceForPR: 'review-app-$(System.PullRequest.PullRequestId)'
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          pwd
          ls -la
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    condition: and(succeeded(), not(startsWith(variables['Build.SourceBranch'], 'refs/pull/')))
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'test.development'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: Bash@3
            inputs:
              targetType: 'inline'
              script: |
                pwd
                ls -la
Additional notes:
If the Deploy stage is run first (with the Build stage removed), the working directory is also empty.
The job in your Deploy stage is a deployment job rather than a standard job; deployment jobs don't automatically check out the repo that the pipeline is based on, but they do have access to any published pipeline artifacts.
You can either publish a pipeline artifact in the Build stage or add a task to your Deploy stage to explicitly check out the repo.
To publish a pipeline artifact, add the Publish Pipeline Artifact task in your Build stage. In your Deploy stage you can then reference files in that artifact with the path $(Pipeline.Workspace)/<artifactName>/<rest-of-path> (a sketch of this route follows the checkout snippet below).
To check out the whole repo, add this to your Deploy stage:
steps:
- checkout: self
  path: 'myrepo/'
Then reference the files in the repo using $(System.DefaultWorkingDirectory)\<rest-of-path>
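For the artifact route, a minimal two-stage sketch (the artifact name 'app' is a placeholder of mine; deployment jobs download pipeline artifacts from earlier stages automatically):
- stage: Build
  jobs:
  - job: Build
    steps:
    # ... build steps that populate $(Build.ArtifactStagingDirectory) ...
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'app'
- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: Deploy
    environment: 'test.development'
    strategy:
      runOnce:
        deploy:
          steps:
          # the artifact is auto-downloaded to $(Pipeline.Workspace)/app
          - script: ls -la $(Pipeline.Workspace)/app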

How to link build result of "plain React"/node app with release pipeline

To try to make it short, I have a React application that's built on an Azure DevOps build pipeline like so:
trigger:
- Release
queue:
  name: Hosted
  demands: npm
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- script: |
    npm ci
    npm run build
  displayName: 'Do npm ci and build'
This does what the build would do locally too. Locally the results of the build go to the build directory (unzipped).
Now when I try to create a release pipeline, there isn't an artifact to be found for the Azure App Service Deploy task.
If I try to add PublishPipelineArtifact@0 to the build pipeline to create a publication, the YAML editor just says "string does not match pattern..." and doesn't let me save the definition.
I suppose I should zip the contents of the generated build directory, but what would be the right way? Also, is using the Azure App Service Deploy task the right way to deploy an Azure WebApp? It works for ASP.NET Core apps, where it finds the code drop artifact (zipped) and deploys it.
Edit: adding
- task: PublishPipelineArtifact@0
  inputs:
    artifactName: 'drop'
    targetPath: '$(Build.ArtifactStagingDirectory)/build'
can actually be saved and the build run, though it errors with
2019-01-25T22:42:27.6896518Z ##[section]Starting: PublishPipelineArtifact
2019-01-25T22:42:27.6898909Z ==============================================================================
2019-01-25T22:42:27.6898962Z Task         : Publish Pipeline Artifact
2019-01-25T22:42:27.6899006Z Description  : Publish Pipeline Artifact
2019-01-25T22:42:27.6899034Z Version      : 0.139.0
2019-01-25T22:42:27.6899062Z Author       : Microsoft Corporation
2019-01-25T22:42:27.6899090Z Help         : Publish a local directory or file as a named artifact for the current pipeline.
2019-01-25T22:42:27.6899137Z ==============================================================================
2019-01-25T22:42:28.0499917Z ##[error]Path not exist: D:\a\1\a\build
2019-01-25T22:42:28.0708878Z ##[section]Finishing: PublishPipelineArtifact
Edit 2: it appears removing the /build does the trick. That feels a tad odd, since this is what's produced locally. It doesn't produce a zip file, which is what the release job (namely Azure App Service Deploy) expects.
Will have to examine this problem later today (two o'clock at night here).
There was another problem: the script ran only npm ci and didn't run the build part. Separating it into two different steps made the difference. It appears PublishPipelineArtifact@0 isn't an ideal option in this case if I'd like to have the results zipped in the staging area.
A currently working solution seems to be:
resources:
- repo: self
trigger:
- Release
queue:
  name: Hosted
  demands: npm
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- script: |
    npm ci
  displayName: 'npm ci'
- script: |
    npm run build
  displayName: 'npm run build'
- task: ArchiveFiles@2
  displayName: 'Archive files'
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)/build'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/artifact.zip'
    replaceExistingArchive: true
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
Maybe this can be simplified, but it works for now and feels flexible.
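For the deployment half of the question: with the zip published, the Azure App Service Deploy task (or its YAML counterpart AzureWebApp@1) can consume it. A rough sketch, with the service connection and app name as placeholders of mine:
- task: AzureWebApp@1
  displayName: 'Deploy to Azure WebApp'
  inputs:
    azureSubscription: '<my-service-connection>' # placeholder service connection
    appName: '<my-webapp-name>' # placeholder app name
    package: '$(Build.ArtifactStagingDirectory)/artifact.zip'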
