I am new to Azure DevOps and am trying to use it for one of my C++ projects. My requirements are below.
1: Build the C++ project and publish the binary to Azure Artifacts. This works: the binary is published successfully.
2: Use that published binary while building a Docker image. This is what I am unable to achieve in Azure DevOps, although locally I can build the Docker image with the binary and run it.
As a summary: I need to create a simple release pipeline that uses the build artifacts from the previous task.
● The release pipeline should build a Docker image with the following requirement:
○ it must contain the build artifacts from the build pipeline.
Please find my pipeline code below:
trigger:
- master

jobs:
- job: gcctest
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: sudo apt-get update && sudo apt-get install -y libboost-all-dev
  - script: >
      g++ -std=c++11 -I/usr/include/boost/asio -I/usr/include/boost -o binary.out
      main.cpp connection.cpp connection_manager.cpp mime_types.cpp reply.cpp
      request_handler.cpp request_parser.cpp server.cpp
      -lboost_system -lboost_thread -lpthread
  - powershell: |
      $commitId = "$env:BUILD_SOURCEVERSION"
      $definitionName = "$env:BUILD_DEFINITIONNAME"
      $shortId = $commitId.Substring(0, 8)   # first 8 characters of the commit SHA
      $buildName = "$definitionName.$shortId"
      Write-Host $buildName
      Write-Output "$buildName" > readme.txt
      # echo "##vso[task.setvariable variable=text]$buildName"
  - task: CopyFiles@2
    inputs:
      sourceFolder: '$(Build.SourcesDirectory)'
      contents: '?(*.out|*.txt)'
      targetFolder: '$(Build.ArtifactStagingDirectory)'
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: result
  - task: Docker@1
    displayName: 'Build using multi-stage'
    inputs:
      containerregistrytype: 'Container Registry'
      dockerRegistryEndpoint: 'My Container Registry'
      arguments: '--build-arg ARTIFACTS_ENDPOINT=$(ArtifactFeed)'
Dockerfile:

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y libboost-all-dev
COPY . /app
EXPOSE 80
CMD ["/app/binary.out", "0.0.0.0", "80", "."]
Expected result:
The Docker image should be built from the artifact and published to a Docker Hub repository.
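One way to meet this requirement is to download the published artifact in the release/deploy job and point the Docker build at the extracted files. A minimal sketch, assuming the artifact name result from the pipeline above; the repository name is illustrative:

```yaml
- task: DownloadBuildArtifacts@0
  inputs:
    buildType: 'current'
    downloadType: 'single'
    artifactName: 'result'           # artifact published by the build job above
    downloadPath: '$(System.ArtifactsDirectory)'
- task: Docker@2
  inputs:
    command: buildAndPush
    repository: 'myrepo/cppserver'   # illustrative Docker Hub repository
    dockerfile: '**/Dockerfile'
    containerRegistry: 'My Container Registry'
    # Build from the downloaded artifact so COPY . /app picks up binary.out
    buildContext: '$(System.ArtifactsDirectory)/result'
    tags: |
      $(Build.BuildId)
```

The key point is that the build context passed to Docker must be the folder containing the downloaded binary, not the source checkout.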
When I try to build an Angular app using Azure DevOps and deploy it to Azure Static Web Apps, I receive the error below:
The app build failed to produce artifact folder: 'dist/harmony-front'. Please
ensure this property is configured correctly in your deployment
configuration file.
I tried changing output_location to /, dist, /dist, and dist/harmony-front; nothing seems to work.
Here is the YAML for the deployment section:
- task: AzureStaticWebApp@0
  inputs:
    app_location: "/"
    api_location: "api"
    output_location: "./dist/harmony-front"
    app_build_command: 'npm run build:prod'
    skip_api_build: true
    verbose: true
  continueOnError: true
  env:
    CUSTOM_BUILD_COMMAND: "npm install --force"
    azure_static_web_apps_api_token: $(deployment_token)
What was the mistake I made?
Thanks
I have tried to reproduce the issue and got positive results after following the steps below.
Step 1: Create a simple angular project and build the project on a local machine and inspect the dist folder.
Step 2: Push the code to Azure Repos.
Step 3: Create the Azure pipeline as shown below.
trigger: none

pool:
  vmImage: ubuntu-latest

steps:
- task: NodeTool@0
  displayName: 'Install Node.js'
  inputs:
    versionSpec: '18.x'
- script: |
    npm install -g @angular/cli
  displayName: 'install angular cli'
- task: Npm@1
  displayName: 'npm install'
  inputs:
    command: 'install'
- task: Npm@1
  displayName: 'npm build'
  inputs:
    command: 'custom'
    customCommand: 'run build'
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      pwd
      ls -l
      ls -l dist/
- task: AzureStaticWebApp@0
  displayName: 'Deploy Azure static webapp'
  inputs:
    app_location: '/'
    output_location: 'dist/my-app'
  env:
    azure_static_web_apps_api_token: $(static-webapp-token)
Step 4: Verify the result.
If you are still facing the issue, check that the AzureStaticWebApp@0 output_location matches the output path configured in your angular.json. Both paths must match exactly.
Actually, I found the issue and will post it here for future reference in case someone has the same problem.
The issue starts when you override the build command: CUSTOM_BUILD_COMMAND overwrites RUN_BUILD_COMMAND. So, based on the comment on the issue posted on GitHub, replacing the plain npm install --force with the install and build combined, npm install --force && npm run build, works like a charm.
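For reference, the corrected task would then look something like this (inputs and the token variable are taken from the question above):

```yaml
- task: AzureStaticWebApp@0
  inputs:
    app_location: "/"
    api_location: "api"
    output_location: "dist/harmony-front"
    skip_api_build: true
  env:
    # Install AND build, since CUSTOM_BUILD_COMMAND replaces the whole build step
    CUSTOM_BUILD_COMMAND: "npm install --force && npm run build"
    azure_static_web_apps_api_token: $(deployment_token)
```

With the combined command, the overridden build step still produces the dist/harmony-front folder that output_location points at.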
I want to create two Debian installer packages (packageName1 and packageName2) in the same repo, publishing to the same feed from my Azure pipeline. Below are the tasks from my azure-pipeline.yaml file:
- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts'
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'Installer package and anti-virus scan results'
- task: UniversalPackages@0
  displayName: Publish universal package-1
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/tags/'))
  inputs:
    command: publish
    publishDirectory: '$(Build.ArtifactStagingDirectory)'
    vstsFeedPublish: '$(packageFeed)'
    vstsFeedPackagePublish: '$(packageName1)'
    packagePublishDescription: ''
    versionOption: custom
    versionPublish: $(packageVersion1)
- task: UniversalPackages@0
  displayName: Publish universal package-2
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/tags/'))
  inputs:
    command: publish
    publishDirectory: '$(Build.ArtifactStagingDirectory)'
    vstsFeedPublish: '$(packageFeed)'
    vstsFeedPackagePublish: '$(packageName2)'
    packagePublishDescription: ''
    versionOption: custom
    versionPublish: $(packageVersion2)
But when I download the artifact for one of the package installers with the az CLI:
az artifacts universal download \
--organization "https://myplatform.visualstudio.com/" \
--feed "my-feed" \
--name $packageName1 \
--version $packageVersion1 \
--path .
the packages and files of packageName2 also get downloaded. I suspect that is because I am using the same Build.ArtifactStagingDirectory location to store/publish the artifacts for both packages. Can I use a different location for each installer package?
You can create a subfolder under Build.ArtifactStagingDirectory for each package and publish each artifact separately.
There are no 'multiple artifact directories', but you can create as many folders under that directory as you want and then pass each subfolder into the corresponding build and publish tasks.
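A sketch of the publish tasks, assuming each build step already drops its installer into its own subfolder (the subfolder names package1 and package2 are illustrative):

```yaml
- task: UniversalPackages@0
  displayName: Publish universal package-1
  inputs:
    command: publish
    # Only the files staged for package 1
    publishDirectory: '$(Build.ArtifactStagingDirectory)/package1'
    vstsFeedPublish: '$(packageFeed)'
    vstsFeedPackagePublish: '$(packageName1)'
    versionOption: custom
    versionPublish: $(packageVersion1)
- task: UniversalPackages@0
  displayName: Publish universal package-2
  inputs:
    command: publish
    # Only the files staged for package 2
    publishDirectory: '$(Build.ArtifactStagingDirectory)/package2'
    vstsFeedPublish: '$(packageFeed)'
    vstsFeedPackagePublish: '$(packageName2)'
    versionOption: custom
    versionPublish: $(packageVersion2)
```

With separate publishDirectory values, az artifacts universal download for packageName1 pulls only the files that were staged under package1.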
I want to run my .NET 5 app on a Linux App Service that needs specific libraries (for example libnss3-dev).
I have this pipeline for the build:
trigger:
- main

pool:
  vmImage: ubuntu-20.04

variables:
  buildConfiguration: 'Release'
  wwwrootDir: 'Web/wwwroot'
  dotnetSdkVersion: '5.0.x'

steps:
- script: |
    sudo apt-get update
    sudo apt-get install -y libnss3-dev
  displayName: 'Dep install'
- task: UseDotNet@2
  displayName: 'Use .NET Core SDK $(dotnetSdkVersion)'
  inputs:
    version: '$(dotnetSdkVersion)'
- script: 'echo "$(Build.DefinitionName), $(Build.BuildId), $(Build.BuildNumber)" > buildinfo.txt'
  displayName: 'Write build info'
  workingDirectory: $(wwwrootDir)
- task: DotNetCoreCLI@2
  displayName: 'Restore project dependencies'
  inputs:
    command: 'restore'
    projects: '**/Web.csproj'
- task: DotNetCoreCLI@2
  displayName: 'Build the project - $(buildConfiguration)'
  inputs:
    command: 'build'
    arguments: '--no-restore --configuration $(buildConfiguration)'
    projects: '**/Web.csproj'
- task: DotNetCoreCLI@2
  displayName: 'Publish the project - $(buildConfiguration)'
  inputs:
    command: 'publish'
    projects: '**/Web.csproj'
    publishWebProjects: false
    arguments: '--no-build --configuration Release --output $(Build.ArtifactStagingDirectory)/Release'
    zipAfterPublish: true
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  condition: succeeded()
And a release with the Azure App Service deploy task (task version 4).
How should I do this? I tried the following solutions, but none of them works:
Release post-deployment action with sudo (kuduPostDeploymentScript.sh: sudo: not found)
Release post-deployment action without sudo ([error]E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied))
Adding a script step to the pipeline (shown above), which only installs the library on the build agent, not the App Service
I can run the install commands manually via SSH, but I am looking for an automated method:
apt-get update
apt-get install -y libnss3-dev
Please check this question. As you already discovered, you should replace your command dotnet web.dll with a shell script that first installs the dependencies and then runs your web.dll with the dotnet CLI.
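A minimal sketch of such a startup script; the file name startup.sh and the DLL path are assumptions based on the default App Service Linux publish layout, so adjust them to your app. You would set this script as the App Service startup command:

```shell
#!/bin/sh
# App Service Linux containers run the startup command as root,
# so apt-get works here without sudo.
apt-get update
apt-get install -y libnss3-dev

# Then start the app the same way the default command would.
dotnet /home/site/wwwroot/Web.dll
```

Note the install runs on every container start, which adds a few seconds to cold starts; a custom container image with the library baked in avoids that cost.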
In Azure DevOps I have a pipeline that creates a container and deploys it to Azure Container Registry, and it is working fine. In the process I use qetza.replacetokens to replace some tokens in the DOCKERFILE:
trigger:
- main

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'myconnection'
  imageRepository: '$(project)'
  containerRegistry: $(ACRLoginServer)
  dockerfilePath: '$(Build.SourcesDirectory)/DOCKERFILE'
  tag: '$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Check
    condition: eq('${{ variables.imageRepository }}', '')
    steps:
    - script: |
        echo '##[error] The imageRepository must have a value!'
        echo '##[error] --------------------------------------'
        echo '##[error] For the name of the repository, you can use only numbers and lowercase letters. No spaces or special characters are allowed.'
        exit 1
  - job: Build
    displayName: Build
    condition: not(eq('${{ variables.imageRepository }}', ''))
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: qetza.replacetokens.replacetokens-task.replacetokens@3
      displayName: 'Replace tokens'
      inputs:
        targetFiles: '**/DOCKERFILE'
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        arguments: '--build-arg github_pat="$(github_pat)"'
        tags: |
          latest
Before all of that, I created a service connection to the Azure Container Registry.
Now I have created a new base image in Azure Container Registry, and all new projects are based on it. The problem I'm facing is that I don't know how to pass the connection to the ACR from DevOps into the DOCKERFILE, and I'm getting an error.
The DOCKERFILE looks like this:
# OS & Base R Set Up
FROM #{dockerRegistryServiceConnection}/cellorbase
RUN apt-get update && apt-get install -y libicu-dev make pandoc pandoc-citeproc && rm -rf /var/lib/apt/lists/*
RUN echo "options(repos = c(CRAN = 'https://cran.rstudio.com/'), download.file.method = 'libcurl')" >> /usr/local/lib/R/etc/Rprofile.site
# Install renv to restore all Shiny app deps
RUN R -e "install.packages('renv'); renv::consent(provided = TRUE)"
# Define Project Number
ARG project=p200403
# create root folder for app in container
RUN mkdir /root/${project}
Basically, I don't know how to tell the DOCKERFILE to pull the base image from the Azure Container Registry.
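A FROM instruction cannot reference a DevOps service connection by name; it needs the registry's login server host name, and the agent must already be logged in to that registry when the build runs. A hedged sketch, assuming your ACR login server is myregistry.azurecr.io (substitute the value held in $(ACRLoginServer)):

```yaml
# Log in first so the subsequent build can pull the private base image.
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: $(dockerRegistryServiceConnection)
# Then in the DOCKERFILE, reference the base image by login server, e.g.:
#   FROM myregistry.azurecr.io/cellorbase
```

Alternatively, keep a token in the DOCKERFILE and let the replacetokens task substitute a pipeline variable holding the login server (such as $(ACRLoginServer)) rather than the service connection name.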
Environment
Azure Dev Ops (code repo and pipeline trigger)
AWS ECR/ECS (target deployment platform)
Docker
.NET Core Web application (v5.0)
Current Situation
Presently we build the application using dotnet build (PowerShell script) and push the zip file to Azure DevOps Artifacts using azurepipeline.yml. This works out fine. I have added another task for the ECR push, which pushes a generated Docker image to ECR using a Dockerfile in the source code.
Business Problem
We want to be able to choose a specific build (e.g. 0.1.24) from Azure Artifacts (using a variable to provide the version number) and generate a Docker build using the corresponding binaries and the Dockerfile. I am unable to find a way to do so. The specific tasks are as follows:
Deploy user updates variable "versionNoToDeploy" with the artifact id or name
Deploy user runs a specific pipeline
Pipeline finds the artifact (assuming it is valid, otherwise reports an error) and unzips the package at a temp location (need help with this)
Pipeline runs dockerfile to build the image (-known & working)
Pipeline pushes this image to ECR (-known & working)
The purpose is to keep building the branch until we get a stable build. The build is deployed to a test server manually and tested. Once the build gets certified, it needs to be pushed to the production ECR/ECS instances.
Our pipeline (relevant code only):
- pwsh: ./build.ps1 --target Clean Protolint Compile --runtime $(runtime)
  displayName: ⚙️ Compile
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    repository: appRepo
    tags: |
      $(Build.BuildId)
      deploy
    addPipelineData: true
    Dockerfile: src\DockerfileNew
- task: ECRPushImage@1
  inputs:
    awsCredentials: 'AWS ECR Connection'
    regionName: 'ap-south-1'
    imageSource: 'imagename'
    sourceImageName: 'myApplication'
    sourceImageTag: 'deploy'
    repositoryName: 'devops-demo'
    outputVariable: 'imageTagOutputVar'
- pwsh: ./build.ps1 --target Test Coverage --skip
  displayName: 🚦 Test
- pwsh: ./build.ps1 --target BuildImage Pack --runtime $(runtime) --skip
  displayName: 📦 Pack
- pwsh: ./build.ps1 --target Publish --runtime $(runtime) --skip
  displayName: 🚚 Publish
Artifact details
Any specific aspects needed can be provided
Finally, after playing a lot with the pipeline and custom-tweaking the individual steps, I came up with the following (excerpted YAML).
It involves storing the build version in a variable, which is referenced in each step of the pipeline.
The admin has to decide whether they want a general build that produces an artifact, or to deploy just a specific build to AWS. The variable holding the build ID is evaluated conditionally, and based on that the steps are executed or bypassed.
- pwsh: ./build.ps1 --target Clean Protolint Compile --runtime $(runtime)
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: ⚙️ Compile
- task: DownloadBuildArtifacts@0
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    buildType: 'specific'
    project: 'NitinProj'
    pipeline: 'NitinProj'
    buildVersionToDownload: specific
    buildId: $(artifactVersionToPush)
    downloadType: 'single'
    artifactName: 'app'
    downloadPath: '$(System.ArtifactsDirectory)' # needs to be set, as the default is the build directory
- task: ExtractFiles@1
  displayName: Extract Artifact to temp location
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    archiveFilePatterns: '$(System.ArtifactsDirectory)/app/*.zip' # path may need updating
    cleanDestinationFolder: false
    overwriteExistingFiles: true
    destinationFolder: src
- task: Docker@2
  displayName: Docker Build image with compiled code in artifact
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    command: build
    repository: myApp
    tags: |
      $(Build.BuildId)
      deploy
    addPipelineData: true
    Dockerfile: src\DockerfileNew
- task: ECRPushImage@1
  displayName: Push built image to AWS ECR
  condition: ne(variables['artifactVersionToPush'], '')
  inputs:
    awsCredentials: 'AWS ECR Connection'
    regionName: 'ap-south-1'
    imageSource: 'imagename'
    sourceImageName: 'myApp'
    sourceImageTag: 'deploy'
    pushTag: '$(Build.BuildId)'
    repositoryName: 'devops-demo'
    outputVariable: 'imageTagOutputVar'
- pwsh: ./build.ps1 --target Test Coverage --skip
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: 🚦 Test
- pwsh: ./build.ps1 --target BuildImage Pack --runtime $(runtime) --skip
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: 📦 Pack
- pwsh: ./build.ps1 --target Publish --runtime $(runtime) --skip
  condition: eq(variables['artifactVersionToPush'], '')
  displayName: 🚚 Publish
I will update this YAML to organize the steps into jobs, but that's an optimization story. :)
Since manual intervention is involved here, you may consider splitting the workflow into several jobs like this:
jobs:
- job: BuildAndDeployToTest
  steps:
  - bash: echo "A"
- job: waitForValidation
  displayName: Wait for external validation
  pool: server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        test@test.com
        example@example.com
      instructions: 'Please validate the build configuration and resume'
      onTimeout: 'resume'
- job: DeployToProd
  steps:
  - bash: echo "B"
This is not exactly what you want in terms of involving variables, but it will let you achieve your goal: wait for validation and deploy only validated builds to prod.
It relies on the ManualValidation task.
Another approach could be to use a deployment job with approvals:
jobs:
- job: BuildAndDeployToTest
  steps:
  - bash: echo "A"
# Track deployments on the environment.
- deployment: DeployToProd
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment: 'PROD'
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
For this you need to define an environment and configure an approval on it.
Either way, you get a clear picture of what was delivered to prod and who approved the PROD deployment.