I have a YAML pipeline that builds and deploys a project. It deploys to dev and tst, but I don't always want to deploy to tst. At the moment I have solved this with approvals: when the dev stage finishes, the pipeline waits for approval.
This introduces some problems:
All developers get an email every time, which can be annoying.
The pipeline waits for deployment to tst even when deployment to tst isn't needed; it just keeps waiting.
New CI builds don't start while the pipeline is waiting.
If there is a well-defined way to determine whether to run the tst stage, you can either specify a custom condition on the stage or conditionally include the stage using a template expression.
Specify a condition
From the docs:
You can specify the conditions under which each stage, job, or step
runs. By default, a job or stage runs if it does not depend on any
other job or stage, or if all of the jobs or stages that it depends on
have completed and succeeded.
There are a number of ways you can customize this behavior, with either built-in or custom-defined conditions; for example, to run the tst stage only if the branch is main:
stages:
- stage: dev
  jobs:
  - job: dev1
    steps:
    - script: echo Hello from dev!
- stage: tst
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - job: tst1
    steps:
    - script: echo Hello From tst!
The tst stage will still be visible when you open the pipeline run; it will be marked as skipped.
The condition on the tst stage can be altered to fit your needs (it can include variables, parameters, and even output from previous stages).
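For instance, a condition can be driven by an output variable set in the dev stage. This is a sketch; the step name setvar and the variable deployTst are hypothetical:

```yaml
stages:
- stage: dev
  jobs:
  - job: dev1
    steps:
    # Expose an output variable from this step (isOutput=true makes it
    # addressable from other stages via the dependencies context).
    - script: echo "##vso[task.setvariable variable=deployTst;isOutput=true]true"
      name: setvar
- stage: tst
  dependsOn: dev
  # Runs only when dev succeeded AND the output variable was set to 'true'.
  condition: and(succeeded(), eq(dependencies.dev.outputs['dev1.setvar.deployTst'], 'true'))
  jobs:
  - job: tst1
    steps:
    - script: echo Deploying to tst
```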
Conditional expression
From the docs:
You can use if, elseif, and else clauses to conditionally assign
variable values or set inputs for tasks. You can also conditionally
run a step when a condition is met.
Conditional insertion only works with template syntax, which is evaluated when the pipeline YAML is compiled (i.e. before the pipeline starts). Thus it cannot evaluate the output of previous steps. In the example below, the tst stage is completely removed from the pipeline run when the pipeline is instantiated (and the template compiled) if the branch is not main:
stages:
- stage: dev
  jobs:
  - job: dev1
    steps:
    - script: echo Hello from dev!
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
  - stage: tst
    jobs:
    - job: tst1
      steps:
      - script: echo Hello From tst!
I am struggling with the configuration of pipelines in Azure DevOps. I have the following situation:
Same repository, branch develop.
Pipeline A triggers pipeline B.
Pipeline A definition:
# PIPELINE-A
trigger:
  branches:
    include:
    - develop
pr: none
name: $(Build.SourceBranchName)__$(Build.SourceVersion)__$(Date:yyyy-MM-dd_hh-mm)
jobs:
- job: Test
  pool:
    name: 'Default'
  steps:
  - task: CmdLine@2
    displayName: Simulate 3 min. task
    inputs:
      script: |
        echo $(Build.SourceBranchName)
        echo $(Build.SourceVersion)
        sleep 180
      workingDirectory: '$(Build.SourcesDirectory)'
Pipeline B definition:
# PIPELINE-B
trigger: none
pr: none
resources:
  pipelines:
  - pipeline: PIPELINE-A
    source: PIPELINE-A
    trigger:
      branches:
        include:
        - develop
name: $(Build.SourceBranchName)__$(Build.SourceVersion)__$(Date:yyyy-MM-dd_hh-mm)
jobs:
- job: Test
  pool:
    name: 'Default'
  steps:
  - task: CmdLine@2
    displayName: Simulate task
    inputs:
      script: |
        echo $(Build.SourceBranchName)
        echo $(Build.SourceVersion)
        echo $(resources.pipeline.PIPELINE-A.sourceBranch)
        echo $(resources.pipeline.PIPELINE-A.sourceCommit)
      workingDirectory: '$(Build.SourcesDirectory)'
If I push 2 commits to develop, let's say c1 and c2, I see that pipeline A is correctly triggered with 2 different runs, one for c1 and another for c2.
After completion of c1 in pipeline A, pipeline B is correctly triggered, but pipeline B has as Build.SourceVersion the value c2 (the latest commit in the develop branch) instead of c1. However, resources.pipeline.PIPELINE-A.sourceCommit has the correct value c1 (the value I was expecting for Build.SourceVersion in pipeline B).
After completion of c2 in pipeline A, pipeline B is again correctly triggered, and this time Build.SourceVersion and resources.pipeline.PIPELINE-A.sourceCommit both have c2 (which is correct, because c2 is the last commit in the develop branch).
Even if I manually trigger pipeline A for commit c1, pipeline B gets triggered, but again with commit c2.
With this behavior we see several problems:
On the web page of pipeline B, all runs show c2 as the commit (the last commit of the branch), and it is impossible to distinguish between pipeline A runs for commits c1 and c2.
The checked-out code is always the code of commit c2.
Am I missing something? Jenkins handles this without any problem and would trigger 2 different executions in pipeline B (one for c1 and another for c2).
Please note that batch: true on the trigger is not an option, because we want different concurrent builds for different commits.
Many thanks in advance for your help. Any suggestions would be really appreciated.
I have tested with your YAML script, but it works well on my side:
This is my pipeline-A:
And this is my pipeline-B:
So the issue may not be related to the YAML script. Here is some troubleshooting advice:
Check the UI triggers. Go to the edit page of your pipeline, click the three-dots button in the top right corner, and select "Triggers". Make sure that the option "Override the YAML continuous integration trigger from here" is unchecked and that no triggers are set in the "Build completion" part.
I also suggest creating two new pipelines to check whether they work, to narrow down the issue.
Using the following CI pipeline running on GitLab:
stages:
  - build
  - website

default:
  retry: 1
  timeout: 15 minutes

build:website:
  stage: build
  ...
  ...
  ...
  ...

website:dev:
  stage: website
  ...
  ...
  ...
What exactly does the first colon in the job names build:website: and website:dev: mean?
Is it like we pass the second part after the stage name as a variable to the stage?
Naming of jobs does not really change the behavior of the pipeline in this case; it's just the job name.
However, if you use the same prefix before the : for multiple jobs, the jobs will be grouped in the UI. This still doesn't affect the material function of the pipeline, but it changes how the jobs show up in the UI:
It's a purely cosmetic feature.
Jobs can also be grouped using / as the separator or a space.
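A minimal sketch of the grouping behavior (the job bodies are placeholders):

```yaml
stages:
  - test

# These two jobs share the prefix "test" before the colon,
# so the GitLab UI renders them as one collapsible group.
test:unit:
  stage: test
  script: echo running unit tests

test:integration:
  stage: test
  script: echo running integration tests
```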
I've found the following block:
stages:
- stage: Print_Params
  jobs:
  - job: Print_Params
    steps:
    - ${{ each parameter in parameters }}:
      - script: echo ${{ parameter.Key }} ${{ parameter.Value }}
But it invokes CmdLine once for each specified parameter. I'd really like to have a single screen I can look at to review all the parameters that a pipeline was invoked with. Is this built in, and there's a place I can already review it, or is there a way I can invoke the loop within a script to print all of the parameters in a single execution? I've tried a number of different syntaxes and nothing I've tried so far is working.
You can view runtime parameters, queue time variables and job preparation parameters in Azure Pipelines UI:
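If you still want a single step that prints everything, the template expression function convertToJson can collapse the loop into one script. A sketch, assuming your pipeline defines runtime parameters:

```yaml
steps:
# convertToJson is evaluated at compile time and expands to a JSON
# representation of all parameters, printed in one step.
- script: |
    echo '${{ convertToJson(parameters) }}'
  displayName: Print all parameters
```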
I have a pipeline that has a few stages: detect, test, build, deploy.
The detect stage detects the type of application, and the test and build stages have jobs that are included or excluded based on what is computed in detect. The detect stage writes its value to an environment variable called BUILD_MODE.
I am using rules like so:
ng-build:
  extends:
    - '.ng/job/build'
  stage: build
  rules:
    - if: $BUILD_MODE == "ANGULAR"
      when: always

npm-build:
  extends:
    - '.npm/job/build'
  stage: build
  rules:
    - if: $BUILD_MODE == "NPM"
      when: always
The problem with this is that the BUILD_MODE variable is evaluated statically when the pipeline is created, not after the detect stage runs, so the above never works unless I set the variable explicitly in the top-level YML file like so:
variables:
  BUILD_MODE: "ANGULAR"
What is the best way to solve this problem? In summary, I want to evaluate some condition and either set the stages dynamically or set the variable itself before the stages in the pipeline are created, so that they are created with the rules evaluated correctly.
You could take a look at dynamic child pipelines. You may be able to solve your problem by dynamically creating your npm/ng build jobs.
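A rough sketch of that approach (the detect script and file names here are hypothetical): an early job generates the build jobs as YAML, and a trigger job runs them as a child pipeline:

```yaml
stages:
  - detect
  - build

# Writes either the ng or the npm build job definitions into child.yml,
# based on whatever detection logic currently computes BUILD_MODE.
generate-build-jobs:
  stage: detect
  script:
    - ./detect.sh > child.yml
  artifacts:
    paths:
      - child.yml

# Runs the generated jobs as a dynamic child pipeline; strategy: depend
# makes this job mirror the child pipeline's status.
run-build:
  stage: build
  trigger:
    include:
      - artifact: child.yml
        job: generate-build-jobs
    strategy: depend
```

Because the child YAML is produced at runtime, its rules are effectively evaluated after detect has run, which sidesteps the static-evaluation problem.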
In Azure Devops YML Multi-stage pipeline:
Is it possible to run a specific stage on a schedule without manual input to the pipeline to specify the stage?
Thank you in advance
What you can do is use a condition on the stages of your YAML build. You can use the Build.Reason variable (see https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml) to determine whether or not a stage should run:
stages:
- stage: Stage1
  condition: and(succeeded(), eq(variables['Build.Reason'], 'Schedule'))
  jobs:
  - some jobs
- stage: Stage2
  condition: and(succeeded(), ne(variables['Build.Reason'], 'Schedule'))
  jobs:
  - some jobs
The above example runs only Stage1 when the build was triggered by a schedule, and only Stage2 when it was not. You can of course adjust the conditions to your needs.
Another option is to move the stages to templates, and then create 2 separate yaml pipelines using the template files containing the correct stages. This way you only need to define the contents of a stage once, but you can re-use it in several pipelines. See https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops
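A sketch of the template approach; the file name deploy-stage.yml and the parameter name are hypothetical:

```yaml
# deploy-stage.yml - a reusable stage template
parameters:
- name: stageName
  type: string

stages:
- stage: ${{ parameters.stageName }}
  jobs:
  - job: Run
    steps:
    - script: echo Running ${{ parameters.stageName }}
```

A scheduled pipeline would then include only the stage it needs, with a template reference under stages: (- template: deploy-stage.yml plus a parameters: block passing stageName), while a second pipeline reuses the same file for the other stage.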
You can schedule a stage's run window by creating a "check" on the stage's environment.
(If you already have an environment, skip to step 4.)
1. In ADO, mouse over the Pipelines icon and click Environments.
2. Click the New Environment button at the top.
3. Give it a name that your scheduled stage will use and click Create.
4. Click the 3 dots next to the "Add resource" button and select Approvals and checks.
5. Click the plus button and choose Business hours.
6. Schedule the stage's run window, then click Create.
7. Make sure to add the environment name you created to the first job in the stage you're scheduling.
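The last step looks roughly like this in YAML (the environment name nightly-window is a placeholder for whatever you created above):

```yaml
stages:
- stage: Scheduled
  jobs:
  - deployment: RunScheduled
    # The Business hours check configured on this environment
    # gates when this job is allowed to start.
    environment: nightly-window
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Running inside the approved window
```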
Just expanding on the above answer: since the variables definition comes after the schedule triggers, you would have to define each schedule's cron expression as a variable, and then add a condition so that when the build is triggered by a schedule matching schedule1, Stage1 runs, and similarly for schedule2 and Stage2. The only downside is that you need to define each cron schedule twice.
schedules:
- cron: "0 0 * * *"
  displayName: Daily midnight build
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old/*
- cron: "0 */6 * * *"
  displayName: Every 6 hours build
  branches:
    include:
    - releases/*
  always: true

variables:
  stage1schedule1: "0 0 * * *"
  stage2schedule2: "0 */6 * * *"

stages:
- stage: Stage1
  condition: and(succeeded(), eq(variables.stage1schedule1, '0 0 * * *'), eq(variables['Build.Reason'], 'Schedule'))
  jobs:
  - schedule1 jobs
- stage: Stage2
  condition: and(succeeded(), eq(variables.stage2schedule2, '0 */6 * * *'), eq(variables['Build.Reason'], 'Schedule'))
  jobs:
  - schedule2 jobs