I have a complicated release that spans multiple deployment groups and I am planning to use the 3rd party vsts-git-release-tag extension to tag the release. Ideally, the entire release (all jobs) would succeed first before tagging the repository.
So I am trying to work out what the best way to accomplish that is. If this were a build pipeline rather than a release pipeline, it is clear I could just arrange them using dependsOn, as follows.
jobs:
- job: Deployment_Group_1
  steps:
  - script: echo hello from Deployment Group 1
- job: Deployment_Group_2
  steps:
  - script: echo hello from Deployment Group 2
- job: Tag_Repo
  steps:
  - script: echo this is where I would tag the Repo
  dependsOn:
  - Deployment_Group_1
  - Deployment_Group_2
However, there doesn't seem to be equivalent functionality (at least currently) in release pipelines as specified in this document.
Note
Running multiple jobs in parallel is supported only in build pipelines at present. It is not yet supported in release pipelines.
Although it doesn't specifically mention the dependsOn feature, there doesn't seem to be a way to utilize it in release pipelines (correct me if I am wrong).
I realize I could probably create a separate stage containing a single job and task to create the Git tag, but that feels like a hack. Is there a better way to run a specific release job after all other release jobs have completed?
Just a suggestion: you could make use of multi-stage pipelines, which are also represented very clearly in the Azure DevOps UI.
Stages have jobs, jobs have steps:
Example pipeline yml for this:
trigger:
  batch: true
  branches:
    include:
    - "*"

resources:
  containers:
  - container: ubuntu
    image: ubuntu:18.04

stages:
- stage: STAGE1
  jobs:
  - job: PrintInfoStage1Job1
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS STAGE 1, JOB 1"
      displayName: "JOB 1"
  - job: PrintInfoStage1Job2
    dependsOn: PrintInfoStage1Job1
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS STAGE 1, JOB 2"
      displayName: "JOB 2"
- stage: STAGE2
  dependsOn: STAGE1
  jobs:
  - job: PrintInfoStage2Job1
    dependsOn: []
    container: ubuntu
    steps:
    - script: |
        echo "THIS IS THE STAGE 2, JOB 1"
      displayName: "JOB 1"
  - job: PrintInfoStage2Job2
    container: ubuntu
    dependsOn: []
    steps:
    - script: |
        echo "THIS IS THE STAGE 2, JOB 2"
      displayName: "JOB 2"
Just be sure you don't miss switching on this preview feature in your user settings.
After creating a test project and adding several jobs to a release pipeline for it, then running it several times in a row, it appears that the order of the jobs is deterministic. That is, they always seem to run in the order that they physically appear in the portal.
I did several Google searches, and this behavior doesn't seem to be documented anywhere. So I don't know for sure whether it is guaranteed, but it will probably work for my case.
Please leave a comment if there are any official sources that confirm that the job order is guaranteed.
Looking at the example given, you don't need to create a different job for each of the steps; every task can be added to a single job.
jobs:
- job:
  steps:
  - script: echo hello from Deployment Group 1
  - script: echo hello from Deployment Group 2
  - script: echo this is where I would tag the Repo
You can also remove the jobs > job wrapper from the code if you want, as in the sketch below.
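A minimal sketch of the same pipeline with the jobs wrapper removed (top-level steps imply a single implicit job):

steps:
- script: echo hello from Deployment Group 1
- script: echo hello from Deployment Group 2
- script: echo this is where I would tag the Repo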
I was facing a similar issue with my jobs not being executed in a defined order. I was also referencing other templates for jobs, and I was under the impression that every template has to be in a new job. Later, I managed to dissolve my jobs into tasks.
Points to note:
A job runs on a separate agent, which you might not want if it is just a sequence of scripts that you want to execute. A job essentially contains steps, which are a group of tasks.
Tasks in steps are executed one-by-one, so you don't have to provide order explicitly.
You can use conditions:
jobs:
- job: Deployment_Group_1
  steps:
  - script: echo hello from Deployment Group 1
- job: Deployment_Group_2
  dependsOn: Deployment_Group_1
  condition: succeeded()
  steps:
  - script: echo hello from Deployment Group 2
- job: Tag_Repo
  steps:
  - script: echo this is where I would tag the Repo
  dependsOn:
  - Deployment_Group_1
  - Deployment_Group_2
Related
I have 2 stages with multiple jobs. The jobs in the first stage have some rules that tell them whether they need to run, so what I am trying to do is tell some of the jobs in the second stage to execute only if the relevant job in the first stage ran.
I don't want to use the same rules I used for the first stage job to prevent conflicts.
Is there a way to do that?
stages:
  - build
  - deploy

Build0:
  stage: build
  extends:
    - .Build0Rules
    - .Build0Make

Build1:
  stage: build
  extends:
    - .Build1Rules
    - .Build1Make

Deploy0:
  stage: deploy
  dependencies:
    - Build0
  script:
    - bash gitlab-ci/deploy0.sh

Deploy1:
  stage: deploy
  dependencies:
    - Build1
  script:
    - bash gitlab-ci/deploy1.sh
Thank you in advance :)
No, you cannot specify that a job should be added to the pipeline only if another job was added to the pipeline. Each job can specify whether it is added using only/except conditions or rules, but these cannot reference other jobs.
It is possible to generate a pipeline yaml file and then trigger it, but I think this would not be ideal because of the amount of work involved.
stages:
  - Build
  - Deploy

build:
  stage: Build
  script:
    - do something...
  artifacts:
    paths:
      - deploy-pipeline-gitlab-ci.yml

deploy:
  stage: Deploy
  trigger:
    include:
      - artifact: deploy-pipeline-gitlab-ci.yml
        job: build
    strategy: depend
I would recommend using similar only/except conditions or rules on each job to build the pipeline that you want.
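For instance, a hedged sketch where the deploy job simply repeats the build job's rule, so both are added to the pipeline under the same condition (the rule itself is illustrative, not from the question):

Build0:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - bash gitlab-ci/build0.sh

Deploy0:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'  # same rule as Build0, so both jobs enter the pipeline together
  dependencies:
    - Build0
  script:
    - bash gitlab-ci/deploy0.sh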
Yes, you can. Check the needs keyword, which lets you do exactly what you want: run a job based on the completion of other jobs, ignoring stage order.
The documentation: https://docs.gitlab.com/ee/ci/yaml/#needs
Here is also an example of how to build a DAG (directed acyclic graph) using needs: https://about.gitlab.com/blog/2020/12/10/basics-of-gitlab-ci-updated/#directed-acyclic-graphs-get-faster-and-more-flexible-pipelines
In your case:
Deploy0:
  stage: deploy
  needs: ["Build0"]
  script:
    - bash gitlab-ci/deploy0.sh

Deploy1:
  stage: deploy
  needs: ["Build1"]
  script:
    - bash gitlab-ci/deploy1.sh
Note that you can also specify multiple jobs in needs:
needs: ["build0", "test0", "test1"]
I have the following jobs and stages. With this YAML configuration, the test stage runs in scheduled and regular pipelines when I set the $RUN_JOB variable to true in my schedule and in my project's CI/CD variables. But this also schedules scheduled-test-1 and scheduled-test-2 in my scheduled pipelines.
What I want to do is that the test stage should continue to run on schedule and regular pipelines but scheduled-test-1 and scheduled-test-2 should not be scheduled with test stage.
stages:
  - build
  - test
  - deploy
  - scheduled-test-1
  - scheduled-test-2

build:
  script:
    - echo $Service_Version
  only:
    - develop
  except:
    - schedules

test:
  script:
    - echo $Service_Version
  only:
    variables:
      - $RUN_JOB

deploy:
  script:
    - echo $Service_Version
  only:
    - develop
  except:
    - schedules

scheduled-test-1:
  script:
    - echo $Service_Version
  only:
    - schedules

scheduled-test-2:
  script:
    - echo $Service_Version
  only:
    - schedules
There might be a simpler option, but using artifacts:reports as in Exporting environment variables from one stage to the next in GitLab CI might help.
The idea would be to export a RUN_TEST environment variable (a flag to signal the test job has been executed), and use it in the scheduled-test-x jobs in a rules section:
rules:
  - if: '$RUN_TEST != ""'
While those scheduled-test-x jobs might still be scheduled, at least they would not run, because their rule would evaluate to false.
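A minimal sketch of the export side, using the artifacts:reports:dotenv mechanism from the linked question (the file name vars.env is illustrative):

test:
  stage: test
  script:
    - echo $Service_Version
    - echo "RUN_TEST=true" >> vars.env  # flag that the test job has executed
  artifacts:
    reports:
      dotenv: vars.env  # exposes RUN_TEST to later jobs in the pipeline
  only:
    variables:
      - $RUN_JOB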
I am trying to add needs between jobs in the GitLab CI YAML configuration file.
stages:
  - build
  - test
  - package
  - deploy

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches
  ...

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
  ...

docker-build:
  stage: package
  needs: [ "test" ]
  only:
    - master
  ...

deploy-stage:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  ...

deploy-prod:
  stage: deploy
  needs: [ "docker-build" ]
  only:
    - master
  when: manual
  ...
I used the GitLab CI online lint tool to check my syntax, and it is correct.
But when I pushed the code, it always complained:
'test' job needs 'maven-build' job
but it was not added to the pipeline
You can also test your .gitlab-ci.yml in CI Lint
The GitLab CI did not run at all.
Update: I finally made it work. I think the needs position is sensitive; once I moved each needs directly under the stage key, it worked. My original script had some other configuration between them.
CI jobs that depend on each other need to have the same limitations!
In your case, that means sharing the same only targets:
stages:
  - build
  - test

maven-build:
  stage: build
  only:
    - merge_requests
    - master
    - branches

test:
  stage: test
  needs: [ "maven-build" ]
  only:
    - merge_requests
    - master
    - branches
that should work from my experience^^
Finally I made it work. I think the needs position is sensitive; move all needs directly under the stage key and it works.
Actually... that might no longer be the case with GitLab 14.2 (August 2021):
Stageless pipelines
Using the needs keyword in your pipeline configuration helps to reduce cycle times by ignoring stage ordering and running jobs without waiting for others to complete.
Previously, needs could only be used between jobs on different stages.
In this release, we’ve removed this limitation so you can define a needs relationship between any job you want.
As a result, you can now create a complete CI/CD pipeline without using stages by including needs in every job to implicitly configure the execution order.
This lets you define a less verbose pipeline that takes less time to create and can run even faster.
See Documentation and Issue.
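As a hedged illustration of such a stageless pipeline (job names and commands are made up), every job declares needs and no stages block is required:

build:
  script: make build

test:
  needs: ["build"]
  script: make test

deploy:
  needs: ["test"]
  script: make deploy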
The rules in both jobs should be the same; otherwise GitLab cannot create a job dependency between the jobs when their trigger rules differ.
I don't know why, but if the jobs are in different stages (as in my case), you have to define the jobs that will run later with a "." at the start of their names.
Another interesting thing is that GitLab's own CI/CD Lint online editor does not complain that there is an error, so you have to start the pipeline to see it.
Below, notice the "." in ".success_notification" and ".failure_notification"
stages:
  - prepare
  - build_and_test
  - deploy
  - notification

# SOME CODE

build-StandaloneWindows64:
  <<: *build
  image: $IMAGE:$UNITY_VERSION-windows-mono-$IMAGE_VERSION
  variables:
    BUILD_TARGET: StandaloneWindows64

.success_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh success $WEBHOOK_URL
  when: on_success

.failure_notification:
  needs:
    - job: "build-StandaloneWindows64"
      artifacts: true
  stage: notification
  script:
    - wget https://raw.githubusercontent.com/DiscordHooks/gitlab-ci-discord-webhook/master/send.sh
    - chmod +x send.sh
    - ./send.sh failure $WEBHOOK_URL
  when: on_failure

# SOME CODE
The goal
I'm pretty new to Azure and pipelines, and I'm trying to trigger a pipeline from a PR in Azure. The repo lives in GitHub.
Here is the pipeline yaml: pipeline.yml
trigger: none # I turned this off to stop push triggers (they work fine)

pr:
  branches:
    include:
    - "*" # This does not trigger the pipeline

stages:
- stage: EchoTriggerStage
  displayName: Echoing trigger reason
  jobs:
  - job: A
    steps:
    - script: echo 'Build reason::::' $(Build.Reason)
      displayName: The build reason

# ... some more stages here triggered by PullRequests....
# ... some more stages here triggered by push (CI)....
The PR on GitHub looks like this:
The problem
However, the pipeline is not triggered, while the push triggers work just fine.
I have read in the docs but I can't see why this does not work.
The pipeline is working perfectly fine when I trigger it through git push. However, when I try to trigger it with PRs from GitHub, nothing happens. In the code above, I tried turning off push triggers and allowing all PRs to trigger the pipeline. Still nothing.
I do not want to delete the pipeline yet and create a new one.
Update
I updated the YAML file as suggested below. Since the pipeline now actually runs on a push, the rest of the details of the YAML file are not relevant and are left out.
Other things I have tried
Opening new PR on Github
Closing/Reopening PR on Github
Making change to existing PR on Github
-> Still no triggering of pipeline.
You have a mistake in your pipeline. It should be like this:
trigger: none # turned off for push

pr:
- feature/automated-testing

steps:
- script: echo "PIPELINE IS TRIGGERED FROM PR"
Please change this

- stage:
  - script: echo "PIPELINE IS TRIGGERED FROM PR"

to

- stage:
  jobs:
  - job:
    steps:
    - script: echo "PIPELINE IS TRIGGERED FROM PR"
EDIT
I used your pipeline
trigger: none # I turned this off to stop push triggers (they work fine)

pr:
  branches:
    include:
    - "*" # This does not trigger the pipeline

stages:
- stage: EchoTriggerStage
  displayName: Echoing trigger reason
  jobs:
  - job: A
    steps:
    - script: echo 'Build reason::::' $(Build.Reason)
      displayName: The build reason

# ... some more stages here triggered by PullRequests....
# ... some more stages here triggered by push (CI)....
and all seems to be working.
Here is the PR and here is the build for it.
I didn't do that, but you can try to enforce this via a branch policy. To do that, go to the repo settings and proceed as follows:
The solution was to go to Azure Pipelines -> Edit pipeline -> Triggers -> Enable PR validation.
You can follow the steps below to troubleshoot your pipeline.
1. First, make sure a pipeline was created from the YAML file on the Azure DevOps portal. See the example in this tutorial.
2. The following part of your YAML file is incorrect: the - script task should be under a steps section.
Change:

stages:
- stage:
  - script: echo "PIPELINE IS TRIGGERED FROM PR"

To:

stages:
- stage:
  jobs:
  - job:
    steps:
    - script: echo "PIPELINE IS TRIGGERED FROM PR"
3. I saw you used templates in your YAML file. Please make sure the template YAML files are in the correct format. For example, your dockerbuild-dashboard-client.yml template is a step template, so its contents should look like this:
parameters:
  ...

steps:
- script: echo "task 1"
- script: echo "task 2"
And your webapprelease-dashboard-dev-client.yml is a job template. Its contents should look like this:
parameters:
  name: ''
  pool: ''
  sign: false

jobs:
- job: ${{ parameters.name }}
  pool: ${{ parameters.pool }}
  steps:
  - script: npm install
  - script: npm test
  - ${{ if eq(parameters.sign, 'true') }}:
    - script: sign
4. After the pipeline is created on the Azure DevOps portal, run it manually to confirm there are no errors in the YAML file and that the pipeline executes successfully.
5. If all of the above check out but the PR trigger is still not working, try deleting the pipeline created in the first step and recreating a new pipeline from the YAML file.
I am trying to figure out how to share custom variables across stages in my Azure DevOps pipeline. Below is my script with 2 stages.
I am setting the curProjVersion as an output variable and trying to access it from a different stage. Am I doing it right?
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: VersionCheck
    pool:
      vmImage: 'ubuntu-latest'
    displayName: Version Check
    continueOnError: false
    steps:
    - script: |
        echo "##vso[task.setvariable variable=curProjVersion;isOutput=true]1.4.5"
      name: setCurProjVersion
      displayName: "Collect Application Version ID"

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  variables:
    curProjVersion1: $[ dependencies.Build.VersionCheck.outputs['setCurProjVersion.curProjVersion'] ]
  jobs:
  - job:
    steps:
    - script: |
        echo $(curProjVersion1)
Update:
The share-variables-across-stages feature has now been released in Sprint 168.
Use the format below to access output variables from a previous stage:
stageDependencies.{stageName}.{jobName}.outputs['{stepName}.{variableName}']
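For the pipeline in the question, that would look like this (the full mapping is shown in the answers further down):

variables:
  curProjVersion1: $[ stageDependencies.Build.VersionCheck.outputs['setCurProjVersion.curProjVersion'] ]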
Original:
Share variables across stages in Azure DevOps Pipelines
I'm afraid sharing a variable that is defined in one stage and passing it into another stage is not supported.
This is a feature we plan to add, but it is not supported yet. You can follow this GitHub issue; many people have the same demand, so you can track it there.
For now, we only support setting a multi-job output variable, and that only supports YAML. For the Classic Editor, there is no plan to add this feature for releases.
As a workaround, you can predefine the variables before the stages. One important caveat: if you change a variable's value in one stage, the new value cannot be passed to the next stage; the new value only lives within that stage.
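A minimal sketch of that workaround (the variable name sharedFlag is illustrative):

variables:
  sharedFlag: 'initial'  # predefined before the stages, visible in every stage

stages:
- stage: One
  jobs:
  - job: J1
    steps:
    - script: echo $(sharedFlag)  # prints 'initial'
    - script: echo "##vso[task.setvariable variable=sharedFlag]changed"  # new value stays local
- stage: Two
  jobs:
  - job: J2
    steps:
    - script: echo $(sharedFlag)  # prints 'initial' again; the change did not carry over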
It is important to mention that stageDependencies is not available in conditions at the stage level. It is available in jobs, but not in stages directly (at least at the moment).
stages:
- stage: A
  jobs:
  - job: JA
    steps:
    - script: |
        echo "This is job Foo."
        echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" # The variable doThing is set to 'Yes'
      name: DetermineResult
    - script: echo $(DetermineResult.doThing)
      name: echovar
  - job: JA_2
    dependsOn: JA
    condition: eq(dependencies.JA.outputs['DetermineResult.doThing'], 'Yes')
    steps:
    - script: |
        echo "This is job Bar."

# stage B runs if the DetermineResult task set the doThing variable in stage A
- stage: B
  dependsOn: A
  jobs:
  - job: JB
    condition: eq(stageDependencies.A.JA.outputs['DetermineResult.doThing'], 'Yes') # map doThing and check if 'Yes'
    variables:
      varFromStageA: $[ stageDependencies.A.JA.outputs['DetermineResult.doThing'] ]
    steps:
    - bash: echo "Hello world stage B first job"
    - script: echo $(varFromStageA)
Jobs can now access variables from previous stages
Output variables are still produced by steps inside of jobs. Instead of referring to dependencies.jobName.outputs['stepName.variableName'], stages refer to stageDependencies.stageName.jobName.outputs['stepName.variableName'].
https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-168-update#azure-pipelines-1
This is available as of May 4th 2020
Jobs can access output variables from previous stages:
Output variables may now be used across stages in a YAML-based pipeline. This helps you pass useful information, such as a go/no-go decision or the ID of a generated output, from one stage to the next. The result (status) of a previous stage and its jobs is also available.
Output variables are still produced by steps inside of jobs. Instead of referring to dependencies.jobName.outputs['stepName.variableName'], stages refer to stageDependencies.stageName.jobName.outputs['stepName.variableName'].
Note
By default, each stage in a pipeline depends on the one just before it in the YAML file. Therefore, each stage can use output variables from the prior stage. You can alter the dependency graph, which will also alter which output variables are available. For instance, if stage 3 needs a variable from stage 1, it will need to declare an explicit dependency on stage 1.
For stage conditions on Azure DevOps Version Dev17.M153.5 with Agent Version 2.153.1 the following works:
stages:
- stage: A
  jobs:
  - job: JA
    steps:
    - script: |
        echo "This is job Foo."
        echo "##vso[task.setvariable variable=doThing;isOutput=true]Yes" # The variable doThing is set to 'Yes'
      name: DetermineResult

# stage B runs if the DetermineResult task set the doThing variable in stage A
- stage: B
  dependsOn: A
  condition: eq(dependencies.A.outputs['JA.DetermineResult.doThing'], 'Yes')
  jobs:
  - job: JB
    steps:
    - bash: echo "Hello world stage B first job"
Note: The layout of properties is different on stage compared to job:
dependencies.{stage name}.outputs['{job name}.{script name}.{variable name}']
Note: The expression with 'stageDependencies' failed with the following error message:
An error occurred while loading the YAML build pipeline. Unrecognized value: 'stageDependencies'. Located at position XX within expression: and(always(), eq(stageDependencies.A.outputs['JA.DetermineResult.doThing'], 'Yes')). For more help, refer to https://go.microsoft.com/fwlink/?linkid=842996
Bonus:
See the following documentation on how to access the status of the dependent stages: link
Corresponding documentation:
Use output variables from tasks
Stage to stage dependencies
To use the output from a different stage, you must use the syntax depending on whether you're at the stage or job level:
At the stage level, the format for referencing variables from a different stage is dependencies.STAGE.outputs['JOB.TASK.VARIABLE']. You can use these variables in conditions.
At the job level, the format for referencing variables from a different stage is stageDependencies.STAGE.JOB.outputs['TASK.VARIABLE']
Note: By default, each stage in a pipeline depends on the one just before it in the YAML file. If you need to refer to a stage that isn't immediately prior to the current one, you can override this automatic default by adding a dependsOn section to the stage.
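As a minimal sketch of such an explicit dependency (stage, job, step, and variable names here are illustrative), stage Three can consume an output from stage One only after declaring it in dependsOn:

- stage: Three
  dependsOn:
  - One  # explicit, so stage One's output variables become available here
  - Two
  condition: eq(dependencies.One.outputs['JobA.StepA.myVar'], 'Yes')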
Yes, this is possible; you need stageDependencies, as below:
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: VersionCheck
    pool:
      vmImage: 'ubuntu-latest'
    displayName: Version Check
    continueOnError: false
    steps:
    - script: |
        echo "##vso[task.setvariable variable=curProjVersion;isOutput=true]1.4.5"
      name: setCurProjVersion
      displayName: "Collect Application Version ID"

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  variables:
    curProjVersion1: $[ stageDependencies.Build.VersionCheck.outputs['setCurProjVersion.curProjVersion'] ]
  jobs:
  - job:
    steps:
    - script: |
        echo $(curProjVersion1)
Note that I have changed
$[ dependencies.Build.VersionCheck.outputs['setCurProjVersion.curProjVersion'] ]
to
$[ stageDependencies.Build.VersionCheck.outputs['setCurProjVersion.curProjVersion'] ]
source: https://jimferrari.com/2023/01/05/pass-variables-across-jobs-and-stages-in-azure-devops-pipelines/
As an update for anyone seeing this question, it seems passing variables between stages has been implemented and should be released in the next couple of weeks.
You can define a global variable and use PowerShell to assign the value of the stage variable to the global variable.
Write-Output ("##vso[task.setvariable variable=globalVar;]$stageVar")
Global variables can either be defined in the yaml itself or in variable groups.
Initialise the variable with an empty value, e.g. in YAML:

variables:
  globalVar: ''
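A minimal sketch of wrapping that PowerShell line in a pipeline step (the $stageVar assignment here is a stand-in for however the stage actually computes the value):

- powershell: |
    $stageVar = "computed-in-this-stage"  # stand-in for the real stage logic
    Write-Output ("##vso[task.setvariable variable=globalVar;]$stageVar")
  displayName: Assign stage value to globalVar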
You can define the variables just after you define the trigger and before you define the stages:
trigger:
- master

variables:
  VarA: aaaaa
  VarB: bbbbb

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: 'vs2017-win2016'